Relevant questions to ask when applying Artificial Intelligence

Blog post by Pernille Kræmmergaard, June 2023

A truly transformative event has occurred in the realm of artificial intelligence, and I am both breathless and elated at the remarkable attention and vigorous public debate that has emerged since my previous newsletter and blog post, "How do you engage with artificial intelligence - and its implications for you and your professional endeavors?", published in May.

Among other things, on June 14 the EU Parliament adopted its position on the first regulation on artificial intelligence - the AI Act. Negotiations with the EU member states are now underway, with the aim of reaching a final agreement by the end of the year. The EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of the technology, which is a good thing. The agreement means that AI systems will be analyzed and classified according to the risk they pose to users, with each risk level corresponding to a different degree of regulation. Once approved, these will be the world's first rules for artificial intelligence.

Regulation in this area is good and necessary. We just need to be careful that we don't over-regulate and thereby reduce, and in the worst-case scenario lose, the opportunity to use AI for the benefit of society, welfare, companies and us as individuals.

The regulation of AI was a big part of the debates I either participated in or attended at the People's Meeting on Bornholm. On June 16, I took part in a debate in the Geopolitical Courtyard, which was filled to capacity, together with Thomas Jensen, CEO of Milestone; Casper Klynge, Deputy Director of Digital, Consulting and Service at the Danish Chamber of Commerce; and Mikkel Flyverbom, Professor of Communication and Digital Transformations at CBS. The debate was entitled: "AI: How do we govern artificial intelligence? How do we create frameworks and legislation that limit the threat of AI without stifling innovation?"

The debate was moderated by Mathias Bay Lynggaard, Project Manager at Rasmussen Global, and opened by Anders Fogh Rasmussen, Chairman and Founder of Rasmussen Global, NATO Secretary General from 2009 to 2014 and Danish Prime Minister from 2001 to 2009. In 2019, at the National Security Commission on Artificial Intelligence (NSCAI) conference, he discussed how AI will affect the global order.

The description of the debate was as follows: Artificial intelligence can help solve some of the biggest societal challenges of our time. But it can also be misused and threaten fundamental rights. Can Denmark and Europe find an answer that balances the pros and cons and puts us at the forefront of international competition? Technological breakthroughs have been accelerated by the rapid rise of AI. China is already using AI in business and defense, and the US is struggling to rise to the challenge. But where are Europe and Denmark in this technological race? How can European businesses and politicians put responsible technology on the agenda and ensure sustainable frameworks and legislation for the benefit of people and society? How do we steer AI towards responsible use that protects the right to privacy and other fundamental rights? And what are the consequences if new regulations tighten the grip so much that they take the air out of the innovation and competitiveness that will help solve some of today's biggest societal challenges?

Exciting and relevant questions to ask and debate. During the debate in the courtyard, I argued, among other things, that we must dare to try something, that we must show courage, that we must be aware that ethics shift alongside technological development and its use and, last but not least, that within professional areas and specialties we must remember to involve tech professionals in developing new ways of using AI, rather than leaving it to the professional staff alone.

The moderator also invited the audience in the courtyard to put questions to us on the panel. I will reproduce the three I found most interesting here, along with my answers.

  1. A young communications student asked whether she, and others such as copywriters, translators, and journalists, should be worried that technologies such as AI and chatbots (including ChatGPT) would put them out of work. There is no doubt in my mind that this will happen to some extent. But people are still needed to do communication, draft texts, translate, and write articles; their jobs, and the skills those jobs require, will simply differ from those in a world without these technologies. Instead of asking the question as she did, the question in my opinion should be: how can I, as a communication student - or as a copywriter, translator, or journalist - learn to ask the right questions of these technologies, and how can I quality-assure their answers?
  2. Another young participant asked whether business models that live off people's data should be banned. To this I replied that I don't think that's the right way to go! I can't even imagine what the consequences of such a law would be for us and for the sea of apps we use every single day, e.g., to get to and from work, to be entertained in our free time, to tell guests what we want, etc. Instead of banning them, I think we should ensure transparency and knowledge about what the data is used for, give users ownership of their own data, and, perhaps most importantly, ensure that the population understands how these business models work and what AI is and can do - exactly as Finland has done.
  3. A third asked about the risk of hacking and misuse of data. This risk is real and significant, and there are forces working hard every day to hack and misuse data for their own gain - not just commercially, but also in relation to national security. That is why cybersecurity is, and will continue to become, ever more important: for us as individuals, for companies and public organizations, and for society, in an increasingly digital reality.

I will continue to reflect on these questions and my answers to them over the summer, in connection with the development of generation 6 and the new skills that technological and digital development calls for, in addition to the 10+3 described in my books.

I expect that generation 6 will be ready to "showcase" and discuss with others during the fall, and of the new skills, I am already certain that cybersecurity will be one.

With the hope that you enjoy AI and the new open chatbots, and that you take care of yourself digitally, I wish you a great summer.
