AI podcast

Welcome to our miniseries on AI governance, presented by Thomas Douglas, Global ICT Industry Manager at DNV.

According to a recent ViewPoint survey from DNV, 70% of companies are only just beginning their AI journey. As AI is evolving rapidly, it is crucial to establish governance early in the project.

In this miniseries, Thomas discusses the importance of understanding the risks and rewards of AI, building trust in AI, and implementing effective AI governance. He highlights the role of ISO certification and offers practical steps, including an 8-step guide and training resources.

We are pleased to offer a limited promotion for enrolment in our ISO/IEC 42001:2023 Requirements Course. This e-learning is designed to give participants foundational knowledge of the standard's requirements, as well as the basic knowledge needed to implement it. Use the promotion code "ISO42KAI-24" when enrolling to get free access to the e-learning. Hurry, only 25 places are available.

Both the podcast and our e-learning are in English.

Read the transcript of our podcast series

Hello and welcome to Navigating AI Assurance from DNV, a globally leading independent certification body and training provider.
My name is Thomas Douglas, Global ICT Industry Manager at DNV. And over the next three episodes, I'll be guiding you through some of the key considerations for businesses starting on the AI journey and showing you how AI governance can put you on the path to safe, reliable and ethical usage of this technology.
So, as has been said many times, there really never is a dull moment with AI. It's always a topic of conversation in some way, shape or form and everyone is somewhere on the AI journey, or certainly strongly contemplating starting on this journey.
We begin by understanding the risks and rewards of AI. According to a recent ViewPoint survey conducted by DNV, 58% of companies have not yet implemented AI technology, while 70% are just starting on the AI journey. So it's important to get clarity on what to look out for, while AI does indeed continue to change and evolve at a fast pace.
Every new technology comes with its risks, and there are always two sides of the coin. So there are the rewards, and then there are also the risks. And it's extremely important for us as individuals, as well as for organizations, to understand both sides - the limitations, what can go wrong, how we can benefit from AI, and what sorts of things need to be put in place in order to allow us to benefit safely from the use of AI.
So if we start off by looking at some of the rewards of AI, of course, top of my mind is efficiency and productivity. We know AI can automate repetitive tasks and it can free up some time for us to focus on more challenging or creative work, which can in turn lead to some increased productivity in various industries or everyday business. So AI can drive innovation in different fields such as transportation, healthcare, finance. 
To give one practical example, AI algorithms can analyse medical data to improve diagnostics and different treatment plans.
And then finally, we could also discuss data analysis: AI can process and analyse huge amounts of data very quickly and give us real-time insights that can ultimately lead us to better decision making as well as strategic planning.
So let's look at some of the risks of AI. 
Well, first and foremost, a lack of transparency. There can tend to be a lack of transparency in AI systems, particularly when we look at deep learning models, which can sometimes be complex to interpret.
Sometimes this can affect the decision-making process, as well as the understanding of the logic of these technologies. And when people aren't able to comprehend how an AI system arrives at its conclusions, this can actually lead to a little bit of consumer distrust and reluctance to actually adopt these technologies.
Privacy concerns - the use of AI in data collection and analysis can lead to quite a few privacy violations if, of course, it is not properly regulated.
Security threats - AI can be used for good, but it can also be used for bad. And it can be used maliciously in the creation of deepfakes, can also be used to automate cyber attacks. 
And then, of course, there's the question of ethical use of AI. Instilling moral and ethical values in AI systems, especially those used in decision-making contexts with potentially significant consequences, can present a bit of a challenge. And there's bias and discrimination: AI systems can sometimes amplify existing biases if they are trained on biased data.
So it's all about balancing risks and rewards.
Great. So, AI is here. We know that we can harness this for the better. But now what? As we've said, organizations are wanting to quickly implement AI in order to innovate, streamline some processes, and potentially incorporate it into some of their products or services. And of course, there is that question of industry fervour - not wanting to fall behind our industry peers and competitors. So organizations are trying to figure out what their strategy with AI is - where and how they can start. And one of the big topics when confronted with this enormous topic, which is AI, is that of governing AI.
So to maximize the benefits of AI while mitigating the risks, it really is vital for us to implement robust governance frameworks, ensure transparency and fairness in AI systems, and of course promote continuous learning.
And it's within this governance topic that the recently released ISO 42001 standard for AI management systems comes in as something that can be utilized for good. It is a blueprint for designing and embedding AI governance within your organization, as it provides quite a nice and adaptable playbook for an organization that's thinking of implementing AI, or that utilizes or develops AI, and it prompts an organization to think before starting on its AI journey.
This relates to policies, responsibilities, and of course the governance of AI within your organization. So it's very important from that point of view.
What this standard is great for is that it gives you guidance and a blueprint for how to go about this subject. So what are the sorts of use cases within AI that you're thinking of in your organization - and really think before you act. For example, as an organization, are we going down a path with AI that isn't really wise from a potential impact and risk perspective? And what are some of the key considerations that we have to keep in mind if we do decide to go down this path with AI?
So I'd qualify this standard as being really the first step and a guiding baseline - as I say, the governance of it all: the different processes, policies, objectives and people necessary to run an AI program where we can maximize all the rewards and avoid the risks that come with utilizing such powerful tools.
If you'd like to learn more about what an AI management system is and how you can reap the benefits, visit www.dnv.com/what-is-aims. On this page, you'll also find a link to our latest ViewPoint survey, which looks at how companies are approaching AI and where they are on their journey.
Thank you for listening to today's episode of DNV's Navigating AI Assurance on the risks and rewards of AI. Join me next time where we'll be taking a look at bridging the trust gap of AI.