How Safe is AI?

Mike Canfield, Morgan Stanley’s Head of Europe Sustainability Research, discusses why ensuring safe and responsible artificial intelligence is essential to the AI revolution.


----- Transcript -----


Mike Canfield: Welcome to Thoughts on the Market. I'm Mike Canfield, Morgan Stanley's Europe, Middle East and Africa Head of Sustainability Research.

Today I'll discuss a critical issue on a hot topic: How safe is AI?

It's Thursday 10th of October at 2pm in London.

AI is transforming the way that we live, work, and connect. It has real potential to reshape every level and aspect of society, from personal decisions to global security. But as these systems become ever more integrated into our critical functions – whether that's healthcare, transportation, finance, or even defense – we do need to develop and deploy safe AI in a way that keeps pace with the velocity of technological advances.

Market leaders, academic think tanks, NGOs, industry bodies, and intergovernmental organizations have all attempted to codify what safe or responsible AI should look like. But at the most fundamental level, the guidelines and standards we've seen so far share a number of clear similarities. Typically, they focus on fostering innovation in practical terms and supporting economic prosperity – but they also assert the need for AI systems to respect fundamental human rights and values and to demonstrate trustworthiness.

So where are we now in terms of regulations around the world?

The EU's AI Act leads the way with its detailed risk-based approach. It really focuses on transparency, as well as risks to people and fundamental rights. In the U.S., while there's no comprehensive federal regulation or legislation, some federal laws offer sector-specific guidance on AI applications – things like the National Defense Authorization Act of 2019 and the National AI Initiative Act of 2020. Alongside those, President Biden has published an executive order on AI, promoting safety, responsible innovation, and supporting Americans and their rights, including things like privacy. In Asia Pacific, meanwhile, countries are working to establish their own guidelines on consumer protection, privacy, transparency, and accountability.

In general, it’s very clear that policymakers and regulators increasingly expect AI systems developers to adopt what we'd call the socio-technical approach, focused on the interaction between people and technology. Having examined numerous existing regulations and foundational standards from around the world, we think a successful policymaking approach requires the combination of four core conceptual pillars.

We've called them STEP. That's Safety, Transparency, Ethics, and Privacy. With these core considerations, AI can take a step – pun intended – in the right direction. Within safety, the focus is on reliability of systems, avoiding harm to people and society, and preventing misuse or subversion. Transparency includes a component of explainability and accountability – systems allowing for future feedback and audits of outcomes. Ethically, the avoidance of bias, the prevention of discrimination, inclusion, and respect for the rule of law are key components. Finally, privacy considerations include elements like data protection, safeguards during operation, and giving users consent over how their data is used for training.

Of course, policymakers contend with a variety of challenges in developing AI regulations: issues like bias and discrimination, implementing guardrails without stifling innovation, the sheer speed at which AI is evolving, legal responsibility, and much more beyond. At its most basic, though, arguably the most critical challenge of regulating AI systems is that the logic behind outcomes is often unknown, even to the creators of AI models, because these systems are intrinsically designed to learn.

Ultimately, ensuring safety and responsibility in the use of AI is an essential step before we can really tap into ways AI could positively impact society. Some of these exciting opportunities include things like improving education outcomes, smart electric grid management, enhanced medical diagnostics, precision agriculture, and biodiversity monitoring and protection efforts. AI clearly has enormous potential to accelerate drug development, to advance material science research, to boost manufacturing efficiency, improve weather forecasting, and even deliver better natural disaster predictions.

In many ways, we need guardrails around AI to maximize its potential growth.

Thanks for listening. If you enjoy the show, please do leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.
