What potential threats does artificial intelligence pose for companies today?
Dr Geoffrey Hinton, often dubbed the ‘godfather of AI’, quit Google this week* to speak freely about the dangers of AI. Dr Hinton cited concerns over the flood of misinformation and the possibility of AI upending the job market, and went so far as to say he in part regrets his contribution to the field of AI. Luckily for the team at FundCalibre, that same week we had Chris Ford, manager of the Sanlam Global Artificial Intelligence fund, in our offices, so we asked him to share his views.
Chris Ford shares his views on the potential drawbacks and dangers associated with AI, including the role of regulators across the globe when considering the different perspectives on artificial intelligence and the “shades of grey” from a philosophical point of view. He then gives his thoughts on Dr Geoffrey Hinton stepping back from his role at Google, and one of the biggest misapprehensions about artificial intelligence: the assumption that human intelligence is perfectly correlated with digital intelligence.
*week commencing 1 May 2023
Want to learn more about artificial intelligence?
Watch: What is artificial intelligence?
Watch: What are the pros and cons of artificial intelligence?
Want to hear more from Chris Ford?
Watch: Artificial intelligence: a craze or a huge investment opportunity? [Feb 2023]
Listen: Episode 174 of the ‘Investing on the go’ podcast — Artificial intelligence: how it is changing the world and our investment options [Feb 2022]
How do we tackle the dangers associated with AI?
So, the issues around the drawbacks and dangers associated with AI are, I think, increasingly covered in the popular press and in the news. They’re considerable and they need to be thought about, and regulators absolutely have a role to play.
One of the features of the regulatory world in the AI space is that the perspective is different depending on where you sit: the perspective from a regulatory position in Washington [DC] or Brussels will be different from that in Shanghai or Beijing. And it really stems from a difference in philosophical point of departure, and what you believe about the rights of the individual relative to the rights of the collective society.
This is going to be a feature of the AI world over the course of the next 20 years. There is no right or wrong answer, there are just shades of grey. My job as an investment manager is to make sure that we navigate our way through that period in an informed and safe manner on behalf of our fund holders.
Geoffrey Hinton, you know, recently stepped back from his role at Google, saying a number of things about his time as a researcher in the field of artificially intelligent systems. One of the most interesting things I think he said concerned the similarities or, perhaps more importantly, the differences between digital intelligence and biological intelligence, and the importance of understanding that there are some fundamental differences between the two.
And I think one of the biggest misapprehensions about AI stems from the assumption that human intelligence is perfectly correlated with digital intelligence; it is not. The more informed and nuanced our understanding of artificially intelligent systems becomes, the more hopeful I am that we end up with better regulatory outcomes and a better, more safely curated environment within which artificially intelligent systems can be delivered, to the benefit of society as a whole.