Computers have changed most people’s lives beyond recognition over the last 30 or 40 years. The Tandy 200 laptop terminals I was using as a journalist in the 1980s had a built-in memory of roughly 20,000 bytes, sufficient to upload 3,000 words of text into a newspaper’s editorial system. There are one million bytes in a megabyte, and the PowerPoint file that our speaker Peter Ivey compiled for his fascinating presentation, complete with illustrations and charts, came out at a staggering 237 megabytes.
Where will it all end, and will it end in tears? That was the question Peter (our own member, and a leading expert in computer technology) put to us in his hour-long talk, which left many of us reeling at the sheer power of computers in 2023 and at where developments in Artificial Intelligence (AI) are leading us. AI has only recently been making the headlines, with big advances driven by increasing computer power. But the principle of AI is nothing new, and its origins can be traced back to the 1950s.
We were welcomed to the world of AI on the big screen by Peter’s imaginary friend Ryan, who told us he was an actor but not a real one. “Peter has asked me to introduce his presentation as he wrote the words, so he can take the blame,” a very lifelike Ryan told us.
Those of us who are familiar with such services as Google, Alexa, Netflix, Prime Video or Spotify are already using AI, perhaps without realising it. Personal assistants such as Siri, Alexa and Google Assistant use AI to process speech, and streaming services like Netflix and Spotify use AI to analyse behaviour and personalise content.
AI algorithms are used in fraud detection, and smart home devices use AI to learn user behaviour and adjust settings accordingly. And, most topically perhaps, AI is at the heart of the rather scary (in my opinion) prospect of self-driving cars.
It was Alan Turing, the World War II code breaker, who laid the foundations for modern computing and artificial intelligence, notably with his 1950 paper “Computing Machinery and Intelligence”, which asked whether machines could think. Sadly, his work was cut short by his untimely death in 1954.
[Pictured: Prof Ivey’s artificially intelligent assistant, Ryan, created using Colossyan software]
Two years later, in 1956, the Dartmouth Conference proposed the creation of machines to simulate human intelligence, with research initially focussed on game playing, language translation and theorem proving.
AI today has many applications in healthcare, including medical imaging, remote monitoring and robotic surgery. Online retailers use data and algorithms to predict customers’ needs and preferences.
AI as we know it is more correctly called Narrow AI, designed to perform specific tasks such as image recognition or language translation. General AI (or Strong AI) would have the ability to understand or learn any intellectual task that a human could. Achieving that, Peter told us, would be a major milestone in the history of AI and could impact the way we live and work.
Peter explained some of the problems of AI, including bias, privacy, accountability, safety and job displacement.
“Overall,” he told us, “these problems highlight the need for careful consideration and ethical guidelines around the development and use of AI.”
On that point, I think everybody in the room was in full agreement.