Artificial Intelligence: Powerful AI systems ‘can’t be controlled’ and ‘do harm’, says UK expert


A British scientist known for his contributions to artificial intelligence told Sky News that powerful AI systems “can’t be controlled” and are “already doing damage”.

Professor Stuart Russell was one of more than 1,000 experts who last month signed an open letter calling for a six-month pause in developing systems even more powerful than the newly released GPT-4 – OpenAI’s successor to its online chatbot ChatGPT, which was built on GPT-3.5.

The headline feature of the new model is its ability to recognise and explain images.

Speaking to Sky’s Sophy Ridge, Professor Russell said of the letter: “I signed it because I think it needs to be said that we don’t understand how these [more powerful] systems work. We don’t know what they are capable of. And that means we can’t control them, we can’t make them behave themselves.”

He said that “people were concerned about disinformation, about racial and gender bias in the outputs of these systems”.

And he argued that, given the rapid progression of AI, time is needed to “develop regulations that ensure that the systems benefit people and do not harm them”.

He said one of the biggest concerns is disinformation and deepfakes (videos or photos of a person in which their face or body has been digitally altered to appear to be someone else – typically with malicious intent or to spread false information).


He said that while disinformation has been used for “propaganda” purposes for a long time, the difference now is that, using Sophy Ridge as an example, he could ask GPT-4 to try to “manipulate” her into becoming “less supportive” of Ukraine.

He said the technology would read Ridge’s social media presence and anything she’s ever said or written, then run a phased campaign to “customize” her newsfeed.

Professor Russell said of Ridge: “The difference here is that I can now ask GPT-4 to read everything about Sophy Ridge’s social media presence, everything Sophy Ridge has ever said or written, everything about Sophy Ridge’s friends, and then just start a campaign by gradually adjusting her newsfeed, maybe occasionally sending some fake news into her newsfeed so that she supports Ukraine a little less, and starts pushing harder on politicians who say we should support Ukraine in the war against Russia, and so on.

“It will be very easy. And the really scary thing is that we could do that to a million different people before lunch.”


The expert, a professor of computer science at the University of California, Berkeley, warned of a “tremendous degradation” caused by these systems “manipulating people in ways they don’t even know is happening”.

Describing it as “really, really scary”, Ridge asked whether that is happening now, to which the professor replied: “Probably, yes.”

He said China, Russia and North Korea have large teams “spreading disinformation”, and that with AI “we gave them a power tool”.

“The letter’s concern really pertains to the next generation of the system. At the moment, the systems have some limitations in their ability to create complicated plans.”

Read more:
What is GPT-4 and how does it improve ChatGPT?

Elon Musk reveals plan to build “TruthGPT” despite warning of AI dangers

He suggested that in the next generation of systems, or the one after that, companies could be run by AI systems. “You could see military campaigns being organised by AI systems,” he added.

“If you build systems that are more powerful than humans, how do humans retain power over those systems forever? That is the real concern behind the open letter.”


The professor said he’s trying to convince governments of the need to plan ahead for when “we need to change the way our entire digital ecosystem… works”.

Since its release last year, ChatGPT, built by Microsoft-backed OpenAI, has prompted rivals to accelerate the development of similar large language models and encouraged companies to incorporate generative AI models into their products.

UK sets out proposals for “light touch” regulation of AI

It comes as the UK government recently unveiled proposals for a “light touch” regulatory framework around AI.

The government’s approach, outlined in a strategy paper, would split responsibility for managing AI between its human rights, health and safety, and competition regulators, rather than creating a new body dedicated to the technology.
