Calls mount for pausing AI development

Shiva Singh

Belgium, (Brussels Morning Newspaper) Calls to pause the development of AI continue to mount after Europol joined experts and bodies expressing concerns earlier this week.

A group of AI experts and industry leaders released a letter on Wednesday calling for a 6-month pause on the development of new AI systems, according to Reuters reporting.

The group warns that AI systems more advanced than the recently launched GPT-4 could pose risks to society and humanity, and stresses the importance of developing and implementing shared safety protocols.

The letter was signed by more than 1,000 experts and industry players, including entrepreneur Elon Musk, AI researchers Stuart J. Russell and Yoshua Bengio, and researchers at DeepMind, among others.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stressed, reiterating that such systems present potential threats.

Signatories called on AI developers to cooperate with governments and regulatory authorities to make sure that deployment of advanced AI systems does not cause political and economic disruptions.

Potential for abuse

Earlier this week, Europol warned that new AI systems could be abused for nefarious purposes including cybercrime, phishing, and more.

Meanwhile, the UK proposed a new regulatory framework for AI on Wednesday under which no new regulatory body would be created; instead, responsibility for AI regulation would be split between existing regulators for health and safety, human rights, and competition.

Since OpenAI, backed by US tech giant Microsoft, released ChatGPT last year, other AI developers have accelerated their work on similar systems, and companies have begun looking for ways to integrate them into their products.

A spokesperson at Future of Life Institute, which published the letter, noted that OpenAI CEO Sam Altman did not sign the letter.

Gary Marcus, professor of neural science at New York University and one of the signatories, pointed out “the letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications.”

He warned that deployment of advanced AI systems could “cause serious harm… the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”
