AI anxiety as computers get super smart

By Julie JAMMOT
San Francisco (AFP) Nov 1, 2023
From Hollywood's death-dealing Terminator to warnings from the late physicist Stephen Hawking and Silicon Valley stars, fears have been fueled that artificial intelligence (AI) could one day destroy humanity.

Tech titans are racing to create AI far smarter than people, pushing US President Joe Biden to impose emergency regulation and the European Union to seek major legislation to be agreed by the end of this year.

A two-day summit starting Wednesday in London will explore regulatory safeguards against AI risks such as those below.

- Job stealer? -

The success of ChatGPT from OpenAI has ignited debate about whether "generative AI", capable of quickly producing text, images and audio from simple commands in everyday language, poses a tremendous threat to jobs held by people.

Automated machinery is already used to do labor in factories, warehouses, and fields.

Generative AI, however, can take aim at white-collar workers such as lawyers, doctors, teachers, journalists, and even computer programmers.

A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.

Boosters of such technology have invoked the notion of a universal basic income in which machines generate wealth that is shared with people freed of the burdens of work.

But it is also possible that companies would reap the profits of improved efficiency, leaving those out of work to fend for themselves.

- Copycat? -

Artists were quick to protest software such as Dall-E, Midjourney and Stable Diffusion, which can create images in nearly any style on demand.

Computer coders and writers followed suit, critiquing AI creators for "training" software on their work, enabling it to replicate their styles or skills without permission or compensation.

AI models have been taught using massive amounts of information and imagery found online.

"That's what it trains on, a fraction of the huge output of humanity," OpenAI co-founder Sam Altman said at a conference in September.

"I think this will be a tool that amplifies human beings, not replace them."

- Disinformation tools? -

Fake news and deepfakes have been around for years, but the ability to easily crank them out using generative AI raises fears of rampant online deception.

Elections run the risk of being won by those most adept at spreading disinformation, contends cognitive scientist and AI expert Gary Marcus.

"Democracy depends on access to the information needed to make the right decisions," Marcus said.

"If no one knows what's true and what's not, it's all over".

- Fraud? -

Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches.

Technology lets them copy a face or a voice and trick people into falling for deceptions such as claims that a loved one is in danger.

US President Biden called the ability of AI to imitate people's voices "mind blowing" while signing his recent executive order aimed at the technology.

There are even language models trained specifically to produce such malicious content.

- Human role models -

As with other technologies with the potential for good or evil, the main danger is posed by humans who wield it.

Since AI is trained on data put on the web by humans, it can mirror society's prejudices, biases, and injustices.

AI also has the potential to make it easier to create bioweapons, hack banks or power grids, run oppressive government surveillance, and more.

- AI overlord? -

Some industry players fear AI could become so smart that it could seize control from humans.

"It is not difficult to imagine that at some point in the future, our intelligent computers will become as smart or smarter than people," OpenAI co-founder and chief scientist Ilya Sutskever said at a recent TED AI conference.

"The impact of such artificial intelligence is going to be truly vast."

OpenAI and rivals maintain the goal is for AI to benefit humanity, solving long-intractable problems such as climate change.

At the same time, AI industry leaders are calling for thoughtful regulation to prevent risks such as human extinction.


Artificial Intelligence Analysis

Objectives:

This text discusses the potential risks of artificial intelligence (AI) and explores regulatory safeguards against those risks.

Current State-of-the-Art and Limitations:

The current state of the art includes generative AI, which can quickly produce text, images, and audio from simple commands in everyday language. This technology has sparked debate about whether it could lead to job loss. Automation is already used in factories, warehouses, and fields, and generative AI could take aim at white-collar jobs.

What's New and Why it Will Succeed:

The two-day summit in London will explore regulatory safeguards against AI risks, such as the potential for job loss due to generative AI and the potential for AI to copy artists' work without permission or compensation.

Target Audience and Impact:

The target audience of this text is those who are concerned about the potential risks of artificial intelligence. If successful, the regulatory safeguards explored in the summit could help protect people from job loss and from having their work copied without permission or compensation.

Risks Involved:

The risks involved in pursuing this approach include potential job loss due to automation and generative AI, and the potential for AI to copy artists' work without permission or compensation.

Cost and Timeline:

The cost and timeline of pursuing this approach are unknown.

Success Metrics:

The mid-term and final success metrics for this approach are unclear.

Score: 8/10

DARPA is likely to be interested in this text as it discusses the potential risks of artificial intelligence and explores regulatory safeguards against those risks. The text also identifies the target audience and potential impact of this approach, as well as the risks involved. However, the cost and timeline for pursuing this approach are unknown, as are the mid-term and final success metrics.

This AI report is generated by a sophisticated prompt to a ChatGPT API. Our editors clean text for presentation, but preserve AI thought for our collective observation. Please comment and ask questions about AI use by Spacedaily. We appreciate your support and contribution to better trade news.
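For readers curious about the mechanics, below is a minimal sketch of how such an article analysis might be requested from the ChatGPT API. The model name, prompt wording, and function name are illustrative assumptions, not Spacedaily's actual (unpublished) pipeline.

# Hypothetical sketch only: the model choice and prompt wording are
# assumptions; Spacedaily's actual prompt and pipeline are not published.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

ANALYSIS_PROMPT = (
    "Analyze the following news article. Report on: Objectives; "
    "Current State-of-the-Art and Limitations; What's New and Why it "
    "Will Succeed; Target Audience and Impact; Risks Involved; Cost "
    "and Timeline; Success Metrics; and a score out of 10 for likely "
    "DARPA interest.\n\nArticle:\n"
)

def analyze_article(article_text: str) -> str:
    """Send the article plus the analysis prompt to the API and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the report's actual model is unstated
        messages=[{"role": "user", "content": ANALYSIS_PROMPT + article_text}],
    )
    return response.choices[0].message.content

A single structured prompt like this would explain the report's fixed section headings: the model fills in each requested field in order, and the editors then clean the output for presentation.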

