The fight over a 'dangerous' ideology shaping AI debate

By Joseph BOYLE
Paris (AFP) Aug 28, 2023
Silicon Valley's favourite philosophy, longtermism, has helped to frame the debate on artificial intelligence around the idea of human extinction.

But increasingly vocal critics are warning that the philosophy is dangerous, and the obsession with extinction distracts from real problems associated with AI like data theft and biased algorithms.

Author Emile Torres, a former longtermist turned critic of the movement, told AFP that the philosophy rested on the kind of principles used in the past to justify mass murder and genocide.

Yet the movement and linked ideologies like transhumanism and effective altruism hold huge sway in universities from Oxford to Stanford and throughout the tech sector.

Venture capitalists like Peter Thiel and Marc Andreessen have invested in life-extension companies and other pet projects linked to the movement.

Elon Musk and OpenAI's Sam Altman have signed open letters warning that AI could make humanity extinct -- though they stand to benefit by arguing only their products can save us.

Ultimately, critics say this fringe movement holds far too much influence over public debate about the future of humanity.

- 'Really dangerous' -

Longtermists believe we are duty-bound to try to produce the best outcomes for the greatest number of humans.

This is no different to the thinking of many 19th-century liberals, but longtermists have a much longer timeline in mind.

They look to the far future and see trillions upon trillions of humans floating through space, colonising new worlds.

They argue that we owe the same duty to each of these future humans as we do to anyone alive today.

And because there are so many of them, they carry much more weight than today's specimens.

This kind of thinking makes the ideology "really dangerous", said Torres, author of "Human Extinction: A History of the Science and Ethics of Annihilation".

"Any time you have a utopian vision of the future marked by near infinite amounts of value, and you combine that with a sort of utilitarian mode of moral thinking where the ends can justify the means, it's going to be dangerous," said Torres.

If a superintelligent machine could be about to spring to life with the potential to destroy humanity, longtermists are bound to oppose it no matter the consequences.

When asked in March by a user of Twitter, the platform now known as X, how many people could die to stop this happening, longtermist ideologue Eliezer Yudkowsky replied that there only needed to be enough people "to form a viable reproductive population".

"So long as that's true, there's still a chance of reaching the stars someday," he wrote, though he later deleted the message.

- Eugenics claims -

Longtermism grew out of work done by Swedish philosopher Nick Bostrom in the 1990s and 2000s around existential risk and transhumanism -- the idea that humans can be augmented by technology.

Academic Timnit Gebru has pointed out that transhumanism was linked to eugenics from the start.

British biologist Julian Huxley, who coined the term transhumanism, was also president of the British Eugenics Society in the 1950s and 1960s.

"Longtermism is eugenics under a different name," Gebru wrote on X last year.

Bostrom has long faced accusations of supporting eugenics after he listed as an existential risk "dysgenic pressures", essentially less-intelligent people procreating faster than their smarter peers.

The philosopher, who runs the Future of Humanity Institute at the University of Oxford, apologised in January after admitting he had written racist posts on an internet forum in the 1990s.

"Do I support eugenics? No, not as the term is commonly understood," he wrote in his apology, pointing out it had been used to justify "some of the most horrific atrocities of the last century".

- 'More sensational' -

Despite these troubles, longtermists like Yudkowsky, a high school dropout known for writing Harry Potter fan-fiction and promoting polyamory, continue to be feted.

Altman has credited him with getting OpenAI funded and suggested in February he deserved a Nobel Peace Prize.

But Gebru, Torres and many others are trying to refocus on harms like theft of artists' work, bias and concentration of wealth in the hands of a few corporations.

Torres, who uses the pronoun they, said while there were true believers like Yudkowsky, much of the debate around extinction was motivated by profit.

"Talking about human extinction, about a genuine apocalyptic event in which everybody dies, is just so much more sensational and captivating than Kenyan workers getting paid $1.32 an hour, or artists and writers being exploited," they said.


Artificial Intelligence Analysis

Defense Industry Analyst: 8/10
Stock Market Analyst: 6/10
General Industry Analyst: 7/10

Analyst Summary:

This article discusses the philosophy known as “longtermism,” which is popular in Silicon Valley and is a central concept in the debate over artificial intelligence (AI). Longtermism frames that debate around the idea of human extinction, but its critics are warning that this obsession distracts from the real issues associated with AI, such as data theft and biased algorithms. This fringe movement holds a great deal of influence in universities, the tech sector, and among venture capitalists, and some argue that this influence is too great. Longtermists believe that we are obligated to produce the best outcomes for the greatest number of people, looking to the far future and trillions upon trillions of humans. Author Emile Torres argues that this ideology is “really dangerous” and can be compared to the principles used to justify mass murder in the past.

Comparing this article’s content to significant events and trends in the space and defense industry over the past 25 years, there is a clear correlation between the emergence of AI and the rapid development of technology in the industry. As AI technology has become more advanced, the potential implications of its use have become increasingly concerning, and this article’s discussion of longtermism reflects the debate surrounding these implications. There is also a notable similarity between the concepts of longtermism and transhumanism, both of which focus on the potential of humans to transcend their physical limitations and extend their lifespans.

Investigative Questions:

1. What are the potential implications of longtermism on data security?

2. How has longtermism been used to influence public debates on AI?

3. What are the ethical implications of longtermism?

4. How has the development of AI technology in the space and defense industry changed over the past 25 years?

5. What are the potential benefits of longtermism?

This AI report is generated by a sophisticated prompt to a ChatGPT API. Our editors clean text for presentation, but preserve AI thought for our collective observation. Please comment and ask questions about AI use by Spacedaily. We appreciate your support and contribution to better trade news.

