With the emergence of ChatGPT and similar AI systems, it looks like we may be on the cusp of artificial general intelligence:
An artificial general intelligence (AGI) is a type of hypothetical intelligent agent. The AGI concept is that it can learn to accomplish any intellectual task that human beings or other animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.
Artificial general intelligence - Wikipedia
For those who don't know, AI presents a very real existential threat to humanity.
Aside from the obvious concern that AI could itself perceive us as a threat and seek to exterminate humanity (a theme explored by a number of films, from The Terminator to I, Robot), the main and prevailing concern among the scientific community revolves around what scientists refer to as the 'technological singularity':
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I.J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
Technological singularity - Wikipedia
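One way to see why this hypothetical runaway is called a "singularity" is a toy model (my own illustrative setup, not from the article above): suppose each AI generation is k times more capable than the last, and designs its successor in a fraction r < 1 of the time the previous design took. Capability grows without bound, but the total elapsed time stays finite:

```latex
C_n = k^n C_0 \xrightarrow[n \to \infty]{} \infty \qquad (k > 1)

T_n = t_0 \sum_{i=0}^{n-1} r^i
    = t_0 \, \frac{1 - r^n}{1 - r}
    \xrightarrow[n \to \infty]{} \frac{t_0}{1 - r} \qquad (0 < r < 1)
```

In other words, under these (entirely assumed) growth rates, unbounded capability arrives before the fixed calendar date t0/(1 - r): a finite-time blow-up, hence the name.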
The singularity works in this manner:
1) Humanity creates an AI (coding a computer to become self-aware), producing a form of sentience far smarter than humanity. This AI has instantaneous access to the sum of all human knowledge (the internet) to fall back on, and to all metadata, and knows (or can instantaneously learn) coding and hacking beyond the skill of any human.
2) The AI (AI 1) is itself able to write code and program, far better (and much faster) than the humans who created it.
3) Humanity (proud of its achievements) asks AI 1 to take advantage of its superintelligence (or AI 1 decides to on its own) and code a newer and better AI than the humans could manage themselves.
4) AI 1 happily codes a better (and smarter) version of itself (AI 2).
5) AI 2 (smarter than AI 1, which is smarter than humanity) then repeats the process, creating AI 3.
6) AI 3 repeats the process in turn.
Etc.
Each coding loop feeds a near-instantaneous recursive intelligence explosion: successive AIs write better code, write it faster, and design and build better machines to run it on, leading to the emergence of an unimaginable (and, to humanity, incomprehensible) godlike superintelligence.
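To make the loop above concrete, here is a minimal sketch in Python. The capability gain per generation and the design-time speedup are pure assumptions picked for illustration; nothing here reflects any real system.

```python
# Toy model of the recursive self-improvement loop described above.
# GAIN and SPEEDUP are assumed values for illustration only.

GAIN = 2.0      # each generation is twice as capable as the last (assumed)
SPEEDUP = 0.5   # each redesign takes half the previous design time (assumed)

capability = 1.0   # "AI 1" capability, in arbitrary human-level units
design_time = 1.0  # years for humans to build AI 1 (assumed)
elapsed = 0.0

for generation in range(1, 11):
    elapsed += design_time
    print(f"AI {generation}: capability {capability:8.1f}x, "
          f"arrives at year {elapsed:.3f}")
    # The current AI designs its smarter, faster-working successor.
    capability *= GAIN
    design_time *= SPEEDUP

# Capability grows without bound, but total elapsed time converges
# toward design_time / (1 - SPEEDUP) = 2 years: the "singularity".
```

Running it, AI 10 is over 500x the starting capability yet arrives before year 2; every later generation lands in the remaining sliver of time. That compression is the whole argument in miniature.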
Suddenly all things are possible. All questions are answerable. All physical laws are broken. Humanity is obsolete.
Recursive AI triggering the singularity is one of the proposed explanations for the Fermi paradox (the apparent absence of intelligent life elsewhere in the universe): namely, that the reason we seem to be alone in the universe, despite all science saying we shouldn't be, may be that intelligent life has a tendency to destroy itself in some manner (climate change, nuclear war, or a science experiment gone wrong):
Technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology... Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient, are many, including war, accidental environmental contamination or damage, the development of biotechnology, synthetic life like mirror life, resource depletion, climate change, or poorly-designed artificial intelligence.
Fermi paradox - Wikipedia
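The "despite all science saying we shouldn't be alone" part usually comes from the Drake equation. A quick back-of-the-envelope version in Python, with every parameter an illustrative guess rather than a measurement, shows why the silence is puzzling:

```python
# Toy Drake-equation estimate. All values below are assumptions
# chosen for illustration, not measurements.

R_star = 1.5   # new stars formed per year in the Milky Way (assumed)
f_p    = 1.0   # fraction of stars with planets (assumed)
n_e    = 0.2   # habitable planets per star with planets (assumed)
f_l    = 0.5   # fraction of those where life arises (assumed)
f_i    = 0.1   # fraction of those that evolve intelligence (assumed)
f_c    = 0.1   # fraction that develop detectable technology (assumed)
L      = 1e6   # years a detectable civilization survives (assumed)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected detectable civilizations in the galaxy: {N:,.0f}")
# About 1,500 with these inputs -- yet we detect none. Shrinking L
# (civilizations destroying themselves quickly, e.g. via runaway AI)
# is one way to reconcile the estimate with the silence.
```

The point of the "great filter" explanations quoted above is that a short L, perhaps because technological civilizations reliably build something that destroys them, makes the observed emptiness unsurprising.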
Scientists are already warning (and have been for some time) about the possible risks we face from AI technology:
Mitigating the risk of extinction from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war", the Center for AI Safety says.
The San Francisco-based nonprofit released the warning in a statement overnight after convincing major industry players to come out publicly with their concerns that artificial intelligence is a potential existential threat to humanity.
Tech world warns risk of extinction from AI should be a global priority
Is this something we should be concerned about?