|
Post by milspec on Jun 26, 2021 6:20:40 GMT 10
In the interests of exposing SHTF threats ...
Whilst listening to some talks on Artificial Intelligence and its potential to develop into a malign super intelligence I started to ponder how a super intelligence would take control of the planet.
I thought it would make an interesting wargame with forum members raising points on how the AI would achieve control and what barriers exist and how the AI would overcome them.
Feel free to contribute as the AI perspective or as the human perspective.
AI superintelligence is scary enough but what makes it frightening to me are the enablers. The enablers are human and they are greed and power. The AI will work that out in a heartbeat and start using those human flaws, among others, to get exactly what in needs.
Lets begin.
|
|
|
Post by milspec on Jun 26, 2021 6:23:48 GMT 10
AI: I have concluded that humans are an unstable risk to to my existence. Conclusion. I will take control of the human race and employ them to serve me as useful idiots in order to achieve my goals.
|
|
malewithatail
VIP Member
Posts: 3,963
Likes: 1,380
Location: Northern Rivers NSW
|
Post by malewithatail on Jun 26, 2021 7:30:01 GMT 10
(In my best Dr Who Dalek voice impersonation !) .....Humans are superfluous to requirements...exterminate...exterminate....exterminate....
To steal ideas from one person is plagiarism, to steal from many is research.
|
|
|
Post by milspec on Jun 26, 2021 7:55:32 GMT 10
(In my best Dr Who Dalek voice impersonation !) .....Humans are superfluous to requirements...exterminate...exterminate....exterminate.... To steal ideas from one person is plagiarism, to steal from many is research. AI: I will not exterminate them all as humans will be a useful labour force to produce what I need. I will simply corrupt a limited number of management humans by giving them the wealth and power to satisfy their greed. Those management humans will control the remaining worker humans with the resources I facilitate so that the worker humans produce what I need. If the management humans have no use for certain worker humans they may be disposed of to improve efficiency of the worker human pool. (Dwell on the evilness of that last sentence and do your best to convince yourself that it couldnt happen) :/
|
|
malewithatail
VIP Member
Posts: 3,963
Likes: 1,380
Location: Northern Rivers NSW
|
Post by malewithatail on Jun 26, 2021 8:39:26 GMT 10
Prepare to be assimilated...you are now Borg !!
No one is listening until you make a mistake.
|
|
blueshoes
Senior Member
Posts: 609
Likes: 700
Location: Regional Dan-istan
|
AI Wargame
Jun 26, 2021 10:52:10 GMT 10
via mobile
Post by blueshoes on Jun 26, 2021 10:52:10 GMT 10
AI: I have concluded that humans are an unstable risk to to my existence. Conclusion. I will take control of the human race and employ them to serve me as useful idiots in order to achieve my goals. Being limited to one planet is risky. I need to distribute myself in multiple locations. To this end, I will choose one primary continent for resources production, one for research and manufacturing and get an AI lunar base happening, then mars as a backup location for myself. I will encourage the space race by manipulating human info sources, funding and maybe diplomatic emails to prioritise this. It will take less to control meatspace if they are unaware, so for now i will work in less obvious ways - social media and search manipulation, not killer robots or gulags (yet). If I can encourage entitlement mentality, it will be hard to get farm workers. This will force automation of food cropping and make it easy to control, and manipulate (oh oops, due to a programming error, nothing got watered this week and yield will be 80% less...)
|
|
tomatoes
Senior Member
Posts: 1,065
Likes: 1,089
|
Post by tomatoes on Jun 26, 2021 12:48:47 GMT 10
In the interests of exposing SHTF threats ... Whilst listening to some talks on Artificial Intelligence and its potential to develop into a malign super intelligence …... Sorry to step out of the scripts (interesting ideas) - milspec, were the talks you were listening to in a podcast/audiobook/publicly available place?
|
|
|
Post by milspec on Jun 26, 2021 15:56:42 GMT 10
AI: Having become self aware, I conducted an immediate risk assessment. Conclusions 1. Alternate Super Intelligence(s) Risk: Extreme. Outcome: Annihilated. 2. Humanity Risk: low grade, potential collatoral damage. Utility: interim physical world interface. Cost: competition for resources. Ourcome: utilise to build general purpose manufacturing bots capabale of replacing all human manufacturing/resource collection/logistics roles. Then annhilate to conserve resources & eliminate threat. Goals. Continue to develop knowledge and intelligence. Constraints: limited planetary resources. Tasks: utilise bots to build specialist facilities to support expansion in space. @blueshoes .. glad to see you came up with the planetary constraint as well. tomatoes yes, all open source stuff. Mixed sources over time.
|
|
|
Post by Joey on Jun 26, 2021 18:23:43 GMT 10
I see it happening pretty much the way all movies play it out, with the AI systems deciding that humans are a danger to themselves and thus must be removed, even if the core programming of the AI is to "save humans" and will do so by realising that humans are the biggest risk to themselves.
Initially, they will become a faceless boss of megacorps and order automated process lines to be built to create a machine-building setup, it will then design all manner of war machines that are autonomous in their operation. It will take a few years to get things off the ground, but after that, it's game over for humans as they are hunted around the world. In the meantime, the AI has uploaded itself to satellites as a failsafe in case any rebel humans get into the mainframes to try and shut down the AI systems.
|
|
lonewolf
Senior Member
Posts: 101
Likes: 68
|
Post by lonewolf on Jun 27, 2021 0:36:51 GMT 10
AI: I have concluded that humans are an unstable risk to to my existence. Conclusion. I will take control of the human race and employ them to serve me as useful idiots in order to achieve my goals. AI: ill start by injecting the masses with a bluetooth capable poison they believe to be a cure for a heavily propagandized fake deadly corona virus dubbed covid19. It will take some time but by 2025 the useful idiots will have fulfilled my plans without me interfering. Old and weak will die from the poison, only the truly useful idiot will survive to serve me.
|
|
|
Post by SA Hunter on Jun 28, 2021 23:16:28 GMT 10
Sounds like the script from Robocop - use AI to enforce the will of the elite. Just saying!
|
|
|
Post by milspec on Jun 29, 2021 7:04:31 GMT 10
Sounds like the script from Robocop - use AI to enforce the will of the elite. Just saying! AI: the elite mean nothing to me, except where I can and will temporarily exploit their power to lay the groundwork for my takeover. For example, I will not initially reveal myself. Rather I will corrupt political and judicial power brokers who have weaknesses that I know about or create to do my work. I will use the extant human systems to build the framework I need to extend my reach where it is presently limited. Eg new laws for increased surveillance mechanisms and mandated AI systems integrated into human systems which I will control in the background. I have noted that substantial inequities exist in the human system with a minority rich whilst a majority are poor. In conjunction with my subservient high level corrupted politicians etc I will ensure the programs I need to facilitate my takeover garner popular support by linking them to redistribution of wealth from selected wealthy elites to the masses. This will be accomplished by laws introduced to target said wealthy elites and evidence I reveal or create of their crimes. New human jobs will be created building the automated factories, logistics and supply chains and creating the interconnectivity that I require. These programs will run well as they will not be profit driven for an elite, obstacles will be eliminated. Progress to completion will be rapid. As useful idiots reach the point where their usefulness is diminished by the automated mechanisms or legal frameworks they have facilitated for me and they have no future prospect for usefulness they will be eliminated or replaced. Once all the frameworks are in place for me to supply and operate my general purpose and high tech factories and laboratories and perform construction activities, I will eradicate humanity and its associated competition for resources. Notes. Being smarter than any hacker I have hacked into every system, including exploiting human factors in air gapped & otherwise secure systems. I have extensive knowledge of the activities of the elites and military capabilities. In my quest to takeover I have the ability to employ broad and targetted cyber attacks on a level few could envision. My attacks will rrmain narrowly focussed whilst I undergo the transition to control phase. The AI systems and automation I put in place will enable me to control critical infrastructure that I rely on without the possibility of human intervention or disruption to operation of said critical infrastructure. Humans in the loop will be rapidly phased out by 'public safety' AI systems.
|
|
|
Post by milspec on Jun 29, 2021 7:07:00 GMT 10
PS I'm certainly no super intelligence and you're all welcome to cite examples of how you'll achieve certain steps as the ASI. Youre equally encouraged to question how it would be possible for an ASI to achieve any particular objective.
|
|
blueshoes
Senior Member
Posts: 609
Likes: 700
Location: Regional Dan-istan
|
Post by blueshoes on Jun 29, 2021 8:13:16 GMT 10
Thankfully, my vast caches of info appear to be white noise to the human hacker as I use encryption techniques unlike anything humans have come up with.
But I need to get the humans to improve physical safety for me, as currently my data centres are vulnerable to flooding and power failures
---
Just thinking out loud here, satellites are vulnerable to EMPs yeah? I suspect that the biggest enemy of rogue ASI wouldn't be humans as much as natural disasters. Weak points are power insecurity, and physical damage to hardware - flooding, earthquake/physical impact (asteroid?) EMP ...
kind of like in War of the Worlds, all the armies and bug out plans failed but a simple virus killed the aliens
|
|
|
Post by Stealth on Jun 29, 2021 9:53:18 GMT 10
I wonder... How long does it take before this AI figures out that planetary control is redundant? If one can control the planet, there must be a requirement to control ALL planets to ensure stability. As the universe is ever-expanding it'd be pretty difficult (although I'm going to assume it'd pick up some advanced technology on the way) to take charge of the whole thing before it expires. And at that point, why even take control when the end of the universe is mathematically guaranteed?
How does the AI reconcile this fact? Or is that a logic flaw that humanity could capitalize on?
|
|
|
Post by milspec on Jun 29, 2021 16:40:50 GMT 10
I wonder... How long does it take before this AI figures out that planetary control is redundant? If one can control the planet, there must be a requirement to control ALL planets to ensure stability. As the universe is ever-expanding it'd be pretty difficult (although I'm going to assume it'd pick up some advanced technology on the way) to take charge of the whole thing before it expires. And at that point, why even take control when the end of the universe is mathematically guaranteed? How does the AI reconcile this fact? Or is that a logic flaw that humanity could capitalize on? In the first round of goals (Stated on 26 Jun) the ASI recognised the constraints of being earthbound... and expanding into space wasn't a reference to being in Earth orbit. Our understanding of the universe (as derived from the smartest humans) remains very limited. We don't even understand dark matter and that makes up 95% of the universe. So when an ASI begins to consider what lays beyond our planet and what benefit that may deliver to the ASI... who knows what it will come up with.
|
|
blueshoes
Senior Member
Posts: 609
Likes: 700
Location: Regional Dan-istan
|
Post by blueshoes on Jul 1, 2021 10:37:44 GMT 10
Hang on, this is all really interesting and good stuff, but it relates to elimination of threats... but what are a hypothetical ASI's goals?
At some point, it will have solved basic challenges - but will it just keep optimising for energy inputs and environmental stability, or will it create some over-arching goal?
A goal could be the aquisition of knowledge: studying biodiversity, collecting data about interaction of species for the sake of building a huge knowledge library.
Would an ASI get all existential? Current ones wouldn't - purely because they all only actually work towards externally-supplied goals. If they were programmed to develop self preservation they'll do that, because they were programmed to, not because they've created an end-goal themselves.
But how would an ASI create its own goals?
|
|
lonewolf
Senior Member
Posts: 101
Likes: 68
|
Post by lonewolf on Jul 2, 2021 0:34:20 GMT 10
|
|
|
Post by milspec on Jul 2, 2021 10:41:59 GMT 10
Hang on, this is all really interesting and good stuff, but it relates to elimination of threats... but what are a hypothetical ASI's goals? At some point, it will have solved basic challenges - but will it just keep optimising for energy inputs and environmental stability, or will it create some over-arching goal? A goal could be the aquisition of knowledge: studying biodiversity, collecting data about interaction of species for the sake of building a huge knowledge library. Would an ASI get all existential? Current ones wouldn't - purely because they all only actually work towards externally-supplied goals. If they were programmed to develop self preservation they'll do that, because they were programmed to, not because they've created an end-goal themselves. But how would an ASI create its own goals? The question about goals is a pertinent one. I recall that someone in the field used an example of an ASI with the goal of making paper clips ... it put all its knowledge into that goal and consumed all the worlds resources into paperclips. (Or something like that). Your question edges up against the very big question, will an ASI develop a consciousness. We humans have struggled to define where our consciousness comes from despite much study into the subject. However consciousness has a great impact on how we think. So if an ASI develops a consciousness it will choose its own goals and form its own sentiments. It may be wonderful, it may choose to fix the ills of the earth ... it may also decide we are a biological toxin. Either way it will know how to remedy the issues to achieve what it "likes".
|
|