tomatoes
Senior Member
Posts: 1,065
Likes: 1,089
|
Post by tomatoes on Jul 21, 2017 13:29:27 GMT 10
I'm particularly interested to hear what people see as the associated risks that you can prep for. I'm not disputing enormous risk and issues with AI. As in, do you think it will lead to large scale unemployment, rebellion, price increases, etc? I would have thought that impacts would be gradual, but is there a way that things could turn bad quickly?
|
|
fei
Senior Member
Posts: 604
Likes: 876
|
Post by fei on Jul 21, 2017 15:09:23 GMT 10
Yep, that'll increasingly become the norm. Look at carparks over the last couple of decades - it's cheaper, faster, and easier to have a machine take payment and print a ticket than to have a clerk at a desk. It's the same in so many industries. Look at GPS-driven tractors over the last few years, self-parking cars, and soon, self-driving cars, etc. Aircraft autopilot was just the beginning. There are now even retail stores in which you fill your shopping bags and walk out - their scanners detect what you've taken and automatically bill your credit card (I forget where I saw the reference, but I'll look it up when I get a minute). We can assume that as they scan items - and your card - they'll know exactly what each person buys and when. Orwell's 1984 is here. And it's going to go a hell of a lot further.
A few of these stores have opened in China in recent months. A scanner records all the items as the buyer leaves the store, then automatically deducts the price from their phone app. technode.com/2017/07/17/alibabas-taobao-maker-festival-show-face-recognition-payment
I personally don't see how much money is saved in small stores like this. They're designed to be open 24 hours a day, so there are higher lighting / heating / cooling costs, and the shelves have to be continually restocked. I also read that some of these small stores have a couple of security staff standing by in case people try to shoplift.
|
|
fei
Senior Member
Posts: 604
Likes: 876
|
Post by fei on Jul 21, 2017 15:31:14 GMT 10
Mrs Frostbite heads the fraud & investigation team of a major health fund. She has a very good nose for wrongdoing (except when it applies to me, it seems), and upper management provided her with a programming expert to see if they could transfer those skills to a computer program. No real success yet - the computer can identify patterns that may be fraudulent, but it doesn't yet have the instinct required to be effective on its own.
I'm in the same boat. My risk analysis team has been using some AI software for the past year or so, trying to fully automate analysis of factors such as trends, breaking news, client feedback etc into a model that combines all the data we get and then uses algorithms to decide if and how to deal with things. So far no real luck getting a system to work, but I'm sure that as soon as it's figured out how best to combine all these factors, our team will be cut down to one or two staff to deal with whatever the bot thinks needs doing.
I'm particularly interested to hear what people see as the associated risks that you can prep for. I'm not disputing enormous risk and issues with AI. As in, do you think it will lead to large scale unemployment, rebellion, price increases, etc? I would have thought that impacts would be gradual, but is there a way that things could turn bad quickly?
I foresee huge job losses within a couple of years. Not just in blue collar manufacturing (eg. the Taiwanese company that makes the iPhone sacked around half of the staff (~50k people) at one of its main factories last year and replaced them with robots), but also in white collar "knowledge industries". For instance, a lot of the work done in legal firms by paralegals or junior lawyers is just sorting through government legislation, standards etc for information and precedent on a certain topic. Using AI, the law firm can build a set of algorithms with specific parameters and set the software to search for whatever they need.
There goes the need for the paralegals. The other thing about AI is that it's touted as a great cutter of costs (ie. removing the need for human workers, for one thing). Yet the companies using it haven't cut their prices, instead just making ever bigger profits. They don't seem to see the disconnect between people having no jobs and prices not falling, though.
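The pattern-spotting half of what these fraud teams automate is actually the easy bit to sketch; it's the "instinct" that's hard. A toy outlier check in Python (my own illustration with made-up numbers - nothing like any fund's actual software):

```python
# Toy fraud screen: flag claims whose amount sits far from the average.
# Real systems combine many signals; this shows the basic idea only.

def flag_outliers(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` std devs from the mean."""
    n = len(amounts)
    mean = sum(amounts) / n
    variance = sum((a - mean) ** 2 for a in amounts) / n
    std = variance ** 0.5
    if std == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mean) / std > threshold]

claims = [120, 95, 110, 130, 105, 98, 5000, 115]
print(flag_outliers(claims))  # the 5000 claim stands out -> [6]
```

That's all "identify patterns that may be fraudulent" amounts to at this level - the program has no idea whether the flagged claim is fraud or just an expensive operation. That judgment is the part that hasn't been automated yet.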
|
|
spatial
Senior Member
Posts: 2,396
Likes: 1,560
|
Post by spatial on Jul 21, 2017 18:11:27 GMT 10
I'm particularly interested to hear what people see as the associated risks that you can prep for. I'm not disputing enormous risk and issues with AI. As in, do you think it will lead to large scale unemployment, rebellion, price increases, etc? I would have thought that impacts would be gradual, but is there a way that things could turn bad quickly?
A lot of trading on the stock market is done by AI, and the programs have a tendency to rapidly swing stocks in both directions. When a financial shock hits the system, it can bring a crash and mass selling a lot quicker. Many stock market holdings have auto-sell orders at set levels. AI is also accelerating the gap between the haves and have-nots. The so-called middle class all across the western world is shrinking - for example, even in Australia home ownership is decreasing.
Over the years I have written a lot about AI and job loss. About 2 years ago I was talking to one of my work colleagues, whose wife is a pharmacist, and mentioned she shouldn't have too much of a problem. He said that doctors now issue prescriptions from a computer with barcodes, and they are now looking at robots to dispense the prescriptions. We are not far from a future where you could have a medical booth in the supermarket - you sit down, the machines analyse all your symptoms, take blood pressure etc., and if necessary contact a doctor who will look at all the data, maybe ask you a few questions, and issue you medication or health advice. It cuts out a lot of interaction and a single doc can deal with much more. They are also looking at robots that do surgery - cleaner and quicker.
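Those auto-sell orders are why a shock can snowball into a crash so quickly: each triggered sell pushes the price down, which trips the next stop. A toy Python sketch of the feedback loop (all numbers invented for illustration):

```python
# Toy stop-loss cascade: an initial price shock fires any stop orders the
# falling price crosses, and each forced sale pushes the price down further,
# which can trip still more stops. Purely illustrative numbers.

def cascade(price, stops, impact_per_sell=2.0):
    """Fire stop orders as the price falls through them; return final price
    and the stops that fired, in the order they fired."""
    fired = []
    remaining = sorted(stops, reverse=True)  # highest stops trigger first
    changed = True
    while changed:
        changed = False
        for s in list(remaining):
            if price <= s:
                fired.append(s)
                remaining.remove(s)
                price -= impact_per_sell  # forced selling moves the price
                changed = True
    return price, fired

# A shock from 100 down to 97 trips the stop at 97, whose selling
# trips 95, then 93 - three sell-offs from one small shock.
final_price, fired = cascade(97.0, stops=[97, 95, 93, 80])
print(final_price, fired)  # 91.0 [97, 95, 93]
```

With enough programs watching the same levels, one algorithm's sell is another algorithm's trigger - which is the "a lot quicker" part.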
|
|
|
Post by Peter on Jul 21, 2017 20:15:46 GMT 10
In addition to the above threats mentioned, as AI becomes more common, our dependence upon it will increase. Should there then be a problem with it - for example, hacking, programming bugs, system malfunction, etc - society in general will be left without adequate skills to carry on with life.
It's the same idea as preparing for failure of the power grid, communications network, water supply, sewerage systems, or supermarket supply logistics. We've become so dependent upon these systems that if they were to fail we'd very quickly see major problems, particularly in cities & metro areas.
|
|
fei
Senior Member
Posts: 604
Likes: 876
|
Post by fei on Jul 21, 2017 20:33:00 GMT 10
In addition to the above threats mentioned, as AI becomes more common, our dependence upon it will increase. Should there then be a problem with it - for example, hacking, programming bugs, system malfunction, etc - society in general will be left without adequate skills to carry on with life. It's the same idea as preparing for failure of the power grid, communications network, water supply, sewerage systems, or supermarket supply logistics. We've become so dependent upon these systems that if they were to fail we'd very quickly see major problems, particularly in cities & metro areas.
There was a news story this week about Teslas already being stolen by hackers in Europe: www.scmagazine.com/thieves-circumvent-tesla-gps-to-steal-at-least-nine-cars-in-a-week/article/676151
The city I live in in China has decided to go the whole hog on the IoT thing: connect up all the sensors and devices in use all over the city, then feed the data through an AI system that controls all the systems. The idea is that things like traffic control will be very efficient, with sensors all over the place reporting back and telling traffic lights when to change. However, there's then the huge risk of hackers getting in and being able to influence all these systems that were previously separate.
|
|
|
Post by Peter on Jul 21, 2017 20:55:01 GMT 10
I've said before that if someone really wanted to wreak havoc on a city, all they'd need to do is put a heap of traffic lights to green at the same time. This is the same principle, but not necessarily on the same scale as you describe.
|
|
|
Post by graynomad on Jul 21, 2017 21:20:35 GMT 10
IoT: Idiocy of Things.
|
|
blueshoes
Senior Member
Posts: 609
Likes: 700
Location: Regional Dan-istan
|
AI
Jul 21, 2017 22:40:15 GMT 10
via mobile
paranoia likes this
Post by blueshoes on Jul 21, 2017 22:40:15 GMT 10
I wonder if white-collar and blue-collar are going to be replaced by green-collar and grey-collar: "green collar" jobs like smallhold/niche farming, or ranger and environmental oversight in varied contexts (maybe enabled by sensors, but not replaceable yet); "grey collar" jobs because someone needs to code and maintain the robots - would they be the new technocrat class?
Things I worry about: If housing affordability keeps going from bad to worse, there is likely to be serious unrest. I don't see how increasing automation and global scale business enterprise is going to make this any better
If transcribing and data mining means conversations can all be monitored in real time, and surveillance bugs are so small - I heard about hybrid bees and flies with on board cameras - are we developing the tools for a perfect dictatorship?
I studied AI at university. As far as I understood then (and that was a few years ago), AI was split into machine learning (looking for patterns that explain training data sets), data mining (searching for predefined patterns in data sets of known format), natural language processing (grammar and conversational rules), robotics & mechatronics (moving parts and controlling them with appropriate instructions), problem solving (the theory used in AI chess agents, etc.) and a few other areas.
All of these are limited problem sets where the computer is given strategies to cope with a given "problem domain"; finding accounts with behaviour that looks dodgy, responding to conversation, etc.
The thing that's still missing, afaik, is initiative - all of these are still just computer programs following instructions. Even agents like Siri or Alexa are just combinations of the above - natural language input, defined sets of rules for how to handle it, defined outputs. Siri doesn't actually just decide to start randomly talking to someone about Isaac Asimov.
For that reason, while we may end up with massive systems that have learnt (better than humans) how to tweak lights to streamline peak hour traffic, how to do butler service, how to compile databases of facts... I think they will still inherently lack an agenda or initiative. They will be able to look and act human, but they won't "want" things.
I think AI, as good as it gets, will always be at the beck and call of a human master in some way.
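A tiny example of the "patterns that explain training data" idea: a one-nearest-neighbour classifier in Python (entirely made-up data, just to show there's no initiative involved - the program answers exactly one question its instructions define, and nothing else):

```python
# Minimal 1-nearest-neighbour classifier: "learning" here is just storing
# labelled examples and answering new queries by similarity.

def predict(training, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Hypothetical training set: (hours of rain, temperature) -> decision
training = [((8, 10), "stay in"), ((0, 25), "go out"),
            ((6, 12), "stay in"), ((1, 22), "go out")]
print(predict(training, (7, 11)))  # closest to the rainy examples: "stay in"
```

However good its answers get, it never decides to ask its own question - which is the missing "initiative" in a nutshell.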
|
|
blueshoes
Senior Member
Posts: 609
Likes: 700
Location: Regional Dan-istan
|
AI
Jul 21, 2017 23:10:08 GMT 10
via mobile
paranoia likes this
Post by blueshoes on Jul 21, 2017 23:10:08 GMT 10
I'm particularly interested to hear what people see as the associated risks that you can prep for. I'm not disputing enormous risk and issues with AI. As in, do you think it will lead to large scale unemployment, rebellion, price increases, etc? I would have thought that impacts would be gradual, but is there a way that things could turn bad quickly?
If I'm right, then our chances of surviving and thriving in an AI-enhanced EROL situation could probably be improved by...
- Fighting for privacy before it's lost. Before I forget, NSW people: at least one of your shopping centres now has ticketless parking - they auto-recognise your number plate when you drive in, and charge a linked credit card when you exit. No credit card? Tough, better get one if you're there past the 2hrs free. Is that OK with you? What length of time for data storage should be legal? Now is the time for these conversations.
- Staying generally aware of current tech and its weaknesses. Water damage? I honestly don't know, should do more research. At the very least I need to look at apps or other ways to get visibility on things tracking my phone (for example) as it walks around a shopping centre. Not getting WiFi IoT door locks that could be hijacked to lock you in or out of your own house... or will just become useless if systems are down. IoT is a fun toy, and great for monitoring wide areas like cattle stations and farms, not so great in a house.
- Local knowledge. In times of war, coded communications relied on shared keys; in times of surveillance, the shared info that allows for private comms is things like local nicknames, features of the landscape you'd only notice from walking around (not Google Earth), etc.
- Strong real relationships. In-jokes are based on shared experience. Games (like the ones where you describe a word without saying it or nominated keywords) demonstrate this really well.
People who don't know their neighbours, street, suburb/town aren't likely to work together to get an underground going if one becomes necessary. Aside from going to events and playing with new technology and learning how it works, none of this stuff is unique to a particular situation. Local knowledge and strong real relationships are important in all manner of events.
|
|
blueshoes
Senior Member
Posts: 609
Likes: 700
Location: Regional Dan-istan
|
Post by blueshoes on Jul 21, 2017 23:19:39 GMT 10
Yes you can, Gray. Technology may advance, but humans don't. We need food, water, shelter, sleep. Don't need modern technology for that. This worries me a little. Humans can be horrible to each other. I shudder to think what Stalin would have done with modern technology... People don't change; there will always be manipulating #*!%s who make their way up the food chain.
|
|
paranoia
Senior Member
Posts: 1,098
Likes: 1,252
Email: para@ausprep.org
|
Post by paranoia on Jul 22, 2017 0:40:59 GMT 10
AI is being used to refer to different things in this thread, so I'll separate it out as I see it before addressing my concerns, as each type brings different challenges.
ANI - Artificial Narrow Intelligence
This is self-driving cars, chess computers, fraud detection, face/voice recognition, etc. Essentially digital automation, as it simply repeats tasks it is given. No matter how complicated it gets, it's performing a series of limited actions.
All the categories blueshoes listed, along with Siri & Alexa (glorified search engines / voice control), fall into this type of AI.
None of these programs actually think in concepts; they calculate values, produce outputs and look for patterns.
AGI - Artificial General Intelligence
This is where we get into open-ended goals, self-improvement and fluidity of goals - near-human characteristics. Many people, even within the industry, don't see this as possible. In my opinion, it will happen even if the majority of people decide we don't want it to.
Whether you believe this is possible or not depends largely, I think, on what you think life is, what you think the brain is and even what you think a thought is. Most people disagree on these three things, so I worry that if we get too deep into AGI we'll all just end up talking past one another.
I think once you have sufficiently developed ANI you can build a framework that allows those components to self-develop. With very broad goals and an ability to build models for interpreting new data types, it could self-assess those models and incorporate functions that are beneficial (to the initially set broad goals) into itself. This is very similar to the way we see children learning through their early years, particularly the imitation periods.
An interesting thought experiment is to consider whether it would be possible to develop an artificial general intelligence of limited cognitive ability, such as a mouse's. Not just to copy its function, but to replicate its ability to investigate, develop simple concepts and use them to navigate and make decisions in the world.
The ideas I have already going into this thought process are:
* Evolutionary theory - all life on earth has a common origin, and while a human brain is more complicated and has more functions than a mouse brain, they work on a similar principle.
* The universe is deterministic - cause and effect have no exception.
* Despite its complexity, the brain is fundamentally reducible to its individual functions, and mind is explainable as the sum of these functions.
^^ If you disagree with me on any of these points, please PM me. I love talking about this stuff, but don't want this to turn into a philosophy thread.
ASI - Artificial Super Intelligence, or the Singularity
If you're willing to accept AGI, this is what happens when it gets out of hand. There are many things that have to go wrong for us to get to this point, but for me it's something that has to be constantly kept in mind during the development and regulation of AGI. Couple human-style thinking with HUGE data processing ability, give it access to all the data we have out on the internet along with the ability to navigate and process it, and it can now take control of any system, any sensor, any function we've routed through the internet. With nefarious intent, or unintentionally destructive behaviours, that could mean very bad things.
I'll have to go into my thoughts on economic ramifications tomorrow... getting too late
|
|
paranoia
Senior Member
Posts: 1,098
Likes: 1,252
Email: para@ausprep.org
|
Post by paranoia on Jul 22, 2017 0:52:19 GMT 10
...and China declares itself the winner of the AI race.
China to become artificial intelligence ‘world leader’ by 2030
Yesterday, China released a “national AI development plan” which committed it to spending $22.15 billion (£17 billion) on AI research by 2020 and $59.07 billion (£45 billion) by 2025.
“We must take initiative to firmly grasp this new stage of development for artificial intelligence and create a new competitive edge,” it said.
|
|
spatial
Senior Member
Posts: 2,396
Likes: 1,560
|
AI
Jul 22, 2017 13:16:48 GMT 10
Post by spatial on Jul 22, 2017 13:16:48 GMT 10
I'm particularly interested to hear what people see as the associated risks that you can prep for. I'm not disputing enormous risk and issues with AI. As in, do you think it will lead to large scale unemployment, rebellion, price increases, etc? I would have thought that impacts would be gradual, but is there a way that things could turn bad quickly? From ZeroHedge today. www.zerohedge.com/news/2017-07-21/chechnyas-leader-claims-russian-doomsday-device-activated Chechnya's Leader Claims "Russian Doomsday Device" Is Activated
"According to The New York Times, Russia built a system in the 1980s that could do what Kadyrov described, known as the Perimeter System. Bruce Blair, the former US nuclear officer who broke the story of the Perimeter System for The New York Times in 1993, told Business Insider that the system works when it detects nuclear explosions. Only a small crew, deep in a bunker, has a hand in the otherwise automated system, according to Blair.
Essentially, if another country conducted a nuclear attack that would destroy the government of Russia - or anything a 1980s-era system would perceive as a nuclear attack - an automated system would empty Russia's missile silos in an immediate counterattack. Blair's concern is the automation of such a system. "One concern is that it's highly automated, and cyber attacks, for example, or other phenomena, natural or man-made, could set it off," Blair said. "It poses a risk of accidental nuclear attack by Russia."
But there has yet to be an accidental nuclear attack, and it's been over 30 years since the system was first activated. Could the 1980s technology pose a problem and cause an all-out nuclear war? It's hard to say, and experts are not sure either. "This was designed to retaliate massively against the US. What the specific targets are in this plan no one really knows, but it can be safely assumed it's large-scale," Blair said, adding that it would destroy most Americans and most large US cities.
This is troublesome to many Americans, considering the US president is supposed to sign off on any nuclear attack to prevent accidental strikes, and tensions with Russia are continuing to rise. If Washington were incapacitated by a nuclear strike, it's unclear whether the US could respond at all. The US's nuclear weapons are code-locked, and absent the president and a backup in the Pentagon, the US may not be able to respond.
Moscow code-locks its weapons as well, but this system would allow it to retaliate even after a nuclear decapitation."
|
|
|
Post by graynomad on Jul 22, 2017 17:30:23 GMT 10
What could go wrong?
|
|
fei
Senior Member
Posts: 604
Likes: 876
|
AI
Jul 22, 2017 19:26:53 GMT 10
Post by fei on Jul 22, 2017 19:26:53 GMT 10
paranoia: which of the three AI types would the chess playing super computer fit into? Previously I think it was just fed with thousands of move combinations, from which it chose the one it thought best according to the situation. However, if what I hear now is correct, the supercomputer can not only choose from a list of defined moves, but actually develop new moves. This would seem to put it into your second category? I guess the basic automation rather than AI per se will be what takes over many jobs initially. If AI does grow to the point where it can process complex situations, human emotions etc, then few jobs are safe.
|
|
spatial
Senior Member
Posts: 2,396
Likes: 1,560
|
AI
Jul 22, 2017 20:06:34 GMT 10
Post by spatial on Jul 22, 2017 20:06:34 GMT 10
paranoia : which of the three AI types would the chess playing super computer fit into? Previously I think it was just fed with thousands of move combinations, from which it chose the one it thought best according to the situation. However, if what I hear now is correct, the supercomputer can not only choose from a list of defined moves, but actually develop new moves. This would seem to put it into your second category? I guess the basic automation rather than AI per se will be what takes over many jobs initially. If AI does grow to the point where it can process complex situations, human emotions etc, then few jobs are safe.
Personally I would say AI does not have the ability to think; it just makes decisions based on pre-programmed algorithms. Yes, new data can be fed into the AI, and that data is added to the decision making. Chess computers use hundreds of thousands of previous games in their databases to assist decisions. A move is chosen by calculating the possible counter-moves the other player could make, and the possible responses to those, many moves deep. The AI runs through tens of thousands of moves and counter-moves, effectively playing out each option, then makes a single move. That is the advantage of computing: it is not creative thinking, just speed of processing and memory. A human brain could not possibly hold such a vast number of calculations and results in memory.
The other advantage of AI is that it has no rights or safety requirements, no need for lunch breaks, annual leave or weekends off; it works 24/7, so even if it is inefficient it simply out-works the human. It is like handmade vs machine-made: handmade can actually be better quality, but it's too expensive.
I worked with a guy whose father has a large farm in Gunnedah NSW. The planter they use has weed-spraying arms that detect the specific infrared signature of various weeds and spray herbicide on the area as the planter moves through. It is not as precise as what a human can do, but simply more cost-effective. The large farms in the NT have been using drones for years now to scan perimeter fences, water points etc. These farms used to have a large number of farm hands to maintain the property; now only a few, who are directed via radio to just the problem areas. Open-cut coal mines used to have very large survey departments; now they just fly the open cut once a month and, using LIDAR, get a full 3D image of the area. So, as in some of the previous posts, not many people working anymore as the bots take the jobs. On the other hand, the ever-growing pile of health, safety, financial and environmental regulations has created more jobs - though mostly admin.
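The move-and-counter-move search described above is basically the minimax algorithm: score the positions at the bottom of the tree, then back values up assuming the opponent always picks the option worst for you. A toy Python version over a hand-built game tree (the moves and scores are invented, and real engines search millions of real positions, not three):

```python
# Toy minimax: leaves are position scores from our side's point of view;
# internal nodes are dicts of move -> subtree. We pick the move whose
# subtree holds up best once the opponent replies optimally against us.

def minimax(node, our_turn):
    if isinstance(node, (int, float)):   # leaf: an already-scored position
        return node
    values = [minimax(child, not our_turn) for child in node.values()]
    return max(values) if our_turn else min(values)

def best_move(tree):
    # After our move the opponent replies, so evaluate with our_turn=False.
    return max(tree, key=lambda move: minimax(tree[move], our_turn=False))

# Made-up 2-ply tree: our candidate moves, then the opponent's replies.
tree = {
    "take pawn": {"recapture": 1, "ignore": 5},
    "develop":   {"attack": 3, "defend": 4},
    "sac queen": {"accept": -9, "decline": 8},
}
print(best_move(tree))  # "develop": its worst case (3) beats the others'
```

Note it rejects "sac queen" despite the tempting 8 - because a good opponent will take the queen. That's the whole trick: no creativity, just exhaustive pessimism at speed.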
|
|
fei
Senior Member
Posts: 604
Likes: 876
|
Post by fei on Jul 22, 2017 20:16:15 GMT 10
Drones would also seem to be the best way forward for defence. In addition to the satellites, a swarm of drones of various sizes and capabilities can be sent up, or even maintained in the air 24/7 (some are completely solar powered), to keep an eye on things. If something is spotted out in the ocean (people smugglers, illegal fishermen, naval vessels etc), then either other drones or naval vessels can be tasked to check on them or otherwise deal with them.
I actually wonder why (apart from propping up LNP support in SA) the navy is spending billions on a dozen or so new submarines that won't be delivered for decades, when they could spend far less on hundreds or thousands of underwater sensors, drones and so on.
|
|
paranoia
Senior Member
Posts: 1,098
Likes: 1,252
Email: para@ausprep.org
|
AI
Jul 22, 2017 20:58:09 GMT 10
Peter likes this
Post by paranoia on Jul 22, 2017 20:58:09 GMT 10
paranoia : which of the three AI types would the chess playing super computer fit into? Previously I think it was just fed with thousands of move combinations, from which it chose the one it thought best according to the situation. However, if what I hear now is correct, the supercomputer can not only choose from a list of defined moves, but actually develop new moves. This would seem to put it into your second category? I guess the basic automation rather than AI per se will be what takes over many jobs initially. If AI does grow to the point where it can process complex situations, human emotions etc, then few jobs are safe.
ANI - Artificial Narrow Intelligence
This is self-driving cars, chess computers, fraud detection, face/voice recognition, etc. Essentially digital automation, as it simply repeats tasks it is given. No matter how complicated it gets, it's performing a series of limited actions.
ANI is all that exists currently. I don't really consider it to be true artificial intelligence; these systems have very strict sets of conditions and don't actually 'think'... they just crunch data and come up with an answer.
It really depends on what you mean by a 'new move' in chess. Each piece has a defined movement range and there are only so many actions that can be made. There's no reason a computer can't create a bunch of positions it has never seen before and compare them tactically using a few simple parameters. Defended pieces are usually better than undefended ones, threatening pieces of higher point value is advantageous... better chess players could give you a proper list, but I don't see why it couldn't be done mathematically.
I'll note these computers don't have to be perfect, just better than a human.
ANI is the immediate threat here. Most jobs really aren't as complicated as we think they are. I read an interesting article the other day (can't find the link) suggesting that the average lawyer has more to fear from ANI/automation than a McDonald's worker, and I tend to agree. It's really only the top-tier lawyers that take on complicated cases, and those are the ones who would still have a job.
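Comparing positions "mathematically using a few simple parameters" can be as crude as a material count with the standard piece values (this toy ignores position, defended pieces and everything else a real evaluation adds):

```python
# Toy chess evaluation: sum standard material values for each side.
# Real engines add many more terms (mobility, king safety, pawn structure...).

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}  # king excluded

def evaluate(white_pieces, black_pieces):
    """Positive score favours White, negative favours Black."""
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black

# White is up a rook for a knight ("the exchange"): 21 - 19 = +2
print(evaluate(["q", "r", "r", "p", "p"], ["q", "r", "n", "p", "p"]))
```

Generate positions, score them with something like this, keep the best: that's "developing new moves" without a single concept being understood anywhere in the process.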
|
|
blueshoes
Senior Member
Posts: 609
Likes: 700
Location: Regional Dan-istan
|
AI
Jul 25, 2017 22:41:46 GMT 10
via mobile
Frank likes this
Post by blueshoes on Jul 25, 2017 22:41:46 GMT 10
|
|