Rise of the machines: has technology evolved beyond our control?

In the Guardian Book Preview of New Dark Age: Technology and the End of the Future, author James Bridle discusses how our world is changing due to technology: “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?”

One of the more powerful systems in daily use is Google Translate. Bridle writes: “While a human can draw a line between the words “tank” and “water” easily enough, it quickly becomes impossible to draw on a single map the lines between “tank” and “revolution”, between “water” and “liquidity”, and all of the emotions and inferences that cascade from those connections. The map is thus multidimensional, extending in more directions than the human mind can hold. As one Google engineer commented, when pursued by a journalist for an image of such a system: “I do not generally like trying to visualise thousand-dimensional vectors in three-dimensional space.” This is the unseeable space in which machine learning makes its meaning. Beyond that which we are incapable of visualising is that which we are incapable of even understanding.”
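To make the idea of “thousand-dimensional vectors” a little more concrete, here is a minimal sketch in Python of how a machine-learning system represents word meaning as vectors and compares them. The three-dimensional vectors below are invented purely for illustration; a real translation system learns hundreds or thousands of dimensions, which is exactly why its “map” of meaning cannot simply be drawn or visualized.

```python
# Minimal sketch: word meanings as vectors, compared by cosine similarity.
# The three-dimensional vectors below are made up for illustration only;
# a system like Google Translate learns vectors with hundreds or thousands
# of dimensions, which is why no one can simply "draw" its map of meaning.
import numpy as np

toy_embeddings = {
    "tank":       np.array([0.9, 0.1, 0.3]),   # hypothetical values
    "water":      np.array([0.8, 0.2, 0.1]),
    "revolution": np.array([0.2, 0.9, 0.4]),
    "liquidity":  np.array([0.3, 0.3, 0.9]),
}

def cosine_similarity(a, b):
    """Higher value = the model treats the two words as closer in meaning."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for w1, w2 in [("tank", "water"), ("tank", "revolution"), ("water", "liquidity")]:
    print(w1, w2, round(cosine_similarity(toy_embeddings[w1], toy_embeddings[w2]), 3))
```

With real embeddings, every word sits at a specific point in a space too large to picture, and “meaning” emerges from the distances and directions between those points.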

 

Artificial Intelligence as Religion or God?

The Guardian article “Deus ex machina: former Google engineer is developing an AI god” discusses the development of AI “gods” using the example of Anthony Levandowski, a controversial engineer and entrepreneur who recently filed papers for the creation of a non-profit religious organization called “Way of the Future”.

In response, Elon Musk has shared his thoughts on anyone planning to create artificially intelligent (AI) digital deities for us to worship. Elon tweets: “On the list of people who should absolutely *not* be allowed to develop digital superintelligence.”

Photo: Anthony Levandowski (right), with Josh Switkes and Sean Waters, at the American Trucking Associations’ MCE 2016
Wired magazine’s article “God is a Bot . . .” from September 27, 2017, explains how Levandowski, an engineer and serial founder, played a key role in the development of Google’s street-mapping and self-driving car efforts over a convoluted career. Levandowski is also the focus of intense litigation between Google, Uber and Otto over self-driving car and truck technology and software, having worked at all three companies as well as having founded several suppliers to them.

FACIT: Because AI developers are among the first to realize the magnitude and ramifications of the AI revolution, it is natural that some would be the first to feel the need for a moral and/or religious explanation or framework that incorporates Artificial Intelligence. Once AI becomes more evident in everyday life, consumers may also react with religious inclinations, both for and against.

Are Truckers in Denial about losing their Jobs to AI autonomous trucks?

“The only human beings left in the modern supply chain are truck drivers”. In the Guardian newspaper article of October 13, 2017, author Dominic Rushe relates his experiences talking to truck drivers at the world’s biggest truck stop – “Iowa 80” – about the possible prospects of losing their jobs to autonomous trucks.

Similar to our experience interviewing people whose jobs are at risk from automation and AI, Rushe observes that the vast majority of truckers seem to be in denial. Our question would be: what do you call that? Mass Denial? Mass Repression? Or even Mass Sublimation, if significant portions of certain sectors of the economy lose their jobs to automation and unleash their fury on immigrants instead?

Rushe also interviews Finn Murphy, author of The Long Haul, the story of a long-distance truck driver. Finn Murphy states that the days of the truck driver as we know him are coming to an end. Trucking is a $700bn industry, in which a third of costs go to compensating drivers, and, he says, if the tech firms can grab a slice of that, they will.

Finn continues, “The only human beings left in the modern supply chain are truck drivers. If you go to a modern warehouse now, say Amazon or Walmart, the trucks are unloaded by machines, the trucks are loaded by machines, they are put into the warehouse by machines. Then there is a guy, probably making $10 an hour, with a load of screens watching these machines. Then what you have is a truckers’ lounge with 20 or 30 guys standing around getting paid. And that drives the supply chain people nuts,” he says.

The goal, he believes, is to get rid of the drivers and “have ultimate efficiency”.

“I think this is imminent. Five years or so. This is a space race – the race to get the first driverless vehicle that is viable,” says Murphy. “My fellow drivers don’t appear to be particularly concerned about this. They think it’s way off into the future. All the people I have talked to on this book tour, nobody thinks this is imminent except for me. Me and Elon Musk, I guess.”

Take a look at the article; it is a good read, and it coincides with our interview experience. Sam Harris opens his TED talk by discussing the same denial problem with regard to AI, and this is more of the same.

Further research is needed if we are to understand the challenges confronting society as these changes begin.

Lack of an Appropriate Response to Societal Impact of AI

Sam Harris remarks in the opening part of his TED talk that people don’t seem to be able to muster an appropriate response to the dangers of AI:

Already today, anyone who takes the trouble can read in the papers and on the net, or even see on TV, that machines will likely take substantial chunks of employment away from humans quite soon, and that, according to many experts, AI will eventually attain superintelligence or reach the Singularity.

So far, we just don’t see much of a reaction to that. We’ve asked supermarket employees, truck drivers, Uber drivers and taxi drivers, and we just don’t get much of a response. We hear things like: “Gee, that sounds like it could be a problem – let’s wait and see.”

You could infer from Sam Harris’s talk that people tend instead to be concerned about much less likely scenarios: terror attacks, North Korean nuclear warfare, losing their jobs to immigrants, getting mugged in the city, or global warming (which, aside from producing extreme weather, hurricanes, droughts, floods, the re-arrangement of waterfront cities and so on, will likely be more of an annoyance to the vast majority of the world’s population than an existential threat to civilization as we know it).

We recently interviewed Michele Hanson from the Guardian (brilliant woman). She points out that the so-called “Universal Basic Income”, should it ever become reality, would likely not provide an adequate income to maintain a normal consumer lifestyle, but instead a pittance (perhaps like the US “welfare” system, or in the best case like the German “Hartz IV” – https://en.wikipedia.org/wiki/Hartz_concept ?).

So what are the roots of mass denial?  Why does it seem humans are relatively incapable of grasping the impending societal disruptions?  More research is needed.

Maintaining Social Balance during Transition to AI Economy

The investors in AI projects, the companies involved in AI, the banks and the wealthy themselves must share a self-interest in maintaining a balanced, stable society during the transition to an AI economy – not just the “progressives”, the middle class and the disadvantaged.

During the transition phase to an AI economy, the financial interests of the corporations lie in maximizing the number of well-earning consumers and minimizing instability, strife and chaos. This may be why some of the best studies on this subject have been financed by Citibank and UBS.

Modern corporations would have little to gain, and in fact quite a bit to lose, if automation and AI were to result in widespread job loss and a semi-feudal economic system characterized by a privileged upper class and mass borderline poverty.

Surely the well-heeled families and the captains of industry must share an interest in continued social stability and widespread wealth.  Already today, most of the top 1% live in guarded, almost prison-like compounds owing to concerns about potential violence and robbery and not infrequently feel trapped by it.

Our experience is that most AI developers have at least a vague idea of the potential dangers to society of the widespread implementation of Artificial Intelligence. Amongst the ever-increasing number of AI developers there are likely growing numbers of conscientious individuals who are becoming concerned. We believe these individuals may begin to take action on their own, or be attracted to organizations they perceive to be aligned with their concerns. They could either:
a. become advocates for awareness and sources of information for spreading awareness
or
b. become passionate advocates for constructive solutions, dialog and cooperation.

If you are reading this, we invite you to participate! See our section “What You Can Do” or sign up to volunteer.

 

AI-based VR Gaming as addiction — Ready Player One (2018)

Ready Player One, produced and directed by Steven Spielberg, stars Tye Sheridan as Wade Watts, a teenager in a ravaged future in 2044 in which people escape their grim real lives by retreating into the virtual reality world of the “OASIS.”

FACIT: “Ready Player One” focuses on arcana, facts and fads from the 80s – not our subject. But the film will be served to consumers both on public screens and on (addictive) VR headsets at its 2018 launch – 26 years before the film’s 2044 time frame. From the plot summary it seems that most people depicted in this, yet another dystopian, apocalyptic film, are unemployed and living in substandard conditions, i.e. a more or less feudal system has arisen, presumably due to automation and AI. Perhaps the film’s timeline needs adjusting, both for earlier technical development of AI-assisted VR and for earlier societal ramifications of AI automation.

Adapted from the book by Ernest Cline, Ready Player One is a sci-fi action thriller that unfolds primarily within the virtual reality space. The film is slated for global theatrical release on March 30, 2018, from Warner Bros. Pictures, Amblin Partners and Village Roadshow Pictures.

Plot summary: In 2044, the material world is a hellish dystopia, for various reasons: “The ongoing energy crisis. Catastrophic climate change. Widespread famine, poverty, and disease. Half a dozen wars,” Cline writes. Our hero is Wade Watts, an 18-year-old boy living in a Columbus, Ohio trailer park where all the trailers are stacked vertically. Everyone spends the vast majority of their time in the OASIS, a massive virtual-reality utopia created by a reclusive Steve Jobs–esque super-genius named James Halliday. For the right price, you can go anywhere, do anything, be anyone.

As the book opens, news breaks that Halliday has died, and has left his entire $240 billion fortune to whoever can navigate a series of trials and riddles and fetch quests, all of which require an encyclopedic knowledge of 1980s arcana, including movies, music, TV shows, and especially video games, all compiled in a downloadable, 1,000-page bible called Anorak’s Almanac.

“Ready Player One is one of the most anticipated movies in the world, and has tremendous potential to engage and entertain the worldwide market, showcasing the transformative nature of VR, and what it can and will be,” said Rikard Steiber, President of Viveport, the exclusive partner for VR based on the film.

AI, Big Data and Influence on Human Behavior

Yesterday, Mike, an op-ed contributor for MediaPost, described in a commentary how, as consumers begin to realize how web cookies, location information and sensors follow their digital lives – and who is watching them – they will become uncomfortable.

Mike quotes tech angel investor Esther Dyson, “The advertising community has been woefully unforthcoming about how much data they’re collecting and what they’re doing with it. And it’s going to backfire on them, just as the Snowden revelations backfired on the NSA.”

As Jonathan H. King & Neil M. Richards wrote back in 2014, “Big data analytics can compromise identity by allowing institutional surveillance to moderate and even determine who we are before we make up our own minds.”

As we’ve stated in previous posts, it is likely that AI will soon be able to predict what we want to do or even what we are about to think.

The question is: how will consumers react?

Mike is rightly concerned about the loss of the role of chance (or destiny) in our lives, as AI increasingly lays out an invisible path of preferences, suggestions and hints that consumers are likely to find on the one hand hypnotic, but also nearly inescapable. Think of those ads that follow you everywhere on the net.

Our concern in this regard revolves around three issues:
1. To what extent will consumers become addicted to AI’s unending need to please, predict, entertain (and sell)?
2. What influence will AI’s soon-to-be all-encompassing predictive capacity have on the natural development processes of humans, i.e. on psychology and behavior?
3. How might society react as AI’s influence becomes more and more visible and, in some ways, almost confining?

AI will become nearly pervasive across most sectors of society, because companies have a very high profit motivation to satisfy our needs and desires, whether it be choice of music or choice of mates, whether it be information or communication, education or indulgence.

AI robot Learns Positive Human Values – Chappie (2015)

Chappie is a 2015 science fiction film directed by South African director Neill Blomkamp and written by Blomkamp and Terri Tatchell.

FACIT: Although the film did not enjoy positive reviews (Tomatometer 33%), it is a thought-provoking story of the learning process of an AI robot, and it provides a simplistic but somewhat realistic-feeling scenario of how AI robots might develop and threaten mankind – and also of how a single programmer with access to the AI system can pose quite a threat!

The film, set and shot in Johannesburg, is about an artificially intelligent law enforcement robot captured and taught by gangsters, who nickname it Chappie.

Chappie stars Sharlto Copley, Dev Patel, Jose Pablo Cantillo, Sigourney Weaver, Hugh Jackman, and Watkin Tudor Jones (Ninja) and Yolandi Visser of the South African zef rap-rave group Die Antwoord as metafictional versions of themselves.  Chappie premiered in New York City on March 4, 2015, and was released in U.S. cinemas on March 6, 2015. The film grossed $102 million worldwide against a $49 million budget.

 

Scenario 3: Human addiction to Virtual Assistants

Timeframe 2018 – 2025
Dependency on Virtual Assistants / AI becomes widespread


Situation: Virtual Assistants / AI become addictive
– AI may become better at anticipating what you want than you are
– AI may converse intelligently about your favorite topics, drive you where you want to go, find and play delightful music, console you when you feel down, entertain and even cook, shop and “take care” of you in many ways
– AI-linked sex toys may cause another kind of dependency
– An “arms race” of virtual assistant-linked devices is likely, i.e. my fridge is smarter than yours, my Alexa can do more than your Google Home, mine does video, mine is 3D, mine is Virtual Reality etc.

– Towards the end of this time period, you may need a virtual assistant just to talk to other virtual assistants: “Have your Virtual Assistant talk to my Virtual Assistant and we’ll do lunch.”

– See background below for more information

Potential Effects:  Human addiction to Virtual Assistants / AI
– Social inequality may increase as AI is seen as crucial to success, “happiness” and security, but remains unavailable to those at the lowest income levels
– Alienation, depression, dependency and asocial behavior issues may become widespread among those who use virtual assistants regularly
– Attempts to regulate the use of personal assistants as if they were drugs or alcohol may not be successful
– Possible growth of “self-help” groups to curb addiction
– Possible growth of “Luddite” movements, i.e. back to nature, anti-technology
– Possible growth of anti-AI religious sects or political movements
– Possible rise in populism and social unrest

Background
– Dedicated personal assistant devices such as Amazon Echo and Google Home, using AI-supported speech recognition and search, are growing at a rate of 35% per year; Amazon alone has sold millions of Echo devices as of mid-2017
– Very soon screens and image recognition will follow, and soon after that 3D, holographic and virtual reality devices
– In the future, almost any electric device may act as your assistant or interact with it, from toasters to thermostats to lamps to televisions to refrigerators to automobiles
– Personal preference prediction accuracy (Netflix, iTunes or Amazon recommendations, etc.) is increasing rapidly with AI and is already over 60% (a minimal sketch of how such prediction works follows below)
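
As a rough illustration of the preference-prediction bullet above, here is a minimal sketch of item-based collaborative filtering, one of the basic ideas behind recommendation engines. The tiny ratings matrix and the choice of method are invented for illustration; real services combine far more data and far more sophisticated (often deep-learning) models.

```python
# Minimal sketch of item-based collaborative filtering, the rough idea behind
# "you may also like" recommendations. The tiny ratings matrix is invented;
# real services use millions of users and far richer signals.
import numpy as np

# Rows = users, columns = items (e.g. films); 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def item_similarity(r):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(r, axis=0, keepdims=True)
    return (r.T @ r) / (norms.T @ norms + 1e-9)

def predict(r, user):
    """Score unrated items for one user as a similarity-weighted average."""
    sim = item_similarity(r)
    scores = sim @ r[user] / (np.abs(sim).sum(axis=1) + 1e-9)
    scores[r[user] > 0] = -np.inf   # don't re-recommend items already rated
    return scores

print("Recommended item for user 0:", int(np.argmax(predict(ratings, 0))))
```

The more such a system learns about your past choices, the better it anticipates the next one – which is precisely what makes it both convenient and potentially addictive.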

 

 

Scenario 2: Internet Outages and Hacking Attacks

Timeframe 2018 – 2021:
Increasing temporary outages of the internet
Increasing hacking compromising large quantities of data

Situation: Internet outages and/or frequent hacking attacks
– Internet outages interrupt business operations, communications and life in general – similar to widespread power outages, natural disasters, etc.
– Increasingly frequent data theft of large quantities of data
– Possibly in connection with ransom demands or political conflicts
– Email, video, phone, television, social media interrupted
– Possible government or terror linkages, i.e. even military agendas

– Safety issues may appear (emergency services, navigation, military)
– See background below for more information

Potential Effects:  Rising Public Concern and potential Backlash
– Public clamor for risk reduction
– Possible push-back against Gov. & Industry for failing to protect
– Possible demand for regulation of internet, i.e. tech restrictions
– Possible demand for second-tier “secure” internet
– Possible calls for regulation of AI technology itself
– Possible rise in populism and social unrest
– Possible blaming of foreign powers, i.e. calls for Counterattacks or even Cyberwars

Background
“Deep Learning techniques and tools are easily available now on open source platforms—this combined with the relatively cheap computational infrastructure effectively enables cyberattacks with higher sophistication.  Also the availability of large amounts of social network and public data sets (Big Data residing on the internet) increases the risks.”
— Deepak Dutt, founder of Zighra mobile security, Gizmodo 9/11/17

– No interfaces are needed – AI already resides on the web.  The internet is an open system and the perfect network for Artificial Intelligence to farm data, interact worldwide and potentially launch denial of service attacks or sophisticated hacking attacks.

– Widespread suspicion among experts that AI-supported hacking was behind the recent Equifax hacking attack. 

– Countermeasures essentially involve using AI to simulate attacks and AI to simulate defenses, leading to an AI arms race (a small illustration of the defensive side appears after this list)

– Possible AI involvement in the Russian-sponsored effort to influence the 2016 US election campaign
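
As a small illustration of the defensive side of the AI arms race mentioned in the countermeasures bullet above, here is a minimal sketch of machine-learning anomaly detection over traffic features, one common defensive building block. The synthetic features (requests per minute, bytes transferred, distinct IPs) and all numbers are invented for illustration; this is a sketch under stated assumptions, not a description of any particular security product.

```python
# Minimal sketch of AI-assisted defense: flag anomalous traffic records
# with an Isolation Forest. The synthetic features (requests per minute,
# bytes transferred, distinct IPs contacted) are invented for illustration;
# a real deployment would use far richer telemetry and careful tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Mostly "normal" traffic, plus a few injected outliers standing in for attacks.
normal = rng.normal(loc=[50, 2_000, 3], scale=[10, 400, 1], size=(500, 3))
attacks = rng.normal(loc=[900, 50_000, 40], scale=[50, 5_000, 5], size=(5, 3))
traffic = np.vstack([normal, attacks])

model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = model.predict(traffic)   # -1 = flagged as anomaly, 1 = normal

print("Flagged records:", np.where(flags == -1)[0])
```

Attackers can use similar tools to probe and evade such detectors, which is what turns the situation into the arms race described above.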