Video:  Slaughterbots — Published November 13, 2017

This video was created by the group “Ban Lethal Autonomous Weapons” to communicate the risks of AI-run drone swarms.  Professor Stuart Russell, one of AI’s leading researchers, warns at the end of the video about the dangers of autonomous weaponry.  The Future of Life Institute in Boston has shared this video, and it is featured in a Guardian article, “Killer Robots UN Convention on Conventional Weapons”.

Video:  Humans Need Not Apply by CGP Grey — Published Aug 13, 2014

Mr. Grey’s remarkable presentation is a great starting place to put the social implications of the first stage of AI into perspective.

Video:  What happens when our computers get smarter than we are? | Nick Bostrom | TED Conference 2015 — Published March 2015

Nick Bostrom asks the big questions, in this case about the last stage of AI: super-intelligent AI.  And I think he offers a guide to the right answers.  He argues that we have only one chance to get this right, and that it is the single most important challenge facing mankind.

Video:  Can we build AI without losing control over it? | Sam Harris | TED 2016 — Published October 19, 2016

Sam Harris presents a provocative viewpoint regarding the future of AI and the challenges for AI safety.

Sam Harris concludes “I think we need something like a Manhattan Project on the subject of Artificial Intelligence.  Not to build it, because I think we will inevitably do that, but to understand how to avoid an arms race, and to build it in a way that is aligned with our interests.  When you talk about super-intelligent AI, that can make changes to itself, it seems like we only get one chance to get the initial conditions right.  And even then, we will need to absorb the economic and political consequences of getting them right.”

Video:  3 Principles for Creating Safer AI | Stuart Russell | TED 2017 — Published June 2, 2017

Stuart Russell explains the concept of the “off button” for AI / Robots, and promotes the idea that long-term AI / Robot safety can be achieved by 3 rules:
1.  Robots / AI must be purely altruistic
2.  Robots / AI must have uncertain objectives
3.  Robots / AI must learn (Ethics) by observing (all) humans

One could doubt whether these principles suffice, since humans themselves have not always behaved truly altruistically and ethically, but have rather put their survival as individuals above all else!

Video:  Machine Learning | Neil Jacobstein | Exponential Manufacturing | Singularity University — Published May 2, 2017

Neil Jacobstein’s presentation explains the latest developments in AI, aimed at a technical audience.

Video:  The Incredible Inventions of Intuitive AI | Maurice Conti | TED 2017 — Published February 28, 2017

Maurice Conti gives an informative overview of the current capabilities of AI for the design and construction of new objects.

Video:  How computers are learning to be creative | Blaise Agüera y Arcas | TED 2016 — Published July 22, 2016

Blaise Agüera y Arcas from Google presents surprising examples of current AI generating original art, design, and music.  Truly creative endeavors will one day be possible, in fact normal, for super-intelligent AI.

More videos to come, soon . . .