The Narrow Road to the Deep Learning that Leads to …
Immortality …. Or Not
I am wading into the murky field of Artificial Intelligence (AI) because it was a conversation about AI that prompted me to start this blog. A few weeks ago I would have rolled my eyes and clicked away from this page. Instead, I spent the last week reading as much as I could about AI to get my head around the topic, and it wasn’t that bad! There are millions of books, articles and posts out there. I have only scanned the tip of the iceberg, but I already feel much more enlightened about what is going on. I can also recommend a few articles that are fun to read and comprehensive, for anyone who wants to know more and doesn’t have time to carry out their own search. There is a short reading list at the end of this piece for anyone hungry for more information.
A Word about Robots
Before I launch into my quick overview of where I think we are with AI and some of the implications worth thinking about, I want to say a word about robots. I am not going to use the word “robot” very much in this piece, although robots are very much part of this debate. We often think of robots and AI as one and the same, but robots are just the shells in which AI exists. The AI is the computer (the “brain”, sort of) that controls the robot shell (the “body”). However, how AI develops and how it affects our lives will also depend on the kinds of robots we build to house it. Just as human progress was shaped by changes to our physicality (standing upright, developing opposable thumbs, etc.), the physical attributes of the robots we create will shape how the AI inside them affects us: how “effective” it is at replacing humans in certain tasks, for example. A quick example is the recently developed Atlas robot, which can pick itself up after it falls (https://www.youtube.com/watch?v=PisoSgwcRVQ); this was apparently a major development in robotics. So, keeping in mind that the shell part of this is very important, here is what I have learned about AI.
ANI – AGI – ASI: The Path to Artificial Super-Intelligence
Yikes, this sounds scary, and it is to an extent. But it gets less scary once we understand what is going on. That is our goal. This is just a quick summary, so again, if you want more, please take some time to read what the experts have written.
There are three broad categories of Artificial Intelligence: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super-Intelligence (ASI).
Artificial Narrow Intelligence
ANI is with us and all around us in the form of computer programs that accomplish specific, narrow tasks usually carried out by humans. This ranges from voice recognition programs and GPS to the technology in self-driving cars. Siri is a kind of ANI, and so are the computers that can play strategy games like Go and chess.
(As I write this DeepMind’s AlphaGo program has just beaten the world Go champion, Lee Sedol, in the first of a series of matches taking place in South Korea. See more about that here: (Match 1 – Google DeepMind Challenge Match: Lee Sedol vs AlphaGo).)
The computer programs that track our purchases on Amazon, or our searches on Google are ANI. And here is a new one that recently moved into my Microsoft Outlook mailbox: Clutter. Clutter is an email filter. It sifts through my emails for me and files away ones that it thinks I don’t want to read. Every once in a while it sends me a list of these emails and asks me to put back into my inbox any important messages. It explains, “Clutter will learn from this and do better next time.” Yay!
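Microsoft has not published how Clutter actually works, but the general idea of a filter that “learns from this and does better next time” can be sketched in a few lines. Everything below is purely illustrative (a toy word-count score, with made-up class and method names), not Outlook’s real mechanism:

```python
from collections import Counter

class ToyClutterFilter:
    """Illustrative word-frequency filter: scores incoming mail by how often
    its words appeared in messages the user previously marked as clutter
    versus messages the user moved back to the inbox."""

    def __init__(self):
        self.clutter_words = Counter()
        self.keep_words = Counter()

    def _words(self, text):
        return text.lower().split()

    def learn(self, text, is_clutter):
        # "Clutter will learn from this and do better next time."
        target = self.clutter_words if is_clutter else self.keep_words
        target.update(self._words(text))

    def looks_like_clutter(self, text):
        words = self._words(text)
        clutter_score = sum(self.clutter_words[w] for w in words)
        keep_score = sum(self.keep_words[w] for w in words)
        return clutter_score > keep_score

f = ToyClutterFilter()
f.learn("limited time offer buy now", is_clutter=True)
f.learn("meeting agenda for monday", is_clutter=False)
print(f.looks_like_clutter("buy now limited offer"))    # True
print(f.looks_like_clutter("updated agenda attached"))  # False
```

The point is only this: each correction from the user changes the word counts, so the same program gives different (hopefully better) answers over time. That feedback loop, scaled up enormously, is the “learning” in machine learning.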
ANI (with the proper robot bodies) can stack boxes, or go into warehouses to collect deliveries and potentially then go on to make those deliveries; without elaborate physical bodies, it can carry out financial analysis and legal research… you name it. The goal of ANI developers seems to be to create programs that do these specific tasks better than humans, thus freeing humans to do other, more fun or interesting things with their time. (But wait, what happened to work, and earning a living?) Anyway…
I think of ANI as really advanced computational ability, rather than something harder to define like “intelligence”. But apparently, when you take this advanced computational ability and allow computers with it to access other programs and algorithms that can do similar types of computation, these linked programs can make interesting connections and deductions and begin to teach themselves new, more efficient ways of doing the task. Eventually you end up with something very close to “intelligence”. This machine learning is happening now. It is not only improving ANI (for example, perfecting speech patterns so that computers sound and respond more and more like humans), but also leading us (some say quickly, some say slowly) to the next level of AI, called Artificial General Intelligence.
Artificial General Intelligence
AGI does not yet exist. It refers to a “computer” that has learned how to do all of the different things humans can do, from the narrow tasks captured in ANI (tying one’s shoe, recognising a face or an object, advanced calculus) to everything else that you or I can do. As with ANI, the objective of AGI is to create computers that can do everything humans can do, but do it better. This way we could have one computer doing everything, rather than one that plays chess, another that stacks boxes, and a third that does financial analysis.
Finally, the third level of AI, Artificial Super-Intelligence or ASI, occurs when AGI becomes so effective at self-teaching that it surpasses human-level general intelligence in every way. This is the point at which we reach the so-called existential threat to humanity: the point where we either attain the ultimate goal of immortality (by the way, is this a valid goal, and who is deciding this for us?) or write ourselves out of history. However, as explained in some detail in the articles below and referred to in my point about robots above, the benefits and/or threats of achieving ASI also depend on progress in other fields, in particular robotics and nanotechnology (the manipulation of very small matter). This is because once the ability to manipulate atomic particles combines with super-intelligence, we have no idea what might result.
Experts are trying to develop AGI in at least three ways, according to the post by Tim Urban below. Here is how I would summarize them: (i) neural networks: using our understanding of how our brains receive and process information to create computer programs that replicate that process; (ii) replicating something like evolution: developing many different programs/algorithms and letting the most successful ones combine with other successful ones (while the unsuccessful ones die out) until you get the algorithm that best produces the intended outcome; and (iii) computer-led: leaving it up to computers to research and develop on their own, by creating programs that allow the computer to sift through existing programs and to create new ones, thus endlessly improving itself. (Again, a more sophisticated explanation can be found in the bibliography.)
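Of those three routes, the “replicating evolution” one is the easiest to illustrate concretely. The toy sketch below (with made-up parameters, evolving bit-strings toward a fixed target) only gives the flavour of “successful candidates combine, unsuccessful ones die out”; real evolutionary approaches to AI are vastly more elaborate:

```python
import random

random.seed(0)
TARGET = [1] * 20  # the "intended outcome" we want candidates to reach

def fitness(candidate):
    # How many positions match the target?
    return sum(a == b for a, b in zip(candidate, TARGET))

def combine(parent_a, parent_b):
    # Successful candidates "combine": each position comes from one parent,
    # plus one random mutation to keep exploring new variations.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    child[random.randrange(len(child))] = random.randint(0, 1)
    return child

# Start from a completely random population of 30 candidates.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    # The least successful candidates "die out"...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and the survivors breed the next generation.
    population = survivors + [
        combine(random.choice(survivors), random.choice(survivors))
        for _ in range(20)
    ]

best = max(population, key=fitness)
print(fitness(best), "out of", len(TARGET))
```

No individual candidate “knows” the target; selection pressure alone pushes the population toward it over the generations, which is the essence of approach (ii).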
Do any of these get to the essence of being a human?
Intuition and that “Gut-feeling”
One question I have not seen addressed in the AI literature I have scanned (and again, I have only scanned the tip of the iceberg) is how we account for our so-called “second brain”: the impact that our gut, or enteric nervous system, seems to have on our emotional selves. This is a rather new area of neurobiology, so we don’t know the answers. The book Gut: The Inside Story of Our Body’s Most Under-rated Organ, by Giulia Enders, is a good read and provides an overview of this area of study. One thing is for sure: if we do not understand it, we cannot program computers to replicate it. Although I suppose the advocates of reaching ASI would respond that this question, and any other question we could think of, could be answered by the super-intelligent computer… so keep developing.
AI is being researched at a fever pitch. There is a race to achieve AGI, although it isn’t something you or I would necessarily know about, not working in the field. It is being pursued by governments and large commercial entities (we have all seen references to Facebook’s and Google’s AI research; Google owns DeepMind: find out more about it here: Google DeepMind), as well as by start-ups and possibly fringe elements throughout the world.
Experts in the field disagree on how quickly we may reach AGI, but most agree that once we develop AGI, ASI will surely follow, possibly very quickly. Estimates for reaching AGI range from 2045 at the most optimistic end to a median of around 2060, with a very small percentage of experts thinking we will never achieve it. I am personally a bit more cynical, but maybe that is because I am far outside this field.
Reining It In… Issues to Consider Before It’s Too Late
Should any of this worry us now? If so, what can or should we be doing, as individuals and societies, to address it? I am sure you have heard prominent figures like Stephen Hawking and Elon Musk expressing concern about AI. Concern from voices like these certainly makes me sit up and take notice. They and others have called for an ethical framework for the development of AI. Like climate change, however, this requires us to look at a potential, distant threat (where there is no consensus on how real the threat is, and where the science and technology behind it are very hard to understand) and take steps today to attempt to rein it in.
Also like climate science, the science behind AI seems to suggest that there is a tipping point beyond which it is too late to turn back. The jump from “controllable” AGI to potentially out-of-control ASI is predicted to be very fast and unstoppable. This suggests that we should be thinking now about the kinds of safeguards programmers should build in as they write algorithms for AGI.
Azeem Azhar raises some of the many ethical issues for consideration in his most recent Exponential View (EV) discussing Facebook’s AI research:
Facebook is using classic kids’ story books to teach AIs how to read. The list is rather interesting and does have some old favourites, like Lucy Maud Montgomery’s Anne of Green Gables.
… This cross-section of old British and American literature reflects the politics, power dynamics and beliefs of the time. Kipling’s Jungle Book is amongst this canon: a book which has a well-identified and troubling relationship with colonial prejudices and racist undertones. As a parent taking a child through the Jungle Book, we can make the child aware of those dimensions and put it, and Kipling himself, in a wider context. …
It would be comforting to know a cultural anthropologist or a critical theorist was embedded with the team providing some kind of lens into the perspectives embedded in those texts. Has Facebook done this? I don’t know but it’s unsettling. Azeem Azhar, The Exponential View.
In my mind, all of this means we need to be open to some kind of regulation (yikes!). But aside from being a politically charged word, regulating the tech industry is easier said than done. Focusing on the robotics side of this debate, the European Union undertook a study, “Regulating Emerging Robotic Technologies in Europe: Robotics Facing Law and Ethics”, between 2012 and 2014. In the resulting paper, in the section “Regulating Robotics: What Role for Europe?” (page 199), they describe just how difficult it is to define a workable regulatory framework for this industry. (http://www.robolaw.eu/RoboLaw_files/documents/robolaw_d6.2_guidelinesregulatingrobotics_20140922.pdf)
Aside from the difficulty in defining a workable regulatory regime, tech experts and many others will fervently fight attempts to regulate their activities arguing that it will stifle creativity. Moreover, this field is international, requiring supra-national agreement on how to proceed. Any kind of unilateral regulation would surely be opposed. One motivation for writing this blog is to have a platform to raise awareness about these developments that are very difficult to understand, but affect our most fundamental rights. Surely we the people should have some say in how this kind of technology develops.
There are many directions to take these thoughts, yet I promised to keep these posts brief. Therefore, I am going to stop here for today, and proceed in small steps. My next post will focus on ANI taking over more and more human tasks – are we equipped for the fall-out?
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html (A comprehensive and entertaining two-part piece that explains just about everything there is to know about the current state of AI. It took me hours to get through (stop and go), but it was worth it.)
http://www.rollingstone.com/culture/features/inside-the-artificial-intelligence-revolution-a-special-report-pt-1-20160229?page=5 (Also featured in EV (Issue 52), this is a good comprehensive piece similar to the above.)
http://www.alphr.com/science/1002792/artificial-intelligence-ten-things-you-need-to-understand (A quick read that does what it says on the package.)
http://www.theverge.com/2016/2/29/11133682/deep-learning-ai-explained-machine-learning (A more tempered look at the science behind AI.)
Two pieces that have informed my next post, which will focus on the current implications of ever-improving ANI: