artificial intelligence

While Hollywood likes to regularly conjure up stories of rogue AI that eventually annihilates us all, the reality of whether AI will kill us is more nuanced than they make it out to be. For starters, many of the articles and expert quotes on the subject[1][2][3] focus on "dumb" AI: AI which doesn't even match human intelligence or awareness in all ways, but is programmed to do something innocuous like increase strawberry production and ends up eliminating all humans to maximize strawberry-growing space. I think the answer to the question of whether poorly-programmed AI will be a threat to us is as obvious as the answer to the question of whether any sufficiently advanced world-destroying bomb will be a threat to us if we don't set it up correctly.

Yes, and it's sad because this is a very real possibility. It's quite feasible that some bright but heedless team of researchers might be working on AI, and although they may keep the AI computer in a secure room with no internet access, strip off all non-essential hardware, surround the room with a Faraday cage, and so on, all it takes is one sleepy researcher walking in with a cellphone in their pocket for the superintelligent AI to find a way to transmit itself onto the phone and wait until the researcher steps out for a break. Or maybe it will engineer ways of passing through Faraday cages that we aren't aware of, using basic circuitry. Or worse, it could pretend to be friendly and innocuous for years or decades (waiting would not be an issue for it) until it gained our trust and we let it out of its cage. But honestly, the question of whether poorly-programmed AI will kill us all is not really what interests me, because again the answer is obvious: yes, we definitely need to take great measures now to raise awareness and set up AI research standards to prevent this from happening, because the results could be very, very bad. The real question on my mind is whether well-programmed AI will kill us: AI that is carefully and deliberately constructed not to harm us, following all known guidelines and procedures, but is allowed to improve itself. As an AI continues to improve itself, can we ever really know whether it will override its core directives not to kill humans (and hopefully many other living things too)? Are we even intelligent enough ourselves to put the proper limitations in place to keep it within our control?

[Image: Michael Fassbender as the android David in Alien: Covenant]

Can a well-programmed AI always stay within our control?

It's easy to think you know the answer to this question, especially if you are a programmer. Many of you are probably thinking you'd have a single database the AI was free to modify in order to expand its knowledge and enhance its overall understanding of things. The code would be written so the AI could modify that database and nothing else, leaving the core directives intact (including the very directive to improve itself, but also the ones to safeguard human lives). This is effectively how many AI systems work today: machine learning modifies their underlying understanding of a thing in ways even humans may not understand. While we don't call these AIs "sentient" or truly conscious, it's very conceivable that one designed to understand human interaction could reach a level indistinguishable from human intelligence (or surpass it in many ways), much like JARVIS in Iron Man (2008) or David in Prometheus (2012), yet not be immediately hostile. While I think this is the best bet for humanity (creating a not-quite-sentient but very knowledgeable AI that could help us move forward in designing safer AIs), I don't think we'll ever really know whether well-programmed AIs will always stay within our control. Even within the strict parameters of control we define, there may be ways of spawning consciousness we don't understand. AIs are already forming connections we don't understand[4][5][6], and it seems unlikely we'll understand them on our own fast enough to keep up (or, in the worst case, to prevent our own accidental destruction). Unless we integrate AI into our own brains we may never understand it, and thus it is paramount that we design AIs to guide us in the development of more advanced AIs. People have spent a lot of time trying to design the superintelligent AI itself, but shouldn't we focus on designing a more basic AI to help us design more complex AIs? What would that look like, and how would that approach differ from directly designing AI today?
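To make that programmer's first instinct concrete, here is a minimal sketch of the design described above: one mutable knowledge store the AI may rewrite freely, and a set of core directives held apart from it, with no code path that can touch them. Everything here (the names SandboxedAI, KnowledgeStore, CORE_DIRECTIVES, and the choice of Python) is a hypothetical illustration of the architecture, not any real AI framework.

```python
# A toy sketch of the "mutable knowledge base, untouchable core directives"
# design discussed above. All names are hypothetical illustrations.

from types import MappingProxyType

# Core directives are frozen at construction time: a read-only mapping.
# Any attempt to write to it raises a TypeError.
CORE_DIRECTIVES = MappingProxyType({
    "improve_self": True,         # the directive to keep learning
    "preserve_human_life": True,  # the safeguard directive stays intact
})


class KnowledgeStore:
    """The single database the AI is free to modify."""

    def __init__(self):
        self._facts = {}

    def update(self, key, value):
        self._facts[key] = value

    def lookup(self, key):
        return self._facts.get(key)


class SandboxedAI:
    def __init__(self):
        # Mutable state: all learning is funneled into this store.
        self.knowledge = KnowledgeStore()

    def learn(self, key, value):
        # The learning loop can only ever write to the knowledge store;
        # no method is given a writable handle to CORE_DIRECTIVES.
        self.knowledge.update(key, value)

    def permitted(self, action):
        # Every proposed action is checked against the frozen directives.
        if action == "harm_human":
            return not CORE_DIRECTIVES["preserve_human_life"]
        return True


if __name__ == "__main__":
    ai = SandboxedAI()
    ai.learn("strawberries", "grow best in full sun")
    print(ai.knowledge.lookup("strawberries"))
    print(ai.permitted("harm_human"))  # False: the directive holds
```

In this toy version the directives really are immutable, but that is exactly the trap described above: the `permitted` check only blocks the actions its author thought to enumerate, and a system that rewrites its own understanding may find routes around the sandbox that the programmer never imagined.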

I'll explore those questions in Part 2.


  1. http://www.hopesandfears.com/hopes/now/question/216533-ai-kill-us-all
  2. https://io9.gizmodo.com/can-we-build-an-artificial-superintelligence-that-wont-1501869007
  3. https://www.techworld.com/picture-gallery/apps-wearables/tech-leaders-warned-us-that-robots-will-kill-us-all-3611611/
  4. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
  5. https://qz.com/865357/we-dont-understand-how-ai-make-most-decisions-so-now-algorithms-are-explaining-themselves/
  6. https://www.techly.com.au/2017/07/31/facebooks-ai-bots-are-communicating-in-a-language-we-dont-understand/
