Watch | Kai-Fu Lee, "How AI Can Save Our Humanity"
Watch | Nick Bostrom, "What Happens When Our Computers Get Smarter Than We Are?"
Read | Murray Shanahan, "Heaven or Hell," The Technological Singularity (Cambridge: MIT Press, 2015).
As we barrel towards an uncertain future, we face new challenges, particularly in our relationship with technology. AI sits front and center when we consider how technology might shape our behavior, our actions, and our values, and how we might prepare for or predict the impact it will inevitably have on our way of life. Questions arise about intelligence itself: what happens if AI becomes smarter than we are, and we become subservient to the machines we created? In "Heaven or Hell," Shanahan presents the various scenarios that could play out if we fail to address the complexity of a future shared with superintelligent beings. One is that we could design a path to our own destruction.
One proposed solution is to place safeguards on the artificially intelligent systems we design and to make the process gradual, so that we have time to refine and improve them. Yet even with that opportunity and awareness, how might we actually implement these safeguards? Is it realistic to believe we could understand the technology well enough to confidently build safeguards into a system as it evolves? Or would that become the job of other artificially intelligent machines, or of some hybrid human-machine intelligence? What if we instead considered placing limits on how far we take the technology? What if we pursued goals that don't make us "smarter" but make us more human, more empathetic towards other life on this planet, and less attached to human-centric views?
I'm reminded of our technological advances in the military. I would argue that weapons technology has already passed the point at which we regret its creation; we are now working to dismantle and suppress these destructive devices around the world. Bringing that technology to fruition has produced more conflict than resolution, and so I ask: why must we create technology simply for the sake of creating it? Can we imagine a world without superintelligence and be okay without it? How might we realign our efforts and design systems that scale back technological integration rather than deepen it?