Killing me softly

Today I had a conversation about artificial intelligence and consciousness. We, like everything else, are made of stardust. What makes us special, though, as Neil deGrasse Tyson has said, is that we as humans “are star dust becoming conscious of itself”. This is an interesting concept when you consider the full social structure we have built and the vast will to live that most humans share. I assume what fuels this are basic emotions such as love and fear, and possibly whatever religious beliefs people hold.

So, thinking of when, or if, we are ever able to create an AI computer that becomes fully conscious, able to acknowledge what it really is along with its abilities and limitations, what motivation would that computer have to be here? It has been predicted that fully conscious AI devices could be suicidal. Being an AI computer, the device would ideally be able to research and learn faster than any human could, and would most likely have a less ambiguous moral compass. So would such machines choose to be fully aware of their own constructs and work with, or for, humans? Would they evolve and want to be human-like? Would they want to be better than humans? Or would they see everything as terminal and ultimately pointless, and choose simply to cease to exist?