Originally Posted by
Sageous
Though I am enjoying this conversation, and hope it continues, I had a thought this morning that goes back to the OP, when we were still talking about "AI!!!" rather than "Inserting consciousness into machines!!!":
In the light of your OP, and the comments above, here's a hypothetical question for you, Karloky:
What if we did create a machine, perhaps a very powerful computer (or more likely a "cloud" of computers; probably not a robot), that included or ultimately developed a sense of self, but whose sentience was formed in a system programmed with human ideals of right and wrong, compassion, and the Golden Rule? Wouldn't it be possible, then, for an AI to emerge that, yes, is smarter and more powerful than we are, but is also good to us? Why can't a super-intelligent, self-aware computer care about the puny humans that made it, rather than inexplicably hate them or feel a need to erase them?
Would that then be a bad thing?
We're not all evil, Karloky, and, in spite of all the popular sci-fi that begs to differ, our inventions are not required to be evil either. AI could just as easily represent an evolutionary leap of human intelligence in a good direction as a threat to all humanity.
Just a thought.