 Originally Posted by Sageous
Though I am enjoying this conversation, and hope it continues, I had a thought this morning that goes back to the OP, when we were still talking about "AI!!!" rather than "inserting consciousness into machines!!!":
In the light of your OP, and the comments above, here's a hypothetical question for you, Karloky:
What if we did create a machine, perhaps a very powerful computer (or more likely "cloud" of computers; probably not a robot), that included or ultimately developed a sense of self, but that sentience was formed in a computer programmed with human ideals of right and wrong, compassion, and the Golden Rule? Wouldn't it be possible, then, for an AI to emerge that, yes, is smarter and more powerful than we are, but is also good to us? Why can't a super-intelligent self-aware computer care about the puny humans that made it, rather than inexplicably hate them or feel a need to erase them?
Would that then be a bad thing?
We're not all evil, Karloky, and, in spite of all the popular sci-fi that begs to differ, our inventions are not required to be evil either. AI could just as easily represent an evolutionary leap of human intelligence in a good direction as a threat to all humanity.
Just a thought.
I agree with this; my only concern is the feasibility of such a thing in a world where governments decide everything for a complacent population that is fearful of the slightest discomfort.
I have spent much of my life studying early American politics, and I have yet to meet one person in whom these ideas do not awaken fear, or even anger toward me. If I cannot have a peaceful debate about the very idea of limited government, how on earth are we to discuss other, far more difficult topics? The answer is that we don't.
The bottom line, however, is that the technology will eventually be invented, whether it is used for good or ill, and hopefully by then (yeah, right) people will be less afraid of everything under the sun. That doesn't change anything about the topic, of course; I am just explaining why I personally would like to delay such things until we have at least left the stone age of self-knowledge, though I know that is impossible; there is really no debate to be had.

Hypothetically speaking, when a person who knows nothing about themselves and is in constant need of approval from others says they like the idea of human-like machines, I become concerned. Such people don't think things through; it is like demanding something from the government with no thought for where the money comes from. Again, maybe it's just my own *very* limited experience with people (I'm a hermit these days for this very reason), but those folks seem to be far more common than even a simple majority.
I realize that this was not the point of the topic, so on a lighter note: perhaps this technology could be smart enough to help fix some of our troubles, as the internet has done to an extent. But who creates this new tech, and who controls it?