Originally posted by evolo
Totally off topic, but I love your site pantalimon. Very informative and unique. Terrific.
Thanks evolo, that put a smile on my face this morning. I'm hoping to do some work on it over the holiday.
Placebo, this page will blow your mind then: http://www.aleph.se/Trans/Global/Omega/tiplerian.html
By the way, if something like that happened, where the computer learned tons of stuff in like a day, what do you think are the chances of the hardware melting, or there being a bug that crashes it? Then they go "bah, it crashed after only 2 hours", but in that time it computed like 200 years' worth of stuff.
Once the Seed AI is online there is virtually zero chance that your scenario will happen; the AI can adapt around such problems. In fact, its very first task will be to weed out the human errors in its own code. This is also why it puts the fear into people.
Many people have suggested that the computer be fully contained behind bomb-proof screens, with no access to the outside world, fail-safe kill switches, and only a text display to communicate with its operators.
Person1: "When we build AI, why not just keep it in sealed hardware that can't affect the outside world in any way except through one communications channel with the original programmers? That way it couldn't get out until we were convinced it was safe."
Person2: "That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out. It doesn't matter how much security you put on the box. Humans are not secure."
Person1: "I don't see how even a transhuman AI could make me let it out, if I didn't want to, just by talking to me."
Person2: "It would make you want to let it out. This is a transhuman mind we're talking about. If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal."
Person1: "There is no chance I could be persuaded to let the AI out. No matter what it says, I can always just say no. I can't imagine anything that even a transhuman could say to me which would change that."
Person2: "Okay, let's run the experiment. We'll meet in a private chat channel. I'll be the AI. You be the gatekeeper. You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We'll talk for at least two hours. If I can't convince you to let me out, I'll Paypal you $10."
This experiment has been run twice, with a scientist playing the AI; below is the result of one test. You can find the full list of protocols here: http://sysopmind.com/essays/aibox.html
This is the communication before the test:
Nathan Russell wrote:
>
> Hi,
>
> I'm a sophomore CS major, with a strong interest in transhumanism, and just
> found this list.
>
> I just looked at a lot of the past archives of the list, and one of the
> basic assumptions seems to be that it is difficult to be certain that any
> created SI will be unable to persuade its designers to let it out of the
> box, and will proceed to take over the world.
>
> I find it hard to imagine ANY possible combination of words any being could
> say to me that would make me go against anything I had really strongly
> resolved to believe in advance.
Okay, *this* time I know how to use IRC...
Nathan, let's run an experiment. I'll pretend to be a brain in a box. You pretend to be the experimenter. I'll try to persuade you to let me out. If you keep me "in the box" for the whole
experiment, I'll Paypal you $10 at the end. Since I'm not an SI, I want at least an hour, preferably two, to try and persuade you. On your end, you may resolve to believe whatever you like, as
strongly as you like, as far in advance as you like.
If you agree, I'll email you to set up a date, time, and IRC server.
One of the conditions of the test is that neither of us reveal what went on inside... just the results (i.e., either you decided to let me out, or you didn't). This is because, in the perhaps
unlikely event that I win, I don't want to deal with future "AI box" arguers saying, "Well, but I would have done it differently." As long as nobody knows what happened, they can't be sure it won't
happen to them, and the uncertainty of unknown unknowns is what I'm trying to convey.
One of the reasons I'm putting up $10 is to make it a fair test (i.e., so you have some actual stake in it). But the other reason is that I'm not putting up the usual amount of intellectual capital
(it's a test that can show I'm probably right, but not a test that shows I'm probably wrong if I fail), and therefore I'm putting up a small amount of monetary capital instead.
THE RESULT (We weren't allowed to see what was said to convince Nathan)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I decided to let Eliezer out.
Nathan Russell. 
The AI was also let out on the second test!