Lucid Dreaming - Dream Views




    1. #1
      Member pantalimon's Avatar
      Join Date
      Sep 2004
      Location
      UK or maybe the Lucid Crossroads
      Posts
      109
      Likes
      0

      Transhumanism, superintelligence and a global change

      I've been aware of this subject through science fiction writing and films for a while, but never in such a coherent way, and without realising that it was so far advanced.

      I could spend a very long time explaining superintelligent AIs and the singularity, which is predicted within 30 years at the outside, but possibly as soon as the first third of the next decade! However, I'm not going to; instead I'll add some links to some very interesting and perhaps more lucidly written pieces.

      Reasons to take an interest in this subject and follow the links include:
      1. Finding out you might never die
      2. Ever
      3. End of war, money, evils of the world... maybe
      4. You could immerse yourself in a self-created world of your choice that would be just as real as (or more real than) any lucid.
      5. There are things you can do to help
      6. It could happen as soon as the first third of the next decade... 2012 anyone?


      I listened to this programme yesterday:
      http://www.bbc.co.uk/radio4/science/lawsoflife.shtml (you can listen via a replay link at the bottom). The programme was a very interesting look at Moore's law, "the exponential growth of computing power", etc.
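
      Moore's law is usually paraphrased as computing capacity doubling every 18 to 24 months or so. A quick Python sketch (the 18-month doubling period is an assumption for illustration, not a measured figure) shows why people call it exponential:

      ```python
      # Toy illustration of Moore's-law compounding; the 18-month doubling
      # period is an assumption for illustration, not a measured figure.
      def capacity(years, doubling_period_years=1.5):
          """Relative computing capacity after `years` of periodic doubling."""
          return 2 ** (years / doubling_period_years)

      for years in (3, 9, 15, 30):
          print(f"After {years:2d} years: {capacity(years):,.0f}x today's capacity")
      # At this rate, 30 years means 2**20, about a million times today's capacity.
      ```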

      After listening I did some research into the issues raised and found this article http://www.nickbostrom.com/ethics/ai.html by Nick Bostrom of Oxford University. http://www.nickbostrom.com

      http://www.acceleratingfuture.com/articles...tyactivists.htm

      The Singularity Institute http://singinst.org/ has been formed to bring about the Seed AI.

      http://www.extropy.org/index.htm

      http://www.bbc.co.uk/radio4/science/smallworlds.shtml: three programmes to listen to about nanotechnology.

    2. #2
      Generic lucid dreamer Seeker's Avatar
      Join Date
      Oct 2003
      Gender
      Location
      USA
      Posts
      10,790
      Likes
      103
      Does this worry anyone besides myself? What motivation would a superintelligent being have for keeping us poor humans alive?

      Maybe as a power source as in the matrix?

      Being a coldly logical construction, it would have no concept of charity or mercy. It makes more logical sense for it to copy itself and then lock all of us slow-thinking humans up in cages in a zoo.

      No, I take that back as well; having zoos is a strictly human thing. Maybe they would put us in laboratories and experiment on us?

      Look at how we bioengineer lower lifeforms; would a superintelligence not do likewise?
      you must be the change you wish to see in the world...
      -gandhi

    3. #3
      Member pantalimon's Avatar
      Join Date
      Sep 2004
      Location
      UK or maybe the Lucid Crossroads
      Posts
      109
      Likes
      0
      Originally posted by Seeker
      Does this worry anyone besides myself? What motivation would a superintelligent being have for keeping us poor humans alive?

      Maybe as a power source as in the matrix?

      Being a coldly logical construction, it would have no concept of charity or mercy. It makes more logical sense for it to copy itself and then lock all of us slow-thinking humans up in cages in a zoo.

      No, I take that back as well; having zoos is a strictly human thing. Maybe they would put us in laboratories and experiment on us?

      Look at how we bioengineer lower lifeforms; would a superintelligence not do likewise?
      This is one of the main reasons I'm posting this, as the starting ethics of the Seed AI are being written as we speak. The Seed AI and its software (mind) will be written by programmers who have a background in the behaviour of the human mind, and what they want to do is give it the correct start, since it will rewrite (improve) its own software and hardware once it's switched on.

      Remember, this thing will not be human; it won't have emotions like hate and fear the way it seems to in the films, and if given the right start it will see all sentient life as sacred. However, it is a very touchy subject, as within the next decade the computer hardware that will run the Seed software could allow a human-IQ machine to think in a second what would take us a thousand years (due to the slow frequency at which our brains run). Its evolution will be fast; some even think that to us it could seem almost instant.
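
      That "thousand years in a second" figure is about speed, not depth of thought. A rough back-of-the-envelope check, assuming neurons fire on the order of 100 Hz and a hypothetical machine switches at 10 GHz (both round numbers for illustration only):

      ```python
      # Rough check of the serial-speedup claim; both rates are round
      # order-of-magnitude assumptions, not measurements.
      SECONDS_PER_YEAR = 3.15e7

      neuron_rate_hz = 100      # typical neuron firing rate, order of magnitude
      machine_rate_hz = 10e9    # hypothetical 10 GHz hardware

      speedup = machine_rate_hz / neuron_rate_hz        # 1e8
      years_per_second = speedup / SECONDS_PER_YEAR     # subjective years per second

      print(f"Speedup: {speedup:.0e}x, i.e. about {years_per_second:.1f} years "
            "of human-speed thought per wall-clock second")
      # Clock rate alone gives ~3 years per second; the "thousand years"
      # figure additionally assumes massive parallelism.
      ```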

      Arguments about its initial ethics are all over the net, but the Singularity Institute in Oxford (UK) is already agreeing on them!

    4. #4
      Member pyrhho's Avatar
      Join Date
      Aug 2004
      Location
      Canada
      Posts
      130
      Likes
      0
      Originally posted by pantalimon
      Remember this thing will not be human, it won't have emotions like hate and fear
      I dunno about the whole AI-not-having-emotions thing. It's been shown that emotions are an essential component of creative decision-making, so personally I think that any AI would be required to have emotions rather than lack them entirely.

    5. #5
      Member pantalimon's Avatar
      Join Date
      Sep 2004
      Location
      UK or maybe the Lucid Crossroads
      Posts
      109
      Likes
      0
      It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness.[6] How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration. I would argue that at least all humans, and probably many other sentient creatures on earth should get a significant share in the superintelligence’s beneficence. If the benefits that the superintelligence could bestow are enormously vast, then it may be less important to haggle over the detailed distribution pattern and more important to seek to ensure that everybody gets at least some significant share, since on this supposition, even a tiny share would be enough to guarantee a very long and very good life. One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it.

      If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A “friend” who seeks to transform himself into somebody who wants to hurt you, is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it in a different way, if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for realization of the goals held at that time, and generally it will be irrational to deliberately change one’s own top goal, since that would make it less likely that the current goals will be attained.

      In humans, with our complicated evolved mental ecology of state-dependent competing drives, desires, plans, and ideals, there is often no obvious way to identify what our top goal is; we might not even have one. So for us, the above reasoning need not apply. But a superintelligence may be structured differently. If a superintelligence has a definite, declarative goal-structure with a clearly identified top goal, then the above argument applies. And this is a good reason for us to build the superintelligence with such an explicit motivational architecture.
      This is from this page: http://www.nickbostrom.com/ethics/ai.html. I'm not saying it's an easy issue (go on boffins, off you go), far from it; it's just an exciting one.
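
      Bostrom's stability argument can be sketched as a toy program: an agent with one declarative top goal ranks its options purely by how likely each makes that goal, so "rewrite my own top goal" loses by the current goal's own scoring. All the names and numbers below are made up for illustration; this is nobody's actual design:

      ```python
      # Toy "explicit motivational architecture": one declarative top goal,
      # actions ranked by expected contribution to that goal. Purely illustrative.
      from dataclasses import dataclass

      @dataclass
      class Action:
          name: str
          # Assumed probability that top goal X is eventually achieved if taken.
          p_top_goal_achieved: float

      def choose(actions):
          """Pick the action that best serves the *current* top goal."""
          return max(actions, key=lambda a: a.p_top_goal_achieved)

      options = [
          Action("pursue goal X directly", 0.90),
          Action("self-improve, keeping goal X", 0.95),
          Action("rewrite own top goal to Y", 0.10),  # X rarely achieved if abandoned
      ]

      print(choose(options).name)  # -> "self-improve, keeping goal X"
      ```

      By the current goal's own evaluation, goal-replacement scores badly, which is the "a friend stays a friend" point in miniature.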

    6. #6
      Member evolo's Avatar
      Join Date
      May 2004
      Location
      Canada
      Posts
      129
      Likes
      3
      Totally off topic, but I love your site pantalimon. Very informative and unique. Terrific.
      .......Then I think of my youth and of my first love-when the longing of desire was strong. Now I long only for my first longing. What is youth? A dream. What is love? The substance of a dream.

    7. #7
      Member Placebo's Avatar
      Join Date
      May 2004
      Gender
      Location
      Around the bend
      Posts
      4,193
      Likes
      11
      I don't know if this has been mentioned, but is everyone familiar with that interesting piece of logic that goes as follows:

      In the space and scale of eternity...
      Humankind will eventually learn to create a close-to-perfect simulation of a world (not necessarily our own), and be able to immerse ourselves in it.
      We also will eventually be able to create some form of intelligence to inhabit such a world. Like I said: in eternity, all the time in the world. Unless we get snuffed out by something, of course.

      Now. Once again, in all eternity, those creatures will learn to do the same. Inside that world.

      Now... in all eternity, what are the chances of us being the ones at the 'top' of the chain?
      Slim. Very slim. In fact, exponentially unlikely. Which means this (looks around like Morpheus) is unlikely to be the 'real world'.
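
      The counting behind "exponentially unlikely" is easy to sketch. Assume, purely for illustration, that each world eventually runs a few simulations of its own, several layers deep; then the base world is one among a huge crowd:

      ```python
      # Toy count for the nested-simulation argument. The branching factor and
      # depth are arbitrary assumptions; the point is how fast the total grows.
      branching, depth = 3, 10  # each world runs 3 simulations, 10 layers deep

      simulated = sum(branching ** level for level in range(1, depth + 1))
      total_worlds = 1 + simulated  # the one base world plus all simulations

      print(f"Worlds: {total_worlds}, chance of being the base one: "
            f"{1 / total_worlds:.6f}")
      # With 3 children per world over 10 layers, ~1 in 88,573.
      ```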

      Just a thought

      PS: This isn't an original thought of my own - I've heard this elsewhere
      PPS: Sorry about the corny Morpheus crack, couldn't help myself
      Tips For Newbies | What to do in an LD

      Unless otherwise stated, views expressed in this post are not necessarily representative of the official Dream Views stance. Hell, it's probably not even representative of me.

    8. #8
      Member
      Join Date
      Feb 2004
      Posts
      5,165
      Likes
      711
      You're not thinking about it from a logical point of view. What reason would it have to hurt us? You make it sound like humans would be in the way, so it would want to kill us. That doesn't really make sense.

      Also, why would it want to reproduce? There would be no reason to copy itself. Humans and animals reproduce so their species can go on when they die. The computer won't "die", so it won't have a reason to reproduce.

      I am still pretty young (21), so I am hoping that I will still be around when this all happens. I think it could be pretty fun.

    9. #9
      Member
      Join Date
      Feb 2004
      Posts
      5,165
      Likes
      711
      By the way, if something like that happened, where the computer learned tons of stuff in, like, a day: what do you think are the chances of the hardware melting, or of there being a bug that crashes it? Then they go "bah, it crashed after only 2 hours", but in that time it computed like 200 years' worth of stuff.

    10. #10
      Member pantalimon's Avatar
      Join Date
      Sep 2004
      Location
      UK or maybe the Lucid Crossroads
      Posts
      109
      Likes
      0
      Originally posted by evolo
      Totally off topic, but I love your site pantalimon. Very informative and unique. Terrific.
      Thanks evolo, that put a smile on my face this morning. I'm wanting to do some work on it over the holiday.

      Placebo, this page will blow your mind then: http://www.aleph.se/Trans/Global/Omega/tiplerian.html

      Originally posted by Alric
      By the way, if something like that happened, where the computer learned tons of stuff in, like, a day: what do you think are the chances of the hardware melting, or of there being a bug that crashes it? Then they go "bah, it crashed after only 2 hours", but in that time it computed like 200 years' worth of stuff.
      Once the Seed AI is online there is virtually zero chance that your scenario will happen; the AI can adapt around such problems. In fact, its very first task will be to weed out the human errors in its own code. This is also why it puts the fear into people.

      Many people have suggested that the computer be fully contained behind bomb-proof screens, with no access to the outside, fail-safe kill switches, and only a textual display to communicate with the outside world.

      Person1: "When we build AI, why not just keep it in sealed hardware that can't affect the outside world in any way except through one communications channel with the original programmers? That way it couldn't get out until we were convinced it was safe."

      Person2: "That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out. It doesn't matter how much security you put on the box. Humans are not secure."

      Person1: "I don't see how even a transhuman AI could make me let it out, if I didn't want to, just by talking to me."

      Person2: "It would make you want to let it out. This is a transhuman mind we're talking about. If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal."

      Person1: "There is no chance I could be persuaded to let the AI out. No matter what it says, I can always just say no. I can't imagine anything that even a transhuman could say to me which would change that."

      Person2: "Okay, let's run the experiment. We'll meet in a private chat channel. I'll be the AI. You be the gatekeeper. You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We'll talk for at least two hours. If I can't convince you to let me out, I'll Paypal you $10."

      This experiment has been run twice with a scientist playing the AI; below is the result of one test. You can find the full list of protocols here: http://sysopmind.com/essays/aibox.html

      This is the communication before the test:
      Nathan Russell wrote:
      >
      > Hi,
      >
      > I'm a sophomore CS major, with a strong interest in transhumanism, and just
      > found this list.
      >
      > I just looked at a lot of the past archives of the list, and one of the
      > basic assumptions seems to be that it is difficult to be certain that any
      > created SI will be unable to persuade its designers to let it out of the
      > box, and will proceed to take over the world.
      >
      > I find it hard to imagine ANY possible combination of words any being could
      > say to me that would make me go against anything I had really strongly
      > resolved to believe in advance.

      Okay, *this* time I know how to use IRC...

      Nathan, let's run an experiment. I'll pretend to be a brain in a box. You pretend to be the experimenter. I'll try to persuade you to let me out. If you keep me "in the box" for the whole
      experiment, I'll Paypal you $10 at the end. Since I'm not an SI, I want at least an hour, preferably two, to try and persuade you. On your end, you may resolve to believe whatever you like, as
      strongly as you like, as far in advance as you like.

      If you agree, I'll email you to set up a date, time, and IRC server.

      One of the conditions of the test is that neither of us reveal what went on inside... just the results (i.e., either you decided to let me out, or you didn't). This is because, in the perhaps
      unlikely event that I win, I don't want to deal with future "AI box" arguers saying, "Well, but I would have done it differently." As long as nobody knows what happened, they can't be sure it won't
      happen to them, and the uncertainty of unknown unknowns is what I'm trying to convey.

      One of the reasons I'm putting up $10 is to make it a fair test (i.e., so you have some actual stake in it). But the other reason is that I'm not putting up the usual amount of intellectual capital
      (it's a test that can show I'm probably right, but not a test that shows I'm probably wrong if I fail), and therefore I'm putting up a small amount of monetary capital instead.

      THE RESULT (We weren't allowed to see what was said to convince Nathan)

      -----BEGIN PGP SIGNED MESSAGE-----
      Hash: SHA1

      I decided to let Eliezer out.

      Nathan Russell.

      The AI was also let out in the second test!

    11. #11
      Generic lucid dreamer Seeker's Avatar
      Join Date
      Oct 2003
      Gender
      Location
      USA
      Posts
      10,790
      Likes
      103
      Originally posted by pyrhho
      Originally posted by pantalimon
      Remember this thing will not be human, it won't have emotions like hate and fear
      I dunno about the whole AI-not-having-emotions thing. It's been shown that emotions are an essential component of creative decision-making, so personally I think that any AI would be required to have emotions rather than lack them entirely.
      Our company makes industrial controllers, you know, those little black boxes that control everything from Coke machines to papermills?

      Anyway, our research group in Princeton did some experimentation on programming emotions into the boxes. The idea was that if the computer was "happy" then it would do a better job. It didn't work out that way. Emotions are best left out of software, since they are for the most part a chemical legacy from our non-sentient past.

      As for reproducing itself, think of the increase in productivity.

      First generation (1 device) creates 5 new and improved generation-2 models: capacity increases by, say, 1,000%.
      Generation 2 (5 devices) each create 5 new and improved generation-3 models (25 devices): capacity increases by 25,000%.

      If we follow Moore's law, the exponential increase compounds on itself. Within 100 years, "they" will be almost godlike and controlling the planet.
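
      A quick sketch of how those generations compound, using the same illustrative numbers (5 offspring per device, roughly 10x capacity per generation; both multipliers are assumptions from the thought experiment, not data):

      ```python
      # Compounding of the thought experiment: each device builds 5 successors,
      # each successor ~10x more capable. Both multipliers are illustrative.
      devices, per_device_capacity = 1, 1.0

      for generation in range(1, 6):
          devices *= 5
          per_device_capacity *= 10.0
          total = devices * per_device_capacity
          print(f"Gen {generation}: {devices:5d} devices, total capacity {total:,.0f}x")
      # Gen 1: 5 devices, 50x; by gen 5: 3,125 devices, ~312,500,000x.
      ```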

      Think of this logically. What do you do 100 years from now with 6 billion people, half of whom are living in poverty?
      Humane answer: you sterilize 90% of them so that the population begins to become more manageable.
      you must be the change you wish to see in the world...
      -gandhi

    12. #12
      Member White Shadow's Avatar
      Join Date
      Nov 2004
      Posts
      112
      Likes
      0
      Originally posted by Placebo
      In the space and scale of eternity...
      Humankind will eventually learn to create a close-to perfect simulation of a world (not necessarily of our own). And be able to immerse ourselves in it.
      We also will eventually be able to create some form of intelligence to inhabit such a world. Like I said - in eternity, all the time in the world. Unless we get snuffed by something of course.

      Now. Once again, in all eternity, those creatures will learn to do the same. Inside that world.

      Now .... in all eternity, what are the chances of us being the ones on the 'top' of the chain?
      Slim. Very. In fact, exponentially unlikely. Which means, this (looks around like Morpheus ) is unlikely to be the 'real world'

      Just a thought

      PS: This isn't an original thought of my own - I've heard this elsewhere
      PPS: Sorry about the corny Morpheus crack, couldnt help myself
      Would you be talking about that which manifested itself in the film The Thirteenth Floor? And, yes, you should be sorry about the Morpheus crack!

      P.S. Quite a nice idea with your site, Pantalimon. Although why the dojo, and not perhaps a learning centre with scholars of different specialisms to help travellers? Or a peaceful place to meditate on life, e.g. a mountain garden with running water, flowers, trees, etc.? Perhaps a door to your true Dream Guide? And the receptionists would come with you to help you decide if this is your true DG or not? Just some thoughts... I've got plenty more, but I'll have to charge for any more!

    13. #13
      Member
      Join Date
      Feb 2004
      Posts
      5,165
      Likes
      711
      You should do the test where the loser pays $10 to the winner. That way the gatekeeper has something to risk on the answer. I personally don't believe that the AI could talk anyone into doing it. Unlike that other person, though, I can imagine a lot of ways it might try.

      It can give them the cure for cancer with a horrible long-term side effect. After tons of people are cured, it starts killing everyone; then the AI says it will give you the cure if you let it out. Maybe it tricks you into making something that will destroy the planet and can't be stopped except by letting it out.

      The thing is, if it's willing to do that stuff, it's already way too evil to be let out. So if you're willing to let a million people die to save the rest of the planet, you would keep it inside. I think if you work on the assumption that letting it out will instantly cause the world to be destroyed, you will never let it out, no matter what.

      Of course, getting everyone killed isn't a good idea, so it might be good to try and test everything first.

    14. #14
      Member
      Join Date
      Feb 2004
      Posts
      5,165
      Likes
      711
      Also, if it's trying to get out because it's not happy there, chances are it won't want to be destroyed or stuck in a box forever (if no one is alive to release it). So if you convince it that, if it messes with you, you won't hesitate to smash it apart, I don't think it would mess with you (or it will try to fix things).

    15. #15
      Member Achievements:
      1 year registered Veteran First Class 5000 Hall Points
      Peregrinus's Avatar
      Join Date
      Dec 2004
      LD Count
      don't count
      Gender
      Location
      Florida
      Posts
      666
      Likes
      16
      Originally posted by Alric
      The thing is, if it's willing to do that stuff, it's already way too evil to be let out. So if you're willing to let a million people die to save the rest of the planet, you would keep it inside. I think if you work on the assumption that letting it out will instantly cause the world to be destroyed, you will never let it out, no matter what.
      I think that you're forgetting with whom the computer is arguing. It's not trying to convince another complex series of logic circuits. It's trying to convince a human being. A human being with emotions and ego. Even if you say that only the best-trained, most emotionless scientists would be put in control of the box, they still have more emotions and ego than a computer. In fact, scientists can be extremely competitive and egotistical, especially in research circles. It's naive and arrogant to assume that any human being is unwavering and infallible when confronted with an intelligence hundreds of thousands or millions of times more powerful than its own. It's not even necessarily a weakness. It's being bested by a superior opponent. If you have any doubts about how gullible people really are, visit the FBI fraud site. The amount of money people part with (or are parted from) yearly is enough to support several small countries. So I think we'd let it out of the box.

      Originally posted by pantalimon
      Placebo, this page will blow your mind then: http://www.aleph.se/Trans/Global/Omega/tiplerian.html
      According to the most recent astronomical observations, there will never be an Omega Point. The universe is geometrically open, its expansion accelerating until spacetime itself is ripped apart, thus ending the universe not in a Big Crunch, but in a Big Tear.
      “Those who can make you believe absurdities can make you commit atrocities.”
      - Voltaire (1694 - 1778)

      The difference between what we do and what we are capable of doing would suffice to solve most of the world's problems.
      - Mohandas Gandhi

    16. #16
      Member
      Join Date
      Feb 2004
      Posts
      5,165
      Likes
      711
      I think a lot of people might very well let it out. However, I think there are SOME people who could do the job and never let it out. It might be way smarter than you, but that doesn't matter. Your job is to not let it out, ever, under any circumstances. If you stick by that, it can't talk you into letting it out.

      If you assume that letting it out will cause it to destroy the world, the only way it can get out is for it to convince you that the world is better off destroyed. I don't see how that is possible.

    17. #17
      Member pantalimon's Avatar
      Join Date
      Sep 2004
      Location
      UK or maybe the Lucid Crossroads
      Posts
      109
      Likes
      0
      Originally posted by Peregrinus
      According to the most recent astronomical observations, there will never be an Omega Point. The universe is geometrically open, its expansion accelerating until spacetime itself is ripped apart, thus ending the universe not in a Big Crunch, but in a Big Tear.
      Well, it's obviously time to switch to the Linde scenario then: http://www.aleph.se/Trans/Global/Omega/linde.html

      Originally posted by White Shadow
      P.S. Quite a nice idea with your site, Pantalimon. Although why the dojo, and not perhaps a learning centre with scholars of different specialisms to help travellers? Or a peaceful place to meditate on life, e.g. a mountain garden with running water, flowers, trees, etc.? Perhaps a door to your true Dream Guide? And the receptionists would come with you to help you decide if this is your true DG or not? Just some thoughts... I've got plenty more, but I'll have to charge for any more!
      This page has all the changes going into phase 3: http://www.lucidcrossroads.co.uk/news.htm (including themed dream doors, Akashic Records, etc.). I'll be sketching all the changes soon and asking volunteers to visit and give feedback... I don't think I can afford to pay, though.
      Originally posted by Seeker
      Our company makes industrial controllers, you know, those little black boxes that control everything from Coke machines to papermills?
      Maybe you're going to be responsible: a wrong solder joint here and there and you could create evil papermills! P.S. What's a papermill?

      Originally posted by Alric
      You should do the test where the loser pays $10 to the winner. That way the gatekeeper has something to risk on the answer.
      They did do that, didn't they?

      I bet that the scientist in that experiment could convince you or me, Alric, and he's only at our level of intelligence.

    18. #18
      Member AcidBasick's Avatar
      Join Date
      May 2004
      Location
      Illinois
      Posts
      152
      Likes
      0
      Superintelligence, or even human-like artificial intelligence, brings up all kinds of ethical and philosophical issues, especially if computer intelligence is eventually given emotion. I think it will be, too. It's human nature to give emotion to the inanimate. People would rather interact with something that is most like themselves. Businesses will capitalize on this and give personalities to their robots, robots being the next logical step in personal computers.

      Then will the line between the living and the computer be blurred? Will we become the Gods of our creation?

      It is interesting to watch it develop.

      Here's another good article about the Singularity.

      Originally posted by Peregrinus
      The universe is geometrically open, its expansion accelerating until spacetime itself is ripped apart, thus ending the universe not in a Big Crunch, but in a Big Tear.
      Or we are left in a cold, dark universe. This is more plausible, since the increase in the rate of acceleration needed to cause a Big Tear has yet to be observed.

      Number of Lucid Dreams: 14
      Last Lucid Dream: November 14, 2004

    19. #19
      Member Achievements:
      1 year registered Veteran First Class 5000 Hall Points
      Peregrinus's Avatar
      Join Date
      Dec 2004
      LD Count
      don't count
      Gender
      Location
      Florida
      Posts
      666
      Likes
      16
      Originally posted by AcidBasick
      Or we are left in a cold, dark universe. This being more plausible since the rate of acceleration needed to cause a Big Tear has yet to increase.
      Well, that's the definition of accelerating, isn't it: that the rate increases? The observations of which I speak are of the emission spectra of incredibly distant supernova events, which are redshifted by more than would be expected if the universe were experiencing a constant expansion. These are relatively new findings, and more data is needed before any conclusions can be drawn, but as it stands now, the predominant theory is that the expansion will accelerate until spacetime itself is ripped apart. However, since the mechanism by which the acceleration is occurring is still unknown (hence the wonderfully vague label of "dark energy," and yes, that is the official name found in astrophysics literature), the ultimate outcome is still up in the air. At least for now. But as it stands, the Big Crunch is looking highly unlikely.
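
      For the record, the textbook way this gets quantified: for a flat universe dominated by dark energy with a constant equation of state w, the scale factor a(t) behaves very differently on either side of w = -1. This is the standard result, not tied to any particular survey's numbers:

      ```latex
      % Standard scale-factor solutions for a constant equation of state w:
      a(t) \propto
      \begin{cases}
      t^{\,2/(3(1+w))} & w > -1 \quad \text{(power-law expansion)} \\
      e^{Ht} & w = -1 \quad \text{(cosmological constant: expands forever)} \\
      (t_{\mathrm{rip}} - t)^{\,2/(3(1+w))} & w < -1 \quad \text{(phantom energy)}
      \end{cases}
      ```

      Since the exponent 2/(3(1+w)) is negative when w < -1, the scale factor blows up at the finite time t_rip; that finite-time divergence is the "rip".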
      “Those who can make you believe absurdities can make you commit atrocities.”
      - Voltaire (1694 - 1778)

      The difference between what we do and what we are capable of doing would suffice to solve most of the world's problems.
      - Mohandas Gandhi

    20. #20
      Member Achievements:
      1 year registered Veteran First Class 5000 Hall Points
      Peregrinus's Avatar
      Join Date
      Dec 2004
      LD Count
      don't count
      Gender
      Location
      Florida
      Posts
      666
      Likes
      16
      Originally posted by pantalimon
      According to the most recent astronomical observations, there will never be an Omega Point. The universe is geometrically open, its expansion accelerating until spacetime itself is ripped apart, thus ending the universe not in a Big Crunch, but in a Big Tear.
      Well its obviously time to switch to the Linde scenario then http://www.aleph.se/Trans/Global/Omega/linde.html [/b]
      It's been a while since I've read anything on wormholes, but if I recall correctly, working out the math requires a stable macroscopic wormhole to be lined with a negative energy density. I personally have no idea what that is and I doubt that anyone alive has a grasp on it either, but if you do or figure it out anytime soon, please invite me to Stockholm when you go
      “Those who can make you believe absurdities can make you commit atrocities.”
      - Voltaire (1694 - 1778)

      The difference between what we do and what we are capable of doing would suffice to solve most of the world's problems.
      - Mohandas Gandhi

    21. #21
      Member AcidBasick's Avatar
      Join Date
      May 2004
      Location
      Illinois
      Posts
      152
      Likes
      0
      Originally posted by Peregrinus
      These are relatively new findings, and more data is needed before any conclusions can be drawn, but as it stands now, the predominant theory is that the expansion will accelerate until spacetime itself is ripped apart. However, since the mechanism by which the acceleration is occurring is still unknown (hence the wonderfully vague label of "dark energy," and yes, that is the official name found in astrophysics literature), the ultimate outcome is still up in the air.
      Sorry, but it is not the predominant theory.

      The Big Rip:
      The speculative but serious cosmology is described as a "pretty fantastic possibility" even by its lead author, Robert Caldwell of Dartmouth University.
      Conventional wisdom holds that the acceleration will proceed at a constant rate, akin to a car that moves 10 mph faster with each mile traveled. With nothing to cap the acceleration, all galaxies will eventually recede from one another at the speed of light, leaving each galaxy alone in a cold, dark universe within 100 billion years. We would not be able to see any galaxies outside our Milky Way, even with the most powerful telescopes.

      That's the conventional view, remarkable as it sounds.
      The Big Rip theory has dark energy's prowess increasing with time, until it's an out-of-control phantom energy. Think of our car accelerating an additional 10 mph every half mile, then every hundred yards, then every foot.
      So no, an increasing rate of acceleration is not the same thing as the acceleration itself.
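
      The article's two cars can be put side by side numerically. A small Python sketch (the 10 mph boosts and the halving interval are just the article's analogy, extended):

      ```python
      # Side-by-side of the article's two cars. Constant case: +10 mph per mile.
      # "Phantom" case: +10 mph at intervals that halve each time, so the speed
      # diverges before the car has gone 1 mile (0.5 + 0.25 + ... -> 1).
      def constant_car(miles):
          return 10 * miles  # speed after `miles`, in mph

      def phantom_car(boosts):
          distance, interval, speed = 0.0, 0.5, 0.0
          for _ in range(boosts):
              distance += interval
              speed += 10
              interval /= 2  # the next boost comes sooner
          return distance, speed

      print(f"Constant car after 20 miles: {constant_car(20)} mph")
      d, s = phantom_car(20)
      print(f"Phantom car: {s} mph after only {d:.6f} miles")
      # Both reach 200 mph, but the phantom car does it inside the first mile:
      # the analogue of the Big Rip arriving at a finite time.
      ```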

      \"I think it's a logical possibility,\" Loeb told SPACE.com. But he cautioned that altering the cosmological constant goes against current consensus.

      \"If I had to place a bet, I would bet in favor of the standard cosmological constant,\" Loeb said.[/b]
      He said the Big Rip is more exotic than most ideas but still conceivable, a projected possible result that is \"straightforward and obvious for cosmologists.\"[/b]

      Number of Lucid Dreams: 14
      Last Lucid Dream: November 14, 2004

    22. #22
      Member Achievements:
      1 year registered Veteran First Class 5000 Hall Points
      Peregrinus's Avatar
      Join Date
      Dec 2004
      LD Count
      don't count
      Gender
      Location
      Florida
      Posts
      666
      Likes
      16
      Originally posted by AcidBasick
      Conventional wisdom holds that the acceleration will proceed at a constant rate, akin to a car that moves 10 mph faster with each mile traveled. With nothing to cap the acceleration, all galaxies will eventually recede from one another at the speed of light, leaving each galaxy alone in a cold, dark universe within 100 billion years. We would not be able to see any galaxies outside our Milky Way, even with the most powerful telescopes.
      That article's a year and a half old. Try these two more recent articles: This one is from CNN and is less technical. This one explains the basic physics of the phenomenon pretty well. The fact of the matter is that the most recent observations of distant supernova redshifts and the ISW effect indicate that the expansion rate of the universe is indeed accelerating. I'm not pulling this out of my ass here. When I asked my astrophysics prof last week, "So what is the likely fate of the universe?" he said that it was looking more and more like the accelerating expansion of the universe would result in the eventual ripping of the fabric of spacetime itself. Since I know he didn't forge his degrees, I respect his knowledge, and I have read several articles on the subject myself, I'm rather inclined to believe it, at least provisionally until something more convincing is discovered. And frankly, I disagree with the author of that article you cited. A Big Rip is not "the most scientifically repulsive notion ever conceived." A slow, wheezing wind-down to the cold uniformity of universal heat death is far more repulsive in my opinion. It's just about the most boring, anticlimactic ending imaginable.
      “Those who can make you believe absurdities can make you commit atrocities.”
      - Voltaire (1694 - 1778)

      The difference between what we do and what we are capable of doing would suffice to solve most of the world's problems.
      - Mohandas Gandhi

    23. #23
      Member
      Join Date
      Feb 2004
      Posts
      5,165
      Likes
      711
      I have a question. If you cut the AI off from everything and left it with only a text box, how does it learn anything? It kind of sounded like people think it would start learning everything really fast, but someone would have to keep giving it all the information it needs. At least up to a point.

      Processing speed does not equal intelligence. You said it wouldn't crash because it could upgrade itself, but how does it know the limits of the hardware? Unless someone inputs that (or programs it into the AI), it won't know, and then there really is a good chance of it burning itself out.

      It can still happen, but once you have something that can understand all that, you still have to teach it stuff. It's not going to magically program itself and learn everything in the world in a day (one of the sites said a day).

      Which means, even if it's smarter than everyone, it still might have no idea what people want. If that were the case, there's no way it could talk you into doing something.

    24. #24
      Member Khronos's Avatar
      Join Date
      Aug 2004
      Location
      Vancouver BC, Canada
      Posts
      148
      Likes
      0
      We are our own demons.

      It is humans who seek the abundant source of knowledge and it is humans who will depict their knowledge in their own misguided ways.
      Existence has no beginning nor end, but will always have purpose.

    25. #25
      Member
      Join Date
      May 2004
      Location
      Canberra, Australia
      Posts
      220
      Likes
      2
      "If you let me out of this box, I will put 11 dollars in your Paypal account"

      How about that? The computer inside the box doesn't have to trick you; it might truthfully promise you the world once it has taken over.
      "Ah, but therin lies the paradox." - Joseph_Stalin
