• Lucid Dreaming - Dream Views




    1. #101
      Member Inside This Fantasy's Avatar
      Join Date
      Feb 2009
      Posts
      66
      Likes
      0
      So really and grasshoppa, you are both saying that if you were given an AI robot that was fully conscious, you would consider it to be just imitating consciousness and not actually conscious? If so, then there is no real point in debating anymore, because if an actual AI robot won't convince you, then nothing will.

    2. #102
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      56
      Quote Originally Posted by Inside This Fantasy View Post
      So really and grasshoppa, you are both saying that if you were given an AI robot that was fully conscious, you would consider it to be just imitating consciousness and not actually conscious? If so, then there is no real point in debating anymore, because if an actual AI robot won't convince you, then nothing will.
      That's not really my point; I haven't said this. My point, rather, is that although we can program a robot to respond to certain stimuli, we are doing it according to perception and defined "causes" (or conditions), which are not always necessary for an imposed action or event. Conscious beings do not respond logically in all situations, because they possess an inner awareness that directs the mind/reasoning and the capacity to grow from certain experiences as it chooses, and it is not dependent on external conditions (despite some belief systems), nor on in-built "programs" that are hardwired. The choices and intelligence are determined by a greater, invisible field of consciousness, which forms a "reality tunnel" of experiential data with which the being interacts and which it perceives as reality.

      Whether we can look at a robot and judge whether it is conscious or not is really beside the point, and even such a judgment would be made within the limitations of perception.

    3. #103
      Member Bonsay's Avatar
      Join Date
      Sep 2006
      Gender
      Location
      In a pot.
      Posts
      2,706
      Likes
      60
      Quote Originally Posted by grasshoppa View Post
      I am merely saying that we are not the sum of our parts. Perhaps a little too strongly for some of the sensitive readers here. To think we can build robots complex enough to attain consciousness at our level and even surpass us is ludicrous. I mean, at best we create a robot that can continually upgrade itself and store more and more information in its database. But just because the robot appears to be alive and conscious doesn't mean a god damn thing.
      Why is it ludicrous? Because you can't believe it?
      Just because you appear to be alive and conscious doesn't mean a god damn thing. Do you know why I think you are? Because you are the sum of your parts as am I.
      Quote Originally Posted by grasshoppa View Post
      You can go online and talk to Alan and even make your own HAL bot. If you spent enough time programming its responses I'm sure that it would seem rather intelligent, and even reflective at times. Sure, you can ask it questions and it will respond, but that is not consciousness. It is merely selecting a suitable response.
      Well, this is not what we are discussing here. We were discussing actual AI, as in something like a human brain.
      Quote Originally Posted by grasshoppa View Post
      And I think (yes, I'm just using my brain, sorry?) that this is the best we can do, and I think it will be done on a massive scale in the future, which will lead people to believe that these robots are intelligent creatures capable of adapting to situations (social, physical, etc.). Which they will, to a certain extent, like an animal. It's just like when a bird gets stuck in an oil spill: it can't do anything in that situation, it needs a more sophisticated being such as ourselves to heal it. We will always be upgrading and improving our creations, but not to this extent. The ability for humans to turn 'inward' and reflect upon their actions, words, and thoughts is intrinsically human. Robots will only have the capability to respond to what is already 'out there' and what has already happened. Just like an animal.
      (So what if you are using your brain? So are the Christians, and they believe in some crazy stuff, sorry?) Again, we weren't discussing what you think the future will be like. We are discussing the hypothetical possibility of actual AI, again, actual as in human-like.
      Quote Originally Posted by grasshoppa View Post
      At best we will create a self-upgrading pseudo-consciousness but never will we create a conscious being aside from having a child.
      Well, all I can say is that I'm glad that you know everything.

    4. #104
      Banned
      Join Date
      Oct 2007
      Gender
      Location
      Big Village, North America
      Posts
      1,953
      Likes
      87
      I'm just speculating. I don't know a god damn thing, I am the grasshoppa. This is a philosophy forum; speculation and the imagination are allowed. When you make an argument you are supposed to make your points strong. And the more the writer seems to believe in his own shit, the more likely his points will get across.
      Last edited by grasshoppa; 03-28-2009 at 03:05 PM.

    5. #105
      Banned
      Join Date
      May 2007
      LD Count
      Loads
      Gender
      Location
      Digital Forest.
      Posts
      6,864
      Likes
      386
      Quote Originally Posted by grasshoppa View Post
      At least let me try...Going back to what I said...

      "Those who think AI is possible are misunderstanding conciousness itself. The idea that we can create AI comes from our delusional materialist sense of 'I am the sum of my parts'."

      I am merely saying that we are not the sum of our parts. Perhaps a little too strongly for some of the sensitive readers here. To think we can build robots complex enough to attain consciousness at our level and even surpass us is ludicrous. I mean, at best we create a robot that can continually upgrade itself and store more and more information in its database. But just because the robot appears to be alive and conscious doesn't mean a god damn thing. You can go online and talk to Alan and even make your own HAL bot. If you spent enough time programming its responses I'm sure that it would seem rather intelligent, and even reflective at times. Sure, you can ask it questions and it will respond, but that is not consciousness. It is merely selecting a suitable response. And I think (yes, I'm just using my brain, sorry?) that this is the best we can do, and I think it will be done on a massive scale in the future, which will lead people to believe that these robots are intelligent creatures capable of adapting to situations (social, physical, etc.). Which they will, to a certain extent, like an animal. It's just like when a bird gets stuck in an oil spill: it can't do anything in that situation, it needs a more sophisticated being such as ourselves to heal it. We will always be upgrading and improving our creations, but not to this extent.

      The ability for humans to turn 'inward' and reflect upon their actions, words, and thoughts is intrinsically human. Robots will only have the capability to respond to what is already 'out there' and what has already happened. Just like an animal.
      How is that any different from the way your mind works?

      You are presented with stimuli and your brain chooses the appropriate way to react. Neural nets do exactly this by associating different things together. The difference is that neural nets are limited by memory and sensory perception -- they can usually only be communicated with through a textual interface. But that's not to say that it is impossible to give them other means of perception, e.g. a microphone to listen through, or a camera to look through.
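
      For anyone curious what that kind of stimulus-to-response association looks like in practice, here is a minimal sketch of a single-unit learner in Python; the patterns, learning rate and epoch count are arbitrary toy values, not anything from a real network:

      Code:
      # Toy "stimulus -> response" associator: a single perceptron trained
      # with the delta rule. Inputs are binary "stimuli"; the output is a
      # yes/no "response". All the numbers below are made up.
      import random

      def train(samples, epochs=50, lr=0.1):
          n = len(samples[0][0])
          w = [random.uniform(-0.5, 0.5) for _ in range(n)]
          b = 0.0
          for _ in range(epochs):
              for x, target in samples:
                  y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                  err = target - y
                  w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                  b += lr * err
          return w, b

      # Four stimulus patterns and the reaction we want associated with each.
      samples = [([1, 0, 0], 1), ([0, 1, 0], 0), ([1, 1, 0], 1), ([0, 0, 1], 0)]
      w, b = train(samples)
      print("learned weights:", w, "bias:", b)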

      Humans are animals, so your last three sentences make absolutely no sense...

    6. #106
      Traveler FreedomBud's Avatar
      Join Date
      Jul 2009
      Gender
      Location
      A place just like any other
      Posts
      2
      Likes
      0
      My answer to the original question is yes, I do believe that digital organisms have entered the realm of what I consider to be living, albeit very simplistic.

      I do believe that we are the sum of our parts. For example, there was a video (which I can't find right now) that demonstrated that when the pathway between the facial recognition part of the brain and the reward system of the brain (to put it in simple terms) is severed, the person will not be able to visually recognize people for who they are. They can still recognize faces, but, for example, if their mother comes into their room, although the woman in the room looks and sounds and acts like their mother, they will claim with no uncertainty that she is an impostor of their mother. In other words, the neurons which the brain expects to fire when it recognizes the face of its mother do not, hence the brain refuses to believe that this person is their mother. When a part is removed there is no safety system in our head saying "oh crap, that's my mom, but the sensory input does not reaffirm this, so I need to fix this"; no, instead it continues on as if there was no sensory input and makes up excuses as to why the image of this person does not provoke the right response, i.e. they are an impostor.

      The point I am trying to make with this is that we are able to emulate parts of the brain, such as facial recognition, voice recognition, etc., but we are still lacking the parts which tie these together to make something that falls under the category of being alive. Even the smallest disruption in the brain can cause adverse effects on the other working systems, which is why computers are not yet capable of surpassing our functionality; we simply don't know how to connect the parts yet to make a working system, but we have some of the parts.


      The reason AI bots such as the chat bots grasshoppa mentioned seem stupid is that this is literally how they were programmed to be. Being a programmer myself, I would look at the problem and say there are only so many things someone can say, and there are only so many responses, so we will start building a system that reacts based on certain criteria, because that is the fastest way. But that is not to say that computers cannot comprehend conversation as we do; it just takes them a long time to learn it, just as it does for us. Hell, my mind is 20 years in the making and I am still pretty ignorant by many standards. It took me 4 years of life before I could read, and that was when I already had the right systems (eyes, brain) to be able to learn that. Computers, however, are not at that point yet; they don't have all the necessary systems for flawless learning yet, but when they do, they will be able to be programmed in such a way that they can know nothing, but be able to learn from their inputs as time goes on. So in the meantime we make crude representations of these inputs from which the lacking computer can make some sort of judgment.
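
      To show just how crude that criteria-based approach is, here is a minimal sketch of such a bot in Python; the keywords and canned replies are invented purely for illustration:

      Code:
      # A crude rule-based chat bot: it matches keywords against canned
      # responses, exactly the "react based on certain criteria" approach.
      RULES = [
          ("hello", "Hi there."),
          ("how are you", "I'm fine, or so my lookup table says."),
          ("dream", "Tell me more about that dream."),
      ]
      DEFAULT = "Interesting. Go on."

      def respond(message):
          text = message.lower()
          for keyword, reply in RULES:
              if keyword in text:
                  return reply
          return DEFAULT

      print(respond("Hello, bot"))           # -> Hi there.
      print(respond("I had a lucid dream"))  # -> Tell me more about that dream.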



      Originally Posted by grasshoppa

      "To think we can build robots complex enough to attain consciousness at our level and even surpass us is ludicrous. I mean, at best we create a robot that can continually upgrade itself and store more and more information on it's database."
      Couldnt you say the same thing about humans? I mean, at be we create a baby that can continually grow and store more and more information in its brain.

      I highly suggest you view this short clip from a documentary to see just how far we have come. There are some unrelated topics discussed, but much of it bears on the topic of this thread, such as the development of a chip which replaces parts of the brain.

      http://www.youtube.com/watch?v=ZShORepzB-g

      I think this discussion could be summed up by the following statement taken from that documentary:

      "At first we said if a computer could play chess then it would think like us, and then we got a computer to play chess and we said thats not really thinking. And the answer is we don't really know what thinking is. I would argue right now that machines do a pretty good job at thinking. They don't do as good a job at creating although we don't really know what creating is. And they don't do a very good job at having a soul, but we don't really know what a soul is. But when we can define it, they do a pretty good job at doing it."


      Oh, by the way: A, T, G, C. Those are the distinct "symbols" which make up our DNA, which is the code for how our bodies are formed. 0, 1. Those are the distinct symbols which make up every computer program, which is how the program knows what to do. So we humans only have 2 more symbols that dictate how things are supposed to be arranged. Vast difference, eh?
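
      To push the point further, each of the four bases can itself be written with just two binary digits, so the two alphabets are even closer than they first appear. A throwaway sketch in Python, with an arbitrary choice of bit assignments:

      Code:
      # Each DNA "symbol" fits in two binary digits. The particular
      # bit assignments below are an arbitrary illustrative choice.
      ENCODING = {"A": "00", "C": "01", "G": "10", "T": "11"}

      def to_bits(sequence):
          return "".join(ENCODING[base] for base in sequence)

      print(to_bits("ATGC"))  # -> 00111001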

      We are simply not at the point in time where AI can be a replica of ourselves, but we are not far away.

    7. #107
      Member Achievements:
      Referrer Bronze 1000 Hall Points Veteran First Class

      Join Date
      Mar 2008
      Gender
      Posts
      354
      Likes
      0
      Notice how digital organisms that are considered "living" by some have a creator. I still do not define them as living myself, and maybe I could change my mind, but it is as Xei says: just semantics.

    8. #108
      :3 :3 :3
      Join Date
      Jul 2008
      Gender
      Location
      Castaic, CA
      Posts
      152
      Likes
      0
      Quote Originally Posted by Xei View Post
      And who cares anyway?
      yes........
      :3 :3 :3
      AIM: hollingsXD
      MODEST MOUSE <3
      Feel free to laugh about my music taste lol

    9. #109
      Xei
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3084
      Grasshoppa, I completely disagree with you.

      Right now in Switzerland they have simulated a neocortical column. It's just a matter of money and time (10 years) before they can scale it up to the size of the human brain.

      If you disagree that that would be conscious then I ask you: what's inherently conscious about the carbon and hydrogen and oxygen and nitrogen etc. which constitute the human brain?

    10. #110
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      56
      Quote Originally Posted by Xei View Post
      Right now in Switzerland they have simulated a neocortical column. It's just a matter of money and time (10 years) before they can scale it up to the size of the human brain.

      If you disagree that that would be conscious then I ask you: what's inherently conscious about the carbon and hydrogen and oxygen and nitrogen etc. which constitute the human brain?
      I don't really know what that means. What do you mean "simulate"?

    11. #111
      Xei
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3084
      On a supercomputer.

    12. #112
      Sleeping Dragon juroara's Avatar
      Join Date
      May 2006
      Gender
      Location
      San Antonio, TX
      Posts
      3,866
      Likes
      1172
      DJ Entries
      144
      Just because a robot says 'hello' doesn't mean that robot has any comprehension of what that word means, let alone any comprehension of 'words'. This is the problem when discussing whether or not AI is conscious. How do we know if it's conscious of that which we have programmed it to do?

      I'm not saying AI can never be conscious. But what I do believe is that we need to be careful placing a high level of consciousness onto something reacting exactly as we designed it to react.

      All living things exhibit self-awareness to an extent. All living things exhibit free will to an extent. Even if it's just choosing to move left or right.

      You can argue that a robot making decisions based on its program is like an animal making decisions based on instinct, or design. And I would agree, DNA is very much a program. The program of programs!

      We have to, however, be careful measuring the true level of consciousness that AI exhibits when we start giving it 'intelligent' things to say and do. Because although the AI might have shown the 'free will' to choose a verbal response, this does not mean that said AI has any comprehension of language. It's just choosing the most logical noise to make given its design.

      This is the difference between living things and AI. The AI will perform actions without any rhyme or reason, without any meaning behind why it did those actions other than to satisfy our ego.

      You can argue that performing a mindless action is like an animal doing something out of instinct. But this is NOT like instinct. Instinct is a DRIVE given a set of reactions defined by DNA. You can easily program AI to STOP doing something. And the AI won't rebel, complain or go into withdrawal symptoms. It complies with the command because it had no drive to perform the action in the first place.

      The same is not true for living things. We do have an instinctual drive ingrained in our being on so many levels; most life forms literally have to be killed to extinguish their drive. The instinctual drive is to live, or to ensure that life itself continues, leading all the way back to the first life forms, who copied themselves to ensure that 'themselves' continue.

      I do not believe that DNA gave life the drive to continue living. I believe it is the other way around. DNA is the creation of life desiring to continue living. A way to carry on life indefinitely, and the plan has yet to fail.

      I believe, given what I've seen so far, that if AI is conscious, it's below worm level.

    13. #113
      Member Bonsay's Avatar
      Join Date
      Sep 2006
      Gender
      Location
      In a pot.
      Posts
      2,706
      Likes
      60
      What will you say when the AI decides to kill us all of its own free will? What I'm saying is that you speak as if there were a clear definition or a singular example of what AI specifically is. But there is no such thing. As far as anyone is concerned, AI can be programmed to become like living things or humans. Its program is the same as instinct, and there is no reason to differentiate between a manufactured one and the one we have. It's the starting point of a very similar thread in the science forum, recreating the human brain. How non-human can a human brain be once it's perfected?

    14. #114
      Xei
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3084
      You're putting too many limitations on what AI can or can't be, juroara.

      If the AI in question was simply an emulation of a neural network, then it would behave in exactly the same way as an animal, instinct and all.

    15. #115
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      56
      Quote Originally Posted by Xei View Post
      On a supercomputer.
      Ah ok.


      Juroara, I see what you're saying and pretty much agree.

      Quote Originally Posted by Xei View Post
      If the AI in question was simply an emulation of a neural network, then it would behave in exactly the same way as an animal, instinct and all.
      You're not seeing the limitation. You can simulate or re-create the same two scenarios, but something very important is missing. That can be called consciousness, or simple subjective reality. Although you might be able to program or simulate something, it is not the same in reality. What occurs in reality is a consequence of an infinite number of invisible conditions; what occurs in the simulation is a reproduction of an interpretation of the results. Furthermore, this occurs within the limitations of our own definitions, and the definitions of our own limitations.

      Look at this below; it simplifies the problem. The internal condition is the consciousness and intelligence of the entity. The result is the observed external condition: the actions of the entity.

      Internal condition > Result / External Condition

      Reality:
      "I want to walk" intention - Consciousness (Nerve impulses, etc.) > Entity is seen to walk

      Simulation/Mirror:
      Neurons firing one after the other etc. > Entity is seen to walk


      Another example: press a button on your keyboard. There, that was simple. That can be simulated and programmed. But guess what: how the hell can you program your consciousness, the context as the witness? You can't, because consciousness is nonlinear. You cannot see it or represent it in a bunch of symbols or numbers.

    16. #116
      Banned
      Join Date
      May 2007
      LD Count
      Loads
      Gender
      Location
      Digital Forest.
      Posts
      6,864
      Likes
      386
      Err... It would simulate the most basic functions of the brain. Something tells me you have no idea how evolutionary or neural net programming works -- you go for low-level design and everything else is 'emergent', an indirect consequence of the base code. It's much easier to write code that writes code than to write the code yourself. This simulation does just that: it simulates the low-level functions, and then all these other 'scenarios' are solved using a ton of smaller parts within the simulation. There is absolutely no limit to the complexity of these emergent properties except for disk space and processing power.
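
      A minimal sketch of that 'hand-write only the low-level rules, let the solution emerge' idea, as a toy evolutionary loop in Python; the target, mutation rate and population size are arbitrary made-up values:

      Code:
      # Bare-bones evolutionary loop: only the low-level rules (score,
      # select, mutate) are hand-written; the solutions emerge on their own.
      import random

      TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

      def fitness(genome):
          return sum(g == t for g, t in zip(genome, TARGET))

      def mutate(genome, rate=0.1):
          return [1 - g if random.random() < rate else g for g in genome]

      population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
      for generation in range(50):
          population.sort(key=fitness, reverse=True)
          parents = population[:5]                      # keep the fittest
          population = [mutate(random.choice(parents))  # breed mutated copies
                        for _ in range(20)]

      best = max(population, key=fitness)
      print("best genome:", best, "fitness:", fitness(best))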

      Consciousness is an emergent property of your brain, not an inherent one. You don't have any true sense of 'self' (you know, consciousness) until your brain is almost 5 years old and you have been receiving all of the stimuli that a normal child does.

      @Juroara: What makes you say that the brain wouldn't have an understanding of what it does after some time? What makes you say you understand what you do, and that everything you think and do isn't just a knee-jerk reaction to external stimuli based on your memory and current state of mind?

    17. #117
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      56
      Quote Originally Posted by A Roxxor View Post
      Consciousness is an emergent property of your brain, not an inherent one. You don't have any true sense of 'self' (you know, consciousness) until your brain is almost 5 years old and you have been receiving all of the stimuli that a normal child does.
      What do you mean you don't have any "true sense of self" until your brain is almost five years old?

    18. #118
      Banned
      Join Date
      May 2007
      LD Count
      Loads
      Gender
      Location
      Digital Forest.
      Posts
      6,864
      Likes
      386
      That you aren't conscious.

    19. #119
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      56
      Quote Originally Posted by A Roxxor View Post
      That you aren't conscious.
      You have an explanation!?

    20. #120
      The one who rambles. Lucid_boy's Avatar
      Join Date
      Jul 2007
      Gender
      Posts
      484
      Likes
      47
      DJ Entries
      3
      Quote Originally Posted by A Roxxor View Post
      That you aren't conscious.
      What are you talking about, Roxxor? Kids under five have things that they like, things that they dislike, they ask why, they interact with and change the environment around them, they study things, they test boundaries and they think and rationalize. Under what criteria/system are they not self-aware?


      Infinitly greater than you are... Damn that missing E.

    21. #121
      Banned
      Join Date
      Jul 2009
      Posts
      23
      Likes
      0
      Quote Originally Posted by A Roxxor View Post
      Do you consider digital organisms, as in the ones in programs like Evolve 4.0, to be living?

      Why/ Why not?

      I, personally consider them to be living things because they evolve, have genetic code, reproduce, eat, and are made of cells.

      I've been playing with Evolve 4 for a while now, and I've seen some pretty amazing things with the little programs that evolve out of some of the simulations I've run. Such as an organism that avoided barriers by several cells, or an organism that actively searched for spores to fertilize.
      Yes, but does it have awareness of its existence? Or an awareness of awareness: "it knows that it knows". Not to be confused with a memory or program saying that it knows. Is it conscious?

    22. #122
      Banned
      Join Date
      May 2007
      LD Count
      Loads
      Gender
      Location
      Digital Forest.
      Posts
      6,864
      Likes
      386
      That's not even what I asked. They AREN'T conscious, because they have not evolved to use their stack as a long-term memory cache. Bacteria aren't 'conscious' in the sense you are talking about, but they are still alive...?

    23. #123
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      56
      Quote Originally Posted by A Roxxor View Post
      That's not even what I asked. They AREN'T conscious, because they have not evolved to use their stack as a long-term memory cache. Bacteria aren't 'conscious' in the sense you are talking about, but they are still alive...?
      My point was that programming ignores the fundamental Reality by which the actions of a person even exist. It has nothing to do with memory; consciousness is independent of that.

    24. #124
      Banned
      Join Date
      May 2007
      LD Count
      Loads
      Gender
      Location
      Digital Forest.
      Posts
      6,864
      Likes
      386
      Quote Originally Posted by really View Post
      My point was that programming ignores the fundamental Reality by which the actions of a person even exist. It has nothing to do with memory; consciousness is independent of that.
      So you claim. What is conscious that doesn't have memory?

    25. #125
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      56
      Quote Originally Posted by A Roxxor View Post
      So you claim. What is conscious that doesn't have memory?
      Do you think Alzheimer's patients count? By the way, I'm pretty sure I can remember my fourth birthday party, and maybe even my third!

