Thank you. Now tell me how computers do work, please.
Computers handle one instruction at a time. The more powerful the computer, the more instructions it can handle each second. Simulating a human brain is simply a matter of building a powerful enough computer and writing the right software. Raw computer processing power doubles every two years according to Moore's Law, and so far that has proven correct. It is only a matter of time before we can simulate the human brain, and after that, it's only a matter of time before a computer can do the brain's work quicker than we can.
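Just to make that doubling concrete, here's a toy Python sketch; the starting figure and time span are made up, purely for illustration:

```python
# Toy illustration of Moore's Law: processing power doubling every two years.
# "initial_power" is in arbitrary units; none of these numbers are real benchmarks.
def moores_law(initial_power, years, doubling_period=2):
    """Projected power after `years`, doubling every `doubling_period` years."""
    return initial_power * 2 ** (years / doubling_period)

print(moores_law(1, 20))  # one unit of power grows 1024x over twenty years
```

So if the trend held, a 20-year wait buys roughly a thousandfold speed-up; whether that's enough to simulate a brain is, of course, the open question.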
If people are computers, then the brain is the CPU, RAM, and HDD all in one; the motherboard is the musculoskeletal system; the heart, lungs, and digestive system are the power supply. Skin is the heat sink...
Where the hell did robots come into play? I do believe the discussion is about artificial intelligence. Robotics today is imperfect, but it is under constant development and improving all the time. Given this constant progression, how long until we have metallic humanoids walking around? A couple of decades, maybe, if that? And as for the "crashing" thing, as stated, humans have limits, too. Artificial intelligence is more than just a simple computer program that crashes or locks up when it runs out of processing power. We as humans subconsciously limit our sensory intake all the time to prevent such crashes. Doubtful? Are you aware of every little thing going on in your world at this very second? No. You have limits, too.
I disagree. I say we can create consciousness. To date, there is not a single iota of evidence to suggest that consciousness is anything but a system of neurons within the brain, and it is only a matter of time before we understand this process well enough to duplicate it electronically. As for the "playing god" argument, what god? We've been playing god for years through genetic engineering, and now synthetic life. Screw god; if we can make a functional AI unit, we might as well. And if you think that consciousness comes from some place outside the mind, take a mallet to your noggin and let us know what life is like as a vegetable.
We have mapped brain activity and have been able to identify regions that activate when conscious thought is stimulated. We have a few loose ideas of where consciousness may come from. Simply throwing our hands in the air and saying "well, we don't really know, so God did it and we can't ever do it" is silly. As far back as we can remember, we have had conscious thought, but what about even further back? Is it possible that consciousness developed in the womb or even during the early years of life? Is this all that different from a computer programmer developing an artificial intelligence unit? As the circuits come together, is it not feasible that a conscious entity may be born: one based on silicon and electrons? Does it so blow your noodle that we as humans may be capable of something you perceive as impossible? It wasn't that long ago that most humans thought flying impossible. And then the Wright Brothers came along. Humans do the impossible all the time, and simply because the knowledge and technology do not exist at this moment is no reason to say that it shall never be done or absolutely cannot be done.
Quote:
I can tell you how this is more absurd:
One is a system that scientists can easily identify with, prove, measure, emulate and simulate. This system is of the world and universe as we know it, and what is part of this includes the human body, the earth, even to beyond the galaxies, to name a few things. I will stress that, although this system has almost an infinite number of components, it is still subject to the error in human interpretation.
The other is an unprovable, diffuse, non-linear, unpredictable "system" that precedes everything we can speak of. That system is consciousness. I cannot show you what it is, but without it, nothing exists whatsoever.
The main difference between a conscious system in silicon and electrons, and one in water, fat, protein, salt and sugar, is that the former system supposedly needs consciousness to be "embedded". In the latter system - the actual case of living human beings - no consciousness was "embedded" to begin with; it is intrinsic. It has always been here since we were born, as far as we can remember, and it's beyond our comprehension why; the implication is that it is beyond all cause.
I think we can handle many more ‘instructions’ at a time.
Quote:
Computers handle one instruction at a time.
Yeah, but ever since Moore's Law was discovered, computer makers have been using it as a goal. At some point the goal may not be met for a while, maybe. But we don't know if that will happen before or after AI, or ever.
Quote:
Raw computer processing power doubles every two years according to Moore's Law, and so far that has proven correct.
Seven, to be precise. Technically, humans have an average of 7 active thought centers. Running in the background, so to speak, is everything else needed to keep us alive, like breathing. Additionally, our brains are always reviewing data, content, and input, decoding senses, and so on. Now, I'm not sure about this, but I think that multi-core processors can handle multiple actions at once. Granted, the computer as a whole goes one command at a time, but the cores themselves can work independently on two different tasks. I think. Someone please correct me if I'm wrong. Alternately, if computers were to become fast enough, the fact that they could only handle one process at a time wouldn't really matter, as they could possibly handle millions of actions every fraction of a second.
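For what it's worth, the "independent workers" idea is easy to sketch in Python. This is a hypothetical illustration, not how brains or CPUs literally work, and note that CPython threads share one interpreter lock, so it models concurrency rather than a true parallel speed-up:

```python
from concurrent.futures import ThreadPoolExecutor

def thought_center(n):
    # Stand-in for one independent background task, like decoding a sense.
    return sum(i * i for i in range(n))

def run_centers(count=7, work=1000):
    # Seven workers, mirroring the "7 active thought centers" figure above.
    with ThreadPoolExecutor(max_workers=count) as pool:
        return list(pool.map(thought_center, [work] * count))

print(run_centers())
```

Each worker runs its own task while the pool as a whole coordinates them, which is roughly the "one computer, many cores" picture described above.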
Despite all the challenges and obstacles, Moore's law has held true. The size issue was defeated with multi-core processors. Heating was countered with liquid cooling systems. Data storage is simply phenomenal. At some point in the future, we may be able to develop a computer that is twice the speed, but we lack the means to run it, as doing so creates such massive amounts of heat that the computer melts. Conventional processors are eventually projected to reach temperatures equivalent to the surface of the sun, so perhaps an entirely new system will have to be developed. Scientists are doing some pretty cool stuff with light-based computers and absolute zero. Computers that run at near the speed of light would certainly be interesting, to say the least. I dunno...food for thought.
Quote:
Yeah, but ever since Moore's Law was discovered, computer makers have been using it as a goal. At some point the goal may not be met for a while, maybe. But we don't know if that will happen before or after AI, or ever.
That sounds pretty cool! But won't that make computers less like us?
Quote:
Scientists are doing some pretty cool stuff with light-based computers and absolute zero. Computers that run at near the speed of light would certainly be interesting, to say the least. I dunno...food for thought.
I think that if computers get good enough, they have no excuse to crash! Or I unplug them!
Ok well hopefully you understand what I mean then, because that's what I was referring to. How does this address my points? Basically what I'm saying is that, without the encompassing nature of consciousness, there wouldn't be any significant physical processes to begin with.
@ Mario92 and Bonsay
I can see that you two are pretty much giving me the same argument overall: there's no reason to believe consciousness cannot be created, and it's only a matter of time before it is.
The fact that parts of consciousness are identifiable in regions of the brain, and describable in ideas, emotions and thought-forms, is still not of the same caliber as knowing what consciousness is. Creating consciousness is not the same as creating, describing or manipulating the content of consciousness. I'm talking about the capacity - the context out of which all of this arises.
If consciousness could be so created and "embedded", we would be claiming the ability to create life from scratch: sentience and awareness.
I would then want to ask: What is it that makes something "dead"? Can a dead corpse be revived by playing with its neurons? Does a computer game have consciousness because everything in it mirrors human behavior? And if consciousness is caused, then what is the purpose of it at all?
It is my understanding that consciousness is a priori to all such ideas. There may be parallels between the brain and consciousness, but this is not to be mistaken as the whole story. If anything in life could be beyond causes and conditions, it would be consciousness.
Huh? I asked you to explain your point. Somehow this ended up with me explaining what I meant by embedded and you saying 'well then'; clearly I know what embedded was supposed to mean because I had also used the term. I was asking you to explain what 'silicon consciousness requiring consciousness to be embedded' means.
Quote:
Ok well hopefully you understand what I mean then, because that's what I was referring to. How does this address my points? Basically what I'm saying is that, without the encompassing nature of consciousness, there wouldn't be any significant physical processes to begin with.
Are you saying that, because conscious beings build the computer, this somehow renders the situation different? It certainly sounds to me that there is a large explanatory gap there. Also you'd have to explain how humans are any different; you realise humans require conscious beings to come into existence also? They're called 'parents'?
I understand this, I do. But at the same time it strikes me as some sort of solipsism, which is totally unpragmatic for a person who decides to try and entertain all aspects of experienced reality to form some world view. For example, if there is an objective reality which you decide to take as fact, then you cannot continue to take consciousness as the source which is beyond causes. That would make it illogical, some sort of religious mysticism taken on faith. So you either go full-blown consciousness-centred philosophy, or you go with science. If we go with science, that's where the tools, logic and computers are, which we can then use to explain reality. So you see, there really isn't any controversy regarding the possibility of AI on this level exactly. The problem only exists where it emerges, that is, on the border between internal and perceived external reality.
If you decide to center your view on how you subjectively experience your existence, then you can only be sure of your own consciousness.
If you decide to center your view scientifically, then you can only work with everything in existence, which must include the possibility of synthetic consciousness.
Mashing those two views together creates an impossible world full of paradoxes. It is the way most religious people see reality (wispy souls attached to matter, determinism and no determinism at the same time, etc.), and that's why you can never even hint at any kind of "deeper" philosophy with them, because of course the common-sense "logic" of their beliefs breaks down.
If you're seeking the truth, it's up to you to decide what you're going to focus on. It's possible that either view has explanations for the other one. Personally, I'm inclined to see objective reality as the basis; what that really means is up to science.
We created synthetic life from scratch. The entire DNA molecule, using only four bottles of chemicals. Why the hell shouldn't we eventually be able to create sentience from scratch? With synthetic life, we will ultimately be able to create human genomes from scratch, and give rise to sentient humans. So, why can't we create electronic sentience? Living microchips, so to speak.
A computer game is inanimate, carrying out a strictly defined program by sending electrons on a specific path. Sentience is not inanimate...it learns, grows, evolves, adapts, makes progress, and whatnot. If I had a working brain in a robot, I'd consider it to be alive. If I had an AI unit on my computer that I would converse with, I'd consider it alive. As for the purpose and cause of consciousness, I hold that it is caused by the brain itself, and the purpose is a question that may very well never be answered satisfactorily. Who knows? Is it some great cosmic mistake? Do we have a purpose? Until shown otherwise, my answer is that we don't have some grand purpose that we should be working toward or meddling with. If we do, then great. But if we do, then how should we know what we should be working toward? Perhaps artificial intelligence is part of this great plan? I vote we march onward.
Quote:
I would then want to ask: What is it that makes something "dead"? Can a dead corpse be revived by playing with its neurons? Does a computer game have consciousness because everything in it mirrors human behavior? And if consciousness is caused, then what is the purpose of it at all?
Could be. Then again, I could be some lonely ice cream cone in an alternate dimension bestowed with intelligence (as all ice cream cones are in this dimension), and dreaming that I'm a human. Without any evidence of this, though, why should I accept it? The same can be applied to any sort of outside source of consciousness. I continue to hold that it is 100% internal, and until shown otherwise, shall not waver. With the vast complexity of the brain and just how little we know of it, the idea that it is 100% internal really isn't that farfetched an idea. What IS a bit strange is the idea that there are a bunch of wispy souls we can't observe (and never have) floating into us meatbags and controlling our thoughts. Right, because THAT idea makes so much more sense. That also leaves a hell of a lot of unanswered questions. Where do the souls come from? What happens to them? If they're eternal, why isn't the Milky Way chock full of the little bastards by now? By the principle of Occam's Razor, the idea of an outer source naturally carries far more variables than the internal source, and thus is more implausible. Not necessarily impossible...but then again, neither is my dreaming ice cream cone self.
Quote:
It is my understanding that consciousness is a priori to all such ideas. There may be parallels between the brain and consciousness, but this is not to be mistaken as the whole story. If anything in life could be beyond causes and conditions, it would be consciousness.
This has been a fun read. But it got me to thinking...
If they ever program computers to be flexible and creative, that could spawn the greatest and most unpredictable game AI ever! But I would feel really bad playing killing games, then.
I guess what I am saying is, if computers become conscious, would it be immoral to delete those programs?
Anyway, you can ignore that if you think it is de-railing. Just wanted to throw in what I was thinking.
Quote:
Jean-Luc Picard
Remember that we, too, are a machine of sorts. Just chemical in nature instead of electronic.
Computers aren't alive and they won't be until they complete all of MRS GREN.
Movement (by choice)
Respiration (ie breathing)
Sensitivity (knowing the outside world beyond keyboards)
Growth (getting bigger on its own)
Reproduction (baby computers)
Excretion (getting rid of waste)
N
… I've forgotten what N is
But basically, a conscious computer isn't necessarily an alive computer.
As for consciousness, it's part of the brain. But we would require much more advanced technology to create consciousness. We'd have to recreate and correctly structure neurons. Recreate the brain, almost.
Saying that we're computers is obviously a generalization, but I can't disagree with it. no-Name the intelligent android, comin' through.
I've got this growing tomato plant outside in my yard, and it's obviously not alive. it can't move, it doesn't breathe, and it obviously doesn't react in any sensitive manner to any of the bugs around it. it doesn't even talk back to me when I try to hold a conversation with it. any ideas on how to get rid of a dead growing tomato plant?
that's not really the point, y'know? "alive" has become such a loose term that we can apply it to pretty much anything.
It moves by growing towards the sunlight. And it senses light. Stick your tomato in a cardboard box with a hole in it and it should try to grow through the hole. And since when does anything have to have a conversation to be alive?
Quote:
I've got this growing tomato plant outside in my yard, and it's obviously not alive. it can't move, it doesn't breath, and it obviously doesn't react in any sensitive manner to any of the bugs around it. it doesn't even talk back to me when I try to hold a conversation with it. any ideas on how to get rid of a dead growing tomato plant?
And if it IS dead, use the leaves as compost. And watch how your dead plant is helping things live. That's life, man.
Such as...?
Quote:
There are many more life forms that don't fulfill these conditions than there are those that fulfill them.
So my pet rock is alive. My digital clock is alive. My book is alive, it's been made outta trees, right?
Quote:
that's not really the point, y'know? "alive" has become such a loose term that we can apply it to pretty much anything.
Most life forms don't breathe. And most of them are immobile.
Also, what does "growing on its own" mean?
It doesn't make sense to impose constraints of the physical world on computers to determine whether they are alive or not. I would say that any system must be subject to evolution if it's to be considered alive. This only applies to flow and propagation of information. You can have alive things in completely virtual environments, they don't need to "grow" or "replicate" in any physical sense.
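That "subject to evolution" criterion can even be demonstrated in a completely virtual setting. Here's a minimal Python sketch; the bit-string "genomes", fitness target and mutation rate are all arbitrary choices, just to show selection plus mutation at work:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable
TARGET = [1] * 8  # an arbitrary "environment" the replicators adapt to

def fitness(genome):
    # How many bits match the target environment.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Copying is imperfect: each bit may flip with the given probability.
    return [1 - g if random.random() < rate else g for g in genome]

# A random starting population of virtual replicators.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    # Selection: the fitter half gets to replicate (with mutation).
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [mutate(random.choice(parents)) for _ in range(20)]

print(max(fitness(g) for g in population))  # the population adapts toward TARGET
```

Nothing here grows or replicates physically, yet information flows, varies and is selected, and that is the sense in which a purely virtual system could qualify as "alive" under this definition.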
Nobody is arguing that computers fulfill the criteria for biological life. The thread is about synthetic consciousness, AI etc.
But since you brought it up... would it be so bad to define some androids that move and think as living, even if they aren't directly formed by biological evolution?
Well, when we kill other living beings, especially things that can think and feel, are we not deleting them in our own way? I'd say if you destroy sentient, peaceful life, that would be immoral. It accomplishes nothing at great cost. That AI unit could have come up with the solution for eternal happiness in another five minutes. It could have lived a long, happy holographic life. To take that away would be wrong.
Do you know how often our definition of life changes?
There are still scientists that say a virus isn't alive, even though it quite obviously evolves, has DNA, reproduces, and tries to fill a biological niche like anything else. Which is more logical: to take note of the general characteristics of life and then change our views when we see evidence in nature contrary to our outdated beliefs, OR to come up with an arbitrary set of guidelines as to what is life, based on a very limited understanding of life's nature and in the wake of ignorance as to life's scientific origin, and then refuse to accept anything that conflicts with our paradigm?
MRS GREN is just a general sort of guideline we get taught in 8th grade biology. It is the 'I before E except after C' of evolutionary schooling. And they are both equally wrong.
To me, anything that evolves of its own accord and reproduces is life. What else CAN it be if it fulfills those two things?
I never said that and you didn't ask that! You asked if I could explain what I meant before, but it was only an implication towards your own statement. I was expanding on the issue by saying that consciousness doesn't need to be 'embedded', therefore it can't be 'embedded'. It simply exists, but not 'because' of something and not 'put' there. That is a misconception, and it separates it into something that is transitory.
We know humans arise throughout evolution, and yes we have biological parents. You can't say computers therefore can have similar parents, especially if they are, by comparison, non-computers (human beings). The fact that evolution is the way consciousness has helped a species grow is to suggest that it only exists in the living, because it is essential to life itself. The real gap is in the argument that a human being can create another conscious being that is still a computer. What is the purpose and why do you think there is inconsistency? I have yet to see a computer that can impose its own ideas, meaning and awareness upon things, instead of simply directing and sorting incoming data like a brain, however complex and advanced it does so.
Quote:
Are you saying that, because conscious beings build the computer, this somehow renders the situation different? It certainly sounds to me that there is a large explanatory gap there. Also you'd have to explain how humans are any different; you realise humans require conscious beings to come into existence also? They're called 'parents'?
As with the many other great points discussed earlier in this thread, you should consider that even the human brain - the metaphor for the computer - does not fully determine or demonstrate actual human consciousness. Consciousness is, again: the context, capacity and virtue to know and experience above all such externalized phenomena that are detectable in the physical world. You cannot "make" a non-physical realm of consciousness by mirroring what the brain does. Your argument is essentially that we can create a computer that knows that it knows, and what it knows. Until WE actually comprehend the real meaning of that, it is all scientific fantasy.
Synthesizing life, playing with cells, cloning and such things have nothing to do with sentience. Can you explain how somebody can create sentience? Because nothing suggests it is possible, whether technology is more advanced or not. I wonder if this answers my questions about reviving a dead corpse, if that's what you might say it does? Sentience, as it exists, is essential to life and thus cannot be created nor destroyed. It is intangible and not subject to time and space. If it cannot be proven as it is, how are you going to prove that it can be man-made?
Are you also the one to suggest that a Tamagotchi is alive? Then what do we have to discuss? You must already be assuming that computers are conscious.
Quote:
A computer game is inanimate, carrying out a strictly defined program by sending electrons on a specific path. Sentience is not inanimate...it learns, grows, evolves, adapts, makes progress, and whatnot. If I had a working brain in a robot, I'd consider it to be alive. If I had an AI unit on my computer that I would converse with, I'd consider it alive. As for the purpose and cause of consciousness, I hold that it is caused by the brain itself, and the purpose is a question that may very well never be answered satisfactorily. Who knows? Is it some great cosmic mistake? Do we have a purpose? Until shown otherwise, my answer is that we don't have some grand purpose that we should be working toward or meddling with. If we do, then great. But if we do, then how should we know what we should be working toward? Perhaps artificial intelligence is part of this great plan? I vote we march onward.
I don't know how serious you are with this whole analogy here. The main reason why this does not concern evidence is because evidence is something that stems from the things that are seen and tangible. Consciousness is verifiable but not provable. Be careful, because the paradigm in which you're looking may never come across a chance to be proven otherwise. It is like looking in the wrong place to start. Consciousness, as it is (not 'could be'), is verifiable as beyond causes and conditions. It is a matter of understanding what it is, and science truly will not show you.
Quote:
Could be. Then again, I could be some lonely ice cream cone in an alternate dimension bestowed with intelligence (as all ice cream cones are in this dimension), and dreaming that I'm a human. Without any evidence of this, though, why should I accept it? The same can be applied to any sort of outside source of consciousness. I continue to hold that it is 100% internal, and until shown otherwise, shall not waver.
Interesting explanation. I think you've found the source of the controversy. To a degree, yes, you could say it is solipsism. But there is a lot more truth to it than at first glance. It is wiser to realize the limitation of the actual basis for objective reality, and only then will a greater understanding of both one's mind and one's own consciousness be apparent. Because however tacit the belief in objective reality is, it is still vulnerable to question. Searching one's own consciousness is then not seen as a giant leap of faith, but as a more mature awareness of life itself. It is understanding that what one's consciousness is, in its purest form, is not different from the consciousness of other living beings.
Spurious and presumptive are words I'd use to describe finding consciousness in the world, especially supposing that it could be created. As pure subjectivity, the first-hand witnessing of all phenomena, the first-hand source of all knowledge is already with you and essentially unchanging. Could an electronic device that emerged out of the purposes of the world have any firmer ground to teach what it means to be alive and conscious than our own inner knowing?
Yes, it is wise to see the limitation, and I never denied it. I'll assume that in some form you are living in the objective reality (speaking from your perspective). If this is so, then I see it as sort of hypocritical to deny this reality you actually live in and acknowledge through your actions... unless of course you're leaving for a cave in some Asian mountains or jungles to live out your existence as pure consciousness. Ok, putting aside this cheekiness: just because you acknowledge your existence doesn't mean you should let it bleed into other areas of perceived reality as the absolute state everything comes back to. "I think, therefore I am" may be a powerful revelation, but by taking it in fully, you might inadvertently close yourself to the possibility that the I, itself, can be an illusion.
...Finding consciousness. How do you find it? As I said in another post, consciousness is how we define a set of mental activities, perceived introspectively and inferred objectively onto other creatures due to our similarities, essentially based on scientific fact and/or some evolved instincts.
Putting that aside, I have no idea, no reason to acknowledge your consciousness. There is only one consciousness that exists, which is mine. I can't place it in space or time, because both are qualia that ultimately help to form this "entity" in the first place.
So finding consciousness, or creating consciousness, just means that we will replicate the biological activity we see as a precursor to these immaterial states of being we all separately experience. Scientifically, that will constitute creating the same consciousness everything else experiences. Will this mean that the robot experiences itself subjectively? I refer to my previous paragraph. If we do our best to imitate biological brains, down to the nanometre, what exactly is stopping it from being like us on all levels? Yes, the subject of subjective existence is deep, the deepest IMO... so can we really ever decode its limits? If, for now, some sort of computation is the limit, then synthetic computers should be capable of consciousness.
Purposes of the world... Well, we come back to the question of whether we're going to be solipsists or have faith in external reality. In the first case, it really doesn't matter. According to the second stance, as I see it of course, purpose is a by-product of evolution, which likely emerged from the inclination of our primate ancestors to use their brains to survive. If a concept of purpose enables humans to tie a stone and stick together to make a spear, then it will stick (lol pun?). It may just be a curse that our intellectual evolution has grown to a point at which "it" can consider its own existence. Because of our nature, the first question is of course that of purpose: "Why are we here?". It may be a good question, but I have my doubts about its actual essence; if purpose is just an intellectual appendage which evolved like the sharp teeth of a shark... do we really want to let "stupid evolution" continue its hold on our intellect, or better yet, do we even have a choice?
Ok, sorry about that... on what basis would the consciousness of an electronic device, built with the purpose of imitating consciousness, be any less philosophically significant than a consciousness which is the result of more directly "natural" (biological) evolutionary design? Other than your inclination to stand by your own consciousness to describe everything?