• Lucid Dreaming - Dream Views





    Thread: Neural Incompleteness Theorem

    1. #1
      Xei
      UnitedKingdom Xei is offline
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3082

      Neural Incompleteness Theorem

      "If the human brain were so simple that we could understand it, we would be so simple that we couldn’t."
      ~ Emerson M. Pugh

      I don't necessarily think what follows is true, but I would like to talk about it to stimulate my own thoughts:

      Imagine you have a system which does something simple, like adding up two numbers in base 10.

      Is it possible for this system to conceptualise how it (itself) works?

      Patently not, it can only add up.

      What if we append to the original system another system, which is capable of comprehending the original adding system.

      Can this new conjugate system comprehend how it works? Well, we got a bit closer, in that the system understands part of itself, but in doing so we had to add a new system, which the system cannot comprehend; moreover, though it's probably not that important, this new system is much more complicated than the original adding system.

      So again we append a system capable of comprehending the original comprehending appendix, and of course we find ourselves with the same problem; that this new system cannot understand itself.

      If we proceed by induction it would seem that we conclude that no system can understand how it works.

      Thoughts?
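The regress in the post can be caricatured in a few lines of code. This is only an illustrative sketch (the names are made up, and "comprehension" is reduced to holding a description of the target system), but it shows the shape of the argument: each appended level models the level below it and carries no model of itself.

```python
def adder(a, b):
    """The original system: all it can do is add two base-10 numbers."""
    return a + b

def make_describer(target_name):
    """Append a new system that 'comprehends' the named system --
    caricatured here as producing a description of it -- while
    carrying no description of itself."""
    def describer():
        return f"'{target_name}' takes input and produces output; this describer has no model of itself."
    return describer

describe_adder = make_describer("adder")               # level 1
describe_describer = make_describer("describe_adder")  # level 2, and so on...

print(adder(2, 3))       # -> 5; the adder still only adds
print(describe_adder())  # each level describes only the level below it
```

Each call to `make_describer` is one step of the induction: the new conjugate system understands one more component, but the part doing the understanding is itself a fresh, un-understood component.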

    2. #2
      The Anti-Member spockman's Avatar
      Join Date
      Aug 2008
      Gender
      Location
      Colorado
      Posts
      2,500
      Likes
      132
That only works for a computer that only knows what it is programmed to know. If something can adapt and learn, this isn't true. For example, say I make a computer with the system 'capacity for learning.' When it is first made, it has no conceptualization of itself. But I give it access to all of the information it requires to understand itself. As long as its memory is great enough, it can eventually learn all it has to know, and even learn how it learned all it needed to know.

      So, applying that to life, a creature that works 100% on instinct and pretty much knows all that it ever will... Yes, this theory works. But a creature with the capacity to learn and reason, like humans, could theoretically understand it all one day.
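One toy illustration of the "enough memory to hold itself" idea is a quine: a program whose output is exactly its own source code. It doesn't settle whether representation amounts to understanding, but it does show that complete self-representation is at least possible for a system.

```python
# A minimal Python quine: running this program prints the program itself.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The trick is that `%r` inserts the string's own quoted representation, so the output reconstructs both lines of the source verbatim.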
      Last edited by spockman; 06-22-2010 at 06:33 AM.
      Paul is Dead




    3. #3
      Xei
      UnitedKingdom Xei is offline
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3082
      I don't think learning makes a difference. Learning is just changing the system around. At any one point in time, the system is what the system is, and does what it can do; and then the argument above applies.

Note that the system in my argument is not necessarily simple. The first appendix alone may have to be very complex. We have yet to create a program which understands how addition works and can itself write a program that performs addition. The only computer we currently know to be capable of this is the human brain.
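For contrast, a program can trivially *write* a program that performs addition; the sketch below (names invented for illustration) does exactly that. The point being made above survives it: the generation is purely mechanical, and nothing here resembles the generator understanding addition.

```python
# A program that writes, then runs, another program which adds.
def write_adder_source():
    """Emit the source code of an adding program as a string."""
    return "def generated_add(a, b):\n    return a + b\n"

namespace = {}
exec(write_adder_source(), namespace)    # "load" the generated program
print(namespace["generated_add"](2, 3))  # -> 5
```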

    4. #4
      The Anti-Member spockman's Avatar
      Join Date
      Aug 2008
      Gender
      Location
      Colorado
      Posts
      2,500
      Likes
      132
      Quote Originally Posted by Xei View Post
      I don't think learning makes a difference. Learning is just changing the system around. At any one point in time, the system is what the system is, and does what it can do; and then the argument above applies.

Note that the system in my argument is not necessarily simple. The first appendix alone may have to be very complex. We have yet to create a program which understands how addition works and can itself write a program that performs addition. The only computer we currently know to be capable of this is the human brain.
Theoretically, though, there is nothing keeping a computer from being programmed to have creativity and a capacity for independent learning. Then it could understand all of the principles used to create it, and prove it by writing scripts and programs of its own. How would the above argument apply then?
      Paul is Dead




    5. #5
      Antagonist Invader's Avatar
      Join Date
      Jan 2004
      Location
      Discordia
      Posts
      3,239
      Likes
      533
      Can we define the word "understand" before we go any further?

      I'm of the thought that understanding is a quality that can only be possessed by entities that are already aware, or conscious.

    6. #6
      Member Achievements:
      Made Friends on DV 1000 Hall Points Veteran First Class

      Join Date
      Jun 2010
      Gender
      Posts
      708
      Likes
      348
A close approximation is the best any subsystem of a closed system can get to replicating the behavior, structure, or interconnectivity of the whole. If one were to add a conjugate system, it would invariably be included within the boundary of the original, meaning they would be the same system, only larger, and the approximation would remain proportionally identical. As far as I know, chaos theory and systems theory address this very issue.

    7. #7
      Member Photolysis's Avatar
      Join Date
      Dec 2007
      Gender
      Posts
      1,270
      Likes
      316
Quote Originally Posted by Xei View Post
If we proceed by induction it would seem that we conclude that no system can understand how it works.

Thoughts?
      It depends on what level of comprehension you want to aim for, but I'll assume you mean at the most complete level.


      I would say a system's ability to comprehend itself is based on three values:

      1. The level of comprehension the system is potentially capable of
      2. The amount of information / knowledge the system can store
      3. The actual complexity of the system



      #1 does not have to be directly proportional to #3. If we were able to double the capability of understanding for a human brain for instance, it might not involve a huge increase in complexity. In fact, the change might well be a relatively simple one.

      Clearly this is speculation when it comes to the human brain and especially at our present stage. However, when we look at the next most intelligent animals, it appears that our brains are not that much more complicated, yet there is a very large difference in the ability for abstraction and conceptualisation (or so it would appear).

#2 might involve a slight increase in complexity to handle the storage, but again does not have to be proportional to the size increase. As an analogy, I can replace a 200GB hard drive with a 2TB hard drive in a modern PC, and the system will remain functionally the same. However, to get to this stage from earlier PCs, the architecture had to be revised to handle the larger sizes.

#3 clearly increases the values for #1 and #2, at least once past certain limits. The more complicated the system, the more knowledge about its workings must be comprehended and stored. But as stated in #1 and #2, these do not appear to be directly linked. Complexity can also be increased with no gain in performance.


In your original post, Xei, you correctly state that a system that can only understand part of itself would require an infinite set of additions (i.e. you need a system capable of understanding the system that's capable of understanding the system of... all the way down to the part that understands the concept of adding in base 10). However, this doesn't consider that a sufficiently complicated new system can understand the component systems as well as itself.

Again, if you consider computing architecture here: 64-bit computing means addresses take up double the space, but the addressable space grows from 2^32 to 2^64 locations, so you gain ground.
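To put rough numbers on that analogy (the address widths are the standard ones; the mapping to "comprehension capacity" is only the analogy above): doubling the address width doubles the per-pointer bookkeeping but squares the number of addressable locations.

```python
# 32-bit vs 64-bit addressing: linear cost, squared reach.
addresses_32 = 2 ** 32   # about 4.3 billion locations
addresses_64 = 2 ** 64   # about 1.8e19 locations
gain = addresses_64 // addresses_32
print(gain == addresses_32)  # -> True: the gain is another full factor of 2**32
```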
      Xei likes this.

    8. #8
      DuB
      DuB is offline
      Distinct among snowflakes DuB's Avatar
      Join Date
      Sep 2005
      Gender
      Posts
      2,399
      Likes
      358
      Quote Originally Posted by Xei View Post
      Can this new conjugate system comprehend how it works? Well, we got a bit closer, in that the system understands part of itself, but in doing so we had to add a new system, which the system cannot comprehend
      [...]
      So again we append a system capable of comprehending the original comprehending appendix, and of course we find ourselves with the same problem; that this new system cannot understand itself.
Well, we don't even have to enter the philosophical discussion about what it means to "understand" or "comprehend" something, because the very structure of the argument is question-begging. I've highlighted its crucial premise, that the newly added system may be able to understand the old system but not itself, but this is the very conclusion the argument seeks to establish. It seems plausible that this premise/conclusion may be true (and this is presumably the reason we are discussing it in the first place), but the argument doesn't actually argue for the idea at all; it simply assumes it. Alternatively, it might in fact be the case that the new double-system is capable of understanding itself.
      Xei likes this.

    9. #9
      Xei
      UnitedKingdom Xei is offline
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3082
DuB: that post was pretty much as close as you can come to perfection.

      Yes, the argument does beg the question. If we consider the argument in the context of humans, what is being argued (incorrectly) is that humans may be able to understand how most of their brain operates, but they won't be able to understand their understanding.

      Although I have a hunch that understanding understanding may prove the most intractable by far, potentially insurmountably so, this is only an assertion as it stands.

      Photolysis: same as DuB.

      People talking about the issue of comprehension: this isn't really problematic. I assume that comprehension is well defined, and is something that the human brain can do; as the human brain can be represented by an algorithm, there's no conceptual problem with a system which understands another system.

    10. #10
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      54
      Yeah good points guys!

      Quote Originally Posted by Xei View Post
      People talking about the issue of comprehension: this isn't really problematic. I assume that comprehension is well defined, and is something that the human brain can do; as the human brain can be represented by an algorithm, there's no conceptual problem with a system which understands another system.
      Comprehension can occur on many different levels though. Understand how something works, in what way? Understand something to what degree, or in what context? Maybe give an example of what kind of concept/algorithm it would be.

      Example: humans try to understand the universe, but they have many different methods of understanding of how it works. Of course, it doesn't need to be that complex, but be aware of the possible differences of how something can be understood by a brain/system.
      Last edited by really; 06-22-2010 at 04:49 PM.

    11. #11
      Member
      Join Date
      May 2007
      Gender
      Posts
      635
      Likes
      45
That's a legitimate-sounding theory, but I don't think it applies to humans. If it did, that would mean that every time we learned something new or understood a new part of ourselves, another currently incomprehensible part would appear, requiring a new part to understand it... continuing on perpetually. I personally don't think that is the case.

      This is going to sound cheesy... but I'll say it anyways.

We are coming upon a new age in human evolution, where the changes lie in the brain, not only the body. We are currently researching the importance of dreams and new types of consciousness. Dreaming, and lucid dreaming, is the next step in consciousness. Without the boundaries of the waking world holding back human potential, our understanding of ourselves and essentially everything will accelerate exponentially.

Consciousness is the next, and perhaps last, step in human evolution.

    12. #12
      Member Achievements:
      Made Friends on DV 1000 Hall Points Veteran First Class

      Join Date
      Jun 2010
      Gender
      Posts
      708
      Likes
      348
I believe the topic has transitioned from understanding the idea of a system to human understanding of itself; this is an epistemic distinction. Epistemology is useful in many ways, although some of the principles it's based on are still fiercely disputed: a priori versus a posteriori knowledge, regression, constructivism, infallibilism, indefeasibility, truth, belief, etc.

The Gettier problem is probably the most provocative issue in all of epistemology; it states that there are situations in which one's belief may be justified and true, yet fail to count as knowledge. At this point philosophers like Marx would try to explain the limitations of knowledge and justification, which usually develops into a healthy dose of skepticism. Basically, what this practice does is address a few important questions, like "what is knowledge?" and "how is knowledge acquired?", giving people a better understanding of certain subjects (the mind-body problem, consciousness, the brain) and allowing for a heuristic growth in ideas, whereas without the practice of epistemology those ideas may never have occurred.

    13. #13
      Sleeping Dragon juroara's Avatar
      Join Date
      May 2006
      Gender
      Location
      San Antonio, TX
      Posts
      3,865
      Likes
      1171
      DJ Entries
      144
      Okay, don't yell at me, because I don't use the big words that you boys use.

      As I was transferring my dream journal I remembered a strange dream. I wanted to know what makes animals move. Sure I understood *in the dream* that there is muscle tissue and bones, but what makes that move? So I run around and find a group of physics students arguing about this and that. *cough* They were inspired by several dream views members.

      I pop the question. They laugh at me because my question seems so obvious! Then a nice young man in the group rephrases my question using bigger words. The nerds get quiet as they think. Then the young man answers my question "The mystery is consciousness"

      That's what I thought of when I read this thread. I mean, sure we can poke the brain and say it works this way and that way. But even if you've figured out everything about the brain, there is no argument or book you could ever read to a rock to make it understand any of it. Because there is only one way you can know consciousness - By being conscious.

    14. #14
      Member Achievements:
      1000 Hall Points Veteran First Class

      Join Date
      Jul 2009
      Gender
      Posts
      276
      Likes
      21
It depends what you consider "understanding how it works".
I think there is a project that is investigating how to emulate the brain waves most commonly correlated with consciousness and awareness. It seems that consciousness and knowledge of self are very similar, and once one is conquered, the other should quickly follow. To what extent is the next inquiry.

Anyhow, through induction (assuming that the appended object "comprehends" every object of interest), you still get to a point where a system needs to deliver what we can call a "knowledge" construct.
So yeah, I guess I agree with you. I think that the best route to neural "completeness" would be to systematically emulate the neural structure of our brains.

    15. #15
      Dismember Achievements:
      1000 Hall Points Veteran First Class
      SnakeCharmer's Avatar
      Join Date
      Mar 2009
      Gender
      Location
      The river
      Posts
      245
      Likes
      41
I don't think anyone will ever be able to comprehend how the brain works in its entirety.
That doesn't mean that we will never know exactly how every part of it works and how every part interacts with the others. We will also be able to build computational models of neural function at every level.

It can be compared to how the people who design hardware view computers. Computer engineers can tell you exactly what every circuit in the computer does. They know how to connect the parts to get the desired behavior. However, there isn't a single engineer who can hold the entire network structure in his head and say what's happening at every node. Luckily for us, that's not really needed, because they have analytical tools to understand what's going on.

Our brains didn't evolve to understand more than a handful of interactions at the same time, but we can gain some understanding by mapping certain features of complex networks into "lower dimensions", something like using projections to understand the hypercube.
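A minimal sketch of that projection idea (purely illustrative): take the 16 vertices of the 4-D hypercube and drop two coordinates to get a 2-D "shadow". Most of the structure is lost, but the shadow is something we can actually picture, which is the trade-off being described.

```python
from itertools import product

# The 4-D hypercube has 2**4 = 16 vertices, one per 0/1 coordinate tuple.
vertices_4d = list(product([0, 1], repeat=4))

# Project onto the first two axes by discarding the last two coordinates.
shadow_2d = {(x, y) for (x, y, z, w) in vertices_4d}

print(len(vertices_4d))  # -> 16
print(len(shadow_2d))    # -> 4: sixteen vertices collapse to a 2-D square
```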

    16. #16
      I am become fish pear Abra's Avatar
      Join Date
      Mar 2007
      Location
      Doncha Know, Murka
      Posts
      3,816
      Likes
      540
      DJ Entries
      17
Quote Originally Posted by Xei View Post
If we proceed by induction it would seem that we conclude that no system can understand how it works.

Thoughts?
      Thoughts?

      XeL is a bro.
      And so is Kurt Gödel.
      You two might get along well.
      Except for the fact that he's a paranoid schizophrenic and probably wouldn't want to ever meet you in person.
      And also he's dead.
      Details, details.
      Abraxas

      Quote Originally Posted by OldSparta
      I murdered someone, there was bloody everywhere. On the walls, on my hands. The air smelled metallic, like iron. My mouth... tasted metallic, like iron. The floor was metallic, probably iron

    17. #17
      Xei
      UnitedKingdom Xei is offline
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3082
XeL? XeL?? That goddamned usurper...

Seriously, I may have to change my name. Five years on this forum and this is what I get for it!

    18. #18
      I am become fish pear Abra's Avatar
      Join Date
      Mar 2007
      Location
      Doncha Know, Murka
      Posts
      3,816
      Likes
      540
      DJ Entries
      17
      Oh! Oh my! I'm very sorry. Really. It's a muscle memory thing. I really mean Xei is awesome. ;____
      Abraxas

      Quote Originally Posted by OldSparta
      I murdered someone, there was bloody everywhere. On the walls, on my hands. The air smelled metallic, like iron. My mouth... tasted metallic, like iron. The floor was metallic, probably iron

    19. #19
      Member really's Avatar
      Join Date
      Sep 2006
      Gender
      Posts
      2,676
      Likes
      54
      Quote Originally Posted by Xei View Post
      "If the human brain were so simple that we could understand it, we would be so simple that we couldn’t."
      ~ Emerson M. Pugh
(Responding to the thread in another way:) I think what this is inching towards is the fact that conceptualization does not simplify reality. To truly understand something is a non-intellectual knowledge, free of concepts and symbols, which is so simple that to conceptualize something thereafter is actually to complicate it by superimposition. Even if there is a concept or symbol that supposedly simplifies and aims to understand a system in an easier way, all it is really doing is unifying other concepts in a more abstract manner.

      The concepts and ideas of objects become more detailed and complex for endless purposes and functions of society, but when it comes to pure simplicity, no concept or understanding is required, and that, in itself, is total understanding. This is essentially to agree with the phrases "Is what it is" or "A rose is a rose is a rose".

      Another thing to ponder:
      "Science cannot solve the ultimate mystery of nature because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve." - Max Planck
      Last edited by really; 07-07-2010 at 03:10 PM.
