• Lucid Dreaming - Dream Views




    Thread: Neural Incompleteness Theorem

    1. #1
      Member
      Join Date: Jun 2010
      A close approximation is the best any subsystem of a closed system can achieve in replicating the behaviour, structure, or interconnectivity of the whole. If one were to add a conjugate system, it would invariably be included within the boundary of the original, meaning the result would be the same system, only larger, and the approximation would remain proportionally identical. As far as I know, chaos theory and systems theory address this very issue.

    2. #2
      Member Photolysis
      Join Date: Dec 2007
      Quote Originally Posted by Xei
      I don't necessarily think what follows is true, but I would like to talk about it to stimulate my own thoughts:

      Imagine you have a system which does something simple, like adding up two numbers in base 10.

      Is it possible for this system to conceptualise how it (itself) works?

      Patently not; it can only add up.

      What if we append to the original system another system, one capable of comprehending the original adding system?

      Can this new conjugate system comprehend how it works? Well, we got a bit closer, in that the system now understands part of itself; but to do so we had to add a new system, which the whole cannot comprehend. Moreover, though it's probably not that important, this new system is much more complicated than the original adding system.

      So again we append a system capable of comprehending the original comprehending appendix, and of course we find ourselves with the same problem: this new system cannot understand itself.

      If we proceed by induction, it would seem we must conclude that no system can fully understand how it works.

      Thoughts?
      It depends on what level of comprehension you want to aim for, but I'll assume you mean at the most complete level.


      I would say a system's ability to comprehend itself is based on three values:

      1. The level of comprehension the system is potentially capable of
      2. The amount of information / knowledge the system can store
      3. The actual complexity of the system



      #1 does not have to be directly proportional to #3. If we were able to double a human brain's capability for understanding, for instance, it might not involve a huge increase in complexity. In fact, the change might well be a relatively simple one.

      Clearly this is speculation when it comes to the human brain, especially at our present stage of knowledge. However, when we look at the next most intelligent animals, it appears that our brains are not that much more complicated, yet there is a very large difference in the capacity for abstraction and conceptualisation (or so it would appear).

      #2 might involve a slight increase in complexity to handle the extra storage, but again it does not have to be proportional to the size increase. As an analogy, I can replace a 200GB hard drive with a 2TB hard drive in a modern PC and the system will remain functionally the same. However, to get to this stage from earlier PCs, the architecture had to be revised to handle the larger sizes.

      #3 clearly increases the values required for #1 and #2, at least past certain limits: the more complicated the system, the more knowledge about its workings must be comprehended and stored. But as stated under #1 and #2, these do not appear to be directly linked. Complexity can also increase with no gain in performance.


      In your original post, Xei, you correctly state that a system which can only understand part of itself would require an infinite set of additions (i.e. you need a system capable of understanding the system that's capable of understanding the system of... all the way down to the part that understands adding in base 10). However, this doesn't consider that a sufficiently complicated new system can understand the component systems as well as itself.
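      The regress described above can be put in rough numbers. This is only a toy sketch under made-up assumptions (the `append_comprehenders` helper, the unit sizes, and the fixed overhead are all mine, not anything from the thread): if each appended layer must describe the newest undescribed layer plus some overhead, the undescribed remainder never reaches zero.

```python
# Toy model of the regress: each appended "comprehender" must describe
# the newest, not-yet-described layer plus some fixed overhead, so the
# undescribed remainder never shrinks to zero. Sizes are arbitrary units.
def append_comprehenders(base_size, overhead, steps):
    total = base_size
    undescribed = base_size  # the original adder describes nothing of itself
    for _ in range(steps):
        new_layer = undescribed + overhead  # must cover the undescribed part
        total += new_layer
        undescribed = new_layer  # ...and is itself now undescribed
    return total, undescribed

print(append_comprehenders(1, 1, 3))  # prints (10, 4): the remainder grows
```

      Each step enlarges both the whole system and the part of it that nothing comprehends, which is exactly the induction in Xei's post.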

      Again, if you consider computing architecture here: moving from 32-bit to 64-bit doubles the space each address takes up, but it squares the size of the addressable space (2^32 becomes 2^64), so you gain far more ground than you spend.
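      That trade-off is easy to check with the raw numbers (these are the theoretical address-space limits, not what any particular machine actually wires up):

```python
# Doubling the address width from 32 to 64 bits doubles the cost of
# storing each pointer, but multiplies the addressable space by 2**32.
width_32, width_64 = 32, 64
space_32 = 2 ** width_32   # 4,294,967,296 addresses (~4 GiB)
space_64 = 2 ** width_64   # 18,446,744,073,709,551,616 addresses (~16 EiB)

print(width_64 // width_32)   # prints 2: each address costs twice as much
print(space_64 // space_32)   # prints 4294967296: the space gained per doubling
```

      So the bookkeeping overhead grows linearly in the address width while the reachable space grows as 2 to that width, which is the "gaining ground" in the paragraph above.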

