• FaceDeer@fedia.io · 3 months ago

    I wouldn’t be surprised if, someday when we’ve fully figured out how our own brains work, we go “oh, is that all? I guess we just seem a lot more complicated than we actually are.”

    • BigMikeInAustin@lemmy.world · 3 months ago

      True.

      That’s why consciousness still seems “magical.” If neurons, at their most basic, just do IF logic, how does that become consciousness?

      And the same goes for memory. It can seem to boil down to a single cell reacting to a specific input, an idea called “the grandmother cell.” Is there really just one cell that holds the memory of your grandmother? If that one cell gets damaged or dies, do you lose the memory of your grandmother?

      And ultimately, if thinking is just IF logic, does that mean every decision and thought is predetermined and could be computed, given a big enough computer and all the exact starting values?
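
      As a minimal sketch of that “neurons as IF logic” picture, here is a classic perceptron-style threshold unit (the inputs, weights, and threshold are invented for illustration):

      ```python
      def threshold_neuron(inputs, weights, threshold):
          """Fire (return 1) IF the weighted sum of inputs crosses the threshold."""
          activation = sum(i * w for i, w in zip(inputs, weights))
          return 1 if activation >= threshold else 0

      # Fully deterministic: the same inputs produce the same output every time,
      # which is exactly what makes the predetermination question feel pressing.
      print(threshold_neuron([1, 0, 1], [0.6, 0.4, 0.3], threshold=0.8))  # -> 1
      ```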

      • huginn@feddit.it · 3 months ago

        You’re implying that physical processes are inherently deterministic, when we know they’re not.

        Your neurons are analog and noisy and sensitive to the tiny fluctuations of random atomic noise.

        Beyond that: they don’t do “if” logic. It’s more like complex combinatorial arithmetic in which every input simultaneously modifies future outputs.
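
        To make that concrete, a rough sketch of such a unit: analog output, random jitter, and weights that shift on every input, so the same stimulus need not produce the same response twice (all constants here are invented for illustration):

        ```python
        import math
        import random

        class NoisyAdaptiveNeuron:
            def __init__(self, weights, noise=0.05, learning_rate=0.01):
                self.weights = list(weights)
                self.noise = noise
                self.learning_rate = learning_rate

            def fire(self, inputs):
                # Analog, not binary: a smooth sigmoid of the weighted sum,
                # plus random jitter standing in for physical noise.
                drive = sum(i * w for i, w in zip(inputs, self.weights))
                drive += random.gauss(0.0, self.noise)
                output = 1.0 / (1.0 + math.exp(-drive))
                # Every input also nudges the weights (a crude Hebbian-style
                # update), so each input modifies future outputs.
                for k, x in enumerate(inputs):
                    self.weights[k] += self.learning_rate * x * output
                return output

        n = NoisyAdaptiveNeuron([0.2, -0.1, 0.4])
        print(n.fire([1.0, 0.5, 1.0]))  # same call twice...
        print(n.fire([1.0, 0.5, 1.0]))  # ...different answer: noise + changed weights
        ```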

        • FaceDeer@fedia.io · 3 months ago

          Though I should point out that the virtual neurons in LLMs are also noisy and sensitive, and the noise they use ultimately comes from tiny fluctuations of random atomic noise too.
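
          As a sketch of where that randomness typically enters a deployed model’s output, at the token-sampling step rather than inside the network itself (the logits and vocabulary here are made up):

          ```python
          import math
          import random

          def sample_token(logits, temperature=1.0):
              """Draw one token index from a temperature-scaled softmax."""
              scaled = [l / temperature for l in logits]
              m = max(scaled)
              exps = [math.exp(l - m) for l in scaled]  # subtract max for stability
              total = sum(exps)
              probs = [e / total for e in exps]
              # random.choices uses a PRNG; whether its seed traces back to
              # physical entropy depends on the system.
              return random.choices(range(len(logits)), weights=probs)[0]

          vocab = ["cat", "dog", "bird"]
          print(vocab[sample_token([2.0, 1.5, 0.1], temperature=0.8)])
          ```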

    • Rhaedas@fedia.io · 3 months ago

      If anything, I think the development of actual AGI will come first and give us insight into how some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try to figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a rudimentary form of AGI existing now: corporations.

      • antonim@lemmy.dbzer0.com · 3 months ago

        Something of the sort has already been claimed for language and linguistics, i.e. that LLMs can be used to understand human language production. One linguist wrote a pretty good reply to such claims, which can be summed up as “this is like inventing an airplane and using it to figure out how birds fly.” Who knows, maybe that could even work, but it should be admitted that the approach is extremely roundabout and might well be utterly fruitless.