• wizardbeard@lemmy.dbzer0.com · 24 days ago

      But it is, and it always has been. Absurdly complexly layered statistics, calculated faster than a human could.

      This whole “we can’t explain how it works” is bullshit from software engineers too lazy to unwind the emergent behavior caused by their code.

      • doctordevice@lemmy.ca · 24 days ago

        I agree with your first paragraph, but unwinding that emergent behavior really can be impossible. It’s not just a matter of taking spaghetti code and deciphering it; ML usually works by generating weights in something like a decision tree, neural network, or statistical model.

        Assigning any sort of human logic to why particular weights ended up where they are is educated guesswork at best.
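
        To make that concrete, here is a minimal sketch (assuming scikit-learn and NumPy; the toy data and layer size are made up): fit a tiny network and dump what it learned. What comes back is matrices of floats with no human-readable meaning attached to any individual number.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))             # 200 samples, 4 features
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a simple hidden rule the model must learn

        model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        model.fit(X, y)

        for i, layer in enumerate(model.coefs_):
            print(f"layer {i} weights:\n{layer}")  # just numbers; nothing says why each one is what it is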

        • andyburke@fedia.io · 24 days ago

          You know what we do in engineering when we need to understand a system a lot of the time? We instrument it.

          Please explain why this can’t be instrumented. Please explain why the trace data could not be analyzed offline at different timescales as a way to start understanding what is happening in the models.

          I’m fucking embarrassed for CS lately.
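
          For what it’s worth, the capture side of this is straightforward to sketch (assuming PyTorch; the toy model and the output filename are placeholders): hook every layer and dump the activations for offline analysis. The hard part, as the replies note, is reading meaning out of the dump.

          import torch
          import torch.nn as nn

          model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
          trace = {}

          def make_hook(name):
              def hook(module, inputs, output):
                  trace.setdefault(name, []).append(output.detach().clone())
              return hook

          for name, module in model.named_modules():
              if isinstance(module, nn.Linear):
                  module.register_forward_hook(make_hook(name))   # record every layer's output

          model(torch.randn(4, 16))       # one forward pass, activations recorded
          torch.save(trace, "trace.pt")   # analyze offline, at whatever timescale you like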

          • Scubus@sh.itjust.works · 24 days ago

            … but they just said that it can. You check it, and you will receive gibberish. Congrats, your value is .67845278462 and if you change that by .000000001 in either direction things break. Tell me why it ended up at that number. The numbers, what do they mean?
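
            A toy version of that poke-the-number exercise (assuming PyTorch; the model and the chosen weight are arbitrary, and a tiny untrained net won’t dramatically “break”, but the point stands that nothing records what any single weight means):

            import torch
            import torch.nn as nn

            torch.manual_seed(0)
            model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
            x = torch.randn(1, 8)

            print(model[0].weight[0, 0].item())   # some float; why that value? the training run, that's why
            before = model(x).item()
            with torch.no_grad():
                model[0].weight[0, 0] += 0.001    # nudge a single weight
            print(model(x).item() - before)       # the output shifts; good luck explaining the number itself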

          • David From SpaceA · 24 days ago

            It’s not always as simple as measuring an observable system or simulating the parameters the best you can. Lots of parameters + lots of variables = we have a good idea how it should go, we can get close, but don’t actually know. That’s part of why emergent behavior and chaos theory are so difficult, even in theoretically closed systems.

          • Match!!@pawb.social · 24 days ago

            That field is called Explainable AI, and the answer is that it costs money, while the only reason AI is being used is to cut costs.

            • wizardbeard@lemmy.dbzer0.com · 24 days ago

              Thank you. I am fucking exhausted from hearing people claim these things are somehow magically impossible when the real issue is cost.

              Computers and technology are amazing, but they are not magic. They are the most direct piece of reality where you can reliably say that every single action taken can be broken into discrete steps, even if that means tracing individual CPU operations on data registers like an insane person.

      • dactylotheca@suppo.fi · edited · 24 days ago

        But it is, and it always has been. Absurdly complexly layered statistics, calculated faster than a human could.

        Well sure, but as someone else said, even heat is statistics. Saying “ML is just statistics” is so reductionist as to be meaningless. Heat is just statistics. Biology is just physics. Forests are just trees.

        • Pennomi@lemmy.world · 24 days ago

          It’s like saying a jet engine is essentially just a wheel and axle rotating really fast. I mean, it is, but it’s shaped in such a way that it’s far more useful than just a wheel.

        • andyburke@fedia.io · 24 days ago

          Yeah, but the critical question is: is human intelligence statistics?

          Seems no, to me: a human lawyer wouldn’t, for instance, make up case law that doesn’t exist. AI has done that one already. If it had even the most basic understanding of what the law is and does, it would have known not to do that.

          This shit is just MegaHAL on a GPU.

          • dactylotheca@suppo.fi · edited · 24 days ago

            Seems no, to me: a human lawyer wouldn’t, for instance, make up case law that doesn’t exist. AI has done that one already. If it had even the most basic understanding of what the law is and does, it would have known not to do that.

            LLMs don’t have an understanding of anything, but that doesn’t mean all AI in perpetuity is incapable of having an understanding of, e.g., what the law is. Edit: oh and also, it’s not like human lawyers are incapable of mistakenly “inventing” case law just by remembering something wrong.

            As to whether human intelligence is statistics, well… our brains are neural networks, and ultimately neural networks – whether made from meat or otherwise – are “just statistics.” So in a way I guess our intelligence is “just statistics”, but honestly the whole question is sort of missing the point. The problem with AI (which right now really means LLMs) isn’t the fact that they’re “just statistics”, and whether you think human intelligence is or isn’t “just statistics” doesn’t really tell you anything about why our brains perform better than LLMs do.

          • candybrie@lemmy.world · 24 days ago

            Seems no, to me: a human lawyer wouldn’t, for instance, make up case law that doesn’t exist

            You’ve never seen someone misremember something? The reason human lawyers don’t usually get too far with made-up case law is because they have reference material and they know to go back to it. Not because their brains don’t make stuff up.

            • andyburke@fedia.io · 24 days ago

              I think you’re not aware of the case I’m talking about, where the AI made up a case name from whole cloth? Because that’s not misremembering, it’s exactly what you would expect unintelligent statistical prediction to come up with.

              • candybrie@lemmy.world · edited · 23 days ago

                Have you ever graded free response tests before? I assure you that some people do similar things when pressed to come up with an answer when they don’t know. Often, they know they’re BSing, but they’re still generating random crap that sounds plausible. One of the big differences is that we haven’t told AI to tell us “I’m not sure” when it has high uncertainty; though plenty of people don’t do that either.

      • 0ops@lemm.ee · 24 days ago

        It’s totally statistics, but that second paragraph really isn’t how it works at all. You don’t “code” neural networks the way you code up a website or a game. There’s no “if (userAskedForThis) {DoThis()}”. All the coding you do in neural networks is to define a model and a training process, but that’s it; before training, the behavior is completely random.

        The neural network engineer isn’t directly coding up behavior. They’re architecting the model (random weights by default), setting up an environment (training and evaluation datasets, tweaking some training parameters), and letting the model’s weights be trained or “fit” to the data. Its behavior isn’t designed; the virtual environment that it evolved in was. Bigger, cleaner datasets, model architectures suited to the data, and an appropriate number of training iterations (epochs) can improve results, but they’ll never be perfect, just an approximation.
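
        A rough sketch of that split (assuming PyTorch; the architecture, data, and hyperparameters are placeholders): everything the engineer actually writes is setup, and the behavior falls out of the fit.

        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))  # architecture: chosen by a human
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)                    # training setup: chosen by a human
        loss_fn = nn.MSELoss()

        X = torch.randn(256, 10)          # training data: collected/curated by a human
        y = X.sum(dim=1, keepdim=True)    # toy target

        for epoch in range(100):          # epoch count: chosen by a human
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()                    # the behavior itself is fit here, never written as if/else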

        • Match!!@pawb.social · 24 days ago

          TensorFlow has some libraries that help visualize the “explanation” for why its models are classifying something (in the tutorial example, a fireboat), in the form of highlights over the most salient parts of the data:

          Image of a firefighting boat, then a purple lattice overlaying the boat.

          Neural networks are not intractable, but we just haven’t built the libraries for understanding and explaining them yet.
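
          Not the tutorial code itself, just a rough sketch of the gradient-saliency idea behind those highlights (assuming TensorFlow/Keras; `model` and the input shape are placeholders): the gradient of the winning class score with respect to the pixels tells you which pixels mattered.

          import tensorflow as tf

          def saliency_map(model, image):
              """image: a (1, H, W, 3) float tensor; returns per-pixel importance of shape (1, H, W)."""
              image = tf.convert_to_tensor(image)
              with tf.GradientTape() as tape:
                  tape.watch(image)
                  preds = model(image)
                  score = tf.reduce_max(preds[0])       # score of the predicted (top) class
              grads = tape.gradient(score, image)       # d(class score) / d(pixel)
              return tf.reduce_max(tf.abs(grads), axis=-1)  # collapse the color channels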

          • AnarchistArtificer@slrpnk.net · 23 days ago

            I’d have to check my bookmarks when I get home for a link, but I recently read a paper linked to this that floored me. It was research on visualisation of AI models and involved subject matter experts using an AI model as a tool in their field. Some of the conclusions the models made were wrong, and the goal of the study was to see how good various ways of visualising the models were — the logic being that better visualisations = easier for subject matter experts to spot flaws in the model’s conclusions instead of having to blindly trust it.

            What they actually found was that the visualisations made the experts less likely to catch errors made by the models. This surprised the researchers, and caused them to have to re-evaluate their entire research goal. On reflection, they concluded that what seemed to be happening was that the better the model appeared to explain itself through interactive visualisations, the more likely the experts were to blindly trust the model.

            I found this fascinating because my field is currently biochemistry, but I’m doing more bioinformatics and data infrastructure stuff as time goes on, and I feel like my research direction is leading me towards the explainable/interpretable AI sphere. I think I broadly agree with your last sentence, but what I find cool is that some of the “libraries” we are yet to build are more of the human variety, i.e. humans figuring out how to understand and use AI tools. It’s why I enjoy coming at AI from the science angle, because many scientists already use machine learning tools without any care for or understanding of how they work (and have done for years), whereas a lot of stuff branded AI nowadays seems like a solution in search of a problem.

        • wizardbeard@lemmy.dbzer0.com · 24 days ago

          But the actions taken by the model in the virtual environments can always be described as discrete steps. Each modification to the weights done by each agent in each generation can be described as discrete steps. Even if I’m fucking up some of the terminology, basic computer architecture enforces that there are discrete steps.

          We could literally trace each command that runs on the hardware that runs these things individually if we wanted full auditability, to eat all the storage space ever made, and to drive someone insane. Have none of you AI devs ever taken an embedded programming/machine language course? Never looked into reverse engineering of compiled executables?

          I understand that these things work by doing these steps millions upon millions of times, but there has to be a better middle ground for tracing these things than “lol i dunno, computer brute forced it”. It is a mixture of laziness and an unwillingness to let responsibility negatively impact profits that leads so many in the field to summarize it as literally impossible.

          • Match!!@pawb.social · 24 days ago

            There’s a wide range of “explainability” in machine learning - the most “explainable” model is a decision tree, which basically splits things into categories by looking at the data and making (training) an entropy-minimizing flowchart. Those are very easy for humans to follow, but they don’t have the accuracy of, say, a Random Forest Classifier, which is exactly the same thing done 100 times with different subsets.

            One flowchart is easy to look at and understand; 100 of them is 100 times harder. Neural nets are usually another 100 times harder still. The reasoning can be done by hand by humans (maybe), but there are no regulations forcing you to do it, so why would you?
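
            A small illustration of that gap (assuming scikit-learn; the iris dataset is just a convenient stand-in): one tree prints as a flowchart you can read, a forest is a hundred of them.

            from sklearn.datasets import load_iris
            from sklearn.tree import DecisionTreeClassifier, export_text
            from sklearn.ensemble import RandomForestClassifier

            X, y = load_iris(return_X_y=True)

            tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
            print(export_text(tree))                                # a short, human-readable flowchart

            forest = RandomForestClassifier(n_estimators=100).fit(X, y)
            print(len(forest.estimators_), "trees in the forest")   # 100 such flowcharts, each on a different subset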

          • 0ops@lemm.ee · edited · 24 days ago

            But the actions taken by the model in the virtual environments can always be described as discrete steps.

            That’s technically correct, but practically useless information. Neural networks are stochastic by design, and while Turing machines are technically deterministic, most operating systems’ random number generators will try to introduce noise from the environment (current time, input device data, temperature readings, etc.). So because of that randomness, those discrete steps you’d have to walk through would require knowing intimate details of the environment the PC was in at precisely the time it ran, which isn’t stored. And even if it was, or you used a deterministic pseudo-random number generator, you’d still essentially be stuck reverse engineering the world’s worst spaghetti code, written entirely in huge matrix multiplications, code that we already know can’t possibly be optimal anyway.

            If a piece of software needs guaranteed optimality, then a neural network (or any stochastic algorithm) is simply the wrong tool for the job. No need to shove a square peg into a round hole.

            Also I can’t speak for AI devs, in fact I’ve only taken an applied neural networks course myself, but I can tell you that computer architecture was like a prerequisite of a prerequisite of a prerequisite of that course.
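
            As a side note on the determinism point, a small sketch (assuming PyTorch on CPU; the toy model is a placeholder): a fixed seed does make the run repeatable in principle, but all that buys you is the same mountain of matrix math to stare at.

            import torch
            import torch.nn as nn
            import torch.nn.functional as F

            def train_once(seed):
                torch.manual_seed(seed)               # deterministic pseudo-random init and data
                model = nn.Linear(4, 1)
                x, y = torch.randn(32, 4), torch.randn(32, 1)
                opt = torch.optim.SGD(model.parameters(), lr=0.1)
                for _ in range(10):
                    opt.zero_grad()
                    F.mse_loss(model(x), y).backward()
                    opt.step()
                return model.weight.detach()

            print(torch.equal(train_once(0), train_once(0)))  # True: same seed, same weights
            print(torch.equal(train_once(0), train_once(1)))  # False: different seed, different weights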

          • magic_lobster_party@kbin.run · edited · 23 days ago

            It’s true that each individual building block is easy to understand. It’s easy to trace each step.

            The problem is that the model is like a 100 million line program with no descriptive variable names or any comments. All lines are heavily intertwined with each other. Changing a single line slightly can completely change the outcome of the program.

            Imagine the worst code you’ve ever read and multiply it by a million.

            It’s practically impossible to use traditional reverse engineering techniques to make sense of the AI models. There are some techniques to get a better understanding of how the models work, but it’s difficult to get a full understanding.

      • magic_lobster_party@kbin.run · 24 days ago

        This whole “we can’t explain how it works” is bullshit

        Mostly it’s just millions of combinations of y = k*x + m with y = max(0, x) in between. You don’t need more than high school algebra to understand the building blocks.

        What we can’t explain is why it works so well. It’s difficult to understand how the information is propagated through all the different pathways. There are some ideas, but it’s not fully understood.
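
        Spelled out with plain NumPy (the sizes are arbitrary): each block really is just y = k*x + m in matrix form, with max(0, x) in between; real models stack millions of these.

        import numpy as np

        def relu(x):
            return np.maximum(0, x)   # the y = max(0, x) part

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(32, 8)), rng.normal(size=32)   # "k" and "m" for layer 1
        W2, b2 = rng.normal(size=(1, 32)), rng.normal(size=1)    # "k" and "m" for layer 2

        x = rng.normal(size=8)
        y = W2 @ relu(W1 @ x + b1) + b2   # high-school algebra, stacked
        print(y)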

        • Match!!@pawb.social · 24 days ago

          ??? It works well because we expect the problem space we’re searching to be continuous and differentiable, and the targeted variable to be dependent on the given features. Why wouldn’t it work?

          • magic_lobster_party@kbin.run · edited · 23 days ago

            The explanation is not that simple. Some model configurations work well. Others don’t. Not all continuous and differentiable models cut it.

            It’s not a given that a model can generalize the problem so well. It could just memorize the training data but completely fail on any new data it hasn’t seen.

            What makes a model able to see a picture of a cat it has never seen before and respond with “ah yes, that’s a cat”? What kind of “cat-like” features has it managed to generalize? Why do these features work well?

            When I ask ChatGPT to translate a script from Java to Python, how is it able to interpret the instruction and execute it? What features has it managed to generalize to be able to perform this task?

            Just saying “why wouldn’t it work” isn’t a valid explanation.
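
            To put one version of the memorize-vs-generalize question in code (assuming scikit-learn; the digits dataset and the model are stand-ins): training accuracy alone can’t tell you whether useful features were learned or the training set was memorized; the held-out gap is where the question lives.

            from sklearn.datasets import load_digits
            from sklearn.model_selection import train_test_split
            from sklearn.tree import DecisionTreeClassifier

            X, y = load_digits(return_X_y=True)
            X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

            model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # deep enough to memorize
            print("train accuracy:", model.score(X_train, y_train))  # ~1.0 on data it has seen
            print("test accuracy: ", model.score(X_test, y_test))    # lower on unseen data; that gap is the question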

    • ZILtoid1991@lemmy.world · 23 days ago

      But it’s called neural network, and learns just like a human, and some models have human names, so it’s a mini-digital human! Look at this picture that detects things in an image, it’s smart, it knows context!