• Rhaedas@kbin.social · 1 year ago

    Hallucinations come from training weights being tuned to produce a satisfactory-sounding answer, not necessarily a correct one. A future AGI, or LLMs guided by one, could look at the human responses and work out why the answers weren’t good enough, but current LLMs can’t do that. I will admit I don’t know how the longer-memory versions work, but there’s still no actual thinking; it’s possibly just wrapping up previously generated text along with the new request to steer the next answer closer.
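    The “wrap up previous text with the new request” idea could be sketched roughly like this. Everything here is hypothetical (`fake_complete` and `ChatSession` are made-up names standing in for a real model call), just to illustrate how accumulated context can shape later answers without any reasoning in between:

    ```python
    # Hypothetical sketch of "longer memory" as plain context accumulation:
    # prior exchanges are prepended to each new request, so earlier text
    # influences the next completion. `fake_complete` is a stand-in for
    # a real LLM call, purely illustrative.

    def fake_complete(prompt: str) -> str:
        # Placeholder: a real system would call a language model here.
        return f"answer({len(prompt)} chars of context)"

    class ChatSession:
        def __init__(self):
            self.history: list[str] = []  # accumulated conversation text

        def ask(self, request: str) -> str:
            # "Memory" is just concatenation: old text + new request.
            prompt = "\n".join(self.history + [request])
            reply = fake_complete(prompt)
            self.history.extend([request, reply])
            return reply

    session = ChatSession()
    first = session.ask("hello")
    second = session.ask("follow-up")
    # The second prompt already contains the first exchange, so earlier
    # output steers later answers with no actual thinking in between.
    ```

    Real systems layer summarization or retrieval on top of this, but the basic mechanism is the same: influence via included text, not deliberation.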

    • jadero@programming.dev · 1 year ago

      I wonder how creative these things are. Somewhere between “hallucination” and fully verifiable correct answers based on current knowledge, there might be a “zone of creativity.”

      I would argue that there is no such thing as creating something entirely from nothing. Every advance builds on work that came before, often combining bodies of knowledge from disparate fields to discover new insights.

      • Rhaedas@kbin.social · 1 year ago

        Where their creativity lies at the moment seems to be a controlled mixing of previous things. In some areas that fits the definition of creativity, such as with artistic images or some literature; less so with things that require precision to work, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is actually (usually) thinking throughout the process about what to add and what to take away. Right now, human feedback on the results is still important. I can’t think of any example where we’ve successfully unleashed LLMs into the world confident enough in their output not to filter it. It’s still only a tool of generation, albeit a very complex one.

        What’s troubling throughout the whole explosion of LLMs is how the safety of their potential is still an afterthought, or a “we’ll figure it out” mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I’m not even sure I can say they haven’t sparked something, as an AGI that gains awareness fast enough sure isn’t going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…