• 0 Posts
  • 164 Comments
Joined 2 years ago
Cake day: July 1st, 2023


  • Light debugging I actually use an LLM for. Yes, I know, I know. But when you know it’s a syntax issue or something simple, yet a quick skim through produces no results, AI be like, “Used a single quote instead of a double quote on line 154, so it’s using a string literal instead of referencing the value. Also, there’s a typo in the source name on line 93, because you spelled it like this everywhere else.”

    By design, LLMs do be good for syntax, whether a natural language or a digital one.

    Nothing worse than going through line by line, only to catch the obvious mistake on the third “Am I losing my sanity?!” run through.
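    The quote mix-up described above is easy to reproduce. A minimal sketch (hypothetical table and column names, using SQLite, where single quotes make a string literal and double quotes reference an identifier):

    ```python
    import sqlite3

    # Hypothetical schema to illustrate the single- vs double-quote bug:
    # in SQL, 'name' is a string literal, "name" refers to the column.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (name TEXT)")
    con.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

    # Bug: single quotes return the literal text 'name' for every row
    literal = [row[0] for row in con.execute("SELECT 'name' FROM users")]

    # Fix: double quotes return the column's actual values
    values = [row[0] for row in con.execute('SELECT "name" FROM users')]

    print(literal)  # ['name', 'name']
    print(values)   # ['alice', 'bob']
    ```

    The query runs fine either way, which is exactly why the bug survives three manual read-throughs.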

  • saltesc@lemmy.world to cats@lemmy.world · No issues here
    17 days ago

    I never thought of it that way.

    battle snare drum to a montage of buying Super Soakers, ending with a one-liner to camera…

    …It’s time to get some pussies wet… and witches. But mainly the pussies because the aliens. Damn it, I ruined the one-liner.



  • saltesc@lemmy.world to cats@lemmy.world · No issues here
    17 days ago

    Everyone here balancing the ethics of getting wet like it’s assault.

    Water melts snowflakes and wicked witches, everyone else need not worry.

    All living things should be used to being wet either all the time or somewhat regularly. To think beyond that, wow, society has its teeth in you and you are lost.


  • We can, but it’s a lot of effort and time. Good AI requires a lot of patience and specificity.

    I’ve sort of accepted the gimmick of LLMs being a bit of a plateau in training. It has always been that we teach AI to learn, but currently the public has been exposed to what they perceive to be magic and that’s “good enough”. Like, being wrong so often due to bad information, bad interpretation of information, and bias within information is acceptable now, apparently. So teaching to learn isn’t a high mainstream priority compared to throwing in mass information instead—it’s far less exciting working on infrastructure.

    But here’s the cool thing about AI: it’s pretty fucking easy to learn. If you have the patience and creativity to put toward training, you can do what you want. Give it a crack! But always be working on refining it. I’m sure someone out there right now has been inspired enough to do what you’re talking about, and after a few years of tears and insane electricity bills, there’ll be a viable model.


  • Yeah, if you get too far in or give it too much to start with, it can’t handle it. You can see this with visual generators. “Where’s the lollipop in its hand? Try again… Okay, now you forgot about the top hat.”

    Have to treat them like simple interns that will do anything to please rather than admit the task is too complex or they’ve forgotten what they were meant to do.


  • saltesc@lemmy.world to Programmer Humor@programming.dev · Efficiency
    25 days ago

    I use Claude for SQL and PowerQuery whenever I brain fart.

    There’s more usefulness in reading its explanation than its code, though. It’s like bouncing ideas off someone, except you’re the one that can actually code them. Never bother copying its code unless it’s a really basic request where pasting is quicker than typing it yourself.

    Bad quality and mass quantity in is obviously much quicker for LLMs, and people who don’t understand the tech behind AI don’t realise this is actually what’s going on, so it’s “magic”. A GPT is fundamentally quite simple and produces simple results full of potential issues; combine that with poor training quality and, “gross”. There are minimal check iterations it can do, and how would it even do them when its knowledge base is more bullshit than quality?

    Truth is, it will be years before AI can reliably code. Training for that requires building a large knowledge base of refined, working solutions covering many scenarios, with explanations, to train off. It’d take even longer for AI to self-learn these without significant input from the trainer.

    Right now you can prompt the same thing six times and hope it manages a valid solution in one. Or just code it yourself.
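    One cheap guard when you do take the prompt-six-times route (a minimal sketch, assuming SQLite and a hypothetical stand-in schema, not anything Claude provides): ask the database to plan the generated query with EXPLAIN before trusting it, so at least syntactically broken attempts identify themselves.

    ```python
    import sqlite3

    def parses_ok(query: str) -> bool:
        """Return True if SQLite can at least plan the query against a
        stand-in schema, without actually running it."""
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE users (name TEXT)")  # hypothetical schema
        try:
            con.execute("EXPLAIN " + query)
            return True
        except sqlite3.Error:
            return False

    print(parses_ok("SELECT name FROM users"))   # True
    print(parses_ok("SELEC name FORM users"))    # False
    ```

    This only catches syntax and missing-identifier errors, of course; a query that parses can still be logically wrong, which is the part you still have to read yourself.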


  • Haha, I just responded to another comment about having to pull myself up from shallow drowning. It’s for real, but I think it’s specific to people with good lung capacity, those who can go a lot longer than average. I can easily hold my breath for 60s, but 90% of people can’t. Shallow drowning is not a situation 90% of people could find themselves facing.

    I always remember that brain damage can start occurring after 180s, so I start questioning things at 120. Nothing wrong with coming up for a couple minutes of good fresh air before going down again.


  • It’s not really about what you can see and clarity, but it’s true that clearer water is much more psychologically inviting.

    We’ll bring a smooth granite pebble out with us, while waiting for the swell, drop it down and take turns bringing it back up. We’ve had dolphins join us in the game once before.

    But I definitely don’t feel the appeal of diving down and doing that when the sky or water is dark and unclear. It’s less inviting.