There are topics worth debating concerning AI use, but in the end art itself is always subjective. What’s funny is that AI has gotten to a level now where, if used carefully, it might fly under the radar of AI critics, while those same critics attack artists who aren’t using AI but have a style that seems “fake” to them.
Wonder what the audio version sounds like?
If they’re asking for them, they don’t know what they are. Therefore they don’t exist.
If they require you to make one to fill it out, then there are other jobs out there.
selectivity based on probability rather than weighing evidence
I don’t follow this, but an LLM’s whole “world” is basically the prompt it’s fed. It can “weigh” that, but then how does one choose what’s in the prompt?
Some describe it with the analogy of an autocompleter backed by a very big database. LLMs are more complex than just that, but that’s the idea: when the model looks at the prompt and the context of the conversation, it’s choosing the best match of words to fulfill that prompt. My point was that the best word or phrase completion doesn’t mean it’s the best answer, or even a right one. It’s just the most probable given the huge training data. If that data is crap, the answers are crap. Having Wikipedia as the source, and presumably the only source, is better than pulling from many other places on the internet, but that doesn’t guarantee the answers that pop up will always be correct or the best of the possible answers. They’re just the most likely based on the data.
It would be different if it were AGI, because by definition it would be able to find the best data based on the data itself, not text probability, and could look at anything connected, including the discussion behind the article, and make a judgement on how solid the information is for the prompt in question. We don’t have that yet. Maybe we will, maybe we won’t, for any number of reasons.
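A minimal sketch of that autocompleter analogy, assuming a toy corpus and greedy word-picking of my own invention (real LLMs use learned weights over tokens, not lookup tables):

```python
# A toy "autocompleter with a big database": always pick the most
# frequent next word seen in training, regardless of whether it's true.
from collections import Counter, defaultdict

corpus = (
    "the capital of australia is sydney . "    # frequent but wrong
    "the capital of australia is sydney . "
    "the capital of australia is canberra . "  # right but rarer
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt, steps=3):
    words = prompt.split()
    for _ in range(steps):
        # Most probable continuation, not the most correct one.
        words.append(bigrams[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(complete("the capital of australia"))
# -> "the capital of australia is sydney ." : crap data in, crap answer out
```

Greedy argmax is the crudest possible decoder, but it makes the point: the completion rewards frequency in the data, not truth.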
I think 2 were bought out in a merger recently.
An LLM with a curated source is a lot better than the other major ones, but it still has the issue of selectivity based on probability rather than weighing evidence (unless it does that, which would be huge). That matters because people are naturally gullible and believe the first thing they read, especially if it’s presented as if “someone” has validated it for them.
But the good part is that both DDG and Firefox made it both obvious and easy to disable the AI.
Except Feynman did answer in the end, or at least gave us an idea of what’s going on without diving into the hard physics. The journey there was to teach us that asking questions doesn’t always lead to a simple answer, and can lead to more questions.
Trump probably got two of those very strong neodymium magnets together and can’t get them apart, so now he’s confused and pissed at China because that’s where they were bought.
Started with Netscape core, won’t deviate.
Things I’m not surprised about:

- that someone remembering would mention it
- that the home site for it looks like it’s from 1997

Things I’m surprised about:

- Lynx is still supported (the oldest browser that is)
- that the latest version number is so low
- that someone mentioning it wouldn’t also say they use Arch btw
This works thanks to the default setting being “I fit, I sit” for any cardboard surface.
Many AI pictures use bokeh, the out-of-focus background, to avoid the problems with details in anything except the main subject. They probably don’t realize that this is also a common technique in real photography, produced with a wide aperture.
Or maybe it was just a drive-by who labels anything as AI slop.
The best version of peer-reviewed.
A civilization that can add enough gases to Mars to create something close to Earth’s atmosphere isn’t concerned with minor maintenance like that. A small comet body’s worth every hundred years (if even that often) is child’s play.
The next hard part is the dust. Lunar and Martian dust is a huge problem to overcome, and something we don’t have to deal with here. Then there’s radiation, although there are things we can do to lessen that problem.
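For scale on the maintenance point above, a hedged back-of-envelope: the ~100 g/s escape rate is my assumption from MAVEN-era estimates of present-day Mars (a thick terraformed atmosphere would behave differently), and the comet density is likewise illustrative:

```python
import math

# Assumed numbers, for illustration only.
escape_rate_kg_s = 0.1   # ~100 g of atmosphere lost per second (MAVEN-era ballpark)
density_kg_m3 = 500.0    # loosely packed ice/dust comet

seconds_per_century = 100 * 365.25 * 24 * 3600
lost_kg = escape_rate_kg_s * seconds_per_century
print(f"atmosphere lost per century: {lost_kg:.2e} kg")  # ~3.2e8 kg

# Radius of a single icy body holding that much mass.
radius_m = (3 * lost_kg / (4 * math.pi * density_kg_m3)) ** (1 / 3)
print(f"equivalent comet radius: {radius_m:.0f} m")      # ~53 m radius: tiny
```

Even if the real loss rate were ten times higher, the required body only grows by the cube root of ten, roughly double the radius, so “child’s play” holds up.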
Rhaedas@fedia.io to Games@lemmy.world • Square Enix says it wants generative AI to be doing 70% of its QA and debugging by the end of 2027
9 points · 12 days ago
No better testing than in production.
Everyone who compares growth here (here being very relative, considering how federation works) vs. the idealized Reddit is forgetting something: age. You don’t get peak Reddit by looking at its first years, and yet you’re looking at the literal first years of Lemmy and company and saying it’s not comparable. No, it’s not.
That doesn’t mean there shouldn’t be constant discussion on improving and growing communities for better discussion, but the whole “oh no, the numbers are low” is ridiculous. Aside from both being aggregated discussion formats, this is like comparing apples and cars. Reddit shouldn’t be a goal or benchmark; discussion flow here should be. I’ll be more worried about stagnation when my feed numbers drop back to those of the first few months, when there was concern about whether federation would even work well. (Improving federation/defederation is also a great topic to talk about; it isn’t perfect, but it’s far better than it was.)
If they weren’t orange, you’d be in trouble. But what looks like planning a murder is just them trying to get the single shared brain cell of theirs synced up again.
Rhaedas@fedia.io to Programming@programming.dev • A thought on the useful inefficiency of reading the docs
71 points · 13 days ago
I’ve only found success in LLM code (local) with smaller, more direct sections, probably because it’s pulling the most-repeated solutions to such queries from its training data. For that it’s like a much better Google lookup filter that usually gets to the point faster. But with longer code (and it always wants to give you full code) it will start to drift and pull things out of the void, much like hallucination in creative text, except in code it’s obvious.
Because it doesn’t understand what it’s telling you. Again, it’s a great way to mass-filter Stack Overflow and Reddit answers, but remember that in the past, searching through those could work well or be a nightmare. Just like then, don’t take any answer and just plug it in; understand why it might or might not be a working solution.
It’s funny: I’ve learned a lot of my programming knowledge through the decades by piecing things together and, in debugging my own or others’ code, figuring out what works. Not the greatest way to do it, but I learn best through necessity rather than without a purpose. But with LLM coding that goes wild, debugging has its limits, and there have been minor things I’ve just thrown out and started over, because the garbage I was handed was total BS wrapped up in colorful paper.
“Can we just roll for falling rocks, please?”
Never throw a meme together in a hurry; it’s like typing a reply and hitting post before you check what you wrote.
Euro Gray is a weird one. Given the naming conventions for gray/grey, it ought to be “grey”. Must be a US color referring to European styling.