Technically, almost all of Antarctica is located north of the south pole
Me? Reading that there’s a drop-in replacement function for the one that was deprecated, in the error message? Why I’d never!
Maybe the onus should be on LLM developers to filter out trash like this from their training datasets
At any rate, it’s extremely unhelpful not to include a version number at the very least.
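For what it’s worth, here’s a minimal sketch of the kind of deprecation message being praised here, one that names both a drop-in replacement and the versions involved. It assumes Python’s standard warnings module; old_resize/new_resize are hypothetical names for the example.

```python
# A minimal sketch of a helpful deprecation warning.
# old_resize/new_resize are hypothetical names, not from any real library.
import warnings

def new_resize(img, size):
    """Stand-in replacement implementation for the example."""
    return (img, size)

def old_resize(img, size):
    warnings.warn(
        "old_resize() is deprecated since v2.3 and will be removed in v3.0; "
        "use new_resize(), a drop-in replacement.",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not this shim
    )
    return new_resize(img, size)
```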
Ashelyn@lemmy.blahaj.zone to Science Memes@mander.xyz · “That’s why it’s called science fiction duh” · 4 months ago
Your eugenic sentiments aside, if you want people to have fewer babies, you don’t just tell them to stop fucking; you teach them how to use contraception and make it as accessible as possible.
.loc and .iloc queries are a fun syntax adventure every time
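For the uninitiated: the adventure comes from .loc selecting by index label while .iloc selects by integer position, and the two quietly disagree whenever the index isn’t the default 0..n-1. A toy example (the data and names are made up for illustration):

```python
# Toy example of the .loc/.iloc gotcha: with a shuffled integer index,
# label-based and position-based lookups pick different rows.
import pandas as pd

df = pd.DataFrame({"value": [10, 20, 30]}, index=[2, 0, 1])

print(df.loc[0, "value"])   # label-based: the row *labeled* 0 -> 20
print(df.iloc[0]["value"])  # position-based: the *first* row -> 10
```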
I guess you could consider someone who is staunchly whitehat with no exceptions to have a creed/code, where they consider the rules transcendent of any specific situation (e.g. nazi websites).
Ashelyn@lemmy.blahaj.zone to Android@lemdro.id · “Google will now automatically revoke permissions from harmful Android apps” · 5 months ago
Oh my bad. According to another commenter it is sandboxed though.
Ashelyn@lemmy.blahaj.zone to Android@lemdro.id · “Google will now automatically revoke permissions from harmful Android apps” · 5 months ago
They have Google services, but through a third party wrapper called MicroG, which keeps it ~~sandboxed to a degree that you can keep it from doing system-level actions like this~~

edit: not microG, as evidenced by the strikethrough I put in very soon after receiving the first of several replies clarifying the situation. I would encourage you to read one of them before adding your own. <3
Well, no.
In scenario A they are instantly vaporized. In scenario B they are brutally sliced into multiple pieces and crushed to death, rather painfully depending on the speed of the trolley.
You are on track A and the bomb is within sight. If you get the shit end of the 50/50, everyone in the diagram would be vaporized instantly
Ashelyn@lemmy.blahaj.zone to I Made This (MOVED TO LEMMY.ZIP)@lemm.ee · “Because I couldn’t give straight spoons to my gay friends” · 6 months ago
Those look like they could be quite ergonomic, but it’s a bit hard to tell just from looking at them.
Ashelyn@lemmy.blahaj.zone to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com · “When corporations scrape academic papers, it’s justified. When individuals do it, it’s inexcusable.” · 8 months ago
People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.
Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping off point for someone who doesn’t know where to start. My concern is with the cultural issues and expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it’s possible to shoehorn through.
Addendum: local models can help with this issue, as they’re on one’s own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.
Ashelyn@lemmy.blahaj.zone to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com · “When corporations scrape academic papers, it’s justified. When individuals do it, it’s inexcusable.” · 8 months ago
On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased, and trustworthy to people.
The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.
ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.
I always found the idea of stable Boltzmann brains fascinating: the idea that, in a sufficiently vast universe, there must exist self-sustaining minds that function on an entirely circumstantial set of rules and logic, based on whatever the quantum soup spit up.
It’s also hard to argue while claiming your god is moral, which is why creationists usually pin the task of planting fossils on Satan.
I always found it funny how they’ll sometimes try to justify their claims scientifically to give them an air of legitimacy. If god created the stars close to one another and expanded them to fill the sky over a single day, the skies would still be dark for billions of years, since light from the now-distant stars wouldn’t have had time to reach us. A YEC could easily say “oh well, god put the light there to make the stars look like they’ve been in the sky for a long time,” but very often they just don’t have an answer because they didn’t think of one. Unfortunately, there’s almost nothing that will stop them from doubling down on their beliefs and just becoming more prepared for the next person they talk to.
Ashelyn@lemmy.blahaj.zone to Technology@beehaw.org · “Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says” · 9 months ago
Ideally, I agree wholeheartedly. American gun culture multiplies the damage of every other issue we have by a lot.
Ashelyn@lemmy.blahaj.zone to Technology@beehaw.org · “Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says” · 9 months ago
One or more parents in denial that there’s anything wrong with their kids, and/or with the idea that they need to take gun storage seriously? That’s the first thing that comes to mind, and it’s not uncommon in the US. And since a lot of gun rhetoric revolves around self-defense during an emergency or home invasion, not having at least one gun readily available defeats the main purpose in their minds.
edit: meant to respond to django@discuss.tchncs.de
Ashelyn@lemmy.blahaj.zone to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com · “Internet Archive’s support email has now been compromised” · 9 months ago
90 days to cycle private tokens/keys?
u will become crab one way or another 🦀