I can imagine it to be the opposite.
Maybe irritant tears have less protein so they don’t clog your vision when you’re in a fight or threatened?
Unfortunately the spam arms race has destroyed any chance of search going back to the good ole days. SEO and AI content farms mean we’ll need a whole new system to categorize webpages, as well as filter out human-sounding but low-effort spam.
Point being, it’s no longer enough to find a page that’s relevant to the topic; it has to be relevant and actually deliver information, and currently the only feasible tech that can differentiate those is LLMs.
Listen there’s definitely enough carbon in the body to boost that into a steel sword.
If we can make diamonds out of corpses, we can make steel.
Can’t wait for FOSS brain implants, it would still be hellish but a fun kind of hellish.
I want someone like Linus Torvalds to verbally abuse someone for not understanding basic computational neuroscience.
You would like Global Workspace Theory, basically says your consciousness is the result of components of the brain broadcasting their information to the whole.
I also like Integrated Information Theory, which measures the conscious experience of a system by how integrated it is, meaning you can’t reduce the system to the sum of its parts without losing the emergent properties.
Yeah I love Foundry, but I’m convinced the DM needs technical knowledge to use it. I ran a server for a non-tech-savvy DM and it was like working customer service.
With plenty of investment you can get the tabletop to be almost exactly what you want it to be, and for a popular system like 5e you can make it as automated as a Baldur’s Gate game. You just need to download a lot of modules to get there and customize a lot of settings. Without that it just becomes a less intuitive Roll20.
And I must stress from experience, never offer to host/troubleshoot a server for someone else, especially if the DM likes to complain or can’t handle minor technical setbacks.
I’m curious, are there actually that many 42s in the system? (more than 69 sounds unlikely)
What if the LLM is getting tripped up because 42 is always referred to as the answer to “the Ultimate Question of Life, the Universe, and Everything”.
So you ask it a question like “give a number between 1 and 100”, and it answers 42 because that’s the answer to “Everything”, according to its training data.
Something similar happened to Gemini. Google discouraged Gemini from giving unsafe advice because it’s unethical. Then Gemini refused to answer questions about C++, because the language is often described as “unsafe” (referring to manual memory management). Gemini conflated that technical meaning with the everyday meaning of “unsafe”, concluded C++ must be unethical, and refused. It’s like those jailbreak tricks, but triggered by its own training data.
deleted by creator
If you want barebones Windows I’d suggest you cough cough obtain Windows 10 LTSC.
It’s got most of the bloatware cut out, you just have to re-enable the old-style picture viewer.
Though when I eventually make a new PC, I’m probably just gonna use Linux Mint because I hear running Windows games/software isn’t nearly as bad nowadays, thanks Steam.
From what I know, most raptors had feathers and that’s where birds came from.
The broader group of theropods, including the T-Rex, had a precursor to feathers literally called “Dinofuzz”.
All other kinds of dinosaurs were, I believe, actually scaly like we thought.
That “rated 3 and up” is killing me.
Finally a solution for those jobless deadbeat infants.
Just to clarify I am implying the medical provider would be the one sued. I didn’t think ChatGPT would be in the wrong.
ChatGPT has just done a great job revealing how lazy and poorly thought out people are all over.
Typically for the AI to do anything useful you’d copy and paste the medical records into it, which would be patient data.
Technically you could expunge enough data to keep it in line with HIPAA, but if people are careless enough not to proofread their paper, then I doubt they would prep the data correctly either.
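As a rough illustration of what “prepping the data” would even mean, here’s a minimal Python sketch that blanks a few obvious identifier patterns before any text leaves your machine. The patterns, labels, and sample note are my own invention and nowhere near the full list of HIPAA identifiers (names and free-text dates alone need far smarter handling), so treat it as a toy, not compliance tooling.

```python
import re

# Hypothetical minimal de-identification pass -- NOT a substitute for
# real HIPAA-compliant tooling. It only blanks a few obvious patterns.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt MRN: 483920, seen 03/14/2024, cb 555-867-5309."
print(redact(note))  # -> Pt [MRN], seen [DATE], cb [PHONE].
```

The point of the sketch is that redaction has to happen before the copy-paste into the chatbot, which is exactly the step a careless user skips.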
Come to think of it, I wonder if using ChatGPT violates HIPAA, because it sends the patient data to OpenAI?
I smell a lawsuit.
May I introduce you to our lord and savior JavaScript?
I thought you were being too cynical, because plenty of plants evolved this technique. But then I realized that because of AI I have absolutely no idea whether they’re real, unless I spend time I don’t have researching it.
This is amazing! Now we’ll be able to free humans from manual lab-
Cotton Gin gets invented
Not enough negative mass
I feel like the people getting upset over this are leaning on hypotheticals like “these are just young-looking adults who want to be able to make porn equally, and technically the community did nothing wrong.”
The problem is that ignores the fact that pedophiles would definitely use communities like that as a “foot in the door”, since such a community would naturally have a lot of closeted pedophiles. The issue isn’t young-looking adults making porn; the issue is that a community built around the youngest-possible-looking adults is naturally gonna attract and encourage pedophiles.
It’s like they say, “all it takes is allowing one nazi in your bar for it to rapidly turn into a nazi bar”.
deleted by creator