Which can be further summarized: academics (🙋🏻) are basically a bunch of idiotic sheep, despite being in academia.
See also https://pluralistic.net/2024/08/16/the-public-sphere/#not-the-elsevier
Really embarrassing for the journals that published the papers too – and they're just as guilty. They charge ridiculously massive amounts of money to publish articles (the publication cost of a single article easily surpasses the price of a high-end business laptop), and they don't even check them properly?
Yeah, to me too. I'm not clicking on that "Download client" link, for sure.
The summary I just read sounds great, thanks for the tip!
You brought back memories and I got interested. Interesting reading about privacy:
https://www.irchelp.org/security/privacy.html
How much of it is true?
Agree (you made me think of the famous face on Mars). I meant that more as a joke. Also, there's no clear threshold or divide on one side of which we can speak of "human intelligence". There's a whole range from impairing disabilities to Einstein and Euler – if it even makes sense to use a linear 1D scale, which it very probably doesn't.
Title:
ChatGPT broke the Turing test
Content:
Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test. […]
researchers […] reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time
Complete contradiction. Trash Nature – it's become nothing but an extremely expensive gossip science magazine.
PS: The Turing test involves comparing a bot with a human (not knowing which is which). So if more and more bots pass the test, this can be the result either of an increase in the bots’ Artificial Intelligence, or of an increase in humans’ Natural Stupidity.
Got one! XNA. Here's an article example (boo, behind a paywall).
I’m not fully sure about the logic and perhaps hinted conclusions here. The internet itself is a network with major CSAM problems (so maybe we shouldn’t use it?).
I don’t know what you have in mind with “trustworthy”, and about what, so maybe this comment is worthless for you. But I’ve been using their cloud storage for several years (like other commenters here), for work-related files, and to sync them between computers and phone. Their syncing system and apps are actually great. No complaints on my part.
What’s sad and superficial is that these kinds of restrictions and bans just cover a symptom but don’t cure the problem. Maybe they even make it worse. We need an overhaul of our cultural foundation and educational system.
I agree that the wording is inaccurate, but some of the essence remains: the second “service” is forced on you. It’s somewhat as if anyone with a Fakebook account also automatically had a Whatsapp or Instagram account, or some permutation of this.
It’s utter bullshit from the very start. First, it isn’t true that the Ricci curvature can be written as they do in eqn (1). Second, in eqn (2) the Einstein tensor (middle term) cannot be replaced by the Ricci tensor (right-hand term), unless the Ricci scalar R is zero, which only happens when there’s no energy. They nonchalantly do that replacement without even a hint of explanation.
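A quick way to see the problem (standard notation, units with c = 1; this is a sketch of the textbook definitions, not the paper's own derivation):

```latex
% Einstein tensor, by definition:
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu}
% so G_{\mu\nu} = R_{\mu\nu} holds only if R = 0.
% Tracing the field equations G_{\mu\nu} = 8\pi G\, T_{\mu\nu}
% with g^{\mu\nu} (using g^{\mu\nu} g_{\mu\nu} = 4) gives
-R = 8\pi G\, T \quad\Longrightarrow\quad R = -8\pi G\, T,
% so R vanishes only when the stress-energy trace T vanishes.
```

In other words, dropping the −½ R g<sub>μν</sub> term silently assumes a traceless stress–energy tensor, and the paper never says so.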
Elsevier and ScienceDirect should feel ashamed. They can go f**k themselves.