Over 15% market share in India
~35 million concurrent active users.
Long-time “old-school” kernel maintainers don’t know Rust and don’t want to learn Rust (completely fair and reasonable). But some of them also don’t want to work with the Rust folks, for a lot of technical reasons.
Technically, it’s far from an easy situation. This is a huge challenge.
But some of those old-school C guys are being vocal about their dislike of Rust in the kernel and gatekeeping the process. This came to a head at a recent conference (Linux Plumbers Conference?) and now one of the Rust maintainers has quit.
The big technical challenge is being compounded by personal opinions.
Bad argument.
It would hold water if their solution were proprietary and closed source. But it isn’t, and anyone else, literally anyone, can take Proton and use it in their own project, even for profit.
Even if they closed shop tomorrow, or even just gave up work on Proton itself, we’d all still reap the benefits at no cost to us.
Epic has exclusivity on release
Wait, really? It’s officially off my list now. Screw those guys.
Find me another company that supports open source and Linux the way Valve does… I’ll wait
No digital game store is worth your loyalty.
When that store is run by a company that contributes massively to open source, and works harder and puts more money into enabling alternate platforms for gaming than all other companies combined, then yes, they have my loyalty.
I would love to see reasonable competition to Steam, which would give consumers and developers better options.
No one’s going to compete with and outdo Steam with Linux support.
But it could also be for legal reasons, like websites where you can post stuff for everybody to see, in case you post something highly illegal and the authorities need to find you. Another example is where a webshop is required to keep a copy of your data for their bookkeeping.
None of these require your account to “exist”. There could simply be an acknowledgement stating those reasons with “after X days the data will be deleted, and xyz will be archived for legal reasons”.
Usually they keep your data for 30–90 days, just in case somebody else decided to delete your account, or you did it while drunk or something.
This is the only valid reason. But even then this could be stated so that the user is fully aware. Then an email one week and another one day before deletion as a reminder, and a final confirmation after the fact. I’ve used services before that do this. It’s done well and appreciated.
This pseudo-deletion shadow account stuff is annoying.
What the user was doing: they didn’t trust that the system had truly deleted the account, worrying that it had only been deactivated (while being claimed as “deleted”). So they attempted a password recovery, which often reactivates a falsely “deleted” account.
I’ve done this before and had to message the company and have them confirm the account is entirely deleted.
I feel like nowadays there’s less forums or places people can ask help with
I’m sorry, what??
There are more places than ever to find support. The Ubuntu forums, EndeavourOS forums, Manjaro forums, NixOS forums, SUSE forums, etc. Just about every larger distro has its own forum, and they’re all very active. Then there are general Linux, Linux “newbie”, and Linux help communities on the various Lemmy servers and (whether you like it or not) on Reddit as well. Then there’s Mastodon, and general tech forums like Level1Techs, Hacker News, etc.
Using Relational DBs where the data model is better suited to other sorts of DBs.
This is true if most or all of your data fits that model. But when you have only a few bits of such data here and there, it’s still better to stay with the RDB.
For example, in a surveillance system (think Blue Iris, Zone Minder, or Shinobi) you want to use an RDB, but you’re going to have to store JSON data from alerts as well as other objects within the frame when alerts come in. Something like this:
{
  "detection": {
    "object": "person",
    "time": "2024-07-29 11:12:50.123",
    "camera": "LemmyCam",
    "coords": {
      "x": 23,
      "y": 100,
      "w": 50,
      "h": 75
    }
  },
  "other_objects": [
    <repeat the "detection" object format multiple times>
  ]
}
While it’s possible to store this in a flat format in a table, the question is why you would want to. Postgres’ JSONB datatype will store the data as efficiently as anything else, while also making it queryable. This gives you the advantage of not having to rework the table structure if you need to expand the type of data points used in the detection software.
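As a rough sketch of what “queryable JSON in a relational table” looks like, here’s a minimal Python example. It uses the stdlib sqlite3 module and SQLite’s json_extract function as a stand-in (assuming a SQLite build with the JSON functions, which recent versions include by default); in Postgres you’d use a JSONB column with the -> / ->> operators instead.

```python
import json
import sqlite3

# Hypothetical detection event, in the same shape as the example JSON above.
event = {
    "detection": {
        "object": "person",
        "time": "2024-07-29 11:12:50.123",
        "camera": "LemmyCam",
        "coords": {"x": 23, "y": 100, "w": 50, "h": 75},
    },
    "other_objects": [],
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO alerts (payload) VALUES (?)", (json.dumps(event),))

# Query inside the stored JSON without flattening it into columns.
row = conn.execute(
    "SELECT json_extract(payload, '$.detection.object'),"
    "       json_extract(payload, '$.detection.camera')"
    " FROM alerts"
).fetchone()
print(row)  # ('person', 'LemmyCam')
```

The table schema stays a single payload column no matter what the detection software adds to the JSON later.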
It definitely isn’t a solution for most things, but it’s 100% valid to use.
There’s also the consideration that you may just want to store JSON data as it’s generated by whatever source, without translating it in any way: just store the actual data in its “raw” form. JSONB allows you to do that as well.
Edit: just to add to the example JSON, the other advantage is that it allows a variable number of objects within the array without having to accommodate them in the table structure. I can’t count how many times I’ve seen tables with “extra1, extra2, extra3, extra4, …” columns because the designers knew there would be extra data at some point but had no idea what it would be.
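To illustrate that point with a hypothetical sketch in Python: the array in the JSON just grows, and the table schema never changes, no matter how many secondary objects an alert carries.

```python
import json

# Hypothetical alert in the format above; "other_objects" holds a
# variable number of secondary detections.
alert = {
    "detection": {"object": "person", "camera": "LemmyCam"},
    "other_objects": [],
}

# Three extra objects this time; tomorrow it could be zero or fifty.
# No "extra1, extra2, extra3, ..." columns required.
for name in ("dog", "car", "bicycle"):
    alert["other_objects"].append({"object": name})

# The whole event still round-trips as a single JSON value in one column.
payload = json.dumps(alert)
restored = json.loads(payload)
print(len(restored["other_objects"]))  # 3
```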
JSON data within a database is perfectly fine and has completely justified use cases. JSON is just a way to structure data. If it’s bespoke data or something that doesn’t need to be structured in a table, a JSON string can keep all that organized.
We use it for intake questionnaire data. It’s something that needs to be on file for record purposes, but it doesn’t need to be queried aside from simply being loaded with the rest of the record.
Edit: and just to add, even MS SQL/Azure SQL can both query and even index within a JSON object. Of course, Postgres’ JSONB data type is far better suited for that.
I heard a first earther recently say it as: pe-tha-gore-ian
They don’t
The difference with people is that our brains are continuously learning, while LLMs are a static-state model after being trained. To take your example about brute-forcing more data: we’ve been doing that since the second we were born. Every moment of every second we’ve had sound, light, taste, touch, feelings, etc., bombarding us nonstop. And our brains have astonishing storage capacity. AND our neurons function as both memory and processor (a holy grail in computing).
Sure, we have a ton of advantages on the hardware/wetware side of things (and technically the data side as well), but the idea that we learn from fewer examples isn’t exactly right. Even a 5-year-old child has “trained” for far longer than probably all the major LLMs in use right now combined.
But they are. There’s no feedback loop or continuous training happening. Once an instance or conversation is done, all that context is gone; the context is never integrated directly into the model as it happens. That, however, is more or less the way our brains work: every stimulus, every thought, every sensation, every idea is added to our brain’s model as it happens.
The big difference between people and LLMs is that an LLM is static. It goes through a learning (training) phase as a singular event. Then going forward it’s locked into that state with no additional learning.
A person is constantly learning. Every moment of every second we have a ton of input feeding into our brains as well as a feedback loop within the mind itself. This creates an incredibly unique system that has never yet been replicated by computers. It makes our brains a dynamic engine as opposed to the static and locked state of an LLM.
Dunno. I don’t live there.