Statistical modeling and machine learning theory go back several decades. I’m not sure LLMs even use new algorithms; they may just apply various techniques that improve the performance and accuracy of pre-existing algorithms.
TBF, the veracity of the information is relatively field dependent. Structural engineering? Yeah, probably still as relevant as the day it was published… Quantum computing or astrobiology theory? Far more likely to be superseded or debunked.
This is straight comedic gold. I like to imagine some elderly stenographer refused to retire and this is a common occurrence… or it’s some 3rd Rock from the Sun aliens’ first day.
At this point why even bother with the bread?
Black holes don’t swallow everything around them for the same reason that the sun hasn’t swallowed all the planets.
I wasn’t implying this. I was referring to how they would gradually increase in mass as they absorb particles that come close enough, the same way that all other matter accumulates.
Gravitational capture usually involves multiple objects, because the trajectory has to get nudged for a collision to happen.
What about the countless proton-sized black holes and the matter dispersed between them? Wouldn’t they all interact with each other? How come the visible matter accumulated but the black holes did not? Are they so small that they’re all around us but too small to interact with non-primordial black holes?
A gaseous body collects mass at a faster rate than a black hole with the same mass.
Because the gas/mass is distributed over a larger area? As in, gas has a larger gravitational “dragnet”, relative to its mass?
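Roughly, yes, though the “dragnet” is mostly about physical size rather than gravitational pull: at a distance, equal masses attract passing material equally hard, but the gas occupies an enormously larger volume, so far more of that material actually collides with it and gets captured. A back-of-the-envelope comparison (my own numbers, ignoring gravitational focusing, which boosts capture for both objects), using the Sun vs a solar-mass black hole:

```latex
% Schwarzschild radius of a solar-mass black hole:
r_s = \frac{2GM_\odot}{c^2}
    \approx \frac{2 \times (6.67\times10^{-11}) \times (1.99\times10^{30})}{(3.0\times10^{8})^{2}}
    \approx 2.95\times10^{3}\ \text{m} \approx 3\ \text{km}

% The Sun, with the same mass, has radius:
R_\odot \approx 7\times10^{5}\ \text{km}

% Ratio of geometric capture cross-sections (\sigma = \pi r^2):
\frac{\sigma_\odot}{\sigma_{\mathrm{BH}}}
    = \left(\frac{R_\odot}{r_s}\right)^{2}
    \approx \left(\frac{7\times10^{5}\ \text{km}}{3\ \text{km}}\right)^{2}
    \approx 5\times10^{10}
```

Scaled down, the same logic applies to a proton-sized primordial black hole vs a diffuse gas cloud of equal mass: the gas simply sweeps up vastly more material per unit time.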
What doesn’t make sense is how these primordial black holes haven’t accumulated to form massive objects that dominate the universe. The distribution of dark matter implies that it’s widely dispersed throughout the universe, amongst all other matter, but wouldn’t these primordial black holes have started to attract matter and grown in mass from 1 second after the Big Bang? Wouldn’t the vast majority of these proton-sized black holes no longer be proton-sized? Wouldn’t they have grown until they absorbed most visible matter in their vicinity? If this were true, wouldn’t it mean most of the dark voids between galaxies and stars are actually gravitationally dominated by black holes of varying sizes? How could our observations to date have missed that? Wouldn’t pointing the JWST at any void show signs of infrared gravitational lensing? Oh god I’ve gone cross-eyed…
Edit:
“Even if you take into account clustering, the time scales for the merger are so long that they would only merge into really massive black holes over the entire age of the universe,” he continued.
If this were true, how have all the sub-atomic particles accumulated to form visible matter and galaxies? Do these black holes somehow have less mass than other particles? If so, how could they possibly make up most of the gravity in the universe — more gravity than all the visible matter — yet their gravity is so weak that they don’t accumulate into larger clusters?
PHOTONS SLAM INTO JWST AFTER 13.3 BILLION YEARS!!!
You’re clearly one of the reasons the quality is so low. Wasting everyone’s time using lemmy as your personal link aggregator. It’s obnoxious af.
Yeah, the quality on Lemmy is nowhere near what Reddit was back in its heyday 10+ years ago, mostly due to the quality of the users: users who think content like this is worthy of posting and upvoting.
FYI ^ Sunny: I suggest you ask about your LAN routing config in Tailscale-specific support channels (Discord, forums, etc). I’m 99% certain you can fix your LAN access issues with little more than a reconfig.
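For reference, the usual fix is plain subnet routing: advertise the LAN’s subnet from a node that sits on that LAN, then accept routes on the other devices. A minimal sketch, assuming the `tailscale` CLI is installed and logged in; the 192.168.1.0/24 subnet is a placeholder for the actual LAN, and it’s wrapped in Python only for illustration:

```python
#!/usr/bin/env python3
"""Minimal sketch: expose a LAN over Tailscale via subnet routing."""

import subprocess

LAN_SUBNET = "192.168.1.0/24"  # placeholder: replace with your actual LAN range


def advertise_lan_routes() -> None:
    # Run on the machine that is physically on the LAN you want to reach.
    # On Linux, IP forwarding (net.ipv4.ip_forward=1) must be enabled, and the
    # advertised route has to be approved in the Tailscale admin console.
    subprocess.run(
        ["tailscale", "up", f"--advertise-routes={LAN_SUBNET}"],
        check=True,
    )


def accept_routes() -> None:
    # Run on the remote clients that should be able to see the advertised LAN.
    subprocess.run(["tailscale", "up", "--accept-routes"], check=True)


if __name__ == "__main__":
    advertise_lan_routes()
```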
Vyatta and Vyatta-based systems (EdgeRouter, etc.) are, I would say, good enough for the average consumer.
WTF? What galaxy are you from? Literally zero average consumers use that. They use whatever router their ISP provides, is currently advertised on tech media, or is sold at retailers.
I’m not talking about budget routers. I’m talking about ALL software running on consumer routers. They’re all dogshit closed source burn and churn that barely receive security updates even while they’re still in production.
Also you don’t need port forwarding and ddns for internal routing. … At home, all traffic is routed locally
That is literally the recommended config for consumer Tailscale and any mesh VPN. Do you even know how they work? The “external dependency” you’re referring to (their coordination servers) basically operates like DDNS, supplying the DNS/routing between mesh clients. Beyond that all comms are P2P, including LAN access.
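If anyone wants to check the P2P claim on their own tailnet, `tailscale ping` reports whether a peer is reached directly or through a DERP relay. A quick sketch; `PEER` is a placeholder, and the exact output wording is an assumption that may vary between versions:

```python
#!/usr/bin/env python3
"""Sketch: check whether a Tailscale peer is reached directly (P2P) or via a relay."""

import subprocess

PEER = "my-nas"  # placeholder: a device name or tailnet IP

result = subprocess.run(
    ["tailscale", "ping", PEER],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())

# Typical output mentions either a direct endpoint ("via <ip>:<port>") or a
# relay ("via DERP(...)"); the wording may differ between client versions.
if "DERP" in result.stdout:
    print("Relayed: traffic is currently going through a DERP server.")
else:
    print("Direct: traffic is peer-to-peer.")
```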
Everything else you mention is useless because Tailscale, Nebula, etc. all have open-source server alternatives that are way more robust and foolproof than rolling your own VPS and WireGuard mesh.
My argument is that “LAN access” — with all the “smart” devices and IoT surveillance capitalism spyware on it — is the weakest link, and relying on mesh VPN software to create a VLAN is significantly more secure than relying on open LAN access handled by consumer routers.
Just because you’re commenting on selfhosted, on lemmy, doesn’t mean you should recommend the most complex and convoluted approach, especially if you don’t even know how the underlying tech actually works.
What is the issue with the external dependency? I would argue that consumer routers have near universal shit security, networking is too complex for the average user, and there’s a greater risk opening up ports and provisioning your own VPN server (on consumer software/hardware). The port forwarding and DDNS are essentially “external dependencies”.
Mesh VPN clients are all open source. I believe Tailscale is currently implementing a feature where new devices can’t connect to your mesh without pre-approval from your own authorized devices, even if they pass external authentication and 2FA (removing the dependency on Tailscale’s servers for granting authorization, post-authentication).
It’s almost like corporations are incentivized to be greedy and parasitic, instead of investing in their customers and workforce? I call it vulture capitalism.
I’ll bite, too. The reason the status quo allows systemic wage stagnation for existing employees is very simple. Historically, the vast majority of employees do not hop around!
Most people are not high performers and will settle for job security (or the illusion of it) and the sunk cost fallacy over the opportunity of making 10-20% more money. Most people don’t build extensive networks, hate interviewing, and hate the pressure and uncertainty of having to establish themselves in a new company. Plus, once you have a mortgage or kids, you don’t have the time or energy to job hunt and interview, let alone the savings to cover lost income if the job transition fails.
Obviously this is a gamble for businesses, and it can often turn out foolish for high-skilled and in-demand roles (we’ve all seen many products stagnate and be destroyed by competition), but the status quo also means that corporations are literally structured, managerially and financially, towards acquisition, so all of the data they capture to make decisions, and all of the decision makers, neglect the fact that their business is held together by the 10-30% of under-appreciated, highly experienced staff.
It’s essentially the exact same reason companies offer the best deals to new customers, instead of rewarding loyalty. Most of the time the gamble pays off, and it’s ultimately more profitable to screw both your employees and customers!
I believe this is what some compression algorithms do if you compress the similar photos into a single archive. It sounds like that’s what you want (e.g. archive each day), have Immich cache the thumbnails, and only decompress the originals if you view the full resolution. Maybe test some algorithms like zstd against a group of similar photos vs compressing them individually?
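If you want to test that quickly, here’s a rough sketch using the third-party `zstandard` Python package (an assumption; the zstd CLI would work just as well). It compares compressing a set of similar photos as one concatenated blob vs compressing each file individually; the directory path is a placeholder:

```python
#!/usr/bin/env python3
"""Rough sketch: does compressing similar photos together beat compressing
them one by one? Requires `pip install zstandard`."""

from pathlib import Path

import zstandard as zstd

PHOTO_DIR = Path("photos/2024-05-01")  # placeholder: one day of similar shots
LEVEL = 19

cctx = zstd.ZstdCompressor(level=LEVEL)
files = sorted(PHOTO_DIR.glob("*.jpg"))
blobs = [f.read_bytes() for f in files]

# Compress each photo on its own, then sum the sizes.
individual = sum(len(cctx.compress(b)) for b in blobs)

# Compress all photos as a single concatenated stream, so the compressor can
# reuse matches across similar images.
grouped = len(cctx.compress(b"".join(blobs)))

original = sum(len(b) for b in blobs)
print(f"original:   {original:>12,} bytes")
print(f"individual: {individual:>12,} bytes")
print(f"grouped:    {grouped:>12,} bytes")
```

For large groups you may also want zstd’s long-distance matching (the CLI’s `--long` flag) so matches can span files; gains on already-compressed JPEGs may be modest either way.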
FYI file system deduplication works based on file content hash. Only exact 1:1 binary content duplicates share the same hash.
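To make that concrete, a minimal sketch of whole-file, hash-based duplicate detection (SHA-256 here; the directory name is a placeholder). Two photos that differ by even one byte of metadata hash differently and will not be treated as duplicates:

```python
#!/usr/bin/env python3
"""Minimal sketch: find exact duplicates by hashing whole-file contents.
Near-identical photos (different EXIF, re-encode, crop) produce different
hashes and will NOT be detected."""

import hashlib
from collections import defaultdict
from pathlib import Path

ROOT = Path("photos")  # placeholder directory


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


groups: dict[str, list[Path]] = defaultdict(list)
for p in ROOT.rglob("*"):
    if p.is_file():
        groups[sha256(p)].append(p)

for digest, paths in groups.items():
    if len(paths) > 1:
        print(f"{digest[:12]}…  x{len(paths)}: {[str(p) for p in paths]}")
```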
Also, modern image and video encoding algorithms are already the most heavily optimized that computer scientists can currently achieve with consumer hardware, which is why compressing a jpg or mp4 offers negligible savings, and sometimes even increases the file size.
Chris Berg is a professor of economics at the RMIT Blockchain Innovation Hub.
Worthless opinion piece is worthless.
As others suggested, you don’t need all your historic mail on your mail server. My approach to email archival is the same as for all my historic data: a disorganized dumping ground that’s like my personal data lake, with separate service(s) to crawl, index, and search it (e.g. https://www.recoll.org/).