

Do you automate that or just check the list manually every once in a while?


“extension developers should be able to justify and explain the code they submit, within reason”
I think this is the meat of how the policy will work. People can use AI or not. Nobody is going to know. But if someone slops in a giant submission and can’t explain why any of the code exists, it needs to go in the garbage.
Too many people think that because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s when the human’s work is supposed to start. You’re not done. You’re not almost done. You have a working prototype that you now need to turn into something of value.
Calculator apps should have achievements, like “Pressed clear 5 times”, and “You could have done this in your head”.
Whoa, is that a save button irl??


I do backups with a Raspberry Pi that has a 1TB SD card, and I leave it on all the time. The power draw is very small, and I think it’s reasonable for the value of offsite backups.
My personal experience with WOL (or anything related to the power state of computers) is that it’s not reliable enough for something offsite. If you can set something up that’s stable, awesome, but if your backup server is down and you need to travel to it, that suuuucks.


I found code that calculated a single column in an HTML table. It was “last record created on”.
The algorithm was basically:
foreach account group
    foreach account in account group
        foreach record in account.records
            if record.date > maxdate
                maxdate = record.date
It basically loaded every database record (the basic unit of record in this DATA COLLECTION SYSTEM) to find the newest one.
Customers couldn’t understand why the page took a minute to load.
It was easily replaced with a SQL query to get the max, and load time dropped to a few ms.
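Something like this, for illustration (a sketch; the table and column names are made up, since I don’t remember the real schema):

    SELECT account_id, MAX(created_on) AS last_record_created_on
    FROM records
    GROUP BY account_id;

One aggregation in the database instead of the app loading every row into memory to compare dates.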
The original code was so hilariously stupid that I left it commented out in place so future developers could understand who built what they’re maintaining.
“So what was the problem in the end?”
“Man, I don’t fucking know.”


How do you like the acoustics in that bucket?


I’m not sure if it automatically does the metadata lookup or if it just reads embedded metadata from the epubs I’ve downloaded. It for sure does a poor job of setting up the series name and book number fields if you read a lot of series.


I use a combination of calibre-web-automated for metadata management and calibre-web-automated-book-downloader for downloading from Anna’s Archive. Book read progress and status is synced from my Kobo.
It works really well, but you need to manually request books one at a time. The Readarr feature I miss is the ability to subscribe to a Goodreads list.
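For reference, my stack is roughly the compose sketch below. Treat it as a sketch: the image names, ports, and the shared ingest path are from memory and may be out of date, so check each project’s docs.

    services:
      calibre-web-automated:
        image: crocodilestick/calibre-web-automated:latest
        ports:
          - "8083:8083"                  # web UI
        volumes:
          - ./config:/config
          - ./library:/calibre-library
          - ./ingest:/cwa-book-ingest    # drop folder the automation watches

      book-downloader:
        image: ghcr.io/calibrain/calibre-web-automated-book-downloader:latest
        ports:
          - "8084:8084"                  # downloader UI
        volumes:
          - ./ingest:/cwa-book-ingest    # shared with the container above

The only real wiring is the shared ingest folder: the downloader drops files there and calibre-web-automated imports them into the library.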

It’s tied to company stock performance. He only gets the trillion if he brings Tesla to something like an $8 trillion valuation, which seems pretty unlikely given his erratic behaviour, their broken PR, and a valuation that’s been weird for years. Oh, and the competition heating up in the EV sector.
But to answer the question asked: the people who could go on strike hold a few shares or options in TSLA, so if Elon hits that insane target, they all stand to make millions themselves.
That, and they knowingly and willingly work for a nazi who doesn’t care about them. I don’t think there’s anything he could say or do that would make them rethink their career path.


There are a few free VPN providers out there. You could do a little homework and see if they’re a good option. I tend to mistrust free services like that (if I’m not paying, who is, and what are they paying for?), but if you’re in a pinch it’s something to research.
Another option is to look for deals. VPN providers sometimes run big sales a few times a year; I think I once got a NordVPN subscription for THREE YEARS for like $60. It’s more expensive than a single month of anything, yes, but if you can afford a one-time expense it’s nice not to have to worry about a recurring monthly cost.
Your other option is to skip torrents entirely. Usenet is still the granddaddy of file sharing and is pretty much anonymous. Most Usenet providers are paid, but back in the day even ISPs ran their own servers. It could be worth checking whether there are any free or low-cost providers.
I saw this each time Bush was elected. So many people vowed to move to Canada if that bastard got elected.
Then the bastard got elected, and Canada reported only a tiny spike in traffic to their immigration information website. I think actual applications were up by a statistically insignificant amount.
I guess it’s like the difference between filing for bankruptcy and Declaring It, Michael.


Replaced by AI, ironically.
This feels very adjacent to “we didn’t do this because it was easy. We did it because we thought it would be easy.”


They have (had?) a fairly generous free tier that works well for people starting out.
I ended up buying a license after evaluating it because the UI covers everything I reasonably want to do, it’s fundamentally a Linux server so I can change whatever I need, and it requires virtually zero fucking around to get started and keep running.
I guess the short answer is: it ticks a lot of boxes.


If you need some historical context, look up the recording industry’s anti-piracy panic that began when cassette recorders hit the consumer market (early 80s?). Similarly, the VHS panic when video could suddenly be recorded at home.
I haven’t kept any sources, but I recall a few studies over the years showing the industry’s concerns were comically overblown and didn’t impact their bottom lines.
I set up Syncthing using the docker image from the Unraid “store” and it works great.
I’m not in love with the clients (especially on Windows), but it seems to work pretty well once your setup is stable.
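If you’re not on Unraid, the plain docker compose equivalent is tiny. Here’s a sketch using the official syncthing/syncthing image (the host path is just an example):

    services:
      syncthing:
        image: syncthing/syncthing:latest
        hostname: homeserver              # becomes the default device name
        ports:
          - "8384:8384"                   # web UI
          - "22000:22000/tcp"             # sync protocol
          - "22000:22000/udp"
          - "21027:21027/udp"             # local discovery
        volumes:
          - ./syncthing:/var/syncthing    # config and default sync dir
        restart: unless-stopped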


White cables also transmit slower in the dark. As soon as the cabinet is closed, the data is going to slow way down, with only the dim glow of the equipment’s LEDs acting to accelerate packets.
I’m just using Unraid for the server, after many iterations (PhotonOS, VMware, bare-metal Windows Server, …). After many OSes, partial and complete hardware replacements, and general problems, I gave up trying to micromanage the base server. Backups are generally good enough if hardware fails or I break something.
The other side of this is that I’ve moved to having very, very little config on the server itself. Virtually everything of value lives in a docker container, with a single (admittedly way too large) docker compose file describing all the services.
I think this is ideal for how I use a home server. Your mileage may vary, but I’ve learned the hard way that it’s really hard to maintain a server over the very long term without also marrying yourself to a specific hardware and OS configuration.
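To make the shape concrete, here’s a stripped-down sketch of that one compose file (the services and images are just examples, not my actual stack):

    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        ports:
          - "8096:8096"                   # web UI
        volumes:
          - ./appdata/jellyfin:/config    # the only state worth backing up
          - /mnt/user/media:/media:ro
        restart: unless-stopped

      vaultwarden:
        image: vaultwarden/server:latest
        ports:
          - "8080:80"
        volumes:
          - ./appdata/vaultwarden:/data
        restart: unless-stopped

      # ...one entry per service. The ./appdata bind mounts are the only
      # state outside the containers, which is what makes the base OS
      # disposable.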