The year was 2006, and the 80 GB HDD in my Dell Optiplex 790 was full of podcasts, stolen music, and episodes of Dr. Who…
Honestly, if you’re doing regular backups and your ZFS system isn’t being used for business, you’re probably fine. Yes, you’re at increased risk of a second disk failure during a resilver, but even if that happens you’re just forced to restore from your backups; it isn’t complete destruction of the data.
You can also mitigate the risk of a disk failing during a resilver by making sure your disks are of different ages. Much of the increased risk comes from the fact that disks of the same brand and age, or from the same batch/factory, tend to die of old age around the same time. When one disk fails, the others might be soon to follow, especially during the relatively intense process of resilvering.
Otherwise, with the number of disks you have, you’re likely better off just going with mirrors rather than RAIDZ at all. You’ll see increased performance, especially on writes, and you’re not losing any space with a 3-way mirror versus a 3-disk RAIDZ2 array anyway: both tolerate two disk failures and both leave one disk’s worth of usable capacity.
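To illustrate (the pool name and device paths here are placeholders, not from the original post), the two layouts are a one-word difference at pool creation time:

```shell
# 3-way mirror: survives 2 of 3 disks failing; usable space = 1 disk.
# Resilvers by straight copy from a surviving disk.
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

# 3-disk RAIDZ2: also survives 2 failures; usable space = 1 disk.
# Resilvers by reconstructing from parity, which is more work.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc
```

Since the fault tolerance and capacity come out identical here, the mirror's simpler resilver and better small-block performance are the tiebreakers.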
The ZFS pool design guidelines are very conservative, which is a good thing because data loss can be catastrophic, but those guidelines were developed with pools that are much larger than yours and for data in mind that is fundamentally irreplaceable, such as user generated data for a business versus a personal media server.
Also, in general, backups are more important than redundancy, so it’s good you’re doing that already. RAID is about maintaining uptime; data security is all about backups. Personally, I’d focus first on a solid 3-2-1 backup plan rather than worrying too much about trying to keep your current array from suffering catastrophic failure.
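As a sketch of one leg of such a backup plan (the dataset, snapshot, and host names are made up for the example), ZFS’s built-in snapshot replication makes keeping an off-machine copy straightforward:

```shell
# Take a read-only, point-in-time snapshot of the dataset
zfs snapshot tank/media@2024-01-01

# Stream it to a second machine over SSH; the first run sends the
# full dataset, and later runs can use `zfs send -i` for incrementals
zfs send tank/media@2024-01-01 | ssh backup-host zfs recv backup/media
```

Pair a local copy like this with one off-site copy and you’ve covered the “3 copies, 2 media, 1 off-site” rule.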
This is true, but I don’t know if you’d be counted as a seeder on that list though if you don’t have the full torrent.
It depends if you’re using them all. Systems where I have lots of applications installed (especially graphical ones) will have lots of packages, while my bare-minimum container hosts will have few. I think there’s also an element of selection bias here, because people posting screenshots of neofetch are also likely to be people who intentionally run very minimal systems focused on minimizing the number of packages so they can brag about it on the internet.
TL;DR - the right number of packages to have is as many as are required for your computer to do what you need it to do, and not too many more than that.
I’m personally a big fan of OpenAudible. It’s not free, but it’s not crazy expensive and it does all the work for you. You sign into your Audible account in the app, and it will pull your library, download each book, decrypt it, and convert it to the format of your choice (I usually do M4B). I’ve been using it for years and it makes downloading your Audible library on an ongoing basis a breeze.
So two things about this:
Tailscale doesn’t actually route your traffic through Tailscale’s servers; it just uses its coordination servers to establish a direct connection between your nodes. You can run Headscale and monitor the traffic on the client and server sides to confirm this is the case. Headscale is just a FOSS implementation of that coordination/handshake server, and you point the Tailscale client there instead.
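For reference (the URL here is a placeholder for wherever you host Headscale), pointing the stock Tailscale client at your own coordination server is a single flag:

```shell
# Register this node against a self-hosted Headscale instance
# instead of Tailscale's hosted coordination service
tailscale up --login-server https://headscale.example.com
```

Everything else about the client, including the direct peer-to-peer tunnels, works the same; only the handshake server changes.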
Doesn’t renting a $3 VPS and routing your traffic through that expose many of the same vulnerabilities regarding a 3rd party potentially having access to your VPN traffic, namely the VPS provider?
For what it’s worth, I generally think the Headscale route is the most privacy- and data-sovereignty-preserving option, but it’s worth distinguishing Tailscale from something like Nord: with those providers your traffic is actually routed through their servers, whereas with Tailscale the traffic remains on your own infrastructure.
This is very exciting. I’ve felt that SQLite has held back the performance of the *arrs for a long time, so I’m glad to see it addressed.
Yeah, it was 2006, and that was how you got MP3 files onto your iPod Nano. This was back when “mobile internet” consisted of “m.website.com” links that loaded a style-sheet-free page at dial-up speeds, designed to be navigated with a D-pad.