Unless something has changed recently, OPNsense doesn’t have an ARM build, so it won’t work on the Pi 4.
If you want to use the Pi as a router you’ll probably end up with a double-NAT situation, which isn’t ideal but may work well enough. In terms of wifi performance, I wouldn’t expect a Pi to be particularly good here, so I’m not sure this is even worth it unless it’s a budget issue and you don’t have any other options.
In terms of your problem, you should be able to assign the Pi’s Ethernet port to the default WAN and WAN6 networks. As for wifi, the Pi’s adapter needs to support AP mode, and from looking around it isn’t clear whether the built-in wifi adapter does (most people using the Pi are using it purely as a router, not as a wireless AP). If not, you’d need a USB wifi adapter that supports AP mode. You might want to get that additional Ethernet adapter too for testing/debugging, and it will let you add a dedicated wireless AP.
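For what it’s worth, the usual way to check AP-mode support on Linux is to look for “AP” under “Supported interface modes” in `iw list` output. Here’s a small sketch of that check; the parsing helper and the sample output are my own illustration, not captured from a real Pi:

```python
# Quick check for AP-mode support: look for "AP" in the
# "Supported interface modes" section of `iw list` output.
import re

def supports_ap_mode(iw_list_output: str) -> bool:
    """Return True if 'AP' appears among the supported interface modes."""
    match = re.search(
        r"Supported interface modes:\n((?:\s+\* .+\n?)+)", iw_list_output
    )
    if not match:
        return False
    modes = [line.strip().lstrip("* ") for line in match.group(1).splitlines()]
    return "AP" in modes

if __name__ == "__main__":
    # On a real system you'd feed in the actual output, e.g.:
    #   subprocess.run(["iw", "list"], capture_output=True, text=True).stdout
    sample = (
        "Wiphy phy0\n"
        "\tSupported interface modes:\n"
        "\t\t * managed\n"
        "\t\t * AP\n"
        "\t\t * monitor\n"
    )
    print(supports_ap_mode(sample))  # prints: True
```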
It’s nice not to deal with HTTPS warnings etc., and as you said it’s more convenient to access services by domain name rather than remembering port numbers. Technically you could achieve the latter another way with Docker, configuring it to assign a real IP to each service (presumably a bridge network), then setting each service to use port 80 externally. But that’s probably as much work as just setting up a reverse proxy.
And if you’re concerned about exposing ports, you can use DNS challenge which doesn’t require opening port 80 on your router.
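To give a feel for how little work the reverse-proxy route is, here’s a hypothetical Caddyfile. The hostnames and ports are made up, and the `dns cloudflare` line assumes a Caddy build that includes the Cloudflare DNS provider module (other providers work similarly):

```
# Hypothetical Caddyfile: one internal hostname per service,
# with TLS via DNS-01 so no ports need to be opened on the router.
jellyfin.home.example.com {
	reverse_proxy 127.0.0.1:8096
	tls {
		dns cloudflare {env.CF_API_TOKEN}  # needs a build with the Cloudflare DNS module
	}
}

podcasts.home.example.com {
	reverse_proxy 127.0.0.1:13378
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
}
```

Each service gets a clean name on port 443, and certificates renew without anything exposed to the internet.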
Depends on how you define “sufficient”. Having some amount of swap can be helpful for efficient use of RAM, but I personally prefer to use zram for those cases.
A swap partition can also be useful if you use hibernation.
I haven’t tested the ebook functionality and I mostly use it for podcasts, but you should be able to download on the mobile client at least.
And if you’ve hosted it at home it will continue to work on the LAN if your internet connection goes down.
That’s assuming the Switch supports IPv6; given how backward Nintendo’s tech tends to be, it wouldn’t surprise me if it didn’t.
Although at least nintendo.com has an AAAA record.
The last time I tried a rebase from Kinoite to Bazzite it left me with a weird set of flatpaks and somehow removed Firefox.
There’s a warning against this in the Bazzite FAQ, so that’s not too surprising. It refers to switching DEs, but I presume the same applies to different “distributions”. I hope that gets solved in the long run, as it’s one of the current downsides of Silverblue etc.
Yes, and consider using zstd (if it’s not the default on your distribution) and be pretty aggressive with the disk size since it has a high compression ratio. I normally set it to 100% (so zram disk size = physical RAM size), but you can experiment with different values.
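On systemd-based distributions, zram-generator makes this easy to set up declaratively. A minimal sketch (the file path and option names follow the zram-generator docs, but check your distribution’s defaults):

```ini
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram              # zram disk size = 100% of physical RAM
compression-algorithm = zstd
```

After a reboot (or `systemctl start systemd-zram-setup@zram0`), `zramctl` should show the device with zstd compression.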
I think there are more people who are #1 and #2 at the same time.
Probably where some of the attitude comes from. People are assuming that it’s paid IT people bringing their work home with them, which is a different case than a casual user trying out self-hosting without the broader background.
Although I haven’t seen this attitude myself so I suspect it’s not that common, and probably just a handful of users jumping to conclusions.
I haven’t tried it, but Tube Archivist may fit the bill.
The downside with ULA is that IPv4 is given preference over it, which is annoying on dual-stack networks. I believe there is a draft RFC to change this, but it will take a while to be approved and longer still for OSes to change their behaviour. I work around it by using one of the unused (but not ULA) prefixes.
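The other common workaround is editing the RFC 6724 policy table in `/etc/gai.conf` to raise ULA precedence above IPv4-mapped addresses. One big caveat: on glibc, specifying any `precedence` line replaces the entire built-in table, so every entry must be restated. A sketch (the values mirror the RFC 6724 defaults, but verify against your glibc’s documented table):

```
# /etc/gai.conf — note: adding any precedence line replaces the whole
# default table, so all default entries must be restated.
precedence  ::1/128       50
precedence  ::/0          40
precedence  fc00::/7      37   # raised above IPv4-mapped (35) so ULA wins
precedence  ::ffff:0:0/96 35
precedence  2002::/16     30
precedence  2001::/32      5
precedence  fec0::/10      1
precedence  3ffe::/16      1
```

This only affects getaddrinfo()-based source/destination selection on that host, so it has to be applied per machine.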
Pretty cool especially since it’s RISC-V. I’d have some concerns about the software and driver side of things, though (and the performance).
Ah Nvidia. Bazzite uses Wayland I believe since it uses the same gamescope session as SteamOS (unless something has changed recently). While it may be possible to get it working, I’d expect a much better time with an AMD card.
A traditional distribution may be a better bet with Nvidia for now.
There’s a bunch of other variants like PiKVM and BliKVM as well, and even some cheap knockoffs on AliExpress that may do the job.
Mainly because running multiple desktop machines adds up to a lot of power, even at idle. If you power them off and on as needed it’s better, but then it’s not as convenient. Of course, if you leave a single machine with multiple GPUs on 24/7 that will also eat a lot of power, but it will be less than multiple machines turned on 24/7 at least.
And the physical space taken up by multiple desktop machines starts to add up significantly, particularly if you live in an apartment or smaller house.
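The power cost is easy to put numbers on. A back-of-the-envelope sketch, where the idle wattage, machine count, and tariff are all illustrative assumptions:

```python
# Back-of-the-envelope: annual energy for desktop machines idling 24/7.

def annual_kwh(idle_watts: float, hours_per_day: float = 24.0) -> float:
    """Energy in kWh for one machine running year-round."""
    return idle_watts * hours_per_day * 365 / 1000

machines = 3
idle_watts = 50        # typical-ish desktop idle draw (assumption)
price_per_kwh = 0.30   # example tariff in $/kWh (assumption)

total_kwh = machines * annual_kwh(idle_watts)
print(f"{total_kwh:.0f} kWh/year")               # prints: 1314 kWh/year
print(f"${total_kwh * price_per_kwh:.0f}/year")  # prints: $394/year
```

Even at modest idle draw, three always-on desktops land in the hundreds of dollars per year, which is why consolidating onto one box (or powering machines down) pays off.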
Vanguard is especially bad because, as of its last update, it will not allow you to run the game with Intel VT-x/AMD-V enabled even if you are running bare metal.
The Vanguard anti-cheat is incredibly invasive and something akin to malware, so that’s not surprising.
I recently tried to do that using Sunshine and different Linux gaming distros, and it was awful: the VM would work great for a few minutes and then suddenly crash, and I’d have to hard-stop it.
Are you running this with something like libvirtd/QEMU? If so, VFIO configurations can get pretty complex. Random crashes like that sound like MSI interrupt issues (or you’ve allocated too much RAM to the guest). Or it could be GPU reset issues that would also occur on a (Linux) host; a newer kernel and Mesa version in the guest may help.
To work around MSR-related crashes, you can set this on the host’s kernel command line:

```
kvm.ignore_msrs=1
```
If you’re running on a Windows host or with something like Virtualbox (assuming GPU passthrough is supported by these), YMMV but I wouldn’t expect good results.
NixOS. Ubuntu when I just want to test something quickly.
The follow-ups do usually come, just later. It’s more like the GTA double-dipping strategy: they get console users (and impatient PC users who buy a console) first, then PC users, with both often paying full price.
I suspect this is comparing against default Wine without fsync or esync (which are included with Proton and some Wine builds). From memory, ntsync has better compatibility and some performance gains over fsync (the next-fastest alternative), but don’t expect 50% performance gains over fsync in most workloads.