

You can easily do that with Forgejo/Gitea. However, you cannot keep the issues in sync; that's a one-off operation.
You can, however, fully sync the git repo itself - either out of the box or using webhooks/git hooks.
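As a rough illustration of the git-hook route, here is a minimal sketch of a `post-receive` hook that pushes every update to a second forge. It assumes a push remote named `mirror` has already been configured on the bare repo (the name and setup are just for the example); both Gitea and Forgejo also ship built-in push mirrors, so this is only one way to do it.

```python
#!/usr/bin/env python3
# Hypothetical post-receive hook (hooks/post-receive in a bare repo) that
# pushes every update to a secondary remote named "mirror".
# Assumes the remote was added beforehand, e.g.:
#   git remote add --mirror=push mirror https://other-forge.example.com/user/repo.git
import subprocess
import sys

result = subprocess.run(
    ["git", "push", "--mirror", "mirror"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # Don't block the original push; just report the mirror failure.
    print(f"warning: mirror push failed: {result.stderr}", file=sys.stderr)
```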
There is a nice video of the full airborne phase, up to its demise in the water. Luckily the launch pad was not impacted.
They did reach the 30 seconds of flight announced prior to the launch, so not too bad. It looks primarily like a GNC issue, though it could also originate from a thruster issue.
https://bsky.app/profile/selshevneren.bsky.social/post/3lllwns5xpc2w
I have been using BookStack; I like it, though it is missing a few features I would love:
I am not familiar with what is required for hydroponics, but I would guess it requires more equipment. Plus, growing them in lunar soil means you eventually get some elements from the soil itself, so you do not need a fully closed recycling cycle for this.
There is still the issue of the closed cycle for air though (which is where Mars is easier than the Moon for medium-term colonies).
It is on the Moon, it knows it can die from decompression at any moment, and we completely screw up its circadian cycle with 30+ day light cycles. Of course it will be anxious; no need to prove it.
Let's see if it wakes up once the sun hits the solar panels. Hopefully the thermal conditions do not kill it before then.
Can we? Yes. Should we do it right now? That’s debatable.
The question is how much this would cost vs. accepting two weeks offline every 26 months.
These two weeks do not create any additional requirements (you already have to make sure the probes can survive for a few weeks without comms), science does not fully stop during that period, and it gives an opportunity to do long-duration maintenance on the ground segment.
Frankly, there is little need to spend >$100M on such relay satellites until we actually have a permanent human presence on Mars.
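For reference, the "every 26 months" is the Earth-Mars synodic period, i.e. how often solar conjunction puts the Sun in the way of comms. A quick back-of-the-envelope check:

```python
# Where "every 26 months" comes from: the Earth-Mars synodic period,
# which is how often solar conjunction (Sun between the two planets)
# recurs and blocks communications for a couple of weeks.
earth_year = 365.25   # days
mars_year = 686.98    # days

synodic_period = 1 / (1 / earth_year - 1 / mars_year)
print(f"{synodic_period:.0f} days ≈ {synodic_period / 30.44:.1f} months")
# -> about 780 days, i.e. roughly 26 months
```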
I was called a few times, but the last 3 or 4 times it always happened at the exact same hour, when I was putting my kid to sleep, so I could not answer. And now I don't get called anymore… 😥
Sorry, my autocorrect changed its into it’s.
Tailscale was surprisingly the fastest, even faster than plain WireGuard, despite running in userspace. But it also consumed more memory (245 MB after the iperf3 test!) and more CPU.
Do we know whether this variation is due to the test protocol, or whether Tailscale is using WireGuard with specific settings that slightly improve its speed?
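For context, a comparison like this usually boils down to running iperf3 across each tunnel. A minimal sketch of that kind of measurement, with placeholder peer addresses and assuming `iperf3 -s` is running on the far end of each tunnel:

```python
# Rough sketch of the comparison: run iperf3 in JSON mode against a server
# reachable over each tunnel and compare the measured throughput.
import json
import subprocess

TARGETS = {
    "plain wireguard": "10.0.0.2",   # hypothetical WireGuard peer address
    "tailscale": "100.64.0.2",       # hypothetical Tailscale peer address
}

for name, host in TARGETS.items():
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    bps = result["end"]["sum_received"]["bits_per_second"]
    print(f"{name}: {bps / 1e9:.2f} Gbit/s")
```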
With Tailscale and other mesh VPNs, all your machines are both clients and servers by default. If you have 3 machines A, B and C, when machine A wants to send something to B it connects directly to the peer running on B.
These mesh VPNs have a central server that helps with discovery of the members, manages ACLs, and acts as a relay in the case where one machine is so hidden behind NAT/firewalls that no direct network connection can be made. Only in that last case does the traffic go through the central server; otherwise the only thing the central server knows is that machine A requested to talk to machine B.
You still have to trust them if you want to use their server, but you can also host your own (Headscale for Tailscale). Though at this point you still need to somewhat trust Tailscale anyway, since they're the ones doing the client releases. They could absolutely insert a backdoor, and it would work for a while until it is discovered and would then totally ruin their reputation.
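As a toy illustration of that logic (not Tailscale's actual code), the decision is roughly: use a direct WireGuard path when the peer is reachable, and only fall back to a relay when it isn't:

```python
# Toy illustration of the mesh-VPN connection logic described above
# (a sketch of the concept, not Tailscale's actual implementation).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Peer:
    name: str
    endpoint: Optional[str]  # public ip:port if directly reachable, None if hidden behind NAT

def connect(src: str, dst: Peer) -> str:
    # The coordination server only brokers keys/endpoints and checks ACLs;
    # it never sees the traffic on the direct path.
    if dst.endpoint is not None:
        return f"{src} -> {dst.name}: direct WireGuard tunnel to {dst.endpoint}"
    # Only when no direct path can be punched does traffic flow via a relay.
    return f"{src} -> {dst.name}: relayed through the coordination server's relay"

print(connect("A", Peer("B", "203.0.113.7:41641")))
print(connect("A", Peer("C", None)))
```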
One thing to keep in mind is that the WebSocket sync is not straightforward to set up with Vaultwarden and the reverse proxy. If you don't have it working, your clients do not necessarily sync on every change.
Maybe this is related, with the sync not being performed by the client you were using to make the modification?
If you are in an enterprise environment, it is easier to sell Ubuntu - at least there is a company behind it that can provide support. Companies want to make sure someone is on the hook to fix an issue that would be blocking for them, and that is much harder with something like Debian.
That's why Red Hat is used so much in companies, and where Canonical's main revenue comes from.
But as a selfhoster, I use Debian by default for my servers. Only if there is a very specific need for Ubuntu would I switch, and I am frankly tired of the Snap shenanigans on my desktop (thinking of migrating to PopOS or KDE Neon).
I’ll provide an ELI5, though if you actually want to use it you’ll have to go beyond ELI5.
You contact a web service via a combination of IP address and port. For the sake of simplicity, we can assume that a domain name is equivalent to an IP address. You can then compare domain name/port to street name/street number: you need both to actually find someone. Some street numbers are really standard by default, like 443 for a regular encrypted connection (HTTPS). But you can have any service on any street number; it's just less nice and less standard. This is usually done on closed networks.
Now what happens if you have a lot of services and you want all of them reachable at number 443? Well, you are basically in the same situation as a business building with a lobby. Whenever you want to contact a service, you go to 443, ask the reception desk what floor it is on, and they will direct you there. The reception desk is your proxy: it just makes sure you talk to the right people.
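To make the lobby analogy concrete, here is a toy sketch of a reverse proxy: one listener on a single port that routes each request to a different backend based on the hostname it was asked for. The hostnames and backend ports are made up for the example; real setups use something like nginx, Caddy or Traefik on port 443 with TLS.

```python
# Toy reverse proxy illustrating the "reception desk" idea: one listener,
# routing requests to different backend services based on the Host header.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# "Which floor is this visitor asking for?" (made-up hostnames and ports)
BACKENDS = {
    "wiki.example.com": "http://127.0.0.1:8081",
    "git.example.com": "http://127.0.0.1:8082",
}

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = BACKENDS.get(self.headers.get("Host", "").split(":")[0])
        if backend is None:
            self.send_error(502, "Unknown service")
            return
        # Forward the request to the backend and relay its response back.
        with urlopen(backend + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listens on 8080 in plain HTTP just to show the routing idea.
    HTTPServer(("", 8080), Proxy).serve_forever()
```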
Ouch, and that is with Gitea and Codeberg (which runs Forgejo, a Gitea fork) being essentially the same software.