• 1 Post
  • 38 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • Real Druids are kind of an unknown. We have writings about their practices and beliefs from Roman writers and much later Christian writers. The former were known to exaggerate and just make shit up when it came to “barbarians” and the enemies of Rome. And the latter were often working with incomplete knowledge and also making shit up. This was muddled further by 18th Century work which liked to make ancient cultures even more fantastical. And then you get all the Neo-Pagan revival crap, which cast its own beliefs onto ancient cultures such as the Druids and completely muddied the waters. The fact is, we don’t actually know a whole lot about the real Druids.





  • As far as the rest of it, it seems to be happening with every filament I slice in Prusa slicer.

    This just reminded me of an issue I was facing recently. I also use Prusa Slicer and was having a hell of a time with my prints. It turned out to be the “Arc Fitting” setting.
    In Print Settings - Advanced - Slicing, look for the Arc Fitting setting. When I had it set to “Enabled: G2/3IJ” it just completely borked my prints. Just weird problems all over the place. As soon as I set that to “Disabled”, it cleaned up my prints considerably. Not sure exactly what I’m giving up there, but I do know I’m getting much better prints.


  • If you haven’t yet, try a cold pull and see if that helps. I personally just do a cold pull every time I change filaments. Maybe it helps, maybe it’s overkill, but I rarely have issues around clogs.

    Other things to think about:

    1. Does this happen with other filaments? Maybe your current filament is wet and needs drying. Maybe you just got a bad batch.
    2. Does slowing down the print speed for infill make a difference? Perhaps this filament is just flowing differently and you need to change the printing temperature, flow rate, or just slow down.
    3. How old is your nozzle? They do wear out, and a worn-out nozzle can manifest as all kinds of wonky problems.




  • There may also be a (very weak) reason around bounds checking and avoiding buffer overflows. By rejecting anything longer than 20 characters, the developer can be sure that there will be nothing longer sent to the back end code. While they should still be doing bounds checking in the rest of the code, if the team making the UI is not the same as the team making the back end code, the UI team may see it as a reasonable restriction to prevent a screw-up further down the stack from being exploited. Again, it’s a very weak argument, but I can see such an argument being made in a large organization with lots of teams who don’t talk to each other. Or worse yet, different contractors standing up the front end and back end.



  • I would add the admission of China to the WTO as another proximate cause. And one which probably had more of a material effect than NAFTA; but NAFTA had already become a GOP talking point, and it just stuck. China’s entry to the WTO was also moved over the finish line by Bush II, though most of the groundwork was laid by Clinton. So, it wouldn’t have had the same clean narrative as NAFTA. US employment in manufacturing went into freefall in late 2000 and early 2001. This was also during a recession, so the effects of those changes in international trade are intermixed with the recession’s. But, even as the recession receded and the US entered an economic boom leading up to the 2008 crash, manufacturing employment in the US either held steady or decreased slightly. It’s unsurprising that the same period saw a lot of offshoring of manufacturing to China. And this was also the period of neoliberal economists pushing “comparative advantage” and how the US losing all those manufacturing jobs was a good thing.

    So it’s not surprising then that they get bitter, they cling to guns or religion or antipathy to people who aren’t like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations.
    – Barack Obama, 2008


  • Have you considered just beige boxing a server yourself? My home server is a mini-ITX board from Asus running a Core i5, 32GB of RAM and a stack of SATA HDDs all stuffed in a smaller case. Nothing fancy, just hardware picked to fulfill my needs.

    Limiting yourself to pre-built systems means limiting yourself to what someone else wanted to build. The main downside to building it yourself is ensuring hardware compatibility with the OS/software you want to run. If you are willing to take that on, you can tailor your server to just what you want.



  • I run OctoPrint in a docker container on my home server. They have an official docker image available. And they also have a docker-compose.yaml file available.

    I’m quite happy with the setup. The server is more stable (for me) than a single-board computer. I have the whole setup on a UPS. Management is dead simple. The only caveat is that the server and printer need to be fairly close to each other for the USB connection. In my setup that was already a given; they sit less than a foot apart because of where I wanted them.
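
    For reference, a compose file for this kind of setup ends up looking roughly like the sketch below. This is from memory rather than copied from the official file, so treat the image name, port mapping, volume, and device path as assumptions and check OctoPrint’s published docker-compose.yaml for the real values.

      services:
        octoprint:
          image: octoprint/octoprint        # official image (double-check the exact name/tag)
          restart: unless-stopped
          ports:
            - "80:80"                       # web UI
          devices:
            - /dev/ttyUSB0:/dev/ttyUSB0     # USB serial link to the printer
          volumes:
            - octoprint:/octoprint          # persistent OctoPrint data
      volumes:
        octoprint:

    The devices entry is the important bit; it’s what lets OctoPrint see the printer’s USB serial port from inside the container.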

    I have wanted to try out Klipper, and may well do that in docker as well, but my printer is a proprietary nightmare and Klipper isn’t currently an option.


  • I currently do all of my 3D printing from Linux. My printer is physically connected to my server, which is running Ubuntu and has a docker container running Octoprint. The container is based on Debian. The printer itself is a crappy knock-off of the Ender 3. The only issue was identifying the port I needed to pass through to the container… And by “issue”, I mean I had to run ls -l /dev/serial/by-id and put the resulting device in the devices declaration of my docker-compose.yaml file.
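
    In case it helps anyone, that devices declaration ends up looking something like this (the device name below is a made-up placeholder; yours is whatever ls -l /dev/serial/by-id shows for your printer):

      devices:
        # the by-id path is stable across reboots, unlike the /dev/ttyUSB0 numbering
        - /dev/serial/by-id/usb-EXAMPLE_PRINTER_12345-if00-port0:/dev/ttyUSB0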

    My main machine is Arch and I use Prusa Slicer as an AppImage. The only issue there is that Prusa Slicer likes to segfault while slicing some models with some settings on my system. It’s not common, but it does happen. I think this is related to the Nvidia drivers; but by using the AppImage, it’s just the application which crashes, and I can’t be arsed to spend the time to solve the issue. I also tried Cura, but ran into this bug (tl;dr: don’t use Nvidia on Linux). Overall though, it just works and I don’t really think about the fact that I’m on Linux.

    For modeling, I personally use OpenSCAD, as I have all the artistic capabilities of a mortally wounded water buffalo. One of these days, I’ll pretend to try to learn FreeCAD, which runs just fine. Blender also runs great on Linux.

    In short, so long as you aren’t buying anything too proprietary, you should be just fine.



  • No, but you are the target of bots scanning for known exploits. The time between an exploit being announced and threat actors adding it to commodity bot kits is incredibly short these days. I work in Incident Response and seeing wp-content in the URL of an attack is nearly a daily occurrence. Sure, for whatever random software you have running on your normal PC, it’s probably less of an issue. But once you open a system up to the internet, with its constant scanning and attacks by commodity malware, falling out of date quickly leaves it open to exploitation.


  • Short answer: yes, you can self-host on any computer connected to your network.

    Longer answer:
    You can, but this is probably not the best way to go about things. The first thing to consider is what you are actually hosting. If you are talking about a website, this means that you are running some sort of web server software 24x7 on your main PC. This will be eating up resources (CPU cycles, RAM) which you may want to dedicate to other processes (e.g. gaming). Also, anything you do on that PC may have a negative impact on the server software you are hosting. Reboot and your server software is now offline. Install something new and you might have a conflict bringing your server software down. Lastly, if your website ever gets hacked, then your main PC also just got hacked, and your life may really suck. This is why you often see things like Raspberry Pis being used for self-hosting. It moves the server software onto separate hardware which can be updated/maintained outside a PC which is used for other purposes. And it gives any attacker on that box one more step to cross before owning your main PC. Granted, it’s a small step, but the goal there is to slow them down as much as possible.

    That said, the process is generally straightforward, though there will be some variations depending on what you are hosting (e.g. webserver, Nextcloud, Plex). And your ISP can throw a massive monkey wrench in the whole thing if they use CG-NAT. I would also warn you that, once you have a presence on the internet, you will need to consider the security implications of whatever it is you are hosting, with the most important security recommendation being “install your updates”. And not just OS updates, but keeping all software up to date. And, if you host WordPress, you need to stay on top of plugin and theme updates as well. In short, if it’s running on your system, it needs to stay up to date.

    The process generally looks something like:

    • Install your updates.
    • Install the server software.
    • Apply updates to the software (the installer may be an outdated version).
    • Apply security hardening based on guides from the software vendor.
    • Configure your firewall to forward the required ports (and only the required ports) from the WAN side to the server.
    • Figure out your external IP address.
    • Try accessing the service from the outside.
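
    To make a couple of those steps concrete: if the thing you were hosting was, say, Nextcloud in Docker, the “install the server software” step might be a compose file along these lines (the image name, port, and volume here are illustrative, not taken from any official guide), and the host port you map is the one you then forward on your router:

      services:
        nextcloud:
          image: nextcloud                  # illustrative; follow the project’s own install docs
          restart: unless-stopped
          ports:
            - "8080:80"                     # host port 8080 -> container port 80
          volumes:
            - nextcloud:/var/www/html       # persistent data
      volumes:
        nextcloud:

    On the router, the port forwarding step would then be “WAN port 8080 (and only that port) to port 8080 on the server’s LAN address”.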

    You may also want to consider using a Dynamic DNS (DDNS) service (e.g. noip.com) to make reaching your server easier. But this is technically optional, if you’re willing to just use an IP address and manually update things on the fly.

    Good luck, and in case I didn’t mention it, install your updates.