• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: September 11th, 2023

  • a single strawman: these tools do not exist and no developer in the world cares about the topic

    I haven’t seen anyone deny that these tools exist - the argument is that these tools should even be necessary to safeguard the language in the first place. And on top of that, you’ll also need a shop that’s actually allowed the time to properly utilize these tools and to make their usage standard practice within the company culture.

    That there are alternatives which remove (significantly more) footguns is the overall point. Work in one of these other languages so e.g. dumb-ass PMs don’t even have the option of pilfering the time it takes to code safely, as it would already be baked in.
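
    A minimal sketch of what “baked in” means, using Python as a stand-in for any memory-safe language: an out-of-bounds access is a well-defined, catchable error rather than silent corruption, and no sanitizer or static analyzer had to be scheduled into a sprint to get that guarantee.

    ```python
    # In a memory-safe language, an out-of-bounds access is a defined,
    # catchable error - not undefined behavior as it would be in C.
    buf = [0] * 8

    try:
        buf[8] = 42  # one past the end of the list
    except IndexError as e:
        print(f"caught cleanly, program state intact: {e}")
    ```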




  • Some thoughts:

    Ubuntu, most likely

    I’d encourage you to take a look at Linux Mint; it alleviates some of the Ubuntu fuckiness. And if you want to join the “I use arch btw” crowd, maybe check out EndeavourOS if you’re feeling braver than the Ubuntu variants (it’s built on Arch, but makes the barrier to entry a little easier).

    i9s are the latest hotness but don’t think the price is worth it

    Take a look at the previous generation to soften the blow to your wallet. E.g., instead of looking at a 14900K, look at the 13 or even 12 series. In fact, this is a useful strategy all around if you’re being price conscious: go one gen older.

    GPU that can support some sort of ML/AI with DisplayPort

    You’re probably going to want a discrete card rather than just integrated graphics. The other major consideration is going to be Nvidia vs. AMD, for which you’ll need to decide whether CUDA should be part of your calculus or not. I’ll defer to any data science engineers that might wander through this post.
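
    If you do go down the CUDA path, a quick sanity check after the build is cheap. A hedged sketch, assuming a PyTorch install (not something the OP mentioned; the ROCm build of PyTorch answers through the same API on AMD cards):

    ```python
    import torch

    # True if a usable CUDA (or ROCm) device is visible to this build
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # the card that will actually be used
    ```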

    The rest of your reqs pretty much come standard when building a PC these days. Another nicety in my latest builds is a multi-gig NIC (though 2.5GbE was my ceiling, since you’ll also need the network gear to utilize it). Going multi-gig is nice for pushing a fuckton of data around between machines on my LAN (including a NAS).

    Very last thing that I’ve found helpful across my last 3 builds spanning 15 years: I use Newegg for its reviews, specifically so I can search for the term “linux” in any given product’s reviews. Oftentimes I can glean quick insight into how friendly (or not) the hardware has been for others’ Linux builds.

    And I lied, I just remembered another Linux hardware resource: https://linux-hardware.org/?view=search

    You can see what other people have built with a given piece of hardware. Just remember to do a scan too once your build is up, to pay it forward.

    Good luck, and remember to have fun!




  • and a private telecommunications company can read absolutely all your digital communication

    Well, maybe. It’s one of the reasons e2e encryption is so imperative to online privacy. For instance, with HTTPS everywhere turned on, your ISP can only see which servers you’re connecting to, not what’s in your traffic to them.
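
    A stdlib-only sketch of that split, with example.com as a placeholder host: the hostname goes out in the clear during the TLS handshake (SNI, absent newer extensions like ECH), which is what an ISP can log, while the request and response bytes are encrypted end to end.

    ```python
    import socket
    import ssl

    hostname = "example.com"  # placeholder destination

    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as raw:
        # server_hostname is sent unencrypted in the handshake - visible on-path
        with context.wrap_socket(raw, server_hostname=hostname) as tls:
            # everything from here on is encrypted between you and the server
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(120))
    ```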

    And to point it out up front: yeah, the distant end’s servers likely have some form of that traffic captured, but now law enforcement has to dig up every company they’re trying to pull info from. Which is significantly more difficult than just relying on a one-stop-shop arrangement.

    And for the best privacy, like security, a multi-layered approach is better. So throw in a VPN, throw in something like Mullvad Browser, throw in pseudonymous accounts, throw in different usernames + passwords across accounts, throw in…




  • It exists, it’s called a robots.txt file that developers can put into place, and then bots like the web archive’s crawler will ignore the content.

    And therein lies the issue: a blanket Disallow in robots.txt tells all well-behaved bots to skip the content, search engine indexers included.
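
    A minimal sketch with Python’s stdlib parser, showing how a blanket Disallow reads identically to every well-behaved crawler, indexer and archiver alike (ia_archiver being the user-agent the Wayback Machine has historically honored):

    ```python
    from urllib.robotparser import RobotFileParser

    rules = RobotFileParser()
    rules.parse([
        "User-agent: *",   # applies to every bot...
        "Disallow: /",     # ...and blocks the whole site
    ])

    for bot in ("Googlebot", "ia_archiver"):
        print(bot, rules.can_fetch(bot, "https://example.com/some-article"))
    # Both print False: opting out of archiving this way also drops you out of search.
    ```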

    So huge publishers want it both ways: they want to be indexed, but they don’t want the content archived.

    If the NYT is serious about not wanting their content on the web archive but still wants humans to see it, the solution is simple: put that content behind a login! But the NYT doesn’t want to do that, since then they’d lose out on the ad revenue from having regular people load their website.

    I think in the case of the article here, though, the motivation is a bit more nefarious, in that the NYT et al. simply don’t want to be held accountable. So there’s a choice for them to make: either retain the privilege of being regarded as serious journalism, or act like a bunch of hacks that can’t be relied upon.