• 64 Posts
  • 273 Comments
Joined 1 year ago
Cake day: July 18th, 2024

    1. At least on Lemmy, this is definitely what I’ve observed. If you look at any thread that’s full of Sturm und Drang, it’s usually a tiny handful of accounts creating all of it (and then roping other people into their hostility, like a little chain reaction, like Chernobyl). If you look at the impact, it just looks like everyone’s an asshole, but if you look at the root of the trouble, you realize most people are fine, and a tiny, noisy, hostile minority can get everyone else spun up.
    2. I agree; if you’re in NYC right at this moment in history and you can’t see a bigger picture of things worth getting heated up about than White Lotus, you should talk with people in your community more.





  • Yeah. People said 100% the same thing about Hitler. He was a clown; he was the weird guy who came to fundraisers and scarfed all the food because he didn’t have any money, and who scared away donors and political allies because he would get in their face yelling about Jews. Until, all of a sudden, his big opponents got sent to the camps or just killed, and it wasn’t funny.

    They’re doing a great job at following the playbook so far. People are upset but no one’s really done that much to stop them, which means it will continue and get worse.


  • Grok responded to X users’ questions about public figures by generating foul and violent rape fantasies, including one targeting progressive activist and policy analyst Will Stancil. (Stancil has indicated he may sue X.)

    When you fine-tune a coding AI on code with deliberate flaws in it, and then switch it back to having conversations in English, it starts praising Hitler and producing other deliberately hateful content. It wouldn’t surprise me if fine-tuning Grok to be a Nazi also led it to “generalize” some additional things the operators didn’t intend.










  • Yeah, almost to an excessive degree. To me it’s fine; it just means the designer has room to grow in getting the balance right, and some of it will always come down to personal taste. This video includes some pretty interesting discussion of the balance between spelling things out, so that everyone can notice and enjoy them, versus making things opaque, knowing that you’ll leave some people behind but making it that much more special for the people who find them “all by themselves” without any kind of prompting.




  • Yeah. It’s not a perfect game, and it has plenty of issues, but it’s fun and exciting and it does something very, very different, very successfully. I’m reminded of the Zero Punctuation review of Psychonauts, which basically said its number one good point was that it was something genuinely mad and original, in contrast to the sea of imitation that is modern gaming, and for that alone, hooray.



  • All I can say is you’re missing out… I can see that it’s the type of game that may not be for everybody, but it’s honestly probably the most unusual game I have ever played in my life, and I’m enjoying it a lot. I almost did the same as you: I beat Leshy once and then kept messing around with it out of curiosity… and then the whole actual fuckin’ game started.

    It just made me pick a file from my hard drive, made me a card based on it, and then told me that if I let that card die, it will delete that file. This game is nuts, man.



  • And storing the source and such for every dependency would be bigger than, and result in the same thing as, an image.

    Let’s flip that around.

    That “insanity” (downloading and storing everything you need, wrapping it all up into a massive tarball, shipping it to anyone who wants to use the end product, and assuming that everything needed to rebuild it will always be available from every upstream source if you want to make any changes) is precisely what Docker does. And yes, it’s silly to trust that everything it references will always be available from whoever provides it.

    (Also, security)

    Docker is like installing onto an empty computer then shipping the entire machine to the end user.

    Correct. Because it’s not capable enough to make actually-reproducible builds.

    My point is, you could do that imaging (in a couple of different ways) with Nix if you really wanted to. No one does, because it would be insane when you have other, more effective tools that accomplish the exact same goal without shipping the entire machine to the end user. There are good use cases for Docker; making it easy to scale services up, which was the original intent, is a really good one. The way people commonly use it today, as a way to make reproducible environments for ease of one-off deployment, is not one. In my opinion.

    I’ve been tempted into a “my favorite technology is better” pissing match, I guess. Anyway, Nix is better.


  • The issue is, nix builds are only guaranteed to be reproducible if the dependencies don’t change.

    Dude, this is exactly why Nix is better. Docker builds are only guaranteed to be reproducible if the dependencies don’t change. Which they will. The vast majority of real-world Dockerfiles run pip install, wget, and all kinds of basically unlimited nonsense to pull their dependencies from anywhere on the internet.

    Nix builds, on the other hand, are forbidden from touching the internet, specifically to force them to declare their dependencies explicitly and keep them within a managed system. You can trust that the Nix repositories aren’t going to change (or store them yourself, along with all the source that generated them and will actually produce the same binaries, if you’re paranoid). You can send the flake.nix and flake.lock files and it will actually reproduce a basically byte-identical container on the receiver’s end, which means you don’t have to send multi-gigabyte “images” for the recipient to be able to make use of it. This is what I was saying: the whole business of needing “images” is a non-issue if your workflow doesn’t allow arbitrary fuckery on an industrial scale every time you spin up a new container.
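    Something like this is all you have to send (a minimal sketch; the description and packages here are made up, the point is just that flake.lock pins the nixpkgs input to an exact revision and hash, so the receiver resolves identical dependencies):

    # flake.nix (hypothetical example)
    {
      description = "pinned dev environment";

      # flake.lock records the exact revision and hash this URL resolved to
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          # "nix develop" drops you into a shell with exactly these packages
          devShells.x86_64-linux.default = pkgs.mkShell {
            packages = [ pkgs.python3 pkgs.curl ];
          };
        };
    }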

    I suspect that making a new container and populating it with something useful is so trivial with Nix that you’re missing the point of what is actually happening, whereas with Docker you can tell something big is happening because it’s such a fandango when it happens. And so you assume Docker is “real” and Nix is “fake” or something.

    I like a package to be independent

    Yes, me too, which is why an affinity for Docker is weird to me.


  • Yes, because that is a wrong and clunky way to do it, lol.

    If you really wanted to, you could use dockerTools.buildImage to create an “imaged” version of the container you made, or you could send around the flake.nix and flake.lock files exactly as someone would send around Dockerfiles. That stuff is usually just not necessary, though, because it’s replaced by a better approach (for the average-end-user case, where you don’t need large numbers of Docker containers you can deploy quickly at scale) that accomplishes the same thing.
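    For what it’s worth, this is roughly what that looks like (a minimal sketch with made-up names; running nix-build on a file like this produces a tarball that docker load can import, if you really want to end up in Docker anyway):

    # image.nix (hypothetical example)
    { pkgs ? import <nixpkgs> { } }:
    pkgs.dockerTools.buildImage {
      name = "hello-image";
      tag = "latest";
      # the image filesystem contains GNU hello and nothing else
      copyToRoot = pkgs.buildEnv {
        name = "image-root";
        paths = [ pkgs.hello ];
        pathsToLink = [ "/bin" ];
      };
      config.Cmd = [ "/bin/hello" ];
    }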

    I feel like I’m not going to convince you of this though. Have fun with Docker, I guess.


  • Hold up, nix added containerization? How did I miss that? I will have another look now!

    Nix is containerization. Here’s firing up a temporary little container with a different Python version and then throwing it away once I’m done with it (you can also do this with more complicated setups; this just shows it with a single package):

    [hap@glimmer:/proc/69235/fd]$ python --version
    Python 3.12.8
    
    [hap@glimmer:/proc/69235/fd]$ nix-shell -p python39
    this path will be fetched (27.46 MiB download, 80.28 MiB unpacked):
      /nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21
    copying path '/nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21' from 'https://cache.nixos.org/'...
    
    [nix-shell:~]$ python --version
    Python 3.9.21
    
    [nix-shell:~]$ exit
    exit
    
    [hap@glimmer:/proc/69235/fd]$ python --version
    Python 3.12.8
    

    The whole “system” you get when moving from Nix to NixOS is basically just a composition of a whole bunch of individual packages like python39, in one big container that is “the system.” But you can also fire up temporary containers trivially for particular things. I have a couple of tools with source in ~/src which, whenever I change the source, nixos-rebuild will automatically fire up a little container to rebuild in (with their build dependencies, which don’t have to be around cluttering up my main system). If the build works, it deploys the finished product into my main system image for me; if it doesn’t, nothing will have changed (and either way it throws away the container it used to attempt the build in).
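    Roughly what that looks like in the config, if you’re curious (a hypothetical sketch; the tool name, path, and build details are made up and depend entirely on the project):

    # hypothetical fragment of configuration.nix: a tool built from local
    # source, slotted in like any other package
    { config, pkgs, ... }:
    {
      nixpkgs.overlays = [
        (final: prev: {
          mytool = prev.stdenv.mkDerivation {
            pname = "mytool";             # made-up name
            version = "0.1";
            src = /home/hap/src/mytool;   # local source tree
            # the standard phases (configure, make, make install) run in an
            # isolated build with only the declared dependencies available
          };
        })
      ];

      environment.systemPackages = with pkgs; [ mytool ];
    }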

    Each config change spawns a new container for the main system OS image (“generation”), but you can roll back to one of the earlier generations (which are, from a functional perspective, still around) if you want or if you broke something.
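    Concretely (standard NixOS commands, just as a sketch):

    # list the system generations that are still around
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

    # switch back to the previous generation
    sudo nixos-rebuild switch --rollback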

    And so on. It’s very nice.


  • I mean, if it makes you happy, I won’t tell you to do anything different. I think a certain amount of it is just prejudice against Docker on my part. In my experience, NixOS is the best of both worlds: you can have a single coherent system if everything in it can play nicely together, and if not, things can be completely containerized, and that way still works too. And on top of that it has a couple of other nice features, like rolling back configs easily, or source builds that get slotted in in-place as if they were standard packages (which is generally where I abandon Docker installs of things, because making changes to the source seems like it’s going to be a big hassle).

    I’m not trying to evangelize though, you should in all seriousness just do what you find to be effective.