You’re probably right. I think COBOL development is one of those cases where the crazier stories are the ones that bubble to the top; the everyday reality is probably much more mundane.
I do think there are a few advantages to learning COBOL over C++. COBOL seems to be much stickier - companies that use it seem much more hesitant to replace it than a lot of the companies that use C++, and as a result, they will probably get more desperate. And while there’s definitely a lot more C++ out there than COBOL, I have to imagine that the number of people under 50 who use COBOL is tiny, while C++ still has a very large userbase. On the other hand, consulting depends a lot on your portfolio, references, and past accomplishments, and nobody’s going to pay 1k EUR/USD/etc. per hour (exaggerating, obviously) if you don’t have any credentials. That takes time to build up.
Ultimately, I do think you’re pretty spot on, but we’ll have to see. This is more just a fantasy I tell myself to make it seem like retirement is closer than it probably is…
I fantasize about being one of those extremely well-paid COBOL consultants when I reach the later stages of my career. Hoping that I can earn a full year’s salary in 3-4 months and take the rest of the time off as a semi-retirement. It’s easier said than done, but it’s a dream that helps me get through the days when I get sick of the daily grind.
This is very interesting! Things like this make me wish programmers would give functional^W declarative programming more of a chance. I’ve long fantasized about being able to write programs as declarative code that the computer can optimize automatically, without human intervention. When you implement your program in a more restrictive (i.e. stateless) paradigm, you can more easily reason about the code, which in turn makes it easier to optimize or run in different environments.
SQL is a great example of this. The optimizations that servers like PostgreSQL can do under the hood are possible precisely because the language inherently limits what you can express, so the system executing your query is free to choose different strategies for better performance and reliability. Things like this are what make query optimizers possible, and it’s really fascinating to read carefully through what query analyzers report (beyond just checking whether your indices are being used or not).
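To give a flavor of the same principle outside of SQL, here’s a minimal OCaml sketch (assuming OCaml 4.14+ for Seq.ints and Seq.take). Because the Seq pipeline is lazy and side-effect-free, the runtime can execute it as a single fused pass, computing only the elements actually needed - you specify the "what", and the "how" is left to the machine, much like a query planner:

```ocaml
(* A toy illustration of declarative code leaving the execution
   strategy to the machine. Because Seq is lazy and the functions
   are pure, this pipeline runs as one fused pass: no intermediate
   lists are built, and only as many elements are computed as needed. *)
let first_even_squares n =
  Seq.ints 0                           (* 0, 1, 2, ... on demand *)
  |> Seq.map (fun x -> x * x)          (* square, lazily *)
  |> Seq.filter (fun x -> x mod 2 = 0) (* keep evens, lazily *)
  |> Seq.take n                        (* stop after n results *)
  |> List.of_seq

let () =
  first_even_squares 5
  |> List.iter (Printf.printf "%d "); (* prints: 0 4 16 36 64 *)
  print_newline ()
```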
Beautiful chart. Thanks for sharing!
I’m not sure what you mean when suggesting Linux is a singular implementation around which features are exclusively designed. There’s all kinds of software that runs on all kinds of different OSes. Userspace applications, for example, can take advantage of POSIX compatibility to ensure that they run on all platforms (Linux, BSDs, even Windows).
Does systemd have any similar sort of compatibility guarantee? Can I run systemd-whateverd on BSD? Can I run systemd itself on BSD? I’m pretty sure most other init systems support at least one other OS if not more. Would the maintainers even support merging patches that do this? What about musl?
I am also a Void user, but will agree that the installation process can be very difficult, especially if you want to set up encryption in ways the standard installer does not support. You have to install it into a chroot (which I believe is how Debian was installed 20+ years ago).
That said, it is a great learning process and really helps you appreciate how awesome xbps is as a package manager!
I believe you’re thinking of Gentoo. But it seems that you can get precompiled kernels in Gentoo these days.
+1. systemd is something the Linux ecosystem really needs, but its execution is abysmal. We should be designing around standards so the best product can win. We should not be designing around singular implementations that could make it easy for Red Hat to execute an EEE strategy to consolidate Linux on the workstation.
I can’t wait till a CrowdStrike-like flaw is exposed in systemd so we can all see how terrible^W wonderful monocultures can be.
The full write-up can be found here and should be fairly readable for users of this forum.
Some quotes that I thought were interesting:
With a heap corruption as a primitive, two FILE structures malloc()ated in the heap, and 21 fixed bits in the glibc’s addresses, we believe that this signal handler race condition is exploitable on amd64 (probably not in ~6-8 hours, but hopefully in less than a week). Only time will tell.
So 64-bit systems seem to be a bit more resistant to this? I can’t be completely sure though, given how little I’ve read about this so far.
This vulnerability is exploitable remotely on glibc-based Linux systems, where syslog() itself calls async-signal-unsafe functions (for example, malloc() and free()): an unauthenticated remote code execution as root, because it affects sshd’s privileged code, which is not sandboxed and runs with full privileges. We have not investigated any other libc or operating system; but OpenBSD is notably not vulnerable, because its SIGALRM handler calls syslog_r(), an async-signal-safer version of syslog() that was invented by OpenBSD in 2001.
It seems that non-glibc-based systems could also be vulnerable, but they have not yet tried to demonstrate it (or have tried and not been successful).
And OpenBSD wins again it seems.
I would vote for Docker as well. The last time I inherited a system that ran on virtual machines, it was quite a pain to figure out how the software was installed, what was where in the file system, and where all the configuration was coming from. Replicating that setup took months of preparation.
By contrast, with Docker, all your setup is documented. The commands used to install our software into the virtual machines, which were otherwise long gone, are right there in the Dockerfile. And building the code? An even bigger win for Docker. In the VM project, the build environment for the C++ portion of our codebase was configured by about a dozen environment variables, none of which were documented. If it had been built in Docker, all the necessary environment variables would have been right there in the build environment, along with the build commands themselves. With VMs, we would often have developers build locally and then copy the result into the VM, which was terrible for reproducibility and for onboarding new developers.
That said, this all comes down to execution - a well-managed VM system can easily be much better than a poorly managed Docker system. But in general, I feel that Docker tends to be easier to work with than a VM. While Docker is far from flawless, there are a lot more things that can make life harder with VMs, at least from my experience.
From my reading, it sounded like there was some controversy around whether it was ready to be merged or not. Some people felt that it wasn’t, but Linus decided to overrule them and merge it, saying it was ready enough and that merging it would help them improve it more rapidly.
Don’t worry everyone, I’m here to help:
Garbage
Outlook
Hot Garbage
Outlook (new)
Shit-tier garbage
Glad to be of service! Until next time…
This is quite cool. I always find it interesting to see how optimization algorithms play games, and how their habits can change how we approach the game ourselves.
I notice that the AI makes some unnatural moves. Humans will usually try to find the safest area on the screen and leave generous amounts of space in their dodges, whereas the AI here seems happy to make minimal motions and cut its dodges as closely as possible.
I also wonder if the AI has any concept of time or ability to predict the future. If not, I imagine it could get cornered easily if it dodges into an area where all of its escape routes are about to get closed off.
I assume you’re trying to imply in your comment that people are not going to use it if it’s not easy.
It’s unfortunate, but sometimes, having nice things can be a little hard. If people want to use the easiest thing under the sun, then they’ll just have to accept the downsides that come with it. Sometimes, that means private companies will use private photos of people’s underage children in AI training models that can generate deepfake pornography. What can you do? Convenience comes at a cost sometimes.
I’m not saying I agree with this of course, but that’s just how things are in the world where all rules must follow the dollar.
Well of course, that’s true of any and all publicly accessible data. At least with self-hosting, your private channels still don’t get mined against your wishes.
There is no way to make a network request faster than a function call.
Apologies in advance if this is too pedantic, but it isn’t necessarily true. If you’re talking about an operation that takes on the order of seconds to run, then the network overhead is negligible. And if the operation needs specialized hardware, it could definitely make sense to delegate it to a separate machine over the network. Examples include requiring a GPU, more RAM, or even a faster CPU if your main application is running on more power-efficient CPUs.
I’m not saying that this is true in every case - these are definitely niche cases. But I definitely wouldn’t say that network requests are never faster than local function calls.
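Just as a back-of-the-envelope sketch (every number here is made up purely for illustration):

```ocaml
(* Hypothetical numbers: a job that takes 30 s on a power-efficient
   local CPU but 2 s on a remote GPU box, reached over a network
   that adds ~50 ms of round-trip overhead. *)
let local_runtime_s = 30.0
let remote_runtime_s = 2.0
let network_overhead_s = 0.05

let () =
  Printf.printf "local call:      %.2f s\n" local_runtime_s;
  Printf.printf "network request: %.2f s\n"
    (network_overhead_s +. remote_runtime_s)
```

The request machinery is of course still far slower than a bare function call; the point is just that once the work itself dominates, that overhead disappears into the noise.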
Honest question here: what would stop me from starting a video, then pausing it and walking away from my computer for several hours so YouTube plays ads to no one?
Now repeat but with several tabs.
And bonus points if the videos happen to be mine and I’ve enabled monetization on them.
Hmmm…
Agreed on all points. I think some of the issues you’re facing would be resolved if OCaml were more popular, but others would be harder to fix without making breaking changes to the language, as I mentioned earlier. If I had to put it as succinctly as possible, I’d say the language just needs a lot more polish, which would probably happen if it were more mainstream. But not all languages have to be mainstream, and maybe OCaml’s purpose in the world is, as you put it, to inspire other languages. It is definitely extremely good at that!
No one has said OCaml yet, so I will. It’s not a perfect language, but it has a lot of cool ideas and concepts. It’s a functional language, but it allows you to write imperative code when you want to. Algebraic data types and pattern matching are built natively into the language and work very nicely. Its type inference is very powerful (though that can backfire at times), and the |> operator is really, really fun to use. It also has very powerful module/functor capabilities, though those go a bit over my head since I haven’t had a chance to play with them. Also, opam is a very powerful package manager, and it’s pretty easy to wrap/bind external libraries with it.
I’d love to see some improvements to the language - the syntax is a bit confusing and ugly at times (though that unfortunately can’t be fixed without breaking the language, of course) - but overall I think I’d have a lot more fun programming in OCaml than I do in my day job.
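For anyone who hasn’t seen the language, here’s a small, contrived sketch of a few of the features mentioned above - an algebraic data type, pattern matching over it, and the |> pipeline operator (Float.pi assumes OCaml 4.07+):

```ocaml
(* An algebraic data type: a shape is exactly one of these variants. *)
type shape =
  | Circle of float        (* radius *)
  | Rect of float * float  (* width, height *)

(* Pattern matching: the compiler warns if a variant is missed. *)
let area = function
  | Circle r -> Float.pi *. r *. r
  | Rect (w, h) -> w *. h

(* |> reads as a left-to-right pipeline. *)
let () =
  [ Circle 1.0; Rect (2.0, 3.0) ]
  |> List.map area
  |> List.fold_left ( +. ) 0.0
  |> Printf.printf "total area: %.2f\n"
```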
Right - they say that they’re just going to use it to defend their “property rights”. In practice, they’re going to use it for a whole lot more than just that…
I haven’t done too much work with Wasm myself, but when I did, the only languages I saw recommended were Rust, C++, or TinyGo. From what I’ve heard, Rust and C++ are smoother than TinyGo. Garbage-collected languages usually aren’t great choices for compiling to Wasm because Wasm doesn’t have any native garbage collection support. That narrows your selection down a lot.
But another option you may want to consider is Nim. As I understand it, it compiles to C, so any C->Wasm toolchain should theoretically work for you as well. I did a quick search and wasn’t able to find any great resources on how to do this, but you might have more luck than I did. Good luck!