
  • You raised an issue that the other bullet point has the solution for; I really don’t see how these are “key differences”.

    In Rust there is always only one owner, while in C++ you can leak ownership if you are using shared_ptr.

    That’s what unique_ptr is for. If you don’t want to leak ownership, a unique_ptr is exactly what you are looking for.

    In Rust you can safely borrow references you do not own, while in C++ there is no guarantee a unique_ptr can be shared safely.

    Well yeah, because that’s what shared_ptr is for. If you need to borrow references, then it’s a shared lifetime. If the code doesn’t participate in managing the lifetime, then of course you can safely pass a reference even to whatever a unique_ptr points to.

    The last bullet point, sure, that’s a key difference, but it’s partially incorrect. I deal with performance (as well as write Rust code professionally), and this set of optimizations isn’t that impactful in an average large codebase. There’s no magical optimization that can be done to improve how fast objects get destroyed, but what you can optimize is aliasing, which languages like C++ and C have issues with (which is why vendor-specific keywords like __restrict exist). This can have a profound impact in very small segments of your codebase, though the average programmer is rarely ever going to run into that case (rough sketch below).
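
    To illustrate the aliasing point, here is a minimal sketch. The function names and numbers are my own made-up example, and __restrict is a non-standard, compiler-specific hint (GCC, Clang and MSVC all accept this spelling); Rust’s &mut references hand the compiler the same no-alias information by construction.

        #include <cstddef>

        // Plain version: the compiler must assume dst and src could overlap,
        // so it has to be conservative about reordering loads and stores.
        void scale_add(float* dst, const float* src, float k, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i)
                dst[i] += k * src[i];
        }

        // Hinted version: we promise the two ranges do not overlap, which
        // lets the optimizer vectorize the loop more aggressively.
        void scale_add_hinted(float* __restrict dst, const float* __restrict src,
                              float k, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i)
                dst[i] += k * src[i];
        }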


  • I participated in this, and I have to say it was fun. It’s something I’ve said for years: shaders could make (at least) linear algebra lessons more interesting to young people. Shaders are the epitome of “imagery through math”, and if something like this had been included in my linear algebra classes I would have paid much more attention in school.

    Funny that this is now my day job. I’m definitely looking forward to the video by IQ that is being made about this event.

    To explain some of the error pixels: the way you got a pixel on the board was by elaborately writing down all operations in detail (yes, this included even simple multiplications). The goal wasn’t whether the pixel came out correct or not; as long as you had written down the steps to get your result in as much detail as possible, it counted, and depending on the location of your pixel the calculation could be a bit more complex (there’s a toy sketch of that kind of per-pixel math at the end of this comment).

    More than likely, simple mistakes were made in some of these people’s calculations that made them take the wrong branch when dealing with conditionals. Hopefully the post-mortem video will shed some light on these.
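
    To give a flavour of what was being computed by hand, here is a purely hypothetical example of per-pixel math with a branch. It is not the actual formula from the event, just the same kind of arithmetic where one slipped multiplication can flip a conditional and colour a pixel wrongly.

        #include <cstdio>

        struct Color { int r, g, b; };

        // Made-up rule: pixels near a chosen centre get a warm tone,
        // everything else fades into a dark blue gradient.
        Color shade(int x, int y) {
            int dx = x - 36, dy = y - 20;
            int d2 = dx * dx + dy * dy;      // squared distance to the centre
            if (d2 < 150)                    // inside the disc
                return {255, 200 - d2, 50};
            return {30, 30, 80 + y * 3};     // outside the disc
        }

        int main() {
            // One wrong product in d2 and the branch flips, as described above.
            Color c = shade(36, 20);
            std::printf("%d %d %d\n", c.r, c.g, c.b);
        }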


  • He’s making a video as a post-mortem to this experiment, so those might still be shared. But I can see why it would be better not to share them (aside from privacy/legal concerns, as there was no such release agreement): some of the contributors used their real names, and I may be one of them. It could be a bit embarrassing to see this attached to your real name. They might have submitted their initial draft and then, due to circumstances, could not update their results in the several-hour window that was afforded to them.

    Luckily my pixels look correct though.






  • Hey, game dev here (well, currently working for a company that works with many dev studios), graphics programmer in particular. It depends on what you want to do: is your primary usage going to be programming? You can get away with an integrated graphics card as long as you stick to programmer-art-quality environment detail (which you would normally do anyway to test code).

    You can get pretty far into the dev process with minimal need for a detailed 3D env.

    There will be a perf hit for an external GPU just because of the physics involved (proximity and type of connection matter a lot in computers; this is why your CPU has L-caches on the cores). There’s a tiny locality sketch at the end of this comment to illustrate the same idea.

    I actually always have a crap GPU lying around, because it’s also the best way to test out performance issues. Nothing drives you to improve perf like a choppy framerate ;)

    Most of my colleagues at my previous company were rocking 960s or worse until last year, myself included. And we were a team of graphics programmers working on GPU driver-like software.

    I’d say try it out: download Godot and an example project, run it, and see how well it performs. If the perf looks fine, congrats, it’s a good idea. If the performance is bad, look at the quality of the example project and ask yourself “will I make anything visually more complex?”; if not, congrats, everything is good. Otherwise, well, consider an external GPU if you think that’s best.

    I’d suggest getting a desktop, though, if you ever decide to keep going down the game dev line, just to keep upgrade costs low. I operate on a roughly 5-year cadence per part, alternating mainly between my CPU and GPU, so I don’t replace the entire thing: in 2 years I’ll be updating my CPU and in 4 it’ll be my GPU. I also have a crap laptop for when I’m on the road and use the desktop for my actual work; I can always remote into the desktop if I need more power to compile or render.

    To put your hardware in perspective: it would have beaten my desktop of 15 years ago, and I was already doing game dev just fine back then. So you could definitely do game dev with it; the big question is “what type of game dev”.

    (sorry for the chaotic nature of this response, hope you got something helpful out of it)
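
    As for the locality point above, here’s a minimal sketch of why “proximity matters” (my own contrived example, not tied to any particular GPU setup; the sizes and stride are arbitrary and timings will vary per machine). The same additions get much slower once the access pattern stops being cache-friendly, and an external GPU link is the same trade-off at a larger scale.

        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        int main() {
            const std::size_t n = std::size_t(1) << 24;   // ~16M ints, ~64 MB
            std::vector<int> data(n, 1);

            // Sum every element exactly once, walking memory with a given stride.
            auto time_sum = [&](std::size_t stride) {
                long long sum = 0;
                auto t0 = std::chrono::steady_clock::now();
                for (std::size_t start = 0; start < stride; ++start)
                    for (std::size_t i = start; i < n; i += stride)
                        sum += data[i];
                auto t1 = std::chrono::steady_clock::now();
                auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
                std::printf("stride %zu: sum=%lld, %lld ms\n", stride, sum, (long long)ms);
            };

            time_sum(1);     // sequential walk: cache- and prefetch-friendly
            time_sum(4096);  // same additions, but almost every access misses cache
        }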