This to me feels like the author trying to understand library code, failing to do so, then complaining that it’s too complicated rather than taking the time to learn why that’s the case.
For example, the bit about nalgebra is wild. nalgebra does a lot, but it has only one goal, and it achieves that goal well. To quote nalgebra's own description:
nalgebra is a linear algebra library written for Rust targeting:
- General-purpose linear algebra (still lacks a lot of features…)
- Real-time computer graphics.
- Real-time computer physics.
Note that it’s a general-purpose linear algebra library, hence a lot of non-game features, but it can be used for games. This also explains its complexity. For example, it needs to support many mathematical operations between arbitrary compatible types (say, a Vector6 and a Matrix6x6), and since nalgebra supports arbitrarily sized matrices, it isn’t just the 6x6 case that has to work.
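A rough sketch of what that generality looks like in practice (the crate version in the comment is an assumption; the types are real nalgebra types):

```rust
// Cargo.toml: nalgebra = "0.33" (version is an assumption)
use nalgebra::{DMatrix, DVector, Matrix6, Vector6};

fn main() {
    // Statically sized: a 6x6 matrix times a 6-dimensional vector.
    // Dimension compatibility is checked at compile time via the type system.
    let m = Matrix6::<f64>::identity();
    let v = Vector6::<f64>::from_element(2.0);
    let r = m * v;
    assert_eq!(r, v);

    // Dynamically sized: dimensions known only at runtime.
    // A dimension mismatch here would panic instead of failing to compile.
    let dm = DMatrix::<f64>::identity(100, 100);
    let dv = DVector::<f64>::from_element(100, 1.5);
    let dr = &dm * &dv;
    assert_eq!(dr, dv);
}
```

Supporting both of these worlds (and every NxM shape in between) through one generic `Matrix` type is exactly where much of nalgebra's trait machinery comes from.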
Now looking at glam:
glam is a simple and fast linear algebra library for games and graphics.
“For games and graphics” means glam can simplify itself by disregarding features it doesn’t need for that purpose. nalgebra can’t do that. glam can get away with only square matrices up to 4x4 because it doesn’t care about general linear algebra, just what’s needed for graphics and games. This also means glam can’t do general linear algebra and would be the wrong choice if someone wanted to do that. glam was also released after nalgebra, so it should come as no surprise that its authors learned from nalgebra and simplified the interface for their specific needs.
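For contrast, a minimal glam sketch (crate version in the comment is an assumption; the functions are real glam API):

```rust
// Cargo.toml: glam = "0.29" (version is an assumption)
use glam::{Mat4, Quat, Vec3};

fn main() {
    // glam only offers the fixed-size types games actually use:
    // Vec2/Vec3/Vec4 and square Mat2/Mat3/Mat4 — no generic NxM matrices.
    let model = Mat4::from_scale_rotation_translation(
        Vec3::ONE,                  // scale
        Quat::from_rotation_y(0.5), // rotation
        Vec3::new(1.0, 2.0, 3.0),   // translation
    );
    // Transforming the origin by this model matrix yields the translation.
    let p = model.transform_point3(Vec3::ZERO);
    assert_eq!(p, Vec3::new(1.0, 2.0, 3.0));
}
```

No generics, no trait bounds in the signatures: the narrower scope is what buys the simpler interface.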
So what about wgpu? Well…
wgpu is a cross-platform, safe, pure-Rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
GPUs are complicated af. wgpu is also trying to mirror a very actively developed standard by following WebGPU. So why is it so complicated? Because WebGPU is complicated. Because GPUs are very complicated. And because its users want that complexity, so that they can do whatever crazy magic they want with the GPU rather than being blocked because the complexity was hidden away. It’s abstracted to hell and back because GPU interfaces are all incredibly different. OpenGL is nothing like Vulkan, which is nothing like DirectX 11, which is nothing like WebGPU.
Having contributed to Bevy, there are also two things to keep in mind there:
- Bevy is not “done”. The code has a lot of churn because they are trying to find the right way to approach a very difficult problem.
- The scope is enormous. The goal with bevy isn’t to create a game dev library. It’s to create an entire game engine. Compare it to Godot or Unreal or Unity.
What this article really reminds me of isn’t a whole lot of Rust libraries that I’ve seen, but actually Python libraries. It shouldn’t take an entire course to learn how to use numpy or pandas, for example. But honestly even those libraries have, for the most part, a single goal each that they strive to solve, and there’s a reason for their popularity.
I created a Nushell plugin in Rust that merely converts between Nushell and BSON data formats.
It works, but I still have a fundamental lack of understanding of the magic abstract generalized data transformation framework/interface.
I wish there were fewer magic conversions and transformations, and less required knowledge of which ones exist and which to call. Magic traits leading to magic conversions for magic reasons. Or something.
Or, to borrow another “proverb”: premature abstraction is the root of all evil.
I find that even without abstractions, code can be fairly unreadable if it goes all in on Uncle Bob’s Clean Code ideas and you’re bouncing up and down the code/stack frames trying to see where the work actually happens.
I find this one-sentence-per-paragraph style hard to read as a blog post.
It also doesn’t help that the grammar reeks of LLM.
Somewhere else this was posted, the author stated they wrote it themselves but took grammar corrections from Grammarly. That probably created the AI vibes.
They already got a lot of flak for it there.
Really? I would be shocked if an LLM’s been anywhere near this, it’s not in AI style at all.
@syklemil@discuss.tchncs.de Just out of curiosity, I checked the text with an AI detector. Mind you, this is not proof either way, as I don’t take these tools too seriously either. They are AI tools themselves.
https://www.scribbr.com/ai-detector/ says it is 0% likely to be written by AI.
https://stealthwriter.ai/ says 93% is written by AI. This tool also highlights the passages it thinks are AI-generated.
Remember, this is not proof, but it looks suspicious to me. Test the services with text you know for a fact was written by a human (like yourself), and test them with some text you know for a fact was output by an AI (maybe directly ask an AI for some text). For example, the second tool gives me 48% written by AI for some text from my own blog post.
At some point we’re going to need verified real-human identity crap, because the present situation of having to question everything I come across on the internet and essentially CAPTCHA myself to everyone every time I post is giving me a level of stress that makes me want to log off forever, and I can’t be the only one.
That StealthWriter thing doesn’t feel very reliable to me; e.g., it picked up on the first paragraph:
After 4 years with Rust, I love the language – but I’m starting to think the ecosystem has an abstraction addiction. Or: why every Rust crate feels like a research paper on abstraction.
I think an AI would write “four” instead of 4, would use an em dash, and the construction of the second sentence (a Doctor Strangelove reference?) doesn’t feel like most LLMs’ style.
Anyway, there is this massive red flag at the end which suggests I was completely wrong:
P.S grammarly forked me over vro 🥀
I’m not 100% sure what that means, but Grammarly is an AI writing thingy (or maybe that’s only at their premium tier?). I know that AI detectors do pick up on Grammarly-filtered text; these somewhat-false positives are a problem in educational circles.
Yeah, I noticed too that it does not work well. After some other comments, I tested it on more text that I know for a fact was written by a human (myself), and it still gave me over 40% AI probability. It’s entirely possible it does that by design, because they sell a feature to obfuscate text so that other AI detectors won’t flag it.
I need to stop talking and posting about these tools; they are as bad as AI can get.