- cross-posted to:
- programming@programming.dev
Recently re-discovered this gem of a blog post, written in 2018 by Nikita Prokopov, about his disenchantment with the state of modern software. Do you think it’s still relevant today (perhaps more/less so than it was when it was written)?
It only got worse, and now that enshittification is in full effect for most big name consumer software, the speed at which things are falling is rising exponentially.
I only expect things to get worse now that AI has become mainstream and companies are half-baking it into every notepad they find.
Funnily, due to this, I often find an open source app that is way better than whatever annoyed me.
Just today I used an Adobe product that got me raging. Within minutes I installed an OSS equivalent that was a joy to use in comparison.
It’s an interesting trend.
This is my experience as well. I’ve always tried to be privacy-conscious and stick to self-hosted alternatives or FOSS, but I was also lazy and didn’t really try too hard. With the recent enshittification problems for almost every product that has a corporation behind it, it’s a lot more in my face that it’s shit and I should be dealing with it.
It made me finally get a VPN and switch to Mullvad browser. Get rid of Reddit completely. I finally got a Pixel with GrapheneOS and got a NAS running.
It’s also doing wonders for my digital addiction. The companies are grossly mistaken in assuming that my addiction to their service is greater than my immense hatred for forced monetization, fingerprinting, and dark patterns. It’s turning out it’s not, and in the last few months I’ve dropped so many services that I was never able to really stop using before, most of them thanks to popups like “You have to log in to view this content”, “This content is available only in app”, or “You are using an adblocker…”. Well, fuck you. I didn’t want to be here anyway.
IMO, it’s more relevant today than it was when written. None of the problems called out in the post have been addressed, and many have gotten worse.
Why would it get better? Higher level programming is faster and saves companies money… they’re not going back to super efficient assembly or something to make xyz from scratch.
Article has a bit of an “old man yelling at clouds” sort of vibe.
It does have that vibe, but it’s unarguably true that a lot of software and websites are ridiculously bloated and slow.
Seven years ago, when I started my career, my first project began with us sitting down and designing the program and its interfaces.
Today, we implement features using best practices, never sitting down to design, and end up accumulating technical debt that we don’t have the funds or time to go back and fix.
Time to market is proportional to time to obsolescence. We don’t design for longevity anymore :(
My current project is building an (almost) 1 GB Java rich client that takes around 2 minutes to load… while it’s merely a GUI with some small client-to-client capabilities. The technical debt is insane, and it’s only getting worse because they can’t afford to rebuild it from scratch either.
In similar boat(s) too. Like watching a train wreck.
Basically anything above assembly is a high level language. The article is not about making everything from scratch. It’s about thinking about what you’re doing and not just being lazy.
I don’t think it’s about laziness, but rather about having deadlines set by management that you can only possibly meet by reusing stuff as much as possible - even if you only actually need 5% of that stuff but have to package all of it into your application for it to work.
He’s talking about ridiculous programming stacks and bloated tooling. Not once does he level any criticism at higher-level languages in general.
The frameworks and tooling stacks are just even higher level abstractions.
Very true.
Programmer time is more expensive than computer time.
That might excuse inefficiency if all of these things were true:
- The programmers (or their employers) were buying new computers for all their users
- The new computers were fast enough to keep slow software from wasting users’ time
- The electricity to run them was free and without pollution
- The resources consumed and waste produced by that upgrade cycle had no impact on the environment
What’s really happening here is that producers of software are making things cheaper and easier for themselves by shifting and multiplying costs onto the users and the environment.
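The cost-shifting argument can be made concrete with some back-of-envelope arithmetic. All the numbers below are hypothetical, chosen only to show the shape of the trade-off:

```python
# Back-of-envelope sketch (all numbers are hypothetical): a shortcut that
# saves one developer a week of work can cost users far more time in
# aggregate, because the inefficiency is multiplied across every user,
# every day.
dev_hours_saved = 40                  # one developer-week saved by the shortcut
users = 100_000                       # daily active users
extra_seconds_per_user_per_day = 2    # latency the shortcut adds per session
days = 365

user_hours_lost = users * extra_seconds_per_user_per_day * days / 3600
print(f"Developer hours saved: {dev_hours_saved}")
print(f"User hours lost per year: {user_hours_lost:,.0f}")  # ~20,278 hours
```

Even with a modest two seconds of added latency, the users collectively pay back the saved developer-week hundreds of times over each year.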
The amount of waste is staggering. It’s part of why I haven’t enjoyed professional software development in years.
I’ve tumbled down this rabbit hole on more than one occasion.
This line of thinking can lead you to the conclusion that the only ecologically just thing to do is for humans to cease to exist.
It’s a trap that can lead to despair.
Do your part to be mindful, respectful, and conservative with resources, but don’t give in to nihilism.
That’s not the problem.
Software used to be an artisan job: a skilled engineer carefully sculpting a solution to a problem.
Management didn’t have much to add there, or much visibility; this was a world-breaking problem for them. Where was their value?
The solution was issue-tracking: make every line of code a bureaucratic nightmare and ensure panopticon-like visibility into everything, guaranteeing the manager was always in control.
Progress slowed to a crawl, but that’s fine, you just need to hire more developers. Hundreds. They scale, right?
Good programmers stick to startups because large companies are just well-paying torture firms. I wouldn’t go back to Google for any amount of money, but I’ll do a startup almost for free, because they let me write code.
I see you’ve met Jira. :)
There are far worse things in the darkness than Jira :( but yes.
Good article that I think does a pretty good job of outlining the problems of “Computer time is less expensive than programmer time.”
I was “raised” on the idea that end-user time is more valuable than programmer time and nobody really talked about computer time except in the case of unattended-by-design systems like batch processing. Even those were developed to save end-user time by, for example, preparing standard reports overnight so that they would be ready for use in the morning.
I think that one place we went off the rails was the discovery that one way to manage efficiency was by creating different classes of end-user: internal and external. Why would management care about efficiency when the cost of inefficiency is paid by someone else?
So much software is created explicitly for the purpose of getting someone else to do the work. That means the quicker you get something out there, the quicker you start benefiting, almost without regard to how bad the system is. And why bother improving it if it’s not chasing customers away?
@sahrizv: 2014 - We must adopt #microservices to solve all problems with monoliths. 2016 - We must adopt #docker to solve all problems with microservices. 2018 - We must adopt #kubernetes to solve all problems with docker
Now everything is a microservice, even when it doesn’t need to be.
And then there’s the whole NoSQL debacle, where relational projects reinvent the wheel with NoSQL every time, and for some reason that’s thought to be smart.
If I had a penny for every time I saw something along the lines of `noSqlQuery().filter(...)`, I would be disturbingly rich.
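A hypothetical sketch of that anti-pattern: pull the entire collection out of the store, then filter it in application code. The names here are illustrative stand-ins, not a real client API:

```python
# Anti-pattern sketch (illustrative names, not a real driver API):
# fetch every document, then filter client-side. With a relational store
# the predicate would run where the data lives, typically using an index.

def fetch_all_orders(store):
    """Stand-in for a noSqlQuery()-style call: returns every document."""
    return list(store)

store = [
    {"id": 1, "status": "open",   "total": 40},
    {"id": 2, "status": "closed", "total": 15},
    {"id": 3, "status": "open",   "total": 99},
]

# The whole collection crosses the wire just to keep two rows.
open_orders = [o for o in fetch_all_orders(store) if o["status"] == "open"]

# What SQL gives you for free:
#   SELECT * FROM orders WHERE status = 'open';
print(open_orders)
```

At three documents this is harmless; at three million, the application is re-implementing (badly, over the network) the query planner the database already has.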
This has always been the case. When Windows XP came out, people hated that it needed 64 MB (not GB) of RAM, because that was more than an entire Windows 95 disk installation, which was itself bloated compared to older Macs and Amigas.
I was just thinking about something similar in regards to gamedev.
For the past few years since college, we’ve been working on a 2D game in our spare time, running on Unity. And for the past few months I’ve been mostly working on performance, and it’s still mind-boggling to me that we’re having trouble with performance at all. It’s a 2D game, and we’re not even doing that much with it. That said, I know it’s mostly my fault, being the lead programmer, and since most of the core systems were written when I wasn’t really an experienced programmer, it shows. But still. It shouldn’t be that hard.
Is the engine overkill for what we need? Probably. Especially since it’s 2D, writing our own would probably be better - we don’t use most of the features anyway. The only problem would be tooling for scene building, but that’s also something that shouldn’t be that hard.
The blog post is inspiring. Just yesterday I was looking into what I would need to get basic rendering done in Rust. I may actually give it a try and see if I can make a basic 2D engine from scratch; it would definitely be an amazing learning experience. And I don’t really need that many features, right? Rendering, audio, sprite animation, collisions, and a scene editor should be sufficient, and I have a vague idea of how I would write each of those features in 2D.
Hmm. I wonder what would be the performance difference if I got an MVP working.
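For what it’s worth, the core loop behind the features listed above (update, then collision checks) is genuinely small. Here’s a minimal sketch, in Python for brevity rather than Rust, of a fixed-timestep update with naive AABB collision; a real engine would add rendering, audio, and spatial partitioning on top of this skeleton:

```python
# Minimal 2D engine core sketch: integrate velocities for one fixed
# timestep, then report overlapping axis-aligned bounding boxes.
# Naive O(n^2) pair checks; real engines use spatial partitioning.
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float
    vx: float = 0.0
    vy: float = 0.0

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned bounding-box overlap test."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def step(entities: list[Box], dt: float) -> list[tuple[int, int]]:
    """Advance all entities by dt seconds, return colliding index pairs."""
    for e in entities:
        e.x += e.vx * dt
        e.y += e.vy * dt
    return [(i, j)
            for i in range(len(entities))
            for j in range(i + 1, len(entities))
            if overlaps(entities[i], entities[j])]

# Two boxes on a collision course: after one 1-second step they overlap.
world = [Box(0, 0, 1, 1, vx=1.0), Box(1.5, 0, 1, 1)]
print(step(world, 1.0))
```

An MVP built on a loop like this avoids paying for engine features you never use, which is exactly the trade-off the commenter is weighing against Unity.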
Unity is trash and I’ll just leave that alone.
Using Rust with wgpu for a game engine is going to be difficult unless you already know Rust intimately and have used the Vulkan API before. I recommend you give it a try, but last I checked wgpu expected you to be familiar with Vulkan and was missing doc comments on most crate types and functions.
You might have better luck with something like macroquad or miniquad, but you’ll probably hit a wall and realize you want to do something the developer didn’t think to expose an API for. You’re also on your own for sound. Bevy has many components and I know it’s popular, but I don’t know if it has rendering. Maybe macroquad is the missing piece? Oh, and then text rendering. That’s a tough one.
I recommend a couple of options: browse lib.rs or AreWeGameYet for game engines that aim to provide a complete package.
For non-Rust options, Re-Logic recently gave a bunch of money to Godot and FNA, so I would check those out. That’s going to be your quickest start (toward minimalism and performance) that isn’t Unity.
This has been getting to me for a while. A couple of months ago I stumbled across the Gemini protocol - a spiritual successor to Gopher. I haven’t done much in that space yet, and I’m not suggesting it’s a salve for all the problems talked about in the article here, but I have found it is somewhat of a balm for the soul. A different attitude toward the internet and, to a limited extent, some of the software.
Extending that mindset into even a small fraction of software development would be refreshing.