There’s a very long history of extremely effective labor saving tools in software.

Writing in C rather than Assembly, especially for more than 1 platform.

Standard libraries. Unix itself. More recently, developing games in Unity or Unreal instead of rolling your own engine.

And what happened when any of these tools came on the scene was a mad gold rush to develop products that weren’t feasible before. Not layoffs, not “we don’t need to hire junior developers any more”.

Rank-and-file vibe coders seem to perceive Claude Code (for some reason, mostly just Claude Code) as something akin to the advantage of using C rather than Assembly. They are legit excited to code new things they couldn’t code before.

Boiling the rivers to give them an occasional morale boost with “You are absolutely right!” is completely fucked up and I dread the day I’ll have to deal with AI-contaminated codebases, but apart from that, they have something positive going for them, at least in this brief moment. They seem to be sincerely enthusiastic. I almost don’t want to shit on their parade.

The AI enthusiast bigwigs, on the other hand, are firing people, closing projects, talking about not hiring juniors any more, and have gotten the media to report on it as AI layoffs. They just gleefully go on about how being 30% more productive means they can fire a bunch of people.

The standard answer is that they hate having employees. But they always hated having employees. And there were always labor saving technologies.

So I have a thesis here, or a synthesis perhaps.

The bigwigs who tout AI (while acknowledging that it needs humans, for now) don’t see AI as ultimately useful in the way the C compiler was useful. Even if it’s useful in some context, they still don’t. They don’t believe it can be useful. They see it as more powerfully useless. Each new version is meant to be a bit more like AM or (clearly AM-inspired, but more familiar) GLaDOS, something that will get rid of all the employees once and for all.

    • Tar_Alcaran@sh.itjust.works

      Which is exactly why nobody uses AI for their work, because what they do is complex and nuanced and they can see the AI is full of shit.

      But your work is easy, and AI produces stuff just like it, because I’m not smart enough to tell the difference.

      • Derpgon@programming.dev

        As a senior PHP developer, I can say that it is absolutely useless for more than writing boilerplate unit tests.

        Best case scenario, you HAVE TO review the code. Remember, you are submitting the changes, not the AI.

  • corbin@awful.systems

    Well, is A* useful? But that’s not a fair example, and I can actually tell a story that is more specific to your setup. So, let’s go back to the 60s and the birth of UNIX.

    You’re right that we don’t want assembly. We want the one true high-level language to end all discussions and let us get back to work: Fortran (1956). It was arguably IBM’s best offering at the time; who wants to write COBOL or order the special keyboard for APL? So the folks who would write UNIX plotted to implement Fortran. But no, that was just too hard, because the Fortran compiler needed to be written in assembly too. So instead they ported Tmg (WP, Esolangs) (1963), a compiler-compiler that could implement languages from an abstract specification. However, when they tried to write Fortran in Tmg for UNIX, they ran out of memory! They tried implementing another language, BCPL (1967), but it was also too big. So they simplified BCPL to B (1969) which evolved to C by 1973 or so. C is a hack because Fortran was too big and Tmg was too elegant.

    I suppose that I have two points. First, there is precisely one tech leader who knows this story intimately, Eric Schmidt, because he was one of the original authors of lex in 1975, although he’s quite the bastard and shouldn’t be trusted or relied upon. Second, ChatGPT should be considered as a popular hack rather than a quality product, by analogy to C and Fortran.

    • jackalope@lemmy.ml

      Very interesting! I didn’t realize there was this historical division between Fortran and C. I thought C was just “better” because it came later.

      • bitofhope@awful.systems

        Oh, not at all. It would be very rude of me to describe C as a pathogen transmitted through the vector of Unix, so I won’t, even if it’s mostly accurate to say so.

        Many high level systems programming languages predate C, like the aforementioned Fortran, Pascal, PL/I and the ALGOL family. The main advantage C had over them in the early 1970s was its relatively light implementation. The older, bigger languages were generally considered superior to C for actual practical use on systems that could implement them, i.e. not a tiny cute little PDP-7.

        Since then C has grown some more features, and a horrible standard filled to the brim with lawyerly weasel words that let compilers optimize code in strange and terrifying ways, allowing it to exist as something of a lingua franca of systems programming. But at the time of its birth, C wouldn’t have been seen as anything particularly revolutionary.

  • fodor@lemmy.zip

    “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” -Upton Sinclair

  • 4am@lemmy.zip

    In the hands of an experienced coder, AI used as an autocomplete, or as a test suite for easy, obvious bugs, etc., is a time saver.

    The big push for AI, though, is to sow doubt about physical evidence. “This document was faked by creating it with AI”, “This video is doctored by AI and is a deepfake”, etc.

    Now they think they have a machine that they can blame for evidence of their crimes. It was never about new tools for vibe coding.

    • Seminar2250@awful.systems

      you can save even more time by not doing the work at all

      the output is more consistent than what an LLM shits out, too

      Edit: serious note, even though you probably aren’t worth anyone’s time: you may be conflating the technology’s actual use cases (as an accountability sink and to spread misinformation) with the intentions of its creators. and the real reason higher-ups are pushing this is because they’re pliant dipshits that would eat dogfood if the bowl was labelled “FOMO”. also they hate paying employees

  • jackalope@lemmy.ml

    People focus on Claude Code because it’s a massive improvement over the previous models. The difference between GPT-4.1 and Claude 4 is palpable. (Claude Code is really just a particular interface for using Claude 4, though that interface does add some juice beyond the model itself, the same way Copilot does.)

    You make an excellent point in the analogy between C and assembly.

    Part of it, I think, is that the current companies are extremely consolidated. Why would they want to make something new? They are only interested in strangling their remaining customers.

    • diz@awful.systemsOP

      I dunno, I guess I should try it just to see what the buzz is all about, but I am rather opposed to the plagiarism-and-river-boiling combination, and paying them money is like having Peter Thiel do 10x donation matching for a Captain Planet villain.

      I personally want a model that does not store much specific code in its weights, uses RAG on compatibly licensed open source, and cites what it RAG’d. E.g. I want to set an app icon on Linux; it’s fine if it looks into GLFW and just borrows code with attribution that I will make sure to preserve. I don’t need it gaslighting me that it wrote it from reading the docs. And this isn’t literature; there’s nothing to be gained from trying to dilute copyright by mixing together a hundred different pieces of code doing the same thing.

      I also don’t particularly get the need to hop onto the bandwagon right away.

      It has all the feel of boiling a lake to do for(int i=0; i<strlen(s); ++i). LLMs are so energy intensive in large part because of quadratic scaling, but we know the problem is not intrinsically quadratic; otherwise we wouldn’t be able to write, read, or even compile the code.
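
      The accidental quadratic in that for loop, and the linear fix, can be sketched like this (a toy example; the function names are made up for illustration):

```c
#include <stddef.h>
#include <string.h>

/* O(n^2): the loop condition calls strlen(), which rescans the whole
   string looking for the terminating null, on every single iteration. */
size_t count_chars_slow(const char *s, char c) {
    size_t n = 0;
    for (size_t i = 0; i < strlen(s); ++i)
        if (s[i] == c) ++n;
    return n;
}

/* O(n): hoist the length out of the loop; same result, computed once. */
size_t count_chars_fast(const char *s, char c) {
    size_t n = 0;
    size_t len = strlen(s);
    for (size_t i = 0; i < len; ++i)
        if (s[i] == c) ++n;
    return n;
}
```

      (A sufficiently clever compiler can sometimes hoist the strlen() call itself, which is rather the point: the quadratic cost is an artifact of how the computation is written down, not of the problem.)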

      Each token has the potential to relate to any other token, but in practice relates to only a few.

      I’d give the bastards some time to figure this out. I wouldn’t use an O(N^2) compiler I can’t run locally, either; there is also a strategic disadvantage in any dependence on proprietary garbage.

      Edit: also I have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM sparse, doing mostly the same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information that the LLM has memorized.

      • hedgehog@ttrpg.network

        Edit: also I have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM sparse, doing mostly the same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information that the LLM has memorized.

        Like MoE (Mixture of Experts) models? This technique is already in use by many models: Deepseek, Llama 4, Kimi K2, Mixtral, Qwen3 30B and 235B, and many more. I read that GPT-4 was leaked and confirmed to use MoE, and Grok is confirmed to use MoE; I suspect most large, hosted, proprietary models are using MoE in some manner.

        • diz@awful.systemsOP

          No no, I am talking about actual non-bullshit work on the underlying math. Think layernorm, skip connections, that sort of thing: changes to how the neural network is computed so that it trains more effectively. Edit: in this case it would be changing it so that after training, at inference time for the typical query, most (intermediary) values computed will be zero.

      • jackalope@lemmy.ml

        From what I’ve read, a month’s worth of Claude queries through GitHub Copilot is estimated to have the same carbon footprint as driving 12 miles.

        I do not care about IP law. My greater concern is how this stuff furthers consolidation in the tech industry.

        • self@awful.systems

          ah right, you only care about vague consolidation in the tech industry, but will take the industry’s word at their self-reported energy usage (while they build massive datacenters and construct or reopen polluting energy sources, all specifically to scale out LLMs) and don’t care about the models being fed massive amounts of plagiarized work at great cost to independent website operators, both of which are mechanisms by which LLMs are being used as a weapon with which to consolidate the tech industry under the rule of a handful of ethically bankrupt billionaires. but it’s ok, Claude Code is a massive improvement over the garbage that came before it — and it’s still a steaming pile of shit! but I’m sure going to bat for this absolute bullshit won’t have any negative consequences at all.

          how about you fuck off, bootlicker.

          • diz@awful.systemsOP

            In case of code, what I find the most infuriating is that they didn’t even need to plagiarize. Much of open source code is permissively enough licensed, requiring only attribution.

            Anthropic plagiarizes it when they prompt their tool to claim that it wrote the code from some sort of general knowledge, that it “just learned from all the implementations”, blah blah blah, to make their tool look more impressive.

            I don’t need that; in fact it would be vastly superior to just “steal” from one particularly good implementation that has a compatible license you can just comply with. (And better yet, to try to avoid copying the code and to find a library if at all possible.) Why in the fuck even do the copyright laundering on code that is under the MIT or a similar license? The authors literally tell you that you can just use it.