- cross-posted to:
- fuck_ai@lemmy.world
I predict that for a while, corporations will lie about using AI more than they actually do, just because it’s still being hyped. But then everyone will stop giving a fuck.
We already never really gave a shit at the companies I’ve worked at so far. Bosses love spewing shit about AI, telling us to use it and find out more about it, especially coding AIs, but we usually try them out for a bit on serious tasks and then revert to only ever using them for formatting or research assistance. My colleagues tried and failed several times to use them for tasks involving non-trivial logic, and it never worked, even with paid Cursor or Gemini.
Predict
Can already confirm they are doing that.
The executives are building dog shit unrealizable tools right now that nobody uses. It’s purely for the stockholders.
We marked ourselves as “AI powered”. Nobody is using our AI tools after they go live. We can see the logs.
Can already confirm they are doing that.
Well yeah, I guess I meant more that this trend is increasing.
We marked ourselves as “AI powered”. Nobody is using our AI tools after they go live. We can see the logs.
Did using the buzzword help at all with shareholders or customers though?
The executives are building dog shit unrealizable tools right now that nobody uses
Don’t even need to build them sometimes IMO. A friend pointed out that the excellent notetaking app she was using claimed to use AI, and then she found this comment on Reddit from one of the people behind it:
While FlowSavvy’s auto-scheduling is considered AI in the broad sense that it performs complex tasks that mimic human intelligence (and is exactly what people are looking for when they’re looking for “AI scheduling”), it does not use ML/DL. Instead, FlowSavvy uses a carefully-designed deterministic algorithm to schedule tasks predictably, reliably, and quickly.
Everyone seems to be super happy with it as an AI auto-scheduler lol
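For anyone curious what a deterministic auto-scheduler like that might look like under the hood, here’s a minimal sketch (purely illustrative, not FlowSavvy’s actual algorithm; all names and numbers are made up): sort tasks by deadline and greedily pack them into free hours. No ML anywhere.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration_hours: float
    deadline_day: int  # day index the task must be finished by

def auto_schedule(tasks, hours_per_day=8, num_days=7):
    """Greedy earliest-deadline-first packing -- deterministic, no ML."""
    # Sort by deadline so urgent work gets placed first.
    tasks = sorted(tasks, key=lambda t: t.deadline_day)
    free_hours = [hours_per_day] * num_days
    schedule = []  # (day, task_name, hours)
    for task in tasks:
        remaining = task.duration_hours
        for day in range(num_days):
            if day > task.deadline_day or remaining <= 0:
                break
            chunk = min(remaining, free_hours[day])
            if chunk > 0:
                schedule.append((day, task.name, chunk))
                free_hours[day] -= chunk
                remaining -= chunk
        if remaining > 0:
            print(f"Warning: could not fit '{task.name}' before its deadline")
    return schedule

print(auto_schedule([Task("report", 6, 1), Task("slides", 4, 2), Task("review", 3, 0)]))
```

Earliest-deadline-first packing is about as “AI” as a spreadsheet, but it’s predictable, which is exactly what the quoted dev was selling.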
Did using the buzzword help at all with shareholders or customers though?
It has helped a LOT with the stock price. With customers? Not even a little. We work with high end clients, customized solutions for every client. I can say without a shadow of a doubt that 100% of our clients have asked for us to remove the AI features.
Don’t even need to build them sometimes IMO
LMAO so AI is just an IF statement.
That’s funny.
Good. So when will the bubble burst?
My fear is that the US and Chinese governments will be propping up AI long after its shelf life because we’re in a mini economic cold war with them and nobody wants to get “left behind” in this ridiculous AI race to the bottom.
Literal trillions of dollars are going into AI related initiatives. But the bubble can’t burst unless the money dries up and I don’t see that happening with the current regimes.
I just realized that the GenAI craze is like the modern version of Ronald Reagan’s Star Wars project, but somehow both countries got fooled into pouring money into a colossal folly.
That’s a good point. I was listening to a YouTube podcast on the topic and the hosts did say international competition is the primary reason for developing AI. The US rationale is that even if they want to regulate AI, the “bad guys” won’t, so it’s better to develop it first before the bad guys do. They made the same comparison with the Manhattan Project and the race to beat the Axis to developing nukes.
But not at US government contractors! Out of nowhere we have a version of ChatGPT and a new hire that is actively working on getting AI to do everything 🙄
It’s like having an incompetent assistant. Worth it kinda I guess? Just don’t give them anything important.
I worked in logistics, and before I left (about 2 years ago) there was a software company working on routing software for the company. They were going on about how AI would route all the trucks and decide which order the deliveries would be done in.
I was in a lot of these planning meetings and never said a thing about how it didn’t need to be AI; it just needed to be a set of rules to follow. They are still not running without human intervention.
Rules: Don’t load too much, don’t have two trucks crossing over each other, go when the stores are open, don’t work more than 12 hours, try to give the same amount of stops/workload to each driver.
AI doesn’t know what these things mean unless we tell it how to turn the work into rules. There is nothing intelligent about a system that relies on humans constantly checking its work, and paying extra to call that “AI”.
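To make the point concrete, a rules engine like that can be a page of boring code. Here’s a minimal sketch of what those checks could look like (all names, limits, and data shapes are hypothetical, not what the vendor actually built):

```python
MAX_LOAD_KG = 10_000             # "don't load too much"
MAX_SHIFT_HOURS = 12             # "don't work more than 12 hours"
STORE_OPEN, STORE_CLOSE = 8, 18  # "go when the stores are open"

def violations(route):
    """Return a list of plain-English rule violations for one truck's route.
    `route` is a dict like:
    {"driver": "A", "load_kg": 9500, "hours": 11.5,
     "stops": [{"store": "S1", "arrival_hour": 9}, ...]}
    """
    problems = []
    if route["load_kg"] > MAX_LOAD_KG:
        problems.append("truck is overloaded")
    if route["hours"] > MAX_SHIFT_HOURS:
        problems.append("shift longer than 12 hours")
    for stop in route["stops"]:
        if not (STORE_OPEN <= stop["arrival_hour"] < STORE_CLOSE):
            problems.append(f"{stop['store']} visited outside opening hours")
    return problems

def workload_is_balanced(routes, tolerance=2):
    """'Same amount of stops per driver', within a small tolerance."""
    counts = [len(r["stops"]) for r in routes]
    return max(counts) - min(counts) <= tolerance
```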
they were going on about how AI would route all the trucks and decide which order the deliveries would be done in.
Yeah, NP hard problems don’t get easier just because the AI is doing the “thinking.”
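For a sense of scale, the brute-force search space for even a single truck’s visit order blows up factorially, and no amount of “AI thinking” changes that (toy illustration):

```python
import math

# Number of possible visit orders for n stops on one truck (brute-force TSP).
for n in (5, 10, 15, 20):
    print(f"{n} stops -> {math.factorial(n):,} possible orders")
```

Real routing software leans on heuristics and constraint solvers either way; slapping “AI” on the label doesn’t change the math.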
Pretty sure that’s how the product will work but they’ll just call it AI
Yeah, that was why it was so dumb. There was nothing AI about it, and before this AI bubble it would have just been called automation.
I work in a tech department for a super old company and we are given pretty much all the professional AI software and custom LLMs.
My AI usage has gone up, but not really for what you’d think. I use it mostly for thought consolidation and pagination. As someone with ADHD, my thoughts are scattered and my reports and presentations are often hard to comprehend. AI has been pretty helpful putting my thoughts into a format that is easier to consume for people not on the spectrum.
I essentially dictate my thoughts in probably the most scatterbrained way, and the AI does a great job putting my thoughts into words and even presentations.
In my previous jobs without AI, I struggled with putting my thoughts on paper. Now I’m literally a top performer at work.
No one ever knew how to use it, and they still don’t. All we heard was “implement AI” but not any actual use cases.
The use case for AI is pumping up the stock market.
That’s because it is a Speak & Spell pretending to be a hammer.
I was working in pure software engineering and we had to attend a meeting/presentation about some use cases for it.
It’s one of those things that any useful tech would never need. Do you think the airplane, the cell phone, the internet, or any other useful tech you can think of needed brainstorming sessions for use cases? Hell no, they couldn’t implement their ideas as fast as they wanted because the uses were so obvious.
It works well to get a simple working example of a tech you haven’t used before. Like a quicker alternative to searching Stack Overflow. “Vibe coding” a whole app seems like too much of a stretch for the current tech though. And whether or not it’s worth the money invested in it or the energy used to run it is another area where it seems vastly oversold.
If I have to spend several days ‘training’ it, then take time to fix its mistakes, it’s not saving any time.
idk I think my AI infused can opener is great!
People are using it every day. You might be using LLMs or generative models without even knowing it. There are all kinds of tools, plugins, and features in photo editing, video editing, audio work, programming, and image scanning/sorting. Half the time, I find that Kagi’s AI agent is more productive than wasting an hour on stupid forum posts trying to troubleshoot a support issue.
Just because you don’t know how to use it doesn’t mean “no one ever knew how to use it”.
If AI is a bubble that pops, all the big talking heads that went on about how it’s the future and we all need to embrace it won’t lose credibility. They’ll mostly just keep their jobs. Not fair. I can be pigheaded and wrong and I’ll do it for less than their seven figure compensation.
Worse, those of us who have been sticking our neck out and saying “hey guys let’s maybe slow down a minute on investing into things that have no foreseeable path to profitability” are getting passed over on career advancements while hype-chasers are getting rewarded.
Life ain’t fair man, especially when you have a passing interest in understanding wtf is going on and a moral compass that tentatively points towards not actively and knowingly making the world worse.
Ever try sprinkling in a little “my idea will allow the company to fire 90% of their workers” while being pigheaded and wrong? Might bump you up a few levels - companies love that kind of shit.
Breaking news, trash tech no one wanted or asked for being shoved down everyone’s throats is trash.
It’s not trash. It’s just not the “replace every worker in every industry” hype bullshit that psychopathic CEOs are peddling to their rich friends every chance they get.
I use LLMs just about every day. They are useful tools that save time, if you know how to use them right, employ proper review, and verify important information. It is not a wizard, and it will not replace a functioning brain.
The Gartner hype cycle doesn’t crash to zero. It stabilizes. I think people have been too conditioned by actual garbage technologies like NFTs, blockchain, and to some extent, crypto. And true driverless cars have such a high barrier to entry that it’s difficult to reach any sort of “good enough” point with them without another few decades of innovation, so people ignore that tech, too. Nowadays, people are so conditioned to expect every new tech to just disappear after the hype cycle and life just continues as normal.
But, that’s not how this works.
Sounds like an awful lot of work to get the hallucinating, environment destroying, billionaire enriching, Hitler praising slop machine to work right. We’re all better off binning the trash.
Sounds like an awful lot of work to get the hallucinating, environment destroying, billionaire enriching, Hitler praising slop machine to work right.
This is a very reductive and ignorant take. Media promotes the edge cases and makes fun of them. Meanwhile, people are using this shit all the time without incident.
We’re all better off binning the trash.
Not going to happen. You’d have a better chance of all of social media suddenly disappearing overnight.
Meanwhile, people are using this shit all the time without incident.
Oh I’ve met these people. They’re often just too stupid to understand when they’re being fed bullshit. That’s why they have no issues.
Meanwhile, people are using this shit all the time without incident.
Lol. Lmao even.
I agree, it makes a good alternative to a quick web search, at least for many cases. It’s not like search engines surface completely accurate information either, gotta verify and use common sense either way
time to move some of my 401k allocations, cuz a big slice of the sp500 pie is heavily invested in AI
It’s such a misery managing any account during a bubble. If you hold you lose when the knife falls, if you try to time it then everything stays irrational longer than you can stay solvent.
I always just ride it out. I already bought the asset for a reason that doesn’t change just because it’s currently in a bubble. I didn’t buy a lump sum because I thought it was “cheap” then; I bought gradually every paycheck. So that’s exactly how I intend to spend it - slowly as needed, ignoring bubbles.
You are not the first person I’ve heard share that sentiment!
why though
the s&p is an index weighted by market cap across the top 500 companies, so the bigger companies get more of the money put into their stocks. if most companies aren’t doing great and a few tech ones are “holding up the market”, then most of the money in your s&p fund will indeed be with them
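a toy example of what cap weighting means in practice (completely made-up numbers):

```python
# Made-up market caps, just to show how cap weighting concentrates an index fund.
market_caps = {"MegaTech A": 3000, "MegaTech B": 2500, "Average Co": 50}  # $ billions
total = sum(market_caps.values())
for name, cap in market_caps.items():
    print(f"{name}: {cap / total:.1%} of every dollar you put in")
```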
Good point, it is I who is the fool.
It makes sense that large companies would be incapable of using it effectively. They rely on antiquated systems and rigid structures that are less likely to change than smaller companies.
What a dumb take.
People don’t use AI for a lot of reasons, but it’s not because their company said they couldn’t. Every programmer I know is being asked to use AI, and most of them find it significantly shitty to use, on top of how horrible it is from an environmental, occupational, moral, and psychological standpoint.
Like, skip past the parts where AI has killed people. Skip past the insane water usage. Skip past the emissions. Skip past the cognitive reduction in reasoning.
This thing was trained on whatever data they could get a hold of: the internet, discredited information, biased data, and all. When you’re lucky, it is basically a coin flip whether it works or not. So, if you have no foundation in the topic you ask it about, you have no clue whether you got a hallucination, a bad data point, or a correct answer. And if you do, you have to double-check the answer anyway.
AI, as it is now, is a glorified search engine doubling as a sycophant. The main purpose of the businesses that own and run AI is to keep you using it, forever. Whether it is good or bad at anything else is unintentional.
Well put, and even skipping past all the serious issues, it’s trash. Even without all the issues it still sucks ass. Just trying to search the internet sucks, it’s all AI nonsense with no way to filter it out.
Right. And that problem compounds itself, as well. The more AI generated information that exists and inevitably is fed back into the algorithm, the worse the outcomes will get because algorithms will essentially inbreed themselves off the data they generate.
But these companies are desperate to hook other companies on AI. If they can generate income by renting AI “workers” to other companies, they’ve made you a perpetual customer. The boss is asking workers to use these AIs to feed more specific data into the algorithm so it can better mimic the workers; the more workers that use them, the more “good” data they can feed in to ultimately replicate your job functions.
It’s just… Bad from pretty much every angle.
What a dumb take.
Well defended. Truly, brevity.
Large companies actually force people to use AI, track it with telemetry, and punish them for not using it. They measure a worker’s efficiency by how much AI the worker uses. That’s the biggest problem right now. Leave people alone.
I used AI at work the other day… I’d just pasted something into a browser and realised I needed to do a load of text manipulation. Rather than copy it out to vim, process it there, and paste it back in, I just told the AI to do it. EDIT: funny how this got downvoted. What exactly do people think they’re downvoting?
truly the industry is saved
Yeah, I use it to bypass advertising when looking up obscure coding problems. But I think the point is I’d never pay a penny (see what I did there?) to use it. It’s nice and helps in some places, but I think monetizing the billions spent is the challenge, if not impossible. Plus people at my work have Copilot fatigue. It’s been integrated in such a clumsy, distracting way. Actually adding negative value now.
I’m way too afraid of having our codebase ripped by these hucksters to use those in-IDE ones, and good open models are still too big to be run locally.
Skill issue. I used AI to create a web application that extracts the serial number from an image into text. This allows us to simply take a picture rather than having to type the serial number manually while using a magnifying glass, significantly speeding up the process and lowering the error rate.
You could’ve just looked for off the shelf OCR software and it would probably be better, no LLM needed. OCR has been around for far longer than the current LLM bubble.
No, I tried OCR and it was less accurate.
You’re reading text from a picture. That is OCR.
deleted by creator
Yea you could argue semantically that using an LLM to turn text in an image into machine readable format falls within “Optical Character Recognition”. I was referring specifically to OCR algorithms like Tesseract (pytesseract) and EasyOCR.
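For reference, the classic Tesseract route being described usually looks something like this (a rough sketch; the filename, preprocessing steps, and character whitelist are just illustrative, and serial-number labels often need exactly this kind of tuning before it works well):

```python
# Rough sketch of the "classic OCR" approach being discussed: pytesseract plus preprocessing.
from PIL import Image, ImageOps, ImageFilter
import pytesseract

def read_serial(path):
    img = Image.open(path).convert("L")                # grayscale
    img = ImageOps.autocontrast(img)                   # boost contrast on faint labels
    img = img.filter(ImageFilter.MedianFilter(3))      # knock down sensor noise
    img = img.point(lambda p: 255 if p > 128 else 0)   # crude binarization
    # --psm 7: treat the image as a single line of text.
    # Whitelisting characters helps a lot on serial numbers (no lowercase, no punctuation).
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(img, config=config).strip()

print(read_serial("label_photo.jpg"))  # hypothetical filename
```

Whether that beats an LLM-based extractor depends a lot on the image quality and the preprocessing.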
you could argue semantically
No. There’s nothing to argue there, it’s the definition of OCR.
Also, do you believe that LLMs found a new, novel way of doing OCR? That’s not how they work; LLMs don’t invent, they don’t innovate, they’re simply unable to do that. What they do, when they work correctly, is apply already known and established techniques and tools. So to quote your top comment in this chain:
Skill issue
indeed, anyone with skills would have whipped that up in no time without AI… or used any of the many apps that already do that
For real, you can copy text out of images on most modern phones so this isn’t really an issue anymore
Yea, you can do that, but have fun doing that 1000 times a day. Manually copying was the problem; I created a system that automated that process.
Ok but how does the phone extract the text? The fact that a modern phone can do it does not mean there’s no AI involved
OCR is a very, very old AI technique that we’ve used for decades to scan documents and save them as PDFs. The old systems are far from perfect, but they work well enough in most cases. If an AI recognition model can do the same job with fewer mistakes, that is an improvement.
Then why was I the one to do it if anyone could have done it?
If I have a low ability in that specific area then why was I able to achieve success?