AI Completely Failing to Boost Productivity, Says Top Analyst
futurism.com/artificial-intelligence/ai-failing…
Comments from other communities
Ah yes, a surprise to absolutely NOBODY.
It depends on what you consider productivity. If you mean crunching numbers, like how many reports a week an employee can produce or something like that, then yes, productivity will provably skyrocket thanks to AI slop. But once you start factoring in the quality or usefulness of the work, productivity will plummet.
If the corpo is paying me to generate slop for their internal documentation, I execute.
Especially since they plan to train another AI on the doc we generated.
Their “expert”, who is the son of one of the chiefs, implemented a RAG system that doesn’t support TXT files or Word documents from before 2022.
So yeah, unusable project, and now we have to convert 20 years of documentation, which is old Word or TXT files, into the new Word format.
I’m sure this will improve productivity.
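For what it’s worth, that kind of bulk conversion can at least be scripted instead of done by hand. A minimal sketch, assuming LibreOffice is installed and that a hypothetical docs/ folder holds the old files:

```python
# Minimal sketch, not a production tool: batch-convert legacy .doc and .txt
# files to .docx with LibreOffice in headless mode. The docs/ and converted/
# paths are hypothetical.
import subprocess
from pathlib import Path

SRC = Path("docs")        # hypothetical root holding the 20 years of files
OUT = Path("converted")   # where the .docx copies end up
OUT.mkdir(exist_ok=True)

for old_file in sorted(SRC.rglob("*.doc")) + sorted(SRC.rglob("*.txt")):
    # soffice writes <name>.docx into --outdir
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "docx",
         "--outdir", str(OUT), str(old_file)],
        check=True,
    )
```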
Those who have increased their productivity will not share that with capitalists in exchange for an undervalued salary.
We’ve already seen productivity increase multiple times without salaries reflecting it anyhow.
I love that studies have shown that a 4-day work week boosts productivity AND salvages some of the living in the work-life balance, but rich people went with AI because it doesn’t boost productivity AND it consumes exorbitant amounts of water.
AI is intended to let the wealthy access the benefits of talent, without letting the talented access the benefits of wealth.
This seems more and more true.
You forgot about electricity.
You forgot about PC hardware.
The main difference is that PCs actually worked as advertised back in the day, and the productivity dent back then wasn’t the result of a false promise from the start. Before AI, the main use of computers was deterministic in nature, meaning you get a directly reproducible outcome depending on the input. AIs (especially LLMs) are probabilistic in nature, the output cannot be guaranteed to be correct, and it turns out just bolting guardrails on top of the system is a band-aid. In practice, instead of getting a general-purpose intelligent machine capable of making autonomous decisions, you get a word predictor with an unlimited number of possible failure modes.
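To put the deterministic vs. probabilistic point in code, here is a toy sketch; the token list and weights are invented purely for illustration and have nothing to do with any real model:

```python
# Toy illustration only: a deterministic function vs. sampling from a
# probability distribution. Tokens and weights are made up for this example.
import random

def double(x):
    # Classic software: same input, same output, every single run.
    return x * 2

tokens  = ["the", "cat", "fire", "42"]
weights = [0.4, 0.3, 0.2, 0.1]

print(double(21), double(21))   # always prints "42 42"

# A probabilistic "next token" step: run it twice and you may get two
# different answers, and neither is guaranteed to be the correct one.
print(random.choices(tokens, weights=weights, k=1)[0])
print(random.choices(tokens, weights=weights, k=1)[0])
```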
Good summary.
I also find that LLMs are mostly good at the parts of my job that I enjoy, or that are too important to entrust to something that is right 80% of the time.
Tech boosters suggest using LLMs to draft emails. My job involves explaining mathematical ideas to (well-educated) people who have less math training than I have. That part of the job is fun and challenging; why would I outsource a task that I like doing and am good at? “Everyday” emails are not important enough to stress about; I can dash them off as fast as I can write a prompt.
The first time I used a computer instead of a typewriter to compose something, I knew that the world had changed. All that AI has changed is that I now doubt every article and picture I see.
It’s similar for me as a software developer. I sometimes use LLMs to get alternative concepts or implementations for the task at hand, or simply to refresh my memory about something with an example. This works well because I’m already well aware of what I’m trying to accomplish, so my prompts are precise enough to get a decent result. However, generating directly usable code that meets my expectations just with prompts is really hard to do. There is so much fine-tuning necessary that I’m faster just doing it myself.
I don’t see the technology itself as evil, there are some good uses if you know about the capabilities and limitations. What’s evil is big corporations selling this technology to people who are not prepared to understand or handle the limitations.
But personally I’m using it less and less these days. I don’t want to take part in the massive environmental damage AI is causing and the thought of contributing to this mess makes me feel icky every time I think of using it.
I’m willing to bet those outsourced teams in India are just vibe coding too.
I’ve seen this so many times, long before AI was even a thing. It always goes like this:
What amazes me is that this is still happening to this day. I’ve seen a real world example of this just last week.
On top of that, AI has arrived and it gives the CEOs of the world an opportunity to make the same mistakes again. It’s mindblowingly stupid.
Note: I don’t blame Indian companies for offering their services. The blame goes entirely to greedy companies from the West that try to squeeze profit out of income disparity and lower standards.
Also, you get what you pay for. Lots of companies have good quality in India. Same as how lots of factories produce good quality stuff in China.
But it shouldn’t be a surprise to get garbage bin quality if you’re shopping for bottom shelf prices. Going for higher quality wouldn’t be a bad deal but there’s still money to be squeezed by going lower… and lower… and lower
Very good summary of the Corporate shitshow
Idk, impoverished but with access to education is quite a mix for producing hardworking individuals.
No doubt, but the tech firms in India that are bidding on outsourced software projects all have a toxic incentive to produce code very quickly and very cheaply. In an environment like that, I’m sure the pressure to use AI is extremely high.
That assumes the AI can produce acceptable code quickly, which I think is not likely.
I agree, but in my experience tech companies are willing to stretch the definition of “acceptable” far beyond what’s responsible if it gives them a temporary boost on this quarter’s earnings report. Is it sustainable? No. But corp-think has never held sustainability as a virtue.
A cubicle farm in India doesn’t have the same incentives as a large tech company; they have to push out code quickly, as you said.
My boss recently specifically requested that I create a chart using AI. It took me approximately 10 times as long as it would have in Excel, in no small part because I couldn’t convince it that it hadn’t added the range values to the y-axis.
I feel your pain. I tried using it at work to make a dummy banner image as a placeholder. It would never give me the size I requested, ever. I tried different ways of saying the same thing, but the image size was always the same.
TBH boosting productivity was always a BS excuse. AI is meant to eventually replace human workers, but it’s also failing to do that.
It’s unclear from context who Forrester is or what they studied. But I’d be interested to learn whether this supports the idea that AI output replaces the 6%, or whether the economy will simply contract by 6%.
My cynical guess would be that the study is more based on current employment trends, rather than actual economic viability. Meaning the 6% will be the 2030 size of the bubble of tech bros still trying to find a product-market fit at the expense of VC money.
If anyone manages to figure out what that study is, I’d welcome a link or doi.
https://investor.forrester.com/news-releases/news-release-details/forrester-ai-led-job-disruption-will-escalate-while-fears-job
Thank you!
Having read it, there is no study. It’s a prediction with undisclosed methodology, and it’s unclear and obfuscated where their figure of 6.1% comes from.
I’d assume the whole report is an educated futurist guess, and I don’t know Forrester’s track record on those.
The full report is a $1500 product, so I’m afraid I won’t be reading it.
Honestly, I’ve been using AI for coding, like Copilot, and I’ll ask ChatGPT for things, etc. I always felt like I was getting a major productivity boost, and sometimes I do! But I swear lately the models have just been getting worse, producing incorrect results and being slow. Either I’m expecting more of them, or they really are getting worse.
Yeah, that’s the problem: it definitely feels like a productivity boost, but it turns into a 10% productivity decrease in the long term from debugging and fixing. I’m not going to look up the article, which was maybe about a year old, so you can take that with as much salt as you like.
That’s how it works. It has a fairly profound psychological effect on people where they can easily be convinced that it’s beneficial, when the actual reality is that we have absolutely no evidence of that being the case. On the contrary, we have a growing body of evidence that it has a great deal of negative effects, like decreasing productivity, cognitive decline, widespread social issues arising from their use, and more.
As to your point, they haven’t fundamentally changed at all since the original transformer paper was written in 2017. The only things that have changed over time are the number of parameters and the datasets (that is, vacuuming up and stealing all content on the internet). But it does the same thing it has always done, which is simply to generate the next token by taking the token with the greatest probability from a distribution conditioned on the tokens it has seen before. Note that this is simply a maximum, so even if all of the tokens in the distribution have a low probability, it will still take the max, resulting in hallucinations, fabrications, illogical conclusions, and so forth. That has not, and quite simply cannot, change. You would need a fundamentally different technology for that, and quite frankly, one that exists purely in the realm of science fiction.
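A toy sketch of the mechanism described above, with an invented vocabulary and made-up probabilities (real systems add sampling and temperature on top, but the point about taking a max even when nothing is likely survives):

```python
# Toy sketch of greedy next-token selection, as described above. The
# vocabulary and probabilities are invented for illustration only.
def next_token(distribution):
    # Take whichever token has the highest probability, no matter how
    # low that probability actually is.
    return max(distribution, key=distribution.get)

confident = {"Paris": 0.92, "Lyon": 0.05, "Berlin": 0.03}
uncertain = {"Paris": 0.26, "Lyon": 0.25, "Berlin": 0.25, "Atlantis": 0.24}

print(next_token(confident))   # "Paris", backed by 92% of the probability mass
print(next_token(uncertain))   # still "Paris", backed by only 26%
```

In the second case the “model” commits just as confidently to an answer it barely prefers over the alternatives, which is where the confidently wrong output comes from.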
I’m a pretty staunch skeptic about AI’s utility. I think the executive class bought into the hype and was seduced by the prospect of big waves of delicious redundancies with the attendant stock boosts, without ever actually bothering to find out if it works.
That said, the article refers to that MIT study, which is quite dated, and (like many) somewhat mischaracterizes its findings.
For anyone who’s tried to solve linguistic processing tasks with traditional methods (or even tried to write a text adventure!), it’s clear there’s huge potential in LLMs for /something/, but the idea that there’s a way to pay for the operating costs and the absurd levels of investment that have already happened is laughable.
Current generations of AI are amazing as long as the results don’t matter in any way.
For hobby stuff where a human can throw out the useless bits, AI can be great.
Of course, even then it can be a problem. There’s a hilarious meme going around about taking up “vibe electronics”. Electricity doesn’t care about intentions. Circuits often catch fire when done wrong. Vibing electronics would be hilariously expensive, because the costs of things catching fire can’t be quietly left for someone else to learn about later.
Many things that don’t seem to matter have proven to matter. One might think writing a movie script could be a great use of AI, but AI can only remix mediocre inputs, and mediocre movies lose money and get people un-invited from making movies.
Of course, one use case that billionaires seem very comfortable using AI for is to provide public services to non-billionaires…
Other than the CNC stuff, did anyone think AI would boost productivity? “Can it draw me a picture? Sure, better than a genius. Can it mix me a drink? Sure! But you have to buy a shitload of robotic hardware first. Don’t worry though, we won’t charge you that much for the privilege of using our version of the software.”