Scam Altman says it’ll take another year before ChatGPT can start a timer. An $852 billion company, ladies and gentlemen.
gizmodo.com/sam-altman-says-itll-take-another-y…
37 Comments
Comments from other communities
Hilariously the only use I have for Siri is setting timers.
“Humans have been keeping time as far back as 3500 B.C. ChatGPT is still figuring it out.
Last week, OpenAI CEO Sam Altman appeared on the show Mostly Human to talk about the future of AI, his company, and humanity as a whole. The interview was relatively standard fare—the host, Laurie Segall, got Altman on the record on the demise of Sora, OpenAI sliding in following the Pentagon’s spat with Anthropic, etc. But at one point, she asked Altman to react to a viral video posted by TikTok user @huskistaken in which he asks ChatGPT’s voice model to time him running a mile. The chatbot very obviously makes up a time instead of actually keeping track.”
It seems stupid to even try to get an LLM to do deterministic tasks. You’ll harm the product by trying to make it something that it isn’t.
It seems much more sensible to give it access to modules that actually perform the task, which I believe is how Google Assistant used to work.
I haven’t looked into it, but couldn’t someone just use an LLM for natural language processing and feed the result to a home assistant? Like, prompt it with “break up individual commands and pass them to the assistant,” so when I say “living room lights on and bedroom lights off,” the fucking thing does it instead of “huh? I’m a moron.”
They could, but they wouldn’t be able to trap that functionality behind a paywall, so they’re not interested.
Anyone with an open model could get this done in way less than a year using the approach you’ve described.
People run Whisper on Home Assistant, and intent-mapping packages already exist. They’ve been around for probably a decade. Pretty hit or miss… mostly because there isn’t much flexibility in the structure of the commands you can issue.
If someone wanted to use an online LLM to translate a complex Whisper transcription into something an existing intent mapper would handle well, that’s closer to a day’s worth of goofing around than a year. I actually refuse to believe it hasn’t already been done.
And if you’re using an online LLM to do that translation, I don’t see why that couldn’t be behind a paywall either.
Honestly for this task, I imagine offline models would be sufficient.
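The split-and-dispatch idea above can be sketched in a few lines: prompt the model to emit structured JSON, then route each piece to the assistant. A minimal sketch, assuming a hypothetical output schema loosely modeled on Home Assistant’s domain/service naming; the entity names and the `dispatch` helper are made up for illustration.

```python
import json

# Hypothetical system prompt you'd hand the LLM:
#   "Break the user's utterance into individual commands and return JSON like
#    [{"domain": "light", "service": "turn_on", "entity": "light.living_room"}]"

def dispatch(llm_json: str) -> list[str]:
    """Validate the LLM's structured output and turn each command
    into an assistant-style service call string."""
    calls = []
    for cmd in json.loads(llm_json):
        domain, service, entity = cmd["domain"], cmd["service"], cmd["entity"]
        # Sanity-check the model's output before acting on it.
        if not entity.startswith(domain + "."):
            raise ValueError(f"entity {entity!r} not in domain {domain!r}")
        calls.append(f"{domain}.{service} -> {entity}")
    return calls

# What the model might return for
# "living room lights on and bedroom lights off":
example = json.dumps([
    {"domain": "light", "service": "turn_on",  "entity": "light.living_room"},
    {"domain": "light", "service": "turn_off", "entity": "light.bedroom"},
])
```

The point of the validation step is that the LLM only produces text; the deterministic dispatcher is what actually flips the lights.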
MCP, skills, and whatever other garbage they came up with don’t include a timer? Or even just getting a timestamp? Lmao
It’s really funny to me that people are still listening to him and taking his statements at face value.
I dunno. The timelines seem reasonable.
AGI in 18 months. Start a timer in 12. I think a clock is roughly 66% of the complexity of AGI. Math checks out.
Can you ask it to write a story that takes about 10 minutes to read and then have it read it aloud? Like a timer with extra steps.
He’s too busy teaching it how to sexually assault his sister.
So, ChatGPT can’t match any function of a Casio wristwatch. I’m concerned that when it can, it will consume the power of microwaving a turkey just to tell a user what time it is.
A better comparison: if you had asked Siri to time you like this 10 years ago, it would have correctly started the timer app.
This guy looks like a psychopath in every photo.
He looks like a reindeer caught in the middle of the road.
It’s because he knows how screwed OpenAI actually is.
He acts like he’s surfing the wave, but he looks exactly as deep in the hole as he actually is.
ChatGPT is the next Theranos.
He hasn’t just scammed consumers. He’s scammed investors. And that’s the one crime that actually lands people like him in prison.
AGI just around the corner
C’mon bro, just one more trillion, that’s all. Then we’ll have a paradise with a new Epstein island and everything.
Okay, but just to be clear: the problem is not that it can’t do a timer. The problem is that it claims it can, and even produces a result that looks plausible. That means you cannot trust it to do anything you can’t easily verify. If they could fix that overconfidence in a year, it would be much better.
No one who is impressed by LLMs should ever be permitted to make decisions which affect anyone not similarly cognitively impaired.
On their own, as an advancement in computing, LLMs are impressive, but tech and finance bros have overinflated the perception of their performance well beyond what’s reasonable, in order to discipline workers for wanting more rights.
From a computing perspective, the saddest thing is that because of this bullshit, everyone will hate anything to do with AI in the future. With ownership by the people, more research into the field could actually liberate us from tedious labor.
I agree. The tech of an LLM is really cool and impressive. But what the tech and finance markets have made of it is just really fucking sad. I really hope the bubble fucking bursts.
There’s a guy on TikTok called Huskistaken (yes, I know) who repeatedly demonstrates just how useless ChatGPT is.
The first video of his I saw was him playing a clip of Altman stating that it doesn’t have a timer, and ChatGPT countering that it does.
He then gets it to start a timer to time how long it takes him to run a mile, and almost instantly tells it to stop. It tells him it was over 7 minutes!
FatherPi on YouTube highlights how bad all the LLMs are.
How many Rs in strawberry?
he’s linked in the article
The link I posted is him reacting to the sister fucker’s reaction.
ChatGPT and other LLMs need access to tools for things like this just like you and I do. If you ask me how many seconds have elapsed since I started typing this, I would give you a convincing estimate. I would need a Casio watch to give you an exact answer.
This is correct. LLMs are just the knowledge- and information-processing bit of our brain. To actually do things, we need access to things like our limbs, eyes, ears, a watch, a computer, …
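The tool-use pattern the two comments above describe can be sketched simply: the model never measures time itself; it names a tool, and the runtime runs real code. A minimal sketch with a hypothetical tool registry; in a real loop, the tool name and action would come from the model’s tool-call message rather than a direct Python call.

```python
import time

class Stopwatch:
    """The deterministic part: a real clock the model can delegate to."""

    def __init__(self):
        self._start = None

    def start(self) -> str:
        self._start = time.monotonic()
        return "timer started"

    def stop(self) -> str:
        if self._start is None:
            raise RuntimeError("timer never started")
        elapsed = time.monotonic() - self._start
        self._start = None
        return f"{elapsed:.1f} seconds"

# Hypothetical registry: the model names a tool and an action,
# the runtime executes it and feeds the result back into the chat.
TOOLS = {"stopwatch": Stopwatch()}

def handle_tool_call(name: str, action: str) -> str:
    return getattr(TOOLS[name], action)()
```

The LLM’s job reduces to deciding *when* to call `start` and `stop`; the Casio-watch part is ordinary code.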
Which is why my comment in this thread spoke of an MCP tool and a webhook, which is all that’s needed. So a year for that? Fuck off, that’s absurdly long for two things that already exist and just need to be plugged into the source.
A year? To make an MCP tool that starts a timer and a webhook that listens for it?
Alright, that’s kinda fucked lol
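The two pieces those comments describe really are small. A minimal sketch of the idea: a “start_timer” tool backed by a real scheduler, firing a callback when it elapses. The tool name and payload shape are made up for illustration; a real setup would register the tool through an MCP SDK and the callback would be an HTTP POST to the webhook.

```python
import threading
import time

def start_timer(seconds: float, on_done) -> threading.Timer:
    """Start a real timer; call on_done(payload) when it fires.
    Stands in for the webhook POST a real integration would make."""
    t = threading.Timer(
        seconds, on_done, args=({"event": "timer_done", "seconds": seconds},)
    )
    t.daemon = True  # don't keep the process alive for a stray timer
    t.start()
    return t

# Demo: fire a 50 ms timer and collect the "webhook" payload.
fired = []
start_timer(0.05, fired.append)
time.sleep(0.3)  # give the timer time to elapse
```

Unlike the chatbot in the video, `threading.Timer` cannot hallucinate an elapsed time; the payload only arrives when the clock actually runs out.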