I think the title should be “What if LLMs don’t get much better than this?” because that’s effectively what the article is talking about. I see no reason to expect that our AI systems wouldn’t keep improving even if LLMs don’t.
Neural networks becoming practical is world-changing. This lets us do crazy shit we have no idea how to program sensibly. Dead-reckoning with an accelerometer could be accurate to the inch. Chroma-key should rival professional rotoscoping. Any question with a bunch of data and a simple answer can be trained at some expense and then run on an absolute potato.
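To see why dead-reckoning is the kind of problem classical code struggles with, here's a minimal sketch (my own illustration, not from the comment) of naive double integration of accelerometer samples. A tiny, hypothetical sensor bias drifts quadratically in position, which is exactly the failure mode a trained inertial-odometry network can learn to correct:

```python
import numpy as np

# Naive dead-reckoning: double-integrate accelerometer samples.
# Assume a hypothetical constant sensor bias of 0.01 m/s^2 and no
# real motion; the position estimate still drifts quadratically.
dt = 0.01                          # 100 Hz sample rate
t = np.arange(0, 60, dt)           # one minute of "stationary" data
accel = np.full_like(t, 0.01)      # pure bias, device never moves

velocity = np.cumsum(accel) * dt   # first integration
position = np.cumsum(velocity) * dt  # second integration

# After just 60 s the device "thinks" it moved roughly 18 m.
print(f"position error after 60 s: {position[-1]:.1f} m")
```

The drift grows as ½·bias·t², so even a well-calibrated sensor is off by meters within a minute of pure integration, which is why inch-accurate dead-reckoning needs something smarter than the obvious code.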
So it’s downright bizarre that every single company is fixated on guessing the next word with transformers. Alternatives like text diffusion and Mamba pop up and then disappear, without so much as a ‘so that didn’t work’ blog post.
Yeah, I think LLMs are close to their peak. Any new revolutionary developments in LLMs will probably be in efficiency rather than capability. Something that can actually think in a real sense will probably happen eventually, though, and unless it’s even more absurdly resource-intensive it’ll probably replace LLMs in everything but autocomplete (since they’re legitimately good at that).
I think that’s true, but also missing the point… we’ve hit the peak of AI until the next transformative breakthrough.
They’re still fucking magic. They’re really cool and useful, when you use them correctly.
But GPT-5 isn’t much better than GPT-3.5. It’s a bit better: it requires less prompt engineering to get good results, and it gives more consistent results. But it’s still unreliable. And it weirdly likes to talk down to you now, as if it knows more than I do. I am still the expert here; it’s a light-speed intern, it doesn’t know what’s going on.