

Yeah. I had ChatGPT (more than once) take the code I gave it, cut it in half, scramble it, and then claim “see? I did it! Code works now”.
When you point out what it did, by pasting its own code back in, it will say “oh, why did you do that? There’s a mistake in your code at XYZ”. No…there’s a mistake in your code, buddy.
When you paste in what you want it to add, it “fixes” XYZ … and … surprise surprise … it’s either your OG code again or something else breaks.
The only one I’ve seen that doesn’t do this (or does it a lot less) is Claude.
I think Lumo for the most part is really just Mistral, Nemotron and Openhands in a trench coat. ICBW.
I think Lumo’s value proposition is around data retention and privacy, not SOTA LLM tech.

Cheers for that!