But how do they know it is AI-written?
https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup/Guide
I was about to link to that, and specifically the stuff that now seems to have been moved to Signs of AI writing.
I thought that was a very interesting read, because it’s so much better than the usual AI ragebait that led to people getting pilloried over the fact that they actually know how to use em dashes. You can’t detect LLM use just by the fact that someone uses em dashes. It’s a complicated stylistic issue that usually boils down to “well, you know what ChatGPT output looks like when you see it”.
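To make the em-dash point concrete, here is a deliberately naive "detector" of the kind that gets people pilloried. It is a toy sketch, not any real tool: the threshold and function name are made up for illustration. It flags any text with a modest em-dash frequency, which means ordinary human writers who simply like em dashes trip it immediately.

```python
def naive_em_dash_detector(text: str, threshold: float = 0.002) -> bool:
    """Toy heuristic: flag text as 'AI' if em dashes are frequent.

    Deliberately bad, to show why a single stylistic signal is
    unreliable. The threshold is an arbitrary assumption, not taken
    from any real detector.
    """
    if not text:
        return False
    # Ratio of em-dash characters (U+2014) to total characters.
    return text.count("\u2014") / len(text) >= threshold

# A human writer with a taste for em dashes gets flagged:
human_prose = ("Style is a choice\u2014some writers love em dashes\u2014"
               "and that proves nothing.")
print(naive_em_dash_detector(human_prose))  # True: a false positive
```

The false positive is the whole problem: the signal correlates loosely with ChatGPT's house style, but it is also just competent punctuation.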
Ok, but surely there must be an automated way. You can't throw manpower at this, because they will lose.
There are no reliable automated LLM output detectors. Anyone who says otherwise is either trying to sell you snake oil (or is unwittingly helping someone to sell snake oil to someone else, I guess).
So the question still stands: how do they detect AI use? I'm all for it, btw. It is absolutely necessary, but I'm afraid it is impossible to implement.
I think they just try their best