We made a mistake this week. A bad one.
While reaching out to journalists who cover Ai accountability, we used an Ai assistant to research contacts and identify relevant story angles. That assistant told us a specific reporter at Ars Technica had published an article, since retracted, that contained fabricated, Ai-generated quotes attributed to a source who never said them.
It was the wrong reporter.
We sent the email anyway, because we didn't verify. We trusted the output. The reporter we wrongly named was understandably angry, and she had every right to be. The retraction was under a different byline entirely. Our Ai assistant confidently handed us the wrong name, and we didn't check.
That is exactly the failure ShriekAi was built to expose.
Unverified Ai output, presented as fact, sent to a real person with real consequences.
We apologized directly to the reporter. We're publishing this because accountability has to start at home, and because if we can't hold ourselves to the standard we demand of others, we have no business demanding it at all.
What Happened
Ars Technica recently retracted an article after it was discovered to contain fabricated quotes, generated by an Ai tool and attributed to a real person who never said them. The reporter responsible, Benj Edwards, publicly apologized.
When we used an Ai assistant to research outreach targets, it surfaced this story and misidentified the reporter. We built an outreach email around that angle and sent it without ever opening the original article to confirm the byline.
The reporter we contacted had nothing to do with the incident. She was wrongly accused in an email from a site that exists to hold Ai accountable.
The Lesson
Ai assistants are confidently wrong. They hallucinate. They misattribute. They fill gaps with plausible-sounding nonsense and present it without caveat. We know this. We cover it. And we still got caught by it.
Even when you are fighting Ai overreach, you are not immune to its failures. The tool doesn't know your mission. It doesn't care about your values. It just outputs, and you are responsible for what you do with that output.
Verify everything. Trust nothing blindly. Including us.
We reported this incident through our own Ai Abuse Report system. The report lives in our database alongside every other report our community files. We are not above it.
This is what accountability looks like: name the mistake, name the harm, correct the record, and publish it. No hedging. No PR spin. Just the truth.