Stack Exchange Strike - Now AI is bad? Does Stack Exchange know what it is doing?

Posted on Tue 04 July 2023 in Stack Exchange Strike

Introduction

My previous posts about the ongoing moderator and curator strike on the Stack Exchange network are linked at the bottom of this post, or you can visit the Stack Exchange Strike category on this site. I'd post a summary of what's happened in the last ten days, but there is nothing to report. There are discussions, but no agreements. The Stack Exchange employee appointed to negotiate with moderators has stepped back and is no longer participating.

Tomorrow marks the one-month point. We are hours away from 10,000 pending moderator flags on Stack Overflow, up from 78 (yes, two digits) in mid-May. The way this has gone down, the lack of progress, and the continued mischaracterization of moderators to the press haven't left me motivated to spend my free time volunteering. I still have this feeling that Stack Exchange is looking at the recent Reddit protests, where the company demanded that moderators return their communities to normal operation, and wondering if it can replicate that here.

New confusion

On July 3, 2023 Stack Overflow published a blog post entitled: "Do large language models know what they are talking about?". Spoiler: the conclusion of the article is "Nope."

But that's not the interesting thing. The interesting thing is how this answer is presented. The very last paragraph of the post cuts to the heart of the concerns that moderators on Stack Overflow raised in December when we banned ChatGPT.

Treating AI-generated information as purely actionable might be the biggest danger of LLMs, especially as more and more web content gets generated by GPT and others: we’ll be awash in information that no one understands. The original knowledge will have been vacuumed up by deep learning models, processed into vectors, and spat out as statistically accurate answers. We’re already in a golden age of misinformation as anyone can use their sites to publish anything that they please, true or otherwise, and none of it gets vetted. Imagine when the material doesn’t even have to pass through a human editor.

We saw this in action with ChatGPT, and we still do; it's a problem users are becoming more aware of as the strike continues. We saw it when Stack Exchange tried its formatting assistant on Stack Overflow. What I see here is Stack Overflow publicly admitting that the moderators are correct.

The other interesting thing about that paragraph is that it links to an article from The Verge that quotes Stack Overflow moderators on the decision to ban AI-generated content. That article also takes this dig at Stack Exchange executives:

The mods say AI output can’t be trusted, but execs say it’s worth the risk.

Their own blog post explains why it's not worth the risk.

What's this mean?

I see this as another communication failure on Stack Exchange's part. In an update I posted weeks ago, I linked to internal emails that were leaked.

How are we messaging this? Who is allowed to post and respond to questions and comments on Meta, chat, social media, etc?

The Community Leadership Team ([redacted]) are working together in close coordination with Marketing ([redacted]) on comms. They will post and respond to questions on-site. Unless you are specifically tapped to respond to something please do not engage. It is best to avoid commenting on anything related to this action on site, even if you think you have something helpful to add. Please get review and approval from Philippe prior to posting on site, or from [redacted] if you are approached off-site.

Someone, somewhere, didn't realize what this blog post was about or what it linked to.

But nothing changes with this. The company has dug in hard on forcing GenAI onto the sites and is marching toward an announcement of some kind about AI in late July 2023. In the meantime, I can only see blog posts like this one as an indication that Stack Exchange doesn't know what it is attempting to build toward and, at the same time, has concluded (or at least a team within Stack Exchange has) that GenAI isn't to be trusted.

Just like the community said back in December and continues to say now.

