
How much of what we read is AI-generated: A Quick Experiment

  • Feb 19
  • 2 min read

So I read Matt Schumer's Letter to Loved Ones, which argues that AI is a bigger thing than we realize, in the same way that COVID was a big thing before we realized it. We actually just had this conversation in our science department. I remember vividly telling people that there was no way we would be canceling our son Jett's birthday party, but before we knew it, we were isolating.

Schumer's main idea is to learn to use AI now, while there's still time, and not the free version: the paid-tier version. I'm tempted to pay for Claude Opus 4.6, but a little tinge inside me wonders whether Matt Schumer is a reliable source. He's openly, heavily invested in AI and AI infrastructure. Is this an incredibly strong sales pitch from someone with more than a quarter million followers on X? How do I know he didn't just ask his AI agent to write the most convincing argument possible to get more people to pay for top-tier AI?

So I thought I would run his post through ChatGPT Zero. Here are the results: 73% Human, 24% AI

If the model he's using is as great as he describes, what are the odds he used it to write this 4,500 word piece?


Uh oh. What's this? I just looked back and noticed these two things while looking for the word count:

I scanned it a second time and:


74% human to 34% human seems like quite a jump. I can almost imagine the prompt: "Write a convincing article about AI that will make a majority of my followers want to pay for top-tier AI services." I guess it was the number of characters analyzed that made it change its estimate? The angle of the writing hits so many pressure points:

1. It's written to his (nondescript) family members. (Not the readers of Fortune, who republished the article.)

2. If you don't act now, you'll be left behind.

3. You won't understand until you start paying for the paid tier.

Am I being too cynical?


Well, let's just get a little perspective by running my last blog post (where I changed the font of everything that was generated by AI) through ChatGPT Zero:



(I ran mine a second time and it said it was 69% human. Also, this is not the top-tier paid model.)
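Those run-to-run swings (74% to 34% on one piece, 73% to 69% on another) suggest a single detector score is noisy. A minimal sketch of one way to handle that: run the detector several times and look at the mean and spread before trusting any number. The `score_fn` here is a hypothetical stand-in for whatever detector you use, not a real API call.

```python
from statistics import mean, pstdev

def stable_score(score_fn, text, runs=5):
    """Run a detector several times; report the mean score and its spread.

    score_fn: any callable that takes text and returns a 0-100
    "percent human" score (a hypothetical stand-in for a real
    detector such as GPTZero).
    """
    scores = [score_fn(text) for _ in range(runs)]
    return {
        "mean": mean(scores),     # average percent-human score
        "spread": pstdev(scores), # population std. dev. across runs
        "runs": scores,           # the individual scores, for inspection
    }

# Demo with a fake detector that replays the kinds of scores
# described in this post, to show how much a single run can mislead.
import itertools
noisy = itertools.cycle([74, 34, 69, 73, 60])
result = stable_score(lambda t: next(noisy), "some article text")
# result["mean"] is 62, but individual runs range from 34 to 74.
```

If the spread is anywhere near as large as the jumps above, the honest answer is that the detector can't tell you much about one article.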

Hmm. I'm not sure if this says more about Matt Schumer's writing or the reliability of ChatGPT Zero, but that's not the point. How much of what we read is being generated by AI already? Does the AI writing dilute the message or make it less reliable? For me it does a little bit. I like to think I know when I'm being manipulated and that the manipulator is another human. If a human is so good at manipulation that I don't notice, that's on me. If an AI is so good at manipulation that I don't notice, the illusion of my own autonomy is undermined.


Is it too much to ask for people to clarify when and where they're using AI? Here's an idea: What if we boycott media that doesn't tell us what parts of it are AI generated? Would we ever believe them?
