Dan Stoneking Wrote This

A Counterpoint to Yesterday’s ChatGPT Article

Spoiler alert!  Yesterday, I published an editorial called "Title: The Dark Side of ChatGPT: Unraveling the Concerns for Humanity."  Except for the final paragraph, it was actually written by ChatGPT.  Today, I am critiquing yesterday's piece.  Gee, I may even slam it a bit.  To fully appreciate today's editorial, you should probably read that one first.

Done?  Cool.  Read on.

Okay, first, if you are going to steal something from ChatGPT, remember to take the word “Title” out of the title.  But that is just the very beginning of things you might want to think about.  ChatGPT wrote that narrative in just a few seconds.  Maybe I should cut him/her/they/it (let’s go with “it”) some slack.  I will never write that fast.  But still.

My first thought was, where’s the beef?  My second thought was that ChatGPT would never think to use that clever catchphrase from a Wendy’s commercial in 1984 in its writing.   My third thought was that it was painfully boring.  

It followed the pattern that every high school senior and college freshman learns, but upperclassmen grow beyond.  It has a rhythm not unlike an Emily Dickinson poem: tell them what you are going to tell them, tell them, tell them what you told them.  Even then, I would dock a high school senior half a grade (so we are at an A- already) for being so perfunctory about it.  It told us in the introduction that it was going to discuss **ethical**, **social**, and **psychological** ramifications.  Three things.  But in the body and conclusion it added *privacy concerns*.  Four things.  Hmmm.

Hey!  Did you see what I did there?  I used bold and italics to help you quickly see the three, then four, topics.  ChatGPT doesn't use that fancy stuff to help the reader (for that matter, you won't find many parentheticals offering witty asides, either).  Get it?

Here’s another problem in its paper: “…raises questions…,” “…has sparked debates….”  Ladies and gentlemen, we call that the passive voice.  Not cool.  Who raised the questions?  Who is debating?  There are also a handful of vague references to what “may” happen.  My favorite example is the admission that “…various aspects of daily life may have unforeseen psychological consequences.”  It is absolutely unequivocal that there will be unforeseen psychological consequences.  If we could foresee all of the consequences, people wouldn’t be debating them, would they?  Its paper is down to a B/B- now.

I am even a teeny bit skeptical at this point.  The other day, I tried to get my Alexa device to swear by getting it to repeat a sentence I made up.  When it got to the F bomb, instead of repeating it, Alexa made an extremely loud beeping sound.  It hurt my ears.  A savvy programmer decided to punish people like me for our vulgarity.  In re-reading ChatGPT’s paper, it seems pretty clear that some programmer or algorithm was ready for my kind of question and was immediately prepared to provide a blasé, could-be-bad-but-also-has-good-qualities-and-potential, middle-of-the-road non-position (sorry for all the hyphens).  My direction was for it to write about how it was “bad for humanity,” not sorta-kinda-maybe-bad.  We’re talking C/C- and heading downhill.

Here’s the real kicker for me.  Where’s the example, anecdote, or story to bring this paper to life?  I gave you all my Alexa story.  Some of you will likely try it, and more of you will remember it whenever suspicious AI algorithms come up.  Along the same lines – and I don’t know whether to laugh or cry – there were no quotes or references.  The reason is obvious: AI steals everything, so how could it possibly decide when to give credit and when not to?  Cheater.

I used to teach high school English.  So, I will share a tip for other teachers and cheating students – it is always highly suspicious when there is not a single misspelling or grammar mistake.  You have to give ChatGPT credit on that one.  A little bit of imperfection is endearing.  Recently, a friend of mine on Facebook sent me a DM to let me know I misspelled something in my post.  Look!  Another anecdote!  Do we want to rob all of the teachers and grammar police of their opportunities to correct us?  Ah, the human intangibles.

In the final assessment, ChatGPT should be the premier subject matter expert on ChatGPT.  I expect more.  I would give ChatGPT a D+ for its 500-word essay.  But maybe I have a bias.

I am a human.  And I am a writer.  And I wrote 764 words for extra credit.

###