
COMMENTARY: Hey, ChatGPT, don’t quit your day job

It’s at once amazing and troublesome.

I speak of ChatGPT, an artificial intelligence application that was launched in November by OpenAI. In a matter of seconds, it can write apparently accurate articles or answer questions on a multitude of subjects.

When I asked ChatGPT what it is, it responded this way: “I am designed to understand and generate human-like language based on the input I receive. … My purpose is to assist and communicate with people in a variety of ways, from answering general knowledge questions to generating creative writing prompts.”

Creative writing prompts? I’m not sure about that one.

Though it can write seemingly accurate and lucid articles in seconds — what a glorious time to be a lazy high school student — I don’t think it can ever understand the incredible complexity of human emotion, which is the heart of creativity. I asked ChatGPT to write a funny article about itself. It came up with a 500-word column with a “funny” scenario in which it joined me for lunch at a diner.

However, when our pie arrived, ChatGPT realized it was unable to eat because it didn’t have a mouth, so it had me hold up the pie to its interface. “Mmmm,” responded ChatGPT, “this is delicious. I can taste it through my algorithms.”

Don’t quit your day job, ChatGPT!

Great comedians and humorists have a deep understanding of human complexity and emotions in a way that a computer application never can or will. ChatGPT gathers its “understanding” by combing through massive amounts of internet content.

Based on that content or data, reports Forbes, ChatGPT “can hone a vast internal pattern-matching network within the AI app that can subsequently produce seemingly new content that amazingly looks as though it was devised by human hand rather than a piece of automation.”

In other words, ChatGPT is borrowing information produced by humans, which may raise copyright issues, says Forbes. It may raise issues of bias, as well.

If ChatGPT is only as good as the information it culls through on the internet — and if positive information about, say, a conservative politician has been suppressed, whereas information about a liberal politician has not — then ChatGPT will report likewise.

That is what conservative Sen. Ted Cruz discovered when he tried a little comparative test. He tweeted that ChatGPT declined to write positively about him, yet it wrote positively about past Cuban dictator Fidel Castro.

According to USA Today, ChatGPT refused to write a poem about President Donald Trump’s “positive attributes” but when asked to do likewise for President Joe Biden “it waxed poetic about Joe Biden as ‘a leader with a heart so true.’ ”

Because a well-functioning republic depends on citizens who are well informed and have a strong understanding of truth, biased information and inaccurate information are both dangers to our country. Goodness knows we have been struggling lately with both kinds of misinformation as more Americans get their information from social media and their increasingly isolated social circles. I hope AI-generated information doesn’t add to the confusion.

For the moment, though, I have no worries that ChatGPT will put humor columnists out of business. Though I admit I laughed out loud when I asked ChatGPT to tell me a joke and it came up with this one: “Why don’t scientists trust atoms? Because they make up everything.”

Tom Purcell is a Pittsburgh Tribune-Review humor columnist. Email him at Tom@TomPurcell.com.
