By Sue Leonard

Artificial Intelligence (AI): Chatbots and Deepfakes

We hear about Artificial Intelligence (AI) almost every day. We might fear it, get excited about it, or not care about it at all.


[Image: drawing of a brain with gears]

My friends list their top concerns:

  • Grandkids using AI to write their school papers for them

  • Fear of fraud from schemes involving cloned voices

  • The spread of misinformation: You can’t tell what’s real anymore


AI consists of computer systems that do tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

What may surprise you is that you’ve likely been using AI for years, if not decades.


Whenever you search Google, shop on Amazon, or use the app from your favorite store, AI algorithms are trying to figure out your intentions and present you with relevant information. Your car’s Global Positioning System (GPS) uses AI. Virtual assistants like Alexa and Siri use it to converse with you.


AI systems become more sophisticated every day. Navigation systems, which started by telling us how to get to our destination, can now drive vehicles. Search apps, which started by presenting a list of results, have grown into apps you can have conversations with.


And as you may know, AI can make mistakes. GPS has guided me to the wrong destination, mostly due to inaccurate map information. Search engines have listed products that have no relation to what I was looking for. And don’t ask hubby about Siri – he thinks she’s worthless.


A new kind of AI application, the chatbot, was released to the public this spring. Chatbots are computer programs that use natural language processing to engage in human-like conversations. The best known are OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing Chat. These language models can answer questions and compose written content of all kinds, including articles, social media posts, essays, code, and emails.
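For readers who are curious about what sits behind those chat windows: developers reach the same models through a programming interface. Below is a minimal sketch, in Python, of asking a chatbot a single question through OpenAI’s API. The placeholder key, model name, and question are my own illustrative assumptions, not details from this post.

```python
# Minimal sketch: asking a chatbot one question through OpenAI's API
# (uses the 2023-era openai Python library interface).
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumption: a commonly available chat model
    messages=[
        {"role": "user", "content": "Explain what a deepfake is in one sentence."}
    ],
)

# The reply comes back as structured data; the text of the answer lives inside it.
print(response.choices[0].message["content"])
```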


ChatGPT

Chatbots have received a lot of press lately. They are the apps your grandkids might use to write papers. Can teachers distinguish papers written by their students from those written by a chatbot? Surprisingly, a study showed that high school teachers could identify AI-written papers only about 70% of the time. (1) Experts are developing apps that try to detect the use of a chatbot, but the technology is still not perfect. (2) Some teachers actually embrace AI and guide their students through using it, including how to get the best results and how to check for errors.


Chatbots are good for smaller projects such as getting ideas for writing or writing a greeting card. I recently asked ChatGPT to write a birthday greeting for a cousin whom I’ve never met and who is an impressionist artist. I loved ChatGPT’s response, “Happy Birthday, talented cousin. May your special day be as colorful and vibrant as your artwork.”


So chatbots can give us ideas for writing. But I don’t think they’re going to eliminate writers anytime soon. Neither does a marketing consultant who hires writers. He says, "It won't replace the good ones, but will probably replace some of them."


For example, I asked ChatGPT to rewrite the following to make it more compelling for a senior audience: “Fall is the season when the Foundation raises funds for competitive scholarships. The Foundation awards these scholarships each year to employees or their dependents.” ChatGPT’s response:

“As we embrace the vibrant colors of fall, we invite you to join us in a noble endeavor that nurtures the aspirations of our community's bright minds. The season marks the time when the Foundation comes together to champion competitive scholarships. These scholarships, a beacon of opportunity, …”


Maybe ChatGPT thought phrases like ‘noble endeavor that nurtures the aspirations’ and ‘beacon of opportunity’ were more compelling. I thought it sounded pompous and bloated. Even Dickens would groan at those adjectives.


I asked ChatGPT to rewrite the text again at the 8th-grade level and with as few adjectives as possible. The rewrite was much better, but I still used it only to tweak my own writing. The lesson here is that you have to be very specific when asking chatbots to generate articles, and you may have to try several times before you get something usable. Even then, I only use ChatGPT to get ideas, and sometimes what it writes becomes a springboard for my own creativity. It won’t be writing my blogs anytime soon.
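If you’d like to see what that extra specificity looks like in practice, here is a small sketch based on my scholarship example above. The exact prompt wording is just an illustration of the idea, not a recipe.

```python
# A sketch of how much difference specificity makes in a prompt.
# The wording below is illustrative, based on the scholarship example above.

original = (
    "Fall is the season when the Foundation raises funds for competitive "
    "scholarships. The Foundation awards these scholarships each year to "
    "employees or their dependents."
)

# A vague request: the chatbot has to guess what "compelling" means.
vague_prompt = "Rewrite this to be more compelling for a senior audience:\n" + original

# A specific request: reading level, audience, and style are all spelled out.
specific_prompt = (
    "Rewrite the following at an 8th-grade reading level, for a senior audience, "
    "using as few adjectives as possible:\n" + original
)

print(specific_prompt)
```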


There was an outcry from experts when chatbots were released to the public earlier this year. In March, more than 1,000 industry leaders and researchers called for a six-month moratorium on chatbot development, citing profound risks to society. Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, including Eric Horvitz, chief scientific officer at Microsoft, released a letter warning about the risks of AI.


Here are a few reasons why people are worried.

  • Misinformation. The internet is rife with misinformation, and some tech companies aren’t doing enough to identify it. Elon Musk removed blue check mark verification from Twitter, which led scammers to create fake company accounts. (4) Within days, a fake Eli Lilly account claimed to be giving away free insulin, and one delivery company found 17 fake accounts using its name. See the fact-checking sites in the Epilogue for help identifying misinformation.

  • Unreliable Data Sources. While chatbots have collected data from millions of scholarly sources, they also use data gathered from social media sites such as Twitter and Facebook, much of which is false, dangerous, or misleading. Chatbots can correct themselves with user feedback, but most users don’t research or report errors.

  • AI Hallucinations. AI can create deceptively convincing but entirely fabricated responses. For example, when I asked ChatGPT to give me references for additives in food, it listed five seemingly scientific articles. When I checked them, none of them actually existed.

  • Deepfakes. Deepfakes are synthetic media created with AI that can mimic individuals in videos, images, and audio. While they can be entertaining, they also pose a significant risk because of their potential to deceive. Scammers use AI to clone the voices of loved ones and demand money under the guise of an emergency. Scams of all kinds cost Americans a record $8.8 billion in 2022. (5) To protect yourself, never send money to callers without verifying their claims. Contact someone you trust to confirm your loved one’s safety. And remember: government agencies do not call to threaten you into making payments.

Responsible creators of deepfakes mark them as fake, but not all do so.


Although deepfakes can be amusing, they are alarming because of how accurately they can imitate people, especially those with great influence or power. One example is the Morgan Freeman video “This is not Morgan Freeman.” Watch it. It’s interesting and thought-provoking. (Notice the DEEP NEP identification logo at the bottom right of the video, which marks it as a deepfake.) See the references for links to other mind-blowing deepfake videos. (6, 7)


[YouTube video: “This is Not Morgan Freeman”]


In conclusion, while AI and chatbots have made significant strides, they also come with risks, particularly in the spread of misinformation and deceptive content. As with many products, use with caution. Use the fact checkers below when you see something that seems controversial. Check with your loved ones if you receive a threatening call asking for money.


But also realize these technologies have great potential and can be beneficial now if used properly.


Epilogue

Some people worry about chatbots collecting information about them. An AI expert says, “Chatbots don’t collect information about the users but they do store your questions and chats. I don’t know if it is tagged with my info like IP address, and other info Google has about me.” So as with all use of the internet, don’t put anything out there you don’t want the world to know.


Fact-Checking: One way to tell whether something is real is to use fact-checking sites such as Snopes (for urban legends), PolitiFact, FactCheck.org, the Fact Checker (Washington Post), and the International Fact-Checking Network.


For more information:

  1. Stephen Wolfram, What Is ChatGPT Doing and Why Does It Work?, stephenwolfram.com, February 13, 2023

  2. Timothy B. Lee and Sean Trott, A jargon-free explanation of how AI large language models work, arstechnica.com, July 21, 2023

References

  1. Tal Waltzer, Testing the Ability of Teachers and Students to Differentiate between Essays Generated by ChatGPT and High School Students, Hindawi.com, 2023

  2. Janet W. Lee, A new tool helps teachers detect if AI wrote an assignment, NPR.com, January 15, 2023

  3. Cade Metz, Godfather of AI Leaves Google and Warns of Danger Ahead, New York Times, May 2, 2023

  4. Shirin Ghaffary, Elon’s blue check disaster is getting worse, vox.com, April 25, 2023

  5. Christina Ianzito, Americans Lost Record-Breaking $8.8 Billion to Scams in 2022, AARP.com, February 28, 2023

  6. Tim Marcin, Thirteen Deepfake Videos That Will Mess With Your Mind, Mashable.com, August 2, 2020

  7. Joseph Foley, 21 of the Best Deepfake Videos, Creative Bloq, September 2023
