Artificial Intelligence (AI) Literacy

Find resources on Artificial Intelligence (AI) Literacy for students

AI Literacy for Students

What is Artificial Intelligence? 

Whether you know it or not, artificial intelligence (AI) is already a large part of your daily life. Much of it is invisible to you, but it's working behind the scenes personalizing your video and music recommendations, customizing your social media feeds, analyzing your spending habits to detect fraud, and touching up your posts with photo filters. 

So, what is artificial intelligence? 

Broadly speaking, artificial intelligence (AI) refers to the development of computer systems that can perform tasks that would typically require human intelligence. These systems are designed to learn, reason, and make decisions based on large amounts of data. 

Here is a good starting point to understand AI: 

Artificial intelligence is the design, implementation, and use of programs, machines, and systems that exhibit human intelligence, with its most important activities being knowledge representation, reasoning, and learning. Artificial intelligence encompasses a number of important subareas, including voice recognition, image identification, natural language processing, expert systems, neural networks, planning, robotics, and intelligent agents.

Source: Whitson, George M. “Artificial Intelligence.” In Salem Press Encyclopedia of Science, 2023. https://discovery.ebsco.com/c/wnnu3f/viewer/html/vojdkpt4vj

 

What is AI Literacy? 

AI literacy is a set of fundamental abilities that are useful to all students in today's information environment:

    1. Understand the basics of how AI works
    2. Use AI effectively and ethically
    3. Make informed decisions about using AI technologies

Source: Hennig, Nicole. “AI Literacy: May 17 Webinar.” Nicole Hennig, March 16, 2023. https://nicolehennig.com/ai-literacy-may-17-webinar/.

 

One of the most important things to understand about AI is that the decisions it makes are based on probability and statistics. No matter how advanced the system is or how much data was used to train a particular AI program, AI cannot attain the same kind of nuanced and creative reasoning that a human can. Probability-based decisions are often correct, but not always--so there will be times when AI tools make errors or even spread misinformation. 

Source: Education Week. “AI Literacy, Explained.” May 10, 2023, sec. Technology, Classroom Technology. https://www.edweek.org/technology/ai-literacy-explained/2023/05.
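
To make that idea concrete, here is a minimal, purely illustrative Python sketch (the answer options and probabilities are invented for this example, not drawn from any real AI system). The "decision" is nothing more than picking whichever option has the highest probability; if the underlying statistics were skewed, the same code would confidently return a wrong answer.

    # Purely illustrative: a toy "AI" that decides by picking the most probable
    # option from made-up statistics, not by reasoning about the question.
    answer_probabilities = {
        "Paris": 0.92,
        "Lyon": 0.05,
        "Marseille": 0.03,
    }

    def decide(probabilities):
        """Return whichever option has the highest probability."""
        return max(probabilities, key=probabilities.get)

    # Correct here, but only because the statistics happened to favor "Paris".
    print(decide(answer_probabilities))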

 

So, why are we talking about AI literacy?

In the past year, there have been significant advancements in AI that have affected the daily lives of students, in particular the release of tools that fall under the category of generative AI. Generative AI refers to the use of AI tools to create new content. Here is a good definition: 

Generative A.I.: Technology that creates content — including text, images, video and computer code — by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.

Source: Pasick, Adam. “Artificial Intelligence Glossary: Neural Networks and Other Terms Explained.” The New York Times, March 27, 2023, sec. Technology. https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html.

 

 

By now, you've probably heard of ChatGPT, and you've likely also been cautioned against using it for your coursework. But what is it, and how can you ethically use ChatGPT as a student? Read on. 

 

Note: This section discusses ChatGPT as an example of a large language model. It is not the only one that exists, but it is the most well-known at the moment. 

 

What is ChatGPT?

If you know something about ChatGPT, you probably know that it is a chatbot that generates text responses to the prompts you feed it. There are all kinds of applications for this technology, from creative endeavors to research assistance (you'll read more about the ethical use of ChatGPT on the following page of this module). 

I asked ChatGPT to describe itself in simple terms: 

ChatGPT is a computer program that can understand and respond to natural language. It was trained by being fed a huge amount of text from the internet, and it uses that information to answer questions, have conversations, and even generate creative writing. Think of it like a very smart and knowledgeable friend who you can ask anything you want and it will try its best to give you a helpful answer.

For example, if you ask ChatGPT "What is the capital of France?", it will respond "The capital of France is Paris." It can answer questions about many different topics, such as history, science, sports, and more. It can even help you write stories or jokes.

ChatGPT is not perfect, and sometimes its responses might not make sense. But overall, it's a pretty amazing tool that shows how far computer technology has come in understanding and using human language.

 

What is a Large Language Model? 

Large language models (LLMs) are AI tools trained on vast amounts of text--or language--data. The data is processed through a neural network, more specifically a "transformer" architecture (the "T" in GPT stands for "transformer"). Here's a good, simple explanation of how it works: 

Specifically, a transformer can read vast amounts of text, spot patterns in how words and phrases relate to each other, and then make predictions about what words should come next. You may have heard LLMs being compared to supercharged autocorrect engines, and that's actually not too far off the mark: ChatGPT and Bard don't really “know” anything, but they are very good at figuring out which word follows another, which starts to look like real thought and creativity when it gets to an advanced enough stage.

Source: Nield, David. “How ChatGPT and Other LLMs Work—and Where They Could Go Next.” Wired, April 30, 2023. https://www.wired.com/story/how-chatgpt-works-large-language-model/

ChatGPT is currently the most popular of the free, public LLMs. Other big names include Google Bard and Bing Chat, while others, like Aomni and Elicit, are focused more on research. 
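
To illustrate the "supercharged autocorrect" idea above, here is a minimal Python sketch of next-word prediction (the word probabilities are invented for this example, not taken from any real model). A real LLM learns its probabilities from enormous amounts of text and considers the entire prompt, but the basic loop--predict a likely next word, append it, repeat--is the same.

    import random

    # Invented next-word probabilities, keyed by the last few words of context.
    next_word_probabilities = {
        "the capital of": {"France": 0.6, "Texas": 0.3, "industry": 0.1},
        "capital of France": {"is": 0.9, "was": 0.1},
        "of France is": {"Paris": 0.85, "beautiful": 0.15},
    }

    def generate(prompt, steps=3):
        words = prompt.split()
        for _ in range(steps):
            context = " ".join(words[-3:])           # look only at recent words
            options = next_word_probabilities.get(context)
            if options is None:                      # nothing "learned" for this context
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    # Usually prints "the capital of France is Paris", but it can also wander
    # off (e.g., "the capital of Texas") purely by chance.
    print(generate("the capital of"))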

 

So, is ChatGPT a search engine? 

Importantly, ChatGPT is not a search engine. Whereas a search engine like Google seeks out existing text to answer your query, ChatGPT and other LLMs generate brand-new text, predicting the most plausible response based on the text that was fed into their training. 

Now that you have some foundation for understanding what ChatGPT (and other LLMs) are capable of, you're probably wondering how you can use them. Read on to the next page. 

 

What are some ethical ways a student can use ChatGPT (or other LLMs)? 

  • Generate ideas to get started with research or creative pursuits
  • Ask for keywords to simplify your search and get better results
  • Ask for ways to expand a topic into new research directions
  • Ask for suggestions to improve your writing or find weak spots in your research

All of these methods are useful for developing your research topic with ChatGPT or another LLM. Notice that these ethical uses occur throughout the research process. 

What are ways to use ChatGPT (or other LLMs) that should be avoided? 

  • Asking the LLM (like ChatGPT) to write your whole essay
    • Not only is this unethical, but the result probably won't be very good. The chatbot hasn't been in your class and doesn't know the context of what you're supposed to have learned throughout the semester.
  • Asking the LLM (like ChatGPT) to search for your sources
    • This isn't necessarily unethical, but it's a bad idea. ChatGPT will often make up sources that look real but are actually non-existent. Other LLMs like Bing Chat will point you to internet resources, but they will not be able to find scholarly articles or sources as good as what you could find through your own evaluation. Doing the research yourself will allow you to search creatively and will produce better results.

Gray areas in the ethical use of LLMs

  • Simplify the language of a text in order to understand it better
    • While this is a great use of the technology to aid in learning, there may be copyright concerns if the LLM incorporates copyrighted text into its knowledge base.