<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=1392659690788492&amp;ev=PageView&amp;noscript=1">
Skip to content
Watch a Demo
<span id=May ChatGPT Roundup: Getting A Little Technical" width="596">

May ChatGPT Roundup: Getting A Little Technical


Welcome to our latest follow-up to our webinar, ChatGPT in Education! This is our space to address questions that people shared after the event. We appreciate your interest and enthusiasm, so please keep those questions coming.

In our previous post, we promised to address the more technical questions that we've received. That's what we're doing today. But first, let's review some of the terminology:

A quick refresher

ChatGPT is a modern-day chatbot powered by AI. Specifically, it is built on a kind of generative machine learning (ML) model known as a large language model (LLM).

Most ML models output some kind of prediction or classification. A generative ML model, by contrast, creates content: ChatGPT creates text, Midjourney creates images, and so on. These are called "large" language models because they are trained on massive amounts of data, such as content found online.
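If you're curious what that distinction looks like in code, here's a minimal sketch using the Hugging Face transformers library. The specific models it downloads, and the prompts, are purely illustrative choices of ours:

```python
# A minimal sketch of the predictive-vs-generative distinction, using the
# Hugging Face transformers library. Model and prompt choices are illustrative.
from transformers import pipeline

# A predictive model outputs a classification...
classifier = pipeline("sentiment-analysis")
print(classifier("I loved this webinar!"))  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# ...while a generative model creates new content.
generator = pipeline("text-generation", model="gpt2")
print(generator("Machine learning is", max_new_tokens=15)[0]["generated_text"])
```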

ChatGPT is just one of several LLM-driven AI chatbots. It's also the most famous right now, in no small part due to the notoriety of its underlying ML models, GPT-3 and GPT-4.

How is new information given to ChatGPT?

Companies build – in technical parlance, "train" – an ML model by feeding training data to an algorithm. The algorithm looks for patterns in that data and then saves those patterns into a model.
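As a concrete (if tiny) example, here is what that train-then-save loop looks like in scikit-learn. The dataset and the algorithm are our own illustrative choices, not what any chatbot vendor actually uses:

```python
# A minimal sketch of "training" with scikit-learn. The algorithm
# (logistic regression) scans the training data for patterns and saves
# them as the fitted parameters of a model.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)   # the training dataset

model = LogisticRegression(max_iter=1000)
model.fit(X, y)                     # "training": find the patterns in the data

print(model.predict(X[:3]))         # the saved patterns now make predictions
```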

One key limitation of an ML model – any model, not just those used by ChatGPT – is that it only "knows" things based on what is in its training dataset. A generative model that is trained on data through the year 2022, for example, would describe 2023 events as being in the future. It would do this even if it were run in 2023 or 2024. 

In order to update a model, you must feed it new data. A common approach is to start the training process anew, so the algorithm rereads the entire (new) training dataset and looks for new patterns. Other models support what's known as "online" learning, which means that you can feed them a steady trickle of new data and they will experience incremental updates. A full retraining is like tearing down a house and rebuilding it from scratch. Online learning is more like tacking on an addition: you can still live in the house while you're adding to it.
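In code, the difference between the two approaches might look like the sketch below. It uses scikit-learn's SGDClassifier because that model happens to support incremental updates; the data is random and purely illustrative:

```python
# A minimal sketch of full retraining vs. online (incremental) learning,
# using scikit-learn's SGDClassifier. All data here is random and illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(100, 4)), rng.integers(0, 2, 100)
X_new, y_new = rng.normal(size=(10, 4)), rng.integers(0, 2, 10)

# Full retraining: rebuild the house from scratch using all the data.
rebuilt = SGDClassifier().fit(
    np.vstack([X_old, X_new]), np.concatenate([y_old, y_new])
)

# Online learning: keep the existing model and tack on the new data.
online = SGDClassifier()
online.partial_fit(X_old, y_old, classes=[0, 1])  # the original build
online.partial_fit(X_new, y_new)                  # the incremental update
```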

You mentioned that ML models are "biased." What did you mean by that?

Data scientists and ML researchers first choose a training dataset when they set out to build a new model. That leads us to two key issues:

  • The training data they choose will be limited to what they can think of. 
  • The training dataset will further be limited by what data they can access.

The net result is that, even when data scientists have the best of intentions and the most thoughtful process for choosing data, every training dataset they choose will exhibit some kind of bias. (And that doesn't even account for people who choose a biased dataset on purpose, perhaps to spread a particular political or ideological agenda.)

Since a model's entire worldview is based on its training data, every model will exhibit bias, as well.

The term "bias" understandably conjures thoughts of favoring one social view, country, or ethnic group over others. In the AI sense of the term, though, "bias" means that the data doesn't perfectly mirror the real world. A generative AI model that is only trained on news articles will not create content that sounds like casual, day-to-day speech. It is biased to sound like a newspaper.

How do we start piloting ChatGPT at our K-12 school outside the United States?

The person who submitted this question went on to ask: "Will we require a robust IT team to manage this pilot?"

This is a great question. The answer is, unfortunately … "It's complicated." You'll want to approach this in the order of planning, policy, and then technology.

Planning: How do you expect to use the chatbot? Would teachers generate content that they review and filter before passing on to students? Or would students interact with the chatbot directly?

Policy: What does your legal team say about the chatbot's terms of service (TOS)? Many TOS documents set a minimum age limit, which would impact your plans to have students use the chatbot. 

A TOS may also specify export restrictions, which limit the countries in which the chatbot can be used. That list may thin out even more where local governments have banned the service outright: Italy recently banned ChatGPT, and other EU countries may follow suit.

Technology: ChatGPT and other services offer an application programming interface (API), which enables software developers to bypass the freeform text box and build custom tools. We also expect third-party vendors to create turnkey tools for organizations without in-house technical expertise.
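To give a flavor of what that looks like, here is a minimal sketch using OpenAI's openai Python package (its v0.x-style interface); the model name, prompts, and placeholder key are illustrative, not a recommendation:

```python
# A minimal sketch of calling the ChatGPT API from code rather than the
# chat UI. Uses the openai package's v0.x-style interface; the prompts
# here are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder -- supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {"role": "user", "content": "Explain photosynthesis for 7th graders."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

A tool built this way can add its own guardrails (teacher review, logging, age gates) before anything reaches a student, which ties back to the planning and policy questions above.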

What else would you like to know?

We enjoy answering your questions about ChatGPT, LLMs, and AI in education. What would you like us to cover in future blog posts? Drop your question in the comments and we'll take it from there.


Michael S. Manley

Michael S. Manley currently serves as the Chief Technology Officer of ThinkCERCA. In his previous position, he was CTO of Public Good Software, which used machine learning technology to match online news content to relevant social good causes and campaigns. He has worked in software engineering for thirty-five years and is a graduate of Purdue University in both software engineering and English literature.

Q McCallum

Q McCallum is a consultant, writer, and researcher in the domain of machine learning and artificial intelligence (ML/AI). He's spent his career applying disruptive technology to business use cases. His published work includes Understanding Patterns of Disruption: Lessons Learned from the Cloud, Machine Learning, and More; Business Models for the Data Economy; Parallel R: Data Analysis in the Distributed World; and Bad Data Handbook: Mapping the World of Data Problems. His current research interests include the intersection of ML/AI and business models (data monetization, human/AI interaction, AI-based automation) and the application of financial concepts (such as risk, N-sided marketplaces, and asset bubbles) to other domains.