
April ChatGPT Roundup



We're back with another follow-up to our February webinar, "ChatGPT in Education."

In our previous post, we explained what ChatGPT is and shared some ways to use it in an educational setting. This time around, we're addressing a few questions that we weren't able to answer during the event.

Should we block access to ChatGPT?

During the webinar, we noted that it would be futile to block ChatGPT on your school networks because students would certainly find workarounds. We still hold that to be true, as a practical matter.

That said, it may not hold as a policy matter. If your school is required to block network access to ChatGPT to comply with local regulations, then you know what to do. Keep in mind that ChatGPT's terms of service require end users to be at least 13 years old with parental consent, or 18 without it. Those of you who manage IT for K-12 institutions will definitely want to restrict access if you haven't already.

Still, students who are determined to use ChatGPT will find a way; they just won't use your network to do it. That's why our webinar offered ideas on how to teach your students to use the technology responsibly.

How do we avoid ChatGPT pitfalls?

It helps to keep in mind that, deep down, ChatGPT is a machine learning/artificial intelligence (ML/AI) model. And while models may sometimes feel like magic (they can make predictions and generate images, after all), they're more like factory machines.

That leads us to our number one rule for AI safety: "Never let the machines run unattended."

Did you use ChatGPT to generate a few paragraphs? Great. Your next step is to review that text before clicking "publish." Treat it as an early draft, not a finished product. If you always keep a human in the loop, it's far less likely that ChatGPT will embarrass you.

Consider incidents at Vanderbilt University and the online publication CNET. Both used generative AI to create text. Both published the output with insufficient oversight. Both wound up with egg on their faces as a result. By comparison, the law firm Allen & Overy has implemented an internal chatbot for its attorneys and has confirmed that a human is always in the loop. No embarrassing incidents thus far.

Is ChatGPT available in every country?

Given all of the news and the screencaps people have posted online, it would be reasonable to assume that ChatGPT is just … everywhere. In reality, there are several countries where it is not available.

OpenAI, the company behind ChatGPT, restricts access from certain countries to comply with US export embargoes. Governments that enforce strict internet censorship, such as China, North Korea, and Iran, have also blocked their citizens' access to the service.

That doesn't mean people in those countries will simply do without. A number of Chinese tech companies, for example, are developing generative AI chatbots that focus on the local language and culture.

Can you tell us more about how ChatGPT works?

We most certainly can. We've gathered your tech-related questions and we're already working on the answers. Stay tuned!

What next?

What else would you like to know about ChatGPT in education? Please let us know in the comments, and we'll try to answer your question in a future blog post. 



Michael S. Manley currently serves as the Chief Technology Officer of ThinkCERCA. In his previous position, he was CTO of Public Good Software, which used machine learning technology to match online news content to relevant social good causes and campaigns. He has worked in software engineering for thirty-five years and is a graduate of Purdue University in both software engineering and English literature.

Q McCallum

Q McCallum is a consultant, writer, and researcher in the domain of machine learning and artificial intelligence (ML/AI). He's spent his career applying disruptive technology to business use cases. His published work includes Understanding Patterns of Disruption: Lessons Learned from the Cloud, Machine Learning, and More; Business Models for the Data Economy; Parallel R: Data Analysis in the Distributed World; and Bad Data Handbook: Mapping the World of Data Problems. His current research interests include the intersection of ML/AI and business models (data monetization, human/AI interaction, AI-based automation) and the application of financial concepts (such as risk, N-sided marketplaces, and asset bubbles) to other domains.