In recent years, advances in artificial intelligence and natural language processing have led to the development of ChatGPT, a chatbot built on the Generative Pre-trained Transformer (GPT) family of language models. This technology, developed by OpenAI (which Elon Musk helped found and fund in its early years), has been used for a variety of purposes, from creating automated customer service agents to generating personalized responses to text messages, and now it is making its way into education.
For students looking for a virtual tutor, ChatGPT may seem like a miracle. The tool is free and available 24/7. It can provide personalized instruction tailored to a student’s needs and interests, and within a conversation it can refer back to what the student has already asked. Complex answers can even be reworded as simplified summaries, and because ChatGPT is interactive, it can help improve students’ retention of information.
While ChatGPT has the potential to augment analog study habits and improve students’ learning, it also brings ethical considerations and potential risks. Besides its potential use as a tool for cheating, drawbacks include the watering down of children’s educational experiences, privacy concerns, and the accidental circulation of misinformation.
The most obvious concern educators and parents have about ChatGPT is that students will use the software to cheat on tests or assignments. This type of cheating could be especially effective for open-ended tests or assignments, such as essays or problem-solving questions. The way it works is shockingly simple: a student asks ChatGPT a specific question from the test, and it generates a response similar to what a human would write. The student can set the desired length of the response, choose a specific language model, and adjust how often certain words are allowed to repeat.
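For readers curious what those knobs look like in practice, here is a minimal sketch using OpenAI’s Python library. The prompt text and parameter values are purely illustrative, not drawn from any real test:

```python
# A minimal sketch of requesting an essay-style answer from OpenAI's API.
# Requires the `openai` package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # choose a specific language model
    messages=[
        {"role": "user",
         "content": "Explain the causes of the French Revolution in 300 words."}
    ],
    max_tokens=400,          # caps the length of the response
    frequency_penalty=0.8,   # discourages repeating the same words
)

print(response.choices[0].message.content)
```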
To the untrained eye, the response from ChatGPT (which is nearly instantaneous) can look like competent (though not genius) writing. ChatGPT tends to give general answers to prompts and is prone to circular reasoning. But for cheaters, decent is good enough.
Fortunately, schools can respond by scanning student work with plagiarism detection tools, proctoring exams, and using software that detects suspicious activity on school networks. New York City’s public schools have banned students from using ChatGPT on school devices and networks. Some schools have invested in software that can detect whether a student is taking an abnormally short amount of time to answer a question or whether the same answer is given multiple times. If the software flags any suspicious activity, the teacher can then investigate further.
Another strategy is to use software that detects AI-generated text itself. A classifier released by OpenAI, the same company that created ChatGPT, attempts to guess whether a text was written by a machine, though its accuracy is imperfect. Other programs, like Turnitin, scan text against a database of previously submitted answers; if the software finds a match, the student is flagged for plagiarism. Teachers can also use everyday features of Google Docs or Microsoft Word, such as version history, to see how long a document has been open and whether blocks of text were pasted in all at once.
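As a toy illustration of the database-matching idea (not Turnitin’s actual algorithm, which is proprietary), a detector can compare overlapping word sequences between a new submission and previously stored answers and flag any submission with high overlap:

```python
# Toy illustration: flag a submission whose word 5-gram overlap with any
# stored answer exceeds a threshold. Real systems use far more
# sophisticated, proprietary matching.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, stored: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in a stored answer."""
    a, b = ngrams(submission, n), ngrams(stored, n)
    if not a:
        return 0.0
    return len(a & b) / len(a)

def flag_submission(submission: str, database: list, threshold: float = 0.3) -> bool:
    """True if the submission closely matches any previously stored answer."""
    return any(overlap_score(submission, prior) >= threshold for prior in database)
```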
But beyond cheating, there are other ethical issues around AI that educators are currently struggling to address.
One concern is that the use of ChatGPT could lead to issues of bias and inaccuracy. As with any AI system, ChatGPT is only as accurate as the data it is fed. If the data used to train the system is biased or inaccurate in any way, the system may generate biased or inaccurate responses. Students could then be misinformed or receive incorrect advice, with serious consequences for their learning and their future.
Another ethical issue surrounding ChatGPT is the potential for automation to replace real-life teachers and teaching assistants. While AI technology can provide personalized responses to student queries and help guide learning, it may also lead to a greater reliance on automated systems and less face-to-face interaction with teachers and other human support staff, who can draw on emotional intelligence and their knowledge of individual students to adjust instruction in real time. Replacing humans with machines could erode both the quality of education and the human connection and support in the classroom.
Finally, ChatGPT could raise issues of privacy and confidentiality. Systems like ChatGPT can store vast amounts of data about students and their learning habits, and this data could be used for a variety of purposes, from targeted advertising to detailed student profiles. Such profiles could be used to create personalized learning experiences, but they could also be used to manipulate students and their learning habits.
Now a small confession: this essay was written by ChatGPT. By combining the answers to four prompts and doing minimal editing to create transitions between paragraphs, the authors produced this article in less than half the time it would have taken them to write it alone. While ChatGPT did the heavy lifting, the authors added a few expressive phrases, e.g., “shockingly simple,” and deleted repetitions that might have given away that a machine had generated the text. Specific examples were added to support ChatGPT’s claims. Does knowing this change how you view this piece of writing? Does it make it less useful or invalidate its arguments?
To find out how educators can work with ChatGPT instead of ignoring or banning it, click here to read part 2 (written and edited entirely by humans).
By Brad Hoffman and Faya Hoffman, Board Certified Educational Planners, in collaboration with My Learning Springboard faculty members