Despite the naturalness of texts generated by artificial intelligence tools such as ChatGPT, they are not useful for everything. Pablo Dorta-González, Professor of Quantitative Methods at the University of Las Palmas de Gran Canaria, analyzes the tool and proposes practical examples for its use in the classroom.
Chatbots have a limited scope and produce uneven results, but they make headlines and fascinate people, which is a good thing. Over the past three months, millions of users have tested the potential and limits of ChatGPT, a bot developed by the company OpenAI to write emails, programming code, poems and even academic assignments. This chatbot recently passed a graduate (MBA) exam at a U.S. university, causing even more admiration and alarm. In response, some professors are modifying exams and assignments to account for this tool.
While many users are amazed at how impressive and natural the result can be, that doesn't mean it's useful for everything. In fact, humans should worry less about being put out of work by ChatGPT and more about being put out of work by someone who knows how to effectively use this technology. Therefore, some weaknesses and strengths of the language models are described below, as well as practical examples of their possible use in the classroom.
ChatGPT appears to have three goals: to be useful, to be truthful, and to be non-offensive. However, in its attempt to be useful (and not offensive), it occasionally hallucinates and makes things up. In addition, when using ChatGPT you may discover that it can answer questions about the definition and use of concepts satisfactorily, but that it lacks problem-solving skills. When the questions are difficult and require a deeper understanding, ChatGPT provides elegantly written answers that nevertheless overlook the situation posed and therefore reach the wrong conclusion. It also makes calculation errors, which may come as a surprise given that it is a computer model.
ChatGPT is useful for explaining concepts and giving examples. However, it is not reliable as a search engine or source of information. Therefore, any answer it produces that cannot be easily verified should be treated with caution.
It can also help generate different perspectives to understand how various sectors of society perceive things. This may have implications for our educational system. Instead of answering questions, students could be asked to write questions for the bot and evaluate its answers based on the different perspectives raised.
Supercreativity is an emerging concept in Artificial Intelligence (AI). In the words of Sebastian Thrun, founder of Google X: "We haven't even begun to understand how creative AI will become. If you take all the knowledge and creativity in the world and put it in a bottle, you'll be amazed at what will come out of it." This ability to collaborate with AI to create better outcomes and innovative ideas will become increasingly important in the future. However, we need to develop processes for humans to monitor AI and ensure that what it generates is reliable.
An important competency, both in education and in the labor market, is the ability to transfer what has been learned into practice. Transfer requires a deep understanding of a concept. Initially, when people learn a new concept in a given context, they often do not recognize it when they encounter it again in a different context. This is because we tend to focus on the specifics of any given problem or situation, and also because applying knowledge to a new context requires a deeper understanding. To use what has been previously learned, people have to recognize that the problem is the same but in a new context.
It is difficult to demonstrate that transfer occurs, and more difficult to teach people to transfer knowledge. But AI can help because it is good at inventing situations. It's a cheap way to give people lots of examples, some of which might be inaccurate, need more explanation, or just not be very realistic.
One possible classroom application could be for students to ask the AI to create scenarios that apply a concept they have learned in class: create a Harry Potter script that explains inflation in a country's economy; write a two-scene script in which aliens lead an innovative company on Earth; or write a song that uses the derivative and marginal analysis, for example. Students could then be asked to critique and elaborate on these outputs and suggest possible improvements.
In this way, the ability to generate infinitely many convincing (but slightly flawed) examples of the application of a concept allows one to approach transfer in a novel way.
We think we understand the world much better than we do, which makes us less willing to learn. This is called the illusion of explanatory depth, a cognitive bias that occurs when a person overestimates his or her understanding of a concept or phenomenon. This bias is often characterized by a person's ability to offer a detailed and seemingly well-informed explanation of a topic, despite his or her actual rather limited degree of understanding. For example, most of us cannot explain how an airplane can fly, how an engine works, or even how a pencil is made, but we delude ourselves that we have an in-depth knowledge of the subject. Students can also easily fall into this illusion, assuming that they understand how something works when, in reality, they have only a superficial knowledge of the subject.
Breaking the illusion requires confronting one's own ignorance, which can be an upsetting and humbling experience, and difficult for teachers to do. So letting the AI do it for you may be a good alternative.
One possible classroom application could be for students to ask the AI to explain a particular concept step by step, something the AI is very good at. Students should then improve this output by adding information and questioning some aspects: What is missing? What is wrong?
The aim of this activity is for students to add, remove and combine aspects, using their own research, until they come to understand the concept and realize how complex it really is, while gaining some mastery over it.
When students hear a concept explained, they often feel they understand it, but that feeling is not always accurate. An effective way to put concepts from theory into practice is to teach someone else, evaluate their work, and give them advice on how to improve. As any teacher knows, the act of teaching someone else and evaluating their work greatly enhances our own knowledge of a subject.
Acting as a 'student', the AI can provide conversations on a topic for students to critique and improve. The goal of this exercise is for the AI to develop a discourse on a topic and then 'work with the student' to improve the writing: adding new information, clarifying aspects, contributing ideas, and providing evidence. In this way, we turn the AI's propensity to simplify complex topics and its lack of deep analysis into an opportunity for the student to demonstrate their own understanding.
In this activity, the teacher provides students with an essay written by the AI on a given topic, and the students have to suggest improvements. Students then submit the original essay, their suggestions, and the final result. The process encourages them to think critically about the content and to articulate their ideas clearly and concisely. They may have to seek additional information to fill gaps in the AI's writing, or check the facts it presents against other sources. At the very least, this activity will help students recognize how difficult it can be to teach and evaluate the work of others.