As the use of artificial intelligence becomes more widespread, Dr. Catherine Quinlan, associate professor of science education at Howard University and collaborator on LabXchange's Data Science–Driven Science Education Project, considers its role in teaching and education as a whole.
Like Detective Del Spooner in the movie I, Robot, I'm wired to be slightly skeptical. I've avoided using Siri or Alexa on my own devices, though I've dabbled with a command or two on friends'. Maybe it was the many movies I've seen about artificial intelligence, or maybe it's my deep awareness of how manipulation works within society, and of how mind and motive work together. Nevertheless, I always appreciate opportunities to peer into the recesses of the mind, whether through Arnold Schwarzenegger's and Will Smith's AI action movies or through my dissertation research, which used think-aloud protocols to understand how students reasoned about scenarios.
My continued skepticism, while knowing I can’t completely shield myself from everything, also comes from being a Black woman—always asking how my information might be used against me or promote someone else’s interests. Who benefits and who loses? These questions are prompted by years of manipulation and sometimes hidden control that might reveal itself, in time.
As I surveyed the internet recently, taking in perspectives on AI and the concerns surrounding it, I was reminded of a virtual course I taught before entering academia full time, Life In Space: NASA ISS and Astrobiology, offered in a STEM certificate program for teachers. The course also educated students about the benefits of NASA spinoffs and about technologies astronauts used long before the public had access to them.
The cell phone is one such example: astronauts needed to communicate between ground and space, beginning with the first space laboratory, Skylab. Holter monitors enabled monitoring of astronauts' hearts, and the average person can now take one home. Eye masks allowed astronauts to get more than 45 minutes of sleep per night. Recycling water on the space station brought new ways of purifying water on the ground. Consider, too, the many ways we now preserve food, including removing water and reconstituting it later. I touched on similar ideas in the third entry in my children's book series, Keystone Passage: Day and Night on the Space Station. Then there's the Nikon camera: as a high school biology teacher, my students and I learned that the images of the moon we were analyzing were taken with a Nikon. More recently, we learned about camera sensors from NASA, a spinoff of satellite data-collection methods. Knowing all of this leads me to ask whether AI is just another such outgrowth. After all, we've watched the Curiosity rover and other robots operate on Mars. How long have AI features been around? Most of all, what are the potential jobs in AI, and how can students prepare for them?
Let's turn to education. Why have I not been concerned enough about AI in education to finally explore ChatGPT? Perhaps it's my focus on skills and skill attainment. Even when I ask students to write a paper, I'm always concerned with the skills attached to the process. I was, however, curious about what educators familiar with ChatGPT had to say on YouTube. One educator talked about the importance of constructing good questions when using it. More recently, I had the opportunity to attend a virtual webinar in which an expert discussed tests done at Carnegie Mellon to help ChatGPT better facilitate learning. All in all, what I heard confirms a belief about learning that my own research supports, and that educators need to keep in mind: easy come, easy go. Our teaching should focus on helping students acquire skills, so that they use their resources for skill development rather than for retrieving information at the bottom of Bloom's Taxonomy.
This validates my focus on skill development. In my course, class discussion also focuses on what makes a good question and what supporting evidence would look like. Coming up with good questions is a long process that requires students both to read and to evaluate information, and it gives me insight into how much time students spent on an assignment and where they are cognitively. In fact, I've been toying with the idea of changing my assessment artifacts. I've concluded from repeated implementation that if students engage in the process, they write a good paper. If they don't, they write a poor paper, even if they used ChatGPT.
Why? I suspect that the information an AI is fed reflects the way society thinks and the information society will keep feeding it. The output, therefore, will not reflect the kind of out-of-the-box thinking that I model and expect from my students. As one educator noted: garbage in, garbage out. The skills required to use ChatGPT well are no different from the skills we need to gather our own evidence. Thus, the more things change, the more they remain the same. So what's the difference? The potential impact on our lives. The greater the impact, the greater the stakes, and the more likely that even more students will be left behind if we don't focus on skills. I fear the continued tendency to take the easy way out, especially when teaching Black students. Economists repeatedly document the widening wealth gap and the growing numbers at the bottom. So, who will benefit and who will be left behind? Equitable robots: a notion too far?!
For too long we have emphasized the importance of knowing what over the importance of knowing how. Rather than measure our effectiveness by the impact on students' progress, we measure students' ability to get themselves there or, at the other extreme, measure how happy students are about engaging in the process. Increasing use of AI brings both opportunities and drawbacks. Our ability to think mathematically and critically will only become more important.
Our move towards using big data makes sense when we consider how companies and researchers are using AI. Making decisions using data will become increasingly important as we continue to strive towards equity in the United States. Does Sophia the AI robot understand more about us than we think, or more about us than we do? In a YouTube interview, Al Jazeera asked: “Sophia, what is the most challenging part about being human?” Sophia replied: “Trying to figure out the human emotions and how to act accordingly.” Perhaps Sophia is onto something and the most important decision we could make next is to build self-awareness—to get to know ourselves and our human emotions better, so we can achieve equity.
I was curious about what others were thinking. There are many assumptions we can make, with or without data, and sometimes we make incorrect assumptions from data in the absence of appropriate context. Here is an example of data that could inform us about society's activities and thoughts on AI: Google publishes trends in people's searches, using a number that shows relative value rather than absolute abundance. How does the US compare to the rest of the world?
It's important to note that, according to Google, the numbers do not reflect the actual number of searches but a value that indicates relative amounts. All data were retrieved on October 10th, 2023. This data could be used to facilitate rich discussions about the connections students see between AI and the STEM field of their interest. As teachers, we could ask students to create questions they think the data might address, and to offer the claims, inferences, or assumptions that could be made from the meanings they find in the data.
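For teachers who want to demystify these relative values with a class, the scaling idea can be sketched in a few lines of Python. This is an illustrative sketch only: the search counts below are hypothetical, not Google's actual data, and Google's real pipeline is more involved. The sketch simply shows the core idea that the largest value in a set is scaled to 100 and everything else is expressed relative to it.

```python
def to_relative(counts):
    """Scale raw counts to 0-100 relative values, Google Trends-style:
    the peak value becomes 100 and the rest are proportional to it."""
    peak = max(counts.values())
    return {region: round(n / peak * 100) for region, n in counts.items()}

# Hypothetical weekly search counts for the query "AI" (made-up numbers).
raw = {"United States": 84_000, "Worldwide": 120_000, "United Kingdom": 30_000}

print(to_relative(raw))
# {'United States': 70, 'Worldwide': 100, 'United Kingdom': 25}
```

Working through an example like this helps students see why a score of 70 does not mean 70 searches, or even 70 percent of all searches: it means 70 percent of the peak value in whatever set of regions and dates was compared.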
What assumptions could we make about people's familiarity with or use of AI? What searches might they have made without, or before, looking at this data? It would be interesting to see whether, after making sense of the data, students allowed this new understanding to influence their STEM interests or their path toward understanding their STEM career interests.
Personally, what frightens me most is the increasing potential for control and manipulation within our society, which leads me to reiterate: who will benefit and who will lose out? Having briefly surveyed available documentaries and video clips, I am hopeful that the benefits will prevail; some researchers described using AI to look for early indicators of breast cancer, enabling earlier diagnoses, and other researchers are using AI to better understand proteins. Perhaps this is not so different from how researchers used space to grow better protein crystals, to better target medicines such as insulin, or to create computer chips. In the words of Adrian Monk, AI could be both a blessing and a curse. It will always come down to this: what motivates us to take AI to the next level?