
Learning With AI: How Artificial Intelligence Both Grants and Bars Access in Writing and Learning

Abstract:

This study explores the question: how accessible is assistive AI technology as it pertains to writing and learning? Through qualitative research via interviews, observations, and documents, I examine how current AI technologies such as Otter.ai, Speechify, Read&Write, NVDA, and ChatGPT provide access to students, alongside discovering areas where they restrict access. I find that although AI offers a multitude of possibilities in learning design and makes students and teachers alike more efficient, it suffers from problems such as inaccuracy, concerns with integrity and security, misrepresentation, and unavailability due to technological and monetary barriers. Despite such concerns with access, AI continues to grow, and since it will remain part of our lives, I advocate for working with AI in the classroom in the hope that it will foster more digital literacy among individuals so that we can make informed decisions about the technology we use and, if necessary, change it for the better.

Introduction

Why write? It’s a question I used to ask myself as a kid faced with heaps of homework that meant less time playing. It was often accompanied by the question: what’s the point of math? Now, while I can’t exactly bring much insight into the math side of things, I’ve done enough writing to see its place in our lives. Writing is a way of thinking, of learning, and of communicating. It certainly cannot be ignored in our society. From the emails we send and receive to the tax documents we agonizingly fill out each year to the stories we tell, writing is there. Its presence is undeniable, and because of its deep roots in society, it can give power to individuals through the jobs it gives us access to or the laws we rewrite. With writing skills, we can begin to understand the power we have to incite change, no matter how small. We can begin to see what we are capable of. Thus, writing is, in many ways, “closely linked to how we form our identities” because when we write, we put our knowledge and perspective on the page in order to accomplish something (Brueggemann 386). Through writing, we access avenues of agency.

 

Since we live in a literate society, in order to access the agency writing provides, we very much need to learn how to write. School is where that happens. It is said that “successful function in the school is a good indication of successful function in a modern literate society” (Olson 283). The issue, though, is that school is often taught to a standard. Standard grammar. Standard English. Standardized testing. This adherence to a standard can oftentimes block access because it doesn’t easily accommodate “the world’s multiple ways of living and speaking” (Bloom 671). People are diverse, with different language backgrounds and neurodivergent perspectives. Even writing itself is versatile, ranging from serious documents to blog posts. Thus, “people do make knowledge in different ways” (Brueggemann 380). It is, therefore, important to recognize and understand the various ways knowledge and writing can be accessed while also understanding the limitations that still exist.

 

In today’s society, one way in which diversity in learning and writing can be supported is through assistive technology. In his paper discussing the history of assistive technology, Ben Satterfield references the Individuals with Disabilities Education Act of 1997, which defines assistive technology as “any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of children with disabilities” (1). When it comes to writing and reading, assistive technology mainly takes the form of text-to-speech or speech-to-text software. These technologies rely on artificial intelligence that processes inputted information to create a new output. Thus, AI helps present information in multiple ways to provide a diverse learning experience, and its use is steadily expanding.

 

In fact, in late 2022, a new AI system known as ChatGPT was released to the public. Using ChatGPT is simple enough: all you have to do is input text and ChatGPT provides an output. As a large language model, ChatGPT produces uniquely human-like responses to questions. It can help write essays and emails, or simply give advice on tough questions. It is, in and of itself, an assistive technology, for it helps people answer questions on a variety of topics from science to history. As one commentator says, “it’s turning out to be used in lots of applications: summarizing long articles, for example, or producing a first draft of a presentation that can then be tweaked” (Naughton). As you can see, AI assists people and provides access to new perspectives and ways of doing things.

 

Yet, ChatGPT’s release brought with it a concern about AI’s place in education. In an article, Logan Cochrane notes, “as AI increasingly becomes a tool everyone can use, universities need to ensure that learning outcomes and skills are transformed in such a way that they are complementary to the developments in AI and machine learning” (Cochrane). Since technology changes the way we access information, it impacts how we learn and thus disrupts the standard that is taught in education. In many ways, this is good because we begin to recognize that we can “learn along a continuum of difference” (Brueggemann 380). This would ultimately enhance learning for everyone by showing students all the possibilities instead of limiting them to just one.

 

But is it too good to be true? Since AI technology is already opening conversations about learning in new ways, we need to understand just what this technology is capable of and what it isn’t. With this in mind, the purpose of this study is to illuminate the ways AI technology can help individuals access writing and learning while also revealing its limitations. Teachers and students need the full picture surrounding assistive technology in order to know how to best use it to enhance learning. Furthermore, the only way to improve something is to know what’s still missing. Assistive technology may provide more access, but how much, and what is still being restricted? Thus, the question I want you to keep in mind going forward is this: how accessible is assistive AI technology as it pertains to writing and learning?

Methods

To answer this question, I have taken a qualitative approach through a combination of interviews, observations, and documents that focus on access and AI in college education. With this triangulation of data, I can ensure that my results are reliable and consistent by comparing the trends that appear across each method. I have chosen a qualitative approach because it allows researchers to “tease out…the immensely varied and complex relationships that writing facilitates” (Russell 261). Because AI technology is used to assist writing and learning skills, the best way to ascertain its range of accessibility (or lack thereof) is to gather varying perspectives from those who have actually used it alongside observing it in action. We need to see those complex relationships unfold. Qualitative studies open the door to such conversations and observations, and thus we can begin to tease out the complex relationship AI has with writing and learning access.

Interviews

Over the course of this study, I conducted three interviews. Each interviewee is associated with Wake Forest University and has expertise in either teaching or technology. The first interview was with Davita DesRoches, the alternative media specialist for the Center for Learning, Access, and Student Success (CLASS). She works directly with students to provide them with assistive AI technology. The second interview was with Dr. Betsy Barre, the Executive Director for the Center for the Advancement of Teaching (CAT). She works to improve teaching on the Wake Forest campus and has recently taken to sharing information on AI in the classroom, particularly concerning ChatGPT. The final interview was with Eudora Struble, the Director of Technology Accessibility for Wake Forest’s Information Systems. She is constantly working to inform individuals about, and to provide, better technology accessibility on campus. These interviews provide insight into AI as it is presented to students, teachers, and the campus community in general.

Observations

As a current student at Wake Forest University, I also decided to observe the AI technologies by using them myself. I observed a variety of AI technologies and recorded my responses in an observation notebook. This notebook separated direct observations from personal thoughts in an attempt to diminish personal bias. Thus, I recorded what actually happened alongside my reaction to it. For this study, I observed the following: (1) Otter.ai, a speech-to-text software, (2) Speechify, a text-to-speech software, (3) NVDA, a screen reader, (4) Read&Write, a combination of speech-to-text and text-to-speech, and (5) ChatGPT, the new AI large language model (see Appendix A-E). All these AI technologies except ChatGPT are offered on the CLASS department’s webpage. I am including ChatGPT in the mix because, although it is relatively new, its role as assistive technology in the classroom is already being widely discussed at universities, including Wake Forest. With these observations, I hope to determine what these AI technologies are capable of in terms of access.

Documents

The documents collected for this study were supportive in nature rather than objects of analysis. Since AI has been circulating in the news lately, I gathered articles that speak to the relationship between AI and education in order to better understand the diverse conversations currently occurring in our society. I made sure to collect a variety of documents, whether supportive of, critical of, or neutral toward AI. I find it important to understand what has already been said about AI in order to better understand what it does and where it is going.

Limitations

This study does come with limitations. Although I conducted three interviews across different departments, and the interviewees provide well-informed insights into AI and access across a college campus, they do come from positions of power. Additionally, though I am a current undergraduate student, my observations are entirely limited to my perspective and experience and, therefore, cannot account for other students’ experiences with AI. I try to circumvent these limitations by comparing the consistent trends that appear across these methods, thus corroborating the findings. Still, further research is necessary to strengthen this study. In particular, observations of and conversations with students who are neurodivergent or have other disabilities would be beneficial, since their experiences with the technology would provide a more diverse perspective than my single set of observations and my conversations with educators and directors do.

Results

With all this in mind, let’s recall the central question for this study: how accessible is assistive AI technology as it pertains to writing and learning? After examining the data across my interviews, observations, and documents, I found that AI is, at its core, a learning system. This means that although AI has learned to do certain things well, it is still learning and therefore still capable of making mistakes. In terms of accessibility in writing and learning, this means that AI can and does assist students but cannot be given complete trust. Moving forward, I’ll go through my results trend by trend, showing where AI succeeds in accessibility and where it falters. Afterward, I’ll explain the implications of an imperfect, still-learning AI system in the classroom and the ways we can still move forward.

Trend 1: Multiplicity

When thinking about multiplicity, the words abundance, variety, and diversity pop into my head, and that is exactly what assistive AI technology provides. As discussed at the start of this paper, people learn in different ways, and AI technology gives students access to a multiplicity of ways of doing things. At the surface level, there are many different AI technologies to choose from, allowing students to pick what works best for them. They aren’t necessarily limited to just one method. There’s a choice, and that choice can be tailored to the student’s needs. While I didn’t observe all the technologies offered by the CLASS department, it had over a dozen options for students to choose from to help in their reading, writing, and note-taking tasks. As Davita DesRoches told me, “AI is built into a lot of the software that I use.” And it is not just the software she uses; it’s available to all of us, from Apple’s Siri to Google’s voice typing feature. AI comes in many forms, allowing for many different uses.

 

In a diverse world, having many possibilities is incredibly important because “it opens up opportunities for people who have traumatic brain injuries, cognitive fatigue, who have intellectual, cognitive disabilities” (Struble). Take our speech-to-text software – Otter.ai and Read&Write – for example. Both of these programs employ AI to transcribe audio as it happens in real-time. Thus, as you talk, it writes. Using it is simple: all you have to do is press record. Otter.ai takes things a step further by distinguishing between multiple speakers and recording data such as timestamps. As I was using it, I was able to edit and comment on any saved conversations. I found that without the hassle of splitting my attention between writing down what was being said and listening, I could focus and engage more on the conversation happening around me and less on transcribing my notes. I also didn’t have to worry about forgetting a vital piece of information since it was being recorded. For individuals who have trouble writing quickly, multitasking, listening, or remembering things, this speech-to-text software is good for recording information to be revisited later while also putting it in a form other than audio. You have the option of seeing the text, hearing it, and editing it.

 

Going beyond typical transcription, Read&Write could also do text-to-speech and would, if you were typing, offer suggestions based on the context of your sentences. For instance, if I were to write “This is important,” the AI in Read&Write would suggest the word “because” to go next. Even though this is a simplistic example, this feature could ultimately help writers continue the flow of their sentences if they were to get stuck. Read&Write also contained a dictionary that defined words in both text and picture format. Thus, not only could you transcribe audio or vocalize text, but Read&Write also worked to make sure its users understood what was being read by defining words and making suggestions, whether visually or auditorily.

 

Read&Write wasn’t the only AI software that provided text-to-speech. If you’re a better auditory learner than visual learner, or have a visual impairment, text-to-speech AI alleviates the burden of reading. Speechify is an app that allows you to upload files, scan documents, type, or link websites that its AI will then process and read out loud to you. Not only does it provide many options for inputting data, but it can also read in various languages and in different voice tones. There was even an option to listen to text as if Snoop Dogg or Gwyneth Paltrow were reading it. As I used Speechify, I found that I could edit the text, adjust the listening speed, and select where and when I wanted it to start reading. In all, there were many options available so I could tailor the experience to my needs.

 

The NVDA screen reader was also an AI tool that read text out loud. Although it didn’t have upload or editing options, NVDA was essentially attached to my computer cursor so that whenever I hovered over text or typed a letter, it automatically read it out loud to me. Had my eyes been closed, I would still always have known where my cursor was on my computer because I could hear its location through the text it hovered over. Thus, as you can see, there are multiple ways of recording and presenting written text that can help those who learn in different ways.

 

Last but certainly not least is ChatGPT, the system - as mentioned earlier - that’s “pretty adept at mimicking human language” (Naughton). When it comes to multiplicity, ChatGPT will not only answer whatever question you can think to write, it will also write its response in whatever form you request, whether that be a poem, a report, an essay, or something else! Additionally, when thinking about the classroom, ChatGPT has many applications. One way to use ChatGPT to improve writing skills is to ask students to critique a ChatGPT-generated answer. Students can thus “explore AI’s output and then do the hard work of improving that output” (Mollick 3). This not only makes students think about how something was written but also about where there may be gaps in reasoning. Similarly, one of my interviewees, Eudora Struble, viewed it as a “reflection partner” to which she could give things she had already written and have ChatGPT test them and offer feedback. I did this myself when observing ChatGPT. I copied and pasted a story I had written and asked ChatGPT for feedback. It gave surprisingly constructive feedback, telling me where I went right (engaging descriptions) and where I could improve (like adding in more variety and action). Here’s how the conversation went pertaining to a short story I wrote about trench life during WWI (see Appendix F for the full story):

Me: Read the following short story and give me feedback pertaining to the dialogue, character development, and themes.

 

ChatGPT: Your story is a gripping and intense description of the horrors of war and its devastating effects on both the soldiers and the land they are fighting on.

 

Plot-wise, your story effectively conveys the sense of despair, hopelessness, and futility that soldiers in the trenches must have felt during World War I. The portrayal of the land itself as a malevolent entity that fights back against the humans who seek to exploit it is particularly effective. Your story also highlights the psychological toll of war on the soldiers, who are slowly but surely driven mad by the constant violence and trauma they experience.

 

In terms of character development, there is not a lot of individual characterization in your story, as it is more focused on the collective experiences of the soldiers. However, the way you describe the soldiers' reactions to their environment and the events around them effectively conveys their fear, desperation, and hopelessness.

 

The ending of your story is particularly powerful. The idea that the soldiers are trapped in a perpetual cycle of death and destruction, with no escape from the hell they have created for themselves, is a haunting and poignant conclusion that leaves a lasting impact on the reader.

 

The dialogue in your story is sparse, but that is appropriate given the setting and subject matter. The few lines of dialogue that are present are effective in conveying the soldiers' fear and desperation.

 

Overall, your story is a well-written and impactful portrayal of the horrors of war. The descriptions are vivid and evocative, and the theme of the destructive power of humanity is effectively conveyed. One potential area for improvement could be to provide a bit more variety in the pacing and structure of the story to prevent it from becoming too monotonous.

I’d like you to notice how detailed and lengthy this response is. ChatGPT points out what is both working and lacking, which, as the writer, made me feel better about the story since there were good aspects alongside the flaws. ChatGPT also focused on the features I asked about and explained why each was good or needed improvement. Thus, I found it to be good constructive criticism. For teachers who don’t have time to give detailed feedback, or students who want more guidance, using ChatGPT in this way could end up improving student writing.

 

A final way ChatGPT can help with learning and writing is as a brainstorming tool. Dr. Barre describes ChatGPT as being able to “pull together different things in ways that sometimes we can’t.” This allows students to overcome the hurdle of not knowing where to begin an essay or what counter-arguments are at their disposal. I see it as a tool for learning how to question my way out of writer’s block. The only caveat so far is that you have to know what questions to ask. The more detailed the question, the more tailored ChatGPT’s response can be. The good news is that ChatGPT can be asked many questions. So, whether you are using text-to-speech, speech-to-text, or ChatGPT to aid a learning experience, students and teachers have options both within and beyond the software they use, which allows for a diversity of experiences that can account for some of the world’s many ways of learning and writing.

 

Yet, despite all the good things that multiplicity provides, we must recognize the limitations it creates. When you have so many options, it can be difficult to decide what is best for you, and learning how to use the technology effectively can be a case of trial and error. There’s often a learning curve before you notice the effectiveness of the AI you are using. Take, for example, the Speechify software. As I was using it, the voice came off as quite robotic. It lacked the warmth and ease that an actual human voice usually has. Thus, for me, listening to a text through Speechify was disjointed; it wasn’t very engaging, and I had to concentrate to stay attentive. As a listener, I was not used to the voice, and so it took a while for me to settle into the text. Another example would be Read&Write. There was no obvious tutorial, so I spent a while trying to determine what each button did and where it was. Or, with NVDA, I had to practice patience and slowly navigate my computer, or else face NVDA rapidly reading, and consequently blurring together, all the text I was hovering over before I could process what was being audibly transcribed. As described earlier, ChatGPT also has a learning curve, since the effectiveness of the software depends entirely on the quality of the question you are asking. Sometimes this meant asking the same question in different ways until ChatGPT produced an answer that described things in the way that worked best for me.

 

Fortunately, when it comes to academics, guidance is offered to students and faculty as they go about using AI to aid their writing and learning. At Wake Forest, Davita DesRoches discussed with me how she works with students who use the CLASS department to introduce them to the AI software, saying, “so usually, what I do with the student in our first meeting is we do kind of an overview of everything that’s kind of available” and, if something is not working for a student, “we have that secondary conversation where we can figure out like a different approach to something or a way to make it more useful to the students” (DesRoches). Thus, in college academia, there is a support system surrounding the multitude of available AI technologies to ease the learning curve and help students access AI effectively.

Trend 2: Efficiency

Another trend in AI technology when it comes to access, writing, and learning is efficiency. Technology gets things done quicker. For instance, if you have more difficulty reading than listening, using a text-to-speech AI will help you process information faster. The same goes for speech-to-text. Why make things more difficult for yourself when the information can be readily available in a way that best suits your needs? Davita DesRoches describes this ease that AI provides as,

“showing ways in which some of that labor is a little bit repetitive or can be replicated by a machine…it does make parts of my job a lot easier and allows me to serve students and make academic environments way more accessible for them. So that’s a good thing because there would otherwise be way too much labor and…there would need to be two or three of me” (Davita DesRoches).

As DesRoches noted, when you allow AI to take up some of the grunt work, you end up with more time for other things. In a school setting, this could mean more time spent helping students. One example came during my interview with Betsy Barre, when she described ChatGPT handling the grammar so that students could focus on the content when writing a difficult essay. She says,

“We all only have so much cognitive capacity to work through a problem. And so, if there are a number of complex things happening that we’re paying attention to, we stop paying attention to other pieces of the problem [like grammar]…if ChatGPT can handle the grammar, right, you can get them to do much more advanced critical thinking about their ideas” (Barre).

AI can essentially take away the stress of one aspect of your writing and allow you to focus more on another. When working with AI, we can divide and conquer.

 

The key thing to remember is that AI should only be an aid, not a replacement for learning. It’s easy to just let technology do everything for us. If it gets the job done more efficiently than we do, why bother learning to do it ourselves? As I’ll later describe in more depth, AI isn’t perfect and, therefore, might not always work the way it should. So, it’s a balancing act. You need to know enough to catch AI in its mistakes. You have to learn how to do a task yourself so that you are prepared to complete it in the case that you don’t have AI at your disposal. Using something efficiently also means knowing when to exercise caution.

Trend 3: (In)accuracy

AI often struggles with accuracy. AI is still developing and learning, and because of that, it still makes mistakes. Every single AI software I observed had an issue with accuracy. To give a few examples, Otter.ai could not always identify the correct words to transcribe. For instance, it transcribed “Wake Forest” as “wait for us,” while “ChatGPT” became “Chet GPT” and once “Chechen beauty.” It also clumped together speakers. For instance, my response to something Betsy Barre said would be lumped together with what she said. This became an issue of who is actually saying what is being transcribed. Another clear example is with Speechify. I provided a link to an article for Speechify to read aloud, and it ended up adding in additional text, like promotional ads, mixed in with the actual article. I did not initially know this and got really confused about what was being read to me. What was a 10-minute read became an over-30-minute one with the excess text mixed in, making the experience confusing and labor-intensive. Here’s an excerpt of what it looked like when I had Speechify read Kashmir Hill’s New York Times article “The Secretive Company That Might End Privacy As We Know It” compared to what it actually looked like on the NYT website:

Figure 1 on the right shows a screenshot from a New York Times Article and Figure 2 on the left shows an excerpt from a Speechify audio transcription

Notice how Speechify clumps the podcast episode in with the article I was reading. Speechify would transcribe and vocalize the entire 31-minute podcast before returning to the article itself. That’s 31 minutes of listening to bolded text I didn’t originally want to hear. On the website, you can scroll right past and decide to listen later, and you can clearly tell that the podcast is not part of the original story from the smaller text, icons, and divider. It’s basically an ad, a complementary piece that the NYT thought I would be interested in. Despite this, Speechify could not differentiate between the article and the surrounding distractions. Instead, it fused them together, and this made for a disjointed listening experience as I listened to the article meshed together with a bunch of additional material.

 

The screen-reading AI called NVDA had noticeable mishaps as well. I recall from my observation journal (see Appendix C) that NVDA was highly sensitive, meaning that it would constantly read out whatever I was hovering over, whether that be a word or an icon. Yet despite its overall sensitivity, NVDA made mistakes. For example, I noticed that it would not read out icons like the share button in the top right corner of a Google Doc. It also didn’t recognize the tabs I had saved in my Chrome bookmarks bar. For someone who is visually impaired, this could limit their ability to navigate a computer, since they wouldn’t know that their mouse was hovering over a clickable option.

 

ChatGPT is another example of inaccuracy in AI. ChatGPT essentially has access to loads of documents gathered from the web and not all that information can be trusted. With “a lot of different people sort of sharing information that may or may not be accurate,” we cannot fully trust ChatGPT’s output as being 100% vetted (Struble). For instance, Betsy Barre had it read an article and then answer some questions. When she reviewed the answers, she found that ChatGPT “would like make up entire portions of the article” (Barre). For individuals who may not know the correct information, using ChatGPT could end up spreading misinformation. How can you tell what is wrong if you don’t know what is right?

 

This question prompts thoughts about learning. As I’ll highlight in my discussion further down, we need to learn with AI. By building up our knowledge, we build up an ability to recognize mistakes - the places for improvement. In an article titled “Getting the AI We Deserve,” author Matt Reed advocates for students becoming better readers. He says, “most disinformation falls apart at the slightest questioning, since it has no underlying reality. Even if writing can be automated, reading can’t be” (Reed). Thus, AI might not replace learning but instead encourage students to learn more so that they aren’t roadblocked by AI’s mistakes. In fact, they might be able to go in and fix those mistakes someday.

Trend 4: Integrity & Security

We shouldn't just question AI when it comes to accuracy. We should also question technology's role in upholding integrity and security. One of the main issues brought about by ChatGPT is academic honesty. Why? Well, not only has ChatGPT passed the bar exam (among other tests), but students can also use it to answer questions or write essays for any assignment. Therefore, using AI becomes “a justice issue if there are students who are [misusing it]” (Barre). This leads to questions about how assignments and school policies are designed, which I will discuss in depth further on in this essay. But, essentially, it becomes a discussion of awareness. If students know ChatGPT is not always accurate, they won’t rely on it too much. Knowing what it can and cannot do leads to a more informed, ethical decision. Additionally, it also becomes a question of student motivation and how things are taught. After all, “having ChatGPT do your work for you, it’s not going to help you learn” (Barre). So, if students are using ChatGPT to do all their work for them, maybe that’s an indication of a lack of desire to truly learn, and maybe that means the way things are done needs to be reassessed.

 

Yet integrity does not just lie with the student. It also belongs to the companies creating these AI systems. AI is ever-growing, and “we don’t really know who’s running these startups” (Barre). Can we then trust the individuals creating and distributing the AI? I found that many of these programs require logins and emails, and some require downloads. Can we trust those companies to keep our information secure? Luckily, the accessible technologies offered by Wake Forest are vetted by the CLASS department. Davita DesRoches described how companies allow her to do trial runs and surveys with students before offering the software campus-wide. Thus, there is some oversight, but with new AI systems like ChatGPT appearing all the time, questions of security and integrity still need to be asked.

Trend 5: (Mis)Representation

​

Just as we question the people running the companies and the accuracy of the information, AI involves questions of representation. As mentioned before, AI is a learning system. Therefore, any input that people generate impacts the output. This brings up

“concerns about the inclusion of people with disabilities in the sort of models that AI are using. And making sure that information about all kinds across all experiences are being included in what the AI is leveraging” (Struble).

For instance, if everyone using a speech-to-text program speaks Standard English, then the AI will reinforce Standard English. It will recognize it faster because it has a lot of data to build from. If English is not your first language or you have an accent, the AI might not transcribe your speech accurately. As Struble tells me,

“with different ways that your accent, your words, and intonation sound, [AI] could end up not having a good output and so that already represents a bias in that technology” (Struble).

The bias exists because different ways of speaking are not fairly represented by AI. At the end of the day, AI was created by people, and people have limitations and biases that lead to flawed AI systems.

 

Luckily, artificial intelligence is constantly in development. It keeps gathering data, and developers keep refining it as they discover new ideas. For instance, the current version of ChatGPT has filters that decline inappropriate requests. Additionally, ChatGPT acknowledges its own limitations when you open the tool, as shown in the figure below:

Figure 3: ChatGPT home screen listing three limitations for users to watch out for: (1) may occasionally generate incorrect information, (2) may occasionally produce harmful instructions or biased content, and (3) limited knowledge of the world and events after 2021.

As such, it is not as though AI is obscuring the fact that it is flawed. Eudora Struble also describes how speech-to-text AI is working to improve itself, saying,

“it’s starting to get a little better with people who have like a fairly noticeable Southern accent or something…it’s doing a little bit better it seems in the last year picking up the words correctly” (Struble).

Thus, AI is growing and still finding ways to represent individuals from diverse backgrounds, whether by more accurately picking up accents or by providing alternate ways of accessing information for visually impaired or neurodivergent users.

 

Besides, looking past all the flaws, we have to recognize that AI needs to start somewhere. The AI systems we currently have are the foundation upon which it can grow. We just have to make sure that we don’t become complacent and instead keep messing with the standard.

​

Trend 6: (Un)Availability 

​

Given these issues of representation, it is logical to ask who can actually access AI technology. To whom is it available? Alongside needing a physical electronic device and a solid internet connection, you have to pay to use all the features of most AI software. Of course, there are trial runs and free plans, but those give you the bare minimum. For instance, until you pay for a premium plan, Otter.ai only gives you a limited number of transcription minutes, and once you run out, you can't transcribe any more files. Similarly, Speechify does not allow you to take notes on the files you upload unless you pay. With ChatGPT, you can pay for priority access when the site gets overcrowded. As Dr. Barre rightly asks, "if you have to pay for it, who pays for it?" Not everyone has the means to do so. In the end, those who have money get prioritized and represented more by AI. Therefore, if AI is to be used to aid learning, schools need to ensure that their students can access these resources without financial burden.

​

Discussion: Towards A More Digitally Literate Society

​

Why bother with AI if it is full of inaccuracies, misrepresentations, trust issues, and unequal availability? Some schools have already banned the use of ChatGPT. Many fear that it will ruin learning (Roose). Yet, as my results show, AI is not something to be buried. It can help in the classroom. It can alleviate burdens on students with disabilities. By ignoring AI and the aid it can provide, we prohibit access to those who need it. We neglect the understanding that "people do make knowledge in different ways" (Brueggemann 380). And, in truth, AI is not going anywhere. Working against it is "going to be a losing battle, in the sense that it's going to keep getting better" (Barre). Long before the first computer, Socrates feared that writing would ultimately weaken the mind (Gabrial 23). He believed that with pencil and paper, humanity would be taking a step back. Instead, humanity moved forward: the pencil became a common tool, and writing became a means of power for everyone. Socrates' fear has simply shifted from writing to AI. Thus, if humanity stuck with writing and saw it turn out for the better, why shouldn't we give the same opportunity to this new technology?

 

The thing about change is that it involves a learning curve. To change, we must know what needs to be changed. AI is not perfect, and so we should not view it as perfect. In the interim, as AI continues to learn and improve, we must remain alert to AI's limitations alongside its benefits. We must have and spread digital literacy, which involves an "understanding of what [AI] can and cannot do and how to prompt it and really use it in a careful way that produces responsible results, ethical results, etc." (Barre). Digital literacy creates an informed perspective on the technology that influences our lives. Thus, if we are to move forward with AI, schools need to incorporate more digital literacy initiatives for both teachers and students to engage with. This will, of course, mean re-envisioning or reassessing the way things are done in the classroom. In fact, sometimes showing AI in action is more effective than simply telling people what to do and what not to do. Teachers can share digital literacy by modeling it in how they present and teach with AI. If they outright ban it, students may rebel and use it anyway. If teachers use it as a helping tool, so might students. Thus, I believe that schools need to learn with AI because students will "need hands-on experience to understand how this type of AI works, what types of bias it contains, and how it can be misused and weaponized" (Roose). In this digital age, our awareness must grow as AI continues to evolve.

 

Now, change may cause discomfort, but sometimes discomfort is a good thing. When we are challenged "to reconceptualize teaching," it "contributes to and enhances learning for all students" (Zamel 519). Why? Because it opens up the possibilities of what can be done, and "when we ask all students to tap into multiple pathways to generate and rethink their ideas, we disrupt constructions of 'normal,' we broaden notions of 'writing'" (Brueggemann 381). So if using AI means having to adjust an assignment in order to inform students about technology and provide new ways of learning, then it might just be worth the additional work.

 

Thankfully, conversations about AI are already happening, and more awareness is being spread through the news and through forums held at schools. To help spread digital literacy, I created an informational quick guide about the access and limitations offered by AI. The guide is meant to reach the public in a clear, concise manner so that it can reach more people than this paper would. At the very least, I hope this study and my quick guide make people curious about AI and make them question it, because when people question, they seek answers and they seek to make things better. AI certainly is not replacing humanity; instead, it justifies the need for us, since "there will always be the need for people in the space…to help check in and fix things and make recommendations" (Struble). So as AI continues to grow, more questions and research will be needed about the access it provides and the people it impacts in school, in the workplace, and elsewhere. Don't be afraid to question the robots in your life. Learn alongside them. In fact, learn better.
