Critical & Ethical AI

Ethics in AI

Generative Artificial Intelligence (Gen AI) is an exciting and powerful technological development that is altering a vast array of social institutions, higher education in particular. While there is still much to learn about how to use AI to support the mission of higher education, we must remain vigilant about the ways AI reinforces, and can be used to reinforce, systemic biases in higher education.

Critical scholars of technology in education acknowledge that digital technologies, like AI, “in schools are not neutral but political…they carry assumptions and ideas about the future of society” and “their design, promotion and use are all sites in which struggles over power are conducted” (Selwyn et al., 2016, pp. 149-150).

In the pursuit of educational excellence, we must remember that AI, like any powerful tool, can be a double-edged sword. As we embrace JEDI (Justice, Equity, Diversity, and Inclusion) principles, it is imperative that we scrutinize the impact of AI on the digital divide. Are we bridging gaps or exacerbating disparities? Let us not just employ AI in education, or conversely, completely write off the use of AI in the classroom, but consider its uses and limitations with intention and empathy, ensuring that every student is set up for academic success. This scrutiny should extend to the equitability of our methods of assessment and our use of assessment tools like Turnitin's AI detection algorithm. Do our assessment methods allow all students to demonstrate their true potential, or do they inadvertently favor or penalize certain groups? When it comes to students' use of AI, we must critically question whether our assessments are enhancing fairness or inadvertently reinforcing biases. It is our collective responsibility to ensure that our assessments promote equity and are not simply maintaining preexisting inequities in higher education.

- Dr. Kira Donnell, JEDI Faculty Director at CEETL and Lecturer Faculty in the Department of Asian American Studies

We encourage faculty, students, staff and administrators to adopt a critical stance toward AI, one that views AI as political and a site of power struggle, even as many of us seek to find beneficial uses for AI in our work and our classrooms.

To take a JEDI approach to AI, you should consider reading and discussing many of the ethical concerns addressed in the section below. This might include:

  • Exploring the exploitative labor practices behind AI companies as well as how AI creates new forms of labor

  • Uncovering how apps students may use, like Grammarly or ChatGPT, can be predatory

  • Using ChatGPT with students (see Teaching with AI) to show how it produces “hallucinations” that appropriate cultural identities and rhetorics

  • For more tips, check out the AI Guidance produced by Dr. Jennifer Trainor, CEETL Faculty Fellow and Professor of English

Questions to Consider:

What does critical AI mean to you? Which of the ethical concerns are most relevant to your work? What are your ideas about how we can implement JEDI principles at SF State and mitigate these ethical AI concerns?

Read further to learn more about ethical concerns around AI as well as how you can adopt a critical stance toward AI in your own work or teaching.

Ethical Concerns Around AI

To adopt a critical stance toward AI, it is important to first understand the ways that AI reinforces systemic biases and harms. The following list introduces an array of ethical concerns relevant to higher education that AI scholars have identified; these categories are adapted from Leon Furze’s (2023a) list of ethical concerns and Adams et al.’s (2023) list of ethical concerns for education.

Algorithmic systems are only as unbiased as the data they are trained on, which is to say that algorithmic systems like Gen AI are biased and discriminatory. The AI image generation tool Midjourney, for example, has been shown to produce biased results: when prompted to generate profile photos for professors in different departments, it returned images of mostly white, mostly male professors (Growcoot, 2023). Generative AI “indiscriminately [scrapes] the internet for data,” such that its dataset likely contains “racist, sexist, ableist, and otherwise discriminatory language,” which then produces “outputs that perpetuate these biases and prejudices” (Furze, 2023b). In addition to biased data, other forms of bias and discrimination shape Generative AI: the design of the AI model itself, unjust applications of AI outputs, and real-world forms of bias and discrimination.

  • Even though AI is digital and may feel like it has no environmental impact, it in fact requires the use of “rare earth minerals and metals” and large data centers to operate. A study by researchers at the University of Massachusetts Amherst found that training a single large language model (the study examined models including GPT-2) can emit nearly five times the lifetime carbon emissions of the average American car (Hao, 2019)!

  • If you would like to explore how to address this ethical AI concern further in your work or your course, you might consider resources at the SF State Climate HQ. Faculty: don’t miss out on the faculty learning community led by Assistant Professor Carolina Prado.

  • Perhaps one of the most salient ethical concerns on higher education campuses is how Gen AI impacts truth and academic integrity. AI language models can produce false or “hallucinated” information, including deepfakes, and they can be used to author content on behalf of the person prompting them. Please see our suggestions for how to teach with AI and our AI Guidance document for how to minimize such uses in your classroom.

  • Another ethical concern for instructors to consider is AI detection software. Such tools are inaccurate and growing more inaccurate as AI development outpaces detection tools. Turnitin's AI-detection feature has a 4% false positive rate (Chechitelli, 2023). During AY 2022-2023 at SF State, over 86,000 assignments were run through Turnitin; with a 4% false positive rate, nearly 3,500 of those assignments may have been falsely flagged as AI-generated (see the sketch below for the arithmetic behind this estimate). AI detection is also significantly less reliable for English language learners: a recent study by computer scientists at Stanford found a staggering ~60% false positive rate for papers written by English language learners (Liang et al., 2023; Myers, 2023).
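
The “nearly 3,500” figure above is simple arithmetic: the number of submissions multiplied by the false positive rate. The short Python sketch below makes that calculation explicit. It is illustrative only, not part of Turnitin or any SF State tool, and it assumes every submission checked was human-written; it simply shows how even a “small” error rate translates into thousands of students potentially facing false accusations.

    def estimated_false_flags(submissions: int, false_positive_rate: float) -> float:
        """Expected number of human-written submissions wrongly flagged as AI-generated."""
        return submissions * false_positive_rate

    # SF State, AY 2022-2023: ~86,000 assignments checked, using Turnitin's
    # reported 4% false positive rate (Chechitelli, 2023).
    print(estimated_false_flags(86_000, 0.04))  # 3440.0, i.e. "nearly 3,500"

    # The ~60% false positive rate observed for papers by English language
    # learners (Liang et al., 2023), applied to every 1,000 such papers:
    print(estimated_false_flags(1_000, 0.60))   # 600.0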

The content used to train AI is scraped from the web; consequently, the work of authors and artists available on the web was taken without consent. We must ask, when AI produces an image or text, whose composition or voice is being exploited? To whom can we attribute authorship: the Gen AI user, the machine, or someone or something else? Currently, US copyright law does not grant copyright protection to images created by Gen AI (Prakash, 2023).

The training data for Gen AI includes personal information. Much, if not all, of what we do online is “data-fied,” that is, turned into datapoints that measure, classify, and compare us as users for the purposes of advertising, surveillance, and other business interests. Like the authors and artists whose work was, and is, exploited to train Large Language Models (LLMs), individual users did not consent to having their data train such AI models. Consider your own privacy, as well as that of your students and colleagues, when using Gen AI.

Technology advocates often position technology as leading to greater efficiency and automating human labor. In reality, Gen AI creates new forms of labor, and training LLMs has required the exploitation of the “global underclass” (Gray and Suri, 2019; see an interview with Mary Gray). In education, digital technologies have often led to new forms of labor, such as increased pressure to document work and student learning outcomes, and this pressure is often differentiated across faculty rank, gender, and race (Selwyn et al., 2018).

There is considerable financial investment, including in the education sector, in developing AI that analyzes and classifies human facial expressions according to affect. Such tools overlap with concerns about student privacy and data rights, the potential for bias and discrimination, and surveillance. Consider, for example, how bias might factor into such AI tools: proctoring software that uses facial recognition algorithms has already been shown to discriminate against students of color and, in particular, women with darker skin tones, based on its training data (Yoder-Himes et al., 2022). What cost might there be to students, particularly students of color, if educational AI tools designed to infer engagement from affect, for example, misclassify their expressions?

The digital divide is an issue of both access and literacy. Access to technology, including appropriate devices, a stable internet connection, and widely used software, is increasingly required to participate in schools and broader society in today’s digital age. AI, too, spreads unequal “benefits and risk within and across societies” (Mohamed, Png & Isaac, 2020, p. 661); in the age of AI, access to paid versions of AI tools as well as literacy in using Gen AI will likely become increasingly important to students and their careers. As you determine your own course policies on AI, consider how AI literacy might be important to your students and your field in the next 5 to 10 years.

Just because AI can be used for an assignment or in your course does not mean that it should be. According to SF State Professor Jennifer Trainor, educators should consider whether the use of AI supports existing learning goals, develops students’ information and critical AI literacy, and promotes students’ sense of agency and confidence in their own voice and human judgement. In addition, students should have the option to opt out of AI use. You might consider the pedagogical appropriateness of AI by reviewing questions from the EdTech Audit.

Each of these ethical concerns speaks to how AI is a site of power struggle. The design and dataset are places in which worldviews and biases are encoded into AI, and the application and interpretation of AI outputs can have systemic ramifications. In addition, developing and maintaining AI is costly, meaning that “powerful AI is increasingly concentrated in the hands of those who already have the most” (Furze, 2023a).