Chapter 1: Introduction

As seen in voice assistants, mapping tools, shopping recommendations, essay-writing capabilities, and other familiar applications, AI may enhance educational services.

Second, urgency and importance arise through awareness of system-level risks and anxiety about potential future risks. For example, students may become subject to greater surveillance. Some teachers worry that they may be replaced—to the contrary, the Department firmly rejects the idea that AI could replace teachers. Examples of discrimination from algorithmic bias are on the public's mind, such as a voice recognition system that doesn't work as well with regional dialects, or an exam monitoring system that may unfairly identify some groups of students for disciplinary action. Some uses of AI may be infrastructural and invisible, which creates concerns about transparency and trust. AI often arrives in new applications with the aura of magic, but educators and procurement policies require that edtech show efficacy. AI may provide information that appears authentic, but actually is inaccurate or lacking a basis in reality. Of the highest importance, AI brings new risks in addition to the well-known data privacy and data security risks, such as the risk of scaling pattern detectors and automations that result in "algorithmic discrimination" (e.g., systematic unfairness in the learning opportunities or resources recommended to some populations of students).

Third, urgency arises because of the scale of possible unintended or unexpected consequences. When AI enables instructional decisions to be automated at scale, educators may discover unwanted consequences. In a simple example, if AI adapts by speeding curricular pace for some students and by