Chapter 3: What is AI?

Debates over what to automate and what to leave to human judgment will be renewed and will intensify as AI-enabled systems and tools advance in capability and become more ubiquitous. Let’s start with another simple example. When a teacher says, “Display a map of ancient Greece on the classroom screen,” an AI system may choose among hundreds of maps by noting the lesson objectives, what has worked well in similar classrooms, or which maps have features that support student learning. When an AI system suggests an instructional resource or offers a choice among a few options, the instructor may save time and focus on more important goals. However, there are also forms of AI-enabled automation that a classroom instructor may reject, for example, allowing an AI system or tool to select the most appropriate and relevant readings about a historical event for students. In that case, an educator may choose not to use AI-enabled systems or tools, given the risk of AI fabricating facts (“hallucinating”) or steering students toward inaccurate depictions of historical events found on the internet. Educators will be weighing benefits and risks like these daily.

Computers process theory and data differently than humans do. AI’s success depends on the associations and relationships found in the data provided to an algorithm during the model development process. Although some associations may be useful, others may be biased or inappropriate. Learning bad associations from data is a major risk, one that can lead to algorithmic discrimination. Every guardian is familiar with the problem: a person or computer may say, “Our data suggests your student should be placed in this class,” and the guardian may well argue, “No, you are using the wrong data. I know my child better, and they should instead be placed in another class.” This problem is not limited exclusively to AI systems and tools, but the use of AI models can amplify the risk.
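To make that risk concrete, here is a minimal, hypothetical sketch in Python, not drawn from the report itself; all of the data, names, and the recommend function below are illustrative assumptions. It shows how a model “trained” on historical placement records can absorb a biased association between a proxy variable, such as a student’s neighborhood, and a placement decision.

    from collections import defaultdict

    # Hypothetical historical records: (test_score, neighborhood, placed_in_advanced).
    # The records encode past bias: "north" students were usually placed in the
    # advanced class, while "south" students usually were not.
    records = [
        (88, "north", True), (85, "north", True), (79, "north", True),
        (91, "south", False), (86, "south", False), (80, "south", True),
    ]

    # "Training": estimate the historical advanced-placement rate per neighborhood.
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [placed, total]
    for score, hood, placed in records:
        counts[hood][0] += int(placed)
        counts[hood][1] += 1

    def recommend(score, hood):
        # Naive model: follow the historical placement rate for the student's
        # neighborhood. Note that the test score is ignored entirely; the model
        # has learned the proxy variable, not anything about the student.
        placed, total = counts[hood]
        return placed / total > 0.5

    # Two students with identical scores receive different recommendations,
    # reproducing (and automating) the bias in the historical records.
    print(recommend(86, "north"))  # True
    print(recommend(86, "south"))  # False

This is, of course, a caricature of model development, but it captures the guardian’s complaint in miniature: the system is “using the wrong data,” and automating the decision lets the bad association operate at scale.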