Chapter 5: Teaching

In the Introduction, we discuss the notion that when AI is incorporated into a system, the core of the AI is a model. In the Learning section, we discuss the need to be careful that models align with the learning we envision (e.g., that they aren’t too narrow). Now, based on the needs of teachers (as well as students and their families/caregivers), we add another layer to our criteria for good AI models: the need for explainability. Some AI models can recognize patterns in the world and take the right action, but they cannot explain why (e.g., how they arrived at the connection between the pattern and the action). This lack of explainability will not suffice for teaching; teachers will need to know how an AI model analyzed the work of one of their students and why the AI model recommended a particular tutorial, resource, or next step to that student. Thus, the explainability of an AI system’s decision is key to a teacher’s ability to judge that automated decision. Such explainability helps teachers develop appropriate levels of trust and distrust in AI, particularly to know where the AI model tends to make poor decisions. Explainability is also key to a teacher’s ability to monitor when an AI system may be unfairly acting on the wrong information (and thus may be biased); we discuss bias and fairness further in the Assessment section.

Surrounding the idea of explainability is the need for teachers to be able to inspect what an AI model is doing. For example, what kinds of instructional recommendations are being made, and to which students? Which students are being assigned remedial work in a never-ending loop? Which are making progress? Dashboards in current products present some of this information, but with AI, teachers may want to explore further which decisions are being made, for whom, and based on which student-specific factors the AI model had available.
