Chapter 8: Recommendations

R&D must address not only efficacy but also explore where harm may arise and the systemic problems that can occur through weak trust or over-trust in AI systems. R&D must take the lead in making AI models more context-sensitive and ensuring that they are effective, safe, and trustworthy for use with varied learners in diverse settings. Although AI can find patterns beyond the limited number of variables that people normally think about, AI is not particularly good at understanding and working with context in the ways people do. Over time, the learning sciences have grown to be less about individualistic cognitive principles and more encompassing, first of social learning and then of the many dimensions of context that matter in learning. Our use of AI needs to follow this trajectory toward context to support educational applications.

To achieve this human-centric vision, listening session attendees argued that teams will need time and freedom to explore how best to manage the tension between the pace of technological advancement and the need for broader contextual insights that support trust and safety. They will need time and freedom to pioneer new processes that better involve teachers and students as co-designers, with attention to balancing power dynamics. And they will need to shift attention from older ways of framing priorities (such as achievement gaps) to new ways of prioritizing digital equity.

We call on R&D funders to focus resources on the long tail of learner variability, the need for AI-enabled systems that better incorporate context, and the time required to get contextual considerations right. We call upon researchers and developers to prioritize challenges of context, trust, and safety in their work to advance AI.