
Chapter 8: Recommendations

more than processing "big data," and although we want to harness insights from data, human interpretation of data remains highly important. We reject a technological determinism in which patterns in data, on their own, tell us what to do. Applications of AI in education must be grounded in established, modern learning principles and the wisdom of educational practitioners, and they should leverage the educational assessment community's expertise in detecting bias and improving fairness.

8.6. Recommendation #4: Prioritize Strengthening Trust

Technology can only help us achieve educational objectives when we trust it. Yet our listening sessions revealed that distrust of edtech and AI is commonplace. Constituents distrust emerging technologies for multiple reasons. They may have experienced privacy violations. The user experience may be more burdensome than anticipated. Promised increases in student learning may not be backed by efficacy research. They may have experienced unanticipated consequences. Unexpected costs may arise. Constituents may distrust complexity. Trust must incorporate safety, usability, and efficacy.

The Department firmly holds that constituents want AI that supports teachers, and it rejects visions of AI replacing teachers. And yet teachers, students, and their families/caregivers need support to build appropriate levels of trust in systems that affect their work. In the broader ecosystem, trustworthy AI is recognized as a multidimensional problem (including the dimensions of Figure 14, above). If every step forward does not include strong elements of trust building, we worry that distrust will distract from the innovation serving the public good that AI could help realize.
