Chapter 8: Recommendations

Across our listening sessions with more than 700 attendees, the predominant discussion tackled how constituents can achieve a consensus vision for AI-enabled edtech where humans are firmly at the center. The Blueprint for an AI Bill of Rights similarly calls for "access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts…" Building on this consensus, we call upon all constituents to adopt "humans in the loop" as a key criterion for educational use of AI.

We envision a technology-enhanced future more like an electric bike and less like a robot vacuum. On an electric bike, the human is fully aware and fully in control, but their burden is lessened and their effort is multiplied by a complementary technological enhancement. A robot vacuum does its job, freeing the human from involvement or oversight.

Although teachers should not be the only humans involved in loops, Figure 5 provided examples of three types of teacher loops that are central to education and can be used to illustrate what "human in the loop" means. Here, we use the example of an AI chatbot to elaborate on the meaning of the loops. First, as students become involved in extended interactions with AI chatbots, teachers will need to educate students about safe AI use, monitor their use, and provide human recourse when things go astray. Second, teachers are beginning to use chatbots to plan personalized instruction for their students; they will need to be involved in loops with other teachers to understand effective prompts, to know how to analyze AI-generated lesson plans for flaws, and to avoid the human tendency to overly trust AI systems and underapply human judgment. Third, teachers need to be involved in the design and evaluation of AI systems before