Chapter 2: Building Ethical, Equitable Policies Together

A recent Executive Order issued by President Biden sought to strengthen the connection among racial equity, education, and AI, stating that “members of underserved communities—many of whom have endured generations of discrimination and disinvestment—still confront significant barriers to realizing the full promise of our great Nation, and the Federal Government has a responsibility to remove these barriers” and that the Federal Government shall both “pursue educational equity so that our Nation’s schools put every student on a path to success” and also “root out bias in the design and use of new technologies, such as artificial intelligence.”

A specific vision of equity, such as the one described in the Department’s recent report, Advancing Digital Equity for All, is essential to policy discussions about AI in education. This report defines digital equity as “the condition in which individuals and communities have the information technology capacity that is needed for full participation in the society and economy of the United States.”

Issues related to racial equity and unfair bias were at the heart of every listening session we held. In particular, we heard a conversation that was increasingly attuned to issues of data quality and the consequences of using poor or inappropriate data in AI systems for education. Datasets are used to develop AI, and when they are non-representative or contain undesired associations or patterns, the resulting AI models may act unfairly in how they detect patterns or automate decisions. Systematic, unwanted unfairness in how a computer detects patterns or automates decisions is called “algorithmic bias.” Algorithmic bias could diminish equity at scale through unintended discrimination.

As this document discussed in the Formative Assessment section, this is not a new conversation. For decades, constituents have rightly probed whether assessments are unbiased and fair. Just as with assessments, whether an AI model