Generative AI in Education

AI is already playing a primary role in Vocational & Higher Education, with the goal of optimising workflows and improving student learning and outcomes. AI-enabled digital experiences are also playing a growing role in service delivery. Career advising and workforce training are increasingly built around corporate partnerships, community engagement with local and regional governments, small business owners, and alumni, while vocational offerings are being absorbed by nontraditional providers in the shadow education sector. Institutions are beginning to leverage advanced AI where it makes sense, to enhance engagement and simplify personal interactions throughout the student and alumni journey. In turn, AI will become a marker of value and quality for institutions that adapt to and survive the changing market.

Widespread student access to and exploration of generative artificial intelligence (AI) models will challenge traditional vocational & higher education practices and assessment approaches even further. Generative AI models have recently demonstrated dramatically improved abilities to produce creative outputs in response to prompts from humans with no detailed technical expertise. AI is also beginning to help institutions achieve educational priorities in better ways, at scale, and at lower cost; for example, it can extend the support teachers offer to individual students when teachers run out of time. AI will also enable greater customization of curricular resources, making them more responsive to the knowledge and experiences students bring to their learning.

Critical Thinking of Students through Generative AI

Institutions should encourage students to declare when they are using generative AI tools and, when they do, to review their process of arriving at a higher-quality answer. There is an emerging opportunity to move from faculty-provided support to AI-provided assistance. Given the current quality of content indexed by these models, a student’s additional insights, literature review, and critical thinking remain key short-term points of differentiation. The value of outputs from generative AI relies on the design of appropriate prompts and refinements; skills that could, themselves, be graded. Reviewing a student’s initial question and process of improvement through iterative rewrites may both help build reflective practice skills and shape a more sustained institutional AI and assessment strategy.
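One way to make a student’s refinement process gradeable is simply to capture each prompt attempt alongside its output. The sketch below is illustrative only: `PromptLog` and the stub model are hypothetical names, and a real deployment would call an actual generative AI API in place of the stub.

```python
# Minimal sketch: logging a student's iterative prompt refinements so an
# instructor can review (and grade) the improvement process.
# `generate` is a placeholder for any generative AI call.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PromptLog:
    """Records each prompt a student tries and the output it produced."""
    attempts: list = field(default_factory=list)

    def run(self, prompt: str, generate: Callable[[str], str]) -> str:
        output = generate(prompt)
        self.attempts.append({"prompt": prompt, "output": output})
        return output

    def transcript(self) -> str:
        # Render the refinement history for instructor review.
        return "\n".join(
            f"Attempt {i + 1}: {a['prompt']!r} -> {a['output']!r}"
            for i, a in enumerate(self.attempts)
        )

# Example with a stub model: a student refines a vague prompt into a specific one.
stub_model = lambda p: f"[model response to: {p}]"
log = PromptLog()
log.run("explain rates", stub_model)
log.run("explain rate of change with a real-world example for an apprentice", stub_model)
print(log.transcript())
```

The instructor sees the full sequence of rewrites, not just the final answer, which is the raw material for assessing prompt-design skill.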

AI in Education with Humans in the Loop

The recent U.S. Department of Education Office of Educational Technology policy report, Artificial Intelligence and the Future of Teaching and Learning, pursues a vision of AI in education in which humans are in the loop. That means people are part of the process of noticing patterns in an educational system and assigning meaning to those patterns. It also means that teachers remain at the helm of major instructional decisions, and that assessments involve teacher input and decision making too. One loop is the cycle of recognizing patterns in what students do and selecting next steps or resources that could support their learning. Other loops involve teachers planning and reflecting on lessons; Response to Intervention is another well-known type of loop. The report outlines six desired qualities of AI tools and systems in education.

  1. When choosing to use AI in educational systems, decision makers prioritize educational goals, the fit to all we know about how people learn, and alignment to evidence-based best practices in education.

  2. Educators can inspect EdTech to determine whether and how AI is being incorporated within EdTech systems. Educators push for AI models that can explain the basis for detecting patterns and/or for making recommendations, and people retain control over these suggestions.

  3. Developers and implementers of AI in education take strong steps to minimize bias and promote fairness in AI models.

  4. The use of AI models in education is based on evidence of efficacy (using standards already established in education for this purpose) and works for diverse learners and in varied educational settings.

  5. AI models support transparent, accountable, and responsible use of AI in education by involving humans in the loop to ensure that educational values and principles are prioritized.

  6. Ensuring security and privacy of student, teacher, and other human data in AI systems is essential.

Instructional Decisions & Strengthening Assessments with AI

AI could help teachers customize and personalize materials for their students, leveraging the teacher’s understanding of student needs and strengths. Academic assessment approaches must evolve beyond isolated assignments toward more continuous, data-driven views of the student. Combining multiple formative and summative approaches continues to offer an enduring pathway forward. A student’s ability to evaluate when and how to effectively use generative AI will also become significant. Institutions must embrace the opportunity to acknowledge this shift, explore opportunities for AI-assisted authoring, guide students to evaluate AI strengths and weaknesses, and evolve teaching and assessment as these technologies continue to develop.

When AI automates instructional decisions at scale, speeding the curricular pace for some students and slowing it for others based on incomplete data, poor theories, or biased assumptions about learning, achievement gaps could widen. Exercising judgement and control in the use of AI systems and tools is an essential part of providing the best opportunity to learn for all students, especially when educational decisions carry consequences. AI does not have the broad qualities of contextual judgment that people do.

AI models and AI-enabled systems may also have the potential to strengthen assessments. AI can be embedded in the learning process, providing feedback to students as they work to solve a problem rather than only after they have reached a wrong answer. When assessment is more embedded in learning, it can better support that learning, and timely feedback is critical.

In one example, a question type that invites students to draw a graph or create a model can be analyzed with AI algorithms, and similar student models might be grouped for the teacher to interpret. Enhanced assessment may enable teachers to better respond to students’ understanding of a concept like “rate of change” in a complex, real-world situation. In another example, an AI-enabled learning technology may be able to interact verbally with a student about their response to an essay prompt, asking questions that guide the student to clarify their argument without requiring the student to read a screen or type at a keyboard.
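The grouping step in the first example can be sketched very simply. This is an assumption-laden toy: it reduces each student-drawn graph to a single hypothetical feature (an estimated slope) and buckets nearby values together, whereas a real system would use richer features or learned representations.

```python
# Minimal sketch: grouping similar student responses so a teacher can
# interpret them in batches rather than one by one. Each student's drawn
# graph is reduced here to one illustrative feature: its estimated slope.

from collections import defaultdict

def group_by_slope(responses, bin_width=0.5):
    """Bucket student graphs whose estimated slopes fall in the same range.

    responses: list of (student_name, estimated_slope) pairs.
    Returns {bucket_slope: [student_names]}.
    """
    groups = defaultdict(list)
    for student, slope in responses:
        key = round(slope / bin_width) * bin_width  # coarse similarity bucket
        groups[key].append(student)
    return dict(groups)

responses = [("Ana", 2.1), ("Ben", 1.9), ("Cho", 0.2), ("Dev", 2.0)]
print(group_by_slope(responses))
```

The teacher then interprets one representative per group (e.g. the near-slope-2 cluster versus the near-flat cluster) instead of reviewing every drawing individually.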

Inspectable, Explainable and Overridable AI in Education

Explainability of an AI system’s decision is key to a teacher’s ability to judge that automated decision. Such explainability helps teachers to develop appropriate levels of trust and distrust in AI, particularly to know where the AI model tends to make poor decisions. Explainability is also key to a teacher’s ability to monitor when an AI system may be unfairly acting on the wrong information and thus may be biased.

Surrounding the idea of explainability is the need for teachers to be able to inspect what an AI model is doing. For example, what kinds of instructional recommendations are being made, and to which students? Which students are being assigned remedial work in a never-ending loop? Which are making progress? With AI, teachers may want to further explore which decisions are being made and for whom, and to know which student-specific factors an AI model had available (and possibly which were influential) when reaching a particular decision.

Teachers will also need the ability to view and make their own judgement about automated decisions, such as decisions about which set of mathematics problems a student should work on next. They need to be able to intervene and override decisions when they disagree with the logic behind an instructional recommendation.
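The inspect-and-override flow described above can be modeled as a small data structure. All names here (`Recommendation`, the problem-set identifiers) are hypothetical; the point is only that the AI’s proposal carries its rationale for inspection, and a teacher’s override always wins.

```python
# Minimal sketch of a teacher-overridable automated decision: the AI proposes
# a next problem set with a rationale the teacher can inspect; the teacher
# may record an override, which takes precedence.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    student: str
    proposed: str                 # AI-proposed next problem set
    rationale: str                # explanation shown to the teacher
    override: Optional[str] = None  # set by the teacher when they disagree

    def final(self) -> str:
        # The teacher's override, when present, always wins.
        return self.override or self.proposed

rec = Recommendation("Ana", "fractions-set-3",
                     rationale="missed 4 of 5 fraction items yesterday")
rec.override = "fractions-review-with-manipulatives"
print(rec.final())
```

Keeping the rationale attached to every recommendation is what makes the decision inspectable as well as overridable.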

AI may also be helpful by highlighting for students and teachers which forms of assistance have been most useful to the student in the recent past, so that an educator can expand access to the specific assistance that works for that individual student. AI-enabled systems and tools can provide teachers with additional information about a student’s recent work, giving the instructor greater context as they begin to provide help.

Algorithmic Discrimination in AI-Enabled Assessment

Bias and fairness have long been important issues in assessment design and administration. With AI, we must also worry about algorithmic discrimination, which can arise from the way AI algorithms are developed and trained on large datasets of parameters and values that may not represent all cohorts of learners.

Algorithmic discrimination is not just about the measurement side of formative assessment; it is also about the feedback loop and the instructional interventions and supports that may be undertaken in response to data collected by assessments.

There is a question both about access to such interventions and about the quality or appropriateness of those interventions or supports. When an algorithm suggests hints, next steps, or resources to a student, we have to check whether the help-giving is unfair; for example, whether one group systematically receives less useful help, which would be discriminatory.
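One concrete way to run that check is to compare, per group, the rate at which students actually received useful help. The sketch below is illustrative: the records, the group labels, and the 0.2 disparity threshold are assumptions, not established psychometric standards.

```python
# Minimal sketch of a help-giving disparity check: compute per-group rates of
# "received useful help" and flag a large gap as a potential sign of
# discriminatory help-giving. Threshold and data are illustrative only.

def help_rates(records):
    """records: list of (group_label, got_useful_help: bool) tuples."""
    totals, helped = {}, {}
    for group, got_help in records:
        totals[group] = totals.get(group, 0) + 1
        helped[group] = helped.get(group, 0) + (1 if got_help else 0)
    return {g: helped[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.2):
    # Flag when the gap between best- and worst-served groups is too wide.
    return max(rates.values()) - min(rates.values()) > threshold

records = [("A", True), ("A", True), ("A", False),
           ("B", False), ("B", False), ("B", True)]
rates = help_rates(records)
print(rates, flag_disparity(rates))
```

A flagged disparity is not proof of discrimination, but it tells reviewers exactly where to look in the feedback loop.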

Fairness goes beyond bias as well. In AI-enabled assessment, both the opportunity to learn through feedback loops, as well as the quality of learning in and outside of such loops, should be addressed. Issues of bias and fairness have arisen in traditional assessments, and the field of psychometrics has already developed valuable tools to challenge and address these issues.

Educators can build upon alignments between their long-standing visions for formative assessment and the emerging capabilities that AI holds. Further, the professional assessment community brings a toolkit for asking and answering questions about topics like bias and fairness. The psychometric toolkit of methods is a strong start toward the questions that must be asked and answered because it already contains ways to measure bias and fairness and, more generally, to benchmark the quality of formative assessments.
