Playful Assessment: What we learned from games to make assessment playful
By: YJ Kim |김윤전 Assistant Professor Curriculum & Instruction, UW-Madison
This presentation gave me a lot to reflect on in terms of the purpose, possibilities and effectiveness of gamification in teaching and learning.
Gamification is the application of typical game elements (e.g. point scoring, competition with others, rules of play) to encourage greater learner engagement.
There is an assumption made here: that since young people play games all the time and enjoy playing them, we WILL be able to better engage them by making learning more game-like. However, if we include games in our classrooms for the sake of it, we run the risk of these games being 'disjointed' from the curriculum, and we may, as a result, fail to achieve increased engagement. An apt analogy: covering broccoli with chocolate doesn't solve the underlying problem that children do not like broccoli in the first place.
The key is learning how to make learning more intrinsically motivating - the application of play in education should go beyond extrinsic motivation. It is important to place students' joy and sense of agency at the centre of the learning and assessment process.
The idea of play in education goes beyond the game itself (refer to the diagram above). It also includes the game "infrastructure", as well as the state of mind of being playful.
Playful Learning Environments
Playful learning environments incorporate these 5 elements:
Joyful
Socially interactive
Actively engaging
Meaningful
Iterative
This echoes my belief that learning is social (and cultural), and that students learn by doing. Learning is also an iterative process where students cycle through participating in/conducting an activity, evaluating how it went, improving it, and doing the activity again until they are ready to move on to a different activity.
What is playful ASSESSMENT?
Assessment as Evidentiary Reasoning (i.e. Assessment as a process of reasoning from evidence)
[Adapted from: The Design of an Assessment System for the Race to the Top: A Learning Sciences Perspective on Issues of Growth and Measurement]
Teachers assess students to learn about what they know and can do, but assessments do not offer a direct pipeline into a student's mind. Assessing educational outcomes is not as straightforward as measuring height and weight; the attributes to be measured are mental representations and processes that are not outwardly visible. Therefore, an assessment is a tool designed to observe students' behaviour and produce data that can be used to draw reasonable inferences about what students know (Mislevy, 2001).
Three key elements underlying any assessment:
Cognition: Theory/data and a set of assumptions about how students represent knowledge and develop competence in a subject matter domain. Our goal is to identify the set of knowledge and skills that is important to measure for the context of use, whether that be characterising the competencies students have acquired at some point in time to make a summative judgment, or for making a formative judgement to guide subsequent instruction to maximise learning.
Observation: A set of assumptions and principles about the kinds of tasks that will prompt students to say, do or create something that demonstrates important knowledge and skills. (i.e. a set of specifications for assessment tasks that will elicit illuminating responses from students.) The tasks to which students are asked to respond on an assessment are not arbitrary. They must be carefully designed to provide evidence that is linked to the cognitive model of learning and to support the kind of inferences and decisions that will be made on the basis of the assessment results.
Interpretation: Expresses how the observations derived from a set of assessment tasks constitute evidence about the knowledge and skills being assessed. In the context of large-scale assessment, the interpretation method is usually a statistical model, which is a characterisation/ summarisation of patterns one would expect to see in the data given varying levels of student competency. In the context of classroom assessment, the interpretation is often made less formally by the teacher, and is usually based on an intuitive/ qualitative model rather than a formal statistical one (e.g. use of assessment rubrics).
These three elements may be explicit or implicit, but an assessment cannot be designed and implemented without consideration of each. For an assessment to be effective and valid, the three elements must be in synchrony.
Example 1: Digital Game Based Assessment
Shadowspect is a digital game developed for assessment for middle school math, with puzzles designed to allow students to demonstrate their conceptual understanding of geometry and their spatial reasoning skills.
An important note: the puzzles in the game are aligned to the national curriculum standards, since "if they are not designed to be aligned with academic standards, teachers are not going to use them".
Below is the backend data collected to assess the efficacy of the game in terms of capturing students' understanding of geometry, their spatial reasoning skills as well as persistence.
This is a very sophisticated game model that I do not dream of replicating (or even fully understanding 🤣), but I feel the intention is one that I share - when designing anything for learning, it is important to first consider and be intentional about what data/evidence of learning you will be collecting in order to make sense of whether the resource is constructed well. This will make the refinement process much smoother, given that it is substantiated by evidence.
The diagram shows how persistence is measured by the game model:
The model also enables teachers to use filters to group their learners - e.g. high persistence and substantial puzzle progress, low participation, high participation, etc.). This allows for more informed intervention, and can also help teachers in making classroom-based decisions (e.g. in terms of pacing).
Referring to the second image, the lines are a visual representation of students' game 'path'. Purple lines are students who are making good moves and seeking feedback before completing the level(s) swiftly (hence no intervention needed). On the other hand, the green lines represent students who are making many moves but not completing the levels, or students taking extended breaks. The follow-up by the teacher would then be to figure out if it's an engagement issue or a skill issue, clarify misconceptions, etc.
Such evidence-based interventions seem to be more efficient than the traditional approaches.
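To make the grouping idea concrete, here is a minimal sketch of how learners might be bucketed from simple game telemetry. This is purely illustrative - the data fields, thresholds, and the persistence proxy are my own hypothetical assumptions, not Shadowspect's actual model.

```python
# Hypothetical sketch: grouping students from game telemetry.
# Fields, thresholds, and the persistence formula are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class StudentLog:
    name: str
    attempts: int             # total puzzle attempts
    retries_after_fail: int   # attempts made immediately after a failed try
    puzzles_completed: int

def persistence(log: StudentLog) -> float:
    # Crude proxy: the share of attempts that follow a failure.
    return log.retries_after_fail / log.attempts if log.attempts else 0.0

def group(log: StudentLog) -> str:
    # Arbitrary cut-offs chosen only to show the filtering idea.
    if log.attempts < 5:
        return "low participation"
    if persistence(log) >= 0.5 and log.puzzles_completed >= 3:
        return "high persistence, substantial progress"
    if persistence(log) >= 0.5:
        return "high persistence, little progress"
    return "low persistence"

logs = [
    StudentLog("A", attempts=12, retries_after_fail=7, puzzles_completed=4),
    StudentLog("B", attempts=3, retries_after_fail=1, puzzles_completed=0),
]
for log in logs:
    print(log.name, "->", group(log))
```

A teacher-facing dashboard would then surface each bucket as a filter, which is the intervention-planning workflow described above.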
Example 2: Playful Assessment Tools for Hands-On Learning
With greater emphasis on authentic learning in classrooms, from hands-on STEM projects to open-ended making, educators now face an increased need to assess process-related skills (e.g., collaboration and troubleshooting).
It is thus important to create assessment tools that go beyond rubrics and empower students to be active participants of assessment.
The MIT Playful Journey Lab gives us a glimpse of what it would look like to seamlessly embed assessment in such messy, iterative, and social processes of learning that include activities such as making and engineering tasks.
E.g. in terms of 'productive risk-taking', something that is difficult to quantify, the use of "I" statements allows for student self-assessment, which can promote student agency.
Example of "I" statements (somewhat like success criteria):
Another instance of involving students in the assessment process is the risk-taking punch card, where students constantly reflect and take stock of their learning. They punch the card each time they take a risk, and use the flip side of the punch card to qualify these instances.
Read more here: Embedding Assessment in Hands-On Learning
Access resources here: Beyond Rubrics
Example 3: Assessment Party Game PD for Teachers
Interested to learn more about this!
Presenter's Summary & Insights:
A playful lens is helpful for developing assessment that is student-centered, authentic, engaging and socially mediated.
It can lead to more innovative teaching and assessment practices. (Though the urge to have “one single score” might still persist)
Digital tools can "automate" reasoning processes & are more "robust", while non-digital tools can be easily adapted by educators in their classrooms given their higher level of flexibility.
Playful assessment can spark teachers' creativity about what assessment could look like in their own classrooms.
Establishing new norms about assessment takes time! It could be difficult for teachers and students to buy in and feel comfortable!
My Reflection and Takeaways:
1. Effectiveness of Gamification?
The extent to which gamification can enhance teaching and learning depends on both students and teachers. Students' readiness levels would play a role, but I feel in the current climate that it is the teachers who need greater convincing to incorporate games in the classroom. This is because we are constantly under pressure to cover the syllabus within our limited curriculum hours, and hence direct instruction (frontal teaching) is perceived to be more efficient. (Though it is questionable whether our students are absorbing what we think we are teaching them.)
The importance of a seamless integration of games into the curriculum cannot be overlooked. Teachers can use games effectively as preparation for future learning, where students first gain implicit understanding through the game experience before teachers introduce the formal concepts. The teacher, as a designer and active facilitator, can then help students draw connections between the game and the concept to be learned. This is something that I wholeheartedly agree with. In fact, this has informed my lesson design, where I adopted such an inductive approach to teach students about disaster management strategies.
Apart from being a form of assessment, I believe that games also play a role in sparking students' curiosity to explore concepts further, often beyond what is required of them by their syllabus. A game like Getting to Zero is designed to catalyse students' interest in sustainability-related issues through the rich post-game discussions generated. [Read players' insights here & here.]
2. Operationalising Play
A new perspective gained would be that there is more than one way to operationalise play - games can be used not just to teach CONCEPTS, but also to guide learners through PROCESSES/ LEARNING FRAMEWORKS.
I was quite inspired by MetaRubric, a game designed to help teachers learn how to design and use rubrics to assess project-based/maker-centered learning. It starts with a creative mini-project (e.g. creating a movie poster), then asks participants to identify what makes that project good, ultimately coming back around to evaluating their original project.
I thought that it could also be useful to have students play to co-create rubrics in a playful way. The process of getting them to articulate the elements of a good movie poster (or whatever the end product of the creative mini-project is) can better guide them in refining their projects.
A quote from A/P YJ Kim:
"Students should be part of the broader assessment conversation. Then they talk about what they think is important when demonstrating success. Assessment needs to be participatory. And playful. It's about a sense of agency and mutually understood communicative values."
Excited to apply what I've gleaned from this presentation to create more gamified learning experiences!