
#12: Assessment for Dummies

Ng Wen Xin

Notes and takeaways from EOY 2021 & from reading Assessment in Singapore: Perspectives for Classroom Practice. Shall organise this post this way:

  • Stage 1: Setting/Vetting an Assessment

  • Stage 2: Marking

  • Stage 3: Making Sense of Results
 

Stage 1: Setting/Vetting an Assessment


1.1 Maintaining Demands of Assessment


Defining Demand:

  • the requests that examiners make of candidates to perform certain tasks within a question;

  • a pre-test concern that explicitly states what candidates need to do

  • Note: Wide variation in task demand over time would be unfair to students, as their expectations of what the exams will involve are likely to be based directly or indirectly on previous exam papers

vs Difficulty:

  • Measured by how students have actually performed on a question; can vary across different groups of students, and can even change over time; an empirical measure taken after the test (see the sketch below)
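
Since difficulty is empirical, it can only be computed from actual scores. A minimal Python sketch, assuming difficulty is operationalised as the facility index (mean mark as a fraction of the maximum); the marks below are made up:

```python
def facility_index(scores, max_mark):
    """Post-test, empirical difficulty of a question:
    mean mark earned as a fraction of the maximum (higher = easier)."""
    return sum(scores) / (len(scores) * max_mark)

# Hypothetical marks for one 4-mark question across a class of 10.
marks = [4, 3, 3, 2, 4, 1, 0, 3, 2, 4]
print(f"Facility: {facility_index(marks, max_mark=4):.2f}")  # 0.65
```

Recomputing this for different classes or cohorts shows how the same question, with the same demand, can have different difficulty.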

Comparing Demand:

  • Before setting the current year’s paper: work through and be familiar with the examination demands of past year papers

  • Note: Comparability of demands between two examinations means they are similar in OVERALL level of demand, rather than having the same level of demand in every aspect [or section]; differences in the demands of different aspects [or sections] roughly balance each other out

Steps:


 

1.2 Setting Open-Ended Questions


 

1.3 What validity is and isn't


Validity is about:

  • Supporting evidence and theory

  • Interpretations of test scores

  • Test scores, not the test or assessment itself

  • Degree, i.e., ‘how much’ or ‘relative extent’

  • Verifying that test scores are used according to the intended purpose

Validity is not:

  • Just about measuring what a test is supposed to measure (i.e. test content)


Evidence of Validity

*We have greater validity when we have a greater degree of supporting evidence

Example [Question from another school's Sec 1 EOY]



Consider: Fairness & Reliability of the Question

Unfair:

  • Extensive calculation for 1 mark

  • *If the setter is testing the concept of population density, s/he could just have gotten students to calculate one country's population density (especially for NA)

  • *If the setter is testing students' ability to see and rank density, s/he could perhaps merge columns 2 and 3 to give the density values straight away, OR at least set aside 1 more mark to leave room for calculation error (see the sketch below this example)

[Potentially] Unreliable:

  • Can't differentiate between students who lost marks because of calculation errors and students who lost marks because they lack understanding of the concept

(Credits: Raine 😬)
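
For concreteness, a minimal sketch of the working the question forces on students; the country names and figures below are invented for illustration, not taken from the actual paper:

```python
# Hypothetical stand-ins for the question's table (columns 2 and 3:
# population and land area); the real figures are not reproduced here.
countries = {
    "Country A": {"population": 5_700_000, "area_km2": 730},
    "Country B": {"population": 33_000_000, "area_km2": 330_000},
    "Country C": {"population": 270_000_000, "area_km2": 1_900_000},
}

# One division per row before the ranking skill is even reached.
densities = {
    name: data["population"] / data["area_km2"]
    for name, data in countries.items()
}

# The skill presumably being tested: rank countries by density.
for name in sorted(densities, key=densities.get, reverse=True):
    print(f"{name}: {densities[name]:,.0f} people per km^2")
```

Every row needs an error-prone division before the 1-mark ranking can even start; pre-computing the density column (merging columns 2 and 3) would isolate the ranking skill, and an extra mark would cushion calculation slips.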

 

1.4 Ensuring Construct Representation and Relevance

 

Stage 2: Marking


2.1 How to reduce subjectivity in marking open-ended questions


I. Develop a comprehensive mark scheme

II. Articulate the intended traits

  • The underlying traits the questions are designed to measure must be shared with the markers to establish a common understanding of the intent of the testing

  • Sampling students' responses after the test will help markers see the alignment between the intent and the outcomes of the test; for fairness, any unexpected outcomes should be given special treatment during marking (see the sampling sketch after this list)

  • *For essay-type questions where a choice of questions is offered, markers have to be reminded that the demands on students are comparable across the questions; being mindful of this comparability will prevent markers from exercising leniency when marking responses to the question that 'looks' difficult, and vice versa
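
A rough sketch of how the post-test sampling above might be run; the script IDs, sample size, and fixed seed are all assumptions for illustration, not a prescribed procedure:

```python
import random

def sample_scripts(script_ids, n=10, seed=2021):
    """Draw a fixed random sample of scripts for markers to review together,
    so the intended traits can be checked against actual responses
    before full marking begins."""
    rng = random.Random(seed)  # fixed seed: every marker sees the same scripts
    return sorted(rng.sample(script_ids, min(n, len(script_ids))))

# Hypothetical register of 160 scripts.
scripts = [f"3N-{i:03d}" for i in range(1, 161)]
print(sample_scripts(scripts))
```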

 

2.2 [Tiny] Takeaways from 3N SS EOY Marking

  • Many students misinterpreted sources in answering Q5 (Evaluation)

    1. "They are literal people"

    2. How to help them:

      1. Exposure to more of such sources, though also bearing in mind to include a good mix of literal and “more abstract” sources within an SBQ [1Y1N for the 2 confusing sources so they can still hit L3]

      2. Get them to pay closer attention to provenance

  • 2 ways of describing for SRQ6:

    1. What the suggestion/solution is about, OR

    2. Why the strategy will be helpful to solving the issue at hand





 

Stage 3: Making Sense of Results (For Teachers and Students)


3.1 How to make the reporting of test results more meaningful


Effective feedback addresses the following 3 questions:

  1. Feed up (where am I going?)

  2. Feed back (how am I going?)

  3. Feed forward (where to next?)

Using reports to enhance feedback

  • Reports of test results should be able to inform each student what else s/he needs to do

  • Serve to affirm what students were and were not able to do in a test, without having to focus on evaluative feedback such as scores

  • Test reports that include effective and actionable feedback, without extensive evaluative feedback, will make test results more meaningful for both the teacher and the student (see the sketch below)
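
A minimal sketch of what such a report could look like, assuming item marks have been tagged by skill; the skills, cut-off, and wording are illustrative assumptions, not a prescribed format. It covers feed back (what is and isn't yet secure) and feed forward (what to do next) without leading with a score:

```python
# Hypothetical per-skill marks for one student: (earned, available).
skill_marks = {
    "Source inference": (3, 8),
    "Reliability evaluation": (5, 6),
    "SRQ explanation": (4, 10),
}

for skill, (earned, total) in skill_marks.items():
    secure = earned / total >= 0.7  # assumed cut-off for "secure"
    # Feed back: what the student was or wasn't able to do.
    status = "secure" if secure else "not yet secure"
    # Feed forward: what else to do, instead of only an evaluative score.
    step = "" if secure else " -> revisit this skill before the next practice"
    print(f"{skill}: {status}{step}")
```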


Guide:


 

3.2 Crafting MRS

  1. Consider: time constraints, students’ readiness/willingness to reflect, etc.

  2. Once again: consider input vs output

  3. “Simplifying” LORMS for them (in regular lessons too)

E.g. [Reliability] (see the sketch after this list)

  • L2 — just because she is one Muslim woman (who)

  • L3 — just based on content; no message but can find message in CR (what OR CR)

  • L4 — (what AND CR) Believe

  • L5 — (what AND CR) Don’t believe

  • L6 — who what why
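
One way to "simplify" further is to treat the levels as an explicit decision rule. A rough sketch, under the assumption that a response can be tagged with the features above; the flag names are my own shorthand, not official LORMS terms:

```python
from typing import Optional

def simplified_reliability_level(who: bool, what: bool, cr: bool,
                                 believes: Optional[bool], why: bool) -> int:
    """Rough coding of the simplified levels above.
    who = uses provenance; what = uses content; cr = cross-refers;
    believes = verdict on the source (None if no verdict); why = explains why."""
    if who and what and why:
        return 6                               # L6: who + what + why
    if what and cr and believes is not None:
        return 5 if believes is False else 4   # L5: don't believe; L4: believe
    if what or cr:
        return 3                               # L3: content only / message via CR
    if who:
        return 2                               # L2: provenance alone
    return 1

# E.g. content + cross-reference + a "don't believe" verdict:
print(simplified_reliability_level(who=False, what=True, cr=True,
                                   believes=False, why=False))  # 5
```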

 

3.3 Script-checking AFIs


 
