
Co-crafting an Improved Assessment

  • Oct 5, 2025
  • 2 min read
Co-crafting Questions Routine

Sometimes in teaching, things based on good ideas fall short in practice. One of the assessments I used with my students last year was called "What's the question?" The goal was for students to write a problem that could result in a given answer, similar to the math routine "Co-Craft Questions." In assessing students this way, however, I was not helping them learn to create mathematical questions; I was expecting them to arrive with that skill already in place. In reality, this assessment did not give me the reliable results I was looking for, because there were several problems with how it was set up.


The first problem with this assessment was that students did not have the background knowledge to be successful. "Barriers and inequities exist when background knowledge that is unfamiliar to some learners is critical to integrating or using new information" (Center for Applied Special Technology, n.d.). The assignment did not address those barriers; instead, it asked students to rely on background knowledge they lacked. My intention was for students to show real-world applications of the problem, but many of them were still struggling with the mathematical concepts themselves and were not able to tie them to real-world examples.


The other problem with this assessment was that there was not enough scaffolding in place to elicit solid answers. While the problem was standards-based, it did not guide students toward demonstrating mastery. "The trouble is, when the context and criteria for making evaluations are ambiguous, bias is more prevalent" (Mackenzie et al., 2019). I gave students a standard and asked them to prove they understood the objective through their question. This was too open-ended and did not set clear expectations. While students were left guessing what I expected, I was arbitrarily assigning scores against my own assumption of "meeting grade level expectations." This leads to very biased grading.



While this assessment did not work in the way I had set it up, it has the potential to become a better assessment through scaffolding and the use of a clear rubric. A rubric provides students with fair grading on an open-ended task. In a good assessment, "rubrics were used such that the evaluation of the work was the same, thus quality ensured, but the demonstration could be different" (Montenegro & Jankowski, 2017, p. 7). If I set up a clear rubric and provided students with guided practice and feedback on their work, this could be modified into a much stronger assessment. Like many things in education, you can learn through the "fails" in order to improve and make the assessment more reliable.


Center for Applied Special Technology (CAST). (n.d.). UDL guidelines. CAST. https://udlguidelines.cast.org/


Montenegro, E., & Jankowski, N. A. (2017). Equity and assessment: Moving towards culturally responsive assessment. (Occasional Paper No. 29). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).


Mackenzie, L. N., Wehner, J., & Correll, S. (2019, January 11). Why most performance evaluations are biased, and how to fix them. Harvard Business Review.




© Alexandria Hopp 2026
