Critical Thinking | Creative Problem Solving | Effective Communicating

CHAPTER 17
Strategies for Assessing Student Learning

What Is It?

Researchers and educators typically identify three types of assessments: diagnostic, formative, and summative (Swearingen, 2002). Diagnostic assessments determine students' background knowledge and identify misconceptions. However, we are not going to discuss diagnostic assessments in this chapter because we have covered them in Chapter 13: Strategies for Activating Prior Knowledge.

Formative assessments are non-graded activities that provide data for teachers and students. Teachers can use them to provide feedback to students so they can reflect on their learning progression and plan for future learning. Teachers can use formative assessments to help them plan for re-teaching, intervention, and enrichment. Ultimately, formative assessments can improve a student's learning and a teacher's teaching (Carnegie Mellon University, n.d.).

Summative assessments occur at the end of a unit. They can indicate a student's progress in learning the standards and can be used by students, parents, teachers, administrators, and state officials (Woods, 2017, p. 1).

Effective assessments require students to show an understanding of the content they've learned and demonstrate acquired skills. On the first day of school, we announce to students, “This is a thinking class, not a memorization class. You will need to memorize some things, which has been expected of you in the past, like when you learned to tie your shoe or spell your name. However, we will require you to also understand the information we teach you, which requires you to think.”

To help students differentiate between memorization and understanding, we ask them to raise their hands if they can recite the Pledge of Allegiance (or some other commonly memorized verse, such as one from a popular song to which students are currently listening). Most students raise their hands to indicate they know the Pledge. We instruct these students to keep their hands raised if they can explain what it means to “pledge your allegiance to a country.” Most hands are lowered, at which time we continue, “Although you may have memorized the Pledge of Allegiance, you may not understand it. Our goal is that you understand what we teach you in this class.”

In the book Understanding by Design, authors Grant Wiggins and Jay McTighe (2005) describe how to plan units so students understand content. They explain that "Understanding thus involves meeting a challenge for thought. We encounter a mental problem, an experience with puzzling or no meaning. We use judgment to draw upon our repertoire of skill and knowledge to solve it" (Wiggins & McTighe, 2005, p. 39). To assess students on their understanding, tests and activities they complete must include a challenge that requires them to use their learned knowledge in a novel situation.

Assessing students on their learning can occur in a variety of ways in the science classroom. Student understanding of content can be assessed using diverse testing methods, such as performance-based or traditional paper-and-pencil tests. Performance-based assessments are “the demonstration and application of knowledge, skills, and work habits” (Association for Supervision and Curriculum Development, 2011, p. 1). Assessments can also include a list of choices where students choose how to demonstrate their learning.

Why We Like It

Formative assessments can provide the data we need to determine if our current lessons are effectively teaching content. When the majority of students perform well on a formative assessment, this can be an indicator that they are learning and are ready to move to the next concept. However, when most students struggle on a formative assessment, we know we need to develop a different approach.

Formative assessments can also identify individual student needs, such as their misconceptions and misunderstandings. We can use this data to support individual students, which is often referred to as “intervention” (Hooper, n.d.). While some students receive intervention, others can receive an enrichment activity that allows them to dig deeper into the content (Taylor, 2019).

Another reason we use formative assessments is because they can provide an opportunity for students to self-reflect on their learning and learning efforts. We use a structured format called “learning goals and scales,” which provides students with feedback every few days so they can determine if they're learning and what may need to change to make them more successful.

Summative assessments can be used for teacher and student reflection. By analyzing and interpreting summative data, sometimes patterns emerge that reveal areas of opportunity. For example, while working with teachers in a professional development class, one of our colleagues recognized that the majority of her students who were seated in the back of the classroom scored lower on assessments than those who sat in the front of the room. These students weren't failing, but she knew they were not reaching their full potential. She knew she needed to ensure she was monitoring and providing feedback to all students, regardless of their seat location in her classroom.

We also like to give students options when we assess their learning. Giving students choice in how they are assessed can allow for more creativity and provide additional opportunities for them to apply information instead of just regurgitating facts for a test (Davenport, 2018).

Supporting Research

A meta-analysis of the effects of formative assessment was completed by educational researchers Paul Black and Dylan Wiliam. After analyzing more than 250 publications, they concluded that, “While formative assessment can help all pupils, it yields particularly good results for low achievers by concentrating on specific problems with their work and giving them a clear understanding of what is wrong and how to put it right” (Black & Wiliam, 1998).

Researchers have found that learning goals and self-reflection can have a positive impact on intrinsic motivation and overall student performance (Grant & Dweck, 2003, p. 550; Alesch & Niblack-Rickard, 2018, p. 25).

Studies also show self-reflection can lead to deeper learning, growth, and students taking more ownership over their learning (JISC, 2015; Eisenbach, 2016).

Research indicates that students can demonstrate learning at higher levels when they are assessed with a performance-based assessment, particularly one that interests them (Bae & Kokka, 2016, p. 13).

Skills for Intentional Scholars/NGSS Connections

Effective assessments create opportunities for all three of the Skills for Intentional Scholars to be demonstrated. Students must use critical thinking and creative problem solving to apply the knowledge they have learned in all assessment types that test for their understanding of the content. Students must also be able to effectively communicate in order to demonstrate their learning.

The Science and Engineering Practices in the Next Generation Science Standards require that students develop and use models; construct explanations and design solutions; and obtain, evaluate, and communicate information, all of which can be accomplished while assessing student learning (National Science Teaching Association, 2014).

Application

Formative and summative assessments have different purposes and may often have different formats. We begin by discussing how to use cool downs as formative assessments. Then we provide a very detailed description of how to develop and use learning goals and scales to identify students who require intervention or enrichment prior to the summative assessment. And, finally, we discuss the process for developing summative assessments, including performance-based assessments. Although there are many differences between formative and summative assessments, both should test for student understanding and not solely focus on memorization.

FORMATIVE ASSESSMENTS

Formative assessments are non-graded activities that provide data to both students and teachers. Students can use them to identify the concepts they are finding difficult to understand and what are—and aren't—effective learning strategies. Teachers can use formative assessments to plan for individual student needs, such as intervention and enrichment.

We usually administer formative assessments at the beginning or end of the class period. If they occur at the end, we refer to them as “cool downs.” In this section we will discuss various types of cool downs and how to use learning goals and scales as a structured form of formative assessment. See Chapter 15: Strategies for the Beginning and Ending of Class for a further discussion of warm-ups and cool downs.

COOL DOWNS: PROCEDURES

After students complete the day's primary lesson activity, they are instructed to return to their seats to complete the day's cool down. We place the cool down on the board or project it onto a screen (see examples in the next section). Because we want to determine the needs of individual students, we ask that students complete their cool downs independently. Students can document their answers on an index card, sticky note, or scrap piece of paper.

Once all students are done with their cool down, they exchange papers with a partner. We then share the answer key with the class and ask students to correct their partner's paper. Once the feedback is complete, we provide a few minutes for partners to verbally explain the corrections so students know how they can improve their answers.

We often collect cool downs because they provide us with important data about student learning (or the lack of it). Cool downs can act as a formative assessment that indicates if the majority of the class is ready for the next concept or if we need to reteach the lesson. This kind of analysis can help us more immediately support students who need it.

COOL DOWNS: EXAMPLES

Cool downs can be quick formative assessments that require no more than 10 min: 3–5 min for students to answer the questions and 3–5 min to review their answers with a learning partner and participate in a class discussion.

The following are a few examples of cool downs we've used:

LEARNING GOALS AND SCALES

A specific tool that we use for both warm-ups and cool downs is learning goals and scales (LG&S), which engages students in a review of previously taught content and a time of reflection. When we begin planning a unit, we first write a standards-based learning goal. Then we break the goal into increments that are written as a scale of 0–4, where a 0 rating indicates that the students don't know any of the learning goal's content and a 4 rating indicates the students have accomplished the goal and completed an enrichment activity.

When we plan a unit, we use the backward design format, which dictates that teachers first write learning goals, then the unit assessment, and finally the daily lessons (Culatta, 2019). We begin our planning by starting with the chosen standard(s). We use the standard(s) to develop a learning goal and its scale, which is used to create the summative assessment. Then we write daily lesson plans to support student understanding of content.

NOTE: We also include a section on a “Lower-Prep Version of Learning Goals and Scales” for teachers who might want to experiment with this strategy first prior to making it a major part of their instructional routine.

Learning Goal Development

Learning goals state what students should know and be able to do at the end of a unit. They generally take several weeks for students to accomplish. They differ from daily targets, daily objectives, and learning objectives because these focus on what should be accomplished at the end of a lesson (Marzano, 2013).

There are many learning goals and scales available online for every subject and grade level; however, we prefer to write our own because they drive our planning and we can customize them to our specific state standards, available resources, and student needs.

Before we begin writing a learning goal, we decide which standard(s) will be taught. To model this process, we share an example of a lab safety and scientific method learning goal and explain the process we used to arrive at it.

We begin every school year, regardless of the grade level and subject we are teaching, with the state standard that says students should be able to design, safely conduct, and communicate controlled investigations. When we unpacked this standard for an eighth grade general science class, we determined that it included the following skills that students should be able to do at the end of the unit:

  1. Exercise lab safety procedures.
  2. Develop a testable hypothesis and design a controlled experiment.
  3. Communicate the experiment's results.

We used the standard and its components to write the learning goal, which is written in student-friendly language by starting with “I can.” The goal reads, “I can demonstrate lab safety procedures, set up an experiment, and communicate an experiment's results.” Notice that the learning goal has three parts. This is intentional. We've found that having three specific parts helps to write the goal's scale, which we discuss in the next section.

The biggest difficulty in writing learning goals is determining the necessary background knowledge students must have before they can accomplish each of the three parts of the goal. We specifically write the goal so that the first part is required background knowledge for the second part, which is then required background knowledge for the third part. For our example learning goal, “I can demonstrate lab safety procedures, set up an experiment, and communicate an experiment's results,” we require students to pass lab safety tests with a minimum grade of 80% before they can participate in labs. This means students need to be able to demonstrate lab safety procedures prior to setting up their experiment, which is also necessary prior to communicating the experiment's results.

Scale Development

After writing the learning goal, we develop a scale from 0–4. Scales are similar to rubrics. They indicate to students how much of the learning goal they've accomplished and what learning must still occur prior to the summative assessment. Figure 17.2: Example Scale is the scale we wrote for the learning goal, “I can demonstrate lab safety procedures, set up an experiment, and communicate an experiment's results.”

When we write a scale, we keep in mind Carol Dweck's research about students who have a growth mindset, which—among other elements—states that if they believe they can learn, they are likely to apply more effort (Mindset Works, 2017). To provide students with the encouragement they may need to put forth effort, we use the word “yet” and emphasize it by writing it using all capital letters. See Chapter 6: Strategies for Teaching the Engineering Process for additional information regarding the cultivation of growth mindsets.

We use the three succinct parts of the learning goal to develop the scale. A rating of 0 indicates the student has not yet learned any of the three parts. A 1 rating indicates that a student has learned the first part but not parts 2 or 3. This pattern continues through the 3 rating. The 4 rating is intended for students who enjoy challenges, require an enrichment lesson, or have a passion for the subject and want to learn more. We provide a detailed discussion of the 4 rating in the section titled Enrichment Ideas.
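
Because the scale's parts are cumulative, its logic is simple to state precisely. Here is a minimal sketch in Python (illustrative only; the wording, function name, and structure are ours for this example, not part of our classroom materials):

    # Illustrative sketch: how a 0-4 scale maps to the three cumulative
    # parts of the example learning goal. Wording is hypothetical.
    SCALE = {
        0: "I have not YET learned any part of the goal.",
        1: "I can demonstrate lab safety procedures.",
        2: "I can also set up an experiment.",
        3: "I can also communicate an experiment's results.",
        4: "I accomplished the entire goal and completed an enrichment activity.",
    }

    def rating_for(parts_mastered, enrichment_done=False):
        """Return the rating for a student who has mastered the first
        `parts_mastered` of the goal's three cumulative parts."""
        rating = min(parts_mastered, 3)
        if rating == 3 and enrichment_done:
            rating = 4  # a 4 also requires enrichment work (see Enrichment Ideas)
        return rating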

Learning Goals and Scales in the Classroom

LG&S are intended to help both students and teachers. Students can benefit from LG&S because they know what they are expected to learn or do, how much of the expectation they've met, and how much more they must learn (Marzano, 2007, p. 23).

LG&S can benefit teachers because as students complete formative assessments, it's easy to identify which students require intervention and which would benefit from an enrichment activity. The next four sections of this chapter discuss: how to introduce a new LG&S to students; how to use LG&S to motivate students to learn; how to use LG&S for student reflection; and when to assess students.

Introducing a New Learning Goal and Scale

We begin every unit with a formative assessment to determine every student's background knowledge and how much of the learning goal they already know. To prevent students from feeling anxious about their assessment scores, we purposely use the term “practice test” and explain that they are not graded. Throughout the unit, students frequently complete formative assessments so we can identify their learning progression and move them along the scale.

We use the practice test data to determine each student's initial rating on the scale. One lesson we quickly learned is that students should never decide their own ratings. When we facilitate professional development classes, we explain, "Students, like most of us, often don't know what they don't know." We equate this to the first year we taught astronomy. We thought we had a firm grasp of the subject until we began planning the first unit. We realized very quickly that we had a lot to learn! We didn't know how much we didn't know.

We purposefully write the unit's first practice test to be very difficult. Our goal is that nearly every student is assigned a 0 rating because it “levels the playing field,” meaning that no one student is rated higher than another. We don't want students to feel negatively when they discover their first rating is a 0 so we remind them that the content has not yet been taught and there is no expectation that they know it YET. When we are teaching secondary students, we explain that we expect them to “fail” the practice test because they should be taking a different class if they already know the information.

We test only for the first part of the learning goal, which, using our example, is that students can demonstrate lab safety. Our aim with the initial practice test is to determine a student's background knowledge regarding the first part of the learning goal. We don't test for the remaining parts because practice tests should require only a few minutes of class time. Their purpose is to determine if a student will be assigned a rating of 0 or 1. Students who answer all of the questions accurately and thoroughly are assigned a rating of 1 while students who don't meet these high expectations are assigned a rating of 0.

A variation is to assign students a half rating, which is used for students who answer most, but not all, of the questions on the practice test correctly. We rarely use this variation for the first practice test because we hope nearly all students earn the same rating. However, using half ratings in subsequent practice tests can be one way to differentiate for struggling students. For further differentiation strategies, see the Differentiation for Diverse Learners section.

Figure 17.3: Example of a Unit's First Practice Test is the practice test we use to determine students' initial ratings on the scale for our example learning goal regarding lab safety and the scientific method. Although we haven't taught the material yet, we ask questions that will not require students to regurgitate facts but instead to apply their knowledge. Our aim is always to test for an understanding of content, including background knowledge.

After students receive the results of their first practice test, they each receive a copy of Figure 17.4: Reflecting on My Learning—Blank. We provide students with the unit's learning goal and the upcoming date of the summative assessment, which they document on their Reflecting on My Learning worksheet.

Students then document the goal they set for themselves. We ask, "Do you want to earn a 3, which indicates that you accomplished the entire learning goal, or do you prefer to challenge yourself and aim for a 4 rating?" In our experience, some students don't believe they can accomplish the required work for a 4 rating, so they never try. To encourage students to push themselves outside of their comfort zone, we announce the work they must do to achieve a 4 rating. For example, using our lab safety and scientific method learning goal, we tell students that to achieve a 3 rating, they will conduct an experiment that leaves a gummy bear in water overnight. To earn a 4 rating, they will be required to design an extension of the experiment. They don't have to actually perform the experiment; they only need to design it, which includes writing a hypothesis, identifying variables, etc. In our experience, once students know what is expected of them, it seems less intimidating and they tend to push themselves further up the scale.

We hope all students attempt a rating of at least a 3 because achieving it indicates that they have accomplished the minimum learning goal. But we realize that depending on the content, some students may struggle just to obtain a 2 rating. There have been numerous times when a student achieves a 2 rating on one learning goal but then achieves a 3 on the subsequent learning goal. In our experience, background knowledge can heavily influence how much students learn. This is one reason why we place so much emphasis on activating background knowledge in Chapter 13: Strategies for Activating Prior Knowledge.

Figure 17.4 also asks students to commit to one specific action they can take to increase the likelihood that they will accomplish their goal. When learning goals and scales are first introduced, students most commonly write, “study more,” “review my notes,” or “pay attention in class.” To avoid these ambiguous statements, we provide examples of nine specific things to which students can commit. Here are the nine commitments:

  1. Using online game sites, such as Quizlet, Kahoot, and/or Quizizz, search for a lesson that pertains to the content.
  2. Watch a YouTube video about the content we are learning.
  3. Complete your homework on time.
  4. Ask one question every day, either to your teacher or to your learning partner.
  5. Make flash cards from your vocabulary sheet (index cards or in Quizlet).
  6. Review your vocabulary 5 min every night.
  7. When you are absent, create a plan for learning the material you missed.
  8. If you tend to daydream or sleep, ask to stand in the back of the classroom or to move around during an activity.
  9. Create your own idea! How do you best learn/study? What has worked for you in other classes?

Students then graph their practice test rating on the bar graph in Figure 17.4. In the first column, they graph the rating they were assigned based on their practice test results. They also title the first column with the date of the practice test.

We then collect their Reflecting on My Learning worksheets until the next formative assessment.

See Figure 17.5: Reflecting on My Learning—Completed Example for an example of how the reflection worksheet looks at the end of a unit. At this point, on the first day of the unit, they will only have the top half completed with one data point on the graph.
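
For teachers who also keep a digital copy of these graphs, the worksheet's bar graph is easy to reproduce. Here is a rough sketch using matplotlib, with hypothetical dates and ratings (the real worksheet is completed by hand):

    import matplotlib.pyplot as plt

    # Hypothetical practice test dates and scale ratings for one student.
    dates = ["9/3", "9/10", "9/17", "9/24"]
    ratings = [0, 0.5, 1, 2]

    plt.bar(dates, ratings)
    plt.ylim(0, 4)                     # the scale runs from 0 to 4
    plt.yticks([0, 1, 2, 3, 4])
    plt.xlabel("Practice test date")
    plt.ylabel("Scale rating")
    plt.title("Reflecting on My Learning")
    plt.show()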

We, as teachers, need to know which students aren't moving along the scale. These are the students who would benefit from intervention before the summative test. See the section Intervention Ideas for a list of possible interventions.

Using a Learning Goal and Scale to Motivate Learning

We introduce our first lesson of the unit. We start by telling students, “Most of you are on a zero rating, which is okay because we haven't yet taught the information to you. If you already knew all of this material, then we would wonder, ‘why are you in this class?’ You simply haven't learned this yet, which is important to remember. We are going to start teaching you now.”

To use the example of the lab safety and scientific method learning goal, we continue, “To move to a rating of a 1 on the scale, you must be able to demonstrate lab safety.” This statement provides purpose for student learning. They know what they need to do to move on the scale, which provides them with direction and motivation. Now we begin to teach the lesson.

As we teach lab safety, we continually use warm-ups and cool downs to determine if students are learning. Once we have formative assessment data indicating students are ready to move to a 1 rating, we give another practice test. In some cases, we can give the same practice test we gave on the first day of the unit because that practice test only tested for their knowledge of the first part of the learning goal. This practice test is in lieu of a warm-up or cool down.

We collect the practice tests, provide feedback, and assign a rating on the scale. If students answer all of the questions correctly, their new rating is a 1. If they answer at least half of them correctly, we assign a 0.5 rating because we want to honor students for their efforts and learning. If a student doesn't answer any of the questions correctly, they remain at a rating of 0. When we return the practice tests to students, we remind them that the practice test scores will not be entered into the gradebook.
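
The rating rules above are mechanical enough to sketch in a few lines of Python. This is an illustration of our thresholds, not a tool we use, and it assumes a student who answers some, but fewer than half, of the questions correctly stays at their current rating:

    def assign_rating(current_rating, num_correct, num_questions):
        """Sketch of how we rate a practice test that targets the next
        full rating. All correct -> next full rating; at least half
        correct -> a half rating; otherwise the rating is unchanged."""
        base = int(current_rating)  # full rating the student is moving from
        if num_correct == num_questions:
            return base + 1
        if num_correct >= num_questions / 2:
            return base + 0.5
        return current_rating

    assign_rating(0, 5, 5)    # -> 1
    assign_rating(0, 3, 5)    # -> 0.5
    assign_rating(0.5, 5, 5)  # -> 1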

Using a Learning Goal and Scale to Help Students Reflect on Their Efforts

After returning the completed practice tests, we review the answers as a class. Then we return students' Reflecting on My Learning worksheets that they began on the first day of the unit. They are instructed to graph the results of their practice test.

Do learning goals and scales motivate all students? We surveyed our students by asking, “How did you feel when you accomplished your goal?” Here are some students' answers:

DARRIUS: I felt good. I felt like I was actually learning and growing.
KHADIJA: I pushed myself more.
SAMUEL:  I was proud of the hard work I put into class.
HAYDEN: I felt more accomplished because I had accomplished my goals and it makes me more excited to learn.

Not everyone moves on the scale at the same time as their peers. Sometimes it's obvious why a student doesn't move and it's an easy fix. For example, some students may have been absent, in which case we explain they will soon take a practice test so they can move, too.

Other times, it's not obvious why a student didn't move on the scale. We pull these students aside and have a private conversation. We explain, "We were surprised when we reviewed your practice test. We thought you were ready. Did you feel the same way?" We've had students share personal stories, such as how their test anxiety gets in the way of their performance or how a challenge at home is a distraction right now. Regardless of their situation, we assure them that these practice tests are not graded so it's okay if they didn't do well the first time. They will always have a chance to test again and move on the scale.

Some students tell us that they studied and then express frustration because they don't know why they didn't do well on the practice test. We help them problem-solve by asking them how they studied, with whom they studied, and for how long they studied. We also asked our students, "How did you feel when you didn't accomplish your goal?" Here are some students' responses:

ZAHARA: I was bummed. But I felt encouraged to work harder.
AZAMI: I was really upset. But after discussing it with you, I felt better and found that I wasn't studying effectively. Now I study differently for all of my classes.
KIM: It was disappointing and I wanted to do better.
MALIK: I felt OK, I just know that I will have to push harder, and change how I am studying.
LULU: It made me develop different study strategies and showed me I needed to try harder.

When we talk with students who haven't moved on the learning scale, it's another opportunity to provide them with an example of how to use a growth mindset. We stress that although they haven't learned the information yet, we will continue to work with them so they eventually learn it. For examples of how we support students who aren't moving on the scale, see the section Intervention Ideas.

The bottom of Figure 17.4 allows for students to record their reflection from one practice test to the next. They first write down the date of the practice test. Then, if they moved up to a 1 rating, they document what actions they took that resulted in the improvement. If they didn't move to a 1 rating, they explain what they will do differently to improve the next time we take a practice test.

In our experience, students of every grade level struggle to identify a change they can make that will increase their learning so they are better prepared for the next practice test. We remind them to read what they wrote near the top of their Figure 17.4: Reflecting on My Learning worksheet where it states, "One specific thing I will commit to doing to improve my learning is…" If they haven't been exercising that commitment, then they recommit to it or make a different commitment. Because we collect their Reflecting on My Learning worksheets, we encourage them to write their commitment in their science notebooks or on their phones.

See Figure 17.5: Reflecting on My Learning—Completed Example for an example of how the reflection looks at the end of a unit. At this point students should have two data points graphed on the top half and #1 should be done on the bottom half.

When to Assess Students Who Didn't Move on the Scale

If a student doesn't move on the scale, regardless of the reason, we always provide additional opportunities for them to demonstrate their learning. Once we've intervened to provide differentiated instruction and the student feels confident they will perform well on a practice test, we offer them the following options for taking another practice test:

  1. Retake the written practice test before or after school or at lunch

    Because we review the answers to the practice test in class, we create a second version of it that students complete in their own time.

  2. Take the practice test orally before or after school or at lunch

    We've found oral practice tests can often provide support for students who are English language learners or who have learning challenges. It takes the form of a discussion because we ask students for clarification or ask them to provide an example so their answers include more evidence of their understanding and not just their memorization.

  3. Wait until the next time we take a practice test

    When we wrote the learning goal, we ensured that the first part was required background knowledge for the second part, etc. Therefore, if a student waits until the next practice test, they can demonstrate their learning of the first part by using this knowledge to answer the questions that will test for the second part.

There are times we use classroom activities to assess every student's learning. See the Other Forms of Formative Assessment to Move Students on a Scale section for examples.

Lower-Prep Version of Learning Goals and Scales

We recognize that creating LG&S and then assessing for them can be time-consuming. When we first began implementing LG&S, it was daunting. It can take several hours to write learning goals, the scales, and the practice tests for one unit. And class time must be dedicated to students taking and reviewing the practice tests, which are generally given to students once each week, requiring about 25 min.

When we first began using LG&S, we started off small. We created a one-part learning goal that was very specific and could be assessed with our current set of lessons, which avoided the creation of practice tests. For example, the learning goal, “I can demonstrate lab safety procedures, set up an experiment, and communicate an experiment's results,” was shortened to “I can follow the steps of the scientific method.” Students then received a checklist that listed the scientific method steps:

  1. Ask a question.
  2. Perform research.
  3. Write a hypothesis.
  4. Set up and perform an experiment.
  5. Analyze results.
  6. Write a conclusion.
  7. Publish results.

As students completed classroom activities, such as asking an experimental question, they highlighted the associated step to indicate they had accomplished it. We didn't have to create a unique assessment to determine if they could ask an experimental question because we were already doing so in our lesson.
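
For teachers who track this checklist digitally rather than on paper, the bookkeeping is trivial. A minimal sketch (hypothetical; not part of our materials):

    # Hypothetical digital version of the scientific method checklist.
    STEPS = [
        "Ask a question", "Perform research", "Write a hypothesis",
        "Set up and perform an experiment", "Analyze results",
        "Write a conclusion", "Publish results",
    ]
    completed = {step: False for step in STEPS}

    # After a class activity, mark the matching step as accomplished,
    # just as students highlight it on their paper checklist.
    completed["Ask a question"] = True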

For resources that support the teaching of the scientific method, see Chapter 3: Strategies for Teaching the Scientific Method and Its Components.

Other Forms of Formative Assessment to Move Students on a Scale

Any ungraded activity can be used as a formative assessment so teachers can identify students who are struggling. See Chapter 15: Strategies for the Beginning and Ending of Class for additional resources pertaining to formative assessments.

In addition to a practice test, we've also used strategies and tools discussed in other chapters such as lab reports, individual and class discussions, warm-ups, cool downs, and classroom activities to assess student learning. For example, when students are learning about lab safety, they complete an activity entitled, Identifying Broken Lab Safety Rules, which is shared in Chapter 1: Strategies for Teaching Lab Safety. The activity is Figure 1.2: Identifying Broken Lab Safety Rules and its answer key is available in Figure 1.3: Identifying Broken Lab Safety Rules—Answer Key.

These types of formative assessments can easily be used outside of learning goals and scales. They can also be used to demonstrate student competency with some of those goals.

It's Time for the Summative Assessment

When the majority of the class has reached the rating of a 3, we know it's time to give the summative test. Of course, if you are not using the scales, you will have your own measure for determining when it's time for this kind of end-of-unit assessment, and the Summative Assessment section later in this chapter offers ideas for developing effective ones.

By now we've assessed student learning multiple times, provided intervention, and provided enrichment (see the Intervention Ideas and Enrichment Ideas sections for this discussion). In our experience, the rating that students have at this point is highly indicative of how they will do on the summative test. We've found that students at a 3 rating generally earn an A or B on the test, students with a 2 rating earn a C, students at a 1 rating score a D, and students with a 0 rating fail the test.
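
Our rule of thumb and the grade correspondence we've observed can be summarized in a short sketch (the mapping reflects our anecdotal experience, not a guarantee):

    # Anecdotal correspondence between final scale rating and summative grade.
    EXPECTED_GRADE = {3: "A or B", 2: "C", 1: "D", 0: "F"}

    def ready_for_summative(class_ratings):
        """True once the majority of the class has reached a rating of 3."""
        at_goal = sum(1 for r in class_ratings if r >= 3)
        return at_goal > len(class_ratings) / 2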

After students take the summative test, we return their Reflecting on My Learning worksheets so they can add their test score to the graph and document how they studied for the test. The second page of the worksheet asks students if they met their goal and then instructs them to reflect on their efforts. They can use the answers to these questions to plan their learning for the next unit. For example, if a student achieved their goal, then they answer the question that asks, "If you reached your goal, what did you do to achieve it? Be very specific." Students usually document their study strategy. Because they know this study strategy works for them, they can transfer it to the next unit's Reflecting on My Learning worksheet where it states, "One specific thing I will commit to doing to learn and accomplish my goal is…"

Intervention Ideas

Interventions are “specific, formalized steps to address a particular need” that a student has (Lee, n.d.). For example, if a student is struggling to identify dependent and independent variables, an intervention would be providing the student a learning partner who can act as a peer tutor. The pair work together, attempting to identify the variables in example experiments.

Here is a list of the most commonly used interventions in our science classes:

  1. Pairing struggling students with strong peers who will guide them through the work and not do the work for them.
  2. Working with students in small groups so the teacher-to-student ratio is smaller.
  3. Checking in with struggling students more often.
  4. Asking struggling students to explain why their answers are correct.
  5. Providing students with more examples.
  6. Modeling how to study.
  7. Using the strategy of corrective instruction to reteach content that students didn't learn the first time. Corrective instruction involves teaching the same content in a different way because the first strategy that was used was ineffective (PowerSchool, 2016). An example of corrective instruction occurred after analyzing a formative assessment. We realized a handful of our students thought that batteries have electricity inside of them. We developed a new lesson to help them better understand batteries. We began by showing them a video of a potato being used as a battery and then asked them to touch a potato. Students observed that they weren't shocked when they held the potato, cut into the potato, or bit the potato. This led to a discussion about what electricity is and how batteries work. After repeating the formative assessment, the data revealed that students had a better understanding of how batteries produce electrical currents. Because we presented the same information in a different way than in our original lesson, students increased their understanding of the content.
  8. Providing a visual representation of the content, such as a model. One of the science and engineering practices in the NGSS requires that students develop and use models because they can increase student comprehension, especially for intangible concepts, such as valence electrons. To help students understand where valence electrons are located in relation to the nucleus, they can create, analyze, and interpret Bohr diagrams.

See the Technology Connections for more resources regarding interventions.

Enrichment Ideas

While some students are receiving interventions, what do the remaining students do?

Our goal, although not always achieved, is that every student is working diligently every minute of every class period. So while some students are receiving the interventions they need, the rest of the class participates in an enrichment activity.

This is the purpose of the 4 rating on a scale. To earn a 4 rating, students must complete the work to earn a 3 on the scale, earn an A on the summative assessment, and complete an additional activity. Students don't receive a grade for the activity so we do not provide rubrics or scoring guides.

At this point in the professional development class we teach, teachers often ask why students would choose to complete extra work without receiving a grade. In our experience, some students are driven by challenges, others find the activity engaging, and others do it because we have instructed them to do so. We also entice students by explaining that we will contact their parents when their enrichment activity is complete.

See the Technology Connections section for ideas about enrichment activities.

What Do Students Say About Learning Goals and Scales?

At the end of every school year, we survey students to obtain their feedback regarding the use of learning goals and scales. One of the survey's questions asks, "How did learning goals and scales help you in class?" Here are some of the students' answers:

JUAN: Having a learning goal and scales to follow in class helped me find the best way to study, get homework done, and just enjoy the class. I found that when I did my homework, and paid attention in class, even if it was just listening for vocab words, I retained more knowledge to achieve the grade and understanding I needed for the next test and the final test.
ELIZAVETA: They helped me understand what I needed to focus on during that point in time and how much I needed to improve to reach my goal!
MALIA: Learning goals helped me by telling me what I needed to understand in the lesson. Scales showed me what I already understood and what I needed to improve on.

We also asked, “How did you use learning goals and scales?” Here are two students' answers:

ANDREW: The learning goals and the scales did help me a lot understanding where I was at with the learning goal of each unit we were in and knowing what I did know and knowing what I didn't understand, so when I had my own time that I was able to go in depth into what I didn't understand and practice it and study even more so for the next time I am able to understand the learning goal in class.
VERONICA: Having the scales gave me a chance to remember what I needed to and know what I'm going to be learning. Learning goals helped me too because they let me know how my day/class will go and what I needed to succeed that day.

Not all of our students answered positively, but, depending on the class, 77–86% of students reported that learning goals and scales improved their learning and made them better students. Although our data is anecdotal, research supports our findings: goal setting is a powerful motivational tool, and when students' goals are to learn (not to "get an A"), motivation is heightened even more (Latham & Seijts, 2006).

SUMMATIVE ASSESSMENTS

Unlike formative assessments, summative assessments occur at the end of a learning unit or period. Their purpose is to inform students, teachers, administrators, parents, and state officials of the amount of learning that was accomplished.

In this section, we will discuss how to create tests that assess student understanding (which we refer to as “thinking tests”) instead of just assessing memorization. In addition, we will provide examples of the various formats that can be used when creating summative assessments, such as performance-based assessments.

Thinking Tests

We were inspired to coin the term "thinking test" after taking a course that teaches how to keep the end in mind when planning units. The course focused on the order in which teachers plan units. Instead of beginning with the daily lessons and then developing the summative assessment, the course teaches the backward design model, which begins by writing learning goals, then creating the unit test, and, finally, choosing the daily lessons that will best support the learning goals (University of Arizona, Office of Instruction & Assessment, n.d.). The idea of backward design is not a new one. Ralph Tyler, who participated in an ambitious educational research project called the Eight-Year Study, asked in 1950, "How can we assume that purposes are being obtained?" (Tyler, 1950, p. 1). He further explained that academic achievement is synonymous with the attainment of knowledge, so the assessment teachers use should be the driving force of their daily instruction (Tyler, 1950).

Another version of backward design is called outcome-based education (OBE). One of the best-known proponents of OBE is Dr. William Spady. Spady describes two approaches to OBE: traditional/transitional and transformational. The traditional/transitional approach focuses on teaching students content, whereas the transformational approach emphasizes the skills students need to be successful post-high school, such as our Skills for Intentional Scholars (Nicholson, 2011).

Thinking tests require students to use both of Spady's OBE approaches. They are written in a way that exposes students to unique data, phenomena, or events, which requires students to use their newly acquired content knowledge to solve a problem, communicate the meaning of data, and interpret a situation.

Developing Thinking Tests: Types of Questions

Thinking tests are traditional paper-and-pencil tests composed of open-ended questions. We prefer open-ended (short-answer) questions over closed-ended questions (multiple-choice, true/false, matching) because we can grant students partial credit. Closed-ended questions are either 0% or 100% correct. They can't indicate if a student has learned part of the learning goal, which means partial credit can't be earned. Open-ended questions, however, provide the opportunity for partial credit: when students are required to explain their answer, we can give them credit for the portion of the concept they did learn.

In addition, open-ended test questions allow students to explain their thinking, which can be another way to earn credit. When we develop thinking tests, we create an answer key, but sometimes we discover a question has a possible answer that we didn't include in the answer key. Open-ended questions allow students to explain their thinking and justify their response, which may reveal an alternate correct answer.

We work with secondary students who have learning challenges and it's not uncommon that they feel defeated when it's testing time. They may have difficulty motivating themselves to work hard, study, and ask questions. We believe that one reason for this problem could be that these students have received failing grades in many of their previous science classes and they may have given up on themselves. By using open-ended questions, they can earn partial credit for the learning they did accomplish. This drives up their test scores, which, in our experience, increases their self-esteem and they may begin to work harder, participate more, and ask for help more often.

Education and sociology professors at Stanford University, including Sean F. Reardon, Professor of Poverty and Inequality in Education, analyzed test data from 8 million fourth and eighth grade students to determine whether there was a difference in how they performed on open-ended and closed-ended questions. They concluded that 25% of the "achievement gap" (which is probably more correctly called the "opportunity gap"; Ferlazzo, 2011) between males and females on state assessments is explained by question format. Females tend to answer open-ended questions with more accuracy than closed-ended questions (Reardon, Kalogrides, Fahle, & Zárate, 2018). Using this data, it has been suggested that math and science teachers should place more emphasis on open-ended questions (Berwick, 2019).

We do acknowledge that there are drawbacks to using open-ended questions on a summative test. They take longer to grade, and a teacher can't ask as many questions as they can when using closed-ended questions (University of Washington, Center for Teaching and Learning, 2019). However, we believe these are teacher-centric disadvantages, while the advantages of receiving partial credit for what students have learned and being able to explain an answer are student-centric.

Developing Thinking Tests: Testing for Understanding

Before a student can perform well on a thinking test, they must know more than just content. They must also be able to use their Skills for Intentional Scholars. Our teaching methods must match our assessment methods. If we want students to understand material, then we must teach them how to understand it and then how to demonstrate their understanding.

Wiggins and McTighe describe six facets of understanding, which are "the kinds of performance evidence we need to successfully distinguish factual knowledge from an understanding of the facts" (Wiggins & McTighe, 2005, p. 161). We use the six facets of understanding to create thinking tests. Not all six facets are included in every test; however, Wiggins and McTighe state that the first facet, explanation, must be included in every assessment because we need to know what the students think their answers mean and the justifications they provide for them (2005, p. 167).

Here are the six facets and a description, in addition to an example of each:

  1. Explanation: explain an answer, make connections, or develop a theory using data analysis

    Question: One of the largest diamonds known to man is a white dwarf star known as Lucy. It's located in the Centaurus constellation, about 50 light years away from Earth, which means to see it you need to use a telescope. The diamond weighs 2,270,000,000,000,000,000,000,000,000 tons! Some people say geologists should study Lucy and others say astronomers should study her. Explain why both people are correct.

    Answer: Lucy is a diamond, which is a mineral and geologists study naturally forming minerals. Astronomers study space and Lucy is located not here on Earth but 50 light years away in space.

  2. Interpretation: interpret stories, art pieces, musical works, situations, claims, or data; can also be an interpretation of ideas and feelings from one medium to another

    Question: In the movie The Wizard of Oz, the witch is splashed with water and begins to melt to her death. While she shrinks, she screams, "I'm melting!" in a very irritating, high-pitched voice. If the witch were educated about moon phases, what else could she have said instead of "I'm melting"?

    Answer: “I'm waning!”

  3. Application: use of knowledge in a different skill or situation; can be using knowledge in a different content area

    Question: In June and July of 2013, a fire burned 8,400 acres of land near Prescott, Arizona, in a town called Yarnell. There was a huge loss of habitat for both humans and animals, which will take a very long time to recover without help. Create a habitat restoration project for the Yarnell area. Hint: Yarnell is in the Chaparral biome.

    Answer: I would begin by planting trees and bushes to begin succession. Once these started to grow and bud, the first-level consumers would naturally be drawn to return to the forest. And then the second-level and finally the third-level consumers would make their way back home. To expedite the process, I would manually reintroduce some native species of plants and small animals.

  4. Perspective: explain a different point of view or the other side of a debated topic, make connections to explain the “big picture,” identify underlying assumptions and their effects, or critique with explanation and evidence

    Question: In 2010, a ship collided with the Great Barrier Reef in Australia. In an attempt to clean up the resulting oil spill, chemical dispersants that break down oil were dumped into the ocean. These dispersants have a hydrophilic end and a hydrophobic tail. Some scientists argue that it's best to let the oil disperse naturally. Our instinct is to clean up the oil spill. Explain the point of view of the scientists who believe the oil should disperse naturally. According to them, what might be one drawback of adding hydrophilic and hydrophobic chemicals to the ocean and why does this drawback outweigh the benefits?

    Answer: Because the dispersants are both hydrophilic and hydrophobic, they have opposing forces so they cause the oil to break up into smaller droplets. These smaller oil drops can easily be spread farther and deeper than the larger, heavier oil. This oil spreading affects the wildlife in more distant locations and on the floor of the ocean that would not otherwise be affected. It's best to contain the oil on the surface so only the surface creatures are affected.

  5. Empathy: appreciate another person's thoughts and feelings, especially when they are in contrast to our own

    Question: You are a conservation biologist who has been asked to speak at the annual conference of Traditional Medicine. You recently learned that rhinoceroses are being hunted and killed because the people who practice traditional medicine believe that rhinoceroses' horns cure cancer. Write a short speech (one paragraph) that would help them to understand why they must not collect horns. Your speech must be respectful of their belief system and culture; after all, if you are rude, you will alienate your audience and they won't want to listen to your very important message. HINT: It helps to put yourself in their shoes as you write your speech.

    Answer: “Thank you for having me today. I'm excited to tour your beautiful city and share in your delicious food. As a conservation biologist, it is my job to help species who are about to go extinct. I've chosen to focus on the rhinoceros. I know you too must be worried that it is near extinction; after all, you harvest the horns for very important medicine. I'm here today to discuss a solution to our common problem. How can we slow down the killing of the rhinoceros so that you have horns in the future and the world has a hefty rhinoceros population?”

  6. Self-Knowledge: self-assess learning

    Question: While your teacher was passing out the test, she also returned the practice test you took at the beginning of the unit. Choose two questions that you got wrong and correct the answers. Explain why your new answers are the correct answers.

    Answer: On my practice test, I said that 23,459.1 mm = 0.234591 km but I was wrong. The correct answer is 0.0234591 km. When I originally converted the millimeters, I forgot about the base level of the metric system and only moved the decimal point five times. I should have moved the decimal point six times.

When we develop a thinking test, we look at the learning goal and decide what students should be able to do if they have complete understanding of the goal. For example, if students should be able to analyze and interpret data on a graph, then we write a test question that presents students with a graph and asks them to interpret the data.

Thinking tests have fewer than 10 questions because our class periods are only one hour long and we want students to complete the test in that time. Each question is usually worth a minimum of two points: one point for the correct answer and one point for the explanation, justification, or evidence. For an example of a thinking test we wrote for a toxicology unit, see Figure 17.6: Toxicology Unit Thinking Test.

Performance-Based Summative Assessments

At the end of each of our units, students complete two types of assessments: a thinking test and a performance-based test. Performance-based assessments can be lab reports, research reports, websites, skits, posters, or slideshows, as shown in previous chapters of the book. Performance-based assessments can demonstrate what students have learned and how they can apply it, which is an example of Wiggins and McTighe's third facet of understanding: application (2005, p. 165). Another form of performance-based assessment is the model, which we discuss further in this section.

Developing Performance-Based Assessments

Wiggins and McTighe's six facets of understanding can also be used to develop performance-based assessments. As with thinking tests, not all six facets are included in every assessment; however, Wiggins and McTighe state that the first facet, explanation, must be included in every performance-based assessment because we need to know how students interpret their knowledge and how they justify their answers (2005, p. 167).

Here are the six facets and a description, in addition to an example of a performance-based assessment for each:

  1. Explanation: explain an answer, make connections, or develop a theory using data analysis

    Given an unknown chemical, students perform multiple tests to determine its identity and then use the test results as evidence of their conclusion.

  2. Interpretation: interpret stories, art pieces, musical works, situations, claims, or data; can also be an interpretation of ideas and feelings from one medium to another

    Students collect the daily relative humidity using a sling psychrometer for two weeks. They graph their data and interpret its meaning. Then they compare their data with the relative humidity data of a different biome in a different country. They explain why the two biomes have similar or different relative humidities.

  3. Application: use of knowledge in a different skill or situation; can be using knowledge in a different content area

    Students design a complex circuit that includes a minimum of five different electrical devices and runs on a 12 or 15 V DC power supply. The design must operate at 4, 8, and 12 or 15 V and must include resistors (Performance Assessment Resource Bank, n.d.). Students must explain how they designed the circuit and the role of the resistors.

  4. Perspective: explain a different point of view or the other side of a debated topic, make connections to explain the “big picture,” identify underlying assumptions and their effects, or critique with explanation and evidence

    Students research fracking, focusing on its benefits and drawbacks. They write two letters. One letter is from a citizen of a town where fracking is occurring. The letter explains the citizen's point of view. Students then write a response to the citizen as a representative of the fracking company and explain the company's stance.

  5. Empathy: appreciate another person's thoughts and feelings, especially when they are in contrast to our own

    Students answer situational questions that require them to role-play. For example, in our biology classes, students take on the role of a physician to address specific patient case studies. Here is an example of one question: Tycho Brahe (1546–1601) was a wealthy Danish astronomer. In October 1601, he attended a banquet in Prague. During the party, he had the urge to urinate but thought it would be poor manners to excuse himself, so he held his urine in until after the party, at which time he was surprised to find he couldn't urinate. Eleven days later he died. The theory is that his cause of death was uremia, a buildup of urinary waste products in the blood. Write an explanation of why some people choose to ignore their body's messages. If you were a doctor, how would you motivate a patient who was ignoring a homeostatic message from his body?

  6. Self-Knowledge: self-assess learning

    Students choose any lab report they wrote during the current unit. They edit the report by correcting errors and improve it by integrating newly acquired vocabulary from the unit. They then design a new experiment that would further their learning on the subject.
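
To illustrate the kind of reasoning the circuit task in facet 3 calls for, here is a hypothetical worked calculation using Ohm's law. The 3 V, 20 mA device and the resulting resistor values are our invented illustration, not values from the Performance Assessment Resource Bank task. A device rated for 3 V at 20 mA connected to the 12 V supply needs a series resistor to drop the extra 9 V:

    R = (V_supply − V_device) / I = (12 V − 3 V) / 0.020 A = 450 Ω

At the 4 V setting, the same reasoning gives R = (4 V − 3 V) / 0.020 A = 50 Ω, so a student's explanation should address how the design keeps each device within its rating as the supply voltage changes.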

Offering Choice in Performance-Based Assessments

We also offer choices for performance-based assessments. Assessments that allow for student choice differentiate for students with different interests and strengths. Some students may prefer to write a response, while others prefer to demonstrate their learning through a piece of art or a video. We have found that students are often more engaged and perform better when they are given the opportunity to decide how to demonstrate their learning.

When we introduce a summative assessment that includes student choice, we give each student a copy of Figure 17.8: Student-Choice Performance-Based Assessment, which is an example from a seventh-grade ecology unit. Students are instructed to choose any combination of the listed activities, but their chosen activities must total at least 50 points. We allow students to complete more than 50 points' worth of activities, but they do not receive extra credit for doing so.

The more points assigned to an activity, the more work it requires. Some students choose activities based on the amount of work they perceive each will require. Other students choose activities they find interesting; for example, students who are artistic and enjoy drawing tend to choose A2, which requires them to create a comic strip, and B3, which asks them to draw a scene of biotic and abiotic factors in a given environment.
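
Because the selection rule is a simple sum-and-threshold check, it is easy to verify a student's plan at a glance. Here is a minimal sketch in Python, using hypothetical point values; the activity codes and points below are our invention for illustration, and the actual values appear in Figure 17.8:

    # Check whether a student's chosen activities meet the 50-point minimum.
    MINIMUM_POINTS = 50

    # Hypothetical point values for illustration; the real ones are in Figure 17.8.
    activity_points = {"A2": 20, "B3": 15, "C1": 15, "C4": 30}

    def meets_minimum(chosen_codes):
        """Return the point total and whether it reaches the minimum."""
        total = sum(activity_points[code] for code in chosen_codes)
        return total, total >= MINIMUM_POINTS

    print(meets_minimum(["A2", "B3", "C1"]))  # (50, True): exactly meets the minimum
    print(meets_minimum(["A2", "B3"]))        # (35, False): another activity is needed

The same check works for any point values, which makes it easy to swap in the actual values from Figure 17.8.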

We provide students with three 1-hour periods to complete their activities. We walk around as they work and provide immediate feedback so students know when they are doing well and when they can improve. They are permitted to use any resources they need, such as class notes, online videos, their learning partner, and the Internet.

Models as Performance-Based Assessments

One of the Science and Engineering Practices required by the Next Generation Science Standards is modeling, which includes computer simulations, diagrams, analogies, physical replicas, and mathematical representations (National Science Teaching Association, 2014). For a model to serve as a summative assessment, it should require students to explain their design and justify their ideas. For example, an analogy we use in biology class is called “Cell Analogy.” Students choose an object that interests them and has multiple components, such as a concert hall, movie theater, skateboard, or basketball game. Students compare the functions of a cell's organelles to the functions of components in their chosen object. One example is that the solar panels on a house (a student's chosen object) have the same function as the chloroplasts in a plant cell.

The assessment students turn in for a grade is a model of their chosen object (a house, for example) with the analogous components labeled and described, indicating how each is similar to a specific organelle. Models must include the 11 organelles shared by plant and animal cells plus the 2 plant-specific structures (the cell wall and chloroplasts). Student model examples are available in Figure 17.9: Cell City Models—Student Examples, which includes models entitled “A Cell Is Like a Pizza” and “A Cell Is Like a Bookstore.” The checklist we use for this assessment is Figure 17.10: Checklist for Cell City Models.

See the Technology Connections section for additional performance-based assessment ideas.

DIFFERENTIATION FOR DIVERSE LEARNERS

When students first fill out Figure 17.4: Reflecting on My Learning, we've found that some benefit from receiving the handout with the learning goal already completed. Some students find it difficult to transfer information from the board to their paper, so we complete as much of the handout for them as we can.

When using learning goals and scales, we sometimes choose to move students half a point on the scale instead of a full point. We've found that students who remain on the same rating can feel defeated and lose their motivation to work hard, despite our best efforts. We find one reason, even a minor one, to move them that half point. This change is often enough to “jump start” their efforts again, especially if we also celebrate by contacting their parents. We explain to the parent that their child found a specific concept especially challenging but never gave up and persevered! Students (and parents) appreciate the personal connection and love that the call from the teacher was positive, not negative.

English language learners and other students with reading comprehension difficulties may struggle to read and understand the questions on thinking tests, though for different reasons. Because these test questions include many scenarios and stories, students who struggle to read or who are learning the language require substantial support. To help them, we remove unnecessary information from the test questions and add hints. We also offer to read test questions aloud.

In addition, we tell students that they can ask us for the definition of any word on the test that is not bolded and underlined; the bolded and underlined words are the vocabulary words we taught during the unit and that are part of the assessment. We instruct students to raise their hands, and when we come over, they only have to point to the word they don't know. We kneel near them and whisper the definition. We also provide an example and, if necessary, draw a picture. They are welcome to ask us questions and request additional examples and pictures.

The goal is to assess students' understanding of the content, not their reading skills. Some students choose to draw their answers instead of writing in paragraph form, which we also accept. See Figure 17.11: Toxicology Unit Thinking Test Modified for an example of how we modified the test in Figure 17.6. The modified test uses the same answer key, which is Figure 17.7.

Student Handouts and Examples

  • Figure 17.1: Final Day Cool Down Activity
  • Figure 17.2: Example Scale
  • Figure 17.3: Example of a Unit's First Practice Test (Student Handout)
  • Figure 17.4: Reflecting on My Learning—Blank (Student Handout)
  • Figure 17.5: Reflecting on My Learning—Completed Example
  • Figure 17.6: Toxicology Unit Thinking Test (Student Handout)
  • Figure 17.7: Toxicology Unit Thinking Test—Answer Key
  • Figure 17.8: Student-Choice Performance-Based Assessment (Student Handout)
  • Figure 17.9: Cell City Models—Student Examples
  • Figure 17.10: Checklist for Cell City Models (Student Handout)
  • Figure 17.11: Toxicology Unit Thinking Test Modified (Student Handout)

What Could Go Wrong?

The first time students take a thinking test, it is typical for teachers to observe an “implementation dip,” a concept studied by change management expert Michael Fullan. He defines an implementation dip as the phenomenon that occurs “as one encounters an innovation that requires new skills and new understandings” (Burnside, 2018).

Mandi documented an implementation dip when she tested her students on their vocabulary skills. She “taught” students vocabulary words by requiring them to memorize their definitions. Then she tested their memorization, resulting in an average test score of 88%. But she realized they hadn't truly learned the words because they couldn't correctly use them in their writing. She changed her teaching style by incorporating the three Skills for Intentional Scholars. She taught them another set of vocabulary words and tested them on their understanding. Her students' average test score was 78%, which is 10 percentage points lower than when she required them only to memorize the words' definitions. This drop is the implementation dip.

For the next set of vocabulary words, Mandi continued to use lesson plans that required students to use their Skills for Intentional Scholars. She again tested them at the end of the unit on their understanding of the words. Their average test score was 90%, which is 2 percentage points higher than when she tested their memorization. More importantly, students were correctly using their new vocabulary words in their writing.

After returning students' first thinking tests, we explain to them what the implementation dip is. We assure them that as they practice their thinking, communicating, and problem-solving skills, their test scores will increase. We remind them to have a growth mindset: it's okay that they haven't honed these skills yet, and with more practice we know they will excel.

Another difficulty that students encounter when they first prepare for thinking tests is that they don't know how to study. If they have only been exposed to tests that require memorization, then they study by memorizing. But memorizing doesn't help students on thinking tests; they need to study differently. Prior to the first test, we offer the following study suggestions for students:

  • Have someone who is not taking this class ask you “why,” “how,” and “what if” questions about your vocabulary terms. For example, “What if the Earth were tilted 90° on its axis?” and “How do molecules break in a covalent bond?” The less the person knows about the content, the better their questions will be and the more you will have to explain to them. This is great practice for a thinking test!
  • Write a story that uses all of the vocabulary terms in a meaningful way. If you can use the terms correctly in your writing, then you understand their definitions. If you write your story two days before the test, we will look at it and give you feedback.
  • Look over all of your classwork during the unit. Make a list of the terms that caused you confusion on those assignments. Draw pictures representing the terms.
  • Watch online videos about the unit. Stop the video every few minutes and summarize the content of the video in your own words.

Technology Connections

We use the following resources for intervention ideas:

We use these resources for enrichment activity ideas:

We use the following resources for performance-based assessment ideas:

Attributions

Thank you to Jason Prichard, the original author of Figure 17.8: Student-Choice Performance-Based Assessment, which provides an example of an ecology unit.

Thank you to Athena Loya and Jadin Hughes for allowing us to include their bookstore cell city model. And thank you to Lucas Peterson and Treven Lucas for allowing us to include their pizza cell city model.

Figures

Figure 17.1 Final Day Cool Down Activity

Figure 17.2 Example Scale

Figure 17.3 Example of a Unit's First Practice Test (Student Handout)

Figure 17.4 Reflecting on My Learning—Blank (Student Handout)

Figure 17.5 Reflecting on My Learning—Completed Example

Figure 17.6 Toxicology Unit Thinking Test (Student Handout)

Figure 17.7 Toxicology Unit Thinking Test—Answer Key

Figure 17.8 Student-Choice Performance-Based Assessment (Student Handout)

Figure 17.9 Cell City Models—Student Examples

Figure 17.10 Checklist for Cell City Models (Student Handout)

Figure 17.11 Toxicology Unit Thinking Test Modified (Student Handout)