Viewing Statistics about Online Tests

The Coursework feature allows you to review detailed statistics about graded online assignments that three or more students have completed. You do this using Test Analysis.

Note

You can also view statistics on basic and uploaded assignments. For help with this, see Viewing Statistics.

About Test Analysis

Test Analysis provides data on all graded online assignments after they are completed by three or more students. This feature is useful if you want to assess the effectiveness of an assignment and each of its questions, see which parts of the curriculum students struggled with, or answer other questions along these lines.

Test Analysis provides:

  • A snapshot of students’ performance on the assignment.

  • A summary of the relative difficulty of the assignment and each of its questions.

  • An assessment of whether each question adequately distinguished between students who understood the material and those who did not.

  • Analysis—using parameters that you set—about whether a question was too hard or too easy.

  • Analysis of multiple-choice questions, including a look at the efficacy of distractors (the incorrect options within multiple-choice answer sets).

Note

Test analysis data includes only those students who are in the audience for the assignment.

Key Terms

This section defines key terms that you’ll need to know in order to configure Test Analysis and understand the data the system generates.

Performance groups are categories that classify students as high, low, or midrange performers. These groups are intended to represent the students who understood the material, those who didn’t, and those in between.

The size of each performance group is a percentage of the total class size, and it is configurable. So, for example, if the high-performers group is defined as 10 percent, and 100 students complete the assignment, the high-performers group consists of the 10 students who did the best.

You manually define the size of the high- and low-performers groups using the Performance Groups tab. If you configure these two groups so that together they contain less than 100 percent of the entire class, the system automatically creates a middle-performers group and determines its size. These size definitions apply throughout the course section rather than on a test-by-test basis.

The default sizes of the performance groups are:

  • High—25%

  • Middle—50%

  • Low—25%

Note that it is possible for students who got the same score to be organized into different performance groups. For example, suppose that your performance groups have the default sizes of 25%, 50% and 25%. If 19 of 20 students who took the test got the same score, there would be students in the high-, middle-, and low-performers groups who have the same score.
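To make the grouping rules concrete, here is a minimal Python sketch (not the product’s own code) that ranks students by score and splits them into high-, middle-, and low-performers groups using the default 25/50/25 sizes. The function name and the rounding rule are assumptions for illustration only.

    # Minimal sketch: rank students by score and assign performance groups
    # by position in the ranking. Group sizes are percentages of the class;
    # the middle group absorbs whatever the high and low groups do not cover.
    def assign_performance_groups(scores, high_pct=25, low_pct=25):
        """Return a dict mapping each student to 'high', 'middle', or 'low'."""
        ranked = sorted(scores, key=scores.get, reverse=True)  # best score first
        n = len(ranked)
        high_n = round(n * high_pct / 100)
        low_n = round(n * low_pct / 100)

        groups = {}
        for i, student in enumerate(ranked):
            if i < high_n:
                groups[student] = "high"
            elif i >= n - low_n:
                groups[student] = "low"
            else:
                groups[student] = "middle"
        return groups

    # 20 students, 19 of whom earned the same score: students with the
    # same score land in different groups, as described above.
    scores = {f"s{i}": 80 for i in range(19)}
    scores["s19"] = 95
    print(assign_performance_groups(scores))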

The difficulty rating is a number between 0 and 1 that reflects the proportion of students who answered the question incorrectly. The higher the difficulty rating, the harder the question was. You can configure the system to display a warning if any question is too easy (using the Low Difficulty Warning field) or too difficult (using the High Difficulty Warning field).
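As a worked illustration of the definition above, the following Python sketch computes a difficulty rating and checks it against warning thresholds. The threshold values and function names are hypothetical stand-ins for the Low Difficulty Warning and High Difficulty Warning fields you configure.

    # Illustrative only: the difficulty rating is the fraction of students
    # who answered the question incorrectly (0 = everyone correct,
    # 1 = everyone incorrect).
    def difficulty_rating(num_incorrect, num_answered):
        return num_incorrect / num_answered

    # Hypothetical thresholds standing in for the Low Difficulty Warning and
    # High Difficulty Warning fields; actual values are whatever you set.
    def difficulty_warning(rating, low_warning=0.2, high_warning=0.8):
        if rating <= low_warning:
            return "too easy"
        if rating >= high_warning:
            return "too difficult"
        return None

    rating = difficulty_rating(num_incorrect=18, num_answered=20)
    print(rating)                      # 0.9
    print(difficulty_warning(rating))  # too difficult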

The discrimination index is a number between -1 and 1 that tells you how effective a question was at distinguishing between high and low performers. The closer the number is to 1, the better the question was at making this distinction.

Technically, the discrimination index is the difference between the percentage of high performers who got a question right and the percentage of low performers who got it right. A negative discrimination index is problematic because it means that more low performers than high performers answered correctly.
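The same definition, expressed as a small Python sketch (the function name and sample numbers are illustrative only):

    # Illustrative only: discrimination index = (fraction of high performers
    # who answered correctly) - (fraction of low performers who answered
    # correctly). Values range from -1 to 1; higher is better.
    def discrimination_index(high_correct, high_total, low_correct, low_total):
        return high_correct / high_total - low_correct / low_total

    # 9 of 10 high performers answered correctly but only 3 of 10 low
    # performers did, so the question discriminates well.
    print(discrimination_index(9, 10, 3, 10))   # about 0.6

    # More low performers than high performers answered correctly, so the
    # negative index flags a problematic question.
    print(discrimination_index(4, 10, 7, 10))   # about -0.3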

Distractors are the incorrect options offered as part of a multiple-choice question. The percentage of students who choose a particular distractor is its distractor performance. Possible values are between 0 and 99.

Ideally, you want a question’s distractors to be chosen with equal, or close-to-equal, frequency. For this reason, you can configure the system to warn you if there is a large disparity among the distractor performances for the various incorrect options.

For example, suppose you set the Poor Distractor Performance Warning to 20 percent. In this case, if 10 percent of students chose the first distractor, and 50 percent chose another distractor, the system would display a warning.
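A minimal Python sketch of this kind of check, assuming the warning is based on the spread between the most- and least-chosen distractors (the exact rule the system applies may differ):

    # Illustrative check only: warn when the spread between the most- and
    # least-chosen distractors exceeds the configured threshold.
    def poor_distractor_warning(distractor_performances, warning_threshold=20):
        """distractor_performances: percentage of students who chose each
        incorrect option."""
        spread = max(distractor_performances) - min(distractor_performances)
        return spread > warning_threshold

    # 10 percent chose one distractor and 50 percent chose another: the
    # 40-point spread exceeds the 20 percent threshold, so warn.
    print(poor_distractor_warning([10, 50, 15]))  # True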