
DISCLAIMER: The data in this section are fictitious and do not, in any way, represent any of the programs at Gallaudet University. They are intended only as examples.

Types of Scoring Criteria (Rubrics)

A rubric is a scoring guide used to assess performance against a set of criteria. At a minimum, it is a list of the components you are looking for when you evaluate an assignment. At its most advanced, it is a tool that divides an assignment into its parts and provides explicit expectations of acceptable and unacceptable levels of performance for each component. 

Types of Rubrics

1 – Checklists, the least complex form of scoring system, are simple lists indicating the presence, NOT the quality, of the elements. For that reason, checklists are NOT frequently used in higher education for program-level assessment, but faculty may find them useful for scoring and giving feedback on minor assignments or on drafts and practice work. 

Example 1: Critical Thinking Checklist 

The student…

__ Accurately interprets evidence, statements, graphics, questions, etc.  

__ Identifies the salient arguments (reasons and claims)  

__ Offers analyses and evaluations of major alternative points of view  

__ Draws warranted, judicious, non-fallacious conclusions  

__ Justifies key results and procedures, explains assumptions and reasons  

__ Fair-mindedly follows where evidence and reasons lead 

Example 2: Presentation Checklist 

The student… 

__ engaged audience  

__ used an academic or consultative American Sign Language (ASL) register  

__ used adequate ASL syntactic and semantic features  

__ cited references adequately in ASL  

__ stayed within allotted time  

__ managed PowerPoint presentation technology smoothly 

2 – Basic Rating Scales are checklists of criteria that evaluate the quality of elements and include a scoring system. The main drawback with rating scales is that the meaning of the numeric ratings can be vague. Without descriptors for the ratings, the raters must make a judgment based on their perception of the meanings of the terms. For the same presentation, one rater might think a student rated “good,” and another rater might feel the same student was “marginal.” 

Example: Basic Rating Scale for Critical Thinking

Scale: Excellent (5), Good (4), Fair (3), Marginal (2), Inadequate (1)

Rate each criterion on the scale above:

  • Accurately interprets evidence, statements, graphics, questions, etc.
  • Identifies the salient arguments (reasons and claims)
  • Offers analyses and evaluations of major alternative points of view
  • Draws warranted, judicious, non-fallacious conclusions
  • Justifies key results and procedures, explains assumptions and reasons
  • Fair-mindedly follows where evidence and reasons lead

3 – Holistic Rating Scales use a short narrative of characteristics to award a single score based on an overall impression of a student’s performance on a task. A drawback to using holistic rating scales is that they do not identify specific areas of strength and weakness, so they are less useful for focusing your improvement efforts. Use a holistic rating scale when the projects to be assessed vary greatly (e.g., independent study projects submitted in a capstone course) or when the number of assignments to be evaluated is large (e.g., reviewing all the essays from applicants to determine who will need developmental courses). 

Example: Holistic Rating Scale for Critical Thinking Scoring

Rating scale: Not meeting (1), Approaching (2), Meeting (3), Exceeding (4)

Each level carries a short narrative description of overall performance; the rater awards the single level that best matches the student’s work.

Source: The Holistic Critical Thinking Scoring Rubric: A Tool for Developing and Evaluating Critical Thinking. Retrieved April 12, 2010, from Insight Assessment.

4 – Analytic Rating Scales are rubrics that include explicit performance expectations for each possible rating, for each criterion. Analytic rating scales are especially appropriate for complex learning tasks with multiple criteria, but evaluate carefully whether this is the most appropriate tool for your assessment needs. They can provide more detailed feedback on student performance and more consistent scoring among raters; the disadvantage is that they can be time-consuming to develop and apply. Results can be aggregated to provide detailed information on the strengths and weaknesses of a program.

Example: Critical Thinking Portion of the Gallaudet University Rubric for Assessing Written English 

Ideas and Critical Thinking

Levels: Pre-College Skills (1), Emerging Skills (2), Developing Skills (3), Mastering Skills (4), Exemplary Skills (5)
1. Assignment lacks a central point. 2. Displays central point, although not clearly developed. 3. Displays adequately-developed central point. 4. Displays clear, well-developed central point. 5. Central point is uniquely displayed and developed.
1. Displays no real development of ideas. 2. Develops ideas superficially or inconsistently. 3. Develops ideas with some consistency and depth. 4. Displays insight and thorough development of ideas. 5. Ideas are uniquely developed.
1. Lacks convincing support for ideas. 2. Provides weak support for main ideas. 3. Develops adequate support for main ideas. 4. Develops consistently strong support for main ideas. 5. Support for main ideas is uniquely accomplished.
1. Includes no analysis, synthesis, interpretation, and/or other critical manipulation of ideas. 2. Includes little analysis, synthesis, interpretation, and/or other critical manipulation of ideas. 3. Includes analysis, synthesis, interpretation and/or other critical manipulation of ideas in most parts of the assignment. 4. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas, throughout. 5. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas, throughout— leading to an overall sense that the piece could withstand critical analysis by experts in the discipline.
1. Demonstrates no real integration of ideas (the author’s or the ideas of others) to make meaning. 2. Begins to integrate ideas (the author’s or the ideas of others) to make meaning. 3. Displays some skill at integrating ideas (the author’s or the ideas of others) to make meaning. 4. Is adept at integrating ideas (the author’s or the ideas of others) to make meaning. 5. Integration of ideas (the author’s or the ideas of others) is accomplished in novel ways.
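
Because an analytic rubric is a criterion-by-level grid of descriptors, it can be convenient to hold one in a structured form (for example, when building a scoring spreadsheet or form). The Python sketch below is not from the source; it shows one hypothetical way to store the “central point” row of the table above as a lookup structure.

    # A minimal sketch: an analytic rubric as a lookup structure,
    # criterion -> rating level -> performance descriptor.
    # Descriptors are the "central point" row of the table above.
    rubric = {
        "central point": {
            1: "Assignment lacks a central point.",
            2: "Displays central point, although not clearly developed.",
            3: "Displays adequately-developed central point.",
            4: "Displays clear, well-developed central point.",
            5: "Central point is uniquely displayed and developed.",
        },
    }

    # A rater considering a level-4 score can pull up the exact expectation:
    print(rubric["central point"][4])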

Steps for Creating an Analytic Rating Scale (Rubric) from Scratch

There are two different ways to approach building an analytic rating scale: the logical method and the organic method. Steps 1–3 are the same for both. 

Steps 1 – 3: Logical AND Organic Methods

Determine the Best Tool

  • Identify what is being assessed (e.g., ability to apply theory); the focus here is program-level learning assessment. Determine first whether an analytic rating scale is the most appropriate way of scoring the performance and/or product. An analytic rating scale is probably a good choice:
    • if there are multiple aspects of the product or process to be considered
    • if a basic rating scale or holistic rating scale cannot provide the breadth of assessment you need.

Building the Shell

The Rows
  • Identify what is being assessed (e.g., ability to apply theory).
    • Specify the skills, knowledge, and/or behaviors that you will be looking for.
    • Limit the characteristics to those that are most important to the assessment.
Shell (rows are the criteria; columns are the rating scale):

  Criteria | Not meeting (1) | Approaching (2) | Meeting (3) | Exceeding (4)
The Columns
  • Develop a rating scale with levels of mastery that are meaningful.

Tip: Adding numbers to the ratings can make scoring easier. However, if you plan to use the rating scale for course-level grading as well, a meaning must be attached to each score: for example, what is the minimum total score that would be considered acceptable for a “C”? 
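
As a concrete illustration of attaching grade meaning to scores, the Python sketch below (not part of the source) shows one hypothetical mapping from rubric totals to letter grades. It assumes four criteria each rated 1–4, and the cut-off values are invented for illustration; a program would need to set its own.

    # A minimal sketch of mapping rubric scores to letter grades.
    # Assumes four criteria rated 1-4 (maximum total of 16); the
    # cut-offs below are illustrative assumptions, not recommendations.
    ILLUSTRATIVE_CUTOFFS = [(14, "A"), (12, "B"), (10, "C"), (8, "D")]

    def to_letter_grade(criterion_scores):
        """Convert one student's per-criterion ratings to a letter grade."""
        total = sum(criterion_scores)
        for minimum, letter in ILLUSTRATIVE_CUTOFFS:
            if total >= minimum:
                return letter
        return "F"

    print(to_letter_grade([3, 4, 2, 3]))  # total 12 -> "B"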


Components of Analytic Rating Scales 

  • Criteria that link to the relevant learning objectives
  • Rating scale that distinguishes between levels of mastery
  • Descriptions that clarify the meaning of each criterion, at each level of mastery
Layout of an analytic rating scale:

  Columns: the rating scale (e.g., Excellent, Good, Inadequate)
  Rows: the criteria
  Cells: descriptive characteristics for each criterion at each level

Other possible descriptors include:

  • Exemplary, Proficient, Marginal, Unacceptable
  • Advanced, High, Intermediate, Novice
  • Beginning, Developing, Accomplished, Exemplary
  • Outstanding, Good, Satisfactory, Unsatisfactory

Step 4: Writing the Performance Descriptors in the Cells

See Tierney and Simon (2004) for examples of inconsistent performance characteristics and suggested corrections.

  • Use either the logical or the organic method to write the descriptions for each criterion at each level of mastery.

Logical Method:
  • For each criterion, at each rating level, brainstorm a list of the performance characteristics*. Each should be mutually exclusive.

Organic Method:
  • Have experts sort sample assignments into piles labeled by ratings (e.g., Outstanding, Good, Satisfactory, Unsatisfactory).
  • Based on the documents in the piles, determine the performance characteristics* that distinguish the assignments.

Tip: Keep the list of characteristics manageable by including only critical evaluative components. Extremely long, overly detailed lists make a rating scale hard to use. 

In addition to keeping descriptions brief, keep the language consistent. Below are several ways to keep descriptors consistent: 

Change the verb at each level to show the increasing cognitive demand:

  Level 3: analyses the effect of …
  Level 2: describes the effects of …
  Level 1: lists the effects of …

Keep the aspects of performance the same across the levels, but add adjectives or adverbial phrases to show the qualitative differences: 

  Level 3: provides a complex explanation; shows a comprehensive knowledge
  Level 2: provides a detailed explanation; shows a sound knowledge
  Level 1: provides a limited explanation; shows a basic knowledge

  Level 3: uses correctly and independently
  Level 2: uses with occasional peer or teacher assistance
  Level 1: uses only with teacher guidance

A word of warning: numeric references on their own can be misleading. They are best teamed with a qualitative reference (e.g., three appropriate and relevant examples) to avoid rewarding quantity at the expense of quality. 

  Level 3: provides three appropriate examples; uses several relevant strategies
  Level 2: provides two appropriate examples; uses some relevant strategies
  Level 1: provides an appropriate example; uses few or no relevant strategies

Steps 5-6: Logical AND Organic Methods

  • Test the rating scale. See “Part 6. Scoring Rubric Group Orientation and Calibration” for directions for this process; a simple agreement check is sketched after this list.
  • Review and revise.
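
One way to check the outcome of a calibration session is to compare raters’ scores on the same sample assignments. The sketch below is a hypothetical illustration, not part of the source or of the cited “Part 6” materials; it computes simple percent exact agreement between two raters who scored the same set of papers.

    # A minimal sketch: percent exact agreement between two raters
    # who scored the same five sample assignments on a 1-4 scale.
    # The scores are invented for illustration.
    def percent_agreement(rater_a, rater_b):
        """Share of assignments on which both raters gave identical ratings."""
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return 100.0 * matches / len(rater_a)

    rater_a = [3, 4, 2, 3, 4]
    rater_b = [3, 3, 2, 3, 4]
    print(f"Exact agreement: {percent_agreement(rater_a, rater_b):.0f}%")  # 80%

Low agreement on a criterion suggests its descriptors need sharpening before the rubric is used for program-level scoring.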

Steps for Adapting an Existing Analytic Rating Scale (Rubric)

  • Evaluate the rating scale. Ask yourself:
    • Does the rating scale relate to all or most of the outcome(s) I need to assess?
    • Does it address anything extraneous?
  • Adjust the rating scale to suit your specific needs.
    • Add missing criteria
    • Delete extraneous criteria
    • Adapt the rating scale
    • Edit the performance descriptors
  • Test the rating scale.
  • Review and revise again, if necessary.

Uses of Rating Scales (Rubrics)

Use rating scales for program-level assessment to see trends in strengths and weaknesses of groups of students. 

Examples 

  • To evaluate a holistic project (e.g., a thesis, exhibition, or research project) in a capstone course that pulls together all that students have learned in the program.
  • Supervisors might use a rating scale developed by the program to evaluate students’ field experience and provide feedback to both the student and the program.
  • Aggregate the scores of a rating scale used to evaluate a course-level assignment, as sketched below. For example, the Biology department decides to develop a rating scale to evaluate students’ reports from 300- and 400-level sections. The professors use the scores to determine the students’ grades and provide students with feedback for improvement. The scores are also given to the department’s Assessment Coordinator, who summarizes them to determine how well the program is meeting its student learning outcome, “Make appropriate inferences and deductions from biological information.”
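
As a sketch of the aggregation step in the Biology example, the Python below (hypothetical data and criterion names, not from the source) computes the mean rating per criterion across students; a low criterion mean flags a program-level weakness.

    # A minimal sketch of program-level aggregation: mean rating per
    # criterion across all scored reports. Data are invented.
    from statistics import mean

    # Each dict holds one student's ratings, keyed by criterion.
    scores = [
        {"inferences": 4, "evidence": 3, "conclusions": 2},
        {"inferences": 3, "evidence": 3, "conclusions": 3},
        {"inferences": 4, "evidence": 2, "conclusions": 2},
    ]

    for criterion in scores[0]:
        avg = mean(student[criterion] for student in scores)
        print(f"{criterion}: mean {avg:.2f}")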

For more information on using course-level assessment to provide feedback to students and to determine grades, see the University of Hawaii’s “Part 7. Suggestions for Using Rubrics in Courses” and the section on converting rubric scores to grades in Craig A. Mertler’s “Designing Scoring Rubrics for Your Classroom.”


Resources

Adapted from sources below: 

Allen, Mary. (January 2006). Assessment Workshop Material. California State University, Bakersfield. Retrieved from http://www.csub.edu/TLC/options/resources/handouts/AllenWorkshopHandoutJan06.pdf 

University of Hawaii at Manoa Assessment Office. http://www.uhm.hawaii.edu/assessment/howto/rubrics.htm 

TeacherVision. http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4523.html?detoured=1 

Mueller, Jon. (2001). Rubrics. Authentic Assessment Toolbox. Retrieved April 12, 2010, from http://jonathan.mueller.faculty.noctrl.edu/toolbox/rubrics.htm 

Rubric (academic). Wikipedia. http://en.wikipedia.org/wiki/Rubric_(academic) 

Tierney, Robin, & Simon, Marielle. (2004). What’s Still Wrong With Rubrics: Focusing on the Consistency of Performance Criteria Across Scale Levels. Practical Assessment, Research & Evaluation, 9(2).

 
