Assessment of Student Learning

Assessing Student Learning at the Program Level

Both direct and indirect methods of assessment can be valuable, and assessment experts agree that no single assessment method should ever be relied on exclusively.

Student Learning Outcomes (SLOs)

The first step to any assessment plan is to define the student learning outcomes for the program or course under consideration: the things we want students to be able to do, or think or know, by the time they have finished a course of study.

Course or Program SLO characteristics

  • They should describe the broadest and most comprehensive goals of the course or program, what assessment theorist Mark Battersby refers to as “integrated complexes of knowledge” or competencies. They should focus on what a student should be able to do with the knowledge covered, not simply on what the instructor will cover. Courses and programs typically have three to five outcomes, though fewer or more are possible.
  • They should employ active verbs, usually taken from the higher levels of Bloom's taxonomy (reprinted in the appendix to this document)—e.g., students should be able to “analyze” or “evaluate,” not “define” or “describe.”
  • As much as possible, they should be written in intelligible language, understandable to students.
  • As often as possible, they should be arrived at collaboratively, as instructors who teach the same class or in the same program come to consensus about the key objectives of that unit of instruction. (For course-level SLOs, instructors will undoubtedly have SLOs of their own in addition to consensus ones.) Adjunct instructors, and students themselves, should be involved in the process of developing SLOs as much as possible.
  • SLOs should be measurable. Ideally, they should contain or make reference to the product (papers, projects, performances, portfolios, tests, etc., through which students demonstrate competency) and the standard (e.g., with at least 80% accuracy) or criterion by which success is measured. When the behavior/product and standard are specified, the SLO is sometimes said to have been made “operational.”
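To illustrate what an operational SLO check might look like in practice, the sketch below computes the share of a cohort meeting a standard such as “with at least 80% accuracy.” The function name, scores, and threshold are hypothetical examples, not drawn from this document:

```python
# Hypothetical sketch: evaluating an "operational" SLO whose standard is a
# minimum accuracy (here 80%) on the product through which students
# demonstrate competency (e.g., a test). All scores are illustrative.

def share_meeting_criterion(scores, threshold=0.80):
    """Return the fraction of students whose accuracy meets the threshold."""
    met = sum(1 for s in scores if s >= threshold)
    return met / len(scores)

cohort = [0.92, 0.78, 0.85, 0.81, 0.64]  # per-student accuracy on the test
print(share_meeting_criterion(cohort))   # fraction of the cohort meeting the standard
```

A program might report this fraction alongside the SLO, e.g., “80% of students met the criterion.”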

Program-level SLO examples

  • Biology majors

    "(Students should be able to) apply ethical principles of the discipline in regard to animal and human subjects, environmental protection, use of sources, and collaboration with colleagues." (Walvoord, 2004)
  • Political Science majors

    "(Students should be able to) identify a problem [in the discipline], situate it within an appropriate literature, pose a particular hypothesis or intellectual puzzle, then use original sources to test the hypothesis or solve the puzzle." (Walvoord, 2004)
  • Economics majors

    "(Students should be able to) use statistical methods to analyze economic questions" (Walvoord, 2004).
  • Master's in Business Administration

    "(Students should be able to) apply the strategic management process and formulate firm strategy." (Central Michigan University)
  • Juris Doctorate degree

    "Students will demonstrate effective use of the tools of legal research both hard copy and online tools, be able to create an effective research plan for assessing a legal problem, and demonstrate the ability to use appropriate citation form for advocacy and expositive legal writing." (Georgia State University)

Direct Assessment

Direct assessment examines actual student work (essays, exams, nationally normed tests) for evidence that learning has been achieved.

Embedded assessment

Instructors use existing tests, exams, or writing prompts to identify learning trends in a particular course or group of related courses. A department might agree to give a common final in which questions are mapped to specific learning outcomes for the course, then aggregate the results. (A variation of this approach would require all instructors in a course to ask a set of common questions on one part of an exam but permit them to develop instructor-specific questions for the rest.)

Another department might simply decide to look at student writing on a variety of late-term essay assignments for evidence that certain learning outcomes have been met. The main advantage of embedded assessment is that it simplifies the assessment process, asking instructors to evaluate existing student work, though in a different way than they usually do and for a different purpose. It is usually good practice to collect such assessment data in a way that makes evaluation of individual instructors impossible.
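The mapping-and-aggregation step described above can be sketched in a few lines. The question labels, SLO names, and scores below are invented for illustration; the code simply averages item scores by the outcome each item is mapped to:

```python
from collections import defaultdict

# Hypothetical mapping of common-final questions to course SLOs
item_to_slo = {"Q1": "SLO1", "Q2": "SLO1", "Q3": "SLO2", "Q4": "SLO3"}

# Per-student item scores as fractions of available points (illustrative data)
student_scores = [
    {"Q1": 1.0, "Q2": 0.5, "Q3": 0.8, "Q4": 0.9},
    {"Q1": 0.7, "Q2": 0.9, "Q3": 0.6, "Q4": 1.0},
]

def aggregate_by_slo(scores, mapping):
    """Average item scores across all students, grouped by learning outcome."""
    grouped = defaultdict(list)
    for record in scores:
        for item, score in record.items():
            grouped[mapping[item]].append(score)
    return {slo: sum(vals) / len(vals) for slo, vals in grouped.items()}

print(aggregate_by_slo(student_scores, item_to_slo))
```

Because scores are pooled across instructors before averaging, the aggregate speaks to the course as a whole rather than to any individual section.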


Portfolios

Portfolios require students (or instructors) to assemble a collection of projects from a single class or group of classes to demonstrate that learning outcomes have been achieved, and to reveal areas of learning deficiency. This is a particularly effective method of assessing institutional learning outcomes.

Capstone courses

Courses usually taken in a student's senior year, intended to allow students to demonstrate comprehensive knowledge and skill in the particular major. Capstone courses (and the capstone projects usually required in such courses) integrate knowledge and skills associated with the entire sequence of courses that make up the program. Assessing student performance in these classes therefore approximates assessment of student performance in the major as a whole.

Standardized tests

Nationally normed tests of such institution-wide learning outcomes as critical thinking or writing, or discipline-specific tests like the ETS Major Field Achievement Tests. Standardized tests may be useful measures if instructors agree to teach the skills that such tests can be shown to measure, and they have the advantage of providing departments with a national standard by which to measure their students. But standardized tests are costly to administer; students are often insufficiently motivated to do their best work when taking them; and as noted, they may not measure what faculty in the program actually teach.

Scoring rubrics

Scoring rubrics aid in assessing student performance captured in portfolios, capstone courses, essays, speeches, or other presentations; individual instructors can employ them on their own as well. Start with a specific assignment (an essay, a demonstration, an oral report) in which student learning cannot be measured with numerical precision.

Develop (whether alone or with others) a scoring guide or checklist that will indicate various skill levels for various "primary traits," with clearly delineated language suggesting the degree to which the assignment demonstrates evidence that the SLO has been achieved. See the PowerPoint that provides an overview and examples of rubrics: Using Rubrics to Make Assessment More Efficient.
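One way to picture a primary-trait scoring guide is as a table of traits and level descriptors. The sketch below uses invented traits, descriptors, and point levels (not taken from the linked PowerPoint) to show one possible structure:

```python
# Hypothetical primary-trait rubric: each trait has a descriptor for each
# skill level; a rater records one level per trait. All content is invented.

rubric = {
    "thesis":   {1: "unclear or missing", 2: "present but vague",     3: "clear and focused"},
    "evidence": {1: "little support",     2: "some relevant support", 3: "well-supported throughout"},
    "citation": {1: "frequent errors",    2: "minor errors",          3: "consistently correct"},
}

def score_assignment(ratings, rubric):
    """Sum one recorded level per trait, validating against the rubric."""
    for trait, level in ratings.items():
        if trait not in rubric or level not in rubric[trait]:
            raise ValueError(f"invalid rating {trait}={level}")
    return sum(ratings.values())

print(score_assignment({"thesis": 3, "evidence": 2, "citation": 3}, rubric))
```

Keeping the descriptors in the structure itself helps multiple raters apply the same delineated language consistently.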

Indirect Assessment

Indirect assessment measures should be used to augment, not substitute for, more direct measures. Ideally, multiple assessment methods should be employed whenever possible, so that student surveys, for example, can become a useful additional check against data derived from doing embedded assessment or administering standardized tests.

Student surveys and focus groups

A substantial body of evidence suggests that student self-reported learning gains correlate modestly with real learning gains. You may want to consider surveying students (or a sampling of students) at the end of a course of instruction (or after graduation from a program) to determine what they see as their level of achievement of the course or program's learning outcomes.

You may also want to gather a representative group of students together for more informal conversation about a particular course or program when it has ended, asking them open-ended questions about its effect upon them. Surveys of alumni can also produce meaningful assessment data. These techniques are particularly valuable when done in conjunction with more direct assessment measures.

Faculty surveys

Instructors can be asked, via questionnaires, about what they perceive to be strengths and weaknesses among their students.

Institutional data

Data likely to be kept by Offices of Institutional Research (retention, success, and persistence rates; job placement information; rates of acceptance into graduate programs; demographics; etc.) may also be strong assessment tools if analyzed and mapped to specific SLOs.


Classroom Assessment Techniques (CATs)

Classroom Assessment Techniques (CATs) are generally simple, non-graded, anonymous, in-class activities designed to provide useful feedback on the teaching-learning process as it is happening. CATs are ideal ways of helping instructors determine what their students know, what they do not know, and what they are having difficulty learning. These formative evaluations provide information that can be used to improve course content, methods of teaching, and, ultimately, student learning.


  • 2015-2020 Assessment Plan and Timeline
  • Handbook for Learning Outcomes Assessment
  • Using Rubrics


Contact Info

Ruth E. Cain, Ed.D.

Director of Assessment
Administrative Center, Room 351