Office of the Provost

Assessment in Academic Degrees


How to Assess Student Learning at the Program Level

Assessment can either be direct, focusing on actual student work (essays, exams, nationally normed tests) for evidence that learning has been achieved, or indirect, looking for signs that learning has taken place through proxies or “performance indicators” such as surveys, focus groups, and retention or transfer rates. Both methods of assessment can be valuable, and in fact assessment experts agree that no single assessment method should ever be relied on exclusively. The first step in any assessment plan is to define the student learning outcomes for the program (or course) under consideration: the things we want students to be able to do (or think, or know) by the time they’ve finished a course of study.

Student Learning Outcomes (SLOs)

Student learning outcomes for courses or programs should share the following characteristics:
  • They should describe the broadest and most comprehensive goals of the course or program, what assessment theorist Mark Battersby refers to as “integrated complexes of knowledge,” or competencies. They should focus on what a student should be able to do with the knowledge covered, not simply on what the instructor will cover. Courses and programs typically have three to five outcomes, though fewer or more are possible.
  • They should employ active verbs, usually taken from the higher levels of Bloom’s taxonomy (reprinted in the appendix to this document)—e.g., students should be able to “analyze” or “evaluate,” not “define” or “describe.”
  • As much as possible, they should be written in plain language that students can understand.
  • As often as possible, they should be arrived at collaboratively, as instructors who teach the same class or in the same program come to consensus about the key objectives of that unit of instruction. (For course-level SLOs, instructors will undoubtedly have SLOs of their own in addition to consensus ones.) Adjunct instructors—and students themselves—should be involved in the process of developing SLOs as much as possible.
  • SLOs should be measurable. Ideally, they should contain or make reference to the product (papers, projects, performances, portfolios, tests, etc., through which students demonstrate competency) and the standard (e.g., “with at least 80% accuracy”) or criterion by which success is measured. When the behavior/product and standard are specified, the SLO is sometimes described as “operational.” (A brief sketch of checking such a criterion appears after this list.)
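
To make the idea of an operational standard concrete, here is a minimal sketch in Python. The scores are purely hypothetical; any real program would substitute its own gradebook data and criterion.

    # Minimal sketch: checking an operational SLO criterion.
    # Hypothetical data: each student's accuracy (as a percentage) on the
    # product named in the SLO (e.g., a final essay or exam).
    scores = [92, 78, 85, 64, 88, 95, 81, 73]  # illustrative values only

    CRITERION = 80  # the standard named in the SLO: "at least 80% accuracy"

    met = sum(1 for s in scores if s >= CRITERION)
    share = met / len(scores)

    print(f"{met} of {len(scores)} students ({share:.0%}) met the {CRITERION}% standard")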

Sample program-level SLOs, therefore, might look something like this:
  • A simple SLO for English majors
    “At graduation, English majors are able to write a clear, coherent, persuasive, and correct essay demonstrating their ability to analyze and interpret texts, to apply secondary criticism to them, and to explain their contexts.” (University of Texas-Arlington)
  • A simple SLO for Biology majors
    “[Students should be able to] apply ethical principles of the discipline in regard to animal and human subjects, environmental protection, use of sources, and collaboration with colleagues.” (Walvoord, 2004)
  • An SLO for honors Political Science majors
    “[Students should be able to] identify a problem [in the discipline], situate it within an appropriate literature, pose a particular hypothesis or intellectual puzzle, then use original sources to test the hypothesis or solve the puzzle.” (Walvoord, 2004)
  • An SLO for Economics majors
    “[Students should be able to] use statistical methods to analyze economic questions” (Walvoord, 2004).
  • An SLO for the MBA at Central Michigan University
    “[Students should be able to] apply the strategic management process and formulate firm strategy.”
  • An SLO for the J.D. degree at Georgia State University
    “Students will demonstrate effective use of the tools of legal research (both hard copy and online tools), be able to create an effective research plan for assessing a legal problem, and demonstrate the ability to use appropriate citation form for advocacy and expositive legal writing.”

Direct Assessment Methods

Some effective direct assessment methods that can be employed to measure achievement of SLOs in courses or programs include:

Embedded assessment, in which instructors use existing tests, exams, or writing prompts to identify learning trends in a particular course or group of related courses. A department might agree to give a common final in which questions are mapped to specific learning outcomes for the course, with the results then aggregated. (A variation of this approach would require all instructors in a course to ask a set of common questions on part of an exam, but permit them to develop instructor-specific questions for the rest.) Another department might simply decide to examine student writing on a variety of late-term essay assignments for evidence that certain learning outcomes have been met. The main advantage of embedded assessment is that it simplifies the assessment process, asking instructors to evaluate existing student work, though in a different way and for a different purpose than usual. It is usually good practice to collect and report such assessment data in aggregate, so that it cannot be used to evaluate individual instructors.
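
As an illustration only, the following Python sketch shows how common-final results might be pooled by outcome. The question-to-outcome map, outcome labels, and scores are hypothetical; a real department would substitute its own.

    # Minimal sketch of aggregating embedded-assessment results.
    from collections import defaultdict

    # Hypothetical mapping of common-final questions to course SLOs.
    question_to_slo = {"Q1": "SLO1", "Q2": "SLO1", "Q3": "SLO2", "Q4": "SLO3"}

    # Hypothetical per-student scores (fraction of points earned) on each
    # question, pooled across sections so no individual instructor is identifiable.
    student_scores = [
        {"Q1": 0.9, "Q2": 0.8, "Q3": 0.6, "Q4": 1.0},
        {"Q1": 0.7, "Q2": 0.9, "Q3": 0.5, "Q4": 0.8},
        {"Q1": 1.0, "Q2": 0.6, "Q3": 0.7, "Q4": 0.9},
    ]

    totals = defaultdict(list)
    for scores in student_scores:
        for question, score in scores.items():
            totals[question_to_slo[question]].append(score)

    for slo, results in sorted(totals.items()):
        print(f"{slo}: mean score {sum(results) / len(results):.0%} across {len(results)} responses")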

Portfolios, which require students (or instructors) to assemble a group of projects from a single class or group of classes as a way of demonstrating that achievement of learning outcomes has taken place—and to reveal areas of learning deficiency. This is a particularly effective method of assessing institutional learning outcomes.

Capstone courses are usually taken in a student’s senior year and are intended to allow students to demonstrate comprehensive knowledge and skill in the major. Capstone courses (and the capstone projects usually required in such courses) integrate knowledge and skills associated with the entire sequence of courses that make up the program. Assessing student performance in these classes therefore approximates assessment of student performance in the major as a whole.

Standardized tests, particularly nationally normed tests of such institution-wide learning outcomes as critical thinking or writing, or discipline-specific tests like the ETS Major Field Achievement Tests. Standardized tests may be useful measures if instructors agree to teach the skills that such tests can be shown to measure, and they have the advantage of providing departments with a national standard by which to measure their students. But standardized tests are costly to administer; students are often insufficiently motivated to do their best work when taking them; and as noted, they may not measure what faculty in the program actually teach.

Scoring rubrics enable us to assess student performance captured in portfolios, capstone courses, essays, speeches, or other presentations. Individual instructors can employ them on their own, too. Take a specific assignment—an essay, a demonstration, an oral report—in which student learning cannot be measured with numerical precision. Develop (whether alone or with others) a scoring guide or checklist that indicates various skill levels for various “primary traits,” with clearly delineated language describing the degree to which the assignment demonstrates that the SLO has been achieved. See the PowerPoint that provides an overview and examples of rubrics: Using Rubrics to Make Assessment More Efficient.
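
As one possible illustration, a primary-trait rubric can be represented as a simple lookup from traits to leveled descriptors, as in the Python sketch below. The traits, descriptors, and ratings are invented for the example.

    # Minimal sketch of a primary-trait scoring rubric as a data structure.
    # All traits, level descriptors, and scores below are hypothetical.
    rubric = {
        "thesis":       {1: "absent or unclear", 2: "present but vague",  3: "clear and arguable"},
        "evidence":     {1: "little or none",    2: "some, loosely tied", 3: "ample and well integrated"},
        "organization": {1: "hard to follow",    2: "mostly coherent",    3: "coherent throughout"},
    }

    # A rater's scores for one essay, one level per trait.
    ratings = {"thesis": 3, "evidence": 2, "organization": 3}

    for trait, level in ratings.items():
        print(f"{trait}: level {level} ({rubric[trait][level]})")
    print(f"total: {sum(ratings.values())} of {3 * len(rubric)}")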

Indirect Assessment Methods

Student surveys and focus groups. A substantial body of evidence suggests that student self-reported learning gains correlate modestly with real learning gains. You may want to consider surveying students (or a sample of students) at the end of a course of instruction (or after graduation from a program) to determine what they see as their level of achievement of the course or program’s learning outcomes. You may also want to gather a representative group of students for a more informal conversation about a particular course or program once it has ended, asking them open-ended questions about its effect on them. Surveys of alumni can also produce meaningful assessment data. These techniques are particularly valuable when used in conjunction with more direct assessment measures.
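
A program that wants to see how well self-reports track a direct measure could compute the correlation between the two, as in this minimal sketch. The paired data are hypothetical, and the sketch assumes Python 3.10+ for statistics.correlation.

    # Minimal sketch: comparing self-reported gains with a direct measure.
    # Each position pairs one student's survey response with a direct score.
    from statistics import correlation  # Python 3.10+

    survey = [4, 3, 5, 2, 4, 3, 5, 2]          # self-reported gain, 1-5 scale (hypothetical)
    direct = [82, 70, 88, 65, 75, 74, 90, 60]  # e.g., embedded-assessment percentage (hypothetical)

    r = correlation(survey, direct)
    print(f"Pearson r between self-report and direct measure: {r:.2f}")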

Faculty surveys. Instructors can be asked, via questionnaires, about what they perceive to be strengths and weaknesses among their students.

Data typically kept by Offices of Institutional Research (retention, success, and persistence rates; job placement information; rates of acceptance into graduate programs; demographics; etc.) may also be strong assessment tools if analyzed and mapped to specific SLOs.

Classroom Assessment Techniques. The UMKC assessment committee encourages instructors to familiarize themselves with (and routinely employ) some of the classroom-based assessment techniques (CATs) that Thomas Angelo and Patricia Cross detail in their text on the subject, cited in the appendix. For example, instructors might use the “minute paper” at the end of a class period, having students respond quickly and anonymously to two questions: “What was the most important thing you learned today?” and “What important question remains unanswered?” CATs are ideal ways of helping instructors in specific classes determine what their students know and don’t know, or are having difficulty learning. When you adjust teaching practices in light of the information you gather from a CAT, you are completing the feedback loop that defines successful outcomes assessment. If members of your discipline agree to employ CATs regularly, consider detailing their efforts in a document that can become part of an annual assessment report.

One caveat: indirect assessment measures should be used to augment, not substitute for, more direct measures. Ideally, in fact, multiple assessment methods should be employed whenever possible, so that student surveys (for example) can serve as a useful additional check on data derived from embedded assessment or standardized tests.