Assessment Handbook

Goals and Outcomes

What are program goals? 

Program goals are broad statements about what a program’s majors should know or be able to do. The goals are a step between the program's mission statement (a current, accurate description of the primary purpose of the program) and program outcomes (specific statements about the observable behaviors of a program’s majors). Unlike outcomes, goals do not have to be stated in ways that imply how they will be measured. Think of a set of goals as a way to organize the program's student-learning outcomes.

Program goals should give prospective students a general sense of what they should gain from completing the program. Ideally, all faculty in a program should be aware of the program's goals and be able to easily communicate the goals to prospective students and their families, as well as to potential employers of the program's graduates.

While there are no strict rules about how many goals a program should have, programs should have a reasonable and manageable number. Three to six goals are typical, with major programs having more goals than minor programs. The set of goals should represent what is most important to the program and should be reflected in the program's curriculum. Finally, please keep in mind that your program goals should reflect the uniqueness of your unit; therefore, they should not be identical to general education goals or to other programs’ goals.

What are program outcomes?

Program outcomes are specific statements about the observable behaviors that are expected of a program’s majors upon graduation. The outcomes cannot include everything a student should learn from majoring in a program; instead, they should represent the overarching skills and abilities that are most important to the program and appropriate for the degree level. Even so, the outcomes should give prospective students, their families and other major stakeholders a good sense of what students gain from completing the program.

Ideally, programs should organize outcomes around student learning goals. Each parent goal should have a reasonable number of outcomes associated with it. 

Outcomes should be written with specific action verbs that identify how students will demonstrate learning and avoid vague terms (such as knowledge, ability, awareness, appreciation). The verb chosen for a student learning outcome is especially important since it implies how the program will measure the degree to which its students are successfully achieving the outcome. Vanderbilt University describes how you can use Bloom’s Taxonomy to help you choose your verbs.

In addition, following the S.M.A.R.T. acronym, outcomes should be:

  • Specific: States exactly what is expected using concrete action verbs
  • Measurable: Requires a measurable result
  • Attainable: Is achievable yet reasonably stretches the student
  • Realistic: Is reasonable and appropriate for the degree level
  • Timely: Incorporates current professional/disciplinary expectations

Examples of low-quality outcomes

  • Students will be able to understand psychological theories.
  • Students will be able to communicate orally and in writing.
  • Students will be provided research opportunities.

Examples of high-quality outcomes

  • Students will be able to evaluate psychological theories.
  • Students will be able to present scientific data orally, textually and visually.
  • Students will be able to analyze and interpret quantitative psychological data.

Assessment Methods

Because the goal of learning-outcomes assessment at the program level is to determine how well students have met program objectives at or near graduation, typical measures for program assessment purposes are those in which students demonstrate mastery of the objective (see the curriculum map). These can be major assignments in capstone or other 400-level courses, internship supervisor evaluations, or program-level exams (e.g., licensure exam). Consider choosing a measure that will provide longitudinal evidence of maintenance or changes in student performance.

At times you may want to use a different assessment method. For example, if you find students have not mastered an objective, you may want to assess performance in an earlier course to investigate possible reasons for their suboptimal performance. Once you determine what those reasons are, you can make changes and then reassess students at the mastery level. Alternatively, you may have an assessment question that requires measurement of student performance at an earlier point, or even multiple points, in the program. The best choice of assessment measure will provide meaningful information to the program faculty.

There are two basic types of measures for assessment: direct and indirect.

Direct measures

These are assessment measures in which the products of student work are evaluated in light of the learning outcomes for the program. Evidence from coursework, such as projects, performances or specialized tests of knowledge or skill are examples of direct measures.

Indirect measures

Indirect assessment methods require that faculty and staff infer actual student abilities, knowledge and values rather than observe direct evidence. Among indirect methods are surveys, exit interviews and focus groups.

Direct evidence by itself is a stronger measure of student learning than indirect evidence by itself. The strongest assessment evidence combines both direct and indirect evidence. 

In small-enrollment programs, data collection may consist of obtaining copies of each student’s artifact (i.e., assignment). In large-enrollment programs, it may be necessary to collect data from a sample of students.
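For large-enrollment programs, a simple random sample of artifacts can be drawn with a few lines of code. The sketch below is purely illustrative: the roster, sample size and seed are hypothetical placeholders, and any spreadsheet or statistics tool can do the same job.

```python
import random

# Hypothetical roster of student IDs (illustrative only; a real roster
# would come from the registrar or learning management system).
roster = [f"student_{i:03d}" for i in range(1, 201)]  # 200 students

random.seed(42)  # fixed seed so the sample can be reproduced
sample = random.sample(roster, k=30)  # sample 30 artifacts without replacement
print(len(sample), "artifacts to collect")
```

A fixed seed lets the program document exactly which artifacts were selected, which helps if the assessment is revisited in a later cycle.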

Additional considerations

Remember that this process is driven by your learning concerns or assessment questions. At the most basic level, learning outcomes assessment is about whether students have the skills and knowledge necessary for success in their chosen field of study after graduation. The most effective strategy for answering this question is to measure student performance at the demonstration of mastery level, typically in a capstone or other 400-level course, or a master’s or dissertation defense for graduate programs.

Faculty may have other concerns or interests about the program that lead to different assessment designs. Perhaps they believe students are not coming into the program with the necessary background. This belief suggests the need to administer a pretest in an early course. If faculty are interested in how students are developing throughout the program, measures at various points in the curriculum would be the best assessment strategy. Alternatively, faculty might want to know how demographics or experience might impact student mastery of skills and knowledge. If a language program wants to know if students who study abroad master learning objectives at a greater level than those who don’t, performance data would need to be compared between those two levels of experience. To learn more, please reach out to the director of institutional effectiveness and assessment by email or during normal office hours.

Performance Criteria

The final component associated with designing your assessment measure is creating a performance criterion or benchmark, which is a statement of the level at which you would consider students to have met the learning objective. It is important that your performance criterion be in alignment with your measure. Several examples of performance targets are listed below.

  • 80% of students will receive a total score of 85 or above on the essay.
  • At least 85% of students will score a 3 out of 5 on each of the rubric components.
  • At least 90% of students will receive a score of 80 or higher on the internship evaluation form.
  • 95% of students will score at least a 3 on each rubric component, and 60% of students will score at least a 4 on the attached 5-point rubric.
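Checking a target like the first example above is simple arithmetic. The sketch below uses hypothetical scores (not real data) to show how the first benchmark would be evaluated:

```python
# Hypothetical essay scores for a cohort (illustrative only).
scores = [92, 85, 78, 88, 95, 67, 85, 90, 82, 87]

# Target: 80% of students will receive a total score of 85 or above.
passing_share = 100 * sum(1 for s in scores if s >= 85) / len(scores)
# 7 of the 10 scores are >= 85, so passing_share is 70.0
met = passing_share >= 80
print(f"{passing_share:.0f}% scored 85 or above; target met: {met}")
```

In this illustration the criterion is not met (70% versus the 80% target), which would prompt the analysis described in the sections that follow.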

Results and Analysis

Briefly summarize the major findings from assessments you have conducted during the past year to obtain feedback about the extent to which expected outcomes are being realized. Sometimes assessments do not yield useful feedback, or the data were not available as expected. Report what happened and describe how the assessments will be modified to capture better data in the future. For purposes of reporting, a paragraph is usually enough to summarize the results from each method used to assess an outcome. However, for purposes of assessment and use of data for curricular changes, additional detail may be necessary. Longer reports can help to create meaningful dialogue and cultivate the creation of programmatic changes. If you use a longer report, please upload the document into the document repository and reference it in the paragraph summary.

Action Plans

Describe improvements and changes made in your unit in response to your assessment findings; report completed changes in the past tense. Also report findings that are currently informing your planning efforts, as well as improvement initiatives that are now underway.

Key points to keep in mind when developing your action plan:

  • Action plans flow directly from the analysis of the data. 
  • Actions do not necessarily need to be major curricular changes; you could just decide that you need more information.
  • Some action plans will immediately solve a problem in the next cycle, but others are long term and will put you on the path to improvement. 
  • Action plans are specific. 
  • Action plans may or may not require additional resources. 

Examples of Action Plan items

  • Curricular Changes 
    • Adding/changing prerequisites 
    • Changing degree program requirements 
    • Changing course sequence 
  • Pedagogical Changes
    • Incorporating guest lectures
    • Adding organized small-group activities
    • Adding web-based delivery of content 
  • Student Support 
    • Implementing peer-tutoring system
    • Organizing group study 
    • Providing online resources/referrals 
  • Faculty Support
    • Faculty retreat 
    • Professional development, technology assistance and online resources

What do I do if I met my criteria? 

  • If performance is consistent and no significant program changes occurred, you may conclude that no changes are necessary. Now is the time to consider if you still want to include this outcome, or if you would like to raise your target levels. 
  • If performance is consistent and significant program changes occurred, you may conclude that changes were not effective. Lack of immediate improvement in the next assessment cycle is not seen as a failure. Continue monitoring and reporting.
  • If performance levels improved because of previous changes, decide if you want to continue with recent changes or make additional modifications. Consider how to sustain what has been working and how to improve upon it.
  • While in aggregate you may have met your criteria, you may also want to consider disaggregating the data to better understand the results. For example, did transfer students perform better than new first-year students? Or did male students perform better than female students? 
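Disaggregation of this kind is a small grouping exercise. The sketch below uses hypothetical (status, score) records to show one way to compare subpopulations; a real analysis would substitute your own data and grouping variable:

```python
from collections import defaultdict

# Hypothetical (status, score) records (illustrative only).
records = [
    ("transfer", 88), ("transfer", 92), ("transfer", 79),
    ("first-year", 84), ("first-year", 71), ("first-year", 90),
]

# Collect scores by subpopulation.
groups = defaultdict(list)
for status, score in records:
    groups[status].append(score)

# Report the mean and the share meeting a sample threshold of 85 per group.
for status, grp in sorted(groups.items()):
    mean = sum(grp) / len(grp)
    share = 100 * sum(1 for s in grp if s >= 85) / len(grp)
    print(f"{status}: mean {mean:.1f}, {share:.0f}% at or above 85")
```

Comparing these group-level figures against the aggregate result can reveal whether an overall "target met" conclusion masks a subpopulation that is struggling.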

What do I do if the criteria are not met or partially met? 

  • You may conclude that students admitted to the program are not prepared to perform at expected levels.
  • You may conclude that students are weak in a foundational concept that prevents them from performing at expected levels in upper-division coursework. 
  • Use your curriculum map to investigate possible causes for low student performance and ensure adequate content coverage across the domain and course of study.
  • Before drawing your conclusions, you again may want to consider disaggregating the data as it may illustrate some additional considerations. For example, in disaggregation you may find that one subpopulation of students performed worse than another. What additional questions could you ask that would help improve the performance of this subpopulation?
  • Other common strategies include establishment of a focused tutoring program, creation of a writing clinic, or the scheduling of study sessions that are facilitated by course instructors or graduate students.