
Assessment FAQs

Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences. Assessment occurs at the course, department, college, and institution levels. The University of Tennessee, Knoxville, is accredited by the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), yet assessment practices at UT extend beyond SACSCOC to include programmatic accreditation agencies as well as our own practices of assessment. Assessment is not just important to our accreditors; it is also important for student learning and the continual improvement of our University’s programs.

Every academic program on campus undergoes an annual assessment reporting process. All academic programs are covered by the institution’s SACSCOC regional accreditation, and some academic programs also hold programmatic accreditation from an external accrediting body. SACSCOC requires UT to document departments’ progress in the Decennial and Fifth-Year Interim Reports. The next Interim Report will be due in March 2021 and will need to include multiple years of assessments, which means that all departments must report on their assessments every year. Once those reports are completed, they are reviewed by the respective colleges and by members of the Assessment Steering Committee.

All reports should have the following:

  1. Student learning outcomes
  2. A description of the direct and indirect methods used to assess those learning outcomes
  3. An analysis and discussion of the results of the assessments and a plan for use of the results to improve student learning (what the department will do, based on the assessment data, to improve the program).

The Assessment Steering Committee uses a rubric to determine the strength of annual assessment reports. Each department should have learning outcomes that describe the competencies that students in the program should master by the time they graduate. The learning outcomes for the following term are usually discussed/developed/revised in late spring. Once they have been established, the faculty in the department must decide how they will measure student performance in these areas. This is generally also decided in late spring or during the summer semester.

In the subsequent fall and spring semesters, data from the chosen assessments are collected. During the spring semester, faculty discuss the results and, if the data reflect a need for improvement, develop a plan describing what they will do as a department to improve the program. If the data reflect adequate improvement, no action may be needed for the following year. Irrespective of the results, all information is reported by the program in the Planning module of Compliance Assist by September 15th (or earlier, depending on the college).

A student learning outcome (SLO) is a statement that describes what a student should know once they complete a course or a program. A good SLO describes an observable behavior that can be measured within a specific time frame (e.g., by the end of a course, by the time the student graduates, etc.). Generally, SLOs use active verbs from a level of Bloom’s Taxonomy or a similar taxonomy. For more information about developing good SLOs and Bloom’s Taxonomy, please refer to TLC’s Programmatic Assessment page.

Formative assessments are used to gauge student learning before or during a learning activity or unit. Evaluative activities in this category can be used before or during a class session to give the instructor feedback on what students know before a subject is introduced, how well students are understanding new material, and how effective the instruction has been. These activities generally take little time to complete and can therefore occur one or more times during a class session. They are also typically low-stakes, meaning that they have little to no impact on a student’s overall grade. The main purposes of formative assessments are to guide and adjust instruction and to help students see what they need to know or improve upon. Some examples of formative assessments include one-minute “papers” in which students summarize in their own words what was covered during a class session, informal quizzes, short homework assignments, class discussions, and mid-semester class evaluations. These are not always discussed in a program’s annual assessment report, but they can be useful in preparing students for the summative assessments that will be reported.

Conversely, summative assessments are activities that evaluate how effectively students have met a learning outcome. These assignments typically take place after instruction for a unit or module in a course has ended. Some examples of summative assessments include, but are not limited to, final exams, oral presentations, portfolios, group projects, and papers. These activities are typically discussed in annual assessment reports.

For more information about formative and summative assessments and how they can be used in conjunction with one another, check out the Course Assessment page on the TLC website at http://tenntlc.utk.edu/course-development/assessment/. A more detailed list of examples of formative and summative assessments can be found in the document entitled “Assessment Toolbox,” available here.

Direct assessment is used to determine the level of student learning achieved against established learning outcomes. Activities in this category usually have a direct impact on measures of student performance (e.g., grades in a course). Some examples of direct assessment include, but are not limited to, exams, quizzes, oral presentations, dissertations, theses, essays, and portfolios. Indirect assessment is typically used to evaluate the quality of student learning experiences. For instance, students might be given a survey to gauge their perceptions of their growth in a skill as a result of a class or a study abroad experience. They might also evaluate the quality of instruction in a course or during a service-learning experience. Some examples of indirect assessments include self-efficacy surveys, end-of-course evaluations, focus groups, and questionnaires for alumni regarding program effectiveness and retention.

Both forms of assessment should be completed as part of the curricular or co-curricular programs. For more information about direct and indirect measures, please read this article on Cornell University’s Center for Teaching Excellence website: https://www.cte.cornell.edu/documents/Direct%20Indirect%20Measures.pdf.

Conference papers and presentations cannot be considered a form of direct assessment because they are not requirements for all students, and they are usually not evaluated by program faculty. Such work is generally categorized as an indirect assessment of student learning because it is reflective of the quality of the student learning experience in a program. However, if program faculty decide to score or evaluate conference papers or presentations as part of a course, they can consider the student work a direct assessment.

The terms formative and summative assessment refer to types of assessment. Direct and indirect assessments refer to methods of assessment.

A formative assessment can be either direct or indirect, given that it is a low-stakes activity used to guide instruction as the instructor leads students toward fulfillment of a learning outcome they have set. Having students complete a midterm self-assessment of their mastery of an outcome is an example of a formative/indirect assessment because the activity gives the instructor information about the quality of the students’ learning experience while also providing feedback that can be used to improve course delivery.

A summative assessment is always direct because it compares a student’s performance to an established learning outcome. An exam for a class, for example, is a summative/direct assessment.

Read last year’s report for the program, which is available in Compliance Assist. The report will tell you what the program’s outcomes are and the faculty’s plan for assessing them this year. If you are unable to access the system, contact the Faculty Consultant for Assessment in UT’s Teaching and Learning Center via email at tenntlc@utk.edu, the Office of Accreditation at SACS_Liaison@utk.edu, or the Office of Institutional Research and Assessment at assessment@utk.edu to request access.

Annual reports are submitted via the Planning module of Compliance Assist no later than September 15th, unless the college has established an earlier deadline.

Q: How many outcomes should there be for a program?

As a basic rule, faculty should strive to assess at least three student learning outcomes (SLOs) for a program each year. However, for programs with enrollments of 5 to 20 students, the number of outcomes assessed can be smaller. It is recommended that departments do not assess more than five SLOs per year to keep the reporting process manageable. If you are unsure how many outcomes your department should assess, or need assistance in revising or writing your learning outcomes, please contact the Teaching and Learning Center via email at tenntlc@utk.edu or by phone at 865-974-3807. For more information about the reporting process and the requirements for programmatic assessment, please see the FAQ “What should be included in the annual report, and how does the reporting process work?”

If an academic program has a small number of students, the assessment coordinator should continue to assess the few students enrolled, and data can be stored in the Planning module; however, the assessment coordinator should put the outcomes on “extended cycle” to indicate a rolling review of students. Here is an example:

“The individualized nature of the certificate program and small enrollments require combining evaluations across academic years. Data will be collected annually, but for most objectives, assessment will take place as a rolling review of students who have completed the course in the most recent three years” (Women’s Studies Certificate).

To place an outcome on extended cycle, follow the instructions for entering results and, after clicking the Edit tab, go to the AY End Date field and adjust the year. Provide a justification for the extended cycle in the Notes section of the report. Then, under “Progress,” select “Extended Cycle” from the drop-down menu. After making all the necessary changes, scroll down and click the Save and Close button to save your work.

Course grades cannot be used as an assessment method because what they measure goes beyond a single outcome (e.g., they may also take into account attendance, quality of writing, etc.). For the purpose of assessing student learning outcomes (SLOs), the method must be outcome-specific, and a course grade provides little information about what could be enhanced to help students more effectively master the outcome. An alternative to a course grade could be a grade on an assignment whose focus is to demonstrate the outcome. Another option is to submit a sample of student work focused on the outcome from a select group of courses and have the assessment group examine the artifacts using a rubric or criteria list.

If the sole purpose of the test is to measure one specific student learning outcome, the grade on the test can be used as a measure. If the test measures several outcomes, sub-scores for relevant questions should be used for each outcome.
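For a concrete (and purely hypothetical) illustration of how sub-scores might be pulled out of a multi-outcome test, here is a minimal Python sketch. The question-to-outcome mapping, the outcome labels, and the scores are invented for the example and are not drawn from any UT program or system.

```python
# Illustrative sketch (hypothetical data): computing per-outcome sub-scores
# from a single exam whose questions map to different SLOs.

# Map each exam question to the SLO it measures (hypothetical mapping).
question_to_slo = {
    "Q1": "SLO1", "Q2": "SLO1", "Q3": "SLO2", "Q4": "SLO2", "Q5": "SLO3",
}

# Points earned per question for one student (hypothetical scores, 10 points each).
student_scores = {"Q1": 9, "Q2": 7, "Q3": 10, "Q4": 6, "Q5": 8}
points_possible = {q: 10 for q in question_to_slo}

# Accumulate earned and possible points per SLO.
earned, possible = {}, {}
for q, slo in question_to_slo.items():
    earned[slo] = earned.get(slo, 0) + student_scores[q]
    possible[slo] = possible.get(slo, 0) + points_possible[q]

# Report each sub-score as a percentage, one per outcome.
for slo in sorted(earned):
    pct = 100 * earned[slo] / possible[slo]
    print(f"{slo}: {pct:.0f}%")
```

The same idea scales to a whole section: sum each student's earned and possible points per outcome, then report the outcome-level percentages rather than the overall test grade.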

The university currently uses several modules offered by Campus Labs in order to collect data using assessments. Annual programmatic outcome reports are entered into the Planning module. The new Outcomes module can be used to enter course-level data. The Baseline module can also be used to collect survey and rubric data. In addition to the Campus Labs modules, several programs utilize the survey tool Qualtrics to collect data for the assessment of student learning.

While there is no set scale for program rubrics, a scale of four to five levels is generally acceptable, and three levels provide a baseline for student performance. For example, it is not uncommon for departments to use program rubrics with the levels Excellent, Proficient, and Beginner. In most cases, it is useful to start with a three-point scale, grade a small sampling of student work to check the validity and user-friendliness of the rubric, and then add levels as needed. For instance, as an extension of the Excellent, Proficient, and Beginner categories, one can adjust the scale as follows: Excellent, Proficient, Developing, and Beginner. On the other hand, rubrics with fewer than four levels or more than five will almost always have validity issues and are often difficult to score. For assistance with creating a rubric, please contact the Teaching and Learning Center at tenntlc@utk.edu or 865-974-3807. For additional resources about rubrics in general (e.g., how to make them, samples, and types of rubrics), check out the following websites:

There are three main benefits to using a rubric or checklist (though, no, it is not a requirement). First, rubrics make grading easier: because the requirements are explicitly included on the document itself, instructors do not have to spend as much time writing feedback. Second, if a rubric is created with the student learning outcomes (SLOs) in mind, it facilitates the reporting process. For example, if faculty want to assess student performance in oral presentation and writing proficiency in a single assignment, they may create one rubric that measures both; in their report, however, they may discuss oral presentation and writing proficiency as two different SLOs. A rubric isolates specific data about each outcome so that reporting is easier for departments and programs. Finally, rubrics are an effective means of communicating expectations to students.

Appropriate sample size varies according to the academic program. To determine the appropriate sample size for an assessment report, it is helpful to look at trends of student involvement in the program over time. In larger departments, it is not uncommon to have a sample size of 30-100 students, while in smaller departments, such as Interdisciplinary Programs, a sample size of 5-10 students is common. In smaller departments, any sample size below 5 students may be considered too small, and it is recommended that the outcomes be put on extended cycle so that faculty continue to collect data until the sample size is sufficient for analysis. Generally, a good sample size for any program is at least 20% of student enrollment in the program, with a minimum of 5 students; for a total program enrollment of 200 students, for example, this works out to 40 students (see the sketch after the table below). The following table includes some examples of sample sizes in assessment reports across disciplines.

Department | Sample Size
General Education | 20% of enrolled students
Global Studies – BA | 17 students
Mathematics – BA | 11 students
Women’s Studies – BA | 15 students
Sociology – BA | 39 students
Computer Engineering – BA | 18 students
Nursing – BA | 51 students
Social Work – Ph.D. | 5 students
Accounting – MA | 108 students
Food Science and Technology – MA | 6 students
Music – BA | 19 students
Graphic Design – BFA | 31 students
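As a rough illustration of the “at least 20% of enrollment, with a minimum of five students” rule of thumb described above the table, here is a minimal Python sketch. The function name and the enrollment figures are hypothetical and are not part of any UT reporting tool.

```python
import math

def recommended_sample_size(enrollment: int) -> int:
    """Apply the rule of thumb above: at least 20% of program enrollment,
    with a floor of 5 students (and never more than the enrollment itself)."""
    return min(enrollment, max(5, math.ceil(0.20 * enrollment)))

# Hypothetical enrollments to show how the guideline scales.
for n in (8, 40, 200):
    print(n, "->", recommended_sample_size(n))
# 8 -> 5, 40 -> 8, 200 -> 40
```

The floor keeps very small programs from sampling only one or two students, while the cap simply acknowledges that a sample can never exceed the number of students actually enrolled.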

Interpret the results and compare them to last year’s results. Present the data and highlight any improvement in your explanation.

Evidence of improvement involves any positive change from one year to the next. To determine whether there has been improvement, faculty compare the results from the current evaluative year to the results from the previous cycle. For example, note the following outcome:

Learner Outcome #2: Students will perform at or above the national mean on North American Veterinary Licensing Examination (NAVLE).  – (Veterinary Medicine)

If 55% of students performed at or above the national mean on the NAVLE in Spring 2014, and 65% of students scored at or above the national average in Spring 2015, there is evidence of a 10-percentage-point improvement from one year to the next. These data should be presented and explained in the report. Let’s say, though, that the outcome were changed to the following:

Learner Outcome #2: 75% of students will perform at or above the national mean on North American Veterinary Licensing Examination (NAVLE).  – (Veterinary Medicine)

Although this benchmark was not met, the previously stated data (55% of students in Spring 2014 and 65% in Spring 2015) would still show some evidence of improvement. This growth should be explained in the Assessment Analysis and Results section of the annual report.

In Compliance Assist, there is an option to upload files in the report to provide evidence of student mastery or non-mastery of the established student learning outcomes (SLOs) and to demonstrate that faculty are making strides toward improving their programs. Note that Compliance Assist only supports the following file types: .pdf, .doc, .docx, .htm, .html, .ppt, .pptx, .xls, and .xlsx. Some documents that might be helpful to upload include, but are not limited to:

  • Samples of student work: These can be exams, papers, artwork, videos or any other projects that are used to directly assess established SLOs. All names on student work must be redacted before uploading. It is recommended that faculty upload a small sample of work that is representative of the range of student ability (e.g., a report may include one example each of excellent, average and/or poor work) to show how the assignments were evaluated. If a video is used, it is best to attach a link to a website instead of trying to upload the file.
  • Rubrics, checklists or criteria sheets used to score assessments: These can be completed and the results summarized in a table to demonstrate students’ overall performance or they can be uploaded without the results to show the criteria by which the assessments were evaluated.
  • Minutes of faculty meetings: These notes are useful to document when program faculty meet to discuss the results of the assessment completed during the academic year and make decisions about what changes – if any – should be made for the following assessment cycle. These files are typically uploaded either in the “Actions Taken” or the “Assessment Analysis” sections of the report.
  • Self-assessments, alumni questionnaires, senior exit surveys: These documents can be uploaded as examples of indirect assessments used to evaluate program effectiveness and the quality of student learning experiences.
  • Exams, capstone projects, writing prompts: These are examples of direct assessments that can demonstrate that a specific learning outcome or set of learning outcomes was assessed during an assessment cycle. When uploading this type of evidence, it is necessary to point out which parts of the assignment were used to assess the particular outcome.

In UT’s Compliance Assist, click on the “My Dashboard” tab, then the “Academic Assessment” tab (next to the role tab). Click on the outcome to open the form, then on the Edit tab at the top of the form. Fill in all fields marked “required.” See the Planning Module Guide under Resources on the UTK SACSCOC Accreditation website (http://sacs.utk.edu/resources/) for a step-by-step guide to assessment reporting. If you need additional assistance working in the system, contact assessment@utk.edu.

Email your department head and the dean of your college to inform them that your report is complete. The report will be reviewed by your college and then by a member of the Assessment Steering Committee. You will receive comments and suggestions via the Feedback Form within Compliance Assist. Once the feedback has been received, you can update the report and resubmit it for review.

Once the results have been collected, the next step is to interpret the data. This involves asking yourself and your colleagues the following questions:

  • What was the benchmark? A benchmark is a quantifiable means of determining whether students have satisfied a learning outcome. For example, note the following outcome:
    Graduates of the UTK Chemical and Biomolecular Engineering program communicate effectively in writing, speaking, and listening in a variety of contexts (Chemical and Biomolecular Engineering, BS).

In the Direct Assessment Method(s) description box of the report, the reporter identified the benchmark as:
“…80% of students are expected to maintain a 70% average in all graded components of the course, including written and oral reports.”

Setting a benchmark allows departments to quantify the student success rate in meeting an outcome while clearly defining areas where growth is needed.

  • Once a benchmark has been established, what do the data tell us? Now that a benchmark has been set to define what success looks like in terms of fulfilling the outcome, faculty can begin to organize and report their findings in the “Assessment Results and Analysis” portion of the report. In keeping with the previous example, the department might look at the graded components of the course and find that 85% of students maintained an average of 70% or higher in their written and oral reports. Therefore, the analysis section might say something like this (a short computational sketch of this benchmark check appears after this discussion):
    “…85% of students maintained an average of 70% or higher in their written and oral reports. This indicates that students are meeting the learning outcome.”

However, if the outcome is not met – imagine that only 60% of students met the learning outcome – the faculty must not only state this, but they will also want to discuss possible factors that may have contributed to the students not meeting the outcome.

  • What factors might have contributed to student failure or success in meeting an outcome? In addition to communicating the results, faculty should also think about what might have caused the results. Was there a change in the curriculum? Were students lacking in a certain skill? Was there a change or a loss in personnel? This discussion will also go in the “Assessment Results and Analysis” section of the report.
  • Now that we have discussed the results, how do we move forward? Assessment is an ongoing process where the ultimate goal is improvement. Therefore, after looking at the data and hypothesizing about what worked and didn’t work in terms of curricular activities, it is important to think about what should be done to enhance student learning and to improve the program curriculum. Imagine that 80% of students in the Chemical and Biomolecular Engineering Department met the learning outcome. The department might respond in one of two ways:
  • Faculty may decide that, because students met the established SLO, no actions should be taken to alter the curriculum, or
  • Faculty may decide to change the benchmark to say “…85% of students are expected to maintain a 70% average in all graded components of the course, including written and oral reports.”

Should the faculty decide to change the benchmark or the learning outcome itself, they will need to indicate this in the “Actions Taken” section of the report. Such changes can be documented in minutes or notes from the faculty meeting where they were discussed.

Conversely, if students did not meet the SLO, the faculty will want to explore what they can do to help students reach the benchmark they set. An effective strategy might involve a change in the curriculum – in this case, creating a Technical Writing course for engineers, etc. – or providing students with extra tutoring opportunities. Irrespective of the decision, the actions explored should be reported in the “Actions Taken” section of the report.
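To make the benchmark arithmetic in the Chemical and Biomolecular Engineering example concrete, here is the short computational sketch referenced above. The student averages are invented for illustration, and the check simply mirrors the stated benchmark that at least 80% of students maintain a 70% average; it is not an official reporting tool.

```python
# Illustrative sketch (hypothetical data): checking the example benchmark that
# at least 80% of students maintain a 70% average on written and oral reports.
BENCHMARK_SHARE = 0.80   # proportion of students expected to meet the target
TARGET_AVERAGE = 70.0    # minimum graded-component average, in percent

# Hypothetical course averages for a small cohort.
student_averages = [92.5, 74.0, 68.0, 81.5, 77.0, 88.0, 65.5, 90.0, 73.5, 79.0]

meeting_target = sum(avg >= TARGET_AVERAGE for avg in student_averages)
share = meeting_target / len(student_averages)

print(f"{share:.0%} of students met the {TARGET_AVERAGE:.0f}% target "
      f"(benchmark: {BENCHMARK_SHARE:.0%}) -> "
      f"{'benchmark met' if share >= BENCHMARK_SHARE else 'benchmark not met'}")
```

Running the sketch on these made-up averages reports that 80% of students met the target, which is exactly the benchmark; a lower share would flag the outcome for discussion in the “Assessment Results and Analysis” and “Actions Taken” sections.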

Once the data have been collected and analyzed, there are a number of actions that faculty can take to address the needs of the students in their programs. These actions must be reported in the Planning Module of Compliance Assist in the “Actions Taken” section of the report. The following are some examples of actions taken as derived from other reports:

  1. Course Revision

Use: For changes made to course content, such as adding a new unit, revising a required assignment, changing a required textbook, adding a practicum rotation, or adopting a common syllabus for a multi-instructor course.

 Example 1: Revised persuasive speech evaluation rubric to include intercultural component; initiated textbook revision to include intercultural content; collect baseline data for review of delivery and content components of informative speech. (Communication Studies-BA)

 Example 2: Faculty instructors and clinical mentors who supervise CFS 470 students will … begin instruction on the assessment of young children earlier in the semester. (Child & Family Studies-BS)

  2. Curriculum Revision

 Use: To reflect curricular changes including adding a new course, modifying the sequencing of courses, changing prerequisites, and dropping a course.

 Example 1: As a faculty, we have approved the creation of HIST 299, which will be a requirement of the major beginning in 2015-16 and a prerequisite for HIST 499. HIST 299 will place emphasis on teaching and learning historical thinking in small-format, seminar-like settings. This course will have many different iterations, depending on the particular subject specialty of the faculty teaching it. But the emphasis in each case will be on intensive reading, problem solving, modeling critical analysis, and lab-like exercises designed to offer hands-on training in historical methods, research, and writing. (History, BA)

Example 2: Instructors of all 100- and 200- level classes must assign at least one piece of formal writing which will include instructions requiring students to discuss a play or performance through a broader historical, social, or theatrical context. (Theatre-BA)

  3. Faculty Development / Training

Use: For activities aimed at preparing faculty to teach or assess a learning outcome more effectively, including training practicum supervisors, convening norming sessions for faculty using a program rubric, etc.

 Example 1: We are also planning to offer workshops in “how to teach HIST 299” beginning in summer 2015. Faculty who choose to participate in these workshops will work together to develop standards for the course (length of writing assignments, basic skills to be learned, etc.) and to exchange ideas about how to teach it. (History BA)

Example 2: To continue to improve critical thinking scores, a faculty workshop will be created during Fall 2014 to help faculty understand how to incorporate critical thinking into the curriculum. (Hotel, Restaurant & Tourism-BS)

For examples of possible actions taken for both graduate and undergraduate programs and how they can be worded in the system, please see the document entitled “Examples of Actions Taken” provided here.
