
Evaluating college student retention at mid-year

Lew Sanborne, Senior Vice President
November 30, 2011
Mid-year student assessment provides valuable data for student retention.
Assessing students at mid-year allows campuses to evaluate their student success efforts well before the end of the first year.

“How do we compare to other schools?”

In my work as a consultant, I am asked some variation of this question frequently, especially by campuses trying to compare their student retention initiatives to those of similar institutions. Campuses are becoming increasingly data-driven in their approach to student retention (a great development), and national benchmarks are one way for them to gauge their performance.

There are good national benchmarks for student satisfaction and student engagement, but precious few quantitative measures of retention success. The two retention measures we rely on most heavily are the annually released IPEDS figures for first-time, full-time students: 1) first-to-second-year retention rates, and 2) 150-percent-of-normal-time graduation rates (three years for two-year institutions and six years for four-year institutions). These are supplemented by annually released five-year graduation rates from ACT.
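
For institutional researchers who track these figures in a script or spreadsheet, here is a minimal sketch of the arithmetic behind these two lagging indicators. The cohort fields and numbers below are hypothetical and purely illustrative; they are not drawn from IPEDS or the report.

```python
# A minimal sketch of the two lagging indicators described above, assuming a
# single hypothetical cohort record for first-time, full-time students.
# All field names and figures are illustrative, not taken from IPEDS or the report.

from dataclasses import dataclass


@dataclass
class Cohort:
    cohort_size: int               # first-time, full-time students entering in the fall
    returned_next_fall: int        # members of that cohort enrolled the following fall
    graduated_within_150pct: int   # members who earned a degree within 150% of normal time


def first_to_second_year_retention(c: Cohort) -> float:
    """Share of the entering cohort still enrolled the following fall."""
    return c.returned_next_fall / c.cohort_size


def graduation_rate_150pct(c: Cohort) -> float:
    """Share graduating within 150% of normal time (3 years at two-year schools, 6 at four-year)."""
    return c.graduated_within_150pct / c.cohort_size


if __name__ == "__main__":
    cohort = Cohort(cohort_size=1200, returned_next_fall=912, graduated_within_150pct=624)
    print(f"First-to-second-year retention: {first_to_second_year_retention(cohort):.1%}")  # 76.0%
    print(f"150%-of-normal-time graduation: {graduation_rate_150pct(cohort):.1%}")          # 52.0%
```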

However, while these are important measures to track, they are also lagging indicators. If we spend a year building a program for first-year students, we cannot measure our success until those first-year students return for year two. Measuring the impact of our interventions through graduation rates lags even further, by five or six years.

Obviously, our internal institutional measures can give us a sense of how we are doing compared with previous years or previous cohorts, but what we need are leading indicators. We need to know in October if our fall first-year students are struggling. If fall-to-spring persistence rates are down, we need to act in February to stanch the flow.

In 2008, Noel-Levitz began examining these leading indicators as a way to benchmark student retention at the mid-year point and identify strategies for strengthening student success. The resulting Mid-Year Retention Indicators Report (see the 2011 edition) examines a host of benchmarks, including:

  • Fall-to-spring persistence for first-year undergraduates
  • Fall-to-spring persistence for second-year undergraduates
  • Fall-to-fall retention for first-year undergraduates
  • Fall-to-fall retention for second-year undergraduates (at four-year schools)
  • Fall-to-spring persistence for conditionally admitted first-year students
  • Credit hours attempted, credit hours earned, and the ratio of hours completed to attempted.

Among its findings on persistence and retention, the current study revealed:

  • First-year students at two-year and four-year institutions completed 77 to 93 percent of the credit hours they attempted (median rates), with the highest rates of completion reported among students at four-year private colleges.
  • Between 9 and 19 percent of first-year students at the median failed to persist to the second term.
  • Between 7 and 14 percent of second-year students at the median failed to persist to the second term across institution types, and even more failed to return for their third year at four-year institutions.
  • Conditionally admitted first-year students persisted from the first to the second term at lower rates than their counterparts who were admitted without conditions.

These benchmarks illustrate a growing movement in student retention: analyzing and tracking indicators of student and institutional performance more systematically. These mid-year assessment results have implications for institutional goal setting and strategy development in areas such as:

  • Identifying key retention indicators to measure student success more precisely.
  • Establishing retention goals.
  • Forecasting future retention results.
  • Intervening earlier with students, without having to wait for mid-term grades or referrals.
  • Expanding retention efforts beyond the first year into the second year as well.

See pages 7-8 of the report for more detail on how you can use these metrics to strengthen your retention efforts and raise retention and completion rates.

Credit-hour completion benchmarks

Another problem with our reliance on the two IPEDS measures is that our institutions are highly diverse. At many institutions, first-time, full-time students have become a minority of each incoming class, so the traditional measures do us little good. In addition to first-time, full-time students, we serve transfer students, adults, part-time students, online students, and non-degree-seeking students pursuing job retraining.

To overcome this challenge, Noel-Levitz's benchmark ratio of credit hours earned to credit hours attempted should become an important measure for all colleges and universities. With this measure, each institution can compare its campuswide rate to the benchmark for like institutions. Internally, we can also compare the performance of whatever subpopulations are of concern at each institution. For example, community colleges can compare students in occupational programs to those in transfer programs to those in adult basic skills. Institutions with athletic programs can compare rates for athletes in various sports. Adult degree-completion students can be compared to traditional commuter students. I encourage you to develop a ratio for every population that makes sense at your institution.
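
For institutional research staff who want to replicate this internally, the sketch below shows one way the earned-to-attempted ratio might be computed campuswide and for each subpopulation of interest. The record layout, group names, and numbers are hypothetical and are not drawn from the report or from any Noel-Levitz tool.

```python
# A minimal sketch of the credit-hour completion ratio (hours earned / hours attempted),
# computed campuswide and for each subpopulation of interest. The record layout,
# group names, and numbers are hypothetical, not drawn from any Noel-Levitz tool.

from collections import defaultdict

# Hypothetical student-term records: (subpopulation, credit hours attempted, credit hours earned)
records = [
    ("occupational",       15, 12),
    ("occupational",       12, 12),
    ("transfer",           15, 15),
    ("transfer",           12,  9),
    ("adult_basic_skills",  9,  6),
]

attempted = defaultdict(int)
earned = defaultdict(int)
for group, hrs_attempted, hrs_earned in records:
    attempted[group] += hrs_attempted
    earned[group] += hrs_earned

# Campuswide completion ratio, then one ratio per subpopulation
campuswide = sum(earned.values()) / sum(attempted.values())
print(f"Campuswide: {campuswide:.1%} of attempted hours earned")
for group in sorted(attempted):
    print(f"{group}: {earned[group] / attempted[group]:.1%}")
```

Comparing each group's ratio to the benchmark for like institutions points to where the program improvements listed below may be most needed.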

Analysis of these course completion ratios should point to opportunities for program improvements, such as:

  • Bolster advising programs for underperforming student groups.
  • Enhance placement methods to ensure students are enrolling in appropriate skill development courses.
  • Promote career services to improve the student-program match.

In summary

With these benchmarks, your campus can see how it is performing compared to its peers. I encourage you to check your institutional persistence rates against these benchmarks. Spread the word and celebrate if you are ahead of these norms; if you are behind, evaluate your current approach and identify opportunities for improvement.

Also, consider using the new mid-year benchmarks to help identify opportunities for improvement at your institution. Start by looking at your internal term-to-term, year-to-year, and cohort-to-cohort comparisons to evaluate the programs you currently have in place. Where you see a gap between your institutional performance and the benchmarks, look to strengthen your programmatic interventions.

Questions or comments? I invite you to contact me by e-mail or by calling 1-800-876-1117.


About the Author

Lew Sanborne

Dr. Lew Sanborne is RNL's leader in strategic enrollment planning. He offers three decades of experience in higher education and enrollment management, with a range of expertise including annual and strategic enrollment planning, student success...


Reach Lew by e-mail at Lewis.Sanborne@RuffaloNL.com.

