(Welcome to Part 3 of a series we’re calling How Enrollment Predictions Are Driving Colleges Out Of Business. Don’t miss our previous entries: the Series Introduction; Part 1, Treating Adolescent Decision-Making as Linear; and Part 2, Not Adequately Testing Models in Real World Scenarios.)
Trying to Forecast October’s Weather on January 1
The Old Farmer’s Almanac is a veritable treasure trove of goodness. Want to know the order in which you should plant your beans and cabbage, your horoscope a year in advance, or the best recipe for Irish soda bread? Covered.
You know what it’s not good at? Weather forecasting.
Indeed, the Old Farmer’s Almanac, published out of Dublin, New Hampshire, has long issued extended-term weather predictions based on the positioning of planets, sunspots and tidal patterns. How quaint, you might say: a quasi-scientific industry clinging to folk theories that at one point dominated meteorological science, predicting Halloween’s weather before Martin Luther King Day. Endearing, even: grandmothers and Farmer Greyhair relying on the reputation of the almanac to accurately predict complex weather patterns months in advance.
In many ways, though, the Strategic Enrollment Management (SEM) prediction industry has taken this very approach to forecasting the highly volatile and evolving behavior of adolescents. Many of the predictions being made are calculated at the beginning of an admissions cycle and rarely, if ever, revisited.
Why is this a horrible idea?
Many conditions are simply not yet known, which is why Nate Silver — in his ground-breaking book The Signal and the Noise: Why Most Predictions Fail But Some Don’t — pointed out how much more accurate a 24-hour weather forecast is than a 10-day one.
Higher education must plan its prediction methodologies similarly. Student behavior ebbs and flows, as students move toward, away from, and then potentially back toward the schools they are considering.
Famous on-air salesman and purveyor of the “Chop-o-Matic,” Ron Popeil may have been on to something with his “set it and forget it” rotisserie cookers. But, then again, Ron was never a dean of admissions. Set it and forget it might work for a pot roast, but it is not an effective strategy for building a diverse and prepared class.
My colleagues and I at Capture Higher Ed have been working on an alternative to the current practice of training a model once and filtering analysis through it over the course of a year (or even two). Instead, we believe that the dynamic nature of SEM demands a more flexible approach in which a process called “feature selection” can adjust how the model performs midstream.
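To make the idea concrete, here is a minimal sketch of re-running feature selection as new behavioral data arrives during a cycle, rather than freezing the feature set in January. This is my own illustration, not Capture Higher Ed’s actual method; the feature names, applicant records, and the simple correlation-based ranking are all assumptions.

```python
# Sketch: re-rank candidate features against the enrollment outcome each
# time the model is revisited. All data and feature names are hypothetical.

from statistics import mean

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

def select_features(records, outcome, candidates, top_k=2):
    """Keep the top_k features most correlated with the outcome so far."""
    ys = [r[outcome] for r in records]
    scored = [(abs(correlation([r[f] for r in records], ys)), f)
              for f in candidates]
    scored.sort(reverse=True)
    return [f for _, f in scored[:top_k]]

# Hypothetical applicant records observed early in the cycle.
early = [
    {"visits": 3, "emails_opened": 10, "distance": 40,  "enrolled": 1},
    {"visits": 0, "emails_opened": 1,  "distance": 500, "enrolled": 0},
    {"visits": 2, "emails_opened": 8,  "distance": 60,  "enrolled": 1},
    {"visits": 1, "emails_opened": 2,  "distance": 300, "enrolled": 0},
]

features = ["visits", "emails_opened", "distance"]
print(select_features(early, "enrolled", features))
```

Run again in March with March’s records, and a different subset of features may win out — which is the point: the model’s inputs track the cycle instead of the calendar.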
Enrollment predictions are a special type of “self-cancelling prediction”: an admissions shop that receives them will naturally act on them, working either to ensure the predicted outcome occurs or, if it is less than optimal, to prevent it.
In this way (and hopefully only in this way), enrollment predictions are similar to flu outbreak predictions. When public health organizations like the Centers for Disease Control and Prevention (CDC) predict greater-than-normal flu levels in the coming year, the broader health industry reacts by producing more vaccines and publicizing the need for people to get their flu shots.
As a result, flu incidence falls below the original prediction. The prediction was not inaccurate per se; rather, an intervening collection of actions changed the trajectory of the predicted outcome. So it goes with enrollment predictions.
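The mechanism is easy to state in a few lines. The numbers below are toy values I made up for illustration — they come from neither the CDC nor any enrollment model — but they show how acting on a forecast shifts the very outcome the forecast described.

```python
# Toy self-cancelling prediction: the forecast triggers a response,
# and the response moves the outcome away from the forecast.

def realized(predicted, response):
    """Outcome after interventions avert a share of the predicted cases."""
    return predicted * (1 - response)

static_forecast = 1000   # cases (or lost deposits) forecast in January
response = 0.4           # share averted because people acted on the forecast
actual = realized(static_forecast, response)
print(static_forecast, actual)  # prints: 1000 600.0
```

The forecast did not miss by 400; the gap is the intervention working. A model revisited mid-cycle would fold that intervention into its next prediction, instead of being graded against conditions that no longer exist.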
How, then, can any serious enrollment prediction be made once and not adjusted as conditions unfold throughout an enrollment cycle? Doing so represents a fundamental misunderstanding not only of the dynamic nature of enrollment management but of the nature of prediction itself.
Look for Part 4 of this series, “Choosing Interpretability Over Accuracy,” next week.
By Thom Golden, Ph.D., Vice President of Data Science, Capture Higher Ed