SCHEV Research Data Blog

The official blog of @SCHEVResearch at the State Council of Higher Education for Virginia. Discussions about our work, national higher education data policy, and highlights about the data we publish.

 


Blog Post: The Kludgeocracy, Big Data, and Personalized Public Policy

by Tod Massa 2. November 2013 09:03
Keep off the patio,
Keep off the path.
The lawn may be green
But you better not be seen
Walkin' through the gate that leads you down,
Down to a pool fraught with danger
Is a pool full of strangers.
You're living in your own Private Idaho,
Where do I go from here to a better state than this.
Well, don't be blind to the big surprise
Swimming round and round like the deadly hand
Of a radium clock, at the bottom, of the pool.

B-52s, "Private Idaho"

 

Has higher education policy joined the kludgeocracy?

THE COSTS OF COMPLEXITY

The most insidious feature of kludgeocracy is the hidden, indirect, and frequently corrupt distribution of its costs. Those costs can be put into three categories — costs borne by individual citizens, costs borne by the government that must implement the complex policies, and costs to the character of our democracy.

From Steven Teles's article "Kludgeocracy in America."

I find it difficult to sit through discussions of higher ed funding and affordability without wondering if we are in the midst of one big kludge, because things are so complex. In Virginia we have a funding model that represents what is believed to be the base level of adequate funding for the institutions. This model is based on what comparable institutions in the nation spend, adjusted for program size, degree awards, staffing, and enrollment. It is not rocket science, but it does represent a certain level of complexity. Add to this separate “policy collections” regarding student financial aid and faculty salary goals, plus new models of incentive funding, and the complexity increases further. And then add federal law and policy for student financial aid (and anything Congress can toss into the Title IV umbrella through the Higher Education Act) and you have, shall we say, a LOT of complexity.

It seems difficult to have a clear idea of what we are trying to accomplish through the allocation of resources. Mind you, I am not pointing fingers, as I have some responsibility here. After all, my work over the years has impacted the budget, its recommendations, and the laws of the Commonwealth. I just want to point to the obvious – it is complex.

The one unifying concept that runs throughout all the models and policies above is the belief in data. The models are built on data, with expected outputs described in terms of data. The authorizing laws mandate the collection of the required data, the production of specific reports, and the disclosure of data and metrics.

And now we are in the era of Big Data.

The Wiki gods describe Big Data as “data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.” There is room here for bigness to be relevant to the limits of the analyst, but I know that is not really the point. To me the definition means really super big. Much bigger than the tiny dataset we have at SCHEV on students. Despite the fact that it covers 20 years of student activity and is fairly rich in detail, it is not what I consider Big Data. It is definitely a good size. Perhaps it is Good Data?

Once upon a time, we were taught to be parsimonious in our collection and selection of data: to develop a simple model of what we thought the world looked like and to use a respectfully limited dataset for our analysis. Today’s technology puts all data on the table. Not only can we explore every possible correlation, we can make news with even the most spurious and silly of correlations.
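As a toy illustration of just how cheap those spurious correlations are, here is a small Python sketch. Nothing in it is SCHEV data or SCHEV code; every series is invented noise.

    # A minimal sketch: generate a pile of unrelated random-walk "metrics"
    # and see how strong the best-looking correlation is purely by chance.
    import numpy as np

    rng = np.random.default_rng(0)
    n_series, n_years = 200, 20                      # invented: 200 metrics, 20 years
    walks = rng.normal(size=(n_series, n_years)).cumsum(axis=1)

    corr = np.corrcoef(walks)                        # pairwise correlations
    np.fill_diagonal(corr, 0)                        # ignore each series with itself
    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    print(f"'Best' correlation: r = {corr[i, j]:.2f} (series {i} vs. series {j})")
    # Expect an |r| typically well above 0.9, even though every series is pure noise.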

This week, the Chronicle published a blog article quoting the former chief technology officer for the President’s 2012 campaign as saying, “Big Data is bullshit.”  The argument is that Big Data is the domain of charlatans and frauds selling more stuff. This might be kind of extreme, but he might also be right.

A better criticism is found in this article, also in the Chronicle, from last spring: 

There is no causal link, and we do not need an explanatory story. In the kind of world we live in, you wrestle every day with a swirling mass of inexplicable correlations, and then you die.

“…and then you die.” To quote Mark Petrie in Stephen King’s ’Salem’s Lot, “that’s when the monsters get you.” That might just be where kludgeocracy and Big Data meet: where policy and data are nothing more than a swirling mass of inexplicable correlations.

This, I think, is a needlessly pessimistic view. The models we use at SCHEV are not that complex, and we don’t spend nearly as much time looking for correlations as we spend just trying to show what is happening. We take this approach because it is simply more important for us to establish a common language of experience focused on the who, what, when, and where, not so much the why. Plus, we have a good sense of the limitations of our data, and simple models can tell simple truths.

Simple policies are more likely to achieve their objective as well. Maybe Big Data doesn’t belong in the public policy realm?

If we extend the promise of Big Data to public policy and the kludgeocracy, we can consider the possibility of predicting the behaviors and outcomes of very small groups, perhaps even individuals. Not only might this allow greater, and more successful, interventions with at-risk students, it could also allow us to increase the success of all other students. Moreover, the analytics associated with Big Data could allow us to target the production of specific skillsets needed by industry, in small groups, on a just-in-time basis.
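To make that kind of prediction concrete, here is a minimal sketch in Python. It is not a SCHEV model: the features (GPA, an aid-eligibility flag, first-term credits), the data, and the outcome are all invented, and a real effort would demand far more care about accuracy, fairness, and privacy.

    # Hypothetical sketch: score individual students' probability of success
    # from a few invented features, using synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    X = np.column_stack([
        rng.normal(3.0, 0.5, n),       # invented high-school GPA
        rng.integers(0, 2, n),         # invented aid-eligibility flag
        rng.normal(14.0, 3.0, n),      # invented first-term credit load
    ])
    # Synthetic outcome loosely tied to the features, for illustration only.
    logit = -4.0 + 1.2 * X[:, 0] - 0.6 * X[:, 1] + 0.08 * X[:, 2]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
    print("P(success), one student:", round(model.predict_proba(X_test[:1])[0, 1], 3))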

Pardon me while I take a slight detour. About this time last year I attended a small Educause convening on Big Data and then followed that with a couple of panel discussions at other meetings regarding our wage data. One of the common topics across these events was personalization of data products and how that could extend the reach and effectiveness of measurement. I find the personalization of data to be fabulously interesting and I have some ideas for SCHEV Research along those lines.

Detour now finished.

In fact, it seems to me that the next phase would be personalized public policy. Imagine a world where the broad strokes of policy are nonexistent and every individual is governed by a personalized public policy designed to maximize their opportunity and success. This could be pretty cool. I’m not sure it is possible without a hugely intelligent computing platform behind it, and that gives me pause. I have read and watched far too much dystopian science fiction to be easily comfortable with that idea.

Of course, all this can only work if the future is enough like the past for the models to be predictive.

However, let’s stick to simple. After all, it is a gift to be simple. If we use our recent debt reports as an example, we can say in a very straightforward manner, using lots of data to get there, that median debt for bachelor’s degree completers has increased by nearly $8,000 in the last five years. What we can’t say by merely looking at the data is why that is the case, or, more importantly, whether it is a good thing or a bad thing. We can answer the who, what, when, and where questions. We can even answer how we measured this. To my mind, this is a pretty good start.
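The arithmetic behind a statement like that is itself simple. Here is a toy sketch; the record layout and every dollar figure below are invented, not drawn from our debt reports.

    # Toy sketch: median debt at completion by graduation year, invented numbers.
    from statistics import median

    records = [                       # (graduation year, debt at completion)
        (2008, 0), (2008, 18500), (2008, 22000),
        (2013, 24000), (2013, 26000), (2013, 30500),
    ]

    by_year = {}
    for year, debt in records:
        by_year.setdefault(year, []).append(debt)

    medians = {year: median(debts) for year, debts in sorted(by_year.items())}
    print(medians)                    # {2008: 18500, 2013: 26000}
    print("Five-year change:", medians[2013] - medians[2008])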

 

A final note: longitudinal data are really important. Whenever we talk policy, we should talk about the unintended consequences. This is something we do at SCHEV. However, we need to be mindful that these unintended consequences may occur some time after the fact and may have seemingly no direct relation to our policy domain. As an example, I give you this article http://www.nbcnews.com/business/high-unemployment-blame-high-home-ownership-study-says-8C11511682, which makes the case that high levels of home ownership lead to high levels of unemployment because home ownership discourages mobility for many individuals. Of particular note, the study referenced by the article suggests that there is a five-year lag between home acquisition and the impact on unemployment.

Interesting stuff.
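As a purely illustrative sketch of what "looking for a lag" means with longitudinal data, the Python snippet below scans candidate lags between two fabricated series. The five-year lag is deliberately built into the fake data; nothing here comes from the cited study.

    # Illustrative only: scan candidate lags between two fabricated series
    # and see where the correlation peaks.
    import numpy as np

    rng = np.random.default_rng(2)
    years = 30
    ownership = rng.normal(0, 1, years)                 # fake home-ownership index
    unemployment = np.empty(years)
    unemployment[:5] = rng.normal(0, 1, 5)
    unemployment[5:] = 0.8 * ownership[:-5] + rng.normal(0, 0.5, years - 5)

    for k in range(8):                                  # candidate lags 0..7
        r = np.corrcoef(ownership[: years - k], unemployment[k:])[0, 1]
        print(f"lag {k}: r = {r:+.2f}")
    # The correlation should peak at lag 5, the lag built into the fabricated data.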

At great risk of making an obscure reference, consider this post to be foundational.


Categories: General
