> In studies of expert performance, admissions people are less good at predicting UG [undergrad] GPA than a simple algorithm. (The "algorithm" is simply a weighted sum of SAT score and HS GPA!)
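For concreteness, the kind of "algorithm" the quote describes can be sketched as a linear model. The weights and intercept below are invented for illustration; a real admissions office would fit them by regression on past cohorts.

```python
# A minimal sketch of the quoted "algorithm": predicted undergrad GPA as a
# weighted sum of SAT score and high-school GPA. All coefficients here are
# hypothetical placeholders, not fitted values.

def predict_ug_gpa(sat: int, hs_gpa: float,
                   w_sat: float = 0.001, w_gpa: float = 0.5,
                   intercept: float = 0.5) -> float:
    """Linear prediction: intercept + w_sat*SAT + w_gpa*HS_GPA."""
    return intercept + w_sat * sat + w_gpa * hs_gpa

print(round(predict_ug_gpa(1400, 3.8), 2))  # prints 3.8
```

The point of the studies, of course, is not that these particular weights are good, but that even a crude linear rule like this out-predicts expert judgment.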
If you replace the admissions people with a simple algorithm (or even a complex one), you get a machine that can be gamed very efficiently. In a couple of years, the algorithm is going to suck unless you have a team of experts constantly changing it. (Hi, Google!)
Ironically, the admissions people are currently being gamed by applicants by way of their essays and "portfolios." There is a whole college-application advising industry, the equivalent of "SEO" companies.
This quote also implies that the goal of an admissions officer is to maximize the total UG GPA of admitted students, or at least to be able to predict it.
Isn't the goal ostensibly to build a solid UG class of talented individuals?
[update]
I don't have any data to back this up, but I would even hazard a guess that admissions officers take a certain number of calculated risks on students they know won't be stellar academic performers but who add value in other ways.
They can accomplish a lot just by being human. If an applicant can sell himself to them despite his poor academic achievement, then he can sell poor products to customers, keep employees faithful at crappy jobs, and attract investment dollars to underperforming companies. If he can't, maybe he can get a series of promotions anyway. By acting as "charisma detectors" and "bullshit quality meters" (i.e., normal human beings), admissions officers can spot people who have a future: people it will be beneficial to have the college brand on, and who might donate to the university later.
I am not sure about that - how do you hack the SAT? I know a lot of companies have tried for years, but wouldn't one of them have had a breakthrough by now, if it were possible?
A few friends of mine are in various sorta-financial fields - insurance underwriting, and some sort of accounting thing. They're constantly amazed at how much of what they do could be automated... and then they ignore us when we tell them they should just spend a month picking up Python and lessen their workload.
Then again, their computers are locked down pretty tight, and I guess if you write an app that does your job, you're basically proving to management that they don't necessarily need you.
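As a hypothetical illustration of that month-of-Python payoff, here is the kind of spreadsheet chore a short script replaces. The CSV fields, the data, and the review threshold are all invented for the example.

```python
# Hypothetical underwriting chore: flag policies whose claims-to-premium
# ratio exceeds a review threshold, instead of checking row by row in a
# spreadsheet. Field names and threshold are made up for illustration.
import csv
import io

POLICIES_CSV = """policy_id,premium,claims_paid
A100,1200,300
A101,900,1100
A102,1500,200
"""

def flag_for_review(csv_text: str, max_ratio: float = 0.8) -> list:
    """Return policy IDs whose claims_paid/premium exceeds max_ratio."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["policy_id"] for r in rows
            if float(r["claims_paid"]) / float(r["premium"]) > max_ratio]

print(flag_for_review(POLICIES_CSV))  # -> ['A101']  (1100/900 ≈ 1.22)
```

In practice the input would come from a file or an export, but even this toy version shows how little code the repetitive part of such jobs often needs.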
> I guess if you write an app that does your job, you're basically proving to management that they don't necessarily need you.
This varies depending on where you work. I know at least one investment bank has an explicit policy that if you make your job redundant, it can only benefit you [1]. Also, if you don't have an explicit replacement plan (i.e., a list of people who can do your job), it counts against you at your yearly review.
[1] I know of one case where someone (call them Q) made their own job redundant in 2008 (a time of many layoffs). The bank fired someone else (call them Z) and gave their job to Q (with a tiny pay boost, in a time when most people took big pay cuts).
They need you to maintain the app, add features, change fonts, redesign the interface, make it print prettier reports, get it to run on the new Linux server they bought to put it on, fix the small bug...
The computer can't be trusted to specify the model without insightful human input, though. Correlations that work very well in the short term can break down spectacularly over the longer term.
From the comments below the blog post, a comment I can only hope was intended to be ironic:
> The only thing a loan underwriter really needs to know is loan to value and foreclosure costs & timeframe. Keep it low enough and creditworthiness is basically irrelevant, you will get paid from the collateral no matter what the borrower does.
It's amazing how much was lost from the assumption that foreclosure costs wouldn't vary with average borrower creditworthiness...
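The quoted underwriting logic, and where it breaks, can be sketched in a few lines: the lender recovers at most the collateral value minus foreclosure costs, capped at the loan balance. The numbers are invented. When defaults cluster across a pool of low-creditworthiness borrowers, forced sales depress collateral values and stretch timelines, so the "costs" stop being a small fixed haircut.

```python
# Sketch of the collateral-recovery logic from the quoted comment.
# All dollar amounts are invented for illustration.

def recovery(balance: float, collateral_value: float,
             foreclosure_costs: float) -> float:
    """Lender recovers net collateral proceeds, capped at the loan balance."""
    return min(balance, max(collateral_value - foreclosure_costs, 0.0))

balance = 200_000.0  # an 80% LTV loan on a $250k house

# Benign case: modest costs, stable prices -> full balance recovered,
# and creditworthiness indeed looks irrelevant.
print(recovery(balance, 250_000.0, 20_000.0))  # prints 200000.0

# Stress case: prices fall 25% and foreclosure costs double -> a loss,
# and the loss correlates with exactly the creditworthiness that was ignored.
print(recovery(balance, 187_500.0, 40_000.0))  # prints 147500.0
```

The toy numbers make the comment's assumption visible: "keep LTV low enough" only pays off if collateral values and foreclosure costs are independent of who the borrowers are.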
Risk-management analysis is best performed by a human aided by a computer, yes. Those of us whose jobs are essentially risk-management-oriented would be best served either by becoming that computer-aided human or by finding a different task.
I'd strongly bet that pg has discovered some extremely weird (i.e., extremely human) correlations and uses them, but possibly can't reveal them for fear of ridicule or, worse, of people gaming the process. For example, big nose == great non-technical founder, especially in terms of communication! Something along those lines. :-)