Here's a commenter at Mini-Microsoft explaining why it probably became more toxic over time, and it looks like these changes happened before the author arrived:
My memory is really good - [ current head of HR ] is the one who did away with the "life-time performance average" which let people take huge risks and made Microsoft great.
THAT, was huge BillG wisdom. You could look at your score over time and do some really simple math to see what kinds of risks you could take. And then suddenly, a 15-year running average of 4.0 turned into "your current manager doesn't like you, and he's required to fire one of his 10 no matter what, so you're fired".
She also did away with the standing rule that you could take any written standing offer - NO MATTER WHAT (unless you were actually fired for cause) - which was Bill's way of ensuring that great engineers couldn't be tossed out by one horrible ass-kissing manager ... which is pretty much all of them today.
The day that it became possible for a single crappy manager to fire someone for saying "I disagree" or "That's Wrong", Microsoft died.
And when's the last time you saw that lovely deck of laminated cards that started with "a passion for technology" ... and a really logical employee development plan? yea, right, somewhere around 1995.
> She also did away with the standing rule that you could take any written standing offer - NO MATTER WHAT (unless you were actually fired for cause) - which was Bill's way of ensuring that great engineers couldn't be tossed out by one horrible ass-kissing manager ... which is pretty much all of them today.
I've read this about ten times and still don't understand it. Do you know what it is supposed to mean?
What that means is that if your manager is terminating you with extreme prejudice (but "without cause", e.g. not for stealing from the company), he's not allowed to be a gatekeeper and prevent you from joining another team. E.g. one that knows you are great and your manager is of the "horrible ass-kissing" variety.
Let me supply another comment, which shows how terribly bad some managers at very high levels have become:
O.k. here are my 3 favorite Microsoft "principal consultant" experiences:
1. I had this really brilliant L-62 dev who couldn't believe just how STUPID the "chief architects" (L-65 and up) really were. So, one day, one of them walked into our office and took great offense because we'd changed his slides ...
Without even blinking, he said "you can change the text all you want, but you can't change my pictures, because that's all people look at."
As he was trying to explain to the guy that the "pictures" came from the actual "data" ... I suddenly realized that this moron didn't actually know where charts came from, so I excused myself and waited in the hallway for the really great partner consultant (who thought I had a really bad attitude problem) to come flying out, desperately in need of a drink.
It took 10 minutes. I took him out for a cup of coffee so no one saw him shaking, throwing things, and revisiting his whole opinion of Microsoft.
Seriously, that one wins - but not by much
2. And the same great dev marched off one day to tell the lead architect that the product he was planning to use just didn't work that way ...
And that one's just plain funny. The "Architect" (whom I wouldn't hire to build a computer for a 5-year-old) looked really annoyed, pulled out a glossy marketing folder, pointed, and said "of course it does - IT SAYS SO, RIGHT HERE."
Right. Bill Gates would have just shot these people and put their heads on stakes around the castle.
[ 3rd example is a bit more intricate but just as bad. ]
Note these are comments by and for Microsoft insiders and former employees. Mini-Microsoft really wants the company to win, although I think he's largely given up by now.
"Partner" is a Microsoft position. Consultant, besides the obvious, I don't know. We really don't need to know anything other than that he's above the author.
"bad attitude problem"
Because the author had previously been pointing out these sorts of problems. Note the ending where the partner consultant was "revisiting his whole opinion of Microsoft."
"desperately in need of a drink"
Well, given that they then drank coffee, only so much; but I originally took it as his really being in need of some alcohol, probably deferred due to the time of day.
Perhaps; it's certainly unclear enough I normally wouldn't forward it, but I think the burning managerial incompetence shines through the confused prose.
I mean, a manager who doesn't understand that the "pictures" everyone focuses on are charts based on current data, which may change to his detriment? An "architect" who defeats challenges to his designs with glossy brochures?
Hire a bozo at this level for your startup and it will probably die. That they hold high positions of authority in an $80 billion technology company is appalling with two 'p's.
I think the problems started when they hit their monopoly phase and Windows became a cash cow. It probably looked at that point like they had "won" and all they had to do was keep the upgrade chain going. Don't need to innovate; that would rock the boat and risk the cow.
Per the article: fostering a system that allows for taking big risks and possibly failing, while showing a track record of succeeding over time, is much more beneficial to a company than yearly or half-yearly cycles producing a number that has to fit a curve, starting with comparing you against the peers in your group. Projects, life circumstances, and many other factors can have a one-time effect.
Also, yes, Windows and Office are cash cows. But MSFT is a company of 90k+ people; they aren't all working on the cash cows. There are individuals trying to create new products, enter new markets, etc. It is a complicated ecosystem. I'm pretty sure it was new HR and HR policies, along with Bill relinquishing control, that spurred a change in dynamics.
Strictly speaking, a monopoly / cash cow phase creates a context in which bad practices will not kill your company and are not removed by natural selection. It allows these bad practices to survive and thrive, but it does not create the practices themselves.
I think the distinction is important, because removing the cash cow will not fix the problems, but instead let these problems run the company into the ground. RIM is probably a perfect example of this process, and Microsoft is not, which gives credibility to the idea that, for all its failings, Microsoft has in fact been able to embrace a number of innovations and new markets (Xbox, Azure, enterprise...).
Interestingly, some classic monopolies, like the old AT&T and Xerox, innovated tremendously but would fail to capitalize on those innovations (Bell Labs & Unix, Xerox and the Mouse/Menu/etc GUI).
I think this is the more common failure mode - a lot of innovative work happens at big companies, but those innovations never make it to market because commercializing a product requires a large investment in, well, finding customers, and the innovators at big companies are not empowered to do this.
There is still a ton of innovative research work coming out of Microsoft, Intel, and even IBM, but typically instead of being commercialized and profited from, it languishes until the company shuts down the research lab and the scientists involved get jobs elsewhere.
The original AT&T was an official government monopoly, so the very existence of Bell Labs was in part a tax it paid to avoid problems. As I recall it also wasn't allowed to do things like sell UNIX(TM) for serious money prior to the 1982 United States v. AT&T consent decree ... which I notice is the same year UNIX System III was released.
That isn't even an example of disruptive innovations almost never happening in established companies; Xerox just wasn't doing that sort of thing to begin with. Well, they bought Scientific Data Systems at the top in 1969 and fumbled that by the middle of the '70s ...
I remember one comment at a perf review from my boss when I worked at MS. There were two of us on the team of 35, who were working on a certain tenet. My boss said "next review season, I'm going to go around and ask people who's the [tenet] expert. It should be you, not him."
So although we were working together, we were also competing. And yeah, when he asked, I was the expert. I think a large part of that was because whenever the two of us got together to ping people about our tenet, it was in my office, so the emails came from me, and the bug updates were under my name. I did that half-intentionally, and feel a little bad about it.
Thank you for reminding me that "tenet" is one of those words that has a weird MSFT-cult meaning, and an entirely separate meaning to those who are more sane of mind. :-)
That sounds exactly like the hell I went through at Raytheon. In fact, I may have heard that visibility quote verbatim during one review.
Raytheon didn't have the problems described by Microsofties, though. The long-term employees (who were a majority at all the facilities I worked in) developed a sort of mass psychosis that was part Stockholm syndrome, part misguided patriotism, and part Dunning-Kruger. If you didn't drink the Kool-Aid yourself the stack ranking system would be the least of your worries, though it was a useful tool for managers to deal with subordinates who wouldn't comply.
> The long-term employees (who were a majority at all the facilities I worked in) developed a sort of mass psychosis that was part Stockholm syndrome, part misguided patriotism, and part Dunning-Kruger.
Whoa. "Interesting" combination. More details, please?
I don't work for Raytheon, but I use a lot of their equipment and deal with their engineers fairly often when stuff doesn't work.
The government is weird. Especially when it comes to contracts. There's a massive process that goes on come procurement time - years are spent deciding who is going to get the contract, who is going to develop the technology, who is going to do the manufacturing, etc. Thousands of questions get asked and trillions of dollars are on the line. Contractors spend lots of money bribing, er, "promoting" their companies and explaining why they should be the ones to get the contract.
Then the contract gets awarded.
And, well, the oversight pretty much stops there. Once you have the contract, you've won. You're in the money. And there's very few checks and balances going on to make sure that your product actually works. They're going through this problem right now with Lockheed Martin and the F-35. They've gone through three failed deadlines and massive cost overruns... and nothing has happened. Nothing adverse. Just, "Here's more money, please fix it. Oh, and you're very bad people." Don't worry, though - when the next plane needs to be built, you'll still be on the short-list for who gets the contract.
The culture in an organization that exists off of this system is also really, really weird. It's not "Make a good product." It's "Make something that, at a cursory glance, looks like a good product." The two are treated the same, but the latter is rewarded much more because it's easier to put sham features onto something and takes much less effort. An engineer who works his ass off and perfectly fulfills three features in a system gets overshadowed by the engineer who makes a garbage implementation of ten features.
Then comes testing time, and the thing is obviously borked to shit. Well, here comes redesign time! There's no punitive measures on it - after all, design is completely different from the field, right? This stuff happens. So here's another 120 billion dollars, let's fix this thing. Wash, rinse, repeat. If you do it right, you can make a product that requires incremental improvement over its entire lifespan instead of making it correctly the first time. The result - everyone makes more money. And the government doesn't really care because it's playing with Monopoly money anyway. It's the end of August in my shop - my captain and master sergeant are sitting there saying, "We need to spend the remaining 20% of our budget on something. No, I don't care what it is. Justify it and figure it out, otherwise our budget will be reduced and we'll be fucked when something big comes up next year."
Now - inside this company, no one is actually saying to bork the project and show the 5th consecutive overfulfillment of the Five-Year-Plan. But you'll find that the engineers are not as good as they pretend to be, (their massive achievements are either flawed or completely made up) the managers are competing with each other for these massive made-up achievements, and the people in charge of the company are doublethinking making a good product and borking the shit out of it so that they can say, "We built this thing, so we're obviously the best choice to keep improving it."
It's completely batshit insane, and if you value any sense of reality, stay the hell away from any company that deals with the government. The benefits are really, really nice though.
"We need to spend the remaining 20% of our budget on something. No, I don't care what it is. Justify it and figure it out, otherwise our budget will be reduced and we'll be fucked when something big comes up next year."
----------
I remember being perplexed by this govt thought process when I first learned about it .... 25 years ago. And I'm still perplexed by it. I don't know how we work our way out of this mentality, but it rewards the very definition of "waste, fraud and abuse". You'd think something more sane like "departments that were under budget this year get first priority in next year's budget" would be a no-brainer policy to adopt, but it's exactly the opposite. Reward thrift, not waste.
That approach could have unintended consequences, though. A department could just cut back on the services they provide, in order to cut back on their spending, to stay well within their budget. If these services are critical, then that may be worse than some wasteful spending, but a higher level of service.
Agreed. Certainly there'd need to be some other metrics to judge a dept on - service feedback, goals hit, timelines, etc, in addition to budget. However, we seem to be so far off the other end of the spectrum with this line of thinking that wasteful spending is encouraged, or one might even say required.
I've done work for state and city agencies over the years and it's generally the same thing. "Well, if our budget is $4m, but we only spend $3.85m this year, we won't be able to get $4m in next year's budget, and we know we'll need it then, so we have to spend the other $150k now so we can get more next year." Again, it simply boggles my mind that a budget process wouldn't take into account not only a department's requests but also its track record.
I've had to deal with it even with app hosting for some clients. "What do we need?". "Well, right now, we only need one server, but if demand goes up, we'll need 3 servers in 6 months, and we'll need them for about 2 months." "Well, we'll need to order 3 servers and pay for a year to get it in the budget". Huh? Variable pricing has been something foreign to most govt depts I've worked with over the years.
In this case, it's just due to sunk costs. You've already given this defense contractor a trillion dollars. Ten years later, the contractor has a flawed project with a lot of holes in it... but what can you do? If you cut them off, then you have this shitty product that you've already spent a trillion dollars on. Getting someone else to fix it (or develop something else entirely) would cost even more. So you grit your teeth and pay them the 200 billion that they need to redesign and fix it. And then, when they still have a fucked up product a year later, you're facing the exact same problem... except now you've spent 1.2 trillion dollars.
The interesting thing is that in the long-term, it's actually a good idea to tell the contractor to fuck off. The contractor then goes out of business, and the rest of them change their tune really fast. Meet your fucking deadlines, or we'll cut you off regardless of the cost. The end result would be much more realistic cost estimates, better timelines, and a much smaller chance of fuckery. Of course, tell that to the project director who is getting hounded by Congress and is looking for the most cost-effective (short-term) way out.
In the brokerage and bank case, it's due to the government saying, "Well, we can go off of principle and tell them to go through bankruptcy like everyone else. But if we do that, then the economy will tank and we'll be looking at trillions of dollars in damages. So we'll pay the couple hundred billion dollars that we need to keep that from happening."
Unfortunately, that leads to the exact same thing as the contractors - these banks say, "Well, then all we need to do to get free money from the government is to get into the position where it costs more money to do the right thing than pay us off for making bad decisions!"
It is also due to capping the profit on a contract. The only way to make more money is to enlarge the problem. Some general wants a gold-plated titanium feature? "YES SIR!" "Here is the change order for your signature, SIR!"
Yet if you don't cap the profit, there is incentive to cut corners. So you lose, or you lose.
And the really fun thing is that if our early-mid WWII torpedo experience is any guide, it's worse when the government does all the designing, testing and manufacturing.
Not so much that they screwed up; every major power had problems with their torpedoes when they entered the war. But in, e.g., the case of the Germans, they properly investigated and cashiered the two officers responsible. It took the guys at the sharp end of the spear in Hawaii proving, e.g., that the contact firing pin bound in its housing when it hit square on to get any corrective measures. (The torpedo also ran too deep, and the magnetic fuze didn't work in that part of the world vs. the Atlantic near New England.)
An interesting variant of this system is one where executives give points to, or take points away from, organizations based upon team performance. Assume that individual rankings go from -2 to +2. The CEO gives each VP points according to his whim and/or how well their organization did against the quarterly objectives. A favored VP may get +20 points; an underperforming one may get -3. Now he has to assign points to his directors, who get to give points to their managers, who give points to contributors. The sum of the performance review scores in each subtree has to match the allocated points.
So you have fixed stacks, but they're at least relative to team performance and/or political favor. This system tends to depend heavily on the initial allocations, so you really want to get into the favored part of the organization.
Throwaway so you don't know which company I'm talking about.
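To make the arithmetic of that scheme concrete, here's a minimal sketch under invented assumptions (the team sizes, point totals, and the even-split-plus-remainder rule are all mine, not the commenter's):

```python
# Toy model of allocated points trickling down an org tree.
# All numbers and the splitting rule are invented for illustration.

def assign_scores(allocated, num_reports):
    """Split an allocated point total across direct reports so the subtree sum matches."""
    base, leftover = divmod(allocated, num_reports)
    scores = [base] * num_reports
    for i in range(leftover):            # whoever the manager favors absorbs the extra points
        scores[i] += 1
    assert sum(scores) == allocated      # the constraint: subtree sum == allocated points
    assert all(-2 <= s <= 2 for s in scores), "individual rankings run from -2 to +2"
    return scores

# The CEO hands a favored VP +20 and an underperforming VP -3, each with 15 contributors.
for vp, points in [("favored VP", 20), ("underperforming VP", -3)]:
    print(vp, assign_scores(points, 15))
```

(In practice the split would follow the manager's own ranking rather than an even spread; the sketch just shows the subtree-sum constraint.)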
All stories about how bad stack ranking is go like this: "I was a manager at X and all my subordinates were above average/great, and the system forced me to give them lower grades."
But it stands to reason that since the system measures performance relative to the average exactly half of the managers should have had the opposite problem: "all my subordinates were worse than average but I had to give them higher grades than they deserved." I think the former problem is much more common because people tend to overvalue the people they are directly in contact with.
I suspect that if the managers could set the grade freely from 1-5, an unreasonable number of 4's and 5's would be given. In a large organisation like Microsoft's, the skill of the engineers and the ratings should follow a normal distribution.
If you choose your employees at random from the pool of applicants, and the applicants' abilities are normally distributed, then you're right that the employees' performance will be normally distributed. But I would expect any tech firm's hiring criteria to have at least some correlation with employees' performance, which means their performance will be biased towards the high end of the curve.
So no, the skill of the engineers probably shouldn't follow a normal distribution.
I think you are wrong. I think randomly drawing any significant number of people from a pool that isn't normally distributed is still likely to result in a normal distribution.
No, that's not quite right.
Randomly picking people from a distribution should eventually end up giving the original distribution.
If you take n people from the pool m times, however, the means of those m samples will be approximately normally distributed (given the constraints of the CLT, of course).
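A quick simulation of the distinction being drawn here, under the toy assumption that "skill" follows a deliberately skewed (exponential) distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately skewed "skill" pool (exponential), standing in for a non-normal population.
population = rng.exponential(scale=1.0, size=100_000)

def pearson_skew(x):
    """Rough skewness measure: 3 * (mean - median) / std; ~0 for a symmetric distribution."""
    return 3 * (x.mean() - np.median(x)) / x.std()

# Randomly picking n people once just reproduces the skewed shape of the pool.
one_sample = rng.choice(population, size=1_000, replace=False)
print(f"population skewness   ~ {pearson_skew(population):.2f}")
print(f"one sample's skewness ~ {pearson_skew(one_sample):.2f}")   # similar: still skewed

# The CLT applies to the *means* of m repeated samples of size n, not to the sample itself.
sample_means = rng.exponential(scale=1.0, size=(5_000, 30)).mean(axis=1)
print(f"skewness of the means ~ {pearson_skew(sample_means):.2f}")  # much closer to zero
```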
By definition, average is in the middle of whatever distribution they follow. Maybe median would be a more appropriate point to measure against though.
> By definition, average is in the middle of whatever distribution they follow.
Not necessarily the middle. In a skewed distribution, the average is still the sum of the samples divided by the number of samples, but it's not located at the middle (centerpoint) of the sample set.
> Maybe median would be a more appropriate point to measure against though.
You managed to get that exactly backwards. It's the median that's located at the middle of the distribution, not the mean. For a balanced distribution, they're the same, but for a skewed distribution, it's the mean that moves away from the "middle", not the median.
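A throwaway example of the distinction, with made-up numbers:

```python
import statistics

# One outlier skews the set: the mean gets dragged toward it, the median stays in the middle.
ratings = [1, 2, 2, 3, 20]
print(statistics.mean(ratings))    # 5.6  -- higher than 4 of the 5 values
print(statistics.median(ratings))  # 2    -- the middle value of the sorted sample
```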
> In a large organisation like Microsoft's the skill of the engineers and the ratings should follow a normal distribution.
My understanding is that stack ranking is effective when first implemented. Subsequent implementations are progressively less effective and eventually become toxic if done too frequently. This is because at the beginning, the distribution of skill in the company's population of engineers does mirror the distribution in the overall, global population of engineers. So the initial stack rank eliminates, say, the bottom 10% of performing engineers. The problem is when this happens again the following year, which seems to be what Microsoft has been doing. Barring a horrible incoming recruiting class, the company's distribution has already shifted to the right. So each yearly cut is cutting out increasingly better engineers.
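A back-of-the-envelope simulation of that effect, assuming (purely for illustration) a normally distributed starting population, a flat 10% cut each year, and no hiring in between:

```python
import numpy as np

rng = np.random.default_rng(1)
skill = rng.normal(loc=100, scale=15, size=10_000)   # hypothetical year-one population
year_one_mean = skill.mean()

for year in range(1, 6):
    cutoff = np.percentile(skill, 10)    # stack rank: the bottom 10% go
    skill = skill[skill > cutoff]
    print(f"year {year}: cutoff {cutoff:.1f}, survivors' mean {skill.mean():.1f}")

# The cutoff keeps rising, so later cuts remove engineers who would have easily
# survived the first one.
print(f"year-one mean was {year_one_mean:.1f}")
```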
Well, by that rationale, the worse your hiring process, the more you gain by stack ranking once. But even if you hire people by tossing coins, if you don't have huge teams the results won't be fair.
Yet just defining a minimum performance bar and firing the people who don't reach it outperforms that first application of stack ranking. And you can apply that filter as many times as you want, with no loss of quality.
That's ignoring the actual "accuracy" of most employee "ranking" algorithms.
And continual, though diminishing, improvement holds only if your method of ranking employees is significantly more accurate for existing employees than for job candidates.
And if they were all ranked within one company-wide group, that might not be so bad. But in the organizations I've worked in that used stack ranking, that wasn't the case. You were ranked within much smaller groups.
So group A, which, company-wide, would have had all average to above employees now has to drop some person(s) down to below average, within the group. That "below average" person - who would have been average company-wide - now gets a "performance improvement plan" or worse - shown the door.
And group B, which, company-wide, would have had all average and below-average employees, will now rank some person(s) above average. That above-average person - who would've been only average company-wide - now gets a bonus.
Should it work like that? No. Does it work like that? More often than it should; I've seen it - in mid-size and large companies.
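A tiny, deterministic illustration of that distortion, with invented scores:

```python
import statistics

# Hypothetical "true" performance scores; group A happens to be strong, group B weak.
group_a = [78, 80, 82, 84, 86]    # even its weakest member scores 78
group_b = [60, 62, 64, 66, 68]    # even its strongest member scores 68

company_median = statistics.median(group_a + group_b)   # 73

# Forced within-group ranking hands out the labels anyway:
print(f"group A's 'below average' member scores {min(group_a)} (> company median {company_median})")
print(f"group B's 'above average' member scores {max(group_b)} (< company median {company_median})")
```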
The problem doesn't seem to be with stack ranking as with what the rankings are used for: If Group B is performing badly compared to the company as a whole and Group A is performing well, the results of being ranked "average" in Group B should not be the same as the results of being ranked average in Group A.
All large companies have to accept that they will lose some great people and keep some bad people. If a company decided to perform an evaluation at a random point in time, I doubt stack ranking would, on average, perform much worse than any other technique.
The problem with stack ranking is the dynamics it encourages. Any set of rules is going to be gamed, so a goal for an HR policy should be to ensure that the dynamics created while gaming the system are beneficial to the company. The real damage of stack ranking happens in between ranking periods.
Now that I think about it, Princeton University now has this sort of problem (since 2004) with its Grade Inflation policy, where there are quotas for the number of A's and A-'s that can be given out.
Academia is one of the few places where the bell curve should be applied diligently...because you want the students to see some sort of top to aspire to.
Not at top schools where most of a class may well deserve an A or B.
Outside of a period in the '60s (the draft) MIT never suffered from serious grade inflation (I've looked at the numbers), and the general rule is that if you've picked a suitable major you'll make As and Bs unless you're having personal problems.
It would be stark raving mad to grade its students on the curve ... and I've heard of serious pathology in schools that mechanically do, e.g. premeds sabotaging their peers taking organic chemistry (one tale had one deactivating many of his peers' alarm clocks). You just don't want to set up the wrong incentives, and I can attest that at least in the '80s MIT students were generally happy to help each other.
On the other hand, there are state schools that by law have to accept every applicant who meets some arbitrary external threshold ... they don't have much choice but to flunk out large fractions of their freshmen. But after the courses that do that, and that often do it for a major, they shouldn't obsess over the curve.
MIT accepts only very smart people; applying differentiation at that level wouldn't work very well: the motivation and aspiration have already been satisfied by the MIT selection process!
But even a good hard-to-get-into department (computer science) at a good state school (UW) has a fairly diverse-ability student body, so curves are quite useful in creating some top to aspire to. Even here, since the students got into the department and the bell curve is fairly fat in the middle (hard to fail), it doesn't make for such a toxic environment, as being at the top is nice but definitely not necessary (even if you want to go to grad school). The trouble starts where being at the top is necessary (pre-med).
It also matters a bit more in the sciences, where an undergraduate degree is just a ticket to get a Ph.D. unless you want a career as a lab tech.
Also, this is in the context of Princeton, one of the best of the Ivies, I'm told the most academically rigorous. I suppose ... ah, here's the key: the Ivies have to admit way too many "legacies", children of alums. Enough that it makes a massive difference in their student bodies compared to MIT, where being the child of an alum doesn't hurt, but is no help at all if MIT judges you can't do the required calculus and calculus-based physics. So maybe it does make sense to grade on a curve ... still, MIT grades on mastery, and I know that if I found myself in this situation:
The undergraduate student body president, Connor Diemand-Yauman, a senior from Chesterland, Ohio, said: “I had complaints from students who said that their professors handed back exams and told them, ‘I wanted to give 10 of you A’s, but because of the policy, I could only give five A’s.’ When students hear that, an alarm goes off.”
I'd find another school to transfer to, or at the minimum work out an arrangement for a great grad school recommendation from the guilty faculty (would not be hard since I can/could do research).
>The undergraduate student body president, Connor Diemand-Yauman, a senior from Chesterland, Ohio, said: “I had complaints from students who said that their professors handed back exams and told them, ‘I wanted to give 10 of you A’s, but because of the policy, I could only give five A’s.’ When students hear that, an alarm goes off.”
This exact thing happened to my friend in our 2nd year intro circuits course. The prof (awesome guy) wanted to give him an A but couldn't because of the quota and told him this in person. Said friend, as fate would have it, is thriving at MSFT and is currently a Senior Program Manager there.
I agree that the curve should really be at the discretion of the instructor, who is best placed to apply it reasonably, not administrators. One of my colleagues was actually told the opposite in a class he taught recently: that he was failing too many students, or not giving A's to enough of them, in the classes he was teaching! It definitely goes both ways, but instructors are reasonably fit to decide how to do their curves.
The latter happens more than a little at MIT, and I've witnessed a department head tell a professor you've very possibly heard of that he'd never be allowed to teach a particular course again. That was after making him read every student evaluation; they were uniformly negative except for a single special case, which wasn't exactly positive.
I disagree. I don't think grades are the kind of top that academia should motivate students to aspire to. There should be plenty of role models and amazing achievements for any student to look up to. I consider grades to be at best a necessary evil, and any attempt to shape them will only make them more evil and less necessary.
There are plenty of other things to aspire to in universities, but really, why not master the material in your classes? What is so wrong with that? And why not have that material be hard enough that top mastery is unlikely? Are we so detached from competition that this is now an absurd concept?
All highly selective universities have student populations that come out of the cream of the crop. All the admitted applicants are bumping up against the highest percentiles in SATs. All have straight As. Why would you expect a bell curve from a sample like that?
> The article states it plainly: the problem with the stack ranking system is that it's a zero-sum game.
Done properly, and used for the right purposes, that actually makes sense. Particularly in the case of compensation: if overall raises for a group were set (possibly by stack ranking of groups within a supergroup where the overall level was set), stack ranking could be used to determine the distribution within the group.
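For what that reading is worth, here's a minimal sketch of it: a fixed raise pool set at the group level, split within the group by rank (the pool size, the linear weighting, and the names are all invented):

```python
def split_raise_pool(pool, ranked_names):
    """Split a fixed raise pool across a group, weighted by within-group stack rank.

    ranked_names is ordered best-first; weights are a simple linear ramp.
    """
    n = len(ranked_names)
    weights = list(range(n, 0, -1))               # best rank gets the largest weight
    total = sum(weights)
    return {name: round(pool * w / total, 2) for name, w in zip(ranked_names, weights)}

# The group's overall pool was set one level up (e.g. by ranking groups against each other).
print(split_raise_pool(10_000, ["ana", "bo", "chris", "dee"]))
# {'ana': 4000.0, 'bo': 3000.0, 'chris': 2000.0, 'dee': 1000.0}
```

The zero-sum division then only happens inside a pool whose size already reflects how the group as a whole did.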
That's an interesting conclusion, but I don't see the argument supporting it. Zero-sum rankings within each group, in nested groups, seem to make perfect sense for most purposes -- what's wrong to me would seem to be the assumptions like, e.g., that punitive measures need to be assigned to the bottom rank within groups every time, regardless of the overall performance of the group.
Just to be fair here, the practice of applying stack ranking to very small populations was not standard across the company. I worked in Office for about 8 years. During those years I found that:
- Stack ranking is communicated up front and the process is very transparent. Managers let you know how the process works, what they're doing, how you get bucketed, what they talk about behind the door, and will discuss your "calibration card" with you (the 3-5 high and low points they'll bring into stack ranking meetings).
- The curve is not enforced on the level of a development team. It's not enforced in a product group. You don't see the hammer starting to come down until you get to populations of at least hundreds of employees.
I disliked the system (despite generally getting strong reviews) but it was not everywhere as demonic as it is depicted here.
Lived through this as an ex-MSFT manager myself. The reality was having to throw direct reports under the bus and explain to them that the outcome was "relative to their peers". Then they'd look at a smaller team and see people with much less impact getting good or great reviews (less competition in those teams). It's an awful system, and I firmly believe it factors into MSFT's stagnant innovation. Lifetime averages, peer reviewing, and open transfers are what's needed.
http://minimsft.blogspot.com/2013/08/steve-ballmer-is-going-...