Hedge Fund Is Building an Algorithmic Model From Its Employees’ Brains (wsj.com)
252 points by newscasta on Dec 22, 2016 | 99 comments



I think this....

> Mr. Dalio returned to run Bridgewater earlier this year after stepping back to a mentor role six years ago.

Is what's driving almost all of his newfound desire for a unified AI to run everything.

Here you have a man who by any measure has been wildly successful. He tried stepping back and letting others take over, but ended up finding that the team he left in charge didn't make the exact decisions he would have.

If the story ended there it wouldn't be in any way surprising; who hasn't left a team and second-guessed the decisions that the new leaders have made?

It turns out that when you are worth $15 billion, one of the things you can do is hire a shitload of AI experts, like one of the heads of IBM's Watson team, to build an AI that makes decisions based on WWRDD: "What Would Ray Dalio Do?"

Since his first retirement failed, this is his new take on how to have his second retirement go smoother: just build an AI that bases all decisions on the Bridgewater principles that he wrote down years ago.

see: https://www.principles.com/

As an aside, he recently let Tony Robbins publish his ideal portfolio, the all weather portfolio. It's actually a pretty strong portfolio for the average person.

http://awealthofcommonsense.com/2014/11/back-testing-tony-ro...

http://www.moneysense.ca/invest/raining-on-the-all-seasons-p...


> see: https://www.principles.com/

This is a very, very good read. Thanks for sharing.


You might disagree with the read if you actually saw what the implementation of said principles looks like. There's plenty of articles out there about what those principles do to a psyche, but let's forget about those: The culture is still broken because there is no sensible way to have real transparency in an environment with power differentials.

In any situation with a broken status quo, openness by those that disagree will just get them squashed. In practice, change occurs in the dark: The people that have a different idea hide in a corner, bake the idea in secret, build allies in secret, and only reveal it when they cannot be squashed down. It works with different ways of investing, with tolerance of LGBT people, with interracial marriage... instant openness in an environment that is against you will ruin you unless you are powerful.

The principles, as applied, lead to an appearance of openness, where people have to toe the party line and only disagree when they know they can win politically. Otherwise, the powers that be will find you and make sure your disagreement can't go anywhere.

And how do you get power? In practice, by toeing the party line. Only by agreeing with the people above you, those that have been blessed as the smartest, can you get any credibility. And yes, this is something that is actively codified in Bridgewater's culture.

I wish external researchers had access to the internal ratings and surveys that Bridgewater employees fill in all the time. The patterns in them are the definition of a dystopia and groupthink.


> " instant openness in an environment that is against you will ruin you unless you are powerful."

You make some good points, and I especially like this one.


This excellent post echoes what I have heard from friends employed or formerly employed at Bridgewater, although these patterns are common to all large organisations.


After reading some of principles.com what you say seems even more plausible.

I get the feeling his wife never asked if she looked fat. Or he always made the mistake of saying yes. Talk about missing out on reality.


// DISCLAIMER: I work there.

For those who've built tech companies up from the 10 to 1000 people range, there's a lot in that link that's very easy to recognize.

If you interpret back from the more traditional business lingo, you will recognize key 'iterative development' ideas applied outside engineering.

This allows a sizable 30 year old enterprise to handle new ideas much more as a tech startup would.

On the main article topic, instead of the article's quote, “like trying to make Ray’s brain into a computer”, I'd say as an engineer imagine if you could "run a company under a debugger." Frame it that way, and I think you could imagine some neat possibilities.

If you're very good at software development / distributed systems engineering, and think self-driving management or a self-driving fund might be even more interesting than yet another self-driving car, we're always hiring. Hit me up via profile.


//Disclaimer I don't work there. But I know people who do.

While there are some articles out there bashing Bridgewater's work culture (similar to how Amazon's work culture got attacked in the press), the people I know who work at Bridgewater like it and find the work interesting and feel they are well compensated. Though, no one I know who works there had any finance background before taking the job and that seems to be OK.

Just curious if you have any thoughts on if previous finance experience before working at Bridgewater is a good thing or bad thing or irrelevant? And is there any connection between those employees that succeed at BW and whether they have previously worked in finance?


Good questions. These are just my personal thoughts.

I think software engineers from more fields than they imagine would do well and have fun in this environment. You do need to be good, but you don't need a background in finance.

You can read in the link above the idea that people come to the table with values and skills, where it's hard to change what you're like, easier to adapt your skills.

Couple that with the observation that for open minded people who like to learn, effective engineering values and skills seem to translate pretty well across problem domains.

This means it's less about the kind of tech stunts interview folklore attributes to Google, more about trying to understand how you think about problems and get things done.

Sounds trite, but if you think well and do things (need both), you can succeed.


It sounds to me like what he's building is a lot closer to a custom version of an issue tracking product than an artificial version of Dalio's brain.

Is this accurate?


This is truly one of the best readings, one of the three books I recommend everyone read (and re-read) every now and then.


Would you care to share the other two?


"The Power of Now" by Eckhart Tolle and "Starting Strength" by Mark Rippetoe (applicable to men mostly) contain lifetime lessons on mind and body management. "Principles" is very useful for mental models and just critical thinking (or, rather, structured self-doubting).


just went ahead and 1-click ordered Eckhart's book. I recall listening to the audiobook I torrented 4 years ago but never got to finish it.

I'm reading Principles.com and I'm blown away by how similar it is to my own model of the world, just without the same confidence and experience to back it up the way Ray does.

I think perhaps I was being too judgemental about Ray by reading the WSJ article. But to be fair, I'm constantly trying to grasp reality. Perhaps it's better to go in without strong opinions at all...#2017resolutions


Tolle is full of hand-wavy pop-psychology junk. At the end of the day Tolle is full of mashed-up philosophies he mixed and matched until it tasted really sweet. What are you left with? A loosely defined mess that reads like so many other self-help books. I am critical here but with good reason; this is like going shopping at the philosophy store and just buying candy. It comes off as blatantly anti-reason in places too.

Read Epictetus, Aurelius, Minsky (Society of Mind), GEB, even Thoreau. Get inside your thinking machine. Feelings are okay. Thinking is okay. Orient around what you will create. Learn to meditate a little. Read Tolle if you are set on it, but I say most thinking people are better off with more original sources that challenge and ask more of their readers.


Love your comment. Always enjoy contrarian views. I think it won't hurt to read Tolle but I've just ordered based on your suggestion:

The Enchiridion, Meditations, Society of Mind, Golden Braid. Left out Thoreau because that seems like a natural survivalist and it had 4/5 reviews on Amazon.

The only way I'll read books is if there's a monetary sunk cost. If I pirate ebooks, unless it's super essential, won't get around to reading it.


Nice! Walden is a classic. It is a very Stoic take on the world. It's okay to skip though :) GEB is a book I had to work through over the course of a year. Good luck in your reading and thinking adventures! I hope you come away with some new ideas about thinking and "being human".


I can't imagine a philosophy that isn't "a loosely defined mess" and still attempts to take on the true ambiguity and confusion of life.


Formal philosophy is pretty rigorous. Life philosophy not so much. It is a tool to understand the human context; that is why I listed Minsky. Life is not ambiguous, it simply is. The harder you try to interpret it to fit a narrative rattling around in your brain, the more confused it may seem, IMO. Understanding what can be reasoned about (Kant), and what "feelings" are, is important to attempt. We are stuck with a certain mode of existence via evolution. I dunno, I think giving in to this surface-level cruising of these deep currents never lets you even glimpse a deeper intuition about being human and what we can know.


As a case study in megalomania perhaps ...


oh no, I'm sure this particular Great Thesis of Logic and Ethics and Objectivity will enlighten us all. What business organizations have been missing is 40 pages of this schlub aimlessly restating the Golden Rule.

> You must be calm and logical. When diagnosing problems, as when identifying problems, reacting emotionally, though sometimes difficult to avoid, can undermine your effectiveness as a decision-maker. By contrast, staying rational will serve you well. So if you are finding yourself shaken by your problems, do what you can to get yourself centered before moving forward.

This reads like a parody of the DSM criteria for Asperger's Syndrome.


It also sounds like a journal entry from a concentration camp administrator.


I actually think that Ray himself would agree with this statement, given his open-minded endeavour towards grasping reality... but I do think the principles.com read shows it's almost a consequence of how we form a view of reality and an idea of how it works, and then push it onto others in order to "stress test" and improve it.

Nobody likes working for megalomaniacs, and the WSJ article suggests that it's a pretty toxic place to work when people break down in bathrooms.

The truth is that money, if significant enough, is a powerful incentive that overrides all personal principles and creates a subversive mind.

In other news, Gordon Gekko feels that greed is good for his own personal gain at the cost of the personal morals of his followers and profiteering off their transgressions & weaknesses.


Ray Dalio is a fascinating person. He purchased a one-word .com to share his personal and management principles, and now he is attempting to train a learning system to automate and preserve his decision-making.

There's a movie script somewhere in here. Exaggerate the narcissism, advance the science 50-100 years and we've got an eccentric billionaire that causes an artificial intelligence uprising. Like something between Transcendence and Westworld.

A few other thoughts from reading the article:

• Dalio seems like a classic case of someone pursuing immortality in whatever way he can. His coming out of retirement, his efforts to leave a legacy of "radical transparency" and to make his company an "altar of openness", etc. He is also 67 years old. He may feel as though he is "running out of time", and I've seen an uptick in media mentions surrounding him and Bridgewater. There's quite a few religious themes here.

• Bridgewater might be an excellent case study in radical changes that result in success, but it also might not. In particular, there are two challenges. First is the culture of Wall St, which lends itself more to a "show me the results" manifesto and can probably tolerate a culture of open criticism more than most other companies. The other is that Bridgewater is a single datapoint. We know almost nothing about the inner workings of e.g. RenTech, but they are just as impressive (albeit with lower AUM) and have a more "academic" culture. Does the culture matter that much, or is the success a result of a confluence of other factors? Someone who appears to pride himself on his contrarian decision making like Dalio might see his culture as the defining differentiator responsible for his success.

• I find it much more likely that Bridgewater's culture of open criticism (and elements of quantified self?) is like Zappos Holacracy and Valve's "no managers" manifestos, rather than a new set of principles to be brought down from on high. Each of these companies prides itself, and to some extent claims is the reason for its success, on a culture that was deliberately implemented instead of organically grown. After you see enough of these I feel you start to become "culture agnostic", and from my point of view I don't feel any of them has a causal relationship with profitability or productivity.

• A nitpick here - it seems hypocritical to reduce a culture of "radical transparency." If you decide to change the status quo from radical transparency for 100% of the staff to radical transparency for 10% of the staff, then what is so radical about it? Equally, what does it mean to only dole out radical transparency to those "responsible enough" to receive it? That sounds like...transparency.


> ... Does the culture matter that much, or is the success a result of a confluence of other factors? Someone who appears to pride himself on his contrarian decision making like Dalio might see his culture as the defining differentiator responsible for his success.

> I find it much more likely that Bridgewater's culture of open criticism (and elements of quantified self?) is like Zappos Holacracy and Valve's "no managers" manifestos, rather than a new set of principles to be brought down from on high. Each of these companies prides itself, and to some extent claims is the reason for its success, on a culture that was deliberately implemented instead of organically grown. After you see enough of these I feel you start to become "culture agnostic", and from my point of view I don't feel any of them has a causal relationship with profitability or productivity.

I've been wondering the same about Silicon Valley's obsession with culture. It seems cargo-cultish in so many ways - correlated but not necessarily causal, and almost certainly a victim of survivorship bias given the groupthink about culture among the startup crowd. Thiel's advice that your startup should be like a cult felt like that too.

But then I did a startup, and saw many others close hand, and realized that Thiel is right, at least initially it needs to be a cult. In the early stages pre-traction you simply don't have the bandwidth for deep philosophical disagreements among the team about fundamental product direction and strategy.

That part has got to be like a cult where everyone is on the same page and you can focus like a laser on developing your team's ability to work together and execute like a well-oiled machine. That capability is not to be taken for granted, and does not just happen, you have to make it happen with intent, and unless you get really lucky with initial hires who just gel right off the bat, every early startup goes through it.

One of the main pitfalls for early stage startups is emerging disagreements over product strategy and direction among the co-founders or leadership team. But this is exacerbated by slow execution, leaving more room and time for self-doubt and second-guessing. Thus developing execution capability early on is the most important strategy. Execution both helps you get to a point faster where you can see whether your product strategy is right or wrong, and helps you pivot faster if wrong. If you're going to deliberately implement a culture, make sure its focus is on execution ability. Joel Spolsky's "smart and gets things done" or PG's "relentlessly resourceful" should be a good enough template for 99% of startups out there.


The protagonist in Billions seems to be basically an evil version of Dalio. I think characters based on him have made it to the screen a few times already...


Billions is actually based on Steven Cohen and his hedge fund, SAC Capital. It was bumped down to a family office for insider trading, and subsequently rebranded as Point72.

Aside from basic inspiration, the plot bears almost no resemblance to the actual hedge fund or Cohen himself.


I'm not sure All Seasons is such a great portfolio...

It's way overweight U.S. Treasury bonds (55%) and underweight stocks (30%). And what's with 15% in gold and commodities (which produces no interest or dividend income)?
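
Just to make the weighting concrete, here's a minimal sketch of the allocation as described above; the per-asset return numbers are made-up placeholders, not figures from the article or the book:

    # All Seasons split as described in the parent comment.
    weights = {"us_treasuries": 0.55, "stocks": 0.30, "gold_and_commodities": 0.15}

    # Hypothetical annual returns, purely to show the arithmetic.
    assumed_returns = {"us_treasuries": 0.03, "stocks": 0.07, "gold_and_commodities": 0.01}

    blended = sum(w * assumed_returns[k] for k, w in weights.items())
    print(f"Blended annual return: {blended:.2%}")
    # With 55% in Treasuries, the bond sleeve dominates the blended return,
    # which is the point about being overweight bonds.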


In fact, Dalio himself just a couple years ago contacted Bridgewater's clients to tell them the firm fucked up, had miscalculated things and was over-exposed to duration risk (basically meaning they owned too many bonds, as you say). The studying of this risk from bonds and how to "mitigate" it is ongoing at Bridgewater. Kinda hard to recommend a portfolio from a guy who admits it is broken and is still trying to fix it.

I've seen chollida1 comments in other finance-related threads and I think he knows his stuff. On this one though I agree with you, that portfolio has some serious issues and I would not feel comfortable recommending it to the average person as is.

Here is an article talking about Ray Dalio recognizing this "all weather" portfolio was bad and they needed to make corrections: https://www.bloomberg.com/news/articles/2013-08-14/dalio-pat...


> Kinda hard to recommend a portfolio from a guy who admits it is broken and is still trying to fix it.

Honestly, I think that this is probably the only thing that makes me think there could be something worthwhile.

All portfolios are broken in some way. It's refreshing to hear a proponent of one admit it and try to fix it.

(Also, I highly recommend Lewis' "The Undoing Project" if you think this is a problem)


Reading my comment again I should have worded it better and more clearly. Thanks for pointing out what you did. My comment was specifically meant as a response to the portfolio recommended above, that was linked to and that appears in Tony Robbins' book. That portfolio is broken and outdated in that form.

I should have said more clearly, it is hard to recommend that portfolio at this time because the creator of the portfolio, Ray Dalio, has subsequently said there are problems with it and he is still working on fixing those problems.

Especially considering the portfolio is described to regular non-professional investors as a type of "set it and forget it" longer term investment portfolio, one that doesn't need active monitoring and works in all market conditions. It is dangerous to recommend as is.

For sure there are merits to the all weather portfolio! Its risk parity structures were a ground-breaking, genius move by Ray Dalio that made him rich and famous (famous in the investing world at least). I have a ton of respect for Dalio, I've read his principles more than once. He is quite private, his achievements and contributions are not as well-known as some other old billionaires of the investing world but he is just as skilled.

Basically, the Tony Robbins/Ray Dalio portfolio has potential. In its current form it offers many lessons to a lay investor just by reading about it and the theories behind it. But before one invests their hard earned money in the portfolio, consider that some major changes have happened to the portfolio recently and other major changes may be in the works. It could turn out great or it could be scrapped and turn out not to work as thought. So a potential investor might want to hold off following that outdated portfolio strategy and re-evaluate once more info about an updated version is available.


David Ferrucci led the team that built the Watson research system that won at Jeopardy. He left for Bridgewater in 2014, before the IBM Watson unit was created, and way before Dalio came back to run Bridgewater.


> Mr. Dalio has the highest stratum score at Bridgewater, and the firm has told employees he has one of the highest in the world.

So this billionaire is designing a human-ranking system based on some criteria on which he scores number one in the world?

In examining a system for bias I certainly hope they factor in the fact that the guy who paid billions for scientists to research factors under the "total human awesomeness scale" also somehow managed to find that the guy signing all the cheques was the same guy scoring a perfect 10 million every time.

I wonder if the scientists working on quantifying human greatness for Kim Jong-Un are using the same scale?

Couch it in whatever scientific BS you want, it sounds like we just read some sort of billionaire's ego-rotica.


The strata in the article are based on the work of psychologist Elliott Jaques, who as far as I can tell created this independently of Dalio and Bridgewater.

Strata are related to a person's ability to handle complexity, and Jaques argued that this is related to the time horizon a person can handle. For example, the role of a shop floor employee selling widgets does not need to consider what happens more than 1-3 months in the future, whereas the store manager has to think about a year into the future, the owner of the chain has to consider what happens 2-3 years down the line, and so on.

He devised an interview protocol to determine the level at which a person is capable of thinking, to be used to match employees with roles for which they are suitable. Being in the highest stratum means having a time horizon of 20-30 years, of which Jaques believed only a handful of people in each generation are capable.
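
To make the matching idea concrete, here's a minimal sketch using only the example horizons given above; the function and numbers are my own illustration, not Jaques' actual protocol:

    # Hypothetical time-horizon matching, based only on the examples above
    # (shop floor ~1-3 months, store manager ~1 year, chain owner ~2-3 years).
    ROLE_HORIZON_MONTHS = {
        "shop_floor_employee": 3,
        "store_manager": 12,
        "chain_owner": 36,
    }

    def suitable_roles(candidate_horizon_months: int) -> list[str]:
        """Return roles whose required planning horizon the candidate can handle."""
        return [role for role, horizon in ROLE_HORIZON_MONTHS.items()
                if candidate_horizon_months >= horizon]

    print(suitable_roles(12))  # ['shop_floor_employee', 'store_manager']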


Dr. Jaques divided people into eight strata based on their innate capabilities and limits.

Most of us if we're honest with ourselves are on the level of Strata 1–4. (Even to presume we're Stratum 4 is being generous.)

Strata 5–8 are the rarefied elites. They're typically CEOs of large companies and business leaders.

Stratum 9 and higher—like Mr. Dalio—are the rare geniuses who are capable of thinking into the far future (100+ years) and working on problems of extraordinary complexity.


It might be a function of being in a position to affect things that far in the future, or of whether thinking that far in the future will matter that much.

People love to armchair-quarterback federal politics and long term plans that might affect the future. You start learning as you become more effective in life that doing that is ineffective and doesn't get you anywhere. Your energy is better spent on thinking about the things you can actually affect in life.

Or 'man plans and god laughs'


No one can think 100 years into the future, that's just ridiculous. Chaotic systems -- all real-world human systems on some time scale -- are unpredictable no matter how awesome you are.


I mean clairvoyantly, sure, but why not imagine the future? Traveling to Mars, autonomous and flying cars, intergalactic space travel, and problems of that type all require thinking in a long range, certainly not 5 to 7 years. Look at something like planes, even, which took humans years to have the right tools for. That's why people mull over the possibilities of an outcome to reach their goal, which could very well be a long-term, 100+ year goal. Instead of worrying about the unpredictability, you can hope the unpredictability will lead to answers for questions you can't answer yet; that's the whole reason for research labs at Google and such, working on longer-term problems, finding what's missing now and shelving things for later when we might be able to accomplish the goal.


It reminds me of "managing the numbers". Not to say Dalio himself doesn't know what he's doing. But translating that to software, yeah... good luck.

It's challenging enough to translate simple trading systems into machine code, and have it reproduce human results. At least that's my experience.

To distill your knowledge and wisdom down to a set of algorithms: the complex interactions, the discretion on when to use and when to ignore certain ones, being able to learn from results by adding new "rules" and discarding old ones that no longer work, such that you've identified the correct parameters and situations where each applies?

Best of luck with that one. Not a place I'd ever want to work.
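
To give a flavor of why, here's a minimal sketch of the "add new rules, score them, discard the ones that stop working" loop; the structure and names are hypothetical, not any real trading system:

    # Hypothetical sketch of a rule set that is scored on results and pruned.
    # The hard part pointed at above (knowing when a rule applies, and when
    # to override it) is exactly what this does not capture.
    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        name: str
        decide: callable            # situation dict -> decision, or None to abstain
        outcomes: list = field(default_factory=list)

        def score(self) -> float:
            # Average realized payoff of the decisions this rule actually made.
            return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def prune(rules: list[Rule], min_score: float = 0.0) -> list[Rule]:
        """Discard rules whose realized results have gone negative."""
        return [r for r in rules if r.score() >= min_score]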


I did a job on vWorker implementing a trading algorithm. The guy had a system, was using it to trade, had a paid service for people to buy it from him, and had a paid subscription channel of videos of him trading the previous day.

When I used what he thought were signals he was following (exponential smoothing, direction finding and spread detection iirc) it generated plenty of "false" trades.

It ended up that he refused to pay because "obviously" my code was applying the rules wrong, not that his rules didn't work.

I eventually got 50% of the fee and learned quite a lot in the process about why trading algos struggle to work when faced with real random walks.
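
For anyone curious what that failure mode looks like, here's a minimal sketch (my own reconstruction, not the actual job) of an exponential-smoothing crossover applied to a pure random walk:

    # An EMA crossover "signal" applied to a pure random walk.
    # There is no real trend in a random walk, so every crossover is noise,
    # yet the rule still fires constantly: the "false" trades described above.
    import numpy as np

    rng = np.random.default_rng(0)
    prices = 100 + np.cumsum(rng.normal(0, 1, 1000))   # random walk, no signal

    def ema(series, span):
        alpha = 2 / (span + 1)
        out = np.empty_like(series)
        out[0] = series[0]
        for i in range(1, len(series)):
            out[i] = alpha * series[i] + (1 - alpha) * out[i - 1]
        return out

    fast, slow = ema(prices, 10), ema(prices, 50)
    crossings = np.sum(np.diff(np.sign(fast - slow)) != 0)
    print(f"{crossings} crossover 'signals' on pure noise")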


> It's challenging enough to translate simple trading systems into machine code

I can attest to this from my experience with a bot I'm trying to build to handle my personal finances. When I sat down to try and encode the seemingly simple decisions I make like whether to pay a bill now or wait until the next check, things got really complicated really quickly.
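
To give a sense of how fast it snowballs, here's a minimal sketch of just that one decision; every threshold and field name here is hypothetical:

    # Hypothetical sketch of the "pay now or wait until the next check" decision.
    # Even this toy version already needs a safety buffer, a late fee,
    # and paycheck timing, and real life adds many more exceptions.
    from datetime import date

    def pay_now(balance: float, bill_amount: float, due: date,
                next_paycheck: date, late_fee: float,
                buffer: float = 500.0) -> bool:
        if next_paycheck <= due:
            # The check arrives before the bill is due, so waiting is free.
            return False
        if balance - bill_amount >= buffer:
            # We can pay today without dipping below the safety buffer.
            return True
        # Paying now would breach the buffer; eat the late fee only if it's
        # cheaper than whatever penalty we'd otherwise risk (omitted here).
        return late_fee > 35.0  # placeholder heuristic

    print(pay_now(1200.0, 800.0, date(2016, 12, 28), date(2016, 12, 30), 25.0))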


I'm struggling a lot to get access to my bank's "bill pay" interface. Any recommendations for banks that support building out automation like this?

There are very small things I'd like to implement (budgeting, auto-categorization of expenditures) and some type of bill pay as well.


<strike>I can't really help you on bank APIs</strike>. My bot uses webdriver.

Actually I can help with that: https://openbankproject.com/for-developers/
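
In case it helps, this is roughly the shape of the webdriver approach; the URL and element IDs below are placeholders for whatever your bank's pages actually use:

    # Rough sketch of driving a bank's bill-pay page with Selenium.
    # The URL and element IDs are placeholders, not a real bank's markup.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example-bank.test/login")
    driver.find_element(By.ID, "username").send_keys("me")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    driver.get("https://example-bank.test/billpay")
    driver.find_element(By.ID, "payee").send_keys("Electric Co")
    driver.find_element(By.ID, "amount").send_keys("80.00")
    driver.find_element(By.ID, "schedule").click()
    driver.quit()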


Huh.. Looks like they don't support my bank, but really love the project!

I'll have to check out webdriver, thanks!


Not to mention the fact that there is an incentive for the people you are trying to use to try to mess you up. There are so many little mistakes they could make that, added up, would result in significant errors and make the end result worthless. And it's in their interest to do that... the employees.


Not necessarily. Some (maybe most?) black-box quants own their algorithm even if they leave the hedge fund. And of course they make profits while the algos are used. So the architects of the system are incentivized for it to work well.


> some (maybe most?) black box quants own their algorithm even if they leave the Hedge Fund.

I've never observed this be the case at all.


It's the default case in the USA, not sure about other countries. The contract has to specifically sign over ownership.


this was the case at 2 funds I've worked at. So I assumed it was the norm


Vindicis, unless you're not working, you probably already work at a place like this.

All Dalio is doing is tying the pieces together and stating the obvious.


What if I am working, but not at a place like that? I know what you mean, but that's a very large and inaccurate assumption you're making there.

Stating the obvious, and implementing it, are two very different things.

If it were so easy to model and implement human behaviour and thought process, we'd already have strong AI. As we don't, perhaps it isn't quite so simple, regardless of how obvious it might seem?

The point I was trying to make is that when you're trying to model that sort of thing, it very quickly goes from "Seems easy enough." to "Great time to start a new career!"

The complexity of these systems ramps up exponentially.


I am a big believer in what he is doing. I worked in a financial firm and now have my own small factory. There is so much waste in financial firms. In the early 1990s, people with MBAs were getting $100k a year for just doing spreadsheets. On top of that, it is not a full day's worth of work. The work was not rewarding and it was high stress. The paycheck was great, for sure. I am trying to do the same thing now with my small manufacturing shop. Ordering of raw materials, manufacturing allocation, labor, etc. are all controlled by software. These are all boring jobs that an AI will do a much better job at.


Why does it have to be an artificial intelligence? The field of Operations Research has basically perfected automated and optimized decision making in pretty much every heavy industry out there, and they've been doing it since before computers even existed by using mathematical modeling.
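
For example, the allocation side of that (how much of each product to make given limited raw material and labor, as in the parent's shop) is a textbook linear program; the coefficients below are made up:

    # Bare-bones production-planning LP, the classic Operations Research tool
    # for the "ordering / allocation / labor" decisions mentioned above.
    # All numbers are made-up illustration values.
    from scipy.optimize import linprog

    # Maximize profit 40*A + 30*B, i.e. minimize the negative.
    c = [-40, -30]

    # Resource usage per unit of product A and B, with fixed supply.
    A_ub = [[2, 1],    # kg of raw material per unit
            [1, 2]]    # labor hours per unit
    b_ub = [100, 80]   # kg of material and labor hours available

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
                  method="highs")
    print(res.x, -res.fun)  # optimal production quantities and profit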


Since reading it maybe 6 or 7 years ago, I've thought Principles is a great handbook for personal reflection and self-improvement, especially in that it continually guides all discussion back to the lodestar of thinking for oneself and asking "is this true?".

However, I'd also say that Principles is a lousy "how to manage people" manual, because it tries to algorithmically describe how to deal with unpleasant interactions that require emotional intelligence, and harps really hard on the concept of "the truth will set you free and if you can't handle the truth you need to GTFO of our culture". Reading this article has reinforced that opinion.

Two other random observations here:

- These guys are a big Palantir client...I wonder if Palantir has any role in building this new system. They weren't mentioned in the article.

- Isn't "Overseer" is a really, really fucking bad formal job title for anyone at any organization?


> Isn't "Overseer" is a really, really fucking bad formal job title for anyone at any organization?

Yeah. At my company we just include anyone who would fit that billing as a member of the Panopticonal Oversight Committee so that they can also keep their regular title.


I assume "overseer" is a bit tongue in cheek.


That has got to be the worst case of groupthink ever. What happens if you challenge one of the "principles"?


Sounds like you lose upside down Christmas trees at the company party, get demoted from co-CEO, get fired, or get chewed out so hard you cry in the bathroom.

Honestly, I just read a few other articles about Bridgewater Associates, and even though it seems cult-like, I think the principles probably help prevent group-think. For instance, the second most important goal is no talking about people behind their back. If any of the companies I worked at had this rule, I'd think they were trying to make people toe the company line. However, this rule actually supports the first rule, encouraging transparency, really well.

For instance, have you ever been in a situation where you are doing something wrong, only to find out later that people noticed but nobody told you? The consequences aren't that bad if your tie is on backward, but I've made expensive mistakes because nobody spoke up, and I've been in organizations where there was such a negative stigma around "confrontation" that mistakes happened all the time and gossip was rampant.

Studies have shown that groups that are told they must come to a consensus make worse decisions than groups that are encouraged to disagree. The difference can be pretty marginal, but a marginal difference in return can result in huge gains or huge losses when you are dealing with markets.

It definitely reflects poorly on a company when stories come out about employees crying in the bathroom. One can only imagine the psychological horrors that must go on at such a company! However, in my experience, there is a certain kind of person whose entire self-worth is based on their success. They are typically very high achieving, but failure is anathema to them. They try to avoid failure at all costs. They end up under incredible strain, because failure is inevitable.

I've known a lot of people like this. There was one person I knew who was so stressed when she didn't know a question on an exam that she ended up vomiting all over the test. Another time, I caused a woman to cry in Model U.N. by vetoing her Security Council resolution as the Russian Federation. If you know anything about Russia and the Security Council, a veto is not exactly an uncommon measure from them. However, it meant we ended 5 days of debate on an issue without passing a resolution. She took the veto as hard as if I had vetoed a judgement of her character. She was a grad student at a prestigious school, so I'd say she had to have been in her mid-20s, which is definitely old enough to be working at a high-powered investment firm.

I personally think I'd like that environment. I do really well when I have a lot of autonomy, and I don't need much supervision to work. I hate it when people don't criticize my work. It just makes me paranoid; I know I'm not perfect, and I know there are flaws. I learn from every critique and failure. I know a lot of people that would have a mental breakdown in that environment though. It's hard to tell who those people are ahead of time though, because they are such high-achieving individuals, and their anxiety about failure is basically invisible.

So, to answer your question more seriously, I think their continued success is evidence that they must be comfortable challenging their beliefs. However, I do think a holy book full of principles seems a little cult-like, especially in light of the fact that they are building an AI clone of their CEO.


Seems to me like challenging peers is OK, but don't dare challenge management. So yes, you'll get day-to-day issues worked out, but with things like ethics, general strategy, and other high-level issues you run into problems. So when they f* u*, it will be huge... because the whole organization will be blinded to it.


"Hit a nerve - a prized attribute." It would be interesting to see how quickly the company as a whole (and the system, if possible) responds to the results of incentives. Does the culture quickly regress to fooling the algorithm or does a cat and mouse game result?

Either way not something I'd want to use to drive culture, but an interesting take on how to "institutionalize" a culture and cultural norms even long after one has left the business.


They already have a system that people game all the time. It doesn't work.


I had a good chuckle reading this article:

> At Bridgewater, most meetings are recorded, employees are expected to criticize one another continually, people are subject to frequent probes of their weaknesses, and personal performance is assessed on a host of data points, all under Mr. Dalio’s gaze.

I think this is a testament to the disconnect between people in this space and technology. The idea that some mythical holy grail technology will produce alpha is as illusive as relying on automated trades.

> Bridgewater says about one-fifth of new hires leave within the first year. The pressure is such that those who stay sometimes are seen crying in the bathrooms, said five current and former staff members. This article is based on interviews with them and more than a dozen other past and present Bridgewater employees and others close to the firm.

And it confirms my suspicions. I stopped reading here because it's another PR piece for Bridgewater.

I think we will see a small homage to Bridgewater in Mr. Taleb's upcoming books as an example of "techno-irrational exuberance" along with other hilarious empty suit moments described in the book Fooled By Randomness.


How could any quant models not be based on the brains of their inventors?


From the article, it doesn't appear this has anything to do with quant models - they're looking to optimize how their employees conduct the work and how they are managed as personnel. This does not seem to target their trading strategies at all.


The 1980s called, they want their expert systems back.


exactly what I thought, this is a new gen expert system


Bridgewater manages $160 billion, the most of any hedge-fund firm. It has earned clients twice as much total profit as any rival, says LCH Investments NV, a firm that puts client money into hedge funds. Mr. Dalio personally earned $1.4 billion last year, according to research firm Institutional Investor’s Alpha.


I have heard that Amazon tried to do this with hiring with really bad results.


Can you expand on this at all? An article anywhere with a detailed write up? It sounds fascinating.


Another megalomaniac looking for religion where it cannot be found: radical transparency, humans and economies as machines, personality tests to determine strata, daily ratings called "dots" (which incidentally is what Palantir calls their ratings as well). Then again, maybe the article makes the place sound more like a cult than it actually is.


Sounds like a horrible place to work! If things go as planned, "Horrible Place to Work(tm)" will have an install package for it.


Yeah, really.

"The pressure is such that those who stay sometimes are seen crying in the bathrooms, said five current and former staff members."

What's the upshot? Seven figure salaries or something?


there are people crying in the bathrooms at every big firm on the planet.


Solution is to move the bathrooms into the open plan. No more slacking or being a wimp in private.

Or even better give everyone nappies so they can clean themselves on their own dime. Once they get home.


Yes, very true. And everyone is taking pills just to keep up with the pressure. I'd much rather have an AI take over these tasks than take more pills.


You can't possibly get enough training data for this.


I don't think it is fair to say it is not possible. There is an AI system out there that has correctly predicted the superfecta for the Kentucky Derby based solely on user predictions.

http://www.newsweek.com/artificial-intelligence-turns-20-110...


I work for a company that bets on the horses, and yet they use broadly similar techniques. But this doesn't replace humans at all.

Also as an aside, correctly predicting a superfecta happens quite a lot, and it is hard to tell from a sample size of 1 if this was luck, skill or a combination of both. Probably a combination of both - their algo may have reduced the odds to 200-1 for example, and luck did the rest.


UNU is not AI, it's polling. Bridgewater would like to develop an AI that minimizes human input, not just a better method of combining different human opinions.
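
To make the distinction concrete: the "dots" in the article, plus Dalio's published idea of believability-weighted decision making, suggest something like the aggregation below, which is still just a fancier way of combining human opinions rather than an AI that minimizes them. The code is a guess at the shape of it, not Bridgewater's actual system; the names and weights are hypothetical.

    # Guess at the shape of believability-weighted voting over "dot" ratings.
    def weighted_decision(votes: dict[str, float],
                          believability: dict[str, float]) -> float:
        """Combine per-employee votes (e.g. -1..1) using believability weights."""
        total_weight = sum(believability[name] for name in votes)
        return sum(vote * believability[name]
                   for name, vote in votes.items()) / total_weight

    votes = {"analyst_a": 0.6, "analyst_b": -0.2, "pm_c": 0.9}
    believability = {"analyst_a": 1.0, "analyst_b": 0.5, "pm_c": 3.0}
    print(weighted_decision(votes, believability))  # leans toward the high-believability vote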


They said that about Nate Silver, too, then Trump got elected.


Not until you can (video)tape all meetings and interactions, analyse body language and spoken content.



Horrible, but I can sympathize.

Living alone now after 22 years of co-habiting. Tempted to encase myself in cameras and sensors and train a model of ... me.

What would Jaba do? If it's a dumb thing, then help him not do it...


I'm gonna guess these employees have a work-for-hire contract of some sort, so that their ideas, or potentially copyrightable or patentable methods, belong to their employer.


They have one of the most onerous non-competes of any asset manager, especially for people who touch any of their strategies.


So does every large technology company including Google, Apple, Twitter, Facebook, Netflix, etc.


As do a number of major universities (CMU, Harvard, MIT, Stanford, and plenty more, I'm sure). If I had to guess, these contracts are generally a lot more prevalent in fields where people are regularly creating intellectual property.


There is no algorithm for a trader's instincts.


FYI, I was able to dodge the paywall and read the article by copying the URL into Google to get their referral, then clicking the top link.


not sure why the regular search links aren't working anymore when you click web. The link in "Top stories" section worked


Seems to be related to the title - if I take the article title from the actual page, google's first result works.

However if I take the title from here it goes to the paywall.

I wonder if Google has recently adjusted the algorithm for this? Makes me wonder if they actually like paywalls, as I've always thought Google was against them.


Ah, it wasn't working for me until I turned off uBlock, for whatever reason, in case someone else has this issue.


Click the "Web" link under the title to skip the paywall.


Odd, I submitted the same story but it wasn't picked up as a duplicate.

https://news.ycombinator.com/item?id=13238860


[flagged]


tell her it will make her thinner. It's on page 57 of the manual.


These are by far the best two comments


Yep.

I disagree with the downvotes.


I would assign a small chance to Bridgewater being a Ponzi scheme. I am not the first person to be skeptical of the firm and there have constantly been rumors about them.


"Though outsiders expected Mr. Ferrucci would use his talents to help find hidden signals in the financial markets, his job has focused more narrowly on analyzing the torrent of data the firm gathers about its employees. The data include ratings employees give each other throughout the work day, called 'dots'"

This -- Isn't -- JEOPARDY!! This is shit.



