Apple’s Secrecy Hurts Its AI Software Development (bloomberg.com)
156 points by coloneltcb on Oct 30, 2015 | 103 comments



I think the article misses the point: while its secrecy may keep other people from benefiting from the investment it has made in AI, it doesn't hurt Apple. It's almost the perfect prisoner's dilemma, right? Apple defected while everyone else cooperated, so it got the most benefit.

I have observed that organizations with too much secrecy lose out on innovations that people outside the process could illuminate, but secrecy as a strategy has a history of helping the company employing it stay ahead of the game. Look at Google's secrecy about its data centers: had it gone public with that work when it was done, and had other people adopted those techniques, perhaps 2 trillion kWh of power could have been put to other uses. Instead it just helped Google grow faster and do more with its infrastructure. Is it bad for Google? No. Bad for the rest of us? Perhaps, but we don't know one way or the other.

So when Apple's efforts fail to produce a competitive product, then I think you can say it's hurting itself, but until then I'm sure they see it as a competitive advantage.


> it doesn't hurt Apple

They addressed this right in the article. It does hurt Apple, because the best people won't go work there. That's why they are so far behind on every AI thing they do. Siri and Maps are both way worse than Google's offerings, as is the predictive typing. Probably because they can't get the best people working on the problem.

I use an iPhone but it's this lack of AI that makes me seriously consider Android every time it's time to upgrade.


I think it is a fair point that you find Android's AI features more compelling (although you haven't switched?). So from Apple's perspective it's probably hard to measure how many customers they "lose". They did just announce another record-breaking quarter, and it looks like they had more profit than Google had in revenue for the same period (hard to pin down precisely given the way quarters overlap).

However, while the article claims the "best people" won't go to work there, that is a bit hard to substantiate, isn't it? It seems apparent that the best people who insist on publishing their results won't go there, but are the "best people in AI research" and "people who insist on publishing" the same set of people? Or is it possible that some members of the set "best people in AI" are completely happy to work in a dark lab with no papers published but an essentially unlimited budget?

There is another organization which operates on the same principle, the NSA. They hire some fraction of the world's best mathematicians. Those mathematicians never publish their work, and yet still work there. And every now and then when we get to see behind the curtain, or long after the fact, we discover some really great work has gone on behind that veil of secrecy. Has it hurt them?

The author asserts that it hurts Apple, but I don't find their argument compelling. And there is the sticky issue of defining "hurt". Clearly it isn't hurting them financially, they are killing it. And clearly the feature set parity is close enough that you're still carrying around an iPhone. So how do we define hurt here? As compared to what?

I'm not convinced Apple is hurt by this strategy yet. Perhaps when you post here, "I really wanted to replace my iPhone with another iPhone, but the features of Google's AI or Microsoft's Cortana compelled me to buy a different phone," then we can start talking about hurt. But so far I'm not seeing it.


You're right, the difference isn't good enough to convince me to switch yet, but that is in part because I can still use the Google products (with poor integration) on my iPhone.

So honestly if they allowed Google to better integrate on the iPhone, it may not hurt them at all since I'd keep buying their hardware.

Having actually been involved in the AI scene, I can tell you anecdotally that all the very best people refuse to work at Apple (or the NSA, to your point), because they know they can get the same budget and data access at Google, but still be part of the community. Most of them in fact do work at Google, or at other startups nearby that work closely with Google.


> Having actually been involved in the AI scene, I can tell you anecdotally that all the very best people refuse to work at Apple (or the NSA, to your point), because they know they can get the same budget and data access at Google, but still be part of the community. Most of them in fact do work at Google, or at other startups nearby that work closely with Google.

Or Facebook (Yann LeCun) or Baidu (Andrew Ng) :)


Although you both make the same tacit assumption, which is that you know the contents of the set "best people in AI," when in fact all we can really say is that "we know some really great people in AI who are visible." The next Alan Turing or Ray Kurzweil could be working at Apple right now and we would not know it due to their secrecy, right?

I completely agree that if AI features become the compelling, ranking feature of phones, computers, and tablets, and if Apple begins to lose financially because it is paying a "tax" to license someone else's AI features to stay competitive, unable to develop a compelling experience on its own, then you can say its internal AI group either doesn't exist or is not staffed with competitive researchers.


Do the truly best people really believe the best use of their talents is finding new ways to show people more ads? I call shenanigans.


They're not working on ads. They're working on maps, speech recognition, visual recognition, self driving cars, etc.


I very much doubt that's their job description. How is anyone working on Translate, or Self-driving cars "finding new ways to show people more ads"?


What is the purpose of Translate other than to get more eyeballs to stick an ad in front of? Are you suggesting Google provide that service out of pure altruism?


My hunch is that they see "show people fewer lame ads" as merely a side effect of interesting research.


Is current profit really the best metric to measure the success of an AI department? I doubt many people bought iPhones because of Siri. AI development's biggest impact on bottom line is still probably 5-15 years away.


No, it isn't. However, profit is the standard by which we measure companies in many cases, so I think it is fair to use it to measure Apple overall. Measuring the impact of the AI group will be harder to do: at the "C" level you evaluate your R&D budget against your ability to stay ahead of your competitors; lower down, at the SVP level, you're probably balancing near-term improvement against game-changing innovation; going further down to the director level, you're no doubt measuring your department by its rate of progress against its own goals and against the progress of external organizations with a similar mission.

Given the secrecy we can't really know how effective Apple's AI group has been at meeting its goals, nor can we tell how quickly they are moving with respect to the state of the art in the industry.

That leaves us with profit from the outside looking in.

I'll go out on a limb and suggest that if Google's margins continue to erode over the next five years as they have the previous five, it will become that much harder to sustain their AI efforts. So at one level company profits can be a way to estimate a company's future ability to compete in the space.


I'm not convinced that Apple's secrecy has anything to do with Siri and Maps lagging behind Google's offerings. I suspect a large part of it has to do with the fact that Apple is not willing to compromise user privacy in order to do this stuff. Siri does require the internet[1] to parse your spoken words, but all of the data that Siri actually operates on is only local to your device and is not known to Apple. Similarly, Apple doesn't know anything pertaining to your mapping[2], that's all device-local too. Meanwhile Google aggregates everything it can, and AFAIK all of its email-related capabilities (e.g. finding invitations in email and whatnot) actually require you to be using Google email too, so they already have all of that. Because Google has such comprehensive knowledge about the user's personal information, they can do more intelligent things. But this comes at the cost of the user having given up any semblance of privacy. Also, I imagine that Google can run more powerful algorithms if it's all happening in the cloud, instead of being restricted by what can reasonably run on a mobile phone.

[1] Although I'm not sure if it does everything over the internet or if it's moved any of that to the device now that we have more powerful devices. I'm guessing they haven't, but it wouldn't surprise me if they've changed aspects of this.

[2] IIRC Apple said that they take routes, chop off the beginning and end (so you can't identify the addresses), and upload those using a unique anonymous identifier (i.e. UUID) and no device-specific info, possibly with information on the actual driving (e.g. traffic) but I'm not sure, and they use that info to help improve the product. But that sounds like pretty thorough anonymizing (no way to associate routes with each other or with a device, and the routes don't include the potentially private source/destination) so I'm happy to give them that, and even then I assume it only uploads this if the user has already opted in to sending Apple data to improve their products (which it asks about during initial device setup, and there's a switch in System Preferences for this).
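
To make the described pipeline concrete, here is a purely hypothetical Python sketch of that kind of anonymization: trim the ends of a route and tag what remains with a fresh random UUID rather than any device identifier. None of the names here come from Apple; it only illustrates the idea described in [2].

    import uuid

    def anonymize_route(points, trim=5):
        """points: ordered list of (lat, lon) samples along a drive (hypothetical format)."""
        trimmed = points[trim:-trim]      # chop off the beginning and end of the route
        return {
            "id": str(uuid.uuid4()),      # random per-route identifier, nothing device-specific
            "points": trimmed,            # only the middle of the route gets uploaded
        }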


>> chop off the beginning and end (so you can't identify the addresses), and upload those using a unique anonymous identifier (i.e. UUID) and no device-specific info,

Actually, since they know your IP address, it's not anonymous; it's easy to connect to your identity.

Also, in general the way they gather location (from another post [1]) is to collect the list of cellular and WiFi hotspots around you (and I wonder whether they gather signal strength). And since, for example, WiFi signals are heavily blocked by walls, they are very spatially limited in many situations.

So maybe (IDK, the field of anonymization is tricky) combining this data with the data sent when asking for a route is enough to get a pretty close location? Although sure, some data is definitely lost in this process, and that's a good thing.

[1]http://thenextweb.com/apple/2011/04/27/finally-apple-speaks-...


But on most (probably all?) mobile carriers, you're NATted anyway, so your IP address doesn't really tell you much about who specifically you are compared to, say, your home address.


Even while being NATted, doesn't your port remain stable at least for some time? And doesn't that, coupled with Apple's other services which do identify and use that port, give enough info?


> Actually since they know your IP address ,it's not anonymous - it's easy to connect to your identity.

Only if they record that information.

When a company is going to lengths to deliberately throw away identifying information, it seems kind of ludicrous to suggest that they're still going to try and track you by using a mechanism that is significantly worse than the information they already had access to and chose not to use.


Apple aims to maneuver between two goals: selling privacy, but creating great services (which often depend on data). My suggestion could be one mechanism to optimize both.

Or maybe not. We don't know.


How does tracking users by IP address optimize for anything?

Apple has shown repeatedly that they believe privacy is very important (Tim Cook has even gone on record as saying he believes privacy is a fundamental right), and they exert a fair amount of effort in trying to preserve that privacy (e.g. all of the data they refuse to collect, and all of the anonymizing they do on the data they do collect, which still requires the user to opt-in before that anonymized data is sent to Apple). The only way in which your argument makes sense is if you think Apple doesn't actually care about privacy, but only cares about the appearance of caring about privacy, and therefore would anonymize data it collects while still leaving loopholes for it to try and recover some of that data anyway. But that contradicts all of the evidence.


Apple know who the best people are. They will search them out. If they haven't already, then maybe Apple's priorities lie elsewhere and this is all moot.


This has been pretty much Apple's MO since early on. They're a business-heavy company with a different twist on marketing.

They have done some "somewhat" good things (e.g. WebKit). However, in the end they haven't followed through with them (e.g. parts of WebKit were broken, they've refused to work with the community, etc.). At least with Google you're getting the benefit of Google services and libraries beyond their own products (e.g. Guava, Dagger, Guice, GWT, Angular, etc.).


Those working on a secret project don't benefit from public evaluation and extension.


I think Apple has failed miserably to create a competitive product with Siri, and I expect them to be far, far behind Google in other areas like cars.


If only they had a bespoke search engine backing her up :-)


And terabytes of users' data collected over years or decades…


I'd say that it hurts _all_ of its software development, though AI is probably the most obvious area. If you can't discuss issues with outsiders, how do you attract developers who are interested/skilled in those specific issues?

I realize that Apple gets plenty of applicants, but when you're building something that requires some very specific sets of knowledge, it's harder to find them if you don't give them any way at all to find you. The developers who can make Pages/Numbers/Keynote/iCloud great are out there, but they certainly don't work at Apple.


Also, how are you going to attract scientists who want to make a career (which, perhaps unfortunately for them, involves publishing stuff)?

They should probably take an example from research.microsoft.com.


It has been a while, but MSR used to be the place where people went and still published, while Google Labs was a black hole. I remember some Google guy giving a talk at NIPS, but it was just a presentation from a shared corporate slide deck, which did not fit in with the rest of the conference. Google attracted some really good machine learning folks (temporarily) by offering them boatloads of money and manpower to implement their ideas, not freedom to publish. Has that changed?


I don't know what you're talking about; Google researchers publish all the time and give presentations at machine learning conferences with plenty of interesting and unique detail. Google just didn't do much with machine learning in the old days- the pubs were mostly system infrastructure and data analysis.

Here's an oldie but a goodie: http://people.mbi.ohio-state.edu/datta.53/philtalk.pdf (note the talk refers readers to the patent). At the time, reading Google's patents was the best strategy to learn what it was doing.


Yup, Google now has two teams publishing the very best ML papers alongside FAIR, Toronto, Montreal and some other places. The teams are DeepMind and Google Brain.


To my knowledge, Apple ran a program called "The Apple University Consortium," among others, where they would tap into the work of higher-education researchers.

It's my understanding these efforts continue, you just don't hear a whole lot (if anything) about it publicly.

Check it out, you'll see Apple quietly listed on several university websites, CMU for example: http://www.cmu.edu/corporate/partnerships/cic/tenants.html


Some scientists just want to build cool things and not have to worry about publishing. Some of my colleagues over the years have moved to Apple and startups for this reason.


Perhaps so. Other scientists want to push the border of human knowledge.


There is a difference between pushing the borders of human knowledge and disseminating said knowledge, though.


I guess that depends on your interpretation of "human knowledge" then :)


By being a human, you would be able to expand human knowledge without letting other humans know about it. So yeah, it's a matter of definition;P


Exactly. The first, without the second, is not called science.


If the scientists in question are applying the scientific method as a means to "build cool things", can they not be said to be doing science?


One of the key tenets of the scientific method is reproducibility. People can't reproduce your results if they don't know what your results are.


Reproducibility does not necessarily mean that other people should be able to reproduce the results in question (which would depend upon them knowing the relevant information in the first place) - but that an experiment can be replicated.

I would agree that publishing results is better for several reasons (ethics, pragmatism, probably more thorough evaluation, etc.), but the definition doesn't really deal with the public at all.

TLDR; Reproducibility != public reproducibility.


It doesn't have to be "arbitrarily many people". Just people other than the original scientist. So colleagues at Apple still qualify.

There is tons of secret research behind any company (and government), and it's still "science".

(And conversely, lots of published scientific papers are actually not reproducible, but for most of them nobody bothers -- even if other scientists cite them as accurate).


It's just the conclusions that need to be reproducible. In fact, science as a whole works better when researchers devise alternate experiments to test the same conclusion.

For example, say I measure gravitational acceleration on Earth by dropping a feather repeatedly. I get a number and I publish it. If other scientists just repeat my exact experiment, they will get a similar number and my result will be confirmed. Yay science!

But if scientists decide to check my number by dropping a wide variety of other objects, they will illustrate the flaws in my original experiment and everyone will get a clearer picture of gravity.

So it's not about just reproducing an experiment; scientists seek to test and expand upon each others' results.


People studying machine learning are not like people running a psychology experiment. If you give the same code, data and computers to two different teams, they will get the same results, and they will understand why that happens.

That's because a machine learning system has been built up from known principles and hardware; whereas the human brain is not fully understood, so researchers must work their way down from observed evidence to induce theories.
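
As a minimal sketch of that claim (my own toy example, not anyone's research code): fix the random seed, run the same training code twice, and the resulting parameters come out bit-for-bit identical.

    import numpy as np

    def train(seed):
        rng = np.random.default_rng(seed)          # same seed -> same data and initialization
        X = rng.normal(size=(200, 3))              # synthetic dataset
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
        w = np.zeros(3)
        for _ in range(500):                       # plain gradient descent on squared error
            w -= 0.1 * (X.T @ (X @ w - y)) / len(y)
        return w

    print(np.array_equal(train(42), train(42)))    # True: two "teams" get identical results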


Luckily bugs don't exist in machine-learning software!


What is a bug, anyway? It means the system is not working properly, which implies that folks know in advance how the system is intended to work.

When physicists were struggling to explain radiation, they didn't call it a bug in the atom. It was their understanding, not the system they studied, that needed fixing.


Let's propose a hypothetical scenario. Imagine an investigator in a large research lab somewhere makes a revolutionary AI breakthrough. Not just an incremental step, but one that achieves 99.9999% success in image object recognition (current state-of-the-art using GPU-accelerated Sparse ConvNets puts Univ. of Warwick's Dr. Ben Graham's team in the lead at about 95%[1]).

What would be the incentive to make the result public? Certainly if the discoverer were a grad student at Berkeley they may throw a brief summary onto Arxiv the next day to get some peer validation. But if it were the ML team at Uber's new secretive lab in Pittsburgh? Would they not have a compelling interest in keeping it under the kimono?

There are several possible outcomes for this game to play out. But for right now let's bask in the warmth of our open-source learning-toolkit abundance and concentrate on the real question of the day: now that we can teach machines, are there better ways to use their time than ascertaining what makes a top selfie?

[1] https://www.kaggle.com/c/cifar-10/leaderboard/public


This is not how research works or how a new fundamental technique is discovered. Any result needs to stand the rigor of peer review before the breakthrough can be conclusively proven (more unbiased minds trying to find the limitations of the approach and applying it to various problems). Also, this process is slow and requires many PhDs and students building upon one another's work.

Also, going from 96% to 99% may not be really revolutionary; what really matters is how you got there, e.g. whether new research was needed to detect new unicorn classes or scenes, or whether it was simply solved by great engineering, more data, etc.


This is a strange view of computer science research. In most of the cases of material advances outside the academic setting I am familiar with, the researchers always demonstrate or prove their result to a high degree of rigor themselves. None of this requires an army of PhDs (or even one) to rigorously inspect the result unless it is so badly represented as to be opaque. It is not like they sit around coming up with random ideas for how to solve a problem and leave it at that. Peer review is useful for checking the result but you aren't supposed to rely on peer review to filter garbage ideas.

I know a few professional computer science researchers, people who churn out new and better algorithms (some radically so) in multiple domains on a surprisingly consistent basis for private companies. At least half their job is actually the easily understandable explanation and proof of why the algorithm must have the claimed properties. Their job isn't to produce academic papers but to solve algorithm problems for software engineers writing real code, so a lot of focus is on reduction to practice and clear explanation of the mechanics.


>Any result needs to stand the rigor of peer-review before the said breakthrough can be conclusively proven

Not if you can easily verify the claim is true with sample images (in the situation given). Then you don't need to care for any "peer review".

In the same vein, if I come up with a data structure implementation that is 100x faster than common techniques, it just has to pass a suite of tests. No peer review needed. If some Intel engineer finds a revolutionary, 100% speed improving cache prediction method, ditto. And for lots of other cases. As soon as you check it works, you can start patenting and marketing pronto.


By patenting though, you reveal your technique (that's the whole point: you get exclusive rights in exchange for disseminating knowledge). Once you file and receive your patent, you've basically published your results and everyone can benefit (although they'll have to pay you, of course).


Sure, this is the case sometimes, but generally you need to check that you aren't gaming some odd property of the tests somehow: does your algorithm give up speed somewhere in order to recover it somewhere else? Is there a worst case the tests don't exercise? It's easy to miss something small. Proprietary algorithms are by and large not fundamental general breakthroughs in the field; rather, they are incremental upgrades, modifications, tradeoffs, or completely non-general algorithms for a very specialized purpose that happens to align with the business goals of the company.


Any result needs to stand the rigor of peer-review before the said breakthrough can be conclusively proven.

Apple aren't in the business of doing research for the sake of provable science though. They do research to make things they can sell. The goal is very different.


>The goal is very different.

The article agrees, and suggests that because of this, Apple will miss out on the top talent that prefers to publish.


That is not how Google works. They keep their newfangled systems under wraps and only release the details after a few years (likely 10). Research is most definitely done with the rigor of peer review in academia, but in a commercial setting that is almost never the case.


More like 5 years, and shrinking.


My comment and the article are about AI research specifically, where there is lots of exciting new research due to the success of deep learning. And since the area is new, there is a lot of low-hanging fruit and challenging problems, which researchers are eager to solve and publish, and as a result Apple is suffering.

So it does not matter how Google research worked earlier or if other areas of Google research continue to wait for a 5 year timer.


The incentive for academic labs and researchers to go public is very clear: publications and kudos in the community, which leads to career advancement and plentiful research funding.

You don't get super-rich like this, but you get the things that people really want. Respect of your peers, feeling like you are doing something useful, and enough money to run a lab and live a decent middle-class lifestyle, so you can get on with the work you love. That's a good life.

(Also you might want to stay in touch with your grad students who are driven to get rich. Can't hurt.)


The reality is that's not how research works. Inspiration and breakthroughs don't come out of thin air; they come from the freedom to interact with other scientists, process their ideas and work, and then combine them with a new perspective which is tested and then occasionally turns out to be better than the state of the art. The process of research builds a lot of tacit knowledge that can't just be picked up by secretly picketing machine learning conferences.

It's a lot of hard work, and the benefits to doing it in collaboration with the wider scientific community far outweigh the added value from being able to keep some marginal contribution secret. The top-secret, non-collaborating research lab always falls behind, with perhaps the rare exception of certain large and well-funded government organizations.


Well they could patent it. This is literally the purpose of patents. You can't patent it if you don't publish, and if you do patent it, you have every incentive to publish it. To get other people to use it and pay you licensing fees, or just to brag and attract better scientists to work at your company.

I do hate patents, and I think they are often given to ideas that seems simple, trivial, or iterative, rather than ground breaking. Stuff that probably would have been invented by someone else a few years, or even months, later. But for your hypothetical "revolutionary AI breakthrough", that's the ideal use case for them.


Never underestimate hubris and greed. If they made it known, they would gain respect, attract top-notch researchers (and thieves), and the market would reward them with funding/buying more shares.


This is a potential pitfall of private research. Ideally any real scientist who is earnest and dedicated about his or her work and is passionate about its benefit to humanity or contribution to human knowledge would, upon making a breakthrough like that, feel the need to share it. Look at the plot of Atlas Shrugged: completely unrealistic; Galt is a disgusting sociopath who invents a way to remove resource constraints from the world, essentially destroying markets and capitalism, poverty and hunger, and refuses to share it with humanity so he can hold out for... resources in exchange. These things don't happen. Scientists don't try to keep their work under wraps -- it's the business people who own them that do.

The secret private work that these people are doing is a chain around their neck which stops them from being real scientists. Some are willing to give that up for money and work at a lab that doesn't publish. Some are willing to give that up out of a sense of patriotic duty and work in intelligence. It's fantastic that there are private research labs like MSR that pay well and DO publish so there isn't a forced choice between money and being a real scientist for the lucky few who work there.


I'm scratching my head here. Bloomberg casts Apple as the outlier for its secrecy. In reality, companies which publish their results openly are the outliers today. In the era after the golden age of Bell Labs and industrial research, publishing is not much of a priority for most companies. Expediting research projects to commercialization leaves little time for going through peer review and publishing. A note on incentives: most companies these days reward R&D employees for commercialization and revenue resulting from their ideas and inventions. Even if they pretend to allow publishing, one can be quite sure that the publications impact the careers of authors within the companies in a minimal way, compared to deployments of products and services resulting from their inventions. Apple and Amazon, going one step further than all the old industrial and defense conglomerates, just seem to dispense with the charade of allowing publications. The Amazon example is particularly weird, because what they did (committee approving release, etc.) would be standard practice in most R&D labs, other than the usual exceptions of Microsoft Research, Mitsubishi ERL, etc.


Hiya. Author here.

I'd class some of Apple's main peers as: Google, Microsoft, Facebook, Samsung, Baidu, and, to a lesser extent, Amazon. These are all large consumer technology companies that are trying to develop quite intimate relationships with consumers, whether through phones or services.

Among these companies, Apple's secrecy with regard to AI does make it an apparent outlier. All its peers are, to one extent or another, publishing much more aggressively than it is, in an apparent attempt to woo some of the most accomplished grad students in the academic community into considering going into industry. (Not mentioned in the article, but relevant for this: Samsung has started publishing a few papers, and I'm seeing lots of collaboration between Samsung-affiliated researchers and SK academics pop up on Arxiv. Huawei is coming up as well, via its "Noah's Ark Lab" and some other R&D centers.)

TL;DR: most companies are very private, and publishing so openly is the outlier, but among Apple's peers/significant competitors there is a tendency towards open publication and interaction with the academic community.


Is it true that "really strong people don't want to go into a closed environment?" By definition, people who prefer (or tolerate) a closed environment will be underrepresented in any public data set (e.g. lists of paper authors, conference presenters, etc.).

All the quotes in the story are from professors, who are going to be more aware of those public folks than the ones that hire into more secretive companies. And, I would guess that professors are more likely to prefer (and therefore promote) a public approach to CS research.

Certainly a closed environment will select for people who like that (or are OK with it), but that is not the same thing as selecting for talent, skill, or experience. It doesn't get more closed than the NSA, and they seem to do ok hiring top CS and math talent.


It is amazing to me how little Siri has changed. Apple is the biggest company on the planet, with a huge cash hoard. And this is their AI? Either they are lacking in ideas or the execution is as bad as the article says.


Compared to what? I haven't found Google's or MS's AI to be that much better. A little better yes, but not that much better.

And perhaps they are "the biggest company on the planet" because they don't throw lots of their money at half-baked stuff like AI, but proceed conservatively with what they can reliably offer.


Indeed, both Google and MS have better AI. All evidence suggests that the next phase shift in AI will come from deep, recurrent networks. The article points out why that future is not looking good for Apple.


Deep learning and neural networks are quickly becoming the fizzbuzz of AI.

I'm surprised nobody talks about topics such as simulated annealing and genetic algorithms.

AI is not just limited to deep neural networks.
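
For anyone who hasn't run into it, here is a rough sketch of simulated annealing (my own toy version; the objective function and cooling schedule are made up for illustration): accept worse moves with a probability that shrinks as the "temperature" drops, which lets the search escape local minima that plain hill-climbing would get stuck in.

    import math
    import random

    def simulated_annealing(f, x0, steps=10000, temp0=1.0):
        x, fx = x0, f(x0)
        best, fbest = x, fx
        for k in range(steps):
            temp = temp0 * (1 - k / steps) + 1e-9   # simple linear cooling schedule
            cand = x + random.gauss(0, 0.5)         # random neighbour of the current point
            fcand = f(cand)
            # always accept improvements; accept worse moves with temperature-dependent probability
            if fcand < fx or random.random() < math.exp((fx - fcand) / temp):
                x, fx = cand, fcand
                if fx < fbest:
                    best, fbest = x, fx
        return best, fbest

    # toy example: a bumpy 1-D function with many local minima
    print(simulated_annealing(lambda x: x * x + 3 * math.sin(5 * x), x0=4.0))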


Genetic algorithms were hot in the '90s. Then it turned out they kinda sucked.


I suspect Siri's lack of improvement is because Apple is a hardware company. Hardware companies tend to complete projects, then say "Well, that's shipped and done. Let's go start something new now," and not invest in continuous improvement of the product like a software firm would.


Except in their OSes. Bloated, but still the best.


My friend had Windows 8 installed on his MacBook Pro. Why?


Come on. Do you know how much it costs Jony Ive to fold aluminium? A little more than NASA's yearly budget, that's how much.


The quant hedge fund world is possibly the area where secrecy hurts the most. I worked for many years on a strategy that my colleagues were convinced was special. It did make a lot of money, but looking back it really suffered from not being able to take in any outside feedback. We had a meeting where I was the only dissenting voice on the question of whether to give any transparency into how the model worked.

There are some really bad issues with trying to do anything like this in total secrecy:

- We couldn't tell investors how the strategy worked. This is bad for building trust. When things go well, people will invest anyway. But they are quick to leave you when they don't.

- We couldn't hire anyone without bolting down everything. I ended up having to do all the coding while making sure nothing went into the repo that was the secret sauce, having to send a new binary to a consultant (whom everyone knew) every time the sauce changed, and so forth. As a tiny team we had a lot of fiddling with the network to make sure nobody with the wrong credentials could get things they weren't supposed to.

- It affected how we tried to hire people for other things, like new strategies. When you think everything you have is special, you tend to devalue what other people do. You also end up thinking you can take their secret sauce. There's at least one well known fund that thinks it can just interview people and get their ideas. It's pretty obvious when it's happening.

- Because truly original ideas are hard to find, you stop looking and you think that's it. Or, as it happens, you read some paper, slightly modify it, and you announce your revolutionary new idea to the team. Unfortunately, people seemed to buy this. They couldn't just accept that a trading strategy is a whole bunch of ideas that come and go over time. And certainly not the idea that a tiny little tweak that makes everything work is probably a dangerous thing.

- You end up not comparing your findings to the real world. Nobody was reading what was happening in the machine learning world (the guys were essentially using high-school math to implement the model). Attempts at introducing things that were completely ordinary in software development were met with derision. Basically, if something wasn't more secret sauce, it wasn't interesting.

- The crux of this thinking is you tend to think implementation is easy and idea generation is separate and difficult. Probably most people will find the opposite to be true. In truth, if you look at the financial code, it would fit on a screen. The other tens of kLoC was implementation: back office checking, execution, writing things to the database, connections to all the exchanges, and so on.

If you look at the wider industry, you can talk to people at large funds who don't actually care that lots of people are looking at the strategy. They know what the techniques are, and they know how to build code collaboratively.


I have encountered situations where people are secretive because there is nothing there. I was doing work with a team that had been acquired by my employer at the time, and they supposedly had a "secret" approach for evaluating a problem in their domain.

Eventually they admitted there wasn't anything special there: they simply looked at a graph of the data and made a judgement call.

It wasn't a big deal, as it wasn't why they had been acquired, but they really put up every barrier possible, including the "how can we trust you" line. It had to get escalated all the way to execs of a $600M business to get someone to tell them to tell me exactly what they were doing...


Well that's basically what happened. Smart people are very good at finding ways to convince themselves they're right and the whole world is wrong.


Wise people know that they don't know everything and are able to realize that even if they know a lot, someone else may know more about X than they do and absorb their perspective.


> if you look at the financial code, it would fit on a screen

This is exactly why people shouldn't be afraid of somebody else looking at the code. What is important here is not specifically the code that is implemented, but the ideas, model testing, and analytical thinking that went into the writing of those few lines. The problem with many researchers in this area is thinking that it would be easy to go from an idea or simple implementation to a full investment system. Most of these ideas have already been published; it is just that few people know how to use them to make money.


That's right. A lot of "ideas" are really a bit useless if you don't have infrastructure to execute them. Normally this means "how can I make these trades as cheaply as possible? And without moving the market?".

For instance, anyone can pick up a paper about fundamentals (or the Bible, aka Graham and Dodd) and see why it might work. To make money there's a lot of dirty work. Getting some data, making sure it's clean, thinking about biases, getting some lines in to do the trading, coding up a trading engine, writing back office code, getting someone to stare at it all day, and so on.


>> The quant hedge fund world is possibly the area where secrecy hurts the most.

Cool comment, thanks.

I always assumed that the secret world of spies, secret organizations, and their technology development was more inefficient than the secret corporate worlds? Because (a) the CIA et al. have the same problems and (b) they aren't even limited by having to earn money -- they can even stamp embarrassing foul-ups as state secrets!


I think this might matter most when it comes to self-driving cars. Google's cars have already been driving around for quite some time, in real traffic, against real drivers.

It is impossible for Apple to keep their self-driving car project secret (if there is one) and get some real-world experience at the same time.


It is impossible for Apple to keep their self-driving car project secret (if there is one) and get some real-world experience at the same time.

How do you know they're not already testing a car in public and they can keep it a secret?


Because you need a permit for that, and Apple doesn't have one.


Unless they're doing it under the cover of a shell corporation.


There are only 10 companies and they are fairly well known: https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testi...


That's only a list for California; each state seems to have its own laws and might not require disclosure of the companies:

http://cyberlaw.stanford.edu/wiki/index.php/Automated_Drivin...


Which would still be very easy to track, since the permits are public.

Or do you think nobody would notice a permit issued to "Ladron de Manzanas Inc" or whatever name nobody ever heard of?

Not to mention the accidents. How would Apple hide all those human-caused SDC accidents?

Occam's Razor is telling me there are no Apple SDCs on the streets, but hey, maybe they invented an invisible, ethereal SDC registered under a shell corporation that nobody became curious about :)


They also don't have to test in the United States.


So they would need to enter into a secret agreement with a foreign government, create secret facilities, ship the car around the world, still make their cars invisible and ethereal, just so they could test their SDCs in real traffic?

The plot thickens. Add a couple rocket launchers and you have a Hollywood blockbuster.


Especially if it flies.

Then they'd only need to deal with the FAA, and no one is looking in that direction.


A couple of days ago there was a guy looking for people on the Clojure mailing list. He said he was hiring for a super-secret ML project at a super-secret company.

His LinkedIn profile says he works as a headhunter for Apple.

Apple's secrecy has become a joke; they're like that guy from Sesame Street selling you letters.


A friend was being heavily sought after by one of our nation's intelligence services. Excellent compensation package, lots of relevant work. But he'd be unable to have anything on his resume and would likely be unqualified for any work outside the organization if he wanted to leave. Sounds a little like Apple. He didn't take the job.


> But he'd be unable to have anything on his resume and would likely be unqualified for any work outside the organization if he wanted to leave

That is an exceptional talent-retention mechanism that I don't think enough people consider when they talk about how the three-letter agencies maintain their large talent pools. Once you are in, it is very difficult to get out.


Great for retention, pretty terrible for recruiting though.

AFAIK most TLAs invest a lot in training since they're going to really struggle with recruiting people with the specialised knowledge they need; which is fine when that's a reasonable path for your org, but as a company, training AI experts seems really tough.


(Apple) Research? What research? We're a design company.


[flagged]


Did you read the article at all? It seems like you didn't even read the title of it correctly.


Yes I did. It was focused on AI, which is a branch of software development. AI is the frontier of software that humanity is on right now. Therefore saying Apple isn't propagating AI = isn't propagating software = bullshit.


Even the dig at Fortran is silly. Most numerical libraries and subroutines are still implemented in Fortran and you have to figure out how to call them from other languages (e.g. scipy, numpy).
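
A quick illustration of that point (a minimal sketch, assuming SciPy is installed): scipy.linalg.lapack exposes the Fortran LAPACK routines directly, and numpy's high-level solvers call into the same LAPACK code under the hood.

    import numpy as np
    from scipy.linalg.lapack import dgesv   # low-level wrapper around the Fortran routine DGESV

    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([[9.0],
                  [8.0]])

    lu, piv, x, info = dgesv(A, b)          # the actual solve happens in compiled Fortran
    print(x.ravel())                        # solution of A @ x = b
    print(np.allclose(A @ x, b))            # True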


Things get implemented because that's what people know, not because people want to build their ABC data-driven business on a new technology.

You're kidding yourself if you think implementing things in Fortran is helping anyone but the two parties involved: the employer and the employee.


Most? Do you have ANY proof to back this up?


So this is the pit Apple will finally choke on. They will be able to change a lot of things and surprise in many ways but they will never ditch their extreme secrecy and become open and collaborative.


What? I do have a crystal ball and that entitles me, as much as the next sage, to tell what's coming and share my wisdom. Downvotes here are nothing but a metric for denial in the under-informed masses.



