The first figure, the bar chart based on the BLS Consumer Expenditure Survey, shows almost the same expenditures in Atlanta, Georgia, and Los Angeles, California, with somewhat higher overall expenditures in Atlanta and somewhat lower housing expenditures relative to transportation in Atlanta.
Asteroid is discovered in deep space heading for Earth. Astronomers watch the asteroid approach Earth. Asteroid hits New York City. Millions die. Astronomers and media report:
"Asteroid from deep space kills millions. Asteroid causes death of millions."
Did you just post a rebuttal to your original comment calling for 'experiments'? If economics is as mysterious as you say, how can you design experiments that produce verifiable and reproducible results?
Physics has had a lot of predictive power: economics hasn't. "Asteroids are incredibly powerful" is a physical statement, and a physical statement of causation that you can only make because physics is better-understood.
With exceptions, physical statements have predictive power. With exceptions, economic statements don't. So that's why I say that causality in economics is not generally understood.
The article is misleading if not false. Neural nets were hot in academic AI research 30 years ago (1988). The original Perceptron had fallen out of favor in part because of arguments, in Minsky and Papert's book Perceptrons, that it could not implement an exclusive or (XOR).
Neural nets fell out of favor in the 1970's but came back and became hot in the early 1980's with work by John Hopfield and others that addressed the objections.
Practical and commercial successes were limited in the 1980's and 1990's which led to a reasonable decline in interest in the method. There were some commercial successes such as HNC Software which used neural nets for credit scoring and was acquired by Fair Isaac Corporation (FICO).
I turned down a job offer from HNC in late 1992 and neural nets were still clearly hot at that time.
Some people continued to use neural nets with some limited success in the late 1990's and 2000s. I saw some successes using neural nets to locate faces in images, for example. Mostly they failed.
AI research is very faddish with periods of extreme optimism about a technique followed by disillusionment. One may wonder how much of the current Machine Learning/Deep Learning hype will prove exaggerated.
Also, traditional Hidden Markov Model (HMM) speech recognition is not rule based at all. It uses a maximum likelihood based extremely complex statistical model of speech.
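For a sense of what "maximum likelihood based statistical model" means here, a minimal Viterbi decoding sketch follows. The states, observations, and probabilities are made up for the example; real recognizers use far larger models over acoustic features, but the core computation is the same:

```python
# Toy HMM decoding (Viterbi): find the most likely hidden state sequence
# for an observation sequence. All numbers below are invented for illustration.
states = ["vowel", "consonant"]
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {
    "vowel":     {"vowel": 0.3, "consonant": 0.7},
    "consonant": {"vowel": 0.6, "consonant": 0.4},
}
emit_p = {
    "vowel":     {"a": 0.6, "t": 0.1, "e": 0.3},
    "consonant": {"a": 0.1, "t": 0.7, "e": 0.2},
}

def viterbi(obs):
    """Return the maximum-likelihood state sequence for the observations."""
    # best[s] = (probability of best path ending in state s, that path)
    best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        best = {
            s: max(
                ((p * trans_p[prev][s] * emit_p[s][o], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])[1]
```

There are no hand-written rules anywhere: the decoder simply maximizes probability under the trained model, which is the point being made above.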
Citations to his papers have been rising steadily from between 88 and 107 in 1987 to between 685 and 826 in 1999. That's hardly an unpopular researcher.
And for a bit of comparison with other machine learning researchers, here's a link to a data set of family relations from a 1986 paper by Hinton:
At the bottom of that page, in the Relevant Papers section, there are two links to two papers using the data set: one is Hinton's own paper that introduces it and one is by Quinlan.
Clicking on the [Web Link] links for the two papers, I can see the references to those papers. There is a single reference to Quinlan's paper. There are 43 to Hinton's, of which all but 6 are from 1999 and earlier. And those are not self-references, nor references by Bengio, LeCun et al. If there is a clique, it is hard to see it.
So there was a lot of interest in Hinton's work even in the years he was supposedly "exiled to the academic hinterland", as another article put it.
> AI research is very faddish with periods of extreme optimism about a technique followed by disillusionment.
Truth! I happened to go through grad school at a time when SVMs and kernel methods were cool, neural nets were the opposite, and Hinton and his students were oddball outcasts at NIPS. Kudos to Hinton for sticking to his story until the current "deep learning" hype wave presumably allowed him to buy a yacht and an island. I imagine we'll be hearing about some other non-linear optimization technique in a few years.
The XOR story is misleading. :-) The Wikipedia page does a good job of explaining what the book actually proved:
"What the book does prove is that in three-layered feed-forward perceptrons (with a so-called "hidden" or "intermediary" layer), it is not possible to compute some predicates unless at least one of the neurons in the first layer of neurons (the "intermediary" layer) is connected with a non-null weight to each and every input. This was contrary to a hope held by some researchers in relying mostly on networks with a few layers of "local" neurons, each one connected only to a small number of inputs. A feed-forward machine with "local" neurons is much easier to build and use than a larger, fully connected neural network, so researchers at the time concentrated on these instead of on more complicated models."
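The limitation can be illustrated concretely. The sketch below (my own toy construction, not taken from the book) shows that no single threshold unit on a small grid of candidate weights computes XOR, while a two-layer arrangement of the same kind of units does:

```python
# Toy illustration of the classic XOR point: a single threshold unit
# (perceptron) cannot compute XOR, but one hidden layer of such units can.
from itertools import product

def unit(w1, w2, bias):
    """A threshold neuron: fires iff w1*x1 + w2*x2 + bias > 0."""
    return lambda x1, x2: int(w1 * x1 + w2 * x2 + bias > 0)

OR  = unit(1, 1, -0.5)   # fires if at least one input is 1
AND = unit(1, 1, -1.5)   # fires only if both inputs are 1

def xor(x1, x2):
    # Two layers: XOR(x1, x2) = OR(x1, x2) AND NOT AND(x1, x2)
    return int(OR(x1, x2) == 1 and AND(x1, x2) == 0)

targets = {(a, b): a ^ b for a, b in product([0, 1], repeat=2)}
assert all(xor(a, b) == t for (a, b), t in targets.items())

# Exhaustive check: no single threshold unit with weights/bias drawn
# from a grid of {-3.0, -2.5, ..., 3.0} matches XOR on all four inputs.
grid = [i / 2 for i in range(-6, 7)]
single_layer_can = any(
    all(unit(w1, w2, b)(x1, x2) == t for (x1, x2), t in targets.items())
    for w1, w2, b in product(grid, repeat=3)
)
assert not single_layer_can
```

The grid search only samples weights, but the underlying fact is general: XOR is not linearly separable, so no choice of weights and bias for a single unit can work.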
The author incorrectly speculates that the title of the magazine -- Jacobite -- is a play on Jacobin:
Jacobite – which is apparently still a real magazine and not a one-off gag making fun of Jacobin – summarizes their article Under-Theorizing Government as “You’ll never hear the terms ‘principal-agent problem,’ ‘rent-seeking,’ or ‘aligning incentives’ from socialists. That’s because they expect ideology to solve all practical considerations of governance.”
Jacobite has nothing to do with Jacobin. Jacobitism (/ˈdʒækəbaɪˌtɪzm/ JAK-ə-by-tiz-əm;[1][2] Scottish Gaelic: Seumasachas [ˈʃeːməs̪əxəs̪], Irish: Seacaibíteachas, Séamusachas) was a political movement in Great Britain and Ireland that aimed to restore the Roman Catholic Stuart King James II of England and Ireland (as James VII in Scotland) and his heirs to the thrones of England, Scotland, France and Ireland. The movement took its name from Jacobus, the Renaissance Latin form of Iacomus, which in turn comes from the original Latin form of James, "Iacobus." Adherents rebelled against the British government on several occasions between 1688 and 1746.
The Jacobins on the other hand were a French political movement/party:
The Society of the Friends of the Constitution (French: Société des amis de la Constitution), after 1792 renamed Society of the Jacobins, Friends of Freedom and Equality (Société des Jacobins, amis de la liberté et de l'égalité), commonly known as the Jacobin Club (Club des Jacobins) or simply the Jacobins (English: /ˈdʒæ.kə.bɪnz/; French: [ʒa.kɔ.bɛ̃]), was the most influential political club during the French Revolution. Initially founded in 1789 by anti-Royalist deputies from Brittany, the Club grew into a nationwide republican movement, with a membership estimated at a half million or more.[1] The Jacobin Club was heterogeneous and included both prominent parliamentary factions of the early 1790s, the Mountain and the Girondins.
In 1792–93, the Girondins were more prominent in leading France, the period when war was declared on Austria and Prussia, the monarchy was overthrown and the Republic created.
I can almost guarantee you that Scott Alexander is aware of who the Jacobites and the Jacobins were and what the difference is. The blog is constantly bringing up all manner of historical and political esoterica. He's making a joke based on the name resemblance.
It doesn't read like a joke. It reads like a mistake or perhaps he is assuming his audience won't be aware of who the Jacobites were and will find it funny.
I've read this blog for years. This is exactly the kind of joke he makes all the time; in describing or referring to something, he folds in a fact which, while not literally true, is a funny way of imagining it to be, and which is expected to be obvious to the readership. It's kind of like the practice of Using Capital Letters to refer to a Self-Serious Idea of Something.
The author of the article, pseudonym Scott Alexander, lived in Ireland for some time while studying medicine. He's probably aware of the Jacobite rebellion and events like the Battle of the Boyne, if only via intellectual curiosity about the North while living in Ireland.
I would not take excuses like this for rejecting a candidate, especially a highly qualified/skilled candidate, at face value. Despite the pervasive rhetoric about wanting the "best" or hiring only the "best," both individual interviewers and companies as a whole are often afraid of hiring candidates who know more or are more skilled than themselves. The unstated fears include that the candidate will, if hired, outshine colleagues, his or her boss, or even, in some cases, the founder or CEO of the company -- especially in the case of startups where venture capitalists or other investors could be impressed by the new hire.
Since it is usually not acceptable to reject an applicant for this reason, some pretext will be generated. One way is to focus on some area where the candidate is weak or different, such as using a different software tool. Another way is to use trick questions or problems.
This is an article on how this was allegedly done in academia in the Soviet Union:
Actually, I had drafted "Horror story 4: Candidate was rejected because he was better than the interviewer" but I thought it is too complex to be included. I might write a whole blog post about it. Thanks so much for linking to the paper.
He didn’t even understand the question “what would you have to see to convince you that you’re wrong?” meaning he wouldn’t even be able to recover from the error!
For older engineers this is one of the reasons why they are not hired. There are always various excuses, but basically no one wants to hire his next boss.
>>both individual interviewers and companies as a whole are often afraid of hiring candidates who know more or are more skilled than themselves.
Reminds me of an experience a few years back: I aced one round, and in the next a guy came along asking questions. He got very angry every time I gave a correct answer.
It's almost like he was looking for an excuse to fail me and wasn't getting one.
afraid of hiring candidates who know more or are more skilled than themselves. The unstated fears include that the candidate will, if hired, outshine colleagues, his or her boss
At one company the joke was you should never hire someone taller than yourself, because you would end up reporting to them. It was funny because it was true...
Isn't it your job as a founder to find and surround yourself with people who are much smarter than you in their given fields of expertise?
You as the founder should then coordinate the efforts of your different experts and handle all that other startup stuff (talking to investors, acquiring customers, etc.)...
Venture capitalists for example are notorious for investing in startups and replacing the founder and/or CEO with "more experienced management" as the company grows.
A well known prominent example is the ouster of Steve Jobs at Apple in the 1980s by John Sculley whom Jobs had supposedly hired -- with the support of the lead investors such as "Mike" Markkula and Arthur Rock.
Steve Jobs is not an unusual case. The founders of Cisco Sandy Lerner and Len Bosack were ousted by Don Valentine/Sequoia:
President Trump's use of Twitter, a new communications technology, to communicate with the general public and the world is no different from Franklin Delano Roosevelt (FDR)'s use of the then-new technology of radio -- weekly addresses to the nation and press conferences -- to communicate with the nation and, as it happens, bypass the almost uniformly hostile newspapers of the 1930s.
Trump is no more using Twitter to communicate policy directives to US government agencies than FDR was using radio. He still has to formulate and sign executive orders and go through the legally prescribed channels, as the successful legal challenges to the original Muslim ban show.
Rather than try to censor the President of the United States his critics should rationally and clearly critique his actual policies and actions.
>>his critics should rationally and clearly critique his actual policies and actions.
I agree in sentiment, but that isn't really how it works anymore (maybe it never did?). It's just a never-ending barrage of soundbites and 'outrages' because that's what moves joe public.
People vary. I find it quite difficult to work effectively in an open office. Coffee shops are often better but not that great. I find either total silence or soft instrumental music is usually best for me. Some people seem to enjoy open offices although I suspect their work is different and does not require the same level of deep concentration.
The obvious solution is to allow people to select the office environment that works best for them: office, cubicle, open area etc. instead of imposing a one size fits all "solution" from above.
Watson's original account of the discovery of the double helix has been heavily criticized for downplaying the role of Rosalind Franklin's x-ray crystallography measurements of the structure of DNA in the discovery.
Might this criticism have been politically motivated?
For a while she actively campaigned against DNA being a double helix; see e.g. her 'obituary' for the double helix [1], which precedes Watson and Crick's paper. Might her insistence that DNA is not a double helix have misled Wilkins and others?
This is a complicated issue. Specifically, The Double Helix, cited in the original HN post, does mention Rosalind Franklin, but in a rather demeaning way that slights her contributions -- which came to be highly criticized in subsequent years.
This is the relevant section from the Wikipedia article, which like most Wikipedia articles on contentious topics should be taken with a grain of salt and not substituted for original sources:
Recognition of her contribution to the model of DNA
Upon the completion of their model, Crick and Watson had invited Wilkins to be a co-author of their paper describing the structure.[186] Wilkins turned down this offer, as he had taken no part in building the model.[187] He later expressed regret that greater discussion of co-authorship had not taken place as this might have helped to clarify the contribution the work at King's had made to the discovery.[188] There is no doubt that Franklin's experimental data were used by Crick and Watson to build their model of DNA in 1953. Some, including Maddox, have explained this citation omission by suggesting that it may be a question of circumstance, because it would have been very difficult to cite the unpublished work from the MRC report they had seen.[78]
Indeed, a clear timely acknowledgment would have been awkward, given the unorthodox manner in which data were transferred from King's to Cambridge. However, methods were available. Watson and Crick could have cited the MRC report as a personal communication or else cited the Acta articles in press, or most easily, the third Nature paper that they knew was in press. One of the most important accomplishments of Maddox's widely acclaimed biography is that Maddox made a well-received case for inadequate acknowledgement. "Such acknowledgement as they gave her was very muted and always coupled with the name of Wilkins".[189]
Twenty five years after the fact, the first clear recitation of Franklin's contribution appeared as it permeated Watson's account, The Double Helix, although it was buried under descriptions of Watson's (often quite negative) regard towards Franklin during the period of their work on DNA. This attitude is epitomized in the confrontation between Watson and Franklin over a preprint of Pauling's mistaken DNA manuscript.[190] Watson's words impelled Sayre to write her rebuttal, in which the entire chapter nine, "Winner Take All" has the structure of a legal brief dissecting and analyzing the topic of acknowledgement.[191]
Sayre's early analysis was often ignored because of perceived feminist overtones in her book. Watson and Crick did not cite the X-ray diffraction work of Wilkins and Franklin in their original paper, though they admit having "been stimulated by a knowledge of the general nature of the unpublished experimental results and ideas of Dr. M. H. F. Wilkins, Dr. R. E. Franklin and their co-workers at King's College, London".[81] In fact, Watson and Crick cited no experimental data at all in support of their model. Franklin and Gosling's publication of the DNA X-ray image, in the same issue of Nature, served as the principal evidence:
Thus our general ideas are not inconsistent with the model proposed by Watson and Crick in the preceding communication.[192]
The section you cite does not mention that F. explicitly campaigned against the helix model for a while. Why? If we exclude the option that she deliberately lied about it and publicly spoke against the helix model so as to mislead others (which would be a major breach of scientific ethics) -- and we should: "de mortuis nil nisi bonum dicendum est" -- then she failed to see what others did see.
The issue with Watson's original account is here in Johnson's text:
After negotiations between both labs, papers by Wilkins and by Franklin and Gosling appeared in the same issue of Nature along with the one by Watson and Crick. (They can all be found on a website at Nature, and an annotated version of the Watson-Crick paper is at the Exploratorium’s site.) Toward the end of their paper, they flatly state that “We were not aware of the details of the results presented [by the King’s scientists] when we devised our structure, which rests mainly though not entirely on published experimental data and stereochemical arguments.” Yet they go on to write in an acknowledgment, three paragraphs later: “We have also been stimulated by a knowledge of the general nature of the unpublished experimental results and ideas of Dr. M. H. F. Wilkins, Dr. R. E. Franklin and their co-workers at King’s College, London.”
The sentences seem to contradict each other, and in any case Watson made a point, in his book The Double Helix, to describe the pivotal moment when he saw Photo 51.
So the controversy continues. Was it ethical for Wilkins to show Watson his colleague’s work without asking her first? Should she have been invited to be a coauthor on the historic paper? Watson hardly helped his case with his belittling comments about Franklin in The Double Helix.
The bigger issue from the original Hacker News post is Watson and also Feynman's portrayal of their work as highly independent of their colleagues: "disregarding" others.
You keep ignoring the elephant in the room: that she was wrong (in the charitable interpretation). Why?
I'd appreciate it if you could state clearly whether you think
- F. got it wrong, or
- she knew/assumed early on that it was a helix, but lied about it?
Thank you.
As to whether the X-rays should have been seen by Crick/Watson: the research was publicly funded. Publicly funded research should be open and transparent. Or do you disagree with transparency of taxpayer money? Would it have even been legally possible for F. to refuse the communication of her taxpayer-funded results?
This was addressed in the appropriate chapter in The Gene: An Intimate History by Siddhartha Mukherjee[1]. It was the first I had heard of Rosalind Franklin and I was very grateful for this account which gave appropriate recognition. (Side note: the entire book is also highly recommended)
Both "Uncle Bob" and the author are selling extremely complicated, cumbersome, expensive ways to develop software -- great for them as consultants if they can pull it off, not necessarily great for their customers. Giant near-monopolies such as Microsoft or Google can afford these methods and may even benefit from this sort of super-bureaucratic "gold plating" of software; they certainly benefit from convincing small, non-monopoly potential competitors to adopt slow, expensive, bureaucratic methods that smaller firms cannot afford (they run out of money trying to create perfect software instead of software that works and gets the job done).
Perhaps if the methods are sound (even if expensive) then someone can step up and make tooling or refine them so the methods are more accessible to smaller firms. Uncle Bob's opinions might be discouraging people from exploring them and finding ways to drive down the cost of adopting these better methods.
> Perhaps if the methods are sound (even if expensive) then someone can step up and make tooling or refine them so the methods are more accessible to smaller firms.
To his credit, Hillel is trying to evangelize (in general, not directly through the post linked) a particular tool (TLA+) whose creator (Leslie Lamport) wants to do exactly this. I don't know how big Hillel's employer is, but from what I understand it's nowhere near the biggest engineering firm in number of technical employees. Lamport's current objective with TLA+ is to get people to learn concepts for formal description/specification of systems, and he happens to provide an effective tool for testing those system specs.
We have about ten engineers, so nowhere near the size where people consider formal methods "appropriate". Nonetheless it's still been incredibly useful for our work.
I'm a pretty huge evangelist of TLA+, but I don't think it's the silver bullet of software correctness. It just happens to be the tool I'm most familiar with and the one I thought could benefit most from a free guide. If people start widely using TLA+, I'll be ecstatic. If people ignore TLA+ but start widely using Alloy, I'll still be ecstatic. Software correctness is a really huge field and there's lots of really cool stuff in it!
Speaking of making methods more accessible, I'm working on a tutorial about Stateful Testing. Hypothesis (https://hypothesis.works) is an absolutely incredible property-based testing library for Python, and I think it could potentially make PBT a mainstream technique. One of the more niche features is that you can define a test state machine that runs by randomly selecting transitions rules and mutating your program state, then running assertions on the new state. It's really neat!
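The idea can be sketched without the library itself. The following is a hand-rolled miniature of stateful testing (the real Hypothesis `RuleBasedStateMachine` API is far richer, and also shrinks failing runs; this only shows the shape of the technique with a made-up counter example):

```python
# Minimal stateful-testing sketch: randomly pick transition rules, mutate
# the system's state, and check an invariant after every step.
import random

class CounterModel:
    """System under test: a counter that must never go negative."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
    def decrement(self):
        # Clamp at zero so the invariant below always holds.
        self.value = max(0, self.value - 1)

def run_state_machine(seed, steps=200):
    rng = random.Random(seed)          # seeded for reproducible failures
    model = CounterModel()
    rules = [model.increment, model.decrement]
    for _ in range(steps):
        rng.choice(rules)()            # random transition
        assert model.value >= 0        # invariant checked after every step
    return model.value

# Many independent random runs, each reproducible via its seed.
for seed in range(20):
    run_state_machine(seed)
```

Hypothesis adds the important parts this sketch lacks: generated rule arguments, preconditions, and automatic shrinking of the failing rule sequence to a minimal reproduction.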
Perhaps as people who call themselves "engineers" we should move away from the idea that "gets the job done" is good enough? Especially when we have security breaches left right and center.
There is no obvious connection between the methodologies peddled by "Uncle Bob" or the author of the piece and security breaches. They are not claiming to be computer security experts. How exactly are unit tests or code reviews or similar rituals supposed to have prevented the numerous breaches by Julian Assange and his WikiLeaks associates, especially when most of the information appears to have come from insiders with authorization to access the data? (for example). In the Bradley/Chelsea Manning case the government has consistently implied that Manning had authorization to access the vast State Department archive of diplomatic cables, as implausible as that seems. I don't see how unit tests or code reviews would have prevented John Podesta from clicking on an obvious phishing link. :-)
In Mountain View, home to Google, there are over 2000 luxury apartments and condos under construction. The situation is similar in several other Silicon Valley cities. This is a huge increase in apartment construction since 2010.
It is a similar pattern to San Francisco and other big cities mentioned in the Guardian article. The new construction is aimed toward generally young, generally single tech workers at Google, Apple, Facebook and other employers within easy commuting distance of Mountain View. Many existing apartments have been remodeled and rents boosted, effectively evicting many people with lower-paying jobs. There is minimal construction of apartments for lower-paid folks.
Google has the political clout in Mountain View to get more apartments for its tech employees despite the NIMBYs. Rental costs are a major negative for Google since the high cost means California tech salaries are actually among the lowest in the US after adjusting for local cost of living. The disparity with the Plano, Texas region (the so-called Telecom Corridor) is astonishing.
People with lower incomes who are disproportionately Hispanic in California don't have a powerful force like Google to stand up for them.
There was a rent control measure (Measure V) passed in Mountain View about one year ago, and remarkably it has actually been enforced despite legal challenges.
More units, even luxury units, are good for everyone, even lower income people. The people trying to advocate for lower-income people by blocking housing are making it worse for the very people they're trying to help!
That is not necessarily true. Under some circumstances it can be true. For example, if a community builds more luxury apartments entirely in addition to existing apartments, then yes some well paid people will probably move out of the existing apartments into the new luxury ones, making apartments available to the lower paid.
However, if inexpensive apartments are bulldozed or remodeled into luxury apartments, inexpensive options for the lower paid are eliminated. That has been happening in Mountain View and probably in San Francisco, which the Guardian article directly discusses.
However, Zillow gives a median home value of about $220,000 in Atlanta: https://www.zillow.com/atlanta-ga/home-values/
Median list price per square foot is $219
Median home list price is $310,000
Zillow gives a median home value of about $663,000 in Los Angeles: https://www.zillow.com/los-angeles-ca/home-values/
Median list price per square foot is $478/sq foot in Los Angeles
Median list price of homes is $750,000
That is a radical difference in housing prices that does not seem to show up in the figure in the article. Presumably salaries and budgets are proportionately lower in Atlanta than Los Angeles.
Is this really an accurate and adequate explanation? I know people who have moved from Atlanta, where they could afford a house, and had to downsize to a smaller apartment in the San Francisco Bay Area, which is somewhat more expensive than Los Angeles but in the same ballpark.
Is Los Angeles really more affordable than Atlanta?
[ADDED]
The US Census reports:
Fulton County, GA (Atlanta area)
Median Household Income (2012-16): $58,851
Per capita income for past 12 months (2012-16): $39,101
https://www.census.gov/quickfacts/fact/table/fultoncountygeo...
Los Angeles County, CA
Median Household Income (2012-16): $57,952
Per capita income for past 12 months (2012-16): $29,301
https://www.census.gov/quickfacts/fact/table/losangelescount...
Note that Fulton County, GA (Atlanta) has a higher median household income than Los Angeles County, and much lower median home prices according to Zillow.
It is unclear what the consumer expenditure report is computing. It may be the percentage of total expenditures NOT INCLUDING SAVINGS AND TAXES, rather than gross income:
https://www.bls.gov/opub/hom/cex/calculation.htm
Hence people in Atlanta may be saving more but spending a similar percentage of TOTAL EXPENDITURES to people in Los Angeles.
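A toy calculation makes the distinction concrete. The tax and savings figures below are invented for illustration (only the two median household incomes come from the Census numbers above); the point is just that the housing share of *expenditures* can look similar even when the share of *gross income* differs sharply:

```python
# Illustrative only: how the housing share of total expenditures can match
# across two cities while savings and income shares differ. Taxes, savings,
# and housing costs here are made-up numbers, not BLS data.
def housing_shares(gross_income, taxes, savings, housing):
    expenditures = gross_income - taxes - savings  # BLS-style denominator
    return {
        "share_of_expenditures": housing / expenditures,
        "share_of_gross_income": housing / gross_income,
    }

atlanta = housing_shares(gross_income=58_851, taxes=12_000,
                         savings=10_000, housing=13_000)
los_angeles = housing_shares(gross_income=57_952, taxes=12_000,
                             savings=0, housing=16_000)

print(atlanta)      # housing is ~35% of expenditures, ~22% of gross income
print(los_angeles)  # housing is ~35% of expenditures, ~28% of gross income
```

Both hypothetical households report housing at roughly 35% of total expenditures, yet the Atlanta household spends a noticeably smaller fraction of its gross income on housing because it saves more.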