Used to have IB: that broker is no joke. I would see ads on TV of $7 a trade while I was paying their crazy .001 cents per share or whatever their price was. Great paper trading account, and a Java/C++ API for everything. Plus level-2 data.
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf: "_ExtInt types are bit-aligned to the next greatest power-of-2 up to 64 bits: the bit alignment A is min(64, next power-of-2(>=N)). The size of these types is the smallest multiple of the alignment greater than or equal to N. Formally, let M be the smallest integer such that A * M >= N. The size of these types for the purposes of layout and sizeof is the number of bits aligned to this calculated alignment, A * M. This permits the use of these types in allocated arrays using the common sizeof(Array)/sizeof(ElementType) pattern."
The object size has to be at least the alignment size so that arrays work properly--&somearray[1] needs to be properly aligned, and that only works if the object size is a multiple of the alignment: sizeof myint >= _Alignof(myint) && (sizeof myint % _Alignof(myint)) == 0.
As the proposal says, the bit alignment of these types is min(64, next power-of-2(>=N)). (Of course, the alignment can't be smaller than 8 bits, which the proposal fails to account for.) Assuming CHAR_BIT==8, it follows that _ExtInt(N) occupies 1 byte for N <= 8, 2 bytes for N <= 16, 4 bytes for N <= 32, 8 bytes for N <= 64, and 8*ceil(N/64) bytes beyond that, with the alignment capped at 8 bytes.
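You can check what a given compiler actually does with something like this (a quick sketch, assuming Clang's _ExtInt extension, which C23 renamed _BitInt, and CHAR_BIT == 8; the exact figures are implementation-specific):

    /* sketch: print size and alignment of a few bit-precise integer widths */
    #include <stdio.h>

    typedef _ExtInt(7)   i7;    /* expect size 1, align 1                */
    typedef _ExtInt(12)  i12;   /* expect size 2, align 2                */
    typedef _ExtInt(33)  i33;   /* expect size 8, align 8                */
    typedef _ExtInt(100) i100;  /* expect size 16, align 8 (28 pad bits) */

    int main(void) {
        printf("i7:   size %zu, align %zu\n", sizeof(i7),   _Alignof(i7));
        printf("i12:  size %zu, align %zu\n", sizeof(i12),  _Alignof(i12));
        printf("i33:  size %zu, align %zu\n", sizeof(i33),  _Alignof(i33));
        printf("i100: size %zu, align %zu\n", sizeof(i100), _Alignof(i100));
        return 0;
    }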
So the amount of padding can be considerable. But that doesn't matter much. What they're trying to conserve is the number of value bits that need to be processed, and in particular minimize the number of logic gates required to process the value. Inside the FPGA presumably the value can be represented with exactly N bits, regardless of how many padding bits there are in external memory.
Where does the spec say that it does that? As far as I can tell C only allows objects to have sizes in whole number of bytes, and that includes booleans.
A _Bool can be used for a bit-field of width 1, but you can't apply sizeof to a bit-field.
A byte is CHAR_BIT bits, where CHAR_BIT is required to be at least 8 (and is exactly 8 for the vast majority of implementations).
The word "byte" is commonly used to mean exactly 8 bits, but C and C++ don't define it that way. If you want to refer to exactly 8 bits without ambiguity, that's an "octet".
I think you worded this pretty well. One thing I'd add (and that annoys me about C & C++) is that the size guarantees for the integer types basically boil down to sizeof(char) == 1 (where a char is CHAR_BIT >= 8 bits) and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long). sizeof(T*) (for any T) is not pinned down at all, and can be OS/compiler specific. That makes cross-platform 32/64-bit support painful, especially because fixed-width integer types only arrived with C99's <stdint.h> and C++11's <cstdint>. And even those come with a catch: the exact-width types like int32_t and int64_t are optional, so a hypothetical CPU whose natural "word" is 40 bits might not provide int32_t at all; you'd be left with int_least32_t, which is only required to be at least 32 bits wide.
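When porting, it can help to turn those assumptions into compile-time checks. A minimal sketch, assuming a C11 compiler; the pointer-size line is a hypothetical per-platform assumption, not anything the standard promises:

    #include <limits.h>
    #include <stdint.h>

    /* exact-width types are optional, but when they exist they are exactly that wide */
    _Static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
    _Static_assert(sizeof(int32_t) == 4 && sizeof(int64_t) == 8,
                   "exact-width types present and sized as expected");
    _Static_assert(sizeof(void *) == 8, "build assumes a 64-bit pointer model"); /* hypothetical */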
The devil is always in the details, and the devil is very, very annoying...
Can't upvote enough. I think these changes could also be made in a way that's mechanically translatable.
For example: removing the register keyword, always requiring a return statement, etc etc.
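As a hypothetical before/after (my own illustration, not from the comment), dropping `register` is exactly the kind of change a trivial source-to-source tool could apply:

    /* before: legacy style */
    int sum_old(int n) { register int i, s = 0; for (i = 0; i < n; i++) s += i; return s; }

    /* after the mechanical translation: 'register' removed, behaviour unchanged */
    int sum_new(int n) { int i, s = 0; for (i = 0; i < n; i++) s += i; return s; }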
A lot of changes can be made that will make static analysis easier.
There will always be people with 50-year-old code bases that will never change (and some C89 compiler will always be there for them), but the language is pervasive enough that it deserves progressive changes to make it (even) simpler and safer and slightly more high level.
I think this is useful even for systems (SW stacks) that are much smaller and "knowable": you start by observing, trying small things, observing more, trying different things, observing more, and slowly building a mental model of what is likely happening and where.
His defining characteristic is whether you permanently work around a bug (not knowing it, just knowing _of_ it) versus finding it, understanding it, and fixing it.
I sat through a Xeon Phi presentation at university, about how it would revolutionize the university's "supercomputer". I left shortly after; did Phi come to nothing?
It was too difficult to get decent performance from Xeon Phi for general use cases. A few apps could make it work, e.g. PGS bought up all the old stock for a big geophysics system.
Omnipath went the way infiniband is going. Ethernet has caught up, and surpassed the speeds, so using proprietary technology with fewer features isn't that attractive anymore.
But not faster than L3 cache bandwidth. Some cards can DMA to L3 cache. Granted, eventually it's flushed to main RAM, so might not help too much in the end.
Totally bonkers: economists using science to approach "social science" issues. Social sciences should be ashamed that this is not the norm; we should all be startled and surprised that this is a new thing, and that evidently people have just been using a system of high-fives and good wishes to solve the world's social problems.
Define "science". For most topics in economics it's basically impossible to an RCT. You can't say "Let's turn the US into a centrally planned economy, and see what happens." RCT based macroeconomics, or trade economics are basically impossible. Instead people have long relied on observational data, and did the best they could to handle the issues this caused.
RCTs didn't start with Duflo. (Duflo isn't even the first to win for RCTs -- Kahneman and Smith won in 2002 for experiments.) Experimental economics dates back to the 70s, but it always suffered from the same problem as psychology -- most experiments were conducted on students, and the interventions were always small-scale.
RCTs in development economics are much bigger scale because there are rich NGOs willing to spend big money on measuring the efficacy of interventions, and willing to work with economists to do it. This is not without controversy. A development RCT involves an economist from a rich country flying to a poor country, and then running an experiment on the inhabitants of that country. Not everyone thinks that's okay.
The RCTs also rely on the fact that economists come from coun
Can astronomy reproduce the Big Bang? Can biologists reproduce the Cambrian explosion? Can geologists reproduce the end of the Mesozoic Era?
It's harder to know things we can't do experiments on, but we can still know them. In economics, there is a rich tradition of relying on "natural experiments", where something like a natural disaster or a law change allows researchers to examine the effects. This is how it was shown that the effect of minimum wage increases on employment is very small. The financial crisis falsified an entire school of macroeconomics.
Before we pile on the social sciences, maybe someone familiar with them can tell us why RCTs are so hard to do. There are likely other issues involved that make RCTs very difficult -- I'm guessing some ethical issues at least. I doubt social scientists and economists are just a bunch of idiots or charlatans. Likewise, the recent breakthrough wouldn't be such a breakthrough if it had been easier. I don't have an Economist account so I can't read the rest of the article. Perhaps that was illuminated in the article. Anyways, before we criticize another field, we should at least have a good understanding of it.
You are exactly correct. RCTs are hard, not just for ethical reasons but also logistical ones. It's hard to get the money and authority to conduct an experiment in the first place, and it's often impossible to create a true control group.
"Hard" scientists like to pat themselves on the back for rigor, but they get that because they're studying comparatively simple things. Studying the lives of people is hard, but it's also important. It affects public policy, which in turn affects people's actual lives. That public policy gets created whether it's being studied or not -- the studies are hard, but they're better than guessing, and slowly they can build up a picture that makes them better. It's a bit like medicine: we're not going to stop treating people just because we don't understand the mechanism of action and can't guarantee that it will work.
This breakthrough is about finding ways to use the many villages found in poor countries to even attempt an RCT, and to come up with mathematical ways to account for the fact that the trials aren't really randomized. Aid had previously been given based on people's best guesses about what would work, which maximizes the value of the aid if the guesses are correct, but makes it hard to measure when they aren't. Aid has been beset by misguided theories and lack of measurement -- good intentions, but often ineffective.
> It's a bit like medicine: we're not going to stop treating people just because we don't understand the mechanism of action and can't guarantee that it will work.
Yet medicine actually focuses on scientific measurement of effects. They don’t just throw their hands up and go, “experiments that affect people’s lives are too hard.”
Right, and that's what this is about. They're doing the experiments. I didn't say they were too hard; I said they were hard. But it's early days of learning how to do experiments, much like medicine was not that long ago.
There's quite a bit of medicine that "works" but where the specifics of why it works aren't well understood, especially in mental health. One of my friends who works in the mental health pharmacology field told me one of the challenges with the field is measuring the efficacy of those drugs. What do you do? Do you ask someone if they are feeling better or happier? Is that trustworthy? Or is it too fuzzy? Was it the drug that did it or something else? In that regard, they face similar challenges to the social sciences.
I have a subscription. The entire remainder of the article discusses it haha. Let's see if I can paraphrase: 1) framing the economic experiment to prevent bias is difficult (my take: economics is not in a lab), 2) what works in Kenya might not work in Guatemala (my take: confounding factors are much greater), 3) ethics of withholding benefits from a group, 4) rich-country researchers assuming they can and should intervene in a poor country's problems, 5) rich-country researchers don't have local context, 6) small experiments on small topics may not apply to, or have an impact on, the global scale at which economics operates.
- ethics. Example: is democracy good for economic growth? Of course one could randomly engineer coups in some countries but that's probably not appropriate.
- cost. Example: how much do people change their labor force participation when taxes change by 1%? Here an RCT would be "let's give a _lot_ of money to people" and see what happens.
- situation where it is not appropriate. Example: why did Europe rise to prominence (aka the great divergence)? There is not much to randomize here.
Note that RCTs have shortcomings anyway (see for instance [0]).
Physics, the only science where reductionism ever really worked, has sort of ruined it for all the others, where it (mostly) doesn't.
In economics, you are studying vast systems. For the majority of questions, it is impossible to isolate some part of the system and control and measure all the inputs and outcomes. That's probably obvious for macroeconomics: you can't have the Fed raise or lower interest rates based on a random number generator. And even if you could, you would still need a second United States to act as the control group.
It's mostly also true for microeconomics. Consider the difficulty of studying UBI. The largest such studies gave a basic income to a small African village, for a limited time of maybe two years. But the idea, and its opponents, mostly deal with the life choices people make, which requires essentially life-long guarantees. And even just knowing you are part of such a study, or continuing to live in a society that hasn't otherwise changed, is likely (or at least plausibly enough) to change the outcomes enough to render the study meaningless.
> I doubt social scientists and economists are just a bunch of idiots or charlatans.
The vast majority are certainly not, however, idiocy or ill-intent are not required to fall prey to many common causes of inaccurate results. Smart people trying their best to do good work still frequently succumb to errors and this is especially true in the less 'hard' sciences.
That's why the push for increasing rigor with RCTs and other methods is important and necessary.
I guess RCTs are complicated to do because you often can't generate homogeneous control and treatment groups, so you are either forced to laboriously measure every relevant aspect of each group to standardise, or to invent clever ways of applying your treatment that ensure most of the effects of unavoidable differences cancel out.
The thing is, these complications don't explain why nobody overcame them until Kremer, Duflo et al. started their experiments in the 1990s. Their work appears to be a simple adaptation of methods from other fields to studies in development economics, not any sort of technological development. (This is one of the earliest papers cited in the motivation provided by the Nobel foundation: https://pubs.aeaweb.org/doi/pdfplus/10.1257/app.1.1.112; it does some linear regression at the most.)
With the creation of new technology ruled out as the blocker for performing the experiments, you are basically left with internal and external sociological explanations.
Yes, it would be more appropriate if sociologists and public/political scientists took charge and were doing these studies and not economists. Some of these RCTs are only loosely related to solving market questions.
Is there nothing between RCTs and "high-fives" to estimate causal effects? Economists seem to run awfully long journal articles if they all boil down to high fives.
Piketty's Capital in the 21st Century would be an example of something in between. His main hypothesis is that unless there is some intervention, wealth accumulates until almost all of it is concentrated among just a few. He uses lots and lots of statistics to support his theory. He tells us that it happens but is unable to tell us exactly why.
Is that really any better though? Unless those statistics are shown to have a predictive effect on money flow, he's just publishing something to fit a model, and metaphorical high fives are thrown around by people who already agree with the hypothesis.
There is a lot of research on how to estimate causal effects from data.
And textbooks: "Causality" by Pearl, about causal models in general, and "Causation, Prediction, and Search" by Spirtes et al., about how to learn the models from data.
For example, assume the world consists of three random variables A, B, and C. If A causes B and B causes C (the DAG A -> B -> C), then A and C are correlated. But if the model is A -> B <- C, then A and C are not correlated. Conditioned on B, though, it flips: A and C are correlated in A -> B <- C and not correlated in A -> B -> C. So you can falsify such causal models without an RCT.
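A toy simulation makes the flip visible (my own sketch, not from the comment above; the numbers are illustrative and any statistics package would do the same in a few lines):

    /* Simulate a chain A->B->C and a collider A->B<-C, then compare the
       plain correlation of A and C with their partial correlation given B. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 100000

    /* crude standard-normal sampler (Box-Muller) */
    static double randn(void) {
        double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
    }

    /* Pearson correlation of x and y */
    static double corr(const double *x, const double *y, int n) {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
        }
        double cov = sxy / n - (sx / n) * (sy / n);
        double vx = sxx / n - (sx / n) * (sx / n);
        double vy = syy / n - (sy / n) * (sy / n);
        return cov / sqrt(vx * vy);
    }

    /* partial correlation of x and y given z */
    static double pcorr(const double *x, const double *y, const double *z, int n) {
        double rxy = corr(x, y, n), rxz = corr(x, z, n), ryz = corr(y, z, n);
        return (rxy - rxz * ryz) / sqrt((1 - rxz * rxz) * (1 - ryz * ryz));
    }

    static double A[N], B[N], C[N];

    int main(void) {
        /* chain: A -> B -> C */
        for (int i = 0; i < N; i++) {
            A[i] = randn();
            B[i] = A[i] + randn();
            C[i] = B[i] + randn();
        }
        printf("chain:    corr(A,C)=%+.2f  corr(A,C|B)=%+.2f\n",
               corr(A, C, N), pcorr(A, C, B, N));  /* roughly +0.58 and 0.00 */

        /* collider: A -> B <- C */
        for (int i = 0; i < N; i++) {
            A[i] = randn();
            C[i] = randn();
            B[i] = A[i] + C[i] + randn();
        }
        printf("collider: corr(A,C)=%+.2f  corr(A,C|B)=%+.2f\n",
               corr(A, C, N), pcorr(A, C, B, N));  /* roughly 0.00 and -0.50 */
        return 0;
    }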
There are many philosophy journals filled with long articles that also have no causal connection with reality. Why would you assume any given academic journal has to be publishing things that make sense, or are useful?
Worth noting that Transformative Hermeneutics of Quantum Gravity was not published in a philosophy journal. It was published in a litcrit journal, which is a very distinct discipline.
Anyway, I think it's more correct to say "It speaks horribly of Hacker News readers' _economics knowledge_." There are some topics here on which the quality of comments is quite poor, but it's probably unreasonable to expect a group of people (specifically "good hackers") to be knowledgeable about _everything_. It is what it is and you just have to figure out which topics to avoid here.
I don't get why people are up in arms over this: the average person drives like an idiot. A Tesla on Autopilot drives better than average. And right now it is driving the worst it will ever drive.
I don't think this is true; it may be that young drivers and drunk drivers cause a lot of the accidents.
But consider this: you are a decent driver and you need to send your children somewhere. Do you drive them yourself, because you know that you will not speed or text or be drunk, or do you send them with a robot that is better than an idiot but worse than you?
Sure, if you were drunk or tired it would be safer to send them with the robot.
Yes, that is the psychological barrier you describe, and it will be difficult to overcome.
The flaw is everyone thinks they will be a better driver than an AI, even though very few actually will be.
So if I had to choose my children driving with an "average joe" / friend / etc vs an AI, I would say: the data says the AI will crash 25% less, therefore it is safer.
I assume you are better than a drunk teen coming home from a party with 3 months of driving experience. Most of the road deaths here in Romania are caused by young drivers coming from parties late at night, maybe drunk, with the car full of people, so one crash causes a lot of deaths. So the average driver in the stats is a terrible driver, because the stats are skewed hard by inexperienced, drunk, or tired drivers. You can have both of the next two facts true at the same time:
1. replacing all drivers with AI is x% safer
2. replacing you, an experienced, responsible driver, with an AI is less safe
These kids with their RH accounts have no idea..