* A west-aligned country is at war with a terrorist organization which is part of the Russian-Iranian-North Korean axis of evil.
* A war which the terrorist organization started unprovoked
* The allied country conducts the most precise strike against militant combatants in history (also completely legal, by my understanding of the international rules of war)
* Your suggestion is that they should've confiscated their walkie-talkies instead
Isn't this a circular argument? The question was what "useful" things do these crypto solutions/companies provide, and your example is a product which allows regulated entities to invest in crypto.
"Why is X valuable? Because X allows you to invest in... X."
Also, in this specific example (Arc), is the solution even considered DeFi? There's a centralized list of whitelisted entities, and you can only participate if you're a customer of these entities. So they're like banks.
Whitelisted counterparties borrowing and lending to each other without having to trust a third party to intermediate, such as Fedwire, SWIFT, or DTCC, is clearly not a scam. It's a B2B product with at least 30 sizable, informed, and willing institutional participants. They could take their capital anywhere else in financial markets but choose to take it to Aave.
It's the same platform Aave offers anyone else on Ethereum, Polygon, Arbitrum, and other chains coming soon. The customers of the Arc product have a regulatory compliance burden that Arc solves for them, but the tech is the same.
The solution you suggest is irrelevant to the issue mentioned in the article. Even if you use np.random.RandomState, or any other "explicit RNG state", that state will still be copied in the fork() call.
The post just stresses that one should be careful when combining random states and multiprocessing: you should either reseed after forking or use a multiprocessing/multithreading-aware RNG API.
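For instance, a minimal sketch of the "reseed per worker" approach using numpy's SeedSequence (the seed value and worker count here are arbitrary):

```python
# Minimal sketch, assuming numpy >= 1.17 and a fork-based start method.
# Each worker builds its own Generator from a spawned SeedSequence, so the
# forked processes don't all replay the parent's copied RNG state.
import numpy as np
from multiprocessing import Pool

def draw(seed_seq):
    rng = np.random.default_rng(seed_seq)  # per-process generator
    return rng.random(3)

if __name__ == "__main__":
    child_seeds = np.random.SeedSequence(12345).spawn(4)  # one per worker
    with Pool(4) as pool:
        print(pool.map(draw, child_seeds))  # four distinct streams
```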
Possibly but this is the kind of boilerplate which people tend to ignore, especially when a program is non-trivial. It’s really easy to notice if you’re doing something like `seed_rng(); fork();` but once there’s distance and more than one thing being passed around I’d be surprised if you didn’t find the same pattern, perhaps a bit less common.
Fundamentally, there are two problems: fork() is a performance trick for doing setup only once, and seeding an RNG is a kind of setup that, unintuitively, can't be optimized that way; and since most people learn from a tutorial or quick start, this is exactly the kind of important but non-core issue that gets omitted or ignored in that context.
Additionally, I think people make a hidden assumption that they don't even realize they're making: that when you ask numpy for random numbers, they're more or less "true" random numbers, not seeded ones. Like, I think the intention of the programmer is just "give me a bunch of random numbers, I don't really care how, as long as they're random", and they assume that's what the numpy function does. But it doesn't: it gives you a pseudo-random sequence – not true randomness – so of course the sequence is identical after the fork.
Like, they think they're reading from /dev/random, but they're not: they're just running rand() (metaphorically speaking).
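To make the metaphor concrete, here is a tiny fork() demonstration (POSIX only; the seed is arbitrary) where both children print the same "random" values because the global state is duplicated:

```python
# Both children print identical numbers: the seeded global np.random state
# is copied into each child by fork(), and neither process has advanced it.
import os
import numpy as np

np.random.seed(0)              # any seeding (explicit or implicit) before the fork
for _ in range(2):
    if os.fork() == 0:         # child process
        print(os.getpid(), np.random.rand(3))
        os._exit(0)
os.wait()
os.wait()
```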
Definitely - back when I supported a computational neuroscience group, that came up multiple times (not numpy, but similar contexts), along with the various quirks around floating-point math. Even experienced people do things like that because they're focused on the actual problem, and this is a leaky implementation detail.
Why use autoscaling and not just launch the instance directly from Lambda? The runtime is short, so there's no danger of two instances running in parallel.
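Something like this is what I have in mind (a hedged boto3 sketch; the AMI, instance type, and handler name are placeholders):

```python
# Sketch: launch a short-lived EC2 instance straight from a Lambda handler.
# InstanceInitiatedShutdownBehavior="terminate" lets the instance clean
# itself up by shutting down when its job finishes.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.micro",           # placeholder instance type
        MinCount=1,
        MaxCount=1,
        InstanceInitiatedShutdownBehavior="terminate",
    )
    return resp["Instances"][0]["InstanceId"]
```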
> Convolutional layers are just simplifications which make training easier. They are priors in the sense that we know a fully connected layer in image applications would just devolve into a convolutional layer anyway, so we might as well start with a convolution layer. That "design" is the prior. But it's not mandatory; the network would still function without that "prior".
As far as I know this is incorrect. Can you point to a paper that shows this? If by "easier to train" you mean that the models do not overfit training data, then that's the whole point of using correct priors / hypothesis classes.
I'm not sure what bugs you in this paper, but the point is that they decouple the prior architecture from the training/optimization mechanism, and that seems interesting.
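For what it's worth, the sense in which a convolutional layer is a restricted linear layer (the "prior" being locality and weight sharing) can be made concrete with a small sketch; the signal and kernel sizes below are arbitrary:

```python
# Sketch: a 1-D "valid" convolution equals a fully connected layer whose
# weight matrix is constrained to be sparse and weight-shared. That
# constraint is the architectural prior being debated above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)              # input signal
k = rng.normal(size=3)              # convolution kernel

n_out = len(x) - len(k) + 1         # "valid" output length
W = np.zeros((n_out, len(x)))
for i in range(n_out):
    W[i, i:i + len(k)] = k[::-1]    # reversed kernel on a shifted band

assert np.allclose(W @ x, np.convolve(x, k, mode="valid"))
```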
The "sheaf axioms" define what it means for a "rule" associating "data" to open subsets to be a sheaf, and I was just trying to illustrate them with the example of "F". (Perhaps calling them "properties" or "laws" -- or even an interface or typeclass! -- might help?)
In general, proving that something that looks like a sheaf really is one may be nontrivial. :)
In the special case that I outlined above, it certainly is easy to show that F satisfies those axioms, as you point out. And it is a sheaf (the sheaf of continuous real-valued functions on R) precisely because it does.
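For concreteness, the two axioms for this particular F can be written out as follows (just the standard formulation, restated for this example):

```latex
% F(V) = continuous real-valued functions on the open set V; restriction maps
% are literal restriction of functions. Let U = \bigcup_i U_i be an open cover.
\begin{align*}
&\text{(Locality)}\quad f, g \in F(U),\ f|_{U_i} = g|_{U_i} \text{ for all } i
  \;\Longrightarrow\; f = g,\\
&\text{(Gluing)}\quad f_i \in F(U_i),\ f_i|_{U_i \cap U_j} = f_j|_{U_i \cap U_j}
  \text{ for all } i, j \;\Longrightarrow\;
  \exists\, f \in F(U) \text{ with } f|_{U_i} = f_i \text{ for all } i.
\end{align*}
```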
Sorry, I meant the claim about f and g. Assuming you meant that F(I) should be the continuous functions, you can construct an h from f and g that is continuous on I ∪ J, no? So it's not an axiom so much as something you can prove. Just making sure I understand correctly...
Again, that was just an example. F is just one possible sheaf on the real line, and in the case of F, yes, continuous functions can be stitched together.
You could define A(I) = {0} (a fixed one-element set) for a trivial example of a different sheaf A where the "data" (always that same one-element set, regardless of I) is very different from what it was in the case of F (the set of continuous functions on I).
A lot of "may" and "might" in your reply, and absolutely no facts. The facts are that on ISIS' agenda, fighting Palestinian groups such as Hamas has a higher priority than fighting Israel [1][2], yet people in this thread still manage to "lightly" blame the Israelis, the US, basically anyone except those responsible, as if these people are incapable of independent thought.
People who have historically done these kinds of acts don't care about IS, nor are they devout Muslims. They care about support for Israel because it is seen as further proof of the West mistreating Muslims. It's not about blaming Israel, it's about Western governments doing things like the French did: first celebrating free speech after Charlie Hebdo, only to then ban anti-Israeli demonstrations. That's a great recruiting tool to use on a young person in a suburb who already feels mistreated by the government.
> "lightly" blame the Israelis, the US, basically anyone except those responsible.
This argument is so stupid.
Everyone is blaming the people responsible. That's a given. The reason people are blaming Israel and the US is because they are indirectly responsible since their actions are used as recruitment tools.
They clearly use scapegoats to push forward with their agenda. Didn't Al-Qaeda declare Jihad on India? That's a scapegoat. India has nothing to do with them, but they've declared Jihad. France has nothing to do with them, but they bombed her citizens.
The US, France, and other European countries I can actually understand, since these countries performed military operations in Iraq and/or Syria, but I'll be very surprised if Israel is used as a recruiting tool. If you have some sources implying this I'm interested in reading them!
Why would you be surprised that Israel's persecution of the Palestinians would be a recruiting tool? It's a major issue for almost every Muslim worldwide. ISIS have absolutely been recruiting on the back of this, and I've seen recruits from Australia mention it when interviewed.
I'm going to sound very aggressive, but I don't understand the point of this book. They say
> The material in this book is too valuable not to share.
and after reading the sample I feel their definition of valuable doesn't align with mine. It's a "handbook" but the chapters are interviews that don't go in depth into anything. Here's a sample "question"
> Compassion is also critical for designing beautiful and intuitive products, by solving the pain of the user. Is that how you chose to work in product, as the embodiment of data?
Really? This reads like an onion article about data science.
As a data scientist, I draw inspiration from the infinite depths of understanding. Life is merely the unfolding of a series of recursive recommendation engines, each one AB testing the local gradients of human emotion. You think you are just buying a pair of socks online. Foolish mortal. That transaction was a multi-armed map-reduce set into motion long before the internet existed. Data science is the internet of things, it is big data at the speed of entropy, quantum mechanics at the scale of desire.
In data we trust. All others bring EVEN. MORE. DATA.
You had me at "life is merely the unfolding of a series of recursive recommendation engines, each one AB testing the local gradients of human emotion". The more I think about it, the more it makes sense.
Your comment is meaningless. I don't mean that negatively. I mean that it really is meaningless. It's buzzwords connected together with a lack of thought or direction.
Agree, this seems like a collection of interviews with some famous folks who are in the field of 'data science' (can we just call it stats, please). A handbook is a poor description indeed.
For folks interested in learning about this topic, there are tons of online courses/videos on the real stuff. The Stanford ISLR course is a great place to start.
I think this passage actually addresses the context of that question:
>The difference between empathy and compassion is big. Empathy is understanding the pain. Compassion is about taking away the pain away from others, it’s about solving the problem. That small subtle shift is the difference between a data scientist that can tell you what the graph is doing versus telling you what action you need to do from the insight. That’s a force multiplier by definition.
I think the context you've quoted only further proves my point. This sentence means absolutely nothing to me, it's trivial and devoid of any actionable information. But because the interviewee is a "famous data scientist" it is suddenly important info that needs to be shared with the world?
To be fair, I know some of the co-authors and I'd say they're pretty sharp at data science. However, this book highlights something they're not so sharp at: self-promotion.
I believe this book is meant to be a gag in the way the original Facebook Brogrammer store was meant to be a gag. And, similarly, the authors are going to learn the consequences of having a lot of people take the gag seriously.
I don't think there's a strict need, but functional languages tend to bring the desired traits: immutable data structures, referential transparency, isolating side effects and state mutation as much as possible, etc.
It seems like the more one uses React, the more one craves the above, so it makes sense to use an environment where all of the above is natural.
This comment is very confusing. First of all, the linked paper doesn't state what you claim it states. The authors show equivalence between two specific frameworks of neural networks: SVM-NN and Regularized-NN, and not equivalence between SVM and NN. Generally, SVM and NN are equivalent only in the sense that all discriminative models are equivalent. The kernel trick in SVM requires your embedding to have an "easily" calculable inner product. I'm not an expert, but I think this places strong constraints on the embeddings you can use.
Second of all, SVM does not create any feature space (i.e., embeddings). It just finds a good separator with a maximal margin. Deep NNs, on the other hand, do create features in their hidden layers.
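As a concrete illustration of the "easily calculable inner product" point: an RBF kernel evaluates the inner product of an implicit (infinite-dimensional) feature map without ever constructing it, which is all a kernel SVM needs (a generic sketch, not something from the linked paper):

```python
# Sketch: k(x, y) = exp(-gamma * ||x - y||^2) is <phi(x), phi(y)> for a
# feature map phi that is never materialized; the SVM only needs the Gram matrix.
import numpy as np

def rbf_gram(X, gamma=1.0):
    # pairwise squared distances via ||x||^2 + ||y||^2 - 2 x.y
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_gram(X)                      # 5x5 Gram matrix
print(K.shape, np.allclose(K, K.T))  # symmetric, as an inner product should be
```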
Anyway, even ignoring these issues, I'm not sure I understood your main point.