Does anyone know of a way other than Raman spectroscopy to classify graphene monolayers? I recall that making the graphene was simple but confirming it was the real chore.
I've never used Ansible. Is it worth using for just this workflow? I'm asking as someone coming from a baseline of just having a Git-versioned shell script with lines like
You get the nice stuff that Ansible brings, like adding specific lines to .zshrc, templates, etc. I found it easier than my usually-beloved shell scripts because I didn't need to think about the mechanism, just the result.
For example, brew install coreutils would use the community.general.homebrew module: https://docs.ansible.com/ansible/latest/collections/communit... <-- you can see from that page that each module has lots of examples, which makes it pretty easy to go from requirements to Ansible script.
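To make that concrete, a `brew install coreutils` line might become a task like the following (a minimal sketch; the task name is illustrative, but `community.general.homebrew` and its `name`/`state` parameters are the module's actual interface, and the collection needs to be installed first):

```yaml
# Illustrative Ansible task; requires the community.general collection.
- name: Install coreutils via Homebrew
  community.general.homebrew:
    name: coreutils
    state: present
```

The nice part is idempotence: re-running the playbook is a no-op if coreutils is already installed, which is exactly the "think about the result, not the mechanism" point above.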
Yes, but in addition to the one at home we also have to buy a separate one to sit in the nurse's office all year. Also, if the before/after school program doesn't have access to the nurse's office, we have yet another one that sits in their cabinet, too.
They all come home at the end of the year, expire, repeat.
I don’t understand what the plausible alternative policy is. Each kid needs a personal one for when they are at home. They presumably should bring it with them to school for quick action and also for if they have an episode while commuting. The school could decline to carry them to save a bit of money each year, but it seems unwise to rely on (1) kids never forgetting their medication and (2) an adult always knowing where to find the child’s medication if they are unresponsive. So shouldn’t the school have a few?
Their complaint isn't that the school is stocking them. Their complaint is that each child that needs one available is required to provide a personal one to the school to store for use in case they have an emergency.
So instead of the school managing and restocking a reasonable number, the parents restock one each year.
Wouldn't the "reasonable number" to stock be the same? Let's say there are 8 kids who need an EpiPen. If there were a food allergy incident, it could be something like a school lunch event, which means they'd need at least enough for every kid, since they might all have the same reaction from eating the same food. From reading these replies, it looks like you need 2 for each person in case they require a second dose, and you replace them once a year.
So:
- if school stocks them for 8 kids: they need to replace 16 every year
- if the 8 kids families supply the stocks: they need to, in aggregate, replace 16 every year
Same amount. Obviously better if the school pays, but I'm not understanding the "reasonable number" part.
I've had good luck with this for simple variables and native data structures: lists, dictionaries, tuples. But when I try it with something bulkier like a Pandas dataframe, I'm not able to make any sense of the stack and find what I'm looking for, which is usually the data.
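One workaround I've seen (a sketch, assuming pandas is available; the variable name is made up) is to ask the dataframe for a summary view at the debugger prompt instead of printing the whole object:

```python
import pandas as pd

# Stand-in for the bulky object sitting in some stack frame.
df = pd.DataFrame({"a": range(1000), "b": range(1000)})

# At a pdb prompt, printing df directly dumps a truncated wall of rows.
# These give a compact view of what the data actually is:
#   (Pdb) pp df.shape    # dimensions
#   (Pdb) pp df.dtypes   # column types
#   (Pdb) pp df.head()   # first few rows
print(df.shape)
print(df.head())
```

The same idea works with any inspection method (`df.info()`, `df.describe()`); the point is to summarize rather than print the raw object from inside the debugger.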
> The only consistent explanation I've seen that it is about 'easy'. The other languages have tools to make them easy, easy IDE's, the languages 'solve' one 'thing' and using them for that 'one thing' is easier to than building your own in LISP.
I have thought something similar, perhaps with a bit more emphasis on network effects and initial luck of popularity, but the same idea. Then about a week ago, I was one of the lucky 10,000[0] that learned about The Lisp Curse[1]. It's as old as the hills, but I hadn't come across anything like it before. I think it better explains the reason "the best languages don't get adopted" than the above.
The TL;DR is that with unconstrained coding power comes dispersion: effort gets diluted across many equally viable solutions instead of consolidating on a single (arbitrary) one.
I'd say you were unlucky, because it's a rather terrible essay and doesn't actually get the diagnosis correct at all; indeed if some of its claims were true, you'd think they would apply just as well to other more popular languages, or rule out existing Lisp systems comprised of millions of lines of code with large teams. The author never even participated in Lisp, and is ignorant of most of Lisp history. I wish it'd stop being circulated.
I gave a talk on using Google BERT for financial services problems at a machine learning conference in early 2019. During my preparation, this was the only resource on transformers I could find that was even remotely understandable to me.
I had a lot of trouble understanding what was going on from just the original publication[0].