Various parts of an algorithm/procedure are related to each other in various ways. Sometimes those relationships describe a sequence. Other times they do not. For me, this is the essence of functional programming: a means to describe how the many small parts of an algorithm are related to each other in the whole (in a natural way).
If the computation truly is one that is most practically described as a sequence of steps, that is easy to do. In fact, in some sense this is what the infamous "Monad" concept is all about.
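A rough sketch of that in TypeScript (since that's what much of this thread discusses; Promise chaining is only an approximation of monadic bind, and the functions below are made up, but it shows the "pipeline of dependent steps" shape):

    // Each step depends on the previous step's result. .then() plays the
    // role of "bind": it sequences the steps and threads the value through.
    const fetchUserId = async (name: string): Promise<number> => name.length; // stand-in
    const fetchOrders = async (userId: number): Promise<string[]> => [`order-${userId}`];
    const summarize = async (orders: string[]): Promise<string> => `${orders.length} order(s)`;

    fetchUserId("alice")
      .then(fetchOrders)   // number -> Promise<string[]>
      .then(summarize)     // string[] -> Promise<string>
      .then(console.log);  // "1 order(s)"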
I don't think it makes sense to characterize Haskell as "big" on this basis, because 1) it is trivial to define an operator in Haskell, so there's bound to be a lot of them and 2) even the "standard" operators typically have a simple definition (e.g. https://www.stackage.org/haddock/lts-12.9/base-4.11.1.0/src/...).
On the whole, I consider user-defined infix operators to be a huge mistake. While the few common ones are great, the ability for every single library creator to add their own infix operator turns into a mess in the long run.
There is no mention of starvation in the title of the article nor in the text. I'm not sure why it's in the title of the submission. There is a link to a related article "2009: Ala. Sheriff Jailed For Starving Inmates". I assume it's the same article referenced in this article:
"In 2009, then-Sheriff Greg Bartlett of Morgan County was briefly tossed in jail after acknowledging that he had personally profited, to the tune of $212,000, from a surplus in the jail-food account. Prisoners testified about receiving meager meals."
Still, I don't see the justification for the use of the word "starvation" in that context, either.
"Starving" probably not. Using expired food, creating gruels, making the inmates sick, forcing inmates to steal food with violence -- is a bit more accurate title. Here is an example Arizona sheriff laughing as he makes inmates rotten food [1]. I would suspect many inmates do begin the starvation process and become violently ill. (NOTE: many of these inmates have not been convicted of a crime, this is county jail).
> "A couple people I knew came through the jail, and they say they got meat maybe once a month, and every other day, it was just beans and vegetables," Qualls told Sheets. "I put two and two together and realized that that money could have gone toward some meat or something."
The provided budget for food is obviously going to be roughly set at the minimum necessary to provide barely adequate nutrition, because that's how legislatures operate. So if most of that money is being stolen by the sheriff, starvation is a perfectly reasonable term to use.
Many jails/prisons are spending no more than 50 cents or so per meal, and providing only two meals per day.
Thanks, we've updated the headline from “Sheriff Legally Profits $750,000 from Starving Inmates” to that of the article. The guidelines ask submitters to please not editorialize like this.
It's not about sticking up for sheriffs. It's about telling the truth without sensationalizing it. What the sheriff did was bad enough, there's no need to embellish.
You know, I feel similarly to you, but with all of the research I have been seeing lately about how propaganda and emotional manipulation are far more effective than facts and reason, perhaps it is the case that to achieve positive change you do need to embellish, or else the forces pushing for negative change, who have no problem embellishing, will win?
Surely you must think some ends justify some means. For example I highly doubt you've cut all white lies from your life. What makes the use of embellishment here different?
I don't think anyone should be shamed for trying to make a conversation more honest, even if they seem to be serving the bad guy. Criticizing the word usage doesn't tip the scale toward the sheriff much at all.
This reminds me that we had a cat that developed diabetes. Besides the usual treatment, we changed the cat's diet "radically". Eventually the cat no longer required insulin shots, i.e. it was "cured". I remarked to my wife at the time that maybe the reason T2 diabetes is considered incurable is that it's all but impossible to get humans to radically change their diet.
It's pretty common in cats for there to be a honeymoon period after treatment begins. The lower blood sugar levels allow the still-viable β cells to resume producing insulin.
After a time, however, the process that was killing them off in the first place often reappears and insulin is required again.
The biggest factor I've seen in previous cats with diabetes: dry vs. wet food. Switching from free-feeding dry food to measured, timed canned portions made a night-and-day difference.
As we have recently adopted TypeScript, this is a nice affirmation, but preventing bugs is only part of the benefit and I'm not sure it's the most important part.
TypeScript, in conjunction with a code editor that supports it, significantly improves the code editing experience. Sometimes I still have to chase down documentation, examples, etc. to figure out how to use something, but it happens far less often. In general, I feel like I can code faster and with more confidence.
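A made-up example of why that happens: once an API is described by an interface, the editor can surface its shape without a trip to the docs (all names below are hypothetical):

    // With this interface in scope, a TypeScript-aware editor can complete
    // the field names, show their types on hover, and flag typos like user.nmae.
    interface User {
      id: number;
      name: string;
      signedUpAt: Date;
    }

    function greeting(user: User): string {
      // Typing "user." here offers id / name / signedUpAt with their types.
      return `Hello ${user.name}, member since ${user.signedUpAt.getFullYear()}`;
    }

    greeting({ id: 1, name: "Ada", signedUpAt: new Date("2015-06-01") });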
I assume all this is also true of Flow, though I've never used it.
The killer apps of static types are code completion, documentation, and modelling, rather than safety and performance. This makes TypeScript's unsoundness much more acceptable, though many in the type systems community still have a difficult time grokking this.
To me the big advantage of static typing is refactoring. I am dealing with a mid-size JS codebase right now. It's well written, but it's still really hard to refactor because you never know what code may break. In C# or C++ I can make a change and the compiler tells me what breaks. I haven't used TypeScript, but it seems it will also make refactoring easier.
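A toy TypeScript version of that workflow (hypothetical names): change a field's shape and the compiler lists every call site that breaks.

    // Suppose title used to be a plain string and we refactor it into an object.
    interface Article {
      title: { text: string; slug: string }; // was: title: string
    }

    function heading(article: Article): string {
      // The old call now fails to compile, roughly:
      //   Property 'toUpperCase' does not exist on type '{ text: string; slug: string; }'.
      // return article.title.toUpperCase();
      return article.title.text.toUpperCase(); // updated call site
    }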
At this point I've done about equal amounts of programming in JavaScript and TypeScript. Yes, that's a huge difference. Without that one difference, the languages seem really similar, especially since ES2015.
I'm sure someone might chime in and say that good tests will cover that, but client-side JavaScript code can be difficult to test properly, so I'd rather just get the type checking for free.
Pretty sure the first code completion programs were in dynamic languages. With many environments, you literally asked the object what methods it had. Same for documentation.
The first code completion system was in Alice Pascal circa 1986. Statically typed.
After that, it appears in production for the first time in VS 97, still using static type information. It wasn't present in smalltalk IDEs before then, or LISP ones.
Lisp, with SLIME (a Lisp IDE tool for Emacs), gives you code completion in a dynamic language. This isn't exclusive to statically typed languages.
However, I don't want to debate "static" vs "dynamic". Static typing has advantages; it's simply that the advantages I see are performance (execution speed) ones. As mentioned above, I don't buy the idea that simple type errors are a serious speed bump in development speed.
As an active promoter of live programming, I'm totally enthusiastic about code completion using dynamic type information. It just didn't happen before static types. Code completion on dynamic execution context is a bit challenging because you need to surface execution context somehow (which is a problem live programming focuses on).
SLIME replicates the same functionality of the Lisp programming environment of the late 1970s, so most of the features have been available for Lisp long, long ago.
Sure? And what were those features? As far as I know, most lisp IDEs had feature sets similar to enclave, which meant simple string-based code completion most of the time.
Tab completion is often a small subset of code completion. When I think of tab completion I think of it being able to complete something you start typing based on what it knows to exist, but rarely does it understand much context.
Code completion meanwhile knows context much better.
Right, context did exist at that point, but it seems limited to filenames and symbols AFAICT. Obviously, though, you can obtain much more context in a statically typed environment.
It is this 'obviously' that I am challenging. It might seem easier to do it for statically typed languages, but it is not something that is only possible there.
Note, I am decidedly not claiming it is easier in a dynamic language. Just making the claim that it is possible in both.
I suppose yes, I could see such a thing working in the style of "A unified approach to solving seven programming problems"
perhaps 'naively' would be the right word
I'll confess this surprises me, but not overly so. I'll try to find what I was thinking of. Primarily, I thought you could do live reflection in lisp machines. Which is basically this.
Having the primitives to do something isn't equivalent to doing it. So Lisp machines had live reflection, but they didn't have the UX in place to surface the experience that we know today as code completion; they essentially didn't know that this feature existed or would be useful. As we know, UX is very important.
A similar situation happens today when we argue about live programming. Smalltalk had the infrastructure in place (hot code replace, fix and continue) to do much of it, but never provided the UX for what we would recognize as a live programming experience. Defining the experience is as important, if not more, than being able to realize it.
I'm not sure I appreciate the distinction here. I am not claiming that the original toolsets were the same as modern ones. By and large, I would expect that was a limitation of memory and other resources. Similarly, I expect the completion you are referencing is a far cry from modern completion. I remember the early completion I had access to was limited to available methods only. Was the referenced one more sophisticated?
That is, I was really only trying to reference the primitives. Because, well, progress. :)
Alice Pascal's code completion isn't really sophisticated, just convenient. They needed it because it was a syntactic editor and typing things was inconvenient. It turned out to be a good idea (for exploration) outside of its original context (saving on typing)... but its discovery was basically an accident.
I should add that I do appreciate being corrected here! I can't edit the parent post anymore to indicate that my claim was likely to be interpreted in a false way, unfortunately. Hopefully interested folks read down the thread.
"Give a menu of possible completions for string so far"
and
'Completion Apropos'
"Do apropos within the completions of what has been typed"
The editor of a Symbolics Lisp Machine would already sectionize a buffer and would know what is defined there. It also kept track of what was changed in various buffers, which compilation errors were related to which definitions, etc.
Basic completion would work over symbols in a buffer or what is available in packages in the running Lisp.
Note that the Symbolics had a COMPLETE key on the keyboard, and in a Lisp editor buffer it would run COM-COMPLETE-DEFINITION-NAME: 'Attempt to complete the definition-name of the symbol under point'.
In the mid-80s, Symbolics introduced the Presentation System, which recorded all I/O (including graphics) and recorded which objects were displayed as what type.
Thus when one interacted with the machine, it knew which things were classes, methods, functions, etc., and it could also reconstruct that from the textual display. If one, for example, used an editor command which needed a class, then when typing, completion and searching were limited to classes, and only classes on screen would be mouse sensitive. Also, if one right-clicked on an object, there would be only the commands for that class or presentation type in the popup menu. Similarly for all kinds of dialog menus. The system knew which types were acceptable and which objects could be reused, or which to search.
This is not really 'completion', but puts the classes (or the presentation types) directly into the user interface - based on the dynamic classes and presentation types of the objects displayed.
A lot of interaction in Dynamic Windows goes through listeners and command loops, which understand input contexts and acceptable presentation types. The main listener does that, but all other applications also had such a command loop. When you interacted in a REPL with Dynamic Windows, completion wasn't really that necessary, since the UI had access to the actual objects (and not just a textual representation).
From a UI standpoint, I as a developer would write typed functions (aka commands), which then would be invoked via gestures (keystrokes, commands, mouse gestures, ...). Developing an application would then always involve writing presentation types and presentation methods for classes (and other things), which could then also be used in a listener/REPL in a programming session. Say we wanted to implement a calendar: one would implement presentation types for days, weeks, months, years, persons, rooms, events, tags, etc., and provide visual and textual representations for them. Then, while programming, we could play around with those objects in the listener. It would know which commands are applicable to the things on the screen - no completion necessary. Still, if a person/event/room/tag/... were requested, it could complete or search through the known persons/events/rooms/tags/....
Much of the UI was developed with Flavors and later with CLOS, such that there were lots of classes, lots of messages/generic functions/methods, ... Naturally this worked only over the things loaded into the current Lisp world. Code or other objects on disk were not available for this kind of interaction - though some stuff also worked with pre-computed tag tables. But to know about classes, methods, presentation types and presentations, they usually had to be present in the running world (which could be saved and restarted later).
Is apropos really code completion? It is neither using static nor dynamic type information, it is just using a local and/or global namespace. It is horrible for the browsing that code completion is mostly used for.
If I remember correctly, CLOS had a strange naming convention that included the type in the symbol name. I guess that would make apropos sufficient, right?
Browsing is done with other tools, for example the Flavor Examiner or special editor commands.
Completion can know that something would be a Flavor message and search only in those or that something has to have a function binding.
But what hinders it all a bit is that Lisp is a verb-objects language and not an object-verb-parameters language, where the object is an instance of a class and where the class is also a namespace. One is supposed to choose the operation first. In Lisp, namespaces are packages (and not classes), thus the first thing is to choose the namespace of the operation and then complete over the available operations in that namespace. Also, names for operations tend to be long, so typing graphics:d-p and completing that would likely find the draw-point and draw-polygon functions, since those would be exported from the package. Often there would be short nicknames for a package, say gr for graphics, so typing gr:d-p COMPLETE would find the matching operations.
A drawing function will likely be named, for example, graphics:draw-polygon and take a graphics-stream and a list of points, thus finding it quickly via some completion should not be that difficult. In the original Flavors it would be like (send graphics-stream :draw-polygon list-of-points) ...
But again, the action in the editor is only part of the story, because much of the development action will take place in the listener, where one interacts with actual objects.
> If I remember correctly, CLOS had a strange naming convention that included the type in the symbol name
CLOS is built around multiple dispatch, so there isn't any type to go in the method name if you wanted; methods don't belong to a class, they just specialise generic functions onto any number of their arguments. What lispm described goes quite a ways beyond apropos, though.
Code completion goes beyond auto-completing method names on objects/modules. Either way, this shouldn't be some tribal us-vs-them thing; both static and dynamic typing have well-documented utility for different use cases.
But if we're going to make a generalization about which is easier or more capable to build complex editor integration around, I'd say statically typed languages are the winner here.
Modern code completion does, yes. Just as static analysis goes beyond type checking of a language. The computing resources necessary to support modern tools is beyond what older tools had.
I am not trying to make this an "us versus them." To the contrary, I'm trying to point out both toolsets have many of the same advantages.
I will even concede that static languages do seem to have the better IDEs today. I assert that is as much a byproduct of money spent on development as anything else, not a foregone conclusion of language design.
I am deeply convinced the killer application of static types is polymorphism. Types create ways of organizing your code that are simply not available for dynamic languages.
Unless, of course, you go out of your way writing dispatchers, as is usual in Python. But those are long, repetitive, bug-prone and cannot really be made generic.
You can have dynamic types that give you something similar. Overloading is much more difficult (though not impossible: few dynamic languages allow overloading outside of the receiver; Dylan, e.g., does).
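To make the polymorphism point concrete, here is a hedged TypeScript sketch (invented names) of the contrast: a generic function the compiler specializes and checks at every call site, versus the hand-written dispatcher you tend to end up with otherwise.

    // Generic version: one definition, checked per call site, no runtime dispatch.
    function pairwise<T>(items: T[]): Array<[T, T]> {
      const out: Array<[T, T]> = [];
      for (let i = 0; i + 1 < items.length; i++) {
        out.push([items[i], items[i + 1]]);
      }
      return out;
    }

    pairwise([1, 2, 3]);       // inferred as Array<[number, number]>
    pairwise(["a", "b", "c"]); // inferred as Array<[string, string]>

    // The dynamic-language analogue is often a manual dispatcher: repetitive
    // runtime checks, and nothing verifies that every branch is handled.
    function describeDynamic(x: unknown): string {
      if (typeof x === "number") return `number ${x}`;
      if (typeof x === "string") return `string "${x}"`;
      if (Array.isArray(x)) return `array of length ${x.length}`;
      return "something else";
    }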
The title is, provocatively, "to type or not to type."
The top of the article makes it clear that it really, really, really will answer whether we should type or not type. From the top:
>This is a terrific piece of work with immediate practical applications for many project teams. Is it worth the extra effort to add static type annotations to a JavaScript project? Should I use Facebook’s Flow or Microsoft’s TypeScript if so? Will they really catch bugs that would otherwise have made it to master?
>TL;DR: both Flow and TypeScript are pretty good, and conservatively either of them can prevent about 15% of the bugs that end up in committed code.
In this comment I will address whether this is really enough to deliver on that promise:
Why are bugs the metric, instead of bugs per programming hour, or total hours of programming (including fixing bugs) with or without TypeScript?
TypeScript adds types (and requires programmers to keep them in mind).
I think it's hardly controversial to claim that typed programs have a lower bug count because type errors are caught. It is also not controversial to claim that programming time is slowed down, because of the need to think of types.
The question is: how much?
If a programming language slowed everyone down by 5x but resulted in 75% fewer bugs, few people would choose it. Most people would rather keep the bugs.
If a programming language was 100 times faster and only resulted in 5x as many bugs (so, instead of 100 times the base rate, 500 times the base rate), then I and practically everyone else would always choose it for almost everything. After all, if it's 100x more productive, you can keep an 80x productivity gain and sacrifice 20% of your programming wall time on debugging.
So real results around speed and bug count, as well as the insidiousness of bugs, are crucial.
If the bug count were no higher when comparing languages A and B, but the second language had showstopping bugs that took 100x longer to find and fix, nobody would choose B.
> It is also not controversial to claim that programming time is slowed down, because of the need to think of types.
I'd dispute that. I think the types provide me a notation to think about something that I'd already want to think about, so I can actually program faster with them than without.
(Very much agree with your overall point: what matters is not how many bugs we have but how much business value we deliver overall)
> TypeScript adds types (and requires programmers to keep them in mind)
Programs without TypeScript also require programmers to keep types in mind - it's just that in those programs there's no way for a computer to verify that the programmer got the types right, and no unified way for a person to write down the types involved at any given point in their program.
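A tiny illustration of that (hypothetical function): the un-annotated version carries exactly the same assumptions; they're just invisible and unchecked.

    // Untyped: the author still "keeps types in mind" -- items must be an
    // array of { price: number } -- but nothing checks callers.
    function cartTotalLoose(items: any): any {
      return items.reduce((sum: number, item: any) => sum + item.price, 0);
    }

    // Annotated: the same assumptions, written down once and verified
    // by the compiler at every call site.
    interface LineItem {
      price: number;
    }

    function cartTotal(items: LineItem[]): number {
      return items.reduce((sum, item) => sum + item.price, 0);
    }

    cartTotal([{ price: 3 }, { price: 4 }]); // 7
    // cartTotal([{ cost: 3 }]);             // compile error: 'price' is missing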
> Programs without TypeScript also require programmers to keep types in mind
Not types, as such, but usually generics.
JS is hard to talk about in this context because of its weak typing and its willingness to type-cast just about anything. So the comment may well be true for JS, but I wouldn't be certain of it; it certainly wasn't the case when I was working with CoffeeScript to avoid the common pitfalls of JS.
But, for most dynamic languages, if you'll allow me to stretch things a bit, I don't need to understand precisely what I'm feeding into the function.
Instead of asking "is it a string?" or "is it a vector?" or anything else along those lines, usually I just need to ask "is it iterable?", if so, good enough for me.
This can ease refactoring, when you need to change to a new type, because if the interface provides the same mechanism, you don't need to change anything. If it doesn't, then it may be possible to provide the interface, rather than modifying the function.
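TypeScript's structural typing can actually express that "is it iterable?" question directly, which is part of why the refactoring story carries over. A small sketch (invented names):

    // The function asks only for "something iterable over numbers".
    // Arrays, Sets and generators all satisfy it, so swapping the concrete
    // type later doesn't force any change here.
    function sumAll(xs: Iterable<number>): number {
      let total = 0;
      for (const x of xs) total += x;
      return total;
    }

    sumAll([1, 2, 3]);          // array
    sumAll(new Set([1, 2, 3])); // set
    sumAll((function* () { yield 1; yield 2; yield 3; })()); // generator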
---
Things can get more complex than this, however.
Most people think of dynamic and weakly typed languages as the same.
However, there are several strongly typed dynamic languages. Oftentimes they would not allow you to pass the wrong type to standard functions.
Some dynamic languages have guards and contracts to ensure types - and some of those guards and contracts are... "compile time"... for lack of a better term.
That gives you optional typing that you can add as you feel the need to tighten up your code, whilst still allowing you to be as dynamic as you feel is appropriate.
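TypeScript's gradual typing is one mainstream version of that trade-off (a hedged sketch; guards and contracts in languages like Racket or Erlang work quite differently, this only shows the "tighten as needed" idea):

    // Start loose: behaves like plain dynamic code.
    function parsePort(value: any) {
      return parseInt(value, 10);
    }

    // Tighten later, only where it matters, without rewriting the program.
    function parsePortStrict(value: string | number): number {
      const n = typeof value === "number" ? value : parseInt(value, 10);
      if (Number.isNaN(n) || n < 0 || n > 65535) {
        throw new RangeError(`not a valid port: ${value}`);
      }
      return n;
    }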
I would certainly dispute that programming time is necessarily slowed down. Ironically, I find myself thinking much harder about types in a dynamic language, precisely because there's no compiler there to do the rote work for me. Lately I've been using pandas dataframes quite a bit, and the number of hours I've wasted due to problems with types is mindboggling.
So the article you just linked ("Greggman") was interesting and began roughly along the lines of the questions I expressed in my comment.
Sadly, JavaScript is not among the languages Greggman compared (didn't see it in the charts or on the page), let alone both JavaScript and a language that adds types to it.
That's what we're really interested in, isn't it? And as the original article we're discussing ("To type or not to type") promises, that's what would have:
>immediate practical applications for many project teams. Is it worth the extra effort to add static type annotations to a JavaScript project? Should I use Facebook’s Flow or Microsoft’s TypeScript if so?
So if these languages had simply been included in Greggman, it really would have gotten at the essence of my questions. Unfortunately they weren't studied.
Specifically, Table II gives mean times of 231.4 s (Flow) and 306.8 s (TypeScript), respectively. So if you spend on average more than 5 minutes debugging errors that static typing could have prevented, you're probably better off just writing the annotations.
You hit the nail on the head. The biggest savings for well-structured teams would have to be in revisited tickets and speed of development. Having a compiler tell you "this field is missing, this field is wrong, A cannot be B" saves a ton of time in terms of development validation (manual or automated).
As someone who grew up on Python and Javascript, I was never really that sold on typing, until I refactored something by changing a field on the model and Flow showed me all the places in the code that needed to be updated.
Granted, other more mature type systems/IDE will automatically refactor, but this was a really big obvious win for me :)
TypeScript is certainly not flawless and there are tradeoffs. But this is not a good argument against TypeScript. The point of TypeScript and any other statically typed programming language is to have type checking performed statically, that is, before the code is actually translated into machine instructions and run.
When people refer to "chemicals" in food, without distinguishing what type or category of chemicals, I tend to assume that they have an agenda of frightening an uninformed populace or that they are a member of that uninformed populace themselves.
Well, first there was the agenda of food companies to put as much profit-increasing crap in there as legally (and often beyond legally) possible (e.g. by making food more addictive, sweeter, longer lasting, etc.).
Here is something I think we are close to agreement on. Companies have made food more addictive, cheaper, more shelf stable in a variety of ways, to the detriment of nutritious food and consumer health in general. But unless we put the focus on the real issues with modern food, I don't think we will see much if any progress. In fact, I suspect that food companies and food scientists are up to the challenge of making food that is just about as cheap and addictive and shelf stable as the current stuff is without using any "artificial" chemicals. I mean, when "nitrate free" bacon can be sold that is, in fact, chock full of nitrates, the possibilities are endless.
What is "he" right about, exactly? You say "yes they can be useful", but that's not the impression I get from OP. I am also confused because the very quote OP is responding to states that "...type systems are not enough to prevent all the problems in logic and programming...". Do you consider that acting like type systems are a cure-all?
It could be at least argued that type systems aren't very effective at reducing meaningful bugs and/or aren't worth the costs they impose, but you went way beyond that argument. As I see it, the only way you can believe that "nothing suggests that type systems...can be improved to be practical at preventing bugs" is because you have willfully ignored or completely discounted every bit of evidence you've encountered that suggests otherwise.
I know that mice have been used very successfully to study many diseases and conditions that can also affect people. But are mice a good human analogue when it comes to diet?
I don't have time to find citations at the moment, but I seem to recall that the original studies linking consumption of saturated fat with...bad things (high cholesterol, etc.) were originally done with mice and more recent studies involving actual humans have failed to find a connection between consumption of saturated fat and the aforementioned bad things.
I know this doesn't answer your question as much as it introduces more questions, but I recall there being a propensity to use mice for these studies because gene expression based on lineage is more reliable in mice, the effects of metabolic modification are easier to study in mice, and a lot of existing studies are done on mice, so it is easier to advance research. I may have read this in a biologist's AMA on Reddit, so take it with a grain of salt. I also very much agree that study reproducibility is a major problem. Everyone wants to do the original research, and few want to validate existing work.
It is about compromise. The ideal would be to use real humans for all studies. There are many reasons we cannot do this, about half of them are obvious. (try to come up with the non-obvious ones as an exercise)
Mice are cheap, have a short lifespan, and raise fewer ethical concerns, so we use them. They are an okay model of humans, which is good enough to say that if something fails in mice, don't try it on humans. It is an open question which treatments would work on humans but fail in mice - but this is impossible to study, so we will never know how often it happens. (The fringe "coconut oil" groups put it at 5% from what I can tell.)
You've defined above why we use model organisms, but the original question wasn't questioning model organisms in general. I think we should work towards the original poster's challenge to validate that our models are efficacious, not just normal and cheap. After spending a semester seminar reading rodent dietary studies, I am equally sceptical that mice are a good general model for diet.
The cautious conclusion regarding your question seems to be "mouse models are decent, but we understand the dietary habits of mice much better than those of humans. And sadly, the leptin model doesn't translate neatly to humans, for reasons not yet fully known but probably partially social in nature."