I learned a bit of Idris from the book you link, but I gave up after trying to implement quicksort (the classic Haskell one-liner) on Idris vectors. You need to manually write a page of theorems for Idris to accept it. It's crazy.
So don't write quicksort in Idris. It's not going to work very well anyways. It's not at all well-suited, and isn't really even an especially important sorting algorithm in a world where linear time sorting and constant time indexed searches are a thing.
But most of the work you'd be doing to write quicksort is building the machinery for structurally recursive sorting, which a lot of tutorials walk through not because you have to, but because it's a very good exercise for learning how to write total functions whose recursion a dependently typed system can prove terminating.
Most of these tools are in the contrib library (and the book mentions this). For example the preorder interface you need:
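If it helps, the shape of that interface is roughly the following; this is a sketch from memory, so the exact names and argument order in contrib's Decidable.Order may differ:

    -- A preorder over a relation `po` on a type `t`:
    -- the reflexivity and transitivity evidence you thread through a verified sort.
    interface Preorder t (po : t -> t -> Type) where
      transitive : (a, b, c : t) -> po a b -> po b c -> po a c
      reflexive  : (a : t) -> po a a

Once you have an implementation of that for, say, Nat and LTE, most of the remaining work in a tutorial-style insertion sort is ordinary structural recursion that threads this evidence around.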
This book is a real joy to read. I got to peek at a draft at OPLSS 2017 and have been waiting impatiently for it to come out. The detailed, carefully worked examples, one after another, that this style of book is famous for are adapted beautifully to dependent type theory. Check it out! The code implementing the language in the book is here: https://github.com/the-little-typer/pie
I believe it's more just about how to use a dependently typed language (my copy is still in the mail). Like the Little Schemer, it's written in the Socratic style. I think the idea is that you can do it all in your head if you want, with a piece of paper covering up the column with the answers.
I'm going to order it right now. I highly recommend The Little Schemer and The Seasoned Schemer too! I have read all the titles (MLer and Java too) in the series and those are my favorites. The only one I have not finished is The Reasoned Schemer, although I hope to try again in the future.
Little Schemer first and Seasoned Schemer second. The other ones are about different, independent topics and can be read in any order once you have read the first two.
Several comments elaborate on the big gap between normal programming practice (e.g. structurally recursive algorithms) and the great difficulty of dependent typing those practices. Some of these comment on how dependent types are "just out of the lab".
This brings into focus a question I've had for a long time: Why this gap, and especially in this direction? E.g. in aerodynamics we had Bernoulli's principle for a couple of hundred years before we could build airplanes, which depend on it. In formal language theory we had lambda calculus, Turing's universality results, etc. decades before we had Lisp and Fortran.
We often see the difficulty of building a practice to exploit theory.
So it seems very strange to me that we are able to write / plug together literally world-spanning software systems -- which do have bugs but in fact work correctly nearly all the time. But we can't easily well-type even simple algorithms with extremely well understood properties.
I'd argue that there aren't many "simple" programs with "well-understood" properties in use in industry.
Software is a bit different from architecture in that partial failures tend to work and can be refined around repeatedly (a partially failed building tends to rip itself apart, partial failures in software can linger for years and only cease when their dependencies fault out). People are just more amenable to altering their processes, products and lives around bad software.
I think dependent typing reveals to us, to some extent, what a house of cards we truly have built for ourselves.
Several interesting points but maybe they show something different from what you intend.
Network stacks, databases etc. that correctly handle trillions of interactions, some of them adversarial, have indeed been "refined around [their failures] repeatedly" and have gotten pretty robust along the way.
On the other hand useful formal accounts of their behavior (distributed, highly parallel, loosely coupled, asynchronous) seem far, far away.
It would be helpful to have an account of approximation to formal properties, especially if that can help us understand how repair and refactoring can lead to progressively better approximations.
Perhaps the house of cards that is revealed is formal methods not software.
> Network stacks, databases etc. that correctly handle trillions of interactions,
But they have also incorrectly handled a similar order of magnitude of transactions! Routing failures and issues that people troubleshoot around, misconfigurations and bugs, and simple electromechanical glitches exist in the real world, and they're not some kind of zebra; you can force them by pinning a server.
Our networking systems "mostly work" because they're persistent, and because (interestingly) the difference between a perfectly coordinated networking system and a purely chaotic networking system with random traffic is actually rather small compared to other domains.
> On the other hand useful formal accounts of their behavior (distributed, highly parallel, loosely coupled, asynchronous) seem far, far away.
Generally what makes a difference between a "robust" program and a brittle program that simply falls over is how much of its own failure it attempts to take into account. Even bad models of self-failure (pre-OTP erlang, for example) are better than no model at all.
> Perhaps the house of cards that is revealed is formal methods not software.
Given that almost no one applies formal methods and dependent typing is by no means a part of that discipline, I'm not sure it's fair to draw those comparisons in this conversation.
Dependent Typing is not formal methods, nor is it verification. It's code that allows for automated reasoning. It's closer to the machinery of formal methods, like Datalog or an SMT solver.
Again lots of interesting points. Only time for one response.
You highlight networking but I also mentioned databases. Schematizing your comment "Our... systems "mostly work" because... the difference between a perfectly coordinated... system and a purely chaotic... system... is actually rather small".
I think banks and customers would disagree rather violently regarding the (distributed, asynchronous) system that handles credit card transactions.
And yet this system is certainly subject to a lot of attacks, not just random errors.
> I think banks and customers would disagree rather violently regarding the (distributed, asynchronous) system that handles credit card transactions.
Do they? Most customers accept without question the idea that they have an "available" balance distinct from their "current" balance.
It's also the case that when attacked the system typically fails and then the fungibility of currency combined with the massive power of insurance is used to pave over it.
I hear that Practical Foundations for Programming Languages by Robert Harper[0] is highly recommended. I have Types and Programming Languages, but found it a little dry to be honest. I do use it as a reference manual in my work though.
Don't get me wrong - CT is a super handy thing to learn, and it pops up a ton when thinking about any software, including type checkers and compilers, but the OP was specifically asking about type theory and programming language implementation. Yes, you can express lots of category theory in terms of type theory (the dream is to implement all of it in terms of TT so that we can mechanise it), but it won't be the best use of your time if you want to build a type system.
I probably phrased my suggestion wrong. Yes, it's not central to OP's enquiry. But I did find a basic understanding of CT to help a lot when reasoning about types, and that lecture series gives just that in a relatively simple way.
Historically, they've been hard to implement in a way that is efficient and sound, integrates with effectful code, promotes code reuse (like in Haskell), and has good developer UX (good error messages and IDE support). I believe we are getting to a point where it is feasible, hence I'm messing around with it myself [0], but it can be challenging to navigate the terrain of papers and theory as a language designer. While DTs themselves are quite simple, the integration with other features users expect is tricky, and producing a production-grade compiler is definitely not as straightforward as making a regular Hindley-Milner style language.
They can make libraries a pain. Like imagine if someone wrote a library in C#2025 with dependent types where functions took parameters like [i: i%2 == 0] or whatever. And your big project has no existing dependent types. There's no way to adopt this library unless you pull dependent types all the way in.
Or even if you did, but your types were [i: i%4 == 0] or [i: (i+1)%2 == 1] or whatever, you'd have to write proofs that the types were compatible. Even some simple things are just not worth it.
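To make that concrete, here is what one of those compatibility proofs can look like in Idris (Even and Div4 are made up for illustration, not from any library):

    -- Illustrative refinements, encoded as inductive predicates on Nat.
    data Even : Nat -> Type where
      EvenZ  : Even Z
      EvenSS : Even n -> Even (S (S n))

    data Div4 : Nat -> Type where
      Div4Z : Div4 Z
      Div4S : Div4 n -> Div4 (S (S (S (S n))))

    -- The "compatibility proof": evidence of divisibility by 4 can be
    -- weakened to evidence of evenness, so a caller holding Div4 evidence
    -- can satisfy a library that asks for Even.
    div4Even : Div4 n -> Even n
    div4Even Div4Z     = EvenZ
    div4Even (Div4S p) = EvenSS (EvenSS (div4Even p))

This one is only three lines, but you end up writing glue like it at every boundary where two libraries phrase the "same" invariant differently.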
If the project had no existing dependent types at all, couldn't you just perform a runtime check before calling the library? The check would return evidence that [i: i%2 == 0] for the particular i, or fail at runtime. If it succeeded, then you could invoke the library using the new evidence.
One appealing aspect of dependent types is that they let you decouple validation of inputs from the function calls themselves, while still disallowing passing wrong inputs to the function by mistake.
Yup, lots of people are under the impression that you need to prove everything when it comes to DTs. That's not true - you can indeed push these checks to runtime, and just have the compiler ensure you do it.
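A minimal Idris sketch of that pattern; Even, isEven, halve and callLib are made-up stand-ins, not from any real library:

    -- Evidence that a Nat is even, as an inductive predicate.
    data Even : Nat -> Type where
      EvenZ  : Even Z
      EvenSS : Even n -> Even (S (S n))

    -- The runtime check: either produce the evidence or fail with Nothing.
    isEven : (n : Nat) -> Maybe (Even n)
    isEven Z         = Just EvenZ
    isEven (S Z)     = Nothing
    isEven (S (S k)) = case isEven k of
                            Just p  => Just (EvenSS p)
                            Nothing => Nothing

    -- A stand-in for the library function that demands the evidence:
    -- it computes n/2 by consuming the Even proof.
    halve : Even n -> Nat
    halve EvenZ      = Z
    halve (EvenSS p) = S (halve p)

    -- The caller validates once at the boundary, then calls freely.
    callLib : Nat -> IO ()
    callLib n = case isEven n of
                     Just prf => printLn (halve prf)
                     Nothing  => putStrLn "odd input rejected at runtime"

Nothing here needs manual proving; the only cost is that you're forced to handle the Nothing branch.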
You could just assert that i%2 == 0 via some postulate - as long as the proof is irrelevant for the code, that is. Doing so is similar to converting from any in a gradual type system: if the assertion is correct you get correct code with little work, and if you're incorrect you're no worse off than if you had no types at all.
There are difficulties with dependent types, but having to go all in can be avoided via a shift in culture.
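For the record, the escape hatch in Idris is believe_me; asserting the i % 2 == 0 example without proving it looks something like this (assertEven and the equation are just illustrative, believe_me itself is real):

    -- Assert the refinement instead of proving it. believe_me is Idris's
    -- unchecked coercion: if the assertion is actually false, the "proof"
    -- is meaningless, which is exactly the gradual-typing trade-off above.
    assertEven : (i : Int) -> mod i 2 = 0
    assertEven i = believe_me ()

Code that only passes the proof along never notices; code that computes with it is where you would get into trouble.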
Wondered that myself. It’s surprising there aren’t more languages with dependent types (and the associated requisite libraries).
I suspect it’s taken a while for the theory behind dependent types to become widespread enough for some enterprising hacker or grad student to make a useable language for them (e.g. Idris). Even if a new theory comes along it takes time for appropriate metaphors and methods of using them to be created and disseminated.
Though doesn't computing the dependent types at compile time take more processing?
Imagine having to prove a Coq theorem for every function call.
No, it's not really that bad. But almost. I really suggest you play with Idris some; what you can do with it is brilliant, for simple tasks. For complicated tasks, like the fully general propositions you need for library functions, it can get much more complicated than the code itself.
This. Thank you. I'm being rate limited, so this is the only post I'll be making on this thread. My question was genuine. I want to learn both the positives and the pitfalls of dependent types. I mimicked the book's description because it amused me from a linguistic perspective. Looking back, I probably should have phrased it differently.
Note that many of those problems are being actively worked on in research, and progress is being made, albeit slowly. That said, we can still get some of the benefits of dependent types, even without all the problems being solved right now! Just gotta be aware that it's not all roses yet.
Here is a quick guide to avoiding them: "Unless you are using Haskell at a very high level of abstraction, Coq, Agda, Pie or Idris: congratulations you have avoided them."
It's not really clear why you'd want a book about the downsides of a quite recent development in practically usable programming models. Is it just because you are a hater?
My guess is because OP has been burned by the "LEARN THIS NEW THING, IT'S REALLY COOL AND POWERFUL AND ALL OF THE COOL KIDS ARE DOING IT (and oh by the way many of the simple things you do all the time are incredibly inconvenient...)" narrative one too many times.
Adopting something purely on its merits is a bad idea, but nobody ever writes the book about a language/paradigm's downsides. I'm pretty sure that was the joke, but I might be off.
nobody ever writes the book about a language/paradigm's downsides
Indeed. One can argue that books like "optimizing X" or "secure X" are about ways to easily write slow or insecure code in X, but this is somewhat narrow. Is there never enough demand for a broader book on downsides of X?
> I wrote a whole chapter about the downsides of X
I lol'ed 6-8 times... Still, I have to say that I'm loving my distros lately; distro maintainers, you rule!
Some typos in the article:
eents/events
client an/client can
because it the/because it solved? the
tires/tries
ore dump/core dump
trwe/tree
piel/pixel
resulution/resolution
ertain/certain
N ot/Not
screens een have/screens have
How's it "not knowing the difference" to explain that the usage of the terms client and server switched over time, in the sense of which is local and which is remote?
An xterm client running on a VAX mainframe connects to an X11 server running on a Sun workstation: the X11 client is remote, the X11 server is local.
A web browser client running on a phone connects to a web server running in the cloud: the web client is local, the web server is remote.
Exactly! I, too, used to live in an office next to a machine room full of "web clients"! :-D
(Actually, I'm one of those deluded fools who claim a "server" provides a "service" to one or more "clients", who make "requests". Yeah, I know, but I figure somebody has to keep the joke funny.)
>Given its roots, X naturally used the technically-correct terms to describe its major components: the portion that abstracted display and input hardware into a service that could be used by other programs was called the "display server"; the portion that made use of those services was called the "display client." This later caused endless confusion among people who thought that "server" was a synonym for "big computer" (file server, database server, etc.) and "client" meant "small computer" (diskless client, etc.). Who knows, maybe it would have been easier had they been called "application server" and "application client" but that's revisionist history.
Also interesting:
>Ultimately, the pendulum swung back with the advent of Web 2.0 technology and mobile devices. Now, we take it for granted that applications can run anywhere in the network and be accessed by any type of device. While X is primitive compared to JavaScript and HTML5 (whose ability to push computational tasks over the network into the display device was inspired by Java and NeWS), X did lay the initial groundwork. It also proved that an open source model could work in business environments. Not too shabby for a technology that will soon be hitting its 30th anniversary.
>Jim Fulton, alumnus of Project Athena, the MIT X Consortium, Cognition (first commercial use of X on DOS) and Network Computing Devices (leading X terminal vendor)
There are tons, but we really need to ask if any of this is a fair topic for Dependent Typing, which has only recently moved from pi-calculus musings on paper to compilers that can do more than prove simple structural recursion.
It's like seeing a photo of a baby and asking why there isn't a service to provide police records for babies, because you "just want to be careful, you know?"
It's not even really possible to "adopt" dependent typing today. It has only just emerged from the realm of academic curiosity, and only two implementations exist that are anywhere near "practical" in the context you're describing.
Both of those implementations are very honest about their shortcomings, and nearly every talk and blogpost for them mentions you can't yet use this in many industrial contexts.
It's very difficult to see this as anything but the usual distaste for PL theory that constantly swirls around this community. If the author didn't intend to associate the post with that, then they've done it by accident.
This book does not talk about static typing, but about dependent types. Dependent types are more powerful and expressive than simple types because they convey more information. For example you could have [Int] to represent a list of numbers, but you could also have [x: Int, x > 20 && x < 50]. Or you could have an ordered array and know that fact from the type associated with the array. Moreover, you need to use a theorem prover to show that applying a function to a value of a particular dependent type will produce a result of the expected dependent type. This kind of programming is not well suited to being implemented in TypeScript.
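A rough Idris rendering of that first example, since the [x: Int, ...] above is pseudo-syntax. If memory serves, So and choose live in Data.So in recent Idris versions and are the usual way to turn a runtime Bool check into evidence you can carry in a type; InRange and mkInRange are made up for illustration.

    import Data.So

    -- An Int packaged with evidence that it lies strictly between 20 and 50.
    InRange : Type
    InRange = (x : Int ** So (x > 20 && x < 50))

    -- Smart constructor: check once at runtime, keep the evidence in the type.
    mkInRange : Int -> Maybe InRange
    mkInRange x with (choose (x > 20 && x < 50))
      mkInRange x | Left prf = Just (x ** prf)
      mkInRange x | Right _  = Nothing

Anything that receives an InRange can rely on the bounds without rechecking them.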
Dependent types are static. GP was trying to say that this book is about more than just normal static typing (of the kind that TypeScript adds to JavaScript).
There is no problem compiling dependent (or static) types to JavaScript, as the type checks are done at compile time, and don’t require any support from the JavaScript runtime.
Clojure spec is contract programming. You write pre and post conditions to ensure at runtime that Pre -> Program -> Post holds, and every time you call a function the pre and post conditions need to be checked. Dependent types analyze that relationship at compile time by proving that, given the preconditions, the function will produce the desired output. Since this is a compile-time check you will not pay a runtime penalty.
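A tiny Idris sketch of that difference: the precondition lives in the type and is discharged by the type checker, so nothing is left to check when the program runs (Vect is the standard length-indexed vector from Data.Vect; vhead is just an illustrative name):

    import Data.Vect

    -- The precondition is in the type: only vectors of length (S n) are accepted.
    vhead : Vect (S n) a -> a
    vhead (x :: _) = x

    fine : Int
    fine = vhead [1, 2, 3]    -- accepted: the length 3 is S 2

    -- bad : Int
    -- bad = vhead []         -- rejected at compile time; no runtime check remains

Contracts like spec's give you the same guarantee dynamically, but you pay for the check on every call and only find out when a bad call actually happens.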
TypeScript actually does have some forms of type dependency (e.g. conditional types), but these things have been added in a sort of ad-hoc way, and it's unclear how sound they actually are, and they are quite limited. The book here uses a language, Pie, that is designed in a much more principled way, with greater expressiveness. The book probably won't be directly applicable to TS work, but I'm sure you would benefit from getting a more rounded understanding of what is possible with type systems.
I feel like there’s so much to say about this comment, that I don’t even know where to start.
So firstly I agree, $40 seems like a lot. I personally couldn’t justify it at the moment. Particularly when the reproduction cost is nearly zero.
But $40 also doesn’t seem like a lot to pay for a book that will probably not sell more than a few thousand copies. Most likely revenue generated won’t fully cover the time and effort required to create such a book.
But... the author is also a professor at MIT. I feel like if this work wasn’t somewhat publicly funded, it really should have been.
In the end, I’m left morally confused. But it feels like something is wrong in the world when a book like this is available only to a select few, when for the same capital outlay it could be available to everybody.
> In the end, I’m left morally confused. But it feels like something is wrong in the world when a book like this is available only to a select few, when for the same capital outlay it could be available to everybody.
Only to a select few? For $40? The book costs less than most console video games.
When I grew up, we never ate at a restaurant... it just would have been an extravagant expense. My parents probably earned <15000USD combined. This book would have been far too expensive to consider.
This is in the western world... imagine how many people in the world can’t afford this book. On a global scale... very few people could afford this book (or a 40USD video game).
It makes me kind of sad that you’d so casually suggest that this is not the case. Makes me think that perhaps people in the tech world don’t have a great understanding of how poor many people are...
Over 80% of the world population lives on less than $10/day. The percentage of the world population who can afford this book is very small.
Maybe 10%, 5%? Other than repeating this obviously wrong statement, would you like to provide some evidence that >20% of the world population can afford this book?
The world's population living on less than $10/day will have no use for a book on a niche aspect of type systems. The majority of people on that wage will probably not even have access to a computer.
Most people, at any income level, would not find value in this book. The fraction of people who find value in the book may be correlated with income.
There are a large number of people who might benefit from access to this book (or similar books) but cannot afford to legitimately purchase it.
Given that only a small fraction of the world population can afford this book (I estimated 5%; happy to entertain other estimates), it seems likely that the majority of people who could benefit from access to this book could not afford to buy it.
Another way of phrasing that: The book costs as much as work that took dozens of people years to make.
Also if you were to somehow poll everyone reading this and ask "Did you buy this book?" you'd get some number, x. But if the book were priced at $5 and y people bought it at that price, then $5y would be much greater than $40x.
I bought a bamboo fineline pencil for $50 the other day. It's a tool that will serve me for at least a year. It's unclear whether this book would.
I want work like this to exist, and for the author to be rewarded for it. But ultimately, in an era when words are infinitely and instantaneously copyable, the economic value of words seems to drop.
Given the choice between stealing knowledge and not stealing knowledge, when you wouldn't have paid for it anyway, where's the harm?
The harm is fewer people writing books like this in the future. Every dollar this book gets in revenue not only goes partly to the author, but goes into the record as profits for that "genre" of books. Every dollar of revenue for this book increases the value of the advance the author receives for the next book, and increases the probability that another author writing the same kind of material will get accepted for publication. The harm you create by stealing this book is that the market for that knowledge is destroyed.
I think you could've made the same sort of argument against movies or music before netflix and napster, but here we are, and the markets still seem thirsty for new content.
> I think you could've made the same sort of argument against movies or music before netflix and napster, but here we are, and the markets still seem thirsty for new content.
Because reasonable adults are paying for content which they want, directly via movie tickets, indirectly via e.g. Netflix and Spotify.
Content is stolen if and only if a user tries to steal it, so I don't understand how you are absolving them of responsibility. People pay for Netflix and I guarantee you that if people stopped paying for it, Netflix originals would stop being made.
> Given the choice between stealing knowledge and not stealing knowledge, when you wouldn't have paid for it anyway, where's the harm?
As a working adult, time spent reading a book is both more limited and more valuable than $40. If you wouldn't have bought it anyways, why is it worth your time stealing and reading it?
Although it was an unpopular opinion at the time, I agree with Metallica's outspoken moral opposition to Napster circa 2000.
You repeatedly call me “dude” and “bro”. You assume many people can afford a $40 book.
To me you pretty much typify what’s wrong with modern tech culture, a total lack of empathy or understanding that there is a world outside of that which you live.
I would like to see more publicly funded, freely available educational material out there, but it seems worth pointing out that MIT is a private university.
The knowledge is freely available, here's Dan Licata giving an introduction to Dependent Types for functional programmers https://youtu.be/LXvP1A97oAM
Even after watching that and thinking I probably understand dependent types, I'll read The Little Typer and discover my intuition was wrong, just like when I read The Seasoned Schemer and thought I already knew everything there was to know about the concept of higher-order functions.
According to WorldCat Library search it's not even in libraries other than Library of Congress and "College of Western Idaho". So it is probably not popular enough to be uploaded to a pirate site either. In fact it just came out on 18 September, so in order for a copy to be uploaded anywhere someone would have had to buy it and then immediately scan it, or the authors would have to put a draft copy on their website.
As an author of a book on which I worked incredibly hard, FUCK you. If you want it so bad, buy it. Or wait a few years, buy it used. Or borrow it from a library. Don't fucking steal it.
Carpenters and other craftspeople who earn far less than the denizens of Hacker News buy whatever they need to further their business. They don't steal others' tools.
I'd be happy to debate, if you're interested. But in the meantime it appears that the anger is misplaced; direct it at the fact that we have this wonderful tool that destroys class barriers and makes knowledge free to all.
You act as if I have $40. Would it surprise you to learn my power was cut off within the last few months?
Another point: "Stealing" implies something was lost. The words are still there, even if I have copied them.
The game industry and the iPhone app store have proven that when you price something closer to $1, it will generate exponentially more revenue than $40.
My anger isn't misdirected. It is directed exactly at this sentiment of yours: "The words are still there". It is the same whether I came up with a new magic trick, or a song, or a film. Don't fucking steal it because you have this wonderful tool called "copy" that "destroys class barriers". There is plenty of stuff on the internet (mine included) that people have chosen to put up for free; go use those. Or wait 10 years. The words will still be there, and you can get it much cheaper.
There was a lot of energy put behind those words, and I expect to get paid a modest sum for it. The means of knowledge transmission aren't free, just as you discovered that while a river wants to be free, the mechanism of converting it to useful energy and transmitting it to you isn't free.
People put a lot of energy into programming languages, but they don't expect to get paid anything for it. Why is it different when it comes to publishing a book?
That may sound like a dismissive question, but it's at the crux of our disagreement. If we can resolve that, we might be able to see eye to eye.
I'd hazard that a very large fraction of people who work on open PLs and open operating systems get paid -- and expect to get paid -- for the effort they put into that project; take Golang, Rust, Linux, Kotlin, Java/JVM, Scala, Haskell, OCaml, Swift and so on. These are not written by homeless people. Try telling Rob Pike that he's not going to get paid for the ten years he spent on Go, and that he would only get paid for his other contributions to Google.
Further, it is an author's prerogative (whether that author is a company or an individual) to set the terms of the pricing. As a consumer you can choose to accept it and pay the price, or come up with an alternative model (like iTunes or the App Store) that changes the market.
I resent it intensely that after putting my own money and time and effort into a project (took a full year off a job), somebody just pirates it so easily. And then attempts to claim moral high ground with weasel phrases like "knowledge wants to be free" and "class barriers".
If I understand your position correctly, you feel that even if someone does not have $40 or would not have paid $40 for something that can be freely copied, it is both immoral and unethical to ignore the author's wishes and copy it anyway. Even though you as the author are unaffected by this action. You also feel that it's justifiable to seek out people who do this and tell them that they should not do this, i.e. how to live their life.
Is that an accurate summary? I am trying to respond to the strongest possible interpretation of what you're saying.
What is the difference between someone doing this, which is an illegal victimless crime, and recreational drug use, which is also an illegal victimless crime? Why is one immoral and unethical, but not the other? Furthermore, why is it justifiable to believe that it's an important right to be able to ingest whatever you want into your body as long as you're not harming anyone else? And are you sure the same argument doesn't apply to this case?
I write a book, put it out on the market. Someone copies it and puts it up on a website (crime alert). Someone like you publicises it. I don't get paid because from your point of view, it is out there free for the taking. Meanwhile, I have lost hundreds or thousands of potential sales from people who may have paid, but have now been tempted to join your illegal caper. Everyone revels in the very public theft.
Which is why I seek out comments like yours that glorify piracy and tell them to bugger out of _my_ life. They are most welcome to their lives as long as they don't adversely affect mine.
> Meanwhile, I have lost hundreds or thousands of potential sales from people who may have paid, but have now been tempted to join your illegal caper. Everyone revels in the very public theft.
No you haven't, and there lies the crux of the argument. Many of those who helped themselves to the free copy wouldn't have purchased it at the retail price anyway, even if there were no free copy to be had. There is a financial loss, yes, but multiplying the total number of free copies by the profit to the author per purchased copy yields a greatly exaggerated figure.
So many assumptions are embedded in your comment. Firstly, it's not proven that piracy affects sales. Quite the opposite: it usually raises popularity for an item, because – if it's good – people sing its praises, which leads to more sales.
You didn't respond to my actual comment. Again, it is victimless because no one would've bought the overpriced book except for those who have $40 to throw away on a lark.
If we focus on making a quality product at a reasonable price, sales follow. The fact that technology has reduced this price to near $0 is unfortunate but is merely a consequence of computers.
I get that technology is often upsetting, but why take it out on users? The way to win is to pay attention to trends and adapt, not wish the world were different.
Riddle me this: Why did people write books before there was an economic incentive for them to? The crux of our disagreement appears to be this: it wouldn't hurt the world for us to return to those times. And technology seems to make this inevitable.
I wish I could get paid to write programming languages all day, but many people wish they could be paid for many things that are not feasible. Are you so sure your book would have made those thousands of dollars in an era before it was possible to widely distribute it? Who would buy it? And moreover, who would hear about it, and how?
Of course, Lissa did not necessarily intend to read his books. She might want the computer only to write her midterm. But Dan knew she came from a middle-class family and could hardly afford the tuition, let alone her reading fees. Reading his books might be the only way she could graduate.
It's always interesting to watch Stallman's writings become reality.
In fact, this is so prescient as to be worth quoting in full:
Programmers still needed debugging tools, of course, but debugger vendors in 2047 distributed numbered copies only, and only to officially licensed and bonded programmers. The debugger Dan used in software class was kept behind a special firewall so that it could be used only for class exercises.
It was also possible to bypass the copyright monitors by installing a modified system kernel. Dan would eventually find out about the free kernels, even entire free operating systems, that had existed around the turn of the century. But not only were they illegal, like debuggers—you could not install one if you had one, without knowing your computer's root password. And neither the FBI nor Microsoft Support would tell you that.
Substitute "Microsoft Support" for "Apple". We even have officially-licensed and bonded programmers now: The $100 developer ransom.
Lissa did not report Dan to the SPA. His decision to help her led to their marriage, and also led them to question what they had been taught about piracy as children. The couple began reading about the history of copyright, about the Soviet Union and its restrictions on copying, and even the original United States Constitution. They moved to Luna, where they found others who had likewise gravitated away from the long arm of the SPA. When the Tycho Uprising began in 2062, the universal right to read soon became one of its central aims.
You claim you are a victim. Yet you refuse to acknowledge that there are people who can't afford your work who would otherwise be enriched by it. Of the two victims, it's hard to say which is worse. Especially given that people will continue writing books even when there is no incentive to.
"Riddle me this: Why did people write books before there was an economic incentive for them to? The crux of our disagreement appears to be this: it wouldn't hurt the world for us to return to those times."
In the middle ages, books were written by the elites, for the elites. Necessarily so, because education itself was an elite activity. You want a return to those times?
"Are you so sure your book would have maid those thousands of dollars in an era before it was possible to widely distribute it? Who would buy it? And moreover, who would hear about it and how?"
Surely you don't mean to say that a product would never be heard of if there weren't the means to copy it. Surely you don't go into a theatre and livestream a play or concert or standup, because the poor "victims" outside wouldn't have a chance to be enriched. If Broadway is expensive, go elsewhere for your entertainment. You don't have the right to give away or consume someone else's work just because technology brings the cost of watching it down to zero.
This is the last I am going to say about this argument. I'm quite done.
> "People put a lot of energy into programming languages, but they don't expect to get paid anything for it. Why is it different when it comes to publishing a book?"
The difference is what the people who choose to do the work expect to get out of it. There are people who work on programming languages (or software) that have commercial licenses. That's their choice. Choosing to ignore their choice, to subvert the terms on which they choose to offer the fruits of their labor, is wrong.
If you can convince them to offer their work under other terms, great. Until then, respect the terms the authors (of books, languages, and software) have chosen.
> The game industry and the iPhone app store have proven that when you price something closer to $1, it will generate exponentially more revenue than $40.
This may be true for mass-market, high-volume products like mobile games (although even there the real money is in in-app purchases). It is not necessarily true for special-interest, low-volume products like this book. In order for it to be true, you need to assume that over 40 times as many people will buy the book at $1 as will buy it at $40. Frankly, the audience for this book is so narrow that I suspect that they would eventually get three times as many free downloads as they will get sales at $40.
Ok, so digital scarcity is engineered, not real, because scarcity makes our economic system work. However, I don't think anyone has come up with a viable post-scarcity model. In some cases like mobile apps and steam sales, big discounts can increase volume to the point where the economic equation works. Sadly, I don't think a book on dependent types is really amenable to that scenario.
https://www.manning.com/books/type-driven-development-with-i...