Learn Physics with Functional Programming (nostarch.com)
291 points by privong on Sept 5, 2023 | 100 comments



Sounds similar to https://tgvaughan.github.io/sicm/toc.html. Should be fun if you're physics-inclined.


Official version: https://mitp-content-server.mit.edu/books/content/sectbyfn/b...

I'd really like to see a "spiritual successor" to Structure and Interpretation of Classical Mechanics--something that can take off and achieve a life of its own.

SICM is open-source, and many people have implemented their own versions of parts of it, but I would love to see a vibrant and active community develop around such a beautiful computer algebra / computer-physics system.

SICM goes far beyond simple Newtonian mechanics, implementing calculus, Lagrangian and Hamiltonian mechanics, and differential geometry, and probably a whole lot more that you just have to spelunk into the source code to discover.

(Here's a book about the differential geometry implementation in scmutils: https://mitpress.mit.edu/9780262019347/functional-differenti... as seen in HN: https://news.ycombinator.com/item?id=7884551 )


Have a look at Emmy: https://emmy.mentat.org


Are there any other systems that can render programmatic equations to LaTeX?


SymPy[1] and Mathematica[2] can both render expressions in LaTeX, if this is what you mean.

[1] https://docs.sympy.org/latest/tutorials/intro-tutorial/print...

[2] https://reference.wolfram.com/language/ref/TeXForm.html


I’m not the parent but do you know if there is a system/language that renders programmatic equations in-line in the editor? The closest thing I know of is the usage of Unicode symbols and Greek letters in Lean. I’m imagining a vscode extension that would interpret scientific code/equations and render those in-line or in a preview line above/below. Not sure about the utility of such a thing but it sure seems fun


It’s a bit of a stretch but if you write your equations in sympy in a Jupyter notebook, you can display nice LaTeX renders of the expressions and then either do math with them or just evaluate them as if you’d written them as a normal python function.


Wonder what it would take to implement a neural network in Emmy.


If it has an implementation of reverse mode automatic differentiation, then it might be possible.


I have reverse-mode (purely functional reverse mode at that!) sitting in a branch, and will get this going at some point soon. Even more fun will be compilation down to XLA, like JAX does in Python.


I know someone who took that course. They did not have fond memories of it.

My impression is that it can be a very frustrating way to learn mechanics if you don't have much interest in functional programming.


It's not for everyone.

I cut my teeth on SICP before going into physics, so I was perhaps the exact target audience.

I also see Scheme as an improvement over its successor languages.


What is the appeal of functional programming for physicists?


The motivation to "implement" physics in code is that you can't "cheat." You have to spell out every step in a formal way. The motivating example in SICM is that the usual way the Euler-Lagrange equations are written ... doesn't make sense.

The authors explain: "Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding. Traditional mathematical notation contributes to this problem. Symbols have ambiguous meanings that depend on context, and often even change within a given context."

Read the rest of the preface here: https://mitp-content-server.mit.edu/books/content/sectbyfn/b...

And why not just "code" but "functional code"? Well, it makes a lot more sense to "take a derivative of a function" if that function doesn't have side effects (etc). There is a tighter correspondence between functions in the programming sense and in the mathematical sense.
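
To make the correspondence concrete, here's a minimal sketch (my own, not from SICM or the book) of a derivative operator as an ordinary Haskell higher-order function; the step size h is an arbitrary choice:

    -- Consumes a pure function, returns a new pure function.
    -- This is only safe because f has no side effects: evaluating
    -- it twice at nearby points cannot change anything.
    derivative :: (Double -> Double) -> (Double -> Double)
    derivative f x = (f (x + h) - f (x - h)) / (2 * h)
      where h = 1e-6  -- central finite difference

    -- derivative (\x -> x * x) 3.0  is approximately  6.0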


I don't think the MIT guys have the same motivations as the author of this book. He (Walck) discusses the suitability of (a subset of) Haskell in this article: https://arxiv.org/abs/1412.4880

Maybe someone else can shed light on the MIT mindset. Certainly some of Walck's points apply to Scheme as much as to Haskell, but Scheme lacks the type system, syntax and syntactical "convenience" of curried functions. The basic strength of functional programming is the lack of complex imperative book-keeping: your code looks more like math.

My impression is that SICP and SICM are eccentric.


> Scheme lacks the type system, syntax and syntactical "convenience" of curried functions.

The argument is that all of that syntax is a distraction.


Yes, and that's like arguing that spaces between words are syntactic distraction. They're clearly not; more syntax rules can make a language simpler to understand (for both humans and computers).


A very smart CS guy I know pitched functional programming for scientific computing: he said it would greatly speed up the performance of codes by not spending time computing results that weren't going to be used.

Although that's not a terrible idea, I have never actually seen any major scientific code that was based on functional programming and was significantly faster than its non-FP competitors. My guess is that the folks writing the codes are already pretty smart, not doing any extra work that could be easily removed, and already take advantage of algorithms that use non-functional paradigms which give them significant speedups.


I've heard that before, usually from people with no experience in actual scientific computing. There's nothing wrong with using functional programming in scientific applications. I do. But I don't see how it's "specifically" good for scientific programming.

The thing about performance in scientific programming is that it's often binary: you either need the very best, or you don't care about it at all. Unlike other areas of programming, there is no middle ground. If you need your scientific code to be performant, then you need to squeeze every last bit of performance out of your hardware, which you can only do with something like Fortran or C. If you don't care about performance, then it doesn't matter. That's why Python is so popular.

Ideally I would love for something like F# to replace python in the scientific computing space, but the ecosystem is so much larger in python. That's what matters to most scientists.


Generally agree, but: the idea for FP in scientific computing would be for the FP-optimizing compiler to elide any computation that doesn't contribute to the final result.

The analogy I think of is tree traversal. A smart person can write an optimal tree traversal algorithm and make their program finish quickly, whether or not the user requested that part of the algorithm's results, but FP can realize the program doesn't output the tree, so traversing it can be skipped. OK, that's not a great analogy, but the point is that in principle, FP optimization could find a cheaper way to produce the same exact values as a simulation written in a non-functional language.
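
As a toy illustration (my example, not the parent's): Haskell's laziness already does a small-scale version of this, skipping any value that is never demanded:

    -- The expensive sum is never computed, because nothing
    -- ever demands its value.
    expensive :: Integer
    expensive = sum [1 .. 10 ^ 9]

    main :: IO ()
    main = print (fst (42 :: Int, expensive))  -- prints 42 immediately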


How often are there competing implementations in scientific computing? Most of the time people are doing just enough to publish a paper, or maybe maintaining a single library that everyone uses. Few people have the inclination, and even fewer the funding, to "rewrite everything".

In finance, which has a lot of parallels with scientific computing but tends to end up with semi-secret, parallel, competing implementations of the same ideas, functional programming has had significant (though by no means universal) success in doing exactly what you describe.


Let's see. The two big codes I worked with, BLAST and AMBER, have competitors. For BLAST there has been a long history of codes that attempted to do better than it, and I don't think anybody really succeeded, except possibly HMMER2. Both BLAST and HMMER2 had decades of effort poured into them. BLAST was rewritten a few times by its supporting agency (NCBI), and the author(s) of HMMER rewrote it to be HMMER2. I worked with the guy who wrote the leading competitor to HMMER; he was an independently wealthy computer programmer (with a physics background). In the case of AMBER, there are several competitors: gromacs, NAMD, and a few others are all used frequently. AMBER has been continuously developed for decades (I first used it in '95 and already it was... venerable).

All the major players in these fields read each other's code and papers and steal ideas.

In other areas there are no competitors, there's just "write the minimal code to get your idea that contributes 0.01% more to scientific knowledge, publish, and then declare code bankruptcy". And a long tail of low to high quality stuff that lasts forever and turns out to be load-bearing but also completely inscrutable and unmodifiable.

After typing that out I realize I just recapitulated what you said in your first paragraph. My knowledge of finance doesn't extend much beyond knowing "jane street capital has been talking about FP for ages", and most of the people I've talked to say their work in finance (HPC mostly) is C++ or hardware-based.


Really we need somehow to rejuvenate scmutils, either with a faithful port to a modern language, or a spiritual successor of similar calibre.


It's fully rejuvenated in Clojure as "Emmy", with Sussman's support and a bunch of 2D and 3D graphing extensions. See Emmy-Viewers: https://emmy-viewers.mentat.org/ and Emmy: https://emmy.mentat.org/

Thanks to https://2.maria.cloud, everything in SICM and FDG works in the browser as well: https://2.maria.cloud/gist/d3c76ee5e9eaf6b3367949f43873e8b2


Wow, thanks for this!


Of course! And referencing your other comment, during the ~2 year period I've been working on Emmy (on top of work by Colin Smith), I was keen to make the implementation more accessible and well-documented than the original.

There's still not a great map of the project (from primitives to general relativity), but many of the namespaces are written as literate programming explorations: https://emmy.mentat.org/#explore-the-project

Here's the automatic differentiation implementation/essay, for example: https://sritchie.github.io/emmy/src/emmy/differential.html

A rough sketch of the tower is:

- `emmy.value` and `emmy.generic` implement the extensible generic operations

- `emmy.ratio`, `emmy.complex` and `emmy.numbers` flesh out the numeric tower

- `emmy.expression` and `emmy.abstract.number` add support for symbolic literals

Next we need an algebraic simplifier...

- `emmy.pattern.{match,rule,syntax}` give us a pattern-matching language

- `emmy.simplify.rules` adds a ton of simplification rules, out of which

- `emmy.simplify` builds a simplification engine

Actually the simplifier has three parts... the first two start in `emmy.rational-function` and `emmy.polynomial` and involve converting an expression into either a polynomial or a rational function and then back out, putting them into "canonical form" in the process. That will send you down the rabbit hole of polynomial GCD etc...

And on and on! I'm happy to facilitate any code reading journey you go on or chat about Emmy or the original scmutils, feel free to write at sam [at] mentat.org, or else visit the Discord I run for the project at https://discord.gg/hsRBqGEeQ4.


This is an absolute triumph. Over the course of several years (starting about 15 years ago) I've been looking for a way to go through SICM and FDG without dredging up an MIT scheme that's useful for nothing else, or dealing with a partial reimplementation in a language with much less expressive power.


What would you build / create / write if you had a web-enabled build of SICM (well, scmutils I guess) in hand? I'd love to hear more about your thoughts on how to build a community around these tools and ideas.


I would love to explore statistical mechanics, quantum field theory, and/or general relativity through a similar lens.

But I am also quite interested in learning more about the "under the hood" workings and software craftsmanship of scmutils. The textbook _uses_ scmutils to explore classical mechanics.

But it does not delve into the implementation details of scmutils itself, which interest me.


SICM is more like a second physics course, as it uses the Lagrangian formulation of classical mechanics and starts with it, whereas OP seems to start from the basics.


Expressing physics with a more rigorous mathematical or computational formalism might be good. For instance, a force is a triple (sending object, receiving object, force vector). Newton's third law says that whenever there is a force (S,R,F), then there is another force (R,S,-F). This also makes clear that Newton's 2nd law is not a definition of force, because it misses out on two of the three components of a force triple. (By the way, is there a formal term for what I'm calling a force triple?)
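
A minimal Haskell sketch of that idea (the names and representation are my own, not standard):

    type Vec = (Double, Double, Double)

    neg :: Vec -> Vec
    neg (x, y, z) = (-x, -y, -z)

    -- A force as a triple: who exerts it, who receives it, the vector.
    data Force obj = Force { source :: obj, target :: obj, vector :: Vec }

    -- Newton's third law as a function: every force determines its reaction.
    reaction :: Force obj -> Force obj
    reaction (Force s r f) = Force r s (neg f)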


Yes, and it could help with a shortcoming of traditional representations of physical phenomena: defining cause and effect.


Calculus makes a lot more sense to me as code so this seems promising :D


Julia might have been more practical: it also has a type system, symbolic computing, automatic differentiation, etc. And it's maybe just functional enough.


What's impractical about Haskell?


moonads


except it's not 0-indexed!


For translating mathematics to code, in most (but not all) cases it's more natural to have 1-indexing.


Structure and Interpretation of Classical Mechanics, published in 2001, uses Scheme. (There is a second edition, released more recently.)



I would like to see Learn Physics with APL.


Absolutely!

I did find this.

"Working with APL for Physics Research - Kostas Blekos - Dyalog '17" https://www.youtube.com/watch?v=pWtvRlCdX00

Would be interested in reading a more detailed account.


This is an incredibly cool concept and I would have loved to have this book when I was in undergrad (for Physics).


I am actually studying special relativity and found that a Jupyter notebook plus some simple plots helps a lot. I did search for Common Lisp versions, as reading the Python code is hard: the one doing the Lorentz transformation was a mess, so I ended up doing it myself. I suspect it would be different in Common Lisp, which can usually be read, understood, and reused.

Sadly, simple realizations are missing, for example the twin paradox without acceleration (three astronauts handing over clock info instead of accelerating). And doing graphics... and simple wedges. Sadly.

Maybe someone here can highlight some sites for this and that.

For chapter 14, I wonder whether a Jupyter notebook would work (it can do TeX if using Matplotlib... I have not tested that, as I used TeXShop and screen captures from Matplotlib instead).
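
For what it's worth, the transformation itself is short. Here is a hedged sketch (my own code, not from any library) of a 1+1-dimensional Lorentz boost in units where c = 1:

    -- Boost an event (t, x) into a frame moving at velocity v
    -- (expressed as a fraction of c).
    lorentz :: Double -> (Double, Double) -> (Double, Double)
    lorentz v (t, x) = (gamma * (t - v * x), gamma * (x - v * t))
      where gamma = 1 / sqrt (1 - v * v)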


Is there an effective way of dealing with anti-commutative algebras like the geometric product in a functional model?

Obviously total functional programming is a problem.

But the GP greatly simplifies electrodynamics and removing the need to track handedness in your basis is very helpful in my experience.


I feel like the screenshots don’t make the book seem particularly novel or interesting


I have been looking for an excuse to try to learn Haskell again. I'm also looking for an interesting book to use to try to get my son interested in science. Thanks for this.


I'd love a book like this written in Python. Just feedback. I clicked the link and was about to purchase this. The idea of learning physics sounds great. The idea of spending time learning a new programming language that I'm unlikely to use for anything else sounds terrible.


I had the opposite reaction. I already know physics, but I want to learn Haskell.


Maybe I will take the plunge. Is there any real value to learning Haskell? There are so many different things I’d love to study.


I get your reluctance to dive into a new language just for one project. But here's the deal: Haskell's strong type system and emphasis on pure functions can actually make you a better programmer in any language. Its lazy evaluation is a unique feature that optimizes performance, and it's built with concurrency in mind. Plus, in Haskell, you really can't cheat your way out of functional programming. The language enforces it, so the code you write will be functional, unlike in Python where you can mix paradigms. Even if you're a Python devotee, the principles you'll learn from Haskell can offer a new lens for problem-solving. It's not just about learning a new language; it's about broadening your skill set.


Yes, I benefited greatly from learning Haskell. Found myself writing beautiful new abstractions in TypeScript based on what I learned. I suggest going through https://haskell.mooc.fi/ to learn.

Beyond the practical value you'll also discover the delightful beauty with which you can express things in Haskell, and these sparks of joy could give you a new motivation to grow and appreciate deeper, beautiful ways of coding and thinking.


This is a very valid consideration.

At the least, Haskell is much faster than Python, the language itself is not that complicated (learning a whole new ecosystem is a different story), FP might lend itself better to translating physics into code, and some elements of it carry over to other languages, for example, type classes and Rust’s traits.


> the language itself is not that complicated

I find Haskell quite complicated, involving abstract concepts you would likely not consciously encounter in other languages (lenses, monad transformers, free monads, etc).


It doesn't need to be.

You can do so much with just data declarations and typeclasses.

https://www.simplehaskell.org/

(disclaimer: not a huge fan of the site design, but I fully support the concept)


Lenses and monad transformers are libraries, not part of the language. It doesn't look like the book uses them, so you don't need to understand them in order to follow it:

https://nostarch.com/learn-physics-functional-programming


> and some elements of it carry over to other languages, for example, type classes and Rust’s traits.

Add React to the list.


There's always some value in learning a new paradigm, but Haskell always felt less useful to me than learning other tools and esoteric technologies. That's a personal belief though and it's subjective. Some people love Haskell and get a lot of power from it.

Python mostly hits an optimal point for my needs though and I didn't have to read 50 articles trying to explain what a Monad was. Scripting languages just click a lot better for me (less ceremony). There are some weird languages I like such as APL. APL is also very mathy, but there's essentially no ceremony (just some funny symbols that are easy enough to learn).


Ah. You found Haskell hard because you were trying to learn it out of order. Demanding to understand what a monad is before learning how higher-order functions and higher-kinded polymorphism work in Haskell is like demanding to know what an Iterator is before learning what a variable is in Java.

The thing about "Monad" is that it's an absolutely trivial interface. You'll know you actually understand it properly when you go "oh, that's all." Is it hard to describe? Not when you understand the underlying concepts. But if you demand to skip them, you're only going to confuse yourself.
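
For reference, this is roughly the whole interface (a sketch of the core of the standard Monad class, eliding defaults and superclass details):

    class Applicative m => Monad m where
      return :: a -> m a                  -- wrap a plain value
      (>>=)  :: m a -> (a -> m b) -> m b  -- sequence, feeding the result onward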


No, but a good assumption. I read several books and dozens of tutorials including parts of the 1000 page one that couple did like 6 years ago. Function application and so on definitely came before IO and the likes. It all just seemed like enormous mental effort for the simplest of things. That was my experience at least. It's possible I just don't have the best kind of brain for FP.

Your mileage will vary of course. This was just my experience. I know Haskell is powerful, but it wasn't right for me. I haven't met a scripting language I didn't like yet though, so perhaps my neurons are just wired for that now. I do enjoy the random APL or Forth problem though and I'm decent at SQL after using it often for a decade.


> parts of the 1000 page one that couple did like 6 years ago.

IMHO that book is bloated and overrated. Maybe it works for some people, but I find the explanations really weird and some of the exercises are just useless busywork without a point.

There are good parts in there (they do try to teach you how to properly think about types which is nice) but I really feel like you don't need to read 1000 pages to learn basic Haskell.


Fine. I also read Learn You a Haskell, the O'Reilly one and a myriad of other sources. In comparison, Python was a cake walk.


Oh I agree that Haskell is complicated to learn, I just happen to think that that book kinda makes it even worse.


Which makes it rather unfortunate that Haskell is a language where the recommended way of writing a program to e.g. read two numbers from the user, add them, and print the result, demands understanding "Monad".


Where do I have to 'understand "Monad"' to write this?

    main = do
      x <- readLn
      y <- readLn
      print (x + y)
Do I have to 'understand "Monad"' to write this?

    def main():
      x = int(input())
      y = int(input())
      print(x + y)


> Where do I have to 'understand "Monad"' to write this?

Literally every line?

Your example makes it seem so simple! "Do" for function declarations, "<-" for variable assignment... Haskell is just like python!

Alright, here I go!

  fact n = do
    n' <- n - 1
    if n <= 1 then 1 else n * fact n'
Uh oh...

  Couldn't match type `t' with `m t'
  Expected: t -> m t
  Actual: m t -> m t
What the heck is "m t"?


> What the heck is "m t"?

It's worse than that! What the heck is "->"?

But I don't particularly buy the argument. Alright, here I go:

    def main():
      x = input
      y = input
      print(x + y)
Uh oh...

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for +: 'builtin_function_or_method' and 'builtin_function_or_method'
What the heck is "Traceback", "<stdin>", "<module>", "TypeError", "operand", "builtin_function_or_method"? Did I have to understand all those to write the simple Python program?

The claim that you have to understand everything about a failure case in order to write simple programs doesn't seem valid to me. It doesn't resonate with how I've learned languages before. However, I may not have the best perspective because I'm very familiar with both Python and Haskell so I don't have the "beginners mind" any more. I would be interested to hear from someone who's a beginner (perhaps in both languages) whether they think the Haskell program requires more understanding than the Python one https://news.ycombinator.com/item?id=37402253


My point isn't that it errors. It's completely reasonable to expect even a beginner to do basic troubleshooting. My point is that the error specifically mentions monads. So will the documentation for `do` and `<-`. Anybody looking to use `IO` beyond a copy-and-paste level will be thrust headfirst into monad-land. This doesn't mean `IO` cannot be used at all without understanding monads, just that it's extremely explicit that something unique and interesting is happening when you do. In fact, I expect most people build their intuitions around monads by playing around with `do` notation.

This isn't the case in Python where input and output are standard functions. The syntax for function declaration and variable assignment just work.

If our requirements changed to now read and sum an arbitrary number of inputs together, I'd expect a beginner could change the Python program to do so. I'm skeptical they could make the required changes to the Haskell version (without understanding monads).


> My point isn't that it errors. It's completely reasonable to expect even a beginner to do basic troubleshooting. My point is that the error specifically mentions monads.

Right, and the Python errors mention all sorts of things a beginner is not expected to understand, like "builtin_function_or_method". I don't think "understanding builtin_function_or_method" is required to write the Python and I don't think "understanding Monad" is required to write the Haskell.

> If our requirements changed to now read and sum an arbitrary number of inputs together, I'd expect a beginner could change the Python program to do so. I'm skeptical they could make the required changes to the Haskell version (without understanding monads).

Where do I have to "understand monads" to write this?

    main = do
      iters <- readLn
    
      let loop n total =
            if n == iters
            then print total
            else do
              x <- readLn
              loop (n + 1) (total + x)
    
      loop 0 0
I had to understand that in Haskell looping is done via recursion (which is mindbending initially, if you come from imperative programming), but I don't see that I had to understand monads. FWIW the Python that I'd write is

    def main():
        iters = int(input())
    
        total = 0
    
        for _ in range(iters):
            x = int(input())
            total += x
    
        print(total)
This is simpler. But I could write a Haskell version that uses an IORef to get roughly the same code structure as the Python.

    import Data.IORef (newIORef, readIORef, modifyIORef')
    import Data.Foldable (for_)
    
    main2 = do
      iters <- readLn
    
      total <- newIORef 0
    
      for_ [1..iters] $ \_ -> do
        x <- readLn
        modifyIORef' total (+ x)
    
      theTotal <- readIORef total
    
      print theTotal
I would say this requires understanding the mutability/immutability distinction to know why we use an IORef at all (but you'd get that with OCaml[1] too) and it still requires understanding do notation, but I don't see that it requires "understanding monads"!

Now, if I were writing this for real I probably would use a version with a state monad transformer so it was

    import Control.Monad.State.Strict
    
    main3 = do
      iters <- readLn
    
      total <-
        flip execStateT 0 $ for_ [1..iters] $ \_ -> do
          x <- lift readLn
          modify' (+ x)
    
      print total
That would require some understanding of monads so that you can understand how they are transformed with monad transformers!

[1] https://www.cs.cornell.edu/courses/cs3110/2018sp/l/14-mutabl...


Is an error that takes looking things up better than just doing an unexpected thing? Because the mistake there seems pretty similar to the kind of syntax mistake in:

  >>> def factorial(n):
  ...   1 if n <= 1 else n * factorial(n - 1)
  ...
  >>> factorial(1)
  >>> 
Why is nothing happening?


> Is an error that takes looking things up better than just doing an unexpected thing?

100% yes! Took me quite a while to figure out what was wrong with your example. This is by far my biggest frustration with Python.

Perfect example, by the way. You've exactly captured the sort-of "semantic mistake" I was trying to convey: technically valid syntax but the meaning doesn't match the intent.

Fixing your example requires understanding how functions return values, a concept pervasive throughout Python. The first function you ever wrote probably returned something.

Fixing mine requires understanding how `do` and `<-` interact, which I remain skeptical can be done without exposing oneself to monads. My example should not use `do` at all, I'm actually surprised it can be made to compile with it.


> Fixing mine requires understanding how `do` and `<-` interact, which I remain skeptical can be done without exposing oneself to monads.

I think this is the major point of contention between us.

I don't think that understanding how to use `do` requires "understanding monads". My example was to demonstrate that understanding how to use `do` for the IO monad is not really much different from understanding how to write a sequence of statements in Python. There's one big difference: you have to use both `<-` and `let`, and you have to know that the former is used to get a value out of something of type `IO a` and the latter is just normal variable binding. But that still doesn't require "understanding monads"! It just requires knowing there's a special thing called `IO` that wraps things with effects.
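
A tiny illustration of the distinction (my own example):

    main = do
      line <- getLine            -- <- runs an IO action and binds its result
      let shouted = line ++ "!"  -- let binds a pure value; nothing is run
      putStrLn shouted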

I do think that understanding how to use `do` for arbitrary monads requires understanding monads! But that was not required to solve the problem lmm originally proposed, nor your extension.

I also do think that most pedagogical Haskell material focuses too much on "understanding fine details" and that applies particularly to monads. My point is that another approach is possible. That is simply to use `do` notation intuitively via analogy with imperative languages (which is a perfectly fine thing to do).


  fact n = do
    let n' = n - 1
    if n <= 1 then 1 else n * fact n'
Here you go. Not sure about Haskell, but in PureScript it compiles. Use "<-" for functions which return a value in the IO type constructor; otherwise use "let".


That's kinda the point though, right? There's something different about this `IO` thing that uses special syntax. You might be able to guess something and get it to compile, but should you?

  fact n = do
    let n' = do n - 1
    if do n <= 1 then do 1 else do n * fact n'


> Where do I have to 'understand "Monad"' to write this?

The recommended explanation of what <- is/does will be about "Monad".


You're moving the goalposts! You said 'the recommended way of writing a program to e.g. read two numbers from the user, add them, and print the result, demands understanding "Monad".'


I don't think it's "moving the goalposts" when you demand that you can understand, at least to a first-order approximation, what the code that you write is doing.

Yes, you could just treat IO as a black box and memorize a bunch of weird syntax rules, but you're going to have a hard time very quickly, especially when you change something and the compiler throws errors at you.

FWIW, I think that monads are a powerful abstraction, but they're certainly not incredibly natural and Haskell basically just throws them at you from day 1.


The recommended way of writing a program in most languages demands some level of understanding what you're writing. You would have to somehow come up with the "<-" parts of that program, and the recommended way of doing that is to understand Monad.


> The recommended way of writing a program in most languages demands some level of understanding what you're writing

Sure, some. I think this discussion is about exactly what and how much, when it comes to Haskell.

> the recommended way of doing that is to understand Monad.

That doesn't ring true to me, not from how I learned Haskell and not by analogy with how I learned other languages. I don't remember having to "understand Monad" to write in "do" notation. Quite the opposite in fact. I vividly remember trying to "understand Monad" and being unable to, and then just giving up and trying to use do notation, finding it easy, completely unhelped by what I was trying to "understand", and regretting having spent all the time "understanding". But it's been a long time since I learned a language so maybe I'm mistaken. I would appreciate the point of view of others who have fresher eyes.

I will concede that people have a harder time learning Haskell than Python (although I don't think anyone understands the exact reasons), but you gave a very specific example of a program that supposedly requires some deep understanding to write in Haskell. I wanted to give a counterpoint so people can decide for themselves. That's as far as my claim goes.


Sure, but if it's just for pedagogy reasons, there's plenty of things you can do in order to learn about HOFs, the type system, typeclasses, Monoid, Functor, etc. before reading in some input, and if that becomes necessary, a teacher can always provide some template with a disclaimer "don't worry about it, we'll cover that later" - same as they do it for Java that forces you to learn what classes are before you can write a program.

OTOH, the promise of Haskell is that while it takes longer to learn and getting used to, the payoff is that you can get rid of certain classes of bugs. Whether that's true or not is a different matter entirely, but a priori just because something is hard to learn it doesn't mean it's not worth it.



I'd be willing to bet there are already books like this for Python, or at least online tutorials. I know for a fact there are Python books dedicated to scientific programming, but they probably assume some knowledge of science and focus more on the coding aspect.


I’m working my way through a German book that explores Physics with Python: https://pyph.de/1/2/index.php?name=intro


Learning another programming language that you will never use still has value. It makes you a better Python programmer.


Another win if Python were used would be showing how functional programming is done in Python. That said, I'm fine with it being based on Haskell; presenting a specific programming language doesn't seem to be the main point. Also, someone could follow the book with Python and typing/mypy as an exercise.


You can learn Scheme in two hours. Haskell, not so much. Stick to SICM.


Does it really teach physics or does it teach Haskell through the way of physics equations?


Funny, I was thinking of doing just this exact thing to learn Haskell.


How much of either do I need to know to get value out of this book?


It appears to be an elementary introduction to both; no prior physics knowledge seems to be needed.


Does this implement symbolic algebra/calculus, or is it entirely numerical?


No idea if this book does it, but this seems like a perfect fit for automatic differentiation. You can get a simple implementation of automatic differentiation for one dimension in a small amount of surprisingly clear code, without needing to pull in any dependencies or extraneous concepts.
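
For instance, here's a hedged sketch (my code, not from the book) of one-dimensional forward-mode AD with dual numbers:

    -- A dual number carries a value and its derivative.
    data Dual = Dual Double Double

    instance Num Dual where
      Dual x x' + Dual y y' = Dual (x + y) (x' + y')
      Dual x x' * Dual y y' = Dual (x * y) (x' * y + x * y')
      negate (Dual x x')    = Dual (negate x) (negate x')
      abs (Dual x x')       = Dual (abs x) (x' * signum x)
      signum (Dual x _)     = Dual (signum x) 0
      fromInteger n         = Dual (fromInteger n) 0

    -- Differentiate any function built from Num operations.
    diff :: (Dual -> Dual) -> Double -> Double
    diff f x = d where Dual _ d = f (Dual x 1)

    -- diff (\x -> x * x + 3 * x) 2.0  ==  7.0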


That is along the lines of the motivation for my question. Automatic differentiation is such a natural fit with Haskell...


> this seems like a perfect fit for automatic differentiation.

Why?


It would let you represent physical systems as normal Haskell functions and get accurate derivatives "for free". At least in the one-dimensional case it's probably simpler than numeric differentiation while also letting you worry far less about accuracy and stability.

I've never worked this through to a full conclusion, but you could even write it in a way that would let you get symbolic differentiation out of it too.


Yes, if you get to automatic differentiation by overloading your operators to also take a “differential” type, you can further overload them to do symbolic arithmetic and then symbolic differentiation falls out for free.

See https://sritchie.github.io/emmy/src/emmy/differential.html for detail!


Makes sense. I initially thought that automatic differentiation would be mostly useful for long derivative chains, but of course even for single derivatives it has advantages.


After learning Physics with FP, you can help with Modelica!


Ordered from Amazon, thx.


I wish the title said "Haskell" instead of "Functional Programming". I mean, the two aren't interchangeable. I was actually hoping for Clojure, personally.



