I find it amusing that many people (seemingly Hackers) concentrate on Wolfram's ego rather than on the actual thing at hand. I don't care about egos, lipstick color, or the pants the language creator wears. I care about the language, so let's talk about it now:
From a long-time Mathematica user: this is not a language I like. I use it, but there is little to no fun. Many things aren't immediately clear or become problematic if you stop using the language for a while. As an example, quick, what is the difference between Module[], Block[] and With[]? Even if you remember that With[] is a lexical scoping construct, Module[] and Block[] can be confusing.
It doesn't help that Mathematica is absolutely horrible for debugging code. I don't know why they put so little emphasis on their REPL, but it's way behind the times. Even mismatched parentheses can cause serious pain (and I hate the fact that things move around when I edit parentheses).
That said, I have a lot of respect for Mathematica as a complete system. It is incredibly useful, most mathematical tools you will ever need are there. It is also a great tool for explorative data analysis. Have a pile of data you'd like to see patterns in? Load it up, mash it into shape with high-level functions, then quickly produce great-looking graphics. Nothing even comes close to the flexibility of Mathematica here.
Not sure if you just want commiseration and are just having trouble remembering which keyword is which, so I apologize if I'm explaining too much. For me, deciding which to use is more about grappling with the difference between dynamic and lexical scoping than about which concept maps to which keyword. And just as knowing more about a person helps me remember their name, knowing more about a function helps me remember its name.
It's a bit dense to read through, but with Block, "i" has been assigned the value "a" yet still remains in the form "i" throughout the evaluation; once "m" is replaced with "i^2", the "i" is seen and replaced with "a", acting as normal dynamic scoping.
In Module, however, "i" is replaced by a temporary variable with the form "i$5592", which does not have a definition anywhere in Mathematica, so when "m" is evaluated, "i" is left alone, which is how lexical scoping acts.
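Since the two comments above are fairly dense, here's the same dynamic-vs-lexical distinction sketched in Python. Python is lexically scoped, so the Block-style behavior has to be simulated with an explicit binding stack; every name below is illustrative, not any real library's API:

```python
# Lexical scoping: a function's free variables are resolved where the
# function is defined (roughly what Module/With give you).
def make_lexical():
    i = 10  # bound at the definition site
    return lambda: i ** 2

m_lexical = make_lexical()

# Dynamic scoping: free variables are resolved against whatever binding
# is active when the function is called (roughly what Block gives you).
# Python has no dynamic scope, so we simulate it with an explicit stack.
_dynamic = {}

def dyn_bind(name, value):
    _dynamic.setdefault(name, []).append(value)

def dyn_unbind(name):
    _dynamic[name].pop()

def dyn_get(name):
    return _dynamic[name][-1]

def m_dynamic():
    return dyn_get("i") ** 2

dyn_bind("i", 10)
print(m_lexical())   # 100 -- the i captured at definition time
print(m_dynamic())   # 100 -- the i bound at call time
dyn_bind("i", 3)
print(m_dynamic())   # 9 -- rebinding changes what the same code sees
dyn_unbind("i")
```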
I dislike forgetting parts of languages in the ones I use. C# is full of things whose behavior I have to look up, as is javascript, python, and especially while I'm learning them: clojure and cljs.
But what's nice about this language is that these two concepts, Module and Block, while having different meaning, have the same form.
Correct me if I'm wrong, but the ego of the language creators seems to shine through in at least this[0] reference page, though. The most important bit I'd like to get some clarification on:
> Long viewed as an important theoretical idea, functional programming finally became truly convenient and practical
> with the introduction of the Wolfram Language. Treating expressions like f[x] as both symbolic data and the
> application of a function provides a uniquely powerful way to integrate structure and function—and an efficient,
> elegant representation of many common computations.
I see nothing on the reference page that could be said to make Wolfram's FP constructs more convenient or practical. They make it sound like FP was a long lost dream but then Wolfram came along and actually made it happen. They're seemingly using the exact same constructs as everyone else.
What you do actually get from the reference page, though, is that they went the Haskell route and decided the FP stuff shouldn't be readable by anyone who hasn't used the language before. This is all fine and dandy, really; it's a model as good as any other, since it at least gives writability where it loses readability... They aren't doing anything new, from the looks of it, though. They seem to be doing precisely the opposite... In exactly the same way.
Because ego blinds one to effective criticism. And the criticism many of us have with Wolfram products is that while smart in limited domains, they can be blindingly stupid in others (your various examples providing evidence for this). If he listened to and respected our complaints, maybe we'd have a better product. But no.
Not really. Stephen knows how poor our debugging facilities are (and I regularly tell him how bad the situation is) -- we've just had limited bandwidth to actually fix it properly to our satisfaction.
Luckily I have a totally revamped debugging and logging system that will put us ahead of the curve here (imagine DTrace at a program level). Probably a point release of V10.
Same goes with multiple undo and retina support -- we know how embarrassing their absence is, and we've luckily been able to fix these for v10. I've been in design meetings where the multiple undo has bumped several other desirable features off our roadmap.
Unless it's the same kind of curve Go is judged against (aka the curve of 30 years ago), that's a very tall order. Are you working on something better than a time-traveling debugger?
This is good to hear. But the problem is that the curve is moving all the time, and Mathematica seems to be catching up very slowly.
As for the REPL and general editing, if I were you, I'd quickly go towards LightTable integration — it should be doable, and the effects could be spectacular.
I like what they write about functional programming.
"Functional Programming
Long viewed as an important theoretical idea, functional programming finally became truly convenient and practical with the introduction of the Wolfram Language."
Not just that, he also seems to think that he invented Lisp's idea of code-as-data: "In most languages there’s a sharp distinction between programs, and data, and the output of programs. Not so in the Wolfram Language. It’s all completely fluid. Data becomes algorithmic. Algorithms become data. There’s no distinction needed between code and data." ( http://blog.wolfram.com/2013/11/13/something-very-big-is-com... )
Kinda funny considering how the Mathematica language is built on top of the M-expressions that lisp never bothered to implement.
Note that this is marketing copy aimed at Mathematica users, mainly scientists and mathematicians, most of whom probably don't know what LISP or Haskell are.
I think that's part of it. It's also aimed at non-hackers who took a look at Clojure once and said 'yuck'. And managers who are being 'groomed' not to totally freak out when someone on their team says "can we use the Wolfram Language on such and such a project?".
There was also a recent article on here about how programming is a dead-end, non-prestigious job in research circles. Sounds like a continuation of that sentiment.
The science "stack" is something that, I think, most people agree exists. Just as it exists in technology. But I always read the comic as making fun of the feeling of superiority in being deeper down the stack.
No, not everyone does. Physicists, for example, are sometimes known to maintain that math is only a set of human models approximating underlying physical substrates, and that pure / theoretical math is an artificial construction whose validation depends on finding a physical manifestation of the theory. That would put experimental physicists to the right of (above?) the mathematicians.
But even if you did believe the mathematicians' claim, by the same logic you would put philosophy above them.
I think it's pretty amazing reading physics forums, where it seems clear that a lot of the people think mathematics is almost some kind of joke until substantiated by physicists. And then there was the usually insightful Lawrence Krauss's blanket dismissal of philosophy; while philosophical musings that contradict known data are problematic, much of philosophy lies outside the domain of physics.
Anyway, in regard to the idea of a science "stack", I kind of enjoyed Alan Lightman's Reunion, where he poses a nineteenth-century astronomer's obsession with, and histrionics in response to rejection by, a cute young woman in his observatory as a kind of "proof" that biology studied more powerful forces. The event simultaneously led to the astronomer abandoning his career and inspired hers in biology.[1] Though obviously meant as a joke, the author himself transitioned from a successful career as a physicist to writing and teaching literature.
Ok, I interpreted it wrongly. But I still don't agree. I've worked in academia and at a NASA research center, and never did I observe a respected scientist making such an observation. I know that's anecdotal evidence, but I don't know what else to fall back on.
I always thought that the old tired line of "X is the core science!" was a trap that only mathematicians, physicists, and ignorant grad students fell into. I wasn't aware that any respected (and respectful) scientists actually took it seriously.
I think perhaps you're still thinking of a value judgement. To me, it is obvious that we have different fields of science for different scales. For example, there are many aspects of physics that a biologist takes for granted or doesn't even reason about when considering biological systems. They may need to reason about, say, chemistry from first principles. But I would be surprised for many biologists to have to apply much reasoning from quantum mechanics.
Again, this is not a value judgement. A similar thing exists with computers. People who write web applications depend on user-application infrastructure, which depends on systems applications, which depend on operating systems, which depend on computer architecture, which depends on materials engineering, and so on. As a systems programmer, I regularly reason about operating systems and computer architecture. But I never reason about the properties of the materials that make the hardware. The computer architects, though, may have to reason about the materials, as they can provide constraints on their designs.
As a general principle, it is always good to have at least some knowledge of 2 levels of abstraction below (and above) where you work, because they tend to leak!
i think that's a reference to the fact that Mathematica uses M-expressions and operator precedence. it doesn't look like lisp, but if you FullForm an expression you will see that it's all just M-expressions.
that said, i think it is lisp-like as a design choice. the Mathematica language itself is more raw than even lisp -- you can take the language in different design directions. Wolfram chose to include lisp aspects
These comments would be a lot more interesting if people identified what was cool rather than just bitching about ego or syntax or licensing. I'm not going to be using the Wolfram Language, but I'd love to identify the best ideas from it. And it's a different enough language that I am confident there are interesting ideas in it.
Reap/Sow is a funny language feature that I hadn't imagined before, but that's not really the point. The point is that this is a runnable example – that's actually what is output. Typically when you run something like that you get "NameError: a is not defined". And "a" here is really a variable, of sorts – it's not a string or symbol (at least not a symbol in the sense that we know them in programming).
Given this, snippets of code are just as executable as entire programs. Every expression is like a function with the free variables as its parameters, and a sequence of expressions is a bit like function composition.
This is all natural from the perspective of mathematical notation. In a more traditional programming environment I think it's reminiscent of partial evaluation: https://en.wikipedia.org/wiki/Partial_evaluation – where you analyze a program and execute expressions opportunistically. It's really almost the same as partial evaluation, but the Wolfram Language knows a lot more about how you can execute different combinations of expressions than a typical language. A typical language does not really "believe" that (a+b) and (b+a) are equivalent. It doesn't know how to relate different operations. Nor do normal languages have a concept of simplification, so they can't speculatively try other arrangements (where none in isolation is clearly better or simpler than another) to see if simplifications are possible.
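A toy sketch of what "execute expressions opportunistically" means: evaluate the subtrees whose variables are bound, and leave free symbols symbolic. This is a hypothetical mini-evaluator written for illustration, not any real library's API:

```python
# Expressions are nested tuples ("Plus", a, b) or ("Times", a, b);
# bare strings are free symbols. simplify() evaluates what it can and
# returns a partially evaluated form for the rest.
def simplify(expr, env):
    if isinstance(expr, str):              # a symbol
        return env.get(expr, expr)         # substitute if bound, else keep
    if isinstance(expr, tuple):
        op, a, b = expr
        a, b = simplify(a, env), simplify(b, env)
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == "Plus" else a * b
        return (op, a, b)                  # partially evaluated form
    return expr

print(simplify(("Plus", "a", "b"), {"b": 2}))           # ('Plus', 'a', 2)
print(simplify(("Plus", "a", "b"), {"a": 1, "b": 2}))   # 3
```

The first call is the behavior described above: instead of a NameError, the unbound "a" simply survives in the output.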
I know it's not the point of what you were mentioning, but I wanted to briefly mention that reap/sow are not unlike Haskell's so-called Writer Monad:
newtype Farm a b = Farm (b, [a]) deriving (Eq, Show)

instance Functor (Farm w) where
  fmap f (Farm (b, as)) = Farm (f b, as)

instance Applicative (Farm w) where
  pure x = Farm (x, [])
  Farm (f, as) <*> Farm (b, as') = Farm (f b, as ++ as')

instance Monad (Farm w) where
  Farm (b, as) >>= f =
    let Farm (b', as') = f b in Farm (b', as ++ as')

reap :: Farm a b -> (b, [a])
reap (Farm x) = x

sow :: a -> Farm a ()
sow a = Farm ((), [a])

-- prints ("e",["a","c","d"])
main :: IO ()
main = print (reap (do sow "a"
                       return "b"
                       sow "c"
                       sow "d"
                       return "e"))
(Reimplemented in a simple way here to show the underlying machinery; usually, instead of lists, a writer is parameterized by an arbitrary monoid.)
What you are getting at here is the fact that Mathematica is a symbolic language through-and-through.
In my webpage here: http://www.oftenpaper.net/sierpinski.htm I try to give passing mentions to how the Mathematica language and its infrastructure make various things easier.
As a programming language geek, I haven't found a language more powerful than Mathematica, and that's before considering the infrastructure it comes with.
I've always found the repeated application of rules to simplify patterns in an expression, until the expression stops changing, to be pretty powerful when just playing around, especially with small problems. However, in my experience in bioinformatics, that flexible approach is so slow in the face of actual data as to be totally useless for real problems in my line of work.
I think an option to use a faster, less powerful, linear pattern matching algorithm more like in ML/Haskell would help a lot in this regard.
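The "apply rules until the expression stops changing" strategy mentioned above fits in a few lines of Python. Real systems match expression trees, but naive textual rules are enough to show the control flow; all names here are illustrative:

```python
# Apply the first matching rule, then restart; stop when no rule fires.
def rewrite_fixpoint(expr, rules):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in expr:
                expr = expr.replace(lhs, rhs, 1)
                changed = True
                break
    return expr

# x*1 -> x and x+0 -> x as naive textual rules:
rules = [("*1", ""), ("+0", "")]
print(rewrite_fixpoint("a*1+0", rules))   # a
```

The cost problem described above is visible even here: every step rescans the whole expression against every rule, which is exactly what a restricted, linear ML/Haskell-style matcher avoids.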
When I was a student working with Mathematica 5 and 6 a lot, I was always amazed by its simplicity and power. I had just two windows: a white document window without any toolbars (only a zoom control at the bottom) for typing, and another window with the documentation. And the documentation was great: you could evaluate examples right inside the doc page and play with parameters, just like in your own document.
The design of the language, user interface and depth and clarity of documentation makes me think that Wolfram Research is even more design-obsessive than Apple. Stephen Wolfram takes credit for too many things (like Jobs), but he also delivers amazing stuff.
and there are options to remove the window frame, etc. but i think if you are using certain licenses the toolbar may not disappear. and the main toolbar never disappears, to my dismay
I think it was a default. Or, at least, with one formatting toolbar which I simply disabled. I never used palettes of special symbols: it was easier to remember some common shortcuts via "escape" key (e.g. Euler constant is <escape> ee <escape>)
part of the problem Mathematica has is that people assume it's just for math, which isn't the case. it's a very general system. this renaming, while utterly dorky and aesthetically hamhanded, will at least help clear that misunderstanding
The language lacks a spec. For a language with a mathematical background, this is a joke. Descriptions of the core features simply don't exist, never mind a semantic definition like the one Scheme has had for decades now.
Maybe they want to prevent alternative implementations.
the spec of the underlying language would be very simple. it's basically just repeated pattern matching. beyond that, knowing where to draw the line would be a bit weird. for example, does the function TextRecognize [1] need to be specified?
and by the way, Mathematica's documentation is enormous so i don't think a single spec would be necessary. maybe a few functions here or there would need some clarification in certain edge cases, but those would be higher-level functions like Plot
yes, using Module primarily. the basic trick is to close over Module-local variables to make an object-like thing (though i would generally advise against OO stuff in Mathematica. part of the power of the language is due to how it keeps everything straightforwardly manipulable)
precisely speaking, however, Module simply renames symbols to make them unique. the effect is the same as a closure, but it isn't a "pure" closure like you would find in other functional languages, because you can do for example Names["i*"] and find that the supposedly-private 'i' has the name i$2656 and you can alter its value that way
another point is that you could, if you wanted to, create your own version of Module and other scoping stuff, since it's a matter of making a macro by using the HoldAll attribute [1] and using 'ReplaceAll' to replace specific variables or arbitrary structures
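For contrast with Module's rename-to-i$nnn trick described above, here is what a "pure" closure looks like in Python, where the private variable genuinely isn't reachable from the outside by any name:

```python
# A counter as a genuine closure: i lives only in make_counter's scope,
# unlike Module's renamed i$nnn symbol, which Names["i*"] can still find.
def make_counter():
    i = 0
    def bump():
        nonlocal i
        i += 1
        return i
    return bump

c = make_counter()
print(c(), c())   # 1 2
```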
if you're asking whether functions in Mathematica are first-class values, then yes. everything in Mathematica is first-class. functions are just symbols that have associated pattern-matching rules in the form f[___], otherwise they are the same as any other symbol.
you can do, for example:
RandomChoice[{Plus, Times}][2, 3]
where the result will be either 5 or 6. in more explicit form:
If[RandomReal[] < .5, Plus, Times][2, 3]
example of an "anonymous" function:
(#1^2 + #2^2 &)[2, 3]
square of all numbers from 1 to 100:
#^2 & /@ Range[100]
simpler form:
Range[100]^2
all pairwise products of the first seven prime numbers (among themselves):
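The snippet for that last example didn't survive above, but the whole sequence of demos translates directly to Python for readers without Mathematica handy (the prime helper is my own, not a library call):

```python
import random
from operator import add, mul

# Functions are values: pick an operation at random, then apply it.
result = random.choice([add, mul])(2, 3)   # either 5 or 6

# Squares of all numbers from 1 to 100:
squares = [n ** 2 for n in range(1, 101)]

# First k primes by trial division (illustrative helper):
def first_primes(k):
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

# All pairwise products of the first seven primes (among themselves):
ps = first_primes(7)                        # [2, 3, 5, 7, 11, 13, 17]
pairwise = [p * q for p in ps for q in ps]  # 49 products
```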
I'm always struck by how slick and well produced everything related to Wolfram is. It's not what I've come to expect from most academically leaning companies at all. This goes for NKS which is a beautifully presented book whether or not you go along with its contents.
It seems that whenever Wolfram makes these grandiose and dubious claims, there are always people cheering him on. The comments on his blog posts are always unending adulation. You sound just like those people.
I very much doubt that those of us who doubt just about all the marketing fluff that he espouses are just not understanding his genius. I have read parts of NKS, and I don't think it's all that fascinating, other than "cellular automata produce pretty pictures, and there's no reason to try to understand why they do this." It's very cranky.
I think Stephen Wolfram is just a rare breed of socially successful crank. He scores high on the crackpot index, but not high enough to be completely discredited as a lunatic. His litigious nature and ability to wrangle the legal system in order to label everything "mine" is what keeps him in business.
For those curious about the details behind "He scores high on the crackpot index, but not high enough to be completely discredited as a lunatic. His litigious nature and ability to wrangle the legal system in order to label everything "mine" is what keeps him in business.", see this review of "A New Kind of Science", which begins:
> Attention conservation notice: Once, I was one of the authors of a paper on cellular automata. Lawyers for Wolfram Research Inc. threatened to sue me, my co-authors and our employer, because one of our citations referred to a certain mathematical proof, and they claimed the existence of this proof was a trade secret of Wolfram Research. I am sorry to say that our employer knuckled under, and so did we, and we replaced that version of the paper with another, without the offending citation. I think my judgments on Wolfram and his works are accurate, but they're not disinterested.
> With that out of the way: it is my considered, professional opinion that A New Kind of Science shows that Wolfram has become a crank in the classic mold, which is a shame, since he's a really bright man, and once upon a time did some good math, even if he has always been arrogant.
How can they have legal standing for such a claim, unless one of the authors had privileged access to Wolfram Research knowledge to misappropriate a trade secret? Seems to me that the principle of trade secrets should have a clear defense for independent rediscovery, although law often fails to reflect common sense.
> How can they have legal standing for such a claim, unless one of the authors had privileged access to Wolfram Research knowledge to misappropriate a trade secret?
The absence of legal standing is of course something that can be raised once a lawsuit is underway, but it doesn't prevent the threat of one. Many organizations will knuckle under to the threat of a lawsuit from a wealthy opponent just to avoid the expense of consulting lawyers, as long as there isn't a big cost in avoiding the suit.
How would you refer to a trade secret in a scientific publication? The citations list where, when, and by whom the reference was published; a trade secret isn't a trade secret after it's published, right?
And is the employer in question the Santa Fe Institute? I am not sure whether to be incredulous about or cynically believing that they would go along with that.
"The real problem with this result, however, is that it is not Wolfram's. He didn't invent cyclic tag systems, and he didn't come up with the incredibly intricate construction needed to implement them in Rule 110. This was done rather by one Matthew Cook, while working in Wolfram's employ under a contract with some truly remarkable provisions about intellectual property. In short, Wolfram got to control not only when and how the result was made public, but to claim it for himself. In fact, his position was that the existence of the result was a trade secret. Cook, after a messy falling-out with Wolfram, made the result, and the proof, public at a 1998 conference on CAs. (I attended, and was lucky enough to read the paper where Cook goes through the construction, supplying the details missing from A New Kind of Science.) Wolfram, for his part, responded by suing or threatening to sue Cook (now a penniless graduate student in neuroscience), the conference organizers, the publishers of the proceedings, etc. (The threat of legal action from Wolfram that I mentioned at the beginning of this review arose because we cited Cook as the person responsible for this result.)"
I admire his (or his company's) high production values (I bet many would interpret my "slick" comment as a slur though!) but these certainly stand separate to his/their work's scientific merit which I'll sadly never be smart enough to judge ;-)
I suspect much of his/their success comes through his/their Jobsian eye for detail and ability in presentation and while this is at its best when applied to meritorious work, it's certainly a skill/asset in its own right too.
Don't confuse "smart" and "knowledgeable". If you choose to do so, you could become sufficiently knowledgeable about the field to judge for yourself with reasonable confidence Wolfram's and Wolfram's company's scientific contributions.
In my field, there are many people smarter than I am. Many of my significant contributions to others come from knowledge and experience.
Wolfram made a really useful tool. This is independent of how you may feel about NKS.
Try to take yourself back to 1988. Macs are newish. Desktop publishing is new. There's this neat new program called Mathematica. You can type in equations, solve them, and even graph them in 3D. You can laser-print the notebooks and they look better than textbooks.
Wolfram may be an egomaniacal self-promoter, but it's not all empty hype.
The details of the story are hard to find nowadays, probably because the lawyers told everyone to shut up about it. You can find some vague allusions to it here: " Wolfram quit Illinois, took the program private, and entered into complicated lawsuits with both his former employee and his co-authors (all since settled)." http://vserver1.cscs.lsa.umich.edu/~crshalizi/reviews/wolfra...
Btw, which side of his reputation does he live up to? To being a visionary genius or to being a raving crank?
One has to take Stephen Wolfram's claims of superiority with a grain of salt.
Nevertheless, it appears that people have to pay to use "the Wolfram language" (although there might be scenarios that allow free usage, to attract users who will eventually pay if they want to scale up their work). The concept of paying to use a language sits uneasily with the common understanding of what a language is.
I've seen it used for prototyping, proving and equation simplification in the simulation space, but then those algorithms were converted to C++ for production use.
I know an engineer at Boeing who leans on it heavily for his research. I'm not sure exactly what his department/responsibilities are, but there's one data point anyway.
I use it for prototyping, analyzing data, and explorative research. When considering a new approach to a computational or machine learning problem, you can often try it out in Mathematica and see if it's the right direction at all.
From what I've observed, Matlab is more popular. There's a perception -- valid or not -- that Matlab is for numerical computation and Mathematica is for symbolic or higher-level math. And that Matlab is easier.
However, I'm noticing a trend towards using mainstream programming languages and whatever free libraries are available, to do the same kind of work. I used Mathematica 15+ years ago, and for doing the same kind of work today, I now use Python and am quite happy with it.
Is there a PDF of this doc somewhere? I would like to scroll through it to see what's in there, but I don't think I'm going to click on 10,000 links to see every language construct...
Realistically, rather than work with the fussy syntax of this pseudo-functional language, most people who work in data science would be better off using Python, because the IPython Notebook http://ipython.org/notebook.html provides a similar mixing of code and graphical results. matplotlib (whose plotting interface was modeled on MATLAB's) can produce a wide variety of graphical plots, and newer libraries like pandas have been bringing much of the power of R into the Python world.
Five years ago, Wolfram may have been worthwhile, but today the wave has formed and it is converging on Python.
My experience was having used Mathematica years before I ever saw LISP, or really knew anything about programming beyond C & F77. A REPL, functional programming, lists, etc. -- it was pretty mind-blowing.
Now I know that none of those things were invented by Wolfram (the company or the man), but they were put into productive use, in a very slick and beautiful application with a slew of symbolic and numerical math libraries which were world class.
I guess to a large extent, you're not missing something, but the libraries are comprehensive, well designed, well documented, very consistent and reasonably intuitive.
Similar experience here. I didn't do CS in college but used MMA a fair bit. Then ended up programming for a living in mainstream 90s languages. So when FP started to make a comeback recently it just hit me that "Hey this is just like Mathematica". But even then the Mathematica environment was better in ways than what you get with some FP languages now. I don't get all the retro fetish. I know my vi but I've no wish to use it over a good IDE. Hopefully Light table can do it for a functional language.
I don't know, but to me most of the power of Mathematica is basically the wide-ranging and compatible utilities. Not having to write my own basic math functions, nor spending hours trying to get some C++ package working, is huge.
> To me this looks like a Lisp variant with a large set of utility libraries.
I feel like I'm missing something because I read this a lot, but don't see it when I look at Mathematica. The language seems to be a higher-order, functional programming language with dynamic (?) typing. That part sounds like lisp, but syntactically it looks nothing like lisp to me. Is there a portion of the language that I'm overlooking?
Technically the language is a Term Rewriting System[1]. You can use it as a TRS by directly manipulating transformation rules, in a functional way, in an imperative way, and of course as pure symbolic mathematics.
It does naturally suit functional thinking, so that becomes the default 'best' way of engaging with it.
Thanks. I think I knew this years ago, while spending my off-work hours delving into programming language history and theory, but most of that info is now lost from disuse. Like I said to your sibling post, I see the semantic comparisons; I just don't get calling it a "lisp variant", unless we're going to call every language with somewhat similar or derivative semantics but wildly different syntax a lisp variant as well. With that, the distinction between languages (at least for conversation and discussion) becomes virtually non-existent, since we could probably come up with half a dozen categories that all languages fall under.
Interesting. And after looking at Mathematica (the language) more this morning, I can see the influence of Lisp and other languages. I think my definition of "lisp variant" is stricter than others'. I wouldn't consider Rebol, Tcl, or Ruby "lisp variants", even though they all derive a lot of language concepts from the lisp family, so I can't really consider Mathematica a lisp variant either.
Put it in any bucket you like, but don't ignore it. Mathematica is absolutely worth studying. If you've never used a general purpose term rewriting system before, you're missing out on some mind-altering stuff.
I'm not ignoring it, I used it years ago in grad school, it was fantastic. It just has no relevance (as a tool) to my current work or sideprojects. My question was strictly about calling it a lisp variant.
i think you're right that the "essence" is not lisp. Mathematica is more abstract than lisp, and it has lisp-like features built on top of it. that said, the analogy can be useful because things like "macros" are completely natural to the language:
In[1]:= FullForm[a + b]
Out[1]= Plus[a, b]
In[2]:= a + b /. Plus -> Times
Out[2]= a b
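The head replacement shown in In[2] (Plus -> Times) is easy to mimic on explicit expression trees, which makes the "macros are natural here" point concrete. This is a toy sketch of the idea, not Mathematica's actual algorithm:

```python
# Expressions as ("Head", arg1, ...): replace one head with another,
# recursing through the whole tree.
def replace_head(expr, old, new):
    if isinstance(expr, tuple):
        head, *args = expr
        head = new if head == old else head
        return (head,) + tuple(replace_head(a, old, new) for a in args)
    return expr

e = ("Plus", "a", "b")                    # FullForm of a + b
print(replace_head(e, "Plus", "Times"))   # ('Times', 'a', 'b')
```

Because programs are just trees of heads and arguments, "code transforming code" needs no special macro machinery at all.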