Mathematica v12.1 (stephenwolfram.com)
129 points by taliesinb on March 19, 2020 | 96 comments



Ongoing related thread: https://news.ycombinator.com/item?id=22625682

Probably too different to merge the submissions.


I have tried Mathematica for a short while and in general was wowed by it.

For a free CAS I would recommend giving Axiom or FriCAS (the more actively developed of the two) a run.

http://fricas.sourceforge.net/ https://en.wikipedia.org/wiki/Axiom_(computer_algebra_system...


Nowadays it's not often that I need to do heavy mathematical work, but I turn to Sage https://www.sagemath.org/ when I do.

Sage has a notebook interface, is built on Python, and incorporates many free software math packages, like Maxima, into one system.


I used Sage heavily when working in string theory a long time ago. It's very powerful. But for symbolic math, if possible, I would still use mathematica.


There is also an excellent open-source clone of the rule-reducing engine called Expreduce[1], and I've been working on Foxtrot[2], which uses Expreduce to create a GUI and data platform on top of it.

[1] https://github.com/corywalker/expreduce [2] https://github.com/wrnrlr/foxtrot


Or Sympy [1], which is very accessible, and embedded in Python, so connects directly to that huge ecosystem. I teach university maths, and use Sympy for every CAS need.

Live demo: https://live.sympy.org/

[1] https://www.sympy.org/en/index.html


How good is Axiom as compared to Mathematica in general? I am familiar with Mathematica, am asking to get zeroth order idea for Axiom before I would try it out myself. Thanks.


I think Mathematica is HUGE. Axiom/FriCAS does not have everything that Mathematica has (especially statistics).

Overall it is not as polished UI-wise, but I think it is pretty good for algebra and symbolic computation.

Let me quote from the FriCAS book/guide:

" FriCAS provides state-of-the-art algebraic machinery to handle your most advanced symbolic problems. For example, FriCAS’s integrator gives you the answer when an answer exists. If one does not, it provides a proof that there is no answer. Integration is just one of a multitude of symbolic operations that FriCAS provides. "

You can find out more by reading the book which is available here:

http://fricas.sourceforge.net/doc/index.html


Thanks!


Or Maxima, which has a long heritage.


There’s a bit of a learning curve for sure, but in the last year Maxima has become an indispensable tool for me. Symbolically writing out multiple equations, putting them together, and solving for a variable buried deep inside is dramatically more repeatable and less error-prone. Pretty much anything I’m working on now that involves algebra or calculus gets a Maxima file that starts with first principles and derives the final equation.


I'm a physicist and I've used mathematica sparingly off and on for several years, mostly manipulating symbolic formulas.

I still do not understand what I'm doing half the time. Mostly just googling and stackoverflow answers get me by. For example, I don't understand why I can't use subscripts as symbols?

Any recommendations on trying to 'get' mathematica?


As someone who started using Mathematica only very recently, I found The Wolfram Language: Fast Introduction for Programmers [1] very helpful. It's the quickest way to get started with the Wolfram language if you've seen a bit of functional programming before, e.g., Python's map(F, x) is equivalent to Wolfram's Map[F, x], reduce(F, x) equivalent to Fold[F, x], etc. There's also An Elementary Introduction to the Wolfram Language by Stephen Wolfram [2,3] which is available freely as Mathematica Notebooks, although I'm yet to read it fully.

[1]: https://www.wolfram.com/language/fast-introduction-for-progr...

[2]: https://www.wolfram.com/language/elementary-introduction/2nd...

[3]: https://www.wolfram.com/language/elementary-introduction/nbs...
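
To give a flavour of the correspondence mentioned above, here is a minimal sketch of the Python-to-Wolfram mapping (outputs shown in comments, from memory):

  Map[f, {1, 2, 3}]            (* -> {f[1], f[2], f[3]}, like Python's map(f, xs) *)
  Fold[Plus, {1, 2, 3, 4}]     (* -> 10, like functools.reduce(operator.add, xs) *)
  Map[#^2 &, Range[5]]         (* -> {1, 4, 9, 16, 25}; #^2 & is an anonymous function *)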


My advice is to find a good book that teaches the fundamentals. I'm currently reading "Power Programming with Mathematica: The Kernel" by David B. Wagner. It was published in 1996 and covers then-new version 3, but boy is it good (and still very relevant!).

A scanned PDF was made available for free with the permission of the publisher: https://mathematica.stackexchange.com/questions/16485/are-yo...


I think the reason Mathematica hasn't "caught on" is that it doesn't look like fortran/matlab type programming that scientists/engineers are more traditionally exposed to. I find that if I'm trying to write an explicit loop (as opposed to a map or vector operation) in Mathematica I am most likely doing something wrong.

You'll drive yourself crazy trying to understand subscripts as a beginner; I would advise against getting too fancy with them unless you are fairly advanced. They work reasonably transparently in most cases, but there are a handful of situations where they are really frustrating (e.g. in With/Module/Block constructs). You can actually use them as symbols if you load the Notation package and Symbolize them, but then you lose the ability to "do math" to the subscript (e.g. Table[] won't generate the symbolized variables).
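
To make the subscript issue concrete, here is a minimal sketch (behaviour from memory, so treat it as illustrative):

  Head[x]                 (* -> Symbol *)
  Head[Subscript[x, 1]]   (* -> Subscript: a composite expression, not a Symbol *)
  Module[{Subscript[x, 1] = 2}, Subscript[x, 1]]
  (* fails with an error: Module's local variables must be Symbols *)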


I've found the "Evaluation of Expressions" doc page to be super useful: https://reference.wolfram.com/language/tutorial/EvaluationOf...

Read through it once to get the gist, then go back and read it again. The second time through, _play_ with each example. Try them out, see how they behave, attempt to apply other things you've learned to each. In no time at all, you'll have the basics down and then will be able to make heads or tails of the other doc pages much more quickly.


That's what working in CS feels like all the time! /sarcasm

Jokes aside, you're probably experiencing the part of learning a programming language where you can think what you want to do but don't understand the language enough to express it correctly. Usually the solution to this is more practice in said language, which is where pet projects come in handy.

Maybe you can try implementing something that already exists from scratch? That way you can always look for help online while implementing it, if you need it, but you get to exercise the language a bit.

This way it takes effort and time, for sure, but it'll help you get a good grip.


Mathematica is worth learning and using since it's so far ahead of the open-source alternatives (e.g. Python/Jupyter and associated libraries) in usability/interactivity/rapid development that it comes across like future-tech.

Unfortunately it's also the single tool most hampered by its licensing and silo-like ecosystem.


I've tried it for modeling tasks, but I struggled with the sort of basic data manipulation that can be done within pandas/data.table. I can quite comfortably work with 100-million-line CSVs using data.table on a standard laptop, but Mathematica wasn't even able to ingest the file. I don't disagree that it's technically very impressive, but there's no point in having these amazing features if it stumbles on such basic tasks.


I've used Mathematica since about the year 2000, and I think Wolfram "missed the boat" of AI, big data, and machine learning.

They were in the unique position of having one of the best symbolic differentiation engines and one of the best numeric engines and a Lisp-like REPL that allows one to write terse, elegant code.

What they were always missing was efficient bulk data structures.

In recent versions they've added a handful of "special cases" where some types of data are stored as a plain data array like in C-derived languages, but this is hit-and-miss.
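
The packed-array special case is easy to see with the Developer` utilities (a small sketch; results from memory):

  a = Range[1.0, 1000000.];
  Developer`PackedArrayQ[a]   (* -> True: a flat C-style buffer of machine reals *)
  b = Append[a, "oops"];      (* mixing in a non-numeric element... *)
  Developer`PackedArrayQ[b]   (* -> False: silently unpacked into a list of boxed expressions *)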

Similarly, they've dabbled with GPU acceleration and parallelism, but it's half-baked. It feels like a proof of concept, not something you'd ever actually use.

Julia and the like will slowly but surely eat their lunch.


The neural network stuff is very good and relies on GPU acceleration. It follows the "computation graph" paradigm -- you have to define the whole architecture up front -- but it's actually quite pleasant to use.


Maxima is also an amazing tool for symbolic maths, and is free software. I especially recommend the wxMaxima interface which is close in spirit to jupyter notebook


I'm taking this opportunity to post a video I made some time ago of my current work-in-progress, a new user interface to Maxima. It's more like a regular commandline compared to wxMaxima, which may or may not be what users want.

https://peertube.mastodon.host/videos/watch/df751bd5-5a26-44...


I'm an R and sometimes Julia user, although not Python. Can you offer some examples of how Mathematica surpasses open source alternatives in those areas you've mentioned?


It's hard to convey what I mean exactly by listing features; it's best if you use it yourself. The next best thing is to watch an expert use it.

Wolfram has released good videos to that end, here's the latest:

https://www.twitch.tv/videos/569938853

I find R to also be a lot better than Python and closer to Mathematica (especially if you combine it with RStudio and Shiny) but still not quite as good overall on the interactivity/environment end.


None of those can beat it in symbolic computation. You can write entire papers by simply asking yourself the question: I wonder if this has an analytical integral? If you are lucky, Mathematica spits out a solution in terms of a special function.
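
A trivial sketch of that workflow (results from memory):

  Integrate[Sin[x]/x, x]                          (* -> SinIntegral[x], a special function *)
  Integrate[Exp[-x^2] Cos[x], {x, 0, Infinity}]   (* -> Sqrt[Pi]/(2 E^(1/4)) *)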


Why hasn't Wolfram automatically published all those papers? Or sell a paper-generating plugin?


Because the most important part of the paper is formulating the question it is answering, and stating why asking that question is interesting and useful. That is not something anything short of AGI can do.


How much symbolic computation can you do with R?


There are several functions in base R for differentiation, integration, solving systems of equations, etc. E.g. `solve`, `stats::D`, `stats::deriv`, `stats::integrate`, `stats::numericDeriv`.

R package Deriv[1] for symbolic differentiation; it allows the user to supply custom rules for differentiation.

R package numDeriv[2] for calculating numerical approximations to derivatives.

R package gmp[3] and Rmpfr[4] provide multiple precision arithmetic and floating point operations. They also include some special functions, e.g. Rmpfr::integrateR for numerical integration.

R package mpc[5], available at R-Forge, provides multiple precision arithmetic for complex numbers.

R package rSymPy[6] provides an interface to ‘SymPy’ library in python via rJava.

R package Ryacas[7] provides an interface to the ‘Yacas’ computer algebra system. It is easier to install compared to `rSymPy`.

R package symengine[8] is an R interface to the SymEngine C++ library for symbolic computation.

[1] https://cran.r-project.org/web/packages/Deriv/index.html

[2] https://cran.r-project.org/web/packages/numDeriv/index.html

[3] https://cran.r-project.org/web/packages/gmp/index.html

[4] https://cran.r-project.org/web/packages/Rmpfr/index.html

[5] http://mpc.r-forge.r-project.org/

[6] https://cran.r-project.org/web/packages/rSymPy/index.html

[7] https://cran.r-project.org/web/packages/Ryacas/index.html

[8] https://github.com/symengine/symengine.R


I use R much more often than Mathematica and think it is great for many reasons, but there are places where Mathematica is on another level. My mathematical maturity isn't high enough to really get how it's done or describe it well, but Mathematica has a way of being shockingly consistent across concepts and has pretty thorough documentation that can even help you learn the topics. R is very inconsistent even in the internal library, and documentation quality runs from the best around to worse than no documentation.

There are also little nifty things, like for image processing you can have a hard-coded image show up in your code (I like plain text better but it's cool and future-techy). Distributions (as in normal, binomial, Poisson, etc.) are a type, and PDFs and CDFs can be obtained from them consistently, rather than having to remember the different parameters of dnorm, dbinom, etc.
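
For instance (a rough sketch from memory):

  dist = NormalDistribution[mu, sigma];
  PDF[dist, x]    (* -> E^(-(x - mu)^2/(2 sigma^2))/(Sqrt[2 Pi] sigma) *)
  CDF[dist, x]    (* -> Erfc[(mu - x)/(Sqrt[2] sigma)]/2 *)
  Mean[dist]      (* -> mu *)
  RandomVariate[PoissonDistribution[3], 5]   (* the same API works for any distribution *)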

I would love a real Mathematica expert to give us more. That's the real drawback of the closed ecosystem there is so much less information about it out there, fewer code samples, etc.


Great information, thanks. I do totally agree about documentation for R libraries. It can be very hit and miss.


When I had the chance to try it, I didn't find it that easy to use due to the interface, which felt clunky: the way the command line works: no feature to repeat the previous command with the up arrow, having to edit the previous existing one, requiring more mouse usage, weird forms of cursor placement, weird default enter key behavior.

In ipython, matlab and octave it's much easier to repeat and modify last commands, which is something you seem to need all the time when experimenting with math.

What was I missing, usability/interactivity/rapid development wise?


You are just not used to it.

> the way the command line works: no feature to repeat the previous command with the up arrow, having to edit the previous existing one, requiring more mouse usage, weird forms of cursor placement, weird default enter key behavior.

It's not a command line. It's an interactive notebook. It's an entirely different experience. Repeating and modifying last commands can still be done with arrow keys, then Shift-Enter. Also, when you are doing math, you spend way more time thinking than typing and manipulating; the time needed to move your hands to the mouse is minuscule by comparison.


What command line are you talking about? If you want a command-line IPython-style REPL, enter MathKernel, which absolutely does support arrow keys to go backward and forward in history (and works over SSH without X). If you want an IPython (later Jupyter) notebook-style interface, enter Mathematica/Wolfram notebooks (guess where the IPython notebook got its idea from). Sounds like you just didn't bother to learn a bit about it before making up your mind.


You're missing cmd+L (copy last command to cursor) and cmd+shift+L (last result).

Although, as others have said, many prefer the style of editing and re-running, rather than leaving the history above.


Mathematica Notebooks have some REPL-like aspects, but they really should be used like a script editor that shows the script output inline. You can jump between script segments and execute them out of order, which is useful in some cases. But notebooks are at their best when you have some discipline when editing, so that they stay in a state where you can open them, select Evaluate Notebook from the menu, and everything works on the first try.


  ds = CreateDataStructure["LinkedList"]
Using string keys to access types is kind of yucky, to be honest :( Couldn't there be a better way to represent this than having to look up the documentation (https://reference.wolfram.com/language/ref/$DataStructures.h...) to see what would work?


For interpreted languages, it doesn't matter semantically whether you use strings or not. It's always a run-time value that needs to get resolved when the interpreter gets there.


In these cases the notebook interface will normally show you all of the choices via autocomplete.


Even for string?

Why not define a LinkedList constant?


Yes, in WL strings are commonly used in a manner similar to enums in other languages.

The thing to keep in mind is that WL is based on symbolic replacement. You normally wouldn't define that kind of function as taking in a parameter (i.e., CreateDataStructure[x_] := ...), rather you'd define it for each case separately (i.e., CreateDataStructure["LinkedList"] := ...).
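
A hypothetical sketch of that style (makeStructure is made up for illustration, not the real CreateDataStructure implementation):

  makeStructure["LinkedList"] := <|"Type" -> "LinkedList", "Data" -> {}|>
  makeStructure["Stack"]      := <|"Type" -> "Stack", "Data" -> {}|>
  makeStructure[other_String] := Failure["UnknownType", <|"Input" -> other|>]

  makeStructure["LinkedList"]   (* the literal-string definition matches before the generic one *)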


Well.. if constants were a thing you could probably also pattern match them.


It's a constant string :P


Does anyone know of a good, strongly-typed computer algebra system similar to Mathematica?

By strongly typed, I mean that Mathematica is weakly typed in the sense that it simply assumes that all expressions are Complex numbers. At most, it can restrict itself to some subset such as the Reals or Integers, but that's it.

It can't, for example, perform general simplifications over non-commutative types such as matrix algebras, the quaternions, or any geometric algebra. Most built-in functions operate only on Complex numbers, or expressions over the Complex numbers, etc...

I'm looking for something I can use to do symbolic expression manipulation for physics equations in terms of geometric algebra, but as far as I know there's nothing out there with the capability.
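
To illustrate the limitation (a small sketch; behaviour from memory):

  Simplify[a b - b a]         (* -> 0: plain symbols are assumed to be commuting scalars *)
  Simplify[a ** b - b ** a]   (* stays as written: NonCommutativeMultiply carries no algebra or simplification rules *)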


FriCAS and Axiom


Does that mean I can solve a system of equations in vector form, e.g. solve this for X:

A·X=d1

B·X=d2

(A⨯B)·X=(A⨯B)·C


Mathematica is an amazing tool, but it has one huge downside: reproducibility.

It is a closed monolith, and - especially when you start to use advanced features - you derive results from it that can't be independently verified, because of its closed-source nature.


A good piece to read about this was in the American Math Society Notices: https://www.ams.org/notices/200710/tx071001279p.pdf


Thanks for the link! The succinct article, "Open Source Mathematical Software", illustrates the issue with a quote from a Mathematica tutorial:

> the internals of Mathematica are quite complicated, and even given a basic description of the algorithm used for a particular purpose, it is usually extremely difficult to reach a reliable conclusion about how the detailed implementation of this algorithm will actually behave in particular circumstances.


It continues to be my hope that Wolfram will open-source Mathematica someday. It seems the most enduring thing Wolfram could do to ensure his legacy.


Do note that's in the context of performance optimization. Even open source complicated code is still complicated code.


I agree. How an algorithm "actually behaves" depends on the implementation details of the language interpreter or compiler; open source or not, there are so many complex transformations leading to running code that it might as well be a black box.

From the article calling for open-source mathematical software (my emphasis):

> ..we need a symbolic standard to make computer manipulations easier to document and verify.

> ..perhaps we should not be dependent on commercial software here. An open source project could..find better answers to the obvious problems such as availability, bugs, backward compatibility, platform independence, standard libraries, etc.

> Increasingly, proprietary software and the algorithms used are an essential part of mathematical proofs.

> ..with this situation two of the most basic rules of conduct in mathematics are violated: information is passed on free of charge and everything is laid open for checking.

If Mathematica were to be open-sourced one day, I suppose that would cover most of this wish list, with improved availability/reproducibility and verifiability. Tough to imagine without significant funding, collaboration, and communal agreement.


Very true. But like mathematical arguments in journals, I'm glad open source complicated code is there to be examined.


Has this ever happened to you? There can't be many areas of mathematics in which it could.


What do you mean? There are many areas of math where providing a proof matters.


It's just a thing that is low-hanging fruit for people to complain about. I've never seen a coherent argument outside of "but it's not open source".


If software is only used to create data, that is okay - like editing text files in Microsoft Windows. But computational software such as Wolfram's being closed source bothers me a lot. There is no way to verify that the science you do is correct.


Quite often it gives you a result that you can then prove directly, or check with other tools. The benefit of MMA is that it has a lot of tools and a good interface, good documentation, and a large community.

In practice, pretty much no one doing science has the expertise or time to completely verify the science they are doing - they are building on centuries of knowledge across many disciplines, and for the most part the community verifies each part as they build knowledge.

And certainly opensource does not allow the vast majority of people "to verify the science you do is correct." They'd have to check the code, the compiler, the hardware, ensure no cosmic rays flipped bits during computation, and so on.

So I'd not worry too much about the closed source vs open source nature of it. It's a solid tool that enables lots of research.


The “cosmic rays” argument, to me, is inane. It simply doesn’t practically apply and is certainly not an argument against the benefits of open source code. You’re castigating the whole practice of code review, computer-aided proofs, automated theorem proving, etc.


> The "cosmic rays" argument, to me, is inane.

Have you ever looked at the rates, or do you just dismiss it without looking? Note that the current rate is higher than older references suggest, since feature sizes have shrunk and lower-energy events can change bits on newer hardware.

Scientific computation, especially at the level of most researchers, is affected by cosmic ray bitflips, without question.

Since the OP was complaining about not being able to check everything ad absurdum, this effect is certainly on the table. It's more likely to affect research than the difference between closed and open source if a researcher is ignorant of it.

It's also why good researchers, who know this is a real effect, try to run a computation with multiple methods at different times, until they feel a consensus on the calculations is robust enough.

If you've never done it, write a program to watch memory for bit flips, and be amazed.

Here's an intro - do a back of the envelope calculation and see if you still think these events are rare enough that they don't affect common scientific work.

https://en.wikipedia.org/wiki/Soft_error#Cosmic_rays_creatin...


>You’re castigating the whole practice of code review, computer-aided proofs, automated theorem proving, etc.

No I'm not. Those are but one avenue of reducing the probability of error during computation. All of those only ensure that the code part is solid - there is an entire other world on the physical side that needs incredible engineering: noise reduction, error correction, defect mitigation, thermal issues, quantum effects, physical data decay, memory leakage, and so on.

I think by focusing only on aspects for code, you miss a large part of ensuring modern computing is accurate.


And how many people doing "science" do code review, computer-aided proofs, or automated theorem proving to verify their code is correct? Very, very, very few.


This. In theory it's open, but in reality, people mostly just want to get the work done.

Also, if something is that important, people can and do perform the same calculations using different packages or different algorithms.

Write two algorithms in MMA, or one in MMA and one in something else, and every so often spot check a few cases by hand.
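
E.g. a typical sanity check might look like this (a sketch):

  symbolic = Integrate[Sin[x]^2 Exp[-x], {x, 0, Infinity}]   (* -> 2/5 *)
  numeric  = NIntegrate[Sin[x]^2 Exp[-x], {x, 0, Infinity}]
  Abs[N[symbolic] - numeric] < 10^-6                          (* -> True *)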


Likely you can see certain sections of the code if you sign an NDA. I have done this before for the CAD software I use.


Kudos to the team for the great release.

Two things I miss when working with Mathematica: refactor-rename for variables and usable object-oriented programming support.

Refactor-rename is available via an Eclipse-based IDE; however, math typesetting is not available there (it makes a difference for large equations/expressions).

There are a lot of community-developed approaches to OOP. I tried a number of these, sticking with a particular approach, but it also leaves a lot to be desired. OOP is useful in that it ties together data and the functions which act on that data. Inheritance/composition are useful if you compute properties of similar objects.


Mathematica is mostly used for relatively small, notebook-style applications. Both features, while probably useful for some people, don't seem to be necessary for the more popular use cases.


This is one of the few pieces of non-free software I'm happy to pay for.

I've been doing some pen plotter work for fun; I wonder if I can use Mathematica with HatchFilling for some of my plotting now.


It also appears to be faster than v12.0; my WolframBench score with 12.1 was significantly higher than 12.0 (25-50% higher).

The new IPFS integration is exciting; might see if I could write a static site generator that automatically publishes to IPFS.


They put the biggest improvement up front, lol: things now scale properly if you have a 4K monitor. People (myself included) have been squinting at very tiny fonts for years, and now they don't have to anymore!


Does anyone here use Mathematica for anything besides physics or pure mathematics?

I mean, is it worth using Mathematica's machine learning, or is TensorFlow better?

Is it better to use dedicated integer optimization solvers, or Mathematica?

In which ways are you using Mathematica and really leveraging it?

Sorry for my question, but I think and suspect I am missing something bigger.

I will be glad if someone illuminates me. And please feel free to PM me if your project is discreet.


> Does anyone here use Mathematica for anything besides physics or pure mathematics?

My adviser (a physicist) uses Mathematica for all non-physics computations as well. It's a clunky language for general purpose computation and doesn't play well with external programs and libraries. The user experience on Linux can also be quite haphazard with frequent crashes and lack of good HiDPI support (I haven't used version 12.1). It's also hard to run headless programs written in Wolfram language. But if you're okay doing everything inside the Mathematica GUI and don't care about the fact that it's a walled ecosystem, don't have a preferred text editor, and don't particularly care for Unix's one-thing-well philosophy, Mathematica might work for you.


Depends how wide your definition of 'physics' is. We have a couple of environmental scientists at work that use it to model water pollution for example. The big win is that they can basically just type in their differential equations exactly like they would write them on paper and have the answers simply pop out.
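
A toy version of what that looks like (a hypothetical decay equation, not their actual model):

  DSolve[{c'[t] == -k c[t], c[0] == c0}, c[t], t]
  (* -> {{c[t] -> c0 E^(-k t)}} *)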


The language began as something of a gem, and each release seems to be muddying it up. Another commenter already mentioned the string argument used to make a data structure; it also looks like the built-in methods on these objects are strings.

The metadata and annotation facilities seem unclear. Sometimes they change the appearance and behavior of the object, sometimes they don’t.

Some functions now are curried by default. The choice probably makes symbolic programming and pattern matching trickier since it breeds a variety of representations of the same concept. (Pattern matching and rewriting strongly favor canonicalization.)
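
For example, the newer operator ("curried") forms coexist with the classic forms (a sketch):

  Select[{1, 2, 3, 4}, EvenQ]   (* -> {2, 4} *)
  Select[EvenQ][{1, 2, 3, 4}]   (* -> {2, 4}: same operation, different expression shape *)
  MatchQ[Select[EvenQ], Select[_, EvenQ]]   (* -> False: the two shapes don't unify under pattern matching *)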

All in all, it's beginning to look like the core Wolfram Language is trembling under its own complexity. I can't imagine coping with buggy code at all in this framework.


Now if only it wasn't $350 for home use. Like any non-privileged student can afford that.


It is free on the Raspberry Pi computer.


It is, and that's how I use it, but that computer also falls into the "horribly underpowered" category of devices that can run Mathematica.


$160 or $80/yr for students


"per year" is the drug dealer of the software world. Adobe might be the biggest pusher, but Mathematica isn't far behind.


Seems to finally support 4K HiDPI on Windows.


Is there any reason not to call Mathematica AI at this point? I swear some of the things it does look more like black magic than what we call AI nowadays...


I'd call it more of an "expert system" - it has a HUGE amount of rules and methods built-in, but at the end of the day it's all deterministic.


> but at the end of the day it's all deterministic.

That's not what's preventing it from being AI.


Many people on this and similar threads rightly question whether results received with the help of a closed source software are reproducible. I think it's a fair question, but these discussions often miss a couple of important points.

First, there is a question of theoretical vs. practical reproducibility: true, in _theory_, an open-source codebase with many millions of lines of highly complex code can be checked for accuracy by anyone. In _practice_, only very few insiders will have both the technical understanding and the time to verify the correctness of this or that algorithm, and everyone else would need to rely on their expertise. That situation is no different from using a proprietary system whose authors we trust.

Second, while it might be shocking to some, most science, outside of maybe math, is not _practically_ reproducible. Most articles are behind paywalls and don't have enough data to replicate their results (even economists, more often than not, won't include their raw data or algorithms or both). Also, no one who is trying to build a scientific career would try to replicate someone else's results, especially if it requires some costly equipment, reagents, etc. - the rewards in the scientific community are for novel results. The replication crisis[1] describes this pretty well.

Most of the software that runs the LHC at CERN is probably not inspectable by a regular Joe the physicist, yet somehow the physics community trusts its results.

In summary, while open-source software of Mathematica's quality would be awesome in theory, I highly doubt it would be practical for a long time.

[1] https://en.wikipedia.org/wiki/Replication_crisis


Reproducible has a much simpler meaning than the one you describe: it's feasible for any scientist to recreate the result perpetually. While inspection of the methods might help one understand the result - and I think that itself holds a great deal of value, especially to mathematicians - just the simple ability to compute the answers yourself, without binding yourself to a contract or paying thousands of dollars in license fees, is a valuable aspect of science. Especially so when the calculations are at the level of pure algebraic manipulation.

Your (and Wolfram’s!) argument about “not needing to see the insides because really only 10 people in the world understand it anyway” should really be an argument for opening it up. If 10 people could understand it, and we must intersect that group with folks who can access the source, we are left in quite a dismal state of affairs.


While I don't disagree with your definition of reproducibility, I want to point out that very few papers would satisfy that criterion, regardless of their use of closed- vs. open-source software.

Here is a recent example: the Imperial College COVID-19 response team published an article[1] in which they modeled the effects of different non-pharmaceutical interventions, such as suppression and mitigation, on the number of infections, deaths, etc. This is a very interesting result, but it's impossible to replicate in practice without contacting the authors, as the published methodology is not enough to reproduce it.

Meanwhile, someone else[2] posted their own model that is very well documented and fully reproducible by anyone with a $230/year personal license.

While theoretically [1] is high science and [2] is not, in my own opinion [2] is better than [1]. I would love more science to be done and discussed that way. Ideally using open-source software, but in practice, using the Wolfram Language in this case is already good enough in my opinion.

PS. I'm not affiliated with Wolfram Research in any way.

[1] https://www.imperial.ac.uk/media/imperial-college/medicine/s...

[2] https://community.wolfram.com/groups/-/m/t/1901002


Making an exact copy of an experiment is the lowest level of reproducibility. In science, reproducibility means reproducing the results with different components (people, tools, methodology), showing robust independence from potential confounders.


Regarding your second point - I don't have a horse in that race, other than tremendously enjoying my personal license that I pay for out of my pocket; but I found their arguments[1] quite reasonable. I think there was an HN discussion about this some time ago.

[1] https://blog.wolfram.com/2019/04/02/why-wolfram-tech-isnt-op...


I find the arguments confused and lacking. Many of those arguments conflate “open source” with “community driven”. Rarely did they make a compelling case as to how availability of the source code would hurt the customers.

And many, many amazing open source teams have demonstrated that even including community driven design in some cases does not hamper the quality of the product. Look at Apple and Swift, or Rust, as examples.

I read their arguments as “we do not know how to engage with a community effort because it’s not in our company DNA” under the facade of supposedly legitimate reasons.


Swift is a great example, but don't forget that it is essentially funded by Apple. It would look very different if it were a true community effort.

Rust I'm not very familiar with: how much of their funding is dependent on Mozilla, at least for the core developer(s)?

Also, a language + a standard library has a (much!) smaller surface compared to Wolfram's vision.


Mozilla funds a few things. Some stuff is funded by other companies. Mozilla pays the most folks to work full time, but the amount is tiny compared to the overall set of teams, let alone contributors.


Thanks Steve for the answer! HN is an amazing place where a random comment asking a question about Rust can get an answer from a core developer. I hope we will always keep it that way.


They have funding; that’s great! Exactly what I’m saying is that these projects manage to be community run (to some extent) while still remaining successful.

If Wolfram’s reason to not publish the source code is that they believe they would no longer have a business model, they should list that as the One Reason, not 12 gaslighting reasons.


Well, they clearly think their model works for them. If you think otherwise, it's on you to convince them that they should change their model, or start your own effort if you think it will benefit humanity. I don't think they necessarily need to "defend" their business choices.


You're right, they don't need to defend it. But they shouldn't put out reasons that don't make sense. They ought to just put nothing out at all and operate as normal.


I don't care what works for them. I care what works for society.



