Show HN: Marimo – an open-source reactive notebook for Python (github.com/marimo-team)
448 points by akshayka 10 months ago | 106 comments
Hi HN! We’re excited to share marimo, an open-source reactive notebook for Python [1]. marimo aims to solve well-known problems with traditional notebooks [2]: marimo notebooks are reproducible (no hidden state), git-friendly (stored as Python files), executable as Python scripts, and deployable as web apps.

GitHub repo: https://github.com/marimo-team/marimo

In marimo, a notebook’s code, outputs, and program state are always consistent. Run a cell and marimo reacts by automatically running the cells that reference its declared variables. Delete a cell and marimo scrubs its variables from program memory, eliminating hidden state. Our reactive runtime is based on static analysis, so it’s performant. If you’re worried about accidentally triggering expensive computations, you can disable specific cells from auto-running.
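
A minimal sketch of the model (cell boundaries shown as comments; the names are just for illustration):

    # cell 1
    x = 1

    # cell 2: references x, so editing or re-running cell 1
    # automatically re-runs this cell as well
    y = x + 1
    y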

marimo comes with UI elements like sliders, a dataframe transformer, and interactive plots that are automatically synchronized with Python [3]. Interact with an element and the cells that use it are automatically re-run with its latest value. Reactivity makes these UI elements more useful and ergonomic than Jupyter’s ipywidgets.
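
For example, with the slider element (see the API docs in [3]):

    import marimo as mo

    # cell 1: create and display a slider
    slider = mo.ui.slider(start=1, stop=10)
    slider

    # cell 2: re-runs automatically whenever the slider is moved
    slider.value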

Every marimo notebook can be run as a script from the command line, with cells executed in a topologically sorted order, or served as an interactive web app, using the marimo CLI.
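
Concretely, with the current CLI:

    marimo edit notebook.py   # edit interactively in the browser
    python notebook.py        # run as a plain Python script
    marimo run notebook.py    # serve as a read-only web app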

We’re a team of just two developers. We chose to develop marimo because we believe that the Python community deserves a better programming environment to do research and communicate it; experiment with code and share it; and learn computational science and teach it. We’ve seen lots of research start in Jupyter notebooks (much of my own has), only to fail to reproduce; lots of promising prototypes built that were never made real; and lots of tutorials written that failed to engage students.

marimo has been developed with the close input of scientists and engineers, and with inspiration from many tools, including Pluto.jl and streamlit. We open-sourced it recently because we feel it’s ready for broader use. Please try it out (pip install marimo && marimo tutorial intro). We’d appreciate your feedback!

[1] https://github.com/marimo-team/marimo

[2] https://docs.marimo.io/faq.html#faq-problems

[3] https://docs.marimo.io/api/inputs/index.html




This is amazing. I'm a big user of both Jupyter notebooks and Observable notebooks (https://observablehq.com/) and the thing I miss most from Observable when I'm using Jupyter is the lack of cell reactivity.

You've solved that incredibly well!

I also really like that the Marimo file format is just Python. Here's an example saved file from playing around with the intro: https://gist.github.com/simonw/e6e6e4b45d1bed9fc1482412743b8...

Nice that it's Apache 2 licensed too.

Wow, I just found the GitHub Copilot feature too!


Myles here (other core contributor) -

We are thrilled to see you have such a strong positive reaction. It means a lot coming from you - I initially learned web development using Django and landed my first contracting gig with Django.

I drifted away from writing Python and towards TypeScript, but marimo has brought me back to writing Python.


Congrats Myles! Super excited that you all have finally open-sourced it! I'm gonna start moving my Jupyter notebooks over to this ASAP. I love that it's all just .py files.

Have you had anyone use Marimo to write production web app code? I've been doing a lot of AI experiments for the new venture, and it's been a pain to have to switch back and forth between .ipynb files and regular .py files.


People have used marimo for production web apps. It won't get you as far as writing HTML/JS directly, but it's great for internal tools, external showcases, tutorials, interactive blogs, etc.

Our friends at SLAC use marimo for internal exploratory experiments and for publishing interactive apps. Here is an example: https://marimo.io/@public/signal-decomposition


let's go!! so excited to see this get deserved attention


Hi Simon, slightly unrelated question.

I'm a big fan of your work, and as I've learnt a lot from reading your blog posts over the years, I'd be curious to know a bit more about typical use cases for wanting to work with Observable notebooks.

The only reason why I'm using a JavaScript notebook tool (Starboard.gg) is to be able to access cool visualisation packages like Anychart or Highcharts.

Given the hype around Observable notebooks, I feel that I'm missing something.

What makes you decide to start something in an Observable notebook rather than in Jupyter?

Thanks!


I primarily use Observable to build interactive tools, as opposed to Jupyter which I use more for exploratory development and analysis.

Here are some of my Observable notebooks which illustrate the kind of things I use it for:

https://observablehq.com/@simonw/search-for-faucets-with-cli...

https://observablehq.com/@simonw/openai-clip-in-a-browser

Those are both from https://simonwillison.net/2023/Oct/23/embeddings/

https://observablehq.com/@simonw/gpt4all-models provides a readable version of a JSON file on GitHub

https://observablehq.com/@simonw/blog-to-newsletter is the tool I used to assemble my newsletter

A killer feature of Observable notebooks for me is that they provide the shortest possible route from having an idea to having a public URL with a tool that I can bookmark and use later.


Congrats OP on launching this; looking forward to diving in further! It's great to see people experimenting in the Reactive + Live Programming space, as, like you mention, I think it can bring a lot of improvements to how we build software. Did you run into any limitations adopting this model?

> A killer feature of Observable notebooks for me is that they provide the shortest possible route from having an idea to having a public URL with a tool that I can bookmark and use later

Thanks for sharing simon! I'm working on an Open Source Notion + Observable combination (https://www.typecell.org), where documents seamlessly mix with code, and can mix with an AI layer (e.g.: https://twitter.com/YousefED/status/1710210240929538447)

The code you write is pure TypeScript (instead of something custom like ObservableJS), which opens more paths to interoperability (aside from having a public URL). For example, I'm now working to make the code instantly exportable so you can mix it directly into existing codebases (or deploy on your own hosting / Vercel / whatever you prefer).


Thanks for getting back to me, I'll go through the examples you shared.


That's an interesting project. As someone who relies heavily on collaboration with people using Jupyter notebooks, the most annoying points about reproducing their work are the environment and the hidden state of the notebooks.

This does directly address the second problem, though it does so by sacrificing flexibility: I might need to change a cell just to test something new (without affecting the other cells). But that's a trade-off you accept if you focus on reproducibility.

I know that requirements.txt is the standard solution to the other problem, but generating and using it is annoying. `pip freeze` lists all installed packages in a bloated way (there are better approaches), and I have always hoped to find a notebook system that integrates this information natively, embedding it in the notebook in a form I can share with other people. Unfortunately, I can't see support for something like this in any of the available solutions (at least to my knowledge).


Yes, the second half of reproducibility is for sure packages. A solution for reproducible environments is on our roadmap (https://marimo-team.notion.site/The-marimo-roadmap-e5460b9f2...), but we haven't quite figured it out yet.

It's a bit challenging because Python has so many different solutions for package management. If you have any ideas we'd love to hear them.


People always complain about pip and Python packaging, but it's never been an issue for me. I create a requirements.base.txt that has the versions of things I want installed. I then:

    pip freeze -r requirements.base.txt > requirements.txt
Install is then simply:

    pip install -r requirements.txt
Updating / installing something new is a matter of adding to the base file and then refreezing.


There are several problems with this approach. Notably, you don't get platform-specific information, and you don't get information about how these packages were installed (conda, mamba, etc.).

And it does not account for dependency version conflicts, which make life very hard.


I don’t understand the platform thing, is that something to do with running on Windows? Why wouldn’t you just pip install? Why bring conda etc into the mix?

If you have conflicts then you have to reconcile those at point of initial install - pip deals with that for you. I’ve never had a situation in 15 years of Python packages where there wasn’t a working combination of versions.

These are genuine questions btw. I see these common complaints and wonder how I’ve not ever had issues with it.


I will try to summarize the complaints (mine, at least) in a few simple points:

1. pip freeze will miss packages not installed by pip (e.g., by conda).

2. It includes all installed packages, even ones not used in the project.

3. It just dumps all packages with their dependencies and sub-dependencies. Even without conflicts, if you change a package it is very hard to keep track of which dependencies and sub-dependencies need to be removed. At some point, your file will be a hot mess.

4. If you install a platform-specific package version, that information will not be tracked.


1/4: Ordinary `pip install` works for binary/platform-specific wheels (e.g., numpy) and even for non-Python utilities (e.g., shellcheck-py).

2/3: You only need to track the direct dependencies _manually_; for reproducible deployments you need fixed versions for all dependencies, and the latter is easy to generate _automatically_ (`pip freeze`, pip-tools, pipenv/poetry/etc.).


Ok. I think that’s all handled by my workflow, but it does involve taking responsibility for requirements files.

If I want to install something, I pip install it and then add the explicit version to the base file. I can then freeze the current state to requirements.txt to lock in all the sub-dependencies.

It’s a bit manual (though you only need a couple of cli commands) but it’s simple and robust.


This is my workflow too. And it works fine. I think the disconnect here is that I grew up fighting dependencies when compiling other programs from source on Linux. I know how painful it can be and I’ve accepted the pain and when I came to python/venv I thought “This isn’t so bad!”

But if someone is coming from data science and not dev-ops, then no matter how much we say "all you have to do is...", the response will be: why do I have to do any of this?


I don't think manual handling of requirements.txt in a collaborative environment is a robust process; it would be a waste of time and resources to handle it like that. And whatever your workflow is, it is obviously not standard, and it does not address the first and fourth points.


Haha. Ok. I think that’s where we’re just going to have to agree to disagree.


Problems 1 and 2 can be solved by using a virtualenv/venv per project.

3 is solved by the workflow of manually adding requirements and not including dependencies. It may not work for everyone. Something like pipreqs might work for many people.
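
(For reference, pipreqs generates a requirements.txt from the imports it finds in your source rather than from the environment; typical usage is:)

    pip install pipreqs
    pipreqs /path/to/project   # writes /path/to/project/requirements.txt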

I do not understand why 4 is such a problem. Can you explain further?


Can you name a package manager (any language) that handles #3 well?

How does it handle the problem?


Yes, there are more problems with Windows.


I follow a similar approach -- top-level dependencies in pyproject.toml and then a pip freeze to get a reproducible set for applications. I know there are edge cases but this has worked really well for me for a decade without much churn in my process (other than migrating from setup.py to setup.cfg to pyproject.toml).

After trying to migrate everything to pipenv and then getting burned, I went back to this and can't imagine I'll use another third-party packaging project (other than nix) for the foreseeable future.


The post you’re responding to said that there are many Python packaging options, not that they don’t work. Pip freeze works reasonably well for a lot of situations but that doesn’t necessarily mean it’s the best option for their notebook tool, especially if they want to attract users who are used to conda.


Poetry handles all of this properly.


I regularly observe it stalling at the dependency-resolution stage upon changing the version requirements for one of the packages (or the Python version requirements).


Just not PyTorch apparently.


The link redirects without indicating which item in the list you're referring to, but I guess it's "Install missing packages from...". If so, I wonder whether you mean supporting something like `!pip install numpy`, as in Jupyter, or something else?

I don't think that is really a solution, and it raises a question: does marimo support running shell commands with `!`, like Jupyter notebooks do?


Oh, sorry for not being more clear. That's not the one. It's "Package management: make notebooks reproducible down to the packages they use": https://marimo-team.notion.site/840c475fd7ca4e3a8c6f20c86fce...

Does that align with what you're talking about?

That page has some scrawled brainstormed notes. But we haven't spent time designing a solution yet.


Thanks. That is precisely what I was talking about in my comment. It would solve the problem to have something like that integrated natively. I understand that between pip, conda, mamba, and all the others it is a hard problem to solve, but at least auto-generating a requirements.txt would be easier. To be honest, though, the hard part is identifying the packages and where they came from, not deciding what to do with that information. Good luck with the development.


The third half is data which only exists on your machine :P

And even if it’s on some shared storage, it may have been generated by another unreproducible notebook or worse, manually.


Nix is the only solution for reproducible environments that I would call rock-solid.

It comes with costs, and the GPU-related stuff is especially tricky, e.g., https://www.canva.dev/blog/engineering/supporting-gpu-accele...


You should try pip-tools.
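
(A sketch of the basic pip-tools flow, assuming pip-compile's defaults: direct dependencies go in requirements.in, and the pinned set is generated from them:)

    pip install pip-tools
    pip-compile   # reads requirements.in, writes a fully pinned requirements.txt
    pip-sync      # makes the environment match requirements.txt exactly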


Wow.. Really great work, finally someone is doing it!

Since I've thought about this for a long time (I've actually even made a very simplified version last year [1]), I want to contribute a few thoughts:

- Cool that you have a VS Code extension, but I was a little disappointed that it opens a full browser view instead of using VS Code's existing, good notebook interface. (I get that you want to show the whole frontend, but I'd love to be able to run the reactive kernel within the full VS Code ecosystem. The included GitHub Copilot is cool, but that's not everything.)

- As other comments said, if you want to go for reproducibility, the part about Package Management is very important. And it's also mostly solved, with Poetry etc...

- If you want to go for easy deployment of notebook code to production, another very cool feature would be extracting (as a script) all the code needed to produce a given cell's output! This should be very easy since you already have the DAG. It actually existed at some point in the VS Code Python extension, but they removed it.

Again, great job

[1] https://github.com/micoloth/vscode-reactive-jupyter


You're probably referring to nbgather (https://github.com/microsoft/gather), which shipped with VSCode for a while.

nbgather used static slicing to get all the code necessary to reconstruct some cell. I actually worked with Andrew Head (original nbgather author) and Shreya Shankar to implement something similar in ipyflow (but with dynamic slicing and a not-as-nice interface): https://github.com/ipyflow/ipyflow?tab=readme-ov-file#state-...

I have no doubt something like this will make its way into marimo's roadmap at some point :)


Very exciting! I took a quick look and I have a couple of questions.

1. Can you describe your interactive widget story? I see that you integrated altair, and there is some custom written react code around it [0] [1]. I'd be interested in porting my table widget to your platform at some point.

2. How much, if any, does this depend on the Jupyter ecosystem?

3. How does this interact with the jupyter ecosystem?

[0] https://github.com/marimo-team/marimo/blob/b52faf3caf9aa73f4... [1] https://github.com/marimo-team/marimo/blob/b52faf3caf9aa73f4...


1. We don't have a public plugin API yet, but we will in the future. Our (internal) plugins are represented as custom elements: Python writes the HTML (e.g., `<marimo-vega ...>`) and the frontend instantiates it. In the meantime, maybe we can help you port your table widget and make it a marimo plugin. You can reach us in our Discord (https://discord.gg/JE7nhX6mD8) or on GitHub.

2. marimo was built from scratch; it doesn't depend on Jupyter or IPython at all.

3. marimo doesn't interact with the Jupyter ecosystem. We have brainstormed the possibility of a compatibility layer that allows Jupyter widgets to be used as marimo plugins, but right now that's just an idea.


The list of dependencies seems very short; apart from tornado, the others don't seem to pull in many transitive deps.

Congrats, this looks very useful and awesome.

  dependencies = [
    # cli
    "click>=8.0,<9",
    # python 3.8 compatibility
    "importlib_resources>=5.10.2; python_version < \"3.9\"",
    # code completion
    "jedi>=0.18.0",
    # compile markdown to html
    "markdown>=3.4,<4",
    # add features to markdown
    "pymdown-extensions>=9.0,<11",
    # syntax highlighting of code in markdown
    "pygments>=2.13,<3",
    # for reading, writing configs
    "tomlkit>= 0.12.0",
    # web server
    "tornado>=6.1,<7",
    # python <=3.9 compatibility
    "typing_extensions>=4.4.0; python_version < \"3.10\"",
    # for cell formatting; if user version is not compatible, no-op
    "black",
  ]


Cool. On a side note, I think the old Jupytext extension is hugely underrated. It lets Jupyter run a .py file (with markdown notes as comments in the file, displayed as rendered notes in the web page).
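
(A sketch of what such a file looks like in Jupytext's percent format:)

    # %% [markdown]
    # These comment lines render as a markdown cell in Jupyter.

    # %%
    import numpy as np
    np.arange(3) ** 2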

Both of these solve the most important problems with IPython: horrible git interaction, and the horrible programming practice of discouraging library files. Jupytext fixes most of the weird non-deterministic behaviour by forcing you to rerun the script every time you load it (rather than using reactive techniques). State is OK for power users, but it's known to be a massive pain for people who are just learning programming, and an issue in large projects or with interaction.

With this new project having reactive updates I think it's definitely going to be great for beginners, or in gnarly projects.

I wonder if it runs on Pyodide (CPython compiled to run in the browser, with matplotlib and scipy bundled).


I'm a big fan of Marimo (and of Akshay and Myles in particular); it's great to finally see a viable competitor to Jupyter as it can only mean good things for the ecosystem of scientific tooling as a whole.


Very interesting project, a breath of fresh air and welcome competition for Jupyter.

I guess it's still very early, but the onboarding for the marimo VS Code extension is not great at the moment: there's no obvious way to start writing a marimo notebook (no "Create: New Marimo Notebook" option like Jupyter's).

I then tried cloning the cookbook repo and got "module not found" errors that are even less friendly than Jupyter's: you have to figure out which cell the error actually comes from just to know which module is missing.


Looks cool!

Have you looked into WASM? Something like a jupyterlite [0] alternative for marimo?

And are there plans to integrate linting and formatting with ruff? [1]

[0] https://jupyterlite.readthedocs.io/en/stable/

[1] https://github.com/astral-sh/ruff (ruff format is almost 100% compatible with black formatting)


We started looking into WASM this week, and did some light exploratory coding toward it. It's on our roadmap: https://marimo-team.notion.site/The-marimo-roadmap-e5460b9f2...

A ruff integration is a great idea. I'll add it to the roadmap.


Looking forward to the WASM integration. Being able to use a plain filesystem such as Nextcloud and run it there would be great. I have been trying to get JupyterLite's WASM into the Nextcloud alternative I have been working on, so I would love to try this.


<2 cents>

I see some package management stuff on the roadmap.

Maybe you could take a look at the cargo cli, like pixi did [0]. IMO it's a nice user experience.

[0] https://prefix.dev/

</2 cents>


Thanks for the suggestion. We'll definitely take a look.


Perfect, thank you!


This is very cool. I think I need to play around with this a bit more to wrap my head around the reactivity element, but the basic shift from ipynb to standard Python would be such a huge workflow improvement for my team. We use Jupyter notebooks when prototyping, and trying to code-review unwieldy Python-in-JSON is miserable. Great to see an alternative that's worked its way around that.


Marimo are wonderful little pets; I used to have some and really liked them. I should get some more. They never failed to start a conversation when guests came over.

https://soltech.com/blogs/blog/how-to-care-for-your-marimo-m...


Arrggghh. Now I have to learn Python, which I've been actively resisting and making jokes about for years.


This looks quite nice and it might compose well with a cache library like the one posted on HN recently (XetCache, https://news.ycombinator.com/item?id=38696631).


Yeah, having worked on alternative notebooks before, one of the big implicit features of Jupyter notebooks is that long-running cells (downloading data, training models) don't get spuriously re-run.

Having an excellent cache might reduce spurious re-running of cells, but I wonder if it would be sufficient.


We've thought briefly about cell-level caching; or at least it's a topic that's come up a couple of times now with our users. Perhaps we could add it as a configuration option, at the granularity of individual cells. Our users have found that `functools.cache` goes a long way.

We also let users disable cells (and their descendants), which can be useful if you're iterating on a cell that's close to the root of your notebook DAG: https://docs.marimo.io/guides/reactivity.html#disabling-cell...
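
(For instance, a sketch of the `functools.cache` pattern, with a hypothetical load_dataset function:)

    import functools

    @functools.cache
    def load_dataset(path: str) -> str:
        # the expensive work runs once per distinct path; subsequent
        # reactive re-runs hit the in-memory cache instead
        with open(path) as f:
            return f.read()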


ipyflow has a %%memoize magic which looks quite similar to %%xetmemo (just without specifying the inputs / outputs explicitly): https://github.com/ipyflow/ipyflow/?tab=readme-ov-file#memoi...

Would be cool if we could come up with a standard that works across notebooks / libraries!


Function-level caching is the best match for how I'd use it. Often the reason for bothering to cache is that the underlying process is slow, so some kind of future-with-progress wrapper could also be interesting. An example of how that could be used would be wrapping a file transfer so the cell can show progress and then when the result is ready unwrap the value for use in other cells. Or another example would be training in PyTorch, yield progress or stats during the run and then the final run data when complete.


Very cool! This is something Jack Rusher cries for in his talk "Stop Writing Dead Programs" https://www.youtube.com/watch?v=8Ab3ArE8W3s


Also from Joel Grus: "I don't like notebooks" https://www.youtube.com/watch?v=7jiPeIFXb6U


At first I thought that, in effect, this project only removes a couple of Ctrl+Enter keystrokes from the Jupyter-notebook workflow. But after trying the intro, I think it looks good; I really like the simple conversion to a web app.

I wonder if the state/data in the generated app are stored server-side or sent to the browser.

I went through the slider example in the intro and noticed that when I change the icon, the slider position goes back to 1. I tried to fix it so that the slider-selected value is preserved across icon changes, but didn't manage to; it doesn't seem straightforward.


Aren't many of the issues with Jupyter mentioned in this thread solved by Quarto? I have been advocating for its use more at work, and the NIH has even started offering classes on it through the NIH library.


Exactly my thoughts too; especially regarding reproducibility issues, Quarto has been great for past projects at my workplace.

I have yet to try Marimo, but synchronised code cells seem to be what sets it apart. Quarto + jupyter-cache [1] was the closest I managed to get to that experience, but that approach has its constraints.

[1]: https://github.com/executablebooks/jupyter-cache


This is a great idea. I'd been planning to create something similar, where cells are topologically ordered based on their dependency structure, although I was thinking perhaps to integrate with Jupyter more, e.g., use their existing kernel web-socket infrastructure. In my mind, one would be able to zoom out and see a graph view where hovering over a node shows its corresponding cell with content/output. Each node might be coloured according to execution status. That said, I'm not a UI expert and I never got around to it. So thanks for your efforts; I'll definitely give it a spin.


That sounds really cool! marimo has a dependency graph viewer built-in, but we could definitely improve it. Coloring nodes by execution status, and annotating cells with their variable defs/refs, would be great quality-of-life improvements.


It would be amazing if it could be deployed with pyodide/wasm as an alternative to a Python web server. Truly a standalone interactive notebook, hosted with plain html.


I read in a comment that Marimo is an alternative to Jupyter. Does it not depend on Jupyter Server or ipykernel? Is it a replacement for JupyterLab?

I am thinking of Jupyter as all the components in this diagram - https://docs.jupyter.org/en/latest/projects/architecture/con...

Sorry, I did not get a chance to look into the codebase yet.


Correct, it does not depend on Jupyter. It's built from the ground up with different principles in mind.


Do you guys have anything resembling RStudio-style doc-aware code completion? [1]

I swear it's the bane of my existence whenever I'm doing anything inside Jupyter. Coming from RStudio, it always feels like operating in a vacuum.

[1] https://rstudioblog.files.wordpress.com/2015/02/s3.png


Yes, we do!


I already use jupytext to store notebooks as code, but the improved state management and notebook-as-app features are pretty compelling, and I'm trying it out.

Unfortunately, I'm quite used to very specific vim keybindings in Jupyter (https://github.com/lambdalisue/jupyter-vim-binding) that make it pretty hard to use anything else :/


If you're a vimmer and a Jupyter user, do yourself a favour and switch from the browser to VS Code: the vim emulation is much better overall, and you get a proper Python LSP experience, with jumping to definitions, type inference, Copilot, and all that.

(Neovim user myself; as much as I dislike VS Code for everything else, as of now it's hard to replace when using Jupyter.)


That's amazing! Can I edit it in another editor, save the file, and have it update live in the browser notebook? Or does it have to recompute everything?


Not yet, but that's something we do want to support.


I love this, but I'm using DataSpell from JetBrains at the moment because it has two killer features:

    1. A variable viewer, so I can see the current value of all variables in scope.

    2. An interactive debugger.

Maybe the variable viewer is only important because Jupyter notebooks don't track and rerun dependencies? So I wouldn't need it with Marimo. But the interactive debugger is priceless.

Any plan to add debugging?


1. We do have a variable viewer. We have a few helper panels in the bottom left.

2. PDB support is planned and was scoped out yesterday.

Appreciate the feedback!


That's awesome, ok I'm going to go check it out. Great work!


Does this allow running a long-running task in the background, so that a user can close and reopen the tab and continue seeing all the output that has been produced thus far?

This is currently being worked on in Jupyter: https://github.com/jupyterlab/jupyterlab/pull/15448


Looks cool. This is kind of like streamlit, which (I think) tried to escape the limitations of notebooks by giving you an API to quickly make a shareable app with sliders/charts etc. (Yet it retains some notebook concepts like 'cells').

Marimo kind of takes the reactive widgets of streamlit and brings them back into a notebook-like UI, and provides a way to export the notebooks into shareable apps.


Thanks! One way we differ from streamlit is that ML/data/experimentation work can start in marimo — i.e., you can use marimo for traditional notebooking work, without ever making an app. But you can also use marimo to make shareable apps as you've articulated.


Defining the same variable in more than one cell is an error, and the reason for this is obvious. But if the variable is never used in a cell that does not first write to it, reusing the variable name should be possible.

Allowing that would be good, because many notebook cells start with `fig, ax = plt.subplots(2, 2)`, and this is currently not allowed more than once.


Does the local underscore variables feature solve this? Or the approach outlined in the plots tutorial? IMO, not allowing redeclaration is more valuable than supporting this use case. A slight paradigm shift away from your example gives you the significant benefits of a reactive environment with fewer edge cases/quirks. I'd much rather have a notebook error out than silently overwrite a value; you save so much time debugging.
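
(For context, marimo treats names prefixed with an underscore as local to their cell, so a pattern like this sketch can be repeated in multiple cells:)

    import matplotlib.pyplot as plt

    # underscore-prefixed names are cell-local and can be reused across cells
    _fig, _ax = plt.subplots(2, 2)
    _ax[0, 0].plot([1, 2, 3])
    _fig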


> Does the local underscore variables feature solve this

I tried this yesterday while converting a Jupyter notebook with a lot of fig, axs, and it was very annoying to convert all of them. I tried local underscores, with fig_ and ax1_ etc., but those are considered variables that cannot be reused too. Furthermore, I expected local vs. global variables to be cell-based somehow, but that was naive on my part: marimo does static analysis, not dynamic, so defining something like _suffix, adding it to all reused variables, and assigning a different value per cell would need dynamic analysis to work.


Yes, but what I proposed seems to carry no risk of silently overwriting? If there are dependencies between cells, there will still be an error.


Could this be used with MDX or something to embed interactive examples in documentation? That is an underserved use case.


It is not possible at the moment (we use iframes in our documentation), but once we support WASM, it should be possible.


I am most intrigued by the annotation demo you showed, since annotation is painful to set up for small projects.

Can you talk about it in more detail?

Can I tell who the user is so I can have multiple annotators?

Can I use gold data to determine which annotators aren't paying attention?

Where do I learn more about how to build this kind of tool?

Overall, kudos, I signed up for the waitlist.


Marimo looks and feels great!

Have you considered adding support for mermaid.js in the markdown? I tried including some mermaid.js in a `mo.md` invocation, but it didn't render the diagram :-)

https://mermaid.js.org/


We’ve been thinking about it (but had no requests for it yet). I will look into adding it this week. If you would want to make the contribution, feel free to jump/chat in the discord.


The readme says that I can convert Jupyter notebooks, but to what extent does this actually work? What if I've imported custom JS to render MathJax, or added custom CSS? What if I've added inline graphics or videos?


This is a welcome alternative to Jupyter Notebook/Lab. Great work! One thing that would be nice is the ability to preview marimo notebooks on GitHub (like Jupyter notebooks). I am not sure if this is possible, given you would have to run the code to see the output.


Looks really impressive!

But state is not tracked perfectly; sometimes you have to re-run a cell manually. For example, if one cell defines a dataclass instance d and another cell sets d.x = "new value", then other cells using d.x will not know that it has changed.
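
(A sketch of the case described, with hypothetical names:)

    from dataclasses import dataclass

    # cell 1
    @dataclass
    class Data:
        x: str = "old"

    d = Data()

    # cell 2: mutates an attribute; static analysis sees only a reference to d
    d.x = "new value"

    # cell 3: not re-run automatically, because d itself was never redefined
    d.x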


This is good, I've been waiting for something like this to solve the issue of determinism in notebooks.


Awesome! I've been wanting this sort of thing for a long time, but I've only been aware of the Julia tool Pluto.


Thank you. Jupyter has me tearing my hair out a lot of the time. Some completely bizarre design decisions.


I'll definitely try it out tomorrow! Could fix a lot of problems with my current project.


this is very impressive.

the only bit of Jupyter muscle memory I miss is the 'A' (add above), 'B' (add below), and 'D-D' (delete) shortcuts.

kudos for adding polars support!


Awesome!

What would be the best way to use it locally in a minimal, self-contained install?


Try using pipx!


we use the jupyter-server kernel gateway API at https://nux.ai -- we'd love to explore using marimo's API for code execution


How do you read the resulting Python files? That's what I'm struggling with -- but I guess the point is that you don't read them; you use marimo for that?


Thanks for the question. Each cell is represented as a function that maps its referenced variables to the variables it defines. Cells are sorted in the order they appear on the notebook page.

If you run `marimo tutorial fileformat`, that'll open a tutorial notebook that explains the fileformat in some detail.
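
(Roughly, a saved notebook has this shape -- a sketch, and the exact boilerplate may differ between versions:)

    import marimo

    app = marimo.App()

    @app.cell
    def __():
        import marimo as mo
        return mo,

    @app.cell
    def __(mo):
        slider = mo.ui.slider(1, 10)
        slider
        return slider,

    @app.cell
    def __(slider):
        slider.value
        return

    if __name__ == "__main__":
        app.run()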


You've built Observable, but for Python. Love it!


this is really cool, can’t wait to try it out for some ML pipeline development. kudos myles and akshay!


This is amazing!


I have not worked a lot with Jupyter notebooks, but I think it would be good for you to put more emphasis on Jupyter vs. Marimo on your website.



It's there, but warthog is right: it should be a top-level section, like "A reactive programming environment". Yes, ideally people would read the description and understand the differences themselves, or consult the FAQ, but the fact is that most people will understand Marimo in relation to Jupyter, so you might as well optimize that path.



