Jupyter Notebook 7 (jupyter.org)
220 points by afshin on July 26, 2023 | 114 comments



Man, open-source projects should not post announcements like this on a blogging platform that nags you to pay to view posts. It's possible to dismiss the prompt and view the post (for now, at least), but something about it definitely feels off.


I don't share the popular anti-Medium sentiment. For many occasional bloggers, it makes sense; otherwise, they would post in the walled gardens of Facebook and LinkedIn, as thread-monsters on Twitter, or not at all.

Still, for a large open-source project, the overhead of using a static site generator is negligible. And there are plenty of benefits.

The good news is that moving your stuff from Medium to one is easy. It's up to you whether you pick Jekyll, Gridsome, Gatsby, or something else. See (full disclaimer: my blog post) https://p.migdal.pl/blog/2022/12/new-blog-from-medium-to-gri....


I don't hate Medium, it just feels out of place here.


100% agreed. I can't stand Medium.

Much <3 to the Jupyter team. GitHub Pages + Jekyll is performant!


Oh I really like Jupyter too. I don't want them to have a bunch of extra overhead or to roll their own blogging engine or anything - though I have experimented with Jupyter notebooks for writing simple technical blogs and it's actually pretty nice.


I have a GitHub-hosted blog (roderick.dev) to which I dedicate woefully little time. But adding a post is as simple as writing markdown and putting a new entry in _posts.


You mean the tiny little 20px banner at the top? Hardly an issue IMO. Medium has a pretty sustainable business model, just a different one from most blogging platforms.


No, the overlay covering 50% of my window from the bottom.


I'm logged in; that might be why I don't see it. That's obnoxious.


Getting paid feels off?


I don't have a great view of how the organization behind Jupyter operates, but I'd be really surprised if they went with Medium as a way to support themselves. What feels off is an open source project (likely by accident or unwittingly) steering users towards giving to a for-profit company.


They are very well funded via NumFOCUS (https://numfocus.org/), at least going by the names of the orgs that are sponsors. I don't think they need any financial benefit from posting things on Medium.

It is rather the attitude, or non-ideology, of the Jupyter contributors/team that leads to things like posting on Medium or telling people to post their questions in their Discourse forum. It is also reflected in the licensing they chose for their ecosystem. Though usually they are friendly and helpful towards newcomers, it has to be said.


Does medium pay enough for that to be worthwhile?


> Both Jupyter Notebook and JupyterLab are widely used across data science, machine learning, computational research, and education.

Are they though? Does anyone actually use JupyterLab by choice?

From what I've seen people love Jupyter Notebook but find JupyterLab misses the mark (and this is certainly my experience).


I primarily use Jupyter Lab. I have some frustrations but I generally like being able to manage multiple kernels from one notebook, having multiple views into one notebook, having context-sensitive help, and having some of the other features that were only in Lab.

That being said, I'm glad they changed course and continued to work on Notebook once it became clear some people preferred it to Lab. With some of the added features and the ability to switch between Lab and Notebook more easily, I may give Notebook another try.


How does mixing and matching kernels in one notebook work? Can you directly exchange data between cells of different kernel types somehow? Do you go through the filesystem or some kind of in-memory serialization?

(I'm sorry for the questions that could be answered from documentation, but I can't find the docs on this feature! I have been wanting to specify data cells in a notebook, like the markdown cells, and then reference their contents from a code cell)


What use case do you have for multiple kernels in one notebook?


In my field (genetic epidemiology), there are annoyingly un-standardised toolsets. There are libraries in R, python, and C/C++ binaries. Being able to string these together in one notebook is helpful.

That being said, I usually just stick to one notebook per thing.


It's not using multiple kernels in one notebook, but being able to manage all my kernels without opening up another window.


I have seen the exact opposite. JupyterLab is far more dominant, including cloud service providers like AWS SageMaker using it as the go-to simple data-scientist interface.

I started strongly advocating for it pretty much immediately. The waste of space on the margins of the notebook view was (is?) awful.


> The waste of space on the margins of the notebook view was (is?) awful.

If that is your only concern, it will probably be quite easy to write a user style sheet to fix it.
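For the classic Notebook, a few lines in ~/.jupyter/custom/custom.css usually do it (a sketch; the selector may vary across versions):

    /* let notebook cells use almost the full window width */
    .container {
        width: 98% !important;
    }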


> Are they though? Does anyone actually use JupyterLab

I always use JupyterLab by choice.

However between JupyterLab and VS Code Jupyter, it’s VS Code every time. It’s just so much better.


Interestingly this new Jupyter Notebook v7 is basically JupyterLab, but extensively configured to have a UI very similar to Jupyter Notebook. Under the hood it is a completely different (and much more modern) codebase than Jupyter Notebook 6.x, and it’s really cool that this finally landed!


I am confused. Isn't JupyterLab the same as Jupyter Notebook but with a file chooser and some extra functions? I don't care a lot which one I'm choosing. I always open JupyterLab because it has some very small, neat additions. Why would I want to use Jupyter Notebook without the Lab interface around it?


It’s literally the same except Lab has a file picker, but it’s a meme for some people to pretend Lab is some sort of disaster.


Yeah, another thing you can do is offer Lab as a service (JupyterHub) to a group of users, and then you can do things across the org like preinstalled requirements, shared or persistent storage, federated users, etc. If you run this on Kubernetes, it'll spawn labs up and down as people log in and out, and let you manage lab lifecycles, proxying, etc. We bundle Hub with our AI product at $work to give our users a packaged experience.

https://jupyter.org/hub
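If you want to try this, the Zero to JupyterHub guide boils the Kubernetes install down to roughly the following (a sketch from memory; check the docs for the current chart repo and values):

    helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
    helm repo update
    # config.yaml carries the org-specific bits: auth, storage, user images
    helm upgrade --cleanup-on-fail --install jhub jupyterhub/jupyterhub \
      --namespace jhub --create-namespace --values config.yaml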


I've long since switched to Emacs/Org, but I used JupyterLab extensively before (as a data scientist). It's way more powerful than vanilla notebooks, since you can open notebooks/code/related files side by side, it's easier to extend (with Lab extensions), etc.

I always thought people only said they used vanilla notebooks because that's the common phrase, e.g. "I work with Jupyter notebooks" (even though that may well be in JupyterLab). So most regular users wouldn't necessarily know about JupyterLab.


I have used org-mode/babel as a notebook replacement, and obviously the flexibility and the editing capabilities are vastly superior to Jupyter Notebook, but I found it sluggish. I assume that, at least in my setup (using babel-python), the kernel is invoked synchronously. I also didn't try to get any form of completion working, but it should be possible and would be nice to have.

What is your setup?


I use emacs-jupyter, which has async execution. Sorry for the late reply, only just now noticed this!


There is ob-async that worked well back when I tried it. Might be worth a look if the synchronous nature of the executions is slowing you down.


JupyterLab is a lot more popular than Notebook in my workplace.

There is literally no downside to using it over Notebook; why would you prefer Notebook at all?

The file browser, the terminal, the plugins... so much better.


I used nteract alongside Jupyter Notebook for a while because I simply wanted something where you double-click an ipynb and it opens, without a lot of tabs, clutter, and such. Back then Jupyter Notebook was still somewhat cumbersome and had a very slow startup time.


100%. Tons of DL/ML researchers use Jupyter. I think the problem most have is deploying the apps in prod.


I vastly prefer Lab to Notebook. My impression is that Lab is just Notebook with slimmer margins, tabs, and an overall better UI. What am I missing?


Notebook classic for me. Vim keystrokes + Black plugin for formatting. I hate JupyterLab, have tried it multiple times. Have tried VSCode and PyCharm’s notebooks (I use PyCharm for actual development). I always go back to Classic as it just feels right.


The couple of times I experimented with the Jupyter ecosystem, it was only through Lab. I thought Notebook was the barebones app and Lab the more integrated, IDE-like approach. But for some reason it still didn't stick with me.

Would you mind sharing the areas where you feel Lab falls short and where Notebook does it better?

I want to give notebooks a try.


If it weren't for JupyterLab, I would just stick with Google Colab. Same notebook, but a light-years better implementation. You can tab-autocomplete code out of the box in Colab.


I run Jupyter lab with R and Python kernels for bioinformatics analysis. I saw it once in some tutorial and kind of liked it more than notebook.


Strongly agree.


Timely! I just deployed it on our company server. There's a hidden gem that's not enabled by default and really helps when pair programming in Jupyter:

https://jupyterlab.readthedocs.io/en/stable/user/rtc.html

Here's a Dockerfile that enables it:

    # Based on the Jupyter Docker Stacks scipy image
    FROM jupyter/scipy-notebook:2023-07-25
    # Install the real-time collaboration extension
    RUN pip install jupyter-collaboration
    # Have the stack's startup script launch JupyterLab with RTC enabled
    ENV DOCKER_STACKS_JUPYTER_CMD="lab --collaborative"
Usage:

    docker build . -t jupyter-collaboration && docker run -p 10000:8888 jupyter-collaboration
The only things missing would be having more than one cursor and some convenient way to start and attach to remote servers, e.g. over AWS...
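For the remote part, the low-tech route is an SSH tunnel (a sketch; the host name and ports are placeholders):

    # on the remote machine (e.g. an EC2 instance)
    jupyter lab --no-browser --port 8888

    # on your laptop: forward local port 8888 to the remote server
    ssh -N -L 8888:localhost:8888 ubuntu@your-ec2-host

    # then open http://localhost:8888 in a local browser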


Jupyterhub can deploy multiple servers, but so far I’ve only deployed it in Kubernetes.


I am curious: how do you use Jupyter?

For me, it used to be Jupyter Notebook. For reasons I cannot pinpoint, I was never convinced by JupyterLab. Sometimes I use Google Colab, primarily for sharing and for using a GPU for deep learning. Now, when I run it locally, I do it in Visual Studio Code (https://code.visualstudio.com/docs/datascience/jupyter-noteb...), since I don't need to jump back and forth between it and the rest of the code.


Same here - VS Code or Google Colab if I need an Nvidia GPU. I wish I could get Google Colab's GPU in VS Code, like Paperspace lets you do: https://docs.paperspace.com/gradient/notebooks/notebooks-rem....


The VS Code version, for me, has tended to be a better experience, with fewer random disappointments [0], than the PyCharm version. Which is a shame, because the PyCharm version, if it got at least as good as PyCharm generally, would probably be better IMO. But I hear that the new JetBrains notebook IDE is the one getting the love.

[0]:

- random unusable scrolling with vim mode

- gg scrolls to top of notebook rather than top of cell

- seemingly more-limited refactoring

- ipykernel headaches when I’ve already specified the project interpreter

- randomly cell contents get erased on cell execution

- wsl headaches (allowing firewall exceptions for each new project)

- windows jupyter headaches (having to manually terminate jupyter sometimes to quit the ide)

- sometimes the debugger gets stuck fetching variables before displaying them

- some kind of incompatibility with non-PyCharm-created notebooks, possibly related to nbformat version, so they can't be read

- removal of (ui affordances for?) the cell mode for scripts?


I repeatedly ran into bugs and unpleasant behaviours in the VSC version, so I stick with browser notebooks. I did not find the lab version to be an improvement either.


I use jupyter using org-babel inside emacs.

https://github.com/emacs-jupyter/jupyter#org-mode-source-blo...


I used to like notebooks, but they become messy if left unchecked. I think they are good for some use cases, such as short analyses. I switched back to file-and-folder organization along with a Makefile. Tedious, yes, but it prevents me from messing up my analysis with ad-hoc variables or calculations. Having said that, notebooks are a good way to experiment.


My problem with vanilla Jupyter Notebook is that it hides every setting from you. Look at those 4:3-ratio dead zones on the two sides; who would have thought that you have to edit the CSS or JavaScript preferences to increase your screen real estate?

People told me to use extensions, but none of them actually work, starting with the installation process.


> including the installation process

Jupyter has a habit of breaking extensions on version upgrades; JupyterLab 3 -> 4 is a good example of this. Maintainers have to modify their metadata and then run a script. While this is trivial, maintainers have to be aware of the version upgrade, find time to do it, test, and then deploy. It's really frustrating being a version behind because of extensions you need.


> Look at those 4:3 ratio dead zones on two sides

Good thing they are using one side to put a debugger in (shown in the screenshot)


It's a fair point, but it's hardly unique to Jupyter. In fact, since 99% of websites suffer from this problem, I think it's unfair to single out Jupyter for it. Heck, even the site we're on right now manages a poor combination of uncomfortably long lines AND unused left and right margins.


99% of websites are not insanely popular development environments. vscode.dev, for example, takes up the full browser width.


I thought Jupyter Notebook has been superseded by Jupyter Lab. What reason is there to prefer Jupyter Notebook over Jupyter Lab?


For me, less distracting and confusing visual clutter on the screen. Also, the pane showing the file hierarchy is redundant with the file explorer that the OS already provides. But either way, not having to use an IDE is a blessing. Especially on a 14" touch screen laptop.

Note that I'm probably a freak; lots of my friends love their IDEs, but having something that works for my particular brain and my eyeballs is a blessing.


You are describing the use case for vim, not Notebook. Use vim and JupyterLab; it makes working on several projects much smoother.


I thought Notebook was being deprecated. What is the difference between JupyterLab and Jupyter Notebook? Are they developed by two completely different teams? Why maintain two codebases?


The point here is that they've unified the codebases. The application "Jupyter Notebook" is just a single-document version of "JupyterLab", designed to just do that one part of Lab.

Previously there was "Jupyter Notebook". Then they separately wrote JupyterLab (creating a brand-new implementation of notebooks for it). Now they've taken the JupyterLab notebook code and used it to replace "Jupyter Notebook".


Does this mean that RISE (https://rise.readthedocs.io/en/stable/) will break?

I use it sometimes to turn a notebook into a presentation, and it doesn't work with JupyterLab.



Thanks for the explanation. I still don't get the motivation for keeping Jupyter Notebook going. Is it different feature-wise? Or is it just chrome over JupyterLab because people like the retro look of Jupyter Notebook?


I have never understood the appeal of this. You can generate good-looking presentations, but that is all.

Is any real science done with this, or is it the PowerPoint for PyCon talks?


I was extremely stubborn when I started out in python. Built a script for everything. Jupyter is messy. But once I started using it I never went back for data analysis tasks.

Say you have a large file you want to read into memory. That’s step 1, it takes a long time to parse that big json file they sent you. Then you want to check something about that data, maybe sum one of the columns. That’s step 2. Then you realize you want to average another column. Step 3.

If you write a basic python script, you have to run step 1 and 2 sequentially, then once you realize you want step 3 as well, you need to run 1, 2 and 3 sequentially. It quickly becomes much more convenient to have the file in memory.
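In notebook form, that workflow looks something like this (a sketch with pandas; the file and column names are made up):

    # Cell 1: the slow part. Parse the big JSON once; the DataFrame stays in memory.
    import pandas as pd
    df = pd.read_json("big_export.json")

    # Cell 2: runs instantly, no re-parsing
    df["amount"].sum()

    # Cell 3: added later; still no need to rerun cell 1
    df["amount"].mean()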


I like to imagine it's like a very advanced REPL that's somewhat reproducible (if you run everything from the beginning). If you don't see the appeal of being able to mutate state live for experimentation, then it isn't for you.


I didn't "get" Jupyter the first time I used it. A year later it clicked: a notebook keeps state while you write it. This is different from IDEs, where programs lose state while you are writing code. Now I use it all the time, next to an open IDE, as a playground to quickly test ideas and algorithms.


I also use it in tandem with my IDE day to day as a data engineer, for basically the same reasons. I really like being able to interactively explore and transform a dataframe while developing new pipelines, or debugging existing ones (all of which are implemented as modules in a Python package, not as Notebooks).


This is critically important when one step of your analysis takes 10+ minutes to run. You want to be able to explore the output and not worry about rerunning all the calculations like a file/IDE based tool would.

Reactive notebooks are nice, but I've accidentally rerun slow SQL queries because I updated an upstream cell, and that's painful (using Pluto, ObservableHQ, and HexComputing).

In practice I never see or create notebooks that don’t run when you push the run-all button, it’s a well understood and easily avoidable issue. It’s probably a local optimum but I’m happy with them.


IDEs can 100% do this, too. The art of connecting to a running program using the debugger is just something folks stopped caring about.

This has led a lot of programming environments to a place where batch loading of the code is basically required. But "image-based" workflows are a very old concept and work great with practice. Some older languages were built around this idea (Smalltalk being the main one that pushed this way; Common Lisp also has good support for interacting with the running system).

It is a shame, as many folks assume everything has to be "REPL"-driven, when that is only a part of what made image-based workflows work.


Curious what other approach you would take to exploratory data analysis? It's so natural to me that I can't think of another practical way to achieve the same workflow.


Handcrafted machine code on punched cards.

An interactive environment without compile nonsense is just too new for folks.


Emacs Org mode can do this, and it's not tied to just Python. Anyway, something like this works:

  #+BEGIN_SRC python :results file
  import matplotlib
  matplotlib.use('Agg')
  import matplotlib.pyplot as plt
  fn = 'my_fig.png'
  plt.plot([1, 2, 3, 2.5, 2.8])
  plt.savefig(fn, dpi=50)
  return fn
  #+END_SRC

  #+RESULTS:
  [[file:my_fig.png]]


In a true notebook you would maybe want to do the following:

  import matplotlib
  matplotlib.use('Agg')
  import matplotlib.pyplot as plt
  plt.plot([1, 2, 3, 2.5, 2.8])

Alright, saving the figure at 50 dpi first:

  plt.savefig('my_fig.png', dpi=50)

Trying a bit more DPI to see if that makes a difference:

  plt.savefig('my_fig2.png', dpi=150)

Oh, wrong numbers; forgot that the fourth datapoint was going to signify 100, so going back to 50 dpi as well:

  plt.plot([1, 2, 3, 100, 2.3])
  plt.savefig('my_fig4.png', dpi=50)

It seems like your example misses the interactivity.


We have a lot of scientists using RStudio. It's not quite the same, but you can do it. It lets you view your data frames like a spreadsheet and generate graphs. It's R, and I get that Jupyter supports R, but it always has some issue with some dependency.


Ew.

R.

No thank you.


I used to think like that. Programmers hate R. But I took a biostatistics class, and it really is the best tool for that job. Plus, the graphics output can't be beat (ggplot2), and fairly easy-to-install packages make it quite a valuable tool.


> it really is the best tool for that job.

Besides the ecosystem, what makes R better than Python or Julia for biostats?


Can't speak to Julia.

The built-in statistics are great. They're just there, with less need to find a package (general stats, t-test, chi-squared test...). We tend to use the "tidyverse" packages [1] https://r4ds.hadley.nz/. Biopython is amazing for manipulating bio data, but once the data is extracted and you need statistics, our scientists seem to use R. I really don't love R's syntax, but I get why they use it. I use Python all the time for data wrangling (right now I'm pulling sequences from a FASTA file to inject into a table).

RStudio is like an IDE for your data. You can view the data tables, graph different things, etc. If you try the first chapter of the R for Data Science book, you can see how to get up and graphing and analyzing quite quickly: https://r4ds.hadley.nz/data-visualize.html

Though at this point both Python and R are necessary, depending on what package/algorithm you want to use.

There are some good packages for single-cell analysis: we use "Seurat".

https://satijalab.org/seurat/articles/get_started_v5.html

Jupyter supports R now with an add-in, so it's less of an issue.


Yes, tons of science is done with it. I have been a co-author on two studies where the ML and DL models were in notebooks. Saying that all you can generate is good presentations is wrong, and I don't understand what compels you to make these sweeping claims when you don't seem to be in the target group.


Notebooks are chiefly used for scientific exploration and experiments. The “literate programming” environment provides convenient artifacts for distilling research or analytics.

Nowadays they can even be used for running models/analytics in prod with tools like Sagemaker (though I’m not advocating that they should).

Maybe you're mistaking Jupyter for a different tool like Quarto or nbconvert, but your dismissive comment misses the mark by miles.


Not sure about "real science", but it's very convenient for our students. We usually set up a notebook per group for ML-related group projects on our GPU server, and also set up notebooks for thesis work, etc.

Advantages: no setup on the students' side (plus they get reliable compute remotely), and we can prepare notebooks highlighting certain concepts. Text cells are useful for explaining stuff, so they can work through some notebooks by themselves. Students can also easily share notebooks with us if they have any questions/issues.

I also use notebooks for data exploration, training initial test models, etc. Very useful. I'd say >50% of my ML-related work gets done in notebooks.


I'm a "real scientist". Notebooks are widely used to run analyses in my field (bioinformatics), where you explore data interactively.

I personally prefer when people share code as notebooks because you have the code alongside the results. It's really good practice to use Jupyter.


I have found these notebooks very useful in 2 ways besides presentations: as a final exploratory data analysis front end that loads data from a larger modeling and data reduction system, and as a playground to mature workflows into utilities or modules that will later be integrated into a back end reduction or analysis system.

The models run on a small cluster and/or a supercomputer, and the data reductions of these model runs are done in python code that dumps files of metrics (kind of a GBs -> MBs reduction process). The notebook is at the very tail end of the pipeline, allowing me to make ad hoc graphics to interpret the results.


I performed all the data preparation, computation, and image generation for an interactive data visualization website in Jupyter

https://income-inequality.info/

All the processing is documented with Jupyter notebooks, allowing anyone to spot mistakes, or replicate the visualizations with newer data in the future:

https://github.com/whyboris/Global-Income-Distribution


I use it all the time for software development. E.g., when I write DSP code for audio, it acts as a mixture of documentation and the actual math, with graphs to visualize what I do.

That is why JupyterLab is not the wrong name; it is a bit like a lab. Not meant for production use, but very good for exploring solutions.


So far, Jupyter has been the tool that gives me the best chance of coming back a week or a year later and figuring out what I did. Also, doing "restart kernel and run all cells" before going home for the day is a great reassurance that something is likely to be reproducible.


A heck of a lot of science gets done with this. Something like it is basically mandatory for interactive analysis of datasets large enough that they take a decent amount of time to load into memory and process, and Jupyter is the best and most common option (you can kind of bodge it with the vanilla Python REPL, and there are other options with a similar-ish workflow).


If it was the same but in LISP with horribly mapped keys and used by nobody you guys would be all over it.


Good for developing ideas: you can add small code fragments gradually and see results immediately. And if it gets big enough, chances are you have a good idea that makes it worth the time to refactor your notebook into production code.


I refactor my code into functions as I go.

Then I can easily put them into a Python file and import them from the notebook.

Easy peasy and very nice for iterative development.
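IPython's autoreload extension makes that loop seamless (a sketch; helpers.py and clean() are made-up names):

    # first cell of the notebook: re-import edited modules automatically
    %load_ext autoreload
    %autoreload 2

    # later cells import functions that were moved out to a plain Python file
    from helpers import clean
    df = clean(df)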


Just started down this path; it's such a nice workflow. I find that my notebook ends up being a great overview of my codebase without going into the details of every function.


Like everything else in the Python ecosystem, it's half-baked and not composable.

People use it for two reasons: a) because they need to get those graphs on the screen and this is the only way b) running ML code on a remote, beefier server.


Neither of those use cases is exclusive to Jupyter. You can run scripts on remote machines quite easily, and matplotlib will happily pop up a window for your charts.

The real reason is that it's a much better workflow for data exploration and manipulation, because you don't always know exactly what code to write before you write it. So having the data in memory is really useful.


X forwarding through a terminal session to view that matplotlib plot is more work than most want to deal with. Sure, you can use ranger and set up image viewing with ueberzug or something, and set up kitty with icat, but that doesn't work with your tmux, so you need a separate ssh window that isn't tmux'd, just for viewing images. You also have to save the plots and then switch to view them, which is incredibly clunky.

Or you just use JupyterLab and the problem is fixed.


kitty icat works with tmux as of kitty 0.28.0, just FYI.


You are underestimating how useful this combination of elements is for exploratory tasks: markdown/code cells + a runtime kernel that keeps state + persistent results, all usable from your browser.

Jupyter Notebook is neither the first nor the only implementation of such a literate approach.

If some code is stable enough for reuse, you can make it as composable as any other code: put it into a module, create a CLI/web API/etc. -- whatever is more appropriate in your case.


>People use it for two reasons: a) because they need to get those graphs on the screen and this is the only way b) running ML code on a remote, beefier server.

Do you have a source for this, or is it something you dreamed up? Weird claim, as neither of those is my use case.


Well, it is the closest many folks will get to what using a Lisp Machine or the Smalltalk development experience was like.


What would a "composable" experience look like?


You would have to formalize the inputs and outputs of your notebook, perhaps with a preamble of imports and so on. Then other notebooks could use yours, kind of like importing (perhaps exactly by importing?).

As it is now, you typically wind up "programizing" your notebook once it does what it should, so you can run it in batch and so on.
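For what it's worth, papermill already implements the input half of this: you tag one cell "parameters" and it injects values at run time (a sketch; the notebook and parameter names are made up):

    pip install papermill
    # execute template.ipynb with injected parameters, writing an executed copy
    papermill template.ipynb output.ipynb -p db_url "postgresql://host/db" -p start_date "2023-01-01"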


Co-locating code and outputs is handy.


My understanding was that the developers' plan was to develop JupyterLab as the future version of Jupyter Notebook. But it seems both of them now get updates. My question is: when should you use JupyterLab vs. Jupyter Notebook, and what is the difference between them?


> The major change is building the Jupyter Notebook 7 interface with JupyterLab components so that the two applications share a common codebase and extension system.

This is interesting! Looking forward to testing the purple theme I use [1]. I wonder if extension developers will need to maintain three sets of instructions now: JupyterLab, Jupyter Notebook >= 7, and Jupyter Notebook < 7.

[1] https://github.com/shahinrostami/theme-purple-please


I really like notebooks and often end up preferring them over lab.

But even though VS Code has poor support for notebook shortcuts (half the time a cell is focused "the wrong way", so you can't use 'b' to create a new cell), I find the convenience of just clicking a notebook file and seeing it rendered too good to pass up.

Regardless, I am happy to see they have revived the project and am eager to check out the new release!


I am new to the Jupyter ecosystem. Can anyone point me to resources that allow me to generate PDF reports from Jupyter Notebooks?

I want to build a template notebook that has internal code to fetch data from a database based on command-line arguments, then run the notebook, strip all the code parts, and generate a beautiful PDF document.



Jupyter nbconvert should do it for you: https://nbconvert.readthedocs.io/en/latest/
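Something along these lines should cover the "strip the code, keep the output" part (a sketch; report.ipynb is a placeholder, and parameter injection would need a separate tool such as papermill):

    # execute the notebook and render it to PDF without the code cells
    jupyter nbconvert --to pdf --execute --no-input report.ipynb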


You want Quarto. https://quarto.org/


Can I use Quarto in a cell-notebook style? So far it seems like I can only get it compiled into a PDF, which feels different from Jupyter.

I could be completely wrong here, just looking for insight.


Quarto is used to convert code (a script or notebook) into formatted output. You can annotate a Jupyter notebook and have Quarto compile it into a PDF. If you are looking for something that changes the look of the notebook itself, I don't think that exists. But you can still include some interactivity when exporting to HTML.
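In practice, the conversion is a one-liner (a sketch, assuming Quarto is installed and notebook.ipynb is your annotated notebook):

    # render the annotated notebook straight to PDF
    quarto render notebook.ipynb --to pdf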


Does debugging work for you?

Neither in Notebook nor in Lab can I click in the gutter to set breakpoints. The debugging panel is open and the documentation is clear (except that line numbers are off by default and you have to activate them), but nothing happens.

Where exactly am I supposed to click?


There is a little bug icon in the toolbar of your open notebook in both user interfaces. The bug only appears if you have a kernel that supports debugging (e.g., ipykernel). If you see the little bug on the right-hand side of the toolbar and enable it, you should start seeing the variables in your memory state, and you should be able to click in the gutter to add breakpoints.


It seems they have still not fixed the issue where notebooks lose output when you reload them or open running notebooks on another machine: https://github.com/jupyterlab/jupyterlab/issues/12422

That whole issue feels so stupid.

I quite enjoy jupyter lab otherwise, even if a lot of it is brittle and annoying.


Slightly off-topic, but does JupyterLab still run on zeromq?


Yes. Jupyter kernels all talk over zeromq channels, irrespective of the front-end user interface.
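You can see this without any front end: jupyter_client speaks those ZeroMQ channels directly (a sketch, assuming ipykernel's "python3" kernelspec is installed):

    from jupyter_client.manager import start_new_kernel

    # start a kernel plus a client connected to its zmq channels
    km, kc = start_new_kernel(kernel_name="python3")
    msg_id = kc.execute("1 + 1")          # sent over the shell channel
    reply = kc.get_shell_msg(timeout=10)  # execute_reply comes back the same way
    print(reply["content"]["status"])
    kc.stop_channels()
    km.shutdown_kernel()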


For folks asking what the Notebook UX offers that the Lab does not, this github thread may be enlightening: https://github.com/jupyter/notebook/issues/6210

(TLDR: some novice users in educational settings find the lab environment overwhelming.)



