Maker of RStudio launches new R and Python IDE (infoworld.com)
175 points by javierluraschi 3 months ago | 128 comments



What is the strategy behind the dizzying pace of product changes in the R/Python space? I heard that RStudio was a great product, yet they have to redo everything again, and of course rename the company, as is standard practice in the Python "scientific" market.

Is it Jupyter envy? Why is it not possible to keep one good product and stay with it?

I wish MATLAB licenses weren't so expensive; at this point I'd just buy one and sit all this churn out.


As a regular R/bash/Python user, I think the Posit philosophy is increasingly to break down barriers between the languages. It's no longer an either/or relationship; you can use both. And some packages or features that have been ported to Python are just objectively worse in that language, so the demand for R is definitely there.

Also, a lot of the Posit team are fully "bilingual", it's not like the old guard of academic R contributors. My impression is they appreciate both languages for what they have to offer.

For me, I'm apathetic about the languages; the only thing I care about is the output.


>ported to python are just objectively worse in that language

This is absolutely the case. dplyr syntax is much more intuitive for many use cases than the Pandas or Polars equivalents.
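
A small example with the built-in mtcars data; the pandas version in the comments is just a rough equivalent from memory:

    library(dplyr)

    # filter, group, and summarise in one readable pipeline
    mtcars %>%
      filter(wt > 2) %>%
      group_by(cyl) %>%
      summarise(avg_mpg = mean(mpg), n = n())

    # rough pandas equivalent, for comparison:
    # (mtcars[mtcars.wt > 2]
    #    .groupby("cyl")
    #    .agg(avg_mpg=("mpg", "mean"), n=("mpg", "count")))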

One thing I miss from RStudio is the RMarkdown documents with inline outputs. Jupyter notebooks, even in VSCode, are so needlessly over-engineered and under-featured compared to the elegance of RMarkdown. So I am excited to see what Posit can do to bring that experience to Python. My git repos will be thankful, anyway.


I don't even know why people need dplyr and the tidyverse; in my opinion R is very comfortable for data wrangling and making all kinds of plots out of the box. It's able to handle huge amounts of data as well, especially if you adopt a functional programming approach over an object-oriented one (what I see in a lot of the classic "academic" brittle hardcoded slop that R gets a bad rap for). It's very fast if you keep in mind that it is a vectorized language and write your scripts from that perspective. The tidyverse seems like a new, unrelated syntax to learn on top of it all, whereas the base graphics packages very much work like the base statistical functions, the base data wrangling, and everything else in base R.
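
What I mean by leaning on the vectorized base language:

    # vectorized base R wrangling: no packages, no explicit loops
    df <- mtcars
    df$kpl <- df$mpg * 0.4251                       # whole-column arithmetic
    heavy  <- df[df$wt > 3 & df$cyl == 8, ]         # logical-vector subsetting
    by_cyl <- aggregate(mpg ~ cyl, data = df, FUN = mean)  # grouped summary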


My feeling is that people with a more mathematical background tend to like developing DSLs that look more like math than code and are typically written once and then thrown away, whereas people with a more software-engineering background tend to prefer code that is more explicit about what it does, and have a better understanding of the long-term implications for maintainability/extensibility. Which for me is the summary of the R-versus-Python debate in general.

One can see that in the JVM world with Java vs Scala: people attracted to Scala tend to like "cute" DSLs; Java people tend to be more careful with shiny new features. (This is an oversimplification, of course.)

Specifically for dplyr: it looks cute and tends to be easier to use in a REPL setting (you can build your pipeline step by step by running your command, looking at the output, getting the command from history, adding a step, and running again; at the end you get a single line to copy-paste into your script). But if you want to wrap it in a function, it tends to create issues.
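
The classic trap, as a minimal sketch:

    library(dplyr)

    # Fails with "object 'mpg' not found": the bare column name
    # doesn't survive the function boundary
    mean_of <- function(df, col) {
      df %>% summarise(avg = mean(col))
    }

    # Works, but you have to know about tidy eval's embrace operator
    mean_of <- function(df, col) {
      df %>% summarise(avg = mean({{ col }}))
    }
    mean_of(mtcars, mpg)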


The base graphics packages make plots as ugly as the ones generated by gnuplot, though. ggplot2, on the other hand, has very pretty output. And the concept of a grammar of graphics just makes so much sense to me.


You can make plots look however you want with base graphics. ggplot2 users mostly stick with the default settings, honestly; you get that classic grey-background plot, which I personally find uglier than the cleaner white-background defaults of the base package.
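
To be fair, both sides are one line away from the other look:

    # base R: white background by default, restyled via par()
    par(las = 1, bty = "l")
    plot(mpg ~ wt, data = mtcars, pch = 19, col = "steelblue")

    # ggplot2: swap the grey default for a white theme
    library(ggplot2)
    ggplot(mtcars, aes(wt, mpg)) + geom_point() + theme_bw()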


That's only true IMO of the in-IDE plots; for actual exported PNG or vector graphics, I think base R plots are pretty much perfect, other than perhaps the default colour palette.


Beauty is in the eye of the beholder. I much prefer the aesthetics of plots made with the lattice package, or even base R, over ggplot's.


Base graphics are also _massively_ faster than ggplot when data sizes get larger. To the extent that ggplot essentially becomes unusable.
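
A rough sketch of the kind of gap I mean (exact timings obviously vary by machine and graphics device):

    n <- 2e6
    d <- data.frame(x = rnorm(n), y = rnorm(n))

    system.time(plot(d$x, d$y, pch = "."))  # base: typically well under a second

    library(ggplot2)
    system.time(print(ggplot(d, aes(x, y)) + geom_point(shape = ".")))  # often many times slower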


Maybe it is for you, but the success of Dplyr and ggplot suggests a lot of others disagree.


I wonder how much of this is just a feedback loop; were people taught both tools and then chose the one that works best, or was one more heavily promoted than the other, so people went with what was easiest to get started?


Once you are using the tidy paradigm, it lends itself to efficient plotting with ggplot2; plotting with base R would require reshaping your data. So insofar as dplyr becomes a popular default, it makes sense that ggplot2 would be in lockstep.
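
A toy illustration of why the two travel together:

    long_df <- data.frame(
      time   = rep(1:10, 2),
      series = rep(c("a", "b"), each = 10),
      value  = c(cumsum(rnorm(10)), cumsum(rnorm(10)))
    )

    # ggplot2 consumes long (tidy) data directly
    library(ggplot2)
    ggplot(long_df, aes(time, value, colour = series)) + geom_line()

    # base graphics wants one column per series, so reshape first
    wide <- reshape(long_df, idvar = "time", timevar = "series", direction = "wide")
    matplot(wide$time, wide[, -1], type = "l", lty = 1)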


It's definitely a feedback loop. Every time you look up an R question on Stack Overflow, people give you a ggplot or dplyr answer and usually not a base-package implementation. It's almost as bad as Ole Tange spamming GNU parallel on every xargs thread.


I'm sure that's part of it. But you could say the same for using Python or R over another language. Besides, someone who knew R well enough to write dplyr thought the situation was dire enough to write it. And there's also data.table, but that is inscrutable to most folks; I have only ever used it for fread, which is ~10x faster than any other method of loading CSVs into R.
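
For anyone who hasn't tried it ("big.csv" is just a stand-in, and the exact speedup depends on the file):

    library(data.table)

    dt <- fread("big.csv")     # multi-threaded parser, auto-detects column types
    df <- read.csv("big.csv")  # base R, single-threaded, much slower on large files

    # fread returns a data.table, but you can ask for a plain data.frame:
    df2 <- fread("big.csv", data.table = FALSE)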


Hardly. Hand-holding tools are popular, but that doesn't mean they give you any new capability you didn't have otherwise. Jupyter notebooks are probably more popular than Python scripts with new data scientists too; that doesn't mean anything, or take away the advantages you get from properly packaged scripts over a big old notebook where you iterate a pipeline line by line and figure by figure.


I learned R long enough ago that I'm pretty fluent writing readable data-wrangling code in base R. But I'm a biologist first, and in my community I see the value dplyr adds in making things approachable for people who need to do some basic stats but will probably never need to really understand the language or do any development.

It also provides guardrails and encourages best practices, which I find a bit too paternalistic and annoying, but again I can see the value.

I think most R users would be surprised at just how much tidyverse functionality is hidden in base R, but the majority of the dplyr versions of functions have at least some intended improvement over the base R versions, and some are a massive improvement in functionality.

For example, in a typical script the only tidyverse package I may load besides ggplot2 is tidyr, because the pivot_wider()/pivot_longer() functions really do solve a problem that was not fun in base R.
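
A toy wide-to-long pivot, with the base reshape() equivalent for contrast:

    library(tidyr)

    wide <- data.frame(id = 1:3, x_2020 = rnorm(3), x_2021 = rnorm(3))

    # tidyr: the intent is obvious from the call
    long <- pivot_longer(wide, cols = starts_with("x_"),
                         names_to = "year", names_prefix = "x_",
                         values_to = "x")

    # base reshape(): works, but the argument names fight you
    long2 <- reshape(wide, varying = c("x_2020", "x_2021"), v.names = "x",
                     timevar = "year", times = c(2020, 2021),
                     direction = "long")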


> One thing I miss from RStudio is the Rmarkdown documents with inline outputs

This is already in Quarto! https://quarto.org/docs/computations/inline-code.html#:~:tex....


I can't stand jupyter notebooks for several reasons. I've been using https://quarto.org/ and writing .qmd docs and really enjoy it.


> One thing I miss from RStudio is the Rmarkdown documents with inline outputs

RMarkdown (Rmd) was recently developed into “Quarto” (Qmd), precisely because they now support Python as well. I’ve used it a bit and it’s excellent.


But Rmd already supported Python chunks (as well as other languages such as Ruby, which seems to be missing from Qmd).


Quarto supports any language and works just fine. I have quarto blog posts for using APL as an example of a somewhat niche language.


I guess you have to add some plugin? I mean, in RMarkdown in RStudio I just type ```{ruby} and I have a Ruby block with nothing special installed. That doesn't work by default with Quarto.


In the Quarto front matter, you can choose to use a Jupyter backend, in which case any Jupyter kernel can be used to interpret code blocks. Many languages, including APL, have Jupyter kernels you can install.
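
Something like this in the front matter (a minimal sketch; python3 is just the stock kernel name, and swapping in any other installed kernel's name, per `jupyter kernelspec list`, switches the language):

    ---
    title: "Example"
    engine: jupyter
    jupyter: python3
    ---

    ```{python}
    1 + 1
    ```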


RMarkdown really is great. I used RStudio/RMarkdown for almost all my homework, projects, and even papers with no code during my MS program. I now realize that it was Pandoc's Markdown mixed with LaTeX that I appreciated so much for scientific writing, but with RMarkdown you can easily call R and Julia.

I don't remember invoking Python from RMarkdown (maybe you already could in RStudio, but I never did), so this will be a welcome addition in this new Posit program.


You might be interested in what we’re building over at Evidence.dev

It’s basically RMarkdown for SQL


Neat! I have to deal with Splunk at work, and every time I do I'm annoyed that they decided to create their own query language.


A friend and I have been having a blast with Observable Framework. It seems really well put together: markdown + code blocks. See https://m.youtube.com/watch?v=Urf_bPFyhIk for a short demo.


Just curious, what Rmarkdown features do you find lacking in VSCode notebooks? I have the opposite impression, but I am probably missing something because I don't use RStudio as much as VSCode.


The creator of pandas, Wes McKinney, is on the Posit team and working on this project too. That gives some additional credence to the idea of a convergence of tooling.


One of those is not like the others


> Is it Jupyter envy?

The funny thing is that the R in Jupyter actually stands for R (the language). It's Julia, Python and R. No need for envy.

Of course, RStudio/Posit != R (at least in theory)


I think the R people see the writing on the wall. Python has sucked all the air out of the room, and it is becoming increasingly difficult for them to target the R space exclusively.

If you had no legacy or compliance requirements, would you start a new data project in SAS, R, or Python? Where are you going to find the most talent?


I'm not sure what would lead you to believe this. I've worked in the data science/ML space for over a decade now, and I see the majority of pure analytics projects started in R, including at big tech companies I've worked at recently.

Of course, ML projects and other things that need to result in production-grade models are almost always done in Python. This is currently the most visible form of "data project" due to all the ML/AI hype, but it is far from the only data work going on.


I'm curious about the people who use R in the big tech companies you've worked at. Were the R users people who had just come out of school and were still using their academic dev environment before weaning off it?

I always found that was the group who used R: kind of a "use what you're used to" situation, until it gets out of step with the rest of the workflow.

I also would say that the amount of R I see is far less than Python.


So, (speaking as someone who started with R and now predominantly writes Python), I think there's a bunch of things going on here.

1. R is 100% better for analytics work and statistical modelling. There's just no contest.

2. Python is much, much better for data getting (APIs/scraping etc) and dealing with non table-like data. Again, there's basically no contest here.

3. Software engineers hate R (in most cases), which means that it's easier to hand over work for production in Python.

This leads to a situation where it looks like most of the prod-level work is being done in Python, but if you look under the covers you'll discover that most prototyping/analysis/exploration is done in R and then ported to Python if it works.

Like, Python is a great language for lots of things, but it's pretty terrible for exploratory DS work (pandas is like the worst features of base R and base Python mashed together in an unholy hybrid).

There's also the fact that all the NN stuff is predominantly Python, so lots of companies believe that they need Python people, which reinforces the stereotype.

And finally, while I love R, Python has more guardrails, and it's harder to make an unmaintainable mess with it (relative to R). Particularly when people use all the various lazy evaluation packages that the tidyverse has used over the past decade (I once maintained a codebase that used all of these in different places, it was not a fun experience).


One of the better comments in this thread. I would only qualify that different levels of ability mediate much of the "how hard is it to make an unmaintainable mess" dimension. dplyr/tidy code can be spaghetti, as can pandas, and there is a whole new level of that now, given LLM-generated nonsense edited/tweaked by novices masquerading as seniors.

Apropos this idea of a VS Code competitor: I wish they would spend more effort on existing products. I find Quarto frustratingly buggy, and meanwhile I see no reason to move my workflow from VSCode to this new thing. YMMV.


> I would only qualify that different levels of ability mediate much of the "how hard is it to make an unmaintainable mess" dimension

Oh definitely, but at least Python's stdlib is relatively consistent, which helps packages be a little more so.

My favourite example is t.test, which is not a t method for the test class, unlike summary.lm which is.

And there's like 4 different styles of function naming in base & stats alone.

Python has problems (for god's sake, why isn't len a method?) but it's a little more consistent.
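
To make that concrete:

    # S3 dispatch: summary() on an lm object finds summary.lm()
    m <- lm(mpg ~ wt, data = mtcars)
    summary(m)

    # t.test() merely *looks* like a method; there is no "test" class
    t.test(mtcars$mpg)

    # and the naming conventions, all from base/stats:
    as.numeric("1")          # dot.separated
    rowSums(matrix(1:4, 2))  # camelCase
    seq_len(3)               # snake_case
    Sys.time()               # Capitalized prefix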

I used to think that R was responsible for a lot more of the mess than I now do, having seen the same kind of DS code (and I am a DS) written in both Python and R.

And it would be sweet if R had a pytest equivalent; if I never have to write self.assertEqual again, it'll be too soon.


You're wrong. Python is outpacing R in usage; every metric you can find shows it. R also has fundamental issues and lacks serious development.


Not to dispute this, because I have no idea, so I'll assume you're correct. But how many metrics did you find, and how were they obtained? And how would you know they are representative of all R users?


For whatever it is worth, the TIOBE index lists Python as #1, R at #21.

Python is the first language many people are exposed to today. It has a library and tooling for every use case.

https://www.tiobe.com/tiobe-index/


R has a pretty particular use case though, Python use for statistical programming/data analysis would be an apples to apples comparison. People doing a coding 101 course in Python don't really count against the R user base.


No one is disputing that R has usage in niche arenas.


s/serious/hyped


No. R fundamentally has not really improved in the past ~10 years. Do you know much about how R works?

Also try:

gsub('serious', 'hyped', x)


Maybe because it already does what it intends to do reasonably well? I mean, what do you think needs to be improved?


Here are 14 years of HN discussions/criticisms of R: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

"Does what it intends to do reasonably well" is going to be widely subjective, depending on whether the user's use-case is statistical/life-sciences vs more general purpose coding and relying on many packages; prototyping/experimentation vs production code; whether the user uses base-R, or tidyverse/data.table, etc.

Here are two of those many posts:

* An opinionated view of the Tidyverse “dialect” of the R language (July 5, 2019) https://news.ycombinator.com/item?id=20362626

* The R programming language: The good, the bad, and the ugly (epatters.org, 2018) https://news.ycombinator.com/item?id=35571659 -> https://www.epatters.org/post/r-lang/


If you're unironically asserting that R already does everything well enough, I'm not going to take you seriously.


Yet you haven't provided any substantial points against it and assume that others will take you seriously...


I'm asking what needs to be improved, in your opinion.

That's a normal follow-up question that you should be able to answer. Otherwise, why are you even commenting?


No you aren't; you're clearly asking rhetorically. Re-read your post.

Any criticism brought up, you'd dismiss. Here's one: lack of native 64-bit integers.
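
To spell that one out:

    .Machine$integer.max  # 2147483647 -- R's native integers are 32-bit
    2147483647L + 1L      # NA, with an integer overflow warning

    # doubles are the usual workaround, but they only hold
    # exact integers up to 2^53:
    2^53 == 2^53 + 1      # TRUE

    # so 64-bit IDs and timestamps need a package, e.g.:
    # library(bit64); as.integer64("9007199254740993")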


No, I was just asking out of curiosity. You're overthinking this.


No, you were clearly asserting that R already does everything needed. Stop trying to gaslight.


Please don't cross into personal attack or name-calling, and please especially avoid this sort of tit-for-tat spat with another user. It's way against the site guidelines and makes for boring reading.

I know it's not always easy to extricate oneself but it's helpful to remember that the only way to 'win' is to stop.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


You're crazy.


The issue is that even if you peel away the hype (which is real), Python is still far larger.

If you check, e.g., the Journal of Open Source Software (which does not have much ML/AI bias), most of the papers are Python, with an occasional R or Julia submission.


I have the impression that most SotA algorithms in many fields outside deep learning are made available first as an R (or even MATLAB) package. Obviously they do get ported to Python once they gain enough traction.


This has been my impression in social-science academia: a lot of ready-to-run methods are released as R packages.


The problem is that life scientists and statisticians are not going to learn Python, though. And that's the talent.


On the ground, however, they learn both. Or I should say: if you find someone who is exclusive to one language or the other (which happens), they aren't much of a programmer to begin with.

There is essentially a whole food chain of computing knowledge in academic life sciences. There are those who write the tooling, who are perhaps so abstracted from the underlying biology that they vet their tools against simulated data and comparisons with existing tools, toiling in their own castles on software that might never see a real dataset. There are those who use the tooling to build novel pipelines, analyze data, and draw conclusions based on their own or their collaborators' literature research or life-science perspectives; they might not care whether a finding is truly novel or merely confirms an existing hunch with a tool that hasn't yet been applied to an existing dataset. And then there are those who run pipelines created by others in their research group, sometimes by people who have long since left, with brittle hardcoded paths and other "DO NOT TOUCH" segments in a single massive 2500-line file that regurgitates some plots from a standard sort of CSV input.


They are, though. I work in computational biology, and I would say the majority of the work I see now is in Python.


That's a much, much smaller subset than the vast number of R users in academia/research.


Students I spoke to were forced to use SAS. They looked at me as if I was from Mars for suggesting alternatives.

For a "big data" project, people will probably use Python (though Google apparently retreats from it except in machine learning).

Why could you not use R and C++ or Java, though? For example, Arrow has bindings for both, so the argument that Python is needed to shovel/scrape/steal data becomes less and less valid.


The issue is that R is too small compared to Python, and the gap is getting bigger. They're trying to grow the company, and realistically the only way is to support Python.


VS Code has been a pillar all along? MATLAB is utter trash on macOS: slow, with a dated UI. Also, why support proprietary languages with predatory marketing tactics?


Modern software can't work, that's it, and no, I'm serious and have no intention of starting a flame war. The original desktops were a single OS/environment/framework, which was "fragile" to a certain extent, but everything had to evolve in a fully integrated environment. That means FAR LESS code and FAR FEWER deps for everything. It means it's easy for users to bend the environment to their wishes, and easy for devs to create something "on the shoulders of giants", with the environment instead of against it, rather than ending up with something like https://xkcd.com/2347/. Modern projects are typically written in Silicon Valley mode, anchoring the project to some deps without any reasoning about their future, scalability, maintainability, and so on. Something changes, and anything on top collapses.

In practice we have nearly ZERO development of desktop apps, simply because modern desktops are still the widget-based stuff that was sold as "what we need", against the complexity of classic DocUIs; then we migrated to the modern web, which is a bad DocUI, so developing for the desktop is simply a nightmare. To add features we can't really "use the environment", so all apps try to do everything internally, evolving into unmaintainable monsters whose codebases no one can handle. At a certain point they become a kind of framework where "features" are "someone's ideas" without a coherent vision or a target for the application, like Eclipse going from an IDE to a platform some use to code and others to pay taxes (yes, the Italian government made Desktop Telematico, a custom Eclipse for filing taxes).

As with the classic Greenspun's tenth rule, we witness the same thing: we badly need an OS that is a single user-programmable application, like Emacs, or doing ANYTHING is a nightmare and there is no long-lasting solution.


RStudio came out a long time ago at this point. I actually think it feels quite dated, personally.

A data science focused version of VS Code with some kind of notebook sounds rather awesome to me.


I agree, RStudio is really dated. I've been using R and Python at work in parallel, and I find RStudio cannot do obvious things that the R extension for VS Code can, like stopping at a breakpoint in the middle of a script or knitting an Rmd file that has embedded code.


There is already RMarkdown, and great tutorials to go along with it for the budding R data scientist.


Does it still bundle an entire, old version of libclang?


Are they going to drop RStudio? I very much prefer its Qt interface over whatever VSCode invented. It’s fast, has nice keyboard shortcuts, none of that pointless padding and it just feels great to use.


Hey! Product Manager for RStudio here (and Positron).

We have no plans to stop development or maintenance on RStudio, and are committed to it for our users, both paid and community. While Positron and RStudio have some features in common, some R-focused features will remain exclusive to RStudio. If you're currently using RStudio and are happy with the experience, you can continue to enjoy RStudio. RStudio includes 10+ years of applied optimizations for R data analysis and package development.

Cross-posting the FAQ: https://github.com/posit-dev/positron/wiki/Frequently-Asked-...


I was a huge fan of RStudio, and it was pretty much the biggest reason I used R. But then I realized how bad R is syntactically, and how much more useful Python and its ecosystem are. Then I discovered VS Code, and Jupyter notebooks in VS Code, which completely sealed the deal. So unless you need specific R data-science packages, Python seems like the way to go. I'm quite excited to try Positron!


R has the best syntax of any language I have used. What I hate about R is not the syntax; it's all the functionality inside it that was written in hairy C and FORTRAN style. Unfortunately, any system gets written on top of by programmers of legacy systems.

The syntax and semantics of R are probably the most elegant of any language I have used. The rules are extremely simple, logical, and transparent, and are largely inherited from Scheme. The amount of power it gives developers is unparalleled outside of Common Lisp. So much can be redefined by users. It really is, at its core, a Lisp wrapped in C clothing, but better, because few people working on R care at all about compiler optimization. Instead, they all care about late binding and introspection, so programmers can figure out what their code is doing. If speed is needed, use C++ (Rcpp); in practice, R is usually fast enough. Somehow the R developers understood Knuth's proverb that premature optimization is the root of all evil (and of wasted effort), while the rest of the world forgot.
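
A few lines that capture what I mean (try them in a fresh session):

    # R captures unevaluated expressions, like a Lisp
    f <- function(x) substitute(x)
    f(a + b)     # returns the expression `a + b`, not its value

    # everything is introspectable
    body(sd)     # the source of a stats function, right there

    # and almost everything can be redefined, even operators
    `+` <- function(a, b) base::`+`(a, b) * 2
    1 + 1        # 4
    rm(`+`)      # restore the original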


I don't know. I bought into the Python hype and after a few years I've found myself missing R. If you're using the full python ecosystem more power to you...but for straight up data analysis and statistics, R is unbeatable.


It’s great to hear. Thanks.


It's not Qt. The interface is HTML+CSS via Node.


It’s Qt: https://github.com/rstudio/rstudio/blob/main/src/cpp/desktop...

It seems to be some JavaScript generated from Java via GWT. Regardless, I prefer it over the VSCode UI.


RStudio was based on QtWebKit, then migrated to QtWebEngine, then finally migrated to Electron (which is what it uses today). You'll find some vestigial Qt code in the repository but it isn't used for the shipping releases any more.


This looks great, combining the best of VSCode and RStudio.

I prefer coding in VSCode but prefer data exploration in RStudio.

One issue with this is the lack of Copilot. Copilot can be installed on VSCodium [1], but it breaks often. The other is MS's proprietary Remote Development extension, which enables a lot of functionality in VSCode. There is an open equivalent, but I haven't tried it [2].

[1] https://github.com/VSCodium/vscodium/discussions/1487

[2] https://open-vsx.org/extension/jeanp413/open-remote-ssh


How are you all using Copilot? I feel it's a PITA and gets in the way every time I try to use it. How do you use it properly?


My employer is asking me to try out Copilot to see if it offers a lot of value. IMO about 90% of the time it is an annoying interruption that offers to help me write a few characters (which I could probably write faster if I wasn't being interrupted) or maybe a few lines at most. 5% of the time it offers downright incorrect suggestions, and about half of those suggestions are only subtly incorrect, so I have to be on-guard about anything I use it for. 5% of the time it offers suggestions that I didn't consider, but are definitely a positive contribution. E.g. checking input parameter validity.

I've heard that one way to use it effectively is writing a detailed comment about what you want to do, then letting it suggest the code. I personally don't like that style of comment, so I'd have to:

- enable copilot if I have it disabled (I have a keymap for this in vim)

- write the comment

- carefully review that the suggestion is correct and complete

- accept the suggestion, then go back up to delete the comment

kinda inconvenient, but if I was blanking on a bunch of stdlib functions maybe it would help? But accepting the copilot suggestions doesn't add imports, whereas accepting language server suggestions often does (e.g. with gopls).


Some examples of wrong suggestions... things like suggesting OpenIDConnectClientCredentialsFlow when the actual name of the class is just OpenIDConnectClientCredentials. Language servers will have the correct suggestion, and Copilot is just a bother and a hindrance in that case.


> I've heard that one way to use it effectively is writing a detailed comment about what you want to do, then let it suggest the code. I personally don't like those style of comments

I do the same as you: write such comments, let Copilot draft the code, then I delete the comment.

Since the comment is detailed, I think of such use of Copilot as a “pseudocode to code compiler”.


> My employer is asking me to try out Copilot

this says more about your employer than Copilot?


Wdym, the whole comment or just that statement? If you meant the statement, then yeah. I was just offering the context for my usage of copilot. Essentially a solution in search of an unstated problem.


It’s amusing to me to see companies asking their devs to play with AI to try to find some kernel of value there.

Meanwhile, if said devs do that playing in other languages/frameworks, they’re chastised for not focusing on business value.


Famously, in San Diego IIRC, a large corporation of some kind announced outsourcing and layoffs... then proceeded to require the employees to train the new outsourced people for more than a month! Dimly recalled from business news after the 2008 meltdown.


I work somewhere where they actively encourage copilot, and I actually have some good use-cases.

Most of the time it is just autocomplete on steroids. Scanning the suggestion and hitting tab is almost always faster, especially if you have to write a lot of repetitive code.

Generating useful documentation.

I work a lot with legacy codebases. Oftentimes I use copilot to explain what a chunk of code is doing, or give it a requirement and see what it generates based on context. At this point I think working with a legacy codebase as a new maintainer would be a lot harder if I didn't have copilot.


> Most of the time it is just autocomplete on steroids.

imo this is kinda underrated, I use it all the time and it's a real time saver for me. It seems to have a good recall for things you've been doing recently, so it'll make suggestions based on a method I just added or a variable I just declared.

Another fun one I had recently was implementing elo ranking. I'd done some reading so knew what I was doing, but based on the function name and comment it generated the whole thing for me, including all the correct argument names and object properties. Watching it pop up line by line was very cool!


I find the best way to use it is the chat interface. In VSCode, I can e.g. have the cursor on a Matplotlib command, press Cmd-I to open Copilot Chat, and say “make this plot an inset in the top-right corner of the previous plot above” and it will do it.

The other way that works OK is when drafting new code, I can write some comments about what I want to do and trigger Copilot via a keybinding to draft a first version of the code. It can be useful when working with new libraries since I often don’t know what functions to lookup in the documentation.


Manual triggering via a shortcut key made it a useful tool I reach for in certain contexts.

Automatic mode not being useful was the least of the issues; it used crazy CPU, slowed down the IDE, and drained my battery too.


I use it as autocompletion, and, ironically, to discover and get an overview of libraries instead of trying to find the one good source among all the AI-generated slop websites that Google proposes.


Mostly for boilerplate things like assigning a bunch of things to a dictionary; it usually knows what I am thinking.


Have you ever stopped to consider why you are writing code like that, and if it is necessary in the first place?

I think if you are spamming a lot of code that looks like:

    thing = params.get('thing')
    thong = params.get('thong')
    ...
    thunk = params.get('thunk')
then you probably aren't delivering a lot of value with your code.

In fact that overly verbose code could be a liability because it has to be reviewed (possibly multiple times as people look over the code to see how values are getting assigned).


Yes, I have considered that, and I consider it every time I write code.

When you start using LLMs, you realise how much code really is boilerplate. Error checking, unit tests, if statements, loops: so much code is boilerplate, not just badly written code.


Try to avoid writing code like this in statically, nominally typed languages (Java, Kotlin, C++, etc.): without good compile-time metaprogramming, you're bound to need code like this sooner rather than later [EDIT: you should still try to minimize it; it shouldn't be the first solution you reach for, but it really is unavoidable in many cases]. It's a PITA, but that's just how it is in those languages, and Copilot works pretty well as an ad hoc code-gen tool.


open-remote-ssh doesn't currently work with Positron because Positron does not currently build the bundles that need to be installed inside the remote host. We're definitely interested in making this work, though; stay tuned!


That is great to know. I work in an academic environment and would gladly give feedback on this from the perspective of teaching postgrads. Feel free to contact me; email is in my HN profile.


That's been my problem with any non-MS VS Code distribution: missing Remote-SSH and dev containers, both of which I tend to use a lot, especially Remote-SSH with large datasets that live on remote servers.

Otherwise, I’d be all in for this!


Yes I agree, I forgot about dev containers.

If I am doing ML, most of the time I am using a server GPU with a large dataset. This is one of the use cases that made me use VSCode more than RStudio.

When I have really wanted to use RStudio I use the web version, but it's less pleasant than a simple SSH remote.



It seems a little odd to me that this is not just… a VS Code extension pack?


A good question. The VS Code extension API is pretty powerful but extensions run separately from the main workbench process and they can't draw any meaningful UI on it. This was a great design decision IMHO as it is the core reason VS Code has a reputation for a minimalist UI and good performance despite being based on Electron.

However it also made it impossible to build the kinds of experiences we wanted to with Positron. Positron has a bunch of top-level UI as well as some integrated/horizontal services that don't make sense as extensions. We built those into the core of the system which is why a fork was necessary.

It's a goal for Positron to be extensible, so it has its own API alongside VS Code's API, and both the R and Python language systems are implemented as extensions.


I'm curious whether you would be willing to elaborate on your plans for longer-term feature parity with VSCode. As VSCode receives continued development and new features, you will have the development burden of integrating those updates into your fork. Are you planning to keep up to date with VSCode, or will the products essentially drift apart over time? If the latter, would extension developers have to build separate extensions for your IDE?


We merge upstream from VS Code every month and we plan to keep up to date with it, so extensions will continue to work and we'll continue to inherit new features as they become available.

It's a development burden for sure -- but still an order of magnitude cheaper than trying to build a good workbench surface from scratch.


Warning: This is released under the Elastic License, which for obvious reasons is neither free software nor open source:

> You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software.

> You may not move, change, disable, or circumvent the license key functionality in the software, and you may not remove or obscure any functionality in the software that is protected by the license key.

https://github.com/posit-dev/positron/blob/main/LICENSE.txt


I would expect a bit more backwards compatibility for R packages. When I open a package.Rproj file with RStudio, it has GUI elements to build the package, to test it, etc.

When I open it with positron, it is treated like a text file, at least as far as I can tell by looking at the many icons and pulldown menus.

It is a weird choice, making a new application that cannot handle the key file type from its ancestor.


Positron isn't supposed to be a replacement for RStudio for all use cases (especially not this early in its lifecycle!). If you open your package folder in Positron (as opposed to the Rproj file itself) I think you'll find it has most of the commands you need for package development in the Palette.

We do hope to add better GUI tooling for project-level actions; more info here: https://github.com/posit-dev/positron/issues/1486


That's unfortunate. I've got a personal project that I'm working on at the moment that mixes Python and R and I'm too far in now to make starting over from scratch viable.


Speaking of which: does anyone here on HN feel comfortable recommending a Python IDE that's half-way bearable on iPads?


Pythonista is nicer but ships older Python:

https://omz-software.com/pythonista/

Pyto is maybe less approachable but more up to date, with clang compiler and LLVM bitcode interpreter:

https://pyto.app/

Juno is Python notebooks:

https://juno.sh/

In general I prefer Blink Code:

https://docs.blink.sh/advanced/code


Yeah Blink with the built-in VS Code is so far ahead of other options. Being able to remote back to your machine and pick up right where you left off is unbeatable.


Haven't tried it yet, but personally I'm very interested in Pythonista which also exposes an impressive number of iOS APIs (bluetooth, camera, etc.) that you can use in your scripts. No idea how it functions as an IDE though.


Hosted vs code server is what I used to use: https://github.com/coder/code-server

They've added support in blink as well which is my favorite iOS purchase for productivity on my iPad https://blink.sh/


Not an IDE, but “Carnets” gives you a local (i.e. offline) Jupyter installation on iPadOS that includes NumPy, SciPy, Matplotlib, etc.


I second this recommendation.

I used Carnets for the Advent of Code last year. I ran out of steam long before it did.


Just noticed this:

    Because Microsoft does not allow third-party IDEs to access the official VS Code
    Marketplace ...
Anyone know why?

My wild guess is it means MS doesn't want third parties to build their own VS Code based IDEs (like this one)?


There's some thoughtful speculation on this here:

"Visual Studio Code is designed to fracture" - https://ghuntley.com/fracture/


Very interesting; I hope the (formerly) RStudio people read this. So they are giving up an IDE (RStudio) that was famous for far better graphics and plotting than the Python alternatives, for a half-open third-party platform where Posit is the sharecropper.

This Microsoft "DevDiv" (see the link) sounds like the classic EEE dressed up as "open", "hip" and with all the right buzzwords.


The person that posted that link is the lead architect on Positron. So yeah it's certainly known about!


Visual Studio Code is the new Eclipse, it seems; now everyone and their dog is shipping IDE distributions that are just repackaged plugins.


Relatedly, has anyone found a really good extension for interacting with .tsv/.csv files in VS Code? PyCharm is much nicer on that front.


Did you try the Excel Viewer? It shows CSV files as a table. Doesn’t handle huge files though.


+1 for Excel Viewer; I've heard good things about Data Wrangler as well.


having used rstudio academically, the new ide looks like a refreshed version of it, which happens to come with the familiar vs code sidebar.

personally, i see the value of rstudio (and by extension positron) while learning in a course, but i struggle to find its place beyond data exploration.

despite the licensing stuff, if they can provide some sane defaults (removing the microsoft telemetry "sauce"), it can be an ergonomic way to bring math-sided team members onto the same development platform.


Spyder-IDE is a better option



