R, the master troll of statistical languages (2012) (talyarkoni.org)
184 points by Goldenromeo on Feb 16, 2016 | 148 comments



The problem is people using R without trying to learn about the language itself, just assuming it works like their favourite language.

For example, complaining that R is slow and then writing an iterative solution instead of using vectorization. When I saw the example the author gave, my first thought was "sapply/lapply". lapply is essential to using R, and it is taught early on in every book/course on R I've ever seen.
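For instance, here's the kind of loop newcomers often write versus the idiomatic versions; a quick sketch on a made-up squaring task:

    # Iterative version people coming from other languages tend to write:
    squares <- numeric(0)
    for (i in 1:10000) squares <- c(squares, i^2)  # regrows the vector every pass

    # Idiomatic alternatives:
    squares <- sapply(1:10000, function(i) i^2)    # apply-family
    squares <- (1:10000)^2                         # fully vectorized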

"In 2012, I’m the kind of person who uses apply() a dozen times a day, and is vaguely aware that R has a million related built-in functions like sapply(), tapply(), lapply(), and vapply(), yet still has absolutely no idea what all of those actually do. "


It's been a few years since I really looked at R, but I don't think the problems with R are simply that people don't learn the language. Some languages are simply not as good as others. We can all learn more about the tools we use when programming, I know that I certainly could. But this doesn't make it our fault that a language is tricky or hard to debug or hard to understand. If we worked at it, I suppose we could all write more efficient programs by using assembler, but that doesn't mean that assembler is the best possible programming language for, say, statistical programming.

Someone who knows a thing or two about R, Ross Ihaka, wrote a short post 6 years ago saying we should "simply start over and build something better". Take a look:

http://www.r-bloggers.com/“simply-start-over-and-build-somet...

My hope is that Julia will eventually be adopted as a basis for a future statistical programming language.


From your link, hilariously relevant to the blog post at hand:

"First, scalar computations in R are very slow. This in part because the R interpreter is very slow, but also because there are a no scalar types. By introducing scalars and using compilation it looks like its possible to get a speedup by a factor of several hundred for scalar computations. This is important because it means that many ghastly uses of array operations and the apply functions could be replaced by simple loops. The cost of these improvements is that scope declarations become mandatory and (optional) type declarations are necessary to help the compiler."


I should point out that Ross Ihaka along with Robert Gentleman created R.


The push for R is enormous. It's not getting replaced.


> The problem is people using R without trying to learn about the language itself

It's not the user's fault.

Like, congratulations on being better at R than the author of TFA. Maybe you're smarter than him, maybe you've put in more time learning, maybe you've just spent your time more intelligently, maybe you lucked out and bought better books...who knows.

But this line of reasoning completely misses the author's point, which is that despite having used the language for years, he still finds it inscrutable. "It would be easier if you were better at R" is a tautology, and unhelpful. The issue is that the author finds it hard to become better at R.

We can disagree as to whether or not it's objectively hard to become better at R, but this is a perfectly valid criticism to make. It's not the user's fault.


It's a 4-year-old article and R has changed a TON with new things since then, BUT... R has grown a ton in users as well as features.

R really is a functional programming language that people don't take advantage of. All languages have strengths and weaknesses, and YET the complaint is that R has too many ways to do any one thing, which is exactly what allows us to have data.table, dplyr, ggplot2, and magrittr (piping with %>%). [EDIT: RStudio and RServer are also a big example of R's growth in features and quality]
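(For the unfamiliar, a tiny sketch of the magrittr/dplyr style; the data frame df and its columns group and value are invented for illustration:)

    library(dplyr)  # re-exports the %>% pipe from magrittr
    df %>%
        filter(!is.na(value)) %>%  # drop missing rows
        group_by(group) %>%
        summarise(mean_value = mean(value))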

As I learned R my code changed dramatically, and I think R has one of the largest gaps between the code you start with and the code you write once proficient. My starting R code is really embarrassing.


I think the person you are responding to is simply saying some things are more complex than others and require more understanding and experience. R apparently falls into that category. If the author wants to gain that experience, the time spent on this blog post might have been better spent reading a book on R. Fault might be a strong word, but the author has certainly made a decision about what to spend their time on, and the results are as expected.


Some things require more understanding and experience for no obvious benefit... in which case they're anti-patterns, or at least not best practice in language design.

there are a lot of things in R that are good but it's an old language and there's a lot of cruft.

like "R has three object oriented systems (plus the base types), so it can be a bit intimidating." http://adv-r.had.co.nz/OO-essentials.html

And believe it or not, there are things that require looping through a data frame. When I had to do that a few years ago it was unbelievably slow... going multithreaded was non-trivial, writing that section in C was non-trivial... I ended up rewriting the whole thing in Python and was a lot happier.


If you want to go over a data frame in parallel, just replace the call to lapply() with mclapply().
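A minimal sketch (parallel ships with base R; mclapply forks, so it's Unix-only; df and the per-row work here are placeholders):

    library(parallel)
    results <- mclapply(seq_len(nrow(df)), function(i) {
        sum(as.numeric(df[i, ]))  # placeholder per-row computation
    }, mc.cores = 4)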

I agree about the 3 OO systems, but let's not mix things up here. A casual user (one who doesn't even know sapply) doesn't interact with that.


hmmh, here's the problem I was trying to solve.

data frame has 2 columns, 20 years of portfolio returns, and 20 years of % withdrawn.

using a starting portfolio value, calculate the 20 ending portfolio values for each year and the dollar amount withdrawn.

worked ok looping through the data frame, but was unreasonably slow.

never figured out how to use a vectorized method that could go through the frame building each new element from the one previously calculated.

maybe I missed something obvious?

(the parallel part came in because I was doing it on a lot of portfolios, so to speed it up I just launched the same slow function on several lists of them in parallel. writing that one function above in C probably would have been OK. I think I got it to work, but then I couldn't get the right version of the compiler to work with the right version of R which supported the other libraries I was using. it was a few years ago, so maybe things weren't as stable. I never said I was very good :)
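For what it's worth, the closest thing to a vectorized form for this kind of recurrence seems to be Reduce() with accumulate = TRUE. A rough sketch, with invented column names (ret, wd) and an assumed update rule:

    # assumed rule: new_value = value * (1 + ret) * (1 - wd)
    step <- function(value, i) value * (1 + df$ret[i]) * (1 - df$wd[i])
    values <- Reduce(step, seq_len(nrow(df)), init = start_value,
                     accumulate = TRUE)[-1]   # drop the starting value
    withdrawn <- values * df$wd / (1 - df$wd)  # dollars withdrawn each year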


It's kind of amazing to see someone admit to spending hundreds or thousands of hours using R, yet refuse to spend a couple hours learning the language a little better. Whining that your tools are hard without investing any effort in them is just dumb.

The R help even comes with code samples that you can run!


R is, I think, an interesting language because it's heavily used by people who would not otherwise learn a programming language. If you compare R not with other programming languages, but with other ways of working with statistical data, this makes far more sense. I don't actually "know" SAS in the way I know a programming language - I know the commands I invoke to do what I want it to do.

Similarly, I encounter lots of people using R who don't actually know what a function is, just that lm(x~y) gets them what they want.


I see this as a failure of our educational system.

Speaking as an academic in CS, it's our job to teach people skills that they need for dealing with computers in the course of their career. The Math department does this for basic calculus and probability; the English department does this for literature and composition.

Why don't more CS departments offer the service courses that scientists and engineers need to really learn how to manipulate their data and make sense of it? At least part of the problem is probably that the other departments won't require their students to take such a course...


I'm sure there could be a very strong synergy with Economics and Finance. Especially the cross-over from CS to Finance.


Remember Javascript before ES5? 99% of "web programmers" didn't know the language either.


My personal experience is similar because I know quite a few people in social sciences.

Conceptually, this is similar to rats pulling a lever or monkeys being reinforced to type the right characters. It also explains p-hacking and many other problems of interpretation.

Now one question I always have is - if you consider R just a tool - what is the difference between things I should fully understand (R?) and things I should only know how to use (e.g., my cell phone)?

How can I justify saying that people should understand R while I myself don't understand quite a few aspects of my cell phone?


Also, many people use it only intermittently, maybe once every six months or so when they have some data to look at. Rather than try to relearn the language and its quirks yet again, it's much easier to take what you did last time and tweak it until you get what you need.


This too. Between collecting data, writing grant proposals, writing papers, etc. I don't spend time day-in, day-out using R.

Often the first hour of that is thinking "Shit, how do I do that again? Has Hadley written a package to do this better by now? What did I do last time - why did I do that last time?"


Surprisingly, this is true for many programmers of general-purpose languages.


> The problem is people using R without trying to learn about the language itself, just assuming it works like their favourite language.

I think you just explained Perl in a nutshell (err, the idiom, not the book). It seems whenever a language supports enough idioms of the usual C-like languages, people will gravitate towards those, likely due to the high population of people that know those idioms and can fall back on them without having to think too hard. I doubt Lisp has as much of a problem of people trying to write C in Lisp.


I sympathize with the OP and also feel frustrated with R (and I say that as a regular R "practitioner").

Part of the problem, I think, is the built-in documentation. The typical R user is a domain-expert just trying to get some work done. Occasionally, they'll get stuck and try something like "?sapply". What appears is usually a terse, confusing mess that takes a VERY LONG TIME to digest and is the LAST THING you want to read when you're trying to make a living solving a problem other than understanding R documentation.

Below is the "Description" for Apply (which you get when you try ?sapply). Does it _really_ explain the essentials of what you need to use "apply"?

"... lapply returns a list of the same length as X, each element of which is the result of applying FUN to the corresponding element of X.

sapply is a user-friendly version and wrapper of lapply by default returning a vector, matrix or, if simplify = "array", an array if appropriate, by applying simplify2array(). sapply(x, f, simplify = FALSE, USE.NAMES = FALSE) is the same as lapply(x, f).

vapply is similar to sapply, but has a pre-specified type of return value, so it can be safer (and sometimes faster) to use. ..."


YMMV, but I think this is pretty clear:

"lapply returns a list of the same length as X, each element of which is the result of applying FUN to the corresponding element of X" == lapply is a map() construct that takes a list and a function

"sapply is a user-friendly version and wrapper of lapply by default returning a vector, matrix or, if simplify = "array", an array if appropriate, by applying simplify2array(). sapply(x, f, simplify = FALSE, USE.NAMES = FALSE) is the same as lapply(x, f)." == "sapply(x, f, simplify = FALSE, USE.NAMES = FALSE) is the same as lapply(x, f)"


It could be A LOT more clear. The first sentence is, of course, obvious to everyone and probably already known by people searching for help on sapply.

If you have a family of functions that "sort of" do similar things, the most critical thing to communicate is some clear sense of when to use one or another of the functions. This doc degenerates into unhelpful gibberish instead.

Perhaps a close reading of it would have helped the OP, but there is an unnecessarily high cost in frustration.


> For example complaining that R is slow and then writing iterative solution instead of using vectorisation.

But if someone prefers iterative solutions, or that's all they know, why can't R make them just as fast as the vectorised versions?


>> But if someone prefers iterative solutions, or that's all they know, why can't R make them just as fast as the vectorised versions?

R is interpreted and dynamically typed, so when you declare a variable, the interpreter has to do some bookkeeping to figure out the type of the variable, allocate memory for it and so on.

If you write a loop by hand, the interpreter has to do this bookkeeping once for each iteration.

If you write your code in vectorised form, the interpreter can sort out the bookkeeping once and then hand over to the lower-level code (C or Fortran) the vectorised functions are implemented in.

This can also be further optimised to take advantage of processor vector instructions, parallel processing etc.

So I'm afraid we can't have our cake and eat it too. If we want an interpreted language with somewhat intuitive notation, then it has to have crappy slow loops. If we want a language with fast loops, we have to rely on C or Fortran and forget about vectorised notation.
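The gap is easy to see for yourself; a rough sketch (exact timings vary by machine and R version):

    x <- runif(1e7)
    system.time({ s <- 0; for (v in x) s <- s + v })  # interpreted loop
    system.time(sum(x))                               # one call into C code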


> If we want an interpreted language with somewhat intuitive notation, then it has to have crappy slow loops.

Unless you're Julia, JavaScript or Lua with a fiendishly clever virtual machine. Look at the benchmark figure here: http://julialang.org/


Why can't a JIT solve this? It shouldn't need to do the bookkeeping for every iteration if it has JIT compiled it. A JIT should be able to take advantage of processor vector instructions etc.


There's some movement in that direction.

However, the R core committers are essentially not only volunteers, but they're all (afaik) academic statisticians. One of the people who has made strides in this direction is primarily a computational statistician at Iowa (Luke Tierney / compiler package). Building a high-performance runtime/JIT is wildly out of their scope of expertise.

In retrospect, and I think many of them would agree, building and maintaining their own runtime was a giant mistake. Yet here we are.

Serious compiler people (Jan Vitek, others) have made strides towards a faster implementation (his in java / fastr IIRC), but it suffers from the same problem as cpython: there are millions of lines of C code in packages or internal functions that have the details of the R interpreter / C interface deeply embedded in them. In fact, there's probably far more "R" code written in C than in R. Undoing this mess is not easy, and probably not possible.

Oh, reading Evaluating the Design of the R Language [1] will shed some more light on why it's hard to make R run fast.

[1] http://r.cs.purdue.edu/pub/ecoop12.pdf

edited to correctly describe Luke as per gbrown


I think, and I'm pretty sure most of R core would agree, that building and maintaining their own runtime _was_ the right thing to do. Otherwise R would have been at mercy of maintainers who were interested in problems other than creating an expressive language for data analysis.


I don't think calling Luke an "agricultural statistician" is at all reflective of his work. Not everything in Iowa is corn, and Luke has been working in computationally intensive statistical methodology and statistical software development for decades.


He created lisp-stat in the late 80's

https://www.jstatsoft.org/article/view/v013i09

"While R and Lisp are internally very similar, in places where they differ the design choices of Lisp are in many cases superior. The difficulty of predicting performance and hence writing code that is guaranteed to be efficient in problems with larger data sets is an issue that R will need to come to grips with, and it is not likely that this can happen without some significant design changes."


Hmm, you're quite right; I'm not sure how I came to believe that.


R does actually ship with the ability to byte-compile functions these days, and as that functionality matures it may become the default behavior. It's still better to actually learn the language; it's far easier to optimize something like:

    apply(X, 1, function(x){
        # do stuff to the row of X
    })
than:

    for (i in 1:nrow(X)){
        # do stuff to X[i,], and store it somewhere
    }
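For reference, the byte compiler mentioned above has shipped with R since 2.13 as the compiler package; a minimal sketch:

    library(compiler)
    f <- function(n) { s <- 0; for (i in 1:n) s <- s + i; s }
    fc <- cmpfun(f)       # byte-compiled version of f
    system.time(f(1e6))
    system.time(fc(1e6))  # typically faster, though still far from C speed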


As far as I know byte-compiling won’t actually alleviate the repeated name lookup (or does it?). Unless the R byte compiler is fiendishly clever, every single name lookup in the loop body will still incur what essentially amounts to a `get(name, environment(), inherits = TRUE)` call.


Probably not, but I'll admit to not having dug into it too deeply. In my initial experiments, I found only modest speed gains when byte compiling. Then again, I'm already using C functions wherever possible.


> If we want a language with fast loops we have to rely on C or Fortran and forget about vectorised notation.

Fortran (Fortran 90 specifically) got vector notation 20 years ago.


I suspected this might be the case but I don't know Fortran. Maybe you're right about Julia and Lua also, I'll have to investigate.


Keep in mind just how old the language is, as it started as S in 1976. It was intended to be a glue language for Fortran and C.

Keep in mind also that it's easy to rewrite the bottlenecks (which are only a small part of most programs) in C, C++, Fortran and other languages including D. That may not be what any particular person is looking for, but that's traditionally the way things have been done.


Yea I feel like sapply, lapply, and mapply will cover most of what people need to do. Hell I've personally only really ever needed sapply as I don't work much with lists.


In my experience a lot of the claims that R is slow are greatly exaggerated and made by people who don't actually use it. Kind of an echo chamber. Every time I see someone say they chose Python instead because of speed, I roll my eyes.


My usual take on this: "Between R and Python, the faster language is likely whichever library author actually wrote most of their code in C or FORTRAN."


PyPy is actually much faster than both standard Python 2.7 and R at basically everything that exercises the standard library.


It's funny you bring up Python; I say this not as a comment on your thesis, but as something related. I often hear the "python is slow" trope, but that's only half true: you can typically write Python that is plenty "fast enough" (speaking as a day-jobbing data pipeline engineer) if you implement with an understanding of what will drive you into the mud. This goes beyond just understanding the tool you're using; fundamentally, writing something O(N^N) is going to hurt even if you're in C#/C. I've seen that plenty, frankly more often than I've seen "legitimately slow python".

Anyway, this was just a thought rolling around in my head given the discussion.


I've translated plenty of numerical code from (pure-ish) python to c and c++, and usually get about a 100x speedup, sometimes as high as 800x, implementing the same algorithms.


At the risk of being overly pragmatic, note that I said "fast enough" and not "as fast as possible."

My comment was more on the perception that python is unworkably slow in many situations, where I can count the number of times on my hands that I've NEEDED to C-ify some hot paths.

If you're writing a plasma fluid simulation to run on an HPC cluster, yes, you probably damn well want some straight C/C++. Outside of similarly exceedingly high-throughput situations, CPUs are normally more than fast enough, especially if the application in any way brushes up against people and thus falls into "human time" scales, in which case you'd typically be hard pressed to make things slow enough for someone to notice. (Yet somehow we find a way...)

To a sister post re: where the python->C speedup can occur, and to kill two birds with one stone: I imagine there's a lot of low-hanging fruit. To take one obvious example, anything the compiler can optimize away: memory read/address optimization, vectorization, potentially better support for branch prediction. I can handwave at more, but I am so far from a compilers type that I'd probably make a fool of myself.


> Outside of similarly exceedingly high throughput situations, CPUs are normally more than fast enough, especially if the application in any way brushes up against people and thus falls into "human time" scales, in which case you'd typically be hard pressed to make things slow enough for someone to notice.

This has simply not been my experience. (In a previous job I had reasonably optimized numerical python code sitting on the back end of an api and it was incredibly easy to go over our time budget).


For what it's worth, I believe you; I'd be curious what the workload was / what the time window was, if you're able to say?

I could certainly see myself as having been spoiled with respect to beefy hardware and feasible workload/SLA ratios, but it's led me to a prior where I take the age-old advice against premature optimization pretty aggressively. (Starting projects in Python, naive brute-force implementations for a first pass, readability over a better O(N), etc.)


Nit, but throughput is not the only performance constraint that could rule out Python. The last substantial amount of C I wrote was low throughput but needed to reliably receive, process and respond to packets in single-digit microseconds.


I've had good experience with Cython, which compiles python to C and gets almost all of the speedup of rewriting in C entirely. And in fact, most of that speedup just comes from declaring variable types...


Any idea where the speedups came from? Is it that the problems weren't algorithmically limited in the first place (lots of IO, for example), reduction of overhead (what kind of Python was the code running on before?), or just that the speedup on low-level operations added up cumulatively and came to dominate the other timing factors?

Also, did you change the data structures or use the same ones as in python? Was any of the speed boost data structure related?


Python and similar dynamic languages suffer from the fact that every name access (variable, function, etc.) incurs a dynamic lookup of that name in a (nested) dictionary. Statically compiled languages don't have this. There are fairly recent, clever optimisations that can avoid many of these lookups, but they are not implemented in any of the common implementations of Python, R, etc. (JavaScript has them, though). And even with these optimisations in place we cannot get rid of such lookups altogether, and they kill cache locality and branch prediction.

There are other reasons for slowdown (automatically managed garbage collection is a big one, and so is any kind of indirection, e.g. callbacks). But usually the big one is name lookup.


As a compiler writer, I can tell you that in JS, local variable lookups do not incur any kind of dynamic overhead. The performance of modern JS engines is much closer to C than you might think. Dynamic language optimization is also not so recent. Most of the techniques implemented by modern JS engines were invented for the Smalltalk and Self projects. See this paper from 1991, for example: http://bibliography.selflanguage.org/_static/implementation....

Python is just inexcusably non-optimized. It's a bytecode interpreter, with each instruction requiring dynamic dispatch. Integers are represented using actual objects, with pointer indirection. The most naive, non-optimizing JIT implementation might get you a 10x speedup over CPython. I think that eventually, as better-optimised dynamic languages gain popularity, people will come to accept that there is no excuse for dynamic language implementations to perform this poorly.


I haven’t followed recent development of JavaScript all that closely so my knowledge is somewhat outdated. However, the optimisations that make JS performance close to C in some cases are really recent. Some of the tricks are old, such as the paper you cited. But these tricks only go so far, and in particular even modern GCs simply work badly in memory-constrained environments, which puts a hard upper limit on the amount of memory that JavaScript can handle efficiently. One of the better articles on this subject is [1].

That said, my comment already mentioned that local variable lookup isn’t a problem in JavaScript. It is in R, however; see my example in [2]. Beyond that, both R and Python execution have obvious optimisation potential, which is made hard by the fact that existing libraries rely extensively on implementation details of the current interpreters.

[1] http://sealedabstract.com/rants/why-mobile-web-apps-are-slow...

[2] https://news.ycombinator.com/item?id=11117070


The lookup thing only happens during compilation to byte code or intermediate code, I believe. Once in byte code, there are no variable names, only addresses.


No, unfortunately that is not the case. Lookup happens at execution of the byte code, because variables cannot be looked up at byte compilation. Consider the following case:

    x <- 1
    user_input <- "x"  # stands in for a value only known at runtime
    local({
        assign(user_input, 2)
        print(x)  # prints 2: assign() created a new local `x`
    }, envir = new.env(parent = environment()))
If `user_input` is “x”, the lookup of `x` in the local scope finds a different variable, in a different scope. Hence this lookup needs to take place every time this piece of code is executed.

I’m not sure if Python suffers from similar problems.



All from (3). Definitely not io bound, and using standard python 2.7 (if numpy had been applicable, I would have used it...)

My data structures for numerics are generally really simple, and generally I'm able to go from python list/dict/sets to c++ vector/map/set pretty directly.


I for one write all my statistical code in baremetal assembly. I manage about 5 a year, but they all run very quickly. There is no such thing as premature optimization.


> The problem is people using R without trying to learn about the language itself

I see a lot of blub when I read posts about R. So much so that I start with the assumption that any post about R is a blub post.


I'm a Post-Doc in a small social sciences department in a major university, and am probably the department's ranking R-geek. I did my dissertation, and much of my current work, doing modeling, analysis, and even machine learning in R.

In many ways, I owe much of my success to the power that R has allowed me to wield. Multicore lapplys and ggplot2 are my life these days. But even with this, R drives me absolutely batty, and the documentation, even battier.

I may be competent relative to most, but R feels so taped-together and idiosyncratic that even on my best days, I just feel like a newbie who's built up an army of ugly hacks.

Someday, I'll learn more about the python stats tools and do my stats there. But for now, R it is. Troll on, you crazy bastard.


My data science group is currently transitioning to Python-based development from R. It's actually amazing how much faster Python development is, because of two things:

1. R libraries (and rarely even the R interpreter itself!) tend to have really weird corner-case bugs that crop up every couple of months, and

2. It's REALLY easy to write unmaintainable code in R, and so strange cruft creeps into the code over time.

The Python interpreter and Python statistical libs are rock solid in comparison, and with it we don't spend weeks debugging things caused by unnecessary idiosyncrasies. I just wish we'd started switching sooner and saving our time.


I just wonder if there's rationality behind not changing this (by either switching to a different language, fixing it or, hardcore mode, create a new one).

I know from a very, very different field that you often have to deal with decades-old technology because your employer/professor/etc. is just used to it and, 15 years ago, it simply was the best option. I guess it's an equation weighing time spent learning its quirks against time saved using a more sensible tool. While tiresome, "learning" might actually be the faster way to get things done. But it also carries so much ballast that, if there's a better alternative, it must waste millions of frustrating hours (and actual errors), especially for newcomers who could just as well learn a new tool, faster.


I started in SPSS. This is so much better. And although it drives me nuts, I still spend the other half of the time working in R feeling like a goddamned wizard. So even if it takes an hour and a half some days to figure out how to turn a list of unique factors into a list for lapplying (or something else stupid), it's not bad enough.

Also, remember that I'm the R geek around. Whether it's the best tool for the job or not, in my field, R is the lingua franca for stats. I could swear off R and move to Python Stats, but I'd still be supporting R among colleagues and friends. It's hard enough to convince folks who grew up in SAS to move to R, let alone to learn Python.

Finally, I'll have a hard time convincing editors that some weird-ass python implementation of GAMs or LMER is kosher when they're barely OK with the idea of GAMs. Reviewer two is, shall we say, technologically conservative.


I am in an almost identical situation, just still in the PhD stage.

I moved from Matlab (originally a mechanical engineer) and the biggest shock was documentation and just the internal help stuff in general. The help files on a regular basis require you to understand how something works to understand the thing explaining how it works.

I am often just shocked at little quirks I find trying to do things in R. Not that it is worse than Stata or SAS, but the goal was to be better. I am all for FOSS, and R provides many extensive capabilities not available in Matlab, but in terms of being user-friendly Matlab is so superior it is honestly sad. Oh, and '<-' just drives me nuts... I will never understand the choice of two characters where one is entirely sufficient.


> Oh and '<-' just drives me nuts...I will never understand the choice of two characters where one is entirely sufficient.

It's true that '<-' is a strange choice, but you can use '=' for variable assignments as well.


true


Iirc it comes from the old APL keyboard


Getting an APL keyboard may actually not be a bad idea, considering there are R interop interfaces for Dyalog APL, APLX and J.


well thank you, I always appreciate someone teaching me something :)


np. As was mentioned elsewhere in the thread, S is a pretty ancient language so R brings along some baggage with it :)


R is a language with a lot of gotchas. I usually get burned by characters being converted to factors in read.csv() and by converting factors to numeric (it works, but not how you intend). The R Inferno (http://www.burns-stat.com/documents/books/the-r-inferno/) covers a lot of other gotchas and is worth a read for anyone who uses the language.
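The factor-to-numeric one in a nutshell (a quick sketch):

    f <- factor(c("10", "30", "20"))
    as.numeric(f)                # 1 3 2 -- the internal codes, not the labels
    as.numeric(as.character(f))  # 10 30 20 -- what you actually intended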

That said, the power, flexibility and user community make it my go-to for any first crack at an analysis of data.


What makes R infuriating is most of the complexity and gotchas aren't inherent to the problem.

So you learn Clojure and you inevitably meet the collections. And it takes like 5 minutes to tell you how to map and how to reduce and then the lecture ends with "and it just works". And in fact it does just work.

Then you learn functional R and the first five minutes are the same as the Clojure experience. Then the slow-motion train wreck starts: "And R likes mapping so much, we have nine microscopically different apply statements for lists and tables, and they input some things and output other things, and if you pick the wrong one the failure looks like the Trinity nuclear test but more impressive". Every R language lecture is like that: five minutes of how real languages do it, then the rest of the 45 minutes is endless pitfalls and accidents. It's like a 45-minute fever dream or nightmare, "... and if you accidentally tapply, table apply, to a list, then it coerces the input to ..." and you drift back to Cthulhu, or maybe away from, whatever.

Pragmatically, if you teach R as a statistical analysis language, what looks weird often enough turns out to be super convenient. But if you try to teach and learn R as a general-purpose computational language, you wonder if it's a joke. Nobody would actually use Intercal or BF to run analysis, would they?

It's a very powerful system in spite of the language. Think of PC hardware architecture going back to the old XT days: it's sinfully ugly, but it's quite capable. R is no PDP-11 or VAX, that's for sure.


Factors are one of the worst things in the R world. I don't recall ever needing factors, yet they creep in with many functions (read.csv, cut).

Btw there's a nice readr package (from Hadleyverse) that has a read_csv method that does away with factors by default.


You should use factor for data cleaning and verification.

So you have "sex" on the questionnaire, and factor will very quickly identify contamination such as "often", "not yet", various mis-spellings, etc.


How would you represent categorical data then? R's primary use case isn't text processing. And HW isn't always right.


As character, for instance (in particular, characters can do everything factors can do when used in conjunction with `unique`, and sorted factors can be represented as a combination of characters and numerics). Factors work better, but only barely. In particular, they are nowadays not any more efficient than using character (!). They used to be, which is why they are liberally used everywhere in R's base libraries.


"In particular, they are nowadays not any more efficient than using character"

How could a comparison of two strings of unknown size be as efficient as comparing two integers? I'm curious to learn something new.


R uses a global string cache so any string comparison is just comparing two pointers.


You will (inevitably?) run into factors when importing data from SPSS files... sure, you can discard them upon reading... but are you sure you don't want access to the value labels in the future?


Factors are weird because no other language has anything quite like them, but they are actually a quite clever way to group data. It just takes a while to get used to them.


I actually use factors a fair amount, and having factor-like data shoved into numeric values gets you to some bad places statistically.


You must not do a lot of regression with categorical data, then. I use commands like `lm(y ~ (x1 + x2) * factor_variable, data = d)` and `xyplot(y ~ x1 | factor_1, groups = factor_2, data = d)` all the time.


Those also work just fine with strings.


Via an implicit call to factor, right?


Factors are great, and surprisingly powerful even outside of statistical computing. With that being said, I prefer to create them on purpose rather than having read.csv attempting to be helpful.


A factor already is a vector of numeric values, which happen to have names.
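Strictly, integer codes plus a levels attribute rather than names; easy to see with unclass():

    f <- factor(c("b", "a", "b"))
    unclass(f)  # 2 1 2, with attr("levels") = c("a", "b")
    levels(f)   # "a" "b"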


I have a love-hate relationship with R, being a predominantly Clojure (and Ruby these days) programmer who only occasionally dabbles in data crunching.

The apply/sapply dichotomy that the article mentions (actually a hexachotomy: there are also lapply, mapply, tapply and vapply) is one example of the gazillion warts the language has.

Another random one: R has a useful function, paste, that concatenates strings together. Only it takes varargs, not a character vector, so if you have a vector v of strings, you have to use do.call(paste, v). Only not, because do.call insists that its second argument be a list, not a vector, so you do do.call(paste, as.list(v)). And if you want to separate the strings, say, by commas, you have to affix the named argument sep, obtaining do.call(paste, c(as.list(v), sep=",")).

And R's three mutually incompatible object systems. And so on and so on and so on.

There are things to love. The packaging system works really well. I like the focus that R puts on documentation: hardly anywhere is it so comprehensive, with vignettes and all. There are things plainly inspired by Lisp (R is just about the only non-Lisp I know that has a condition and restart system akin to CL). And ggplot2 is one hell of a gem of API.

In many ways, R is the PHP of data science. (Though the core language's still nowhere near as abysmal as PHP.) Despite all the warts, there are all sorts of statistical analyses that are just an install.packages() away. Put another way, R is to data science what LaTeX is to typesetting. It's a heavy pile of duct tape, but it's here to stay because it's just so damn useful.


See, this is another interesting example of the kind of behavior described here: https://news.ycombinator.com/item?id=11113042

People who don't take the time to learn the language are having to go through these contortions to make R work the way their favorite language works, rather than just taking the time to learn how R works!

    R has a useful function, paste, that concatenates strings together.
    Only it takes varargs, not a character vector, so if you have a
    vector v of strings, you have to use do.call(paste, v).
But the help for the `paste` function literally goes over this exact situation:

    > v <- 1:5
    > paste(v, collapse = ",")
    [1] "1,2,3,4,5"
I'm often super baffled by the lengths people will go to not figure out how to use R and insist on writing <X> language in R.


Thanks for pointing this out. I overlooked it, presumably because it's in the last paragraph of "Details" and not illustrated in any example.

I still maintain there's a wart in what I'd described, which is `do.call` not accepting vectors as the second argument. Also, `collapse` is idiosyncratic: I have to remember a special knob for every function that has a vararg and non-vararg flavour.

You raise the point of taking the time to learn the language, and I acknowledge this. Yet, as an occasional user, this is precisely what I'd like to avoid. When working with R, I'm pragmatic: what I'm after is a working solution to the problem at hand, rather than its most succinct or elegant formulation. When I find one, I move on. In production code this would incur a technical debt, but due to R's exploratory nature, this is typically not much of a problem. Had the language been more consistent, it would take less time to learn it thoroughly.


You might enjoy purrr, https://github.com/hadley/purrr, which is my attempt to make FP tools in R more consistent.


I wonder what disagreements or counterarguments downvoters might have, and if they might share them.

For the parent, given what you've observed, do you still go to R for data crunching, or have you found anything in Clojure land that measures up?


R. Or a mixed approach, with Clojure for data preprocessing and R for the analysis proper. Case in point: I wrote the Clojure scraping library, Skyscraper [1], and made it output CSV by default so as to be able to easily drop the resulting files to R.

For statistics, Clojure has Incanter, but it's very basic in comparison. There are easily usable Java libraries for certain tasks (MALLET comes to mind), but these are few and far between.

[1]: https://github.com/nathell/skyscraper/


You can also use paste(v, collapse = ",")


My biggest complaint about R isn't the inconsistency and obtuseness -- I've been using it long enough to get familiar with the documentation and the zillions of varieties of apply. My problem is the data structures.

R has only a few core data structures: vectors, lists, arrays, and matrices. Data frames are built on top of lists, and admittedly data frames are incredibly useful for statistics -- there's a reason pandas exists, and a reason data analysis is much more tedious in other languages.

But there are no hash maps or sets (lists have named elements, but with O(n) indexing; the only hash tables available use environments and accept limited types of keys), no tuples, no structural or record types, stacks and queues only recently became available on CRAN (through C), and so on.
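(The environment trick, for reference; a minimal sketch, and note the keys can only be strings:)

    h <- new.env(hash = TRUE)
    assign("alice", 1, envir = h)
    get("alice", envir = h)   # 1
    exists("bob", envir = h)  # FALSE
    ls(h)                     # "alice"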

This leads to the folk belief that the only way to optimize R is to vectorize code or to write it in C or C++ (with Rcpp, for instance). No statistical programmer ever thinks about choosing the right data structure for the job, since you basically only ever use lists and data frames. Fast operations on data structures (like graph algorithms) have to be written in C. There's just no way to do it in R.

When I co-taught a statistical computing course, covering the basics of data structures and algorithms, I included some homework assignments where the difference between a fast and a slow algorithm was the choice of data structure. R users struggled because they had very little available to them. If their code wasn't fast because they were doing O(n) list lookups in a loop, there wasn't anything they could do to fix it.

I hope Python and Julia can eat R's lunch. Some day I'll have to get around to trying Julia for a serious project...


The lack of data structures in R is a totally fixable problem.


Sure, through packages, but you'd need to adapt the entire standard library to take advantage of them, so you could pass new data structures to built-in functions and get meaningful results.

Generic iterators would also be extremely useful to build in, so it's easy to work with a wide variety of structures.


And generic functions allow you to fill in those missing pieces from a package too.


I think the general consensus is that R is a terrible language with a lot of useful libraries. I especially like that R claims to be inspired by Scheme, but the memo seems to have been "Make sure we f*ck this all up" taped to the front of the "Lambda the Ultimate" papers[1]. In particular, lexical scoping was one of the key innovations in Scheme, and R has pervasively buggered up its implementation, from not distinguishing between defining and mutating a variable to making the default save/load procedures mutate the environment. OMG does R drive me insane (as a programming languages person).

[Saying "there is package on CRAN that fixes this" is not a solution. A language shouldn't require extensive knowledge of the ecosystem to get the basics working properly.]

[1] Scheme was introduced to the world in the "Lambda the Ultimate" series of papers. See http://library.readscheme.org/page1.html


No, that is not the consensus. R is (at its heart) a beautiful language that is extremely well suited to its domain. Most people who use R are not professional programmers (or even identify as programmers) so it's not surprising that there's a lot of bad code written in R.


Writing a variant of this article has become a rite of passage for all serious users of R. There are two issues that contribute to the difficulties people experience with R. First, yes, R can be confusing at times. Tal explains this really well, but only scratches the surface; there is so much more confusing and counter-intuitive stuff, for example with regard to factors, that only very few people seem to understand fully.

However, there is a second issue, and this is less often acknowledged: people expect R to be immensely powerful and at the same time easy to use, which is really not a very reasonable thing to expect. This attitude is fairly specific to the R community. No C++ developer would dare to write a long rant about the shortcomings of C++ while at the same time nonchalantly admitting that they never made a serious attempt to learn it. One symptom of this problem is that hardly any self-proclaimed R hacker has read Matloff's book "The Art of R Programming", which was for a long time, and perhaps still is, the only book on R programming. The mere fact that there is (or was) only one such book speaks volumes.


I agree with everything here, including the praise for Matloff's book; it should be the very first book any serious R user picks up.

But Matloff is no longer alone: Hadley Wickham's Advanced R is now also a must-read for R programmers.

http://www.amazon.com/Advanced-Chapman-Hall-CRC-Series/dp/14...


And Advanced R is also available for free at http://adv-r.had.co.nz/


There are other books on R programming: John Chambers published "Programming with Data: A Guide to the S Language" in 1998 and "Software for Data Analysis: Programming with R" in 2008.


No better way to elicit information from people than making outrageous claims. Thanks for those references!


Way too many languages are trolls, or have troll features. Or in other words, too many languages have features that don't do what a reasonable person would expect them to do.

I've long considered implicit type conversion to be a troll feature, especially how Javascript does it. Another one is how differently Java treats primitives and Object types. Oracle databases treat nulls and empty strings the same.

At times like this, all I can do is lament and search in vain for a language with no troll features.


You can take a look at Rust. The authors are really careful to design an elegant C++ replacement, with no troll features and zero-cost abstractions.


I think this article nails exactly what's right and wrong with R.

This in particular sums up the learning curve of R.

> Thankfully, I’m long past the point where R syntax is perpetually confusing. I’m now well into the phase where it’s only frequently confusing, and I even have high hopes of one day making it to the point where it barely confuses me at all.

Warning personal opinion ahead...

R, the language, can get you up and running a lot faster than other languages for statistics, say Python with pandas or scipy, but even people who use it on a daily basis will curse the language's "quirks". I find most of the confusion comes from R trying to be too friendly to the user via type conversions. The ease with which R's type system converts values probably caused me more grief when first learning the language than any other issue I ran into.
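A few examples of the kind of silent coercion I mean (a quick sketch):

    c(1, "2", TRUE)  # everything silently becomes character: "1" "2" "TRUE"
    TRUE + TRUE      # 2 -- logicals silently become numeric
    "1" + 1          # ...but this one errors instead of coercing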

And this illustrates the down side of using R

> library(Hmisc); apply(ice.cream, 2, all.is.numeric)

> …which had the desirable property of actually working. But it still wasn’t very satisfactory, because it requires loading a pretty large library (Hmisc) with a bunch of dependencies just to do something very simple that should really be doable in the base R distribution.

Since R is rarely a programmer's most-used language, I find there tends to be an above-average amount of google-and-paste code that pulls in 50 different packages, each of which is used on 1-2 lines of a 1000-line script. Perhaps this is just a function of most programmers not really understanding the mathematical domain, so they slowly google and iterate their way towards a solution.

Often I'll see people pull in 5 different time series libraries just because each of them operates on a ts object, so they can all work on the same object, and each one provides one additional method the others don't that the programmer needs for their solution.

You'll hear people talk about writing R in the Hadley universe or the base R universe, but there isn't much talk about what a canonical R solution looks like. R is a great language in the sense that Perl and C++ are: it allows you to do anything, but there often isn't an agreed-upon way of writing it, and two different programmers can come up with wildly different but valid solutions to the same problem.


I think it's also due to the fact that installing an R package is just one command away: install.packages("abc"), then library(abc), and you're done. It's a blessing, but it encourages loading swaths of libraries.


My thoughts (a bit of a rant) on R, as the Lead Engineer at a data-science-focused company: R is a great statistical language, but a poor programming language. By programming language I mean a language which is versatile across a variety of needs (web app, command-line app), such as Python, Ruby, etc. R has the capability to be used as a programming language the way a climbing rope can be used as a belt. It can, but it shouldn't, for some of the points I have below.

It is great for exploratory analysis, as it is forgiving and easy to use in the console for testing things; but once it needs to be put into practice, it has issues. For a non-programmer, grasping R isn't too hard thanks to some great developers in the community.

There is a lot of good in the R community, but people are focused on making R something it isn't. Just look at deploying R into production; that can be a nightmare. I've spent days looking over code to figure out where an error in production lies. One of the errors was in a dependency of a dependency that had been updated for the first time in years; that package depended on another package, which a function my package called depended on in turn. Basically, it was a mess of dependencies. And there are some misconceptions: while doing the engineering work in R, I had learned not to use for loops. Then one day I timed it, and the for loop was 10x+ faster than any apply/plyr function, including using a GPU.

The things that separate a programming language from a statistical language: a programming language has more than one of these:

* Good dependency management

* Easy deployment into production environment

* A clear way to setup environment (e.g. naming, folder conventions)

* Ability to do most of the things you want with the base packages

* Good documentation about the above.

Basically, I believe a good data scientist is someone who can use R (or something else) to explore data and then implement the algorithm in a compiled language to be put into production. And for someone who just needs to produce analyses for research or a paper, R is the perfect fit. R is an excellent language for its use cases; just don't think about using it for general programming. That has cost us a lot of extra dev hours working on issues with it.

Little plug, we wrote a piece on hiring data scientists.[0]

[0]: https://gastrograph.com/blogs/gastronexus/interviewing-data-...


To contrast this - I'm the lead data scientist at the same company, and head over heels in love with R....

It is the only language in which I can quickly and efficiently jump from algebraic topology for novel pre-processing straight into model building and validation, with just about every potential variation of every major algorithm freely available and packaged on a well-curated package manager (CRAN), and then ensemble them.

I _agree_ that it's a bit difficult to use in production, and that dependency management needs work (Packrat is trying to do that), and that blindly trusting packages on CRAN can cause errors - but 98% of the time - it just works. Graphics, models, crazy niche things that are currently only used by one post-doc locked away in a top secret research lab... it all just works.

Of course, take this with a grain of salt: this is coming from a guy who's built web-servers (HTTP responses and all) in R.


R's server sockets can't select(), so unless you've reimplemented that as well, don't count on handling concurrent requests with that web server.


My own personal rant, I think the specific feeling I get is the conceptual idea of R has long since outpaced the reality of R.

People like to fetishize data, and R sure lets you do that. The data science landscape, however, is growing such that R is really just a one-trick pony; that one trick, for better or worse, is being the gold standard of statistics and modeling, somehow.

But everything else wants to sugar coat the software surrounding the statistics, and leaves you no room to grow.

This is a very over-simplified example, but you sort of can't learn much about graphic design or good communication skills by using ggplot2... you can make something look very, very nice, hopefully, in the general case, sure. And you can definitely do all kinds of hacks and crazy code to make it do whatever you want, but by doing that you produce ever more fragile and environment-dependent code. You'd be better off learning just about anything else for graphics (straight SVG, D3, Processing, Cairo directly, etc.): it's a bit more of a problem starting up, but it's a generalized skill set that could allow you to grow.

You also learn pretty much nothing about web development from Shiny. Shiny is a wonderful idea, but ultimately prevents a statistician from implementing what it promises, which is an analytic application. At some point, you have to ditch it and learn more traditional web stacks. It is also something of a sales funnel into a server solution that's a DDOS or security nightmare just waiting to happen.

So instead of just griping, I guess I have some ideas... it would be nice to have a Ruby/JS/Java/Python service generator. It would be nice to have a D3/React/whatever-based generator. It would be nice for there to be a data munging solution (or even whole models, more like PMML-type stuff) that can be generalized into something that could be compiled, or that generates Python/Java/Bash/JS/whatever code.

Ultimately you start thinking along those lines, and you realize that the promises R is making about empowering the analyst are just teasing them rather than helping.

R could do with less magic and more concentration on being simply a great statistics engine that integrates better. I guess it is that to some degree, but it sure fails the rest of the technology world that tries to live with it.


I disagree on multiple fronts!

1) ggplot does exactly what it is supposed to do: create data visualizations. It made no promises for interactivity or display, and in fact, it was originally designed for creating publication quality charts, which it continues to do well.

1.5) ggvis is a D3 API wrapper on ggplot and allows for interactive graphics. Do you want to pay your data scientists for creating production ready graphics or let them focus on what they're best at?

2) R has been growing. Outside of neural networks (which R needs to catch up on), R gets almost every pre-processing and modeling algorithm first, and distributes it for free. Furthermore, it has better sampling options, metric options, augmentation options, and model ensembling tools (stacking or meta-modeling) vs any other language or framework. It is the gold standard.

3) I don't think there's any "magic" in R. It's just a language with a learning curve and lack of opinions.

4) Last point: R is really not built for the web (it's older than Python!); it's built for data science. There's no reason you need to run your modeling stack in the same language as your application server. R is perfectly capable of writing to databases or sending API responses in JSON or PMML.

/endrant

Not trying to start a flame-war - but this type of difference in opinion is important to see when thinking about hiring data scientists or deploying models.


I sincerely appreciate your thorough and well reasoned response, and thank you for taking the time with it.

1) I agree, perhaps I was trying to allude to visualizations being more than charts. ggplot's charts are absolutely gorgeous and simpler to make than even I remember them being in Lotus 123 for DOS. However, there are some things in the periodic table of visualizations (http://www.visual-literacy.org/periodic_table/periodic_table...) that it can't do. And what about hybrid combinations in the same chart? Could I have a bar chart where the bars are also mini-spectrograms? I can instantly think of how to do this in SVG or Processing, but I'm not sure where to begin with ggplot ... maybe it is possible. Of course why would you?

1.5) I guess I don't want to pay someone else to do the custom thing in D3 that the statistician can almost do with their code, or try to get a regular web stack developer familiar with D3 to actually get it working the way the statistician says.

2) Yup, totally agree!

3) I think Shiny takes a lot of liberties and makes a lot of assumptions that users of it can't even express to me are important to them because using Shiny completely hides the underlying concepts of how it is implemented. I guess I would definitely call that magic. You're right there's quite a lack of magic in most of the language and packages, however.

4) I think data science can/should/does embrace the web. I think the modeling stack shouldn't be on the application server, but a trained model perhaps should be? I also wouldn't trust the stability of R for performance critical API calls without a lot of redundant instances and a lot of load balancing.

Anyway... the real problem is that you're also absolutely right. There is quite a bit of difference in opinion between the tooling of an analysis effort and the robustness expected by IT.

Thanks again!


I'd divide my work into three categories:

- Exploratory plots just for me.

- Plots that I'm going to show to my boss or coworkers.

- Things we're going to distribute to the whole world.

Plots in category #1 are often quick and dirty--I just want to see if an idea worked and don't really care about communicating that idea cleanly.

I could show these plots internally, but it often helps to clean them up a bit first. This avoids us getting bogged down in whether we should be comparing the red/blue lines here or the circle/diamond points there.

This is where ggplot shines--I can go from #1 to #2 with minimal effort. The final version usually still needs some tweaking, but only a small fraction of the plots ever get this far and some of this customization really needs a human in the loop (e.g., in Illustrator or something).

Similarly, while you can use Shiny as a final product, it's actually great for letting moderate-sized groups play "what-if" with the data. It's certainly easier than sending them a huge PowerPoint deck with "choose your own adventure"-style instructions.


Except that you miss a very important point - nobody cares about the stuff you listed. I build models and use shiny to create a front end for clients to interact with them. They are very happy and pay me very handsomely. I can assure you that this is the case across the board. R is for analysts, not for programmers. It seems like programmers feel intimidated, because analysts now code their solutions themselves.


I have my fair share of problems with R, but that first example (4 ways to select a column) seems a bit silly. Just off the top of my head, I could think of plenty of ways to do the same thing in Python/pandas:

    ice_cream.icol(0)        # by position (older pandas API; note it's a method)
    ice_cream['col']         # by name
    ice_cream.iloc[:, 0]     # by position
    ice_cream.loc[:, 'col']  # by name
    ice_cream.ix[:, 'col']   # mixed positional/label indexer
And if you wanted to make things more convoluted, you could also wrap things into lists like the author did in the R example. So this is definitely not a problem that is unique to R or any reasonably flexible language.


For R, at least some of it is due to R's flexibility:

The x$name syntax stems from data frames really being lists in disguise; x[["name"]] ditto, plus it's useful to be able to access by string (see reflection in other languages).

x[,"name"] and x[,1] work because we can also apply the matrix syntax to data frames.
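For example (a minimal sketch; the data frame here is made up):

    # four spellings, one column -- all return the same numeric vector
    df <- data.frame(name = c(10, 20, 30), other = c("a", "b", "c"))
    df$name        # list-style access
    df[["name"]]   # list-style access by string
    df[, "name"]   # matrix-style access by name
    df[, 1]        # matrix-style access by position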


Ok, but this seems pretty trivial compared to the many exclusive advantages R has. I've had minimal problems using and extending other people's software packages written in R (for bioinformatics). This has definitely not been the case with Java or Perl, where just installing said software package is often unusually painful or impossible.

I think R is a prime example how useful a domain specific language can be. As such, I see Julia as the most viable replacement, although that will take a long, long time.


So, as a code person rather than a stats one, my first reaction was that in the first example there is in fact only a single way to access a column but multiple ways to specify which one, all of which made immediate intuitive sense to me.

So I wonder if this is less about R specifically and more a feature of people approaching a language (any language) without that code-geek intuition for the underlying affordances?


Most of my university classmates' first exposure to programming is using R in a statistics class. It's awful. I wish they'd make Python or something a prerequisite, so that giant swaths of people don't get turned off of computing or start with the strange ideas it teaches.


It sounds like you are arguing for imperative programming over functional programming.


R is a fine tool, but (like Java or C) not the best window for a beginner into the joy that programming can be. The syntax is pretty weird and its semantics don't align that neatly with broadly-useful ideas for reasoning about programs.

We run 3 different intro sequences in Python (for non-majors), Scheme (for most majors), and Haskell (for those who are already strong imperative programmers). They're all great.


What does that have to do with R vs Python?


There are certain languages that are good for a first-time programmer.

R, despite being one of the first languages a budding "data scientist" might want to use, is probably not one of them for the many reasons given, among them:

- there are way too many ways to do everything

- implicit iteration (although great for statistics) makes performance issues hard to spot

- the data structures are a bit too flexible (it is Lisp-y in places), and you really need to understand them all to deploy the *apply and plyr functions effectively

- 3+ object-oriented programming systems

- non-standard evaluation. It's all over popular libraries like ggplot2, because it increases terseness, but it just looks like magic to beginners.
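To make that last point concrete, here's a minimal illustration using the built-in mtcars data set:

    # non-standard evaluation: subset() captures the expression mpg > 30 and
    # evaluates it inside the data frame, not in the calling environment
    subset(mtcars, mpg > 30)
    # the standard-evaluation equivalent a beginner would expect to write:
    mtcars[mtcars$mpg > 30, ]

Terse, yes, but nothing in the call itself tells a beginner why a bare mpg works there and almost nowhere else.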

Basically, all the chapters listed here [0] -- which happens to be a great guide for experienced programmers to really understand R as a language -- happen to be the same reasons beginners give up too quickly.

Python, although it nags me enough with its one-way-to-do-it motto and its many warts [1] that I don't want to use it regularly, is just well-rounded enough to be a much better language for beginners. With Anaconda and IPython installed, I've found that a total programming beginner can actually get productive pretty quickly, even on stats and math problems.

[0]: http://adv-r.had.co.nz/

[1]: https://wiki.python.org/moin/PythonWarts


This could change your life: http://adv-r.had.co.nz/


> The upshot is that unless you carefully read the apply() documentation..., you’re hosed.

One thing that jumps out at me, having returned to R after several years in the Python world, is how obtuse its documentation can be.

The standard format for R documentation does a few things that I find impede understanding. First, the help pages are organized into sections giving the high-level description, the arguments, the details, and the results ("values"). The "details" generally are organized by argument keyword, and the arguments section draws on the language laid down-- usually in a vague, high-level way-- by the description section. Finally the practical effects of the details are deferred till the results section. That means unless you already know what's going on, you end up having to jump around among sections, trying to synthesize everything.

This is particularly a problem for those help pages-- and there are a lot of them-- that describe a raft of related functions all at the same time. Describing a bunch of related functions in the same place sounds like a good idea (it should help you figure out `apply` vs `sapply`, right?). Yet this is exactly when the documentation organization results in the most scattershot reading, because in addition to having to synthesize between sections, you have to mentally prune away text that, for one reason or another, doesn't apply to your particular case (for example, because different functions don't all share the same arguments, or because you want to read about the values for just one variation on the function).
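(For instance, the one distinction most readers are hunting for in that pair could be shown in two lines; a quick sketch:)

    # lapply always returns a list; sapply tries to simplify the result
    lapply(1:3, function(i) i^2)  # a list of three length-1 numeric vectors
    sapply(1:3, function(i) i^2)  # a plain numeric vector: 1 4 9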

Another idiom I dislike in the standard R documentation is how the examples don't actually show any sample output. There are generally some attempts at comments to explain what the sample code should or shouldn't do, but they are very much written in the style of programmer's comments, not in the style of documentation or learning points. So you end up having to run the code, and sometimes puzzle over the results for a while.

Here's an example, from the help page that I happen to have open right now, `help(sample)`:

    # sample()'s surprise -- example
    x <- 1:10
    sample(x[x >  8]) # length 2
    sample(x[x >  9]) # oops -- length 10!
    sample(x[x > 10]) # length 0
The comments alert me that there's a "surprise" in store, and they even allude to the (apparently surprising) fact that the second line produces a 10-vector. Notably lacking is any explanation of what's meant to be surprising here, how that relates to the internal logic of `sample`, or how to avoid falling into the trap.
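(For the record: the trap is that when its argument is a single positive number, sample(x) means sample(1:x). Here x[x > 9] is just the scalar 10, so the second line returns a permutation of 1:10. The usual defensive idiom is a wrapper along these lines -- the help page itself defines a similar resample() further down:)

    # always sample from the elements of x, never from 1:x
    resample <- function(x, ...) x[sample.int(length(x), ...)]
    resample(x[x > 9])  # length 1, as intended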

Overall, I feel like R's documentation is a bit like a conversation among experts, with a rather sink-or-swim attitude towards newcomers.

Documentation is far from the first thing that stands out about R vs Python, but it's the most salient, I think, in the context of the original article.


> This is particularly a problem for those help pages-- and there are a lot of them-- that describe a raft of related functions all at the same time. Describing a bunch of related functions in the same place sounds like a good idea (it should help you figure out `apply` vs `sapply`, right?). Yet this is exactly when the documentation organization results in the most scattershot reading, because in addition to having to synthesize between sections, you have to mentally prune away text that, for one reason or another, doesn't apply to your particular case (for example, because different functions don't all share the same arguments, or because you want to read about the values for just one variation on the function).

This reminds me very strongly of man pages. Man pages group either similar (man 3 printf) or closely related (man 3 malloc) functions, and intersperse bits about each of the functions documented by the page, which ranges from difficult to read to mind-boggling (when you have half a dozen near-identical functions being documented at the same time). Reading the lapply documentation page [0], it looks very similar in organisation, and similarly difficult to parse/use.

> The comments alert me that there's a "surprise" in store, and they even allude to the (apparently surprising) fact that the second line produces a 10-vector. Notably lacking is any explanation of what's meant to be surprising here, how that relates to the internal logic of `sample`, or how to avoid falling into the trap.

On http://www.inside-r.org/r-doc/base/sample the surprise is explained in the first paragraph of the details, with one hell of an understatement ("this convenience feature may lead to undesired behaviour"), but without the big red blinking box it would definitely deserve.

[0] https://stat.ethz.ch/R-manual/R-devel/library/base/html/lapp...


Not intending to start a language war here, but could somebody with experience in both R and Python/pandas/etc answer: how does the current state of the emerging Python data/statistics ecosystem compare to R? (Not counting all the other differences, like R being allegedly weird, Python being more general-purpose, and so on.)


Assuming you don't want to count things like "you can call R from Python, and Python from R", here's my take on it, as someone who uses both languages:

- Pandas has helped Python tremendously, but I don't think it's quite to where the R data frame is.

- For 90% of what someone who wants to do statistics wants to do, it honestly doesn't matter at all. You can do nice data visualization in both. You can fit most generalized linear models in both.

- At the cutting edge, R still takes the cake. Odds are if someone has developed a new method (especially outside machine learning), it's in R before it's in Python. Your local university's statistics department is likely running R (or SAS), not Python.


We did an evaluation recently. Not even close.


Is that evaluation somehow public, or could you share some details?


I think if you are a programmer or have some programming-language experience, R is not very weird. But if you are a financial analyst, a social scientist, or a statistician who only wants to get work done, it depends on your first programming language. If it was S3, you are golden. If it was BASIC, you are not so golden. Mine was Lisp.


I just introduced a friend of mine to R. He's working on his PhD in microbiology and was beside himself once he started working with R. Personally, I can't believe he hadn't used it before. It really is a beautiful language to work with once you get a handle on it.


The only issue I have with R is that it's not great when exposed as a web service. For example, you will need one container for DeployR and another for your web service. It's not the end of the world, but more moving parts means more problems.


Maybe R should have optional "training wheels" that produce a warning every time an implicit conversion happens. In the OP's case, it would warn that a data frame was implicitly being converted to a matrix, and maybe also warn that the numeric vectors within were being converted to character in order to get slotted into that matrix.
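(The coercion in question, demonstrated with made-up data; note that nothing warns along the way:)

    # mixing a character column into a matrix silently stringifies everything
    df <- data.frame(flavor = c("vanilla", "mocha"), rating = c(4.5, 3.9))
    m <- as.matrix(df)    # implicit data.frame -> character matrix conversion
    m[1, "rating"]        # "4.5" -- the numeric quietly became a string
    class(m[, "rating"])  # "character"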


R is a great language, but at the same time it can be a real pain.

Sometimes I imagine some very wise person designing a language that is much more concise and coherent, yet could at the same time take advantage of the huge number of existing libraries written in R and C++... Maybe it's a dream, but so many times I wonder whether that would even be possible.


At my previous job we used to play "Guess what R does" over lunch. Someone would write a few R statements and we'd have to guess the output. Extremely difficult!

    > a <- c(1, 2, 3, 4)
    > b <- c(1, 2)
    > a + b
Any guesses?


+ is a vector operation and is applied elementwise. R recycles the shorter vector, and you get a warning if there are elements left over. This is something you should know within the first 5 minutes of learning the language, hopefully?


2, 4, 4, 6. The second vector is recycled along the first one:

1 + 1, 2 + 2, 3 + 1, 4 + 2.
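And at the console, including the case where recycling does warn (a quick check):

    a <- c(1, 2, 3, 4)
    b <- c(1, 2)
    a + b           # 2 4 4 6 -- b recycles silently, since 4 is a multiple of 2
    a + c(1, 2, 3)  # 2 4 6 5 -- still recycles, but now with a warning,
                    # because 4 is not a multiple of 3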


Yes, I'm well aware of R's many faults; I have my own long list of R caveats that I hand to new hires. But not bothering to learn the damn language is no reason to complain about it. First, RTFM.


Look, I am all for RTFM, but my experience with R has been different. Can I make it do powerful things? Yes. Can I take code that won't run, copy it to a new file, and have it run? Yes.

1) There are just quirks in the implementation that are nonintuitive. I constantly find myself doing things the way I would in other languages to accomplish simple tasks, only to have them fail for no good reason -- albeit an obvious one once I find the right documentation.

2) The manual you describe is flat-out unhelpful in many cases. The suggestions that constantly come up -- "check Stack Overflow / Google it" -- are suggestive of just how poor that documentation is.


You wouldn't find it intuitive if your first language was Scheme.


Absolutely fair enough.


ITT: people who don't understand R complain about R.


Ah, the joys of a dynamic language and its implicit conversions.




