This is because Ruby inherits its approach to flow control from Smalltalk, while Python comes from a C/Algol-like heritage.
In fact, Smalltalk takes this much further, such that basically all flow control (including if-then-else) is handled as message sends (e.g. if-then is just a message sent to the Boolean object taking a block as its argument).
The downside is the syntax can feel a tad clunky. The upside is an incredibly simple and consistent language grammar, while making it trivial to create new flow control mechanisms since the language has all the tools baked in (primarily first-class blocks).
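For the curious, here's a minimal Ruby sketch of that Smalltalk style (the method names if_true/if_false are made up for illustration - real Ruby code would just use if):

class TrueClass
  def if_true
    yield              # true runs the block it was handed...
  end

  def if_false; end    # ...and ignores the "else" block
end

class FalseClass
  def if_true; end

  def if_false
    yield
  end
end

(1 < 2).if_true  { puts "taken" }    # prints "taken"
(1 < 2).if_false { puts "skipped" }  # prints nothing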
Just a sidenote that for performance reasons, control flow messages are actually optimized in most implementations, even though they also look like regular messages.
Also, Perl's history adds an additional layer of flavor. Ruby started as somewhat of a "Perl 2.0", where beauty and flexibility > all things, and Python started as the "anti-Perl", one way to rule them all(tm), where fast and standard > perfection of beauty.
Ah, a brilliant idea: design all languages so people who only know C can make small changes to existing Ruby code without getting confused. Definitely don't want to enable different abstractions or models of computation. Are the core principles in the codebases you mentioned mostly "how to write stuff using C paradigms"? After all, "the Real Programmer can write FORTRAN in any language"
I think you’re mistaking C-style syntax for what modern C languages have adopted and become.
How many C style contexts even really exist today? The block separation by single characters is ok. Using (), [] and {} to separate things makes things easier on the compiler, but it's probably down to personal preference whether you like that syntax or not. I certainly vastly prefer the more simplistic approach where you don't have to wrap everything in brackets. Like a loop… why does it really have to be inside a () notation? Similarly I prefer Python's indent to {}'s. Which can technically make your code harder to read, but if your Python code is hard to read it's likely bad. Not always, but most of the time. I guess {} is better than BEGIN END, but maybe not in terms of readability.
The end of line character, ; is also sort of unnecessary for the most part. Again it's mostly a relic which helps the compiler, and some people like it while others don't.
Modern loops look nothing like they did in C. Neither do variables or properties.
The use of English for a lot of things, or the use of Western character sets, is also sort of bad in the modern world. Which is part of the reason behind the huge popularity of Go in Asia. Not that it's so bad; it's just how it is and everyone has adopted it.
Anyway. Modern C style syntax like you'll find in C# or Java is rather cumbersome in my opinion. Rust sort of falls into this category, but at least with Rust it's very easy to define ownership of your stored data as you pass it around in your code. But almost all of it is exactly that… modern. Almost none of it is from C, and a lot of it would not have existed without languages like Ruby.
You also have to keep in mind that Ruby predates the modern C syntax. As others have mentioned it's been influential on the modern C syntax, but it was also made in a world where the modern syntax simply didn't exist.
> Similarly I prefer Pythons indent to {}’s. Which can technically make your code harder to read
I started out having a similar opinion to you, and have completely flipped.
1. There's very little advantage to the whitespace option apart from aesthetics. Depending on your coding style, you gain 0-2 vertical lines.
2. Meaningful whitespace makes it more difficult to write code that writes code. Is it possible to manage the indents properly? Yes. Is it more difficult? Yes.
3. IMO, meaningful whitespace is what gimped lambda in Python (as compared to Ruby, where it's extremely powerful and used frequently). I'd rather have lambda + map/filter/reduce than comprehensions. Comprehensions are nicer when the code is simple and worse when it's complex.
I'm curious why you prefer Python's meaningful whitespace over explicit delimiters?
I don’t mind the {}s too much, maybe the primary reason I dislike them is actually more to do with the fact that I’m Danish and I’ll need to press option+shift+7 to make a { on my Mac… or similarly annoying combinations depending on the machine/OS. I think I once had a windows laptop where I needed to press FN + something.
Anyway, when your language adds extra letters to the keyboard (ÆØÅ in my case), they have to take the space from other things, and since {}'s are rarely used they were one of the "obvious" choices.
On the flip side I don't think the curly brackets add much to the readability.
I'm European, I lived in Denmark for pretty long, and I found it a great quality-of-life improvement to switch to ANSI keyboards, or at least to an ANSI layout.
By defining a compose key, you can type á, é, å, ø, ß, §, plus many other symbols, with relative ease, while still having all keys in the right place to program.
For instance, using Emacs with any ISO layout is much harder.
I don’t think it’s a fundamental property of single-char open/close delimiters. Like, try-with-resources, or scala/kotlin/groovy/etc-like blocks do let you specify specific initializers xor finalizers, but one might as well create a language with ` { } with (FINALIZE)` syntax.
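Ruby's blocks are one existing example of that shape, where the finalizer travels with the construct (a sketch; "data.txt" is just an example path):

# The block form of File.open closes the file when the block exits,
# even if an exception is raised inside it.
File.open("data.txt") do |f|
  puts f.readline
end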
Smalltalk and C both came out in 1972 and it really isn't clear to me that C's crazy for-loop design -- one that actually isn't used by many programming languages -- is somehow better than the one used by Algol, so I am struggling to find even a single thing you said which is making any sense to me :(.
What exactly do you think the purpose of having different programming languages is?
The popularity and usefulness of Ruby’s block-based control flow, which you seem to take issue with, is almost certainly largely responsible for the adoption of lambdas in… basically every modern language, not to mention being backported to existing languages like C# and Java.
Frankly your hot take is terrible. C was an amazing language, but there are an infinite number of practical and effective ways to improve upon it. Particularly having just admitted that you’ve never actually used the syntax in question it’s honestly astonishing that your first instinct is to jump straight to posting about how much better the C approach is.
Hell, you’d be hard-pressed these days to find a modern language that uses C-style `for` loops. They might say `for` on the tin, but they’re much closer to Ruby-style enumerators than they are to C-style setup-condition-increment control flow. With, of course, the caveat that that’s all they ever can be since they’re keyword syntax rather than just a method.
What a sad world this would be if everything interesting in programming was discovered by K&R in the 1970s.
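To make the enumerator point concrete for anyone who hasn't written Ruby (a trivial sketch; `items` is just an example array):

items = ["a", "b", "c"]

# C-style: explicit setup / test / increment, indexing by hand
i = 0
while i < items.length
  puts items[i]
  i += 1
end

# Ruby-style: the collection hands each element to the block
items.each { |item| puts item }

# ...and each without a block returns an Enumerator you can keep composing
items.each.with_index(1) { |item, n| puts "#{n}. #{item}" }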
And mind you, C's for loop was an innovation: languages before it (and even after) used "FOR var := start_expr [DOWN]TO end_expr [BY constant] DO ... OD" or variations thereof, with constant increment which allowed for much better codegen in a single-pass compiler: it's trivial to remember the increment constant and then issue it later, at the loop's end while remembering a whole expression... not so much.
> I feel like each new language absolutely needs to reinvent existing standards that have proven effective, just so that it feels unique. I don't have any experience with Ruby in particular, but every time a tool does some basic thing like a for loop "in a new, innovative way"
You don't seem to have any knowledge about the history of programming languages. There have been several lines of PL syntaxes since the 50s.
You are crazy if you think that a C style for loop is well designed. It's way too powerful but terse and obtuse to do correctly beyond the simplest application.
C style syntax as a whole is also nothing special if you mean semicolons and braces.
If you include stupid design decisions like braceless blocks in "C style syntax" I wouldn't even know what to think of that opinion.
> You are crazy if you think that a C style for loop is well designed. It's way too powerful but terse and obtuse to do correctly beyond the simplest application.
I wouldn't go as far as calling someone crazy for thinking C-style for loops are good.
The expressions in a C-style for loop are a 1:1 mapping to sigma notation, so it is intuitively understood by anyone who has done high school mathematics, even if they didn't realise there's a 1:1 mapping.
Maybe you'd like something different, but it's been a staple of mathematicians for centuries now, so it's kinda hard to complain that it isn't readable.
Easy to make mistakes? Sure!
Hard to read and/or write? Only if you have never seen sigma notation before.
> so it's kinda hard to complain that it isn't readable
Hum, no, it's very easy.
Almost all of mathematics was created without any care for readability. Mathematicians working with structures that take more than a couple of lines is a very new phenomenon, and the culture of the area didn't even fully adapt yet.
But it's not a 1-by-1 mapping. The FOR-loops of FORTRAN, ALGOL, and Pascal have 1-to-1 mapping to the sigma notation because the increment part is either a) omitted entirely and forced to be +1; b) allowed to be some other integer constant, but definitely not an integer expression.
FOR i := 1 TO 10 DO ...
FOR i := 20 STEP -2 UNTIL 0 DO ...
for (i=0, j=10; s[i]; t[j--] = s[i++]); // huh?
My argument is that there is no 1:1 mapping: for instance, "for (i=0, j=10; s[i]; t[j--] = s[i++])" has no direct correspondence with sigma notation. Does it? I don't believe so. This code also has no direct correspondence to Pascal's FOR or FORTRAN's DO, or ALGOL's FOR-STEP loops. Hence, C's for loop has no 1:1 mapping to sigma notation.
Of course, I can be mistaken and either there actually is a 1:1 mapping, or you meant by "1:1 mapping" something quite different from what I mean.
> My argument is that there is no 1:1 mapping: for instance, "for (i=0, j=10; s[i]; t[j--] = s[i++])" has no direct correspondence with sigma notation.
And? I didn't claim that all the multi-expression, body-in-the-conditional possibilities are a 1:1 mapping with sigma notation, did I?
Why do you think this is more representative of for loops in C than simpler examples I gave?
> or you meant by "1:1 mapping" something quite different from what I mean.
Well, yes. I meant that the 1:1 mapping is from sigma notation to C, not the other way around, because sigma notation was not invented after the C language.
Maybe I shouldn't have said 1:1 mapping; it's much clearer to say "The `for` loop in C is just a way to write sigma notation in programming languages".
“We are having the same ridiculous heated arguments about syntax that programmers have over and over and over so wink I’ll use hyperbolic rhetoric to insult people and wink you shouldn’t take my words to mean very much.”
I find it tiresome. It’s excusable by youth and something that people should grow out of but some never do.
> The upside is an incredibly simple and consistent language grammar
I hope you don't mean Ruby (I haven't investigated Smalltalk grammar). Ruby grammar is atrocious due to string interpolation stuff. Nothing to do with handling loops or conditionals, but still... Ruby is not at all an example of a language with good grammar.
Unfortunately, it's surprisingly rare for popular languages to have good grammar. If you look at it from up close, there are lots and lots of really bad decisions in virtually any language in common use today. I think, this is just not a high-priority concern for most users, but still...
Could you perhaps write a blog post on what you consider bad decisions in a PL grammar? That'd be interesting to read, esp. for all the aspiring PL developers/makers here.
But, I believe, size is a very good heuristic (i.e. the number of rules, the number of variables in rules, the number of branches in rules).
Another valuable metric to optimize is entropy. I.e. rules that look very similar aren't very good rules.
Expressiveness: how long does the program have to be to capture a useful concept.
Things like these obviously need a lot of counting and coming up with some kinds of constants, hopefully justified by experiments... That's a lot of work that will also require a lot of resources to do. But this is not to say that it is unknowable or that a programmer cannot develop an intuition which allows for rough assessment of language grammar.
So, I'm sorry, I don't have a good answer... I only offer one based on my intuition and limited experience of dealing with various language grammars.
> Ruby keeps going with its methods-first approach, except instead of each we have a new set of methods commonly implemented on collections, as below
Once you implement `each`, `include Enumerable` is all it takes to get the full set of collection methods (including `max`/`min` etc, if the entries define `<=>`).
The entire article largely read like "Python developer learns Ruby", and on top of that now fairly dated Ruby, though I'll make some concessions for him wanting to show a close parallel to the Python. I wish he'd signposted more clearly that these are examples, though, because as it stands it implies this is how to do things, while it really is not.
E.g. his "Stuff" example can be reduced to:
class Stuff
  def initialize
    @a_list = [1, 2, 3, 4]
  end

  # The ellipses here are part of the Ruby code for 'forward
  # all the arguments, including a block if passed'
  def each(...) = @a_list.each(...)

  include Enumerable
end

Stuff.new.each {|item| puts item }
puts Stuff.new.map {|item| item}
puts Stuff.new.select{|item| item.even?}
One could argue about my use of "..." and endless def, but one certainly would not typically implement each when forwarding to an Array by using a for loop to iterate over it other than to make it relatable to Python developers...
And, more controversially perhaps, the last three lines can be reduced to:
The first one is one that will cause arguments. The second just showcases that map is pointless here other than as an example - map here is effectively just a slow way of duplicating the array, and notably only once forcibly evaluated, as map without a block will return an Enumerator. And "&:even?" is fairly idiomatic for "call to_proc on this symbol and apply it to the argument", but some might still not be familiar with it.
The Python example is similarly unidiomatic, reducible, and flawed for the sake of simplicity (the example doesn't allow multiple independent Stuff iterators), so it's not like Ruby is really at a disadvantage in the article. The point of the example is to illustrate the mechanics of custom iteration (which forwarding to the encapsulated implementation wouldn't accomplish, for either language) while keeping the rest of the Stuff definition as simple as possible. But yes, a brief mention of including Enumerable wouldn't hurt.
What I like in Ruby: Every expression returns a value. In Python my_list.sort() will return `None`. So if I do `sorted_list = my_list.sort()`, my `sorted_list` will be `None`. And I shot myself in the foot a lot with this in the beginning. - I love Python now, but not because I find it aesthetically appealing (I prefer Lisps or functional languages) but because it is ubiquitously available, the ecosystem is fantastic, and it gets the job done.
One rant though: Am I the only one being overwhelmed by so many "end" delimiters in Ruby? Feels like visual noise. end end end end...
The `sorted_list = my_list.sort()` is a bit of an odd case, though, because, as you've written it, `sorted_list` looks like a new list, even though `sort` does an in-place sort. (Javascript has exactly this problem, where `.sort` mutates the existing list, but can be chained in such a way that it looks like just another step, leading to surprises later on when the input data is suddenly different to how it used to be.)
In that regard, separating `sorted` (immutable, returns a new sorted list) and `sort` (mutates, returns nothing) helps a lot in terms of preventing subtle mistakes.
Ruby conventionally separates methods which modify the object from ones that don't by appending an exclamation mark to the method name. So `sort` returns a new sorted list, whereas `sort!` modifies the original list.
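Concretely (irb-style sketch):

list = [3, 1, 2]
list.sort    # => [1, 2, 3]  (a new array; list is untouched)
list         # => [3, 1, 2]
list.sort!   # => [1, 2, 3]  (sorts list in place)
list         # => [1, 2, 3]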
Strictly speaking the exclamation mark convention is for things that are "dangerous", in some sense. Modifying in-place is one sort of dangerous, but there are others, Process.exit! being one notable instance where the "modifying in place" thing is a bit of a stretch, and not really the thing you care about.
!-suffixed being dangerous is a rails convention (throws exception). !-suffixed methods are a ruby convention for instance mutation. Calling mutation dangerous isn't a justification for your argument, as there are intrinsic benefits to using those where justified and contained, such as memory savings.
That's simply not true. matz has said this explicitly in the past:
> The bang (!) does not mean "destructive" nor lack of it mean non destructive either. The bang sign means "the bang version is more dangerous than its non bang counterpart; handle with care". Since Ruby has a lot of "destructive" methods, if bang signs follow your opinion, every Ruby program would be full of bangs, thus ugly.
I'll argue that `Process.exit!` does modify the state of the program in place (which isn't the typical thing one thinks of when thinking of values). Without the bang, it's just a function call that throws an error, which can be caught and handled, so the logical state machine is unchanged. With the bang, it reduces the set of states the current program can be in to a single exit node.
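A small sketch of that difference (behaviour as documented for exit vs exit!):

at_exit { puts "handler ran" }

begin
  Process.exit        # raises SystemExit, so rescue/ensure blocks and
rescue SystemExit     # at_exit handlers still get their turn
  puts "rescued -- still alive"
end

Process.exit!         # terminates immediately; the at_exit handler above
                      # never gets to print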
That's exactly what I meant about the modification of the state in that case not being what you care about. Technically true, actually irrelevant. Mostly.
Python's mix of OOP but also global functions like `sorted` and `filter` is very weird to me. (Not to mention list comprehensions.) In Ruby, control flow pretty much always moves from left to right as you add successive method calls.
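A trivial sketch of what that left-to-right reading looks like in practice:

# range -> keep the evens -> square them -> sum: reads strictly left to right
(1..10).select(&:even?).map { |n| n * n }.sum   # => 220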
I agree. Python is not consistent, probably because features kept being added. Ruby is a little younger though, so its creator could use lessons learned from its predecessors.
> So if I do `sorted_list = my_list.sort()`, my `sorted_list` will be `None`
This is (one of) my biggest gripes with Python. It's utterly inconsistent with its design, and library designers have taken that to mean they also can do anything they want, leaving NumPy vs Pandas to have completely different class vs object stylings for instance. I reach for Python these days only when there's no other option because I'm 100% sure I'll spend 50% more time in the debugger than using Ruby for a similar problem.
> I have reverse engineered secret security algorithms used by the CIA and can break any message they encrypt. As proof, here is the last few lines of an implementation of their encryption function in Lisp
The joke misses the mark because Lisp code usually coalesces all the closing parentheses on the same line at the end of the block. It does make sense for Ruby, though.
I personally am more confused without the block delimiters. Having a 4 spaces indent size helps. I wonder if this is purely a matter of getting used to it.
Funny, I started typing a comment here about the author not getting the Ruby side of things right - that actually the difference would rather be that Ruby uses a special language construct and Python just normal methods and conventions. But just skimming through the previous comments it seems that Python also has a `yield` keyword that would be more idiomatic to use.
I think we're dealing with someone who had limited experience with both Python and Ruby. This article is somehow getting more attention than it merits.
The whole premise here IMO is quite flawed. In real Ruby code I almost never use a for-loop. I can't think of a single use case for it. You always use .each or one of the other methods like .map or .select. while loops might get used, but I'm not sure I've ever seen a for loop in Ruby on any of the projects I've worked on, with the exception of PRs from people who are brand new to Ruby
Edit: much more interesting would be the contrast to list comprehensions in Python and the somewhat associated crippled lambdas.
Another interesting one is Ruby's private methods being not accessible by other instances of the same class whereas Python's are (to me, bizarrely) accessible by instances of the same class (like I can use your heart because we are both human?). I always justified this difference in my mind with Ruby objects, despite being of the same class, potentially having different methods due to metaprogramming at runtime. But it's probably just due to the languages' respective ancestors.
> Ruby's private methods being not accessible by other instances of the same class whereas Python's are (to me, bizarrely) accessible by instances of the same class
Not sure what you mean here; Python doesn't really have private methods.
Sure it's only mangled. In Ruby you can also still call private methods via .send(:method_name).
IMO the goal of declaring something private is to make clear what you consider part of your object's public API and what is safe for others to call. It's mainly to protect callers from shooting themselves in the foot. Private provides no security. Your consumer will know better the actual situation they are in, and whether it's worthwhile taking the risk that your API might break them in the future, or to write additional tests since they are doing something you as the author advise against. Of course it can also help enforce good design within your project. IMO it should be easy to bypass private constraints when a user makes that educated choice
Ruby has 'protected' which is not Java-like protected, but implements "another instance can see my internals" semantics. It is useful for methods which take another instance as an argument like binary operators (e.g. `def ===(other)`) where you might need access to both objects internals.
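A small sketch of what that buys you (class and method names made up for illustration):

class Account
  def initialize(balance)
    @balance = balance
  end

  # Another Account instance may call `balance` because it is protected;
  # outside callers may not.
  def richer_than?(other)
    balance > other.balance
  end

  protected

  def balance
    @balance
  end
end

a = Account.new(100)
b = Account.new(50)
a.richer_than?(b)   # => true
a.balance           # NoMethodError: protected method `balance' called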
I have used all of those, but the design decision still doesn't make sense to me. Two instances of the same type might have absolutely nothing to do with each other in my domain.
FWIW the Python implementation of Stuff seems not all that Pythonic. Returning self from __iter__ and keeping the iteration state on the object works, but breaks if you iterate over the same object from more than one place. To me that really distracts from the argument, since the idiomatic implementations in Python and Ruby look almost exactly the same
On the same note, the implementation of Stuff in Ruby is not very Ruby-like. It could have just used `.each` on the `@a_list` (in that case, you just delegate the implementation to `@a_list`; no reason to use a for-loop).
Hmm I'm no Python fan but I'd take it over Ruby any day, because it's so much easier to read a codebase. This is based on me trying to follow Gitlab's code.
The problems with Ruby seem to be:
* No static typing (I think there is Sorbet but apparently it's not very good and Gitlab doesn't use it).
* The lack of "syntax" makes it hard to grep for things. For example you can't find where `foo` is called by searching for `foo(` or `.foo` like you can in Python.
* It seems to encourage highly dynamic code where even identifiers are dynamically created, so often you'll find an identifier, and try to search for its definition but get zero results.
Maybe it's elegant and nice to write, but it's definitely awful to read.
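For anyone who hasn't run into this, the usual culprit is metaprogramming along these lines (class and method names made up for illustration); grep can't find `total_price` because that name is never written out anywhere:

class Report
  def initialize(rows)
    @rows = rows
  end

  attr_reader :rows

  # Defines total_price, total_tax and total_discount at load time.
  # Grepping for "def total_price" (or even "total_price") finds nothing,
  # because the name only ever exists as an interpolated string here.
  %w[price tax discount].each do |field|
    define_method("total_#{field}") do
      rows.sum { |row| row[field.to_sym] }
    end
  end
end

Report.new([{ price: 10, tax: 2 }, { price: 5, tax: 1 }]).total_price  # => 15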
>* It seems to encourage highly dynamic code where even identifiers are dynamically created, so often you'll find an identifier, and try to search for its definition but get zero results.
I've had to take care of 3 large ruby codebases at different companies and this is what kills ruby for me.
A lot of ruby programmers think they are being clever when writing crazy dynamic ruby code, but they are only creating technical debt.
Years later when they have left and the "context knowledge" is gone from the team, the Ruby code is a huge mess of "magical" code.
And Rails with its "implicit" functionality depending on method name. A lot of Ruby feels "magical" to me (in that things work because of some hidden implicit reason).
I prefer code that is explicit, in your face. You can quickly see what it does and how it does it. Principle of least surprise and "don't make me think".
As a language it's pretty, I like to say that Ruby is really object oriented while python is a mix of stuff (why len(x) instead of x.len??)
> No static typing (I think there is Sorbet but apparently it's not very good and Gitlab doesn't use it).
Sorbet is great, just not with generics yet. And if you use it the LSP will allow for finding references to method calls. If Gitlab doesn’t use it they’re missing out. Maybe they are too busy writing about how great their culture is to worry about such things…
If you don’t want dynamic identifiers, configure rubocop to prohibit it and enforce it through CI builds.
> It seems to encourage highly dynamic code where even identifiers are dynamically created, so often you'll find an identifier, and try to search for its definition but get zero results.
I guess the only reason this doesn't happen the same way in Python is because developers are told to repeat "explicit is better than implicit" 100 times before joining the community.
I can't find any single reason why the language would discourage it.
> It seems to encourage highly dynamic code where even identifiers are dynamically created, so often you'll find an identifier, and try to search for its definition but get zero results.
I wouldn't say it encourages it explicitly but it makes it far too easy. There is usually no need for highly dynamic code in code that is not part of a library.
Does lsp work well for Ruby? Back when I was writing Ruby (about a decade ago), static analysis tools typically had an even harder time finding things than grep.
> So much of how Ruby and Python differ comes down to the for loop.
I wish we wouldn't try to simplify the wildly different philosophies of language into a single "thing". From imports, loops, calls, sub-classing, chaining, multi-lines, blocks - there are just so many differences that materially matter.
That said, regarding the loop itself, I agree with the article. I come from not-C background and looping in Ruby has always been a pleasure because it feels more natural especially once you include chaining.
Ruby always seems so appealing. Can anybody tell me what the Ruby ecosystem looks like w.r.t.
- (Desktop) GUI
- Natural language processing
I would really like to learn Ruby, but I can only justify the effort if I can use its ecosystems for some private projects, and I often have been burned by languages not offering too much in those two areas.
And speed-wise, as I understand, Ruby is the same ball-park as Python?
Glimmer is an award-winning GUI toolkit for Ruby which supports every major platform (GTK, Qt, wxWidgets, SWT, Swing, JavaFX, etc.); it can also output as SVG or CSS: https://github.com/AndyObtiva/glimmer
Glimmer has been around for a while and is in active development.
If you want a fast-running and fast-starting GUI you should take a look at GraalVM from Oracle and its Ruby implementation called TruffleRuby; it translates Ruby code to native code and optimizes C and Ruby code at compile time and runtime to make it faster: https://www.graalvm.org/ruby/
I've been using a few NLP libs in ruby lately and it seems they're mostly all dead - at least the ones I've seen. I feel like everyone gave up and just moved on to python - or deep learning or something else more interesting.
Not saying they don't work, but if you click on a few from that list you'll mostly see last commit >5 years ago - in my experience.
As usual language and implementations aren't the same thing.
While the Ruby community has pursued many implementations in regards to JIT (the reference implementation even has two currently), on the Python side, outside PyPy, nothing else has actually got any community support.
Only now - thanks to the pressure of Python being the "2nd coming of Lisp for AI", but without its native code generation - is there some real recognition that actually writing C, C++, Fortran and calling it "Python" isn't that practical, and a JIT on CPython would be welcomed.
On the other hand, those native libraries can be equally called from Ruby.
I made the opposite choice 20 years ago, and could say the same about my choice. But I wouldn't because realistically language choice is largely about what makes you happy. A good dev needs to be a polyglot anyway.
And I chose Ruby about the same amount of time ago, spent 18 years or so writing that professionally, and now write Python for a living. They’re not that dissimilar, if you can use Ruby you’ll pick up Python pretty quickly. (You will however swear profusely now and again as you encounter another incredibly clunky bit of syntax)
Rails is hardly what I would call Ruby’s selling point. Not even the top 5.
Unfortunately the ecosystem suffers from rot because it was a language for trend-followers at one point. But it is still a better developer experience than just about everything else I’ve worked with.
GUI stuff with Python sucks just as much as it does with Ruby. I.e. if you want to do GUI, don't choose either of these languages.
GUI kind of works best with whatever language the platform for the GUI toolkit wants you to use, but a lot of the time C++ will be that language, competing with JavaScript. Every other language, almost always will end up having bindings, or some other sort of outsourcing mechanism to connect its runtime to either C++ or JavaScript. If you want to just deal with one language when working on a GUI project, it's best to just go with the one the target platform wants you to use.
I would argue that the main selling point of using Ruby is Rails; sure, there are a lot of things you can do in Ruby, but in 2024 there are more performant alternatives.
One of the central arguments for Ruby is that performance is not everything (it was always slow compared to other programming languages) but programmer satisfaction is more important.
I'd say Ruby is the programming language I want to program in but Rails pays the bills.
As someone who has worked in Rails performance for quite a long time now I suggest that Rails performance is largely fine. Most Rails performance problems are database and/or architecture issues and not with the language or framework.
I've written Ruby commercially for about 20 years, and dislike Rails. Pretty much none of my use has involved it. That includes web dev. Use of Ruby outside of Rails is more low key, but it's out there. My last project involved Sinatra for our web app and financial modelling and simulations for a VC fund in Ruby.
For NLP you will have less choice than for Python, but worst case you can bridge to Python code.
As a long-time Rubyist dipping my toes in the Python world, I find decorators to be my favorite feature so far, and a much nicer way of wrapping functionality than blocks.
Or you could just use Nim (https://nim-lang.org/) and pass code blocks with templates and/or define iterators for your types - whichever best fits your use-case (with ahead-of-time compilation to native binary executables coming along for the ride).
First impressions apply to languages too. A lot of mathematicians went into AI but didn't start out with any programming experience. They dabbled in some languages and gravitated toward the one(s) they found most useful.
Paraphrasing Guy Steele [1]:
> It's important that when you design a language that does a familiar thing, that you do the familiar thing exactly... it's criminal that you can write 1/2 and get values nowhere near one-half.
Python
>>> 1/2
0.5
Ruby
irb(main):001> 1/2
0
To a novice programmer, ruby's result makes no sense, and introduces an incredible degree of doubt into the mind of the user. Sure, they could explore why it gives that result, but if that's not helping them get toward solving the problem they're using the programming language for in the first place, then it could seem like an indulgent tangent.
> it's criminal that you can write 1/2 and get values nowhere near one-half
This is a foolish argument and Guy Steele should know better. What makes 1/2, which has an accurate floating point representation, any more important than 1/10, which doesn't? Every novice programmer, every single one, trips over floating point, and pretending that floating point numbers match our intuition from mathematics doesn't do anyone any favors.
I don't think his intention was specifically to have 1/2 be a floating point number, like for example Javascript does, but to call attention to the fact that we design programming languages for a specific audience, not realising that if we widened our perspective it could be intuitive for a much wider audience.
For example, Ruby was designed for experienced software developers. Experienced software developers expect that if you divide an integer by another integer, it performs integer division, and 1/2 would be 0 in integer maths.
If the creator of Ruby had broadened their horizon, and instead realised that outside the world of experienced software developers, the expression "1/2" means something totally different, maybe they could have chosen a different behaviour.
In a different universe perhaps any numeric literal in Ruby would always be a `Number` (like in Javascript), that for division would instead return a `Rational`. So that the end user could at the end simply call `.to_i` or `.to_f` or `.to_s` according to their needs.
I don't know if that would be a better universe though. I agree with your point that every programmer eventually needs to learn about floating point numbers and their sometimes surprising properties. And in the end Ruby was designed for programming computers, and the goal of any programming language is to expose the abilities of a computer, which include at the core integer maths, however surprising that maths might be to a novice.
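For what it's worth, Ruby already ships the pieces for that universe - just not as the default behaviour of `/` on integer literals (irb-style sketch):

1 / 2           # => 0        integer division is the default
1r / 2          # => (1/2)    Rational, via the r literal suffix
Rational(1, 2)  # => (1/2)
1.quo(2)        # => (1/2)    Integer#quo returns an exact Rational
(1r / 2).to_f   # => 0.5
(1r / 2).to_i   # => 0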
> If the creator of Ruby had broadened their horizon, and instead realised that outside the world of experienced software developers, the expression "1/2" means something totally different, maybe they could have chosen a different behaviour.
IMO, that would be a tragedy.
In every language that does it, implicit type coercion is the cause of a litany of bugs.
It's the cause of many problems in php. It's why people are told not to use == in javascript.
Maybe it is more difficult for beginners. But people are only beginners for a couple of weeks. It makes no sense to design the language to make those couple of weeks a little better and the rest of their programming lives a lot worse.
I agree, but I didn't suggest implicit type coercion. The bugs in Javascript are because of type coercion, but that has nothing to do with the fact that numbers in typescript are all floats. I don't think there are many bugs in Javascript that are due to numbers being floats, though I bet there's some companies who decided to build their backend in Javascript that might eventually run into problems.
There's ways to design a number class that doesn't suffer from any of these problems (obviously it would have to sacrifice performance to some degree).
That's an argument for being more careful with floating point operations, and perhaps not using them as a default numeric value, but it's not what Steele is talking about there. After all `1/10` in Python is still pretty close to one tenth, even if it's not quite there.
The question here is whether or not we should assume integer semantics for numerical values.
I suspect that the real answer to this should be that the representation should remain as a rational until it can't. Forcing it into either float or int at the point of entry is the mistake.
There's a good reason for this. Computer numbers are not the same as numbers in mathematics. Floating point numbers are not real numbers and ints are not natural numbers.
It was not debated that there is no reason for some languages to behave one way or the other - quite the opposite - only that, for supporting practical use cases (which is the sole purpose of any language: to be useful in practice), it is not a good reason.
When one has to jump through the language's hoops - the user serving the whims of the language instead of the language serving the user - that is wasted effort and added complication instead of help. It becomes more of a puzzle for fun than a practical tool.
Those who want to stay close to the soul of the hardware and computing should use C or other low-level languages that have existed for a long time - not ones made recently to make programming easier... on paper only, apparently...
There are no "computer numbers" and "mathematical numbers"; there are only mathematical numbers and the limited (failed by circumstance) representations of mathematical numbers that have existed for low-level languages since the ancient times of computing.
Which should be improved - failed representations eliminated - wherever and whenever possible, especially in new languages. And since there are better and worse representations - handling, really - of numbers across languages, the criticism of doing it badly, of not making improvements and of expecting users to know a modern language's whims in this regard, is absolutely founded!
I think there is also a gotcha there as you get a ratio in Common Lisp and not a floating point number.
> (print (format t "1 divided by 2 is ~a and its type is ~a" (/ 1 2) (type-of (/ 1 2))))
> 1 divided by 2 is 1/2 and its type is RATIO
I don't think there is a simple solution. Numbers are harder than they seem and you just need to know what happens in the particular language you are programming in.
Having to work with scientists and mathematicians, as in offering my programming help to them... I see no value in making languages look like math formulas.
Math formulas are an awful language. They served their purpose in discovering how to do things better, but became obsolete many decades ago. People who hold on to these ideas create truly awful languages like Haskell. These languages make comprehension prohibitively difficult. So much so that a lot of people who would otherwise benefit from using the language, and would have enough mental capacity to appreciate its higher-level concepts, are prevented from doing so by a very superficial thing such as syntax.
Also, had you done any math beyond high school, you'd have no problem understanding why the result you see in Ruby makes sense. I think some high schools today also include set theory in the math curriculum, which would cover concepts like closure, domain / co-domain etc. So... this is really not a good example of the concept you mention.
In general, I think that being correct, minimal and internally consistent is a lot more valuable for the language than to be similar to the prior knowledge newcomers might have. It's nice to be also newcomer-friendly, but that shouldn't come at the expense of sacrificing any of the aforementioned desirable properties.
The shift towards // vs / also shows great acumen by the Python designers: they tried (and arguably, largely succeeded) in catering to the greater data science/statistics/numerical analysis communities, where 1/2=0.5 is obvious, rather than sticking to the "compsci-obvious" 1/2=0.
And this isn't the reason Python succeeded. Nor is this the reason for the particular change.
The reason Python succeeded with data-science is NumPy and related group of libraries. They happened to be the first to offer easy access to R-like features of other statistically-flavored languages in an all-purpose language. I.e. it makes it easy to combine general-purpose code with statistics-specific code, and once it accumulated critical mass, the process became self-sustaining and alternatives died off quickly.
The reason for most of the changes that happened to Python in the last fifteen or so years is design driven by fashion. Which means the majority decides what to do with the language. Which also means that Python is made to look more and more like other mainstream languages (eg. Java, JavaScript, C++...) So, a lot of changes, this one included were made out of subconscious fear of non-conformity.
Surely 1/2 = 0.5 is what a statistician would expect? Whereas 1/2 = 0 is what happens in C, C++, Ruby, Java, C#, F#, Rust... Essentially most of the popular programming languages with the exception of Python and JS/Typescript.
Anyway. Maybe my original comment was poorly phrased, but I was not implying that Python succeeded because of this form of catering. Rather, the designers took note of Python becoming popular in that field and made changes (see also the matrix multiplication operator @) that accommodate those users rather than the more "typical" CompSci crowd.
My wife's mom is a statistician (and has been since like 70's). She also used to do a lot of programming in the line of her work (working for a telco, and later as biostatistician), in the latest iteration in R, where things are like you expect them to be. But, before then it was also Matlab, where 1 / 2 = 0.
But, none of that is really relevant. Both operations are useful and common in statistics. Which one is more common will depend on your domain.
> the designers took note of Python [...] made changes
That's putting too much faith in designers of Python. Even calling these people "designers" is giving them too much credit. By their own admission they don't have any sort of vision or strategy for how to deal with the language, they just add random stuff and see if a lot of people complain or thank them.
In other words, matrix multiplication operator is there not because there was some kind of intention or design on the part of the small group of people who are responsible for releasing the language, it was more of a "genetic algorithm" kind of thing: change - iterate - see if change optimizes some metric - repeat.
Back when Python was winning over the AI crowd, they did the same thing as Ruby, if you didn't remember to call some voodoo chants "from __future__ import ..."
Okay, I'll bite: how much of this is the responsibility of the language designers versus how much of it is the responsibility of the language user?
If I had a junior programmer complain to me about this case, I'd simply tell them "you didn't RTFM", because RTFM'ing before you use a language is not only the professional thing to do, but vital to your accurate and correct use of the language.
Sure, language designers shouldn't build-in these kinds of foot-bullets, but then again, kids shouldn't play with guns.
>First impressions apply to languages too.
The duty of responsibility applies to all languages, and it is the user's responsibility to understand the language they are attempting to use, before using it. No?
If a language clearly states in the FM that something "criminal", in the words of your parent comment, is known to be criminal but leaves it as it is without any clear reason behind it (not even recognizing it as a past error they have to carry for backward compatibility), the junior has every right to think the language is criminal and is against them.
In the extreme case, take something like brainfuck. Everything is in the RTFM, but that doesn't make the language less criminal.
I like to remember that we are not here to program for the sake of programming. We are mainly solving problems. I have a problem that I know I need to solve with "a gun", and maybe I'm a novice to gun usage. After having a look at some guns, I choose the one that seems to be less dangerous and has a friendlier and more intuitive usage. When your very powerful gun - a deathtrap with a 1000-page manual - is used by no one, don't complain with "why don't you use my powerful and clearly better gun!?".
Because the key here is that this is not the 1970's anymore, when all guns were complicated. Learning by doing is the best way to learn, way better than learning by reading (https://thepeakperformancecenter.com/educational-learning/le...). If a language requires me to RTFM is at a big disadvantage with any other language that allows me to easily learn by doing.
>Because the key here is that this is not the 1970's anymore, when all guns were complicated.
I take issue with this - computers are far, far more complex now than they were in the 70's, which is why denying the language-users responsibility for fully understanding the language-designers intentions is such a farce.
>If a language requires me to RTFM is at a big disadvantage with any other language that allows me to easily learn by doing.
There are no languages under the sun which do not require some degree of study, and to claim that it should not be so is simply delusional. All language must be learned before it can be properly used - some people learn by making huge mistakes with the language they use, its true, but the productive, professional use of any and all human language requires its study.
> I've always suspected that Pascal and Haskell were not serious programming languages, thank you for confirming my suspicion.
I wouldn't really group Pascal and Haskell together (unless we're playing rhyming games).
One of those had serious market penetration, a whole industry behind it and was one of the dominant choices for applications languages ... for maybe two full decades (including Turbo Pascal all the way through to Delphi).
I mean, right now, Delphi is still orders of magnitudes more popular and in use than Haskell, which is used by .... pandoc, maybe?
I don't think the target audience of Python are second-graders, but rather people who have graduated elementary school and realise that 1/2 comes out to 0.5.
That's silly, in math those two are just different notations for the same thing, and usually fractions are preferred.
Also anyone who's done any bit of programming should know that numbers as represented by a computer are discrete, while mathematics deal with symbolic relations which might require infinite amount of data to numerically represent.
1//2 always resulted in an integer, even in Python 2, with or without "from __future__ import division". To get floating point division in Python 2, you had to convert at least one of the operands to float (much like in C or C++).
You can try it online at tio.run, they still have Python 2 among their available languages, as well as Python 3.
> it's criminal that you can write 1/2 and get values nowhere near one-half.
that's sort of ridiculous. Would he be more ok with the output 0.48? what about 0.51? Ironically, in the age of LLMs and non-deterministic output, maybe he would.
I've done a lot of SRE work and two stints at companies with RoR stacks. Really have kinda soured on Ruby and Python at this point and it has little to do with loops. If I had to choose though it would be Python for asyncio.
The omitted context in this comment is that the RoR stack propelled those 2 companies to the level of usage and viability where they needed, and could afford to hire, SRE engineers.
I'm not saying RoR doesn't have its faults, I've left it behind too. But I can't deny its productivity, and even in 2024 I would be hard pressed to name something better for a SAAS company starting out.
I’ve tried so many stacks on side projects and always come back to RoR (with a bit of a grudge).
I dislike many things about Ruby, but RoR is extremely hard to beat. Basically, everything you need in a stack is provided or readily available in the community.
Yeah. My usual advice to anyone who asks is "Use Rails unless you have a very good reason not to". Those reasons do exist, but they're pretty specific and don't apply 90% of the time.
Of course if we're being really real, the first piece of advice should actually be "Can this be Wordpress?"
In my experience there is absolutely nothing "simple" about a static website, especially one that is going to be used by other people. If it was my personal blog or something, sure. A website for a non-technical friend with a business who wants to be able to post updates, etc? Omg no. No no no no no.
Good managed wordpress starts at like $10-20/month. All those problems you mention go away, you can even do things like rollbacks if someone breaks something. If what you want to do fits into what WP can do pretty well, it's a no-brainer.
> A website for a non-technical friend with a business who wants to be able to post updates, etc? Omg no. No no no no no.
I co-organise an event with a non-technical friend. Previously it was using some sort of a custom cms which was hell. I set it up with Jekyll, created a GitHub account for my friend and taught him markdown (five minutes perhaps). Yes he breaks things occasionally. That's fine! Everything is versioned, so I can easily fix things.
Or it was along for the ride. One of those companies was propelled by product market fit, the other by 0% interest and the VC firehose of 2021.
When the successful company was founded back in the noughties, RoR was arguably miles ahead of everything else. An argument could certainly be made about time-to-market and iteration speed. For the latter there were a lot of other viable options available.
> even in 2024 I would be hard pressed to name something better for a SAAS company starting
My sense, at least looking at the stacks for tons of recent YC companies, is that most startups are all in on JavaScript these days. Might not actually be better(though I prefer it), but that seems to be the trend.
Python has two killer use cases, being a better Perl that one can read one month later, and a better BASIC for introduction to programming and IoT stuff.
Trying to use it for application code, eventually ends up in pain, rewriting code into C and calling it "Python".
Written in 2024, with 38 years of experience reaching out to dynamic languages, some of which without any kind of JIT/AOT in the reference implementation, which Python happens to be one of them.
Eventually if the JIT POC from 3.13 evolves to something that can compete with a Common Lisp, or Smalltalk JIT, then I can change of opinion.
Note that I only mention those two languages on purpose.
“Not an advantage” is sort of the default case; Ruby supports async, too (and, unlike Python, doesn't have function coloring.) So, I don't see why asyncio is an advantage.
I like Python -- I use it more than Ruby -- and asyncio is fine, just, IMO, not an advantage of Python over Ruby.
I just prefer the experience, tooling, and performance of other languages more. Honestly two of my biggest "pains" with it were resolved the last time I was using it professionally via pipenv and asyncio.
For use cases outside AI, and perhaps some niche stuff as a bash replacement since it's pre-installed on most distros, it's in a sorta no man's land where I believe there are better options and the industry activity has moved on.
I think the article leaves out the main difference in Ruby vs Python. Ruby is built for humans, Python is built for machines. This example used to be illustrated on Ruby on Rails' website.
They are philosophically vastly different languages. This isn't to say one is better or worse than the other. Just use the right tool for the job. If I was going to do anything with machine learning or data analytics or LLMs, Python is hands down the right tool for the job.
However, if I were to set up a full-fledged web application that required a really short time to market with some complex features, I would probably choose Ruby because of Rails.
Given Python has a long history as a teaching language, that strikes me as the kind of grandiose statement without evidence which Perl fans used to be fond of. Even more so given there are approximately no “non-programmers” writing Ruby and enormous numbers writing Python in finance, data science, typography, EDA, and a billion other places; civilians seem to manage with Python just fine.
For what it’s worth, language _really_ built for humans probably looks more like Excel or Max/MSP (or both) than like any traditional language - at least those use our embedded spatial reasoning rather than laundering everything through a text serialization.
Funny, because I find Python much more readable than Ruby in most cases. The metaprogramming in Ruby has always struck me as bikeshedding novel programming idioms instead of focusing on simplicity. But that's just my own opinion.
That's ontologically false isn't it? Ruby is built for humans by humans. Python is built for humans by humans. Given that humans use Python to write code that is then readable by machines, then certainly Python is built for humans.
Ruby is what Ruby developers mistakenly think makes not only them, but also all other developers happy, if only they knew.
For a lot of developers though, basking in the glory of ambiguous poetry, documented in some random funny blog, is not what puts more and broader smiles to their face while urgently trying to decipher their predecessors sensibilities, enshrined in mind bending meta-hairballs.
It is not just the job, some people just really don't like Ruby.
> I think the article leaves out the main difference in Ruby vs Python. Ruby is built for humans, Python is built for machines. This example used to be illustrated on Ruby on Rails' website.
> Just use the right tool for the job.
Oh, cut the crap. Both statements have nothing to do with reality. If we go by your definition, there's no reason for Ruby, because Python will always be the righter tool for the job.