
The better Python market isn't an easy one to crack because it's a bit crowded. Go (despite its perceived and real faults) has succeeded in this space by delivering a better GC, good libraries, static typing and faster programs. Python itself is improving rapidly, for example with the addition of Type Hints. It's pretty difficult to be a better Python in 2017.

The better C market, on the other hand, hasn't seen any real contender other than C++ gain traction for decades. Rust is trying now and it could get there, but it's still a massive opportunity.




I really, really don't mean to pile on Python... but every time I've had to interact with it I've been shocked at how slow it is compared with C or C++. I tend to write scientific code to process datasets in the range of 10 GB; for simple operations, Python code can take hours, as opposed to minutes or seconds in C.

I'm sure it's possible to write more highly optimized code in Python, but it never seems to be the case with the code bases I've worked with. With my own code (a few years ago now), I spent significant time optimizing and the final result was something that was still significantly slower than C/C++. Overall, for my work, I couldn't see an advantage.

Am I just doing it wrong? Currently Go seems far more appealing to me and I've enjoyed using it (maybe I should also try Rust).


Python is a high-level interpreted language, and C/C++ are low-level compiled languages (low-level even compared to other compiled languages). While you might be able to optimize your Python code to run faster than it does now, it's never going to match the performance of C/C++, nor is it intended to.

Go will be a significant speedup over Python, but likely won't quite match the speed of C/C++ for most tasks. Then again, the ease of development in Go will likely be noticeably better than in C or C++. Rust could potentially match the speed of C++, but it's a much more complex language than Go, so it will take some time to master. (Personally, having spent a large amount of time programming in Rust, I find myself more productive than in Go due to the powerful abstractions present in the former and lacking in the latter, but conventional wisdom is that this won't be the case for most people, and it certainly won't be when you're first learning Rust).

EDIT: Anecdotally I've heard that D is a nice language, but I have no experience with it, so I can't comment on how productive it is or how fast D code will run.


> Go will be a significant speedup over Python, but likely won't quite match the speed of C/C++ for most tasks.

It's true, Go won't match the speed of C, but it comes pretty damn close. Say you have 4 cores: 1 would be dedicated to GC and the other 3 would be executing your program. That sounds sub-optimal, meaning a Go program could only be 75% as fast as an equivalent C program. But you have to wonder how many developers are capable of writing correct, multi-threaded C code. Go is simple to learn, lightning quick to compile and runs reasonably fast. By no means is it perfect, and it is not the tool I'd use for every task in front of me, but it is good enough for most tasks.

I agree that Rust is the future in the low level space.


> Say you have 4 cores, 1 would be dedicated to GC

In my opinion, idiomatic Go is not that GC-heavy. A slice of structs (a bit like an ArrayList) can be one contiguous chunk of memory, as opposed to an array of object references, which requires allocating each object separately.

There are an order of magnitude fewer allocated objects than in Java, for example. (Perhaps the situation will change once Java gains value types, maybe in Java 9.)


JavaScript is also a high-level interpreted language, but it's much faster than Python. Python is pretty hard to optimize in a JIT (and it gets much less attention than JavaScript).


True, but it's also miles behind C/C++. While Python is arguably the slowest of the mainstream interpreted languages, they're all orders of magnitude slower than C.


Hmmm. I kind of expect them to be 1 or 2x slower, maybe even a single order of magnitude... but Python often seems to be 100 or 1000x slower. For my applications this is kind of unfortunate.


> Go will be a significant speedup over Python, but likely won't quite match the speed of C/C++ for most tasks. Then again, the ease of development in Go will likely be noticeably better than in C or C++.

In my experience, you don't need to choose: learn to make C and C++ development just as easy as Go development and enjoy the best of both worlds.

I agree with you, but it seems illogical on the surface.


> Go...but likely won't quite match the speed of C/C++ for most tasks.

Last I checked golang's generated code, about 3 years ago, it was full of redundant instructions, most notably redundant range checks. It seemed to be about half as fast as equivalent C/C++ code.

I believe golang can approach something like 80-95% of C/C++ performance once the codegen is good.


You can easily get the best of both worlds with Lisp-derived languages.


It's been my understanding that Lisp and its relatives aren't designed to compete against C for speed either. Is this wrong?


C was designed on a PDP-11, a computer much less powerful than the mainframes Lisp had already been running on for a decade, so Lisp implementations were already quite good for most tasks by then.

While C designers were trying to create UNIX at AT&T, Xerox PARC was busy creating Smalltalk, Interlisp-D and Mesa/Cedar.

Other companies were creating Genera and the Connection Machine.

In the end, Worse is Better won: the machines were too expensive for most pockets, ordinary developers could not grasp Lisp, those companies were mismanaged, and UNIX was comparatively cheap, with source code available almost for free (AT&T was prevented from selling it early on).


"How to make Lisp go faster than C" (2006) http://www.iaeng.org/IJCS/issues_v32/issue_4/IJCS_32_4_19.pd...

And these figures aren't using the SBCL Lisp compiler, which should currently be faster than the one cited.

You are correct, it wasn't designed to compete against C. And it has a garbage collector. But make the right choices and a compiler like SBCL can produce surprisingly "clean" (optimized) machine code.


C compilers have presumably also improved their generated code since then, of course.


Common Lisp has some very fast implementations (I think SBCL is the fastest) and it was designed to be a systems language. It can be optimized to be as fast as C.


> It can be optimized to be as fast as C

Sorry if this is a dumb question, but how is this possible when it's garbage collected?

Now I really feel like I need to learn a Lisp, though... Is there a particular Lisp you would recommend that's practical enough that I'd use it for projects? Racket?


You can start by reading about Genera and Interlisp-D

https://en.wikipedia.org/wiki/Genera_(operating_system)

https://www.ics.uci.edu/~andre/ics228s2006/teitelmanmasinter...

http://www.softwarepreservation.org/projects/LISP

Just check the hardware capabilities of the systems that supported Lisp 1.5, several years before C was created.

Lisp is not only about manipulating lists; it grew to natively support all the relevant data structures, including primitives for handling systems programming tasks.

Regarding GC and systems programming, there are plenty of GC-enabled systems programming languages; the main point is that they offer features to control the GC and also to do stack allocation and GC-free allocation if required.

Some examples would be Mesa/Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Oberon-07, Active Oberon, Component Pascal, Sing#, System C#, D


> Sorry if dumb question but how is this possible when it's garbage collected?

On a big or complex system you end up doing at least one (or both) of two things:

a. Having to manage many temporary (transient) objects in memory. Thus you end up doing some sort of garbage collection, whether implementing it yourself, using the facilities provided by the language, or using a library.

b. Allocating big fixed blocks of memory (such as arrays, or doing a "malloc") and then using that block of memory as your storage area, which you then manage.

Usually in C programming you do (b); you can also do (a), but it is not so easy.

Usually in Common Lisp programming you do (a) very easily, but you can also do (b) by allocating an array and a bit more. It is not difficult, really.
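
For readers coming from the Python side of this thread, here is a rough sketch of strategy (b) in Python/NumPy terms (names and sizes are invented for illustration): allocate one big block up front and hand out slots from it yourself, instead of allocating each object separately.

    import numpy as np

    # One contiguous block, allocated once: 1024 slots of 3 floats each.
    pool = np.zeros((1024, 3))
    next_free = 0

    def alloc_slot():
        # Hand out a view into the block; no per-object allocation happens.
        global next_free
        slot = pool[next_free]
        next_free += 1
        return slot

    p = alloc_slot()
    p[:] = (1.0, 2.0, 3.0)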


> how is this possible when it's garbage collected?

Common Lisp lets you give "hints" to the garbage collector, so you end up with pretty much what you'd have in C but without the manual memory management.

> Is there a certain Lisp you would recommend that's practical enough I'd use it for projects?

Racket is very nice and comes with great documentation, a huge standard library and a very handy IDE. Clojure was hyped a lot, so it has lots of libraries, which however might be of dubious quality or unmaintained by now; it's also on the JVM, which might or might not be a good thing for you. Common Lisp is very versatile and "pragmatic", but lacking in external libraries.


I like Lisp, Scheme in particular. However, I have no idea where I would turn to write an application with a GUI that needs to be cross-platform and, possibly, to derive a mobile version for iOS and Android from the same codebase.



Given that GP said "cross-platform" and then separately "possibly derive a mobile version", I'm guessing that only supporting OS X on the desktop might not be enough for them. That being said, I'd be very surprised if there weren't cross-platform desktop GUI libraries available for Common Lisp.


I'd be shocked if wxWidgets weren't supported, and maybe even Qt.


Interesting, but the premise seems to be that UI code should be done outside of Lisp, whereas, at least in my experience, that is where most of the code and trouble lives.


I suggest taking a long, hard look at Julia.

It's squarely aimed at high-performance numerics and scientific programming, but it is in fact a general-purpose language with great potential. It's already very, very efficient (it compiles via LLVM), yet has a lot of the clean elegance of Python.


I am in a similar situation, as I develop a lot of scientific code. I've tested several solutions, and so far the most viable for me (though far from perfect) is Python+NumPy+Fortran, the latter exposed to Python using f2py (much easier than binding C/C++ to Python). Not sure if this might be a good solution for you; it depends on whether the bottleneck in your code is in I/O or in raw calculation (in the former case you're not going to get any significant advantage from Fortran).

I've tested several other languages that might be good for my purposes. From what I have seen, Rust is not ideal because of the acknowledged poor support for multidimensional arrays [1] and the strange semantics for floats (due to the need to be «correct» in the presence of NaNs).

Two options that might become interesting once they reach a stable status are Julia [2] and Chapel [3]. The former might be exactly what you are looking for; the latter is fine if you usually run your codes on HPC platforms. But be prepared to face some friction when sharing your work with colleagues or publishing papers: the limited penetration of these two languages in the scientific world means that people will have difficulty using and understanding your code.

[1] Look for «array» in the page https://users.rust-lang.org/t/on-rust-goals-in-2018-and-beyo...

[2] https://julialang.org/

[3] http://chapel.cray.com/


Julia could have had 10x the adoption if they had painted themselves as more of a general-purpose language, like "write the web app and your neural networks (GPU-accelerable, of course) in the same language, ftw!"... if only they'd bolted on some support for "non-weird-looking classic OOP", as in "wanna type `window.` and have even the dumbest editor autocomplete methods".

Imho they got it backwards: Python got so popular in scientific computing first and foremost because it was a general-purpose scripting language that didn't isolate (library-wise) sci-coders from all the other developers; second because it was designed as a "teaching language", so simple examples looked close enough to pseudocode and were liked by people in the business of having to explain their code (academia); and only lastly because it happened to be easily extensible for things like matrix operations (operator overloading...) and had non-weird general syntax (as in "functions are values and that's that", not Ruby-like weird blocks and procs that spooked the math and physics folks...).

My only hope for a "general purpose & sci computing" language would be Go, if they dragged their heads out of their asses and added some damn sugar to the language - like operator overloading, some parametric types or templates (good luck getting a physicist to understand when to use type switches and type assertions...), and something to keep "casual" developers from being bitten by things like "not all nils are equal, wtf?!". I like Go, but it's a hard language to sell to a non-100%-professional developer...


> if only they'd bolted on some support for "non-weird-looking classic OOP", as in "wanna type `window.`

Ah, so you want an inferior way of doing OOP? The reason Julia is the way it is is that it supports OOP based on multiple dispatch (dispatching on more than one object per method). I think this is a quantum leap over the OOP you are requesting.

Sincerely, if that's what you want, there is always Java to make you happy.


The "inferior" way has a clear advantage: discoverability!

What if I want to call a method but I totally forgot its name and I'm not sure how to search for what it does? What if I don't know what I want to do, but I have this "thingy" and I want to see "what can it do"? Or "what messages can it receive"? This is generally how you think when you build "interfaces": you start somewhere in the middle and figure out how and what to do along the way, regardless of whether it's "glue APIs", "web GUIs" or "desktop GUIs".

OOP is "message passing" first of all in my view. This is how it all started in Smalltalk, and single dispatch makes sense if you want to be able to "ask an object/actor what messages it can understand".

Now yeah, braindead languages like Java and C++ crapped all over the "OOP as message passing" idea, and also rejected multiple dispatch... resulting in a truly horrible experience that forces everyone to drown in design patterns.

Julia is not like that; multiple dispatch makes sense in a functional or math-oriented language, but I'd prefer things like operators to be multiple-dispatch and methods single-dispatch. This would be the "having the cake and eating it too" solution that would please both "math coders" and "professional software engineers"; otherwise they'll keep using different languages and we'll keep wasting time writing "glue" between them, imho. It's incredibly hard to reason about functions that dispatch on more than 2 arguments anyway, and those that dispatch on 2 operands could be written just fine in a Haskell-like operator syntax:

    M1 `mySpecialXVectorOperation` M2
and if you truly need n-ary dispatch in some special cases, add a syntax like this:

    {M1, M1, V3}.mySpecialXVectorOperation(42, false)
that could at least in theory allow IDEs and such to provide some assistance to the developer! (not sure any IDE developers would bother developing that much introspection though...)

(But practically thinking, I still think you can get away with single-dispatch + a few tricks, and have easier-to-reason-about code as a side effect ;) )
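
To make the "single dispatch + a few tricks" idea concrete, here is a minimal Python sketch (the function names are invented): functools.singledispatch picks the implementation from the first argument's type, and a plain isinstance check handles the second operand.

    from functools import singledispatch

    @singledispatch
    def combine(a, b):
        raise TypeError(f"no combine() for {type(a).__name__}")

    @combine.register
    def _(a: int, b):
        # Dispatch happened on the first argument; branch manually on the second.
        return a * b if isinstance(b, int) else str(a) + str(b)

    @combine.register
    def _(a: str, b):
        return a + str(b)

    print(combine(3, 4))    # 12
    print(combine("x", 4))  # x4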

EDIT+: And I get it that math people have more of a "first you truly understand it, then you apply it, then maybe you try to expand on it" mindset that makes my kind of reasoning completely alien to them, but lots of software is and will be written in a "jump into the middle of it, hack your way to the solution, and in the end re-re-re-refactor it until it not only works but also makes sense, so you can explain it and maybe even partially prove it works" fashion.

EDIT+2: And don't get me wrong, Julia is a great language, and its developers did some unbelievably awesome work... it's just not a "bridge the gap between physicists/engineers and software people" language, despite being like 90% of the way there... I wish I could "sell it" better to my "software people" colleagues :)


They're going after Matlab, Octave and R rather than Python.


That's why I think they got it backwards :| ... Python went after Matlab, Octave and R, and for ML/AI it has almost completely replaced them. For stats and data exploration it shares the pie with R, because the two are such different languages, with different strengths.

It's a shame that the best language designers also seem to be the worst at "market positioning", and programming languages are very much a fashion- and promotion-driven area. Anyway, maybe I should shut up, find the time to delve into Julia, and when I know enough, try to give them a helping hand instead of whining...


Python (as in the language designers or core people) did not "go after Matlab, Octave and R". Non-affiliated or loosely affiliated people who wanted to do their data-oriented work in Python wrote a bunch of libraries, and the result after many years was a pretty competitive set of tools. So yeah, I think the same can similarly be done for Julia to improve it for more general-purpose tasks.

In an educational/academic setting, I think even more important for adoption than Python being a general-purpose language is that it is already relatively widely adopted in various industries as a tooling language. For good and bad, universities today are very keen on providing skills with immediate relevance to employers. The same network effect is kinda what is keeping Matlab alive...


> even more important for adoption than Python being a general-purpose language is that it is already relatively widely adopted in various industries as a tooling language. For good and bad, universities today are very keen on providing skills with immediate relevance to employers

Maybe I phrased it confusingly, but you're saying the same things I thought. I didn't mean "go after" in a conscious/targeted way; I meant "it evolved towards taking over". And "immediate relevance to employers" is provided by being a general-purpose language (as in "look, you can quickly whip up web apps, general server-admin scripts and even Excel plugins with it"). And "tooling" mostly equals "general purpose" in my book: there is no clear definition of "tooling"; it's about "glue code" that needs to do a bit of everything to tie things together... so you need a general-purpose language for tooling.

(Now I see that maybe some people use "general purpose" as in "you can write anything from device drivers and OSes to web apps if you really want to", but that is what I'd call "systems languages" - C and C++ and the newcomers Rust and D. By that definition even Go would be very far from "general purpose"...)


D has quite good multi-dimensional array support, though probably not as flexible as Python's. One of the best libraries in this area is ndslice (part of mir-algorithm) - http://docs.algorithm.dlang.io/latest/index.html. See the "Image Processing" example to get a sense of the API - http://docs.algorithm.dlang.io/latest/mir_ndslice.html. mir-algorithm is the base for the high-performance glas implementation written in pure D that outperforms libraries with hand-written assembly like OpenBLAS in single-threaded mode (multi-threaded is not yet ready). See http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/.... (Note that this blog post is a bit outdated - since then the author rewrote large parts of the library, IIRC.)


The trick to high-performance scientific calculations in Python is to use libraries like NumPy (possibly via Pandas) and their large set of vectorized operations. Then the majority of the number crunching happens in optimized C/C++, with Python primarily 'orchestrating' those calls.

For the cases where the desired data-manipulation functionality is missing and pure Python is problematic performance-wise, one can use Cython to bridge the gap.

Or, if that is not enough, write those functions in C against the NumPy C API and expose them as Python APIs.
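
A minimal sketch of that 'orchestration' style (the array and names here are invented): each statement is a single call into NumPy's optimized C loops, so Python never iterates over individual elements.

    import numpy as np

    data = np.random.rand(10_000_000)

    # Both lines below run in compiled C code; Python only dispatches
    # the whole-array operations.
    centered = data - data.mean()
    rms = np.sqrt(np.mean(centered ** 2))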


If you're interested in high-performance scientific computing, then I guess you'll be pleased to know that many people use D for this domain, specifically for its high performance and also for the high-level features that make prototyping as easy as (if not easier than) in Python. In addition to good C/C++ interop, there are also libraries for interfacing with Python and R. Be sure to check http://blog.mir.dlang.io/, https://github.com/kaleidicassociates/lubeck, https://github.com/libmir, https://github.com/DlangScience and https://www.youtube.com/watch?v=edjrSDjkfko


What everyone in this subthread is looking for is LuaJIT. It's the fastest JITed language, comparable to C for many tasks. It has a very tiny memory footprint and integrates well with C and C++. And there's Torch if you need to do fast matrix math or machine learning.


That's open to interpretation. E.g., Pony is faster and safer than LuaJIT or C.


Python is interpreted, C and C++ are compiled. They target different niches. Python is more focused on ease of use than performance.

Usually when writing scientific code in Python you're going to want to at the very least use numpy. It wraps various C functions for many time-consuming tasks.

To get the most out of numpy you're going to have to vectorize your code, as Python's looping constructs are notoriously slow. Instead of writing something like

    for i in range(len(v)):
        v[i] = w[i]**2 + w[i]
You'd write something more along the lines of

    v[:] = w**2 + w
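
Put together as a self-contained script (the array size here is invented), the two versions look like this; on arrays of this size the vectorized form is usually dramatically faster:

    import numpy as np

    w = np.random.rand(1_000_000)
    v = np.empty_like(w)

    # Loop version: one interpreter round-trip per element.
    for i in range(len(v)):
        v[i] = w[i]**2 + w[i]

    # Vectorized version: the whole computation runs in C.
    v[:] = w**2 + w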


It is up to the implementation of a language whether that language is compiled or interpreted. For instance, there are C interpreters and Python compilers; neither language is limited to being one or the other. It's just that people usually use C compilers and Python interpreters, but there's nothing inherent in the languages that requires that.


This is true, but not always in a useful sense. There exist language features whose semantics are expressly dynamic, and if your language includes those features it will frustrate static compilation. For example, if your language includes `eval` (as Python does), then any precompiled binary would also need to include a complete Python interpreter. This is why efforts to compile Python (and other languages that are traditionally interpreted) often omit such features, and why languages that intend to be compiled tend to forgo them in the first place.
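
A tiny illustration of the point (hypothetical snippet): the string passed to `eval` can reference any name or language feature, so an ahead-of-time compiler cannot know what to strip out and must ship the full interpreter.

    import math

    # Nothing about this program reveals, before run time, which functions,
    # modules, or syntax the evaluated string will use.
    expr = input("enter an expression: ")  # e.g. "math.sin(0.5) ** 2"
    print(eval(expr))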


Which is why, for me, the best approach is being able to choose between JIT and AOT deployments, instead of a plain interpreter.

Maybe one day PyPy will finally become the default option.


> It is up to the implementation of a language whether that language is compiled or interpreted.

While that's perfectly true, it didn't seem necessary to go into it.


This seems like it would be great for a list comprehension.

I believe the main appeal of comprehensions is that they speed up iteration at run time, so wouldn't a list comprehension be just as useful here, if not more so?

You could also use the functools and itertools libraries for really fast iteration, possibly, depending on what it is you are trying to achieve.

If you are specifically talking about lists (arrays), why not use a deque? collections.deque is implemented in C and saves run time on appends and pops at either end.
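
A quick sketch of the comprehension point (the data here is invented): both versions compute the same list, but the comprehension's loop machinery runs in C, so it is usually somewhat (not dramatically) faster than the explicit loop.

    w = list(range(1_000_000))

    # Explicit loop: repeated interpreter dispatch plus .append lookups.
    out = []
    for x in w:
        out.append(x**2 + x)

    # List comprehension: same result, loop overhead handled in C.
    out = [x**2 + x for x in w]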


But would that be faster than numpy? numpy in a typical setup uses SIMD or another optimized math kernel (MKL, BLAS, etc.).


It wouldn't.

Pure Python code is never going to outperform a low-level optimized C library, especially not when comparing iterative vs. vectorized code.


Oh, no, of course not. Sorry, I must have missed something here. I was speaking specifically of iterating over values, and numpy is optimized for that kind of thing. Though you can help yourself out and make sure your Python code is also optimized, by using comprehensions and itertools, I would suppose.


Have you tried out any of the options for speeding up Python apps, like Cython, writing the parts in the perf bottleneck in C (if that is applicable), or using ctypes / cffi to call C libraries for some tasks? You can use ctypes or cffi to call either existing libraries or ones you write.
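
For the ctypes route, a minimal sketch calling an existing C library (the library name "libc.so.6" assumes Linux; it differs on other platforms):

    import ctypes

    libc = ctypes.CDLL("libc.so.6")
    # Declare the C signature so ctypes converts arguments correctly.
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello world"))  # 11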


Try out the Numba package: https://numba.pydata.org/
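
For example (a minimal sketch, the function name is invented), Numba's @njit decorator compiles a plain Python loop over NumPy arrays to machine code on first call:

    import numpy as np
    from numba import njit

    @njit
    def square_plus(w):
        # This explicit loop is compiled by LLVM, so it runs at near-C speed.
        out = np.empty_like(w)
        for i in range(w.shape[0]):
            out[i] = w[i]**2 + w[i]
        return out

    print(square_plus(np.arange(5.0)))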


Have you tried Cython? That seems like a much smaller leap than a new language.
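
A minimal sketch of what that leap looks like (file and function names invented; Cython is a typed superset of Python, compiled with the cythonize tool): adding C types to an otherwise Python-looking function lets Cython emit a tight C loop.

    # fast_ops.pyx
    # cython: boundscheck=False, wraparound=False
    def square_plus(double[:] w, double[:] v):
        cdef Py_ssize_t i
        for i in range(w.shape[0]):
            v[i] = w[i] * w[i] + w[i]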


According to their respective Wikipedia articles, D is 8 years older than Go. Given that you're saying Go had a lot of success breaking into the Python market, why couldn't D have done the same if they had put their efforts there from the beginning?


I suspect a large part of Go's initial success is the names attached to it; both Google and the team have great reputations. Go's continued success is a credit to the language, but maybe D has had some discoverability issues?


>According to their respective Wikipedia articles, D is 8 years older than Go.

And Go itself is some years old now (created in either 2007 or 2009, according to its Wikipedia article; presumably those two years refer to initial creation and first public release). So Go is about 8 or 10 years old, which makes D 16 or 18 years old.

>Go had a lot of success breaking into the Python market, why couldn't D have done the same if they had put their efforts there from the beginning?

So when D was new or just a few years old (13 to 16 years ago, say), the Python market was a lot smaller than it is today or has been for the last few years. It might not even have been a target for them, for that or any other reason. Another reason could be that Walter comes from a systems and compiler background, so he might not have been too interested in the domain of interpreted languages (just guessing here; maybe he will comment on this). But BTW, D can be used almost like a scripting language due to its speed of compilation. See the rdmd command and the "Why D?" and "D as a scripting language" articles on the Infognition blog (a Google search away).


I've always been attracted to systems programming, and it's what I know best. Hence D is angled that way.


Cool!


And on embedded, even C++ has hardly managed to overtake C.

https://youtu.be/D7Sd8A6_fYU


>Go (despite its perceived and real faults) has succeeded in this space by delivering better GC, good libraries, static typing and faster programs.

Also fast compilation, which D also has.


Most Algol-derived languages have it; it became a lost art as C and C++ pushed them aside.


Interesting, I didn't know that most had it - I only knew about Pascal and Delphi.



