Better yet, wait for the official announcement once binaries are compiled and posted for download. I believe there will be a blog post to go along with it, too.
Yes, there'll be an official announcement in a little bit. Tagging is just the first step (we haven't even added the release metadata to the tag on GitHub yet).
This could either be brilliant or a total nightmare:
"Support for arrays with indexing starting at values different from 1. The array types are expected to be defined in packages, but now Julia provides an API for writing generic algorithms for arbitrary indexing schemes (#16260)."
Originally I thought total nightmare but now I'm not sure.
Many people think this was added to support 0-based arrays, but really that's only a side effect. This feature was requested by people who work with rather odd arrays (e.g. diagonal slices through high-dimensional data). I have mentioned in previous threads on the subject that Julia's 1-indexed arrays don't matter much in practice because the generic APIs hide that fact most of the time. This just adds the last bit of code to make that completely true, plus documentation and cleanup. For those seeking more information, I'd recommend Tim Holy's JuliaCon keynote on the subject: https://www.youtube.com/watch?v=fl0g9tHeghA
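A minimal sketch of what "generic algorithms for arbitrary indexing schemes" looks like in practice, using the OffsetArrays package built on this API (function spellings follow current Julia; the names were still settling around 0.5):

using OffsetArrays  # third-party package providing custom index ranges

# A sum written against the generic API: it never assumes indices
# start at 1, only that eachindex(a) yields valid indices for a.
function mysum(a::AbstractArray)
    s = zero(eltype(a))
    for i in eachindex(a)
        s += a[i]
    end
    return s
end

v = [1, 2, 3]                     # ordinary 1-based Vector
w = OffsetVector([1, 2, 3], 0:2)  # same data, indices running 0:2

mysum(v) == mysum(w) == 6         # true: the algorithm never noticed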
> I have mentioned in previous threads on the subject that Julia's 1-indexed arrays don't matter much in practice because the generic APIs hide that fact most of the time.
My guess is that most people who prefer zero-based feel strongly about it, and most people who like one-based don't care that much. If it really doesn't matter that much, I think Julia should've gone with zero. FFTs, Vandermonde matrices (least squares), polynomials, and pretty much anything where the array subscript relates to the mathematics is cleaner with zero-based arrays.
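To make that concrete, here's a sketch of polynomial evaluation with 0-based coefficients (using the OffsetArrays package mentioned above; the package choice is just one way to illustrate the point):

using OffsetArrays

# p(x) = 4 + 3x + 2x^2, with coefficient c[k] multiplying x^k.
c = OffsetVector([4, 3, 2], 0:2)  # indices run 0:2 instead of 1:3

# Reads exactly like the math; with 1-based arrays it would be c[k+1] * x^k.
p(x) = sum(c[k] * x^k for k in eachindex(c))

p(2.0)  # 4 + 3*2 + 2*4 = 18.0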
> My guess is that most people who prefer zero-based feel strongly about it, and most people that like one-based don't care that much.
FWIW, as someone in the latter category (liking one-based indexing), I'd say you're partially correct: I'm not emotionally attached to 1-based indexing the way some 0-based folks seem to be, but it's a practical nicety and one of the things I'm really glad Julia chose to do. Making 0-based the default would definitely reduce its appeal to me.
And Fortran, where it works fine. Perl lets you change the array index offset but everyone warns you that your computer will reach out and slap you in the face if you do. As is often the case, the devil is in the details.
In Perl, is it APL-style, where it changes the index for the entire world (until changed again), or Fortran/Pascal-style, where you are expected to declare the lowest/highest index per array?
Pascal and Ada don't usually use their arrays as matrices in mathematical expressions. Julia is targeted at people using Matlab.
How do you multiply two matrices that use different indexing? Ignore the indexing differences?
It's also problematic when different libraries use different conventions (0-based vs 1-based). Can you use the output of one such library to feed another?
Ada has the notion of attributes applying to types and variables, to access related properties. Among those, there are attributes to get the starting and ending indexes of an array. So it's possible to write completely generic array-processing code, independent of the underlying choice of range for the array. All this from memory, and it's been a long time...
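For comparison, a quick sketch of how Julia spells the same queries, as ordinary functions rather than attributes (current names shown; they differ in older versions):

v = [10, 20, 30]

firstindex(v), lastindex(v)  # (1, 3): the analogue of Ada's 'First and 'Last
axes(v, 1)                   # 1:3, the declared index range of dimension 1

# A loop written against these never assumes where indices start:
for i in firstindex(v):lastindex(v)
    println(v[i])
end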
I used Julia a bit in the same time span, and I was frustrated with the breakage, but I also saw a lot of things that clearly needed breaking changes. I'm glad they're happening.
Until now, [a, b] meant "concatenate a and b if they're arrays, make a 2-element array otherwise". This struck me as both annoyingly inconsistent and as a failure to think recursively.
Now it's always a 2-element array, possibly an array of arrays. And that's a huge change in syntax, and it's for the better. I look at many things in the 0.5 release notes and they're fixes to specific pain points I had. This has me paying attention to the language again.
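Concretely (0.5 behavior; vcat and the semicolon syntax are the explicit spellings for concatenation):

a = [1, 2]
b = [3, 4]

[a, b]      # now always a 2-element array: [[1, 2], [3, 4]]
vcat(a, b)  # explicit concatenation: [1, 2, 3, 4]
[a; b]      # same thing, via the ; concatenation syntax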
Arguably, it is a better numpy based on multimethods, but it is far from being as polished as Python 3.5+. On the contrary, it suffers from kitchen-sink syndrome: continuously adding stuff instead of the continuous clarification and refinement that characterize Python 3 as a language.
To you it is a better numpy/python, to me it is a faster/cleaner/more parallelizable version of R, to some people it is a cleaner/faster version of Matlab.
This could be seen as a "kitchen sink" approach, but it is also very useful when you want to get things done. My gripes about Julia are really minor considering how young and ambitious the project is.
Yes, you are right. It is a better R. I hadn't thought of R because I consider it nothing special in terms of programming language design.
* Higher order functions now specialize on (and possibly even inline!) passed functions
* Anonymous functions are now fast, too
* Fused broadcasting can avoid intermediate allocations and only make one pass through the array (see the sketch after this list)
* User-extensible bounds checks allow custom array types to opt-in to skipping bounds checking, enabling SIMD-ification of some for loops
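A small sketch of the first and third bullets (written against current Julia; the dot-call syntax landed in 0.5, while complete loop fusion arrived in the following release, so the single-pass claim is version-dependent):

# Higher-order and anonymous functions now specialize: this map
# compiles to a tight loop over the range, closure included.
squares = map(x -> x^2, 1:10)

# Dot-calls broadcast any function elementwise, and chained dotted
# operations fuse into a single pass with no temporary arrays:
x = rand(1000)
y = 2 .* sin.(x) .+ 1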
That said, compile times may be a bit longer due to the LLVM upgrade… but this resulted in an even stronger push towards better performance in many other areas.
A feature request: Anaconda partners with Intel and includes MKL by default. Could you do the same for Julia and ship MKL by default, at least from version 1.0, so no manual work is needed?
If you have a specific operation where MKL is noticeably faster than OpenBLAS, report it. OpenBLAS can always be improved if they have specific workloads to target.
Probably because a hacker news comment isn't the best place for a feature request about using a proprietary library and comparing to a commercial distribution of a different language.
By the way, Julia partners with Intel as well, but on things like adding multithreading support to the language, and developing auto-parallelizing compilers (https://github.com/IntelLabs/ParallelAccelerator.jl). Improved support for building Julia with MKL and Intel's C and Fortran compilers is something that Intel would probably like to collaborate with us on as well, but not much has happened there officially yet (these are different groups within Intel).
I haven't thus far been interested in Julia. Unless you're into high-level math, it didn't seem to provide much value, and it did weird stuff with arrays, and wasn't Lua, which gets a free pass for being amazingly well designed in all other respects (arguably, it was well designed in that one as well, but it makes all the array math a pain).
I don't know, maybe it's great. Maybe I should reconsider. But then again, it's strongly typed, which isn't usually my sort of thing. (I'm not working with a team, and my programs haven't devolved into chaos yet, so with no empirical data either way, I'll take my favorite.)
I'm not sure what you mean by "high-level math". It isn't a symbolic language like Mathematica for mathematics research but something along the lines of R for data analysis. The strong typing isn't just there for discipline, but for efficiency in execution.
Since programming is fundamentally mathematics (grin), that means it's a "general purpose language". There's nothing whatsoever preventing it from being a first-class string-manipulation language, for instance.
Granted, the emphasis has been on applied mathematics. I think in the longer run it will branch in several directions, one being realtime simulation - and then on to games. It is close to C++ efficiency (plus or minus) already, and it will only get better.
I haven't looked into Julia's GC implementation in detail, but I'm hopeful that with a little effort it will be possible to avoid "stop the world" collections.
That old saw is getting dull. Because something can be described by mathematics, does not make it mathematics. Else the whole universe would be mathematics.
Programming is the art of getting a particular machine to do something you want. You may use math to get there (like in accounting or swimming) but it's fundamentally a task that's performed by skilled people.
I use Julia. I work in Computational Logistics but some of the code I write is personal.
This week I have used Julia for running SQL against SQLite, manipulating dictionaries, raytracing, statistical analysis, engineering problems, simplex optimization, processing data from Excel, k-means clustering.
I'm pretty sure none of these would be classed as high-level math exactly, though some math is involved.
And it's not as strongly typed for writing code as you seem to think:
function fred(a, b)
    return a * b
end
would work to multiply its numeric arguments or concatenate its string arguments (* is string concatenation in Julia), but it would throw an error on mixed string/numeric arguments.
It also supports duck typing:
function jule(a, b)
    return a.value * b.value
end
will work no matter the types of a and b, so long as they have the required value field.
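For example, with two made-up types (hypothetical names, and current struct syntax; 0.5 wrote "type" instead):

struct Money
    value::Float64
end

struct Count
    value::Int
end

jule(Money(2.5), Count(4))  # 10.0: both arguments have a value field
jule(Money(2.5), "oops")    # errors only because String has no field value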
So, is it doing type inference, or is it weakly typed? I don't know.
I mean, if Julia works for you, that's great. But it seems a bit awkward to me, and until now (and even now), there was the array-indices elephant in the room (it's the Lua problem: even if you support zero-based arrays, they're not supported by convention).
As a Scheme/Lua/JS/Python/Ruby/C/Whatever else programmer (I don't know them all well, but I like to keep my hand in), I just don't see what Julia gets me. But that's just me.
A combination of type inference, just in time compiling and multiple dispatch. The function is compiled at run-time if the signature is not already compiled.
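Concretely, with the fred above (a sketch; the introspection macro is real, but the exact output depends on the Julia version):

fred(2, 3)        # first call compiles a fred(::Int, ::Int) specialization: 6
fred("ab", "cd")  # triggers a separate String specialization: "abcd"

# Each compiled specialization can be inspected, e.g.:
# @code_typed fred(2, 3)  # shows the Int version lowered to an integer multiply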
I've not really found 1-based indexing much of a problem, though it makes translating other code a bit awkward.
The new dynamic indexing scheme (0-based, -10-based, whatever) seems interesting.
What Julia gets me is a stats / machine-learning environment for things I previously had to awkwardly program in Octave.