
So to me the biggest semantic questions are the following:

  - Does what you have to write to run on X follow the semantics of the language?
  - Can you use data structures/code defined in libraries that don't know about your thing?
In the case of Julia on TPU here, the answer is yes! (Surprisingly, perhaps, and getting this to work is pretty hard.) In particular, you get a lot of Julia's language features: multiple dispatch, control flow, etc. It's a bit of a subset of the full language (e.g. no mutation at the moment), but everything that's supported is just standard Julia, and we're working on growing that subset more and more.
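To make that concrete, here's a minimal sketch (all names are illustrative, not any real TPU API): the function below is plain, generic Julia with ordinary control flow and broadcasting, avoids mutation, and has nothing device-specific in it, so it stays inside the kind of subset described above.

```julia
# Plain Julia: generic code with control flow and dispatch,
# nothing TPU-specific in the function itself.
relu(x) = max(x, zero(x))           # works for any numeric element type

function cascade(x, n)
    for _ in 1:n                    # ordinary Julia control flow
        x = relu.(x .- 0.5) .+ x    # broadcasting, no mutation
    end
    return x
end

# On CPU this is just standard Julia:
cascade([0.2, 0.9], 3)
# A hypothetical TPU path would compile the same function unchanged,
# e.g. tpu_compile(cascade, tpu_array, 3)  # name is illustrative only
```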

That approach is very different from something like TensorFlow, where you're essentially metaprogramming an expression graph. Numba probably counts for Python (yes, you have to put an annotation on things, but if the Python people really wanted to, they could probably import Numba into core CPython and make it work more smoothly). Of course, in Python you have the additional complication that most of the core implementation is not in Python itself, so even if you satisfy my two criteria above for the core language, you're still going to have to rewrite the whole standard library.




It's a nice demo, but reusing Julia libraries not intended for TensorFlow seems like a fragile thing? Just because it works today doesn't mean the authors won't inadvertently break it by using something outside the portable subset.

It seems like for non-demo usage, you would want upstream maintainers to agree that their code should be TensorFlow-compatible, and have tests keeping it working.


You are correct that there is a social aspect to making Julia packages work well. That's part of the reason so many Julia packages are organized under various GitHub organizations: to make sure these kinds of discussions have a place to happen. However, I don't really see that as a negative thing. It gets package developers to talk to each other and delineate the abstractions for their packages more clearly. And in the end, I don't see it as all that different from package development in any other language. Your users will always do things that you didn't intend, and then you have to decide whether their use case is in scope for your project or not and act accordingly.

Aside: this is not TensorFlow, but XLA, which are two very different things. It's also possible to try this kind of thing and generate a TF graph, but TF is a much less nice compilation target.


Another thing I should have mentioned here is that Julia's multiple dispatch helps a lot with this problem, since you can provide specializations in the dependent package. So, e.g., Flux.jl only needs to have generic, CPU, and GPU code, and I can provide TPU specializations (where necessary - hopefully not often) for any function that needs them. (Yes, talking to upstream is required here too, to make sure they're aware we're doing it, but at least they don't have to maintain it.)
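As a sketch of how that works mechanically (the type and function names here are made up for illustration, not Flux.jl's or any real TPU package's API): the upstream package ships only a generic method, and the downstream package adds a method on its own type.

```julia
# --- "upstream" generic code (what a package like Flux.jl ships) ---
fastsum(xs) = sum(xs)                  # generic fallback, works anywhere

# --- "downstream" TPU package ---
struct TPUArray{T}                     # stand-in for a device array type
    data::Vector{T}
end

# The specialization lives downstream; upstream never maintains it.
fastsum(xs::TPUArray) = sum(xs.data)   # imagine an XLA kernel here instead

fastsum([1, 2, 3])                     # dispatches to the generic method
fastsum(TPUArray([1, 2, 3]))           # dispatches to the TPU method
```

Multiple dispatch picks the method from the argument type at the call site, so upstream's generic code calls the TPU specialization without ever importing the TPU package.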


Hmm, I guess that's okay as long as they don't change the code to call any new functions?

The set of functions that some generic code calls can be considered similar to an interface or trait in other languages. If they expand the interface (by calling new functions), then you'd need to make sure the new functions they call have appropriate implementations as well.
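Here's a small Julia sketch of that hazard (all names illustrative): the "interface" is just whatever functions the generic code happens to call, so calling one more function silently widens it, and an outside type only finds out at runtime.

```julia
struct MyNumber
    x::Float64
end
Base.:+(a::MyNumber, b::MyNumber) = MyNumber(a.x + b.x)

# Upstream generic code; its implicit interface so far is just `+`.
total(v) = reduce(+, v)

total([MyNumber(1.0), MyNumber(2.0)])   # works

# If upstream later also calls `zero`, the interface silently grows:
total2(v) = reduce(+, v; init = zero(eltype(v)))

# total2([MyNumber(1.0)])  # MethodError: no method matching zero(::Type{MyNumber})

# The fix, again downstream, is one more specialization:
Base.zero(::Type{MyNumber}) = MyNumber(0.0)
total2([MyNumber(1.0), MyNumber(2.0)])  # now works
```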

In Go, for example, the interface definition would be explicit. You'd add a method to the interface (or perhaps define a new interface) and update all the implementations you know about. If there is any outside code calling it with their own implementation, they'd get a compile error.

It does sound rather convenient if essentially every function call allows for new implementations, though.



