I don't see any claims about performance, but I would be very surprised if it was anything better than abysmal. In a modern neural network pipeline, just sending data to the CPU memory is treated as a ridiculously expensive operation, let alone serializing to a delimited text string.
Come to think of it, this is also a problem with the Unix philosophy in general, in that it requires trading off performance (user productivity) for flexibility (developer productivity), and that trade-off isn't always worth it. I would love to see a low overhead version of this that can keep data as packed arrays on a GPU during intermediate steps, but I'm not sure it's possible with Unix interfaces available today.
Maybe there's a use case with very small networks and CPU evaluation, but so much of the power of modern neural networks comes from scale and performance that I'm skeptical it is very large.
> I don't see any claims about performance, but I would be very surprised if it was anything better than abysmal. In a modern neural network pipeline, just sending data to the CPU memory
Notice that the bulk of the data does not necessarily go through the pipeline (and thus through the CPU). You may only send a "token", which the program downstream uses to connect to and deal with the actual data that never left the GPU.
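To make that concrete, here's a tiny, purely hypothetical sketch (not something the layer tool does): the pipeline carries only small string tokens, while the bulk tensors sit in a store that stands in for GPU memory.

    # Hypothetical sketch: only tokens cross the "pipe"; the tensors stay put.
    import uuid
    import numpy as np

    DEVICE_STORE = {}  # stand-in for GPU-resident buffers, keyed by token

    def upstream_stage():
        """Produce a large tensor, park it in the store, emit only a token."""
        activations = np.random.rand(1024, 1024)  # stays "on device"
        token = str(uuid.uuid4())
        DEVICE_STORE[token] = activations
        return token  # this small string is all that goes downstream

    def downstream_stage(token):
        """Resolve the token and keep working on the data in place."""
        DEVICE_STORE[token] = np.maximum(DEVICE_STORE[token], 0.0)  # e.g. a ReLU
        return token

    print(downstream_stage(upstream_stage()))  # only ~36 bytes ever move

In a real setup the store would be something like CUDA IPC handles or a GPU-side tensor registry shared between processes, but the shape of the idea is the same.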
> Come to think of it, this is also a problem with the Unix philosophy in general, in that it requires trading off performance (user productivity) for flexibility (developer productivity)
“Sure, Unix is a user-friendly operating system. It's just picky with whom it chooses to be friends.”
~ Ken Thompson on Unix
But seriously, I would argue that Unix is "superuser" friendly - very friendly to advanced users who like their power tools, and is only unfriendly to those who want to have a more casual relationship with their computer (which admittedly is probably 98% of users).
I am not really a developer anymore, but any system that expects me to use a mouse over a keyboard makes me feel less productive.
Arguably a "complicated mess of cobbled together archaicness" describes most old software, Windows included. I think that's just the nature of how software evolves.
We might be in the middle ages of software development. Think of the way European cities grew naturally versus the grid structures of American cities. Perhaps in the future the art of software development will have progressed to the point where creating a new application results in nice, square lines of code that are perfectly navigable.
I wonder if, at that point, we'll wax nostalgic about the way software used to grow organically. Ahhh, to lose myself once more in the meandering spaghetti of yesteryear...
This is only for inference, which is very cheap already, cheap enough for most applications (real-time video processing an exception). Training is the slow part which is worth putting on a GPU.
I mean, some people I know achieve developer flexibility and performance by just writing their own implementation in FORTRAN. Unfortunately, this is inadvisable for many people and seen as undesirable by even more.
Excerpt: "layer is a program for doing neural network inference the Unix way. Many modern neural network operations can be represented as sequential, unidirectional streams of data processed by pipelines of filters. The computations at each layer in these neural networks are equivalent to an invocation of the layer program, and multiple invocations can be chained together to represent the entirety of such networks."
Another poster commented that performance might not be that great, but I don't care about performance, I care about the essence of the idea, and the essence of this idea is brilliant, absolutely brilliant!
Now, that being said, there is one minor question I have, and that is, how would backpropagation apply to this apparently one-way model?
But, that also being said... I'm sure there's a way to do it... maybe there should be a higher-level command which can run each layer in turn, and then backpropagate to the previous layer, if/when there is a need to do so...
This kind of misses the point of the Unix philosophy of being able to dynamically reconfigure things - realistically, to get decent results, you'll need to do inference with the exact same connections you trained (or at least finetuned) with, so there's no good reason to split the model into smaller parts.
My point was that this is not possible, as the trained layers are intrinsically tightly coupled. You can't combine pre-trained sub-networks in an arbitrary manner without retraining. In all the standard practice of reusing pretrained networks, you would take a pretrained network or part of it, and train some layers around it to match what you need, optionally fine-tuning the pretrained layers as well. If you want to use a different pre-trained embedding model, you retrain the rest of the network.
In your example, the sentiment layer will work without re-training or finetuning only if preceded by the exact same language-embed layer as the one it was trained on. You can't swap in another layer there - even if you get a different layer with the exact same dimensions, the exact same structure, the exact same training algorithm and hyperparameters, and the exact same training data but a different random seed value for initialization, it still can't be a plug-in replacement. It will generate different language embeddings than the previous one - i.e. the meaning of output neuron #42 being 1.0 will be completely unrelated to what your sentiment layer expects in that position, and your sentiment layer will output total nonsense. There often (but not always!) could exist a linear transformation to align them, but you'd have to explicitly calculate it somehow, e.g. through training a transformation layer. In the absence of that, if you want to invoke that particular version of the sentiment layer, then you have no choice about the preceding layers; you have to invoke the exact same version as was done during the training.
Solving that dependency problem requires strong API contracts about the structure and meaning of the data being passed between the layers. It might be done, but that's not how we commonly do it nowadays, and that would be a much larger task than this project. Alternatively, what could be useful is that if you want to pipe the tweets to sentiment_model_v123 then a system could automatically look up in the metadata of that model that it needs to transform the text by transformation_A followed by fasttext_embeddings_french_v32 - as there's no reasonable choice anyway.
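As a sketch of the "linear transformation to align them" idea (synthetic data, hypothetical names): embed the same inputs with both models, fit a linear map from the new embedding space to the old one by least squares, and use it as an adapter in front of the downstream layer.

    # Sketch of fitting an alignment map between two embedding spaces.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, dim = 1000, 64

    old_emb = rng.normal(size=(n_samples, dim))  # what the sentiment layer expects
    A_true = rng.normal(size=(dim, dim))
    new_emb = old_emb @ A_true                   # same texts, different embedder

    # Fit M minimizing ||new_emb @ M - old_emb||_F, i.e. a small adapter layer.
    M, *_ = np.linalg.lstsq(new_emb, old_emb, rcond=None)

    print(np.allclose(new_emb @ M, old_emb, atol=1e-6))  # True: an exact map exists here

Here the map exists by construction; with real embedding models it generally won't be exact (as noted above), so the adapter would have to be trained and validated like any other layer.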
Yes. I understand how neural networks work. In my example, language-embed and sentiment are provided by layer. This allows layer to provide compatible modules. If two modules which are incompatible are used together, they might produce junk output. That is true for any combination of command-line utilities. If I cat a .jpg, I'm going to have a hard time using that output with sed.
What's wonderful about this concept (and the Unix concept in general) is that the flexibility it gives you is amazing. You can, for example, pipe it over the network and distribute the inference across machines. You can tee the output and save each layer's output to a file. The possibilities are endless here.
Great concept. Would like to see more of this idea applied to neural network processing and configuration in general (which in my experience can sometimes be a tedious, hard-coded affair).
I've been thinking about something like this for a long time, but could never quite wrap my head around a good way to do it (especially since I kept getting stuck on making it full featured, i.e. more than inference), so thank you for putting it together! I love the concept, and I'll be playing with this all day!
This might not be a great way to build neural networks (as other commenters have said regarding performance). But, it could be a great way to learn about neural networks. I always find the command line a great way to understand a pipeline of information.
Great idea, but an equally great caveat: it's just for (forward) inference. Unix pipelines are fundamentally one-way, and this approach won't work for backpropagation.
I don't see any reason you couldn't just spit out the output and the derivative of the layer output with respect to the weights, then multiply and carry these all the way down. Then, if you have a loss function at the end, you have the gradient. This project is probably for fun and not scale, so that's fine. But then you need to think about changing the weights on every layer based on the optimization.
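A minimal numpy sketch of that forward-accumulation idea (toy linear layers, made-up shapes): each stage emits its output plus the running Jacobians of that output with respect to every earlier stage's weights, and a loss at the end turns them into gradients.

    # Forward accumulation of weight gradients through a chain of linear stages.
    import numpy as np

    def linear_stage(W, x, carried):
        """y = W @ x; push carried Jacobians one stage further downstream."""
        y = W @ x
        # Chain rule: d(y)/d(W_k) = (dy/dx) @ d(x)/d(W_k), with dy/dx = W.
        new_carried = {name: W @ J for name, J in carried.items()}
        # This stage's own Jacobian d(y)/d(W), with W flattened row-major.
        new_carried[f"stage{len(carried)}"] = np.kron(np.eye(W.shape[0]), x.reshape(1, -1))
        return y, new_carried

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)
    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

    y1, carried = linear_stage(W1, x, {})
    y2, carried = linear_stage(W2, y1, carried)

    dL_dy2 = 2 * y2  # d/dy of a squared-error loss against a zero target
    grads = {name: dL_dy2 @ J for name, J in carried.items()}
    print({name: g.shape for name, g in grads.items()})  # one gradient per stage's weights

As the parent notes, the extra wrinkle is then feeding those gradients back so each stage can actually update its weights, which a one-way pipe doesn't give you for free.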
Wonderful idea and the Chicken Scheme implementation looks nice also.
I wrote some Racket Scheme code that reads Keras trained models and does inferencing but this is much better: I used Racket’s native array/linear algebra support but this implementation uses BLAS which should be a lot faster.
Unless you use an extension that hides your referer. Which you should - it's a needless privacy leak, and permits people to play stupid games like this. I use Smart Referer.
The problem is that every time somebody posts a link to his site, a bunch of HN folks go and say "hur hur" in the comments. Jamie doesn't have patience for fools.
Well - I will say I like the general concept. I just wish it wasn't implemented in Scheme (only because I am not familiar with the language; looking at the source, though - I'm not sure I want to go there - it looks like a mashup of Pascal, Lisp, and RPN).
It seems like today - and maybe I am wrong - data science and deep learning in general have pretty much "blessed" Python and C++ as the languages for such tasks. Had this been implemented in either, it might receive a wider audience.
But maybe the concept itself is more important than the implementation? I can see that as possibly being the case...
Great job in creating it; the end-tool by itself looks fun and promising!
Author here; thanks for the feedback. The path to Scheme was a bit haphazard - I got into Clojure a few years ago after exposure to some nice Clojure-based tools at work, and then worked on implementing several classical machine learning techniques in Clojure for fun.
When I came up with the idea of chaining and piping neural network layers on the command line, I also came across CHICKEN Scheme which promised to be portable and well-suited for translating the Clojure-based implementation I had previously done. As you can probably imagine, the porting process was a lot more involved than I expected, but nevertheless I had a BLASt (pun intended) hacking on it.
Clojure is ideal for Unix-style pipeline programming. Few people will use the command line for neural network inference, so I think it would be better to implement this in Clojure.
[The Pure Function Pipeline Data Flow](https://github.com/linpengcheng/PurefunctionPipelineDataflow)
> looking at the source, though - I'm not sure I want to go there - [Scheme] looks like a mashup of Pascal, Lisp, and RPN
Cool, you got to discover Scheme today! It's one of the classical languages that defines the programming world we live in.
> data science and deep learning in general has pretty much "blessed" Python and C++ as the languages for such tasks.
It's reasonable to expect that the languages a community uses for its programs roughly mirror the broader programming community, unless you have a very severe historical isolation of that community (e.g., MUMPS in medical informatics). Python and C++ are extremely common languages. You should expect the usual long tail of other languages as well. And under the hood, it's really all about CUDA anyway.
There's nothing wrong with implementing the tool in Scheme, but the problem is that typical ML frameworks implemented in Python use Python as their "glue" language (which already can be somewhat problematic performance wise). This approach is using a text serialization and sh as the glue language.
Sure, it's conceptually neat, but for exploration, it's not even competitive with regular Python, let alone e.g. Python in a Jupyter notebook.
I could see an approach using scheme itself as the exploratory glue language being quite competitive. Dropping down into shell pipelines is decidedly worse.
I think the pipelines are absurd, too (actually, I think the entire Unix shell is a Rube Goldberg contraption that we would be better off without), but that's not the focus of the comment I was replying to.
>I just wish it wasn't implemented in Scheme (only because I am not familiar with the language
It is a functional-first, small, and clean variety of Lisp. A much more beefed-up cousin is Racket, but some implementations of Scheme (in particular Guile and CHICKEN) have excellent ecosystems.
Scheme is a language that deserves to be used more. Its simplicity is deceptive. It is extremely powerful because, although the basic components are simple, there is virtually no limitation on how they may be composed.
There is literally only a single drawback to using Scheme (it might appear that being dynamically typed is a weakness, but both CHICKEN and Racket offer typed variants): it is sadly extremely unportable. The specification for Scheme is very small, and a large number of implementations exist which go beyond this standard - so basically none are compatible with each other.
Quite to the contrary, the limitations of functional languages are intended to make code easier for humans to reason about. Making small programs is one thing, but the combinatorial explosion in potential states as you approach large programs and systems of programs makes it difficult to near impossible for humans to reason about software. You can only keep so much in your head at any given time.
Even so, you can write code imperatively in Scheme; it's just a bit less natural. The keyword 'begin' has the same semantics as Common Lisp's 'progn', which allows imperative code to be written.
I was careful to say "functional-first". Racket has a fully-fledged object system, too.
I find I am a much "better" programmer in Scheme than in any other language, in the sense that idioms that I would normally struggle to express (at all, let alone cleanly) simply flow out as if I had invented them myself. Programming in Scheme is hugely fun.
If it were true that humans thought "imperatively", then we'd all still be using languages with GOTO.
Your profile says you’re interested in the history of computing (even going so far as to learn PHP and BASIC). I’d say this is a perfect opportunity to learn Scheme, which is not only a great language but one of the most important languages in the history of the field.
I wouldn't say that Python and C++ are the be-all and end-all languages for ML. They do have a critical mass, but I still think projects like the OP's can help us figure out what's a good interface for ML, and lessons from smaller projects can inform the design and interface of larger ones (e.g., the Keras API helped inform the new TF API).
Also, I'm skeptical that TensorFlow will be the deep learning library we'll use 5-10 years down the line, so it's always nice to see smaller projects that try to do something different.
You're not the only one who has said something to this effect, so maybe I do need to look into it more. Lisp is something I've tried to get into, but never went much beyond simple "hello world" implementations. So maybe I need to give it another shot.
Because all the time spent marshalling and unmarshalling data structures into bags of textual bytes takes up CPU and does nothing but hasten the heat death of the universe. We have better models for that sort of thing. See: PowerShell.