I thought it was a library for doing nearest-neighbour search. Most of the neural-network literature uses "ANN" (artificial neural network) rather than "NN", I think.
1. Kernels (Functions in NNabla) are mostly implemented in Eigen.
2. The network forward pass is implemented as a sequential run of functions. There is no multi-threaded scheduling, and no multi-GPU or distributed support.
3. The Python binding is implemented in Cython.
4. There is some basic dynamic graph support: functions run as soon as you add them to the graph, and you can run backward afterwards. Somewhat similar to PyTorch (see the sketch after this list).
5. There seems to be no support for checkpointing or graph serialization, unless I'm missing something.
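On point 4, here is a minimal sketch of what the dynamic mode looks like (based on my reading of the docs; treat the exact names, like set_auto_forward, as assumptions):

    # Dynamic ("auto forward") mode: each function executes as soon as
    # it is added to the graph, PyTorch-style.
    import numpy as np
    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF

    nn.set_auto_forward(True)  # assumed switch for define-by-run mode

    x = nn.Variable.from_numpy_array(np.random.randn(4, 8).astype(np.float32))
    h = F.relu(PF.affine(x, 16, name="fc1"))  # computed immediately
    y = F.mean(h)
    print(y.d)    # the value is already available, no explicit forward()
    y.backward()  # gradients flow back through the recorded graph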
I'm not sure why Sony is releasing this (yet another) deep learning framework. I don't see any new problems the project is trying to solve compared to other frameworks like TensorFlow and PyTorch. The code is simple and clear, but nowadays people need high-performance, distributed, production-ready frameworks, not another toy-ish one. Could someone shed some light on this?
The problem is that this library is not an easy-to-bind C project. Otherwise, a high-quality embeddable library that works reasonably well on CPUs and can also benefit from the GPUs commonly found in small systems could be useful for a number of projects. Not every problem involves a huge dataset of complex entries (like millions of images); there are many IoT problems that instead need a self-contained library supporting different kinds of NNs.
It seems like, out of the major libraries (TF/Caffe/Theano/pytorch), pytorch is the only one to have a core that is C (the TH* libraries). It's not exactly a small library, though. One small library that is in C and has some state-of-the-art features is Darknet (https://pjreddie.com/darknet).
That said, it seems like directly using the C++ API was a major use case here, and it looks fairly clean to me.
Does the C (or is it C++?) core of pytorch come directly from torch, or do they add more functionality? Is there a way to interface with this core using C?
One thing that seems promising is built-in support for binary neural networks, which makes sense given its focus on embedded devices. There's no reason this couldn't have been implemented in, say, pytorch - but I'm guessing this library was started a few years back, when there were fewer alternatives available.
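To illustrate, here is a rough sketch of a binarized layer stack; the parametric-function names (binary_connect_affine, binary_tanh) are my reading of the docs, so treat them as assumptions:

    # Hypothetical two-layer BinaryConnect-style classifier.
    import numpy as np
    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF

    x = nn.Variable((32, 784))      # batch of flattened 28x28 images
    x.d = np.random.randn(32, 784)
    h = F.binary_tanh(PF.binary_connect_affine(x, 256, name="bc1"))
    y = PF.binary_connect_affine(h, 10, name="bc2")  # 10-class logits
    y.forward()  # weights are binarized to {-1, +1} on the forward pass

The appeal for embedded devices is that binarized weights can be packed into bit arrays and multiplications become XNOR/popcount operations, which is where the memory and speed win comes from.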
I think every new deep learning / NN library should put itself into more context. How does it compare to all the existing frameworks, like TensorFlow, (Py)Torch, CNTK, MXNet, and Theano? It actually looks pretty similar, which makes this question even more important. From the examples, it might be most similar to PyTorch with autograd, but I'm not sure. So, what are the differences?
I'm not sure what you mean, but I was referencing Sony's illegal and unethical use of a rootkit on every CD they manufactured to hack users' computers so that they could implement DRM (in)effectively[1].
Related question: is there a framework that lets us do very basic tasks without getting deep into NN and ML? For example, an image classifier that takes images under different groups and, when trained, can tell which group a picture most likely belongs to?
I'm not a data scientist, just a potential end-user who doesn't know what input_shape is.
Keras[1] does a pretty good job of making NNs simple to use. That said, you still have to sort of know what's going on. I do think tools like Keras make using NNs and ML easier than a lot of basic programming things.
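To give a feel for it, here is a minimal Keras sketch of the "images under different groups" use case; the data/train/<group>/*.jpg folder layout and all of the sizes are placeholder assumptions:

    # Tiny CNN that learns to classify images into two groups.
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
    from keras.preprocessing.image import ImageDataGenerator

    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(2, activation="softmax"),  # one output per group
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Expects data/train/groupA/*.jpg, data/train/groupB/*.jpg (hypothetical).
    train = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
        "data/train", target_size=(64, 64), batch_size=32)
    model.fit_generator(train, steps_per_epoch=100, epochs=5)

(Here input_shape just means "64x64 pixels, 3 color channels" - Keras needs to know the size of the images you feed it.)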
I believe Google is building, or has built, some services where you can feed in an image or text and it will do some NN magic and spit back the answer.[2]
Mathematica costs money but basically does this. There's a time-limited trial and a cheaper home edition.
Of course, it’s closed source. But there’s literally just a function Classify, where you hand it pairs of examples and it spits out a model of some automatically chosen kind.
This is a flexible image recognition framework written in TensorFlow and TF-Slim.
It allows you to train or fine-tune a NN from the command line by specifying only the few details necessary: which NN architecture to use, the path to custom data, etc.
It's alluding to the nabla symbol (https://en.wikipedia.org/wiki/Nabla_symbol) used in mathematics; the similarity in sound between "nabla" and "NNabla" does not go unnoticed by many students of calculus.
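For context, nabla denotes the gradient operator, which is exactly what a backprop library spends all its time computing:

    \nabla f(x_1, \dots, x_n) = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right)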
I created a Docker image that lets you play with their tutorials. It currently doesn't support the GPU extension, but I plan on using nvidia-docker later this week and will have an image ready to play with. Here's a link for whoever is interested: https://github.com/alcedok/nnabla_notebook_docker