Lobe – Deep Learning Made Simple (lobe.ai)
390 points by adammenges on May 2, 2018 | 73 comments



This reminds me of Unreal's Blueprints. You could get a surprising amount of work done just via Blueprints without touching C++ code.

You'd have to balance the granularity of the input and output connections - too many would put off users, too few would make users feel restricted. If you manage to find a sweet spot, or let the user pick their level of expertise and reveal connections accordingly, it would be perfect.

I really like how good it looks, and can't wait to use it myself!


Great description of the balance of features between super-users and beginners interested in frictionless AI tooling based on their level of experience. I've used the example of the settings, preferences, and pre-built sketches within the Processing, P5js, and Arduino IDEs to imagine what an AI model manager might look like if it followed this tone. Do you have any other examples of products that meet the requirements of both beginners and super-users?


Unreal was one of the early inspirations for going the visual route - completely agree!


Is there any backstory on how this came together?

I just watched the video, and one of my first thoughts was Mike Matas's 2016 video "The Brain," where he made a neural net within Quartz Composer: https://youtu.be/eUEr4P_RWDA


Yeah! Adam and I met at NIPS 2015 when I demoed a GUI prototype for OpenDeep, talking about ways to let non-coders build neural nets. Around the same time, we saw the QC Brain video Mike posted, and we all started talking and unifying around this vision of helping people who aren't experts get started with designing and adding intelligence to apps. From there we formed the vision for Lobe, and the rest is history!


This is really cool. What will the pricing model be down the road? For instance, if I were to use these models and the cloud API to service a main feature of my application, what ballpark are you thinking of in terms of monthly cost?


Our initial thinking on the pricing model is to base it on compute and keep it as low as possible - we want this to be accessible to as many people as possible and to do things like enable local training with Tensorflow.js for free. For cloud API deployment, we will price around the backend compute cost on AWS/GCP for GPU instances, plus some margin for us maintaining the distributed setup and scaling of serving a machine learning model.


Grow it into a platform and have a marketplace, so people can train a good building block / application (like an image classifier / detector with the proper preprocessing steps), then the trainer/developer can sell it and others can use it! Otherwise people will just export their models and go to a marketplace (like https://algorithmia.com/).


Yep, growing as a platform is the vision! We are committed to accessibility for people using models/components vs. paying, but good point about considering the opportunity cost of someone going to Algorithmia with a model once it is trained.


^ This is a great idea, OP.


My use case for something like this is to scan family videos to index people to scenes/events/locations.

If I could process my catalog of videos (~30 hours) for a reasonably small amount of cash it would save me hours of mucking about with ML/OpenCV, etc... though I'd miss the learning experience.


That makes sense. I would definitely use this. I look forward to hearing more. It looks amazing.


Wish there was a small demo to try, but it looks great; it kind of reminds me of the same type of sandbox/experimentation environment you get with Tensorflow Playground (albeit not constrained to a few sample datasets). I look forward to seeing how this product turns out.


Thanks! We want to integrate Tensorflow.js so the models can run in the browser for easy sandbox/experimentation. We loved the visual aspect of Tensorflow Playground and feel a visual graph approach, where you can see the output of operations, really helps you understand what is happening.


I have a couple of questions -

Do you have a team implementing most of the new state-of-the-art model architectures (given how fast new ones keep getting published)?

If so, I'm assuming you associate certain types of model architectures with the type of data being input? I'm just curious how you'd pick a particular architecture.

On the other hand, AutoML comes to mind, but IMO the biggest hurdle of AutoML and its ilk is the massive computational infrastructure requirement.

But great job, it looks really good and seems pretty intuitive!


Thanks! One of the benefits of Lobe is that users who build models from scratch can publish and share them for use in other documents, like a community model zoo. We do this for the current architectures internally, but the goal is for the community to help keep up with the firehose of state-of-the-art ML.

Something really interesting we have discussed as a future feature is training a model on data about which architectures end up working best for different data types, so that Lobe can use AutoML to suggest better starting templates, or suggest them on the fly while you are building the model.


That sounds amazing! I love the idea of integrating ML inside the user experience itself to create a feedback loop that looks for templates that would suit the user, based on the user's previous models and the type of work they are doing. Would you also layer in some type of semantic meta-tagging and specification engine that allows the user to pull in tags and keywords from other software, to help train their own personalized decision support beyond templates? It would be awesome to connect this to something like https://www.IRIS.AI and get template recommendations alongside recommendations for additional research papers to review on the topic.

I couldn't believe how effective this type of integrator could be until I imported my 7+ years of Evernote notes and tags into my Devonthink document manager. The recommendations I get from both the index of PDFs and my own personal tagged notes create a sum greater than its parts. Your platform looks like an amazing tool to add to that mix.


Oh gotcha! Didn't consider that you'd allow users to write their own models too.

Coming to the community model zoo: would it be free access to any model, and pay for the training and disk usage (FloydHub-like)? Or would you go the Quantopian route?


We are focusing on free access to any model explicitly shared by the user, and paying for training/deploy resources as a service, but we might consider mixing in a paid route for users to monetize their unique trained models.


First time I've read "deep learning" and "simple" in the same sentence where it's actually simple. Looking forward to using it.


This makes us really happy to hear! Thank you!


I personally really like the design, and it looks super approachable. Can't wait to try it out!


Thanks! Super excited to see what you build :)


I'm blown away by the effectiveness of the whole product. While being approachable for beginners, it seems to allow experts to tweak at will. It's a massive "tour de force".

The marketplace play mentioned in other comments seems like a thing to try. I can confirm I would pay for querying predictions through the API.

Such a polished product for 3 folks. Kudos!


Awesomely done! What I really like is that, in addition to the ease of deploying a model, this product also lets you visualize the activations of the neural network. I mean, you could build the visualizations in Jupyter, but the ease of toggling the layers, such as switching off Max-Pool etc., is super helpful for understanding how neural nets work.


Yeah we definitely agree! Thanks for the kind words.


Pretty excited about this. I've been on the edge of my seat waiting to hear back from Google after applying for the AutoML alpha, but this looks even better, especially because it allows exporting the model, which AutoML has not promised yet.

Also, AFAIK the AutoML alpha initially only supports vision tasks, while this allows nearly any input type.


Thanks! Yeah, our approach is that diverse applications need to be able to go in and customize the models to be useful - at least for the next few years, until the algorithms for AutoML get better and replace the engineers :P


Hey everyone! One of the cofounders of Lobe here - let us know if you have any questions.


Congrats on the launch mbeissinger, this is a pretty cool and useful product!


I didn't see any information on pricing. I'm sure this is a bit premature to ask, but any indication of the pricing model (or models considered) may help people who take a look at it. Similar question on licensing and usage options too (there seem to be two usage modes: one where your server does the computation with inputs provided in near real-time, and another where the trained model is downloaded and imported into an app more like a static asset).


Looks cool. Did you have any experience with similar earlier generations of this concept, like Knime and RapidMiner?


Is Lobe only for image data? Would it work for inputs that are text files or similar?


They support more than image input; their examples have inputs such as 3D models, accelerometer data, sound, 3D depth maps, and numeric data.

It goes the other way as well and supports generation.


Yeah! Currently you can work with images and arbitrary vectors (that's what the bounding box examples we show are using, for instance), and we have plans to include native support for text, video, etc. as time progresses.


I guess another question here is what the heuristics are for how many images are necessary for different levels of functionality. The demos look pretty impressive, but I'm not sure how much went into them.


We've been surprised how little data folks have needed. If you look at the examples page, you'll see in the lower right-hand corner of each screenshot the number of examples they uploaded and trained on. For some examples, like the water tank, it's fine to some extent if it overfits on the training data, because the Nest cam will only ever be pointed at the water tank - it's worked in all situations and been robust for us with only ~500 examples. Other times folks are more interested in prototyping an idea to see if it's possible on a wider scale, and a small dataset works well to prove out an idea.


Any plans for Human Action Recognition?


We plan to support it once we get time series data and video upload implemented.


Hi, are you hiring?


This looks awesome. I liked the demonstration of real-world applications with the water tank.

From a deep-learning novice: Can you give a rough idea of the processing cost of doing something like setting up your water tank level recognition?


Thanks! We used the water tank as the first end-to-end test using the product, and made a website that calls the API every minute to monitor the water level in a dashboard.

The architecture implemented using Lobes for object detection is called YOLOv2 (https://pjreddie.com/darknet/yolo/). It is fairly state-of-the-art for that type of problem and has ~70 million parameters being learned (matrices that get multiplied and added together). With a webcam and a GPU over the network, we typically see ~1-5 fps with a lot of network overhead from sending output images - we're looking to make that faster for API deployment. The paper site above shows it using 62.94 Bn FLOPs.
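
For a concrete sense of the dashboard side, the polling loop is conceptually just something like this minimal Python sketch (the endpoint, key, and response shape here are placeholders, not our actual API):

    import time
    import requests

    # Placeholder endpoint and key - not the real API shape
    API_URL = "https://api.example.com/v1/predict"
    API_KEY = "your-api-key"

    while True:
        # Send the latest webcam frame for inference
        with open("webcam_frame.jpg", "rb") as f:
            resp = requests.post(
                API_URL,
                headers={"Authorization": "Bearer " + API_KEY},
                files={"image": f},
            )
        print(resp.json())  # e.g. water level / bounding boxes
        time.sleep(60)      # poll once a minute for the dashboard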


Looks really impressive! Acumos seems to be planning something similar? https://marketplace.acumos.org/#/home


Thanks! Acumos has a similar vision of making it easy to build, share, and deploy AI. We are also heavily focused on the iteration and feedback loop time when developing AI applications, and importantly on helping users understand what is going on with the model. That way you can know the limits of the system in production, or know how/where to improve the data and help reduce bias.


This looks AMAZING! I love the UX approach to balancing simplicity and complexity. For instance, giving granular control over the hyper-parameters in a visual way, with immediate feedback, is a whole level above any other design I've seen out there that tries to make deep learning more accessible.

Awesome work and I hope to see the product grow!


Thanks! Finding the right way and levels to present granularity was definitely a challenge for us developing this product.


I think there are some existing tools like this, but I don't know what the differences are. Can someone give a comparison?

deep learning studio: http://deepcognition.ai/

knime: https://www.knime.com/ (have deep learning plugin)

runwayml: https://runwayml.com/ (in beta, not yet open to the public)


knime and deep learning studio look like tools for ML experts and I am not sure what runwayml does. Lobe appears to be something that a software engineer could use but they don't necessarily need to be a ML expert. We will see if Lobe delivers on that promise, but I think that is the difference.


Knime is a very similar concept, giving you access to building blocks like models, evaluators, input transformers, etc. so that you can make arbitrary ML pipelines. It doesn't have the slick drag'n'drop of input data; instead you use an input building block and point it at a data store of some sort (text files, images, database, etc.). Also, it doesn't generally output models for direct use in other applications, though you can output using PMML. Knime is also very useful for analysts, data scientists, etc. who aren't software engineers.

Lobe has run with the concept pioneered earlier by folks like Knime, RapidMiner, WEKA, etc. They've simplified the process of quickly getting a working model by constraining to one model type and one input type. If your use case matches, it's a great innovation. If not, per usual, no free lunch.


@rpedela Thanks for your comparison


Great, great work! Congratulations!

Do you support transfer learning, for instance with pre-trained models on ImageNet? A lot of problems have limited datasets and can only work by training the last layers of a pre-trained model.
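
To be concrete, the pattern I have in mind looks roughly like this (a minimal Keras sketch, assuming an ImageNet-pretrained base and a hypothetical 5-class problem):

    from keras.applications import MobileNet
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model

    # Load an ImageNet-pretrained base and freeze its layers
    base = MobileNet(weights="imagenet", include_top=False,
                     input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False

    # Only this small new head gets trained on the limited dataset
    x = GlobalAveragePooling2D()(base.output)
    out = Dense(5, activation="softmax")(x)  # hypothetical class count
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])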

And do you support training on cloud-based public datasets? Uploading a large public dataset doesn't make much sense.

Really looking forward to trying your platform!


Looks great, keep up the awesome work!


Thanks!


Very cool, any plans to bring it to desktop as well? Seems like this could be a very useful tool for people getting into deep learning, but may not necessarily be targeting mobile. (From the landing page, it seems mobile-only.)


Yes definitely - we support desktop currently with the Tensorflow SavedModel download if you need the model locally, or via the REST API for an application with online connectivity.
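
For the local route, consuming the downloaded SavedModel looks roughly like this (a minimal TF 1.x-era Python sketch; the tensor names are placeholders, so inspect your model's actual signature):

    import numpy as np
    import tensorflow as tf

    export_dir = "path/to/downloaded_savedmodel"

    with tf.Session(graph=tf.Graph()) as sess:
        # "serve" is the standard serving tag for SavedModels
        tf.saved_model.loader.load(sess, ["serve"], export_dir)
        # Placeholder input - match your model's expected shape
        batch = np.zeros((1, 224, 224, 3), dtype=np.float32)
        # Tensor names are assumptions; check them with:
        #   saved_model_cli show --dir path/to/downloaded_savedmodel --all
        preds = sess.run("output:0", feed_dict={"input:0": batch})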


We are also exploring exporting to other formats to run in different environments locally, or even code generation.


Are there some tools like this but for deep/machine learning internals/theory/math study and research, rather than practical deep learning?


Our vision is to be the tool that starts with great settings for beginners but lets you graduate into the internals as you become more expert - at the lowest level, you can interactively create computation graphs and see their results as you change settings, sort of like eager mode for ML frameworks on steroids (or the visual computation graph programs that designers use, like Origami/Quartz Composer).

The lobes in the UI are all essentially functions that you double-click into to see the graph they use, all the way down to the theory/math.
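
To give a sense of what the eager-mode comparison means in code, here is a minimal TensorFlow sketch (TF 1.x-era eager execution):

    import tensorflow as tf

    tf.enable_eager_execution()  # TF 1.x; on by default in TF 2.x

    x = tf.constant([[1.0, 2.0]])
    w = tf.constant([[0.5], [0.5]])
    y = tf.matmul(x, w)  # evaluates immediately, no Session required
    print(y.numpy())     # [[1.5]]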

If you want more comprehensive ways to learn the theory, I highly recommend Stanford's 231n course (http://cs231n.stanford.edu/) and the Goodfellow/Bengio/Courville Deep Learning book (https://www.amazon.com/Deep-Learning-Adaptive-Computation-Ma...)


@mbeissinger thanks, great to see that Lobe has low levels to play with.


Given how polished it is, I expect this isn't a free service, but I can't see any info about pricing or how payment works.


We are currently accepting applications for private beta users, and don't have public pricing information yet.


Is this a standalone application or a web service?


Web service with the option to download a compiled Tensorflow SavedModel or CoreML file if you need the model locally.


Is this a good way to build a board game AI? If I have a DB of 50,000 replays, would this be the right tool to build it?


We are working on supporting time series data, so that's coming soon.


Great job! Any information about pricing?


We are working with private beta users now and will have pricing up when it's public.


Impressive!


Thanks!


TL;DR - GUI for building Bayesian classifiers?!


Haha, well, a GUI for building most computation graphs if you want - at the core you can get down to doing most TensorFlow operations. We even built a GAN using it!


So, an IDE for Tensorflow. Nice!


Great design!



