Love clicking a GitHub icon to go read the source code and learn more… and being taken to a Google Cloud Storage bucket containing four files: a README PDF that's basically just links to several different Google Sheets documenting the data that went into this, two IPython notebooks, and a pretrained model in a big zip file, which I can already tell will be the usual zipped pickle of a massive TensorFlow model that is deeply frustrating to introspect and examine the code behind…
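For what it's worth, if the zip does turn out to hold a standard TensorFlow checkpoint, something like this minimal sketch would at least list the tensor names and shapes without loading the whole model (the file names here are my guesses, not the actual bucket contents):

```python
import zipfile

import tensorflow as tf

# Hypothetical file/directory names; the real bucket contents will differ.
with zipfile.ZipFile("pretrained_model.zip") as zf:
    zf.extractall("pretrained_model")

# If the extracted files form a standard TF checkpoint, this enumerates
# every variable name and shape without materializing the full model.
for name, shape in tf.train.list_variables("pretrained_model"):
    print(name, shape)
```

If it's an actual pickle rather than a checkpoint, even that much introspection goes out the window, which is exactly the frustration.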
This feels like the GAN/capsule-network papers and "releases" I'd look at when I first tried to learn more about machine learning 4-5 years ago, back in the "Middle Ages" of ML, when the paucity of sources enriched the careers of machine learning experts who had no incentive to make their work genuinely accessible beyond a puff-piece show-off paper with a DOI/citation to pad their h-index, or a link to the startup that funded it for a flimsy product concept that, on historical evidence, was over 90% likely never to get beyond the concept or prototype-demo stage.
Edit! I might sound a bit bitter or annoyed, but it's because I've been getting into local ML for privacy and personal productivity, and really trying to understand the capabilities and the hardware those capabilities require. I've been particularly interested in potential robotics applications, and I follow this branch of ML-meets-robotics closely because it affects the space robotics startup I founded. It's undoubtedly interesting work… I'm just deeply disappointed at how much this particular "publication" feels like a throwback to a "worse" time. The paper isn't light on details of how they actually built it, but the code feels like a throw-it-over-the-wall afterthought, and no effort has gone into making the distribution mechanism any more accessible. If the .ipynb files were in a GitHub repo, the notebooks would at least be rendered so I could read them in my browser without downloading them and opening them in a code editor. It feels like there's a code repo somewhere with a script that dumps these files into this bucket, that that repo is where they run everything in Google Colab, and the strong association with Google's DeepMind team makes me suspect that once I do open this up, I'll find something welded solid to Google's custom TPUs and basically useless to me. Hence the mild vitriol: the entire publication leaves a bad taste in my mouth, despite my being extremely interested in the topic and willing to put in the work to understand the current state of the art.
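The workaround I'll probably end up using for the notebooks: nbconvert can render a downloaded .ipynb to static HTML so it's at least readable in a browser (the notebook name below is a placeholder for whatever the bucket actually ships):

```python
from nbconvert import HTMLExporter

# Render a downloaded notebook to static HTML for browser reading.
# "analysis.ipynb" is a stand-in name, not the real file.
body, _resources = HTMLExporter().from_filename("analysis.ipynb")
with open("analysis.html", "w", encoding="utf-8") as f:
    f.write(body)
```

But the point stands that readers shouldn't need a local toolchain just to skim a release.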