(Disclaimer: I am an author of the monograph being discussed)

The authors are very glad about the discussion this monograph has generated and would be very happy if it leads to more research and advances in the foundations and applications of non-convex optimization. Solvers for various problems that arise in machine learning and signal processing applications have seen great improvement in recent years. This includes MILP solvers, which are very useful for scheduling and other problems, as well as LASSO-style solvers. This monograph was an attempt to present a concise and lucid introduction to another set of very useful and highly successful algorithmic techniques for machine learning problems.

As one comment below mentions, the scope of non-convex problems is vast, since it includes essentially any problem that is not convex. However, the non-convexity that arises in machine learning and related areas is often highly structured, and this monograph was an attempt to collect what we know about addressing these specific instances of non-convexity and developing extremely scalable solvers for them. A key component of this effort turns out to be a general avoidance of "relaxations", a common trick that converts a non-convex problem into a convex one so that well-understood techniques can be applied (LASSO-style techniques fall into this category).
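
To make the relaxation idea concrete, consider sparse regression: minimize ||Ax - y||^2 subject to x having at most s non-zero entries. The sparsity constraint is non-convex; LASSO relaxes it to an l1 penalty, giving the convex problem min_x 0.5*||Ax - y||^2 + lam*||x||_1. Below is a minimal numpy sketch of iterative soft-thresholding (ISTA), a standard solver for this relaxation; this is an illustration written for this comment, not code from the monograph, and the function names and iteration count are ad-hoc choices.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t*||.||_1: shrink each entry toward zero.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, iters=500):
        # ISTA for the LASSO relaxation:
        #   min_x 0.5*||Ax - y||^2 + lam*||x||_1
        eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            # Gradient step on the smooth term, then soft-thresholding.
            x = soft_threshold(x + eta * A.T @ (y - A @ x), eta * lam)
        return x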

These relaxation-based approaches frequently struggle to scale. In this monograph we present several algorithmic techniques that avoid relaxations for precisely this reason. These techniques are very intuitive and offer not just excellent performance in practice (much faster than relaxation-based techniques), but provably so (the monograph does indulge in formal analyses of the algorithms). This theoretical line of work dates back at least a decade, to the initial advances in compressive sensing due to the seminal works of Candès, Tao, and Romberg. Those initial results were later shown to be part of a much more general framework, which this monograph also seeks to convey by presenting applications of non-convex optimization techniques to sparse recovery, matrix completion, latent variable models, and robust learning.
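
For contrast, here is a sketch of Iterative Hard Thresholding (IHT), a representative relaxation-free technique: it runs projected gradient descent directly on the non-convex sparsity-constrained problem, replacing soft-thresholding with a projection onto the set of s-sparse vectors. Again, this is an illustration written for this comment, with an ad-hoc step size and iteration count rather than recommendations from the monograph.

    import numpy as np

    def hard_threshold(x, s):
        # Projection onto the (non-convex) set of s-sparse vectors:
        # keep the s largest-magnitude entries, zero out the rest.
        z = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]
        z[idx] = x[idx]
        return z

    def iht(A, y, s, iters=300):
        # Projected gradient descent on min_x ||Ax - y||^2 s.t. ||x||_0 <= s.
        eta = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = hard_threshold(x + eta * A.T @ (y - A @ x), s)
        return x

    # Toy usage: recover a 5-sparse signal from 80 Gaussian measurements.
    rng = np.random.default_rng(0)
    n, m, s = 200, 80, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x_hat = iht(A, A @ x_true, s)
    print(np.linalg.norm(x_hat - x_true))  # should be near zero

The hard-thresholding projection is exactly what makes the problem non-convex, yet under restricted isometry conditions of the kind established in the compressive sensing literature, this simple loop provably recovers the true sparse signal.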

We welcome comments and suggestions from all. This monograph is very much a work in progress, especially given the fast pace at which machine learning is moving forward, and we hope to keep the text relevant to the community by making updates and additions as the field evolves.



