
You can be a little less brute-force if you use something like hyperopt (http://hyperopt.github.io/hyperopt/) or hyperband (https://github.com/zygmuntz/hyperband) for tuning hyperparameters (Bayesian and multi-armed-bandit optimization, respectively). If you're more comfortable with R, caret supports some of these techniques too, and mlr has a model-based optimization package, mlrMBO (https://github.com/mlr-org/mlrMBO).

These techniques should let you explore the hyperparameter space much more quickly (and cheaply!), but I agree - having money to burn on EC2 (or access to powerful GPUs) will still be a major factor in tuning models.
