
stan and edward dev here. happy to answer any questions.

(shakir's blog posts are amazing; i recommend them all.)




Very cool, many questions

1) Why create a project distinct from Stan? Was it the prospect of benefiting from all the work going into TF and focusing solely on the sampling procedures rather than on autodiff or GPU integration?

2) Are you implementing NUTS?

3) Any plans to implement parallel tempering?

4) Any plans to handle "tall" data using stochastic estimates of the likelihood?


great questions.

1) you touch upon the right strengths of TF; that was certainly one consideration. edward is designed to address two goals that complement stan. the first is to be a platform for inference research: as such, edward is primarily a tool for machine learning researchers. the second is to support a wider class of models than stan (at the cost of not offering a "works out of the box" solution).

our recent whitepaper explains these goals in a bit more detail:

https://arxiv.org/pdf/1610.09787.pdf

2) no immediate plans. but we have HMC and are looking for volunteers :)

3) same answer as above :) should be relatively easy to implement tempering.

4) this is already in the works! stay tuned!


Why TF instead of Theano, as PyMC3 has done? Shouldn't it be straightforward to port PyMC3's algos over to TF?

My main gripe with Theano is that OpenCL support is near non-existent, but this is also the case with TF.


4) Which approach are you using? The Generalized Poisson Estimator, or estimating the convexity effect of the exponential by looking at the sample variance of the log-likelihood? The former is more pure; the latter may be more practical, if ugly.


these are great insights.

our first approach is the simplest: stochastic variational inference. consider a likelihood that factorizes over datapoints. stochastic variational inference then computes stochastic gradients of the variational objective function at each iteration by subsampling a "minibatch" of data at random.
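to make that concrete, here's a tiny numpy sketch (not edward code; `log_liks` just stands in for per-datapoint log-likelihood terms) of the rescaling that makes the minibatch sum an unbiased estimate of the full-data sum:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 10_000, 100                     # dataset size, minibatch size
    log_liks = rng.normal(size=N)          # stand-in for per-datapoint log-likelihoods

    full_sum = log_liks.sum()              # the sum SVI avoids computing each iteration

    idx = rng.choice(N, size=M, replace=False)    # subsample a minibatch at random
    estimate = (N / M) * log_liks[idx].sum()      # unbiased estimate of full_sum

    print(full_sum, estimate)              # close on average; equal in expectation

the same N/M scaling applies to the gradient of the likelihood term in the variational objective.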

i reckon the techniques you suggest would work as we move forward!


Edit: ah never mind, variational inference, got it! I was thinking stochastic HMC!

---

OK, but that will give an unbiased estimate of the log-likelihood. MCMC and HMC can work with noisy estimators, but they require unbiased estimates of the likelihood itself.

At the very least, you need to do a convexity adjustment by measuring the variance inside your minibatch. Or you can use the Poisson technique, which will get you unbiased estimates of exp(x) from unbiased estimates of x (albeit at the cost of introducing a lot of variance).
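To make the bias concrete: if the noisy estimate is roughly Gaussian around x with variance sigma^2, then its exponential has mean exp(x + sigma^2/2), so exponentiating directly is biased upward, and subtracting sigma^2/2 first corrects it (with sigma^2 estimated from the minibatch in practice). A toy numpy illustration of both fixes; the Gaussian-noise assumption and all the constants here are mine:

    import numpy as np

    rng = np.random.default_rng(1)
    x, sigma, n = 2.0, 0.5, 200_000
    xhat = x + sigma * rng.normal(size=n)        # unbiased, noisy estimates of x

    print(np.exp(x))                             # target: exp(x) ~ 7.39
    print(np.exp(xhat).mean())                   # naive: biased up by ~exp(sigma^2/2) ~ 8.37
    print(np.exp(xhat - sigma**2 / 2).mean())    # convexity-adjusted: ~ 7.39

    # Poisson technique: exactly unbiased for exp(x), at the cost of extra variance
    lam = 2.0                                    # tuning parameter
    def poisson_est():
        k = rng.poisson(lam)
        draws = x + sigma * rng.normal(size=k)   # k fresh unbiased estimates of x
        return np.exp(lam) * np.prod(draws / lam)

    print(np.mean([poisson_est() for _ in range(20_000)]))   # ~ 7.39, but noisier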


great points; yes, this becomes considerably more challenging with MCMC!


if I can bug you also, since you're working with Alp: how does Edward handle the ADVI covariance? Is it diagonal, dense, or some estimated sparse structure?


you may bug me on this. i work too closely with alp :)

edward does not completely implement advi yet. the piece that is missing is the automated transformation of constrained latent variable spaces to the real coordinate space. however, edward offers much more flexibility in specifying the dependency structure of the posterior approximation. diagonal is, just like in stan, the fastest and easiest. however, introducing structure (e.g. assigning a dense normal to some of the latent variables while assigning a diagonal normal to others) is much easier in edward.
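for intuition, the difference is just which reparameterization the posterior approximation uses. a plain-numpy sketch (not edward's actual api; the dimensions are made up):

    import numpy as np

    rng = np.random.default_rng(2)
    D = 4
    eps = rng.normal(size=D)                  # standard normal noise

    # diagonal (mean-field): one scale per latent dimension, no coupling
    mu, sigma = np.zeros(D), np.ones(D)
    z_diag = mu + sigma * eps

    # dense: a full lower-triangular cholesky factor couples all dimensions
    L = np.tril(rng.normal(size=(D, D)))
    np.fill_diagonal(L, np.abs(np.diag(L)))   # positive diagonal keeps the factor valid
    z_dense = mu + L @ eps

in edward you'd express the same choice by picking the distributions that make up the variational family, e.g. diagonal normals for some latent variables and a dense multivariate normal for others.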


OK, I would be interested in seeing how to do that. Are there any examples or hints on how to start? I've worked a lot with time-series models (think nonlinear autoregressive), where there's strong short-term autocorrelation, and the coercion to a diagonal covariance seemed inappropriate.

I also have a naïve question: why not use the graphical structure of the model itself to add structure to the covariance? For example, in an AR model, each time point places a prior on the next time point, so why not assume a banded covariance? More generally, one could use a cutoff on shortest path length (through the model's graphical structure) between parameters to decide whether they should have nonzero coefficients.
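To make the banded suggestion concrete (a toy numpy sketch; the sizes are hypothetical): keep only the diagonals of the Cholesky factor that fall within the model's dependency bandwidth, so an AR(1)-like structure needs just one sub-diagonal:

    import numpy as np

    rng = np.random.default_rng(3)
    T, bandwidth = 6, 1                    # time points; AR(1) -> one sub-diagonal

    L = np.tril(rng.normal(size=(T, T)))
    L[np.tril_indices(T, k=-(bandwidth + 1))] = 0.0   # zero everything below the band
    np.fill_diagonal(L, np.abs(np.diag(L)))           # keep the factor valid

    cov = L @ L.T   # tridiagonal covariance from O(T * bandwidth) parameters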


I came across the examples in the repo and commented on

https://github.com/blei-lab/edward/issues/211

so I'll try to do my homework before asking more questions ;)


Whoa, cool, but the build is failing, tsk tsk.

A big issue I ran into with Stan, even with ADVI, was scaling to large datasets, since it (and Eigen) runs single-threaded. Would Edward answer all my prayers?

When is Riemannian HMC going to arrive?


i'm assuming you're referring to building edward? installation is a bit of a pain because tensorflow is not on pypi yet.

please take a look here: http://edwardlib.org/troubleshooting

edward should answer some of your prayers :) there's still some time until stan goes parallel/gpu, though there's lots of interest there.

riemannian hmc is likely just around the corner!


I was shallowly referring to the Travis CI badge on the GitHub page...

I've been working with both Stan & PyMC3 on some large datasets and will definitely try Edward on them.


ah ok. i agree. we're working on that. :)

give it a shot and let us know!



