Batch processing Pyro models. Considering the model and the evaluation code below, cc @fonnesbeck, as I think he'll be interested in batch processing Bayesian models anyway.
I want to run lots of NumPyro models in parallel. In particular, my questions are… I created a new post because…
This would appear to be a bug/unsupported feature. If you like, you can make a feature request on GitHub (please include a code snippet and stack trace). However, in the short term your best bet would be to try to do what you want in Pyro, which should support this.

Model and guide shapes disagree at site 'z_2': torch.Size([2, 2]) vs torch.Size([2]). Does anyone have a clue why the shapes disagree at some point? Here is the z_t sample site in the model; z_loc here is a torch tensor wi… Hi, I'm using the latest Pyro and tutorials.
The training step is as f… Hi all, I've read a few posts on the forum about how to use the GPU for MCMC ("Transfer SVI, NUTS and MCMC to GPU (CUDA)", "How to move MCMC run on GPU to CPU", and "Training on single GPU"), but there are a few questions I still have on how to get the most out of NumPyro. There is also a blog post comparing MCMC sampling methods on GPU; although the model there is built in PyMC, it uses NumPyro under the hood.
I am running NUTS/MCMC (on multiple CPU cores) on a quite large dataset (400k samples) for 4 chains x 2000 steps. I assume the memory spike happens upon trying to gather all results (there might be some unnecessary memory duplication going on in this step?). Are there any "quick fixes" to reduce the memory footprint of MCMC?

What tutorial are you running?
0.3.3. Hi, I've been following this tutorial to implement a Bayesian neural net in Pyro, and I'm able to follow it up to the prediction step, where I get a bit confused about the sampling pipeline.