Results

Random Image

A random image is generated from the causal VAE model simply by calling it: the causal factors that are inputs to the decoder are randomly sampled, and the decoder generates an image from them.

# Sample random causal attributes and decode them into an image
r_img, r_attrs = vae.inference_model()
plot_model_image(r_img)

The attributes are the causal inputs passed to the decoder. Since the procedural generation scheme takes in the same attributes, we can also generate the original image using the procedural generation code.

We compare the images generated by both the decoder and the procedural generation scheme.
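
To make the comparison concrete, here is a minimal sketch; generate_procedural_image is an assumed name for the procedural generation code rather than a function from this project:

import matplotlib.pyplot as plt

# Hypothetical call: render the ground-truth image from the same sampled attributes
gt_img = generate_procedural_image(r_attrs)

plot_model_image(r_img)  # image produced by the decoder
plt.imshow(gt_img)       # image produced by the procedural generation scheme
plt.show()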

Reconstructed Image

Reconstructing input images is a native capability of autoencoders. We can test the reconstruction capabilities of the causal model by feeding in the inputs and their labels.

# Reconstruct the first 10 test images from their pixels and causal labels
recon_img = vae.reconstruct_img(test_images[0][0:10].cuda(), test_labels[0][0:10].cuda())
plt.imshow(recon_img[1].detach().cpu().permute(1, 2, 0))  # CHW -> HWC for display

Conditioned Images

Conditioning on Latent Variables and Inferring Observed Variables from the Image

In this scenario, we condition on the latent variables, setting the actor's attacking, strength, and defense capabilities to HIGH and the reactor's attacking, strength, and defense capabilities to LOW. We have made the actor stronger and the reactor more vulnerable. The rest of the entities are sampled after conditioning on this evidence/query. Intuitively, we expect the actor to attack, given that its capabilities are high, and the reactor to get hurt as a result. When we run the conditioned model, we get the following image from the decoder.
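
As a sketch, this conditioning could be written with pyro.condition, mirroring the observed-variable case below; the latent site names and the one-hot encodings for HIGH/LOW here are assumptions, and the project's actual pipeline feeds a separately computed conditional distribution into the inference model, as described next.

# Assumed latent site names and HIGH/LOW encodings, for illustration only
cond1 = {
    "actor_attack": torch.tensor([0, 1]).cuda(),      # HIGH
    "actor_strength": torch.tensor([0, 1]).cuda(),    # HIGH
    "actor_defense": torch.tensor([0, 1]).cuda(),     # HIGH
    "reactor_attack": torch.tensor([1, 0]).cuda(),    # LOW
    "reactor_strength": torch.tensor([1, 0]).cuda(),  # LOW
    "reactor_defense": torch.tensor([1, 0]).cuda()    # LOW
}

conditioned_model = pyro.condition(vae.inference_model, data=cond1)
c_img, c_attrs = conditioned_model(cpts)
plot_model_image(c_img)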

The conditional distribution given this evidence is computed separately and fed into the inference model.

The observed variables are sampled from the above conditional distribution. From the sampled values, the labels can be fed into the procedural generation scheme to get the actual image.
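
A minimal sketch of this sampling step, assuming the conditional distribution is available as per-node probability vectors (the node names, probabilities, and the generate_procedural_image helper are placeholders):

import torch

# Placeholder conditional probabilities for the observed nodes,
# e.g. as returned by querying the Bayesian network with the latent evidence
cond_probs = {
    "actor": torch.tensor([0.7, 0.3]),
    "reactor": torch.tensor([0.2, 0.8])
}

# Sample one value per observed node from its conditional distribution
sampled_labels = {name: torch.distributions.Categorical(p).sample().item()
                  for name, p in cond_probs.items()}

# Hypothetical call: feed the sampled labels to the procedural generation scheme
gt_img = generate_procedural_image(sampled_labels)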

Conditioning on Observed Variables and Inferring Latent Variables from the Image

In this scenario, we condition on the observed variables, such as the actor character and its type, and infer the actor's strength, attacking, and defensive capabilities.

querygrain(grainObj, nodes=c("AD", "AA", "AS"), evidence = list(AC="satyr", RC="golem", AT="type1", RT="type3"))

The above code gets us the conditional distribution of the latent nodes of the actor given the evidence of observed variables.


# Sample conditioning statement: condition on the observed variables and infer the latent ones
cond2 = {
    "actor": torch.tensor([1,0]).cuda(),
    "reactor": torch.tensor([0,1]).cuda(),
    "actor_type": torch.tensor([1,0,0]).unsqueeze(0).cuda(),
    "reactor_type": torch.tensor([0,0,1]).unsqueeze(0).cuda()
}

# Condition the inference model on the observed evidence and run it
conditioned_model = pyro.condition(vae.inference_model, data=cond2)
c_img, c_attrs = conditioned_model(cpts)
plot_model_image(c_img)

Intervention Images

In this example, we look at how the condition and intervention statements differ in terms of the resulting probability distributions. The intervention we apply is on the actor's action, setting it to Attack. We then infer the nodes upstream of the actor's action, such as the actor's attacking capability.

# Graph surgery: fix AACT to "Attack", then convert the mutilated network for exact inference
intervention_2_bn <- mutilated(dfit, list(AACT="Attack"))
intervention_2_grain <- as.grain(intervention_2_bn)

We can see that the attacking capability differs between the interventional distribution and the conditional distribution. As before, we infer all the necessary nodes and sample from that distribution.
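
For reference, the same do-intervention can also be expressed directly on the Pyro model with pyro.do, the interventional counterpart of pyro.condition; this is only a sketch, and the site name actor_action and its one-hot encoding for Attack are assumptions.

# Graph surgery on the Pyro model: force the actor's action to Attack
intervened_model = pyro.do(vae.inference_model, data={"actor_action": torch.tensor([1, 0, 0]).cuda()})
i_img, i_attrs = intervened_model(cpts)
plot_model_image(i_img)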
