# Results

## Random Image

A random image is generated from the causal VAE model by simply calling the inference model. The causal factors that are inputs to the decoder are randomly sampled, and the decoder generates an image from them.

```python
r_img, r_attrs = vae.inference_model()
plot_model_image(r_img)
```

The attributes are the causal inputs passed to the decoder. Since the procedural generation scheme also takes in the same attributes, we can generate the original image by using the procedural generation code.

We compare the images generated by both the decoder and the procedural generation scheme.

![Randomly Sampled Image.](/files/-MH9cH9XZvW0tXmJKSCf)
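Beyond visual inspection, the two images can also be compared numerically. A minimal sketch, assuming both images are available as equally shaped nested lists; the helper `image_mse` is our own illustration, not part of the tutorial code:

```python
def image_mse(img_a, img_b):
    """Mean squared error between two images given as nested lists of
    identical shape (H x W x C). 0.0 means a perfect pixel match."""
    flat_a = [v for row in img_a for px in row for v in px]
    flat_b = [v for row in img_b for px in row for v in px]
    assert len(flat_a) == len(flat_b), "images must have the same shape"
    return sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
```

The decoder output would first need `.detach().cpu()` and conversion to a nested list (or the same comparison done directly on tensors) before calling this helper.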

## Reconstructed Image

Reconstructing the input images is native to autoencoders. We can test the reconstruction capabilities of the causal model by feeding in the inputs and the labels.

```python
recon_img = vae.reconstruct_img(test_images[0][0:10].cuda(), test_labels[0][0:10].cuda())
plt.imshow(recon_img[1].detach().cpu().permute(1, 2, 0))
```

![Reconstructed Image](/files/-MH9z_2iMMNWqE5qyN-q)

## Conditioned Images

### Conditioning on Latent Variables and Inferring Observed Variables

In this scenario, we condition on the latent variables so that the actor's attacking strength and defense capabilities are ***HIGH*** and the reactor's attacking strength and defense capabilities are ***LOW***. We have made the actor stronger and the reactor more vulnerable. The rest of the variables are sampled after conditioning on this evidence. We intuitively expect the actor to attack, given that its capabilities are high, and the reactor to get hurt as a result. When we run the conditioned model, we get the following image from the decoder.

![Actor attacks and the reactor is dead.](/files/-MHx6gIslgFgme_rE6sf)

{% hint style="warning" %}
The conditional distribution given the evidence is computed separately and fed into the inference model.
{% endhint %}

The observed variables are sampled from the above conditional distribution. The sampled labels can then be fed into the procedural generation scheme to get the actual image.
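The conditioning step described above can be made concrete with a toy example. In the sketch below, the variable names and probabilities are invented for illustration (they are not the tutorial's actual CPTs): we select the rows of a small joint table that match the evidence and renormalise, which gives the conditional distribution the observed variables are sampled from.

```python
# Toy joint distribution over (actor_action, reactor_state).
# Names and numbers are illustrative only.
joint = {
    ("attack", "dead"):  0.40,
    ("attack", "alive"): 0.15,
    ("idle",   "dead"):  0.05,
    ("idle",   "alive"): 0.40,
}

def condition_on_action(joint, action):
    """Return P(reactor_state | actor_action = action) by selecting the
    matching rows of the joint table and renormalising."""
    rows = {state: p for (act, state), p in joint.items() if act == action}
    z = sum(rows.values())
    return {state: p / z for state, p in rows.items()}

posterior = condition_on_action(joint, "attack")  # e.g. P(dead | attack) ~ 0.73
```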

### Conditioning on Observed Variables and Inferring Latent Variables

In this scenario, we condition on observed variables, such as the actor's character and its type, and infer the actor's strength and its attacking and defensive capabilities.

```r
querygrain(grainObj, nodes = c("AD", "AA", "AS"), evidence = list(AC = "satyr", RC = "golem", AT = "type1", RT = "type3"))
```

![Conditional Probability](/files/-MIDAgHAtjH6CvrDmiQm)

The above code gives us the conditional distribution of the latent nodes of the actor given the evidence of the observed variables.
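Before this distribution can drive the Pyro model, its probabilities have to be packed into the `cpts` argument passed to the conditioned model. The exact keys and tensor shapes are the tutorial's own; the sketch below only illustrates the renormalisation step, with list values and node names (`AS`, `AA`, `AD`, following the query above) as assumptions.

```python
# Hypothetical bridge from the gRain query to the model's `cpts` argument.
# The list format is an assumption for illustration.
query_result = {
    "AS": [0.7, 0.3],    # P(actor strength | evidence)
    "AA": [0.65, 0.35],  # P(actor attacking capability | evidence)
    "AD": [0.6, 0.4],    # P(actor defense capability | evidence)
}

def to_cpts(query_result):
    """Renormalise each queried distribution so it sums to 1 before it
    is handed to the inference model."""
    return {
        node: [p / sum(probs) for p in probs]
        for node, probs in query_result.items()
    }
```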

```python
# Sample conditioning statements: condition on observed variables and infer the latent ones.
cond2 = {
    "actor": torch.tensor([1,0]).cuda(),
    "reactor": torch.tensor([0,1]).cuda(),
    "actor_type": torch.tensor([1,0,0]).unsqueeze(0).cuda(),
    "reactor_type": torch.tensor([0,0,1]).unsqueeze(0).cuda()
}

conditioned_model = pyro.condition(vae.inference_model, data=cond2)
c_img, c_attrs = conditioned_model(cpts)
plot_model_image(c_img)
```

![A sample image generated by decoder given the above evidence.](/files/-MIDBALMnGb5hWJ1uOBd)

## Intervention Images

In this example, we will see how conditioning and intervention statements differ in terms of the resulting probability distribution. We intervene on the actor's action and set it to attack. We then infer the nodes upstream of the actor's action, such as the actor's attacking capability.

```r
intervention_2_bn <- mutilated(dfit, list(AACT = "Attack"))
intervention_2_grain <- as.grain(intervention_2_bn)
```

![Intervention distribution of Actor Attack](/files/-MIHXzi-A9KKm68Kmd3b)

![Conditional distribution of Actor Attack](/files/-MIHY3SIDIhtFpDXHswb)

We can see that the attacking capability differs between the intervention distribution and the conditional distribution. As above, we infer all the necessary nodes and sample from that distribution.
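This difference can be reproduced with a minimal two-node example. In the sketch below, all probabilities are invented: conditioning on the action updates the upstream capability via Bayes' rule, while intervening (mutilating the graph, as `mutilated` does above) severs the incoming edge and leaves the upstream prior untouched.

```python
# Two-node toy network: AA (attacking capability) -> AACT (action).
p_aa = {"high": 0.5, "low": 0.5}                    # prior P(AA)
p_act_given_aa = {                                  # P(AACT | AA)
    "high": {"attack": 0.9, "idle": 0.1},
    "low":  {"attack": 0.2, "idle": 0.8},
}

def condition_on_attack():
    """P(AA | AACT = attack): Bayes' rule updates the upstream node."""
    unnorm = {aa: p_aa[aa] * p_act_given_aa[aa]["attack"] for aa in p_aa}
    z = sum(unnorm.values())
    return {aa: p / z for aa, p in unnorm.items()}

def intervene_on_attack():
    """P(AA | do(AACT = attack)): the mutilated graph has no edge into
    AACT's parents, so the upstream distribution is unchanged."""
    return dict(p_aa)
```

Conditioning pushes `P(AA = high)` well above its prior of 0.5, while the intervention leaves it at exactly 0.5, which is the difference visible in the two distributions above.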

![Image generated from intervention distribution.](/files/-MIHySpHZcQo1ElGKxvq)

