Using AI-generated images to explain why you may not want AI to write your code.

In the last few months, AI portals such as ChatGPT have allowed many more people to access Large Language Models (LLMs), which seem to understand a user-supplied prompt and can generate plausible output. LLMs are trained on large amounts of publicly available data, including programming examples in a wide range of languages. This allows an LLM to output code sequences that have, in effect, already been written. Depending on the prompt and the data used to train the model, the code may or may not be accurate.

To see how good generated output can be is difficult to judge directly. For this example, the image-generation tool Stable Diffusion was used. The tool was given three pieces of information: a guide image, a text prompt, and a random seed.

The same guide image was used each time, and for each prompt three images were generated with different random seeds. This is the guide image:

A basic image with a red background and some blue and green strokes.

In the following sets of three images, note how the Stable Diffusion tool has used the prompt and guide image to query its training data. If the prompt, seed and guide image are the same each time, the same image will be generated: the image generation is deterministic. By analogy, if this were code generation, the prompt would describe the problem and the generated code would be based on the relevant training data, that is, on code sequences tagged with information that the LLM has extracted from the prompt text. Hopefully the output images here show how the text generated by tools such as ChatGPT and Bard can appear to be a reasonable answer to the prompt and yet still have issues.
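The determinism described above can be illustrated with a toy sketch. This is not the real Stable Diffusion pipeline; the `generate` function below is a hypothetical stand-in that simply hashes its three inputs, so that, like the real tool, identical (guide image, prompt, seed) triples always produce identical output, while changing only the seed produces something different:

```python
import hashlib

def generate(guide_image: bytes, prompt: str, seed: int) -> str:
    """Toy stand-in for an image generator: deterministically maps
    (guide image, prompt, seed) to an output 'image' digest."""
    h = hashlib.sha256()
    h.update(guide_image)
    h.update(prompt.encode())
    h.update(str(seed).encode())
    return h.hexdigest()

guide = b"red background with blue and green strokes"
prompt = "a landscape painting"

# Same three inputs -> identical output: generation is deterministic.
a = generate(guide, prompt, seed=42)
b = generate(guide, prompt, seed=42)
assert a == b

# Changing only the seed gives a different output for the same
# guide image and prompt, which is why each prompt below yields
# three different images.
c = generate(guide, prompt, seed=43)
assert a != c
```

The same reasoning carries over to code generation: with the model, prompt and sampling seed fixed, the generated code is reproducible; vary the seed and plausible but different code comes out.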

Output Images

For the guide image and each prompt, three images have been generated. Many more could be generated for each guide/prompt pair, but three are enough to show how strangely different the output is in each case. The tool can also use style guides.