Deformed generations, particularly of human body parts (e.g. hands, feet), are a common issue with many models. Good negative prompts can mitigate this to some extent. The following example is adapted from this Reddit post.
Using Stable Diffusion v1.5 and the following prompt, we generate a nice image of Brad Pitt, except for his hands of course!
studio medium portrait of Brad Pitt waving his hands, detailed, film, studio lighting, 90mm lens, by Martin Schoeller:6
Using a robust negative prompt, we can generate much more convincing hands.
studio medium portrait of Brad Pitt waving his hands, detailed, film, studio lighting, 90mm lens, by Martin Schoeller:6 | disfigured, deformed hands, blurry, grainy, broken, cross-eyed, undead, photoshopped, overexposed, underexposed, low-res, bad anatomy, bad hands, extra digits, fewer digits, bad digit, bad ears, bad eyes, bad face, cropped: -5
Using a similar negative prompt can help with other body parts as well. Unfortunately, this technique is not consistent, so you may need to attempt multiple generations before getting a good result. As models improve, this type of prompting should become unnecessary, but for now it remains a very useful technique.
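The weighted syntax in the examples above appends the negative terms after a pipe, with a positive weight on the main prompt and a negative weight on the unwanted terms. As a minimal sketch, the helper below assembles such a string; the function name and defaults are hypothetical, and the exact weighting syntax varies between interfaces, so adapt it to the tool you are using.

```python
def build_prompt(positive, negative_terms, pos_weight=6, neg_weight=-5):
    """Combine a positive prompt and a list of negative terms into one
    weighted prompt string in the "positive:W | negatives: -W" style
    shown in the examples above."""
    negative = ", ".join(negative_terms)
    return f"{positive}:{pos_weight} | {negative}: {neg_weight}"


prompt = build_prompt(
    "studio medium portrait of Brad Pitt waving his hands",
    ["deformed hands", "bad anatomy", "extra digits"],
)
```

Here `prompt` evaluates to "studio medium portrait of Brad Pitt waving his hands:6 | deformed hands, bad anatomy, extra digits: -5", matching the format of the example prompt.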
AI models are prone to producing deformed generations, especially of human body parts. Fortunately, negative prompts like the one in the example above can suppress these artifacts.
Combining negative prompts with techniques such as weighted terms explicitly instructs the model to de-emphasize deformed features, which can noticeably improve generations.
It's worth noting that as generative models improve, the techniques described in this article will likely become less necessary. Newer models such as Protogen are often already better with hands, feet, and other body parts.
Blake. (2022). With the right prompt, Stable Diffusion 2.0 can do hands. https://www.reddit.com/r/StableDiffusion/comments/z7salo/with_the_right_prompt_stable_diffusion_20_can_do/