Smarter Workflows, Stunning Results
Combining ComfyUI, Blender, and Python to power next-generation AI-driven content creation.
AI image generation has advanced rapidly, producing stunning visuals with remarkable realism. However, one challenge remains unsolved: product consistency. While AI can create beautiful images, ensuring that the product always looks exactly as it should, in shape, color, and detail, is still a major hurdle. That’s why we developed a solution that combines 3D digital twins with the latest diffusion models and ControlNets, enabling the creation of visually striking images that maintain complete product accuracy every time.
This approach also lets us simplify one of the most time-consuming parts of visual production: creating believable, beautiful backgrounds. By leveraging AI’s generative power, we can produce unlimited variations of realistic scenes that perfectly complement the product, saving countless hours while maintaining creative freedom and visual consistency.
The Challenge
We found that using ControlNet with any diffusion model greatly reduces creativity. To solve this problem, we created a custom two-step workflow that allows for greater creativity while maintaining the perspective of the shot and the outline of the product, as sketched below.
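Our production pipeline runs in ComfyUI, but the idea translates to code. Below is a minimal sketch of a comparable two-step approach using the Hugging Face diffusers library: a first pass locked to the product outline via a Canny ControlNet, then a creative img2img pass with no ControlNet attached. The model IDs, prompts, file paths, and conditioning scales here are illustrative assumptions, not our exact settings.

```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    StableDiffusionImg2ImgPipeline,
)
from diffusers.utils import load_image

# Step 1: a structure-locked pass. The Canny ControlNet pins the
# perspective and the product outline extracted from the 3D render.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe_structure = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA GPU is available

canny_edges = load_image("product_render_canny.png")  # placeholder path
structure_pass = pipe_structure(
    prompt="product on a marble countertop, studio lighting",
    image=canny_edges,
    controlnet_conditioning_scale=1.0,  # strict: keep geometry exact
    num_inference_steps=30,
).images[0]

# Step 2: a creative img2img pass with no ControlNet attached.
# A moderate strength re-noises the image enough to enrich the scene
# while the composition established in step 1 survives.
pipe_creative = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
creative_pass = pipe_creative(
    prompt="product on a marble countertop, golden hour, lush plants",
    image=structure_pass,
    strength=0.5,  # illustrative: high enough for variety, low enough to keep the outline
    num_inference_steps=30,
).images[0]
creative_pass.save("two_step_result.png")
```

Splitting the run this way means the ControlNet only constrains the pass where geometry matters; the second pass is free to invent background detail the ControlNet would otherwise suppress.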
Below is a comparison of an image generated with the standard ControlNet workflow and one generated with our custom two-step workflow:
The Finishing Touches
The final step in compositing the 3D render with the AI background is matching the reflections and lighting of the newly generated image. To do this, we create a custom HDRI map based on the AI-generated image and perform a new render pass, as sketched below.
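As a rough illustration of this step, here is a minimal sketch using Blender’s Python API (bpy). It assumes the AI-generated background has already been projected to an equirectangular image; the function name and file paths are placeholders. The script wires the image into the world shader as an environment texture, so the product picks up reflections and lighting from the new scene, then re-renders.

```python
import bpy

def relight_with_hdri(hdri_path: str, output_path: str) -> None:
    """Use the AI-generated background as the world environment and re-render."""
    world = bpy.context.scene.world
    world.use_nodes = True
    nodes = world.node_tree.nodes
    links = world.node_tree.links
    nodes.clear()

    # Environment Texture -> Background -> World Output
    env = nodes.new("ShaderNodeTexEnvironment")
    env.image = bpy.data.images.load(hdri_path)
    background = nodes.new("ShaderNodeBackground")
    output = nodes.new("ShaderNodeOutputWorld")
    links.new(env.outputs["Color"], background.inputs["Color"])
    links.new(background.outputs["Background"], output.inputs["Surface"])

    # Render the new pass with the matched lighting and reflections.
    bpy.context.scene.render.filepath = output_path
    bpy.ops.render.render(write_still=True)

# Placeholder paths; "//" is Blender's prefix for paths relative to the .blend file.
relight_with_hdri("ai_background_equirect.exr", "//final_render.png")
```

Because the environment texture drives both illumination and specular reflections in Cycles, this single render pass grounds the product in the generated scene without any manual light matching.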