Hi, thanks for the amazing work on Qwen-Image. It’s an impressive project and the editing results are outstanding!
While using qwen-image-edit for multi-round editing (feeding the output of one round as the input for the next), I’ve noticed a large difference in quality between the Hugging Face demo and a local deployment.
Specifically, the Hugging Face demo retains high visual quality even after 10 consecutive editing rounds, while the locally deployed model begins to show pronounced noise and degradation around the 5th round; the effect is even stronger with qwen-image-edit-2509.
I’m wondering if the Hugging Face version uses any different default parameters, additional preprocessing/postprocessing, or any custom inference settings that help maintain image quality across multiple editing rounds.
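For context, my multi-round protocol is essentially the loop sketched below. The `toy_edit` function is purely illustrative (a resize round-trip plus an 8-bit quantization standing in for a real pipeline's pre-/post-processing, not the actual Qwen-Image-Edit code), but it shows how even small per-round losses can compound when each output is fed back in as the next input:

```python
import numpy as np

def run_rounds(edit_fn, image, prompts):
    """Multi-round editing: the output of each round becomes the
    input of the next round."""
    history = [image]
    for prompt in prompts:
        image = edit_fn(image, prompt)
        history.append(image)
    return history

def toy_edit(image, prompt):
    """Stand-in for a real editing pipeline: an 'identity' edit that,
    like many pipelines, resizes to an internal working resolution and
    back, then round-trips through 8 bits. Both steps are lossy."""
    def resize(img, nh, nw):
        # Separable linear interpolation (rows, then columns).
        ys = np.linspace(0, img.shape[0] - 1, nh)
        xs = np.linspace(0, img.shape[1] - 1, nw)
        tmp = np.stack([np.interp(ys, np.arange(img.shape[0]), img[:, j])
                        for j in range(img.shape[1])], axis=1)
        return np.stack([np.interp(xs, np.arange(tmp.shape[1]), tmp[i])
                         for i in range(nh)], axis=0)

    h, w = image.shape
    small = resize(image, 3 * h // 4, 3 * w // 4)  # illustrative working resolution
    back = resize(small, h, w)
    return np.round(back * 255.0) / 255.0  # 8-bit quantization round-trip

rng = np.random.default_rng(0)
start = rng.random((64, 64))
frames = run_rounds(toy_edit, start, [f"round {i}" for i in range(10)])
errors = [float(np.abs(f - start).mean()) for f in frames]
# errors[0] is 0 by construction; later entries drift away from the original.
```

This is why I suspect per-round pre-/post-processing: if the demo normalizes differently (or skips a lossy step), the gap would grow with every round.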
Thanks again for your excellent work, and I look forward to your reply!