Built a style transfer workflow using 100% native Flux components. The core functions are divided into three main parts:
1) ControlNet for image composition control. (I used Canny in the sample workflow, but you can swap it for Depth or HED if you prefer; a quick Canny preprocessing sketch follows this list.)
2) IPAdapter for style transfer. (To be honest, the current IPAdapter isn't very powerful yet, at least not for style transfer.)
3) Img2Img to further reinforce the style transfer. (It does a good job of keeping the lighting and color tones of the image relatively consistent.)
4) Additional step: if the final image has too much noise due to high control weights, I applied a high-weight Img2Img re-draw pass, which improves detail and texture.
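For reference, here is a minimal sketch of preparing a Canny control image outside ComfyUI (assuming OpenCV, NumPy, and Pillow are installed; the file names and thresholds are just placeholders):

```python
import cv2
import numpy as np
from PIL import Image

# Load the composition reference and convert to grayscale for edge detection
src = np.array(Image.open("composition_reference.png").convert("L"))

# Canny edge detection; 100/200 are common starting thresholds, tune per image
edges = cv2.Canny(src, 100, 200)

# ControlNet preprocessors typically emit a 3-channel edge map
Image.fromarray(np.stack([edges] * 3, axis=-1)).save("canny_control.png")
```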
By combining these three, you can achieve the general goal of style transfer.
However, when you actually implement this, you’ll need to adjust the parameters according to your specific needs:
ControlNet Strength: I tend to keep it above 0.7. The stronger the subject control you need, the higher you should set the strength.
IPA Strength: Doesn't need to be very high. In theory, the higher the IPA strength, the stronger the style transfer, but in practice high values degrade image quality without improving the transfer much. So I suggest keeping it below 0.5.
Img2Img weight: Img2Img contributes more to overall image quality and to matching the lighting and color than IPA does. I set it around 0.1; if you go up to 0.2, the image composition can start to drift out of control.
Prompt: This is also crucial. Since you can't set the IPA and Img2Img weights too high (doing so increases noise and reduces detail), pairing the right prompt, especially a style prompt, goes a long way toward better images. I used WD14 Tagger to extract prompts from the reference images, included here for your reference.
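To make those knobs concrete: if you prototyped a similar chain in plain diffusers instead of ComfyUI nodes, the three weights would map roughly onto `controlnet_conditioning_scale`, the IP-Adapter scale, and the img2img `strength`. This is only a rough analogy of the idea, not this workflow's node graph; the repo names are examples, and I'm assuming a recent diffusers release that ships the Flux ControlNet and Img2Img pipelines.

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline, FluxImg2ImgPipeline
from diffusers.utils import load_image

prompt = "oil painting, impasto, warm backlighting"  # style tags, e.g. pulled with WD14 Tagger

# ControlNet strength >= 0.7: locks the composition to the Canny map
controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
ctrl_pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")
base = ctrl_pipe(
    prompt,
    control_image=load_image("canny_control.png"),
    controlnet_conditioning_scale=0.7,   # go higher for stronger subject control
    num_inference_steps=28,
).images[0]

# An IPAdapter pass at scale <= 0.5 would slot in here for the style reference;
# it is omitted because Flux IP-Adapter support differs across versions.

# Img2Img weight ~0.1: a very light second pass nudges lighting/color and detail
# without letting the composition drift (above ~0.2 it starts to wander)
i2i_pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
final = i2i_pipe(
    prompt,
    image=base,
    strength=0.1,
    num_inference_steps=28,
).images[0]
final.save("styled_output.png")
```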
Finally, Flux's node ecosystem is still maturing. If your priority is image quality, combining it with SD's IPAdapter might be the better option.
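If you go that hybrid route, the SD-side style pass is well supported in diffusers. A minimal sketch, assuming an SDXL base and the h94/IP-Adapter weights (swap in whatever SD checkpoint you actually use):

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.5)  # style strength; keep modest to preserve detail

styled = pipe(
    prompt="oil painting, warm palette",           # style tags from the reference
    image=load_image("flux_output.png"),           # composition from the Flux pass
    ip_adapter_image=load_image("style_ref.png"),  # style reference image
    strength=0.4,                                  # how much the SD pass may repaint
).images[0]
styled.save("hybrid_output.png")
```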