IC-Light V2: AI Lighting Control Upgrade Demo

1. Tutorial Introduction
IC-Light, short for Imposing Consistent Light, is a project developed in 2024 by Lvmin Zhang, the author of ControlNet. It achieves image relighting through machine learning models, allowing precise AI control over the lighting effects in an image. The accompanying paper, "Scaling In-the-Wild Training for Diffusion-based Illumination Harmonization and Editing by Imposing Consistent Light Transport", received a perfect score of [10, 10, 10, 10] at ICLR 2025.
This tutorial covers IC-Light v2, an upgraded version of IC-Light. Unlike the original, IC-Light v2 is trained on the newly released Flux model, which lets it identify the lighting and tonal characteristics of an image more accurately and produce a more detailed, realistic fusion. IC-Light v2 uses a 16-channel VAE and native high resolution, which helps preserve image details such as skin texture, shadows, and highlights even after the lighting tone is changed.
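As a rough illustration of what a 16-channel VAE implies for detail preservation, the sketch below compares latent tensor shapes, assuming the usual 8x spatial downsampling of latent-diffusion VAEs; the numbers are illustrative, not taken from IC-Light code.

```python
# Sketch: latent shapes for a classic 4-channel SD VAE vs. a 16-channel
# Flux-style VAE. Assumes the typical 8x spatial downsampling.

def latent_shape(height, width, channels, downsample=8):
    """Return the (C, H, W) shape of the latent for a given image size."""
    return (channels, height // downsample, width // downsample)

sd15_latent = latent_shape(512, 512, channels=4)     # classic SD 1.5 VAE
flux_latent = latent_shape(1024, 1024, channels=16)  # 16-channel Flux-style VAE

print(sd15_latent)  # (4, 64, 64)
print(flux_latent)  # (16, 128, 128)
```

Four times as many latent channels at twice the native resolution means far more information survives the encode/decode round trip, which is why fine textures hold up after relighting.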
Effect example:
Prompt: beautiful woman, detailed face, warm atmosphere, at home, bedroom
Lighting Preference: Left
2. Operation steps
1. Start the container and click the API address to enter the web interface

2. AI controls lighting
After entering the web interface, follow the steps below.

Main parameters
IC-Light provides several adjustable parameters that let users fine-tune the image output.
- Prompt:
Describes the main features of the image. Multiple subject descriptions are provided (such as beautiful woman, detailed face and handsome man, detailed face), along with dozens of lighting descriptions; choose according to the scene requirements.
- Lighting Preference:
Provides four lighting preferences: up, down, left, and right. Choose the lighting direction that fits your needs.
NOTE: Lighting preferences are initial latent settings; actual results may vary depending on other parameters.
- Images:
The number of images to generate.
- Seed:
Random seed, used to make results reproducible.
- Image Width/Height:
Set the width and height of the generated image.

Other advanced parameters:
- Steps (default: 25):
The number of iterations the model takes to generate an image. More iterations yield more detail but increase generation time.
- CFG Scale (default: 2):
Text guidance strength. Lower values make generated images more creative; higher values make them follow the input text more closely.
- Lowres Denoise (for initial latent) (default: 0.9):
Denoising strength for the low-resolution pass. Higher values denoise more strongly and produce smoother images.
- Highres Scale (default: 1.5):
The scaling factor for the high-resolution pass. Larger values magnify the image more.
- Highres Denoise (default: 0.5):
Denoising strength for the high-resolution pass. Higher values denoise more strongly, but image details may become blurred.
- Added Prompt (default: best quality):
Positive cues appended to help generate higher-quality images.
- Negative Prompt (default: lowres, bad anatomy, bad hands, cropped, worst quality):
Negative cues that reduce problems such as low resolution, poor anatomy, poor hand detail, cropping, and low quality.

Finally, provide a description of the subject's features (e.g., beautiful woman, detailed face) and a description of the lighting (e.g., sunlight from window).
Here is an example demonstration:

Prompt: mysterious humans, warm atmosphere, neon lights, city
Lighting Preference: Left. With the lighting preference set to left, the generated image is bright on the left and dark on the right.
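Since the lighting preference only sets an initial tendency, one common way such a preference can be realized is a directional brightness gradient used to seed the initial latent. The sketch below is a minimal illustration of that idea, not IC-Light v2's actual implementation.

```python
import numpy as np

# Sketch: build a directional brightness gradient as an initial-latent seed.
# A "left" preference means bright on the left edge, dark on the right.
# The exact mechanism inside IC-Light v2 may differ.

def lighting_gradient(height, width, direction="left"):
    """Return an HxW float image in [0, 1], brightest on the chosen side."""
    ramp = np.linspace(1.0, 0.0, width)       # 1.0 at left edge, 0.0 at right
    if direction == "right":
        ramp = ramp[::-1]
    grad = np.tile(ramp, (height, 1))
    if direction in ("up", "down"):
        ramp_v = np.linspace(1.0, 0.0, height)
        if direction == "down":
            ramp_v = ramp_v[::-1]
        grad = np.tile(ramp_v[:, None], (1, width))
    return grad

g = lighting_gradient(4, 4, "left")
print(g[0, 0], g[0, -1])  # 1.0 0.0 -> brightest at left column, darkest at right
```

The diffusion process then denoises on top of this biased start, which is why the final light direction usually, but not always, follows the preference.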
3. Exchange and Discussion
🖌️ If you find a high-quality project, please leave a message in the background to recommend it! We have also set up a tutorial exchange group; scan the QR code to join, add the note [SD Tutorial], and discuss technical issues and share your results with everyone!
