1. Adding and processing datasets
Click "Online Training" on the Tensorart homepage to enter the training workbench.
1.1 Add dataset
1.1.1 Dataset
- Currently supported formats are png/jpg/jpeg, and up to 1000 images can be added for training.
- An uploaded image can be deleted by clicking the icon in its upper right corner.
- Upload the highest-resolution images you can for better training results.
- Augmented data can be added to the dataset, such as cropped and segmented images or mirrored/flipped images.
1.1.2 Regularization dataset
- Regularization is widely used in machine learning and deep learning algorithms. Its essential function is to constrain the training weights, prevent overfitting, and improve the model's generalization ability.
- You can upload a regularization dataset here; it can be generated with the same base model used for training.
- For complete beginners unfamiliar with the training process, skipping the regularization dataset may actually give better results.
Please do not upload any illegal images, such as gory/violent/pornographic/politically sensitive content. Repeatedly uploading illegal images may result in account suspension.
1.2 Batch cropping
1. Cropping method:
Focus crop: crops around the main subject of the image. Center crop: crops the central part of the image.
2. Choose the crop size according to the training base model:
SD1.5 optional sizes:
- 512x768
- 512x512
- 768x512
SDXL optional sizes:
- 768x1024
- 1024x1024
- 1024x768
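To illustrate what center cropping does with the sizes above, here is a minimal sketch in Python. The function name and logic are my own illustration, not Tensorart's implementation; it only computes the crop box, after which the cropped region would be resized to the target size.

```python
def center_crop_box(width, height, target_w, target_h):
    """Return the (left, top, right, bottom) box that center-crops an
    image of (width, height) to the aspect ratio of (target_w, target_h)."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:
        # Image is wider than the target: trim the left/right edges.
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    # Image is taller than the target: trim the top/bottom edges.
    new_h = round(width / target_ratio)
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)

# A 1920x1080 photo cropped for SDXL 1024x1024 keeps a centered square:
print(center_crop_box(1920, 1080, 1024, 1024))  # (420, 0, 1500, 1080)
```

A focus crop would work the same way, except the box is positioned around the detected subject instead of the image center.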
1.3 Automatic tagging
Each uploaded image is automatically tagged; click an image to view its tags. You can also add and delete image tags.
1. If you want the trained character to always keep certain features, delete the prompt words (tags) describing those features.
2. No automatic tagger is 100% accurate. If possible, manually screen the tags once to remove incorrect ones and improve the quality of the model.
1.4 Batch tagging
Tags can currently be added to images in batches, at either the beginning or the end of the caption. A trigger word is usually added at the beginning.
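The batch operation amounts to inserting one tag into each comma-separated caption. A minimal sketch (my own helper, not Tensorart's code) of what that does to a single caption:

```python
def add_tag(caption, tag, position="start"):
    """Add `tag` to a comma-separated caption, at the beginning (the usual
    place for a trigger word) or at the end. Skips captions that already
    contain the tag."""
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    if tag in tags:
        return ", ".join(tags)
    if position == "start":
        tags.insert(0, tag)
    else:
        tags.append(tag)
    return ", ".join(tags)

print(add_tag("1girl, black hair, smile", "mychar"))
# → "mychar, 1girl, black hair, smile"
```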
2. Training parameter settings
2.1 Set the number of repetitions
The number of repetitions of image training, i.e. the repeat parameter. When training locally, this parameter normally has to be adjusted separately in each training dataset folder.
In Tensorart's online training workbench, you can change the number of repetitions of individual images here. If you uploaded an augmented dataset above, you can set different repetition counts for it here.
2.2 Base model
Model theme & base model selection: the workbench presets different training parameters for different themes. Choosing the appropriate base model will make your model training twice as effective! Note: a LoRA trained on one XL base model is unlikely to work with a different one, so choose the base model carefully.
Anime (2D) characters: optional base models AnythingV5/Animagine XL/Kohaku-XL Delta. Training an SD1.5 anime-character LoRA requires AnythingV5; training an SDXL LoRA requires Animagine XL/Kohaku-XL.
Real person: optional base models EpiCRealism (SD1.5)/Juggernaut XL (SDXL). Some training parameters have been preset; adjust them as needed.
2.5D: optional base models DreamShaper/GuoFeng3/DreamShaper XL1.0/GuoFeng4 XL. Training an SD1.5 LoRA requires DreamShaper/GuoFeng3; training an SDXL LoRA requires DreamShaper XL1.0/GuoFeng4 XL.
Standard: uses the SDXL1.0/SD1.5 base model by default. Not recommended unless you have special needs.
Single repeat (Repeat): Repeat is the number of times the AI learns from each image. This value only takes effect for images that have not had a repeat count set individually.
Training rounds (Epoch): an Epoch is one cycle of the AI learning from your images. When all images have completed their Repeat count, that is one Epoch.
Total steps: see the supplement below the table.
Model effect preview prompt word: the prompt word here generates a preset preview image for each version saved per Epoch, used to preview the training effect of the model. This parameter does not affect the training result or the quality of the model; it is only used for the real-time preview image.
The formula for calculating the total number of steps is:
Total steps = number of images in the training dataset × Repeat × Epoch
The total number of steps directly affects the computing power consumed by model training: the more steps, the greater the consumption.
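The formula above can be sketched as a small calculator. This is my own illustration; the optional `batch_size` argument is an assumption for trainers that process several images per step (with `batch_size=1` it matches the formula in the text exactly).

```python
def total_steps(num_images, repeat, epoch, batch_size=1):
    """Total training steps = images * repeat * epoch, divided by the
    batch size when several images are processed in one step."""
    return num_images * repeat * epoch // batch_size

# 20 images, repeated 10 times per epoch, for 10 epochs:
print(total_steps(20, 10, 10))  # 2000 steps
```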
2.3 Professional mode
It is not recommended for beginners to use professional mode
Number of repetitions per image (Repeat): Repeat refers to the number of times AI learns from each image
Training rounds (Epoch): Epoch refers to a cycle of AI learning from your images. After all the images have completed Repeat, this is an Epoch.
The formula for calculating the total number of steps is:
Total steps = number of images in the training dataset × Repeat × Epoch
The total number of steps directly affects the computing power consumed by model training: the more steps, the greater the consumption.
Seed: the random seed. Its effect is unpredictable ("metaphysics"); a random value is fine.
Text Encoder Learning Rate : Adjust the sensitivity of the entire model to tags
If unwanted elements keep appearing in generated images, lower the TE learning rate; if content will not appear unless you heavily weight the prompts, raise the TE learning rate.
Unet Learning Rate: the speed and degree of model learning
A high learning rate makes the AI learn faster but may lead to overfitting. If the model cannot reproduce details and generated images do not resemble the subject, the learning rate is too low; try increasing it.
Learning rate scheduler: the scheduler defines how the learning rate changes over the course of training.
Optimizer: the optimizer determines how the neural network's weights are updated during training. Various methods have been proposed.
Network dimension (DIM): DIM is the dimension (rank) of the LoRA network. The larger the dimension, the stronger the model's expressive ability, and the larger the final file size of the model.
A bigger DIM is not always better. For a single-character LoRA, there is no need to set DIM to 128.
Training Network Alpha value:
While keeping the actual (saved) LoRA weight values large, the weights are always weakened by a fixed proportion during training so that they appear smaller. This "weakening ratio" is Network Alpha.
The smaller the Network Alpha value, the larger the stored weight values of the LoRA network.
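In common LoRA trainers (e.g. kohya-style scripts) the "weakening ratio" is applied as alpha/dim, which makes the relationship above concrete. A minimal sketch, assuming that convention:

```python
def lora_scale(network_alpha, network_dim):
    """Effective multiplier applied to the LoRA weights during training.
    With alpha == dim the weights are used as-is; a smaller alpha scales
    them down, so the optimizer compensates by storing larger raw values."""
    return network_alpha / network_dim

print(lora_scale(32, 32))  # 1.0 (no weakening)
print(lora_scale(16, 32))  # 0.5 (weights halved in effect, stored values grow)
```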
Do not change the default parameters in professional mode at random; doing so may produce even worse results. If you are not sure what a parameter does, leave it alone. Beginners are advised to use basic mode.
Shuffle captions:
Usually, the earlier a word appears in the caption, the more important it is. If the word order is fixed, later words may be learned poorly, and earlier words may form unintended associations with the generated images. Randomly shuffling the word order each time an image is loaded corrects this bias.
Keep the first N tokens: the first N specified words always remain at the front of the caption, which can be used to pin the trigger word.
Here a "word" is a piece of text separated by commas. However many words that text contains, it counts as one "word": in "black cat, eating, sitting", "black cat" is one word.
Noise offset: adds an offset to the noise during training to improve the generation of very dark or very bright images. Do not set it too large; keep it below 0.2.
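The noise-offset technique adds a single random constant (scaled by the offset setting) to the whole noise map of an image, which lets the model learn overall brightness. A simplified 1-D sketch of the idea, not the trainer's actual implementation (real trainers typically draw one Gaussian offset per image and channel):

```python
import random

def apply_noise_offset(noise, offset, rng=random):
    """Shift every element of one image's noise by the same random amount,
    drawn from a Gaussian scaled by `offset`."""
    shift = offset * rng.gauss(0.0, 1.0)
    return [n + shift for n in noise], shift

noisy, shift = apply_noise_offset([0.1, -0.3, 0.7], offset=0.1)
# Every element moved by the same constant `shift`.
```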
Multi-resolution noise attenuation rate:
Multiple resolution noise iterations:
Convolution layer dimension:
Convolution layer Alpha value:
Prompt word & sampling algorithm: used to generate the preset preview image for each version saved per Epoch, to preview the training effect of the model.
3. Training process
Because each machine can only run one model training task at a time, please be patient if you encounter a queue. We will prepare a training machine for you as soon as possible. You can also train during off-peak hours at night.
4. Model testing
Currently, when you find a suitable version and publish it, be sure to upload display images: models published without display images will not be distributed to the homepage. After deployment is completed, you can test your own model on the workbench.
5. Model release/download/retrain
After training is completed, you will see four preview images for each Epoch. You can publish the versions you are satisfied with on Tensorart or save them locally. If you are not satisfied with this training run, you can view the training parameters and retrain from the upper right corner. See the instructions above for how to adjust the parameters.