JSP

I'm just a newbie. I make images for fun. If you want you can like or follow; let’s grow together!
Upscaling in ComfyUI: Algorithm or Latent?

Hello again! In this little article I want to explain the upscaling methods I know and have researched in ComfyUI. I hope they help you, and that you can use them when building your workflows and AI tools. If you have any useful knowledge of your own, please share it in the comments section to enrich the topic. Also, please excuse any spelling mistakes; I am just learning English, hehe.

Let's get to the point! To the best of my knowledge, there are two widely used ways to upscale in ComfyUI (you decide which one to use according to your needs): the Algorithm Method and the Latent Method.

Algorithm Method:

This is one of the most commonly used methods, and it is readily available. It consists of loading an upscaling model and connecting it to the workflow, so that the image pixels are processed the way the user wishes. It is very similar to the upscale option used in the normal way of creating images in Tensor Art.

The following nodes are needed:

A. Load Upscale Model.
B. Upscale Image (Using Model).

These nodes are connected into the workflow between the "VAE Decode" and "Save Image" nodes, as shown in the image. Once this structure is created, you can choose from all the different models offered by the "Load Upscale Model" node, ranging from "2x-ESRGAN.pth" to "SwinIR_4x". You can use any of the 23 available models and experiment with them; just click on the node and the list will be displayed.

The same result can also be achieved in other ways, using a different node such as "Upscale Image By".
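Under the hood, ComfyUI saves a workflow like this as a JSON graph (its "API format"), where each input link is a [source_node_id, output_slot] pair. As a rough sketch, assuming ComfyUI's API-format class and input names, the two upscaling nodes slot in between "VAE Decode" and "Save Image" like this (node ids and the model filename are arbitrary examples):

```python
# Sketch of the algorithm-method wiring in ComfyUI's API (JSON) export format.
# Node ids and the model filename are placeholders; class/input names follow
# ComfyUI's API format as far as I know.

workflow = {
    "8": {  # VAE Decode: latents -> pixels
        "class_type": "VAEDecode",
        "inputs": {"samples": ["3", 0], "vae": ["4", 2]},
    },
    "10": {  # Load Upscale Model
        "class_type": "UpscaleModelLoader",
        "inputs": {"model_name": "4x-UltraSharp.pth"},  # any of the listed models
    },
    "11": {  # Upscale Image (Using Model)
        "class_type": "ImageUpscaleWithModel",
        "inputs": {"upscale_model": ["10", 0], "image": ["8", 0]},
    },
    "9": {  # Save Image now receives the upscaled pixels, not the raw decode
        "class_type": "SaveImage",
        "inputs": {"filename_prefix": "upscaled", "images": ["11", 0]},
    },
}

# The key point: Save Image's "images" input points at node 11 (the
# upscaler) instead of node 8 (the VAE Decode).
assert workflow["9"]["inputs"]["images"][0] == "11"
```

The only change from a plain Text2Image graph is that last rewiring: the image output of "VAE Decode" takes a detour through the upscaler before reaching "Save Image".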
This structure is simpler to create, because only that single node is connected between "VAE Decode" and "Save Image", as shown in the following image. Once the node is connected, you are free to select the mode in which you want to upscale the image (upscale_method) and to set the factor by which the image's pixel dimensions are multiplied (scale_by).

Strengths and Weaknesses of the Algorithm Method:

Among the strengths of this method are its ease of integration into the workflow and the choice between several upscaling models. It also generates quickly, both in ComfyUI and when used in AI tools.

Among its weaknesses, it is not very effective in some specific contexts. For example, the algorithm scales up the image's pixels but does not change what was actually generated, so the resulting image can end up looking blurred in some cases.

Latent Method:

This is the alternative to the algorithm method. It is focused on bringing out image details and maximizing quality, and it is also one of the most used in the Workflow mode of different AI visual-content creation platforms. Here, upscaling is performed while the image is being generated, from latent space (latent space is where the AI takes the data from the prompt, deconstructs it for analysis, and then reconstructs it to represent it as an image).

The "Latent Upscale" node is placed between two KSamplers. The first KSampler is connected to the "Empty Latent Image" node, while the second is connected to the "VAE Decode" node to ensure the correct processing and rendering of the generated image. Note that the "Empty Latent Image" and "VAE Decode" nodes are already included by default in the Text2Image templates in WorkFlow mode.
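The simpler "Upscale Image By" variant can be sketched in the same API-format style. Assuming ComfyUI's node class name ImageScaleBy and its input names (node ids are placeholders):

```python
# Sketch: "Upscale Image By" (class ImageScaleBy) between VAE Decode and Save Image.
# Node ids are arbitrary; input names follow ComfyUI's API export as far as I know.

workflow = {
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "10": {"class_type": "ImageScaleBy",
           "inputs": {
               "image": ["8", 0],
               "upscale_method": "bilinear",  # the node's upscale_method widget
               "scale_by": 2.0,               # the node's scale_by widget
           }},
    "9": {"class_type": "SaveImage",
          "inputs": {"filename_prefix": "scaled", "images": ["10", 0]}},
}

assert workflow["10"]["inputs"]["scale_by"] == 2.0
```

One node instead of two: there is no separate model to load, which is exactly why this variant is quicker to set up.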
(For more information about Text2Image, see my other article, "ComfyUi: Text2Image Basic Glossary".)

For this method to work properly, you have to keep a correct balance between the original size of the image and its upscaled size. For example, you can generate a 512x512 image and upscale it to 1024x1024; but it is not recommended to make a 512x512 image (square) and upscale it to 768x1152 (rectangular), since the shape of the image would not be compatible with its upscaled version. For this reason, pay attention to the values in "Empty Latent Image" and "Latent Upscale", so that they are always proportional.

In the "Empty Latent Image" node you set the original image dimensions (for example, 768x1152), while in the "Latent Upscale" node you set the resized dimensions (for example, 1152x1728). This gives you the freedom to set the image size at your own discretion. I always recommend looking at the size and upscale values of the normal mode in which we create illustrations, as you can see in the image; that way we always know which values to set and which will be compatible. Look at those values, then write them into the nodes listed above.

Once everything is connected and configured, you can produce images of any size you want. Experiment to your taste.

Strengths and Weaknesses of the Latent Method:

On the positive side, this option gives you access to excellent-quality images if everything is configured correctly. It also lets you create images of a custom size and upscale them with the values you want, and it brings out the details in both SD and XL images.

On the negative side, you have to configure everything manually every time you want to change the size or shape of the images.
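The latent wiring described above can be sketched the same way. Note the proportional sizes (768x1152 in "Empty Latent Image", 1152x1728 in "Latent Upscale", both 2:3); the denoise value on the second KSampler is my own illustrative choice, not something from the article:

```python
# Sketch of the latent-method wiring: Latent Upscale sits between two KSamplers.
# Node ids are arbitrary; class/input names follow ComfyUI's API format as far
# as I know. A second pass often uses a lower denoise (e.g. 0.5) - an assumption.

workflow = {
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 768, "height": 1152, "batch_size": 1}},
    "3": {"class_type": "KSampler",  # first pass, from the empty latent
          "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0], "seed": 1, "steps": 20, "cfg": 7,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "10": {"class_type": "LatentUpscale",
           "inputs": {"samples": ["3", 0], "upscale_method": "nearest-exact",
                      "width": 1152, "height": 1728, "crop": "disabled"}},
    "11": {"class_type": "KSampler",  # second pass, refines the upscaled latent
           "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                      "latent_image": ["10", 0], "seed": 1, "steps": 20, "cfg": 7,
                      "sampler_name": "euler", "scheduler": "normal", "denoise": 0.5}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["11", 0], "vae": ["4", 2]}},
}

# Both sizes keep the same 2:3 aspect ratio, as the article recommends.
w0, h0 = 768, 1152
w1, h1 = 1152, 1728
assert w0 * h1 == h0 * w1
```

The proportionality check at the end is the rule from the text in equation form: the original and upscaled sizes are compatible when width0 * height1 == height0 * width1.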
Also, this method is a little slower in the generation process compared to the algorithm method.

Which is better: Algorithm or Latent?

Neither method is better than the other; both are useful in different contexts. Remember that workflows will differ from user to user, because we all have different ways of creating and designing things. It all depends on your taste and on whether you want something simpler or more elaborate. I hope the explanation in this article helps you build more complex workflows and makes it easier to create the images you want.

Extra Tip:

If you cannot find any of the nodes mentioned in this article, you can double-click on any empty spot in the workflow and search for the node by name. Just remember to type the name without spaces.
ComfyUi: Text2Image Basic Glossary

Hello! This is my first article; I hope it benefits whoever reads it. I still have limited knowledge about WorkFlow, but I have researched and learned little by little. If anyone would like to contribute some content, you are totally free to do so. Thank you.

I wrote this article to give a brief explanation of basic concepts in ComfyUI, or WorkFlow. This is a technology with many possibilities, and it would be great to make it easier for everyone to use!

What is Workflow?

Workflow is one of the two main image-generation systems that Tensor Art has at the moment. It is a generation method characterized by a great capacity to stimulate the creativity of its users; it also gives Free users access to some Pro features.

How do I access the WorkFlow mode?

To access WorkFlow mode, place the mouse cursor on the "Create" tab, as if you were going to create an image by conventional means. Once you have done that, click on the "ComfyFlow" option and you are done. After that, you will see a tab with two options: "New WorkFlow" and "Import WorkFlow". The first lets you start a workflow from a template or from scratch, while the second lets you load a workflow you have saved on your PC as a JSON file.

If you click on "New WorkFlow", a tab with a list of various templates will be displayed (each template has a different purpose). The main one is "Text2Image"; it allows us to create images from text, similar to the conventional method we always use. You can also create a workflow from scratch with the "Empty WorkFlow Template" option, but for a better explanation of the basics we will use "Text2Image". Once you click on "Text2Image", wait a few seconds and a new tab will open with the template, which contains the basics for creating an image from text.
Nodes and Borders: What are they and how do they work?

To understand the basics of how a WorkFlow operates, it is necessary to have a clear understanding of what Nodes and Borders are.

Nodes are the small boxes present in the workflow; each node has a specific function needed for creating, enhancing, or editing an image or video. The basic ones in Text2Image are the Checkpoint loader, the CLIP Text Encoders, the Empty Latent Image, the KSampler, the VAE Decode, and Save Image. Note that there are hundreds of other nodes besides these basics, all with many different functions.

Borders, on the other hand, are the small colored wires that connect the different nodes. They determine which nodes are directly related. The Borders are color-coded, and each color is generally tied to a specific function:

Purple relates to the Model or LoRA used.
Yellow connects the model or LoRA to the spaces where you write the prompt.
Red refers to the VAE.
Orange connects the prompt boxes to the "KSampler" node.
Fuchsia refers to the latent, which serves many purposes; in this case it connects the "Empty Latent Image" node to the "KSampler" node and establishes the number and size of the images to be generated.
Blue relates to everything that has to do with images; it has many uses, but in this case it is connected to the "Save Image" node.

What are the Text2Image template Nodes used for?

Having this clear is very useful, since it tells you what each node in this basic template is for. It's like knowing what each piece in a Lego set does and understanding how the pieces should be connected to create a beautiful masterpiece!
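To make the idea of a Border concrete: in the JSON file that a workflow exports to, each wire is stored on the destination node's input as a [source_node_id, output_slot] pair, as far as I know of ComfyUI's API format. A tiny hypothetical example of the yellow CLIP wire:

```python
# A "border" (wire) in ComfyUI's exported JSON is a [source_id, output_slot]
# pair stored on the destination node's input. Node ids, the checkpoint name,
# and the prompt below are placeholders.

nodes = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "example.safetensors"}},
    "6": {"class_type": "CLIPTextEncode",
          # the yellow wire: output slot 1 (CLIP) of node "4" feeds this input
          "inputs": {"clip": ["4", 1], "text": "a cat in a field"}},
}

src_id, out_slot = nodes["6"]["inputs"]["clip"]
assert src_id == "4" and out_slot == 1
```

So a wire's color in the editor is just a visual cue for the data type flowing through one of these pairs.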
Also, once you know what these nodes are for, it becomes easier to intuit the functionality of their variants and other derived nodes.

A) The first is the "Load Checkpoint" node, which has three specific functions. The first is to load the base model, or checkpoint, used to create the image. The second is the CLIP output, which connects the positive and negative prompts you write to the checkpoint. And the third is to connect and help load the VAE model.

B) The second is "Empty Latent Image", the node in charge of processing the image dimensions from latent space. It has two functions: first, to set the width and height of the image; and second, to set how many images are generated simultaneously via the "Batch Size" option.

C) The third is the pair of "CLIP Text Encode" nodes: there will always be at least two of these, since they set both the positive and the negative prompt you write to describe the image you want. They are usually connected to "Load Checkpoint" (or to a LoRA) and also to the "KSampler" node.

D) Next is the "KSampler" node. This node is the central point of the whole WorkFlow; it sets the most important parameters for image creation. It has several functions: the first is to determine the seed of the image and regulate how much it changes from one generated image to the next via the "control_after_generate" option. The second is to set how many steps are used to create the image (you set them as you wish). The third is to determine which sampling method is used and which scheduler that method follows (this helps regulate how the noise is removed while creating the image).

E) The penultimate one is the VAE Decode.
This node assists in processing the image to be generated: its main function is to turn the latent representation into the final image. That is to say, it reconstructs the description of the image we want as one of the last steps of the generation process. The information is then passed to the "Save Image" node, which displays the generated image as the final product.

F) The last node to explain is "Save Image". It has the simple job of saving the generated image and showing the user the final work, which is later stored in the taskbar where all the generated images are located.

Final Consideration:

This has been a short summary and explanation of very basic concepts of ComfyUI mode; you could even say it is a small glossary of general terms. I have tried to give a basic notion that makes this image-generation tool easier to understand. There is still a lot to explain, and I will try to cover every topic, but the information would not fit in a single article (ComfyUI is a whole universe of possibilities). Thank you so much for taking the time to read this article!
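Putting nodes A through F together, a minimal Text2Image graph looks roughly like this in ComfyUI's API export format. Node ids, the checkpoint name, the prompts, and the sampler settings are all placeholders of my own, not values from the template:

```python
# Minimal Text2Image graph sketch, assuming ComfyUI's API-format names.
# Each link is a [source_node_id, output_slot] pair.

workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",          # A) Load Checkpoint
          "inputs": {"ckpt_name": "example-model.safetensors"}},
    "5": {"class_type": "EmptyLatentImage",                # B) size + batch
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "6": {"class_type": "CLIPTextEncode",                  # C) positive prompt
          "inputs": {"clip": ["4", 1], "text": "a lighthouse at sunset"}},
    "7": {"class_type": "CLIPTextEncode",                  # C) negative prompt
          "inputs": {"clip": ["4", 1], "text": "blurry, low quality"}},
    "3": {"class_type": "KSampler",                        # D) sampling core
          "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 20, "cfg": 7,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",                       # E) latent -> image
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",                       # F) final output
          "inputs": {"filename_prefix": "t2i", "images": ["8", 0]}},
}

# Sanity check: every wire points at a node that exists in the graph.
for node in workflow.values():
    for value in node["inputs"].values():
        if isinstance(value, list):
            assert value[0] in workflow
```

This is the same shape as the "Import WorkFlow" JSON files mentioned earlier: the whole template is just these boxes plus the wires between them.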