MM744

Rainbow Candy
AI enhance & AI create
14 Followers · 0 Following · 27 Runs · 1 Download · 274 Likes

Articles

How can I reuse the same model across different programs? (Forge, ComfyUI, SwarmUI) (Windows)

How to use the same model in different programs without duplicating it? (Forge, ComfyUI, InvokeAI, Fooocus, SwarmUI, Kohya_SS...) (Windows)

Short answer: use the symbolic link feature (a 0 KB "duplicate" file).

1) Installation
1. Download Link Shell Extension
2. Install HardLinkShellExt_X64.exe

2) Usage
1. Find the model you want to reuse
2. Right-click the model > Show more options (Win11) > Pick Link Source
3. Go to the folder where you want to place the link to the original model
4. Right-click > Show more options (Win11) > Drop As... > Symbolic link
5. Done

3) Important note!
Do not copy the link file to another location (that copies the original file instead).

4) Explanation
Symbolic links (soft links):
- Think of them as a shortcut or pointer to the original file/directory
- Just a reference containing the path to the target
- If you delete the original file, the symlink becomes broken
- Can link to files across different filesystems
- Can link to directories
- Show a different inode number than the original file
ln -s original.txt shortcut.txt

Hard links:
- Think of them as another name for the exact same file
- Point directly to the file's data on disk (same inode)
- If you delete the original, the hard link still works
- Can't link across different filesystems
- Can't link to directories
- Share the same inode number as the original file
ln original.txt another_name.txt

Pros and cons:
Symbolic links:
- Can link across filesystems
- Can link to directories
- Easy to identify as links
- Can use relative paths
- Break if the original is moved/deleted
Hard links:
- Don't break - they work even if the original is deleted
- Same performance as the original file
- Save disk space (share the same data)
- Cannot span different filesystems
- Cannot link to directories
- Harder to identify as links
- Must use absolute paths
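If you prefer not to install a shell extension, Windows' built-in mklink command can create the same kinds of links from a Command Prompt; a minimal sketch, where all paths are placeholders for your own folders (symbolic links need an elevated prompt or Developer Mode, hard links do not):

:: Symbolic link to a single model file (replace both paths with your own).
mklink "D:\ComfyUI\models\checkpoints\model.safetensors" "D:\Models\model.safetensors"

:: Hard link variant - same volume only, no admin rights required.
mklink /H "D:\ComfyUI\models\checkpoints\model.safetensors" "D:\Models\model.safetensors"

:: Symbolic link to a whole folder (e.g. share one LoRA folder between tools).
mklink /D "D:\ComfyUI\models\loras" "D:\Models\loras"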
Fastest FLUX generation on Windows (ComfyUI+Flow)

1) Download and install the latest ComfyUI
1. Download ComfyUI
2. Unpack the archive
3. Update ComfyUI and its dependencies to the latest version: open the update folder > run update_comfyui_and_python_dependencies.bat > "Press any key to continue" > Enter
4. Download all models and put them in these folders (see the placement check sketch at the end of this guide):
- flux1-dev.safetensors to ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\unet
- ae.safetensors to ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\vae
- clip_l.safetensors to ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip
- t5xxl_fp16.safetensors to ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\clip

2) Download and install the latest ComfyUI-disty-Flow
1. Open Windows Terminal inside ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes
2. Run the install command:
git clone https://github.com/diStyApps/ComfyUI-disty-Flow

3) Generation guide
1. Run ComfyUI - run_nvidia_gpu.bat
2. Open in a browser - http://127.0.0.1:8188/flow
3. Select a generation type: Flux Dev or Flux Dev + Lora
4. Select all models and set up the parameters (Flux Dev example)
5. Enter a prompt and generate
Speed for the parameters shown in the image (RTX 4060 Ti 16GB):

4) Download and install the latest ComfyUI-Manager and ComfyUI-Crystools (optional)
1. Open Windows Terminal inside ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes
2. Run the install command:
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
To update ComfyUI and all nodes:
1. Open the manager at http://127.0.0.1:8188
2. Update All
3. Restart
4. Search for Crystools in the manager and install it for usage monitoring.

Note: this tutorial is also useful for Flux Schnell, SD, SDXL, etc. The model downloads above are the default versions; you can use quantized models instead, such as BNB NF4, GGUF Q8, or FP8.
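To double-check the model placement from step 1.4, a minimal batch sketch you can save as a .bat file; the ROOT path is an assumption, point it at wherever you unpacked the portable archive:

@echo off
:: Verify that the FLUX model files are in the folders ComfyUI expects.
:: ROOT is a placeholder - set it to your own ComfyUI folder.
set "ROOT=C:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI"

for %%F in ("models\unet\flux1-dev.safetensors"
            "models\vae\ae.safetensors"
            "models\clip\clip_l.safetensors"
            "models\clip\t5xxl_fp16.safetensors") do (
    if exist "%ROOT%\%%~F" (echo FOUND   %%~F) else (echo MISSING %%~F)
)
pause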
How to create effective Captions and Tags for training and generation. (Guide)

Best models for SD 1.5, SDXL 1.0 Base, Pony:
1) Taggui https://github.com/jhc13/taggui with wd-vit-large-tagger-v3 https://huggingface.co/spaces/SmilingWolf/wd-tagger

Best models for FLUX dev, FLUX schnell:
1) Joytag Caption - Batch https://github.com/MNeMoNiCuZ/joy-caption-batch https://civitai.com/articles/6723/tutorial-tool-caption-files-for-flux-training-sfw-nsfw
2) Taggui https://github.com/jhc13/taggui
A good model for FLUX dev, FLUX schnell - Florence-2-base-PromptGen

Instructions for tags:
1) Taggui
1. Download Taggui
2. Unpack the archive and run taggui.exe
3. Select wd-vit-large-tagger-v3
4. File > Load Directory (Ctrl+L) - select the folder with images
5. Start Auto-Captioning
Settings and tips:
1. Maximum tags - 30 (default) is a good starting point. The more detailed the images, the more tags the model can generate. Too many tags can create artifacts during generation.
2. Show probabilities - adds weights to the tags, which can make them more precise for an image. To use these weights for training (Kohya_SS), enable Parameters > Advanced > Weighted captions.

Instructions for captions:
1) Joytag Caption
Prepare: install Python 3.10 and CUDA Toolkit 12.6 if you don't have them yet. (See "Installing running requirements".)
1. Download Joytag Caption
2. Unpack the archive and run venv_create.bat
2.1. Select a Python version by number: 1 (Python 3.10)
2.2. Enter the name for your virtual environment: Enter
2.3. Do you want to upgrade pip now?: Y
2.4. Do you want to install 'uv' package?: Y
2.5. Do you wish to run 'uv pip install -r requirements.txt'?: Y
3. Install:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
Usage and tips (a batch wrapper sketch appears at the end of this guide):
Prepare: put the images inside the input folder in joy-caption-batch-main.
1. Run Terminal inside the joy-caption-batch-main folder
2. Run:
python -m venv venv
.\venv\Scripts\activate
py batch.py
Note: the default caption length is 256 words, which is not recommended for character training.
Style training - up to 300 words.
Character training - up to 25 words.

2) Taggui
1. Download Taggui
2. Unpack the archive and run taggui.exe
3. Select Florence-2-base-PromptGen
4. File > Load Directory (Ctrl+L) - select the folder with images
5. Start Auto-Captioning
Settings and tips:
1. Maximum tokens - 50 (default) is a good starting point. The recommended maximum value is 300.
2. Number of beams - 10 (default) is a good standard. Increasing this number increases the precision of the predictions, but higher values require more VRAM.
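The Joytag Caption usage steps above can be wrapped in a small batch file kept inside joy-caption-batch-main, so a single double-click captions everything in the input folder; a minimal sketch, assuming the virtual environment created by venv_create.bat kept the default folder name venv:

@echo off
:: Run joy-caption-batch on everything in the input folder.
:: Assumes this file sits inside joy-caption-batch-main and the venv already exists.
cd /d "%~dp0"
call .\venv\Scripts\activate
python batch.py
pause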
How to Reduce VRAM Usage (Windows)

Windows:
1. Disable autorun programs (Autoruns)
2. Optimize visual effects (right-click This PC > Properties > Advanced system settings > Settings under Performance > Adjust for best performance)
3. Update graphics drivers to the latest available version
4. Lower the display resolution and refresh rate
5. Disable Windows transparency effects
6. Turn off graphics acceleration for Terminal

Browser:
1. Turn off graphics acceleration in the browser
2. Use chrome://gpuclean/ in your Chromium browser to clean VRAM
3. Turn on the Memory Saver function - Maximum
4. Close unnecessary tabs

Not recommended, but should work:
1. Use a debloated Windows version
2. Use NVCleanstall instead of GeForce Experience or the NVIDIA app
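To see what is actually occupying VRAM before and after applying these tweaks, the nvidia-smi tool that ships with the NVIDIA driver can be queried from a Command Prompt; a minimal sketch (on Windows, per-process figures may show N/A under the WDDM driver model):

:: Total and used VRAM for each GPU.
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
:: Processes currently holding VRAM.
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv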
How to install Kohya_SS + PyTorch 2.6.0 (Optional) (Windows)

1) Install Kohya_SS
Open Terminal:
git clone https://github.com/bmaltais/kohya_ss
cd kohya_ss
git checkout sd3-sd3.5-flux
setup.bat

Kohya_ss setup menu:
1. Install kohya_ss GUI
5. (Optional) Manually configure Accelerate:
> This machine
> No distributed training
Do you want to run your training on CPU only (even if a GPU / Apple Silicon / Ascend NPU device is available)? > No
Do you wish to optimize your script with torch dynamo? > No
Do you want to use DeepSpeed? > No
What GPU(s) (by id) should be used for training on this machine as a comma-separated list? > All
Would you like to enable numa efficiency? (Currently only supported on NVIDIA hardware.) > Yes
Do you wish to use FP16 or BF16 (mixed precision)? > bf16 if you have an RTX 30/40 series video card; otherwise choose fp16.

2) Install PyTorch 2.6.0 for Kohya_SS (Optional) (Experimental)
1. Edit the gui.bat file inside the kohya_ss folder to add --noverify, which skips the requirements/update check for kohya_gui.py:

@echo off
set PYTHON_VER=3.10.9

:: Deactivate the virtual environment
call .\venv\Scripts\deactivate.bat

:: Activate the virtual environment
call .\venv\Scripts\activate.bat
set PATH=%PATH%;%~dp0venv\Lib\site-packages\torch\lib

:: If the exit code is 0, run the kohya_gui.py script with the command-line arguments
if %errorlevel% equ 0 (
    REM Check if the batch was started via double-click
    IF /i "%comspec% /c %~0 " equ "%cmdcmdline:"=%" (
        REM echo This script was started by double clicking.
        cmd /k python.exe kohya_gui.py %* --noverify
    ) ELSE (
        REM echo This script was started from a command prompt.
        python.exe kohya_gui.py %* --noverify
    )
)

2. Run Terminal inside the kohya_ss folder:
python -m venv venv
.\venv\Scripts\activate
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124
pip install xformers==0.0.29.dev922
pip3 install torch==2.5.0 torchvision --index-url https://download.pytorch.org/whl/test/cu124

3. Install cuDNN 9.5.0 for Kohya_SS (Optional)
Run Terminal inside the kohya_ss folder:
python -m venv venv
.\venv\Scripts\activate
pip install nvidia-cudnn-cu11==9.5.0.50

4. Install TransformerEngine or MS-AMP for Kohya_SS (In development, Optional)
pip install transformer-engine
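After the optional PyTorch upgrade it is worth confirming which torch build the venv actually ended up with; a minimal check from a Terminal opened inside the kohya_ss folder, assuming the venv created by setup.bat:

:: Print the installed torch version, its CUDA build, and whether a GPU is visible.
call .\venv\Scripts\activate
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
pip show xformers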
How to clean RAM? (Free up RAM) (Windows) (Official Microsoft tool)

Note: Clearing the RAM does not close programs or applications. You can run it even with Stable Diffusion open. After running, it takes around 2-15 seconds to rebuild some RAM cache for open applications.

How to use?
1) Way 1:
To clean RAM, download the zip archive and run CleanRAM.bat from inside it. On first run, accept the License Agreement when the window appears.
2) Way 2:
On first run, accept the License Agreement when the window appears.
Open RAMMap > Empty:
- Empty Working Set
- Empty System Working Set
- Empty Modified Page List
- Empty Standby List
- Empty Priority 0 Standby List

Benefits of clearing RAM for Stable Diffusion:
1. Improved system performance
Clearing cache: RAMMap lets you clear the system's standby memory, which Windows has cached from previously opened files and programs. This frees up space for applications like Stable Diffusion, which need a large amount of available memory to operate efficiently.
Faster response: by clearing RAM, you can prevent slowdowns that happen when the system is forced to swap data to the hard drive due to low memory, so Stable Diffusion can generate outputs without that bottleneck.
2. Optimized memory usage
Avoid memory fragmentation: running heavy models and generating images in Stable Diffusion can lead to memory fragmentation. RAMMap can help reclaim fragmented memory, making a more contiguous memory block available for the model and improving stability and responsiveness.
Reduction in memory leaks: memory leaks from background applications or processes reduce the memory available for Stable Diffusion. Clearing RAM reduces the chance of these leaks impacting Stable Diffusion's performance.
3. More stable model generation
Reduces crashes: Stable Diffusion can crash when there is insufficient memory for large models or high-resolution image generation. Clearing RAM with RAMMap maximizes the amount of free memory, making crashes less likely.
Consistent output quality: memory availability can affect the consistency and quality of image outputs in Stable Diffusion. Keeping memory free helps maintain high-quality generations without issues.
4. Lower VRAM usage on the GPU (indirectly)
If you're running Stable Diffusion on a GPU with limited VRAM, freeing up system RAM can indirectly improve VRAM availability. When CPU RAM is less burdened, it can handle more of the data transfer and processing, letting the GPU focus on high-speed image generation.
5. Efficient use of virtual memory
For users without ample physical RAM, Stable Diffusion may rely heavily on virtual memory. Clearing physical RAM regularly lets the OS use virtual memory more effectively, which helps when running models on lower-spec systems.
6. Better multitasking capabilities
Stable Diffusion requires a lot of memory, making multitasking difficult. Clearing RAM minimizes memory-intensive background usage, so you can run Stable Diffusion and other necessary applications concurrently with fewer performance drops.

What is RAMMap?
RAMMap is an official Microsoft tool. It was developed by Mark Russinovich and Bryce Cogswell, well-known experts in Windows system administration and performance analysis. Sysinternals, the company behind RAMMap, was acquired by Microsoft in 2006.
https://learn.microsoft.com/en-us/sysinternals/downloads/rammap
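The CleanRAM.bat from Way 1 presumably just wraps RAMMap's command-line switches, so Way 2 can also be automated; a minimal sketch, assuming RAMMap64.exe sits next to the batch file, that -accepteula is supported as on other Sysinternals tools, and that the -Ew/-Es/-Em/-Et/-E0 switches cover the five Empty actions listed above (verify the exact switch-to-action mapping on the Sysinternals RAMMap page before relying on it):

@echo off
:: Sketch of a CleanRAM-style batch built on RAMMap's empty switches.
:: All switch names below are assumptions - confirm them against the RAMMap docs.
cd /d "%~dp0"
RAMMap64.exe -accepteula -Ew
RAMMap64.exe -Es
RAMMap64.exe -Em
RAMMap64.exe -Et
RAMMap64.exe -E0
echo RAM cache cleared.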
How to fully clean the NVIDIA Cache (Windows) + run .bat

Important note: Before cleaning the NVIDIA cache, close all programs and applications (Google Chrome, Stable Diffusion, etc.). After cleaning the NVIDIA cache, restart the PC.

1. Open the NVIDIA Control Panel > Manage 3D settings > Shader Cache Size - Disable.
2. Restart the PC.
3. Way 1: Delete all files inside these folders manually:
%LOCALAPPDATA%\NVIDIA\OptixCache
%APPDATA%\NVIDIA\ComputeCache
%USERPROFILE%\AppData\LocalLow\NVIDIA\PerDriverVersion\DXCache
%LOCALAPPDATA%\NVIDIA\GLCache
Way 2: Use the automatic .bat file. Right-click the .bat file > Run as administrator. (The .bat file is included in the Attachments as a zip archive; uploading a .bat directly does not work.)
.bat file commands:

@echo off
echo Deleting NVIDIA cache files...

rem Delete files in OptixCache
if exist "%LOCALAPPDATA%\NVIDIA\OptixCache" (
    del /q "%LOCALAPPDATA%\NVIDIA\OptixCache\*.*"
    echo Deleted files in OptixCache
) else (
    echo OptixCache folder not found
)

rem Delete files in ComputeCache
if exist "%APPDATA%\NVIDIA\ComputeCache" (
    del /q "%APPDATA%\NVIDIA\ComputeCache\*.*"
    echo Deleted files in ComputeCache
) else (
    echo ComputeCache folder not found
)

rem Delete files in DXCache
if exist "%USERPROFILE%\AppData\LocalLow\NVIDIA\PerDriverVersion\DXCache" (
    del /q "%USERPROFILE%\AppData\LocalLow\NVIDIA\PerDriverVersion\DXCache\*.*"
    echo Deleted files in DXCache
) else (
    echo DXCache folder not found
)

rem Delete files in GLCache
if exist "%LOCALAPPDATA%\NVIDIA\GLCache" (
    del /q "%LOCALAPPDATA%\NVIDIA\GLCache\*.*"
    echo Deleted files in GLCache
) else (
    echo GLCache folder not found
)

echo.
echo Cache deletion complete.
pause

4. Restart the PC.
5. Open the NVIDIA Control Panel > Manage 3D settings > Shader Cache Size - Driver Default.

About the types of NVIDIA cache:
OptixCache: used by NVIDIA's OptiX for ray tracing, storing compiled data for faster GPU rendering. (Autodesk Arnold Renderer, Blender Cycles)
ComputeCache: caches compiled CUDA code, optimizing performance in general-purpose GPU tasks. (TensorFlow and PyTorch)
DXCache: stores DirectX shader data, enhancing performance in DirectX applications. (Cyberpunk 2077, Overwatch 2)
GLCache: caches OpenGL shaders for faster OpenGL-based rendering. (Blender in OpenGL mode and Adobe Photoshop)
Installing running requirements for AI applications (Windows)

Python
All versions: https://www.python.org/downloads/windows/
Download each version from "Download Windows installer (64-bit)": 3.8.10, 3.9.13, 3.10.11, 3.11.9, 3.12.6, 3.13.0
Install with "Add Python to PATH".
To set the default Python version: in Environment Variables... > User variables for ... > Path > Edit..., move Python\Python310\Scripts\ and Python\Python310\ to the top of the list (3.10.11 in this example).

FFMPEG
Download https://www.gyan.dev/ffmpeg/builds/ - get the latest release build ffmpeg-release-full.7z and unpack it to C:\Program Files\ffmpeg\
In Environment Variables > System variables > Path > New, add the path C:\Program Files\ffmpeg\bin

cuDNN
Download https://developer.nvidia.com/cudnn-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local
Download cudnn_9.4.0_windows.exe and install it.
Install - select Custom (Advanced), uncheck cuDNN Samples, then install.

Visual C++ Redistributable Runtimes
Download - https://www.techpowerup.com/download/visual-c-redistributable-runtime-package-all-in-one/
Download the zip archive > extract > run install_all.bat

CUDA Toolkit
Download (12.6): https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_network
Download (11.8): https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_network
Install - select Custom (Advanced) and uncheck NVIDIA GeForce Experience comp..., Other components, and Driver components, then install. (If you already have the latest NVIDIA drivers installed, these components could otherwise replace them with an older version.)
To set the default CUDA Toolkit version: in Environment Variables... > System variables > Path, move C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp to the top of the list (CUDA Toolkit 11.8 in this example).

Microsoft Build Tools - Visual Studio
Download - https://visualstudio.microsoft.com/ (Visual Studio) and https://visualstudio.microsoft.com/visual-cpp-build-tools/ (Build Tools)
Install "Desktop development with C++" for Visual Studio.
Install "Desktop development with C++" for Build Tools.
(If needed, after installation select the latest version in Individual components to update it.)

Git
Download - https://git-scm.com/downloads/win
Install - Git-2.46.2-64-bit

Note: this article covers general requirements installation. Some tools may require different versions and other options. (A quick verification sketch follows below.)

Problem solver:
An AI tool can't find cl.exe (the Build Tools compiler). Solution: add C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.41.34120\bin\Hostx64\x64\ to the system environment Path, and run vcvars64.bat in C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build
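Once everything above is installed, a quick way to confirm each tool is reachable from PATH is to query the versions in a fresh Command Prompt; a minimal sketch (nvcc and cl only appear after the CUDA Toolkit and Build Tools steps, and cl is typically found only after running vcvars64.bat or from a Developer Command Prompt):

:: Check that the core requirements are installed and on PATH.
python --version
pip --version
git --version
ffmpeg -version
nvcc --version
where cl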
How to install Kohya_SS on Ubuntu WSL under Windows 11

1) Prepare
1. Check CPU virtualization in Windows: Task Manager > Performance > CPU > Virtualization: Enabled or Disabled.
If Disabled, access the UEFI (or BIOS). How to reach the UEFI (or BIOS) depends on your PC manufacturer. https://support.microsoft.com/en-us/windows/enable-virtualization-on-windows-c5578302-6e43-4b4b-a449-8ced115f58e1
2. Make sure you are using a recent version of Windows 10/11. If needed, update to the latest version. (No earlier than Windows 10, version 1903, build 18362.)

2) Install WSL and Ubuntu
1. Open Terminal and run:
wsl --install
2. Open the Microsoft Store > find Ubuntu. (The Ubuntu entry without a version in its name is the latest.)
3. Install Ubuntu
4. Open Ubuntu
5. Create a profile, for example:
Username - User
Password - User

3) Install Kohya_SS on WSL Ubuntu

:: Prepare
sudo apt update
sudo apt install software-properties-common -y
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-venv python3.10-dev -y
sudo apt update -y && sudo apt install -y python3-tk
sudo apt install python3.10-tk
sudo apt install git -y

:: NVIDIA CUDA Toolkit
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda
export PATH=/usr/local/cuda-12.6/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

:: Reboot
sudo reboot

:: Kohya_ss install
cd ~
git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
git checkout sd3-sd3.5-flux
./setup.sh

:: Configuration settings
source venv/bin/activate
accelerate config
> This machine
> No distributed training
> No
> No
> No
> All
> Yes
If you have an RTX 30/40 series video card choose > bf16; otherwise choose > fp16.

4) Run Kohya_SS on WSL Ubuntu
cd kohya_ss
./gui.sh

Notes:
To find the kohya_ss folder, use \\wsl.localhost\Ubuntu\home\user in Explorer. You can move the model to train and the dataset there.
Additional commands for Windows Terminal:
Shutdown - wsl --shutdown
Uninstall or reset Ubuntu - wsl --unregister Ubuntu
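The WSL lifecycle commands referenced in the notes are all run from the Windows side; a minimal sketch of the ones you will typically need in a Windows Terminal (the distro name Ubuntu matches the Store install above):

:: Install WSL with the default Ubuntu distro (first-time setup).
wsl --install
:: List installed distros with their WSL version and running state.
wsl --list --verbose
:: Shut down all running WSL instances (frees RAM held by the WSL VM).
wsl --shutdown
:: Uninstall or reset the Ubuntu distro (removes its files).
wsl --unregister Ubuntu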
