lingko

https://civitai.com/user/lingko/videos
https://civitai.com/user/lingko
https://lingko.my.canva.site
588 Followers · 77 Following · 410.8K Runs · 84 Downloads · 1.8K Likes
GGUF and Flux full fp16 Model: loading T5, CLIP

on Aug 13
Support All Flux Models for Ablative Experiments

Full fp16:
- Download the base model and VAE (raw float16) from Flux official here and here.
- Download clip-l and t5-xxl from here or our mirror.
- Put the base model in models\Stable-diffusion.
- Put the VAE in models\VAE.
- Put clip-l and t5 in models\text_encoder.

Possible options: you can load nearly arbitrary combinations, etc.

Fun fact: you can now even load clip-l for SD1.5 separately.

GGUF:
- Download the VAE (raw float16, 'ae.safetensors') from Flux official here or here.
- Download clip-l and t5-xxl from here or our mirror.
- Download GGUF models here or here.
- Put the base model in models\Stable-diffusion.
- Put the VAE in models\VAE.
- Put clip-l and t5 in models\text_encoder.

Below are some comments copied from elsewhere.

People should also notice that GGUF is a pure compression technique: files are smaller but also slower, because there are extra steps to decompress tensors and computation is still PyTorch. (Unless someone is crazy enough to port llama.cpp's compilers.) (UPDATE Aug 24: someone did it! Congratulations to leejet for porting it to stable-diffusion.cpp here. Now people should look at more possibilities for a cpp backend...)

BNB (NF4) is a computational acceleration library that speeds things up by replacing PyTorch ops with native low-bit CUDA kernels, so computation is faster.

NF4 and Q4_0 should be very similar; the difference is that Q4_0 has a smaller chunk size while NF4 has more Gaussian-distributed quants. I do not recommend trusting comparisons based on one or two images. I would also like a smaller chunk size in NF4, but bnb hard-codes some thread numbers and changing that is non-trivial.

However, Q4_1 and Q4_K are technically guaranteed to be more precise than NF4, but with even more computation overhead, and that overhead may cost more than simply moving higher-precision weights from CPU to GPU. If that happens, the quant loses its point.

Q8 is always more precise than fp8 (and a bit slower than fp8).

Precision: fp16 >> Q8 > Q4
Precision for Q8: Q8_K (not available) > Q8_1 (not available) > Q8_0 >> fp8
Precision for Q4: Q4K_S >> Q4_1 > Q4_0
Precision of NF4: between Q4_1 and Q4_0; it may be slightly better or worse, since they use different metric systems.
Speed (no offload, e.g. 80GB VRAM H100), fast to slow: fp16 ≈ NF4 > fp8 >> Q8 > Q4_0 >> Q4_1 > Q4K_S > others
Speed (with offload, e.g. 8GB VRAM), fast to slow: NF4 > Q4_0 > Q4_1 ≈ fp8 > Q4K_S > Q8_0 > Q8_1 > others ≈ fp16
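To give a feel for the size side of these trade-offs, here is a small sketch. The bits-per-weight figures are approximations of the block formats (scales included), and the 12B parameter count is an assumed model size for illustration; exact file sizes depend on the block layout and which tensors are quantized.

```python
# Approximate on-disk size for an assumed 12B-parameter model under the
# quant formats discussed above. Bits-per-weight values are approximate
# block-format figures, not exact file-size guarantees.
BITS_PER_WEIGHT = {
    "fp16": 16.0,
    "Q8_0": 8.5,   # 32-weight blocks + one fp16 scale per block
    "fp8": 8.0,
    "Q4_1": 5.0,   # 32-weight blocks + scale and min per block
    "Q4_0": 4.5,   # 32-weight blocks + one fp16 scale per block
    "NF4": 4.5,    # 64-weight blocks + fp32 absmax (without double quant)
}

def approx_size_gb(n_params: float, fmt: str) -> float:
    """Rough file size in GB for n_params weights in the given format."""
    return n_params * BITS_PER_WEIGHT[fmt] / 8 / 1e9

for fmt in BITS_PER_WEIGHT:
    print(f"{fmt:>5}: {approx_size_gb(12e9, fmt):5.1f} GB")
```

This makes the post's point concrete: Q8_0 is slightly larger than fp8 (it stores block scales on top of 8-bit quants) while being more precise, and Q4_0 and NF4 land in roughly the same size class.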
FLUX WebUI

FLUX WebUI: a very simple app for running FLUX locally, powered by Diffusers & Gradio. It comes with 2 models:
1. FLUX1 Schnell: fast image generation in 4 steps
2. FLUX1 Merged: FLUX1 dev quality in just 8 steps (via)
Runs locally (Mac, Linux, Windows).
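As a sketch of what such an app does under the hood with Diffusers: the schnell repo id below is the official one, but the merged checkpoint id is a placeholder, since the post does not name one. The heavy pipeline work is kept inside a function so nothing downloads at import time.

```python
SCHNELL_ID = "black-forest-labs/FLUX.1-schnell"
MERGED_ID = "your-namespace/FLUX.1-merged"  # placeholder: substitute a real merged checkpoint

def pick_model(fast: bool) -> tuple[str, int]:
    """Return (repo_id, num_inference_steps) for the two modes the app offers."""
    return (SCHNELL_ID, 4) if fast else (MERGED_ID, 8)

def generate(prompt: str, fast: bool = True):
    """Render one image. Requires `diffusers`, `torch`, and a capable GPU."""
    import torch
    from diffusers import FluxPipeline  # heavy import, deferred on purpose

    repo, steps = pick_model(fast)
    pipe = FluxPipeline.from_pretrained(repo, torch_dtype=torch.bfloat16)
    pipe.enable_model_cpu_offload()  # helps on low-VRAM machines
    # schnell is distilled and ignores classifier-free guidance
    return pipe(prompt, num_inference_steps=steps, guidance_scale=0.0).images[0]
```

Usage would be `generate("a cat in a spacesuit").save("out.png")`; wrapping `generate` in a Gradio interface is what turns this into the WebUI described above.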
X-Flux ControlNet V3 (Canny, Depth, Hed) Comfyui

Description

Download custom nodes: Lingko-x-Flux-Comfyui
https://github.com/lingkops4/Lingko-x-Flux-Comfyui.git

Download workflows: https://github.com/lingkops4/Lingko-x-Flux-Comfyui/tree/main/workflows
- canny_workflow.json
- depth_workflow.json
- flux-controlnet-canny-v3-workflow.json
- hed_workflow.json

Installation
Open CMD/Shell and do the following:
1. Go to ComfyUI/custom_nodes
2. Clone this repo; the path should be ComfyUI/custom_nodes/x-flux-comfyui/*, where * is all the files in this repo
3. Go to ComfyUI/custom_nodes/x-flux-comfyui/ and run python setup.py
4. Run ComfyUI after installing and enjoy!

In your /ComfyUI/custom_nodes/ folder, open CMD/Shell and run:
git clone https://github.com/lingkops4/Lingko-x-Flux-Comfyui.git

ControlNet is trained on 1024x1024 resolution and works for 1024x1024 resolution. The v3 version is a better, more realistic version that can be used directly in ComfyUI!

Downloads:
- X-Flux-Comfyui Node: https://github.com/XLabs-AI/x-flux-comfyui.git
- Canny: flux-canny-controlnet-v3.safetensors
  https://huggingface.co/XLabs-AI/flux-controlnet-canny-v3/blob/main/flux-canny-controlnet-v3.safetensors
- Depth: flux-depth-controlnet-v3.safetensors
  https://huggingface.co/XLabs-AI/flux-controlnet-depth-v3/blob/main/flux-depth-controlnet-v3.safetensors
- Hed: flux-hed-controlnet-v3.safetensors
  https://huggingface.co/XLabs-AI/flux-controlnet-hed-v3/blob/main/flux-hed-controlnet-v3.safetensors
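The install steps above can be sketched as a helper that emits the exact commands to run. The folder name `x-flux-comfyui` follows the steps as written; the directory git actually creates depends on which repo you clone, so adjust accordingly.

```python
from pathlib import PurePosixPath

def install_commands(comfy_root: str) -> list[str]:
    """Shell commands for installing the x-flux custom nodes,
    mirroring the numbered installation steps above."""
    nodes = PurePosixPath(comfy_root) / "custom_nodes"
    return [
        f"cd {nodes}",
        "git clone https://github.com/lingkops4/Lingko-x-Flux-Comfyui.git",
        f"cd {nodes / 'x-flux-comfyui'}",  # folder name per the steps; may differ after clone
        "python setup.py",
    ]

for cmd in install_commands("~/ComfyUI"):
    print(cmd)
```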
ReActor Node for ComfyUI (Face Swap)

ReActor Node for ComfyUI
Download: https://github.com/lingkops4/lingko-FaceReActor-Node
Workflow: https://github.com/lingkops4/lingko-FaceReActor-Node/blob/main/face_reactor_workflows.json

The fast and simple face-swap extension node for ComfyUI, based on the ReActor SD-WebUI Face Swap Extension. This node ships without an NSFW filter (uncensored; use it at your own responsibility).

| Installation | Usage | Troubleshooting | Updating | Disclaimer | Credits | Note! |

What's new in the latest update (0.5.1 ALPHA1):
- Support for the GPEN 1024/2048 restoration models (available in the HF dataset https://huggingface.co/datasets/Gourieff/ReActor/tree/main/models/facerestore_models)
- ReActorFaceBoost Node: an attempt to improve the quality of swapped faces. The idea is to restore and scale the swapped face (according to the face_size parameter of the restoration model) BEFORE pasting it into the target image (via inswapper algorithms); more information is here (PR#321)

Installation
- SD WebUI: AUTOMATIC1111 or SD.Next
- Standalone (Portable) ComfyUI for Windows

Usage
You can find the ReActor nodes inside the ReActor menu or by using search (just type "ReActor" in the search field).

List of nodes:

Main nodes:
- ReActorFaceSwap (Main Node)
- ReActorFaceSwapOpt (Main Node with the additional Options input)
- ReActorOptions (Options for ReActorFaceSwapOpt)
- ReActorFaceBoost (Face Booster Node)
- ReActorMaskHelper (Masking Helper)

Operations with face models:
- ReActorSaveFaceModel (Save Face Model)
- ReActorLoadFaceModel (Load Face Model)
- ReActorBuildFaceModel (Build Blended Face Model)
- ReActorMakeFaceModelBatch (Make Face Model Batch)

Additional nodes:
- ReActorRestoreFace (Face Restoration)
- ReActorImageDublicator (Duplicate one Image to an Images List)
- ImageRGBA2RGB (Convert RGBA to RGB)

Connect all required slots and run the query.

Main node inputs:
- input_image: the image to be processed (target image; analog of "target image" in the SD WebUI extension). Supported nodes: "Load Image", "Load Video", or any other node providing images as output.
- source_image: an image with a face (or faces) to swap into the input_image (source image; analog of "source image" in the SD WebUI extension). Supported nodes: "Load Image" or any other node providing images as output.
- face_model: the input for the "Load Face Model" node or another ReActor node providing a face model file (face embedding) you created earlier via the "Save Face Model" node. Supported nodes: "Load Face Model", "Build Blended Face Model".

Main node outputs:
- IMAGE: the resulting image. Supported nodes: any node that takes images as input.
- FACE_MODEL: the source face's model built during the swapping process. Supported nodes: "Save Face Model", "ReActor", "Make Face Model Batch".

Face restoration
Since version 0.3.0 the ReActor node has built-in face restoration. Just download the models you want (see the installation instructions) and select one of them to restore the resulting face(s) during the face swap. It will enhance face details and make your result more accurate.

Face indexes
By default ReActor detects faces in images from "large" to "small". You can change this by adding the ReActorFaceSwapOpt node with ReActorOptions. If you need to target specific faces, you can set indexes for the source and input images. The index of the first detected face is 0. You can set indexes in whatever order you need, e.g. 0,1,2 (for Source) and 1,0,2 (for Input). This means: the second input face (index = 1) will be swapped with the first source face (index = 0), and so on.

Genders
You can specify the gender to detect in images. ReActor will swap a face only if it meets the given condition.

Face models
Since version 0.4.0 you can save face models as "safetensors" files (stored in ComfyUI\models\reactor\faces) and load them into ReActor, enabling different scenarios while keeping super-lightweight face models of the faces you use. To make new models appear in the list of the "Load Face Model" node, just refresh the page of your ComfyUI web application. (I recommend using ComfyUI Manager; otherwise your workflow can be lost after you refresh the page if you didn't save it first.)

Troubleshooting

I. (For Windows users) If you still cannot build Insightface for some reason, or just don't want to install Visual Studio or the VS C++ Build Tools, do the following:
1. (ComfyUI Portable) From the root folder, check the version of Python: run CMD and type python_embeded\python.exe -V
2. Download the prebuilt Insightface package for Python 3.10, 3.11, or 3.12 (matching the version from the previous step) and put it into the stable-diffusion-webui (A1111 or SD.Next) root folder (where the "webui-user.bat" file is), or into the ComfyUI root folder if you use ComfyUI Portable.
3. From the root folder run:
   (SD WebUI) CMD and .\venv\Scripts\activate
   (ComfyUI Portable) run CMD
4. Update pip:
   (SD WebUI) python -m pip install -U pip
   (ComfyUI Portable) python_embeded\python.exe -m pip install -U pip
5. Install Insightface:
   (SD WebUI) pip install insightface-0.7.3-cp310-cp310-win_amd64.whl (for 3.10), pip install insightface-0.7.3-cp311-cp311-win_amd64.whl (for 3.11), or pip install insightface-0.7.3-cp312-cp312-win_amd64.whl (for 3.12)
   (ComfyUI Portable) python_embeded\python.exe -m pip install insightface-0.7.3-cp310-cp310-win_amd64.whl (for 3.10), python_embeded\python.exe -m pip install insightface-0.7.3-cp311-cp311-win_amd64.whl (for 3.11), or python_embeded\python.exe -m pip install insightface-0.7.3-cp312-cp312-win_amd64.whl (for 3.12)
6. Enjoy!

II. "AttributeError: 'NoneType' object has no attribute 'get'"
This error may occur if something is wrong with the model file inswapper_128.onnx. Try to download it manually from here and put it into ComfyUI\models\insightface, replacing the existing one.

III. "reactor.execute() got an unexpected keyword argument 'reference_image'"
This means the input points have changed with the latest update. Remove the current ReActor node from your workflow and add it again.

IV. ControlNet Aux Node IMPORT failed error when used with the ReActor node
1. Close ComfyUI if it is running.
2. Go to the ComfyUI root folder, open CMD there, and run:
   python_embeded\python.exe -m pip uninstall -y opencv-python opencv-contrib-python opencv-python-headless
   python_embeded\python.exe -m pip install opencv-python==4.7.0.72
That's it! (reactor+controlnet)

V. "ModuleNotFoundError: No module named 'basicsr'" or "subprocess-exited-with-error" during future-0.18.3 installation
1. Download https://github.com/Gourieff/Assets/raw/main/comfyui-reactor-node/future-0.18.3-py3-none-any.whl
2. Put it in the ComfyUI root and run:
   python_embeded\python.exe -m pip install future-0.18.3-py3-none-any.whl
3. Then:
   python_embeded\python.exe -m pip install basicsr

VI. "fatal: fetch-pack: invalid index-pack output" when you try to git clone the repository
Try to clone with --depth=1 (last commit only):
git clone --depth=1 https://github.com/Gourieff/comfyui-reactor-node
Then retrieve the rest (if you need it):
git fetch --unshallow
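The face-index semantics described in the ReActor usage notes (e.g. Source 0,1,2 against Input 1,0,2) can be sketched as a tiny helper. The function name and return shape are illustrative, not ReActor's internals.

```python
def swap_plan(source_idx: list[int], input_idx: list[int]) -> dict[int, int]:
    """Pair each input-face index with the source-face index that replaces
    it: position k in each list pairs the k-th entries together, matching
    the ordering semantics described for ReActor's index fields."""
    if len(source_idx) != len(input_idx):
        raise ValueError("index lists must have equal length")
    return {inp: src for src, inp in zip(source_idx, input_idx)}

# 0,1,2 (Source) vs 1,0,2 (Input): input face 1 gets source face 0, etc.
plan = swap_plan([0, 1, 2], [1, 0, 2])
print(plan)  # {1: 0, 0: 1, 2: 2}
```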
Textual Inversion Embeddings ComfyUI_Examples

ComfyUI_examples: Textual Inversion Embeddings

Here is an example of how to use Textual Inversion/Embeddings. To use an embedding, put the file in the models/embeddings folder, then use it in your prompt the way the SDA768.pt embedding was used in the previous picture. Note that you can omit the filename extension, so these two are equivalent:

embedding:SDA768.pt
embedding:SDA768

You can also set the strength of the embedding, just like regular words in the prompt:

(embedding:SDA768:1.2)

Embeddings are basically custom words, so where you put them in the text prompt matters. For example, if you had an embedding of a cat:

red embedding:cat

This would likely give you a red cat.
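The equivalence and strength syntax above can be sketched as a toy parser. This is an illustration of the rules as stated, not ComfyUI's actual prompt tokenizer.

```python
import re

def parse_embedding(token: str) -> tuple[str, float]:
    """Return (name, strength) for an embedding token: the file extension
    is optional, and (embedding:name:1.2) sets an explicit strength."""
    strength = 1.0
    m = re.fullmatch(r"\(embedding:(.+):([\d.]+)\)", token)
    if m:
        token, strength = f"embedding:{m.group(1)}", float(m.group(2))
    name = token.removeprefix("embedding:")
    name = re.sub(r"\.(pt|safetensors)$", "", name)  # extension may be omitted
    return name, strength

print(parse_embedding("embedding:SDA768.pt"))     # ('SDA768', 1.0)
print(parse_embedding("embedding:SDA768"))        # ('SDA768', 1.0)
print(parse_embedding("(embedding:SDA768:1.2)"))  # ('SDA768', 1.2)
```

The first two calls returning the same result is exactly the "these two are equivalent" point from the text.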
Art Mediums (127 Style)

Art Mediums
Various art mediums. Prompted with '{medium} art of a woman'.

Metalpoint, Miniature Painting, Mixed Media, Monotype Printing, Mosaic Tile Art, Mosaic, Neon, Oil Paint, Origami, Papermaking, Papier-mâché, Pastel, Pen And Ink, Performance Art, Photography, Photomontage, Plaster, Plastic Arts, Polymer Clay, Printmaking, Puppetry, Pyrography, Quilling, Quilt Art, Recycled Art, Relief Printing, Resin, Reverse Glass Painting, Sand, Scratchboard Art, Screen Printing, Scrimshaw, Sculpture Welding, Sequin Art, Silk Painting, Silverpoint, Sound Art, Spray Paint, Stained Glass, Stencil, Stone, Tapestry, Tattoo Art, Tempera, Terra-cotta, Textile Art, Video Art, Virtual Reality Art, Watercolor, Wax, Weaving, Wire Sculpture, Wood, Woodcut, Glass, Glitch Art, Gold Leaf, Gouache, Graffiti, Graphite Pencil, Ice, Ink Wash Painting, Installation Art, Intaglio Printing, Interactive Media, Kinetic Art, Knitting, Land Art, Leather, Lenticular Printing, Light Projection, Lithography, Macrame, Marble, Metal, Colored Pencil, Computer-generated Imagery (cgi), Conceptual Art, Copper Etching, Crochet, Decoupage, Digital Mosaic, Digital Painting, Digital Sculpture, Diorama, Embroidery, Enamel, Encaustic Painting, Environmental Art, Etching, Fabric, Felting, Fiber, Foam Carving, Found Objects, Fresco, Augmented Reality Art, Batik, Beadwork, Body Painting, Bookbinding, Bronze, Calligraphy, Cast Paper, Ceramics, Chalk, Charcoal, Clay, Collage, Collagraphy, 3d Printing, Acrylic Paint, Airbrush, Algorithmic Art, Animation, Art Glass, Assemblage
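Applying the '{medium} art of a woman' template over the list is a one-liner; here is a sketch with a trimmed sample of the mediums (the subject parameter is an illustrative generalization).

```python
# Trimmed sample of the 127 styles listed above.
MEDIUMS = ["Oil Paint", "Watercolor", "Charcoal", "Glitch Art"]

def style_prompts(subject: str, mediums: list[str] = MEDIUMS) -> list[str]:
    """Expand the '{medium} art of a {subject}' template over the mediums."""
    return [f"{m} art of a {subject}" for m in mediums]

for p in style_prompts("woman"):
    print(p)  # e.g. "Oil Paint art of a woman"
```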
ControlNet Dw_openpose ComfyUi

Installation

was-node-suite-comfyui:
1. Navigate to your /ComfyUI/custom_nodes/ folder.
2. Run (PowerShell): git clone https://github.com/WASasquatch/was-node-suite-comfyui/

ffmpeg path
If WAS Node Suite reports: `ffmpeg_bin_path` is set to: C:fmpeg.exe
Location: ComfyUI\custom_nodes\was-node-suite-comfyui.json
If your ffmpeg.exe is in C:\, open was-node-suite-comfyui.json and set "ffmpeg_bin_path": "C:\\ffmpeg.exe". (In JSON a single backslash before f is the \f form-feed escape, which is why the log shows the mangled path C:fmpeg.exe; double the backslash or use forward slashes.)

Download: https://ffmpeg.org/ffmpeg-release-full.7z

FFmpeg installation (Windows)
FFmpeg is free, open-source software that can record, convert, and stream media.
Steps:
1. Go to the official FFmpeg site.
2. Click Download.
3. Choose Windows.
4. Click the first link; on the new site, find the release builds and download ffmpeg-release-full.7z.
5. The download is an archive; create a new folder named FFMPEG in your C: drive's Program Files.
6. Extract the files into the FFMPEG folder you just created.
7. Open the bin folder.
8. Copy this folder's path.
9. Use the search tool at the bottom left to find "Edit the system environment variables".
10. Click "Environment Variables".
11. Under "System variables", find "Path" and click "Edit".
12. Click "New".
13. Paste the folder path you copied earlier.
14. Click OK on each dialog.
To confirm the installation succeeded:
1. Search for CMD with the bottom-left search tool.
2. Type ffmpeg -version and press ENTER. (Note the space between ffmpeg and -version.)
3. If the version output appears, the installation succeeded.
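Editing that config from Python sidesteps the backslash-escaping pitfall entirely, since the JSON writer escapes for you. A minimal sketch, assuming the config file path named above (adjust to your install):

```python
import json
from pathlib import Path

def set_ffmpeg_path(config_file: str, ffmpeg_exe: str) -> None:
    """Point the WAS Node Suite config at ffmpeg.exe. json.dump emits
    backslashes properly escaped, avoiding the hand-edited "C:\\f..."
    form-feed trap described above."""
    path = Path(config_file)
    cfg = json.loads(path.read_text(encoding="utf-8"))
    cfg["ffmpeg_bin_path"] = ffmpeg_exe
    path.write_text(json.dumps(cfg, indent=4), encoding="utf-8")

# Example (adjust the config location to your ComfyUI install):
# set_ffmpeg_path(r"ComfyUI\custom_nodes\was-node-suite-comfyui.json",
#                 r"C:\ffmpeg.exe")
```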
ComfyUI - FreeU: You NEED This! Upgrade any model

FreeU workflows: ComfyUI-FreeU (YouTube) explanation
ComfyUI-AnimateDiff - DW Pose - Face Swap - ReActor - Face Restore - Upscayl - Video Generation Workflows

Video Generation Workflows (30 nodes)
Download Workflows.json
My video gallery link

ffmpeg path
If WAS Node Suite reports: `ffmpeg_bin_path` is set to: C:fmpeg.exe
Location: ComfyUI\custom_nodes\was-node-suite-comfyui.json
If your ffmpeg.exe is in C:\, open was-node-suite-comfyui.json and set "ffmpeg_bin_path": "C:\\ffmpeg.exe" (double the backslash so JSON does not read \f as an escape).
Download: https://ffmpeg.org/ffmpeg-release-full.7z