Dice Dream Diffusion

Seasoned AI professional with over 10 years of experience in AI coding and 20+ years of professional coding experience.
239 Followers · 6 Following · 225.7K Runs · 490 Downloads · 2.6K Likes
ComfyUI Core Nodes Loaders #HALLOWEEN2024

1. Load CLIP Vision: Decodes an image into descriptions (prompts), which are then converted into conditioning inputs for the sampler, so new images similar to the source can be generated from the decoded descriptions. Multiple nodes can be used together. Suitable for transferring concepts and abstract things; used in combination with CLIP Vision Encode.
2. Load CLIP: The Load CLIP node can be used to load a specific CLIP model. CLIP models are used to encode text prompts that guide the diffusion process. *Conditional diffusion models are trained with a specific CLIP model; using a different model than the one it was trained with is unlikely to result in good images. The Load Checkpoint node automatically loads the correct CLIP model.
3. unCLIP Checkpoint Loader: The unCLIP Checkpoint Loader node can be used to load a diffusion model made specifically to work with unCLIP. unCLIP diffusion models denoise latents conditioned not only on the provided text prompt, but also on provided images. This node also provides the appropriate VAE, CLIP and CLIP vision models. *Even though this node can be used to load any diffusion model, not all diffusion models are compatible with unCLIP.
4. Load ControlNet Model: The Load ControlNet Model node can be used to load a ControlNet model; used in conjunction with Apply ControlNet.
5. Load LoRA
6. Load VAE
7. Load Upscale Model
8. Load Checkpoint
9. Load Style Model: The Load Style Model node can be used to load a Style model. Style models give the diffusion model a visual hint as to what kind of style the denoised latent should be in. *Only T2I-Adapter style models are currently supported.
10. Hypernetwork Loader: The Hypernetwork Loader node can be used to load a hypernetwork. Similar to LoRAs, hypernetworks modify the diffusion model, altering the way in which latents are denoised. Typical use cases include adding to the model the ability to generate in certain styles, or to better generate certain subjects or actions. Multiple hypernetworks can even be chained together to further modify the model.
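
If you prefer scripting outside the ComfyUI graph, several of these loader nodes have rough equivalents in the Hugging Face diffusers library. The sketch below is an analogy only, not the ComfyUI API; the repo IDs are illustrative public models, not files from this page.

```python
# Rough diffusers analogues of the loader nodes above (an analogy, not the
# ComfyUI API; the repo IDs below are illustrative public models).
import torch
from diffusers import AutoencoderKL, ControlNetModel, StableDiffusionControlNetPipeline

# 4. Load ControlNet Model -> ControlNetModel.from_pretrained
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)

# 8. Load Checkpoint -> from_pretrained pulls the matching UNet, CLIP text
# encoder and VAE together, mirroring how Load Checkpoint auto-loads the
# correct CLIP model for the checkpoint.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)

# 6. Load VAE -> swap in a standalone VAE
pipe.vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

# 5. Load LoRA -> attach LoRA weights that modify how latents are denoised
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
```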

Flux Ultimate Custom Txt 2 Vid Tensor Workflow

Welcome to Dream Diffusion FLUX ULTIMATE, TXT 2 VID, with its own custom workflow made for Tensor Art's Comfy Workspace. The workflow can be downloaded on this page. ENJOY!

This is a second-stage trained checkpoint, the successor to FLUX HYPER. When you think you had it nailed in the last version and then notice a 10% margin that could still be trained... well, that's what happened. So this version has even more font styles, better prompt adherence, sharper image clarity, and a better grasp of anime, water painting and so on. This model uses the same setting parameters as Flux Hyper.

Prompt example: Logo in neon lights, 3D, colorful, modern, glossy, neon background, with a huge explosion of fire with epic effects, the text reads "FLUX ULTIMATE, GAME CHANGER"

Steps: 20
Sampler: DPM++ 2M or Euler gives best results
Scheduler: Simple
Denoise: 1.00
Image size: 576 x 1024 or 1024 x 576. You can choose any size, but this model is optimized for faster rendering at those sizes.

Download the links below and save them to your Comfy folders:
Comfy workflow: https://openart.ai/workflows/maitruclam/comfyui-workflow-for-flux-simple/iuRdGnfzmTbOOzONIiVV
VAE: download this to your vae folder inside of your models folder, from https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae
CLIP: download clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors and save them both to your clip folder inside of your models folder, from https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

If you have any questions or issues, feel free to drop a comment below and I will get back to you as soon as I can. Enjoy, DICE
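
If you want to reproduce these settings from a script instead of the Comfy graph, here is a minimal sketch using diffusers' FluxPipeline with the official FLUX.1-schnell weights (the repo the VAE link above points to). This is an assumption for illustration: the checkpoint on this page is a ComfyUI file, and stock schnell is distilled for ~4 steps, so the step count below reflects this post's recommendation rather than a schnell default.

```python
# Minimal sketch of the recommended settings, assuming the official
# FLUX.1-schnell weights via diffusers (not the Flux Ultimate checkpoint,
# which is loaded through ComfyUI).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # eases VRAM pressure on smaller GPUs

image = pipe(
    prompt='Logo in neon lights, 3D, colorful, modern, glossy, neon background, '
           'with a huge explosion of fire with epic effects, the text reads '
           '"FLUX ULTIMATE, GAME CHANGER"',
    num_inference_steps=20,   # "Steps: 20" from the post
    guidance_scale=0.0,       # schnell is guidance-distilled
    width=576, height=1024,   # one of the two recommended sizes
    max_sequence_length=256,  # schnell caps the T5 sequence length
).images[0]
image.save("flux_ultimate_test.png")
```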

EVERYTHING AI FREE DOWNLOADS

EVERYTHING AI FREE DOWNLOADS - By DICE

- Face Fusion 2.6.0 (script v1.5): Next generation face swapper and enhancer. https://github.com/facefusion/facefusion-pinokio
- Hallo (script v1.5) [NVIDIA only]: Hierarchical audio-driven visual synthesis for portrait image animation. https://github.com/fudan-generative-vision/hallo
- Flash Diffusion (script v1.5): Accelerating any conditional diffusion model for few-step image generation. https://gojasper.github.io/flash-diffusion-project/
- Chat-With-MLX (script v1.5) [Mac only]: An all-in-one LLM chat UI for Apple Silicon Macs using the MLX framework. https://github.com/qnguyen3/chat-with-mlx
- PCM: Phased Consistency Model; generate high-quality images in 2 steps. https://huggingface.co/spaces/radames/Phased-Consistency-Model-PCM
- Stable Audio (script v1.5): An open-source model for audio samples and sound design. https://github.com/Stability-AI/stable-audio-tools
- SillyTavern (script v1.5): A local-install interface that allows you to interact with text generation AIs (LLMs) to chat and roleplay with custom characters. https://docs.sillytavern.app/
- AI Town (script v1.5): Build and customize your own version of AI Town, a virtual town where AI characters live, chat and socialize. https://github.com/a16z-infra/ai-town
- Augmentoolkit: Turn any raw text into a high-quality dataset for AI finetuning. https://github.com/e-p-armstrong/augmentoolkit
- LoRA the Explorer: Stable Diffusion LoRA playground. https://huggingface.co/spaces/multimodalart/LoraTheExplorer
- LaVie: Text-to-Video (T2V) generation framework from Vchitect. https://github.com/Vchitect/LaVie
- Dust3r (script v1.3): Geometric 3D vision made easy. https://dust3r.europe.naverlabs.com/
- LlamaFactory (script v1.5): Unified efficient fine-tuning of 100+ LLMs. https://github.com/hiyouga/LLaMA-Factory
- Invoke (script v1.5): The gen AI platform for pro studios. https://github.com/invoke-ai/InvokeAI
- OpenUI (script v1.5): Describe UI and see it rendered live. Ask for changes and convert HTML to React, Svelte, Web Components, etc. Like Vercel v0, but open source. https://github.com/wandb/openui
- XTTS: Clone voices into different languages using just a quick 3-second audio clip. A local version of https://huggingface.co/spaces/coqui/xtts
- RVC: 1-click installer for Retrieval-based-Voice-Conversion-WebUI. https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI
- LCM: Fast image generator using latent consistency models. https://replicate.com/blog/run-latent-consistency-model-on-mac
- Whisper-WebUI (script v1.3): A web UI for easy subtitling using the Whisper model. https://github.com/jhj0517/Whisper-WebUI
- Realtime BakLLaVA: llama.cpp with the BakLLaVA model describes what it sees. https://github.com/Fuzzy-Search/realtime-bakllava
- Realtime StableDiffusion: Demo showcasing a ~real-time Latent Consistency Model pipeline with Diffusers and an MJPEG stream server. https://github.com/radames/Real-Time-Latent-Consistency-Model
- StreamDiffusion (script v1) [NVIDIA only]: A pipeline-level solution for real-time interactive generation. https://github.com/cumulo-autumn/StreamDiffusion
- Moore-AnimateAnyone (script v1) [NVIDIA only]: Unofficial implementation of Animate Anyone. https://github.com/MooreThreads/Moore-AnimateAnyone
- Moore-AnimateAnyone-Mini (script v1) [NVIDIA only]: Efficient implementation of Animate Anyone (13 GB VRAM + 2 GB model size). https://github.com/sdbds/Moore-AnimateAnyone-for-windows
- PhotoMaker (script v1): Customizing realistic human photos via stacked ID embedding. https://github.com/TencentARC/PhotoMaker
- BRIA RMBG (script v1.1): Background removal model developed by BRIA.AI, trained on a carefully selected dataset and available as an open-source model for non-commercial use. https://huggingface.co/spaces/briaai/BRIA-RMBG-1.4
- Gligen (script v1.2): An intuitive GUI for GLIGEN that uses ComfyUI in the backend. https://github.com/mut-ex/gligen-gui
- MeloTTS (script v1.2): High-quality multilingual text-to-speech library by MyShell.ai. Supports English, Spanish, French, Chinese, Japanese and Korean. https://github.com/myshell-ai/MeloTTS
- Chatbot-Ollama: Open-source chat UI for Ollama. https://github.com/ivanfioravanti/chatbot-ollama
- Differential-diffusion-ui (script v1.2): Differential Diffusion modifies an image according to a text prompt and a map that specifies the amount of change in each region. https://differential-diffusion.github.io/
- Supir (script v1.2) [NVIDIA only]: Text-driven, intelligent restoration, blending AI technology with creativity to give every image a brand new life. https://supir.xpixel.group
- ZeST (script v1.5): Zero-shot material transfer from a single image. Local port of https://huggingface.co/spaces/fffiloni/ZeST (project: https://ttchengab.github.io/zest/)
- StoryDiffusion Comics (script v1.5): Create a story by generating consistent images. https://github.com/HVision-NKU/StoryDiffusion
- Lobe Chat (script v1.2): An open-source, modern-design ChatGPT/LLM UI and framework. Supports speech synthesis, multi-modal input, and an extensible (function call) plugin system. https://github.com/lobehub/lobe-chat
- Parler-TTS (script v1.5): A lightweight text-to-speech (TTS) model that can generate high-quality speech with features controlled by a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation). https://huggingface.co/spaces/parler-tts/parler_tts_mini
- InstantStyle (script v1.5): Upload an image and generate images in that image's style. Instant generation with no LoRA required. https://huggingface.co/spaces/InstantX/InstantStyle
- OpenVoice2 (script v1.5): OpenVoice 2 web UI; a local web UI for OpenVoice 2, a multilingual voice-cloning TTS. https://x.com/myshell_ai/status/1783161876052066793
- IDM-VTON (script v1.5): Improving diffusion models for authentic virtual try-on in the wild. https://huggingface.co/spaces/yisol/IDM-VTON
- Devika (script v1.5): Agentic AI software engineer. https://github.com/stitionai/devika
- Open WebUI (script v1.2): User-friendly web UI for LLMs; supported LLM runners include Ollama and OpenAI-compatible APIs. https://github.com/open-webui/open-webui
- CosXL (script v1.5): Edit images with just a prompt; an unofficial demo for CosXL and CosXL Edit from Stability AI. https://huggingface.co/spaces/multimodalart/cosxl
- Face-to-all (script v1.5): Diffusers InstantID + ControlNet inspired by face-to-many from fofr (https://x.com/fofrAI); a localized version of https://huggingface.co/spaces/multimodalart/face-to-all
- CustomNet (script v1.5): A unified encoder-based framework for object customization in text-to-image diffusion models. https://huggingface.co/spaces/TencentARC/CustomNet
- BrushNet (script v1.5): A plug-and-play image inpainting model with decomposed dual-branch diffusion. https://huggingface.co/spaces/TencentARC/BrushNet
- Arc2Face (script v1.5): A foundation model of human faces. https://huggingface.co/spaces/FoivosPar/Arc2Face
- TripoSR (script v1.2): A state-of-the-art open-source model for fast feedforward 3D reconstruction from a single image, developed in collaboration between Tripo AI and Stability AI. https://huggingface.co/spaces/stabilityai/TripoSR
- ZETA (script v1.2): Zero-shot text-based audio editing using DDPM inversion. https://huggingface.co/spaces/hilamanor/audioEditing
- Remove-video-bg (script v1.2): Video background removal tool. https://huggingface.co/spaces/amirgame197/Remove-Video-Background
- LGM (script v1.1) [NVIDIA only]: Large Multi-View Gaussian Model for high-resolution 3D content creation. https://huggingface.co/spaces/ashawkey/LGM
- vid2pose (script v1): Video to OpenPose & DWPose (all OSes supported). https://github.com/sdbds/vid2pose
- IP-Adapter-FaceID (script v1): Enter a face image and transform it into any other image. Demo for the h94/IP-Adapter-FaceID model. https://huggingface.co/spaces/multimodalart/Ip-Adapter-FaceID
- Dreamtalk (script v1): When expressive talking head generation meets diffusion probabilistic models. https://github.com/ali-vilab/dreamtalk
- Video2Openpose (script v1): Turn any video into an OpenPose video. https://huggingface.co/spaces/fffiloni/video2openpose2
- MagicAnimate Mini [NVIDIA only]: An optimized version of MagicAnimate. https://github.com/sdbds/magic-animate-for-windows
- MagicAnimate [NVIDIA only]: Temporally consistent human image animation using a diffusion model. https://showlab.github.io/magicanimate/
- AudioSep: Separate anything you describe. https://huggingface.co/spaces/Audio-AGI/AudioSep
- Tokenflow: Temporally consistent video editing. A local version of https://huggingface.co/spaces/weizmannscience/tokenflow
- ModelScope Image2Video [NVIDIA only]: Turn any image into a video! (Web UI created by fffiloni: https://huggingface.co/spaces/fffiloni/MS-Image2Video)
- Text Generation WebUI: A Gradio web UI for large language models. https://github.com/oobabooga/text-generation-webui
- MAGNeT (script v1): A text-to-music and text-to-sound model capable of generating high-quality audio samples conditioned on text descriptions. https://github.com/facebookresearch/audiocraft/blob/main/docs/MAGNET.md
- VideoCrafter 2 (script v1) [runs fast on NVIDIA GPUs; works on M1/M2/M3 Macs but slowly]: An open-source video generation and editing toolbox for crafting video content. It currently includes the Text2Video and Image2Video models. https://github.com/AILab-CVC/VideoCrafter
- Bark Voice Cloning (script v1.1): Upload a clean 20-second WAV file of the vocal persona you want to mimic, type your text-to-speech prompt and hit submit! A local version of https://huggingface.co/spaces/fffiloni/instant-TTS-Bark-cloning

FLUX HYPER - DREAM DIFFUSION - By DICE

FLUX HYPER DREAM DIFFUSION BY DICE

The model can be found on Tensor Art: https://tensor.art/models/759856135286068673/FLUX-DREAM-DIFFUSION-BY-DICE-V-1
All my models are also over on Shakker.ai: https://www.shakker.ai/userpage/8b0d2aadaa2a4f2592cbb367c329ea51/publish

I have also made two Flux LoRAs that run perfectly with Dream Diffusion Flux Hyper:
FLUX FANTASY: https://tensor.art/models/771926956187929704/FLUX-HYPER-FANTASY-STYLE-By-DICE-V!
NEON FLUX: https://tensor.art/models/772415203775237827/NEON-FLUX-DREAM-DIFFUSION-V1

Start off with these settings in Comfy to get a feel for how it runs.
Simple prompt: a jet plane display team write text in the sky with colourful smoke, the text in the smoke says 'Dream Diffusion Flux'.
Steps: 20
Sampler: Euler or DPM++ 2M
Scheduler: Simple
Denoise: 1.00

Then move to a more complex prompt like: four bottles lined up on a table. from left to right, they are numbered "4" then "3" then "1" then "2". from left to right, they are red, blue, green, and orange. the background is a nightclub
Or: four bottles lined up on a table. each bottle has text on them. bottles says 'fire'. the background is hyper realistic flames, UHD

Once you have downloaded my Flux Hyper checkpoint, save it to the unet folder inside of Comfy; if you don't see a unet folder, create a new folder named unet inside the models folder. If you use Forge WebUI, save it to the models/Stable-diffusion folder.

Download the links below and save them to your Comfy or Forge folders:
Comfy workflow: https://openart.ai/workflows/maitruclam/comfyui-workflow-for-flux-simple/iuRdGnfzmTbOOzONIiVV
VAE: download this to your vae folder inside of your models folder: https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae
CLIP: download clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors and save them both to your clip folder inside of your models folder: https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

If you have any questions or issues, feel free to drop a comment below and I will get back to you as soon as I can. I have also created a lot of Comfy workflows for Flux, so if you want them just comment below.
Enjoy, DICE
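
To make the folder setup above concrete, here is a small sketch that creates the expected ComfyUI model directories; COMFY_ROOT is an assumption, so point it at your own install.

```python
# Sketch: create the ComfyUI model folders described above.
# Assumption: COMFY_ROOT points at your ComfyUI install directory.
from pathlib import Path

COMFY_ROOT = Path("ComfyUI")
for sub in ("unet", "vae", "clip"):
    (COMFY_ROOT / "models" / sub).mkdir(parents=True, exist_ok=True)

# Where each download goes:
#   models/unet/ <- the Flux Hyper checkpoint
#   models/vae/  <- the VAE from the FLUX.1-schnell repo
#   models/clip/ <- clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors
```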

SD3 PLATINUM - 10GB Checkpoint - By DICE

SD3 PLATINUM - By DICE

The big 10GB SD3 checkpoint, hyper trained. This is a true weapon of a checkpoint, but it needs a large amount of GPU VRAM; for it to run smoothly you will need a powerful rig, plus ComfyUI or SwarmUI.

It's a one-click install and will run on the Tensor Art generator: https://tensor.art/u/729141385340557527

If you have any issues or questions, please drop a comment below. I am still pushing this checkpoint, so I am learning its limits myself.

Watch the YouTube video below to see a simple workflow of this model running: https://youtu.be/QgfiNTi33go?feature=shared

I have more testing to do, so feel free to drop your findings below.
Enjoy, Dice
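
As a rough guide to taming the VRAM requirement, this sketch uses diffusers with the official SD3 medium weights (an assumption for illustration; the Platinum checkpoint itself is distributed for Comfy/Swarm) and enables CPU offload so generation still runs on mid-range GPUs.

```python
# VRAM-saving sketch, assuming the official SD3 medium weights via diffusers
# (for illustration only; the Platinum checkpoint is run through Comfy/Swarm).
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # keeps only the active submodule on the GPU

image = pipe(
    "a pair of glass dice on a velvet table, studio lighting",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_platinum_style_test.png")
```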

SD3 GOLD - By DICE - Made for Automatic 1111 standard, Forge, Fooocus and Comfy

DREAM DIFFUSION - SD3 GOLD - By DICE

IT'S HERE: HOW SD3 SHOULD HAVE BEEN

I've made this model so it will run in Automatic 1111 and Forge as well as Comfy. It includes the T5XXL text encoder from the 10GB version while having the lowest resource requirements.
100% Realistic
100% Anime
100% Trained
This checkpoint will render any style you throw at it. You don't need LoRAs unless you are styling a custom character.
https://www.shakker.ai/userpage/8b0d2aadaa2a4f2592cbb367c329ea51/publish

Or watch it being used in Forge below: https://youtu.be/FrVITtD0q_Y?feature=shared

I've attached a few images below so you can see the prompts and the weights I used with my SD3 Gold checkpoint. I look forward to seeing some of your creations, and I hope some of my images below inspire you to create some of your own epic renders.

Any questions or issues, please post in the comments and I will reply to you within 6 hours.
Enjoy... DICE
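
The T5XXL encoder is the heavy part of an SD3 checkpoint, so here is a minimal sketch of the trade-off using diffusers with the official SD3 medium weights (an assumption for illustration; the Gold checkpoint itself ships for A1111/Forge/Comfy): dropping T5XXL cuts memory substantially at some cost to prompt adherence.

```python
# Sketch of the T5XXL trade-off, assuming the official SD3 medium weights via
# diffusers (illustration only; not the Gold checkpoint itself).
import torch
from diffusers import StableDiffusion3Pipeline

# Full pipeline: two CLIP text encoders plus T5XXL, best prompt adherence.
pipe_full = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)

# Low-memory variant: drop T5XXL entirely; SD3 falls back to its two CLIP
# text encoders, trading some adherence for a much smaller footprint.
pipe_lite = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16,
)
```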