




ComfyUI workflow examples from Reddit

What follows is a digest of comments from the unofficial ComfyUI subreddit and /r/StableDiffusion about example workflows: where to find them, how to load them, and tips from people sharing their own.

On fixing soft results after upscaling: a second pass at around 0.2 denoise fixes the blur and soft details. You can pass the latent straight to the next sampler without decoding and re-encoding, which is much faster, but that causes problems at anything much below 1.0 denoise, so I need to run KSampler again after upscaling. In one tiled approach, after each step the first latent is downscaled and composited into the second, which is downscaled and composited into the third, and so on.

New tutorial: how to rent 1-8x 4090 GPUs and install ComfyUI (plus Manager, custom nodes, models, and so on). Some people just post a lot of very similar workflows to show off a picture, which makes it annoying when you want to find genuinely new ways of doing things in ComfyUI. (And a moderator aside: please ask before assuming SAI has told us not to help people who may be using leaked models; that is the opposite of true.)

If this is not what you see, click Load Default on the right panel to return to the default text-to-image workflow. Grab the ComfyUI workflow JSON here; here is an example of three characters, each with its own pose, outfit, features, and expression.

There's no reason to use Comfy if you're not willing to learn it. Here's a list of example workflows in the official ComfyUI repo, and try clicking each of the model names in the ControlNet stacker node. Here is one I've been working on that uses ControlNet, combining depth, blurred HED, and noise as a second pass; it has been producing some pretty nice variations of the originally generated images. If for some reason you want to run something shorter than 16 frames, all you need is this part of the workflow.

Also shared: a node group that lets the user blend multiple image sources and add custom effects to images from a central control panel. To give a random (but realistic) example of why giant monolithic graphs hurt: the moment you want ControlNet in 2 of your 10 workflows, or you need to fix the 4 workflows that use the Efficiency Nodes because v2.0, released yesterday, removed the on-board switch for including or excluding the XY Plot input, you end up manually copying generation settings around. You would feel less need to build one massive super-workflow if you treated your existing workflows as a series of smaller tools.

Animate your still images with this AutoCinemagraph ComfyUI workflow. For compositing, I had very good results generating multiple AI layers and doing the rest as standard VFX in Resolve. This is an example of an image I generated with the advanced workflow; I had to place it in a zip because Reddit strips the metadata from .png uploads. You can also just load an image on the left side of the ControlNet section and use it that way.

For your all-in-one workflow, use the Generate tab. Just wanted to share that I have updated the comfy_api_simplified package: it can now be used to send images, run workflows, and receive images from a running ComfyUI server. For other types of detailer node, just search for "Detailer". The AP Workflow wouldn't exist without the incredible work done by all the node authors out there.
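The comment above mentions driving a running ComfyUI server from Python (the comfy_api_simplified package). Without relying on that package, a minimal sketch of the same idea against ComfyUI's built-in HTTP endpoint might look like the following; the server address and the workflow_api.json filename are assumptions, and the workflow must have been exported with "Save (API Format)".

```python
import json
import uuid
import urllib.request

SERVER = "http://127.0.0.1:8188"  # assumed default ComfyUI address and port

# Load a node graph saved with "Save (API Format)" (filename is just an example).
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

payload = {
    "prompt": workflow,              # the node graph in API format
    "client_id": str(uuid.uuid4()),  # lets you match websocket progress messages later
}

req = urllib.request.Request(
    SERVER + "/prompt",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print("queued prompt:", result["prompt_id"])
```

Queueing is asynchronous: the call returns a prompt id immediately, and you poll or listen on the websocket (see the next section) to know when the images are ready.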
SDXL default ComfyUI workflow. After learning Auto1111 for a week, I'm switching to Comfy due to the rudimentary nature of its extensions for everything and persistent memory issues with my 6 GB GTX 1660. With the "ComfyUI Manager" extension you can install most missing nodes almost automatically. If you're completely new to LoRA training, you're probably looking for a guide to understand what each option does. Load the .json workflow file from the C:\Downloads\ComfyUI\workflows folder. To create this workflow I wrote a Python script to wire up all the nodes. Then there's a full render of the image with a prompt that describes the whole thing. It was one of the earliest to add support for Turbo, for example.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. All the images in this repo contain metadata, which means they can be loaded into ComfyUI; go to the GitHub repos for the example workflows. EDIT: for example, this workflow shows the use of the other prompt windows. This should update and may ask you to click restart. ComfyUI Workflow | OpenArt. I'm not sure what's wrong here, because I don't use the portable version of ComfyUI. A few examples of my ComfyUI workflows are included. Video snapshots can create the impression of watching an animation when presented as an animated GIF or another video format.

So instead of having a single workflow with a spaghetti of 30 nodes, it could be a workflow with 3 sub-workflows of 10 nodes each, for example. This is an interesting implementation of that idea, with a lot of potential. Therefore, we created a simple website that allows anyone to upload a workflow and share it. I would love to see some tutorials on how people are finding great workflows for ComfyUI, like the ones submitted for the workflow contest; I'll study a workflow, extract the interesting bits, and trash it afterwards.

My primary goal was to fully utilise the 2-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space. I'm trying to do the same as hires fix, with a model and weight below 0.5. I really like the flexibility of ComfyUI, but one minor problem is that I have no way to toggle or temporarily disable parts of the workflow; I wish there was some #hashtag system or something. We love ComfyUI for its ease of sharing workflows, but we dislike how long it takes to try them out: a few months ago I suggested the possibility of a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple, customizable front-ends for end users. Also, to take a legible screenshot of large workflows, you have to zoom the browser out to about 50% and then zoom back in. Two workflows are included; the first is very similar to the old workflow and is just called "simple". Installing ComfyUI. Is there a workflow with all features and options combined that I can simply load and use?

I have a client who has asked me to produce a ComfyUI workflow as the backend for a front-end mobile app (which someone else is developing in React); he wants a basic faceswap. I've done this before by starting from the websocket example inside the ComfyUI repo: you first create the workflow you like, then use Python to queue it and pull the data or display the images. A trimmed-down sketch of that pattern follows.
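The "websocket example inside ComfyUI" mentioned above refers to the script examples shipped with the ComfyUI repository. A condensed sketch of that pattern, assuming the default 127.0.0.1:8188 address and the third-party websocket-client package (this is not the commenter's actual backend):

```python
import json
import uuid
import urllib.parse
import urllib.request
import websocket  # pip install websocket-client

SERVER = "127.0.0.1:8188"          # assumed default address
client_id = str(uuid.uuid4())

def queue_prompt(workflow: dict) -> str:
    data = json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")
    req = urllib.request.Request(f"http://{SERVER}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())["prompt_id"]

def wait_for(prompt_id: str) -> None:
    ws = websocket.WebSocket()
    ws.connect(f"ws://{SERVER}/ws?clientId={client_id}")
    while True:
        msg = ws.recv()
        if isinstance(msg, str):                      # progress messages are JSON text
            data = json.loads(msg)
            if (data["type"] == "executing"
                    and data["data"]["node"] is None
                    and data["data"]["prompt_id"] == prompt_id):
                break                                 # execution finished
    ws.close()

def fetch_images(prompt_id: str) -> list[bytes]:
    history = json.loads(urllib.request.urlopen(f"http://{SERVER}/history/{prompt_id}").read())
    images = []
    for node_output in history[prompt_id]["outputs"].values():
        for img in node_output.get("images", []):
            query = urllib.parse.urlencode(img)       # filename, subfolder, type
            images.append(urllib.request.urlopen(f"http://{SERVER}/view?{query}").read())
    return images

# Example usage (workflow_api.json is an assumed filename):
with open("workflow_api.json", "r", encoding="utf-8") as f:
    pid = queue_prompt(json.load(f))
wait_for(pid)
print(len(fetch_images(pid)), "image(s) received")
```

The same pattern works behind a Telegram bot or a mobile-app backend: the server does the heavy lifting, and the script only queues graphs and collects the resulting files.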
"Clown visits Reddit comments, baffles prompt seeker and generates mild controversy": curly red hair, mardi gras colors, this is a clown who lives inside of Reddit comments, and that's that. I recently discovered the Gligen nodes in ComfyUI and thought I would share some of the images I made using them (more in the Civitai post link). Its modular nature lets you mix and match components. Upload any workflow to make it instantly runnable by anyone, locally or online. This repo contains examples of what is achievable with ComfyUI. Here's an example of me using AnyNode in an image-to-image workflow. I can load workflows from the example images through localhost:8188; that seems to work fine.

I'm not going to spend two and a half grand on high-end computer equipment and then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s; that's a bit presumptuous considering you don't know my requirements. Also shared: a promptless outpaint/inpaint canvas based on ComfyUI workflows. I tried with masking nodes, but the results weren't what I was expecting; for example, the original masked image of the product was still being processed, along with the text.

I now have two "medium-complete" workflows that are the base for almost all my generations. The workflow in the example is passed to the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead. Upcoming tutorial: SDXL LoRA plus using SD 1.5. It seems wasteful, as in the official ComfyUI SVD example, to keep generating text-to-image-to-video in one go. Thanks for sharing; that being said, I wish there was better sorting for the workflows on comfyworkflows.com. A recommendation: ddim_uniform has an issue where the time schedule doesn't start at 999.

You feed it an image and it runs through OpenPose, Canny, LineArt, or whatever else you decide to include. But try both at once and they lose a bit of quality. Does anyone have a tutorial for doing regional sampling plus regional IP-Adapter in the same ComfyUI workflow? For example, I want to create an image that has a girl (face-swapped from one picture) in the top left and a boy (face-swapped from another picture) in the bottom right, both standing in a large field.
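To make the regional idea concrete, here is a small sketch of the masks such a layout implies. The canvas size and the quadrant split are assumptions; in ComfyUI the masks would be fed to mask-based conditioning or regional IP-Adapter nodes rather than built in NumPy.

```python
import numpy as np

# Hypothetical 1024x1024 canvas split into two regions for regional
# conditioning / regional IP-Adapter.
H = W = 1024
girl_mask = np.zeros((H, W), dtype=np.float32)
boy_mask = np.zeros((H, W), dtype=np.float32)

girl_mask[: H // 2, : W // 2] = 1.0      # top-left quadrant
boy_mask[H // 2 :, W // 2 :] = 1.0       # bottom-right quadrant

# The rest of the canvas ("a large field") is covered by the global prompt,
# i.e. wherever neither regional mask is active.
background = 1.0 - np.maximum(girl_mask, boy_mask)
print(girl_mask.mean(), boy_mask.mean(), background.mean())  # 0.25 0.25 0.5
```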
Merging 2 images, 30 nodes. It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules and groups as blocks and merging them into a workflow through muting (easily done from the Fast Groups Muter). That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. Look for the example that uses ControlNet LineArt. Or check out the ComfyUI "workflows" app in the app stores (I don't remember the exact name). Example workflow and video! Save your workflow using the API format, which is different from the normal JSON workflows. (For 12 GB of VRAM the max is about 720p resolution.) Table of contents. Civitai has a few workflows as well. An example of how machine learning can overcome all perceived odds (YouTube).

150 workflow examples of things I created with ComfyUI and AI models from Civitai: https://comfyworkflows.com/profile/d8351c2d-7d14-4801-84f4 What is the best workflow people have used with the most capability without using custom nodes? Some workflows alternatively require you to git clone the repository. I agree wholeheartedly. ComfyUI SDXL Examples: the example pictures do load a workflow, but they don't have a label or text indicating which version they're for. The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain what exactly is happening in the workflows being generated; I couldn't decipher it either, but I think I found something that works. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. I am trying to find a workflow to automate, by learning the manual steps (Blender, etc.), so I can integrate it with ComfyUI for a "$0 budget sprite game". For example, it would be very cool if one could place the node numbers on a grid. I can't see it, because I can't find the link for the workflow. I also moved my workflow host.

That being said, some users are moving from A1111 to Comfy. Yup, all images generated in the main ComfyUI frontend have the workflow embedded in the image like that (right now anything that uses the ComfyUI API doesn't have that, though). The only issue is that it requires more VRAM, so many of us will probably be forced to decrease resolutions below 512x512. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. Even with 4 regions and a global condition, the conditioning nodes just combine them two at a time. You can encode, then decode back to a normal KSampler with a 1.0 denoise. For SDXL, keep to resolutions with roughly the same pixel count as 1024x1024, for example 896x1152.
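A tiny helper, not from any of the quoted posts, for finding other sizes with roughly the same pixel budget as 1024x1024 (the aspect ratios chosen below are just common examples; 896x1152 is the portrait version of the same idea):

```python
# Keep the pixel count near 1024*1024 and the sides divisible by 64.
TARGET = 1024 * 1024

def snap64(x: float) -> int:
    return int(round(x / 64)) * 64

for ratio in (1.0, 4 / 3, 3 / 2, 16 / 9):
    h = snap64((TARGET / ratio) ** 0.5)
    w = snap64(h * ratio)
    print(f"{w}x{h}  ({w * h / TARGET:.2f}x the 1024x1024 pixel count)")
# prints sizes such as 1024x1024, 1216x896, 1280x832, 1344x768
```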
Anything below 1.0 denoise causes problems there because of the VAE round trip; maybe there is an obvious solution, but I don't know it. Discover, share, and run thousands of ComfyUI workflows on OpenArt. Continuing with the car analogy, learning ComfyUI is a bit like learning to drive with a manual shift: if you understand how the pipes fit together, you can design your own unique workflow (text2image, img2img, upscaling, refining, and so on). No manual setup needed: import any online workflow into your local ComfyUI and it is set up automatically. Flux Schnell. Plus a quick run-through of an example ControlNet workflow. I tried to find either of those two examples, but I have so many images I couldn't find them.

Support for SD 1.5 and HiRes Fix, IPAdapter, a Prompt Enricher via local LLMs (and OpenAI), and a new Object Swapper + Face Swapper; the prompt used as the starter prompt serves as an example for prompt variance. ComfyUI itself includes some example workflows in the project as well. Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and so on. I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. After all: the default workflow. I learned this from Sytan's workflow, and I like the result. Eventually you'll find your favorites, which shape how you want ComfyUI to work for you. For Flux Schnell you can get the checkpoint and put it in your ComfyUI/models/checkpoints/ directory.

You can adapt ComfyUI workflows to show only the needed input parameters in the Visionatrix UI (see the docs). That will give you a Save (API Format) option on the main menu. The only references I've been able to find mention this inpainting model using raw Python or Auto1111. The second workflow is called "advanced", and it uses an experimental way to combine prompts for the sampler. A video snapshot is a variant on this theme, and the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. ComfyUI and custom nodes update constantly, and a lot of the time nodes get obsoleted. You can then load or drag the following images. Adding the same JSONs to the main repo would only add noise to the commit history and duplicate the already existing examples repo. I can load the ComfyUI page over the local network. I would like to include those images in a ComfyUI workflow and experiment with different backgrounds: mist, light rays, abstract colorful stuff behind and in front of the product subject.

Edit: KSampler is where the image generation takes place, and it outputs a latent image. How it works: download and drop any image from the breakdown of workflow content. Thank you for sharing all this nice stuff. Img2Img ComfyUI workflow. This workflow should also help people learn about modular layouts, control systems, and a bunch of modular nodes I use together to create good images; don't try to learn ComfyUI by building a workflow from scratch. It covers the following topics (6 min read). One shared workflow creates a tall canvas and renders 4 vertical sections separately, combining them as it goes; a rough sketch of the section math follows.
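As a rough sketch of that tall-canvas idea (the height, section count, and overlap here are made-up numbers, and the actual rendering and compositing of each section is not shown):

```python
# Split a tall canvas into 4 overlapping vertical sections, so each section
# can be rendered on its own and blended back together as you go.
height, sections, overlap = 3072, 4, 128

step = (height - overlap) // sections            # vertical distance between section tops
boxes = []
for i in range(sections):
    top = i * step
    bottom = min(top + step + overlap, height)
    boxes.append((top, bottom))

print(boxes)   # [(0, 864), (736, 1600), (1472, 2336), (2208, 3072)]
```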
2/ Run the step-1 workflow once; all you need to change is where the original frames are and the dimensions of the output you want. But standard A1111 inpaint works mostly the same as this ComfyUI example you provided. I think that when you put too many things inside, it gives less attention to each of them. You can run it on only the face: perfect. Only the dog: also perfect. I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get results from a user's text and image input. The reason you typically don't want a final interface baked into a workflow is that many users will eventually want to apply LUTs and other post-processing filters.

Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values (a sketch of such a sweep follows below). These are examples demonstrating how to do img2img. I've been playing with ComfyUI for a while now and, even though I only do it for fun, I think I managed to create a workflow that will be helpful for others. Animating stills seems very hit and miss; most of what I'm getting looks like 2D camera pans. A node/graph/flowchart interface to experiment with. Starting workflow. This is why I used Rem as an example: to show you can "transplant" the look to a different character using a character LoRA.

Img2Img examples. Why is everything to do with this program like knocking on a random door out of curiosity and then having your teeth ripped out with no explanation? If this is not what you see, click Load Default on the right panel to return to the default text-to-image workflow. While I normally dislike providing workflows, because I feel it's better to teach someone to catch a fish than to hand them one, as a base to start from it'll work; the entire Comfy workflow is there and you can use it. What the ComfyUI-to-Python-Extension generates could be written by hand, but that's cumbersome, can't take advantage of the cache, and can only be run locally.

On speed: a couple of seconds per iteration on a 4070 with the 25-frame SVD model, and less with the 14-frame model. When I open the ComfyUI page at port 8188 and try to load a flow from one of the example images, it just does nothing; I can't load workflows from the example images on a second computer either. SD 1.5 with LCM at 4 steps and a low denoise works. You can just drop them into ComfyUI and the workflow loads. I also use the ComfyUI Manager to look at the various custom nodes available and see what interests me. A few examples of my ComfyUI workflow make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders of about 10 minutes. Another uses SD 1.5 to convert an anime image of a character into a photograph of the same character while preserving the likeness. To download a workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. There are so many resources available, but you need to dive in. To load a workflow, simply click the Load button on the right sidebar and select the workflow .json file.
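For the "programmatic experiments" use case, one hedged sketch is to patch an API-format workflow and queue one job per parameter combination. The filename, the server address, and the assumption that the exported graph contains standard KSampler nodes are all illustrative, not taken from the posts above.

```python
import copy
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"   # assumed default

with open("workflow_api.json", "r", encoding="utf-8") as f:   # exported with Save (API Format)
    base = json.load(f)

# Find KSampler nodes by class_type instead of hard-coding node ids, since the
# numeric ids depend on how the graph was built.
samplers = [nid for nid, node in base.items() if node.get("class_type") == "KSampler"]

for seed in (1, 2, 3):
    for cfg in (5.0, 7.0, 9.0):
        wf = copy.deepcopy(base)
        for nid in samplers:
            wf[nid]["inputs"]["seed"] = seed
            wf[nid]["inputs"]["cfg"] = cfg
        data = json.dumps({"prompt": wf}).encode("utf-8")
        req = urllib.request.Request(SERVER + "/prompt", data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)        # each call queues one experiment
```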
All the images in this repo contain metadata, which means they can be loaded into ComfyUI. For some workflow examples, and to see what ComfyUI can do, check out the ComfyUI Examples page (comfyanonymous.github.io). You can just use someone else's SDXL 0.9 workflow (search YouTube for "sdxl 0.9 workflow"; the one from Olivio Sarikas' video works just fine) and simply replace the models with the 1.0 versions and upscalers. There is a latent-upscale workflow and a pixel-space ESRGAN workflow in the examples. The WAS suite has some workflow stuff in its GitHub links somewhere as well. The workflow posted here relies heavily on useless third-party nodes from unknown extensions; for comparison, a faceswap with a decent detailer and upscaler should contain no more than 20 nodes. Ending workflow.

If you are going to use an LLM to write prompts, give it examples of good prompts from Civitai to emulate. One of the best parts about ComfyUI is how easy it is to download and swap between workflows. Say, for example, you made a ControlNet workflow for copying the pose of an image. Comfy's inpainting and masking aren't perfect. Has anyone gotten a good, simple ComfyUI workflow for SD 1.5? Belittling their efforts will get you banned. Doing this with plain image nodes would require many specific image-manipulation nodes to cut out the image region, pass it through the model, and paste it back (the sketch after this section shows the cut-and-paste geometry). So, up until today, I figured the "default workflow" was still always the best thing to use. Once loaded, go into the ComfyUI Manager and click Install Missing Custom Nodes.

For AP Workflow 9.0 I worked closely with u/Kijai, u/glibsonoran, u/tzwm, and u/rgthree to test new nodes, optimize parameters (don't ask me about SUPIR), develop new features, and correct bugs. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. The generated workflows can also be used in the web UI. Examples of ComfyUI workflows. Support for SD 1.x, 2.x, SDXL, LoRA, and upscaling makes ComfyUI flexible. That being said, even for making apps, I believe using ComfyScript is better than directly modifying JSON, especially if the workflow is complex. Download one of the dozens of finished workflows from Sytan, Searge, or the official ComfyUI examples. I recently started to learn ComfyUI and found this workflow from Olivio; I'm looking for something that does a similar thing but can instead start from an SD-generated or real image as input.
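The cut-region / process / paste-back geometry that the comment describes can be sketched outside ComfyUI with Pillow. The filenames and padding are assumptions, and the process() stand-in is just a blur rather than an actual model pass.

```python
from PIL import Image, ImageFilter

def process(region: Image.Image) -> Image.Image:
    # Placeholder for "pass it through the model"; a blur keeps the sketch runnable.
    return region.filter(ImageFilter.GaussianBlur(4))

image = Image.open("input.png").convert("RGB")
mask = Image.open("mask.png").convert("L")        # white = area to rework

left, top, right, bottom = mask.getbbox()         # tight box around the masked area
pad = 64                                          # extra context around the region
box = (max(left - pad, 0), max(top - pad, 0),
       min(right + pad, image.width), min(bottom + pad, image.height))

region = image.crop(box)
reworked = process(region)                        # model/sampler would go here
image.paste(reworked, box[:2], mask.crop(box))    # paste back only where the mask is white
image.save("output.png")
```

Sampling only this cropped region is why the "40x faster than sampling the whole image" style of inpainting workflow works: the expensive step only ever sees the small box.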
I created a platform that will enable you to share your ComfyUI workflows (for free) and run them directly in the cloud (for a small sum). These people are exceptional. I'll do you one better and send you a PNG you can directly load into Comfy. Users can drag and drop nodes to design advanced AI art pipelines, and also take advantage of libraries of existing workflows. However, without the reference_only ControlNet this works poorly. And I built a free website where you can share and discover thousands of ComfyUI workflows: https://comfyworkflows.com. ComfyUI's API is enough for making simple apps, but it is hard to write by hand. ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag-and-dropping a picture from that repo.

However, you can also run any workflow online: the GPUs are abstracted so you don't have to rent one manually, and since the site is in beta right now, running workflows online is free. Unlike simply running ComfyUI on some arbitrary cloud GPU, our cloud sets everything up automatically so there are no missing files or custom nodes. To improve sharpness, search for "was node suite comfyui workflow examples"; that should take you to a GitHub page with various workflows, one of which runs a high-pass for sharpening — you can download the workflow and run it on your Comfy (a plain-Python version of the same idea is sketched below).

First of all, sorry if this has been covered before; I did search and nothing came back. It upscales SD 1.5 from 512x512 to 2048x2048. It is pretty amazing, but the documentation could use some TLC, especially on the example front. I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image. MoonRide workflow v1. This workflow requires quite a few custom nodes and models to run: PhotonLCM_v10.safetensors, sd15_lora_beta.safetensors, and sd15_t2v_beta.ckpt among them. I call it "The Ultimate ComfyUI Workflow": easily switch from txt2img to img2img, with a built-in refiner, LoRA selector, upscaler, and sharpener. One guess is that the workflow is looking for the Control-LoRA models in the cached directory (which is my directory on my computer). Ignore the prompts and setup. AP Workflow 6. You can write workflows in code instead of separate files, use control flow directly, call Python libraries, and cache results across different workflows. For example, see this: SDXL Base + SD 1.5 + SDXL Refiner workflow. Join the largest ComfyUI community.

There are sections; take the examples that are available and give them a try. Only the LCM Sampler extension is needed, as shown in this video. Creating such a workflow with only the default core nodes of ComfyUI is not possible at the moment. I'm learning a lot. Thanks for getting this out, and for clearing everything up. If you don't see the right panel, press Ctrl-0 (Windows) or Cmd-0 (Mac). You will see that a workflow is made of two basic building blocks: nodes and edges. It's not the point of this post, and there's a lot to learn, but still, let me share my personal experience with you.
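As a plain-Python stand-in for that high-pass sharpening workflow (this is not the WAS-suite implementation itself), an unsharp-mask style sharpen with Pillow looks roughly like this; filenames and the blur radius are assumptions:

```python
from PIL import Image, ImageChops, ImageFilter

img = Image.open("input.png").convert("RGB")

# High-pass = original minus a blurred copy; adding that detail back sharpens edges.
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
highpass = ImageChops.subtract(img, blurred, scale=1.0, offset=0)   # crude, clips negatives

sharpened = ImageChops.add(img, highpass, scale=1.0, offset=0)
sharpened.save("sharpened.png")

# Pillow's built-in one-liner does much the same job:
# sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
```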
However, since prompting is pretty much the core skill required to work with any gen-AI tool, it's worthwhile studying that in more detail than ComfyUI, at least to begin with. The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio. A lot of people are just discovering this technology and want to show off what they created. Warning. Hi there. Any ideas on this? When I saw the recent "Generative Powers of Ten" post on r/StableDiffusion, I was pretty sure the nodes to do it already exist in ComfyUI. I've tried using an empty positive prompt (as suggested in demos) and describing the content to be replaced, without success. It's ComfyUI: with the latest version you just need to drop the picture from the linked website into ComfyUI and you'll get the setup. There are plenty of example workflows out there, from the simple to the insanely complex, but that isn't the same as having them accompany the install. Does anyone know how to do it like the example attached, so that it can be dropped into the page and used?

The workflow has a different upscale flow that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. The denoise controls the amount of noise added to the image. Or search Reddit; the ComfyUI manual needs updating, in my opinion. It took me hours to get an inpainting workflow I'm more or less happy with: I feather the mask (the feather nodes usually don't work how I want, so I convert the mask to an image, blur the image, then convert it back to a mask) and use "only masked area" so it also applies to the ControlNet. A plain-Python sketch of that feathering trick follows below. For example, if you want to use FaceDetailer, just type "Face" in the node search.

Upcoming tutorials planned: SD 1.5 LoRA with SDXL, upscaling, prompting practices, and post-processing images. ComfyUI has a tidy and swift codebase that makes adjusting to a fast-paced technology easier than most alternatives. If you see a few red boxes, be sure to read the Questions section on the page. ComfyScript is simple to read and write and can run remotely. It'll add nodes as needed if you enable LoRAs or ControlNet or want the result refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want that. It works by converting your workflow .json files into an executable Python script that can run without launching the ComfyUI server; it makes it really easy to regenerate an image with a small tweak, or just to check how you generated something. Now with support for SD 1.5. Then you finally have an idea of what's going on, and you can move on to ControlNets, IP-Adapters, detailers, CLIP Vision, and more. ComfyUI Examples. ComfyUI Fooocus Inpaint with Segmentation workflow. The best workflow examples are the GitHub examples pages. Has anyone else messed around with GLIGEN much?
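The mask-to-image, blur, image-to-mask trick amounts to blurring the mask itself. A minimal Pillow sketch, with an assumed filename and blur radius:

```python
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")   # hard-edged inpaint mask, white = masked

# Same idea as mask -> image -> blur -> mask in ComfyUI: soften the mask so the
# inpainted region fades into the original instead of ending in a hard seam.
feathered = mask.filter(ImageFilter.GaussianBlur(radius=12))
feathered.save("mask_feathered.png")

# Used as a composite alpha, the soft edge hides the seam:
# result = Image.composite(inpainted, original, feathered)
```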
This is just a slightly modified ComfyUI workflow from an example provided in the examples repo. TL;DR of the video: in the first part he uses RevAnimated to generate an anime picture with Rev's styling, then passes that image, prompt, and so on to a second sampler. Comfy has clearly taken a smart and logical approach with the workflow GUI, at least from a programmer's point of view. In ComfyUI, go into Settings and enable the dev mode options. Upscaling ComfyUI workflow. Under ./ComfyUI you will find the file extra_model_paths.yaml. Nodes are the rectangular blocks, e.g. Load Checkpoint, CLIP Text Encode, etc. Help me make it better! Hello — I am beginning to work with ComfyUI, moving over from A1111; I know there are so many workflows published to Civitai and other sites. Civitai has a ton of examples, including many ComfyUI workflows that you can download and explore. A somewhat decent inpainting workflow in ComfyUI can be a pain to make. You can just use someone else's workflow.

This example shows me just asking AnyNode: "I want you to output the image with a cool instagram-like classic sepia tone filter." It takes the input, knows it's an image, and then does what I asked. If you have previously generated images you want to upscale, you'd modify the HiRes flow to include the img2img nodes. There are 18 high-quality and very interesting style LoRAs that you can use for personal or commercial work. Mine do include workflows, for the most part in the video description. ComfyUI_examples SDXL examples. I've been using ComfyUI for a few weeks now and really like the flexibility it offers. Instead, I created a simplified 2048x2048 workflow. Share, discover, and run thousands of ComfyUI workflows.

The API workflows are not the same format as an image workflow: you create the workflow in ComfyUI and use the "Save (API Format)" button, which appears under the normal Save button once dev mode is enabled (a small format-detection sketch follows below). I'm glad to hear the workflow is useful. I just learned Comfy, and I found that if I just upscale an image, even 4x, it doesn't do much. ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference. I did some experiments and came up with a reasonably simple yet pretty flexible and powerful workflow that I use myself. From the ComfyUI_examples there are two different 2-pass (hires-fix) methods; one is a latent upscale. LoRA selector (for example, download the SDXL LoRA example from StabilityAI and put it into ComfyUI\models\lora\); VAE selector (download the default VAE from StabilityAI and put it into \ComfyUI\models\vae\), just in case there's a better or mandatory VAE for some models in the future. Restart ComfyUI. Don't rely on most old workflows and examples.
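Since the two JSON flavors are easy to mix up, a small heuristic check (my own sketch, not an official ComfyUI utility) can tell them apart: the normal UI workflow has top-level "nodes" and "links" lists, while the API-format export is a flat mapping of node id to class_type and inputs.

```python
import json

def looks_like_api_format(path: str) -> bool:
    """Heuristic: API-format files are flat dicts of node-id -> {class_type, inputs};
    UI workflow files have top-level "nodes" and "links" lists."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, dict) and "nodes" in data and "links" in data:
        return False                       # regular UI workflow
    return all(isinstance(v, dict) and "class_type" in v for v in data.values())

print(looks_like_api_format("workflow_api.json"))   # filename is an assumption
```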
It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. The process of building and rebuilding my own workflows with the new things I've learned has taught me a lot. Flux.1 ComfyUI install guidance, workflow, and example. Load the .json file. Then, if it looks good, I want to re-run it with upscaling and save the results. For example, let's say I want to just bash out 20 low-res images in preview to get a feel for a prompt. There is a ton of stuff here and it may be a bit overwhelming, but it's worth exploring. If you look at the ComfyUI examples for area composition, you can see they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> the positive input on the KSampler. You can load these images in ComfyUI to get the full workflow; just download one, drag it inside ComfyUI, and you'll have the same workflow you see above. You can find examples and workflows on his GitHub page, for example txt2img with latent upscale (partial denoise on the upscale) and a 48-frame animation with a 16-frame window. Img2Img works by loading an image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0; a small numeric illustration follows below.
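As a rough mental model of what that denoise value does (an approximation, not ComfyUI's exact scheduling code):

```python
# Rule of thumb for img2img: with `steps` total steps, a given denoise roughly
# means only the last `steps * denoise` steps are run on the noised input image.
steps = 30
for denoise in (1.0, 0.75, 0.5, 0.25):
    effect = "mostly new image" if denoise > 0.7 else "stays close to the input"
    print(f"denoise {denoise:>4}: ~{round(steps * denoise)} of {steps} steps, {effect}")
```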