ComfyUI upscale examples (collected from Reddit)

  • All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. What is the best workflow you know of? Examples of ComfyUI workflows follow.

There is also an UltimateSDUpscale node suite (as an extension). Instead, I use a Tiled KSampler. That said, Upscayl is SIGNIFICANTLY faster for me. Basically it doesn't open after downloading (v.22, the latest one available). I've so far achieved this with the Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model. This is what I have so far (using the custom nodes to reduce the visual clutter). This ComfyUI node setup lets you use the Ultimate SD Upscale custom nodes in your ComfyUI generation routine.

The video demonstrates how to integrate a large language model (LLM) for creative image results without adapters or ControlNets. That is, using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. Specifically, the padded image is sent to the ControlNet as pixels (the "image" input), and the same padded image is also VAE-encoded and sent to the sampler as the latent image. I have been generally pleased with the results I get from simply using additional samplers.

Image Processing: a group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. You can construct an image generation workflow by chaining different blocks (called nodes) together. The final node is where ComfyUI takes those images and turns them into a video.

Tried it: it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly. You also cannot go higher than 512, up to 768, resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower. This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0.

Run your prompt; this is done after the refined image is upscaled and encoded into a latent. Then plug the output from this into a 'Latent Upscale By' node set to whatever you want your end image to be at (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value in the following KSampler). Still working on the whole thing, but I got the idea down. That's because of the model upscale. You could add a latent upscale in the middle of the process, then an image downscale in pixel space at the end (use an upscale node with 0.5 if you want to divide by 2 after upscaling by a model). To find the downscale factor for that second part, calculate: factor = desired total upscale / fixed model upscale, e.g. 2.0 / 4.0 = 0.5. Try VAEDecode immediately after a latent upscale to see what I mean. Hires fix with an add-detail LoRA helps too. The tiled samplers mentioned above carve the image into patches and sample each one; see the sketch below.

You can run AnimateDiff at pretty reasonable resolutions with 8 GB or less; with less VRAM, some ComfyUI optimizations kick in that decrease the VRAM required. Flux examples: Flux is a family of diffusion models by Black Forest Labs. To install it as a ComfyUI custom node using ComfyUI Manager (the easy way), make sure you already have ComfyUI Manager (it's like an extension manager). SECOND UPDATE, HOLY COW I LOVE COMFYUI EDITION: look at that beauty! Spaghetti no more.
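A minimal Python sketch of the tiling idea behind Tiled KSampler and Ultimate SD Upscale, assuming PIL is available; the helper and its tile/overlap defaults are my own illustration, not part of either extension's API:

```python
from PIL import Image

def iter_tiles(img: Image.Image, tile: int = 512, overlap: int = 64):
    """Yield (box, crop) pairs covering the image with overlapping tiles.

    Tiled upscalers sample each crop separately and blend the overlaps;
    this is why too much denoise can turn every tile into its own image.
    """
    w, h = img.size
    step = tile - overlap
    for top in range(0, h, step):
        for left in range(0, w, step):
            box = (left, top, min(left + tile, w), min(top + tile, h))
            yield box, img.crop(box)
            if left + tile >= w:
                break
        if top + tile >= h:
            break
```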
Is there any actual point to your example about the six different models? This seems to inherently defeat the entire purpose of the six models, and would likely make the end result effectively random and uncontrollable, at least without extensive testing. You could also simply train or find a model/LoRA that has a similar result more easily. I created this workflow to do just that. Both of these are of similar speed. You'll notice that with SAG the city in the background makes more sense, and the sky doesn't have any city parts in it.

I was running some tests last night with SD1.5 and was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image. TBH, I haven't used A1111 extensively, so my understanding of A1111 is not deep, and I don't know what doesn't work in it. AH, I KNEW I was missing something that should be obvious! The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. Then I would do a model upscale > resize, or instead a tiled upscaling approach. I generally do the reactor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts (a loop like the sketch below). For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. I'm not entirely sure what Ultimate SD Upscale does, so I'll answer generally as to how I do upscales. Hands are still bad, though. Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended. Adding LoRAs in my next iteration.

Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again at a moderate denoise; you don't need that many steps. This is more of a starter workflow which supports img2img, txt2img, and a second-pass sampler; between the sample passes you can preview the latent in pixel space, mask what you want, and inpaint (it just adds a mask to the latent), and you can blend gradients with the loaded image, or start with an image that is only gradient.

The best method, as said below, is to upscale the image with a model (then downscale to the desired size if necessary, because most upscalers do x4 and that's often too big to process), then send it back to VAE encode and sample it again at 0.6 denoise, with either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9 with end_percent 0.9 (euler). So if you want 2x total, pair the 4x model with a 0.5 resize. Ty, I will try this.

I can only make a stab at some of these, as I'm still very much learning. For example, I can load an image, select a model (4xUltrasharp, for example), and select the final resolution (from 1024 to 1500, for example). It does not work with SDXL for me at the moment. Hi all, the title says it all: after launching a few batches of low-res images, I'd like to upscale all the good results. Jan 5, 2024: I needed a workflow to upscale and interpolate the frames to improve the quality of the video. A reminder that you can right-click images in the LoadImage node and edit them with the mask editor.
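The small-steps approach above can be sketched as a simple loop; `img2img` here is a hypothetical stand-in for one low-denoise sampling pass, not a real ComfyUI call:

```python
from PIL import Image

def iterative_upscale(img, img2img, scale_step=1.25, denoise=0.15, rounds=4):
    """Grow the image a little at a time, resampling lightly each round.

    `img2img` is a placeholder callable for one sampling pass at the
    given denoise (e.g. a queued KSampler run); the step/denoise values
    are illustrative starting points, not recommendations.
    """
    for _ in range(rounds):
        w, h = img.size
        img = img.resize((round(w * scale_step), round(h * scale_step)),
                         Image.LANCZOS)
        img = img2img(img, denoise=denoise)
    return img
```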
You can find examples and workflows on his GitHub page: for example, txt2img with latent upscale (partial denoise on upscale), a 48-frame animation with a 16-frame window. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. They are images of workflows: if you download those workflow images and drag them into ComfyUI, it will display the workflow. The example pictures do load a workflow, but they don't have a label or text that indicates which version they are. And you may need to do some fiddling to get certain models to work, but copying them over works if you are super-duper lazy. (A quick way to check whether an image still carries its workflow is sketched below.)

At 0.1-0.2 denoise to fix the blur and soft details, you can just use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it. Thanks! Latent upscale is different from pixel upscale. It's why you need at least 0.5 denoise afterwards. You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass. Images are too blurry and lack details; it's like upscaling any regular image with some traditional method. Latent quality is better, but the final image deviates significantly from the initial generation.

Then I upscale with 2x ESRGAN and sample the 2048x2048 again, and upscale again with 4x ESRGAN. I think I have a reasonable workflow that allows you to test your prompts and settings, then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow. It's not very fancy, but it gets the job done.

Feature/Version: Flux.1 Dev, Flux.1 Pro, Flux.1 Schnell. Overview: cutting-edge performance in image generation, with top-notch prompt following, visual quality, image detail, and output diversity.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). ComfyUI Examples: you can easily utilize the schemes below for your custom setups. The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together. Thanks. It's not that case in ComfyUI: you can load different checkpoints and LoRAs for each KSampler, Detailer, and even some upscaler nodes. ComfyUI is a powerful and modular GUI for diffusion models with a graph interface. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Use 0.X denoise values if you want to benefit from the higher-res processing.

My workflow runs about like this: [KSampler] -> [VAEDecode] -> [Resize] -> [VAEEncode] -> [KSampler #2 through #n]. I typically use the same or a closely related prompt for the additional KSamplers, the same seed, and most other settings, with the only differences among my (for example) four KSamplers in the #2-#n positions. 2x upscale using Ultimate SD Upscale and Tile ControlNet. After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.
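As far as I know, ComfyUI writes the editor graph to a 'workflow' PNG text chunk (and the API-format graph to 'prompt'), so a quick Python check for stripped metadata looks like this; the function name is my own:

```python
import json
from PIL import Image

def embedded_workflow(path: str):
    """Return the ComfyUI workflow embedded in a PNG, or None if stripped.

    Images re-saved or re-compressed by an image host usually lose their
    text chunks, which is why drag-and-drop then loads nothing.
    """
    info = Image.open(path).info
    raw = info.get("workflow")
    return json.loads(raw) if raw else None

# usage: None means the metadata is gone and the image won't load a graph
print(embedded_workflow("workflow_image.png"))
```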
That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like. Look at this workflow. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. I thought it was cool and wanted to do that too. This repo contains examples of what is achievable with ComfyUI.

Now change the first sampler's state to 'hold' (from 'sample') and unmute the second sampler, then queue the prompt again: this will now run the upscaler and second pass. This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc. Making a bit of progress this week in ComfyUI. (Change the positive and negative prompts in this method to match the primary positive and negative prompts.) All hair strands are super thick and contrasty, the lips look plastic, and the upscale couldn't deal with her weird mouth expression because she was singing. If I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face into the process.

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling (see the sketch below). Now there's also a `PatchModelAddDownscale` node. Visit their GitHub for examples. You can use folders too, so e.g. cascade/clip_model.safetensors vs 1.5/clip_model_somemodel.safetensors and 1.5/clip_some_other_model.safetensors; it makes it easier to remember which one to choose when you're stringing together workflows. Explore its features, templates, and examples on GitHub. Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.
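Here is a rough sketch of the non-latent 2-pass chain (decode, resize in pixel space, re-encode, resample) as an API-format prompt fragment in Python; the node IDs, seed, and values are made up for illustration, while the class and input names follow stock ComfyUI nodes:

```python
# Assumes node "4" is a CheckpointLoaderSimple (outputs MODEL=0, CLIP=1,
# VAE=2), "6"/"7" are the CLIPTextEncode prompts and "9" is the first
# KSampler. Only the second-pass nodes are shown.
second_pass = {
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["9", 0], "vae": ["4", 2]}},
    "11": {"class_type": "ImageScaleBy",
           "inputs": {"image": ["10", 0],
                      "upscale_method": "bicubic", "scale_by": 2.0}},
    "12": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["11", 0], "vae": ["4", 2]}},
    "13": {"class_type": "KSampler",  # low denoise for the second pass
           "inputs": {"model": ["4", 0], "seed": 0, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "positive": ["6", 0], "negative": ["7", 0],
                      "latent_image": ["12", 0], "denoise": 0.35}},
}
```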
ComfyUI Fooocus Inpaint with Segmentation Workflow. I started to use ComfyUI/SD locally a few days ago and I wanted to know how to get the best upscaling results. Jan 13, 2024: So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model). I've played around with different upscale models in both applications, as well as settings. If I wanted any enhancements/details that latent upscaling could provide, I limit the upscale to around 1.25 to keep the process and VRAM usage lower; use a second KSampler at 20+ steps set to a fairly high denoise.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to (a minimal client sketch follows below). With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\comfyUI or similar.

Upscale model examples: here is an example of how to use upscale models like ESRGAN. Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. The 16 GB usage you saw was for your second, latent upscale pass. For example, it's like performing sampling with the A model for only part of the process. You just have to use the "upscale by" node with the bicubic method and a fractional value.

My problem is that my generation produces a 1-pixel line at the right/bottom of the image, which is weird/white. ATM I start the first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample it again, like the docs say. If your image changes drastically on the second sample after upscaling, it's because you are denoising too much.

Examples below are accompanied by a tutorial in my YouTube video. For example, you might prompt the model differently when it's rendering the smaller patches, removing the "kangaroo" entirely. Work out how much upscale it needs to reach that final resolution (either a normal upscaler, or an upscaler value that has been 4x-scaled by the upscale model). Example workflow of usage in ComfyUI: JSON / PNG. The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. Until now I was launching a pipeline on each image one by one, but is it possible to have an automatic iterative task to do this? That might be a great upscale if you want semi-cartoony output, but it's nowhere near realistic. Here is an example of how to use the Inpaint ControlNet; the example input image can be found here. This could lead users to put increased pressure on developers. Nevertheless, I found that when you really want to get rid of artifacts, you cannot run a low denoise. My postprocess includes a detailer sample stage and another big upscale. I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.
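For driving that backend from another app, a minimal client can mirror ComfyUI's bundled API example script; this assumes a local server on the default port 8188:

```python
import json
import urllib.request

def queue_prompt(prompt: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST an API-format prompt to a running ComfyUI instance.

    Follows the pattern of ComfyUI's bundled basic API example; the port
    is 8188 unless the server was launched with a different --port.
    """
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```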
Appreciate you just looking into it. Requirements: a working ComfyUI installation (https://github.com/comfyanonymous/ComfyUI) and ComfyUI Manager (https://github.com/ltdrdata/ComfyUI-Manager).

The img2img pipeline has an image preprocess group that can add noise and gradient, and cut out a subject for various types of inpainting. Haven't used it, but I believe this is correct. Again, I would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice. I liked the ability in MJ to choose an image from the batch and upscale just that image. This will get to the low-resolution stage and stop. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

You can use mklink to link to your existing models, embeddings, LoRAs, and VAEs, for example: `F:\ComfyUI\models>mklink /D checkpoints F:\stable-diffusion-webui\models\Stable-diffusion` (a scripted equivalent is sketched below). I would output my image and keep the resolution down while any non-tiled sampler is going to be working on it. This one is with SAG; both are after two latent upscales.

The upscale quality is mediocre, to say the least. I have to push around 0.3 in order to get rid of jaggies; unfortunately, it will diminish the likeness during the Ultimate Upscale. Second, you will need the Detailer SEGS or Face Detailer nodes from ComfyUI-Impact Pack. Hi everyone, I've been using SD/ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Just remember: for best results, you should use a detailer after you upscale. Do this by applying both a prompt to improve detail and an increase in resolution (indicated as a percentage, for example 200% or 300%). You can encode then decode back to a normal KSampler with a 1.0 denoise.

Hello! I am hoping to find a ComfyUI workflow that allows me to use Tiled Diffusion + ControlNet Tile for upscaling images; can anyone point me toward a Comfy workflow that does a good job of this? I am now just setting up ComfyUI and I have issues (already, LOL) with opening the ComfyUI Manager from CivitAI. For ComfyUI there should be license information for each node, in my opinion ("commercial use: yes, no, needs license"), and a workflow using a non-commercial node should show a warning in red.

I usually take my first sample result to pixel space, upscale by 4x, downscale by 2x, sample from step 42 to step 48, then pass it to my third sampler for steps 52 to 58, before going to post with it. I am so sorry, but my video is outdated now because ComfyUI has officially implemented SVD natively: update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints to your Comfy models SVD folder, and just delete the custom nodes ComfyUI-SVD. This is why I want to add ComfyUI support for this technique. For 2x, upscale using a 4x model (e.g. Ultrasharp), then downscale. The resolution is okay, but if possible I would like to get something better.
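A Python equivalent of the mklink trick above, for those who prefer a script; the paths are illustrative, and on Windows creating directory symlinks may require Developer Mode or an elevated shell:

```python
import os

# Link ComfyUI's checkpoints folder to an existing A1111 model directory
# so both UIs share one copy of the models. Adjust paths to your setup.
os.symlink(
    r"F:\stable-diffusion-webui\models\Stable-diffusion",
    r"F:\ComfyUI\models\checkpoints",
    target_is_directory=True,
)
```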
The only issue is that it requires more VRAM, so many of us will probably be forced to decrease the resolution below 512x512. However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI. After borrowing many ideas and learning ComfyUI, I gave up on latent upscale. I also combined ELLA in the workflow to make it easier to get what I want. I hope this is due to your settings, or because this is a WIP, since otherwise I'll stay away.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

Tried it, and no matter what, Upscayl is a speed demon in comparison. In ComfyUI, we can break their approach into components and make adjustments at each part to find workflows that get rid of artifacts. Usually I use two of my workflows: "latent upscale" followed by denoising around 0.5, or "upscaling with model" followed by a lower denoise. Work at rounded resolutions, like 1024, 1280, 2048, 1536. The downside is that it takes a very long time.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image. And when purely upscaling, the best upscaler is called LDSR. Check the ComfyUI image examples in the link. Does anyone have any suggestions; would it be better to do an iterative upscale?

TLDR: In this tutorial, Seth introduces ComfyUI's Flux workflow, a powerful tool for AI image generation that simplifies the process of upscaling images up to 5.4x using consumer-grade hardware. You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow.

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details; see the sketch below). From there you can use a 4x upscale model and run sample again at low denoise if you want higher resolution. For now I got this prompt: "A gorgeous woman with long light-blonde hair wearing a low-cut tank top, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by Artgerm and Alphonse Mucha, trending on Behance, very detailed, by the best painters."
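The noise-injection idea can be sketched in a few lines of Python; the strength value is a made-up starting point, not a recommendation from the thread:

```python
import torch

def inject_noise(latent: torch.Tensor, strength: float = 0.05) -> torch.Tensor:
    """Add mild gaussian noise to an upscaled latent before resampling.

    The extra noise gives the next sampling pass something to diffuse
    into fine detail; too much strength starts shifting the composition.
    """
    return latent + strength * torch.randn_like(latent)
```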
My workflow's features:

  • Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved)
  • Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but more can easily be added)
  • Details and bad-hands LoRAs loaded

I use it with DreamShaperXL mostly, and it works like a charm. Hello: for more consistent faces, I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent-upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image. This is not the case. Sure, it comes up with new details, which is fine, even beneficial for the second pass in a t2i process, since the miniature first pass often has some issues due to imperfections.

Is there a workflow to upscale an entire folder of images, as is easily done in A1111 in the img2img module? Basically I want to choose a folder and process all the images inside it (a batching sketch follows below). Newcomers should familiarize themselves with easier-to-understand workflows, as it can be somewhat complex to understand a workflow with so many nodes in detail, despite the attempt at a clear structure. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance complexity with ease of use for end users.

You end up with images anyway after KSampling, so you can use those upscale nodes. There are also "face detailer" workflows for faces specifically. The workflow has different upscale flows that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. For the easy-to-use single-file versions that you can use in ComfyUI, see below: FP8 checkpoint version. Depending on the noise and strength, it ends up treating each square as an individual image, so instead of one girl in an image you get 10 tiny girls stitched into one giant upscaled image.

In this guide, we are aiming to collect a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself. On my 4090, with no optimizations kicking in, a 512x512 16-frame animation takes around 8 GB of VRAM. Jul 6, 2024: What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component, which allowed me to move all the mask logic nodes behind the scenes. If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.
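For the folder-batch question above, the iteration itself is easy to script in Python; `upscale` is a placeholder for whatever does the real work (a queued ComfyUI prompt, an ESRGAN wrapper, and so on):

```python
from pathlib import Path
from PIL import Image

def upscale_folder(src: str, dst: str, upscale, pattern: str = "*.png"):
    """Batch-apply an upscale callable to every image in a folder.

    This only handles the folder iteration that A1111's img2img batch
    tab gives you for free; plug in your own `upscale` implementation.
    """
    out = Path(dst)
    out.mkdir(parents=True, exist_ok=True)
    for p in sorted(Path(src).glob(pattern)):
        img = Image.open(p).convert("RGB")
        upscale(img).save(out / p.name)
```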