ComfyUI batch upscale images (Reddit thread compilation)

Check out the example workflows on the GitHub. Using the motion ckpt with Kosinkadink's AnimateDiff Evolved. Combined Searge and some of the other custom nodes. It is not a problem with the seed, because I tried different seeds.

Images are now saved with metadata readable in A1111 WebUI, Vladmandic SD.Next, and SD Prompt Reader.

If the dimensions of the second image do not match those of the first, it is rescaled and center-cropped to maintain its aspect ratio. You can try the image upscaling models on Tiyaro.

Even with 4 regions and a global condition, they just combine them two at a time until it becomes a single positive condition to plug into the sampler. The length should be 1 in this case.

Previously, I upscaled using a landscape image, and the results were quite satisfactory. All of the batched items will process until they are all done.

Go into the mask editor for each of the two images and paint in where you want your subjects.

Workflow: generating a 12-step Juggernaut image, CFG 7, 7:4 aspect ratio, no LoRA, nothing else. To find the downscale factor in the second part, calculate: factor = desired total upscale / fixed upscale.

These images are zoomed-in views that I created to examine the details of the upscaling process, showing how much detail survives.

Ultimate SD Upscale for ComfyUI. I liked the ability in MJ to choose an image from the batch and upscale just that image. I don't know why these example workflows are laid out so compressed together.

All it takes is a little time to compile the specific model with the resolution settings you plan to use.

To disable/mute a node (or group of nodes), select them and press CTRL + M.

This means that in the upscaling process new details can be added to the image, depending on the denoising strength.
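The factor arithmetic from that comment can be sketched in a few lines (the function name is illustrative, not a ComfyUI node; the resulting number is what you would type into an "Upscale Image By" node after a fixed-scale model):

```python
def downscale_factor(desired_total_upscale: float, fixed_model_upscale: float) -> float:
    """Factor for the resize step that follows a fixed-scale upscale model,
    so that: fixed_model_upscale * factor == desired_total_upscale."""
    return desired_total_upscale / fixed_model_upscale

# A 4x model, but only 1.5x total upscale wanted:
print(downscale_factor(1.5, 4.0))  # 0.375
```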
I had some great feedback, and now I'll share the results! AutoCrispy now supports 6 different backends, plus ESRGAN (and its entire library of models!). These backends now also include RealSR, SRMD, Waifu2x, and Anime4k.

I tried math operations, but they are incompatible with primitives, as I'm sure you know.

Pass it to a conditional with a) a standard ESRGAN upscale using 4xUltraSharp v10, or b) your node (LDSR), 25 steps, none/none settings. I've put a few labels in the flow for clarity.

With CFG 1 it used to work.

I send the output of AnimateDiff to UltimateSDUpscale with 2x ControlNet Tile and 4xUltraSharp.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Nothing was wrong with the webui until I ran the img2img batch. Nothing special, but easy to build off of.

The img2img pipeline has an image preprocess group that can add noise and gradient, and cut out a subject for various types of inpainting. So if you have a lower-end GPU it might be better for you to work in batches of 1.

I'm new to Comfy. "Upscale Image (using Model)" scales an image to a certain size with an upscale model.

I was unable to find anything close to batch processing. Is that possible in ComfyUI? I love the tool, but without batch processing it becomes useless for my personal workflow :(

Image Blend. At a minimum, you need just 4 nodes. That will upscale with no latent invention/injection of creative bits, but still intelligently adds pixels.

Use IP Adapter for the face, then upscale with Ultimate SD Upscale. Documentation for the SD Upscale plugin is NULL.

Increase the factor to four times, utilizing the capabilities of the 4x UltraSharp model.
In researching inpainting using SDXL 1.0 in ComfyUI, I've come across three different methods that seem to be commonly used: Base Model with Latent Noise Mask; Base Model using InPaint VAE Encode; and using the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face.

The workflow is kept very simple for this test: Load Image → Upscale → Save Image, with denoise at 0.3. I am curious both which nodes are the best for this, and which models.

Today we will use ComfyUI to upscale Stable Diffusion images to any resolution we want, and even add detail.

Batch overlay images on top of another. To duplicate parts of a workflow from one place to another…

Can't really recommend Caffe myself.

Now, I'm attempting to upscale a larger-sized image of a person while maintaining clear facial details. Put something like "highly detailed" in the prompt box.

I was running some tests last night with SD 1.5.

For general upscaling of photos, go: Remacri 4x upscale. (*I think it's better to avoid 4x upscale generation.) (2) Repeat step 1 multiple times to increase the size to x2, x4, x8, and so on.

Listed below are in "find name / display name" format (God knows what ComfyUI node developers are smoking when they decide it's an awesome idea to make these two things different).

For A1111 users: I am the author, but you will unfortunately have to wait.

Batch size in img2img (ComfyUI)? Hey there, I recently switched to ComfyUI and I'm having trouble finding a way of changing the batch size within an img2img workflow. All the ones I've found online seem to require a…

What I want: generate with model A at 512x512 → upscale → regenerate with model A at higher res; generate with model B at 512x512 → upscale → regenerate with model B at higher res; and so on.
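The "repeat step 1 multiple times" doubling above is just a resolution ladder; a small sketch of the arithmetic (illustrative helper, not part of any node pack):

```python
def upscale_ladder(width: int, height: int, passes: int):
    """Resolution after each repeated 2x upscale pass: x2, x4, x8, ..."""
    sizes = [(width, height)]
    for _ in range(passes):
        w, h = sizes[-1]
        sizes.append((w * 2, h * 2))
    return sizes

print(upscale_ladder(512, 512, 3))
# [(512, 512), (1024, 1024), (2048, 2048), (4096, 4096)]
```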
Do a 1.5x upscale back to the source image, and upscale again to 2x.

Hi, I am upscaling a long sequence (batch + batch count) of images, one by one, from 640x360 to 4K.

UPDATE: In the most recent version (9/22), this button is gone.

You can also do a regular upscale using bicubic or lanczos. You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2).

I upscaled it to a resolution of 10240x6144 px.

Researchers discover that Stable Diffusion v1 uses internal representations of 3D geometry when generating an image.

I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load.

Over multiple batches, the memory allocated to Python creeps up until it has entirely consumed all available RAM.

The really important part is to be sure that each slice overlaps the other ones by a significant amount. Turns out ComfyUI can generate 7680x1440 images on 10 GB VRAM.

2 -- Cut the image into tiles.

A bit of a loss for how to do something which I thought would be super simple: does anyone have a solution to stack 1-n images together without struggling with missing images? "Make Image List" and "Make Image Batch" from the Impact pack.

This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, and encode the image back.

I'm trying to find a way of upscaling the SD video up from its 1024x576.

I want ONE part of an image, say a hand or a face…
My current workflow, which is pretty decent, is to render at a low base resolution (something close to 512px), use highres fix to upscale 2x, and then use SD Upscale in img2img to upscale 2x again. That works better for me since it renders at the highest image size my card can handle, which isn't a lot, and helps minimize artifact generation.

Different Waifu2x models will give different results.

A video tutorial on how to use ComfyUI, a powerful and modular Stable Diffusion GUI and backend, is here.

To move multiple nodes at once, select them and hold down SHIFT before moving.

Hello all, I had a question around how some of you handle this. How to run an upscaling flow on all the images from a directory? Hi all, the title says it all: after launching a few batches of low-res images, I'd like to upscale all of them.

Add a "Load Image" node and right-click it: "Convert image to input".

Does anyone have any idea? PS: I mean the upscale in txt2img. PS2: The one I used is R-ESRGAN 4x+ Anime6B.

The main features are: works with SDXL and SDXL Turbo as well as earlier versions like SD 1.5.

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. For example, I can load an image and select a model (4xUltraSharp). I mean the image looks very grainy.

If you're using ComfyUI and have a lot of RAM, once you hit your max VRAM ComfyUI will offload some of that workload to RAM, which is much slower than VRAM.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI.

Repeat until you have an image you like that you want to upscale.

I'm something of a novice, but I think the effects you're getting are more related to your upscaler model, your noise, your prompt, and your CFG.

This is a good starting point. Each of my slices was 512x768px, but it can be 512x512 or any size that SD can handle on your configuration.
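The slicing-with-overlap idea above (fixed-size tiles that share a generous border so seams blend) can be sketched as plain coordinate bookkeeping; this is an illustrative helper, not how any particular tiling node is implemented:

```python
def tile_origins(image_size: int, tile_size: int, overlap: int):
    """Top-left coordinates of tiles along one axis, each tile
    overlapping the previous one by `overlap` pixels."""
    stride = tile_size - overlap
    origins = list(range(0, max(image_size - tile_size, 0) + 1, stride))
    # make sure the final tile reaches the image edge
    if origins[-1] + tile_size < image_size:
        origins.append(image_size - tile_size)
    return origins

print(tile_origins(1280, 512, 128))  # [0, 384, 768]
```

Run it once per axis (width and height) and take the cross product to get every tile's top-left corner.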
SargeZT has published the first batch of ControlNet and T2I-Adapter models for XL. He continues to train; others will be launched soon!

Inpainting workflow for ComfyUI.

Training a LoRA will cost much less than this, and it costs still less to train a LoRA for just one stage of Stable Cascade.

I've found the RepeatImageBatch node, but it has a max of 64 images.

And bump the mask blur to 20 to help with seams.

x4-upscaler-ema.safetensors (SD 4X Upscale Model): I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom, weird setup).

I kinda need help. I use this workflow, and I don't know why, but the output of it is always blurry. I tried adjusting the sampler and steps, but the images are still blurry.

I assume most everything is 512 and higher, based on SD 1.5.

ComfyUI has been far faster so far using my own tiled image-to-image workflow (even at 8000x8000), but the individual frames in my image are bleeding into each other and coming out inconsistent, and I'm not sure why.

Perhaps I can make a "load images" node like the one I have now, where you can load all images in a directory that is compatible with that node.

I'm using AnimateDiff a lot, but when I want to make an animation from a single image I need to make an image sequence of the same image, like duplicating the image 100 times for a 100-frame AnimateDiff run. I know of programs that can do it outside of ComfyUI, but I'm looking for something that can be part of a workflow.

Stability AI accused by Midjourney of causing a server outage by attempting to scrape MJ image + prompt pairs. Emad denies that this was authorized, and announced an internal investigation.

I am switching from Automatic to Comfy and am currently trying to upscale.

Repeat a single image, or turn a single image into a video. I tried 'primitive' but it won't let me increment by a predefined value.

My workflow currently removes the background from an Animate Anyone video with the rembg node, and then I want to layer it frame by frame onto the SVD frames.
I have tried this. The WF starts like this: I have a "switch" between a batch-directory mode and a single-image mode, going to a face detection and improvement step (first use of the prompt) and then to an upscaling step to add detail and increase image size (second use of the prompt).

Text-to-image generation is still in the works, because Stable Diffusion was not trained on these dimensions, so it suffers from coherence problems.

Here is my current 1.5 workflow.

- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

Workflow for detailing and changing faces? I am looking for a workflow example on using FaceDetailer.

You should insert an ImageScale node. Single image works by just selecting the index of the image.

Generated image at 2752x512 at 20 steps. Enjoy. Correct me if I'm wrong.

ComfyUI x4 upscalers. I've struggled with Hires.fix.

…without fundamentally altering the image's content. I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image.

Did Update All in ComfyUI. Thanks!

Hi Chris, thanks again for these nodes.

x1.5 ~ x2: no need for a model; it can be a cheap latent upscale.
Use about 0.4, but with a ControlNet relevant to your image so you don't lose too much of your original image; combine that with the iterative upscaler, and concat a secondary positive prompt telling the model to add detail or improve detail.

Batching large numbers of images for upscale / hires fix.

Running it through an image upscale on bilinear.

As you can see, we can understand a number of things Krea is doing here.

I did click on ComfyUI_windows_portable\update\update_comfyui_and_python_dependencies.bat.

Please feel free to criticize and tell me what I may be doing silly. SDXL 1.0 should have the correct Positive and Negative CLIP values.

Thanks to them, I can now move the Image Chooser node to a more prominent and always-on position in the AP Workflow.

I've submitted a bug to both ComfyUI and Fizzledorf, as I'm not sure which side will need to correct it.

But if it helps, try different upscalers than RealESRGAN 4x, maybe the 4x UltraSharp upscaler, and see if it's better.

The first is a tool that automatically removes the background of loaded images (we can do this with WAS), BUT it also allows for dynamic repositioning, the way you would do it in Krita.

Pos Prompt: nebula in the cosmos, astrophotography, giant sprawling nebula that never ends, colorful.

…and spit it out in some shape or form. For a personal project I need to create 100… How to batch load images? I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, and then upload the next.

Not sure if Comfy has his own Discord or anything, but that would also be a good resource.

There are 2 ways: the easy way, and the mess-around-for-hours-getting-it-perfect way. Hook a "Preview Image" node to the first output, then add a reroute node that you can easily hook/unhook for the upscale.

I have a custom image resizer that ensures the input image matches the output dimensions. (I am unable to upload the full-sized image.)

THE SCIENTIST - 4096x2160.
Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Sorry if this is 'too basic', but I'm probably overthinking this.

…pth, or 4x_foolhardy_Remacri.

If anyone has been able to: the quick fix is to put your following KSampler above 0.…

Install and configure custom nodes for advanced upscaling.

Drag & drop into the img2img frame. I'm trying to use Ultimate SD Upscale for upscaling images.

The Empty Latent Image will run however many you enter through each step of the workflow. This is faster than trying to do it all at once and keeps the high res.

ComfyUI handles the limitations of mid-level GPUs better than most alternatives. Those images have metadata, meaning you can drag and drop them into the Comfy window.

Note, this has nothing to do with my nodes; you can check ComfyUI's default workflow and see it yourself.

AUTOMATIC1111 has an option to upscale images with different upscale/restoration models. I use them in my workflow regularly. I made a tutorial on the YouTubes.

Overall: image upscale is less detailed, but more faithful to the image you upscale.

Best (simple) SDXL inpaint workflow. Hi guys, is there a custom node or a way to replicate the A1111 Ultimate Upscale extension in ComfyUI?

Toggle if the seed should be included in the file name or not.

…fix and other upscaling methods like the Loopback Scaler script and SD Upscale. Same as SwinIR, which adds a lot of detail to the image.

Workflow: Choose images from batch to upscale. (Apr 24, 2023)
The key observation here is that by using the efficientnet encoder from Hugging Face, you can immediately obtain what your image should look like after stage C if you were to create it with stage C; so if you only want upscaling, you can start from that.

Using the first case, if you were trying to generate three images, you would set the batch_size to 3 in the `Empty Latent Image` node.

It crashed with:

  File "D:\stable-diffusion-webui\venv\lib\site-packages\PIL\JpegImagePlugin.py", line 643, in _save
    rawmode = RAWMODE[im.mode]
  The above exception was the direct cause of the following exception:
  File "D:\stable-diffusion-webui\…"

Third, you have batch size set to 4, which will generate 4 images at a time but will use more VRAM.

I'm using mm_sd_v15_v2.

Starting with a 512x512 image, if you do two 4x upscales…
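Chained upscale arithmetic like the "two 4x upscales" above is easy to get wrong; a sketch of the bookkeeping (an illustrative helper, not a ComfyUI node):

```python
def chained_size(base: int, *scales: float) -> int:
    """Edge length after applying each upscale/downscale factor in order."""
    size = base
    for s in scales:
        size = int(size * s)
    return size

# A 4x model followed by a 0.5 "upscale by" is a 2x total:
print(chained_size(512, 4, 0.5))  # 1024
```

Two full 4x passes multiply together, which is why intermediate downscale factors are usually needed to keep the final resolution sane.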
I'm doing this: I use ChatGPT+ to generate the scripts that change the input image using the ComfyUI API.

Detailing the Upscaling Process in ComfyUI.

I was also getting weird generations, and then I just switched to using someone else's workflow and the images came out perfectly, even when I changed all my workflow settings to be the same as theirs for testing, so that could be a bug.

Hires fix: you should use this if you want to refine the base image without straying from it.

Your best bet: Example 1 - Multi-Upscale. You can upscale the latent space, upscale the image directly, use ControlNets, and use different upscaling models and methods.

A 1.5 model of choice runs in a reasonable amount of time on a 2080 Super (8 GB).

To drag-select multiple nodes, hold down CTRL and drag.

So, I just made this workflow in ComfyUI. Here's an example: in Krea, you can see the useful ROTATE/DIMENSION tool on the dog image I pasted.

Here is a suggested workflow using nodes that are typically available.

New to ComfyUI, so not an expert. Img2Img Upscale: upscale a real photo? Trying to expand my knowledge, and one of the things I am curious about is upscaling a photo. Let's say I have a backup image, but it's not the best quality.

Upscaling is done with iterative latent scaling and a pass with 4x-UltraSharp.

Top reroute goes to Preview Image; bottom reroute goes to the upscaling part.
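Scripting ComfyUI like the commenter above describes works by exporting a workflow with "Save (API Format)" and POSTing it to the server's /prompt endpoint. A minimal sketch, assuming a local server on the default port 8188 and that node id "10" happens to be the LoadImage node in your exported graph (both are assumptions about your setup):

```python
import copy
import json
from urllib import request

def build_prompt(workflow: dict, load_image_node: str, filename: str) -> str:
    """Return the JSON body for ComfyUI's POST /prompt, with the
    LoadImage node pointed at `filename`. Does not mutate `workflow`."""
    wf = copy.deepcopy(workflow)
    wf[load_image_node]["inputs"]["image"] = filename
    return json.dumps({"prompt": wf})

def queue_prompt(body: str, server: str = "http://127.0.0.1:8188") -> bytes:
    """Queue the patched workflow on a running ComfyUI server."""
    req = request.Request(server + "/prompt", data=body.encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req).read()

# Usage sketch (requires a running server and an API-format export):
# workflow = json.load(open("upscale_api.json"))
# for name in ["a.png", "b.png"]:
#     queue_prompt(build_prompt(workflow, "10", name))
```

Looping `build_prompt` over a directory listing gives exactly the "process a whole folder" behavior several posts in this thread are asking for.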
Also, try ESRGAN (there are also a lot of custom models on upscale.wiki) or USRNet if you want to do stuff that is drawn.

Hello, my ComfyUI workflow takes 2 images as input and generates an output image combining those two images.

I'm also aware you can change the batch count in the extra options of the main menu.

When I generate an image with the prompt "attractive woman" in ComfyUI, I get the exact same face for every image I create. When I do the same in Automatic1111, I get completely different people and different compositions for every image.

I don't get where the problem is. I have checked the ComfyUI examples and used one of their hires fixes, but when I upscale the latent image I get a glitchy image (only the non-masked part of the original I2I image) after the second pass; if I upscale the image out of the latent space and then back into latent for the second pass, the result is OK.

I'm trying to increment a float value (say, a LoRA strength) by a fixed step.

It's a bit cumbersome to get the regular IPAdapter Plus node conditioned differently for each image in a batch, so I created a custom node that applies IPAdapter from scratch for each image.

I want a checkbox that says "upscale" or whatever that I can turn on and off. This is what I do, but not in ComfyUI directly.

The workflow also has TXT2IMG, IMG2IMG, up to 3x IP Adapter, 2x Revision, predefined (and editable) styles, optional upscaling, ControlNet Canny, ControlNet Depth, LoRA, a selection of recommended SDXL resolutions, adjusting input images to the closest SDXL resolution, etc. Then it passes the remaining image batch on.

SDXL has 2 text encoders on its base, and a specialty text encoder on its refiner.

The Upscale Image node can be used to resize pixel images.

Queue the flow and you should get a yellow image from the Image Blank node.

GFPGAN.

So you will upscale just one selected image.
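Incrementing a float like a LoRA strength per queued run can also be precomputed outside the graph; a sketch (the 0.1 step is a hypothetical example, since the original value is truncated in the thread):

```python
def strength_sweep(start: float, stop: float, step: float):
    """Evenly spaced LoRA-strength values, counting in integers
    to avoid accumulating float drift."""
    n = int(round((stop - start) / step))
    return [round(start + i * step, 4) for i in range(n + 1)]

print(strength_sweep(0.5, 1.0, 0.1))  # [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
```

Each value can then be patched into an API-format workflow, one queued prompt per strength.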
Afterwards you can use the same latent and tweak the start and end steps to manipulate it.

I've so far achieved this with the Ultimate SD image upscale and the 4x-Ultramix_restore upscale model.

I looked at the WAS-node Load Image Batch, but that one seems to look for incremental file naming.

You can just run the upscale through img2img with the Ultimate Upscale extension; it really doesn't do anything special. I can do that to upscale a given image, yeah.

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors.

Greetings, Community! As a newcomer to ComfyUI (though a seasoned A1111 user), I've been captivated by the potential of Comfy. Does anyone know if there is a way to load a batch of images from my drive into Comfy for an image-to-image upscale? I have scoured the net but haven't found one.

The Ultimate AI Upscaler (ComfyUI Workflow): for a dozen days, I've been working on a simple but efficient workflow for upscaling. I've uploaded the workflow link and the generated pictures from before and after Ultimate SD Upscale for reference. Like the Leonardo AI upscaler.

Great idea! I'd pitch in. This looks sexy, thanks. Will be interesting seeing LDSR ported to ComfyUI, or any other powerful upscaler.

For 2x, upscale using a 4x model. Doing that manually is a pain in the ass.

1 - LDSR upscaler or more powerful upscaler: saw a different grade of detail upscaling in Automatic1111 vs. ComfyUI.

This ability emerged during the training phase of the AI, and was not programmed by people.
On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

Use 0.5 to get a 1024x1024 final image (512 × 4 × 0.5 = 1024).

New AnimateDiff on ComfyUI supports unlimited context length; Vid2Vid will never be the same! [Full Guide/Workflow in Comments]

ComfyUI - SDXL basic-to-advanced workflow tutorial - part 5.

I am pretty sure it is possible; just point me in the right direction :P Can anyone show me an example of how to do batch ControlNet poses inside ComfyUI? I've been at it all day and can't figure out what node is used for this.

I think it works well for most cases.

For a 2x upscale, Automatic1111 is about 4 times quicker than ComfyUI on my 3090; I'm not sure why.

It seems that Upscayl only uses an upscaling model. (1) Upscale the generated image using 2x as the SD upscale factor.

While ComfyUI is better than default A1111, TensorRT is supported on A1111, uses much less VRAM, and image generation is 2-3x faster.

In my experience you need to change it to "fixed" on the generation before the one you want to keep.

A lot of people are just discovering this technology, and want to show off what they created.

0.48 denoise (ControlNet depth map and softedge were used to get rid of unwanted artifacts during upscaling).

Double-click the new "image" input that appeared on the left side of the node.
Depending on the noise and strength, it ends up treating each square as an individual image.

Upscale Image - ComfyUI Community Manual.

A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer and Face Detailer.

See comments made yesterday about this: #54 (comment). I did want it to be totally different, but ComfyUI is pretty limited when it comes to the Python nodes without customizing ComfyUI itself.

You can use the UpscaleImageBy node to scale up and down, by using a scale factor < 1.

You upload an image → unsample → KSampler Advanced → same recreation of the original image.

Customizing and Preparing the Image for Upscaling. Once the image is set for enlargement, specific tweaks are made to refine the result: adjust the image size to a width of 768 and a height of 1024 pixels, optimizing the aspect ratio for a portrait view.

If you increase this above 1, you'll get more images from your batch, up to the max number in your original batch.

Image-to-image was taking < 10s. It's not solely for upscaling.

He's got a channel specifically for ComfyUI, and Comfy himself posts there daily.

Batch image generation in ComfyUI. Question: is there a way to import prompts/settings from a text/CSV file for batch image generation in ComfyUI?

Also, you can make a batch and set a node to select an index number from the batch (latent or image). But that's not what I want to do; I want to create a NEW image with high denoise and then automatically "hires fix" that.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.
After running the update batch file, the upload image option is back for me.

Well, it will work, but you can't get the exact same images if you 'pick' a different number than all the images in the initial batch.

For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply "upscale by" 0.5, you end up at 1024x1024.

You can't do batches with that website.

A new Prompt Enricher function, able to improve your prompt with the help of GPT-4 or GPT-3.5-Turbo.

- Now change the first sampler's state to 'hold' (from 'sample') and unmute the second sampler.

And also bypass the AnimateDiff Loader model to the original Model Loader in the To Basic Pipe node, or else it will give you noise on the face (the AnimateDiff Loader doesn't work on a single image — you need at least 4 or so — and FaceDetailer can handle only 1). That is the only drawback.

Then on the new node: control after generate: increment.

Can anyone suggest a good stand-alone batch upscaling solution for Windows/AMD? Currently I'm using Zyro and uploading each image separately, but I'm looking for a solution to automate this.

Also the exact same position of the body.

There's a new optional node developed by u/Old_System7203 to select the best image of a batch before executing the rest of the workflow. Although it is not yet perfect (his own words), you can use it and have fun.

I haven't really shared much and want to use others' ideas, as I can't see it working with randomness.

I implemented the experimental Free Lunch optimization node.

If you do two 4x upscales with a 1.0 factor after the first upscale and a 2.0 factor after the second upscale, then…

In case you are dividing the image resolution in two and then upscaling: it works fairly well, and is easily changeable if you need to divide by different amounts for larger images.

…works with SD 1.5, but appears to work poorly with external (e.g.…)
The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111.

For 'photorealistic' videos with lots of fine details it doesn't seem a great approach; the final…

I'm in the same situation, except I'm running on a laptop GPU.

I'm not at home, so I can't share a workflow.

(positive image conditioning)…

If you pull the latest from rgthree-comfy and restart, you should see it as "Image Comparer (rgthree)". LMK what you think.

There's a new Hands Refiner function.

Absolutely you can.

Thanks for the answer! I almost got it working, but I have questions. I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc.

(I didn't use AI, latent noise, and a prompt to generate it.) What nodes/workflow would you guys use to get the best results? As my test bed, I'll…

Yes, with Ultimate SD Upscale.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

ComfyUI Node: 🔍 CR Upscale Image.

I found chflame163/comfyUI_LayerStyle, which does…

To create a seamless workflow in ComfyUI that can handle rendering any image and produce a clean mask (with accurate hair details) for compositing onto any background, you will need to use nodes designed for high-quality image processing and precise masking.

I have 2 images.
"Upscaling with model" is an operation on normal images, where we can use a corresponding model such as 4x_NMKD-Siax_200k.pth.

So in this workflow each of them will run on your input image. Ultimate SD Upscale uses a diffusion process that depends on the SD model and the prompt, plus an upscaling model such as Real-ESRGAN, and it can also be combined with ControlNet.

Would prefer to do this in Comfy because of speed and workflow.

Personally, when I'm upscaling an image it's usually because I like it the way it already looks, and upscaling at 0.15 denoise strength adds plenty of minor details, smooth lines, etc.

Went to Updates as suggested below and ran the ComfyUI and Python Dependencies batch files, but that didn't work for me.

A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer and Face Detailer. Useful to create and control image batches.

Queue the prompt again - this will now run the upscaler and second pass.

The title explains it: I am repeating the same action over and over on a number of input images, and instead of manually loading each image and pressing "Queue Prompt", I would like to be able to select a folder and have Comfy process all input images in that folder.

Extract the zip. Likewise, would pitch in! Could be a great way to check on these quick last-second refiner passes.

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

I got this problem where I can't reproduce a similar result in ComfyUI.

He continues to train; others will be launched soon on Hugging Face.
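The select-a-folder request above can also be scripted from outside ComfyUI: the server queues workflows over HTTP (POST /prompt on port 8188 by default, as in the API example script in the ComfyUI repo). A sketch, assuming a workflow exported via "Save (API Format)" and a LoadImage node with id "1" - both the node id and file layout are assumptions, and images must already be in ComfyUI's input folder:

```python
import json
import os
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address

def build_payload(workflow: dict, image_name: str) -> bytes:
    # Point the LoadImage node (assumed id "1") at the next file,
    # without mutating the caller's workflow dict.
    wf = json.loads(json.dumps(workflow))
    wf["1"]["inputs"]["image"] = image_name
    return json.dumps({"prompt": wf}).encode("utf-8")

def queue_folder(workflow: dict, folder: str) -> None:
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        req = urllib.request.Request(
            COMFY_URL,
            data=build_payload(workflow, name),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # one queued prompt per image
```

This queues one run per file; ComfyUI then works through the queue exactly as if you had pressed "Queue Prompt" once per image.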
SD upscale enlarges the loaded reference image size by the chosen scale factor.

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

Select the "SD upscale" button at the top. Every Sampler node (the step that actually generates the image) in ComfyUI requires a latent image as an input.

For instance, if you did a batch of 4 and really just want to work on the second image, the batch index would be 1. The batch index should match the picture index minus 1.

That's because latent upscale turns the base image into noise (blur).

It's mainly focused on generating artificial images, but the upscaling option is really handy.

I want to create a character with Animate Anyone and a background with SVD.

These ComfyUI nodes can be used to restore faces in images, similar to the face restore option in the AUTOMATIC1111 webui.

I searched this forum but only found a few threads from a few months ago that didn't give a definitive answer. Does anyone have any suggestions? Would it be better to do an iterative upscale? Hopefully someone can help.

These comparisons are done using ComfyUI with default node settings and fixed seeds. All of these timings use the same settings, with different random seeds and different batch sizes: batch size / time in seconds per iteration / max GPU memory in GB / total time in seconds.

How do I superimpose just ONE part of an image into another and let the KSampler continue its process from there (mainly upscaling with a bit of noise)?

If there are images with different prompts in the upscale folder, I don't want to do the repetitive work of copying the prompt from the JSON file.

Heads up: Batch Prompt Schedule does not work with the Python API templates provided on the ComfyUI GitHub.

The tiling uses my custom simpletiles node, which is much simpler. Thought IPAdapter could be a good way to control tiled upscaling.
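The picture-index-minus-one rule above is just 0-based indexing; a sketch of how a batch-selection step like ComfyUI's "Latent From Batch" behaves (the function here is illustrative, operating on a plain list):

```python
def latent_from_batch(batch, picture_number: int, length: int = 1):
    # The UI counts pictures from 1, the batch index counts from 0,
    # so picture 2 of a batch of 4 is batch_index 1.
    batch_index = picture_number - 1
    return batch[batch_index:batch_index + length]
```

With `length=1` you get exactly the one image you "picked" from the batch, which is the common case described above.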
The 4x upscalers I've tried aren't great with it; I suspect the starting detail is too low.

Heya, part 5 of my series of step-by-step tutorials is out. It covers improving your advanced KSampler setup and using prediffusion with an uncooperative prompt to get more out of your workflow.

Now I understand what you meant: since you load the batch image you prefer in a new ComfyUI session, the prompt associated with that image is loaded and processed by the Efficient Loader node.

Copy that (clipspace) and paste it (clipspace) into the Load Image node directly above (assuming you want two subjects).

Generally a workflow like this gives good results: generate the initial image at 512x768, upscale with a model (e.g. UltraSharp), then downscale. Sample again at a low denoise.

Do I have to upload frame by frame to upscale them? By the way, I don't know how to code.

Then you can cut out the face and redo it with IP Adapter.

The really cool thing is how it saves the whole workflow into the picture.

So instead of one girl in an image you get 10 tiny girls stitched into one giant image.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. In Comfy all the more so; the image simply looks unnatural after the upscaling.

Also added a second part where I just use random noise in a latent blend.

To create a new image from scratch you input an Empty Latent Image node, and to do img2img you use a Load Image node and a VAE Encode to load the image.

OK, ran my first test. Give it a shot! And how is the quality of the upscale with respect to face details, skin detail, textiles, and background? I don't know if LCM is ideal for upscaling.

Curious what my best option/operation/workflow and upscale model would be.
The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI upscale > downscale image nodes.

Using the Load Image Batch node from the WAS Suite repository, I can sequentially load all the images from a folder, but for upscaling I also need the prompt with which each image was created. If one could point "Load Image" at a folder instead of at an image, and cycle through the images as a sequence, that would be great. You set a folder, set it to increment_image, and then set the number of batches.

Model: Swizz8-V2-FP16. VAE: baked. Upscaler: 4x_foolhardy_Remacri. Tile size: 512x448. Seed: 686465493884716. Steps: 50. CFG: 4. Denoise: 0. Negative prompt: text, watermark, bright, oversaturated, dark. Sometimes I drop it to 0.

My current workflow sometimes changes some details a bit; it makes the image blurry or too sharp.

This will get to the low-resolution stage and stop.

We can share this when we are done, if it would be helpful.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

There isn't a "mode" for img2img. Instead, you need to go down to "Scripts" at the bottom and select the "SD Upscale" script.

A new Face Swapper function.

My sample pipeline has three sample steps, with options to persist ControlNet and mask, regional prompting, and upscaling.

SVD-XT upscaling with t2i-adapter depth (Zoe).

If you want to upscale to a specific size. Thank you, that did it.

Yes, for sure. Batch-processing images by folder in ComfyUI. Wanted them to look sharp. A video compare would be amazing too!
The Infinite Image Browser extension can be used stand-alone.

A while back, I shared a program I cooked up for automatically upscaling textures from games that dump them.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512, with the pieces overlapping each other.

There is making a batch using the Empty Latent Image node's batch_size widget, and there is making a batch in the control panel.

The last step is an "Image Save" node with prefix and path.

The Image Blend node can be used to blend two images together.

Pixel Art XL v1.1 🚀 Release - working on AUTOMATIC1111/ComfyUI out of the box, improved coherence.

SDXL most definitely doesn't work with the old ControlNet.

Hello everyone! Not sure how to approach this. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

I recommend you do not use the same text encoders as 1.5.

Don't listen to the haters. Reading some comments, they criticize the changes to the image, when Magnific changes it too.

The inset shows the initial image at its corresponding scale.

Best is to copy the seed before you do. You could try to pp your denoise at the start of an iterative upscale at say 0.4 each "step".

It's still pretty slow, but thanks for this workflow. I want to experiment to get an upscaler similar to Magnific, and this is going in the right direction, even if it's simple and nothing new.

Open the automatic1111 webui.
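The cut-into-overlapping-tiles step above is easy to reason about numerically: each new tile advances by (tile - overlap) pixels, so the tile count per axis follows directly. A sketch of the count math only (the actual node's parameters differ, and real implementations clamp the last tile to the image edge):

```python
import math

def tile_grid(width: int, height: int, tile: int = 512, overlap: int = 64):
    # Number of overlapping tiles needed to cover each axis.
    step = tile - overlap
    cols = max(1, math.ceil((width - overlap) / step))
    rows = max(1, math.ceil((height - overlap) / step))
    return cols, rows
```

So a 512x512 image is a single tile, while a 1024x1024 upscale with 64 px of overlap already needs a 3x3 grid - which is why tiled upscales get slow quickly as the target size grows.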
Initial Setup for Upscaling in ComfyUI

On my Nvidia 3060 with 12 GB VRAM, ComfyUI will run the newer SDXL checkpoints quite well, whereas A1111 is marginal at best for SDXL (at least as of v1.0).

We ask that you please take a minute to read through the rules and check out the resources provided before creating a post, especially if you are new here.

I use it for SDXL and v1.5. SD XL 1.0 Depth Vidit, Depth Faid Vidit, Depth, Zeed, Seg, Segmentation, Scribble.

Once your upscale is done, you will need to slice it into slices.

I am currently using webui for such things; however, ComfyUI has given me a lot of creative flexibility compared to what's possible with webui, so I would like to know.

It does random one more time before going to fixed.

Hires fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details enough, which causes the end result to look too smooth (sometimes losing detail).

That's a cost of about $30,000 for a full base model train.

Look up the latent upscale method as well; this performs a staggered upscale to your desired resolution in one workflow queue.

What I see from your resulting image is that it looks like a model (and maybe a prompt) had a big influence.

Ultimate SD Upscale works fine with SDXL, but you should probably tweak the settings a little bit.

This website uses the original Linux waifu2x, btw.

It will swap images each run, going through the list of images found in the folder.
But also, you can connect a whole bunch of sampler setups one after the other.

Workflow: Google Drive link.

I find that setting my width and height to 1/2 makes a 2x2 grid per frame, which with LCM can be quick and adds a good amount of detail.

Please keep posted images SFW.

Here is my 1.5 txt2img workflow, if anyone would like to criticize or use it.

My postprocess includes a detailer sample stage and another big upscale.

However, this allows us to stay faithful to the base image, as much as possible.

Delete duplicates in image batch node? I'm looking for a node that takes an input of images in a batch, searches through them, and automatically deletes images that look identical.

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image. Resize down to what you want.

(* The image size specification setting is ignored.)

I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and the way Ultimate Upscale bypasses it.

Image/latent batch number selector node: I am searching for a node that does the following: I want to generate batches of images (like 4 or 8) and then select only specific latents/images of the batch (one or more) to be used in the rest of the workflow for further processing like upscaling/FaceDetailer.

It will come with version 6.

He published on HF: SD XL 1.0.

Got sick of all the crazy workflows. Add a "Load Image" node, right-click it: "Convert image to input".

Apparently you are making an image with base and doing img2img with refiner, which isn't the recommended workflow.
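Outside ComfyUI, the duplicate-deletion request above is a few lines of scripting, at least for byte-identical copies (the function name is illustrative). Images that merely *look* identical but were re-encoded would need a perceptual hash instead, e.g. the third-party imagehash library:

```python
import hashlib

def drop_duplicates(images):
    # Keep the first copy of each byte-identical image in the batch.
    # `images` is a list of raw image bytes.
    seen, unique = set(), []
    for img in images:
        digest = hashlib.sha256(img).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(img)
    return unique
```

Run over a folder, this would be `drop_duplicates([open(p, "rb").read() for p in paths])` before feeding the survivors into the upscale queue.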
Has 5 parameters which allow you to easily change the prompt and experiment.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image.

I would like to create high-resolution videos using Stable Diffusion, but my PC can't generate high-resolution images.

Create two masks via "Pad Image for Outpainting": one without feather (use it for fill, VAE encode, etc.) and one with feather (use only for merging the generated image with the original via alpha blend at the end). First grow the outpaint mask by N/2, then feather by N.

Image Upscale does not give true high-resolution results; the quality of the upscale is between the base resolution and the targeted one.

Then I combine it with a combination of Depth, Canny, and OpenPose ControlNets. I am cleanly able to make 2048 images with my 1.5 model.

I uploaded the workflow to GitHub.

Basically, the Load Image node works well with 16-bit PNG, but it cannot load a sequence of images as a batch.

You can take any picture generated with Comfy, drop it into Comfy, and it loads everything.

↑ Node setup 1: Generates an image and then upscales it with USDU (save the portrait to your PC, drag and drop it into your ComfyUI interface, replace the prompt with yours, and press "Queue Prompt"). ↑ Node setup 2: Upscales any custom image.

I did a simple comparison: 8x upscaling from 256x384 to 2048x3072. If you pre-upscale it with a GAN before denoising with low strength, it should take even less time.
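The grow-by-N/2-then-feather-by-N advice above centres the blend band on the seam between the original and the outpainted region. A numeric sketch of the linear alpha ramp across that band (the function is illustrative, not a ComfyUI node):

```python
def feather_ramp(n: int):
    # Linear alpha across an n-pixel feather band:
    # values near 0 keep the original, values near 1 take the generated image.
    # Growing the mask by n // 2 first places this band across the seam.
    return [(i + 1) / (n + 1) for i in range(n)]
```

For example, a 3-pixel feather blends at 25%, 50%, and 75%, so neither side of the seam changes abruptly.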
We are Reddit's primary hub for all things modding, from troubleshooting for beginners to creation of mods by experts.

I want a slider for how many images I want in a batch.

I'm also looking for an upscaler suggestion. CUI is also faster.

Upscaling: increasing the resolution and sharpness at the same time.

It's popular to merge the latent upscale with the image upscale.

It's simple and straight to the point. I'm aware that the option is in the Empty Latent Image node, but it's not in the Load Image node.

Nodes! Because you can move these around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great, because I often forget that stuff.

You can try my method in Automatic1111: Ultimate SD Upscale 2x and Ultimate Upscale 3x.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

Like the batch feature in A1111 img2img or ControlNet.

I was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

This is amazing! Please, please, please give us a workflow? Thank you very much. If you want to always be updated on my workflows, follow me on other social networks; I will be happy to help you and will always update you with new workflows and models.

An AI splat, where I do the head (6 keyframes), the hands (25 keys), the clothes (4 keys), and the environment (4 keys) separately and then mask them all together.

I use SD mostly for upscaling real portrait photography, so facial fidelity (accuracy to source) is my priority.
2 - Custom models/LoRAs: tried a lot from CivitAI - epicrealism, cyberrealistic, absolutereality, realistic.

This works best with Stable Cascade images, might still work with SDXL or SD1.5, but appears to work poorly with external (e.g. natural or MJ) images.

This is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

Then rescale to 1024 x 1024. Sample again, denoise=0.35, 10 steps or less.

Cheers, appreciate any pointers! Somebody else on Reddit mentioned this application to drop and read.

Outcome - pluses: LDSR pulverises the "hires fix", and I mean that.

Go to the "img2img" tab at the top.

Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn ways to use both. Like, yeah, you can drag a workflow into the window, and sure it's fast, but even though I'm sure it's "flexible", it feels like pulling teeth to work with.

But somehow it creates an additional person inside already generated images.

ComfyUI is better for actually grokking what SD is doing, which in turn helps you go beyond the basics.

If you want to upscale images using Ultimate SD Upscale, check this video on the topic. It's a bit annoying to do there.

A 0.5 denoise fixes the distortion (although obviously it's going to change your image).

The little grey dot on the upper left of the various nodes will minimize a node if clicked.

Also, both have a denoise value that drastically changes the result. I am building a workflow.

The new "Only pause if batch" and "Pass through" modes are brilliant. That's the question.

This is the image I created using ComfyUI, utilizing DreamShaper XL 1.0. Almost exaggerated.

Thank you, sd-webui-controlnet team!

While the normal text encoders are not "bad", you can get better results if using the special encoders.

Take the output batch of images from SVD and run them through Ultimate SD Upscale nodes.
Paper: "Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model".

Great update, Chris. Then upscale to 2048 x 2048.

Please share your tips, tricks, and workflows for using this software.

In Automatic it is quite easy, and the picture at the end is also clean: color gradients are smooth, and details on the body like the veins are not so strongly emphasized.

Text-to-image using a selection from the initial batch.

But if it is possible to implement this type of change on the fly in the node system, then yes, it can overcome 1111.

They usually tend to be better than waifu2x.

The only option to use that node with sequences is to activate the auto-queue mode, but that's not a solution for me, as this only loads one image at a time, and I need a batch containing all the images at the same time.

I recently switched to ComfyUI from AUTOMATIC1111, and I'm having trouble finding a way to change the batch size within an img2img workflow.

I was running the SDXL 1.0 base, no LoRAs, 20 steps, 9 cfg, dpmpp_2m, karras, using ComfyUI. Had the same issue.

Upscale in smaller jumps: take 2 steps to reach double the resolution.

And above all, BE NICE.

CUI can do a batch of 4 and stay within the 12 GB; the image size will upscale from 512 x 512 to 2048 x 2048. Thank you.

Fill in your prompts. Wonder if anyone came across this.

This will run the workflow once, on a single seed, and generate three images all with the same seed.

If you don't want the distortion, decode the latent, use an image upscale, then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

ComfyUI is amazing.
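Taking two steps to reach double the resolution means each step scales by the square root of the total factor. A sketch of the staggered-upscale size math (function names are illustrative):

```python
def per_step_factor(total: float, steps: int) -> float:
    # Equal multiplicative jumps: doubling in 2 steps means
    # each step scales by 2 ** (1 / 2), roughly 1.414.
    return total ** (1.0 / steps)

def staggered_sizes(w: int, h: int, total: float = 2.0, steps: int = 2):
    # Intermediate (width, height) pairs for each jump of the upscale.
    f = per_step_factor(total, steps)
    sizes, cw, ch = [], w, h
    for _ in range(steps):
        cw, ch = round(cw * f), round(ch * f)
        sizes.append((cw, ch))
    return sizes
```

Starting at 512x512 and doubling in two jumps gives an intermediate pass at roughly 724x724 before landing on 1024x1024, which keeps each individual denoise step from straying too far from the source.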