Comfyui load workflow from image reddit
All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else. Notice that Face Swapper can work in conjunction with the Upscaler. Initial Input block - I can't load workflows from the example images using a second computer. I hope you like it. I'm sorry, I'm not at the computer at the moment or I'd get a screen cap. Please keep posted images SFW. It's simple and straight to the point. Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. I liked the ability in MJ to choose an image from the batch and upscale just that image. Have fun. How to solve the problem of looping? I had an idea to write an analog of a two-in-one Save Image / Load Image node that would save the last result to a file and then output it on the next run of the queue. [DOING] Clone public workflows by Git and load them more easily. I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow. I have to 2nd the comments here that this workflow is great. Thanks a lot for sharing the workflow. Nobody needs all that, LOL. Your efforts are much appreciated. This workflow allows you to load images of an AI avatar's face, shirt, pants, shoes and pose, and generates a fashion image based on your prompt. Thanks.
=== How to prompt this workflow === Main Prompt ----- The subject of the image in natural language. Example: a cat with a hat in a grass field. Secondary Prompt ----- A list of keywords derived from the main prompt, with references to artists at the end. Example: cat, hat, grass field, style of [artist name] and [artist name]. Style and References ----- It is necessary to give it the last generated image, as it loads the image locally. I thought it was cool anyway, so here. A search of the subreddit didn't turn up any answers to my question. Pixels and VAE. I have like 20 different ones made in my "web" folder, haha. This is the node you are looking for. Unfortunately, the file names are often unhelpful for identifying the contents of the images. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. After borrowing many ideas and learning ComfyUI. AP Workflow v5.0 includes the following experimental functions: Then I fix the seed to that specific image and use its latent in the next step of the process. If this is what you are seeing when you go to choose an image in the image loader, then all you need to do is go to that folder and delete the ones you no longer need. Get a quick introduction to how powerful ComfyUI is. Hidden Faces. This workflow chains together multiple IPAdapters, which allows you to change one piece of the AI avatar's clothing individually. Any ideas on this? Welcome to the unofficial ComfyUI subreddit. The prompt for the first couple, for example, is this: Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses an SD1.5 checkpoint. A quick question for people with more experience with ComfyUI than me.
I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into the empty ComfyUI canvas. I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP Text Encode node as indicated on the diagram I have here from the GitHub page. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more. Are you referring to the Input folder in the ComfyUI installation folder? ComfyUI runs as a server, and the input images are 'uploaded'/copied into that folder. Sync your collection everywhere by Git. The ComfyUI/web folder is where you want to save/load .json files. Ensure that you use this node and not Load Image Batch From Dir. However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI. My 2nd attempt: I thought to myself, I will go as basic and as easy as possible; I will limit the models I am using to only large, popular models, and I will stick to basic ComfyUI nodes as much as possible, meaning I have no custom nodes except for Manager and Workflow Spaces, that's it. Maybe a useful tool to some people. It animates 16 frames and uses the looping context options to make a video that loops. These are examples demonstrating how to do img2img. This is what it looks like, second pic. Hey all - I'm attempting to replicate my workflow from 1111 and SD1.5 by using XL in Comfy. About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it'll put in the wrong positive prompt. This causes my steps to take up a lot of RAM, leading to killed RAM.
this will open the live painting thing you are looking for. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. Now the problem I am facing is that it starts already morphed between the 2 images, I guess because it happens so quickly. That's how I made and shared this. I've been using ComfyUI for nearly a year, during which I've accumulated a significant number of images in my input folder through the Load Image node. No need to put in an image size, and it has a 3-stack LoRA with a Refiner. I had to load the image into the mask node after saving it to my hard drive. There's a node called VAE Encode with two inputs. Experimental Functions. Get Started with ComfyUI - Drag and Drop Workflows from an Image! (Run Diffusion, Aug 7, 2023). Workflows can only be loaded from images that contain the actual workflow metadata created by ComfyUI and stored in each image ComfyUI creates. You can then load or drag the following image in ComfyUI to get the workflow. Welcome to the unofficial ComfyUI subreddit. Drag and drop doesn't work for images without that metadata. So, I just made this workflow in ComfyUI. A lot of people are just discovering this technology and want to show off what they created. It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but this can be changed to whatever you like. Made this while investigating the BLIP nodes: it can grab the theme off an existing image, and then using Concatenate nodes we can add and remove features; this allows us to load old generated images as part of our prompt without using the image itself as img2img. Is there a common place to download these? None of the reddit images I find work, as they all seem to be JPG or WebP. You can load these images in ComfyUI to get the full workflow. That node will try to send all the images in at once, usually leading to 'out of memory' issues.
The graph that contains all of this information is referred to as a workflow in Comfy. It's nothing spectacular, but it gives good, consistent results. Starting workflow. If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find that the problem resolved itself. This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. Here's a tip: for speed, you can load an image using the (clipspace) method by right-clicking on images you generate. And above all, BE NICE. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Add your workflows to the collection so that you can switch and manage them more easily. And images that are generated using ComfyBox will also embed the whole workflow, so it should be possible to just load it from an image. I have a video and I want to run SD on each frame of that video. The image you're trying to replicate should be plugged into pixels, and the VAE for whatever model is going into the KSampler should also be plugged into the VAE Encode. Load your image to be inpainted into the mask node, then right-click on it and go to edit mask. Just load your image and prompt, and go. In 1111, using image to image, you can batch load all frames of a video, batch load ControlNet images, or even masks, and as long as they share the same name as the main video frames, they will be associated with the image when batch processing. This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. Load Image List From Dir (Inspire). The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA. Details on how to use the workflow are in the workflow link.
I can load workflows from the example images through localhost:8188; this seems to work fine. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3 or not. With a graph like this one, for instance, you can tell it to: load this model, put these bits of text into the CLIP encoder, make an empty latent image, use the loaded model with the encoded text and the noisy latent to sample the image, and save the resulting image. It uses an SD1.5 checkpoint in combination with a Tiled ControlNet to feed an Ultimate SD Upscale node for a more detailed upscale. You need to load and save the edited image. I am trying to understand how it works and created an animation morphing between 2 image inputs. Img2Img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. This is like copy-paste, basically, and doesn't save the files to disk. As someone relatively new to AI imagery, I started off with Automatic1111 but was tempted by the flexibility of ComfyUI, though I felt a bit overwhelmed. But when I try to load a flow through one of the example images, it just does nothing. Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow. I'm not really checking my notifications. Load Image node. For a .json file, hit the "load" button and locate the file that way. I want to load an image in ComfyUI and have the workflow appear, just as it does when I load a saved image from my own work. The diagram doesn't load into ComfyUI, so I can't test it out. That image would have the complete workflow, even with 2 extra nodes. In either case, you must load the target image in the I2I section of the workflow. Images created with anything else do not contain this data.
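A graph like the one described above maps directly onto ComfyUI's API "prompt" format: each node gets an id, a class_type, and inputs that reference other nodes as [node_id, output_index] pairs. A hypothetical minimal text-to-image graph follows; the checkpoint name, prompt text, and sampler settings are placeholders, and the node class names reflect the stock nodes as I understand them:

```python
# Minimal text-to-image graph in ComfyUI's API ("prompt") format.
# Links are [node_id, output_index]; e.g. ["1", 1] is node 1's CLIP output.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a cat with a hat in a grass field", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
```

POSTing this dict as JSON to a running server's /prompt endpoint queues the generation; the same structure is what drag-and-drop reconstructs from an image's metadata.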
I was just using Sytan’s workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. Like many XL users out there, I’m also new to ComfyUI and very much just a beginner in this regard. Upcoming tutorial: SDXL LoRA + using 1.5 LoRAs with SDXL, upscaling. Please share your tips, tricks, and workflows for using this software to create your AI art. My goal is that I start the ComfyUI workflow and the workflow loads the latest image in a given directory and works with it. You need to select the directory your frames are located in (i.e. where you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation. Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager, then test a vanilla default workflow. Is there a way to load the images in a video (or a batch) one at a time to save memory? Welcome to the unofficial ComfyUI subreddit. Enjoy. And you need to drag them into an empty spot, not a Load Image node or something. Pro-tip: insert a WD-14 or a BLIP interrogation node after it to automate the prompting for each image. I can load ComfyUI through 192.168.1.1:8188. If you are still interested - basically, I added 2 nodes to the workflow of the image (Load Image and Save Image). You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! Basically, if you have a really good photo but no longer have the workflow used to create it, you can just load the image and it'll load the workflow. Belittling their efforts will get you banned. Flux Schnell is a distilled 4-step model.
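The image_load_cap behaviour described above is plain truncation; a sketch of the equivalent logic (the function name is mine, not the node's):

```python
def apply_load_cap(frame_paths, image_load_cap=0):
    """Return every frame when the cap is 0, otherwise only the first
    `image_load_cap` frames — this is what sets the animation's length."""
    frames = list(frame_paths)
    if image_load_cap <= 0:
        return frames
    return frames[:image_load_cap]
```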
The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to off-screen all nodes that I don't actually change parameters on. The images above were all created with this method. You can save the workflow as a .json file and load it again from that file. AP Workflow v5.0. Ending Workflow. Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Welcome to the unofficial ComfyUI subreddit. It will load images in two ways: 1) direct load from HDD; 2) load from a folder (picks the next image after each generation). Prediffusion - this creates a very basic image from a simple prompt and sends it on as a source. Hi all! Was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc., and spit it out in some shape or form. They are completely separate from the main workflow. Browse and manage your images/videos/workflows in the output folder. Hello there. To be fair, I ran into a similar issue trying to load a generated image as an input image for a mask, but I haven't exhaustively looked for a solution. An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder so that for each queued gen it loads the 001 image from the folder, and for the next gen grabs the 002 image from the same folder? Thanks in advance! My ComfyUI workflow was created to solve that.
So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise? Just download it, drag it inside ComfyUI, and you’ll have the same workflow you see above. Also notice that you can download that image and drag-and-drop it into your ComfyUI to load that workflow, and you can also drag-and-drop images onto a Load Image node to load them quicker. And another general difference is that in A1111, when you set 20 steps and 0.8 denoise, you won't actually get 20 steps; it decreases that amount to 16. Those images have to contain a workflow, so one you've generated yourself, for example. This is just a simple node built off what's given and some of the newer nodes that have come out. I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace. I tried the load methods from was-node-suite-comfyui and ComfyUI-N-Nodes in ComfyUI, but they seem to load all of my images into RAM at once. I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a wkfl-embedded .png.
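The steps/denoise difference above is simple arithmetic: A1111 scales the step count by the denoise value, so 20 steps at 0.8 denoise actually runs 16. As a one-liner:

```python
def effective_steps(steps, denoise):
    """A1111-style img2img: only steps * denoise sampling steps are run."""
    return int(steps * denoise)
```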