ComfyUI snap to grid (Reddit)

It works for a single model, in fact it works with a few, but I run out of memory if I try to grid with more.

Also added a second part where I just use random noise in a Latent Blend.

If you've installed the nodes that contain the ControlNet preprocessors, it should be there.

Look at this workflow: I made a workflow to generate multiple options of the fix using the amazing ImpactPack, and then to choose and paste the best one into the original picture. Click this and paste into Comfy.

Size of the actual blueprint isn't important.

And remember, SDXL does not play well with 1.5, so that may give you a lot of your errors.

And boy, I was blown away by how well it uses the GPU.

Solved: you create the primitive node and then drag its output to any input; the primitive node will then update to a number.

I do not know if this problem is the same on A1111 because I've only been using ComfyUI. Basically I load up ComfyUI from "run_nvidia_gpu.bat" and click "queue prompt" on my workflow.

I'd be interested in doing this either via audio generated within Comfy, or audio sequentially plucked from a folder of audio files, like a batch image loader but for audio files. I know there is the ComfyAnonymous workflow, but it's lacking.

A checkpoint is your main model, and then LoRAs add smaller models to vary the output in specific ways. Auto1111 uses command-line args to specify folders; Comfy uses an extra models file (extra_model_paths.yaml).

Nodes in ComfyUI represent specific Stable Diffusion functions.

This approach allows selective attention coupling at the relevant layers without having to recompute the entire UNet multiple times for different prompts.

ComfyUI basic to advanced tutorials. Cheers, appreciate any pointers!

Somebody else on Reddit mentioned this application to drop and read.

The biggest tip for Comfy: you can turn most node settings into an input by right-clicking the node and choosing "Convert to input", then connecting a primitive node to that input. You can then connect the same primitive node to 5 other nodes to change them all in one place instead of editing each node.

Try it out and let me know if it works. Would be nice to have a way to do it from the main UI.

Installation: follow the link to the Plush for ComfyUI GitHub page if you're not already here.

I noticed a jump in performance (it's not hogging my resources like crazy) and a huge improvement in generation speed.

The net effect is a grid-like patch of local average colors.

Thanks tons! That's the one I'm referring to. CLIPSeg Plugin for ComfyUI.

For example, it's like performing sampling with the A model for only 10 steps, then synthesizing another latent, injecting noise, and proceeding with 20 steps using the B model.

ComfyUI Extensions by Blibla is a robust suite of enhancements designed to optimize your ComfyUI experience.

In A1111, I usually inpaint, try some keywords, and generate 2-4 images. If it looks like it's going in the right direction, I generate 4-12 images depending on how hard I think it'll be to get the right result, and I increase the resolution of the masked area to get more detail.

The venv folder should be inside the ComfyUI folder, and it should point to the Python version found when it is created, the first time you run ComfyUI.

Bulk convert audio files + 1 image to video.
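For the "bulk convert audio files + 1 image to video" request above, here is a minimal sketch driving ffmpeg from Python. It assumes ffmpeg is installed and on PATH, and the file and folder names are placeholders rather than anything from the original post:

    # Batch: one still image + each audio file in a folder -> one mp4 per audio file.
    import subprocess
    from pathlib import Path

    IMAGE = Path("cover.png")        # placeholder still image
    AUDIO_DIR = Path("audio_in")     # placeholder folder of audio files
    OUT_DIR = Path("video_out")
    OUT_DIR.mkdir(exist_ok=True)

    for audio in sorted(AUDIO_DIR.glob("*.*")):
        out = OUT_DIR / (audio.stem + ".mp4")
        subprocess.run([
            "ffmpeg", "-y",
            "-loop", "1", "-i", str(IMAGE),   # loop the single image as the video track
            "-i", str(audio),                 # the audio track
            "-c:v", "libx264", "-tune", "stillimage",
            "-c:a", "aac",
            "-pix_fmt", "yuv420p",
            "-shortest",                      # stop when the audio ends
            str(out),
        ], check=True)

If it has to run from inside a single ComfyUI workflow, the same command could be wrapped in a custom node, but that is a design choice rather than something built in.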
- If the image was generated in ComfyUI, the civitai image page should have a "Workflow: xx Nodes" box.

In the GitHub Q&A, the ComfyUI author had this to say about ComfyUI: "Why did you make this? I wanted to learn how Stable Diffusion worked in detail."

Press Enter, and it opens a command prompt.

Is there any way to unload the checkpoint? With CFG 1 it used to work.

This works for position and rotation.

Click on the green Code button at the top right of the page.

But once you get over that hump, you will be able to automate your workflow, create novel ways to use SD, and so on. I do that a lot.

I review the result and send the image back to inpainting.

If that doesn't give you the seed used to recreate the image, you need to find the original seed.

Demo 4 - FreeU.

Heya, I've been working on a few tutorials for ComfyUI over the past couple of weeks. If you are new to ComfyUI and want a good grounding in how to use it, this tutorial might help you out.

Just starting to tinker with ComfyUI.

Note that when you close the UIs, the last one you closed is the one that will pop up the next time you get on Comfy.

The simplest way, of course, is direct generation using a prompt. More of a quality of life thing.

I use ComfyUI Windows Portable.

I'm really new to Comfy and would like to show the change in the conditioning average between two prompts from 0.0 to 1.0 in 0.05 increments as an XY plot. I've found out how to do XY with the efficiency nodes, but I can't figure out how to run it with this average amount as the variable.
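On finding the original seed: ComfyUI normally embeds the generation data as JSON in the PNG's text chunks (typically a "prompt" key with the API-format graph, plus a "workflow" key with the full editor graph), as long as the site hasn't stripped the metadata. A rough sketch with Pillow; key and field names can vary between nodes, so treat it as an assumption to adapt:

    # Pull the embedded graph out of a ComfyUI PNG and print any seed inputs.
    import json
    from PIL import Image

    img = Image.open("example_output.png")      # placeholder filename
    raw = img.info.get("prompt")                # API-format graph written by SaveImage
    if raw is None:
        print("No ComfyUI metadata found (it may have been stripped).")
    else:
        graph = json.loads(raw)
        for node_id, node in graph.items():
            inputs = node.get("inputs", {})
            for key in ("seed", "noise_seed"):
                if key in inputs:
                    print(node_id, node.get("class_type", "?"), key, "=", inputs[key])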
The command window pops up and vanishes in mere seconds.

Hi Reddit! In October, we launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows.

ComfyUI Manager does not install. Found some solid recommendation to use ComfyUI Manager, so I got v2 from Civitai and followed the installation instructions for the portable version.

ComfyUI uses a signal handler: Ctrl-C is graceful! Ctrl-C frees up memory and deletes temporary files. Graceful by design, but inelegant.

I would use images if I could, but couldn't find a node for it.

When the tab drops down, click to the right of the URL to copy it.

Seed question: "Seed" and "Control after generate".

If a box is in red then it's missing; ComfyUI Manager will identify what is missing and download it for you.

If your end goal is generating pictures (e.g. cool dragons), Automatic1111 will work fine (until it doesn't).

Adds a setting to make moving nodes always snap to grid.

So since my secret primitive prompt composer is now no longer secret, it seems like a good time to upgrade to something more COMFY-like.

Generation using a prompt.

Promoting your own tutorial is encouraged, but do not post the same tutorial more than once every two days.

If you just want to see the size of an image, you can open it in a separate tab of your browser and look up top to find the resolution.

I found a few workflows that suit my needs, or most of them, on comfyworkflows and civitAI.

Ford vs Ferrari in anime style using AnimateDiff v3.

Demo 3 - Model and LoRA. It creates a 5 x 4 grid based on the selected models and LoRAs.

New Tutorial: How to rent 1-8x GPUs and install ComfyUI in the cloud (+Manager, custom nodes, models, etc).

It provides a range of features, including customizable render modes, dynamic node coloring, and versatile management tools.

What I found helpful was to have Auto1111 and Comfy share models and the like from a common folder.
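On sharing models between Auto1111 and Comfy: ComfyUI ships an extra_model_paths.yaml.example that you can rename and point at your A1111 folders, which is the usual route. If you would rather keep one common folder and link it in, here is a hedged sketch; every path below is a placeholder, and on Windows creating symlinks may require developer mode or an elevated prompt:

    # Link A1111's model folders into ComfyUI's models directory.
    import os
    from pathlib import Path

    A1111 = Path("D:/stable-diffusion-webui/models")   # assumed A1111 models dir
    COMFY = Path("D:/ComfyUI/models")                   # assumed ComfyUI models dir

    # (A1111 subfolder, ComfyUI subfolder) pairs
    pairs = [
        ("Stable-diffusion", "checkpoints"),
        ("Lora", "loras"),
        ("VAE", "vae"),
        ("ESRGAN", "upscale_models"),
    ]

    for src_name, dst_name in pairs:
        src = A1111 / src_name
        dst = COMFY / dst_name / "a1111_shared"        # link appears inside the Comfy folder
        if src.is_dir() and not dst.exists():
            os.symlink(src, dst, target_is_directory=True)
            print("linked", src, "->", dst)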
Return to the default folder and click on its path too, then remove it and type "cmd" instead.

The x_size and y_size inputs on Save Image Grid currently default to being selection widgets rather than input points.

The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore.

Just keep 32x32 in mind. Design whatever blueprint you like to make chunk-aligned; the size is irrelevant.

Try civitai.

Drop the image back into ComfyUI to load it, then change the seed to what was in the EXIF.

By being a modular program, ComfyUI allows everyone to make workflows to meet their own needs or to experiment on whatever they want.

Whether for individual use or team collaboration, our extensions aim to enhance productivity and readability.

I'll give it a shot.

Also, you can double-click on the grid and search for "DW" and you'll see it.

ComfyUI is pretty dope, to be honest. Just started using ComfyUI when I got to know about it in the recent SDXL news. To begin with, I have only 4GB of VRAM, which by today's standards is considered potato.

I've been working on an app that takes a "card-based" approach to prompt building. It integrates with A1111 and helps reduce repetitive copying and pasting by letting you save prompt fragments.

I tested every sampler with several different LoRAs (cyberrealistic_v33).

I created a free tool and custom models (to be released) to create custom workflows.

I've kind of gotten this to work with the "Text Load Line" node. However, the Grid Annotations node from ImagesGrid is fully compatible with the 'annotations' input on Easy Grids' Save Image Grid node, and may be helpful for labeling your grid.

I liken the different methods of generating AI art to the different types of keyboard synthesizers.

I've been googling around for a couple of hours and I haven't found a great solution for this.

So I'm seeing two spaces related to the seed.

A tip I learned recently: holding Shift+Ctrl while dragging snaps the object's pivot point to the collider behind it (changes depending on camera angle).

Plus a quick run-through of an example ControlNet workflow.

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend.

Here's everything you need to attempt to test Nightshade, including a test dataset of poisoned images for training or analysis, and code to visualize what Nightshade is doing to an image and test potential cleaning methods.

I share many results and many ask to share.

If you're interested in how Stable Diffusion actually works, ComfyUI will let you experiment to your heart's content (or until it overwhelms you).

ComfyUI or Automatic1111? Start with simple workflows. Then find example workflows.

I have OCD, so I would highly appreciate it if you could add a feature that lets nodes snap to a grid so I don't spend 25 minutes of my time making sure it's all perfectly aligned 😭😭

I've personally decided to step into the deep end with ComfyUI.

Cascade with restart sampling.

In order to recreate Auto1111 in ComfyUI, you need those encode++ nodes, but you also need the noise that ComfyUI generates to be made by the GPU (this is how Auto1111 makes noise), along with getting ComfyUI to give each latent its own seed instead of splitting a single seed across the batch.
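To illustrate that last point about seeds, here is a conceptual PyTorch sketch only; it is not ComfyUI's or A1111's actual sampler code. A1111-style behaviour gives every image in a batch its own seed, while seeding one generator once produces a single noise draw that is split across the whole batch, so individual images can't be reproduced on their own:

    import torch

    shape = (4, 64, 64)    # illustrative latent shape (channels, height, width)
    base_seed = 1234
    batch_size = 4

    # Per-image seeds: image i is reproducible from (base_seed + i) alone.
    per_image = torch.stack([
        torch.randn(shape, generator=torch.Generator().manual_seed(base_seed + i))
        for i in range(batch_size)
    ])

    # One shared seed: a single draw covers the whole batch.
    g = torch.Generator().manual_seed(base_seed)
    whole_batch = torch.randn((batch_size, *shape), generator=g)

    print(per_image.shape, whole_batch.shape)   # both torch.Size([4, 4, 64, 64])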
There is an addon that places a restart button in the UI, but it requires updating to a different branch of Comfy.

I have a wide range of tutorials with both basic and advanced workflows.

ControlNet v1.1: A complete guide - Stable Diffusion Art (stable-diffusion-art.com). In theory, without using a preprocessor, we can use another image editor.

At the moment, A1111 has more plugins and extensions, and handles inpaint/outpaint better.

ComfyUI forces you to learn about the underlying pipeline, which can be intimidating and confusing at first. It's really not; what you actually need is to be open to learning a different way of creating images and to have easy access to useful tutorials for the various aspects of ComfyUI.

Tried it: it is pretty low quality and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You also cannot go higher than 512, up to 768 resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

Hello, I recently moved from Automatic1111 to ComfyUI, and so far it's been amazing.

Can someone guide me to the best all-in-one workflow that includes the base model, refiner model, hi-res fix, and one LoRA? It'll be perfect if it includes upscale too (though I can upscale in an extra step in the Extras tab of Automatic1111).

You can look at the EXIF data to get the seed used.

The amazing MeshGraphormer understands the correct depth map for hands.

The UE nodes (that allow you to broadcast data to matching inputs, avoiding all sorts of spaghetti) have a small update, thanks to a great suggestion from LuluViBritannia on GitHub. Inputs that are being connected by UE now have a subtle highlighting effect. The top three inputs are connected by UE; the latent isn't connected at all.

Then use SD upscale to split it into tiles and denoise each one using your parameters; that way you will get a grid with your images. Maybe it's also possible to save each tile separately after it's been denoised.

It depends: if the image was generated in ComfyUI and the metadata is intact (some users / websites remove the metadata), you can just drag the image into your ComfyUI window.

The output nodes for the XY grid are included in the same workflow. It creates a 5 x 5 grid based on the KSampler cfg and steps.

First, we'll discuss a relatively simple scenario: using ComfyUI to generate an app logo. But if you have experience using Midjourney, you might notice that logos generated using ComfyUI are not as attractive as those generated using Midjourney.

We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc.

This is my first post on Reddit, please do not judge too strictly. I uploaded the workflow on GitHub.

Thank you, now I understand how it all works; also, a minute later he literally explained how to get the workflows.

Creating Better Animation with AutoMasking, ControlNets and AnimateDiff in ComfyUI.

Color grid T2I-Adapter preprocessor shrinks the reference image to 64 times smaller and then expands it back to the original size.

I have a text file full of prompts. I want to load it into ComfyUI, push a button, and come back in several hours to a hard drive full of images. Batch up prompts and execute them sequentially.
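For the "text file full of prompts, push a button, come back to a full hard drive" idea above: besides Text Load Line style nodes, you can drive ComfyUI from outside through its HTTP API. A rough sketch; it assumes ComfyUI is running on the default port 8188, that the workflow was exported with "Save (API Format)" to workflow_api.json, and that node id "6" happens to be the positive-prompt CLIP Text Encode node in that file (check the ids in your own export):

    # Queue one generation per line of prompts.txt via ComfyUI's /prompt endpoint.
    import json
    import random
    import urllib.request

    with open("workflow_api.json", "r", encoding="utf-8") as f:
        graph = json.load(f)

    with open("prompts.txt", "r", encoding="utf-8") as f:
        prompts = [line.strip() for line in f if line.strip()]

    for text in prompts:
        graph["6"]["inputs"]["text"] = text          # "6" is an assumed node id
        for node in graph.values():                  # re-randomize any seed inputs
            if "seed" in node.get("inputs", {}):
                node["inputs"]["seed"] = random.randint(0, 2**32 - 1)
        payload = json.dumps({"prompt": graph}).encode("utf-8")
        req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            print(text[:40], "->", resp.status)

Each request just lands in the queue, so you can fire them all off and let ComfyUI work through them overnight.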
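And for the color grid T2I-Adapter preprocessor mentioned above, the same local-average effect can be approximated in a few lines of Pillow; this is a sketch of the idea, not the actual preprocessor code:

    from PIL import Image

    img = Image.open("reference.png")    # placeholder input
    small = img.resize((max(1, img.width // 64), max(1, img.height // 64)), Image.BILINEAR)
    grid = small.resize(img.size, Image.NEAREST)   # blow it back up without smoothing
    grid.save("color_grid.png")

Shrinking averages each 64x64 region down to roughly one pixel, and the nearest-neighbour enlargement turns those pixels back into flat colored squares, which is the grid-like patch of local average colors described earlier.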
Not exactly mile-high club, but an actual club.

Alternatively, it could involve splitting the latent space and applying different models, LoRAs, prompts, cfg, etc., to each area for sampling. This demo uses the XY List method.

Everything ComfyUI needs to run with AMD ROCm is included in that pip package. Instead of polluting your system with 10GB+ of redundant AMD packages, please just do steps 3 through 5.

[Testing] "Better" loader lists: adds custom LoRA and checkpoint loader nodes; these can show preview images, just place a png or jpg next to the file and it'll display in the list on hover.

I need it to be executed from a single ComfyUI workflow, i.e. I cannot export to another piece of software and have it stitch the audio to the video.

Midjourney is like those cheap Casio keyboards that only have three octaves and some electronic drum rhythms.

I've got a workflow which draws a grid of images by combining latents with the "Multi Latent Composite" node. The XY grid is output using a second workflow.

I don't like all the memes; I think they influence people into thinking ComfyUI is an insurmountable learning curve.

With a fine grid and zoomed out, you still get snapping, but not necessarily to the grid line you want. Snap size can be adjusted under the Edit toolbar as Snap Settings. Hold Ctrl while dragging an object (Command on OSX).

TIL: I recently learned that Civitai saves workflows too.

The Ultimate AI Upscaler (ComfyUI Workflow), workflow included. For a dozen days, I've been working on a simple but efficient workflow for upscaling.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model. For example, if you start with a 512x512 empty latent image, then apply a 4x model and "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Detailer from the ImpactPack makes multiple fix options.

Once you install it, you can load up a workflow in an otherwise fresh install of ComfyUI, click on Manager, and Install Missing Custom Nodes. Then click restart server, refresh your browser, and in all likelihood things will work.

Yes, you can actually run two instances at once by having a second tab of ComfyUI open and loading the second workflow on the second tab. So you can, say, queue your first task in the first workflow, then switch.

Look for Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors. If you right-click on the grid: Add Node > ControlNet Preprocessors > Faces and Poses.

You upload an image -> unsample -> KSampler Advanced -> the same recreation of the original image. After that you can use the same latent and tweak the start and end steps to manipulate it.

The "Attention Couple" node lets you apply a different prompt to different parts of the image by computing the cross-attentions for each prompt, which corresponds to an image segment. It works pretty well in my tests.

Questions about ComfyUI from a newbie. I'm losing my mind over here, so my last resort is getting help from here.

I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc., and spit it out in some shape or form. There are apps and nodes which can read in generation data, but they fail for complex ComfyUI node setups.

The first space I can plug -1 into and it randomizes; it displays the seed for the current image, mostly what I would expect. I would assume setting "control after generate" to fixed keeps the same seed.

If you deleted something under the python310 folder, you probably need to reinstall Python. I suggest you remove ComfyUI and all Python versions and start anew.

STEP 1: Open the venv folder, then click on its path. Copy that path (we'll need it later). Then navigate, in the command window on your computer, to the ComfyUI/custom_nodes folder and enter the command. In the case of ComfyUI though, there is none, so create it.

It will then request to load 3 models (SDXL, SDXL CLIP model, AutoencoderKL) every single time.

Set snap to grid and set the first X and Y values to 32 (a chunk is 32x32 pixels); now every movement while the blueprint is active will jump from chunk to chunk on the map.

I created some custom nodes that allow you to use the CLIPSeg model inside ComfyUI to dynamically mask areas of an image based on a text prompt. This might be useful, for example, in batch processing with inpainting so you don't have to manually mask every image.
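Since the CLIPSeg nodes come up a few times here, this is what the underlying model does, sketched with the Hugging Face transformers wrapper; the checkpoint name is the public CIDAS one, and the resize and threshold handling is an assumption rather than what the ComfyUI plugin does internally:

    # Text-prompted mask: "the face" -> grayscale heatmap usable as an inpaint mask.
    import torch
    from PIL import Image
    from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

    processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
    model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

    image = Image.open("portrait.png").convert("RGB")   # placeholder input
    inputs = processor(text=["the face"], images=[image], padding=True, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits                 # low-resolution relevance map

    mask = torch.sigmoid(logits).squeeze()              # values in [0, 1]
    mask_img = Image.fromarray((mask.numpy() * 255).astype("uint8")).resize(image.size)
    mask_img.save("face_mask.png")

The plugin mentioned above wraps the same idea as graph nodes, so the mask can feed an inpainting workflow directly instead of going through files.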