
Best ComfyUI commands (Reddit tips)

Best comfyui commands reddit. I have 2 instances, 1 for each graphics card. ComfyBox is nice! Thanks for asking this question, was unsure myself about how to do this :) Welcome to the unofficial ComfyUI subreddit. 1:8188. resize down to what you want. šŸŒŸ Features : - Seamlessly integrate the SuperPrompter node into your ComfyUI workflows. - Or specify a different location below". There are a few ways, some of them using the command line but I recommend if you are not used to Git download the Git official software "GitHub Desktop" and then on the File menu add a local repository if you already cloned the A1111 repo, or clone repository and paste the link of the A1111 repo, after that you should see something similar to the next image , click the button marked with red Itā€™s convenient and allows you to share the workflow into a site like comfyworkflows in an instant, you do need to make an account on the cw site to get your id to input into the field when you click the share button, it then gives you multiple options to pick the resulting image you want to upload in the event you have a bunch of preview Here's the workflow: workflow. 3. Working on a big tutorial that covers getting comfy running in the cloud on some awesome gpus as unfortunately Mac's are still struggling with Vram issues so we're There seems to to be way more SDXL variants and although many if not all seem to work with A1111 most do not work with comfyui. I only have 4gb vram so I'm just trying get my settings optimized. 4) Then you can cut out face and redo-it with IP Adapter. You either upscale in pixel space first and then do a low denoise 2nd pass or you upscale in latent space and do a high denoise 2nd pass. Once I found it, running the second command in that directory worked. I also automated the entire process so my friends could use it too. I'm not very experienced with Comfyui so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner Welcome to the unofficial ComfyUI subreddit. com. From my experience, sgm_uniform outperforms karras at low Best YT or Blog on using ComfyUI for creating Comic books? : r/comfyui. 12. In Task Manager you need to use the dropdown on the main graph to make it display CUDA usage instead, the usage doesn't appear on the default 3D graph. EDIT: confirmed, just tried it. ai. The final display to the player, with a text input box at the bottom for the player to input commands. 1-0. If you have the reactor node and it is working, you already have it installed. I have heard the large ones (typically 5 to 6gb each) should work but is there a source with a more reasonable file size. x. Name it Activate. Please share your tips, tricks, and workflows for using this software to create your AI artā€¦ Automatically install ComfyUI dependencies. If you want full authority about data and what is installed I would go for runpod. py there is a API PromptGenerator node, It doesnt have vision capability, if you want to add your nodes to my nodes you can send me a pull request. And above all, BE NICE. Are there any major differences between the three? And related, is MagicAnimate any good? 1. Use IP Adapter for face. Lacks the extension and other functionalities, but is amazing if all you need to do is generate images. 2 options here. you define the complexity of what you build. Unfortunately, if a specific custom node is using VRAM, there is no way for ComfyUI to respond on its own until that node releases the use of the VRAM. ā€¢ 2 min. 
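The very first tip above, one ComfyUI instance per graphics card, is easiest to see as two launch scripts. This is a hedged sketch for the portable Windows build; `--cuda-device` and `--port` are the flags mentioned in these comments, and the paths assume the standard portable folder layout.

```
:: run_gpu0.bat  (first instance, card 0, default port 8188)
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --cuda-device 0 --port 8188

:: run_gpu1.bat  (second instance, card 1, its own port so the two don't collide)
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --cuda-device 1 --port 8189
```

Each script is started from its own terminal; the second instance is then reachable at 127.0.0.1:8189 while the first keeps the usual 127.0.0.1:8188.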
" Thereā€™s an experimental ā€œunload modelsā€ option in Manager. Join the community and come discuss games like Codenames, Wingspan, Brass, and all your other favorite games! There is a node for save it as webp video. He makes really good tutorials on ComfyUI and IP Adapters specifically. Step one: Hook up IPAdapter x2. json. You have to run two instances of it and use the --port argument to set a different port. Decoding the latent 2. ComfyUI can only release the VRAM that it manages. bat or Comfyrun. The little grey dot on the upper left of the various nodes will minimize a node if clicked. r/comfyui. . It should be 127. ComfyUi, in its startup, recognizes the A100 and its VRAM. Then drag the requirements_win. Give yourself 256GB of NVMe storage in your launch options. My guess -- and it's purely a guess -- is that ComfyUI wasn't using the best cross-attention optimization. New Features: Anthropic API Support: Harness the power of Anthropic's advanced language models like Claude-3 (Opus, Sonnet, and Haiku) to generate high-quality prompts and image descriptions. Run the command nvidia-smi in your command line and check the startup messages in the ComfyUI console window that it opens when you start it. Make sure there is a space after that. I'm a basic user for now but I want the deep dive. It proceeded to install the minicoda3 directory in my /users/ (my name)/miniconda3. A lot of people are just discovering this technology, and want to show off what they created. His tutorial/demo: look into style align batch align. The downside: it is very slow and also you will need a third party software to convert it to mp4. Also, using --disable-cuda-malloc didn't really help, it's executing the prompt, but it takes a very long time to even generate one step. and nothing gets close to comfyui here. - Press ENTER to confirm the location. Users don't need to know the tuning parameters of various AI models. We know A1111 was using xformers, but weren't told, as far as i noticed, what ComfyUI was using. The latest ComfyUI update introduces the "Align Your Steps" feature, based on a groundbreaking NVIDIA paper that takes Stable Diffusion generations to the next level. everything ai is changing left and right, so a flexible approach is the best imho. Thank you :). Please keep posted images SFW. So OP, please upload the PNG to civitai. Dragging it will copy its path in the command prompt. @ComfyFunc(category="image") def mask_image(image: ImageTensor, mask: MaskTensor) -> ImageTensor: return image * mask. com/thecooltechguy/ComfyUI-ComfyRun. Latent quality is better but the final image deviates significantly from the initial generation. Until ComfyUI/issues/1502 is resolved and/or ComfyUI/pull/1503 is pulled in, then know that you're benefiting from hundreds of millions of saved cycles each run. Are there command line args equivalent to "--precision full --no-half" in ComfyUI? I'm getting the error: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' and I saw a solution in AUTO1111 was adding those command line args, but I can't seem to find anything equivalent in ComfyUI. There you can set the quality to 100 and loseless: true, also there is an option where you can choose how it's going to be processed, set it to slow. no manual setup needed! import any online workflow into your local ComfyUI, & we'll auto-setup all necessary custom nodes & model files. Automatic1111 for multiple workflows and extensions. Launch and run workflows from the command line. 
- Generate text with various control parameters: - `prompt`: Provide a starting prompt for the text generation. Yes - create a bat file like this. I've submitted a bug to both ComfyUI and Fizzledorf as I'm not sure which side will need to correct it. VFX artists are also typically very familiar with node based UIs as they are very common in that space. Otherwise, please change the flare to "Workflow not included" What worked for me was to add a simple command line argument to the file: `--listen 0. py --windows-standalone-build --normalvram --listen 0. you can add it to the command in run_nvidia_gpu. That's definetly not a 'you' problem. looking at efficiency nodes - simpleEval, its just a matter of time before someone starts writing turing complete programs in ComfyUI :-) The WAS suite is really amazing and indispensible IMO especially the text concatenation stuff for starters, and the wiki has other examples of photoshop like stuff. Check out hyperstack. You just have to annotate your function so the decorator can inspect it to auto-create the ComfyUI node definition. So even with the same seed, you get different noise. Here's a list of example workflows in the official ComfyUI repo. Just enter the local IP address into the address bar of your preferred browser. I tried torch with CUDA 11. Workflows are much more easily reproducible and versionable. ā€¢ 3 mo. For general upscaling of photos go: remacri 4x upscale. Please share your tips, tricks, and workflows for using thisā€¦. For anyone finding this and needing help, it is likely that you will also need to install 'setuptools' as per here: GPT suggests a batch mode for combining videos is the way forwards. After having issues from the last update I realized my args are just thrown together from random thread suggestions and troubleshooting but I really have no full understanding of what all the possible args are and what they do. once you get comfy with comfy you don't want to go back. Fooocus / Fooocus-MRE / RuinedFooocus - quick image generation, and simple and easy to use GUI (Based on the Comfy backend). 4 alpha 0. i think it needs you to run your exsiting comfyui install, but add the '--enable-cors-header'. you get really consistent results while doing batch processing. Inside nodes/suggest. Can someone guide me to the best all-in-one workflow that includes base model, refiner model, hi-res fix, and one LORA. I have found the workflows by Searge to be extremely useful. I believe A1111 uses the GPU to generate a random number to generate the noise, whereas comfyui uses the CPU. btw, i have nodes called ComfyUI_VLM_nodes, i have similar nodes there too. despite the complex look, it's actually very Ipadaptor for all. Install and manage custom nodes via cm-cli (ComfyUI-Manager as a cli) Cross-platform compatibility (Windows, Linux, Mac) Download and install models into the right directory. But when I tried to run this line: Hey I'm new to using ComfyUI and was wondering if there are command line arguments to add to launch file like there is in Automatic1111. Right now my workflows are either a tangled mess of spaghetti I have to constantly zoom in and out of to change a single parameter somewhere, or I spent more time tidying up and putting all relevant things neatly close to each other and then I can't easily rearrange things anymore except by scrolling even more. 11 votes, 11 comments. exe -s ComfyUI\main. Welcome to the unofficial ComfyUI subreddit. 
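Putting the `--listen` tip above together with the portable launcher, an edited run_nvidia_gpu.bat looks roughly like this (a sketch; your existing VRAM and other flags may differ):

```
:: run_nvidia_gpu.bat with LAN access enabled, as described in the comments above.
:: --listen 0.0.0.0 makes ComfyUI accept connections from other machines on the network;
:: allow port 8188 through the firewall and browse to http://<machine-ip>:8188 from another device.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0
pause
```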
At some point, probably with the ninja command, an nvdiffrast directory got created. 2. ā˜ŗļøšŸ™ŒšŸ¼šŸ™ŒšŸ¼. Well, sometimes we really do need things explained like we're five. Check out his channel and show him some love by subscribing. pacchithewizard. Step three: Feed your source into the compositional and your style into the style. A lot. Please share your tips, tricks, and workflows for using this software to create your AI art. Encoding it and doing a tiny refining step to sharpen up the edges. By being a modular program, ComfyUI allows everyone to make workflows to meet their own needs or to experiment on whatever they want. 0. The best thing about ComfyUI, for someone who is not a savant, is that you can literally drag a png produced by someone else onto your own ComfyUI screen and it will instantly replicate the entire workflow used to produce that image, which you can then customize and save as a json. GFPGAN. Users don't need to understand where to download models. Press go šŸ˜‰. py file is. In the github Q&A, the comfyUI author had this to say about ComfyUI: QA Why did you make this? I wanted to learn how Stable Diffusion worked in detail. To move multiple nodes at once, select them and hold down SHIFT before moving. Nodes in ComfyUI represent specific Stable Diffusion functions. \python_embeded\python. OP probably thinks that comfyUI has the workflow included with the PNG, and it does. Wish there was some #hashtag system or I made a quick review of the new IPAdapter Plus v2. . This . When the tab drops down, click to the right of the url to copy it. System Specs: - 16 Core Ryzen / Dragon Range (MINISFORUM BD790i) - 64 Gigs / Ram. 2 Share. If you're using an SDXL model definitely use the add details Lora. Rename this file to extra_model_paths. Belittling their efforts will get you banned. Somewhat. Thanks in advanced for any information. upload any workflow to make it instantly runnable by anyone (locally or online). Best Comfyui Workflows, Ideas, and Nodes/Settings. Add a Comment. As title, which animateanyone implementstion gives the best results? I see thereā€™s AnimateAnyone-Evolved, Moore-AnimateAnyone, and a thinkdiffusion fork of Evolved. I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load. seen people say comfyui is better than A1111, and gave better results, so wanted to give it a try, but cant find a good guide or info on how to install it on an AMD GPU, with also conflicting resources, like original comfyui github page says you need to install directml and then somehow run it if you already have A1111, while other places say you need miniconda/anaconda to run it, but just can If you need to share workflows developed in ComfyUI with other users, ComfyFlowApp can significantly lower the barrier for others to use your workflows: Users don't need to understand the principles of AI generation models. Your best bet is to SSH in with VSCode and use commands like wget to pull models directly from Civit. I pay . All the story text output from module 1 will be summarized here and stored in the out folder of ComfyUI, with the file name being the date in format 'date. Comfy speed comparison. At the moment I am on RunDiff using pay as you go, but of course when youā€™re learning and constantly looking at the clock itā€™s not helping. bat and what Welcome to the unofficial ComfyUI subreddit. But reddit will strip it away. FaceDetailer, select skin instead of face/body/hands. 1. 
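Several comments on this page describe the same manual custom-node install: clone the repo into ComfyUI/custom_nodes, then install its requirements with the Python that ComfyUI actually uses. A sketch for the portable Windows build; the repo URL is a placeholder and the relative path assumes the standard portable layout:

```
cd ComfyUI\custom_nodes
git clone https://github.com/<author>/<custom-node-repo>.git
cd <custom-node-repo>

:: Use the bundled interpreter so the packages land where ComfyUI looks for them
..\..\..\python_embeded\python.exe -m pip install -r requirements.txt
```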
I also tried pointing it towards the correct graphics card with --cuda-device DEVICE_ID , but that didn't help. com and then post a link back here if you are willing to share it. To disable/mute a node (or group of nodes) select them and press CTRL + m. yaml and edit it with your favorite text editor. 11 conda activate comfyui # PyTorch install command from site # in comfyui folder pip install -r requirements. 43 upvotes Ā· 15 comments. what is your favourite setting ? I personally use dpmpp_2m_sde_gpu with either the sgm_uniform or karras scheduler. There is an open issue in ComfyUI, where ComfyUI hangs when re-executing complex workflows. Much more streamlined! I dont know what sampler and scheduler would be best for creating Anime and Manga Art (+adding LoRa) Some people say Euler is good some people use dmpp with karras. Check the version of the python that came with comfyui, and try to install the correct version of insightface. Also there's a skin texture node (forgot the name) it works really well for face details. Best Laptops for ComfyUI (video) I am by no means technical So I was hoping someone could post a list of some laptops that can be bought in the UK that will run the latest models of ComfyUI or SDXL. A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX3060) Welcome to the unofficial ComfyUI subreddit. 150 workflow examples of things I created with ComfyUI and ai models from Civitai Thanks for this. Then navigate, in the command window on your computer, to the ComfyUI/custom_nodes folder and enter the command by typing git clone The #1 Reddit source for news, information, and discussion about modern board games and board game culture. 6, updated ComfyUI and Comfy manager, ran 'pip install segment-anything scikit-image piexif transformers opencv-python-headless GitPython' to install dependencies, removed ComfyUI-Impact-Pack folder and reinstalled manually with git clone. I know there is the ComfyAnonymous workflow but it's lacking. šŸ“·. It says by default masterpiece best quality girl, how does CLIP interprets best quality as 1 concept rather than 2? That's not really how it works. 0` The final line in the run_nvidia_gpu. Thanks tons! That's the one I'm referring Comfyui is much better suited for studio use than other GUIs available now. Scaling and GPUs can get overwhelmingly expensive so you'll have to add additional safeguards. 8 and 12. The current code had a bug and you need to add a line to load insightface at the top of the ipadapter dot py file. Might be a good tool for this but I haven't played w/ it enough yet. bat file should be in the comfy main folder where the main. More info here, including how to change a This model is a T5 77M parameter (small and fast) custom trained on prompt expansion dataset. Running it through an image upscale on bilinear and 3. Any suggestions would be appreciated. Help me make it better! Sort by: nomadoor. txt More looking at support; I have built tons of systems with both platforms and ran both has my primary for years at one point or anotherso I am a expert command-line rangerJust have not run Comfy on Ubuntu yet. Initially you can enter: Start Game. Just choose cheap and work your way up if you find the memory or speed is insufficient. bat or anything. What has worked best for me has been 1. 
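The extra_model_paths.yaml file that comes up in these comments is how you point ComfyUI at model folders outside its own directory, for example an existing A1111 install, so you don't keep two copies of every checkpoint. A minimal sketch based on the layout of the bundled example file; all paths here are placeholders to adjust to your setup:

```
# extra_model_paths.yaml (sketch) - share an existing A1111 model library with ComfyUI
a111:
    base_path: C:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
    embeddings: embeddings
```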
@ComfyFunc(category="Image") def mask_image(image: ImageTensor, mask: MaskTensor) -> ImageTensor: """Applies a mask to an image. I have been trying for 4 hours now and all out of ideas, Windows 11, python 3. I've done my best to consolidate my learnings on IPAdapter. OpenAI API Integration: Leverage the cutting-edge capabilities of OpenAI's GPT-4 and GPT-3. Here are approx. self, images, frame_rate: int, loop_count: int, filename_prefix="AnimateDiff", format="image/gif", Latent Vision just released a ComfyUI tutorial on Youtube. 0` Additionally, I've added some firewall rules for TCP/UDP for Port 8188. txt file in the command prompt. There are a lot of options in regards to this, such as iterative upscale; in my experience, all of them are too intensive for bad GPUs or they are too Best AnimateAnyone implementation. After you get workflow - you will do it in 2-click. I call it 'The Ultimate ComfyUI Workflow', easily switch from Txt2Img to Img2Img, built-in Refiner, LoRA selector, Upscaler & Sharpener. [11]. 69 cent (excluding a few cent here and there for storage) for A40 48 gb. I see that ComfyUI is a better way to create. Follow the link to the Plush for ComfyUI Github page if you're not already here. You can prefix the start command with CUDA_VISIBLE_DEVICES=0 to force comfyui to use that specifici card. Uses less VRAM than A1111. I really liked the tutorial below: Unveiling the Game-Changing ComfyUI Update. r/comfyui: Welcome to the unofficial ComfyUI subreddit. SD interprets the whole prompt as 1 concept and the closer tokens are together the more they will influence each other. Ultimate Guide to IPAdapter. 5 models, including the vision-enabled variants, for enhanced in the default does not use commas. If you don't care and just want to run workflows go for one of the comfyflow, openart whatever platforms and abuse their low prices until they run out of funding. Giving 'NoneType' object has no attribute 'copy' errors. Maybe the source image you're using is of low quality. Thanks for sharing, I did not know that site before. ComfyUI is also trivial to extend with custom nodes. - RTX RTX 3090. Reply. com or https://imgur. it's the perfect tool to explore generative ai. To drag select multiple nodes, hold down CTRL and drag. sharpen (radius 1 sigma 0. def combine_video (. Both these are of similar speed. It is the best quality I've found so far. (if youā€™re on Windows; otherwise, I assume you should grab the other file, requirements. open a command prompt, and type this: pip install -r. There is a need to train MeshGraphormer with a dataset that includes images of melted hands. Looking forward to seeing your workflow. 10. return image * mask. So is there any suggestion to where to start, any tips or resource for me. I have no affiliation to the channel, just thought that the content was good. You can use Ubuntu. As pointed out in the HandRefiner paper, MeshGraphormer is designed to generate 3D meshes from ā€œcorrectly shapedā€ hands, so itā€™s only natural that it canā€™t handle ā€œmelted handsā€ generated by AI. txt). One of the best parts about ComfyUI is how easy it is to download and swap between workflows. Welcome to ComfyUI. if you want to keep it seperate and build your own nodes that's good too. ago. It'll be perfect if it includes upscale too (though I can upscale it in an extra step in the extras tap of automatic1111). So none of your choices would work for me. Step two: Set one to compositional and one to style weight. 
And now you have a new fully-functional operator in the "image" category. Usually results are perfect, better then you can do manually, but on some some images models can't see what is background without some help. Currently on a 48gb vram gig A40 GPU. Check the number of denoising steps in your face detailer. bat looks like this: `. The video was pretty interesting, beyond the A1111 vs. After a week of enduring extremely long image generation times, I decided to set up my own ComfyUI server on Google Cloud Platform (GCP). Would love feedback on whether this was helpful, and as usual, any feedback on how I can improve the knowledge and in particular how I explain it! I've also started a weekly 2 minute tutorial series, so if there is anything you want covered that I can fit In my case, it turned out that the Manager Node itself needed to be updated using the command under the Troubleshooting section of the Manager github page: Go to the ComfyUI-Manager directory and execute the command: I've been struggling to learn ComfyUI because I use an ARM based Mac, and the experience has been painfully slow. The @ComfyFunc decorator inspects your function's annotations to compose the appropriate node definition for ComfyUI. Personally I use conda so my setup looks something like conda create -n comfyui -c conda-forge python=3. 19K subscribers in the comfyui community. Dec 19, 2023 Ā· In the standalone windows build you can find this file in the ComfyUI directory. """. This feature delivers significant quality improvements in half the number of steps, making your image generation process faster and Welcome to the unofficial ComfyUI subreddit. and don't get scared by the noodle forests you see on some screenshots. I'd be interested in that also, I'll give it a go and let you know sometime soon. Then you generate an accessible unique Comfy URL to connect a websocket to and pass prompts via the API. Commas are just extra tokens. - Press CTRL-C to abort the installation. When you run Automatic1111 or ComfyUI, VSCode will tunnel the open port to your local host, so it's super easy to access. To duplicate parts of a workflow from one Heads up: Batch Prompt Schedule does not work with the python API templates provided by ComfyUI github. Game Screen. Click on the green Code button at the top right of the page. But I never used a node based system and also I want to understand the basics of ComfyUI. Loop the conditioning from your ClipTextEncode prompt, through ControlNetApply, and into your KSampler (or whereever it's going next). You probably want to set it very low (0. ComfyUI - great for complex workflows. 3) to minimize the changes it makes ot the image, but tweak it to ensure is cleaning up the faceswap artifacts/low res. Are there any guides that explain all the possible COMMANDLINE_ARGS that could be set in the webui-user. Thanks for sharing, that being said I wish there was a better sorting for the workflows on comfyworkflows. txt'. With all due respect, your pricing is fucking expensive. cloud. Hi, I'm looking for input and suggestions on how I can improve my output image results using tips and tricks as well as various workflow setups. Your best bet is to set up an external queue system and spin up ComfyUI instances in the cloud when requests are added to the external queue. 
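The decorator example is scattered across the surrounding text, so here it is reassembled in one piece. The decorator, the argument types, and the body all appear verbatim in the comments above; only the import line is my assumption, since the exact module name depends on which node-annotation package the commenter installed.

```
from comfy_annotations import ComfyFunc, ImageTensor, MaskTensor  # assumed import path

@ComfyFunc(category="image")
def mask_image(image: ImageTensor, mask: MaskTensor) -> ImageTensor:
    """Applies a mask to an image."""
    return image * mask
```

Registered this way, the function shows up as a node in the "image" category without a hand-written node-class definition, which is the point the commenter is making about the decorator inspecting the annotations.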
- `max_new_tokens`: Set the maximum number of new Prompt -> possibly using your flow you showed off in this vid -> 3d model -> extract depth into 8 depth frames rotating 360 degrees -> control net with another prompt -> 8 rendered frames -> retouch for consistency and other guidance -> thru your flow again -> should pop out a model with detailed structure all around. The NerdyRodent Youtube dude just posted a video yesterday about a Visual Style Prompting node. To get started, download our ComfyUI extension: https://github. 18K subscribers in the comfyui community But essentially, when in termin, navigate the the custom nodes directory and run the install command that I think is shared on the comfyUI manager GitHub page. Recommended Workflows. Using created mask to inpaint around it you can keep original object, but even without mask it could be done (with object distortion): 1. Then restart comfy UI. Miniconda3 will now be installed into this location: /Users/ (my name)/miniconda3. bat. we bd pf ec on vg ra rv yz cv
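Finally, one commenter's conda-based setup is split into fragments throughout this page; stitched back together it reads roughly like this (the PyTorch step is deliberately left generic, since the exact command comes from pytorch.org for your CUDA version):

```
conda create -n comfyui -c conda-forge python=3.11
conda activate comfyui
# install PyTorch with the command from pytorch.org for your GPU/CUDA version
# then, inside the cloned ComfyUI folder:
pip install -r requirements.txt
```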