u/wolowhatever we set 5 as the default, but it really depends on the image and image style tbh - I tend to find that most images work well around Freedom of 3.

And at the end of it, I have a latent upscale step that I can't for the life of me figure out. These comparisons are done using ComfyUI with default node settings and fixed seeds. This will allow detail to be built in during the upscale. I then use a tiled ControlNet and Ultimate Upscale to upscale by 3-4x, resulting in up to 6Kx6K images that are quite crisp.

In A1111, I employed a resolution of 1280x1920 (with HiRes fix), generating 10-20 images per prompt.

There is a face detailer node, and there are also "face detailer" workflows for faces specifically. But I probably wouldn't upscale by 4x at all if fidelity is important.

I created this workflow to do just that. Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale. It's high quality, and it's easy to control the amount of detail added using control scale and restore CFG, but it slows down at higher scales faster than Ultimate SD Upscale does.

Simply add LORAs into your workflow: https://civitai.com/search/models?baseModel=SDXL%201.0&modelType=LORA&sortBy=models_v8&query=details

- latent upscale looks much more detailed, but gets rid of the detail of the original image.

I've played around with different upscale models in both applications, as well as the settings. But it's weird. Thanks!
Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X upscale model). I wanted to know what difference they make, and they do! I have a custom image resizer that ensures the input image matches the output dimensions.

I've so far achieved this with the Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model. Upscale and then fix will work better here.

I solved that by using only 1 step and adding multiple iterative upscale nodes.

Latent upscale is different from pixel upscale. Try a VAEDecode immediately after the latent upscale to see what I mean. Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution. The resolution is okay, but if possible I would like to get something better.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). Instead, I use Tiled KSampler with 0.5 noise. And when purely upscaling, the best upscaler is called LDSR. Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out. It depends on how large the face in your original composition is.

PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?
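The "multiple iterative upscale nodes" idea above boils down to a size schedule: instead of one big jump, grow the image by a modest factor per pass. A minimal sketch of that schedule (the 1.5x factor and the function name are illustrative assumptions, not from the thread):

```python
def upscale_schedule(start: int, target: int, factor: float = 1.5) -> list:
    """Edge sizes for an iterative upscale: grow by `factor` per pass, capped at `target`."""
    sizes = [start]
    while sizes[-1] < target:
        sizes.append(min(round(sizes[-1] * factor), target))
    return sizes

print(upscale_schedule(512, 2048))  # [512, 768, 1152, 1728, 2048]
```

Each intermediate size would be one upscale node plus a low-denoise sampling pass, which is why several small steps hold together better than a single 4x jump.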
Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. This is just a simple node build off what's given and some of the newer nodes that have come out. The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely. If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that. One does an image upscale and the other a latent upscale. Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. A step-by-step guide to mastering image quality. Hope someone can advise.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. It works more like DLSS, tile by tile, and faster than the iterative one. Latent quality is better, but the final image deviates significantly from the initial generation. This.

Usually I use two of my workflows: "Latent upscale" and then denoising 0.2 and resampling faces 0.1-0.2. If it's a close-up, then fix the face first. This came after borrowing many ideas and learning ComfyUI.

Ultimate SD Upscale is also a node; if you don't have enough VRAM, it tiles the image so that you don't run out of memory. I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts. That's because of the model upscale.
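The tile-by-tile behaviour mentioned above (DLSS-style, and how Ultimate SD Upscale avoids running out of VRAM) comes down to cutting the image into overlapping crops and processing them one at a time. A rough sketch of the tiling geometry only, with illustrative tile and overlap sizes; the function name is mine, not from any node:

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Overlapping (x0, y0, x1, y1) crop boxes covering a width x height image.

    The overlap gives neighbouring tiles shared context, which is what
    lets a tiled upscaler blend away seams between tiles."""
    step = tile - overlap
    boxes = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            boxes.append((x, y, min(x + tile, width), min(y + tile, height)))
    return boxes

boxes = tile_boxes(1024, 1024)
print(len(boxes))  # 9 tiles at these settings
```

Peak memory then scales with the tile size rather than the full image, which is the whole point; the visible-seams complaint later in the thread is what happens when the overlap (or the blend across it) is too small.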
Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

The other workflow is "Upscaling with model" and then denoising. If you want more detail, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details).

The workflow is kept very simple for this test: Load Image, Upscale, Save Image.

After 6 days of hard work (2 days building, 1 day testing, 2 days recording, 1 day editing, and very little sleep), I finally managed to upload this! Full tutorial in the YouTube description (it's entirely free, of course), and the video goes into 1h of detailed instructions on how to build it yourself (because I prefer for someone to learn how to fish than to give them a fish 😂).

I had the same problem, and those steps tank performance as well.

0.6 denoise and either CNet strength 0.9, end_percent 0.9, euler, or 0.5 with euler, sgm_uniform. Fastest would be a simple pixel upscale with lanczos.

Aug 31, 2024 · What is the main focus of the 'ComfyUI: Flux with LLM, 5x Upscale (Workflow Tutorial)' video? The main focus of the video is to provide a tutorial on how to use ComfyUI with Flux and a large language model (LLM) to upscale images up to 5x their original resolution using a custom workflow.

Thanks for all your comments. Latent upscale it, or use a model upscale, then VAE-encode it again and run it through the second sampler. I liked the ability in MJ to choose an image from the batch and upscale just that image.

0.75 denoise with Ultimate SD Upscale is great, but how do I get rid of the sky mountains? SD1.5. Thanks.

Hello, it's always nice to have new tips being shared, and thanks for that, but from what I see, I think you still need to work on your workflow.
This is done after the refined image is upscaled and encoded into a latent. Two options here.

Look at this workflow. I only have 4GB VRAM, so I haven't gotten SUPIR working on my local system.

Does anyone have any suggestions? Would it be better to do an iterative upscale, or how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. Here is a workflow that I use currently with Ultimate SD Upscale. After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

At the end, when you open and zoom in on your image, it's quite noticeable that your upscale generated visible seams between the upscaled tiles.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

That's because latent upscale turns the base image into noise (blur).

Generate an SD1.5 image and upscale it to 4x the original resolution (512 x 512 to 2048 x 2048) using Upscale with Model. You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

Upscale x1.5 ~ x2 - no need for a model; it can be a cheap latent upscale. Sample again at denoise=0.5. For some context, I am trying to upscale images of an anime village, something like Ghibli style.

Grab the image from your file folder and drag it onto the ComfyUI window. It will replicate the image's workflow and seed.
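Dragging a generated PNG onto the ComfyUI window works because ComfyUI stores the workflow as JSON inside the PNG's text metadata chunks. A rough stdlib-only sketch of reading such a chunk; the demo "PNG" is built in place for illustration, and real files may also use compressed zTXt chunks, which this deliberately does not handle:

```python
import json
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte stream."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt layout: keyword, NUL separator, latin-1 text
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4B length + 4B type + data + 4B CRC
    return out

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Serialize one PNG chunk with its CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a tiny stand-in PNG carrying a hypothetical workflow chunk, read it back.
workflow_json = json.dumps({"seed": 1234})
demo = (b"\x89PNG\r\n\x1a\n"
        + _chunk(b"tEXt", b"workflow\x00" + workflow_json.encode("latin-1"))
        + _chunk(b"IEND", b""))
print(png_text_chunks(demo)["workflow"])
```

This is also why the trick only works on PNGs that still carry their metadata: re-encoding or stripping the file (as some image hosts do) loses the embedded workflow and seed.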
Subsequently, I'd cherry-pick the best one and employ the Ultimate SD Upscale for a 2x upscale. A pixel upscale using a model like UltraSharp is a bit better - and slower - but it'll still be fake detail when examined closely. It's nothing spectacular, but it gives good, consistent results.

Also, both have a denoise value that drastically changes the result. That's practically instant, but doesn't do much either.

If I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face into the process. If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. If you want actual detail in a reasonable amount of time, you'll need a 2nd pass with a 2nd sampler.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID" full tutorial & GUI for Windows, RunPod & Kaggle, and web app. Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LORAs can be added and easily turned on/off (currently configured for up to three LORAs, but more can easily be added). Details and bad-hands LORAs are loaded. I use it with DreamShaperXL mostly, and it works like a charm.

Apply "upscale by" 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024). Second pic.

I needed a workflow to upscale and interpolate the frames to improve the quality of the video.

If it's a distant face, then you probably don't have enough pixel area to do the fix justice.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but it can be changed to whatever. Really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8.

So I made an upscale test workflow that uses the exact same latent input and destination size.
- image upscale is less detailed, but more faithful to the image you upscale. It's why you need at least 0.5 denoise; you don't need that many steps. From there you can use a 4x upscale model and run sampling again at low denoise if you want higher resolution.

No matter what, Upscayl is a speed demon in comparison. Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. I too use SUPIR, but just to sharpen my images on the first pass. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

Here are details on the workflow I created: it's an img2img method where I use the BLIP Model Loader from WAS to set the positive caption. I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

May 6, 2024 · Those detail LORAs are 100% compatible with ComfyUI, and yes, that's the first, second, and third recommendation I would give.

With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. It uses CN Tile with Ultimate SD Upscale. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

I decided to pit the two head to head; here are the results, workflow pasted below (did not bind it to image metadata because I am using a very custom, weird setup). That said, Upscayl is SIGNIFICANTLY faster for me.

For example, if you start with a 512x512 latent empty image, then apply a 4x model and apply "upscale by" 0.5, you end up at 1024x1024. Then comes the higher resolution by upscaling. You guys have been very supportive, so I'm posting here first.
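The arithmetic behind "apply a 4x model, then upscale by 0.5" is worth making explicit, since it is also why a 4x upscaler is the comfortable choice for a 2x result. A tiny sketch (function name is mine, for illustration):

```python
def final_size(size: int, model_scale: int, upscale_by: float) -> int:
    """Edge length after an NxN upscale model followed by a fractional resize."""
    return int(size * model_scale * upscale_by)

print(final_size(512, 4, 0.5))  # 1024: a net 2x result from a 4x model
print(final_size(512, 4, 1.0))  # 2048: the full 4x result
```

The model always produces the full 4x image first; the fractional "upscale by" then downsamples it, so you keep the detail the model added while landing on the resolution you actually want.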
I often reduce the size of the video and the frames per second to speed up the process. I did once get some noise I didn't like, but rebooted and all was good on the second try. SD1.5, Photon v1: both of these are of similar speed. No attempts to fix JPG artifacts, etc.

Jan 5, 2024 · I have been experimenting with AI videos lately. The final steps are as follows: apply the inpaint mask, run it through the KSampler, take the latent output and send it to a latent upscaler (doing a 1.5x upscale), then that upscaler to a KSampler running 20-30 steps at 0.5 denoise.

You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass. Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases. The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used. However, I switched to the Ultimate SD Upscale custom node.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).