Mβ–‘Aβ–‘Zβ–‘iβ–‘N
@mazin
Working on an experimental surfing video in Comfy: trained an AI model on my photos and transformed them into anime > applied to video with IP-Adapter > ControlNet > looping & flowing with AnimateDiff
5 replies
0 recast
11 reactions
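
The pipeline described above (anime-styled photos → IP-Adapter → ControlNet → AnimateDiff) could be sketched as a ComfyUI-style node graph submitted to the local API. This is only an illustrative assumption: the `class_type` names, inputs, and file names below are placeholders, not the exact identifiers used by the IPAdapter, ControlNet, or AnimateDiff custom nodes.

```python
import json

# Hypothetical ComfyUI-style node graph for the described pipeline.
# Node IDs map to {"class_type": ..., "inputs": ...}; list values like
# ["1", 0] mean "output slot 0 of node 1". All names are illustrative.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",   # anime base model (placeholder name)
          "inputs": {"ckpt_name": "anime_model.safetensors"}},
    "2": {"class_type": "IPAdapterApply",           # style transfer from reference photos
          "inputs": {"model": ["1", 0], "weight": 0.9}},
    "3": {"class_type": "ControlNetApply",          # lock composition to source video frames
          "inputs": {"conditioning": ["2", 0], "strength": 0.8}},
    "4": {"class_type": "AnimateDiffSampler",       # temporal looping / flowing motion
          "inputs": {"model": ["3", 0], "context_length": 16}},
}

# ComfyUI exposes a local HTTP API; a graph like this is typically sent
# as JSON via POST to http://127.0.0.1:8188/prompt (server not called here).
payload = json.dumps({"prompt": workflow})

# Each stage feeds the next: checkpoint -> IP-Adapter -> ControlNet -> AnimateDiff.
order = [workflow[k]["class_type"] for k in sorted(workflow)]
print(order)
```

The point of the sketch is the chaining: each node consumes the previous node's output slot, which is how the IP-Adapter style, ControlNet structure, and AnimateDiff motion compose into one render.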

Yatima
@yatima
Did you make a LoRA with your pictures? If so, do you need a lot of them? I want to make one using frames from my 3D works. Anyway, it's looking pretty nice. I'd love to see the final result. 🍿 πŸ–€ 28 $DEGEN
2 replies
0 recast
0 reaction

Mβ–‘Aβ–‘Zβ–‘iβ–‘N
@mazin
Yes, I made lots of them in different aesthetics, but I don't think they were very successful; I still need to learn more. The reason the result was good here was the new IP-Adapter Plus high-strength mode, and I used very similar photos generated with Midjourney...
1 reply
0 recast
0 reaction

Yatima
@yatima
Good to know. I was reading about training LoRAs a few days ago and it has really sparked my curiosity. The problem is that there is so much to learn and experience that I get lost in the branches of such a big tree, haha. πŸ–€ πŸ–€
1 reply
0 recast
1 reaction

Mβ–‘Aβ–‘Zβ–‘iβ–‘N
@mazin
Experimentation and learning never end in this field, oh my. I've been experimenting for over 2 years :) While it can be tiring, my curiosity keeps me going. I'm eager to explore the possibilities with this tool, so let's stay connected πŸ’ͺ πŸ™‚
1 reply
0 recast
0 reaction

Yatima
@yatima
Two years? Wow. I've only been at it for a couple of months, and I don't see the end of the tunnel, which I love. The amount of possibilities I see is overwhelming, a real challenge. πŸ‘Š πŸ–€
1 reply
0 recast
0 reaction

Mβ–‘Aβ–‘Zβ–‘iβ–‘N
@mazin
I'm including the Disco Diffusion, Warp Fusion, and early Stable Diffusion era, of course. The interfaces were a huge mess; I think Comfy is the peak of all of them.
1 reply
0 recast
0 reaction

Yatima
@yatima
I had a thing with DD in the summer of '22, but I stopped because it got a bit tiring, and also, if I remember correctly, Google limited the Colabs, so I decided to stop paying. Imagine how I felt coming back two years later with ComfyUI and bumping into everything that has advanced in this field. 🀯
1 reply
0 recast
0 reaction

Mβ–‘Aβ–‘Zβ–‘iβ–‘N
@mazin
It always happens like this; no need to get FOMO, just wait for the most optimized version of the tool. I'm very happy with Comfy right now, though sometimes I miss the weirdness and uniqueness of early tools. I'm looking for ways to bring Warp Fusion into Comfy; I heard it has been integrated somehow.
1 reply
0 recast
0 reaction

Yatima
@yatima
They did have a nightmarish digital charm that you don't see anymore. πŸ–€ I don't know, but I suppose if it doesn't exist, someone could train a checkpoint to recapture that look. I also really like ComfyUI, especially since I'm used to working with node-based systems like Houdini, Blender Geonodes, or TouchDesigner; it makes me feel at home. 🀘 The only thing I miss a bit is not having the option to work vertically, Houdini-style. I'll take a look at Warp Fusion, which I wasn't familiar with (thanks!). I've also seen that they've adapted Flux to work with ComfyUI, which is a really nice step forward.
1 reply
0 recast
1 reaction

Mβ–‘Aβ–‘Zβ–‘iβ–‘N
@mazin
Oh yeah, Flux is a great improvement; I don't want to pay monthly for Midjourney anymore. I'm definitely going to explore Flux in Comfy in the coming days to move away from MJ. I've seen some amazing live visuals and audiovisuals made with Comfy and TouchDesigner. It's really impressive what can be done with them, so you have a significant advantage there. I know Blender a little bit, but after I saw the potential with Comfy, I decided to learn TouchDesigner.
1 reply
0 recast
0 reaction