RŌHKI
@rohkiofficial
Here’s a deep dive into how we make our own AI-driven anime 🧵 a thread:

2/14 like any other animated production, we begin with storyboards and scripts. from there, we enter a preproduction phase, where we conduct R&D to build the necessary tech and pipelines for each shot

3/14 our process is a hybrid of 2.5D and AI-powered rotoscope animation. we start by creating 3D backgrounds and environments in Unreal Engine. we block out scenes within these sets, determining camera angles and character movements

4/14 we then record a rough dialogue track for each of the characters to determine timing and how we should act out the emotion

5/14 next, we film live-action footage of ourselves on a green screen acting out the scenes, which is then used as mocap and processed through various AI pipelines we've built using ComfyUI. depending on the shot, we will use ViggleAI here instead of mocap
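the thread doesn't show the actual graphs, so as a rough illustration of what "AI pipelines built using ComfyUI" can look like under the hood, here is a hypothetical minimal img2img workflow submitted over ComfyUI's HTTP API (ComfyUI exposes a /prompt endpoint, by default on port 8188). the checkpoint name, prompts, node wiring, and sampler settings are all illustrative assumptions, not RŌHKI's pipeline:

```python
import json
import urllib.request

def build_restyle_workflow(frame_path: str, prompt: str) -> dict:
    """Assemble a minimal ComfyUI img2img graph in the format the API
    expects: a dict of node-id -> {"class_type", "inputs"}, where an
    input can reference another node as [node_id, output_index]."""
    return {
        # checkpoint loader outputs: 0 = MODEL, 1 = CLIP, 2 = VAE
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "anime_style.safetensors"}},  # hypothetical model file
        "2": {"class_type": "LoadImage",
              "inputs": {"image": frame_path}},
        "3": {"class_type": "CLIPTextEncode",                       # positive prompt
              "inputs": {"clip": ["1", 1], "text": prompt}},
        "4": {"class_type": "CLIPTextEncode",                       # negative prompt
              "inputs": {"clip": ["1", 1], "text": "blurry, photorealistic"}},
        "5": {"class_type": "VAEEncode",                            # frame -> latent
              "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
        "6": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["3", 0],
                         "negative": ["4", 0], "latent_image": ["5", 0],
                         "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         # denoise < 1.0 keeps the original frame's structure,
                         # which is what makes this behave like rotoscoping
                         "denoise": 0.55}},
        "7": {"class_type": "VAEDecode",
              "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
        "8": {"class_type": "SaveImage",
              "inputs": {"images": ["7", 0], "filename_prefix": "restyled"}},
    }

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> None:
    """POST the graph to a running ComfyUI server's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{server}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

driving the server programmatically like this is what turns a one-off workflow into a batch pipeline: the same graph can be queued once per extracted frame.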

6/14 at this stage, the AI essentially rotoscopes each frame, re-stylizing it to match our character designs and the art style we've established
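a per-shot driver for this kind of frame-by-frame restyling can be sketched as below; `stylize` is a placeholder for whatever model pass does the actual work (the thread doesn't specify one), and reusing a single seed across a whole shot is a common trick for reducing frame-to-frame flicker:

```python
from pathlib import Path
from typing import Callable

def restyle_shot(frame_dir, out_dir,
                 stylize: Callable[[Path, Path, int], None],
                 seed: int = 42) -> int:
    """Apply stylize(frame_path, out_path, seed) to every frame of a shot
    in order. Holding the seed fixed per shot keeps the stylization more
    temporally consistent. Returns the number of frames processed."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    frames = sorted(Path(frame_dir).glob("*.png"))  # sorted = playback order
    for frame in frames:
        stylize(frame, out / frame.name, seed)
    return len(frames)
```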

7/14 we spend a lot of time designing each of the characters using AI as well as traditional non-AI design techniques. we then train our own AI models on these character designs so that the AI understands exactly what our characters look like. these models are known as LoRAs (low-rank adaptations)
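as a toy numpy illustration of what a LoRA is (not RŌHKI's training code): rather than retraining a full weight matrix W, a LoRA trains two small matrices A and B and adds the low-rank update scale · B·A on top of the frozen base, so at initialization (B = 0) the model behaves exactly like the base model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4  # toy sizes; real layers are much larger

W = rng.normal(size=(d_out, d_in))   # frozen base weight (never updated)
A = rng.normal(size=(rank, d_in))    # trainable down-projection
B = np.zeros((d_out, rank))          # trainable up-projection, zero-initialized
scale = 1.0                          # blending strength of the adaptation

def lora_forward(x, W, A, B, scale):
    # base path plus low-rank correction; mathematically identical to
    # using the merged weight (W + scale * B @ A) but only rank*(d_in+d_out)
    # extra parameters need to be trained and shipped
    return W @ x + scale * (B @ (A @ x))
```

this is why a LoRA trained on the character designs can be distributed as a small file and stacked onto a shared base checkpoint at inference time.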

8/14 for facial animation, we’ve been using an open-source program called LivePortrait. for episode 2, we’ve been experimenting with Act-One from RunwayML

9/14 we then composite the stylized characters into the scenes using DaVinci Resolve, adding visual effects and color correction to complete the production

10/14 the final piece of the process, and our true specialty, is creating the sound design and composing the music and score. at its heart, RŌHKI is a music project, so this is where we truly get to have fun and bring everything together

11/14 we also sometimes run our 3D background environments through AI to push them further toward a specific style and keep the overall look cohesive. we'll also use other consumer AI tools such as RunwayML Gen-3 and LumaLabsAI to generate various elements or b-roll

12/14 this 12-minute episode was the result of months of hard work from a dedicated team of six, often working 14-hour days. while we've found ways to leverage AI to streamline and accelerate parts of the animation process, it remains incredibly labor-intensive

13/14 there's a misconception that using AI means we can simply "prompt" our animation into existence, but that couldn't be further from the truth. even with these advanced tools, creating animation still requires a tremendous amount of effort and skill

14/14 we’re at a pivotal moment where this technology is just beginning to transform animation, and we're thrilled to be at the forefront, pushing the boundaries of what's possible. what is this tech gonna be capable of in a couple years? can’t wait to find out 🩸 Here it is in 4K: https://youtu.be/QLV0sKfxk9s?si=RG_tXijpM3Lknvfj and collect on @zora at https://zora.co/collect/base:0x9cd98193f58eef7f78e89caa6cbbebbaf609ffe1/1?referrer=0x99895960B30A34F8431517c4f5B52A1f4b9854a3