RŌHKI
@rohkiofficial
Here’s a deep dive into how we make our own AI-driven anime 🧵 A Thread:
1 reply
0 recast
4 reactions

RŌHKI
@rohkiofficial
2/14 like any other animated production, we begin with storyboards and scripts. from there, we enter a preproduction phase, where we conduct R&D to build the necessary tech and pipelines for each shot
1 reply
0 recast
0 reaction

RŌHKI
@rohkiofficial
3/14 our process is a hybrid of 2.5D and AI-powered rotoscope animation. we start by creating 3D backgrounds and environments using Unreal Engine. we block out scenes within these sets, determining camera angles and character movements
1 reply
0 recast
0 reaction
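
as a concrete aside on the blocking step: blocking like this is usually done by hand in the Unreal editor, but the same thing can be scripted through the editor's Python API. the sketch below is a hypothetical illustration, not RŌHKI's setup; the coordinates and framing are placeholders, and it assumes the Python editor plugin is enabled with a set/level already open.

```python
# illustrative only: roughing in a shot via Unreal's editor Python API.
# assumes the Editor Scripting / Python plugin is enabled and a level is open.
import unreal

# spawn a cine camera in the open level at a placeholder position in the set
cam = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor,
    unreal.Vector(0.0, 350.0, 160.0),
)

# aim the camera back toward the origin, where a character stand-in might sit
rot = unreal.Rotator()
rot.yaw = -90.0
cam.set_actor_rotation(rot, False)
```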

RŌHKI
@rohkiofficial
4/14 we then record a rough dialogue track for each of the characters to determine timing and how we should act out the emotion
1 reply
0 recast
0 reaction

RŌHKI
@rohkiofficial
5/14 next, we film live-action footage of ourselves on a green screen acting out the scenes, which is then used as mocap and processed through various AI pipelines we’ve built in ComfyUI. depending on the shot, we will use ViggleAI here instead of mocap
1 reply
0 recast
0 reaction
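
for readers curious how a ComfyUI pipeline gets driven outside the GUI: ComfyUI exposes a small local HTTP API, so a saved workflow can be queued programmatically for each batch of frames. the sketch below is a generic illustration, not RŌHKI's actual graph; the workflow file name and server address are placeholders.

```python
# minimal sketch: queue a saved ComfyUI workflow (API format) on a local server.
# "workflow_api.json" is a placeholder for a workflow exported via "Save (API Format)".
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# ComfyUI's /prompt endpoint accepts {"prompt": <workflow graph>} and returns a prompt id
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    f"{COMFY_URL}/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # contains the prompt_id for the queued job
```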

RŌHKI
@rohkiofficial
6/14 at this stage, the AI essentially rotoscopes each frame, re-stylizing it to match our character designs and the art style we've established
1 reply
0 recast
0 reaction
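
to make the rotoscoping step concrete: conceptually it is an image-to-image diffusion pass over every frame, with the denoising strength deciding how far each frame gets pushed toward the target style. below is a generic frame-by-frame sketch using Hugging Face diffusers; the base model, prompt, and paths are placeholders, and this is not the actual RŌHKI workflow, which runs inside ComfyUI.

```python
# generic per-frame restylization sketch with diffusers (not the actual ComfyUI graph).
# assumes frames were already extracted to ./frames as PNGs.
from pathlib import Path

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")
# pipe.load_lora_weights("path/to/character_lora")  # optional: a character LoRA (see 7/14)

out_dir = Path("stylized")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(Path("frames").glob("*.png")):
    frame = Image.open(frame_path).convert("RGB")
    styled = pipe(
        prompt="anime style, <character description placeholder>",
        image=frame,
        strength=0.5,        # how strongly each frame gets repainted toward the style
        guidance_scale=7.0,
    ).images[0]
    styled.save(out_dir / frame_path.name)
```

note that naive frame-by-frame repainting like this tends to flicker; a production pipeline would typically layer ControlNets and temporal smoothing on top to keep frames consistent.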

RŌHKI
@rohkiofficial
7/14 we spend a lot of time designing each of the characters using both AI and traditional non-AI design techniques. we then train our own AI models on these character designs so that the AI understands exactly what our characters look like. these models are known as LoRAs
1 reply
0 recast
0 reaction
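
a quick aside on the term: a LoRA (Low-Rank Adaptation) leaves the base model frozen and trains only two small matrices per adapted layer, which is what makes training on a handful of character designs practical. a stripped-down PyTorch sketch of the idea, illustrative only and not RŌHKI's training code:

```python
# bare-bones illustration of the LoRA idea (not a training script):
# the frozen base weight W stays untouched; only the small A and B matrices are trained,
# and the layer's output becomes W x + (alpha / r) * B A x.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                                 # freeze the original layer
        self.lora_a = nn.Linear(base.in_features, r, bias=False)    # down-projection A
        self.lora_b = nn.Linear(r, base.out_features, bias=False)   # up-projection B
        nn.init.zeros_(self.lora_b.weight)                          # start as a no-op adaptation
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# usage: wrap an attention projection of a diffusion model and train only lora_a / lora_b
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(1, 768))
```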

RŌHKI
@rohkiofficial
8/14 for facial animation, we’ve been using an open-source program called LivePortrait. for episode 2, we’ve been experimenting with Act-One from RunwayML
1 reply
0 recast
0 reaction
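
for context, LivePortrait is an open-source face-reenactment tool that is normally driven from its command line: you give it a source portrait and a driving video of an actor's face, and it transfers the performance. the snippet below wraps such a call in Python; the paths are placeholders and the -s / -d flags follow the project's README at the time of writing, so verify them against the current repo.

```python
# rough sketch of driving LivePortrait from a script; paths are placeholders and the
# CLI flags (-s source image, -d driving video) should be checked against the repo's README.
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "-s", "character_face.png",      # placeholder: stylized character portrait
        "-d", "actor_performance.mp4",   # placeholder: recorded facial performance
    ],
    cwd="LivePortrait",                  # placeholder: path to a cloned LivePortrait checkout
    check=True,
)
```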

RŌHKI
@rohkiofficial
9/14 we then composite the stylized characters into the scenes using DaVinci Resolve, adding visual effects and color correction to complete the production
1 reply
0 recast
0 reaction
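
DaVinci Resolve also ships a Python scripting API, which can automate the grunt work of pulling rendered shots into a project before the actual compositing and grading happen in the GUI. the sketch below is a hypothetical example, not RŌHKI's setup; clip paths and the timeline name are placeholders, and it assumes Resolve is running with its DaVinciResolveScript module importable (Resolve's scripting README describes the environment variables that expose it).

```python
# hypothetical Resolve scripting sketch: import rendered shots and drop them on a timeline.
# requires DaVinci Resolve to be running and DaVinciResolveScript to be on PYTHONPATH.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# placeholder paths to the stylized character renders and Unreal background plates
clips = media_pool.ImportMedia([
    "/renders/ep01_sh010_characters.mov",
    "/renders/ep01_sh010_background.mov",
])

# create a timeline for the shot and append the imported clips to the current timeline
media_pool.CreateEmptyTimeline("ep01_sh010_comp")  # placeholder timeline name
media_pool.AppendToTimeline(clips)
```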

RŌHKI
@rohkiofficial
10/14 the final piece of the process, and our true specialty, is creating the sound design and composing the music and score. at its heart, RŌHKI is a music project, so this is where we truly get to have fun and bring everything together
1 reply
0 recast
0 reaction

RŌHKI
@rohkiofficial
11/14 we also sometimes run our 3D background environments through AI to push them further toward a specific style and create a more cohesive look. there are also some other consumer AI tools, such as RunwayML Gen-3 and LumaLabsAI, that we will use to generate various elements or b-roll
1 reply
0 recast
0 reaction

RŌHKI
@rohkiofficial
12/14 this 12-minute episode was the result of months of hard work from a dedicated team of six, often working 14-hour days. while we’ve found ways to leverage AI to streamline and accelerate parts of the animation process, it remains incredibly labor-intensive
1 reply
0 recast
0 reaction