@melissanellyy_:

Melissa Nelly
Region: PT
Friday 08 November 2024 23:18:02 GMT
6055 · 612 · 13 · 7


Comments

marlonzinho_x_
Marlon Gomes 🪄🖤 :
😂😂😂😂😂😂
2024-11-08 23:50:27
2
manueldeyse
Darklide4✝️ :
your hair is so beautiful☺️
2024-11-09 20:15:08
1
eny.raja
Eny Raja :
nothing, you guys need to do research😁😅😅😂
2024-11-09 01:25:58
1
iluvkiki_13
KB :
AMOOO🤣🤣❤️
2024-11-09 01:03:34
1
nobremell
Melissa Nobre :
🤍
2024-11-09 00:03:15
1
luannety877
luannety@ :
😂
2024-12-05 15:41:32
0
tamiresl60
Tânia Tamires430 :
😹😹😹😹😹
2024-12-02 19:27:58
0
chonguilee0
Chonguile :
🥰
2024-11-17 07:13:18
0

Other Videos

Your phone isn’t just a camera - it’s a portal to alternate realities. Runway’s Video-to-Video Turbo lets you restyle any clip with lifelike motion, but it only works on 20-second segments. That becomes an issue if you want consistent characters and scenes across different shots. It might seem like a constraint, BUT if you know how to work it, it really isn’t :)

Here’s how to use it smart:
• Record a 20-second video
• Include multiple poses, subtle movements, or camera shifts
• Think of it as data collection - not just a “scene” or a “shot”
• You're capturing different angles of *your world* for future re-use

Then plug that into Runway Video-to-Video:
• Set Structure Transformation between 7–10 (higher gives more creative freedom)
• Skip the “first frame” input if you want more control (worked better for me)
• Start with simple prompts like “Byzantine-style interior” or “desert city at dusk”
→ That way you’ll see exactly what the model does
→ Then layer in complexity as you go

Yes - I also tried variations using first frame + restyle workflows + upscaling, but the most realistic results still came from just **prompting + structure 7–10**.

Bonus move: after your first run, grab key frames from the output. Use those in Runway *References* to create more shots, consistently. Or pass them into image-to-video tools and expand the sequence.

Even with just 20 seconds, you can:
- Act out and record dozens of moments
- Generate consistent AI characters
- Add dialogue with lip-syncing
- Create an entire short-form video from one take

Don’t think in limits - think in systems… think in CONNECTIONS + COMBINATIONS. Record once, reuse everywhere.

PS: I’m testing AI workflows like this daily - follow for more inspo 🔥
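If your raw take runs longer than Runway's 20-second window, you can plan the cuts up front. A minimal sketch of that idea (the helper name `plan_segments` is my own, not anything from Runway):

```python
def plan_segments(duration_s: float, max_len_s: float = 20.0):
    """Split a recording of `duration_s` seconds into consecutive
    (start, end) windows no longer than `max_len_s` seconds each."""
    if duration_s <= 0:
        return []
    segments = []
    start = 0.0
    while start < duration_s:
        end = min(start + max_len_s, duration_s)
        segments.append((start, end))
        start = end
    return segments

# A 50-second take becomes two full 20 s windows plus a 10 s tail.
print(plan_segments(50))  # → [(0.0, 20.0), (20.0, 40.0), (40.0, 50.0)]
```

Each (start, end) pair can then be trimmed out of the original clip and restyled separately.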
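The "grab key frames from the output" step can be scripted with ffmpeg instead of scrubbing manually. A sketch that builds the command (ffmpeg on your PATH is assumed; the 2-second interval and output names are my own choices, not from the post):

```python
def keyframe_cmd(video_path, out_pattern="frame_%03d.png", every_s=2):
    """Build an ffmpeg command that saves one frame every `every_s`
    seconds from `video_path` as numbered PNG files."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps=1/{every_s}",  # fps filter: emit one frame per `every_s` seconds
        out_pattern,
    ]

cmd = keyframe_cmd("styled_output.mp4")
print(" ".join(cmd))
# run it with: subprocess.run(cmd, check=True)
```

The extracted PNGs are what you would feed into Runway References or an image-to-video tool to extend the sequence.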
