@rpmlandworks: GPS IS KING! Welcome to the shit show! What do you guys want to see more of? #trimble #earthworks #CAT #komatsu #bobcat #mack #cutfill #rpmlandworks

Ricky
Region: US
Tuesday 04 April 2023 15:05:58 GMT
10628
601
26
7

Comments

kenknight44
Ken Knight :
I think it’s a badass video, my man 🙌
2023-04-04 16:30:33
3
calvincoaker
Grant Coaker :
You're right, GPS saves time, money, and helps with safety on the job.
2023-04-04 17:16:15
1
nater7614
Dirt scratcher :
Topcon on Komatsu is our preference. The days of the laser are gone.
2023-04-04 21:25:07
1
countryside_farms_ponds
Countryside_Farms_Excavation :
We don't have GPS, we use laser lol
2023-04-05 00:08:40
1
jordanmcgeorge
Jordan McGeorge :
Komatsu with Trimble is where it’s at. I’m glad you talked me into Trimble. They are the best!
2023-04-06 17:43:49
1
cattledogs
Sean Crane :
I want heaped buckets. GPS is king, but knowing how to read grade and see grade is important, and you don't always get a GPS survey prior to the work.
2023-04-07 16:45:44
0
notuhnarc
Notuh Narc :
I'm a field service tech and I love your service rigs, gonna try and get my rig set up how y'all have yours
2023-04-24 03:12:20
0
brettkemp3
Brett Kemp :
More. Mulching.
2023-04-24 03:34:47
0

Other Videos

This incredible new AI from WAN is about to go VIRAL - here's WHY

WAN 2.2 is quickly proving stronger than I expected. This new open-source model doesn't just swap characters - it can now blend real-world objects into your scene with surprising consistency. In my test, I used a pouch on the bag in front of me. With older models, adding or holding objects usually broke the shot - things would glitch or collapse. WAN 2.2 respects the physics much more, letting you merge props into the scene, half-transform yourself, or anchor new details directly onto reality. That opens up a ton of creative possibilities for anyone experimenting with motion swaps or character edits.

⚡ Workflow I ran:
1. Recorded a short clip of myself speaking.
2. Used Nano Banana to edit the first frame - extended the edges and aligned it perfectly (I even built a quick alignment tool using AI in Supercreator.ai to make sure ratios matched).
3. Dropped that into WAN 2.2.
4. Results came back smoother than expected, with strong detail close to the subject.

🔥 Why WAN 2.2 feels different:
- More robust when merging objects into live scenes
- Fewer hallucinations in body/props compared to older Act 2-style models
- Strong detail retention (hair, clothing, hands)
- Output up to 30s per run, though capped at 720p (the highest res right now)
- Already available on Higgsfield + Supercreator.ai

⚠️ Weaknesses worth noting:
- Some drifting - e.g., the rock in my shot slowly "floated" out of place
- Slower than some commercial tools (2-5 minutes per run), so iteration takes longer

💡 Important tips if you test it:
- Align images pixel-perfect - if your source is 1920×1080, make sure the edited frame is exactly the same. Even small mismatches (1910×1078) will cause drift or blur. (A rough sketch of this check follows below.)
- Run it a few times - results vary, but you can cherry-pick the cleanest. In my test, one run had a rock drifting, another nailed it.
- The driver video matters! Experiment.

We're now entering a phase where you can create entire worlds from your home, phone, or laptop. Start experimenting today!

PS: I'm testing workflows like this daily - follow for more inspo 🔥
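As a rough illustration of the pixel-perfect alignment tip above, here is a minimal Python sketch. It assumes two hypothetical local files (source_frame.png extracted from the driver clip and edited_frame.png produced by the editing step) and uses only Pillow; it is not part of WAN 2.2, Nano Banana, Higgsfield, or Supercreator.ai.

# Minimal sketch: make sure the edited first frame matches the driver clip's
# resolution exactly before feeding it to the model.
# Assumptions: local files "source_frame.png" and "edited_frame.png" exist;
# this uses Pillow only, not any WAN 2.2 or Supercreator.ai API.
from PIL import Image

SOURCE_FRAME = "source_frame.png"   # first frame extracted from the driver clip
EDITED_FRAME = "edited_frame.png"   # frame after editing/extending the edges

src = Image.open(SOURCE_FRAME)
edit = Image.open(EDITED_FRAME)

if edit.size != src.size:
    # Even a small mismatch (e.g. 1910x1078 vs 1920x1080) can cause drift or blur,
    # so resize the edited frame back to the exact source resolution.
    print(f"Resizing edited frame {edit.size} -> {src.size}")
    edit = edit.resize(src.size, Image.LANCZOS)
    edit.save(EDITED_FRAME)
else:
    print(f"Frames already match at {src.size}")

Resizing with a high-quality filter keeps the edit intact while guaranteeing the edited frame and the driver clip share exactly the same dimensions.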
