Digital Debunking: Can Simulation Technology Get a Robot to the Finish Line?
Technology’s never-ending rise has brought to life concepts that many people never thought possible. Of course, these advancements can inspire fear in the anxious-hearted, and popular media certainly doesn’t help. For instance, you might see human-like robot movement technology and think of dystopian media that depicts super-agile, super-sentient robots taking over the world and enslaving humanity (“I, Robot,” anyone?).
Thankfully, that media isn’t our reality. Instead, automated machines and robots have taken over much of the tiresome, repetitive, and monotonous labor humans once performed (in automotive and textile factories, for example). In all, automation has made the world run more smoothly and efficiently, from the largest factories to our everyday homes and devices.
And while robots handle industrial-scale applications and make the world go ‘round, so to speak, we can also explore their more whimsical applications. For example, we can design and simulate a robot that can walk, run, or jump. Usually, designers and developers teach robots and simulations these actions by developing machine learning (ML) or artificial intelligence (AI) models and algorithms.
An easy way to understand this process is to compare it to a video game. In the game, let’s say your character is constantly walking to the right (as in many platformers), and your goal is to make them jump over holes or obstacles to avoid taking damage or losing the game. In this situation, you decide whether your character wins or loses based on how well you avoid these obstacles.
Reinforcement learning, a way of training machine learning models, would teach your character to avoid those pitfalls on its own through a series of repetitions. In other words, the character would learn from its environment by interacting with it, largely through trial and error. The model repeatedly tries actions, observes the outcomes, and gradually learns which action in each situation moves the character toward its most optimal state; in this case, it would find the best ways to avoid obstacles and therefore win the game.
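To make the idea concrete, here’s a minimal sketch of one classic reinforcement learning technique, tabular Q-learning, applied to the platformer analogy. The states, actions, and rewards below are a toy setup invented purely for illustration and have nothing to do with Altair’s tools:

```python
import random

# Toy setup: the state is the distance (in tiles) to the next obstacle,
# and the character can either keep running or jump.
ACTIONS = ["run", "jump"]
STATES = list(range(5))
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated long-term reward of each (state, action) pair
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: jumping right before an obstacle pays off."""
    if state == 1:                                  # obstacle one tile away
        reward = 1.0 if action == "jump" else -1.0
    else:
        reward = 0.1 if action == "run" else -0.1   # early jumps waste time
    return random.choice(STATES), reward            # next obstacle appears

state = random.choice(STATES)
for _ in range(10_000):                             # trial-and-error repetitions
    # Explore occasionally; otherwise exploit the best known action
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the estimate toward reward + discounted future
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```

After enough repetitions, the learned policy converges on “jump when the obstacle is one tile away, otherwise keep running” — exactly the kind of trial-and-error learning described above.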
During a recent slow Friday, one of our engineers decided to test whether a similar method could be used to teach a virtual robot model, this time with simulation products. To do so, they used a unique optimization process with Altair MotionSolve.
Building the Simulation Model
Our simple robot consisted of a body with two legs. Each leg could only bend in two places: the hip and the knee. Our engineer used MotionSolve to simulate the robot in its environment. At each time step, MotionSolve calculated the robot’s rigid body dynamics, including the effects of gravity and contact with the floor. To keep things simple, our engineer assumed that the robot was rigid, but a more realistic model of a deforming robot could be created with a tool like Altair OptiStruct.
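MotionSolve builds and solves the model through its own interface, but as a rough, hypothetical sketch of the structure involved, the Python below describes the same two-legged robot: a body and two legs, each bendable in exactly two places. All class names, field names, and values are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str               # e.g. "left_hip"
    angle: float = 0.0      # current bend angle in radians

@dataclass
class Leg:
    hip: Joint
    knee: Joint             # each leg bends in exactly two places

@dataclass
class Robot:
    body_mass: float        # kg, illustrative value only
    legs: list = field(default_factory=list)

robot = Robot(
    body_mass=10.0,
    legs=[
        Leg(Joint("left_hip"), Joint("left_knee")),
        Leg(Joint("right_hip"), Joint("right_knee")),
    ],
)

# At every time step, the solver advances the rigid-body dynamics;
# gravity, joint constraints, and floor contact are resolved together.
# for t in timesteps:
#     solver.step(robot, dt)   # handled internally by MotionSolve
```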
Teaching the Model to Take Baby Steps
The engineer treated the position of each of the legs at each timestep as a design variable and optimized them using Altair HyperStudy. The objective was to make the model travel forward as far as possible. As the optimization progressed, the robot found patterns that helped it run farther.
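Conceptually, the setup looks something like the sketch below: one design variable per joint per time step, flattened into a single vector the optimizer can manipulate. The joint names and step count are assumptions for illustration; in the actual study, HyperStudy managed the design variables and drove the solver:

```python
import numpy as np

N_STEPS = 50                 # assumed number of time steps, for illustration
JOINTS = ["left_hip", "left_knee", "right_hip", "right_knee"]

def unpack(design_vector):
    """Reshape the optimizer's flat vector into one angle curve per joint."""
    curves = np.reshape(design_vector, (len(JOINTS), N_STEPS))
    return dict(zip(JOINTS, curves))

# 4 joints x 50 steps = 200 design variables per candidate gait
x0 = np.zeros(len(JOINTS) * N_STEPS)   # start with limbs fixed in place
print(unpack(x0)["left_hip"].shape)    # -> (50,)
```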
There are several strategies for teaching robots to walk, including reinforcement learning and neuroevolution. Our engineer took a simpler approach: they optimized the position of each joint at each time step. Our robot didn’t have any way to sense its environment; it just planned a set of motions in advance and hoped for the best. Perhaps you have klutzy friends or family who walk and run the same way. For our poor robot friend, learning to walk and run was like looking at an obstacle course, putting on a blindfold, and trying to run through it without hitting anything.
Before the robot had been optimized
Our goal was for the robot to travel as far as possible, so our engineer used its final Y-coordinate (its position along the direction of travel) as the objective function. HyperStudy was the brains of the operation: our engineer used HyperStudy’s Global Response Surface Method (GRSM) to optimize the position curves of all four joints and maximize the distance traveled.
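GRSM itself is proprietary to HyperStudy, so as a conceptual stand-in only, the sketch below drives the same kind of loop with an off-the-shelf global optimizer from SciPy: propose a set of joint curves, evaluate how far the robot travels, and repeat. The simulate() function is a dummy placeholder standing in for a full MotionSolve run:

```python
import numpy as np
from scipy.optimize import differential_evolution

N_VARS = 4 * 50   # four joint-angle curves, 50 time steps each (assumed)

def simulate(design_vector):
    """Placeholder for a MotionSolve run; returns the final Y-coordinate.

    A real evaluation would write the joint curves into the model, run
    the solver, and read the robot's final position back out. Here we
    use a dummy analytic landscape so the sketch executes end to end.
    """
    return float(np.sum(np.sin(design_vector)))

def objective(design_vector):
    # Optimizers minimize, so negate the distance we want to maximize.
    return -simulate(design_vector)

result = differential_evolution(
    objective,
    bounds=[(-1.0, 1.0)] * N_VARS,   # joint angle limits, in radians
    maxiter=20,                      # the real study ran thousands of sims
    polish=False,
)
print("best distance:", -result.fun)
```

In the real study, each evaluation was a full multibody simulation, which is why it took thousands of runs for the gait to emerge.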
At first, the robot’s limbs were fixed in place, and the poor robot simply fell on its back, but it didn’t take long for GRSM to do better. By the end of the first iteration, it had already learned to shove itself forward instead of falling backward. After 14,937 simulations, GRSM found a set of joint curves that resulted in what you might call a walking gait. The robot had found a way to get all the way to the end of its floor. In fact, it had learned to perform a final leap to the edge of the path to maximize its distance traveled, like a true Olympic long jumper!
A time lapse of the robot learning to run and "leap" at the end of the run
In practice, robots (and people) can sense their surroundings as they walk. A different approach to this problem would have been to have the robot sense its current position and adapt to it as it walked. This controls-based approach would be an excellent problem for a solution like Altair Activate.
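To see the difference, here’s a minimal sketch of closed-loop control on a single joint: a PD controller that senses the joint’s current angle and velocity at each step and corrects toward a target. The unit-inertia toy dynamics are an assumption for illustration and aren’t meant to represent Activate’s block-diagram models:

```python
def pd_controller(target, angle, velocity, kp=8.0, kd=3.0):
    """Torque that pulls the joint toward its target angle."""
    return kp * (target - angle) - kd * velocity

angle, velocity, dt = 0.0, 0.0, 0.01
for _ in range(500):
    # Closed loop: each step uses the *sensed* state, not a fixed plan
    torque = pd_controller(target=1.0, angle=angle, velocity=velocity)
    velocity += torque * dt      # toy unit-inertia joint dynamics
    angle += velocity * dt
print(f"final angle: {angle:.3f}")   # settles near the 1.0 rad target
```

Unlike the blindfolded open-loop robot, a controller like this can recover from disturbances because it keeps measuring where it actually is.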
Conclusion
Below, you can see the progression from the beginning of the process (falling) to the middle (a bombastic leap) to the end, where our robot friend (somewhat) successfully determined an optimal pattern of movement to get it from point A to point B.
Before, during, and after optimizing the model to "run"
As supremely cool as it’d be, we at Altair aren’t in the business of teaching robots to become Olympic athletes. But we do provide tools like Altair MotionSolve that help users build and execute complex system models for motion analysis and optimization. Could the future of technology one day lead to systems and models intelligent enough to hold their own sporting events? Perhaps, and if so, we’re staking our claim to the name “Robolympics.”
To learn more about multi-body system dynamics, visit altair.com/motionsolve.