How Agent-Based Modeling Could Accelerate the SpaceX and Tesla Development Process

 

Building the Machine that Builds the Machine

For both SpaceX and Tesla, Elon Musk has often said that the key to ultimate success is “building the machine that builds the machine,” meaning it is really the factory that becomes the product, and the cars or rockets coming off the line are simply its output.

This way of looking at things is counterintuitive, because it takes some of the spotlight off the innovation in the product itself, whether that product is a car or a rocket. That innovation is still very much there, but success requires a rapid, focused effort to develop the end product while simultaneously developing the machine that builds it. SpaceX and Tesla both follow this roadmap, even though their complexity and production volumes differ.

The engineering process behind both companies has also been documented, most notably in a lengthy interview with Tim Dodd (owner of the Everyday Astronaut YouTube channel). In that interview, Mr. Musk discusses the rules he uses for rapid, iterative development:

  • Make your requirements less dumb. Even if they were created by a “smart” person, the team should question them and work to simplify.

  • Delete part of the process, then add it back in if proven necessary. This tests for unnecessary buffers and just-in-case scenarios.

  • Any requirement must come from a name, not a department. This ensures accountability and traceability: many requirements become irrelevant or redundant over time, and without a named owner, no one catches them.

  • Simplify and optimize early on. This speaks strongly to Agile methodology and its constant focus on the minimum viable product for each new iteration; otherwise you risk optimizing a subprocess or component that shouldn’t exist at all.

  • Accelerate cycle time. Examples given during these discussions suggest a method in the spirit of Lean, with a key focus on eliminating waste.

  • Automate the process. At least, do so where and when it makes sense. Some processes are still performed much better by a human, and attempting to automate prematurely caused major bottlenecks at Tesla before the decision was reversed.

There is no doubt, as the technological achievements of both SpaceX and Tesla show, that there is something special about these themes: building the machine that builds the machine, and using rapid, iterative development with a relentless focus on simplicity.

The challenge, however, lies in what could be described as the “Leveraged Cost of Innovation.” Think about it: during the development period, you have a product whose design evolves at a rapid pace, and you are building models of that product in order to test it. Every design change is amplified across the manufacturing steps required to produce it: an iteration that results in five product changes can snowball into 50, 100, or more manufacturing changes.

Pairing Rapid Iterative Development with Agent-Based Evolution

As the product is developed, so is the machine that builds it. But what would it take for the machine to adapt itself automatically to rapid, iterative product development?

This is where recent advances (within the last one to two years) in robotics, artificial intelligence (AI), virtual reality (VR), and physics-accurate simulation environments could play a role in making this happen. TU Delft, among other research groups, is investigating how such advanced technologies can improve manufacturing as a whole. A technique we call Agent-Based Evolution (a combination of reinforcement learning, agent-based modeling/simulation, and genetic algorithm architecture) is being explored and developed to solve this specific challenge.

An example will help.

Develop a Digital Environment

First, picture a basic video game, say the original Super Mario Bros. In this game, you know which actions can be taken at each point, which elements of the game add value (points, lives, passing a level), and which cost value (dying, getting hit by an enemy, running out of time). Based on just these elements, a person can start with no training or experience (if you’ve ever played this game, it’s doubtful you had to read the manual first) and learn how to optimize Mario’s performance through trial and error.
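
The loop behind this kind of learning is surprisingly small. Below is a minimal sketch of tabular Q-learning (one classic reinforcement learning method) on a toy one-dimensional “level” with a pit and a flag. The layout, point values, and hyperparameters are illustrative inventions rather than anything from the interview, but the trial-and-error structure is the same one a learning agent would apply to Mario.

```python
import random

# A toy 1-D "level": start at cell 0 and reach the flag at the far end.
# Walking into the pit ends the episode with a penalty, so the agent must
# learn to time a "jump" (a two-cell move) to clear it.
LEVEL_LEN, PIT, FLAG = 10, 4, 9
ACTIONS = [-1, +1, +2]               # step left, step right, jump right

# Q-table: the learned value of taking each action in each cell.
Q = {(s, a): 0.0 for s in range(LEVEL_LEN) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(LEVEL_LEN - 1, state + action))
    if nxt == PIT:
        return nxt, -10.0, True      # "dying" loses points
    if nxt == FLAG:
        return nxt, +10.0, True      # "passing the level" earns points
    return nxt, -0.1, False          # small penalty as time passes

for episode in range(5000):
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the best known action.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, reward, done = step(s, a)
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action from the next state.
        best_next = 0.0 if done else max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = nxt
```

After enough episodes, the highest-valued action in each cell traces the optimal route: walk right, jump at the right moment to clear the pit, and reach the flag.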

This concept shouldn’t be new. However, what is new is being able to take this formula and apply it to real-world situations. Instead of Mario, picture a robotic arm that needs to drill precise holes into a sheet of metal. Currently, a programmer has to write code that tells the arm exactly what to do and when; a manufacturing engineer has to develop a jig to hold the sheet precisely in place; and a process must be developed to move the sheet from the previous station to this station and on to the next, either automatically with well-timed actions, or manually, with a human deciding when to act.
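
To make the contrast concrete, here is roughly what that explicit approach looks like in code. This is a hypothetical sketch: the jig guarantees the sheet sits at a known origin, so every hole position is hard-coded, and the controller object with its move_to and drill methods is a stand-in for whatever motion API a real arm exposes.

```python
# Hypothetical sketch of today's hand-coded approach. The jig guarantees
# the sheet sits at a known origin, so hole positions can be hard-coded.
HOLE_POSITIONS_MM = [(25.0, 25.0), (25.0, 175.0), (175.0, 25.0), (175.0, 175.0)]
SAFE_Z_MM, DRILL_Z_MM = 50.0, -3.0   # hover height and drill depth

def drill_sheet(controller):
    # controller.move_to() and controller.drill() are invented stand-ins
    # for a real robot controller's API.
    for x, y in HOLE_POSITIONS_MM:
        controller.move_to(x, y, SAFE_Z_MM)   # approach above the hole
        controller.drill(x, y, DRILL_Z_MM)    # plunge and drill
        controller.move_to(x, y, SAFE_Z_MM)   # retract
    # Any change to the hole pattern means editing this file and
    # re-validating the program, the jig, and the station timing by hand.
```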

Apply Reinforcement Learning

If this arm can be placed into a physics-accurate simulation (either custom made or, more likely, built in a game-engine development environment), then the manufacturing cell can be replicated digitally and the process can be incentivized (e.g., points for identifying the locations of the holes and for drilling them, minus a small number of points for each second that passes). Paired with an agent-based reinforcement learning model, the arm can then learn, over thousands or millions of iterations, how to do the job.
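
As a sketch of how that incentive structure might be encoded, here is a toy, Gym-style environment whose reward function mirrors the scheme above: points for drilling a hole in the right place, and a small penalty for every second that passes. The DrillCellEnv class, the point values, and the simplified 2-D action space are assumptions for illustration; a real setup would sit on top of the game engine’s physics, with a standard RL algorithm (such as PPO) running millions of episodes against it.

```python
import math

class DrillCellEnv:
    """Toy stand-in for a physics-accurate simulation of the drilling cell."""
    HOLE_REWARD = 10.0      # drilling a hole in the right place
    TIME_PENALTY = -0.05    # cost of each simulated second
    TOLERANCE_MM = 1.0      # how close counts as "on the hole"

    def __init__(self, hole_positions):
        self.holes = list(hole_positions)

    def reset(self):
        self.remaining = set(range(len(self.holes)))
        self.tip = (0.0, 0.0)                   # drill tip position, mm
        return self._observe()

    def step(self, action):
        # action = (dx, dy, drill_flag); the real sim would also enforce
        # the arm's dynamics, joint limits, and collision physics.
        dx, dy, drill = action
        self.tip = (self.tip[0] + dx, self.tip[1] + dy)
        reward = self.TIME_PENALTY
        if drill:
            for i in list(self.remaining):
                if math.dist(self.tip, self.holes[i]) <= self.TOLERANCE_MM:
                    self.remaining.discard(i)
                    reward += self.HOLE_REWARD   # found and drilled a hole
        done = not self.remaining                # all holes drilled
        return self._observe(), reward, done

    def _observe(self):
        return (self.tip, tuple(sorted(self.remaining)))
```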

Would doing this once save time compared to explicit programming? No. But consider this: suppose that, instead of building a jig to hold the sheet, you passed the CAD model to the arm with the holes identified, and instead of aligning the robot by hand, you incentivized it to learn to detect the edges of the sheet with sensors, overlay the CAD drawing, and drill the holes. All of this can be simulated, and as long as the sensors on the digital robot accurately reflect the physical sensors used, this innovation is possible with today’s technology. The team can then make whatever product changes they want, and if the product needs holes drilled, the robot can run through the updated simulation and be ready to perform the task optimally and accurately, every time.
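
Concretely, the “overlay the CAD drawing” step amounts to estimating the rigid transform between the sheet’s CAD frame and the sheet as the sensors actually see it, then pushing every hole coordinate through that transform. Below is a minimal 2-D sketch that assumes edge detection has already produced two matched corner points; the coordinates are invented, and a real system would fit against many detected edge points (e.g., by least squares) for robustness.

```python
import math

def sheet_transform(cad_a, cad_b, seen_a, seen_b):
    """Estimate the rigid 2-D transform (rotation + translation) that maps
    CAD coordinates onto the sheet as detected by the sensors, using two
    matched corner points."""
    cad_angle = math.atan2(cad_b[1] - cad_a[1], cad_b[0] - cad_a[0])
    seen_angle = math.atan2(seen_b[1] - seen_a[1], seen_b[0] - seen_a[0])
    theta = seen_angle - cad_angle
    cos_t, sin_t = math.cos(theta), math.sin(theta)

    def apply(point):
        # Rotate about cad_a, then translate onto seen_a.
        x, y = point[0] - cad_a[0], point[1] - cad_a[1]
        return (seen_a[0] + cos_t * x - sin_t * y,
                seen_a[1] + sin_t * x + cos_t * y)
    return apply

# Holes from the CAD model, mapped into the robot's workspace frame.
cad_holes = [(25.0, 25.0), (25.0, 175.0), (175.0, 25.0)]
to_workspace = sheet_transform((0, 0), (200, 0),              # corners in CAD
                               (310.2, 88.5), (508.5, 114.6)) # as detected
workspace_holes = [to_workspace(h) for h in cad_holes]
```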

Incentivize through Key Metrics

However, we are working against a Leveraged Cost of Innovation, so let’s build a solution with the same kind of leverage. Start with the robotic arm, able to adapt to any situation so long as we have a virtual environment where it can essentially stop time, run through millions of simulations rapidly, then restart time with newly acquired knowledge. This by itself will be impressive as the use cases grow in complexity (the research is currently in its early stages, but expect a few papers in 2023 that expand on the concept). Now consider the robotic arm plus its associated manufacturing cell as one simulated process (inputs, processing, outputs). Then examine the string of processes, working together and incentivized to optimize key metrics like quality, cycle time, and safety. Finally, evaluate the entire factory, built with a modular focus and able to reconfigure its entire manufacturing process through Agent-Based Evolution. The complexity of this challenge is massive today, but, like the SpaceX and Tesla philosophies, it can be realized through rapid and iterative development.
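
At the factory scale, the genetic-algorithm component of Agent-Based Evolution could look something like the sketch below: each candidate “genome” is an ordering of the modular cells, scored by simulation against the key metrics. The simulate function here is a deliberately crude placeholder; in practice it would invoke the agent-based simulation of the full process string and score quality, cycle time, and safety.

```python
import random

CELLS = ["cut", "drill", "weld", "paint", "inspect"]   # modular stations

def simulate(layout):
    """Placeholder fitness function. A real version would run the
    agent-based simulation of the process string; here we simply reward
    layouts that inspect last and cut early, as a stand-in for real
    constraints, plus a little noise to mimic a stochastic simulation."""
    score = 5.0 if layout[-1] == "inspect" else 0.0
    score += 3.0 - layout.index("cut")            # earlier cutting is better
    return score + random.uniform(0.0, 0.5)

def evolve(generations=50, pop_size=30, elite=5):
    # Start from random factory layouts (permutations of the cells).
    population = [random.sample(CELLS, len(CELLS)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=simulate, reverse=True)
        parents = ranked[:elite]                  # keep the best layouts
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(CELLS))
            # Order crossover: a prefix from one parent, then the remaining
            # cells in the order they appear in the other parent.
            child = a[:cut] + [c for c in b if c not in a[:cut]]
            if random.random() < 0.2:             # mutation: swap two cells
                i, j = random.sample(range(len(CELLS)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = parents + children
    return max(population, key=simulate)

best_layout = evolve()
```

The same pattern could scale upward: the genome might encode not just the ordering of cells but which cells exist, how they connect, and which learned policies run inside each one, with the reinforcement learning layer adapting the individual agents while the evolutionary layer reshapes the factory around them.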

Final Thoughts

SpaceX and Tesla have blazed a trail not by having a few key eureka moments, but by following a philosophy that fearlessly pursues simplicity through rapid, iterative development. Building the machine that builds the machine is critical for this to happen. But imagine what could be accomplished by allowing that machine to build itself through Agent-Based Evolution.

About the author: Nathan Eskue is Associate Professor in the Faculty of Aerospace Engineering at TU Delft, specializing in AI, robotics, manufacturing, project/business management, and rapid-iteration prototyping. He holds degrees in business information systems, marketing, and operations management, an MBA, and a Master of Science in Data Science from Columbia University. Nathan has spoken at over fifty international conferences on AI, aerospace/defense, quantum, and other technical topics. He has worked for NASA, Raytheon, Northrop Grumman, and other aerospace organizations for over twenty years, with experience in product development, project management, supply chain, manufacturing, and AI architecture.

 