Case Study: Caterpillar

During the first build of the first-ever in-cab touchscreen for heavy earth-moving machines (excavators, bulldozers, etc.), I was faced with multiple issues. The first was that my team and I were still traveling to do persona-development research while we built out rough models and prototypes of the old “hard-button” in-cab system, converting and mapping its required features to touchscreen interaction models. The second was that the more we interviewed our users, the clearer it became that the old system wasn’t well understood at all: only its most basic features were being used, because the more advanced ones were too complicated to grasp, assuming the user could even find them in the overly complex navigation model that had been built.

When we had wrapped up our research and mapped the existing functions to the UI we had planned, we began to run our personas through the workflows we had created. As we worked through personas from all over the world, we landed on one critical issue that revealed a major disconnect between what had been built before, the engineers who build the machines and sensors, and the end users who had to operate them. In short, the system was designed by geospatial and mechanical engineers for geospatial and mechanical engineers, while our user base had, on average, a 10th-grade education and minimal understanding of how the machine and its features were meant to work. We had to find the middle ground: the software and machines were incredibly complex, with equally complex features and issues, while the end users who had to operate them couldn’t understand or take advantage of all the great features that had been built.

We began to reimagine the approach as if we were building everything brand new, even the existing features, and designing it for a user who could glance at the UI and immediately know everything they needed about their job. A few factors made this difficult.

  1. The machines are almost always moving, and almost always around people, while in use, so safety was paramount.
  2. Alerts had to be very clear and meet ISO standards, while still allowing the operator to see what the machine was doing, how the error or alert was affecting their current work, and how to correct the issue without losing focus on their surroundings (a rough sketch of how we thought about alert priority follows this list).
  3. The language used by the engineering teams who designed the machines was so complex that we often didn’t fully understand it ourselves.
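
To make the second constraint concrete, here is a minimal sketch of how alert severity and operator context might be modeled in the UI layer. It is written in TypeScript purely for illustration; the type names, severity tiers, and fields are assumptions for this example, not the production schema or the exact ISO requirements.

```typescript
// Illustrative sketch only: the names and severity tiers below are assumptions,
// not the actual production alert schema or the specific ISO requirements.
type AlertSeverity = "info" | "caution" | "warning" | "stop";

interface MachineAlert {
  id: string;
  severity: AlertSeverity;
  // Plain-language summary, written in the operator's vocabulary,
  // not the engineers' internal terminology.
  operatorMessage: string;
  // How the alert affects the task currently in progress (e.g. "grading paused").
  impactOnCurrentWork: string;
  // A single, clearly labeled corrective action the operator can take
  // without navigating away from the main work view.
  correctiveAction?: () => void;
}

// Only the most severe active alert interrupts the main view; everything else
// stays in a peripheral tray so the operator never loses sight of the machine.
function selectPrimaryAlert(alerts: MachineAlert[]): MachineAlert | undefined {
  const rank: Record<AlertSeverity, number> = { stop: 3, warning: 2, caution: 1, info: 0 };
  return [...alerts].sort((a, b) => rank[b.severity] - rank[a.severity])[0];
}
```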

This was the goal we set out to achieve: a system that took complex real-world controls, machines, and verbiage and turned them into a simple, easy-to-use, at-a-glance interface.

We landed on a three-pronged UX approach:

First, I sat down with the engineers and had them explain their complex work, systems, and features to me and the UX team as if we were children; we even let them use simple children’s toys to illustrate. That was when the epiphany struck: if we could understand it easily with a toy in front of us, we needed the toys to be in the UI. So we began the arduous journey of building a low-poly, live-rendered 3D model that showed the machine working in real time within one of the UI views. We often referenced the following quote:

“If you can’t explain it to a six year old, you don’t understand it yourself.”

-Albert Einstein
Theoretical Physicist
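
To make the “toys in the UI” idea concrete, here is a minimal sketch of driving a low-poly stand-in model from live machine telemetry, written against Three.js purely for illustration. The geometry, the `boomAngle` field, and the rendering setup are assumptions about how such a view could be wired up, not the actual in-cab implementation.

```typescript
// Minimal sketch only: this is not the production renderer. It assumes the
// Three.js library and a hypothetical telemetry field (boomAngle, in radians).
import * as THREE from "three";

interface ExcavatorTelemetry {
  boomAngle: number; // hypothetical field name, for illustration
}

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, 16 / 9, 0.1, 100);
camera.position.set(6, 4, 8);
camera.lookAt(0, 1, 0);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(640, 360);
document.body.appendChild(renderer.domElement);

// Low-poly stand-ins: a box for the machine body, a thin box for the boom.
const body = new THREE.Mesh(
  new THREE.BoxGeometry(3, 1.5, 2),
  new THREE.MeshNormalMaterial()
);
const boom = new THREE.Mesh(
  new THREE.BoxGeometry(0.4, 0.4, 4),
  new THREE.MeshNormalMaterial()
);
boom.position.set(0, 1, -2); // rough pivot at the front of the body
body.add(boom);
scene.add(body);

// Map live telemetry straight onto the model so the on-screen machine
// mirrors what the real machine is doing at that moment.
function applyTelemetry(t: ExcavatorTelemetry): void {
  boom.rotation.x = t.boomAngle;
}

applyTelemetry({ boomAngle: 0.35 }); // example: one frame of telemetry

function animate(): void {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```

The point of the sketch is the mapping step: telemetry flows directly onto the model, so the operator sees the machine move on screen the same way it moves in the world.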

Second, I recorded their explanations, examples, and models and used those recordings to inform all the language used throughout the entire system. This let us capture the true meaning and exact form and function of each feature and present it so that anyone could learn it easily on their first pass through.

Third, I rallied executive support to expedite the automation features that were a key component of what we were building. In essence, we wanted to remove as much of the operator’s workload as possible: make it easy to set up a job and set its parameters, then let the machine work and control itself so the operator could focus on more pressing concerns such as safety and accuracy. This launched us into an entirely new realm of possibilities.
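
As a rough illustration of what “set up a job, set parameters, then let the machine work” could look like at the data level, here is a hypothetical TypeScript sketch; the `GradeJob` fields, units, and defaults are invented for this example and are not the configuration model we actually shipped.

```typescript
// Hypothetical job-setup model for illustration only; the field names, units,
// and defaults are assumptions, not the actual configuration schema.
interface GradeJob {
  name: string;
  targetDepthMeters: number;    // desired cut/fill depth for the pass
  toleranceMillimeters: number; // acceptable deviation before the machine corrects itself
  autoGradeControl: boolean;    // let the machine hold the target on its own
}

// The operator answers a few plain-language questions once, sensible defaults
// cover the rest, and the system carries the load from there.
function createJob(partial: Partial<GradeJob>): GradeJob {
  return {
    name: partial.name ?? "New job",
    targetDepthMeters: partial.targetDepthMeters ?? 0,
    toleranceMillimeters: partial.toleranceMillimeters ?? 20,
    autoGradeControl: partial.autoGradeControl ?? true,
  };
}

// Example: everything beyond a name and a target depth is handled for the operator.
const job = createJob({ name: "Pad 4 rough cut", targetDepthMeters: 0.5 });
```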

Conclusion: The best systems are built for their users; losing sight of that is how we alienate or exclude our user base. No matter how complex or complicated a machine or piece of software is, the number one priority has to be matching effective usability and information architecture to the core of your user base, even if that means going back to basics and re-examining the work in a new light, sometimes going so far as to restart from the beginning to make sure it’s done right. As a UX designer, having to completely restart is frustrating and can be demoralizing, but the alternative, alienating your user, is sacrilege.