Quoting Rules Dashboard @Loadsmart

Product design. Internal tools. 2021

Context

Loadsmart is an American company that creates technology solutions for the logistics industry, serving carriers, shippers, and facilities. It operates an internal brokerage enabled by digital tools, and it also develops tech tools to enhance other companies' operations.

I worked as a product designer for two related teams: Pricing Rules and Pricing Contracts. The teams focused on improving internal tools and dashboards, enabling account managers and sales representatives to set rules for the systems to automatically accept, reject, ignore, or postpone quotes and tenders.

The Quoting Rules tool shows all the rules created to automatically handle quote requests coming from shippers' APIs, enabling account managers and sales reps to focus their time on the most complex and promising ones.

Challenge

Before, users had access to a very rudimentary version of this platform, which:

  • only allowed them to view, enable, and disable the rules that had been created;
  • offered very limited filtering options, forcing users to rely on auxiliary Excel spreadsheets to generate reports;
  • did not let them create rules themselves, so they had to send support e-mails or open internal Zendesk tickets.

The main problem was that creating and editing rules was a constant demand that had to be handled by the pricing team's developers. Such tasks blocked the product teams from focusing on more proactive and innovative initiatives.


Process

Loadsmart uses a design approach where discovery and delivery happen continuously. When the team took on this product, I set up time for research and alignment and began brainstorming ideas. Since we didn't have much time for discovery, I aimed to reduce uncertainty and align the team from the start.

Below is an illustration of the main steps of the process. It is depicted linearly for explanatory purposes only, as efforts in the different steps naturally influenced each other.

1. Framing and agreeing on the problem

Through many internal conversations with stakeholders, it became clear that the two major intentions were:

  • to speed up the process of creating new rules, which had become a bottleneck on the business side, and
  • to free up engineering time so the product teams could work on new features and strategic opportunities.

After further discussions, we aligned on tackling the problem by expanding the current Quoting Rules tool to accommodate a new feature for the self-service creation and editing of automatic rules.

The main metric we used to measure the feature's success was the time it took for a rule to be planned, created, and enabled.


2. User research with account managers and rules engineers

I opted to conduct brief, semi-structured interviews with potential users of this feature. I chose 10 participants, aiming for a diverse group spanning business roles, support engineers, and pricing managers. Each user was invited to a 30-to-45-minute session, the maximum time we could usually get from the internal teams.

I chose semi-structured interviews as the research method for several reasons. Firstly, the availability of participants was uncertain, making group-based methods like workshops and focus groups less feasible. Secondly, the diverse user profiles and the various objectives they had in using the tool posed challenges for quantitative or structured approaches. Lastly, the complexity of the topics to be discussed, such as how users described the rules they wanted created and what sort of follow-up questions those descriptions led to, made the flexible nature of semi-structured interviews particularly well suited to delving into these areas.

I focused on three key goals in this research:

a) Mapping business users' routines to assess when, why, and how they felt the need for new rules;

b) Understanding how business users requested the creation of new rules, and what their process looked like;

c) Getting insights from the tech and support teams to assess the state of the rules system and the process behind creating rules.

The interviews were successful in mapping how users' routines involved the creation and management of quoting rules. I was able to uncover relevant insights, such as the ones illustrated below.

3. Technical discovery

In parallel with the user research, I also ran a technical discovery to understand how the rule-setting system worked and what was possible for such a feature.

By working alongside the pricing engineers who maintained the original system, I could map the attributes that usually accompanied quote requests, and thus the aspects that could become targets for automatic rules.

I mapped 35 possible conditions/attributes and 6 actions. They are represented in the picture below. 

The technical discovery provided a complete map of all the conditions I could rely on to power the rule-setting system. Moreover, by crossing this "map" with insights uncovered through user research, I could understand which of them were most important and most understandable to users in a future self-service UI.
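To make the shape of such a rule concrete, here is a minimal sketch in Python. Every name in it is a hypothetical illustration: the real system's schema, its 35 attributes, and its full set of 6 actions are not reproduced here; the sketch only shows the conditions-plus-action structure the discovery surfaced.

```python
# A minimal, hypothetical model of a quoting rule: a set of conditions over
# quote-request attributes, plus the action to take when they are met.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    # Actions named in this case study; the real system had 6 in total.
    ACCEPT = auto()
    REJECT = auto()
    IGNORE = auto()
    POSTPONE = auto()


@dataclass
class Condition:
    attribute: str   # one of the ~35 mapped quote-request attributes
    operator: str    # e.g. "equals", "greater_than" (illustrative only)
    value: object    # the value the attribute is compared against


@dataclass
class QuotingRule:
    name: str
    description: str
    conditions: list[Condition]
    action: Action
    enabled: bool = True


# Example: ignore quote requests from a hypothetical shipper "ACME".
rule = QuotingRule(
    name="Ignore ACME",
    description="Skip quotes from this shipper during the pilot.",
    conditions=[Condition("shipper", "equals", "ACME")],
    action=Action.IGNORE,
)
```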

4. Explorations, definition of concept, and prototyping

After synthesising the user research and compiling the findings of the technical discovery, I began exploring concepts to address the problem at hand.

Although very simple, the existing implementation of the Quoting Rules Dashboard already allowed users to enable and disable rules, in addition to visualising them, which covered part of their needs. Therefore, I first concentrated on elaborating the self-service solution for creating and editing rules.

A general concept for creating rules

First, I started to put together what I knew about users' needs regarding those quoting rules and what a process for creating them could look like. Inspired by the fact that our users were accustomed to computational language and to speaking about those rules in logical, "algorithmic" terms, I started exploring the concept of visual programming and logic operators.

I prepared a series of sketches and references and brought them to the team. For example, I explored directions such as a list of conditions that users could enable and disable to shape the rules, or a more flexible system for complex rules that allowed multiple actions depending on a combination of conditions.

These first high-level boards helped me get feedback from my team and, combined with the user research, grow more confident in the concept of visual programming, while simplifying it for our users' needs.

Overall interaction and first versions of the tool

I narrowed down my explorations to a feature with board-like behaviour into which users could add building blocks representing the conditions of their rules. The overall intention was to provide a very visual and organised tool to mitigate confusion and reduce the cognitive load of such a complex process.

First two concepts

Looking to engage my team, make my propositions more tangible, and start testing, I prototyped two different, yet similar, versions.

1) In the first one, I considered a two-column layout inserted into the grid of the main system. In one of the columns, users could scroll through all the available conditions and add them to the column on the right, the "execution" space.

Every condition brought to the board would be automatically inserted as an "AND" condition, meaning that the rule would be executed only if all of the conditions were met for a given quote.
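In logical terms, this first concept treats a rule as a pure conjunction of its conditions. Below is a small, hypothetical sketch of that evaluation in Python; the attribute/operator/value condition shape and the example field names are assumptions for illustration, not the production logic.

```python
# Implicit-AND semantics of the first concept: the rule fires only when
# every condition matches the incoming quote. The condition shape and the
# field names below are hypothetical illustrations.

def condition_matches(quote: dict, condition: dict) -> bool:
    actual = quote.get(condition["attribute"])
    if condition["operator"] == "equals":
        return actual == condition["value"]
    if condition["operator"] == "greater_than":
        return actual is not None and actual > condition["value"]
    return False  # unknown operators never match


def rule_matches(quote: dict, conditions: list[dict]) -> bool:
    # Every block the user drags onto the board joins as an AND condition.
    return all(condition_matches(quote, c) for c in conditions)


quote = {"shipper": "ACME", "total_miles": 1200}
conditions = [
    {"attribute": "shipper", "operator": "equals", "value": "ACME"},
    {"attribute": "total_miles", "operator": "greater_than", "value": 1000},
]
assert rule_matches(quote, conditions)  # both conditions hold, rule fires
```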

2) In the second one, I considered a big board for setting the rules. My main assumption was that the compact right column of the first version could run out of space for more complicated rules. Through a floating list of conditions, users could access the options whenever they wanted to add something else to the "execution" space.

Moreover, exploring a more flexible and complexity-friendly rule-setting mechanism, the second concept allowed users to choose between the AND and OR operators to connect conditions.
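The difference between the two connectors boils down to how the individual condition results are combined, as in this small, hypothetical sketch:

```python
# The second concept lets users pick how condition results are combined:
# "AND" keeps the first concept's behaviour, "OR" fires on any match.

def combine(results: list[bool], connector: str) -> bool:
    return all(results) if connector == "AND" else any(results)


assert combine([True, True, False], "AND") is False  # one miss blocks the rule
assert combine([True, True, False], "OR") is True    # a single hit suffices
```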

5. Usability test, refinement, and the final version

The concepts were tested with existing users of the tool through a mixed approach. With those who had time to attend a synchronous meeting, I conducted moderated usability tests. However, to avoid losing a relevant share of participants, I also prepared a prototype in UseBerry to test the usability asynchronously.

The overall concept was validated, as it provided users with a straightforward and understandable way to create and edit quoting rules. However, I could see that some aspects were suboptimal, while others were overcomplicated.

After these tests, my decisions were to:

  • release the self-service tool from the frame of the main system, which was making scrolling and visualising rules complicated, especially big ones;
  • keep the column with conditions on the left, as it provided faster usage than the floating list of the second version.

The experience was complemented by designing:

  • the steps for selecting an action when conditions were met;
  • a last step for reviewing the created rule, naming it, and providing a description.

6. Results

After implementing the solution, we successfully delivered a tool for the self-service creation and editing of rules.

Before, a rule would take anywhere from 2 days to 1 week to be created, when it was not urgent.

After the change, rules could be created on the go, requiring users to go through a flow of at most 10 minutes.

Thus, results were positive!


We also had some learnings for future iterations:

  • The most complicated rules still required specialised support, but those were a minority.
  • Once created, rules required little editing. Users would rather activate and deactivate them for seasonal actions.
  • Some rules were created incorrectly, which sparked conversations about having an in-between layer for approval.

I am still setting up this page. In the meantime, I'm happy to discuss this project in more detail. Find me online.

Thank you,
