Loads Integration Dashboard @Loadsmart

Product design. 2022

Context

Loadsmart is an American company that creates technology solutions for the logistics industry, serving carriers, shippers, and facilities. 

I worked as a product designer on two related teams: Pricing Rules and Pricing Contracts. The teams focused on improving internal tools and dashboards and on enabling account managers and sales representatives to set rules for the systems to automatically accept, reject, ignore, or postpone quotes and tenders.

The Loads Integration Dashboard, owned by the pricing teams, shows all the loads coming in automatically from shippers' APIs and helps Loadsmart's account managers and sales reps decide which loads to accept or reject. This can happen automatically based on rules or manually by checking details like origin, destination, and cost.

For instance, if a big client like Walmart sends 1000 load requests a day, most are handled automatically. But some need a person to review. That's where the dashboard helps.
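
To make the rule mechanics concrete, below is a minimal sketch of how such automated decisioning could work. All type names, fields, and thresholds are illustrative assumptions on my part, not Loadsmart's actual rule engine.

```typescript
// Sketch of rule-based auto-decisioning; names and thresholds are hypothetical.
type Decision = "accept" | "reject" | "ignore" | "postpone";

interface Load {
  shipper: string;
  origin: string;
  destination: string;
  cost: number;
}

interface PricingRule {
  matches: (load: Load) => boolean;
  decision: Decision;
}

// Example rules an account manager might configure.
const rules: PricingRule[] = [
  // Auto-accept cheap loads from a trusted shipper.
  { matches: (l) => l.shipper === "Walmart" && l.cost < 1500, decision: "accept" },
  // Reject anything above a cost ceiling.
  { matches: (l) => l.cost > 10000, decision: "reject" },
];

// First matching rule wins; null flags the load for manual review
// on the dashboard.
function decide(load: Load): Decision | null {
  const rule = rules.find((r) => r.matches(load));
  return rule ? rule.decision : null;
}
```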

Challenge

Before this project, users had access to a basic Loads Integration Dashboard. However, it had usability issues and displayed information inefficiently. Users were spending unnecessary time analyzing and deciding on loads that the system would handle automatically anyway. Although the reasons behind this behavior were initially unclear, research helped uncover them. Users also needed to constantly refer to Excel spreadsheets for relevant data, which was error-prone.


Process

Loadsmart uses a design approach where discovery and delivery happen continuously. When the team turned its focus to this product, I set up time for research and alignment and began brainstorming ideas. Since we didn't have much time for discovery, I aimed to reduce uncertainty and align the team from the start.

Below is an illustration of the main steps of the process. It is depicted linearly for explanatory purposes only, as efforts in different steps naturally influenced one another.

1. Framing and Agreeing on the Problem

Initially, the team wanted an easy-to-use panel for decision-making information. But after studying the problem and the insights, I realized that showing data wasn't the main goal: building trust in the system's decisions was more important. This would let users explore new clients and deals and analyze only key loads, saving time and money.


2. User Research with Account Managers and Sales Representatives

With these goals in mind, I opted to conduct brief, semi-structured interviews with potential users of this tool to gather insights. Seven users were chosen, aiming for a diverse group with various roles related to the project. Each user was invited to a 30- to 45-minute session, which was the maximum time we could usually get from the commercial teams.

I chose semi-structured interviews as the research method for several reasons. Firstly, the availability of account managers was uncertain, making group-based methods like workshops and focus groups less feasible. Secondly, the diverse user profiles, including those of their clients, and the various objectives they had in using the tool posed challenges for quantitative or structured approaches. Lastly, the complexity of the topics to be discussed, such as pertinent data points, export functions, and related matters, made the flexible nature of semi-structured interviews particularly suited to exploring these areas effectively.

I focused on three key goals in this research:

a) Find out how users feel about automated decisions handling their work;

b) Discover why they manually accept or reject loads despite platform alerts about automatic decisions;

c) Evaluate their perception of the quality of the displayed information.


The research goals above inspired the creation of the script below, which was applied in these interviews over one to two weeks. "LID" stands for Loads Integration Dashboard.

The interviews were successful in providing hypotheses about why our users did not trust automatic decisions. I was able to uncover relevant insights, such as the ones illustrated below, and map usage scenarios, such as the ones on the board that follows.

3. Design Principles

As the interviews were happening, I worked through cycles of incorporating relevant insights into a prototype, which I used as the basis of an experimental approach. Since Loadsmart has a solid design system, experimenting with different options was not painful, so this approach was favoured.

Based on the evidence collected and on internal discussions, I established the design principles (or propositions) below and organised my design efforts according to them.

4. Prototyping

My approach had two main focuses. Firstly, I concentrated on designing a better version of the Loads Integration Dashboard. This meant enhancing its organization and cleanliness, as conversations with users and event tracking revealed that the old dashboard's crowded appearance and tightly packed data points led to frequent errors. To address this, I created more white space, increased row size, and unified the filter designs.

The images below illustrate the tool's state before and after the changes.

The second focus was to work effectively on the design propositions to address our problem. Among other aspects, the following examples illustrate how I chose to tackle them via interface design.


1 - Principle 1 was tackled through a column designed to be highly visible and actionable, informing users about what is going to happen with their loads (or what has happened) and allowing them to obtain quick insights across the many loads they commonly receive. I also created a system of icons to represent the possible decision states (section on the right of the picture below).

2 - Research indicated users needed the option to disregard loads but still keep them on the board for productivity and peace of mind (principle 2). To address this, I introduced a "hide/unhide" feature. Users can decide which loads appear in the main view using a toggle, letting them control whether the loads are visible while maintaining their original order.
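
A minimal sketch of that behavior follows, assuming a hypothetical Load type: keeping hidden IDs in a separate set means the loads array is never reordered, so un-hiding restores a load to its original position.

```typescript
// Sketch of the hide/unhide logic; types and names are hypothetical.
interface Load {
  id: string;
  origin: string;
  destination: string;
}

// IDs the user chose to hide, kept apart from the loads themselves.
const hiddenIds = new Set<string>();

// Flip a single load's visibility.
function toggleHidden(loadId: string): void {
  if (hiddenIds.has(loadId)) {
    hiddenIds.delete(loadId);
  } else {
    hiddenIds.add(loadId);
  }
}

// Array.filter preserves order, so un-hiding a load returns it
// to its original position in the main view.
function visibleLoads(loads: Load[], showHidden: boolean): Load[] {
  return showHidden ? loads : loads.filter((l) => !hiddenIds.has(l.id));
}
```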

3 - Users can access more load information by clicking the peek button, which opens a drawer. Here, they can also change an automated decision if needed. However, as we don't want to encourage this behavior, users have to provide justification through a modal after manually changing a decision.

4 - Users can click the peek button again to review the rules behind each system-made decision. This enhances transparency, boosting account managers' and sales reps' trust in decisions over their loads (principle 4). Additionally, it empowers them to request revisions or adjustments to these overarching rules.

5 - Through research, we pinpointed the essential information users need to decide on loads (when required). Even when they don't change system-made decisions, users frequently access the LID for visibility. Considering this, I recommended filtering options based on prioritized usage scenarios.

Additionally, our users often kept their queries consistent (managing the same customers and recurring information). To accommodate this, we decided to cache their searches. I opted for clear filter chips to reassure them that the data on display was up to date and reflected the applied filters.
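
As a rough illustration of the caching idea, the sketch below persists the last-used filters in localStorage and derives the chip labels from them; the field names and the storage choice are assumptions, not the production implementation.

```typescript
// Hypothetical persistence of the user's last search on the client side.
interface LidFilters {
  customer?: string;
  origin?: string;
  destination?: string;
}

const STORAGE_KEY = "lid:last-filters"; // hypothetical storage key

function saveFilters(filters: LidFilters): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(filters));
}

function restoreFilters(): LidFilters {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as LidFilters) : {};
}

// Chips make the restored query explicit, e.g. "customer: Acme".
function filterChips(filters: LidFilters): string[] {
  return Object.entries(filters)
    .filter(([, value]) => value !== undefined && value !== "")
    .map(([key, value]) => `${key}: ${value}`);
}
```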

5. Usability Testing, Refinement and Continuous Delivery

Once the initial design proposal was complete, it was subjected to usability tests for user feedback. After analyzing and prioritizing the feedback from these tests and other discussions, I refined the design into a deliverable. No major usability problems were detected, and we uncovered some insights for future iterations, such as extra filters users would like to have. Users were actually excited about the change, thanks to the inclusion of highly anticipated features.

I then worked on creating the necessary specifications for delivery, and we started agreeing on how development would be carried out.

I called it "continuous delivery" because, as mentioned before, the developers of the pricing teams could not dedicate their capacity solely to this project, which extended the timeline. Additionally, during the implementation of the first release, regular meetings with pricing users and account managers became routine. These discussions often highlighted valuable new aspects to incorporate into the product. Some of these suggestions were prioritized, leading to adjustments in designs and collaboration with developers on implementation. This conversation remained active and open.


6. Tracking and Monitoring

Since respected automated decisions were the direct answer to our problem, data on the number of respected decisions was fetched from the database and used to infer the overall quality of the project.
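
As an illustration of the metric, the sketch below computes the share of automated decisions that users left untouched; the record shape and field names are assumptions, not the actual schema.

```typescript
// Illustrative computation of the "respected decisions" rate.
interface DecisionRecord {
  loadId: string;
  automatedDecision: "accept" | "reject" | "ignore" | "postpone";
  manuallyOverridden: boolean; // true if a user changed the decision
}

// Share of automated decisions that users did not override,
// e.g. 0.83 after the release versus 0.21 before the redesign.
function respectedDecisionRate(records: DecisionRecord[]): number {
  if (records.length === 0) return 0;
  const respected = records.filter((r) => !r.manuallyOverridden).length;
  return respected / records.length;
}
```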

Moreover, usage events were specified and collected on the client side to monitor my design interventions and potentially identify usability problems. To this end, I used Google Tag Manager to target interface elements, collect these events, and send them to Mixpanel, the tool we used to generate experience-related dashboards (see the pictures below for examples).

Additionally, I set tracking tags on action elements inside the drawer to assess the relevance of the included features, also hoping to obtain further data on the reasons behind certain decision behaviours. These are exemplified in the pictures below.
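
The snippet below sketches the client-side half of this setup: a dataLayer push that a Google Tag Manager trigger can pick up, and a direct Mixpanel call. The event and property names are hypothetical; mixpanel.init, mixpanel.track, and the GTM dataLayer are the tools' standard entry points.

```typescript
import mixpanel from "mixpanel-browser";

// Placeholder token; init must run once before any tracking call.
mixpanel.init("YOUR_MIXPANEL_TOKEN");

declare global {
  interface Window {
    dataLayer: Record<string, unknown>[];
  }
}

// Push an event for a Google Tag Manager trigger to pick up and forward.
// "lid_drawer_action" is a hypothetical event name.
function trackDrawerAction(action: string, loadId: string): void {
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({ event: "lid_drawer_action", action, loadId });
}

// Or send the override event straight to Mixpanel, including the
// justification users must provide when changing a decision.
function trackDecisionOverride(loadId: string, justification: string): void {
  mixpanel.track("LID Decision Override", { loadId, justification });
}
```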

7. Impact

After monitoring the release for one month, we observed a positive impact:

  • 83% of the automated decisions were respected in the first month after the release, against 21% before the design improvements.
  • Adoption of features was positive. For example, most of our users started using the newly added filters right away.
  • The hide/unhide feature saw modest adoption, by 18% of our users. We had the opportunity to schedule some feedback calls and uncovered that users found it unnecessary once we improved the display of information and the emphasis on decision states. We decided to keep the feature because it was cheap to maintain and highly praised by those who used it.
  • Integration mistakes could be reversed, and we could start gathering data on why users sometimes needed to change decisions.
  • The link to the rules’ pages was accessed by half of our users, which led us to further research and the conclusion that it was only necessary for those who managed many clients and rules.

I am happy to discuss this project in more detail. Find me online.

Thank you,
