
Charity Navigator

Charity Navigator's Encompass Rating interface. There is one main score showing 75 out of 100. This score is comprised of 4 metrics. A finance metric showing a score of 75 out of 100. An integrity metric showing a score of 60 out of 100. An Impact score of 70 out of 100. Lastly, a diversity metric showing a score of 90 out of 100.

☕️ Get to Know Beth Better:

  1. Everything starts with understanding the content.
  2. Clearly defining your scope and your MVP makes work easier.
  3. Helping clients reveal what they already know about their users makes for fun and interesting work.

Project Overview

01. Task

Design a web page template that displays ratings information about charities.

02. Responsibilities

Design, Wireframes, and Information Architecture

03. Tools

Sketch, Figma, Dropbox Paper, InVision

The Work

My Role

I was tasked with creating a wireframe and web design template for a charity profile page. However, the project ended up requiring me to take on quite a few roles outside of that. Overall, I had a chance to work on the page's Information Architecture, UX design, and a variety of UI elements.

Since the success of any design project can be defined by its processes, it's important to acknowledge that this is a case study in failure. That makes this project one of the most valuable experiences I've had as a designer. Experiencing failure in this way has helped me to better internalize some really important lessons: specifically, the value of developing and clearly communicating shared definitions and established truths. The final product would have been better served by a stronger internal process.

Experiencing failure in this way has helped me to better internalize some really important lessons.

It's important to note that the work shown here is not the final project outcome. Instead, I am presenting an idealized version of the project, using the actual boundaries and challenges presented by the content.  

Problem Statement

Task: Design a web page that displays ratings information for charities.

Charity Navigator was preparing to launch a new rating system for over 100,000 charities. The new system was made possible by newly developed automation for data mining and analysis. The primary users for this page are potential donors, who may be casual users or power users.

A laptop showing the Charity Navigator charity profile page.
The final web page design on desktop

Design Solutions

The final page design is the result of several different design problems. Each problem required an individual solution, and those solutions work together in one cohesive system. This case study speaks to each of the five primary design problems, listed here.

  1. Designing a System
  2. Profile Themes
  3. Sidebar
  4. Advisories
  5. Metrics > Data Sections

1. Designing a System

Charity Navigator needed help organizing ratings information for their users. They wanted to build an iterative system that could easily add, remove, or alter metrics as needed. The goal of this new system is to present fair and unbiased guidance about charities to donors. It is the backbone of a charity's profile page on the Charity Navigator website. While donors are the primary audience, the system also needs to work for the charities being rated. To help charities use the rating system, it needed to provide clear targets and shareable achievements.

Initially, we tried to define the system as a collection of badges. Above the badges, one cumulative score reinforces the charity's performance.

A square showing the cumulative score up top and five smaller circles representing badges underneath.
One cumulative score and numerous badges

FOUR PROBLEMS WITH BADGES:

Problem 1: The metrics behind the badges are not binary evaluations separate from the cumulative score. Often, they are scaled metrics ranging from 0–100; they could also be a binary value or a ratio. Some metrics are achieved with a high score, while others are achieved with a low score. Additionally, each metric has a different weighted impact upon the cumulative score. In other words, Badge 1 ≠ Badge 2 ≠ Badge 3.

The circular badges are each different sizes, showing that they have differing impacts on the cumulative score.
Problem 1: Inaccurate Representation
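To make Problem 1 concrete, here is a minimal Python sketch of why the badges can't be read as equal, binary achievements. The metric types, weights, and values are illustrative assumptions, not Charity Navigator's actual methodology.

```python
# Hypothetical sketch: each metric has its own type, direction, and weight,
# so two badges never represent the same kind or amount of achievement.

def normalize(metric):
    """Map a raw metric value to a common 0-100 scale."""
    if metric["type"] == "binary":          # pass/fail metric
        return 100.0 if metric["value"] else 0.0
    if metric["type"] == "ratio":           # e.g. an expense ratio
        score = metric["value"] * 100
    else:                                   # already on a 0-100 scale
        score = metric["value"]
    # Some metrics are achieved with a low score, so invert those.
    return 100 - score if metric.get("lower_is_better") else score

def cumulative_score(metrics):
    """Weighted average of normalized metrics (weights sum to 1)."""
    return round(sum(normalize(m) * m["weight"] for m in metrics), 1)

metrics = [
    {"type": "ratio",  "value": 0.85, "weight": 0.5},  # heavily weighted
    {"type": "binary", "value": True, "weight": 0.2},  # pass/fail
    {"type": "scale",  "value": 60,   "weight": 0.2},
    {"type": "ratio",  "value": 0.10, "weight": 0.1, "lower_is_better": True},
]
```

Because each metric differs in type, direction, and weight, rendering them all as identical badges misrepresents the underlying math.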

Problem 2: Charity Navigator wanted the flexibility to add new metrics, with no maximum on how many they might add. This meant that, over time, the system itself could become unwieldy and difficult to understand.

A square representing one cumulative score is up top. Underneath it is a proliferation of badges, creating an overwhelming visual.
Problem 2: An Overwhelming Amount of Information

Problem 3: Badges work best as a reward system. This system needs to communicate three states to users: pass, fail, and not applicable. Showing failed badges is more transparent for users. However, highlighting those failures felt overly punitive towards the charities themselves.

On the left there is a box showing one cumulative score with 5 badges underneath it. 2 of the badges are marked as "earned"; one is marked as "failed"; and the last two are marked as "Not Applicable." On the right, there is a cumulative score with just two earned badges underneath it.
Problem 3: Unfairly Punitive or Less Transparent

Problem 4: Changing the definition of the cumulative score makes scores difficult to compare over time. Charity Navigator's users would struggle to benchmark a charity's annual performance. Charities would also struggle to know how they can improve their score. Lastly, Charity Navigator would struggle to present clear authority in the industry.

There are three rows. The top row shows scores for 2018. It has a cumulative score of 1 out of 2, which is derived from 1 earned badge, 1 failed badge, and 4 not-applicable badges. The second row shows the scores for 2019. There is a cumulative score of 4 out of 4. This is made from four earned badges and two not-applicable badges. The third row shows the scores for 2020. It has a cumulative score of 3 out of 6. This is derived from 3 earned badges and 3 failed badges.
Problem 4: Difficult to Benchmark

Overall, badges introduce problems for users, the client, and charities. They devalue the ratings and expertise established by Charity Navigator.

SOLUTION

The system developed when I came upon an article on the Charity Navigator website about donor guidance. It turns out the client already had a great outline for how users should evaluate a charity. Charity Navigator instructs users to first determine if the charity is genuine. Next, users should consider any possible cautions or advisories. With those established, a user should let their personal values guide them. Things that might be important to donors include the following: financial stability, integrity, impact, and culture. Using these four categories, we can standardize a rating system and introduce stability. The branding team developed terminology for these metric categories, calling them "beacons."

A graphic showing an overall score of 90 out of 100. Beneath it are four beacons. One labeled Financial Health/Fiscal Responsibility. The second is labeled Accountability & Transparency. Next is a beacon labeled "Efficacy/Impact." Lastly is a beacon labeled "Integrity/Commitment."
Standardizing the metrics into four top-level categories

The four category system...

  1. creates a stable structure and a more consistent annual benchmark.
  2. teaches users how to align their donation habits with their personal value systems.
  3. explains the ratings methodology, when and where users look for it.

Building a content strategy based on this system helped guide the user's experience of the page. I organized the levels of information into three distinct areas: Analysis > Metrics > Data. This helped to create content patterns that can apply to a wide range of information. It maximizes transparency and information access, without overwhelming users.

Three steps to the information structure. 1. Analysis: Our curated values to guide donors. 2. Metrics: Our methodology to analyze data. 3. Data: Our materials to measure, compare, and evaluate.
This nested information structure was educational, flexible, and transparent. It helped to reinforce Charity Navigator's positioning as a "guide dog, not a watch dog."
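The three-level structure above can be sketched as nested data. This is a minimal illustration of the Analysis > Metrics > Data pattern; the section name, field names, and figures are hypothetical assumptions, not actual Charity Navigator data.

```python
# Hypothetical content model for one beacon section, nesting the three
# information levels: Analysis (level 1) > Metrics (level 2) > Data (level 3).
finance_section = {
    "analysis": "Curated guidance for donors",        # level 1
    "metrics": [                                      # level 2
        {
            "name": "Program Expense Ratio",
            "score": "60/70 possible points",
            "data": {                                 # level 3: raw materials
                "program_expenses": 846_000,          # illustrative figures
                "total_expenses": 1_000_000,
            },
        },
    ],
}

def drill_down(section, metric_name):
    """Walk from the analysis level down to the raw data for one metric."""
    for metric in section["metrics"]:
        if metric["name"] == metric_name:
            return metric["data"]
    return None
```

Nesting the data beneath its metric keeps the raw materials available on demand without surfacing them in the top-level analysis, mirroring how the page reveals depth without overwhelming users.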

2. Profile Themes

Charity Navigator wanted to allow charities to customize their profile page. I needed to define clear constraints to ensure this feature could be introduced in a realistic manner. These constraints were:

  1. They did not have the business plan or infrastructure to support policing images and content.
  2. Many charities lack internal resources to develop high-quality visual content.
  3. The page needs to clearly convey that Charity Navigator is the authority in control of the analysis. Too much customization could visually imply that a charity is in control of the page.

The solution was to create a limited number of themes from which charities could select their page's appearance. There is no need for image policing, or fear of creating inaccessible experiences. Plus, the array of available themes can grow alongside the system, to celebrate milestones.

Three different homepages. The first has a dark blue header with an orange pattern. The middle has a turquoise header with a white circle pattern. The third has a white header with a multi-colored repeating pattern.
Three different homepage themes

3. Sidebar + Collapse on Scroll

We needed to concisely differentiate analytics information (the ratings) from profile information (facts about the charity). Low-fidelity wireframes were useful to help the client first think through what the system should be communicating.

Wireframes showing a wide variety of score and beacon combinations.
Wireframes exploring the relationship between the cumulative score and beacon metrics

Once the system was more clearly defined, we were able to articulate goals for visual solutions. Working with a wide range of example metrics helped to rapidly battle-test ideas. The sidebar design was an exercise in Gestalt principles. It benefited greatly from the principles of similarity and common region. These ideas helped to clearly reiterate the system's meaning with the resulting visuals.

Early iterations tried to visually combine the score with the beacons earned. Informal user testing (asking my husband) showed that including icons did not enhance the overall understanding. As he put it, "I understand what 75 out of 100 means." The simplification of the sidebar was scary but liberating. It allowed me to introduce a new visual cue of color strength to supplement the numeric values of the beacons.

Six different iterations of how to show the cumulative score with our beacon metrics underneath it. Progression is made from a set of four cards to a list of the metrics. Additionally, the first five show an attempt to duplicate the "earned beacons" next to the top score. The final iteration removes this for a more focused and clear depiction of the cumulative score and four beacon metrics.
More refined iterations for the sidebar

A sticky top bar helps to provide continued content support to users, as they scroll further down the page. This is especially helpful for power users doing comparative charity research on desktop devices. Creating strong content patterns supports this user behavior.

A sticky top bar shows the charity's name. Next is a section showing an Encompass Rating of 75. Next to this are four icons for Finance, Integrity, Impact, and Diversity. After this section is a Donate button, and a button to favorite the charity.
The sidebar changes to a sticky top bar
Hovering over the finance beacon, a dropdown showing the score of 75 out of 100, and describing this as "approaching fiscal responsibility."
Hovering over the sidebar to get the top-level metric information

Here, too, you can see the color strength system supplying additional information to users. An interesting use case is to look at the number of score combinations that can result in a 75. Despite having the same score, the system allows for an additional level of insight, at a glance.

The top score shows 90, with 3 beacons performing strongly. The second shows a score of 75, with one beacon performing strongly, one approaching, and two as empty. The third shows a 75 score, with three passing beacons and one approaching. The fourth shows a score of 75 with two passing beacons and two failing. The last shows a score of 85 with two passing beacons, one approaching, and one fail.
Five versions of the sticky top bar

4. Advisories

One of the most important pieces of content to communicate is if a charity has any active advisories. These warn about investigations or data integrity concerns that might compromise an automated rating. The advisories are tiered into low, moderate, and high. Each tier requires a different level of concern that should be communicated to users. Tiers are visually different and have slightly different user flows. Low advisories are accessed after landing on the page. Moderate and high advisories first use an interstitial before allowing profile page access. As an additional precaution, a high advisory removes rating information from the profile page.

A flowmap starting with either the Charity Navigator search page or a Direct Link. These lead to the question "Is there an active advisory for this charity?" If "no," then you are brought to the charity profile page. If "yes," then you ask "What level is the advisory?" If it is a low advisory, you go straight to the charity profile page. If it's a moderate or high advisory, you go to an advisory interstitial before going to the charity profile page. Once on the page, the user can choose to see the advisories and can open up the advisory interstitial.
User flow showing how advisories work when navigating to a charity's profile page
Low advisories do not interfere with the user flow, but can be accessed from the home page.
For a moderate or high advisory, the user flow includes an interstitial
High advisories hide the ratings information altogether
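The tiered routing described above can be expressed as a small piece of logic. This is a sketch of the user flow only; the function names and return values are illustrative assumptions, not the production implementation.

```python
# Hypothetical sketch of the advisory routing: low advisories don't
# interrupt the flow, moderate and high ones add an interstitial,
# and high advisories also suppress rating information.

def route_user(advisory_level):
    """Return the pages a user passes through, in order."""
    if advisory_level is None or advisory_level == "low":
        # Low (or no) advisories: straight to the profile page, where
        # any low advisory can still be accessed on demand.
        return ["profile"]
    if advisory_level in ("moderate", "high"):
        # Moderate and high advisories show an interstitial first.
        return ["interstitial", "profile"]
    raise ValueError(f"unknown advisory level: {advisory_level}")

def show_ratings(advisory_level):
    """High advisories remove rating information from the profile page."""
    return advisory_level != "high"
```

Keeping the routing and the ratings-visibility checks separate mirrors the design: the interstitial governs how users arrive, while the visibility rule governs what they see once there.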

5. Metrics > Data Sections

The beacon sections needed to be standardized in form and, whenever possible, content. This helps to create a familiar user experience. An interesting challenge was the conflicting ways of defining a single metric. Metrics could be ratios or binaries. Additionally, metrics had two definitions. First, they have an internal definition (a Program Expense Ratio of 80%). Second, there is a contextual definition (a Program Expense Ratio of 60/70 possible points). The internal definition is extremely useful for experienced donors. Often, it is reflective of an industry-recognized measurement tool. The contextual definition is useful for newer donors. It helps them understand how the metric affects the section score. Rather than prioritize one group of users, I decided to introduce content duplication.

A loose wireframe helped work out the content hierarchy for the beacon sections
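The dual-definition idea can be modeled as a single record rendered two ways. This is a sketch assuming hypothetical field names, built around the Program Expense Ratio example from the text.

```python
# Hypothetical model: one metric, two presentations. The internal
# definition serves experienced donors; the contextual definition
# serves newer donors by relating the metric to the section score.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    internal_value: str     # industry-recognized measurement, e.g. "80%"
    points_earned: float    # contribution to the section score
    points_possible: float

    @property
    def internal_definition(self):
        """How the metric reads for experienced donors."""
        return f"{self.name}: {self.internal_value}"

    @property
    def contextual_definition(self):
        """How the metric reads for newer donors."""
        return (f"{self.name}: {self.points_earned:g}/"
                f"{self.points_possible:g} possible points")

per = Metric("Program Expense Ratio", "80%", 60, 70)
```

Storing one record and deriving both strings keeps the "content duplication" purely presentational, so the two definitions can never drift out of sync.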

First, users are shown the top 3 metrics for each section. These are the metrics that experienced donors most often look for, and that newer donors should be guided towards.

The card content...

  1. helps to reduce the cognitive load of looking at the full list of metrics.
  2. helps power users quickly reference the most frequently looked for content.
  3. eases new donors into how they should understand the rating system.
  4. features approachable and easy-to-interpret text-based guidance ("High Impact on Score").
  5. gives more prominence to the internal measurement (84.6%).

These features guide users not only to the information, but through the information.

Three cards showing the top 3 finance metrics. They each have similarly structured information to match the following: Program Expense Ratio: 84.6% Higher is better. (High impact on Finance Score) How much of a charity's spending goes into their programs and services.
Card content to highlight top 3 metrics

Following these cards is the full metrics table for this section. This information is useful for power users and newer users.

  1. Power users use this section to see the full range of data points at a glance.
  2. Newer users gain a better understanding of what information is being evaluated.

Supporting these tables are the raw data and definitions for each metric. These are accessed through click-through experiences. This design solution has a number of user benefits.

  1. It creates on-page pathways to comprehensive explanations, guidance, and data transparency.
  2. It keeps information available within the context of how a user should understand the information.
  3. It prevents creating an overwhelmingly long and complex page scroll to travel through the content.
Finance Report features accordions to gain additional context or pathways to more information
Additional information is available in a second screen within the finance report


REPEATABLE STRUCTURE

The wireframe structure for the beacon sections works well across each of the four categories. By using the same information pattern with a different color scheme for each category, the page structure creates a consistent and easy-to-digest approach to complicated data.


Takeaways

  1. When tackling complex design problems, it is helpful to first gain a clear understanding of what information needs to be conveyed. Then, break moments of complexity into smaller, more digestible problems.
  2. For a designer to be responsible for a final product, they also need to take responsibility for communicating the constraints and the information needed to achieve the best results.