Case Study: OCSIT’s Email Customer Survey Process

Jan 29, 2016

At GSA’s Office of Citizen Services and Innovative Technologies (OCSIT), we offer technology services and tools to make government work better. To help us gauge the effectiveness of the programs we offer to other government agencies, in 2013 we launched our first Government Customer Experience Index (GCXi) survey. This annual email survey consistently measures customer satisfaction, loyalty and ease of use for various OCSIT programs.

A previous post about the GCXi (OCSIT’s 2015 Customer Survey—What We Learned) generated lots of questions from readers about the back-end processes we use to conduct the survey and turn customer data into action. Since we’re big fans of transparency, we’re sharing this case study in hopes that it’s helpful to you as you build your own Voice of the Customer (VOC) program.

The Big Picture

Though we survey our government customers just once per year via the GCXi, we actually work on the process throughout the entire year.

We conduct our annual survey in the spring. The survey is created in SurveyMonkey (though any modern survey tool would work) and delivered via email. We have a Paperwork Reduction Act (PRA) clearance, since our customer base includes not just federal employees but also state and local government folks. (Note: if you’re looking for a survey tool, check out the list of free tools that have a federal-friendly terms of service agreement.)

During the summer, we review and analyze the data, then develop action plans for each program. We work to implement improvements during the fall and winter, and by then, it’s time to gear up for the next year’s survey.

A benefit of keeping this process top-of-mind for staff all year long is that it reinforces the importance of customer-centric thinking in all we do. It’s also just one of many tools in our VOC toolbox. Other tools we use include:

  • Web analytics
  • Usability testing
  • Call center data
  • Web pop-up surveys
  • Free-form customer comments from social media, chat, blogs or email
  • Agency consultations and office hours
  • Employee training and engagement activities
  • Post-event surveys
  • Talking to customers one-on-one

Goals

One of our primary goals was to develop a framework to consistently measure customer satisfaction across all our programs, on an ongoing basis. Consistency is important to benchmark progress, and the index gives us a framework to do just that. By asking the same core questions for all our programs, we’ve created a baseline to help us evaluate whether we’re improving over time.

The Questions

The email survey currently consists of six questions. Four of the questions are multiple-choice and are used to calculate a “score” for each program; this is our baseline. The last two are open-ended, and provide the most actionable data, because they give customers a platform to tell us, in their own words, what’s working and where we can improve.

The core questions measure satisfaction, loyalty and ease of use. Questions may be customized slightly, such as to identify a specific program name. Here are the questions we use in our email surveys (a rough code sketch of the question set follows the list):

How would you rate your overall experience with [this program/service]?

  • Very good
  • Good
  • Fair
  • Poor
  • Very poor

How likely are you to recommend [this program/service] to a friend?

  • Very likely
  • Likely
  • Neither likely nor unlikely
  • Unlikely
  • Very unlikely

How likely are you to use [this program/service] in the future?

  • Very likely
  • Likely
  • Neither likely nor unlikely
  • Unlikely
  • Very unlikely

How easy or difficult was it to [use this program/service]?

  • Very easy
  • Easy
  • Neither easy nor difficult
  • Difficult
  • Very difficult

What are the greatest strengths of [this program/service]?

  • Open-ended

What are the greatest weaknesses?

  • Open-ended
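To make the structure concrete, here’s a minimal sketch of the scored question set as data, written in Python. It’s illustrative only: the wording and five-point scales come from the survey above, but the dictionary keys and layout are our own shorthand, not how SurveyMonkey stores or exports anything.

```python
# Illustrative only: the four scored questions and their five-point scales.
# Each scale is ordered from most positive to most negative, matching the survey above.
SCORED_QUESTIONS = {
    "overall_experience": ["Very good", "Good", "Fair", "Poor", "Very poor"],
    "likely_to_recommend": ["Very likely", "Likely", "Neither likely nor unlikely",
                            "Unlikely", "Very unlikely"],
    "likely_to_use_again": ["Very likely", "Likely", "Neither likely nor unlikely",
                            "Unlikely", "Very unlikely"],
    "ease_of_use": ["Very easy", "Easy", "Neither easy nor difficult",
                    "Difficult", "Very difficult"],
}

# The two open-ended questions don't feed the score; we read and analyze them separately.
OPEN_ENDED_QUESTIONS = [
    "What are the greatest strengths of [this program/service]?",
    "What are the greatest weaknesses?",
]
```

Keeping each scale ordered from most positive to most negative makes the scoring step below straightforward.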

The Score

Response choices are listed from positive to negative. The top two (positive) responses are “promoters,” the middle response is “neutral,” and the bottom two (negative) responses are “detractors.” We subtract the percentage of detractors from the percentage of promoters to get the score. We score each question individually, and also calculate an overall score for each program. Note that if you have more detractors than promoters, the score can be negative (the range runs from -100 to +100).
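Here’s a minimal sketch of that arithmetic in Python. The tallies are hypothetical, and since we haven’t spelled out the roll-up here, the overall program score is shown as a simple average of the four question scores for illustration.

```python
def question_score(counts):
    """Score one question: % promoters (top two choices) minus % detractors (bottom two).

    `counts` is a list of five response tallies, ordered from most positive
    to most negative, e.g. [Very good, Good, Fair, Poor, Very poor].
    Returns a value between -100 and +100.
    """
    total = sum(counts)
    if total == 0:
        return 0.0
    promoters = counts[0] + counts[1]   # top two (positive) responses
    detractors = counts[3] + counts[4]  # bottom two (negative) responses
    return 100.0 * (promoters - detractors) / total


def program_score(per_question_counts):
    """Overall program score: shown here as a simple average of the question scores."""
    scores = [question_score(c) for c in per_question_counts]
    return sum(scores) / len(scores)


# Hypothetical tallies for one program (counts per answer choice, positive to negative):
example = [
    [120, 80, 30, 10, 5],   # overall experience
    [110, 90, 25, 15, 5],   # likely to recommend
    [130, 70, 20, 20, 5],   # likely to use again
    [90, 100, 35, 15, 5],   # ease of use
]
print(round(program_score(example), 1))
```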

I know you’re thinking, “So, where are your scores?” While the actual numbers are for internal management purposes only, we’ve shared some overall insights in this post, Digging Into the Data of Our Customer Survey.

Closing the Loop

One main reason this has worked for us is management support. Our management team is committed to improving the customer experience for all our programs across OCSIT, and that commitment is embraced by the entire team.

The other main reason for success is our action planning process, which “closes the loop.” We don’t just collect data, we actually do something with it. Program managers are tasked to review customer feedback and identify areas for improvement. We develop action plans that outline when and how we’ll address issues and make those improvements. We share these plans across the team, and report out to senior management on what we learned, the actions we took and how the feedback is expected to improve our programs.

Evolution

Like anything else in life, you try something, learn, adapt, and move forward. We learn something new each time we run the GCXi, and continue to iterate and improve as time goes on. For example, the “ease of use” question was added in 2015, and it gave us a clear call to action: offer more direct training and support to our customers, particularly for our more technical programs.

As background, we drew on a number of existing resources for inspiration as we developed the GCXi.

While this post focused on our email surveys, it’s worth noting that we follow a similar process for most of our website surveys, asking the same core questions, as well as asking about task completion (but that’s a topic for another post!).

Our GCXi has given us a framework to listen to customers, benchmark progress in a consistent way and evaluate whether we’re improving over time. If you have suggestions or ideas to help us better serve you, please let us know! Interested in learning more about improving the government customer experience? Join the Government Customer Experience Community and review the Customer Experience Toolkit.