Optimizing Social at Peace Corps: Not a Zero-Sum Game

Oct 2, 2015

At the Peace Corps, we continually look for new ways to test, measure and optimize our marketing and communications initiatives. Recently, we embarked on a project to design a framework for testing and optimizing content on the social media platforms we use to engage our stakeholders. The process required us to reset our expectations around measurement and rethink our social goals, but in the end it has made our decision-making much stronger.

With other tactics we use, such as digital advertising and email marketing, integrating analytics and data-driven decision-making is fairly straightforward. We soon discovered this was not the case with social media. The social ecosystem is diverse and continually evolving, so our goal was to develop an experimentation and analysis framework that could scale across platforms. This nudged us away from trying to procure the “perfect” stack of tools and toward taking ownership of the various aspects of testing, data collection and analysis.

We also came to the realization that the perfect is the enemy of the good when it comes to testing and analyzing social content. Common A/B testing approaches, and the application of statistical tests to measure success, didn’t fit how the platforms are designed or how Peace Corps uses social programmatically at a post-by-post level. To avoid going down an analysis rabbit hole, we agreed on a “good enough” level of certainty for judging what is successful and what isn’t. As I outline below, the framework we ended up designing is not new and will evolve alongside the changing landscape of social media, but it did require a shift in how we think about social measurement and, more importantly, in our confidence and willingness to take on this type of work.

Chart: Peace Corps organic engagement on Facebook.

Our Approach

  • Establish S.M.A.R.T. goals. This isn’t a new idea, but before testing anything, you need to define goals that are specific, measurable, attainable, relevant and time-bound in the context of your social media initiatives and your overarching marketing and communications strategy.
  • Establish Key Performance Indicators (KPIs). Whether your goal is to increase brand awareness by X percent or to motivate potential applicants to apply, you need to select KPIs that map your goals to the social platforms you’re using. For example, on Facebook, KPIs could include engagement rate, reach or some kind of direct response.
  • Categorize content. It’s important to categorize all your content and think through these categories when developing an editorial calendar and monthly and weekly testing plans. At Peace Corps, we categorize our content into three buckets. This allows us to see what content categories are performing the best each month in relation to the KPIs we’re tracking, and systematically test different components to optimize the content within each category.
  • Establish testing schedules. Our team designs experiments in conjunction with our editorial calendar and reviews results on a monthly basis. What makes pure A/B testing on social media platforms challenging is the inability to randomly sample from the population and the lack of control over the variables that could affect a piece of content’s performance. To mitigate that lack of control, we try to standardize our tests so that some of the variables are held constant, such as day and time of publishing, content category and type, and messaging frames. It’s far from perfect, but we found this to be a “good enough” approach. There might be ways to control more variables through paid promotion, but those tools are not consistent across platforms and Peace Corps doesn’t use them on a regular basis.
  • Collect and cleanse data. Fortunately, most social media platforms export structured data sets, which makes cleansing and analysis straightforward. Peace Corps uses paid promotion on these platforms on an ad hoc basis, so when we collect the data we separate out posts with paid support behind them to keep the analysis apples-to-apples (see the sketch after this list).
  • Benchmark post-level progress. The chart above shows engagement data at the post level. We use a moving average as our benchmark to smooth out weekly ebbs and flows so that true outliers are easy to spot (see the sketch after this list). At Peace Corps, the benchmark is how we gauge the success of an experiment and of a specific piece of content.
  • Analyze. When analyzing results, you need to listen to the data but also understand the limitations of the methodology. At Peace Corps, we have different levels of analysis to understand how things are playing out month to month. At a high level, we look at the effectiveness of the content categories as a whole in relation to each of the KPIs we are measuring. To do this, we conduct statistical tests to see whether the overall difference between the categories is statistically significant (one such check is sketched below). This tells us directionally how the categories are doing and helps us design experiments to optimize the content within lower-performing categories and to derive insights from content in higher-performing categories. We also look at content on a post-by-post level, analyzing the results from tests as well as content that wasn’t tested but performed above benchmark. All of these insights help fuel our optimization efforts.
  • Repeat.
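To make the collect-and-benchmark steps concrete, here is a minimal sketch in Python with pandas. The file name and column names (post_id, date, category, paid, engagements, impressions) are hypothetical stand-ins for whatever a given platform’s export actually contains, and the 28-post window is an illustrative choice, not the team’s actual setting.

```python
import pandas as pd

# Hypothetical export: post_id, date, category, paid, engagements, impressions.
posts = pd.read_csv("facebook_posts.csv", parse_dates=["date"])

# Keep only organic posts so ad hoc paid promotion doesn't skew comparisons.
organic = posts[~posts["paid"].astype(bool)].sort_values("date").copy()

# A simple KPI: engagement rate per post.
organic["engagement_rate"] = organic["engagements"] / organic["impressions"]

# Moving-average benchmark over the previous 28 posts (illustrative window);
# shift(1) ensures each post is judged only against posts that preceded it.
organic["benchmark"] = (
    organic["engagement_rate"].rolling(window=28, min_periods=7).mean().shift(1)
)
organic["above_benchmark"] = organic["engagement_rate"] > organic["benchmark"]

print(organic[["date", "category", "engagement_rate", "benchmark"]].tail())
```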
To put this into context: in July on Twitter, our KPIs included impressions, engagement and URL clicks. Our “Direct Recruitment” category of content, which includes action-oriented messaging, was statistically the strongest at driving URL clicks and conversions. Digging deeper at the post level, we found that tweets linking to videos and listicles were the best of the best. These are the types of findings we use to design future experiments and ultimately optimize the content and messaging frames, as well as the time of publishing and other variables.
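The category-level significance check might look something like the sketch below, continuing from the `organic` frame in the previous snippet. The post doesn’t say which statistical test the team uses; a Kruskal-Wallis test is one reasonable option, since post-level engagement data is rarely normally distributed, and the 0.05 threshold is just a placeholder for whatever “good enough” level of certainty a team agrees on.

```python
from scipy import stats

# One sample of post-level engagement rates per content category.
groups = [
    grp["engagement_rate"].dropna().values
    for _, grp in organic.groupby("category")
]

# Kruskal-Wallis: does at least one category differ from the others?
h_stat, p_value = stats.kruskal(*groups)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")

# 0.05 stands in for whatever "good enough" threshold a team agrees on.
if p_value < 0.05:
    print(organic.groupby("category")["engagement_rate"]
                 .median().sort_values(ascending=False))
```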

Our goal is to test things as systematically as possible, see what’s working, and optimize the variables we can control to see if we can increase the lift. We also find value in analyzing content that falls below benchmark and designing experiments to see if we can make it more effective. Is this framework perfect? No, but that wasn’t our goal. It takes the Peace Corps a step closer to making evidence-based decisions in the social sphere. Surprisingly, the more systematic we get at testing, measuring and optimizing content, the more freedom we have and the more creative we can become when engaging with our stakeholders.
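As a closing illustration of “lift” as used above (the relative improvement of an observed KPI value over its benchmark), here is a toy calculation with made-up numbers:

```python
def lift(observed: float, benchmark: float) -> float:
    """Relative improvement of an observed KPI value over its benchmark."""
    return (observed - benchmark) / benchmark

# Toy numbers: a 4.2% engagement rate against a 3.5% benchmark.
print(f"{lift(0.042, 0.035):.1%}")  # -> 20.0%
```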
