Social Content Optimization Requires Looking at Micro Trends

What time should we be posting our content? What messages are resonating best with our audience? Is there a day of the week where our content performs best? How do we truly optimize our content on Facebook or Twitter? What kind of content generates the most clicks or interactions? How long does it take before we receive interactions after publishing a piece of content?

If you are managing your brand’s social media presence or working for an agency on behalf of a brand, you’ve no doubt heard these questions from your boss. All of those questions are things we’ve helped to solve in traditional marketing and PR for years, but the question of how we “scientifically” optimize our content in social is relatively new. 

At Edelman, we talk a lot about social content optimization (or SCO) as being the new SEO. Essentially, the process is gathering all of your existing (and ideally future) content and performing an audit on it to ensure that it is not only visible, but also appropriate to drive engagement (or whatever metrics you use to gauge success) on social platforms.

One of the common ways we begin to analyze content performance for the purposes of optimization is conducting a dayparting analysis. Dayparting was originally a concept developed for the television and radio industries, but has since been applied to other marketing channels as a way to determine when (day/time) content performs the best. It can be a very useful exercise if done with the proper rigor and process. Unfortunately, that doesn’t always happen.

Many of you have seen the work conducted by HubSpot focusing on the best time to tweet, publish a blog post or post a Facebook status update. If you have not seen the research, go take a look for yourself. It is interesting work, but only truly useful from a macro-trends perspective. I don’t know about you, but if I’m in the process of optimizing my own content, macro trends aren’t particularly useful. I want to know how my content is performing. I’m not necessarily interested in how the Dell page is performing, for example. Competitive intelligence is important, but it belongs to a different stage of the process altogether.

So you might be asking yourself…this all sounds interesting, but how do I do a dayparting analysis myself? The answer is simple: *Microsoft Excel. No, really, that’s all you need! Ok, that’s a small lie. You also need access to an insights platform for your social channels (most major platforms provide one for free), and you need to develop a plan for how you’re going to conduct the analysis. What does that look like?

  • The amount of content you’ll analyze – Ideally, you will want to gather posts from the last six months. Three months can work in a pinch, but with any less it becomes difficult to draw insights. Don’t rely on a month or less. It won’t work.
  • What metrics do you want to examine – Surely you’ve taken a look at all of the data presented to you by Facebook Insights, correct? Is it feasible to look at all of those things? Probably not. However, you could be looking at impressions per post, shares, clicks, likes and comments. Again, it’s completely dependent on what you feel comfortable with and how it ladders up to your goals.
  • What’s the end-goal – Is the end-goal measuring how content is performing? Or are you doing it just to optimize future content? You should be doing both, but understanding which path you’re on will help feed the end product.
  • Don’t forget about tracking messages – One of the problems with dayparting is that we’re often looking at post performance in absence of identifying core messages. If we have two or three core messages, a best-practice analysis would overlay those messages with the data we have in bullet point #2.
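The spreadsheet work behind the steps above is simple enough to sketch in a few lines of code. Here is a minimal Python version of the core dayparting calculation: bucket each post by the weekday and hour it was published, then average a chosen engagement metric per bucket. The field names and sample rows are illustrative assumptions, not the output of any real insights platform export.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative export: one row per post, as you might pull from an
# insights platform. The field names here are assumptions for the sketch.
posts = [
    {"published": "2012-03-05 09:15", "clicks": 40, "shares": 12},
    {"published": "2012-03-05 14:30", "clicks": 22, "shares": 3},
    {"published": "2012-03-12 09:45", "clicks": 55, "shares": 18},
    {"published": "2012-03-13 09:10", "clicks": 15, "shares": 2},
]

def daypart_averages(posts, metric):
    """Average the given metric by (weekday, hour) bucket across all posts."""
    buckets = defaultdict(list)
    for post in posts:
        ts = datetime.strptime(post["published"], "%Y-%m-%d %H:%M")
        buckets[(ts.strftime("%A"), ts.hour)].append(post[metric])
    return {slot: sum(vals) / len(vals) for slot, vals in buckets.items()}

avg_clicks = daypart_averages(posts, "clicks")
for (day, hour), value in sorted(avg_clicks.items()):
    print(f"{day} {hour:02d}:00  avg clicks: {value:.1f}")
```

With six months of posts in place of the four sample rows, the same grouping surfaces the day/time slots that consistently outperform. Overlaying core messages (bullet point #4) is just one more key in the bucket, e.g. `(weekday, hour, message)`.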

Dayparting is a useful exercise if done well. Macro-trend data isn’t enough to draw conclusions on future (and past) post performance. Combining the two leads to maximum social content optimization. Have you done this kind of analysis in the past? If so, what kinds of results have you seen?

*Disclosure: Microsoft is a client of Edelman



  • I wish I had something more intelligent to add here Chuck other than thank you again for providing such a clear picture of measuring the metrics that matter in a way that is customized for your own approach. I recall Rebecca Denison’s post a few days ago that reminded us broad-brushed macro best practices should be taken with a grain of salt and not treated as gospel when it applies to your own business. I get the same message from your post today in addition to the excellent advice.

    • John – Awfully kind of you, thanks. I agree with Rebecca. Macro trends are useful, but in very limited circumstances. I’m not basing my planning on macro trends, nor am I using macro trends to gauge whether my content was successful.

  • While I agree on the importance of optimizing your — well, everything — I’m just not convinced yet the science is practical. In other words, if I derive an insight from the research, could I reasonably expect the same result if I apply the same approach tomorrow?

    There are simply too many variables, most of which we don’t account for or can’t anticipate, for the insights of such research to be applicable. From a macro point of view, I think we can agree the research is interesting. But as you point out, macro doesn’t always apply.

    I think we’re making progress. Research is improving. Tools are getting better. Our understanding of what the data reveals is getting more refined. With improved insights and a continued willingness to leave room for experimentation and serendipity, we’ll certainly uncover new, more effective ways to connect with customers.

    • Scott – It’s a fair point, but I disagree. The issue behind the science being impractical has less to do with the data and variability therein, and more to do with a lack of an adequate “sample.” This analysis falls apart when we look at a month’s or even three months’ worth of information. I know I noted in the post that three months can work, but best practice would be to look at about six months. After six months, applicable trends should develop. If they don’t develop in the dayparting part of this analysis, they will develop with message resonance.

      I just find limited value in macro trend data. I often have no idea which pages or accounts went into the final numbers. My natural inclination is to say if I don’t understand the methodology, I toss out the final output. Now, if you want to show me an infographic showing dayparting info for all CPG’s and I’m working on a CPG account then we’re getting somewhere.

      As you pointed out, the science will continue getting better. It’s in a place now, though, where we can derive actionable intelligence.

      Thanks much for reading and the comment.

      • My pleasure…thanks for starting (continuing?) an important convo.

        Unfortunately, I think many businesses — maybe most businesses — don’t have the bandwidth for what you’re describing. Even with some of the basic tools you outlined above, important work like this gets lost in the shuffle, moved to the back burner. Data that could lead to insights, which in turn could lead to informed action, is an expendable luxury for SMBs living day-to-day.

        That’s not to say it’s right, just that it happens.

        @BrandonUttley:twitter and I touched on this a bit in a podcast this week. Many marketers have to (choose to?) make decisions on hunches more often than not, rather than data. So much of what gets created and shared is, at best, a modestly educated guess.

        Thanks for stirring up this convo and pushing a higher standard, my friend.

        • Scott – Tend to agree. It does get lost in the shuffle, particularly for SMBs. Thanks again

  • Optimizing social content or digital assets or whatever we want to call it today has been around for quite some time. Which is the good news. There is plenty of research that backs up the idea that this works. My favorite part is that search nerds and data brought this to life. Yay SEO people!

    The bad news is content is rarely, if ever, consistent. So people run with HubSpot data and assume that their tweet, blog post or Facebook update will “work better” if they post it at 11:58 a.m. and repost at 2:43 p.m. Perhaps it will, but sometimes your content sucks, even more so with multiple-author blogs and reports. I have to remind clients and myself that I am not writing McKinsey Quarterly.

    I am all for measurement. And better metrics. We’ll get there, but I think the content itself is the biggest wildcard.

    Additionally, we’ve seen an increase in volume of content and the outlets in which we consume it. And it continues to grow. And change. And now, the lifetime for a piece of content starts declining. Yikes. These variables make it really, really tough to take the data from content and extrapolate processes from it.

    As for SCO/DAO being the new SEO, I’m skeptical. Your company site still needs the most love, and it needs a lot more than +1s and Likes to be found. For example, if your content is not being indexed correctly or is duplicated four times on your site because of a canonical problem, no one will find it 10 minutes after the last tweet went out. So for anyone jumping on the SCO/DAO bandwagon, you better have the .com squared away first.

    I love the post Chuck. Onwards and upwards with measurement.

    • Thanks, Dominic. I agree, content is the biggest wildcard. Unfortunately, if your content is so inconsistent that it’s difficult to perform these kinds of analyses, you likely have much bigger issues.

      • We say this in SEO all of the time, but SEO, like social content strategy, is situational. Yes, what worked before may work now (and again and again), but always have a backup plan and other strategies in case it does not or the algo has other ideas.