Marketing Expert's Corner
This article was written in 2007.
Executives tend to spend their time and energy on global problems, working on the big picture and using top-down analysis. Executives like to focus on strategy, positioning, competitive advantage, budgeting, business models, operations, channels, and political alliances. Being able to work on the big things gives executives leverage and impact.
But in Sales and Marketing, this isn't enough. Little things make a big difference to results. A bad word choice can get you killed at a Gartner or Wall Street analyst briefing. Small lapses of poise can blow a Sales Rep's credibility in a customer meeting. Mistakes in mundane details can neutralize the effects of expensive marketing campaigns.
As an executive, how do you strike a balance to get the leverage you need without blowing it through tiny slip-ups in the Marketing process?
It's the Little Things
Of course product strategy and great design are killer requirements. But those pesky customers don't care about the product the way vendors do, and they get distracted by the smallest stuff along the way to a purchase.
Prospects must be persuaded, and your persuasive power can be rapidly dispelled by misalignments -- things that aren't quite coherent, let the attention wander, and undermine credibility. Because we live in such a low-trust world, the prospect's baloney sensors are set really high.
Here's the irony: at any moment in time, the prospect does not care about your big picture. All they are looking for is a particular set of details that satisfy their curiosity at that instant. Present a set of details that don't make sense, and you've just sent the prospect over a speed-bump that may cause them to lose interest altogether.
But how do you figure out which little details will matter to the prospect's decision? You can't afford to ask even a small percentage of your prospects.
Model first, and ask questions later
In order to understand what your prospects are going to be looking for, you need to have a model of what they pay attention to during the marketing process they go through. You should be developing a model of your customer as you are developing your product. In fact, I've argued you should design your customer first.
The model of the customer should start with a set (3-8 are manageable) of personas that describe who the customer is, what their background and work environment is, and why they're interested in your type of product. The next step is to develop use-cases that describe what the customer needs to get done, and how they will do it. These personas and use-cases form the basis of the customer's interest and inquiry cycle.
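The personas and use-cases described above can be captured as simple structured records. A minimal sketch, in which the persona fields, example customer, and use-case steps are all illustrative assumptions rather than anything from a real customer model:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str         # who the customer is, e.g. a job role
    background: str   # their professional background
    environment: str  # their work environment and constraints
    motivation: str   # why they're interested in your type of product

@dataclass
class UseCase:
    persona: Persona
    goal: str                               # what the customer needs to get done
    steps: list = field(default_factory=list)  # how they will do it

# Hypothetical example persona and use-case
ops_mgr = Persona(
    name="IT Operations Manager",
    background="ten years running data-center infrastructure",
    environment="small team, tight budget, quarterly audits",
    motivation="needs to cut server-provisioning time",
)
provisioning = UseCase(
    persona=ops_mgr,
    goal="provision a new server in under an hour",
    steps=["confirm the problem", "compare vendors", "pilot install", "roll out"],
)
```

Keeping the set small (the 3-8 personas suggested above) keeps the model manageable as it feeds into the interest and inquiry cycle.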
Think about how you buy things. For any significant purchase, you think through a set of questions before you ever look at a specific vendor. Do I really have a problem that can be solved? What have my friends/colleagues done in this situation? Do I have the time and money to do what they did? What alternatives are available? Your prospect has probably gone through a sequence of questions like this long before they got curious about your product.
And curiosity is their state of mind as they come to look at your company and product. For B2B products/services and for most significant B2C purchases in the US, the web is clearly the first source of information for prospects. Vendor websites are viewed as credible sources of "hard facts," but are not considered credible for comparisons or judgment calls. However, vendor sites that include customer ratings of products/services, and that show (mostly) unedited customer feedback are viewed as highly credible by prospects.
Develop a story-board or flow chart model of how your prospects will come to your site in the first place (from an email? an AdWord? a Google search? a Yahoo link?), and the sequence of things they will do to discover what they need to know. Model the prospect's state of mind at each stage: what is their intent/goal, what questions do they need answered, and what will trigger their next step. Write this stuff down in your story-board. Use a model like "AIDA" (awareness, interest, desire, action) to identify the stages of learning and motivation the prospect goes through before the sales cycle begins.
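A story-board like this can be recorded as an ordered list of stages, each with the prospect's intent, the questions they need answered, and the trigger that moves them onward. A sketch using the AIDA stage names; all the entry points, questions, and triggers are illustrative assumptions:

```python
# Each stage records: how the prospect arrives, what they intend,
# what questions they need answered, and what triggers the next step.
storyboard = [
    {"stage": "awareness",
     "entry": "Google search on the problem",
     "intent": "confirm the problem is solvable",
     "questions": ["Does anyone solve this?"],
     "trigger": "clicks a search result to the landing page"},
    {"stage": "interest",
     "entry": "landing page",
     "intent": "decide whether this vendor is relevant",
     "questions": ["What does the product do?", "Who uses it?"],
     "trigger": "opens the product overview page"},
    {"stage": "desire",
     "entry": "product overview",
     "intent": "judge fit against alternatives",
     "questions": ["Will it work in my environment?", "What does it cost?"],
     "trigger": "downloads a case study"},
    {"stage": "action",
     "entry": "registration page",
     "intent": "start a trial or conversation",
     "questions": ["What happens after I register?"],
     "trigger": "submits the registration form"},
]

for s in storyboard:
    print(f"{s['stage']}: {s['intent']} -> {s['trigger']}")
```

Writing the model down in this form makes it easy to check against your site map and page-sequence analytics stage by stage.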
This modeling will be a lot easier to understand and validate if you already have a clear site map, an online collateral tree, and solid page-sequence analytics from your web traffic. But even with all this info, make sure to validate your model with some of your prospects. Just do a phone interview with them, and offer a nice incentive (e.g., Amazon gift certificate) for their time.
Once you've got a model that reflects reality, start asking questions of prospects and test customers -- where did their interest get derailed? What did your company write, say, or do that was a distraction (or worse)? You want to find the 3-5 places in the prospect interest cycle where the most damage is being done to your conversion ratios: those will be the places to fix.
Testing, Testing, Testing
Modeling and surveys sure sound good, but they have a fatal flaw: they assume that your opinion and judgment matter. Marketing means humility, paying little attention to your own judgments -- instead focusing on the prospects' opinions, preferences, and behaviors.
The testing phase is where you split your prospects randomly into A and B sets, and then compare results. The testing is not a survey; it puts the prospect through an experimental version of your marketing sequence. This experimentation is easily done in web sites, email blasts, snail mail, and phone calls, but it can also be done in tradeshow discussions, show-floors, and other marketing tactics. The key is to vary only one thing between the test groups, and make sure the groups are random, representative samples of your target customer. You won't need more than 30-50 individuals for meaningful results, but watch out for timing effects (e.g., collecting all your data on a Tuesday) and beware of cyclical patterns that goof up your results (data collected in December should only be compared with other data in December, as you can't extrapolate Christmas-season behaviors to the whole year).
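The random split and the comparison of results can be sketched in a few lines. A minimal example using only the standard library; the conversion counts are hypothetical, and the two-proportion z-statistic is one common way to judge whether an A/B difference is real rather than noise:

```python
import math
import random

def ab_split(prospects, seed=42):
    """Randomly split a prospect list into two equal-sized test groups."""
    rng = random.Random(seed)  # fixed seed so the split is repeatable
    shuffled = prospects[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

group_a, group_b = ab_split([f"prospect-{i}" for i in range(240)])

# Hypothetical outcome: version A converts 18 of 120, version B 31 of 120.
z = two_proportion_z(18, len(group_a), 31, len(group_b))
print(round(z, 2))  # |z| > 1.96 suggests a real difference at ~95% confidence
```

Note that small samples make small differences hard to distinguish from noise, which is one more reason to vary only one thing at a time between the groups.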
Test incrementally and individually. For example, do a series of individual changes to your AdWords text, your landing page copy, and your registration page call to action. Small changes really matter: word choices, colors, layout, graphic elements, even whether the stamps on your snail-mail are crooked can make a 2 or 3% difference in response rates. And these small changes are important because they compound: improvements at each stage of the funnel multiply together. As you make improvements to one stage of the chain, see how the improvements interact with the others. It's usually best to evaluate your tests along the lines of "percentage of increased prospects" or "improved conversion ratio" for each individual stage. You'll find that it pays to test continually as you modify campaigns and offers, so that your marketing learns and refines itself the way genetic algorithms do.
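The compounding effect is easy to see with a little arithmetic. In this sketch the funnel stages and baseline rates are illustrative assumptions; the point is that overall conversion is the product of the per-stage rates, so per-stage lifts multiply:

```python
# Baseline conversion rate at each funnel stage (illustrative numbers).
funnel = {
    "ad click-through": 0.02,
    "landing-page read": 0.40,
    "registration": 0.10,
}

def overall(rates):
    """Overall conversion is the product of the per-stage rates."""
    result = 1.0
    for r in rates.values():
        result *= r
    return result

baseline = overall(funnel)
improved = overall({k: v * 1.03 for k, v in funnel.items()})  # +3% at each stage

print(f"overall lift: {improved / baseline - 1:.1%}")  # prints "overall lift: 9.3%"
```

Three separate 3% improvements yield roughly a 9.3% overall lift (1.03³ ≈ 1.093), which is why evaluating each stage's conversion ratio individually, as suggested above, pays off.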
Measure Twice, Cut Once
With all this testing and data, you might worry about falling into analysis paralysis. Don't. You need to be taking action and making improvements to each step of your marketing and pre-sales activities on a monthly basis. In other words, whatever is doing best in the A-B testing should become your main-line marketing for the next monthly cycle (I don't advise regular changes any more frequently than that).
Even with this bias for action, you have to watch out for three key issues:
- Too much change, which confuses prospects already in the interest-inquiry cycle
- Self-selecting sample bias, which leads to misleading results
- False causation, where you attribute a changed result to the wrong cause
To avoid the first issue (and just to make your life a lot easier), you'll want to leverage your SFA/CRM system, use a content management system (e.g., Joomla) in your web site, and do email blasts with a sequential autoresponder engine (e.g., Vertical Response or Eloqua). Using these tools properly, you can make sure that an individual prospect will get a coherent picture of your company and products for the length of their decision/purchase cycle, even though another prospect will get a different (but equally coherent) picture as you improve things incrementally.
The second issue is a classic for all testing: avoiding non-representative samples. Don't ask people if they want to volunteer for testing. Do make sure that all audience segments (not just customers and prospects) run through your tests, so you don't "tune out" target customers by over-optimizing.
To avoid the third issue, if you identify an important trend that implies significant change (like, "abandon this segment of target customers, they never convert"), do not take action until you have validated the cause/effect relationship. Failure to convert can be caused by 100 things, so you want to make sure there isn't some "invisible" factor that is causing the problem. The best thing to do here is a completely independent test of the result (probably using a phone survey and very good surveyors) to better understand the dynamics that are affecting prospect and customer behavior.