Startup marketing lessons from scientific advertising

Everything old is new again - lessons from a 92-year-old book.

In 1923, Claude Hopkins, the man behind innovations like coupons and split testing, wrote the hugely influential Scientific Advertising. David Ogilvy's verdict: "Nobody should be allowed to have anything to do with advertising until he has read this book seven times. It changed the course of my life." With an endorsement like that, it's worth unearthing some of its key lessons for today's marketers.


Lesson 1: understand the job-to-be-done

There is nearly always something impressive which others have not told. We must discover it. We must have a seeming advantage. People don't quit habits without reason.

Claude Hopkins

Our products need to be perceived as solving a true customer need. But changing people's behaviour, and getting them to purchase your product, is not easy. Hopkins' approach centres around identifying the customer's unique buying motivator, and using that motivator to influence people's behaviour.

Too often, we see products fall flat because their positioning is fundamentally incorrect. These products are often designed by internal committee and seek to solve a 'problem' no one actually has.

The reason for most of the non-successes in advertising is trying to sell people what they do not want.

Claude Hopkins

To marry product development with customer needs, Clayton Christensen developed the jobs-to-be-done framework. Jobs-to-be-done builds on Hopkins' premise: it is designed to uncover the causal mechanism behind a purchase decision. For example, it explains why chocolates and flowers compete with each other on Valentine's Day - both products are "hired" to demonstrate affection. By understanding the causal driver behind a purchase decision, we can adjust our product development priorities, market positioning, messaging and design.

First steps: learn the jobs-to-be-done framework from jobstobedone.org. The site, blog, podcast and resources have built on Christensen's original work to form a robust methodology. They even have an online course.


Lesson 2: seek customer feedback

Some advertising men go out in person and sell to people before they plan to write an ad. One of the ablest of them has spent weeks on one article, selling from house to house. In this way they learn the reactions from different forms of argument and approach. They learn what possible buyers want and the factors which don't appeal. It is quite customary to interview hundreds of possible customers.

Claude Hopkins

The larger, more diverse and more independent our feedback, the more accurate it will be - James Surowiecki's argument in The Wisdom of Crowds. To gather that feedback, we need to "get out of the building." Coined by Steve Blank in The Four Steps to the Epiphany, the phrase is a pointed reminder that products are built for people. So if we build a product, we need to get out of the building and talk to the people who will actually use it. If we don't take the time to understand what customers want, our product will be useless.

Blank's former student, Eric Ries, built on this work with The Lean Startup. Within the lean startup, product development cycles are shortened through "iterative learning", a process whereby product changes are tested incrementally and iteratively with customers in the real world.

First steps: understand and use the customer development Validation Board. This useful poster and methodology will help you understand and implement customer validation in your product development.


Lesson 3: optimise based on outcomes

Scientific advertising is impossible without [knowing your results]. So is safe advertising. So is maximum profit. Groping in the dark in this field has probably cost enough money to pay the national debt.

Claude Hopkins

We can't improve if we don't know what we're improving towards, and a clear set of metrics is critical to improving our decisions and behaviour. In Lean Analytics, a good metric is defined as being:

  • Comparative: the metric can be compared with and sliced by different dimensions such as time or user segments

  • Understandable: is intuitive and easily discussed, like a customer's lifetime value, the cost to acquire a customer or the monthly customer churn rate

  • A ratio or rate: knowing the proportion of customers churning each month is far more useful than the raw total of customers who have ever churned. Ratios are inherently comparative and give us insight into velocity.

But perhaps most importantly, good metrics enable us to change how we behave. There's no point measuring something unless it's going to drive our behaviour. In other words: drop the vanity metrics!
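To make the "ratio or rate" criterion concrete, here is a minimal sketch of monthly churn expressed as a rate rather than a running total. The customer counts are invented for illustration:

```python
# Invented figures: customers at the start of each month, and how many left.
monthly = [
    # (month, customers_at_start, customers_lost)
    ("2015-01", 1000, 40),
    ("2015-02", 1100, 33),
    ("2015-03", 1250, 25),
]

for month, start, lost in monthly:
    # A ratio: comparable month to month, intuitive, and easy to discuss.
    churn_rate = lost / start
    print(f"{month}: {churn_rate:.1%} monthly churn")
```

Expressed this way, the metric is comparative (churn is falling month on month), understandable, and a rate - whereas the cumulative count of churned customers would only ever go up.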

In order to help focus startups on behaviour-driving metrics, 500 Startups founder Dave McClure created a presentation and framework called Startup Metrics for Pirates. In it, McClure breaks down the customer lifecycle across acquisition, activation, retention, referral and revenue categories and recommends placing behaviour-driving metrics into each of the categories.
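A rough sketch of how the AARRR funnel can be tracked as stage-to-stage rates - the stage definitions and numbers below are hypothetical, not McClure's:

```python
# Invented funnel: users reaching each pirate-metrics stage.
funnel = [
    # (stage, users reaching this stage)
    ("acquisition", 10000),  # visited the site
    ("activation", 2500),    # completed sign-up
    ("retention", 1000),     # returned within 30 days
    ("referral", 200),       # invited someone else
    ("revenue", 150),        # became a paying customer
]

# Each stage's conversion rate from the previous stage - a comparative
# ratio that shows where the funnel leaks most.
for (stage, users), (_, prev) in zip(funnel[1:], funnel):
    rate = users / prev
    print(f"{stage}: {rate:.0%} of previous stage")
```

Reporting each stage as a rate, rather than a raw count, makes it obvious which stage to work on next.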

First steps: understand the principles in Lean Analytics and Startup Metrics for Pirates. Identify the critical metrics to drive your marketing programme. Automate the consolidation and reporting of your datasets (website, channel reach, sales etc).


Lesson 4: continually test in market

Almost any question can be answered, cheaply, quickly and finally, by a test campaign. And that is the way to answer them - not by arguments around a table.

Claude Hopkins

In Hopkins' time, a marketing test involved laboriously coding coupons or testing a message in a specific region and then analysing the sales impact. Today, digital media make the cost of testing creative variations negligible, whether through your website, emails, social posts or advertising buys.

Behemoths like Amazon are famous for continually testing variations on their site. In one famous example, Amazon increased revenue by an estimated $2.7 billion a year based on a simple tweak to its review interface.

First steps: understand the basic capabilities and processes of multivariate tools like Optimizely or Website Optimizer. Then begin to roll out multivariate tests:

  • Using your behaviour-driving metrics (see above), identify a metric you'd like to influence, such as sign-up conversions

  • Identify the point with the most influence on the metric, e.g. sign-up page

  • Hypothesise how the current page needs to change in order to drive an increase in conversion rates, e.g. add social sign-up

  • Implement your test with a multivariate testing tool like Optimizely or Website Optimizer

  • Determine the winner: you want confidence that the test results aren't down to chance. Confidence in a test is a function of sample size and variation in results. You can use the chi-squared test to determine confidence levels and the margin of error.
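The confidence check in the final step can be sketched with a plain chi-squared test on a 2x2 table (converted vs. not converted, for two variants). The visitor and conversion counts below are invented for illustration:

```python
def chi_squared_2x2(conv_a, total_a, conv_b, total_b):
    """Pearson chi-squared statistic for a 2x2 contingency table:
    converted vs. not converted, for variants A and B."""
    observed = [
        [conv_a, total_a - conv_a],
        [conv_b, total_b - conv_b],
    ]
    grand_total = total_a + total_b
    row_totals = [total_a, total_b]
    col_totals = [conv_a + conv_b, grand_total - conv_a - conv_b]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the null hypothesis of no difference.
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical test results: control page vs. social sign-up variant.
stat = chi_squared_2x2(conv_a=120, total_a=2400, conv_b=165, total_b=2450)

# With 1 degree of freedom, a statistic above 3.841 means p < 0.05,
# i.e. 95% confidence that the difference isn't down to chance.
print(f"chi-squared statistic: {stat:.2f}")
print("significant at 95% confidence" if stat > 3.841 else "not yet significant")
```

In practice a multivariate tool computes this for you; the point of the sketch is that confidence comes from sample size - with ten times fewer visitors, the same conversion rates would not clear the threshold.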