Consistent, iterative testing, driven by best practices and performance-minded design, is very often the best way for businesses to make more money. At JB we develop web-based testing programs that quickly drive real, compounding gains for your bottom line.

Improve your business with the power of small gains and constant learning

We have seen tests of every scale produce positive results for clients, from simple button-color changes and copy tweaks for clearer messaging to much more complex experiments.


Among the forces preventing success in any given organization is the allure of Big Projects and their Big Results. Website redesigns, huge campaign initiatives, and the like can activate a lot of energy, marshal creativity, and lend a sense of gravity to the work being done. Big changes look good on resumes, help establish clout for departments and leadership, and give teams and organizations something to talk about. In short, they have A LOT going for them. We see this across our culture, from blockbuster movie releases to the GameStop / Reddit frenzy. Big moves. Big wins. Bigger is better and more noteworthy. No arguments there. It’s just not the whole story.

Imagine drafting a presentation for major company stakeholders and reporting the results of your hard work: a 1% gain. In all but the savviest conference rooms, this is boring and unremarkable.

Now consider the chart below, which tracks a 1% gain made each day. Compounded, a 1% daily gain grows to roughly 37x the baseline after a year. A team or organization that drove 1% gains every day would have an outsized impact over the long term, provided it was allowed to plod along making steady, small wins.
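The arithmetic behind that chart is ordinary compounding; a quick sketch (the 1%-per-day figure is the illustrative number from the text, not a promised result):

```python
# Compound a 1% daily gain over a year: each day multiplies the
# running total by 1.01, so small wins stack multiplicatively.
baseline = 1.0
daily_gain = 0.01

value = baseline
for _ in range(365):
    value *= 1 + daily_gain

print(f"After one year: {value:.2f}x the baseline")  # ~37.78x
```

The same loop run with a 1% gain per week or per sprint still compounds; only the horizon changes.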

What is Testing as a Service (TaaS)?

TaaS lets you contract our team to design and run tests that improve a given digital KPI. We charge a flat rate, so your investment is clear up front and can be planned for.

After onboarding with your team, we will work together to ensure the testing suite and analytics are set up and configured properly.

Once everything is set up and ready for testing, we will provide test mockups for approval as appropriate and then implement them. We’ve worked with senior creatives at brands like Spotify and Netflix as well as with much less design-focused (but highly profitable) B2B firms.

When results reach statistical significance, we will report the winner and loser and provide context for why we think the result came out the way it did. If you have a KPI that is directly tied to revenue, or is revenue itself, we can report the approximate gain from implementing the winning option.
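As one illustration of what “statistical significance” means in a conversion test, here is a standard two-proportion z-test; the session and conversion counts below are hypothetical, and real programs often use richer methods (see the analysis tools later in this page):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z statistic for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: baseline converts 200/5000, variant converts 250/5000.
z = two_proportion_z(200, 5000, 250, 5000)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

With these made-up numbers the variant’s 5% rate beats the 4% baseline at the conventional 95% level (|z| > 1.96).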

What tools do you use to run tests?

  1. Qualitative Research: User interviews, competitor analysis and benchmarking, task analysis and surveys. 
  2. Analytics Tools: Google Analytics, Tag Manager, and Data Studio for baseline analytics; Hotjar, Crazy Egg, and FullStory for heat mapping. 
  3. Testing Engines: Visual Website Optimizer, Optimizely, Google Experiments, and server-side tools in MODX, Rails, and React.
  4. Design Tools: Sketch, InDesign, Figma, Adobe Creative Suite. 
  5. Analysis Tools: Google Docs, R, Julia, Python. Adaptive Bayesian methods, standard confidence testing, and expert narrative and interpretive review of results.

Interested in help thinking through the methodology, design, or specifics of a test? Reach out to speak with one of our experts.

What are the tradeoffs? 

  • Does your site or target page receive fewer than 1,000 sessions per month but still have considerable revenue potential? If so, you might consider leveraging Google CPC or display advertising to test messaging so that you can learn faster. In most cases, ad impressions are two orders of magnitude cheaper than sessions. 
  • Are we relatively (>95%) sure that the test will win based on best practices? If so, you might be better served by going live and measuring pre/post changes with a regression discontinuity design. This is worse from a pure statistical-inference standpoint, but it might be better for the business. 
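For the low-traffic case, a common back-of-the-envelope sample-size rule (n ≈ 16·p(1−p)/δ² per arm, roughly 80% power at 5% significance for a two-arm test) shows why small sites test slowly; the 3% baseline rate and 20% target lift below are hypothetical:

```python
def sessions_per_arm(base_rate, relative_lift):
    """Rough sessions per arm to detect a lift at ~80% power, 5% alpha."""
    delta = base_rate * relative_lift     # absolute effect to detect
    return 16 * base_rate * (1 - base_rate) / delta ** 2

# Hypothetical: 3% baseline conversion, hoping to detect a 20% relative lift.
n = sessions_per_arm(0.03, 0.20)
print(f"~{n:,.0f} sessions per arm needed")  # ~12,933 per arm
```

At under 1,000 sessions per month, roughly 13,000 sessions per arm means a test running for years, which is why cheaper ad impressions or a pre/post design can be the pragmatic choice.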

Is TaaS Right for You?

You should consider TaaS if you:

  • Are an organization with a measurable KPI you want to drive, especially if it’s revenue (e.g. sale of a widget) or revenue-related (e.g. an application start, if you’re a school).
  • Have ideas or changes whose impact you want to test. Obviously, not every idea can be “good.” When in doubt or at an impasse, let the numbers (and, by extension, your users) tell you what works best for them. This is a critical part of being good at being wrong.
  • Are currently running PPC campaigns and want to get better value from each click.
  • Have a high volume of visitors. The more traffic, the faster we can test. If you don’t have a ton of traffic, no problem; results will just take time, so picking your shots becomes more important.
  • Lead an organization that wants to produce measurable, positive results and generate knowledge for continued improvement.

If you’re more technically minded, let’s talk. We’ve worked on a range of technical issues, including:

  1. How do we manage testing given the complexities of multi-touch attribution?
  2. How should we handle power-law, non-Gaussian revenue distributions across customers?
  3. Should our stopping rules be based heuristically on regret minimization, error minimization, or revenue maximization?
  4. How do we test in a many-parameter space?
  5. How does continuous polling of test results (peeking, alpha spending) affect inference?
  6. How should we test for sophisticated audiences, many of whom use VPNs and ad blockers?
  7. Are you well served by alternative methods like multi-armed bandits, adaptive Bayesian methods, etc.?
  8. How should we gather qualitative data for test ideation, and how do we weight it against quantitative data when forming hypotheses?
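On the bandit question: unlike a fixed 50/50 split, a multi-armed bandit reallocates traffic toward the better variant as evidence accumulates. A minimal Thompson-sampling sketch using Beta posteriors; the two conversion rates are invented purely for the simulation:

```python
import random

random.seed(42)

true_rates = [0.03, 0.08]   # unknown in practice; hypothetical values here
wins = [0, 0]               # conversions observed per arm
losses = [0, 0]             # non-conversions observed per arm
pulls = [0, 0]

for _ in range(3000):
    # Draw a plausible conversion rate for each arm from its Beta(wins+1,
    # losses+1) posterior, then show the arm whose draw is highest.
    samples = [random.betavariate(wins[i] + 1, losses[i] + 1) for i in range(2)]
    arm = samples.index(max(samples))
    pulls[arm] += 1
    if random.random() < true_rates[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1

print(f"Pulls per arm: {pulls}")  # traffic concentrates on the stronger arm
```

The tradeoff is the one hinted at in point 3 above: bandits minimize regret (lost conversions during the test) at some cost to clean statistical inference about the losing arm.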

Although JB has a presence in two countries, we’re a close-knit group; most of us have worked with one another for several years, and we meet up in person regularly. If you’re looking for a cohesive team that can get things done, you’re in the right place.

JB Web Experts