
Bermuda “Triangulation”

by Jon Lorenzini

December 5, 2023


Keeping with the pop culture theme of “Kleven” from my last post, a topic that has been swirling around the measurement space is “triangulation,” which reminds me of Silicon Valley’s “Conjoined Triangles of Success.”



It’s a nice visual and is useful for viewing marketing results from different angles (pun intended), but putting these solutions on the same playing field despite their wild differences in scope, accuracy, comparability, and completeness seems crazy to me. I have seen countless advertisers get stuck in integration hell only to get sucked into the “Bermuda Triangulation,” where results contradict each other and there are no clear winners due to conflicting KPIs. Also, with every addition to your measurement stack, your complexity increases exponentially, non-working media costs rise, and you end up building a Kafkaesque process to arrive at a decision or action. Stop.


To navigate out, you need a clear path forward - a flight or float plan.

  • Which tools are best for answering which questions?

  • How should you leverage the inherent advantages of the different measurement solutions?

  • Where do assumptions, extrapolations, correlations, identity matches, and other fill-in-the-blank steps get you from raw data to ROI?

  • How would an app-only, upper-funnel platform (Snap, TikTok) and a lower-funnel, large-identity-graph, high-intent audience (branded Google search) show up differently based on the inherent advantages and disadvantages of how each is measured?

  • Are you measuring shorter-term outcomes (conversion) or longer-term ones (brand equity, top-of-mind awareness)?

  • What actions will you take from the results of this measurement tool - and what other actions should you take from the other measurement tools?


For me, the biggest bifurcation is pretty simple:

  • Higher-scope tests: Comparability BETWEEN channels requires large and consistent measurement - variance and volume are your friends. The changes in aggregate outcomes you observe when running geo tests or MMM models are about understanding how your most important investments drive your most important KPI (for a majority of my clients, that is marginal incremental ROAS - see the first sketch after this list).

  • Lower-scope tests: Comparability WITHIN a platform for enhancing your execution strategy requires granularity. Testing creative A vs. B at a market or larger level has a lot of executional overhead. In-platform testing allows for identity-level engagement and (match-rate permitting) conversion analysis. While each platform might have a bias, the biases on options A and B are equal if the test is designed correctly, and the results can help you improve the effectiveness of the channel, which in turn improves the effectiveness of your higher-scope tests (see the second sketch below).
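
To make the higher-scope idea concrete, here is a minimal sketch (in Python) of reading marginal incremental ROAS out of a geo test. Everything here is hypothetical - the revenue and spend figures and the matched-market setup - and a real readout would adjust for pre-period differences (e.g., diff-in-diff) and carry a confidence interval, not a point estimate.

```python
# Minimal sketch: marginal incremental ROAS from a geo test.
# All figures are hypothetical, assuming a matched-market design where
# treatment geos receive extra spend and control geos stay at baseline.

treatment_revenue = 1_250_000  # revenue in geos that got the extra spend
control_revenue = 1_100_000    # revenue in matched geos held at baseline
incremental_spend = 100_000    # the extra media dollars in treatment geos

# Incremental revenue is the treatment/control gap; a production analysis
# would adjust for pre-period differences and report an interval estimate.
incremental_revenue = treatment_revenue - control_revenue

# Marginal iROAS: revenue generated per ADDITIONAL dollar of spend.
marginal_iroas = incremental_revenue / incremental_spend
print(f"Marginal incremental ROAS: {marginal_iroas:.2f}")  # -> 1.50
```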

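And for the lower-scope side, a sketch of an in-platform creative A/B readout. The counts are made up; the key assumption is that users were randomly split between creatives, so whatever measurement bias the platform has hits both arms equally and cancels in the comparison.

```python
from math import sqrt, erf

# Minimal sketch: two-proportion z-test for an in-platform creative A/B.
# Counts are hypothetical; assumes random assignment between creatives,
# so platform-level measurement bias affects both arms equally.
conversions_a, impressions_a = 480, 100_000
conversions_b, impressions_b = 560, 100_000

rate_a = conversions_a / impressions_a
rate_b = conversions_b / impressions_b

# Pooled standard error under the null of equal conversion rates.
pooled = (conversions_a + conversions_b) / (impressions_a + impressions_b)
se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
z = (rate_b - rate_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p

print(f"A: {rate_a:.3%}  B: {rate_b:.3%}  z = {z:.2f}  p = {p_value:.4f}")
```

The winning creative may still carry the platform’s bias in absolute terms, but the relative comparison is what feeds back into the higher-scope tests.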

Other bifurcations include:

  • Identity vs. non-identity (geo/time) - do you have user matching?

  • Top down vs. bottom up - Do the assumptions come from splitting revenue down or from joining/extrapolating revenue up?

  • Incremental vs. attributed - Are we looking at the value of the ads or at how qualified the audience already was? (See the first sketch after this list.)

  • Marginal vs. Average - Are you making investment decisions or recapping performance? (See the second sketch after this list.)

  • Sales vs. brand building - Do you want conversions today or market share in the future?

  • Correlative vs. Causal - Did it happen at the same time or because of the ad?
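
On incremental vs. attributed, here is a minimal sketch of the gap, with made-up numbers from an imagined randomized holdout: attribution also credits conversions that would have happened anyway.

```python
# Minimal sketch: attributed vs. incremental revenue, hypothetical numbers.

# What a platform's attribution model (e.g., last-click) claims:
attributed_revenue = 900_000

# What a randomized holdout says, both groups scaled to the same size:
exposed_revenue = 2_000_000  # users eligible to see the ads
holdout_revenue = 1_700_000  # users withheld from the ads
incremental_revenue = exposed_revenue - holdout_revenue  # caused by the ads

# The gap is the "how qualified is the audience?" question: attribution
# also credits conversions that would have happened without the ads.
print(f"Attributed revenue:  ${attributed_revenue:,}")   # $900,000
print(f"Incremental revenue: ${incremental_revenue:,}")  # $300,000
```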

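And on marginal vs. average, a toy diminishing-returns curve (purely hypothetical) shows why the recap number and the investment-decision number diverge.

```python
import math

# Minimal sketch: marginal vs. average ROAS on a toy saturating revenue
# curve. The curve shape and all figures are hypothetical.
def revenue(spend: float) -> float:
    # Diminishing returns: revenue grows with spend but flattens out.
    return 500_000 * math.log1p(spend / 50_000)

spend = 200_000
average_roas = revenue(spend) / spend  # recaps past performance

# Marginal ROAS: return on the NEXT dollar, approximated numerically.
delta = 1_000
marginal_roas = (revenue(spend + delta) - revenue(spend)) / delta

print(f"Average ROAS:  {average_roas:.2f}")   # ~4.02
print(f"Marginal ROAS: {marginal_roas:.2f}")  # ~2.00
```

At this spend level the average looks great while the next dollar returns half as much - exactly the distinction between recapping performance and making investment decisions.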

Would be curious what combinations of measurement solutions y’all are using and which solution solves which problem. Sound off in the comments below 🙂

