There’s a lot of discussion of “Resellers” and “Supply Paths” nowadays, often with someone selling you a solution to help make things more direct or warning advertisers about some nefarious middle man trying to steal their money.
Middle men do indeed exist, this is ad tech.
But for all of the bellyaching, this doesn’t seem to be a situation that’s meaningfully changing outside of TTD’s OpenPath. I’d like to attempt to explain why, and also to explore why I deeply believe that blame is often misplaced here (stop blaming publishers, you monsters) and how the current situation actually isn’t as dire as it’s made out to be. So strap on in, and let’s take a walk down memory lane to explore how we ended up where we are.
The Before Times - Waterfalling Display
One argument I’d like to make is that the Display market is as close to a perfect market as has existed in online advertising for some time. A unified, open-source-facilitated first price auction might not be perfect, but gosh darnit it’s better than the alternatives. What alternatives, you say? Well, let’s discuss what Display used to be (and what much of video and mobile app still is, or at least was a year or two ago when I was building products for it).
Let’s start with some definitions:
Demand - someone who represents (eventually) advertisers and wants to buy ads on a website
Supply - a publisher’s website (they have ad inventory they want to sell), or someone reselling that inventory
Ad Exchange - a system that aggregates integrations with buying tools via OpenRTB (do proprietary bidders still exist? I guess maybe…)
DSP - a system that works with advertisers and agencies to place ads on websites by integrating with Ad Exchanges or with Publishers to algorithmically bid on that inventory
Now, in the before times, life was simple. Publishers, the supply, would work with Ad Exchanges synchronously (basically never with DSPs directly, though Criteo was early here and then took steps backwards, and AppNexus kind of invented the game so they don’t really count as a pure DSP). Synchronously means that exchanges could only be worked one at a time, in a set order: supply would use a JavaScript ad tag to call a single exchange, that exchange would call its DSPs/bidders, and an auction would occur. If that exchange didn’t buy, normally against an arbitrary price floor band set up for that position (again, this was a miserable but necessary use of price floors, and I think it plays some role in people misapprehending the “necessity” of price floors in modern ad tech), you would call the next exchange.
This was known as the waterfall: as you called an exchange, if it didn’t buy the advertising opportunity, you would call the next one. Because of the synchronous nature of waterfalling, DSPs basically _never_ received a request for the same impression at the same time; an impression could really only be for sale in one place at a time.
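To make the mechanics concrete, here’s a minimal sketch of waterfall logic in TypeScript. Everything in it is hypothetical (the exchange names, the floors, the fill rates); it’s just the shape of the thing:

// A hypothetical, simplified waterfall: call exchanges one at a time,
// in a fixed order, each with a descending price floor, until one buys.
interface ExchangeTier {
  name: string;
  floor: number; // CPM floor for this position in the waterfall
}

// Descending floors: the "arbitrary price floor band" described above.
const waterfall: ExchangeTier[] = [
  { name: "exchangeA", floor: 2.0 },
  { name: "exchangeB", floor: 1.0 },
  { name: "exchangeC", floor: 0.25 },
];

// Stand-in for the ad tag calling an exchange and waiting for a result.
async function callExchange(tier: ExchangeTier): Promise<boolean> {
  // In reality: fire a JS ad tag, the exchange runs its own auction
  // against its DSPs, and either fills at >= tier.floor or passes back.
  return Math.random() < 0.2; // pretend ~20% fill at any given floor
}

async function sellImpression(): Promise<string | null> {
  for (const tier of waterfall) {
    // Synchronous in the economic sense: only one exchange sees the
    // impression at a time, so DSPs never compete across exchanges.
    if (await callExchange(tier)) return tier.name;
  }
  return null; // fell all the way through the waterfall: unsold
}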
This was great for DSPs because of fragmentation. They only had to compete with the other DSPs within a given exchange at any one time, and inventory was cheap. Combine that with second price auctions, and I think this was the heyday of cheap inventory buying for DSPs, counteracted only by the massive amount of real ad fraud that existed via unpoliced domain spoofing (literally, you could type &domain=google.com at the end of an ad tag and exchanges would pass it along. I am dead serious).
My second company, RTK.io, was building a dynamic waterfalling adserver that monitored which inventory a given exchange was most likely to buy at a given price floor, and dynamically rearranged the order of the waterfall impression by impression. The idea was so good that video adservers all started doing it, and basically still do it today, because VAST and VPAID are kind of awful and for various reasons can’t escape the waterfall. Btw, this is the original purpose of an SSP, a term since bastardized into a synonym for ad exchange: check out LiftDNA or the original Rubicon product, which existed to optimize and manage waterfalls.
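For flavor, here’s roughly what that dynamic reordering looked like in spirit. The stats and scoring below are invented for illustration; the real thing was considerably more involved:

// Hypothetical dynamic waterfall: track each exchange's observed fill
// rate at each floor, then re-sort the waterfall by expected value
// (floor * fill probability) for every new impression.
interface ExchangeStats {
  name: string;
  floor: number;
  fills: number;    // times this exchange bought at this floor
  attempts: number; // times it was called at this floor
}

function expectedValue(s: ExchangeStats): number {
  const fillRate = s.attempts > 0 ? s.fills / s.attempts : 0;
  return s.floor * fillRate; // crude expected CPM contribution
}

function reorderWaterfall(stats: ExchangeStats[]): ExchangeStats[] {
  // Call the exchange most likely to pay the most, first.
  return [...stats].sort((a, b) => expectedValue(b) - expectedValue(a));
}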
Header Bidding Murders My Product and Frees Publishers
Around 2014, there were whispers about people placing JavaScript in the header of publishers’ pages and blocking the pageload until the auction completed. My reaction at the time was “Yeah, right. Like any publisher will ever let you do this. They have to risk their entire pageload for programmatic ads? No way.”
Joke’s on me, and I was running a header bidding company 8 months later. The reason this technology was so absurdly effective was that it achieved the most important thing a publisher could do to maximize their yield: flattening a fragmented auction. By having all exchanges compete at the same time, in the same place, you plugged a bunch of holes in what was a leaky waterfall bucket. This is because buyers are smart: they realized that if they waited to buy a given impression on a given exchange, even one lower down in the waterfall, they got inventory that performed just as well, cheaper. Optimizing exchange vs exchange was an incredibly important part of buy-side optimization. But in a header bidding context, because a low bid had to compete with higher bids from other systems, this waiting game ended.
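If you’ve never seen it, a header bidding setup in Prebid.js looks something like the sketch below. The bidder names and params are placeholders, but the key point is real: every exchange listed gets the same ad unit at the same time, and their bids land in one pile.

declare const pbjs: any;      // provided by the Prebid.js script tag
declare const googletag: any; // provided by Google Publisher Tag

// A minimal Prebid.js setup (bidder names/params are placeholders).
// Every bidder listed here receives the request simultaneously --
// this is the "flattening" of the old sequential waterfall.
const adUnits = [{
  code: "div-leaderboard",
  mediaTypes: { banner: { sizes: [[728, 90]] } },
  bids: [
    { bidder: "exchangeA", params: { placementId: "123" } },
    { bidder: "exchangeB", params: { zoneId: "456" } },
    { bidder: "exchangeC", params: { tagId: "789" } },
  ],
}];

pbjs.que.push(function () {
  pbjs.addAdUnits(adUnits);
  pbjs.requestBids({
    timeout: 1000, // all exchanges race the same clock
    bidsBackHandler: function () {
      // The highest bid across ALL exchanges goes to the ad server --
      // no exchange gets a "second look" lower down a waterfall.
      pbjs.setTargetingForGPTAsync();
      googletag.pubads().refresh();
    },
  });
});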
This brings us to the crux of our article. Why does simultaneously sending a bid request to many different exchanges, ergo many different DSPs, for the same impression at the same time yield more revenue? I’d like to posit a few explanations, all of which I think are potentially legitimate, and all of which justify the existence of resellers and of tons of players in the supply chain, at least from a publisher’s perspective, for as long as the buy side doesn’t want to build OpenPath (and maybe even for some time afterward).
Why Does Duplication Work?
In a header bidding auction, setting aside the other nuances of the auction-dynamics transition away from waterfalls, a single DSP can receive many bid requests for the exact same impression (even more if exchanges are engaging in bid jamming, which some do: artificial bid request duplication intra-exchange as opposed to inter-exchange. Yes this is real, and yes it works. Literally, an exchange will make fake additional bid requests for a single real one). And somehow this makes publishers more money. I’ve worked with literally hundreds of publishers, building their ad stacks and building header bidding management platforms, and I can tell you that, up to a point, adding additional bidders with access to real DSPs will almost always make you additional money (with diminishing returns, but those kick in at a pretty high number of exchanges). I’m sure some of you will want to fight me on this, but all you need to do is look at the ad stacks of some of the biggest managed services out there to know that they’ve A/B tested themselves into 10-15 client-side bidders and even more server-side. And no, I don’t care that you removed a bidder and didn’t seem to make less money. If you didn’t do it properly, with server-side, session-persistent A/B testing, it doesn’t count. I will die on this hill.
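And since I’m dying on that hill, here’s roughly what I mean by server-side, session-persistent A/B testing. This is a sketch (the hashing scheme and split are illustrative): a session is deterministically assigned to one ad stack and stays there, so you compare revenue per session between arms instead of eyeballing day-over-day totals:

import { createHash } from "crypto";

// Deterministically bucket a session into test/control so the same
// session always sees the same ad stack -- no contamination between arms.
function bucketForSession(sessionId: string, testShare = 0.5): "A" | "B" {
  const digest = createHash("sha256").update(sessionId).digest();
  // Map the first 4 bytes of the hash to [0, 1).
  const fraction = digest.readUInt32BE(0) / 0xffffffff;
  return fraction < testShare ? "A" : "B";
}

// Arm A: full bidder list. Arm B: the bidder you think is useless, removed.
// Compare revenue per session between arms, not day-over-day totals.
const stack = bucketForSession("some-session-id") === "A"
  ? ["exchangeA", "exchangeB", "exchangeC"]
  : ["exchangeA", "exchangeB"];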
Possible Explanation #1 - Distributed Computing
My first explanation for why duplication works is rooted in the nature of what a DSP is. You see, DSPs aren’t single advertisers; DSPs are massive agglomerations of hundreds of advertisers and thousands of campaigns and strategies. This means that at any given moment, for a given impression, within a single DSP, there are probably hundreds of eligible campaigns.
All of these campaigns have lives of their own. They have bid valuations associated with how many outcomes a given piece of inventory at a given time on a given user has driven for them, they have pacing considerations, they have all of the massively intricate workings of a modern advertising campaign. Each of these campaigns is not particularly aware of other campaigns in the same DSP.
In addition to this, the existence, the concurrent spend, and the state of each of these campaigns has to be dynamically maintained across servers all over the world, in datacenters all over the place. Oh and DSPs are expected to respond in under 100ms. Not easy.
Given that this is the case, is it such a surprise that if I hit a DSP twice, almost simultaneously, with an identical bid request, two different campaigns might come back as the winner? It could be pacing algorithms turning campaigns on and off. It could be different servers with different priorities. It could be any number of things. But to expect consistency here is to misunderstand how DSPs actually work and the task they have laid before them. I’m sure some DSPs have tried to architect for this, and I’d love to hear from some DSP product leads about solving this problem in the comments below. This explanation is rooted in the notion that IF a DSP responds differently, even for an identical impression almost simultaneously, you have now created an optimization vector for publishers. I can pound the DSP until it gives me the highest number. This works, has always worked, and we shouldn’t be surprised.
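A toy model makes the point. Everything below is invented for illustration: a “DSP” holding a pile of campaigns whose eligibility flickers with pacing. Hit it twice with the identical request and you’ll frequently get two different answers:

// Toy DSP: many campaigns, each with its own valuation and a pacing
// gate that flips on and off as budget is spent through the day.
interface Campaign {
  id: string;
  bidCpm: number;
  pacingOdds: number; // probability this campaign is "on" right now
}

const campaigns: Campaign[] = Array.from({ length: 200 }, (_, i) => ({
  id: `campaign-${i}`,
  bidCpm: 0.5 + Math.random() * 4,
  pacingOdds: 0.2 + Math.random() * 0.6,
}));

// Respond to a bid request: highest bid among currently-eligible campaigns.
function bid(): { id: string; cpm: number } | null {
  const eligible = campaigns.filter((c) => Math.random() < c.pacingOdds);
  if (eligible.length === 0) return null;
  const winner = eligible.reduce((a, b) => (a.bidCpm > b.bidCpm ? a : b));
  return { id: winner.id, cpm: winner.bidCpm };
}

// "Pound the DSP until it gives me the highest number":
const responses = [bid(), bid(), bid()]; // identical request, three answers
console.log(responses); // often three different campaigns, three prices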
TLDR - DSPs will never respond with the same campaign over and over, and the people responsible for that are the DSPs themselves. I can get you to pay me more money by hitting you more.
Possible Explanation #2 - Asymmetrical Filtering
The DSP and SSP world has become awash with “optimization” solutions dedicated to traffic shaping. With all of the duplication we see in the marketplace, every major player out there has some kind of traffic shaping trying to optimize cost – often interfering with one another. Allow me to explain.
Phase 1 - DSPs try to control cost
One of the most expensive things in the world, if you use AWS, is egress. AWS charges out the wazoo for it. Egress is the cost of transmitting information between servers (engineers, please don’t @ me, I know this is a gross simplification): a volume-based cost for sending out packets of data. It can be mitigated via your tech stack, or by owning other parts of the pipes, but lots of people like to use AWS because it’s easy and lots of people know it. This means that DSPs hate listening and responding to traffic that they’re probably not going to buy. It literally costs them money, and if there’s almost no chance they’re going to buy it, they don’t want to listen to it. Cue solution #1, filtering, where DSPs will literally just “not listen” to a bunch of traffic that comes from SSPs.
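Concretely, “not listening” tends to look something like the sketch below. The rules are invented, but they’re the flavor of thing that gets deployed:

// Hypothetical DSP-side pre-filter: drop whole slices of traffic
// before spending compute (and egress) evaluating them.
interface IncomingRequest {
  exchange: string;
  domain: string;
  historicalWinRate: number; // how often we've ever bought this slice
}

function shouldListen(req: IncomingRequest): boolean {
  // Entire exchanges can be throttled to a fraction of their QPS...
  const exchangeSampleRate: Record<string, number> = {
    exchangeA: 1.0,
    exchangeB: 0.4, // only listen to 40% of what exchangeB sends
  };
  const rate = exchangeSampleRate[req.exchange] ?? 0.1;
  if (Math.random() > rate) return false;

  // ...and slices we essentially never buy get dropped outright.
  return req.historicalWinRate > 0.0001;
}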
Phase 2 - Traffic Shaping
DSPs not listening to traffic means they spend less money, and SSPs don’t like that. So, in their infinite wisdom, SSPs came up with their own solution: we’re going to write machine learning that tries to predict which impressions a DSP wants to buy. And they do. And it (kind of?) works. I hate this because of the functional nature of it: SSPs are, by proxy, building an optimization algorithm that finds traffic that converts for advertisers, without actually knowing what they’re optimizing toward, which creates a lot of opportunity for noise and interference. But this is a thing. Everyone does it.
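Sketched out, the SSP side of this arms race looks roughly like this (a made-up scoring function standing in for the real ML):

// Hypothetical SSP traffic shaper: per-DSP model that predicts whether
// a given bid request is worth sending at all.
interface BidRequestFeatures {
  domain: string;
  adSize: string;
  country: string;
  hasUserId: boolean;
}

// Stand-in for a trained model: in reality this is ML fit on each DSP's
// historical bid/no-bid/win behavior -- a proxy for advertiser
// optimization the SSP can't actually see.
function predictBidProbability(dsp: string, f: BidRequestFeatures): number {
  let p = 0.05;
  if (f.hasUserId) p += 0.2;          // most DSPs bid more with an ID
  if (f.adSize === "728x90") p += 0.05;
  return Math.min(p, 1);
}

function shouldSend(dsp: string, f: BidRequestFeatures): boolean {
  // Only spend the egress if the model thinks this DSP might bid.
  return predictBidProbability(dsp, f) > 0.1;
}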
Phase 3 - Current State
Every SSP <> DSP interaction is now shaped by a convoluted mess of filtering and traffic shaping, and the likelihood that a given impression consistently slips through becomes very low. This means that different exchanges are going to “surface” different impressions to different DSPs, with precious little rhyme or reason to it.
In Phase 3, the solution is simple: SEND MORE! By sending more, through as many connections as possible, you maximize the likelihood that your impressions make it through the filter/traffic-shaping wringer to the promised land of biddable impressions for a DSP.
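The arithmetic here is brutally in the publisher’s favor. If any single path to a DSP only lets a given impression through some fraction of the time, adding paths compounds the odds (the pass rates below are made up):

// Probability that an impression reaches the DSP through at least one
// path, if each path independently passes it with probability p.
function reachProbability(p: number, paths: number): number {
  return 1 - Math.pow(1 - p, paths);
}

console.log(reachProbability(0.3, 1)); // 0.3   -- one exchange
console.log(reachProbability(0.3, 3)); // ~0.66 -- three exchanges
console.log(reachProbability(0.3, 8)); // ~0.94 -- "SEND MORE!"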
TLDR - Because DSPs control cost, they don’t listen to some of my inventory opportunities’ bid requests. If I send more, through more places, I hit the DSP more, and I make more money.
Possible Explanation #3 - “Bid Salad Dressing”
My third explanation for why duplication works in header bidding is related to the nature of OpenRTB itself. In a given OpenRTB auction, a bid request for a display ad unit looks like this (found by googling “OpenRTB bid request”):
{
  "id": "df472a5ca259ef79fec1567f17160ff545a80fbe",
  "at": 2,
  "tmax": 129,
  "imp": [{
    "id": "1",
    "tagid": "70821",
    "banner": {
      "w": 728,
      "h": 90,
      "battr": [9, 1, 2, 14018, 14014, 3, 5, 14, 13, 14005, 10, 14015, 8, 14019],
      "api": []
    },
    "iframebuster": []
  }],
  "site": {
    "id": "15756",
    "domain": "http://zoopla.co.uk",
    "name": "Zoopla",
    "cat": ["IAB21"],
    "page": "http://www.zoopla.co.uk/for-sale/details/31611794?search_identifier=e12d33a11efeb568768a845e59d51dfc",
    "publisher": {
      "id": "9208"
    }
  },
  "device": {
    "ua": "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.3; .NET4.0C; .NET4.0E; MS-RTC LM 8)",
    "ip": "192.168.1.1",
    "language": "en",
    "devicetype": 2,
    "js": 1,
    "geo": {
      "country": "GBR",
      "region": "HNS"
    }
  },
  "user": {
    "id": "64c017a3d756e2566596cc1499294d1b602c88a3",
    "buyeruid": "7be2ed3d-a245-4045-af05-f15c9771a73e",
    "ext": {
      "sessiondepth": 5
    }
  }
}
As you can see, there are a bunch of fields, and the exchange could add more if it wanted to. This is significant because the population of these fields is entirely up to the exchange: it depends on the information they pull out of the browser, on the information they have about the user, on the third party services that map their user agents and geos, on their servers and their hosting, on all kinds of random stuff that will differ from exchange to exchange. This means that for a given impression, you will see tremendous inconsistency in how that advertising impression is presented to DSPs, and from that inconsistency you get asymmetry in DSP optimization nodes. Now, I will say, Prebid has the opportunity to fix all of this; but until Prebid is passing OpenRTB bid requests itself, and SSPs become dumb proxies that pass those along, we’re going to have this asymmetry, and we might still have it after that. This means that working with more exchanges will make you more money: higher variability in optimization nodes will lead to better outcomes.
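To illustrate (both objects below are fabricated), here is the same impression as two different exchanges might present it. A DSP’s models see two different-looking opportunities:

// The same pageview, hypothetically, as represented by two exchanges.
// Neither is "wrong" -- they just pulled and mapped different things.
const viaExchangeA = {
  site: { domain: "zoopla.co.uk", cat: ["IAB21"] },
  device: { devicetype: 2, geo: { country: "GBR", region: "HNS" } },
  user: { buyeruid: "dsp-cookie-123", ext: { sessiondepth: 5 } },
};

const viaExchangeB = {
  site: { domain: "www.zoopla.co.uk" }, // different domain normalization
  device: { devicetype: 1, geo: { country: "GB" } }, // different geo vendor
  user: {}, // cookie sync failed here: no buyeruid at all
};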
Conclusion - RESELLERS AND DUPLICATION AREN’T A BIG DEAL AND I WANT ALL OF YOU TO STOP COMPLAINING ABOUT IT
I want to double click (lol, I hate me) on that concluding thought, because it underpins how I feel about all of these things. At the end of the day, the solutions that generate more revenue for publishers are the ones that enable advertisers to optimize more effectively. One might read this article and think that advertisers are being injured here. I would argue the exact opposite. If a tool makes a publisher more money, it’s either because that inventory is being surfaced to buyers it would not have been surfaced to before, or because the additional information being surfaced to buyers is leading to improved outcomes for those buyers. This is a good thing. We want this. The ultimate problem for third party ad tech is how we make it work better, not necessarily how we make it more “efficient” from an infrastructure perspective.
Let’s not forget, as well, that DSPs are in the driver’s seat here. If a DSP wanted to listen to all of a publisher’s inventory in one place with no duplication, it’s incredibly easy to reach out to that publisher and agree on a time and place. So, so, so easy. A college kid could do it with 500 publishers in a year. Here’s a sample email that this theoretical college kid could use:
“Hello publisher A, I work at DSP B. We’d like to only buy you through one exchange. We currently see your inventory through these exchanges - C,D,E. Which one would you like for us to buy it through?
Thanks!
SPO done by 22 year old”
So please, please, stop blaming publishers for all this. Stop demonizing resellers. And advertisers/agencies, for Pete’s sake, stop buying products from people who tell you they’re going to make things more “efficient” for you, unless the thing they’re making more efficient is your Cost Per Action, because any solution that doesn’t make your media dollars drive more conversions is more wasteful than the money given to any ad tech middle man. And as an aside, given modern browser infrastructure, this isn’t even that slow anymore! It’s like 20 additional page requests: go on CNN, there are literally hundreds there already. Header bidding is costing you nothing!
Also, don’t come to me with “greenwashing tech,” because I don’t buy it. If you want campaigns to be greener, send everyone to Google, because there’s nothing cleaner than dv360 buying from GAM all hosted on GCP. Targeting away from publishers because they have “dirty supply chains” is not only probably technologically unsound and nonsensical, but it’s inequitable – you’re literally targeting the poorest companies in the supply chain, telling them they probably have to make less money in order to be eligible for campaigns that might or might not buy their traffic anyway. It’s messed up, and I’m not here for it.
You'll appreciate this: https://chat.openai.com/share/68d0975c-51a5-4c50-b188-b24f163130b1
Realest shit ever said in ad tech