Welcome back to a GHAT mini post. Apologies for the infrequency – I am working hard on my new company, which I’m shamelessly going to plug in this article. That said, in the course of scaling up my business, I have had my nose shoved into something that I knew was an issue – I just didn’t realize its scope or depth.
The issue, and one that is quite possibly seriously compromising the performance of independent Ad tech, is Throttling.
What is Throttling?
In the world of ad exchanges and real-time bidding, opportunities are communicated via packets of information called “bid requests.” A bid request is a bundle of metadata about an advertising opportunity – IP address, user IDs, page URL, etc. – that bidding environments use to evaluate the expected value of that opportunity for an advertiser.
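For the curious, here’s roughly what one looks like – a heavily simplified, OpenRTB-style request sketched as a Python dict. The specific values are made up, and real requests carry far more fields than this:

```python
# A heavily simplified, OpenRTB-2.x-style bid request, sketched as a
# Python dict. All values are illustrative; real requests carry far
# more metadata -- this is just the shape.
bid_request = {
    "id": "a1b2c3d4",                        # unique auction/request ID
    "imp": [{                                # the impression(s) for sale
        "id": "1",
        "tagid": "homepage_top_banner",      # seller's placement identifier
        "banner": {"w": 300, "h": 250},
    }],
    "site": {
        "domain": "example.com",
        "page": "https://example.com/articles/123",
    },
    "device": {
        "ip": "203.0.113.7",
        "ua": "Mozilla/5.0 (...)",
        "geo": {"country": "USA"},           # ISO-3166 alpha-3 in OpenRTB
    },
    "user": {"id": "exchange-user-id-abc"},  # exchange's user identifier
}
```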
From the ad exchange perspective, and getting a little technical: depending on your infrastructure (but especially if you are working with off-the-shelf cloud providers like Amazon), one of your cost centers can be something called “egress.” Egress refers to the cost of transmitting information to other systems – in this scenario, the cost of sending bid requests to bidding environments. This means that just by virtue of working with a bidding environment, an exchange incurs additional costs: the cost of sending requests out to that environment.
Simultaneously, from the buy side perspective, there is a cost to evaluating each advertising opportunity against your expected-value model. This is mostly “compute,” or processing cost, though there are other costs as well, and it generally adds up to even more money than egress does. The upshot is that open market bidding systems carry infrastructure costs that accrue regardless of whether any money is actually spent.
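To put hedged numbers on it – every figure below is an illustrative assumption, not anyone’s real rate card – the back-of-envelope math looks like this:

```python
# Back-of-envelope cost of one bidder integration. Every number here is
# an illustrative assumption, not a quote from any real provider.
qps = 500_000                      # bid requests per second to one bidder
avg_request_bytes = 2_000          # ~2 KB per serialized bid request
egress_per_gb = 0.05               # $/GB egress (cloud list prices vary)
compute_per_million = 0.15         # $ of CPU per million requests evaluated

seconds_per_month = 60 * 60 * 24 * 30
requests_per_month = qps * seconds_per_month

egress_cost = requests_per_month * avg_request_bytes / 1e9 * egress_per_gb
compute_cost = requests_per_month / 1e6 * compute_per_million

print(f"requests/month:  {requests_per_month:.3e}")
print(f"exchange egress: ${egress_cost:,.0f}/month")
print(f"bidder compute:  ${compute_cost:,.0f}/month")
# Both costs accrue whether or not a single impression is ever bought.
```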
In general, companies don’t like costs, and want to avoid unnecessary ones, or even “non-optimal” ones – which brings us to the birth of throttling.
Throttling refers to the act of limiting the number of bid requests sent to a given bidder – originally, to optimize costs. Because of egress, and because of compute, it is in the interest of both the sell side and the buy side not to incur costs for bid requests that will never result in an impression transaction. So, the basic initial implementations were “I, the world’s greatest DSP, have no advertisers in Brazil. Do not send me Brazilian traffic, Mr. Ad Exchange. Or do, and I’ll just stop working with you, because you’re a commodity.”
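In code, that first generation of throttling is nothing more sophisticated than a static filter. A minimal sketch, with entirely hypothetical rules:

```python
# First-generation throttling: a static, hand-maintained filter on the
# exchange side. The specific rules here are hypothetical examples.
BLOCKED_COUNTRIES = {"BRA"}               # "I have no advertisers in Brazil"
SUPPORTED_FORMATS = {"banner", "video"}   # bidder doesn't buy audio/native

def should_send(bid_request: dict) -> bool:
    """Return True if this request is worth the egress to send at all."""
    geo = bid_request.get("device", {}).get("geo", {})
    if geo.get("country") in BLOCKED_COUNTRIES:
        return False
    offered = {fmt for imp in bid_request.get("imp", [])
               for fmt in ("banner", "video", "audio", "native") if fmt in imp}
    return bool(offered & SUPPORTED_FORMATS)
```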
Throttling Grows Up
But then the nerds got involved. I’m not sure if this started on the sell side or the buy side, but I’m assuming it was the buy side, and some machine learning person went “hey, I bet I can design an algorithm that dynamically listens to more of what we want and less of what we don’t want. It will improve outcomes for our advertisers, because I’m a data scientist and I know what’s right, and also save us money. Give me a promotion, please.” And then, in the infinite wisdom of DSP leadership, they went “Fuck yeah, let’s give this person more money and power.” And thus dynamic DSP throttling was born, where DSPs had internal algorithms to choose which advertising opportunities they evaluated for bidding and which ones they just ignored completely.
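Mechanically, buy-side dynamic throttling amounts to putting a cheap pre-filter model in front of the expensive bidding logic and dropping everything it scores below a threshold. A sketch – the scoring function, weights, and threshold are made-up placeholders, not any real DSP’s model:

```python
# Buy-side dynamic throttling, sketched: a cheap model scores every
# incoming request, and only requests above a threshold reach the
# expensive bidding pipeline. All weights/values are placeholders.
GOOD_DOMAINS = {"example.com", "news.example"}   # hypothetical learned list
THRESHOLD = 0.5                                  # tuned to a compute budget

def cheap_value_score(req: dict) -> float:
    """Crude stand-in for a learned model of P(worthwhile bid)."""
    score = 0.1
    if req.get("user", {}).get("id"):                      # identifiable user
        score += 0.4
    if req.get("site", {}).get("domain") in GOOD_DOMAINS:  # good supply history
        score += 0.3
    return score

def evaluate_and_maybe_bid(req: dict) -> None:
    """Placeholder for the expensive full pipeline: models, pacing, bidding."""

def handle(req: dict) -> None:
    if cheap_value_score(req) < THRESHOLD:
        return                       # dropped: the bidder never even looks
    evaluate_and_maybe_bid(req)
```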
SSPs all of a sudden saw bid rates (the rate at which DSPs bid on their traffic) wiggle all over the place. Some poor account manager probably reached out to the DSP and went “what is happening?!?!” to which the DSP replied “lol, get better traffic bro,” and then the SSP leadership team got fired by their board for not knowing what was going on. But then, genius idea, someone at the SSP realized the inverse of the DSP realization: “It would seem the DSP bids more often according to some kind of horrible arbitrary pattern. What if we send more stuff according to that arbitrary pattern?” Honestly, it was probably the same machine learning person who built the dynamic throttling algorithm at the DSP. And they said to their leadership “Hey, if we dynamically change which bid requests we send to the DSP, we can manipulate their bid rates and get them to spend more money with us. I know, because I built the DSP algorithm, and I know how to trick it. Wait, did I say that out loud? Nevermind. Oh, and by the way, egress costs will go down too!” The new leadership team, put in place by the board, saw revenue going up double-digit percentage points with their magic new algorithm and made this person a C-level.
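Mechanically, the SSP-side counter-move is per-segment traffic shaping driven by whatever bid rates the DSP exposes. A sketch – purely illustrative; real systems segment on far more features and run real bandit logic:

```python
import random
from collections import defaultdict

# Sell-side adaptive throttling, sketched: track each segment's observed
# bid rate and preferentially send the traffic the DSP bids on. The
# segmentation and the exploit formula are illustrative, not real.
class AdaptiveAllocator:
    def __init__(self) -> None:
        self.sent = defaultdict(int)   # requests sent, per segment
        self.bids = defaultdict(int)   # bids received back, per segment

    def segment(self, req: dict) -> str:
        geo = req.get("device", {}).get("geo", {}).get("country", "?")
        domain = req.get("site", {}).get("domain", "?")
        return f"{geo}/{domain}"

    def send_probability(self, req: dict) -> float:
        seg = self.segment(req)
        if self.sent[seg] < 1_000:                 # explore: too little data
            return 1.0
        bid_rate = self.bids[seg] / self.sent[seg]
        return min(1.0, 0.05 + 10 * bid_rate)      # exploit what gets bids

    def maybe_send(self, req: dict) -> bool:
        if random.random() > self.send_probability(req):
            return False                           # throttled
        self.sent[self.segment(req)] += 1
        return True

    def record_bid(self, req: dict) -> None:
        self.bids[self.segment(req)] += 1          # call on each bid back
```

The feedback loop closes by calling record_bid whenever a bid comes back – which is exactly how the two sides’ algorithms end up optimizing against each other instead of against anything real.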
And this brings us to where we are today. Throttling is a mess. It is a game of machine learning algorithms trying to learn and optimize against other machine learning algorithms, based on signals whose statistically significant, optimizable value was dubious in the first place. This has resulted in two even worse things:
1. This dynamic throttling war is partially responsible for bid jamming. The more systems throttle, the more money you make – at least initially – by slapping as many bid requests into their face as possible. This is not publishers’ fault. This shit worked because of these dumb throttling algorithms.
2. Publishers are now implementing their own machine learning algorithms to manipulate the ad requests that go from their prebid auctions to exchanges – because exchanges now throttle inbound traffic too, it’s just great. The cost incentives are different here, since sending requests from the client doesn’t cost publishers money, but exchanges certainly throttle what they listen to from publishers, and building algorithms to game those algorithms can indisputably make publishers more money.
We are now in the endgame of this vicious cycle of throttling spend down and then building algorithms to game and throttle spend back up. I put it at about the 4th circle of hell – we aren’t quite with Satan in the ice dungeon in the middle of the earth yet, but we’re basically swimming in feces up to our necks. Or maybe that was the 8th circle, I don’t remember.
And this bullshit makes it really difficult when you want to actually fix the ecosystem. I know this because my company is fixing inventory categorization – I think ad placements, sites, and everything having to do with inventory classification in bid requests could be much, much better. And we built a better mousetrap: Gamera classifies inventory more effectively than a GAM ad unit ever could, models performance, and creates stable identifiers that increase the performance of ad campaigns – helping publishers put their best foot forward to buyers, and helping buyers reduce CPC, CPV, and CPA by 35%+ (I’ll stop selling now, I promise). But because programmatic is busted, I can’t transmit my data directly through the pipes, so I’ve given up on that for now.
But what we do do is curate. We build PMPs (and help others build PMPs in combination with our inventory signals) using our data. And because of those stupid throttling algorithms, all optimizing to awful signals, it’s a magic guessing game: what percentage of the signals we send in via the client actually survive into outgoing bid requests, and are therefore heard by DSPs?
In some circumstances, we see a 99.9% dropoff rate from signals sent in via the client to signals heard by the DSP. 99.9 fucking percent. Nobody at the DSP or the exchange can explain why – because it isn’t humans doing this, it’s machine learning algorithms optimizing to machine learning algorithms optimizing to other machine learning algorithms. I swear, the amount of money paid to these throttling analysis people probably could’ve cured some horrible disease by now.
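The compounding math is what makes this possible without any single layer looking insane. With hypothetical passthrough rates at each throttling layer – these numbers are illustrative, not measurements – end-to-end survival collapses fast:

```python
# Hypothetical passthrough rates at each throttling layer. Illustrative
# numbers only -- each layer looks individually defensible.
passthrough = {
    "publisher/prebid -> exchange": 0.30,   # client-side request shaping
    "exchange inbound filter":      0.20,   # exchange throttles what it hears
    "exchange -> DSP egress":       0.10,   # exchange throttles what it sends
    "DSP inbound throttle":         0.17,   # DSP drops before evaluating
}

survival = 1.0
for layer, rate in passthrough.items():
    survival *= rate
    print(f"{layer:30s} keeps {rate:>4.0%} -> {survival:.3%} survive")

print(f"end-to-end dropoff: {1 - survival:.1%}")   # ~99.9% with these rates
```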
And this leads me to my conclusion for this article: these throttling algorithms – because they’re making decisions based on imperfect bid request data (read: garbage inventory identifiers and user IDs) and on the outputs of other machine learning algorithms – are indisputably making the industry more inefficient. It means there are advertising opportunities that buyers are missing out on, which means buyers get shittier results and publishers make less money. This is a direct result of how the infrastructure is designed, and of the vicious cycle of bid jamming and throttling on throttling.
So how do we fix it?
I think the fix is rather simple – in 2025, there’s no reason for a DSP to be listening to more than 2 exchanges, and DSPs should compete on who throttles the least (and therefore who can perform the best). I remember a call with Jeff Green – well, my only call ever with Jeff – where I pitched him on the direct, fee-less exchange I’ve written about before. His response was “we’re TTD; the fact that we can listen to everything means that we can find opportunities our competition can’t. I don’t need one exchange.” His position was an anti-throttling one at its core – basically, throttling makes our adversaries weaker. I couldn’t argue with him, because he wasn’t wrong. And his statement rings true to this day, probably 8 years later.
There will always be listening and bidding optimization – a tradeoff between cost efficiency and performance – but Ad Tech needs performance to win if it’s going to survive.
Thanks for reading! And try out my data – it works great – or, if you’re a publisher, make sure you’re using us for your direct sales (amongst all of our epic publisher products). Or at least make sure we’re implemented, so that when we fix the internet for advertisers and DSPs, you’re included in the party.