An audit, assurance, and consulting firm, Deloitte provides financial and tax advisory services, along with related offerings, to both public and private firms globally. Serving numerous industries internationally, Deloitte was searching for a new solution to sharpen its targeting precision. To acquire new customers for a wealth management client, Deloitte needed a partner who could tap into prime conversion opportunities by activating first- and third-party data for the client's digital advertising campaign.
Using our location-based audience graph, we began by setting the parameters for our geospatial boundaries. To find qualified prospects, we built a model to score all Canadian postal codes outside Quebec, identifying approximately 1 million high-tier potential customers. Devices resident in the top-scoring postal codes were then served highly targeted messaging from Deloitte's client across Google and Facebook. Working directly with the client, we ensured their custom parameters were executed both ethically and securely on our audience graph. For the duration of the campaign, our automated channels gave Deloitte access to high-quality data, refreshed weekly, which acted as a catalyst in achieving impressive results for the client's campaign.
RainBarrel provided a solution that suited their needs at a much lower Cost Per Thousand impressions (CPM), a lower Cost Per Acquisition (CPA), and a lower Assisted CPA. Despite a smaller share of the spend, RainBarrel outperformed the parallel campaign run by the wealth management company's media agency. At the same time, we achieved an acquisition rate three times higher than their previous campaign, driving a 65% increase in what the client defines as high-value customers.
Deloitte's client was ecstatic about the results, which exceeded their goals. The campaign was a success and delivered far more lucrative results than they had previously seen. The client has opted to continue working with RainBarrel in the upcoming year, and is delighted to do so with a more robust campaign budget.
Nolan McMahon
ML and Applied Mathematics Specialist
RainBarrel Inc.
We describe the methodology behind the process that RainBarrel uses to add relevant devices to preexisting audiences. Other audience providers often offer options to increase audience sizes, but their processes are largely left a mystery, which reduces the trustworthiness of these features in the eyes of audience buyers. By revealing our methods, we seek to build marketplace trust in our expansion product feature.
Audience Size Problem
In the world of marketing, a great deal of value is placed on effective advertisement placement. The focus is on putting the right ad for the right product in front of the right person at the right time. As this process becomes more efficient, advertisers see their revenues soar, since they waste less of their fixed budget on ineffective marketing. To select the right person to whom to show an ad, advertisers categorize the devices that people use into so-called audiences, based on information that is intrinsic to all members of a given audience.
That's what RainBarrel specializes in: audience creation. The information we use in the categorization process varies: everything from demographic data, to online browsing habits, to real-world activities, to select first-party data. However, in many instances the audience that results from this process is simply too small. This size problem has two major consequences.
First, an audience with few members does not have the same reach as one with more members. A lack of reach is undesirable to clients unless we can guarantee that the audience contains highly relevant members.
Second, and more significantly, small audiences do less to preserve the anonymity of their members than large audiences. RainBarrel takes privacy very seriously. All the data that we use to create audiences must first have all Personally Identifiable Information (PII) removed before it is handled, including any and all cookie-related information. (Since the cookie is going the way of the dinosaur anyway, this has had no impact on RainBarrel audiences.) As a matter of principle, RainBarrel will not license any audience that contains fewer than 1,000 members.
Barring any recourse, audiences that are deemed to be too small must simply be discarded. This is unfortunate because it compels those interested in creating audiences to decrease the specificity of their audience creation criteria or forego specialized RainBarrel audiences altogether. It was out of this dilemma that RainBarrel developed the idea for Amplify: Expanded Reach.
The Solution
Instead of discarding audiences that are too small, what if we simply make them bigger? Not by changing the creation criteria, but by allowing a computer to add devices that are "similar" to those which already make up the audience. In that way, the underlying structure and cohesiveness of the audience doesn’t change very much, but the size can be increased to a more acceptable level.
It is important to emphasize that this process needs to be completely automated. Requiring human oversight would only frustrate users and deliver sub-par outcomes. Hence the high-level design idea: present the end user with a binary option to expand the reach of their audience.
By taking a holistic look at an existing audience, the computer can determine which device attributes were important when constructing the audience. It can then consider devices that are not currently affiliated with the audience and score them based on how well they adhere to those defining attributes. If a given device is similar enough to the existing audience group, the computer adds it to the audience.
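To make the scoring idea above concrete, here is a minimal sketch. It is not RainBarrel's production code: the attribute vectors, the cosine similarity measure, the centroid profile, and the 0.8 threshold are all illustrative assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two attribute vectors (1.0 = identical direction).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand(audience, candidates, threshold=0.8):
    # audience/candidates: {device_id: attribute vector}.
    # Average the audience's vectors into a single defining profile.
    dims = len(next(iter(audience.values())))
    centroid = [sum(vec[i] for vec in audience.values()) / len(audience)
                for i in range(dims)]
    # Admit any candidate whose similarity to that profile clears the threshold.
    return {dev for dev, vec in candidates.items()
            if cosine(vec, centroid) >= threshold}
```

A device whose attributes closely match the audience's average profile is admitted; everything else is left out.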
The great news is that nothing about this procedure restricts it to small audiences! All audiences are compatible with expansion, which allows us to offer it not only as a service to revive small audiences, but as a way to increase the reach of any audience.
Amplify: Expanded Reach is a two-step process:
1. Create a New Audience with an Audience Criteria
Your criteria are the set of rules that we use to choose which devices should be included in your core audience and which should be excluded. For example, your criteria might be "all devices which have visited a coffee shop at least ten times in the last six months in the city of Montreal".
2. Select Expansion and Wait for Your Audience to be Expanded!
This step is completely automatic, but in case you are curious, here’s how expansion takes place under the hood!
Algorithmically speaking, the first step is to calculate how similar each device in our dataset is to every other device. These similarity values are updated on a monthly basis. Calculating similarity is difficult because there is no predefined mathematical formula that describes how similar two devices are to one another, so we went out and consulted our operational staff to help us pick a quantitative technique for measuring similarity.
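RainBarrel's actual similarity measure isn't spelled out here, so as a stand-in illustration, here is one simple quantitative choice: Jaccard similarity over each device's set of behavioural tokens (e.g., visited places). The function names and the token sets are assumptions for the sketch.

```python
def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|: overlap between two devices' behavioural token sets.
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def similarity_matrix(profiles):
    # profiles: {device_id: set of behavioural tokens}.
    # Returns one score per unordered pair of devices; quite literally a number
    # for every pair, as described above.
    ids = sorted(profiles)
    return {(u, v): jaccard(profiles[u], profiles[v])
            for i, u in enumerate(ids) for v in ids[i:]}
```

In practice such a matrix would be recomputed on each monthly refresh; on n devices it holds on the order of n² entries, which is why the update cadence matters.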
Once a similarity score, which is quite literally a number, is assigned to each pair of devices, the next task is to decide which devices to add into a given audience. In the end, we picked an algorithm that effectively allows your audience to "compete" for new devices.
The competition involves selecting two distinct groups: one that will be merged into your audience and one that won't. Each device has a natural affinity, based on its similarity scores, toward joining one group or the other. As the composition of each group changes, the affinity that the remaining devices have for each group changes too. Gradually, the desire of devices to flip-flop between groups diminishes, and a stable state is reached in which every device is content with its group. At this point, the group of devices marked for expansion is added to the core audience.
Here's another way to think about it. Imagine that a group of middle school kids needs to divide itself into two sports teams. As I'm sure you know, kids that age are particularly prone to cliquish behaviour, and when they are forced to divide the group, their loyalties are tested. Inevitably, clique members are separated from one another during team formation, and disassociated members are forced to make difficult decisions about which of the two teams they would prefer to join.
Throughout the entire team selection process, each individual feels torn. They have friends scattered across both teams, but they must decide which team is most attractive for them to join. Once they have picked a team, the social dynamics change for each of the other individuals, which may cause them to reconsider their team choice. Eventually, each individual is happy with their team and there are no undecided leftovers. An analogous process takes place when devices are added to the expanded audience.
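The competition described above can be sketched as an iterated "best response" loop: each outside device repeatedly compares its total similarity pull toward the expansion side (which includes the fixed core audience) against its pull toward the stay-out side, and switches sides until nobody wants to move. This is an illustrative sketch, not RainBarrel's production solver; the function names and the tie-breaking rule are assumptions.

```python
def partition_by_affinity(core, outsiders, sim, max_rounds=100):
    # core: device ids already in the audience (fixed members).
    # outsiders: device ids competing to join.
    # sim(a, b): symmetric similarity score between two devices.
    expand_group, stay_group = set(), set(outsiders)

    def affinity(d, group):
        # Total similarity pulling device d toward a group's members.
        return sum(sim(d, other) for other in group if other != d)

    for _ in range(max_rounds):
        changed = False
        for d in sorted(outsiders):
            # The expansion side also feels the pull of the fixed core audience.
            pull_in = affinity(d, expand_group | core)
            pull_out = affinity(d, stay_group)
            if pull_in > pull_out and d not in expand_group:
                stay_group.discard(d)
                expand_group.add(d)
                changed = True
            elif pull_out >= pull_in and d not in stay_group:
                expand_group.discard(d)
                stay_group.add(d)
                changed = True
        if not changed:  # stable state: nobody wants to switch sides
            break
    return expand_group
```

When the loop settles, `expand_group` is the set of devices "marked for expansion" that gets added into the core audience.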
In a mathematical sense, this problem is one of optimization[1]. Consider a set[2] D of n elements and a proper subset[3] A ⊂ D such that |A| ≪ n. The problem at hand is threefold:
1. Derive some quantitative measure of comparison between elements d ∈ D based on the properties of each element d = {x_1, x_2, ..., x_m}.
2. Pick an objective function[4] which can be used to measure the optimality of the 2-part partition[5] of D \ A into B and D \ (A ∪ B).
3. Use said quantitative measure to calculate some optimal subset B ⊂ (D \ A) to be unioned[6] with A such that the quantitative measure is optimally selected for both the sets A ∪ B and D \ (A ∪ B), given the objective function.
1. The membership of each element d ∈ (D \ A) in B or D \ (A ∪ B) is given by a binary value of ±1, analogous to how each magnetic spin[23] is assigned in the Ising lattice.
2. Similarity values between distinct elements are treated as equivalent to the coupling strengths J_ij between neighbouring spins, and the similarity values between identical elements are treated as equivalent to the coupling strengths h_i between individual spins and the external magnetic field[24].
3. There is no cost associated with changing the membership of an element d between the two partitions, so the magnetic moment[25] is μ = 1.
4. Instead of approximating that only adjacent spins couple with one another, as is the case in finite-dimensional Ising models, we assume that all elements d couple with all other elements. This makes the number of coupling pairs of spins of order n².
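Under the mapping above, the quantity to minimize is the fully connected Ising energy H(s) = -Σ_{i&lt;j} J_ij s_i s_j - Σ_i h_i s_i, where each spin s_i = ±1 decides whether element i joins the expansion set. Simulated annealing[19, 20, 21] is one classic way to search for low-energy spin configurations; the sketch below is an illustrative minimal annealer, not RainBarrel's production solver, and its parameter values (cooling schedule, step count) are assumptions.

```python
import math
import random

def ising_energy(spins, J, h):
    # H(s) = -sum_{i<j} J[i][j] * s_i * s_j  -  sum_i h[i] * s_i
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

def anneal(J, h, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    # Single-spin-flip simulated annealing with a geometric cooling schedule.
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, J, h)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cool geometrically
        i = rng.randrange(n)
        spins[i] = -spins[i]  # propose flipping one spin
        new_energy = ising_energy(spins, J, h)
        # Metropolis rule: always accept downhill moves,
        # accept uphill moves with probability exp(-dE / T).
        if new_energy <= energy or rng.random() < math.exp(-(new_energy - energy) / t):
            energy = new_energy
        else:
            spins[i] = -spins[i]  # reject: undo the flip
    return spins, energy
```

The +1 spins in the returned configuration correspond to the elements of B, the group that gets unioned into the core audience A.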
1. Stephen J. Wright, optimization, Encyclopedia Britannica (Feb 2023). https://www.britannica.com/science/optimization.
2. Steven R. Lay, Analysis: With an Introduction to Proof, Pearson, Upper Saddle River (2014).
3. Aurelien Geron, Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow, O’Reilly, Sebastopol (2019).
4. Susanna S. Epp, Discrete Mathematics with Applications, Richard Stratton, Boston (2011).
5. Elizabeth Million, The Hadamard Product (Apr 2007). http://buzzard.pugetsound.edu/courses/2007spring/projects/million-paper.pdf.
6. Robert A. Adams and Christopher Essex, Calculus: A Complete Course, Pearson, Toronto (2013).
7. Keith Nicholson, Linear Algebra with Applications, McGraw-Hill Ryerson (2013).
8. Walter Greiner, Ludwig Neise, and Horst Stocker, Thermodynamics and Statistical Mechanics, Springer, New York (1995).
9. Ramamurti Shankar, Principles of Quantum Mechanics, Springer, New York (1994).
10. Grant R. Fowles and George L. Cassiday, Analytical Mechanics, Brooks/Cole, Boston (2005).
11. Herbert Goldstein, Charles Poole, and John Safko, Classical Mechanics, Addison Wesley (2002).
12. David L. Applegate, William J. Cook, Sanjeeb Dash, and David S. Johnson, A Practical Guide to Discrete Optimization (Aug 2014). https://www.math.uwaterloo.ca/~bico/papers/comp_chapter1.pdf.
13. Andrew Lucas, “Ising formulations of many NP problems,” Frontiers in Physics 2 (2014). https://www.frontiersin.org/articles/10.3389/fphy.2014.00005.
14. Naeimeh Mohseni, Peter L. McMahon, and Tim Byrnes, “Ising machines as hardware solvers of combinatorial optimization problems,” Nature Reviews Physics 4, pp. 363-379 (Jun 2022). https://doi.org/10.1038/s42254-022-00440-8.
15. David J. Griffiths, Introduction to Quantum Mechanics, Cambridge University Press (2017).
16. David J. Griffiths, Introduction to Electrodynamics, Pearson (2013).
17. Po-Wei Wang and J. Zico Kolter, Low-rank semidefinite programming for the MAX2SAT problem (Dec 2018). https://arxiv.org/pdf/1812.06362.
18. Fred Glover, Gary Kochenberger, and Yu Du, A Tutorial on Formulating and Using QUBO Models (2019). https://arxiv.org/pdf/1811.11538.
19. S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, “Optimization by Simulated Annealing,” Science 220(4598) (May 1983). https://www.science.org/doi/10.1126/science.220.4598.671.
20. Andreas Nolte and Rainer Schrader, “A Note on the Finite Time Behaviour of Simulated Annealing” in Operations Research Proceedings 1996, pp. 175-180, Springer Berlin Heidelberg, Berlin, Heidelberg (1997). https://doi.org/10.1007/978-3-642-60744-8_32.
21. Nicholas Herring, Open source Fortran Simulated Annealing Module (2023). https://github.com/nfherrin/OpenFSAM.