How to make sense of a new measurement tool, from a marketing-analytics perspective.
When mobile phones entered our lives more than two decades ago (and some of us were practically born with an iPhone in our hands), new marketing methods had to emerge. Suddenly, everything was measurable: users' activities, as well as their full user journey, became available to marketers.
However, mobile marketing only started to make sense when it was backed by strong marketing analytics - that is, data that is collected, aggregated, and displayed following a clear definition of what is being measured. This data is then processed into reporting and business insights that enable marketing decisions: budget allocation, fine-tuning of ads and creatives, short- and long-term planning, and essentially everything related to the marketing world.
Without a proper measurement method, and a clear understanding of how to work with one, no marketing team can grow.
The classical way of reporting for mobile apps was (and to some extent, still is) last-click attribution, the main method used by Mobile Measurement Partners (MMPs). Since this method relies heavily on user-level data, its validity and reliability deteriorated significantly following privacy regulations such as ATT and other data obfuscation methods applied to user-level data. When user-level data is unavailable, or the user has not consented to sharing it, MMPs usually infer the marketing source of an app download from contextual signals such as IP address and device type (or a combination of the two) - a practice referred to as probabilistic matching (some might say it can also be called "fingerprinting"). Although this method is by design less accurate at identifying an install source, it remains the largest fallback for user-level attribution when user consent is not given.
Due to the deterioration in the accuracy of user-level reporting, other measurement methods have emerged (or re-emerged), such as Media Mix Modeling (MMM) and incrementality. Although these can be plug-and-play methods (depending on the product), marketing-analytics domain knowledge is key to making them work well for your organization. Not all measurement vendors are transparent about how they produce a media-mix performance output (e.g. why they decide that Meta is more valuable than Google), but understanding your own data, as well as the general methodology behind your chosen measurement method, is crucial before you even start working with it - not to mention before you can trust your newly added method.
The most important questions you should ask yourself when starting to work with a new measurement methodology are below. Once you are aware of the answers, you will be able to make more informed decisions.
What are the most essential metrics (and KPIs) by which your company sets its budget-related goals?
What defines success? What is considered a failure?
The Key Performance Indicators to measure in any method are the ones by which you make your budget-related decisions.
For example, if your company bases its UA budgeting strategy on Return on Ad Spend (ROAS), then the metric you should prepare to send is revenue. If decisions are made based on Average Revenue per Paying User (ARPPU), then you should probably be sending in revenue and paying users (or First Time Depositors - FTDs).
Since we are discussing mobile data, we recommend measuring the data in cohorts (aggregated groups of users who started at a certain point in time - i.e. share the same install date), such as Day 0 (install date), Day 7 (aggregated revenue from day 0 to day 7), and any additional cohorts that make sense for your app.
We would recommend sending a blend of short- and mid-term cohorts (e.g. D7, D30, D90). Since both incrementality and MMM measure in hindsight, you will need fully baked cohorts in order to measure and assess them; so to support shorter-term decisions, sending early cohorts is highly recommended.
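The cohorting described above can be sketched in a few lines of pandas. This is a minimal illustration under assumed column names (`install_date`, `event_date`, `revenue`) and toy data, not any specific vendor's ingestion format: each revenue event is assigned a days-since-install age, then summed into D0 / D7 / D30 windows per install-date cohort.

```python
from io import StringIO

import pandas as pd

# Hypothetical user-level revenue events (toy data for illustration).
csv = StringIO("""install_date,event_date,revenue
2024-01-01,2024-01-01,10
2024-01-01,2024-01-05,25
2024-01-01,2024-01-20,40
2024-01-02,2024-01-02,5
2024-01-02,2024-01-10,15
""")
df = pd.read_csv(csv, parse_dates=["install_date", "event_date"])

# Age of each revenue event, in days since the user's install.
df["day"] = (df["event_date"] - df["install_date"]).dt.days

# Cumulative cohorted revenue: D0, D7 and D30 windows per install-date cohort.
cohorts = {
    f"revenue_d{d}": df[df["day"] <= d].groupby("install_date")["revenue"].sum()
    for d in (0, 7, 30)
}
report = pd.DataFrame(cohorts).fillna(0)
print(report)
```

Dividing each cohort column by that cohort's ad spend would give the corresponding ROAS D0 / D7 / D30.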
To estimate the value of marketing activities, we recommend integrating only marketing-related metrics. App-performance metrics such as Daily or Monthly Active Users (DAU / MAU) are not recommended, as they mostly consist of existing users who are not new to the app, and as such are less susceptible to marketing changes.
For reference, the graph below shows the total DAU for a mature app on a given day:
While the total DAU behaves a certain way, if we break it down into segments of active users by days since install on each active day, new users (installs - D0, and users between D1-D30) make up only a very small portion of the total active users, making it almost impossible to use DAU as a useful metric for measuring ongoing marketing efficiency.
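To make the point concrete, here is a small sketch with invented, illustrative numbers (not real app data): even a generous new-user segment is a tiny slice of a mature app's DAU, so marketing changes barely move the total.

```python
# Hypothetical snapshot of a mature app's active users on one day,
# bucketed by days since each user's install date (toy numbers).
dau_by_age = {
    "D0 (new installs)": 4_000,
    "D1-D30": 16_000,
    "D31-D180": 90_000,
    "D180+": 290_000,
}

total_dau = sum(dau_by_age.values())  # 400,000 total DAU

# Share of DAU that recent marketing could plausibly have driven.
new_user_share = (dau_by_age["D0 (new installs)"] + dau_by_age["D1-D30"]) / total_dau
print(f"New users (D0-D30) are only {new_user_share:.0%} of DAU")
```

Even a large swing in installs would change the total DAU line by only a few percent, which is why revenue-style, cohorted metrics are better signals for marketing measurement.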
Other KPIs that are not useful for marketing measurement are vanity metrics. Unless your product has not yet gone to market and vanity metrics are the only thing you can measure, marketing spend should always have a hard metric it is optimized towards.
What is the business logic in which crucial decisions are made? Is it different from the business logic by which day-to-day UA decisions are made?
In the mobile marketing sphere, it is quite common to have more than one business logic for measuring the success of marketing activities. A relatable example is iOS campaigns showing different results between MMP and SKAN data; a more important (and common) example is when a company has its own user-level definition, and therefore cohorts and performance that progress differently from the marketing data (usually the MMP's):
For example, a company may define a user entity as a "Person", which spans devices and platforms (i.e. if a user installs a certain game on iOS, then logs in to the same account from an Android device and continues their progression, the company will know it is the same Person and will track all of their game progress under a single entity). The MMP business logic, on the other hand, tracks devices per platform (i.e. the iOS and Android installs of the same user are counted separately, as it is device-based). In this example, each business logic yields a different install date for the user, and thus a different ROAS D7, D30, etc.
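The divergence between the two logics can be shown with a toy sketch. The event log, field names (`device_id`, `person_id`), and figures below are all hypothetical, purely to illustrate how the same human yields two device-based entities but one person-based entity with a different install date and combined revenue:

```python
from datetime import date

# Hypothetical event log: the same human plays on iOS first, then Android.
events = [
    {"device_id": "ios-abc", "platform": "iOS", "person_id": "p1",
     "install_date": date(2024, 1, 1), "revenue_d7": 30.0},
    {"device_id": "and-xyz", "platform": "Android", "person_id": "p1",
     "install_date": date(2024, 1, 15), "revenue_d7": 20.0},
]

# MMP-style (device-based) logic: each device is its own "user",
# with its own install date and its own D7 revenue.
device_view = {e["device_id"]: (e["install_date"], e["revenue_d7"]) for e in events}

# Person-based logic: a single entity whose install date is the earliest
# install across devices, and whose revenue is combined.
person_install = min(e["install_date"] for e in events)
person_revenue = sum(e["revenue_d7"] for e in events)

print(device_view)                      # two entities, two install dates
print(person_install, person_revenue)   # one entity, one (earlier) install date
```

Feeding the device-based table and the person-based table into the same measurement tool would produce different cohort curves, which is exactly why picking one logic up front matters.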
Before ingesting your data into a new measurement tool, we recommend taking a deep dive into the existing business logics used by your company, and sending the one logic you trust most when it comes to marketing decisions.
Overall, in order to work with a new measurement methodology, it is crucial not only to learn about the new method, but also to understand the underlying data you are sending, including its business logic, targets, and metric calculations. Sending the proper dataset, and knowing what to expect from a measurement method, will help you onboard smoothly, build trust in the new method, and make the right business and budget decisions.
Hadar is the Director of Business Operations at INCRMNTAL.
She is an analytics professional, specialising in marketing measurement.
With vast experience leading marketing analytics departments and strategy from her previous roles as Director of Marketing Analytics at Huuuge Games and as an analyst at Playtika, she has faced many of the changes in the mobile marketing sphere that affected (and still affect) the way mobile marketing is done and measured, such as major changes in privacy and user-level data.