Part of what we do here at Slice is connect people with brands, products and services that they’ll enjoy. We try to learn from their purchasing behavior and preferences so we can offer deals and products to customers who will value them.

Mobile gaming is one sector where purchasing data can give valuable insights. Different types of games attract different types of users, but are there common elements in successful games that we can use to offer a better experience to gamers? Are there traits in the user base that can predict game performance? To answer these questions, we analyzed two years of data from 600k users who made 10 or more purchases in the mobile games category. We then aggregated the users of every game with more than 100 users in the Slice panel and checked for common patterns, after which we released a white paper called “**Monogamers, serial monogamers and polygamers: which users make a game successful?**” In this blog post we examine some technical aspects of that data analysis.

We measured and calculated different metrics for the users in our sample and found that games that convince their users to spend a large fraction of their game budget on them are very successful, even if those users have small game budgets. We call this metric “share of game wallet”: the percentage of a user’s game budget spent on a particular game. A regression tree model with an r-squared of 0.82 showed that, among the 12 input variables tested, share of game wallet is the main predictor of spend per user in a game. Users with a high share of game wallet for a particular game disproportionately spend on that game and not on others, which means these users have either high loyalty or high stickiness, two variables that we measured. Loyalty is the number of purchases in a game divided by the user’s total number of purchases. Stickiness compares the frequency of switches between games with the frequency of switches expected by chance; it is a measure of *serial* loyalty.
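As a concrete sketch, loyalty and share of game wallet can be computed per user as follows (the purchase-record fields here are illustrative, not Slice’s actual schema):

```python
from collections import Counter

def loyalty(purchases, game):
    """Number of purchases in one game divided by the user's total purchases."""
    counts = Counter(p["game"] for p in purchases)
    return counts[game] / len(purchases)

def share_of_game_wallet(purchases, game):
    """Fraction of the user's total game spend that went to one game."""
    total = sum(p["amount"] for p in purchases)
    in_game = sum(p["amount"] for p in purchases if p["game"] == game)
    return in_game / total

# Hypothetical purchase history for one user
history = [
    {"game": "A", "amount": 4.99},
    {"game": "A", "amount": 9.99},
    {"game": "B", "amount": 0.99},
]
print(loyalty(history, "A"))               # 2 of 3 purchases
print(share_of_game_wallet(history, "A"))  # 14.98 of 15.97 dollars
```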

To calculate the stickiness of a user, we convert the list of game purchases into a transition matrix, just as in a Markov chain, because we are now interested not in the games purchased but in the *transitions* between those games. This matrix is filled with ones and zeroes, depending on whether two contiguous purchases are in the same game. Since we are considering only two states, change and no-change, the expected number of transitions follows a binomial distribution:

$$P(T = k) = \binom{n-1}{k}\,o^{k}\,(1-o)^{\,n-1-k}$$

Stickiness measures how far the actual number of transitions ($T$) falls below the mean of the binomial distribution ($\mu$), in units of its standard deviation ($\sigma$), so that users who switch less often than chance predicts get a higher stickiness:

$$\text{stickiness} = \frac{\mu - T}{\sigma}, \qquad \mu = (n-1)\,o, \qquad \sigma = \sqrt{(n-1)\,o\,(1-o)}$$

where

*n* = number of purchases

*o* = observed probability of transition

We use n-1 as the length of our transition matrix because a purchase list of size n offers only n-1 opportunities for a transition. In cases where a user has an uninterrupted streak of the same game, we add a final transition to avoid dividing by zero in the stickiness calculation.
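The steps above can be sketched in code. The post does not spell out how the chance probability of a transition, o, is estimated; as an assumption, this sketch derives it from the user’s overall game mix (the chance that two randomly drawn purchases are in different games), and it requires purchases from at least two distinct games, so the single-game streak case is excluded here.

```python
import math
from collections import Counter

def stickiness(purchases):
    """How many standard deviations fewer game-to-game switches occurred
    than a binomial chance model predicts; higher means stickier.
    Assumes purchases from at least two distinct games."""
    n = len(purchases)
    counts = Counter(purchases)
    if n < 2 or len(counts) < 2:
        raise ValueError("need purchases from at least two distinct games")
    # Observed transitions: 1 whenever two contiguous purchases differ
    t = sum(a != b for a, b in zip(purchases, purchases[1:]))
    # Assumed chance probability of a switch, from the user's game mix
    o = 1.0 - sum((c / n) ** 2 for c in counts.values())
    mu = (n - 1) * o                          # expected number of switches
    sigma = math.sqrt((n - 1) * o * (1 - o))  # binomial standard deviation
    return (mu - t) / sigma

# Three illustrative users (letters are games)
print(round(stickiness(list("ABABABAB")), 2))  # constant switching: -2.65
print(round(stickiness(list("AABBAABB")), 2))  # close to chance: 0.38
print(round(stickiness(list("AAAABBBB")), 2))  # a single switch: 1.89
```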

The figure below shows simplified examples of purchase stickiness for users who play two games, along with their corresponding transition matrices. User A has low stickiness, constantly shifting between games. User B’s purchasing pattern appears random, with no perceivable pattern. User C has high stickiness: only one game transition, sticking with one game until deciding to switch to the other. Although stickiness and loyalty are related, they are not the same, as stickiness measures serial loyalty. Loyalty is the same for users A and C, and very similar for user B, but their stickiness is very different.

We found that the combination of loyalty and stickiness is a fairly good predictor of high spend per user: a regression tree using only these two variables to predict per-user spend has an r-squared of 0.75, and plotting these metrics at the game level separates games that are successful in terms of spend per user from games that are not (Figure 2).

Before arriving at loyalty and stickiness, we measured many other variables, like game diversity and day diversity. Game diversity quantifies not only the number of games purchased, but also the probability of finding two purchases of the same game. Similarly, day diversity quantifies not only how many different days of the week the purchases were made on, but also the probability of finding two purchases made on the same day of the week. We used a measure of information entropy to quantify these two variables:

$$H = -\sum_{i=1}^{S} p(i)\,\ln p(i)$$

For day diversity, S is the number of days in the week (7); for game diversity, S is the number of different games purchased.

p(i) is the probability of making a purchase on a particular day (for day diversity) or of purchasing a particular game (for game diversity).

The previous equation is a measure of information entropy known as the Shannon-Wiener diversity index; it is widely used in ecology and other fields, as well as in several projects here at Slice Intelligence. While these entropy measures capture the diversity of a user’s purchase patterns, they say nothing about the order of those patterns. That is why we arrived at stickiness, a new metric that depends on the order of the purchases.
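Both diversity measures reduce to the same entropy computation; a minimal sketch:

```python
import math
from collections import Counter

def shannon_diversity(items):
    """Shannon-Wiener diversity index of a list of categorical items."""
    n = len(items)
    return -sum((c / n) * math.log(c / n) for c in Counter(items).values())

games = ["A", "A", "B", "C"]           # game of each purchase
days = ["Mon", "Mon", "Sat", "Sat"]    # weekday of each purchase

print(round(shannon_diversity(games), 3))  # 1.04
print(round(shannon_diversity(days), 3))   # 0.693 (= ln 2)
```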

We were able to classify mobile game users in terms of their loyalty and stickiness and found three types of users:

1) Users with high loyalty (monogamers)

2) Users with low loyalty and high stickiness (serial monogamers)

3) Users with low loyalty and low stickiness (polygamers)

The first group is unlikely to change the game they are currently playing.

The second group will change their mind from time to time and play a new micro-transaction game for a while.

The third group constantly shifts between games and seems to prefer single-purchase games.
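These three groups can be expressed as a simple rule-based classifier; the cutoff values below are illustrative assumptions, not thresholds published in the white paper:

```python
def classify_user(loyalty, stickiness,
                  loyalty_cutoff=0.8, stickiness_cutoff=1.0):
    """Bucket a user by loyalty (share of purchases in their top game)
    and stickiness (a z-score); cutoffs are illustrative assumptions."""
    if loyalty >= loyalty_cutoff:
        return "monogamer"          # high loyalty
    if stickiness >= stickiness_cutoff:
        return "serial monogamer"   # low loyalty, high stickiness
    return "polygamer"              # low loyalty, low stickiness

print(classify_user(0.95, 0.2))  # monogamer
print(classify_user(0.40, 1.9))  # serial monogamer
print(classify_user(0.30, 0.1))  # polygamer
```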

Different types of users have different requirements and offer different possibilities for game developers. For example, the first two sets of users are more open to recommendations for new games than the third group, but their usage patterns for the new game will vary.

In this analysis we used Slice purchasing data in a novel way to study user behavior at a general level, not only for a specific game as we have done in the past. Excited about the idea of playing with the most detailed and diverse purchasing data in the world? We’re hiring!