Reprint List Vote Analysis

Today’s article is mostly the work of local player and data cruncher Erik, who couldn’t resist doing an analysis of the reprint list voting, though I’ve chimed in for a few bits and pieces. Apologies for the inconsistency in graph formatting: Erik and I use different graphing tools, and it was easier to just leave them as is. And also apologies that it’s ended up being a bit of a long read.

For a few weeks now, we’ve been enjoying the renewed use of some old FFG cards through Echoes of Destiny, the ARH reprint list, for which the community got to vote on their favourite cards. At the time of voting, the exact mechanism of this list was not 100% clear, but ARH has now also published how they plan to do rotation and how the reprint list fits into that.

Of course, there is no reason to actually reprint any of the cards on the list: they are merely being added to the legal card pool for the ARH standard format, in exactly the way that all cards from the Convergence block are currently legal for that format. Earlier, ARH had announced that they wanted original FFG cards to be involved in Destiny in some way or another, and I personally think that a reprint list can be a very good and fun way to do so. (Side note: I’m sure it will also make former players with dust-gathering collections happy. I have already bought some more singles from old sets and I bet that I’m not the only one.)

As always happens when anything new is announced, a discussion among players is started about what is great and what less so. Such discussions are rarely devoid of speculation, and this time is no exception. What is different this time though, is that we now have raw data to explore. As is fitting for echobase, and as already announced in our recent overview of the reprint list, that’s exactly what we are going to do today.

See our overview article on the 101 cards in the reprint list for background.

We want to preface this analysis by saying that we don’t mean to attack either ARH or any individual player. We aren’t responding to any particular arguments, merely to some sentiments we picked up in various discussions. We will point out some weaknesses of the voting process (not necessarily ones that we have heard or seen raised), but that does not mean the process itself was flawed. We think it’s awesome that, as a community, we have been given such a direct chance to influence the near future of Destiny.

There are four questions that we would like to address in this article:

  1. Will some people feel very disappointed with the results while others are overjoyed?
  2. Why is there so much mitigation in the results list?
  3. What can we say about the claim that there is a contingent of players in our community that just want the broken stuff (and their voice is strongest)?
  4. What can we say about the popularity of cards within and across different FFG sets?

Some basic statistics

We were asked to vote for:

  • 7 battlefields,
  • 15 characters,
  • 40 events,
  • 3 plots,
  • 15 supports,
  • 20 upgrades.

In most categories, it wasn’t just the top-voted cards that made the result list; a minimal amount of sanity checking was applied. When we say ‘result list’, we mean this final result, not the raw list of highest-voted cards.

A 16th character was also added, bringing the total number of cards on the result list up to 101.

Some people did not use all of their votes, which has a small influence on this analysis. The mean number of votes per voter was 88.8, and 137 out of 156 voters had at least 70 cards in total on their list. We’ve made an effort to correct for this where possible, or where we think it would otherwise skew the results.

Should the outcome make people happy?

Let’s look at how many cards from the result list were on each individual’s list of votes, for all card types combined, measured as a proportion of the total result list. We’ll take this figure as a rough measure of how satisfied each voter will be with the results: if all of the resulting cards were on their ballot, they must be quite happy; if none are, we’d expect some disappointment.

We should realize however that this is only a rough indication: speaking for myself for example, only 20% of the result list received a vote from me, while 50% of my characters made the list. Since characters are such an important part of your decklist, I still consider myself reasonably happy with the result.
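As a minimal sketch of this satisfaction measure (with invented ballots and a toy result list standing in for the real voting data):

```python
# Toy satisfaction measure: what fraction of the result list did each
# voter's ballot contain? All card names and voters below are invented
# stand-ins for the real data.
result_list = {"Pacify", "Beguile", "Hidden Motive", "Easy Pickings"}

ballots = {
    "voter_a": {"Pacify", "Beguile", "Force Speed"},
    "voter_b": {"Hidden Motive"},
}

def satisfaction(ballot, result):
    """Proportion of the result list that appeared on this voter's ballot."""
    return len(ballot & result) / len(result)

for voter, ballot in ballots.items():
    print(voter, satisfaction(ballot, result_list))
```

Here voter_a voted for half of the (toy) result list and voter_b for a quarter; the histogram below is just this number computed for all 156 real voters.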

To see at a glance how this statistic turns out across all voters, a histogram will do well.

We see that no voter had even half of the results on their ballot, and most people voted for around a third of them. A handful of people look like they will be quite disappointed, and this pattern remains after correcting for shorter vote lists.

We can make similar plots per card type, but they didn’t turn out to be very interesting. The same general patterns show up, though we do notice a slight variation for events and battlefields: the peak lies somewhat to the left for battlefields and somewhat to the right for events.

Voting for 7 battlefields out of a possible 46 covers 15% of that card pool, while 40 events out of a possible 443 is only 9%. Even though voters selected a smaller proportion of the event pool, most people still saw more of their events than their battlefields make the reprint list. Correcting for card pool size, this suggests that variation in voting is much smaller for events than for battlefields. More on that later.

Why all the mitigation?

Of the 40 events on the list, we classified 18 as “mitigation”: Pacify, Beguile, Hidden Motive, Entangle, Doubt, The Best Defense…, Into The Garbage Chute, Overconfidence, Electroshock, He Doesn’t Like You, Hasty Exit, Crash Landing, Loth-Cat and Mouse, Easy Pickings, Flank, Isolation, Rebel Assault and Pinned Down.

That’s quite a lot of mitigation, and the community has remarked that this may not be a good thing. Almost all of these are staples, which may make it all too easy to pack a deck full of good control cards. And that’s without counting the four healing/shield cards also on the list. The simplest explanation would be that this is just “what the community wants”, but we don’t think that is exactly right. Let’s look at a histogram of just the mitigation cards.

This shows, for each voter, the number of mitigation cards on their list that made the final list. As we can see, only a few people voted for almost all of the mitigation cards that ended up on the results list; most voted for fewer than half of them. So how did we end up with so many?

A simplified argument splits the events into two categories: mitigation and tech cards. Out of all events printed in the first two blocks of Destiny, there is a very wide variety of tech cards. A lot of them will have some fans, but only a few will have many, while the mitigation cards slide into decks much more easily. Most players will remember those, and it’s more obvious how they would fit into new decks. For many of the tech cards that’s a lot harder, so the vote is spread out much thinner.

There is also a second side to this argument: consider all the mitigation cards that didn’t make the list (yes, those exist too, even if they’re less memorable). As that list is pretty long, we’ve instead plotted the proportion of each person’s event votes that went to mitigation, including cards that didn’t make the final list. For instance, if you voted for 10 mitigation cards out of your 40 event choices, regardless of which cards those were, you’d come in at 0.25 (25%) on this graph:

We’ve included a vertical line at 45%, since 18 mitigation cards out of 40 events makes the result list 45% mitigation. Most people voted for less, or much less, mitigation than ended up on the final list, with the most common choice being around 40% mitigation (16 cards if you used all 40 event votes).
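The per-voter proportion plotted above is a one-liner to compute. The ballot below is invented, and “Tech Card A/B” are placeholder names rather than real cards:

```python
# Fraction of one voter's event votes that went to mitigation.
# The mitigation set is a small subset of the 18 cards classified above;
# the ballot itself is made up for illustration.
mitigation = {"Pacify", "Beguile", "Hidden Motive", "Electroshock"}

event_votes = ["Pacify", "Beguile", "Tech Card A", "Tech Card B"]

prop = sum(card in mitigation for card in event_votes) / len(event_votes)
print(prop)  # 0.5
```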

Putting these two results together: while most people voted for a similar or smaller amount of mitigation overall, not many wanted the specific powerful mitigation cards that dominate the list. You might have voted for 6 classic, powerful mitigation cards from the ‘classic shortlist’ and 10 more obscure ones picked from a much longer list of lesser mitigation pieces. But when everyone does that, the classics inevitably collect the most votes while the obscure picks don’t, because those votes are spread thinly over a larger number of cards.

We think this explains why the events list has ended up being mostly a list of Destiny mitigation “greatest hits”.

What the community wants

Community agreement is a bit harder to capture in a histogram, but we can still make a visual representation. For any two lists of votes, we can look at their symmetric difference.

The symmetric difference of two sets consists of exactly those elements that lie in either set but not both. For example, the symmetric difference of the sets {A, B, C} and {C, D, E} is {A, B, D, E}. To measure how much two lists differ, we can count the size of their symmetric difference, which is 4 in the previous case. Two identical lists (however long) have a symmetric difference of size 0, while the maximum size possible is 200, which happens for two completely non-overlapping lists of 100 votes each.
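In Python this distance is one operator away; the sets below mirror the {A, B, C} / {C, D, E} example:

```python
# Symmetric difference as a distance between two (toy) vote lists.
votes_1 = {"A", "B", "C"}
votes_2 = {"C", "D", "E"}

diff = votes_1 ^ votes_2   # elements in exactly one of the two sets
distance = len(diff)

print(sorted(diff), distance)  # ['A', 'B', 'D', 'E'] 4
```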

Now, we could plot a histogram of the sizes obtained this way, but that would not let us spot groups: you could, for instance, have many pairs of lists with a symmetric difference of around 60, yet with the lists differing in completely different ways (Easy Pickings is as different from Pinned Down as both are from Beguile, but that tells us little). To spot groups, we employ a technique called multidimensional scaling, which solves the following problem.

Suppose we have a collection of items, and all we know are all of their pairwise distances to each other; can we find a configuration of points that most closely respects those distances? We would often search for such a collection in a low-dimensional space so we can more easily understand the outcome visually. The actual values of the x-, y- and z-coordinates of the resulting points don’t carry any information (other than the scale), but we are able to spot groups in this way.

Let’s apply multidimensional scaling to the symmetric differences we obtain from all of the lists. We can force the solution to be 2-dimensional and produce the graph below.
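For the curious, the scaling step might look like the sketch below, assuming scikit-learn is available. The 4×4 distance matrix is invented purely for illustration; the real input was the full matrix of pairwise symmetric-difference sizes between all 156 ballots.

```python
import numpy as np
from sklearn.manifold import MDS

# Invented pairwise distances between four hypothetical voters:
# two pairs that mostly agree internally but disagree with each other.
distances = np.array([
    [0, 2, 6, 6],
    [2, 0, 6, 6],
    [6, 6, 0, 2],
    [6, 6, 2, 0],
])

# 'precomputed' tells MDS we are supplying distances directly,
# rather than feature vectors to compute distances from.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)  # one (x, y) point per voter

print(coords.shape)
```

Plotting `coords` as a scatter plot gives exactly the kind of map described below.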

Confused? Here’s another way of looking at it: we’re going to make a graph with lots of points on it. Think of it as a map, where each spot shows where someone’s card preferences lie. If the points cluster together, that suggests that we have groups of people who want the same thing. If they’re all spread out, it suggests we’re all individuals who like different things.

There may be some very small groups of players who agree with each other, but certainly no ruling contingent of players who impose their wish for Destiny’s future upon us all. Even the small groups seem to be rather in disagreement.

Popularity within and across sets

Let’s also look at a popularity measure that has nothing to do with game effects: popularity within and across sets. We’ve taken all eligible cards and split them per set.

First up, how popular is each set as a whole? For that, we’ll look at the proportion of the vote which went to each set:

Wow, Empire at War is unpopular as a set! All three sets from the Legacies block are more popular than any set from the Awakenings block, which may reflect the many players who started around the time Legacies was released, possibly because the Two Player Set came out at the same time. That’s true for me, and for many players I know.

Are we seeing nostalgia for cards from when people discovered Destiny for the first time? That would also contribute towards an explanation for why Awakenings the set is more popular than the other sets from Awakenings the block.

We’re also seeing a decline in popularity from the first set of a block to the last. Perhaps that’s because the first set of a block contains the ‘classic’ utility cards (like Probe, The Best Defense…, Field Medic, Entangle) while the later sets in a block contain increasingly esoteric cards (like Pilfered Goods, Kill Them All, Drop ‘Em and Wanton Destruction).

I’m putting the next graph in for the data geeks. It shows the same data as above, but I’ve got a line for each voter. I’ve split y’all into two groups: those that voted for more Legacies block cards than Awakenings block cards (blue) and vice versa:

I’m not sure this really shows anything particularly interesting, but it’s nice to look at.

Looking at the same data from a different perspective, we can plot the number of cards in each set that received at least 20 votes. This shows whether votes for a set were concentrated in just a few powerful cards:

Way of the Force contained the largest number of universally popular cards, with nearly a third of the set receiving at least 20 votes. By contrast, only a tenth of Empire at War’s cards did, showing that not only was the set relatively unpopular overall, but very few of its cards were broadly popular either.

Another, more technical way of looking at this data is the number of votes received by each individual card. We’ve ignored cards with 0 votes, adjusted for the number of cards in each set, and plotted the result below:

If a set’s curve starts out flat (on the left), there are relatively many cards that received few votes. The vertical axis has a logarithmic scale so we can see more clearly what’s going on. It also makes the lines wonderfully straight, which probably says a lot about the statistics behind voting patterns, but we won’t get into that here.

For Empire at War (the line farthest to the right, for the colour-blind among us), the first two jumps of the graph show that about 15% of the set received only a single vote and about 35% received just one or two votes. Only a small number of cards received very many votes – the Ancient and Shoto Lightsabers come to mind – as shown by the way the line kicks up steeply right at the end.

By contrast, very few cards in Legacies received just a couple of votes, while more than half of them received 10+ votes.

Read into this what you will, but I think this suggests that Legacies was a well rounded and popular set overall, while EaW (and SoR to a lesser extent) had a few stand out cards, but in general was underwhelming.

On balance though, what’s surprising is how similar all the lines are. We’re interpreting differences in the lines here, but it must be said that the differences are relatively subtle. This suggests that FFG design did, in general, a very good job.

While we’re looking at this type of graph, let’s do the same thing for card type:

We touched on this before, but this shows that lots of events received very few votes, even when you account for the vast number of events in the card pool, with more than half of the events receiving 5 or fewer votes. By contrast, characters, plots and battlefields had a more even spread of votes, with more than half receiving 10 or more votes.

This suggests that players’ tastes in characters, battlefields and plots are broader, while more players like the same shortlist of “greatest hits” events, and to a lesser extent supports and upgrades.

A suggestion for future voting

We do not yet know whether future evolutions of the reprint list will be based on a community vote (the recent big ARH announcement stream hinted that maybe they won’t be), but if I can make one suggestion, it would be to split the mitigation events from the tech events and have us vote on them separately. That should allow some of the over-represented mitigation cards to be replaced by interesting tech cards.

One card that I’m personally sad to see didn’t make the list (although it placed rather high) is Luke’s Training. As a fan of blue hero, I quite like this card in general, but I think there are some nice specific interactions between it and some of the ARH cards, Ki-Adi-Mundi and Grogu being two of them. 

Closing thoughts

We just love it when we have some data to analyse, and have a lot of respect for the transparency ARH have shown by sharing the full voting patterns and the logic which led them to the current reprint list. We hope you’ve found our musings interesting, and hope that ARH continues to show this transparency in future.
