## 50,000 LCS 2017 SPRING PLAYOFFS – Postmortem

## 50,000 Simulations of League of Legends 2017 Spring Playoffs

TSM will most likely take first place, Cloud9 will finish in 2^{nd} place, and Phoenix1 will probably settle into third. This is not a guess based on seeds (even though it seems to work out that way). This prediction is based on 50,000 simulations I ran of the playoffs to determine the most probable bracket.

Before we get into the simulations, a quick primer for those of you who do not play online video games. League of Legends (LoL) by Riot Games is one of the most popular online multiplayer games. It is a multiplayer online battle arena (MOBA) type game. The game has become so huge that it is now its own sport, or esport, with professionals who play for a living. With professional players come sanctioned tournaments. Tomorrow starts the spring 2017 North American (NA) League of Legends Championship Series (LCS) playoffs. Six professional LoL teams will compete for the top spot and a cash prize. I wanted to see if I could predict the winner.

The schedule and teams

However, simulating a tournament is not as easy as flipping a coin for each matchup. The data generated needs to have weight; it needs to be significant and, most importantly, it needs to model real life. To give weight to my simulations, I needed to determine probabilistically which team would win each matchup.

In theory, in any matchup the team with the higher skill should beat the team with the lower skill more often than not. The best way to quantify this is to calculate an Elo rating, which ideally represents a team's skill. Elo is used in many online multiplayer games to determine a player's skill. It originated in the chess scene: named after its creator, Arpad Elo, it was used to rank the top players based on who they competed against. Elo has since been expanded to many other sports and games. Most notably, Nate Silver and his team at 538 have been using Elo for all types of sports; they have built Elo models for football and basketball as well as baseball. However, you need match history and other data to build an accurate Elo rating for a team or player.

I scraped two years of professional game results, around 700 games. Each team in those two years started with an Elo of 1500. As teams competed against each other, their Elos rose, fell, and stabilized. The method for calculating Elo is easy, and the formula is quite straightforward (check out this tutorial here). You need a few things to calculate Elo: the ratings of both teams before the match, the outcome of the match, and a "k factor".

Formulas for calculating Elo ratings
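The update step described above fits in a few lines. The post does not include code here, so this is an illustrative Python sketch of the standard Elo formulas (using the common 400-point scale), with the k factor left as a parameter:

```python
def expected_score(rating_a, rating_b):
    """Expected score (win probability) for team A under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a, rating_b, a_won, k=20):
    """Return both teams' new ratings after one game; a_won is 1 or 0."""
    exp_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (a_won - exp_a)
    new_b = rating_b + k * ((1 - a_won) - (1 - exp_a))
    return new_a, new_b

# Two teams at the 1500 baseline: the winner gains exactly what the loser drops.
a, b = update_elo(1500, 1500, a_won=1)
print(a, b)
```

Because the expected scores of the two teams sum to 1, the system is zero-sum: rating points only move between teams, never appear or disappear.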

Ratings and outcomes are handled by the data and the equations above; the "k factor", however, must be estimated. I tried out a few k factors to determine which provided the most stable but reactive Elos, and settled on a value of 20. I also looked to a lot of Nate Silver/538 material to see if they had any insight. I highly suggest reading how they calculate their Elo values, as the insight was invaluable.

Using my k factor, I started to run through the match history of each team. As I mentioned before, I gave every team an equal start at 1500 Elo; it was up to them to raise or lower their rating. One thing to keep in mind: around summer 2016, games were handled differently. Instead of a single game per matchup, LoL switched to best of 5. There were two ways to approach this: update Elo based on who won the set, or based on individual games. I decided to update based on individual games, as a 3-0 win is more telling of who is better than a close 3-2 win.

Nate Silver uses game scores, home court advantage, and margins of victory in his Elo calculations. LoL is different: there are no "home courts", as it is an online game. There is no game score or easily measurable margin of victory or point spread. There are W's and L's. As Dustin Pedroia is quoted saying in Nate Silver's book, "All I care about is W's and L's." So my Elo calculation is simple: it only cares about who won and who lost. Maybe down the line that will change, perhaps after I compare my predictions to the results of the spring playoffs. After running all these calculations over all the games, I had an Elo rating for every team.

Finally, using the Elo ratings calculated after two years of games (less for more recent LCS teams; looking at you, FLY), I could probabilistically determine the outcome of any matchup. My mind immediately went to my favorite statistical hammer: Monte Carlo simulation. If you have read my previous posts, I have used Monte Carlo simulation here and here.
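The per-series simulation at the heart of this Monte Carlo approach might look something like the Python sketch below. Updating Elo after every individual game of a best-of-5 follows the description above; the ratings themselves (1650 vs. 1550) are made-up placeholders, not actual team ratings:

```python
import random

def win_prob(elo_a, elo_b):
    """Elo-based probability that team A wins a single game."""
    return 1 / (1 + 10 ** ((elo_b - elo_a) / 400))

def simulate_bo5(elo_a, elo_b, k=20):
    """Play out one best-of-5, re-rating after each individual game.
    Returns True if team A takes the series."""
    wins_a = wins_b = 0
    while wins_a < 3 and wins_b < 3:
        p = win_prob(elo_a, elo_b)
        if random.random() < p:
            wins_a += 1
            elo_a, elo_b = elo_a + k * (1 - p), elo_b - k * (1 - p)
        else:
            wins_b += 1
            elo_a, elo_b = elo_a - k * p, elo_b + k * p
    return wins_a == 3

# Repeat the series many times to estimate team A's chance of advancing.
trials = 50_000
a_series_wins = sum(simulate_bo5(1650, 1550) for _ in range(trials))
print(a_series_wins / trials)
```

A full bracket simulation chains these series together, carrying each team's updated Elo forward into its next matchup, which is what lets an upset snowball into momentum.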
50,000 simulations of the spring 2017 playoffs later, I produced the bracket you see below. As you can see, teams like TSM and C9 have high chances of placing due to their high Elo and first-week byes. Teams like Dignitas and FLY have much lower chances (especially Dig) of claiming the title. However, their chances hinge on building momentum: after each game of the tournament, Elo is recalculated. If a team like Dig 3-0s a higher-seeded team, their Elo is boosted to a point where they have a better chance against a high-Elo team like TSM.

I'll update my predictions after each stage of the tourney. I am excited to see whether my predictions were close or not.

GO TSM!

-Marcello

## Kingdom Death: Monster Fight Optimization

## Kingdom Death: Monster Weapon Optimization

Like most tabletop games and RPGs, all actions the player takes, whether they succeed or fail, are determined by dice rolls. When a player wants to attack a monster, she rolls a die and the number tells her whether she hits the monster or not. Simulating actions this way makes it much easier to mathematically determine outcomes, as I just have to model a dice roll for each attack.

Before we go into simulation, I should clearly outline the process of attacking a monster in KDM. KDM takes a unique approach to attacking a monster. There are two main stages to an attack: the hit phase and the wound phase.

During the hit phase, the player rolls to see if they actually hit the monster. They look at the monster's evasion stat and their weapon's accuracy stat. In order to hit, the player must roll higher than their weapon's accuracy stat plus the monster's evasion stat. For each point of speed the weapon has, the player may roll one die. For example, if their sword has a speed of 3, they can roll up to three dice.

Once the player determines if they hit or not, they move to the next phase: the wound phase. Here the player rolls again to see if they wound the monster. They re-roll any die that was greater than their weapon's accuracy stat plus the monster's evasion stat in the previous phase. To actually wound the monster, the player must roll higher than the monster's toughness (or hide) minus the strength of the weapon. Every die that passes this check wounds the monster.

Every roll is done on a 10-sided die. However, in KDM there are two special rolls: 1 and 10. Whenever a player rolls a 1, the hit or wound fails, no exceptions. Whenever a player rolls a 10, the hit or wound succeeds, no exceptions.

In order to quantify which weapon performs better against certain monsters, I will need to determine the probabilities of hitting and, more importantly, wounding the monster.
This could probably be done with distributions and simple probabilities; however, with the 1/10 rule and the various strength and accuracy checks, I decided on a different approach. In come Monte Carlo methods.

Monte Carlo methods simulate your problem many times to determine probabilities of success. Essential to the method is randomness. Each trial produces a different result, and if the number of trials is decently large, the results begin to model the distribution of the underlying problem. I stumbled upon a great blog post that outlines how to perform these methods using R: the website Count Bayesie covers different ways to use Monte Carlo methods here. I highly recommend reading the post, as it explains MC methods clearly and gives real-world examples with easy-to-follow code.

Using R, I created a script that would crunch the numbers on each weapon. The script calculates the chance to hit, wound, critically hit, and critically wound (which happens when a 10 is rolled). The script will also plot the distributions of each stat and compare them to another weapon. All of the above depends on the monster; each monster has 3 levels with various stats.

The script initially produced tables and graphs, but wasn't super useful on its own. I converted the script into the Shiny web app below, so now anyone can use my Monte Carlo script to compare any two weapons for any given monster. There are a few limitations so far: you can only compare 2 weapons at a time.
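The actual script behind the app is written in R, but the same Monte Carlo idea can be illustrated with a short Python sketch of the attack rules described above, including the 1/10 rule. The weapon and monster stats below are placeholders for illustration, not values from the game:

```python
import random

def attack(speed, accuracy, strength, evasion, toughness, trials=50_000):
    """Monte Carlo estimate of the average wounds dealt per attack.
    Hit phase: one d10 per point of speed; a hit needs a roll above
    accuracy + evasion. Wound phase: one d10 per hit; a wound needs a
    roll above toughness - strength. A 1 always fails, a 10 always succeeds."""
    total_wounds = 0
    for _ in range(trials):
        hits = 0
        for _ in range(speed):
            roll = random.randint(1, 10)
            if roll == 10 or (roll != 1 and roll > accuracy + evasion):
                hits += 1
        for _ in range(hits):
            roll = random.randint(1, 10)
            if roll == 10 or (roll != 1 and roll > toughness - strength):
                total_wounds += 1
    return total_wounds / trials

# Placeholder stats: a speed-3, accuracy-6, strength-2 weapon against a
# monster with evasion 0 and toughness 8.
avg_wounds = attack(speed=3, accuracy=6, strength=2, evasion=0, toughness=8)
print(avg_wounds)
```

Running two weapons through the same function against the same monster gives you directly comparable expected-wound numbers, which is essentially what the app's comparison does.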

## Pitchfork’s Best New Markov Chains Part 2

See original visualization here: http://setosa.io/blog/2014/07/26/markov-chains/

After finishing up my last post about modelling artists and their probability of releasing consecutive Best New Music albums (see part 1 here), I got to thinking about what else I could do with the data I scraped. I had all album reviews from 2003 to present, including the relevant metadata: artist, album, genre, author of the review, and date reviewed. I also had the order in which they were reviewed. Then, with Markov chains still fresh in my mind, I got to wondering: do albums get reviewed in a genre-based pattern? Are certain genres likely to follow others?

Using the JavaScript code from http://setosa.io/blog/2014/07/26/markov-chains/, I plugged in my labels (each of the genres) and the probabilities of state change (moving from one genre to another), which resulted in the 9-node chain at the top of the post. If you let the chain run a little while, you will notice a few patterns. The most obvious pattern is that all roads lead to Rock: for each node, the probability of the next album being a rock album is close to 50%. This is because not all genres are equally represented, and also because of the way Pitchfork labels genres. Pitchfork can assign up to 5 genres to an album it reviews. With up to 5 possible slots, some genres start to gain a lead on others. Rock, for instance, is tacked on to other genres more frequently than any other genre. This causes our Markov chain to heavily favor moving to Rock rather than to genres like Global and Jazz, which are not tacked onto others as frequently. So if you are the betting type, the next album Pitchfork reviews is probably a rock album.

-Marcello

## Pitchfork's Best New Markov Chains

See original visualization here: http://setosa.io/blog/2014/07/26/markov-chains/

I am an avid Pitchfork reader; it is a great way to keep up to date on new music. Pitchfork lets me know which albums to listen to and which ones not to waste my time on. It's definitely one source I love to go to when I need something new.

One way Pitchfork distills down all the music they review is to award certain albums (and more recently tracks) the title of "Best New Music." Best New Music, or BNM as I'll call it from here on, is pretty self-explanatory: it is awarded to albums (or reissues) that are recently released but show an exemplary effort. BNM is loosely governed by scores (the lowest-scoring BNM was a 7.8), but I noticed that I would see some of the same artists pop up over the years. This got me wondering: if an artist gets a BNM, is their next album more likely to be BNM or meh?

We need data. Unfortunately, Pitchfork doesn't have an API and no one has developed a good one, so that led me to scrape all the album info myself. Luckily, all album reviews are listed on this page: http://pitchfork.com/reviews/albums/. To get them all, I simply iterated through each page and scraped all new albums. I scraped the artist name, album name, genre, main author of the review, and year released. BNM started back in 2003, so I had a natural endpoint. In order to go easy on Pitchfork's servers, I built in a little rest between requests (don't get too mad, Pitchfork).

Now that I have the data, how should I model it? We can think of BNM and "meh" as two possible options, or "states", for albums (ignoring scores completely). Markov chains allow us to model these states and how artists flow through them. Each pass through the chain represents a new album being released. A conventional example is weather. Imagine there are only rainy days and sunny days. If it rained yesterday, there may be a stronger probability that it will rain tomorrow; the weather could also change to sunny, but at a lower probability. The same goes for sunny days.
For my model, just replace sunny days with BNM and rainy days with meh.

Sunny "S", Rainy "R", and the probabilities of swapping or staying the course
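The two-state chain is easy to simulate. As an illustration, here is a Python sketch that walks the BNM/meh chain and estimates the long-run share of BNM albums; the transition probabilities below are placeholders, not the values from my data (those live in the visualization):

```python
import random

def simulate_chain(steps, p_stay_bnm, p_stay_meh, start="meh"):
    """Run a two-state Markov chain ('BNM' vs 'meh') for a number of steps.
    Each step represents a new album being released."""
    state = start
    history = [state]
    for _ in range(steps):
        if state == "BNM":
            state = "BNM" if random.random() < p_stay_bnm else "meh"
        else:
            state = "meh" if random.random() < p_stay_meh else "BNM"
        history.append(state)
    return history

# Placeholder probabilities: BNM is sticky-ish but rare; meh is very sticky.
random.seed(0)
history = simulate_chain(10_000, p_stay_bnm=0.3, p_stay_meh=0.9)
share_bnm = history.count("BNM") / len(history)
print(share_bnm)
```

With these placeholder numbers, the chain spends roughly 12% of its time in the BNM state, which matches the stationary distribution 0.1 / (0.1 + 0.7) of this two-state chain.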

With all my data, I was able to calculate the overall Markov models. I took all artists that had at least 1 BNM album, at least 2 albums total, and at least 1 album after the BNM album. This ensures that the probabilities actually mean something: I can only tell what the probability of staying BNM is if there is at least one more album after the first BNM. Once I had distilled the artists down using the above criteria, getting the probabilities was easy. I simply iterated through each artist's discography, classifying the "state" change between consecutive albums (meh to meh, meh to BNM, BNM to BNM, BNM to meh).
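That counting step can be sketched as follows; the discographies below are toy data standing in for the filtered Pitchfork artists:

```python
from collections import Counter

def count_transitions(discographies):
    """Tally the four state changes across artists' discographies.
    Each discography is a chronological list of 'BNM' / 'meh' labels."""
    counts = Counter()
    for albums in discographies:
        for prev, curr in zip(albums, albums[1:]):
            counts[(prev, curr)] += 1
    return counts

def to_probs(counts):
    """Turn transition counts into per-starting-state probabilities."""
    totals = Counter()
    for (prev, _), n in counts.items():
        totals[prev] += n
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

# Toy discographies, purely for illustration.
artists = [
    ["meh", "BNM", "meh"],
    ["meh", "meh", "BNM", "BNM"],
    ["BNM", "meh", "meh"],
]
probs = to_probs(count_transitions(artists))
print(probs)
```

Normalizing by the starting state (rather than by the total number of transitions) is what makes each row of the resulting transition matrix sum to 1, which is exactly what the visualization expects as input.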

Finally, with all the numbers crunched, I plugged them into the visualization at the top. NOTE: the visualizations were NOT created by me; I simply plugged in my calculated probabilities and labels. The original visualization, along with a fantastic explanation of Markov chains, can be found at http://setosa.io/blog/2014/07/26/markov-chains/. The visualization and all the code behind it were created by that author, not me; I only supplied the probabilities.

If you look at the size of the arrows, you can tell the relative probability of each state change. As you can see, BNM albums are pretty rare, and artists don't stay in that state for long (thin arrow). What is much more common, as you probably guessed, is meh albums leading to more meh albums (thick arrow). It is more likely that an artist will produce a meh album after a BNM than stay BNM. What is interesting is that it is more likely to release a BNM after a BNM than it is to go from meh to BNM. These conclusions seem pretty obvious in retrospect; however, since we lumped all artists together, we might be missing some nuance.

Now, the above metrics are for all artists, but it is probably unfair to lump Radiohead (who churns out BNM albums like it's nothing) in with the latest EDM artist. I redid my analysis, this time further splitting the artists by their genre. Below are the three most interesting genres.

METAL

POP/R&B

RAP

Breaking artists out by genre led to some interesting results. For the most part, genres followed our general BNM Markov chain outline; however, the above three deviated. Metal has a much higher chance of an artist releasing consecutive BNM albums; the probability is almost 50%. However, it is much harder for a metal artist to transition from meh to BNM. The exact opposite is true for Pop/R&B (Pitchfork lumps the two together in their categorization): pop artists switch back and forth between BNM and meh, but rarely produce two albums of the same state consecutively. Rap is a little different: it is more resistant to change. For rap, it is harder to switch between states, but rather easy to stay in a state.

There are some drawbacks to this subsetting. The number of observations drops for each group, so these models are based on less data. Some albums also have multiple genre designations. Should a rock/electronic album count for both rock and electronic, be weighted at 50% of a pure rock album, or be separated out into its own rock/electronic category? Nevertheless, as exploratory and mildly useful Markov chains, these show that artists who have already produced a BNM album may have an advantage, but not by much.

-Marcello

## Fakestagram – Using Machine Learning to Determine Fake Followers

A while back I went out to dinner with a bunch of my buddies from high school. We inevitably started talking about the people who went to our high school and what they are doing today. Eventually we got to one of our friends who has actually become Instagram famous. As the night waned, one of my friends came up to me and said, "You know, all of his/her Instagram followers are fake." I immediately went to the account and started clicking on some of the followers. Sure enough, they started to look a little fishy. However, as a data scientist, I wasn't 100% convinced. Down the rabbit hole I went.

-Marcello

## Travelling Pokemon Trainer

- **It is not a closed loop**: I will have to walk allllllll the way back home from wherever this route lets me off. We need to add a constraint that the last stop has to be the same as the first.
- **Will my bag be full at the end of this?**: I don't wanna waste time walking to more PokeStops if I can't receive any more supplies.
- **I don't have to visit all the PokeStops / PokeStops recharge**: This is the most important one. Unlike the travelling salesman, I do not have to visit every stop. I can stay in a small subsection and just repeatedly go to the same 3 stops if I wanted to.
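Taken together, these constraints suggest a greedy heuristic rather than a full travelling-salesman solver: keep walking to the nearest stop until the bag fills up, then loop back home. Here is a minimal Python sketch of that idea; the coordinates, bag capacity, and items-per-stop values are all made up for illustration, and the post does not prescribe this implementation:

```python
import math

def greedy_loop(home, stops, bag_capacity, items_per_stop=3):
    """Greedy sketch: start at home, repeatedly walk to the nearest PokeStop
    (other than the one we are standing at; revisits are allowed once we have
    moved away), and head back home once the bag is full."""
    route = [home]
    pos = home
    items = 0
    while items < bag_capacity:
        nearest = min((s for s in stops if s != pos),
                      key=lambda s: math.dist(pos, s))
        route.append(nearest)
        pos = nearest
        items += items_per_stop
    route.append(home)  # closed-loop constraint: end where we started
    return route

# Made-up coordinates and bag size, purely for illustration.
stops = [(1, 0), (0, 2), (3, 3)]
route = greedy_loop(home=(0, 0), stops=stops, bag_capacity=9)
print(route)
```

Note how the route happily bounces between a couple of nearby stops and ignores the far one entirely: because stops recharge and the bag is the real limit, covering every stop is not a goal.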

## Journal Club: Week of 3/18/2016

The ASA’s statement on p-values: context, process, and purpose