Complete History of The NALCS

Data is current up to: Spring Playoffs 2017

My previous post got me thinking: how does every team that has ever participated in the North American League of Legends Championship Series (NALCS) stack up? For sports like chess, football, soccer, baseball, and many others, an Elo-based system is used to rank players and teams.

Elo is a straightforward calculation and concept. If a team wins, their rating should go up; if a team loses, their rating should go down. How much a team rises or falls is based upon their expectation of winning the game. A team that is mathematically favored to win will rise slowly, but fall sharply if they lose. Conversely, a team that is not favored to win will fall slowly, but in the case of an upset will rise sharply. Taking this concept, popularized across sports of all types by 538, and applying it to League of Legends, an online-only video game, gives us insight into the …ahem… league.
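The update itself fits in a few lines. Here is a minimal sketch (my own illustration, not 538’s exact formula), assuming the standard 400-point logistic curve and a hypothetical K-factor of 32:

```python
def elo_update(winner, loser, k=32):
    """Return new (winner, loser) ratings after one match.

    The expected score follows the standard logistic curve: a 400-point
    rating gap means the favorite is expected to win ~91% of the time.
    """
    expected_win = 1 / (1 + 10 ** ((loser - winner) / 400))
    delta = k * (1 - expected_win)  # favorites gain little; underdogs gain a lot
    return winner + delta, loser - delta

# An upset: a 1400-rated team beats a 1600-rated team and jumps ~24 points
new_underdog, new_favorite = elo_update(1400, 1600)
```

Run every match in chronological order through a function like this and you have each team’s rating history.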

Inspired by the fantastic visualization 538 did with their “Complete History of the NBA”, I set out to build a similar visualization for the NALCS. First, as all these projects go, I needed data. Unfortunately for me, there was no ready-made database with all the info I needed. My first priority was to put together a database with W’s, L’s, and teams. League of Legends has a massive following and because of this has pretty good records. Leaguepedia provided me with all the information I needed. I scraped all their NALCS match data, tidied it up, and built my database. This became the backbone for my Elo calculation.

Calculating Elo is a simple process; I outlined it in my last post, which is worth a read. Once all the Elo ratings were calculated, I started building my visualization. The above visualization was made in D3.js. I started from Nate Miller’s block and then modified it like crazy to make it do everything I needed. The visualization starts with the Spring 2013 split. If you are familiar with LoL, this is actually the third year of championship play, but the first officially recognized by Riot Games, the maker of LoL. This felt like a natural place to start. Here are a few interesting teams from the history of the NALCS.



Team SoloMid (TSM) has been around as long as competitive League of Legends has existed. They have participated in every single season and almost every single playoff. They are a force to be reckoned with. Ending the Summer 2016 split with a 17-1 record and a decisive playoff title, TSM cemented themselves as the greatest team in the history of the NALCS.



Every great team has a great rival. Baseball has the Yankees and the Red Sox; League of Legends has TSM and Cloud 9 (C9). Cloud 9 started only one season after TSM, but quickly rose to be the best team the NALCS had seen. However, after a string of losses, C9 began their fall. Three years later, TSM took over C9’s top position by a single game.



If TSM represents the best the NALCS has to offer, Team Coast is the worst. Originally known as Good Game University (seen above in red), the team changed their name to Team Coast on June 1, 2013. Coast never had a good record. The final nail in the coffin was the Spring 2015 split, where Team Coast achieved an embarrassing 1-17 record, one of the worst the NALCS has seen.



Now this one hurt. I have been a Dignitas fan since the early days of imaqtpie. Time and the NALCS have not been kind to Dig. They are one of the original LoL teams, but they cannot seem to escape mediocrity. Their Elo never swung more than 80 points from average, yet they still lost their spot in the NALCS in Summer 2016.



In League teams rarely vanish completely; most get acquired, absorbed, or renamed. Echo Fox went through three name changes throughout their professional run. Originally, Echo Fox was Curse Academy, the challenger (think semi-pro) arm of the Curse brand. Curse Academy itself was a renaming of “Children of IWDominate”; they renamed after qualifying for Season three of the challenger league. Curse Academy rebranded to Gravity Gaming after Riot’s new sponsorship rules involving the Curse voice chat client and the team of the same name. Finally, on December 18, 2015, Gravity Gaming was bought by Rick Fox of NBA fame and renamed to Echo Fox. The name changes have not helped their Elo.



As a relative newcomer to the NALCS, Immortals have done well for themselves. Originally Team 8, Immortals have surpassed and exceeded T8’s skill. T8 was a mediocre team, never straying far from average. However, with a new name, and more importantly a new roster, Immortals reached a peak Elo of 1760 before taking 3rd place in the 2016 Summer split.

Pitchfork’s Best New Markov Chains Part 2

See original visualization here:

After finishing up my last post about modelling artists and their probability of releasing consecutive Best New Music albums (see part 1 here), I got to thinking about what else I could do with the data I had scraped. I had all album reviews from 2003 to the present, including the relevant metadata: artist, album, genre, author of the review, and date reviewed. I also had the order in which they were reviewed.

Then, with Markov chains still fresh in my mind, I got to thinking: do albums get reviewed in a genre-based pattern? Are certain genres likely to follow others?

Using the JavaScript code from the original visualization, I plugged in my labels (each of the genres) and the probabilities of state change (moving from one genre to another), which resulted in the 9-node chain at the top of the post.
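For anyone curious, the probabilities themselves just come from counting which genre follows which in the review order and normalizing each row. A minimal sketch, with made-up genres rather than the real counts:

```python
from collections import Counter, defaultdict

def transition_probs(sequence):
    """Estimate Markov transition probabilities from an ordered list of states."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    return {
        state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for state, nexts in counts.items()
    }

# Toy review order; the real input is the genre of every review, in order
probs = transition_probs(["Rock", "Rap", "Rock", "Rock", "Jazz", "Rock"])
```

Each row of the result sums to 1, which is exactly the shape the D3 chain expects.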

If you let the chain run a little while, you will notice a few patterns. The most obvious is that all roads lead to Rock: for each node, the probability of the next album being a rock album is close to 50%. This is because not all genres are equally represented, and also because of the way Pitchfork labels genres. Pitchfork can assign up to 5 genres to an album it reviews. With up to 5 possibilities to get a spot, some genres start to gain a lead on others. Rock, for instance, is tacked on to other genres more frequently than any other genre. This causes our Markov chain to heavily favor moving to Rock rather than to genres like Global and Jazz, which are not tacked onto others as frequently.

So if you are the betting type, the next album Pitchfork will review is probably a rock album.


Pitchfork’s Best New Markov Chains

See original visualization here:

I am an avid Pitchfork reader; it is a great way to keep up to date on new music. Pitchfork lets me know which albums to listen to and which not to waste my time on. It’s definitely one source I love to go to when I need something new.

One way Pitchfork distills down all the music they review and listen to is to award certain albums (and more recently tracks) as “Best New Music.” Best New Music, or BNM as I’ll start calling it, is pretty self-explanatory. BNM is awarded to albums (or reissues) that are recently released but show an exemplary effort. BNM is loosely governed by scores (the lowest BNM was a 7.8), but I noticed that I would see some of the same artists pop up over the years. This got me wondering: if an artist gets a BNM, is their next album more likely to be BNM or meh?

We need data. Unfortunately Pitchfork doesn’t have an API and no one has developed a good one, so that led me to scrape all the album info. Luckily, all album reviews are listed on this page. To get them all I simply iterated through each page and scraped all new albums. I scraped the artist name, album name, genre, main author of the review, and year released. BNM started back in 2003, so I had a natural endpoint. In order to go easy on Pitchfork’s servers I built in a little rest between requests (don’t get too mad, Pitchfork).

Now that I have the data, how should I model it? We can think of BNM and “meh” as two possible options, or “states,” for albums (ignoring scores completely). Markov chains allow us to model these states and how artists flow through them. Each pass through the chain represents a new album being released. A conventional example is weather. Imagine there are only rainy days and sunny days. If it rained yesterday, there may be a stronger probability that it rains tomorrow; the weather could also change to sunny, but with a lower probability. The same goes for sunny days. For my model, just replace sunny days with BNM and rainy days with meh.
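The weather chain can be sketched in a few lines. The transition probabilities below are made up for illustration; swap the two states for BNM and meh and you have my model:

```python
import random

# Hypothetical transition matrix: P[today][tomorrow]
P = {"S": {"S": 0.8, "R": 0.2},   # sunny days tend to stay sunny
     "R": {"S": 0.4, "R": 0.6}}   # rainy days tend to stay rainy

def simulate(start, steps, seed=0):
    """Walk the chain for `steps` transitions and return the visited states."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices(list(P[state]), weights=list(P[state].values()))[0]
        path.append(state)
    return path

path = simulate("S", 10)
```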


Sunny “S”, Rainy “R”, and the probabilities of swapping or staying the course


With all my data, I was able to calculate the overall Markov models. I took all artists that had at least 1 BNM album, 2 albums minimum, and at least 1 album after the BNM album. This ensures that the probabilities actually mean anything: I can only tell what the probability of staying BNM is if you have at least one more album after your first BNM. Once I distilled all the artists down using the above criteria, getting the probabilities was easy. I simply iterated through each artist’s discography, classifying the “state” change between consecutive albums (meh to meh, meh to BNM, BNM to BNM, BNM to meh).
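Classifying those state changes is nearly a one-liner once each discography is an ordered list of BNM flags. A sketch with a hypothetical artist:

```python
from collections import Counter

def bnm_transitions(discography):
    """Count (previous, next) state pairs across a chronological discography.

    Each album is True for BNM or False for meh; a pair like (True, False)
    is a BNM album followed by a meh one.
    """
    return Counter(zip(discography, discography[1:]))

# Hypothetical artist: meh, BNM, BNM, meh
counts = bnm_transitions([False, True, True, False])
```

Summing these counters across all qualifying artists and normalizing gives the four probabilities in the chain.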


Finally, with all the numbers crunched, I plugged them into the visualization at the top. NOTE: the visualizations were NOT created by me. I simply plugged in my calculated probabilities and labels. The original visualization, along with a fantastic explanation of Markov chains, can be found at the link above. The visualization and all the code behind it were created by its author, NOT me. As I said before, I only supplied the probabilities.

If you look at the size of the arrows you can tell the relative probability of each state change. As you can see, BNM albums are pretty rare and artists don’t stay that way for long (thin arrow). What is much more common, as you probably guessed, is meh albums leading to more meh albums (thick arrow). It is more likely that an artist will produce a meh album after a BNM. What is interesting is that it is more likely to release a BNM after a BNM than it is to go from meh to BNM. These conclusions seem pretty obvious in retrospect; however, since we lumped all artists together, we might be missing some nuance.

Now, the above metrics are for all artists, but it is probably unfair to lump in Radiohead (who churn out BNM like it’s nothing) with the latest EDM artist. I redid my analysis, this time further splitting the artists by their genre. Below are the three most interesting genres.





Breaking artists out by genre led to some interesting results. For the most part, genres followed our general outline for the BNM Markov chain. However, the above three deviated. Metal had a much higher chance for an artist to release consecutive BNM albums; the probability is almost 50%. However, it is much harder for a metal artist to transition from meh to BNM. The exact opposite is true for pop/R&B (Pitchfork lumps the two together in their categorization). Pop artists switch back and forth between BNM and meh, but rarely produce two albums of the same state consecutively. Rap is a little different: rap is more resistant to change. For rap, it is harder to switch between states, but rather easy to stay in a state.

There are some drawbacks to this subsetting. The number of observations drops for each group, so these models are based on less data. Some albums also have multiple genre designations. Should a rock/electronic album count for both rock and electronic, be weighed at 50% of a pure rock album, or be separated out as its own rock/electronic category? Nevertheless, as exploratory, mildly useful Markov chains, we can see that some artists may have an advantage if they already produced a BNM album, but not by much.


Fakestagram – Using Machine Learning to Determine Fake Followers

A while back I went out to dinner with a bunch of my buddies from high school. We inevitably started talking about all the people that went to our high school and what they were doing today. Eventually we started talking about one of our friends who had actually become Instagram famous. As the night waned, one of my friends came up to me and said, “You know all of his/her Instagram followers are fake.” I immediately went to their account and started clicking on some of the followers. Sure enough, they started to look a little fishy. However, as a data scientist I wasn’t 100% convinced. Down the rabbit hole I went.

In order to solve this problem I needed some data. I have an Instagram account (@gospelofmarcello) and a few followers. The problem is that all my followers were real: mostly friends and family, with a few random businesses sprinkled in. Unfortunately, I didn’t have any fake followers. My first step was to correct that.

Pre-fake followers (shameless plug, it’s 90% food pictures):


So I found out there is a whole market around buying followers (and likes as well, but that’s a story for another blog post). I won’t post links here, but I found a site where I could get 100 followers for $3. Since I only had about 100 real followers, these fake followers would complete my dataset. I spent the $3 (sorry, Instagram! I’m doing it for science) and within the hour I had 100 new followers.

Post-fake followers (plus a few real ones):



The next step was to actually get info on all my followers. If you’ve used Instagram before, you’ve probably seen something like the photo above. Instagram profiles have some great data which I was going to need to build my model. Unfortunately for me, Instagram recently changed their API so that you can only access 10 other users (and their info/data) at a time. Even worse, you needed their permission. I assumed that these bots would not consent to being subject to my probe, so I needed a solution. In comes Selenium.

Selenium allows me to open webpages like normal and interact with them. I wrote a script that would first scrape all my followers, then one by one open up each follower’s profile and gather data. My program collects a user’s Instagram handle, number of followers, number of people they are following, number of posts, their real name, and their bio. I assigned each of my followers 0 if they were fake and 1 if they were real. Now it’s time to build and train the model.
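Once scraped, each profile becomes one row of features plus a label. A minimal sketch of that step (the field names here are my own illustration, not the scraper’s actual output):

```python
def profile_features(profile):
    """Turn one scraped profile dict into a numeric feature row.

    The five features: follower count, following count, post count, and
    whether the account bothered to set a name or a bio.
    """
    return [
        profile["followers"],
        profile["following"],
        profile["posts"],
        1 if profile.get("name") else 0,
        1 if profile.get("bio") else 0,
    ]

# A suspiciously bot-like profile: few followers, huge following, no posts
row = profile_features({"followers": 12, "following": 900, "posts": 0,
                        "name": "", "bio": ""})
```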

I decided to start off really simple with a decision tree algorithm. With this as a basis I could always get more complex with random forests or even the holy grail: gradient-boosted trees. But for the sake of good practice I started simple. Using scikit-learn, I fit a simple decision tree, reserving 30% of my data for testing. Scoring the predictions gave me 1.0: a perfect model, or, more likely, a super overfit model. Naturally, I loaded up scikit-learn’s cross-validation tools to check how badly overfit my model was. To my surprise, cross-validation produced an average score of 0.97 with a standard deviation of 0.03.
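The scikit-learn workflow is short. This sketch uses made-up toy rows in place of my scraped features, but the split/fit/cross-validate steps are the same:

```python
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in data: [followers, following, posts, has_name, has_bio],
# label 1 = real follower, 0 = fake
X = [[250, 300, 80, 1, 1], [180, 210, 40, 1, 1], [320, 150, 120, 1, 1],
     [500, 450, 60, 1, 1], [12, 900, 0, 0, 0], [3, 1500, 1, 0, 0],
     [25, 2000, 0, 1, 0], [8, 700, 2, 0, 0]] * 5   # repeated so CV has enough rows
y = ([1] * 4 + [0] * 4) * 5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
test_score = tree.score(X_test, y_test)   # a perfect 1.0 here should raise eyebrows

# Cross-validation re-fits on 5 different splits to expose overfitting
cv_scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
```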

The original model:


The model was basic, but with all my metrics I had some confidence. However, I needed more data to test. I reached out to a friend who kindly allowed me to scrape her Instagram followers. The only downside was that all her followers are real (she verified them all before I scraped). So I bought 100 more fake followers to append to her dataset, making a richer and more varied dataset (sorry, Instagram! All in the name of data science). I refit my model with all the original data and tested it on the new dataset. My decision tree model had an accuracy of 0.69, precision of 0.62, recall of 1.0, and predicted that my friend had 82.5% real followers when it was closer to 51.4%.

There was a huge drop in all metrics, and I wondered why my model performed so badly. I did a little exploratory analysis and then I realized: I’d bought the two sets of fake followers from two different sites (you’d be surprised how many sites there are). The new fake followers were of significantly higher quality than my first set. They had bios, names, and uploads, while the first set had only followers and maybe a name. The decision tree weeded the low-quality fakes out pretty quickly; however, it struggled on the high-quality fakes.

First round fake followers vs second round fake followers:


I needed to retrain my tree. I pooled all my data together and set aside 40% for training. I repeated all my steps of training, model building, and cross-validation. I then tested the new decision tree model against my friend’s followers and the remaining mix of fake followers.

The model performed much better, with an accuracy of 0.99, precision of 0.98, recall of 1.0, and a prediction that my friend had 51.9% real followers, close to the real percentage of 51.4%.
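For reference, the three metrics above reduce to simple counts over the confusion matrix. A quick sketch with a tiny made-up label set:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = real follower)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    return {
        "accuracy": sum(1 for t, p in pairs if t == p) / len(pairs),
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of predicted-real, how many were
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of actual-real, how many we caught
    }

m = classification_metrics([1, 1, 0, 0, 1], [1, 1, 0, 1, 1])
```

A recall of 1.0 means the model never mislabels a real follower as fake; precision tells you how often its “real” calls are right.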

I chose decision trees because of their easy interpretability. Below is a picture of the structure of the refined decision tree model used to classify each follower.


My 2nd-iteration model only used 3 of the 5 features I supplied. The model focused on number of followers, number following, and number of posts. Whether or not the user had a name or a bio did not come into play. There are many limitations to this model, as it is based strictly on a certain group of Instagram users. My dataset leaves out real users who follow way more than they are followed. It also lacks users who post very little but might be more engaged with the community (likes, comments, etc.). The model is quite basic and has room for growth, but I would need far more varied data. In all likelihood this model is overfit (look at the last branch); however, it provides some insight into catching fake followers. Definitely look at the follower-to-following ratio as a major sign of “realness.”

Now that the model has been built, trained (and retrained), and tested (kinda) successfully, it is time to answer the question that spawned this all. So how many of my friend’s followers are actually real? I scraped all 17 thousand and ran each one through the decision tree.

83% of his/her followers are fake.


Thanks for reading,

This Ain’t a Scene… It’s an Arms Race

Gun control has become a hot topic recently in the United States. Due to the increase in firearm deaths, there have been a lot of studies showing how guns flow through America. I wondered about the larger weaponry: items like missiles, tanks, and jet fighters. Who is buying this heavy-duty weaponry? Or do governments just produce their own weapons?

My intuition led me to believe that most heavy weaponry would be produced by China and the USA and would be headed toward war zones like Syria, parts of Africa, and parts of the Middle East. These conflict zones surely need the most weaponry. In order to explore this hypothesis I needed data. Luckily for me, there is an entire database of heavy weaponry purchases and sales: the Stockholm International Peace Research Institute monitors major weapons acquisitions [1]. Using this database I could trace who the major players are and where arms are moving.

After a little cleanup I was able to make the plot at the top of the post. That plot shows the major trades (that were recorded) starting in 1975; however, the bulk of the trades are from the 2000s onward. There are a few different symbols flying around there. The picture below has an example of each icon. All icons were found at the link below [2].

arms symbols

Starting from the top left we have: Ships, Missiles, Radar Tech, Armored Vehicles, Air Defense, Hand Held Rockets, Aircraft, Military Tech, Engines, and Naval Weaponry. As you can see in the map above, lots of arms are bought and sold around the world.

Some of the more interesting points are where the arms are going and where they are coming from. The USA is a big exporter and importer of arms. As expected, a lot of arms flow into the Middle East. Hardly any heavy weaponry flows toward South America or, surprisingly, Africa (I guess I’ve seen Lord of War too many times). A good amount of arms are also making their way toward Southeast Asia and, not surprisingly, South Korea.

It would be interesting to further explore this data to see if the next conflict arises where many of the arms are flowing. Or even whether past arms data coincided with the Iraq/Afghanistan wars.

The map above was created in D3, and, as we know, I am a very new JavaScript programmer, so I relied heavily on the tutorial found here [3]. This post made it easier to get a map up and running and to make the animations and plotting smooth.

– Marcello




Network of Medication Side Effects

I recently stumbled upon a database [1] of prescription and generic medicines that contains all of the side effects listed on their labels. As we all know from all of those prescription medication commercials (looking at you, Cialis), the side effects take up about half the commercial. I wondered if certain side effects always showed up together, kinda how cough and cold are always packaged together. I downloaded and scrubbed the database. From there I broke it up into two groups: the map above and the map below.

The network above has all of the most LIKELY side effects from the medications, defined as occurring in more than 60% of the people who took the medication. This narrowed the 200+ side effects to under 100. I, with my very rudimentary medical training, grouped these effects into categories (bones, blood-based, mental, etc.). The stroke width of the bond between two side effects is determined by how many times they show up together on a side-effects list. I created the force graph in D3 with help from two great sources [2][3]. You can manipulate this web, and if you double-click a point it highlights the neighbors it is usually paired with.
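The bond weights are just pairwise co-occurrence counts over each medication’s side-effect list. A sketch with made-up effect names:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(label_lists):
    """Count how often each pair of side effects appears on the same label.

    Returns a Counter keyed by sorted (effect_a, effect_b) pairs; the count
    becomes the stroke width of the bond in the force graph.
    """
    edges = Counter()
    for effects in label_lists:
        for pair in combinations(sorted(set(effects)), 2):
            edges[pair] += 1
    return edges

edges = cooccurrence_edges([
    ["nausea", "headache", "dizziness"],
    ["nausea", "headache"],
])
```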

The network below shows the top 50 most COMMON side effects. These side effects appeared most often on the labels of all the prescription meds. Because these side effects were so frequent, they were connected to everything and produced a rather boring glob of points (as you can see below). I further restricted the bonds by only keeping links that appeared over 200 times. This produced a (slightly) less intricate web. You can move the slider to break and form bonds; weaker bonds break first. This network also allows for double-clicking.

Pretty cool stuff. This might be useful for prediction, as certain side effects are always linked together. Future steps might include a more rigorous grouping rather than my less-than-informed medical opinion.

All the visualization credit goes to the bottom two sources. All the above is based on their work, and they helped me immensely, as I am a novice in JavaScript. Also, a quick shoutout to source [4], as it was a complete pain to get D3 working with WordPress.






Follow The Money

Campaign finances are becoming a prominent issue in today’s elections. We have candidates like Jeb Bush who receive record-breaking amounts of donations from private citizens and private companies alike. On the other hand, we have candidates like Bernie Sanders who only receive small donations from citizens. Regardless of your opinion on how candidates should behave toward campaign donations, they are nevertheless an important part of US elections. Discussion of campaign donations is almost always about presidential candidates, but what about our legislators? The only time I ever hear about donations to legislators is when there is a huge scandal. Do they pull in as much money as presidential candidates? Do they receive more money from the average citizen or the average corporation? Do legislators of a certain party pull in more than another?

To achieve this I needed data on campaign donations for all the federal legislators. Luckily for me, I am not the first to look for this data. There are quite a few places to go for this information, but I wanted a place with an easy-to-understand API and something reliable, which led me to my eventual source. It offers a very soft “API”, but one that is nevertheless super useful and easy to parse. I took the data from the past 5 years for any candidate that ran for either the Senate or the House of Representatives. Using the API, I exported the data in CSV format. From there, the preprocessing and the analysis were all performed in Python (Anaconda distribution).

Before we jump into the analysis, we need to know a little more about campaign contributions themselves. There are federal contribution limits on how much people (and corporations, parties, PACs, etc.) can donate. There are a few ways to get around these limits, however, and recent legislation has helped to facilitate that.


The two recent major decisions that we need to know for this analysis are McCutcheon v. FEC and Citizens United v. FEC. Both of these decisions deal with how much people can contribute to candidates. Citizens United v. FEC [1] prohibited the government from restricting political expenditures by a non-profit organization. This is subtly different from direct campaign donations: this type of expenditure [2] endorses the candidate but is made independently of the candidate. We have to keep this in mind when discussing PACs, as it is a heavily used tactic to funnel large sums of money into a candidate. The other decision was McCutcheon v. FEC [3], which removed the aggregate limits on campaign contributions. These decisions have brought in new spending and new ways to spend.


Now that we know who can donate and how much they can donate, let’s see who is on the receiving end. One more important thing we need to know is when all of our candidates are up for election. As we all know, Senators serve a total of six years, with 1/3 of the Senate up for reelection every two years. Congressmen, on the other hand, serve only two years and are up for election every two years. Elections fall on the even-numbered years, so our important data will fall in 2010, 2012, and 2014. All other years are reserved for special elections, for example if a senator dies midterm. For this first part I am using the data from the 2014 elections.

In 2014, the average Senator/Congressman pulled in a little over a million dollars in campaign donations. This average is a little skewed, as some legislators pulled in under a thousand and some over 10 million. The range of the campaign donations is set by two men, Mitch McConnell and John Patrick Devine. Mitch McConnell, a Republican and, more importantly, the Majority Leader of the Senate, pulled in a whopping $30 million, while John Patrick (about whom googling revealed no pertinent results) pulled in a less-than-stellar $40. Naturally, John Patrick lost his race, while Mitch McConnell is our current Majority Leader and has held his office in the Senate since 1985.
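That skew is exactly why the median tells a different story than the mean. A quick illustration with made-up totals spanning the $40-to-$30-million range:

```python
from statistics import mean, median

# Hypothetical donation totals: mostly modest, one McConnell-sized outlier
totals = [40, 8_000, 250_000, 900_000, 1_200_000, 30_000_000]
avg, mid = mean(totals), median(totals)
# The single outlier drags the mean far above the "typical" legislator
```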


Speaking of winners and losers, who makes more? Naturally, I believed that the average winner should pull in a great deal more than the average loser. And, well, that’s pretty much how it goes. Winners bring in an average of $2 million, while losers can only muster up about half a mill. This, however, excludes a group of politicians who chose to withdraw from the race. Those politicians close the gap, but not by much: they pull in about half of what a winner does, $1 million.


Before we dive into state-by-state trends, let’s see how the Senate does against the House of Representatives. The graph below sums up this subsection.


The House pulls in a lot more donations than the Senate. However, this may be due to the sheer number of people who run for the House. This brings us to a good point: donations are heavily influenced by the number of people who run and the number of people who donate. To avoid making everything reflect simple population counts rather than underlying trends, most of the following graphs will be averages or per capita where necessary.
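Per-capita normalization is a one-line dictionary pass. A sketch with made-up dollar totals and populations (not the real figures):

```python
def per_capita(donations, populations):
    """Convert state donation totals into dollars per resident."""
    return {state: donations[state] / populations[state] for state in donations}

# Hypothetical totals: a small state can out-donate a big one per person
rates = per_capita({"AK": 7_400_000, "CA": 39_000_000},
                   {"AK": 740_000, "CA": 39_000_000})
```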

Keeping that in mind, let’s see how all 50 states line up. Below is a graph on donations per capita.


2014 Campaign Donations to Legislators Grand Total

That’s much better. As you can see, this is obviously not a population map. States like NY, NJ, MA, and CA are not top tier, but rather toward the bottom. Interestingly enough, states with fewer people seem to have much greater donations per person; Alaska is a notable example. Why do these states get way more contributions than others? One possible explanation is that some of these states are swing states. Swing states (like New Hampshire above) are very closely divided between the Republicans and the Democrats. These states should naturally garner more donations, as the races should be more exciting and volatile. In coarser terms, campaign money is more valuable in these states.

Before we go any further, we have to look into who is donating. Let’s take a look nationwide at who is donating the most. Is it mostly large sums, or small donations?


Speaking of small donations, who actually donates to campaigns? I personally never have; my naïve and uninformed picture of campaign donations is just giant faceless corporations throwing money at candidates. Let’s take a peek at average Joes like you and me and how much they spend. Below you can see two maps of the US, one for 2012 and one for 2014. Hover over each state to see which citizen donated the most and how much they donated; the color scale lets you compare states to each other.

Top Donators 2012 & 2014
These two maps display the top donators for each state in 2012 and 2014. As you can see, in 2012 Texas and Connecticut dominated in terms of individual donators. These points may have skewed the data, however, as Linda from Connecticut was actually running in the campaign herself, personally funding her run. David from Texas was the lieutenant governor of Texas at the time. These do not seem like ordinary people.
2014 paints a more familiar (and relatable) campaign. The donations are much lower than in 2012, but similar trends emerge. New York, California, and Texas are all toward the top in terms of individual donators, with “fly-over” states toward the bottom.

Now what about those big faceless corporations? Here are two more maps, though these are only for the year 2014. The map on the left shows the top industry for each state; the chart on the right shows the top ten industries that donate the most nationwide.

Top Industries 2014

Again we have what looks to be a population map. It seems like states with the most people have the highest individual donators, whether citizens or corporations. One thing that stood out to me was the biggest donors: real estate and medical professionals were the top players in most states. Much less surprising was that Oil & Gas donated the most where, you guessed it, there is Oil & Gas.

Finally, what about groups who donate based on ideology? Some examples of these groups are pro-Israel, pro-life/pro-choice, and environmental policy, among many others. The bar chart on the left shows a nationwide average of which ideologies get the most money. The map on the right shows the most popular ideology per state.

Top Ideologies 2014
Unfortunately, the way the source structures its data makes looking at ideologies a little boring. General Liberal and Conservative ideologies are grouped together, and obviously these dominate nationwide. However, some other ideologies creep up after these two powerhouses. Big issues like foreign policy and the environment garner some money. Still, these ideologies do not donate nearly as much as even some of the smaller industries: excluding Liberal and Conservative, ideologies donate tenfold less than industries.



So far we have skipped over the two most important groups in American politics, the Republicans and the Democrats. How do the parties compare? Seeing that the country is pretty divided on party allegiance, I’d expect donations to each party to be relatively the same. One thing I’d also expect is that third-party candidates don’t pull in even the same order of magnitude as the two major parties.

Well, that seems about right. Democrats and Republicans pull in around the same amount each year, while third-party candidates are not even close. This was to be expected, as third-party candidates rarely have the same pull or presence as candidates from the two major parties.

Now what about statewide? During presidential elections most states are glossed over because they are usually deeply entrenched in one party or the other. Below is a map of which party got each state's electoral votes. Next to that is which party got more money in the 2014 elections.




The Elephants and the Donkeys

The first graph, from Politico, shows which party each state voted for. The one below it shows which party received more donations in each state. The two maps look quite similar. Both the east and west coasts mirror each other to an extent, and the midwest also aligns with donations. Donations to legislators in each state may be a good predictor of where the electoral votes end up. Or, more likely, states that were going to vote for a certain party donate to that party more.


Some states receive a lot more attention than others when it comes time for presidential elections. Currently I am only looking at federal legislators' donations, but I wonder if they reflect presidential politics as well. Certain states, which I will refer to as swing states, are not as deeply entrenched as others. The swing states for 2014 were: Nevada, Colorado, Iowa, Wisconsin, Ohio, New Hampshire, Virginia, North Carolina, and Florida. The map below highlights states that have the closest spending between the Democrats and the Republicans.

Battleground States

Most of the swing states have very similar donations between the two parties. Swing states like Virginia, Florida, and Nevada have very close donation totals; Virginia actually has the closest of all the states. On the other end, states like California, Texas, and New York have the greatest difference in donations. This makes sense, as these states are deeply entrenched in one party; just look at Texas, where the donations are completely lopsided. There is some good news in this map: most states are relatively close when it comes to donations to both parties.
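One simple way to quantify this closeness is the relative gap between the two parties' totals in each state. Here is a minimal sketch; the state figures are made-up placeholders, not the real totals behind the map:

```python
# Relative gap between party donation totals: 0 means perfectly even,
# 1 means all the money went to one party.
def donation_gap(dem_total, rep_total):
    return abs(dem_total - rep_total) / (dem_total + rep_total)

# Hypothetical state totals (Democratic, Republican), purely for illustration.
state_totals = {
    "Virginia": (10_500_000, 10_400_000),
    "Texas": (4_000_000, 21_000_000),
}

gaps = {state: donation_gap(d, r) for state, (d, r) in state_totals.items()}
closest = min(gaps, key=gaps.get)  # the most evenly split state
```

Ranking states by this gap is all the battleground map does: small gap, contested state; large gap, entrenched state.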


Political donations are a critical component of the United States government. Looking at the donations, many of my previous assumptions were confirmed and many were discredited. However, one must keep a critical eye on the data presented: the analysis is only as good as the data collected. I believe it is integral to have reliable and vetted donation data, as it holds many insights. I'd like to thank the site's maintainers for their data and commitment. If you liked this analysis, please check out their website and explore the data yourself! Maybe even consider donating!








I took a quick look at candidate donations limited to New Jersey; now I've moved nationwide. Let's see if the trends in New Jersey were typical of the whole nation or just Jersey. I restricted the data to 2014 to make it a little more manageable. As always, let's start with Democrats versus Republicans.

all states leg party

Here we see the party breakdown, along with the elusive third party. If it wasn't obvious already, the de facto two-party system completely eclipses all third-party hopes; Democrats and Republicans trump the cumulative third-party total by an order of magnitude. Moreover, Republican candidates across the nation raise more money than their Democratic counterparts. This caught me by surprise, as I thought totals would lean a little Democratic, but be more or less even. Let's take a peek at the office breakdown.

2014 was a big election year for the House and a lesser one for the Senate, so my prediction would put House campaign donations way ahead of Senate donations.

all states leg office

Yup, that looks about right. Not as big a spread as I would have guessed, but it follows from the year's context. One thing to note: with this dataset I kept all candidates, even if they lost. This should give a more complete look at all donations to candidates, not just the ones who were elected. So I wonder who raised more, the winners or the losers?

win lose

The above graph is misleading. You may want to say that people who won their elections raised more money, and you would be right if you looked at it cumulatively. However, to get anything meaningful out of this graph we need to look at it per candidate. It could simply be that more candidates won than lost, which would explain the spread.

per poli

Now this is surprising: even per candidate, the politicians who were elected raised almost 5 times as much as those who lost. Of the 1,415 candidates, 936 lost and 474 won; only 3 withdrew and 2 were “unknown”. Finally, let's look at the industries again.
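The per-candidate adjustment is just each group's cumulative total divided by its head count, using the candidate counts above. A quick sketch (the dollar totals are made-up placeholders, not the real figures from the chart):

```python
# Candidate counts from the 2014 federal data discussed above.
counts = {"won": 474, "lost": 936}

# Hypothetical cumulative donation totals, purely for illustration.
totals = {"won": 1_000_000_000, "lost": 420_000_000}

# Average raised per candidate in each group.
per_candidate = {group: totals[group] / counts[group] for group in counts}
```

Note that with these placeholder numbers the losers' larger head count drags their per-candidate average down even further, which is exactly the effect the cumulative chart hides.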

all state industry


Here, as usual, we see uncoded donations eclipsing the other industries. As a reminder, Uncoded actually includes PAC donations as well as individual donations, which is why it always comes in as the largest category.

On the federal level it looks like New Jersey is pretty much in line with the rest of the states. However, the whole point of getting data for every state is to be able to compare them. Stay tuned for part 4.



P.S. Here's a preview.



Journal Club: Week of 11/13/2015

Got two more for you this week: one on machine learning and the other on multivariate regression. Check them out.

Supervised Machine Learning: A Review of Classification Techniques
By S.B. Kotsiantis
University of Peloponnese (2007)

This paper serves as a review of a subset of supervised machine learning algorithms, with a focus on classification. Because of the vast number of algorithms, the author breaks the paper down by the algorithms' key features. First, the author gives a brief overview of machine learning in general: why and how it is used. What I liked most about this paper is that even before any algorithms are mentioned, the author discusses general issues with classifiers and algorithm selection. This prepares the reader and dispels the notion of a “silver bullet” algorithm.
The article is well organized. Kotsiantis starts with the most intuitive of machine learning algorithms, decision trees, and works his way up to newer (well, for 2007 at least) techniques. Each section covers a multitude of techniques under its subheading; for example, Statistical Learning contains Naïve Bayes and Bayesian Networks. I liked this organization, as it guides the reader into more complex techniques. One thing the paper lacks is depth. Most techniques are rushed through and not fully explained, but the paper's purpose is not to outline precise steps for implementing each technique; rather, it is to familiarize the reader with the existence of certain techniques.
Another criticism I have is that the paper feels a little dated. This is through no fault of the author, of course, but a more recent review may be a worthwhile follow-up. There is a table in the paper comparing the different techniques in terms of speed, tolerance, and other parameters, which is very useful. However, it might need to be checked for accuracy, as it may be outdated.

Partial Least Squares Regression: A Tutorial
By Paul Geladi and Bruce R Kowalski
Analytica Chimica Acta, 185 (1986) 1-17

Here is an oldie but a goodie. When I was first learning about Partial Least Squares (PLS, sometimes called projection onto latent structures), there was a vast number of papers, but none really drove the point home for me. I went back to one paper that was constantly cited: this paper from 1986. It provides a very clear tutorial on how to get PLS up and running, assuming you have an understanding of linear algebra. Starting with data preprocessing, the paper states what form your data needs to be in and how to get it into that form.
The paper takes a detour, however. It first goes over existing methods like multiple linear regression and principal component regression before explaining PLS. This was both good and bad for me: I was solely interested in PLS, but the other tutorials gave insight and quick, rudimentary ways of using those regression methods. After that, the paper dives into building the PLS model. Take care reading this section, as the explanation is sparse. Overall, it's not the best tutorial, but it has two invaluable takeaways. The first is Figure 9, which shows a geometrical representation of all the inputs and outputs the PLS model uses, including exactly the dimensions of each and how they relate to one another.
The other is the sample PLS algorithm. The appendix of the paper contains an almost pseudocode-like description of the PLS algorithm. Using it, I was able to get a PLS program up and running in less than an hour. The algorithm clearly shows every step that must be taken and exactly how to take it. This is the main reason I would recommend this paper: there are others out there that explain PLSR better, but this one allows for a rapid implementation of PLS.
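To give a flavor of how quick that implementation is, here is a minimal single-response (PLS1) sketch in the NIPALS deflation style. This is my own paraphrase of the general approach, not the paper's exact appendix listing:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Fit univariate PLS regression via NIPALS-style deflation."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W = np.zeros((X.shape[1], n_components))  # weight vectors
    P = np.zeros((X.shape[1], n_components))  # X loadings
    b = np.zeros(n_components)                # inner regression coefficients
    for a in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)           # unit weight vector
        t = Xc @ w                       # score vector
        tt = t @ t
        P[:, a] = Xc.T @ t / tt
        b[a] = (yc @ t) / tt
        Xc = Xc - np.outer(t, P[:, a])   # deflate X
        yc = yc - b[a] * t               # deflate y
        W[:, a] = w
    # Collapse the components into one regression coefficient vector.
    beta = W @ np.linalg.solve(P.T @ W, b)
    return beta, x_mean, y_mean

def pls1_predict(X, beta, x_mean, y_mean):
    return (np.asarray(X, dtype=float) - x_mean) @ beta + y_mean
```

With as many components as predictors, this reduces to ordinary least squares; the value of PLS comes from stopping at fewer components when the predictors are collinear.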


Follow The Money: Federal Legislature Part 2

Last part we took a look at campaign donations to New Jersey state legislators. Now we are moving on up to the US House and Senate. The stakes are a little higher, the politicians have more power, and hopefully the campaign coffers are fuller. Luckily, we have good data on our side.

All the data for the following graphs was collected using the site's API, which made it easy to tabulate and graph all the recorded donations. First up is Democrats vs. Republicans.
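Once the records are in hand, tabulating them is a one-liner per grouping. A minimal sketch; the record fields and values here are hypothetical placeholders, not the API's actual schema:

```python
from collections import defaultdict

def tabulate(records, key):
    """Sum donation amounts grouped by an arbitrary field (party, office, ...)."""
    totals = defaultdict(float)
    for record in records:
        totals[record[key]] += record["amount"]
    return dict(totals)

# Hypothetical records standing in for an API response.
records = [
    {"party": "Democratic", "office": "House", "amount": 5000.0},
    {"party": "Republican", "office": "Senate", "amount": 2500.0},
    {"party": "Democratic", "office": "Senate", "amount": 1000.0},
]

by_party = tabulate(records, "party")    # totals per party
by_office = tabulate(records, "office")  # totals per office
```

The same grouping function drives every chart below: swap the `key` for party, office, or industry and plot the resulting totals.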

fed leg party

This follows the state legislature pretty closely. Democrats stomp Republicans in terms of donations; however, this may be due to our data source rather than reality. 2014 and 2010 show close donation totals, while 2012 shows a blowout. 2013 seems to be completely missing Republican data. That, or only Democrats won.

One important qualification to make about this dataset is that it only represents donations to candidates who won their elections. We need context for 2013: since it was an off-year election, there must have been some special circumstance. Luckily, Wikipedia is here to help. Apparently during this time a senator sadly passed away and a special election was held. As we suspected, a Democratic candidate won, which may have contributed to the lopsided data. Now let's see if office matters at all.

fed leg office

Depending on the year, it looks like office matters quite a bit. The special Senate election in 2013 influenced all campaign spending that year. 2010 was similar to 2013, but completely dominated by House campaign donations. As you probably know, House seats are up every 2 years, and in the data above, House donations are all in the same range except in 2013, when there was no election. Senate elections also happen every 2 years, but only a third of the seats are up each cycle. New Jersey senators were up for reelection in both 2012 and 2014 but not in 2010, explaining the lack of donations. Finally, let's look at industry donations in 2012.

fed leg industry

Here we see uncoded donations eclipsing the other industries. After seeing uncoded in part 1, I investigated: Uncoded actually includes PAC donations as well as individual donations, which is why it always comes in as the largest category. I did some quick calculations to see what percentage was from individuals like you and me and what percentage came from corporations and other PACs.

Individual:     $14,760,750.00
Non-Individual: $1,412,439.00
Grand Total:    $16,173,189.00

Overwhelmingly, the donations stemmed from individuals: about 91% of the total. That was super surprising to me. There are a lot more visualizations I can do with this data, but before that, we have to go nationwide.
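The percentage split falls straight out of the table above:

```python
# Totals from the table above.
individual = 14_760_750
non_individual = 1_412_439
grand_total = individual + non_individual

assert grand_total == 16_173_189  # matches the reported grand total

pct_individual = 100 * individual / grand_total          # about 91.3%
pct_non_individual = 100 * non_individual / grand_total  # about 8.7%
```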


Find the data here: NJfedDon