Wrangling Data with Pandas: An Example

Introduction

If you’re familiar with Pandas, this won’t be news to you, but if not, I hope you’ll read through this and be convinced to check Pandas out. Many of us wrestle with data, whether at work, at home, or in our hobbies, and if we’re doing robotics or software, there are almost certainly logs of data to analyze. Excel is the go-to choice for many, and it’s quite useful, but it’s not easy to automate reading in data, reformatting it as you need it, resampling it, and so on. While I’ve not used R, I hear it’s great for statistical analysis and some machine learning, but only once the data is already in the shape you want, and only if R is all you need. Pandas, on the other hand, is a Python library for data analysis, so in addition to Pandas you have the full power of Python for pre-processing, post-processing, and any additional pipeline steps you may need.

Pandas is an open source library for data analysis built upon the scientific Python libraries Scipy and Numpy. Numpy adds support for large, multi-dimensional arrays and matrices to Python, along with sophisticated matrix mathematical operations. Pandas adds convenient row and column headers, using what are called DataFrames, on top of the Numpy array concept, and adds an extensive and growing library of statistical and other data analysis functions, often making the difficult both fast and easy.

A Financial Data Time Series Example

Background

Recently I was looking at a type of investment called a Callable Yield Note. The concept is that you receive a fixed interest rate for the life of the note, with two caveats. First, at certain call dates, the issuer can call the note in, returning the original principal and the interest earned to date. Second, if the value of any of the underlying “triggers” goes below a certain percentage of its value at the purchase date, then the principal is at risk. If a triggering event has occurred during the life of the note, then at the end the investor gets back the full principal only if that underlying trigger value has returned to its original starting value or higher. Otherwise, they lose the same percentage that the underlying item has lost.

The concept is that the buyer receives steady income, insulated from market fluctuations (both up and down), and the interest paid is significantly higher than that paid on comparable notes without early calls or triggers. The downside, of course, is that the buyer is exposed to both early calls and major market downturns. In this specific case, the notes to analyze are 12 month notes, with the underlying triggers being the S&P 500 and the Russell 2000 indices. The planned investment would be to establish a rolling ladder over 12 months.

I wanted to see, historically, how often a triggering event occurs; when it does, whether there is a loss at the end, and if so, of how much; and how many monthly investments in the ladder would be affected at one time. I had done a little with Pandas in the past, but not with time series analyses.

Analysis Steps

The steps for the analysis are to:

  1. Get historical daily closing values for the two indices and clean them up (in this case, replacing missing values on days when the markets are closed with the value at the previous close).
  2. Resample the daily data into monthly data, computing the opening, closing, and minimum values for each month.
  3. For each month, use a 12 month moving window, looking forward, to determine whether a triggering event occurred, and if so, how much principal, if any, would be lost (recall that if the index trigger goes off but the index recovers to its original starting value before the 12 month investment period ends, the principal is preserved).

Of course, some graphs and print-outs would be nice as well.

Applying Pandas

[Note: this was written as a quick and dirty script to run one analysis. The code is provided to demonstrate Pandas; it could use cleanup in many ways, including better variable names and elimination of magic numbers.] Pandas was originally developed for financial analysis, so I got a bit lucky here: Pandas has built-in libraries for accessing several online stock data APIs. The Federal Reserve Bank of St. Louis’ FRED Stock Market Indexes had the longest historical data I found for free, so I first wrote a few lines to fetch and store the data, roughly as sketched below.
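Here is a minimal sketch of that fetch-and-store step, assuming the pandas-datareader package and the same FRED series IDs used throughout this post; the date range and file name are just placeholders (at the time, equivalent functionality was built into Pandas itself):

import pandas_datareader.data as web

# Fetch daily closing values for the Russell 2000 and the S&P 500 from FRED
raw_data = web.DataReader(['RU2000PR', 'SP500'], 'fred',
                          start='1979-01-01', end='2016-12-31')
raw_data.to_csv('index_history.csv')  # hypothetical file name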

Then, in the analysis program, I load the previously saved data into a Pandas DataFrame I call triggers and replace missing data (which occurs on days when the markets are closed) with the last valid closing price:

# 'pad' carries the last valid observation forward over each gap
triggers = raw_data.fillna(method='pad')

A sample of the raw daily closing values looks like:

            RU2000PR SP500     
DATE                                              
2008-05-01  1813.60  1409.34  
2008-05-02  1803.65  1413.90
2008-05-03  1800.19  1407.49  
2008-05-04  1813.70  1418.26 
2008-05-05  1779.97  1392.57

Then I need to resample the daily close values into monthly data. I’ll need the monthly opening, closing, and low values for the later analysis. Pandas has a built-in function for resampling, with an option for capturing the opening, high, low, and close, so I use that. The '1M' parameter specifies that the resampling is to be by month. The index is datetime data, and Pandas understands time units, so there’s no need to mess with how many days are in each month or other details, and no need to write explicit loops either; it’s just one line of code:

trigger_monthly = triggers.resample('1M', how='ohlc')
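A side note if you’re running a current Pandas release: the how= keyword has since been removed, and (to the best of my knowledge) the equivalent is the method form:

trigger_monthly = triggers.resample('1M').ohlc()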

A small subset of the DataFrame (a few months of just the Russell 2000 columns; there are similar columns for the S&P 500) is shown below. You can see that the data has now been resampled to 1 month intervals, and instead of a daily price we have the closing values for the first and last days of the month (the open and close columns), as well as the highest and lowest daily closes that occurred during the month:

              open    high     low   close
DATE                                              
1979-01-31  100.71  110.13  100.71  109.03
1979-02-28  108.91  109.24  105.57  105.57
1979-03-31  106.33  115.83  106.33  115.83
1979-04-30  115.18  118.93  115.18  118.47
1979-05-31  118.52  118.82  113.49  116.32

That’s pretty good for one line of code! Next I need to compute values for 12 month rolling windows for each starting month, looking forward. Again, Pandas can compute rolling window values with a simple one-line command, but it always looks back from higher to lower indices in the DataFrame, so first I invert my frame from oldest-date-first to newest-date-first. After that, I add new columns to the data that capture the lowest (min) values of both the S&P 500 and the Russell 2000 that occurred in the 12 month window for each row in the DataFrame (where each row is a month):

flipped_trigger_monthly = trigger_monthly.iloc[::-1]

flipped_trigger_monthly['SP500','52_low'] = pd.rolling_min(flipped_trigger_monthly['SP500','low'], 12)
flipped_trigger_monthly['RU2000PR','52_low'] = pd.rolling_min(flipped_trigger_monthly['RU2000PR','low'], 12)
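Another version note: pd.rolling_min has since been removed from Pandas; in current releases the equivalent is the .rolling() method on the Series itself:

flipped_trigger_monthly['SP500','52_low'] = flipped_trigger_monthly['SP500','low'].rolling(12).min()
flipped_trigger_monthly['RU2000PR','52_low'] = flipped_trigger_monthly['RU2000PR','low'].rolling(12).min()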

Now the tricky part. I need to compute the triggers for each month. So for each month, and for each index, I need to compute the ratio of the minimum value during the window (computed and added to the DataFrame in the last step) to the opening value for the month, and also determine which index (the S&P or the Russell) is the lowest. Except for this part, I was able to find answers to my questions either in the online documentation or in the book Python for Data Analysis, written by the creator of Pandas; this step I had to ask about on Stack Overflow. After that, I flipped the frame back to run from oldest to newest date, as that’s the more intuitive order.

flipped_trigger_monthly['Trigger_Value','combo'] = pd.np.fmin(flipped_trigger_monthly['SP500','52_low'] / flipped_trigger_monthly['SP500','open'], 
    flipped_trigger_monthly['RU2000PR','52_low'] / flipped_trigger_monthly['RU2000PR','open'])
flipped_trigger_monthly['Trigger_Value','SP500'] = flipped_trigger_monthly['SP500','52_low'] / flipped_trigger_monthly['SP500','open']
flipped_trigger_monthly['Trigger_Value','RU2000PR'] = flipped_trigger_monthly['RU2000PR','52_low'] / flipped_trigger_monthly['RU2000PR','open']

trigger_monthly = flipped_trigger_monthly.iloc[::-1]
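One more version note: pd.np was a convenience alias for Numpy that has since been removed, so in current code you would import Numpy yourself and call np.fmin directly:

import numpy as np  # then np.fmin(...) replaces pd.np.fmin(...) above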

Finally, I plot the results against a line at the 70% trigger level, which clearly shows four time frames when the trigger level was breached, including leading into 1987 and 2008. I could then have used Pandas to flag the time frames when the trigger hit and to compute the actual loss of principal (the ratio of closing value to opening value, capped at a maximum of 1.0). However, at this point I wanted to eyeball and check the results anyway, so I wrote the final DataFrame out to a csv file and dropped back to Excel. That’s actually a handy feature: it’s very easy both to pull data in from Excel and to write it out. Although I used a csv file for compatibility, Pandas can directly write Excel-formatted files as well.

# Plot the rolling low trigger value versus the 70% of value trigger line
import matplotlib.pyplot as plt  # assumed imported at the top of the full script

plt.figure(); trigger_monthly['Trigger_Value','combo'].plot(color='b')
plt.axhline(y=0.7, linewidth=3, color='r')
plt.xlabel('months')
plt.ylabel('52 week low / open')
plt.show()

trigger_monthly.to_csv('triggerResults.csv')

[Figure: Minimum Trigger Values and the 70% Trigger Level]
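Had I done that last flag-and-loss step in Pandas rather than dropping to Excel, a rough sketch might look like the following (untested, and just one way to do it; shift(-11) pulls in the closing value from the end of each 12-month window, and the names triggered, worst_ratio, and principal are purely illustrative):

import numpy as np

# Flag the months whose 12-month window breached the 70% trigger
triggered = trigger_monthly['Trigger_Value','combo'] < 0.7

# Closing value at the end of each 12-month window (11 rows later)
sp_end = trigger_monthly['SP500','close'].shift(-11)
ru_end = trigger_monthly['RU2000PR','close'].shift(-11)

# Worst index's close/open ratio, capped at 1.0; months where the
# trigger never fired return full principal
worst_ratio = np.fmin(sp_end / trigger_monthly['SP500','open'],
                      ru_end / trigger_monthly['RU2000PR','open'])
principal = worst_ratio.clip(upper=1.0).where(triggered, 1.0)

And since Pandas can write Excel files directly, the csv line above could equally have been (assuming an Excel writer package such as openpyxl is installed):

trigger_monthly.to_excel('triggerResults.xlsx')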

If you’d like to see the full code, I posted it as a GitHub Gist at https://gist.github.com/ViennaMike/f953f531d5aaef071da22cdbec248794

Getting Started and Learning More

Python and the necessary libraries are much easier to install on Linux machines than on Windows; on Macs I hear it’s somewhere in between, as Python comes pre-installed but is often not an up-to-date version. Regardless of your environment, I recommend you use the free Anaconda distribution of Python from Continuum Analytics. It’s a complete package with all the scientific Python libraries bundled in; if Python has “batteries included,” Anaconda includes a whole electrical power plant. It also avoids library version inconsistencies and, on Windows, issues with installing the underlying C-based libraries.

The book on Pandas is Python for Data Analysis by Wes McKinney. It has lots of good examples that it walks you through. The website for Pandas is at http://pandas.pydata.org/ and the online documentation is quite good.

I recently also came across this blog post on the blogger’s top 10 features of Pandas and how to use them: Python and Pandas Top Ten.

The next time you need to wrangle some data, give Pandas a try.

More on Udacity and Massive Open Online Courses

I’ve posted previously about Udacity’s Programming a Robotic Vehicle course, and I think that online courses may come to revolutionize the college experience. However, I recently read a thorough and well-written critique that reaches the opposite conclusion, and thought it worth revisiting.

Since taking that course, I’ve also started taking Udacity’s statistics course, mostly as a refresher, and I’m about two-thirds of the way through.  Thus, I read with interest AngryMath’s critique.  AngryMath is a college-level statistics instructor in New York City.  He has a very harsh critique of the course and, by extension, the whole MOOC model.  He highlights 10 major problems, and I think he makes a lot of good points but over-generalizes in his criticism.  You can read his full critique on his site; here I list some of his points and offer my thoughts:

  1. Lack of Planning: I agree.  The original syllabus seems to have been published in advance of the course; it’s not particularly well-organized, and it doesn’t introduce material in the best order.  It seems much poorer in this respect than the robotic vehicles class.  However, that’s an indictment of this particular course, and it can occur in online and conventional courses alike.
  2. Sloppy Writing: Again, I agree.  AngryMath also cites the lack of a textbook or written material.  I don’t think a textbook is needed, but course notes similar to those developed for the robotics course would be helpful; I certainly found them helpful in that course.  AngryMath has only sampled the statistics course, so he has not seen that approach.
  3. Quizzes: He thinks they aren’t in the best places and criticizes when they are used to introduce material.  I DON’T agree that they come out of the blue to introduce material, and I like that approach occasionally anyway: try things out on your own first.
  4. Population and Sample not properly distinguished: Again, AGREE!  I’m taking this as a refresher, so I knew a distinction was being overlooked and researched it again on my own.  More generally, this is a criticism I have of both Udacity courses I’ve taken.  It’s fine to simplify and present material at a high level, but let the student know you’re doing it and where to get more information.
  5. Final Exam Certification: AngryMath criticizes the certificates as meaningless, because you can repeatedly answer the same static questions until you get them right, which makes the “certificate of accomplishment” meaningless.  Well, yes, at the present time, for this course, BUT 1) Udacity is already working to set up standardized, monitored testing using existing testing companies to offer “real” certification.  If you’ve ever taken a test at one of these facilities, it’s more heavily monitored than a typical college final, and Udacity makes no claims about the value of its current certificate.  2) While a few multiple-choice questions can be taken again and again until you get them right without true understanding, it’s easy to add variation to the questions, even if that wasn’t done in this course, and for courses with programming assignments, random guessing won’t work.  And finally, 3) the retake-until-you-get-it-right approach can be a good learning model if you eliminate the other issues and vary the questions: it indicates what you’ve learned by the end, rather than how much you picked up the first time.

So, while I’m learning from the class and intend to complete it, I agree with AngryMath that the course is done rather poorly, has errors, and could be much better.  Where I disagree is with his generalization:

“Some of these shortcomings may be overcome by a more dedicated teacher. But others seem endemic to the massive-online project as a whole, and I suspect that the industry as a whole will turn out to be an over-inflating bubble that bursts at some point, much like other internet sensations of the recent past.”

I believe MANY of the problems in this class are specific to the class, especially having taken the much better Programming a Robotic Car class.  While the online model has many shortcomings compared to live teaching, unlike AngryMath I believe many of them can be adequately worked around.  For example, I found far more information in the crowd-sourced robotic vehicle course forum and wiki than I can recall getting from office hours and recitation sections in a live class.  Other shortcomings will clearly remain.  BUT, and this is where I part company totally, I don’t believe the shortcomings exceed the savings.  If I can get 80% as good a class at a tenth or less of the current, and rapidly rising, cost of a traditional college course, then this is clearly the future.  Perhaps not entirely, and I hope not, as the college experience is often one to be treasured.  But I can easily see the competition from MOOCs forcing a new model, perhaps just one or two years on campus, with the rest done through far cheaper MOOCs.

Also, online courses offer a chance to “teach the long tail.”  Small colleges can’t have the breadth of faculty to cover all topics at advanced undergraduate or graduate levels, and maybe only one or two students at a college are interested in a given topic.  Many times, colleges and universities in physical proximity will offer the opportunity to get credit for attending classes taught at nearby schools (e.g., the Five College Consortium in western Massachusetts).  Imagine “virtual consortia” of hundreds of schools throughout the country (or even the world).  Through this model, the one or two interested students per college could be virtually assembled into a sizable class, taught by a well-qualified professor from one of the consortium’s schools.

Bottom line: AngryMath has a strong, valid critique of Udacity’s Statistics 101 course, but it’s dangerous to infer the quality of other current classes from a sample of one, and equally dangerous to extrapolate to the future.

For yet another take, there’s a good article in Forbes: Massive Open Online Courses — A Threat Or Opportunity To Universities?