Parsing PDFs in Python with Tika

A few months ago, one of my friends asked me if I could help him extract some data from a collection of PDFs. The PDFs contained records of his financial transactions over a period of years and he wanted to analyze them. Unfortunately, Excel and plain text versions of the files were no longer available, so the PDFs were his only option.

I reviewed a few Python-based PDF parsers and decided to try tika-python, a Python port of the Apache Tika library. Tika parsed the PDFs quickly and accurately. I extracted the data my friend needed and sent it to him in CSV format so he could analyze it with the program of his choice. Tika was so fast and easy to use that I decided to write a blog post about parsing PDFs with it.


California Budget PDFs

To demonstrate parsing PDFs with Tika, I knew I’d need some PDFs. I was thinking about which ones to use and remembered a blog post I’d read on scraping budget data from a government website. Governments also provide data in PDF format, so I decided it would be helpful to demonstrate how to parse data from PDFs available on a government website. This way, with these two blog posts, you have examples of acquiring government data, even if it’s embedded in HTML or PDFs. The three PDFs we’ll parse in this post are:

2015-16 State of California Enacted Budget Summary Charts
2014-15 State of California Enacted Budget Summary Charts
2013-14 State of California Enacted Budget Summary Charts


Each of these PDFs contains several tables that summarize total revenues and expenditures, general fund revenues and expenditures, expenditures by agency, and revenue sources. For this post, let’s extract the data on expenditures by agency and revenue sources. In the 2015-16 Budget PDF, the titles for these two tables are:

2015-16 Total State Expenditures by Agency


2015-16 Revenue Sources


To follow along with the rest of this tutorial you’ll need to download the three PDFs and ensure you’ve installed Tika. Each file downloads as SummaryCharts.pdf, so rename them by year (e.g. 2015-16CABudgetSummaryCharts.pdf) so they don’t overwrite one another. You can download the three PDFs here:

http://www.ebudget.ca.gov/2015-16/pdf/Enacted/BudgetSummary/SummaryCharts.pdf
http://www.ebudget.ca.gov/2014-15/pdf/Enacted/BudgetSummary/SummaryCharts.pdf
http://www.ebudget.ca.gov/2013-14/pdf/Enacted/BudgetSummary/SummaryCharts.pdf

You can install Tika by running the following command in a Terminal window:

pip install --user tika

IPython

Before we dive into parsing all of the PDFs, let’s use one of the PDFs, 2015-16CABudgetSummaryCharts.pdf, to become familiar with Tika and its output. We can use IPython to explore Tika’s output interactively:

ipython

from tika import parser

parsedPDF = parser.from_file("2015-16CABudgetSummaryCharts.pdf")

You can type the name of the variable, a period, and then hit tab to view a list of all of the methods available to you:

parsedPDF.


There are many options related to keys and values, so it appears the variable contains a dictionary. Let’s view the dictionary’s keys:

parsedPDF.keys()

The dictionary’s keys are metadata and content. Let’s take a look at the values associated with these keys:

parsedPDF["metadata"]

The value associated with the key “metadata” is another dictionary. As you’d expect based on the name of the key, its key-value pairs provide metadata about the parsed PDF.


Now let’s take a look at the value associated with “content”.

parsedPDF["content"]

The value associated with the key “content” is a string. As you’d expect, the string contains the PDF’s text content.


Now that we know the types of objects and values Tika provides to us, let’s write a Python script to parse all three of the PDFs. The script will iterate over the PDF files in a folder and, for each one, it will:

parse the text from the file
select the lines of text associated with the expenditures by agency and revenue sources tables
convert each of these sets of lines into a Pandas DataFrame
display the DataFrame
create and save a horizontal bar plot of the totals column for the expenditures and the revenues

After you run the script, you’ll have six new plots, one for revenues and one for expenditures for each of the three PDF files, in the folder in which you ran the script.

Python Script

To parse the three PDFs, create a new Python script named parse_pdfs_with_tika.py and add the following lines of code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import glob
import os
import re
import sys
import pandas as pd
import matplotlib
matplotlib.use('AGG')
import matplotlib.pyplot as plt
plt.style.use('ggplot')  # pd.options.display.mpl_style was removed from pandas; use a matplotlib style instead

from tika import parser

input_path = sys.argv[1]

def create_df(pdf_content, content_pattern, line_pattern, column_headings):
    """Create a Pandas DataFrame from lines of text in a PDF.

    Arguments:
    pdf_content -- all of the text Tika parses from the PDF
    content_pattern -- a pattern that identifies the set of lines
    that will become rows in the DataFrame
    line_pattern -- a pattern that separates the agency name or revenue source
    from the dollar values in the line
    column_headings -- the list of column headings for the DataFrame
    """
    list_of_line_items = []
    # Grab all of the lines of text that match the pattern in content_pattern
    content_match = re.search(content_pattern, pdf_content, re.DOTALL)
    # group(1): only keep the lines between the parentheses in the pattern
    content_match = content_match.group(1)
    # Split on newlines to create a sequence of strings
    content_match = content_match.split('\n')
    # Iterate over each line
    for item in content_match:
        # Create a list to hold the values in the line we want to retain
        line_items = []
        # Use line_pattern to separate the agency name or revenue source
        # from the dollar values in the line
        line_match = re.search(line_pattern, item, re.I)
        # Grab the agency name or revenue source, strip whitespace, and remove commas
        # group(1): the value inside the first set of parentheses in line_pattern
        agency = line_match.group(1).strip().replace(',', '')
        # Grab the dollar values, strip whitespace, replace dashes with 0.0, and remove $s and commas
        # group(2): the value inside the second set of parentheses in line_pattern
        values_string = line_match.group(2).strip().\
            replace('- ', '0.0 ').replace('$', '').replace(',', '')
        # Split on whitespace and convert each value to float to create a list of floating-point numbers
        values = [float(value) for value in values_string.split()]
        # Append the agency name or revenue source into line_items
        line_items.append(agency)
        # Extend the floating-point numbers into line_items so line_items remains one list
        line_items.extend(values)
        # Append line_item's values into list_of_line_items to generate a list of lists;
        # all of the lines that will become rows in the DataFrame
        list_of_line_items.append(line_items)
    # Convert the list of lists into a Pandas DataFrame and specify the column headings
    df = pd.DataFrame(list_of_line_items, columns=column_headings)
    return df

def create_plot(df, column_to_sort, x_val, y_val, type_of_plot, plot_size, the_title):
    """Create a plot from data in a Pandas DataFrame.

    Arguments:
    df -- A Pandas DataFrame
    column_to_sort -- The column of values to sort
    x_val -- The variable displayed on the x-axis
    y_val -- The variable displayed on the y-axis
    type_of_plot -- A string that specifies the type of plot to create
    plot_size -- A list of 2 numbers that specifies the plot's size
    the_title -- A string to serve as the plot's title
    """
    # Create a figure and an axis for the plot
    fig, ax = plt.subplots()
    # Sort the values in the column_to_sort column in the DataFrame
    df = df.sort_values(by=column_to_sort)
    # Create a plot with x_val on the x-axis and y_val on the y-axis
    # type_of_plot specifies the type of plot to create, plot_size
    # specifies the size of the plot, and the_title specifies the title
    df.plot(ax=ax, x=x_val, y=y_val, kind=type_of_plot, figsize=plot_size, title=the_title)
    # Adjust the plot's parameters so everything fits in the figure area
    plt.tight_layout()
    # Create a PNG filename based on the plot's title, replace spaces with underscores
    pngfile = the_title.replace(' ', '_') + '.png'
    # Save the plot in the current folder
    plt.savefig(pngfile)

# In the Expenditures table, grab all of the lines between Totals and General Government
expenditures_pattern = r'Totals\n+(Legislative, Judicial, Executive.*?)\nGeneral Government:'

# In the Revenues table, grab all of the lines between 2015-16 and either Subtotal or Total
revenues_pattern = r'\d{4}-\d{2}\n(Personal Income Tax.*?)\n +(?:Subtotal|Total)'

# For the expenditures, grab the agency name in the first set of parentheses
# and grab the dollar values in the second set of parentheses
expense_pattern = r'(K-12 Education|[a-z,& -]+)([$,0-9 -]+)'

# For the revenues, grab the revenue source in the first set of parentheses
# and grab the dollar values in the second set of parentheses
revenue_pattern = r'([a-z, ]+)([$,0-9 -]+)'

# Column headings for the Expenditures DataFrames
expense_columns = ['Agency', 'General', 'Special', 'Bond', 'Totals']

# Column headings for the Revenues DataFrames
revenue_columns = ['Source', 'General', 'Special', 'Total', 'Change']

# Iterate over all PDF files in the folder and process each one in turn
for input_file in glob.glob(os.path.join(input_path, '*.pdf')):
    # Grab the PDF's file name
    filename = os.path.basename(input_file)
    print(filename)
    # Remove the .pdf extension so we can use the base name for the plot title and PNG
    plotname = os.path.splitext(filename)[0]

    # Use Tika to parse the PDF
    parsedPDF = parser.from_file(input_file)
    # Extract the text content from the parsed PDF
    pdf = parsedPDF["content"]
    # Convert double newlines into single newlines
    pdf = pdf.replace('\n\n', '\n')

    # Create a Pandas DataFrame from the lines of text in the Expenditures table in the PDF
    expense_df = create_df(pdf, expenditures_pattern, expense_pattern, expense_columns)
    # Create a Pandas DataFrame from the lines of text in the Revenues table in the PDF
    revenue_df = create_df(pdf, revenues_pattern, revenue_pattern, revenue_columns)
    print(expense_df)
    print(revenue_df)

    # Print the total expenditures and total revenues in the budget to the screen
    print("Total Expenditures: {}".format(expense_df["Totals"].sum()))
    print("Total Revenues: {}\n".format(revenue_df["Total"].sum()))

    # Create and save a horizontal bar plot based on the data in the Expenditures table
    create_plot(expense_df, "Totals", "Agency", "Totals", 'barh', [20, 10],
                plotname + "Expenditures")
    # Create and save a horizontal bar plot based on the data in the Revenues table
    create_plot(revenue_df, "Total", "Source", "Total", 'barh', [20, 10],
                plotname + "Revenues")

Save this code in a file named parse_pdfs_with_tika.py in the same folder as the one containing the three CA Budget PDFs. Then you can run the script on the command line with the following command (make it executable first with chmod +x parse_pdfs_with_tika.py, or invoke it as python parse_pdfs_with_tika.py .):

./parse_pdfs_with_tika.py .

I added docstrings to the two functions, create_df and create_plot, and comments above nearly every line of code in an effort to make the code as self-explanatory as possible. I created the two functions to avoid duplicating code because we perform these operations twice for each file, once for revenues and once for expenditures. We use a for loop to iterate over the PDFs and for each one we extract the lines of text we care about, convert the text into a Pandas DataFrame, display some of the DataFrame’s information, and save plots of the total values in the revenues and expenditures tables.
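To make the line-level parsing in create_df concrete, here’s a standalone sketch that applies an expense-style line pattern (the same shape as expense_pattern’s fallback alternative) to a made-up table line; the agency name and dollar amounts below are illustrative, not taken from the actual PDF:

```python
import re

# A made-up line in the style of the Expenditures table (not from the real PDF)
line = "Health and Human Services  $31,928 20,031 - 51,959"

# Same shape as the script's expense_pattern: name group, then dollar-values group
line_pattern = r'([a-z,& -]+)([$,0-9 -]+)'
match = re.search(line_pattern, line, re.I)

# Clean the agency name and the dollar values just as create_df does
agency = match.group(1).strip().replace(',', '')
values_string = match.group(2).strip().replace('- ', '0.0 ').replace('$', '').replace(',', '')
values = [float(v) for v in values_string.split()]

print(agency)  # Health and Human Services
print(values)  # [31928.0, 20031.0, 0.0, 51959.0]
```

The dash-to-0.0 replacement is what turns the budget tables’ “-” (no funds) entries into numeric zeros before the float conversion.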

Results

Terminal Output
(1 of 3 pairs of DataFrames)


PNG File: Expenditures by Agency 2015-16
(1 of 6 PNG Files)


In this post I’ve tried to convey that Tika is a great resource for parsing PDFs by demonstrating how you can use it to parse budget data from PDF documents provided by a government agency. As my friend’s experience illustrates, there may be other situations in which you need to extract data from PDFs. With Tika, PDFs become another rich source of data for your analysis.

Pandashells: Data Science with Python on the Command Line

I often find myself using a variety of unix commands, perl / sed / awk one-liners, and snippets of Python code to combine, clean, analyze, and visualize data. Switching between the command line tools and Python breaks up my workflow because I have to step away from the command line to run the Python code in the interpreter or a script.

That’s why, when I learned about Pandashells last year, I got excited: it’s a set of tools for using Python, Pandas, and other members of the Python data stack on the command line. Since Pandashells is a bash API to Pandas, Statsmodels, Seaborn, and other libraries, it’s easy to integrate the work you’d do with these Python packages into your command line workflow.


Pandashells has a range of tools that enable you to accomplish many common data processing, analysis, and visualization tasks. The main tool is p.df, which loads your tabular data into a Pandas dataframe so you can use your favorite Pandas commands right on the command line. In addition, p.merge enables you to merge files, and p.linspace and p.rand enable you to create linearly-spaced and random numbers. p.regress enables you to perform multivariate linear regression, and p.regplot, p.plot, p.hist, p.facet_grid, and p.cdf enable you to create a collection of standard plots.

A nice feature of Pandashells is that it comes with several example datasets, including election, electoral_college, sealevel, and tips, and its GitHub page presents several well-commented examples that help you get familiar with the syntax. The examples show you how to chain multiple Pandashells commands and combine them with other command line tools like head.


Ever since I started using Pandashells, I’ve enjoyed being able to integrate Python code into my command line workflow. I’ve used it to merge datasets, parse and re-format dates, filter for specific rows and columns, create new columns and dummy variables, explore and summarize the data, create graphs and plots, and prepare the data for predictive modeling.

In this post I’d like to demonstrate how to use Pandashells to accomplish a variety of common data processing, analysis, and visualization tasks on the command line. First, I’ll present the commands according to the tasks you’re going to accomplish so the commands are short and you can skip to the tasks you’re interested in. Then, at the end of this post, I’ll provide an example of chaining several commands to prepare a dataset for predictive modeling.

 

INSTALL PANDASHELLS

To follow along with this post, you will need to install Pandashells, which you can do with one of the following commands, documented on its GitHub page:

pip install pandashells

pip install pandashells[console]

pip install pandashells[full]

conda install -c https://conda.anaconda.org/robdmc pandashells

 

 

DATA

The dataset we’ll use in these examples is the familiar customer churn dataset. You can download a copy of the dataset here: churn.csv


 

TUTORIAL

DELETE AN APOSTROPHE

To begin, one of the column headings in the file, Int’l Plan, contains an apostrophe. Pandashells has trouble with apostrophes in column headings because you enclose Pandashells commands in apostrophes, so let’s delete it.

The first sed command prints the result to the screen so you can confirm that it’s going to delete the apostrophe in the column heading. The second sed command uses the -i flag to edit the file in place. (If you’re using BSD/macOS sed, the -i flag requires an argument: sed -i '' -e "s/'//g" churn.csv.)

sed -e "s/'//g" churn.csv | head

sed -i -e "s/'//g" churn.csv
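Before running the in-place edit, you can preview the substitution on a sample string. The header below is made up, with a plain apostrophe standing in for the one in the file:

```shell
# Preview the substitution on a made-up header string before editing the file
echo "State,Int'l Plan,Churn?" | sed -e "s/'//g"
# State,Intl Plan,Churn?
```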

 

VIEW FIRST FEW ROWS OF THE DATAFRAME

Now that the file is ready, let’s use Pandashells to read the data into a Pandas dataframe and take a look at the header row and the first five data rows.

cat churn.csv | p.df 'df.head()'


 

VIEW NUMBER OF ROWS AND COLUMNS

How many rows and columns does the dataset have? The dataframe’s shape attribute returns the number of rows and columns as a tuple. However, the p.df tool wants to return output as a dataframe, and it has trouble converting the tuple into a dataframe, so assign the row value to a new column called rows, assign the column value to a new column called columns, and view the numbers in the first row using head(1). The “-o table” option means you want to output the results in table format.

cat churn.csv | p.df 'df["rows"], df["columns"] = df.shape' 'df[["rows", "columns"]]' 'df.head(1)' -o table


 

CHANGE COLUMN HEADINGS

The column headings contain a mix of uppercase and lowercase letters and the headings that are two words contain a space between the words. Let’s standardize the column headings by changing all of them to uppercase and converting any spaces to underscores.

cat churn.csv | p.df 'df.rename(columns={"State":"STATE", "Account Length":"ACCOUNT_LENGTH", "Area Code":"AREA_CODE", "Phone":"PHONE", "Intl Plan":"INTL_PLAN", "VMail Plan":"VMAIL_PLAN", "VMail Message":"VMAIL_MESSAGE", "Day Mins":"DAY_MINS", "Day Calls":"DAY_CALLS", "Day Charge":"DAY_CHARGE", "Eve Mins":"EVE_MINS", "Eve Calls":"EVE_CALLS", "Eve Charge":"EVE_CHARGE", "Night Mins":"NIGHT_MINS", "Night Calls":"NIGHT_CALLS", "Night Charge":"NIGHT_CHARGE", "Intl Mins":"INTL_MINS", "Intl Calls":"INTL_CALLS", "Intl Charge":"INTL_CHARGE", "CustServ Calls":"CUSTSERV_CALLS", "Churn?":"CHURN?"})' 'df.head()'

 

REMOVE ROWS THAT CONTAIN NaNs

This dataset doesn’t contain NaNs, but here are examples of how to eliminate rows that contain NaNs. The first command uses dropna() to eliminate rows that have NaNs in any column. The second command ensures there aren’t NaNs in specific columns.

cat churn.csv | p.df 'df.dropna()' 'df.head()'

cat churn.csv | p.df 'df[df["Churn?"].notnull()]' 'df[df["Account Length"].notnull()]' 'df.head()'

 

KEEP ROWS WHERE VALUES MEET CONDITIONS

You often want to filter a dataset for rows with values that meet specific conditions. The first command filters for rows where the Account Length is greater than 145. The second command filters for rows where the International Charge is less than 2 and the Day Charge is greater than 45.

cat churn.csv | p.df 'df[df["Account Length"] > 145]' 'df.head()'

cat churn.csv | p.df 'df[(df["Intl Charge"] < 2.0) & (df["Day Charge"] > 45.0)]' 'df.head()'

 

KEEP ROWS WHERE VALUES ARE / ARE NOT IN A SET

In some cases you want to filter for rows where the values in a column are or are not in a specific set. The first command filters for rows where the value in the International Plan column is “yes”. The second command uses a tilde ‘~’ to negate the expression and filter for rows where the value in the column is NOT “yes”. I’ve found this second syntax useful in situations where a column contains a small set of invalid values. Then you can use the second command to eliminate rows that have these invalid values in the column.

cat churn.csv | p.df 'df[df["Intl Plan"].isin(["yes"])]' 'df.head()'

cat churn.csv | p.df 'df[~df["Intl Plan"].isin(["yes"])]' 'df.head()'

 

KEEP ROWS WHERE VALUES MATCH A PATTERN

In some cases you want to filter for rows where the values in a column match a specific pattern. You can filter for rows matching a pattern using startswith, endswith, and contains to specify where to look for the pattern. The first command filters for rows where the first letter in the State column is a capital K. The second command finds rows where the text in the State column contains a capital K.

cat churn.csv | p.df 'df[df["State"].str.startswith("K")]' 'df.head()'

cat churn.csv | p.df 'df[df["State"].str.contains("K")]' 'df.head()'

 

KEEP SPECIFIC COLUMNS

Sometimes a dataset contains more columns than you need. You can specify which columns to retain by specifying them as a list. The following command restricts the output to nine specific columns.

cat churn.csv | p.df 'df[["Account Length", "Intl Plan", "VMail Plan", "Day Charge", "Eve Charge", "Night Charge", "Intl Charge", "CustServ Calls", "Churn?"]]' 'df.head()'

 

CREATE NEW VARIABLES / COLUMNS

One common operation is creating new columns. You can create a new column by writing an expression on the right hand side of the equals sign that generates the values for the column and then assigning the values to the new column you specify on the left hand side of the equals sign.

The first command uses the existing “Churn?” column to create a new column called “churn”. The values in the “Churn?” column are the strings “True.” and “False.”, so the expression uses NumPy’s where() function to convert them into 1s and 0s, respectively. The second command creates a new column called “total_calls” that is the sum of the values in the day, evening, night, and international calls columns. Similarly, the third command creates a new column called “total_charges” that is the sum of the values in the day, evening, night, and international charges columns.

cat churn.csv | p.df 'df["churn"] = np.where(df["Churn?"] == "True.", 1, 0)' 'df.head()'

cat churn.csv | p.df 'df["total_calls"] = df["Day Calls"] + df["Eve Calls"] + df["Night Calls"] + df["Intl Calls"]' 'df.head()'

cat churn.csv | p.df 'df["total_charges"] = df["Day Charge"] + df["Eve Charge"] + df["Night Charge"] + df["Intl Charge"]' 'df.head()'

 

CREATE CATEGORICAL VARIABLE FROM VALUES IN ANOTHER COLUMN

One fun operation is creating a column for a new categorical variable that’s based on values in another column. You can do so using a list comprehension and if-else logic. For example, the following command uses the values in the “State” column to create a new categorical variable called “us_regions” that categorizes the states into the Census Bureau’s four designated regions: Northeast, Midwest, South, and West. As another example, I’ve used this type of command to create a categorical variable of failure types based on keywords / substrings in another column containing verbose failure descriptions.

cat churn.csv | p.df 'df["us_regions"] = ["Northeast" if ("CT" in str(state).upper() or "ME" in state or "MA" in state or "NH" in state or "RI" in state or "VT" in state or "NJ" in state or "NY" in state or "PA" in state) else "Midwest" if ("IL" in state or "IN" in state or "MI" in state or "OH" in state or "WI" in state or "IA" in state or "KS" in state or "MN" in state or "MO" in state or "NE" in state or "ND" in state or "SD" in state) else "South" if ("DE" in state or "FL" in state or "GA" in state or "MD" in state or "NC" in state or "SC" in state or "VA" in state or "DC" in state or "WV" in state or "AL" in state or "KY" in state or "MS" in state or "TN" in state or "AR" in state or "LA" in state or "OK" in state or "TX" in state) else "West" for state in df["State"]]' 'df.head()'
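The machinery behind this long command is just a list comprehension whose element expression is a chain of conditional (if-else) expressions. Here’s the pattern distilled in plain Python, with a made-up subset of states rather than the full Census mapping:

```python
# A list comprehension with chained conditional expressions maps each value
# to a category; the states and buckets here are illustrative only.
states = ["NY", "IL", "TX", "CA"]
regions = ["Northeast" if s in ("NY", "PA") else
           "Midwest" if s in ("IL", "OH") else
           "South" if s in ("TX", "FL") else
           "West"
           for s in states]
print(regions)  # ['Northeast', 'Midwest', 'South', 'West']
```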

 

CREATE INDICATOR / DUMMY VARIABLES

Sometimes you want to convert a categorical variable into a set of indicator / dummy variables and add them to the existing dataframe. You can use Pandas’ get_dummies() function to create dummy variables and the concat() function to add them as new columns to the existing dataframe. For example, the following command uses our previous code to create a categorical variable called “us_regions” and then uses the get_dummies() and concat() functions to create four new indicator variables based on the values in “us_regions” and add them to the existing dataframe.

cat churn.csv | p.df 'df["us_regions"] = ["Northeast" if ("CT" in str(state).upper() or "ME" in state or "MA" in state or "NH" in state or "RI" in state or "VT" in state or "NJ" in state or "NY" in state or "PA" in state) else "Midwest" if ("IL" in state or "IN" in state or "MI" in state or "OH" in state or "WI" in state or "IA" in state or "KS" in state or "MN" in state or "MO" in state or "NE" in state or "ND" in state or "SD" in state) else "South" if ("DE" in state or "FL" in state or "GA" in state or "MD" in state or "NC" in state or "SC" in state or "VA" in state or "DC" in state or "WV" in state or "AL" in state or "KY" in state or "MS" in state or "TN" in state or "AR" in state or "LA" in state or "OK" in state or "TX" in state) else "West" for state in df["State"]]' 'pd.concat([df, pd.get_dummies(df.us_regions)], axis=1)' 'df.head()'

 

ENSURE SPECIFIC DATE FORMAT (YYYY-MM-DD HH:MM:SS)

This dataset doesn’t contain dates, but here’s an example of ensuring that the dates in a column have a specific format. I added a newline after the word “contains” to make the command easier to read, but you wouldn’t include the newline when you use the command.

cat my_data.csv | p.df 'df[df["date_column"].str.contains
("[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,2} [0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}")]'

 

RESTRICT TO SPECIFIC DATETIME RANGE

Here is an example of restricting your dataset to a specific date range. The following command ensures the values in the “date_column” column are more recent than “2009-12-31 23:59:59”. That is, it eliminates data from before 2010.

cat my_data.csv | p.df 'df[df["date_column"] > "2009-12-31 23:59:59"]'
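This string comparison works because zero-padded YYYY-MM-DD HH:MM:SS strings sort in the same lexicographic order as the moments they represent. A quick illustration with made-up dates:

```python
# Zero-padded ISO-style timestamps compare correctly as plain strings
dates = ["2009-06-15 08:00:00", "2009-12-31 23:59:59",
         "2010-01-01 00:00:00", "2011-03-20 12:30:00"]
recent = [d for d in dates if d > "2009-12-31 23:59:59"]
print(recent)  # ['2010-01-01 00:00:00', '2011-03-20 12:30:00']
```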

 

CALCULATE NUMBER OF DAYS BETWEEN TWO DATES

Here is an example of calculating the number of days between two dates. The new column is called “diff_in_days”. The values in the column are the number of days between the values in two columns called “recent_date_column” and “older_date_column”. To calculate the difference, I use the strptime() function inside two list comprehensions to convert the text values in the two date columns into datetime objects. Next, I use the zip() function to pair up the two datetime objects in each row. Finally, I use the expression str(i-j).split()[0] if "days" in str(i-j) else 1 to subtract one date, j, from the other, i; if the result contains the word “days”, I convert it to a string, split it on whitespace, and keep the number portion, otherwise I assign the value 1. For example, if the result of the subtraction is “10 days” I want the new column “diff_in_days” to contain the number 10. I added several newlines to make the command easier to read, but you wouldn’t include the newlines when you use the command.

cat my_data.csv | p.df 'df["diff_in_days"] =
[str(i-j).split()[0] if "days" in str(i-j) else 1
for i, j in zip(
[datetime.datetime.strptime(recent_date, "%Y-%m-%d %H:%M:%S")
for recent_date in df.recent_date_column],
[datetime.datetime.strptime(older_date, "%Y-%m-%d %H:%M:%S")
for older_date in df.older_date_column])]'
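Here’s the subtraction trick on its own, with made-up timestamps, showing why the check for the word “days” is needed:

```python
import datetime

# Made-up timestamps in the same format the command expects
recent = datetime.datetime.strptime("2015-07-11 10:00:00", "%Y-%m-%d %H:%M:%S")
older = datetime.datetime.strptime("2015-07-01 10:00:00", "%Y-%m-%d %H:%M:%S")

delta = recent - older
print(str(delta))             # 10 days, 0:00:00
print(str(delta).split()[0])  # 10

# When the difference is less than a day, str() contains no "days",
# which is why the command falls back to the else branch
print("days" in str(datetime.timedelta(hours=5)))  # False
```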


 

UNIQUE VALUES IN A COLUMN

Now we can return to analyzing our churn dataframe. The following command enables you to view the unique values in the “Churn?” column. The “-o table” option means you want to display the output in table format, as opposed to csv or another format, and the “index” option means you want to display the titles for the rows in the output.

cat churn.csv | p.df 'sorted(df["Churn?"].unique())' -o table index

 

VALUE COUNTS FOR UNIQUE VALUES IN A COLUMN

The following command enables you to view the unique values in the “VMail Plan” column, as well as the number of times each of the values appears in the dataset. As with the previous command, it’s helpful to display the output in table format and to display the titles for the rows in the output.

cat churn.csv | p.df 'df["VMail Plan"].value_counts()' -o table index

 

DESCRIPTIVE STATISTICS FOR A COLUMN

The following commands demonstrate how to compute descriptive statistics for categorical and numeric columns. The statistics for a categorical variable are the count of observations, the number of unique values in the column, the top / most frequently occurring value, and the frequency of the top occurring value. The statistics for a numeric variable are count of observations, mean, standard deviation, minimum, 25th percentile, 50th percentile / median, 75th percentile, and maximum. You can use “.T” to transpose the output, as shown in the second command.

cat churn.csv | p.df 'df[["Churn?"]].describe()' -o table index

cat churn.csv | p.df 'df[["Intl Charge"]].describe().T' -o table index

 

CROSSTABS

The following command shows how to create a crosstab table. The values in the “Churn?” column are the rows in the table, and the values in the “Intl Plan” column are the columns in the table. By default, the values in the table are counts of observations in each category, but you can specify additional data and an aggregation function to calculate different statistics for each category.

cat churn.csv | p.df 'pd.crosstab(df["Churn?"], df["Intl Plan"])' -o table index

 

GROUP BY

Sometimes you want to view separate statistics for different categories in a categorical variable, e.g. separate descriptive statistics for men and women. The following two commands show you how to use the groupby() function to group the data by the values in the “Churn?” column and calculate descriptive statistics for the two groups. The first command calculates descriptive statistics for a categorical variable, “Intl Plan”, separately for those who churned, “True.”, and those who did not churn, “False.”. Similarly, the second command calculates descriptive statistics for a numeric variable, “Intl Charge”.

cat churn.csv | p.df 'df.groupby("Churn?")[["Intl Plan"]].describe().unstack("Churn?")' -o table index

cat churn.csv | p.df 'df.groupby("Churn?")[["Intl Charge"]].describe().unstack("Churn?")' -o table index

 

PIVOT TABLES

The following two commands illustrate how to create pivot tables. Both commands display statistics about the values in the “Intl Charge” column, grouped by two categorical variables, “Churn?” and “Intl Plan”. The “Churn?” values are the rows in the output table and the “Intl Plan” values are the columns in the output table. The first command displays the “count” of the “Intl Charge” values in each of the categories, and the second command displays the “mean” of the “Intl Charge” values in each of the categories.

cat churn.csv | p.df 'df.pivot_table(values=["Intl Charge"], index=["Churn?"], columns=["Intl Plan"], aggfunc="count")' -o table index

cat churn.csv | p.df 'df.pivot_table(values=["Intl Charge"], index=["Churn?"], columns=["Intl Plan"], aggfunc="mean")' -o table index

 

BAR CHART

The following command combines the p.df tool with the p.hist tool to display a histogram of the values in the “churn” column. The command uses NumPy’s where() function to ensure the values in the new column are numeric. The “-o csv” option means the column should be output in csv format. I added a newline before the “--savefig” argument so it’s separate from the rest of the command; remove the newline and include the argument in the command if you want to save the figure to a file called bar_chart.png in a folder called plots.

cat churn.csv | p.df 'df["churn"] = np.where(df["Churn?"] == "True.", 1, 0)' 'df["churn"]' -o csv | p.hist --ylabel 'Count' --xlabel 'Churn? (0: No; 1: Yes)' --title 'Bar Chart of Dependent Variable: Churn?' --theme 'darkgrid' --palette 'muted'
--savefig 'plots/bar_chart.png'


 

FACET GRID

The following command shows how you can use facet grid to create separate plots based on a categorical variable. The --col "Intl Plan" argument indicates you want to create separate plots for the categories in the “Intl Plan” column. The --args "churn" argument indicates you want to display the “churn” data in the plots. The --map pl.hist argument indicates you want to display histograms of the “churn” data. Again, I added a newline before the “--savefig” argument.

cat churn.csv | p.df 'df["churn"] = np.where(df["Churn?"] == "True.", 1, 0)' 'df[["churn", "Intl Plan"]]' -o csv | p.facet_grid --col "Intl Plan" --args "churn" --map pl.hist
--savefig 'plots/bar_chart_facet.png'

pandashells_bar_chart_facet

 

WRITE DATA TO A FILE

A very common operation is to write your cleaned data to a new output file once you’re finished processing it. The following two commands show different ways to write to an output file. The first command uses Pandas’ to_csv() function to write the data to a file called “dataset_cleaned.csv”. I include the index=False argument so it doesn’t write an additional column of row index values to the output file. The second command uses the “-o csv” option to output the data in CSV format and the greater than sign to redirect the output into the output file.

cat churn.csv | p.df 'df[["Account Length", "Intl Plan", "VMail Plan", "Day Charge", "Eve Charge", "Night Charge", "Intl Charge", "CustServ Calls", "Churn?"]]' 'df.to_csv("dataset_cleaned.csv", index=False)'

cat churn.csv | p.df 'df[["Account Length", "Intl Plan", "VMail Plan", "Day Charge", "Eve Charge", "Night Charge", "Intl Charge", "CustServ Calls", "Churn?"]]' -o csv > dataset_clean.csv
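The index=False argument matters more than it might look: without it, to_csv() writes the row labels as an extra, unnamed first column. A quick way to see the difference on a toy dataframe:

```python
import io
import pandas as pd

# Toy dataframe with two of the churn columns
df = pd.DataFrame({"Account Length": [128, 107], "CustServ Calls": [1, 1]})

with_index = io.StringIO()
without_index = io.StringIO()
df.to_csv(with_index)                  # default: row index written as first column
df.to_csv(without_index, index=False)  # row index suppressed

print(with_index.getvalue())
print(without_index.getvalue())
```

The first output begins with an unnamed leading column (the header line starts with a comma); the second contains only the data columns.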

 

PUTTING IT ALL TOGETHER

I’ve demonstrated a variety of ways to clean, analyze, and visualize your data on the command line with Pandashells. In the preceding examples, I refrained from chaining several commands together into a long workflow so we could focus on short snippets of code that accomplish individual tasks.

In this last example, I present an extended workflow that prepares our churn dataset for predictive modeling. I present the workflow twice. The first time, I add newlines after each of the commands so each command and the overall workflow are easier to read. The second time, I present the commands as you’d actually write them so it’s easy to copy and paste them into a Terminal window.

The first command renames the column headings, converting them to uppercase and replacing spaces with underscores. The second command deletes any rows containing NaN values.

The next three commands use NumPy’s where() function to create three new numeric variables based on columns that contain text values. The two commands after that create variables that are the sums of the day, evening, night, and international calls and charges columns, respectively. The next command creates a categorical variable that assigns the values in the STATE column to the Census Bureau’s four designated regions. The final command in this group uses Pandas’ get_dummies() and concat() functions to create indicator variables for the four regions and add them to the existing dataframe.

The penultimate command selects a subset of the columns in the dataframe. The final command writes the eleven selected columns to an output file called “churn_cleaned.csv”. Since I didn’t add the argument index=False, the file also contains an additional column of row index values. The argument index_label=”ID” gives that column a heading.

NEWLINES AFTER EACH COMMAND

cat churn.csv | p.df
'df.rename(columns={"State":"STATE", "Account Length":"ACCOUNT_LENGTH", "Area Code":"AREA_CODE", "Phone":"PHONE", "Intl Plan":"INTL_PLAN", "VMail Plan":"VMAIL_PLAN", "VMail Message":"VMAIL_MESSAGE", "Day Mins":"DAY_MINS", "Day Calls":"DAY_CALLS", "Day Charge":"DAY_CHARGE", "Eve Mins":"EVE_MINS", "Eve Calls":"EVE_CALLS", "Eve Charge":"EVE_CHARGE", "Night Mins":"NIGHT_MINS", "Night Calls":"NIGHT_CALLS", "Night Charge":"NIGHT_CHARGE", "Intl Mins":"INTL_MINS", "Intl Calls":"INTL_CALLS", "Intl Charge":"INTL_CHARGE", "CustServ Calls":"CUSTSERV_CALLS", "Churn?":"CHURN?"})'
'df.dropna()'
'df["CHURN"] = np.where(df["CHURN?"] == "True.", 1, 0)'
'df["INT_PLAN"] = np.where(df["INTL_PLAN"] == "yes", 1, 0)'
'df["VM_PLAN"] = np.where(df["VMAIL_PLAN"] == "yes", 1, 0)'
'df["TOTAL_CALLS"] = df["DAY_CALLS"] + df["EVE_CALLS"] + df["NIGHT_CALLS"] + df["INTL_CALLS"]'
'df["TOTAL_CHARGES"] = df["DAY_CHARGE"] + df["EVE_CHARGE"] + df["NIGHT_CHARGE"] + df["INTL_CHARGE"]'
'df["USA_REGIONS"] = ["NORTHEAST" if ("CT" in str(state).upper() or "ME" in state or "MA" in state or "NH" in state or "RI" in state or "VT" in state or "NJ" in state or "NY" in state or "PA" in state) else "MIDWEST" if ("IL" in state or "IN" in state or "MI" in state or "OH" in state or "WI" in state or "IA" in state or "KS" in state or "MN" in state or "MO" in state or "NE" in state or "ND" in state or "SD" in state) else "SOUTH" if ("DE" in state or "FL" in state or "GA" in state or "MD" in state or "NC" in state or "SC" in state or "VA" in state or "DC" in state or "WV" in state or "AL" in state or "KY" in state or "MS" in state or "TN" in state or "AR" in state or "LA" in state or "OK" in state or "TX" in state) else "WEST" for state in df["STATE"]]'
'pd.concat([df, pd.get_dummies(df.USA_REGIONS)], axis=1)'
'df[["CHURN", "ACCOUNT_LENGTH", "INT_PLAN", "VM_PLAN", "TOTAL_CALLS", "TOTAL_CHARGES", "CUSTSERV_CALLS", "NORTHEAST", "MIDWEST", "SOUTH", "WEST"]]'
'df.to_csv("churn_cleaned.csv", index_label="ID")'

 

ACTUAL COMMANDS (READY FOR COPY AND PASTE)

cat churn.csv | p.df 'df.rename(columns={"State":"STATE", "Account Length":"ACCOUNT_LENGTH", "Area Code":"AREA_CODE", "Phone":"PHONE", "Intl Plan":"INTL_PLAN", "VMail Plan":"VMAIL_PLAN", "VMail Message":"VMAIL_MESSAGE", "Day Mins":"DAY_MINS", "Day Calls":"DAY_CALLS", "Day Charge":"DAY_CHARGE", "Eve Mins":"EVE_MINS", "Eve Calls":"EVE_CALLS", "Eve Charge":"EVE_CHARGE", "Night Mins":"NIGHT_MINS", "Night Calls":"NIGHT_CALLS", "Night Charge":"NIGHT_CHARGE", "Intl Mins":"INTL_MINS", "Intl Calls":"INTL_CALLS", "Intl Charge":"INTL_CHARGE", "CustServ Calls":"CUSTSERV_CALLS", "Churn?":"CHURN?"})' 'df.dropna()' 'df["CHURN"] = np.where(df["CHURN?"] == "True.", 1, 0)' 'df["INT_PLAN"] = np.where(df["INTL_PLAN"] == "yes", 1, 0)' 'df["VM_PLAN"] = np.where(df["VMAIL_PLAN"] == "yes", 1, 0)' 'df["TOTAL_CALLS"] = df["DAY_CALLS"] + df["EVE_CALLS"] + df["NIGHT_CALLS"] + df["INTL_CALLS"]' 'df["TOTAL_CHARGES"] = df["DAY_CHARGE"] + df["EVE_CHARGE"] + df["NIGHT_CHARGE"] + df["INTL_CHARGE"]' 'df["USA_REGIONS"] = ["NORTHEAST" if ("CT" in str(state).upper() or "ME" in state or "MA" in state or "NH" in state or "RI" in state or "VT" in state or "NJ" in state or "NY" in state or "PA" in state) else "MIDWEST" if ("IL" in state or "IN" in state or "MI" in state or "OH" in state or "WI" in state or "IA" in state or "KS" in state or "MN" in state or "MO" in state or "NE" in state or "ND" in state or "SD" in state) else "SOUTH" if ("DE" in state or "FL" in state or "GA" in state or "MD" in state or "NC" in state or "SC" in state or "VA" in state or "DC" in state or "WV" in state or "AL" in state or "KY" in state or "MS" in state or "TN" in state or "AR" in state or "LA" in state or "OK" in state or "TX" in state) else "WEST" for state in df["STATE"]]' 'pd.concat([df, pd.get_dummies(df.USA_REGIONS)], axis=1)' 'df[["CHURN", "ACCOUNT_LENGTH", "INT_PLAN", "VM_PLAN", "TOTAL_CALLS", "TOTAL_CHARGES", "CUSTSERV_CALLS", "NORTHEAST", "MIDWEST", "SOUTH", "WEST"]]' 'df.to_csv("churn_cleaned.csv", index_label="ID")'

pandashells_workflow
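If a pipeline like this outgrows a single shell line, the same cleaning steps translate directly into a standalone pandas script. The sketch below condenses the workflow using a two-row, invented stand-in for churn.csv and an abbreviated region lookup table:

```python
import numpy as np
import pandas as pd

# Invented two-row stand-in for churn.csv (only the columns the workflow touches)
df = pd.DataFrame({
    "State": ["NY", "TX"],
    "Account Length": [128, 107],
    "Intl Plan": ["no", "yes"],
    "VMail Plan": ["yes", "no"],
    "Day Calls": [110, 123], "Day Charge": [45.07, 27.47],
    "Eve Calls": [99, 103], "Eve Charge": [16.78, 16.62],
    "Night Calls": [91, 103], "Night Charge": [11.01, 11.45],
    "Intl Calls": [3, 3], "Intl Charge": [2.70, 3.70],
    "CustServ Calls": [1, 1],
    "Churn?": ["False.", "True."],
})

# Uppercase, underscore-separated column headings
df.columns = [c.upper().replace(" ", "_") for c in df.columns]

# Drop any rows containing NaN values
df = df.dropna()

# Numeric recodes of the text columns
df["CHURN"] = np.where(df["CHURN?"] == "True.", 1, 0)
df["INT_PLAN"] = np.where(df["INTL_PLAN"] == "yes", 1, 0)
df["VM_PLAN"] = np.where(df["VMAIL_PLAN"] == "yes", 1, 0)

# Totals across the day/evening/night/international columns
df["TOTAL_CALLS"] = df[["DAY_CALLS", "EVE_CALLS", "NIGHT_CALLS", "INTL_CALLS"]].sum(axis=1)
df["TOTAL_CHARGES"] = df[["DAY_CHARGE", "EVE_CHARGE", "NIGHT_CHARGE", "INTL_CHARGE"]].sum(axis=1)

# Census region lookup via a dict and map() instead of chained conditionals
# (abbreviated here to the two states in the sample)
regions = {"NY": "NORTHEAST", "TX": "SOUTH"}
df["USA_REGIONS"] = df["STATE"].map(regions)

# Indicator variables for the regions
df = pd.concat([df, pd.get_dummies(df["USA_REGIONS"])], axis=1)

print(df[["CHURN", "INT_PLAN", "VM_PLAN", "TOTAL_CALLS", "TOTAL_CHARGES", "NORTHEAST", "SOUTH"]])
```

From here, a final df.to_csv("churn_cleaned.csv", index_label="ID") call produces the same kind of output file as the command-line workflow.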

With Pandashells, we were able to quickly read the raw data into a Pandas dataframe, clean the data, create new variables, filter for specific rows and columns, and write the cleaned data to a new output file without leaving the command line. Now, if we were so inclined, we could write a SKLL configuration file and run a collection of predictive models on the data from the command line. Conveniently, if our workflow involves additional command line operations or tools, it’s easy to combine them with the code we’ve presented because Pandashells was designed to integrate well with existing command line tools.

I hope this post has given you ideas on how to use Pandashells to integrate Python, Pandas, and other data stack commands into your existing command line workflows. I’ve enjoyed using Pandashells for data merging and cleaning, quick ad-hoc analysis, and analysis and workflow prototyping. I still use Python scripts when they’re more appropriate for the project, but it’s been a lot of fun performing common analysis tasks on the command line with Pandashells.

Foundations for Analytics with Python: From Non-programmer to Hacker

I’m excited to share that O’Reilly Media is about to publish my new book, Foundations for Analytics with Python: From Non-programmer to Hacker. The book is geared toward people who have no prior programming experience but deal with data every day and are interested in learning how to scale and automate their work.

Foundations for Analytics with Python by Clinton Brownley, PhD

I did not have a background in programming. I learned it on the job because I recognized it would enable me to automate repetitive actions and accomplish tasks that would have been time-consuming or impossible with my skill set at the time. I read countless books, online tutorials, and blog posts in those first few weeks and months as I attempted to get my first program for work to do something useful for me. It’s difficult to fully describe how exhilarating and empowering it was when I finally got the program to work correctly. Needless to say, I was hooked, and I haven’t looked back.

I wrote the book with a few objectives in mind:

  • Be accessible to ambitious non-programmers
  • Be practical, so you can immediately see how you can use the code at work
  • Teach fundamental programming concepts and techniques and also provide alternative, shorter code that performs the same actions
  • Make the learning curve as short and shallow as possible so you can enjoy the fruits of your labor as quickly as possible

The book’s features reflect these objectives:

  • Each section focuses on one specific task, so you can learn how to accomplish that task without distractions
  • Each section is a complete, self-contained program, so you don’t have to remember to combine a bunch of code snippets to make them work
  • In the CSV and Excel chapters, each section of code has two versions, a base Python version and a Pandas version. The base Python version teaches you fundamental concepts and techniques. The Pandas version shortens and simplifies the code you need to write to accomplish the task
  • Uses the Anaconda Python 3 distribution, which bundles the newest version of Python with some of the most popular add-in packages
  • Includes screen shots of the input files, Python code, command line, and output files
  • Common data formats, including plain text, CSV, and Excel files, and databases
  • Common data processing tasks, including filtering for specific rows, selecting specific columns, and calculating summary statistics
  • Chapters on data analysis, plotting and graphing, and automation
  • Three real-world applications that illustrate how you can combine and extend techniques from earlier chapters to accomplish important data processing tasks
  • Both Windows and Mac commands and screen shots

To give you a feel for the book, let me provide a few sections of code from the book and the table of contents. The first section of code comes from the CSV chapter, the second section of code from the Excel chapter, and the third section of code from the Database chapter. The brief comments after each section of code are for this blog post; they are not in the book. If you want to see what other topics are included in the book, please see the table of contents at the bottom of this post.

Example Section #1: CSV Files

Reading and Writing a CSV File

Version #1: Base Python

#!/usr/bin/env python3
import csv
import sys

input_file = sys.argv[1]
output_file = sys.argv[2]

with open(input_file, 'r', newline='') as csv_in_file:
    with open(output_file, 'w', newline='') as csv_out_file:
        filereader = csv.reader(csv_in_file, delimiter=',')
        filewriter = csv.writer(csv_out_file, delimiter=',')
        for row_list in filereader:
            filewriter.writerow(row_list)

Version #1 demonstrates how to read a CSV input file with base Python’s standard csv module and write the contents to a CSV output file. In the book, I explain every line of code. This first example gives you the ability to transfer all of your data to an output file. The subsequent examples in the chapter show you how to select specific data to write to the output file and how to process multiple CSV files.

Version #2: Pandas Add-in Module

#!/usr/bin/env python3
import sys
import pandas as pd

input_file = sys.argv[1]
output_file = sys.argv[2]

data_frame = pd.read_csv(input_file)
print(data_frame)
data_frame.to_csv(output_file, index=False)

Version #2 demonstrates how to accomplish the same task with Pandas. As you can see, you simply use read_csv to read the input file and to_csv to write to the output file.

Example Section #2: Excel Files

Reading and Writing an Excel Worksheet

Version #1: xlrd and xlwt Add-in Modules

#!/usr/bin/env python3
import sys
from xlrd import open_workbook
from xlwt import Workbook

input_file = sys.argv[1]
output_file = sys.argv[2]

output_workbook = Workbook()
output_worksheet = output_workbook.add_sheet('output_worksheet_name')

with open_workbook(input_file) as workbook:
    worksheet = workbook.sheet_by_name('input_worksheet_name')
    for row_index in range(worksheet.nrows):
        for column_index in range(worksheet.ncols):
            output_worksheet.write(row_index, column_index, \
                worksheet.cell_value(row_index, column_index))
output_workbook.save(output_file)

Version #1 demonstrates how to read and write an Excel worksheet with base Python and the xlrd and xlwt add-in modules. Again, this first example gives you the ability to transfer all of the data on one worksheet to an output file. The subsequent examples in the chapter show you how to select specific data to write to the output file, how to process multiple worksheets, and how to process multiple workbooks.

Version #2: Pandas Add-in Module

#!/usr/bin/env python3
import pandas as pd
import sys

input_file = sys.argv[1]
output_file = sys.argv[2]

data_frame = pd.read_excel(input_file, sheetname='input_worksheet_name')
writer = pd.ExcelWriter(output_file)
data_frame.to_excel(writer, sheet_name='output_worksheet_name', index=False)
writer.save()

Version #2 demonstrates how to accomplish the same task with Pandas. Again, you simply use read_excel to read the input worksheet and to_excel to write to the output worksheet.

Example Section #3: Databases

Query a table and write results to a file

#!/usr/bin/env python
import csv
import MySQLdb
import sys

output_file = sys.argv[1]

con = MySQLdb.connect(host='localhost', port=3306, db='my_suppliers', user='my_username', passwd='my_password')
c = con.cursor()

filewriter = csv.writer(open(output_file, 'wb'), delimiter=',')
header = ['Supplier Name','Invoice Number','Part Number','Cost','Purchase Date']
filewriter.writerow(header)

c.execute("""SELECT * FROM Suppliers WHERE Cost > 700.0;""")
rows = c.fetchall()
for row in rows:
    filewriter.writerow(row)

This example demonstrates how to connect to a database, query a table, and write the resulting data to a CSV output file. Other examples in the chapter explain how to load data into a database table from a file and update records in a table based on data in a file.
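The query-and-export pattern isn't specific to MySQL. If you want to experiment without a database server, Python's built-in sqlite3 module supports the same cursor workflow; the sketch below mirrors the example above with an in-memory database and invented supplier rows:

```python
import csv
import sqlite3

# In-memory database standing in for the my_suppliers MySQL database;
# all supplier rows below are invented sample data
con = sqlite3.connect(":memory:")
c = con.cursor()
c.execute("""CREATE TABLE Suppliers
             (Supplier_Name TEXT, Invoice_Number TEXT,
              Part_Number TEXT, Cost REAL, Purchase_Date TEXT)""")
c.executemany("INSERT INTO Suppliers VALUES (?, ?, ?, ?, ?)", [
    ("Supplier X", "001-1001", "2341", 500.0, "2014-01-20"),
    ("Supplier Y", "50-9501", "7009", 750.0, "2014-01-30"),
    ("Supplier Z", "920-4803", "3321", 615.0, "2014-02-03"),
    ("Supplier Y", "50-9505", "7009", 920.0, "2014-02-10"),
])

# Query for the costly parts and write the result rows to a CSV file
c.execute("SELECT * FROM Suppliers WHERE Cost > 700.0")
results = c.fetchall()
with open("expensive_suppliers.csv", "w", newline="") as csv_out:
    filewriter = csv.writer(csv_out)
    filewriter.writerow(["Supplier Name", "Invoice Number",
                         "Part Number", "Cost", "Purchase Date"])
    filewriter.writerows(results)
con.close()
```

Note that sqlite3 uses "?" placeholders where MySQLdb uses "%s"; otherwise the connect/cursor/execute/fetchall flow is the same.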

I hope these examples give you a feel for the book. If you want to see what other topics are included in the book, please see the table of contents shown below. Foundations for Analytics with Python is scheduled to be available in May 2016. Please keep an eye out for it, and if you know other people who may be interested please point them to this blog post and the Amazon link.  Thank you : )

 

TABLE OF CONTENTS

CHAPTER
Introduction
Why Read This Book/Why Learn These Skills
Who Is This Book For
Why Windows
Why Python
Install Anaconda Python
Text Editors
Download Book Materials
Base Python and Pandas
Overview of Chapters

CHAPTER
Python Basics
How To Create a Python Script
How To Run a Python Script
Numbers
Strings
Regular Expressions/Pattern Matching
Dates
Lists
Tuples
Dictionaries
Control Flow
Functions
Exceptions
Reading a Text File
Reading Multiple Text Files with Glob
Writing to a Text File
Writing to a Comma Separated Values “CSV” File
Print Statements

CHAPTER
Comma Separated Values “CSV” Text Files
Reading and Writing a CSV File (String Manipulation)
Reading and Writing a CSV File (Standard csv Module)
Filtering for Rows
    Value in Row Meets a Condition
    Value in Row is in a Set of Interest
    Value in Row Matches a Pattern (Regular Expression)
Selecting Columns
    Index Values
    Column Headings
Reading Multiple CSV Files
    Count Number of Files and Rows and Columns in Each File
    Concatenate Data From Multiple Files
    Sum and Average a Set of Values Per File
Selecting Contiguous Rows
Adding a Header Row

CHAPTER
Microsoft Excel Files
Introspecting an Excel Workbook
Reading a Single Worksheet
    Formatting Dates
    Filtering for Rows
        Value in Row Meets a Condition
        Value in Row is in a Set of Interest
        Value in Row Matches a Pattern (Regular Expression)
    Selecting Columns
        Index Values
        Column Headings
Reading All Worksheets
    Filtering for Rows from All Worksheets
    Selecting Columns from All Worksheets
Reading a Subset of Worksheets
    Filtering for Rows from Subset of Worksheets
    Selecting Columns from Subset of Worksheets
Reading Multiple Workbooks
    Count Number of Workbooks and Rows and Columns in Each Workbook
    Concatenate Data from Multiple Workbooks
    Sum and Average a Set of Values Per Worksheet Per Workbook

CHAPTER
Databases
Python’s Standard sqlite3 Module
    Create a Database
    Create a Database Table
    Insert Hand-written Data into a Database Table
    Query a Database Table
    Insert Data from a CSV File into a Database Table
    Update Records in a Database Table with Data from a CSV File
MySQL Database
    Create a Database
    Create a Database Table
    Insert Data from a CSV File into a Database Table
    Query a Database Table and Write Output to a CSV File
    Update Records in a Database Table with Data from a CSV File

CHAPTER
Applications
Find a Set of Items in a Large Collection of Excel and CSV Files
Parse a CSV File and Calculate a Statistic for Any Number of Categories
Parse a Text File and Calculate a Statistic for Any Number of Categories

CHAPTER
Graphing and Plotting
matplotlib
pandas
ggplot
seaborn

CHAPTER
Data Analysis
Descriptive statistics
Regression
Classification

CHAPTER
Automation
Windows: scheduled tasks
Mac: cron jobs

CHAPTER
Conclusion
Where To Go From Here
    Additional Built-Ins/Standard Modules
    Additional Add-In Modules
    Data Structures
How To Go From Here

APPENDIX
Downloads
Python
xlrd
mysqlclient/MySQL-python/MySQLdb
MySQL

Intro to Julia: Reading and Writing CSV Files with R, Python, and Julia

Last year I read yhat’s blog post, Neural networks and a dive into Julia, which provides an engaging introduction to Julia, a high-level, high-performance programming language for technical computing.

One aspect of the language I found intriguing was its aim to be as fast as C, as easy to use as Python, and as easy for statistics as R. I enjoyed seeing that Julia’s syntax is similar to Python’s, that it has several graphing packages, including a ggplot2-inspired package called Gadfly, and that it has several structured data, statistics, and machine learning packages, including DataFrames for dealing with tabular data and StatsBase and MLBase, which provide tools for statistics and machine learning operations.

There are lots of great resources for learning Julia. There are introductory books, like “Getting Started with Julia Programming,” by Ivo Balbaert, and “The Julia Express,” by Bogomil Kaminski. There are online tutorials, like Programming in Julia, Julia by Example, Learn Julia in Y minutes, and Learn Julia the Hard Way. There are also video tutorials, including two “Introduction to Julia” videos by David Sanders at SciPy 2014 and a set of ten Julia video tutorials recorded at MIT in 2013.

Since I’ve been using Python and R to analyze data, and Julia aspires to make the best features of these languages available in one place, I decided to try Julia to see if it would be worthwhile to incorporate it into my toolbox. One of the first things I wanted to learn was the new Julia syntax I’d need to use to perform the operations I’ve been carrying out in Python and R. Some of the most common operations I perform are reading text and delimited input files and writing results to output files. Since these are very common operations, let’s discuss how to perform these operations in R, Python, and Julia. In a later post we can discuss different ways to filter for specific rows and columns in these languages.

To begin, let’s create a folder to work in and name it “workspace”. Next, let’s download a publicly-available data set, e.g. the wine quality data set (winequality-red.csv), into the folder. Let’s also create another folder called “output” inside the workspace folder where we can save the output files. At this point, we have the following set up:

folder_structure

R
Now that we have our workspace and an input file, let’s create R, Python, and Julia scripts to read the input data and write it to an output file. To create the R script, open a text editor and enter the following code:

#!/usr/bin/env Rscript
# For more information, visit: cbrownley.wordpress.com

#Collect the command line arguments into a variable called args
args <- commandArgs(trailingOnly = TRUE)
# Assign the first command line argument to a variable called input_file
input_file <- args[1]
# Assign the second command line argument to a variable called output_file
output_file <- args[2]

# Use R’s read.csv function to read the data into a variable called wine
# read.csv expects a CSV file with a header row, so
# sep = ',' and header = TRUE are default values
# stringsAsFactors = FALSE means don’t convert character vectors into factors
wine <- read.csv(input_file, sep = ',', header = TRUE, stringsAsFactors = FALSE)

# Use R’s write.csv function to write the data in the variable wine to the output file
# row.names = FALSE means don’t write an extra column of row names
# to the output file; we only want the original data columns
write.csv(wine, file = output_file, row.names = FALSE)

read_csv_R

Once you’ve pasted this code into the file, save the file as “read_csv.R” in the workspace folder and close the file. You can run this script by typing the following two commands on the command line, hitting Enter after each one:
chmod +x read_csv.R
./read_csv.R winequality-red.csv output/output_R.csv

When you run the script you won’t see any output printed to the screen, but the input data was written to a file called output_R.csv in the output folder.

A popular R package for reading and managing data is the data.table package. To use the data.table package instead of base R in the script, all you would need to do is add one require statement and edit the line that reads the contents of the input file into a variable:

#!/usr/bin/env Rscript
require(data.table)

args <- commandArgs(trailingOnly = TRUE)
input_file <- args[1]
output_file <- args[2]

wine <- fread(input_file)

write.csv(wine, file = output_file, row.names = FALSE)

To use this script instead of the first version, all you would need to do is save the file, e.g. as “read_csv_data_table.R”, run the same chmod command on this file, and then substitute this R script in the last command shown above:
./read_csv_data_table.R winequality-red.csv output/output_R_data_table.csv

Python
Now let’s create a Python script to perform the same operations. To create the Python script, open a text editor and enter the following code:

#!/usr/bin/env python
# For more information, visit: cbrownley.wordpress.com

# Import Python's built-in csv and sys modules, which have functions
# for processing CSV files and command line arguments, respectively
import csv
import sys

# Assign the first command line argument to a variable called input_file
input_file = sys.argv[1]
# Assign the second command line argument to a variable called output_file
output_file = sys.argv[2]

# Open the input file for reading and close automatically at end
with open(input_file, 'rU') as csv_in_file:
    # Open the output file for writing and close automatically at end
    with open(output_file, 'wb') as csv_out_file:
        # Create a file reader object for reading all of the input data
        filereader = csv.reader(csv_in_file)
        # Create a file writer object for writing to the output file
        filewriter = csv.writer(csv_out_file)
        # Use a for loop to process the rows in the input file one-by-one
        for row in filereader:
            # Write the row of data to the output file
            filewriter.writerow(row)

read_csv_Python

Once you’ve pasted this code into the file, save the file as “read_csv.py” and close the file. You can run this script by typing the following two commands on the command line, hitting Enter after each one:
chmod +x read_csv.py
./read_csv.py winequality-red.csv output/output_Python.csv

When you run the script you won’t see any output printed to the screen, but the input data was written to a file called output_Python.csv in the output folder.

A popular Python package for reading and managing tabular data is Pandas. Pandas provides many helpful functions, a couple of which simplify the syntax needed to read and write CSV files. For example, to perform the same reading and writing operations we performed above, the Pandas syntax is:

#!/usr/bin/env python
import sys
import pandas as pd

input_file = sys.argv[1]
output_file = sys.argv[2]

data_frame = pd.read_csv(input_file)
data_frame.to_csv(output_file, index=False)

To use this script instead of the first version, all you would need to do is save the file, e.g. as “read_csv_pandas.py”, run the same chmod command on this file, and then substitute this Python script in the last command shown above:
./read_csv_pandas.py winequality-red.csv output/output_Python_Pandas.csv

Julia
Now let’s create a Julia script to perform the same operations. To create the Julia script, open a text editor and enter the following code:

#!/usr/bin/env julia
# For more information, visit: cbrownley.wordpress.com

# Assign the first command line argument to a variable called input_file
input_file = ARGS[1]
# Assign the second command line argument to a variable called output_file
output_file = ARGS[2]

# Open the output file for writing
out_file = open(output_file, "w")
# Open the input file for reading and close automatically at end
open(input_file, "r") do in_file
    # Use a for loop to process the rows in the input file one-by-one
    for line in eachline(in_file)
        # Write the row of data to the output file
        write(out_file, line)
    # Close the for loop
    end
# Close the input file handle
end
# Close the output file handle
close(out_file)

read_csv_Julia

Once you’ve pasted this code into the file, save the file as “read_csv.jl” and close the file. You can run this script by typing the following two commands on the command line, hitting Enter after each one:
chmod +x read_csv.jl
./read_csv.jl winequality-red.csv output/output_Julia.csv

When you run the script you won’t see any output printed to the screen, but the input data was written to a file called output_Julia.csv in the output folder.

A popular Julia package for reading and managing tabular data, especially when the data may contain NAs, is DataFrames. DataFrames provides many helpful functions, a couple of which simplify the syntax needed to read and write CSV files. For example, to perform the same reading and writing operations we performed above, the DataFrames syntax is:

#!/usr/bin/env julia
using DataFrames

input_file = ARGS[1]
output_file = ARGS[2]

data_frame = readtable(input_file, separator = ',')
writetable(output_file, data_frame)

To use this script instead of the first version, all you would need to do is save the file, e.g. as “read_csv_data_frames.jl”, run the same chmod command on this file, and then substitute this Julia script in the last command shown above:
./read_csv_data_frames.jl winequality-red.csv output/output_Julia_DataFrames.csv

folder_structure_all_files

As you can see, when it comes to reading, processing, and writing CSV files, the differences in syntax between Python and Julia are very slight. For example, Python’s “with open()” statements are “open() do … end” statements in Julia, and for loops in Julia drop the colon required in Python and instead require the end keyword. These differences are so minor that I’ve found it very easy to pick up Julia syntax and transition back and forth between Python and Julia.

Now that we know how to read and write all of the data in a CSV-formatted input file with R, Python, and Julia, the next step is to figure out how to filter for specific rows and columns in these languages. Then we can move on to processing lots of files in a directory and also dealing with Excel files. We’ll cover these topics in future posts.

To Read or Not to Read? Hopefully the Former!

It has been far too long since I posted to this blog.  It’s time to let you know what I’ve been working on over the past few months.  I’ve been writing a book on a topic that is near and dear to my heart.

The book is titled Multi-objective Decision Analysis: Managing Trade-offs and Uncertainty.  It is an applied, concise book that explains how to conduct multi-objective decision analyses using spreadsheets.

The book is scheduled to be published by Business Expert Press in 2013.  For a little more information about my forthcoming book, please read the abstract shown below:

“Whether managing strategy, operations, or products, making the best decision in a complex uncertain business environment is challenging.  One of the major difficulties facing decision makers is that they often have multiple, competing objectives, which means trade-offs will need to be made.  To further complicate matters, uncertainty in the business environment makes it hard to explicitly understand how different objectives will impact potential outcomes.  Fortunately, these problems can be solved with a structured framework for multi-objective decision analysis that measures trade-offs among objectives and incorporates uncertainties and risk preferences.

This book is designed to help decision makers by providing such an analysis framework implemented as a simple spreadsheet tool.  This framework helps structure the decision making process by identifying what information is needed in order to make the decision, defining how that information should be combined to make the decision and, finally, providing quantifiable evidence to clearly communicate and justify the final decision.

The process itself involves minimal overhead and is perfect for busy professionals who need a simple, structured process for making, tracking, and communicating decisions.  With this process, decision making is made more efficient by focusing only on information and factors that are well-defined, measureable, and relevant to the decision at hand.  The clear characterization of the decision required by the framework ensures that a decision can be traced and is consistent with the intended objectives and organizational values.  Using this structured decision-making framework, anyone can effectively and consistently make better decisions to gain a competitive and strategic advantage.”

Look for my forthcoming book, Multi-objective Decision Analysis, on the bookshelves in 2013!

Source: favim.com

Know Thyself: The Value of Specifying Your Values

A few days ago I was chatting with a friend about a new company initiative she’s going to be working on.  The new initiative is an important strategic maneuver for the company.  If it is designed and implemented well, the company will be well-positioned to compete in the new market.  However, if it isn’t designed or implemented well, it will significantly drain the company’s limited management and financial resources and likely weaken the company’s competitive edge.

As our conversation continued, one important problem with the new initiative became apparent.  Based on available information, it appears the overall strategy and objectives for the new initiative aren’t well thought out.  This is an important problem for several reasons: it makes it difficult for workers to know which tasks will advance the initiative, difficult for managers to track and guide progress across all of the project’s phases, and it impedes clear communication.

Having identified the problem with the current state of the initiative, namely, that management has not clearly defined what they want to achieve with the initiative, we began discussing potential solutions.  The first thing we wanted to do was brainstorm the factors we thought were important to the company with respect to the initiative; however, it quickly became clear that we first needed to define some terminology.

The problem was that we were using different words to describe the ideas we wanted to discuss.  One minute we were talking about the company’s values, another minute we were talking about the company’s objectives, and the next minute we were talking about the company’s goals.  We needed to agree on the definitions of the terms we were using.  After spending some time discussing possible definitions, we agreed on the following definitions:

  • Values are the areas of concern, considerations, or matters you think are significant enough to be taken into account when evaluating alternatives.  For example, values for the company considering alternative ways of rolling out the initiative may be ease of implementation, company image, and profit.
  • Objectives augment values by specifying the preferred direction of movement.  For example, the company considering alternative ways of rolling out the initiative might adopt the objective of maximizing ease of implementation, in which case an alternative that is easier to implement is more desirable.
  • Goals are thresholds of achievement with respect to values and objectives; a given alternative either achieves a goal or it doesn’t.  For example, the company might have a goal of implementing the initiative within eight weeks.  For a given alternative, this goal may or may not be achievable.
  • Finally, underlying each of these terms is the idea of a measure, a measuring scale for the degree of attainment of an objective.  For example, the company may use “annual profit in dollars” as the measure for the objective of increasing profit.
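To make the relationships among these four terms concrete, here is a minimal sketch in Python.  The class names, directions, and the eight-week threshold logic are my own illustration of the definitions above, not anything from the conversation itself: a value names an area of concern, an objective adds a preferred direction, a measure gives it a scale, and a goal is a threshold on that measure that an alternative either meets or doesn’t.

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    MAXIMIZE = "maximize"  # more of the measure is better
    MINIMIZE = "minimize"  # less of the measure is better

@dataclass
class Objective:
    value: str            # underlying area of concern, e.g. "profit"
    direction: Direction  # preferred direction of movement
    measure: str          # measuring scale, e.g. "annual profit in dollars"

@dataclass
class Goal:
    objective: Objective
    threshold: float  # level of achievement on the objective's measure

    def met_by(self, score: float) -> bool:
        """A given alternative either achieves the goal or it doesn't."""
        if self.objective.direction is Direction.MAXIMIZE:
            return score >= self.threshold
        return score <= self.threshold

# The running example from the post: roll out the initiative within eight weeks.
speed = Objective("ease of implementation", Direction.MINIMIZE,
                  "weeks to implement")
eight_weeks = Goal(speed, threshold=8)

print(eight_weeks.met_by(6))   # a 6-week rollout meets the goal
print(eight_weeks.met_by(10))  # a 10-week rollout does not
```

Writing the terms down this way makes the distinction easy to check: an objective never has a threshold, and a goal is always evaluated against a specific alternative’s score on a measure.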

By thinking carefully about these terms, and agreeing on their definitions, we developed a common vocabulary we could use to communicate clearly with one another.  Having defined the terminology, we brainstormed the company’s values and objectives with respect to the initiative and even defined the measures we would use to gauge the degree of attainment of the objectives.

From start to finish, the whole conversation didn’t take more than an hour, but by the end we had a shared understanding of the values and objectives the company should try to achieve with the initiative, a shared understanding of how we would measure degrees of attainment of the objectives, and significant insight into how we would evaluate the alternative ways of rolling out the initiative.  Not bad for an hour-long conversation.

This structuring process and terminology is applicable to any situation in which you want to evaluate alternatives based on multiple values and objectives.  By specifying the values you want to use to evaluate your alternatives and defining the objectives and measures you’ll use to judge the degree of attainment of your objectives, you’ll reap the significant benefits of thoroughly understanding your decision situation, being able to articulate a clear rationale for your decision, and being able to identify the alternative that, according to your values and objectives, is the most valuable to you.

You Might Be a Decision Analyst If…

Over the holidays, I visited my sister and brother-in-law.  While we were chatting, he mentioned he wanted to refurbish some audio equipment and was trying to decide between fixing the electrical components himself and hiring an expert to do it for him.  He said he was concerned with the potential cost and quality of the repairs, as well as how long it would take to complete the repairs, but he was uncertain about the range of values these three factors could take under the two alternatives.  I must have been smiling from ear to ear.  Without even realizing it, he’d described a simple, yet interesting multi-objective decision under uncertainty, and I wanted to show him how I could help him with his decision situation.

I pulled out a sheet of paper and quickly drew a rough influence diagram to represent his decision, the three uncertainties, and the resulting outcome “node” of his confidence or pride in the refurbished system.  Then I asked him about the range of costs the components could take and the range of time and quality the repairs could take.  After drafting probability distributions for the three factors, we chatted about his value functions and weights for the three factors and assigned scores and probabilities to the two alternatives across all three factors.  I transferred all of this information, which only took a few minutes to collect, into a spreadsheet and quickly churned out preliminary certainty equivalents for the two alternatives.  After performing a quick sensitivity analysis on some of the model’s inputs, including the weights and multi-attribute risk tolerance, we determined we had robust certainty equivalents for the alternatives.
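The spreadsheet calculation described above can be sketched in a few lines of Python.  All of the numbers below are invented for illustration, and the exponential utility function is one common choice for modeling constant risk tolerance, not necessarily the exact form used that afternoon: each alternative gets a few probability-weighted scenarios scored 0–1 on cost, quality, and time; an additive value function combines the scores with weights; and the certainty equivalent is the utility-inverse of the expected utility.

```python
import math

RISK_TOLERANCE = 0.5  # multi-attribute risk tolerance (in 0-1 value units)

def utility(v, rho=RISK_TOLERANCE):
    """Exponential utility over a 0-1 multi-attribute value."""
    return 1 - math.exp(-v / rho)

def certainty_equivalent(outcomes, rho=RISK_TOLERANCE):
    """Invert the utility of the expected utility of (prob, value) outcomes."""
    eu = sum(p * utility(v, rho) for p, v in outcomes)
    return -rho * math.log(1 - eu)

# Weights over the three factors my brother-in-law cared about (sum to 1).
weights = {"cost": 0.4, "quality": 0.4, "time": 0.2}

# Probability-weighted scenarios for each alternative; each scenario holds
# 0-1 single-attribute scores (1 = best).  Numbers are made up.
alternatives = {
    "do it yourself": [
        (0.3, {"cost": 0.9, "quality": 0.4, "time": 0.3}),
        (0.7, {"cost": 0.8, "quality": 0.7, "time": 0.6}),
    ],
    "hire an expert": [
        (0.5, {"cost": 0.4, "quality": 0.9, "time": 0.8}),
        (0.5, {"cost": 0.3, "quality": 0.8, "time": 0.9}),
    ],
}

def multiattribute_value(scores):
    """Additive value function: weighted sum of single-attribute scores."""
    return sum(weights[f] * scores[f] for f in weights)

for name, scenarios in alternatives.items():
    outcomes = [(p, multiattribute_value(s)) for p, s in scenarios]
    print(f"{name}: certainty equivalent = {certainty_equivalent(outcomes):.3f}")
```

A sensitivity analysis like the one we ran amounts to sweeping the weights or `RISK_TOLERANCE` over plausible ranges and checking whether the ranking of the certainty equivalents flips.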

My brother-in-law was pleased with the whole exercise.  It helped him frame and understand the decision situation.  By breaking the problem down into objectives, uncertainties, scores, and values, he could clearly see how changes in those factors would affect which alternative was preferred.  Best of all, in this case, the analysis was quick and free…unless you count the cost of the sandwiches we ate while we worked on the problem.  I enjoyed conducting this decision analysis with my brother-in-law and was very pleased that he thought it had been a valuable use of his lunchtime.

Do you have a similar story?  Have you ever helped a friend or family member with an interesting decision problem?  Are you working on an interesting problem at work?  When you get a chance, please take a few minutes to write about and share your experience.  You can think of it as your chance to tell the next “You might be a decision analyst if…” story.