Tuesday, December 20, 2011

Trading and predictive analytics

I attended today's class in the course Trading Strategies and Systems offered by Prof Vasant Dhar from NYU Stern School of Business. Luckily, Vasant is offering the elective course here at the Indian School of Business, so no need for transatlantic travel.

The topic of this class was the use of news in trading. I won't disclose any trade secrets (you'll have to attend the class for that), but here's my point: Trading is a striking example of the distinction between explanation and prediction. Generally, techniques are based on correlations and on "blackbox" predictive models such as neural nets. In particular, text mining and sentiment analysis are used for extracting information from (often unstructured) news articles for the purpose of prediction.

Vasant mentioned the practical advantage of a machine-learning approach for extracting useful content from text over linguistics know-how. This reminded me of a famous comment by Frederick Jelinek, a prominent
Natural Language Processing researcher who passed away recently:
"Whenever I fire a linguist our system performance improves" (Jelinek, 1998)
This comment was based on Jelinek's experience at IBM Research, while working on computer speech recognition and machine translation.

Jelinek's comment did not make linguists happy. He later defended this claim in a paper entitled "Some of My Best Friends are Linguists" by commenting,
"We all hoped that linguists would provide us with needed help. We were never reluctant to include linguistic knowledge or intuition into our systems; if we didn't succeed it was because we didn't fi nd an effi cient way to include it."
Note: there are some disputes regarding the exact wording of the quote ("Anytime a linguist leaves the group the recognition rate goes up") and its timing -- see note #1 in the Wikipedia entry.

Wednesday, December 07, 2011

Polleverywhere.com -- how it worked out

Following up on my earlier post about the use of polleverywhere.com for polling in class, here is a summary of my experience using it in a data mining elective course @ ISB (38 students, after four sessions):
  • Creating polls: After a few tries and with a few very helpful tips from a PE representative, I was able to create polls and embed them into my PowerPoint slides. This is relatively easy and user-friendly. One feature that I use a lot but is currently missing in PE is the inclusion of a figure on the poll slide (for example, a snippet of some software output). Although you can paste the image onto the PPT slide, it takes a bit of testing to place it so that it does not overlap with the poll. Also, if you need to run the poll in a browser instead of the PPT (see below), the image won't be there...
  • Operation in class: PE requires a good Internet connection for the instructor and for all users responding via laptops or other devices on the wireless network. Although wireless is generally operational in the classroom that I used, I did encounter a few occasions when it was flaky, which is very disruptive (the poll does not load; students cannot respond). Second, I found that voting takes much longer with mobiles/laptops than with clickers. What would have taken 30 seconds with clickers can take several minutes with PE voting.
  • Student adoption: During the first session students were curious and quickly figured out how to vote. Students could either vote using a browser (I created the page pollev.com/profgalit where live polls would show up) or, if they lacked Internet access, tweet from their mobiles via SMS (Airtel: free SMS to 53000; other carriers: SMS to Bangalore number 09243000111 via smstweet.in). As the sessions progressed, the number of voters dropped drastically. I suspected that this might be a result of my changing the settings to allow only registered users to vote, so I switched back to "anyone can vote", yet the voting percentage remained very low.
I have never graded voting, but rather use it as a fun active learning tool. With clickers the response rate was typically around 80-90%, while with PE it is currently lower than 50%. Given our occasional Internet challenges, the longer voting time, and especially the low response rate, I will be going back to clickers for now.

I foresee that PE would work nicely in a setting such as a one-time talk at a large conference, or a one-day workshop for execs. I will also mention the excellent and timely support by PE. And, of course, the low price!

Monday, October 17, 2011

Early detection of what?

The interest in using pre-diagnostic data for the early detection of disease outbreaks has evolved in interesting ways in the last 10 years. In the early 2000s, I was involved in an effort to explore the potential of non-traditional data sources, such as over-the-counter pharmacy sales and web searches on medical websites, which might give earlier signs of a disease outbreak than confirmed diagnostic data (lab tests, doctor diagnoses, etc.). The pre-diagnostic data sources that we looked at were not only expected to show an earlier footprint of an outbreak compared to traditional diagnostic data; they were also collected at higher frequency (typically daily) than the weekly or even less frequent diagnostic data, and were made available with much less lag time. The general conclusion was that there indeed was potential for improving detection time using such data (and for that purpose we investigated and developed adequate data analytic methods). Evaluation was based on simulating outbreak footprints, which is a challenge in itself (what does a flu outbreak look like in pharmacy sales?), and on examining past data with known outbreaks (where there is often no consensus on the outbreak start date) -- for papers on these issues see here.

A few years ago, Google came out with Google Flu Trends, which monitors web searches for terms that are related to flu, with the underlying assumption that people (or their relatives, friends, etc.) who are experiencing flu-like symptoms would be searching for related terms on the web. Google compared its performance to the weekly diagnostic data from the Centers for Disease Control and Prevention (CDC). In a joint paper by Google and CDC researchers, they claimed:
we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. (also published in Nature)
Blue = Google flu estimate; Orange = CDC data. From google.org/flutrends/about/how.html

What can you do if you have an early alert of a disease outbreak? The information can be used for stockpiling medicines, vaccination plans, raising public awareness, preparing hospitals, and more. Now comes the interesting part: recently, there has been criticism of the Google Flu Trends claims, saying that "while Google Flu Trends is highly correlated with rates of [Influenza-like illness], it has a lower correlation with actual influenza tests positive". In other words, Google detects not a flu outbreak, but rather a perception of flu. Does this mean that Google Flu Trends is useless? Absolutely not. It just means that the goal and the analysis results must be aligned more carefully. As the Popular Mechanics blog writes:
Google Flu Trends might, however, provide some unique advantages precisely because it is broad and behavior-based. It could help keep track of public fears over an epidemic 
Aligning the question of interest with the data (and analysis method) is related to what Ron Kenett and I call "Information Quality", or "the potential of a dataset to answer a question of interest using a given data analysis method". In the early disease detection problem, the lesson is that diagnostic and pre-diagnostic data should not just be considered two different data sets (monitored perhaps with different statistical methods), but they also differ fundamentally in terms of the questions they can answer.

Saturday, October 01, 2011

Language and psychological state: explain or predict?

Quite a few of my social science colleagues think that predictive modeling is not a kosher tool for theory building. In our 2011 MISQ paper "Predictive Analytics in Information Systems Research" we argue that predictive modeling has a critical role to play not only in theory testing but also in theory building. How does it work? Here's an interesting example:

The new book The Secret Life of Pronouns by the cognitive psychologist Pennebaker is a fascinating read in many ways. The book describes how analysis of written language can be predictive of psychological state. In particular, the author describes an interesting text mining approach that analyzes text written by a person and creates a psychological profile of the writer. In the author's context, the approach is used to study the effect of writing on recovery from psychological trauma. You can get a taste of word analysis on the AnalyzeWords.com website, run by the author and his colleagues, which analyzes the personality of a tweeter.

In the book, Pennebaker describes how the automated analysis of language has shed light on the probability that people who underwent psychological trauma will recuperate. For instance, people who used a moderate amount of negative language were more likely to improve than those who used too little or too much negative language. Or, people who tended to change perspectives in their writing over time (from "I" to "they" or "we") were more likely to improve.
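To make the idea of word-level analysis concrete, here is a minimal, hypothetical sketch of category-based word counting. It is not Pennebaker's tool (which relies on carefully validated word categories); the categories, word lists, and example text below are made up for illustration only.

```python
# A minimal sketch of category-based word counting (illustrative word lists only).
import re
from collections import Counter

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
COLLECTIVE = {"we", "us", "our", "ours", "they", "them", "their"}
NEGATIVE = {"sad", "angry", "hurt", "afraid", "guilty", "worthless"}

def word_profile(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)

    def share(vocab):
        # percentage of all words that fall in the given category
        return 100 * sum(counts[w] for w in vocab) / total

    return {
        "first_person_pct": share(FIRST_PERSON),
        "collective_pct": share(COLLECTIVE),
        "negative_pct": share(NEGATIVE),
    }

print(word_profile("I was sad and afraid, but we worked through it together."))
```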

Now comes a key question. In the words of the author (p. 14): "Do words reflect a psychological state or do they cause it?". The text mining application is obviously a predictive tool that is built on correlations/associations. Yet, by examining when it predicts accurately and studying the reasons for the accurate (or inaccurate) predictions, the predictive tool can shed insightful light on possible explanations, linking results to existing psychological theories and giving ideas for new ones. Then comes "closing the circle", where the predictive modeling is combined with explanatory modeling. For testing the explanatory power of words on psychological state, the way to go is experiments. And indeed, the book describes several such experiments investigating the causal effect of words on psychological state, which seem to indicate that there is no causal relationship.

[Thanks to my text-mining-expert colleague Nitin Indurkhya for introducing me to the book!]

Monday, September 19, 2011

Statistical considerations and psychological effects in clinical trials

I find it illuminating to read statistics "bibles" in various fields, which not only open my eyes to different domains, but also present the statistical approach and methods somewhat differently, considering unique domain-specific issues that cause "hmmmm" moments.

The 4th edition of Fundamentals of Clinical Trials, whose authors combine extensive practical experience at NIH and in academia, is full of hmmm moments. In one, the authors mention an important issue related to sampling that I have not encountered in other fields. In clinical trials, the gold standard is to allocate participants to either an intervention or a non-intervention (baseline) group randomly, with equal probabilities. In other words, half the participants receive the intervention and the other half does not (the non-intervention can be a placebo, the traditional treatment, etc.). The authors advocate a 50:50 ratio, because "equal allocation is the most powerful design". While there are reasons to change the ratio in favor of the intervention or baseline group, equal allocation appears to have an important additional psychological advantage over unequal allocation in clinical trials:
Unequal allocation may indicate to the participants and to their personal physicians that one intervention is preferred over the other (pp. 98-99)
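As a quick aside on the claim that "equal allocation is the most powerful design", here is a minimal sketch (with a made-up effect size, standard deviation, and total sample size) showing how the approximate power of a two-sample z-test drops as the allocation moves away from 50:50 while the total sample size stays fixed.

```python
# A minimal sketch: power of a two-sample z-test for a difference in means,
# as a function of the allocation ratio, holding the total sample size fixed.
# delta, sigma, and n_total are illustrative values, not from the book.
from scipy.stats import norm

def power(n_total, frac_treatment, delta=0.5, sigma=1.0, alpha=0.05):
    n1 = n_total * frac_treatment
    n2 = n_total * (1 - frac_treatment)
    se = sigma * (1 / n1 + 1 / n2) ** 0.5        # std error of the difference
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - delta / se)     # approximate power (one tail dominates)

for frac in [0.5, 0.6, 0.7, 0.8]:
    print(f"{int(frac*100)}:{int((1-frac)*100)} allocation -> power = {power(200, frac):.3f}")
```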
Knowledge of the sample design by the participants and/or the physicians also affects how randomization is carried out. It becomes a game between the designers and the participants and staff, where the two sides have opposing interests: to blur vs. to uncover the group assignments before they are made. This gaming requires devising special randomization methods (which, in turn, require data analysis that takes the randomization mechanism into account).

For example, to assure an equal number of participants in each of the two groups, given that participants enter sequentially, "block randomization" can be used. For instance, to assign 4 people to one of two groups A or B, consider all the possible arrangements AABB, AABA, etc., then choose one sequence at random, and assign participants accordingly. The catch is that if the staff have knowledge that the block size is 4 and know the first three allocations, they automatically know the fourth allocation and can introduce bias by using this knowledge to select every fourth participant.
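Here is a minimal sketch of block randomization with a block size of 4 for two equally allocated groups. It is an illustration of the idea, not the book's implementation.

```python
# A minimal sketch of block randomization: within each block of 4,
# two A's and two B's appear in a random order.
import random

def block_randomization(n_participants, block_size=4):
    assignments = []
    while len(assignments) < n_participants:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

print(block_randomization(12))
# The vulnerability described above: once the first 3 assignments of a block
# are known (say A, B, A), the 4th assignment is forced (B).
```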

Where else does such a psychological effect play a role in determining sampling ratios? In applications where participants and other stakeholders have no knowledge of the sampling scheme this is obviously a non-issue. For example, when Amazon or Yahoo! present different information to different users, the users have no idea about the sample design, and maybe do not even know that they are in an experiment. But how is the randomization achieved? Unless the randomization process is fully automated and not susceptible to reverse engineering, someone in the technical department might decide to favor friends by allocating them to the "better" group...

Thursday, September 15, 2011

Mining health-related data: How to benefit scientific research

Image from KDnuggets.com
While debates over privacy issues related to electronic health records are still ongoing, predictive analytics is beginning to be used with administrative health data (available to health insurance companies, aka "health provider networks"). One such venue is large data mining contests. Let me describe a few and then get to my point about their contribution to public health, medicine, and data mining research.

The latest and grandest is the ongoing $3 million prize contest by the Heritage Provider Network, which opened in 2010 and lasts 2 years. The contest's stated goal is to create "an algorithm that predicts how many days a patient will spend in a hospital in the next year". Participants get a dataset of de-identified medical records of 100,000 individuals, on which they can train their algorithms. The article on KDnuggets.com suggests that this competition's goal is "to spur development of new approaches in the analysis of health data and create new predictive algorithms."

The 2010 SAS Data Mining Shootout contest was also health-related. Unfortunately, the contest webpage is no longer available (the problem description and data were previously available here), and I couldn't find any information on the winning strategies. From an article in KDNuggets:

"analyzing the medical, demographic, and behavioral data of 50,788 individuals, some of whom had diabetes. The task was to determine the economic benefit of reducing the Body Mass Indices (BMIs) of a selected number of individuals by 10% and to determine the cost savings that would accrue to the Federal Government's Medicare and Medicaid programs, as well as to the economy as a whole"
In 2009, the INFORMS data mining contest was co-organized by IBM Research and Health Care Intelligence, and focused on "health care quality". Strangely enough, this contest website is also gone. A brief description by the organizer (Claudia Perlich) is given on KDnuggets.com, stating the two goals:
  1. modeling of a patient transfer guideline for patients with a severe medical condition from a community hospital setting to a tertiary hospital provider, and
  2. assessment of the severity/risk of death of a patient's condition.
What about presentations/reports from the winners? I had a hard time finding any (here is a deck of slides by a group competing in the 2011 SAS Shootout, also health-related). But photos of winners holding awards and checks abound.

If these health-related data mining competitions are to promote research and solutions in these fields, then the contest webpages with the problem description and data, as well as presentations/reports by the winners, should remain publicly available (as they are for the annual KDD Cup competitions by the ACM). Posting only names and photos of the winners makes data mining competitions look more like a consulting job where the data provider is interested in solving one particular problem for its own (financial or other) benefit. There is definitely scope for a data mining group/organization to collect all this information while it is live and post it on one central website.

Wednesday, September 07, 2011

Multiple testing with large samples

Multiple testing (or multiple comparisons) arises when multiple hypotheses are tested on the same dataset via statistical inference. If each test has false alert level α, then the combined false alert rate of testing k hypotheses (also called the "overall type I error rate") can be as large as 1-(1-α)^k, which grows quickly toward 1 as the number of hypotheses k increases. This is a serious problem, and ignoring it can lead to false discoveries. See an earlier post with links to examples.

There are various proposed corrections for multiple testing, the most basic principle being reducing the individual α's. However, the various corrections suffer in one way or another from reduced statistical power (the probability of detecting a real effect). One important approach is to limit the number of hypotheses to be tested. All this is not new to statisticians, nor to some circles of researchers in other areas (a 2008 technical report by the US Department of Education nicely summarizes the issue and proposes solutions for education research).
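To put numbers on the problem and on the most basic correction, here is a minimal sketch computing the worst-case family-wise error rate 1-(1-α)^k and the Bonferroni-adjusted per-test level α/k, for a few illustrative values of k.

```python
# A minimal sketch: worst-case family-wise error rate for k independent tests,
# and the Bonferroni-corrected per-test level that caps it at alpha.
alpha = 0.05
for k in [1, 5, 10, 20, 50]:
    fwer = 1 - (1 - alpha) ** k          # chance of at least one false alert
    bonferroni = alpha / k               # per-test level under Bonferroni
    print(f"k={k:3d}  FWER={fwer:.3f}  Bonferroni alpha={bonferroni:.4f}")
```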

"Large-Scale" = many measurements
The multiple testing challenge has become especially prominent in analyzing micro-array genomic data, where datasets have measurements on many genes (k) for a few people (n). In this new area, inference is used more in an exploratory fashion, rather than confirmatory. The literature on "large-k-small-n" problems has also grown considerably since, including a recent book Large-Scale Inference by Bradley Efron.

And now I get to my (hopefully novel) point: empirical research in the social sciences is now moving to the era of "large n and same old k" datasets. This is what I call "large samples". With large datasets becoming more easily available, researchers test a few hypotheses using tens or hundreds of thousands of observations (such as lots of online auctions on eBay or many books on Amazon). Yet, the focus has remained on confirmatory inference, where a set of hypotheses derived from a theoretical model is tested using data. What happens to multiple testing issues in this environment? My claim is that they are gone! Decrease α to your liking, and you will still have more statistical power than you can handle.

But wait, it's not so simple: With very large samples, the p-value challenge kicks in, such that we cannot use statistical significance to infer practically significant effects. Even if we decrease α to a tiny number, we'll still likely get lots of statistically-significant-but-practically-meaningless results.
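A minimal simulation (with made-up numbers) illustrates the point: a difference of 0.01 standard deviations between two groups, which is negligible in practice, becomes highly statistically significant once the sample is large enough.

```python
# A minimal sketch: a practically negligible effect becomes "significant" as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in [100, 10_000, 1_000_000]:
    x = rng.normal(loc=0.00, scale=1, size=n)
    y = rng.normal(loc=0.01, scale=1, size=n)   # a 0.01-SD difference: trivial in practice
    t, p = stats.ttest_ind(x, y)
    print(f"n={n:>9,}  p-value={p:.4f}")
```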

The bottom line is that with large samples (large-n-same-old-k), the approach to analyzing data is totally different: no need to worry about multiple testing, which is so crucial in small samples. This is only one among many other differences between small-sample and large-sample data analysis.


Tuesday, September 06, 2011

"Predict" or "Forecast"?

What is the difference between "prediction" and "forecasting"? I heard this being asked quite a few times lately. The Predictive Analytics World conference website has a Predictive Analytics Guide page with the following Q&A:

How is predictive analytics different from forecasting?
Predictive analytics is something else entirely, going beyond standard forecasting by producing a predictive score for each customer or other organizational element. In contrast, forecasting provides overall aggregate estimates, such as the total number of purchases next quarter. For example, forecasting might estimate the total number of ice cream cones to be purchased in a certain region, while predictive analytics tells you which individual customers are likely to buy an ice cream cone.
In a recent interview on "Data Analytics", Prof Ram Gopal asked me a similar question. I have a slightly different view of the difference: the term "forecasting" is used when we have a time series and are predicting the series into the future. Hence "business forecasts" and "weather forecasts". In contrast, "prediction" is the act of predicting in a cross-sectional setting, where the data are a snapshot in time (say, a one-time sample from a customer database). Here you use information on a sample of records to predict the value of other records (which can be a value that will be observed in the future). That's my personal distinction.



While forecasting has traditionally focused on providing "overall aggregate estimates", that has long changed, and forecasting methods are commonly used to provide individual estimates. Think again of weather forecasts -- you can get forecasts for very specific areas. Moreover, daily (and even minute-by-minute) weather forecasts are generated for many different geographical areas. Another example is SKU-level forecasting for inventory management purposes. Stores and large companies often use forecasting to predict demand for every product they carry. These are not aggregate values, but individual-product forecasts.

"Old fashioned" forecasting has indeed been around for a long time, and has been taught in statistics and operations research programs and courses. While some forecasting models require a lot of statistical expertise (such as ARIMA, GARCH and other acronyms), there is a terrific and powerful set of data-driven, computationally fast, automated methods that can be used for forecasting even at the individual product/service level. Forecasting, in my eyes, is definitely part of predictive analytics.

Monday, August 29, 2011

Active learning: going mobile in India

I've been using "clickers" since 2002 in all my courses. Clickers are polling devices that students use during class to answer multiple-choice questions that I include in my slides. They encourage students to participate (even the shy ones), they give the teacher immediate feedback about students' knowledge, and are a great ice-breaker for generating interesting discussions. Of course, clickers are also fun. Most students love this active learning technology (statistically speaking, around 90% love it and 10% don't).

Clicker technology has greatly evolved since 2002. Back then, my students would watch me (in astonishment) climbing on chairs before class to place receivers above the blackboard, to allow their infra-red, line-of-sight clickers (the size of TV remotes) to reach the receivers. The receivers were the size of a large matchbox. Slowly the clickers and receivers started shrinking in size and weight...


A few years later came the slick credit-card-size radio-frequency (RF) clickers that did not require line-of-sight. My receiver shrunk to the size of an obese USB stick.

I still love clickers, but am finding their price (hardware and software) unreasonable for education purposes. The high prices ($40/clicker in the USA) are also applicable in India, as I've discovered (a quote of over $4,000 for a set of 75 clickers and a receiver raised my eyebrows to my hairline). In addition, now that everyone carries around this gadget called a mobile phone, why burden my students with yet-more-hardware?


This brought me to research using mobiles for polling. I discovered www.polleverywhere.com, which offers a facility for creating polls via their website, then embedding the polls into slides (Power Point etc.). Students can respond with their mobile phones by sending an SMS, tweeting, or using the Internet. I am especially interested in the mobile option, to avoid needing wireless Internet connection, smartphones, or laptops in class.


So, how does this work in India?
The bad news: While in the USA and Canada the SMS option is cheap (local number), polleverywhere.com does not have a local number for India (you must text an Australian number).

The good news: Twitter! Students with Bharti Airtel plans can tweet to respond to a poll (that is, send an SMS to a local number in India). I just tested this from Bhutan, and tweeting works beautifully.

The even-better news: Those using other Indian carriers can still tweet using the cool workaround provided by www.smstweet.in. This allows tweeting to a number in Bangalore.

The cost? A fraction of the price of clickers for the university (around $700/year for 200 students using the system in parallel), and only local SMS charges for the students. How well will this system work in practice? I am planning to try it out in my upcoming course Business Intelligence Using Data Mining @ ISB, and will post about my experience.

Wednesday, August 17, 2011

Where computer science and business meet

Data mining is taught very differently at engineering schools and at business schools. At engineering schools, data mining is taught more technically, deciphering how different algorithms work. In business schools the focus is on how to use algorithms in a business context.

Business students with a computer science background can now enjoy both worlds: take a data mining course with a business focus, and supplement it with the free course materials from Stanford Engineering school's Machine Learning course (including videos of lectures and handouts by Prof Andrew Ng). There are a bunch of other courses with free materials as part of the Stanford Engineering Everywhere program.

Similarly, computer science students with a business background can take advantage of MIT's Sloan School of Management Open Courseware program, and in particular their Data Mining course (last offered in 2003 by Prof Nitin Patel). Unfortunately, there are no lecture videos, but you do have access to handouts.

And for instructors in either world, these are great resources!


Thursday, August 04, 2011

The potential of being good

Yesterday I happened to hear talks by two excellent speakers, both on major data mining applications in industry. One common theme was that both speakers gave compelling and easy-to-grasp examples of what data mining algorithms and statistics can do beyond human intelligence, and of how the two relate.

The first talk, by IBM Global Services' Christer Johnson, was given at the 2011 INFORMS Conference on Business Analytics and Operations Research (see video). Christer Johnson described the idea behind Watson, the artificial intelligence computer system developed by IBM that beat two champions of the Jeopardy quiz show. Two main points in the talk about the relationship between humans and data mining methods that I especially liked are:
  1. Data analytics methods are designed not only to give an answer, but also to evaluate how confident they are about the answer. In answering the Jeopardy questions, the data mining approach tells you not only what the most likely answer is, but also how confident it is about that answer.
  2. Building trust in an analytics tool occurs when you see it make mistakes and learn from those mistakes.
The second talk, "The Art and Science of Matching Items to Users" was given by Deepak Agarwal , a Yahoo! principle research scientist and fellow statistician, was webcasted at ISB's seminar series. You can still catch it on Aug 10 at Yahoo!'s Big Thinker Series in Bangalore. The talk was about recommender systems and their use within Yahoo!. Among various approaches used by Yahoo! to improve recommendations, Deepak described a main idea for improving the customization of news item displays on news.yahoo.com.

On the relation between human intelligence and automation: choosing which items to display on Yahoo! is a two-step process, where first human editors create a pool of potentially interesting news items, and then automated machine-learning algorithms choose which individual items to display from that pool.

Like Christer Johnson's point #2, Deepak illustrated the difference between "the answer" (what we statisticians call a point estimate) and "the potential of it being good" (what we call the confidence in the estimate, AKA variability) in a very cool way: Consider two news items of which one will be displayed to a user. The first item was already shown to 100 users, and 2 users clicked on links from that page. The second was shown to 10,000 users, and 250 users clicked on links. Which news item should you show to maximize clicks? (Yes, this is about ad revenues...) Although the first item has a lower click-through rate (2%), it is also less certain, in the sense that it is based on less data than item 2. Hence, it is potentially good. He then took this one step further: Combine the two! "Exploit what is known to be good, explore what is potentially good".
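To see the "potential of being good" in numbers, here is a minimal sketch computing approximate 95% confidence intervals for the two click-through rates in the example above (the normal approximation is crude with only 2 clicks, but it makes the point).

```python
# A minimal sketch: approximate 95% confidence intervals for two click-through rates.
import math

def ctr_interval(clicks, views, z=1.96):
    p = clicks / views
    se = math.sqrt(p * (1 - p) / views)          # normal-approximation standard error
    return p, max(p - z * se, 0), min(p + z * se, 1)

for name, clicks, views in [("item 1", 2, 100), ("item 2", 250, 10_000)]:
    p, lo, hi = ctr_interval(clicks, views)
    print(f"{name}: CTR={p:.3f}, approx 95% CI=({lo:.3f}, {hi:.3f})")
# Item 1's interval is much wider: its CTR estimate is lower but far less certain,
# which is exactly the "explore what is potentially good" argument.
```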

So what do we have here? Very practical and clear examples of why we care about variance, the weakness of point estimates, and expanding the notion of diversification to combining certain good results with uncertain not-that-good results.

Wednesday, July 27, 2011

Analytics: You want to be in Asia

Business Intelligence and Data Mining have become hot buzzwords in the West. Using Google Insights for Search to "see what the world is searching for" (see image below), we can see that the popularity of these two terms seems to have stabilized (if you expand the search to 2007 or earlier, you will see the earlier peak and also that Data Mining was hotter for a while). Click on the image to get to the actual result, with which you can interact directly. There are two very interesting insights from this search result:
  1. Looking at the "Regional Interest" for these terms, we see that the #1 country searching for these terms is India! Hong Kong and Singapore are also in the top 5. A surge of interest in Asia!
  2. Adding two similar terms that have the term Analytics, namely Business Analytics and Data Analytics, unveils a growing interest in Analytics (whereas the two non-analytics terms have stabilized after their peak).
What to make of this? First, it means Analytics is hot. Business Analytics and Data Analytics encompass methods for analyzing data that add value to a business or any other organization. Analytics includes a wide range of data analysis methods, from visual analytics to descriptive and explanatory modeling, and predictive analytics. From statistical modeling, to interactive visualization (like the one shown here!), to machine-learning algorithms and more. Companies and organizations are hungry for methods that can turn their huge and growing amounts of data into actionable knowledge. And the hunger is most pressing in Asia.
Click on the image to refresh the Google Insight for Search result (in a new window)

Thursday, July 14, 2011

Designing an experiment on a spatial network: To Explain or To Predict?

Image from http://www.slews.de
Spatial data are inherently important in environmental applications. An example is collecting data from air or water quality sensors. Such data collection mechanisms introduce dependence in the collected data due to spatial proximity/distance. This dependence must be taken into account not only in the data analysis stage (and there is a good statistical literature on spatial data analysis methods), but also in the design-of-experiments stage. One example of a design question: where should the sensors be located, and how many sensors are needed?

Where does explain vs. predict come into the picture? An interesting 2006 article by Dale Zimmerman called "Optimal network design for spatial prediction, covariance parameter estimation, and empirical prediction" tells the following story:
"...criteria for network design that emphasize the utility of the network for prediction (kriging) of unobserved responses assuming known spatial covariance parameters are contrasted with criteria that emphasize the estimation of the covariance parameters themselves. It is shown, via a series of related examples, that these two main design objectives are largely antithetical and thus lead to quite different “optimal” designs" 
(Here is the freely available technical report).

Monday, June 20, 2011

Got Data?!

The American Statistical Association's store used to sell cool T-shirts with the old-time beggar-statistician question "Got Data?" Today it is much easier to find data, thanks to the Internet. Dozens of student teams taking my data mining course have been able to find data from various sources on the Internet for their team projects. Yet, I often receive queries from colleagues in search of data for their students' projects. This is especially true for short courses, where students don't have sufficient time to search and gather data (which is highly educational in itself!).

One solution that I often offer is data from data mining competitions. KDD Cup is a classic, but there are lots of other data mining competitions that make huge amounts of real or realistic data available: past INFORMS Data Mining Contests (2008, 2009, 2010), ENBIS Challenges, and more. Here's one new competition to add to the list:

The European Network for Business and Industrial Statistics (ENBIS) announced the 2011 Challenge (in collaboration with SAS JMP). The title is "Maximising Click Through Rates on Banner Adverts: Predictive Modeling in the On Line World". It's a bit complicated to find the full problem description and data on the ENBIS website (you'll find yourself clicking-through endless "more" buttons - hopefully these are not data collected for the challenge!), so I linked them up.

It's time for T-shirts saying "Got Data! Want Knowledge?"

Friday, June 17, 2011

Scatter plots for large samples

While huge datasets have become ubiquitous in fields such as genomics, large datasets are now also beginning to infiltrate research in the social sciences. Data from eCommerce sites, online dating sites, etc. are now collected as part of research in information systems, marketing, and related fields. We can now find social science research papers with hundreds of thousands of observations and more.

A common type of research question in such studies is about the relationship between two variables. For example, how does the final price of an online auction relate to the seller's feedback rating? A classic exploratory tool for examining such questions (before delving into formal data analysis) is the scatter plot. In small sample studies, scatter plots are used for exploring relationships and detecting outliers.

Image from http://prsdstudio.com/ 
With large samples, however, the scatter plot runs into a few problems. With lots of observations, there is likely to be too much overlap between markers on the scatter plot, even to the point of insufficient pixels to display all the points.

Here are some large-sample strategies to make scatter plots useful:

  1. Aggregation: display groups of observations in a certain area of the plot as a single marker. Size or color can denote the number of aggregated observations (see the code sketch at the end of this post).
  2. Small-multiples: split the data into multiple scatter plots by breaking down the data into (meaningful) subsets. Breaking down the data by geographical location is one example. Make sure to use the same axis scales on all plots - this will be done automatically if your software allows "trellising".
  3. Sample: draw smaller random samples from the large dataset and plot them in multiple scatter plots (again, keep the axis scales identical on all plots).
  4. Zoom-in: examine particular areas of the scatter plot by zooming in.
Finally, with large datasets it is useful to consider charts that are based on aggregation such as histograms and box plots. For more on visualization, see the Visualization chapter in Data Mining for Business Intelligence.
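As a quick illustration of the aggregation strategy (#1 in the list above), here is a minimal sketch using simulated data and matplotlib's hexbin plot, which bins nearby points and maps the bin counts to color. The variable names and data are made up.

```python
# A minimal sketch of an aggregated scatter plot (hexbin) for a large sample.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 200_000
feedback = rng.gamma(shape=2, scale=50, size=n)           # e.g., seller feedback rating
price = 20 + 0.05 * feedback + rng.normal(0, 10, size=n)  # e.g., final auction price

plt.hexbin(feedback, price, gridsize=40, bins="log")      # log color scale for skewed counts
plt.colorbar(label="log10(count)")
plt.xlabel("Seller feedback rating")
plt.ylabel("Final price")
plt.title("Aggregated scatter plot (hexbin) for a large sample")
plt.show()
```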

Friday, May 20, 2011

Nice April Fool's Day prank

The recent issue of the Journal of Computational and Graphical Statistics published a short article by Columbia Univ Prof Andrew Gelman (I believe he is the most active statistician-blogger) called "Why tables are really much better than graphs", based on his April 1, 2009 blog post (note the difference in publishing speed between blogs and refereed journals!). The last parts made me laugh hysterically - so let me share them:

About creating and reporting "good" tables:
It's also helpful in a table to have a minimum of four significant digits. A good choice is often to use the default provided by whatever software you have used to fit the model. Software designers have chosen their defaults for a good reason, and I'd go with that. Unnecessary rounding is risky; who knows what information might be lost in the foolish pursuit of a "clean"-looking table?
About creating and reporting "good" graphs:
If you must make a graph, try only to graph unadorned raw data, so that you are not implying you have anything you do not. And I recommend using Excel, which has some really nice defaults as well as options such as those 3-D colored bar charts. If you are going to have a graph, you might as well make it pretty. I recommend a separate color for each bar—and if you want to throw in a line as well, use a separate y-axis on the right side of the graph.
Note: please do not follow these instructions for creating tables and graphs! Remember, this is an April Fool's Day prank!
From Stephen Few's examples of bad visualizations (http://perceptualedge.com/examples.php)

Monday, April 25, 2011

Google Spreadsheets for teaching probability?

In business schools it is common to teach statistics courses using Microsoft Excel, due to its wide accessibility and the familiarity of business students with the software. There is a large debate regarding this practice, but at this point the reality is clear: the figure that I am familiar with is that about 50% of basic stat courses in b-schools use Excel and 50% use statistical software such as Minitab or JMP.

Another trend is moving from offline software to "cloud computing" -- websites such as www.statcrunch.com offer basic stat functions in an online, collaborative, social-networky style.

Following the popularity of spreadsheet software and the cloud trend, I asked myself whether the free Google Spreadsheets can actually do the job. This is part of my endeavors to find free (or at least widely accessible) software for teaching basic concepts. While Google Spreadsheets does have quite an extensive function list, I discovered that its current computational capabilities are very limited. For example, computing binomial probabilities using the function BINOMDIST is limited to a sample size of about 130 (I did report this problem). Similarly, HYPGEOMDIST results in overflow errors for reasonably small sample and population sizes.


From the old days when we used to compute binomial probabilities manually, I am guessing that whoever programmed these functions forgot to use the tricks that avoid computing high factorials in n-choose-k type calculations...
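For the record, here is a minimal sketch of the standard trick: work on the log scale with the log-gamma function, so that the n-choose-k term never overflows, even for n in the thousands.

```python
# A minimal sketch: binomial probabilities via log-gamma, avoiding huge factorials.
from math import lgamma, log, exp

def binom_pmf(k, n, p):
    # log of n-choose-k, computed without ever forming a large factorial
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + k * log(p) + (n - k) * log(1 - p))

print(binom_pmf(500, 1000, 0.5))   # works fine well beyond n of about 130
```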


Saturday, April 16, 2011

Moving Average chart in Excel: what is plotted?

In my recent book Practical Time Series Forecasting: A Hands-On Guide, I included an example of using Microsoft Excel's moving average plot to suppress monthly seasonality. This is done by creating a line plot of the series over time and then selecting Add Trendline > Moving Average (see my post about suppressing seasonality). The purpose of adding the moving average trendline to a time plot is to better see a trend in the data, by suppressing seasonality.

A moving average with window width w means averaging across each set of w consecutive values. For visualizing a time series, we typically use a centered moving average with w equal to the length of the seasonal cycle. In a centered moving average, the value of the moving average at time t (MAt) is computed by centering the window around time t and averaging across the w values within the window. For example, if we have daily data and we suspect a day-of-week effect, we can suppress it with a centered moving average with w=7 and then plot the MA line.

An observant participant in my online course Forecasting discovered that Excel's moving average does not produce what we'd expect: Instead of averaging over a window that is centered around a time period of interest, it simply takes the average of the last w months (called a "trailing moving average"). While trailing moving averages are useful for forecasting, they are inferior for visualization, especially when the series has a trend. The reason is that the trailing moving average "lags behind". Look at the figure below, and you can see the difference between Excel's trailing moving average (black) and a centered moving average (red).
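Here is a minimal sketch (simulated monthly data, using pandas) contrasting the two: the trailing moving average that Excel's trendline produces versus the centered moving average described above. With an even window such as w=12, a fully centered MA technically requires a 2x12 average, but the simple centered version is enough to show the lag.

```python
# A minimal sketch: trailing vs. centered moving average on a trending, seasonal series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
months = pd.date_range("2005-01", periods=60, freq="MS")
series = pd.Series(100 + np.arange(60) + 10 * np.sin(np.arange(60) * 2 * np.pi / 12)
                   + rng.normal(0, 2, 60), index=months)

trailing = series.rolling(window=12).mean()                 # what Excel's trendline plots
centered = series.rolling(window=12, center=True).mean()    # what we want for visualization

print(pd.DataFrame({"trailing": trailing, "centered": centered}).dropna().head())
# With an upward trend, the trailing MA sits noticeably below the centered MA ("lags behind").
```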



The fact that Excel produces a trailing moving average in the Trendline menu is quite disturbing and misleading. Even more disturbing is the documentation, which incorrectly describes the trailing MA that is produced:
"If Period is set to 2, for example, then the average of the first two data points is used as the first point in the moving average trendline. The average of the second and third data points is used as the second point in the trendline, and so on."
For more on moving averages, see here.

Saturday, April 09, 2011

Visualizing time series: suppressing one pattern to enhance another pattern

Visualizing a time series is an essential step in exploring its behavior. Statisticians think of a time series as a combination of four components: trend, seasonality, level and noise. All real-world series contain a level and noise, but not necessarily a trend and/or seasonality. It is important to determine whether trend and/or seasonality exist in a series in order to choose appropriate models and methods for descriptive or forecasting purposes. Hence, looking at a time plot, typical questions include:
  • Is there a trend? If so, what type of function can approximate it (linear, exponential, etc.)? Is the trend fixed throughout the period, or does it change over time?
  • Is there seasonal behavior? If so, is seasonality additive or multiplicative? Does seasonal behavior change over time?
Exploring such questions using time plots (line plots of the series over time) is enhanced by suppressing one type of pattern for better visualizing other patterns. For example, suppressing seasonality can make a trend more visible. Similarly, suppressing a trend can help see seasonal behavior. How do we suppress seasonality? Suppose that we have monthly data and there is apparent annual seasonality. To suppress seasonality (also called seasonal adjustment), we can
  1. Plot annual data (either annual averages or sums)
  2. Plot a moving average (an average over a window of 12 months centered around each particular month)
  3. Plot 12 separate series, one for each month (e.g., one series for January, another for February and so on)
  4. Fit a model that captures monthly seasonality (e.g., a regression model with 11 monthly dummies) and look at the residual series (a code sketch of this option appears below)
An example is shown in the Figure. The top left plot is the original series (showing monthly ridership on Amtrak trains). The bottom left panel shows a moving average line, suppressing seasonality and showing the trend. The top right panel shows a model that captures the seasonality. The lower right panel shows the residuals from the model, again enhancing the trend.
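Here is a minimal sketch of option 4 from the list above, using simulated monthly data: regress the series on 11 monthly dummies and inspect the residual series, in which the monthly seasonality has been removed while the trend remains visible.

```python
# A minimal sketch of seasonal adjustment via regression on 11 monthly dummies.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
months = pd.date_range("2005-01", periods=72, freq="MS")
y = pd.Series(1500 + 2 * np.arange(72) + 80 * np.sin(np.arange(72) * 2 * np.pi / 12)
              + rng.normal(0, 20, 72), index=months)

dummies = pd.get_dummies(y.index.month, prefix="m", drop_first=True).astype(float)
dummies.index = y.index
X = sm.add_constant(dummies)               # intercept + 11 monthly dummies

residuals = sm.OLS(y, X).fit().resid       # the deseasonalized ("residual") series
print(residuals.head())
# Plotting 'residuals' over time shows the trend without the monthly seasonal pattern.
```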

For further details and examples, see my recently published book Practical Time Series Forecasting: A Hands-On Guide (available in soft-cover and as an eBook).