Summarizing Data with the Most Informative Itemsets
Winner of the ACM SIGKDD 2011 Best Student Paper Award — with Michael Mampaey & Nikolaj Tatti

Abstract. Knowledge discovery from data is an inherently iterative process. That is, what we know about the data greatly determines our expectations, and therefore, what results we would find interesting and/or surprising. Given new knowledge about the data, our expectations will change. Hence, in order to avoid redundant results, knowledge discovery algorithms ideally should follow such an iterative updating procedure.

With this in mind, we introduce a well-founded approach for succinctly summarizing data with the most informative itemsets; using a probabilistic maximum entropy model, we iteratively find the itemset that provides us with the most novel information—that is, for which the frequency in the data surprises us the most—and in turn we update our model accordingly. As we use the Maximum Entropy principle to obtain unbiased probabilistic models, and only include those itemsets that are most informative with regard to the current model, the summaries we construct are guaranteed to be both descriptive and non-redundant.
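
To make the iterative scheme concrete, below is a minimal C++ sketch of the general idea, not the mtv implementation itself: each itemset is scored by how strongly its observed frequency deviates from the model's estimate, the most surprising one is added to the summary, and the model is updated with its frequency. All names (ToyModel, surprise, summarize) are illustrative, the toy model naively assumes item independence, and the sketch ranks a fixed candidate pool for simplicity, whereas mtv mines itemsets and their supports on the fly.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <map>
    #include <utility>
    #include <vector>

    using Itemset = std::vector<int>;

    // Toy stand-in for the maximum entropy model: it only knows single-item
    // marginals and treats items as independent. The real model is re-fitted
    // (e.g. by iterative scaling) every time an itemset constraint is added.
    struct ToyModel {
        std::map<int, double> itemFreq;
        std::vector<std::pair<Itemset, double>> constraints;

        double estimate(const Itemset& x) const {
            double p = 1.0;
            for (int item : x)
                p *= itemFreq.count(item) ? itemFreq.at(item) : 0.5;
            return p;
        }
        void addConstraint(const Itemset& x, double freq) {
            constraints.push_back({x, freq});  // proper re-fitting omitted
        }
    };

    // Binary KL divergence between the observed and the estimated frequency:
    // one reasonable way to quantify how much an itemset surprises the model.
    double surprise(double observed, double estimated) {
        const double eps = 1e-12;
        observed  = std::min(std::max(observed,  eps), 1.0 - eps);
        estimated = std::min(std::max(estimated, eps), 1.0 - eps);
        return observed * std::log(observed / estimated)
             + (1.0 - observed) * std::log((1.0 - observed) / (1.0 - estimated));
    }

    // Greedy loop: repeatedly pick the itemset whose observed frequency
    // surprises the current model most, add it to the summary, and update
    // the model with that frequency.
    std::vector<Itemset> summarize(
            const std::vector<std::pair<Itemset, double>>& candidates,
            ToyModel& model, std::size_t k) {
        std::vector<Itemset> summary;
        std::vector<bool> used(candidates.size(), false);
        while (summary.size() < k) {
            int best = -1;
            double bestScore = 0.0;
            for (std::size_t i = 0; i < candidates.size(); ++i) {
                if (used[i]) continue;
                double s = surprise(candidates[i].second,
                                    model.estimate(candidates[i].first));
                if (best < 0 || s > bestScore) {
                    bestScore = s;
                    best = static_cast<int>(i);
                }
            }
            if (best < 0) break;
            used[best] = true;
            summary.push_back(candidates[best].first);
            model.addConstraint(candidates[best].first, candidates[best].second);
        }
        return summary;
    }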

The algorithm that we present, called mtv, can either discover the top-k most informative itemsets, or employ the Bayesian Information Criterion (bic) or the Minimum Description Length (mdl) principle to automatically identify the set of itemsets that together summarize the data well. In other words, our method will "tell you what you need to know" about the data. Importantly, it is a one-phase algorithm: rather than picking itemsets from a user-provided candidate set, itemsets and their supports are mined on-the-fly. To further its applicability, we provide an efficient method to compute the maximum entropy distribution using Quick Inclusion-Exclusion.
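
As a rough illustration of how a BIC-style criterion can decide when the summary is large enough, the sketch below (again C++, with placeholder names; the exact penalty and the MDL encoding used in the actual method differ in detail) penalizes the negative log-likelihood of the maximum entropy model by log n per itemset. An itemset is then only worth adding if it improves the log-likelihood by more than (log n) / 2.

    #include <cmath>
    #include <cstddef>

    // Hypothetical BIC-style score for a summary of k itemsets over n rows;
    // logLik is the log-likelihood of the data under the fitted maximum
    // entropy model. Lower is better: the search stops once adding a further
    // itemset no longer decreases the score.
    double bicScore(double logLik, std::size_t k, std::size_t n) {
        return -2.0 * logLik
             + static_cast<double>(k) * std::log(static_cast<double>(n));
    }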

Experiments with our method on synthetic, benchmark, and real data show that the discovered summaries are succinct and correctly identify the key patterns in the data. The models they form attain high likelihoods, and inspection shows that they summarize the data well with increasingly specific, yet non-redundant itemsets.

Implementation

The C++ source code (October 2011) by Michael Mampaey is available for download. A README file with compilation and usage instructions is included. The source code requires the GNU MPFR library, which can be found at http://www.mpfr.org.
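
As a rough orientation (the included README contains the authoritative build instructions, and the file and binary names here are placeholders), compiling typically amounts to linking against MPFR and its GMP dependency:

    g++ -O2 -o mtv *.cpp -lmpfr -lgmp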

Related Publications

Mampaey, M, Vreeken, J & Tatti, N Summarizing Data Succinctly with the Most Informative Itemsets. Transactions on Knowledge Discovery from Data, vol. 6(4), pp 1-44, ACM, 2012. (IF 1.68)
Wu, H, Mampaey, M, Tatti, N, Vreeken, J, Hossain, MS & Ramakrishnan, N Where Do I Start? Algorithmic Strategies to Guide Intelligence Analysts. In: Proceedings of the ACM SIGKDD Workshop on Intelligence and Security Informatics (ISI-KDD), pp 1-8, ACM, 2012.
Mampaey, M, Tatti, N & Vreeken, J Data Summarization with Informative Itemsets. In: Proceedings of the 23rd Benelux Conference on Artificial Intelligence (BNAIC), ISSN 1568-7805, 2011.
Mampaey, M, Tatti, N & Vreeken, J Tell Me What I Need To Know: Succinctly Summarizing Data with Itemsets. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp 573-581, ACM, 2011.