In recent times, a dramatic growth in the quantity and variety of information available electronically has meant that a large number of people are beginning to use information retrieval systems. A sizeable part of this user community consists of casual, untrained searchers who are precision-oriented, i.e., they prefer a small set of retrieved documents containing a good proportion of useful documents to a larger set that contains more useful items along with a fair amount of irrelevant information. Thus, techniques for high-precision information retrieval are becoming increasingly important. High-precision methods can also be very useful when incorporated into the traditional ad hoc feedback process for query expansion.
In this thesis, we explore several techniques for high-precision IR. We also investigate the effects of using such techniques as an intermediate step within ad hoc feedback. We use Smart, one of the most successful experimental IR systems, for our experiments. Natural language processing is performed using Empire, a state-of-the-art NLP system being developed at Cornell University. We use the TREC collections as our test sets, which ensures that the proposed techniques are evaluated on realistic, large, heterogeneous databases.
First, we investigate whether phrase matches between a document and a query can be used to improve precision. Contrary to the general belief that phrases are precision-enhancing devices, we find that phrases do not significantly affect precision at top ranks when used alongside single terms in a good basic retrieval engine. Phrases are more useful for distinguishing relevant from non-relevant documents among the poorly ranked ones. Also, when phrases are used together with single words, a simple statistical method for identifying phrases and a recent syntax-based method yield comparable retrieval effectiveness, although syntactic phrases outperform statistical ones when phrases are used in isolation.
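To make the statistical approach concrete, the following is a minimal sketch (in Python) of the kind of phrase identification it relies on: any pair of adjacent non-stopword terms that co-occurs sufficiently often in the collection is treated as a phrase. The stopword list and the frequency threshold min_freq are illustrative assumptions, not the exact settings used in our experiments.

    # Sketch: statistical phrase identification. Any pair of adjacent
    # non-stopword terms that co-occurs at least min_freq times in the
    # collection is treated as a phrase. The stopword list and threshold
    # below are illustrative assumptions.
    from collections import Counter
    from typing import Iterable

    STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "for"}

    def statistical_phrases(documents: Iterable[list[str]],
                            min_freq: int = 25) -> set[tuple[str, str]]:
        """Return adjacent-word pairs frequent enough to count as phrases."""
        pair_counts = Counter()
        for tokens in documents:
            content = [t.lower() for t in tokens if t.lower() not in STOPWORDS]
            pair_counts.update(zip(content, content[1:]))
        return {pair for pair, n in pair_counts.items() if n >= min_freq}

    # Example: with 30 copies of one short document, both adjacent pairs
    # clear the threshold and are reported as phrases.
    docs = [["information", "retrieval", "systems"]] * 30
    print(statistical_phrases(docs))
    # -> {('information', 'retrieval'), ('retrieval', 'systems')}

A phrase match then simply contributes to the document-query similarity score alongside single-term matches.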
Next, we study document clustering as a tool for improving precision. We find that clustering the top-ranked documents and eliminating outliers (on the expectation that outliers are mostly non-relevant) improves results for some queries, but these gains are offset by performance losses on other queries. We also propose a clustering-based approach to balanced query expansion, which appears to yield modest improvements at a low cost.
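The outlier-elimination step can be sketched as follows, assuming tf-idf document vectors are available as rows of a matrix, one per retrieved document. A top-ranked document whose mean cosine similarity to the other top documents falls below a cutoff is treated as an outlier and demoted; the cutoff value and the single-pass formulation are illustrative simplifications rather than our exact procedure.

    # Sketch: outlier elimination among the top-ranked documents.
    # Row i of `vecs` is the tf-idf vector of document i in `ranking`;
    # a document whose mean cosine similarity to the other top documents
    # falls below `cutoff` is treated as a probable non-relevant outlier
    # and moved to the bottom. The cutoff is an illustrative assumption.
    import numpy as np

    def demote_outliers(vecs: np.ndarray, ranking: list[int],
                        cutoff: float = 0.15) -> list[int]:
        unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        sims = unit @ unit.T                    # pairwise cosine similarities
        np.fill_diagonal(sims, 0.0)             # ignore self-similarity
        mean_sim = sims.sum(axis=1) / (len(ranking) - 1)
        keep = [d for d in ranking if mean_sim[d] >= cutoff]
        drop = [d for d in ranking if mean_sim[d] < cutoff]
        return keep + drop                      # outliers sink; order otherwise kept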
Finally, we investigate the use of Boolean filters along with proximity constraints. Our experiments demonstrate that manually formulated Boolean constraints can substantially improve retrieval quality. This technique is especially useful when combined with ad hoc feedback, and it frequently alleviates the query drift associated with straightforward ad hoc expansion schemes. We also propose a fully automatic approximation of this approach that makes use of term correlation information. The automatic method performs competitively, showing that good improvements are achievable even without any user intervention.
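A proximity-constrained Boolean filter can be sketched as follows: a document passes if all required terms occur together within some window of consecutive tokens. Both the AND-only form and the window size are illustrative assumptions, not the constraints actually used in our experiments.

    # Sketch: a proximity-constrained Boolean (AND) filter. A document
    # passes if every required term occurs within some window of
    # consecutive tokens; the window size and the AND-only form are
    # illustrative assumptions. Required terms are assumed lowercase.
    def passes_filter(tokens: list[str], required: set[str],
                      window: int = 50) -> bool:
        tokens = [t.lower() for t in tokens]
        for start in range(max(1, len(tokens) - window + 1)):
            if required <= set(tokens[start:start + window]):  # subset test
                return True
        return False

Filtered retrieval then simply promotes documents that satisfy the constraint ahead of those that do not, which is what curbs query drift after expansion.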