
Boolean model of information retrieval

The (standard) Boolean model of information retrieval (BIR)[1] is a classical information retrieval (IR) model and, at the same time, the first and most-adopted one.[2] The BIR is based on Boolean logic and classical set theory in that both the documents to be searched and the user's query are conceived as sets of terms (a bag-of-words model). Retrieval is based on whether or not the documents contain the query terms and whether they satisfy the Boolean conditions described by the query.


An index term is a word or expression, which may be stemmed, describing or characterizing a document, such as a keyword given for a journal article. Let $T = \{t_1, t_2, \ldots, t_n\}$ be the set of all such index terms.

A document is any subset of $T$. Let $D = \{D_1, \ldots, D_n\}$ be the set of all documents.


$T$ is a list of words or short phrases (index terms). Each such word or phrase is named $t_n$, where $n$ is the number of the term in the list. You can think of $T$ as "Terms" and $t_n$ as "index term $n$".

The words or short phrases (index terms $t_n$) can occur in documents. These documents form a list $D$ in which each individual document is called $D_n$. A document $D_n$ can contain words or short phrases (index terms $t_n$); for example, $D_1$ could contain the terms $t_1$ and $t_2$ from $T$. There is an example of this in the following section.

Index terms should represent words that carry meaning and correspond to what the content of an article or document might discuss. Terms like "the" and "like" appear in nearly all documents, whereas "Bayesian" appears in only a small fraction of them. Rarer terms like "Bayesian" are therefore a better choice for the set $T$. This relates to entropy (information theory). There are several operations that can be applied to index terms used in queries to make them more generic and more relevant; one such operation is stemming.
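
As a rough illustration of this selection criterion, the sketch below scores candidate terms by document frequency over a toy corpus and keeps only the rarer ones; the corpus, tokenization, and threshold are illustrative assumptions, not part of the model:

```python
from collections import Counter

# Toy corpus: each document is a plain string (illustrative only).
docs = [
    "the principle assumes equal probability for each possible value",
    "the theory presumes utility and probability functions",
    "the epistemic status is best measured by a probability",
]

# Document frequency: in how many documents does each word occur?
df = Counter()
for text in docs:
    for word in set(text.lower().split()):
        df[word] += 1

# Keep words occurring in fewer than half of the documents; very common
# words such as "the" or "probability" carry little discriminating power.
index_terms = {word for word, count in df.items() if count < len(docs) / 2}
print(sorted(index_terms))
```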


A query is a Boolean expression $Q$ in conjunctive normal form:

$Q = (W_1 \lor W_2 \lor \cdots) \land \cdots \land (W_i \lor W_{i+1} \lor \cdots)$

where $W_i$ is true for $D_j$ when $t_i \in D_j$. (Equivalently, $Q$ could be expressed in disjunctive normal form.)

A query $Q$ is a selection of index terms ($t_n$, written $W_n$ above) picked from the set $T$ of terms and combined using Boolean operators to form a set of conditions.

These conditions are then applied to the set $D$ of documents, which contain the same index terms ($t_n$) from the set $T$.

We seek to find the set of documents that satisfy $Q$. This operation is called retrieval and consists of the following two steps:

1. For each $W_j$ in $Q$, find the set $S_j$ of documents that satisfy $W_j$:

   $S_j = \{D_i \mid W_j\}$

2. Then the set of documents that satisfy $Q$ is given by:

   $(S_1 \cup S_2 \cup \cdots) \cap \cdots \cap (S_i \cup S_{i+1} \cup \cdots)$

where $\cup$ means OR and $\cap$ means AND as Boolean operators.
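
The two steps can be sketched in a few lines of Python, assuming the query is supplied in conjunctive normal form as a list of clauses, each clause a list of terms (the function name and data layout are illustrative):

```python
def retrieve(documents, query_cnf):
    """Return the ids of documents whose term sets satisfy the CNF query.

    documents: dict mapping document id -> set of index terms
    query_cnf: list of clauses; the terms inside a clause are OR-ed,
               and the clauses themselves are AND-ed.
    """
    result = set(documents)  # start from all document ids
    for clause in query_cnf:
        # Step 1: S_j, the union of documents satisfying any term in the clause
        s_j = {doc_id for doc_id, terms in documents.items()
               if any(term in terms for term in clause)}
        # Step 2: intersect the clause sets
        result &= s_j
    return result
```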

Let the set of original (real) documents be, for example

$D = \{D_1,\ D_2,\ D_3\}$

where

$D_1$ = "Bayes' principle: The principle that, in estimating a parameter, one should initially assume that each possible value has equal probability (a uniform prior distribution)."

$D_2$ = "Bayesian decision theory: A mathematical theory of decision-making which presumes utility and probability functions, and according to which the act to be chosen is the Bayes act, i.e. the one with highest subjective expected utility. If one had unlimited time and calculating power with which to make every decision, this procedure would be the best way to make any decision."

$D_3$ = "Bayesian epistemology: A philosophical theory which holds that the epistemic status of a proposition (i.e. how well proven or well established it is) is best measured by a probability and that the proper way to revise this probability is given by Bayesian conditionalisation or similar procedures. A Bayesian epistemologist would use probability to define, and explore the relationship between, concepts such as epistemic status, support or explanatory power."

Let the set $T$ of terms be:

$T = \{t_1 = \text{Bayes' principle},\ t_2 = \text{probability},\ t_3 = \text{decision-making},\ t_4 = \text{Bayesian epistemology}\}$

Then, the set $D$ of documents is as follows:

$D = \{D_1,\ D_2,\ D_3\}$

where

$D_1 = \{\text{probability},\ \text{Bayes' principle}\}$
$D_2 = \{\text{probability},\ \text{decision-making}\}$
$D_3 = \{\text{probability},\ \text{Bayesian epistemology}\}$

Let the query $Q$ be ("probability" AND "decision-making"):

$Q = \text{probability} \land \text{decision-making}$

Then, to retrieve the relevant documents:

  1. First, the following sets $S_1$ and $S_2$ of documents $D_i$ are obtained (retrieved):

     $S_1 = \{D_1,\ D_2,\ D_3\}$
     $S_2 = \{D_2\}$

     where $S_1$ corresponds to the documents that contain the term "probability" and $S_2$ to those that contain the term "decision-making".
  2. Finally, the following documents $D_i$ are retrieved in response to $Q$:

     $Q:\ \{D_1,\ D_2,\ D_3\}\ \cap\ \{D_2\}\ =\ \{D_2\}$

     where the query looks for documents that are contained in both sets $S_1$ and $S_2$, using the intersection operator.

This means that the original document $D_2$ is the answer to $Q$.
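
The same result can be reproduced with ordinary set operations; the snippet below is a direct transcription of the example, with the document and term names taken from above:

```python
# Documents represented as sets of index terms, as in the example.
documents = {
    "D1": {"probability", "Bayes' principle"},
    "D2": {"probability", "decision-making"},
    "D3": {"probability", "Bayesian epistemology"},
}

# Q = "probability" AND "decision-making"
S1 = {d for d, terms in documents.items() if "probability" in terms}
S2 = {d for d, terms in documents.items() if "decision-making" in terms}

print(S1 & S2)  # {'D2'}
```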

If there is more than one document with the same representation (the same subset of index terms $t_n$), every such document is retrieved. Such documents are indistinguishable in the BIR (in other words, equivalent).

Advantages

  • Clean formalism
  • Easy to implement
  • Intuitive concept
  • If the resulting document set is too small or too big, it is immediately clear which operators will produce a bigger or a smaller set.
  • It gives (expert) users a sense of control over the system. It is immediately clear why a document has been retrieved given a query.

Disadvantages

  • Exact matching may retrieve too few or too many documents
  • Hard to translate a query into a Boolean expression
  • Ineffective for search-resistant concepts[3]
  • All terms are equally weighted
  • More like data retrieval than information retrieval
  • Retrieval based on binary decision criteria with no notion of partial matching
  • No ranking of the documents is provided (absence of a grading scale)
  • Information need has to be translated into a Boolean expression, which most users find awkward
  • The Boolean queries formulated by the users are most often too simplistic
  • The model frequently returns either too few or too many documents in response to a user query

Data structures and algorithms

From a purely formal, mathematical point of view, the BIR is straightforward. From a practical point of view, however, several further problems have to be solved that relate to algorithms and data structures, such as, for example, the choice of terms (manual or automatic selection or both), stemming, hash tables, inverted file structure, and so on.[4]

Hash sets

Another possibility is to use hash sets. Each document is represented by a hash table that contains every term of that document. Since the hash table grows and shrinks as terms are added and removed, each document occupies much less space in memory. However, performance suffers because the operations are more complex than with bit vectors; in the worst case, performance can degrade from O(n) to O(n²). In the average case the slowdown is not much worse than with bit vectors, and the space usage is much more efficient.
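
A brief sketch of the two representations side by side, assuming a small fixed vocabulary for the bit-vector variant (all names are illustrative):

```python
vocabulary = ["Bayes' principle", "probability", "decision-making", "Bayesian epistemology"]
term_position = {term: i for i, term in enumerate(vocabulary)}

doc_terms = {"probability", "decision-making"}

# Bit vector: one bit per vocabulary term, fixed length regardless of the document.
bit_vector = [term in doc_terms for term in vocabulary]

# Hash set: only the terms actually present are stored, so space grows
# and shrinks with the document.
hash_set = set(doc_terms)

# Membership test for a query term under both representations.
query_term = "probability"
print(bit_vector[term_position[query_term]])  # True
print(query_term in hash_set)                 # True
```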

Signature file

Each document can be summarized by a Bloom filter representing the set of words in that document, stored in a fixed-length bitstring, called a signature. The signature file contains one such superimposed code bitstring for every document in the collection. Each query can also be summarized by a Bloom filter representing the set of words in the query, stored in a bitstring of the same fixed length. The query bitstring is tested against each signature.[5][6][7]

The signature file approach is used in BitFunnel.
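
The test of a query bitstring against a document signature can be sketched as follows; this is a toy Bloom filter using Python's built-in hash with two seeds, whereas real signature files tune the bit width and the number of hash functions:

```python
SIGNATURE_BITS = 64   # fixed signature length (illustrative)
NUM_HASHES = 2        # hash functions per word (illustrative)

def signature(words):
    """Superimpose the hashed bits of every word into one fixed-length bitmask."""
    sig = 0
    for word in words:
        for seed in range(NUM_HASHES):
            sig |= 1 << (hash((seed, word)) % SIGNATURE_BITS)
    return sig

doc_sig = signature({"probability", "decision-making", "utility"})
query_sig = signature({"probability"})

# The document may match only if every query bit is also set in its signature;
# false positives are possible, false negatives are not.
print(query_sig & doc_sig == query_sig)
```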

Inverted file

An inverted index file contains two parts: a vocabulary containing all the terms used in the collection, and for each distinct term an inverted index that lists every document that mentions that term.[5][6]
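
A minimal sketch of both parts, built from the example documents used earlier and queried by intersecting the per-term document lists (the data layout is illustrative):

```python
from collections import defaultdict

documents = {
    "D1": {"probability", "Bayes' principle"},
    "D2": {"probability", "decision-making"},
    "D3": {"probability", "Bayesian epistemology"},
}

# Vocabulary plus, for each term, the set of documents that mention it.
inverted_index = defaultdict(set)
for doc_id, terms in documents.items():
    for term in terms:
        inverted_index[term].add(doc_id)

# Conjunctive (AND) query: intersect the posting sets of the query terms.
query = ["probability", "decision-making"]
result = set.intersection(*(inverted_index[term] for term in query))
print(result)  # {'D2'}
```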
