Reduced Order Models for Documents

The term-document matrix \Bar{\Bar{A}} is a high-order, high-fidelity model for the document-space. High-fidelity in the sense that \Bar{\Bar{A}} will correctly shred-bag-tag any document in its span to represent it as a vector in term-space as per VSM. \Bar{\Bar{A}} has m \times n entries, with m distinct terms (rows) building n documents (columns). But do we need all those mn values to capture this shred-bag-tag effect of \Bar{\Bar{A}}? That is, can we achieve it with a modified but simpler model (fewer than mn variables) while accepting some error? The objective of this post is to look at a traditional approach based on the ideas in Indexing by latent semantic analysis by Deerwester et al (LSA), which is basically Singular Value Decomposition (SVD) as applied to the term-document matrix \Bar{\Bar{A}}. The focus is on appreciating the assumptions behind the approach, gauging the limitations that these assumptions give rise to, and interpreting the algebraic manipulations via examples.

1. Preliminaries

We have waded in fast and deep in the introduction above with various terminology and links. This post builds on two earlier posts, Data Dimensionality and Sensitivity to Sampling and Stacks of Documents and Bags of Words, in this series on text analytics. We will start with a quick summary in order to set the stage. See Figure 1 as well.

  1. The vector space model (VSM) represents documents as points/vectors in term-space, where the set of all distinct terms \{ \Bar{T}_i \} in the documents serve as standard basis vectors.
  2. Given a set of such document vectors \{ \Bar{D}_i \}, we can form the term-document matrix \Bar{\Bar{A}} with \{ \Bar{D}_i \} as its columns. Each column is a point/vector in term-space as per VSM.
  3. While every document in the span of \{ \Bar{D}_i \} has a term-space representation, not every point in the span of \{ \Bar{T}_i \} has a representation in the document space. Clearly, the set of documents that can be built from a bunch of words is a superset of any finite set of documents those words may have come from.
  4. \Bar{\Bar{A}} is a linear operator that converts any document in the span of \{ \Bar{D}_i \} to its VSM representation in term-space. The equation is:

(1)   \begin{equation*} \Bar{\Bar{A}} \, \cdot \, \Bar{D} = \Bar{T} \end{equation*}

Figure 1. The term-document matrix. m rows and n columns. Each column is a VSM representation of a document in term-space.
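To make the shred-bag-tag operation of Equation 1 concrete, here is a minimal Python sketch (the helper term_document_matrix, the variable names, and the omission of stemming/stop-word removal are my own simplifications) that builds \Bar{\Bar{A}} from raw strings, using the two dog documents from Example 2 below.

import re
from collections import Counter

def term_document_matrix(docs):
    """Shred each document into lowercase terms, bag the counts,
    and tag each document as a column of the m x n matrix A."""
    bags = [Counter(re.findall(r"[a-z]+", d.lower())) for d in docs]
    terms = sorted(set().union(*bags))             # the m distinct terms (rows)
    A = [[bag[t] for bag in bags] for t in terms]  # m x n list of lists
    return terms, A

docs = ["fetch dog fetch!", "lazy dog!"]
terms, A = term_document_matrix(docs)
print(terms)   # ['dog', 'fetch', 'lazy']
print(A)       # [[1, 1], [2, 0], [0, 1]]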

2. Reduced Order Models

Before we embrace VSM and proceed with order reduction via SVD/LSA, let us reiterate what we have bargained away by subscribing to Equation 1 as the embodiment of all truth about the documents in the repository. In words, Equation 1 says:

…documents are linear combinations of words…. if two documents contain the same words then they are identical… order of words, punctuation within are irrelevant… if two documents use the same words in the same proportion then they are talking about the same idea, one verbose and the other succinct…

Clearly we do not agree with the above assessments. But having started with Equation 1 as the basis for studying the properties of the repository, we should temporarily put aside what text means to us and consider it as bags of words, with each bag a variable in an equation. If we come to grips with this, the benefits are undeniable. VSM has enabled lightning-fast search by keywords on large document repositories. SVD and LSA have enabled order reduction and improved search relevance by finding documents that would otherwise stay hidden from a simple keyword search alone. For example, LSA can find documents that do not contain any of the supplied keywords but contain others that are related to these keywords as per the properties of \Bar{\Bar{A}}.

Reduced order models (ROM) have been applied with reasonable success for making real-time predictions for large-scale nonlinear systems such as turbulent flows. While we can apply similar techniques for reducing the dimensionality of document-spaces, there is a further difference we must take note of. The trouble with documents is the language itself, unfortunately! Text is open to interpretation, can take different meanings based on context, and the order of words/punctuation within can drastically alter the meaning. This is unlike solving the Navier-Stokes equations, which even in their full complexity have a well-defined solution – provided you have specified all the boundary and initial conditions exactly. There is a mathematical certainty about physicochemical phenomena that their reduced order models can aspire to. But we do not have that luxury with documents, as we do not yet know how to express the meaning of documents as deterministic functions of their ingredients. So we do the best we can at this time – take Equation 1 as the embodiment of all truth in this document repository and get the benefits that SVD and LSA offer. Thus we should not expect these reduced order models for the document repository to match/preserve the meaning of the documents, but just to improve search speed & relevance.

3. Dimensionality of the Document and Term Spaces

The raw document set \{ \Bar{D}_i \} and the raw set of distinct terms \{ \Bar{T}_i \} arising from them have so far been the basis vectors in their corresponding spaces. But they may not be the most advantageous bases to work with in all cases. The number of documents and terms is usually very large. If we can remove some terms/documents that do not add much new information – that is, remove those terms/documents that can essentially be built from the other ones – then we can reduce the dimensionality of our system. This building of one from some others can be a linear or nonlinear operation for dimensionality reduction, but we can only deal with linear operations here. We have gone over some of this in a different post, Data Dimensionality and Sensitivity to Sampling.

Let us say we have a document repository about movies, and the terms ‘harrison’ and ‘ford’ occur together in most cases. We can then decide to use a single compound term “harrison ford” instead of two separate terms, thereby reducing the dimensionality of the term-space by 1. It is important to understand what we mean by occur together. We are not talking about the exact phrase “harrison ford”. Just that, if a document contains the term “harrison” x number of times, it will also contain the term “ford” about x number of times – remember the VSM model for documents. A keen reader will note that no allowance is being made for a “ford” that may have come from “gerald ford” the President, or even a ‘harrison’ from ‘george harrison’ of Beatles fame. Let us dig into this further to clearly see the surprising reduction in dimensionality that can happen in both document and term spaces, given the VSM approach, without having to apply any fancy math.

Example 1. Of Harrisons & Fords!

Consider the following three documents, which we know to be different, thus expecting a dimension of 3 for the document-space. But VSM thinks otherwise and says the dimension is 1. In the first document, Harrison Ford is being asked to watch over a couple of kids as they play. In the second, President Ford watched a performance by George Harrison. And in the third, President Ford and Harrison Ford watched a play while George Harrison played with the same kids.

  • Watch Gerald and George at Play, Harrison Ford!
  • Gerald Ford Watched a Play by George Harrison
  • Harrison Ford and Gerald Ford Watched a Play while George Harrison Played with Gerald and George

You may complain that I am just playing on words, but these documents are describing different things as we understand them. From the VSM perspective, however, Doc 1 and Doc 2 are the same, and Doc 3 is simply a linear combination of the first two. The 6-dimensional term-space basis is:

(2)   \begin{equation*} \left\{ \Bar{T}_i \right\} = \left\{ \text{ ford }, \text{ george }, \text{ gerald }, \text{ harrison }, \text{ play }, \text{ watch } \right\} \end{equation*}

And the document vectors are:

(3)   \begin{equation*} \Bar{D}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \quad \Bar{D}_2 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \quad \Bar{D}_3 = \begin{bmatrix} 2 \\ 2 \\ 2 \\ 2 \\ 2 \\ 2 \end{bmatrix} \end{equation*}

Thus as per VSM there is only one real document here and the dimensionality of the document-space is 1. Given such a one-dimensional document-space, we can use just one compound term “ford george gerald harrison play watch” instead of six terms for the term-space, and that would be sufficient to model all the documents in this document-space. An unsatisfactory state of affairs it may seem, as the meaning of the sentences got lost while getting shredded, but we did get a massive dimensionality reduction if that is any comfort! Then again, this is a repository about movies, and as far as I know neither President Ford nor Mr. Harrison were movie actors masquerading as Harrison Ford in the document repo, so such a collision is unlikely to happen.
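A quick numpy check (my own sketch, with the columns taken from Equation 3) confirms the collapse:

import numpy as np

# Columns are D1, D2, D3 from Equation 3: each of the 6 term rows is [1, 1, 2]
A = np.array([[1, 1, 2]] * 6)
print(np.linalg.matrix_rank(A))   # 1 -- VSM sees a single independent document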

3.1 Principal Component Analysis

While we can eyeball these dimensionality-reducing compound terms, or even documents that are linear combinations of other documents, when the repository is small – it is impractical for large repos. In fact, we want to use the fewest number of these compounded terms that can adequately describe all the documents in the repo. The rigorous way of identifying these linearly compounded terms that can help reduce the dimensionality of the term-space is of course known from Principal Component Analysis. The eigenvectors of the term-term affinity matrix turn out to be these compound terms. We know that eigenvectors in term-space are simply linear combinations of the original discrete terms – much like the commonly occurring compound terms we had eyeballed for. Further, the eigenvectors with the largest eigenvalues are the most dominant compound terms to be found across all the documents in the repo. The same discussion applies from the point of view of documents as well, leading to the conclusion that the eigenvectors of the document-document affinity matrix are the compound documents one could use as a dimensionality-reducing basis for the document-space.

So if we are limited to, say, using only p (much smaller than m or n) dimensions, we would pick the first p of these eigenvectors, arranged in order of decreasing eigenvalue, to get the best approximation in each semantic-space. That is the basic idea behind Indexing by latent semantic analysis by Deerwester et al, borrowed from Singular Value Decomposition (SVD) as applied to the term-document matrix.

3.2 Rank of \Bar{\Bar{A}}

We will conclude this section with a couple of notes about the rank of the term-document matrix in Equation 1. The rank r of \Bar{\Bar{A}} is the dimensionality we have been after in the above discussion.

  • r cannot be greater than the minimum of n and m.
  • The m terms are the distinct terms obtained by shredding the documents. In reverse, all n documents (and more!) can be linearly built from these m terms – no explanation needed. So the number of linearly independent documents can never be greater than m. We went over this in the earlier post Stacks of Documents and Bags of Words, where we described the span of the document-space as being a subset of all the documents that can be built from the individual terms.

So we have the rank r of \Bar{\Bar{A}} as the number of linearly independent documents.

Example 2

Consider the two documents where we are unsuccessfully trying to get our dog to exercise.

  • fetch dog fetch!
  • lazy dog!

They are linearly independent. n is 2, that is the document-space is 2-dimensional. The number of distinct terms m that these two documents contain is 3, so the term-space is 3-dimensional. But the rank of \Bar{\Bar{A}} is 2.

Example 3

Consider the three documents where you are urging a rabbit to escape from a fox.

  • fox!
  • rabbit, fox!
  • run rabbit run! fox!

Both the document and term spaces are 3-dimensional and the rank of \Bar{\Bar{A}} is 3.
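Both ranks can be checked with a few lines of numpy (a sketch of my own; the matrices are laid out by hand from the word counts):

import numpy as np

# Example 2: terms {dog, fetch, lazy} as rows, documents as columns
A2 = np.array([[1, 1],    # dog
               [2, 0],    # fetch
               [0, 1]])   # lazy
print(np.linalg.matrix_rank(A2))   # 2

# Example 3: terms {fox, rabbit, run} as rows, documents as columns
A3 = np.array([[1, 1, 1],   # fox
               [0, 1, 1],   # rabbit
               [0, 0, 2]])  # run
print(np.linalg.matrix_rank(A3))   # 3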

4. Document-Document and Term-Term Affinities

The affinity between the documents (as per their VSM representations in the term-space of course) is given by the matrix {\Bar{\Bar{A}}}^T \, \Bar{\Bar{A}}. Likewise, the affinity between the terms is given by the matrix  \Bar{\Bar{A}} \, {\Bar{\Bar{A}}}^T. On account of being real and symmetric, they both have eigen decompositions.

(4)   \begin{equation*} {\Bar{\Bar{A}}}^T \Bar{\Bar{A}} = \underbrace{\Bar{\Bar{V}}}_{n \times n} \; \underbrace{{\Bar{\Bar{\Lambda}}}^2}_{\text{ diagonal }} \; {\Bar{\Bar{V}}}^T \end{equation*}

(5)   \begin{equation*} \Bar{\Bar{A}} \; {\Bar{\Bar{A}}}^T = \underbrace{\Bar{\Bar{U}}}_{m \times m} \; \underbrace{{\Bar{\Bar{\Lambda}}}^2}_{\text{ diagonal }} \; {\Bar{\Bar{U}}}^T \end{equation*}

The eigenvalues of these affinity matrices are real and non-negative – hence writing them as squares \lambda_i^2, with \Bar{\Bar{\Lambda}} being the diagonal matrix holding the \lambda_i. The fact that both {\Bar{\Bar{A}}}^T \Bar{\Bar{A}} and \Bar{\Bar{A}} \, {\Bar{\Bar{A}}}^T have the same non-zero eigenvalues is a basic result in linear algebra1. With no loss of generality, the \{ \lambda_i \} are arranged in decreasing order as in:

(6)   \begin{equation*} \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_r > 0 \end{equation*}

And as per our discussion about the rank of \Bar{\Bar{A}},

(7)   \begin{equation*} \lambda_i = 0 \quad \forall i > r \end{equation*}

\Bar{\Bar{V}} is the column matrix of n orthonormal eigenvectors in document-space. \Bar{\Bar{U}} is the column matrix of  m orthonormal eigenvectors in term-space. That is, with \Bar{\Bar{I}} as the identity matrix,

(8)   \begin{align*} \Bar{\Bar{V}} \cdot {\Bar{\Bar{V}}}^T = & \underbrace{\Bar{\Bar{I}}}_{n \times n} \\ \Bar{\Bar{U}} \cdot {\Bar{\Bar{U}}}^T = & \underbrace{\Bar{\Bar{I}}}_{m \times m} \end{align*}
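Before the worked example, here is a tiny numpy sketch (my own, reusing the matrix of Example 2) verifying these properties: the two affinity matrices share their non-zero eigenvalues, and \Bar{\Bar{V}} and \Bar{\Bar{U}} are orthonormal.

import numpy as np

# Term-document matrix of Example 2 (terms: dog, fetch, lazy; documents as columns)
A = np.array([[1., 1.],
              [2., 0.],
              [0., 1.]])

w_dd, V = np.linalg.eigh(A.T @ A)   # document-document affinity, Equation 4
w_tt, U = np.linalg.eigh(A @ A.T)   # term-term affinity, Equation 5

# The non-zero eigenvalues of the two affinity matrices coincide (here r = 2)
print(np.round(np.sort(w_dd)[::-1], 3))   # approx. [5.303 1.697]
print(np.round(np.sort(w_tt)[::-1], 3))   # approx. [5.303 1.697 0.   ]

# V and U are orthonormal (Equation 8)
print(np.allclose(V @ V.T, np.eye(2)), np.allclose(U @ U.T, np.eye(3)))   # True True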

Time for an example to convince ourselves of what we started this section with – how the eigenvectors of the affinity matrices allow us to automatically come up with compound terms/documents that can reduce the dimensionality of the corresponding spaces.

Example 4

Consider the following four documents where we are urging a rabbit to escape from the fox, and unsuccessfully urging the dog to save the rabbit from this fox.

  • \Bar{D}_1:  run dog! fox is jumping in! rabbit run!
  • \Bar{D}_2:  dog, jump on fox!
  • \Bar{D}_3:  run rabbit run!
  • \Bar{D}_4:  fox is running with the rabbit! run lazy dog jump!

The standard ordered basis for the term-space is the set of six distinct terms \{ \text{ dog }, \text{ fox }, \text{ jump }, \text{ lazy }, \text{ rabbit }, \text{ run } \}. A bit more difficult to eyeball, but the following terms occur together in the indicated ratios whenever any one of them is present in a document.

(9)   \begin{align*} \text{ dog : fox : jump } = & \, 1 : 1 : 1 \\ \text{ rabbit : run } = & \, 1 : 2 \end{align*}

And, the first document is a simple addition of the second and third documents. That is:

(10)   \begin{equation*} \underbrace{\text{ run dog fox jump rabbit run }}_{\Bar{D}_1} =\underbrace{\text { dog jump fox }}_{\Bar{D}_2} +\underbrace{\text{ run rabbit run }}_{\Bar{D}_3} \end{equation*}

From Equation 9 we know that we should be able to use just 3 terms to describe the documents and be none the worse for it. Those three terms being: (a) one compound term in place of the 3 terms \text{ dog fox jump }, (b) another compound term in place of the 2 terms \text{ rabbit run }, and (c) the regular term \text{ lazy }. And from Equation 10 we know that we only have 3 independent documents. Three compound terms completely describing 3 documents – a rank of 3 for the term-document matrix.

Now we shall verify whether all the algebra in this section agrees with our intuitive assessments above. The term-document matrix \Bar{\Bar{A}} is:

(11)   \begin{equation*} \Bar{\Bar{A}} = \begin{bmatrix} 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 2 & 0 & 2 & 2 \end{bmatrix} \end{equation*}

The term-term affinity matrix and its eigen decomposition can be readily computed. The eigenvalues turn out to be,

(12)   \begin{equation*} \lambda_1 = 4.55 \quad \lambda_2 = 1.92 \quad \lambda_3 = 0.77 \quad \text{ and } \lambda_i  = 0 \quad \forall i > 3 \end{equation*}

confirming that the rank of \Bar{\Bar{A}} is 3 – perfect! The corresponding 3 eigenvectors in term-space (we do not care for the other 3 as their eigenvalues are zero) are:

(13)   \begin{equation*} \Bar{U_1} = \begin{bmatrix} -0.32 \\ -0.32 \\ -0.32 \\ -0.14 \\ -0.36 \\ -0.73 \end{bmatrix} \quad \Bar{U_2} = \begin{bmatrix} 0.47 \\ 0.47 \\ 0.47 \\ 0.05 \\ -0.26 \\ -0.51 \end{bmatrix} \quad \Bar{U_3} = \begin{bmatrix} -0.07 \\ -0.07 \\ -0.07 \\ 0.99 \\ -0.04 \\ -0.08 \end{bmatrix} \end{equation*}

The first three values in these vectors are the weights for the terms \left\{\text{ dog }, \text{ fox }, \text{ jump } \right\}, the fourth is the weight for \text{ lazy }, and the last two are the weights for the terms \left\{ \text{ rabbit }, \text{ run } \right\}. Rewriting them as 3-dimensional vectors allows us to see our compound terms at play, confirming our intuitive assessment.

(14)   \begin{align*} \Bar{U_1} = & -0.32 \left( \overline{\text{ dog + fox + jump }} \right) - 0.14 \, \overline{\text{ lazy }} - 0.36 \left( \overline{\text{ rabbit + 2 * run }} \right) \\ \Bar{U_2} = & \phantom{-} 0.47 \left( \overline{\text{ dog + fox + jump }} \right) + 0.05 \, \overline{\text{ lazy }} - 0.26 \left( \overline{\text{ rabbit + 2 * run }} \right) \\ \Bar{U_3} = & -0.07 \left( \overline{\text{ dog + fox + jump }} \right) + 0.99 \, \overline{\text{ lazy }} - 0.04 \left( \overline{\text{ rabbit + 2 * run }} \right) \end{align*}

Before leaving this example we note the eigenvectors \{ \Bar{V}_i \} of the document-document affinity matrix. We will be using them in the coming sections as the basis for document-space.

(15)   \begin{equation*} \Bar{V_1} = \begin{bmatrix} -0.61 \\ -0.21 \\ -0.4 \\ -0.65 \end{bmatrix} \quad \Bar{V_2} = \begin{bmatrix} 0.07 \\ 0.74 \\ -0.66 \\ 0.10 \end{bmatrix} \quad \Bar{V_3} = \begin{bmatrix} -0.53 \\ -0.28 \\ -0.25 \\ 0.76 \end{bmatrix} \end{equation*}
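A short numpy sketch (variable names are mine) reproduces the eigenvalues of Equation 12 and the eigenvectors of Equations 13 and 15. Note that an eigenvector is only defined up to sign, so some columns may come out with flipped signs relative to the listings above.

import numpy as np

# Term-document matrix from Equation 11; rows are {dog, fox, jump, lazy, rabbit, run}
A = np.array([[1, 1, 0, 1],
              [1, 1, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 1],
              [2, 0, 2, 2]], dtype=float)

# Eigen decompositions of the affinity matrices (Equations 4 and 5);
# eigh returns eigenvalues in ascending order, so reverse to get decreasing order
w_t, U = np.linalg.eigh(A @ A.T)
w_d, V = np.linalg.eigh(A.T @ A)
w_t, U = w_t[::-1], U[:, ::-1]
w_d, V = w_d[::-1], V[:, ::-1]

print(np.round(np.sqrt(np.abs(w_t[:3])), 2))   # lambda_1..3: [4.55 1.92 0.77]
print(np.round(U[:, :3], 2))                   # eigen terms of Equation 13
print(np.round(V[:, :3], 2))                   # eigen documents of Equation 15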

5. The Relation Between Eigen Documents and Eigen Terms

We have stated that our goal was to use the document-space eigenvectors \{ \Bar{V}_i \} as the basis for document-space and the term-space eigenvectors \{ \Bar{U}_i \} as the basis for term-space. Each \Bar{U}_i is simply a compound term that is a linear combination of the discrete terms, as we have just seen in Example 4. Likewise, each \Bar{V}_i is a compound document that is a linear combination of the standard basis documents. And most importantly, it turns out that \{ \Bar{V}_i \} and \{ \Bar{U}_i \} are related in a natural way – this is the crux of SVD.

5.1. \left\{ \frac{1}{\lambda_i} \, \Bar{\Bar{A}} \, \Bar{V}_i \right\} is an orthonormal set in term-space

If we are going to express documents with \{ \Bar{V}_i \} as the basis, we need to first understand what our shred-bag-tag operator (Equation 1) will do to these eigen/compound documents. That is, what is \Bar{\Bar{A}} \, \Bar{V}_i? To see that, let us rewrite Equation 4 as:

(16)   \begin{align*} \left( \Bar{\Bar{A}} \, \Bar{\Bar{V}} \right)^T \cdot \left( \Bar{\Bar{A}} \, \Bar{\Bar{V}} \right) = & {\Bar{\Bar{\Lambda}}}^2 \\ \begin{bmatrix} \left( \Bar{\Bar{A}} \, \Bar{V}_1 \right)^T \\ \left( \Bar{\Bar{A}} \, \Bar{V}_2 \right)^T \\ \vdots \\ \left( \Bar{\Bar{A}} \, \Bar{V}_n \right)^T \end{bmatrix} \left( \Bar{\Bar{A}} \, \Bar{V}_1 \quad \Bar{\Bar{A}} \, \Bar{V}_2 \quad \hdots \quad  \Bar{\Bar{A}} \, \Bar{V}_n \right) = & \begin{bmatrix} \lambda_1^2  & 0 & \cdots & 0 & 0  & \cdots & 0 \\ 0 & \lambda_2^2  & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \cdots & 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & \lambda_r^2 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \end{bmatrix} \end{align*}

(17)   \begin{equation*} \left( \Bar{\Bar{A}} \, \Bar{V}_i \right)^T \cdot \left( \Bar{\Bar{A}} \, \Bar{V}_j \right) = \begin{cases} \lambda_i^2 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \end{equation*}

As the rank r \leq n we know that we will have n - r rows & columns of zeros in the diagonal matrix \Bar{\Bar{\Lambda}}. So we only need to consider the first r columns of \Bar{\Bar{V}} (or \Bar{\Bar{U}}).

From Equations 6 and 7 we have \lambda_i > 0 \; \forall \; i \leq r so we can rewrite Equation 17 for i, j \leq r as:

(18)   \begin{equation*} \left( \frac{1}{\lambda_i} \, \Bar{\Bar{A}} \, \Bar{V}_i \right)^T \cdot \left( \frac{1}{\lambda_j} \, \Bar{\Bar{A}} \, \Bar{V}_j \right) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \end{equation*}

Equations 17 and 18 tell us that the r vectors \left\{\frac{1}{\lambda_i} \, \Bar{\Bar{A}} \, \Bar{V}_i \right\} are an orthonormal set in term-space. So they can very well serve as the basis for the image/range of \Bar{\Bar{A}} in term-space as the rank of \Bar{\Bar{A}} is r.

But so what? In any given space there are infinitely many orthonormal bases. What is so special about these r orthonormal vectors \left\{ \frac{1}{\lambda_i} \, \Bar{\Bar{A}} \, \Bar{V}_i \right\} in the term-space? It turns out these are not just any orthonormal basis vectors – they are exactly the term-space eigenvectors \left\{ \Bar{U}_i \right\}. This is of course the crown jewel of an observation from Singular Value Decomposition (SVD), proved below.

5.2. Shred-Bag-Tag operation on an eigen document turns it into a scaled eigen term.

We can see this by left-multiplying \frac{1}{\lambda_i} \, \Bar{\Bar{A}} \, \Bar{V}_i by \Bar{\Bar{A}} \; {\Bar{\Bar{A}}}^T.

(19)   \begin{align*} \Bar{\Bar{A}} \; {\Bar{\Bar{A}}}^T \left( \frac{1}{\lambda_i} \, \Bar{\Bar{A}} \, \Bar{V}_i \right) = & \Bar{\Bar{A}} \; \frac{1}{\lambda_i} \, \left( {\Bar{\Bar{A}}}^T \; \Bar{\Bar{A}} \right) \Bar{V}_i \\ = & \Bar{\Bar{A}} \; \frac{1}{\lambda_i} \, \left( \lambda_i^2 \Bar{V}_i \right) \\ = & \lambda_i^2 \; \left( \frac{1}{\lambda_i} \, \Bar{\Bar{A}} \; \Bar{V}_i \right) \end{align*}

The last equality in Equation 19 above shows that \frac{1}{\lambda_i} \, \Bar{\Bar{A}} \; \Bar{V}_i is an eigenvector of \Bar{\Bar{A}} \; {\Bar{\Bar{A}}}^T with the eigenvalue \lambda_i^2. We had been calling that \Bar{U}_i all this time. Thus the core result of SVD in scalar form:

(20)   \begin{equation*} \Bar{\Bar{A}} \, \Bar{V}_i  = \lambda_i \, \Bar{U}_i \end{equation*}

Grouping the eigenvectors, we get the more familiar matrix form for SVD:

(21)   \begin{align*} \Bar{\Bar{A}} \Bar{\Bar{V}} = & \, \Bar{\Bar{U}} \Bar{\Bar{\Lambda}} \\ \Bar{\Bar{A}} = & \, \Bar{\Bar{U}} \; \Bar{\Bar{\Lambda}} \; {\Bar{\Bar{V}}}^T \end{align*}

Example 4 (continued)

We return to our lazy dog example to see whether the eigenvectors \Bar{V}_i we obtained do in fact obey Equation 20. Take \Bar{V}_1 for example:

    \[ \Bar{\Bar{A}} \; \Bar{V}_1 = \begin{bmatrix} 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 2 & 0 & 2 & 2 \end{bmatrix} \; \begin{bmatrix} -0.61 \\ -0.21 \\ -0.4 \\ -0.65 \end{bmatrix} = \begin{bmatrix} -1.47  \\ -1.47 \\ -1.47 \\ -0.65 \\ -1.66 \\ -3.32 \end{bmatrix} = 4.55 \begin{bmatrix} -0.32 \\ -0.32 \\ -0.32\\ -0.14 \\ -0.36 \\ -0.73 \end{bmatrix} = \lambda_1 \; \Bar{U}_1 \]

Thus agreeing with Equation 20.
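We can also check Equations 20 and 21 numerically with a small numpy sketch (my own, using numpy.linalg.svd on the Example 4 matrix). Keep in mind that numpy may return the singular vectors with flipped signs relative to the hand-computed ones, which does not affect the equalities.

import numpy as np

# Term-document matrix of Example 4 (Equation 11)
A = np.array([[1, 1, 0, 1],
              [1, 1, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 1],
              [2, 0, 2, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A)            # A = U diag(s) V^T, Equation 21
print(np.round(s, 2))                  # singular values: [4.55 1.92 0.77 0.  ]

# A V_i = lambda_i U_i for the r = 3 non-zero singular values (Equation 20)
for i in range(3):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))   # True for each i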

6. Reduced Order Models for Documents

The main driver for this article has been the anticipated reduction in the number of documents/terms we need to consider in capturing the essence of the full document repository. From all the analysis in the earlier sections, we now have the tools to identify the dominant compound terms, and we are going to test them against our lazy dog example.

When we consider only the first p compound terms (p < r) in Equation 21, the term-document matrix \Bar{\Bar{A}} can be approximated as {\Bar{\Bar{A}}}_p:

(22)   \begin{equation*} \Bar{\Bar{A}} \approx {\Bar{\Bar{A}}}_p = \underbrace{{\Bar{\Bar{U}}}_p}_{m \times p} \; \underbrace{{\Bar{\Bar{\Lambda}}}_p}_{p \times p} \; \underbrace{{\Bar{\Bar{V}}}_p^T}_{p \times n} \end{equation*}

We will apply this to Example 4, for which we have already computed all the needed eigenvectors & eigenvalues. We know that the rank is 3, so the maximum number of eigenvectors we need to consider is 3. That is, we should get {\Bar{\Bar{A}}}_3 to be identical to \Bar{\Bar{A}}, with {\Bar{\Bar{A}}}_2 and {\Bar{\Bar{A}}}_1 faring progressively worse. But how much worse? Let us do this with a quick python script.
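A minimal sketch of such a script (the truncation loop and variable names are my own) could be:

import numpy as np

# Term-document matrix of Example 4 (Equation 11)
A = np.array([[1, 1, 0, 1],
              [1, 1, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 1],
              [2, 0, 2, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

for p in (3, 2, 1):
    # Rank-p approximation of Equation 22: keep the first p singular triplets
    A_p = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]
    # Relative Frobenius-norm error of Equation 23
    e_p = np.linalg.norm(A - A_p) / np.linalg.norm(A)
    print(f"p = {p}, e_p = {e_p:.2f}")
    print(np.round(A_p, 2))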

Running it, we get {\Bar{\Bar{A}}}_3 identical to \Bar{\Bar{A}} as expected, with the other two showing some deviations.

    \[ {\Bar{\Bar{A}}}_3 = \begin{bmatrix} 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 2 & 0 & 2 & 2 \end{bmatrix} {\Bar{\Bar{A}}}_2 = \begin{bmatrix} 0.97 & 0.98 & -0.01 & 1.04 \\ 0.97 & 0.98 & -0.01 & 1.04 \\ 0.97 & 0.98 & -0.01 & 1.04 \\ 0.4 & 0.21 & 0.19 & 0.43 \\ 0.98 & -0.01 & 0.99 & 1.02 \\ 1.97 & -0.02 & 1.98 & 2.05 \end{bmatrix} {\Bar{\Bar{A}}}_1 = \begin{bmatrix} 0.9 & 0.31 & 0.59 & 0.95 \\ 0.9 & 0.31 & 0.59 & 0.95 \\ 0.9 & 0.31 & 0.59 & 0.95 \\ 0.4 & 0.14 & 0.26 & 0.42 \\ 1.02 & 0.35 & 0.67 & 1.07 \\ 2.04 & 0.71 & 1.33 & 2.14 \end{bmatrix} \]

Defining a relative distance/error measure using the Frobenius norm for matrices as,

(23)   \begin{equation*} e_p = \frac { \left\Vert \Bar{\Bar{A}} - \Bar{\Bar{A}}_p \right\Vert }{\left\Vert \Bar{\Bar{A}} \right\Vert} \end{equation*}

we get:

(24)   \begin{equation*} e_3 = 0 \qquad e_2 = 0.15 \qquad e_1 = 0.41 \end{equation*}

indicating that if we can live with a 15% error for the shred-bag-tag operation, we can cut down the number of variables to work with from 24 (4 docs with 6 terms each) to 9 (3 eigen docs with 3 eigen terms each). But the real measure of error is what \Bar{\Bar{A}}_p does to a document. Take a test document

    \[ \Bar{D} = \text{ dog, run over fox! run rabbit, jump! } \]

It is in the span of \Bar{\Bar{A}}, as it is simply \Bar{D}_2 + \Bar{D}_3. Operating \Bar{\Bar{A}}_p on \Bar{D}, we get the term-space representations (for p = 3, 2 and 1 respectively) as:

(25)   \begin{align*} & \left( \overline{\text{ dog + fox + jump }} \right)  +  \left( \overline{\text{ rabbit + 2 * run }} \right) \\ & 0.97 \left( \overline{\text{ dog + fox + jump }} \right) + 0.4 \overline{\text{ lazy }}  +  0.98 \left( \overline{\text{ rabbit + 2 * run }} \right) \\ & 0.90 \left( \overline{ \text{ dog + fox + jump }} \right) + 0.4 \overline{\text{ lazy }}  +  1.02 \left( \overline{\text{ rabbit + 2 * run }} \right) \end{align*}

Except for the \text{ lazy } term the match for the rest is not bad, even with one compound term.
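As a check, a short numpy sketch (my own) reproduces these three representations approximately. The coordinate vector [0, 1, 1, 0] expresses \Bar{D} = \Bar{D}_2 + \Bar{D}_3 in the document basis, and the six entries of each output correspond to the terms {dog, fox, jump, lazy, rabbit, run}.

import numpy as np

A = np.array([[1, 1, 0, 1],
              [1, 1, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 1],
              [2, 0, 2, 2]], dtype=float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

D = np.array([0., 1., 1., 0.])   # the test document, D2 + D3, in the document basis

for p in (3, 2, 1):
    A_p = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]
    # Term-space representation of the test document under the rank-p model
    print(p, np.round(A_p @ D, 2))
# p = 3: approx. [1.   1.   1.   0.   1.   2.  ]
# p = 2: approx. [0.97 0.97 0.97 0.4  0.98 1.96]
# p = 1: approx. [0.9  0.9  0.9  0.4  1.02 2.04]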

7. Next Steps

This was a long post unfortunately, mainly due to the examples, and it just would not do to break them up. But we have made a lot of progress towards a semantic analysis of documents. We have obtained sets of terms that are observed to occur together among the documents in the repository. When we find a frequent juxtaposition of the same terms across documents, or in many places in the same document, the documents are likely describing the same mini-concept – one of the core assumptions behind a semantic analysis of text. The document repository is then taken to be made up of these identified concepts. This allows for extra insights beyond what a simple keyword-based search can provide, and opens doors to relate & cluster the documents by concepts. We will get into the details in the next article in this series.

  1. see Gilbert Strang for example
