This paper deals with a homoskedastic errors-in-variables linear regression model and properties of the total least squares (TLS) estimator. We partly revise the consistency results for the TLS estimator previously obtained by the author [

We consider a functional linear errors-in-variables model. Let

This problem is related to finding an approximate solution to an incompatible system of linear equations (an “overdetermined” system, because the number of equations exceeds the number of unknowns)
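As a minimal numerical sketch (the matrices below are illustrative and not from the paper), an incompatible overdetermined system has no exact solution, and ordinary least squares picks the vector minimizing the residual norm, perturbing only the right-hand side:

```python
import numpy as np

# Toy overdetermined system: 4 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, 2.0, 2.9, -0.9])

# Ordinary least squares: minimize ||A x - b||_2 over x
# (only the right-hand side b is perturbed, unlike in TLS).
x_ls, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
```

Total least squares, discussed below, instead perturbs both `A` and `b`.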

In the linear errors-in-variables regression model (

Sufficient conditions for consistency of the estimator are presented in Gleser [

The model where some variables are explanatory and the others are responses is called

We allow errors in different variables to correlate. Our problem is a minor generalization of the mixed LS-TLS problem, which is studied in [

The Weighted TLS and Structured TLS estimators are generalizations of the TLS estimator for the cases where the error covariance matrices do not coincide for different observations or where the errors for different observations are dependent; more precisely, the independence condition is replaced with a condition on the “structure of the errors”. The consistency of these estimators is proved in Kukush and Van Huffel [

In the present paper, for a multivariate regression model with multiple response variables we consider two versions of the TLS estimator. In these estimators, different norms of the weighted residual matrix are minimized. (These estimators coincide for the univariate regression model.) A common way to construct the estimator is to minimize the Frobenius norm. The estimator that minimizes the Frobenius norm also minimizes the spectral norm. Any estimator that minimizes the spectral norm is consistent under conditions of our consistency theorems (see Theorems
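For orientation, the classical (univariate, Frobenius-norm) TLS estimator admits a closed form through the SVD of the compound matrix; the sketch below follows the textbook construction and is ours, not a reproduction of the paper's estimator:

```python
import numpy as np

def tls(A, b):
    """Classical total least squares for A x ≈ b via the SVD of [A | b].

    Both A and b are perturbed; the solution is read off the right
    singular vector of [A | b] for the smallest singular value.
    """
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]              # right singular vector of the smallest singular value
    return -v[:n] / v[n]    # assumes v[n] != 0 (the generic case)

rng = np.random.default_rng(0)
A0 = rng.standard_normal((50, 2))
x_true = np.array([1.0, -2.0])
b0 = A0 @ x_true
# Add small errors to both sides, as in the errors-in-variables setting.
A = A0 + 0.01 * rng.standard_normal(A0.shape)
b = b0 + 0.01 * rng.standard_normal(b0.shape)
x_tls = tls(A, b)
```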

In this paper, for the results on consistency of the TLS estimator which are stated in paper [

The structure of the paper is as follows. In Section

First, we list the

For

For

Now, we list

is the matrix of true variables. It is an

is the matrix of errors. It is an

is the matrix of observations. It is an

is a covariance matrix of errors for one observation. For every

is the matrix of true regression parameters. It is a nonrandom

is an augmented matrix of regression coefficients. It is a nonrandom

is the TLS estimator of the matrix

is a matrix whose column space

It is assumed that the matrices

We rewrite the relation in implicit form. Let the
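For orientation, in commonly used TLS notation (ours, not necessarily the paper's), the relation $AX \approx B$ between explanatory and response variables is written implicitly via an augmented coefficient matrix:

```latex
\begin{bmatrix} A & B \end{bmatrix}
\begin{bmatrix} X \\ -I \end{bmatrix} \approx 0 .
```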

The entries of the matrix

Throughout the paper the following three conditions are assumed to be true:

For

This example is taken from [

For some matrices

First, we find the

Now, we show that the minimum in (

Notice that under condition (

For the matrix Δ that is a solution to minimization problem (

Columns of the matrix

Possible problems that may arise in the course of solving the minimization problem (

Besides (

We can construct the optimization problem that generalizes both (

A solution to problem (

In this section we briefly revise known consistency results. One of the conditions for the consistency of the TLS estimator is the convergence of

The theorem can be generalized for the multivariate regression. The condition that the errors on different observations have the same distribution can be dropped. Instead, Kukush and Van Huffel [

Here is the strong consistency theorem:

In the following consistency theorem the moment condition imposed on the errors is relaxed.

Generalizations of Theorems

In the next theorem strong consistency is obtained for

The key point of the proof is the application of our own theorem on perturbation bounds for generalized eigenvectors (Theorems

When we speak of a sequence

Theorem

Denote

The proofs of the consistency theorems differ from one another, but they have the same structure and common parts. First, the law of large numbers

The inequalities (

Then, by Theorem

We use some classical results. However, we state them in a form convenient for our study and provide the proof for some of them.

In this paper we deal with real matrices. Most theorems in this section can be generalized to matrices with complex entries by requiring that matrices be Hermitian rather than symmetric, and by taking complex conjugates where necessary.

If in the decomposition

Theorem

In Theorem

If the matrices

The inequality

Let

Let us verify the Moore–Penrose conditions:
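As a quick numerical illustration (the matrix is ours, chosen at random), the four Moore–Penrose conditions can be checked directly for the pseudoinverse computed by NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
P = np.linalg.pinv(A)  # Moore–Penrose pseudoinverse of A

# The four Moore–Penrose conditions:
c1 = np.allclose(A @ P @ A, A)        # A P A = A
c2 = np.allclose(P @ A @ P, P)        # P A P = P
c3 = np.allclose((A @ P).T, A @ P)    # A P is symmetric
c4 = np.allclose((P @ A).T, P @ A)    # P A is symmetric
```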

Since

Let

The angles

Denote

Denote the greatest of the sines of the canonical angles

If

If

We will often omit “span” in arguments of sine. Thus, for
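A minimal sketch of how such sines can be computed (our own illustration, under the standard definition of canonical angles): the cosines of the canonical angles between two subspaces are the singular values of $Q_X^\top Q_Y$, where $Q_X$, $Q_Y$ are orthonormal bases of the subspaces.

```python
import numpy as np

def canonical_angle_sines(X, Y):
    """Sines of the canonical (principal) angles between span(X) and span(Y).

    X, Y: matrices whose columns span the two subspaces. The cosines of
    the angles are the singular values of Qx^T Qy for orthonormal bases
    Qx, Qy of the column spaces.
    """
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    cosines = np.clip(np.linalg.svd(Qx.T @ Qy, compute_uv=False), 0.0, 1.0)
    return np.sqrt(1.0 - cosines**2)

# Two planes in R^3 sharing the first coordinate axis:
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # span{e1, e2}
Y = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # span{e1, e3}
sines = canonical_angle_sines(X, Y)
```

Here one canonical angle is zero (the shared axis) and the other is a right angle.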

Since

The function

We now state the multivariate generalization of Lemma

In the following theorems, a random variable

Theorem

The desired inequality is trivial for

In this section we explain the relationship between the TLS estimator and the generalized eigenvalue problem. The results of this section are important for constructing the TLS estimator. Proposition
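As a numerical aside (the matrices are ours, for illustration only), a symmetric-definite generalized eigenvalue problem $A v = \lambda B v$ is typically reduced to an ordinary symmetric eigenproblem via the Cholesky factor of $B$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M @ M.T                                  # symmetric positive semidefinite
B = np.diag(np.array([1.0, 1.1, 1.2, 1.3]))  # symmetric positive definite

# Reduce A v = lambda B v to an ordinary symmetric eigenproblem:
# with B = L L^T (Cholesky), set C = L^{-1} A L^{-T}; then C u = lambda u,
# and the generalized eigenvectors are v = L^{-T} u.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T
w, U = np.linalg.eigh(C)        # eigenvalues in ascending order
V = Linv.T @ U                  # generalized eigenvectors

v_min = V[:, 0]                 # eigenvector of the smallest generalized eigenvalue
resid = A @ v_min - w[0] * (B @ v_min)
```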

Let

In Propositions

If the constraints are compatible, the least element (and the unique minimum) is attained at a single point. Namely, the equalities

In the left-hand side of (

One can choose a stack of subspaces

In Propositions

As a consequence, if

The functional (

The

Due to inequality (

Furthermore,

Equality (

We have proved that the equality

The rank-deficient positive semidefinite symmetric matrix

Then the eigendecomposition of the matrix

Since

As soon as

Thus, the eigenvalues of

Recall inequalities (

Now, apply Lemma

The column vector

We have to verify that

Denote

By Lemma

What follows is valid for both univariate (

Due to (

The TLS estimator

Now, apply Lemma

Again, with (

In this section, we prove the convergences

It holds that

Finally,

The conditions of Theorem

Now, we prove that

By the Rosenthal inequality (case
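For the reader's convenience, in the form commonly found in the literature, Rosenthal's inequality for independent centered random variables $X_1, \dots, X_n$ with $\mathbf{E}|X_i|^p < \infty$, $p \ge 2$, reads:

```latex
\mathbf{E}\Bigl|\sum_{i=1}^{n} X_i\Bigr|^{p}
  \le C(p)\Biggl(\sum_{i=1}^{n}\mathbf{E}|X_i|^{p}
  + \Bigl(\sum_{i=1}^{n}\mathbf{E} X_i^{2}\Bigr)^{p/2}\Biggr),
```

where the constant $C(p)$ depends only on $p$.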

The conditions of Theorem

Now, we prove that

The proof of the asymptotic relation

Now, we show that

The random events

Now, we construct a modified version

Thus we construct a matrix

for some

if

From the proof of Theorem

Whenever the random event (

Now, we prove the uniqueness of

Assume by contradiction that

Now we prove that the random event

We have proved that the random event

This uniqueness of the solution Δ to the optimization problem (

For the proof of Lemma

In the proof, we assume that

The function

Let

Taking the limit in the relation

Because the matrix

Under the conditions of Lemma

Under the conditions of Remark

Using the eigendecomposition of

If

By the min-max theorem, the relation
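The min-max (Courant–Fischer) characterization is easy to check numerically; in the sketch below (an illustration of ours, not taken from the paper), every Rayleigh quotient of a symmetric matrix lies between its extreme eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                       # symmetric matrix
lams = np.linalg.eigvalsh(A)            # eigenvalues in ascending order

# Courant–Fischer (extreme cases): lambda_min and lambda_max are the
# minimum and maximum of the Rayleigh quotient x^T A x / x^T x.
xs = rng.standard_normal((100, 5))
rayleigh = np.array([x @ A @ x / (x @ x) for x in xs])
```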

The matrix

Let

Since

In the next theorem and in its proof, matrices

Let

The

Those diagonal entries comprise all the eigenvalues of

Hence for

Notice that