protected:
   void MakeEigenVectors()
   void MakeNormalised()
   void MakeOrdered()
   void MakeRealCode(const char* filename, const char* prefix, Option_t* option)
   void MakeTridiagonal()
public:
   TPrincipal()
   TPrincipal(Int_t nVariables, Option_t* opt = "N")
   TPrincipal(TPrincipal&)
   virtual ~TPrincipal()
   virtual void AddRow(Double_t* x)
   virtual void Browse(TBrowser* b)
   static TClass* Class()
   virtual void Clear(Option_t* option)
   const TMatrixD* GetCovarianceMatrix() const
   const TVectorD* GetEigenValues() const
   const TMatrixD* GetEigenVectors() const
   TList* GetHistograms() const
   const TVectorD* GetMeanValues() const
   const Double_t* GetRow(Int_t row) const
   const TVectorD* GetSigmas() const
   const TVectorD* GetUserData() const
   virtual TClass* IsA() const
   virtual Bool_t IsFolder() const
   virtual void MakeCode(const char* filename = "pca", Option_t* option = "")
   virtual void MakeHistograms(const char* name = "pca", Option_t* option = "epsdx")
   virtual void MakeMethods(const char* classname = "PCA", Option_t* option = "")
   virtual void MakePrincipals()
   virtual void P2X(Double_t* p, Double_t* x, Int_t nTest)
   virtual void Print(Option_t* opt = "MSE") const
   virtual void ShowMembers(TMemberInspector& insp, char* parent)
   virtual void Streamer(TBuffer& b)
   void StreamerNVirtual(TBuffer& b)
   virtual void SumOfSquareResiduals(Double_t* x, Double_t* s)
   void Test(Option_t* option)
   virtual void X2P(Double_t* x, Double_t* p)

Data Members
protected:
   Int_t     fNumberOfDataPoints   Number of data points
   Int_t     fNumberOfVariables    Number of variables
   TVectorD  fMeanValues           Mean value over all data points
   TVectorD  fSigmas               Vector of sigmas
   TMatrixD  fCovarianceMatrix     Covariance matrix
   TMatrixD  fEigenVectors         Eigenvector matrix of the transformation
   TVectorD  fEigenValues          Eigenvalue vector of the transformation
   TVectorD  fOffDiagonal          Off-diagonal elements of the tridiagonal matrix
   TVectorD  fUserData             Vector of original data points
   Double_t  fTrace                Trace of the covariance matrix
   TList*    fHistograms           List of histograms
   Bool_t    fIsNormalised         Normalize the covariance matrix?
/*
The current implementation is based on the LINTRA package from CERNLIB by R. Brun, H. Hansroul, and J. Kubler. The class has been implemented by Christian Holm Christensen in August 2000.
In many applications of various fields of research, the treatment of large amounts of data requires powerful techniques capable of rapid data reduction and analysis. Usually, the quantities most conveniently measured by the experimentalist are not necessarily the most significant for classification and analysis of the data. It is then useful to have a way of selecting an optimal set of variables necessary for the recognition process, reducing the dimensionality of the problem and resulting in an easier classification procedure.
This class implements one such method of feature selection, namely principal components analysis. This multidimensional technique is well known in the field of pattern recognition, and its use in Particle Physics has been documented elsewhere (cf. H. Wind, Function Parameterization, CERN 72-21).
Suppose we have prototypes which are trajectories of particles passing through a spectrometer. If one measures the passage of the particle at say 8 fixed planes, the trajectory is described by an 8-component vector:
$$\mathbf{x} = (x_0, x_1, \ldots, x_7)$$
One proceeds by generating a representative sample of tracks and building up the covariance matrix $\mathsf{C}$. Its eigenvectors and eigenvalues are computed by standard methods, and thus a new basis is obtained for the original 8-dimensional space. The expansion of the prototypes in this basis,
$$\mathbf{x}_m = \sum_{i=1}^{8} a_{mi} \mathbf{e}_i \quad\mbox{where}\quad a_{mi} = \mathbf{x}_m^T \mathbf{e}_i,$$
allows the study of the behavior of the coefficients $a_{mi}$ for all the tracks of the sample. The eigenvectors which are insignificant for the trajectory description will have their corresponding coefficients close to zero for all the prototypes.
On one hand, a reduction of the dimensionality is then obtained by omitting these least significant vectors in the subsequent analysis.
On the other hand, in the analysis of real data, these least significant variables can be used for the pattern recognition problem of extracting the valid combinations of coordinates describing a true trajectory from the set of all possible wrong combinations.
The class described here performs this principal components analysis on a sample of data provided by the user. It computes the covariance matrix, its eigenvalues and corresponding eigenvectors, and exhibits the behavior of the principal components ($a_{mi}$), thus providing the user with all the means of understanding the data.
A short outline of the method of Principal Components is given below.
Let's consider a sample of $M$ prototypes, each characterized by $N$ variables $x_0, x_1, \ldots, x_{N-1}$. Each prototype is a point, or a column vector, in an $N$-dimensional pattern space:
$$\mathbf{x} = \left[x_0, x_1, \ldots, x_{N-1}\right]^T \quad (1)$$
Those variables are the quantities accessible to the experimentalist, but are not necessarily the most significant for the classification purpose.
The Principal Components Method consists of applying a linear transformation to the original variables. This transformation is described by an orthogonal matrix and is equivalent to a rotation of the original pattern space into a new set of coordinate vectors, which hopefully provide easier feature identification and dimensionality reduction.
Let's define the covariance matrix:
$$\mathsf{C} = \left\langle \mathbf{y}\mathbf{y}^T \right\rangle \quad\mbox{where}\quad \mathbf{y} = \mathbf{x} - \left\langle \mathbf{x} \right\rangle, \quad (2)$$
and the brackets indicate an average over the sample of $M$ prototypes.
This matrix $\mathsf{C}$ is real, positive definite, and symmetric, and will have all its eigenvalues greater than zero. It will now be shown that, among the family of all the complete orthonormal bases of the pattern space, the basis formed by the eigenvectors of the covariance matrix belonging to the largest eigenvalues corresponds to the most significant features of the description of the original prototypes.
Let the prototypes be expanded in a set of $N$ basis vectors $\mathbf{e}_n$:
$$\mathbf{x}_i = \sum_{n=0}^{N-1} a_{in} \mathbf{e}_n, \quad i = 1, \ldots, M \quad (3)$$
The `best' feature coordinates $\mathbf{e}_n$, spanning a feature space, will be obtained by minimizing the error due to this expansion truncated after the first $K$ terms, i.e.,
$$\min\left(E_K\right) = \min \left\langle \left( \mathbf{x}_i - \sum_{n=0}^{K-1} a_{in} \mathbf{e}_n \right)^2 \right\rangle \quad (4)$$
subject to the orthonormality conditions
$$\mathbf{e}_k^T \mathbf{e}_j = \delta_{kj}. \quad (5)$$
Multiplying (3) by $\mathbf{e}_n^T$ and using (5), we get
$$a_{in} = \mathbf{x}_i^T \mathbf{e}_n, \quad (6)$$
so the error becomes
$$E_K = \sum_{n=K}^{N-1} \mathbf{e}_n^T \mathsf{C} \mathbf{e}_n. \quad (7)$$
The minimization of the sum in (7) is obtained when each term $\mathbf{e}_n^T \mathsf{C} \mathbf{e}_n$ is minimum, since $\mathsf{C}$ is positive definite. By the method of Lagrange multipliers, and the condition (5), we get
$$E_K = \sum_{n=K}^{N-1} \left( \mathbf{e}_n^T \mathsf{C} \mathbf{e}_n - l_n \mathbf{e}_n^T \mathbf{e}_n + l_n \right). \quad (8)$$
The minimum condition $dE_K/d\mathbf{e}_n = 0$ gives the eigenvalue equation
$$\mathsf{C} \mathbf{e}_n = l_n \mathbf{e}_n, \quad (9)$$
which shows that the optimal basis vectors $\mathbf{e}_n$ are the eigenvectors of the covariance matrix, and that the truncation error is the sum of the eigenvalues belonging to the omitted directions.
The transformation matrix $\mathsf{T}$ to go from the pattern space to the feature space has the ordered eigenvectors $\mathbf{e}_0, \ldots, \mathbf{e}_{N-1}$ as its columns.
*/
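A minimal usage sketch follows (the variable count, sample size, and data-filling step are hypothetical; only TPrincipal methods documented on this page are used):

   #include "TPrincipal.h"

   void runPCA()
   {
      const Int_t nVars = 8;                // hypothetical number of variables
      TPrincipal principal(nVars, "N");     // "N": normalize the covariance matrix

      Double_t row[nVars];
      for (Int_t i = 0; i < 1000; i++) {    // hypothetical sample of 1000 points
         // ... fill row[0..nVars-1] with one measured data point ...
         principal.AddRow(row);
      }

      principal.MakePrincipals();           // compute eigenvalues and eigenvectors
      principal.Print("MSE");               // print means, sigmas, and eigenvalues

      Double_t p[nVars];
      principal.X2P(row, p);                // project one point onto the principal axes
   }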
Empty constructor. Do not use.
Constructor. The argument is the number of variables in the sample of data. Options are:
   N        Normalize the covariance matrix (default)
   <empty>  Do not normalize the covariance matrix
The created object is named "principal" and a reference to it is added to the list of special ROOT objects. You can retrieve a pointer to the created object via:
   TPrincipal *principal = (TPrincipal*)gROOT->GetListOfSpecials()->FindObject("principal");
Destructor.
/*Add a data point and update the covariance matrix. The input array must be fNumberOfVariables long.
The covariance matrix and mean values of the input data are calculated on the fly by the following equations:
$$\left\langle x_i \right\rangle^{(0)} = x_{i0}$$
$$\left\langle x_i \right\rangle^{(n)} = \left\langle x_i \right\rangle^{(n-1)} + \frac{1}{n}\left(x_{in} - \left\langle x_i \right\rangle^{(n-1)}\right)$$
$$C_{ij}^{(0)} = 0$$
$$C_{ij}^{(n)} = C_{ij}^{(n-1)} + \frac{1}{n-1}\left(x_{in} - \left\langle x_i \right\rangle^{(n)}\right)\left(x_{jn} - \left\langle x_j \right\rangle^{(n)}\right) - \frac{1}{n} C_{ij}^{(n-1)}$$
The data is stored internally in a TVectorD, with the rows appended one after another:
$$\mathbf{x} = \left[\left(x_{00}, \ldots, x_{(N-1)0}\right), \ldots, \left(x_{0i}, \ldots, x_{(N-1)i}\right), \ldots\right]$$
*/
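As an illustration of these running updates, here is a standalone sketch following the equations above; it is not TPrincipal's actual internals, and the function name and storage layout are invented for the example:

   #include "Rtypes.h"

   // Update running mean and covariance with one new point x[0..N-1].
   // n counts points seen so far and is incremented here;
   // cov is an N*N row-major array, initially all zero.
   void UpdateStats(Int_t N, Int_t &n, const Double_t *x,
                    Double_t *mean, Double_t *cov)
   {
      n++;
      for (Int_t i = 0; i < N; i++)
         mean[i] += (x[i] - mean[i]) / n;          // <x_i>^(n)
      if (n < 2) return;                           // C^(0) stays 0
      for (Int_t i = 0; i < N; i++)
         for (Int_t j = 0; j < N; j++)
            cov[i*N + j] += (x[i] - mean[i]) * (x[j] - mean[j]) / (n - 1)
                          - cov[i*N + j] / n;      // C_ij^(n)
   }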
Browse the TPrincipal object in the TBrowser.
Clear the data in the object. Note that it is not possible to change the dimension of the original data.
Return a row of the user-supplied data. If row is out of bounds, 0 is returned. It is up to the user to delete the returned array. Row 0 is the first row.
Generates the file <filename>, with .C appended if the argument doesn't end in .cxx or .C. The file contains the implementation of the two functions
   void X2P(Double_t *x, Double_t *p)
   void P2X(Double_t *p, Double_t *x, Int_t nTest)
which do the same as TPrincipal::X2P and TPrincipal::P2X respectively. Please refer to these methods. Further, the static variables
   Int_t    gNVariables
   Double_t gEigenValues[]
   Double_t gEigenVectors[]
   Double_t gMeanValues[]
   Double_t gSigmaValues[]
are initialized. The only ROOT header file needed is Rtypes.h. See TPrincipal::MakeRealCode for a list of options.
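For orientation, the generated X2P amounts to projecting the mean-subtracted (and, when normalized, sigma-divided) data vector onto the eigenvector columns. A sketch of what such a generated function can look like, using the static variables listed above (the exact code emitted may differ):

   void X2P(Double_t *x, Double_t *p)
   {
      // p = T^T (x - <x>), component-wise normalized by the sigmas
      for (Int_t i = 0; i < gNVariables; i++) {
         p[i] = 0;
         for (Int_t j = 0; j < gNVariables; j++)
            p[i] += (x[j] - gMeanValues[j])
                    * gEigenVectors[j * gNVariables + i] / gSigmaValues[j];
      }
   }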
/*PRIVATE METHOD:
Find the eigenvalues and eigenvectors of the tridiagonal matrix. The basic idea is to find matrices $\mathsf{Q}$ and $\mathsf{L}$ so that $\mathsf{C} = \mathsf{Q}\mathsf{L}$, where $\mathsf{Q}$ is orthogonal and $\mathsf{L}$ is lower triangular. The QL algorithm consists of a sequence of such orthogonal transformations,
$$\mathsf{C}^{(s)} = \mathsf{Q}^{(s)} \mathsf{L}^{(s)}, \qquad \mathsf{C}^{(s+1)} = \mathsf{L}^{(s)} \mathsf{Q}^{(s)} = \left(\mathsf{Q}^{(s)}\right)^T \mathsf{C}^{(s)} \mathsf{Q}^{(s)},$$
which converges to a diagonal matrix whose diagonal elements are the eigenvalues.
*/
Make histograms of the result of the analysis. The option string says which histograms to create:
   X   Histogram the original data
   P   Histogram the principal components corresponding to the original data
   D   Histogram the difference between the original data and its projection onto a lower-dimensional subspace (2D histograms)
   E   Histogram the eigenvalues
   S   Histogram the square of the residuals (see TPrincipal::SumOfSquareResiduals)
The histograms will be named <name>_<type><number>, where <name> is the first argument, <type> is one of X,P,D,E,S, and <number> is the variable number.
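For example (with a hypothetical option string), principal->MakeHistograms("pca", "XPE") would create histograms of the original data, the principal components, and the eigenvalues, named pca_X<number>, pca_P<number>, and pca_E<number> following the convention above.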
PRIVATE METHOD: Normalize the covariance matrix
Generate the file <classname>PCA.cxx which contains the implementation of the two methods
   void <classname>::X2P(Double_t *x, Double_t *p)
   void <classname>::P2X(Double_t *p, Double_t *x, Int_t nTest)
which do the same as TPrincipal::X2P and TPrincipal::P2X respectively. Please refer to these methods. Further, the public static members
   Int_t    <classname>::fgNVariables
   Double_t <classname>::fgEigenValues[]
   Double_t <classname>::fgEigenVectors[]
   Double_t <classname>::fgMeanValues[]
   Double_t <classname>::fgSigmaValues[]
are initialized, and assumed to exist. The class declaration is assumed to be in <classname>.h and to be provided by the user. See TPrincipal::MakeRealCode for a list of options. The minimal class definition is:

   class <classname> {
   public:
      static Int_t    fgNVariables;
      static Double_t fgEigenVectors[];
      static Double_t fgEigenValues[];
      static Double_t fgMeanValues[];
      static Double_t fgSigmaValues[];
      void X2P(Double_t *x, Double_t *p);
      void P2X(Double_t *p, Double_t *x, Int_t nTest);
   };

Whether the methods <classname>::X2P and <classname>::P2X should be static or not is up to the user.
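For example (hypothetical class name), assuming the minimal declaration above lives in MyPCA.h:

   principal->MakeMethods("MyPCA");   // writes MyPCAPCA.cxx defining MyPCA::X2P and MyPCA::P2X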
Perform the principal components analysis. This is done in several stages:
   * Transform the covariance matrix into a tridiagonal matrix, using the method TPrincipal::MakeTridiagonal().
   * Find the eigenvalues and eigenvectors of the tridiagonal matrix, using the method TPrincipal::MakeEigenVectors().
/*PRIVATE METHOD:
Order the eigenvalues and the corresponding eigenvectors so that the most significant (largest eigenvalue) comes first.
*/
PRIVATE METHOD: This is the method that actually generates the code for the transformations to and from feature space and pattern space. It is called by TPrincipal::MakeCode and TPrincipal::MakeMethods. The options are: none so far.
/*PRIVATE METHOD:
Tridiagonalise the covariance matrix. The basic idea is to perform a sequence of orthogonal transformations, where each transformation eats away the off-diagonal elements, except the innermost ones.
*/
Calculate x as a function of the nTest most significant principal components p, and return it in x. It is the user's responsibility to make sure that both x and p are of the right size (i.e., memory must be allocated for x).
Print the statistics. Options are:
   M   Print mean values of original data
   S   Print sigma values of original data
   E   Print eigenvalues of covariance matrix
   V   Print eigenvectors of covariance matrix
The default is MSE.
/*Calculates the sum of the square residuals, that is
$$E_N = \sum_{i=0}^{N-1} \left(x_i - x'_i\right)^2,$$
where $x'_i$ is the $i$-th component of the data vector reconstructed from its principal components, and $N$ is the number of variables.
*/
Test the PCA by calculating the sum of square residuals (see method SumOfSquareResiduals) and displaying the corresponding histogram.
Calculate the principal components from the original data vector x, and return them in p. It is the user's responsibility to make sure that both x and p are of the right size (i.e., memory must be allocated for p).
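A round-trip sketch using X2P and P2X together, continuing the usage sketch above (buffer sizes and the number of retained components are illustrative):

   Double_t x[8], p[8], xApprox[8];   // 8 variables, hypothetical
   // ... fill x with one data point ...
   principal.X2P(x, p);               // all 8 principal components
   principal.P2X(p, xApprox, 3);      // reconstruct x from the 3 leading components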