Clustering Exploiting Sparse Representation Learner
We propose a learning algorithm that gives a significant and consistent improvement in accuracy over the existing sparse representation learner. The proposed work is built on two essential properties of data. The first is that data points belonging to the same class span the same subspace (hereafter referred to as the subspace property), and the second is that data points belonging to the same class are part of the same cluster (hereafter referred to as the clustering property). This paper proceeds by discussing a failure mode of the sparse representation learner. Then, we introduce the clustering exploiting sparse representation learner, which exploits both the subspace and clustering properties to overcome the issues faced by the sparse representation learner. The paper provides a strong geometric perspective on the classification behaviour induced by the different optimization frameworks discussed in the paper. To support our claims empirically, experiments were conducted comparing the sparse representation learner against the clustering exploiting sparse representation learner on a set of five diverse datasets. The final finding of this work is that clustering exploiting sparse representation learners can safely be assumed to give an improvement of 5\% to 10\% over the ordinary sparse representation learner.
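For concreteness, the sketch below shows the kind of baseline sparse representation classifier that the proposed learner improves upon: the test vector is sparse-coded over a dictionary whose columns are training samples, and it is assigned to the class whose coefficients reconstruct it best. This is only an illustrative assumption about the baseline; the Lasso relaxation of the l1 problem, the regularisation strength alpha, and the name src_predict are not taken from the paper.

import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    # X_train: (n_samples, d), y_train: (n_samples,), x_test: (d,)
    # Dictionary columns are l2-normalised training samples.
    D = X_train.T / np.linalg.norm(X_train.T, axis=0)
    # Sparse coding of the test vector:
    #   min_c ||x_test - D c||_2^2 + alpha * ||c||_1
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D, x_test)
    c = coder.coef_
    # Assign to the class whose coefficients alone reconstruct x_test best.
    classes = np.unique(y_train)
    residuals = [np.linalg.norm(x_test - D @ np.where(y_train == cls, c, 0.0))
                 for cls in classes]
    return classes[int(np.argmin(residuals))]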
Ensembling Sparse Representation Learners Using SVM
To be briefed soon
A Novel Sparsity Based Classification Framework to Exploit Clusters in Data
Traditionally, sparsity has been used to exploit a degenerate property of high dimensional signals, namely that these high dimensional signals lie on or near a low dimensional subspace. This paper explores a new avenue for using sparsity in classification, by exploiting the property that data points belonging to the same class constitute a cluster. Here, classification is done by determining which class's cluster can give a terse linear representation of the given test vector. The optimization framework of the proposed algorithm is still an l1-norm minimization problem under convex constraints, thereby making the problem computationally tractable. The terse representation induced by sparsity and the computational tractability make the proposed work well suited for classification tasks. The efficiency of the proposed algorithm is evaluated by comparing its accuracies with those achieved by other popular machine learning algorithms on a varied set of real datasets. The results achieved by the proposed method compare favorably, reflecting the high efficiency of the algorithm.
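As a rough illustration of the cluster-wise idea described above, the sketch below sparse-codes the test vector separately against each class's cluster of training samples and picks the class giving the tersest, best-fitting representation. The use of Lasso as a convex surrogate for the l1 problem and the scoring rule combining residual and l1 norm are assumptions made for illustration, not details from the paper.

import numpy as np
from sklearn.linear_model import Lasso

def cluster_sparse_predict(X_train, y_train, x_test, alpha=0.01):
    # X_train: (n_samples, d), y_train: (n_samples,), x_test: (d,)
    best_cls, best_score = None, np.inf
    for cls in np.unique(y_train):
        # Sub-dictionary: columns are the training samples of this class's cluster.
        D = X_train[y_train == cls].T
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        coder.fit(D, x_test)
        c = coder.coef_
        # Score the class by reconstruction error plus terseness (l1 norm) of the code.
        score = np.linalg.norm(x_test - D @ c) + alpha * np.abs(c).sum()
        if score < best_score:
            best_cls, best_score = cls, score
    return best_cls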