Support Vector Machine Implementations for Classification & Clustering

Abstract

Background

We describe Support Vector Machine (SVM) applications to classification and clustering of channel current data. SVMs are variational-calculus based methods that are constrained to have structural risk minimization (SRM), i.e., they provide noise tolerant solutions for pattern recognition. The SVM approach encapsulates a significant amount of model-fitting information in the choice of its kernel. In work thus far, novel, information-theoretic, kernels have been successfully employed for notably better performance over standard kernels. Currently there are two approaches for implementing multiclass SVMs. One, called external multi-class, arranges several binary classifiers as a decision tree so that together they perform a single multiclass decision-making function, with each leaf corresponding to a unique class. The second approach, namely internal-multiclass, involves solving a single optimization problem corresponding to the entire data set (with multiple hyperplanes).

Results

Each SVM approach encapsulates a significant amount of model-fitting information in its choice of kernel. In work thus far, novel, information-theoretic, kernels were successfully employed for notably better performance over standard kernels. Two SVM approaches to multiclass discrimination are described: (1) internal multiclass (with a single optimization), and (2) external multiclass (using an optimized decision tree). We describe benefits of the internal-SVM approach, along with further refinements to the internal-multiclass SVM algorithms that offer significant improvement in training time without sacrificing accuracy. In situations where the data isn't clearly separable, making for poor discrimination, signal clustering is used to provide robust and useful information – to this end, novel, SVM-based clustering methods are also described. As with the classification, there are Internal and External SVM Clustering algorithms, both of which are briefly described.

Background

Support Vector Machine

SVMs are fast, easily trained discriminators [1, 2], for which strong discrimination is possible without the over-fitting complications common to neural net discriminators [1]. SVMs strongly draw upon variational methods in their construction and are designed to yield the best estimate of the optimal separating hyperplane (for the binary classifier, see Fig. 1) with confidence parameter information included (via the hyperplane-with-margin optimization used in structural risk minimization). The SVM approach also encapsulates a significant amount of model fitting and discriminatory information in the choice of kernel in the SVM, and a number of novel kernels have been developed. In [3], novel, information-theoretic, kernels were introduced for notably better performance over standard kernels (with discrete probability distributions as part of feature vector data). The classification approach adopted in [3] is designed to scale well to multi-species classification (or a few species in a very noisy environment). The scaling is possible due to use of a decision tree architecture and an SVM approach that permits rejection on weak data. SVMs are usually implemented as binary classifiers, are in many ways superior to neural nets, and may be grouped in a decision tree to arrive at a multi-class discriminator. SVMs are much less susceptible to over-training than neural nets, allowing for a much more hands-off training process that is easily deployable and scalable. A multiclass implementation for an SVM is also possible, where multiple hyperplanes are optimized simultaneously. A (single-optimization, multi-hyperplane) multiclass SVM has a much more complicated implementation, but the reward is a classifier that is much easier to tune and train, especially when considering data rejection. The (single-optimization) multiclass SVM also does not have the non-scalable throughput problem (with tree depth), and even appears to offer a natural drop zone via its margin definition, so it is being considered in further refinements of the method.

Figure 1

A sketch of the hyperplane separability heuristic for SVM binary classification. An SVM is trained to find an optimal hyperplane that separates positive and negative instances, while also constrained by structural risk minimization (SRM) criteria, which here manifests as the hyperplane having a thickness, or "margin," that is made as large as possible in seeking a separating hyperplane. A benefit of using SRM is much less complication due to overfitting (a common problem with Neural Network discrimination approaches). Given its geometric expression, it is not surprising that a key construct in the SVM formulation (via the choice of kernel) is the notion of "nearness" between instances (or nearness to the hyperplane, where it gives a measure of confidence in the classification, i.e., instances further from the decision hyperplane are called with greater confidence). Most notions of nearness explored in this context have stayed with the geometric paradigm and are known as "distance kernels," one example being the familiar Gaussian kernel, which is based on the Euclidean distance: KGaussian(x,y) = exp(-DEucl(x,y)²/2σ²), where DEucl(x,y) = [∑k(xk-yk)²]^(1/2) is the usual Euclidean distance. Those kernels are used in the signal pattern recognition analysis in Figure 8 along with a new class of kernels, "divergence kernels," based on a notion of nearness appropriate when comparing probability distributions (or probability feature vectors). The main example of this is the Entropic Divergence Kernel: KEntropic(x,y) = exp(-DEntropic(x,y)²/2σ²), where DEntropic(x,y) = D(x||y) + D(y||x) and D(..||..) is the Kullback-Leibler Divergence (or relative entropy) between x and y.

SVMs use variational methods in their construction and encapsulate a significant amount of discriminatory information in their choice of kernel. In reference [3], information-theoretic kernels provided notably better performance than standard kernels. Feature extraction was designed to arrive at probability vectors (i.e., discrete probability distributions) on a predefined, and complete, space of possibilities. (The different blockade levels and their frequencies, the emission probabilities, and the transition probabilities, for example.) This turns out to be a very general formulation, wherein feature extraction makes use of signal decomposition into a complete set of separable states that can be interpreted or represented as a probability vector. A probability vector formulation also provides a straightforward hand-off to the SVM classifiers, since with such an approach all feature vectors have the same length. What this means for the SVM, however, is that geometric notions of distance are no longer the best measure for comparing feature vectors. For probability vectors (i.e., discrete distributions), the best measures of similarity are the various information-theoretic divergences: Kullback-Leibler, Renyi, etc. By symmetrizing over the arguments of those divergences, a rich source of kernels is obtained that works well with this type of probabilistic data.
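As an illustration of the two kernel types discussed above (and defined in the Figure 1 caption), the following minimal Python sketch computes a Gaussian (distance) kernel and an entropic (divergence) kernel between probability feature vectors. It is not the paper's implementation; the epsilon guard and sigma values are illustrative choices.

import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # distance kernel: K = exp(-||x - y||^2 / 2 sigma^2)
    d2 = np.sum((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def symmetrized_kl(p, q, eps=1e-12):
    # D(p||q) + D(q||p), with a small epsilon guard against log(0)
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def entropic_kernel(p, q, sigma=1.0):
    # divergence kernel: K = exp(-D_sym(p,q)^2 / 2 sigma^2)
    d = symmetrized_kl(p, q)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

# toy usage on two blockade-level histograms
p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
print(gaussian_kernel(p, q, sigma=0.5), entropic_kernel(p, q, sigma=0.5))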

The SVM discriminators are trained by solving their KKT relations using the Sequential Minimal Optimization (SMO) procedure [4]. A chunking [5, 6] variant of SMO also is employed to manage the large training task at each SVM node. The multi-class SVM training generally involves thousands of blockade signatures for each signal class. The data cleaning needed on the training data is accomplished by an extra SVM training round.

Binary Support Vector Machines

Binary Support Vector Machines (SVMs) are based on a decision-hyperplane heuristic that incorporates structural risk management by attempting to impose a training-instance void, or "margin," around the decision hyperplane [1].

Feature vectors are denoted by xik, where index i labels the M feature vectors (1 ≤ i ≤ M) and index k labels the N feature vector components (1 ≤ k ≤ N). For the binary SVM, labeling of training data is done using the label variable yi = ±1 (with sign according to whether the training instance was from the positive or negative class). For hyperplane separability, elements of the training set must satisfy the following conditions: wβxiβ - b ≥ +1 for i such that yi = +1, and wβxiβ - b ≤ -1 for yi = -1, for some values of the coefficients w1, ..., wN, and b (using the convention of implied sum on repeated Greek indices). This can be written more concisely as: yi(wβxiβ - b) - 1 ≥ 0. Data points that satisfy the equality in the above are known as "support vectors" (or "active constraints").

Once training is complete, discrimination is based solely on position relative to the discriminating hyperplane: wβxβ - b = 0. The boundary hyperplanes on the two classes of data are separated by a distance 2/w, known as the "margin," where w² = wβwβ. By increasing the margin between the separated data as much as possible the optimal separating hyperplane is obtained. In the usual SVM formulation, the goal to maximize w^(-1) is restated as the goal to minimize w². The Lagrangian variational formulation then selects an optimum defined at a saddle point of L(w,b;α) = (wβwβ)/2 - αγyγ(wβxγβ-b) + α0, where α0 = Σγαγ, αγ ≥ 0 (1 ≤ γ ≤ M). The saddle point is obtained by minimizing with respect to {w1, ...,wN,b} and maximizing with respect to {α1, ..., αM}. If yi(wβxiβ-b) - 1 ≥ 0, then maximization on αi is achieved for αi = 0. If yi(wβxiβ-b) - 1 = 0, then there is no constraint on αi. If yi(wβxiβ-b) - 1 < 0, there is a constraint violation, and αi → ∞. If absolute separability is possible the last case will eventually be eliminated for all αi, otherwise it's natural to limit the size of αi by some constant upper bound, i.e., max(αi) = C, for all i. This is equivalent to another set of inequality constraints with αi ≤ C. Introducing sets of Lagrange multipliers, ξγ and μγ (1 ≤ γ ≤ M), to achieve this, the Lagrangian becomes:

L(w,b;α,ξ,μ) = (wβwβ)/2 - αγ[yγ(wβxγβ-b)+ξγ] + α0 + ξ0C - μγξγ, where ξ0 = Σγξγ, α0 = Σγαγ, and αγ ≥ 0, ξγ ≥ 0, and μγ ≥ 0 (1 ≤ γ ≤ M).

At the variational minimum on the {w1, ...,wN,b} variables, wβ = αγyγxγβ, and the Lagrangian simplifies to: L(α) = α0 - (αδyδxδβ)(αγyγxγβ)/2, with 0 ≤ αγ ≤ C (1 ≤ γ ≤ M) and αγyγ = 0, where only the variations that maximize in terms of the αγ remain (known as the Wolfe Transformation). In this form the computational task can be greatly simplified. By introducing an expression for the discriminating hyperplane: fi = wβxiβ - b = αγyγxγβxiβ - b, the variational solution for L(α) reduces to the following set of relations (known as the Karush-Kuhn-Tucker, or KKT, relations): (i) αi = 0 ⇒ yifi ≥ 1, (ii) 0 < αi < C ⇒ yifi = 1, and (iii) αi = C ⇒ yifi ≤ 1. When the KKT relations are satisfied for all of the αγ (with αγyγ = 0 maintained) the solution is achieved. (The constraint αγyγ = 0 is satisfied for the initial choice of multipliers by setting the α's associated with the positive training instances to 1/N(+) and the α's associated with the negatives to 1/N(-), where N(+) is the number of positives and N(-) is the number of negatives.) Once the Wolfe transformation is performed it is apparent that the training data (support vectors in particular, KKT class (ii) above) enter into the Lagrangian solely via the inner product xγβxδβ. Likewise, the discriminator fi, and KKT relations, are also dependent on the data solely via the xγβxiβ inner product.

Generalization of the SVM formulation to data-dependent inner products other than xγβxiβ is possible; such generalizations are usually formulated in terms of the family of symmetric positive definite functions (reproducing kernels) satisfying Mercer's conditions [1].
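Since the trained discriminator touches the data only through kernel evaluations, it can be written directly in kernel form. A minimal sketch, assuming the trained multipliers, labels, support vectors, and bias are already available (names are illustrative, not the paper's code):

import numpy as np

def svm_decision(x, support_vectors, y, alpha, b, kernel):
    # f(x) = sum_gamma alpha_gamma * y_gamma * K(x_gamma, x) - b
    return sum(a * yi * kernel(xi, x)
               for a, yi, xi in zip(alpha, y, support_vectors)) - b

def linear_kernel(u, v):
    # the plain inner product; any Mercer kernel can be substituted here
    return float(np.dot(u, v))

# classification is sign(f(x)); |f(x)| serves as a confidence measure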

Binary SVM Discriminator Implementation

The SVM discriminators are trained by solving their KKT relations using the Sequential Minimal Optimization (SMO) procedure of [4]. The method described here follows the description of [4] and begins by selecting a pair of Lagrange multipliers, {α12}, where at least one of the multipliers has a violation of its associated KKT relations (for simplicity it is assumed in what follows that the multipliers selected are those associated with the first and second feature vectors: {x1,x2}). The SMO procedure then "freezes" variations in all but the two selected Lagrange multipliers, permitting much of the computation to be circumvented by use of analytical reductions:

L(α1, α2; αβ'≥3) = α1 + α2 - (α1²K11 + α2²K22 + 2α1α2y1y2K12)/2 - α1y1v1 - α2y2v2 + αβ'Uβ' - (αβ'αγ'yβ'yγ'Kβ'γ')/2,

with β',γ' ≥ 3, and where Kij ≡ K(xi, xj), and vi ≡ αβ'yβ'Kiβ' with β' ≥ 3. Due to the constraint αβyβ = 0, we have the relation: α1 + sα2 = -γ, where γ ≡ y1αβ'yβ' with β' ≥ 3 and s ≡ y1y2. Substituting the constraint to eliminate references to α1, and performing the variation on α2: ∂L(α2; αβ'≥3)/∂α2 = (1 - s) + ηα2 + sγ(K12 - K11) + sy1v1 - y2v2, where η ≡ (2K12 - K11 - K22). Since vi can be rewritten as vi = wβxiβ - α1y1Ki1 - α2y2Ki2, the variational maximum ∂L(α2; αβ'≥3)/∂α2 = 0 leads to the following update rule:

α2new = α2old - y2[(wβx1β - y1) - (wβx2β - y2)]/η.

Once α2new is obtained, the constraint α2new ≤ C must be re-verified in conjunction with the αβyβ = 0 constraint. If the L(α2; αβ'≥3) maximization leads to an α2new that grows too large, the new α2 must be "clipped" to the maximum value satisfying the constraints. For example, if y1 ≠ y2, then increases in α2 are matched by increases in α1. So, depending on whether α2 or α1 is nearer its maximum of C, we have max(α2) = min{α2+(C-α2), α2+(C-α1)}. Similar arguments provide the following boundary conditions: (i) if s = -1, max(α2) = min{C, C+α2-α1} and min(α2) = max{0, α2-α1}, and (ii) if s = +1, max(α2) = min{C, α2+α1} and min(α2) = max{0, α2+α1-C}. In terms of the new α2new,clipped (clipped as indicated above if necessary), the new α1 becomes:

α1new = α1old + s(α2old - α2new,clipped),

where s ≡ y1y2 as before. After the new α1 and α2 values are obtained there still remains the task of obtaining the new b value. If the new α1 is not "clipped" then the update must satisfy the non-boundary KKT relation: y1f(x1) = 1, i.e., fnew(x1) - y1 = 0. By relating fnew to fold the following update on b is obtained:

bnew1 = b - (fnew(x1) - y1) - y1(α1new - α1old)K11 - y2(α2new,clipped - α2old)K12.

If α1 is clipped but α2 is not, the above argument holds for the α2 multiplier and the new b is:

bnew2 = b - (fnew(x2) - y2) - y2(α2new - α2old)K22 - y1(α1new,clipped - α1old)K12.

If both α1 and α2 values are clipped then any of the b values between bnew1 and bnew2 is acceptable, and following the SMO convention, the new b is chosen to be:

bnew = (bnew1 + bnew2)/2.
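The pair update derived above can be summarized in the following minimal sketch (not the paper's implementation). It assumes K(i, j) returns the kernel value for training points i and j, f caches the current decision values f(xi), and alpha, y, b, C are the current multipliers, labels, threshold, and upper bound; refreshing the cached f values after the step is left to the caller. The threshold-update signs follow Platt's SMO pseudocode for the f(x) = wβxβ - b convention used earlier.

def smo_pair_update(i1, i2, alpha, y, b, f, K, C):
    # one SMO pair update in the f(x) = w.x - b convention (Platt's pseudocode signs)
    s = y[i1] * y[i2]
    eta = 2.0 * K(i1, i2) - K(i1, i1) - K(i2, i2)   # eta < 0 for a non-degenerate pair
    if eta >= 0:
        return alpha, b                             # skip degenerate pairs in this sketch
    E1, E2 = f[i1] - y[i1], f[i2] - y[i2]
    a2 = alpha[i2] - y[i2] * (E1 - E2) / eta
    if s == -1:                                     # clip to the feasible segment
        lo, hi = max(0.0, alpha[i2] - alpha[i1]), min(C, C + alpha[i2] - alpha[i1])
    else:
        lo, hi = max(0.0, alpha[i2] + alpha[i1] - C), min(C, alpha[i2] + alpha[i1])
    a2 = min(max(a2, lo), hi)
    a1 = alpha[i1] + s * (alpha[i2] - a2)
    # threshold update: average of the two single-point solutions
    b1 = b + E1 + y[i1] * (a1 - alpha[i1]) * K(i1, i1) + y[i2] * (a2 - alpha[i2]) * K(i1, i2)
    b2 = b + E2 + y[i1] * (a1 - alpha[i1]) * K(i1, i2) + y[i2] * (a2 - alpha[i2]) * K(i2, i2)
    alpha[i1], alpha[i2] = a1, a2
    return alpha, 0.5 * (b1 + b2)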

Multiclass SVM Methods

The SVM binary discriminator offers high performance and is very robust in the presence of noise. This allows a variety of reductionist multiclass approaches, where each reduction is a binary classification (for classifying cards by suit, for example, first classify as red or black, then as heart or diamond for red and spade or club for black). The SVM Decision Tree is one such approach, and a collection of them (an SVM Decision Forest) can be used to avoid problems with throughput biasing. Alternatively, the variational formalism can be modified to perform a multi-hyperplane optimization for a direct multiclass solution [7–9], and that is what is described next.

SVM-Internal Multiclass

In the formulation in [7], there are 'k' classes and hence 'k' linear decision functions – a description of their approach is given here. For a given input 'x', the output vector corresponds to the output from each of these decision functions. The class of the largest element of the output vector gives the class of 'x'.

Each decision function is given by: fm(x) = wm.x + bm for all m = (1, 2, ..., k). If yi is the class of the input xi, then for each input data point the misclassification error is defined as: maxm{fm(xi) + 1 - δim} - fyi(xi), where δim is 1 if m = yi and 0 if m ≠ yi. We add slack variables ζi ≥ 0 for all i, proportional to the misclassification error: maxm{fm(xi) + 1 - δim} - fyi(xi) = ζi, hence fyi(xi) - fm(xi) + δim ≥ 1 - ζi for all i, m. To minimize this classification error and maximize the distance between the hyperplanes (Structural Risk Minimization) we have the following formulation:

Minimize: ∑iζi + β(1/2)∑mwmTwm + (1/2)∑mbm²,

where β > 0 is defined as a regularization constant.

Constraint: wyi.xi + byi - wm.xi - bm - 1 + ζi + δim ≥ 0 for all i,m

Note: the term (1/2)∑mbm² is added for de-coupling, 1/β = C, and m = yi in the above constraint is consistent with ζi ≥ 0. The Lagrangian is:

L(w,b,ζ) = ∑iζi + β(1/2)∑mwmTwm + (1/2)∑mbm² - ∑i,mαim(wyi.xi + byi - wm.xi - bm - 1 + ζi + δim)

Where all αim's are positive Lagrange multipliers. Now taking partial derivatives of the Lagrangian and equating them to zero (saddle point solution): ∂L/∂ζi = 1 - ∑mαim = 0. This implies that ∑mαim = 1 for all i. ∂L/∂bm = bm + ∑iαim - ∑iδim = 0 for all m. Hence bm = ∑i(δim - αim). Similarly: ∂L/∂wm = βwm + ∑iαimxi - ∑iδimxi = 0 for all m. Hence wm = (1/β)[∑i(δim - αim)xi]. Substituting the above equations into the Lagrangian and simplifying reduces it to the dual form:

Maximize: -1/2∑i,j,m(δim - αim)(δjm - αjm)(Kij + β) - β∑i,mδimαim

Constraint: 0 ≤ αim, ∑mαim = 1, i = 1...l; m = 1...k

Where Kij = xi.xj is the Kernel generalization. In vector notation:

Maximize: -1/2∑i,j(Δyi - Ai).(Δyj - Aj)(Kij + β) - β∑iΔyi.Ai

Constraint: 0 ≤ Ai, Ai.1 = 1, i = 1...l

Let τi = Δyi - Ai. Hence, after ignoring the constant: -1/2∑i,jτi.τj(Kij + β) + β∑iΔyi.τi, subject to: τi ≤ Δyi, τi.1 = 0, i = 1...l. The dual is solved (determining the optimum values of all the τs) using the decomposition method.

Minimize: 1/2∑i,j,mτimτjm(Kij + β) - β∑i,mδimτim

Constraint: τi ≤ Δyi, τi.1 = 0, i = 1 ...l

The Lagrangian of the dual is:

L = 1/2∑i,j,mτimτjm(Kij + β) - β∑i,mδimτim - ∑i,muim(δim - τim) - ∑i,mviτim

Subject to uim ≥ 0

We take the gradient of the Lagrangian with respect to τim:

∂L/∂τim = ∑jτjm(Kij + β) - βδim + uim - vi = 0

Introducing fim = ∑jτjm(Kij + β) - βδim, the stationarity condition becomes fim + uim - vi = 0. By the KKT conditions we get two more equations:

uim(δim - τim) = 0 and uim ≥ 0

Case I: if δim = τim, then uim ≥ 0, hence fim ≤ vi. Case II: if τim < δim, then uim = 0, hence fim = vi. Note: There is at least one 'm' for each i such that τim < δim is satisfied.

Therefore combining Case I & II, we get:

maxm{fim} ≤ vi ≤ minm: τim < δim{fim}

Or maxm{fim} ≤ minm: τim < δim{fim}

Or maxm{fim} - minm: τim < δim{fim} ≤ ε

Note: τim < δim implies that αim > 0. Since ∑mαim = 1, for any i each αim is treated as the probability that the data point belongs to class m. Hence we define KKT violators as:

maxm{fim} - minm: τim < δim{fim} > ε for all i.
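A minimal sketch of this KKT-violation test, assuming tau and delta are l-by-k arrays (with delta[i, yi] = 1), K is the l-by-l kernel matrix, and beta and eps are the regularization constant and tolerance; names are illustrative.

import numpy as np

def kkt_violators(tau, delta, K, beta, eps):
    # F[i, m] = f_im = sum_j tau_jm (K_ij + beta) - beta * delta_im
    F = (K + beta) @ tau - beta * delta
    violators = []
    for i in range(tau.shape[0]):
        free = tau[i] < delta[i]          # the m with tau_im < delta_im (alpha_im > 0)
        if F[i].max() - F[i][free].min() > eps:
            violators.append(i)
    return violators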

Decomposition Method to Solve the Dual

Using the method in [7] to solve the Dual, maximize

Q(τ) = -1/2∑i,jτi.τj(Kij + β) + β∑iΔyi.τi

Subject to: τi ≤ Δyi, τi.1 = 0, i = 1 ...l

Expanding in terms of a single 'τ' vector:

Qp(τp) = -1/2Ap(τp.τp) - Bp.τp + Cp

Where:

Ap = Kpp + β

Bp = -βΔyp + ∑i≠pτi(Kip + β)

Cp = -1/2∑i,j≠pτi.τj(Kij + β) + β∑i≠pτi.Δyi

Therefore ignoring the constant term 'Cp', we have to minimize:

Qp(τp) = 1/2Ap(τp.τp) + Bp.τp

Subject to: τp ≤ Δyp and τp.1 = 0

The above equation can also be written as:

Qp(τp) = 1/2Ap(τp + Bp/Ap).(τp + Bp/Ap) - Bp.Bp/2Ap

Substitute v = (τp + Bp/Ap) & D = (Δyp + Bp/Ap) in the above equation. Hence, after ignoring the constant term Bp.Bp/2Ap and the multiplicative factor 'Ap' we have to minimize:

Q(v) = 1/2v.v = 1/2||v||2

Subject to: v ≤ D and v.1 = D.1 - 1

The Lagrangian is given by:

L(v) = 1/2||v||2 - ∑mρm(Dm - vm) - σ[∑m(vm - Dm) + 1]

Subject to: ρm ≥ 0

Hence ∂L/∂vm = vm + ρm - σ = 0. By the KKT conditions we have: ρm(Dm - vm) = 0 & ρm ≥ 0, also vm ≤ Dm. Hence, combining the above inequalities, we have: vm = Min{Dm, σ}, so ∑mvm = ∑mMin{Dm, σ} = ∑mDm - 1. This uniquely defines the 'σ' satisfying the equation, AND that 'σ' is the optimal solution of the quadratic optimization problem (refer to [7] for a formal proof).

Solve for 'σ': We have Min{Dm, σ} + Max{Dm, σ} = Dm + σ, hence ∑m[Dm + σ - Max{Dm, σ}] = ∑mDm - 1, or σ = (1/k)[∑mMax{Dm, σ} - 1]; hence we find the σ (iteratively) that satisfies |(σl - σl+1)/σl| ≤ tolerance. The initial value for 'σ' is set to σ1 = (1/k)[∑mDm - 1].

Update rule for 'τ': Once we have 'σ', τnewm = vm - Bpm/(Kpp + β), or:

τnewm = vm - fpm/(Kpp + β) + τoldm
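A minimal sketch of this inner solve, assuming the current row tau_p, the values f_p (the fpm above), the diagonal kernel value K_pp, the constant beta, and the indicator vector delta_p (1 at class yp, 0 elsewhere) are available as arrays; the tolerance and iteration cap are illustrative choices.

import numpy as np

def solve_sigma(D, tol=1e-8, max_iter=1000):
    # fixed-point iteration for sigma with sum_m Min{D_m, sigma} = sum(D) - 1
    k = len(D)
    sigma = (np.sum(D) - 1.0) / k
    for _ in range(max_iter):
        new = (np.sum(np.maximum(D, sigma)) - 1.0) / k
        if sigma != 0 and abs((sigma - new) / sigma) <= tol:
            return new
        sigma = new
    return sigma

def update_tau_row(tau_p, f_p, K_pp, beta, delta_p):
    # one decomposition step for pattern p: tau_p_new = v - f_p/(K_pp + beta) + tau_p_old
    A = K_pp + beta
    B_over_A = f_p / A - tau_p            # since f_pm = B_pm + tau_pm * A
    D = delta_p + B_over_A                # D = Delta_yp + B_p / A_p
    v = np.minimum(D, solve_sigma(D))     # v_m = Min{D_m, sigma}
    return v - B_over_A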

SVM-Internal Clustering

Let {xi} be a data set of 'N' points in Rd. Using a non-linear transformation φ, we transform 'x' to some high-dimensional space called Kernel space and look for the smallest enclosing sphere of radius 'R'. Hence we have: ||φ(xj) - a ||2 ≤ R2 for all j = 1,...,N; where 'a' is the center of the sphere. Soft constraints are incorporated by adding slack variables 'ζj':

||φ(xj) - a ||2 ≤ R2 + ζj for all j = 1,...,N

Subject to: ζj ≥ 0

We introduce the Lagrangian as:

L = R2 - ∑jβj(R2 + ζj - ||φ(xj) - a ||2) - ∑jζjμj + C∑jζj

Subject to: βj ≥ 0, μj ≥ 0,

where C is the cost for outliers and hence C∑jζj is a penalty term. Setting to zero the derivative of 'L' w.r.t. R, a and ζ we have: ∑jβj = 1; a = ∑jβjφ(xj); and βj = C - μj.

Substituting the above equations into the Lagrangian, we have the dual formalism as:

W = 1 - ∑i,jβiβjKij where 0 ≤ βi ≤ C; Kij = exp(-||xi - xj||2/2σ2)

Subject to: ∑iβi = 1

By KKT conditions we have: ζjμj = 0 and βj(R2 + ζj - ||φ(xj) - a ||2) = 0.

In the kernel space, if for a data point 'xj' we have ζj > 0, then βj = C and the point lies outside of the sphere, i.e., R2 < ||φ(xj) - a ||2. This point becomes a bounded support vector or BSV. Similarly, if ζj = 0 and 0 < βj < C, then it lies on the surface of the sphere, i.e., R2 = ||φ(xj) - a ||2. This point becomes a support vector or SV. If ζj = 0 and βj = 0, then R2 > ||φ(xj) - a ||2 and hence this point is enclosed within the sphere.
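A minimal sketch of this SV/BSV/interior bookkeeping, given the converged dual weights beta (summing to 1) and the soft-margin constant C; the tolerance is an illustrative choice.

def classify_points(beta, C, tol=1e-8):
    labels = []
    for b in beta:
        if b >= C - tol:
            labels.append("BSV")          # outside the sphere (outlier)
        elif b > tol:
            labels.append("SV")           # on the sphere surface (cluster boundary)
        else:
            labels.append("interior")     # strictly inside the sphere
    return labels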

Nanopore Detector based Channel Current Cheminformatics

All data analyzed is obtained from a nanopore detector and relates to single molecule blockades of a single protein channel. The protein channel is the α-hemolysin pore-forming toxin from Staphylococcus aureus, which has a molecule-sized channel opening for partial capture, if not translocation, of biomolecules drawn in by electrophoretic forces (such as DNA) [3, 10–20]. Further details on the detector and signal processing architecture are shown in Fig. 2. Further detail on the components of the extracted SVM feature vectors (on individual blockade events) is given in the Methods. Although the figure can only show one SVM classifier implementation (that used in [3]), the data sets examined by all the SVMs described are kept the same (for comparative purposes), so the signal acquisition and feature extraction stages show how the SVM feature vectors are obtained.

Figure 2

a. (A) shows a nanopore device based on the α-hemolysin channel. It has been used for analysis of single DNA molecules, such as the ssDNA shown, and dsDNA; a nine base-pair DNA hairpin is shown in (B) superimposed on the channel geometry. The channel current blockade trace for the nine base-pair DNA hairpin blockade from (B) is shown in (C). b shows the signal processing architecture that was used to classify DNA hairpins with this approach: Signal acquisition was performed using a time-domain, thresholding, Finite State Automaton, followed by adaptive pre-filtering using a wavelet-domain Finite State Automaton. Hidden Markov Model processing with Expectation-Maximization was used for feature extraction on acquired channel blockades. Classification was then done by Support Vector Machine on five DNA molecules: four DNA hairpin molecules with nine base-pair stem lengths that only differed in their blunt-ended DNA termini, and an eight base-pair DNA hairpin. The accuracy shown is obtained upon completing the 15th single molecule sampling/classification (in approx. 6 seconds), where SVM-based rejection on noisy signals was employed.

Information measures

The fundamental information measures are Shannon entropy, mutual information, and relative entropy (also known as the Kullback-Leibler divergence or distance). Shannon entropy, σ = -Σxp(x)log(p(x)), is a measure of the information in distribution p(x). Mutual Information, μ = ΣxΣyp(xy)log(p(xy)/p(x)p(y)), is a measure of the information one random variable has about another random variable. Relative Entropy (Kullback-Leibler distance), ρ = Σxp(x)log(p(x)/q(x)), is a measure of distance between two probability distributions. Mutual information is a special case of relative entropy between a joint probability (two-component in simplest form) and the product of the component probabilities.
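For reference, a minimal sketch of the three measures for discrete distributions supplied as arrays that sum to one; the epsilon guard against log(0) is an illustrative choice.

import numpy as np

def shannon_entropy(p, eps=1e-12):
    p = np.asarray(p, float)
    return float(-np.sum(p * np.log(p + eps)))

def relative_entropy(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def mutual_information(pxy):
    # MI = D( p(x,y) || p(x)p(y) ), with pxy a 2-D joint distribution
    pxy = np.asarray(pxy, float)
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return relative_entropy(pxy.ravel(), (px * py).ravel())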

Khinchin derivation of Shannon entropy

In his now famous 1948 paper, Claude Shannon [21] provided a quantitative measure for entropy in connection with communication theory. The Shannon entropy measure was later put on a more formal footing by A. I. Khinchin in an article where he proves that, with certain reasonable assumptions, the Shannon entropy is unique [22]. A statement of the theorem is as follows:

Khinchin Uniqueness Theorem

Let H(p1, p2, ..., pn) be a function defined for any integer n and for all values p1, p2, ..., pn such that pk ≥ 0 (k = 1,2,...,n) and Σkpk = 1. If for any n this function is continuous with respect to its arguments, and if the function obeys the three properties listed below, then H(p1, p2, ..., pn) = -λΣkpk log(pk), where λ is a positive constant (with Shannon entropy recovered for the convention λ = 1). The three properties are:

  (1) For given n and for Σkpk = 1, the function takes its largest value for pk = 1/n (k = 1,2,...,n). This is equivalent to Laplace's principle of insufficient reason, which says if you don't know anything assume the uniform distribution (this also agrees with the Occam's Razor assumption of minimum structure).

  (2) H(ab) = H(a) + Ha(b), where Ha(b) = -Σap(a)Σbp(b|a)log(p(b|a)) is the conditional entropy. This is consistent with H(ab) = H(a) + H(b) for independent a and b, with the conditional-probability modification used when they are not independent.

  (3) H(p1, p2, ..., pn, 0) = H(p1, p2, ..., pn). This reductive relationship, or something like it, is implicitly assumed when describing any system in "isolation."

Relative Entropy Uniqueness

This falls out of a geometric formalism on families of distributions: the Information Geometry formalism described by S. Amari [23–25]. Together with Laplace's principle of insufficient reason on the choice of "reference" distribution in the relative entropy expression, this reduces to Shannon entropy, and thus gives uniqueness of Shannon entropy from a geometric context. The parallel with geometry is the Euclidean distance for "flat" geometry (simplest assumption of structure), vs. the "distance" between distributions as described by the Kullback-Leibler divergence.

The Success of Distributions of Nature suggests Generalization from Geometric Feature-Space Kernels to Distribution Feature-Space Kernels

Using the Shannon entropy measure it is possible to derive the classic probability distributions of statistical physics by maximizing the Shannon measure subject to appropriate linear moment constraints. Constrained variational optimizations involving the Shannon entropy measure can, thus, provide a unified framework with which to describe all, or most, of statistical mechanics. The distributions derivable within the maximum entropy formalism include the Maxwell-Boltzmann, Bose-Einstein, Fermi-Dirac, and Intermediate distributions. The maximum entropy method for defining statistical mechanical systems has been extensively studied in [26].

Both statistical estimation and maximum entropy estimation are concerned with drawing inferences from partial information. The maximum entropy approach estimates a probability density function when only a few moments are known (where there are an infinite number of higher moments). The statistical approach estimates the density function when only one random sample is available out of an infinity of possible samples. The maximum entropy estimation may be significantly more robust (against over-fitting, for example) in that it has an Occam's Razor argument that "cuts both ways" – use all of the information given and avoid using any information not given. This means that out of all of the probability distributions consistent with the set of constraints, choose the one that has maximum uncertainty, i.e., maximum entropy [27].

At the same time that Jaynes was doing his work, essentially an optimization principle based on Shannon entropy, Solomon Kullback was exploring optimizations involving a notion of probabilistic distance known as the Kullback-Leibler distance, referred to above as the relative entropy [28]. The resulting minimum relative entropy (MRE) formalism reduces to the maximum entropy formalism of Jaynes when the reference distribution is uniform. The information distance that Kullback and Leibler defined was an oriented measure of "distance" between two probability distributions. The MRE formalism can be understood to be an extension of Laplace's Principle of Insufficient Reason (e.g., if nothing is known assume the uniform distribution) in a manner like that employed by Khinchin in his uniqueness proof, but now incorporating constraints.

In their book Entropy Optimization Principles with Applications [27], Kapur and Kesavan argue for a generalized entropy optimization approach to the description of distributions. They believe every probability distribution, theoretical or observed, is an entropy optimization distribution, i.e., it can be obtained by maximizing an appropriate entropy measure, or by minimizing a relative entropy measure with respect to an appropriate a priori distribution. The primary objective in such a modeling procedure is to represent the problem as a simple combination of probabilistic entities that have a simple set of moment constraints. Generalized measures of distributional distance can also be explored along the lines of generalized measures of geometric distance. In physics, however, not every geometric distance is of interest, since the special theory of relativity tells us that spacetime is locally flat (Lorentzian, which is Euclidean on spatial slices), with the Riemannian metrics as the metric generalization. Likewise, perhaps not all distributional distance measures are created equal either. What the formalism of Information Geometry [23–25] reveals, among other things, is that relative entropy is uniquely structureless (like flat geometry) and is perturbatively stable, i.e., has a well-defined Taylor expansion at short divergence range, just like the locally Euclidean metrics at short distance range.

Results

SVM Kernel/Algorithm Variants

The SVM Kernels of interest are "regularized" distances or divergences: they have the form of an exponential whose argument is the negative of a squared distance measure (d²(x,y)) or a symmetrized divergence measure (D(x,y)), the former when using a geometric heuristic for comparison of feature vectors, the latter when using a distributional heuristic. For the Gaussian Kernel: d²(x,y) = Σk(xk-yk)²; for the Absdiff Kernel: d²(x,y) = (Σk|xk-yk|)^(1/2); and for the Symmetrized Relative Entropy Kernel: D(x,y) = D(x||y) + D(y||x), where D(x||y) is the standard relative entropy. Results are shown in Fig. 3.

Figure 3

Comparative results are shown on performance of Kernels and algorithmic variants. The classification is between two DNA hairpins (in terms of features from the blockade signals they produce when occluding ion flow through a nanometer-scale channel). Implementations: WH SMO (W); Platt SMO (P); Keerthi1 (1); and Keerthi2 (2). Kernels: Absdiff (a); Entropic (e); and Gaussian (g). The best algorithm/kernel on this and other channel blockade data studied has consistently been the WH SMO variant and the Absdiff and Entropic Kernels. Another benefit of the WH SMO variant is its significant speedup over the other methods (about half the time of Platt SMO and one fourth the time of Keerthi 1 or 2).

The SVM algorithm variants being explored are only briefly mentioned here. In the standard Platt SMO algorithm, η = 2*K12 - K11 - K22, and speedup variations have been described that avoid calculating this value entirely. A middle ground is sought with the following definition: "η = 2*K12 - 2; If (η >= 0) { η = -1; }" (labeled WH SMO in Fig. 3; underflow handling and other implementation details also differ slightly in the version shown).
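A minimal sketch contrasting the two η computations; the shortcut assumes a normalized kernel (K11 = K22 = 1) so that 2*K12 - 2 coincides with the standard value without the two extra kernel evaluations.

def eta_platt(K11, K22, K12):
    # standard Platt value, requiring three kernel evaluations
    return 2.0 * K12 - K11 - K22

def eta_wh(K12):
    # WH SMO shortcut, valid when the kernel is normalized so K11 = K22 = 1
    eta = 2.0 * K12 - 2.0
    if eta >= 0:
        eta = -1.0        # guard against a degenerate (non-negative) eta
    return eta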

SVM-Internal Speedup via differentiating BSVs and SVs

Fig. 4 shows the percent increase in iterations-to-convergence against the 'C' value. Fig. 5 shows the number of bounded support vectors (BSVs) as a function of 'C' value. Since the algorithm presented in [7] does not differentiate between SVs and BSVs, a lot of time is spent trying to adjust the weights of the BSVs, i.e., the weak data. The weight of a BSV may range over [0, 0.5) in their algorithm. In our modification to the algorithm, shown below, as soon as a BSV is identified (as specified by the Case III conditions), its weight is no longer adjusted. Hence faster convergence is achieved without sacrificing accuracy:

Figure 4

The percent increase in iterations-to-convergence against the 'C' value. For very low values of 'C' the gain is doubled while for very large values of 'C' the gain is low (almost constant for C > 150). Thus we note the dependence of the gain on 'C' value.

Figure 5

The number of bounded support vectors (BSV) as a function of 'C' value. There are many BSVs for very low values of 'C' and very few BSVs for large values of 'C'. Thus we can say that the number of BSVs plays a vital role in the speed of convergence of the algorithm.

For the BSV/SV-tracking speedup, the KKT violators are redefined as:

For all m ≠ yi we have:

αim{fyi - fm - 1 + ζi} ≥ 0

Subject to: 1 ≥ αim ≥ 0; ∑mαim = 1; ζi ≥ 0 for all i,m

Where fm = (1/β)[wm.xi + bm] for all m

Case I: If αim = 0 for the m s.t. fm = fmmax, this implies αiyi > 0 and hence ζi = 0; hence fyi - fmmax - 1 ≥ 0.

Case II: If 1 > αim > 0 for the m s.t. fm = fmmax, and αiyi > αim, this implies ζi = 0; hence fyi - fmmax - 1 = 0.

Case III: If 1 ≥ αim > 0 for the m s.t. fm = fmmax, and αiyi ≤ αim, this implies ζi > 0; hence fyi - fmmax - 1 + ζi = 0, or fyi - fmmax - 1 < 0.
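A minimal sketch of the bookkeeping behind this speedup, assuming alpha and f are l-by-k arrays of weights and decision values, y holds the class indices, fmmax is taken over the competing classes m ≠ yi, and frozen flags are carried between iterations; the names and exact test are illustrative, not the paper's implementation.

import numpy as np

def flag_bsvs(alpha, f, y, frozen):
    # alpha, f: (l x k) arrays of weights and decision values; y: class indices
    for i in range(alpha.shape[0]):
        if frozen[i]:
            continue                         # already identified as a BSV; skip
        competing = f[i].copy()
        competing[y[i]] = -np.inf            # take fmmax over the classes m != y_i
        m_star = int(np.argmax(competing))
        if alpha[i, m_star] > 0 and alpha[i, y[i]] <= alpha[i, m_star]:
            frozen[i] = True                 # Case III: weak data, stop adjusting weights
    return frozen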

Data Rejection Tuning with SVM-Internal vs SVM-External Classifiers

The SVM Decision Tree shown in Fig. 2b obtained nearly perfect sensitivity and specificity, with a high data rejection rate and a highly non-uniform class signal-calling throughput. In Fig. 6, the Percentage Data Rejection vs SN+SP curves are shown for test data classification runs with a binary classifier with one molecule (the positive, given by label) versus the rest (the negative). Since the signal calling wasn't passed through a Decision Tree in the way these curves were generated, they don't accurately reflect total throughput, and they don't benefit from the "shielding" provided by the Decision Tree prototype shown in Fig. 2b. In the SVM Decision Tree implementation described in Fig. 2b [3], this is managed more comprehensively, to arrive at a five-way signal-calling throughput at the furthest node of 16% (in Fig. 2b, 9CG and 9AT have to pass to the furthest node to be classified), while the best throughput, for signal calling on the 8GC molecules, is 75%.

Figure 6

The Percentage Data Rejection vs SN+SP curves are shown for test data classification runs with a binary classifier with one molecule (the positive, given by label) versus the rest (the negative). Since the signal calling wasn't passed through a Decision Tree, these curves don't accurately reflect total throughput, and they don't benefit from the "shielding" provided by the Decision Tree prototype shown in Fig. 2b. The Relative Entropy Kernel is shown because it provided the best results (over Gaussian and Absdiff).

The SVM Decision Tree classifier's high, non-uniform, rejection can be managed by generalizing to a collection of Decision Trees (with different species at the furthest node). The problem is that tuning and optimizing a single decision tree is already a large task, even for five species (as in Fig. 2). With a collection of trees this problem is seemingly compounded, but can actually be lessened in some ways in that now each individual tree need not be so well-tuned/optimized. Although more complicated to implement than an SVM-External method, the SVM-Internal multiclass methods are not similarly fraught with tuning/optimization complications. Fig. 7 shows the Percentage Data Rejection vs SN+SP curves on the same train/test data splits as used for Fig. 6, except now the drop curves are to be understood as simultaneous curves (not sequential application of such curves as in Fig. 6). Thus, comparable, or better, performance is obtained with the multiclass-internal approach and with far less effort since there is no managing and tuning of Decision Trees. Another surprise, and even stronger argument for the SVM-Internal approach to the problem, is that a natural drop zone is indicated by the margin.

Figure 7

The Percentage Data Rejection vs SN+SP curves are shown for test data classification runs with a multiclass discriminator. The following criterion is used for dropping weak data: for any data point xi, if maxm{fm(xi)} ≤ Confidence Parameter, then the data point xi is dropped. For this data set the AbsDiff kernel (σ2 = 0.2) performed best, and a confidence parameter of 0.8 achieves 100% accuracy.

Marginal Drop with SVM-Internal

Suppose we define the criterion for dropping weak data as the margin: For any data point xi, let maxm{fm(xi)} = fyi, and let fm = maxm≠yi{fm(xi)}; then we define the margin as (fyi - fm), and the data point xi is dropped if (fyi - fm) ≤ Confidence Parameter. (For this data set, using the Gaussian, AbsDiff & Sentropic kernels, a confidence parameter of at least (0.00001)*C was required to achieve 100% accuracy.) The results are shown in Table 1. Using the margin drop approach, there is even less tuning, and there is improved throughput (approximately 75% for all species).

Table 1 The table shows the results of dropping data that falls in the margin. For any data point xi, let maxm{fm(xi)} = fyi, and let fm = maxm≠yi{fm(xi)}; then we define the margin as (fyi - fm), and the data point xi is dropped if (fyi - fm) ≤ Confidence Parameter. Using the margin drop approach, there is even less tuning, and there is improved throughput (approximately 75% for all species).
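A minimal sketch of the margin-based rejection rule, given the vector of decision values fm(xi) for a single data point and the confidence parameter (names are illustrative):

import numpy as np

def margin_drop(f_x, confidence):
    # f_x: the k decision values f_m(x_i) for one data point
    f_x = np.asarray(f_x, float)
    order = np.argsort(f_x)[::-1]
    margin = f_x[order[0]] - f_x[order[1]]   # f_yi minus the best competing class value
    if margin <= confidence:
        return None                          # reject as weak data
    return int(order[0])                     # otherwise call the top class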

SVM-Internal Clustering

The SVM-Internal approach to clustering was originally defined by [29]. Data points are mapped by means of a kernel to a high dimensional feature space where we search for the minimal enclosing sphere. In what follows, Keerthi's method is used to solve the dual (see Methods for further details).

The minimal enclosing sphere, when mapped back into the data space, can separate into several components; each enclosing a separate cluster of points. The width of the kernel (say Gaussian) controls the scale at which the data is probed while the soft margin constant helps to handle outliers and over-lapping clusters. The structure of a dataset is explored by varying these two parameters, maintaining a minimal number of support vectors to assure smooth cluster boundaries.

We have used the algorithm defined in [29] to identify the clusters, with methods adapted from [30, 31] for their handling. If the number of data points is 'n', then n(n-1)/2 comparisons are required. We have made modifications to the algorithm such that we eliminate comparisons that do not have an impact on the cluster connectivity. Hence the number of comparisons required will be less than n(n-1)/2.

In each comparison we sub-divide the line segment connecting the two data points into 20 parts; hence we obtain 19 different points on this line segment. The two data points belong to the same cluster only if all 19 points lie inside the cluster. Given the cost of evaluating at most 19 points for every comparison, the need to eliminate comparisons that do not have an impact on the cluster connectivity becomes even more important. Finally, we used the Depth First Search (DFS) algorithm for the cluster harvest. Results are shown in Tables 2 and 3. The approach to solving the Dual problem is shown in the Methods.

Table 2 The table shows clustering predictions when working with 400 Samples (200 each of 9GC & 9CG) with a Gaussian Kernel with Width = 50 (σ2 = 0.01).
Table 3 The table shows clustering predictions when working with 1200 Samples (600 each of 9GC & 9CG) with a Gaussian Kernel with Width = 50 (σ2 = 0.01).
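A minimal sketch of the segment-sampling connectivity test and DFS cluster harvest described above, without the comparison-pruning optimization; radius_fn is assumed to return R(x) in kernel space for a data-space point x, R is the sphere radius taken from the support vectors, and points holds the data vectors as numpy arrays. Names are illustrative.

import numpy as np

def same_cluster(x1, x2, radius_fn, R, n_segments=20):
    # join two points only if the 19 interior points of the segment map inside the sphere
    for t in np.linspace(0.0, 1.0, n_segments + 1)[1:-1]:
        if radius_fn((1.0 - t) * x1 + t * x2) > R:
            return False
    return True

def harvest_clusters(points, radius_fn, R):
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):                       # all-pairs version (no comparison pruning)
        for j in range(i + 1, n):
            if same_cluster(points[i], points[j], radius_fn, R):
                adj[i].append(j)
                adj[j].append(i)
    labels, cluster = [-1] * n, 0
    for s in range(n):                       # depth-first search cluster harvest
        if labels[s] != -1:
            continue
        stack = [s]
        while stack:
            u = stack.pop()
            if labels[u] == -1:
                labels[u] = cluster
                stack.extend(v for v in adj[u] if labels[v] == -1)
        cluster += 1
    return labels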

SVM-External Clustering

As with the multiclass SVM discriminator implementations, the strong performance of the binary SVM enables SVM-External as well as SVM-Internal approaches to clustering. Our external-SVM clustering algorithm clusters data vectors with no a priori knowledge of each vector's class. The algorithm works by first running a Binary SVM against a data set, with each vector in the set randomly labeled, until the SVM converges (Fig. 8). In order to obtain convergence, an acceptable number of KKT violators must be found. This is done through running the SVM on the randomly labeled data with different numbers of allowed violators until the number of violators allowed is near the lower bound of violators needed for the SVM to converge on the particular data set. Choice of an appropriate kernel and an acceptable sigma value also will affect convergence. After the initial convergence is achieved, the sensitivity + specificity will be low, likely near 1. The algorithm now improves this result by iteratively relabeling the worst misclassified vectors, which have confidence factor values beyond some threshold, followed by rerunning the SVM on the newly relabeled data set. This continues until no more progress can be made. Progress is determined by an increasing value of sensitivity + specificity, hopefully nearly reaching 2. After this process, a high percentage of the previously unknown class labels of the data set will be known. With sub-cluster identification upon iterating the overall algorithm on the positive and negative clusters identified (until the clusters are no longer separable into sub-clusters), this method provides a way to cluster data sets without prior knowledge of the data's clustering characteristics, or the number of clusters. Figures 9 and 10 show clustering runs on a data set with a mixture of 8GC and 9GC DNA hairpin data. The set consists of 400 elements. Half of the elements belong to each class. The SVM uses a Gaussian Kernel and allows 3% KKT Violators.

Figure 8

Shown is the schematic for an "external" SVM clustering algorithm.

Figure 9

(a) The percentage correct classification (an indication of the clustering success) is shown with successive iteration of the clustering algorithm. Five separate test runs are shown, on different data from the same classes. Note the plateau at around 0.9; this is approximately the performance of a supervised binary SVM on the same data (i.e., perfect separation isn't possible with this data without employing weak-data rejection). (b) The degradation in clustering performance for less optimal selection of kernel and tuning parameter (variance in the case of the Gaussian). (c) The degradation in clustering performance for non-optimal selection of kernel and tuning parameter (variance in the case of the Gaussian). (d) Summary of the degradation in clustering performance for less optimal selection of kernel and tuning parameter, with averages of the five test-runs used as representative curves for each kernel/tuning selection.

Figure 10

Efforts are underway to use simulated annealing in the number of KKT Violators tolerated on each iteration of the external clustering algorithm, to accelerate the convergence (clustering) process. Our current approach, results shown, approximately halves the cluster time needed.

Machine Learning and Cheminformatics Tools are Accessible via Website

The website provides an interface to several binary SVM variants (with other novel kernel selections), to a multiclass (internal) SVM, an FSA-based nanopore spike detector, and an HMM-based channel current feature extraction. New, web-accessible, channel current analysis tools have also been developed for kinetic feature extraction (via channel current sub-level lifetimes) and clustering. The website is designed using HTML and CGI scripts that are executed to process the data sent when a form filled in by the user is received at the web server; results are then e-mailed to the address indicated by the user. The interface to this and all other software described is available via the group Home Page: http://logos.cs.uno.edu/~nano/ (see Fig. 11). The SVM interface offers options on chunk processing for large training sets (SV-carry by appending to the next training chunk, and SV-carry by maintaining state and injecting ("unfreezing") the next training chunk, a specialized α-heuristic). The interface offers use of arbitrary or structured feature vectors, where structured, in this case, corresponds to feature vector components that satisfy the properties of a non-trivial, non-reducible, discrete probability distribution. There is an SVM interface for a new single-optimization multiclass SVM discriminator (it simultaneously optimizes multiple hyperplanes). There is also an interface for our SVM-based clustering methods.

Figure 11

Several channel current cheminformatics tools are available for use via web interfaces at http://logos.cs.uno.edu/~nano/. These tools include a variety of SVM interfaces for classification and clustering (binary and multiclass), and HMM tools for feature extraction and structure identification (with applications to both channel current cheminformatics and computational genomics).

Discussion

Adaptive Feature Extraction/Discrimination

Adaptive feature extraction and discrimination, in the context of SVMs, can be accomplished by small-batch reprocessing using the learned support vectors together with the new information to be learned. The benefit is that the easily deployed properties of SVMs can be retained while at the same time co-opting some of the on-line adaptive characteristics familiar from on-line learning with neural nets. This is also compatible with the chunking processing that is already implemented. A situation where such adaptation might prove necessary in nanopore signal analysis is if the instrumentation were found to have measurable, but steady, drift (at a new level of sensitivity, for example). At the forefront of online adaptation, where the discrimination and feature extraction optimizations are inextricably mixed, further progress may derive benefit from the Information-Geometrical methods of S. Amari [23–25].

Robust SVM performance in the presence of noise

In a parallel data run to that indicated in Fig. 2a, with 150-component feature vectors, feature vectors with the full set of 2600 components were extracted (i.e., no compression was employed on the transition probabilities). SVM performance on the same train/test data splits, but with 2600-component feature vectors instead of 150-component feature vectors, offered similar performance after drop optimization. This demonstrates a significant robustness in what the SVM can "learn" in the presence of noise (some of the 2600 components carry richer information, but even more are noise contributors).

AdaBoost Feature Selection

If SVM performance on the full HMM parameter set (the features extracted for each blockade signal) offers equivalent performance after rejecting weak data, then the possibility exists for significant improvement with selection of good parameters. An AdaBoost method is being used to select HMM parameters by representing each feature vector component as an independent Naïve Bayes classifier (trained on the data given); these then comprise the pool of experts in the AdaBoost algorithm [32–34]. The experts to which AdaBoost assigns the heaviest weighting will then be the components selected in the new, AdaBoost-assigned, feature vector compression.

Conclusion

  • External Multi-class SVM gave best results with Sentropic Kernel while Internal Multi-class SVM gave best results with AbsDiff kernel.

  • Internal Multi-class approach overcomes the need to search for the best performing tree out of many possibilities. This is a huge advantage especially when the number of classes is large.

  • Using a margin to define the drop zone for the internal multi-class approach produced far better results i.e. fewer data were dropped to achieve 100% accuracy.

  • Additional benefit of using the margin is that the drop zone tuning to achieve 100% accuracy becomes trivial.

  • External and Internal SVM Clustering Methods were also examined. The results show that our SVM-based clustering implementations can separate data into proper clusters without any prior knowledge of the elements' classification. This can be a powerful resource for insight into data linkages (topology).

Methods

The Feature Extraction used to obtain the Feature Vectors for SVM analysis

Signal Preprocessing Details

The Nanopore Detector is operated such that a stream of 100 ms samplings is obtained (throughput was approximately one sampling per 300 ms in [3]). Each 100 ms signal acquired by the time-domain FSA consists of a sequence of 5000 sub-blockade levels (with the 20 μs analog-to-digital sampling). Signal preprocessing is then used for adaptive low-pass filtering. For the data sets examined, the preprocessing is expected to permit compression on the sample sequence from 5000 to 625 samples (later HMM processing then only required construction of a dynamic programming table with 625 columns). The signal preprocessing makes use of an off-line wavelet stationarity analysis (Off-line Wavelet Stationarity Analysis, Figure 2b; also see [35]).

HMMs and Supervised Feature Extraction Details

With completion of preprocessing, an HMM [36] is used to remove noise from the acquired signals, and to extract features from them (Feature Extraction Stage, Fig. 2b). The HMM is, initially, implemented with fifty states, corresponding to current blockades in 1% increments ranging from 20% residual current to 69% residual current. The HMM states, numbered 0 to 49, correspond to the 50 different current blockade levels in the sequences that are processed. The state emission parameters of the HMM are initially set so that the state j, 0 <= j <= 49, corresponding to level L = j+20, can emit all possible levels, with the probability distribution over emitted levels set to a discretized Gaussian with mean L and unit variance. All transitions between states are possible, and initially are equally likely. Each blockade signature is de-noised by 5 rounds of Expectation-Maximization (EM) training on the parameters of the HMM. After the EM iterations, 150 parameters are extracted from the HMM. The 150 feature vector components are extracted from the 50 parameterized emission probabilities, a 50-element compressed representation of the 50² transition probabilities, and a posteriori information from the Viterbi path solution, which is, essentially, a de-noised histogram of the blockade sub-level occupation probabilities (further details in [3]). This information elucidates the blockade levels (states) characteristic of a given molecule, and the occupation probabilities for those levels, but doesn't directly provide kinetic information. An HMM-with-Duration has recently been introduced to better capture the latter information, but such feature vectors are not used in the studies shown here, so this approach isn't discussed further in this paper.

Solving the Dual (Based on Keerthi's SMO [37])

The dual formalism is: 1 - ∑i,jβiβjKij, where 0 ≤ βi ≤ C; Kij = exp(-||xi - xj||2/2σ2); also ∑iβi = 1. For any data point 'xk', the distance of its image in kernel space from the center of the sphere is given by: R2(xk) = 1 - 2∑iβiKik + ∑i,jβiβjKij. The radius of the sphere is R = {R(xk) | xk is a Support Vector}, hence data points which are Support Vectors lie on cluster boundaries. Outliers are points that lie outside of the sphere and therefore do not belong to any cluster, i.e., they are Bounded Support Vectors. All other points are enclosed by the sphere and therefore lie inside their respective clusters. KKT Violators are given as: (i) If 0 < βi < C and R(xi) ≠ R; (ii) If βi = 0 and R(xi) > R; and (iii) If βi = C and R(xi) < R.

The Wolfe dual is: f(β) = Minβ{∑i,jβiβjKij - 1}. In the SMO decomposition, in each iteration we select βi & βj and change them such that f(β) is reduced. All other β's are kept constant for that iteration. Let us denote β1 & β2 as being modified in the current iteration. Also β1 + β2 = (1 - ∑i≥3βi) = s, a constant. Let ∑i≥3βiKik = Ck; then we obtain the SMO form: f(β1,β2) = β1² + β2² + ∑i,j≥3βiβjKij + 2β1β2K12 + 2β1C1 + 2β2C2. Eliminating β1: f(β2) = (s - β2)² + β2² + ∑i,j≥3βiβjKij + 2(s - β2)β2K12 + 2(s - β2)C1 + 2β2C2. To minimize f(β2), we take the first derivative w.r.t. β2 and equate it to zero, thus f'(β2) = 0 = 2β2(1 - K12) - s(1 - K12) - (C1 - C2), and we get the update rule: β2new = [(C1 - C2)/2(1 - K12)] + s/2. We also have an expression for "C1 - C2" from: R2(x1) - R2(x2) = 2(β2 - β1)(1 - K12) - 2(C1 - C2), thus C1 - C2 = [R2(x2) - R2(x1)]/2 + (β2 - β1)(1 - K12); substituting, we have:

β1new = β1old - [R2(x2) - R2(x1)]/[4(1 - K12)]

Keerthi Algorithm

Compute 'C': if percent outliers = n and number data points = N, then: C = 100/(N*n)

Initialize β: set m = int(1/C) - 1 randomly chosen indices to 'C'

Initialize two different randomly chosen indices to values less than 'C' such that ∑iβi = 1

Compute R2(xi) for all 'i' based on the current value of β.

Divide data into three sets: Set I if 0 < βi < C; Set II if βi = 0; and Set III if βi = C.

Compute R2_low = Max{ R2(xi) | 0 ≤ βi < C} and R2_up = Min{ R2(xi) | 0 < βi ≤ C}.

In every iteration execute the following two paths alternatively until there are no KKT violators:

1. Loop through all examples (call Examine Example subroutine)

Keep count of number of KKT Violators.

2. Loop through examples belonging only to Set I (call Examine Example subroutine) until R2_low - R2_up < 2*tol.

Examine Example Subroutine

a. Check for KKT Violation. An example is a KKT violator if:

Set II and R2(xi) > R2_up; choose R2_up for joint optimization

Set III and R2(xi) < R2_low; choose R2_low for joint optimization

Set I and R2(xi) > R2_up + 2*tol OR R2(xi) < R2_low - 2*tol; choose R2_low or R2_up for joint optimization depending on which gives a worse KKT violator

b. Call the Joint Optimization subroutine

Joint Optimization Subroutine

  a. Compute η = 4(1 - K12), where K12 is the kernel evaluation for the pair chosen in Examine Example

  b. Compute D = [R2(x2) - R2(x1)]/η

  c. Compute Min{(C - β2), β1} = L1

  d. Compute Min{(C - β1), β2} = L2

  e. If D > 0, then D = Min{D, L1}; else D = Max{D, -L2}

  f. Update β2 as: β2 = β2 + D

  g. Update β1 as: β1 = β1 - D

  h. Re-compute R2(xi) for all 'i' based on the changes in β1 & β2

  i. Re-compute R2_low & R2_up based on elements in Set I, R2(x1) & R2(x2)
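Steps (a) through (g) of the subroutine above can be summarized in the following minimal sketch; steps (h) and (i), the R2 re-computations, are left to the caller, and the degenerate-pair guard is an illustrative addition.

def joint_optimization(beta1, beta2, R2_1, R2_2, K12, C):
    eta = 4.0 * (1.0 - K12)                   # step (a)
    if eta <= 0:
        return beta1, beta2                   # degenerate pair guard (illustrative addition)
    D = (R2_2 - R2_1) / eta                   # step (b)
    L1 = min(C - beta2, beta1)                # step (c)
    L2 = min(C - beta1, beta2)                # step (d)
    D = min(D, L1) if D > 0 else max(D, -L2)  # step (e): keep 0 <= beta <= C
    return beta1 - D, beta2 + D               # steps (g) and (f)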

The SVM-External Clustering Method

The SVM-clustering software is written in Perl. It runs data on a separate Binary SVM, also written in Perl. This SVM uses a C file for kernel calculations. The data run on the SVM is created by running raw data through a tFSA/HMM (written in C), which creates a data set that contains a 151-component feature vector for each element. The following is a simple step-by-step description of the basic algorithm used for SVM-clustering on this data:

  1. Start with a set of data vectors (obtained by running raw data through the tFSA/HMM feature extraction in Fig. 2b).

  2. Randomly label each vector in the set as positive or negative.

  3. Run the SVM on the randomly labeled data set until convergence is obtained (random relabeling is needed if the prior random labeling does not allow convergence).

  4. After initial convergence is obtained for the randomly labeled data set, relabel the misclassified data vectors that have confidence-factor values greater than some threshold.

  5. Rerun the SVM on the newly relabeled data set.

  6. Continue relabeling and rerunning the SVM until no vectors in the data set are misclassified (or there is no further improvement).
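A compact sketch of steps 1–6 is given below (illustrative Python; the authors' software is Perl with a C kernel module, so scikit-learn's SVC is used here only as a stand-in binary SVM, with |decision_function| serving as the confidence factor and the threshold chosen arbitrarily):

```python
import numpy as np
from sklearn.svm import SVC

def svm_external_cluster(X, threshold=0.5, max_iter=50, seed=0):
    """Steps 1-6: random labels, train, flip confidently misclassified points,
    retrain, and stop when nothing is misclassified or there is no improvement."""
    rng = np.random.default_rng(seed)
    y = rng.choice([-1, 1], size=len(X))              # step 2: random +/- labels

    best_misses = len(X) + 1
    for _ in range(max_iter):
        if len(np.unique(y)) < 2:                     # degenerate labeling: re-randomize
            y = rng.choice([-1, 1], size=len(X))
        clf = SVC(kernel="rbf", C=10.0).fit(X, y)     # steps 3/5: train the binary SVM
        pred = clf.predict(X)
        conf = np.abs(clf.decision_function(X))       # confidence-factor stand-in

        missed = pred != y
        n_miss = int(missed.sum())
        if n_miss == 0 or n_miss >= best_misses:      # step 6: clean, or no improvement
            break
        best_misses = n_miss

        flip = missed & (conf > threshold)            # step 4: relabel confident errors
        if not flip.any():
            break
        y[flip] = -y[flip]
    return y

# Example usage on two toy blobs:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (30, 2)), rng.normal(2.0, 0.5, (30, 2))])
print(np.unique(svm_external_cluster(X), return_counts=True))
```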

References

  1. Vapnik VN: The Nature of Statistical Learning Theory. 2nd edition. Springer-Verlag, New York; 1998.

  2. Burges CJC: A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov 1998, 2: 121–67.

  3. Winters-Hilt S, Vercoutere W, DeGuzman VS, Deamer DW, Akeson M, Haussler D: Highly Accurate Classification of Watson-Crick Basepairs on Termini of Single DNA Molecules. Biophys J 2003, 84: 967–976.

  4. Platt JC: Fast Training of Support Vector Machines using Sequential Minimal Optimization. In Advances in Kernel Methods – Support Vector Learning. Ch. 12. Edited by: Scholkopf B, Burges CJC, Smola AJ. MIT Press, Cambridge, USA; 1998.

  5. Osuna E, Freund R, Girosi F: An improved training algorithm for support vector machines. In Neural Networks for Signal Processing VII. Edited by: Principe J, Gile L, Morgan N, Wilson E. IEEE, New York; 1997: 276–85.

  6. Joachims T: Making large-scale SVM learning practical. In Advances in Kernel Methods – Support Vector Learning. Ch. 11. Edited by: Scholkopf B, Burges CJC, Smola AJ. MIT Press, Cambridge, USA; 1998.

  7. Crammer K, Singer Y: On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines. Journal of Machine Learning Research 2001, 2: 265–292.

  8. Hsu CW, Lin CJ: A Comparison of Methods for Multi-class Support Vector Machines. IEEE Transactions on Neural Networks 2002, 13: 415–425.

  9. Lee Y, Lin Y, Wahba G: Multicategory Support Vector Machines. Technical Report 1043, Department of Statistics, University of Wisconsin, Madison, WI; 2001. [http://citeseer.ist.psu.edu/lee01multicategory.html]

  10. Bezrukov SM, Vodyanoy I, Parsegian VA: Counting polymers moving through a single ion channel. Nature 1994, 370(6457): 279–281.

  11. Kasianowicz JJ, Brandin E, Branton D, Deamer DW: Characterization of Individual Polynucleotide Molecules Using a Membrane Channel. Proc Natl Acad Sci USA 1996, 93(24): 13770–73.

  12. Akeson M, Branton D, Kasianowicz JJ, Brandin E, Deamer DW: Microsecond Time-Scale Discrimination Among Polycytidylic Acid, Polyadenylic Acid, and Polyuridylic Acid as Homopolymers or as Segments Within Single RNA Molecules. Biophys J 1999, 77(6): 3227–3233.

  13. Bezrukov SM: Ion Channels as Molecular Coulter Counters to Probe Metabolite Transport. J Membr Biol 2000, 174: 1–13.

  14. Meller A, Nivon L, Brandin E, Golovchenko J, Branton D: Rapid nanopore discrimination between single polynucleotide molecules. Proc Natl Acad Sci USA 2000, 97(3): 1079–1084.

  15. Meller A, Nivon L, Branton D: Voltage-driven DNA translocations through a nanopore. Phys Rev Lett 2001, 86(15): 3435–8.

  16. Vercoutere W, Winters-Hilt S, Olsen H, Deamer DW, Haussler D, Akeson M: Rapid discrimination among individual DNA hairpin molecules at single-nucleotide resolution using an ion channel. Nat Biotechnol 2001, 19(3): 248–252.

  17. Winters-Hilt S: Highly Accurate Real-Time Classification of Channel-Captured DNA Termini. Third International Conference on Unsolved Problems of Noise and Fluctuations in Physics, Biology, and High Technology 2003, 355–368.

  18. Vercoutere W, Winters-Hilt S, DeGuzman VS, Deamer D, Ridino S, Rogers JT, Olsen HE, Marziali A, Akeson M: Discrimination Among Individual Watson-Crick Base-Pairs at the Termini of Single DNA Hairpin Molecules. Nucl Acids Res 2003, 31: 1311–1318.

  19. Winters-Hilt S: Nanopore detection using channel current cheminformatics. SPIE Second International Symposium on Fluctuations and Noise, 25–28 May 2004.

  20. Winters-Hilt S, Akeson M: Nanopore cheminformatics. DNA Cell Biol 2004, 23(10): 675–83.

  21. Shannon CE: A mathematical theory of communication. Bell Sys Tech Journal 1948, 27: 379–423 and 623–656.

  22. Khinchine AI: Mathematical foundations of information theory. Dover; 1957.

  23. Amari S: Dualistic Geometry of the Manifold of Higher-Order Neurons. Neural Networks 1991, 4(4): 443–451.

  24. Amari S: Information Geometry of the EM and em Algorithms for Neural Networks. Neural Networks 1995, 8(9): 1379–1408.

  25. Amari S, Nagaoka H: Methods of Information Geometry. Translations of Mathematical Monographs, Volume 191; 2000.

  26. Jaynes E: Paradoxes of Probability Theory. 1997. Internet-accessible book preprint: http://omega.albany.edu:8008/JaynesBook.html

  27. Kapur JN, Kesavan HK: Entropy optimization principles with applications. Academic Press; 1992.

  28. Kullback S: Information Theory and Statistics. Dover; 1968.

  29. Ben-Hur A, Horn D, Siegelmann HT, Vapnik V: Support Vector Clustering. Journal of Machine Learning Research 2001, 2: 125–137.

  30. Scholkopf B, Platt JC, Shawe-Taylor J, Smola AJ, Williamson RC: Estimating the Support of a High-Dimensional Distribution. Neural Comp 2001, 13: 1443–1471.

  31. Yang J, Estivill-Castro V, Chalup SK: Support Vector Clustering Through Proximity Graph Modeling. Proceedings, 9th International Conference on Neural Information Processing (ICONIP'02) 2002, 898–903.

  32. Freund Y, Schapire R: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 1997, 55: 119–139.

  33. Freund Y, Schapire RE, Bartlett P, Lee WS: Boosting the margin: a new explanation for the effectiveness of voting methods. Proc 14th International Conference on Machine Learning 1998.

  34. Schapire RE, Singer Y: Improved Boosting Using Confidence-Weighted Predictions. Machine Learning 1999, 37(3): 297–336.

  35. Diserbo M, Masson P, Gourmelon P, Caterini R: Utility of the wavelet transform to analyze the stationarity of single ionic channel recordings. J Neurosci Methods 2000, 99(1–2): 137–141.

  36. Durbin R: Biological sequence analysis: probabilistic models of proteins and nucleic acids. Cambridge, UK & New York: Cambridge University Press; 1998.

  37. Keerthi SS, Shevade SK, Bhattacharyya C, Murthy KRK: Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation 2001, 13: 637–649.


Acknowledgements

SWH would like to thank MA and Prof. David Deamer at UCSC for strong collaborative support post-Katrina. Funding was provided by grants from the National Institutes of Health, the National Science Foundation, the Louisiana Board of Regents, and NASA.

Author information


Corresponding author

Correspondence to Stephen Winters-Hilt.

Additional information

Authors' contributions

The paper was written by SWH and AY. The external clustering work was contributed by CM. The channel current feature vector extraction used to create the data sets was performed by ML.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Winters-Hilt, S., Yelundur, A., McChesney, C. et al. Support Vector Machine Implementations for Classification & Clustering. BMC Bioinformatics 7 (Suppl 2), S4 (2006). https://doi.org/10.1186/1471-2105-7-S2-S4
