Support Vector Machines (SVMs) are a popular machine learning method for classification, regression, distribution estimation, and other tasks. An exceptional feature of the method is its ability to handle objects of diverse nature, as long as a suitable kernel function is available. Nonetheless, popular software libraries like LIBSVM [1] and SVMlight [2] are designed for vector data and are hard to adapt to other object types. An F# implementation seems promising in terms of both readability and extensibility.
Readability and extensibility are often competing qualities of a software design: improving one tends to degrade the other. This F# implementation of SVM attempts to answer the following design questions without sacrificing either:
- What data structure to use for SVM model?
- How to implement kernel functions?
- How to implement decision function?
- How to implement optimization problem solving?
SVM Model Data Structure
SVM classification, regression and distribution estimation are different problems and should be represented by data structures designed for each specific problem. However, their formulations share common elements, such as the training samples \(\{\textbf{x}_i : i=1,\dots,l\}\) and the kernel function \(K\), and differ mainly in their decision/approximation functions:
- two-class classification problem: \(sign(\sum_{i=1}^l y_i \alpha_i K(\textbf{x}_i,\textbf{x}) + b)\),
- regression problem: \(\sum_{i=1}^l (-\alpha_i + \alpha_i^*) K(\textbf{x}_i,\textbf{x}) + b\),
- distribution estimation problem: \(sign(\sum_{i=1}^l \alpha_i K(\textbf{x}_i,\textbf{x}) - \rho)\),
where \(y_i \in \{-1,+1\}\) is the label of the sample \(\textbf{x}_i\), and \(\alpha_i \in R\) and \(\alpha_i^* \in R\) are solutions of the dual SVM optimization problem [3].
Thus, the model needs to store the set of pairs \(\{(\textbf{x}_i, \beta_i) : \beta_i \neq 0, i=1,\dots,l\}\), where \(\beta_i\) stands for the combined coefficient of \(\textbf{x}_i\), the bias \(b\) or \(-\rho\), and the kernel function \(K\). The F# definition of this data structure is very simple:
type Kernel<'X> = 'X * 'X -> float
type SVM<'X> = SVM of Kernel<'X> * ('X array) * (float array) * float
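For illustration, here is a hand-built model over plain `float array` data (a hypothetical sketch: the support vectors, coefficients and bias are made up, and a real model is produced by the solver described below):

// Hypothetical model with two support vectors; 'X = float array, plain dot product as kernel.
let dot (x1 : float array, x2 : float array) : float =
    Array.fold2 (fun sum a b -> sum + a*b) 0.0 x1 x2
let model : SVM<float array> =
    SVM (dot, [| [| 1.0; 0.0 |]; [| 0.0; 1.0 |] |], [| 0.5; -0.5 |], 0.1)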
Kernels
Kernels are just functions of the type `'X * 'X -> float`, but any parameterized function of the form `P_1 -> ... -> P_N -> 'X * 'X -> float` also becomes a kernel once the parameters `P_1 ... P_N` are fixed:
let inline linear (x1 : ^X, x2 : ^X) : float = x1 .* x2
let inline polynomial (gamma : float) (mu : float) (n : float) (x1 : ^X, x2 : ^X) : float =
    (gamma*(x1 .* x2) + mu) ** n
let inline rbf (gamma : float) (x1 : ^X, x2 : ^X) : float =
    let x' = (x1 - x2) in exp (-gamma * (x' .* x'))
let inline sigmoid (gamma : float) (mu : float) (x1 : ^X, x2 : ^X) : float =
    tanh (gamma*(x1 .* x2) + mu)
Note: these kernels require the data type `^X` to support two operations: `-` and `.*`.
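Partial application then fixes the parameters and yields plain `Kernel<'X>` values, e.g. (a sketch assuming the `Vector` type with `(-)` and `(.*)` defined in the Examples section below, and assuming the kernels live in a `Kernel` module as used later):

let rbfKernel : Kernel<Vector> = Kernel.rbf 1.0                  // gamma = 1.0
let polyKernel : Kernel<Vector> = Kernel.polynomial 1.0 0.0 3.0  // (x1 .* x2) ** 3.0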
Classification Decision Function
The mathematical definition of the classification decision function, \(sign(\sum_{i=1}^l y_i \alpha_i K(\textbf{x}_i,\textbf{x}) + b)\), can be directly translated into F#:
let inline predict (SVM (K,X,A,b) : SVM<'X>) (x : 'X) =
    sign (b + Array.fold2 (fun sum x_i a_i -> sum + a_i * K(x_i,x)) 0.0 X A)
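For example, with the hypothetical model built in the previous section:

predict model [| 1.0; 1.0 |]   // sign(0.1 + 0.5*1.0 - 0.5*1.0) = 1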
Optimization Problem Solver
Kernels and the decision function are easy to implement in F#. However, the main task in training an SVM is to solve the following optimization problem:
\(\min_{\mathbf{\alpha}} \frac{1}{2}\mathbf{\alpha}^T Q \mathbf{\alpha} - \mathbf{e}^T \mathbf{\alpha}\) subject to \(\mathbf{y}^T \mathbf{\alpha} = 0\), \(0 \le \alpha_i \le C,\ i=1,\dots,l\),
where \(\mathbf{e}\) is the vector of all ones, \(C\) is the upper bound of all variables, \(Q\) is a symmetric \(l \times l\) matrix with \(Q_{ij}=y_iy_jK(\textbf{x}_i, \textbf{x}_j)\), and \(\textbf{x}_i, i=1,\dots,l\) are the training instances with labels \(y_i \in \{-1, +1\}\).
The matrix \(Q\) is usually dense and too large to be stored in memory. Thus, it is hard to use classical optimization methods that update the whole vector \(\mathbf{\alpha}\) on each step, but there are decomposition methods that update only a small subset of \(\mathbf{\alpha}\) per iteration. The extreme case is Sequential Minimal Optimization (SMO), which updates only two elements:
1. Find an initial feasible solution \(\mathbf{\alpha}^1\) (e.g., for the classification problem \(\mathbf{\alpha}^1=\mathbf{0}\)).
2. If \(\mathbf{\alpha}^k\) is an optimal solution of the optimization problem, stop. Otherwise, find a two-element working set \(B=\{i, j\} \subset \{1,\dots,l\}\) and define \(N=\{1,\dots,l\} \setminus B\).
3. Solve the optimization sub-problem for the selected working set and find \(\mathbf{\alpha}_B\) (its explicit form is given after this list).
4. Update the solution: \(\mathbf{\alpha}_B^{k+1}=\mathbf{\alpha}_B\) and \(\mathbf{\alpha}_N^{k+1}=\mathbf{\alpha}_N^k\). Go to step 2.
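Written out for \(B = \{i, j\}\), the sub-problem of step 3 takes the standard form [3]:
\(\min_{\mathbf{\alpha}_B} \frac{1}{2}\mathbf{\alpha}_B^T Q_{BB}\mathbf{\alpha}_B + (Q_{BN}\mathbf{\alpha}_N^k - \mathbf{e}_B)^T\mathbf{\alpha}_B\) subject to \(\mathbf{y}_B^T\mathbf{\alpha}_B = -\mathbf{y}_N^T\mathbf{\alpha}_N^k\), \(0 \le \alpha_i, \alpha_j \le C\).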
The F# implementation of this algorithm is the following:
let A = Array.zeroCreate N      // solution vector, initially all zeros
let G = Array.create N (-1.0)   // gradient of the objective at A = 0
...
let rec optimize k =
    if k < parameters.iterations then
        match selectWorkingSet() with
        | Some (i, j) ->
            // solve the two-variable sub-problem, then update the gradient and the solution
            let a_i, a_j = solve (i, j)
            updateG (i, j) (a_i, a_j)
            A.[i] <- a_i; A.[j] <- a_j
            if stopCriterion() then k else optimize (k + 1)
        | None -> k
    else
        failwith "Exceeded iterations limit"
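The `parameters` record used above is not shown in the listing; judging from the later call `{ iterations = 1000; epsilon = 0.001; C = 1.0 }`, it presumably looks like the following sketch (an assumption, not the original definition; the scalar `C` is then presumably expanded into the per-sample bounds array `C` used by the solver):

type Parameters = { iterations : int; epsilon : float; C : float }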
The function recursively updates the solution vector until no violating pair can be found or the stop criterion is met. The key elements of this implementation are:
- the working set selection function `selectWorkingSet`,
- the optimization sub-problem solver `solve`,
- the gradient update function `updateG`,
- the stop criterion function `stopCriterion`.
Working Set Selection Function
The core elements of working set selection strategies are the following subsets of optimization variable indices [3]:
\(I_{up}(\mathbf{\alpha}) = \{t \mid \alpha_t < C_t, y_t = +1 \ \text{or}\ \alpha_t > 0, y_t = -1\}\),
\(I_{low}(\mathbf{\alpha}) = \{t \mid \alpha_t < C_t, y_t = -1 \ \text{or}\ \alpha_t > 0, y_t = +1\}\).
With the F# module `Seq`, their implementations become just one line of code each:
let I_up () =
    seq { 0..N-1 } |> Seq.filter (fun i -> (Y.[i] = +1.0 && A.[i] < C.[i]) || (Y.[i] = -1.0 && A.[i] > 0.0))
let I_low () =
    seq { 0..N-1 } |> Seq.filter (fun i -> (Y.[i] = -1.0 && A.[i] < C.[i]) || (Y.[i] = +1.0 && A.[i] > 0.0))
As a consequence, working set selection strategies become applications of `Seq.maxBy`:
- Maximal violating pair working set selection
let inline _y_gf i = -G.[i]*Y.[i]
let inline maxUp () =
    let ts = I_up ()
    if Seq.isEmpty ts then not_found else Seq.maxBy _y_gf ts
let inline minLow () =
    let ts = I_low ()
    if Seq.isEmpty ts then not_found else Seq.minBy _y_gf ts
...
let maximalViolatingPair () =
    let i = maxUp()
    if i = not_found then None else Some (i, minLow())
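This corresponds to the maximal violating pair from [3]: \(i = \arg\max_{t \in I_{up}} -y_t \nabla f(\mathbf{\alpha})_t\) and \(j = \arg\min_{t \in I_{low}} -y_t \nabla f(\mathbf{\alpha})_t\), with `_y_gf` computing \(-y_t \nabla f(\mathbf{\alpha})_t\) from the gradient array `G`.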
- Second order information working set selection
let inline minLowTo s =
    let ts = I_low () |> Seq.filter (fun t -> _y_gf t < _y_gf s)
    let objective t =
        let a_ts = Q (t, t) + Q (s, s) - 2.0*Q(t, s)*Y.[t]*Y.[s]
        let b_ts = _y_gf t - _y_gf s
        -b_ts*b_ts/(if a_ts > 0.0 then a_ts else tau)
    if Seq.isEmpty ts then not_found else Seq.minBy objective ts
...
let secondOrderInformation () =
    let i = maxUp()
    if i = not_found then None else Some (i, minLowTo i)
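In the notation of [3], this selects \(j = \arg\min_{t \in I_{low},\ -y_t \nabla f(\mathbf{\alpha})_t < -y_i \nabla f(\mathbf{\alpha})_i} \left\{-b_{it}^2/\bar{a}_{it}\right\}\), where \(b_{it} = -y_i \nabla f(\mathbf{\alpha})_i + y_t \nabla f(\mathbf{\alpha})_t\) and \(\bar{a}_{it} = K_{ii} + K_{tt} - 2K_{it}\), replaced by \(\tau\) when it is not positive.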
Optimization Sub-Problem Solver
The two-variable optimization problem has an analytical solution [3], but the large number of special cases makes its implementation problematic. Fortunately, F# provides powerful pattern matching that dramatically shortens the implementation:
let inline Q (i,j) = K(X.[i], X.[j])*Y.[i]*Y.[j]
let inline solve (i, j) =
    let a_ij = Q (i, i) + Q (j, j) - 2.0*Q(i, j)*Y.[i]*Y.[j]
    if Y.[i] <> Y.[j] then
        let delta = (-G.[i]-G.[j])/(if a_ij <= 0.0 then tau else a_ij)
        let diff = A.[i] - A.[j]
        match (A.[i] + delta, A.[j] + delta) with
        | _, a_j when diff > 0.0 && a_j < 0.0 -> (diff, 0.0)
        | a_i, _ when diff <= 0.0 && a_i < 0.0 -> (0.0, -diff)
        | _, a_j when diff <= C.[i] - C.[j] && a_j > C.[j] -> (C.[j]+diff, C.[j])
        | a_i, _ when diff > C.[i] - C.[j] && a_i > C.[i] -> (C.[i], C.[i]-diff)
        | a_i, a_j -> a_i, a_j
    else
        let delta = (G.[i]-G.[j])/(if a_ij <= 0.0 then tau else a_ij)
        let sum = A.[i] + A.[j]
        match (A.[i] - delta, A.[j] + delta) with
        | a_i, _ when sum > C.[i] && a_i > C.[i] -> (C.[i], sum-C.[i])
        | _, a_j when sum <= C.[i] && a_j < 0.0 -> (sum, 0.0)
        | _, a_j when sum > C.[j] && a_j > C.[j] -> (sum-C.[j], C.[j])
        | a_i, _ when sum <= C.[j] && a_i < 0.0 -> (0.0, sum)
        | a_i, a_j -> a_i, a_j
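For reference, the unconstrained updates computed above, before the clipping performed by the match clauses, are:
\(y_i \neq y_j\): \(\delta = \frac{-G_i - G_j}{\bar{a}_{ij}}\), \(\alpha_i \leftarrow \alpha_i + \delta\), \(\alpha_j \leftarrow \alpha_j + \delta\);
\(y_i = y_j\): \(\delta = \frac{G_i - G_j}{\bar{a}_{ij}}\), \(\alpha_i \leftarrow \alpha_i - \delta\), \(\alpha_j \leftarrow \alpha_j + \delta\),
where \(\bar{a}_{ij} = Q_{ii} + Q_{jj} - 2Q_{ij}y_iy_j\) is replaced by \(\tau\) when it is not positive.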
Gradient Update
Working set selection strategies and the optimization sub-problem solver both use the gradient. Fortunately, the next gradient vector is easy to obtain from the solution of the two-variable optimization problem:
\(G_t \leftarrow G_t + Q_{ti}(a_i - \alpha_i) + Q_{tj}(a_j - \alpha_j),\ t = 1,\dots,l,\)
where \(a_i\) and \(a_j\) are the new values of \(\alpha_i\) and \(\alpha_j\). An elegant implementation of the gradient update function in F# would use `Array.mapi`, but the cost of the elegance is the allocation of a new array:
G <- Array.mapi (fun t g -> g + Q(t,i)*(a_i - A.[i]) + Q(t,j)*(a_j - A.[j])) G
Therefore, it is better to rely on mutability of arrays:
let inline updateG (i, j) (a_i, a_j) =
    for t in 0..N-1 do
        G.[t] <- G.[t] + Q(t,i)*(a_i - A.[i]) + Q(t,j)*(a_j - A.[j])
Stop Criterion
The stop criterion from [3] is \(m(\mathbf{\alpha}) - M(\mathbf{\alpha}) < \epsilon\), where \(m(\mathbf{\alpha}) = \max_{i \in I_{up}} -y_i \nabla f(\mathbf{\alpha})_i\) and \(M(\mathbf{\alpha}) = \min_{i \in I_{low}} -y_i \nabla f(\mathbf{\alpha})_i\). The F# implementation (here computing the gap separately for each class label) is even simpler than the mathematical definition:
let stopCriterion () =
    let inline m y = I_up () |> Seq.filter (fun i -> Y.[i] = y) |> Seq.map _y_gf |> Seq.max
    let inline M y = I_low () |> Seq.filter (fun i -> Y.[i] = y) |> Seq.map _y_gf |> Seq.min
    (m (+1.0) - M (+1.0) < epsilon) || (m (-1.0) - M (-1.0) < epsilon)
Examples
A simple way to test the classifier implementation is to use an existing dataset, e.g., the breast cancer prediction dataset:
open System.IO

type Vector = Vector of float array

let readSamples = seq {
    use r = new StreamReader("./breast-cancer_scale.txt")
    while not r.EndOfStream do
        let line = r.ReadLine()
        let fields = line.Split [|' '|]
        let y = if int fields.[0] = 2 then +1.0 else -1.0
        let x = Vector(Array.sub fields 1 (Array.length fields - 2)
                       |> Array.map (fun f -> float (f.Split [|':'|]).[1]))
        yield x, y
}
let samples = readSamples |> Seq.toArray
let X, Y = samples |> Array.unzip
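Each line of the file is in LIBSVM's sparse text format, a label followed by `index:value` pairs; the parser maps label 2 to +1.0 and any other label (4 in this dataset) to -1.0. An illustrative line (not actual file contents):

2 1:0.273 2:0.121 3:0.313 4:0.091 5:0.333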
However, an attempt to train the classifier on `Vector` data with `Kernel.rbf`:
let cancer = SMO.C_SVC SMO.SecondOrderInformation X Y (Kernel.rbf 1.0)
                       { iterations = 1000; epsilon = 0.001; C = 1.0 }
will give the error `The type 'Vector' does not support the operator '-'`. This error means that the data type should define the static operators `-` and `.*` in order to be used with `Kernel.rbf`:
type Vector = Vector of float array with
    static member (-) (Vector(X1) : Vector, Vector(X2) : Vector) =
        Vector(Array.map2 (-) X1 X2)
    static member (.*) (Vector(X1) : Vector, Vector(X2) : Vector) =
        Array.fold2 (fun sum x1 x2 -> sum + x1*x2) 0.0 X1 X2
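A quick sanity check of the new operators and of a kernel built on top of them:

let v1, v2 = Vector [| 1.0; 2.0 |], Vector [| 3.0; 4.0 |]
let d = v1 .* v2                 // 1.0*3.0 + 2.0*4.0 = 11.0
let k = Kernel.rbf 1.0 (v1, v2)  // exp(-1.0*8.0), since (v1 - v2) .* (v1 - v2) = 8.0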
Now it is possible to train the `cancer` model and get predictions:
let cancer = SMO.C_SVC SMO.SecondOrderInformation X Y (Kernel.rbf 1.0)
                       { iterations = 1000; epsilon = 0.001; C = 1.0 }
let predict = TwoClass.predict cancer
printfn "Total: %A, Correct: %A" (samples |> Array.length)
                                 (samples |> Array.countBy (fun (x, y) -> y = float (predict x)))
The results should be the following:
Total: 683, Correct: [|(true, 665); (false, 18)|]
Conclusions
The nature of the F# language encourages writing code in an academic style, which is often contrasted with the industrial coding style in terms of readability and maintainability. However, this F# implementation of Sequential Minimal Optimization shows that one function with a dozen nested functions inside can be much more readable than a bunch of classes. Furthermore, only a small amount of additional code is needed to apply it to a real problem. Though the performance of this implementation is far from that of native C implementations, it can be tuned and improved by rewriting some parts of the code in an imperative style alongside ongoing experiments.
1. Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A Library for Support Vector Machines // ACM Transactions on Intelligent Systems and Technology (TIST). 2011. 2(3). pp. 1-27.