Machine Learning Study Notes: Hello SVM


plot the parallels to the separating hyperplane that pass through the support vectors

Y = [0] * 20 + [1] * 20

Syntax

A property path is a set of lower-case identifiers separated by dots
(.). The path formats are described below.

Remark: If a plot, band, area, or input have the same name, you may get
an error. In that case, specify the exact target you want to change by
appending :plot, :band, :area or :input to the path
(e.g. short:plot.color).


In scikit-learn, the coef_ attribute holds the vectors of the separating hyperplanes for linear models. It has shape (n_classes, n_features) for multi-class (one-vs-rest) models and (1, n_features) for binary classification.
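
A quick way to check this (my own minimal sketch, with made-up toy data and hypothetical names like X_toy and clf_toy, not part of the original example):

import numpy as np
from sklearn import svm

# Made-up, linearly separable two-class toy data (illustration only).
X_toy = np.array([[0., 0.], [1., 1.], [2., 0.], [3., 1.]])
y_toy = [0, 0, 1, 1]

clf_toy = svm.SVC(kernel='linear')
clf_toy.fit(X_toy, y_toy)

print(clf_toy.coef_.shape)       # (1, 2): one hyperplane, two features
print(clf_toy.intercept_.shape)  # (1,)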

yy_down = a * xx + (b[1]-a*b[0])

Compare

You can customize new series added via Compare. Use compare.plot to
customize the line and compare.source to change the price source:

"compare.plot.color": "#000000",
"compare.source": "high"


fit the model

clf = svm.SVC(kernel='linear')
clf.fit(X, Y)

w = clf.coef_[0]

Default precision

Since version 1.6 you can change the default precision of studies using
the name.precision format. Example:
"average true range.precision": 8

get the separating hyperplane

w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - clf.intercept_[0] / w[1]

b = clf.support_vectors_[0]

studies_overrides: {
    "volume.volume.color.0": "#00FFFF",
    "volume.volume.color.1": "#0000FF",
    "volume.volume.transparency": 70,
    "volume.volume ma.color": "#FF0000",
    "volume.volume ma.transparency": 30,
    "volume.volume ma.linewidth": 5,
    "volume.show ma": true,
    "bollinger bands.median.color": "#33FF88",
    "bollinger bands.upper.linewidth": 7
}

get indices of support vectors

print(clf.support_)

yy = a * xx - clf.intercept_[0] / w[1]

Plot property

Format: indicator_name.plot_name.property_name

  • indicator_name: < … >
  • plot_name: as you can see it in indicator’s properties dialog
    (for example, Volume or Plot)
  • property_name: one of the following:
    • transparency
    • linewidth
    • plottype. Supported plot types are:
      • line
      • histogram
      • cross
      • area
      • columns
      • circles
      • line_with_breaks
      • area_with_breaks

Examples: volume.volume.transparency,
bollinger bands.median.linewidth

In this toy binary classification example, n_features == 2, so w = coef_[0] is the vector orthogonal to the hyperplane (the hyperplane is fully defined by this vector together with the intercept).
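
A small sanity check of my own (it reuses clf, w, xx and yy from the code in this post, with numpy imported as np): every point on the plotted line should satisfy w . x + intercept = 0, confirming that w is normal to the hyperplane.

# Assumes clf, w = clf.coef_[0], xx and yy are defined as elsewhere in this post.
line_points = np.c_[xx, yy]                         # points lying on the separating line
residuals = line_points.dot(w) + clf.intercept_[0]  # w . x + b for each point
print(np.allclose(residuals, 0.0))                  # True, up to floating-point error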

import matplotlib.pyplot as plt

Study input

Format: indicator_name.input_name

  • indicator_name: use the name as you see it in the Indicators
    dialog.
  • input_name: use the name as you see it in the indicator’s
    properties dialog (for example, show ma)

Examples: volume.show ma, bollinger bands.length

1 A simple sklearn example

plt.axis('tight')

How to set study name

You should use studies names as-they-are in the Insert Study dialog, but
in lower case. So if you want to override default EMA’s length, try
using moving average exponential.length. The same principle works for
inputs names: use names as you can see them in Study Properties dialog
(use lower case also). Example: stochastic.smooth d.

support vectors

b = clf.support_vectors_[0]
yy_down = a * xx + (b[1] - a * b[0])
b = clf.support_vectors_[-1]
yy_up = a * xx + (b[1] - a * b[0])

print("w: ", w)
print("a: ", a)

plt.plot(xx, yy_up, 'k--')

In the example above, all newly created Bollinger Bands will have an
upper line width of 7 (unless you create the study through the API and
specify a different value for this line).

from sklearn import svm

An SVM (support vector machine) can be used for classification. It separates the classes with a hyperplane; when the data are not linearly separable, they need to be mapped into a higher-dimensional space.
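
The "map to a higher dimension" part is what the kernel does. Below is a minimal sketch of my own (made-up XOR-like data, hypothetical names like X_xor, hyperparameters chosen arbitrarily) showing that an RBF kernel can separate data that no straight line in 2D can:

import numpy as np
from sklearn import svm

# XOR-like data: not linearly separable in the original 2D space.
X_xor = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y_xor = [0, 0, 1, 1]

# The RBF kernel implicitly maps the points into a higher-dimensional
# space where a separating hyperplane does exist.
clf_xor = svm.SVC(kernel='rbf', gamma=2.0, C=10.0)
clf_xor.fit(X_xor, y_xor)

print(clf_xor.predict([[0.1, 0.1], [0.9, 0.1]]))  # expected: [0 1]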

You can set default style and input values for newly created indicators
using the studies_overrides parameter. Its value is expected to be an
object whose keys are paths to the properties being changed and whose
values are the new values for them (see the studies_overrides example above).

print("xx: ", xx)

plt.scatter(X[:,0],X[:,1],c=Y,cmap=plt.cm.Paired)

Plot colors

Format: indicator_name.plot_name.color<.color_index>

  • indicator_name: < … >
  • plot_name: < … >
  • color. It is just a keyword.
  • color_index (optional): the color index (if any). It is just the
    ordinal number of a color for this plot. For example, to replace the
    color that is green by default for Volume, use color_index = 1.

Remark 1: color.0 is a synonym for color, so the paths
volume.volume.color.0 and volume.volume.color are treated as the
same.

Remark 2: For now, customizing area fill color and transparency is
not supported.

Limitations:

  • Only #RRGGBB format is supported for colors. Do not use short
    format #RGB.
  • Transparency varies in [0..100] range. 100 means plot is fully
    opaque.
  • Thickness is an integer.

plot the line, the points, and the nearest vectors to the plane

pl.plot(xx, yy, 'k-')
pl.plot(xx, yy_down, 'k--')
pl.plot(xx, yy_up, 'k--')

pl.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
           s=80, facecolors='none')
pl.scatter(X[:, 0], X[:, 1], c=Y, cmap=pl.cm.Paired)

pl.axis('tight')
pl.show()

yy_up = a * xx + (b[1] - a * b[0])

Overlay

Since version 1.12, you can customize the Overlay study using the following properties:

Overlay.style: (bars = 0, candles = 1, line = 2, area = 3, heiken ashi = 8, hollow candles = 9)
Overlay.showPriceLine: boolean

Overlay.candleStyle.upColor: color
Overlay.candleStyle.downColor: color
Overlay.candleStyle.drawWick: boolean
Overlay.candleStyle.drawBorder: boolean
Overlay.candleStyle.borderColor: color
Overlay.candleStyle.borderUpColor: color
Overlay.candleStyle.borderDownColor: color
Overlay.candleStyle.wickColor: color
Overlay.candleStyle.barColorsOnPrevClose: boolean

Overlay.hollowCandleStyle.upColor: color
Overlay.hollowCandleStyle.downColor: color
Overlay.hollowCandleStyle.drawWick: boolean
Overlay.hollowCandleStyle.drawBorder: boolean
Overlay.hollowCandleStyle.borderColor: color
Overlay.hollowCandleStyle.borderUpColor: color
Overlay.hollowCandleStyle.borderDownColor: color
Overlay.hollowCandleStyle.wickColor: color
Overlay.hollowCandleStyle.barColorsOnPrevClose: boolean

Overlay.barStyle.upColor: color
Overlay.barStyle.downColor: color
Overlay.barStyle.barColorsOnPrevClose: boolean
Overlay.barStyle.dontDrawOpen: boolean

Overlay.lineStyle.color: color
Overlay.lineStyle.linewidth: integer
Overlay.lineStyle.priceSource: open/high/low/close
Overlay.lineStyle.styleType: (bars = 0, candles = 1, line = 2, area = 3, heiken ashi = 8, hollow candles = 9)

Overlay.areaStyle.color1: color
Overlay.areaStyle.color2: color
Overlay.areaStyle.linecolor: color
Overlay.areaStyle.linestyle: (solid = 0; dotted = 1; dashed = 2; large dashed = 3)
Overlay.areaStyle.linewidth: integer
Overlay.areaStyle.priceSource: open/high/low/close

we create 40 separable points

np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20

plt.plot(xx, yy, 'k-')

X = [[2, 0], [1, 1], [2,3]]
y = [0, 0, 1]
clf = svm.SVC(kernel='linear')
clf.fit(X, y)

np.random.seed(0)

get number of support vectors for each class

print(clf.n_support_)

2 Plotting the decision boundary with sklearn

print(__doc__)

import numpy as np
import pylab as pl
from sklearn import svm

X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]

To plot this hyperplane in the 2D case (any hyperplane of a 2D plane is a 1D line), we want to write it as a function y = f(x) = a*x + b. Here a is the slope of the line and can be computed as a = -w[0] / w[1].
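
Filling in the offset term of that line (my own sketch, reusing clf, w and xx from above; b_line is a name I introduce to avoid clashing with the b used for the support vectors elsewhere in this post): the hyperplane satisfies w[0]*x + w[1]*y + clf.intercept_[0] = 0, so solving for y gives both the slope and the offset.

# Assumes clf, w = clf.coef_[0] and xx are defined as elsewhere in this post.
a = -w[0] / w[1]                    # slope of the separating line
b_line = -clf.intercept_[0] / w[1]  # y-offset of the separating line
yy = a * xx + b_line                # identical to yy = a*xx - clf.intercept_[0]/w[1]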

plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=80, facecolors='none')

print(clf)

import numpy as np

get support vectors

print(clf.support_vectors_)

# create the random points

print("yy: ", yy)

print("support_vectors_: ", clf.support_vectors_)
print("clf.coef_: ", clf.coef_)

# coding: utf-8

The example above is a good way to build intuition for how an SVM works.
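
As a small follow-up of my own (assuming clf is still the model fitted on X = [[2, 0], [1, 1], [2, 3]], y = [0, 0, 1] from part 1), the trained classifier can be queried directly:

# Predict the class of a point and inspect the support vectors.
print(clf.predict([[2., 0.]]))  # expected: [0]
print(clf.support_vectors_)     # the training points that define the margin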

plt.plot(xx, yy_down, 'k--')

from sklearn import svm

# plot with matplotlib

clf.fit(X,Y)

# choose the kernel

# support vectors

plt.show()

Figure 1

clf = svm.SVC(kernel='linear')

# construct the hyperplane

a = -w[0]/w[1]

print(__doc__)

# plot the parallels to the separating hyperplane that pass through the support vectors

b = clf.support_vectors_[-1]

xx = np.linspace(-5,5)

Code implementation