"Machine Learning" (Zhou Zhihua): Personal Notes on Exercises 3.1-3.3


3.1 Analyze under what circumstances the bias term b in Eq. (3.2) need not be considered.

Honestly, there were already plenty of exercises I couldn't do back in Chapter 1, and in Chapter 2 I only managed the first two; now in Chapter 3, even the first question is not entirely clear to me. My own take: in f(x) = w'x + b, x is a d-dimensional vector and w is the corresponding weight vector, while b can be written as b = b*x0, i.e. a weight b attached to an extra attribute x0 = 1. Clearly the constant 1 is linearly independent of the components xi of x, so f(x) actually lives in a (d+1)-dimensional space. When the x0 attribute takes only a single value across all examples, such an attribute has lost its meaning as a discriminative attribute; setting its weight b to 0 therefore amounts to dropping an attribute whose value is identical for every example. (I have a feeling this isn't right, though T_T)
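To make the augmentation above concrete (standard notation, not taken from the book): writing w-hat = (w; b) and x-hat = (x; 1),

\[
f(\boldsymbol{x}) = \boldsymbol{w}^\top \boldsymbol{x} + b
= \begin{pmatrix} \boldsymbol{w} \\ b \end{pmatrix}^{\!\top} \begin{pmatrix} \boldsymbol{x} \\ 1 \end{pmatrix}
= \hat{\boldsymbol{w}}^\top \hat{\boldsymbol{x}},
\]

so b is exactly the weight on a constant attribute x0 = 1, and setting b to 0 means dropping that constant attribute from the model.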

3.2 Prove that, with respect to the parameter w, the objective function (3.18) of logistic regression is non-convex, but its log-likelihood function (3.27) is convex.

Taking the second derivative of the objective function (3.18), a factor of exp(w'x+b) - 1 appears whose sign cannot be determined, so (3.18) is non-convex; for the log-likelihood function (3.27), the derivation of Eq. (3.31) shows that its second derivative (the Hessian) is positive semi-definite everywhere, so (3.27) is convex.
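Spelling both computations out (using the book's notation beta = (w; b), x-hat = (x; 1)): for (3.18), with z = w'x + b and y = 1/(1+e^{-z}),

\[
\frac{\partial^2 y}{\partial \boldsymbol{w}\,\partial \boldsymbol{w}^\top}
= y(1-y)(1-2y)\,\boldsymbol{x}\boldsymbol{x}^\top
= \frac{e^{z}\left(1-e^{z}\right)}{(1+e^{z})^{3}}\,\boldsymbol{x}\boldsymbol{x}^\top,
\]

which changes sign with 1 - e^{z}, so the Hessian fails to be positive semi-definite on part of the domain and (3.18) is non-convex. For (3.27), Eq. (3.31) gives

\[
\frac{\partial^2 \ell(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}\,\partial \boldsymbol{\beta}^\top}
= \sum_{i=1}^{m} \hat{\boldsymbol{x}}_i \hat{\boldsymbol{x}}_i^\top\, p_1(\hat{\boldsymbol{x}}_i;\boldsymbol{\beta})\bigl(1 - p_1(\hat{\boldsymbol{x}}_i;\boldsymbol{\beta})\bigr),
\]

a sum of positive semi-definite rank-one terms (since p1(1-p1) > 0), hence positive semi-definite everywhere, so (3.27) is convex.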

3.3 Implement logistic regression in code, and give its results on watermelon dataset 3.0a.

# -*- coding: utf-8 -*-
import numpy as np

# The exercise is from p69 > 3.3.
# The training dataset is from p89; assign it to data (type=matrix).
# The set divides into 3 columns: density, sugar, label,
# indexed as data[:,0], data[:,1], data[:,2] respectively.
# In this example the number of attributes equals 2.
data = [[0.697, 0.460, 1],
        [0.774, 0.376, 1],
        [0.634, 0.264, 1],
        [0.608, 0.318, 1],
        [0.556, 0.215, 1],
        [0.403, 0.237, 1],
        [0.481, 0.149, 1],
        [0.437, 0.211, 1],
        [0.666, 0.091, 0],
        [0.243, 0.267, 0],
        [0.245, 0.057, 0],
        [0.343, 0.099, 0],
        [0.639, 0.161, 0],
        [0.657, 0.198, 0],
        [0.360, 0.370, 0],
        [0.593, 0.042, 0],
        [0.719, 0.103, 0]]

beta = np.array([1, 1, 1]).reshape((-1, 1))  # initial beta is the column vector [w1, w2, b]'
data = np.matrix(data)
beta = np.matrix(beta)
density, sugar, label = data[:, 0], data[:, 1], data[:, 2]
# in the label column, 'good' is 1 and 'bad' is 0
x = np.c_[density, sugar, np.ones(len(sugar))].T  # each column of x is [x1, x2, 1]'


def cal_l(beta, x, label):
    # compute l'(beta) and l''(beta) over the data, as l1 and l2 respectively
    l1, l2 = 0, np.mat(np.zeros((3, 3)))
    for i in range(x.shape[1]):
        p1 = np.exp(beta.T * x[:, i]) / (1 + np.exp(beta.T * x[:, i]))  # p(y=1 | x_i)
        l1 += x[:, i] * (p1 - label[i])                    # gradient, cf. Eq. (3.30)
        l2 += x[:, i] * x.T[i, :] * (p1 * (1 - p1))[0, 0]  # Hessian, cf. Eq. (3.31)
    return l1, l2


# Newton's method: iterate until the distance between new_beta and beta is small
dist = 1
while dist >= 0.01:
    l1, l2 = cal_l(beta, x, label)
    new_beta = beta - l2.I * l1
    dist = np.linalg.norm(new_beta - beta)
    beta = new_beta

c = []  # the fitted probabilities p(y=1 | x_i) for the 17 examples
for i in range(17):
    c.append(1 / (1 + np.exp(-beta.T * x[:, i]))[0, 0])
print(new_beta)
print(c)
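Since np.matrix is deprecated in current NumPy releases, here is a minimal vectorized sketch of the same Newton iteration using plain ndarrays; it reuses the data list above, and the variable names (Xhat, grad, hess) are my own:

import numpy as np

arr = np.array(data)                          # the 17 x 3 data list above
Xhat = np.c_[arr[:, :2], np.ones(len(arr))]   # append the constant attribute x0 = 1
y = arr[:, 2]                                 # labels: good = 1, bad = 0

beta = np.ones(3)                             # [w1, w2, b]
for _ in range(100):                          # Newton's method, capped at 100 iterations
    p1 = 1 / (1 + np.exp(-Xhat @ beta))       # p(y=1 | x_i) for every example
    grad = Xhat.T @ (p1 - y)                  # first derivative, Eq. (3.30)
    hess = (Xhat.T * (p1 * (1 - p1))) @ Xhat  # second derivative, Eq. (3.31)
    step = np.linalg.solve(hess, grad)        # Newton step
    beta -= step
    if np.linalg.norm(step) < 0.01:
        break
print(beta)                                   # should be close to the result printed above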
result:

[[  3.15832966]
 [ 12.52119579]
 [ -4.42886451]]
[0.97159134201182584, 0.93840796737854693, 0.7066382101828117, 0.81353420973519985,
 0.50480582132703811, 0.45300555631425837, 0.26036934432276743, 0.39970315015130975,
 0.23397722179395924, 0.42110689644219934, 0.050146188402258575, 0.10851898058397864,
 0.40256730484729258, 0.53129773794877577, 0.79265049892320416, 0.11608022112650698,
 0.29559934850614572]

Checking the result: out of the 17 examples, 3 positive and 2 negative examples are misclassified (at a threshold of 0.5), giving an error rate of 5/17.
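As a quick sanity check of that count (reusing the c list and label column from the script above, thresholding at 0.5):

pred = [1 if p >= 0.5 else 0 for p in c]        # predicted labels
truth = [int(label[i, 0]) for i in range(17)]   # true labels
errors = sum(p != t for p, t in zip(pred, truth))
print(errors)  # 5, i.e. an error rate of 5/17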

