This article shares a concrete, worked example of implementing Logistic regression with TensorFlow, for your reference. The details are as follows.
1. Import the modules
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
from matplotlib import pyplot as plt
%matplotlib inline

# Import TensorFlow
import tensorflow as tf

# Import MNIST (the handwritten-digit dataset)
from tensorflow.examples.tutorials.mnist import input_data
2. Get the training and test data
import ssl
# Work around SSL certificate errors when downloading the dataset
ssl._create_default_https_context = ssl._create_unverified_context

mnist = input_data.read_data_sets('./TensorFlow', one_hot=True)
test = mnist.test
test_images = test.images
train = mnist.train
images = train.images
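Because the labels are loaded with `one_hot=True`, each digit label becomes a length-10 indicator vector. A minimal NumPy sketch of that encoding (the helper name `one_hot` is illustrative, not part of the dataset API):

```python
import numpy as np

def one_hot(labels, num_classes=10):
    # Each row is all zeros except a 1 at the label's index
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([3, 0], num_classes=4))
```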
3. Model the linear equation
# Create placeholders X and Y
X = tf.placeholder(tf.float32, shape=[None, 784])
Y = tf.placeholder(tf.float32, shape=[None, 10])

# Initialize the weights W and bias b (with zeros here)
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Predictions from the linear equation
y_pre = tf.matmul(X, W) + b

# Convert the predictions into probabilities
y_pre_r = tf.nn.softmax(y_pre)
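What `tf.nn.softmax` does to each row of scores can be sketched in plain NumPy (a hand-rolled illustration of the formula, not TensorFlow's implementation):

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability, then exponentiate and normalize
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[1.0, 2.0, 3.0]])
print(softmax(scores))  # each row sums to 1; larger scores get larger probabilities
```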
4. Construct the loss function
# -Y*tf.log(y_pre_r) ---> -Pi*log(Pi), the cross-entropy formula
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(y_pre_r), axis=1))
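The same cost can be checked by hand in NumPy. This is a sketch; the small `eps` guard against `log(0)` is an addition here, not part of the article's TensorFlow code:

```python
import numpy as np

def cross_entropy(y_true, y_prob, eps=1e-12):
    # Mean over the batch of -sum(Y * log(p)), matching the cost above
    return float(np.mean(-np.sum(y_true * np.log(y_prob + eps), axis=1)))

y_true = np.array([[0.0, 1.0, 0.0]])
y_prob = np.array([[0.1, 0.8, 0.1]])
print(cross_entropy(y_true, y_prob))  # only the true class contributes: -log(0.8)
```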
5. Run gradient descent to minimize the loss
# learning_rate: the step size taken along the steepest gradient direction during training
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
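Behind `GradientDescentOptimizer`, each step moves the parameters against the gradient of the cost. A NumPy sketch of one such step for softmax regression, using the closed-form gradient `X.T @ (p - Y) / N` of the mean cross-entropy (an illustration on made-up data, not the article's training loop):

```python
import numpy as np

def sgd_step(W, b, X, Y, lr=0.01):
    # Forward pass: logits -> softmax probabilities
    logits = X @ W + b
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p = p / p.sum(axis=1, keepdims=True)
    # Gradient of the mean softmax cross-entropy w.r.t. W and b
    n = X.shape[0]
    grad_W = X.T @ (p - Y) / n
    grad_b = (p - Y).mean(axis=0)
    # Step against the gradient
    return W - lr * grad_W, b - lr * grad_b

def loss(W, b, X, Y):
    logits = X @ W + b
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p = p / p.sum(axis=1, keepdims=True)
    return float(np.mean(-np.sum(Y * np.log(p + 1e-12), axis=1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
Y = np.eye(3)[rng.integers(0, 3, size=20)]
W, b = np.zeros((4, 3)), np.zeros(3)
before = loss(W, b, X, Y)
for _ in range(100):
    W, b = sgd_step(W, b, X, Y, lr=0.1)
print(before, '->', loss(W, b, X, Y))  # the cost decreases as the steps accumulate
```

Because this loss is convex in `W` and `b`, repeated steps at a moderate learning rate reliably reduce the cost.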
6. Initialize TensorFlow and train
# Define the parameters
# Number of training epochs
training_epochs = 25
# Batch size: feed the algorithm 10 samples per step
batch_size = 10
# Print the running results every 5 epochs
display_step = 5

# Variable initializer
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Initialize
    sess.run(init)
    # Loop over the epochs
    for epoch in range(training_epochs):
        avg_cost = 0.
        # total_batch = total number of training samples / samples per batch
        total_batch = int(train.num_examples / batch_size)
        for i in range(total_batch):
            # Fetch the next batch_size samples as training data
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs, Y: batch_ys})
            avg_cost += c / total_batch
        if (epoch + 1) % display_step == 0:
            print(batch_xs.shape, batch_ys.shape)
            print('epoch:', '%04d' % (epoch + 1), 'cost=', '{:.9f}'.format(avg_cost))
    print('Optimization Finished!')

    # 7. Evaluate the model
    # Test model
    correct_prediction = tf.equal(tf.argmax(y_pre_r, 1), tf.argmax(Y, 1))
    # Calculate accuracy for 3000 examples
    # tf.cast converts the boolean results to floats
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({X: mnist.test.images[:3000], Y: mnist.test.labels[:3000]}))
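The accuracy computation in step 7 reduces to comparing argmax indices. A NumPy sketch on toy values, mirroring what `tf.equal`, `tf.cast`, and `tf.reduce_mean` do together (the helper name `accuracy` here is illustrative):

```python
import numpy as np

def accuracy(y_prob, y_true):
    # A sample is correct when the predicted class (argmax of the probabilities)
    # matches the true class (argmax of the one-hot label)
    correct = np.argmax(y_prob, axis=1) == np.argmax(y_true, axis=1)
    return float(correct.astype(np.float32).mean())

y_prob = np.array([[0.9, 0.1], [0.3, 0.7], [0.8, 0.2]])
y_true = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
print(accuracy(y_prob, y_true))  # 2 of the 3 predictions match the labels
```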
That's all for this article. I hope it helps with your study, and I hope you will continue to support 武林站長站.