Abstract: Before implementing gradient descent with tf.gradients, we first use TensorFlow's built-in optimizer (GradientDescentOptimizer) to solve the MNIST classification problem with softmax regression. We then replace the optimizer with hand-written weight updates that follow the gradient-descent update rule, computed with tf.gradients and, finally, with an explicit gradient formula.
Author: chen_h
WeChat & QQ: 862251340
WeChat official account: coderpai
Jianshu: http://www.jianshu.com/p/13e0...
One of the reasons I like TensorFlow is that it can compute the gradients of a function automatically. We only need to define the function and call tf.gradients; that's all there is to it.
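Before the MNIST example, here is a minimal sketch of tf.gradients on a toy function (TensorFlow 1.x API, as in the rest of this post; the variable names are just for illustration):

import tensorflow as tf

# A tiny graph: f(x) = x^2 + 3x
x = tf.Variable(2.0)
f = x * x + 3.0 * x

# tf.gradients returns one gradient tensor per entry in xs
df_dx = tf.gradients(ys=f, xs=[x])[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # df/dx = 2x + 3, so at x = 2 this prints 7.0
    print(sess.run(df_dx))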
Next, let's walk through a concrete example.
Softmax regression on MNIST with TensorFlow's built-in optimizer

Before implementing gradient descent with tf.gradients, let's first use one of TensorFlow's built-in optimizers (for example, GradientDescentOptimizer) to solve the MNIST classification problem.
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10])   # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Start training
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Fit training using batch data
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", "%04d" % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy for 3000 examples
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images[:3000], y: mnist.test.labels[:3000]}))

# Output
# Extracting /tmp/data/train-images-idx3-ubyte.gz
# Extracting /tmp/data/train-labels-idx1-ubyte.gz
# Extracting /tmp/data/t10k-images-idx3-ubyte.gz
# Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
# Epoch: 0001 cost= 1.184285608
# Epoch: 0002 cost= 0.665428013
# Epoch: 0003 cost= 0.552858426
# Epoch: 0004 cost= 0.498728328
# Epoch: 0005 cost= 0.465593693
# Epoch: 0006 cost= 0.442609185
# Epoch: 0007 cost= 0.425552949
# Epoch: 0008 cost= 0.412188290
# Epoch: 0009 cost= 0.401390140
# Epoch: 0010 cost= 0.392354651
# Optimization Finished!
# Accuracy: 0.873333
So far we have relied on the built-in optimizer to minimize the loss. What if we want to compute the gradients and update the weights ourselves? That is exactly what tf.gradients is for.
Softmax regression on MNIST with tf.gradients

By the gradient-descent update rule, the weights are updated as follows: W ← W − α · ∂cost/∂W and b ← b − α · ∂cost/∂b, where α is the learning rate.
To implement this, I will drop the optimizer line and write the weight updates myself.
Since the model has a weight matrix W and a bias vector b, we need the gradient of the cost with respect to both. The implementation looks like this:
# Computing the gradient of cost with respect to W and b
grad_W, grad_b = tf.gradients(xs=[W, b], ys=cost)

# Gradient step
new_W = W.assign(W - learning_rate * grad_W)
new_b = b.assign(b - learning_rate * grad_b)
These few lines replace the single optimizer line from before, so why go to the trouble? Because when you need the gradient of your own loss function and don't want to derive the math by hand, TensorFlow will do it for you.
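As a minimal sketch of that flexibility, reusing the W, b, cost, and learning_rate defined in the listings above (the L2 penalty and the *_reg names here are hypothetical, just to illustrate the point): even if we bolt an extra term onto the loss, tf.gradients hands back the gradients without any additional math on our side.

# Hypothetical variant: cross entropy plus a small L2 penalty on the weights
reg_cost = cost + 0.001 * tf.reduce_sum(tf.square(W))

# tf.gradients differentiates the whole expression, penalty included
grad_W_reg, grad_b_reg = tf.gradients(xs=[W, b], ys=reg_cost)

# The update step looks exactly the same as before
new_W_reg = W.assign(W - learning_rate * grad_W_reg)
new_b_reg = b.assign(b - learning_rate * grad_b_reg)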
The computation graph is already built, so all that's left is to run it inside a session. Let's try it.
# Fit training using batch data
_, _, c = sess.run([new_W, new_b, cost], feed_dict={x: batch_xs, y: batch_ys})
We don't need the values of new_W and new_b here, so I simply discard those outputs.
The complete code is as follows:
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10])   # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))

# Compute the gradients with tf.gradients and apply the update manually
grad_W, grad_b = tf.gradients(xs=[W, b], ys=cost)
new_W = W.assign(W - learning_rate * grad_W)
new_b = b.assign(b - learning_rate * grad_b)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Fit training using batch data
            _, _, c = sess.run([new_W, new_b, cost], feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", "%04d" % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy for 3000 examples
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images[:3000], y: mnist.test.labels[:3000]}))

# Output
# Epoch: 0001 cost= 1.183741399
# Epoch: 0002 cost= 0.665312284
# Epoch: 0003 cost= 0.552796521
# Epoch: 0004 cost= 0.498697014
# Epoch: 0005 cost= 0.465521633
# Epoch: 0006 cost= 0.442611256
# Epoch: 0007 cost= 0.425528946
# Epoch: 0008 cost= 0.412203073
# Epoch: 0009 cost= 0.401364554
# Epoch: 0010 cost= 0.392398663
# Optimization Finished!
# Accuracy: 0.874

Softmax regression on MNIST using the gradient formula
The gradient with respect to the weights W is handled as follows.
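As a brief sketch of the standard softmax cross-entropy result that the W_grad line in the code below implements (the batch-averaging factor 1/m is dropped there, which only rescales the effective learning rate):

\[
J(W) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k} y_{k}^{(i)} \log \hat{y}_{k}^{(i)},
\qquad \hat{y}^{(i)} = \operatorname{softmax}\!\left(W^{\top} x^{(i)}\right)
\]
\[
\frac{\partial J}{\partial W} = \frac{1}{m} X^{\top}\left(\hat{Y} - Y\right)
= -\frac{1}{m} X^{\top}\left(Y - \hat{Y}\right)
\]

Here X is the m x 784 batch matrix, and Y and Ŷ are the m x 10 one-hot labels and predicted probabilities.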
As shown above, we can also implement this gradient equation directly, without tf.gradients and without TensorFlow's built-in optimizer. The complete code is as follows:
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 10
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10])   # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model (note: this version leaves the bias term out of the forward pass)
pred = tf.nn.softmax(tf.matmul(x, W))  # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))

# Hand-coded gradient: W_grad = -X^T (y - pred), the un-averaged batch gradient
W_grad = -tf.matmul(tf.transpose(x), y - pred)
b_grad = -tf.reduce_mean(tf.matmul(tf.transpose(x), y - pred), reduction_indices=0)

new_W = W.assign(W - learning_rate * W_grad)
new_b = b.assign(b - learning_rate * b_grad)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Fit training using batch data
            _, _, c = sess.run([new_W, new_b, cost], feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", "%04d" % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy for 3000 examples
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images[:3000], y: mnist.test.labels[:3000]}))

# Output
# Extracting /tmp/data/train-images-idx3-ubyte.gz
# Extracting /tmp/data/train-labels-idx1-ubyte.gz
# Extracting /tmp/data/t10k-images-idx3-ubyte.gz
# Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
# Epoch: 0001 cost= 0.432943137
# Epoch: 0002 cost= 0.330031527
# Epoch: 0003 cost= 0.313661941
# Epoch: 0004 cost= 0.306443773
# Epoch: 0005 cost= 0.300219418
# Epoch: 0006 cost= 0.298976618
# Epoch: 0007 cost= 0.293222957
# Epoch: 0008 cost= 0.291407861
# Epoch: 0009 cost= 0.288372261
# Epoch: 0010 cost= 0.286749691
# Optimization Finished!
# Accuracy: 0.898

How does TensorFlow compute gradients?
You may be wondering how TensorFlow computes the gradient of a function.
TensorFlow uses a technique called automatic differentiation; see the Wikipedia article on Automatic Differentiation for details.
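To give a flavor of what reverse-mode automatic differentiation does (a deliberately tiny toy in plain Python, not how TensorFlow is actually implemented), here is a sketch that records each operation's local derivative while building the expression and then applies the chain rule backwards:

class Node:
    """A scalar node in a tiny computation graph (illustrative toy only)."""
    def __init__(self, value, parents=()):
        self.value = value      # forward value
        self.parents = parents  # sequence of (parent_node, local_gradient) pairs
        self.grad = 0.0         # accumulated d(output)/d(this node)

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value, [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph, then sweep the chain rule in reverse
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local_grad in node.parents:
                parent.grad += local_grad * node.grad

# f(a, b) = a*b + a  =>  df/da = b + 1, df/db = a
a, b = Node(2.0), Node(3.0)
f = a * b + a
f.backward()
print(f.value, a.grad, b.grad)  # 8.0 4.0 2.0

The idea, recording each op's local derivative and sweeping the chain rule backwards over the graph, is the same in TensorFlow; it just operates on tensors and a much larger set of ops.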
I hope this article has been helpful.
CoderPai is a platform focused on hands-on algorithms, covering everything from basic algorithms to AI algorithms. If you are interested in practical algorithm work, follow us. You can join the AI practice WeChat group, the AI practice QQ group, the ACM algorithm WeChat group, or the ACM algorithm QQ group, or scan the QR code to follow the "CoderPai" WeChat account (coderpai).