
Machine learning's "Hello World" on TensorFlow: MNIST handwritten digit recognition

Abstract: With TensorFlow installed (see the installation notes), the next step is to follow the official tutorial and run the handwritten digit recognition experiment. After activating the virtual environment, first change into the tutorial directory, then start an interactive terminal.

With TensorFlow installed (see the TensorFlow installation notes), the next step is to follow the official tutorial and run the MNIST handwritten digit recognition experiment.

Softmax experiment procedure

After activating the tfgpu virtual environment, first change into the directory /anaconda2/envs/tfgpu/lib/python2.7/site-packages/tensorflow/examples/tutorials/mnist/, then start an IPython interactive shell.

In [4]: from tensorflow.examples.tutorials.mnist import input_data
   ...: mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
   ...: 
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

In [5]: import tensorflow as tf

In [6]: x = tf.placeholder(tf.float32, [None, 784])

In [7]: W = tf.Variable(tf.zeros([784, 10]))
   ...: b = tf.Variable(tf.zeros([10]))
   ...: 

In [8]: y = tf.nn.softmax(tf.matmul(x, W) + b)

In [9]: y_ = tf.placeholder(tf.float32, [None, 10])

In [10]: cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

In [11]: train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

In [12]: init = tf.initialize_all_variables()

In [13]: sess = tf.Session()
    ...: sess.run(init)
    ...: 
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: GeForce 940M
major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:08:00.0
Total memory: 1023.88MiB
Free memory: 997.54MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:839] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce 940M, pci bus id: 0000:08:00.0)

In [14]: for i in range(1000):
    ...:   batch_xs, batch_ys = mnist.train.next_batch(100)
    ...:   sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    ...:   

In [15]: correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))

In [16]: accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    ...: 

In [17]: print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
0.9186
Softmax experiment notes
In [4]: downloads the MNIST data set

In [6]: allocates the input placeholder x; None means the number of input images is decided later at run time, and 784 = 28*28: each 28x28 image is flattened into a one-dimensional vector (it only matters that every image is flattened the same way)

In [7]: allocates the weights W and the bias b

In [8]: implements the softmax model and produces the predicted output y

In [9]: allocates the placeholder y_ for the true labels

In [10]: defines the cross-entropy cost function, i.e. -sum_i y'_i * log(y_i), averaged over the batch

In [11]: takes one gradient descent step per iteration with a learning rate (step size) of 0.5

In [12]: initializes all variables

In [13]: opens a session and launches the model

In [14]: runs 1000 steps of stochastic gradient descent on mini-batches of 100 images

In [15]: compares the predicted labels y against the true labels y_

In [16]: computes the accuracy

In [17]: evaluates the accuracy on the test set: 91.86%
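
For convenience, the same softmax experiment can also be run as one standalone script. The sketch below simply strings together the commands from In [4] through In [17] above, with comments added; it assumes the same TensorFlow version and the same MNIST_data/ download directory as the session, and should reproduce roughly the same ~92% figure, give or take mini-batch randomness.

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# download (or reuse) the MNIST data, with labels one-hot encoded
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# model: a single softmax layer, y = softmax(xW + b)
x = tf.placeholder(tf.float32, [None, 784])      # flattened 28*28 images
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])      # true one-hot labels

# cross-entropy cost, minimized by gradient descent with step size 0.5
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

# 1000 steps of stochastic gradient descent on mini-batches of 100 images
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# accuracy on the full test set
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
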
Neural network experiment procedure
In [1]: from tensorflow.examples.tutorials.mnist import input_data
   ...: mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
   ...: 
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

In [2]: import tensorflow as tf
   ...: sess = tf.InteractiveSession()
   ...: 
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: GeForce 940M
major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:08:00.0
Total memory: 1023.88MiB
Free memory: 997.54MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:839] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce 940M, pci bus id: 0000:08:00.0)

In [3]: def weight_variable(shape):
   ...:   initial = tf.truncated_normal(shape, stddev=0.1)
   ...:   return tf.Variable(initial)
   ...: 
   ...: def bias_variable(shape):
   ...:   initial = tf.constant(0.1, shape=shape)
   ...:   return tf.Variable(initial)
   ...: 

In [4]: def conv2d(x, W):
   ...:   return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
   ...: 
   ...: def max_pool_2x2(x):
   ...:   return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
   ...:                         strides=[1, 2, 2, 1], padding="SAME")
   ...: 

In [5]: W_conv1 = weight_variable([5, 5, 1, 32])
   ...: b_conv1 = bias_variable([32])
   ...: 

In [7]: x = tf.placeholder(tf.float32, shape=[None, 784])
   ...: y_ = tf.placeholder(tf.float32, shape=[None, 10])
   ...: 

In [8]: x_image = tf.reshape(x, [-1,28,28,1])
   ...: 

In [9]: h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
   ...: h_pool1 = max_pool_2x2(h_conv1)
   ...: 

In [10]: W_conv2 = weight_variable([5, 5, 32, 64])
    ...: b_conv2 = bias_variable([64])
    ...: 
    ...: h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    ...: h_pool2 = max_pool_2x2(h_conv2)
    ...: 

In [11]: W_fc1 = weight_variable([7 * 7 * 64, 1024])
    ...: b_fc1 = bias_variable([1024])
    ...: 
    ...: h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
    ...: h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
    ...: 

In [12]: keep_prob = tf.placeholder(tf.float32)
    ...: h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
    ...: 

In [13]: W_fc2 = weight_variable([1024, 10])
    ...: b_fc2 = bias_variable([10])
    ...: 
    ...: y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
    ...: 

In [14]: cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
    ...: train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    ...: correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
    ...: accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    ...: sess.run(tf.initialize_all_variables())
    ...: for i in range(2000):
    ...:   batch = mnist.train.next_batch(50)
    ...:   if i%100 == 0:
    ...:     train_accuracy = accuracy.eval(feed_dict={
    ...:         x:batch[0], y_: batch[1], keep_prob: 1.0})
    ...:     print("step %d, training accuracy %g"%(i, train_accuracy))
    ...:   train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    ...: 
    ...: print("test accuracy %g"%accuracy.eval(feed_dict={
    ...:     x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
    ...: 
step 0, training accuracy 0.04
step 100, training accuracy 0.86
step 200, training accuracy 0.92
step 300, training accuracy 0.88
step 400, training accuracy 0.96
step 500, training accuracy 0.9
step 600, training accuracy 1
step 700, training accuracy 0.98
step 800, training accuracy 0.92
step 900, training accuracy 0.98
step 1000, training accuracy 0.94
step 1100, training accuracy 0.96
step 1200, training accuracy 1
step 1300, training accuracy 0.98
step 1400, training accuracy 0.94
step 1500, training accuracy 0.96
step 1600, training accuracy 1
step 1700, training accuracy 0.92
step 1800, training accuracy 0.92
step 1900, training accuracy 0.96
I tensorflow/core/common_runtime/bfc_allocator.cc:639] Bin (256):     Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:639] Bin (512):     Total Chunks: 1, Chunks in use: 0 768B allocated for chunks. 6.4KiB client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:639] Bin (1024):     Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
...
...
...
...
Limit:                   836280320
InUse:                    83845120
MaxInUse:                117678336
NumAllocs:                  246915
MaxAllocSize:             45883392

W tensorflow/core/common_runtime/bfc_allocator.cc:270] *****_******________________________________________________________________________________________
W tensorflow/core/common_runtime/bfc_allocator.cc:271] Ran out of memory trying to allocate 957.03MiB.  See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:936] Resource exhausted: OOM when allocating tensor with shape[10000,28,28,32]
E tensorflow/core/client/tensor_c_api.cc:485] OOM when allocating tensor with shape[10000,28,28,32]
     [[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](Reshape, Variable/read)]]


In [20]: cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
    ...: train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    ...: correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
    ...: accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    ...: sess.run(tf.initialize_all_variables())
    ...: for i in range(20000):
    ...:   batch = mnist.train.next_batch(50)
    ...:   if i%100 == 0:
    ...:     train_accuracy = accuracy.eval(feed_dict={
    ...:         x:batch[0], y_: batch[1], keep_prob: 1.0})
    ...:     print("step %d, training accuracy %g"%(i, train_accuracy))
    ...:   train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    ...: 
    ...: print("test accuracy %g"%accuracy.eval(feed_dict={
    ...:     x: mnist.test.images[0:200,:], y_: mnist.test.labels[0:200,:], keep_prob: 1.0}))
    ...: 
step 0, training accuracy 0.12
step 100, training accuracy 0.78
step 200, training accuracy 0.88
step 300, training accuracy 0.96
step 400, training accuracy 0.9
step 500, training accuracy 0.96
step 600, training accuracy 0.94
step 700, training accuracy 0.92
step 800, training accuracy 0.92
step 900, training accuracy 0.96
step 1000, training accuracy 0.94
step 1100, training accuracy 0.98
step 1200, training accuracy 0.96
step 1300, training accuracy 1
step 1400, training accuracy 0.98
...
...
...
test accuracy 0.995

In [21]: cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
    ...: train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    ...: correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
    ...: accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    ...: sess.run(tf.initialize_all_variables())
    ...: for i in range(20000):
    ...:   batch = mnist.train.next_batch(50)
    ...:   if i%100 == 0:
    ...:     train_accuracy = accuracy.eval(feed_dict={
    ...:         x:batch[0], y_: batch[1], keep_prob: 1.0})
    ...:     print "step %d, training accuracy %g"%(i, train_accuracy)
    ...:   train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    ...: 
    ...: print "test accuracy %g"%accuracy.eval(feed_dict={
    ...:     x: mnist.test.images[200:400,:], y_: mnist.test.labels[200:400,:], keep_prob: 1.0})
    ...: 
    ...: 
step 0, training accuracy 0.12
step 100, training accuracy 0.94
step 200, training accuracy 0.86
step 300, training accuracy 0.96
step 400, training accuracy 0.9
step 500, training accuracy 1
step 600, training accuracy 0.96
step 700, training accuracy 0.88
step 800, training accuracy 1
step 900, training accuracy 0.98
step 1000, training accuracy 0.96
step 1100, training accuracy 0.94
step 1200, training accuracy 0.96
step 1300, training accuracy 0.96
step 1400, training accuracy 0.94
step 1500, training accuracy 0.98
step 1600, training accuracy 0.96
step 1700, training accuracy 0.98
...
...
...
test accuracy 0.975

In [22]: for i in range(20000):
    ...:   batch = mnist.train.next_batch(50)
    ...:   if i%100 == 0:
    ...:     train_accuracy = accuracy.eval(feed_dict={
    ...:         x:batch[0], y_: batch[1], keep_prob: 1.0})
    ...:     print "step %d, training accuracy %g"%(i, train_accuracy)
    ...:   train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    ...: 
    ...: print "test accuracy %g"%accuracy.eval(feed_dict={
    ...:     x: mnist.test.images[400:1000,:], y_: mnist.test.labels[400:1000,:], keep_prob: 1.0})
    ...: 
step 0, training accuracy 1
step 100, training accuracy 1
step 200, training accuracy 0.98
step 300, training accuracy 0.98
step 400, training accuracy 0.98
step 500, training accuracy 1
step 600, training accuracy 0.96
step 700, training accuracy 0.96
...
...
...
test accuracy  0.983333
Neural network experiment notes
In [1]: loads the MNIST data set (training, validation, and test sets)

In [2]: imports TensorFlow and starts an InteractiveSession (more flexible than a plain Session)

In [3]: defines two helper functions that initialize W and b, to keep the later code short

In [4]: defines the convolution and pooling functions; convolution uses SAME padding so the output is the same size as the input, and pooling is 2x2, so every 4 cells collapse into 1

In [5]: defines W and b for the first convolutional layer

In [7]: allocates the input placeholders x and y_

In [8]: reshapes x into a 28x28x1 image tensor

In [9]: convolves x_image with W, adds b, applies the ReLU activation, and finally max-pools

In [10]: second convolutional layer, analogous to the first

In [11]: fully connected layer

In [12]: to reduce overfitting, dropout is applied before the output layer (this example is simple enough that leaving it out would make little difference)

In [13]: a softmax layer produces the output

In [14]: defines the cost function and the training step, optimized with Adam; as the log shows, the full test set is too large for my GPU memory (a batched-evaluation workaround is sketched after these notes)

In [20]: using only test images 1-200, the accuracy is 0.995

In [21]: using only test images 201-400, the accuracy is 0.975

In [22]: using only test images 401-1000, the accuracy is 0.983333
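
The OOM error in In [14] comes from pushing all 10000 test images through the convolutional layers at once. Instead of hand-picking slices as in In [20] through In [22], the test set can be evaluated in fixed-size batches and the per-batch accuracies averaged, weighted by batch size. A minimal sketch, reusing the accuracy node and the placeholders defined above; the batch size of 200 is an arbitrary choice that happens to fit this card's memory.

eval_batch = 200
total = mnist.test.images.shape[0]
acc_sum = 0.0
for start in range(0, total, eval_batch):
    end = min(start + eval_batch, total)
    # accuracy on one slice of the test set, with dropout disabled
    batch_acc = accuracy.eval(feed_dict={
        x: mnist.test.images[start:end, :],
        y_: mnist.test.labels[start:end, :],
        keep_prob: 1.0})
    acc_sum += batch_acc * (end - start)   # weight each batch by its size
print("test accuracy %g" % (acc_sum / total))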

The structure of this CNN is as follows:
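
These layer shapes are read off the code in In [5] through In [13]; with SAME padding the convolutions preserve the spatial size, and each 2x2 max-pool halves it.

  input : 784-vector, reshaped to 28x28x1
  conv1 : 5x5 kernel, 1 -> 32 feature maps, ReLU  -> 28x28x32
  pool1 : 2x2 max-pool                            -> 14x14x32
  conv2 : 5x5 kernel, 32 -> 64 feature maps, ReLU -> 14x14x64
  pool2 : 2x2 max-pool                            -> 7x7x64
  fc1   : 7*7*64 = 3136 -> 1024, ReLU, dropout
  fc2   : 1024 -> 10, softmax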

Modifying the CNN

Now let's modify this CNN, increasing the number of feature maps in the hope of better accuracy. In the modified structure, conv1 has 64 feature maps instead of 32, conv2 has 128 instead of 64, and the first fully connected layer's input grows to 7*7*128 accordingly.

The experiment goes as follows:

In [23]:     def weight_variable(shape):
    ...:       initial = tf.truncated_normal(shape, stddev=0.1)
    ...:       return tf.Variable(initial)
    ...:     
    ...:     def bias_variable(shape):
    ...:       initial = tf.constant(0.1, shape=shape)
    ...:       return tf.Variable(initial)
    ...:     def conv2d(x, W):
    ...:       return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
    ...:     def max_pool_2x2(x):
    ...:       return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
    ...:                             strides=[1, 2, 2, 1], padding="SAME")
    ...:     W_conv1 = weight_variable([5, 5, 1, 64])
    ...:     b_conv1 = bias_variable([64])
    ...:     h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    ...:     h_pool1 = max_pool_2x2(h_conv1)
    ...:     W_conv2 = weight_variable([5, 5, 64, 128])
    ...:     b_conv2 = bias_variable([128])
    ...:     
    ...:     h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    ...:     h_pool2 = max_pool_2x2(h_conv2)
    ...:     W_fc1 = weight_variable([7 * 7 * 128, 1024])
    ...:     b_fc1 = bias_variable([1024])
    ...:     
    ...:     h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*128])
    ...:     h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
    ...:     keep_prob = tf.placeholder("float")
    ...:     h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
    ...:     W_fc2 = weight_variable([1024, 10])
    ...:     b_fc2 = bias_variable([10])
    ...:     
    ...:     y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
    ...:     cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
    ...:     train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    ...:     correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
    ...:     accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    ...:     sess.run(tf.initialize_all_variables())
    ...:     for i in range(20000):
    ...:       batch = mnist.train.next_batch(50)
    ...:       if i%100 == 0:
    ...:         train_accuracy = accuracy.eval(feed_dict={
    ...:             x:batch[0], y_: batch[1], keep_prob: 1.0})
    ...:         print "step %d, training accuracy %g"%(i, train_accuracy)
    ...:       train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    ...:     
    ...:     print "test accuracy %g"%accuracy.eval(feed_dict={
    ...:         x: mnist.test.images[0:200,:], y_: mnist.test.labels[0:200,:], keep_prob: 1.0})
    ...: 
    ...
    ...
    ...
    test accuracy 1
In [24]: for i in range(20000):
    ...:    batch = mnist.train.next_batch(50)
    ...:    if i%100 == 0:
    ...:       train_accuracy = accuracy.eval(feed_dict={
    ...:          x:batch[0], y_: batch[1], keep_prob: 1.0})
    ...:       print "step %d, training accuracy %g"%(i, train_accuracy)
    ...:    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    ...: 
    ...: print "test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images[200:400,:], y_: mnist.test.labels[200:400,:], keep_prob: 1.0})
    ...: 
    ...
    ...
    ...
    test accuracy 0.975
In [25]: for i in range(20000):
    ...:    batch = mnist.train.next_batch(50)
    ...:    if i%100 == 0:
    ...:       train_accuracy = accuracy.eval(feed_dict={
    ...:          x:batch[0], y_: batch[1], keep_prob: 1.0})
    ...:       print "step %d, training accuracy %g"%(i, train_accuracy)
    ...:    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    ...: 
    ...: print "test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images[400:1000,:], y_: mnist.test.labels[400:1000,:], keep_prob: 1.0})
    ...: 
    ...
    ...
    ...
    W tensorflow/core/common_runtime/bfc_allocator.cc:213] Ran out of memory trying to allocate 717.77MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
In [26]: print "test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images[400:600,:], y_: mnist.test.labels[400:600,:], keep_prob: 1.0})
    ...:
    test accuracy 0.985
In [28]: print "test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images[600:800,:], y_: mnist.test.labels[600:800,:], keep_prob: 1.0})
    ...: 
    ...: 
    ...: 
test accuracy 0.985
In [29]: print "test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images[800:1000,:], y_: mnist.test.labels[800:1000,:], keep_prob: 1.0})
    ...: 
    ...: 
test accuracy 0.995

The average accuracy over the first 1000 test images before the modification (each slice's accuracy weighted by the number of hundred-image blocks it covers):

(0.995*2 + 0.975*2 + 0.9833*6) / 10 = 0.98398

The average accuracy after the modification:

(1*2 + 0.975*2 + 0.985*4 + 0.995*2) / 10 = 0.98800

As you can see, adding more feature maps raises the accuracy, but memory consumption also grows (evaluating images 400-1000 in one go hit an OOM error), and training takes noticeably longer. How to trade these off depends on the specific requirements and hardware.
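
One way to ease the memory pressure, assuming a TensorFlow version of this era where tf.ConfigProto exposes gpu_options.allow_growth, is to let the session grow its GPU allocation on demand instead of reserving nearly all of the card's memory up front. This is only a sketch: it does not help when a single tensor (such as the full-test-set activation) exceeds the card's 1 GiB, so batched evaluation remains the safer fix.

import tensorflow as tf

# allocate GPU memory as needed rather than grabbing it all at session start
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.InteractiveSession(config=config)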
