

TensorFlow Learning Notes 3: MNIST Applications



Applying a Convolutional Neural Network to MNIST

The concept of the convolutional neural network

A convolutional neural network (Convolutional Neural Network, CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited receptive field; it performs very well on large-scale image processing.[2] It consists of convolutional layers (convolutional layer) and pooling layers (pooling layer).
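
As a quick illustration of these two layer types (a minimal sketch with toy tensors, not part of the program below): a 3x3 convolution with "SAME" padding keeps the spatial size, while a 2x2 max pooling halves it.

    import tensorflow as tf

    # hypothetical toy example: a batch of 28x28 grayscale images and 32 filters of size 3x3
    image  = tf.placeholder("float", [None, 28, 28, 1])
    kernel = tf.Variable(tf.random_normal([3, 3, 1, 32], stddev=0.01))

    conv = tf.nn.conv2d(image, kernel, strides=[1, 1, 1, 1], padding="SAME")               # shape [None, 28, 28, 32]
    pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")  # shape [None, 14, 14, 32]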

Training on the MNIST dataset with a convolutional neural network
    import tensorflow as tf
    import numpy as np
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

    trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels

    trX = trX.reshape(-1, 28, 28, 1)  # reshape to 28x28x1 input images
    teX = teX.reshape(-1, 28, 28, 1)

    X = tf.placeholder("float", [None, 28, 28, 1])
    Y = tf.placeholder("float", [None, 10])
    conv_dropout  = tf.placeholder("float")   # keep probability for the convolutional layers
    dense_dropout = tf.placeholder("float")   # keep probability for the fully connected layer
    w1 = tf.Variable(tf.random_normal([3, 3, 1, 32], stddev=0.01))    # 3x3 conv, 1 input channel, 32 filters
    w2 = tf.Variable(tf.random_normal([3, 3, 32, 64], stddev=0.01))   # 3x3 conv, 32 -> 64 filters
    w3 = tf.Variable(tf.random_normal([3, 3, 64, 128], stddev=0.01))  # 3x3 conv, 64 -> 128 filters
    w4 = tf.Variable(tf.random_normal([4*4*128, 1024], stddev=0.01))  # fully connected: 28 -> 14 -> 7 -> 4 after three 2x2 poolings
    wo = tf.Variable(tf.random_normal([1024, 10], stddev=0.01))       # output layer: 10 classes

    # convolution, pooling and dropout
    def conv_and_pool(x, w, step, dropout):
        x = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME"))
        x = tf.nn.max_pool(x, ksize=[1, step, step, 1], strides=[1, step, step, 1], padding="SAME")
        x = tf.nn.dropout(x, dropout)
        return x
    # build the model
    def conv_model(x, w1, w2, w3, w4, wo, conv_do, dense_do):
        x = conv_and_pool(x, w1, 2, conv_do)  # first convolutional layer:  [?, 28, 28, 1]  -> [?, 14, 14, 32]
        x = conv_and_pool(x, w2, 2, conv_do)  # second convolutional layer: [?, 14, 14, 32] -> [?, 7, 7, 64]
        x = conv_and_pool(x, w3, 2, conv_do)  # third convolutional layer:  [?, 7, 7, 64]   -> [?, 4, 4, 128]

        x = tf.reshape(x, [-1, 4*4*128])      # flatten before the fully connected layer
        x = tf.nn.relu(tf.matmul(x, w4))      # fully connected layer
        x = tf.nn.dropout(x, dense_do)        # dropout to reduce overfitting

        x = tf.matmul(x, wo)                  # output logits for the 10 classes
        return x

    py_x = conv_model(X, w1, w2, w3, w4, wo, conv_dropout, dense_dropout)

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
    train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
    predict_op = tf.argmax(py_x, 1)

    batch_size = 128
    test_size = 256

    # train and evaluate the model
    with tf.Session() as sess:
        tf.global_variables_initializer().run()

        for i in range(100):
            training_batch = zip(range(0, len(trX), batch_size), range(batch_size, len(trX)+1, batch_size))
            for start, end in training_batch:
                sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end], conv_dropout: 0.8, dense_dropout: 0.5})

            # evaluate on a random subset of the test set after each epoch
            test_indices = np.arange(len(teX))
            np.random.shuffle(test_indices)
            test_indices = test_indices[0:test_size]
            print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==
                             sess.run(predict_op, feed_dict={X: teX[test_indices], conv_dropout: 1.0, dense_dropout: 1.0})))
Output:

0.179688
0.453125
0.671875
0.773438
0.765625
0.789062
0.804688
0.84375
0.796875
0.828125
...
0.953125
0.921875
0.945312
0.9375
0.914062
0.929688
0.953125
0.9375
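
After training, predict_op can also be used directly to classify a few test images. A small usage sketch (an addition to the original script; it has to run inside the same session, with both dropout keep probabilities fed as 1.0):

    # inside the same tf.Session(), after the training loop:
    sample = teX[:5]
    print(sess.run(predict_op, feed_dict={X: sample, conv_dropout: 1.0, dense_dropout: 1.0}))
    print(np.argmax(teY[:5], axis=1))  # ground-truth labels for comparison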

Applying a Recurrent Neural Network to MNIST

The concept of the recurrent neural network (RNN, also sometimes called a recursive neural network)

In a traditional neural network model, data flows from the input layer to the hidden layer to the output layer; adjacent layers are fully connected, but the nodes within a layer are not connected to each other. This kind of ordinary network is powerless against many problems. For example, to predict the next word in a sentence you generally need the preceding words, because the words in a sentence are not independent of one another. An RNN (Recurrent Neural Network) is a neural network for modeling sequence data: the current output of a sequence also depends on the earlier outputs. Concretely, the network remembers earlier information and uses it when computing the current output; the nodes in the hidden layer are now connected to each other, and the input to the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous time step.
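
To make the recurrence concrete, here is a minimal sketch (plain NumPy with hypothetical sizes, not the TensorFlow code used below) of a single vanilla RNN step, in which the new hidden state depends on both the current input and the previous hidden state:

    import numpy as np

    n_inputs, n_hidden = 28, 128                       # hypothetical sizes
    W_x = np.random.randn(n_inputs, n_hidden) * 0.01   # input-to-hidden weights
    W_h = np.random.randn(n_hidden, n_hidden) * 0.01   # hidden-to-hidden (recurrent) weights
    b   = np.zeros(n_hidden)

    def rnn_step(x_t, h_prev):
        # the new hidden state mixes the current input with the memory of earlier steps
        return np.tanh(x_t.dot(W_x) + h_prev.dot(W_h) + b)

    h = np.zeros(n_hidden)
    for x_t in np.random.randn(28, n_inputs):          # a toy sequence of 28 input vectors
        h = rnn_step(x_t, h)                           # h now summarizes everything seen so far
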
RNNs have been applied very successfully in the following areas of natural language processing:

Machine translation;

Speech recognition;

Image caption generation (combining an RNN with a CNN to generate a description from image features);

Language modeling and text generation, i.e. using the learned model to predict the probability of the next word.

Training on the MNIST dataset with a recurrent neural network (RNN)
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    from tensorflow.contrib import rnn
    tf.set_random_seed(1)

    mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
    learning_rate = 0.01
    train_count = 100000
    batch_size  = 128

    # network parameters
    n_inputs = 28         # each image row (28 pixels) is one input vector
    n_steps = 28          # 28 rows per image = 28 time steps
    n_hidden_units = 128  # number of units in the LSTM hidden layer
    n_classes = 10        # digits 0-9

    x = tf.placeholder(tf.float32, [None, 28, 28])
    y = tf.placeholder(tf.float32, [None, 10])

    weights = {
        "in": tf.Variable(tf.random_normal([28, 128])),
        "out": tf.Variable(tf.random_normal([128, 10])),
    }

    biases = {
        "in": tf.Variable(tf.constant(0.1, shape=[128, ])),
        "out": tf.Variable(tf.constant(0.1, shape=[10, ])),
    }

    def RNN(X, weights, biases):
        # each 28x28 image is treated as a sequence of 28 rows of 28 pixels;
        # flatten to [batch_size*28, 28] so the input projection applies to every row
        X = tf.reshape(X, [-1, 28])
        X_in = tf.matmul(X, weights["in"]) + biases["in"]
        # [batch_size*28, 128] -> convert back to [batch_size, 28 steps, 128]
        X_in = tf.reshape(X_in, [-1, 28, 128])
        lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
        init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
        # dynamic_rnn unrolls the LSTM cell over the 28 time steps
        # outputs, final_state = rnn.static_rnn(lstm_cell, X_in, initial_state=init_state)
        outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, X_in, initial_state=init_state, time_major=False)
        # final_state is an LSTMStateTuple (c, h); the hidden state h is used for classification
        results = tf.matmul(final_state[1], weights["out"]) + biases["out"]
        return results

    pred = RNN(x, weights, biases)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(cost)

    correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        step = 0
        while step * batch_size < train_count:
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            batch_xs = batch_xs.reshape([batch_size, 28, 28])
            sess.run([train_op], feed_dict={
                x: batch_xs,
                y: batch_ys,
            })

            if step % 20 == 0:
                print(sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys}))
            step += 1
Output:

0.179688
0.453125
0.671875
0.773438
...
0.9375
0.914062
0.929688
0.953125
0.9375
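
The loop above only prints the accuracy on the current training batch. Because init_state is created with zero_state(batch_size, ...), the graph expects exactly batch_size examples per run, so a held-out check (an addition, not part of the original script) could feed one test batch of the same size inside the same session:

    # inside the same tf.Session(), after (or during) training:
    test_xs, test_ys = mnist.test.next_batch(batch_size)
    test_xs = test_xs.reshape([batch_size, 28, 28])
    print("test accuracy:", sess.run(accuracy, feed_dict={x: test_xs, y: test_ys}))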

Applying an Autoencoder Network to MNIST

The concept of the autoencoder network

An autoencoder is a type of neural network and an unsupervised learning method. It is trained with backpropagation, and its goal is to make the output equal to the input. Inside the autoencoder there is a hidden layer that produces a code representing the input. The main purpose of an autoencoder is to capture the important factors that represent the input by reproducing it at the output; the middle hidden layer acts as a compressed representation of the input, achieving an effect similar to PCA in finding the principal components of the original data.

Encoding MNIST with an autoencoder network
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import numpy as np
tf.set_random_seed(1)

mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
learning_rate = 0.01
training_epochs = 20
batch_size  = 256  # batch size for one training step
display_step = 1

examples_to_show = 10  # number of images to display in the comparison plot

n_hidden_1 = 256  # number of features in the first hidden layer
n_hidden_2 = 128  # number of features in the second hidden layer
n_input = 784     # input dimensionality (28*28 pixels per image)


X = tf.placeholder("float", [None, n_input])#input image data


weights = {
    "encoder_h1": tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    "encoder_h2": tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    "decoder_h1": tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    "decoder_h2": tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}
biases = {
    "encoder_b1": tf.Variable(tf.random_normal([n_hidden_1])),
    "encoder_b2": tf.Variable(tf.random_normal([n_hidden_2])),
    "decoder_b1": tf.Variable(tf.random_normal([n_hidden_1])),
    "decoder_b2": tf.Variable(tf.random_normal([n_input])),
}

def encoder(x):
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights["encoder_h1"]), biases["encoder_b1"]))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights["encoder_h2"]), biases["encoder_b2"]))
    return layer_2

def decoder(x):
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights["decoder_h1"]), biases["decoder_b1"]))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights["decoder_h2"]), biases["decoder_b2"]))
    return layer_2

encoder_op = encoder(X)           # encode the image data into the 128-dimensional representation
decoder_op = decoder(encoder_op)  # decode (reconstruct) the image data

y_pred = decoder_op  # predicted (reconstructed) image data
y_true = X           # the target is the input itself
cost = tf.reduce_mean(tf.pow(y_pred - y_true, 2))  # mean squared reconstruction error
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    total_batch = int(mnist.train.num_examples / batch_size)
    for epoch in range(training_epochs):
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
        if epoch % display_step == 0:
            print("Epoch:", "%04d" % (epoch + 1), "cost=", "{:.9f}".format(c))
    print("Optimization Finished!")

    # run the trained autoencoder on a few test images
    encode_decode = sess.run(y_pred, feed_dict={X: mnist.test.images[:examples_to_show]})

    # plot the original images next to the reconstructions produced by the autoencoder
    f, a = plt.subplots(2, 10, figsize=(10, 2))
    for i in range(examples_to_show):
        a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))  # original test image
        a[1][i].imshow(np.reshape(encode_decode[i], (28, 28)))      # reconstruction
    f.show()
    plt.draw()
   
Output: the per-epoch reconstruction cost, followed by a figure comparing the original test images (top row) with their reconstructions (bottom row).
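
Since the point of the autoencoder is the compressed representation in the middle layer, the encoder half can also be run on its own. A small sketch (an addition to the original program, run inside the same session) that extracts the 128-dimensional codes for a few test images:

# inside the same tf.Session(), after training:
codes = sess.run(encoder_op, feed_dict={X: mnist.test.images[:examples_to_show]})
print(codes.shape)  # (10, 128): each 784-pixel image is compressed to a 128-dimensional code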
