

Configuring the Theano GPU Version on Windows

Yu_Huang / 3377 reads

Abstract: The week-4 assignment of the Coursera course I'm taking needs PyMC3, which seems to be based on the Theano backend. But the CPU version is far too slow; one Markov chain Monte Carlo run takes 10 hours, which is unbearable. To avoid breaking my environment, I created a new one in Anaconda.

I'm taking Coursera's Advanced Machine Learning, and the week-4 assignment requires PyMC3, which appears to be built on the Theano backend. The CPU version, however, is painfully slow: a single Markov chain Monte Carlo run takes 10 hours, which is intolerable. So I switched to the GPU version.

To avoid breaking my existing setup, I created a new environment in Anaconda. (For background on Anaconda, see the article I translated earlier.)

conda create -n theano-gpu python=3.4

(The GPU build of Theano apparently doesn't support the latest Python, so to be safe I installed an older version.)

conda install theano pygpu

This pulls in quite a few dependencies; conda should sort them out for you. If anything is missing, install it following the official documentation.
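As a quick sanity check after the install (a sketch; the package names are just those installed by the conda command above), you can verify both packages are visible to the new environment without fully loading them:

```python
import importlib.util

# Check that theano and pygpu are importable in the active environment.
# Run this inside the theano-gpu environment created above.
for pkg in ("theano", "pygpu"):
    found = importlib.util.find_spec(pkg) is not None
    print(pkg, "found" if found else "MISSING")
```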

As for installing CUDA and cuDNN, see the tutorial I wrote earlier on installing TensorFlow.

Unlike TensorFlow, Theano does not ship separate GPU and CPU builds; which device it uses is controlled by a configuration file. I learned this from other blogs: once the Theano environment is set up, just add a .theanorc.txt file under the path C:\Users\<your username>.

Contents of .theanorc.txt:

[global]
openmp = False
device = cuda
floatX = float32
base_compiler = C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
allow_input_downcast = True

[lib]
cnmem = 0.75

[blas]
ldflags =

[gcc]
cxxflags = -IC:\Users\lyh\Anaconda2\MinGW

[nvcc]
fastmath = True
flags = -LC:\Users\lyh\Anaconda2\libs
compiler_bindir = C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
flags = -arch=sm_30

Note that in newer Theano versions, the declaration for using the GPU changed from device=gpu to device=cuda.
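As a side note, the same settings can also be supplied per session through the THEANO_FLAGS environment variable instead of .theanorc.txt. A minimal sketch, assuming the same device and float settings as the config file above:

```python
import os

# THEANO_FLAGS must be set before theano is imported;
# theano reads it once at import time.
os.environ["THEANO_FLAGS"] = "device=cuda,floatX=float32,allow_input_downcast=True"

# import theano  # would now pick up these flags
print(os.environ["THEANO_FLAGS"])
```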

Then test whether the GPU is actually being used:

from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, tensor.Elemwise) and
              ("Gpu" not in type(x.op).__name__)
              for x in f.maker.fgraph.toposort()]):
    print("Used the cpu")
else:
    print("Used the gpu")

Output:

[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, vector)>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.377000 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu

At this point the configuration is complete.

Then, in the assignment, it shows that the Quadro card is active.

But there is still one warning:

WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.

I honestly haven't figured out how to deal with this one.
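For what it's worth, this warning means Theano fell back to NumPy's C-API BLAS rather than linking an optimized BLAS library directly. One way to at least see which BLAS NumPy itself was built against:

```python
import numpy

# Print the BLAS/LAPACK libraries NumPy is linked against.
# If this shows MKL or OpenBLAS, performance is usually still
# acceptable despite Theano's warning.
numpy.__config__.show()
```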

Later, execution reached:

with pm.Model() as logistic_model:
    # Since it is unlikely that the dependency between the age and salary is linear, we will include age squared
    # into features so that we can model dependency that favors certain ages.
    # Train Bayesian logistic regression model on the following features: sex, age, age^2, educ, hours
    # Use pm.sample to run MCMC to train this model.
    # To specify the particular sampler method (Metropolis-Hastings) to pm.sample,
    # use `pm.Metropolis`.
    # Train your model for 400 samples.
    # Save the output of pm.sample to a variable: this is the trace of the sampling procedure and will be used
    # to estimate the statistics of the posterior distribution.
    
    #### YOUR CODE HERE ####
    
    pm.glm.GLM.from_formula("income_more_50K ~  sex+age + age_square + educ + hours", data, family=pm.glm.families.Binomial())
    with logistic_model:
        trace = pm.sample(400, step=[pm.Metropolis()]) #nchains=1 works for gpu model
        
    ### END OF YOUR CODE ###

At this point the following error appeared:

GpuArrayException: cuMemcpyDtoHAsync(dst, src->ptr + srcoff, sz, ctx->mem_s): CUDA_ERROR_INVALID_VALUE: invalid argument

This problem was eventually resolved by a helpful expert on GitHub:
So njobs will spawn multiple chains to run in parallel. If the model uses the GPU there will be a conflict. We recently added nchains where you can still run multiple chains. So I think running pm.sample(niter, nchains=4, njobs=1) should give you what you want.
So I took this line:

trace = pm.sample(400, step=[pm.Metropolis()]) #nchains=1 works for gpu model

and added nchains, which fixed it; it was apparently a parallelism issue:

trace = pm.sample(400, step=[pm.Metropolis()], nchains=1, njobs=1)  # nchains=1 works for gpu model

Additionally,

plot_traces(trace, burnin=200)

raised an error about pm.df_summary; replacing pm.df_summary with pm.summary fixed it. This was also found by searching GitHub.
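Since the available name differs across PyMC3 releases, a small compatibility shim (a hypothetical helper, not part of the assignment) can paper over the difference:

```python
def trace_summary(pm_module, trace):
    # Prefer pm.summary (newer PyMC3 releases); fall back to the
    # older pm.df_summary if summary is not available.
    fn = getattr(pm_module, "summary", None) or getattr(pm_module, "df_summary")
    return fn(trace)
```

It would be called as trace_summary(pm, trace) in place of the failing pm.df_summary(trace).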

Copyright of this article belongs to the author; please do not reproduce it without permission. If this article violates any rules, you may contact the administrator to have it removed.

When reproducing, please cite the original address: http://specialneedsforspecialkids.com/yun/19745.html
