Multiprocessing can increase a script's concurrency, and Python supports this style of programming.
On Unix-like systems, Python's os module has a built-in fork function for creating child processes.
```python
import os

print "Process %s start ..." % (os.getpid())
pid = os.fork()
if pid == 0:
    print "This is child process and my pid is %d, my father process is %d" % (os.getpid(), os.getppid())
else:
    print "This is Father process, And Its child pid is %d" % (pid)
```
Execution result:
```
Process 4276 start ...
This is Father process, And Its child pid is 4277
This is child process and my pid is 4277, my father process is 4276
```
As the result shows, everything from pid = os.fork() onward runs twice: once in the parent process and once in the child. In the child, the result of fork is always 0, so the return value can be used to tell parent from child.
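The snippet above never waits for its child. As a small sketch of my own (not from the original, and assuming a Unix system), the parent can use the pid that fork returned to reap the child with os.waitpid and read its exit status:

```python
import os

pid = os.fork()
if pid == 0:
    # child branch: fork() returned 0
    os._exit(7)  # exit immediately with a distinctive status code
else:
    # parent branch: fork() returned the child's pid
    reaped, status = os.waitpid(pid, 0)  # block until the child exits
    code = os.WEXITSTATUS(status)        # recover the child's exit code
    print("reaped child %d with exit code %d" % (reaped, code))
```

Without the waitpid call, an exited child would linger as a zombie until the parent itself exits.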
So do variables in the different processes affect each other?
```python
import os

print "Process %s start ..." % (os.getpid())
pid = os.fork()
source = 10
if pid == 0:
    print "This is child process and my pid is %d, my father process is %d" % (os.getpid(), os.getppid())
    source = source - 6
    print "child process source value is " + str(source)
else:
    print "This is Father process, And Its child pid is %d" % (pid)
    source = source - 1
    print "father process source value is " + str(source)
print "source value is " + str(source)
```
The execution result is as follows:
```
Process 4662 start ...
This is Father process, And Its child pid is 4663
This is child process and my pid is 4663, my father process is 4662
father process source value is 9
child process source value is 4
source value is 9
source value is 4
```
Clearly, source starts at 10 and drops by 1 to 9 in the parent, while in the child it evidently also starts from 10 (10 - 6 = 4). In other words, the processes do not affect each other's variables at all.
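To confirm this without relying on interleaved prints, here is a sketch of my own (assuming a Unix system): the child reports its copy of the variable back through an os.pipe, so the parent can compare both values directly:

```python
import os

r, w = os.pipe()  # a byte pipe so the child can report its value back
source = 10
pid = os.fork()
if pid == 0:
    os.close(r)
    source = source - 6                # modifies the child's copy only
    os.write(w, str(source).encode())  # hand the child's value to the parent
    os._exit(0)
else:
    os.close(w)
    child_value = int(os.read(r, 16).decode())
    os.waitpid(pid, 0)
    print("parent sees %d, child saw %d" % (source, child_value))
```

The parent's source is untouched at 10 even though the child computed 4, which is exactly the copy-on-write isolation described above.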
Creating child processes with multiprocessing

fork is an interface that exists only on Linux/Unix; Windows does not have it. So how do we get multiple processes on Windows? That is what multiprocessing is for.
The Process object of the multiprocessing module represents a process; it can create a child process and run a specified function in it.
```python
from multiprocessing import Process
import os

def pro_do(name, func):
    print "This is child process %d from parent process %d, and name is %s which is used for %s" % (os.getpid(), os.getppid(), name, func)

if __name__ == "__main__":
    print "Parent process id %d" % (os.getpid())
    # The Process object takes the function the child will run (pro_do)
    # and its argument list args (must be a tuple whose elements match
    # pro_do's parameters one to one)
    pro = Process(target=pro_do, args=("test", "dev"))
    print "start child process"
    # start the child process
    pro.start()
    # whether to block: with join() we wait here, without it we do not
    pro.join()  # with this line it is a synchronous operation, otherwise asynchronous
    print "Process end"
```
Execution result:
```
Parent process id 4878
start child process
This is child process 4879 from parent process 4878, and name is test which is used for dev
Process end
```
Without pro.join() the run is non-blocking, so the final "Process end" may well be printed before pro_do has executed:
```
Parent process id 4903
start child process
Process end
This is child process 4904 from parent process 4903, and name is test which is used for dev
```
Creating child processes through multiprocessing's Process object also lets the parent pass arguments down to the child, like the pro_do arguments in the example above.
Pool (process pool)

```python
from multiprocessing import Pool
import os, time

def pro_do(process_num):
    print "child process id is %d" % (os.getpid())
    time.sleep(6 - process_num)
    print "this is process %d" % (process_num)

if __name__ == "__main__":
    print "Current process is %d" % (os.getpid())
    p = Pool()
    for i in range(5):
        p.apply_async(pro_do, (i,))  # add a new process
    p.close()  # no more new processes may be added
    p.join()
    print "pool process done"
```
Output:
```
Current process is 19138
child process id is 19139
child process id is 19140
this is process 1
child process id is 19140
this is process 0
child process id is 19139
this is process 2
child process id is 19140
this is process 3
this is process 4
pool process done
```
Of these, the lines
```
child process id is 19139
child process id is 19140
```
are printed immediately, while the later lines each appear only after their sleep finishes. Those two come out right away because a Pool by default starts as many child processes as there are CPU cores. I ran this in a virtual machine with only two cores assigned, so two child processes start at once and the remaining ones have to wait until an earlier process finishes.
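The default pool size can be checked directly; a minimal sketch:

```python
import multiprocessing

# Pool() with no argument sizes itself to the machine's CPU count
n = multiprocessing.cpu_count()
print("a default Pool here would start %d worker processes" % n)
```

On the two-core virtual machine described above, this would print 2, which matches the two immediately started children.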
However, you can also write Pool(5) in p = Pool() to specify the number of child processes to start, in which case the output becomes:
```
Current process is 19184
child process id is 19185
child process id is 19186
child process id is 19188
child process id is 19189
child process id is 19187
this is process 4
this is process 3
this is process 2
this is process 1
this is process 0
pool process done
```
and the lines
```
Current process is 19184
child process id is 19185
child process id is 19186
child process id is 19188
child process id is 19189
child process id is 19187
```
are all printed immediately.
Inter-process communication

A parent can specify the function a child runs and its arguments, which gives one-way communication from parent to child. But how do children communicate with each other, or a child with its parent?
Queue

Queue is one way.
```python
from multiprocessing import Process, Queue
import os, time

def write_queue(q):
    for name in ["Yi_Zhi_Yu", "Tony", "San"]:
        print "put name %s to queue" % (name)
        q.put(name)
        time.sleep(2)
    print "write data finished"

def read_queue(q):
    print "begin to read data"
    while True:
        name = q.get()
        print "get name %s from queue" % (name)

if __name__ == "__main__":
    q = Queue()
    pw = Process(target=write_queue, args=(q,))
    pr = Process(target=read_queue, args=(q,))
    pw.start()
    pr.start()
    pw.join()  # to read the data as it is written, both starts must be non-blocking, so pw.join() must not come right after pw.start() but only after pr has started
    pr.terminate()  # the reader loops forever, so stop it forcibly
```
Result:
```
put name Yi_Zhi_Yu to queue
begin to read data
get name Yi_Zhi_Yu from queue
put name Tony to queue
get name Tony from queue
put name San to queue
get name San from queue
write data finished
```
Pipe

There is also Pipe.
Its principle is described at http://ju.outofmemory.cn/entry/106041; it can only be used for communication between two processes.
```python
#!/usr/bin/env python
#encoding=utf-8
from multiprocessing import Process, Pipe
import os, time, sys

def send_pipe(p):
    names = ["Yi_Zhi_Yu", "Tony", "San"]
    for name in names:
        print "put name %s to Pipe" % (name)
        p.send(name)
        time.sleep(1)

def recv_pipe(p):
    print "Try to read data in pipe"
    while True:
        name = p.recv()
        print "get name %s from pipe" % (name)

if __name__ == "__main__":
    # pipe: one end for sending, one for reading
    ps_pipe, pr_pipe = Pipe()
    # processes
    ps = Process(target=send_pipe, args=(ps_pipe,))
    pr = Process(target=recv_pipe, args=(pr_pipe,))
    pr.start()
    ps.start()
    ps.join()
    pr.terminate()
```
Instantiating Pipe produces two ends, ps_pipe and pr_pipe (both read-write Connection objects); either can act as sender or receiver, but once one side takes the sender role the other naturally becomes the receiver (which is perhaps exactly why a Pipe can only connect two processes). The result:
```
Try to read data in pipe
put name Yi_Zhi_Yu to Pipe
get name Yi_Zhi_Yu from pipe
put name Tony to Pipe
get name Tony from pipe
put name San to Pipe
get name San from pipe
```
There is also the Array and Value form; I'll leave it for now and dig into it when I have time.
All of the above are Python study notes and exercises; if there are mistakes, corrections are welcome.