Abstract: Note that these are the native layer's own messages, with no relation to the Java layer. With that, the process is basically analyzed: it comes down to continuously processing messages and invoking their callbacks.
Following on from the previous article: the Looper creates a MessageQueue right at the start, and loop() pulls one Message out of it on every iteration. So let's take a look at this MessageQueue:
```java
MessageQueue(boolean quitAllowed) {
    mQuitAllowed = quitAllowed;
    mPtr = nativeInit();
}
```
nativeInit inevitably takes us down into the native layer for analysis. The corresponding file is /frameworks/base/core/jni/android_os_MessageQueue.cpp:
```cpp
static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return 0;
    }
    nativeMessageQueue->incStrong(env);
    return reinterpret_cast<jlong>(nativeMessageQueue);
}
```
This creates a new NativeMessageQueue and returns its pointer. The class is defined in the same file; let's see what its constructor does:
```cpp
NativeMessageQueue::NativeMessageQueue() :
        mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}
```
A new Looper object is created -- clearly not the Java-layer Looper -- bracketed by calls to getForThread and setForThread. What are those doing? My understanding is that they handle thread-local storage (TLS), ensuring each thread has exactly one Looper. I won't go into the details here; perhaps a later article can dissect them.
Now let's see what this Looper actually is. Its constructor:
```cpp
Looper::Looper(bool allowNonCallbacks) :
        mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
        mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),
        mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
    mWakeEventFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd: %s", strerror(errno));

    AutoMutex _l(mLock);
    rebuildEpollLocked();
}
```
Apart from setting the initial state, the interesting part is rebuildEpollLocked:
```cpp
void Looper::rebuildEpollLocked() {
    // Close old epoll instance if we have one.
    if (mEpollFd >= 0) {
#if DEBUG_CALLBACKS
        ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);
#endif
        close(mEpollFd);
    }

    // Allocate the new epoll instance and register the wake pipe.
    mEpollFd = epoll_create(EPOLL_SIZE_HINT);
    LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance: %s", strerror(errno));

    struct epoll_event eventItem;
    memset(&eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
    eventItem.events = EPOLLIN;
    eventItem.data.fd = mWakeEventFd;
    int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, &eventItem);
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance: %s",
            strerror(errno));

    for (size_t i = 0; i < mRequests.size(); i++) {
        const Request& request = mRequests.valueAt(i);
        struct epoll_event eventItem;
        request.initEventItem(&eventItem);

        int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, &eventItem);
        if (epollResult < 0) {
            ALOGE("Error adding epoll events for fd %d while rebuilding epoll set: %s",
                    request.fd, strerror(errno));
        }
    }
}
```
What do we see here? epoll. Isn't that Linux's epoll? Exactly -- it exists to multiplex read/write (and other) events across multiple fds (file descriptors), is mostly used in network programming, and is roughly analogous to I/O completion ports on Windows. An eventItem is then created to watch mWakeEventFd: the wake-up eventfd is placed into epoll's interest list, forming the basis of the wake mechanism. After that, a loop takes every stored request and registers it with epoll as well; on the first call this loop does nothing, because mRequests has size 0. What are these requests? Here is the definition:
```cpp
struct Request {
    int fd;
    int ident;
    int events;
    int seq;
    sp<LooperCallback> callback;
    void* data;

    void initEventItem(struct epoll_event* eventItem) const;
};
```
What do these fields actually hold? Let's set that aside for now and keep going.
Back in the Java-layer loop() function, next() is called on every iteration to fetch a Message. Let's look at MessageQueue's next() method:
```java
Message next() {
    // Return here if the message loop has already quit and been disposed.
    // This can happen if the application tries to restart a looper after quit
    // which is not supported.
    final long ptr = mPtr;
    if (ptr == 0) {
        return null;
    }

    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    int nextPollTimeoutMillis = 0;
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        nativePollOnce(ptr, nextPollTimeoutMillis);

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());
            }
            if (msg != null) {
                if (now < msg.when) {
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                    msg.markInUse();
                    return msg;
                }
            } else {
                // No more messages.
                nextPollTimeoutMillis = -1;
            }

            // Process the quit message now that all pending messages have been handled.
            if (mQuitting) {
                dispose();
                return null;
            }

            // If first time idle, then get the number of idlers to run.
            // Idle handles only run if the queue is empty or if the first message
            // in the queue (possibly a barrier) is due to be handled in the future.
            if (pendingIdleHandlerCount < 0
                    && (mMessages == null || now < mMessages.when)) {
                pendingIdleHandlerCount = mIdleHandlers.size();
            }
            if (pendingIdleHandlerCount <= 0) {
                // No idle handlers to run.  Loop and wait some more.
                mBlocked = true;
                continue;
            }

            if (mPendingIdleHandlers == null) {
                mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
            }
            mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
        }

        // Run the idle handlers.
        // We only ever reach this code block during the first iteration.
        for (int i = 0; i < pendingIdleHandlerCount; i++) {
            final IdleHandler idler = mPendingIdleHandlers[i];
            mPendingIdleHandlers[i] = null; // release the reference to the handler

            boolean keep = false;
            try {
                keep = idler.queueIdle();
            } catch (Throwable t) {
                Log.wtf(TAG, "IdleHandler threw exception", t);
            }

            if (!keep) {
                synchronized (this) {
                    mIdleHandlers.remove(idler);
                }
            }
        }

        // Reset the idle handler count to 0 so we do not run them again.
        pendingIdleHandlerCount = 0;

        // While calling an idle handler, a new message could have been delivered
        // so go back and look again for a pending message without waiting.
        nextPollTimeoutMillis = 0;
    }
}
```
First, mPtr is read; this ptr is the address of the native NativeMessageQueue. Then an infinite loop begins, whose first step is nativePollOnce(ptr, nextPollTimeoutMillis), which internally calls android_os_MessageQueue_nativePollOnce:
```cpp
static void android_os_MessageQueue_nativePollOnce(JNIEnv* env, jobject obj,
        jlong ptr, jint timeoutMillis) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->pollOnce(env, obj, timeoutMillis);
}
```
android_os_MessageQueue_nativePollOnce restores the address to a NativeMessageQueue pointer and calls its pollOnce method:
```cpp
void NativeMessageQueue::pollOnce(JNIEnv* env, jobject pollObj, int timeoutMillis) {
    mPollEnv = env;
    mPollObj = pollObj;
    mLooper->pollOnce(timeoutMillis);
    mPollObj = NULL;
    mPollEnv = NULL;
    if (mExceptionObj) {
        env->Throw(mExceptionObj);
        env->DeleteLocalRef(mExceptionObj);
        mExceptionObj = NULL;
    }
}
```
It stashes the pollObj (and the JNI env), calls the native Looper's pollOnce, and afterwards rethrows to Java any exception a callback may have recorded. So let's look at pollOnce:
```cpp
int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
                ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p", this, ident, fd, events, data);
#endif
                if (outFd != NULL) *outFd = fd;
                if (outEvents != NULL) *outEvents = events;
                if (outData != NULL) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}
```
An infinite loop. Inside it, a while loop first gives priority to pending responses (each response corresponds to a request) and returns if one carries a valid ident. When there is no response to deliver, it falls through to pollInner. This pollInner is the key; the code is long, so here is an excerpt:
```cpp
......
struct epoll_event eventItems[EPOLL_MAX_EVENTS];
int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);
......
for (int i = 0; i < eventCount; i++) {
    int fd = eventItems[i].data.fd;
    uint32_t epollEvents = eventItems[i].events;
    if (fd == mWakeEventFd) {
        if (epollEvents & EPOLLIN) {
            awoken();
        } else {
            ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
        }
    } else {
        ssize_t requestIndex = mRequests.indexOfKey(fd);
        if (requestIndex >= 0) {
            int events = 0;
            if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
            if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
            if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
            if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
            pushResponse(events, mRequests.valueAt(requestIndex));
        } else {
            ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                    "no longer registered.", epollEvents, fd);
        }
    }
}
......
```
epoll_wait blocks on mEpollFd until an event arrives (or the timeout expires). When it returns, the for loop walks each epoll_event: if the event is the wake-up signal (fd == mWakeEventFd), awoken() is called to handle the wakeup; otherwise the raw epoll event bits are translated into the Looper's own EVENT_* flags -- presumably so the upper layers see a stable set of values -- and pushResponse is called. Here, at last, is where a response gets created. Reading on:
```cpp
void Looper::pushResponse(int events, const Request& request) {
    Response response;
    response.events = events;
    response.request = request;
    mResponses.push(response);
}
```
See? It simply fills in a Response and pushes it onto mResponses. Back in pollInner, continuing downward:
```cpp
......
Done: ;

// Invoke pending message callbacks.
mNextMessageUptime = LLONG_MAX;
while (mMessageEnvelopes.size() != 0) {
    nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
    const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
    if (messageEnvelope.uptime <= now) {
        // Remove the envelope from the list.
        // We keep a strong reference to the handler until the call to handleMessage
        // finishes.  Then we drop it so that the handler can be deleted *before*
        // we reacquire our lock.
        { // obtain handler
            sp<MessageHandler> handler = messageEnvelope.handler;
            Message message = messageEnvelope.message;
            mMessageEnvelopes.removeAt(0);
            mSendingMessage = true;
            mLock.unlock();

#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
                    this, handler.get(), message.what);
#endif
            handler->handleMessage(message);
        } // release handler

        mLock.lock();
        mSendingMessage = false;
        result = POLL_CALLBACK;
    } else {
        // The last message left at the head of the queue determines the next wakeup time.
        mNextMessageUptime = messageEnvelope.uptime;
        break;
    }
}
......
```
It opens with a while loop that flushes previously queued messages. Note: these are the native layer's own messages, with no relation to the Java layer. There is a time comparison here -- each messageEnvelope's uptime is checked against now. So what is this uptime? My understanding is that it is a wake-up time, i.e. the scheduled delivery time of the message, since messages may be posted with a delay. If a message's due time is at or before the current time (uptime <= now), its handler's handleMessage is invoked; the first message whose due time still lies in the future sets mNextMessageUptime and stops the loop. That seems quite reasonable: the block simply drains the handler callbacks of native messages that have come due.
After that comes another for loop:
```cpp
......
for (size_t i = 0; i < mResponses.size(); i++) {
    Response& response = mResponses.editItemAt(i);
    if (response.request.ident == POLL_CALLBACK) {
        int fd = response.request.fd;
        int events = response.events;
        void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
        ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
                this, response.request.callback.get(), fd, events, data);
#endif
        // Invoke the callback.  Note that the file descriptor may be closed by
        // the callback (and potentially even reused) before the function returns so
        // we need to be a little careful when removing the file descriptor afterwards.
        int callbackResult = response.request.callback->handleEvent(fd, events, data);
        if (callbackResult == 0) {
            removeFd(fd, response.request.seq);
        }
        // Clear the callback reference in the response structure promptly because we
        // will not clear the response vector itself until the next poll.
        response.request.callback.clear();
        result = POLL_CALLBACK;
    }
}
......
```
This is where the responses are handled: each one goes through response.request.callback->handleEvent(fd, events, data), and if the callback returns 0, the fd is unregistered via removeFd.
Let's keep hunting for clues. The Looper constructor contained mWakeEventFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);. An eventfd is a kernel channel for signalling between threads or processes, similar in spirit to a pipe.
Well, that basically wraps up the analysis: messages are continuously processed via epoll, and their callbacks are invoked. But quite a few points remain unclear. 1. What exactly are the fds bound into this epoll set? Pipes? Articles online mostly say pipes, but I found no evidence of that here, so I can't confirm it. 2. The native Looper's sendMessage clearly sets the messageEnvelope's handler from the arguments passed in -- but who calls it, and how does it connect back to the Java layer? Plenty of open questions.