Continuing from the previous article: starting with getService, we now follow the Binder communication path.
First, the Java layer from last time, /frameworks/base/core/java/android/os/ServiceManagerNative.java:
public IBinder getService(String name) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();
    reply.recycle();
    data.recycle();
    return binder;
}
1. Create two Parcels, data and reply: one carries the outgoing arguments, the other receives the result;
2. Write the name of the requested service into data;
3. The key step: call mRemote.transact();
4. Read the result back out of reply;
5. Recycle both Parcels and return the IBinder that was read.
So what exactly is mRemote? Look at the code:
/**
 * Cast a Binder object into a service manager interface, generating
 * a proxy if needed.
 */
static public IServiceManager asInterface(IBinder obj)
{
    if (obj == null) {
        return null;
    }
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }

    return new ServiceManagerProxy(obj);
}
public ServiceManagerProxy(IBinder remote) {
    mRemote = remote;
}
/frameworks/base/core/java/android/os/ServiceManager.java:
private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}
/frameworks/base/core/java/com/android/internal/os/BinderInternal.java:
/**
 * Return the global "context object" of the system.  This is usually
 * an implementation of IServiceManager, which you can use to find
 * other services.
 */
public static final native IBinder getContextObject();
As you can see, it is an IBinder. The call obj.queryLocalInterface(descriptor) is an interface declared on IBinder, and its implementation lives in /frameworks/base/core/java/android/os/Binder.java:
/**
 * Use information supplied to attachInterface() to return the
 * associated IInterface if it matches the requested
 * descriptor.
 */
public IInterface queryLocalInterface(String descriptor) {
    if (mDescriptor.equals(descriptor)) {
        return mOwner;
    }
    return null;
}
A quick aside: descriptor is a String descriptor. It took some digging to find, but it is defined in /frameworks/base/core/java/android/os/IServiceManager.java:
static final String descriptor = "android.os.IServiceManager";
It identifies this interface as the ServiceManager. The IBinder passed into ServiceManagerProxy's constructor is remote, and it is whatever BinderInternal.getContextObject() returned. Its doc comment says it will "Return the global 'context object' of the system", i.e. it is the system's context object. Let's find the native implementation:
/frameworks/base/core/jni/android_util_Binder.cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}
To figure out what this remote really is, we have to keep going into the ProcessState class, a C++ class. Its header declares:
/frameworks/native/include/binder/ProcessState.h
static sp<ProcessState> self();
self() returns a singleton. Since it lives in the native layer of each process rather than in the driver, you can think of it as one ProcessState per process.
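To make the role of this per-process object concrete, here is a minimal usage sketch in the style of a native binder process. It is an illustration under assumptions, not code from any particular AOSP daemon: the calls (ProcessState::self, getContextObject, startThreadPool, IPCThreadState::joinThreadPool) are real libbinder APIs, but the main() around them is mine.

#include <binder/IBinder.h>
#include <binder/IPCThreadState.h>
#include <binder/ProcessState.h>

using namespace android;

int main()
{
    // self() lazily opens /dev/binder and mmaps the receive buffer, once per process.
    sp<ProcessState> proc = ProcessState::self();

    // Handle 0 always denotes the context manager (servicemanager),
    // so this hands back a proxy for it.
    sp<IBinder> sm = proc->getContextObject(NULL);

    // A process that also serves binder calls would additionally start its
    // binder thread pool and park the main thread in it:
    proc->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    return 0;
}

With that picture in mind, here is what getContextObject actually does: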
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
To be honest, at this point my head starts to spin. When reading OS source code, one thing constantly drags in a pile of other concepts and objects; sometimes you simply cannot skim over it and grab the gist, you have to dig in to really understand it. So let's press on.
lookupHandleLocked(handle) looks up the handle_entry for the given handle; that entry is where the binder is stored. The handle is a lot like its namesake on Windows: just an index used to locate the real entity.
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}
Note that mHandleToObject is a Vector<handle_entry>, so the handle really is nothing more than an index into this per-process vector.
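For reference, the entry type and the vector are declared in ProcessState.h roughly like this (trimmed to the members that matter here; the inline comments are mine):

struct handle_entry {
    IBinder* binder;              // the BpBinder created for this handle, or NULL
    RefBase::weakref_type* refs;  // its weak-reference bookkeeping
};

Vector<handle_entry> mHandleToObject;  // indexed directly by handle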
Back to getStrongProxyForHandle. If handle is 0 (and here it is; handle 0 denotes servicemanager), we are asking for that most fundamental service, and a special path is taken: IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0). As the name suggests, this pings the binder kernel device; anyone who knows networking knows what a ping means. If the status comes back DEAD_OBJECT, NULL is returned. All of this runs only when the looked-up binder is NULL; in that case the code goes on to create a BpBinder and store it in the handle_entry, which means it is kept in the vector we saw earlier so the next lookup can reuse it. That BpBinder is what gets returned, and putting it together with the earlier code: this BpBinder is mRemote.
Which raises the next question: what exactly is BpBinder?
Keep reading: /frameworks/native/include/binder/BpBinder.h
class BpBinder : public IBinder
This one line tells us it derives from IBinder. So let's look at IBinder itself:
/frameworks/native/include/binder/IBinder.h
IBinder is essentially an abstract class that defines the interface. Two declarations in it stand out:
virtual BBinder*  localBinder();
virtual BpBinder* remoteBinder();
Back in BpBinder we find:
virtual BpBinder* remoteBinder();
Here is a reasonable guess: IBinder standardises the interface for all binders, but binders come in two kinds, server and client, distinguished by localBinder and remoteBinder. Whichever of the two a concrete IBinder subclass overrides is, in effect, the kind it is; the overrides sketched below bear this out.
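The overrides themselves are tiny. Condensed from Binder.cpp and BpBinder.cpp of roughly this source era (the comments are mine), they look like this:

// Defaults on IBinder: an object is neither local nor remote until a subclass says so.
BBinder*  IBinder::localBinder()   { return NULL; }
BpBinder* IBinder::remoteBinder()  { return NULL; }

// BBinder is the server-side entity; BpBinder is the client-side proxy.
BBinder*  BBinder::localBinder()   { return this; }
BpBinder* BpBinder::remoteBinder() { return this; }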
Let's park that for now and rewind to the Java layer. Remember ServiceManager.getService; everything really starts there. The remote IBinder behind it is what BinderInternal.getContextObject() returned, and we now know that at the native level this is exactly the BpBinder we just met (javaObjectForIBinder wraps it for Java). When the cache has nothing, we land in ServiceManagerNative.asInterface, which news up a ServiceManagerProxy. Let's look once more at its getService, the method actually being called:
public IBinder getService(String name) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();
    reply.recycle();
    data.recycle();
    return binder;
}
mRemote.transact is the key call. Now that we know mRemote is, underneath, the BpBinder, let's follow it down.
/frameworks/native/libs/binder/BpBinder.cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
The crucial call is IPCThreadState::self()->transact. IPCThreadState looks like a per-thread object: self() news up an instance, and its constructor then does:
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
It stores the process object and records the id of the thread it was created on. So a fair guess is that each IPCThreadState is bound to its current thread, so that every thread can reach binder on its own. The self() method is also where the TLS (thread-local storage) handling lives.
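Condensed from IPCThreadState::self() in IPCThreadState.cpp of roughly this era (the inline comments are mine), that TLS handling looks like this:

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;          // this thread already has an instance
        return new IPCThreadState;  // the constructor runs pthread_setspecific(gTLS, this)
    }

    if (gShutdown) return NULL;

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {                // very first caller anywhere: create the TLS key
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

With that in place, BpBinder::transact lands in IPCThreadState::transact: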
/frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
The key line is just this one: writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL). It writes out the data.
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
The two lines that matter are the last ones: mOut.writeInt32(cmd); mOut.write(&tr, sizeof(tr)). In other words, first the command word and then the binder transaction data are appended to mOut. And then? Surprisingly, nothing else happens here; the data just sits in mOut for now. A concrete picture of what the buffer holds at this point is sketched below.
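The following is a hypothetical, self-contained mock of what mOut ends up holding for this particular call: one BC_TRANSACTION word followed by a binder_transaction_data. The field values mirror the writeTransactionData() listing above; the function name buildGetServiceCommand and its parameters are mine, not AOSP's, and the UAPI header path varies by kernel version.

#include <linux/android/binder.h>  // binder_transaction_data, BC_TRANSACTION, TF_ACCEPT_FDS
#include <cstdint>
#include <vector>

std::vector<uint8_t> buildGetServiceCommand(uintptr_t parcelData, size_t parcelSize,
                                            uintptr_t objOffsets, size_t objCount)
{
    binder_transaction_data tr = {};
    tr.target.handle    = 0;                 // handle 0: servicemanager
    tr.code             = 1;                 // GET_SERVICE_TRANSACTION (FIRST_CALL_TRANSACTION)
    tr.flags            = TF_ACCEPT_FDS;
    tr.data_size        = parcelSize;        // data.ipcDataSize()
    tr.data.ptr.buffer  = parcelData;        // data.ipcData()
    tr.offsets_size     = objCount * sizeof(binder_size_t);
    tr.data.ptr.offsets = objOffsets;        // data.ipcObjects()

    std::vector<uint8_t> out;
    auto append = [&out](const void* p, size_t n) {
        const uint8_t* b = static_cast<const uint8_t*>(p);
        out.insert(out.end(), b, b + n);
    };
    const uint32_t cmd = BC_TRANSACTION;
    append(&cmd, sizeof(cmd));               // mOut.writeInt32(cmd)
    append(&tr,  sizeof(tr));                // mOut.write(&tr, sizeof(tr))
    return out;  // this is the blob that talkWithDriver() will hand to the kernel
}

Back in transact(), the next step is waitForResponse, so let's look at that: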
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
It opens with an endless loop; inside there is a call to talkWithDriver, after which the code goes straight to reading from mIn. My read is that talkWithDriver is the crucial line: it hands the data we just queued to the kernel driver and waits for the processing to complete. Is that actually right? Look at the code:
/frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
            << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}
First a binder_write_read object, bwr, is created; this structure carries both the commands to write and the buffer to read replies into. Then, after a pile of checks and logging, comes the line that matters most: if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0). Seeing this I have to say: finally, damn it. It has been a slog, but here at last is the light: after n layers of calls and all this logic, this is the point where user space actually talks to the driver. The next step really is the kernel driver itself, but let's pause first for a concrete illustration and a short summary.
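Here is a stripped-down, self-contained illustration of that hand-off: open the binder device, fill a binder_write_read, and issue the BINDER_WRITE_READ ioctl. It is a sketch under assumptions, not how real code should do it: it deliberately bypasses ProcessState and IPCThreadState, sends only the trivial BC_ENTER_LOOPER command so the call returns immediately, and assumes the UAPI header location of recent kernels (older trees ship the same definitions elsewhere).

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <linux/android/binder.h>  // binder_write_read, BC_ENTER_LOOPER, BINDER_WRITE_READ

int main()
{
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    // ProcessState normally mmaps about 1MB here; the kernel copies incoming
    // transaction payloads into this area.
    void* area = mmap(nullptr, 1024 * 1024, PROT_READ, MAP_PRIVATE, fd, 0);
    if (area == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // Write buffer: a single command word, just as mOut holds BC_TRANSACTION
    // followed by a binder_transaction_data in the real flow.
    uint32_t cmds[] = { BC_ENTER_LOOPER };

    binder_write_read bwr = {};
    bwr.write_size   = sizeof(cmds);
    bwr.write_buffer = reinterpret_cast<uintptr_t>(cmds);
    bwr.read_size    = 0;  // a non-zero read_size would block waiting for incoming work
    bwr.read_buffer  = 0;

    // One syscall pushes our commands into the driver (and would pull replies
    // back if read_size were set): the same call talkWithDriver() makes.
    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0) perror("ioctl BINDER_WRITE_READ");

    printf("write_consumed=%llu\n", (unsigned long long)bwr.write_consumed);
    close(fd);
    return 0;
}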
To wrap up this installment: we explained what remote is and how it relates to binder, and introduced BpBinder (the client-side proxy that stands in for a server's binder), though without analysing it in depth; its relationship to the Bn side (BBinder) will be covered when the time is right. We then looked at how the per-thread IPCThreadState relates to binder: to let every thread talk to other processes over binder, this object encapsulates the actual communication with the binder kernel driver, and note that it is transact that really ships the data. With that, we have finally arrived at the doorway into the kernel driver.
Admittedly there is still a lot that has not been touched on. For now I want to stay on the main line: what we are analysing is servicemanager and how it uses binder to exchange data with other processes. There are many threads one could pull on; let's follow getService through to the end first, and once that path is clear the rest will be much easier to read.
We'll continue in the next installment.