
Android IPC (Binder) Mechanism: Source Code Analysis


An introduction to Binder communication:
Linux offers several IPC mechanisms: sockets, named pipes, message queues, signals, and shared memory. Java programs can likewise use sockets, named pipes, and so on, so Android applications could in principle rely on the standard Java IPC mechanisms. Looking through the Android source, however, these mechanisms are almost never used for communication between applications on the same device; Binder communication is used instead. Google chose this approach for its efficiency. Binder communication is implemented by the Linux binder driver and behaves much like thread migration: an IPC call between two processes looks as if one process enters the other, executes code there, and returns with the result. In user space, Binder maintains a pool of available threads for each process; the pool handles incoming IPC requests as well as the process's local messages. Binder communication is synchronous, not asynchronous.
Binder communication in Android follows a Service/Client model: every process that wants to take part in IBinder communication must expose an IBinder interface. A single process manages all system services, and Android does not allow unauthorized code to register system services; now that the source is open, though, the code can be modified to register low-level system services. User programs must likewise create a server, or Service, for inter-process communication. At the Java application layer, ActivityManagerService manages the creation, connection (connect), and disconnection (disconnect) of all services, and all Activities are also started and loaded through this service. ActivityManagerService itself runs inside the system server.
Before the Android virtual machine starts, the system launches the Service Manager process. Service Manager opens the binder driver, notifies the binder kernel driver that this process will act as the System Service Manager, and then enters a loop waiting for data from other processes. After creating a system service, a user obtains a remote ServiceManager interface via defaultServiceManager; through this interface, addService registers the system service with the Service Manager process. A client can then call getService to obtain the IBinder of the target service. This IBinder is a reference, inside the binder kernel, to the service's BBinder, so for a given service no two identical IBinder objects exist in the binder kernel. Every client process likewise opens the binder driver. From the user program's point of view, once this object is obtained, the methods of the service object can be invoked through the binder kernel. Although client and service live in different processes, this achieves something resembling thread migration: once the caller holds the IBinder returned for the service, invoking the service's methods feels like calling its own functions.
The figure below (not reproduced here) illustrates how a client establishes a connection with a Service.


We begin with the ServiceManager registration process to see, step by step, how the above is implemented.

Source analysis of the ServiceManager registration process:
Service Manager process (Service_manager.c):
service_manager manages the services of other processes. This program must be running before the Android runtime comes up; otherwise the Java VM's ActivityManagerService cannot register itself.
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024); // open the /dev/binder driver

    if (binder_become_context_manager(bs)) { // register as service manager with the binder kernel
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
The process first opens the binder driver; binder_become_context_manager then calls ioctl to tell the binder kernel driver that this is the service-management process, and binder_loop waits for data from other processes. BINDER_SERVICE_MANAGER is the handle of the service-management process, defined as:

/* the one magic object */
#define BINDER_SERVICE_MANAGER ((void*) 0)

If the handle a client supplies when requesting a service does not match this value, the Service Manager will not accept the client's request. How the client sets this handle is described below.

Registering the CameraService (Main_mediaservice.c)
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();              // audio service
    MediaPlayerService::instantiate();        // media player service
    CameraService::instantiate();             // camera service
    ProcessState::self()->startThreadPool();  // start the process's thread pool
    IPCThreadState::self()->joinThreadPool(); // join this thread to the pool
}

CameraService.cpp
void CameraService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.camera"), new CameraService());
}
This creates the CameraService object and registers it with the ServiceManager process.


How a client obtains the remote IServiceManager IBinder interface:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
The first time any process calls defaultServiceManager, gDefaultServiceManager is NULL, so the process obtains the ProcessState instance via ProcessState::self. Constructing ProcessState opens the binder driver.
ProcessState.cpp
sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;

    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver()) // open the /dev/binder driver
    ...........................
{
}

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);
    } else {
        return getContextObject(String16("default"), caller);
    }
}
Android supports the binder driver, so getStrongProxyForHandle is called. The handle here is 0, matching BINDER_SERVICE_MANAGER in service_manager.
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder; // b is NULL the first time this function is called
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
On the first call b is NULL, so a BpBinder object is created for it:
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}

void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}
getContextObject has thus returned a BpBinder object.
interface_cast<IServiceManager>(
        ProcessState::self()->getContextObject(NULL));

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
Fully expanding the macro ultimately yields:
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
This returns a BpServiceManager object, where obj is the BpBinder object we created earlier.

How a client obtains a Service's remote IBinder interface
Taking CameraService as the example (camera.cpp):
const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}
From the preceding analysis, sm is a BpServiceManager object:
virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++){
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        LOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}

virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
Here remote() is the BpBinder object obtained earlier, so checkService ends up in BpBinder::transact:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
mHandle is 0. BpBinder continues into IPCThreadState::transact, which sends the data to the process associated with mHandle, i.e. the Service Manager process.
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ............................................................
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    ..............................

    return err;
}

writeTransactionData builds the data to send:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle; // this handle is passed through to service_manager
    tr.code = code;
    tr.flags = binderFlags;
    ..............
}
waitForResponse then calls talkWithDriver to read from and write to the binder kernel. When the binder kernel receives the data, a thread from service_manager's thread pool is started; service_manager looks up the requested service (here CameraService) and calls binder_send_reply to write the result back into the binder kernel, which returns it to the caller.
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;

        ..............................................
}

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ............................................
#if defined(HAVE_ANDROID_OS)
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;
#else
    err = INVALID_OPERATION;
#endif
    ...................................................
}
The BINDER_WRITE_READ command of the ioctl system call above is what performs the reads and writes against the binder kernel.

How client A communicates with the binder kernel:
(kernel/drivers/android/binder.c)
static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    if (binder_debug_mask & BINDER_DEBUG_OPEN_CLOSE)
        printk(KERN_INFO "binder_open: %d:%d\n", current->group_leader->pid, current->pid);

    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current; // save the task struct of the process opening /dev/binder
    INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);
    mutex_lock(&binder_lock);
    binder_stats.obj_created[BINDER_STAT_PROC]++;
    hlist_add_head(&proc->proc_node, &binder_procs);
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;
    mutex_unlock(&binder_lock);

    if (binder_proc_dir_entry_proc) {
        char strbuf[11];
        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        // create a proc entry for the current process
        create_proc_read_entry(strbuf, S_IRUGO, binder_proc_dir_entry_proc, binder_read_proc_proc, proc);
    }
    return 0;
}
From this we can see that the information of every process that opens /dev/binder is kept in the binder kernel, so when a process calls ioctl to talk to the kernel binder, the kernel can look up the calling process. BINDER_WRITE_READ is the central command a process issues when using ioctl to communicate with the binder kernel; as seen above, the command talkWithDriver sends from IPCThreadState::transact is exactly BINDER_WRITE_READ.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/
    // the process calling ioctl may block here; the caller stays suspended until the service returns
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;

    mutex_lock(&binder_lock);
    thread = binder_get_thread(proc); // get the caller's thread-pool structure from its process info
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: { // the ioctl cmd set by talkWithDriver in IPCThreadState
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
            printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
                proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) { // data is written back to the calling process
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait); // wake the suspended caller
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        .........................................
}

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr)) // copy the cmd from user space into the kernel
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
        case BC_INCREFS:
            .........................................
        case BC_TRANSACTION: // the cmd IPCThreadState set via writeTransactionData
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ........................................
}

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    ..............................................
    if (reply) { // cmd != BC_REPLY, so this branch is not taken here
        ......................................
    } else {
        if (tr->target.handle) { // for service_manager this is false (handle == 0)
            .......................................
        } else {
            // here we obtain the process info that service_manager registered
            // with the binder kernel via BINDER_SET_CONTEXT_MGR
            target_node = binder_context_mgr_node;
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        e->to_node = target_node->debug_id;
        target_proc = target_node->proc; // the target process: service_manager
        if (target_proc == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        ....................
    }
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait; // the wait queue service_manager is blocked on
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    ............................................
    case BINDER_TYPE_BINDER:
    case BINDER_TYPE_WEAK_BINDER: {
        ..........................
        // create, in the binder kernel, a reference to the service that was found
        ref = binder_get_ref_for_node(target_proc, node);
        ..........................
    } break;

    ............................................
    if (target_wait)
        wake_up_interruptible(target_wait); // wake the suspended thread to handle the caller's request;
    ............................................ // the handling itself is in svcmgr_handler
}
At this point getService has reached the service manager process. When the request arrives, the service manager is woken if it was suspended. Now look at binder_loop in service_manager:
Service_manager.c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    .................................
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // blocks if there are no pending requests
        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func); // here func is svcmgr_handler
        ...................................
    }
}
When a request to process data is received, it is parsed here and the callback registered earlier is invoked to look up the service the caller requested:
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    ....................................
    switch(cmd) {
    ......
    case BR_TRANSACTION: {
        struct binder_txn *txn = (void *) ptr;
        if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
            LOGE("parse: txn too small!\n");
            return -1;
        }
        binder_dump_txn(txn);
        if (func) {
            unsigned rdata[256/4];
            struct binder_io msg;
            struct binder_io reply;
            int res;

            bio_init(&reply, rdata, sizeof(rdata), 4);
            bio_init_from_txn(&msg, txn);
            res = func(bs, txn, &msg, &reply);             // look up the service the caller requested
            binder_send_reply(bs, &reply, txn->data, res); // return the service found to the caller
        }
        ptr += sizeof(*txn) / sizeof(uint32_t);
        break;
    ........
    }

}
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY; // substitute BC_REPLY for the cmd in binder_thread_write above
    data.txn.target = 0;       // and you can trace how service manager returns the found service to the caller
    ..........................
    binder_write(bs, &data, sizeof(data)); // call ioctl to talk to the binder kernel
}
Once this returns, the caller is woken up: the client process now holds the requested service's IBinder object as referenced in the binder kernel, a remote BBinder object.

Client–Service communication after the connection is established:
virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient)
{
    Parcel data, reply;
    data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
    data.writeStrongBinder(cameraClient->asBinder());
    remote()->transact(BnCameraService::CONNECT, data, &reply);
    return interface_cast<ICamera>(reply.readStrongBinder());
}
As analyzed above, remote() here is the object we obtained for CameraService, and the caller's request crosses into CameraService. Every Android process creates a thread pool to handle requests from other processes. When there is no data the threads are suspended, and the binder kernel wakes one of them:
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        int32_t cmd;
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail(); // the binder kernel has passed data to the service
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            IF_LOG_COMMANDS() {
                alog << "Processing top-level Command: "
                    << getReturnString(cmd) << endl;
            }
            result = executeCommand(cmd); // the service executes the command the binder kernel requested
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    .......................
}

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    .........................
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            LOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;

            //LOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            .........................
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie); // the service's Binder object, here CameraService
                const status_t error = b->transact(tr.code, buffer, &reply, 0); // invokes CameraService's onTransact
                if (error < NO_ERROR) reply.setError(error);
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);
                if (error < NO_ERROR) reply.setError(error);
            }

            //LOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0); // returns the data to the caller through the binder kernel;
            } else {                 // this path can be traced using the earlier discussion
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
            ..................................
        }
        break;
    }
    ..................................
    if (result != NO_ERROR) {
        mLastError = result;
    }
    return result;
}
This invokes transact on the service-side BBinder object, here CameraService:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    .....................
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }
    ...................
    return err;
}

which in turn calls CameraService's onTransact; CameraService derives from BBinder (via BnCameraService).
status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case CONNECT: {
            CHECK_INTERFACE(ICameraService, data, reply);
            sp<ICameraClient> cameraClient = interface_cast<ICameraClient>(data.readStrongBinder());
            sp<ICamera> camera = connect(cameraClient); // the actual handler
            reply->writeStrongBinder(camera->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
This completes one round of communication from client to service.


Designing a multi-client Service
A Service can be connected to by different clients. "Multi-client" here means the Service creates a distinct IClient interface for each client. Anyone who has done AIDL programming knows the Service exposes an IService interface to its clients; defaultServiceManager->getService yields a BpBinder interface for the corresponding service, and calling transact through it is enough to talk to the service. That already gives a simple working service/client pair, but it has a drawback: the single IService interface is open to all clients alike. If clients must be distinguished, each would have to hand the Service some identifying attribute when connecting. That works, but it is cumbersome; a camera, for instance, may have more than one sensor, each with different capabilities. Instead we can follow the multi-client design familiar from Qt: the Service creates an IClient interface for every client, and the IService interface is used only to establish the connection. For a multi-camera device, the Service can then open a different device for each client.
import android.os.IBinder;
import android.os.RemoteException;

public class TestServerServer extends android.app.testServer.ITestServer.Stub
{
    static final int MAX_CLIENTS = 16;
    int mClientCount = 0;
    testServerClient mClient[] = new testServerClient[MAX_CLIENTS];

    @Override
    public android.app.testServer.ITestClient.Stub connect(ITestClient client) throws RemoteException
    {
        // create a distinct IClient for each connecting client
        testServerClient tClient = new testServerClient(this, client);
        mClient[mClientCount] = tClient;
        mClientCount++;
        System.out.printf("*** Server connect, client is %s", client.asBinder());
        return tClient;
    }

    @Override
    public void receivedData(int count) throws RemoteException
    {
    }

    public static class testServerClient extends android.app.testServer.ITestClient.Stub
    {
        public android.app.testServer.ITestClient mClient;
        public TestServerServer mServer;

        public testServerClient(TestServerServer tServer, android.app.testServer.ITestClient tClient)
        {
            mServer = tServer;
            mClient = tClient;
        }

        public IBinder asBinder()
        {
            return this;
        }
    }
}
This is only a demo Service; to register it as a real system service, the Android source must additionally be modified to bypass the permission check.

Summary:
Suppose client process A wants to establish IPC with service process B. From the analysis above, the flow is:
1. Service B opens the binder driver, registering its process information with the kernel.
2. Service B calls addService to register itself with the service_manager process.
3. Service B's thread pool suspends, waiting for client requests.
4. Client A calls open_driver to open the binder driver, registering its own process information with the kernel.
5. Client A calls defaultServiceManager's getService to obtain Service B's IBinder object as known to the kernel.
6. Client A communicates with the binder kernel via transact; the binder kernel suspends Client A.
7. The binder kernel wakes a thread in Service B's thread pool, which handles the client's request in joinThreadPool.
8. The binder kernel suspends Service B and writes the data Service B returned to Client A.
9. The binder kernel resumes Client A.
The binder kernel driver acts as an intermediary between Client A and Service B. Any IBinder object passed through transact gets a unique, associated binder object created in the binder kernel, which is used to distinguish different clients.


