Android 6.0 Looper Source Code Analysis (1)


    1      Introduction to Looper

    On top of the standard Java threading model, Android provides a message-driven mechanism for communication between threads. Its concrete implementation is the Looper.

    The Looper implementation involves four concepts: Message, MessageQueue, Handler, and Looper. A Message represents a task to execute. Once created, a message can be sent from any thread through a Handler, which appends it to the MessageQueue; the Looper thread eventually pulls the messages off one by one and processes each via handler.handleMessage().

     

     

    2     Initializing Looper

    We can infer how the Looper mechanism works by examining the structure of Looper.java. Its fields are as follows:

    // You can loosely picture a ThreadLocal as a Map keyed by thread ID

    static final ThreadLocal<Looper> sThreadLocal = new ThreadLocal<Looper>();

    // note that static below means sMainLooper belongs to Looper.class

        private static Looper sMainLooper; // note that static data is not shared across processes

        // each per-thread Looper instance owns a MessageQueue

        final MessageQueue mQueue;

        final Thread mThread;

    The first field, sThreadLocal, is of type ThreadLocal<Looper>. It has two main methods, set() and get(); the generic parameter fixes the per-thread value type to Looper. Roughly speaking, sThreadLocal.set() assigns the current thread's copy of the Looper, and sThreadLocal.get() returns the Looper copy belonging to the current thread. (ThreadLocal's implementation deserves its own study; a first guess is an internal hash map that distinguishes threads by their IDs.) ThreadLocal is what gives us a per-thread singleton.
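    To make the per-thread semantics concrete, here is a minimal standalone sketch (plain Java, not framework code):

    public class ThreadLocalDemo {
        static final ThreadLocal<String> sLocal = new ThreadLocal<String>();

        public static void main(String[] args) {
            sLocal.set("main");
            new Thread(new Runnable() {
                @Override public void run() {
                    // prints "null": the value set on the main thread is invisible here
                    System.out.println(sLocal.get());
                }
            }).start();
            System.out.println(sLocal.get()); // prints "main"
        }
    }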

    The second field is the static sMainLooper, which holds the main thread's Looper (i.e. the UI thread's). Making it static lets any thread retrieve it via Looper.getMainLooper() and thereby get work onto the UI thread.
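    For example, a background thread commonly does the following to run code on the UI thread (the Runnable body is illustrative):

    // From any thread: post work onto the main (UI) thread.
    new Handler(Looper.getMainLooper()).post(new Runnable() {
        @Override public void run() {
            // runs on the UI thread, e.g. update a View here
        }
    });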

    The third field is the Java-layer Message queue; Handler.sendMessage() appends Messages to it for Looper.loop() to consume. As the analysis below will show, creating the Java-layer MessageQueue triggers creation of a NativeMessageQueue in the native layer, which in turn triggers creation of a native Looper.

    The fourth field is a reference to the thread the Looper runs on.

    Turning an ordinary thread into a Looper thread is easy:

        class LooperThread extends Thread {

            public Handler mHandler;

     

            public void run() {

                Looper.prepare();

     

                mHandler = new Handler() { // the constructor binds this Handler to the current Looper thread

                    public void handleMessage(Message msg) {

                        // handle messages sent to this thread here

                    }

                };

     

                Looper.loop();

            }

        }
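    Usage is then just a matter of starting the thread and sending messages to mHandler from anywhere. Note the sketch above has an initialization race: mHandler is assigned on the new thread, so it may still be null immediately after start() (the framework's HandlerThread exists to solve exactly this). A rough usage sketch:

        LooperThread thread = new LooperThread();
        thread.start();
        // ... once mHandler is known to be initialized:
        thread.mHandler.sendEmptyMessage(1); // handled in handleMessage() on the Looper thread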

    Let's start with Looper's setup work, prepare().

        public static void prepare() {

            prepare(true);

        }

     

    private static void prepare(boolean quitAllowed) { // enforces the per-thread singleton

            if (sThreadLocal.get() != null) {

                throw new RuntimeException("Only one Looper may be created per thread");

            }

            sThreadLocal.set(new Looper(quitAllowed)); // creates this thread's Looper singleton

        }
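    For comparison, the main thread is prepared via prepareMainLooper(), which forbids quitting and publishes sMainLooper. In the 6.0-era source it looks roughly like this:

        public static void prepareMainLooper() {
            prepare(false); // the main Looper may not quit
            synchronized (Looper.class) {
                if (sMainLooper != null) {
                    throw new IllegalStateException("The main Looper has already been prepared.");
                }
                sMainLooper = myLooper();
            }
        }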

    Creating the per-thread Looper singleton in turn creates a MessageQueue. MessageQueue holds a field of type Message named mMessages, from which we can tell that the Java-layer MessageQueue is implemented as a linked list. Its constructor:

    MessageQueue(boolean quitAllowed) {

            mQuitAllowed = quitAllowed;

    // calls into the native layer via JNI, which creates the NativeMessageQueue

            mPtr = nativeInit();   

    }

    So during construction, MessageQueue calls a native C++ function through JNI, performing the necessary native-side initialization for the Looper. The Java MessageQueue also receives mPtr, a pointer into the native layer, through which it can conveniently invoke the underlying methods. nativeInit() corresponds to the following function in android_os_MessageQueue.cpp.

    static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {

        // the native layer in turn creates a NativeMessageQueue

        NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();

        if (!nativeMessageQueue) {

            jniThrowRuntimeException(env, "Unable to allocate native queue");

            return 0;

        }

     

        nativeMessageQueue->incStrong(env);

        // the value returned here becomes mPtr on the Java side; mPtr is thus the bridge between
        // the Java MessageQueue and the NativeMessageQueue, and this is simpler than the old implementation

        return reinterpret_cast<jlong>(nativeMessageQueue);

    }

     

    At this point the Java and native MessageQueues are linked by mPtr. NativeMessageQueue is merely the Java MessageQueue's presence in the native layer; it does not itself implement a queue data structure, but inherits the mLooper field from its parent class MessageQueue. As on the Java side, this Looper is a per-thread singleton. The NativeMessageQueue constructor:

    NativeMessageQueue::NativeMessageQueue() :

            mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {

        mLooper = Looper::getForThread();

        if (mLooper == NULL) {

            mLooper = new Looper(false); // the native layer creates its own Looper object

            Looper::setForThread(mLooper); // likewise a per-thread singleton

        }

    }

    Notice the inversion: on the Java side, creating the Looper creates the MessageQueue, while on the native side the NativeMessageQueue creates the Looper. The native Looper is also constructed quite differently from the Java one: it uses Linux epoll to monitor the input fds plus a wake fd. Functionally, that wake fd is the real key to processing both Java Messages and native Messages. (Note that from Android 5.0 onward the native Looper lives under system/core.)

    Looper::Looper(bool allowNonCallbacks) :

            mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),

            mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),

            mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {

        // eventfd is a newer Linux facility for cross-thread signaling; it replaces the pipe used in older versions
        mWakeEventFd = eventfd(0, EFD_NONBLOCK);

        LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd.  errno=%d", errno);

     

        AutoMutex _l(mLock);

        rebuildEpollLocked();

    }

    Stepping into rebuildEpollLocked:

    void Looper::rebuildEpollLocked() {

        // Close old epoll instance if we have one.

        if (mEpollFd >= 0) {

    #if DEBUG_CALLBACKS

            ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);

    #endif

            close(mEpollFd);

        }

     

       // Allocate the new epoll instance and register the wake pipe.

    // create the epoll instance; Linux epoll is similar in spirit to select
    mEpollFd = epoll_create(EPOLL_SIZE_HINT);

        LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance.  errno=%d", errno);

     

        struct epoll_event eventItem;

        memset(& eventItem, 0, sizeof(epoll_event)); // zero it out

        eventItem.events = EPOLLIN; // watch for EPOLLIN, i.e. readability

        eventItem.data.fd = mWakeEventFd; // set the fd

        // add mWakeEventFd to the epoll watch set; this merely installs a wake-up mechanism

        int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, & eventItem);

        LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance.  errno=%d",

                errno);

        // these are mostly input fds (keyboard, sensor input, etc.); the system manages them, and they are rarely added by hand

        for (size_t i = 0; i < mRequests.size(); i++) {

            const Request& request = mRequests.valueAt(i);

            struct epoll_event eventItem;

            request.initEventItem(&eventItem);

     

            int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, & eventItem);

            if (epollResult < 0) {

                ALOGE("Error adding epoll events for fd %d while rebuilding epoll set, errno=%d",

                        request.fd, errno);

            }

        }

    }

    The crucial point to understand is that of all these fds, only mWakeEventFd is responsible for breaking the block so the program can continue and process native and Java Messages; the other fds have nothing to do with Message processing at all (knowing this matters a great deal). The relationship between the Java and native layers at this point is shown in the figure below:

     

    3     Creating and Sending Messages

    Messages are usually created and sent through a Handler from a thread other than the Looper thread. Below is Handler's fully-parameterized constructor.

        public Handler(Callback callback, boolean async) {

            if (FIND_POTENTIAL_LEAKS) { // debugging switch, false by default

                final Class<? extends Handler> klass = getClass();

                if ((klass.isAnonymousClass() || klass.isMemberClass() || klass.isLocalClass()) &&

                        (klass.getModifiers() & Modifier.STATIC) == 0) {

                    Log.w(TAG, "The following Handler class should be static or leaks might occur: " +

                        klass.getCanonicalName());

                }

            }

            // bind the Handler to the current thread's Looper instance

            mLooper = Looper.myLooper();

            if (mLooper == null) {

                throw new RuntimeException(

                    "Can't create handler inside thread that has not called Looper.prepare()");

            }

            mQueue = mLooper.mQueue; // sendMessage's target queue is the Looper's MessageQueue

            mCallback = callback; // the Handler-level callback, if any

            mAsynchronous = async; // whether messages are sent as asynchronous

        }

    During construction, every Handler quietly takes a reference to the Looper of the thread it is created on via "mLooper = Looper.myLooper();". We already know each Looper owns a MessageQueue, so Handler, Looper, and MessageQueue are now tied together.

    Before sending a message through a Handler you need a Message. The usual way to get one is the static Message.obtain(). It has many overloads; the zero-argument version is shown below (the overloads merely fill in fields that the zero-argument version leaves unset).

    public static Message obtain() {

            synchronized (sPoolSync) {

                if (sPool != null) {

                    Message m = sPool;

                    sPool = m.next;

                    m.next = null;

                    m.flags = 0; // clear in-use flag

                    sPoolSize--;

                    return m;

                }

            }

            return new Message();

        }

    You can then call sendMessage(msg) on the Handler (whose reference is held by the non-Looper thread). As noted above, the Handler holds a reference to a Looper, and the Looper owns a MessageQueue; that is what makes cross-thread message passing work. Besides sendMessage(msg) there are other similar send methods; they all boil down to appending a Message to the MessageQueue, so we won't detail them here.
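    Putting the pieces together, a typical send looks like this (obtain(Handler, int) stamps msg.target so sendToTarget() can route it; handler, WHAT_WORK_DONE, and result are illustrative names):

    // From a worker thread: reuse a pooled Message and send it to the Looper thread.
    Message msg = Message.obtain(handler, WHAT_WORK_DONE);
    msg.obj = result;
    msg.sendToTarget(); // equivalent to handler.sendMessage(msg)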

    It is worth emphasizing that when the queue is empty, Looper.loop() does not block on the MessageQueue itself but on epoll_wait() in the native layer. That raises an important question: if a Java Message arrives while the thread is blocked, how does the program wake up to handle it immediately? Recall that epoll watches the input fds and mWakeEventFd; the answer lies in mWakeEventFd.

    Note first that sendMessage() and every other send variant ultimately funnels into the following method.

    boolean enqueueMessage(Message msg, long when) {

            if (msg.target == null) {

                throw new IllegalArgumentException("Message must have a target.");

            }

            if (msg.isInUse()) {

                throw new IllegalStateException(msg + " This message is already in use.");

            }

     

            synchronized (this) {

                if (mQuitting) {

                    IllegalStateException e = new IllegalStateException(

                            msg.target + " sending message to a Handler on a dead thread");

                    Log.w(TAG, e.getMessage(), e);

                    msg.recycle();

                    return false;

                }

     

                msg.markInUse();

                msg.when = when;

                Message p = mMessages;

                boolean needWake;

                if (p == null || when == 0 || when < p.when) {

                    // New head, wake up the event queue if blocked.

                    msg.next = p;

                    mMessages = msg;

                    needWake = mBlocked;

                } else {

                    // Inserted within the middle of the queue.  Usually we don't have to wake

                    // up the event queue unless there is a barrier at the head of the queue

                    // and the message is the earliest asynchronous message in the queue.

                    needWake = mBlocked && p.target == null && msg.isAsynchronous();

                    Message prev;

                    for (;;) {

                        prev = p;

                        p = p.next;

                        if (p == null || when < p.when) {

                            break;

                        }

                        if (needWake && p.isAsynchronous()) {

                            needWake = false;

                        }

                    }

                    msg.next = p; // invariant: p == prev.next

                    prev.next = msg;

                }

     

                // We can assume mPtr != 0 because mQuitting is false.

                if (needWake) {

                    nativeWake(mPtr);

                }

            }

            return true;

        }

    This is the function that actually enqueues the Message. After every insertion it calls nativeWake(mPtr) when needWake is set. We already know mPtr points at the native NativeMessageQueue, and nativeWake(mPtr) ultimately calls that class's wake() method, which writes a value into mWakeEventFd. What is written doesn't matter; what matters is that the fd now has data pending, in other words mWakeEventFd has become readable! epoll_wait therefore returns. The native message queue is walked first (usually finding nothing), then the active fds; the only active fd here is mWakeEventFd, and reading back the pending data clears its readable state. With that, mWakeEventFd has done its job and the thread is unblocked. Control returns to the Java-layer MessageQueue.next(), which returns the msg from the MessageQueue for subsequent processing.
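    For reference, the delayed-send entry points merely convert a delay into an absolute uptime before reaching enqueueMessage(); in the 6.0-era Handler they look roughly like this:

        public final boolean sendMessageDelayed(Message msg, long delayMillis) {
            if (delayMillis < 0) {
                delayMillis = 0;
            }
            return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
        }

        public boolean sendMessageAtTime(Message msg, long uptimeMillis) {
            MessageQueue queue = mQueue;
            // ... null check elided ...
            return enqueueMessage(queue, msg, uptimeMillis);
        }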

    Now let's look at Looper.loop().

        public static void loop() {

            final Looper me = myLooper();

            if (me == null) {

                throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");

            }

            final MessageQueue queue = me.mQueue;

     

            // Make sure the identity of this thread is that of the local process,

            // and keep track of what that identity token actually is.

            Binder.clearCallingIdentity();

            final long ident = Binder.clearCallingIdentity();

     

        for (;;) { // loop forever until quit()

            Message msg = queue.next(); // fetch the next Java Message

                if (msg == null) {

                    // No message indicates that the message queue is quitting.

                    return;

                }

     

                // This must be in a local variable, in case a UI event sets the logger

                Printer logging = me.mLogging;

                if (logging != null) {

                    logging.println(">>>>> Dispatching to " + msg.target + " " +

                            msg.callback + ": " + msg.what);

                }

     

            msg.target.dispatchMessage(msg); // Java-layer Message handling happens here

     

                if (logging != null) {

                    logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);

                }

     

                // Make sure that during the course of dispatching the

                // identity of the thread wasn't corrupted.

                final long newIdent = Binder.clearCallingIdentity();

                if (ident != newIdent) {

                    Log.wtf(TAG, "Thread identity changed from 0x"

                            + Long.toHexString(ident) + " to 0x"

                            + Long.toHexString(newIdent) + " while dispatching to "

                            + msg.target.getClass().getName() + " "

                            + msg.callback + " what=" + msg.what);

                }

     

                msg.recycleUnchecked();

            }

        }
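    dispatchMessage() decides who handles the message: a per-message Runnable callback first, then the Handler-level Callback, and finally the subclass's handleMessage(). In the framework it is essentially:

        public void dispatchMessage(Message msg) {
            if (msg.callback != null) {       // a Runnable posted via post()
                handleCallback(msg);          // simply runs msg.callback.run()
            } else {
                if (mCallback != null) {      // the Handler-level Callback, if given
                    if (mCallback.handleMessage(msg)) {
                        return;               // consumed
                    }
                }
                handleMessage(msg);           // the subclass override
            }
        }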

    From there we go straight into MessageQueue.next():

    Message next() {

            // Return here if the message loop has already quit and been disposed.

            // This can happen if the application tries to restart a looper after quit

            // which is not supported.

            final long ptr = mPtr;

            if (ptr == 0) {

                return null;

            }

     

            int pendingIdleHandlerCount = -1; // -1 only during first iteration

        int nextPollTimeoutMillis = 0; // the timeout handed down to the native epoll_wait

            for (;;) {

            if (nextPollTimeoutMillis != 0) { // flushes pending Binder commands before blocking; its exact role is left for further study

                    Binder.flushPendingCommands();

                }

     

            nativePollOnce(ptr, nextPollTimeoutMillis); // the thread normally blocks inside this call

     

                synchronized (this) {

                    // Try to retrieve the next message.  Return if found.

                    final long now = SystemClock.uptimeMillis();

                    Message prevMsg = null;

                    Message msg = mMessages;

                    if (msg != null && msg.target == null) {

                    // Stalled by a barrier.  Find the next asynchronous message in the queue.

                        do {

                            prevMsg = msg;

                            msg = msg.next;

                        } while (msg != null && !msg.isAsynchronous());

                    }

                    if (msg != null) {

                        if (now < msg.when) {

                            // Next message is not ready.  Set a timeout to wake up when it is ready.

                            nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);

                        } else {

                            // Got a message.

                            mBlocked = false;

                            if (prevMsg != null) {

                                prevMsg.next = msg.next;

                            } else {

                                mMessages = msg.next;

                            }

                            msg.next = null;

                            if (DEBUG) Log.v(TAG, "Returning message: " + msg);

                            msg.markInUse();

                            return msg;

                        }

                    } else {

                        // No more messages.

                        nextPollTimeoutMillis = -1;

                    }

     

                    // Process the quit message now that all pending messages have been handled.

                    if (mQuitting) {

                        dispose();

                        return null;

                    }

     

                    // If first time idle, then get the number of idlers to run.

                    // Idle handles only run if the queue is empty or if the first message

                    // in the queue (possibly a barrier) is due to be handled in the future.

                    if (pendingIdleHandlerCount < 0

                            && (mMessages == null || now < mMessages.when)) {

                        pendingIdleHandlerCount = mIdleHandlers.size();

                    }

                    if (pendingIdleHandlerCount <= 0) {

                        // No idle handlers to run.  Loop and wait some more.

                        mBlocked = true;

                        continue;

                    }

     

                    if (mPendingIdleHandlers == null) {

                        mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];

                    }

                    mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);

                }

     

                // Run the idle handlers.

                // We only ever reach this code block during the first iteration.

                for (int i = 0; i < pendingIdleHandlerCount; i++) {

                    final IdleHandler idler = mPendingIdleHandlers[i];

                    mPendingIdleHandlers[i] = null; // release the reference to the handler

     

                    boolean keep = false;

                    try {

                        keep = idler.queueIdle();

                    } catch (Throwable t) {

                        Log.wtf(TAG, "IdleHandler threw exception", t);

                    }

     

                    if (!keep) {

                        synchronized (this) {

                            mIdleHandlers.remove(idler);

                        }

                    }

                }

     

                // Reset the idle handler count to 0 so we do not run them again.

                pendingIdleHandlerCount = 0;

     

                // While calling an idle handler, a new message could have been delivered

                // so go back and look again for a pending message without waiting.

                nextPollTimeoutMillis = 0;

            }

        }

    The most important variable above is nextPollTimeoutMillis, which specifies the timeout for the native epoll_wait. Why is a timeout needed when mWakeEventFd can already wake epoll_wait? The answer lies in the kinds of Message. Some messages must run immediately, and for those the mWakeEventFd wake-up suffices. Others are delayed messages, or messages scheduled for a specific time; after being enqueued they should not run right away, so the queue computes an appropriate epoll_wait() timeout that makes epoll_wait() return exactly when the timed task is due. That is the logic this function implements.
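    Concretely, if the head message is due three seconds from now, next() sets nextPollTimeoutMillis to roughly 3000 and the native epoll_wait() sleeps at most that long. From the caller's side this is just a delayed post (handler is illustrative):

        // Posted from any thread; the Looper thread then sleeps in epoll_wait(..., ~3000)
        // until the message is due (or an earlier message/wake arrives).
        handler.postDelayed(new Runnable() {
            @Override public void run() {
                // runs on the Looper thread roughly 3 seconds later
            }
        }, 3000);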

    Continuing the earlier analysis: Looper.loop() calls MessageQueue.next(), and next() calls nativePollOnce() to drop into the native layer and handle input and native Messages. After a few forwarding calls, nativePollOnce() lands in Looper::pollOnce(), shown below:

    int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {

        int result = 0;

        for (;;) { // first handle the responses for the fds; we'll see below that responses holds the active fds

            while (mResponseIndex < mResponses.size()) {

                const Response& response = mResponses.itemAt(mResponseIndex++);

                int ident = response.request.ident;

                if (ident >= 0) { // ident >= 0 means no callback was registered, so just return it; with a callback, ident is POLL_CALLBACK (-2)

                    int fd = response.request.fd;

                    int events = response.events;

                    void* data = response.request.data;

    #if DEBUG_POLL_AND_WAKE

                    ALOGD("%p ~ pollOnce - returning signalled identifier %d: "

                            "fd=%d, events=0x%x, data=%p",

                            this, ident, fd, events, data);

    #endif

                    if (outFd != NULL) *outFd = fd;

                    if (outEvents != NULL) *outEvents = events;

                    if (outData != NULL) *outData = data;

                    return ident;

                }

            }


            if (result != 0) { // note this is inside the loop; result is changed by pollInner below

    #if DEBUG_POLL_AND_WAKE

                ALOGD("%p ~ pollOnce - returning result %d", this, result);

    #endif

                if (outFd != NULL) *outFd = 0;

                if (outEvents != NULL) *outEvents = 0;

                if (outData != NULL) *outData = NULL;

                return result;

            }

     

            result = pollInner(timeoutMillis); // epoll_wait happens inside

        }

    }

    Next, into pollInner:

    int Looper::pollInner(int timeoutMillis) {

    #if DEBUG_POLL_AND_WAKE

        ALOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);

    #endif

     

        // Adjust the timeout based on when the next message is due.

        if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {

            nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);

            int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);

            if (messageTimeoutMillis >= 0

                    && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {

                timeoutMillis = messageTimeoutMillis;

            }

    #if DEBUG_POLL_AND_WAKE

            ALOGD("%p ~ pollOnce - next message in %" PRId64 "ns, adjusted timeout: timeoutMillis=%d",

                    this, mNextMessageUptime - now, timeoutMillis);

    #endif

        }

     

        // Poll.

        int result = POLL_WAKE;

        mResponses.clear();

        mResponseIndex = 0;

     

        // We are about to idle.

        mPolling = true;

     

        struct epoll_event eventItems[EPOLL_MAX_EVENTS];

        int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

     

        // No longer idling.

        mPolling = false;

     

    // acquire the lock; native Message handling and enqueueing must be synchronized

        mLock.lock();

     

    // rebuild the epoll set if required

        if (mEpollRebuildRequired) {

            mEpollRebuildRequired = false;

            rebuildEpollLocked();

            goto Done;

        }

     

        // Check for poll error.

        if (eventCount < 0) {

            if (errno == EINTR) {

                goto Done;

            }

            ALOGW("Poll failed with an unexpected error, errno=%d", errno);

            result = POLL_ERROR;

            goto Done;

        }

     

    // epoll timed out

        if (eventCount == 0) {

    #if DEBUG_POLL_AND_WAKE

            ALOGD("%p ~ pollOnce - timeout", this);

    #endif

            result = POLL_TIMEOUT; // returned up to pollOnce, which lets timed Java Messages execute

            goto Done;

        }

     

        // Handle all events.

    #if DEBUG_POLL_AND_WAKE

        ALOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);

    #endif

    // first handle the active input fds and mWakeEventFd

        for (int i = 0; i < eventCount; i++) {

            int fd = eventItems[i].data.fd;

            uint32_t epollEvents = eventItems[i].events;

        if (fd == mWakeEventFd) { // the wake fd fired

                if (epollEvents & EPOLLIN) {

                awoken(); // internally just a read(), clearing the fd's readable state

                } else {

                    ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);

                }

        } else { // other input fds: push the active fd onto the responses list to be handled later

                ssize_t requestIndex = mRequests.indexOfKey(fd);

                if (requestIndex >= 0) {

                    int events = 0;

                    if (epollEvents & EPOLLIN) events |= EVENT_INPUT;

                    if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;

                    if (epollEvents & EPOLLERR) events |= EVENT_ERROR;

                    if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;

                    pushResponse(events, mRequests.valueAt(requestIndex));

                } else {

                    ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "

                            "no longer registered.", epollEvents, fd);

                }

            }

        }

    Done: ;

     

    // native-layer Messages are processed here

        mNextMessageUptime = LLONG_MAX;

        while (mMessageEnvelopes.size() != 0) {

            nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);

            const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);

            if (messageEnvelope.uptime <= now) {

                // Remove the envelope from the list.

                // We keep a strong reference to the handler until the call to handleMessage

                // finishes.  Then we drop it so that the handler can be deleted *before*

                // we reacquire our lock.

                { // obtain handler

                    sp<MessageHandler> handler = messageEnvelope.handler;

                    Message message = messageEnvelope.message;

                    mMessageEnvelopes.removeAt(0);

                    mSendingMessage = true;

                    mLock.unlock();

     

    #if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS

                    ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",

                            this, handler.get(), message.what);

    #endif

                handler->handleMessage(message); // handle the native Message

                } // release handler

     

                mLock.lock();

                mSendingMessage = false;

                result = POLL_CALLBACK;

            } else {

                // The last message left at the head of the queue determines the next wakeup time.

                mNextMessageUptime = messageEnvelope.uptime;

                break;

            }

        }

     

        // Release lock.

        mLock.unlock();

     

    // handle the active input fds pushed onto responses earlier

        for (size_t i = 0; i < mResponses.size(); i++) {

            Response& response = mResponses.editItemAt(i);

            if (response.request.ident == POLL_CALLBACK) {

                int fd = response.request.fd;

                int events = response.events;

                void* data = response.request.data;

    #if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS

                ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",

                        this, response.request.callback.get(), fd, events, data);

    #endif

                // Invoke the callback.  Note that the file descriptor may be closed by

                // the callback (and potentially even reused) before the function returns so

                // we need to be a little careful when removing the file descriptor afterwards.

            // fds with a callback are handled here; those without one are deferred to pollOnce on the next iteration

                int callbackResult = response.request.callback->handleEvent(fd, events, data);

                if (callbackResult == 0) {

                    removeFd(fd, response.request.seq);

                }

     

                // Clear the callback reference in the response structure promptly because we

                // will not clear the response vector itself until the next poll.

                response.request.callback.clear();

                result = POLL_CALLBACK;

            }

        }

        return result;

    }

    Below is a diagram of the Looper's processing structure; the key is epoll.

     

    Three kinds of messages are clearly involved here:

    1. Java-layer Messages

    2. Native-layer Messages

    3. Input devices behind active fds

    The sections below analyze each of the three in turn.

    4     Handling Java-Layer Messages

    First we need to pin down when Java-layer Messages execute. As analyzed in the previous section, they run after native Messages and fd events. Looper.loop() blocks at MessageQueue.next() -> pollOnce() -> pollInner() -> epoll_wait().

    1. If all three kinds of message are absent and the Java layer then sends a msg, sendMessage() calls nativeWake() to wake epoll_wait(), and control returns to the Java layer to process the msg.

    2. If only the Java layer has a msg and it is a timed task, epoll_wait() is woken at sendMessage() time, and the next loop iteration sets a timeout on epoll_wait(). (The actual logic is more involved.)

    3. If a Java Message is added while looping, epoll_wait() returns immediately and the msg is handled on the next iteration.

    The flow for sending and handling Java-layer Messages is roughly as shown below:

    5     Handling Native-Layer Messages

    The flow for sending and handling native-layer Messages is roughly as shown below:

    As the figure shows, sending and handling native Messages closely resembles the Java side: a Message is created on any thread and then sent. The differences are that the native Looper has no Handler, so sending must go through Looper::sendMessage(), which requires the sender to specify the MessageHandler that will process the Message; and that the native MessageQueue's backing store, mMessageEnvelopes, is essentially a Vector, unlike the Java MessageQueue's linked list. A wake() is likewise needed at send time. The logic otherwise mirrors the Java layer, so we won't repeat it.

    6     Handling Input Devices Behind Active fds

    For this kind of message, epoll watches the fds directly; when an input device becomes active, epoll_wait() detects that its fd is readable (or writable), and the fd is then processed. The handling is fairly scattered; start with pollInner().

     

    int Looper::pollInner(int timeoutMillis) {

        ……

    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

    ……

        for (int i = 0; i < eventCount; i++) {

            int fd = eventItems[i].data.fd;

            uint32_t epollEvents = eventItems[i].events;

            if (fd == mWakeEventFd) {

                if (epollEvents & EPOLLIN) {

                    awoken();

                } else {

                    ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);

                }

            } else {

                ssize_t requestIndex = mRequests.indexOfKey(fd);

                if (requestIndex >= 0) {

                    int events = 0;

                    if (epollEvents & EPOLLIN) events |= EVENT_INPUT;

                    if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;

                    if (epollEvents & EPOLLERR) events |= EVENT_ERROR;

                    if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;

                    // wrap the active fd's Request into the responses list

                    pushResponse(events, mRequests.valueAt(requestIndex));

                } else {

                    ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "

                            "no longer registered.", epollEvents, fd);

                }

            }

    }

    ……

        }

     

    // handle the responses that carry a callback

        for (size_t i = 0; i < mResponses.size(); i++) {

            Response& response = mResponses.editItemAt(i);

            if (response.request.ident == POLL_CALLBACK) {

                int fd = response.request.fd;

                int events = response.events;

                void* data = response.request.data;

    #if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS

                ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",

                        this, response.request.callback.get(), fd, events, data);

    #endif

                // Invoke the callback.  Note that the file descriptor may be closed by

                // the callback (and potentially even reused) before the function returns so

                // we need to be a little careful when removing the file descriptor afterwards.

                int callbackResult = response.request.callback->handleEvent(fd, events, data);

                if (callbackResult == 0) {

                    removeFd(fd, response.request.seq);

                }

     

                // Clear the callback reference in the response structure promptly because we

                // will not clear the response vector itself until the next poll.

                response.request.callback.clear();

                result = POLL_CALLBACK;

            }

        }

        return result;

    }

    As you can see, for active fds whose response already contains a callback, that callback's handleEvent() is invoked directly. So where are the responses without a callback handled? In pollOnce() on the next iteration, i.e. before the next epoll_wait().

    int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {

        int result = 0;

        for (;;) {

            while (mResponseIndex < mResponses.size()) {

                const Response& response = mResponses.itemAt(mResponseIndex++);

                int ident = response.request.ident;

                if (ident >= 0) { // ident >= 0 means no callback was registered; with a callback, ident is POLL_CALLBACK (-2)

                    int fd = response.request.fd;

                    int events = response.events;

                    void* data = response.request.data;

    #if DEBUG_POLL_AND_WAKE

                    ALOGD("%p ~ pollOnce - returning signalled identifier %d: "

                            "fd=%d, events=0x%x, data=%p",

                            this, ident, fd, events, data);

    #endif

                    if (outFd != NULL) *outFd = fd;

                    if (outEvents != NULL) *outEvents = events;

                    if (outData != NULL) *outData = data;

                    return ident; // responses without a callback simply return the ident directly

                   

                }

            }

     

            if (result != 0) {

    #if DEBUG_POLL_AND_WAKE

                ALOGD("%p ~ pollOnce - returning result %d", this, result);

    #endif

                if (outFd != NULL) *outFd = 0;

                if (outEvents != NULL) *outEvents = 0;

                if (outData != NULL) *outData = NULL;

                return result;

            }

     

            result = pollInner(timeoutMillis);

        }

    }

    Note that the last three parameters of pollOnce are pointers, so they effectively act as return values: through them the caller receives a copy of the active fd's details for further handling, while the stored responses themselves are cleared by mResponses.clear() on the next pollInner().

    Now back to the requests that carry their own callback. Two questions arise: 1) who added these requests? 2) which function does request.callback->handleEvent() actually point to?

    The first question can be traced backwards. epoll works on fds, and these fds are added to the epoll watch set during nativeInit(), more precisely during construction of the native Looper, as shown below.

    void Looper::rebuildEpollLocked() {

        // Close old epoll instance if we have one.

        if (mEpollFd >= 0) {

    #if DEBUG_CALLBACKS

            ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);

    #endif

            close(mEpollFd);

        }

     

        // Allocate the new epoll instance and register the wake pipe.

        mEpollFd = epoll_create(EPOLL_SIZE_HINT);

        LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance.  errno=%d", errno);

     

        struct epoll_event eventItem;

        memset(& eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union

        eventItem.events = EPOLLIN;

        eventItem.data.fd = mWakeEventFd;

        int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, & eventItem);

        LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance.  errno=%d",

                errno);

       // right here

        for (size_t i = 0; i < mRequests.size(); i++) {

            const Request& request = mRequests.valueAt(i);

            struct epoll_event eventItem;

            request.initEventItem(&eventItem);

     

            int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, & eventItem);

            if (epollResult < 0) {

                ALOGE("Error adding epoll events for fd %d while rebuilding epoll set, errno=%d",

                        request.fd, errno);

            }

        }

    }

    The code above shows these fds all come out of mRequests, and mRequests is populated by Looper::addFd(). Searching for callers of addFd() turns up many call sites, so the native layer evidently uses it directly to register fds with epoll. Can the Java layer add fds to epoll too? It turns out nativeInit()'s native-side counterpart, android_os_MessageQueue_nativeInit, has a neighbor:

    static void android_os_MessageQueue_nativeSetFileDescriptorEvents(JNIEnv* env, jclass clazz,

            jlong ptr, jint fd, jint events) {

        NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);

        nativeMessageQueue->setFileDescriptorEvents(fd, events);

    }

    Into setFileDescriptorEvents():

    void NativeMessageQueue::setFileDescriptorEvents(int fd, int events) {

        if (events) { // events decides whether this is an add or a remove

            int looperEvents = 0;

            if (events & CALLBACK_EVENT_INPUT) {

                looperEvents |= Looper::EVENT_INPUT;

            }

            if (events & CALLBACK_EVENT_OUTPUT) {

                looperEvents |= Looper::EVENT_OUTPUT;

            }

            mLooper->addFd(fd, Looper::POLL_CALLBACK, looperEvents, this,

                    reinterpret_cast<void*>(events)); // addFd passes 'this', naming the NativeMessageQueue itself as the callback

        } else {

            mLooper->removeFd(fd); // remove the fd here

        }

    }

    So the Java layer can indeed register fds with epoll:

        private void updateOnFileDescriptorEventListenerLocked(FileDescriptor fd, int events,

                OnFileDescriptorEventListener listener) {

            final int fdNum = fd.getInt$();

     

            int index = -1;

            FileDescriptorRecord record = null;

            if (mFileDescriptorRecords != null) {

                index = mFileDescriptorRecords.indexOfKey(fdNum);

                if (index >= 0) {

                    record = mFileDescriptorRecords.valueAt(index);

                    if (record != null && record.mEvents == events) {

                        return;

                    }

                }

            }

     

            if (events != 0) {

                events |= OnFileDescriptorEventListener.EVENT_ERROR;

                if (record == null) {

                    if (mFileDescriptorRecords == null) {

                        mFileDescriptorRecords = new SparseArray<FileDescriptorRecord>();

                    }

                    record = new FileDescriptorRecord(fd, events, listener);

                    mFileDescriptorRecords.put(fdNum, record);

                } else {

                    record.mListener = listener;

                    record.mEvents = events;

                    record.mSeq += 1;

                }

                nativeSetFileDescriptorEvents(mPtr, fdNum, events); // add or update the fd in the native layer

            } else if (record != null) {

                record.mEvents = 0;

                mFileDescriptorRecords.removeAt(index); // presumably removes the Java-side record of the fd

            }

        }
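    The public entry point wrapping this is MessageQueue.addOnFileDescriptorEventListener(), new in API 23 (Android 6.0). A rough sketch of watching an fd from Java (fd is assumed to come from, say, a ParcelFileDescriptor pipe):

        // Watch a FileDescriptor for readability on this thread's Looper.
        Looper.myQueue().addOnFileDescriptorEventListener(fd,
                MessageQueue.OnFileDescriptorEventListener.EVENT_INPUT,
                new MessageQueue.OnFileDescriptorEventListener() {
                    @Override
                    public int onFileDescriptorEvents(FileDescriptor fd, int events) {
                        if ((events & EVENT_INPUT) != 0) {
                            // read from fd here; runs on the Looper thread
                        }
                        return EVENT_INPUT; // keep listening; return 0 to unregister
                    }
                });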

    Since addFd() named 'this' as the callback, when the fd finally fires, processing enters NativeMessageQueue's handleEvent() method.

    int NativeMessageQueue::handleEvent(int fd, int looperEvents, void* data) {

        int events = 0;

        if (looperEvents & Looper::EVENT_INPUT) {

            events |= CALLBACK_EVENT_INPUT;

        }

        if (looperEvents & Looper::EVENT_OUTPUT) {

            events |= CALLBACK_EVENT_OUTPUT;

        }

        if (looperEvents & (Looper::EVENT_ERROR | Looper::EVENT_HANGUP | Looper::EVENT_INVALID)) {

            events |= CALLBACK_EVENT_ERROR;

        }

        int oldWatchedEvents = reinterpret_cast<intptr_t>(data);

        int newWatchedEvents = mPollEnv->CallIntMethod(mPollObj, // calls back into Java-layer code

                gMessageQueueClassInfo.dispatchEvents, fd, events);

        if (!newWatchedEvents) {

            return 0; // unregister the fd

        }

        if (newWatchedEvents != oldWatchedEvents) {

            setFileDescriptorEvents(fd, newWatchedEvents);

        }

        return 1;

    }

    Note that gMessageQueueClassInfo refers to the Java-layer MessageQueue class, so MessageQueue's dispatchEvents() method is invoked and the event is handled in Java.

    Of course, fds can be added and handled entirely in the native layer, and that appears to be the main path. Android 6.0 has quite a few dedicated classes for input devices, such as android_view_InputQueue, android_view_InputEventSender, and android_view_InputEventReceiver. We won't detail them here; they are left for future study.

    Original article: http://blog.csdn.net/a34140974/article/details/50638089
