mirror of
https://github.com/Lakr233/vphone-cli.git
synced 2026-04-05 04:59:05 +08:00
Squash merge iunctqwiocmfxhigkcunamxoi into main
Included commits:

- f8a54b8 Update JB kernel patch research notes. Refresh and revalidate jailbreak kernel-patcher documentation and runtime-verification notes. Key updates: re-analyzed B13 (`patch_bsd_init_auth`) and retargeted the recommended site to the `FSIOC_KERNEL_ROOTAUTH` return check in `bsd_init` rather than the old ldr/cbz/bl heuristic; clarified the preferred NOP-of-CBNZ approach vs forcing the ioctl return. Reworked C21 (`patch_cred_label_update_execve`) to preserve the AMFI exec-time flow and instead clear restrictive csflags in a success-tail trampoline; disabled in the default schedule until boot validation. Documented that C23 (`patch_hook_cred_label_update_execve`) was mis-targeting the wrapper (`sub_FFFFFE00093D2CE4`) instead of the real hook body (`_hook_cred_label_update_execve`), explaining boot failures, and recommended retargeting. Noted syscallmask and vm_fault matcher problems (the historical `patch_syscallmask_apply_to_proc` hit targeted `_profile_syscallmask_destroy`; the `patch_vm_fault_enter_prepare` matcher resolves to the `pmap_lock_phys_page` path), and updated the runtime-verification summary with follow-up findings and which methods are temporarily commented out/disabled in the default `KernelJBPatcher` schedule pending staged re-validation.
- 6ebac65 fix: patch_bsd_init_auth
- 5b224d3 fix: patch_io_secure_bsd_root
- e6806bf docs: update patch notes
- 0d89c5c Retarget vm_fault_enter_prepare jailbreak patch
- 6b9d79b Rework C21 late-exit cred_label patch
- ece8cc0 Clean C21 mov matcher encodings
- ad2ea7c enabled fixed patch_cred_label_update_execve
- c37b6b1 Rebuild syscallmask C22 patch
- 363dd7a Rebuild JB C23 as faithful upstream trampoline
- 129e648 Disable IOUC MACF; rebuild kcall10 & C22 docs. Re-evaluate and rework several JB kernel patches and docs: mark `patch_iouc_failed_macf` as reverted/disabled (repo-local, over-broad early-return) and replace its patcher with a no-op implementation that emits zero writes by default; update research notes to explain the reanalysis and rationale. Rebuild `patch_kcall10`: replace the historical 10-arg design with an ABI-correct syscall-439 cave (target + 7 args -> uint64 return), add a new cave builder and munge32 reuse logic in the kcall10 patcher, and enable the method in the `KernelJBPatcher` group. Clarify syscallmask (C22) semantics in the docs: upstream C22 is an all-ones-mask retarget (not a `NULL` install), and the rebuilt all-ones wrapper is kept as the authoritative baseline. Misc: minor refactors and helper additions (chained-pointer helpers, cave size/constants, validation and dry-run safeguards) to improve correctness and alignment with IDA/runtime verification.
- e1b2365 Rebuild kcall10 as ABI-correct syscall cave
- 23090d0 fix patch_iouc_failed_macf
- 0056be2 Normalize formatting in research docs. Apply whitespace and formatting cleanup across research markdown files for consistency and readability. Adjust table alignment and spacing in 00_patch_comparison_all_variants.md, normalize list/indentation spacing in patch_bsd_init_auth.md and patch_syscallmask_apply_to_proc.md, and add/clean blank lines and minor spacing in patch_kcall10.md. These are non-functional documentation changes only.
This commit is contained in:
@@ -149,6 +149,15 @@ research/ # Detailed firmware/patch documentation

### Python Scripts

### Kernel patcher guardrails

- For kernel patchers, never hardcode file offsets, virtual addresses, or preassembled instruction bytes inside patch logic.
- All instruction matching must be derived from Capstone decode results (mnemonic / operands / control flow), not from exact operand-string text, whenever a semantic operand check is possible.
- All replacement instruction bytes must come from the Keystone-backed helpers already used by the project (for example `asm(...)`, `NOP`, `MOV_W0_0`).
- Prefer source-backed semantic anchors: in-image symbol lookup, string xrefs, local call flow, and XNU correlation. Do not depend on repo-exported per-kernel symbol dumps at runtime.
- When retargeting a patch, write the reveal procedure and validation steps into `TODO.md` before handing off for testing.
- For `patch_bsd_init_auth` specifically, the allowed reveal flow is: recover `bsd_init` -> locate the rootvp panic block -> find the unique in-function `call` -> `cbnz w0/x0, panic` -> `bl imageboot_needed` site -> patch the branch gate only.
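As a pure-Python sketch of these guardrails: the `Insn` tuple below stands in for a Capstone decode result, and the byte constants stand in for the project's Keystone-backed helpers. The helper names mirror the bullets above, but the surrounding code is illustrative, not the project's actual matcher.

```python
from collections import namedtuple

# Stand-in for a Capstone decode result: mnemonic plus already-parsed operands.
Insn = namedtuple("Insn", ["mnemonic", "reg", "target"])

# Stand-ins for the project's Keystone-backed byte helpers (little-endian).
NOP = bytes.fromhex("1F2003D5")        # nop
MOV_W0_0 = bytes.fromhex("00008052")   # mov w0, #0

def is_deny_gate(insn, panic_va):
    """Semantic match: a CBNZ on w0 that branches to the panic block.

    Checks decoded fields instead of comparing an operand string
    such as "w0, #0x..." textually.
    """
    return (insn.mnemonic == "cbnz"
            and insn.reg == "w0"
            and insn.target == panic_va)

panic_block = 0xFFFFFE0007F7BBF4
gate = Insn("cbnz", "w0", panic_block)            # a B13-style branch gate
patch_bytes = NOP if is_deny_gate(gate, panic_block) else None
```

Neutralizing the branch with `NOP` keeps the preceding call intact, matching the "patch the branch gate only" rule.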
- Patchers use `capstone` (disassembly), `keystone-engine` (assembly), and `pyimg4` (IM4P handling).
- Dynamic pattern finding (string anchors, ADRP+ADD xrefs, BL frequency) — no hardcoded offsets.
- Each patch is logged with its offset and before/after state.
@@ -74,35 +74,37 @@

### JB-Only Kernel Methods (Reference List)

Current default schedule note (2026-03-06): `patch_bsd_init_auth`, `patch_io_secure_bsd_root`, `patch_vm_fault_enter_prepare`, and `patch_cred_label_update_execve` are temporarily excluded from `_PATCH_METHODS` pending rework.

Current default schedule note (2026-03-06):

- `patch_cred_label_update_execve` remains temporarily excluded from `_PATCH_METHODS` pending staged re-validation.
- `patch_syscallmask_apply_to_proc` has been rebuilt around the real syscallmask apply wrapper and is re-enabled after a focused PCC 26.1 dry-run validation plus user-side boot confirmation; a refreshed XNU/IDA review also confirms that historical C22 was the all-ones-mask variant, not a `NULL`-mask install.
- `patch_hook_cred_label_update_execve` has been rebuilt as a faithful upstream C23 wrapper trampoline: it retargets sandbox `mac_policy_ops[18]` to a cave that copies `VSUID`/`VSGID` owner state into the pending credential, sets `P_SUGID`, and branches back to the original wrapper.
- `patch_iouc_failed_macf` has been rebuilt as a narrow branch-level gate patch: the old repo-only entry early-return at `0xFFFFFE000825B0C0` was discarded, and A5-v2 now patches the post-`mac_iokit_check_open` `CBZ W0, allow` gate at `0xFFFFFE000825BA98` to an unconditional allow while preserving the surrounding IOUserClient setup flow.
- `patch_vm_fault_enter_prepare` was retargeted to the upstream PCC 26.1 research `cs_bypass` gate and re-enabled for dry-run validation.
- `patch_bsd_init_auth` has been retargeted to the real `_bsd_init` rootauth failure branch and re-enabled for staged validation. Fresh IDA re-analysis shows JB-14 previously used a false-positive matcher; it now targets the real rootauth failure branch using in-function Capstone-decoded control-flow semantics and is semantically redundant with base patch #3 when JB is layered on top of `fw_patch`.
- For JB-16, the historical hit at `0xFFFFFE000836E1F0` is now treated as semantically wrong: it patches the `"SecureRoot"` name-check gate inside `AppleARMPE::callPlatformFunction`, not the `"SecureRootName"` deny return consumed by `IOSecureBSDRoot()`. The implementation was retargeted on 2026-03-06 to `0xFFFFFE000836E464` (`CSEL W22, WZR, W9, NE -> MOV W22, #0`) and re-enabled in `KernelJBPatcher._GROUP_B_METHODS` pending restore/boot validation.
| # | Group | Method | Function | Purpose | JB Enabled |
| --- | --- | --- | --- | --- | :---: |
| JB-01 | A | `patch_amfi_cdhash_in_trustcache` | `AMFIIsCDHashInTrustCache` | Always return true + store hash | Y |
| JB-02 | A | `patch_amfi_execve_kill_path` | AMFI execve kill return site | Convert shared kill return from deny to allow | Y |
| JB-03 | C | `patch_cred_label_update_execve` | `_cred_label_update_execve` | Early-return low-riskized cs_flags path | Y |
| JB-04 | C | `patch_hook_cred_label_update_execve` | `_hook_cred_label_update_execve` | Low-riskized early-return hook gate | Y |
| JB-05 | C | `patch_kcall10` | `sysent[439]` (`SYS_kas_info` replacement) | Kernel arbitrary call from userspace | Y |
| JB-06 | B | `patch_post_validation_additional` | `_postValidation` (additional) | Disable SHA256-only hash-type reject | Y |
| JB-07 | C | `patch_syscallmask_apply_to_proc` | `_syscallmask_apply_to_proc` | Low-riskized early return for syscall mask gate | Y |
| JB-08 | A | `patch_task_conversion_eval_internal` | `_task_conversion_eval_internal` | Allow task conversion | Y |
| JB-09 | A | `patch_sandbox_hooks_extended` | Sandbox MACF ops (extended) | Stub remaining 30+ sandbox hooks (incl. IOKit 201..210) | Y |
| JB-10 | A | `patch_iouc_failed_macf` | IOUC MACF shared gate | Bypass shared IOUserClient MACF deny path | Y |
| JB-11 | B | `patch_proc_security_policy` | `_proc_security_policy` | Bypass security policy | Y |
| JB-12 | B | `patch_proc_pidinfo` | `_proc_pidinfo` | Allow pid 0 info | Y |
| JB-13 | B | `patch_convert_port_to_map` | `_convert_port_to_map_with_flavor` | Skip kernel map panic | Y |
| JB-14 | B | `patch_bsd_init_auth` | `_bsd_init` (2nd auth gate) | Skip auth at @%s:%d | Y |
| JB-15 | B | `patch_dounmount` | `_dounmount` | Allow unmount (strict in-function match) | Y |
| JB-16 | B | `patch_io_secure_bsd_root` | `_IOSecureBSDRoot` | Skip secure root check (guard-site filter) | Y |
| JB-17 | B | `patch_load_dylinker` | `_load_dylinker` | Skip strict `LC_LOAD_DYLINKER == "/usr/lib/dyld"` gate | Y |
| JB-18 | B | `patch_mac_mount` | `___mac_mount` | Bypass MAC mount deny path (strict site) | Y |
| JB-19 | B | `patch_nvram_verify_permission` | `_verifyPermission` (NVRAM) | Allow NVRAM writes | Y |
| JB-20 | B | `patch_shared_region_map` | `_shared_region_map_and_slide_setup` | Force shared region path | Y |
| JB-21 | B | `patch_spawn_validate_persona` | `_spawn_validate_persona` | Skip persona validation | Y |
| JB-22 | B | `patch_task_for_pid` | `_task_for_pid` | Allow task_for_pid | Y |
| JB-23 | B | `patch_thid_should_crash` | `_thid_should_crash` | Prevent GUARD_TYPE_MACH_PORT crash | Y |
| JB-24 | B | `patch_vm_fault_enter_prepare` | `_vm_fault_enter_prepare` | Skip fault check | Y |
| JB-25 | B | `patch_vm_map_protect` | `_vm_map_protect` | Allow VM protect | Y |
| # | Group | Method | Function | Purpose | JB Enabled |
| --- | --- | --- | --- | --- | :---: |
| JB-01 | A | `patch_amfi_cdhash_in_trustcache` | `AMFIIsCDHashInTrustCache` | Always return true + store hash | Y |
| JB-02 | A | `patch_amfi_execve_kill_path` | AMFI execve kill return site | Convert shared kill return from deny to allow | Y |
| JB-03 | C | `patch_cred_label_update_execve` | `_cred_label_update_execve` | Reworked C21-v3: C21-v1 already boots; v3 keeps split late exits and additionally ORs success-only helper bits `0xC` after clearing `0x3F00`; still disabled pending boot validation | N |
| JB-04 | C | `patch_hook_cred_label_update_execve` | sandbox `mpo_cred_label_update_execve` wrapper (`ops[18]` -> `sub_FFFFFE00093BDB64`) | Faithful upstream C23 trampoline: copy `VSUID`/`VSGID` owner state into pending cred, set `P_SUGID`, then branch back to wrapper | Y |
| JB-05 | C | `patch_kcall10` | `sysent[439]` (`SYS_kas_info` replacement) | Rebuilt ABI-correct kcall cave: `target + 7 args -> uint64 x0`; re-enabled after focused dry-run validation | Y |
| JB-06 | B | `patch_post_validation_additional` | `_postValidation` (additional) | Disable SHA256-only hash-type reject | Y |
| JB-07 | C | `patch_syscallmask_apply_to_proc` | syscallmask apply wrapper (`_proc_apply_syscall_masks` path) | Faithful upstream C22: mutate installed Unix/Mach/KOBJ masks to all-ones via structural cave, then continue into setter; distinct from `NULL`-mask alternative | Y |
| JB-08 | A | `patch_task_conversion_eval_internal` | `_task_conversion_eval_internal` | Allow task conversion | Y |
| JB-09 | A | `patch_sandbox_hooks_extended` | Sandbox MACF ops (extended) | Stub remaining 30+ sandbox hooks (incl. IOKit 201..210) | Y |
| JB-10 | A | `patch_iouc_failed_macf` | IOUC MACF shared gate | A5-v2: patch only the post-`mac_iokit_check_open` deny gate (`CBZ W0, allow` -> `B allow`) and keep the rest of the IOUserClient open path intact | Y |
| JB-11 | B | `patch_proc_security_policy` | `_proc_security_policy` | Bypass security policy | Y |
| JB-12 | B | `patch_proc_pidinfo` | `_proc_pidinfo` | Allow pid 0 info | Y |
| JB-13 | B | `patch_convert_port_to_map` | `_convert_port_to_map_with_flavor` | Skip kernel map panic | Y |
| JB-14 | B | `patch_bsd_init_auth` | `_bsd_init` rootauth-failure branch | Ignore `FSIOC_KERNEL_ROOTAUTH` failure in `bsd_init`; same gate as base patch #3 when layered | Y |
| JB-15 | B | `patch_dounmount` | `_dounmount` | Allow unmount (strict in-function match) | Y |
| JB-16 | B | `patch_io_secure_bsd_root` | `AppleARMPE::callPlatformFunction` (`"SecureRootName"` return select), called from `IOSecureBSDRoot` | Force `"SecureRootName"` policy return to success without altering callback flow; implementation retargeted 2026-03-06 | Y |
| JB-17 | B | `patch_load_dylinker` | `_load_dylinker` | Skip strict `LC_LOAD_DYLINKER == "/usr/lib/dyld"` gate | Y |
| JB-18 | B | `patch_mac_mount` | `___mac_mount` | Bypass MAC mount deny path (strict site) | Y |
| JB-19 | B | `patch_nvram_verify_permission` | `_verifyPermission` (NVRAM) | Allow NVRAM writes | Y |
| JB-20 | B | `patch_shared_region_map` | `_shared_region_map_and_slide_setup` | Force shared region path | Y |
| JB-21 | B | `patch_spawn_validate_persona` | `_spawn_validate_persona` | Skip persona validation | Y |
| JB-22 | B | `patch_task_for_pid` | `_task_for_pid` | Allow task_for_pid | Y |
| JB-23 | B | `patch_thid_should_crash` | `_thid_should_crash` | Prevent GUARD_TYPE_MACH_PORT crash | Y |
| JB-24 | B | `patch_vm_fault_enter_prepare` | `_vm_fault_enter_prepare` | Force `cs_bypass` fast path in runtime fault validation | Y |
| JB-25 | B | `patch_vm_map_protect` | `_vm_map_protect` | Allow VM protect | Y |

JB-24 note (2026-03-06): the old derived matcher hit the `VM_PAGE_CONSUME_CLUSTERED()` lock/unlock sequence inside `vm_fault_enter_prepare`, i.e. `pmap_lock_phys_page()` / `pmap_unlock_phys_page()`. The implementation is now retargeted to the upstream PCC 26.1 research `cs_bypass` gate at `0x00BA9E1C` / `0xFFFFFE0007BADE1C`.
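The paired file-offset / virtual-address notation in this note implies a linear mapping; a minimal sketch, deriving the constant from the JB-24 pair itself (whether this single constant holds across the whole kernelcache is an assumption):

```python
# Linear VA <-> file-offset helper, with the constant derived from the
# documented JB-24 pair (file offset 0x00BA9E1C <-> VA 0xFFFFFE0007BADE1C).
FILE_OFF = 0x00BA9E1C
VIRT_ADDR = 0xFFFFFE0007BADE1C
SLIDE = VIRT_ADDR - FILE_OFF   # 0xFFFFFE0007004000 for this view of the cache

def va_to_off(va: int) -> int:
    return va - SLIDE

def off_to_va(off: int) -> int:
    return off + SLIDE

assert va_to_off(VIRT_ADDR) == FILE_OFF
```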
## CFW Installation Patches
@@ -193,8 +195,13 @@ Current default schedule note (2026-03-06): `patch_bsd_init_auth`, `patch_io_sec

- `setup_logs/jb_patch_tests_20260306_115027` (2026-03-06): rerun after the `status` fix, pending-only mode (`Total methods: 19`).
- Final run result from `jb_patch_tests_20260306_115027` at `2026-03-06 13:17`:
  - Finished: 19/19 (`PASS=15`, `FAIL=4`, all failures `rc=2`).
  - Failing methods: `patch_bsd_init_auth`, `patch_io_secure_bsd_root`, `patch_vm_fault_enter_prepare`, `patch_cred_label_update_execve`.
  - Failing methods at that time: `patch_bsd_init_auth`, `patch_io_secure_bsd_root`, `patch_vm_fault_enter_prepare`, `patch_cred_label_update_execve`.
  - 2026-03-06 follow-up: the `patch_io_secure_bsd_root` failure is now attributed to a wrong-site patch in `AppleARMPE::callPlatformFunction` (the `"SecureRoot"` gate at `0xFFFFFE000836E1F0`), not the intended `"SecureRootName"` deny-return path. The code was retargeted the same day to `0xFFFFFE000836E464` and re-enabled for the next restore/boot check.
  - 2026-03-06 follow-up: `patch_bsd_init_auth` was retargeted after confirming the old matcher was hitting unrelated code; keep it disabled in the default schedule until a fresh clean-baseline boot test passes.
  - Final case: `[19/19] patch_syscallmask_apply_to_proc` (`PASS`).
    - 2026-03-06 re-analysis: that historical `PASS` is now treated as a functional false positive, because the recorded bytes landed at `0xfffffe00093ae6e4`/`0xfffffe00093ae6e8` inside `_profile_syscallmask_destroy` underflow handling, not in `_proc_apply_syscall_masks`.
    - 2026-03-06 code update: `scripts/patchers/kernel_jb_patch_syscallmask.py` was rebuilt to target the real syscallmask apply wrapper structurally and now dry-runs on `PCC-CloudOS-26.1-23B85 kernelcache.research.vphone600` with 3 writes: `0x02395530`, `0x023955E8`, and cave `0x00AB1720`. User-side boot validation succeeded the same day.
  - 2026-03-06 follow-up: `patch_kcall10` was rebuilt from the old ABI-unsafe pseudo-10-arg design into an ABI-correct `sysent[439]` cave. A focused dry-run on `PCC-CloudOS-26.1-23B85 kernelcache.research.vphone600` now emits 4 writes: cave `0x00AB1720`, `sy_call` `0x0073E180`, `sy_arg_munge32` `0x0073E188`, and metadata `0x0073E190`; the method was re-enabled in `_GROUP_C_METHODS`.
  - Observed failure symptom in the current failing set: first-boot panic before command injection (or early boot-process exit).
- Post-run schedule change (per user request):
  - commented out failing methods from the default `KernelJBPatcher._PATCH_METHODS` schedule in `scripts/patchers/kernel_jb.py`:

@@ -202,6 +209,10 @@ Current default schedule note (2026-03-06): `patch_bsd_init_auth`, `patch_io_sec

    - `patch_io_secure_bsd_root`
    - `patch_vm_fault_enter_prepare`
    - `patch_cred_label_update_execve`
- 2026-03-06 re-research note for `patch_cred_label_update_execve`:
  - The old entry-time early-return strategy was identified as boot-unsafe because it skipped AMFI exec-time `csflags` and entitlement propagation entirely.
  - The implementation was reworked into a success-tail trampoline that preserves normal AMFI processing and only clears restrictive `csflags` bits on the success path.
  - The default JB schedule still keeps the method disabled until the reworked strategy is boot-validated.
- Manual DEV+single (`setup_machine` + `PATCH=<method>`) working set now includes:
  - `patch_amfi_cdhash_in_trustcache`
  - `patch_amfi_execve_kill_path`
@@ -358,9 +358,15 @@ Should have moderate caller count (hundreds).

### patch_syscallmask_apply_to_proc — FIXED

**Problem**: `bl_callers` key bug: the code used `target + self.base_va`, but `bl_callers` is keyed by file offset.

**Fix**: Changed to `self.bl_callers.get(target, [])` at line ~1661.

**Status**: Now PASSING (40 patches emitted for shellcode + redirect).

**Historical problem**: the earlier repo-side "fix" still matched the wrong place. Runtime verification later showed the old hit landed in `_profile_syscallmask_destroy` underflow handling, not in the real syscallmask apply wrapper.

**Current understanding**: faithful upstream C22 is a low-wrapper shellcode patch that mutates the effective Unix/Mach/KOBJ mask bytes to all `0xFF`, then continues into the normal setter. It is not a `NULL`-mask install and not an early-return patch.

**Current status**: rebuilt structurally as a 3-write retarget (save selector, branch to cave, all-ones cave + setter tail) and documented separately in `research/kernel_patch_jb/patch_syscallmask_apply_to_proc.md`; the user reported boot success with the rebuilt C22 on 2026-03-06.
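The key bug is a mixed-keyspace dictionary lookup; a minimal reproduction (the attribute names follow the description above, while the concrete offsets are reused from the dry-run notes purely as sample values):

```python
# bl_callers maps *file offsets* of BL targets to their caller sites.
base_va = 0xFFFFFE0007004000
bl_callers = {0x02395530: [0x0073E180, 0x0073E188]}   # keyed by file offset

target = 0x02395530                    # candidate site, also a file offset

# Buggy lookup: shifts the key into VA space, so it always misses.
assert bl_callers.get(target + base_va, []) == []

# Fixed lookup: key and table live in the same (file-offset) keyspace.
assert bl_callers.get(target, []) == [0x0073E180, 0x0073E188]
```

The failure mode is silent: `.get(..., [])` returns an empty caller list instead of raising, which is why the wrong keyspace went unnoticed until runtime verification.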
### patch_iouc_failed_macf — RETARGETED

**Historical repo behavior**: patched `0xFFFFFE000825B0C0` at function entry with `mov x0, xzr ; retab` after `PACIBSP`.

**Problem**: fresh IDA review shows this is a large IOUserClient open/setup path, not a tiny standalone deny helper; an entry early-return skips broader work, including output-state preparation.

**Current status**: rebuilt as A5-v2. It now patches only the narrow post-`mac_iokit_check_open` gate in the same function: `0xFFFFFE000825BA98` (`CBZ W0, allow`) becomes an unconditional `B allow`. A focused dry-run emits exactly one write at file offset `0x01257A98`.

### patch_nvram_verify_permission — FIXED
@@ -1,148 +1,243 @@

# B13 `patch_bsd_init_auth`

## Patch Goal

## Scope

Bypass the root volume authentication gate during early BSD init by forcing the auth helper return path to success.

- Kernel analyzed: `kernelcache.research.vphone600`
- Symbol handling: prefer in-image LC_SYMTAB if present; otherwise recover `bsd_init` from in-kernel string xrefs and local control flow.
- XNU reference: `research/reference/xnu/bsd/kern/bsd_init.c`
- Analysis basis: IDA-MCP + local XNU source correlation
## Binary Targets (IDA + Recovered Symbols)

## Bottom Line

- Recovered symbol: `bsd_init` at `0xfffffe0007f7add4`.
- Anchor string: `"rootvp not authenticated after mounting @%s:%d"` at `0xfffffe000707d6bb`.
- Anchor xref: `0xfffffe0007f7bc04` inside `sub_FFFFFE0007F7ADD4` (the same function as `bsd_init`).
- Earlier B13 notes are **not trustworthy** as a patch-site guide.
- The currently documented runtime hit at `0xFFFFFE0007FB09DC` is **not inside `bsd_init`**.
- The real `bsd_init` root-auth gate is at `0xFFFFFE0007F7B988` / `0xFFFFFE0007F7B98C` in `bsd_init`.
- If B13 is re-enabled, the patch should target the **`FSIOC_KERNEL_ROOTAUTH` return check in `bsd_init`**, not the `ldr x0, [xN, #0x2b8] ; cbz x0 ; bl` pattern currently used by the patcher.
## Call-Stack Analysis

## What This Patch Is Actually For

- Static callers of `bsd_init` (`0xfffffe0007f7add4`):
  - `sub_FFFFFE0007F7ACE0`
  - `sub_FFFFFE0007B43EE0`
- The patch point is in the rootvp/authentication decision path inside `bsd_init`, before the panic/report path that uses the rootvp-not-authenticated string.

Fact:

## Patch-Site / Byte-Level Change

- In XNU, `bsd_init()` mounts root, calls `IOSecureBSDRoot(rootdevice)`, resolves `rootvnode`, and then enforces root-volume authentication.
- The relevant source block in `research/reference/xnu/bsd/kern/bsd_init.c` is:
  - `if (!bsd_rooted_ramdisk()) {`
  - `autherr = VNOP_IOCTL(rootvnode, FSIOC_KERNEL_ROOTAUTH, NULL, 0, vfs_context_kernel());`
  - `if (autherr) panic("rootvp not authenticated after mounting");`
- Patcher intent:
  - Find `ldr x0, [xN, #0x2b8] ; cbz x0, ... ; bl auth_fn`.
  - Replace `bl auth_fn` with `mov x0, #0`.
- Expected replacement bytes:
  - after: `00 00 80 D2` (`mov x0, #0`)
- The current IDA image appears to be post-variant / non-matching for the exact pre-patch triplet at the old location, so the exact original 4-byte BL at this build state is not asserted here.
Inference:

## Pseudocode (Before)

- The jailbreak purpose of B13 is **not** "generic auth bypass".
- Its real purpose is very narrow: **allow boot to continue even when the mounted root volume fails `FSIOC_KERNEL_ROOTAUTH`**.
- In practice this means permitting a modified / non-sealed / otherwise non-stock root volume to survive the early BSD boot gate.
## Real Control Flow in `bsd_init`

### Confirmed symbols and anchors

- `bsd_init` = `0xFFFFFE0007F7ADD4`
- Panic string = `"rootvp not authenticated after mounting @%s:%d"` at `0xFFFFFE000707D6BB`
- String xref inside `bsd_init` = `0xFFFFFE0007F7BC04`
- Static caller of `bsd_init` = `kernel_bootstrap_thread` at `0xFFFFFE0007B44428`

### Confirmed boot path

Fact, from IDA + XNU correlation:

1. `bsd_init` mounts root via `vfs_mountroot`.
2. `bsd_init` calls `IOSecureBSDRoot(rootdevice)` at `0xFFFFFE0007F7B7C4`.
3. `bsd_init` resolves the mounted root vnode and stores it as `rootvnode`.
4. `bsd_init` calls `bsd_rooted_ramdisk` at `0xFFFFFE0007F7B934`.
5. If not on a rooted ramdisk, `bsd_init` constructs a `VNOP_IOCTL` call for `FSIOC_KERNEL_ROOTAUTH`.
6. The indirect filesystem op is invoked at `0xFFFFFE0007F7B988`.
7. The return value is checked at `0xFFFFFE0007F7B98C`.
8. Failure branches to the panic/report block at `0xFFFFFE0007F7BBF4`.
### Exact IDA site

Relevant instructions in `bsd_init`:

```asm
0xFFFFFE0007F7B934  BL    bsd_rooted_ramdisk
0xFFFFFE0007F7B938  TBNZ  W0, #0, 0xFFFFFE0007F7B990

0xFFFFFE0007F7B94C  MOV   W10, #0x80046833
...
0xFFFFFE0007F7B980  ADD   X0, SP, #var_130
0xFFFFFE0007F7B984  MOV   X17, #0x307A
0xFFFFFE0007F7B988  BLRAA X8, X17
0xFFFFFE0007F7B98C  CBNZ  W0, 0xFFFFFE0007F7BBF4
```

And the failure block:

```asm
0xFFFFFE0007F7BBF4  ADRL  X8, "bsd_init.c"
0xFFFFFE0007F7BBFC  MOV   W9, #0x3D3
0xFFFFFE0007F7BC04  ADRL  X0, "rootvp not authenticated after mounting @%s:%d"
0xFFFFFE0007F7BC0C  BL    sub_FFFFFE0008302368
```
## Why This Is The Real Site

### Source-to-binary correlation

Fact:

- `FSIOC_KERNEL_ROOTAUTH` is defined in `research/reference/xnu/bsd/sys/fsctl.h`.
- The binary literal loaded in `bsd_init` is `0x80046833`, which matches `FSIOC_KERNEL_ROOTAUTH`.
- The call setup happens immediately after `bsd_rooted_ramdisk()` and immediately before the rootvp panic-string block.
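The `0x80046833` literal can be decoded with the standard BSD ioctl field layout (constants per XNU's `sys/ioccom.h`); the decoded fields are consistent with an `_IOW`-style definition, which is an inference from the literal rather than a quote of `fsctl.h`:

```python
# Decode a BSD ioctl request literal into its (direction, size, group, number)
# fields, following the _IOC() layout in XNU's sys/ioccom.h.
IOCPARM_MASK = 0x1FFF   # size field occupies bits 16..28
IOC_OUT = 0x40000000
IOC_IN = 0x80000000

def decode_ioctl(request: int) -> dict:
    return {
        "in": bool(request & IOC_IN),
        "out": bool(request & IOC_OUT),
        "size": (request >> 16) & IOCPARM_MASK,
        "group": chr((request >> 8) & 0xFF),
        "num": request & 0xFF,
    }

info = decode_ioctl(0x80046833)
# IOC_IN, 4-byte payload, group 'h', command number 0x33: the shape of an
# _IOW('h', 0x33, <4-byte type>) definition.
```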
Inference:

- This is the exact lowered form of:

```c
int rc = auth_rootvp(rootvp);
if (rc != 0) {
    panic("rootvp not authenticated ...");
}

autherr = VNOP_IOCTL(rootvnode, FSIOC_KERNEL_ROOTAUTH, NULL, 0, vfs_context_kernel());
if (autherr) {
    panic("rootvp not authenticated after mounting");
}
```
## Pseudocode (After)

### Call-stack view

```c
int rc = 0;  // forced success
if (rc != 0) {
    panic("rootvp not authenticated ...");
}
```

Useful boot-path stack, expressed semantically rather than as a fake direct symbol chain:

- `kernel_bootstrap_thread`
  - `bsd_init`
    - `vfs_mountroot`
    - `IOSecureBSDRoot`
    - `VFS_ROOT` / `set_rootvnode`
    - `bsd_rooted_ramdisk`
    - `VNOP_IOCTL(rootvnode, FSIOC_KERNEL_ROOTAUTH, NULL, 0, vfs_context_kernel())`
    - failure path -> panic/report block using `"rootvp not authenticated after mounting @%s:%d"`
## Why The Existing B13 Matcher Is Wrong

### Old documented runtime hit is unrelated

Fact:

- Existing runtime-verification artifacts recorded B13 at `0xFFFFFE0007FB09DC`.
- IDA resolves that site to `exec_handle_sugid`, not `bsd_init`.
- The surrounding code is:

```asm
0xFFFFFE0007FB09D4  LDR  X0, [X20,#0x2B8]
0xFFFFFE0007FB09D8  CBZ  X0, 0xFFFFFE0007FB09E4
0xFFFFFE0007FB09DC  BL   sub_FFFFFE0007B84C5C
```

- That is exactly the shape the current patcher searches for.

## Symbol Consistency

- The `bsd_init` symbol and anchor context are consistent.
- Exact auth-call instruction bytes require the pre-patch image state for strict byte-for-byte confirmation.
### Why the heuristic false-positive happened

## Patch Metadata

Fact:

- Patch document: `patch_bsd_init_auth.md` (B13).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_bsd_init_auth.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols); no runtime patch execution.
- `scripts/patchers/kernel_jb_patch_bsd_init_auth.py` looks for:
  - `ldr x0, [xN, #0x2b8]`
  - `cbz x0, ...`
  - `bl ...`
- It then ranks candidates by:
  - proximity to a `bsd_init` string anchor,
  - presence of `"/dev/null"` in the function,
  - low caller count.
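The over-matching is easy to reproduce; a toy version of the triplet heuristic over pre-decoded `(mnemonic, op_str)` pairs (an illustrative stand-in, not the patcher's actual code) shows that the `exec_handle_sugid` site satisfies the same shape:

```python
# Illustrative stand-in for the patcher's triplet heuristic, operating on
# (mnemonic, op_str) pairs such as Capstone would produce. The shape alone
# cannot distinguish bsd_init's auth gate from unrelated code.
def find_triplet(insns):
    hits = []
    for i in range(len(insns) - 2):
        (m0, o0), (m1, o1), (m2, _) = insns[i], insns[i + 1], insns[i + 2]
        if (m0 == "ldr" and o0.startswith("x0, [") and o0.endswith("#0x2b8]")
                and m1 == "cbz" and o1.startswith("x0,")
                and m2 == "bl"):
            hits.append(i)
    return hits

# Decoded shape of the 0xFFFFFE0007FB09D4 site inside exec_handle_sugid:
exec_handle_sugid_like = [
    ("ldr", "x0, [x20, #0x2b8]"),
    ("cbz", "x0, #0xfffffe0007fb09e4"),
    ("bl", "#0xfffffe0007b84c5c"),
]
assert find_triplet(exec_handle_sugid_like) == [0]
```

Because `exec_handle_sugid` also carries a `"/dev/null"` reference, the ranking tiebreakers cannot rescue the shape match, which is the false positive described above.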
## Target Function(s) and Binary Location

Fact:

- Primary target: recovered symbol `bsd_init` at `0xfffffe0007f7add4`.
- The auth-check patchpoint is in the rootvp-authentication decision sequence documented in this file.
- `exec_handle_sugid` also references `"/dev/null"` in the same function.
- Therefore the heuristic can promote `exec_handle_sugid` even though it is semantically unrelated to root-volume auth.
## Kernel Source File Location

Conclusion:

- Expected XNU source: `bsd/kern/bsd_init.c`.
- Confidence: `high`.
- The current B13 implementation is not "slightly off"; it targets the wrong logical class of site.
- This explains why enabling B13 can break boot: it mutates an exec/credential path instead of the early root-auth gate.
## Function Call Stack

## Correct Patch Candidate(s)

- Primary traced chain (from `Call-Stack Analysis`):
  - Static callers of `bsd_init` (`0xfffffe0007f7add4`):
    - `sub_FFFFFE0007F7ACE0`
    - `sub_FFFFFE0007B43EE0`
- The patch point is in the rootvp/authentication decision path inside `bsd_init`, before the panic/report path that uses the rootvp-not-authenticated string.
- The upstream entry point(s) and the patched decision node are linked by direct xref/callsite evidence in this file.
### Preferred candidate: patch the return check, not the call target

## Patch Hit Points

Patch site:

- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`):
  - Find `ldr x0, [xN, #0x2b8] ; cbz x0, ... ; bl auth_fn`.
  - Expected replacement bytes:
    - after: `00 00 80 D2` (`mov x0, #0`)
- The before/after instruction transform is constrained to this validated site.
- `0xFFFFFE0007F7B98C` in `bsd_init`
  - instruction: `CBNZ W0, 0xFFFFFE0007F7BBF4`

## Current Patch Search Logic

Recommended transform:

- Implemented in `scripts/patchers/kernel_jb_patch_bsd_init_auth.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Anchor string: `"rootvp not authenticated after mounting @%s:%d"` at `0xfffffe000707d6bb`.
- Anchor xref: `0xfffffe0007f7bc04` inside `sub_FFFFFE0007F7ADD4` (the same function as `bsd_init`).
- before: `40 13 00 35`
- after: `1F 20 03 D5` (`NOP`)
## Validation (Static Evidence)

Effect:

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with the patcher's matcher intent.
- `VNOP_IOCTL(... FSIOC_KERNEL_ROOTAUTH ...)` still executes.
- Only the early boot failure gate is removed.
- This is the narrowest behavioral change that matches the XNU source intent.
## Expected Failure/Panic if Unpatched

### Secondary candidate: force the ioctl result to success

- The root-volume auth check can trigger the `"rootvp not authenticated ..."` panic/report path during early BSD init.

Patch site:

## Risk / Side Effects

- `0xFFFFFE0007F7B988` in `bsd_init`
  - instruction: `BLRAA X8, X17`
- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and a wider privileged surface for patched workflows.

Possible transform:

## Symbol Consistency Check

- before: `11 09 3F D7`
- after: `00 00 80 52` (`MOV W0, #0`)
- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`.
|
||||
- Canonical symbol hit(s): `bsd_init`.
|
||||
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
|
||||
- IDA-MCP lookup snapshot (2026-03-05): `bsd_init` -> `bsd_init` at `0xfffffe0007f7add4` (size `0xe3c`).
|
||||
Effect:
|
||||
|
||||
## Open Questions and Confidence
|
||||
- Skips the actual filesystem ioctl implementation entirely.
|
||||
- More invasive than patching the subsequent `CBNZ`.
|
||||
|
||||
- Open question: verify future firmware drift does not move this site into an equivalent but semantically different branch.
|
||||
- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence).
|
||||
Assessment:
|
||||
|
||||
## Evidence Appendix
|
||||
- If we need a first retest candidate, `NOP`-ing `CBNZ W0` is safer than replacing the call.
|
||||
- It preserves any filesystem side effects that happen during the auth ioctl and only suppresses the panic gate.
|
||||
|
||||
- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
|
||||
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.
|
||||
## What The Patch Does After It Is Correctly Retargeted

- Allows the system to continue booting even if the mounted root volume is not accepted by `FSIOC_KERNEL_ROOTAUTH`.
- Helps jailbreak-style boot flows where the root volume is intentionally modified and would otherwise fail the sealed/authenticated-root policy.
- Does **not** by itself disable MACF, AMFI, persona checks, syscall masks, or other post-boot kernel policy gates.
- In other words: B13 is a **boot-enablement patch**, not a whole-jailbreak patch.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function nodes, `1` patch-point VAs.
- IDA function sample: `exec_handle_sugid`
- Chain function sample: `exec_handle_sugid`
- Caller sample: `exec_mach_imgact`
- Callee sample: `exec_handle_sugid`, `sub_FFFFFE0007B0EA64`, `sub_FFFFFE0007B0F4F8`, `sub_FFFFFE0007B1663C`, `sub_FFFFFE0007B1B508`, `sub_FFFFFE0007B1C348`
- Verdict: `questionable`
- Recommendation: Hit is valid but patch is inactive in find_all(); enable only after staged validation.
- Key verified points:
  - `0xFFFFFE0007FB09DC` (`exec_handle_sugid`): mov x0,#0 [_bsd_init auth] | `a050ef97 -> 000080d2`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->
## Risk Notes

- This patch intentionally weakens authenticated-root enforcement during early boot.
- The most likely safe form is to skip only the panic branch.
- If downstream code later depends on rootauth state beyond this early gate, more work may still be required elsewhere; this document does **not** claim B13 alone is sufficient for a full JB boot.
## Recommended Retargeting Rule (Design Only, No Code Change Landed)

If B13 is reimplemented, the matcher should anchor on facts unique to this site:

1. Resolve `_bsd_init` / `bsd_init` first.
2. Stay inside that function only.
3. Find the post-`bsd_rooted_ramdisk` false path.
4. Require the literal `0x80046833` (`FSIOC_KERNEL_ROOTAUTH`) in the setup block.
5. Require the next call to be the indirect vnode-op call.
6. Patch the following `CBNZ W0, panic_block`.
7. Optionally verify the failure target reaches the rootvp-auth string at `0xFFFFFE0007F7BC04`.

This rule is materially stronger than the old `ldr x0,[...,#0x2b8]; cbz; bl` shape and should exclude `exec_handle_sugid` entirely.
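A simplified, stdlib-only sketch of steps 4-6 over pre-disassembled instructions (the real matcher would run on Capstone output restricted to `bsd_init`; the addresses on the two `mov` lines below are hypothetical, while the `blraa`/`cbnz` addresses are the documented site):

```python
FSIOC_KERNEL_ROOTAUTH = 0x80046833

def find_rootauth_cbnz(insns):
    """insns: iterable of (addr, mnemonic, op_str) tuples, already limited to bsd_init."""
    saw_literal = saw_indirect_call = False
    for addr, mnem, ops in insns:
        if mnem in ("mov", "movz", "movk") and "#0x6833" in ops:
            saw_literal = True            # low half of the 0x80046833 literal
        elif saw_literal and mnem in ("blr", "blraa"):
            saw_indirect_call = True      # the indirect vnode-op call
        elif saw_indirect_call and mnem == "cbnz" and ops.startswith("w0"):
            return addr                   # unique candidate: NOP this gate
    return None

insns = [
    (0xFFFFFE0007F7B970, "mov",   "w2, #0x6833"),           # hypothetical setup address
    (0xFFFFFE0007F7B974, "movk",  "w2, #0x8004, lsl #16"),  # hypothetical setup address
    (0xFFFFFE0007F7B988, "blraa", "x8, x17"),
    (0xFFFFFE0007F7B98C, "cbnz",  "w0, #0xfffffe0007f7bbf4"),
]
assert find_rootauth_cbnz(insns) == 0xFFFFFE0007F7B98C
```

A real implementation would also enforce steps 1-3 and 7 (function bounds, the `bsd_rooted_ramdisk` false path, and the panic-string reachability check) before accepting the candidate.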
## Validation Status

- Validation note: on the current reference IM4P kernel, in-image symbol resolution returns `0` symbols, so B13 is currently found by anchor recovery rather than external symbol data.
- In-memory validation against `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` succeeds after IM4P decompression.
- `KernelJBPatcher._build_method_plan()` now includes `patch_bsd_init_auth`.
- Live patch hit: `0xFFFFFE0007F7B98C` / file offset `0x00F7798C` / `CBNZ W0, panic` -> `NOP`.
- Historical false-positive hit `0xFFFFFE0007FB09DC` is no longer selected.
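The live-hit numbers are internally consistent with the Base VA reported in the runtime verification section; a minimal check:

```python
BASE_VA = 0xFFFFFE0007004000           # Base VA from the runtime verification section

va = 0xFFFFFE0007F7B98C                # live patch hit
assert va - BASE_VA == 0x00F7798C      # matches the reported file offset

# "1F 20 03 D5" really is the AArch64 NOP word (0xD503201F, little-endian)
assert int.from_bytes(bytes.fromhex("1F2003D5"), "little") == 0xD503201F
```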
## Implementation Status

- Landed in `scripts/patchers/kernel_jb_patch_bsd_init_auth.py`.
- Default JB schedule re-enabled in `scripts/patchers/kernel_jb.py`.
- Implemented form: patch the in-function `CBNZ W0, panic` gate in `bsd_init`.
- Capstone semantic checks only: no raw-offset targeting and no operand-string/literal hardcoding in the final matcher.
## Confidence

- Confidence that `0xFFFFFE0007F7B988` / `0xFFFFFE0007F7B98C` is the real B13 site: **high**.
- Confidence that `0xFFFFFE0007FB09DC` is a false-positive site: **high**.
- Confidence that `NOP CBNZ` is a better first retest than `MOV W0,#0` on the call: **medium**, because APFS-side behavior is closed-source and may have side effects not visible from XNU alone.
# C21 `patch_cred_label_update_execve`

## Scope (revalidated with static analysis)

- Target patch method: `KernelJBPatchCredLabelMixin.patch_cred_label_update_execve` in `scripts/patchers/kernel_jb_patch_cred_label.py`.
- Target function in kernel: `jb_c21_patch_target_amfi_cred_label_update_execve` (`0xFFFFFE000863FC6C`).
- Patch-point label (inside function): `jb_c21_patchpoint_retab_redirect` (`0xFFFFFE000864011C`, original `RETAB` site).
- Kernel used for reverse engineering: `kernelcache.research.vphone600`.
- IDA symbol / address: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi` at `0xFFFFFE000864DEFC`.
- XNU semantic reference: `research/reference/xnu/security/mac_vfs.c`, `research/reference/xnu/bsd/kern/kern_exec.c`, `research/reference/xnu/bsd/kern/kern_credential.c`, `research/reference/xnu/osfmk/kern/cs_blobs.h`.
## Verified call/dispatch trace (no trust in old notes)

This note is a fresh re-analysis. Older notes for this patch were treated as untrusted and not reused as ground truth.

1. Exec pipeline enters `jb_c21_supp_exec_handle_image` (`0xFFFFFE0007FA4A58`).
2. It calls `jb_c21_supp_exec_policy_stage` (`0xFFFFFE0007FA6858`).
3. That stage schedules `jb_c21_supp_exec_policy_wrapper` (`0xFFFFFE0007F81F00`).
4. Wrapper calls `jb_c21_supp_mac_policy_dispatch_ops90_execve` (`0xFFFFFE00082D9D0C`).
5. Dispatcher loads callback from `policy->ops + 0x90` at `jb_c21_supp_dispatch_load_ops_off90` (`0xFFFFFE00082D9DBC`) and calls it at `jb_c21_supp_dispatch_call_ops_off90` (`0xFFFFFE00082D9FCC`, `BLRAA ... X17=#0xEC79`).

This `+0x90` slot is the shared execve cred-label hook slot used by both AMFI and Sandbox hooks.

## Call Stack

Exec-time path in XNU source:

1. `exec_handle_sugid()` asks `mac_cred_check_label_update_execve(...)` whether any MAC policy wants an exec-time credential transition.
2. If yes, `exec_handle_sugid()` calls `kauth_proc_label_update_execve(...)`.
3. `kauth_proc_label_update_execve(...)` allocates / updates the new credential and calls `mac_cred_label_update_execve(...)`.
4. `mac_cred_label_update_execve(...)` iterates `mac_policy_list` and invokes each policy's `mpo_cred_label_update_execve` hook.
5. AMFI's hook is `_cred_label_update_execve` in `com.apple.driver.AppleMobileFileIntegrity`.

Relevant source anchors:

- `research/reference/xnu/bsd/kern/kern_exec.c:6854`
- `research/reference/xnu/bsd/kern/kern_exec.c:6950`
- `research/reference/xnu/bsd/kern/kern_credential.c:4367`
- `research/reference/xnu/security/mac_vfs.c:777`

## How AMFI wires this callback

- `jb_c21_supp_amfi_init_register_policy_ops` (`0xFFFFFE0008640718`) builds AMFI `mac_policy_ops` and writes `jb_c21_patch_target_amfi_cred_label_update_execve` into offset `+0x90` (store at `0xFFFFFE0008640AA0`).
- Then it registers the policy descriptor via `sub_FFFFFE00082CDDB0` (mac policy register path).

## What the unpatched function enforces

Inside `jb_c21_patch_target_amfi_cred_label_update_execve`:

- Multiple explicit kill paths return failure (`W0=1`) for unsigned/forbidden exec cases.
- A key branch logs and kills with:
  - `"dyld signature cannot be verified... or ... unsigned application outside of a supported development configuration"`
- It conditionally mutates `*a10` (`cs_flags`) and later checks validity bits before honoring entitlements.
- If validity path is not satisfied, it logs `"not CS_VALID, not honoring entitlements"` and skips entitlement-driven flag propagation.

## What The Function Actually Does

Reverse engineering of `0xFFFFFE000864DEFC` shows that AMFI's hook is not just a boolean kill gate.

It performs all of the following before returning success or failure:

- validates the exec target / `cs_blob` and reports AMFI analytics;
- checks multiple kill conditions and returns `1` on rejection;
- mutates `*csflags` during successful exec handling;
- derives extra flags from entitlement state;
- performs final bookkeeping before returning `0`.

Observed kill / deny subpaths in IDA:

- completely unsigned code path;
- Restricted Execution Mode denials;
- legacy VPN plugin rejection;
- dyld signature verification failure;
- helper failure from `sub_FFFFFE000864E5A0(...)` with reason string.

All of those failure edges converge on the shared kill return at `0xFFFFFE000864E38C` (`mov w0, #1`).

Observed success-path `csflags` mutations in IDA:

- `0xFFFFFE000864E1E8`: ORs `0x2200` or `0x200` into `*csflags` depending on dyld / helper state.
- `0xFFFFFE000864E200`: ORs `0x802A00` into `*csflags` when AMFI-derived entitlement flags require SIP-style inheritance.
- `0xFFFFFE000864E4EC`, `0xFFFFFE000864E500`, `0xFFFFFE000864E51C`, `0xFFFFFE000864E534`: OR installer / rootless / datavault / NVRAM-related bits into `*csflags`.
- `0xFFFFFE000864E570`: ORs `0x2A00` into `*csflags` in the final success tail.

The relevant flag meanings from XNU are in `research/reference/xnu/osfmk/kern/cs_blobs.h:32`.

## Why C21 is required (full picture)

C21 is not just another allow-return patch; it is a **state-fix patch** for `cs_flags` at execve policy time.

Patch shellcode behavior (from patcher implementation):

- Load `cs_flags` pointer from stack (`arg9` path).
- `ORR` with `0x04000000` and `0x0000000F`.
- `AND` with `0xFFFFC0FF` (clears bits in `0x00003F00`).
- Store back and return success (`X0=0`).

Practical effect:

- Unsigned binaries avoid AMFI execve kill outcomes **and** get permissive execution flags instead of failing later due to bad flag state.
- For launchd dylib injection (`/cores/launchdhook.dylib`), this patch is critical because the unpatched path can still fail on dyld-signature / restrictive-flag checks even if a generic kill-return patch exists elsewhere.
- Clearing the `0x3F00` cluster and forcing low/upper bits ensures launch context is treated permissively enough for injected non-Apple-signed payload flow.

## Relationship with Sandbox hook (important)

- Sandbox also has a cred-label execve hook in the same ops slot (`+0x90`):
  - `jb_c21_supp_sandbox_hook_cred_label_update_execve` (`0xFFFFFE00093BDB64`)
- That Sandbox hook contains policy such as `"only launchd is allowed to spawn untrusted binaries"`.
So launchd-dylib viability depends on **combined behavior**:

- Sandbox hook policy acceptance for launch context, and
- AMFI C21 flag/state coercion so dyld/code-signing state does not re-kill or strip required capability.

## Why The Old Patch Broke Boot

The previous implementations were both too broad:

1. the original shellcode version forged new `csflags` at function exit;
2. the later "low-risk" version simply returned from function entry.

The entry-return strategy is fundamentally wrong for boot stability because it skips AMFI's normal exec-time work entirely.

That means it bypasses:

- `cs_blob` / signature-state handling;
- AMFI auxiliary analytics / bookkeeping;
- entitlement-derived `csflags` propagation;
- final per-exec state setup that later code expects to have happened.

In short: `_cred_label_update_execve` is on the boot-critical exec path, so turning it into an unconditional `return 0` is not a safe jailbreak strategy.

## IDA labels added in this verification pass

- **patched-function group**:
  - `jb_c21_patch_target_amfi_cred_label_update_execve` @ `0xFFFFFE000863FC6C`
  - `jb_c21_patchpoint_retab_redirect` @ `0xFFFFFE000864011C`
  - `jb_c21_ref_shared_kill_return` @ `0xFFFFFE00086400FC`
- **supplement group**:
  - `jb_c21_supp_exec_handle_image` @ `0xFFFFFE0007FA4A58`
  - `jb_c21_supp_exec_policy_stage` @ `0xFFFFFE0007FA6858`
  - `jb_c21_supp_exec_policy_wrapper` @ `0xFFFFFE0007F81F00`
  - `jb_c21_supp_mac_policy_dispatch_ops90_execve` @ `0xFFFFFE00082D9D0C`
  - `jb_c21_supp_dispatch_load_ops_off90` @ `0xFFFFFE00082D9DBC`
  - `jb_c21_supp_dispatch_call_ops_off90` @ `0xFFFFFE00082D9FCC`
  - `jb_c21_supp_amfi_start` @ `0xFFFFFE0008640624`
  - `jb_c21_supp_amfi_init_register_policy_ops` @ `0xFFFFFE0008640718`
  - `jb_c21_supp_sandbox_hook_cred_label_update_execve` @ `0xFFFFFE00093BDB64`
  - `jb_c21_supp_sandbox_execve_context_gate` @ `0xFFFFFE00093BC054`

## Symbol Consistency Audit (2026-03-05)

- Status: `partial`
- Recovered symbol `_hook_cred_label_update_execve` is present and consistent.
- Many `jb_*` helper names in this file are analyst aliases and do not all appear in recovered symbol JSON.

## Patch Metadata

- Patch document: `patch_cred_label_update_execve.md` (C21).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_cred_label.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.
## Repaired Patch Strategy

The current C21-v1 patcher no longer returns from function entry and no longer hijacks the beginning of the success tail.

Instead it:

1. keeps AMFI's full exec-time logic intact;
2. finds the canonical epilogue at `0xFFFFFE000864E390`;
3. redirects the shared deny return (`0xFFFFFE000864E38C`) and both late success exits (`0xFFFFFE000864E580`, `0xFFFFFE000864E588`) into one common trampoline;
4. reloads `u_int *csflags` from the function's own stack slot in the cave, so the cave works for both deny and success exits;
5. clears only the restrictive execution bits from `*csflags`;
6. forces `w0 = 0` and branches into the original epilogue.

The current trampoline clears this mask:

- `CS_HARD`
- `CS_KILL`
- `CS_CHECK_EXPIRATION`
- `CS_RESTRICT`
- `CS_ENFORCEMENT`
- `CS_REQUIRE_LV`

Bitmask used by the patcher: `0xFFFFC0FF`.

This preserves AMFI's normal validation / entitlement work while removing the sticky exec-time restrictions that are most hostile to jailbreak tooling.

## Patch Goal

Redirect cred-label execve handling to shellcode that coerces permissive cs_flags and returns success.

## Target Function(s) and Binary Location

- Primary target: AMFI cred-label callback body at `0xfffffe000863fc6c`.
- Patchpoint: `0xfffffe000864011c` (`retab` redirect to injected shellcode/cave).

## Kernel Source File Location

- Component: AMFI policy callback implementation in kernel collection (private).
- Related open-source MAC framework context: `security/mac_process.c` + exec paths in `bsd/kern/kern_exec.c`.
- Confidence: `medium`.

## Function Call Stack

- Primary traced chain (from `Verified call/dispatch trace (no trust in old notes)`):
  - 1. Exec pipeline enters `jb_c21_supp_exec_handle_image` (`0xFFFFFE0007FA4A58`).
  - 2. It calls `jb_c21_supp_exec_policy_stage` (`0xFFFFFE0007FA6858`).
  - 3. That stage schedules `jb_c21_supp_exec_policy_wrapper` (`0xFFFFFE0007F81F00`).
  - 4. Wrapper calls `jb_c21_supp_mac_policy_dispatch_ops90_execve` (`0xFFFFFE00082D9D0C`).
  - 5. Dispatcher loads callback from `policy->ops + 0x90` at `jb_c21_supp_dispatch_load_ops_off90` (`0xFFFFFE00082D9DBC`) and calls it at `jb_c21_supp_dispatch_call_ops_off90` (`0xFFFFFE00082D9FCC`, `BLRAA ... X17=#0xEC79`).
- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file.
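The `0xFFFFC0FF` clear mask used by the C21 trampoline follows directly from the CS_* constants it targets (values as defined in XNU's `cs_blobs.h`); a quick derivation:

```python
# CS_* constants as defined in xnu/osfmk/kern/cs_blobs.h
CS_HARD             = 0x0000100
CS_KILL             = 0x0000200
CS_CHECK_EXPIRATION = 0x0000400
CS_RESTRICT         = 0x0000800
CS_ENFORCEMENT      = 0x0001000
CS_REQUIRE_LV       = 0x0002000

restrictive = (CS_HARD | CS_KILL | CS_CHECK_EXPIRATION |
               CS_RESTRICT | CS_ENFORCEMENT | CS_REQUIRE_LV)

assert restrictive == 0x3F00                      # the "0x3F00 cluster"
assert ~restrictive & 0xFFFFFFFF == 0xFFFFC0FF    # the patcher's AND mask
```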
## C21-v1 Scope

This is intentionally the smallest credible C21-only design:

- it does not depend on `patch_amfi_execve_kill_path`;
- it does not patch function entry;
- it does not forge `CS_VALID`, `CS_PLATFORM_BINARY`, `CS_ADHOC`, or other high-risk identity bits;
- it only converts late exits in `_cred_label_update_execve` to success and normalizes the restrictive `0x3F00` cluster.

## Patch Hit Points

- Patch hitpoint is selected by contextual matcher and verified against local control-flow.
- Before/after instruction semantics are captured in the patch-site evidence above.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_cred_label.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).

## C21-v1 Outcome

- User restore testing confirms C21-v1 boots successfully.
- That result validates the central design assumption: `_cred_label_update_execve` can be patched safely as long as AMFI's main body is preserved and only the final exits are redirected.
## Pseudocode (Before)

```c
if (amfi_checks_fail || cs_flags_invalid) {
    return 1;
}
return apply_default_execve_flags(...);
```

## Pseudocode (After)

```c
cs_flags |= 0x04000000 | 0x0000000F;
cs_flags &= 0xFFFFC0FF;
return 0;
```

## Validation (Static Evidence)

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.

## Dry-Run Verification (extracted PCC 26.1 research kernel)

Dry-run patch generation against the extracted raw Mach-O from `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600` produced the following C21-v1 shape:

- code cave: `0x00AB0F00`
- shared deny-return branch site: `0x0163C0FC`
- late success-exit branch sites: `0x0163C2F0`, `0x0163C2F8`

Emitted trampoline body:

- `ldr x26, [x29, #0x18]`
- `cbz x26, +0x10`
- `ldr w8, [x26]`
- `and w8, w8, #0xFFFFC0FF`
- `str w8, [x26]`
- `mov w0, #0`
- `b epilogue`

Observed C21-v1 raw patch count: `10`
- `7` instructions in the trampoline cave
- `3` patched branch sites in `_cred_label_update_execve`

## Expected Failure/Panic if Unpatched

- Exec policy path preserves restrictive `cs_flags` and deny returns, causing AMFI kill outcomes or later entitlement-state failures.
## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows.

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`.
- Canonical symbol hit(s): none (alias-based static matching used).
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe000863fc6c` currently resolves to `__ZN18AppleMobileApNonce21_saveNonceInfoInNVRAMEPKc` (size `0x250`).

## Open Questions and Confidence

- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain.
- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial).

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## C21-v2 Refinement

After C21-v1 boot success, the patch was refined to separate deny and success semantics instead of using one common cave for all exits.

### Reason for v2

C21-v1 proved that the late-exit structure is safe enough to boot, but it still cleared `0x3F00` on the shared deny path. That is broader than necessary.

C21-v2 narrows that behavior:

- deny exit: force only `w0 = 0`, then return through the original epilogue;
- success exits: keep the late `csflags` normalization path.

### C21-v2 dry-run shape

- deny cave: `0x00AB02B8`
- success cave: `0x00AB0F00`
- deny-return branch site: `0x0163C0FC`
- late success-exit branch sites: `0x0163C2F0`, `0x0163C2F8`

Observed C21-v2 raw patch count: `12`
- `2` instructions in the deny cave
- `7` instructions in the success cave
- `3` patched branch sites in `_cred_label_update_execve`

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (2 patch writes, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `True`
- IDA mapping: `2/2` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function nodes, `3` patch-point VAs.
- IDA function sample: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`
- Chain function sample: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`
- Caller sample: `__ZL35_initializeAppleMobileFileIntegrityv`
- Callee sample: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`, `__ZN24AppleMobileFileIntegrity27submitAuxiliaryInfoAnalyticEP5vnodeP7cs_blob`, `sub_FFFFFE0007B4EA8C`, `sub_FFFFFE0007CD7750`, `sub_FFFFFE0007CD7760`, `sub_FFFFFE0007F8C478`
- Verdict: `valid`
- Recommendation: Keep enabled for this kernel build; continue monitoring for pattern drift.
- Policy note: method is in the low-risk optimized set (validated hit on this kernel).
- Key verified points:
  - `0xFFFFFE000864DF00` (`__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`): mov x0,xzr [_cred_label_update_execve low-risk] | `ff4302d1 -> e0031faa`
  - `0xFFFFFE000864DF04` (`__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`): retab [_cred_label_update_execve low-risk] | `fc6f03a9 -> ff0f5fd6`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->
## C21-v3 Refinement

After preparing the safer split-exit structure in v2, the next experimental step adds only the smallest helper-bit subset from the older upstream idea.

### Reason for v3

The old upstream shellcode not only cleared restrictive flags, but also set a much broader collection of identity / helper bits. Most of those are too risky to restore directly.

C21-v3 keeps the v2 structure and adds only this success-only increment:

- `CS_GET_TASK_ALLOW` (`0x4`)
- `CS_INSTALLER` (`0x8`)

Combined set mask used by v3: `0x0000000C`

### C21-v3 dry-run shape

- deny cave: `0x00AB02B8`
- success cave: `0x00AB0F00`
- deny-return branch site: `0x0163C0FC`
- late success-exit branch sites: `0x0163C2F0`, `0x0163C2F8`

Observed C21-v3 raw patch count: `13`

- `2` instructions in the deny cave
- `8` instructions in the success cave
- `3` patched branch sites in `_cred_label_update_execve`

Success-cave body now becomes:

- `ldr x26, [x29, #0x18]`
- `cbz x26, +0x10`
- `ldr w8, [x26]`
- `and w8, w8, #0xFFFFC0FF`
- `orr w8, w8, #0xC`
- `str w8, [x26]`
- `mov w0, #0`
- `b epilogue`
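The success-cave `csflags` transform can be mirrored in Python to sanity-check the masks (this models the cave's semantics, not the emitted bytes):

```python
def c21_v3_success_fixup(csflags: int) -> int:
    """Mirror of the v3 success cave: clear the 0x3F00 cluster, then set 0xC."""
    return (csflags & 0xFFFFC0FF) | 0xC

# CS_VALID (0x1) with CS_HARD|CS_KILL (0x300) set: the restrictive bits drop,
# CS_GET_TASK_ALLOW|CS_INSTALLER (0xC) are added, CS_VALID survives.
assert c21_v3_success_fixup(0x0000301) == 0x000000D
```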
## Intended Effect

After the repaired patch:

- AMFI still runs its normal exec-time hook and keeps boot-critical side effects intact.
- C21 now carries its own late deny→allow transition inside `_cred_label_update_execve`.
- Successfully launched processes end up with a less restrictive `csflags` set, especially around kill / hard / library-validation style behavior.

This is a much narrower and more defensible jailbreak patch than forcing an unconditional success return at function entry.

## Current Status

- Patch implementation updated in `scripts/patchers/kernel_jb_patch_cred_label.py` as C21-v3.
- C21-v1 has already booted successfully in restore testing.
- Default schedule remains disabled in `scripts/patchers/kernel_jb.py` until C21-v3 restore / boot validation is rerun.
- Expected dry-run patch shape for C21-v3 is:
  - 1 deny cave;
  - 1 success cave;
  - 1 branch patch at the shared deny return;
  - 2 branch patches at the two late success exits.
- The current dry-run matches that expected shape exactly.
- If C21-v3 regresses boot, the most likely cause is not the split late-exit structure, but the newly added `0xC` helper-bit OR on the success path.
# C23 `patch_hook_cred_label_update_execve`

## Patch Goal

Install an inline trampoline on the sandbox cred-label execve hook, inject ownership-propagation shellcode, and resume original hook flow safely.

## Scope

- Kernel analyzed: `kernelcache.research.vphone600`
- Concrete target image: `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600`
- Analysis date: `2026-03-06`
- Method: IDA MCP + local `research/reference/xnu` + focused Python dry-run
- Trust policy: historical notes for this patch were treated as untrusted and re-derived from the live PCC 26.1 research kernel
## Binary Targets (IDA + Recovered Symbols)

- Sandbox policy strings/data:
  - `"Sandbox"` pointer at `0xfffffe0007a66cc0`
  - `"Seatbelt sandbox policy"` pointer at `0xfffffe0007a66cc8`
  - `mpc_ops` table at `0xfffffe0007a66d20`
- Dynamic hook selection (ops[0..29], max size):
  - selected entry: `ops[18] = 0xfffffe00093d2ce4` (size `0x1070`)
- Recovered hook symbol (callee in this path):
  - `_hook_cred_label_update_execve` at `0xfffffe00093d0d0c`
- `vnode_getattr` resolution by string-near-BL method:
  - string `%s: vnode_getattr: %d` xref at `0xfffffe00084caa18`
  - nearest preceding BL target: `0xfffffe0007cd84f8`

## Executive Verdict

`patch_hook_cred_label_update_execve` should be implemented as a **faithful upstream C23 wrapper trampoline**, not as an early-return patch.

The correct PCC 26.1 target is the sandbox `mac_policy_ops[18]` entry for `mpo_cred_label_update_execve`. On this kernel that table entry points to the wrapper at `0xfffffe00093bdb64` (`sub_FFFFFE00093BDB64`), not directly to the internal helper at `0xfffffe00093bbbf4` (`sub_FFFFFE00093BBBF4`).

## Call-Stack Analysis

- MAC framework dispatch -> `mac_policy_ops[18]` (`0xfffffe00093d2ce4`) -> internal call to `_hook_cred_label_update_execve` (`0xfffffe00093d0d0c`).
- No direct code xrefs to `ops[18]` function (expected: data-driven dispatch table call path).
The rebuilt repo implementation now follows upstream C23 behavior:

- retarget `ops[18]` to a code cave,
- assemble the cave body via keystone `asm()` instead of hardcoded instruction words,
- fetch file metadata with `vnode_getattr(vp, &vap, vfs_context_current())`,
- if `VSUID`/`VSGID` are present, copy owner UID/GID into the pending new credential,
- set `proc->p_flag |= P_SUGID` when either field changes,
- then branch back to the original wrapper.

This means C23 is **not** a direct sandbox-disable patch. It is a compatibility trampoline that preserves exec-time setugid credential state before the normal sandbox wrapper continues.

## Patch-Site / Byte-Level Change

- Trampoline site: `0xfffffe00093d2ce4`
- Before:
  - bytes: `7F 23 03 D5`
  - asm: `PACIBSP`
- After:
  - asm: `B cave` (PC-relative, target depends on allocated cave offset)
- Cave semantics:
  - slot 0: relocated `PACIBSP`
  - slot 18: `BL vnode_getattr_target`
  - tail: restore regs + `B hook+4`
## Pseudocode (Before)

```c
int hook_cred_label_update_execve(args...) {
    // original sandbox hook logic
    ...
}
```

## Pseudocode (After)

```c
int hook_entry(args...) {
    branch_to_cave();
}

int cave(args...) {
    pacibsp();
    if (vp != NULL) {
        vnode_getattr(vp, &vap, &ctx);
        propagate_uid_gid_if_needed(new_cred, vap, proc);
    }
    branch_to_hook_plus_4();
}
```

## Verified Binary Facts

### 1. The live PCC 26.1 `ops[18]` entry points to the wrapper

Focused dry-run and local pointer decode on `kernelcache.research.vphone600` show:

- sandbox `mac_policy_conf` at file offset `0x00A54428`
- `mpc_ops` table at file offset `0x00A54488`
- `ops[18]` entry at file offset `0x00A54518`
- original raw chained pointer: `0x8010EC79023B9B64`
- decoded target file offset: `0x023B9B64`
- decoded target VA: `0xfffffe00093bdb64`

So on this kernel, `ops[18]` is the wrapper `sub_FFFFFE00093BDB64`.

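The chained-pointer decode above can be reproduced with a short pure-Python sketch. The base VA and the raw/decoded values come from this note; the helper names, and the assumption that this fixup format keeps the target offset in the low 32 bits, are illustrative:

```python
BASE_VA = 0xFFFFFE0007004000  # research kernel base VA from this note

def decode_chained_target(raw: int) -> tuple[int, int]:
    # Assumed layout for this kernelcache's chained-fixup rebase entries:
    # the low 32 bits carry the target file offset, the high bits hold
    # fixup/auth metadata.
    file_off = raw & 0xFFFFFFFF
    return file_off, BASE_VA + file_off

def retarget(raw: int, new_off: int) -> int:
    # Preserve the metadata bits and swap in a new target offset
    # (e.g. the C23 code-cave offset).
    return (raw & ~0xFFFFFFFF) | new_off

off, va = decode_chained_target(0x8010EC79023B9B64)
assert off == 0x023B9B64 and va == 0xFFFFFE00093BDB64
assert retarget(0x8010EC79023B9B64, 0x00AB1720) == 0x8010EC7900AB1720
```

The last assertion reproduces the retargeted qword quoted later in this note.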
### 2. The wrapper calls the internal helper

IDA MCP on the same PCC 26.1 research kernel shows:

- wrapper: `sub_FFFFFE00093BDB64`
- inner helper: `sub_FFFFFE00093BBBF4`
- call site inside wrapper: `0xfffffe00093be8d0`

So the runtime call chain is:

- sandbox policy table `ops[18]`
- wrapper `sub_FFFFFE00093BDB64`
- internal helper `sub_FFFFFE00093BBBF4`

### 3. Faithful upstream C23 branches back to the wrapper, not the helper

The rebuilt C23 cave uses the same high-level structure as upstream:

- save argument registers,
- call `vfs_context_current`,
- call `vnode_getattr`,
- update the pending credential UID/GID from the vnode owner when `VSUID`/`VSGID` are set,
- set `P_SUGID`,
- restore registers,
- branch back to the original wrapper entry.

For PCC 26.1, the resolved helper targets are:

- `vfs_context_current` body at file offset `0x00B756DC`
- `vnode_getattr` body at file offset `0x00CC91B4`
- branch-back target wrapper at file offset `0x023B9B64`

## XNU Cross-Reference

Open-source XNU confirms the field semantics used by the faithful C23 shellcode:

- `VSUID` / `VSGID` are defined in `research/reference/xnu/bsd/sys/vnode.h:807`
- `struct vnode_attr::{va_uid, va_gid, va_mode}` are defined in `research/reference/xnu/bsd/sys/vnode.h:690`
- `struct ucred::cr_uid` is defined in `research/reference/xnu/bsd/sys/ucred.h:155`
- `cr_gid` aliases `cr_groups[0]` in `research/reference/xnu/bsd/sys/ucred.h:211`
- `P_SUGID` is defined in `research/reference/xnu/bsd/sys/proc.h:177`
- exec-time MAC label update reaches this area through `kauth_proc_label_update_execve(...)` in `research/reference/xnu/bsd/kern/kern_credential.c:4367`
- exec-path setugid handling is in `exec_handle_sugid(...)` in `research/reference/xnu/bsd/kern/kern_exec.c:6833`

## Symbol Consistency

- The `_hook_cred_label_update_execve` symbol is present and aligned with the call-path evidence.
- The `ops[18]` wrapper itself has no recovered explicit symbol name; its behavior is consistent with a sandbox MAC dispatch wrapper.

## Patch Metadata

- Patch document: `patch_hook_cred_label_update_execve.md` (C23).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_hook_cred_label.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.

## Target Function(s) and Binary Location

- Primary target: hook/trampoline path around `hook_cred_label_update_execve`.
- Patch hit combines an inline branch rewrite plus code-cave logic, with addresses listed below.

## Kernel Source File Location

- Component: sandbox/AMFI hook glue around the execve cred-label callback (partially private in the KC).
- Related open-source context: `security/mac_process.c`, `bsd/kern/kern_exec.c`.
- Confidence: `low`.

## Function Call Stack

- Primary traced chain (from `Call-Stack Analysis`):
  - MAC framework dispatch -> `mac_policy_ops[18]` (`0xfffffe00093d2ce4`) -> internal call to `_hook_cred_label_update_execve` (`0xfffffe00093d0d0c`).
  - No direct code xrefs to the `ops[18]` function (expected: data-driven dispatch-table call path).
- The upstream entry(s) and the patched decision node are linked by direct xref/callsite evidence in this file.

## Patch Hit Points

- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`):
  - Trampoline site: `0xfffffe00093d2ce4`
  - Before:
    - bytes: `7F 23 03 D5`
    - asm: `PACIBSP`
  - After:
    - asm: `B cave` (PC-relative; target depends on the allocated cave offset)
- The before/after instruction transform is constrained to this validated site.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_hook_cred_label.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).

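As a behavioral illustration of the cave semantics, not the actual shellcode, the setugid propagation can be modeled in Python. The constants are the standard XNU values from the headers cited above; the function name and argument shape are hypothetical:

```python
VSUID, VSGID = 0o4000, 0o2000  # bsd/sys/vnode.h
P_SUGID = 0x00000100           # bsd/sys/proc.h

def propagate_setugid(va_mode, va_uid, va_gid, cred_uid, cred_gid, p_flag):
    # Mirror the C23 cave: adopt the file owner's uid/gid when the
    # setuid/setgid mode bits are present, and mark the proc with
    # P_SUGID only when a credential field actually changes.
    changed = False
    if va_mode & VSUID and cred_uid != va_uid:
        cred_uid, changed = va_uid, True
    if va_mode & VSGID and cred_gid != va_gid:
        cred_gid, changed = va_gid, True
    if changed:
        p_flag |= P_SUGID
    return cred_uid, cred_gid, p_flag

# setuid-root binary (mode 4755) exec'd by uid 501: uid flips, P_SUGID set
uid, gid, fl = propagate_setugid(0o4755, 0, 20, 501, 501, 0)
assert (uid, gid) == (0, 501) and fl & P_SUGID
```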
## What C23 Does After Rebuild

### Facts

The rebuilt C23 now does exactly two writes in focused dry-run, and the cave body is keystone-generated rather than hand-written as raw instruction words:

1. retarget `ops[18]` from the original wrapper pointer to the code cave
2. emit a `0xB8`-byte cave implementing the setugid fixup trampoline

Focused dry-run output on `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600`:

- `0x00A54518` — retarget `ops[18]` to the faithful C23 cave
- `0x00AB1720` — faithful upstream C23 cave body

The patched chained-pointer qword becomes:

- new raw entry: `0x8010EC7900AB1720`

### Inference

C23’s role in the jailbreak patchset is best understood as a **boot-safety / semantic-preservation shim** around exec-time sandbox transition handling.

It does **not** directly remove the sandbox wrapper. Instead it ensures that setuid/setgid-derived credential state is already reflected in the pending exec credential before the original sandbox wrapper runs. That is consistent with the historical upstream choice to preserve exec-time credential semantics while other jailbreak patches relax deny decisions elsewhere.

## Validation (Static Evidence)

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.

## Expected Failure/Panic if Unpatched

- The exec hook path retains ownership/suid propagation restrictions, leading to launch denial or broken privilege-state transitions.

## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and a wider privileged surface for patched workflows.

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`.
- Canonical symbol hit(s): `_hook_cred_label_update_execve`.
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `_hook_cred_label_update_execve` resolved at `0xfffffe00093d0d0c` (size `0x460`).

## Open Questions and Confidence

- Open question: verify that future firmware drift does not move this site into an equivalent but semantically different branch.
- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence).

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (2 patch writes, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `True`
- IDA mapping: `2/2` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function node, `2` patch-point VAs.
- IDA function sample: `sub_FFFFFE00093D2CE4`
- Chain function sample: `sub_FFFFFE00093D2CE4`
- Caller sample: none
- Callee sample: `__sfree_data`, `_hook_cred_label_update_execve`, `_sb_evaluate_internal`, `persona_put_and_unlock`, `proc_checkdeadrefs`, `sub_FFFFFE0007AC57A0`
- Verdict: `valid`
- Recommendation: keep enabled for this kernel build; continue monitoring for pattern drift.
- Policy note: method is in the low-risk optimized set (validated hit on this kernel).
- Key verified points:
  - `0xFFFFFE00093D2CE8` (`sub_FFFFFE00093D2CE4`): mov x0,xzr [_hook_cred_label_update_execve low-risk] | `fc6fbaa9 -> e0031faa`
  - `0xFFFFFE00093D2CEC` (`sub_FFFFFE00093D2CE4`): retab [_hook_cred_label_update_execve low-risk] | `fa6701a9 -> ff0f5fd6`
- Artifacts:
  - `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
  - `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
  - `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
  - `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## Validation Status

### Syntax validation

Passed:

- `python3 -m py_compile scripts/patchers/kernel_jb_patch_hook_cred_label.py scripts/patchers/kernel_jb.py`

### Focused dry-run validation

Passed in-memory only; no firmware image was written back.

Observed output:

- 2 patches emitted
- `ops[18]` correctly decoded and retargeted
- cave placed at `0x00AB1720`
- cave branches back to wrapper `0x023B9B64`
- cave encodes BL calls to `vfs_context_current` and `vnode_getattr`

## Repo Status After This Pass

- `scripts/patchers/kernel_jb_patch_hook_cred_label.py` now implements faithful upstream C23 semantics
- `scripts/patchers/kernel_jb.py` includes `patch_hook_cred_label_update_execve` in the active Group C schedule
- `research/00_patch_comparison_all_variants.md` should describe C23 as a faithful wrapper trampoline, not as a mis-targeted early-return patch

## Practical Effect

After the rebuild, C23 should provide the following effect on the current PCC 26.1 research kernel:

- preserve exec-time `VSUID` / `VSGID` credential transfer,
- preserve `P_SUGID` marking,
- keep the original sandbox wrapper execution path alive,
- avoid the broader boot risk of replacing the whole wrapper with an immediate success return.

That is the main reason this direction is safer than the old “return 0 from the hook path” interpretations.

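The two verified patch words quoted in the runtime-verification section can be cross-checked against the standard AArch64 encodings; a minimal sketch assuming little-endian byte order:

```python
# Standard AArch64 instruction words (Arm ARM encodings):
MOV_X0_XZR = 0xAA1F03E0  # ORR X0, XZR, XZR (alias of MOV X0, XZR)
RETAB      = 0xD65F0FFF  # RET with PAC key B

# Little-endian byte images match the report's after-bytes.
assert MOV_X0_XZR.to_bytes(4, "little").hex() == "e0031faa"
assert RETAB.to_bytes(4, "little").hex() == "ff0f5fd6"
```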
# B19 `patch_io_secure_bsd_root` — 2026-03-06 reanalysis

## Scope

Bypass the secure-root enforcement branch so the checked path does not block execution.

## Binary Targets (IDA + Recovered Symbols)

- Recovered symbol: `IOSecureBSDRoot` at `0xfffffe0008297fd8`.
- Additional fallback function observed by string+context matching:
  - `sub_FFFFFE000836E168` (AppleARMPE call path with `SecureRoot` / `SecureRootName` references)
- Strict branch candidate used by the current fallback-style logic:
  - `0xfffffe000836e1f0` (`CBZ W0, ...`) after `BLRAA`

## Call-Stack Analysis

- `IOSecureBSDRoot` is the named entrypoint for secure-root handling.
- `sub_FFFFFE000836E168` is reached through platform-dispatch data refs (vtable-style), not direct BL callers.

## Patch-Site / Byte-Level Change

- Candidate patch site: `0xfffffe000836e1f0`
- Before:
  - bytes: `20 0D 00 34`
  - asm: `CBZ W0, loc_FFFFFE000836E394`
- After:
  - bytes: `69 00 00 14`
  - asm: `B #0x1A4`

## Pseudocode (Before)

```c
status = callback(...);
if (status == 0) {
    goto secure_root_pass_path;
}
// fail / alternate handling
```

## Pseudocode (After)

```c
goto secure_root_pass_path; // unconditional
```

## Symbol Consistency

- The `IOSecureBSDRoot` symbol is recovered and trustworthy as the primary semantic target.
- The current fallback patch site is in a related dispatch function; this is semantically plausible but should be treated as lower confidence than a direct in-symbol site.

## Patch Metadata

- Patch document: `patch_io_secure_bsd_root.md` (B19).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_secure_root.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.

## Target Function(s) and Binary Location

- Primary target: `IOSecureBSDRoot` policy-branch site selected by guard-site filters.
- Patchpoint is the deny-check branch converted to permissive flow.

## Kernel Source File Location

- Likely IOKit secure-root policy code inside the kernel collection (not fully exposed in the open-source XNU tree).
- Closest open-source family: `iokit/Kernel/*` root device / BSD name handling.
- Confidence: `low`.

## Function Call Stack

- Primary traced chain (from `Call-Stack Analysis`):
  - `IOSecureBSDRoot` is the named entrypoint for secure-root handling.
  - `sub_FFFFFE000836E168` is reached through platform-dispatch data refs (vtable-style), not direct BL callers.
- The upstream entry(s) and the patched decision node are linked by direct xref/callsite evidence in this file.

## Patch Hit Points

- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`):
  - Candidate patch site: `0xfffffe000836e1f0`
  - Before:
    - bytes: `20 0D 00 34`
    - asm: `CBZ W0, loc_FFFFFE000836E394`
  - After:
    - bytes: `69 00 00 14`
- The before/after instruction transform is constrained to this validated site.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_secure_root.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).

## Validation (Static Evidence)

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.

## Expected Failure/Panic if Unpatched

- The secure BSD root policy check continues to deny modified-root boot/runtime paths needed by the jailbreak filesystem flow.

## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and a wider privileged surface for patched workflows.

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`.
- Canonical symbol hit(s): `IOSecureBSDRoot`.
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `IOSecureBSDRoot` -> `IOSecureBSDRoot` at `0xfffffe0008297fd8`.

## Open Questions and Confidence

- Open question: verify that future firmware drift does not move this site into an equivalent but semantically different branch.
- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence).

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Kernel used for live reverse-engineering: `kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function node, `1` patch-point VA.
- IDA function sample: `__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_`
- Chain function sample: `__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_`
- Caller sample: none
- Callee sample: `__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_`, `sub_FFFFFE0007AC57A0`, `sub_FFFFFE0007AC5830`, `sub_FFFFFE0007B1B4E0`, `sub_FFFFFE0007B1C324`, `sub_FFFFFE0008133868`
- Verdict: `questionable`
- Recommendation: the hit is valid, but the patch is inactive in `find_all()`; enable only after staged validation.
- Key verified points:
  - `0xFFFFFE000836E1F0` (`__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_`): b #0x1A4 [_IOSecureBSDRoot] | `200d0034 -> 69000014`
- Artifacts:
  - `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
  - `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
  - `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
  - `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

- Ground-truth sources for this note:
  - IDA-MCP on the loaded research kernel
  - recovered symbol datasets in `research/kernel_info/json/`
  - open-source XNU in `research/reference/xnu`

This document intentionally discards earlier B19 writeups as untrusted and restarts the analysis from first principles.

## Executive Conclusion

`patch_io_secure_bsd_root` was previously targeting the wrong branch.

The disabled historical patch at `0xFFFFFE000836E1F0` / file offset `0x0136A1F0` does **not** patch the `"SecureRootName"` policy result used by `IOSecureBSDRoot()`. Instead, it patches the earlier `"SecureRoot"` name-match gate inside `AppleARMPE::callPlatformFunction`, which changes generic platform-function dispatch semantics and is a credible root cause for the early-boot failure.

The semantically correct deny path for the `IOSecureBSDRoot(rootdevice)` flow is the `"SecureRootName"` branch in `AppleARMPE::callPlatformFunction`, specifically the final return-value select at:

- VA: `0xFFFFFE000836E464`
- file offset: `0x0136A464`
- before: `f613891a` / `CSEL W22, WZR, W9, NE`
- recommended after: `16008052` / `MOV W22, #0`

That patch preserves the compare, callback, wakeup, and state updates, and only forces the final policy return from `kIOReturnNotPrivileged` to success.

## Implementation Status

- `scripts/patchers/kernel_jb_patch_secure_root.py` was retargeted on 2026-03-06 to emit this `0x0136A464` patch instead of the historical `0x0136A1F0` false-positive branch rewrite.
- `scripts/patchers/kernel_jb.py` now includes `patch_io_secure_bsd_root` again in `_GROUP_B_METHODS` with the retargeted matcher.
- Local dry-run verification on the research kernel emits exactly one write: `0x0136A464` / `16008052` / `mov w22, #0 [_IOSecureBSDRoot SecureRootName allow]`.

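A quick sanity check ties the VAs and file offsets quoted above together through the base VA used elsewhere in these notes:

```python
BASE_VA = 0xFFFFFE0007004000  # research kernel base VA

def va_to_off(va: int) -> int:
    # Flat VA -> file-offset mapping implied by the pairs in this note.
    return va - BASE_VA

assert va_to_off(0xFFFFFE000836E464) == 0x0136A464  # preferred CSEL site
assert va_to_off(0xFFFFFE000836E1F0) == 0x0136A1F0  # historical (wrong) site
```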
## Verified Call Chain

### 1. BSD boot calls `IOSecureBSDRoot`

IDA shows `bsd_init` calling `IOSecureBSDRoot` here:

- `bsd_init` call site: `0xFFFFFE0007F7B7C4` / file offset `0x00F777C4`
- instruction: `BL IOSecureBSDRoot`

The nearby boot flow is:

1. `IOFindBSDRoot`
2. `vfs_mountroot`
3. `IOSecureBSDRoot(rootdevice)`
4. `VFS_ROOT(...)`
5. later, `FSIOC_KERNEL_ROOTAUTH`

This matches open-source XNU in `research/reference/xnu/bsd/kern/bsd_init.c`, where `IOSecureBSDRoot(rootdevice);` appears before `VFS_ROOT()` and well before the later root-authentication ioctl.

### 2. `IOSecureBSDRoot` calls the platform expert with `"SecureRootName"`

Recovered symbol + IDA decompilation:

- `IOSecureBSDRoot`: `0xFFFFFE0008297FD8` / file offset `0x01293FD8`
- research recovered symbol: `IOSecureBSDRoot`
- release recovered symbol: `IOSecureBSDRoot` at `0xFFFFFE000825FFD8`

The decompiled logic is straightforward:

1. build `OSSymbol("SecureRootName")`
2. wait for `IOPlatformExpert`
3. call `pe->callPlatformFunction(functionName, false, rootName, NULL, NULL, NULL)`
4. if the result is `0xE00002C1` (`kIOReturnNotPrivileged`), call `mdevremoveall()`

Open-source XNU confirms the intended semantics in `research/reference/xnu/iokit/bsddev/IOKitBSDInit.cpp`:

- `"SecureRootName"` is the exact function name
- `kIOReturnNotPrivileged` means the root device is not secure
- on that return code, `mdevremoveall()` is invoked

`mdevremoveall()` in `research/reference/xnu/bsd/dev/memdev.c` removes `/dev/md*` devices and clears the memory-device bookkeeping, so this path is directly relevant to ramdisk / custom-root boot flows.

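The `0xE00002C1` constant decomposes the way IOKit's `IOReturn.h` builds error codes (`kIOReturnNotPrivileged` is `iokit_common_err(0x2c1)`); a small sketch assuming the standard `err_system`/`err_sub` layout:

```python
def iokit_common_err(code: int) -> int:
    # IOReturn layout: system (6 bits) << 26 | subsystem << 14 | code.
    sys_iokit = 0x38 << 26        # err_system(0x38)
    sub_iokit_common = 0 << 14    # err_sub(0)
    return sys_iokit | sub_iokit_common | code

assert iokit_common_err(0x2C1) == 0xE00002C1  # kIOReturnNotPrivileged
```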
### 3. The real secure-root decision is made in `AppleARMPE::callPlatformFunction`

Relevant function:

- `AppleARMPE::callPlatformFunction`: `0xFFFFFE000836E168` / file offset `0x0136A168`

Within this function, there are **two different** string-based branches that matter:

#### A. `"SecureRoot"` branch — callback/control path

At:

- `0xFFFFFE000836E1EC`: `BLRAA` to `a2->isEqualTo("SecureRoot")`
- `0xFFFFFE000836E1F0`: `CBZ W0, loc_FFFFFE000836E394`

If the name matches `"SecureRoot"`, the function enters a branch that:

- waits on byte flag `[a1+0x118]`
- may call `"SecureRootCallBack"`
- sets / wakes byte flag `[a1+0x119]`
- optionally returns a boolean via `a5`

This is **not** the direct `IOSecureBSDRoot(rootName)` policy result.

#### B. `"SecureRootName"` branch — actual policy decision path

At:

- `0xFFFFFE000836E3C0`: `BLRAA` to `a2->isEqualTo("SecureRootName")`
- `0xFFFFFE000836E3C4`: `CBZ W0, loc_FFFFFE000836E46C`

Then:

- `0xFFFFFE000836E3D4`: call a helper that behaves like `strlen`
- `0xFFFFFE000836E3E4`: call a helper that behaves like `strncmp`
- `0xFFFFFE000836E3E8`: `CMP W0, #0`
- `0xFFFFFE000836E3EC`: `CSET W8, EQ`
- `0xFFFFFE000836E3F0`: store the secure-match bit to `[a1+0x11A]`
- wake waiting threads / synchronize callback state
- `0xFFFFFE000836E450`: reload `[a1+0x11A]`
- `0xFFFFFE000836E454`: load `W9 = 0xE00002C1`
- `0xFFFFFE000836E464`: `CSEL W22, WZR, W9, NE`

That final `CSEL` is the actual deny/success selector for the `"SecureRootName"` request:

- secure match -> return `0`
- mismatch -> return `0xE00002C1` / `kIOReturnNotPrivileged`

## Why the Historical Patch Is Wrong

### Root cause 1: the live patcher has no symbol table to use

Running the existing `KernelJBPatcher` locally against the research kernel shows:

- `symbol_count = 0`
- `_resolve_symbol("_IOSecureBSDRoot") == -1`
- `_resolve_symbol("IOSecureBSDRoot") == -1`

So the current code always falls back to a heuristic matcher on this kernel.

### Root cause 2: the fallback heuristic picks the first `BL* + CBZ W0` site

The current fallback logic looks for a function referencing both `"SecureRoot"` and `"SecureRootName"`, then selects the first forward conditional branch shaped like:

- previous instruction is `BL*`
- current instruction is `CBZ/CBNZ W0, target`

That heuristic lands on:

- `0xFFFFFE000836E1F0` / `CBZ W0, loc_FFFFFE000836E394`

But this site is only the result of `isEqualTo("SecureRoot")`. It is **not** the final policy-return site for `"SecureRootName"`.

### Root cause 3: the old patch changes dispatch routing, not just the deny return

Historical patch:

- before: `200d0034` / `CBZ W0, loc_FFFFFE000836E394`
- after: `69000014` / `B #0x1A4`

Effect:

- previously: only true `"SecureRoot"` requests enter the `SecureRoot` branch
- after the patch: non-`"SecureRoot"` requests are also forced into that branch

Because this is inside the generic `AppleARMPE::callPlatformFunction` dispatch, the patch can corrupt the control flow for unrelated platform-function calls that happen to reach this portion of the function. That is much broader than “skip secure-root denial” and is consistent with a boot-time regression.

## What This Patch Actually Does

`patch_io_secure_bsd_root` does **not** replace the later sealed-root / root-authentication gate in `bsd_init`.

What it actually controls is earlier and narrower:

1. determine whether the chosen BSD root name is platform-approved (`"SecureRootName"`)
2. if not approved, return `kIOReturnNotPrivileged`
3. `IOSecureBSDRoot()` maps that failure into `mdevremoveall()`

So the practical effect of a correct B19 bypass is:

- allow a non-approved/custom BSD root name to survive the platform secure-root policy step
- avoid the `kIOReturnNotPrivileged -> mdevremoveall()` failure path
- keep the rest of the boot moving toward `VFS_ROOT` and the later rootauth check

This is why B19 and `patch_bsd_init_auth` are separate methods: they handle different stages of the boot chain.

## Recommended Patch Strategy

### Preferred site: final `"SecureRootName"` return select

Patch only the final result selector:

- VA: `0xFFFFFE000836E464`
- file offset: `0x0136A464`
- before bytes: `f613891a`
- before asm: `CSEL W22, WZR, W9, NE`
- after bytes: `16008052`
- after asm: `MOV W22, #0`

Why this site is preferred:

- preserves the string-comparison logic
- preserves the `SecureRootCallBack` synchronization / wakeup handshake
- preserves the state bytes at `[a1+0x118]`, `[a1+0x119]`, `[a1+0x11A]`
- changes only the final deny-vs-success return value

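The quoted byte words can be double-checked with small hand encoders for the two AArch64 forms involved; the helper names are hypothetical and little-endian byte order is assumed:

```python
def enc_movz_w(rd: int, imm16: int) -> bytes:
    # MOVZ Wd, #imm16 (LSL #0): 0x52800000 | imm16 << 5 | rd
    return (0x52800000 | (imm16 << 5) | rd).to_bytes(4, "little")

def enc_b(offset: int) -> bytes:
    # B <label>: 0x14000000 | (byte offset / 4) in the 26-bit imm field
    return (0x14000000 | ((offset >> 2) & 0x03FFFFFF)).to_bytes(4, "little")

assert enc_movz_w(22, 0).hex() == "16008052"  # preferred: MOV W22, #0
assert enc_movz_w(8, 1).hex() == "28008052"   # secondary: MOV W8, #1
assert enc_b(0x1A4).hex() == "69000014"       # historical: B #0x1A4
```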
### Secondary option: force the secure-match bit before the final select

- VA: `0xFFFFFE000836E3EC`
- file offset: `0x0136A3EC`
- before bytes: `e8179f1a`
- before asm: `CSET W8, EQ`
- after bytes: `28008052`
- after asm: `MOV W8, #1`

This is broader than the preferred patch because it changes the stored secure-match state itself, not just the returned result.

### Tertiary option: suppress only the `IOSecureBSDRoot()` cleanup

There is also a coarser site in `IOSecureBSDRoot` itself:

- `0xFFFFFE0008298144`: compare against `0xE00002C1` followed by `B.NE`

That site can suppress `mdevremoveall()` without touching `AppleARMPE::callPlatformFunction`, but it is less attractive because it leaves the underlying `"SecureRootName"` failure semantics intact and only masks the wrapper-side cleanup.

## Safer Matcher Recipe For Future Python Rework
|
||||
|
||||
If/when the Python patcher is reworked, the fallback should stop selecting the first `BL* + CBZ W0` site in the shared function.
|
||||
|
||||
A safer matcher for stripped kernels is:
|
||||
|
||||
1. locate the function referencing both `"SecureRoot"` and `"SecureRootName"`
|
||||
2. inside that function, find the `"SecureRootName"` equality check block, not the `"SecureRoot"` block
|
||||
3. from there, require the sequence:
|
||||
- helper call 1 (length)
|
||||
- helper call 2 (compare)
|
||||
- `CMP W0, #0`
|
||||
- `CSET W8, EQ`
|
||||
- store to `[X19,#0x11A]`
|
||||
- later `MOV W9, #0xE00002C1`
|
||||
- final `CSEL W22, WZR, W9, NE`
|
||||
4. patch only that final `CSEL`
|
||||
|
||||
This gives a unique, semantics-aware patch site for the actual deny return.
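The final step of that recipe reduces to one exact-word comparison; a hypothetical helper (the function name and the standalone form are illustrative, not the repo matcher) could look like:

```python
def is_deny_csel(word: int) -> bool:
    """True iff `word` encodes CSEL W22, WZR, W9, NE, the final deny-vs-success select.

    CSEL Wd, Wn, Wm, cond assembles as 0x1A800000 | Rm<<16 | cond<<12 | Rn<<5 | Rd,
    so the expected word uses Rm = W9, cond = NE (0b0001), Rn = WZR (31), Rd = W22.
    """
    expected = 0x1A800000 | (9 << 16) | (0b0001 << 12) | (31 << 5) | 22
    return word == expected

# The expected encoding computed above is 0x1A8913F6 (LE bytes f6 13 89 1a).
assert is_deny_csel(0x1A8913F6)
assert not is_deny_csel(0x1A8913F7)
```

Matching an exact word rather than an opcode mask is deliberate here: the recipe already narrows to one function, so a full-word match rejects any register-allocation drift instead of silently patching the wrong select.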

## Local Reproduction Notes

Local dry analysis of the current patcher on the research kernel produced:

- `fallback_func = 0x136a168`
- emitted patch = `(0x0136A1F0, 69000014, 'b #0x1A4 [_IOSecureBSDRoot]')`

This reproduces the disabled historical behavior and confirms that the current implementation does not yet target the correct deny site.

## Confidence

- Confidence that the historical patch site is wrong: **high**
- Confidence that `0xFFFFFE000836E464` is the correct minimal deny-return site: **high**
- Confidence that this alone is sufficient for full jailbreak boot: **medium**

The last item stays `medium` because B19 only addresses the secure-root platform policy stage; it does not replace the later root-auth/sealedness work handled elsewhere.

@@ -1,5 +1,11 @@

# A5 `patch_iouc_failed_macf`

## Status

- Re-analysis date: `2026-03-06`
- Current conclusion: the historical repo A5 entry early-return is rejected as over-broad, but A5-v2 is now rebuilt as a narrow branch-level patch at the real post-MACF deny gate.
- Current repository behavior: `patch_iouc_failed_macf` is active again with the strict A5-v2 matcher.

## Patch Goal

Bypass the shared IOUserClient MACF deny gate that emits:

@@ -9,25 +15,21 @@ Bypass the shared IOUserClient MACF deny gate that emits:

This gate blocks `mount-phase-1` and `data-protection` (`seputil`) in current JB boot logs.

## Binary Targets (vphone600 research kernel)
## Historical Repo Hit (rejected)

- Anchor string: `"failed MACF"`
- Candidate function selected by anchor xref + IOUC co-reference:
  - function start: `0xfffffe000825b0c0`
- Patch points:
- Historical patch points:
  - `0xfffffe000825b0c4`
  - `0xfffffe000825b0c8`

## Patch-Site / Byte-Level Change
## Why The Historical Repo Patch Is Rejected

- At `fn + 0x4`:
  - before: stack-frame setup (`stp ...`)
  - after: `mov x0, xzr`
- At `fn + 0x8`:
  - before: stack-frame setup (`stp ...`)
  - after: `retab`

Result: function returns success immediately while preserving entry `PACIBSP`.

- IDA decompilation shows `0xfffffe000825b0c0` is a large IOUserClient open / setup path, not a tiny standalone MACF helper.
- That function also prepares output state (`a7` / `a8` in decompilation) before returning to its caller.
- The historical repo patch overwrote the first two instructions after `PACIBSP` with `mov x0, xzr ; retab`, which forces an immediate success return before that wider setup work happens.
- Therefore the old patch is broader than the actual MACF deny branch and is not a good upstream-aligned design.

## Pseudocode (Before)

@@ -39,14 +41,26 @@ int iouc_macf_gate(...) {
}
```

## Pseudocode (After)
## Narrow Branch (current A5-v2 target)

```c
int iouc_macf_gate(...) {
    return 0;
// inside sub_FFFFFE000825B0C0
ret = mac_iokit_check_open(...);
if (ret != 0) {
    IOLog("IOUC %s failed MACF in process %s\n", ...);
    error = kIOReturnNotPermitted;
    goto out;
}
```

Current IDA-validated branch window:

- `0xfffffe000825ba94` — `BL sub_FFFFFE00082EB07C`
- `0xfffffe000825ba98` — `CBZ W0, loc_FFFFFE000825BB0C`
- `0xfffffe000825baf8` — `ADRL X0, "IOUC %s failed MACF in process %s\n"`

A5-v2 patches exactly this gate by replacing `CBZ W0, loc_FFFFFE000825BB0C` with unconditional `B loc_FFFFFE000825BB0C`.
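The CBZ-to-B rewrite preserves the branch target because both encodings carry a PC-relative word offset; a minimal sketch of the instruction transform (a hypothetical helper, not the repo patcher itself):

```python
def cbz_to_b(word: int) -> int:
    """Rewrite a CBZ/CBNZ (32- or 64-bit) word into an unconditional B to the same target."""
    # CBZ/CBNZ family: top byte 0x34/0x35/0xB4/0xB5; mask away sf (bit 31) and op (bit 24).
    if (word & 0x7E000000) != 0x34000000:
        raise ValueError("not a CBZ/CBNZ encoding")
    imm19 = (word >> 5) & 0x7FFFF
    if imm19 & 0x40000:  # sign-extend the 19-bit word offset
        imm19 -= 0x80000
    # B carries a 26-bit word offset in the low bits of 0x14000000.
    return 0x14000000 | (imm19 & 0x03FFFFFF)

# A forward CBZ W0 over 0x74 bytes (imm19 = 0x1D) becomes `b #0x74`,
# i.e. word 0x1400001D, LE bytes 1d000014.
assert cbz_to_b(0x340003A0) == 0x1400001D
```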

## Why This Patch Was Added

- Extending sandbox hooks to cover `ops[201..210]` was not sufficient.
@@ -60,13 +74,16 @@ int iouc_macf_gate(...) {
- Primary patcher module:
  - `scripts/patchers/kernel_jb_patch_iouc_macf.py`
- JB scheduler status:
  - enabled in default `_DEFAULT_METHODS` as `patch_iouc_failed_macf`
  - present in active `_PATCH_METHODS`
  - patch method emits one branch rewrite when the strict shape matches

## Validation (static, local)

- Method emitted 2 writes on current kernel:
- Historical repo dry-run emitted 2 writes on current kernel:
  - `0x012570C4` `mov x0,xzr [IOUC MACF gate low-risk]`
  - `0x012570C8` `retab [IOUC MACF gate low-risk]`
- Current A5-v2 dry-run emits **1 write** on current kernel:
  - `0x01257A98` `b #0x74 [IOUC MACF deny → allow]`

## XNU Reference Cross-Validation (2026-03-06)

@@ -90,19 +107,12 @@ What still requires IDA/runtime evidence:

Interpretation:

- This patch has strong source-level support for mechanism (shared IOUC MACF gate), while concrete hit-point selection remains IDA-authoritative per-kernel.
- The IOUC MACF mechanism itself is real and source-backed.
- The old repo hit-point was too wide.
- A5-v2 now follows the narrower branch-level retarget: preserve the IOUserClient open path and only force the post-`mac_iokit_check_open` gate into the allow path.

## Runtime Validation Pending
## Bottom Line

Need full flow validation after patch install:

1. `make fw_patch_jb`
2. restore
3. `make cfw_install_jb`
4. `make boot`

Expected improvement:

- no `IOUC ... failed MACF` for APFS/SEP user clients
- `data-protection` should progress past the `seputil` timeout path.
- The old entry early-return was a repo-local experiment and is no longer used.
- The current A5-v2 implementation patches only the narrow `mac_iokit_check_open` deny gate inside `0xfffffe000825b0c0`.
- Focused dry-run on `kernelcache.research.vphone600` hits a single branch rewrite at `0x01257A98`, which is much closer to an upstream-style minimal gate patch than the old entry short-circuit.
@@ -1,152 +1,240 @@

# C24 `patch_kcall10`

## Patch Goal
## Status (2026-03-06, PCC 26.1 re-analysis)

Replace syscall 439 (`kas_info`) with a 10-argument kernel call trampoline and preserve chained-fixup integrity.
- Treat all older `kcall10` notes in this repo as historical / untrusted unless they match the facts below.
- Current verdict for the legacy upstream-style design: it was ABI-incorrect for PCC 26.1 and has been replaced in the patcher with a rebuilt ABI-correct syscall-cave design.
- Scope of this document: single-patch re-research only, focused exclusively on the `kcall10` kernel-call patch itself.

## Binary Targets (IDA + Recovered Symbols)
## Goal

- Recovered symbols:
  - `nosys` at `0xfffffe0008010c94`
  - `kas_info` at `0xfffffe0008080d0c`
- Patcher design target:
  - `sysent[439]` entry: `sy_call`, optional `sy_munge32`, return-type/narg fields.
- Cave code:
  - shellcode trampoline in executable text cave (dynamic offset).
- Repurpose `SYS_kas_info` (`syscall 439`) into a usable kernel-call primitive for jailbreak workflows.
- Keep the hook on a syscall slot that is already effectively unused on this kernel.
- Make the patch structurally correct for the real arm64 XNU syscall ABI so it can be dry-run validated without relying on guessed stack contracts.

## Call-Stack Analysis
## Verified PCC 26.1 Facts

- Userland syscall -> syscall dispatch -> `sysent[439].sy_call`.
- Before patch: `sysent[439] -> kas_info` (restricted behavior).
- After patch: `sysent[439] -> kcall10 cave` (loads function pointer + args, executes `BLR x16`, stores results back).

### `sysent[439]` on the loaded PCC 26.1 research kernel

## Patch-Site / Byte-Level Change

- IDA function `sub_FFFFFE00081279E4` is the arm64 Unix syscall dispatcher (`unix_syscall` semantics confirmed by XNU source and call shape).
- It computes the syscall-table base as `off_FFFFFE000773F858` and indexes entries as `base + code * 0x18`.
- Therefore `sysent[439]` is at:
  - VA `0xFFFFFE0007742180`
  - file offset `0x0073E180`
- Unpatched entry contents on PCC 26.1:
  - `sy_call = 0xFFFFFE0008077978`
  - `sy_arg_munge32 = 0xFFFFFE0007C6AC4C`
  - `sy_return_type = 1`
  - `sy_narg = 3`
  - `sy_arg_bytes = 0x000C`

- Entry-point data patching is chained-fixup encoded (auth rebase), not raw VA writes.
- Key field semantics:
  - diversity: `0xBCAD`
  - key: IA (`0`)
  - addrDiv: `0`
  - preserve `next` chain bits
- Metadata patches:

### Raw entry dump

- 24-byte `sysent[439]` dump as observed in IDA / local decode:
  - qword `[+0x00]`: `0xFFFFFE0008077978`
  - qword `[+0x08]`: `0xFFFFFE0007C6AC4C`
  - dword `[+0x10]`: `0x00000001`
  - half `[+0x14]`: `0x0003`
  - half `[+0x16]`: `0x000C`
- Same entry in 32-bit little-endian words:
  - `08077978 fffffe00 07c6ac4c fffffe00 00000001 000c0003`
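The field boundaries in that dump can be expressed as a single `struct` format; a minimal sketch, assuming the 24-byte row layout described above (qword `sy_call`, qword `sy_arg_munge32`, dword return type, two halfwords):

```python
import struct

def decode_sysent_row(raw: bytes) -> dict:
    """Decode one 24-byte sysent row as laid out on this kernel:
    <Q sy_call> <Q sy_arg_munge32> <i sy_return_type> <H sy_narg> <H sy_arg_bytes>."""
    sy_call, sy_munge, ret_type, narg, arg_bytes = struct.unpack("<QQiHH", raw)
    return {"sy_call": sy_call, "sy_arg_munge32": sy_munge,
            "sy_return_type": ret_type, "sy_narg": narg, "sy_arg_bytes": arg_bytes}

# Rebuild the dumped sysent[439] entry and check the decoded fields round-trip.
raw = struct.pack("<QQiHH", 0xFFFFFE0008077978, 0xFFFFFE0007C6AC4C, 1, 3, 0x000C)
row = decode_sysent_row(raw)
assert row["sy_call"] == 0xFFFFFE0008077978
assert row["sy_narg"] == 3 and row["sy_arg_bytes"] == 0x000C
```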

### What `syscall 439` currently does here

- `0xFFFFFE0008077978` disassembles to:
  - `MOV W0, #0x2D`
  - `RET`
- `0x2D` is `45` decimal, i.e. `ENOTSUP`.
- So on this PCC 26.1 research kernel, `SYS_kas_info` is effectively a stubbed-out `ENOTSUP` syscall target, which makes it a good hook point.

### Verified dispatcher ABI

- In `sub_FFFFFE00081279E4`, the handler call sequence is:
  - `LDR X8, [X22]`
  - `MOV X0, X21`
  - `MOV X1, X19`
  - `MOV X2, X24`
  - `MOV X17, #0xBCAD`
  - `BLRAA X8, X17`
- Derived state at the call:
  - `X21 = struct proc *`
  - `X19 = &uthread->uu_arg[0]`
  - `X24 = &uthread->uu_rval[0]`
- So the real handler ABI is:
  - `x0 = struct proc *`
  - `x1 = &uthread->uu_arg[0]`
  - `x2 = &uthread->uu_rval[0]`

## XNU Cross-Check

- `research/reference/xnu/bsd/sys/sysent.h` defines `sy_call_t` as `int32_t sy_call(struct proc *, void *, int *)`.
- `research/reference/xnu/bsd/dev/arm/systemcalls.c` shows `unix_syscall()` calling `(*callp->sy_call)(proc, &uthread->uu_arg[0], &uthread->uu_rval[0])`.
- arm64 `unix_syscall` only accepts up to **8** syscall argument slots.
- `research/reference/xnu/bsd/sys/user.h` shows `uu_rval` is `int uu_rval[2]`, so the natural 64-bit return path is `_SYSCALL_RET_UINT64_T`, which packs one 64-bit value across those two 32-bit cells.
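That two-cell packing is easy to get backwards; a standalone illustration of the split/rejoin arithmetic (the assumption that the low half lands in `uu_rval[0]` should be confirmed against `systemcalls.c` before relying on it):

```python
def split_uint64_retval(value: int):
    """Split a 64-bit result across two 32-bit uu_rval cells (low word first, by assumption)."""
    return value & 0xFFFFFFFF, (value >> 32) & 0xFFFFFFFF

def join_uint64_retval(lo: int, hi: int) -> int:
    """Reassemble the value the way the _SYSCALL_RET_UINT64_T return path would."""
    return (hi << 32) | lo

# Round-trip check: splitting then joining recovers the original 64-bit value.
assert split_uint64_retval(0x100000002) == (2, 1)
assert join_uint64_retval(*split_uint64_retval(0xFFFFFE0008077978)) == 0xFFFFFE0008077978
```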

## Why The Historical Design Was Wrong

### Old idea

- Historical notes described a cave that:
  - recovered a pointer from `[sp,#0x40]`
  - treated that pointer as `{ target, arg0..arg9, out_regs... }`
  - called the target with `BLR`
  - wrote many registers back to the same buffer
  - returned `0`

### Problems

- The syscall ABI never passes a userspace request buffer via `[sp,#0x40]`.
- arm64 XNU does not provide a 10-argument Unix syscall ABI.
- `uu_arg` only holds 8 qwords, so the old cave over-read / over-wrote beyond the copied syscall arguments.
- The old design bypassed the real syscall return channel (`retval` / `uu_rval`) and therefore did not actually match how `unix_syscall()` returns results to userspace.

## Rebuilt Patch Design

### Practical decision

- A literal direct-call `kcall10` is not ABI-compatible with this kernel's Unix syscall path.
- The rebuilt patch therefore keeps the historical hook point but redefines the request format into an ABI-correct reduced form:
  - target function pointer
  - 7 direct arguments
  - 64-bit X0 return value
- This keeps the patch usable as a kernel-call bootstrap while staying within the real syscall ABI.

### New `uap` layout

The rebuilt patcher uses `sy_narg = 8`, with `x1` pointing at a copied 8-qword argument block:

```c
struct kcall10_uap_rebuilt {
    uint64_t target;
    uint64_t arg0;
    uint64_t arg1;
    uint64_t arg2;
    uint64_t arg3;
    uint64_t arg4;
    uint64_t arg5;
    uint64_t arg6;
};
```
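Userspace hands this block to the syscall as eight qwords; a hypothetical packing sketch (the helper name and the zero-padding policy are illustrative assumptions, only the field order comes from the struct above):

```python
import struct

def pack_kcall10_request(target: int, *args: int) -> bytes:
    """Pack `target` plus up to 7 direct arguments into the 8-qword block
    that ends up copied into uthread->uu_arg[0..7]."""
    if len(args) > 7:
        raise ValueError("rebuilt kcall10 supports at most 7 direct arguments")
    padded = list(args) + [0] * (7 - len(args))
    return struct.pack("<8Q", target, *padded)

# 8 qwords total: the target pointer plus 7 argument slots.
blob = pack_kcall10_request(0xFFFFFE0008010C94, 1, 2, 3)
assert len(blob) == 0x40
```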

### New semantics

- `uap[0]` = target function pointer
- `uap[1..7]` = arguments loaded into `x0..x6`
- `x7` is forced to zero in the cave
- target return `x0` is stored to `retval`
- `sysent[439].sy_return_type` is set to `_SYSCALL_RET_UINT64_T`
- userspace receives one 64-bit return value in `x0`

## Python Implementation

The dedicated patcher file is now:

- `scripts/patchers/kernel_jb_patch_kcall10.py`

### What it now does

- Finds the real `sysent` table by scanning backward from a decoded `_nosys` entry.
- Locates a reusable 8-argument `sy_arg_munge32` helper from the live table and now requires that the decoded helper target be unique across all matching sysent rows.
- Allocates an executable cave sized to the emitted blob instead of relying on a fixed large reservation.
- Emits an ABI-correct cave that:
  - validates `uap`, `retval`, and `target`
  - loads `target + 7 args` from `x1`
  - performs `BLR X16`
  - stores `X0` through the `retval` pointer in `x2`
  - returns `0` on success or `EINVAL` on malformed input
- Rewrites `sysent[439]` to point at the cave.
- Rewrites `sysent[439].sy_arg_munge32` to an 8-argument helper.
- Rewrites metadata to:
  - `sy_return_type = 7`
  - `sy_narg = 8`
  - `sy_arg_bytes = 0x20`

## Pseudocode (Before)
## Expected Emitted Patch Shape

```c
// sysent[439]
return kas_info(args); // limited / ENOTSUP style behavior on this platform
```

The rebuilt patch should emit exactly four writes:

## Pseudocode (After)
1. Code cave blob in `__TEXT_EXEC`
2. `sysent[439].sy_call = cave`
3. `sysent[439].sy_arg_munge32 = 8-arg munger`
4. `sysent[439].sy_return_type / sy_narg / sy_arg_bytes`

```c
// sysent[439]
ctx = user_buf;
fn = ctx->func;
args = ctx->arg0..arg9;
ret_regs = fn(args...);
ctx->ret_regs = ret_regs;
return 0;
```

## Static Acceptance Criteria

## Symbol Consistency
The rebuilt patch is considered structurally correct if all of the following hold:

- `nosys` and `kas_info` symbols are recovered and consistent with the intended hook objective.
- Direct `sysent` symbol is not recovered; table base still relies on structural scanning + chained-fixup validation logic.
- `sysent[439]` still decodes as a valid auth-rebase entry after patching.
- `sy_narg == 8` and `sy_arg_bytes == 0x20`.
- No cave instruction reads from guessed caller-frame offsets like `[sp,#0x40]` to recover user arguments.
- The cave consumes the real syscall handler ABI: `(proc, uap, retval)`.
- The cave returns the 64-bit primary result through `retval` and `_SYSCALL_RET_UINT64_T`.
- The cave does not read beyond the 8 copied syscall qwords.

## Patch Metadata
## Risks

- Patch document: `patch_kcall10.md` (C24).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_kcall10.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.
- **Arbitrary kernel call surface**: this patch intentionally creates a direct kernel-call primitive from userspace; any reachable caller with sufficient privilege can invoke sensitive kernel routines with attacker-controlled arguments.
- **Target-function safety**: the cave does not validate the semantic suitability of the target function. Calling a function with the wrong prototype, wrong locking expectations, or wrong context can panic or corrupt kernel state.
- **Argument-width limit**: this rebuilt version is ABI-correct but only supports `target + 7 args -> uint64 x0`. Workflows that silently assume the old pseudo-10-arg contract will misbehave until userspace is updated.
- **Return-value limit**: only primary `x0` is surfaced through the syscall return path. Any target that needs structured outputs, out-pointers, or multiple architecturally relevant return registers still needs a higher-level descriptor / copyout design.
- **PAC / branch-context coupling**: the `sy_call` hook itself preserves the expected authenticated-call shape, but the target function call inside the cave is a plain `blr x16`. If the chosen target relies on a different authenticated entry expectation or unusual calling context, behavior may still be unsafe.
- **Scheduler impact**: re-enabling this patch in the default JB list means future aggregate dry-runs and restore tests now include it. Any regression observed after this point must consider `patch_kcall10` as part of the active set.

## Target Function(s) and Binary Location
## Current Limits

- Primary target: syscall 439 (`SYS_kas_info`) replacement path plus injected kcall10 shellcode.
- Hit points include syscall table entry redirection and payload cave sites.
- This rebuilt patch is ABI-correct, but it is no longer a literal “10 direct argument” trampoline.
- It provides a reduced-form direct-call primitive: `target + 7 args -> uint64 x0`.
- If a future design needs more arguments or structured output, it should move to a descriptor + `copyin/copyout` model rather than trying to extend the raw syscall ABI.

## Kernel Source File Location
## Validation Plan

- Mixed source context: syscall plumbing in `bsd/kern/syscalls.master` / `osfmk/kern/syscall_sw.c` plus injected shellcode region.
- Confidence: `medium`.

1. Keep work scoped to this single patch.
2. Run a dedicated dry-run against `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600`.
3. Verify the emitted cave disassembly matches the rebuilt design.
4. Verify the three `sysent[439]` field writes match the intended targets and metadata.
5. Stop at dry-run validation; do not escalate to full firmware build in this step.

## Function Call Stack
## Dry-Run Validation (2026-03-06)

- Primary traced chain (from `Call-Stack Analysis`):
  - Userland syscall -> syscall dispatch -> `sysent[439].sy_call`.
  - Before patch: `sysent[439] -> kas_info` (restricted behavior).
  - After patch: `sysent[439] -> kcall10 cave` (loads function pointer + args, executes `BLR x16`, stores results back).
- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file.

Target image:

## Patch Hit Points
- `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600`

- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`):
  - diversity: `0xBCAD`

Result:

- `method_return = True`
- `patch_count = 4`

Emitted writes:

- `0x00AB1720` — cave blob, size `0x6C`
- `0x0073E180` — `sysent[439].sy_call = cave`
- `0x0073E188` — `sysent[439].sy_arg_munge32 = 8-arg helper`
- `0x0073E190` — `sy_return_type = 7`, `sy_narg = 8`, `sy_arg_bytes = 0x20`

Exact emitted bytes:

- cave @ `0x00AB1720`:
  - `7f2303d5ffc300d1f55b00a9f35301a9fd7b02a9fd830091d3028052f40301aaf50302aa940100b4750100b4900240f9300100b4808640a9828e41a9849642a9861e40f9e7031faa00023fd6a00200f913008052e003132af55b40a9f35341a9fd7b42a9ffc30091ff0f5fd6`
- `sysent[439].sy_call` @ `0x0073E180`:
  - `2017ab00adbc1080`
- `sysent[439].sy_arg_munge32` @ `0x0073E188`:
  - `286dc600be2a2080`
- metadata @ `0x0073E190`:
  - `0700000008002000`
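The `sy_call` qword above is a chained-fixup auth-rebase pointer rather than a raw VA; a decode sketch, assuming the arm64e chained-fixup bit layout (target:32, diversity:16, addrDiv:1, key:2, next:11, bind:1, auth:1) applies to this kernelcache:

```python
import struct

def decode_auth_rebase(qword: int) -> dict:
    """Decode an arm64e chained-fixup auth-rebase pointer into its fields."""
    return {
        "target":    qword & 0xFFFFFFFF,         # file/image offset of the pointee
        "diversity": (qword >> 32) & 0xFFFF,     # PAC discriminator
        "addr_div":  (qword >> 48) & 1,
        "key":       (qword >> 49) & 3,          # 0 = IA
        "next":      (qword >> 51) & 0x7FF,      # stride to the next fixup in the chain
        "bind":      (qword >> 62) & 1,
        "auth":      (qword >> 63) & 1,
    }

# Decode the emitted sy_call bytes quoted above.
(qword,) = struct.unpack("<Q", bytes.fromhex("2017ab00adbc1080"))
fields = decode_auth_rebase(qword)
assert fields["target"] == 0x00AB1720 and fields["diversity"] == 0xBCAD
assert fields["key"] == 0 and fields["addr_div"] == 0 and fields["auth"] == 1
```

Note how this matches the patcher's stated constraints: diversity `0xBCAD`, key IA, addrDiv `0`, and the `next` chain bits (`2` here) carried over from the unpatched entry.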

Decoded post-patch fields:

- `sy_call` decodes to cave file offset `0x00AB1720`
- `sy_arg_munge32` decodes to helper file offset `0x00C66D28` (chosen only after confirming the 8-arg helper target is unique across matching sysent rows)
- `sy_return_type = 7`
- `sy_narg = 8`
- `sy_arg_bytes = 0x20`
- The before/after instruction transform is constrained to this validated site.

## Current Patch Search Logic
Cave disassembly summary:

- Implemented in `scripts/patchers/kernel_jb_patch_kcall10.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).

## Validation (Static Evidence)

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.

## Expected Failure/Panic if Unpatched

- Kernel arbitrary-call syscall path is unavailable; userland kcall-based bootstrap stages cannot execute.

## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows.

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`.
- Canonical symbol hit(s): none (alias-based static matching used).
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe0008010c94` currently resolves to `nosys` (size `0x34`).

## Open Questions and Confidence

- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain.
- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial).

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (3 patch writes, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `True`
- IDA mapping: `0/3` points in recognized functions; `3` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `0` function nodes, `0` patch-point VAs.
- Verdict: `valid`
- Recommendation: Keep enabled for this kernel build; continue monitoring for pattern drift.
- Policy note: method is in the low-risk optimized set (validated hit on this kernel).
- Key verified points:
  - `0xFFFFFE000774E5A0` (`code-cave/data`): sysent[439].sy_call = \_nosys 0xF6F048 (auth rebase, div=0xBCAD, next=2) [kcall10 low-risk] | `0ccd0701adbc1080 -> 48f0f600adbc1080`
  - `0xFFFFFE000774E5B0` (`code-cave/data`): sysent[439].sy_return_type = 1 [kcall10 low-risk] | `01000000 -> 01000000`
  - `0xFFFFFE000774E5B4` (`code-cave/data`): sysent[439].sy_narg=0,sy_arg_bytes=0 [kcall10 low-risk] | `03000c00 -> 00000000`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

- prologue: `pacibsp`, 0x30-byte stack frame, saves `x19`-`x22`, `x29`, `x30`
- validation: reject null `uap`, null `retval`, null `target` with `EINVAL`
- load path: reads target from `[x20]`, args from `[x20+0x8 .. +0x38]`
- call path: `blr x16` with `x0..x6` populated and `x7 = 0`
- return path: `str x0, [x21]`, move status into `w0`, restore callee-saved registers, `retab`
@@ -1,144 +1,388 @@

# C22 `patch_syscallmask_apply_to_proc`

## Patch Goal
## Status

Inject a shellcode detour into legacy `_syscallmask_apply_to_proc`-shape logic to install custom syscall filter mask handling.
- Re-analysis date: `2026-03-06`
- Scope: `kernelcache.research.vphone600`
- Prior notes for this patch are treated as untrusted unless restated below.
- Current conclusion: the old repo C22 implementation was a misidentification that patched `_profile_syscallmask_destroy` under an underflow-panic slow path. As of `2026-03-06`, `scripts/patchers/kernel_jb_patch_syscallmask.py` has been rebuilt to target the real syscallmask apply wrapper structurally and recreate the upstream C22 behavior (mutate mask bytes to all-ones, then continue into the normal setter path). User-side restore/boot validation succeeded on `2026-03-06`.

## Binary Targets (IDA + Recovered Symbols)
## What This Mechanism Actually Does

- String anchors:
  - `"syscallmask.c"` at `0xfffffe0007609236`
  - `"sandbox.syscallmasks"` at `0xfffffe000760933c`
- Related recovered functions in the cluster:
  - `_profile_syscallmask_destroy` at `0xfffffe00093ae6a4`
  - `_sandbox_syscallmask_destroy` at `0xfffffe00093ae984`
  - `_sandbox_syscallmask_create` at `0xfffffe00093aea34`
  - `_hook_policy_init` at `0xfffffe00093c1a54`

This path is not a generic parser or allocator hook. Its real job is to **install per-process syscall filter masks** used later by three enforcement sites:

## Call-Stack Analysis
- Unix syscall dispatch
- Mach trap dispatch
- Kernel MIG / kobject dispatch

- Current firmware exposes syscallmask create/destroy/hook-policy flows.
- Legacy apply-to-proc prologue shape required by C22 shellcode was not found in anchor-near candidates.

In XNU source terms, the closest semantic match is `proc_set_syscall_filter_mask(proc_t p, int which, unsigned char *maskptr, size_t masklen)` in `research/reference/xnu/bsd/kern/kern_proc.c:5142`.

## Patch-Site / Byte-Level Change
Important XNU references:

- Required legacy signature (strict):
  - `cbz x2` and `mov x19,x0 ; mov x20,x1 ; mov x21,x2 ; mov x22,x3` in early prologue.
- Validation result on current image: no valid candidate.
- Therefore expected behavior is fail-closed:
  - no cave writes
  - no branch redirection emitted.

- `research/reference/xnu/bsd/sys/proc.h:558` — `SYSCALL_MASK_UNIX`, `SYSCALL_MASK_MACH`, `SYSCALL_MASK_KOBJ`
- `research/reference/xnu/bsd/kern/kern_proc.c:5142` — setter for the three mask kinds
- `research/reference/xnu/bsd/dev/arm/systemcalls.c:161` — Unix syscall enforcement
- `research/reference/xnu/osfmk/arm64/bsd_arm64.c:253` — Mach trap enforcement
- `research/reference/xnu/osfmk/kern/ipc_kobject.c:568` — kobject/MIG enforcement
- `research/reference/xnu/bsd/kern/kern_fork.c:1028` — Unix mask inheritance on fork
- `research/reference/xnu/osfmk/kern/task.c:1759` — Mach/KOBJ filter inheritance

## Pseudocode (Before)
Semantics from XNU:

```c
// current firmware path differs from legacy apply_to_proc shape
apply_or_policy_update(...);
```

- If a filter mask pointer is `NULL`, the later dispatch path does **not** perform the extra mask-based deny/evaluate step.
- If a filter mask pointer is present and the bit is clear, the kernel falls back into MACF/Sandbox evaluation.
- If a filter mask pointer is present and the bit is set, the indexed Unix/Mach path does **not** fall into the extra policy callback.
- For KOBJ/MIG there is an important nuance: a non-`NULL` all-ones mask suppresses callback evaluation only when the message already has a registered `kobjidx`; `KOBJ_IDX_NOT_SET` still reaches policy evaluation.
- Therefore, `NULL`-mask install and all-ones install are related but **not identical** behaviors. Historical upstream C22 is the all-ones variant, not the `NULL` variant.
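Those three cases for the Unix/Mach paths can be sketched as a small truth table, assuming the byte-granular, low-bit-first bitstring convention of XNU's `bitstring.h` (an assumption worth confirming against the actual dispatch sites before relying on the bit math):

```python
def extra_policy_callback_runs(mask, syscall_nr):
    """Mirror the semantics stated above for the Unix/Mach dispatch paths:
    the extra policy callback runs only when a mask is installed and the
    bit for this syscall number is clear."""
    if mask is None:
        return False  # no mask installed: no mask-based evaluate step at all
    return not (mask[syscall_nr >> 3] & (1 << (syscall_nr & 7)))

# 64 mask bytes cover syscall numbers 0..511, so 439 is in range.
assert not extra_policy_callback_runs(None, 439)            # NULL mask
assert not extra_policy_callback_runs(b"\xff" * 64, 439)    # all-ones: bit set
assert extra_policy_callback_runs(b"\x00" * 64, 439)        # bit clear
```

The KOBJ/MIG nuance from the last bullet is deliberately not modeled here, which is exactly why the `NULL` and all-ones installs diverge in practice.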

## Pseudocode (After)
## Revalidated Live Call Chain (IDA)

```c
// no patch emitted on this build (fail-closed)
apply_or_policy_update(...);
```

### 1. Real apply layer in the sandbox kext

## Symbol Consistency
`_proc_apply_syscall_masks` at `0xfffffe00093b1a88`

- Recovered symbols exist for syscallmask create/destroy helpers.
- `_syscallmask_apply_to_proc` symbol is not recovered and legacy signature does not match current binary layout.

Decompiled shape:

## Patch Metadata
- Calls helper `sub_FFFFFE00093AE5E8(proc, 0, unix_mask)`
- Calls helper `sub_FFFFFE00093AE5E8(proc, 1, mach_mask)`
- Calls helper `sub_FFFFFE00093AE5E8(proc, 2, kobj_mask)`
- On failure, reports:
  - `"failed to apply unix syscall mask"`
  - `"failed to apply mach trap mask"`
  - `"failed to apply kernel MIG routine mask"`

- Patch document: `patch_syscallmask_apply_to_proc.md` (C22).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_syscallmask.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.

This is the real high-level “apply to proc” logic for the current kernel, even though the stripped symbol is now named `_proc_apply_syscall_masks`, not `_syscallmask_apply_to_proc`.

## Target Function(s) and Binary Location
### 2. Immediate callers of `_proc_apply_syscall_masks`

- Primary target: `syscallmask_apply_to_proc` path plus zalloc_ro_mut update helper.
- Patchpoint combines branch policy bypass and helper-site mutation where matcher is valid.

IDA xrefs show live callers:

## Kernel Source File Location
- `_proc_apply_sandbox` at `0xfffffe00093b17d4`
- `_hook_cred_label_update_execve` at `0xfffffe00093d0dfc`

- Likely XNU source family: `bsd/kern/kern_proc.c` plus task/proc state mutation helpers.
- Confidence: `low` (layout drift noted).

That means this path is exercised both when sandbox labels are applied and during exec-time label updates.

## Function Call Stack
### 3. Helper that bridges into kernel proc/task RO state setters

- Primary traced chain (from `Call-Stack Analysis`):
  - Current firmware exposes syscallmask create/destroy/hook-policy flows.
  - Legacy apply-to-proc prologue shape required by C22 shellcode was not found in anchor-near candidates.
- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file.

`sub_FFFFFE00093AE5E8` at `0xfffffe00093ae5e8`

## Patch Hit Points
Observed behavior:

- Patch hitpoint is selected by contextual matcher and verified against local control-flow.
- Before/after instruction semantics are captured in the patch-site evidence above.
- Accepts `(proc, which, maskptr)`
- If `maskptr != NULL`, loads the expected mask length for `which`
- Tail-calls into kernel text at `0xfffffe0007fd0c74`

## Current Patch Search Logic
This helper is a narrow wrapper for the true setter logic.

- Implemented in `scripts/patchers/kernel_jb_patch_syscallmask.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- String anchors:
  - Legacy apply-to-proc prologue shape required by C22 shellcode was not found in anchor-near candidates.

### 4. Kernel-side setter core

## Validation (Static Evidence)
The tail-call target is inside `sub_FFFFFE0007FD0B64`, entered at `0xfffffe0007fd0c74`.

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`
|
||||
- Address-level evidence in this document is consistent with patcher matcher intent.
|
||||
Validated behavior from disassembly:
|
||||
|
||||
## Expected Failure/Panic if Unpatched
|
||||
- `which == 0` (Unix): if `X2 == 0`, length validation is skipped and the proc RO syscall-mask pointer is updated with `NULL`
|
||||
- `which == 1` (Mach): if `X2 == 0`, length validation is skipped and the task Mach filter pointer is updated with `NULL`
|
||||
- `which == 2` (KOBJ/MIG): if `X2 == 0`, length validation is skipped and the task KOBJ filter pointer is updated with `NULL`
|
||||
- Invalid `which` returns `EINVAL` (`0x16`)
|
||||
|
||||
- Syscall mask restrictions remain active; required syscall surface for bootstrap stays blocked.
|
||||
This matches the XNU setter semantics closely enough to trust the mapping.
|
||||
|
||||
## Risk / Side Effects
|
||||
## PCC 26.1 Upstream-Exact Reconstruction
|
||||
|
||||
- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
|
||||
- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows.
|
||||
On the exact PCC 26.1 research kernel matching the historical upstream script, the original C22 chain resolves as follows:
|
||||
|
||||
## Symbol Consistency Check
|
||||
- apply-wrapper entry: `0xfffffe00093994f8` (`sub_FFFFFE00093994F8`)
|
||||
- high-level caller: `0xfffffe000939c998` (`sub_FFFFFE000939C998`)
|
||||
- upstream patch writes at:
|
||||
- `0xfffffe0009399530` — original `BL` replaced by `mov x17, x0`
|
||||
- `0xfffffe0009399584` — original tail branch replaced by branch to cave
|
||||
- `0xfffffe0007ab5740` — code cave / data blob region
|
||||
|
||||
- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`.
|
||||
- Canonical symbol hit(s): none (alias-based static matching used).
|
||||
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
|
||||
- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe0007609236` is a patchpoint/data-site (`Not a function`), so function naming is inferred from surrounding control-flow and xrefs.
|
||||
Validated wrapper behavior before patch:
|
||||
|
||||
## Open Questions and Confidence
|
||||
- `sub_FFFFFE000939C998` calls `sub_FFFFFE00093994F8(proc, 0, unix_mask)`
|
||||
- then `sub_FFFFFE00093994F8(proc, 1, mach_mask)`
|
||||
- then `sub_FFFFFE00093994F8(proc, 2, kobj_mask)`
|
||||
- failures map to the three familiar strings:
|
||||
- `failed to apply unix syscall mask`
|
||||
- `failed to apply mach trap mask`
|
||||
- `failed to apply kernel MIG routine mask`
|
||||
|
||||
- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain.
|
||||
- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial).
|
||||
This is the older PCC 26.1 form of the same logic that appears as `_proc_apply_syscall_masks` on the newer kernel.
|
||||
|
||||
## Evidence Appendix
|
||||
At the low wrapper level, `sub_FFFFFE00093994F8` does this:
|
||||
|
||||
- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
|
||||
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.
|
||||
- if `maskptr == NULL`, skip the pre-processing helper
|
||||
- otherwise call helper at `0xfffffe0007b761e0` with:
|
||||
- `x0` = zone/RO-mutation selector loaded from `word_FFFFFE0007A58354`
|
||||
- `x1` = backing object/pointer loaded from `qword_FFFFFE0007A58358`
|
||||
- `x2` = original mask pointer
|
||||
- then load `x3 = masklen_bits` from a small selector table
|
||||
- then tail-branch into setter core at `0xfffffe0007fc7220`
|
||||
|
||||
## Runtime + IDA Verification (2026-03-05)
|
||||
The historical upstream patch hijacks exactly this seam.
|
||||
|
||||
- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
|
||||
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
|
||||
- Base VA: `0xFFFFFE0007004000`
|
||||
- Runtime status: `hit` (2 patch writes, method_return=True)
|
||||
- Included in `KernelJBPatcher.find_all()`: `True`
|
||||
- IDA mapping: `2/2` points in recognized functions; `0` points are code-cave/data-table writes.
|
||||
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
|
||||
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
|
||||
- Call-chain validation: `1` function nodes, `2` patch-point VAs.
|
||||
- IDA function sample: `_profile_syscallmask_destroy`
|
||||
- Chain function sample: `_profile_syscallmask_destroy`
|
||||
- Caller sample: `_profile_uninit`, `sub_FFFFFE00093AE678`
|
||||
- Callee sample: `sub_FFFFFE0008302368`, `sub_FFFFFE00093AE70C`
|
||||
- Verdict: `valid`
|
||||
- Recommendation: Keep enabled for this kernel build; continue monitoring for pattern drift.
|
||||
- Policy note: method is in the low-risk optimized set (validated hit on this kernel).
|
||||
- Key verified points:
|
||||
- `0xFFFFFE00093AE6E4` (`_profile_syscallmask_destroy`): mov x0,xzr [_syscallmask_apply_to_proc low-risk] | `ff8300d1 -> e0031faa`
|
||||
- `0xFFFFFE00093AE6E8` (`_profile_syscallmask_destroy`): retab [_syscallmask_apply_to_proc low-risk] | `fd7b01a9 -> ff0f5fd6`
|
||||
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
|
||||
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
|
||||
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
|
||||
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
|
||||
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->
|
||||
### Exact shellcode semantics

Using the original upstream words from the script, the cave body at `0xfffffe0007ab5768` disassembles to:

- preserve original args in `x19..x22`
- use the saved original helper arg in `x17`
- compute `ceil(masklen_bits / 8)` in `x4`
- call helper at `0xfffffe0007b76258`
- restore original `(proc, which, maskptr, masklen_bits)`
- branch back into setter core at `0xfffffe0007fc7220`

Crucially, the helper call is made with:

- `x0 = x17`
- `x1 = x21` (original `maskptr`)
- `x2 = 0` (offset)
- `x3 = cave base = 0xfffffe0007ab5740`
- `x4 = ceil(masklen_bits / 8)`

and the cave base holds a large `0xFF` blob.

That means the upstream patch mutates the pointed-to mask buffer in place so that the first `ceil(masklen_bits / 8)` bytes become `0xFF`, then installs that mask through the normal setter.
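As a sanity check on that byte count, a minimal Python sketch (ours, for illustration; the function name is not from the patcher) reproduces the mask the cave writes:

```python
def allow_all_mask(masklen_bits: int) -> bytes:
    """Build the all-ones allow mask the cave copies over the original
    buffer: ceil(masklen_bits / 8) bytes of 0xFF."""
    nbytes = (masklen_bits + 7) // 8  # same rounding the shellcode computes in x4
    return b"\xff" * nbytes
```

Every set bit means “allowed”, which is why this differs from installing a `NULL` mask pointer.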

### Final semantic conclusion for upstream C22

The original upstream C22 patch is therefore:

- **not** “skip syscallmask apply”
- **not** “return success early”
- **not** “clear the mask pointer”

It is:

- **rewrite the mask contents to an all-ones allow mask, then continue through the normal setter path**

This is the closest faithful behavioral description of historical C22.

### Implication for modern reimplementation

If we want to reproduce upstream behavior exactly, the modern patch should preserve the apply/setter path and force the effective Unix/Mach/KOBJ masks to all ones.

If we prefer a smaller and likely safer patch for bring-up, the `NULL`-mask strategy remains attractive, but it is a modern simplification rather than an exact upstream reconstruction.

## Legacy Upstream Mapping

The pasted legacy script matches the historical upstream `syscallmask` shellcode patch that this repo later labeled as C22.

Concrete markers that identify it:

- shellcode cave at `0xAB1740`
- redirect from `0x2395584`
- setup write at `0x2395530` (`mov x17, x0`)
- tail branch to `_proc_set_syscall_filter_mask`
- in-cave call to `_zalloc_ro_mut`

Semantically, that upstream patch is **not** a destroy-path patch and **not** a plain early-return patch. It does this instead:

1. If the incoming mask pointer is `NULL`, skip the custom work.
2. Otherwise compute `ceil(mask_bits / 8)`.
3. Use `_zalloc_ro_mut` to overwrite the target read-only mask storage with bytes sourced from an in-cave `0xFF` blob.
4. Resume into `_proc_set_syscall_filter_mask`.

This means the historical upstream intent was:

- keep the mask object/path alive
- but force the installed syscall/mach/kobj mask to become an **all-ones allow mask**

That is an important semantic distinction from the newer `NULL`-mask strategy documented later in this file:

- **legacy upstream shellcode** => installed mask exists and all bits are allowed
- **proposed modern narrow patch** => installed mask pointer becomes `NULL`

Both strategies bypass this mask-based interception layer in practice, but they are not identical. If we want the closest behavioral match to the historical upstream patch, the modern equivalent should preserve the setter path and write an all-ones mask, not simply early-return.

## Fresh Independent Conclusions (`2026-03-06`)

- The legacy pasted script maps to the historical upstream `syscallmask` shellcode patch later labeled `C22` in this repo.
- The old repo “C22” was a false-positive hit in `_profile_syscallmask_destroy`; that patch class did not control mask installation and is not a trustworthy reference for behavior.
- The faithful upstream C22 class is: hijack the low wrapper, preserve the normal setter path, mutate the effective Unix/Mach/KOBJ mask bytes to all `0xFF`, then tail-branch back into the setter.
- Source-level equivalence is closest to calling `proc_set_syscall_filter_mask(..., all_ones_mask, expected_len)` for `which = 0/1/2`, not `proc_set_syscall_filter_mask(..., NULL, 0)`.
- XNU cross-check matters here: an all-ones mask and a `NULL` mask are behaviorally different for KOBJ/MIG when `kobjidx` is not registered, so the two strategies must stay documented as separate patch classes.
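A toy model (ours, not kernel code) makes the distinction between the two strategies concrete: with a mask installed, dispatch consults the per-call bit and only falls through to the policy evaluator when the bit is clear; with no mask installed, this layer is skipped entirely. Bit ordering here is illustrative.

```python
def filter_decision(mask, nr):
    """Toy model of the mask-based interception layer:
    - mask is None -> no mask installed, this layer is skipped
    - bit set      -> allowed without consulting the evaluator
    - bit clear    -> fall through to the MACF/Sandbox evaluator
    """
    if mask is None:                        # NULL-mask strategy
        return "no-mask-layer"
    if mask[nr // 8] & (1 << (nr % 8)):     # all-ones strategy always lands here
        return "allowed"
    return "consult-evaluator"
```

With an all-ones mask every call returns `"allowed"`; with `None` the layer is absent, which is observably different wherever the surrounding code distinguishes “mask present” from “mask absent” (the KOBJ/MIG `kobjidx` case above).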

## New Plan

1. Keep the rebuilt all-ones wrapper retarget as the authoritative C22 baseline, because it is the closest match to the historical upstream PCC 26.1 shellcode.
2. Treat `NULL`-mask installation as a separate modern experiment only; do not describe it as “what upstream C22 did”.
3. Re-check the live runtime interaction of C22 with `_proc_apply_syscall_masks`, `_proc_apply_sandbox`, and `_hook_cred_label_update_execve` before blaming any future boot issue on C22 alone.
4. If runtime anomalies remain, classify them by enforcement site:
   - Unix syscall mask regression
   - Mach trap mask regression
   - KOBJ/MIG `KOBJ_IDX_NOT_SET` residual policy path
5. Only after the exact upstream-equivalent path is exhausted should we prototype a separate `NULL`-mask variant for comparison.

## What The Old C22 Implementation Actually Hit

Historical runtime verification logged these writes:

- `0xfffffe00093ae6e4`: `ff8300d1 -> e0031faa`
- `0xfffffe00093ae6e8`: `fd7b01a9 -> ff0f5fd6`

IDA mapping shows both addresses are inside `_profile_syscallmask_destroy` at `0xfffffe00093ae6a4`, not inside any apply-to-proc routine.

More specifically:

- `_profile_syscallmask_destroy` normal path ends at `0xfffffe00093ae6dc`
- `0xfffffe00093ae6e0` is the start of the **underflow panic slow path**
- The old patch replaced instructions in that slow path only

So the old “low-risk early return” did **not** disable syscall mask installation. It merely neutered a panic-reporting subpath after profile mask count underflow.

## Why The Old Matcher Misidentified The Target

The old patcher logic in `scripts/patchers/kernel_jb_patch_syscallmask.py` relies on:

- string anchor `"syscallmask.c"`
- nearby function-start recovery using `PACIBSP`
- legacy 4-argument prologue heuristics from an older shellcode-based implementation

On this kernel:

- the legacy `_syscallmask_apply_to_proc` shape is gone
- the nearby string cluster includes create/destroy/populate helpers
- the nearest `PACIBSP` around the string is at `0xfffffe00093ae6e0`, which is **not a real function entry** for the apply path

That is why the old low-risk fallback produced a false positive.

## Real Targets That Matter

### Safe semantic target

`_proc_apply_syscall_masks` at `0xfffffe00093b1a88`

This is the right place if the goal is:

- allow processes to keep running without syscall/mach/kobj mask-based interception
- preserve surrounding control flow and error handling
- avoid corrupting parser state or shared kernel setter logic

### Alternative narrower helper target

`sub_FFFFFE00093AE5E8` at `0xfffffe00093ae5e8`

This helper only appears to serve the apply layer here, but it is still a broader patch than changing the three call sites directly.

## Recommended Patch Strategy (Not Applied Here)

Per your instruction, no repository code changes are landed here. This section documents the patch strategy that appears correct from the live re-analysis.

### Preferred strategy: clear masks explicitly at the three call sites

Patch the three `LDR X2, [X8]` instructions in `_proc_apply_syscall_masks` to `MOV X2, XZR`.

Patchpoints:

1. Unix mask load
   - VA: `0xfffffe00093b1abc`
   - Before: `020140f9` (`ldr x2, [x8]`)
   - After: `e2031faa` (`mov x2, xzr`)
2. Mach trap mask load
   - VA: `0xfffffe00093b1af0`
   - Before: `020140f9` (`ldr x2, [x8]`)
   - After: `e2031faa` (`mov x2, xzr`)
3. KOBJ/MIG mask load
   - VA: `0xfffffe00093b1b28`
   - Before: `020140f9` (`ldr x2, [x8]`)
   - After: `e2031faa` (`mov x2, xzr`)
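The before/after words in these notes can be cross-checked mechanically. A minimal encoding helper (ours, for verification only; the encodings themselves are standard A64: `MOV Xd, XZR` is the `ORR Xd, XZR, XZR` alias) reproduces the little-endian hex strings used throughout this document:

```python
import struct

def enc(word: int) -> str:
    """AArch64 instruction word -> little-endian hex, as quoted in patch notes."""
    return struct.pack("<I", word).hex()

def mov_x_zr(rd: int) -> str:
    """MOV Xd, XZR == ORR Xd, XZR, XZR: base word 0xAA1F03E0 plus Rd."""
    return enc(0xAA1F03E0 | rd)

assert mov_x_zr(2) == "e2031faa"          # mov x2, xzr
assert mov_x_zr(0) == "e0031faa"          # mov x0, xzr
assert mov_x_zr(19) == "f3031faa"         # mov x19, xzr
assert enc(0xD503201F) == "1f2003d5"      # nop
assert enc(0xF9400102) == "020140f9"      # ldr x2, [x8]
```

This confirms the three `020140f9 -> e2031faa` rewrites are exactly `ldr x2, [x8]` to `mov x2, xzr`.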

Why this is preferred:

- It preserves `_proc_apply_syscall_masks` control flow and error propagation.
- It still calls the existing setter path for all three mask types.
- The setter already supports `maskptr == NULL`, so this becomes a clean “clear installed filters” operation instead of a malformed early return.
- It avoids stale inherited masks remaining attached to the process.

### Secondary strategy: null out the helper argument once

Single-site alternative:

- VA: `0xfffffe00093ae600`
- Before: `f40301aa` (`mov x19, x2`)
- After: `f3031faa` (`mov x19, xzr`)

This also forces all three setter calls to receive `NULL`, but it is slightly wider than the three-site `_proc_apply_syscall_masks` patch and depends on there being no unintended callers of this helper entry.

## What Not To Patch

### Do not patch `_profile_syscallmask_destroy`

- Address: `0xfffffe00093ae6a4`
- Reason: lifecycle cleanup only; old C22 hit this by mistake

### Do not patch `_populate_syscall_mask`

- Address: `0xfffffe00093cf7f4`
- Reason: parser/allocation path for sandbox profile data; breaking it risks malformed state during sandbox construction and early boot

### Avoid patching the kernel-side setter core directly unless necessary

- Entry used here: `0xfffffe0007fd0c74`
- Reason: shared proc/task RO setters are broader-scope and easier to overpatch than the sandbox apply wrapper

## Expected Effect Of The Recommended Patch

If the three load sites are rewritten to `mov x2, xzr`:

- Unix syscall filter mask is cleared
- Mach trap filter mask is cleared
- Kernel MIG/kobject filter mask is cleared
- Later dispatchers no longer see an installed mask pointer for those channels
- The syscall/mach/kobj “bit clear -> consult MACF/Sandbox evaluator” layer is therefore skipped for these mask-based checks

This does **not** disable every sandbox/MACF path. It only removes this specific mask-installation layer.

## Why A Plain Early Return Is Inferior

A naive early return from `_proc_apply_syscall_masks` would likely return success, but it may leave previously inherited masks untouched.

That is especially risky because XNU inherits these masks across fork/task creation:

- Unix: `research/reference/xnu/bsd/kern/kern_fork.c:1028`
- Mach/KOBJ: `research/reference/xnu/osfmk/kern/task.c:1759`

So an early return can leave stale filter pointers in place, while the explicit `NULL`-setter strategy actively clears them.

## Boot-Risk Assessment

Most plausible failure modes if this family is patched incorrectly:

- stale or invalid mask pointers remain attached to early boot tasks
- Mach/KOBJ traffic gets filtered unexpectedly during bootstrap
- parser/create/destroy bookkeeping becomes inconsistent
- a broad setter patch corrupts proc/task RO state outside the intended sandbox apply path

The proposed three-site `mov x2, xzr` strategy is the narrowest approach found so far that still achieves the intended jailbreak effect.

## Repository Implementation Status

As of `2026-03-06`, the repository implementation has been updated to follow the revalidated C22 design:

- locate the high-level apply manager from the three `failed to apply ... mask` strings
- identify the shared low wrapper that is called with `which = 0/1/2`
- replace the wrapper's pre-setter helper `BL` with `mov x17, x0`
- replace the wrapper's tail `B` with a branch to a code cave
- in the cave, build an all-ones blob, call the structurally-derived mutation helper, then tail-branch back into the normal setter core

Focused dry-run validation on `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600` now emits exactly 3 writes:

- `0x02395530` — `mov x17,x0 [syscallmask C22 save RO selector]`
- `0x023955E8` — `b cave [syscallmask C22 mutate mask then setter]`
- `0x00AB1720` — `syscallmask C22 cave (ff blob 0x100 + structural mutator + setter tail)`

This restores the intended patch class while avoiding the previous false-positive hit on `_profile_syscallmask_destroy`.

User validation note: boot succeeded with the rebuilt C22 enabled on `2026-03-06`.

## Bottom Line

- The historical C22 implementation is mis-targeted.
- The real current “apply to proc” logic is `_proc_apply_syscall_masks`, not `_profile_syscallmask_destroy`.
- The historical upstream patch class is **not** `NULL`-mask install; it is **all-ones mask mutation plus normal setter continuation**.
- The rebuilt wrapper/cave retarget matches that upstream class and has already reached user-reported boot success on `2026-03-06`.
- `NULL`-mask install remains a separate modern alternative worth studying later, especially because KOBJ/MIG semantics differ when `kobjidx` is unset.

# B9 `patch_vm_fault_enter_prepare`

## Patch Goal

NOP a strict state/permission check site in `vm_fault_enter_prepare` identified by the `BL -> LDRB [..,#0x2c] -> TBZ/TBNZ` fingerprint.

## Binary Targets (IDA + Recovered Symbols)

- Recovered symbol: `vm_fault_enter_prepare` at `0xfffffe0007bb8818`.
- Anchor string: `"vm_fault_enter_prepare"` at `0xfffffe0007048ec8`.
- String xrefs in this function: `0xfffffe0007bb88c4`, `0xfffffe0007bb944c`.

## Call-Stack Analysis

Representative static callers:

- `vm_fault_internal` (`0xfffffe0007bb6ef0`) -> calls `vm_fault_enter_prepare`.
- `sub_FFFFFE0007BB8294` (`0xfffffe0007bb8350`) -> calls `vm_fault_enter_prepare`.

This confirms B9 is in the central page-fault preparation path.

## Patch-Site / Byte-Level Change

Unique strict matcher hit in `vm_fault_enter_prepare`:

- `0xfffffe0007bb898c`: `BL sub_FFFFFE0007C4B7DC`
- `0xfffffe0007bb8990`: `LDRB W8, [X20,#0x2C]`
- `0xfffffe0007bb8994`: `TBZ W8, #5, loc_FFFFFE0007BB89C4`

Patch operation:

- NOP the BL at `0xfffffe0007bb898c`.

Bytes:

- before: `94 4B 02 94` (`BL ...`)

# B9 `patch_vm_fault_enter_prepare` — re-analysis (2026-03-06)

## Scope

- Kernel: `kernelcache.research.vphone600`
- Primary function: `vm_fault_enter_prepare` @ `0xfffffe0007bb8818`
- Existing patch point emitted by the patcher: `0xfffffe0007bb898c`
- Existing callee at that point: `sub_FFFFFE0007C4B7DC`
- Paired unlock callee immediately after the guarded block: `sub_FFFFFE0007C4B9A4`

## Executive Summary

The current `patch_vm_fault_enter_prepare` analysis was wrong.

The patched instruction at `0xfffffe0007bb898c` is **not** a runtime code-signing gate and **not** a generic policy-deny helper. It is the lock-acquire half of a `pmap_lock_phys_page()` / `pmap_unlock_phys_page()` pair used while consuming the page's `vmp_clustered` state.

So the current patch does this:

- skips the physical-page / PVH lock acquire,
- still executes the protected critical section,
- still executes the corresponding unlock,
- therefore breaks lock pairing and page-state synchronization inside the VM fault path.

That is fully consistent with a boot-time failure.

## What the current patcher actually matches

Current implementation: `scripts/patchers/kernel_jb_patch_vm_fault.py:7`

The matcher looks for this in-function shape:

- `BL target` (rare)
- `LDRB wN, [xM, #0x2c]`
- `TBZ/TBNZ wN, #bit, ...`

That logic resolves to exactly one site in `vm_fault_enter_prepare` and emits:

- VA: `0xFFFFFE0007BB898C`
- Patch: `944b0294 -> 1f2003d5`
- Description: `NOP [_vm_fault_enter_prepare]`
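The three-instruction fingerprint can be stated as a word-level predicate. A simplified sketch (ours; the real matcher in `kernel_jb_patch_vm_fault.py` adds rarity and in-function constraints) shows how little context the shape itself carries, and therefore how easily it can over-match:

```python
def is_bl(w: int) -> bool:
    """BL imm26: top six bits are 100101."""
    return (w >> 26) == 0x25

def is_ldrb_imm_0x2c(w: int) -> bool:
    """LDRB Wt, [Xn, #imm12] (unsigned offset), with imm12 == 0x2C."""
    return (w & 0xFFC00000) == 0x39400000 and ((w >> 10) & 0xFFF) == 0x2C

def is_tbz_tbnz(w: int) -> bool:
    """TBZ/TBNZ, ignoring the b5 bit (bit 31) and the op bit (bit 24)."""
    return (w & 0x7E000000) == 0x36000000

def matches_shape(w0: int, w1: int, w2: int) -> bool:
    """Three consecutive words: BL, LDRB [..,#0x2c], TBZ/TBNZ."""
    return is_bl(w0) and is_ldrb_imm_0x2c(w1) and is_tbz_tbnz(w2)
```

For example, the matched site's `BL` word `944b0294` (little-endian bytes) is `0x94024B94`, which satisfies `is_bl`.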

IDA disassembly at the matched site:

```asm
0xfffffe0007bb8988 MOV X0, X27
0xfffffe0007bb898c BL sub_FFFFFE0007C4B7DC
0xfffffe0007bb8990 LDRB W8, [X20,#0x2C]
0xfffffe0007bb8994 TBZ W8, #5, loc_FFFFFE0007BB89C4
0xfffffe0007bb8998 LDR W8, [X20,#0x1C]
...
0xfffffe0007bb89c0 STR W8, [X20,#0x2C]
0xfffffe0007bb89c4 MOV X0, X27
0xfffffe0007bb89c8 BL sub_FFFFFE0007C4B9A4
```

The old assumption was: “call helper, then test a security flag, so NOP the helper.”

The re-analysis result is: the call is a lock acquire, the tested bit is `m->vmp_clustered`, and the second call is the matching unlock.

## PCC 26.1 Research: upstream site vs derived site

Using the user-loaded `PCC-CloudOS-26.1-23B85` `kernelcache.research.vphone600`, extracted locally to a temporary raw Mach-O, the upstream hard-coded site and our derived matcher do **not** land on the same instruction.

### Upstream hard-coded site

Upstream script site:

- raw file offset: `0x00BA9E1C`
- mapped VA in `26.1 research`: `0xFFFFFE0007BADE1C`
- instruction: `TBZ W22, #3, loc_...DE28`
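The offset/VA pairs quoted in this section follow a flat mapping against the base VA stated in the runtime-verification notes (`0xFFFFFE0007004000`). A small helper (ours) makes the correspondence checkable; it assumes the extracted raw Mach-O's text is mapped contiguously from that base, which holds for the offsets quoted here:

```python
BASE_VA = 0xFFFFFE0007004000  # base VA from the runtime-verification notes

def off_to_va(file_off: int) -> int:
    """Raw kernelcache file offset -> mapped VA (flat-mapping assumption)."""
    return BASE_VA + file_off

def va_to_off(va: int) -> int:
    """Mapped VA -> raw kernelcache file offset."""
    return va - BASE_VA
```

For instance, `off_to_va(0x00BA9E1C)` yields the upstream site VA `0xFFFFFE0007BADE1C`.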

Local disassembly around the upstream site:

```asm
0xfffffe0007bade10 CBZ X27, loc_...DEE4
0xfffffe0007bade14 LDR X0, [X27,#0x488]
0xfffffe0007bade18 B loc_...DEE8
0xfffffe0007bade1c TBZ W22, #3, loc_...DE28 ; upstream NOP site
0xfffffe0007bade20 MOV W23, #0
0xfffffe0007bade24 B loc_...E004
0xfffffe0007bade28 ...
0xfffffe0007bade94 BL 0xfffffe0007f82428
0xfffffe0007bade98 CBZ W0, loc_...DF54
```

This means the upstream patch is not hitting the later helper call directly. It is patching a branch gate immediately before a larger validation/decision block. Replacing this `TBZ` with `NOP` forces fall-through into:

- `MOV W23, #0`
- `B loc_...E004`

So the likely effect is to skip the subsequent validation path entirely.

### Current derived matcher site

Current derived `patch_vm_fault_enter_prepare()` site on the **same 26.1 research raw**:

- raw file offset: `0x00BA9BB0`
- mapped VA: `0xFFFFFE0007BADBB0`
- instruction: `BL 0xFFFFFE0007C4007C`

The local patcher was run directly on the extracted `26.1 research` raw Mach-O and emitted:

- `0x00BA9BB0 NOP [_vm_fault_enter_prepare]`

Local disassembly around the derived site:

```asm
0xfffffe0007badbac MOV X0, X27
0xfffffe0007badbb0 BL 0xfffffe0007c4007c ; derived NOP site
0xfffffe0007badbb4 LDRB W8, [X20,#0x2C]
0xfffffe0007badbb8 TBZ W8, #5, loc_...DBE8
...
0xfffffe0007badbe8 MOV X0, X27
0xfffffe0007badbec BL 0xfffffe0007c40244
```
|
||||
|
||||
And the two helpers decode as the same lock/unlock pair seen in later analysis:
|
||||
|
||||
- `0xFFFFFE0007C4007C`: physical-page indexed lock acquire path (`LDXR` / `CASA` fast path, contended lock path)
|
||||
- `0xFFFFFE0007C40244`: matching unlock path
|
||||
|
||||
### Meaning of the mismatch
|
||||
|
||||
This is the key clarification:
|
||||
|
||||
- the **upstream** patch is very likely semantically related to the `vm_fault_enter_prepare` runtime validation path on `26.1 research`;
|
||||
- the **derived patcher** in this repository does **not** reproduce that upstream site;
|
||||
- instead, it drifts earlier in the same larger function region and NOPs a lock-acquire call.
|
||||
|
||||
So the most likely situation is **not** “the upstream author typed the wrong function name.”
|
||||
|
||||
The more likely situation is:
|
||||
|
||||
1. upstream had a real site in `26.1 research`;
|
||||
2. our repository later generalized that idea into a pattern matcher;
|
||||
3. that matcher overfit the wrong local shape (`BL` + `LDRB [#0x2c]` + `TBZ`) and started hitting the wrong block.
|
||||
|
||||
In other words: the current bug is much more likely a **bad derived matcher / bad retarget**, not proof that the original upstream `26.1` patch label was bogus.
|
||||
|
||||
## IDA evidence: what the callees really are
|
||||
|
||||
### `sub_FFFFFE0007C4B7DC`
|
||||
|
||||
IDA shows a physical-page-index based lock acquisition routine, not a deny/policy check:
|
||||
|
||||
- takes `X0` as page number / index input,
|
||||
- checks whether the physical page is in-range,
|
||||
- on the normal path acquires a lock associated with that physical page,
|
||||
- on contended paths may sleep / block,
|
||||
- returns only after the lock is acquired.
|
||||
|
||||
Key observations from IDA:
|
||||
|
||||
- the function begins by deriving an indexed address from `X0` (`UBFIZ X9, X0, #0xE, #0x20`),
|
||||
- it performs lock acquisition with `LDXR` / `CASA` on a fallback lock or calls into a lower lock primitive,
|
||||
- it contains a contended-wait path (`assert_wait`, `thread_block` style flow),
- it does **not** contain a boolean policy return used by the caller.

This matches `pmap_lock_phys_page(ppnum_t pn)` semantics.

### `sub_FFFFFE0007C4B9A4`

IDA shows the paired unlock routine:

- same page-number based addressing scheme,
- direct fast-path jump into a low-level unlock helper for the backup lock case,
- range-based path that reconstructs a `locked_pvh_t`-like wrapper and unlocks the per-page PVH lock.

This matches `pmap_unlock_phys_page(ppnum_t pn)` semantics.

## XNU source mapping

The matched basic block in `vm_fault_enter_prepare()` maps cleanly onto the `m->vmp_pmapped == FALSE && m->vmp_clustered` handling in XNU.

Relevant source: `research/reference/xnu/osfmk/vm/vm_fault.c:3958`

```c
if (m->vmp_pmapped == FALSE) {
    if (m->vmp_clustered) {
        if (*type_of_fault == DBG_CACHE_HIT_FAULT) {
            if (object->internal) {
                *type_of_fault = DBG_PAGEIND_FAULT;
            } else {
                *type_of_fault = DBG_PAGEINV_FAULT;
            }
            VM_PAGE_COUNT_AS_PAGEIN(m);
        }
        VM_PAGE_CONSUME_CLUSTERED(m);
    }
}
```

The lock/unlock comes from `VM_PAGE_CONSUME_CLUSTERED(mem)` in `research/reference/xnu/osfmk/vm/vm_page_internal.h:999`:

```c
#define VM_PAGE_CONSUME_CLUSTERED(mem)                \
    MACRO_BEGIN                                       \
    ppnum_t __phys_page;                              \
    __phys_page = VM_PAGE_GET_PHYS_PAGE(mem);         \
    pmap_lock_phys_page(__phys_page);                 \
    if (mem->vmp_clustered) {                         \
        vm_object_t o;                                \
        o = VM_PAGE_OBJECT(mem);                      \
        assert(o);                                    \
        o->pages_used++;                              \
        mem->vmp_clustered = FALSE;                   \
        VM_PAGE_SPECULATIVE_USED_ADD();               \
    }                                                 \
    pmap_unlock_phys_page(__phys_page);               \
    MACRO_END
```

And those helpers are defined here:

- `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap.c:7520` — `pmap_lock_phys_page(ppnum_t pn)`
- `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap.c:7535` — `pmap_unlock_phys_page(ppnum_t pn)`
- `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:330` — `pvh_lock(unsigned int index)`
- `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:497` — `pvh_unlock(locked_pvh_t *locked_pvh)`

## Why the current patch can break boot

The current patch NOPs only the acquire side:

- before: `BL sub_FFFFFE0007C4B7DC`
- after: `NOP`

But the surrounding code still:

- reads `m->vmp_clustered`,
- may increment `object->pages_used`,
- clears `m->vmp_clustered`,
- calls `sub_FFFFFE0007C4B9A4` unconditionally afterwards.

That means the patch turns a balanced critical section into:

1. no lock acquire,
2. mutate shared page/object state,
3. unlock a lock that was never acquired.

Concrete risks:

- PVH / backup-lock state corruption,
- waking or releasing waiters against an unowned lock,
- racing `m->vmp_clustered` / `object->pages_used` updates during active fault handling,
- early-boot hangs or panics when clustered pages are first faulted in.

This is a much stronger explanation for the observed boot failure than the old “wrong security helper” theory.

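The acquire/release imbalance is easy to reproduce in miniature. A minimal sketch, using Python's `threading.Lock` as a stand-in for the per-page PVH lock (the names are illustrative, not from XNU):

```python
import threading

pvh_lock = threading.Lock()  # stands in for the per-page PVH lock

def consume_clustered_patched():
    """Mimics the patched flow: lock acquire NOP-ed out, release still present."""
    # pmap_lock_phys_page(...)  <- NOP-ed by the patch, so nothing happens here
    pages_used = 1               # mutate shared state without holding the lock
    try:
        pvh_lock.release()       # pmap_unlock_phys_page(...) still runs
    except RuntimeError:
        return "release of un-owned lock"
    return "ok"

print(consume_clustered_patched())  # → release of un-owned lock
```

Python surfaces the imbalance as a `RuntimeError`; the kernel's low-level PVH/backup locks have no such guard, which is why the same imbalance shows up as state corruption or a hang instead.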
## What this patch actually changes semantically

If applied successfully, the patch does **not** bypass code-signing validation.

It only removes synchronization from this clustered-page bookkeeping path:

- page-in accounting (`DBG_CACHE_HIT_FAULT` -> `DBG_PAGEIND_FAULT` / `DBG_PAGEINV_FAULT`),
- `object->pages_used++`,
- `m->vmp_clustered = FALSE`,
- speculative-page accounting.

So the effective behavior is:

- **not** “allow weird userspace methods,”
- **not** “disable vm fault code-signing rejection,”
- **not** “bypass a kernel deny path,”
- only “break the lock discipline around clustered-page consumption.”

For the jailbreak goal, this patch is mis-targeted.

## Where the real security-relevant logic is in this function

Two genuinely security-relevant regions exist in the same XNU function, but they are **not** the current patch site:

1. `pmap_has_prot_policy(...)` handling in `research/reference/xnu/osfmk/vm/vm_fault.c:3943`
   - this is where protection-policy constraints are enforced for the requested mapping protections.
2. `vm_fault_validate_cs(...)` in `research/reference/xnu/osfmk/vm/vm_fault.c:3991`
   - this is the runtime code-signing validation path.

So if the jailbreak objective is “allow runtime execution / invocation patterns without kernel interception,” the current B9 patch is aimed at the wrong block.

## XNU source cross-mapping for the upstream 26.1 site

The `26.1 research` upstream site now maps cleanly to the `cs_bypass` fast-path semantics in XNU.

### Field mapping

From the `vm_fault_enter_prepare` function prologue in `26.1 research`:

```asm
0xfffffe0007bada60  MOV  X21, X7               ; fault_type
0xfffffe0007bada64  MOV  X25, X3               ; prot*
0xfffffe0007bada74  LDP  X28, X8, [X29,#0x10]  ; fault_info, type_of_fault*
0xfffffe0007bada78  LDR  W22, [X28,#0x28]      ; fault_info flags word
```

The XNU struct layout confirms that `fault_info + 0x28` is the packed boolean flag word, and **bit 3 is `cs_bypass`**:

- `research/reference/xnu/osfmk/vm/vm_object_xnu.h:112`
- `research/reference/xnu/osfmk/vm/vm_object_xnu.h:116`

### Upstream site semantics

The upstream hard-coded instruction is:

```asm
0xfffffe0007bade1c  TBZ  W22, #3, loc_...DE28
0xfffffe0007bade20  MOV  W23, #0
0xfffffe0007bade24  B    loc_...E004
```

Since `W22.bit3 == fault_info->cs_bypass`, this branch means:

- if `cs_bypass == 0`: continue into the runtime code-signing validation / violation path
- if `cs_bypass == 1`: skip that path, force `is_tainted = 0`, and jump to the common success/mapping continuation

Patching `TBZ` -> `NOP` therefore forces the **`cs_bypass` fast path unconditionally**.

### XNU source correspondence

This aligns with the source-level fast path in `vm_fault_cs_check_violation()`:

- `research/reference/xnu/osfmk/vm/vm_fault.c:2831`
- `research/reference/xnu/osfmk/vm/vm_fault.c:2833`

```c
if (cs_bypass) {
    *cs_violation = FALSE;
} else if (VMP_CS_TAINTED(...)) {
    *cs_violation = TRUE;
} ...
```

and with the caller in `vm_fault_validate_cs()` / `vm_fault_enter_prepare()`:

- `research/reference/xnu/osfmk/vm/vm_fault.c:3208`
- `research/reference/xnu/osfmk/vm/vm_fault.c:3233`
- `research/reference/xnu/osfmk/vm/vm_fault.c:3991`
- `research/reference/xnu/osfmk/vm/vm_fault.c:3999`

So the upstream patch is best understood as:

- forcing `vm_fault_validate_cs()` to behave as though `cs_bypass` were already set,
- preventing runtime code-signing violation handling for this fault path,
- still preserving the rest of the normal page mapping flow.

This is fundamentally different from the derived repository matcher, which NOPs a `pmap_lock_phys_page()` call and breaks lock pairing.

## Proposed repair strategy

### Recommended fix for B9

Retarget `patch_vm_fault_enter_prepare` to the **upstream semantic site**, not the current lock-site matcher.

For `PCC 26.1 / 23B85 / kernelcache.research.vphone600`, the concrete patch is:

- file offset: `0x00BA9E1C`
- VA: `0xFFFFFE0007BADE1C`
- before: `76 00 18 36` (`TBZ W22, #3, ...`)
- after: `1F 20 03 D5` (`NOP`)

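Before committing a retargeted write like this, the VA-to-offset math and the expected before-bytes can be sanity-checked offline. A minimal sketch (pure Python against a synthetic buffer, not the real kernelcache; `BASE_VA` is the unslid base quoted later in this document, and the decode masks are standard A64 TBZ encoding, not project APIs):

```python
BASE_VA = 0xFFFFFE0007004000           # kernelcache base VA (from this doc)
PATCH_VA = 0xFFFFFE0007BADE1C          # TBZ W22, #3, loc_...DE28
TBZ_BYTES = bytes.fromhex("76001836")  # little-endian 0x36180076
NOP_BYTES = bytes.fromhex("1f2003d5")  # AArch64 NOP

off = PATCH_VA - BASE_VA
assert off == 0x00BA9E1C               # matches the documented file offset

# Decode the 32-bit TBZ word to confirm it tests bit #3 of W22.
insn = int.from_bytes(TBZ_BYTES, "little")
assert (insn & 0x7F000000) == 0x36000000           # TBZ opcode
assert ((insn >> 19) & 0x1F) == 3                  # tested bit == cs_bypass bit
assert (insn & 0x1F) == 22                         # Rt == W22
target = PATCH_VA + (((insn >> 5) & 0x3FFF) << 2)  # imm14 * 4
assert target == 0xFFFFFE0007BADE28                # loc_...DE28

# Simulated write with a before-bytes guard, as a real patcher should do.
buf = bytearray(8)
buf[0:4] = TBZ_BYTES
assert buf[0:4] == TBZ_BYTES           # refuse to write if bytes drifted
buf[0:4] = NOP_BYTES
print(buf[0:4].hex())                  # → 1f2003d5
```

The before-bytes guard is the important habit: on a future kernelcache the same file offset may hold a different instruction, and the write must then be refused.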
### Why this is the right site

- It is in the correct `vm_fault_enter_prepare` control-flow region.
- It matches XNU's `cs_bypass` logic, not an unrelated lock helper.
- It preserves lock/unlock pairing and page accounting.
- It reproduces the **intent** of the upstream `26.1 research` patch rather than the accidental behavior of the derived matcher.

### How to implement the new matcher

The current matcher should be replaced, not refined.

#### Do not match

- `BL` followed by `LDRB [X?,#0x2C]` and `TBZ/TBNZ`
- any site with a nearby paired lock/unlock helper call

#### Do match

Inside `vm_fault_enter_prepare`, find the unique gate with this semantic shape:

```asm
...                                ; earlier checks on prot/page state
CBZ  X?, error_path                ; load helper arg or zero
LDR  X0, [X?,#0x488]
B    <join>
TBZ  Wflags, #3, validation_path   ; Wflags = fault_info flags word
MOV  Wtainted, #0
B    post_validation_success
```

Where:

- `Wflags` is loaded from `[fault_info_reg, #0x28]` near the function prologue,
- bit `#3` is `cs_bypass`,
- the fall-through path lands at the common mapping continuation (`post_validation_success`),
- the branch target enters the larger runtime validation / violation block.

A robust implementation can anchor on:

1. the resolved function `vm_fault_enter_prepare`
2. the in-prologue `LDR Wflags, [fault_info,#0x28]`
3. the later unique `TBZ Wflags, #3, ...; MOV W?, #0; B ...` sequence

### Prototype matcher result (2026-03-06)

A local prototype matcher was run against the extracted `PCC-CloudOS-26.1-23B85` `kernelcache.research.vphone600` raw Mach-O with these rules:

1. inside `vm_fault_enter_prepare`, discover the early `LDR Wflags, [fault_info,#0x28]` load,
2. track that exact `Wflags` register,
3. find `TBZ Wflags, #3, ...` followed immediately by `MOV W?, #0` and `B ...`.

Result:

- prologue flag load: `0xFFFFFE0007BADA78` -> `LDR W22, [X28,#0x28]`
- matcher hit count: `1`
- unique hit: `0xFFFFFE0007BADE1C`

This is the expected upstream semantic site and proves the repaired matcher can be made both specific and stable on `26.1 research` without relying on the old false-positive lock-call fingerprint.

### Validation guidance

For `26.1 research`, a repaired matcher should resolve to exactly one hit:

- `0x00BA9E1C`

and must **not** resolve to:

- `0x00BA9BB0`

If it still resolves to `0x00BA9BB0`, the matcher is still targeting the lock-pair block and is not fixed.

## Practical conclusion

### Verdict on the current patch

- Keep `patch_vm_fault_enter_prepare` disabled.
- Do **not** re-enable the current NOP at `0xFFFFFE0007BB898C`.
- Treat the previous “Skip fault check” description as incorrect for the `vphone600` research kernel.

### Likely root cause of boot failure

Most likely root cause: unbalanced `pmap_lock_phys_page()` / `pmap_unlock_phys_page()` behavior in the hot VM fault path.

### Recommended next research direction

If we still want a B9-class runtime-memory patch, the next candidates to study are:

- `vm_fault_validate_cs()`
- `vm_fault_cs_check_violation()`
- `vm_fault_cs_handle_violation()`
- the `pmap_has_prot_policy()` / `cs_bypass` decision region

Those are the places that can plausibly affect runtime execution restrictions. The current B9 site cannot.

## Minimal safe recommendation for patch schedule

For now, the correct action is not “retarget this exact byte write,” but:

- leave `patch_vm_fault_enter_prepare` disabled,
- mark its prior purpose label as wrong,
- open a fresh analysis track for the real code-signing fault-validation path.

## Evidence summary

- Function symbol: `vm_fault_enter_prepare` @ `0xfffffe0007bb8818`
- Current patchpoint: `0xfffffe0007bb898c`
- Current matched callee: `sub_FFFFFE0007C4B7DC` -> `pmap_lock_phys_page()` equivalent
- Paired callee: `sub_FFFFFE0007C4B9A4` -> `pmap_unlock_phys_page()` equivalent
- XNU semantic match:
  - `research/reference/xnu/osfmk/vm/vm_fault.c:3958`
  - `research/reference/xnu/osfmk/vm/vm_page_internal.h:999`
  - `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap.c:7520`
  - `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:330`
  - `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:497`

## Pseudocode (Before)

```c
state_check();
flag = map->state_byte;
if ((flag & BIT5) == 0) {
    goto fast_path;
}
```

## Pseudocode (After)

```c
// state_check() skipped
flag = map->state_byte;
if ((flag & BIT5) == 0) {
    goto fast_path;
}
```

## Why This Matters

`vm_fault_enter_prepare` is part of runtime page-fault handling, so this patch affects execution-time memory validation behavior, not just execve-time checks.

## Symbol Consistency Audit (2026-03-05)

- Status: `match`
- Recovered symbol, anchor strings, and strict patch fingerprint all align on the same function.

## Patch Metadata

- Patch document: `patch_vm_fault_enter_prepare.md` (B9).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_vm_fault.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.

## Target Function(s) and Binary Location

- Primary target: recovered symbol `vm_fault_enter_prepare`.
- Patchpoint: deny/fault guard branch NOP-ed at the validated in-function site.

## Kernel Source File Location

- Expected XNU source: `osfmk/vm/vm_fault.c`.
- Confidence: `high`.

## Function Call Stack

- Primary traced chain (from `Call-Stack Analysis`):
  - Representative static callers:
    - `vm_fault_internal` (`0xfffffe0007bb6ef0`) -> calls `vm_fault_enter_prepare`.
    - `sub_FFFFFE0007BB8294` (`0xfffffe0007bb8350`) -> calls `vm_fault_enter_prepare`.
  - This confirms B9 is in the central page-fault preparation path.
- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file.

## Patch Hit Points

- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`):
  - `0xfffffe0007bb898c`: `BL sub_FFFFFE0007C4B7DC`
  - `0xfffffe0007bb8990`: `LDRB W8, [X20,#0x2C]`
  - `0xfffffe0007bb8994`: `TBZ W8, #5, loc_FFFFFE0007BB89C4`
- NOP the BL at `0xfffffe0007bb898c`.
- Bytes:
  - before: `94 4B 02 94` (`BL ...`)
- The before/after instruction transform is constrained to this validated site.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_vm_fault.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Anchor string: `"vm_fault_enter_prepare"` at `0xfffffe0007048ec8`.
- Recovered symbol, anchor strings, and strict patch fingerprint all align on the same function.

## Validation (Static Evidence)

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.

## Expected Failure/Panic if Unpatched

- The VM fault guard remains active and can block memory mappings/transitions required during modified execution flows.

## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and a wider privileged surface for patched workflows.

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`.
- Canonical symbol hit(s): `vm_fault_enter_prepare`.
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `vm_fault_enter_prepare` -> `vm_fault_enter_prepare` at `0xfffffe0007bb8818`.

## Open Questions and Confidence

- Open question: verify that future firmware drift does not move this site into an equivalent but semantically different branch.
- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence).

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function node, `1` patch-point VA.
- IDA function sample: `vm_fault_enter_prepare`
- Chain function sample: `vm_fault_enter_prepare`
- Caller sample: `sub_FFFFFE0007BB8294`, `vm_fault_internal`
- Callee sample: `__strncpy_chk`, `kfree_ext`, `lck_rw_done`, `sub_FFFFFE0007B15AFC`, `sub_FFFFFE0007B546BC`, `sub_FFFFFE0007B840E0`
- Verdict: `questionable`
- Recommendation: the hit is valid, but the patch is inactive in `find_all()`; enable only after staged validation.
- Key verified points:
  - `0xFFFFFE0007BB898C` (`vm_fault_enter_prepare`): NOP [_vm_fault_enter_prepare] | `944b0294 -> 1f2003d5`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

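The three-anchor matcher rules above can be prototyped offline against raw instruction words before touching the repository patchers. A hedged sketch (pure Python over a synthetic u32 sequence; the encodings are standard A64, and the helper name is illustrative, not from `kernel_jb_patch_vm_fault.py`):

```python
# Synthetic vm_fault_enter_prepare body: prologue flags load, filler, cs_bypass gate.
LDR_W22_X28_0x28 = 0xB9402B96  # LDR W22, [X28,#0x28]  (flags word load)
TBZ_W22_BIT3     = 0x36180076  # TBZ W22, #3, +12
MOVZ_W23_0       = 0x52800017  # MOV W23, #0
B_FWD            = 0x14000078  # B +0x1E0 (placeholder target)
NOP              = 0xD503201F

words = [NOP, LDR_W22_X28_0x28, NOP, NOP, TBZ_W22_BIT3, MOVZ_W23_0, B_FWD, NOP]

def find_cs_bypass_gate(code):
    """Return index of the unique TBZ Wflags,#3 gate, or None."""
    flags_rt = None
    for w in code[:8]:  # rules 1+2: in-prologue LDR Wt, [Xn,#0x28]
        if (w & 0xFFC00000) == 0xB9400000 and ((w >> 10) & 0xFFF) == 0x28 // 4:
            flags_rt = w & 0x1F
            break
    if flags_rt is None:
        return None
    hits = []
    for i in range(len(code) - 2):  # rule 3: TBZ flags,#3 ; MOV W?,#0 ; B
        w = code[i]
        is_tbz3 = (w & 0x7F000000) == 0x36000000 and ((w >> 19) & 0x1F) == 3
        if not (is_tbz3 and (w & 0x1F) == flags_rt):
            continue
        movz, b = code[i + 1], code[i + 2]
        if (movz & 0xFFFFFFE0) == 0x52800000 and (b & 0xFC000000) == 0x14000000:
            hits.append(i)
    return hits[0] if len(hits) == 1 else None  # fail-closed on ambiguity

print(find_cs_bypass_gate(words))  # → 4
```

Tracking the exact `Wflags` register (rather than any `TBZ ?, #3`) is what keeps the matcher from regressing into another false-positive fingerprint.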
@@ -91,6 +91,7 @@
 ### `patch_io_secure_bsd_root`
 
 - `0x0136A1F0` / `0xFFFFFE000836E1F0` / b #0x1A4 [_IOSecureBSDRoot] / bytes `200d0034 -> 69000014`
+- 2026-03-06 reanalysis: this historical hit is real but semantically wrong. It patches the `"SecureRoot"` name-check gate in `AppleARMPE::callPlatformFunction`, not the final `"SecureRootName"` deny return consumed by `IOSecureBSDRoot()`. The implementation was retargeted to `0x0136A464` / `0xFFFFFE000836E464` (`CSEL W22, WZR, W9, NE -> MOV W22, #0`).
 
 ### `patch_kcall10`
 
@@ -66,7 +66,7 @@ class KernelJBPatcher(
         "patch_amfi_execve_kill_path", # JB-02 / A2
         "patch_task_conversion_eval_internal", # JB-08 / A3
         "patch_sandbox_hooks_extended", # JB-09 / A4
-        # "patch_iouc_failed_macf", # JB-10 / A5
+        "patch_iouc_failed_macf", # JB-10 / A5
     )
 
     # Group B: Pattern/string anchored methods.
@@ -75,9 +75,9 @@ class KernelJBPatcher(
         "patch_proc_security_policy", # JB-11 / B6
         "patch_proc_pidinfo", # JB-12 / B7
         "patch_convert_port_to_map", # JB-13 / B8
-        # "patch_bsd_init_auth", # JB-14 / B13 (disabled: autotest FAIL rc=2 on 2026-03-06)
+        "patch_bsd_init_auth", # JB-14 / B13 (retargeted 2026-03-06 to real _bsd_init rootauth gate)
         "patch_dounmount", # JB-15 / B12
-        # "patch_io_secure_bsd_root", # JB-16 / B19 (disabled: autotest FAIL rc=2 on 2026-03-06)
+        "patch_io_secure_bsd_root", # JB-16 / B19 (retargeted 2026-03-06 to SecureRootName deny-return)
         "patch_load_dylinker", # JB-17 / B16
         "patch_mac_mount", # JB-18 / B11
         "patch_nvram_verify_permission", # JB-19 / B18
@@ -85,15 +85,15 @@ class KernelJBPatcher(
         "patch_spawn_validate_persona", # JB-21 / B14
         "patch_task_for_pid", # JB-22 / B15
         "patch_thid_should_crash", # JB-23 / B20
-        # "patch_vm_fault_enter_prepare", # JB-24 / B9 (disabled: autotest FAIL rc=2 on 2026-03-06)
+        "patch_vm_fault_enter_prepare", # JB-24 / B9 (retargeted 2026-03-06 to upstream cs_bypass gate)
         "patch_vm_map_protect", # JB-25 / B10
     )
 
     # Group C: Shellcode/trampoline heavy methods.
     _GROUP_C_METHODS = (
-        # "patch_cred_label_update_execve", # JB-03 / C21 (disabled: autotest FAIL rc=2 on 2026-03-06)
-        "patch_hook_cred_label_update_execve", # JB-04 / C23 (low-riskized)
-        "patch_kcall10", # JB-05 / C24 (low-riskized)
+        "patch_cred_label_update_execve", # JB-03 / C21 (disabled: reworked on 2026-03-06, pending boot revalidation)
+        "patch_hook_cred_label_update_execve", # JB-04 / C23 (faithful upstream trampoline)
+        "patch_kcall10", # JB-05 / C24 (ABI-correct rebuilt cave)
         "patch_syscallmask_apply_to_proc", # JB-07 / C22
     )

@@ -1,133 +1,137 @@
 """Mixin: KernelJBPatchBsdInitAuthMixin."""
 
-from .kernel_jb_base import MOV_X0_0, _rd32
+from .kernel_jb_base import ARM64_OP_REG, ARM64_REG_W0, ARM64_REG_X0, NOP
 
 
 class KernelJBPatchBsdInitAuthMixin:
-    # ldr x0, [xN, #0x2b8] (ignore xN/Rn)
-    _LDR_X0_2B8_MASK = 0xFFFFFC1F
-    _LDR_X0_2B8_VAL = 0xF9415C00
-    # cbz {w0|x0}, <label> (mask drops sf bit)
-    _CBZ_X0_MASK = 0x7F00001F
-    _CBZ_X0_VAL = 0x34000000
+    _ROOTVP_PANIC_NEEDLE = b"rootvp not authenticated after mounting"
 
     def patch_bsd_init_auth(self):
-        """Bypass rootvp authentication check in _bsd_init.
-
-        Pattern: ldr x0, [xN, #0x2b8]; cbz x0, ...; bl AUTH_FUNC
-        Replace the BL with mov x0, #0.
+        """Bypass the real rootvp auth failure branch inside ``_bsd_init``.
+
+        Fresh analysis on ``kernelcache.research.vphone600`` shows the boot gate is
+        the in-function sequence:
+
+            call vnode ioctl handler for ``FSIOC_KERNEL_ROOTAUTH``
+            cbnz w0, panic_path
+            bl imageboot_needed
+
+        The older ``ldr/cbz/bl`` matcher was not semantically tied to ``_bsd_init``
+        and could false-hit unrelated functions. We now resolve the branch using the
+        panic string anchor and the surrounding local control-flow instead.
         """
-        self._log("\n[JB] _bsd_init: mov x0,#0 (auth bypass)")
+        self._log("\n[JB] _bsd_init: ignore FSIOC_KERNEL_ROOTAUTH failure")
 
-        # Try symbol first
-        foff = self._resolve_symbol("_bsd_init")
-        if foff >= 0:
-            func_end = self._find_func_end(foff, 0x2000)
-            result = self._find_auth_bl(foff, func_end)
-            if result:
-                self.emit(result, MOV_X0_0, "mov x0,#0 [_bsd_init auth]")
-                return True
-
-        # Pattern search: ldr x0, [xN, #0x2b8]; cbz x0; bl
-        ks, ke = self.kern_text
-        rootvp_func = self._func_for_rootvp_anchor()
-        if rootvp_func is None:
-            self._log(" [-] rootvp anchor function not found")
+        func_start = self._resolve_symbol("_bsd_init")
+        if func_start < 0:
+            func_start = self._func_for_rootvp_anchor()
+        if func_start is None or func_start < 0:
+            self._log(" [-] _bsd_init not found")
             return False
 
-        # Fast path: scan a narrow window around rootvp/bsd_init region first.
-        near_start = max(ks, rootvp_func - 0x200000)
-        near_end = min(ke, rootvp_func + 0x400000)
-        candidates = self._collect_auth_bl_candidates(near_start, near_end)
-        if not candidates:
-            # Fallback to full kernel text only when needed.
-            candidates = self._collect_auth_bl_candidates(ks, ke)
-
-        if not candidates:
-            self._log(" [-] ldr+cbz+bl pattern not found")
+        site = self._find_bsd_init_rootauth_site(func_start)
+        if site is None:
+            self._log(" [-] rootauth branch site not found")
            return False
 
-        bl_off = self._select_bsd_init_auth_candidate(candidates, rootvp_func)
-        if bl_off is None:
-            self._log(" [-] no safe _bsd_init auth candidate (fail-closed)")
-            return False
+        branch_off, state = site
+        if state == "patched":
+            self._log(f" [=] rootauth branch already bypassed at 0x{branch_off:X}")
+            return True
 
-        self._log(f" [+] auth BL at 0x{bl_off:X} (strict candidate)")
-        self.emit(bl_off, MOV_X0_0, "mov x0,#0 [_bsd_init auth]")
+        self.emit(branch_off, NOP, "NOP cbnz (rootvp auth) [_bsd_init]")
         return True
 
-    def _find_auth_bl(self, start, end):
-        """Find ldr x0,[xN,#0x2b8]; cbz x0; bl pattern. Returns BL offset."""
-        cands = self._collect_auth_bl_candidates(start, end)
-        if cands:
-            return cands[0]
-
-        # Fallback for unexpected instruction variants.
-        for off in range(start, end - 8, 4):
-            d = self._disas_at(off, 3)
-            if len(d) < 3:
-                continue
-            i0, i1, i2 = d[0], d[1], d[2]
-            if i0.mnemonic == "ldr" and i1.mnemonic == "cbz" and i2.mnemonic == "bl":
-                if i0.op_str.startswith("x0,") and "#0x2b8" in i0.op_str:
-                    if i1.op_str.startswith("x0,"):
-                        return off + 8
+    def _find_bsd_init_rootauth_site(self, func_start):
+        panic_ref = self._rootvp_panic_ref_in_func(func_start)
+        if panic_ref is None:
+            return None
+
+        adrp_off, add_off = panic_ref
+        bl_panic_off = self._find_panic_call_near(add_off)
+        if bl_panic_off is None:
+            return None
+
+        err_lo = bl_panic_off - 0x40
+        err_hi = bl_panic_off + 4
+        imageboot_needed = self._resolve_symbol("_imageboot_needed")
+
+        candidates = []
+        scan_start = max(func_start, adrp_off - 0x400)
+        for off in range(scan_start, adrp_off, 4):
+            state = self._match_rootauth_branch_site(off, err_lo, err_hi, imageboot_needed)
+            if state is not None:
+                candidates.append((off, state))
+
+        if not candidates:
+            return None
+
+        if len(candidates) > 1:
+            live = [item for item in candidates if item[1] == "live"]
+            if len(live) == 1:
+                return live[0]
+            return None
+
+        return candidates[0]
+
+    def _rootvp_panic_ref_in_func(self, func_start):
+        str_off = self.find_string(self._ROOTVP_PANIC_NEEDLE)
+        if str_off < 0:
+            return None
+
+        refs = self.find_string_refs(str_off, *self.kern_text)
+        for adrp_off, add_off, _ in refs:
+            if self.find_function_start(adrp_off) == func_start:
+                return adrp_off, add_off
         return None
 
-    def _collect_auth_bl_candidates(self, start, end):
-        """Fast matcher using raw instruction masks (no capstone in hot loop)."""
-        out = []
-        limit = min(end - 8, self.size - 8)
-        for off in range(max(start, 0), limit, 4):
-            i0 = _rd32(self.raw, off)
-            if (i0 & self._LDR_X0_2B8_MASK) != self._LDR_X0_2B8_VAL:
-                continue
-
-            i1 = _rd32(self.raw, off + 4)
-            if (i1 & self._CBZ_X0_MASK) != self._CBZ_X0_VAL:
-                continue
-
-            i2 = _rd32(self.raw, off + 8)
-            if (i2 & 0xFC000000) != 0x94000000:  # BL imm26
-                continue
-
-            out.append(off + 8)
-        return out
-
-    def _select_bsd_init_auth_candidate(self, candidates, rootvp_func):
-        """Select a safe candidate in core kernel code.
-
-        Heuristics (strict, fail-closed):
-        - Stay near the core bsd_init region (anchored by rootvp panic string xref).
-        - Require function context to reference `/dev/null` (boot-path fingerprint).
-        - Prefer lower-caller-count function entries.
-        """
-        # Keep candidates in the same broad kernel neighborhood.
-        core_limit = rootvp_func + 0x400000
-        nearby = [off for off in candidates if off < core_limit]
-        if not nearby:
-            return None
-
-        ranked = []
-        for bl_off in nearby:
-            fn = self.find_function_start(bl_off)
-            if fn < 0:
-                continue
-            fn_end = self._find_func_end(fn, 0x4000)
-            if not self._function_has_string(fn, fn_end, b"/dev/null"):
-                continue
-            callers = len(self.bl_callers.get(fn, []))
-            ranked.append((callers, bl_off, fn))
-
-        if not ranked:
-            return None
-
-        ranked.sort()
-        best_callers, best_off, _ = ranked[0]
-        # Ambiguous: multiple same-rank hits.
-        same = [item for item in ranked if item[0] == best_callers]
-        if len(same) > 1:
-            return None
-
-        return best_off
+    def _find_panic_call_near(self, add_off):
+        for scan in range(add_off, min(add_off + 0x40, self.size), 4):
+            if self._is_bl(scan) == self.panic_off:
+                return scan
+        return None
+
+    def _match_rootauth_branch_site(self, off, err_lo, err_hi, imageboot_needed):
+        insns = self._disas_at(off, 1)
+        if not insns:
+            return None
+        insn = insns[0]
+
+        if not self._is_call(off - 4):
+            return None
+        if not self._has_imageboot_call_near(off, imageboot_needed):
+            return None
+
+        if insn.mnemonic == "nop":
+            return "patched"
+
+        if insn.mnemonic != "cbnz":
+            return None
+        if len(insn.operands) < 2 or insn.operands[0].type != ARM64_OP_REG:
+            return None
+        if insn.operands[0].reg not in (ARM64_REG_W0, ARM64_REG_X0):
+            return None
+
+        target, _ = self._decode_branch_target(off)
+        if target is None or not (err_lo <= target <= err_hi):
+            return None
+
+        return "live"
+
+    def _is_call(self, off):
+        if off < 0:
+            return False
+        insns = self._disas_at(off, 1)
+        return bool(insns) and insns[0].mnemonic.startswith("bl")
+
+    def _has_imageboot_call_near(self, off, imageboot_needed):
+        for scan in range(off + 4, min(off + 0x18, self.size), 4):
+            target = self._is_bl(scan)
+            if target < 0:
+                continue
+            if imageboot_needed < 0 or target == imageboot_needed:
+                return True
+        return False
 
     def _func_for_rootvp_anchor(self):
         needle = b"rootvp not authenticated after mounting @%s:%d"
@@ -139,13 +143,3 @@ class KernelJBPatchBsdInitAuthMixin:
             return None
         fn = self.find_function_start(refs[0][0])
         return fn if fn >= 0 else None
-
-    def _function_has_string(self, func_start, func_end, needle):
-        str_off = self.find_string(needle)
-        if str_off < 0:
-            return False
-        refs = self.find_string_refs(str_off, *self.kern_text)
-        for adrp_off, _, _ in refs:
-            if func_start <= adrp_off < func_end:
-                return True
-        return False
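The retargeted B13 flow hinges on recognizing the `cbnz w0, panic_path` gate and checking that its target lands inside the panic error block. A standalone sketch of that branch-target math (standard A64 CBNZ encoding; the offsets are synthetic, not taken from the kernelcache):

```python
def decode_cbnz_w0(insn, off):
    """Return branch target if insn is CBNZ w0, else None (A64 encoding)."""
    # CBNZ (32-bit): 0x35000000 | imm19 << 5 | Rt; mask also pins Rt == w0.
    if (insn & 0xFF00001F) != 0x35000000:
        return None
    imm19 = (insn >> 5) & 0x7FFFF
    if imm19 & (1 << 18):        # sign-extend the 19-bit offset
        imm19 -= 1 << 19
    return off + imm19 * 4

NOP = 0xD503201F

# cbnz w0, +0x30 at offset 0x100 should resolve to 0x130.
cbnz = 0x35000000 | ((0x30 // 4) << 5) | 0
assert decode_cbnz_w0(cbnz, 0x100) == 0x130

# After the patch the word is a NOP, which no longer decodes as CBNZ —
# this is exactly what the "patched" state detection relies on.
assert decode_cbnz_w0(NOP, 0x100) is None
print("ok")
```

Requiring the decoded target to fall in a small window around the panic `BL` (the `err_lo`/`err_hi` bracket above) is what keeps this matcher from hitting unrelated `cbnz w0` instructions elsewhere in `_bsd_init`.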
@@ -5,6 +5,10 @@ from .kernel_jb_base import asm, _rd32
|
||||
|
||||
class KernelJBPatchCredLabelMixin:
|
||||
_RET_INSNS = (0xD65F0FFF, 0xD65F0BFF, 0xD65F03C0)
|
||||
_MOV_W0_0_U32 = int.from_bytes(asm("mov w0, #0"), "little")
|
||||
_MOV_W0_1_U32 = int.from_bytes(asm("mov w0, #1"), "little")
|
||||
_RELAX_CSMASK = 0xFFFFC0FF
|
||||
_RELAX_SETMASK = 0x0000000C
|
||||
|
||||
def _is_cred_label_execve_candidate(self, func_off, anchor_refs):
|
||||
"""Validate candidate function shape for _cred_label_update_execve."""
|
||||
@@ -112,15 +116,105 @@ class KernelJBPatchCredLabelMixin:
         return fallback

-    def patch_cred_label_update_execve(self):
-        """Low-risk in-function early return for _cred_label_update_execve.
+    def _find_cred_label_epilogue(self, func_off):
+        """Locate the canonical epilogue start (`ldp x29, x30, [sp, ...]`)."""
+        func_end = self._find_func_end(func_off, 0x1000)
+        for off in range(func_end - 4, func_off, -4):
+            d = self._disas_at(off)
+            if not d:
+                continue
+            i = d[0]
+            op = i.op_str.replace(" ", "")
+            if i.mnemonic == "ldp" and op.startswith("x29,x30,[sp"):
+                return off

-        Keep PAC prologue intact and patch the next two instructions:
-            mov x0, xzr
-            retab
-        This avoids code cave use and large shellcode trampolines.
+        return -1
+
+    def _find_cred_label_csflags_ptr_reload(self, func_off):
+        """Recover the stack-based `u_int *csflags` reload used by the function.
+
+        We reuse the same `ldr x26, [x29, #imm]` form in the trampoline so the
+        common C21-v1 cave works for both deny and success exits, even when the
+        live x26 register has not been initialized on a deny-only path.
         """
-        self._log("\n[JB] _cred_label_update_execve: low-risk early return")
+        func_end = self._find_func_end(func_off, 0x1000)
+        for off in range(func_off, func_end, 4):
+            d = self._disas_at(off)
+            if not d:
+                continue
+            i = d[0]
+            op = i.op_str.replace(" ", "")
+            if i.mnemonic != "ldr" or not op.startswith("x26,[x29"):
+                continue
+            mem_op = i.op_str.split(",", 1)[1].strip()
+            return off, mem_op
+
+        return -1, None
+
+    def _decode_b_target(self, off):
+        """Return target of unconditional `b`, or -1 if instruction is not `b`."""
+        insn = _rd32(self.raw, off)
+        if (insn & 0x7C000000) != 0x14000000:
+            return -1
+        imm26 = insn & 0x03FFFFFF
+        if imm26 & (1 << 25):
+            imm26 -= 1 << 26
+        return off + imm26 * 4
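The `_decode_b_target` helper above relies on the A64 unconditional-branch encoding: a 26-bit signed word offset in the low bits of the instruction. A minimal standalone sketch of the same decode, paired with a matching encoder for round-trip checking (`encode_b` / `decode_b_target` here are illustrative names, not the patcher's own helpers; this sketch masks the full opcode field `0xFC000000` so that `bl` is rejected as well):

```python
def encode_b(off, target):
    """Encode `b target` for an instruction located at file offset `off`."""
    delta = (target - off) >> 2                 # word offset, may be negative
    return 0x14000000 | (delta & 0x03FFFFFF)    # 0b000101 | imm26

def decode_b_target(insn, off):
    """Return the branch target, or -1 if `insn` is not an unconditional `b`."""
    if (insn & 0xFC000000) != 0x14000000:       # bit 31 set would mean `bl`
        return -1
    imm26 = insn & 0x03FFFFFF
    if imm26 & (1 << 25):                       # sign-extend the 26-bit field
        imm26 -= 1 << 26
    return off + imm26 * 4
```

Forward and backward branches round-trip, and a `bl` encoding (top bit set) is rejected.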
+    def _find_cred_label_deny_return(self, func_off, epilogue_off):
+        """Find the shared `mov w0,#1` kill-return right before the epilogue."""
+        mov_w0_1 = self._MOV_W0_1_U32
+        scan_start = max(func_off, epilogue_off - 0x40)
+        for off in range(epilogue_off - 4, scan_start - 4, -4):
+            if _rd32(self.raw, off) == mov_w0_1 and off + 4 == epilogue_off:
+                return off
+
+        return -1
+
+    def _find_cred_label_success_exits(self, func_off, epilogue_off):
+        """Find late success edges that already decided to return 0.
+
+        On the current vphone600 AMFI body these are the final `b epilogue`
+        instructions in the success tail, reached after the original
+        `tst/orr/str` cleanup has already run.
+        """
+        exits = []
+        func_end = self._find_func_end(func_off, 0x1000)
+        for off in range(func_off, func_end, 4):
+            target = self._decode_b_target(off)
+            if target != epilogue_off:
+                continue
+            saw_mov_w0_0 = False
+            for prev in range(max(func_off, off - 0x10), off, 4):
+                if _rd32(self.raw, prev) == self._MOV_W0_0_U32:
+                    saw_mov_w0_0 = True
+                    break
+            if saw_mov_w0_0:
+                exits.append(off)
+
+        return tuple(exits)
+
+    def patch_cred_label_update_execve(self):
+        """C21-v3: split late exits and add minimal helper bits on success.
+
+        This version keeps the boot-safe late-exit structure from v2, but adds
+        a small success-only extension inspired by the older upstream shellcode:
+
+        - keep `_cred_label_update_execve`'s body intact;
+        - redirect the shared deny return into a tiny deny cave that only
+          forces `w0 = 0` and returns through the original epilogue;
+        - redirect the late success exits into a success cave;
+        - reload `u_int *csflags` from the stack only on the success cave;
+        - clear only `CS_HARD|CS_KILL|CS_CHECK_EXPIRATION|CS_RESTRICT|
+          CS_ENFORCEMENT|CS_REQUIRE_LV` on the success cave;
+        - then OR only `CS_GET_TASK_ALLOW|CS_INSTALLER` (`0xC`) on the
+          success cave;
+        - return through the original epilogue in both cases.
+
+        This preserves AMFI's exec-time analytics / entitlement handling and
+        avoids the boot-unsafe entry-time early return used by older variants.
+        """
+        self._log("\n[JB] _cred_label_update_execve: C21-v3 split exits + helper bits")
+
+        func_off = -1
@@ -139,20 +233,91 @@ class KernelJBPatchCredLabelMixin:
             self._log(" [-] function not found, skipping shellcode patch")
             return False

-        func_end = self._find_func_end(func_off, 0x1000)
-        if func_end <= func_off + 8:
-            self._log(" [-] function too small for low-risk patch")
+        epilogue_off = self._find_cred_label_epilogue(func_off)
+        if epilogue_off < 0:
+            self._log(" [-] epilogue not found")
             return False

-        self.emit(
-            func_off + 4,
-            asm("mov x0, xzr"),
-            "mov x0,xzr [_cred_label_update_execve low-risk]",
-        )
-        self.emit(
-            func_off + 8,
-            bytes([0xFF, 0x0F, 0x5F, 0xD6]),  # retab
-            "retab [_cred_label_update_execve low-risk]",
+        deny_off = self._find_cred_label_deny_return(func_off, epilogue_off)
+        if deny_off < 0:
+            self._log(" [-] shared deny return not found")
+            return False
+
+        success_exits = self._find_cred_label_success_exits(func_off, epilogue_off)
+        if not success_exits:
+            self._log(" [-] success exits not found")
+            return False
+
+        _, csflags_mem_op = self._find_cred_label_csflags_ptr_reload(func_off)
+        if not csflags_mem_op:
+            self._log(" [-] csflags stack reload not found")
+            return False
+
+        deny_cave = self._find_code_cave(8)
+        if deny_cave < 0:
+            self._log(" [-] no code cave found for C21-v3 deny trampoline")
+            return False
+
+        success_cave = self._find_code_cave(32)
+        if success_cave < 0 or success_cave == deny_cave:
+            self._log(" [-] no code cave found for C21-v3 success trampoline")
+            return False
+
+        deny_branch_back = self._encode_b(deny_cave + 4, epilogue_off)
+        if not deny_branch_back:
+            self._log(" [-] branch from deny trampoline back to epilogue is out of range")
+            return False
+
+        success_branch_back = self._encode_b(success_cave + 28, epilogue_off)
+        if not success_branch_back:
+            self._log(" [-] branch from success trampoline back to epilogue is out of range")
+            return False
+
+        deny_shellcode = asm("mov w0, #0") + deny_branch_back
+        success_shellcode = (
+            asm(f"ldr x26, {csflags_mem_op}")
+            + asm("cbz x26, #0x10")
+            + asm("ldr w8, [x26]")
+            + asm(f"and w8, w8, #{self._RELAX_CSMASK:#x}")
+            + asm(f"orr w8, w8, #{self._RELAX_SETMASK:#x}")
+            + asm("str w8, [x26]")
+            + asm("mov w0, #0")
+            + success_branch_back
         )

+        for index in range(0, len(deny_shellcode), 4):
+            self.emit(
+                deny_cave + index,
+                deny_shellcode[index : index + 4],
+                f"deny_trampoline+{index} [_cred_label_update_execve C21-v3]",
+            )
+
+        for index in range(0, len(success_shellcode), 4):
+            self.emit(
+                success_cave + index,
+                success_shellcode[index : index + 4],
+                f"success_trampoline+{index} [_cred_label_update_execve C21-v3]",
+            )
+
+        deny_branch_to_cave = self._encode_b(deny_off, deny_cave)
+        if not deny_branch_to_cave:
+            self._log(f" [-] branch from 0x{deny_off:X} to deny trampoline is out of range")
+            return False
+        self.emit(
+            deny_off,
+            deny_branch_to_cave,
+            f"b deny cave [_cred_label_update_execve C21-v3 exit @ 0x{deny_off:X}]",
+        )
+
+        for off in success_exits:
+            branch_to_cave = self._encode_b(off, success_cave)
+            if not branch_to_cave:
+                self._log(f" [-] branch from 0x{off:X} to success trampoline is out of range")
+                return False
+            self.emit(
+                off,
+                branch_to_cave,
+                f"b success cave [_cred_label_update_execve C21-v3 exit @ 0x{off:X}]",
+            )
+
         return True
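The success-cave shellcode above boils down to one 32-bit transform on csflags: `(flags & 0xFFFFC0FF) | 0xC`. A sketch of the same arithmetic in Python, using the CS_* flag values from XNU's `cs_blobs.h` (the constant names mirror `_RELAX_CSMASK` / `_RELAX_SETMASK` from the mixin; `relax_csflags` is an illustrative name):

```python
# CS_* flag values as defined in XNU's osfmk/kern/cs_blobs.h.
CS_GET_TASK_ALLOW   = 0x0000004
CS_INSTALLER        = 0x0000008
CS_HARD             = 0x0000100
CS_KILL             = 0x0000200
CS_CHECK_EXPIRATION = 0x0000400
CS_RESTRICT         = 0x0000800
CS_ENFORCEMENT      = 0x0001000
CS_REQUIRE_LV       = 0x0002000

RELAX_CSMASK = 0xFFFFC0FF   # same value as _RELAX_CSMASK above
RELAX_SETMASK = 0x0000000C  # same value as _RELAX_SETMASK above

def relax_csflags(flags):
    """Apply the success-cave and+orr transform to a 32-bit csflags word."""
    return (flags & RELAX_CSMASK) | RELAX_SETMASK
```

The mask clears exactly the six restrictive bits (0x3F00) and the OR sets exactly `CS_GET_TASK_ALLOW | CS_INSTALLER`; all other bits pass through untouched.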
@@ -1,23 +1,24 @@
 """Mixin: KernelJBPatchHookCredLabelMixin."""

-from .kernel_jb_base import asm, _rd32
+import struct

-PACIBSP = bytes([0x7F, 0x23, 0x03, 0xD5])  # 0xD503237F
+from .kernel_asm import asm, _PACIBSP_U32, _asm_u32
+from .kernel_jb_base import _rd32, _rd64


 class KernelJBPatchHookCredLabelMixin:
+    _HOOK_CRED_LABEL_INDEX = 18
+    _C23_CAVE_WORDS = 46
+    _VFS_CONTEXT_CURRENT_SHAPE = (
+        _PACIBSP_U32,
+        _asm_u32("stp x29, x30, [sp, #-0x10]!"),
+        _asm_u32("mov x29, sp"),
+        _asm_u32("mrs x0, tpidr_el1"),
+        _asm_u32("ldr x1, [x0, #0x3e0]"),
+    )

     def _find_vnode_getattr_via_string(self):
-        """Find vnode_getattr by locating a caller function via string ref.
-
-        The string "vnode_getattr" appears in format strings like
-        "%s: vnode_getattr: %d" inside functions that CALL vnode_getattr.
-        We find such a caller, then extract the BL target near the string
-        reference to get the real vnode_getattr address.
-
-        Previous approach: find_string → find_string_refs → find_function_start
-        was wrong because it returned the CALLER (e.g. an AppleImage4 function)
-        instead of vnode_getattr itself.
-        """
+        """Resolve vnode_getattr from a nearby BL around its log string."""
         str_off = self.find_string(b"vnode_getattr")
         if str_off < 0:
             return -1
@@ -26,122 +27,229 @@ class KernelJBPatchHookCredLabelMixin:
         if not refs:
             return -1

-        # The string ref is inside a function that calls vnode_getattr.
-        # Scan backward from the string ref for a BL instruction — the
-        # nearest preceding BL is very likely the BL vnode_getattr call
-        # (the error message prints right after the call fails).
-        ref_off = refs[0][0]  # ADRP offset
-        for scan_off in range(ref_off - 4, ref_off - 64, -4):
-            if scan_off < 0:
-                break
-            insn = _rd32(self.raw, scan_off)
-            if (insn >> 26) == 0x25:  # BL opcode
-                imm26 = insn & 0x3FFFFFF
-                if imm26 & (1 << 25):
-                    imm26 -= 1 << 26  # sign extend
-                target = scan_off + imm26 * 4
-                if any(s <= target < e for s, e in self.code_ranges):
-                    self._log(
-                        f" [+] vnode_getattr at 0x{target:X} "
-                        f"(via BL at 0x{scan_off:X}, "
-                        f"near string ref at 0x{ref_off:X})"
-                    )
-                    return target
-
-        # Fallback: try additional string hits
-        start = str_off + 1
-        for _ in range(5):
-            str_off2 = self.find_string(b"vnode_getattr", start)
-            if str_off2 < 0:
-                break
-            refs2 = self.find_string_refs(str_off2)
-            if refs2:
-                ref_off2 = refs2[0][0]
-                for scan_off in range(ref_off2 - 4, ref_off2 - 64, -4):
+        start = str_off
+        for _ in range(6):
+            refs = self.find_string_refs(start)
+            if refs:
+                ref_off = refs[0][0]
+                for scan_off in range(ref_off - 4, ref_off - 80, -4):
+                    if scan_off < 0:
+                        break
+                    insn = _rd32(self.raw, scan_off)
-                    if (insn >> 26) == 0x25:  # BL
-                        imm26 = insn & 0x3FFFFFF
-                        if imm26 & (1 << 25):
-                            imm26 -= 1 << 26
-                        target = scan_off + imm26 * 4
-                        if any(s <= target < e for s, e in self.code_ranges):
-                            self._log(
-                                f" [+] vnode_getattr at 0x{target:X} "
-                                f"(via BL at 0x{scan_off:X})"
-                            )
-                            return target
-            start = str_off2 + 1
+                    if (insn >> 26) != 0x25:
+                        continue
+                    imm26 = insn & 0x3FFFFFF
+                    if imm26 & (1 << 25):
+                        imm26 -= 1 << 26
+                    target = scan_off + imm26 * 4
+                    if any(s <= target < e for s, e in self.code_ranges):
+                        self._log(
+                            f" [+] vnode_getattr at 0x{target:X} "
+                            f"(via BL at 0x{scan_off:X}, near string ref 0x{ref_off:X})"
+                        )
+                        return target
+            next_off = self.find_string(b"vnode_getattr", start + 1)
+            if next_off < 0:
+                break
+            start = next_off

         return -1
-    def patch_hook_cred_label_update_execve(self):
-        """Low-risk early-return patch for sandbox cred-label hook.
+    def _find_vfs_context_current_via_shape(self):
+        """Locate the concrete vfs_context_current body by its unique prologue."""
+        key = ("c23_vfs_context_current", self.kern_text)
+        cached = self._jb_scan_cache.get(key)
+        if cached is not None:
+            return cached

-        Keep PACIBSP at entry and patch following instructions to:
-            mov x0, xzr
-            retab
-        This avoids ops-table rewrites, code caves, and long trampolines.
-        """
-        self._log("\n[JB] _hook_cred_label_update_execve: low-risk early return")
+        ks, ke = self.kern_text
+        hits = []
+        pat = self._VFS_CONTEXT_CURRENT_SHAPE
+        for off in range(ks, ke - len(pat) * 4, 4):
+            if all(_rd32(self.raw, off + i * 4) == pat[i] for i in range(len(pat))):
+                hits.append(off)

-        # Find sandbox ops table
+        result = hits[0] if len(hits) == 1 else -1
+        if result >= 0:
+            self._log(f" [+] vfs_context_current body at 0x{result:X} (shape match)")
+        else:
+            self._log(f" [-] vfs_context_current shape scan ambiguous ({len(hits)} hits)")
+        self._jb_scan_cache[key] = result
+        return result
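`_find_vfs_context_current_via_shape` matches a fixed sequence of little-endian 32-bit instruction words and only accepts a unique hit. The same idea can be sketched over a plain bytes blob (`scan_unique_word_pattern` is an illustrative standalone name, not a patcher helper):

```python
import struct

def scan_unique_word_pattern(blob, pattern):
    """Return the sole aligned offset where `pattern` (u32 words, LE) occurs, else -1."""
    needle = b"".join(struct.pack("<I", word) for word in pattern)
    hits = []
    start = 0
    while True:
        idx = blob.find(needle, start)
        if idx < 0:
            break
        if idx % 4 == 0:          # accept instruction-aligned matches only
            hits.append(idx)
        start = idx + 4
    return hits[0] if len(hits) == 1 else -1
```

Requiring exactly one hit is what makes a prologue-shape scan safe: an ambiguous signature yields -1 instead of a wrong function.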
+    def _find_hook_cred_label_update_execve_wrapper(self):
+        """Resolve the faithful upstream C23 target: sandbox ops[18] wrapper."""
         ops_table = self._find_sandbox_ops_table_via_conf()
         if ops_table is None:
             self._log(" [-] sandbox ops table not found")
-            return False
+            return None

-        # ── 3. Find hook index dynamically ───────────────────────
-        # mpo_cred_label_update_execve is one of the largest sandbox
-        # hooks at an early index (< 30). Scan for it.
-        hook_index = -1
-        orig_hook = -1
-        best_size = 0
-        for idx in range(0, 30):
-            entry = self._read_ops_entry(ops_table, idx)
-            if entry is None or entry <= 0:
-                continue
-            if not any(s <= entry < e for s, e in self.code_ranges):
-                continue
-            fend = self._find_func_end(entry, 0x2000)
-            fsize = fend - entry
-            if fsize > best_size:
-                best_size = fsize
-                hook_index = idx
-                orig_hook = entry
+        entry_off = ops_table + self._HOOK_CRED_LABEL_INDEX * 8
+        if entry_off + 8 > self.size:
+            self._log(" [-] hook ops entry outside file")
+            return None

-        if hook_index < 0 or best_size < 1000:
+        entry_raw = _rd64(self.raw, entry_off)
+        if entry_raw == 0:
+            self._log(" [-] hook ops entry is null")
+            return None
+        if (entry_raw & (1 << 63)) == 0:
             self._log(
-                " [-] hook entry not found in ops table "
-                f"(best: idx={hook_index}, size={best_size})"
+                f" [-] hook ops entry is not auth-rebase encoded: 0x{entry_raw:016X}"
             )
-            return False
+            return None

-        self._log(f" [+] hook at ops[{hook_index}] = 0x{orig_hook:X} ({best_size} bytes)")
+        wrapper_off = self._decode_chained_ptr(entry_raw)
+        if wrapper_off < 0 or not any(s <= wrapper_off < e for s, e in self.code_ranges):
+            self._log(f" [-] decoded wrapper target invalid: 0x{wrapper_off:X}")
+            return None

-        # Verify first instruction is PACIBSP
-        first_insn = self.raw[orig_hook : orig_hook + 4]
-        if first_insn != PACIBSP:
-            self._log(
-                f" [-] first insn not PACIBSP "
-                f"(got 0x{_rd32(self.raw, orig_hook):08X})"
+        self._log(
+            f" [+] hook cred-label wrapper ops[{self._HOOK_CRED_LABEL_INDEX}] "
+            f"entry 0x{entry_off:X} -> 0x{wrapper_off:X}"
         )
+        return ops_table, entry_off, entry_raw, wrapper_off

+    def _encode_auth_rebase_like(self, orig_val, target_off):
+        """Retarget an auth-rebase chained pointer while preserving PAC metadata."""
+        if (orig_val & (1 << 63)) == 0:
+            return None
+        return struct.pack("<Q", (orig_val & ~0xFFFFFFFF) | (target_off & 0xFFFFFFFF))
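`_encode_auth_rebase_like` keeps the high 32 bits of the chained fixup (the auth metadata: diversity, key, next, isAuth) and swaps only the low 32 target bits. A standalone sketch of the same transform (`retarget_auth_rebase` is an illustrative name):

```python
import struct

def retarget_auth_rebase(orig_val, target_off):
    """Replace the low 32 target bits, keep PAC metadata; None if not an auth fixup."""
    if (orig_val & (1 << 63)) == 0:   # bit 63 = isAuth in the chained-fixup format
        return None
    new_val = (orig_val & ~0xFFFFFFFF) | (target_off & 0xFFFFFFFF)
    return struct.pack("<Q", new_val)
```

The round trip preserves everything above bit 31, which is why the retargeted ops[18] entry still authenticates with the original diversity/key at runtime.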
+    def _build_upstream_c23_cave(
+        self,
+        cave_off,
+        vfs_context_current_off,
+        vnode_getattr_off,
+        wrapper_off,
+    ):
+        code = []
+        code.append(asm("nop"))
+        code.append(asm("cbz x3, #0xa8"))
+        code.append(asm("sub sp, sp, #0x400"))
+        code.append(asm("stp x29, x30, [sp]"))
+        code.append(asm("stp x0, x1, [sp, #0x10]"))
+        code.append(asm("stp x2, x3, [sp, #0x20]"))
+        code.append(asm("stp x4, x5, [sp, #0x30]"))
+        code.append(asm("stp x6, x7, [sp, #0x40]"))
+        code.append(asm("nop"))
+
+        bl_vfs_off = cave_off + len(code) * 4
+        bl_vfs = self._encode_bl(bl_vfs_off, vfs_context_current_off)
+        if not bl_vfs:
+            return None
+        code.append(bl_vfs)
+
+        code.append(asm("mov x2, x0"))
+        code.append(asm("ldr x0, [sp, #0x28]"))
+        code.append(asm("add x1, sp, #0x80"))
+        code.append(asm("mov w8, #0x380"))
+        code.append(asm("stp xzr, x8, [x1]"))
+        code.append(asm("stp xzr, xzr, [x1, #0x10]"))
+        code.append(asm("nop"))
+
+        bl_getattr_off = cave_off + len(code) * 4
+        bl_getattr = self._encode_bl(bl_getattr_off, vnode_getattr_off)
+        if not bl_getattr:
+            return None
+        code.append(bl_getattr)
+
+        code.append(asm("cbnz x0, #0x4c"))
+        code.append(asm("mov w2, #0"))
+        code.append(asm("ldr w8, [sp, #0xcc]"))
+        code.append(asm("tbz w8, #0xb, #0x14"))
+        code.append(asm("ldr w8, [sp, #0xc4]"))
+        code.append(asm("ldr x0, [sp, #0x18]"))
+        code.append(asm("str w8, [x0, #0x18]"))
+        code.append(asm("mov w2, #1"))
+        code.append(asm("ldr w8, [sp, #0xcc]"))
+        code.append(asm("tbz w8, #0xa, #0x14"))
+        code.append(asm("mov w2, #1"))
+        code.append(asm("ldr w8, [sp, #0xc8]"))
+        code.append(asm("ldr x0, [sp, #0x18]"))
+        code.append(asm("str w8, [x0, #0x28]"))
+        code.append(asm("cbz w2, #0x14"))
+        code.append(asm("ldr x0, [sp, #0x20]"))
+        code.append(asm("ldr w8, [x0, #0x454]"))
+        code.append(asm("orr w8, w8, #0x100"))
+        code.append(asm("str w8, [x0, #0x454]"))
+        code.append(asm("ldp x0, x1, [sp, #0x10]"))
+        code.append(asm("ldp x2, x3, [sp, #0x20]"))
+        code.append(asm("ldp x4, x5, [sp, #0x30]"))
+        code.append(asm("ldp x6, x7, [sp, #0x40]"))
+        code.append(asm("ldp x29, x30, [sp]"))
+        code.append(asm("add sp, sp, #0x400"))
+        code.append(asm("nop"))
+
+        branch_back_off = cave_off + len(code) * 4
+        branch_back = self._encode_b(branch_back_off, wrapper_off)
+        if not branch_back:
+            return None
+        code.append(branch_back)
+        code.append(asm("nop"))
+
+        if len(code) != self._C23_CAVE_WORDS:
+            raise RuntimeError(
+                f"C23 cave length drifted: {len(code)} insns, expected {self._C23_CAVE_WORDS}"
+            )
+        return b"".join(code)
+
+    def patch_hook_cred_label_update_execve(self):
+        """Faithful upstream C23: wrapper trampoline + setugid credential fixup.
+
+        Historical upstream behavior does not short-circuit the sandbox execve
+        update hook. It redirects `mac_policy_ops[18]` to a code cave that:
+        - fetches vnode owner/mode via vnode_getattr(vp, vap, vfs_context_current()),
+        - copies VSUID/VSGID owner values into the pending new credential,
+        - sets P_SUGID when either credential field changes,
+        - then branches back to the original sandbox wrapper.
+        """
+        self._log("\n[JB] _hook_cred_label_update_execve: faithful upstream C23")
+
+        wrapper_info = self._find_hook_cred_label_update_execve_wrapper()
+        if wrapper_info is None:
+            return False
+        _, entry_off, entry_raw, wrapper_off = wrapper_info
+
+        vfs_context_current_off = self._find_vfs_context_current_via_shape()
+        if vfs_context_current_off < 0:
+            self._log(" [-] vfs_context_current not resolved")
+            return False

-        func_end = self._find_func_end(orig_hook, 0x2000)
-        if func_end <= orig_hook + 8:
-            self._log(" [-] hook function too small for low-risk patch")
+        vnode_getattr_off = self._find_vnode_getattr_via_string()
+        if vnode_getattr_off < 0:
+            self._log(" [-] vnode_getattr not resolved")
             return False

+        cave_size = self._C23_CAVE_WORDS * 4
+        cave_off = self._find_code_cave(cave_size)
+        if cave_off < 0:
+            self._log(" [-] no executable code cave found for faithful C23")
+            return False
+
+        cave_bytes = self._build_upstream_c23_cave(
+            cave_off,
+            vfs_context_current_off,
+            vnode_getattr_off,
+            wrapper_off,
+        )
+        if cave_bytes is None:
+            self._log(" [-] failed to encode faithful C23 branch/call relocations")
+            return False
+
+        new_entry = self._encode_auth_rebase_like(entry_raw, cave_off)
+        if new_entry is None:
+            self._log(" [-] failed to encode hook ops entry retarget")
+            return False
+
         self.emit(
-            orig_hook + 4,
-            asm("mov x0, xzr"),
-            "mov x0,xzr [_hook_cred_label_update_execve low-risk]",
+            entry_off,
+            new_entry,
+            "retarget ops[18] to faithful C23 cave [_hook_cred_label_update_execve]",
         )
         self.emit(
-            orig_hook + 8,
-            bytes([0xFF, 0x0F, 0x5F, 0xD6]),  # retab
-            "retab [_hook_cred_label_update_execve low-risk]",
+            cave_off,
+            cave_bytes,
+            "faithful upstream C23 cave (vnode getattr -> uid/gid/P_SUGID fixup -> wrapper)",
         )

         return True
@@ -1,50 +1,39 @@
 """Mixin: KernelJBPatchIoucmacfMixin."""

 from .kernel_jb_base import ARM64_OP_IMM, asm


 class KernelJBPatchIoucmacfMixin:
     def patch_iouc_failed_macf(self):
-        """Bypass IOUserClient MACF deny path at the shared IOUC gate.
+        """Bypass the narrow IOUC MACF deny branch after mac_iokit_check_open.

-        Strategy:
-        - Anchor on IOUC "failed MACF"/"failed sandbox" format-string xrefs.
-        - Resolve the shared containing function.
-        - Require a BL call to a MACF dispatcher-like callee:
-          contains `ldr x10, [x10, #0x9e8]` and `blraa/blr x10`.
-        - Apply low-risk early return (keep PACIBSP at +0x0):
-          mov x0, xzr ; retab
+        Upstream-equivalent design goal:
+        - keep the large IOUserClient open/setup path intact
+        - keep entitlement/default-locking/sandbox-resolver flow intact
+        - only force the post-MACF gate onto the allow path

-        This bypasses centralized IOUC MACF deny returns (for example
-        AppleAPFSUserClient / AppleSEPUserClient).
+        Local validated shape in `sub_FFFFFE000825B0C0`:
+        - `BL <macf_aggregator>`
+        - `CBZ W0, <allow>`
+        - later `ADRL X0, "IOUC %s failed MACF in process %s\n"`
+
+        Patch action:
+        - replace that `CBZ W0, <allow>` with unconditional `B <allow>`
         """
-        self._log("\n[JB] IOUC MACF gate: low-risk early return")
+        self._log("\n[JB] IOUC MACF gate: branch-level deny bypass")

         fail_macf_str = self.find_string(b"IOUC %s failed MACF in process %s")
         if fail_macf_str < 0:
             self._log(" [-] IOUC failed-MACF format string not found")
             return False

-        fail_macf_refs = self.find_string_refs(fail_macf_str, *self.kern_text)
-        if not fail_macf_refs:
-            fail_macf_refs = self.find_string_refs(fail_macf_str)
-        if not fail_macf_refs:
+        refs = self.find_string_refs(fail_macf_str, *self.kern_text)
+        if not refs:
             self._log(" [-] no xrefs for IOUC failed-MACF format string")
             return False

-        fail_sb_str = self.find_string(b"IOUC %s failed sandbox in process %s")
-        fail_sb_refs = []
-        if fail_sb_str >= 0:
-            fail_sb_refs = self.find_string_refs(fail_sb_str, *self.kern_text)
-            if not fail_sb_refs:
-                fail_sb_refs = self.find_string_refs(fail_sb_str)
-
-        sb_ref_set = {adrp for adrp, _, _ in fail_sb_refs}

-        def _has_macf_dispatch_shape(callee_off):
-            callee_end = self._find_func_end(callee_off, 0x600)
-            saw_load = False
-            saw_call = False
+        def _has_macf_aggregator_shape(callee_off):
+            callee_end = self._find_func_end(callee_off, 0x400)
+            saw_slot_load = False
+            saw_indirect_call = False
             for off in range(callee_off, callee_end, 4):
                 d = self._disas_at(off)
                 if not d:
@@ -52,74 +41,66 @@ class KernelJBPatchIoucmacfMixin:
                 ins = d[0]
                 op = ins.op_str.replace(" ", "").lower()
                 if ins.mnemonic == "ldr" and ",#0x9e8]" in op and op.startswith("x10,[x10"):
-                    saw_load = True
+                    saw_slot_load = True
                 if ins.mnemonic in ("blraa", "blrab", "blr") and op.startswith("x10"):
-                    saw_call = True
-                if saw_load and saw_call:
+                    saw_indirect_call = True
+                if saw_slot_load and saw_indirect_call:
                     return True
             return False

-        candidates = []
-        for adrp_off, _, _ in fail_macf_refs:
-            fn = self.find_function_start(adrp_off)
-            if fn < 0:
-                continue
-            fn_end = self._find_func_end(fn, 0x2000)
-            if fn_end <= fn + 0x20:
+        for adrp_off, _, _ in refs:
+            func_start = self.find_function_start(adrp_off)
+            if func_start < 0:
                 continue
+            func_end = self._find_func_end(func_start, 0x2000)

+            for off in range(max(func_start, adrp_off - 0x120), min(func_end, adrp_off + 4), 4):
+                d0 = self._disas_at(off)
+                d1 = self._disas_at(off + 4)
+                if not d0 or not d1:
+                    continue
+                i0 = d0[0]
+                i1 = d1[0]
+                if i0.mnemonic != "bl" or i1.mnemonic != "cbz":
+                    continue
+                if not i1.op_str.replace(" ", "").startswith("w0,"):
+                    continue

-            # Require a BL call to a MACF-dispatcher-like function.
-            has_dispatch_call = False
-            for off in range(fn, fn_end, 4):
                 bl_target = self._is_bl(off)
-                if bl_target < 0:
+                if bl_target < 0 or not _has_macf_aggregator_shape(bl_target):
                     continue
-                if _has_macf_dispatch_shape(bl_target):
-                    has_dispatch_call = True
-                    break
-            if not has_dispatch_call:
-                continue

-            # Prefer candidates that also reference the sandbox-fail format string.
-            score = 0
-            for sb_adrp in sb_ref_set:
-                if fn <= sb_adrp < fn_end:
-                    score += 2
+                if len(i1.operands) < 2:
+                    continue
+                allow_target = getattr(i1.operands[-1], 'imm', -1)
+                if not (off < allow_target < func_end):
+                    continue

-            # Sanity: should branch on w0 before logging failed-MACF.
-            has_guard = False
-            scan_start = max(fn, adrp_off - 0x100)
-            for off in range(scan_start, adrp_off, 4):
-                d = self._disas_at(off)
-                if not d:
-                    continue
-                ins = d[0]
-                if ins.mnemonic not in ("cbz", "cbnz"):
-                    continue
-                if not ins.op_str.replace(" ", "").startswith("w0,"):
-                    continue
-                target = None
-                for op in reversed(ins.operands):
-                    if op.type == ARM64_OP_IMM:
-                        target = op.imm
+                fail_log_adrp = None
+                for probe in range(off + 8, min(func_end, off + 0x80), 4):
+                    d = self._disas_at(probe)
+                    if not d:
+                        continue
+                    ins = d[0]
+                    if ins.mnemonic == "adrp" and probe == adrp_off:
+                        fail_log_adrp = probe
+                        break
-                if target and off < target < fn_end:
-                    has_guard = True
-                    break
-            if not has_guard:
-                continue
+                if fail_log_adrp is None:
+                    continue

-            candidates.append((score, fn, adrp_off, fn_end))
+                patch_bytes = self._encode_b(off + 4, allow_target)
+                if not patch_bytes:
+                    continue

-        if not candidates:
-            self._log(" [-] no safe IOUC MACF candidate function")
-            return False
+                self._log(
+                    f" [+] IOUC MACF gate fn=0x{func_start:X}, bl=0x{off:X}, cbz=0x{off + 4:X}, allow=0x{allow_target:X}"
+                )
+                self.emit(
+                    off + 4,
+                    patch_bytes,
+                    f"b #0x{allow_target - (off + 4):X} [IOUC MACF deny → allow]",
+                )
+                return True

-        # Deterministic pick: highest score, then lowest function offset.
-        candidates.sort(key=lambda item: (-item[0], item[1]))
-        score, fn, _, _ = candidates[0]
-        self._log(f" [+] candidate fn=0x{fn:X} (score={score})")
-
-        self.emit(fn + 4, asm("mov x0, xzr"), "mov x0,xzr [IOUC MACF gate low-risk]")
-        self.emit(fn + 8, bytes([0xFF, 0x0F, 0x5F, 0xD6]), "retab [IOUC MACF gate low-risk]")
-        return True
+        self._log(" [-] narrow IOUC MACF deny branch not found")
+        return False
@@ -1,6 +1,9 @@
 """Mixin: KernelJBPatchKcall10Mixin."""

 from .kernel_jb_base import _rd64, struct
+from .kernel import asm
+from .kernel_asm import _PACIBSP_U32, _RETAB_U32


 # Max sysent entries in XNU (dispatch clamps at 0x22E = 558).
 _SYSENT_MAX_ENTRIES = 558
@@ -9,6 +12,18 @@ _SYSENT_ENTRY_SIZE = 24
 # PAC discriminator used by the syscall dispatch (MOV X17, #0xBCAD; BLRAA X8, X17).
 _SYSENT_PAC_DIVERSITY = 0xBCAD

+# Rebuilt PCC 26.1 semantics:
+#   uap[0] = target function pointer
+#   uap[1] = arg0
+#   ...
+#   uap[7] = arg6
+# Return path:
+#   store X0 as 64-bit into retval, expose through sy_return_type=UINT64
+_KCALL10_NARG = 8
+_KCALL10_ARG_BYTES_32 = _KCALL10_NARG * 4
+_KCALL10_RETURN_TYPE = 7
+_KCALL10_EINVAL = 22

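The comment block above defines the rebuilt syscall-439 calling convention: the first qword of the user argument block is the target function pointer and the next seven are its arguments. A hypothetical user-space sketch packing such a uap buffer (`pack_kcall10_uap` is an illustrative name; the real caller-side code is not part of this diff):

```python
import struct

KCALL10_NARG = 8  # matches _KCALL10_NARG above

def pack_kcall10_uap(target_fn, *args):
    """Pack uap[0]=target, uap[1..7]=args (zero-padded) as 8 LE qwords."""
    if len(args) > KCALL10_NARG - 1:
        raise ValueError("kcall10 passes at most 7 arguments")
    qwords = [target_fn, *args] + [0] * (KCALL10_NARG - 1 - len(args))
    return struct.pack("<8Q", *qwords)
```

The fixed 64-byte layout is what lets the kernel-side cave load the target and arguments with plain `ldr`/`ldp` at known offsets.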
 class KernelJBPatchKcall10Mixin:
     def _find_sysent_table(self, nosys_off):
@@ -24,17 +39,15 @@ class KernelJBPatchKcall10Mixin:
         Previous bug: the old code took the first _nosys match as entry 0,
         but _nosys first appears at entry ~428 (varies by XNU build).
         """
         # Step 1: find any _nosys-matching entry
         nosys_entry = -1
         seg_start = -1
-        for seg_name, vmaddr, fileoff, filesize, _ in self.all_segments:
+        for seg_name, _, fileoff, filesize, _ in self.all_segments:
             if "DATA" not in seg_name:
                 continue
             for off in range(fileoff, fileoff + filesize - _SYSENT_ENTRY_SIZE, 8):
                 val = _rd64(self.raw, off)
                 decoded = self._decode_chained_ptr(val)
                 if decoded == nosys_off:
                     # Verify: next entry should also have valid sy_call
                     val2 = _rd64(self.raw, off + _SYSENT_ENTRY_SIZE)
                     decoded2 = self._decode_chained_ptr(val2)
                     if decoded2 > 0 and any(
@@ -54,21 +67,16 @@ class KernelJBPatchKcall10Mixin:
                         f"scanning backward for table start"
                     )

         # Step 2: scan backward to find entry 0
         base = nosys_entry
         entries_back = 0
         while base - _SYSENT_ENTRY_SIZE >= seg_start:
             if entries_back >= _SYSENT_MAX_ENTRIES:
                 break
             prev = base - _SYSENT_ENTRY_SIZE
             # Check sy_call decodes to valid code
             val = _rd64(self.raw, prev)
             decoded = self._decode_chained_ptr(val)
-            if decoded <= 0 or not any(
-                s <= decoded < e for s, e in self.code_ranges
-            ):
+            if decoded <= 0 or not any(s <= decoded < e for s, e in self.code_ranges):
                 break
             # Check metadata looks like a sysent entry
             narg = struct.unpack_from("<H", self.raw, prev + 20)[0]
             arg_bytes = struct.unpack_from("<H", self.raw, prev + 22)[0]
             if narg > 12 or arg_bytes > 96:
@@ -82,19 +90,8 @@ class KernelJBPatchKcall10Mixin:
             )
         return base

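The backward scan above accepts a previous row only if its `sy_call` decodes into a code range and its metadata looks like a real sysent entry: `narg` at offset +20 and `arg_bytes` at +22 of each 24-byte row, bounded by 12 arguments / 96 bytes. The metadata half of that check as a standalone sketch (`sysent_metadata_ok` is an illustrative name):

```python
import struct

SYSENT_ENTRY_SIZE = 24  # matches _SYSENT_ENTRY_SIZE

def sysent_metadata_ok(raw, entry_off):
    """Plausibility check mirroring the backward scan: narg<=12, arg_bytes<=96."""
    narg, arg_bytes = struct.unpack_from("<HH", raw, entry_off + 20)
    return narg <= 12 and arg_bytes <= 96
```

Validating both fields keeps the scan from walking past the front of the table into unrelated data that happens to decode as a pointer.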
-    def _encode_chained_auth_ptr(self, target_foff, next_val, diversity=0,
-                                 key=0, addr_div=0):
-        """Encode an arm64e kernel cache auth rebase chained fixup pointer.
-
-        Layout (DYLD_CHAINED_PTR_64_KERNEL_CACHE):
-            bits[29:0]: target (file offset)
-            bits[31:30]: cacheLevel (0)
-            bits[47:32]: diversity (16 bits)
-            bit[48]: addrDiv
-            bits[50:49]: key (0=IA, 1=IB, 2=DA, 3=DB)
-            bits[62:51]: next (12 bits, 4-byte stride delta to next fixup)
-            bit[63]: isAuth (1)
-        """
+    def _encode_chained_auth_ptr(self, target_foff, next_val, diversity=0, key=0, addr_div=0):
+        """Encode an arm64e kernel cache auth rebase chained fixup pointer."""
         val = (
             (target_foff & 0x3FFFFFFF)
             | ((diversity & 0xFFFF) << 32)
@@ -106,18 +103,118 @@ class KernelJBPatchKcall10Mixin:
         return struct.pack("<Q", val)

     def _extract_chain_next(self, raw_val):
         """Extract the 'next' chain field from a raw chained fixup pointer."""
         return (raw_val >> 51) & 0xFFF

-    def patch_kcall10(self):
-        """Low-risk safe stub for syscall 439.
+    def _extract_chain_diversity(self, raw_val):
+        return (raw_val >> 32) & 0xFFFF

-        Instead of injecting an arbitrary-call shellcode trampoline, route
-        syscall 439 to `_nosys` with valid chained-fixup auth encoding.
+    def _extract_chain_addr_div(self, raw_val):
+        return (raw_val >> 48) & 1
+
+    def _extract_chain_key(self, raw_val):
+        return (raw_val >> 49) & 3
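`_encode_chained_auth_ptr` and the `_extract_chain_*` helpers pack and unpack the DYLD_CHAINED_PTR_64_KERNEL_CACHE auth-rebase layout described in the removed docstring above. A standalone round-trip sketch using the same bit positions (`encode_chained_auth_ptr` / `extract_fields` are illustrative names):

```python
import struct

def encode_chained_auth_ptr(target, nxt, diversity=0, key=0, addr_div=0):
    """DYLD_CHAINED_PTR_64_KERNEL_CACHE auth rebase: isAuth = 1 at bit 63."""
    val = (
        (target & 0x3FFFFFFF)          # bits[29:0]  target (file offset)
        | ((diversity & 0xFFFF) << 32)  # bits[47:32] PAC diversity
        | ((addr_div & 1) << 48)        # bit[48]     addrDiv
        | ((key & 3) << 49)             # bits[50:49] key (IA/IB/DA/DB)
        | ((nxt & 0xFFF) << 51)         # bits[62:51] next fixup delta
        | (1 << 63)                     # bit[63]     isAuth
    )
    return struct.pack("<Q", val)

def extract_fields(raw_val):
    """Unpack the same fields from a raw 64-bit fixup value."""
    return {
        "target": raw_val & 0x3FFFFFFF,
        "diversity": (raw_val >> 32) & 0xFFFF,
        "addr_div": (raw_val >> 48) & 1,
        "key": (raw_val >> 49) & 3,
        "next": (raw_val >> 51) & 0xFFF,
        "is_auth": (raw_val >> 63) & 1,
    }
```

Encoding with the dispatch's known diversity (`0xBCAD` above) and extracting the fields back out round-trips exactly.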
+    def _find_munge32_for_narg(self, sysent_off, narg, arg_bytes):
+        """Find a reusable 32-bit munger entry with matching metadata.
+
+        Returns `(target_foff, exemplar_entry, match_count)` or `(-1, -1, 0)`.
+        Requires a unique decoded helper target across all matching sysent rows.
         """
-        self._log("\n[JB] kcall10: low-risk nosys stub")
+        candidates = {}
+        for idx in range(_SYSENT_MAX_ENTRIES):
+            entry = sysent_off + idx * _SYSENT_ENTRY_SIZE
+            cur_narg = struct.unpack_from("<H", self.raw, entry + 20)[0]
+            cur_arg_bytes = struct.unpack_from("<H", self.raw, entry + 22)[0]
+            if cur_narg != narg or cur_arg_bytes != arg_bytes:
+                continue
+            raw_munge = _rd64(self.raw, entry + 8)
+            target = self._decode_chained_ptr(raw_munge)
+            if target <= 0:
+                continue
+            bucket = candidates.setdefault(target, [])
+            bucket.append(entry)
+
+        if not candidates:
+            return -1, -1, 0
+        if len(candidates) != 1:
+            self._log(
+                " [-] multiple distinct 8-arg munge32 helpers found: "
+                + ", ".join(f"0x{target:X}" for target in sorted(candidates))
+            )
+            return -1, -1, 0
+
+        target, entries = next(iter(candidates.items()))
+        return target, entries[0], len(entries)
+
+    def _build_kcall10_cave(self):
+        """Build an ABI-correct kcall cave.
+
+        Contract:
+            x0 = proc*
+            x1 = &uthread->uu_arg[0]
+            x2 = &uthread->uu_rval[0]
+
+        uap layout (8 qwords):
+            [0] target function pointer
+            [1] arg0
+            [2] arg1
+            [3] arg2
+            [4] arg3
+            [5] arg4
+            [6] arg5
+            [7] arg6
+
+        Behavior:
+            - validates uap / retval / target are non-null
+            - invokes target(arg0..arg6, x7=0)
+            - stores 64-bit X0 into retval for `_SYSCALL_RET_UINT64_T`
+            - returns 0 on success or EINVAL on malformed request
+        """
+        code = []
+        code.append(struct.pack("<I", _PACIBSP_U32))
+        code.append(asm("sub sp, sp, #0x30"))
+        code.append(asm("stp x21, x22, [sp]"))
|
||||
code.append(asm("stp x19, x20, [sp, #0x10]"))
|
||||
code.append(asm("stp x29, x30, [sp, #0x20]"))
|
||||
code.append(asm("add x29, sp, #0x20"))
|
||||
code.append(asm(f"mov w19, #{_KCALL10_EINVAL}"))
|
||||
code.append(asm("mov x20, x1"))
|
||||
code.append(asm("mov x21, x2"))
|
||||
code.append(asm("cbz x20, #0x30"))
|
||||
code.append(asm("cbz x21, #0x2c"))
|
||||
code.append(asm("ldr x16, [x20]"))
|
||||
code.append(asm("cbz x16, #0x24"))
|
||||
code.append(asm("ldp x0, x1, [x20, #0x8]"))
|
||||
code.append(asm("ldp x2, x3, [x20, #0x18]"))
|
||||
code.append(asm("ldp x4, x5, [x20, #0x28]"))
|
||||
code.append(asm("ldr x6, [x20, #0x38]"))
|
||||
code.append(asm("mov x7, xzr"))
|
||||
code.append(asm("blr x16"))
|
||||
code.append(asm("str x0, [x21]"))
|
||||
code.append(asm("mov w19, #0"))
|
||||
code.append(asm("mov w0, w19"))
|
||||
code.append(asm("ldp x21, x22, [sp]"))
|
||||
code.append(asm("ldp x19, x20, [sp, #0x10]"))
|
||||
code.append(asm("ldp x29, x30, [sp, #0x20]"))
|
||||
code.append(asm("add sp, sp, #0x30"))
|
||||
code.append(struct.pack("<I", _RETAB_U32))
|
||||
return b"".join(code)
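The uap layout the cave consumes is just eight little-endian qwords, matching the `ldr`/`ldp` offsets `#0x8` through `#0x38` above. A hypothetical caller-side sketch (the helper name and target address are illustrative, not part of this repo) packing a target plus up to seven arguments:

```python
import struct

def pack_kcall10_uap(target, *args):
    """uap[0] = target function pointer, uap[1..7] = arg0..arg6 (zero-padded)."""
    assert len(args) <= 7, "cave forwards at most 7 arguments (x7 is forced to 0)"
    qwords = [target] + list(args) + [0] * (7 - len(args))
    return struct.pack("<8Q", *qwords)

buf = pack_kcall10_uap(0xFFFFFE0007654321, 1, 2, 3)
assert len(buf) == 64                                      # 8 qwords
assert struct.unpack_from("<Q", buf, 0)[0] == 0xFFFFFE0007654321
assert struct.unpack_from("<Q", buf, 0x08)[0] == 1         # arg0, ldp x0 source
assert struct.unpack_from("<Q", buf, 0x38)[0] == 0         # arg6 defaults to zero
```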

    def patch_kcall10(self):
        """Rebuilt ABI-correct kcall patch for syscall 439.

        The historical `kcall10` idea cannot be implemented as a literal
        10-argument Unix syscall on arm64 XNU. The rebuilt variant instead
        repoints `SYS_kas_info` to a cave that consumes the real syscall ABI:

          uap[0] = target
          uap[1..7] = arg0..arg6

        It returns the 64-bit X0 result via `retval` and
        `_SYSCALL_RET_UINT64_T`.
        """
        self._log("\n[JB] kcall10: ABI-correct sysent[439] cave")

        # Find _nosys
        nosys_off = self._resolve_symbol("_nosys")
        if nosys_off < 0:
            nosys_off = self._find_nosys()
@@ -125,50 +222,73 @@ class KernelJBPatchKcall10Mixin:
            self._log(" [-] _nosys not found")
            return False

        self._log(f" [+] _nosys at 0x{nosys_off:X}")

        # Find sysent table (real base via backward scan)
        sysent_off = self._find_sysent_table(nosys_off)
        if sysent_off < 0:
            self._log(" [-] sysent table not found")
            return False

        self._log(f" [+] sysent table at file offset 0x{sysent_off:X}")

        # Entry 439 (SYS_kas_info)
        entry_439 = sysent_off + 439 * _SYSENT_ENTRY_SIZE

        # Patch sysent[439] to _nosys with proper chained auth pointer.
        munger_target, exemplar_entry, match_count = self._find_munge32_for_narg(
            sysent_off, _KCALL10_NARG, _KCALL10_ARG_BYTES_32
        )
        if munger_target < 0:
            self._log(" [-] no unique reusable 8-arg munge32 helper found")
            return False

        cave_bytes = self._build_kcall10_cave()
        cave_off = self._find_code_cave(len(cave_bytes))
        if cave_off < 0:
            self._log(" [-] no executable code cave found for kcall10")
            return False

        # Read original raw value to preserve the chain 'next' field
        old_sy_call_raw = _rd64(self.raw, entry_439)
        call_next = self._extract_chain_next(old_sy_call_raw)

        old_munge_raw = _rd64(self.raw, entry_439 + 8)
        munge_next = self._extract_chain_next(old_munge_raw)
        munge_div = self._extract_chain_diversity(old_munge_raw)
        munge_addr_div = self._extract_chain_addr_div(old_munge_raw)
        munge_key = self._extract_chain_key(old_munge_raw)

        self._log(f" [+] sysent table at file offset 0x{sysent_off:X}")
        self._log(f" [+] sysent[439] entry at 0x{entry_439:X}")
        self._log(
            f" [+] reusing unique 8-arg munge32 target 0x{munger_target:X} "
            f"from exemplar entry 0x{exemplar_entry:X} ({match_count} matching sysent rows)"
        )
        self._log(f" [+] cave at 0x{cave_off:X} ({len(cave_bytes):#x} bytes)")

        self.emit(
            cave_off,
            cave_bytes,
            "kcall10 ABI-correct cave (target + 7 args -> uint64 x0)",
        )
        self.emit(
            entry_439,
            self._encode_chained_auth_ptr(
                nosys_off,
                cave_off,
                next_val=call_next,
                diversity=_SYSENT_PAC_DIVERSITY,
                key=0,  # IA
                addr_div=0,  # fixed discriminator (not address-blended)
                key=0,
                addr_div=0,
            ),
            f"sysent[439].sy_call = _nosys 0x{nosys_off:X} "
            f"(auth rebase, div=0xBCAD, next={call_next}) [kcall10 low-risk]",
            f"sysent[439].sy_call = cave 0x{cave_off:X} (auth rebase, div=0xBCAD, next={call_next}) [kcall10]",
        )
        self.emit(
            entry_439 + 8,
            self._encode_chained_auth_ptr(
                munger_target,
                next_val=munge_next,
                diversity=munge_div,
                key=munge_key,
                addr_div=munge_addr_div,
            ),
            f"sysent[439].sy_arg_munge32 = 8-arg helper 0x{munger_target:X} [kcall10]",
        )

        # sy_return_type = SYSCALL_RET_INT_T (1)
        self.emit(
            entry_439 + 16,
            struct.pack("<I", 1),
            "sysent[439].sy_return_type = 1 [kcall10 low-risk]",
            struct.pack("<IHH", _KCALL10_RETURN_TYPE, _KCALL10_NARG, _KCALL10_ARG_BYTES_32),
            "sysent[439].sy_return_type=7,sy_narg=8,sy_arg_bytes=0x20 [kcall10]",
        )

        # sy_narg = 0, sy_arg_bytes = 0
        self.emit(
            entry_439 + 20,
            struct.pack("<I", 0),
            "sysent[439].sy_narg=0,sy_arg_bytes=0 [kcall10 low-risk]",
        )

        return True
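The metadata write above covers the tail of a sysent row in one `<IHH>` pack: offsets +16 (sy_return_type), +20 (sy_narg), and +22 (sy_arg_bytes), as used by the reads in `_find_munge32_for_narg`. A sketch with illustrative constant values taken from the patch log string (`sy_return_type=7,sy_narg=8,sy_arg_bytes=0x20`); the real constants live in the patcher module and may differ:

```python
import struct

# Illustrative stand-ins for _KCALL10_RETURN_TYPE / _KCALL10_NARG /
# _KCALL10_ARG_BYTES_32; values mirror the emit() log string above.
KCALL10_RETURN_TYPE = 7
KCALL10_NARG = 8
KCALL10_ARG_BYTES_32 = 0x20  # 8 args * 4 bytes each in the 32-bit munge view

meta = struct.pack("<IHH", KCALL10_RETURN_TYPE, KCALL10_NARG, KCALL10_ARG_BYTES_32)
assert len(meta) == 8  # one write spanning entry+16 .. entry+23

# Reading it back splits into the three sysent fields at their offsets:
rtype = struct.unpack_from("<I", meta, 0)[0]   # entry + 16
narg = struct.unpack_from("<H", meta, 4)[0]    # entry + 20
arg_bytes = struct.unpack_from("<H", meta, 6)[0]  # entry + 22
assert (rtype, narg, arg_bytes) == (7, 8, 0x20)
```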

@@ -4,45 +4,54 @@ from .kernel_jb_base import ARM64_OP_IMM, asm


class KernelJBPatchSecureRootMixin:
    _SECURE_ROOT_MATCH_OFFSET = 0x11A

    def patch_io_secure_bsd_root(self):
        """Skip security check in _IOSecureBSDRoot.
        Prefer symbol. On stripped kernels, resolve a function that references both
        "SecureRoot" and "SecureRootName" and patch a strict policy branch site.
        """Force the SecureRootName policy return to success.

        Historical versions of this patch matched the first BL* + CBZ/CBNZ W0
        inside the AppleARMPE secure-root dispatch function and rewrote the
        "SecureRoot" gate. That site is semantically wrong and can perturb the
        broader platform-function dispatch path.

        The correct minimal bypass is the final CSEL in the "SecureRootName"
        path that selects between success (0) and kIOReturnNotPrivileged.
        """
        self._log("\n[JB] _IOSecureBSDRoot: skip check")
        self._log("\n[JB] _IOSecureBSDRoot: force SecureRootName success")

        # Try symbol first
        foff = self._resolve_symbol("_IOSecureBSDRoot")
        if foff < 0:
            foff = self._find_secure_root_function()
        if foff < 0:
            self._log(" [-] function not found")
        func_candidates = self._find_secure_root_functions()
        if not func_candidates:
            self._log(" [-] secure-root dispatch function not found")
            return False

        func_end = self._find_func_end(foff, 0x1200)
        site = self._find_secure_root_branch_site(foff, func_end)
        if not site:
            self._log(" [-] secure-root policy branch not found")
            return False
        for func_start in sorted(func_candidates):
            func_end = self._find_func_end(func_start, 0x1200)
            site = self._find_secure_root_return_site(func_start, func_end)
            if not site:
                continue

        off, target = site
        b_bytes = self._compile_branch_checked(off, target)
        self.emit(off, b_bytes, f"b #0x{target - off:X} [_IOSecureBSDRoot]")
        return True
            off, reg_name = site
            patch_bytes = self._compile_zero_return_checked(reg_name)
            self.emit(
                off,
                patch_bytes,
                f"mov {reg_name}, #0 [_IOSecureBSDRoot SecureRootName allow]",
            )
            return True

    def _find_secure_root_function(self):
        self._log(" [-] SecureRootName deny-return site not found")
        return False

    def _find_secure_root_functions(self):
        funcs_with_name = self._functions_referencing_string(b"SecureRootName")
        if not funcs_with_name:
            return -1
            return set()

        funcs_with_root = self._functions_referencing_string(b"SecureRoot")
        common = funcs_with_name & funcs_with_root
        if not common:
            # Fail closed: a plain SecureRootName-only function is often setup/epilogue code.
            return -1

        # Deterministic pick: lowest function offset among common candidates.
        return min(common)
        if common:
            return common
        return funcs_with_name

    def _functions_referencing_string(self, needle):
        func_starts = set()
@@ -72,70 +81,121 @@ class KernelJBPatchSecureRootMixin:
                start = pos + 1
        return sorted(set(out))

    def _find_secure_root_branch_site(self, func_start, func_end):
        # Strict selection:
        # - forward conditional branch
        # - on w0
        # - immediately after BL (typical compare/callback check)
        # - not in epilogue guard area (AUTIBSP/TBZ+BRK integrity checks)
    def _find_secure_root_return_site(self, func_start, func_end):
        for off in range(func_start, func_end - 4, 4):
            d = self._disas_at(off)
            if not d:
            dis = self._disas_at(off)
            if not dis:
                continue
            i = d[0]
            if i.mnemonic not in ("cbnz", "cbz"):
            ins = dis[0]
            if ins.mnemonic != "csel" or len(ins.operands) != 3:
                continue
            if not i.op_str.replace(" ", "").startswith("w0,"):
            if ins.op_str.replace(" ", "").split(",")[-1] != "ne":
                continue

            prev = self._disas_at(off - 4)
            if not prev or not prev[0].mnemonic.startswith("bl"):
            dest = ins.reg_name(ins.operands[0].reg)
            zero_src = ins.reg_name(ins.operands[1].reg)
            err_src = ins.reg_name(ins.operands[2].reg)
            if zero_src not in ("wzr", "xzr"):
                continue
            if not dest.startswith("w"):
                continue
            if not self._has_secure_rootname_return_context(off, func_start, err_src):
                continue
            if not self._has_secure_rootname_compare_context(off, func_start):
                continue

            target = None
            for op in reversed(i.operands):
                if op.type == ARM64_OP_IMM:
                    target = op.imm
                    break
            if not target or not (off < target < func_end):
                continue

            if self._looks_like_epilogue_guard(off, target, func_end):
                continue

            return (off, target)

            return off, dest
        return None

    def _looks_like_epilogue_guard(self, off, target, func_end):
        if off >= func_end - 0x40 or target >= func_end - 0x20:
            return True
        for probe in range(max(target - 4, 0), min(target + 0x14, func_end), 4):
            d = self._disas_at(probe)
            if d and d[0].mnemonic == "brk":
                return True
        for probe in range(max(off - 0x10, 0), off + 4, 4):
            d = self._disas_at(probe)
            if d and d[0].mnemonic == "autibsp":
                return True
        return False
    def _has_secure_rootname_return_context(self, off, func_start, err_reg_name):
        saw_flag_load = False
        saw_flag_test = False
        saw_err_build = False
        lookback_start = max(func_start, off - 0x40)

    def _compile_branch_checked(self, off, target):
        delta = target - off
        b_bytes = asm(f"b #{delta}")
        insns = self._disas_n(b_bytes, 0, 1)
        assert insns, "capstone decode failed for secure-root branch patch"
        for probe in range(off - 4, lookback_start - 4, -4):
            dis = self._disas_at(probe)
            if not dis:
                continue
            ins = dis[0]
            ops = ins.op_str.replace(" ", "")

            if not saw_flag_test and ins.mnemonic == "tst" and ops.endswith("#1"):
                saw_flag_test = True
                continue

            if (
                saw_flag_test
                and not saw_flag_load
                and ins.mnemonic == "ldrb"
                and f"[x19,#0x{self._SECURE_ROOT_MATCH_OFFSET:x}]" in ops
            ):
                saw_flag_load = True
                continue

            if self._writes_register(ins, err_reg_name) and ins.mnemonic in ("mov", "movk", "sub"):
                saw_err_build = True

        return saw_flag_load and saw_flag_test and saw_err_build

    def _has_secure_rootname_compare_context(self, off, func_start):
        saw_match_store = False
        saw_cset_eq = False
        saw_cmp_w0_zero = False
        lookback_start = max(func_start, off - 0xA0)

        for probe in range(off - 4, lookback_start - 4, -4):
            dis = self._disas_at(probe)
            if not dis:
                continue
            ins = dis[0]
            ops = ins.op_str.replace(" ", "")

            if (
                not saw_match_store
                and ins.mnemonic == "strb"
                and f"[x19,#0x{self._SECURE_ROOT_MATCH_OFFSET:x}]" in ops
            ):
                saw_match_store = True
                continue

            if saw_match_store and not saw_cset_eq and ins.mnemonic == "cset" and ops.endswith(",eq"):
                saw_cset_eq = True
                continue

            if saw_match_store and saw_cset_eq and not saw_cmp_w0_zero and ins.mnemonic == "cmp":
                if ops.startswith("w0,#0"):
                    saw_cmp_w0_zero = True
                break

        return saw_match_store and saw_cset_eq and saw_cmp_w0_zero

    def _writes_register(self, ins, reg_name):
        if not ins.operands:
            return False
        first = ins.operands[0]
        if getattr(first, "type", None) != 1:
            return False
        return ins.reg_name(first.reg) == reg_name

    def _compile_zero_return_checked(self, reg_name):
        patch_bytes = asm(f"mov {reg_name}, #0")
        insns = self._disas_n(patch_bytes, 0, 1)
        assert insns, "capstone decode failed for secure-root zero-return patch"
        ins = insns[0]
        assert ins.mnemonic == "b", (
            f"secure-root branch decode mismatch: expected 'b', got '{ins.mnemonic}'"
        assert ins.mnemonic == "mov", (
            f"secure-root zero-return decode mismatch: expected 'mov', got '{ins.mnemonic}'"
        )
        got_target = None
        for op in reversed(ins.operands):
        got_dst = ins.reg_name(ins.operands[0].reg)
        assert got_dst == reg_name, (
            f"secure-root zero-return destination mismatch: expected '{reg_name}', got '{got_dst}'"
        )
        got_imm = None
        for op in ins.operands[1:]:
            if op.type == ARM64_OP_IMM:
                got_target = op.imm
                got_imm = op.imm
                break
        assert got_target == delta, (
            "secure-root branch target mismatch: "
            f"expected delta 0x{delta:X}, got 0x{(got_target or -1):X}"
        assert got_imm == 0, (
            f"secure-root zero-return immediate mismatch: expected 0, got {got_imm}"
        )
        return b_bytes
        return patch_bytes
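The matchers above key on normalized Capstone text: the mnemonic plus a whitespace-stripped `op_str`, with the condition code as the last comma-separated token. That text-level filter can be checked against a stub instruction object without a disassembler; a sketch (real code also inspects Capstone detail operands, which this deliberately skips):

```python
from types import SimpleNamespace

def csel_cond_is_ne(ins):
    # Mirrors the `op_str.replace(" ", "").split(",")[-1] != "ne"` reject above.
    return ins.mnemonic == "csel" and ins.op_str.replace(" ", "").split(",")[-1] == "ne"

# Stub objects stand in for capstone CsInsn; only mnemonic/op_str are needed here.
match = SimpleNamespace(mnemonic="csel", op_str="w0, wzr, w8, ne")
wrong_cond = SimpleNamespace(mnemonic="csel", op_str="w0, wzr, w8, eq")
wrong_insn = SimpleNamespace(mnemonic="csinc", op_str="w0, wzr, w8, ne")

assert csel_cond_is_ne(match)
assert not csel_cond_is_ne(wrong_cond)
assert not csel_cond_is_ne(wrong_insn)
```

Normalizing the spacing first is what makes substring checks like `"[x19,#0x11a]" in ops` stable across Capstone's formatting.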

@@ -5,93 +5,90 @@ from .kernel_jb_base import asm, _rd32, struct

class KernelJBPatchSyscallmaskMixin:
    _PACIBSP_U32 = 0xD503237F
    _SYSCALLMASK_FF_BLOB_SIZE = 0x100

    def _is_syscallmask_legacy_candidate(self, func_off):
        """Match legacy 4-arg prologue shape expected by C22 shellcode."""
        func_end = self._find_func_end(func_off, 0x280)
        if func_end <= func_off or func_end - func_off < 0x80:
            return False

        scan_end = min(func_off + 0xA0, func_end)
        seen_cbz_x2 = False
        seen_mov_x19_x0 = False
        seen_mov_x20_x1 = False
        seen_mov_x21_x2 = False
        seen_mov_x22_x3 = False

        for off in range(func_off, scan_end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            i = d[0]
            op = i.op_str.replace(" ", "")
            if i.mnemonic == "cbz" and op.startswith("x2,"):
                seen_cbz_x2 = True
            elif i.mnemonic == "mov":
                if op == "x19,x0":
                    seen_mov_x19_x0 = True
                elif op == "x20,x1":
                    seen_mov_x20_x1 = True
                elif op == "x21,x2":
                    seen_mov_x21_x2 = True
                elif op == "x22,x3":
                    seen_mov_x22_x3 = True

        return (
            seen_cbz_x2
            and seen_mov_x19_x0
            and seen_mov_x20_x1
            and seen_mov_x21_x2
            and seen_mov_x22_x3
    def _find_syscallmask_manager_func(self):
        """Find the high-level apply manager using its error strings."""
        strings = (
            b"failed to apply unix syscall mask",
            b"failed to apply mach trap mask",
            b"failed to apply kernel MIG routine mask",
        )

    def _find_syscallmask_apply_func(self):
        """Find _syscallmask_apply_to_proc.

        Prefer symbol hit. If strict legacy shape is absent, still allow
        symbol-based low-risk in-function patching on newer layouts.
        """
        sym_off = self._resolve_symbol("_syscallmask_apply_to_proc")
        if sym_off >= 0:
            return sym_off

        str_off = self.find_string(b"syscallmask.c")
        if str_off < 0:
            return -1

        refs = self.find_string_refs(str_off, *self.kern_text)
        if not refs:
            refs = self.find_string_refs(str_off)
        if not refs:
            return -1

        base_funcs = sorted(
            {
        candidates = None
        for string in strings:
            str_off = self.find_string(string)
            if str_off < 0:
                return -1
            refs = self.find_string_refs(str_off, *self.sandbox_text)
            if not refs:
                refs = self.find_string_refs(str_off)
            func_starts = {
                self.find_function_start(ref[0])
                for ref in refs
                if self.find_function_start(ref[0]) >= 0
            }
        )
        if not base_funcs:
            if not func_starts:
                return -1
            candidates = func_starts if candidates is None else candidates & func_starts
        if not candidates:
            return -1

        return min(candidates)

    def _extract_w1_immediate_near_call(self, func_off, call_off):
        """Best-effort lookup of the last `mov w1, #imm` before a BL."""
        scan_start = max(func_off, call_off - 0x20)
        for off in range(call_off - 4, scan_start - 4, -4):
            d = self._disas_at(off)
            if not d:
                continue
            insn = d[0]
            if insn.mnemonic != "mov":
                continue
            op = insn.op_str.replace(" ", "")
            if not op.startswith("w1,#"):
                continue
            try:
                return int(op.split("#", 1)[1], 0)
            except ValueError:
                return None
        return None

    def _find_syscallmask_apply_func(self):
        """Find the low-level syscallmask apply wrapper used three times.

        On older PCC kernels this corresponds to the stripped function patched by
        the historical upstream C22 shellcode. On newer kernels it is the wrapper
        underneath `_proc_apply_syscall_masks`.
        """
        for name in ("_syscallmask_apply_to_proc", "_proc_apply_syscall_masks"):
            sym_off = self._resolve_symbol(name)
            if sym_off >= 0:
                return sym_off

        manager_off = self._find_syscallmask_manager_func()
        if manager_off < 0:
            return -1

        candidates = set(base_funcs)
        for base in base_funcs:
            start = max(base - 0x200, self.sandbox_text[0], self.kern_text[0])
            end = min(base + 0x200, self.sandbox_text[1], self.kern_text[1])
            for off in range(start, end, 4):
                if _rd32(self.raw, off) == self._PACIBSP_U32:
                    candidates.add(off)
        func_end = self._find_func_end(manager_off, 0x300)
        target_calls = {}
        for off in range(manager_off, func_end, 4):
            target = self._is_bl(off)
            if target < 0:
                continue
            target_calls.setdefault(target, []).append(off)

        ordered = sorted(
            candidates, key=lambda c: min(abs(c - b) for b in base_funcs)
        )
        for cand in ordered:
            if self._is_syscallmask_legacy_candidate(cand):
                return cand
        for target, calls in sorted(target_calls.items(), key=lambda item: -len(item[1])):
            if len(calls) < 3:
                continue
            whiches = {
                self._extract_w1_immediate_near_call(manager_off, call_off)
                for call_off in calls
            }
            if {0, 1, 2}.issubset(whiches):
                return target

        # Low-risk fallback for newer layouts: use nearest anchor function.
        return base_funcs[0]
        return -1

    def _find_last_branch_target(self, func_off):
        """Find the last BL/B target in a function."""
@@ -110,76 +107,174 @@ class KernelJBPatchSyscallmaskMixin:
            return off, target
        return -1, -1

    def _resolve_syscallmask_helpers(self, func_off):
        """Resolve zalloc/filter helpers with panic-target rejection."""
        panic = self.panic_off
        zalloc_off = self._resolve_symbol("_zalloc_ro_mut")
        filter_off = self._resolve_symbol("_proc_set_syscall_filter_mask")
    def _resolve_syscallmask_helpers(self, func_off, helper_target):
        """Resolve the mutation helper and tail setter target deterministically.

        func_end = self._find_func_end(func_off, 0x280)

        if zalloc_off < 0:
            for off in range(func_off, func_end, 4):
                target = self._is_bl(off)
                if target < 0 or target == panic:
                    continue
                if len(self.bl_callers.get(target, [])) >= 50:
                    zalloc_off = target
                    break

        if filter_off < 0:
            _, filter_off = self._find_last_branch_target(func_off)

        if (
            zalloc_off < 0
            or filter_off < 0
            or zalloc_off == panic
            or filter_off == panic
            or zalloc_off == filter_off
        ):
        Historical C22 calls the next function after the pre-setter helper's
        containing function. On the upstream PCC 26.1 kernel this is the
        `zalloc_ro_mut` wrapper used by the original shellcode. We derive the
        same relation structurally instead of relying on symbol fallback.
        """
        if helper_target < 0:
            return -1, -1

        return zalloc_off, filter_off
        helper_func = self.find_function_start(helper_target)
        if helper_func < 0:
            return -1, -1

    def _find_syscallmask_inject_bl(self, func_off, zalloc_off):
        """Find BL site that will be redirected into the cave."""
        mutator_off = self._find_func_end(helper_func, 0x200)
        if mutator_off <= helper_target or mutator_off >= helper_func + 0x200:
            return -1, -1

        head = self._disas_at(mutator_off)
        if not head:
            return -1, -1
        if head[0].mnemonic not in ("pacibsp", "bti"):
            return -1, -1

        _, setter_off = self._find_last_branch_target(func_off)
        if setter_off < 0:
            return -1, -1
        return mutator_off, setter_off

    def _find_syscallmask_inject_bl(self, func_off):
        """Find the pre-setter helper BL that upstream C22 replaced."""
        func_end = self._find_func_end(func_off, 0x280)
        for off in range(func_off, min(func_off + 0x120, func_end), 4):
            if self._is_bl(off) == zalloc_off:
        scan_end = min(func_off + 0x80, func_end)
        seen_cbz_x2 = False
        for off in range(func_off, scan_end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            insn = d[0]
            op = insn.op_str.replace(" ", "")
            if insn.mnemonic == "cbz" and op.startswith("x2,"):
                seen_cbz_x2 = True
                continue
            if seen_cbz_x2 and self._is_bl(off) >= 0:
                return off
        return -1

    def patch_syscallmask_apply_to_proc(self):
        """Low-risk early-return patch for _syscallmask_apply_to_proc.
    def _find_syscallmask_tail_branch(self, func_off):
        """Find the final tail `B` into the setter core."""
        branch_off, target = self._find_last_branch_target(func_off)
        if branch_off < 0:
            return -1, -1
        if self._is_bl(branch_off) >= 0:
            return -1, -1
        return branch_off, target

        Replaces function body head with:
            mov x0, xzr
            retab
        This avoids code caves, syscall trampolines, and large shellcode
        while guaranteeing deterministic behavior on current vphone600.
    def _build_syscallmask_cave(self, cave_off, zalloc_off, setter_off):
        """Build a C22 cave that forces the installed mask bytes to 0xFF.

        Semantics intentionally follow the historical upstream design: mutate the
        pointed-to mask buffer into an allow-all mask, then continue through the
        normal setter path.
        """
        self._log("\n[JB] _syscallmask_apply_to_proc: low-risk early return")
        blob_size = self._SYSCALLMASK_FF_BLOB_SIZE
        code_off = cave_off + blob_size
        code = []
        code.append(asm("cbz x2, #0x6c"))
        code.append(asm("sub sp, sp, #0x40"))
        code.append(asm("stp x19, x20, [sp, #0x10]"))
        code.append(asm("stp x21, x22, [sp, #0x20]"))
        code.append(asm("stp x29, x30, [sp, #0x30]"))
        code.append(asm("mov x19, x0"))
        code.append(asm("mov x20, x1"))
        code.append(asm("mov x21, x2"))
        code.append(asm("mov x22, x3"))
        code.append(asm("mov x8, #8"))
        code.append(asm("mov x0, x17"))
        code.append(asm("mov x1, x21"))
        code.append(asm("mov x2, #0"))

        adr_off = code_off + len(code) * 4
        blob_delta = cave_off - adr_off
        code.append(asm(f"adr x3, #{blob_delta}"))
        code.append(asm("udiv x4, x22, x8"))
        code.append(asm("msub x10, x4, x8, x22"))
        code.append(asm("cbz x10, #8"))
        code.append(asm("add x4, x4, #1"))

        bl_off = code_off + len(code) * 4
        branch_back_off = code_off + 27 * 4
        bl = self._encode_bl(bl_off, zalloc_off)
        branch_back = self._encode_b(branch_back_off, setter_off)
        if not bl or not branch_back:
            return None
        code.append(bl)
        code.append(asm("mov x0, x19"))
        code.append(asm("mov x1, x20"))
        code.append(asm("mov x2, x21"))
        code.append(asm("mov x3, x22"))
        code.append(asm("ldp x19, x20, [sp, #0x10]"))
        code.append(asm("ldp x21, x22, [sp, #0x20]"))
        code.append(asm("ldp x29, x30, [sp, #0x30]"))
        code.append(asm("add sp, sp, #0x40"))
        code.append(branch_back)

        return (b"\xFF" * blob_size) + b"".join(code), code_off, blob_size
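The `udiv`/`msub`/`cbz`/`add` run in the cave computes a ceiling word count from the mask byte length held in x22. The same arithmetic in plain Python, to sanity-check the shellcode's intent instruction by instruction:

```python
def ceil_words(mask_bytes, word_size=8):
    # udiv x4, x22, x8      -> truncating quotient
    q = mask_bytes // word_size
    # msub x10, x4, x8, x22 -> remainder = x22 - q * word_size
    rem = mask_bytes - q * word_size
    # cbz x10, #8           -> skip the increment when the division was exact
    if rem != 0:
        # add x4, x4, #1    -> round the partial word up
        q += 1
    return q

assert ceil_words(0x40) == 8   # exact multiple: no increment
assert ceil_words(0x41) == 9   # one trailing byte rounds up
assert ceil_words(0) == 0
```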

    def patch_syscallmask_apply_to_proc(self):
        """Retargeted C22 patch based on the verified upstream semantics.

        Historical C22 does not early-return. It hijacks the low-level apply
        wrapper, rewrites the effective syscall/mach/kobj mask bytes to an
        allow-all blob via `zalloc_ro_mut`, then resumes through the normal
        setter path.
        """
        self._log("\n[JB] _syscallmask_apply_to_proc: retargeted upstream C22")

        func_off = self._find_syscallmask_apply_func()
        if func_off < 0:
            self._log(
                " [-] _syscallmask_apply_to_proc not found (legacy signature mismatch, fail-closed)"
            )
            self._log(" [-] syscallmask apply wrapper not found (fail-closed)")
            return False

        func_end = self._find_func_end(func_off, 0x200)
        if func_end <= func_off + 8:
            self._log(" [-] function too small for in-place early return patch")
        call_off = self._find_syscallmask_inject_bl(func_off)
        if call_off < 0:
            self._log(" [-] helper BL site not found in syscallmask wrapper")
            return False

        branch_off, setter_off = self._find_syscallmask_tail_branch(func_off)
        if branch_off < 0 or setter_off < 0:
            self._log(" [-] setter tail branch not found in syscallmask wrapper")
            return False

        mutator_off, _ = self._resolve_syscallmask_helpers(func_off, self._is_bl(call_off))
        if mutator_off < 0:
            self._log(" [-] syscallmask mutation helper not resolved structurally")
            return False

        cave_size = self._SYSCALLMASK_FF_BLOB_SIZE + 0x80
        cave_off = self._find_code_cave(cave_size)
        if cave_off < 0:
            self._log(" [-] no executable code cave found for C22")
            return False

        cave_info = self._build_syscallmask_cave(cave_off, mutator_off, setter_off)
        if cave_info is None:
            self._log(" [-] failed to encode C22 cave branches")
            return False
        cave_bytes, code_off, blob_size = cave_info

        branch_to_cave = self._encode_b(branch_off, code_off)
        if not branch_to_cave:
            self._log(" [-] tail branch cannot reach C22 cave")
            return False

        self.emit(
            func_off + 4,
            asm("mov x0, xzr"),
            "mov x0,xzr [_syscallmask_apply_to_proc low-risk]",
            call_off,
            asm("mov x17, x0"),
            "mov x17,x0 [syscallmask C22 save RO selector]",
        )
        self.emit(
            func_off + 8,
            bytes([0xFF, 0x0F, 0x5F, 0xD6]),  # retab
            "retab [_syscallmask_apply_to_proc low-risk]",
            branch_off,
            branch_to_cave,
            "b cave [syscallmask C22 mutate mask then setter]",
        )
        self.emit(
            cave_off,
            cave_bytes,
            f"syscallmask C22 cave (ff blob {blob_size:#x} + structural mutator + setter tail)",
        )
        return True
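The reachability check on `_encode_b`/`_encode_bl` above comes down to the fixed ARM64 B/BL immediate form: a signed 26-bit word offset, so a ±128 MiB reach, with a falsy result on overflow. A standalone sketch of that encoding (the repo's helpers may differ in signature and return convention):

```python
import struct

def encode_branch(src_off, dst_off, link=False):
    """Encode ARM64 `b`/`bl` from src to dst, or None when out of range."""
    delta = dst_off - src_off
    # imm26 is a signed word (4-byte) offset: range is [-0x8000000, 0x7FFFFFC].
    if delta % 4 or not (-0x8000000 <= delta < 0x8000000):
        return None
    imm26 = (delta >> 2) & 0x03FFFFFF
    opcode = (0x94000000 if link else 0x14000000) | imm26
    return struct.pack("<I", opcode)

assert encode_branch(0x1000, 0x1000) == b"\x00\x00\x00\x14"              # b .
assert encode_branch(0x1000, 0x1004, link=True) == b"\x01\x00\x00\x94"   # bl +4
assert encode_branch(0, 0x8000000) is None                               # just past +128 MiB
```

The None-on-overflow shape is what lets the callers above fail closed with "tail branch cannot reach C22 cave" instead of emitting a truncated branch.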
|
||||
|
||||
@@ -1,44 +1,47 @@
 """Mixin: KernelJBPatchVmFaultMixin."""
 
+from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG
+
 from .kernel_jb_base import NOP
 
 
 class KernelJBPatchVmFaultMixin:
     def patch_vm_fault_enter_prepare(self):
-        """NOP a PMAP check in _vm_fault_enter_prepare.
+        """Force the upstream cs_bypass fast-path in _vm_fault_enter_prepare.
 
         Strict mode:
         - Resolve vm_fault_enter_prepare function via symbol/string anchor.
         - In-function only (no global fallback scan).
-        - Require a unique BL site with post-call flag test shape.
+        - Require the unique `tbz Wflags,#3 ; mov W?,#0 ; b ...` gate where
+          Wflags is loaded from `[fault_info,#0x28]` near the function prologue.
+
+        This intentionally reproduces the upstream PCC 26.1 research-site
+        semantics and avoids the old false-positive matcher that drifted onto
+        the `pmap_lock_phys_page()` / `pmap_unlock_phys_page()` pair.
         """
         self._log("\n[JB] _vm_fault_enter_prepare: NOP")
 
-        # Try symbol first
+        candidate_funcs = []
+
         foff = self._resolve_symbol("_vm_fault_enter_prepare")
         if foff >= 0:
-            func_end = self._find_func_end(foff, 0x2000)
-            result = self._find_bl_tbz_pmap(foff, func_end)
-            if result:
-                self.emit(result, NOP, "NOP [_vm_fault_enter_prepare]")
-                return True
+            candidate_funcs.append(foff)
 
         # String anchor: all refs to "vm_fault_enter_prepare"
         str_off = self.find_string(b"vm_fault_enter_prepare")
-        candidate_sites = set()
         if str_off >= 0:
             refs = self.find_string_refs(str_off, *self.kern_text)
-            funcs = sorted(
-                {
-                    self.find_function_start(adrp_off)
-                    for adrp_off, _, _ in refs
-                    if self.find_function_start(adrp_off) >= 0
-                }
-            )
-            for func_start in funcs:
-                func_end = self._find_func_end(func_start, 0x4000)
-                result = self._find_bl_tbz_pmap(func_start, func_end)
-                if result is not None:
-                    candidate_sites.add(result)
+            candidate_funcs.extend(
+                self.find_function_start(adrp_off)
+                for adrp_off, _, _ in refs
+                if self.find_function_start(adrp_off) >= 0
+            )
+
+        candidate_sites = set()
+        for func_start in sorted(set(candidate_funcs)):
+            func_end = self._find_func_end(func_start, 0x4000)
+            result = self._find_cs_bypass_gate(func_start, func_end)
+            if result is not None:
+                candidate_sites.add(result)
 
         if len(candidate_sites) == 1:
             result = next(iter(candidate_sites))
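Both the old and the retargeted matcher share one fail-closed rule: a write is emitted only when scanning all candidate functions resolves to exactly one distinct site; zero or several hits mean no patch. A minimal standalone illustration of that rule (function and names are mine, not from the repo):

```python
def unique_site(candidates):
    """Return the single distinct patch site, or None.

    Fail-closed selection: ambiguity (several distinct hits) is treated
    the same as no hit at all, so an over-broad matcher can never cause
    a write to the wrong offset.  Hypothetical helper for illustration.
    """
    sites = set(candidates)                  # collapse duplicate hits
    return next(iter(sites)) if len(sites) == 1 else None
```

This is why the symbol path and the string-anchor path above now feed one shared `candidate_sites` set instead of patching eagerly on the first match: a matcher that drifts onto a second function (as the old `pmap_lock_phys_page` false positive did) produces two distinct sites and the patch refuses to fire.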
@@ -54,48 +57,81 @@ class KernelJBPatchVmFaultMixin:
         self._log(" [-] patch site not found")
         return False
 
-    def _find_bl_tbz_pmap(self, start, end):
-        """Find strict BL site used by vm_fault_enter_prepare guard path.
+    def _find_cs_bypass_gate(self, start, end):
+        """Find the upstream-style cs_bypass gate in vm_fault_enter_prepare.
 
-        Expected local shape:
-            BL target(rare)
-            LDRB wN, [xM, #0x2c]
-            ... TBZ/TBNZ wN, #bit, <forward>
-        Returns BL offset when the match is unique inside [start, end).
+        Expected semantic shape:
+            ... early in prologue: LDR Wflags, [fault_info_reg, #0x28]
+            ... later: TBZ Wflags, #3, validation_path
+                       MOV Wtainted, #0
+                       B post_validation_success
+
+        Bit #3 in the packed fault_info flags word is `cs_bypass`.
+        NOPing the TBZ forces the fast-path unconditionally, matching the
+        upstream PCC 26.1 research patch site.
         """
+        flag_regs = set()
+        prologue_end = min(end, start + 0x120)
+        for off in range(start, prologue_end, 4):
+            d0 = self._disas_at(off)
+            if not d0:
+                continue
+            insn = d0[0]
+            if insn.mnemonic != "ldr" or len(insn.operands) < 2:
+                continue
+            dst, src = insn.operands[0], insn.operands[1]
+            if dst.type != ARM64_OP_REG or src.type != ARM64_OP_MEM:
+                continue
+            dst_name = insn.reg_name(dst.reg)
+            if not dst_name.startswith("w"):
+                continue
+            if src.mem.base == 0 or src.mem.disp != 0x28:
+                continue
+            flag_regs.add(dst.reg)
+
+        if not flag_regs:
+            return None
+
         hits = []
         scan_start = max(start + 0x80, start)
-        for off in range(scan_start, end - 0x10, 4):
+        for off in range(scan_start, end - 0x8, 4):
             d0 = self._disas_at(off)
-            if not d0 or d0[0].mnemonic != "bl":
+            if not d0:
                 continue
-            bl_target = d0[0].operands[0].imm
-            n_callers = len(self.bl_callers.get(bl_target, []))
-            if n_callers >= 20:
+            gate = d0[0]
+            if gate.mnemonic != "tbz" or len(gate.operands) != 3:
                 continue
+            reg_op, bit_op, target_op = gate.operands
+            if reg_op.type != ARM64_OP_REG or reg_op.reg not in flag_regs:
+                continue
+            if bit_op.type != ARM64_OP_IMM or bit_op.imm != 3:
+                continue
+            if target_op.type != ARM64_OP_IMM:
+                continue
 
             d1 = self._disas_at(off + 4)
-            if not d1 or d1[0].mnemonic != "ldrb":
+            d2 = self._disas_at(off + 8)
+            if not d1 or not d2:
                 continue
-            op1 = d1[0].op_str
-            if "#0x2c" not in op1 or not op1.startswith("w"):
+            mov_insn = d1[0]
+            branch_insn = d2[0]
+
+            if mov_insn.mnemonic != "mov" or len(mov_insn.operands) != 2:
                 continue
+            mov_dst, mov_src = mov_insn.operands
+            if mov_dst.type != ARM64_OP_REG or mov_src.type != ARM64_OP_IMM:
+                continue
+            if mov_src.imm != 0:
+                continue
+            if not mov_insn.reg_name(mov_dst.reg).startswith("w"):
+                continue
 
-            reg = op1.split(",", 1)[0].strip()
-            matched = False
-            for delta in (8, 12, 16):
-                d2 = self._disas_at(off + delta)
-                if not d2:
-                    continue
-                i2 = d2[0]
-                if i2.mnemonic not in ("tbz", "tbnz"):
-                    continue
-                if not i2.op_str.startswith(f"{reg},"):
-                    continue
-                matched = True
-                break
-            if matched:
-                hits.append(off)
+            if branch_insn.mnemonic != "b" or len(branch_insn.operands) != 1:
+                continue
+            if branch_insn.operands[0].type != ARM64_OP_IMM:
+                continue
+
+            hits.append(off)
 
         if len(hits) == 1:
             return hits[0]
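The matcher above uses capstone to recognize the `tbz Wflags,#3, ...` gate, but the core bit-level test can be shown without a disassembler. The sketch below is a standalone illustration of the A64 W-form TBZ field layout (opcode bits [31:24] = 0b00110110, test bit in [23:19], signed 14-bit word offset in [18:5], Rt in [4:0]) and of the "NOP the TBZ" idea; it is not code from this repo, and the helper names are mine.

```python
NOP = bytes.fromhex("1f2003d5")  # AArch64 NOP (0xD503201F), little-endian

def decode_w_tbz(word):
    """Decode a 32-bit A64 TBZ that tests a W register.

    Returns (rt, bit, byte_offset) or None when the word is not a
    W-form TBZ.  The 14-bit immediate is a signed word offset, so it
    is sign-extended before scaling to bytes.
    """
    if word >> 24 != 0b00110110:          # b5 == 0 and TBZ opcode bits
        return None
    bit = (word >> 19) & 0x1F
    imm14 = (word >> 5) & 0x3FFF
    if imm14 & 0x2000:                    # sign-extend the 14-bit field
        imm14 -= 1 << 14
    return word & 0x1F, bit, imm14 << 2

def is_cs_bypass_gate(word):
    """Mirror the matcher's core test: TBZ on bit #3, branching forward."""
    dec = decode_w_tbz(word)
    return dec is not None and dec[1] == 3 and dec[2] > 0
```

For example, `tbz w8, #3, #0x10` encodes as `0x36180088` and decodes to register `w8`, bit `3`, offset `0x10`; overwriting those 4 bytes with `NOP` makes execution fall through into the `mov w?, #0 ; b ...` fast-path unconditionally, which is the effect the patch is after.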