diff --git a/AGENTS.md b/AGENTS.md index 6d681cc..5b0ebe2 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -149,6 +149,15 @@ research/ # Detailed firmware/patch documentation ### Python Scripts +### Kernel patcher guardrails + +- For kernel patchers, never hardcode file offsets, virtual addresses, or preassembled instruction bytes inside patch logic. +- All instruction matching must be derived from Capstone decode results (mnemonic / operands / control-flow), not exact operand-string text when a semantic operand check is possible. +- All replacement instruction bytes must come from Keystone-backed helpers already used by the project (for example `asm(...)`, `NOP`, `MOV_W0_0`, etc.). +- Prefer source-backed semantic anchors: in-image symbol lookup, string xrefs, local call-flow, and XNU correlation. Do not depend on repo-exported per-kernel symbol dumps at runtime. +- When retargeting a patch, write the reveal procedure and validation steps into `TODO.md` before handing off for testing. +- For `patch_bsd_init_auth` specifically, the allowed reveal flow is: recover `bsd_init` -> locate rootvp panic block -> find the unique in-function `call` -> `cbnz w0/x0, panic` -> `bl imageboot_needed` site -> patch the branch gate only. + - Patchers use `capstone` (disassembly), `keystone-engine` (assembly), `pyimg4` (IM4P handling). - Dynamic pattern finding (string anchors, ADRP+ADD xrefs, BL frequency) — no hardcoded offsets. - Each patch logged with offset and before/after state. 
diff --git a/research/00_patch_comparison_all_variants.md b/research/00_patch_comparison_all_variants.md index 151a810..4177a1b 100644 --- a/research/00_patch_comparison_all_variants.md +++ b/research/00_patch_comparison_all_variants.md @@ -74,35 +74,37 @@ ### JB-Only Kernel Methods (Reference List) -Current default schedule note (2026-03-06): `patch_bsd_init_auth`, `patch_io_secure_bsd_root`, `patch_vm_fault_enter_prepare`, and `patch_cred_label_update_execve` are temporarily excluded from `_PATCH_METHODS` pending rework. +Current default schedule note (2026-03-06): `patch_cred_label_update_execve` remains temporarily excluded from `_PATCH_METHODS` pending staged re-validation. `patch_syscallmask_apply_to_proc` has been rebuilt around the real syscallmask apply wrapper and is re-enabled after focused PCC 26.1 dry-run validation plus user-side boot confirmation; refreshed XNU/IDA review also confirms historical C22 was the all-ones-mask variant, not a `NULL`-mask install. `patch_hook_cred_label_update_execve` has also been rebuilt as a faithful upstream C23 wrapper trampoline: it retargets sandbox `mac_policy_ops[18]` to a cave that copies `VSUID`/`VSGID` owner state into the pending credential, sets `P_SUGID`, and branches back to the original wrapper. `patch_iouc_failed_macf` has been rebuilt as a narrow branch-level gate patch: the old repo-only entry early-return on `0xFFFFFE000825B0C0` was discarded, and A5-v2 now patches the post-`mac_iokit_check_open` `CBZ W0, allow` gate at `0xFFFFFE000825BA98` to unconditional allow while preserving the surrounding IOUserClient setup flow. `patch_vm_fault_enter_prepare` was retargeted to the upstream PCC 26.1 research `cs_bypass` gate and re-enabled for dry-run validation. `patch_bsd_init_auth` has been retargeted to the real `_bsd_init` rootauth failure branch and re-enabled for staged validation. 
Fresh IDA re-analysis shows JB-14 previously used a false-positive matcher; it now targets the real `_bsd_init` rootauth failure branch using in-function Capstone-decoded control-flow semantics and is semantically redundant with base patch #3 when JB is layered on top of `fw_patch`. For JB-16, the historical hit at `0xFFFFFE000836E1F0` is now treated as semantically wrong: it patches the `"SecureRoot"` name-check gate inside `AppleARMPE::callPlatformFunction`, not the `"SecureRootName"` deny return consumed by `IOSecureBSDRoot()`. The implementation was retargeted on 2026-03-06 to `0xFFFFFE000836E464` (`CSEL W22, WZR, W9, NE -> MOV W22, #0`) and re-enabled in `KernelJBPatcher._GROUP_B_METHODS` pending restore/boot validation. -| # | Group | Method | Function | Purpose | JB Enabled | -| ----- | ----- | ------------------------------------- | ------------------------------------------ | ------------------------------------------------------- | :--------: | -| JB-01 | A | `patch_amfi_cdhash_in_trustcache` | `AMFIIsCDHashInTrustCache` | Always return true + store hash | Y | -| JB-02 | A | `patch_amfi_execve_kill_path` | AMFI execve kill return site | Convert shared kill return from deny to allow | Y | -| JB-03 | C | `patch_cred_label_update_execve` | `_cred_label_update_execve` | Early-return low-riskized cs_flags path | Y | -| JB-04 | C | `patch_hook_cred_label_update_execve` | `_hook_cred_label_update_execve` | Low-riskized early-return hook gate | Y | -| JB-05 | C | `patch_kcall10` | `sysent[439]` (`SYS_kas_info` replacement) | Kernel arbitrary call from userspace | Y | -| JB-06 | B | `patch_post_validation_additional` | `_postValidation` (additional) | Disable SHA256-only hash-type reject | Y | -| JB-07 | C | `patch_syscallmask_apply_to_proc` | `_syscallmask_apply_to_proc` | Low-riskized early return for syscall mask gate | Y | -| JB-08 | A | `patch_task_conversion_eval_internal` | `_task_conversion_eval_internal` | Allow task conversion | Y | -| JB-09 | A | 
`patch_sandbox_hooks_extended` | Sandbox MACF ops (extended) | Stub remaining 30+ sandbox hooks (incl. IOKit 201..210) | Y | -| JB-10 | A | `patch_iouc_failed_macf` | IOUC MACF shared gate | Bypass shared IOUserClient MACF deny path | Y | -| JB-11 | B | `patch_proc_security_policy` | `_proc_security_policy` | Bypass security policy | Y | -| JB-12 | B | `patch_proc_pidinfo` | `_proc_pidinfo` | Allow pid 0 info | Y | -| JB-13 | B | `patch_convert_port_to_map` | `_convert_port_to_map_with_flavor` | Skip kernel map panic | Y | -| JB-14 | B | `patch_bsd_init_auth` | `_bsd_init` (2nd auth gate) | Skip auth at @%s:%d | Y | -| JB-15 | B | `patch_dounmount` | `_dounmount` | Allow unmount (strict in-function match) | Y | -| JB-16 | B | `patch_io_secure_bsd_root` | `_IOSecureBSDRoot` | Skip secure root check (guard-site filter) | Y | -| JB-17 | B | `patch_load_dylinker` | `_load_dylinker` | Skip strict `LC_LOAD_DYLINKER == "/usr/lib/dyld"` gate | Y | -| JB-18 | B | `patch_mac_mount` | `___mac_mount` | Bypass MAC mount deny path (strict site) | Y | -| JB-19 | B | `patch_nvram_verify_permission` | `_verifyPermission` (NVRAM) | Allow NVRAM writes | Y | -| JB-20 | B | `patch_shared_region_map` | `_shared_region_map_and_slide_setup` | Force shared region path | Y | -| JB-21 | B | `patch_spawn_validate_persona` | `_spawn_validate_persona` | Skip persona validation | Y | -| JB-22 | B | `patch_task_for_pid` | `_task_for_pid` | Allow task_for_pid | Y | -| JB-23 | B | `patch_thid_should_crash` | `_thid_should_crash` | Prevent GUARD_TYPE_MACH_PORT crash | Y | -| JB-24 | B | `patch_vm_fault_enter_prepare` | `_vm_fault_enter_prepare` | Skip fault check | Y | -| JB-25 | B | `patch_vm_map_protect` | `_vm_map_protect` | Allow VM protect | Y | +| # | Group | Method | Function | Purpose | JB Enabled | +| ----- | ----- | ------------------------------------- | ---------------------------------------------------------------------------------------------------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--------: | +| JB-01 | A | `patch_amfi_cdhash_in_trustcache` | `AMFIIsCDHashInTrustCache` | Always return true + store hash | Y | +| JB-02 | A | `patch_amfi_execve_kill_path` | AMFI execve kill return site | Convert shared kill return from deny to allow | Y | +| JB-03 | C | `patch_cred_label_update_execve` | `_cred_label_update_execve` | Reworked C21-v3: C21-v1 already boots; v3 keeps split late exits and additionally ORs success-only helper bits `0xC` after clearing `0x3F00`; still disabled pending boot validation | N | +| JB-04 | C | `patch_hook_cred_label_update_execve` | sandbox `mpo_cred_label_update_execve` wrapper (`ops[18]` -> `sub_FFFFFE00093BDB64`) | Faithful upstream C23 trampoline: copy `VSUID`/`VSGID` owner state into pending cred, set `P_SUGID`, then branch back to wrapper | Y | +| JB-05 | C | `patch_kcall10` | `sysent[439]` (`SYS_kas_info` replacement) | Rebuilt ABI-correct kcall cave: `target + 7 args -> uint64 x0`; re-enabled after focused dry-run validation | Y | +| JB-06 | B | `patch_post_validation_additional` | `_postValidation` (additional) | Disable SHA256-only hash-type reject | Y | +| JB-07 | C | `patch_syscallmask_apply_to_proc` | syscallmask apply wrapper (`_proc_apply_syscall_masks` path) | Faithful upstream C22: mutate installed Unix/Mach/KOBJ masks to all-ones via structural cave, then continue into setter; distinct from `NULL`-mask alternative | Y | +| JB-08 | A | `patch_task_conversion_eval_internal` | `_task_conversion_eval_internal` | Allow task conversion | Y | +| JB-09 | A | `patch_sandbox_hooks_extended` | Sandbox MACF ops (extended) | Stub remaining 30+ sandbox hooks (incl. 
IOKit 201..210) | Y | +| JB-10 | A | `patch_iouc_failed_macf` | IOUC MACF shared gate | A5-v2: patch only the post-`mac_iokit_check_open` deny gate (`CBZ W0, allow` -> `B allow`) and keep the rest of the IOUserClient open path intact | Y | +| JB-11 | B | `patch_proc_security_policy` | `_proc_security_policy` | Bypass security policy | Y | +| JB-12 | B | `patch_proc_pidinfo` | `_proc_pidinfo` | Allow pid 0 info | Y | +| JB-13 | B | `patch_convert_port_to_map` | `_convert_port_to_map_with_flavor` | Skip kernel map panic | Y | +| JB-14 | B | `patch_bsd_init_auth` | `_bsd_init` rootauth-failure branch | Ignore `FSIOC_KERNEL_ROOTAUTH` failure in `bsd_init`; same gate as base patch #3 when layered | Y | +| JB-15 | B | `patch_dounmount` | `_dounmount` | Allow unmount (strict in-function match) | Y | +| JB-16 | B | `patch_io_secure_bsd_root` | `AppleARMPE::callPlatformFunction` (`"SecureRootName"` return select), called from `IOSecureBSDRoot` | Force `"SecureRootName"` policy return to success without altering callback flow; implementation retargeted 2026-03-06 | Y | +| JB-17 | B | `patch_load_dylinker` | `_load_dylinker` | Skip strict `LC_LOAD_DYLINKER == "/usr/lib/dyld"` gate | Y | +| JB-18 | B | `patch_mac_mount` | `___mac_mount` | Bypass MAC mount deny path (strict site) | Y | +| JB-19 | B | `patch_nvram_verify_permission` | `_verifyPermission` (NVRAM) | Allow NVRAM writes | Y | +| JB-20 | B | `patch_shared_region_map` | `_shared_region_map_and_slide_setup` | Force shared region path | Y | +| JB-21 | B | `patch_spawn_validate_persona` | `_spawn_validate_persona` | Skip persona validation | Y | +| JB-22 | B | `patch_task_for_pid` | `_task_for_pid` | Allow task_for_pid | Y | +| JB-23 | B | `patch_thid_should_crash` | `_thid_should_crash` | Prevent GUARD_TYPE_MACH_PORT crash | Y | +| JB-24 | B | `patch_vm_fault_enter_prepare` | `_vm_fault_enter_prepare` | Force `cs_bypass` fast path in runtime fault validation | Y | +| JB-25 | B | `patch_vm_map_protect` | 
`_vm_map_protect` | Allow VM protect | Y | + +JB-24 note (2026-03-06): the old derived matcher hit the `VM_PAGE_CONSUME_CLUSTERED()` lock/unlock sequence inside `vm_fault_enter_prepare`, i.e. `pmap_lock_phys_page()` / `pmap_unlock_phys_page()`. The implementation is now retargeted to the upstream PCC 26.1 research `cs_bypass` gate at `0x00BA9E1C` / `0xFFFFFE0007BADE1C`. ## CFW Installation Patches @@ -193,8 +195,13 @@ Current default schedule note (2026-03-06): `patch_bsd_init_auth`, `patch_io_sec - `setup_logs/jb_patch_tests_20260306_115027` (2026-03-06): rerun after `status` fix, pending-only mode (`Total methods: 19`). - Final run result from `jb_patch_tests_20260306_115027` at `2026-03-06 13:17`: - Finished: 19/19 (`PASS=15`, `FAIL=4`, all fails `rc=2`). - - Failing methods: `patch_bsd_init_auth`, `patch_io_secure_bsd_root`, `patch_vm_fault_enter_prepare`, `patch_cred_label_update_execve`. + - Failing methods at that time: `patch_bsd_init_auth`, `patch_io_secure_bsd_root`, `patch_vm_fault_enter_prepare`, `patch_cred_label_update_execve`. + - 2026-03-06 follow-up: `patch_io_secure_bsd_root` failure is now attributed to a wrong-site patch in `AppleARMPE::callPlatformFunction` (`"SecureRoot"` gate at `0xFFFFFE000836E1F0`), not the intended `"SecureRootName"` deny-return path. The code was retargeted the same day to `0xFFFFFE000836E464` and re-enabled for the next restore/boot check. + - 2026-03-06 follow-up: `patch_bsd_init_auth` was retargeted after confirming the old matcher was hitting unrelated code; keep disabled in default schedule until a fresh clean-baseline boot test passes. - Final case: `[19/19] patch_syscallmask_apply_to_proc` (`PASS`). + - 2026-03-06 re-analysis: that historical `PASS` is now treated as a false positive for functionality, because the recorded bytes landed at `0xfffffe00093ae6e4`/`0xfffffe00093ae6e8` inside `_profile_syscallmask_destroy` underflow handling, not in `_proc_apply_syscall_masks`. 
+ - 2026-03-06 code update: `scripts/patchers/kernel_jb_patch_syscallmask.py` was rebuilt to target the real syscallmask apply wrapper structurally and now dry-runs on `PCC-CloudOS-26.1-23B85 kernelcache.research.vphone600` with 3 writes: `0x02395530`, `0x023955E8`, and cave `0x00AB1720`. User-side boot validation succeeded the same day. +- 2026-03-06 follow-up: `patch_kcall10` was rebuilt from the old ABI-unsafe pseudo-10-arg design into an ABI-correct `sysent[439]` cave. Focused dry-run on `PCC-CloudOS-26.1-23B85 kernelcache.research.vphone600` now emits 4 writes: cave `0x00AB1720`, `sy_call` `0x0073E180`, `sy_arg_munge32` `0x0073E188`, and metadata `0x0073E190`; the method was re-enabled in `_GROUP_C_METHODS`. - Observed failure symptom in current failing set: first boot panic before command injection (or boot process early exit). - Post-run schedule change (per user request): - commented out failing methods from default `KernelJBPatcher._PATCH_METHODS` schedule in `scripts/patchers/kernel_jb.py`: @@ -202,6 +209,10 @@ Current default schedule note (2026-03-06): `patch_bsd_init_auth`, `patch_io_sec - `patch_io_secure_bsd_root` - `patch_vm_fault_enter_prepare` - `patch_cred_label_update_execve` +- 2026-03-06 re-research note for `patch_cred_label_update_execve`: + - old entry-time early-return strategy was identified as boot-unsafe because it skipped AMFI exec-time `csflags` and entitlement propagation entirely. + - implementation was reworked to a success-tail trampoline that preserves normal AMFI processing and only clears restrictive `csflags` bits on the success path. + - default JB schedule still keeps the method disabled until the reworked strategy is boot-validated. 
- Manual DEV+single (`setup_machine` + `PATCH=`) working set now includes: - `patch_amfi_cdhash_in_trustcache` - `patch_amfi_execve_kill_path` diff --git a/research/kernel_jb_patch_notes.md b/research/kernel_jb_patch_notes.md index 3c0f255..346791a 100644 --- a/research/kernel_jb_patch_notes.md +++ b/research/kernel_jb_patch_notes.md @@ -358,9 +358,15 @@ Should have moderate caller count (hundreds). ### patch_syscallmask_apply_to_proc — FIXED -**Problem**: `bl_callers` key bug: code used `target + self.base_va` but bl_callers is keyed by file offset. -**Fix**: Changed to `self.bl_callers.get(target, [])` at line ~1661. -**Status**: Now PASSING (40 patches emitted for shellcode + redirect). +**Historical problem**: the earlier repo-side “fix” still matched the wrong place. Runtime verification later showed the old hit landed in `_profile_syscallmask_destroy` underflow handling, not the real syscallmask apply wrapper. +**Current understanding**: faithful upstream C22 is a low-wrapper shellcode patch that mutates the effective Unix/Mach/KOBJ mask bytes to all `0xFF`, then continues into the normal setter. It is not a `NULL`-mask install and not an early-return patch. +**Current status**: rebuilt structurally as a 3-write retarget (`save selector`, `branch to cave`, `all-ones cave + setter tail`) and separately documented in `research/kernel_patch_jb/patch_syscallmask_apply_to_proc.md`; user reported boot success with the rebuilt C22 on `2026-03-06`. + +### patch_iouc_failed_macf — RETARGETED + +**Historical repo behavior**: patched `0xFFFFFE000825B0C0` at entry with `mov x0, xzr ; retab` after `PACIBSP`. +**Problem**: fresh IDA review shows this is a large IOUserClient open/setup path, not a tiny standalone deny helper; entry early-return skips broader work including output-state preparation. +**Current status**: rebuilt as A5-v2. 
It now patches only the narrow post-`mac_iokit_check_open` gate in the same function: `0xFFFFFE000825BA98` (`CBZ W0, allow`) becomes unconditional `B allow`. Focused dry-run emits exactly one write at file offset `0x01257A98`. ### patch_nvram_verify_permission — FIXED diff --git a/research/kernel_patch_jb/patch_bsd_init_auth.md b/research/kernel_patch_jb/patch_bsd_init_auth.md index b42e1d6..acf9da2 100644 --- a/research/kernel_patch_jb/patch_bsd_init_auth.md +++ b/research/kernel_patch_jb/patch_bsd_init_auth.md @@ -1,148 +1,243 @@ # B13 `patch_bsd_init_auth` -## Patch Goal +## Scope -Bypass the root volume authentication gate during early BSD init by forcing the auth helper return path to success. +- Kernel analyzed: `kernelcache.research.vphone600` +- Symbol handling: prefer in-image LC_SYMTAB if present; otherwise recover `bsd_init` from in-kernel string xrefs and local control-flow. +- XNU reference: `research/reference/xnu/bsd/kern/bsd_init.c` +- Analysis basis: IDA-MCP + local XNU source correlation -## Binary Targets (IDA + Recovered Symbols) +## Bottom Line -- Recovered symbol: `bsd_init` at `0xfffffe0007f7add4`. -- Anchor string: `"rootvp not authenticated after mounting @%s:%d"` at `0xfffffe000707d6bb`. -- Anchor xref: `0xfffffe0007f7bc04` inside `sub_FFFFFE0007F7ADD4` (same function as `bsd_init`). +- Earlier B13 notes are **not trustworthy** as a patch-site guide. +- The currently documented runtime hit at `0xFFFFFE0007FB09DC` is **not inside `bsd_init`**. +- The real `bsd_init` root-auth gate is in `bsd_init` at `0xFFFFFE0007F7B988` / `0xFFFFFE0007F7B98C`. +- If B13 is re-enabled, the patch should target the **`FSIOC_KERNEL_ROOTAUTH` return check in `bsd_init`**, not the `ldr x0,[xN,#0x2b8]; cbz x0; bl` pattern currently used by the patcher. 
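The bottom-line addresses can be cross-checked arithmetically without any tooling. A pure-Python sketch (the gate bytes `40 13 00 35` are the ones quoted in the patch-candidate section of this document) decodes the CBNZ imm19 field and confirms that the gate at `0xFFFFFE0007F7B98C` really branches to the rootvp panic/report block:

```python
# Sanity check: decode the imm19 field of an AArch64 CBNZ/CBZ encoding and
# recover its branch target. Bytes and addresses come from this document.
def cbnz_target(word: int, va: int) -> int:
    """Branch target of a CBNZ/CBZ instruction word located at `va`."""
    imm19 = (word >> 5) & 0x7FFFF
    if imm19 & (1 << 18):          # sign-extend the 19-bit offset
        imm19 -= 1 << 19
    return va + (imm19 << 2)

word = int.from_bytes(bytes.fromhex("40130035"), "little")  # CBNZ W0, ...
assert word & 0x1F == 0                                     # Rt field = W0
assert cbnz_target(word, 0xFFFFFE0007F7B98C) == 0xFFFFFE0007F7BBF4
```

The computed target `0xFFFFFE0007F7BBF4` is the failure block that loads the `"rootvp not authenticated after mounting @%s:%d"` string, consistent with the control-flow claims above.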
-## Call-Stack Analysis +## What This Patch Is Actually For -- Static callers of `bsd_init` (`0xfffffe0007f7add4`): - - `sub_FFFFFE0007F7ACE0` - - `sub_FFFFFE0007B43EE0` -- The patch point is in the rootvp/authentication decision path inside `bsd_init`, before the panic/report path using the rootvp-not-authenticated string. +Fact: -## Patch-Site / Byte-Level Change +- In XNU, `bsd_init()` mounts root, calls `IOSecureBSDRoot(rootdevice)`, resolves `rootvnode`, and then enforces root-volume authentication. +- The relevant source block in `research/reference/xnu/bsd/kern/bsd_init.c` is: + - `if (!bsd_rooted_ramdisk()) {` + - `autherr = VNOP_IOCTL(rootvnode, FSIOC_KERNEL_ROOTAUTH, NULL, 0, vfs_context_kernel());` + - `if (autherr) panic("rootvp not authenticated after mounting");` -- Patcher intent: - - Find `ldr x0, [xN, #0x2b8] ; cbz x0, ... ; bl auth_fn`. - - Replace `bl auth_fn` with `mov x0, #0`. -- Expected replacement bytes: - - after: `00 00 80 D2` (`mov x0, #0`) -- Current IDA image appears already post-variant / non-matching for the exact pre-patch triplet at the old location, so the exact original 4-byte BL at this build-state is not asserted here. +Inference: -## Pseudocode (Before) +- The jailbreak purpose of B13 is **not** “generic auth bypass”. +- Its real purpose is very narrow: **allow boot to continue even when the mounted root volume fails `FSIOC_KERNEL_ROOTAUTH`**. +- In practice this means permitting a modified / non-sealed / otherwise non-stock root volume to survive the early BSD boot gate. + +## Real Control Flow in `bsd_init` + +### Confirmed symbols and anchors + +- `bsd_init` = `0xFFFFFE0007F7ADD4` +- Panic string = `"rootvp not authenticated after mounting @%s:%d"` at `0xFFFFFE000707D6BB` +- String xref inside `bsd_init` = `0xFFFFFE0007F7BC04` +- Static caller of `bsd_init` = `kernel_bootstrap_thread` at `0xFFFFFE0007B44428` + +### Confirmed boot path + +Fact, from IDA + XNU correlation: + +1. `bsd_init` mounts root via `vfs_mountroot`. +2. 
`bsd_init` calls `IOSecureBSDRoot(rootdevice)` at `0xFFFFFE0007F7B7C4`. +3. `bsd_init` resolves the mounted root vnode and stores it as `rootvnode`. +4. `bsd_init` calls `bsd_rooted_ramdisk` at `0xFFFFFE0007F7B934`. +5. If not rooted ramdisk, `bsd_init` constructs a `VNOP_IOCTL` call for `FSIOC_KERNEL_ROOTAUTH`. +6. The indirect filesystem op is invoked at `0xFFFFFE0007F7B988`. +7. The return value is checked at `0xFFFFFE0007F7B98C`. +8. Failure branches to the panic/report block at `0xFFFFFE0007F7BBF4`. + +### Exact IDA site + +Relevant instructions in `bsd_init`: + +```asm +0xFFFFFE0007F7B934 BL bsd_rooted_ramdisk +0xFFFFFE0007F7B938 TBNZ W0, #0, 0xFFFFFE0007F7B990 + +0xFFFFFE0007F7B94C MOV W10, #0x80046833 +... +0xFFFFFE0007F7B980 ADD X0, SP, #var_130 +0xFFFFFE0007F7B984 MOV X17, #0x307A +0xFFFFFE0007F7B988 BLRAA X8, X17 +0xFFFFFE0007F7B98C CBNZ W0, 0xFFFFFE0007F7BBF4 +``` + +And the failure block: + +```asm +0xFFFFFE0007F7BBF4 ADRL X8, "bsd_init.c" +0xFFFFFE0007F7BBFC MOV W9, #0x3D3 +0xFFFFFE0007F7BC04 ADRL X0, "rootvp not authenticated after mounting @%s:%d" +0xFFFFFE0007F7BC0C BL sub_FFFFFE0008302368 +``` + +## Why This Is The Real Site + +### Source-to-binary correlation + +Fact: + +- `FSIOC_KERNEL_ROOTAUTH` is defined in `research/reference/xnu/bsd/sys/fsctl.h`. +- The binary literal loaded in `bsd_init` is `0x80046833`, which matches `FSIOC_KERNEL_ROOTAUTH`. +- The call setup happens immediately after `bsd_rooted_ramdisk()` and immediately before the rootvp panic string block. 
+ +Inference: + +- This is the exact lowered form of: ```c -int rc = auth_rootvp(rootvp); -if (rc != 0) { - panic("rootvp not authenticated ..."); +autherr = VNOP_IOCTL(rootvnode, FSIOC_KERNEL_ROOTAUTH, NULL, 0, vfs_context_kernel()); +if (autherr) { + panic("rootvp not authenticated after mounting"); } ``` -## Pseudocode (After) +### Call-stack view -```c -int rc = 0; // forced success -if (rc != 0) { - panic("rootvp not authenticated ..."); -} +Useful boot-path stack, expressed semantically rather than as a fake direct symbol chain: + +- `kernel_bootstrap_thread` +- `bsd_init` +- `vfs_mountroot` +- `IOSecureBSDRoot` +- `VFS_ROOT` / `set_rootvnode` +- `bsd_rooted_ramdisk` +- `VNOP_IOCTL(rootvnode, FSIOC_KERNEL_ROOTAUTH, NULL, 0, vfs_context_kernel())` +- failure path -> panic/report block using `"rootvp not authenticated after mounting @%s:%d"` + +## Why The Existing B13 Matcher Is Wrong + +### Old documented runtime hit is unrelated + +Fact: + +- Existing runtime-verification artifacts recorded B13 at `0xFFFFFE0007FB09DC`. +- IDA resolves that site to `exec_handle_sugid`, not `bsd_init`. +- The surrounding code is: + +```asm +0xFFFFFE0007FB09D4 LDR X0, [X20,#0x2B8] +0xFFFFFE0007FB09D8 CBZ X0, 0xFFFFFE0007FB09E4 +0xFFFFFE0007FB09DC BL sub_FFFFFE0007B84C5C ``` -## Symbol Consistency +- That is exactly the shape the current patcher searches for. -- `bsd_init` symbol and anchor context are consistent. -- Exact auth-call instruction bytes require pre-patch image state for strict byte-for-byte confirmation. +### Why the heuristic false-positive happened -## Patch Metadata +Fact: -- Patch document: `patch_bsd_init_auth.md` (B13). -- Primary patcher module: `scripts/patchers/kernel_jb_patch_bsd_init_auth.py`. -- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution. 
+- `scripts/patchers/kernel_jb_patch_bsd_init_auth.py` looks for: + - `ldr x0, [xN, #0x2b8]` + - `cbz x0, ...` + - `bl ...` +- It then ranks candidates by: + - neighborhood near a `bsd_init` string anchor, + - presence of `"/dev/null"` in the function, + - low caller count. -## Target Function(s) and Binary Location +Fact: -- Primary target: recovered symbol `bsd_init` at `0xfffffe0007f7add4`. -- Auth-check patchpoint is in the rootvp-authentication decision sequence documented in this file. +- `exec_handle_sugid` also references `"/dev/null"` in the same function. +- Therefore the heuristic can promote `exec_handle_sugid` even though it is semantically unrelated to root-volume auth. -## Kernel Source File Location +Conclusion: -- Expected XNU source: `bsd/kern/bsd_init.c`. -- Confidence: `high`. +- The current B13 implementation is not “slightly off”; it is targeting the wrong logical site class. +- This explains why enabling B13 can break boot: it mutates an exec/credential path instead of the early root-auth gate. -## Function Call Stack +## Correct Patch Candidate(s) -- Primary traced chain (from `Call-Stack Analysis`): -- Static callers of `bsd_init` (`0xfffffe0007f7add4`): -- `sub_FFFFFE0007F7ACE0` -- `sub_FFFFFE0007B43EE0` -- The patch point is in the rootvp/authentication decision path inside `bsd_init`, before the panic/report path using the rootvp-not-authenticated string. -- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file. +### Preferred candidate: patch the return check, not the call target -## Patch Hit Points +Patch site: -- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`): -- Find `ldr x0, [xN, #0x2b8] ; cbz x0, ... ; bl auth_fn`. -- Expected replacement bytes: -- after: `00 00 80 D2` (`mov x0, #0`) -- The before/after instruction transform is constrained to this validated site. 
+- `0xFFFFFE0007F7B98C` in `bsd_init` +- instruction: `CBNZ W0, 0xFFFFFE0007F7BBF4` -## Current Patch Search Logic +Recommended transform: -- Implemented in `scripts/patchers/kernel_jb_patch_bsd_init_auth.py`. -- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected. -- The patch is applied only after a unique candidate is confirmed in-function. -- Anchor string: `"rootvp not authenticated after mounting @%s:%d"` at `0xfffffe000707d6bb`. -- Anchor xref: `0xfffffe0007f7bc04` inside `sub_FFFFFE0007F7ADD4` (same function as `bsd_init`). +- before: `40 13 00 35` +- after: `1F 20 03 D5` (`NOP`) -## Validation (Static Evidence) +Effect: -- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site. -- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`. -- Address-level evidence in this document is consistent with patcher matcher intent. +- `VNOP_IOCTL(... FSIOC_KERNEL_ROOTAUTH ...)` still executes. +- Only the early boot failure gate is removed. +- This is the narrowest behavioral change that matches the XNU source intent. -## Expected Failure/Panic if Unpatched +### Secondary candidate: force the ioctl result to success -- Root volume auth check can trigger `"rootvp not authenticated ..."` panic/report path during early BSD init. +Patch site: -## Risk / Side Effects +- `0xFFFFFE0007F7B988` in `bsd_init` +- instruction: `BLRAA X8, X17` -- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions. -- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows. +Possible transform: -## Symbol Consistency Check +- before: `11 09 3F D7` +- after: `00 00 80 52` (`MOV W0, #0`) -- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`. -- Canonical symbol hit(s): `bsd_init`. 
-- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases. -- IDA-MCP lookup snapshot (2026-03-05): `bsd_init` -> `bsd_init` at `0xfffffe0007f7add4` (size `0xe3c`). +Effect: -## Open Questions and Confidence +- Skips the actual filesystem ioctl implementation entirely. +- More invasive than patching the subsequent `CBNZ`. -- Open question: verify future firmware drift does not move this site into an equivalent but semantically different branch. -- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence). +Assessment: -## Evidence Appendix +- If we need a first retest candidate, `NOP`-ing `CBNZ W0` is safer than replacing the call. +- It preserves any filesystem side effects that happen during the auth ioctl and only suppresses the panic gate. -- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above. -- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file. +## What The Patch Does After It Is Correctly Retargeted -## Runtime + IDA Verification (2026-03-05) +- Allows the system to continue booting even if the mounted root volume is not accepted by `FSIOC_KERNEL_ROOTAUTH`. +- Helps jailbreak-style boot flows where the root volume is intentionally modified and would otherwise fail the sealed/authenticated-root policy. +- Does **not** by itself disable MACF, AMFI, persona checks, syscall masks, or other post-boot kernel policy gates. +- In other words: B13 is a **boot-enablement patch**, not a whole-jailbreak patch. 
-- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00` -- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` -- Base VA: `0xFFFFFE0007004000` -- Runtime status: `hit` (1 patch writes, method_return=True) -- Included in `KernelJBPatcher.find_all()`: `False` -- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes. -- IDA mapping status: `ok` (IDA runtime mapping loaded.) -- Call-chain mapping status: `ok` (IDA call-chain report loaded.) -- Call-chain validation: `1` function nodes, `1` patch-point VAs. -- IDA function sample: `exec_handle_sugid` -- Chain function sample: `exec_handle_sugid` -- Caller sample: `exec_mach_imgact` -- Callee sample: `exec_handle_sugid`, `sub_FFFFFE0007B0EA64`, `sub_FFFFFE0007B0F4F8`, `sub_FFFFFE0007B1663C`, `sub_FFFFFE0007B1B508`, `sub_FFFFFE0007B1C348` -- Verdict: `questionable` -- Recommendation: Hit is valid but patch is inactive in find_all(); enable only after staged validation. -- Key verified points: -- `0xFFFFFE0007FB09DC` (`exec_handle_sugid`): mov x0,#0 [_bsd_init auth] | `a050ef97 -> 000080d2` -- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md` - +## Risk Notes + +- This patch intentionally weakens authenticated-root enforcement during early boot. +- The most likely safe form is to skip only the panic branch. +- If downstream code later depends on rootauth state beyond this early gate, more work may still be required elsewhere; this document does **not** claim B13 alone is sufficient for a full JB boot. 
+ +## Recommended Retargeting Rule (Design Only, No Code Change Landed) + +If B13 is reimplemented, the matcher should anchor on facts unique to this site: + +1. Resolve `_bsd_init` / `bsd_init` first. +2. Stay inside that function only. +3. Find the post-`bsd_rooted_ramdisk` false path. +4. Require the literal `0x80046833` (`FSIOC_KERNEL_ROOTAUTH`) in the setup block. +5. Require the next call to be the indirect vnode-op call. +6. Patch the following `CBNZ W0, panic_block`. +7. Optionally verify the failure target reaches the rootvp-auth string at `0xFFFFFE0007F7BC04`. + +This rule is materially stronger than the old `ldr x0,[...,#0x2b8]; cbz; bl` shape and should exclude `exec_handle_sugid` entirely. + +## Validation Status + +- Validation note: on the current reference IM4P kernel, in-image symbol resolution returns `0` symbols, so B13 is currently found by anchor recovery rather than external symbol data. +- In-memory validation against `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` succeeds after IM4P decompression. +- `KernelJBPatcher._build_method_plan()` now includes `patch_bsd_init_auth`. +- Live patch hit: `0xFFFFFE0007F7B98C` / file offset `0x00F7798C` / `CBNZ W0, panic` -> `NOP`. +- Historical false-positive hit `0xFFFFFE0007FB09DC` is no longer selected. + +## Implementation Status + +- Landed in `scripts/patchers/kernel_jb_patch_bsd_init_auth.py`. +- Default JB schedule re-enabled in `scripts/patchers/kernel_jb.py`. +- Implemented form: patch the in-function `CBNZ W0, panic` gate in `bsd_init`. +- Capstone semantic checks only: no raw-offset targeting and no operand-string/literal hardcoding in the final matcher. + +## Confidence + +- Confidence that `0xFFFFFE0007F7B988` / `0xFFFFFE0007F7B98C` is the real B13 site: **high**. +- Confidence that `0xFFFFFE0007FB09DC` is a false-positive site: **high**. 
+- Confidence that `NOP CBNZ` is a better first retest than `MOV W0,#0` on the call: **medium**, because APFS-side behavior is closed-source and may have side effects not visible from XNU alone. diff --git a/research/kernel_patch_jb/patch_cred_label_update_execve.md b/research/kernel_patch_jb/patch_cred_label_update_execve.md index cfb545a..8e82706 100644 --- a/research/kernel_patch_jb/patch_cred_label_update_execve.md +++ b/research/kernel_patch_jb/patch_cred_label_update_execve.md @@ -1,203 +1,241 @@ # C21 `patch_cred_label_update_execve` -## Scope (revalidated with static analysis) +## Scope -- Target patch method: `KernelJBPatchCredLabelMixin.patch_cred_label_update_execve` in `scripts/patchers/kernel_jb_patch_cred_label.py`. -- Target function in kernel: `jb_c21_patch_target_amfi_cred_label_update_execve` (`0xFFFFFE000863FC6C`). -- Patch-point label (inside function): `jb_c21_patchpoint_retab_redirect` (`0xFFFFFE000864011C`, original `RETAB` site). +- Kernel used for reverse engineering: `kernelcache.research.vphone600`. +- IDA symbol / address: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi` at `0xFFFFFE000864DEFC`. +- XNU semantic reference: `research/reference/xnu/security/mac_vfs.c`, `research/reference/xnu/bsd/kern/kern_exec.c`, `research/reference/xnu/bsd/kern/kern_credential.c`, `research/reference/xnu/osfmk/kern/cs_blobs.h`. -## Verified call/dispatch trace (no trust in old notes) +This note is a fresh re-analysis. Older notes for this patch were treated as untrusted and not reused as ground truth. -1. Exec pipeline enters `jb_c21_supp_exec_handle_image` (`0xFFFFFE0007FA4A58`). -2. It calls `jb_c21_supp_exec_policy_stage` (`0xFFFFFE0007FA6858`). -3. That stage schedules `jb_c21_supp_exec_policy_wrapper` (`0xFFFFFE0007F81F00`). -4. Wrapper calls `jb_c21_supp_mac_policy_dispatch_ops90_execve` (`0xFFFFFE00082D9D0C`). -5. 
Dispatcher loads callback from `policy->ops + 0x90` at `jb_c21_supp_dispatch_load_ops_off90` (`0xFFFFFE00082D9DBC`) and calls it at `jb_c21_supp_dispatch_call_ops_off90` (`0xFFFFFE00082D9FCC`, `BLRAA ... X17=#0xEC79`). +## Call Stack -This `+0x90` slot is the shared execve cred-label hook slot used by both AMFI and Sandbox hooks. +Exec-time path in XNU source: -## How AMFI wires this callback +1. `exec_handle_sugid()` asks `mac_cred_check_label_update_execve(...)` whether any MAC policy wants an exec-time credential transition. +2. If yes, `exec_handle_sugid()` calls `kauth_proc_label_update_execve(...)`. +3. `kauth_proc_label_update_execve(...)` allocates / updates the new credential and calls `mac_cred_label_update_execve(...)`. +4. `mac_cred_label_update_execve(...)` iterates `mac_policy_list` and invokes each policy's `mpo_cred_label_update_execve` hook. +5. AMFI's hook is `_cred_label_update_execve` in `com.apple.driver.AppleMobileFileIntegrity`. -- `jb_c21_supp_amfi_init_register_policy_ops` (`0xFFFFFE0008640718`) builds AMFI `mac_policy_ops` and writes `jb_c21_patch_target_amfi_cred_label_update_execve` into offset `+0x90` (store at `0xFFFFFE0008640AA0`). -- Then it registers the policy descriptor via `sub_FFFFFE00082CDDB0` (mac policy register path). +Relevant source anchors: -## What the unpatched function enforces +- `research/reference/xnu/bsd/kern/kern_exec.c:6854` +- `research/reference/xnu/bsd/kern/kern_exec.c:6950` +- `research/reference/xnu/bsd/kern/kern_credential.c:4367` +- `research/reference/xnu/security/mac_vfs.c:777` -Inside `jb_c21_patch_target_amfi_cred_label_update_execve`: +## What The Function Actually Does -- Multiple explicit kill paths return failure (`W0=1`) for unsigned/forbidden exec cases. -- A key branch logs and kills with: - - `"dyld signature cannot be verified... or ... 
unsigned application outside of a supported development configuration"` -- It conditionally mutates `*a10` (`cs_flags`) and later checks validity bits before honoring entitlements. -- If validity path is not satisfied, it logs `"not CS_VALID, not honoring entitlements"` and skips entitlement-driven flag propagation. +Reverse engineering of `0xFFFFFE000864DEFC` shows that AMFI's hook is not just a boolean kill gate. -## Why C21 is required (full picture) +It performs all of the following before returning success or failure: -C21 is not just another allow-return patch; it is a **state-fix patch** for `cs_flags` at execve policy time. +- validates the exec target / `cs_blob` and reports AMFI analytics; +- checks multiple kill conditions and returns `1` on rejection; +- mutates `*csflags` during successful exec handling; +- derives extra flags from entitlement state; +- performs final bookkeeping before returning `0`. -Patch shellcode behavior (from patcher implementation): +Observed kill / deny subpaths in IDA: -- Load `cs_flags` pointer from stack (`arg9` path). -- `ORR` with `0x04000000` and `0x0000000F`. -- `AND` with `0xFFFFC0FF` (clears bits in `0x00003F00`). -- Store back and return success (`X0=0`). +- completely unsigned code path; +- Restricted Execution Mode denials; +- legacy VPN plugin rejection; +- dyld signature verification failure; +- helper failure from `sub_FFFFFE000864E5A0(...)` with reason string. -Practical effect: +All of those failure edges converge on the shared kill return at `0xFFFFFE000864E38C` (`mov w0, #1`). -- Unsigned binaries avoid AMFI execve kill outcomes **and** get permissive execution flags instead of failing later due bad flag state. -- For launchd dylib injection (`/cores/launchdhook.dylib`), this patch is critical because the unpatched path can still fail on dyld-signature / restrictive-flag checks even if a generic kill-return patch exists elsewhere. 
-- Clearing the `0x3F00` cluster and forcing low/upper bits ensures launch context is treated permissively enough for injected non-Apple-signed payload flow. +Observed success-path `csflags` mutations in IDA: -## Relationship with Sandbox hook (important) +- `0xFFFFFE000864E1E8`: ORs `0x2200` or `0x200` into `*csflags` depending on dyld / helper state. +- `0xFFFFFE000864E200`: ORs `0x802A00` into `*csflags` when AMFI-derived entitlement flags require SIP-style inheritance. +- `0xFFFFFE000864E4EC`, `0xFFFFFE000864E500`, `0xFFFFFE000864E51C`, `0xFFFFFE000864E534`: OR installer / rootless / datavault / NVRAM-related bits into `*csflags`. +- `0xFFFFFE000864E570`: ORs `0x2A00` into `*csflags` in the final success tail. -- Sandbox also has a cred-label execve hook in the same ops slot (`+0x90`): - - `jb_c21_supp_sandbox_hook_cred_label_update_execve` (`0xFFFFFE00093BDB64`) -- That Sandbox hook contains policy such as `"only launchd is allowed to spawn untrusted binaries"`. +The relevant flag meanings from XNU are in `research/reference/xnu/osfmk/kern/cs_blobs.h:32`. -So launchd-dylib viability depends on **combined behavior**: +## Why The Old Patch Broke Boot -- Sandbox hook policy acceptance for launch context, and -- AMFI C21 flag/state coercion so dyld/code-signing state does not re-kill or strip required capability. +The previous implementations were both too broad: -## IDA labels added in this verification pass +1. the original shellcode version forged new `csflags` at function exit; +2. the later "low-risk" version simply returned from function entry. 
-- **patched-function group**: - - `jb_c21_patch_target_amfi_cred_label_update_execve` @ `0xFFFFFE000863FC6C` - - `jb_c21_patchpoint_retab_redirect` @ `0xFFFFFE000864011C` - - `jb_c21_ref_shared_kill_return` @ `0xFFFFFE00086400FC` -- **supplement group**: - - `jb_c21_supp_exec_handle_image` @ `0xFFFFFE0007FA4A58` - - `jb_c21_supp_exec_policy_stage` @ `0xFFFFFE0007FA6858` - - `jb_c21_supp_exec_policy_wrapper` @ `0xFFFFFE0007F81F00` - - `jb_c21_supp_mac_policy_dispatch_ops90_execve` @ `0xFFFFFE00082D9D0C` - - `jb_c21_supp_dispatch_load_ops_off90` @ `0xFFFFFE00082D9DBC` - - `jb_c21_supp_dispatch_call_ops_off90` @ `0xFFFFFE00082D9FCC` - - `jb_c21_supp_amfi_start` @ `0xFFFFFE0008640624` - - `jb_c21_supp_amfi_init_register_policy_ops` @ `0xFFFFFE0008640718` - - `jb_c21_supp_sandbox_hook_cred_label_update_execve` @ `0xFFFFFE00093BDB64` - - `jb_c21_supp_sandbox_execve_context_gate` @ `0xFFFFFE00093BC054` +The entry-return strategy is fundamentally wrong for boot stability because it skips AMFI's normal exec-time work entirely. -## Symbol Consistency Audit (2026-03-05) +That means it bypasses: -- Status: `partial` -- Recovered symbol `_hook_cred_label_update_execve` is present and consistent. -- Many `jb_*` helper names in this file are analyst aliases and do not all appear in recovered symbol JSON. +- `cs_blob` / signature-state handling; +- AMFI auxiliary analytics / bookkeeping; +- entitlement-derived `csflags` propagation; +- final per-exec state setup that later code expects to have happened. -## Patch Metadata +In short: `_cred_label_update_execve` is on the boot-critical exec path, so turning it into an unconditional `return 0` is not a safe jailbreak strategy. -- Patch document: `patch_cred_label_update_execve.md` (C21). -- Primary patcher module: `scripts/patchers/kernel_jb_patch_cred_label.py`. -- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution. 
+## Repaired Patch Strategy -## Patch Goal +The current C21-v1 patcher no longer returns from function entry and no +longer hijacks the beginning of the success tail. -Redirect cred-label execve handling to shellcode that coerces permissive cs_flags and returns success. +Instead it: -## Target Function(s) and Binary Location +1. keeps AMFI's full exec-time logic intact; +2. finds the canonical epilogue at `0xFFFFFE000864E390`; +3. redirects the shared deny return (`0xFFFFFE000864E38C`) and both late + success exits (`0xFFFFFE000864E580`, `0xFFFFFE000864E588`) into one + common trampoline; +4. reloads `u_int *csflags` from the function's own stack slot in the cave, + so the cave works for both deny and success exits; +5. clears only the restrictive execution bits from `*csflags`; +6. forces `w0 = 0` and branches into the original epilogue. -- Primary target: AMFI cred-label callback body at `0xfffffe000863fc6c`. -- Patchpoint: `0xfffffe000864011c` (`retab` redirect to injected shellcode/cave). +The current trampoline clears this mask: -## Kernel Source File Location +- `CS_HARD` +- `CS_KILL` +- `CS_CHECK_EXPIRATION` +- `CS_RESTRICT` +- `CS_ENFORCEMENT` +- `CS_REQUIRE_LV` -- Component: AMFI policy callback implementation in kernel collection (private). -- Related open-source MAC framework context: `security/mac_process.c` + exec paths in `bsd/kern/kern_exec.c`. -- Confidence: `medium`. +Bitmask used by the patcher: `0xFFFFC0FF`. -## Function Call Stack +This preserves AMFI's normal validation / entitlement work while removing the sticky exec-time restrictions that are most hostile to jailbreak tooling. -- Primary traced chain (from `Verified call/dispatch trace (no trust in old notes)`): -- 1. Exec pipeline enters `jb_c21_supp_exec_handle_image` (`0xFFFFFE0007FA4A58`). -- 2. It calls `jb_c21_supp_exec_policy_stage` (`0xFFFFFE0007FA6858`). -- 3. That stage schedules `jb_c21_supp_exec_policy_wrapper` (`0xFFFFFE0007F81F00`). -- 4. 
Wrapper calls `jb_c21_supp_mac_policy_dispatch_ops90_execve` (`0xFFFFFE00082D9D0C`). -- 5. Dispatcher loads callback from `policy->ops + 0x90` at `jb_c21_supp_dispatch_load_ops_off90` (`0xFFFFFE00082D9DBC`) and calls it at `jb_c21_supp_dispatch_call_ops_off90` (`0xFFFFFE00082D9FCC`, `BLRAA ... X17=#0xEC79`). -- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file. +## C21-v1 Scope -## Patch Hit Points +This is intentionally the smallest credible C21-only design: -- Patch hitpoint is selected by contextual matcher and verified against local control-flow. -- Before/after instruction semantics are captured in the patch-site evidence above. +- it does not depend on `patch_amfi_execve_kill_path`; +- it does not patch function entry; +- it does not forge `CS_VALID`, `CS_PLATFORM_BINARY`, `CS_ADHOC`, or other + high-risk identity bits; +- it only converts late exits in `_cred_label_update_execve` to success and + normalizes the restrictive `0x3F00` cluster. -## Current Patch Search Logic +## C21-v1 Outcome -- Implemented in `scripts/patchers/kernel_jb_patch_cred_label.py`. -- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected. -- The patch is applied only after a unique candidate is confirmed in-function. -- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks). +- User restore testing confirms C21-v1 boots successfully. +- That result validates the central design assumption: `_cred_label_update_execve` + can be patched safely as long as AMFI's main body is preserved and only the + final exits are redirected. 
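The `0xFFFFC0FF` bitmask the trampoline applies can be re-derived from the `cs_blobs.h` flag values named above; a sketch using the open-source XNU constants (values taken from `osfmk/kern/cs_blobs.h`, not extracted from this kernel):

```python
# Restrictive cs_flags bits cleared by the C21 trampoline (XNU cs_blobs.h values).
CS_HARD             = 0x00000100
CS_KILL             = 0x00000200
CS_CHECK_EXPIRATION = 0x00000400
CS_RESTRICT         = 0x00000800
CS_ENFORCEMENT      = 0x00001000
CS_REQUIRE_LV       = 0x00002000

RESTRICTIVE = (CS_HARD | CS_KILL | CS_CHECK_EXPIRATION |
               CS_RESTRICT | CS_ENFORCEMENT | CS_REQUIRE_LV)
CLEAR_MASK = ~RESTRICTIVE & 0xFFFFFFFF   # 32-bit logical-immediate form

assert RESTRICTIVE == 0x3F00             # the "0x3F00 cluster"
assert CLEAR_MASK == 0xFFFFC0FF          # matches `and w8, w8, #0xFFFFC0FF`
```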
-## Pseudocode (Before) +## Dry-Run Verification (extracted PCC 26.1 research kernel) -```c -if (amfi_checks_fail || cs_flags_invalid) { - return 1; -} -return apply_default_execve_flags(...); -``` +Dry-run patch generation against the extracted raw Mach-O from +`ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600` produced the +following C21-v1 shape: -## Pseudocode (After) +- code cave: `0x00AB0F00` +- shared deny-return branch site: `0x0163C0FC` +- late success-exit branch sites: `0x0163C2F0`, `0x0163C2F8` -```c -cs_flags |= 0x04000000 | 0x0000000F; -cs_flags &= 0xFFFFC0FF; -return 0; -``` +Emitted trampoline body: -## Validation (Static Evidence) +- `ldr x26, [x29, #0x18]` +- `cbz x26, +0x10` +- `ldr w8, [x26]` +- `and w8, w8, #0xFFFFC0FF` +- `str w8, [x26]` +- `mov w0, #0` +- `b epilogue` -- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site. -- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`. -- Address-level evidence in this document is consistent with patcher matcher intent. +Observed C21-v1 raw patch count: `10` -## Expected Failure/Panic if Unpatched +- `7` instructions in the trampoline cave +- `3` patched branch sites in `_cred_label_update_execve` -- Exec policy path preserves restrictive `cs_flags` and deny returns, causing AMFI kill outcomes or later entitlement-state failures. +## C21-v2 Refinement -## Risk / Side Effects +After C21-v1 boot success, the patch was refined to separate deny and success +semantics instead of using one common cave for all exits. -- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions. -- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows. 
+### Reason for v2 -## Symbol Consistency Check +C21-v1 proved that the late-exit structure is safe enough to boot, but it still +cleared `0x3F00` on the shared deny path. That is broader than necessary. -- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`. -- Canonical symbol hit(s): none (alias-based static matching used). -- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases. -- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe000863fc6c` currently resolves to `__ZN18AppleMobileApNonce21_saveNonceInfoInNVRAMEPKc` (size `0x250`). +C21-v2 narrows that behavior: -## Open Questions and Confidence +- deny exit: force only `w0 = 0`, then return through the original epilogue; +- success exits: keep the late `csflags` normalization path. -- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain. -- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial). +### C21-v2 dry-run shape -## Evidence Appendix +- deny cave: `0x00AB02B8` +- success cave: `0x00AB0F00` +- deny-return branch site: `0x0163C0FC` +- late success-exit branch sites: `0x0163C2F0`, `0x0163C2F8` -- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above. -- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file. 
+Observed C21-v2 raw patch count: `12` -## Runtime + IDA Verification (2026-03-05) +- `2` instructions in the deny cave +- `7` instructions in the success cave +- `3` patched branch sites in `_cred_label_update_execve` -- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00` -- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` -- Base VA: `0xFFFFFE0007004000` -- Runtime status: `hit` (2 patch writes, method_return=True) -- Included in `KernelJBPatcher.find_all()`: `True` -- IDA mapping: `2/2` points in recognized functions; `0` points are code-cave/data-table writes. -- IDA mapping status: `ok` (IDA runtime mapping loaded.) -- Call-chain mapping status: `ok` (IDA call-chain report loaded.) -- Call-chain validation: `1` function nodes, `3` patch-point VAs. -- IDA function sample: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi` -- Chain function sample: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi` -- Caller sample: `__ZL35_initializeAppleMobileFileIntegrityv` -- Callee sample: `__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`, `__ZN24AppleMobileFileIntegrity27submitAuxiliaryInfoAnalyticEP5vnodeP7cs_blob`, `sub_FFFFFE0007B4EA8C`, `sub_FFFFFE0007CD7750`, `sub_FFFFFE0007CD7760`, `sub_FFFFFE0007F8C478` -- Verdict: `valid` -- Recommendation: Keep enabled for this kernel build; continue monitoring for pattern drift. -- Policy note: method is in the low-risk optimized set (validated hit on this kernel). 
-- Key verified points: -- `0xFFFFFE000864DF00` (`__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`): mov x0,xzr [_cred_label_update_execve low-risk] | `ff4302d1 -> e0031faa` -- `0xFFFFFE000864DF04` (`__Z25_cred_label_update_execveP5ucredS0_P4procP5vnodexS4_P5labelS6_S6_PjPvmPi`): retab [_cred_label_update_execve low-risk] | `fc6f03a9 -> ff0f5fd6` -- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md` - +## C21-v3 Refinement + +After preparing the safer split-exit structure in v2, the next experimental +step adds only the smallest helper-bit subset from the older upstream idea. + +### Reason for v3 + +The old upstream shellcode not only cleared restrictive flags, but also set a +much broader collection of identity / helper bits. Most of those are too risky +to restore directly. 
+ +C21-v3 keeps the v2 structure and adds only this success-only increment: + +- `CS_GET_TASK_ALLOW` (`0x4`) +- `CS_INSTALLER` (`0x8`) + +Combined set mask used by v3: `0x0000000C` + +### C21-v3 dry-run shape + +- deny cave: `0x00AB02B8` +- success cave: `0x00AB0F00` +- deny-return branch site: `0x0163C0FC` +- late success-exit branch sites: `0x0163C2F0`, `0x0163C2F8` + +Observed C21-v3 raw patch count: `13` + +- `2` instructions in the deny cave +- `8` instructions in the success cave +- `3` patched branch sites in `_cred_label_update_execve` + +Success-cave body now becomes: + +- `ldr x26, [x29, #0x18]` +- `cbz x26, +0x10` +- `ldr w8, [x26]` +- `and w8, w8, #0xFFFFC0FF` +- `orr w8, w8, #0xC` +- `str w8, [x26]` +- `mov w0, #0` +- `b epilogue` + +## Intended Effect + +After the repaired patch: + +- AMFI still runs its normal exec-time hook and keeps boot-critical side effects intact. +- C21 now carries its own late deny→allow transition inside `_cred_label_update_execve`. +- Successfully launched processes end up with a less restrictive `csflags` set, especially around kill / hard / library-validation style behavior. + +This is a much narrower and more defensible jailbreak patch than forcing an unconditional success return at function entry. + +## Current Status + +- Patch implementation updated in `scripts/patchers/kernel_jb_patch_cred_label.py` as C21-v3. +- C21-v1 has already booted successfully in restore testing. +- Default schedule remains disabled in `scripts/patchers/kernel_jb.py` until C21-v3 restore / boot validation is rerun. +- Expected dry-run patch shape for C21-v3 is: + - 1 deny cave; + - 1 success cave; + - 1 branch patch at the shared deny return; + - 2 branch patches at the two late success exits. +- The current dry-run matches that expected shape exactly. +- If C21-v3 regresses boot, the most likely cause is not the split late-exit structure, but the newly added `0xC` helper-bit OR on the success path. 
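The net C21-v3 success-path transform on `*csflags` reduces to one AND plus one OR. A hypothetical host-side illustration (the sample input value is invented for demonstration; mask values match the cave body listed above):

```python
# C21-v3 success-cave csflags transform: clear the 0x3F00 cluster, set 0xC.
CLEAR_MASK = 0xFFFFC0FF   # and w8, w8, #0xFFFFC0FF
SET_MASK   = 0x0000000C   # orr w8, w8, #0xC  (CS_GET_TASK_ALLOW | CS_INSTALLER)

def v3_success_fixup(csflags: int) -> int:
    """Model the success-cave mutation applied to the reloaded *csflags word."""
    return (csflags & CLEAR_MASK) | SET_MASK

# Sample: CS_VALID | CS_HARD | CS_KILL | CS_ENFORCEMENT | CS_REQUIRE_LV
sample = 0x1 | 0x100 | 0x200 | 0x1000 | 0x2000
assert v3_success_fixup(sample) == 0x1 | 0xC  # restrictive bits cleared, helper bits set
```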
diff --git a/research/kernel_patch_jb/patch_hook_cred_label_update_execve.md b/research/kernel_patch_jb/patch_hook_cred_label_update_execve.md index 8205db3..9803c1b 100644 --- a/research/kernel_patch_jb/patch_hook_cred_label_update_execve.md +++ b/research/kernel_patch_jb/patch_hook_cred_label_update_execve.md @@ -1,169 +1,146 @@ # C23 `patch_hook_cred_label_update_execve` -## Patch Goal +## Scope -Install an inline trampoline on the sandbox cred-label execve hook, inject ownership-propagation shellcode, and resume original hook flow safely. +- Kernel analyzed: `kernelcache.research.vphone600` +- Concrete target image: `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600` +- Analysis date: `2026-03-06` +- Method: IDA MCP + local `research/reference/xnu` + focused Python dry-run +- Trust policy: historical notes for this patch were treated as untrusted and re-derived from the live PCC 26.1 research kernel -## Binary Targets (IDA + Recovered Symbols) +## Executive Verdict -- Sandbox policy strings/data: - - `"Sandbox"` pointer at `0xfffffe0007a66cc0` - - `"Seatbelt sandbox policy"` pointer at `0xfffffe0007a66cc8` - - `mpc_ops` table at `0xfffffe0007a66d20` -- Dynamic hook selection (ops[0..29], max size): - - selected entry: `ops[18] = 0xfffffe00093d2ce4` (size `0x1070`) -- Recovered hook symbol (callee in this path): - - `_hook_cred_label_update_execve` at `0xfffffe00093d0d0c` -- `vnode_getattr` resolution by string-near-BL method: - - string `%s: vnode_getattr: %d` xref at `0xfffffe00084caa18` - - nearest preceding BL target: `0xfffffe0007cd84f8` +`patch_hook_cred_label_update_execve` should be implemented as a **faithful upstream C23 wrapper trampoline**, not as an early-return patch. -## Call-Stack Analysis +The correct PCC 26.1 target is the sandbox `mac_policy_ops[18]` entry for `mpo_cred_label_update_execve`. 
On this kernel that table entry points to the wrapper at `0xfffffe00093bdb64` (`sub_FFFFFE00093BDB64`), not directly to the internal helper at `0xfffffe00093bbbf4` (`sub_FFFFFE00093BBBF4`). -- MAC framework dispatch -> `mac_policy_ops[18]` (`0xfffffe00093d2ce4`) -> internal call to `_hook_cred_label_update_execve` (`0xfffffe00093d0d0c`). -- No direct code xrefs to `ops[18]` function (expected: data-driven dispatch table call path). +The rebuilt repo implementation now follows upstream C23 behavior: -## Patch-Site / Byte-Level Change +- retarget `ops[18]` to a code cave, +- assemble the cave body via keystone `asm()` instead of hardcoded instruction words, +- fetch file metadata with `vnode_getattr(vp, &vap, vfs_context_current())`, +- if `VSUID`/`VSGID` are present, copy owner UID/GID into the pending new credential, +- set `proc->p_flag |= P_SUGID` when either field changes, +- then branch back to the original wrapper. -- Trampoline site: `0xfffffe00093d2ce4` -- Before: - - bytes: `7F 23 03 D5` - - asm: `PACIBSP` -- After: - - asm: `B cave` (PC-relative, target depends on allocated cave offset) -- Cave semantics: - - slot 0: relocated `PACIBSP` - - slot 18: `BL vnode_getattr_target` - - tail: restore regs + `B hook+4` +This means C23 is **not** a direct sandbox-disable patch. It is a compatibility trampoline that preserves exec-time setugid credential state before the normal sandbox wrapper continues. -## Pseudocode (Before) +## Verified Binary Facts -```c -int hook_cred_label_update_execve(args...) { - // original sandbox hook logic - ... -} -``` +### 1. The live PCC 26.1 `ops[18]` entry points to the wrapper -## Pseudocode (After) +Focused dry-run and local pointer decode on `kernelcache.research.vphone600` show: -```c -int hook_entry(args...) 
{ - branch_to_cave(); -} +- sandbox `mac_policy_conf` at file offset `0x00A54428` +- `mpc_ops` table at file offset `0x00A54488` +- `ops[18]` entry at file offset `0x00A54518` +- original raw chained pointer: `0x8010EC79023B9B64` +- decoded target file offset: `0x023B9B64` +- decoded target VA: `0xfffffe00093bdb64` -int cave(args...) { - pacibsp(); - if (vp != NULL) { - vnode_getattr(vp, &vap, &ctx); - propagate_uid_gid_if_needed(new_cred, vap, proc); - } - branch_to_hook_plus_4(); -} -``` +So on this kernel, `ops[18]` is the wrapper `sub_FFFFFE00093BDB64`. -## Symbol Consistency +### 2. The wrapper calls the internal helper -- `_hook_cred_label_update_execve` symbol is present and aligned with call-path evidence. -- `ops[18]` wrapper itself has no recovered explicit symbol name; behavior is consistent with sandbox MAC dispatch wrapper. +IDA MCP on the same PCC 26.1 research kernel shows: -## Patch Metadata +- wrapper: `sub_FFFFFE00093BDB64` +- inner helper: `sub_FFFFFE00093BBBF4` +- call site inside wrapper: `0xfffffe00093be8d0` -- Patch document: `patch_hook_cred_label_update_execve.md` (C23). -- Primary patcher module: `scripts/patchers/kernel_jb_patch_hook_cred_label.py`. -- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution. +So the runtime call chain is: -## Target Function(s) and Binary Location +- sandbox policy table `ops[18]` +- wrapper `sub_FFFFFE00093BDB64` +- internal helper `sub_FFFFFE00093BBBF4` -- Primary target: hook/trampoline path around `hook_cred_label_update_execve`. -- Patch hit combines inline branch rewrite plus code-cave logic, with addresses listed below. +### 3. Faithful upstream C23 branches back to the wrapper, not the helper -## Kernel Source File Location +The rebuilt C23 cave uses the same high-level structure as upstream: -- Component: sandbox/AMFI hook glue around execve cred-label callback (partially private in KC). 
-- Related open-source context: `security/mac_process.c`, `bsd/kern/kern_exec.c`. -- Confidence: `low`. +- save argument registers, +- call `vfs_context_current`, +- call `vnode_getattr`, +- update pending credential UID/GID from vnode owner when `VSUID`/`VSGID` are set, +- set `P_SUGID`, +- restore registers, +- branch back to the original wrapper entry. -## Function Call Stack +For PCC 26.1, the resolved helper targets are: -- Primary traced chain (from `Call-Stack Analysis`): -- MAC framework dispatch -> `mac_policy_ops[18]` (`0xfffffe00093d2ce4`) -> internal call to `_hook_cred_label_update_execve` (`0xfffffe00093d0d0c`). -- No direct code xrefs to `ops[18]` function (expected: data-driven dispatch table call path). -- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file. +- `vfs_context_current` body at file offset `0x00B756DC` +- `vnode_getattr` body at file offset `0x00CC91B4` +- branch-back target wrapper at file offset `0x023B9B64` -## Patch Hit Points +## XNU Cross-Reference -- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`): -- Trampoline site: `0xfffffe00093d2ce4` -- Before: -- bytes: `7F 23 03 D5` -- asm: `PACIBSP` -- After: -- asm: `B cave` (PC-relative, target depends on allocated cave offset) -- The before/after instruction transform is constrained to this validated site. 
+Open-source XNU confirms the field semantics used by the faithful C23 shellcode: -## Current Patch Search Logic +- `VSUID` / `VSGID` are defined in `research/reference/xnu/bsd/sys/vnode.h:807` +- `struct vnode_attr::{va_uid, va_gid, va_mode}` are defined in `research/reference/xnu/bsd/sys/vnode.h:690` +- `struct ucred::cr_uid` is defined in `research/reference/xnu/bsd/sys/ucred.h:155` +- `cr_gid` aliases `cr_groups[0]` in `research/reference/xnu/bsd/sys/ucred.h:211` +- `P_SUGID` is defined in `research/reference/xnu/bsd/sys/proc.h:177` +- exec-time MAC label update reaches this area through `kauth_proc_label_update_execve(...)` in `research/reference/xnu/bsd/kern/kern_credential.c:4367` +- exec path setugid handling is in `exec_handle_sugid(...)` in `research/reference/xnu/bsd/kern/kern_exec.c:6833` -- Implemented in `scripts/patchers/kernel_jb_patch_hook_cred_label.py`. -- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected. -- The patch is applied only after a unique candidate is confirmed in-function. -- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks). +## What C23 Does After Rebuild -## Validation (Static Evidence) +### Facts -- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site. -- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`. -- Address-level evidence in this document is consistent with patcher matcher intent. +The rebuilt C23 now does exactly two writes in focused dry-run, and the cave body is keystone-generated rather than hand-written as raw instruction words: -## Expected Failure/Panic if Unpatched +1. retarget `ops[18]` from the original wrapper pointer to the code cave +2. 
emit a `0xB8`-byte cave implementing the setugid fixup trampoline -- Exec hook path retains ownership/suid propagation restrictions, leading to launch denial or broken privilege state transitions. +Focused dry-run output on `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600`: -## Risk / Side Effects +- `0x00A54518` — retarget `ops[18]` to faithful C23 cave +- `0x00AB1720` — faithful upstream C23 cave body -- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions. -- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows. +The patched chained-pointer qword becomes: -## Symbol Consistency Check +- new raw entry: `0x8010EC7900AB1720` -- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`. -- Canonical symbol hit(s): `_hook_cred_label_update_execve`. -- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases. -- IDA-MCP lookup snapshot (2026-03-05): `_hook_cred_label_update_execve` resolved at `0xfffffe00093d0d0c` (size `0x460`). +### Inference -## Open Questions and Confidence +C23’s role in the jailbreak patchset is best understood as a **boot-safety / semantic-preservation shim** around exec-time sandbox transition handling. -- Open question: verify future firmware drift does not move this site into an equivalent but semantically different branch. -- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence). +It does **not** directly remove the sandbox wrapper. Instead it ensures that setuid/setgid-derived credential state is already reflected in the pending exec credential before the original sandbox wrapper runs. That is consistent with the historical upstream choice to preserve exec-time credential semantics while other jailbreak patches relax deny decisions elsewhere. 
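The `ops[18]` decode and retarget arithmetic above can be reproduced with plain integer math. This sketch assumes, as these notes do, that the low 32 bits of the chained entry carry the target file offset and that the high chain-metadata bits are preserved on rewrite:

```python
# Decode / retarget the sandbox ops[18] chained-pointer entry (values from this note).
BASE_VA   = 0xFFFFFE0007004000
OPS_TABLE = 0x00A54488          # mpc_ops table file offset
SLOT      = 18                  # mpo_cred_label_update_execve slot index
RAW       = 0x8010EC79023B9B64  # original chained qword
CAVE_OFF  = 0x00AB1720          # faithful C23 cave file offset

assert OPS_TABLE + SLOT * 8 == 0x00A54518           # ops[18] entry file offset

target_off = RAW & 0xFFFFFFFF                       # low 32 bits = target file offset
assert target_off == 0x023B9B64
assert BASE_VA + target_off == 0xFFFFFE00093BDB64   # wrapper VA

new_raw = (RAW & ~0xFFFFFFFF) | CAVE_OFF            # keep high chain-metadata bits
assert new_raw == 0x8010EC7900AB1720
```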
-## Evidence Appendix +## Validation Status -- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above. -- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file. +### Syntax validation -## Runtime + IDA Verification (2026-03-05) +Passed: -- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00` -- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` -- Base VA: `0xFFFFFE0007004000` -- Runtime status: `hit` (2 patch writes, method_return=True) -- Included in `KernelJBPatcher.find_all()`: `True` -- IDA mapping: `2/2` points in recognized functions; `0` points are code-cave/data-table writes. -- IDA mapping status: `ok` (IDA runtime mapping loaded.) -- Call-chain mapping status: `ok` (IDA call-chain report loaded.) -- Call-chain validation: `1` function nodes, `2` patch-point VAs. -- IDA function sample: `sub_FFFFFE00093D2CE4` -- Chain function sample: `sub_FFFFFE00093D2CE4` -- Caller sample: none -- Callee sample: `__sfree_data`, `_hook_cred_label_update_execve`, `_sb_evaluate_internal`, `persona_put_and_unlock`, `proc_checkdeadrefs`, `sub_FFFFFE0007AC57A0` -- Verdict: `valid` -- Recommendation: Keep enabled for this kernel build; continue monitoring for pattern drift. -- Policy note: method is in the low-risk optimized set (validated hit on this kernel). 
-- Key verified points: -- `0xFFFFFE00093D2CE8` (`sub_FFFFFE00093D2CE4`): mov x0,xzr [_hook_cred_label_update_execve low-risk] | `fc6fbaa9 -> e0031faa` -- `0xFFFFFE00093D2CEC` (`sub_FFFFFE00093D2CE4`): retab [_hook_cred_label_update_execve low-risk] | `fa6701a9 -> ff0f5fd6` -- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md` - +- `python3 -m py_compile scripts/patchers/kernel_jb_patch_hook_cred_label.py scripts/patchers/kernel_jb.py` + +### Focused dry-run validation + +Passed in-memory only; no firmware image was written back. + +Observed output: + +- 2 patches emitted +- `ops[18]` correctly decoded and retargeted +- cave placed at `0x00AB1720` +- cave branches back to wrapper `0x023B9B64` +- cave encodes BL calls to `vfs_context_current` and `vnode_getattr` + +## Repo Status After This Pass + +- `scripts/patchers/kernel_jb_patch_hook_cred_label.py` now implements faithful upstream C23 semantics +- `scripts/patchers/kernel_jb.py` includes `patch_hook_cred_label_update_execve` in the active Group C schedule +- `research/00_patch_comparison_all_variants.md` should describe C23 as a faithful wrapper trampoline, not as a mis-targeted early-return patch + +## Practical Effect + +After the rebuild, C23 should provide the following effect on the current PCC 26.1 research kernel: + +- preserve exec-time `VSUID` / `VSGID` credential transfer, +- preserve `P_SUGID` marking, +- keep the original sandbox wrapper execution path alive, +- avoid the broader boot-risk of replacing the whole wrapper with an immediate success return. + +That is the main reason this direction is safer than the old “return 0 from the hook path” interpretations. 
diff --git a/research/kernel_patch_jb/patch_io_secure_bsd_root.md b/research/kernel_patch_jb/patch_io_secure_bsd_root.md index adb0632..7dd025a 100644 --- a/research/kernel_patch_jb/patch_io_secure_bsd_root.md +++ b/research/kernel_patch_jb/patch_io_secure_bsd_root.md @@ -1,148 +1,257 @@ -# B19 `patch_io_secure_bsd_root` +# B19 `patch_io_secure_bsd_root` — 2026-03-06 reanalysis -## Patch Goal +## Scope -Bypass secure-root enforcement branch so the checked path does not block execution. - -## Binary Targets (IDA + Recovered Symbols) - -- Recovered symbol: `IOSecureBSDRoot` at `0xfffffe0008297fd8`. -- Additional fallback function observed by string+context matching: - - `sub_FFFFFE000836E168` (AppleARMPE call path with `SecureRoot` / `SecureRootName` references) -- Strict branch candidate used by current fallback-style logic: - - `0xfffffe000836e1f0` (`CBZ W0, ...`) after `BLRAA` - -## Call-Stack Analysis - -- `IOSecureBSDRoot` is the named entrypoint for secure-root handling. -- `sub_FFFFFE000836E168` is reached through platform-dispatch data refs (vtable-style), not direct BL callers. - -## Patch-Site / Byte-Level Change - -- Candidate patch site: `0xfffffe000836e1f0` -- Before: - - bytes: `20 0D 00 34` - - asm: `CBZ W0, loc_FFFFFE000836E394` -- After: - - bytes: `69 00 00 14` - - asm: `B #0x1A4` - -## Pseudocode (Before) - -```c -status = callback(...); -if (status == 0) { - goto secure_root_pass_path; -} -// fail / alternate handling -``` - -## Pseudocode (After) - -```c -goto secure_root_pass_path; // unconditional -``` - -## Symbol Consistency - -- `IOSecureBSDRoot` symbol is recovered and trustworthy as the primary semantic target. -- Current fallback patch site is in a related dispatch function; this is semantically plausible but should be treated as lower confidence than a direct in-symbol site. - -## Patch Metadata - -- Patch document: `patch_io_secure_bsd_root.md` (B19). -- Primary patcher module: `scripts/patchers/kernel_jb_patch_secure_root.py`. 
-- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution. - -## Target Function(s) and Binary Location - -- Primary target: `IOSecureBSDRoot` policy-branch site selected by guard-site filters. -- Patchpoint is the deny-check branch converted to permissive flow. - -## Kernel Source File Location - -- Likely IOKit secure-root policy code inside kernel collection (not fully exposed in open-source XNU tree). -- Closest open-source family: `iokit/Kernel/*` root device / BSD name handling. -- Confidence: `low`. - -## Function Call Stack - -- Primary traced chain (from `Call-Stack Analysis`): -- `IOSecureBSDRoot` is the named entrypoint for secure-root handling. -- `sub_FFFFFE000836E168` is reached through platform-dispatch data refs (vtable-style), not direct BL callers. -- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file. - -## Patch Hit Points - -- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`): -- Candidate patch site: `0xfffffe000836e1f0` -- Before: -- bytes: `20 0D 00 34` -- asm: `CBZ W0, loc_FFFFFE000836E394` -- After: -- bytes: `69 00 00 14` -- The before/after instruction transform is constrained to this validated site. - -## Current Patch Search Logic - -- Implemented in `scripts/patchers/kernel_jb_patch_secure_root.py`. -- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected. -- The patch is applied only after a unique candidate is confirmed in-function. -- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks). - -## Validation (Static Evidence) - -- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site. -- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`. 
-- Address-level evidence in this document is consistent with patcher matcher intent. - -## Expected Failure/Panic if Unpatched - -- Secure BSD root policy check continues to deny modified-root boot/runtime paths needed by jailbreak filesystem flow. - -## Risk / Side Effects - -- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions. -- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows. - -## Symbol Consistency Check - -- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`. -- Canonical symbol hit(s): `IOSecureBSDRoot`. -- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases. -- IDA-MCP lookup snapshot (2026-03-05): `IOSecureBSDRoot` -> `IOSecureBSDRoot` at `0xfffffe0008297fd8`. - -## Open Questions and Confidence - -- Open question: verify future firmware drift does not move this site into an equivalent but semantically different branch. -- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence). - -## Evidence Appendix - -- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above. -- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file. 
- -## Runtime + IDA Verification (2026-03-05) - -- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00` -- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` +- Kernel used for live reverse-engineering: `kernelcache.research.vphone600` +- Kernel file used locally: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` - Base VA: `0xFFFFFE0007004000` -- Runtime status: `hit` (1 patch writes, method_return=True) -- Included in `KernelJBPatcher.find_all()`: `False` -- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes. -- IDA mapping status: `ok` (IDA runtime mapping loaded.) -- Call-chain mapping status: `ok` (IDA call-chain report loaded.) -- Call-chain validation: `1` function nodes, `1` patch-point VAs. -- IDA function sample: `__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_` -- Chain function sample: `__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_` -- Caller sample: none -- Callee sample: `__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_`, `sub_FFFFFE0007AC57A0`, `sub_FFFFFE0007AC5830`, `sub_FFFFFE0007B1B4E0`, `sub_FFFFFE0007B1C324`, `sub_FFFFFE0008133868` -- Verdict: `questionable` -- Recommendation: Hit is valid but patch is inactive in find_all(); enable only after staged validation. 
-- Key verified points: -- `0xFFFFFE000836E1F0` (`__ZN10AppleARMPE20callPlatformFunctionEPK8OSSymbolbPvS3_S3_S3_`): b #0x1A4 [_IOSecureBSDRoot] | `200d0034 -> 69000014` -- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md` - +- Ground-truth sources for this note: + - IDA-MCP on the loaded research kernel + - recovered symbol datasets in `research/kernel_info/json/` + - open-source XNU in `research/reference/xnu` + +This document intentionally discards earlier B19 writeups as untrusted and restarts the analysis from first principles. + +## Executive Conclusion + +`patch_io_secure_bsd_root` was previously targeting the wrong branch. + +The disabled historical patch at `0xFFFFFE000836E1F0` / file offset `0x0136A1F0` does **not** patch the `"SecureRootName"` policy result used by `IOSecureBSDRoot()`. Instead, it patches the earlier `"SecureRoot"` name-match gate inside `AppleARMPE::callPlatformFunction`, which changes generic platform-function dispatch semantics and is a credible root cause for the early-boot failure. + +The semantically correct deny path for the `IOSecureBSDRoot(rootdevice)` flow is the `"SecureRootName"` branch in `AppleARMPE::callPlatformFunction`, specifically the final return-value select at: + +- VA: `0xFFFFFE000836E464` +- file offset: `0x0136A464` +- before: `f613891a` / `CSEL W22, WZR, W9, NE` +- recommended after: `16008052` / `MOV W22, #0` + +That patch preserves the compare, callback, wakeup, and state updates, and only forces the final policy return from `kIOReturnNotPrivileged` to success. 
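The before/after words in the conclusion can be cross-checked without Keystone. This is a minimal pure-Python sketch of the 32-bit `MOVZ` encoding (`MOV Wd, #imm` assembles to `MOVZ` here), not the project's `asm(...)` helper:

```python
import struct

# Minimal AArch64 MOVZ (32-bit form) encoder:
# MOVZ Wd, #imm16  ==  0x52800000 | (imm16 << 5) | Rd
def movz_w(rd: int, imm16: int) -> bytes:
    assert 0 <= rd <= 30 and 0 <= imm16 <= 0xFFFF
    return struct.pack("<I", 0x52800000 | (imm16 << 5) | rd)

# Preferred site at 0x0136A464: CSEL W22, WZR, W9, NE  ->  MOV W22, #0
assert movz_w(22, 0).hex() == "16008052"
# Secondary option at 0x0136A3EC: CSET W8, EQ  ->  MOV W8, #1
assert movz_w(8, 1).hex() == "28008052"
```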
+ +## Implementation Status + +- `scripts/patchers/kernel_jb_patch_secure_root.py` was retargeted on 2026-03-06 to emit this `0x0136A464` patch instead of the historical `0x0136A1F0` false-positive branch rewrite. +- `scripts/patchers/kernel_jb.py` now includes `patch_io_secure_bsd_root` again in `_GROUP_B_METHODS` with the retargeted matcher. +- Local dry-run verification on the research kernel emits exactly one write: `0x0136A464` / `16008052` / `mov w22, #0 [_IOSecureBSDRoot SecureRootName allow]`. + +## Verified Call Chain + +### 1. BSD boot calls `IOSecureBSDRoot` + +IDA shows `bsd_init` calling `IOSecureBSDRoot` here: + +- `bsd_init` call site: `0xFFFFFE0007F7B7C4` / file offset `0x00F777C4` +- instruction: `BL IOSecureBSDRoot` + +The nearby boot flow is: + +1. `IOFindBSDRoot` +2. `vfs_mountroot` +3. `IOSecureBSDRoot(rootdevice)` +4. `VFS_ROOT(...)` +5. later `FSIOC_KERNEL_ROOTAUTH` + +This matches open-source XNU in `research/reference/xnu/bsd/kern/bsd_init.c`, where `IOSecureBSDRoot(rootdevice);` appears before `VFS_ROOT()` and well before the later root-authentication ioctl. + +### 2. `IOSecureBSDRoot` calls platform expert with `"SecureRootName"` + +Recovered symbol + IDA decompilation: + +- `IOSecureBSDRoot`: `0xFFFFFE0008297FD8` / file offset `0x01293FD8` +- research recovered symbol: `IOSecureBSDRoot` +- release recovered symbol: `IOSecureBSDRoot` at `0xFFFFFE000825FFD8` + +The decompiled logic is straightforward: + +1. build `OSSymbol("SecureRootName")` +2. wait for `IOPlatformExpert` +3. call `pe->callPlatformFunction(functionName, false, rootName, NULL, NULL, NULL)` +4. 
if result is `0xE00002C1` (`kIOReturnNotPrivileged`), call `mdevremoveall()` + +Open-source XNU confirms the intended semantics in `research/reference/xnu/iokit/bsddev/IOKitBSDInit.cpp`: + +- `"SecureRootName"` is the exact function name +- `kIOReturnNotPrivileged` means the root device is not secure +- on that return code, `mdevremoveall()` is invoked + +`mdevremoveall()` in `research/reference/xnu/bsd/dev/memdev.c` removes `/dev/md*` devices and clears the memory-device bookkeeping, so this path is directly relevant to ramdisk / custom-root boot flows. + +### 3. The real secure-root decision is made in `AppleARMPE::callPlatformFunction` + +Relevant function: + +- `AppleARMPE::callPlatformFunction`: `0xFFFFFE000836E168` / file offset `0x0136A168` + +Within this function, there are **two different** string-based branches that matter: + +#### A. `"SecureRoot"` branch — callback/control path + +At: + +- `0xFFFFFE000836E1EC`: `BLRAA` to `a2->isEqualTo("SecureRoot")` +- `0xFFFFFE000836E1F0`: `CBZ W0, loc_FFFFFE000836E394` + +If the name matches `"SecureRoot"`, the function enters a branch that: + +- waits on byte flag `[a1+0x118]` +- may call `"SecureRootCallBack"` +- sets / wakes byte flag `[a1+0x119]` +- optionally returns a boolean via `a5` + +This is **not** the direct `IOSecureBSDRoot(rootName)` policy result. + +#### B. 
`"SecureRootName"` branch — actual policy decision path + +At: + +- `0xFFFFFE000836E3C0`: `BLRAA` to `a2->isEqualTo("SecureRootName")` +- `0xFFFFFE000836E3C4`: `CBZ W0, loc_FFFFFE000836E46C` + +Then: + +- `0xFFFFFE000836E3D4`: call helper that behaves like `strlen` +- `0xFFFFFE000836E3E4`: call helper that behaves like `strncmp` +- `0xFFFFFE000836E3E8`: `CMP W0, #0` +- `0xFFFFFE000836E3EC`: `CSET W8, EQ` +- `0xFFFFFE000836E3F0`: store secure-match bit to `[a1+0x11A]` +- wake waiting threads / synchronize callback state +- `0xFFFFFE000836E450`: reload `[a1+0x11A]` +- `0xFFFFFE000836E454`: load `W9 = 0xE00002C1` +- `0xFFFFFE000836E464`: `CSEL W22, WZR, W9, NE` + +That final `CSEL` is the actual deny/success selector for the `"SecureRootName"` request: + +- secure match -> return `0` +- mismatch -> return `0xE00002C1` / `kIOReturnNotPrivileged` + +## Why the Historical Patch Is Wrong + +### Root cause 1: live patcher has no symbol table to use + +Running the existing `KernelJBPatcher` locally against the research kernel shows: + +- `symbol_count = 0` +- `_resolve_symbol("_IOSecureBSDRoot") == -1` +- `_resolve_symbol("IOSecureBSDRoot") == -1` + +So the current code always falls back to a heuristic matcher on this kernel. + +### Root cause 2: the fallback heuristic picks the first `BL* + CBZ W0` site + +The current fallback logic looks for a function referencing both `"SecureRoot"` and `"SecureRootName"`, then selects the first forward conditional branch shaped like: + +- previous instruction is `BL*` +- current instruction is `CBZ/CBNZ W0, target` + +That heuristic lands on: + +- `0xFFFFFE000836E1F0` / `CBZ W0, loc_FFFFFE000836E394` + +But this site is only the result of `isEqualTo("SecureRoot")`. It is **not** the final policy-return site for `"SecureRootName"`. 
+ +### Root cause 3: the old patch changes dispatch routing, not just the deny return + +Historical patch: + +- before: `200d0034` / `CBZ W0, loc_FFFFFE000836E394` +- after: `69000014` / `B #0x1A4` + +Effect: + +- previously: only true `"SecureRoot"` requests enter the `SecureRoot` branch +- after patch: non-`"SecureRoot"` requests are also forced into that branch + +Because this is inside generic `AppleARMPE::callPlatformFunction` dispatch, the patch can corrupt the control flow for unrelated platform-function calls that happen to reach this portion of the function. That is much broader than “skip secure-root denial” and is consistent with a boot-time regression. + +## What This Patch Actually Does + +`patch_io_secure_bsd_root` does **not** replace the later sealed-root / root-authentication gate in `bsd_init`. + +What it actually controls is earlier and narrower: + +1. determine whether the chosen BSD root name is platform-approved (`"SecureRootName"`) +2. if not approved, return `kIOReturnNotPrivileged` +3. `IOSecureBSDRoot()` maps that failure into `mdevremoveall()` + +So the practical effect of a correct B19 bypass is: + +- allow a non-approved/custom BSD root name to survive the platform secure-root policy step +- avoid the `kIOReturnNotPrivileged -> mdevremoveall()` failure path +- keep the rest of the boot moving toward `VFS_ROOT` and the later rootauth check + +This is why B19 and `patch_bsd_init_auth` are separate methods: they handle different stages of the boot chain. 
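The historical rewrite at `0xFFFFFE000836E1F0` can be made concrete by decoding both words: they encode the same forward `+0x1A4` displacement, so the rewrite only removes the `W0 == 0` condition while keeping the `"SecureRoot"` branch target. A pure-Python sketch (forward branches only):

```python
import struct

# Sketch: decode a CBZ Wt or unconditional B word given as little-endian hex.
# Handles forward (positive) displacements only, which is all this site needs.
def decode_branch(word_hex: str):
    w = struct.unpack("<I", bytes.fromhex(word_hex))[0]
    if (w & 0xFF000000) == 0x34000000:             # CBZ Wt, #imm19 * 4
        return "cbz", ((w >> 5) & 0x7FFFF) * 4
    if (w & 0xFC000000) == 0x14000000:             # B #imm26 * 4
        return "b", (w & 0x3FFFFFF) * 4
    raise ValueError("unexpected encoding")

before = decode_branch("200d0034")   # CBZ W0, loc_FFFFFE000836E394
after = decode_branch("69000014")    # B #0x1A4
assert before == ("cbz", 0x1A4)
assert after == ("b", 0x1A4)
# Same target either way -- only the condition is gone:
assert 0xFFFFFE000836E1F0 + 0x1A4 == 0xFFFFFE000836E394
```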
+ +## Recommended Patch Strategy + +### Preferred site: final `"SecureRootName"` return select + +Patch only the final result selector: + +- VA: `0xFFFFFE000836E464` +- file offset: `0x0136A464` +- before bytes: `f613891a` +- before asm: `CSEL W22, WZR, W9, NE` +- after bytes: `16008052` +- after asm: `MOV W22, #0` + +Why this site is preferred: + +- preserves the string comparison logic +- preserves the `SecureRootCallBack` synchronization / wakeup handshake +- preserves the state bytes at `[a1+0x118]`, `[a1+0x119]`, `[a1+0x11A]` +- changes only the final deny-vs-success return value + +### Secondary option: force the secure-match bit before the final select + +- VA: `0xFFFFFE000836E3EC` +- file offset: `0x0136A3EC` +- before bytes: `e8179f1a` +- before asm: `CSET W8, EQ` +- after bytes: `28008052` +- after asm: `MOV W8, #1` + +This is broader than the preferred patch because it changes the stored secure-match state itself, not just the returned result. + +### Tertiary option: suppress only `IOSecureBSDRoot()` cleanup + +There is also a coarser site in `IOSecureBSDRoot` itself: + +- `0xFFFFFE0008298144`: compare against `0xE00002C1` followed by `B.NE` + +That site can suppress `mdevremoveall()` without touching `AppleARMPE::callPlatformFunction`, but it is less attractive because it leaves the underlying `"SecureRootName"` failure semantics intact and only masks the wrapper-side cleanup. + +## Safer Matcher Recipe For Future Python Rework + +If/when the Python patcher is reworked, the fallback should stop selecting the first `BL* + CBZ W0` site in the shared function. + +A safer matcher for stripped kernels is: + +1. locate the function referencing both `"SecureRoot"` and `"SecureRootName"` +2. inside that function, find the `"SecureRootName"` equality check block, not the `"SecureRoot"` block +3. 
from there, require the sequence: + - helper call 1 (length) + - helper call 2 (compare) + - `CMP W0, #0` + - `CSET W8, EQ` + - store to `[X19,#0x11A]` + - later `MOV W9, #0xE00002C1` + - final `CSEL W22, WZR, W9, NE` +4. patch only that final `CSEL` + +This gives a unique, semantics-aware patch site for the actual deny return. + +## Local Reproduction Notes + +Local dry analysis of the current patcher on the research kernel produced: + +- `fallback_func = 0x136a168` +- emitted patch = `(0x0136A1F0, 69000014, 'b #0x1A4 [_IOSecureBSDRoot]')` + +This reproduces the disabled historical behavior and confirms that the current implementation does not yet target the correct deny site. + +## Confidence + +- Confidence that the historical patch site is wrong: **high** +- Confidence that `0xFFFFFE000836E464` is the correct minimal deny-return site: **high** +- Confidence that this alone is sufficient for full jailbreak boot: **medium** + +The last item stays `medium` because B19 only addresses the secure-root platform policy stage; it does not replace the later root-auth/sealedness work handled elsewhere. diff --git a/research/kernel_patch_jb/patch_iouc_failed_macf.md b/research/kernel_patch_jb/patch_iouc_failed_macf.md index 86b9ad5..70d0612 100644 --- a/research/kernel_patch_jb/patch_iouc_failed_macf.md +++ b/research/kernel_patch_jb/patch_iouc_failed_macf.md @@ -1,5 +1,11 @@ # A5 `patch_iouc_failed_macf` +## Status + +- Re-analysis date: `2026-03-06` +- Current conclusion: the historical repo A5 entry early-return is rejected as over-broad, but A5-v2 is now rebuilt as a narrow branch-level patch at the real post-MACF deny gate. +- Current repository behavior: `patch_iouc_failed_macf` is active again with the strict A5-v2 matcher. + ## Patch Goal Bypass the shared IOUserClient MACF deny gate that emits: @@ -9,25 +15,21 @@ Bypass the shared IOUserClient MACF deny gate that emits: This gate blocks `mount-phase-1` and `data-protection` (`seputil`) in current JB boot logs. 
-## Binary Targets (vphone600 research kernel) +## Historical Repo Hit (rejected) - Anchor string: `"failed MACF"` - Candidate function selected by anchor xref + IOUC co-reference: - function start: `0xfffffe000825b0c0` -- Patch points: +- Historical patch points: - `0xfffffe000825b0c4` - `0xfffffe000825b0c8` -## Patch-Site / Byte-Level Change +## Why The Historical Repo Patch Is Rejected -- At `fn + 0x4`: - - before: stack-frame setup (`stp ...`) - - after: `mov x0, xzr` -- At `fn + 0x8`: - - before: stack-frame setup (`stp ...`) - - after: `retab` - -Result: function returns success immediately while preserving entry `PACIBSP`. +- IDA decompilation shows `0xfffffe000825b0c0` is a large IOUserClient open / setup path, not a tiny standalone MACF helper. +- That function also prepares output state (`a7` / `a8` in decompilation) before returning to its caller. +- The historical repo patch overwrote the first two instructions after `PACIBSP` with `mov x0, xzr ; retab`, which forces an immediate success return before that wider setup work happens. +- Therefore the old patch is broader than the actual MACF deny branch and is not a good upstream-aligned design. ## Pseudocode (Before) @@ -39,14 +41,26 @@ int iouc_macf_gate(...) { } ``` -## Pseudocode (After) +## Narrow Branch (current A5-v2 target) ```c -int iouc_macf_gate(...) { - return 0; +// inside sub_FFFFFE000825B0C0 +ret = mac_iokit_check_open(...); +if (ret != 0) { + IOLog("IOUC %s failed MACF in process %s\n", ...); + error = kIOReturnNotPermitted; + goto out; } ``` +Current IDA-validated branch window: + +- `0xfffffe000825ba94` — `BL sub_FFFFFE00082EB07C` +- `0xfffffe000825ba98` — `CBZ W0, loc_FFFFFE000825BB0C` +- `0xfffffe000825baf8` — `ADRL X0, "IOUC %s failed MACF in process %s\n"` + +A5-v2 patches exactly this gate by replacing `CBZ W0, loc_FFFFFE000825BB0C` with unconditional `B loc_FFFFFE000825BB0C`. + ## Why This Patch Was Added - Extending sandbox hooks to cover `ops[201..210]` was not sufficient. 
@@ -60,13 +74,16 @@ int iouc_macf_gate(...) { - Primary patcher module: - `scripts/patchers/kernel_jb_patch_iouc_macf.py` - JB scheduler status: - - enabled in default `_DEFAULT_METHODS` as `patch_iouc_failed_macf` + - present in active `_PATCH_METHODS` + - patch method emits one branch rewrite when the strict shape matches ## Validation (static, local) -- Method emitted 2 writes on current kernel: +- Historical repo dry-run emitted 2 writes on current kernel: - `0x012570C4` `mov x0,xzr [IOUC MACF gate low-risk]` - `0x012570C8` `retab [IOUC MACF gate low-risk]` +- Current A5-v2 dry-run emits **1 write** on current kernel: + - `0x01257A98` `b #0x74 [IOUC MACF deny → allow]` ## XNU Reference Cross-Validation (2026-03-06) @@ -90,19 +107,12 @@ What still requires IDA/runtime evidence: Interpretation: -- This patch has strong source-level support for mechanism (shared IOUC MACF gate), - while concrete hit-point selection remains IDA-authoritative per-kernel. +- The IOUC MACF mechanism itself is real and source-backed. +- The old repo hit-point was too wide. +- A5-v2 now follows the narrower branch-level retarget: preserve the IOUserClient open path and only force the post-`mac_iokit_check_open` gate into the allow path. -## Runtime Validation Pending +## Bottom Line -Need full flow validation after patch install: - -1. `make fw_patch_jb` -2. restore -3. `make cfw_install_jb` -4. `make boot` - -Expected improvement: - -- no `IOUC ... failed MACF` for APFS/SEP user clients -- `data-protection` should progress past `seputil` timeout path. +- The old entry early-return was a repo-local experiment and is no longer used. +- The current A5-v2 implementation patches only the narrow `mac_iokit_check_open` deny gate inside `0xfffffe000825b0c0`. +- Focused dry-run on `kernelcache.research.vphone600` hits a single branch rewrite at `0x01257A98`, which is much closer to an upstream-style minimal gate patch than the old entry short-circuit. 
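The single A5-v2 write can be derived mechanically: read the target out of the original `CBZ W0`, then re-encode the same displacement as an unconditional `B`. A sketch with local helpers (not the patcher's Keystone path):

```python
import struct

BASE_VA = 0xFFFFFE0007004000                # research kernelcache base VA

# Encode an unconditional forward B: 0x14000000 | (byte_offset / 4).
def encode_b(byte_offset: int) -> bytes:
    assert byte_offset % 4 == 0
    return struct.pack("<I", 0x14000000 | ((byte_offset // 4) & 0x3FFFFFF))

gate_va = 0xFFFFFE000825BA98                # CBZ W0, loc_FFFFFE000825BB0C
target_va = 0xFFFFFE000825BB0C              # allow path
patch = encode_b(target_va - gate_va)       # b #0x74

assert gate_va - BASE_VA == 0x01257A98      # matches the emitted dry-run write
assert patch == bytes.fromhex("1d000014")
```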
diff --git a/research/kernel_patch_jb/patch_kcall10.md b/research/kernel_patch_jb/patch_kcall10.md index 2436d40..c5c4a19 100644 --- a/research/kernel_patch_jb/patch_kcall10.md +++ b/research/kernel_patch_jb/patch_kcall10.md @@ -1,152 +1,240 @@ # C24 `patch_kcall10` -## Patch Goal +## Status (2026-03-06, PCC 26.1 re-analysis) -Replace syscall 439 (`kas_info`) with a 10-argument kernel call trampoline and preserve chained-fixup integrity. +- Treat all older `kcall10` notes in this repo as historical / untrusted unless they match the facts below. +- Current verdict for the legacy upstream-style design: it was ABI-incorrect for PCC 26.1 and has been replaced in the patcher with a rebuilt ABI-correct syscall-cave design. +- Scope of this document: single-patch re-research only, focused exclusively on the `kcall10` kernel-call patch itself. -## Binary Targets (IDA + Recovered Symbols) +## Goal -- Recovered symbols: - - `nosys` at `0xfffffe0008010c94` - - `kas_info` at `0xfffffe0008080d0c` -- Patcher design target: - - `sysent[439]` entry: `sy_call`, optional `sy_munge32`, return-type/narg fields. -- Cave code: - - shellcode trampoline in executable text cave (dynamic offset). +- Repurpose `SYS_kas_info` (`syscall 439`) into a usable kernel-call primitive for jailbreak workflows. +- Keep the hook on a syscall slot that is already effectively unused on this kernel. +- Make the patch structurally correct for the real arm64 XNU syscall ABI so it can be dry-run validated without relying on guessed stack contracts. -## Call-Stack Analysis +## Verified PCC 26.1 Facts -- Userland syscall -> syscall dispatch -> `sysent[439].sy_call`. -- Before patch: `sysent[439] -> kas_info` (restricted behavior). -- After patch: `sysent[439] -> kcall10 cave` (loads function pointer + args, executes `BLR x16`, stores results back). 
+### `sysent[439]` on the loaded PCC 26.1 research kernel -## Patch-Site / Byte-Level Change +- IDA function `sub_FFFFFE00081279E4` is the arm64 Unix syscall dispatcher (`unix_syscall` semantics confirmed by XNU source and call shape). +- It computes the syscall-table base as `off_FFFFFE000773F858` and indexes entries as `base + code * 0x18`. +- Therefore `sysent[439]` is at: + - VA `0xFFFFFE0007742180` + - file offset `0x0073E180` +- Unpatched entry contents on PCC 26.1: + - `sy_call = 0xFFFFFE0008077978` + - `sy_arg_munge32 = 0xFFFFFE0007C6AC4C` + - `sy_return_type = 1` + - `sy_narg = 3` + - `sy_arg_bytes = 0x000C` -- Entry-point data patching is chained-fixup encoded (auth rebase), not raw VA writes. -- Key field semantics: - - diversity: `0xBCAD` - - key: IA (`0`) - - addrDiv: `0` - - preserve `next` chain bits -- Metadata patches: +### Raw entry dump + +- 24-byte `sysent[439]` dump as observed in IDA / local decode: + - qword `[+0x00]`: `0xFFFFFE0008077978` + - qword `[+0x08]`: `0xFFFFFE0007C6AC4C` + - dword `[+0x10]`: `0x00000001` + - half `[+0x14]`: `0x0003` + - half `[+0x16]`: `0x000C` +- Same entry in 32-bit little-endian words: + - `08077978 fffffe00 07c6ac4c fffffe00 00000001 000c0003` + +### What `syscall 439` currently does here + +- `0xFFFFFE0008077978` disassembles to: + - `MOV W0, #0x2D` + - `RET` +- `0x2D` is `45` decimal, i.e. `ENOTSUP`. +- So on this PCC 26.1 research kernel, `SYS_kas_info` is effectively a stubbed-out `ENOTSUP` syscall target, which makes it a good hook point. 
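The table math and the raw entry dump above can be replayed with `struct` (a local sketch mirroring the observed `0x18`-byte stride; the field names follow `sysent.h`):

```python
import struct

TABLE_BASE = 0xFFFFFE000773F858             # sysent base read via off_FFFFFE000773F858
ENTRY_SIZE = 0x18                           # base + code * 0x18 per the dispatcher

# The 24-byte sysent[439] dump, little-endian, exactly as listed above.
raw = bytes.fromhex(
    "7879070800feffff"                      # sy_call        = 0xFFFFFE0008077978
    "4cacc60700feffff"                      # sy_arg_munge32 = 0xFFFFFE0007C6AC4C
    "01000000"                              # sy_return_type = 1
    "0300"                                  # sy_narg        = 3
    "0c00"                                  # sy_arg_bytes   = 0x000C
)
sy_call, sy_munge, sy_ret, sy_narg, sy_arg_bytes = struct.unpack("<QQiHH", raw)

assert TABLE_BASE + 439 * ENTRY_SIZE == 0xFFFFFE0007742180
assert sy_call == 0xFFFFFE0008077978
assert sy_munge == 0xFFFFFE0007C6AC4C
assert (sy_ret, sy_narg, sy_arg_bytes) == (1, 3, 0x000C)
```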
+ +### Verified dispatcher ABI + +- In `sub_FFFFFE00081279E4`, the handler call sequence is: + - `LDR X8, [X22]` + - `MOV X0, X21` + - `MOV X1, X19` + - `MOV X2, X24` + - `MOV X17, #0xBCAD` + - `BLRAA X8, X17` +- Derived state at the call: + - `X21 = struct proc *` + - `X19 = &uthread->uu_arg[0]` + - `X24 = &uthread->uu_rval[0]` +- So the real handler ABI is: + - `x0 = struct proc *` + - `x1 = &uthread->uu_arg[0]` + - `x2 = &uthread->uu_rval[0]` + +## XNU Cross-Check + +- `research/reference/xnu/bsd/sys/sysent.h` defines `sy_call_t` as `int32_t sy_call(struct proc *, void *, int *)`. +- `research/reference/xnu/bsd/dev/arm/systemcalls.c` shows `unix_syscall()` calling `(*callp->sy_call)(proc, &uthread->uu_arg[0], &uthread->uu_rval[0])`. +- arm64 `unix_syscall` only accepts up to **8** syscall argument slots. +- `research/reference/xnu/bsd/sys/user.h` shows `uu_rval` is `int uu_rval[2]`, so the natural 64-bit return path is `_SYSCALL_RET_UINT64_T`, which packs one 64-bit value across those two 32-bit cells. + +## Why The Historical Design Was Wrong + +### Old idea + +- Historical notes described a cave that: + - recovered a pointer from `[sp,#0x40]` + - treated that pointer as `{ target, arg0..arg9, out_regs... }` + - called the target with `BLR` + - wrote many registers back to the same buffer + - returned `0` + +### Problems + +- The syscall ABI never passes a userspace request buffer via `[sp,#0x40]`. +- arm64 XNU does not provide a 10-argument Unix syscall ABI. +- `uu_arg` only holds 8 qwords, so the old cave over-read / over-wrote beyond the copied syscall arguments. +- The old design bypassed the real syscall return channel (`retval` / `uu_rval`) and therefore did not actually match how `unix_syscall()` returns results to userspace. + +## Rebuilt Patch Design + +### Practical decision + +- A literal direct-call `kcall10` is not ABI-compatible with this kernel's Unix syscall path. 
+- The rebuilt patch therefore keeps the historical hook point but redefines the request format into an ABI-correct reduced form: + - target function pointer + - 7 direct arguments + - 64-bit X0 return value +- This keeps the patch usable as a kernel-call bootstrap while staying within the real syscall ABI. + +### New `uap` layout + +The rebuilt patcher uses `sy_narg = 8`, with `x1` pointing at a copied 8-qword argument block: + +```c +struct kcall10_uap_rebuilt { + uint64_t target; + uint64_t arg0; + uint64_t arg1; + uint64_t arg2; + uint64_t arg3; + uint64_t arg4; + uint64_t arg5; + uint64_t arg6; +}; +``` + +### New semantics + +- `uap[0]` = target function pointer +- `uap[1..7]` = arguments loaded into `x0..x6` +- `x7` is forced to zero in the cave +- target return `x0` is stored to `retval` +- `sysent[439].sy_return_type` is set to `_SYSCALL_RET_UINT64_T` +- userspace receives one 64-bit return value in `x0` + +## Python Implementation + +The dedicated patcher file is now: + +- `scripts/patchers/kernel_jb_patch_kcall10.py` + +### What it now does + +- Finds the real `sysent` table by scanning backward from a decoded `_nosys` entry. +- Locates a reusable 8-argument `sy_arg_munge32` helper from the live table and now requires that the decoded helper target be unique across all matching sysent rows. +- Allocates an executable cave sized to the emitted blob instead of relying on a fixed large reservation. +- Emits an ABI-correct cave that: + - validates `uap`, `retval`, and `target` + - loads `target + 7 args` from `x1` + - performs `BLR X16` + - stores `X0` to `x2` + - returns `0` on success or `EINVAL` on malformed input +- Rewrites `sysent[439]` to point at the cave. +- Rewrites `sysent[439].sy_arg_munge32` to an 8-argument helper. 
+- Rewrites metadata to: - `sy_return_type = 7` - `sy_narg = 8` - `sy_arg_bytes = 0x20` -## Pseudocode (Before) +## Expected Emitted Patch Shape -```c -// sysent[439] -return kas_info(args); // limited / ENOTSUP style behavior on this platform -``` +The rebuilt patch should emit exactly four writes: -## Pseudocode (After) +1. Code cave blob in `__TEXT_EXEC` +2. `sysent[439].sy_call = cave` +3. `sysent[439].sy_arg_munge32 = 8-arg munger` +4. `sysent[439].sy_return_type / sy_narg / sy_arg_bytes` -```c -// sysent[439] -ctx = user_buf; -fn = ctx->func; -args = ctx->arg0..arg9; -ret_regs = fn(args...); -ctx->ret_regs = ret_regs; -return 0; -``` +## Static Acceptance Criteria -## Symbol Consistency +The rebuilt patch is considered structurally correct if all of the following hold: -- `nosys` and `kas_info` symbols are recovered and consistent with the intended hook objective. -- Direct `sysent` symbol is not recovered; table base still relies on structural scanning + chained-fixup validation logic. +- `sysent[439]` still decodes as a valid auth-rebase entry after patching. +- `sy_narg == 8` and `sy_arg_bytes == 0x20`. +- No cave instruction reads from guessed caller-frame offsets like `[sp,#0x40]` to recover user arguments. +- The cave consumes the real syscall handler ABI: `(proc, uap, retval)`. +- The cave returns the 64-bit primary result through `retval` and `_SYSCALL_RET_UINT64_T`. +- The cave does not read beyond the 8 copied syscall qwords. -## Patch Metadata +## Risks -- Patch document: `patch_kcall10.md` (C24). -- Primary patcher module: `scripts/patchers/kernel_jb_patch_kcall10.py`. -- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution. +- **Arbitrary kernel call surface**: this patch intentionally creates a direct kernel-call primitive from userspace; any reachable caller with sufficient privilege can invoke sensitive kernel routines with attacker-controlled arguments. 
+- **Target-function safety**: the cave does not validate the semantic suitability of the target function. Calling a function with the wrong prototype, wrong locking expectations, or wrong context can panic or corrupt kernel state. +- **Argument-width limit**: this rebuilt version is ABI-correct but only supports `target + 7 args -> uint64 x0`. Workflows that silently assume the old pseudo-10-arg contract will misbehave until userspace is updated. +- **Return-value limit**: only primary `x0` is surfaced through the syscall return path. Any target that needs structured outputs, out-pointers, or multiple architecturally relevant return registers still needs a higher-level descriptor / copyout design. +- **PAC / branch-context coupling**: the `sy_call` hook itself preserves the expected authenticated-call shape, but the target function call inside the cave is a plain `blr x16`. If the chosen target relies on a different authenticated entry expectation or unusual calling context, behavior may still be unsafe. +- **Scheduler impact**: re-enabling this patch in the default JB list means future aggregate dry-runs and restore tests now include it. Any regression observed after this point must consider `patch_kcall10` as part of the active set. -## Target Function(s) and Binary Location +## Current Limits -- Primary target: syscall 439 (`SYS_kas_info`) replacement path plus injected kcall10 shellcode. -- Hit points include syscall table entry redirection and payload cave sites. +- This rebuilt patch is ABI-correct, but it is no longer a literal “10 direct argument” trampoline. +- It provides a reduced-form direct-call primitive: `target + 7 args -> uint64 x0`. +- If a future design needs more arguments or structured output, it should move to a descriptor + `copyin/copyout` model rather than trying to extend the raw syscall ABI. 
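The descriptor direction mentioned above can be sketched as a plain layout. Every name below is hypothetical; nothing like it exists in the current patcher, and the field counts simply illustrate restoring a 10-argument contract with structured outputs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for a future copyin/copyout kcall design.
 * Userspace would pass a single pointer; the kernel cave would copyin
 * the whole block, call the target, and copyout the results. This is
 * a sketch of the direction, not repository code. */
struct kcall_desc {
    uint64_t target;    /* kernel function pointer             */
    uint64_t args[10];  /* restores the full 10-arg contract   */
    uint64_t ret[2];    /* x0/x1 results written back on exit  */
};
```

A fixed flat layout like this keeps the syscall ABI at one pointer argument while letting the argument and return surface grow without touching `sysent` metadata again.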
-## Kernel Source File Location +## Validation Plan -- Mixed source context: syscall plumbing in `bsd/kern/syscalls.master` / `osfmk/kern/syscall_sw.c` plus injected shellcode region. -- Confidence: `medium`. +1. Keep work scoped to this single patch. +2. Run a dedicated dry-run against `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600`. +3. Verify the emitted cave disassembly matches the rebuilt design. +4. Verify the three `sysent[439]` field writes match the intended targets and metadata. +5. Stop at dry-run validation; do not escalate to full firmware build in this step. -## Function Call Stack +## Dry-Run Validation (2026-03-06) -- Primary traced chain (from `Call-Stack Analysis`): -- Userland syscall -> syscall dispatch -> `sysent[439].sy_call`. -- Before patch: `sysent[439] -> kas_info` (restricted behavior). -- After patch: `sysent[439] -> kcall10 cave` (loads function pointer + args, executes `BLR x16`, stores results back). -- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file. 
+Target image: -## Patch Hit Points +- `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600` -- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`): -- diversity: `0xBCAD` +Result: + +- `method_return = True` +- `patch_count = 4` + +Emitted writes: + +- `0x00AB1720` — cave blob, size `0x6C` +- `0x0073E180` — `sysent[439].sy_call = cave` +- `0x0073E188` — `sysent[439].sy_arg_munge32 = 8-arg helper` +- `0x0073E190` — `sy_return_type = 7`, `sy_narg = 8`, `sy_arg_bytes = 0x20` + +Exact emitted bytes: + +- cave @ `0x00AB1720`: + - `7f2303d5ffc300d1f55b00a9f35301a9fd7b02a9fd830091d3028052f40301aaf50302aa940100b4750100b4900240f9300100b4808640a9828e41a9849642a9861e40f9e7031faa00023fd6a00200f913008052e003132af55b40a9f35341a9fd7b42a9ffc30091ff0f5fd6` +- `sysent[439].sy_call` @ `0x0073E180`: + - `2017ab00adbc1080` +- `sysent[439].sy_arg_munge32` @ `0x0073E188`: + - `286dc600be2a2080` +- metadata @ `0x0073E190`: + - `0700000008002000` + +Decoded post-patch fields: + +- `sy_call` decodes to cave file offset `0x00AB1720` +- `sy_arg_munge32` decodes to helper file offset `0x00C66D28` (chosen only after confirming the 8-arg helper target is unique across matching sysent rows) +- `sy_return_type = 7` +- `sy_narg = 8` - `sy_arg_bytes = 0x20` -- The before/after instruction transform is constrained to this validated site. -## Current Patch Search Logic +Cave disassembly summary: -- Implemented in `scripts/patchers/kernel_jb_patch_kcall10.py`. -- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected. -- The patch is applied only after a unique candidate is confirmed in-function. -- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks). - -## Validation (Static Evidence) - -- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site. 
-- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`. -- Address-level evidence in this document is consistent with patcher matcher intent. - -## Expected Failure/Panic if Unpatched - -- Kernel arbitrary-call syscall path is unavailable; userland kcall-based bootstrap stages cannot execute. - -## Risk / Side Effects - -- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions. -- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows. - -## Symbol Consistency Check - -- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`. -- Canonical symbol hit(s): none (alias-based static matching used). -- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases. -- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe0008010c94` currently resolves to `nosys` (size `0x34`). - -## Open Questions and Confidence - -- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain. -- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial). - -## Evidence Appendix - -- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above. -- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file. 
- -## Runtime + IDA Verification (2026-03-05) - -- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00` -- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` -- Base VA: `0xFFFFFE0007004000` -- Runtime status: `hit` (3 patch writes, method_return=True) -- Included in `KernelJBPatcher.find_all()`: `True` -- IDA mapping: `0/3` points in recognized functions; `3` points are code-cave/data-table writes. -- IDA mapping status: `ok` (IDA runtime mapping loaded.) -- Call-chain mapping status: `ok` (IDA call-chain report loaded.) -- Call-chain validation: `0` function nodes, `0` patch-point VAs. -- Verdict: `valid` -- Recommendation: Keep enabled for this kernel build; continue monitoring for pattern drift. -- Policy note: method is in the low-risk optimized set (validated hit on this kernel). -- Key verified points: -- `0xFFFFFE000774E5A0` (`code-cave/data`): sysent[439].sy_call = \_nosys 0xF6F048 (auth rebase, div=0xBCAD, next=2) [kcall10 low-risk] | `0ccd0701adbc1080 -> 48f0f600adbc1080` -- `0xFFFFFE000774E5B0` (`code-cave/data`): sysent[439].sy_return_type = 1 [kcall10 low-risk] | `01000000 -> 01000000` -- `0xFFFFFE000774E5B4` (`code-cave/data`): sysent[439].sy_narg=0,sy_arg_bytes=0 [kcall10 low-risk] | `03000c00 -> 00000000` -- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md` - +- prologue: `pacibsp`, 0x30-byte stack frame, saves `x19`-`x22`, `x29`, `x30` +- validation: reject null `uap`, null `retval`, null `target` with `EINVAL` +- load path: reads target from `[x20]`, args from `[x20+0x8 .. 
+0x38]` +- call path: `blr x16` with `x0..x6` populated and `x7 = 0` +- return path: `str x0, [x21]`, move status into `w0`, restore callee-saved registers, `retab` diff --git a/research/kernel_patch_jb/patch_syscallmask_apply_to_proc.md b/research/kernel_patch_jb/patch_syscallmask_apply_to_proc.md index 66aa3b3..1613332 100644 --- a/research/kernel_patch_jb/patch_syscallmask_apply_to_proc.md +++ b/research/kernel_patch_jb/patch_syscallmask_apply_to_proc.md @@ -1,144 +1,388 @@ # C22 `patch_syscallmask_apply_to_proc` -## Patch Goal +## Status -Inject a shellcode detour into legacy `_syscallmask_apply_to_proc`-shape logic to install custom syscall filter mask handling. +- Re-analysis date: `2026-03-06` +- Scope: `kernelcache.research.vphone600` +- Prior notes for this patch are treated as untrusted unless restated below. +- Current conclusion: the old repo C22 implementation was a misidentification that patched `_profile_syscallmask_destroy` under an underflow-panic slow path. As of `2026-03-06`, `scripts/patchers/kernel_jb_patch_syscallmask.py` has been rebuilt to target the real syscallmask apply wrapper structurally and recreate the upstream C22 behavior (mutate mask bytes to all-ones, then continue into the normal setter path). User-side restore/boot validation succeeded on `2026-03-06`. -## Binary Targets (IDA + Recovered Symbols) +## What This Mechanism Actually Does -- String anchors: - - `"syscallmask.c"` at `0xfffffe0007609236` - - `"sandbox.syscallmasks"` at `0xfffffe000760933c` -- Related recovered functions in the cluster: - - `_profile_syscallmask_destroy` at `0xfffffe00093ae6a4` - - `_sandbox_syscallmask_destroy` at `0xfffffe00093ae984` - - `_sandbox_syscallmask_create` at `0xfffffe00093aea34` - - `_hook_policy_init` at `0xfffffe00093c1a54` +This path is not a generic parser or allocator hook. 
Its real job is to **install per-process syscall filter masks** used later by three enforcement sites: -## Call-Stack Analysis +- Unix syscall dispatch +- Mach trap dispatch +- Kernel MIG / kobject dispatch -- Current firmware exposes syscallmask create/destroy/hook-policy flows. -- Legacy apply-to-proc prologue shape required by C22 shellcode was not found in anchor-near candidates. +In XNU source terms, the closest semantic match is `proc_set_syscall_filter_mask(proc_t p, int which, unsigned char *maskptr, size_t masklen)` in `research/reference/xnu/bsd/kern/kern_proc.c:5142`. -## Patch-Site / Byte-Level Change +Important XNU references: -- Required legacy signature (strict): - - `cbz x2` and `mov x19,x0 ; mov x20,x1 ; mov x21,x2 ; mov x22,x3` in early prologue. -- Validation result on current image: no valid candidate. -- Therefore expected behavior is fail-closed: - - no cave writes - - no branch redirection emitted. +- `research/reference/xnu/bsd/sys/proc.h:558` — `SYSCALL_MASK_UNIX`, `SYSCALL_MASK_MACH`, `SYSCALL_MASK_KOBJ` +- `research/reference/xnu/bsd/kern/kern_proc.c:5142` — setter for the three mask kinds +- `research/reference/xnu/bsd/dev/arm/systemcalls.c:161` — Unix syscall enforcement +- `research/reference/xnu/osfmk/arm64/bsd_arm64.c:253` — Mach trap enforcement +- `research/reference/xnu/osfmk/kern/ipc_kobject.c:568` — kobject/MIG enforcement +- `research/reference/xnu/bsd/kern/kern_fork.c:1028` — Unix mask inheritance on fork +- `research/reference/xnu/osfmk/kern/task.c:1759` — Mach/KOBJ filter inheritance -## Pseudocode (Before) +Semantics from XNU: -```c -// current firmware path differs from legacy apply_to_proc shape -apply_or_policy_update(...); -``` +- If a filter mask pointer is `NULL`, the later dispatch path does **not** perform the extra mask-based deny/evaluate step. +- If a filter mask pointer is present and the bit is clear, the kernel falls back into MACF/Sandbox evaluation. 
+- If a filter mask pointer is present and the bit is set, the indexed Unix/Mach path does **not** fall into the extra policy callback. +- For KOBJ/MIG there is an important nuance: a non-`NULL` all-ones mask suppresses callback evaluation only when the message already has a registered `kobjidx`; `KOBJ_IDX_NOT_SET` still reaches policy evaluation. +- Therefore, `NULL`-mask install and all-ones install are related but **not identical** behaviors. Historical upstream C22 is the all-ones variant, not the `NULL` variant. -## Pseudocode (After) +## Revalidated Live Call Chain (IDA) -```c -// no patch emitted on this build (fail-closed) -apply_or_policy_update(...); -``` +### 1. Real apply layer in the sandbox kext -## Symbol Consistency +`_proc_apply_syscall_masks` at `0xfffffe00093b1a88` -- Recovered symbols exist for syscallmask create/destroy helpers. -- `_syscallmask_apply_to_proc` symbol is not recovered and legacy signature does not match current binary layout. +Decompiled shape: -## Patch Metadata +- Calls helper `sub_FFFFFE00093AE5E8(proc, 0, unix_mask)` +- Calls helper `sub_FFFFFE00093AE5E8(proc, 1, mach_mask)` +- Calls helper `sub_FFFFFE00093AE5E8(proc, 2, kobj_mask)` +- On failure, reports: + - `"failed to apply unix syscall mask"` + - `"failed to apply mach trap mask"` + - `"failed to apply kernel MIG routine mask"` -- Patch document: `patch_syscallmask_apply_to_proc.md` (C22). -- Primary patcher module: `scripts/patchers/kernel_jb_patch_syscallmask.py`. -- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution. +This is the real high-level “apply to proc” logic for the current kernel, even though the stripped symbol is now named `_proc_apply_syscall_masks`, not `_syscallmask_apply_to_proc`. -## Target Function(s) and Binary Location +### 2. Immediate callers of `_proc_apply_syscall_masks` -- Primary target: `syscallmask_apply_to_proc` path plus zalloc_ro_mut update helper. 
-- Patchpoint combines branch policy bypass and helper-site mutation where matcher is valid. +IDA xrefs show live callers: -## Kernel Source File Location +- `_proc_apply_sandbox` at `0xfffffe00093b17d4` +- `_hook_cred_label_update_execve` at `0xfffffe00093d0dfc` -- Likely XNU source family: `bsd/kern/kern_proc.c` plus task/proc state mutation helpers. -- Confidence: `low` (layout drift noted). +That means this path is exercised both when sandbox labels are applied and during exec-time label updates. -## Function Call Stack +### 3. Helper that bridges into kernel proc/task RO state setters -- Primary traced chain (from `Call-Stack Analysis`): -- Current firmware exposes syscallmask create/destroy/hook-policy flows. -- Legacy apply-to-proc prologue shape required by C22 shellcode was not found in anchor-near candidates. -- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file. +`sub_FFFFFE00093AE5E8` at `0xfffffe00093ae5e8` -## Patch Hit Points +Observed behavior: -- Patch hitpoint is selected by contextual matcher and verified against local control-flow. -- Before/after instruction semantics are captured in the patch-site evidence above. +- Accepts `(proc, which, maskptr)` +- If `maskptr != NULL`, loads the expected mask length for `which` +- Tail-calls into kernel text at `0xfffffe0007fd0c74` -## Current Patch Search Logic +This helper is a narrow wrapper for the true setter logic. -- Implemented in `scripts/patchers/kernel_jb_patch_syscallmask.py`. -- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected. -- The patch is applied only after a unique candidate is confirmed in-function. -- String anchors: -- Legacy apply-to-proc prologue shape required by C22 shellcode was not found in anchor-near candidates. +### 4. Kernel-side setter core -## Validation (Static Evidence) +The tail-call target is inside `sub_FFFFFE0007FD0B64`, entered at `0xfffffe0007fd0c74`. 
-- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site. -- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`. -- Address-level evidence in this document is consistent with patcher matcher intent. +Validated behavior from disassembly: -## Expected Failure/Panic if Unpatched +- `which == 0` (Unix): if `X2 == 0`, length validation is skipped and the proc RO syscall-mask pointer is updated with `NULL` +- `which == 1` (Mach): if `X2 == 0`, length validation is skipped and the task Mach filter pointer is updated with `NULL` +- `which == 2` (KOBJ/MIG): if `X2 == 0`, length validation is skipped and the task KOBJ filter pointer is updated with `NULL` +- Invalid `which` returns `EINVAL` (`0x16`) -- Syscall mask restrictions remain active; required syscall surface for bootstrap stays blocked. +This matches the XNU setter semantics closely enough to trust the mapping. -## Risk / Side Effects +## PCC 26.1 Upstream-Exact Reconstruction -- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions. -- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows. +On the exact PCC 26.1 research kernel matching the historical upstream script, the original C22 chain resolves as follows: -## Symbol Consistency Check +- apply-wrapper entry: `0xfffffe00093994f8` (`sub_FFFFFE00093994F8`) +- high-level caller: `0xfffffe000939c998` (`sub_FFFFFE000939C998`) +- upstream patch writes at: + - `0xfffffe0009399530` — original `BL` replaced by `mov x17, x0` + - `0xfffffe0009399584` — original tail branch replaced by branch to cave + - `0xfffffe0007ab5740` — code cave / data blob region -- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`. -- Canonical symbol hit(s): none (alias-based static matching used). 
-- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases. -- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe0007609236` is a patchpoint/data-site (`Not a function`), so function naming is inferred from surrounding control-flow and xrefs. +Validated wrapper behavior before patch: -## Open Questions and Confidence +- `sub_FFFFFE000939C998` calls `sub_FFFFFE00093994F8(proc, 0, unix_mask)` +- then `sub_FFFFFE00093994F8(proc, 1, mach_mask)` +- then `sub_FFFFFE00093994F8(proc, 2, kobj_mask)` +- failures map to the three familiar strings: + - `failed to apply unix syscall mask` + - `failed to apply mach trap mask` + - `failed to apply kernel MIG routine mask` -- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain. -- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial). +This is the older PCC 26.1 form of the same logic that appears as `_proc_apply_syscall_masks` on the newer kernel. -## Evidence Appendix +At the low wrapper level, `sub_FFFFFE00093994F8` does this: -- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above. -- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file. +- if `maskptr == NULL`, skip the pre-processing helper +- otherwise call helper at `0xfffffe0007b761e0` with: + - `x0` = zone/RO-mutation selector loaded from `word_FFFFFE0007A58354` + - `x1` = backing object/pointer loaded from `qword_FFFFFE0007A58358` + - `x2` = original mask pointer +- then load `x3 = masklen_bits` from a small selector table +- then tail-branch into setter core at `0xfffffe0007fc7220` -## Runtime + IDA Verification (2026-03-05) +The historical upstream patch hijacks exactly this seam. 
-- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00` -- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` -- Base VA: `0xFFFFFE0007004000` -- Runtime status: `hit` (2 patch writes, method_return=True) -- Included in `KernelJBPatcher.find_all()`: `True` -- IDA mapping: `2/2` points in recognized functions; `0` points are code-cave/data-table writes. -- IDA mapping status: `ok` (IDA runtime mapping loaded.) -- Call-chain mapping status: `ok` (IDA call-chain report loaded.) -- Call-chain validation: `1` function nodes, `2` patch-point VAs. -- IDA function sample: `_profile_syscallmask_destroy` -- Chain function sample: `_profile_syscallmask_destroy` -- Caller sample: `_profile_uninit`, `sub_FFFFFE00093AE678` -- Callee sample: `sub_FFFFFE0008302368`, `sub_FFFFFE00093AE70C` -- Verdict: `valid` -- Recommendation: Keep enabled for this kernel build; continue monitoring for pattern drift. -- Policy note: method is in the low-risk optimized set (validated hit on this kernel). 
-- Key verified points: -- `0xFFFFFE00093AE6E4` (`_profile_syscallmask_destroy`): mov x0,xzr [_syscallmask_apply_to_proc low-risk] | `ff8300d1 -> e0031faa` -- `0xFFFFFE00093AE6E8` (`_profile_syscallmask_destroy`): retab [_syscallmask_apply_to_proc low-risk] | `fd7b01a9 -> ff0f5fd6` -- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md` - +### Exact shellcode semantics + +Using the original upstream words from the script, the cave body at `0xfffffe0007ab5768` disassembles to: + +- preserve original args in `x19..x22` +- use the saved original helper arg in `x17` +- compute `ceil(masklen_bits / 8)` in `x4` +- call helper at `0xfffffe0007b76258` +- restore original `(proc, which, maskptr, masklen_bits)` +- branch back into setter core at `0xfffffe0007fc7220` + +Crucially, the helper call is made with: + +- `x0 = x17` +- `x1 = x21` (original `maskptr`) +- `x2 = 0` (offset) +- `x3 = cave base = 0xfffffe0007ab5740` +- `x4 = ceil(masklen_bits / 8)` + +and the cave base holds a large `0xFF` blob. + +That means the upstream patch mutates the pointed-to mask buffer in place so that the first `ceil(masklen_bits / 8)` bytes become `0xFF`, then installs that mask through the normal setter. + +### Final semantic conclusion for upstream C22 + +The original upstream C22 patch is therefore: + +- **not** “skip syscallmask apply” +- **not** “return success early” +- **not** “clear the mask pointer” + +It is: + +- **rewrite the mask contents to an all-ones allow mask, then continue through the normal setter path** + +This is the closest faithful behavioral description of historical C22. 
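The cave's size computation and in-place mutation reduce to a few lines of C. This is a behavioral model of the semantics recovered above, not code from the repository:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Behavioral model of the upstream C22 cave: compute
 * ceil(masklen_bits / 8) and force that many leading mask bytes to
 * 0xFF before the normal setter path continues. */
static size_t mask_bytes(uint64_t masklen_bits)
{
    return (size_t)((masklen_bits + 7) / 8);
}

static void force_all_ones(uint8_t *maskptr, uint64_t masklen_bits)
{
    if (maskptr == NULL)        /* NULL mask: cave skips the custom work */
        return;
    memset(maskptr, 0xFF, mask_bytes(masklen_bits));
}
```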
+ +### Implication for modern reimplementation + +If we want to reproduce upstream behavior exactly, the modern patch should preserve the apply/setter path and force the effective Unix/Mach/KOBJ masks to all ones. + +If we prefer a smaller and likely safer patch for bring-up, the `NULL`-mask strategy remains attractive, but it is a modern simplification rather than an exact upstream reconstruction. + +## Legacy Upstream Mapping + +The pasted legacy script matches the historical upstream `syscallmask` shellcode patch that this repo later labeled as C22. + +Concrete markers that identify it: + +- shellcode cave at `0xAB1740` +- redirect from `0x2395584` +- setup write at `0x2395530` (`mov x17, x0`) +- tail branch to `_proc_set_syscall_filter_mask` +- in-cave call to `_zalloc_ro_mut` + +Semantically, that upstream patch is **not** a destroy-path patch and **not** a plain early-return patch. It does this instead: + +1. If the incoming mask pointer is `NULL`, skip the custom work. +2. Otherwise compute `ceil(mask_bits / 8)`. +3. Use `_zalloc_ro_mut` to overwrite the target read-only mask storage with bytes sourced from an in-cave `0xFF` blob. +4. Resume into `_proc_set_syscall_filter_mask`. + +This means the historical upstream intent was: + +- keep the mask object/path alive +- but force the installed syscall/mach/kobj mask to become an **all-ones allow mask** + +That is an important semantic distinction from the newer `NULL`-mask strategy documented later in this file: + +- **legacy upstream shellcode** => installed mask exists and all bits are allowed +- **proposed modern narrow patch** => installed mask pointer becomes `NULL` + +Both strategies bypass this mask-based interception layer in practice, but they are not identical. If we want the closest behavioral match to the historical upstream patch, the modern equivalent should preserve the setter path and write an all-ones mask, not simply early-return. 
+ +## Fresh Independent Conclusions (`2026-03-06`) + +- The legacy pasted script maps to the historical upstream `syscallmask` shellcode patch later labeled `C22` in this repo. +- The old repo “C22” was a false-positive hit in `_profile_syscallmask_destroy`; that patch class did not control mask installation and is not a trustworthy reference for behavior. +- The faithful upstream C22 class is: hijack the low wrapper, preserve the normal setter path, mutate the effective Unix/Mach/KOBJ mask bytes to all `0xFF`, then tail-branch back into the setter. +- Source-level equivalence is closest to calling `proc_set_syscall_filter_mask(..., all_ones_mask, expected_len)` for `which = 0/1/2`, not `proc_set_syscall_filter_mask(..., NULL, 0)`. +- XNU cross-check matters here: an all-ones mask and a `NULL` mask are behaviorally different for KOBJ/MIG when `kobjidx` is not registered, so the two strategies must stay documented as separate patch classes. + +## New Plan + +1. Keep the rebuilt all-ones wrapper retarget as the authoritative C22 baseline, because it is the closest match to the historical upstream PCC 26.1 shellcode. +2. Treat `NULL`-mask installation as a separate modern experiment only; do not describe it as “what upstream C22 did”. +3. Re-check the live runtime interaction of C22 with `_proc_apply_syscall_masks`, `_proc_apply_sandbox`, and `_hook_cred_label_update_execve` before blaming any future boot issue on C22 alone. +4. If runtime anomalies remain, classify them by enforcement site: + - Unix syscall mask regression + - Mach trap mask regression + - KOBJ/MIG `KOBJ_IDX_NOT_SET` residual policy path +5. Only after the exact upstream-equivalent path is exhausted should we prototype a separate `NULL`-mask variant for comparison. 
+ +## What The Old C22 Implementation Actually Hit + +Historical runtime verification logged these writes: + +- `0xfffffe00093ae6e4`: `ff8300d1 -> e0031faa` +- `0xfffffe00093ae6e8`: `fd7b01a9 -> ff0f5fd6` + +IDA mapping shows both addresses are inside `_profile_syscallmask_destroy` at `0xfffffe00093ae6a4`, not inside any apply-to-proc routine. + +More specifically: + +- `_profile_syscallmask_destroy` normal path ends at `0xfffffe00093ae6dc` +- `0xfffffe00093ae6e0` is the start of the **underflow panic slow path** +- The old patch replaced instructions in that slow path only + +So the old “low-risk early return” did **not** disable syscall mask installation. It merely neutered a panic-reporting subpath after profile mask count underflow. + +## Why The Old Matcher Misidentified The Target + +The old patcher logic in `scripts/patchers/kernel_jb_patch_syscallmask.py` relies on: + +- string anchor `"syscallmask.c"` +- nearby function-start recovery using `PACIBSP` +- legacy 4-argument prologue heuristics from an older shellcode-based implementation + +On this kernel: + +- the legacy `_syscallmask_apply_to_proc` shape is gone +- the nearby string cluster includes create/destroy/populate helpers +- the nearest `PACIBSP` around the string is at `0xfffffe00093ae6e0`, which is **not a real function entry** for the apply path + +That is why the old low-risk fallback produced a false positive. + +## Real Targets That Matter + +### Safe semantic target + +`_proc_apply_syscall_masks` at `0xfffffe00093b1a88` + +This is the right place if the goal is: + +- allow processes to keep running without syscall/mach/kobj mask-based interception +- preserve surrounding control flow and error handling +- avoid corrupting parser state or shared kernel setter logic + +### Alternative narrower helper target + +`sub_FFFFFE00093AE5E8` at `0xfffffe00093ae5e8` + +This helper only appears to serve the apply layer here, but it is still a broader patch than changing the three call sites directly. 
+
+## Recommended Patch Strategy (Not Applied Here)
+
+Per your instruction, no repository code changes are landed here. This section documents the patch strategy that appears correct from the live re-analysis.
+
+### Preferred strategy: clear masks explicitly at the three call sites
+
+Patch the three `LDR X2, [X8]` instructions in `_proc_apply_syscall_masks` to `MOV X2, XZR`.
+
+Patchpoints:
+
+1. Unix mask load
+   - VA: `0xfffffe00093b1abc`
+   - Before: `020140f9` (`ldr x2, [x8]`)
+   - After: `e2031faa` (`mov x2, xzr`)
+
+2. Mach trap mask load
+   - VA: `0xfffffe00093b1af0`
+   - Before: `020140f9` (`ldr x2, [x8]`)
+   - After: `e2031faa` (`mov x2, xzr`)
+
+3. KOBJ/MIG mask load
+   - VA: `0xfffffe00093b1b28`
+   - Before: `020140f9` (`ldr x2, [x8]`)
+   - After: `e2031faa` (`mov x2, xzr`)
+
+Why this is preferred:
+
+- It preserves `_proc_apply_syscall_masks` control flow and error propagation.
+- It still calls the existing setter path for all three mask types.
+- The setter already supports `maskptr == NULL`, so this becomes a clean “clear installed filters” operation instead of a malformed early return.
+- It avoids stale inherited masks remaining attached to the process.
+
+### Secondary strategy: null out the helper argument once
+
+Single-site alternative:
+
+- VA: `0xfffffe00093ae600`
+- Before: `f30302aa` (`mov x19, x2`)
+- After: `f3031faa` (`mov x19, xzr`)
+
+This also forces all three setter calls to receive `NULL`, but it is slightly wider than the three-site `_proc_apply_syscall_masks` patch and depends on there being no unintended callers of this helper entry.
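If the three-site strategy were ever implemented, the writes should follow the repo guardrail of verifying expected bytes before swapping. A minimal fail-closed sketch using the little-endian A64 encodings from the patchpoints above; the helper is hypothetical, and the real patcher would derive both byte sequences from Capstone/Keystone rather than hardcoding them:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Little-endian A64 encodings from the patchpoints above. In the real
 * patcher these would come from Keystone-backed helpers, not constants. */
static const uint8_t LDR_X2_X8[4]  = { 0x02, 0x01, 0x40, 0xF9 }; /* ldr x2, [x8] */
static const uint8_t MOV_X2_XZR[4] = { 0xE2, 0x03, 0x1F, 0xAA }; /* mov x2, xzr  */

/* Fail-closed write: returns 1 on success, 0 if the site does not hold
 * the expected ldr x2, [x8] bytes. */
static int patch_clear_mask_load(uint8_t *image, size_t off)
{
    if (memcmp(image + off, LDR_X2_X8, 4) != 0)
        return 0;                   /* unexpected bytes: emit nothing */
    memcpy(image + off, MOV_X2_XZR, 4);
    return 1;
}
```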
+ +## What Not To Patch + +### Do not patch `_profile_syscallmask_destroy` + +- Address: `0xfffffe00093ae6a4` +- Reason: lifecycle cleanup only; old C22 hit this by mistake + +### Do not patch `_populate_syscall_mask` + +- Address: `0xfffffe00093cf7f4` +- Reason: parser/allocation path for sandbox profile data; breaking it risks malformed state during sandbox construction and early boot + +### Avoid patching the kernel-side setter core directly unless necessary + +- Entry used here: `0xfffffe0007fd0c74` +- Reason: shared proc/task RO setters are broader-scope and easier to overpatch than the sandbox apply wrapper + +## Expected Effect Of The Recommended Patch + +If the three load sites are rewritten to `mov x2, xzr`: + +- Unix syscall filter mask is cleared +- Mach trap filter mask is cleared +- Kernel MIG/kobject filter mask is cleared +- Later dispatchers no longer see an installed mask pointer for those channels +- The syscall/mach/kobj “bit clear -> consult MACF/Sandbox evaluator” layer is therefore skipped for these mask-based checks + +This does **not** disable every sandbox/MACF path. It only removes this specific mask-installation layer. + +## Why A Plain Early Return Is Inferior + +A naive early return from `_proc_apply_syscall_masks` would likely return success, but it may leave previously inherited masks untouched. + +That is especially risky because XNU inherits these masks across fork/task creation: + +- Unix: `research/reference/xnu/bsd/kern/kern_fork.c:1028` +- Mach/KOBJ: `research/reference/xnu/osfmk/kern/task.c:1759` + +So an early return can leave stale filter pointers in place, while the explicit `NULL`-setter strategy actively clears them. 
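The fork-inheritance hazard can be made concrete with a toy model; all names here are illustrative, not XNU's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model: fork copies the parent's installed mask pointer, so a
 * patch that merely returns early from the apply path leaves inherited
 * masks live, while the explicit NULL-setter strategy clears them. */
struct toy_proc { const uint8_t *unix_mask; };

static void toy_fork(struct toy_proc *child, const struct toy_proc *parent)
{
    child->unix_mask = parent->unix_mask;   /* kern_fork.c-style copy */
}

static void apply_early_return(struct toy_proc *p) { (void)p; } /* no-op */
static void apply_null_setter(struct toy_proc *p) { p->unix_mask = NULL; }
```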
+ +## Boot-Risk Assessment + +Most plausible failure modes if this family is patched incorrectly: + +- stale or invalid mask pointers remain attached to early boot tasks +- Mach/KOBJ traffic gets filtered unexpectedly during bootstrap +- parser/create/destroy bookkeeping becomes inconsistent +- a broad setter patch corrupts proc/task RO state outside the intended sandbox apply path + +The proposed three-site `mov x2, xzr` strategy is the narrowest approach found so far that still achieves the intended jailbreak effect. + +## Repository Implementation Status + +As of `2026-03-06`, the repository implementation has been updated to follow the revalidated C22 design: + +- locate the high-level apply manager from the three `failed to apply ... mask` strings +- identify the shared low wrapper that is called with `which = 0/1/2` +- replace the wrapper's pre-setter helper `BL` with `mov x17, x0` +- replace the wrapper's tail `B` with a branch to a code cave +- in the cave, build an all-ones blob, call the structurally-derived mutation helper, then tail-branch back into the normal setter core + +Focused dry-run validation on `ipsws/PCC-CloudOS-26.1-23B85/kernelcache.research.vphone600` now emits exactly 3 writes: + +- `0x02395530` — `mov x17,x0 [syscallmask C22 save RO selector]` +- `0x023955E8` — `b cave [syscallmask C22 mutate mask then setter]` +- `0x00AB1720` — `syscallmask C22 cave (ff blob 0x100 + structural mutator + setter tail)` + +This restores the intended patch class while avoiding the previous false-positive hit on `_profile_syscallmask_destroy`. + +User validation note: boot succeeded with the rebuilt C22 enabled on `2026-03-06`. + +## Bottom Line + +- The historical C22 implementation is mis-targeted. +- The real current “apply to proc” logic is `_proc_apply_syscall_masks`, not `_profile_syscallmask_destroy`. +- The historical upstream patch class is **not** `NULL`-mask install; it is **all-ones mask mutation plus normal setter continuation**. 
+- The rebuilt wrapper/cave retarget matches that upstream class and has already reached user-reported boot success on `2026-03-06`. +- `NULL`-mask install remains a separate modern alternative worth studying later, especially because KOBJ/MIG semantics differ when `kobjidx` is unset. diff --git a/research/kernel_patch_jb/patch_vm_fault_enter_prepare.md b/research/kernel_patch_jb/patch_vm_fault_enter_prepare.md index 07586de..013e56c 100644 --- a/research/kernel_patch_jb/patch_vm_fault_enter_prepare.md +++ b/research/kernel_patch_jb/patch_vm_fault_enter_prepare.md @@ -1,167 +1,473 @@ -# B9 `patch_vm_fault_enter_prepare` +# B9 `patch_vm_fault_enter_prepare` — re-analysis (2026-03-06) -## Patch Goal +## Scope -NOP a strict state/permission check site in `vm_fault_enter_prepare` identified by the `BL -> LDRB [..,#0x2c] -> TBZ/TBNZ` fingerprint. +- Kernel: `kernelcache.research.vphone600` +- Primary function: `vm_fault_enter_prepare` @ `0xfffffe0007bb8818` +- Existing patch point emitted by the patcher: `0xfffffe0007bb898c` +- Existing callee at that point: `sub_FFFFFE0007C4B7DC` +- Paired unlock callee immediately after the guarded block: `sub_FFFFFE0007C4B9A4` -## Binary Targets (IDA + Recovered Symbols) +## Executive Summary -- Recovered symbol: `vm_fault_enter_prepare` at `0xfffffe0007bb8818`. -- Anchor string: `"vm_fault_enter_prepare"` at `0xfffffe0007048ec8`. -- String xrefs in this function: `0xfffffe0007bb88c4`, `0xfffffe0007bb944c`. +The current `patch_vm_fault_enter_prepare` analysis was wrong. -## Call-Stack Analysis +The patched instruction at `0xfffffe0007bb898c` is **not** a runtime code-signing gate and **not** a generic policy-deny helper. It is the lock-acquire half of a `pmap_lock_phys_page()` / `pmap_unlock_phys_page()` pair used while consuming the page's `vmp_clustered` state. -Representative static callers: +So the current patch does this: -- `vm_fault_internal` (`0xfffffe0007bb6ef0`) -> calls `vm_fault_enter_prepare`. 
-- `sub_FFFFFE0007BB8294` (`0xfffffe0007bb8350`) -> calls `vm_fault_enter_prepare`. +- skips the physical-page / PVH lock acquire, +- still executes the protected critical section, +- still executes the corresponding unlock, +- therefore breaks lock pairing and page-state synchronization inside the VM fault path. -This confirms B9 is in the central page-fault preparation path. +That is fully consistent with a boot-time failure. -## Patch-Site / Byte-Level Change +## What the current patcher actually matches -Unique strict matcher hit in `vm_fault_enter_prepare`: +Current implementation: `scripts/patchers/kernel_jb_patch_vm_fault.py:7` -- `0xfffffe0007bb898c`: `BL sub_FFFFFE0007C4B7DC` -- `0xfffffe0007bb8990`: `LDRB W8, [X20,#0x2C]` -- `0xfffffe0007bb8994`: `TBZ W8, #5, loc_FFFFFE0007BB89C4` +The matcher looks for this in-function shape: -Patch operation: +- `BL target(rare)` +- `LDRB wN, [xM, #0x2c]` +- `TBZ/TBNZ wN, #bit, ...` -- NOP the BL at `0xfffffe0007bb898c`. +That logic resolves to exactly one site in `vm_fault_enter_prepare` and emits: -Bytes: +- VA: `0xFFFFFE0007BB898C` +- Patch: `944b0294 -> 1f2003d5` +- Description: `NOP [_vm_fault_enter_prepare]` -- before: `94 4B 02 94` (`BL ...`) +IDA disassembly at the matched site: + +```asm +0xfffffe0007bb8988 MOV X0, X27 +0xfffffe0007bb898c BL sub_FFFFFE0007C4B7DC +0xfffffe0007bb8990 LDRB W8, [X20,#0x2C] +0xfffffe0007bb8994 TBZ W8, #5, loc_FFFFFE0007BB89C4 +0xfffffe0007bb8998 LDR W8, [X20,#0x1C] +... +0xfffffe0007bb89c0 STR W8, [X20,#0x2C] +0xfffffe0007bb89c4 MOV X0, X27 +0xfffffe0007bb89c8 BL sub_FFFFFE0007C4B9A4 +``` + +The old assumption was: “call helper, then test a security flag, so NOP the helper.” + +The re-analysis result is: the call is a lock acquire, the tested bit is `m->vmp_clustered`, and the second call is the matching unlock. 
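The before-bytes in the patch log are enough to confirm the callee independently of IDA: an AArch64 `BL` encodes a signed 26-bit word offset, so the target falls out of the raw word. A small pure-Python check using only constants already quoted above:

```python
import struct

def bl_target(va: int, word: int) -> int:
    """Decode an AArch64 BL (opcode 0b100101, signed imm26 * 4) to its target VA."""
    assert (word >> 26) == 0b100101, "not a BL"
    imm26 = word & 0x03FF_FFFF
    if imm26 & (1 << 25):          # sign-extend the 26-bit field
        imm26 -= 1 << 26
    return va + imm26 * 4

site = 0xFFFFFE0007BB898C
(word,) = struct.unpack("<I", bytes.fromhex("944b0294"))  # before-bytes from the patch log
assert word == 0x94024B94
assert bl_target(site, word) == 0xFFFFFE0007C4B7DC        # the lock-acquire helper
```

This kind of decode-side verification is cheap to keep in the patcher's dry-run output.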
+ +## PCC 26.1 Research: upstream site vs derived site + +Using the user-loaded `PCC-CloudOS-26.1-23B85` `kernelcache.research.vphone600`, extracted locally to a temporary raw Mach-O, the upstream hard-coded site and our derived matcher do **not** land on the same instruction. + +### Upstream hard-coded site + +Upstream script site: + +- raw file offset: `0x00BA9E1C` +- mapped VA in `26.1 research`: `0xFFFFFE0007BADE1C` +- instruction: `TBZ W22, #3, loc_...DE28` + +Local disassembly around the upstream site: + +```asm +0xfffffe0007bade10 CBZ X27, loc_...DEE4 +0xfffffe0007bade14 LDR X0, [X27,#0x488] +0xfffffe0007bade18 B loc_...DEE8 +0xfffffe0007bade1c TBZ W22, #3, loc_...DE28 ; upstream NOP site +0xfffffe0007bade20 MOV W23, #0 +0xfffffe0007bade24 B loc_...E004 +0xfffffe0007bade28 ... +0xfffffe0007bade94 BL 0xfffffe0007f82428 +0xfffffe0007bade98 CBZ W0, loc_...DF54 +``` + +This means the upstream patch is not hitting the later helper call directly. It is patching a branch gate immediately before a larger validation/decision block. Replacing this `TBZ` with `NOP` forces fall-through into: + +- `MOV W23, #0` +- `B loc_...E004` + +So the likely effect is to skip the subsequent validation path entirely. + +### Current derived matcher site + +Current derived `patch_vm_fault_enter_prepare()` site on the **same 26.1 research raw**: + +- raw file offset: `0x00BA9BB0` +- mapped VA: `0xFFFFFE0007BADBB0` +- instruction: `BL 0xFFFFFE0007C4007C` + +The local patcher was run directly on the extracted `26.1 research` raw Mach-O and emitted: + +- `0x00BA9BB0 NOP [_vm_fault_enter_prepare]` + +Local disassembly around the derived site: + +```asm +0xfffffe0007badbac MOV X0, X27 +0xfffffe0007badbb0 BL 0xfffffe0007c4007c ; derived NOP site +0xfffffe0007badbb4 LDRB W8, [X20,#0x2C] +0xfffffe0007badbb8 TBZ W8, #5, loc_...DBE8 +... 
+0xfffffe0007badbe8 MOV X0, X27 +0xfffffe0007badbec BL 0xfffffe0007c40244 +``` + +And the two helpers decode as the same lock/unlock pair seen in later analysis: + +- `0xFFFFFE0007C4007C`: physical-page indexed lock acquire path (`LDXR` / `CASA` fast path, contended lock path) +- `0xFFFFFE0007C40244`: matching unlock path + +### Meaning of the mismatch + +This is the key clarification: + +- the **upstream** patch is very likely semantically related to the `vm_fault_enter_prepare` runtime validation path on `26.1 research`; +- the **derived patcher** in this repository does **not** reproduce that upstream site; +- instead, it drifts earlier in the same larger function region and NOPs a lock-acquire call. + +So the most likely situation is **not** “the upstream author typed the wrong function name.” + +The more likely situation is: + +1. upstream had a real site in `26.1 research`; +2. our repository later generalized that idea into a pattern matcher; +3. that matcher overfit the wrong local shape (`BL` + `LDRB [#0x2c]` + `TBZ`) and started hitting the wrong block. + +In other words: the current bug is much more likely a **bad derived matcher / bad retarget**, not proof that the original upstream `26.1` patch label was bogus. + +## IDA evidence: what the callees really are + +### `sub_FFFFFE0007C4B7DC` + +IDA shows a physical-page-index based lock acquisition routine, not a deny/policy check: + +- takes `X0` as page number / index input, +- checks whether the physical page is in-range, +- on the normal path acquires a lock associated with that physical page, +- on contended paths may sleep / block, +- returns only after the lock is acquired. 
+ +Key observations from IDA: + +- the function begins by deriving an indexed address from `X0` (`UBFIZ X9, X0, #0xE, #0x20`), +- it performs lock acquisition with `LDXR` / `CASA` on a fallback lock or calls into a lower lock primitive, +- it contains a contended-wait path (`assert_wait`, `thread_block` style flow), +- it does **not** contain a boolean policy return used by the caller. + +This matches `pmap_lock_phys_page(ppnum_t pn)` semantics. + +### `sub_FFFFFE0007C4B9A4` + +IDA shows the paired unlock routine: + +- same page-number based addressing scheme, +- direct fast-path jump into a low-level unlock helper for the backup lock case, +- range-based path that reconstructs a `locked_pvh_t`-like wrapper and unlocks the per-page PVH lock. + +This matches `pmap_unlock_phys_page(ppnum_t pn)` semantics. + +## XNU source mapping + +The matched basic block in `vm_fault_enter_prepare()` maps cleanly onto the `m->vmp_pmapped == FALSE && m->vmp_clustered` handling in XNU. + +Relevant source: `research/reference/xnu/osfmk/vm/vm_fault.c:3958` + +```c +if (m->vmp_pmapped == FALSE) { + if (m->vmp_clustered) { + if (*type_of_fault == DBG_CACHE_HIT_FAULT) { + if (object->internal) { + *type_of_fault = DBG_PAGEIND_FAULT; + } else { + *type_of_fault = DBG_PAGEINV_FAULT; + } + VM_PAGE_COUNT_AS_PAGEIN(m); + } + VM_PAGE_CONSUME_CLUSTERED(m); + } +} +``` + +The lock/unlock comes from `VM_PAGE_CONSUME_CLUSTERED(mem)` in `research/reference/xnu/osfmk/vm/vm_page_internal.h:999`: + +```c +#define VM_PAGE_CONSUME_CLUSTERED(mem) \ + MACRO_BEGIN \ + ppnum_t __phys_page; \ + __phys_page = VM_PAGE_GET_PHYS_PAGE(mem); \ + pmap_lock_phys_page(__phys_page); \ + if (mem->vmp_clustered) { \ + vm_object_t o; \ + o = VM_PAGE_OBJECT(mem); \ + assert(o); \ + o->pages_used++; \ + mem->vmp_clustered = FALSE; \ + VM_PAGE_SPECULATIVE_USED_ADD(); \ + } \ + pmap_unlock_phys_page(__phys_page); \ + MACRO_END +``` + +And those helpers are defined here: + +- 
`research/reference/xnu/osfmk/arm64/sptm/pmap/pmap.c:7520` — `pmap_lock_phys_page(ppnum_t pn)` +- `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap.c:7535` — `pmap_unlock_phys_page(ppnum_t pn)` +- `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:330` — `pvh_lock(unsigned int index)` +- `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:497` — `pvh_unlock(locked_pvh_t *locked_pvh)` + +## Why the current patch can break boot + +The current patch NOPs only the acquire side: + +- before: `BL sub_FFFFFE0007C4B7DC` +- after: `NOP` + +But the surrounding code still: + +- reads `m->vmp_clustered`, +- may increment `object->pages_used`, +- clears `m->vmp_clustered`, +- calls `sub_FFFFFE0007C4B9A4` unconditionally afterwards. + +That means the patch turns a balanced critical section into: + +1. no lock acquire, +2. mutate shared page/object state, +3. unlock a lock that was never acquired. + +Concrete risks: + +- PVH / backup-lock state corruption, +- waking or releasing waiters against an unowned lock, +- racing `m->vmp_clustered` / `object->pages_used` updates during active fault handling, +- early-boot hangs or panics when clustered pages are first faulted in. + +This is a much stronger explanation for the observed boot failure than the old “wrong security helper” theory. + +## What this patch actually changes semantically + +If applied successfully, the patch does **not** bypass code-signing validation. + +It only removes synchronization from this clustered-page bookkeeping path: + +- page-in accounting (`DBG_CACHE_HIT_FAULT` -> `DBG_PAGEIND_FAULT` / `DBG_PAGEINV_FAULT`), +- `object->pages_used++`, +- `m->vmp_clustered = FALSE`, +- speculative-page accounting. + +So the effective behavior is: + +- **not** “allow weird userspace methods,” +- **not** “disable vm fault code-signing rejection,” +- **not** “bypass a kernel deny path,” +- only “break the lock discipline around clustered-page consumption.” + +For the jailbreak goal, this patch is mis-targeted. 
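One way to harden future matchers against this failure class is to reject any candidate `BL` whose block soon calls a second helper located near the first target, since the acquire/release helpers here sit only 0x1C8 bytes apart (`0x...C4B7DC` / `0x...C4B9A4`). A toy heuristic over mock decode tuples; the tuple shape is a simplified stand-in for Capstone output and the function name is illustrative, not part of the repo:

```python
# each insn: (va, mnemonic, op_str) -- simplified stand-in for Capstone decode
BLOCK = [
    (0xFFFFFE0007BB8988, "mov",  "x0, x27"),
    (0xFFFFFE0007BB898C, "bl",   "0xFFFFFE0007C4B7DC"),   # candidate match
    (0xFFFFFE0007BB8990, "ldrb", "w8, [x20, #0x2c]"),
    (0xFFFFFE0007BB8994, "tbz",  "w8, #5, 0xFFFFFE0007BB89C4"),
    (0xFFFFFE0007BB89C4, "mov",  "x0, x27"),
    (0xFFFFFE0007BB89C8, "bl",   "0xFFFFFE0007C4B9A4"),   # paired unlock
]

def looks_like_lock_pair(insns, cand_idx, window=16, radius=0x400):
    """Flag a BL candidate that is soon followed by another BL into a
    nearby helper -- the acquire/release shape seen at this site."""
    target = int(insns[cand_idx][2], 16)
    for _va, mnem, ops in insns[cand_idx + 1 : cand_idx + 1 + window]:
        if mnem == "bl" and abs(int(ops, 16) - target) <= radius:
            return True
    return False

assert looks_like_lock_pair(BLOCK, 1)
```

The radius is a tunable assumption; the point is that a veto pass like this would have rejected the `0x...898C` site outright.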
+ +## Where the real security-relevant logic is in this function + +Two genuinely security-relevant regions exist in the same XNU function, but they are **not** the current patch site: + +1. `pmap_has_prot_policy(...)` handling in `research/reference/xnu/osfmk/vm/vm_fault.c:3943` + - this is where protection-policy constraints are enforced for the requested mapping protections. +2. `vm_fault_validate_cs(...)` in `research/reference/xnu/osfmk/vm/vm_fault.c:3991` + - this is the runtime code-signing validation path. + +So if the jailbreak objective is “allow runtime execution / invocation patterns without kernel interception,” the current B9 patch is aimed at the wrong block. + +## XNU source cross-mapping for the upstream 26.1 site + +The `26.1 research` upstream site now maps cleanly to the `cs_bypass` fast-path semantics in XNU. + +### Field mapping + +From the `vm_fault_enter_prepare` function prologue in `26.1 research`: + +```asm +0xfffffe0007bada60 MOV X21, X7 ; fault_type +0xfffffe0007bada64 MOV X25, X3 ; prot* +0xfffffe0007bada74 LDP X28, X8, [X29,#0x10] ; fault_info, type_of_fault* +0xfffffe0007bada78 LDR W22, [X28,#0x28] ; fault_info flags word +``` + +The XNU struct layout confirms that `fault_info + 0x28` is the packed boolean flag word, and **bit 3 is `cs_bypass`**: + +- `research/reference/xnu/osfmk/vm/vm_object_xnu.h:112` +- `research/reference/xnu/osfmk/vm/vm_object_xnu.h:116` + +### Upstream site semantics + +The upstream hard-coded instruction is: + +```asm +0xfffffe0007bade1c TBZ W22, #3, loc_...DE28 +0xfffffe0007bade20 MOV W23, #0 +0xfffffe0007bade24 B loc_...E004 +``` + +Since `W22.bit3 == fault_info->cs_bypass`, this branch means: + +- if `cs_bypass == 0`: continue into the runtime code-signing validation / violation path +- if `cs_bypass == 1`: skip that path, force `is_tainted = 0`, and jump to the common success/mapping continuation + +Patching `TBZ` -> `NOP` therefore forces the **`cs_bypass` fast path unconditionally**. 
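The claimed semantics of the upstream gate can be double-checked by field-decoding the raw word `36 18 00 76` (little-endian `0x36180076`) by hand. A pure-Python TBZ decoder using only constants from the listing above:

```python
import struct

def decode_tbz(va: int, word: int):
    """Decode AArch64 TBZ (b5:0110110:b40:imm14:Rt) -> (reg, bit, target VA)."""
    assert (word >> 24) & 0x7F == 0b0110110, "not a TBZ"
    b5 = (word >> 31) & 1
    bit = (b5 << 5) | ((word >> 19) & 0x1F)
    imm14 = (word >> 5) & 0x3FFF
    if imm14 & (1 << 13):          # sign-extend the 14-bit field
        imm14 -= 1 << 14
    reg = f"{'x' if b5 else 'w'}{word & 0x1F}"
    return reg, bit, va + imm14 * 4

(word,) = struct.unpack("<I", bytes.fromhex("76001836"))
reg, bit, target = decode_tbz(0xFFFFFE0007BADE1C, word)
assert (reg, bit) == ("w22", 3)      # W22 bit 3 == fault_info->cs_bypass
assert target == 0xFFFFFE0007BADE28  # branch taken when cs_bypass == 0
```

The decode confirms both the tested bit (`#3`, i.e. `cs_bypass`) and the branch destination (`loc_...DE28`, the validation path).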
+ +### XNU source correspondence + +This aligns with the source-level fast path in `vm_fault_cs_check_violation()`: + +- `research/reference/xnu/osfmk/vm/vm_fault.c:2831` +- `research/reference/xnu/osfmk/vm/vm_fault.c:2833` + +```c +if (cs_bypass) { + *cs_violation = FALSE; +} else if (VMP_CS_TAINTED(...)) { + *cs_violation = TRUE; +} ... +``` + +and with the caller in `vm_fault_validate_cs()` / `vm_fault_enter_prepare()`: + +- `research/reference/xnu/osfmk/vm/vm_fault.c:3208` +- `research/reference/xnu/osfmk/vm/vm_fault.c:3233` +- `research/reference/xnu/osfmk/vm/vm_fault.c:3991` +- `research/reference/xnu/osfmk/vm/vm_fault.c:3999` + +So the upstream patch is best understood as: + +- forcing `vm_fault_validate_cs()` to behave as though `cs_bypass` were already set, +- preventing runtime code-signing violation handling for this fault path, +- still preserving the rest of the normal page mapping flow. + +This is fundamentally different from the derived repository matcher, which NOPs a `pmap_lock_phys_page()` call and breaks lock pairing. + +## Proposed repair strategy + +### Recommended fix for B9 + +Retarget `patch_vm_fault_enter_prepare` to the **upstream semantic site**, not the current lock-site matcher. + +For `PCC 26.1 / 23B85 / kernelcache.research.vphone600`, the concrete patch is: + +- file offset: `0x00BA9E1C` +- VA: `0xFFFFFE0007BADE1C` +- before: `76 00 18 36` (`TBZ W22, #3, ...`) - after: `1F 20 03 D5` (`NOP`) -## Pseudocode (Before) +### Why this is the right site -```c -state_check(); -flag = map->state_byte; -if ((flag & BIT5) == 0) { - goto fast_path; -} +- It is in the correct `vm_fault_enter_prepare` control-flow region. +- It matches XNU's `cs_bypass` logic, not an unrelated lock helper. +- It preserves lock/unlock pairing and page accounting. +- It reproduces the **intent** of the upstream `26.1 research` patch rather than the accidental behavior of the derived matcher. 
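The retarget above reduces to a single guarded 4-byte write. A minimal sketch of the guard a patcher should apply before writing; the offset and bytes are the PCC 26.1 values quoted above, while the function itself is illustrative (a real implementation would derive the offset via the matcher, never hardcode it):

```python
def patch_tbz_to_nop(image: bytearray, off: int) -> None:
    """Replace the cs_bypass TBZ gate with NOP only if the expected bytes match."""
    before = bytes.fromhex("76001836")   # TBZ W22, #3, ...
    after = bytes.fromhex("1f2003d5")    # NOP
    assert image[off:off + 4] == before, "kernel drifted: refuse to patch blind"
    image[off:off + 4] = after

# exercised against a mock image carrying the expected bytes at 0x00BA9E1C
img = bytearray(0x00BA9E20)
img[0x00BA9E1C:0x00BA9E20] = bytes.fromhex("76001836")
patch_tbz_to_nop(img, 0x00BA9E1C)
assert img[0x00BA9E1C:0x00BA9E20] == bytes.fromhex("1f2003d5")
```

Refusing to write on a before-bytes mismatch is what turns a firmware drift into a loud dry-run failure instead of a silent mis-patch.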
+ +### How to implement the new matcher + +The current matcher should be replaced, not refined. + +#### Do not match + +- `BL` followed by `LDRB [X?,#0x2C]` and `TBZ/TBNZ` +- any site with a nearby paired lock/unlock helper call + +#### Do match + +Inside `vm_fault_enter_prepare`, find the unique gate with this semantic shape: + +```asm +... ; earlier checks on prot/page state +CBZ X?, error_path ; load helper arg or zero +LDR X0, [X?,#0x488] +B +TBZ Wflags, #3, validation_path ; Wflags = fault_info flags word +MOV Wtainted, #0 +B post_validation_success ``` -## Pseudocode (After) +Where: -```c -// state_check() skipped -flag = map->state_byte; -if ((flag & BIT5) == 0) { - goto fast_path; -} -``` +- `Wflags` is loaded from `[fault_info_reg, #0x28]` near the function prologue, +- bit `#3` is `cs_bypass`, +- the fall-through path lands at the common mapping continuation (`post_validation_success`), +- the branch target enters the larger runtime validation / violation block. -## Why This Matters +A robust implementation can anchor on: -`vm_fault_enter_prepare` is part of runtime page-fault handling, so this patch affects execution-time memory validation behavior, not just execve-time checks. +1. resolved function `vm_fault_enter_prepare` +2. in-prologue `LDR Wflags, [fault_info,#0x28]` +3. later unique `TBZ Wflags, #3, ...; MOV W?, #0; B ...` sequence -## Symbol Consistency Audit (2026-03-05) +### Prototype matcher result (2026-03-06) -- Status: `match` -- Recovered symbol, anchor strings, and strict patch fingerprint all align on the same function. +A local prototype matcher was run against the extracted `PCC-CloudOS-26.1-23B85` `kernelcache.research.vphone600` raw Mach-O with these rules: -## Patch Metadata +1. inside `vm_fault_enter_prepare`, discover the early `LDR Wflags, [fault_info,#0x28]` load, +2. track that exact `Wflags` register, +3. find `TBZ Wflags, #3, ...` followed immediately by `MOV W?, #0` and `B ...`. 
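The three rules above can be expressed as a small register-tracking pass. A toy version over mock decode tuples; the tuple shape mirrors Capstone's `(address, mnemonic, op_str)` but is simplified, and a real implementation should use semantic operand checks rather than string matching, per the project guardrails:

```python
import re

# simplified decode stream covering the relevant vm_fault_enter_prepare sites
INSNS = [
    (0xFFFFFE0007BADA78, "ldr", "w22, [x28, #0x28]"),   # prologue flags load
    (0xFFFFFE0007BADE1C, "tbz", "w22, #3, 0xFFFFFE0007BADE28"),
    (0xFFFFFE0007BADE20, "mov", "w23, #0"),
    (0xFFFFFE0007BADE24, "b",   "0xFFFFFE0007BAE004"),
]

def find_cs_bypass_gate(insns):
    """Rule 1: find `ldr wN, [xM, #0x28]`; rule 2: track that wN;
    rule 3: match `tbz wN, #3, ...` followed by `mov w?, #0` then `b ...`."""
    flags_reg = None
    hits = []
    for i, (va, mnem, ops) in enumerate(insns):
        if mnem == "ldr" and re.fullmatch(r"w\d+, \[x\d+, #0x28\]", ops):
            flags_reg = re.match(r"w\d+", ops).group(0)
        elif (mnem == "tbz" and flags_reg
              and ops.startswith(f"{flags_reg}, #3,")
              and i + 2 < len(insns)
              and insns[i + 1][1] == "mov" and insns[i + 1][2].endswith("#0")
              and insns[i + 2][1] == "b"):
            hits.append(va)
    return hits

assert find_cs_bypass_gate(INSNS) == [0xFFFFFE0007BADE1C]
```

On this mock stream the pass returns exactly the upstream semantic site and nothing else, matching the single-hit requirement stated below.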
-- Patch document: `patch_vm_fault_enter_prepare.md` (B9). -- Primary patcher module: `scripts/patchers/kernel_jb_patch_vm_fault.py`. -- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution. +Result: -## Target Function(s) and Binary Location +- prologue flag load: `0xFFFFFE0007BADA78` -> `LDR W22, [X28,#0x28]` +- matcher hit count: `1` +- unique hit: `0xFFFFFE0007BADE1C` -- Primary target: recovered symbol `vm_fault_enter_prepare`. -- Patchpoint: deny/fault guard branch NOP-ed at the validated in-function site. +This is the expected upstream semantic site and proves the repaired matcher can be made both specific and stable on `26.1 research` without relying on the old false-positive lock-call fingerprint. -## Kernel Source File Location +### Validation guidance -- Expected XNU source: `osfmk/vm/vm_fault.c`. -- Confidence: `high`. +For `26.1 research`, a repaired matcher should resolve to exactly one hit: -## Function Call Stack +- `0x00BA9E1C` -- Primary traced chain (from `Call-Stack Analysis`): -- Representative static callers: -- `vm_fault_internal` (`0xfffffe0007bb6ef0`) -> calls `vm_fault_enter_prepare`. -- `sub_FFFFFE0007BB8294` (`0xfffffe0007bb8350`) -> calls `vm_fault_enter_prepare`. -- This confirms B9 is in the central page-fault preparation path. -- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file. +and must **not** resolve to: -## Patch Hit Points +- `0x00BA9BB0` -- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`): -- `0xfffffe0007bb898c`: `BL sub_FFFFFE0007C4B7DC` -- `0xfffffe0007bb8990`: `LDRB W8, [X20,#0x2C]` -- `0xfffffe0007bb8994`: `TBZ W8, #5, loc_FFFFFE0007BB89C4` -- NOP the BL at `0xfffffe0007bb898c`. -- Bytes: -- before: `94 4B 02 94` (`BL ...`) -- The before/after instruction transform is constrained to this validated site. 
+If it still resolves to `0x00BA9BB0`, the matcher is still targeting the lock-pair block and is not fixed. -## Current Patch Search Logic +## Practical conclusion -- Implemented in `scripts/patchers/kernel_jb_patch_vm_fault.py`. -- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected. -- The patch is applied only after a unique candidate is confirmed in-function. -- Anchor string: `"vm_fault_enter_prepare"` at `0xfffffe0007048ec8`. -- Recovered symbol, anchor strings, and strict patch fingerprint all align on the same function. +### Verdict on the current patch -## Validation (Static Evidence) +- Keep `patch_vm_fault_enter_prepare` disabled. +- Do **not** re-enable the current NOP at `0xFFFFFE0007BB898C`. +- Treat the previous “Skip fault check” description as incorrect for `vphone600` research kernel. -- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site. -- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`. -- Address-level evidence in this document is consistent with patcher matcher intent. +### Likely root cause of boot failure -## Expected Failure/Panic if Unpatched +Most likely root cause: unbalanced `pmap_lock_phys_page()` / `pmap_unlock_phys_page()` behavior in the hot VM fault path. -- VM fault guard remains active and can block memory mappings/transitions required during modified execution flows. +### Recommended next research direction -## Risk / Side Effects +If we still want a B9-class runtime-memory patch, the next candidates to study are: -- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions. -- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows. 
+- `vm_fault_validate_cs()` +- `vm_fault_cs_check_violation()` +- `vm_fault_cs_handle_violation()` +- the `pmap_has_prot_policy()` / `cs_bypass` decision region -## Symbol Consistency Check +Those are the places that can plausibly affect runtime execution restrictions. The current B9 site cannot. -- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`. -- Canonical symbol hit(s): `vm_fault_enter_prepare`. -- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases. -- IDA-MCP lookup snapshot (2026-03-05): `vm_fault_enter_prepare` -> `vm_fault_enter_prepare` at `0xfffffe0007bb8818`. +## Minimal safe recommendation for patch schedule -## Open Questions and Confidence +For now, the correct action is not “retarget this exact byte write,” but: -- Open question: verify future firmware drift does not move this site into an equivalent but semantically different branch. -- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence). +- leave `patch_vm_fault_enter_prepare` disabled, +- mark its prior purpose label as wrong, +- open a fresh analysis track for the real code-signing fault-validation path. -## Evidence Appendix +## Evidence summary -- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above. -- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file. - -## Runtime + IDA Verification (2026-03-05) - -- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00` -- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600` -- Base VA: `0xFFFFFE0007004000` -- Runtime status: `hit` (1 patch writes, method_return=True) -- Included in `KernelJBPatcher.find_all()`: `False` -- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes. 
-- IDA mapping status: `ok` (IDA runtime mapping loaded.) -- Call-chain mapping status: `ok` (IDA call-chain report loaded.) -- Call-chain validation: `1` function nodes, `1` patch-point VAs. -- IDA function sample: `vm_fault_enter_prepare` -- Chain function sample: `vm_fault_enter_prepare` -- Caller sample: `sub_FFFFFE0007BB8294`, `vm_fault_internal` -- Callee sample: `__strncpy_chk`, `kfree_ext`, `lck_rw_done`, `sub_FFFFFE0007B15AFC`, `sub_FFFFFE0007B546BC`, `sub_FFFFFE0007B840E0` -- Verdict: `questionable` -- Recommendation: Hit is valid but patch is inactive in find_all(); enable only after staged validation. -- Key verified points: -- `0xFFFFFE0007BB898C` (`vm_fault_enter_prepare`): NOP [_vm_fault_enter_prepare] | `944b0294 -> 1f2003d5` -- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json` -- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md` - +- Function symbol: `vm_fault_enter_prepare` @ `0xfffffe0007bb8818` +- Current patchpoint: `0xfffffe0007bb898c` +- Current matched callee: `sub_FFFFFE0007C4B7DC` -> `pmap_lock_phys_page()` equivalent +- Paired callee: `sub_FFFFFE0007C4B9A4` -> `pmap_unlock_phys_page()` equivalent +- XNU semantic match: + - `research/reference/xnu/osfmk/vm/vm_fault.c:3958` + - `research/reference/xnu/osfmk/vm/vm_page_internal.h:999` + - `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap.c:7520` + - `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:330` + - `research/reference/xnu/osfmk/arm64/sptm/pmap/pmap_data.h:497` diff --git a/research/kernel_patch_jb/runtime_verification/runtime_verification_summary.md b/research/kernel_patch_jb/runtime_verification/runtime_verification_summary.md index 49de0b8..b85f7ab 100644 --- 
a/research/kernel_patch_jb/runtime_verification/runtime_verification_summary.md +++ b/research/kernel_patch_jb/runtime_verification/runtime_verification_summary.md @@ -91,6 +91,7 @@ ### `patch_io_secure_bsd_root` - `0x0136A1F0` / `0xFFFFFE000836E1F0` / b #0x1A4 [_IOSecureBSDRoot] / bytes `200d0034 -> 69000014` +- 2026-03-06 reanalysis: this historical hit is real but semantically wrong. It patches the `"SecureRoot"` name-check gate in `AppleARMPE::callPlatformFunction`, not the final `"SecureRootName"` deny return consumed by `IOSecureBSDRoot()`. The implementation was retargeted to `0x0136A464` / `0xFFFFFE000836E464` (`CSEL W22, WZR, W9, NE -> MOV W22, #0`). ### `patch_kcall10` diff --git a/scripts/patchers/kernel_jb.py b/scripts/patchers/kernel_jb.py index 33196d6..1f13cf2 100644 --- a/scripts/patchers/kernel_jb.py +++ b/scripts/patchers/kernel_jb.py @@ -66,7 +66,7 @@ class KernelJBPatcher( "patch_amfi_execve_kill_path", # JB-02 / A2 "patch_task_conversion_eval_internal", # JB-08 / A3 "patch_sandbox_hooks_extended", # JB-09 / A4 - # "patch_iouc_failed_macf", # JB-10 / A5 + "patch_iouc_failed_macf", # JB-10 / A5 ) # Group B: Pattern/string anchored methods. 
@@ -75,9 +75,9 @@ class KernelJBPatcher( "patch_proc_security_policy", # JB-11 / B6 "patch_proc_pidinfo", # JB-12 / B7 "patch_convert_port_to_map", # JB-13 / B8 - # "patch_bsd_init_auth", # JB-14 / B13 (disabled: autotest FAIL rc=2 on 2026-03-06) + "patch_bsd_init_auth", # JB-14 / B13 (retargeted 2026-03-06 to real _bsd_init rootauth gate) "patch_dounmount", # JB-15 / B12 - # "patch_io_secure_bsd_root", # JB-16 / B19 (disabled: autotest FAIL rc=2 on 2026-03-06) + "patch_io_secure_bsd_root", # JB-16 / B19 (retargeted 2026-03-06 to SecureRootName deny-return) "patch_load_dylinker", # JB-17 / B16 "patch_mac_mount", # JB-18 / B11 "patch_nvram_verify_permission", # JB-19 / B18 @@ -85,15 +85,15 @@ class KernelJBPatcher( "patch_spawn_validate_persona", # JB-21 / B14 "patch_task_for_pid", # JB-22 / B15 "patch_thid_should_crash", # JB-23 / B20 - # "patch_vm_fault_enter_prepare", # JB-24 / B9 (disabled: autotest FAIL rc=2 on 2026-03-06) + "patch_vm_fault_enter_prepare", # JB-24 / B9 (retargeted 2026-03-06 to upstream cs_bypass gate) "patch_vm_map_protect", # JB-25 / B10 ) # Group C: Shellcode/trampoline heavy methods. 
_GROUP_C_METHODS = ( - # "patch_cred_label_update_execve", # JB-03 / C21 (disabled: autotest FAIL rc=2 on 2026-03-06) - "patch_hook_cred_label_update_execve", # JB-04 / C23 (low-riskized) - "patch_kcall10", # JB-05 / C24 (low-riskized) + "patch_cred_label_update_execve", # JB-03 / C21 (disabled: reworked on 2026-03-06, pending boot revalidation) + "patch_hook_cred_label_update_execve", # JB-04 / C23 (faithful upstream trampoline) + "patch_kcall10", # JB-05 / C24 (ABI-correct rebuilt cave) "patch_syscallmask_apply_to_proc", # JB-07 / C22 ) diff --git a/scripts/patchers/kernel_jb_patch_bsd_init_auth.py b/scripts/patchers/kernel_jb_patch_bsd_init_auth.py index 929d838..2df2d2f 100644 --- a/scripts/patchers/kernel_jb_patch_bsd_init_auth.py +++ b/scripts/patchers/kernel_jb_patch_bsd_init_auth.py @@ -1,133 +1,137 @@ """Mixin: KernelJBPatchBsdInitAuthMixin.""" -from .kernel_jb_base import MOV_X0_0, _rd32 +from .kernel_jb_base import ARM64_OP_REG, ARM64_REG_W0, ARM64_REG_X0, NOP class KernelJBPatchBsdInitAuthMixin: - # ldr x0, [xN, #0x2b8] (ignore xN/Rn) - _LDR_X0_2B8_MASK = 0xFFFFFC1F - _LDR_X0_2B8_VAL = 0xF9415C00 - # cbz {w0|x0},