mirror of
https://github.com/Lakr233/vphone-cli.git
synced 2026-04-05 13:09:06 +08:00
Squash JB patch retarget and matcher cleanup
This commit is contained in:
@@ -92,17 +92,21 @@ Current default schedule note (2026-03-06): `patch_cred_label_update_execve` rem
| JB-12 | B | `patch_proc_pidinfo` | `_proc_pidinfo` | Allow pid 0 info | Y |
| JB-13 | B | `patch_convert_port_to_map` | `_convert_port_to_map_with_flavor` | Skip kernel map panic | Y |
| JB-14 | B | `patch_bsd_init_auth` | `_bsd_init` rootauth-failure branch | Ignore `FSIOC_KERNEL_ROOTAUTH` failure in `bsd_init`; same gate as base patch #3 when layered | Y |
| JB-15 | B | `patch_dounmount` | `_dounmount` | Allow unmount (strict in-function match) | Y |
| JB-15 | B | `patch_dounmount` | `_dounmount` | Allow unmount via upstream coveredvp cleanup-call NOP | Y |
| JB-16 | B | `patch_io_secure_bsd_root` | `AppleARMPE::callPlatformFunction` (`"SecureRootName"` return select), called from `IOSecureBSDRoot` | Force `"SecureRootName"` policy return to success without altering callback flow; implementation retargeted 2026-03-06 | Y |
| JB-17 | B | `patch_load_dylinker` | `_load_dylinker` | Skip strict `LC_LOAD_DYLINKER == "/usr/lib/dyld"` gate | Y |
| JB-18 | B | `patch_mac_mount` | `___mac_mount` | Bypass MAC mount deny path (strict site) | Y |
| JB-18 | B | `patch_mac_mount` | `___mac_mount` | Upstream mount-role wrapper bypass (`tbnz` NOP + role-byte zeroing) | Y |
| JB-19 | B | `patch_nvram_verify_permission` | `_verifyPermission` (NVRAM) | Allow NVRAM writes | Y |
| JB-20 | B | `patch_shared_region_map` | `_shared_region_map_and_slide_setup` | Force shared region path | Y |
| JB-21 | B | `patch_spawn_validate_persona` | `_spawn_validate_persona` | Skip persona validation | Y |
| JB-22 | B | `patch_task_for_pid` | `_task_for_pid` | Allow task_for_pid | Y |
| JB-20 | B | `patch_shared_region_map` | `_shared_region_map_and_slide_setup` | Force root-vs-process-root mount compare to succeed before Cryptex fallback | Y |
| JB-21 | B | `patch_spawn_validate_persona` | `_spawn_validate_persona` | Upstream dual-`cbz` persona helper bypass | Y |
| JB-22 | B | `patch_task_for_pid` | `_task_for_pid` | Allow task_for_pid via upstream early `pid == 0` gate NOP | Y |
| JB-23 | B | `patch_thid_should_crash` | `_thid_should_crash` | Prevent GUARD_TYPE_MACH_PORT crash | Y |
| JB-24 | B | `patch_vm_fault_enter_prepare` | `_vm_fault_enter_prepare` | Force `cs_bypass` fast path in runtime fault validation | Y |
| JB-25 | B | `patch_vm_map_protect` | `_vm_map_protect` | Allow VM protect | Y |
| JB-25 | B | `patch_vm_map_protect` | `_vm_map_protect` | Skip upstream write-downgrade gate in `vm_map_protect` | Y |
JB rework note (2026-03-06, remaining active methods): `JB-01`, `JB-08`, `JB-09`, `JB-06`, `JB-11`, `JB-12`, `JB-13`, `JB-17`, `JB-19`, and `JB-23` have now also been rechecked against `/Users/qaq/Desktop/patch_fw.py`, IDA PCC 26.1 research, `research/reference/xnu`, and focused dry-runs on both PCC 26.1 research/release. Of these, `JB-09` was materially pulled back to the upstream `mac_policy_ops` table-entry rewrite model (common allow stub retarget, matching `patch_fw.py` offsets) instead of per-hook body stubs; `JB-06` dropped its broad AMFI-text fallback; `JB-12` tightened to the exact early `ldr/cbz/bl/cbz` guard pair; and `JB-19` now requires a unique `krn.`-anchored verifyPermission gate across all string refs. The remaining six (`JB-01`, `JB-08`, `JB-11`, `JB-13`, `JB-17`, `JB-23`) matched upstream offsets and semantics without further retarget.
JB retarget note (2026-03-06): `JB-15`, `JB-18`, `JB-20`, `JB-21`, `JB-22`, and `JB-25` were rechecked against `/Users/qaq/Desktop/patch_fw.py`, IDA PCC 26.1 research, and `research/reference/xnu`. Current preferred runtime behavior is to match the known-good upstream semantic gate unless binary+source evidence clearly disproves it. In this pass, `JB-22` was pulled back from a helper-return rewrite to the upstream early `pid == 0` gate, and `JB-20` was pulled back from the later preboot-fallback compare to the upstream first root-mount compare.
JB-24 note (2026-03-06): the old derived matcher hit the `VM_PAGE_CONSUME_CLUSTERED()` lock/unlock sequence inside `vm_fault_enter_prepare`, i.e. `pmap_lock_phys_page()` / `pmap_unlock_phys_page()`. The implementation is now retargeted to the upstream PCC 26.1 research `cs_bypass` gate at `0x00BA9E1C` / `0xFFFFFE0007BADE1C`.
@@ -110,43 +114,42 @@ JB-24 note (2026-03-06): the old derived matcher hit the `VM_PAGE_CONSUME_CLUSTE
### Binary Patches Applied Over SSH Ramdisk
| # | Patch | Binary | Purpose | Regular | Dev | JB |
| --- | ------------------------- | ---------------------- | ----------------------------------------- | :-----: | :-: | :-: |
| 1 | `/%s.gl` -> `/AA.gl` | `seputil` | Gigalocker UUID fix | Y | Y | Y |
| 2 | NOP cache validation | `launchd_cache_loader` | Allow modified `launchd.plist` | Y | Y | Y |
| 3 | `mov x0,#1; ret` | `mobileactivationd` | Activation bypass | Y | Y | Y |
| 4 | Plist injection | `launchd.plist` | bash/dropbear/trollvnc/vphoned daemons | Y | Y | Y |
| 5 | `b` (skip jetsam guard) | `launchd` | Prevent jetsam panic on boot | - | Y | Y |
| 6 | `LC_LOAD_DYLIB` injection | `launchd` | Load `/cores/launchdhook.dylib` at launch | - | - | Y |
| # | Patch | Binary | Purpose | Regular | Dev | JB |
| --- | ------------------------- | ---------------------- | ------------------------------------------------------------- | :-----: | :-: | :-: |
| 1 | `/%s.gl` -> `/AA.gl` | `seputil` | Gigalocker UUID fix | Y | Y | Y |
| 2 | NOP cache validation | `launchd_cache_loader` | Allow modified `launchd.plist` | Y | Y | Y |
| 3 | `mov x0,#1; ret` | `mobileactivationd` | Activation bypass | Y | Y | Y |
| 4 | Plist injection | `launchd.plist` | bash/dropbear/trollvnc/vphoned daemons | Y | Y | Y |
| 5 | `b` (skip jetsam guard) | `launchd` | Prevent jetsam panic on boot | - | Y | Y |
| 6 | `LC_LOAD_DYLIB` injection | `launchd` | Load short alias `/b` (copy of `launchdhook.dylib`) at launch | - | - | Y |
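Patch 3 in the table above replaces a function entry with a two-instruction "return true" stub. As a hedged illustration (the helper name is hypothetical; only the AArch64 encodings for `mov x0,#1` and `ret` are standard), the stub can be built like this:

```python
import struct

# Standard AArch64 encodings for the patch-3 stub:
#   mov x0, #1  (MOVZ) -> 0xD2800020
#   ret                -> 0xD65F03C0
RETURN_TRUE_STUB = struct.pack("<II", 0xD2800020, 0xD65F03C0)

def apply_return_true(image: bytearray, func_off: int) -> None:
    """Overwrite a function entry with 'mov x0,#1 ; ret'.
    Illustrative helper, not the actual installer code."""
    image[func_off:func_off + len(RETURN_TRUE_STUB)] = RETURN_TRUE_STUB
```

Any caller of the patched entry then sees a successful (`x0 == 1`) return immediately, which is the entire activation-bypass effect.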
### Installed Components
| # | Component | Description | Regular | Dev | JB |
| --- | -------------------------- | -------------------------------------------------------------------------- | :-----: | :-: | :-: |
| 1 | Cryptex SystemOS + AppOS | Decrypt AEA + mount + copy to device | Y | Y | Y |
| 2 | GPU driver | AppleParavirtGPUMetalIOGPUFamily bundle | Y | Y | Y |
| 3 | `iosbinpack64` | Jailbreak tools (base set) | Y | Y | Y |
| 4 | `iosbinpack64` dev overlay | Replace `rpcserver_ios` with dev build | - | Y | - |
| 5 | `vphoned` | vsock HID/control daemon (built + signed) | Y | Y | Y |
| 6 | LaunchDaemons | bash/dropbear/trollvnc/rpcserver_ios/vphoned plists | Y | Y | Y |
| 7 | Procursus bootstrap | Bootstrap filesystem + optional Sileo deb | - | - | Y |
| 8 | BaseBin hooks | `systemhook.dylib` / `launchdhook.dylib` / `libellekit.dylib` -> `/cores/` | - | - | Y |
| # | Component | Description | Regular | Dev | JB |
| --- | -------------------------- | ------------------------------------------------------------------------------------------------------------------ | :-----: | :-: | :-: |
| 1 | Cryptex SystemOS + AppOS | Decrypt AEA + mount + copy to device | Y | Y | Y |
| 2 | GPU driver | AppleParavirtGPUMetalIOGPUFamily bundle | Y | Y | Y |
| 3 | `iosbinpack64` | Jailbreak tools (base set) | Y | Y | Y |
| 4 | `iosbinpack64` dev overlay | Replace `rpcserver_ios` with dev build | - | Y | - |
| 5 | `vphoned` | vsock HID/control daemon (built + signed) | Y | Y | Y |
| 6 | LaunchDaemons | bash/dropbear/trollvnc/rpcserver_ios/vphoned plists | Y | Y | Y |
| 7 | Procursus bootstrap | Bootstrap filesystem + optional Sileo deb | - | - | Y |
| 8 | BaseBin hooks | `systemhook.dylib` / `launchdhook.dylib` / `libellekit.dylib` -> `/cores/` plus `/b` alias for `launchdhook.dylib` | - | - | Y |
### CFW Installer Flow Matrix (Script-Level)
| Flow Item | Regular (`cfw_install.sh`) | Dev (`cfw_install_dev.sh`) | JB (`cfw_install_jb.sh`) |
| ----------------------------------------------------------------- | --------------------------------------------- | ----------------------------------------------- | --------------------------------------------- |
| Base CFW phases (1/7 -> 7/7) | Runs directly | Runs directly | Runs via `CFW_SKIP_HALT=1 zsh cfw_install.sh` |
| Dev overlay (`rpcserver_ios` replacement) | - | Y (`apply_dev_overlay`) | - |
| SSH readiness wait before install | Y (`wait_for_device_ssh_ready`) | - | Y (inherited from base run) |
| `remote_mount` behavior | Ensures mountpoint and verifies mount success | Best-effort mount only (`mount_apfs ... \|\| true`) | Ensures mountpoint and verifies mount success |
| launchd jetsam patch (`patch-launchd-jetsam`) | - | Y (base-flow injection) | Y (JB-1) |
| launchd dylib injection (`inject-dylib /cores/launchdhook.dylib`) | - | - | Y (JB-1) |
| Procursus bootstrap deployment | - | - | Y (JB-2) |
| BaseBin hook deployment (`*.dylib` -> `/mnt1/cores`) | - | - | Y (JB-3) |
| Additional input resources | `cfw_input` | `cfw_input` + `resources/cfw_dev/rpcserver_ios` | `cfw_input` + `cfw_jb_input` |
| Extra tool requirement beyond base | - | - | `zstd` |
| Halt behavior | Halts unless `CFW_SKIP_HALT=1` | Halts unless `CFW_SKIP_HALT=1` | Always halts after JB phases |
| Flow Item | Regular (`cfw_install.sh`) | Dev (`cfw_install_dev.sh`) | JB (`cfw_install_jb.sh`) |
| ---------------------------------------------------- | ------------------------------- | ----------------------------------------------- | --------------------------------------------- |
| Base CFW phases (1/7 -> 7/7) | Runs directly | Runs directly | Runs via `CFW_SKIP_HALT=1 zsh cfw_install.sh` |
| Dev overlay (`rpcserver_ios` replacement) | - | Y (`apply_dev_overlay`) | - |
| SSH readiness wait before install | Y (`wait_for_device_ssh_ready`) | - | Y (inherited from base run) |
| launchd jetsam patch (`patch-launchd-jetsam`) | - | Y (base-flow injection) | Y (JB-1) |
| launchd dylib injection (`inject-dylib /b`) | - | - | Y (JB-1) |
| Procursus bootstrap deployment | - | - | Y (JB-2) |
| BaseBin hook deployment (`*.dylib` -> `/mnt1/cores`) | - | - | Y (JB-3) |
| Additional input resources | `cfw_input` | `cfw_input` + `resources/cfw_dev/rpcserver_ios` | `cfw_input` + `cfw_jb_input` |
| Extra tool requirement beyond base | - | - | `zstd` |
| Halt behavior | Halts unless `CFW_SKIP_HALT=1` | Halts unless `CFW_SKIP_HALT=1` | Always halts after JB phases |
## Summary
@@ -221,3 +221,10 @@ return 1;
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->
## 2026-03-06 Upstream Rework Review
- `patch_fw.py` target remains authoritative here: research rewrites the function entry at `0x01633880` (`mov x0,#1 ; cbz x2,+8 ; str x0,[x2] ; ret`), and release lands at `0x015AE160`.
- IDA on `kernelcache.research.vphone600` confirms that `0xFFFFFE0008637880` is the entry of the tiny AMFI trustcache helper and that the first 12 bytes match the upstream patch body exactly.
- Runtime matcher stays structural instead of string-anchored because this helper does not expose a stable in-function string anchor on the stripped raw kernel. The retained reveal uses a tight in-function instruction shape inside `AppleMobileFileIntegrity::__text`, and focused dry-runs on both PCC 26.1 research/release remain unique.
- Focused dry-run (`2026-03-06`): research hits `0x01633880/84/88/8C`; release hits `0x015AE160/64/68/6C`.
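The upstream entry rewrite described above (`mov x0,#1 ; cbz x2,+8 ; str x0,[x2] ; ret`) can be assembled from standard AArch64 encodings; this is a sketch of the constant body only, not the actual patcher module:

```python
import struct

# Standard AArch64 encodings for the entry rewrite described above:
#   mov x0, #1    (MOVZ)              -> 0xD2800020
#   cbz x2, #+8   (skip the store)    -> 0xB4000042
#   str x0, [x2]  (unsigned offset 0) -> 0xF9000040
#   ret                               -> 0xD65F03C0
ENTRY_PATCH_WORDS = (0xD2800020, 0xB4000042, 0xF9000040, 0xD65F03C0)
ENTRY_PATCH_BODY = b"".join(struct.pack("<I", w) for w in ENTRY_PATCH_WORDS)
```

Writing this 16-byte body at the function entry makes the helper report success and, when a result pointer is passed in `x2`, store the same success value through it.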
@@ -92,6 +92,13 @@ The earlier BL/CBZ-site patching hit vnode-type assertion checks near function s
- `_hook_cred_label_update_execve` and related execve symbols are recovered, but several AMFI callback wrapper addresses in this doc remain unlabeled in `kernel_info`.
- Address-level control-flow evidence is still valid; symbol names are partially recovered only.
## Scheduler Status (2026-03-06)
- For the current PCC 26.1 `_cred_label_update_execve` path, A2 and C21 both land on the same shared deny-return site: `0xFFFFFE00086400FC`.
- That means enabling both in the same default JB schedule is redundant and produces a real patch-site conflict, not just a conceptual overlap.
- Current policy: keep A2 as a standalone / fallback patch for isolated testing, but remove it from the default schedule when C21 is enabled.
- Rationale: C21 preserves the same deny→allow effect at the shared return site and additionally handles the late success exits plus success-only `csflags` relaxation.
## Patch Metadata
- Patch document: `patch_amfi_execve_kill_path.md` (A2).
@@ -151,3 +151,9 @@ goto normal_path; // unconditional branch
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->
## 2026-03-06 Upstream Rework Review
- `patch_fw.py` targets the kernel-map panic bypass at `0x00B02E94`; release lands at `0x00AC6E94`. The current string-backed matcher still lands on those exact branch sites.
- IDA confirms the upstream block shape at `0xFFFFFE0007B06E94`: `cmp X16, X8 ; b.ne normal ; ... panic("userspace has control access to a kernel map...")`. This matches the kernel-map panic path in `research/reference/xnu/osfmk/kern/ipc_tt.c`.
- No code change was needed in this pass. Focused dry-run (`2026-03-06`): research `0x00B02E94`; release `0x00AC6E94`.
@@ -108,7 +108,7 @@ This preserves AMFI's normal validation / entitlement work while removing the st
This is intentionally the smallest credible C21-only design:
- it does not depend on `patch_amfi_execve_kill_path`;
- it no longer needs `patch_amfi_execve_kill_path` in the same default schedule; on PCC 26.1 they overlap on the same shared deny-return site, so C21 supersedes A2 there;
- it does not patch function entry;
- it does not forge `CS_VALID`, `CS_PLATFORM_BINARY`, `CS_ADHOC`, or other high-risk identity bits;
@@ -229,9 +229,10 @@ This is a much narrower and more defensible jailbreak patch than forcing an unco
## Current Status
- Scheduler note (`2026-03-06`): C21 and A2 both target the shared deny-return site `0x0163C0FC` on the extracted PCC 26.1 research kernel (`0xFFFFFE00086400FC` VA). C21 is treated as the superset patch on this path, so A2 is removed from the default schedule instead of being stacked with C21.
- Patch implementation updated in `scripts/patchers/kernel_jb_patch_cred_label.py` as C21-v3.
- C21-v1 has already booted successfully in restore testing.
- Default schedule remains disabled in `scripts/patchers/kernel_jb.py` until C21-v3 restore / boot validation is rerun.
- Default schedule now keeps C21 enabled on the current PCC 26.1 path while removing A2 from the same default list, because C21 supersedes A2 at the shared deny-return site.
- Expected dry-run patch shape for C21-v3 is:
- 1 deny cave;
- 1 success cave;
@@ -1,151 +1,178 @@
# B12 `patch_dounmount`
## Patch Goal
## Goal
Bypass a MAC authorization call in `dounmount` by NOP-ing a strict `mov w1,#0 ; mov x2,#0 ; bl ...` callsite.
Keep the jailbreak `dounmount` patch aligned with the known-good upstream design in `/Users/qaq/Desktop/patch_fw.py`.
## Binary Targets (IDA + Recovered Symbols)
- Preferred upstream target: `patch(0xCA8134, 0xD503201F)`.
- Current rework result: `match`.
- PCC 26.1 research hit: file offset `0x00CA8134`, VA `0xFFFFFE0007CAC134`.
- PCC 26.1 release hit: file offset `0x00C6C134`.
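The upstream reference is cited above as `patch(0xCA8134, 0xD503201F)`. Under the assumption that this convention means "write one little-endian instruction word at a raw file offset" (the helper below is a sketch, not code from `patch_fw.py`), it reduces to:

```python
import struct

AARCH64_NOP = 0xD503201F  # the word written at the dounmount callsite

def patch(image: bytearray, file_off: int, word: int) -> None:
    """Write one little-endian 32-bit AArch64 instruction word at a raw
    file offset; sketch of the upstream patch(offset, word) convention."""
    struct.pack_into("<I", image, file_off, word)
```

On disk the NOP lands as the byte sequence `1f 20 03 d5`, which is exactly the `-> 1f2003d5` notation used by the verification records later in this document.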
- Recovered symbols:
- `dounmount` at `0xfffffe0007cb6ea0`
- `safedounmount` at `0xfffffe0007cb6cec`
- Anchor string: `"dounmount: no coveredvp @%s:%d"` at `0xfffffe0007056950`.
- Anchor xref: `0xfffffe0007cb7700` in `sub_FFFFFE0007CB6EA0`.
## What Gets Patched
## Call-Stack Analysis
The patch NOPs the first BL in the `coveredvp` success-tail cleanup sequence inside `dounmount`:
- Static callers into `dounmount` include:
- `sub_FFFFFE0007CA45E4`
- `sub_FFFFFE0007CAAE28`
- `sub_FFFFFE0007CB6CEC`
- `sub_FFFFFE0007CB770C`
- This confirms the expected unmount path context.
```asm
mov x0, coveredvp_reg
mov w1, #0
mov w2, #0
mov w3, #0
bl <target> ; patched to NOP
mov x0, coveredvp_reg
bl <target>
```
## Patch-Site / Byte-Level Change
On PCC 26.1 research the validated sequence is:
- Intended matcher requires exact pair:
  - `mov w1, #0`
  - `mov x2, #0`
  - `bl ...`
- In current IDA state, the close callsite is:
  - `mov w1, #0x10 ; mov x2, #0 ; bl sub_FFFFFE0007CAB27C` at `0xfffffe0007cb75b0`
- Therefore strict matcher is not satisfied in this image state.
- Fail-closed behavior is correct: no patch should be emitted here unless exact semantics are revalidated.
```asm
0xFFFFFE0007CAC124 mov x0, x26
0xFFFFFE0007CAC128 mov w1, #0
0xFFFFFE0007CAC12C mov w2, #0
0xFFFFFE0007CAC130 mov w3, #0
0xFFFFFE0007CAC134 bl #0xC92AD8 ; patched
0xFFFFFE0007CAC138 mov x0, x26
0xFFFFFE0007CAC13C bl #0xC947E8
```
## Pseudocode (Before)
## Upstream Match vs Divergence
### Final status: `match`
- Upstream `patch_fw.py` uses file offset `0xCA8134`.
- The reworked matcher now emits exactly `0xCA8134` on PCC 26.1 research.
- The corresponding PCC 26.1 release hit is `0xC6C134`, which is the expected variant-shifted analogue of the same in-function sequence.
### Rejected drift site
The previous repo matcher had drifted to `0xCA81FC` on research.
That drift was treated as a red flag because:
- it did **not** match upstream,
- it matched a later teardown sequence with shape `mov x0, #0 ; mov w1, #0x10 ; mov x2, #0 ; bl ...`,
- that later sequence does **not** correspond to the upstream `coveredvp` cleanup gate in either IDA or XNU source structure.
Conclusion: the drifted site was incorrect and has been removed.
## Why This Site Is Correct
### Facts from XNU
From `research/reference/xnu/bsd/vfs/vfs_syscalls.c`, the successful `coveredvp != NULLVP` tail of `dounmount()` is:
```c
rc = mac_check(..., 0, 0);
if (rc != 0) {
    return rc;
}
if (!error) {
    if ((coveredvp != NULLVP)) {
        vnode_getalways(coveredvp);

        mount_dropcrossref(mp, coveredvp, 0);
        if (!vnode_isrecycled(coveredvp)) {
            pvp = vnode_getparent(coveredvp);
            ...
        }

        vnode_rele(coveredvp);
        vnode_put(coveredvp);
        coveredvp = NULLVP;

        if (pvp) {
            lock_vnode_and_post(pvp, NOTE_WRITE);
            vnode_put(pvp);
        }
    }
    ...
}
```
## Pseudocode (After)
### Facts from IDA / disassembly
```c
// BL mac_check replaced by NOP
// execution continues as if check passed
```
Inside the `dounmount` function recovered from the in-function panic anchor `"dounmount: no coveredvp"`, the validated research sequence is:
## Symbol Consistency
- optional call on the same `coveredvp` register just before the patch site,
- `mov x0, coveredvp_reg ; mov w1,#0 ; mov w2,#0 ; mov w3,#0 ; bl`,
- immediate follow-up `mov x0, coveredvp_reg ; bl`,
- optional parent-vnode post path immediately after.
- `dounmount` symbol resolution is consistent.
- Pattern-level mismatch indicates prior hardcoded assumptions are not universally valid.
This is the exact control-flow shape expected for the source-level `vnode_rele(coveredvp); vnode_put(coveredvp);` pair.
## Patch Metadata
### Inference
- Patch document: `patch_dounmount.md` (B12).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_dounmount.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.
The first BL is the upstream gate worth neutralizing because it is the only BL in that local cleanup pair that takes the covered vnode plus three zeroed scalar arguments, immediately followed by a second BL on the same vnode register. That shape matches the source-level release/put tail and matches the known-good upstream patch location.
## Target Function(s) and Binary Location
## Anchor Class
- Primary target: `dounmount` deny branch in VFS unmount path.
- Exact patch site (NOP on strict in-function match) is documented in this file.
- Primary runtime anchor class: `string anchor`.
- Concrete anchor: `"dounmount: no coveredvp"`.
- Why this anchor was chosen: the embedded symbol table is effectively empty on the local stripped payloads, IDA names are not stable, and this panic string lives inside the target function on both current research and release images.
- Release-kernel survivability: the patcher does not require recovered names or repo-exported symbol JSON at runtime; it only needs the in-image string reference plus the surrounding decoded control-flow shape.
## Kernel Source File Location
## Runtime Matcher Design
- Expected XNU source: `bsd/vfs/vfs_syscalls.c` (`dounmount`).
- Confidence: `high`.
The runtime matcher is intentionally single-path and source-backed:
## Function Call Stack
1. Find the panic string `"dounmount: no coveredvp"`.
2. Recover the containing function (`dounmount`) from its string xref.
3. Scan only that function for the unique 8-instruction sequence:
   - `mov x0, <reg>`
   - `mov w1, #0`
   - `mov w2, #0`
   - `mov w3, #0`
   - `bl`
   - `mov x0, <same reg>`
   - `bl`
   - `cbz x?, ...`
4. Patch the first `bl` with `NOP`.
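Under the assumption that the matcher operates on raw little-endian AArch64 words (the helper names below are illustrative, not the actual `kernel_jb_patch_dounmount.py` internals), the in-function scan and fail-closed selection can be sketched as:

```python
import struct

NOP = 0xD503201F  # AArch64 NOP

def _mov_x0_src(insn: int):
    # Register MOV is ORR x0, xzr, xN; return the source register or None.
    if insn & 0xFFE0FFFF == 0xAA0003E0:
        return (insn >> 16) & 0x1F
    return None

def find_cleanup_bl(func_bytes: bytes):
    """Byte offset of the first BL in the unique
    mov x0,<r>; mov w1,#0; mov w2,#0; mov w3,#0; bl; mov x0,<r>; bl; cbz x?
    window, or None when the match is not unique (fail closed)."""
    hits = []
    for off in range(0, len(func_bytes) - 31, 4):
        w = struct.unpack_from("<8I", func_bytes, off)
        reg = _mov_x0_src(w[0])
        if reg is None:
            continue
        # movz w1,#0 / movz w2,#0 / movz w3,#0 with the exact ABI registers
        if (w[1], w[2], w[3]) != (0x52800001, 0x52800002, 0x52800003):
            continue
        if w[4] >> 26 != 0x25:        # first BL (opcode bits 100101)
            continue
        if _mov_x0_src(w[5]) != reg:  # same coveredvp register again
            continue
        if w[6] >> 26 != 0x25:        # second BL
            continue
        if w[7] >> 24 != 0xB4:        # cbz x?, ...
            continue
        hits.append(off + 16)         # offset of the first BL in the window
    return hits[0] if len(hits) == 1 else None
```

Returning `None` for zero or multiple candidates is the same fail-closed behavior the document requires: no patch is emitted unless the in-function sequence is unique.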
- Primary traced chain (from `Call-Stack Analysis`):
- Static callers into `dounmount` include:
- `sub_FFFFFE0007CA45E4`
- `sub_FFFFFE0007CAAE28`
- `sub_FFFFFE0007CB6CEC`
- `sub_FFFFFE0007CB770C`
- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file.
The matcher now also fixes the ABI argument registers exactly (`x0`, `w1`, `w2`, `w3`) instead of accepting arbitrary zeroing moves, which makes the reveal path closer to the upstream call shape without depending on unstable symbol names.
## Patch Hit Points
## Why This Should Generalize
- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`):
- `mov w1, #0x10 ; mov x2, #0 ; bl sub_FFFFFE0007CAB27C` at `0xfffffe0007cb75b0`
- The before/after instruction transform is constrained to this validated site.
This matcher should survive PCC 26.1 research, PCC 26.1 release, and likely later close variants such as 26.3 release because it anchors on:
## Current Patch Search Logic
- an in-function panic string that is tightly coupled to `dounmount`, and
- a local cleanup sequence derived from stable VFS semantics (`coveredvp` release then put),
- using decoded register/immediate/control-flow structure rather than fixed offsets.
- Implemented in `scripts/patchers/kernel_jb_patch_dounmount.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Anchor string: `"dounmount: no coveredvp @%s:%d"` at `0xfffffe0007056950`.
- Anchor xref: `0xfffffe0007cb7700` in `sub_FFFFFE0007CB6EA0`.
The pattern is also cheap:
## Validation (Static Evidence)
- one string lookup,
- one xref-to-function recovery,
- one linear scan over a single function body,
- one 8-instruction decode window per candidate.
- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.
So it remains robust without becoming an expensive whole-image search.
## Expected Failure/Panic if Unpatched
## Validation
- Unmount requests remain blocked by guarded deny branch, breaking workflows that require controlled remount/unmount transitions.
### Focused dry-run
## Risk / Side Effects
Validated locally on extracted raw kernels:
- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows.
- PCC 26.1 research: `hit` at `0x00CA8134`
- PCC 26.1 release: `hit` at `0x00C6C134`
## Symbol Consistency Check
Both variants emit exactly one patch:
- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`.
- Canonical symbol hit(s): `dounmount`.
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `dounmount` -> `dounmount` at `0xfffffe0007cb6ea0`.
- `NOP [_dounmount upstream cleanup call]`
## Open Questions and Confidence
### Match verdict
- Open question: verify future firmware drift does not move this site into an equivalent but semantically different branch.
- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence).
- Upstream reference `/Users/qaq/Desktop/patch_fw.py`: `match`
- IDA PCC 26.1 research control-flow: `match`
- XNU `dounmount` success-tail semantics: `match`
## Evidence Appendix
## Files
- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.
- Patcher: `scripts/patchers/kernel_jb_patch_dounmount.py`
- Analysis doc: `research/kernel_patch_jb/patch_dounmount.md`
## Runtime + IDA Verification (2026-03-05)
## 2026-03-06 Rework
- Upstream target (`/Users/qaq/Desktop/patch_fw.py`): `match`.
- Final research site: `0x00CA8134` (`0xFFFFFE0007CAC134`).
- Anchor class: `string`. Runtime reveal starts from the in-image `"dounmount:"` panic string, resolves the enclosing function, then finds the unique near-tail `mov x0,<coveredvp> ; mov w1,#0 ; mov w2,#0 ; mov w3,#0 ; bl ; mov x0,<coveredvp> ; bl ; cbz x?` cleanup-call block.
- Why this site: it is the exact known-good upstream 4-arg zeroed callsite. The previously drifted `0x00CA81FC` call uses a different signature (`w1 = 0x10`) and a different control-flow region, so it is treated as a red-flag divergence and removed.
- Release/generalization rationale: the panic string is stable in stripped kernels, and the local 8-instruction shape is tight enough to stay cheap and robust across PCC 26.1 release / likely 26.3 release.
- Performance note: one string-xref resolution plus a single function-local linear scan.
- Focused PCC 26.1 research dry-run: `hit`, 1 write at `0x00CA8134`.
- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function nodes, `1` patch-point VAs.
- IDA function sample: `dounmount`
- Chain function sample: `dounmount`
- Caller sample: `safedounmount`, `sub_FFFFFE0007CAAE28`, `sub_FFFFFE0007CB770C`, `vfs_mountroot`
- Callee sample: `dounmount`, `lck_mtx_destroy`, `lck_rw_done`, `mount_dropcrossref`, `mount_iterdrain`, `mount_refdrain`
- Verdict: `questionable`
- Recommendation: Hit is valid but patch is inactive in find_all(); enable only after staged validation.
- Key verified points:
- `0xFFFFFE0007CB75B0` (`dounmount`): NOP [_dounmount MAC check] | `33cfff97 -> 1f2003d5`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->
@@ -163,3 +163,9 @@ This gate executes early in image loading. Without bypassing it, binaries can fa
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->
## 2026-03-06 Upstream Rework Review
- `patch_fw.py` continues to be the right target: research patches `0x01052A28`; release patches `0x01016A28`.
- IDA still shows the same upstream gate shape in the `/usr/lib/dyld`-anchored function: `bl policy_check ; cbz w0, allow ; mov w0,#2`. The current matcher keeps this one string-backed reveal and no longer carries any symbol-first branch.
- No retarget was needed in this pass; focused dry-run (`2026-03-06`) remains exact on both kernels.
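The gate shape cited above (`bl policy_check ; cbz w0, allow ; mov w0,#2`) can be recognized with standard AArch64 encoding predicates. The matcher internals here are an assumption for illustration; only the encodings themselves are standard:

```python
def is_bl(insn: int) -> bool:
    # BL imm26: top six bits are 100101
    return insn >> 26 == 0x25

def is_cbz_w0(insn: int) -> bool:
    # CBZ Wt (sf=0, opcode 0x34) with Rt == w0
    return insn & 0xFF00001F == 0x34000000

MOV_W0_2 = 0x52800040  # MOVZ w0, #2 (the EBADEXEC-style error load)

def looks_like_dylinker_gate(w0: int, w1: int, w2: int) -> bool:
    """True when three consecutive words have the documented gate shape."""
    return is_bl(w0) and is_cbz_w0(w1) and w2 == MOV_W0_2
```

A reveal built from such predicates depends only on instruction structure, which is why it keeps working on the stripped release kernel where symbol names are absent.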
@@ -1,192 +1,118 @@
# B11 `patch_mac_mount` (full static re-validation, 2026-03-05)
# B11 `patch_mac_mount` (`2026-03-06` rework)
|
||||
|
||||
## Scope and method
|
||||
## Verdict
|
||||
|
||||
- Re-done from scratch with static analysis only (IDA MCP), treating prior notes as untrusted.
|
||||
- Verified function flow, callers, syscall-entry reachability, and patch-site semantics on the current kernel image in IDA.
|
||||
- Final result: **match upstream** `/Users/qaq/Desktop/patch_fw.py`.
|
||||
- Upstream reference patches the PCC 26.1 research kernel at:
|
||||
- `0xFFFFFE0007CA9D54` / file offset `0x00CA5D54`
|
||||
- `0xFFFFFE0007CA9D88` / file offset `0x00CA5D88`
|
||||
- Reworked runtime matcher now lands on those same two sites again.
|
||||
- Previous local drift to `0xFFFFFE0007CA8EAC` / `0x00CA4EAC` is now treated as **wrong for this patch**: that site is inside the lower `prepare_coveredvp()` helper and corresponds to the ownership / `EPERM` gate, not the upstream mount-role wrapper gate.
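
The VA / file-offset pairs quoted above are consistent with a flat linear mapping from the base VA reported later in this document. A minimal sketch, assuming the raw extracted Mach-O really is mapped linearly from that base (per-segment maps would need more bookkeeping):

```python
# Sketch: VA <-> file-offset conversion for a flat raw kernel dump.
# BASE_VA is the base reported in the runtime-verification notes; the
# linear-mapping assumption is ours, not a verified property of the image.
BASE_VA = 0xFFFFFE0007004000

def va_to_off(va: int) -> int:
    return va - BASE_VA

def off_to_va(off: int) -> int:
    return BASE_VA + off

# The two B11 sites from this document round-trip consistently:
assert va_to_off(0xFFFFFE0007CA9D54) == 0x00CA5D54
assert va_to_off(0xFFFFFE0007CA9D88) == 0x00CA5D88
```

The same arithmetic reproduces the B17 pair (`0xFFFFFE00080769CC` -> `0x010729CC`), which is a useful cross-check when auditing new sites.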

## Patched function and exact gate
## Anchor class

- Patched function (`patched` group):
  - `patch_mac_mount__patched_fn_mount_gate` @ `0xFFFFFE0007CA8E08`
- Critical sequence:
  - `0xFFFFFE0007CA8EA8`: `BL patch_mac_mount__supp_mount_ctx_prepare`
  - `0xFFFFFE0007CA8EAC`: `CBNZ W0, 0xFFFFFE0007CA8EC8` **(patch target)**
  - `0xFFFFFE0007CA8EC8`: `MOV W0, #1` (deny/error return path)
- Meaning:
  - This gate consumes the return code from the context/policy-prep call and forces immediate failure (`W0=1`) on non-zero.
  - `patch_mac_mount` must neutralize the deny branch, not the BL call.
- Primary runtime anchor class: **string anchor**.
  - String used: `"mount_common()"`.
  - Why this anchor: it is present in the same VFS syscall compilation unit on the stripped PCC kernels, survives the empty embedded symtable case (`0 []`), and gives a stable way to recover the local `mount_common` function without IDA names or external symbol dumps.
- Secondary discovery after the string anchor: **semantic control-flow search** over nearby callers of the recovered `mount_common` function.

## Why this function is called (full trace from mount entry paths)
## Where the patch lands

- IDA-marked `supplement` functions:
  - `patch_mac_mount__supp_sys_mount_adapter` @ `0xFFFFFE0007CA9AF8`
  - `patch_mac_mount__supp_sys_mount_core` @ `0xFFFFFE0007CA9B38`
  - `patch_mac_mount__supp_sys_fmount` @ `0xFFFFFE0007CAA924`
  - `patch_mac_mount__supp_sys_fs_snapshot` @ `0xFFFFFE0007CBE51C`
  - `patch_mac_mount__supp_snapshot_mount_core` @ `0xFFFFFE0007CBED28`
  - `patch_mac_mount__supp_mount_common` @ `0xFFFFFE0007CA7868`
  - `patch_mac_mount__supp_mount_ctx_prepare` @ `0xFFFFFE0007CCD1B4`
- Syscall-table-backed handlers (data pointers observed in `__const`):
  - `0xFFFFFE0007740800` -> `patch_mac_mount__supp_sys_mount_adapter`
  - `0xFFFFFE0007742018` -> `patch_mac_mount__supp_sys_mount_core`
  - `0xFFFFFE00077429A8` -> `patch_mac_mount__supp_sys_fmount`
  - `0xFFFFFE00077428E8` -> `patch_mac_mount__supp_sys_fs_snapshot`
- Reachability into patched gate:
  - `patch_mac_mount__supp_mount_common` calls patched gate at `0xFFFFFE0007CA79F4`
  - `patch_mac_mount__supp_sys_mount_core` also directly calls patched gate at `0xFFFFFE0007CAA03C`
  - `patch_mac_mount__supp_sys_fmount` enters via `mount_common` (`0xFFFFFE0007CAAA3C`)
  - `patch_mac_mount__supp_snapshot_mount_core` enters via `mount_common` (`0xFFFFFE0007CBEF5C`)

### Site 1 — preboot-role reject gate

## Purpose of the patch (why required for unsigned payload + launchd hook workflow)
- Match site: `0xFFFFFE0007CA9D54` / `0x00CA5D54`
- Stock instruction: `tbnz w28, #5, ...`
- Patched instruction: `nop`
- Upstream relation: **exact match** to `/Users/qaq/Desktop/patch_fw.py` `patch(0xCA5D54, 0xD503201F)`.

- This gate is in the mount authorization/preflight path; the deny branch returns early, before the normal mount completion path.
- The downstream mount path is reached only if this gate does not abort (e.g., the later call to `sub_FFFFFE00082E11E4` in the patched function).
- The project's install/runtime dependency on successful mounts is explicit:
  - `scripts/cfw_install.sh` and `scripts/cfw_install_jb.sh` require `mount_apfs` success and hard-fail on mount failure.
  - The JB flow writes unsigned payload binaries under mounted rootfs paths and deploys hook dylibs under `/mnt1/cores/...`.
  - JB-1 modifies launchd to load `/cores/launchdhook.dylib`; if the mount path is blocked, the required filesystem state/artifacts are not reliably available.
- Therefore this patch is a mount-authorization bypass needed to keep the mount pipeline alive for:
  1. installing/using unsigned payload binaries, and
  2. making the launchd dylib-injection path viable.

### Site 2 — role-state byte gate

## Correctness note on patch style
- Match site: `0xFFFFFE0007CA9D88` / `0x00CA5D88`
- Stock instruction: `ldrb w8, [x8, #1]`
- Patched instruction: `mov x8, xzr`
- Upstream relation: **exact match** to `/Users/qaq/Desktop/patch_fw.py` `patch(0xCA5D88, 0xAA1F03E8)`.
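
The two patch words quoted above are plain AArch64 encodings and can be sanity-checked arithmetically before any write. A small sketch using the standard A64 hint and ORR shifted-register encodings (nothing project-specific):

```python
import struct

# Sanity-check the AArch64 patch words used by the dual-site B11 bypass.
# NOP is the architectural hint #0; `mov x8, xzr` assembles as
# `orr x8, xzr, xzr` in the shifted-register ORR encoding.
NOP = 0xD503201F

def orr_xd_xzr_xzr(rd: int) -> int:
    # ORR (shifted register), 64-bit: sf=1 opc=01, Rm=Rn=XZR(31)
    return 0xAA000000 | (31 << 16) | (31 << 5) | rd

assert orr_xd_xzr_xzr(8) == 0xAA1F03E8               # `mov x8, xzr` patch word
assert struct.pack("<I", NOP) == b"\x1f\x20\x03\xd5"  # little-endian on disk
```

Matching the computed words against the literals in `patch_fw.py` is a cheap guard against transcription errors when retargeting offsets.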

- Correct implementation: patch the deny branch (`CBNZ W0`) to `NOP`.
- Incorrect/old style: NOPing the preceding `BL` can leave a stale `W0` and spuriously force a deny.
- The current code path is aligned with the correct style (branch patch).

## Why these are the correct semantic gates

## IDA markings applied (requested two groups)
## Facts from IDA MCP on PCC 26.1 research

- `patched` group:
  - `patch_mac_mount__patched_fn_mount_gate`
  - patch-point comment at `0xFFFFFE0007CA8EAC`
- `supplement` group:
  - `patch_mac_mount__supp_*` functions listed above
  - patch context comments at `0xFFFFFE0007CA8EA8` and `0xFFFFFE0007CA8EC8`
- The `"mount_common()"` string xref recovers the main mount-flow function at `0xFFFFFE0007CA7868`.
- The upstream-matching wrapper candidate is a nearby caller that itself calls back into that `mount_common` function.
- In that wrapper, the first patch site is the sequence:
  - `tbnz w28, #5, loc_fail`
  - then, on the fail target, `mov w25, #1 ; b ...`
- In `research/reference/xnu/bsd/sys/mount_internal.h`, bit `0x20` is `KERNEL_MOUNT_PREBOOTVOL`.
- In the same wrapper, the second patch site is the sequence:
  - `add x8, x16, #0x70`
  - `ldrb w8, [x8, #1]`
  - `tbz w8, #6, loc_continue_to_mount_common`
  - `orr w8, w28, #0x10000`
  - `tbnz w28, #0, ...`
  - `mov w25, #1`
- The `tbz w8, #6` target flows into the block that calls back into the recovered `mount_common` function.
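
The `tbnz w28, #5` reading above rests on one arithmetic fact: a `#5` bit test selects exactly the `0x20` flag. A tiny sketch (flag value taken from the `mount_internal.h` line quoted above) makes the correspondence explicit:

```python
# KERNEL_MOUNT_PREBOOTVOL is 0x20 in mount_internal.h, so a
# `tbnz wFlags, #5, loc_fail` is a direct test of that mount-role flag.
KERNEL_MOUNT_PREBOOTVOL = 0x20

assert KERNEL_MOUNT_PREBOOTVOL == 1 << 5

def tbnz_taken(flags: int, bit: int) -> bool:
    # Branch is taken when the tested bit is set, mirroring tbnz semantics.
    return (flags >> bit) & 1 == 1

assert tbnz_taken(KERNEL_MOUNT_PREBOOTVOL, 5)   # preboot mount -> reject path
assert not tbnz_taken(0, 5)                     # ordinary mount -> falls through
```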

## Security impact

## Source-backed interpretation

- This bypass weakens MAC enforcement in the mount flow and expands what mount operations can proceed.
- It is functional for JB bring-up but should be treated as a high-impact policy bypass.
- Fact: `KERNEL_MOUNT_PREBOOTVOL` is bit 5 in `mount_internal.h`.
- Inference: the first gate is the early Preboot-volume reject path in the mount-role wrapper; NOPing it matches the known-to-work upstream behavior.
- Fact: the second gate tests a byte-derived bit before the wrapper continues into the `mount_common` call path.
- Inference: forcing that loaded byte to zero reproduces the upstream intent of always taking the stock `tbz ..., #6, continue` path.
- Because both patched sites are in the wrapper that selects whether execution can even reach `mount_common`, they are a better semantic fit for `patch_mac_mount` than the previously drifted lower helper branch.

## Symbol Consistency Audit (2026-03-05)

## Why the previous local drift was rejected

- Status: `partial`
- Recovered symbol `__mac_mount` exists at `0xfffffe0007cb4eec`.
- This document traces a deeper mount-policy path and uses analyst labels for internal helpers; those names are only partially represented in the recovered-symbol JSON.
- The previous local matcher patched `0xFFFFFE0007CA8EAC` / `0x00CA4EAC`.
- IDA + XNU correlation shows that sequence belongs to the lower `prepare_coveredvp()` helper.
- That helper sequence matches the source shape of the ownership / `EPERM` preflight:
  - `vnode_getattr(...)`
  - compare owner uid vs credential uid / root
  - on failure, set `W0 = 1`
- That is **not** the same gate as the upstream B11 design target.
- Since `/Users/qaq/Desktop/patch_fw.py` is known to work and the upstream sites still exist on PCC 26.1 research, keeping the drift would be a red flag; the rework therefore restores upstream semantics.

## Patch Metadata

## Runtime matcher design

- Patch document: `patch_mac_mount.md` (B11).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_mac_mount.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.
- Step 1: recover `mount_common` via the `"mount_common()"` string anchor.
- Step 2: scan only a local window around that function for callers that branch-link into it.
- Step 3: among those callers, require the unique paired shape:
  - a `tbnz <flags>, #5 -> mov #1` reject gate, and
  - a later `add ..., #0x70 ; ldrb ; tbz #6 -> block that calls mount_common` gate.
- Step 4: patch exactly those two instructions.
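
The paired-shape requirement in Step 3 can be sketched as a scan over little-endian A64 words. This is a hypothetical illustration, not the patcher's actual code: the window, offsets, and fail-closed policy are ours; the instruction masks are plain A64 encodings (`tbnz w<Rt>, #5` and the exact `ldrb w8, [x8, #1]` word):

```python
import struct

# Hypothetical sketch of the Step 1-4 paired-gate matcher: given a byte
# window around the recovered mount_common caller, require BOTH upstream
# gates, in program order, before reporting a match.
TBNZ_BIT5_MASK, TBNZ_BIT5_VAL = 0xFFF80000, 0x37280000  # tbnz w<Rt>, #5, ...
LDRB_X8_IMM1 = 0x39400508                               # ldrb w8, [x8, #1]

def find_paired_gates(window: bytes):
    words = struct.unpack(f"<{len(window) // 4}I", window)
    tbnz_off = next((i * 4 for i, w in enumerate(words)
                     if (w & TBNZ_BIT5_MASK) == TBNZ_BIT5_VAL), None)
    ldrb_off = next((i * 4 for i, w in enumerate(words)
                     if w == LDRB_X8_IMM1), None)
    if tbnz_off is None or ldrb_off is None or ldrb_off <= tbnz_off:
        return None  # fail closed: both gates, in order, or no match
    return tbnz_off, ldrb_off

# Synthetic window: nop, `tbnz w28, #5, +8`, nop, `ldrb w8, [x8, #1]`
win = struct.pack("<4I", 0xD503201F, 0x3728005C, 0xD503201F, 0x39400508)
assert find_paired_gates(win) == (4, 12)
```

A real matcher would additionally verify the `mov w25, #1` fail target and the wrapper's call back into `mount_common`, as the steps above describe.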

## Patch Goal

## Why this should generalize to PCC 26.1 release / likely 26.3 release

Bypass the mount-policy deny branch in the MAC mount flow so jailbreak filesystem setup can continue.

- It does not depend on IDA names, embedded symbols, or fixed addresses.
- The primary anchor is a diagnostic string already used elsewhere in the same VFS syscall unit and expected to survive stripped release kernels.
- The secondary matcher keys off stable semantics from XNU source and local control flow:
  - the `KERNEL_MOUNT_PREBOOTVOL` bit test,
  - the nearby role-state byte test,
  - and the wrapper-to-`mount_common` call relationship.
- This is more likely to survive research-vs-release layout drift than the previous shallow “first callee with `bl ; cbnz w0`” heuristic.

## Target Function(s) and Binary Location

## Performance notes

- Primary target: mount gate function at `0xfffffe0007ca8e08` (`CBNZ W0` deny-branch site).
- Patchpoint: `0xfffffe0007ca8eac` (`cbnz` -> `nop`).
- Runtime cost stays bounded:
  - one string lookup,
  - one local scan window around the recovered `mount_common` function,
  - semantic inspection of only the small set of nearby caller functions.
- It avoids whole-kernel heuristic sweeps and does not require expensive external symbol processing.

## Kernel Source File Location

## Focused dry-run (`2026-03-06`)

- Expected XNU source family: `security/mac_vfs.c` / `bsd/vfs/vfs_syscalls.c` mount policy bridge.
- Confidence: `medium`.
- Kernel: extracted PCC 26.1 research raw Mach-O `/tmp/vphone-kcache-research-26.1.raw`
- Result: `method_return=True`
- Emitted writes:
  - `0x00CA5D54` — `NOP [___mac_mount preboot-role reject]`
  - `0x00CA5D88` — `mov x8, xzr [___mac_mount role-state gate]`
- Upstream comparison: **exact offset match** with `/Users/qaq/Desktop/patch_fw.py`.

## Function Call Stack

## 2026-03-06 Rework

- Primary traced chain (from `Why this function is called (full trace from mount entry paths)`):
  - IDA-marked `supplement` functions:
    - `patch_mac_mount__supp_sys_mount_adapter` @ `0xFFFFFE0007CA9AF8`
    - `patch_mac_mount__supp_sys_mount_core` @ `0xFFFFFE0007CA9B38`
    - `patch_mac_mount__supp_sys_fmount` @ `0xFFFFFE0007CAA924`
    - `patch_mac_mount__supp_sys_fs_snapshot` @ `0xFFFFFE0007CBE51C`
- The upstream entry(s) and the patched decision node are linked by direct xref/callsite evidence in this file.
- Upstream target (`/Users/qaq/Desktop/patch_fw.py`): `match`.
- Final research sites: `0x00CA5D54` (`0xFFFFFE0007CA9D54`) and `0x00CA5D88` (`0xFFFFFE0007CA9D88`).
- Anchor class: `mixed string+heuristic`. The runtime reveal uses the stable `"mount_common()"` string only to bound the surrounding `vfs_syscalls.c` neighborhood, then picks the unique nearby function that contains both upstream local gates: the early `tbnz wFlags, #5` branch and the later `add xN, #0x70 ; ldrb wN, [xN, #1] ; tbz wN, #6` policy-byte test.
- Why these sites: they are the exact upstream dual-site bypass. The earlier drift to `0x00CA4EAC` patched a different `cbnz w0` gate in another helper and is therefore rejected as an upstream mismatch.
- Release/generalization rationale: the string keeps the search local to the right source module, while the paired semantic patterns identify the same function without relying on symbols. That combination should survive 26.1 release / likely 26.3 release better than a raw offset.
- Performance note: one string anchor plus a bounded neighborhood scan (~`0x9000` bytes) instead of a whole-kernel semantic walk.
- Focused PCC 26.1 research dry-run: `hit`, 2 writes at `0x00CA5D54` and `0x00CA5D88`.

## Patch Hit Points

- The patch hitpoint is selected by the contextual matcher and verified against local control flow.
- Before/after instruction semantics are captured in the patch-site evidence above.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_mac_mount.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).

## Pseudocode (Before)

```c
rc = mount_ctx_prepare(...);
if (rc != 0) {
    return 1;
}
```

## Pseudocode (After)

```c
rc = mount_ctx_prepare(...);
/* deny branch skipped */
```

## Validation (Static Evidence)

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with the patcher matcher intent.

## Expected Failure/Panic if Unpatched

- The MAC mount precheck deny branch returns an error early, causing mount-pipeline failure during CFW/JB install steps.

## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and a wider privileged surface for patched workflows.

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`.
- Canonical symbol hit(s): `__mac_mount`.
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe0007ca8e08` currently resolves to `sub_FFFFFE0007CA8C90` (size `0x1a4`).

## Open Questions and Confidence

- Open question: verify that future firmware drift does not move this site into an equivalent but semantically different branch.
- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence).

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function node, `1` patch-point VA.
- IDA function sample: `prepare_coveredvp`
- Chain function sample: `prepare_coveredvp`
- Caller sample: `__mac_mount`, `mount_common`
- Callee sample: `buf_invalidateblks`, `enablequotas`, `prepare_coveredvp`, `sub_FFFFFE0007B1B508`, `sub_FFFFFE0007B1C348`, `sub_FFFFFE0007B1C590`
- Verdict: `questionable`
- Recommendation: Hit is valid but patch is inactive in find_all(); enable only after staged validation.
- Key verified points:
  - `0xFFFFFE0007CB4260` (`prepare_coveredvp`): NOP [___mac_mount deny branch] | `e0000035 -> 1f2003d5`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

@@ -151,3 +151,10 @@ if ((perm_flags & BIT0) == 0) {
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Upstream Rework Review

- `patch_fw.py` patches the NVRAM gate at `0x01234034`; release lands at `0x011F8034`.
- In this pass the runtime reveal was tightened to enumerate all `"krn."` refs and require a unique preceding `tbz/tbnz` gate, instead of trusting the first ref only.
- IDA still confirms the patched site as the early verifyPermission guard immediately before the `"krn."` key-prefix check.
- Focused dry-run (`2026-03-06`): research `0x01234034`; release `0x011F8034`.

@@ -200,3 +200,10 @@ if (hash_type != hash_type) {
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Upstream Rework Review

- `patch_fw.py` patches the SHA256-only reject compare at `0x016405AC`; release lands at `0x015BAE8C`. The current matcher still lands on exactly those sites.
- In this pass the runtime reveal was tightened to a single string-backed path: `"AMFI: code signature validation failed"` -> caller -> BL target -> unique `cmp w0, #imm ; b.ne` reject gate.
- The old broad fallback (first `cmp w0, #imm` in AMFI text) was removed because it was not a justified cross-build matcher under the current rules.
- Focused dry-run (`2026-03-06`): research `0x016405AC`; release `0x015BAE8C`.

@@ -149,3 +149,10 @@ if (pid_or_flavor_guard == 0) return EINVAL;
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Upstream Rework Review

- `patch_fw.py` patches the two early guards at `0x01060A90` and `0x01060A98`; release lands at `0x01024A90` and `0x01024A98`.
- In this pass the runtime matcher was tightened from “first two early CBZ/CBNZ” to the precise local shape recovered from the `_proc_info` anchor: `ldr x0, [x0, #0x18] ; cbz x0, fail ; bl ... ; cbz/cbnz wN, fail`.
- This keeps the patch on the same upstream sites but removes ambiguity for later stripped release kernels.
- Focused dry-run (`2026-03-06`): research `0x01060A90/98`; release `0x01024A90/98`.

@@ -213,3 +213,9 @@ int proc_security_policy(...) {
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Upstream Rework Review

- `patch_fw.py` remains correct here: the function-entry rewrite still lands at `0x01063148/4C` on research and `0x01027148/4C` on release.
- The reveal path remains structural, from the shared `_proc_info` switch anchor into the small repeated BL target used by the switch cases. IDA/XNU review still matches `proc_security_policy()` semantics in `research/reference/xnu/bsd/kern/proc_info.c`.
- No retarget was needed in this pass; the matcher stays fail-closed and focused dry-runs remain unique on both kernels.

@@ -201,3 +201,11 @@ Interpretation:
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`
<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Upstream Rework Review

- This patch was materially reworked in this pass to match `/Users/qaq/Desktop/patch_fw.py`: it now rewrites the `mac_policy_ops` entries directly instead of patching each hook body.
- The runtime reveal is still string-backed (`"Sandbox"` + `"Seatbelt sandbox policy"` -> `mac_policy_conf` -> `mpc_ops`), but the final writes now land on the table entries themselves, matching upstream semantics and offsets.
- The shared allow target is recovered structurally from Sandbox text as the higher-address `mov x0, #0 ; ret` stub (`0x023B73BC` research, `0x022A78BC` release), matching the stub used by upstream `patch_fw.py`.
- Focused dry-run (`2026-03-06`): research now emits 36 `ops[idx] -> allow stub` writes at the upstream table-entry offsets (for example `0x00A54C30`, `0x00A54C50`, `0x00A54CE0`, `0x00A54E68`); release emits the analogous table-entry writes (`0x00A1C0B0`, `0x00A1C0D0`, `0x00A1C160`, `0x00A1C2E8`).
- This supersedes the earlier repo-local body-stub strategy for A4.
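
The table-entry strategy above can be modeled in a few lines. This is a hypothetical sketch, not the patcher's actual code: `table` stands in for the resident `mac_policy_ops` array of 8-byte little-endian function pointers, and `ALLOW_STUB` is an illustrative VA, not one of the verified stub addresses:

```python
import struct

# Hypothetical model of the A4 table-entry rewrite: overwrite selected
# mac_policy_ops slots (8-byte LE pointers) with the shared allow-stub VA.
ALLOW_STUB = 0xFFFFFE00093B73BC  # illustrative VA, not a verified address

def rewrite_ops(table: bytearray, indices, stub_va: int) -> None:
    for idx in indices:
        struct.pack_into("<Q", table, idx * 8, stub_va)

# Model a 4-entry ops table and redirect entries 1 and 3 to the stub.
table = bytearray(struct.pack("<4Q", 0x1000, 0x2000, 0x3000, 0x4000))
rewrite_ops(table, [1, 3], ALLOW_STUB)
assert struct.unpack("<4Q", table) == (0x1000, ALLOW_STUB, 0x3000, ALLOW_STUB)
```

The advantage over body-stubbing is that untouched slots keep their original pointers, so the patch surface is exactly the chosen hook set.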

@@ -1,194 +1,146 @@
# B17 `patch_shared_region_map`

## Re-validated from static analysis (IDA MCP)
## Goal

All checks below were redone from disassembly/decompilation; old assumptions were not trusted.
Keep the jailbreak `shared_region_map` patch aligned with the known-good upstream design in `/Users/qaq/Desktop/patch_fw.py` unless IDA + XNU clearly prove upstream is wrong.

### 1) Real call chain (why this path executes)
- Preferred upstream target: `patch(0x10729cc, 0xeb00001f)`.
- Final rework result: `match`.
- PCC 26.1 research hit: file offset `0x010729CC`, VA `0xFFFFFE00080769CC`.
- PCC 26.1 release hit: file offset `0x010369CC`.

`shared_region_map_and_slide_2_np` syscall path:

## What Gets Patched

1. Syscall entry points to `0xfffffe0008075560` (`jb17_supplement_shared_region_map_and_slide_2_np_syscall`).
2. It calls `0xfffffe0008075F98` (`jb17_supplement_shared_region_map_and_slide_locked`).
3. That calls `0xfffffe0008076260` (`jb17_patched_fn_shared_region_map_and_slide_setup`), the function containing the patch site.

The patch rewrites the first mount-comparison gate in `shared_region_map_and_slide_setup()` so the shared-cache vnode is treated as if it were already on the process root mount:

This is the shared-region map+slide setup path used during dyld shared cache mapping for process startup.

```asm
cmp  mount_reg, root_mount_reg   ; patched to cmp x0, x0
b.eq skip_preboot_lookup
str  xzr, [state, ...]
adrp/add ... "/private/preboot/Cryptexes"
```

### 2) The exact guard being bypassed
On PCC 26.1 research the validated sequence is:

Inside `jb17_patched_fn_shared_region_map_and_slide_setup`:
```asm
0xFFFFFE00080769CC  cmp  x8, x16                  ; patched
0xFFFFFE00080769D0  b.eq 0xFFFFFE0008076A98
0xFFFFFE00080769D4  str  xzr, [x23, #0x1d0]
0xFFFFFE00080769DC  adrl x0, "/private/preboot/Cryptexes"
0xFFFFFE00080769F0  bl   <vnode_lookup-like helper>
0xFFFFFE00080769F4  cbnz w0, 0xFFFFFE0008076D84
```

- First mount check:
  - `0xfffffe00080769CC` (`jb17_supplement_patchpoint_cmp_mount_vs_process_root`)
  - `cmp x8, x16 ; b.eq ...`
- If that fails, it enters the fallback:
  - lookup `"/private/preboot/Cryptexes"` at `0xfffffe00080769DC`
  - if the lookup fails: `cbnz w0, 0xfffffe0008076D84`
- Second mount check (the patched one):
  - `0xfffffe0008076A88` (`jb17_patched_fn_patchpoint_cmp_mount_vs_preboot_mount`)
  - original: `cmp x8, x16`
  - followed by `b.ne 0xfffffe0008076D84`

## Upstream Match vs Divergence

Fail target:
### Final status: `match`

- `0xfffffe0008076D84` (`jb17_supplement_patchpoint_fail_not_root_or_preboot`)
  - reaches `mov w25, #1` (EPERM) and exits through cleanup.
- Upstream `patch_fw.py` uses file offset `0x10729CC`.
- The reworked matcher now emits exactly `0x10729CC` on PCC 26.1 research.
- The corresponding PCC 26.1 release hit is `0x10369CC`, the expected variant-shifted analogue of the same first-compare gate.

So this guard is specifically "shared cache vnode mount must match either the process root mount or the preboot Cryptex mount".

### Rejected drift site

### 3) What the patch changes
The older local analysis focused on a later fallback compare after the preboot lookup succeeded.

At `0xfffffe0008076A88`:
That older focus is rejected because:
- it did **not** match the known-good upstream site,
- XNU source first checks `srfmp->vp->v_mount != rdir_vp->v_mount` before any preboot lookup,
- IDA on PCC 26.1 research still shows that first root-vs-process-root compare exactly at the upstream offset,
- matching the first compare is both narrower and more faithful to the upstream patch semantics.

- before: `cmp x8, x16`
- after: `cmp x0, x0`

## XNU Cross-Reference

Effect:

- The following `b.ne` is never taken.
- If the preboot lookup succeeded, the "mount mismatch vs preboot Cryptex" rejection is neutralized.
- The lookup-failure branch at `0xfffffe00080769F4` is unchanged.

## Why this is needed for unsigned binaries / launchd dylib flow

In this jailbreak flow, process startup still needs a successful shared-region map+slide. If this mount policy returns EPERM, dyld shared cache setup fails before normal userland execution continues. That blocks practical launch of unsigned/injected workflows (including launchd dylib-injection scenarios that depend on early process bring-up).

So B17 is not a "generic code-sign bypass"; it is a targeted bypass of a mount-origin policy in shared-region setup that otherwise rejects the map request.

## IDA rename markers added

The two requested groups were applied in IDA:

- `supplement` group:
  - `jb17_supplement_shared_region_map_and_slide_2_np_syscall`
  - `jb17_supplement_shared_region_map_and_slide_locked`
  - `jb17_supplement_patchpoint_cmp_mount_vs_process_root`
  - `jb17_supplement_patchpoint_preboot_lookup_begin`
  - `jb17_supplement_patchpoint_fail_not_root_or_preboot`
- `patched function` group:
  - `jb17_patched_fn_shared_region_map_and_slide_setup`
  - `jb17_patched_fn_patchpoint_cmp_mount_vs_preboot_mount`
  - `jb17_patched_fn_patchpoint_bne_fail_preboot_mount`

## Risk

This weakens a kernel policy that constrains shared-cache mapping source mounts, so it broadens the accepted mapping contexts and may reduce expected filesystem trust boundaries.

## Symbol Consistency Audit (2026-03-05)

- Status: `partial`
- Recovered symbols include the `_shared_region_map_and_slide` family, but not every internal setup-helper name used in this doc.
- Path-level conclusions remain based on disassembly/xref consistency.

## Patch Metadata

- Patch document: `patch_shared_region_map.md` (B17).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_shared_region.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.

## Patch Goal

Neutralize a shared-region mount-origin comparison guard that returns EPERM in map-and-slide setup.

## Target Function(s) and Binary Location

- Primary target: shared-region setup at `0xfffffe0008076260` (analyst label).
- Patchpoint: `0xfffffe0008076a88` (`cmp x8, x16` -> `cmp x0, x0`).

## Kernel Source File Location

- Expected XNU source: `osfmk/vm/vm_shared_region.c` (shared-region map-and-slide setup path).
- Confidence: `high`.

## Function Call Stack

- Call-path evidence is derived from IDA xrefs and callsite traversal in this document.
- The patched node sits on the documented execution-critical branch for this feature path.

## Patch Hit Points

- The patch hitpoint is selected by the contextual matcher and verified against local control flow.
- Before/after instruction semantics are captured in the patch-site evidence above.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_shared_region.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).

## Pseudocode (Before)
Source: `research/reference/xnu/bsd/vm/vm_unix.c:1472`

```c
if (mount != proc_root_mount && mount != preboot_mount) {
    return EPERM;
assert(rdir_vp != NULL);
if (srfmp->vp->v_mount != rdir_vp->v_mount) {
    vnode_t preboot_vp = NULL;
    error = vnode_lookup(PREBOOT_CRYPTEX_PATH, 0, &preboot_vp, vfs_context_current());
    if (error || srfmp->vp->v_mount != preboot_vp->v_mount) {
        error = EPERM;
        ...
        goto done;
    }
}
```

## Pseudocode (After)
### Fact

```c
if (mount != mount) {   /* cmp x0, x0: condition can never hold */
    return EPERM;
}
```

- The first policy gate is the direct root-mount comparison.
- Only if that comparison fails does the code fall into the `PREBOOT_CRYPTEX_PATH` lookup and the later preboot-mount comparison.
- The validated PCC 26.1 research instruction at `0xFFFFFE00080769CC` is the binary analogue of the first `srfmp->vp->v_mount != rdir_vp->v_mount` check.

## Validation (Static Evidence)
### Inference

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with the patcher matcher intent.

Patching the first compare to `cmp x0, x0` is the narrowest upstream-compatible bypass because it skips the entire fallback preboot lookup path while leaving the rest of the shared-region setup logic intact.
|
||||
|
||||
## Expected Failure/Panic if Unpatched

- Shared-region setup returns EPERM on mount-origin mismatch; dyld shared cache mapping for startup can fail.

## Anchor Class

- Primary runtime anchor class: `string + local CFG`.
- Concrete string anchor: `"/private/preboot/Cryptexes"`.
- Why this anchor was chosen: the embedded symtable is effectively empty on stripped kernels, but this path string lives inside the exact helper that contains the mount-origin policy.
- Why the local CFG matters: the runtime matcher selects the compare immediately preceding the Cryptexes lookup block by requiring `cmp reg,reg ; b.eq forward ; str xzr, [...]` right before the string reference.
## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows.

## Runtime Matcher Design

The runtime matcher is intentionally single-path and upstream-aligned:

1. Recover the helper from the in-image string `"/private/preboot/Cryptexes"`.
2. Find the string reference(s) inside that function.
3. For the local window immediately preceding the string reference, match:
   - `cmp x?, x?`
   - `b.eq forward`
   - `str xzr, [...]`
4. Patch that `cmp` to `cmp x0, x0`.
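The local-window match in step 3 can be sketched as a straight opcode-mask scan. This is an illustrative sketch, not the patcher's actual code; the function name and the `0x40`-byte window size are assumptions:

```python
import struct

def match_gate(code: bytes, str_ref_off: int, window: int = 0x40):
    """Return the offset of the unique `cmp xN,xM ; b.eq <forward> ; str xzr,[...]`
    triple in the window just before a string-reference offset, else None."""
    hits = []
    for off in range(max(0, str_ref_off - window), str_ref_off - 8, 4):
        w0, w1, w2 = struct.unpack_from("<3I", code, off)
        if (w0 & 0xFFE0FC1F) != 0xEB00001F:   # cmp xN, xM (SUBS XZR, no shift)
            continue
        if (w1 & 0xFF00001F) != 0x54000000:   # b.cond with cond == EQ
            continue
        if (w1 >> 5) & 0x40000:               # imm19 sign bit set -> backward branch
            continue
        if (w2 & 0xFFC0001F) != 0xF900001F:   # str xzr, [xN, #imm]
            continue
        hits.append(off)
    return hits[0] if len(hits) == 1 else None  # ambiguous candidates are rejected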

This reproduces the exact upstream site without relying on IDA names or runtime symbol tables.

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`.
- Canonical symbol hit(s): none (alias-based static matching used).
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe0008075560` currently resolves to `eventhandler_prune_list` (size `0x140`).

## Open Questions and Confidence

- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain.
- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial).

## Why This Should Generalize

This matcher should survive PCC 26.1 research, PCC 26.1 release, and likely nearby stripped releases such as 26.3 because it relies on:

- a stable embedded preboot-Cryptexes path string,
- the source-backed control-flow shape directly before that lookup,
- a local window rather than a whole-kernel heuristic scan.

Runtime cost remains modest:

- one string lookup,
- one xref-to-function recovery,
- one very small local scan around the string reference.

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)
- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function node, `1` patch-point VA.
- IDA function sample: `sub_FFFFFE000807F5F4`
- Chain function sample: `sub_FFFFFE000807F5F4`
- Caller sample: `_shared_region_map_and_slide`
- Callee sample: `mac_file_check_mmap`, `sub_FFFFFE0007AC5540`, `sub_FFFFFE0007B15AFC`, `sub_FFFFFE0007B84334`, `sub_FFFFFE0007B84C5C`, `sub_FFFFFE0007C11F88`
- Verdict: `questionable`
- Recommendation: the hit is valid, but the patch is inactive in `find_all()`; enable it only after staged validation.
- Key verified points:
  - `0xFFFFFE000807FE1C` (`sub_FFFFFE000807F5F4`): cmp x0,x0 [_shared_region_map_and_slide_setup] | `1f0110eb -> 1f0000eb`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## Validation

### Focused dry-run

Validated locally on extracted raw kernels:

- PCC 26.1 research: `hit` at `0x010729CC`
- PCC 26.1 release: `hit` at `0x010369CC`

Both variants emit exactly one patch:

- `cmp x0,x0 [_shared_region_map_and_slide_setup]`

### Match verdict

- Upstream reference `/Users/qaq/Desktop/patch_fw.py`: `match`
- IDA PCC 26.1 research control-flow: `match`
- XNU shared-region mount-origin semantics: `match`

## Files

- Patcher: `scripts/patchers/kernel_jb_patch_shared_region.py`
- Analysis doc: `research/kernel_patch_jb/patch_shared_region_map.md`

## 2026-03-06 Rework

- Upstream target (`/Users/qaq/Desktop/patch_fw.py`): `match`.
- Final research site: `0x010729CC` (`0xFFFFFE00080769CC`).
- Anchor class: `string + local CFG`. Runtime reveal starts from the in-image `"/private/preboot/Cryptexes"` string and patches the first local `cmp ... ; b.eq` mount gate immediately before the lookup block.
- Why this site: it is the exact known-good upstream root-vs-process-root compare. The older focus on the later preboot-fallback compare is treated as stale divergence and is no longer accepted.
- Release/generalization rationale: the path string and the immediate compare/branch/zero-store scaffold are source-backed and should survive stripped release kernels.
- Performance note: one string-xref resolution plus a tiny local scan near the string reference.
- Focused PCC 26.1 research dry-run: `hit`, 1 write at `0x010729CC`.

@@ -1,192 +1,108 @@

# B14 `patch_spawn_validate_persona`

## Revalidated target (static, IDA MCP)

- Kernel analyzed: `/Users/qaq/Desktop/kernelcache.research.vphone600.macho` (stripped symbols).
- Patcher (`scripts/patchers/kernel_jb_patch_spawn_persona.py`) resolves the newer-layout gate and emits:
  - file offset `0x00FA694C` -> `b #0x130`
- In IDA VA space, the same site is:
  - function `jb_b16_b14_patch_spawn_validate_persona_entry` @ `0xFFFFFE0007FA898C`
  - patch point `0xFFFFFE0007FAA94C`
  - original: `TBZ W8, #1, loc_FFFFFE0007FAAA7C`
  - patched: unconditional `B loc_FFFFFE0007FAAA7C`

## Verdict

- Preferred upstream reference: `/Users/qaq/Desktop/patch_fw.py`.
- Final status on PCC 26.1 research: **match upstream**.
- Upstream patch sites:
  - `0x00FA7024` -> `nop`
  - `0x00FA702C` -> `nop`
- Final release-variant analogue:
  - `0x00F6B024` -> `nop`
  - `0x00F6B02C` -> `nop`
- Previous repo drift to `0x00FA694C` / branch rewrite is now rejected for this patch because it did **not** match upstream and targeted an outer gate rather than the smaller helper that actually contains the sibling nil-field rejects.
## What this bypass actually skips

At `0xFFFFFE0007FAA94C`, bit 1 of the local spawn-persona state (`[SP+var_2E0]`) gates an inner validation block.

When the block executes (unpatched path), it performs:

1. `BL jb_b14_patch_persona_check_core` @ `0xFFFFFE0007FCA14C`
2. Optional follow-up `BL jb_b14_patch_persona_check_followup` @ `0xFFFFFE0007FC9F98` (when bit `0x400` is set)
3. On nonzero return, immediate error path:
   - sets error (`W28 = 1`)
   - jumps to `sub_FFFFFE000806C338(9, 19)` path (spawn failure report)

So B14 does not "relax everything"; it specifically removes this persona-precheck gate branch so execution continues from `0xFFFFFE0007FAAA7C`.

## Anchor Class

- Primary runtime anchor class: `string anchor`.
- Concrete anchor: `"com.apple.private.spawn-panic-crash-behavior"` in the outer spawn policy wrapper.
- Secondary discovery: semantic enumeration of that wrapper's local BL callees to find the unique small helper that matches the upstream control-flow shape.
- Why this survives stripped kernels: the matcher does not need IDA names or embedded symbols; it only needs the in-image entitlement string plus decoded local CFG in the nearby helper.

## Final Patch Sites

### PCC 26.1 research

- `0xFFFFFE0007FAB024` / `0x00FA7024`: `cbz w8, ...` -> `nop`
- `0xFFFFFE0007FAB02C` / `0x00FA702C`: `cbz w8, ...` -> `nop`

### PCC 26.1 release

- `0x00F6B024`: `cbz w8, ...` -> `nop`
- `0x00F6B02C`: `cbz w8, ...` -> `nop`

## Why this matters for unsigned binary launch and launchd dylib flow

`jb_b16_b14_patch_spawn_validate_persona_entry` is in the exec/spawn image-activation path (it references:

- `com.apple.private.spawn-panic-crash-behavior`
- `com.apple.private.spawn-subsystem-root`
- hardened-process entitlements

).

Static caller trace (backward xrefs) shows it is reached from multiple MAC policy dispatch paths used during spawn:

- `jb_b16_supp_mac_proc_check_launch_constraints` (`0xFFFFFE00082D66B8`) -> patched function
- `jb_b14_supp_spawn_policy_slot_0x30_dispatch` (`0xFFFFFE00082DA058`) -> patched function
- `jbA2_supp_mac_policy_dispatch_ops90_execve` (`0xFFFFFE00082D9D0C`) -> patched function
- `jb_a4_supp_mac_policy_vnode_check_exec` (`0xFFFFFE00082DBB18`) -> patched function

And the higher spawn/exec chain includes:

- `jbA2_supp_exec_activate_image` -> `jbA2_supp_imgact_exec_driver` -> `jbA2_supp_imgact_validate_and_activate` -> these policy dispatchers -> patched function.

### Practical implication

For unsigned/modified launch scenarios (including launchd with injected dylib), process creation still traverses this persona gate before later userland hooks are useful. If persona validation returns nonzero here, spawn aborts early; daemons/binaries never get to the stage where unsigned payload behavior is desired.

B14 prevents that early rejection by forcing the skip branch.

## Why These Gates Are Correct

### Facts from IDA / disassembly

The upstream-matching helper contains the local dual-`cbz` block shown in the pseudocode sections of this file.
## IDA naming and patch-point markings done

### Patched-function group

- `0xFFFFFE0007FA898C` -> `jb_b16_b14_patch_spawn_validate_persona_entry`
- `0xFFFFFE0007FCA14C` -> `jb_b14_patch_persona_check_core`
- `0xFFFFFE0007FC9F98` -> `jb_b14_patch_persona_check_followup`
- Comments added at:
  - `0xFFFFFE0007FAA94C` (B14 patch site)
  - `0xFFFFFE0007FAAA7C` (forced-branch target)
  - `0xFFFFFE0007FAAA84` (follow-up check call site)

### Supplement group

- `0xFFFFFE00082DA058` -> `jb_b14_supp_spawn_policy_slot_0x30_dispatch`
- `0xFFFFFE00082D9D0C` -> `jbA2_supp_mac_policy_dispatch_ops90_execve`
- `0xFFFFFE00082D66B8` -> `jb_b16_supp_mac_proc_check_launch_constraints`
- `0xFFFFFE00082DBB18` -> `jb_a4_supp_mac_policy_vnode_check_exec`
- `0xFFFFFE0007FA6858` -> `patched_b13_exec_policy_stage_from_load_machfile`
- `0xFFFFFE0007F81F00` -> `jbA2_supp_execve_mac_policy_bridge`
## Risk

- This bypass weakens spawn persona enforcement and can allow launches that kernel policy normally rejects.

## Symbol Consistency Audit (2026-03-05)

- Status: `partial`
- Direct recovered symbol `spawn_validate_persona` is not present in current `kernel_info` JSON.
- Upstream policy-path symbols are recovered and consistent with the traced context (for example `mac_proc_check_launch_constraints` at `0xfffffe00082df194`, `mac_vnode_check_signature` at `0xfffffe00082e4624`, and `exec_activate_image` at `0xfffffe0007fb5474`).
- Current naming at the exact patch function remains analyst labeling of validated address paths.

## Patch Metadata

- Patch document: `patch_spawn_validate_persona.md` (B14).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_spawn_persona.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.

## Patch Goal

Skip the persona validation branch that can abort the spawn/exec pipeline before userland bootstrap.
## Target Function(s) and Binary Location

- Primary target: spawn persona gate function at `0xfffffe0007fa898c`.
- Patchpoint: `0xfffffe0007faa94c` (`tbz` -> unconditional `b`).

## Kernel Source File Location

- Expected XNU source family: `bsd/kern/kern_exec.c` spawn/exec persona validation path.
- Confidence: `medium`.

## Function Call Stack

- Call-path evidence is derived from IDA xrefs and callsite traversal in this document.
- The patched node sits on the documented execution-critical branch for this feature path.

## Patch Hit Points

- The patch hitpoint is selected by the contextual matcher and verified against local control flow.
- Before/after instruction semantics are captured in the patch-site evidence above.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_spawn_persona.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).
## Pseudocode (Before)

```c
if (persona_bit1_set) {
    if (persona_check(...) != 0) return 1;
}
```

Helper-local instruction sequence (PCC 26.1 research):

```asm
ldr  w0, [x20]
bl   ...
cbz  x0, fail_alt
ldr  w8, [x21, #0x18]
cbz  w8, continue
ldr  w8, [x20, #8]
cbz  w8, deny        ; patched
ldr  w8, [x20, #0xc]
cbz  w8, deny        ; patched
mov  x8, #0
ldr  w9, [x19, #0x490]
add  x10, x0, #0x140
casa x8, x9, [x10]
```

Both patched `cbz` instructions jump to the same deny-return block.

## Pseudocode (After)

```c
/* TBZ gate bypassed */
goto persona_check_skip;
```
### Facts from XNU semantics

This helper is a compact persona validation subroutine in the spawn/exec policy path. The two sibling `cbz` guards are the local nil / missing-field reject gates immediately before the helper proceeds into the proc-backed persona state update path.

### Conclusion

The upstream pair is the correct semantic gate because:

- it is the exact pair patched by the known-good upstream tool,
- both branches converge on the helper's deny path,
- they live in the small validation helper reached from the outer spawn entitlement wrapper,
- and they are narrower and more precise than the previously drifted outer `tbz` bypass.

## Validation (Static Evidence)

- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.

## Expected Failure/Panic if Unpatched

- The persona validation branch can return an error early in the spawn/exec path, aborting process launch before userland hooks apply.
## Match vs Divergence

- Upstream relation: `match`.
- Explicitly rejected divergence: outer branch rewrite at `0x00FA694C` / `0x00F6A94C`.
- Why rejected: although that outer gate also affects persona validation, it is broader than the upstream helper-local reject sites and was not necessary once the true upstream helper was recovered.

## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows.
## Reveal Procedure

1. Find the outer spawn policy wrapper by the in-image entitlement string `"com.apple.private.spawn-panic-crash-behavior"`.
2. Enumerate BL callees inside that wrapper.
3. Keep only small local helpers.
4. Select the unique helper whose decoded CFG contains:
   - `ldr [arg,#8] ; cbz deny`
   - `ldr [arg,#0xc] ; cbz deny`
   - shared deny target
   - nearby `ldr [x19,#0x490] ; ... ; casa` sequence.
5. Patch both helper-local `cbz` instructions with `NOP`.
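The callee enumeration in step 2 reduces to decoding every `BL` in the wrapper's body. A minimal sketch under that reading (the function name and layout are illustrative, not the patcher's API):

```python
import struct

def bl_targets(func_bytes: bytes, func_va: int):
    """Decode every BL in a function body and return the callee VAs in order."""
    targets = []
    for off in range(0, len(func_bytes) - 3, 4):
        (word,) = struct.unpack_from("<I", func_bytes, off)
        if (word >> 26) != 0b100101:          # BL carries opcode 100101 in bits 31..26
            continue
        imm26 = word & 0x03FFFFFF
        if imm26 & 0x02000000:                # sign-extend a backward displacement
            imm26 -= 0x04000000
        targets.append(func_va + off + imm26 * 4)
    return targets
```

Each returned VA can then be size-filtered ("keep only small local helpers") before the dual-`cbz` CFG check in step 4.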

## Symbol Consistency Check

- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `partial`.
- Canonical symbol hit(s): none (alias-based static matching used).
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): `0xfffffe0007fa898c` currently resolves to `sub_FFFFFE0007FA8658` (size `0x394`).
## Validation

- PCC 26.1 research dry-run: `hit` at `0x00FA7024` and `0x00FA702C`
- PCC 26.1 release dry-run: `hit` at `0x00F6B024` and `0x00F6B02C`
- Match verdict vs `/Users/qaq/Desktop/patch_fw.py`: `match`

## Open Questions and Confidence

- Open question: symbol recovery is incomplete for this path; aliases are still needed for parts of the call chain.
- Overall confidence for this patch analysis: `medium` (address-level semantics are stable, symbol naming is partial).
## Files

- Patcher: `scripts/patchers/kernel_jb_patch_spawn_persona.py`
- Analysis doc: `research/kernel_patch_jb/patch_spawn_validate_persona.md`

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.
## 2026-03-06 Rework

- Upstream target (`/Users/qaq/Desktop/patch_fw.py`): `match`.
- Final research sites: `0x00FA7024` (`0xFFFFFE0007FAB024`) and `0x00FA702C` (`0xFFFFFE0007FAB02C`).
- Anchor class: `string`. Runtime reveal starts from the stable entitlement string `"com.apple.private.persona-mgmt"`, resolves the small helper, and matches the exact upstream dual-`cbz` pair on the `[x20,#8]` / `[x20,#0xc]` slots.
- Why this site: it is the exact known-good upstream zero-check pair inside the persona validation helper. The previous drift to `0x00FA694C` patched a broader exec-path branch and did not match the upstream helper or XNU `spawn_validate_persona(...)` logic.
- Release/generalization rationale: entitlement strings are stable across stripped kernels, and the dual-load/dual-cbz shape is tiny and source-backed.
- Performance note: one string-xref resolution plus a very small helper-local scan.
- Focused PCC 26.1 research dry-run: `hit`, 2 writes at `0x00FA7024` and `0x00FA702C`.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function node, `1` patch-point VA.
- IDA function sample: `exec_spawnattr_getmacpolicyinfo`
- Chain function sample: `exec_spawnattr_getmacpolicyinfo`
- Caller sample: `mac_proc_check_launch_constraints`, `sub_FFFFFE00082E2484`, `sub_FFFFFE00082E27D0`, `sub_FFFFFE00082E4118`
- Callee sample: `bank_task_initialize`, `chgproccnt`, `cloneproc`, `dup2`, `exec_activate_image`, `exec_resettextvp`
- Verdict: `questionable`
- Recommendation: the hit is valid, but the patch is inactive in `find_all()`; enable it only after staged validation.
- Key verified points:
  - `0xFFFFFE0007FB48B0` (`exec_spawnattr_getmacpolicyinfo`): b #0x130 [_spawn_validate_persona gate] | `88090836 -> 4c000014`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

@@ -158,3 +158,10 @@ if (true) goto allow; // compare neutralized

- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Upstream Rework Review

- `patch_fw.py` patches `0x00B01194`, and the current matcher still lands there on research; release lands at `0x00AC5194`.
- IDA confirms the exact upstream gate at `0xFFFFFE0007B05194`: `cmp Xn, X0 ; b.eq allow ; cmp Xn, X1 ; b.eq deny ; ... ; bl ... ; cbz w0, ...`. This matches `task_conversion_eval_internal()` semantics in `research/reference/xnu/osfmk/kern/ipc_tt.c`.
- No code-path retarget was needed in this pass. The fast matcher already fails closed and the slow fallback stays disabled unless explicitly opted in with `VPHONE_TASK_CONV_ALLOW_SLOW_FALLBACK=1`.
- Focused dry-run (`2026-03-06`): research `0x00B01194`; release `0x00AC5194`.

@@ -1,153 +1,141 @@

# B15 `patch_task_for_pid`

## Patch Goal

Suppress one `proc_ro` security-state copy in task-for-pid flow by NOP-ing the second `ldr w?, [x?, #0x490]` pair.

## Goal

Keep the jailbreak `task_for_pid` patch aligned with the known-good upstream design in `/Users/qaq/Desktop/patch_fw.py` unless IDA + XNU clearly prove that upstream is wrong.

## Binary Targets (IDA + Recovered Symbols)

- Preferred upstream target: `patch(0xFC383C, 0xD503201F)`.
- Final rework result: `match`.
- PCC 26.1 research hit: file offset `0x00FC383C`, VA `0xFFFFFE0007FC783C`.
- PCC 26.1 release hit: file offset `0x00F8783C`.
- Recovered symbol related to API path:
  - `task_for_pid_trap` at `0xfffffe0007fd12dc`
- Heuristic-resolved patch function (unique under strict matcher):
  - `0xfffffe000800cffc`
- Patch site:
  - `0xfffffe000800d120` (`LDR W8, [X20,#0x490]`)
- Data-table reference to this function:
  - `0xfffffe00077424a8` (indirect dispatch/table-style use)
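The file-offset / VA pairs above follow a flat mapping from the base VA recorded in the runtime verification sections. A small sanity-check sketch, assuming no per-segment slide for these text addresses (the helper name is illustrative):

```python
BASE_VA = 0xFFFFFE0007004000  # base VA from the runtime verification reports

def va_to_file_off(va: int) -> int:
    """Translate a kernel text VA to a raw-file offset under the flat mapping."""
    return va - BASE_VA

# B15 research patch point: VA 0xFFFFFE0007FC783C <-> file offset 0x00FC383C
assert va_to_file_off(0xFFFFFE0007FC783C) == 0x00FC383C
# Shared-region research patch point: VA 0xFFFFFE00080769CC <-> 0x010729CC
assert va_to_file_off(0xFFFFFE00080769CC) == 0x010729CC
```

Both address pairs quoted in this file satisfy the same offset relation, which is a cheap consistency check when retargeting between research and release variants.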
## What Gets Patched

The patch NOPs the early `pid == 0` reject gate in `task_for_pid`, before the call that resolves the target task port.

## Call-Stack Analysis

- This path is mostly table/dispatch-driven, with sparse direct BL callers.
- The selected function uniquely matched:
  - >=2 `ldr #0x490 + str #0xc` pairs
  - >=2 `ldadda`
  - `movk ..., #0xc8a2`
  - high-caller BL target profile
## Patch-Site / Byte-Level Change

- Patch site: `0xfffffe000800d120`
- Before:
  - bytes: `88 92 44 B9`
  - asm: `LDR W8, [X20,#0x490]`
- After:
  - bytes: `1F 20 03 D5`
  - asm: `NOP`
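The before/after byte pairs can be cross-checked directly from the AArch64 encodings. A minimal sketch (the encoder name is illustrative; only the unsigned-offset `LDR` form is handled):

```python
def encode_ldr_w_uoff(rt: int, rn: int, imm: int) -> bytes:
    """Encode `ldr wRt, [xRn, #imm]` (unsigned-offset form; imm must be 4-aligned)."""
    word = 0xB9400000 | ((imm >> 2) << 10) | (rn << 5) | rt
    return word.to_bytes(4, "little")

NOP = (0xD503201F).to_bytes(4, "little")

before = encode_ldr_w_uoff(8, 20, 0x490)  # LDR W8, [X20,#0x490]
```

`before` reproduces the documented `88 92 44 B9` bytes and `NOP` the `1F 20 03 D5` replacement, so the table above is internally consistent.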
## Pseudocode (Before)

```c
dst->security = src->proc_ro_security; // second copy point
```

On PCC 26.1 research the validated sequence is:

```asm
0xFFFFFE0007FC7828 ldr w23, [x8, #8]
0xFFFFFE0007FC782C ldr x19, [x8, #0x10]
0xFFFFFE0007FC783C cbz w23, 0xFFFFFE0007FC79CC ; patched
0xFFFFFE0007FC7840 mov w1, #0
0xFFFFFE0007FC7844 mov w2, #0
0xFFFFFE0007FC7848 mov w3, #0
0xFFFFFE0007FC784C mov x4, #0
0xFFFFFE0007FC7850 bl <helper>
0xFFFFFE0007FC7854 cbz x0, 0xFFFFFE0007FC79CC
```
## Pseudocode (After)

```c
// second security copy removed
```

## Upstream Match vs Divergence

### Final status: `match`

- Upstream `patch_fw.py` uses file offset `0xFC383C`.
- The reworked matcher now emits exactly `0xFC383C` on PCC 26.1 research.
- The corresponding PCC 26.1 release hit is `0xF8783C`, which is the expected variant-shifted analogue of the same in-function gate.

### Rejected drift design

The previous local rework had diverged to two later deny-return rewrites in small helper functions.

That divergence is rejected because:

- it does **not** match the known-good upstream site,
- the XNU source still explicitly says `/* Always check if pid == 0 */` and immediately returns failure,
- IDA on PCC 26.1 research still shows the same early `cbz wPid, fail` gate at the exact upstream offset,
- the helper-rewrite path broadens behavior more than necessary and is computationally more expensive at runtime.

## XNU Cross-Reference

Source: `research/reference/xnu/bsd/kern/kern_proc.c:5715`

```c
/* Always check if pid == 0 */
if (pid == 0) {
    (void) copyout((char *)&tret, task_addr, sizeof(mach_port_name_t));
    AUDIT_MACH_SYSCALL_EXIT(KERN_FAILURE);
    return KERN_FAILURE;
}
```
## Symbol Consistency

- `task_for_pid_trap` symbol exists, but the strict patch-site matcher resolves a different helper routine.
- This mismatch is explicitly tracked and should remain under verification.

### Fact

- The source-level first authorization gate in `task_for_pid()` is the `pid == 0` rejection.
- The validated PCC 26.1 research instruction at `0xFFFFFE0007FC783C` is the direct binary analogue of that source gate.
## Patch Metadata

- Patch document: `patch_task_for_pid.md` (B15).
- Primary patcher module: `scripts/patchers/kernel_jb_patch_task_for_pid.py`.
- Analysis mode: static binary analysis (IDA-MCP + disassembly + recovered symbols), no runtime patch execution.

### Inference

NOPing this early `cbz` is the narrowest upstream-compatible jailbreak bypass because it removes only the unconditional `pid == 0` failure gate, while leaving the later `proc_find()`, `task_for_pid_posix_check()`, and task lookup flow structurally intact.
## Target Function(s) and Binary Location

- Primary target: task-for-pid security helper in `task_for_pid_trap` path (matcher-resolved helper).
- Patchpoint: second `ldr #0x490` security copy point -> `nop`.

## Anchor Class

- Primary runtime anchor class: `string + heuristic`.
- Concrete string anchor: `"proc_ro_ref_task"`, which lives inside the same stripped function body on PCC 26.1 research.
- Why this anchor was chosen: the embedded symtable is effectively empty and IDA names are not stable, but this in-function string reliably recovers the enclosing function, so the heuristic scan stays local instead of walking the whole kernel.
## Kernel Source File Location

- Expected XNU source: `osfmk/kern/task.c` (`task_for_pid_trap` and helper authorization flow).
- Confidence: `high`.

## Runtime Matcher Design

The runtime matcher is intentionally single-path and upstream-aligned:

1. Recover the enclosing function from the in-image string `"proc_ro_ref_task"`.
2. Scan only that function for the unique local sequence:
   - `ldr wPid, [xArgs, #8]`
   - `ldr xTaskPtr, [xArgs, #0x10]`
   - `cbz wPid, fail`
   - `mov w1, #0`
   - `mov w2, #0`
   - `mov w3, #0`
   - `mov x4, #0`
   - `bl`
   - `cbz x0, fail`
3. Patch the first `cbz` with `NOP`.
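Step 3's write can be sketched as a guarded in-place substitution that refuses to touch anything that is not a 32-bit `cbz` (buffer/offset handling here is illustrative, not the patcher's actual code):

```python
def nop_cbz_w(image: bytearray, off: int) -> None:
    """Replace a verified `cbz wN, <label>` with NOP; fail closed on anything else."""
    word = int.from_bytes(image[off:off + 4], "little")
    if (word & 0xFF000000) != 0x34000000:  # CBZ (32-bit register) opcode in bits 31..24
        raise ValueError(f"not a cbz w* at +{off:#x}: {word:#010x}")
    image[off:off + 4] = (0xD503201F).to_bytes(4, "little")  # NOP
```

The opcode guard keeps the matcher's fail-closed behavior even if the in-function scan ever selects a shifted site on a future kernel variant.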

## Function Call Stack

- Primary traced chain (from `Call-Stack Analysis`):
  - This path is mostly table/dispatch-driven, with sparse direct BL callers.
  - The selected function uniquely matched:
    - >=2 `ldr #0x490 + str #0xc` pairs
    - >=2 `ldadda`
    - `movk ..., #0xc8a2`
- The upstream entry(s) and patched decision node are linked by direct xref/callsite evidence in this file.

This avoids unstable IDA naming while keeping the reveal logic close to the exact upstream gate.
## Patch Hit Points

- Key patchpoint evidence (from `Patch-Site / Byte-Level Change`):
  - Patch site: `0xfffffe000800d120`
  - Before:
    - bytes: `88 92 44 B9`
    - asm: `LDR W8, [X20,#0x490]`
  - After:
    - bytes: `1F 20 03 D5`
- The before/after instruction transform is constrained to this validated site.

## Why This Should Generalize

This matcher should survive PCC 26.1 research, PCC 26.1 release, and likely nearby release variants such as 26.3 because it relies on:

- the stable syscall argument layout (`pid` at `+8`, task port output at `+0x10`),
- the narrow early-failure ABI shape around `port_name_to_task()`, and
- a single local fail target shared by `cbz wPid` and the post-helper `cbz x0`.

Runtime cost remains reasonable:

- one full sequential decode of `kern_text`,
- no repeated nested scans,
- one exact candidate accepted.

## Current Patch Search Logic

- Implemented in `scripts/patchers/kernel_jb_patch_task_for_pid.py`.
- Site resolution uses anchor + opcode-shape + control-flow context; ambiguous candidates are rejected.
- The patch is applied only after a unique candidate is confirmed in-function.
- Uses string anchors + instruction-pattern constraints + structural filters (for example callsite shape, branch form, register/imm checks).

## Validation (Static Evidence)
- Verified with IDA-MCP disassembly/decompilation, xrefs, and callgraph context for the selected site.
- Cross-checked against recovered symbols in `research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json`.
- Address-level evidence in this document is consistent with patcher matcher intent.

## Validation

### Focused dry-run

Validated locally on extracted raw kernels:

- PCC 26.1 research: `hit` at `0x00FC383C`
- PCC 26.1 release: `hit` at `0x00F8783C`

Both variants emit exactly one patch:

- `NOP [_task_for_pid pid==0 gate]`

## Expected Failure/Panic if Unpatched

- task_for_pid helper retains proc security copy/check logic that denies task port acquisition.

## Risk / Side Effects

- This patch weakens a kernel policy gate by design and can broaden behavior beyond stock security assumptions.
- Potential side effects include reduced diagnostics fidelity and wider privileged surface for patched workflows.

## Symbol Consistency Check
- Recovered-symbol status in `kernelcache.research.vphone600.bin.symbols.json`: `match`.
- Canonical symbol hit(s): `task_for_pid_trap`.
- Where canonical names are absent, this document relies on address-level control-flow and instruction evidence; analyst aliases are explicitly marked as aliases.
- IDA-MCP lookup snapshot (2026-03-05): querying `task_for_pid_trap` resolves to `proc_ro_ref_task` at `0xfffffe0007fd12dc`; this is treated as a naming alias/mismatch risk while address semantics stay valid.

### Match verdict

- Upstream reference `/Users/qaq/Desktop/patch_fw.py`: `match`
- IDA PCC 26.1 research control-flow: `match`
- XNU `task_for_pid` early gate semantics: `match`

## Open Questions and Confidence

- Open question: verify that future firmware drift does not move this site into an equivalent but semantically different branch.
- Overall confidence for this patch analysis: `high` (symbol match + control-flow/byte evidence).

## Files
- Patcher: `scripts/patchers/kernel_jb_patch_task_for_pid.py`
- Analysis doc: `research/kernel_patch_jb/patch_task_for_pid.md`

## Evidence Appendix

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`
- Base VA: `0xFFFFFE0007004000`
- Runtime status: `hit` (1 patch write, method_return=True)
- Included in `KernelJBPatcher.find_all()`: `False`
- IDA mapping: `1/1` points in recognized functions; `0` points are code-cave/data-table writes.
- IDA mapping status: `ok` (IDA runtime mapping loaded.)
- Call-chain mapping status: `ok` (IDA call-chain report loaded.)
- Call-chain validation: `1` function node, `1` patch-point VA.
- IDA function sample: `sub_FFFFFE000800CFFC`
- Chain function sample: `sub_FFFFFE000800CFFC`
- Caller sample: none
- Callee sample: `kfree_ext`, `sub_FFFFFE0007B15AFC`, `sub_FFFFFE0007B1F20C`, `sub_FFFFFE0007B1F444`, `sub_FFFFFE0007FE91CC`, `sub_FFFFFE000800CFFC`
- Verdict: `questionable`
- Recommendation: the hit is valid, but the patch is inactive in `find_all()`; enable it only after staged validation.
- Key verified points:
  - `0xFFFFFE000800D120` (`sub_FFFFFE000800CFFC`): NOP [_task_for_pid proc_ro copy] | `889244b9 -> 1f2003d5`
- Artifacts: `research/kernel_patch_jb/runtime_verification/runtime_verification_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_runtime_patch_points.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Rework

- Upstream target (`/Users/qaq/Desktop/patch_fw.py`): `match`.
- Final research site: `0x00FC383C` (`0xFFFFFE0007FC783C`).
- Anchor class: `string + heuristic`. Runtime reveal recovers the enclosing function from the in-image `"proc_ro_ref_task"` string, then finds the unique upstream local `ldr pid ; ldr task_ptr ; cbz pid ; mov w1/w2/w3,#0 ; mov x4,#0 ; bl ; cbz x0` pattern.
- Why this site: it is the exact known-good upstream `pid == 0` reject gate, and XNU still models it as the first unconditional failure path in `task_for_pid()`.
- Release/generalization rationale: the local ABI/control-flow pattern is narrow and stable across stripped kernels, while avoiding reliance on symbol names.
- Performance note: one string-xref resolution plus a single bounded local scan (`+0x800` window), because the stripped-function end detector truncates this function too early on current PCC 26.1 images; still no whole-kernel repeated semantic rescans.
- Focused PCC 26.1 research dry-run: pending main-agent validation.

@@ -192,3 +192,10 @@ return 1;
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Upstream Rework Review

- `patch_fw.py` directly zeros `0x0067EB50`; release lands at `0x0066AB50`. The current patcher still recovers and zeros that same variable on both kernels.
- Runtime reveal remains string/data anchored (`"thid_should_crash"` -> adjacent `sysctl_oid` -> backing variable in `__DATA`/`__DATA_CONST`), which is preferable to any symbol-based path on the stripped raw kernels.
- IDA re-check (`2026-03-06`) confirms the backing variable is live and currently nonzero (`1`) before patching on research.
- Focused dry-run (`2026-03-06`): research `0x0067EB50`; release `0x0066AB50`.

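The direct-zeroing step above can be illustrated with a minimal sketch. This is not the repo's patcher (which re-derives the offset from the `"thid_should_crash"` `sysctl_oid` at runtime); the 4-byte little-endian width of the backing variable is an assumption, and the helper name is mine.

```python
# Minimal sketch: zero a 4-byte little-endian variable at a known file
# offset in an in-memory kernel image copy (illustration only; the real
# patcher resolves the offset via the sysctl_oid, not a hardcoded constant).
import struct

def zero_u32(image: bytearray, off: int) -> int:
    """Zero image[off:off+4]; return the previous value for logging."""
    (old,) = struct.unpack_from("<I", image, off)
    struct.pack_into("<I", image, off, 0)
    return old

# Synthetic image with the variable at offset 8, currently nonzero (1),
# matching the IDA re-check above.
image = bytearray(b"\x00" * 8 + struct.pack("<I", 1) + b"\x00" * 4)
old = zero_u32(image, 8)
```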
@@ -1,5 +1,87 @@
# B10 `patch_vm_map_protect`

## 2026-03-06 PCC 26.1 Rework Status

- Preferred upstream reference: `/Users/qaq/Desktop/patch_fw.py`.
- Final status on PCC 26.1 research: **match upstream**.
- Upstream patch site: file offset `0x00BC024C` (`patch(0xBC024C, 0x1400000A)`).
- Final JB patcher site: file offset `0x00BC024C`, VA `0xfffffe0007bc424c`.
- Repo drift removed: the previous repo-only `0x00BC012C` / `TBNZ X24,#0x20` site is no longer accepted because it does **not** match the known-good upstream gate and is not the correct XNU-backed write-downgrade decision point for PCC 26.1 research.

## Preferred Design Target Check

- **Match vs upstream:** `match`.
- **Why this is the preferred gate:** upstream patches the `B.NE` that skips the block clearing `VM_PROT_WRITE` from combined read+write requests. IDA on PCC 26.1 research shows that the same local block still exists unchanged.
- **Red-flag review result:** the earlier repo drift to `0x00BC012C` was a real divergence from upstream. It was removed rather than justified, because IDA + XNU semantics point back to the upstream gate.

## Final Patch Site (PCC 26.1 Research)

- Function anchor: the in-image panic string `"vm_map_protect(%p,0x%llx,0x%llx) new=0x%x wired=%x @%s:%d"`, whose xref lands inside the same `vm_map_protect` body.
- Patched instruction: `0xfffffe0007bc424c` / file offset `0x00BC024C`.
- Before: `b.ne #0xbc0274`.
- After: `b #0xbc0274`.
- Nearby validated block in IDA:
  - `mov w9, #6`
  - `bics wzr, w9, w20`
  - `b.ne #0xbc0274` ← patched
  - `tbnz w8, #0x16, #0xbc0274`
  - ...
  - `and w20, w20, #0xfffffffb`

## Why This Gate Is Correct

- **Fact (IDA):** the branch at `0x00BC024C` skips a small block whose only semantic effect on the requested protection register is `and w20, w20, #0xfffffffb`, i.e. clear bit `0x4` (`VM_PROT_WRITE`).
- **Fact (XNU):** `research/reference/xnu/osfmk/vm/vm_map.c` contains the corresponding logic:
  - `if ((~v5 & 6) == 0 && (v22 & 0x400000) == 0) { ... v5 &= ~4u; }`
- **Inference:** on PCC 26.1 research, `w20` is the local requested-protection value and this block is still the write-downgrade path that upstream intended to bypass.
- **Conclusion:** rewriting the first skip branch to an unconditional `b` preserves the known-good upstream behavior: always bypass the downgrade block, instead of patching an earlier unrelated status-bit test.

## Reveal Procedure Used In The Reworked Matcher

1. Recover the function containing the in-image `vm_map_protect(` panic string.
2. Scan only within that function.
3. Find the unique local sequence:
   - `mov wMask, #6`
   - `bics wzr, wMask, wProt`
   - `b.ne skip`
   - `tbnz wEntryFlags, #22, skip`
   - later in the skipped block: `and wProt, wProt, #~VM_PROT_WRITE`
4. Rewrite only that `b.ne` to an unconditional branch to the same target.
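Step 4 is a fixed-width instruction rewrite, and the upstream literal `patch(0xBC024C, 0x1400000A)` falls directly out of the standard A64 branch encodings. A sketch (the encodings are the documented ISA forms; the helper names are illustrative, not the repo's API):

```python
# Standard A64 branch encodings involved in the b.ne -> b rewrite.
def encode_b(byte_off: int) -> int:
    """Unconditional B: 0b000101 | imm26, offset scaled by 4."""
    assert byte_off % 4 == 0 and -0x8000000 <= byte_off < 0x8000000
    return 0x14000000 | ((byte_off >> 2) & 0x3FFFFFF)

def encode_b_cond(byte_off: int, cond: int) -> int:
    """B.cond: imm19 at bits [23:5], cond at bits [3:0] (NE = 0b0001)."""
    assert byte_off % 4 == 0
    return 0x54000000 | (((byte_off >> 2) & 0x7FFFF) << 5) | (cond & 0xF)

# The patched site branches forward 0x28 bytes (0xBC024C -> 0xBC0274):
# `b.ne #0x28` becomes `b #0x28`, matching upstream's patch word.
before = encode_b_cond(0x28, 0b0001)  # b.ne +0x28
after = encode_b(0x28)                # b    +0x28 -> 0x1400000A
```

Rewriting in place keeps the branch target identical, so only the condition is removed.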

## Focused Validation (2026-03-06)

- Research kernel used: extracted raw Mach-O `/tmp/vphone-kcache-research-26.1.raw`.
- Research outcome: `hit` at `0x00BC024C`.
- Research emitted patch: `b #0x28 [_vm_map_protect]`.
- Release kernel used: extracted raw Mach-O `/tmp/vphone-kcache-release-26.1.raw`.
- Release outcome: `hit` at `0x00B8424C`.
- Release emitted patch: `b #0x28 [_vm_map_protect]`.
- Method: focused `KernelJBPatcher.patch_vm_map_protect()` dry-runs in the project `.venv`.
- Result: the reworked matcher hits the same semantic gate on both PCC 26.1 research and PCC 26.1 release, and the research hit **matches upstream exactly**.

## Why This Should Generalize Beyond The Current Research Image

- The matcher does **not** key on a hardcoded offset, a specific file-layout delta, or a single fragile operand string.
- It anchors on an in-image `vm_map_protect(` panic string that is tied to the same core VM function across variants.
- Inside that function it requires a compact semantic micro-CFG, not a single mnemonic:
  - `mov wMask, #6` (combined read+write test)
  - `bics wzr, wMask, wProt`
  - `b.ne skip`
  - `tbnz wEntryFlags, #22, skip`
  - later `and wProt, wProt, #~VM_PROT_WRITE`
- That shape is directly backed by the XNU write-downgrade logic, so it should survive ordinary offset drift between PCC 26.1 research, PCC 26.1 release, and likely nearby 26.3 release kernels unless Apple materially restructures this code path.
- If Apple does materially restructure it, the matcher fails closed by requiring a unique hit rather than guessing.

## Runtime Matcher Cost

- Search scope is limited to one recovered function body, not the whole kernel text.
- The scan is linear over that function with small fixed-width decode windows (`10` instructions for the main pattern, `1` instruction for the local write-clear search).
- This keeps the runtime cost negligible relative to the broader JB patch pass while still being much more semantic than the earlier shallow `tbnz bit>=24` heuristic.

## Superseded Earlier Analysis

The older `0x00BC012C` / `TBNZ X24,#0x20` analysis below is retained only as historical context. It is superseded by the 2026-03-06 rework above and should not be treated as the preferred patch design for PCC 26.1 research.

## Patch Goal

Bypass a high-bit protection guard by converting a `TBNZ` check into an unconditional `B`.

@@ -125,7 +207,9 @@ goto guarded_path; // unconditional

- Detailed addresses, xrefs, and rationale are preserved in the existing analysis sections above.
- For byte-for-byte patch details, refer to the patch-site and call-trace subsections in this file.

## Runtime + IDA Verification (2026-03-05)
## Runtime + IDA Verification (2026-03-05, historical)

> Historical note: this older runtime-verification block is preserved for traceability only. Its `0x00BC012C` / `0xFFFFFE0007BD09A8` analysis is superseded by the 2026-03-06 upstream-aligned rework above, whose accepted site is `0x00BC024C` / `0xFFFFFE0007BC424C`.

- Verification timestamp (UTC): `2026-03-05T14:55:58.795709+00:00`
- Kernel input: `/Users/qaq/Documents/Firmwares/PCC-CloudOS-26.3-23D128/kernelcache.research.vphone600`

@@ -149,3 +233,14 @@ goto guarded_path; // unconditional

- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.json`
- Artifacts: `research/kernel_patch_jb/runtime_verification/ida_patch_chain_report.md`

<!-- END_RUNTIME_IDA_VERIFICATION_2026_03_05 -->

## 2026-03-06 Rework

- Upstream target (`/Users/qaq/Desktop/patch_fw.py`): `match`.
- Final research site: `0x00BC024C` (`0xFFFFFE0007BC424C`).
- Anchor class: `string`. Runtime reveal starts from the in-image `"vm_map_protect("` string, resolves the function, then matches the unique write-downgrade block `mov #6 ; bics wzr,mask,prot ; b.ne skip ; tbnz #22,skip ; ... and prot,#~VM_PROT_WRITE`.
- Why this site: it is the exact upstream branch gate that conditionally strips `VM_PROT_WRITE` before later VME updates. The older drift to `0x00BC012C` lands in unrelated preflight/error handling and is rejected.
- Release/generalization rationale: the panic string and the local BICS/TBNZ/write-clear shape are source-backed and should survive stripped release kernels with low matcher cost.
- Performance note: one string-xref resolution and one function-local scan with a short semantic confirmation window.
- Focused PCC 26.1 research dry-run: `hit`, 1 write at `0x00BC024C`.

@@ -159,10 +159,13 @@ fi

scp_from "/mnt1/sbin/launchd.bak" "$TEMP_DIR/launchd"

# Inject launchdhook.dylib load command (idempotent — skips if already present)
# Inject launchdhook via short root alias to avoid Mach-O header overflow.
# Keep the full /cores/launchdhook.dylib copy on disk for compatibility, but
# load /b from launchd because this launchd sample only has room for a 32-byte
# LC_LOAD_DYLIB command after stripping LC_CODE_SIGNATURE.
if [[ -d "$JB_INPUT_DIR/basebin" ]]; then
    echo "  Injecting LC_LOAD_DYLIB for /cores/launchdhook.dylib..."
    python3 "$SCRIPT_DIR/patchers/cfw.py" inject-dylib "$TEMP_DIR/launchd" "/cores/launchdhook.dylib"
    echo "  Injecting LC_LOAD_DYLIB for /b (short launchdhook alias)..."
    python3 "$SCRIPT_DIR/patchers/cfw.py" inject-dylib "$TEMP_DIR/launchd" "/b"
fi

python3 "$SCRIPT_DIR/patchers/cfw.py" patch-launchd-jetsam "$TEMP_DIR/launchd"

@@ -224,6 +227,12 @@ if [[ -d "$BASEBIN_DIR" ]]; then
        ldid_sign "$dylib"
        scp_to "$dylib" "/mnt1/cores/$dylib_name"
        ssh_cmd "/bin/chmod 0755 /mnt1/cores/$dylib_name"

        if [[ "$dylib_name" == "launchdhook.dylib" ]]; then
            echo "  Installing short launchdhook alias at /b..."
            scp_to "$dylib" "/mnt1/b"
            ssh_cmd "/bin/chmod 0755 /mnt1/b"
        fi
    done

    echo "  [+] BaseBin hooks deployed"

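The "32-byte LC_LOAD_DYLIB" constraint in the comment above follows from the Mach-O load-command layout: a `dylib_command` header is 24 bytes, followed by the NUL-terminated install path, with `cmdsize` rounded up to an 8-byte multiple in a 64-bit image. A quick check of why `/b` fits where the full path does not (the helper name is illustrative):

```python
# Size of an LC_LOAD_DYLIB command for a given install path in a 64-bit
# Mach-O: 24-byte struct dylib_command header + NUL-terminated path,
# with cmdsize padded up to an 8-byte boundary.
def lc_load_dylib_size(path: str) -> int:
    header = 24  # sizeof(struct dylib_command)
    raw = header + len(path.encode()) + 1  # include the trailing NUL
    return (raw + 7) & ~7  # round up to an 8-byte multiple

short = lc_load_dylib_size("/b")                       # fits a 32-byte gap
full = lc_load_dylib_size("/cores/launchdhook.dylib")  # needs 56 bytes
```

So stripping `LC_CODE_SIGNATURE` frees enough header space for the 32-byte `/b` command but not for the 56-byte full-path command, which is exactly why the alias exists.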
@@ -63,7 +63,7 @@ class KernelJBPatcher(
    # Group A: Core gate-bypass methods.
    _GROUP_A_METHODS = (
        "patch_amfi_cdhash_in_trustcache",  # JB-01 / A1
        "patch_amfi_execve_kill_path",  # JB-02 / A2
        # "patch_amfi_execve_kill_path",  # JB-02 / A2 (superseded by C21 on current PCC 26.1 path; keep standalone only)
        "patch_task_conversion_eval_internal",  # JB-08 / A3
        "patch_sandbox_hooks_extended",  # JB-09 / A4
        "patch_iouc_failed_macf",  # JB-10 / A5

@@ -243,6 +243,13 @@ class KernelJBPatchCredLabelMixin:
            self._log("  [-] shared deny return not found")
            return False

        deny_already_allowed = _rd32(self.data, deny_off) == self._MOV_W0_0_U32
        if deny_already_allowed:
            self._log(
                f"  [=] shared deny return at 0x{deny_off:X} already forces allow; "
                "skipping deny trampoline hook"
            )

        success_exits = self._find_cred_label_success_exits(func_off, epilogue_off)
        if not success_exits:
            self._log("  [-] success exits not found")

@@ -253,27 +260,31 @@ class KernelJBPatchCredLabelMixin:
            self._log("  [-] csflags stack reload not found")
            return False

        deny_cave = self._find_code_cave(8)
        if deny_cave < 0:
            self._log("  [-] no code cave found for C21-v3 deny trampoline")
            return False
        deny_cave = -1
        if not deny_already_allowed:
            deny_cave = self._find_code_cave(8)
            if deny_cave < 0:
                self._log("  [-] no code cave found for C21-v3 deny trampoline")
                return False

        success_cave = self._find_code_cave(32)
        if success_cave < 0 or success_cave == deny_cave:
            self._log("  [-] no code cave found for C21-v3 success trampoline")
            return False

        deny_branch_back = self._encode_b(deny_cave + 4, epilogue_off)
        if not deny_branch_back:
            self._log("  [-] branch from deny trampoline back to epilogue is out of range")
            return False
        deny_branch_back = b""
        if not deny_already_allowed:
            deny_branch_back = self._encode_b(deny_cave + 4, epilogue_off)
            if not deny_branch_back:
                self._log("  [-] branch from deny trampoline back to epilogue is out of range")
                return False

        success_branch_back = self._encode_b(success_cave + 28, epilogue_off)
        if not success_branch_back:
            self._log("  [-] branch from success trampoline back to epilogue is out of range")
            return False

        deny_shellcode = asm("mov w0, #0") + deny_branch_back
        deny_shellcode = asm("mov w0, #0") + deny_branch_back if not deny_already_allowed else b""
        success_shellcode = (
            asm(f"ldr x26, {csflags_mem_op}")
            + asm("cbz x26, #0x10")

@@ -299,15 +310,16 @@ class KernelJBPatchCredLabelMixin:
                f"success_trampoline+{index} [_cred_label_update_execve C21-v3]",
            )

        deny_branch_to_cave = self._encode_b(deny_off, deny_cave)
        if not deny_branch_to_cave:
            self._log(f"  [-] branch from 0x{deny_off:X} to deny trampoline is out of range")
            return False
        self.emit(
            deny_off,
            deny_branch_to_cave,
            f"b deny cave [_cred_label_update_execve C21-v3 exit @ 0x{deny_off:X}]",
        )
        if not deny_already_allowed:
            deny_branch_to_cave = self._encode_b(deny_off, deny_cave)
            if not deny_branch_to_cave:
                self._log(f"  [-] branch from 0x{deny_off:X} to deny trampoline is out of range")
                return False
            self.emit(
                deny_off,
                deny_branch_to_cave,
                f"b deny cave [_cred_label_update_execve C21-v3 exit @ 0x{deny_off:X}]",
            )

        for off in success_exits:
            branch_to_cave = self._encode_b(off, success_cave)

@@ -1,74 +1,106 @@
"""Mixin: KernelJBPatchDounmountMixin."""

from .kernel_jb_base import asm
from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_REG

from .kernel_jb_base import NOP


class KernelJBPatchDounmountMixin:
    def patch_dounmount(self):
        """NOP a MAC check in _dounmount (strict matching only).

        Pattern: mov w1,#0; mov x2,#0; bl TARGET (MAC policy check pattern).
        """Match the known-good upstream cleanup call in dounmount.

        Anchor class: string anchor. Recover the dounmount body through the
        stable panic string `dounmount:` and patch the unique near-tail 4-arg
        zeroed cleanup call used by `/Users/qaq/Desktop/patch_fw.py`:

            mov x0, xMountLike
            mov w1, #0
            mov w2, #0
            mov w3, #0
            bl target
            mov x0, xMountLike
            bl target2
            cbz x19, ...

        This intentionally rejects the later `mov w1,#0x10 ; mov x2,#0 ; bl`
        site because that drifted away from upstream and represents a different
        call signature/control-flow path.
        """
        self._log("\n[JB] _dounmount: strict MAC check NOP")
        self._log("\n[JB] _dounmount: upstream cleanup-call NOP")

        # Try symbol first
        foff = self._resolve_symbol("_dounmount")
        if foff >= 0:
            func_end = self._find_func_end(foff, 0x1000)
            result = self._find_mac_check_bl(foff, func_end)
            if result:
                nop_patch = asm("nop")
                self._assert_patch_decode(nop_patch, "nop")
                self.emit(result, nop_patch, "NOP [_dounmount MAC check]")
                return True
        foff = self._find_func_by_string(b"dounmount:", self.kern_text)
        if foff < 0:
            self._log("  [-] 'dounmount:' anchor not found")
            return False

        # String anchor: resolve the actual dounmount function and patch in-function only.
        # We intentionally avoid broad scan fallbacks to prevent false-positive patching.
        str_off = self.find_string(b"dounmount:")
        if str_off >= 0:
            refs = self.find_string_refs(str_off)
            for adrp_off, _, _ in refs:
                caller = self.find_function_start(adrp_off)
                if caller < 0:
                    continue
                caller_end = self._find_func_end(caller, 0x2000)
                result = self._find_mac_check_bl(caller, caller_end)
                if result:
                    nop_patch = asm("nop")
                    self._assert_patch_decode(nop_patch, "nop")
                    self.emit(result, nop_patch, "NOP [_dounmount MAC check]")
                    return True
        func_end = self._find_func_end(foff, 0x4000)
        patch_off = self._find_upstream_cleanup_call(foff, func_end)
        if patch_off is None:
            self._log("  [-] upstream dounmount cleanup call not found")
            return False

        self._log("  [-] patch site not found (unsafe fallback disabled)")
        return False
        self.emit(patch_off, NOP, "NOP [_dounmount upstream cleanup call]")
        return True

    def _find_mac_check_bl(self, start, end):
        """Find mov w1,#0; mov x2,#0; bl TARGET pattern. Returns BL offset or None."""
        for off in range(start, end - 8, 4):
            d = self._disas_at(off, 3)
            if len(d) < 3:
    def _find_upstream_cleanup_call(self, start, end):
        hits = []
        for off in range(start, end - 0x1C, 4):
            d = self._disas_at(off, 8)
            if len(d) < 8:
                continue
            i0, i1, i2 = d[0], d[1], d[2]
            if i0.mnemonic != "mov" or i1.mnemonic != "mov" or i2.mnemonic != "bl":
            i0, i1, i2, i3, i4, i5, i6, i7 = d
            if i0.mnemonic != "mov" or i1.mnemonic != "mov" or i2.mnemonic != "mov" or i3.mnemonic != "mov":
                continue
            # Check: mov w1, #0; mov x2, #0
            if "w1" in i0.op_str and "#0" in i0.op_str:
                if "x2" in i1.op_str and "#0" in i1.op_str:
                    return off + 8
            # Also match: mov x2, #0; mov w1, #0
            if "x2" in i0.op_str and "#0" in i0.op_str:
                if "w1" in i1.op_str and "#0" in i1.op_str:
                    return off + 8
            if i4.mnemonic != "bl" or i5.mnemonic != "mov" or i6.mnemonic != "bl":
                continue
            if i7.mnemonic != "cbz":
                continue

            src_reg = self._mov_reg_reg(i0, "x0")
            if src_reg is None:
                continue
            if not self._mov_imm_zero(i1, "w1"):
                continue
            if not self._mov_imm_zero(i2, "w2"):
                continue
            if not self._mov_imm_zero(i3, "w3"):
                continue
            if not self._mov_reg_reg(i5, "x0", src_reg):
                continue
            if not self._cbz_uses_xreg(i7):
                continue
            hits.append(i4.address)

        if len(hits) == 1:
            return hits[0]
        return None

    def _assert_patch_decode(self, patch_bytes, expect_mnemonic, expect_op_str=None):
        insns = self._disas_n(patch_bytes, 0, 1)
        assert insns, "capstone decode failed for patch bytes"
        ins = insns[0]
        assert ins.mnemonic == expect_mnemonic, (
            f"patch decode mismatch: expected {expect_mnemonic}, got {ins.mnemonic}"
    def _mov_reg_reg(self, insn, dst_name, src_name=None):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return None
        dst, src = insn.operands
        if dst.type != ARM64_OP_REG or src.type != ARM64_OP_REG:
            return None
        if insn.reg_name(dst.reg) != dst_name:
            return None
        src_reg = insn.reg_name(src.reg)
        if src_name is not None and src_reg != src_name:
            return None
        return src_reg

    def _mov_imm_zero(self, insn, dst_name):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return False
        dst, src = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg) == dst_name
            and src.type == ARM64_OP_IMM
            and src.imm == 0
        )
        if expect_op_str is not None:
            assert ins.op_str == expect_op_str, (
                f"patch decode mismatch: expected op_str '{expect_op_str}', "
                f"got '{ins.op_str}'"
            )

    def _cbz_uses_xreg(self, insn):
        if len(insn.operands) != 2:
            return False
        reg_op, imm_op = insn.operands
        return reg_op.type == ARM64_OP_REG and imm_op.type == ARM64_OP_IMM and insn.reg_name(reg_op.reg).startswith("x")

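The 8-instruction window check in `_find_upstream_cleanup_call` above can be reduced to the following standalone sketch over `(mnemonic, op_str)` tuples. This is an illustration only: the real mixin inspects capstone operand objects and byte offsets, while here operands are plain strings.

```python
# Standalone sketch of the dounmount cleanup-call window match. Returns the
# index of the first `bl` in the unique
# `mov x0,src ; mov w1/w2/w3,#0 ; bl ; mov x0,src ; bl ; cbz x*` run,
# or None when the match is absent or ambiguous (fail-closed).
def find_cleanup_call(insns):
    hits = []
    for i in range(len(insns) - 7):
        w = insns[i:i + 8]
        if [x[0] for x in w] != ["mov", "mov", "mov", "mov", "bl", "mov", "bl", "cbz"]:
            continue
        if not w[0][1].startswith("x0, x"):
            continue  # first mov must copy a mount-like x register into x0
        src = w[0][1].split(", ")[1]
        if (w[1][1], w[2][1], w[3][1]) != ("w1, #0", "w2, #0", "w3, #0"):
            continue  # three zeroed word arguments
        if w[5][1] != f"x0, {src}":
            continue  # second call reuses the same mount-like register
        if not w[7][1].startswith("x"):
            continue  # trailing cbz tests an x register
        hits.append(i + 4)  # index of the first bl (the NOP target)
    return hits[0] if len(hits) == 1 else None

insns = [
    ("mov", "x0, x20"),
    ("mov", "w1, #0"),
    ("mov", "w2, #0"),
    ("mov", "w3, #0"),
    ("bl", "#0x1000"),
    ("mov", "x0, x20"),
    ("bl", "#0x2000"),
    ("cbz", "x19, #0x40"),
]
idx = find_cleanup_call(insns)
```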
@@ -5,30 +5,14 @@ class KernelJBPatchLoadDylinkerMixin:
    def patch_load_dylinker(self):
        """Bypass load_dylinker policy gate in the dyld path.

        Strict selector:
        1. Anchor function by '/usr/lib/dyld' string reference.
        Raw PCC 26.1 kernels resolve this patch through a single runtime path:
        1. Anchor the containing function by a kernel-text reference to
           '/usr/lib/dyld'.
        2. Inside that function, find BL <check>; CBZ W0, <allow>.
        3. Replace BL with unconditional B to <allow>.
        """
        self._log("\n[JB] _load_dylinker: skip dyld policy check")

        # Try symbol first
        foff = self._resolve_symbol("_load_dylinker")
        if foff >= 0:
            func_end = self._find_func_end(foff, 0x2000)
            result = self._find_bl_cbz_gate(foff, func_end)
            if result:
                bl_off, allow_target = result
                b_bytes = self._encode_b(bl_off, allow_target)
                if b_bytes:
                    self.emit(
                        bl_off,
                        b_bytes,
                        f"b #0x{allow_target - bl_off:X} [_load_dylinker]",
                    )
                    return True

        # Fallback: strict dyld-anchor function profile.
        str_off = self.find_string(b"/usr/lib/dyld")
        if str_off < 0:
            self._log("  [-] '/usr/lib/dyld' string not found")

@@ -37,9 +21,7 @@ class KernelJBPatchLoadDylinkerMixin:
        kstart, kend = self._get_kernel_text_range()
        refs = self.find_string_refs(str_off, kstart, kend)
        if not refs:
            refs = self.find_string_refs(str_off)
        if not refs:
            self._log("  [-] no code refs to '/usr/lib/dyld'")
            self._log("  [-] no kernel-text code refs to '/usr/lib/dyld'")
            return False

        for adrp_off, _, _ in refs:

@@ -65,7 +47,7 @@ class KernelJBPatchLoadDylinkerMixin:
            )
            return True

        self._log("  [-] dyld policy gate not found")
        self._log("  [-] dyld policy gate not found in dyld-anchored function")
        return False

    def _find_bl_cbz_gate(self, start, end):

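Resolving the `CBZ W0, <allow>` target for the rewrite above requires sign-extending the 19-bit branch immediate. A standalone sketch using the standard A64 CBZ/CBNZ encoding (the helper name is illustrative, not the repo's `_find_bl_cbz_gate` internals):

```python
# Decode the branch target of an A64 CBZ/CBNZ word. Standard encoding:
# imm19 occupies bits [23:5], is scaled by 4, sign-extended, PC-relative.
def cbz_target(word: int, pc: int) -> int:
    imm19 = (word >> 5) & 0x7FFFF
    if imm19 & 0x40000:  # sign bit of the 19-bit field
        imm19 -= 0x80000
    return pc + (imm19 << 2)

# cbz w0, #+0x20 at pc=0x1000: 0x34000000 | ((0x20 >> 2) << 5) | rt(w0)
word = 0x34000000 | ((0x20 >> 2) << 5)
allow = cbz_target(word, 0x1000)
```

The unconditional `B` that replaces the `BL` must land on exactly this decoded `<allow>` address, so the matcher computes it before encoding the patch.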
@@ -1,131 +1,166 @@
|
||||
"""Mixin: KernelJBPatchMacMountMixin."""
|
||||
|
||||
from .kernel_jb_base import ARM64_OP_IMM, asm
|
||||
from .kernel_asm import _cs
|
||||
from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG, asm
|
||||
|
||||
|
||||
class KernelJBPatchMacMountMixin:
|
||||
def patch_mac_mount(self):
|
||||
"""Bypass MAC mount check in ___mac_mount-like flow.
|
||||
"""Apply the upstream twin bypasses in the mount-role wrapper.
|
||||
|
||||
Old kernels may expose ___mac_mount/__mac_mount symbols directly.
|
||||
Stripped kernels are resolved via mount_common() call graph.
|
||||
We patch the conditional deny branch (`cbnz w0, ...`) rather than
|
||||
NOP'ing the BL itself, to avoid stale register state forcing errors.
|
||||
Preferred design target is `/Users/qaq/Desktop/patch_fw.py`, which
|
||||
patches two sites in the wrapper that decides whether execution can
|
||||
continue into `mount_common()`:
|
||||
|
||||
- `tbnz wFlags, #5, deny` -> `nop`
|
||||
- `ldrb w8, [xTmp, #1]` -> `mov x8, xzr`
|
||||
|
||||
Runtime design avoids unstable symbols by:
|
||||
1. recovering `mount_common` from the in-image `"mount_common()"`
|
||||
+        string,
+        2. scanning only a bounded neighborhood for local callers of that
+           recovered function,
+        3. selecting the unique caller that contains both upstream gates.
+        """
-        self._log("\n[JB] ___mac_mount: bypass deny branch")
+        self._log("\n[JB] ___mac_mount: upstream twin bypass")
 
-        # Try symbol first
-        foff = self._resolve_symbol("___mac_mount")
-        if foff < 0:
-            foff = self._resolve_symbol("__mac_mount")
-        strict = False
-        if foff < 0:
-            strict = True
-            # Find via 'mount_common()' string → function area
-            str_off = self.find_string(b"mount_common()")
-            if str_off >= 0:
-                refs = self.find_string_refs(str_off, *self.kern_text)
-                if refs:
-                    mount_common_func = self.find_function_start(refs[0][0])
-                    if mount_common_func >= 0:
-                        mc_end = self._find_func_end(mount_common_func, 0x2000)
-                        for off in range(mount_common_func, mc_end, 4):
-                            target = self._is_bl(off)
-                            if (
-                                target >= 0
-                                and self.kern_text[0] <= target < self.kern_text[1]
-                            ):
-                                te = self._find_func_end(target, 0x1000)
-                                site = self._find_mac_deny_site(
-                                    target, te, require_error_return=True
-                                )
-                                if site:
-                                    foff = target
-                                    break
-
-        if foff < 0:
-            self._log(" [-] function not found")
+        mount_common = self._find_func_by_string(b"mount_common()", self.kern_text)
+        if mount_common < 0:
+            self._log(" [-] mount_common anchor function not found")
             return False
 
-        func_end = self._find_func_end(foff, 0x1000)
-        site = self._find_mac_deny_site(
-            foff,
-            func_end,
-            require_error_return=strict,
-        )
-        if not site and strict:
-            # Last-resort in stripped builds: still require the BL+CBNZ(w0) shape.
-            site = self._find_mac_deny_site(foff, func_end, require_error_return=False)
-        if not site:
-            self._log(" [-] patch sites not found")
-            return False
-
-        bl_off, cb_off = site
-        nop_patch = asm("nop")
-        self._assert_patch_decode(nop_patch, "nop")
-        self.emit(cb_off, nop_patch, "NOP [___mac_mount deny branch]")
-
-        # Legacy companion tweak, kept for older layouts where x8 carries policy state.
-        for off2 in range(bl_off + 8, min(bl_off + 0x60, func_end), 4):
-            d2 = self._disas_at(off2)
-            if not d2:
+        search_start = max(self.kern_text[0], mount_common - 0x5000)
+        search_end = min(self.kern_text[1], mount_common + 0x5000)
+        candidates = {}
+        for off in range(search_start, search_end, 4):
+            target = self._is_bl(off)
+            if target != mount_common:
                 continue
-            if d2[0].mnemonic == "mov" and d2[0].op_str.startswith("x8,"):
-                if d2[0].op_str != "x8, xzr":
-                    mov_patch = asm("mov x8, xzr")
-                    self._assert_patch_decode(mov_patch, "mov", "x8, xzr")
-                    self.emit(off2, mov_patch, "mov x8,xzr [___mac_mount]")
-                break
+            caller = self.find_function_start(off)
+            if caller < 0 or caller == mount_common or caller in candidates:
+                continue
+            caller_end = self._find_func_end(caller, 0x1200)
+            sites = self._match_upstream_mount_wrapper(caller, caller_end, mount_common)
+            if sites is not None:
+                candidates[caller] = sites
+
+        if len(candidates) != 1:
+            self._log(f" [-] expected 1 upstream mac_mount candidate, found {len(candidates)}")
+            return False
+
+        branch_off, mov_off = next(iter(candidates.values()))
+        self.emit(branch_off, asm("nop"), "NOP [___mac_mount upstream flag gate]")
+        self.emit(mov_off, asm("mov x8, xzr"), "mov x8,xzr [___mac_mount upstream state clear]")
         return True
-    def _find_mac_deny_site(self, start, end, require_error_return):
-        for off in range(start, end - 8, 4):
-            d0 = self._disas_at(off)
-            if not d0 or d0[0].mnemonic != "bl":
-                continue
-            d1 = self._disas_at(off + 4)
-            if not d1 or d1[0].mnemonic != "cbnz":
-                continue
-            if not d1[0].op_str.replace(" ", "").startswith("w0,"):
-                continue
-            if require_error_return:
-                branch_target = self._branch_target(off + 4)
-                if branch_target is None or not (off < branch_target < end):
-                    continue
-                if not self._looks_like_error_return(branch_target):
-                    continue
-            return (off, off + 4)
-        return None
-    def _branch_target(self, off):
-        d = self._disas_at(off)
-        if not d:
+    def _match_upstream_mount_wrapper(self, start, end, mount_common):
+        call_sites = []
+        for off in range(start, end, 4):
+            if self._is_bl(off) == mount_common:
+                call_sites.append(off)
+        if not call_sites:
             return None
-        for op in reversed(d[0].operands):
-            if op.type == ARM64_OP_IMM:
-                return op.imm
 
+        flag_gate = self._find_flag_gate(start, end)
+        if flag_gate is None:
+            return None
+
+        state_gate = self._find_state_gate(start, end, call_sites)
+        if state_gate is None:
+            return None
+
+        return (flag_gate, state_gate)
+    def _find_flag_gate(self, start, end):
+        hits = []
+        for off in range(start, end - 4, 4):
+            d = self._disas_at(off)
+            if not d:
+                continue
+            insn = d[0]
+            if insn.mnemonic != "tbnz" or not self._is_bit_branch(insn, "w", 5):
+                continue
+            target = insn.operands[2].imm
+            if not (start <= target < end):
+                continue
+            td = self._disas_at(target)
+            if not td or not self._is_mov_w_imm_value(td[0], 1):
+                continue
+            hits.append(off)
+        if len(hits) == 1:
+            return hits[0]
+        return None
-    def _looks_like_error_return(self, target):
-        d = self._disas_at(target)
-        if not d or d[0].mnemonic != "mov":
-            return False
-        op = d[0].op_str.replace(" ", "")
-        if op.startswith("w0,#") and op != "w0,#0":
-            return True
-        if op.startswith("x0,#") and op != "x0,#0":
-            return True
-        return False
+    def _find_state_gate(self, start, end, call_sites):
+        hits = []
+        for off in range(start, end - 8, 4):
+            d = self._disas_at(off, 3)
+            if len(d) < 3:
+                continue
+            i0, i1, i2 = d
+            if not self._is_add_x_imm(i0, 0x70):
+                continue
+            if not self._is_ldrb_same_base_plus_1(i1, i0.operands[0].reg):
+                continue
+            if i2.mnemonic != "tbz" or not self._is_bit_branch(i2, self._reg_name(i1.operands[0].reg), 6):
+                continue
+            target = i2.operands[2].imm
+            if not any(target <= call_off <= target + 0x80 for call_off in call_sites):
+                continue
+            hits.append(i1.address)
+        if len(hits) == 1:
+            return hits[0]
+        return None
-    def _assert_patch_decode(self, patch_bytes, expect_mnemonic, expect_op_str=None):
-        insns = self._disas_n(patch_bytes, 0, 1)
-        assert insns, "capstone decode failed for patch bytes"
-        ins = insns[0]
-        assert ins.mnemonic == expect_mnemonic, (
-            f"patch decode mismatch: expected {expect_mnemonic}, got {ins.mnemonic}"
+    def _is_bit_branch(self, insn, reg_prefix_or_name, bit):
+        if len(insn.operands) != 3:
+            return False
+        reg_op, bit_op, target_op = insn.operands
+        if reg_op.type != ARM64_OP_REG or bit_op.type != ARM64_OP_IMM or target_op.type != ARM64_OP_IMM:
+            return False
+        reg_name = self._reg_name(reg_op.reg)
+        if len(reg_prefix_or_name) == 1:
+            if not reg_name.startswith(reg_prefix_or_name):
+                return False
+        elif reg_name != reg_prefix_or_name:
+            return False
+        return bit_op.imm == bit
+
+    def _is_mov_w_imm_value(self, insn, imm):
+        if insn.mnemonic != "mov" or len(insn.operands) != 2:
+            return False
+        dst, src = insn.operands
+        return (
+            dst.type == ARM64_OP_REG
+            and src.type == ARM64_OP_IMM
+            and self._reg_name(dst.reg).startswith("w")
+            and src.imm == imm
         )
-        if expect_op_str is not None:
-            assert ins.op_str == expect_op_str, (
-                f"patch decode mismatch: expected op_str '{expect_op_str}', "
-                f"got '{ins.op_str}'"
-            )
 
+    def _is_add_x_imm(self, insn, imm):
+        if insn.mnemonic != "add" or len(insn.operands) != 3:
+            return False
+        dst, src, imm_op = insn.operands
+        return (
+            dst.type == ARM64_OP_REG
+            and src.type == ARM64_OP_REG
+            and imm_op.type == ARM64_OP_IMM
+            and self._reg_name(dst.reg).startswith("x")
+            and self._reg_name(src.reg).startswith("x")
+            and imm_op.imm == imm
+        )
+
+    def _is_ldrb_same_base_plus_1(self, insn, base_reg):
+        if insn.mnemonic != "ldrb" or len(insn.operands) < 2:
+            return False
+        dst, src = insn.operands[:2]
+        return (
+            dst.type == ARM64_OP_REG
+            and src.type == ARM64_OP_MEM
+            and src.mem.base == base_reg
+            and src.mem.disp == 1
+            and self._reg_name(dst.reg).startswith("w")
+        )
+
+    def _reg_name(self, reg):
+        return _cs.reg_name(reg)
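The matchers above all lean on an `_is_bl` primitive that decodes a `BL` and returns its target. As a minimal, self-contained sketch of that decode (the function name and signature here are illustrative, not the repo's helper), the AArch64 `BL` encoding is the fixed opcode `0b100101` in the top six bits plus a signed 26-bit word offset:

```python
import struct

def bl_target(insn_bytes: bytes, insn_off: int) -> int:
    """Decode an AArch64 BL at file offset insn_off; return its target offset, or -1."""
    (word,) = struct.unpack("<I", insn_bytes)
    if (word & 0xFC000000) != 0x94000000:  # BL: bits[31:26] == 0b100101
        return -1
    imm26 = word & 0x03FFFFFF
    if imm26 & 0x02000000:  # sign-extend the 26-bit word offset
        imm26 -= 0x04000000
    return insn_off + imm26 * 4
```

For example, `bl_target(b"\x02\x00\x00\x94", 0x100)` resolves a forward `bl #+8` to `0x108`, while a NOP word is rejected with `-1`.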
@@ -5,69 +5,41 @@ from .kernel_jb_base import NOP
 
 class KernelJBPatchNvramMixin:
     def patch_nvram_verify_permission(self):
-        """NOP verification in IONVRAMController's verifyPermission.
-        Anchor: 'krn.' string (NVRAM key prefix) → xref → function → TBZ/TBNZ.
+        """NOP the verifyPermission gate in the `krn.` key-prefix path.
+
+        Runtime reveal is string-anchored only: enumerate code refs to `"krn."`,
+        recover the containing function for each ref, then pick the unique
+        `tbz/tbnz` guard immediately before that key-prefix load sequence.
         """
         self._log("\n[JB] verifyPermission (NVRAM): NOP")
 
-        # Try symbol first
-        sym_off = self._resolve_symbol(
-            "__ZL16verifyPermission16IONVRAMOperationPKhPKcb"
-        )
-        if sym_off < 0:
-            for sym, off in self.symbols.items():
-                if "verifyPermission" in sym and "NVRAM" in sym:
-                    sym_off = off
-                    break
-
-        # String anchor: "krn." is referenced in verifyPermission.
-        # The TBZ/TBNZ guard is immediately before the ADRP+ADD that
-        # loads the "krn." string, so search backward from that ref.
         str_off = self.find_string(b"krn.")
-        ref_off = -1
-        if str_off >= 0:
-            refs = self.find_string_refs(str_off)
-            if refs:
-                ref_off = refs[0][0]  # ADRP instruction offset
-
-        foff = (
-            sym_off
-            if sym_off >= 0
-            else (self.find_function_start(ref_off) if ref_off >= 0 else -1)
-        )
-
-        if foff < 0:
-            # Fallback: try NVRAM entitlement string
-            ent_off = self.find_string(b"com.apple.private.iokit.nvram-write-access")
-            if ent_off >= 0:
-                ent_refs = self.find_string_refs(ent_off)
-                if ent_refs:
-                    foff = self.find_function_start(ent_refs[0][0])
-
-        if foff < 0:
-            self._log(" [-] function not found")
+        if str_off < 0:
+            self._log(" [-] 'krn.' string not found")
             return False
 
-        func_end = self._find_func_end(foff, 0x600)
+        refs = self.find_string_refs(str_off)
+        if not refs:
+            self._log(" [-] no code refs to 'krn.'")
+            return False
 
-        # Strategy 1: search backward from "krn." string ref for
-        # nearest TBZ/TBNZ — the guard branch is typically within
-        # a few instructions before the ADRP that loads "krn.".
-        if ref_off > foff:
+        hits = []
+        seen = set()
+        for ref_off, _, _ in refs:
+            foff = self.find_function_start(ref_off)
+            if foff < 0 or foff in seen:
+                continue
+            seen.add(foff)
             for off in range(ref_off - 4, max(foff - 4, ref_off - 0x20), -4):
                 d = self._disas_at(off)
-                if d and d[0].mnemonic in ("tbnz", "tbz"):
-                    self.emit(off, NOP, "NOP [verifyPermission NVRAM]")
-                    return True
+                if d and d[0].mnemonic in ('tbnz', 'tbz'):
+                    hits.append(off)
+                    break
 
-        # Strategy 2: scan full function for first TBZ/TBNZ
-        for off in range(foff, func_end, 4):
-            d = self._disas_at(off)
-            if not d:
-                continue
-            if d[0].mnemonic in ("tbnz", "tbz"):
-                self.emit(off, NOP, "NOP [verifyPermission NVRAM]")
-                return True
+        hits = sorted(set(hits))
+        if len(hits) != 1:
+            self._log(f" [-] expected 1 NVRAM verifyPermission gate, found {len(hits)}")
+            return False
 
-        self._log(" [-] TBZ/TBNZ not found in function")
-        return False
+        self.emit(hits[0], NOP, 'NOP [verifyPermission NVRAM]')
+        return True
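The NVRAM hunk (like several others in this commit) converges on the same fail-closed idiom: collect candidate sites, deduplicate, and patch only when exactly one survives. A minimal standalone sketch of that selection rule (hypothetical helper name, not code from this repo):

```python
def unique_site(hits):
    """Fail-closed selection: return the patch site only when exactly one
    deduplicated candidate matched; otherwise return None and patch nothing."""
    uniq = sorted(set(hits))
    return uniq[0] if len(uniq) == 1 else None
```

This is why the new matchers log "expected 1 ... found N" instead of patching the first hit: an ambiguous match on a new kernel build aborts rather than corrupting an unrelated site.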
@@ -5,12 +5,13 @@ from .kernel_jb_base import ARM64_OP_REG, ARM64_OP_IMM, ARM64_REG_W0, CMP_W0_W0
 
 class KernelJBPatchPostValidationMixin:
     def patch_post_validation_additional(self):
-        """Additional postValidation CMP W0,W0 in AMFI code signing path.
+        """Rewrite the SHA256-only reject compare in AMFI's post-validation path.
 
-        Low-risk strategy:
-        1) Prefer the legacy strict matcher.
-        2) Fallback to direct `cmp w0,#imm` replacement in AMFI text when
-           strict shape is not present on newer kernels.
+        Runtime reveal is string-anchored only: use the
+        `"AMFI: code signature validation failed"` xref, recover the caller,
+        then recover the BL target whose body contains the unique
+        `cmp w0,#imm ; b.ne` reject gate reached immediately after a BL.
+        No broad AMFI-text fallback is kept.
         """
         self._log("\n[JB] postValidation additional: cmp w0,w0")
@@ -26,67 +27,51 @@ class KernelJBPatchPostValidationMixin:
             self._log(" [-] no code refs")
             return False
 
-        caller_start = self.find_function_start(refs[0][0])
-        if caller_start < 0:
-            return False
-
-        bl_targets = set()
-        func_end = self._find_func_end(caller_start, 0x2000)
-        for scan in range(caller_start, func_end, 4):
-            target = self._is_bl(scan)
-            if target >= 0:
-                bl_targets.add(target)
-
-        patched = 0
-        for target in sorted(bl_targets):
-            if not (self.amfi_text[0] <= target < self.amfi_text[1]):
-                continue
-            callee_end = self._find_func_end(target, 0x200)
-            for off in range(target, callee_end, 4):
-                d = self._disas_at(off, 2)
-                if len(d) < 2:
-                    continue
-                i0, i1 = d[0], d[1]
-                if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne":
-                    continue
-                ops = i0.operands
-                if len(ops) < 2:
-                    continue
-                if ops[0].type != ARM64_OP_REG or ops[0].reg != ARM64_REG_W0:
-                    continue
-                if ops[1].type != ARM64_OP_IMM:
-                    continue
-                has_bl = False
-                for back in range(off - 4, max(off - 12, target), -4):
-                    bt = self._is_bl(back)
-                    if bt >= 0:
-                        has_bl = True
-                        break
-                if has_bl:
-                    self.emit(off, CMP_W0_W0, f"cmp w0,w0 [postValidation additional]")
-                    patched += 1
-
-        if patched == 0:
-            # Fallback: patch first `cmp w0,#imm` site in AMFI text.
-            # This keeps the change local (single in-function compare rewrite)
-            # and avoids shellcode/cave behavior.
-            s, e = self.amfi_text
-            for off in range(s, e - 4, 4):
-                d = self._disas_at(off)
-                if not d or d[0].mnemonic != "cmp":
-                    continue
-                ops = d[0].operands
-                if len(ops) < 2:
-                    continue
-                if ops[0].type != ARM64_OP_REG or ops[0].reg != ARM64_REG_W0:
-                    continue
-                if ops[1].type != ARM64_OP_IMM:
-                    continue
-                self.emit(off, CMP_W0_W0, "cmp w0,w0 [postValidation additional fallback]")
-                patched = 1
-                break
-
-        if patched == 0:
-            self._log(" [-] no additional postValidation CMP sites found")
-            return False
-        return True
+        hits = []
+        seen = set()
+        for ref_off, _, _ in refs:
+            caller_start = self.find_function_start(ref_off)
+            if caller_start < 0 or caller_start in seen:
+                continue
+            seen.add(caller_start)
+            func_end = self._find_func_end(caller_start, 0x2000)
+            bl_targets = set()
+            for scan in range(caller_start, func_end, 4):
+                target = self._is_bl(scan)
+                if target >= 0:
+                    bl_targets.add(target)
+
+            for target in sorted(bl_targets):
+                if not (self.amfi_text[0] <= target < self.amfi_text[1]):
+                    continue
+                callee_end = self._find_func_end(target, 0x200)
+                for off in range(target, callee_end, 4):
+                    d = self._disas_at(off, 2)
+                    if len(d) < 2:
+                        continue
+                    i0, i1 = d[0], d[1]
+                    if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne":
+                        continue
+                    ops = i0.operands
+                    if len(ops) < 2:
+                        continue
+                    if ops[0].type != ARM64_OP_REG or ops[0].reg != ARM64_REG_W0:
+                        continue
+                    if ops[1].type != ARM64_OP_IMM:
+                        continue
+                    has_bl = False
+                    for back in range(off - 4, max(off - 12, target), -4):
+                        if self._is_bl(back) >= 0:
+                            has_bl = True
+                            break
+                    if has_bl:
+                        hits.append(off)
+
+        hits = sorted(set(hits))
+        if len(hits) != 1:
+            self._log(f" [-] expected 1 postValidation compare site, found {len(hits)}")
+            return False
+
+        self.emit(hits[0], CMP_W0_W0, "cmp w0,w0 [postValidation additional]")
+        return True
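The `CMP_W0_W0` rewrite used above works because `cmp w0, w0` is the alias of `subs wzr, w0, w0`: a register always equals itself, so the following `b.ne reject` can never be taken. A minimal sketch of that splat (hedged: the constant below is the standard AArch64 encoding; the function name is illustrative, not this repo's `emit`):

```python
import struct

# CMP W0, W0 == SUBS WZR, W0, W0 (shifted-register form), encoding 0x6B00001F.
CMP_W0_W0 = struct.pack("<I", 0x6B00001F)

def force_equal_compare(image: bytes, cmp_off: int) -> bytes:
    """Overwrite the 4-byte compare at cmp_off with `cmp w0, w0` so a
    following `b.ne` reject branch always falls through."""
    buf = bytearray(image)
    buf[cmp_off:cmp_off + 4] = CMP_W0_W0
    return bytes(buf)
```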
@@ -5,55 +5,48 @@ from .kernel_jb_base import NOP
 
 class KernelJBPatchProcPidinfoMixin:
     def patch_proc_pidinfo(self):
-        """Bypass pid-0 checks in _proc_info: NOP first 2 CBZ/CBNZ on w-regs.
-        Anchor: find _proc_info via its switch-table pattern, then NOP the
-        first two CBZ/CBNZ instructions that guard against pid 0.
+        """Bypass the two early pid-0/proc-null guards in proc_pidinfo.
+
+        Reveal from the shared `_proc_info` switch-table anchor, then match the
+        precise early shape used by upstream PCC 26.1:
+            ldr  x0, [x0,#0x18]
+            cbz  x0, fail
+            bl   ...
+            cbz/cbnz wN, fail
+        Patch only those two guards.
         """
         self._log("\n[JB] _proc_pidinfo: NOP pid-0 guard (2 sites)")
 
-        # Try symbol first
-        foff = self._resolve_symbol("_proc_pidinfo")
-        if foff >= 0:
-            func_end = min(foff + 0x80, self.size)
-            hits = []
-            for off in range(foff, func_end, 4):
-                d = self._disas_at(off)
-                if (
-                    d
-                    and d[0].mnemonic in ("cbz", "cbnz")
-                    and d[0].op_str.startswith("w")
-                ):
-                    hits.append(off)
-            if len(hits) >= 2:
-                self.emit(hits[0], NOP, "NOP [_proc_pidinfo pid-0 guard A]")
-                self.emit(hits[1], NOP, "NOP [_proc_pidinfo pid-0 guard B]")
-                return True
-
         # Reuse proc_info anchor from proc_security path (cached).
         proc_info_func, _ = self._find_proc_info_anchor()
 
         if proc_info_func < 0:
             self._log(" [-] _proc_info function not found")
             return False
 
-        # Find first CBZ x0 (null proc check) and the CBZ/CBNZ wN after
-        # the first BL in the prologue region
-        hits = []
+        first_guard = None
+        second_guard = None
         prologue_end = min(proc_info_func + 0x80, self.size)
-        for off in range(proc_info_func, prologue_end, 4):
-            d = self._disas_at(off)
-            if not d:
-                continue
-            i = d[0]
-            if i.mnemonic in ("cbz", "cbnz"):
-                # CBZ x0 (null check) or CBZ wN (pid-0 check)
-                hits.append(off)
+        for off in range(proc_info_func, prologue_end - 0x10, 4):
+            d0 = self._disas_at(off)
+            d1 = self._disas_at(off + 4)
+            d2 = self._disas_at(off + 8)
+            d3 = self._disas_at(off + 12)
+            if not d0 or not d1 or not d2 or not d3:
+                continue
+            i0, i1, i2, i3 = d0[0], d1[0], d2[0], d3[0]
+            if (
+                i0.mnemonic == 'ldr' and i0.op_str.startswith('x0, [x0, #0x18]') and
+                i1.mnemonic == 'cbz' and i1.op_str.startswith('x0, ') and
+                i2.mnemonic == 'bl' and
+                i3.mnemonic in ('cbz', 'cbnz') and i3.op_str.startswith('w')
+            ):
+                first_guard = off + 4
+                second_guard = off + 12
+                break
 
-        if len(hits) < 2:
-            self._log(f" [-] expected 2+ early CBZ/CBNZ, found {len(hits)}")
+        if first_guard is None or second_guard is None:
+            self._log(' [-] precise proc_pidinfo guard pair not found')
             return False
 
-        self.emit(hits[0], NOP, "NOP [_proc_pidinfo pid-0 guard A]")
-        self.emit(hits[1], NOP, "NOP [_proc_pidinfo pid-0 guard B]")
+        self.emit(first_guard, NOP, 'NOP [_proc_pidinfo pid-0 guard A]')
+        self.emit(second_guard, NOP, 'NOP [_proc_pidinfo pid-0 guard B]')
         return True
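The new proc_pidinfo matcher is a four-instruction sliding window over decoded `(mnemonic, op_str)` pairs. A self-contained sketch of that matching logic, operating on a plain list instead of a disassembler (hypothetical helper, mirroring the shape described in the hunk above):

```python
def find_guard_pair(insns):
    """Match the 4-insn shape `ldr x0,[x0,#0x18] ; cbz x0,... ; bl ... ;
    cbz/cbnz wN,...` over a decoded (mnemonic, op_str) list.
    Returns (null_guard_index, pid_guard_index) or None."""
    for i in range(len(insns) - 3):
        (m0, o0), (m1, o1), (m2, _), (m3, o3) = insns[i:i + 4]
        if (m0 == "ldr" and o0.startswith("x0, [x0, #0x18]")
                and m1 == "cbz" and o1.startswith("x0, ")
                and m2 == "bl"
                and m3 in ("cbz", "cbnz") and o3.startswith("w")):
            return (i + 1, i + 3)
    return None
```

Matching the whole window, rather than the "first two CBZ/CBNZ" of the old code, is what keeps the patch from landing on unrelated early guards.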
@@ -15,13 +15,6 @@ class KernelJBPatchProcSecurityMixin:
         """
         self._log("\n[JB] _proc_security_policy: mov x0,#0; ret")
 
-        # Try symbol first
-        foff = self._resolve_symbol("_proc_security_policy")
-        if foff >= 0:
-            self.emit(foff, MOV_X0_0, "mov x0,#0 [_proc_security_policy]")
-            self.emit(foff + 4, RET, "ret [_proc_security_policy]")
-            return True
-
         # Find _proc_info by switch pattern:
         #   sub wN,wM,#1 ; cmp wN,#0x21
         proc_info_func, switch_off = self._find_proc_info_anchor()
@@ -1,21 +1,31 @@
 """Mixin: KernelJBPatchSandboxExtendedMixin."""
 
-from .kernel_jb_base import MOV_X0_0, RET
+from .kernel_jb_base import MOV_X0_0, RET, struct, _rd64
 
 
 class KernelJBPatchSandboxExtendedMixin:
     def patch_sandbox_hooks_extended(self):
-        """Stub remaining sandbox MACF hooks (JB extension beyond base 5 hooks)."""
-        self._log("\n[JB] Sandbox extended hooks: mov x0,#0; ret")
+        """Retarget extended Sandbox MACF hooks to the common allow stub.
+
+        Upstream `patch_fw.py` rewrites the `mac_policy_ops` entries rather than
+        patching each hook body. Keep the same runtime strategy here: recover
+        `mac_policy_ops` from `mac_policy_conf`, recover the shared
+        `mov x0,#0; ret` Sandbox stub, then retarget the selected ops entries
+        while preserving their chained-fixup/PAC metadata.
+        """
+        self._log("\n[JB] Sandbox extended hooks: retarget ops entries to allow stub")
 
         ops_table = self._find_sandbox_ops_table_via_conf()
         if ops_table is None:
             return False
 
-        HOOK_INDICES_EXT = {
-            # IOKit MACF hooks (ops +0x648..+0x690 range on current kernels).
-            # Canonical mpo_* names are not fully symbol-resolved in local KC data,
-            # so keep index-stable labels to avoid misnaming.
+        allow_stub = self._find_sandbox_allow_stub()
+        if allow_stub is None:
+            self._log(" [-] common Sandbox allow stub not found")
+            return False
+
+        hook_indices_ext = {
             "iokit_check_201": 201,
             "iokit_check_202": 202,
             "iokit_check_203": 203,
@@ -54,29 +64,52 @@ class KernelJBPatchSandboxExtendedMixin:
             "vnode_check_fsgetpath": 316,
         }
 
-        sb_start, sb_end = self.sandbox_text
         patched = 0
-        seen = set()
 
-        for hook_name, idx in HOOK_INDICES_EXT.items():
-            func_off = self._read_ops_entry(ops_table, idx)
-            if func_off is None or func_off <= 0:
-                continue
-            if not (sb_start <= func_off < sb_end):
-                continue
-            if func_off in seen:
-                continue
-            seen.add(func_off)
-
-            self.emit(func_off, MOV_X0_0, f"mov x0,#0 [_hook_{hook_name}]")
-            self.emit(func_off + 4, RET, f"ret [_hook_{hook_name}]")
+        for hook_name, idx in hook_indices_ext.items():
+            entry_off = ops_table + idx * 8
+            if entry_off + 8 > self.size:
+                continue
+            entry_raw = _rd64(self.raw, entry_off)
+            if entry_raw == 0:
+                continue
+            entry_new = self._encode_auth_rebase_like(entry_raw, allow_stub)
+            if entry_new is None:
+                continue
+            self.emit(
+                entry_off,
+                entry_new,
+                f"ops[{idx}] -> allow stub [_hook_{hook_name}]",
+            )
             patched += 1
 
         if patched == 0:
-            self._log(" [-] no extended sandbox hooks patched")
+            self._log(" [-] no extended sandbox hooks retargeted")
             return False
         return True
 
-    # ══════════════════════════════════════════════════════════════
-    # Group B: Simple patches
-    # ══════════════════════════════════════════════════════════════
+    def _find_sandbox_allow_stub(self):
+        """Return the common Sandbox `mov x0,#0; ret` stub used by patch_fw.
+
+        On PCC 26.1 research/release there are two such tiny stubs in Sandbox
+        text; the higher-address one matches upstream `patch_fw.py`
+        (`0x23B73BC` research, `0x22A78BC` release). Keep the reveal
+        structural: scan Sandbox text for 2-insn `mov x0,#0; ret` stubs and
+        select the highest-address candidate.
+        """
+        sb_start, sb_end = self.sandbox_text
+        hits = []
+        for off in range(sb_start, sb_end - 8, 4):
+            if self.raw[off:off + 4] == MOV_X0_0 and self.raw[off + 4:off + 8] == RET:
+                hits.append(off)
+        if len(hits) < 1:
+            return None
+        allow_stub = max(hits)
+        self._log(f" [+] common Sandbox allow stub at 0x{allow_stub:X}")
+        return allow_stub
+
+    @staticmethod
+    def _encode_auth_rebase_like(orig_val, target_off):
+        """Retarget an auth-rebase chained pointer while preserving PAC bits."""
+        if (orig_val & (1 << 63)) == 0:
+            return None
+        return struct.pack("<Q", (orig_val & ~0xFFFFFFFF) | (target_off & 0xFFFFFFFF))
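The `_encode_auth_rebase_like` helper added above keeps the upper metadata half of a chained-fixup pointer and swaps only the low 32-bit target. A standalone sketch of the same bit manipulation (hedged: this mirrors the diff's simplified model — bit 63 distinguishing auth-style entries, target in the low 32 bits — not the full chained-fixup format):

```python
import struct

def encode_auth_rebase_like(orig_val: int, target_off: int):
    """Keep the upper metadata bits of an auth-style chained pointer (bit 63
    set) and swap in a new low-32 target; return packed bytes, or None for
    non-auth entries the caller should skip."""
    if (orig_val & (1 << 63)) == 0:
        return None  # not an auth pointer in this simplified model
    return struct.pack("<Q", (orig_val & ~0xFFFFFFFF) | (target_off & 0xFFFFFFFF))
```

Rewriting the table entry this way leaves the PAC discriminator/key metadata untouched, so the boot-time fixup chain still authenticates while the hook now resolves to the allow stub.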
@@ -5,113 +5,66 @@ from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_REG, CMP_X0_X0
 
 class KernelJBPatchSharedRegionMixin:
     def patch_shared_region_map(self):
-        """Force shared region check: cmp x0,x0.
-        Anchor: '/private/preboot/Cryptexes' string → call-site fail target
-        → CMP+B.NE to same fail label.
-        """
-        self._log("\n[JB] _shared_region_map_and_slide_setup: cmp x0,x0")
+        """Match the upstream root-vs-preboot gate in shared_region setup.
+
+        Anchor class: string anchor. Resolve the setup helper from the in-image
+        `/private/preboot/Cryptexes` string, then patch the *first* compare that
+        guards the preboot lookup block:
+
+            cmp  mount_reg, root_mount_reg
+            b.eq skip_lookup
+            ...  prepare PREBOOT_CRYPTEX_PATH ...
+
+        This intentionally matches `/Users/qaq/Desktop/patch_fw.py` by forcing
+        the initial root-mount comparison to compare equal, rather than only
+        patching the later fallback compare against the looked-up preboot mount.
+        """
+        self._log("\n[JB] _shared_region_map_and_slide_setup: upstream cmp x0,x0")
 
-        # Try symbol first
-        foff = self._resolve_symbol("_shared_region_map_and_slide_setup")
+        foff = self._find_func_by_string(b"/private/preboot/Cryptexes", self.kern_text)
         if foff < 0:
-            foff = self._find_func_by_string(
-                b"/private/preboot/Cryptexes", self.kern_text
-            )
-        if foff < 0:
-            foff = self._find_func_by_string(b"/private/preboot/Cryptexes")
-        if foff < 0:
-            self._log(" [-] function not found")
+            self._log(" [-] function not found via Cryptexes anchor")
             return False
 
         func_end = self._find_func_end(foff, 0x2000)
         str_off = self.find_string(b"/private/preboot/Cryptexes")
-        refs = self.find_string_refs(str_off, foff, func_end) if str_off >= 0 else []
+        if str_off < 0:
+            self._log(" [-] Cryptexes string not found")
+            return False
 
-        # Prefer: BL ... ; CBNZ W0, fail and then CMP reg,reg ; B.NE fail.
+        refs = self.find_string_refs(str_off, foff, func_end)
+        hits = []
         for adrp_off, _, _ in refs:
-            fail_target = self._find_fail_target_after_ref(adrp_off, func_end)
-            if fail_target is None:
-                continue
-            patch_off = self._find_cmp_bne_to_target(
-                adrp_off, min(func_end, adrp_off + 0x140), fail_target
-            )
-            if patch_off is None:
-                continue
-            self.emit(
-                patch_off, CMP_X0_X0, "cmp x0,x0 [_shared_region_map_and_slide_setup]"
-            )
-            return True
+            patch_off = self._find_upstream_root_mount_cmp(foff, adrp_off)
+            if patch_off is not None:
+                hits.append(patch_off)
 
-        # Fallback: strict in-function scan for CMP reg,reg + B.NE, skipping
-        # stack canary compares against qword_FFFFFE00097BB000.
-        for off in range(foff, func_end - 4, 4):
-            d = self._disas_at(off, 2)
-            if len(d) < 2:
-                continue
-            i0, i1 = d[0], d[1]
-            if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne":
-                continue
-            ops = i0.operands
-            if len(ops) < 2:
-                continue
-            if ops[0].type == ARM64_OP_REG and ops[1].type == ARM64_OP_REG:
-                if self._is_probable_stack_canary_cmp(off):
-                    continue
-                self.emit(
-                    off, CMP_X0_X0, "cmp x0,x0 [_shared_region_map_and_slide_setup]"
-                )
-                return True
+        if len(hits) != 1:
+            self._log(" [-] upstream root-vs-preboot cmp gate not found uniquely")
+            return False
 
-        self._log(" [-] CMP+B.NE pattern not found")
-        return False
+        self.emit(
+            hits[0], CMP_X0_X0, "cmp x0,x0 [_shared_region_map_and_slide_setup]"
+        )
+        return True
 
-    def _find_fail_target_after_ref(self, ref_off, func_end):
-        """Find CBNZ W0,<target> following the Cryptexes call site."""
-        for off in range(ref_off, min(func_end - 4, ref_off + 0x60), 4):
-            d = self._disas_at(off)
-            if not d or d[0].mnemonic != "cbnz":
-                continue
-            i = d[0]
-            if not i.op_str.startswith("w0, "):
-                continue
-            if len(i.operands) >= 2 and i.operands[-1].type == ARM64_OP_IMM:
-                return i.operands[-1].imm
-        return None
+    def _find_upstream_root_mount_cmp(self, func_start, str_ref_off):
+        scan_start = max(func_start, str_ref_off - 0x24)
+        scan_end = min(str_ref_off, scan_start + 0x24)
+        for off in range(scan_start, scan_end, 4):
+            d = self._disas_at(off, 3)
+            if len(d) < 3:
+                continue
+            cmp_insn, beq_insn, next_insn = d[0], d[1], d[2]
+            if cmp_insn.mnemonic != "cmp" or beq_insn.mnemonic != "b.eq":
+                continue
+            if len(cmp_insn.operands) != 2 or len(beq_insn.operands) != 1:
+                continue
+            if cmp_insn.operands[0].type != ARM64_OP_REG or cmp_insn.operands[1].type != ARM64_OP_REG:
+                continue
+            if beq_insn.operands[0].type != ARM64_OP_IMM or beq_insn.operands[0].imm <= beq_insn.address:
+                continue
+            if next_insn.mnemonic != "str" or "xzr" not in next_insn.op_str:
+                continue
+            return cmp_insn.address
+        return None
 
-    def _find_cmp_bne_to_target(self, start, end, target):
-        """Find CMP reg,reg; B.NE <target> in range."""
-        for off in range(start, end - 4, 4):
-            d = self._disas_at(off, 2)
-            if len(d) < 2:
-                continue
-            i0, i1 = d[0], d[1]
-            if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne":
-                continue
-            ops = i0.operands
-            if len(ops) < 2:
-                continue
-            if ops[0].type != ARM64_OP_REG or ops[1].type != ARM64_OP_REG:
-                continue
-            if len(i1.operands) < 1 or i1.operands[-1].type != ARM64_OP_IMM:
-                continue
-            if i1.operands[-1].imm != target:
-                continue
-            if self._is_probable_stack_canary_cmp(off):
-                continue
-            return off
-        return None
-
-    def _is_probable_stack_canary_cmp(self, cmp_off):
-        """Heuristic: skip stack canary compare blocks near epilogue."""
-        for lookback in range(cmp_off - 0x10, cmp_off, 4):
-            if lookback < 0:
-                continue
-            d = self._disas_at(lookback)
-            if not d:
-                continue
-            i = d[0]
-            if i.mnemonic != "ldr":
-                continue
-            if "qword_FFFFFE00097BB000" in i.op_str:
-                return True
-        return False
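The `CMP_X0_X0` constant this hunk splats over the root-mount compare is, like its 32-bit sibling, a self-compare that always sets the equal flags. A sketch of the encoding, generalized to any xN (hedged: standard AArch64 `SUBS` shifted-register encoding; the helper name is illustrative):

```python
import struct

def encode_cmp_xn_xn(n: int) -> bytes:
    """Encode `cmp xN, xN` (alias of SUBS XZR, xN, xN): base 0xEB00001F with
    the register number placed in the Rm (bit 16) and Rn (bit 5) fields."""
    return struct.pack("<I", 0xEB00001F | (n << 16) | (n << 5))
```

With `n = 0` this reproduces the fixed 4-byte `cmp x0, x0` patch, which makes a following `b.eq skip_lookup` unconditionally taken.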
@@ -1,113 +1,140 @@
 """Mixin: KernelJBPatchSpawnPersonaMixin."""
 
-from .kernel_jb_base import ARM64_OP_IMM, NOP
+from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG, NOP
 
 
 class KernelJBPatchSpawnPersonaMixin:
     def patch_spawn_validate_persona(self):
-        """NOP persona validation: LDR + TBNZ sites.
-        Pattern: ldr wN, [xN, #0x600] (unique struct offset) followed by
-        cbz wN then tbnz wN, #1 — NOP both the LDR and the TBNZ.
+        """Restore the upstream dual-CBZ bypass in the persona helper.
+
+        Preferred design target is `/Users/qaq/Desktop/patch_fw.py`, which NOPs
+        two sibling `cbz w?, deny` guards in the small helper reached from the
+        entitlement-string-driven spawn policy wrapper.
+
+        Runtime design intentionally avoids unstable symbols:
+        1. recover the outer spawn policy function from the embedded
+           `com.apple.private.spawn-panic-crash-behavior` string,
+        2. enumerate its local BL callees,
+        3. choose the unique small callee whose local CFG matches the upstream
+           helper shape (`ldr [arg,#8] ; cbz deny ; ldr [arg,#0xc] ; cbz deny`),
+        4. NOP both `cbz` guards at the upstream sites.
         """
-        self._log("\n[JB] _spawn_validate_persona: NOP (2 sites)")
+        self._log("\n[JB] _spawn_validate_persona: upstream dual-CBZ bypass")
 
-        # Try symbol first
-        foff = self._resolve_symbol("_spawn_validate_persona")
-        if foff >= 0:
-            func_end = self._find_func_end(foff, 0x800)
-            result = self._find_persona_pattern(foff, func_end)
-            if result:
-                self.emit(result[0], NOP, "NOP [_spawn_validate_persona LDR]")
-                self.emit(result[1], NOP, "NOP [_spawn_validate_persona TBNZ]")
-                return True
-
-        anchor_func = self._find_spawn_anchor_func()
-        if anchor_func < 0:
-            self._log(" [-] spawn anchor function not found")
-            return False
-        anchor_end = self._find_func_end(anchor_func, 0x4000)
-
-        # Legacy pattern, but restricted to spawn anchor function only.
-        result = self._find_persona_pattern(anchor_func, anchor_end)
-        if result:
-            self.emit(result[0], NOP, "NOP [_spawn_validate_persona LDR]")
-            self.emit(result[1], NOP, "NOP [_spawn_validate_persona TBNZ]")
-            return True
-
-        # Newer layout: `ldr x?, [x?, #0x2b8] ; ldrh wN, [sp, #imm] ; tbz wN,#1,target`
-        # -> force skip of validation block by rewriting TBZ/TBNZ to unconditional branch.
-        gate = self._find_persona_gate_branch(anchor_func, anchor_end)
-        if gate:
-            br_off, target = gate
-            b_bytes = self._encode_b(br_off, target)
-            if b_bytes:
-                self.emit(
-                    br_off,
-                    b_bytes,
-                    f"b #0x{target - br_off:X} [_spawn_validate_persona gate]",
-                )
-                return True
-
-        self._log(" [-] pattern not found in spawn anchor (fail-closed)")
-        return False
-
-    def _find_persona_pattern(self, start, end):
-        """Find ldr wN,[xN,#0x600] + tbnz wN,#1 pattern. Returns (ldr_off, tbnz_off)."""
-        for off in range(start, end - 0x30, 4):
-            d = self._disas_at(off)
-            if not d or d[0].mnemonic != "ldr":
-                continue
-            if "#0x600" not in d[0].op_str or not d[0].op_str.startswith("w"):
-                continue
-            for delta in range(4, 0x30, 4):
-                d2 = self._disas_at(off + delta)
-                if d2 and d2[0].mnemonic == "tbnz" and "#1" in d2[0].op_str:
-                    if d2[0].op_str.startswith("w"):
-                        return (off, off + delta)
-        return None
-
-    def _find_spawn_anchor_func(self):
-        primary = self._find_func_by_string(
-            b"com.apple.private.spawn-panic-crash-behavior", self.kern_text
-        )
-        if primary >= 0:
-            return primary
-        return self._find_func_by_string(
-            b"com.apple.private.spawn-subsystem-root", self.kern_text
-        )
-
-    def _find_persona_gate_branch(self, start, end):
+        anchor_func = self._find_func_by_string(
+            b"com.apple.private.spawn-panic-crash-behavior", self.kern_text
+        )
+        if anchor_func < 0:
+            self._log(" [-] spawn entitlement anchor not found")
+            return False
+
+        anchor_end = self._find_func_end(anchor_func, 0x4000)
+        sites = self._find_upstream_persona_cbz_sites(anchor_func, anchor_end)
|
||||
if sites is None:
|
||||
self._log(" [-] upstream persona helper not found from string anchor")
|
||||
return False
|
||||
|
||||
first_cbz, second_cbz = sites
|
||||
self.emit(first_cbz, NOP, "NOP [_spawn_validate_persona pid-slot guard]")
|
||||
self.emit(second_cbz, NOP, "NOP [_spawn_validate_persona persona-slot guard]")
|
||||
return True
|
||||
|
||||
def _find_upstream_persona_cbz_sites(self, anchor_start, anchor_end):
|
||||
matches = []
|
||||
seen = set()
|
||||
for off in range(anchor_start, anchor_end, 4):
|
||||
target = self._is_bl(off)
|
||||
if target < 0 or target in seen:
|
||||
continue
|
||||
if not (self.kern_text[0] <= target < self.kern_text[1]):
|
||||
continue
|
||||
seen.add(target)
|
||||
func_end = self._find_func_end(target, 0x400)
|
||||
sites = self._match_persona_helper(target, func_end)
|
||||
if sites is not None:
|
||||
matches.append(sites)
|
||||
|
||||
if len(matches) == 1:
|
||||
return matches[0]
|
||||
if matches:
|
||||
self._log(
|
||||
" [-] ambiguous persona helper candidates: "
|
||||
+ ", ".join(f"0x{a:X}/0x{b:X}" for a, b in matches)
|
||||
)
|
||||
return None
|
||||
|
||||
def _match_persona_helper(self, start, end):
|
||||
hits = []
|
||||
for off in range(start, end - 8, 4):
|
||||
d0 = self._disas_at(off)
|
||||
d1 = self._disas_at(off + 4)
|
||||
d2 = self._disas_at(off + 8)
|
||||
if not d0 or not d1 or not d2:
|
||||
for off in range(start, end - 0x14, 4):
|
||||
d = self._disas_at(off, 6)
|
||||
if len(d) < 6:
|
||||
continue
|
||||
i0, i1, i2 = d0[0], d1[0], d2[0]
|
||||
if i0.mnemonic != "ldr" or "#0x2b8" not in i0.op_str:
|
||||
i0, i1, i2, i3, i4, i5 = d[:6]
|
||||
if not self._is_ldr_mem(i0, disp=8):
|
||||
continue
|
||||
if not i0.op_str.startswith("x"):
|
||||
if not self._is_cbz_w_same_reg(i1, i0.operands[0].reg):
|
||||
continue
|
||||
if i1.mnemonic != "ldrh" or not i1.op_str.startswith("w"):
|
||||
if not self._is_ldr_mem_same_base(i2, i0.operands[1].mem.base, disp=0xC):
|
||||
continue
|
||||
reg = i1.op_str.split(",", 1)[0].strip()
|
||||
if i2.mnemonic not in ("tbz", "tbnz"):
|
||||
if not self._is_cbz_w_same_reg(i3, i2.operands[0].reg):
|
||||
continue
|
||||
if not i2.op_str.startswith(f"{reg},"):
|
||||
deny_target = i1.operands[1].imm
|
||||
if i3.operands[1].imm != deny_target:
|
||||
continue
|
||||
if "#1" not in i2.op_str:
|
||||
if not self._looks_like_errno_return(deny_target, 1):
|
||||
continue
|
||||
|
||||
target = None
|
||||
for op in reversed(i2.operands):
|
||||
if op.type == ARM64_OP_IMM:
|
||||
target = op.imm
|
||||
break
|
||||
if target is None or not (off + 8 < target < end):
|
||||
if not self._is_mov_x_imm_zero(i4):
|
||||
continue
|
||||
hits.append((off + 8, target))
|
||||
if not self._is_ldr_mem(i5, disp=0x490):
|
||||
continue
|
||||
hits.append((i1.address, i3.address))
|
||||
|
||||
if len(hits) == 1:
|
||||
return hits[0]
|
||||
return None
|
||||
|
||||
def _looks_like_errno_return(self, target, errno_value):
|
||||
d = self._disas_at(target, 2)
|
||||
return len(d) >= 1 and self._is_mov_w_imm_value(d[0], errno_value)
|
||||
|
||||
def _is_ldr_mem(self, insn, disp):
|
||||
if insn.mnemonic != "ldr" or len(insn.operands) < 2:
|
||||
return False
|
||||
dst, src = insn.operands[:2]
|
||||
return dst.type == ARM64_OP_REG and src.type == ARM64_OP_MEM and src.mem.disp == disp
|
||||
|
||||
def _is_ldr_mem_same_base(self, insn, base_reg, disp):
|
||||
return self._is_ldr_mem(insn, disp) and insn.operands[1].mem.base == base_reg
|
||||
|
||||
def _is_cbz_w_same_reg(self, insn, reg):
|
||||
if insn.mnemonic != "cbz" or len(insn.operands) != 2:
|
||||
return False
|
||||
op0, op1 = insn.operands
|
||||
return (
|
||||
op0.type == ARM64_OP_REG
|
||||
and op0.reg == reg
|
||||
and op1.type == ARM64_OP_IMM
|
||||
and insn.reg_name(op0.reg).startswith("w")
|
||||
)
|
||||
|
||||
def _is_mov_x_imm_zero(self, insn):
|
||||
if insn.mnemonic != "mov" or len(insn.operands) != 2:
|
||||
return False
|
||||
dst, src = insn.operands
|
||||
return (
|
||||
dst.type == ARM64_OP_REG
|
||||
and src.type == ARM64_OP_IMM
|
||||
and src.imm == 0
|
||||
and insn.reg_name(dst.reg).startswith("x")
|
||||
)
|
||||
|
||||
def _is_mov_w_imm_value(self, insn, imm):
|
||||
if insn.mnemonic != "mov" or len(insn.operands) != 2:
|
||||
return False
|
||||
dst, src = insn.operands
|
||||
return (
|
||||
dst.type == ARM64_OP_REG
|
||||
and src.type == ARM64_OP_IMM
|
||||
and src.imm == imm
|
||||
and insn.reg_name(dst.reg).startswith("w")
|
||||
)
|
||||
|
||||
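The dual-CBZ bypass above reduces to two fixed ARM64 encodings: recognizing `cbz wN, label` and overwriting it with `nop`. A minimal standalone sketch of those encodings in pure Python (no Capstone; `find_cbz_w` and `nop_cbz_guards` are illustrative names, not helpers from this repo):

```python
import struct

NOP = struct.pack("<I", 0xD503201F)  # ARM64 NOP, little-endian

def find_cbz_w(buf, off):
    """Decode the 32-bit word at `off`; if it is `cbz wN, label`, return
    (rt, byte_target), else None.  CBZ (32-bit form) encodes as
    0b00110100 | imm19 | Rt, branch offset = sign-extended imm19 * 4."""
    (word,) = struct.unpack_from("<I", buf, off)
    if (word >> 24) != 0x34:          # top byte 0x34 => cbz with a W register
        return None
    imm19 = (word >> 5) & 0x7FFFF
    if imm19 & 0x40000:               # sign-extend the 19-bit offset
        imm19 -= 0x80000
    return word & 0x1F, off + imm19 * 4

def nop_cbz_guards(buf):
    """Overwrite every 32-bit-register CBZ in `buf` with a NOP -- the same
    fail-open transform the persona patch applies to its two guard sites."""
    out = bytearray(buf)
    for off in range(0, len(out) - 3, 4):
        if find_cbz_w(out, off) is not None:
            out[off:off + 4] = NOP
    return bytes(out)
```

Note that `cbz xN` (top byte 0xB4) and `cbnz` (0x35) deliberately fall through, mirroring how the matcher insists on a W-register `cbz`.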
@@ -1,166 +1,135 @@
"""Mixin: KernelJBPatchTaskForPidMixin."""

from .kernel_asm import _cs
from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG, NOP


class KernelJBPatchTaskForPidMixin:
    def patch_task_for_pid(self):
        """NOP the upstream early `pid == 0` reject gate in `task_for_pid`.

        Preferred design target is `/Users/qaq/Desktop/patch_fw.py`, which
        patches the early `cbz wPid, fail` gate before `port_name_to_task()`.

        Anchor class: heuristic.

        There is no stable direct `task_for_pid` symbol path on the stripped
        kernels, so the runtime reveal first recovers the enclosing function via
        the in-function string `proc_ro_ref_task`, then scans only that function
        and looks for the unique upstream local shape:

            ldr wPid, [xArgs, #8]
            ldr xTaskPtr, [xArgs, #0x10]
            ...
            cbz wPid, fail
            mov w1, #0
            mov w2, #0
            mov w3, #0
            mov x4, #0
            bl port_name_to_task-like helper
            cbz x0, fail
        """
        self._log("\n[JB] _task_for_pid: upstream pid==0 gate NOP")

        func_start = self._find_func_by_string(b"proc_ro_ref_task", self.kern_text)
        if func_start < 0:
            self._log("  [-] task_for_pid anchor function not found")
            return False
        search_end = min(self.kern_text[1], func_start + 0x800)

        hits = []
        for off in range(func_start, search_end - 0x18, 4):
            d0 = self._disas_at(off)
            if not d0 or d0[0].mnemonic != "cbz":
                continue
            hit = self._match_upstream_task_for_pid_gate(off, func_start)
            if hit is not None:
                hits.append(hit)

        if len(hits) != 1:
            self._log(f"  [-] expected 1 upstream task_for_pid candidate, found {len(hits)}")
            return False

        self.emit(hits[0], NOP, "NOP [_task_for_pid pid==0 gate]")
        return True

    def _match_upstream_task_for_pid_gate(self, off, func_start):
        d = self._disas_at(off, 7)
        if len(d) < 7:
            return None
        cbz_pid, mov1, mov2, mov3, mov4, bl_insn, cbz_ret = d
        if cbz_pid.mnemonic != "cbz" or len(cbz_pid.operands) != 2:
            return None
        if cbz_pid.operands[0].type != ARM64_OP_REG or cbz_pid.operands[1].type != ARM64_OP_IMM:
            return None
        if not self._is_mov_imm_zero(mov1, "w1"):
            return None
        if not self._is_mov_imm_zero(mov2, "w2"):
            return None
        if not self._is_mov_imm_zero(mov3, "w3"):
            return None
        if not self._is_mov_imm_zero(mov4, "x4"):
            return None
        if bl_insn.mnemonic != "bl":
            return None
        if cbz_ret.mnemonic != "cbz" or len(cbz_ret.operands) != 2:
            return None
        if cbz_ret.operands[0].type != ARM64_OP_REG or cbz_ret.reg_name(cbz_ret.operands[0].reg) != "x0":
            return None
        fail_target = cbz_pid.operands[1].imm
        if cbz_ret.operands[1].type != ARM64_OP_IMM or cbz_ret.operands[1].imm != fail_target:
            return None

        pid_load = None
        taskptr_load = None
        for prev_off in range(max(func_start, off - 0x18), off, 4):
            prev_d = self._disas_at(prev_off)
            if not prev_d:
                continue
            prev = prev_d[0]
            if pid_load is None and self._is_w_ldr_from_x_imm(prev, 8):
                pid_load = prev
                continue
            if taskptr_load is None and self._is_x_ldr_from_x_imm(prev, 0x10):
                taskptr_load = prev
        if pid_load is None or taskptr_load is None:
            return None
        if cbz_pid.operands[0].reg != pid_load.operands[0].reg:
            return None
        return cbz_pid.address

    def _is_mov_imm_zero(self, insn, dst_name):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return False
        dst, src = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg) == dst_name
            and src.type == ARM64_OP_IMM
            and src.imm == 0
        )

    def _is_w_ldr_from_x_imm(self, insn, imm):
        if insn.mnemonic != "ldr" or len(insn.operands) < 2:
            return False
        dst, src = insn.operands[:2]
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg).startswith("w")
            and src.type == ARM64_OP_MEM
            and insn.reg_name(src.mem.base).startswith("x")
            and src.mem.disp == imm
        )

    def _is_x_ldr_from_x_imm(self, insn, imm):
        if insn.mnemonic != "ldr" or len(insn.operands) < 2:
            return False
        dst, src = insn.operands[:2]
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg).startswith("x")
            and src.type == ARM64_OP_MEM
            and insn.reg_name(src.mem.base).startswith("x")
            and src.mem.disp == imm
        )
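The gate matcher above is a fixed-width sliding window over decoded instructions: anchor on a `cbz`, then check the next six instructions and the shared fail target. The same window-matching idea can be sketched over toy tuple "instructions" (all names here are illustrative; the real matcher operates on Capstone insn objects):

```python
def match_pid_gate(insns, i):
    """Return i if insns[i:i+7] matches the window
    cbz rPid, fail / mov w1,#0 / mov w2,#0 / mov w3,#0 / mov x4,#0 /
    bl helper / cbz x0, fail   with a shared fail target, else None."""
    win = insns[i:i + 7]
    if len(win) < 7:
        return None
    cbz_pid, m1, m2, m3, m4, bl, cbz_ret = win
    # Leading cbz and trailing cbz x0 must branch to the same fail label.
    if cbz_pid[0] != "cbz" or cbz_ret != ("cbz", "x0", cbz_pid[2]):
        return None
    # Four zeroed argument registers in fixed order.
    for mov, dst in ((m1, "w1"), (m2, "w2"), (m3, "w3"), (m4, "x4")):
        if mov != ("mov", dst, 0):
            return None
    if bl[0] != "bl":
        return None
    return i

stream = [
    ("ldr", "w8", "[x0, #8]"),
    ("cbz", "w8", 0x40),      # pid == 0 -> fail
    ("mov", "w1", 0),
    ("mov", "w2", 0),
    ("mov", "w3", 0),
    ("mov", "x4", 0),
    ("bl", 0x1234),
    ("cbz", "x0", 0x40),      # helper returned NULL -> same fail target
]
hits = [j for j in range(len(stream)) if match_pid_gate(stream, j) is not None]
```

Requiring exactly one hit (as the real code does with `len(hits) != 1`) is what makes this kind of shape matcher fail closed on unexpected kernel layouts.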
@@ -5,19 +5,14 @@ from .kernel_jb_base import _rd32, _rd64
class KernelJBPatchThidCrashMixin:
    def patch_thid_should_crash(self):
        """Zero out `_thid_should_crash` via the nearby sysctl metadata.

        The raw PCC 26.1 kernels do not provide a usable runtime symbol table,
        so this patch always resolves through the sysctl name string
        `thid_should_crash` and the adjacent `sysctl_oid` data.
        """
        self._log("\n[JB] _thid_should_crash: zero out")

        # Find the string in __DATA (sysctl name string)
        str_off = self.find_string(b"thid_should_crash")
        if str_off < 0:
            self._log("  [-] string not found")
@@ -25,13 +20,6 @@ class KernelJBPatchThidCrashMixin:

        self._log(f"  [*] string at foff 0x{str_off:X}")

        data_const_ranges = [
            (fo, fo + fs)
            for name, _, fo, fs, _ in self.all_segments
@@ -46,16 +34,12 @@ class KernelJBPatchThidCrashMixin:
            if val == 0:
                continue
            low32 = val & 0xFFFFFFFF
            if low32 == 0 or low32 >= self.size:
                continue
            target_val = _rd32(self.raw, low32)
            if 1 <= target_val <= 255:
                in_data = any(s <= low32 < e for s, e in data_const_ranges)
                if not in_data:
                    in_data = any(
                        fo <= low32 < fo + fs
                        for name, _, fo, fs, _ in self.all_segments
@@ -70,31 +54,6 @@ class KernelJBPatchThidCrashMixin:
                    self.emit(low32, b"\x00\x00\x00\x00", "zero [_thid_should_crash]")
                    return True

        self._log("  [-] variable not found")
        return False
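The resolution trick above treats the low 32 bits of each 8-byte word near the sysctl name string as a candidate file offset (chained-fixup encoding) and accepts it only if it lands on a small non-zero integer. A hedged standalone sketch on a synthetic image (assumed helper names; real chained-fixup metadata is richer than the faked high bits here):

```python
import struct

def find_low32_refs(raw, start, end, is_plausible):
    """Scan 8-byte words in raw[start:end]; treat the low 32 bits of each
    non-zero word as a candidate file offset and keep offsets that
    `is_plausible` accepts -- the same heuristic the thid patch uses."""
    hits = []
    for off in range(start, end - 7, 8):
        (val,) = struct.unpack_from("<Q", raw, off)
        low32 = val & 0xFFFFFFFF
        if val and 0 < low32 < len(raw) and is_plausible(raw, low32):
            hits.append(low32)
    return hits

def small_nonzero_u32(raw, off):
    """Plausibility check: the target holds a small int (1-255)."""
    (v,) = struct.unpack_from("<I", raw, off)
    return 1 <= v <= 255

# Synthetic 32-byte image: a 4-byte flag variable at offset 8, and a
# fake chained pointer at offset 16 whose low32 points at it.
raw = bytearray(32)
struct.pack_into("<I", raw, 8, 1)                        # the flag variable
struct.pack_into("<Q", raw, 16, 0x8010_0000_0000_0008)   # high bits: fixup metadata
```

Running `find_low32_refs(bytes(raw), 16, 32, small_nonzero_u32)` recovers offset 8, which the real patch then zeroes with `b"\x00\x00\x00\x00"`.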
@@ -1,48 +1,116 @@
"""Mixin: KernelJBPatchVmProtectMixin."""

from capstone.arm64_const import ARM64_REG_WZR

from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_REG


class KernelJBPatchVmProtectMixin:
    def patch_vm_map_protect(self):
        """Skip the vm_map_protect write-downgrade gate.

        Source-backed anchor: recover the function from the in-kernel
        `vm_map_protect(` panic string, then find the unique local block matching
        the XNU path that conditionally strips `VM_PROT_WRITE` from a combined
        read+write request before later VM entry updates:

            mov wMask, #6
            bics wzr, wMask, wProt
            b.ne skip
            tbnz wEntryFlags, #22, skip
            ...
            and wProt, wProt, #~VM_PROT_WRITE

        Rewriting the `b.ne` to an unconditional `b` preserves the historical
        patch semantics from `patch_fw.py`: always skip the downgrade block.
        """
        self._log("\n[JB] _vm_map_protect: skip write-downgrade gate")

        foff = self._find_func_by_string(b"vm_map_protect(", self.kern_text)
        if foff < 0:
            self._log("  [-] kernel-text 'vm_map_protect(' anchor not found")
            return False

        func_end = self._find_func_end(foff, 0x2000)
        gate = self._find_write_downgrade_gate(foff, func_end)
        if gate is None:
            self._log("  [-] vm_map_protect write-downgrade gate not found")
            return False

        br_off, target = gate
        b_bytes = self._encode_b(br_off, target)
        if not b_bytes:
            self._log("  [-] branch rewrite out of range")
            return False

        self.emit(br_off, b_bytes, f"b #0x{target - br_off:X} [_vm_map_protect]")
        return True

    def _find_write_downgrade_gate(self, start, end):
        hits = []
        for off in range(start, end - 0x20, 4):
            d = self._disas_at(off, 10)
            if len(d) < 5:
                continue

            mov_mask, bics_insn, bne_insn, tbnz_insn = d[0], d[1], d[2], d[3]
            if mov_mask.mnemonic != "mov" or bics_insn.mnemonic != "bics":
                continue
            if bne_insn.mnemonic != "b.ne" or tbnz_insn.mnemonic != "tbnz":
                continue
            if len(mov_mask.operands) != 2 or len(bics_insn.operands) != 3:
                continue
            if mov_mask.operands[0].type != ARM64_OP_REG or mov_mask.operands[1].type != ARM64_OP_IMM:
                continue
            if mov_mask.operands[1].imm != 6:
                continue

            mask_reg = mov_mask.operands[0].reg
            if bics_insn.operands[0].type != ARM64_OP_REG or bics_insn.operands[0].reg != ARM64_REG_WZR:
                continue
            if bics_insn.operands[1].type != ARM64_OP_REG or bics_insn.operands[1].reg != mask_reg:
                continue
            if bics_insn.operands[2].type != ARM64_OP_REG:
                continue
            prot_reg = bics_insn.operands[2].reg

            if len(bne_insn.operands) != 1 or bne_insn.operands[0].type != ARM64_OP_IMM:
                continue
            if len(tbnz_insn.operands) != 3:
                continue
            if tbnz_insn.operands[0].type != ARM64_OP_REG or tbnz_insn.operands[1].type != ARM64_OP_IMM or tbnz_insn.operands[2].type != ARM64_OP_IMM:
                continue

            target = bne_insn.operands[0].imm
            if target <= bne_insn.address or tbnz_insn.operands[2].imm != target:
                continue
            if tbnz_insn.operands[1].imm != 22:
                continue

            and_off = self._find_write_clear_between(tbnz_insn.address + 4, min(target, end), prot_reg)
            if and_off is None:
                continue

            hits.append((bne_insn.address, target))

        if len(hits) == 1:
            return hits[0]
        return None

    def _find_write_clear_between(self, start, end, prot_reg):
        for off in range(start, end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            insn = d[0]
            if insn.mnemonic != "and" or len(insn.operands) != 3:
                continue
            dst, src, imm = insn.operands
            if dst.type != ARM64_OP_REG or src.type != ARM64_OP_REG or imm.type != ARM64_OP_IMM:
                continue
            if dst.reg != prot_reg or src.reg != prot_reg:
                continue
            imm_val = imm.imm & 0xFFFFFFFF
            if (imm_val & 0x7) == 0x3:
                return off
        return None
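The gate rewrite depends on `_encode_b`, which is not shown in this hunk. Assuming it follows the standard ARM64 unconditional-branch encoding (opcode 0b000101 over a signed 26-bit instruction offset), a self-contained sketch of such a helper, including the fail-closed range check the caller relies on:

```python
import struct

def encode_b(src_off, dst_off):
    """Encode `b dst_off` placed at src_off.  ARM64 B is
    0b000101 | imm26, where imm26 is the signed byte delta divided by 4.
    Returns None (fail closed) for unaligned or out-of-range targets,
    mirroring how patch_vm_map_protect treats an empty `_encode_b` result."""
    delta = dst_off - src_off
    if delta % 4 or not (-0x8000000 <= delta < 0x8000000):  # +/-128 MiB
        return None
    return struct.pack("<I", 0x14000000 | ((delta >> 2) & 0x3FFFFFF))
```

Because `b.ne` and `b` occupy the same 4 bytes, emitting this word over the matched `b.ne` site turns the conditional skip into an unconditional one without disturbing the surrounding code.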