From 154d5064ec80b67f076bdbdde13c46fd833fc702 Mon Sep 17 00:00:00 2001 From: Lakr Date: Sun, 1 Mar 2026 15:01:32 +0900 Subject: [PATCH] Add JB install pipeline and update docs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add jailbreak extension patchers and targets: - kernel_jb.py: 22 dynamic kernel patches (trustcache, execve cs_flags, sandbox ops, task/VM, kcall10 syscall hook, ~160 total modifications) - txm_jb.py: 13 TXM patches (CS validation, get-task-allow, debugger entitlement, dev mode bypass) - iboot_jb.py: iBSS nonce generation skip - cfw.py: launchd jetsam patch, dylib injection commands - fw_patch_jb.py: orchestrator running base + JB extension patches - cfw_install_jb.sh: JB install phases (launchd jetsam fix, procursus bootstrap + Sileo deployment) 3 kernel patches still WIP (nvram_verify_permission, thid_should_crash, hook_cred_label_update_execve) — strategies documented in researchs/kernel_jb_remaining_patches.md. All base (non-JB) code paths verified unaffected — kernel.py produces identical 25 patches, cfw.py base commands unchanged. Add Linux venv setup script; tweak Makefile help Add scripts/setup_venv_linux.sh to create a Python virtualenv on Debian/Ubuntu (or dnf-based) systems, install system packages and Python requirements, and verify core imports (capstone, keystone, pyimg4). Also update Makefile help text to mark the fw_patch_jb target as WIP. This simplifies local development setup on Linux and clarifies that the JB extension patches are a work in progress. 
Update AGENTS.md: mark cfw_install_jb.sh as complete --- .gitignore | 5 + AGENTS.md | 149 +- Makefile | 12 +- researchs/jailbreak_patches.md | 84 + researchs/kernel_jb_remaining_patches.md | 442 +++++ scripts/cfw_install.sh | 10 +- scripts/cfw_install_jb.sh | 214 +++ scripts/fw_patch_jb.py | 115 ++ scripts/patchers/cfw.py | 433 ++++- scripts/patchers/iboot_jb.py | 105 ++ scripts/patchers/kernel.py | 1 + scripts/patchers/kernel_jb.py | 2128 ++++++++++++++++++++++ scripts/patchers/txm_jb.py | 335 ++++ scripts/setup_venv_linux.sh | 58 + 14 files changed, 4066 insertions(+), 25 deletions(-) create mode 100644 researchs/kernel_jb_remaining_patches.md create mode 100755 scripts/cfw_install_jb.sh create mode 100644 scripts/fw_patch_jb.py create mode 100644 scripts/patchers/iboot_jb.py create mode 100644 scripts/patchers/kernel_jb.py create mode 100644 scripts/patchers/txm_jb.py create mode 100644 scripts/setup_venv_linux.sh diff --git a/.gitignore b/.gitignore index 0262b49..1390211 100644 --- a/.gitignore +++ b/.gitignore @@ -311,3 +311,8 @@ __marimo__/ /VM .limd/ /.swiftpm +*.ipsw +/updates-cdn +/researchs/jb_asm_refs +TODO.md +/references/ diff --git a/AGENTS.md b/AGENTS.md index 959dcef..19b3ee0 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -13,6 +13,13 @@ Virtual iPhone boot tool using Apple's Virtualization.framework with PCC researc - **Language:** Swift 6.0 (SwiftPM), private APIs via [Dynamic](https://github.com/mhdhejazi/Dynamic) - **Python deps:** `capstone`, `keystone-engine`, `pyimg4` (see `requirements.txt`) +## Workflow Rules + +- Always read `/TODO.md` before starting any substantial work. +- Always update `/TODO.md` when plan, progress, assumptions, blockers, or open questions change. +- If blocked or waiting on user input, write the exact blocker and next action in `/TODO.md`. +- If `/TODO.md` does not exist, continue the existing work until complete; if it exists, follow its instructions.
+ ## Project Overview CLI tool that boots virtual iPhones (PV=3) via Apple's Virtualization.framework, targeting Private Cloud Compute (PCC) research VMs. Used for iOS security research — firmware patching, boot chain modification, and runtime instrumentation. @@ -38,23 +45,31 @@ sources/ scripts/ ├── patchers/ # Python patcher package │ ├── iboot.py # Dynamic iBoot patcher (iBSS/iBEC/LLB) +│ ├── iboot_jb.py # JB extension iBoot patcher (nonce skip) │ ├── kernel.py # Dynamic kernel patcher (25 patches) +│ ├── kernel_jb.py # JB extension kernel patcher (~34 patches) │ ├── txm.py # Dynamic TXM patcher -│ └── cfw.py # CFW binary patcher +│ ├── txm_jb.py # JB extension TXM patcher (~13 patches) +│ └── cfw.py # CFW binary patcher (base + JB jetsam) ├── resources/ # Resource archives │ ├── cfw_input.tar.zst +│ ├── cfw_jb_input.tar.zst # JB: procursus bootstrap + Sileo │ └── ramdisk_input.tar.zst ├── fw_prepare.sh # Downloads IPSWs, merges cloudOS into iPhone ├── fw_manifest.py # Generates hybrid BuildManifest.plist & Restore.plist ├── fw_patch.py # Patches 6 boot-chain components (41+ modifications) +├── fw_patch_jb.py # Runs fw_patch + JB extension patches (iBSS/TXM/kernel) ├── ramdisk_build.py # Builds SSH ramdisk with trustcache ├── ramdisk_send.sh # Sends ramdisk to device via irecovery ├── cfw_install.sh # Installs custom firmware to VM disk +├── cfw_install_jb.sh # Wrapper: cfw_install with JB phases enabled ├── vm_create.sh # Creates VM directory (disk, SEP storage, ROMs) ├── setup_venv.sh # Creates Python venv with native keystone dylib └── setup_libimobiledevice.sh # Builds libimobiledevice toolchain from source -researchs/ # Component analysis and architecture docs +researchs/ +├── jailbreak_patches.md # JB vs base patch comparison table +└── ... # Component analysis and architecture docs ``` ### Key Patterns @@ -77,6 +92,7 @@ The firmware is a **PCC/iPhone hybrid** — PCC boot infrastructure wrapping iPh 1. 
make fw_prepare Download iPhone + cloudOS IPSWs, merge, generate hybrid plists ↓ 2. make fw_patch Patch 6 boot-chain components for signature bypass + debug + OR make fw_patch_jb Base patches + JB extensions (iBSS nonce, TXM CS, kernel JB) ↓ 3. make ramdisk_build Build SSH ramdisk from SHSH blob, inject tools, sign with IM4M ↓ @@ -86,7 +102,8 @@ The firmware is a **PCC/iPhone hybrid** — PCC boot infrastructure wrapping iPh ↓ 6. make ramdisk_send Load boot chain + ramdisk via irecovery ↓ -7. make cfw_install Mount Cryptex, patch userland, install jailbreak tools +7. make cfw_install Mount Cryptex, patch userland, install base tools + OR make cfw_install_jb Base CFW + JB phases (jetsam patch, procursus bootstrap) ``` ### Component Origins @@ -153,17 +170,35 @@ idevicerestore selects this identity by partial-matching `Info.Variant` against | TXM | 1 | Dynamic via `patchers/txm.py` (trustcache hash lookup bypass) | | KernelCache | 25 | Dynamic via `patchers/kernel.py` (string anchors, ADRP+ADD xrefs, BL frequency) | -**CFW patches** (`patchers/cfw.py` / `cfw_install.sh`) — all 4 targets from **iPhone** Cryptex SystemOS: +**JB extension patches** (`fw_patch_jb.py`) — runs base patches first, then adds: -| Binary | Technique | Purpose | -|--------|-----------|---------| -| seputil | String patch (`/%s.gl` → `/AA.gl`) | Gigalocker UUID fix | -| launchd_cache_loader | NOP (disassembly-anchored) | Bypass cache validation | -| mobileactivationd | Return true (disassembly-anchored) | Skip activation check | -| launchd.plist | Plist injection | Add bash/dropbear/trollvnc daemons | +| Component | JB Patches | Technique | +|-----------|-----------|-----------| +| iBSS | +1 | `patchers/iboot_jb.py` (skip nonce generation) | +| TXM | +13 | `patchers/txm_jb.py` (CS validation bypass, get-task-allow, debugger ent, dev mode) | +| KernelCache | +34 | `patchers/kernel_jb.py` (trustcache, execve, sandbox, task/VM, kcall10) | + +**CFW patches** (`patchers/cfw.py` / `cfw_install.sh`) — 
targets from **iPhone** Cryptex SystemOS: + +| Binary | Technique | Purpose | Mode | +|--------|-----------|---------|------| +| seputil | String patch (`/%s.gl` → `/AA.gl`) | Gigalocker UUID fix | Base | +| launchd_cache_loader | NOP (disassembly-anchored) | Bypass cache validation | Base | +| mobileactivationd | Return true (disassembly-anchored) | Skip activation check | Base | +| launchd.plist | Plist injection | Add bash/dropbear/trollvnc daemons | Base | +| launchd | Branch (skip jetsam guard) + LC_LOAD_DYLIB injection | Prevent jetsam panic + load launchdhook.dylib | JB | + +**JB install phases** (`cfw_install_jb.sh` → `cfw_install.sh` with `CFW_JB_MODE=1`): + +| Phase | Action | +|-------|--------| +| JB-1 | Patch `/mnt1/sbin/launchd`: inject `launchdhook.dylib` LC_LOAD_DYLIB + jetsam guard bypass | +| JB-2 | Install procursus bootstrap to `/mnt5//jb-vphone/procursus` | +| JB-3 | Deploy BaseBin hooks (`systemhook.dylib`, `launchdhook.dylib`, `libellekit.dylib`) to `/mnt1/cores/` | ### Boot Flow +**Base** (`fw_patch` + `cfw_install`): ``` AVPBooter (ROM, PCC) → LLB (PCC, patched) @@ -175,6 +210,18 @@ AVPBooter (ROM, PCC) → iOS userland (iPhone, CFW-patched) ``` +**Jailbreak** (`fw_patch_jb` + `cfw_install_jb`): +``` +AVPBooter (ROM, PCC) + → LLB (PCC, patched) + → iBSS (PCC, patched + nonce skip) + → iBEC (PCC, patched, DFU) + → SPTM + TXM (PCC, TXM patched + CS/ent/devmode bypass) + → KernelCache (PCC, 25 base + ~34 JB patches) + → Ramdisk (SSH-injected) + → iOS userland (CFW + jetsam fix + procursus) +``` + ### Ramdisk Build (`ramdisk_build.py`) 1. Extract IM4M from SHSH blob @@ -184,7 +231,7 @@ AVPBooter (ROM, PCC) ### CFW Installation (`cfw_install.sh`) -7 phases, safe to re-run (idempotent): +7 phases (+ 2 JB phases), safe to re-run (idempotent): 1. Decrypt/mount Cryptex SystemOS and AppOS DMGs (`ipsw` + `aea`) 2. Patch seputil (gigalocker UUID) 3. Install GPU driver (AppleParavirtGPUMetalIOGPUFamily) @@ -193,6 +240,10 @@ AVPBooter (ROM, PCC) 6. 
Patch mobileactivationd (activation bypass) 7. Install LaunchDaemons (bash, dropbear SSH, trollvnc) +**JB-only phases** (enabled via `make cfw_install_jb` or `CFW_JB_MODE=1`): +- JB-1: Patch launchd jetsam guard (prevents jetsam panic on boot) +- JB-2: Install procursus bootstrap + optional Sileo to `/mnt5//jb-vphone/` + --- ## Coding Conventions @@ -295,3 +346,79 @@ Rationale: Dark surfaces match the terminal-adjacent workflow. Status colors bor - **VM display:** Full-bleed within its container. No rounded corners on the display itself. - **Log output:** Scrolling monospace region, bottom-anchored (newest at bottom). No line numbers unless requested. - **Toolbar (if present):** Icon-only, 32px touch targets, subtle hover state (`#2e2e2e` -> `#3a3a3a`). + +--- + +## JB Kernel Patcher Status (`patches-jb` branch) + +Branch is 8 commits ahead of `main`. All changes are **additive** — non-JB code paths are unaffected. + +### Diff vs Main + +| File | Change | Impact on non-JB | +|------|--------|-----------------| +| `kernel.py` | +1 line: `self.patches = []` reset in `find_all()` | None (harmless init) | +| `cfw.py` | +`patch-launchd-jetsam`, +`inject-dylib` commands | None (new commands only) | +| `kernel_jb.py` | **New file** — 2128 lines | N/A | +| `txm_jb.py` | **New file** — 335 lines | N/A | +| `iboot_jb.py` | **New file** — 105 lines | N/A | +| `fw_patch_jb.py` | **New file** — 115 lines (WIP) | N/A | +| `cfw_install_jb.sh` | **New file** — 214 lines | N/A | +| `cfw_jb_input.tar.zst` | **New file** — JB resources | N/A | +| `Makefile` | +JB targets (`fw_patch_jb`, `cfw_install_jb`) | None (additive) | +| `AGENTS.md` | Documentation updates | N/A | + +### Patch Counts + +**Base patcher** (`kernel.py`): **25 patches** — verified identical to main. 
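The "verified identical to main" claim is easiest to keep honest with a regression fingerprint over the emitted patch list. A minimal sketch (`patch_digest` is a hypothetical helper, not the repo's actual check):

```python
import hashlib

def patch_digest(patches):
    """Stable fingerprint over (file_offset, patch_bytes) pairs,
    independent of the order the patcher emitted them in."""
    h = hashlib.sha256()
    for off, data in sorted(patches):
        h.update(off.to_bytes(8, "little"))
        h.update(data)
    return h.hexdigest()

# The same patch set in a different emission order hashes identically,
# so a stored digest from main catches any behavioral drift in kernel.py.
base = [(0x1000, b"\x1f\x20\x03\xd5"), (0x2000, b"\x20\x00\x80\xd2")]
assert patch_digest(base) == patch_digest(list(reversed(base)))
```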
+ +**JB patcher** (`kernel_jb.py`): **160 patches** from 22 methods: +- **19 of 22 PASSING** — Groups A (sandbox hooks, AMFI, execve), B (string-anchored), C (shellcode) +- **3 FAILING** — see below + +### 3 Remaining Failures + +| Patch | Upstream Offset | Root Cause | Proposed Strategy | +|-------|----------------|------------|-------------------| +| `patch_nvram_verify_permission` | NOP BL at `0x1234034` | 332 identical IOKit methods match structural filter; "krn." string leads to wrong function | Find via "IONVRAMController" string → metaclass ctor → PAC disc `#0xcda1` → search `__DATA_CONST` vtable entries (first entry after 3 nulls) with matching PAC disc + BL to memmove | +| `patch_thid_should_crash` | Zero `0x67EB50` | String in `__PRELINK_INFO` plist (no code refs); value already `0x00000000` in PCC kernel | Safe to return True (no-op); or find via `sysctl_oid` struct search in `__DATA` | +| `patch_hook_cred_label_update_execve` | Shellcode at `0xAB17D8` + ops table at `0xA54518` | Needs `_vfs_context_current` (`0xCC5EAC`) and `_vnode_getattr` (`0xCC91C0`) — 0 symbols available | Find via sandbox ops table → original hook func → BL targets by caller count (vfs_context_current = highest, vnode_getattr = near `mov wN, #0x380`) | + +### Key Findings (from `researchs/kernel_jb_remaining_patches.md`) + +**All offsets in `kernel.py` are file offsets** — `bl_callers` dict, `_is_bl()`, `_disas_at()`, `find_string_refs()` all use file offsets, not VAs. 
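Because everything is file-offset keyed, a BL edge can be decoded straight from the raw instruction word with the standard imm26 arithmetic. A stdlib-only sketch (the repo's actual `_is_bl()`/`bl_callers` code is not reproduced here), cross-checked against an upstream number quoted later in these notes:

```python
def bl_target(insn: int, off: int):
    """Decode an A64 BL word located at file offset `off`; return the
    callee's file offset, or None if the word is not a BL."""
    if insn >> 26 != 0b100101:        # BL opcode lives in the top 6 bits
        return None
    imm = insn & 0x03FFFFFF           # imm26, in 4-byte instruction units
    if imm & 0x02000000:              # sign-extend for backward branches
        imm -= 0x04000000
    return off + imm * 4

# Cross-check: the upstream notes give BL word 0x940851AC at shellcode
# index 9 (file offset 0xAB17D8 + 9*4), resolving to _vfs_context_current
# at file offset 0xCC5EAC.
assert bl_target(0x940851AC, 0xAB17D8 + 9 * 4) == 0xCC5EAC
```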
+ +**IONVRAMController vtable discovery chain**: +``` +"IONVRAMController" string @ 0xA2FEB + → ADRP+ADD refs → metaclass ctor @ 0x125D2C0 + → PAC discriminator: movk x17, #0xcda1, lsl #48 + → instance size: mov w3, #0x88 + → class vtable in __DATA_CONST @ 0x7410B8 (preceded by 3 null entries) + → vtable[0] = 0x1233E40 = verifyPermission + → BL to memmove (3114 callers) at +0x1F4 = 0x1234034 ← NOP this +``` + +**vfs_context_current / vnode_getattr resolution**: +``` +sandbox ops table → entry[16] = original hook @ 0x239A0B4 + → disassemble hook → find BL targets: + - _vfs_context_current: BL target with >1000 callers, short function + - _vnode_getattr: BL target near "mov wN, #0x380", moderate callers +``` + +### Upstream Reference Offsets (iPhone17,3 26.1) + +| Symbol | File Offset | Notes | +|--------|-------------|-------| +| kern_text | `0xA74000` — `0x24B0000` | | +| base_va | `0xFFFFFE0007004000` | | +| verifyPermission func | `0x1233E40` | vtable @ `0x7410B8` | +| verifyPermission patch | `0x1234034` | NOP BL to memmove | +| _thid_should_crash var | `0x67EB50` | already 0 | +| _vfs_context_current | `0xCC5EAC` | from BL encoding | +| _vnode_getattr | `0xCC91C0` | from BL encoding | +| hook_cred_label orig | `0x239A0B4` | from B encoding | +| sandbox ops entry | `0xA54518` | index 16 | +| OSMetaClass::OSMetaClass() | `0x10EA790` | 5236 callers | +| memmove | `0x12CB0D0` | 3114 callers | diff --git a/Makefile b/Makefile index a1aa60b..fcdf9dc 100644 --- a/Makefile +++ b/Makefile @@ -45,6 +45,7 @@ help: @echo "Firmware pipeline:" @echo " make fw_prepare Download IPSWs, extract, merge" @echo " make fw_patch Patch boot chain (6 components)" + @echo " make fw_patch_jb Run fw_patch + JB extension patches (WIP)" @echo "" @echo "Restore:" @echo " make restore_get_shsh Fetch SHSH blob from device" @@ -56,6 +57,7 @@ help: @echo "" @echo "CFW:" @echo " make cfw_install Install CFW mods via SSH" + @echo " make cfw_install_jb Install CFW + JB extensions 
(jetsam/procursus/basebin)" @echo "" @echo "Variables: VM_DIR=$(VM_DIR) CPU=$(CPU) MEMORY=$(MEMORY) DISK_SIZE=$(DISK_SIZE)" @@ -130,7 +132,7 @@ boot_dfu: build # Firmware pipeline # ═══════════════════════════════════════════════════════════════════ -.PHONY: fw_prepare fw_patch +.PHONY: fw_prepare fw_patch fw_patch_jb fw_prepare: cd $(VM_DIR) && bash "$(CURDIR)/$(SCRIPTS)/fw_prepare.sh" @@ -138,6 +140,9 @@ fw_prepare: fw_patch: cd $(VM_DIR) && $(PYTHON) "$(CURDIR)/$(SCRIPTS)/fw_patch.py" . +fw_patch_jb: + cd $(VM_DIR) && $(PYTHON) "$(CURDIR)/$(SCRIPTS)/fw_patch_jb.py" . + # ═══════════════════════════════════════════════════════════════════ # Restore # ═══════════════════════════════════════════════════════════════════ @@ -166,7 +171,10 @@ ramdisk_send: # CFW # ═══════════════════════════════════════════════════════════════════ -.PHONY: cfw_install +.PHONY: cfw_install cfw_install_jb cfw_install: cd $(VM_DIR) && zsh "$(CURDIR)/$(SCRIPTS)/cfw_install.sh" . + +cfw_install_jb: + cd $(VM_DIR) && zsh "$(CURDIR)/$(SCRIPTS)/cfw_install_jb.sh" . diff --git a/researchs/jailbreak_patches.md b/researchs/jailbreak_patches.md index 5380550..a804ba1 100644 --- a/researchs/jailbreak_patches.md +++ b/researchs/jailbreak_patches.md @@ -105,6 +105,19 @@ No additional JB patches for LLB. | 3 | mov x0,#1; ret | mobileactivationd | Activation bypass | Y | Y | | 4 | Plist injection | launchd.plist | bash/dropbear/trollvnc daemons | Y | Y | | 5 | b (skip jetsam) | launchd | Prevent jetsam panic on boot | — | Y | +| 6 | procursus bootstrap | `/mnt5//jb-vphone` | Install procursus userspace + optional Sileo payload | — | Y | + +### JB Install Flow (`make cfw_install_jb`) + +- Entry: `scripts/cfw_install_jb.sh` (wrapper) -> `scripts/cfw_install.sh` with `CFW_JB_MODE=1`. +- Added JB phases in install pipeline: + - `JB-1`: patch `/mnt1/sbin/launchd` via `patch-launchd-jetsam` (dynamic string+xref). 
+ - `JB-2`: unpack procursus bootstrap (`bootstrap-iphoneos-arm64.tar.zst`) into `/mnt5//jb-vphone/procursus`. +- JB resources now packaged in: + - `scripts/resources/cfw_jb_input.tar.zst` + - contains: + - `jb/bootstrap-iphoneos-arm64.tar.zst` + - `jb/org.coolstar.sileo_2.5.1_iphoneos-arm64.deb` ## Summary @@ -117,3 +130,74 @@ No additional JB patches for LLB. | Kernelcache | 25 | ~23+ | ~48+ | | CFW | 4 | 1 | 5 | | **Total** | **41** | **~38+** | **~79+** | + +## Dynamic Implementation Log (fw_patch_jb) + +### TXM (Completed) + +All TXM JB patches are now implemented with dynamic binary analysis and +keystone/capstone-encoded instructions only. + +1. `selector24 hashcmp` (`bl -> mov x0,#0`, 2 residual sites in JB stage) + - Locator: global instruction motif `mov w2,#0x14 ; bl ; cbz w0`. + - Patch bytes: keystone `mov x0, #0`. +2. `selector24 A1` (`b.lo/cbz -> nop`) + - Locator: unique guarded `mov w0,#0xa1` site with nearby `b.lo` and `cbz x9`. + - Patch bytes: keystone `nop`. +3. `selector41|29 get-task-allow` + - Locator: xref to `"get-task-allow"` + nearby `bl` followed by `tbnz w0,#0`. + - Patch bytes: keystone `mov x0, #1`. +4. `selector42|29 shellcode trampoline` + - Locator: + - Find dispatch stub pattern `bti j ; mov x0,x20 ; bl ; mov x1,x21 ; mov x2,x22 ; bl ; b`. + - Select stub whose second `bl` target is the debugger-gate function (pattern verified by string-xref + call-shape). + - Find executable UDF cave dynamically. + - Patch bytes: + - Stub head -> keystone `b #cave`. + - Cave payload -> `nop ; mov x0,#1 ; strb w0,[x20,#0x30] ; mov x0,x20 ; b #return`. +5. `selector42|37 debugger entitlement` + - Locator: xref to `"com.apple.private.cs.debugger"` + strict nearby call-shape + (`mov x0,#0 ; mov x2,#0 ; bl ; tbnz w0,#0`). + - Patch bytes: keystone `mov w0, #1`. +6. `developer mode bypass` + - Locator: xref to `"developer mode enabled due to system policy configuration"` + + nearest guard branch on `w9`. + - Patch bytes: keystone `nop`. 
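For reference, the keystone-emitted patch bytes listed above are plain A64 encodings; a stdlib-only sanity check of the two most common ones (illustration only — the patcher itself assembles via keystone):

```python
def movz_x(rd: int, imm16: int) -> bytes:
    """A64 MOVZ Xd, #imm16 (LSL #0) — the encoding keystone emits for
    `mov xN, #imm16` with a small immediate; little-endian bytes."""
    return (0xD2800000 | (imm16 << 5) | rd).to_bytes(4, "little")

NOP = (0xD503201F).to_bytes(4, "little")

assert movz_x(0, 0) == bytes.fromhex("000080d2")   # mov x0, #0
assert movz_x(0, 1) == bytes.fromhex("200080d2")   # mov x0, #1
assert NOP == bytes.fromhex("1f2003d5")
```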
+ +#### TXM Binary-Alignment Validation + +- `patch.upstream.raw` generated from upstream-equivalent TXM static patch semantics. +- `patch.dyn.raw` generated by `TXMJBPatcher` on the same input. +- Result: byte-identical (`cmp -s` success, SHA-256 matched). + +### Kernelcache (In Progress, Dynamic Ports Added) + +Implemented in `scripts/patchers/kernel_jb.py` with capstone semantic matching +and keystone-generated patch bytes only: + +1. `AMFIIsCDHashInTrustCache` function rewrite + - Locator: semantic function-body matcher in AMFI text. + - Patch: `mov x0,#1 ; cbz x2,+8 ; str x0,[x2] ; ret`. +2. AMFI execve kill path bypass (2 BL sites) + - Locator: string xref to `"AMFI: hook..execve() killing"` (fallback `"execve() killing"`), + then function-local early `bl` + `cbz/cbnz w0` pair matcher. + - Patch: `bl -> mov x0,#0` at two helper callsites. +3. `task_conversion_eval_internal` guard bypass + - Locator: unique cmp/branch motif: + `ldr xN,[xN,#imm] ; cmp xN,x0 ; b.eq ; cmp xN,x1 ; b.eq`. + - Patch: `cmp xN,x0 -> cmp xzr,xzr`. +4. Extended sandbox MACF hook stubs (JB-only set) + - Locator: dynamic `mac_policy_conf -> mpc_ops` discovery, then hook-index resolution. + - Patch per hook function: `mov x0,#0 ; ret`. + - JB extended indices include vnode/proc hooks beyond base 5 hooks. 
+ +#### Cross-Version Dynamic Snapshot + +Validated using pristine inputs from `updates-cdn/`: + +| Case | TXM_JB_PATCHES | KERNEL_JB_PATCHES | +|------|----------------:|------------------:| +| PCC 26.1 (`23B85`) | 14 | 59 | +| PCC 26.3 (`23D128`) | 14 | 59 | +| iOS 26.1 (`23B85`) | 14 | 59 | +| iOS 26.3 (`23D127`) | 14 | 59 | diff --git a/researchs/kernel_jb_remaining_patches.md b/researchs/kernel_jb_remaining_patches.md new file mode 100644 index 0000000..e95d37c --- /dev/null +++ b/researchs/kernel_jb_remaining_patches.md @@ -0,0 +1,442 @@ +# Kernel JB Remaining Patches — Research Notes + +Last updated: 2026-03-01 + +## Overview + +`scripts/patchers/kernel_jb.py` has 22 patch methods in `find_all()`. As of this writing: + +- **19 PASSING**: All Group A + most Group B + some Group C patches +- **3 FAILING**: `patch_nvram_verify_permission`, `patch_thid_should_crash`, `patch_hook_cred_label_update_execve` +- **1 FIXED this session**: `patch_syscallmask_apply_to_proc` (bl_callers key bug + now passing) +- **2 FIXED prior session**: `patch_task_for_pid`, `patch_load_dylinker` (complete rewrites) + +Upstream reference: `/Users/qaq/Documents/GitHub/super-tart-vphone/CFW/patch_fw.py` + +Test kernel: `vm/iPhone17,3_26.1_23B85_Restore/kernelcache.release.vphone600` (IM4P-wrapped, bvx2 compressed) + +Key facts about the kernel: +- **0 symbols resolved** (fully stripped) +- `base_va = 0xFFFFFE0007004000` (typical PCC) +- `kern_text = 0xA74000 - 0x24B0000` +- All offsets in `kernel.py` helpers are **file offsets** (not VA) +- `bl_callers` dict: keyed by file offset → list of caller file offsets + +--- + +## Patch 1: `patch_nvram_verify_permission` — FAILING + +### Upstream Reference + +```python +# patch __ZL16verifyPermission16IONVRAMOperationPKhPKcb +patch(0x1234034, 0xd503201f) # NOP +``` + +One single NOP at file offset `0x1234034`. The BL being NOPed calls memmove (3114 callers). 
+ +### Function Analysis + +**Function start**: `0x1233E40` (PACIBSP) +**Function end**: `0x1234094` (next PACIBSP) +**Size**: `0x254` bytes +**BL callers**: 0 (IOKit virtual method, dispatched via vtable) +**Instruction**: `retab` at end + +#### Full BL targets in the function: + +| Offset | Delta | Target | Callers | Likely Identity | +|--------|-------|--------|---------|-----------------| +| 0x1233F0C | +0x0CC | 0x0AD10DC | 6190 | lck_rw_done / lock_release | +| 0x1234034 | +0x1F4 | 0x12CB0D0 | 3114 | **memmove** ← PATCH THIS | +| 0x1234048 | +0x208 | 0x0ACB418 | 423 | OSObject::release | +| 0x1234070 | +0x230 | 0x0AD029C | 4921 | lck_rw_lock_exclusive | +| 0x123407C | +0x23C | 0x0AD10DC | 6190 | lck_rw_done | +| 0x123408C | +0x24C | 0x0AD10DC | 6190 | lck_rw_done | + +#### Key instructions in the function: + +- `CASA` at +0x54 (offset 0x1233E94) — atomic compare-and-swap for lock acquisition +- `CASL` at 3 locations — lock release +- 4x `BLRAA` — authenticated indirect calls through vtable pointers +- `movk x17, #0xcda1, lsl #48` — PAC discriminator for IONVRAMController class +- `RETAB` — PAC return +- `mov x8, #-1; str x8, [x19]` — cleanup pattern near end +- `ubfiz x2, x8, #3, #0x20` before BL memmove — size = count * 8 + +#### "Remove from array" pattern (at patch site): + +``` +0x1233FD8: adrp x8, #0x272f000 +0x1233FDC: ldr x8, [x8, #0x10] ; load observer list struct +0x1233FE0: cbz x8, skip ; if null, skip +0x1233FE4: ldr w11, [x8, #0x10] ; load count +0x1233FE8: cbz w11, skip ; if 0, skip +0x1233FEC: mov x10, #0 ; index = 0 +0x1233FF0: ldr x9, [x8, #0x18] ; load array base + loop: +0x1233FF4: add x12, x9, x10, lsl #3 +0x1233FF8: ldr x12, [x12] ; array[index] +0x1233FFC: cmp x12, x19 ; compare with self +0x1234000: b.eq found +0x1234004: add x10, x10, #1 ; index++ +0x1234008: cmp x11, x10 +0x123400C: b.ne loop + found: +0x1234014: sub w11, w11, #1 ; count-- +0x1234018: str w11, [x8, #0x10] ; store +0x123401C: subs w8, w11, w10 ; remaining +0x1234020: 
b.ls skip +0x1234024: ubfiz x2, x8, #3, #0x20 ; size = remaining * 8 +0x1234028: add x0, x9, w10, uxtw #3 +0x123402C: add w8, w10, #1 +0x1234030: add x1, x9, w8, uxtw #3 +0x1234034: bl memmove ; ← NOP THIS +``` + +### What I've Tried (and Failed) + +1. **"krn." string anchor** → Leads to function at `0x11F7EE8`, NOT `0x1233E40`. Wrong function entirely. + +2. **"nvram-write-access" entitlement string** → Also leads to a different function. + +3. **CASA + 0 callers + retab + ubfiz + memmove filter** → **332 matches**. All IOKit virtual methods follow the same "remove observer from array" pattern with CASA locking. + +4. **IONVRAMController metaclass string** → Found at `0xA2FEB`. Has ADRP+ADD refs at `0x125D2C0`, `0x125D310`, `0x125D38C` (metaclass constructors). These set up the metaclass, NOT instance methods. + +5. **Chained fixup pointer search for IONVRAMController string** → Failed (different encoding). + +### Findings That DO Work + +**IONVRAMController vtable found via chained fixup search:** + +The verifyPermission function at `0x1233E40` is referenced as a chained fixup pointer in `__DATA_CONST`: + +``` +__DATA_CONST @ 0x7410B8: raw=0x8011377101233E40 → decoded=0x1233E40 (verifyPermission) +``` + +**Vtable layout at 0x7410B8:** + +| Vtable Idx | File Offset | Content | First Insn | +|------------|-------------|---------|------------| +| [-3] 0x7410A0 | | NULL | | +| [-2] 0x7410A8 | | NULL | | +| [-1] 0x7410B0 | | NULL | | +| [0] 0x7410B8 | 0x1233E40 | **verifyPermission** | pacibsp | +| [1] 0x7410C0 | 0x1233BF0 | sister method | pacibsp | +| [2] 0x7410C8 | 0x10EA4E0 | | ret | +| [3] 0x7410D0 | 0x10EA4D8 | | mov | + +**IONVRAMController metaclass constructor pattern:** + +``` +0x125D2C0: pacibsp + adrp x0, #0x26fe000 + add x0, x0, #0xa38 ; x0 = metaclass obj @ 0x26FEA38 + adrp x1, #0xa2000 + add x1, x1, #0xfeb ; x1 = "IONVRAMController" @ 0xA2FEB + adrp x2, #0x26fe000 + add x2, x2, #0xbf0 ; x2 = superclass metaclass @ 0x26FEBF0 + mov w3, #0x88 ; w3 = 
instance size = 136 + bl OSMetaClass::OSMetaClass() ; [5236 callers] + adrp x16, #0x76d000 + add x16, x16, #0xd60 + add x16, x16, #0x10 ; x16 = metaclass vtable @ 0x76DD70 + movk x17, #0xcda1, lsl #48 ; PAC discriminator + pacda x16, x17 + str x16, [x0] ; store PAC'd metaclass vtable + retab +``` + +**There's ALSO a combined class registration function at 0x12376D8** that registers multiple classes and references the instance vtable: + +``` +0x12377F8: adrp x16, #0x741000 + add x16, x16, #0x0a8 ; → 0x7410A8 (vtable[-2]) +``` + +Wait — it actually points to `0x7410A8`, not `0x7410B8`. The vtable pointer with the +0x10 adjustment gives `0x7410A8 + 0x10 = 0x7410B8` which is entry [0]. This is how IOKit vtables work: the isa pointer stores `vtable_base + 0x10` to skip the RTTI header. + +### Proposed Dynamic Strategy + +**Chain**: "IONVRAMController" string → ADRP+ADD refs → metaclass constructor → extract instance size `0x88` → find the combined class registration function (0x12376D8) that calls OSMetaClass::OSMetaClass() with `mov w3, #0x88` AND uses "IONVRAMController" name → extract the vtable base from the ADRP+ADD+ADD that follows → vtable[0] = verifyPermission → find BL to memmove-like target (>2000 callers) and NOP it. + +**Alternative (simpler)**: From the metaclass constructor, extract the PAC discriminator `#0xcda1` and the instance size `#0x88`. Then search __DATA_CONST for chained fixup pointer entries where: +- The preceding 3 entries (at -8, -16, -24) are NULL (vtable header) +- The decoded function pointer has 0 BL callers +- The function contains CASA +- The function ends with RETAB +- The function contains a BL to memmove (>2000 callers) +- **The function contains `movk x17, #0xcda1`** (the IONVRAMController PAC discriminator) + +This last filter is the KEY discriminator. Among the 332 candidate functions, only IONVRAMController methods use PAC disc `0xcda1`. Combined with "first entry in vtable" (preceded by 3 nulls), this should be unique. 
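That discriminator filter is mechanical to implement; a stdlib sketch with a hand-rolled MOVK encoder (the patcher itself would go through capstone/keystone, and `has_pac_disc` is a hypothetical helper):

```python
def movk_x(rd: int, imm16: int, shift: int) -> int:
    """Encode A64 MOVK Xd, #imm16, LSL #shift (shift in {0,16,32,48})."""
    return 0xF2800000 | ((shift // 16) << 21) | (imm16 << 5) | rd

# movk x17, #0xcda1, lsl #48 — the IONVRAMController PAC discriminator
PAC_DISC = movk_x(17, 0xCDA1, 48)

def has_pac_disc(code: bytes) -> bool:
    """True if a candidate function body contains the discriminator word."""
    return any(int.from_bytes(code[i:i + 4], "little") == PAC_DISC
               for i in range(0, len(code) - 3, 4))

code = bytes(8) + PAC_DISC.to_bytes(4, "little") + bytes(4)
assert has_pac_disc(code) and not has_pac_disc(bytes(12))
```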
+ +**Simplest approach**: Search all chained fixup pointers in __DATA_CONST where: +1. Preceded by 3 null entries (vtable start) +2. Decoded target is a function in kern_text +3. Function contains `movk x17, #0xcda1, lsl #48` +4. Function contains BL to target with >2000 callers (memmove) +5. NOP that BL + +--- + +## Patch 2: `patch_thid_should_crash` — FAILING + +### Upstream Reference + +```python +# patch _thid_should_crash to 0 +patch(0x67EB50, 0x0) +``` + +Writes 4 bytes of zero at file offset `0x67EB50`. + +### Analysis + +- Offset `0x67EB50` is in a **DATA segment** (not code) +- The current value at this offset is **already 0x00000000** in the test kernel +- This is a sysctl boolean variable (`kern.thid_should_crash`) +- The patch is effectively a **no-op** on this kernel + +### What I've Tried + +1. **Symbol resolution** → 0 symbols, fails. +2. **"thid_should_crash" string** → Found, but has **no ADRP+ADD code references**. The string is in `__PRELINK_INFO` (XML plist), not in a standalone `__cstring` section. +3. **Sysctl structure search** → Searched for a raw VA pointer to the string in DATA segments. Failed because the string VA is in the plist text, not a standalone pointer. +4. **Pattern search for value=1** → The value is already 0 at the upstream offset, so searching for value=1 finds nothing. + +### Proposed Dynamic Strategy + +The variable at `0x67EB50` is in the kernel's `__DATA` segment (BSS or initialized data). Since: +- The string is only in `__PRELINK_INFO` (plist), not usable as a code anchor +- The variable has no symbols +- The value is already 0 + +**Option A: Skip this patch gracefully.** If the value is already 0, the patch has no effect. Log a message and return True (success, nothing to do). 
+ +**Option B: Find via sysctl table structure.** The sysctl_oid structure in __DATA contains: +- A pointer to the name string +- A pointer to the data variable +- Various flags + +But the name string pointer would be a chained fixup pointer to the string in __PRELINK_INFO, which is hard to search for. + +**Option C: Find via `__PRELINK_INFO` plist parsing.** Parse the XML plist to find the `_PrelinkKCID` or sysctl registration info. This is complex and fragile. + +**Recommended: Option A** — the variable is already 0 in PCC kernels. Emit a write-zero anyway at the upstream-equivalent location if we can find it, or just return True if we can't find the variable (safe no-op). + +Actually, better approach: search `__DATA` segments for a `sysctl_oid` struct. The struct layout includes: +```c +struct sysctl_oid { + struct sysctl_oid_list *oid_parent; // +0x00 + SLIST_ENTRY(sysctl_oid) oid_link; // +0x08 + int oid_number; // +0x10 + int oid_kind; // +0x14 + void *oid_arg1; // +0x18 → points to the variable + int oid_arg2; // +0x20 + const char *oid_name; // +0x28 → points to "thid_should_crash" string + ... +}; +``` + +So search all `__DATA` segments for an 8-byte value at offset +0x28 that decodes to the "thid_should_crash" string offset. Then read +0x18 to get the variable pointer. + +But the string is in __PRELINK_INFO, which complicates decoding the chained fixup pointer. 
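Option B's struct scan is straightforward to sketch. `decode_ptr` (a caller-supplied chained-fixup decoder) and the helper itself are hypothetical; the field offsets follow the sysctl_oid layout above:

```python
import struct

def find_oid_var(data: bytes, lo: int, hi: int, name_off: int, decode_ptr):
    """Scan a __DATA range for a sysctl_oid whose oid_name (+0x28) decodes
    to `name_off`; return the decoded oid_arg1 (+0x18), i.e. the file
    offset of the backing variable, or None."""
    for off in range(lo, hi - 0x30, 8):
        if decode_ptr(struct.unpack_from("<Q", data, off + 0x28)[0]) == name_off:
            return decode_ptr(struct.unpack_from("<Q", data, off + 0x18)[0])
    return None

# Synthetic demo with an identity decoder: one fake oid struct at 0x40.
blob = bytearray(0x100)
struct.pack_into("<Q", blob, 0x40 + 0x28, 0xBEEF)   # oid_name → string off
struct.pack_into("<Q", blob, 0x40 + 0x18, 0x1234)   # oid_arg1 → variable off
assert find_oid_var(blob, 0, len(blob), 0xBEEF, lambda p: p) == 0x1234
```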
+ +--- + +## Patch 3: `patch_hook_cred_label_update_execve` — FAILING + +### Upstream Reference + +```python +# Shellcode at 0xAB17D8 (46 instructions, ~184 bytes) +# Two critical BL targets: +# BL _vfs_context_current at idx 9: 0x940851AC → target = 0xCC5EAC +# BL _vnode_getattr at idx 17: 0x94085E69 → target = 0xCC91C0 +# Ops table patch at 0xA54518: redirect to shellcode +# B _hook_cred_label_update_execve at idx 44: 0x146420B7 → target = 0x239A0B4 +``` + +### Why It Fails + +The patch needs two kernel functions that have **no symbols**: +- `_vfs_context_current` at file offset `0xCC5EAC` +- `_vnode_getattr` at file offset `0xCC91C0` + +Without these, the shellcode can't be assembled (the BL offsets depend on the target addresses). + +### Analysis of _vfs_context_current (0xCC5EAC) + +``` +Expected: A very short function (2-4 instructions) that: + - Reads the current thread (mrs xN, TPIDR_EL1 or load from per-CPU data) + - Loads the VFS context from the thread struct + - Returns it in x0 + +Should have extremely high caller count (VFS is used everywhere). +``` + +Let me verify: check `bl_callers.get(0xCC5EAC, [])` — should have many callers. + +### Analysis of _vnode_getattr (0xCC91C0) + +``` +Expected: A moderate-sized function that: + - Takes (vnode, vnode_attr, vfs_context) parameters + - Calls the vnode op (VNOP_GETATTR) + - Returns error code + +Should have moderate caller count (hundreds). +``` + +### Finding Strategy for _vfs_context_current + +1. **From sandbox ops table**: We already have `_find_sandbox_ops_table_via_conf()`. The hook_cred_label_update_execve entry (index 16) in the ops table points to the original sandbox hook function (at `0x239A0B4` per upstream). + +2. **From the original hook function**: Disassemble the original hook function. It likely calls `_vfs_context_current` (to get the VFS context for vnode operations). Find the BL target in the hook that has a very high caller count — that's likely `_vfs_context_current`. + +3. 
**Pattern match**: Search kern_text for short functions (size < 0x20) with: + - `mrs xN, TPIDR_EL1` instruction + - Very high caller count (>1000) + - Return type is pointer (loads from struct offset) + +### Finding Strategy for _vnode_getattr + +1. **From the original hook function**: The hook function likely also calls `_vnode_getattr`. Find BL targets in the hook that have moderate caller count. + +2. **String anchor**: Search for `"vnode_getattr"` string (not in plist but in `__cstring`). Find ADRP+ADD refs, trace to function. + +3. **Pattern match**: The function signature includes a `vnode_attr` structure initialization with size `0x380`. + +### Proposed Implementation + +``` +1. Find sandbox ops table → read entry at index 16 → get original hook func +2. Disassemble original hook function +3. Find _vfs_context_current: BL target in the hook with highest caller count (>1000) +4. Find _vnode_getattr: BL target that: + - Has moderate callers (50-1000) + - The calling site has nearby `mov wN, #0x380` (vnode_attr struct size) +5. With both functions found, build shellcode and patch ops table +``` + +--- + +## Patch Status Summary + +| Patch | Status | Blocker | Strategy | +|-------|--------|---------|----------| +| nvram_verify_permission | FAILING | Can't distinguish among 332 identical IOKit methods | Use PAC disc `#0xcda1` + vtable header (3 nulls) to find unique IONVRAMController vtable entry | +| thid_should_crash | FAILING | String in __PRELINK_INFO, no code refs, value already 0 | Option A: return True (safe no-op); Option B: sysctl_oid struct search | +| hook_cred_label_update_execve | FAILING | Can't find vfs_context_current and vnode_getattr without symbols | Find via sandbox ops table → original hook → BL targets by caller count | + +--- + +## Previously Fixed Patches (This Session) + +### patch_task_for_pid — FIXED + +**Problem**: Old code searched for "proc_ro_ref_task" string → wrong function. 
+**Solution**: Pattern search: 0 BL callers + 2x ldadda + 2x `ldr wN,[xN,#0x490]; str wN,[xN,#0xc]` + movk #0xc8a2 + non-panic BL >500 callers. NOP the second `ldr wN,[xN,#0x490]`. +**Upstream**: `patch(0xFC383C, 0xd503201f)` — NOP in function at `0xFC3718`. + +### patch_load_dylinker — FIXED + +**Problem**: Old code searched for "/usr/lib/dyld" → wrong function (0 BL callers, no string ref). +**Solution**: Search for functions with 3+ `TST xN, #-0x40000000000000; B.EQ; MOVK xN, #0xc8a2` triplets and 0 BL callers. Replace LAST TST with unconditional B to B.EQ target. +**Upstream**: `patch(0x1052A28, B #0x44)` — in function at `0x105239C`. + +### patch_syscallmask_apply_to_proc — FIXED + +**Problem**: `bl_callers` key bug: code used `target + self.base_va` but bl_callers is keyed by file offset. +**Fix**: Changed to `self.bl_callers.get(target, [])` at line ~1661. +**Status**: Now PASSING (40 patches emitted for shellcode + redirect). + +--- + +## Environment Notes + +### Running on macOS (current) + +```bash +cd /Users/qaq/Documents/GitHub/vphone-cli +source .venv/bin/activate +python3 -c " +import sys; sys.path.insert(0, 'scripts') +from fw_patch import load_firmware +from patchers.kernel_jb import KernelJBPatcher +_, data, _, _ = load_firmware('vm/iPhone17,3_26.1_23B85_Restore/kernelcache.release.vphone600') +p = KernelJBPatcher(data) +patches = p.find_all() +print(f'Total patches: {len(patches)}') +" +``` + +### Running on Linux (cloud) + +Requirements: +- Python 3.10+ +- `pip install capstone keystone-engine pyimg4` +- Note: `keystone-engine` may need `cmake` and C++ compiler on Linux +- Copy the kernelcache file and upstream reference +- The `setup_venv.sh` script has macOS-specific keystone dylib handling — on Linux, pip install should work directly + +Files needed: +- `scripts/patchers/kernel.py` (base class) +- `scripts/patchers/kernel_jb.py` (JB patcher) +- `scripts/patchers/__init__.py` +- `scripts/fw_patch.py` (for `load_firmware()`) +- 
`vm/iPhone17,3_26.1_23B85_Restore/kernelcache.release.vphone600` (test kernel) +- `/Users/qaq/Documents/GitHub/super-tart-vphone/CFW/patch_fw.py` (upstream reference) + +### Quick Test Script + +```python +#!/usr/bin/env python3 +"""Quick test for failing patches.""" +import sys +sys.path.insert(0, 'scripts') +from fw_patch import load_firmware +from patchers.kernel_jb import KernelJBPatcher + +_, data, _, _ = load_firmware('vm/iPhone17,3_26.1_23B85_Restore/kernelcache.release.vphone600') +p = KernelJBPatcher(data, verbose=True) + +failing = ['patch_nvram_verify_permission', 'patch_thid_should_crash', + 'patch_hook_cred_label_update_execve'] +for name in failing: + p.patches = [] + result = getattr(p, name)() + status = "PASS" if result else "FAIL" + print(f'\n>>> {name}: {status} ({len(p.patches)} patches)') +``` + +--- + +## Upstream Offsets Reference (iPhone17,3 26.1 23B85) + +| Symbol / Patch | File Offset | Notes | +|----------------|-------------|-------| +| kern_text start | 0xA74000 | | +| kern_text end | 0x24B0000 | | +| base_va | 0xFFFFFE0007004000 | | +| _thid_should_crash var | 0x67EB50 | DATA, value=0 | +| _task_for_pid func | 0xFC3718 | patch at 0xFC383C | +| _load_dylinker patch | 0x1052A28 | TST → B | +| verifyPermission func | 0x1233E40 | patch BL at 0x1234034 | +| verifyPermission vtable | 0x7410B8 | __DATA_CONST | +| IONVRAMController metaclass | 0x26FEA38 | | +| IONVRAMController metaclass ctor | 0x125D2C0 | refs "IONVRAMController" string | +| IONVRAMController PAC disc | 0xcda1 | movk x17, #0xcda1 | +| IONVRAMController instance size | 0x88 | mov w3, #0x88 | +| _vfs_context_current | 0xCC5EAC | (from upstream BL encoding) | +| _vnode_getattr | 0xCC91C0 | (from upstream BL encoding) | +| shellcode cave (upstream) | 0xAB1740 | syscallmask | +| shellcode cave 2 (upstream) | 0xAB17D8 | hook_cred_label | +| sandbox ops table (hook entry) | 0xA54518 | index 16 | +| _hook_cred_label_update_execve | 0x239A0B4 | original hook func | +| memmove | 
0x12CB0D0 | 3114 callers | +| OSMetaClass::OSMetaClass() | 0x10EA790 | 5236 callers | +| _panic | varies | 8000+ callers typically | diff --git a/scripts/cfw_install.sh b/scripts/cfw_install.sh index 5d6df27..ac4c0bb 100755 --- a/scripts/cfw_install.sh +++ b/scripts/cfw_install.sh @@ -1,5 +1,5 @@ #!/bin/zsh -# cfw_install.sh — Install CFW modifications on vphone via SSH ramdisk. +# cfw_install.sh — Install base CFW modifications on vphone via SSH ramdisk. # # Installs Cryptexes, patches system binaries, installs jailbreak tools # and configures LaunchDaemons for persistent SSH/VNC access. @@ -19,6 +19,7 @@ set -euo pipefail VM_DIR="${1:-.}" SCRIPT_DIR="${0:a:h}" +CFW_SKIP_HALT="${CFW_SKIP_HALT:-0}" # Resolve absolute paths VM_DIR="$(cd "$VM_DIR" && pwd)" @@ -370,5 +371,8 @@ echo "[+] CFW installation complete!" echo " Reboot the device for changes to take effect." echo " After boot, SSH will be available on port 22222 (password: alpine)" -ssh_cmd "/sbin/halt" || true - +if [[ "$CFW_SKIP_HALT" == "1" ]]; then + echo "[*] CFW_SKIP_HALT=1, skipping halt." +else + ssh_cmd "/sbin/halt" || true +fi diff --git a/scripts/cfw_install_jb.sh b/scripts/cfw_install_jb.sh new file mode 100755 index 0000000..616fcbc --- /dev/null +++ b/scripts/cfw_install_jb.sh @@ -0,0 +1,214 @@ +#!/bin/zsh +# cfw_install_jb.sh — Install base CFW + JB extensions on vphone via SSH ramdisk. +# +# Runs the base CFW installer first (phases 1-7), then applies JB-specific +# modifications: launchd jetsam patch, dylib injection, procursus bootstrap, +# and BaseBin hook deployment. 
+# +# Prerequisites (in addition to cfw_install.sh requirements): +# - cfw_jb_input/ or resources/cfw_jb_input.tar.zst present +# - zstd (for bootstrap decompression) +# +# Usage: make cfw_install_jb +set -euo pipefail + +VM_DIR="${1:-.}" +SCRIPT_DIR="${0:a:h}" + +# ════════════════════════════════════════════════════════════════ +# Step 1: Run base CFW install (skip halt — we continue with JB phases) +# ════════════════════════════════════════════════════════════════ +echo "[*] cfw_install_jb.sh — Installing CFW + JB extensions..." +echo "" +CFW_SKIP_HALT=1 zsh "$SCRIPT_DIR/cfw_install.sh" "$VM_DIR" + +# ════════════════════════════════════════════════════════════════ +# Step 2: JB-specific phases +# ════════════════════════════════════════════════════════════════ + +# Resolve absolute paths (same as base script) +VM_DIR="$(cd "${VM_DIR}" && pwd)" + +# ── Configuration ─────────────────────────────────────────────── +CFW_INPUT="cfw_input" +CFW_JB_INPUT="cfw_jb_input" +CFW_JB_ARCHIVE="cfw_jb_input.tar.zst" +TEMP_DIR="$VM_DIR/.cfw_temp" + +SSH_PORT=2222 +SSH_PASS="alpine" +SSH_USER="root" +SSH_HOST="localhost" +SSH_OPTS=( + -o StrictHostKeyChecking=no + -o UserKnownHostsFile=/dev/null + -o PreferredAuthentications=password + -o ConnectTimeout=30 + -q +) + +# ── Helpers ───────────────────────────────────────────────────── +die() { echo "[-] $*" >&2; exit 1; } + +_sshpass() { + "$VM_DIR/$CFW_INPUT/tools/sshpass" -p "$SSH_PASS" "$@" +} + +ssh_cmd() { + _sshpass ssh "${SSH_OPTS[@]}" -p "$SSH_PORT" "$SSH_USER@$SSH_HOST" "$@" +} + +scp_to() { + _sshpass scp -q "${SSH_OPTS[@]}" -P "$SSH_PORT" -r "$1" "$SSH_USER@$SSH_HOST:$2" +} + +scp_from() { + _sshpass scp -q "${SSH_OPTS[@]}" -P "$SSH_PORT" "$SSH_USER@$SSH_HOST:$1" "$2" +} + +remote_file_exists() { + ssh_cmd "test -f '$1'" 2>/dev/null +} + +ldid_sign() { + local file="$1" bundle_id="${2:-}" + local args=(-S -M "-K$VM_DIR/$CFW_INPUT/signcert.p12") + [[ -n "$bundle_id" ]] && args+=("-I$bundle_id") + 
"$VM_DIR/$CFW_INPUT/tools/ldid_macosx_arm64" "${args[@]}" "$file" +} + +remote_mount() { + local dev="$1" mnt="$2" opts="${3:-rw}" + ssh_cmd "/sbin/mount_apfs -o $opts $dev $mnt 2>/dev/null || true" +} + +get_boot_manifest_hash() { + ssh_cmd "/bin/ls /mnt5 2>/dev/null" | awk 'length($0)==96{print; exit}' +} + +# ── Setup JB input resources ────────────────────────────────── +setup_cfw_jb_input() { + [[ -d "$VM_DIR/$CFW_JB_INPUT" ]] && return + local archive + for search_dir in "$SCRIPT_DIR/resources" "$SCRIPT_DIR" "$VM_DIR"; do + archive="$search_dir/$CFW_JB_ARCHIVE" + if [[ -f "$archive" ]]; then + echo " Extracting $CFW_JB_ARCHIVE..." + tar --zstd -xf "$archive" -C "$VM_DIR" + return + fi + done + die "JB mode: neither $CFW_JB_INPUT/ nor $CFW_JB_ARCHIVE found" +} + +# ── Check JB prerequisites ──────────────────────────────────── +command -v zstd >/dev/null 2>&1 || die "'zstd' not found (required for JB bootstrap phase)" + +setup_cfw_jb_input +JB_INPUT_DIR="$VM_DIR/$CFW_JB_INPUT" +echo "" +echo "[+] JB input resources: $JB_INPUT_DIR" + +mkdir -p "$TEMP_DIR" + +# Mount device rootfs (may already be mounted from base install) +remote_mount /dev/disk1s1 /mnt1 + +# ═══════════ JB-1 PATCH LAUNCHD (JETSAM + DYLIB INJECTION) ════ +echo "" +echo "[JB-1] Patching launchd (jetsam guard + hook injection)..." + +if ! remote_file_exists "/mnt1/sbin/launchd.bak"; then + echo " Creating backup..." + ssh_cmd "/bin/cp /mnt1/sbin/launchd /mnt1/sbin/launchd.bak" +fi + +scp_from "/mnt1/sbin/launchd.bak" "$TEMP_DIR/launchd" + +# Inject launchdhook.dylib load command (idempotent — skips if already present) +if [[ -d "$JB_INPUT_DIR/basebin" ]]; then + echo " Injecting LC_LOAD_DYLIB for /cores/launchdhook.dylib..." 
+ python3 "$SCRIPT_DIR/patchers/cfw.py" inject-dylib "$TEMP_DIR/launchd" "/cores/launchdhook.dylib" +fi + +python3 "$SCRIPT_DIR/patchers/cfw.py" patch-launchd-jetsam "$TEMP_DIR/launchd" +ldid_sign "$TEMP_DIR/launchd" +scp_to "$TEMP_DIR/launchd" "/mnt1/sbin/launchd" +ssh_cmd "/bin/chmod 0755 /mnt1/sbin/launchd" + +echo " [+] launchd patched" + +# ═══════════ JB-2 INSTALL PROCURSUS BOOTSTRAP ══════════════════ +echo "" +echo "[JB-2] Installing procursus bootstrap..." + +remote_mount /dev/disk1s5 /mnt5 +BOOT_HASH="$(get_boot_manifest_hash)" +[[ -n "$BOOT_HASH" ]] || die "Could not find 96-char boot manifest hash in /mnt5" +echo " Boot manifest hash: $BOOT_HASH" + +BOOTSTRAP_ZST="$JB_INPUT_DIR/jb/bootstrap-iphoneos-arm64.tar.zst" +SILEO_DEB="$JB_INPUT_DIR/jb/org.coolstar.sileo_2.5.1_iphoneos-arm64.deb" +[[ -f "$BOOTSTRAP_ZST" ]] || die "Missing $BOOTSTRAP_ZST" + +BOOTSTRAP_TAR="$TEMP_DIR/bootstrap-iphoneos-arm64.tar" +zstd -d -f "$BOOTSTRAP_ZST" -o "$BOOTSTRAP_TAR" + +scp_to "$BOOTSTRAP_TAR" "/mnt5/$BOOT_HASH/bootstrap-iphoneos-arm64.tar" +if [[ -f "$SILEO_DEB" ]]; then + scp_to "$SILEO_DEB" "/mnt5/$BOOT_HASH/org.coolstar.sileo_2.5.1_iphoneos-arm64.deb" +fi + +ssh_cmd "/bin/mkdir -p /mnt5/$BOOT_HASH/jb-vphone" +ssh_cmd "/bin/chmod 0755 /mnt5/$BOOT_HASH/jb-vphone" +ssh_cmd "/usr/sbin/chown 0:0 /mnt5/$BOOT_HASH/jb-vphone" +ssh_cmd "/usr/bin/tar --preserve-permissions -xkf /mnt5/$BOOT_HASH/bootstrap-iphoneos-arm64.tar \ + -C /mnt5/$BOOT_HASH/jb-vphone/" +ssh_cmd "/bin/mv /mnt5/$BOOT_HASH/jb-vphone/var /mnt5/$BOOT_HASH/jb-vphone/procursus" +ssh_cmd "/bin/mkdir -p /mnt5/$BOOT_HASH/jb-vphone/procursus" +ssh_cmd "/bin/mv /mnt5/$BOOT_HASH/jb-vphone/procursus/jb/* /mnt5/$BOOT_HASH/jb-vphone/procursus 2>/dev/null || true" +ssh_cmd "/bin/rm -rf /mnt5/$BOOT_HASH/jb-vphone/procursus/jb" +ssh_cmd "/bin/rm -f /mnt5/$BOOT_HASH/bootstrap-iphoneos-arm64.tar" +rm -f "$BOOTSTRAP_TAR" + +echo " [+] procursus bootstrap installed" + +# ═══════════ JB-3 DEPLOY BASEBIN HOOKS 
═════════════════════════ +BASEBIN_DIR="$JB_INPUT_DIR/basebin" +if [[ -d "$BASEBIN_DIR" ]]; then + echo "" + echo "[JB-3] Deploying BaseBin hooks to /cores/..." + + ssh_cmd "/bin/mkdir -p /mnt1/cores" + ssh_cmd "/bin/chmod 0755 /mnt1/cores" + + for dylib in "$BASEBIN_DIR"/*.dylib; do + [[ -f "$dylib" ]] || continue + dylib_name="$(basename "$dylib")" + echo " Installing $dylib_name..." + # Re-sign with our certificate before deploying + ldid_sign "$dylib" + scp_to "$dylib" "/mnt1/cores/$dylib_name" + ssh_cmd "/bin/chmod 0755 /mnt1/cores/$dylib_name" + done + + echo " [+] BaseBin hooks deployed" +fi + +# ═══════════ CLEANUP ═════════════════════════════════════════ +echo "" +echo "[*] Unmounting device filesystems..." +ssh_cmd "/sbin/umount /mnt1 2>/dev/null || true" +ssh_cmd "/sbin/umount /mnt3 2>/dev/null || true" +ssh_cmd "/sbin/umount /mnt5 2>/dev/null || true" + +echo "[*] Cleaning up temp binaries..." +rm -f "$TEMP_DIR/launchd" \ + "$TEMP_DIR/bootstrap-iphoneos-arm64.tar" + +echo "" +echo "[+] CFW + JB installation complete!" +echo " Reboot the device for changes to take effect." +echo " After boot, SSH will be available on port 22222 (password: alpine)" + +ssh_cmd "/sbin/halt" || true diff --git a/scripts/fw_patch_jb.py b/scripts/fw_patch_jb.py new file mode 100644 index 0000000..a05e763 --- /dev/null +++ b/scripts/fw_patch_jb.py @@ -0,0 +1,115 @@ +#!/usr/bin/env python3 +""" +fw_patch_jb.py — Apply jailbreak extension patches after base fw_patch. + +Usage: + python3 fw_patch_jb.py [vm_directory] + +This script runs base `fw_patch.py` first, then applies additional JB-oriented +patches found dynamically. 
+""" + +import os +import subprocess +import sys + +from fw_patch import ( + find_file, + find_restore_dir, + load_firmware, + save_firmware, +) +from patchers.iboot_jb import IBootJBPatcher +from patchers.kernel_jb import KernelJBPatcher +from patchers.txm_jb import TXMJBPatcher + + +def patch_ibss_jb(data): + p = IBootJBPatcher(data, mode="ibss", label="Loaded iBSS", verbose=True) + n = p.apply() + print(f" [+] {n} iBSS JB patches applied dynamically") + return n > 0 + + +def patch_kernelcache_jb(data): + kp = KernelJBPatcher(data, verbose=True) + n = kp.apply() + print(f" [+] {n} kernel JB patches applied dynamically") + return n > 0 + + +def patch_txm_jb(data): + p = TXMJBPatcher(data, verbose=True) + n = p.apply() + print(f" [+] {n} TXM JB patches applied dynamically") + return n > 0 + + +COMPONENTS = [ + # (name, search_base_is_restore, search_patterns, patch_function, preserve_payp) + ("iBSS (JB)", True, + ["Firmware/dfu/iBSS.vresearch101.RELEASE.im4p"], + patch_ibss_jb, False), + ("TXM (JB)", True, + ["Firmware/txm.iphoneos.research.im4p"], + patch_txm_jb, True), + ("kernelcache (JB)", True, + ["kernelcache.research.vphone600"], + patch_kernelcache_jb, True), +] + + +def patch_component(path, patch_fn, name, preserve_payp): + print(f"\n{'=' * 60}") + print(f" {name}: {path}") + print(f"{'=' * 60}") + + im4p, data, was_im4p, original_raw = load_firmware(path) + fmt = "IM4P" if was_im4p else "raw" + extra = f", fourcc={im4p.fourcc}" if was_im4p and im4p else "" + print(f" format: {fmt}{extra}, {len(data)} bytes") + + if not patch_fn(data): + print(f" [-] FAILED: {name}") + sys.exit(1) + + save_firmware(path, im4p, data, was_im4p, + original_raw if preserve_payp else None) + print(f" [+] saved ({fmt})") + + +def main(): + vm_dir = sys.argv[1] if len(sys.argv) > 1 else os.getcwd() + vm_dir = os.path.abspath(vm_dir) + + if not os.path.isdir(vm_dir): + print(f"[-] Not a directory: {vm_dir}") + sys.exit(1) + + script_dir = 
os.path.dirname(os.path.abspath(__file__)) + fw_patch_script = os.path.join(script_dir, "fw_patch.py") + + print("[*] Running base fw_patch first ...", flush=True) + subprocess.run([sys.executable, fw_patch_script, vm_dir], check=True) + + restore_dir = find_restore_dir(vm_dir) + if not restore_dir: + print(f"[-] No *Restore* directory found in {vm_dir}") + sys.exit(1) + + print(f"[*] VM directory: {vm_dir}") + print(f"[*] Restore directory: {restore_dir}") + print(f"[*] Applying {len(COMPONENTS)} JB extension components ...") + + for name, in_restore, patterns, patch_fn, preserve_payp in COMPONENTS: + search_base = restore_dir if in_restore else vm_dir + path = find_file(search_base, patterns, name) + patch_component(path, patch_fn, name, preserve_payp) + + print(f"\n{'=' * 60}") + print(" JB extension patching complete!") + print(f"{'=' * 60}") + + +if __name__ == "__main__": + main() diff --git a/scripts/patchers/cfw.py b/scripts/patchers/cfw.py index 59e8eee..ab58f65 100755 --- a/scripts/patchers/cfw.py +++ b/scripts/patchers/cfw.py @@ -20,9 +20,16 @@ Commands: patch-mobileactivationd Patch -[DeviceType should_hactivate] to always return true. + patch-launchd-jetsam + Patch launchd jetsam panic guard to avoid initproc crash loop. + inject-daemons Inject bash/dropbear/trollvnc into launchd.plist. + inject-dylib + Inject LC_LOAD_DYLIB into Mach-O binary (thin or universal). 
+ Equivalent to: optool install -c load -p -t + Dependencies: pip install capstone keystone-engine """ @@ -34,6 +41,7 @@ import subprocess import sys from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN +from capstone.arm64_const import ARM64_OP_IMM from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE # ══════════════════════════════════════════════════════════════════ @@ -52,6 +60,13 @@ def asm(s): return bytes(enc) +def asm_at(s, addr): + enc, _ = _ks.asm(s, addr=addr) + if not enc: + raise RuntimeError(f"asm failed at 0x{addr:X}: {s}") + return bytes(enc) + + NOP = asm("nop") MOV_X0_1 = asm("mov x0, #1") RET = asm("ret") @@ -70,6 +85,14 @@ def disasm_at(data, off, n=8): return list(_cs.disasm(bytes(data[off : off + n * 4]), off)) +def _log_asm(data, offset, count=5, marker_off=-1): + """Log disassembly of `count` instructions at file offset for before/after comparison.""" + insns = disasm_at(data, offset, count) + for insn in insns: + tag = " >>>" if insn.address == marker_off else " " + print(f" {tag} 0x{insn.address:08X}: {insn.mnemonic:8s} {insn.op_str}") + + # ══════════════════════════════════════════════════════════════════ # Mach-O helpers # ══════════════════════════════════════════════════════════════════ @@ -210,10 +233,14 @@ def patch_seputil(filepath): original = bytes(data[offset : offset + len(anchor)]) print(f" Found format string at 0x{offset:X}: {original!r}") + print(f" Before: {bytes(data[offset:offset+7]).hex(' ')}") + # Replace %s (2 bytes) with AA — turns "/%s.gl" into "/AA.gl" data[pct_s_off] = ord("A") data[pct_s_off + 1] = ord("A") + print(f" After: {bytes(data[offset:offset+7]).hex(' ')}") + open(filepath, "wb").write(data) print(f" [+] Patched at 0x{pct_s_off:X}: %s -> AA") print(f" /{anchor[1:-1].decode()} -> /AA.gl") @@ -307,12 +334,15 @@ def patch_launchd_cache_loader(filepath): # So only search forward from the ref, not backwards. 
branch_foff = _find_nearby_branch(data, ref_foff, text_foff, text_size) if branch_foff >= 0: - insns = disasm_at(data, branch_foff, 1) - if insns: - print( - f" Patching: {insns[0].mnemonic} {insns[0].op_str} -> nop" - ) + ctx_start = max(text_foff, branch_foff - 8) + print(f" Before:") + _log_asm(data, ctx_start, 5, branch_foff) + data[branch_foff : branch_foff + 4] = NOP + + print(f" After:") + _log_asm(data, ctx_start, 5, branch_foff) + open(filepath, "wb").write(data) print(f" [+] NOPped at 0x{branch_foff:X}") return True @@ -463,19 +493,160 @@ def patch_mobileactivationd(filepath): print(f" [-] IMP offset 0x{imp_foff:X} out of bounds") return False - insns = disasm_at(data, imp_foff, 4) - if insns: - print(f" Original: {insns[0].mnemonic} {insns[0].op_str}") + print(f" Before:") + _log_asm(data, imp_foff, 4, imp_foff) # Patch to: mov x0, #1; ret data[imp_foff : imp_foff + 4] = MOV_X0_1 data[imp_foff + 4 : imp_foff + 8] = RET + print(f" After:") + _log_asm(data, imp_foff, 4, imp_foff) + open(filepath, "wb").write(data) print(f" [+] Patched at 0x{imp_foff:X}: mov x0, #1; ret") return True +# ══════════════════════════════════════════════════════════════════ +# 4. 
launchd — Jetsam panic bypass
+# ══════════════════════════════════════════════════════════════════
+
+
+def _extract_branch_target_off(insn):
+    for op in reversed(insn.operands):
+        if op.type == ARM64_OP_IMM:
+            return op.imm
+    return -1
+
+
+def _is_return_block(data, foff, text_foff, text_size):
+    """Check if foff points to a function return sequence (ret/retab within 8 insns)."""
+    for i in range(8):
+        check = foff + i * 4
+        if check >= text_foff + text_size:
+            break
+        insns = disasm_at(data, check, 1)
+        if not insns:
+            continue
+        if insns[0].mnemonic in ("ret", "retab"):
+            return True
+        # Stop at unconditional branches (different block)
+        if insns[0].mnemonic in ("b", "bl", "br", "blr"):
+            break
+    return False
+
+
+def patch_launchd_jetsam(filepath):
+    """Bypass launchd jetsam panic path via dynamic string-xref branch rewrite.
+
+    Anchor strategy:
+    1. Find jetsam panic string in cstring-like data.
+    2. Find ADRP+ADD xref to the string start in __TEXT,__text.
+    3. Search backward for a conditional branch whose target is the function's
+       return/success path (basic block containing ret/retab).
+    4. Rewrite that conditional branch to unconditional `b <ret_target>`,
+       so the function always returns success and never reaches the panic.
+ """ + data = bytearray(open(filepath, "rb").read()) + sections = parse_macho_sections(data) + + text_sec = find_section(sections, "__TEXT,__text") + if not text_sec: + print(" [-] __TEXT,__text not found") + return False + + text_va, text_size, text_foff = text_sec + code = bytes(data[text_foff : text_foff + text_size]) + + cond_mnemonics = { + "b.eq", "b.ne", "b.cs", "b.hs", "b.cc", "b.lo", + "b.mi", "b.pl", "b.vs", "b.vc", "b.hi", "b.ls", + "b.ge", "b.lt", "b.gt", "b.le", + "cbz", "cbnz", "tbz", "tbnz", + } + + anchors = [ + b"jetsam property category (Daemon) is not initialized", + b"jetsam property category", + b"initproc exited -- exit reason namespace 7 subcode 0x1", + ] + + for anchor in anchors: + hit_off = data.find(anchor) + if hit_off < 0: + continue + + sec_foff = -1 + sec_va = -1 + for _, (sva, ssz, sfoff) in sections.items(): + if sfoff <= hit_off < sfoff + ssz: + sec_foff = sfoff + sec_va = sva + break + if sec_foff < 0: + continue + + str_start_off = _find_cstring_start(data, hit_off, sec_foff) + str_start_va = sec_va + (str_start_off - sec_foff) + + ref_va = _find_adrp_add_ref(code, text_va, str_start_va) + if ref_va < 0: + continue + ref_foff = text_foff + (ref_va - text_va) + + print(f" Found jetsam anchor '{anchor.decode(errors='ignore')}'") + print(f" string start: va:0x{str_start_va:X}") + print(f" xref at foff:0x{ref_foff:X}") + + # Search backward from xref for conditional branches targeting + # the function's return path (block containing ret/retab). + # Pick the earliest (farthest back) one — it skips the most + # jetsam-related code and matches the upstream patch strategy. 
+ scan_lo = max(text_foff, ref_foff - 0x300) + patch_off = -1 + patch_target = -1 + + for back in range(ref_foff - 4, scan_lo - 1, -4): + insns = disasm_at(data, back, 1) + if not insns: + continue + insn = insns[0] + if insn.mnemonic not in cond_mnemonics: + continue + + tgt = _extract_branch_target_off(insn) + if tgt < 0: + continue + # Target must be a valid file offset within __text + if tgt < text_foff or tgt >= text_foff + text_size: + continue + # Target must be a return block (contains ret/retab) + if _is_return_block(data, tgt, text_foff, text_size): + patch_off = back + patch_target = tgt + # Don't break — keep scanning for an earlier match + + if patch_off < 0: + continue + + ctx_start = max(text_foff, patch_off - 8) + print(f" Before:") + _log_asm(data, ctx_start, 5, patch_off) + + data[patch_off : patch_off + 4] = asm_at(f"b #0x{patch_target:X}", patch_off) + + print(f" After:") + _log_asm(data, ctx_start, 5, patch_off) + + open(filepath, "wb").write(data) + print(f" [+] Patched at 0x{patch_off:X}: jetsam panic guard bypass") + return True + + print(" [-] Dynamic jetsam anchor/xref not found") + return False + + def _find_via_objc_metadata(data): """Find method IMP through ObjC runtime metadata.""" sections = parse_macho_sections(data) @@ -574,6 +745,235 @@ def _find_via_objc_metadata(data): return -1 +# ══════════════════════════════════════════════════════════════════ +# 5. Mach-O dylib injection (optool replacement) +# ══════════════════════════════════════════════════════════════════ + + +def _align(n, alignment): + return (n + alignment - 1) & ~(alignment - 1) + + +def _find_first_section_offset(data): + """Find the file offset of the earliest section data in the Mach-O. + + This tells us how much space is available after load commands. + For fat/universal binaries, we operate on the first slice. 
+ """ + magic = struct.unpack_from(" 0 and size > 0 and file_off < earliest: + earliest = file_off + sect_off += 80 + offset += cmdsize + return earliest + + +def _get_fat_slices(data): + """Parse FAT (universal) binary header and return list of (offset, size) tuples. + + Returns [(0, len(data))] for thin binaries. + """ + magic = struct.unpack_from(">I", data, 0)[0] + if magic == 0xCAFEBABE: # FAT_MAGIC + nfat = struct.unpack_from(">I", data, 4)[0] + slices = [] + for i in range(nfat): + off = 8 + i * 20 + slice_off = struct.unpack_from(">I", data, off + 8)[0] + slice_size = struct.unpack_from(">I", data, off + 12)[0] + slices.append((slice_off, slice_size)) + return slices + elif magic == 0xBEBAFECA: # FAT_MAGIC_64 + nfat = struct.unpack_from(">I", data, 4)[0] + slices = [] + for i in range(nfat): + off = 8 + i * 32 + slice_off = struct.unpack_from(">Q", data, off + 8)[0] + slice_size = struct.unpack_from(">Q", data, off + 16)[0] + slices.append((slice_off, slice_size)) + return slices + else: + return [(0, len(data))] + + +def _check_existing_dylib(data, base, dylib_path): + """Check if the dylib is already loaded in this Mach-O slice.""" + magic = struct.unpack_from(" 0: + ncmds = struct.unpack_from(" 256: + print(f" [-] Would overflow {overflow} bytes into section data (too much)") + return False + print(f" [!] Header overflow: {overflow} bytes into section data " + f"(same as optool — binary will be re-signed)") + + # Write the new load command at the end of existing commands + data[header_end : header_end + cmd_size] = lc_data + + # Update header: ncmds += 1, sizeofcmds += cmd_size + struct.pack_into(" -t + """ + data = bytearray(open(filepath, "rb").read()) + slices = _get_fat_slices(bytes(data)) + + injected = 0 + for slice_off, slice_size in slices: + if _check_existing_dylib(data, slice_off, dylib_path): + print(f" [!] 
Dylib already loaded in slice at 0x{slice_off:X}, skipping") + injected += 1 + continue + + if _inject_lc_load_dylib(data, slice_off, dylib_path): + print(f" [+] Injected LC_LOAD_DYLIB '{dylib_path}' at slice 0x{slice_off:X}") + injected += 1 + + if injected == len(slices): + open(filepath, "wb").write(data) + print(f" [+] Wrote {filepath} ({injected} slice(s) patched)") + return True + else: + print(f" [-] Only {injected}/{len(slices)} slices patched") + return False + + # ══════════════════════════════════════════════════════════════════ # BuildManifest parsing # ══════════════════════════════════════════════════════════════════ @@ -676,16 +1076,31 @@ def main(): if not patch_mobileactivationd(sys.argv[2]): sys.exit(1) + elif cmd == "patch-launchd-jetsam": + if len(sys.argv) < 3: + print("Usage: patch_cfw.py patch-launchd-jetsam ") + sys.exit(1) + if not patch_launchd_jetsam(sys.argv[2]): + sys.exit(1) + elif cmd == "inject-daemons": if len(sys.argv) < 4: print("Usage: patch_cfw.py inject-daemons ") sys.exit(1) inject_daemons(sys.argv[2], sys.argv[3]) + elif cmd == "inject-dylib": + if len(sys.argv) < 4: + print("Usage: patch_cfw.py inject-dylib ") + sys.exit(1) + if not inject_dylib(sys.argv[2], sys.argv[3]): + sys.exit(1) + else: print(f"Unknown command: {cmd}") print("Commands: cryptex-paths, patch-seputil, patch-launchd-cache-loader,") - print(" patch-mobileactivationd, inject-daemons") + print(" patch-mobileactivationd, patch-launchd-jetsam,") + print(" inject-daemons, inject-dylib") sys.exit(1) diff --git a/scripts/patchers/iboot_jb.py b/scripts/patchers/iboot_jb.py new file mode 100644 index 0000000..e1e3a6d --- /dev/null +++ b/scripts/patchers/iboot_jb.py @@ -0,0 +1,105 @@ +#!/usr/bin/env python3 +""" +iboot_jb.py — Jailbreak extension patcher for iBoot-based images. + +Currently adds iBSS-only nonce generation bypass used by fw_patch_jb.py. 
+""" + +from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE + +from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_REG, ARM64_REG_W0 + +from .iboot import IBootPatcher, _disasm_one + + +_ks = Ks(KS_ARCH_ARM64, KS_MODE_LE) + + +class IBootJBPatcher(IBootPatcher): + """JB-only patcher for iBoot images.""" + + def _asm_at(self, asm_line, addr): + enc, _ = _ks.asm(asm_line, addr=addr) + if not enc: + raise RuntimeError(f"asm failed at 0x{addr:X}: {asm_line}") + return bytes(enc) + + def apply(self): + self.patches = [] + if self.mode == "ibss": + self.patch_skip_generate_nonce() + + for off, pb, _ in self.patches: + self.data[off:off + len(pb)] = pb + + if self.verbose and self.patches: + self._log(f"\n [{len(self.patches)} {self.mode.upper()} JB patches applied]") + return len(self.patches) + + def _find_refs_to_offset(self, target_off): + refs = [] + for insns in self._chunked_disasm(): + for i in range(len(insns) - 1): + a, b = insns[i], insns[i + 1] + if a.mnemonic != "adrp" or b.mnemonic != "add": + continue + if len(a.operands) < 2 or len(b.operands) < 3: + continue + if a.operands[0].reg != b.operands[1].reg: + continue + if a.operands[1].imm + b.operands[2].imm == target_off: + refs.append((a.address, b.address, b.operands[0].reg)) + return refs + + def _find_string_refs(self, needle): + if isinstance(needle, str): + needle = needle.encode() + seen = set() + refs = [] + off = 0 + while True: + s_off = self.raw.find(needle, off) + if s_off < 0: + break + off = s_off + 1 + for r in self._find_refs_to_offset(s_off): + if r[0] not in seen: + seen.add(r[0]) + refs.append(r) + return refs + + def patch_skip_generate_nonce(self): + refs = self._find_string_refs(b"boot-nonce") + if not refs: + self._log(" [-] iBSS JB: no refs to 'boot-nonce'") + return False + + for _, add_off, _ in refs: + for scan in range(add_off, min(add_off + 0x100, self.size - 12), 4): + i0 = _disasm_one(self.raw, scan) + i1 = _disasm_one(self.raw, scan + 4) + i2 = 
_disasm_one(self.raw, scan + 8) + if not i0 or not i1 or not i2: + continue + if i0.mnemonic not in ("tbz", "tbnz"): + continue + if len(i0.operands) < 3: + continue + if not (i0.operands[0].type == ARM64_OP_REG + and i0.operands[0].reg == ARM64_REG_W0): + continue + if not (i0.operands[1].type == ARM64_OP_IMM + and i0.operands[1].imm == 0): + continue + if i1.mnemonic != "mov" or i1.op_str != "w0, #0": + continue + if i2.mnemonic != "bl": + continue + + target = i0.operands[2].imm + self.emit(scan, self._asm_at(f"b #0x{target:X}", scan), + "JB: skip generate_nonce") + return True + + self._log(" [-] iBSS JB: generate_nonce branch pattern not found") + return False diff --git a/scripts/patchers/kernel.py b/scripts/patchers/kernel.py index 8354e37..cfa71d4 100755 --- a/scripts/patchers/kernel.py +++ b/scripts/patchers/kernel.py @@ -1288,6 +1288,7 @@ class KernelPatcher: def find_all(self): """Find and record all kernel patches. Returns list of (offset, bytes, desc).""" + self.patches = [] self.patch_apfs_root_snapshot() # 1 self.patch_apfs_seal_broken() # 2 self.patch_bsd_init_rootvp() # 3 diff --git a/scripts/patchers/kernel_jb.py b/scripts/patchers/kernel_jb.py new file mode 100644 index 0000000..64f7c43 --- /dev/null +++ b/scripts/patchers/kernel_jb.py @@ -0,0 +1,2128 @@ +#!/usr/bin/env python3 +""" +kernel_jb.py — Jailbreak extension patcher for iOS kernelcache. + +Builds on kernel.py's Mach-O parsing / indexing helpers while keeping JB logic +in a separate file for clean layering. 
+
+All patches use dynamic matchers:
+  - String anchors → ADRP+ADD xrefs → function scope → patch site
+  - BL frequency analysis to identify stub targets
+  - Pattern matching (≤3 instruction sequences)
+  - No symbols or hardcoded offsets
+
+Patches are split into:
+  - Group A: Already implemented (AMFI trustcache, execve, task conversion, sandbox)
+  - Group B: Simple patches (string-anchored / pattern-matched)
+  - Group C: Complex shellcode patches (code cave + branch redirects)
+"""
+
+import struct
+from collections import Counter
+
+from capstone.arm64_const import (
+    ARM64_OP_REG, ARM64_OP_IMM, ARM64_OP_MEM,
+    ARM64_REG_X0, ARM64_REG_X1, ARM64_REG_W0, ARM64_REG_X8,
+)
+
+from .kernel import (
+    KernelPatcher,
+    NOP,
+    MOV_X0_0,
+    MOV_X0_1,
+    MOV_W0_0,
+    MOV_W0_1,
+    CMP_W0_W0,
+    CMP_X0_X0,
+    RET,
+    asm,
+    _rd32,
+    _rd64,
+)
+
+
+CBZ_X2_8 = asm("cbz x2, #8")
+STR_X0_X2 = asm("str x0, [x2]")
+CMP_XZR_XZR = asm("cmp xzr, xzr")
+MOV_X8_XZR = asm("mov x8, xzr")
+
+
+class KernelJBPatcher(KernelPatcher):
+    """JB-only kernel patcher."""
+
+    def __init__(self, data, verbose=True):
+        super().__init__(data, verbose)
+        self._build_symbol_table()
+
+    # ── Symbol table (best-effort, may find 0 on stripped kernels) ──
+
+    def _build_symbol_table(self):
+        """Parse nlist entries from LC_SYMTAB to build symbol→foff map."""
+        self.symbols = {}
+
+        # Parse top-level LC_SYMTAB, then descend into fileset sub-Mach-Os
+        ncmds = struct.unpack_from("<I", self.raw, 0x10)[0]
+        off = 0x20
+        for _ in range(ncmds):
+            if off + 8 > self.size:
+                break
+            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
+            if cmd == 0x02:  # LC_SYMTAB
+                self._parse_symtab(off)
+            elif cmd == 0x80000035:  # LC_FILESET_ENTRY
+                _, fileoff = struct.unpack_from("<QQ", self.raw, off + 8)
+                self._parse_fileset_entry(fileoff)
+            if cmdsize < 8 or off + cmdsize > self.size:
+                break
+            off += cmdsize
+
+    def _parse_fileset_entry(self, mh_off):
+        """Parse the LC_SYMTAB of one fileset sub-Mach-O at mh_off."""
+        if mh_off + 0x20 > self.size:
+            return
+        magic = _rd32(self.raw, mh_off)
+        if magic != 0xFEEDFACF:
+            return
+        ncmds = struct.unpack_from("<I", self.raw, mh_off + 0x10)[0]
+        off = mh_off + 0x20
+        for _ in range(ncmds):
+            if off + 8 > self.size:
+                break
+            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
+            if cmd == 0x02:  # LC_SYMTAB
+                self._parse_symtab(off)
+            if cmdsize < 8 or off + cmdsize > self.size:
+                break
+            off += cmdsize
+
+    def _parse_symtab(self, lc_off):
+        """Record all named nlist_64 entries from one LC_SYMTAB command."""
+        symoff, nsyms, stroff, strsize = struct.unpack_from(
+            "<IIII", self.raw, lc_off + 8)
+        for i in range(nsyms):
+            sym_off = symoff + i * 16
+            if sym_off + 16 > self.size:
+                break
+            n_strx, n_type, n_sect, n_desc, n_value = struct.unpack_from(
+                "<IBBHQ", self.raw, sym_off)
+            if n_strx == 0 or n_value == 0:
+                continue
+            name_off = stroff + n_strx
+            if name_off >= self.size:
+                continue
+            name_end = self.raw.find(b'\x00', name_off)
+            if name_end < 0 or name_end - name_off > 512:
+                continue
+            name = self.raw[name_off:name_end].decode('ascii', errors='replace')
+            foff = n_value - self.base_va
+            if 0 <= foff < self.size:
+                self.symbols[name] = foff
+
+    def _resolve_symbol(self, name):
+        """Look up a function symbol, return file offset or -1."""
+        return self.symbols.get(name, -1)
+
+    # ── Code cave finder ──────────────────────────────────────────
+
+    def _find_code_cave(self, size, align=4):
+        """Find a region of zeros/0xFF/padding in executable memory for shellcode.
+        Returns file offset of the cave start, or -1 if not found.
+        """
+        needed = (size + align - 1) // align * align
+        for rng_start, rng_end in self.code_ranges:
+            run_start = -1
+            run_len = 0
+            for off in range(rng_start, rng_end, 4):
+                val = _rd32(self.raw, off)
+                # 0x00000000 = udf #0, 0xD4200000 = brk #0
+                if val == 0x00000000 or val == 0xFFFFFFFF or val == 0xD4200000:
+                    if run_start < 0:
+                        run_start = off
+                        run_len = 4
+                    else:
+                        run_len += 4
+                    if run_len >= needed:
+                        return run_start
+                else:
+                    run_start = -1
+                    run_len = 0
+        return -1
+
+    # ── Branch encoding helpers ───────────────────────────────────
+
+    def _encode_b(self, from_off, to_off):
+        """Encode an unconditional B instruction."""
+        delta = (to_off - from_off) // 4
+        if delta < -(1 << 25) or delta >= (1 << 25):
+            return None
+        return struct.pack("<I", 0x14000000 | (delta & 0x3FFFFFF))
+
+    def _encode_bl(self, from_off, to_off):
+        """Encode a BL instruction."""
+        delta = (to_off - from_off) // 4
+        if delta < -(1 << 25) or delta >= (1 << 25):
+            return None
+        return struct.pack("<I", 0x94000000 | (delta & 0x3FFFFFF))
+
+    # ══════════════════════════════════════════════════════════════
+    # Group A: Already implemented
+    # ══════════════════════════════════════════════════════════════
+
+    def patch_amfi_execve_kill(self):
+        """AMFI execve kill path: BL -> mov x0,#0 (caller -> function local pair)."""
+        self._log("\n[JB] AMFI execve kill path: BL -> mov x0,#0 (2 sites)")
+
+        str_off = self.find_string(b"AMFI: hook..execve() killing")
+        if str_off < 0:
+            str_off = self.find_string(b"execve() killing")
+        if str_off < 0:
+            self._log("  [-] execve kill log string not found")
+            return False
+
+        refs = self.find_string_refs(str_off, *self.kern_text)
+        if not refs:
+            refs = self.find_string_refs(str_off)
+        if not refs:
+            self._log("  [-] no refs to execve kill log string")
+            return False
+
+        patched = False
+        seen_funcs = set()
+        for adrp_off, _, _ in refs:
+            func_start = self.find_function_start(adrp_off)
+            if func_start < 0 or func_start in seen_funcs:
+                continue
+            seen_funcs.add(func_start)
+
+            func_end = min(func_start + 0x800,
self.kern_text[1]) + for p in range(func_start + 4, func_end, 4): + d = self._disas_at(p) + if d and d[0].mnemonic == "pacibsp": + func_end = p + break + + early_window_end = min(func_start + 0x120, func_end) + hits = [] + for off in range(func_start, early_window_end - 4, 4): + d0 = self._disas_at(off) + d1 = self._disas_at(off + 4) + if not d0 or not d1: + continue + i0, i1 = d0[0], d1[0] + if i0.mnemonic != "bl": + continue + if i1.mnemonic in ("cbz", "cbnz") and i1.op_str.startswith("w0,"): + hits.append(off) + + if len(hits) != 2: + self._log(f" [-] execve helper at 0x{func_start:X}: " + f"expected 2 early BL+W0-branch sites, found {len(hits)}") + continue + + self.emit(hits[0], MOV_X0_0, "mov x0,#0 [AMFI execve helper A]") + self.emit(hits[1], MOV_X0_0, "mov x0,#0 [AMFI execve helper B]") + patched = True + break + + if not patched: + self._log(" [-] AMFI execve helper patch sites not found") + return patched + + def patch_task_conversion_eval_internal(self): + """Allow task conversion: cmp Xn,x0 -> cmp xzr,xzr at unique guard site.""" + self._log("\n[JB] task_conversion_eval_internal: cmp xzr,xzr") + + candidates = [] + ks, ke = self.kern_text + for off in range(ks + 4, ke - 12, 4): + d0 = self._disas_at(off) + if not d0: + continue + i0 = d0[0] + if i0.mnemonic != "cmp" or len(i0.operands) < 2: + continue + a0, a1 = i0.operands[0], i0.operands[1] + if not (a0.type == ARM64_OP_REG and a1.type == ARM64_OP_REG): + continue + if a1.reg != ARM64_REG_X0: + continue + cmp_reg = a0.reg + + dp = self._disas_at(off - 4) + d1 = self._disas_at(off + 4) + d2 = self._disas_at(off + 8) + d3 = self._disas_at(off + 12) + if not dp or not d1 or not d2 or not d3: + continue + p = dp[0] + i1, i2, i3 = d1[0], d2[0], d3[0] + + if p.mnemonic != "ldr" or len(p.operands) < 2: + continue + p0, p1 = p.operands[0], p.operands[1] + if p0.type != ARM64_OP_REG or p0.reg != cmp_reg: + continue + if p1.type != ARM64_OP_MEM: + continue + if p1.mem.base != cmp_reg: + continue + + if 
i1.mnemonic != "b.eq": + continue + if i2.mnemonic != "cmp" or len(i2.operands) < 2: + continue + j0, j1 = i2.operands[0], i2.operands[1] + if not (j0.type == ARM64_OP_REG and j1.type == ARM64_OP_REG): + continue + if not (j0.reg == cmp_reg and j1.reg == ARM64_REG_X1): + continue + if i3.mnemonic != "b.eq": + continue + + candidates.append(off) + + if len(candidates) != 1: + self._log(f" [-] expected 1 task-conversion guard site, found {len(candidates)}") + return False + + self.emit(candidates[0], CMP_XZR_XZR, + "cmp xzr,xzr [_task_conversion_eval_internal]") + return True + + def patch_sandbox_hooks_extended(self): + """Stub remaining sandbox MACF hooks (JB extension beyond base 5 hooks).""" + self._log("\n[JB] Sandbox extended hooks: mov x0,#0; ret") + + ops_table = self._find_sandbox_ops_table_via_conf() + if ops_table is None: + return False + + HOOK_INDICES_EXT = { + "vnode_check_getattr": 245, + "proc_check_get_cs_info": 249, + "proc_check_set_cs_info": 250, + "proc_check_set_cs_info2": 252, + "vnode_check_chroot": 254, + "vnode_check_create": 255, + "vnode_check_deleteextattr": 256, + "vnode_check_exchangedata": 257, + "vnode_check_exec": 258, + "vnode_check_getattrlist": 259, + "vnode_check_getextattr": 260, + "vnode_check_ioctl": 261, + "vnode_check_link": 264, + "vnode_check_listextattr": 265, + "vnode_check_open": 267, + "vnode_check_readlink": 270, + "vnode_check_setattrlist": 275, + "vnode_check_setextattr": 276, + "vnode_check_setflags": 277, + "vnode_check_setmode": 278, + "vnode_check_setowner": 279, + "vnode_check_setutimes": 280, + "vnode_check_stat": 281, + "vnode_check_truncate": 282, + "vnode_check_unlink": 283, + "vnode_check_fsgetpath": 316, + } + + sb_start, sb_end = self.sandbox_text + patched = 0 + seen = set() + + for hook_name, idx in HOOK_INDICES_EXT.items(): + func_off = self._read_ops_entry(ops_table, idx) + if func_off is None or func_off <= 0: + continue + if not (sb_start <= func_off < sb_end): + continue + if func_off in seen: + 
continue + seen.add(func_off) + + self.emit(func_off, MOV_X0_0, f"mov x0,#0 [_hook_{hook_name}]") + self.emit(func_off + 4, RET, f"ret [_hook_{hook_name}]") + patched += 1 + + if patched == 0: + self._log(" [-] no extended sandbox hooks patched") + return False + return True + + # ══════════════════════════════════════════════════════════════ + # Group B: Simple patches + # ══════════════════════════════════════════════════════════════ + + def patch_post_validation_additional(self): + """Additional postValidation CMP W0,W0 in AMFI code signing path.""" + self._log("\n[JB] postValidation additional: cmp w0,w0") + + str_off = self.find_string(b"AMFI: code signature validation failed") + if str_off < 0: + self._log(" [-] string not found") + return False + + refs = self.find_string_refs(str_off, *self.amfi_text) + if not refs: + refs = self.find_string_refs(str_off) + if not refs: + self._log(" [-] no code refs") + return False + + caller_start = self.find_function_start(refs[0][0]) + if caller_start < 0: + return False + + bl_targets = set() + func_end = self._find_func_end(caller_start, 0x2000) + for scan in range(caller_start, func_end, 4): + target = self._is_bl(scan) + if target >= 0: + bl_targets.add(target) + + patched = 0 + for target in sorted(bl_targets): + if not (self.amfi_text[0] <= target < self.amfi_text[1]): + continue + callee_end = self._find_func_end(target, 0x200) + for off in range(target, callee_end, 4): + d = self._disas_at(off, 2) + if len(d) < 2: + continue + i0, i1 = d[0], d[1] + if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne": + continue + ops = i0.operands + if len(ops) < 2: + continue + if ops[0].type != ARM64_OP_REG or ops[0].reg != ARM64_REG_W0: + continue + if ops[1].type != ARM64_OP_IMM: + continue + has_bl = False + for back in range(off - 4, max(off - 12, target), -4): + bt = self._is_bl(back) + if bt >= 0: + has_bl = True + break + if has_bl: + self.emit(off, CMP_W0_W0, + f"cmp w0,w0 [postValidation additional]") + patched += 1 + + 
if patched == 0: + self._log(" [-] no additional postValidation CMP sites found") + return False + return True + + def patch_proc_security_policy(self): + """Stub _proc_security_policy: mov x0,#0; ret. + + Anchor: find _proc_info via its distinctive switch-table pattern + (sub wN,wM,#1; cmp wN,#0x21), then identify the most-called BL + target within that function — that's _proc_security_policy. + """ + self._log("\n[JB] _proc_security_policy: mov x0,#0; ret") + + # Try symbol first + foff = self._resolve_symbol("_proc_security_policy") + if foff >= 0: + self.emit(foff, MOV_X0_0, "mov x0,#0 [_proc_security_policy]") + self.emit(foff + 4, RET, "ret [_proc_security_policy]") + return True + + # Find _proc_info by its distinctive switch table + # Pattern: sub wN, wM, #1; cmp wN, #0x21 (33 = max proc_info callnum) + proc_info_func = -1 + ks, ke = self.kern_text + for off in range(ks, ke - 8, 4): + d = self._disas_at(off, 2) + if len(d) < 2: + continue + i0, i1 = d[0], d[1] + if i0.mnemonic != "sub" or i1.mnemonic != "cmp": + continue + # sub wN, wM, #1 + if len(i0.operands) < 3: + continue + if i0.operands[2].type != ARM64_OP_IMM or i0.operands[2].imm != 1: + continue + # cmp wN, #0x21 + if len(i1.operands) < 2: + continue + if i1.operands[1].type != ARM64_OP_IMM or i1.operands[1].imm != 0x21: + continue + # Verify same register + if i0.operands[0].reg != i1.operands[0].reg: + continue + # Found it — find function start + proc_info_func = self.find_function_start(off) + break + + if proc_info_func < 0: + self._log(" [-] _proc_info function not found") + return False + + proc_info_end = self._find_func_end(proc_info_func, 0x4000) + self._log(f" [+] _proc_info at 0x{proc_info_func:X} (size 0x{proc_info_end - proc_info_func:X})") + + # Count BL targets within _proc_info — the most frequent one + # is _proc_security_policy (called once per switch case) + bl_targets = Counter() + for off in range(proc_info_func, proc_info_end, 4): + target = self._is_bl(off) + if target >= 0 
and ks <= target < ke: + bl_targets[target] += 1 + + if not bl_targets: + self._log(" [-] no BL targets found in _proc_info") + return False + + # The security policy check is called the most (once per case) + most_called = bl_targets.most_common(1)[0] + foff = most_called[0] + count = most_called[1] + self._log(f" [+] most-called BL target: 0x{foff:X} ({count} calls)") + + if count < 3: + self._log(" [-] most-called target has too few calls") + return False + + self.emit(foff, MOV_X0_0, "mov x0,#0 [_proc_security_policy]") + self.emit(foff + 4, RET, "ret [_proc_security_policy]") + return True + + def patch_proc_pidinfo(self): + """Bypass pid-0 checks in _proc_info: NOP first 2 CBZ/CBNZ on w-regs. + + Anchor: find _proc_info via its switch-table pattern, then NOP the + first two CBZ/CBNZ instructions that guard against pid 0. + """ + self._log("\n[JB] _proc_pidinfo: NOP pid-0 guard (2 sites)") + + # Try symbol first + foff = self._resolve_symbol("_proc_pidinfo") + if foff >= 0: + func_end = min(foff + 0x80, self.size) + hits = [] + for off in range(foff, func_end, 4): + d = self._disas_at(off) + if d and d[0].mnemonic in ("cbz", "cbnz") and d[0].op_str.startswith("w"): + hits.append(off) + if len(hits) >= 2: + self.emit(hits[0], NOP, "NOP [_proc_pidinfo pid-0 guard A]") + self.emit(hits[1], NOP, "NOP [_proc_pidinfo pid-0 guard B]") + return True + + # Find _proc_info by switch table pattern (same as proc_security_policy) + proc_info_func = -1 + ks, ke = self.kern_text + for off in range(ks, ke - 8, 4): + d = self._disas_at(off, 2) + if len(d) < 2: + continue + i0, i1 = d[0], d[1] + if i0.mnemonic != "sub" or i1.mnemonic != "cmp": + continue + if len(i0.operands) < 3: + continue + if i0.operands[2].type != ARM64_OP_IMM or i0.operands[2].imm != 1: + continue + if len(i1.operands) < 2: + continue + if i1.operands[1].type != ARM64_OP_IMM or i1.operands[1].imm != 0x21: + continue + if i0.operands[0].reg != i1.operands[0].reg: + continue + proc_info_func = 
self.find_function_start(off) + break + + if proc_info_func < 0: + self._log(" [-] _proc_info function not found") + return False + + # Find first CBZ x0 (null proc check) and the CBZ/CBNZ wN after + # the first BL in the prologue region + hits = [] + prologue_end = min(proc_info_func + 0x80, self.size) + for off in range(proc_info_func, prologue_end, 4): + d = self._disas_at(off) + if not d: + continue + i = d[0] + if i.mnemonic in ("cbz", "cbnz"): + # CBZ x0 (null check) or CBZ wN (pid-0 check) + hits.append(off) + + if len(hits) < 2: + self._log(f" [-] expected 2+ early CBZ/CBNZ, found {len(hits)}") + return False + + self.emit(hits[0], NOP, "NOP [_proc_pidinfo pid-0 guard A]") + self.emit(hits[1], NOP, "NOP [_proc_pidinfo pid-0 guard B]") + return True + + def patch_convert_port_to_map(self): + """Skip panic in _convert_port_to_map_with_flavor. + Anchor: 'userspace has control access to a kernel map' panic string. + """ + self._log("\n[JB] _convert_port_to_map_with_flavor: skip panic") + + str_off = self.find_string(b"userspace has control access to a kernel map") + if str_off < 0: + self._log(" [-] panic string not found") + return False + + refs = self.find_string_refs(str_off, *self.kern_text) + if not refs: + self._log(" [-] no code refs") + return False + + for adrp_off, add_off, _ in refs: + bl_panic = self._find_bl_to_panic_in_range(add_off, min(add_off + 0x40, self.size)) + if bl_panic < 0: + continue + resume_off = bl_panic + 4 + err_lo = adrp_off - 0x40 + for back in range(adrp_off - 4, max(adrp_off - 0x200, 0), -4): + target, kind = self._decode_branch_target(back) + if target is not None and err_lo <= target <= bl_panic + 4: + b_bytes = self._encode_b(back, resume_off) + if b_bytes: + self.emit(back, b_bytes, + f"b #0x{resume_off - back:X} " + f"[_convert_port_to_map skip panic]") + return True + + self._log(" [-] branch site not found") + return False + + def patch_vm_fault_enter_prepare(self): + """NOP a PMAP check in _vm_fault_enter_prepare. 
+ Find BL to a rarely-called function followed within 4 instructions + by TBZ/TBNZ on w0. + """ + self._log("\n[JB] _vm_fault_enter_prepare: NOP") + + # Try symbol first + foff = self._resolve_symbol("_vm_fault_enter_prepare") + if foff >= 0: + func_end = self._find_func_end(foff, 0x2000) + result = self._find_bl_tbz_pmap(foff + 0x100, func_end) + if result: + self.emit(result, NOP, "NOP [_vm_fault_enter_prepare]") + return True + + # String anchor: all refs to "vm_fault_enter_prepare" + str_off = self.find_string(b"vm_fault_enter_prepare") + if str_off >= 0: + refs = self.find_string_refs(str_off) + for adrp_off, _, _ in refs: + func_start = self.find_function_start(adrp_off) + if func_start < 0: + continue + func_end = self._find_func_end(func_start, 0x4000) + result = self._find_bl_tbz_pmap(func_start + 0x100, func_end) + if result: + self.emit(result, NOP, "NOP [_vm_fault_enter_prepare]") + return True + + # Broader: scan all kern_text for BL to rarely-called func + TBZ w0 + # in a large function (>0x2000 bytes) + ks, ke = self.kern_text + for off in range(ks, ke - 16, 4): + result = self._find_bl_tbz_pmap(off, min(off + 16, ke)) + if result: + # Verify it's in a large function + func_start = self.find_function_start(result) + if func_start >= 0: + func_end = self._find_func_end(func_start, 0x4000) + if func_end - func_start > 0x2000: + self.emit(result, NOP, "NOP [_vm_fault_enter_prepare]") + return True + + self._log(" [-] patch site not found") + return False + + def _find_bl_tbz_pmap(self, start, end): + """Find BL to a rarely-called function followed within 4 insns by TBZ/TBNZ w0. 
+ Returns the BL offset, or None.""" + for off in range(start, end - 4, 4): + d0 = self._disas_at(off) + if not d0 or d0[0].mnemonic != "bl": + continue + bl_target = d0[0].operands[0].imm + n_callers = len(self.bl_callers.get(bl_target, [])) + if n_callers >= 20: + continue + # Check next 4 instructions for TBZ/TBNZ on w0 + for delta in range(1, 5): + d1 = self._disas_at(off + delta * 4) + if not d1: + break + i1 = d1[0] + if i1.mnemonic in ("tbnz", "tbz") and len(i1.operands) >= 2: + if i1.operands[0].type == ARM64_OP_REG and \ + i1.operands[0].reg == ARM64_REG_W0: + return off + return None + + def patch_vm_map_protect(self): + """Skip a check in _vm_map_protect: branch over guard. + Anchor: 'vm_map_protect(' panic string → function → TBNZ with high bit. + """ + self._log("\n[JB] _vm_map_protect: skip check") + + # Try symbol first + foff = self._resolve_symbol("_vm_map_protect") + if foff < 0: + # String anchor + foff = self._find_func_by_string(b"vm_map_protect(", self.kern_text) + if foff < 0: + foff = self._find_func_by_string(b"vm_map_protect(") + if foff < 0: + self._log(" [-] function not found") + return False + + func_end = self._find_func_end(foff, 0x2000) + + # Find TBNZ with bit >= 24 that branches forward (permission check guard) + for off in range(foff, func_end - 4, 4): + d = self._disas_at(off) + if not d: + continue + i = d[0] + if i.mnemonic != "tbnz": + continue + if len(i.operands) < 3: + continue + bit_op = i.operands[1] + if bit_op.type == ARM64_OP_IMM and bit_op.imm >= 24: + target = i.operands[2].imm if i.operands[2].type == ARM64_OP_IMM else -1 + if target > off: + b_bytes = self._encode_b(off, target) + if b_bytes: + self.emit(off, b_bytes, + f"b #0x{target - off:X} [_vm_map_protect]") + return True + + self._log(" [-] patch site not found") + return False + + def patch_mac_mount(self): + """Bypass MAC mount check: NOP + mov x8,xzr in ___mac_mount. + Anchor: 'mount_common()' string → find nearby ___mac_mount function. 
+ """ + self._log("\n[JB] ___mac_mount: NOP + mov x8,xzr") + + # Try symbol first + foff = self._resolve_symbol("___mac_mount") + if foff < 0: + foff = self._resolve_symbol("__mac_mount") + if foff < 0: + # Find via 'mount_common()' string → function area + # ___mac_mount is typically called from mount_common/kernel_mount + # Search for a function containing a BL+CBNZ w0 pattern + # near the mount_common string reference area + str_off = self.find_string(b"mount_common()") + if str_off >= 0: + refs = self.find_string_refs(str_off, *self.kern_text) + if refs: + mount_common_func = self.find_function_start(refs[0][0]) + if mount_common_func >= 0: + # __mac_mount is called from mount_common + # Find BL targets from mount_common + mc_end = self._find_func_end(mount_common_func, 0x2000) + for off in range(mount_common_func, mc_end, 4): + target = self._is_bl(off) + if target >= 0 and self.kern_text[0] <= target < self.kern_text[1]: + # Check if this target contains BL+CBNZ w0 pattern + # (mac check) followed by a mov to x8 + te = self._find_func_end(target, 0x1000) + for off2 in range(target, te - 8, 4): + d0 = self._disas_at(off2) + if not d0 or d0[0].mnemonic != "bl": + continue + d1 = self._disas_at(off2 + 4) + if d1 and d1[0].mnemonic == "cbnz" and d1[0].op_str.startswith("w0,"): + foff = target + break + if foff >= 0: + break + + if foff < 0: + self._log(" [-] function not found") + return False + + func_end = self._find_func_end(foff, 0x1000) + patched = 0 + + for off in range(foff, func_end - 8, 4): + d0 = self._disas_at(off) + if not d0 or d0[0].mnemonic != "bl": + continue + d1 = self._disas_at(off + 4) + if not d1: + continue + if d1[0].mnemonic == "cbnz" and d1[0].op_str.startswith("w0,"): + self.emit(off, NOP, "NOP [___mac_mount BL check]") + patched += 1 + for off2 in range(off + 8, min(off + 0x60, func_end), 4): + d2 = self._disas_at(off2) + if not d2: + continue + if d2[0].mnemonic == "mov" and "x8" in d2[0].op_str: + if d2[0].op_str != "x8, xzr": + 
self.emit(off2, MOV_X8_XZR, + "mov x8,xzr [___mac_mount]") + patched += 1 + break + break + + if patched == 0: + self._log(" [-] patch sites not found") + return False + return True + + def patch_dounmount(self): + """NOP a MAC check in _dounmount. + Pattern: mov w1,#0; mov x2,#0; bl TARGET (MAC policy check pattern). + """ + self._log("\n[JB] _dounmount: NOP") + + # Try symbol first + foff = self._resolve_symbol("_dounmount") + if foff >= 0: + func_end = self._find_func_end(foff, 0x1000) + result = self._find_mac_check_bl(foff, func_end) + if result: + self.emit(result, NOP, "NOP [_dounmount MAC check]") + return True + + # String anchor: "dounmount:" → find function → search BL targets + # for the actual _dounmount with MAC check + str_off = self.find_string(b"dounmount:") + if str_off >= 0: + refs = self.find_string_refs(str_off) + for adrp_off, _, _ in refs: + caller = self.find_function_start(adrp_off) + if caller < 0: + continue + caller_end = self._find_func_end(caller, 0x2000) + # Check BL targets from this function + for off in range(caller, caller_end, 4): + target = self._is_bl(off) + if target < 0 or not (self.kern_text[0] <= target < self.kern_text[1]): + continue + te = self._find_func_end(target, 0x400) + result = self._find_mac_check_bl(target, te) + if result: + self.emit(result, NOP, "NOP [_dounmount MAC check]") + return True + + # Broader: scan kern_text for short functions with MAC check pattern + ks, ke = self.kern_text + for off in range(ks, ke - 12, 4): + d = self._disas_at(off) + if not d or d[0].mnemonic != "pacibsp": + continue + func_end = self._find_func_end(off, 0x400) + if func_end - off > 0x400: + continue + result = self._find_mac_check_bl(off, func_end) + if result: + # Verify: function should have "unmount" context + # (contain a BL to a function also called from known mount code) + self.emit(result, NOP, "NOP [_dounmount MAC check]") + return True + + self._log(" [-] patch site not found") + return False + + def 
_find_mac_check_bl(self, start, end): + """Find mov w1,#0; mov x2,#0; bl TARGET pattern. Returns BL offset or None.""" + for off in range(start, end - 8, 4): + d = self._disas_at(off, 3) + if len(d) < 3: + continue + i0, i1, i2 = d[0], d[1], d[2] + if i0.mnemonic != "mov" or i1.mnemonic != "mov" or i2.mnemonic != "bl": + continue + # Check: mov w1, #0; mov x2, #0 + if "w1" in i0.op_str and "#0" in i0.op_str: + if "x2" in i1.op_str and "#0" in i1.op_str: + return off + 8 + # Also match: mov x2, #0; mov w1, #0 + if "x2" in i0.op_str and "#0" in i0.op_str: + if "w1" in i1.op_str and "#0" in i1.op_str: + return off + 8 + return None + + def patch_bsd_init_auth(self): + """Bypass rootvp authentication check in _bsd_init. + Pattern: ldr x0, [xN, #0x2b8]; cbz x0, ...; bl AUTH_FUNC + Replace the BL with mov x0, #0. + """ + self._log("\n[JB] _bsd_init: mov x0,#0 (auth bypass)") + + # Try symbol first + foff = self._resolve_symbol("_bsd_init") + if foff >= 0: + func_end = self._find_func_end(foff, 0x2000) + result = self._find_auth_bl(foff, func_end) + if result: + self.emit(result, MOV_X0_0, "mov x0,#0 [_bsd_init auth]") + return True + + # Pattern search: ldr x0, [xN, #0x2b8]; cbz x0; bl + ks, ke = self.kern_text + candidates = [] + for off in range(ks, ke - 8, 4): + d = self._disas_at(off, 3) + if len(d) < 3: + continue + i0, i1, i2 = d[0], d[1], d[2] + if i0.mnemonic != "ldr" or i1.mnemonic != "cbz" or i2.mnemonic != "bl": + continue + if not i0.op_str.startswith("x0,"): + continue + if "#0x2b8" not in i0.op_str: + continue + if not i1.op_str.startswith("x0,"): + continue + candidates.append(off + 8) # the BL offset + + if not candidates: + self._log(" [-] ldr+cbz+bl pattern not found") + return False + + # Filter to kern_text range (exclude kexts) + kern_candidates = [c for c in candidates + if ks <= c < ke] + if not kern_candidates: + kern_candidates = candidates + + # Pick the last one in the kernel (bsd_init is typically late in boot) + bl_off = kern_candidates[-1] + 
self._log(f" [+] auth BL at 0x{bl_off:X} " + f"({len(kern_candidates)} kern candidates)") + self.emit(bl_off, MOV_X0_0, "mov x0,#0 [_bsd_init auth]") + return True + + def _find_auth_bl(self, start, end): + """Find ldr x0,[xN,#0x2b8]; cbz x0; bl pattern. Returns BL offset.""" + for off in range(start, end - 8, 4): + d = self._disas_at(off, 3) + if len(d) < 3: + continue + i0, i1, i2 = d[0], d[1], d[2] + if i0.mnemonic == "ldr" and i1.mnemonic == "cbz" and i2.mnemonic == "bl": + if i0.op_str.startswith("x0,") and "#0x2b8" in i0.op_str: + if i1.op_str.startswith("x0,"): + return off + 8 + return None + + def patch_spawn_validate_persona(self): + """NOP persona validation: LDR + TBNZ sites. + Pattern: ldr wN, [xN, #0x600] (unique struct offset) followed by + cbz wN then tbnz wN, #1 — NOP both the LDR and the TBNZ. + """ + self._log("\n[JB] _spawn_validate_persona: NOP (2 sites)") + + # Try symbol first + foff = self._resolve_symbol("_spawn_validate_persona") + if foff >= 0: + func_end = self._find_func_end(foff, 0x800) + result = self._find_persona_pattern(foff, func_end) + if result: + self.emit(result[0], NOP, "NOP [_spawn_validate_persona LDR]") + self.emit(result[1], NOP, "NOP [_spawn_validate_persona TBNZ]") + return True + + # Pattern search: ldr wN, [xN, #0x600] ... 
tbnz wN, #1 + # This pattern is unique to _spawn_validate_persona + ks, ke = self.kern_text + for off in range(ks, ke - 0x30, 4): + d = self._disas_at(off) + if not d or d[0].mnemonic != "ldr": + continue + if "#0x600" not in d[0].op_str: + continue + if not d[0].op_str.startswith("w"): + continue + # Found LDR wN, [xN, #0x600] — look for TBNZ wN, #1 within 0x30 + for delta in range(4, 0x30, 4): + d2 = self._disas_at(off + delta) + if not d2: + continue + if d2[0].mnemonic == "tbnz" and "#1" in d2[0].op_str: + # Verify it's a w-register + if d2[0].op_str.startswith("w"): + self._log(f" [+] LDR at 0x{off:X}, " + f"TBNZ at 0x{off + delta:X}") + self.emit(off, NOP, + "NOP [_spawn_validate_persona LDR]") + self.emit(off + delta, NOP, + "NOP [_spawn_validate_persona TBNZ]") + return True + + self._log(" [-] pattern not found") + return False + + def _find_persona_pattern(self, start, end): + """Find ldr wN,[xN,#0x600] + tbnz wN,#1 pattern. Returns (ldr_off, tbnz_off).""" + for off in range(start, end - 0x30, 4): + d = self._disas_at(off) + if not d or d[0].mnemonic != "ldr": + continue + if "#0x600" not in d[0].op_str or not d[0].op_str.startswith("w"): + continue + for delta in range(4, 0x30, 4): + d2 = self._disas_at(off + delta) + if d2 and d2[0].mnemonic == "tbnz" and "#1" in d2[0].op_str: + if d2[0].op_str.startswith("w"): + return (off, off + delta) + return None + + def patch_task_for_pid(self): + """NOP proc_ro security policy copy in _task_for_pid. + + Pattern: _task_for_pid is a Mach trap handler (0 BL callers) with: + - 2x ldadda (proc reference counting) + - 2x ldr wN,[xN,#0x490]; str wN,[xN,#0xc] (proc_ro security copy) + - movk xN, #0xc8a2, lsl #48 (PAC discriminator) + - BL to a non-panic function with >500 callers (proc_find etc.) + NOP the second ldr wN,[xN,#0x490] (the target process security copy). 
+ """ + self._log("\n[JB] _task_for_pid: NOP") + + # Try symbol first + foff = self._resolve_symbol("_task_for_pid") + if foff >= 0: + func_end = self._find_func_end(foff, 0x800) + patch_off = self._find_second_ldr490(foff, func_end) + if patch_off: + self.emit(patch_off, NOP, + "NOP [_task_for_pid proc_ro copy]") + return True + + # Pattern search: scan kern_text for functions matching the profile + ks, ke = self.kern_text + off = ks + while off < ke - 4: + d = self._disas_at(off) + if not d or d[0].mnemonic != "pacibsp": + off += 4 + continue + func_start = off + func_end = self._find_func_end(func_start, 0x1000) + + # Quick filter: skip functions with BL callers (Mach trap = indirect) + if self.bl_callers.get(func_start, []): + off = func_end + continue + + ldadda_count = 0 + ldr490_count = 0 + ldr490_offs = [] + has_movk_c8a2 = False + has_high_caller_bl = False + + for o in range(func_start, func_end, 4): + d = self._disas_at(o) + if not d: + continue + i = d[0] + if i.mnemonic == "ldadda": + ldadda_count += 1 + elif i.mnemonic == "ldr" and "#0x490" in i.op_str \ + and i.op_str.startswith("w"): + d2 = self._disas_at(o + 4) + if d2 and d2[0].mnemonic == "str" \ + and "#0xc" in d2[0].op_str \ + and d2[0].op_str.startswith("w"): + ldr490_count += 1 + ldr490_offs.append(o) + elif i.mnemonic == "movk" and "#0xc8a2" in i.op_str: + has_movk_c8a2 = True + elif i.mnemonic == "bl": + target = i.operands[0].imm + n_callers = len(self.bl_callers.get(target, [])) + # >500 but <8000 excludes _panic (typically 8000+) + if 500 < n_callers < 8000: + has_high_caller_bl = True + + if ldadda_count >= 2 and ldr490_count >= 2 \ + and has_movk_c8a2 and has_high_caller_bl: + patch_off = ldr490_offs[1] # NOP the second occurrence + self._log(f" [+] _task_for_pid at 0x{func_start:X}, " + f"patch at 0x{patch_off:X}") + self.emit(patch_off, NOP, + "NOP [_task_for_pid proc_ro copy]") + return True + + off = func_end + + self._log(" [-] function not found") + return False + + def 
_find_second_ldr490(self, start, end): + """Find the second ldr wN,[xN,#0x490]+str wN,[xN,#0xc] in range.""" + count = 0 + for off in range(start, end - 4, 4): + d = self._disas_at(off) + if not d or d[0].mnemonic != "ldr": + continue + if "#0x490" not in d[0].op_str or not d[0].op_str.startswith("w"): + continue + d2 = self._disas_at(off + 4) + if d2 and d2[0].mnemonic == "str" \ + and "#0xc" in d2[0].op_str \ + and d2[0].op_str.startswith("w"): + count += 1 + if count == 2: + return off + return None + + def patch_load_dylinker(self): + """Bypass PAC auth check in Mach-O chained fixup rebase code. + + The kernel's chained fixup pointer rebase function contains PAC + authentication triplets: TST xN, #high; B.EQ skip; MOVK xN, #0xc8a2. + This function has 3+ such triplets and 0 BL callers (indirect call). + + Find the function and replace the LAST TST with an unconditional + branch to the B.EQ target (always skip PAC re-signing). + """ + self._log("\n[JB] _load_dylinker: PAC rebase bypass") + + # Try symbol first + foff = self._resolve_symbol("_load_dylinker") + if foff >= 0: + func_end = self._find_func_end(foff, 0x2000) + result = self._find_tst_pac_triplet(foff, func_end) + if result: + tst_off, beq_target = result + b_bytes = self._encode_b(tst_off, beq_target) + if b_bytes: + self.emit(tst_off, b_bytes, + f"b #0x{beq_target - tst_off:X} [_load_dylinker]") + return True + + # Pattern search: find functions with 3+ TST+B.EQ+MOVK(#0xc8a2) + # triplets and 0 BL callers. This is the chained fixup rebase code. 
+ ks, ke = self.kern_text + off = ks + while off < ke - 4: + d = self._disas_at(off) + if not d or d[0].mnemonic != "pacibsp": + off += 4 + continue + func_start = off + func_end = self._find_func_end(func_start, 0x2000) + + # Must have 0 BL callers (indirect call via function pointer) + if self.bl_callers.get(func_start, []): + off = func_end + continue + + # Count TST+B.EQ+MOVK(#0xc8a2) triplets + triplets = [] + for o in range(func_start, func_end - 8, 4): + d3 = self._disas_at(o, 3) + if len(d3) < 3: + continue + i0, i1, i2 = d3[0], d3[1], d3[2] + if i0.mnemonic == "tst" \ + and "40000000000000" in i0.op_str \ + and i1.mnemonic == "b.eq" \ + and i2.mnemonic == "movk" \ + and "#0xc8a2" in i2.op_str: + beq_target = i1.operands[-1].imm + triplets.append((o, beq_target)) + + if len(triplets) >= 3: + # Patch the last triplet (deepest in the function) + tst_off, beq_target = triplets[-1] + b_bytes = self._encode_b(tst_off, beq_target) + if b_bytes: + self._log(f" [+] rebase func at 0x{func_start:X}, " + f"patch TST at 0x{tst_off:X}") + self.emit(tst_off, b_bytes, + f"b #0x{beq_target - tst_off:X} " + f"[_load_dylinker PAC bypass]") + return True + + off = func_end + + self._log(" [-] PAC rebase function not found") + return False + + def _find_tst_pac_triplet(self, start, end): + """Find last TST+B.EQ+MOVK(#0xc8a2) triplet. Returns (tst_off, beq_target).""" + last = None + for off in range(start, end - 8, 4): + d = self._disas_at(off, 3) + if len(d) < 3: + continue + i0, i1, i2 = d[0], d[1], d[2] + if i0.mnemonic == "tst" \ + and "40000000000000" in i0.op_str \ + and i1.mnemonic == "b.eq" \ + and i2.mnemonic == "movk" \ + and "#0xc8a2" in i2.op_str: + last = (off, i1.operands[-1].imm) + return last + + def patch_shared_region_map(self): + """Force shared region check: cmp x0,x0. + Anchor: '/private/preboot/Cryptexes' string → function → CMP+B.NE. 
+ """ + self._log("\n[JB] _shared_region_map_and_slide_setup: cmp x0,x0") + + # Try symbol first + foff = self._resolve_symbol("_shared_region_map_and_slide_setup") + if foff < 0: + foff = self._find_func_by_string( + b"/private/preboot/Cryptexes", self.kern_text) + if foff < 0: + foff = self._find_func_by_string(b"/private/preboot/Cryptexes") + if foff < 0: + self._log(" [-] function not found") + return False + + func_end = self._find_func_end(foff, 0x2000) + + for off in range(foff, func_end - 4, 4): + d = self._disas_at(off, 2) + if len(d) < 2: + continue + i0, i1 = d[0], d[1] + if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne": + continue + ops = i0.operands + if len(ops) < 2: + continue + if ops[0].type == ARM64_OP_REG and ops[1].type == ARM64_OP_REG: + self.emit(off, CMP_X0_X0, + "cmp x0,x0 [_shared_region_map_and_slide_setup]") + return True + + self._log(" [-] CMP+B.NE pattern not found") + return False + + def patch_nvram_verify_permission(self): + """NOP verification in IONVRAMController's verifyPermission. + Anchor: 'krn.' string (NVRAM key prefix) → xref → function → TBZ/TBNZ. + """ + self._log("\n[JB] verifyPermission (NVRAM): NOP") + + foff = -1 + # Try symbol first + sym_off = self._resolve_symbol( + "__ZL16verifyPermission16IONVRAMOperationPKhPKcb") + if sym_off < 0: + for sym, off in self.symbols.items(): + if "verifyPermission" in sym and "NVRAM" in sym: + sym_off = off + break + if sym_off >= 0: + foff = sym_off + else: + # String anchor: "krn." 
is referenced early in verifyPermission + str_off = self.find_string(b"krn.") + if str_off >= 0: + refs = self.find_string_refs(str_off) + if refs: + foff = self.find_function_start(refs[0][0]) + + if foff < 0: + # Fallback: try NVRAM entitlement string + str_off = self.find_string( + b"com.apple.private.iokit.nvram-write-access") + if str_off >= 0: + refs = self.find_string_refs(str_off) + if refs: + foff = self.find_function_start(refs[0][0]) + + if foff < 0: + self._log(" [-] function not found") + return False + + func_end = self._find_func_end(foff, 0x400) + + for off in range(foff, min(foff + 0x40, func_end), 4): + d = self._disas_at(off) + if not d: + continue + if d[0].mnemonic in ("tbnz", "tbz"): + self.emit(off, NOP, "NOP [verifyPermission NVRAM]") + return True + + self._log(" [-] TBZ/TBNZ not found in function") + return False + + def patch_io_secure_bsd_root(self): + """Skip security check in _IOSecureBSDRoot. + Anchor: 'SecureRootName' string → function → CBZ/CBNZ → unconditional B. + """ + self._log("\n[JB] _IOSecureBSDRoot: skip check") + + # Try symbol first + foff = self._resolve_symbol("_IOSecureBSDRoot") + if foff < 0: + foff = self._find_func_by_string(b"SecureRootName") + if foff < 0: + self._log(" [-] function not found") + return False + + func_end = self._find_func_end(foff, 0x400) + + for off in range(foff, func_end - 4, 4): + d = self._disas_at(off) + if not d: + continue + i = d[0] + if i.mnemonic in ("cbnz", "cbz", "tbnz", "tbz"): + target = None + for op in reversed(i.operands): + if op.type == ARM64_OP_IMM: + target = op.imm + break + if target and target > off: + b_bytes = self._encode_b(off, target) + if b_bytes: + self.emit(off, b_bytes, + f"b #0x{target - off:X} [_IOSecureBSDRoot]") + return True + + self._log(" [-] conditional branch not found") + return False + + def patch_thid_should_crash(self): + """Zero out _thid_should_crash global variable. + Anchor: 'thid_should_crash' string → find data xrefs → zero the variable. 
+        """
+        self._log("\n[JB] _thid_should_crash: zero out")
+
+        # Try symbol first
+        foff = self._resolve_symbol("_thid_should_crash")
+        if foff >= 0:
+            self.emit(foff, b'\x00\x00\x00\x00',
+                      "zero [_thid_should_crash]")
+            return True
+
+        # Search for the string "thid_should_crash" in __PRELINK_INFO or data
+        str_off = self.find_string(b"thid_should_crash")
+        if str_off < 0:
+            self._log(" [-] string not found")
+            return False
+
+        # Find xrefs to this string in code — it's used as a sysctl name.
+        # The sysctl registration points to the data variable.
+        refs = self.find_string_refs(str_off)
+        if not refs:
+            # The string may be in PRELINK_INFO plist, not directly referenced.
+            # Search for the variable by looking for a 4-byte value of 0x00000001
+            # near the code that uses the string.
+            # Alternative: search all DATA segments for the sysctl structure
+            # that references this string.
+            self._log(" [-] no code refs to string, trying pattern search")
+
+            # Search for a sysctl_oid structure referencing this string:
+            # its oid_name field holds a pointer to str_va.
+            str_va = self.base_va + str_off
+            str_bytes = struct.pack("<Q", str_va)
+            if self.raw.find(str_bytes) < 0:
+                # Pointer is likely chained-fixup encoded; decoding the oid
+                # entry to reach oid_arg1 is still WIP
+                # (see researchs/kernel_jb_remaining_patches.md).
+                self._log(" [-] sysctl_oid reference not found")
+                return False
+            self._log(" [-] sysctl_oid walk not implemented yet")
+            return False
+
+        func_start = self.find_function_start(refs[0][0])
+        if func_start >= 0:
+            # Search for ADRP+ADD or ADRP+LDR that loads the variable address
+            func_end = self._find_func_end(func_start, 0x200)
+            for off in range(func_start, func_end, 4):
+                d = self._disas_at(off, 2)
+                if len(d) < 2:
+                    continue
+                i0, i1 = d[0], d[1]
+                if i0.mnemonic == "adrp" and i1.mnemonic in ("add", "ldr"):
+                    # Check target
+                    pass  # complex, skip for now
+
+        self._log(" [-] variable not found")
+        return False
+
+    # ══════════════════════════════════════════════════════════════
+    # Group C: Complex shellcode patches
+    # ══════════════════════════════════════════════════════════════
+
+    def patch_cred_label_update_execve(self):
+        """Redirect _cred_label_update_execve to shellcode that sets cs_flags.
+ + Shellcode: LDR x0,[sp,#8]; LDR w1,[x0]; ORR w1,w1,#0x4000000; + ORR w1,w1,#0xF; AND w1,w1,#0xFFFFC0FF; STR w1,[x0]; + MOV x0,xzr; RETAB + """ + self._log("\n[JB] _cred_label_update_execve: shellcode (cs_flags)") + + # Find the function via AMFI string reference + func_off = -1 + + # Try symbol + for sym, off in self.symbols.items(): + if "cred_label_update_execve" in sym and "hook" not in sym: + func_off = off + break + + if func_off < 0: + # String anchor: the function is near execve-related AMFI code. + # Look for the function that contains the AMFI string ref and + # then find _cred_label_update_execve through BL targets. + str_off = self.find_string(b"AMFI: code signature validation failed") + if str_off >= 0: + refs = self.find_string_refs(str_off, *self.amfi_text) + if refs: + caller = self.find_function_start(refs[0][0]) + if caller >= 0: + # Walk through the AMFI text section to find functions + # that have a RETAB at the end and take many arguments + # The _cred_label_update_execve has many args and a + # distinctive prologue. + pass + + if func_off < 0: + # Alternative: search AMFI text for functions that match the pattern + # of _cred_label_update_execve (long prologue, many saved regs, RETAB) + # Look for the specific pattern: mov xN, x2 in early prologue + # (saves the vnode arg) followed by stp xzr,xzr pattern + s, e = self.amfi_text + # Search for PACIBSP functions in AMFI that are BL targets from + # the execve kill path area + str_off = self.find_string(b"AMFI: hook..execve() killing") + if str_off < 0: + str_off = self.find_string(b"execve() killing") + if str_off >= 0: + refs = self.find_string_refs(str_off, s, e) + if not refs: + refs = self.find_string_refs(str_off) + if refs: + kill_func = self.find_function_start(refs[0][0]) + if kill_func >= 0: + kill_end = self._find_func_end(kill_func, 0x800) + # The kill function ends with RETAB. The next function + # after it should be close to _cred_label_update_execve. 
+ # Actually, _cred_label_update_execve is typically the + # function BEFORE the kill function. + # Search backward from kill_func for a RETAB/RET + for back in range(kill_func - 4, max(kill_func - 0x400, s), -4): + val = _rd32(self.raw, back) + if val in (0xD65F0FFF, 0xD65F0BFF, 0xD65F03C0): + # Found end of previous function. + # The function we want starts at the next PACIBSP before back. + for scan in range(back - 4, max(back - 0x400, s), -4): + d = self._disas_at(scan) + if d and d[0].mnemonic == "pacibsp": + func_off = scan + break + break + + if func_off < 0: + self._log(" [-] function not found, skipping shellcode patch") + return False + + # Find code cave + cave = self._find_code_cave(32) # 8 instructions = 32 bytes + if cave < 0: + self._log(" [-] no code cave found for shellcode") + return False + + # Assemble shellcode + shellcode = ( + asm("ldr x0, [sp, #8]") + # load cred pointer + asm("ldr w1, [x0]") + # load cs_flags + asm("orr w1, w1, #0x4000000") + # set CS_PLATFORM_BINARY + asm("orr w1, w1, #0xF") + # set CS_VALID|CS_ADHOC|CS_GET_TASK_ALLOW|CS_INSTALLER + bytes([0x21, 0x64, 0x12, 0x12]) + # AND w1, w1, #0xFFFFC0FF (clear CS_HARD|CS_KILL etc) + asm("str w1, [x0]") + # store back + asm("mov x0, xzr") + # return 0 + bytes([0xFF, 0x0F, 0x5F, 0xD6]) # RETAB + ) + + # Find the return site in the function (last RETAB) + func_end = self._find_func_end(func_off, 0x200) + ret_off = -1 + for off in range(func_end - 4, func_off, -4): + val = _rd32(self.raw, off) + if val in (0xD65F0FFF, 0xD65F0BFF, 0xD65F03C0): + ret_off = off + break + if ret_off < 0: + self._log(" [-] function return not found") + return False + + # Write shellcode to cave + for i in range(0, len(shellcode), 4): + self.emit(cave + i, shellcode[i:i+4], + f"shellcode+{i} [_cred_label_update_execve]") + + # Branch from function return to cave + b_bytes = self._encode_b(ret_off, cave) + if b_bytes: + self.emit(ret_off, b_bytes, + f"b cave [_cred_label_update_execve -> 0x{cave:X}]") + else: + 
self._log(" [-] branch to cave out of range") + return False + + return True + + def patch_syscallmask_apply_to_proc(self): + """Redirect _syscallmask_apply_to_proc to custom filter shellcode. + Anchor: 'syscallmask.c' string → find function → redirect to cave. + """ + self._log("\n[JB] _syscallmask_apply_to_proc: shellcode (filter mask)") + + # Resolve required functions + func_off = self._resolve_symbol("_syscallmask_apply_to_proc") + zalloc_off = self._resolve_symbol("_zalloc_ro_mut") + filter_off = self._resolve_symbol("_proc_set_syscall_filter_mask") + + if func_off < 0: + # String anchor: "syscallmask.c" + str_off = self.find_string(b"syscallmask.c") + if str_off >= 0: + refs = self.find_string_refs(str_off, *self.kern_text) + if not refs: + refs = self.find_string_refs(str_off) + if refs: + # The function containing this string ref is in the + # syscallmask module. Find _syscallmask_apply_to_proc + # by looking for a function nearby that takes 4 args. + base_func = self.find_function_start(refs[0][0]) + if base_func >= 0: + # Search nearby functions for the one that has a + # BL to _proc_set_syscall_filter_mask-like function. + # Actually, the function with "syscallmask.c" IS likely + # _syscallmask_apply_to_proc or very close to it. + func_off = base_func + + if func_off < 0: + self._log(" [-] _syscallmask_apply_to_proc not found") + return False + + # Find _zalloc_ro_mut: search for the BL target from within the function + # that's called with specific arguments. Use BL callers analysis. 
+ if zalloc_off < 0: + func_end = self._find_func_end(func_off, 0x200) + for off in range(func_off, func_end, 4): + target = self._is_bl(off) + if target >= 0: + # _zalloc_ro_mut is typically one of the BL targets + # It's the one with many callers (>50) + # bl_callers is keyed by file offset (same as _is_bl returns) + n = len(self.bl_callers.get(target, [])) + if n > 50: + zalloc_off = target + break + + # Find _proc_set_syscall_filter_mask: search for a BL or B target + if filter_off < 0: + func_end = self._find_func_end(func_off, 0x200) + # It's typically the last BL/B target in the function (tail call) + for off in range(func_end - 4, func_off, -4): + target = self._is_bl(off) + if target >= 0: + filter_off = target + break + # Also check for unconditional B + val = _rd32(self.raw, off) + if (val & 0xFC000000) == 0x14000000: + imm26 = val & 0x3FFFFFF + if imm26 & (1 << 25): + imm26 -= (1 << 26) + target = off + imm26 * 4 + if self.kern_text[0] <= target < self.kern_text[1]: + filter_off = target + break + + if zalloc_off < 0 or filter_off < 0: + self._log(f" [-] required functions not found " + f"(zalloc={'found' if zalloc_off >= 0 else 'missing'}, " + f"filter={'found' if filter_off >= 0 else 'missing'})") + return False + + # Find code cave (need ~160 bytes) + cave = self._find_code_cave(160) + if cave < 0: + self._log(" [-] no code cave found") + return False + + cave_base = cave + + # Encode BL to _zalloc_ro_mut (at cave + 28*4) + zalloc_bl_off = cave_base + 28 * 4 + zalloc_bl = self._encode_bl(zalloc_bl_off, zalloc_off) + if not zalloc_bl: + self._log(" [-] BL to _zalloc_ro_mut out of range") + return False + + # Encode B to _proc_set_syscall_filter_mask (at end of shellcode) + filter_b_off = cave_base + 37 * 4 + filter_b = self._encode_b(filter_b_off, filter_off) + if not filter_b: + self._log(" [-] B to _proc_set_syscall_filter_mask out of range") + return False + + # Build shellcode + shellcode_parts = [] + for _ in range(10): + 
+            shellcode_parts.append(b'\xff\xff\xff\xff')
+        # idx 0-9: 0xffffffff filler words; the adr at idx 23 points back here
+
+        shellcode_parts.append(asm("cbz x2, #0x6c"))              # idx 10
+        shellcode_parts.append(asm("sub sp, sp, #0x40"))          # idx 11
+        shellcode_parts.append(asm("stp x19, x20, [sp, #0x10]"))  # idx 12
+        shellcode_parts.append(asm("stp x21, x22, [sp, #0x20]"))  # idx 13
+        shellcode_parts.append(asm("stp x29, x30, [sp, #0x30]"))  # idx 14
+        shellcode_parts.append(asm("mov x19, x0"))                # idx 15
+        shellcode_parts.append(asm("mov x20, x1"))                # idx 16
+        shellcode_parts.append(asm("mov x21, x2"))                # idx 17
+        shellcode_parts.append(asm("mov x22, x3"))                # idx 18
+        shellcode_parts.append(asm("mov x8, #8"))                 # idx 19
+        shellcode_parts.append(asm("mov x0, x17"))                # idx 20
+        shellcode_parts.append(asm("mov x1, x21"))                # idx 21
+        shellcode_parts.append(asm("mov x2, #0"))                 # idx 22
+        # adr x3, #-0x5C — encode manually (points back at the filler
+        # words at cave_base + 0)
+        adr_delta = -(23 * 4)
+        immhi = (adr_delta >> 2) & 0x7FFFF
+        immlo = adr_delta & 0x3
+        adr_insn = 0x10000003 | (immlo << 29) | (immhi << 5)
+        shellcode_parts.append(struct.pack("<I", adr_insn))       # idx 23
+        # idx 24-37 (reconstructed): pass the size, overwrite the RO mask
+        # via _zalloc_ro_mut, then restore the original arguments and
+        # tail-call the real _proc_set_syscall_filter_mask.
+        shellcode_parts.append(asm("mov x4, x8"))                 # idx 24
+        shellcode_parts.append(NOP)                               # idx 25
+        shellcode_parts.append(NOP)                               # idx 26
+        shellcode_parts.append(NOP)                               # idx 27
+        shellcode_parts.append(zalloc_bl)                         # idx 28
+        shellcode_parts.append(asm("mov x0, x19"))                # idx 29
+        shellcode_parts.append(asm("mov x1, x20"))                # idx 30
+        shellcode_parts.append(asm("mov x2, x21"))                # idx 31
+        shellcode_parts.append(asm("mov x3, x22"))                # idx 32
+        shellcode_parts.append(asm("ldp x29, x30, [sp, #0x30]"))  # idx 33
+        shellcode_parts.append(asm("ldp x21, x22, [sp, #0x20]"))  # idx 34
+        shellcode_parts.append(asm("ldp x19, x20, [sp, #0x10]"))  # idx 35
+        shellcode_parts.append(asm("add sp, sp, #0x40"))          # idx 36
+        shellcode_parts.append(filter_b)                          # idx 37
+
+        for i, part in enumerate(shellcode_parts):
+            self.emit(cave_base + i * 4, part,
+                      f"shellcode+{i*4} [_syscallmask_apply_to_proc]")
+
+        # Redirect the function to the shellcode entry at idx 10 (the cbz
+        # there skips straight to the tail call when no mask is supplied)
+        b_bytes = self._encode_b(func_off, cave_base + 40)
+        if b_bytes:
+            self.emit(func_off, b_bytes,
+                      f"b cave [_syscallmask_apply_to_proc -> 0x{cave_base + 40:X}]")
+            return True
+
+        self._log(" [-] injection point not found")
+        return False
+
+    def patch_hook_cred_label_update_execve(self):
+        """Redirect _hook_cred_label_update_execve ops table entry to shellcode.
+
+        Patches the sandbox MAC ops table entry for cred_label_update_execve
+        to point to custom shellcode that performs vnode_getattr ownership
+        propagation.
+        """
+        self._log("\n[JB] _hook_cred_label_update_execve: ops table + shellcode")
+
+        # Find vfs_context_current and vnode_getattr
+        vfs_ctx_off = self._resolve_symbol("_vfs_context_current")
+        vnode_getattr_off = self._resolve_symbol("_vnode_getattr")
+
+        # Find by string anchor if symbols unavailable
+        if vfs_ctx_off < 0:
+            # vfs_context_current is a short function. Find it by looking
+            # for a function that returns the current thread's VFS context.
+            # It's typically called from many places in the VFS layer.
+            # Search for BL targets with very high caller count in kern_text.
+ # Alternative: find via "vfs_context_current" string in PRELINK_INFO + str_off = self.find_string(b"vfs_context_current") + if str_off >= 0: + # This might be in __PRELINK_INFO plist, not directly usable + # Try to find the function by its pattern: it's very short + # and extremely widely called. + pass + + # Pattern approach: find a 2-3 instruction function that's called + # from many places. vfs_context_current typically does: + # ldr x0, [x0, #offset] ; get current thread's context + # ret + # But there are many such functions. Use caller count. + # Find the most-called function in the BSD kernel area. + # Actually, we need a more targeted approach. + # Try string "Sandbox" to find sandbox module functions first. + pass + + if vnode_getattr_off < 0: + # Search by string + str_off = self.find_string(b"\x00vnode_getattr\x00") + if str_off < 0: + str_off = self.find_string(b"vnode_getattr") + # The string might be in the symbol stubs, not directly useful + pass + + if vfs_ctx_off < 0 or vnode_getattr_off < 0: + self._log(" [-] required functions not found " + f"(vfs_context_current={'found' if vfs_ctx_off >= 0 else 'missing'}, " + f"vnode_getattr={'found' if vnode_getattr_off >= 0 else 'missing'})") + return False + + # Find the original hook function via sandbox ops table + ops_table = self._find_sandbox_ops_table_via_conf() + if ops_table is None: + self._log(" [-] sandbox ops table not found") + return False + + HOOK_INDEX = 16 + orig_hook = self._read_ops_entry(ops_table, HOOK_INDEX) + if orig_hook is None or orig_hook <= 0: + self._log(f" [-] hook entry not found at index {HOOK_INDEX}") + return False + + # Find code cave (~180 bytes) + cave = self._find_code_cave(180) + if cave < 0: + self._log(" [-] no code cave found") + return False + + # Encode BL targets + vfs_bl_off = cave + 9 * 4 + vfs_bl = self._encode_bl(vfs_bl_off, vfs_ctx_off) + vnode_bl_off = cave + 17 * 4 + vnode_bl = self._encode_bl(vnode_bl_off, vnode_getattr_off) + + if not vfs_bl or not 
vnode_bl: + self._log(" [-] BL to helpers out of range") + return False + + b_back_off = cave + 44 * 4 + b_back = self._encode_b(b_back_off, orig_hook) + if not b_back: + self._log(" [-] B to original hook out of range") + return False + + parts = [] + parts.append(NOP) # 0 + parts.append(asm("cbz x3, #0xa8")) # 1 + parts.append(asm("sub sp, sp, #0x400")) # 2 + parts.append(asm("stp x29, x30, [sp]")) # 3 + parts.append(asm("stp x0, x1, [sp, #16]")) # 4 + parts.append(asm("stp x2, x3, [sp, #32]")) # 5 + parts.append(asm("stp x4, x5, [sp, #48]")) # 6 + parts.append(asm("stp x6, x7, [sp, #64]")) # 7 + parts.append(NOP) # 8 + parts.append(vfs_bl) # 9 + parts.append(asm("mov x2, x0")) # 10 + parts.append(asm("ldr x0, [sp, #0x28]")) # 11 + parts.append(asm("add x1, sp, #0x80")) # 12 + parts.append(asm("mov w8, #0x380")) # 13 + parts.append(asm("stp xzr, x8, [x1]")) # 14 + parts.append(asm("stp xzr, xzr, [x1, #0x10]")) # 15 + parts.append(NOP) # 16 + parts.append(vnode_bl) # 17 + parts.append(asm("cbnz x0, #0x50")) # 18 + parts.append(asm("mov w2, #0")) # 19 + parts.append(asm("ldr w8, [sp, #0xCC]")) # 20 + parts.append(bytes([0xa8, 0x00, 0x58, 0x36])) # 21: tbz w8, #11 + parts.append(asm("ldr w8, [sp, #0xC4]")) # 22 + parts.append(asm("ldr x0, [sp, #0x18]")) # 23 + parts.append(asm("str w8, [x0, #0x18]")) # 24 + parts.append(asm("mov w2, #1")) # 25 + parts.append(asm("ldr w8, [sp, #0xCC]")) # 26 + parts.append(bytes([0xa8, 0x00, 0x50, 0x36])) # 27: tbz w8, #10 + parts.append(asm("mov w2, #1")) # 28 + parts.append(asm("ldr w8, [sp, #0xC8]")) # 29 + parts.append(asm("ldr x0, [sp, #0x18]")) # 30 + parts.append(asm("str w8, [x0, #0x28]")) # 31 + parts.append(asm("cbz w2, #0x1c")) # 32 + parts.append(asm("ldr x0, [sp, #0x20]")) # 33 + parts.append(asm("ldr w8, [x0, #0x454]")) # 34 + parts.append(asm("orr w8, w8, #0x100")) # 35 + parts.append(asm("str w8, [x0, #0x454]")) # 36 + parts.append(asm("ldp x0, x1, [sp, #16]")) # 37 + parts.append(asm("ldp x2, x3, [sp, #32]")) # 38 + 
+        parts.append(asm("ldp x4, x5, [sp, #48]"))    # 39
+        parts.append(asm("ldp x6, x7, [sp, #64]"))    # 40
+        parts.append(asm("ldp x29, x30, [sp]"))       # 41
+        parts.append(asm("add sp, sp, #0x400"))       # 42
+        parts.append(NOP)                             # 43
+        parts.append(b_back)                          # 44
+
+        for i, part in enumerate(parts):
+            self.emit(cave + i * 4, part,
+                      f"shellcode+{i*4} [_hook_cred_label_update_execve]")
+
+        # Rewrite ops table entry to point to cave
+        entry_off = ops_table + HOOK_INDEX * 8
+        cave_va = self.base_va + cave
+        self.emit(entry_off, struct.pack("<Q", cave_va),
+                  "ops table [_hook_cred_label_update_execve -> cave]")
+
+        return True
+
+    def patch_kcall10(self):
+        """Replace SYS_kas_info (syscall 439) with kcall10 shellcode.
+
+        Anchor: find _nosys function by pattern, then search DATA segments
+        for the sysent table (first entry points to _nosys).
+        """
+        self._log("\n[JB] kcall10: syscall 439 replacement")
+
+        # Find _nosys
+        nosys_off = self._resolve_symbol("_nosys")
+        if nosys_off < 0:
+            nosys_off = self._find_nosys()
+        if nosys_off < 0:
+            self._log(" [-] _nosys not found")
+            return False
+
+        self._log(f" [+] _nosys at 0x{nosys_off:X}")
+
+        # Find _munge_wwwwwwww
+        munge_off = self._resolve_symbol("_munge_wwwwwwww")
+        if munge_off < 0:
+            for sym, off in self.symbols.items():
+                if "munge_wwwwwwww" in sym:
+                    munge_off = off
+                    break
+
+        # Search for sysent table in DATA segments
+        sysent_off = -1
+        for seg_name, vmaddr, fileoff, filesize, _ in self.all_segments:
+            if "DATA" not in seg_name:
+                continue
+            for off in range(fileoff, fileoff + filesize - 24, 8):
+                val = _rd64(self.raw, off)
+                decoded = self._decode_chained_ptr(val)
+                if decoded == nosys_off:
+                    # Verify: sysent[1] should also point to valid code
+                    val2 = _rd64(self.raw, off + 24)
+                    decoded2 = self._decode_chained_ptr(val2)
+                    if decoded2 > 0 and any(
+                            s <= decoded2 < e for s, e in self.code_ranges):
+                        sysent_off = off
+                        break
+            if sysent_off >= 0:
+                break
+
+        if sysent_off < 0:
+            self._log(" [-] sysent table not found")
+            return False
+
+        self._log(f" [+] sysent table at file offset 0x{sysent_off:X}")
+
+        # Entry 439 (SYS_kas_info)
+        entry_439 = sysent_off + 439 * 24
+
+        # Find code cave for kcall10 shellcode (~128 bytes = 32 instructions)
+        cave = self._find_code_cave(128)
+        if cave < 0:
+            self._log(" [-] no code cave found")
+            return False
+
+        # Build kcall10 shellcode
+        parts = [
+            asm("ldr x10, [sp, #0x40]"),         # 0
+            asm("ldp x0, x1, [x10, #0]"),        # 1
+            asm("ldp x2, x3, [x10, #0x10]"),     # 2
+            asm("ldp x4, x5, [x10, #0x20]"),     # 3
+            asm("ldp x6, x7, [x10, #0x30]"),     # 4
+            asm("ldp x8, x9, [x10, #0x40]"),     # 5
+            asm("ldr x10, [x10, #0x50]"),        # 6
+            asm("mov x16, x0"),                  # 7
+            asm("mov x0, x1"),                   # 8
+            asm("mov x1, x2"),                   # 9
+            asm("mov x2, x3"),                   # 10
+            asm("mov x3, x4"),                   # 11
+            asm("mov x4, x5"),                   # 12
+            asm("mov x5, x6"),                   # 13
+            asm("mov x6, x7"),                   # 14
+            asm("mov x7, x8"),                   # 15
+            asm("mov x8, x9"),                   # 16
+            asm("mov x9, x10"),                  # 17
+            asm("stp x29, x30, [sp, #-0x10]!"),  # 18
+            bytes([0x00, 0x02, 0x3F, 0xD6]),     # 19: BLR x16
+            asm("ldp x29, x30, [sp], #0x10"),    # 20
+            asm("ldr x11, [sp, #0x40]"),         # 21
+            NOP,                                 # 22
+            asm("stp x0, x1, [x11, #0]"),        # 23
+            asm("stp x2, x3, [x11, #0x10]"),     # 24
+            asm("stp x4, x5, [x11, #0x20]"),     # 25
+            asm("stp x6, x7, [x11, #0x30]"),     # 26
+            asm("stp x8, x9, [x11, #0x40]"),     # 27
+            asm("str x10, [x11, #0x50]"),        # 28
+            asm("mov x0, #0"),                   # 29
+            asm("ret"),                          # 30
+            NOP,                                 # 31
+        ]
+
+        for i, part in enumerate(parts):
+            self.emit(cave + i * 4, part,
+                      f"shellcode+{i*4} [kcall10]")
+
+        # Patch sysent[439]: sy_call -> shellcode cave
+        cave_va = self.base_va + cave
+        self.emit(entry_439, struct.pack("<Q", cave_va),
+                  "sysent[439].sy_call -> kcall10")
+        if munge_off >= 0:
+            munge_va = self.base_va + munge_off
+            self.emit(entry_439 + 8, struct.pack("<Q", munge_va),
+                      "sysent[439].sy_arg_munge32 -> _munge_wwwwwwww")
+
+        return True
diff --git a/scripts/patchers/txm_jb.py b/scripts/patchers/txm_jb.py
new file mode 100644
--- /dev/null
+++ b/scripts/patchers/txm_jb.py
+    def _find_debugger_gate_func_start(self):
+        """Find the debugger-gate function: its callers set x0=#0 and
+        x2=#0, BL it, then TBNZ on bit 0 of w0."""
+        starts = set()
+        for scan in range(8, self.size - 8, 4):
+            i = _disasm_one(self.raw, scan)
+            n = _disasm_one(self.raw, scan + 4)
+            p1 = _disasm_one(self.raw, scan - 4) if scan >= 4 else None
+            p2 = _disasm_one(self.raw, scan - 8) if scan >= 8 else None
+            if not all((i, n, p1, p2)):
+                continue
+            if not (i.mnemonic == "bl"
+                    and n.mnemonic == "tbnz" and n.op_str.startswith("w0, #0,")
+                    and p1.mnemonic == "mov" and p1.op_str == "x2, #0"
+                    and p2.mnemonic == "mov" and p2.op_str == "x0, #0"):
+                continue
+            fs = self._find_func_start(scan)
+            if fs is not None:
+                starts.add(fs)
+        if len(starts) != 1:
+            return None
+        return next(iter(starts))
+
+    def _find_udf_cave(self, min_insns=6, near_off=None, max_distance=0x80000):
+        need = min_insns * 4
+        start = 0 if near_off is None else max(0, near_off - 0x1000)
+        end = self.size if near_off is None else min(self.size, near_off + max_distance)
+        best = None
+        best_dist = None
+        off = start
+        while off < end:
+            run = off
+            while run < end and self.raw[run:run + 4] == b"\x00\x00\x00\x00":
+                run += 4
+            if run - off >= need:
+                prev = _disasm_one(self.raw, off - 4) if off >= 4 else None
+                if prev and prev.mnemonic in (
+                    "b", "b.eq", "b.ne", "b.lo", "b.hs", "cbz", "cbnz", "tbz", "tbnz"
+                ):
+                    return off
+                if near_off is not None and _disasm_one(self.raw, off):
+                    dist = abs(off - near_off)
+                    if best is None or dist < best_dist:
+                        best = off
+                        best_dist = dist
+            off = run + 4 if run > off else off + 4
+        return best
+
+    # ── JB patches ───────────────────────────────────────────────
+    def patch_selector24_hashcmp_calls(self):
+        """Patch remaining selector-24 hashcmp BL callsites: bl -> mov x0,#0."""
+        sites = []
+        for off in range(0, self.size - 8, 4):
+            i0 = _disasm_one(self.raw, off)
+            i1 = _disasm_one(self.raw, off + 4)
+            i2 = _disasm_one(self.raw, off + 8)
+            if not i0 or not i1 or not i2:
+                continue
+            if not (i0.mnemonic == "mov" and i0.op_str == "w2, #0x14"):
+                continue
+            if not (i1.mnemonic == "bl" and i2.mnemonic == "cbz"
+                    and i2.op_str.startswith("w0,")):
+                continue
+            sites.append(off + 4)
+
+        # Validate the site count before emitting anything, so a bad match
+        # cannot leave partial patches applied.
+        if len(sites) > 3:
+            self._log(f" [-] TXM JB: selector24 hashcmp sites too many ({len(sites)})")
+            return False
+        if not sites:
+            self._log(" [-] TXM JB: no selector24 hashcmp BL sites to patch")
+            return False
+        for n, off in enumerate(sites, 1):
+            self.emit(off, MOV_X0_0,
+                      f"selector24 hashcmp bypass #{n}: bl -> mov x0,#0")
+        return True
+
+    def patch_selector24_a1_path(self):
+        """Selector-24 A1 path bypass: NOP b.lo + cbz around mov w0,#0xa1."""
+        locs = []
+        for scan in range(0, self.size - 4, 4):
+            ins =
_disasm_one(self.raw, scan) + if ins and ins.mnemonic == "mov" and ins.op_str == "w0, #0xa1": + i_blo = _disasm_one(self.raw, scan - 0xC) + i_cbz = _disasm_one(self.raw, scan - 0x4) + if not i_blo or not i_cbz: + continue + if (i_blo.mnemonic == "b.lo" + and i_cbz.mnemonic == "cbz" + and i_cbz.op_str.startswith("x9,")): + locs.append(scan) + + if len(locs) != 1: + self._log(f" [-] TXM JB: expected 1 selector24 A1 site, found {len(locs)}") + return False + off = locs[0] + + self.emit(off - 0xC, NOP, "selector24 A1: b.lo -> nop") + self.emit(off - 0x4, NOP, "selector24 A1: cbz x9 -> nop") + return True + + def patch_get_task_allow_force_true(self): + """Force get-task-allow entitlement call to return true.""" + refs = self._find_string_refs(b"get-task-allow") + if not refs: + self._log(" [-] TXM JB: get-task-allow string refs not found") + return False + + cands = [] + for _, _, add_off in refs: + for scan in range(add_off, min(add_off + 0x20, self.size - 4), 4): + i = _disasm_one(self.raw, scan) + n = _disasm_one(self.raw, scan + 4) + if not i or not n: + continue + if i.mnemonic == "bl" and n.mnemonic == "tbnz" and n.op_str.startswith("w0, #0,"): + cands.append(scan) + + if len(cands) != 1: + self._log(f" [-] TXM JB: expected 1 get-task-allow BL site, found {len(cands)}") + return False + + self.emit(cands[0], MOV_X0_1, "get-task-allow: bl -> mov x0,#1") + return True + + def patch_selector42_29_shellcode(self): + """Selector 42|29 patch via dynamic cave shellcode + branch redirect.""" + fn = self._find_debugger_gate_func_start() + if fn is None: + self._log(" [-] TXM JB: debugger-gate function not found (selector42|29)") + return False + + stubs = [] + for off in range(4, self.size - 24, 4): + p = _disasm_one(self.raw, off - 4) + i0 = _disasm_one(self.raw, off) + i1 = _disasm_one(self.raw, off + 4) + i2 = _disasm_one(self.raw, off + 8) + i3 = _disasm_one(self.raw, off + 12) + i4 = _disasm_one(self.raw, off + 16) + i5 = _disasm_one(self.raw, off + 20) + if not 
all((p, i0, i1, i2, i3, i4, i5)): + continue + if not (p.mnemonic == "bti" and p.op_str == "j"): + continue + if not (i0.mnemonic == "mov" and i0.op_str == "x0, x20"): + continue + if not (i1.mnemonic == "bl" and i2.mnemonic == "mov" + and i2.op_str == "x1, x21"): + continue + if not (i3.mnemonic == "mov" and i3.op_str == "x2, x22" + and i4.mnemonic == "bl" and i5.mnemonic == "b"): + continue + if i4.operands and i4.operands[0].imm == fn: + stubs.append(off) + + if len(stubs) != 1: + self._log(f" [-] TXM JB: selector42|29 stub expected 1, found {len(stubs)}") + return False + stub_off = stubs[0] + + cave = self._find_udf_cave(min_insns=6, near_off=stub_off) + if cave is None: + self._log(" [-] TXM JB: no UDF cave found for selector42|29 shellcode") + return False + + self.emit(stub_off, self._asm_at(f"b #0x{cave:X}", stub_off), + "selector42|29: branch to shellcode") + self.emit(cave, NOP, "selector42|29 shellcode pad: udf -> nop") + self.emit(cave + 4, MOV_X0_1, "selector42|29 shellcode: mov x0,#1") + self.emit(cave + 8, STRB_W0_X20_30, "selector42|29 shellcode: strb w0,[x20,#0x30]") + self.emit(cave + 12, MOV_X0_X20, "selector42|29 shellcode: mov x0,x20") + self.emit(cave + 16, self._asm_at(f"b #0x{stub_off + 4:X}", cave + 16), + "selector42|29 shellcode: branch back") + return True + + def patch_debugger_entitlement_force_true(self): + """Force debugger entitlement call to return true.""" + refs = self._find_string_refs(b"com.apple.private.cs.debugger") + if not refs: + self._log(" [-] TXM JB: debugger refs not found") + return False + + cands = [] + for _, _, add_off in refs: + for scan in range(add_off, min(add_off + 0x20, self.size - 4), 4): + i = _disasm_one(self.raw, scan) + n = _disasm_one(self.raw, scan + 4) + p1 = _disasm_one(self.raw, scan - 4) if scan >= 4 else None + p2 = _disasm_one(self.raw, scan - 8) if scan >= 8 else None + if not all((i, n, p1, p2)): + continue + if (i.mnemonic == "bl" + and n.mnemonic == "tbnz" and n.op_str.startswith("w0, #0,") 
+ and p1.mnemonic == "mov" and p1.op_str == "x2, #0" + and p2.mnemonic == "mov" and p2.op_str == "x0, #0"): + cands.append(scan) + + if len(cands) != 1: + self._log(f" [-] TXM JB: expected 1 debugger BL site, found {len(cands)}") + return False + + self.emit(cands[0], MOV_W0_1, "debugger entitlement: bl -> mov w0,#1") + return True + + def patch_developer_mode_bypass(self): + """Developer-mode bypass: NOP conditional guard before deny log path.""" + refs = self._find_string_refs( + b"developer mode enabled due to system policy configuration") + if not refs: + self._log(" [-] TXM JB: developer-mode string ref not found") + return False + + cands = [] + for _, _, add_off in refs: + for back in range(add_off - 4, max(add_off - 0x20, 0), -4): + ins = _disasm_one(self.raw, back) + if not ins: + continue + if ins.mnemonic not in ("tbz", "tbnz", "cbz", "cbnz"): + continue + if not ins.op_str.startswith("w9, #0,"): + continue + cands.append(back) + + if len(cands) != 1: + self._log(f" [-] TXM JB: expected 1 developer mode guard, found {len(cands)}") + return False + + self.emit(cands[0], NOP, "developer mode bypass") + return True diff --git a/scripts/setup_venv_linux.sh b/scripts/setup_venv_linux.sh new file mode 100644 index 0000000..09fd12c --- /dev/null +++ b/scripts/setup_venv_linux.sh @@ -0,0 +1,58 @@ +#!/bin/bash +# setup_venv_linux.sh — Create Python venv on Linux (Debian/Ubuntu). +# +# On Linux, keystone-engine pip package ships prebuilt .so — no manual build needed. +# +# Usage: +# bash scripts/setup_venv_linux.sh +# +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" +PROJECT_ROOT="$(cd "${SCRIPT_DIR}/.." 
&& pwd)" +VENV_DIR="${PROJECT_ROOT}/.venv" +REQUIREMENTS="${PROJECT_ROOT}/requirements.txt" + +echo "=== Installing system deps ===" +if command -v apt-get &>/dev/null; then + apt-get update -qq + apt-get install -y -qq python3 python3-venv python3-pip cmake gcc g++ pkg-config 2>/dev/null +elif command -v dnf &>/dev/null; then + dnf install -y python3 python3-pip cmake gcc gcc-c++ 2>/dev/null +fi + +PYTHON="$(command -v python3)" +if [[ -z "${PYTHON}" ]]; then + echo "Error: python3 not found in PATH" + exit 1 +fi + +echo "" +echo "=== Creating venv ===" +echo " Python: ${PYTHON} ($(${PYTHON} --version 2>&1))" +echo " venv: ${VENV_DIR}" +echo " deps: ${REQUIREMENTS}" +echo "" + +"${PYTHON}" -m venv "${VENV_DIR}" + +source "${VENV_DIR}/bin/activate" +pip install --upgrade pip > /dev/null +pip install -r "${REQUIREMENTS}" + +# --- Verify --- +echo "" +echo "=== Verifying imports ===" +python3 -c " +from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN +from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN +from pyimg4 import IM4P +print(' capstone OK') +print(' keystone OK') +print(' pyimg4 OK') +" + +echo "" +echo "=== venv ready ===" +echo " Activate: source ${VENV_DIR}/bin/activate" +echo " Deactivate: deactivate"
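
A note on the branch encoding the patchers above lean on: `_encode_b`/`_encode_bl` must emit AArch64 immediate branches, and their ±128 MB reach is what bounds the code-cave searches. A minimal standalone sketch of the encode/decode pair (helper names here are illustrative, not the repo's API; the decode mirrors the `(val & 0xFC000000) == 0x14000000` check used in `patch_syscallmask_apply_to_proc`):

```python
import struct

def encode_b(src: int, dst: int):
    """Encode an unconditional B from file offset src to dst, or None."""
    delta = dst - src
    # imm26 is a signed word offset: must be 4-aligned and within +/-128 MB
    if delta % 4 or not (-0x8000000 <= delta // 4 < 0x8000000):
        return None
    return struct.pack("<I", 0x14000000 | ((delta // 4) & 0x3FFFFFF))

def decode_b_target(insn: int, src: int):
    """Recover the branch target from a B encoding, or None."""
    if (insn & 0xFC000000) != 0x14000000:
        return None
    imm26 = insn & 0x3FFFFFF
    if imm26 & (1 << 25):          # sign-extend the 26-bit field
        imm26 -= 1 << 26
    return src + imm26 * 4
```

For BL the fixed bits become `0x94000000` with the same immediate field, which is why both helpers share the same range check.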