mirror of
https://github.com/Lakr233/vphone-cli.git
synced 2026-04-04 20:39:05 +08:00
Complete Swift firmware patcher parity and CLI wiring
Run SwiftFormat on firmware patcher
Remove legacy Python firmware patchers
Fix compare pipeline pyimg4 PATH handling
Restore Python patchers and prefer fresh restore
Update BinaryBuffer.swift
Avoid double scanning in patcher apply
Prefer Python TXM site before fallback
Retarget TXM trustcache finder for 26.1
Remove legacy Python firmware patchers
Fail fast on nested virtualization hosts
Return nonzero on fatal boot startup
Add amfidont helper for signed boot binary
Stage AMFI boot args for next host reboot
Add host preflight for boot entitlements
Fail fast when boot entitlements are unavailable
Switch firmware patch targets to Swift CLI
Record real Swift firmware parity results
Verify Swift firmware pipeline end-to-end parity
Fix Swift firmware pipeline JB dry-run
This commit is contained in:
4  .gitignore  vendored
@@ -217,6 +217,9 @@ ipython_config.py
 # in the .venv directory. It is recommended not to include this directory in version control.
 .pixi
+
+# Local scratch planning
+TODO.md
 
 # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
 __pypackages__/
 
@@ -316,7 +319,6 @@ __marimo__/
 *.ipsw
 /updates-cdn
 /research/jb_asm_refs
-TODO.md
 /references/
 scripts/vphoned/vphoned
 /cfw_input/
25  AGENTS.md
@@ -15,10 +15,9 @@ Virtual iPhone boot tool using Apple's Virtualization.framework with PCC researc
 
 ## Workflow Rules
 
-- Always read `/TODO.md` before starting any substantial work.
-- Always update `/TODO.md` when plan, progress, assumptions, blockers, or open questions change.
-- If blocked or waiting on user input, write the exact blocker and next action in `/TODO.md`.
-- If not exists, continue existing work until complete. If exists, follow `/TODO.md` instructions.
+- Do not create, read, or update `/TODO.md`.
+- Ignore `/TODO.md` if it exists locally; it is intentionally not part of the repo workflow anymore.
+- Track plan, progress, assumptions, blockers, and next actions in commit history, code comments when warranted, and current research docs instead of a repo TODO file.
 
 For any changes applying new patches, also update research/0_binary_patch_comparison.md. Dont forget this.
 
@@ -89,23 +88,13 @@ sources/
 
 scripts/
 ├── vphoned/          # Guest daemon (ObjC, runs inside iOS VM over vsock)
-├── patchers/         # Python patcher modules
-│   ├── iboot.py      # iBoot patcher (iBSS/iBEC/LLB)
-│   ├── iboot_jb.py   # JB: iBoot nonce skip
-│   ├── kernel.py     # Kernel patcher (26 patches)
-│   ├── kernel_jb.py  # JB: kernel patches (~40)
-│   ├── txm.py        # TXM patcher
-│   ├── txm_dev.py    # Dev: TXM entitlements/debugger/dev mode
-│   └── cfw.py        # CFW binary patcher
+├── patchers/         # Python CFW patcher modules
+│   └── cfw.py        # CFW binary patcher entrypoint
 ├── resources/        # Resource archives (git submodule)
 ├── patches/          # Build-time patches (libirecovery)
 ├── fw_prepare.sh     # Download IPSWs, merge cloudOS into iPhone
 ├── fw_manifest.py    # Generate hybrid BuildManifest/Restore plists
-├── fw_patch.py       # Patch boot chain (regular)
-├── fw_patch_dev.py   # Regular + dev TXM patches
-├── fw_patch_jb.py    # Regular + JB extensions
-├── ramdisk_build.py  # Build SSH ramdisk with trustcache
+├── ramdisk_build.py  # Build SSH ramdisk with trustcache (reuses Swift patch-component for TXM/base kernel)
 ├── ramdisk_send.sh   # Send ramdisk to device via irecovery
 ├── cfw_install.sh    # Install CFW (regular)
 ├── cfw_install_dev.sh # Regular + rpcserver daemon
 
@@ -160,7 +149,7 @@ research/ # Detailed firmware/patch documentation
 - All instruction matching must be derived from Capstone decode results (mnemonic / operands / control-flow), not exact operand-string text when a semantic operand check is possible.
 - All replacement instruction bytes must come from Keystone-backed helpers already used by the project (for example `asm(...)`, `NOP`, `MOV_W0_0`, etc.).
 - Prefer source-backed semantic anchors: in-image symbol lookup, string xrefs, local call-flow, and XNU correlation. Do not depend on repo-exported per-kernel symbol dumps at runtime.
-- When retargeting a patch, write the reveal procedure and validation steps into `TODO.md` before handing off for testing.
+- When retargeting a patch, write the reveal procedure and validation steps into the relevant research doc or commit notes before handing off for testing. Do not create `TODO.md`.
 - For `patch_bsd_init_auth` specifically, the allowed reveal flow is: recover `bsd_init` -> locate rootvp panic block -> find the unique in-function `call` -> `cbnz w0/x0, panic` -> `bl imageboot_needed` site -> patch the branch gate only.
 
 - Patchers use `capstone` (disassembly), `keystone-engine` (assembly), `pyimg4` (IM4P handling).
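The AGENTS rules above require semantic decode of branches rather than operand-string matching, and the `patch_bsd_init_auth` reveal flow hinges on recognizing a `cbnz w0/x0` gate. As a rough standalone illustration of what semantically decoding a `cbz`/`cbnz` instruction involves (plain-Python sketch, independent of the project's Capstone-based matchers):

```python
def decode_cbz_cbnz(insn: int, pc: int):
    """Decode an AArch64 CBZ/CBNZ word; return (mnemonic, reg, target) or None."""
    if (insn >> 25) & 0x3F != 0b011010:      # bits 30..25 select the CBZ/CBNZ class
        return None
    mnemonic = "cbnz" if (insn >> 24) & 1 else "cbz"
    sf = insn >> 31                          # 0 = Wn operand, 1 = Xn operand
    reg = f"{'x' if sf else 'w'}{insn & 0x1F}"
    imm19 = (insn >> 5) & 0x7FFFF            # signed 19-bit word offset
    if imm19 & 0x40000:                      # sign-extend negative offsets
        imm19 -= 0x80000
    return mnemonic, reg, pc + (imm19 << 2)

# 0x34000040 encodes `cbz w0, #8`: the branch target is pc + 8.
assert decode_cbz_cbnz(0x34000040, 0x1000) == ("cbz", "w0", 0x1008)
```

Matching on the decoded class/register/target like this survives assembler formatting differences, which is the point of the "no exact operand-string text" rule.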
65  Makefile
@@ -18,6 +18,7 @@ BUILD_INFO := sources/vphone-cli/VPhoneBuildInfo.swift
 # ─── Paths ────────────────────────────────────────────────────────
 SCRIPTS := scripts
 BINARY := .build/release/vphone-cli
+PATCHER_BINARY := .build/debug/vphone-cli
 BUNDLE := .build/vphone-cli.app
 BUNDLE_BIN := $(BUNDLE)/Contents/MacOS/vphone-cli
 INFO_PLIST := sources/Info.plist
 
@@ -61,6 +62,8 @@ help:
 	@echo " CPU=8 CPU cores (stored in manifest)"
 	@echo " MEMORY=8192 Memory in MB (stored in manifest)"
 	@echo " DISK_SIZE=64 Disk size in GB (stored in manifest)"
+	@echo " make amfidont_allow_vphone Start amfidont for the signed vphone-cli binary"
+	@echo " make boot_host_preflight Diagnose whether host can launch signed PV=3 binary"
 	@echo " make boot Boot VM (reads from config.plist)"
 	@echo " make boot_dfu Boot VM in DFU mode (reads from config.plist)"
 	@echo ""
 
@@ -68,9 +71,9 @@ help:
 	@echo " make fw_prepare Download IPSWs, extract, merge"
 	@echo " Options: IPHONE_SOURCE= URL or local path to iPhone IPSW"
 	@echo " CLOUDOS_SOURCE= URL or local path to cloudOS IPSW"
-	@echo " make fw_patch Patch boot chain (regular variant)"
-	@echo " make fw_patch_dev Patch boot chain (dev mode TXM patches)"
-	@echo " make fw_patch_jb Patch boot chain (dev + JB extensions)"
+	@echo " make fw_patch Patch boot chain with Swift pipeline (regular variant)"
+	@echo " make fw_patch_dev Patch boot chain with Swift pipeline (dev mode TXM patches)"
+	@echo " make fw_patch_jb Patch boot chain with Swift pipeline (dev + JB extensions)"
 	@echo ""
 	@echo "Restore:"
 	@echo " make restore_get_shsh Dump SHSH response from Apple"
 
@@ -121,10 +124,18 @@ clean:
 # Build
 # ═══════════════════════════════════════════════════════════════════
 
-.PHONY: build bundle
+.PHONY: build patcher_build bundle
 
 build: $(BINARY)
 
+patcher_build: $(PATCHER_BINARY)
+
+$(PATCHER_BINARY): $(SWIFT_SOURCES) Package.swift
+	@echo "=== Building vphone-cli patcher ($(GIT_HASH)) ==="
+	@echo '// Auto-generated — do not edit' > $(BUILD_INFO)
+	@echo 'enum VPhoneBuildInfo { static let commitHash = "$(GIT_HASH)" }' >> $(BUILD_INFO)
+	@set -o pipefail; swift build 2>&1 | tail -5
+
 $(BINARY): $(SWIFT_SOURCES) Package.swift $(ENTITLEMENTS)
 	@echo "=== Building vphone-cli ($(GIT_HASH)) ==="
 	@echo '// Auto-generated — do not edit' > $(BUILD_INFO)
 
@@ -168,17 +179,43 @@ vphoned:
 # VM management
 # ═══════════════════════════════════════════════════════════════════
 
-.PHONY: vm_new boot boot_dfu
+.PHONY: vm_new amfidont_allow_vphone boot_host_preflight boot boot_dfu boot_binary_check
 
 vm_new:
 	CPU="$(CPU)" MEMORY="$(MEMORY)" \
 	zsh $(SCRIPTS)/vm_create.sh --dir $(VM_DIR) --disk-size $(DISK_SIZE)
 
+amfidont_allow_vphone: build
+	zsh $(SCRIPTS)/start_amfidont_for_vphone.sh
+
+boot_host_preflight: build
+	zsh $(SCRIPTS)/boot_host_preflight.sh
+
+boot_binary_check: $(BINARY)
+	@zsh $(SCRIPTS)/boot_host_preflight.sh --assert-bootable
+	@tmp_log="$$(mktemp -t vphone-boot-preflight.XXXXXX)"; \
+	set +e; \
+	"$(CURDIR)/$(BINARY)" --help >"$$tmp_log" 2>&1; \
+	rc=$$?; \
+	set -e; \
+	if [ $$rc -ne 0 ]; then \
+		echo "Error: signed vphone-cli failed to launch (exit $$rc)." >&2; \
+		echo "Check private virtualization entitlement support and ensure SIP/AMFI are disabled on the host." >&2; \
+		echo "Repo workaround: start the AMFI bypass helper with 'make amfidont_allow_vphone' and retry." >&2; \
+		if [ -s "$$tmp_log" ]; then \
+			echo "--- vphone-cli preflight log ---" >&2; \
+			tail -n 40 "$$tmp_log" >&2; \
+		fi; \
+		rm -f "$$tmp_log"; \
+		exit $$rc; \
+	fi; \
+	rm -f "$$tmp_log"
+
-boot: bundle vphoned
+boot: bundle vphoned boot_binary_check
 	cd $(VM_DIR) && "$(CURDIR)/$(BUNDLE_BIN)" \
 		--config ./config.plist
 
-boot_dfu: build
+boot_dfu: build boot_binary_check
 	cd $(VM_DIR) && "$(CURDIR)/$(BINARY)" \
 		--config ./config.plist \
 		--dfu
 
@@ -192,14 +229,14 @@ boot_dfu: build
 fw_prepare:
 	cd $(VM_DIR) && bash "$(CURDIR)/$(SCRIPTS)/fw_prepare.sh"
 
-fw_patch:
-	cd $(VM_DIR) && $(PYTHON) "$(CURDIR)/$(SCRIPTS)/fw_patch.py" .
+fw_patch: patcher_build
+	"$(CURDIR)/$(PATCHER_BINARY)" patch-firmware --vm-directory "$(CURDIR)/$(VM_DIR)" --variant regular
 
-fw_patch_dev:
-	cd $(VM_DIR) && $(PYTHON) "$(CURDIR)/$(SCRIPTS)/fw_patch_dev.py" .
+fw_patch_dev: patcher_build
+	"$(CURDIR)/$(PATCHER_BINARY)" patch-firmware --vm-directory "$(CURDIR)/$(VM_DIR)" --variant dev
 
-fw_patch_jb:
-	cd $(VM_DIR) && $(PYTHON) "$(CURDIR)/$(SCRIPTS)/fw_patch_jb.py" .
+fw_patch_jb: patcher_build
+	"$(CURDIR)/$(PATCHER_BINARY)" patch-firmware --vm-directory "$(CURDIR)/$(VM_DIR)" --variant jb
 
 # ═══════════════════════════════════════════════════════════════════
 # Restore
 
@@ -225,7 +262,7 @@ restore:
 
 .PHONY: ramdisk_build ramdisk_send
 
-ramdisk_build:
+ramdisk_build: patcher_build
 	cd $(VM_DIR) && RAMDISK_UDID="$(RAMDISK_UDID)" $(PYTHON) "$(CURDIR)/$(SCRIPTS)/ramdisk_build.py" .
 
 ramdisk_send:
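The `boot_binary_check` recipe above probes the signed binary, captures its exit status without tripping `set -e`, and replays a log tail only on failure. The same capture-then-report shape, sketched in Python purely for illustration (the Makefile's shell recipe is the authoritative version):

```python
import subprocess
import sys

def preflight(cmd, tail_lines=40):
    """Run a launch probe; on failure, replay the last lines of its output."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        print(f"Error: probe failed (exit {proc.returncode}).", file=sys.stderr)
        for line in (proc.stdout + proc.stderr).splitlines()[-tail_lines:]:
            print(line, file=sys.stderr)
    return proc.returncode

# A probe that succeeds, analogous to `vphone-cli --help` on a healthy host:
rc = preflight([sys.executable, "-c", "print('ok')"])
```

Returning the probe's own exit code (rather than swallowing it) is what lets the caller fail fast, exactly as the Makefile does with `exit $$rc`.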
20  README.md
@@ -70,6 +70,15 @@ Boot into Recovery (long press power button), open Terminal, then choose one set
 sudo amfidont --path [PATH_TO_VPHONE_DIR]
 ```
 
+Repo helper:
+
+```bash
+make amfidont_allow_vphone
+```
+
+This helper computes the current signed `vphone-cli` CDHash and uses the
+URL-encoded project path form observed by `AMFIPathValidator`.
+
 **Install dependencies:**
 
 ```bash
 
@@ -225,6 +234,17 @@ AMFI/debug restrictions are not bypassed correctly. Choose one setup path:
 
 - **Option 2 (debug restrictions only):**
   use Recovery mode `csrutil enable --without debug` (no full SIP disable), then install/load [`amfidont`](https://github.com/zqxwce/amfidont) while keeping AMFI otherwise enabled.
+  For this repo, `make amfidont_allow_vphone` packages the required encoded-path
+  and CDHash allowlist startup.
+
+**Q: `make boot` / `make boot_dfu` starts and then fails with `VZErrorDomain Code=2 "Virtualization is not available on this hardware."`**
+
+The host itself is running inside an Apple virtual machine, so nested
+Virtualization.framework guest boot is unavailable. Run the boot flow on a
+non-nested macOS 15+ host instead. `make boot_host_preflight` will show this as
+`Model Name: Apple Virtual Machine 1` with `kern.hv_vmm_present=1`.
+`make boot` / `make boot_dfu` now fail fast through `boot_binary_check` before
+attempting VM startup on that kind of host.
 
 **Q: System apps (App Store, Messages, etc.) won't download or install.**
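The README's note that `AMFIPathValidator` observes URL-encoded paths means the allow rule must match the percent-encoded form of the project path, not the raw one. A standard-library sketch of that encoding (the trailing `vphone-cli` directory name here is hypothetical; the repo's `scripts/start_amfidont_for_vphone.sh` does the equivalent):

```python
from urllib.parse import quote

# AMFIPathValidator reports paths with spaces percent-encoded, so the
# allowlist entry must use the encoded form of the project path.
raw = "/Volumes/My Shared Files/vphone-cli"   # hypothetical project location
encoded = quote(raw, safe="/")
print(encoded)  # /Volumes/My%20Shared%20Files/vphone-cli
```

`quote()` leaves `/` intact via `safe="/"` while turning each space into `%20`, matching the `/Volumes/My%20Shared%20Files/...` form seen in the validator's output.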
@@ -180,6 +180,84 @@
 | iOS 26.1 (`23B85`) | 14 | 59 |
 | iOS 26.3 (`23D127`) | 14 | 59 |
 
+## Swift Migration Notes (2026-03-10)
+
+- Swift `FirmwarePatcher` now matches the Python reference patch output across all checked components:
+  - `avpbooter` 1/1
+  - `ibss` 4/4
+  - `ibec` 7/7
+  - `llb` 13/13
+  - `txm` 1/1
+  - `txm_dev` 12/12
+  - `kernelcache` 28/28
+  - `ibss_jb` 1/1
+  - `kernelcache_jb` 84/84
+- JB parity fixes completed in Swift:
+  - C23 `vnode_getattr` resolution now follows the Python backward BL scan and resolves `0x00CD44F8`.
+  - C22 syscallmask cave encodings were corrected and centralized in `ARM64Constants.swift`.
+  - Task-conversion matcher masks and kernel-text scan range were corrected, restoring the patch at `0x00B0C400`.
+  - `jbDecodeBranchTarget()` now correctly decodes `cbz/cbnz`, restoring the real `_bsd_init` rootauth gate at `0x00F7798C`.
+  - IOUC MACF matching now uses Python-equivalent disassembly semantics for the aggregator shape, restoring the deny-to-allow patch at `0x01260644`.
+  - C24 `kcall10` cave instruction bytes were re-verified against macOS `clang`/`as`; no Swift byte changes were needed.
+- The Swift pipeline is now directly invokable from the product binary:
+  - `vphone-cli patch-firmware --vm-directory <dir> --variant {regular|dev|jb}`
+  - `vphone-cli patch-component --component {txm|kernel-base} --input <file> --output <raw>` is available for non-firmware tooling that still needs a single patched payload during ramdisk packaging
+  - default loader now preserves IM4P containers via `IM4PHandler`
+  - DeviceTree patching now uses the real Swift `DeviceTreePatcher` in the pipeline
+  - project `make fw_patch`, `make fw_patch_dev`, and `make fw_patch_jb` targets now invoke this Swift pipeline via the unsigned debug `vphone-cli` build, while the signed release build remains reserved for VM boot/DFU paths
+  - on 2026-03-11, the legacy Python firmware patcher entrypoints and patch modules were temporarily restored from pre-removal history for parity/debug work.
+  - after byte-for-byte parity was revalidated against Python on `26.1` and `26.3` for `regular`, `dev`, and `jb`, those legacy firmware-patcher Python sources and transient comparison/export helpers were removed again so the repo keeps Swift as the single firmware-patching implementation.
+- Swift pipeline follow-up fixes completed after CLI bring-up:
+  - `findFile()` now supports glob patterns such as `AVPBooter*.bin` instead of treating them as literal paths.
+  - JB variant sequencing now runs base iBSS/kernel patchers first, then the JB extension patchers.
+  - Sequential pipeline application now merges each patcher's `PatchRecord` writes onto the shared output buffer while keeping later patcher searches anchored to the original payload, matching the standalone Swift/Python validation model.
+  - `apply()` now reuses an already-populated `patches` array instead of re-running `findAll()`, so `patch-firmware` / `patch-component` no longer double-scan or double-print the same component diagnostics on a single invocation.
+  - unaligned integer reads across the firmware patcher now go through a shared safe `Data.loadLE(...)` helper, fixing the JB IM4P crash (`Swift/UnsafeRawPointer.swift:449` misaligned raw pointer load).
+  - `TXMPatcher` now preserves pristine Python parity by preferring the legacy trustcache binary-search site when present, and only falls back to the selector24 hash-flags call chain (`ldr x1, [x20,#0x38]` -> `add x2, sp, #4` -> `bl` -> `ldp x0, x1, [x20,#0x30]` -> `add x2, sp, #8` -> `bl`) when rerunning on a VM tree that already carries the dev/JB selector24 early-return patch.
+  - `scripts/fw_prepare.sh` now deletes stale sibling `*Restore*` directories in the working VM directory before patching continues, so a fresh `make fw_prepare && make fw_patch` cannot accidentally select an older prepared firmware tree (for example `26.1`) when a newer one (for example `26.3`) was just generated.
+- IM4P/output parity fixes completed after synthetic full-pipeline comparison:
+  - `IM4PHandler.save()` no longer forces a generic LZFSE re-encode.
+  - Swift now rebuilds IM4Ps in the same effective shape as the Python patch flow and only preserves trailing `PAYP` metadata for `TXM` (`trxm`) and `kernelcache` (`krnl`).
+  - `IBootPatcher` serial labels now match Python casing exactly (`Loaded iBSS`, `Loaded iBEC`, `Loaded LLB`).
+  - `DeviceTreePatcher` now serializes the full patched flat tree, matching Python `dtree.py`, instead of relying on in-place property writes alone.
+- Synthetic CLI dry-run status on 2026-03-10 using IM4P-backed inputs under `ipsws/patch_refactor_input`:
+  - regular: 58 patch records
+  - dev: 69 patch records
+  - jb: 154 patch records
+- Full synthetic Python-vs-Swift pipeline comparison status on 2026-03-10 using `scripts/compare_swift_python_pipeline.py`:
+  - regular: all 7 component payloads match
+  - dev: all 7 component payloads match
+  - jb: all 7 component payloads match
+- Real prepared-firmware Python-vs-Swift pipeline comparison status on 2026-03-10 using `vm/` after `make fw_prepare`:
+  - historical note: the now-removed `scripts/compare_swift_python_pipeline.py` cloned only the prepared `*Restore*` tree plus `AVPBooter*.bin`, `AVPSEPBooter*.bin`, and `config.plist`, avoiding `No space left on device` failures from copying `Disk.img` after `make vm_new`.
+  - regular: all 7 component payloads match
+  - dev: all 7 component payloads match
+  - jb: all 7 component payloads match
+- Runtime validation blocker observed on 2026-03-10:
+  - `NONE_INTERACTIVE=1 SKIP_PROJECT_SETUP=1 make setup_machine JB=1` reaches the Swift patch stage and reports `[patch-firmware] applied 154 patches for jb`, then fails when the flow transitions into `make boot_dfu`.
+  - `make boot_dfu` originally failed at launch-policy time with exit `137` / signal `9` because the release `vphone-cli` could not launch on this host.
+  - `amfidont` was then validated on-host:
+    - it can attach to `/usr/libexec/amfid`
+    - the initial path allow rule failed because `AMFIPathValidator` reports URL-encoded paths (`/Volumes/My%20Shared%20Files/...`)
+    - rerunning `amfidont` with the encoded project path and the release-binary CDHash allows the signed release `vphone-cli` to launch
+    - this workflow is now packaged as `make amfidont_allow_vphone` / `scripts/start_amfidont_for_vphone.sh`
+  - With launch policy bypassed, `make boot_dfu` advances into VM setup, emits `vm/udid-prediction.txt`, and then fails with `VZErrorDomain Code=2 "Virtualization is not available on this hardware."`
+  - `VPhoneAppDelegate` startup failure handling was tightened so these fatal boot/DFU startup errors now exit non-zero; `make boot_dfu` now reports `make: *** [boot_dfu] Error 1` for the nested-virtualization failure instead of incorrectly returning success.
+  - The host itself is a nested Apple VM (`Model Name: Apple Virtual Machine 1`, `kern.hv_vmm_present=1`), so the remaining blocker is lack of nested Virtualization.framework availability rather than firmware patching or AMFI bypass.
+  - `boot_binary_check` now uses strict host preflight and fails earlier on this class of host with `make: *** [boot_binary_check] Error 3`, avoiding a wasted VM-start attempt once the nested-virtualization condition is already known.
+  - Added `make boot_host_preflight` / `scripts/boot_host_preflight.sh` to capture this state in one command:
+    - model: `Apple Virtual Machine 1`
+    - `kern.hv_vmm_present`: `1`
+    - SIP: disabled
+    - `allow-research-guests`: disabled
+    - current `kern.bootargs`: empty
+    - next-boot `nvram boot-args`: `amfi_get_out_of_my_way=1 -v` (staged on 2026-03-10; requires reboot before it affects launch policy)
+    - `spctl --status`: assessments enabled
+    - `spctl --assess` rejects the signed release binary
+    - unsigned debug `vphone-cli --help`: exit `0`
+    - signed release `vphone-cli --help`: exit `137`
+    - freshly signed debug control binary `--help`: exit `137`
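One of the follow-up fixes above routes unaligned integer reads through a safe little-endian helper, because typed loads at misaligned offsets trap (the `Swift/UnsafeRawPointer.swift:449` crash). The underlying idea, sketched in Python with `struct` (the Swift `Data.loadLE(...)` helper is the project's actual implementation; this just demonstrates alignment-free LE reads):

```python
import struct

def load_le_u32(buf: bytes, offset: int) -> int:
    """Read a little-endian u32 at any byte offset; no alignment requirement."""
    return struct.unpack_from("<I", buf, offset)[0]

blob = bytes([0xAA, 0x10, 0x32, 0x54, 0x76, 0xBB])
value = load_le_u32(blob, 1)   # starts at an odd (unaligned) offset
assert value == 0x76543210     # bytes 10 32 54 76 read little-endian
```

Copy-based decoding like this never reinterprets the underlying pointer at a stricter alignment, which is exactly the failure mode the shared helper removes from the IM4P path.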

## Automation Notes (2026-03-06)

- `scripts/setup_machine.sh` non-interactive flow fix: renamed local variable `status` to `boot_state` in first-boot log wait and boot-analysis wait helpers to avoid zsh `status` read-only special parameter collision.
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,20 +0,0 @@
-[
-  {
-    "kernel_name": "kernelcache.release.vphone600",
-    "json_path": "/Users/qaq/Documents/GitHub/vphone-cli/research/kernel_info/json/kernelcache.release.vphone600.bin.symbols.json",
-    "matched": 4327,
-    "missed": 1398,
-    "percent": 75.58079999999999644,
-    "total": 5725,
-    "json_sha256": "9dba4eb578da1403dcb17b57ed82f3df469a4315c089d85cd8a583df228686c2"
-  },
-  {
-    "kernel_name": "kernelcache.research.vphone600",
-    "json_path": "/Users/qaq/Documents/GitHub/vphone-cli/research/kernel_info/json/kernelcache.research.vphone600.bin.symbols.json",
-    "matched": 4327,
-    "missed": 1398,
-    "percent": 75.58079999999999644,
-    "total": 5725,
-    "json_sha256": "7232730d5d88dc816b1e7b46505ac61b28bb9647a41cc0806538c7e800d23942"
-  }
-]
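The removed symbol-match summary above is internally consistent: `matched + missed == total`, and `percent` is the match ratio rounded to four decimals (stored with its full float expansion in the JSON). A quick check of the recorded numbers:

```python
matched, missed, total = 4327, 1398, 5725
assert matched + missed == total

percent = matched / total * 100
# Agrees with the recorded 75.5808 (serialized as 75.58079999999999644).
assert round(percent, 4) == 75.5808
```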
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
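In the removed patch-record dump that follows, `va` is the decimal form of `va_hex`, and subtracting `foff_hex` from `va_hex` yields a constant for these records (the unslid virtual base of the image; the constant below is derived from the dump, not stated in it). Checking that relationship on the first two records:

```python
# (va decimal, va_hex, foff_hex) taken from the first two records of the dump.
records = [
    (18446741874827090704, 0xFFFFFE0008645B10, 0x01641B10),
    (18446741874827125644, 0xFFFFFE000864E38C, 0x0164A38C),
]
for va, va_hex, foff in records:
    assert va == va_hex                    # decimal and hex forms agree

bases = {va - foff for va, _, foff in records}
assert bases == {0xFFFFFE0007004000}       # one shared base for these records
```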
@@ -1,597 +0,0 @@
-[
-  {
-    "method": "patch_amfi_cdhash_in_trustcache",
-    "desc": "mov x0,#1 [AMFIIsCDHashInTrustCache]",
-    "va": 18446741874827090704,
-    "va_hex": "0xFFFFFE0008645B10",
-    "foff_hex": "0x01641B10"
-  },
-  {
-    "method": "patch_amfi_cdhash_in_trustcache",
-    "desc": "cbz x2,+8 [AMFIIsCDHashInTrustCache]",
-    "va": 18446741874827090708,
-    "va_hex": "0xFFFFFE0008645B14",
-    "foff_hex": "0x01641B14"
-  },
-  {
-    "method": "patch_amfi_cdhash_in_trustcache",
-    "desc": "str x0,[x2] [AMFIIsCDHashInTrustCache]",
-    "va": 18446741874827090712,
-    "va_hex": "0xFFFFFE0008645B18",
-    "foff_hex": "0x01641B18"
-  },
-  {
-    "method": "patch_amfi_cdhash_in_trustcache",
-    "desc": "ret [AMFIIsCDHashInTrustCache]",
-    "va": 18446741874827090716,
-    "va_hex": "0xFFFFFE0008645B1C",
-    "foff_hex": "0x01641B1C"
-  },
-  {
-    "method": "patch_amfi_execve_kill_path",
-    "desc": "mov w0,#0 [AMFI kill return \u2192 allow]",
-    "va": 18446741874827125644,
-    "va_hex": "0xFFFFFE000864E38C",
-    "foff_hex": "0x0164A38C"
-  },
-  {
-    "method": "patch_bsd_init_auth",
-    "desc": "mov x0,#0 [_bsd_init auth]",
-    "va": 18446741874820188636,
-    "va_hex": "0xFFFFFE0007FB09DC",
-    "foff_hex": "0x00FAC9DC"
-  },
-  {
-    "method": "patch_convert_port_to_map",
-    "desc": "b 0xB0E154 [_convert_port_to_map skip panic]",
-    "va": 18446741874815344896,
-    "va_hex": "0xFFFFFE0007B12100",
-    "foff_hex": "0x00B0E100"
-  },
-  {
-    "method": "patch_cred_label_update_execve",
-    "desc": "mov x0,xzr [_cred_label_update_execve low-risk]",
-    "va": 18446741874827124480,
-    "va_hex": "0xFFFFFE000864DF00",
-    "foff_hex": "0x01649F00"
-  },
-  {
-    "method": "patch_cred_label_update_execve",
-    "desc": "retab [_cred_label_update_execve low-risk]",
-    "va": 18446741874827124484,
-    "va_hex": "0xFFFFFE000864DF04",
-    "foff_hex": "0x01649F04"
-  },
-  {
-    "method": "patch_dounmount",
-    "desc": "NOP [_dounmount MAC check]",
-    "va": 18446741874817070512,
-    "va_hex": "0xFFFFFE0007CB75B0",
-    "foff_hex": "0x00CB35B0"
-  },
-  {
-    "method": "patch_hook_cred_label_update_execve",
-    "desc": "mov x0,xzr [_hook_cred_label_update_execve low-risk]",
-    "va": 18446741874841300200,
-    "va_hex": "0xFFFFFE00093D2CE8",
-    "foff_hex": "0x023CECE8"
-  },
-  {
-    "method": "patch_hook_cred_label_update_execve",
-    "desc": "retab [_hook_cred_label_update_execve low-risk]",
-    "va": 18446741874841300204,
-    "va_hex": "0xFFFFFE00093D2CEC",
-    "foff_hex": "0x023CECEC"
-  },
-  {
-    "method": "patch_io_secure_bsd_root",
-    "desc": "b #0x1A4 [_IOSecureBSDRoot]",
-    "va": 18446741874824110576,
-    "va_hex": "0xFFFFFE000836E1F0",
-    "foff_hex": "0x0136A1F0"
-  },
-  {
-    "method": "patch_kcall10",
-    "desc": "sysent[439].sy_call = _nosys 0xF6F048 (auth rebase, div=0xBCAD, next=2) [kcall10 low-risk]",
-    "va": 18446741874811397536,
-    "va_hex": "0xFFFFFE000774E5A0",
-    "foff_hex": "0x0074A5A0"
-  },
-  {
-    "method": "patch_kcall10",
-    "desc": "sysent[439].sy_return_type = 1 [kcall10 low-risk]",
-    "va": 18446741874811397552,
-    "va_hex": "0xFFFFFE000774E5B0",
-    "foff_hex": "0x0074A5B0"
-  },
-  {
-    "method": "patch_kcall10",
-    "desc": "sysent[439].sy_narg=0,sy_arg_bytes=0 [kcall10 low-risk]",
-    "va": 18446741874811397556,
-    "va_hex": "0xFFFFFE000774E5B4",
-    "foff_hex": "0x0074A5B4"
-  },
-  {
-    "method": "patch_load_dylinker",
-    "desc": "b #0x44 [_load_dylinker policy bypass]",
-    "va": 18446741874820906704,
-    "va_hex": "0xFFFFFE000805FED0",
-    "foff_hex": "0x0105BED0"
-  },
-  {
-    "method": "patch_mac_mount",
-    "desc": "NOP [___mac_mount deny branch]",
-    "va": 18446741874817057376,
-    "va_hex": "0xFFFFFE0007CB4260",
-    "foff_hex": "0x00CB0260"
-  },
-  {
-    "method": "patch_nvram_verify_permission",
-    "desc": "NOP [verifyPermission NVRAM]",
-    "va": 18446741874822876196,
-    "va_hex": "0xFFFFFE0008240C24",
-    "foff_hex": "0x0123CC24"
-  },
-  {
-    "method": "patch_post_validation_additional",
-    "desc": "cmp w0,w0 [postValidation additional fallback]",
-    "va": 18446741874827069280,
-    "va_hex": "0xFFFFFE0008640760",
-    "foff_hex": "0x0163C760"
-  },
-  {
-    "method": "patch_proc_pidinfo",
-    "desc": "NOP [_proc_pidinfo pid-0 guard A]",
-    "va": 18446741874820964152,
-    "va_hex": "0xFFFFFE000806DF38",
-    "foff_hex": "0x01069F38"
-  },
-  {
-    "method": "patch_proc_pidinfo",
-    "desc": "NOP [_proc_pidinfo pid-0 guard B]",
-    "va": 18446741874820964160,
-    "va_hex": "0xFFFFFE000806DF40",
-    "foff_hex": "0x01069F40"
-  },
-  {
-    "method": "patch_proc_security_policy",
-    "desc": "mov x0,#0 [_proc_security_policy]",
-    "va": 18446741874820974064,
-    "va_hex": "0xFFFFFE00080705F0",
-    "foff_hex": "0x0106C5F0"
-  },
-  {
-    "method": "patch_proc_security_policy",
-    "desc": "ret [_proc_security_policy]",
-    "va": 18446741874820974068,
-    "va_hex": "0xFFFFFE00080705F4",
-    "foff_hex": "0x0106C5F4"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_fsgetpath]",
-    "va": 18446741874841172760,
-    "va_hex": "0xFFFFFE00093B3B18",
-    "foff_hex": "0x023AFB18"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_fsgetpath]",
-    "va": 18446741874841172764,
-    "va_hex": "0xFFFFFE00093B3B1C",
-    "foff_hex": "0x023AFB1C"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_unlink]",
-    "va": 18446741874841178368,
-    "va_hex": "0xFFFFFE00093B5100",
-    "foff_hex": "0x023B1100"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_unlink]",
-    "va": 18446741874841178372,
-    "va_hex": "0xFFFFFE00093B5104",
-    "foff_hex": "0x023B1104"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_truncate]",
-    "va": 18446741874841179096,
-    "va_hex": "0xFFFFFE00093B53D8",
-    "foff_hex": "0x023B13D8"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_truncate]",
-    "va": 18446741874841179100,
-    "va_hex": "0xFFFFFE00093B53DC",
-    "foff_hex": "0x023B13DC"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_stat]",
-    "va": 18446741874841179456,
-    "va_hex": "0xFFFFFE00093B5540",
-    "foff_hex": "0x023B1540"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_stat]",
-    "va": 18446741874841179460,
-    "va_hex": "0xFFFFFE00093B5544",
-    "foff_hex": "0x023B1544"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_setutimes]",
-    "va": 18446741874841179816,
-    "va_hex": "0xFFFFFE00093B56A8",
-    "foff_hex": "0x023B16A8"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_setutimes]",
-    "va": 18446741874841179820,
-    "va_hex": "0xFFFFFE00093B56AC",
-    "foff_hex": "0x023B16AC"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_setowner]",
-    "va": 18446741874841180160,
-    "va_hex": "0xFFFFFE00093B5800",
-    "foff_hex": "0x023B1800"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_setowner]",
-    "va": 18446741874841180164,
-    "va_hex": "0xFFFFFE00093B5804",
-    "foff_hex": "0x023B1804"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_setmode]",
-    "va": 18446741874841180504,
-    "va_hex": "0xFFFFFE00093B5958",
-    "foff_hex": "0x023B1958"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_setmode]",
-    "va": 18446741874841180508,
-    "va_hex": "0xFFFFFE00093B595C",
-    "foff_hex": "0x023B195C"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_setflags]",
-    "va": 18446741874841181164,
-    "va_hex": "0xFFFFFE00093B5BEC",
-    "foff_hex": "0x023B1BEC"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_setflags]",
-    "va": 18446741874841181168,
-    "va_hex": "0xFFFFFE00093B5BF0",
-    "foff_hex": "0x023B1BF0"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_setextattr]",
-    "va": 18446741874841181780,
-    "va_hex": "0xFFFFFE00093B5E54",
-    "foff_hex": "0x023B1E54"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_setextattr]",
-    "va": 18446741874841181784,
-    "va_hex": "0xFFFFFE00093B5E58",
-    "foff_hex": "0x023B1E58"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_setattrlist]",
-    "va": 18446741874841182168,
-    "va_hex": "0xFFFFFE00093B5FD8",
-    "foff_hex": "0x023B1FD8"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_setattrlist]",
-    "va": 18446741874841182172,
-    "va_hex": "0xFFFFFE00093B5FDC",
-    "foff_hex": "0x023B1FDC"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_readlink]",
-    "va": 18446741874841183544,
-    "va_hex": "0xFFFFFE00093B6538",
-    "foff_hex": "0x023B2538"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_readlink]",
-    "va": 18446741874841183548,
-    "va_hex": "0xFFFFFE00093B653C",
-    "foff_hex": "0x023B253C"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_open]",
-    "va": 18446741874841183888,
-    "va_hex": "0xFFFFFE00093B6690",
-    "foff_hex": "0x023B2690"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_open]",
-    "va": 18446741874841183892,
-    "va_hex": "0xFFFFFE00093B6694",
-    "foff_hex": "0x023B2694"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_listextattr]",
-    "va": 18446741874841184472,
-    "va_hex": "0xFFFFFE00093B68D8",
-    "foff_hex": "0x023B28D8"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "ret [_hook_vnode_check_listextattr]",
-    "va": 18446741874841184476,
-    "va_hex": "0xFFFFFE00093B68DC",
-    "foff_hex": "0x023B28DC"
-  },
-  {
-    "method": "patch_sandbox_hooks_extended",
-    "desc": "mov x0,#0 [_hook_vnode_check_link]",
-    "va": 18446741874841184860,
-    "va_hex": "0xFFFFFE00093B6A5C",
|
||||
"foff_hex": "0x023B2A5C"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_link]",
|
||||
"va": 18446741874841184864,
|
||||
"va_hex": "0xFFFFFE00093B6A60",
|
||||
"foff_hex": "0x023B2A60"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_ioctl]",
|
||||
"va": 18446741874841186588,
|
||||
"va_hex": "0xFFFFFE00093B711C",
|
||||
"foff_hex": "0x023B311C"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_ioctl]",
|
||||
"va": 18446741874841186592,
|
||||
"va_hex": "0xFFFFFE00093B7120",
|
||||
"foff_hex": "0x023B3120"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_getextattr]",
|
||||
"va": 18446741874841187332,
|
||||
"va_hex": "0xFFFFFE00093B7404",
|
||||
"foff_hex": "0x023B3404"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_getextattr]",
|
||||
"va": 18446741874841187336,
|
||||
"va_hex": "0xFFFFFE00093B7408",
|
||||
"foff_hex": "0x023B3408"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_getattrlist]",
|
||||
"va": 18446741874841187680,
|
||||
"va_hex": "0xFFFFFE00093B7560",
|
||||
"foff_hex": "0x023B3560"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_getattrlist]",
|
||||
"va": 18446741874841187684,
|
||||
"va_hex": "0xFFFFFE00093B7564",
|
||||
"foff_hex": "0x023B3564"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_exchangedata]",
|
||||
"va": 18446741874841188128,
|
||||
"va_hex": "0xFFFFFE00093B7720",
|
||||
"foff_hex": "0x023B3720"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_exchangedata]",
|
||||
"va": 18446741874841188132,
|
||||
"va_hex": "0xFFFFFE00093B7724",
|
||||
"foff_hex": "0x023B3724"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_deleteextattr]",
|
||||
"va": 18446741874841189028,
|
||||
"va_hex": "0xFFFFFE00093B7AA4",
|
||||
"foff_hex": "0x023B3AA4"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_deleteextattr]",
|
||||
"va": 18446741874841189032,
|
||||
"va_hex": "0xFFFFFE00093B7AA8",
|
||||
"foff_hex": "0x023B3AA8"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_create]",
|
||||
"va": 18446741874841189416,
|
||||
"va_hex": "0xFFFFFE00093B7C28",
|
||||
"foff_hex": "0x023B3C28"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_create]",
|
||||
"va": 18446741874841189420,
|
||||
"va_hex": "0xFFFFFE00093B7C2C",
|
||||
"foff_hex": "0x023B3C2C"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_chroot]",
|
||||
"va": 18446741874841190132,
|
||||
"va_hex": "0xFFFFFE00093B7EF4",
|
||||
"foff_hex": "0x023B3EF4"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_chroot]",
|
||||
"va": 18446741874841190136,
|
||||
"va_hex": "0xFFFFFE00093B7EF8",
|
||||
"foff_hex": "0x023B3EF8"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_proc_check_set_cs_info2]",
|
||||
"va": 18446741874841190476,
|
||||
"va_hex": "0xFFFFFE00093B804C",
|
||||
"foff_hex": "0x023B404C"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_proc_check_set_cs_info2]",
|
||||
"va": 18446741874841190480,
|
||||
"va_hex": "0xFFFFFE00093B8050",
|
||||
"foff_hex": "0x023B4050"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_proc_check_set_cs_info]",
|
||||
"va": 18446741874841191576,
|
||||
"va_hex": "0xFFFFFE00093B8498",
|
||||
"foff_hex": "0x023B4498"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_proc_check_set_cs_info]",
|
||||
"va": 18446741874841191580,
|
||||
"va_hex": "0xFFFFFE00093B849C",
|
||||
"foff_hex": "0x023B449C"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_proc_check_get_cs_info]",
|
||||
"va": 18446741874841192124,
|
||||
"va_hex": "0xFFFFFE00093B86BC",
|
||||
"foff_hex": "0x023B46BC"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_proc_check_get_cs_info]",
|
||||
"va": 18446741874841192128,
|
||||
"va_hex": "0xFFFFFE00093B86C0",
|
||||
"foff_hex": "0x023B46C0"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_getattr]",
|
||||
"va": 18446741874841194768,
|
||||
"va_hex": "0xFFFFFE00093B9110",
|
||||
"foff_hex": "0x023B5110"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_getattr]",
|
||||
"va": 18446741874841194772,
|
||||
"va_hex": "0xFFFFFE00093B9114",
|
||||
"foff_hex": "0x023B5114"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "mov x0,#0 [_hook_vnode_check_exec]",
|
||||
"va": 18446741874841293164,
|
||||
"va_hex": "0xFFFFFE00093D116C",
|
||||
"foff_hex": "0x023CD16C"
|
||||
},
|
||||
{
|
||||
"method": "patch_sandbox_hooks_extended",
|
||||
"desc": "ret [_hook_vnode_check_exec]",
|
||||
"va": 18446741874841293168,
|
||||
"va_hex": "0xFFFFFE00093D1170",
|
||||
"foff_hex": "0x023CD170"
|
||||
},
|
||||
{
|
||||
"method": "patch_shared_region_map",
|
||||
"desc": "cmp x0,x0 [_shared_region_map_and_slide_setup]",
|
||||
"va": 18446741874821037596,
|
||||
"va_hex": "0xFFFFFE000807FE1C",
|
||||
"foff_hex": "0x0107BE1C"
|
||||
},
|
||||
{
|
||||
"method": "patch_spawn_validate_persona",
|
||||
"desc": "b #0x130 [_spawn_validate_persona gate]",
|
||||
"va": 18446741874820204720,
|
||||
"va_hex": "0xFFFFFE0007FB48B0",
|
||||
"foff_hex": "0x00FB08B0"
|
||||
},
|
||||
{
|
||||
"method": "patch_syscallmask_apply_to_proc",
|
||||
"desc": "mov x0,xzr [_syscallmask_apply_to_proc low-risk]",
|
||||
"va": 18446741874841151204,
|
||||
"va_hex": "0xFFFFFE00093AE6E4",
|
||||
"foff_hex": "0x023AA6E4"
|
||||
},
|
||||
{
|
||||
"method": "patch_syscallmask_apply_to_proc",
|
||||
"desc": "retab [_syscallmask_apply_to_proc low-risk]",
|
||||
"va": 18446741874841151208,
|
||||
"va_hex": "0xFFFFFE00093AE6E8",
|
||||
"foff_hex": "0x023AA6E8"
|
||||
},
|
||||
{
|
||||
"method": "patch_task_conversion_eval_internal",
|
||||
"desc": "cmp xzr,xzr [_task_conversion_eval_internal]",
|
||||
"va": 18446741874815337472,
|
||||
"va_hex": "0xFFFFFE0007B10400",
|
||||
"foff_hex": "0x00B0C400"
|
||||
},
|
||||
{
|
||||
"method": "patch_task_for_pid",
|
||||
"desc": "NOP [_task_for_pid proc_ro copy]",
|
||||
"va": 18446741874820567328,
|
||||
"va_hex": "0xFFFFFE000800D120",
|
||||
"foff_hex": "0x01009120"
|
||||
},
|
||||
{
|
||||
"method": "patch_thid_should_crash",
|
||||
"desc": "zero [_thid_should_crash]",
|
||||
"va": 18446741874810612552,
|
||||
"va_hex": "0xFFFFFE000768EB48",
|
||||
"foff_hex": "0x0068AB48"
|
||||
},
|
||||
{
|
||||
"method": "patch_vm_fault_enter_prepare",
|
||||
"desc": "NOP [_vm_fault_enter_prepare]",
|
||||
"va": 18446741874816027020,
|
||||
"va_hex": "0xFFFFFE0007BB898C",
|
||||
"foff_hex": "0x00BB498C"
|
||||
},
|
||||
{
|
||||
"method": "patch_vm_map_protect",
|
||||
"desc": "b #0x48C [_vm_map_protect]",
|
||||
"va": 18446741874816125352,
|
||||
"va_hex": "0xFFFFFE0007BD09A8",
|
||||
"foff_hex": "0x00BCC9A8"
|
||||
}
|
||||
]
|
||||
@@ -1,85 +0,0 @@
0xFFFFFE000768EB48
0xFFFFFE000774E5A0
0xFFFFFE000774E5B0
0xFFFFFE000774E5B4
0xFFFFFE0007B10400
0xFFFFFE0007B12100
0xFFFFFE0007BB898C
0xFFFFFE0007BD09A8
0xFFFFFE0007CB4260
0xFFFFFE0007CB75B0
0xFFFFFE0007FB09DC
0xFFFFFE0007FB48B0
0xFFFFFE000800D120
0xFFFFFE000805FED0
0xFFFFFE000806DF38
0xFFFFFE000806DF40
0xFFFFFE00080705F0
0xFFFFFE00080705F4
0xFFFFFE000807FE1C
0xFFFFFE0008240C24
0xFFFFFE000836E1F0
0xFFFFFE0008640760
0xFFFFFE0008645B10
0xFFFFFE0008645B14
0xFFFFFE0008645B18
0xFFFFFE0008645B1C
0xFFFFFE000864DF00
0xFFFFFE000864DF04
0xFFFFFE000864E38C
0xFFFFFE00093AE6E4
0xFFFFFE00093AE6E8
0xFFFFFE00093B3B18
0xFFFFFE00093B3B1C
0xFFFFFE00093B5100
0xFFFFFE00093B5104
0xFFFFFE00093B53D8
0xFFFFFE00093B53DC
0xFFFFFE00093B5540
0xFFFFFE00093B5544
0xFFFFFE00093B56A8
0xFFFFFE00093B56AC
0xFFFFFE00093B5800
0xFFFFFE00093B5804
0xFFFFFE00093B5958
0xFFFFFE00093B595C
0xFFFFFE00093B5BEC
0xFFFFFE00093B5BF0
0xFFFFFE00093B5E54
0xFFFFFE00093B5E58
0xFFFFFE00093B5FD8
0xFFFFFE00093B5FDC
0xFFFFFE00093B6538
0xFFFFFE00093B653C
0xFFFFFE00093B6690
0xFFFFFE00093B6694
0xFFFFFE00093B68D8
0xFFFFFE00093B68DC
0xFFFFFE00093B6A5C
0xFFFFFE00093B6A60
0xFFFFFE00093B711C
0xFFFFFE00093B7120
0xFFFFFE00093B7404
0xFFFFFE00093B7408
0xFFFFFE00093B7560
0xFFFFFE00093B7564
0xFFFFFE00093B7720
0xFFFFFE00093B7724
0xFFFFFE00093B7AA4
0xFFFFFE00093B7AA8
0xFFFFFE00093B7C28
0xFFFFFE00093B7C2C
0xFFFFFE00093B7EF4
0xFFFFFE00093B7EF8
0xFFFFFE00093B804C
0xFFFFFE00093B8050
0xFFFFFE00093B8498
0xFFFFFE00093B849C
0xFFFFFE00093B86BC
0xFFFFFE00093B86C0
0xFFFFFE00093B9110
0xFFFFFE00093B9114
0xFFFFFE00093D116C
0xFFFFFE00093D1170
0xFFFFFE00093D2CE8
0xFFFFFE00093D2CEC
File diff suppressed because it is too large

scripts/boot_host_preflight.sh (155 lines, new file)
@@ -0,0 +1,155 @@
#!/bin/zsh
# boot_host_preflight.sh — Diagnose whether the host can launch the signed
# vphone-cli binary required for PV=3 virtualization boot/DFU flows.

set -euo pipefail

SCRIPT_DIR="${0:A:h}"
PROJECT_ROOT="${SCRIPT_DIR:h}"

ASSERT_BOOTABLE=0
QUIET=0

while [[ $# -gt 0 ]]; do
  case "$1" in
    --assert-bootable)
      ASSERT_BOOTABLE=1
      shift
      ;;
    --quiet)
      QUIET=1
      shift
      ;;
    *)
      echo "Unknown option: $1" >&2
      exit 2
      ;;
  esac
done

cd "$PROJECT_ROOT"

RELEASE_BIN="${PROJECT_ROOT}/.build/release/vphone-cli"
DEBUG_BIN="${PROJECT_ROOT}/.build/debug/vphone-cli"
ENTITLEMENTS="${PROJECT_ROOT}/sources/vphone.entitlements"
TMP_DIR="$(mktemp -d "${TMPDIR:-/tmp}/vphone-preflight.XXXXXX")"
TMP_SIGNED_DEBUG="${TMP_DIR}/vphone-cli.debug.signed"

cleanup() {
  rm -rf "$TMP_DIR"
}
trap cleanup EXIT

print_section() {
  (( QUIET == 0 )) || return 0
  echo ""
  echo "=== $1 ==="
}

run_capture() {
  local label="$1"
  shift

  local log_file="${TMP_DIR}/${label}.log"
  set +e
  "$@" >"$log_file" 2>&1
  local rc=$?
  set -e

  (( QUIET == 0 )) && echo "[${label}] exit=${rc}"
  if (( QUIET == 0 )) && [[ -s "$log_file" ]]; then
    sed -n '1,40p' "$log_file"
  fi
  return "$rc"
}

MODEL_NAME="$(system_profiler SPHardwareDataType 2>/dev/null | awk -F': ' '/Model Name/ {print $2; exit}')"
HV_VMM_PRESENT="$(sysctl -n kern.hv_vmm_present 2>/dev/null || true)"
SIP_STATUS="$(csrutil status)"
RESEARCH_GUEST_STATUS="$(csrutil allow-research-guests status)"
CURRENT_BOOT_ARGS="$(sysctl -n kern.bootargs 2>/dev/null || true)"
NEXT_BOOT_ARGS="$(nvram boot-args 2>/dev/null | sed 's/^boot-args[[:space:]]*//')"
ASSESSMENT_STATUS="$(spctl --status 2>/dev/null || true)"

print_section "Host"
sw_vers
echo "model: $MODEL_NAME"
echo "kern.hv_vmm_present: $HV_VMM_PRESENT"
echo "SIP: $SIP_STATUS"
echo "allow-research-guests: $RESEARCH_GUEST_STATUS"
echo "current kern.bootargs: $CURRENT_BOOT_ARGS"
echo "next-boot nvram boot-args: $NEXT_BOOT_ARGS"
echo "assessment: $ASSESSMENT_STATUS"

if (( ASSERT_BOOTABLE == 1 )); then
  if [[ "$HV_VMM_PRESENT" == "1" ]] || [[ "$MODEL_NAME" == "Apple Virtual Machine 1" ]]; then
    (( QUIET == 0 )) && {
      echo ""
      echo "Error: nested Apple VM host detected; Virtualization.framework guest boot is unavailable here." >&2
    }
    exit 3
  fi
fi

print_section "Entitlements"
if [[ -f "$RELEASE_BIN" ]]; then
  codesign -d --entitlements :- "$RELEASE_BIN" 2>/dev/null || true
else
  echo "missing release binary: $RELEASE_BIN"
fi

print_section "Policy"
if [[ -f "$RELEASE_BIN" ]]; then
  spctl --assess --type execute --verbose=4 "$RELEASE_BIN" 2>&1 || true
fi

print_section "Unsigned Debug Binary"
if [[ ! -f "$DEBUG_BIN" ]]; then
  echo "missing debug binary: $DEBUG_BIN"
  exit 1
fi
set +e
run_capture "debug_help" "$DEBUG_BIN" --help
DEBUG_HELP_RC=$?
set -e

print_section "Signed Release Binary"
if [[ ! -f "$RELEASE_BIN" ]]; then
  echo "missing release binary: $RELEASE_BIN"
  exit 1
fi
set +e
run_capture "release_help" "$RELEASE_BIN" --help
RELEASE_HELP_RC=$?
set -e

print_section "Signed Debug Control"
cp "$DEBUG_BIN" "$TMP_SIGNED_DEBUG"
codesign --force --sign - --entitlements "$ENTITLEMENTS" "$TMP_SIGNED_DEBUG" >/dev/null
set +e
run_capture "signed_debug_help" "$TMP_SIGNED_DEBUG" --help
SIGNED_DEBUG_HELP_RC=$?
set -e

print_section "Result"
echo "If unsigned debug runs but either signed binary exits 137 / signal 9,"
echo "the host is not currently permitting the required private virtualization entitlements."
echo "If the signed release binary exits 0 but the signed debug control still exits 137,"
echo "a path/CDHash-scoped amfidont bypass may already be active for this repo."
echo "Typical requirements for this project are:"
echo "  1. macOS 15+ with PV=3 support"
echo "  2. Host hardware must expose Virtualization.framework VM support (not a nested VM without virtualization availability)"
echo "  3. SIP disabled"
echo "  4. allow-research-guests enabled in Recovery OS"
echo "  5. AMFI / execution policy state that permits the private entitlements"
echo "  6. Gatekeeper / assessment configured so the signed binary is launchable"

if (( ASSERT_BOOTABLE == 1 )); then
  if (( RELEASE_HELP_RC != 0 )); then
    (( QUIET == 0 )) && {
      echo ""
      echo "Error: signed release vphone-cli is not launchable on this host (exit $RELEASE_HELP_RC)." >&2
    }
    exit "$RELEASE_HELP_RC"
  fi
fi
scripts/dtree.py (284 lines, deleted)
@@ -1,284 +0,0 @@
#!/usr/bin/env python3
"""Patch DeviceTree IM4P with a fixed property set."""

import argparse
import sys
from dataclasses import dataclass, field
from pathlib import Path

from pyimg4 import IM4P


PATCHES = [
    {
        "node_path": ["device-tree"],
        "prop": "serial-number",
        "length": 12,
        "flags": 0,
        "kind": "string",
        "value": "vphone-1337",
    },
    {
        "node_path": ["device-tree", "buttons"],
        "prop": "home-button-type",
        "length": 4,
        "flags": 0,
        "kind": "int",
        "value": 2,
    },
    {
        "node_path": ["device-tree", "product"],
        "prop": "artwork-device-subtype",
        "length": 4,
        "flags": 0,
        "kind": "int",
        "value": 2556,
    },
    {
        "node_path": ["device-tree", "product"],
        "prop": "island-notch-location",
        "length": 4,
        "flags": 0,
        "kind": "int",
        "value": 144,
    },
]


@dataclass
class DTProperty:
    name: str
    length: int
    flags: int
    value: bytes


@dataclass
class DTNode:
    properties: list[DTProperty] = field(default_factory=list)
    children: list["DTNode"] = field(default_factory=list)


def _align4(n: int) -> int:
    return (n + 3) & ~3


def _decode_cstr(data: bytes) -> str:
    return data.split(b"\x00", 1)[0].decode("utf-8", errors="ignore")


def _encode_name(name: str) -> bytes:
    raw = name.encode("ascii")
    if len(raw) >= 32:
        raise RuntimeError(f"property name too long: {name}")
    return raw + (b"\x00" * (32 - len(raw)))


def _parse_node(blob: bytes, offset: int) -> tuple[DTNode, int]:
    if offset + 8 > len(blob):
        raise RuntimeError("truncated node header")

    n_props = int.from_bytes(blob[offset : offset + 4], "little")
    n_children = int.from_bytes(blob[offset + 4 : offset + 8], "little")
    offset += 8

    node = DTNode()

    for _ in range(n_props):
        if offset + 36 > len(blob):
            raise RuntimeError("truncated property header")

        name = _decode_cstr(blob[offset : offset + 32])
        length = int.from_bytes(blob[offset + 32 : offset + 34], "little")
        flags = int.from_bytes(blob[offset + 34 : offset + 36], "little")
        offset += 36

        if offset + length > len(blob):
            raise RuntimeError(f"truncated property value: {name}")

        value = blob[offset : offset + length]
        offset += _align4(length)
        node.properties.append(DTProperty(name=name, length=length, flags=flags, value=value))

    for _ in range(n_children):
        child, offset = _parse_node(blob, offset)
        node.children.append(child)

    return node, offset


def _parse_payload(blob: bytes) -> DTNode:
    root, end = _parse_node(blob, 0)
    if end != len(blob):
        raise RuntimeError(f"unexpected trailing payload bytes: {len(blob) - end}")
    return root


def _serialize_node(node: DTNode) -> bytes:
    out = bytearray()
    out += len(node.properties).to_bytes(4, "little")
    out += len(node.children).to_bytes(4, "little")

    for prop in node.properties:
        out += _encode_name(prop.name)
        out += int(prop.length & 0xFFFF).to_bytes(2, "little")
        out += int(prop.flags & 0xFFFF).to_bytes(2, "little")
        out += prop.value

        pad = _align4(prop.length) - prop.length
        if pad:
            out += b"\x00" * pad

    for child in node.children:
        out += _serialize_node(child)

    return bytes(out)


def _get_prop(node: DTNode, prop_name: str) -> DTProperty:
    for prop in node.properties:
        if prop.name == prop_name:
            return prop
    raise RuntimeError(f"missing property: {prop_name}")


def _node_name(node: DTNode) -> str:
    for prop in node.properties:
        if prop.name == "name":
            return _decode_cstr(prop.value)
    return ""


def _find_child(node: DTNode, child_name: str) -> DTNode:
    for child in node.children:
        if _node_name(child) == child_name:
            return child
    raise RuntimeError(f"missing child node: {child_name}")


def _resolve_node(root: DTNode, node_path: list[str]) -> DTNode:
    if not node_path or node_path[0] != "device-tree":
        raise RuntimeError(f"invalid path: {node_path}")
    node = root
    for name in node_path[1:]:
        node = _find_child(node, name)
    return node


def _encode_fixed_string(text: str, length: int) -> bytes:
    raw = text.encode("utf-8") + b"\x00"
    if len(raw) > length:
        return raw[:length]
    return raw + (b"\x00" * (length - len(raw)))


def _encode_int(value: int, length: int) -> bytes:
    if length not in (1, 2, 4, 8):
        raise RuntimeError(f"unsupported integer length: {length}")
    return int(value).to_bytes(length, "little", signed=False)


def _apply_patches(root: DTNode) -> None:
    for patch in PATCHES:
        node = _resolve_node(root, patch["node_path"])
        prop = _get_prop(node, patch["prop"])

        prop.length = int(patch["length"])
        prop.flags = int(patch["flags"])

        if patch["kind"] == "string":
            prop.value = _encode_fixed_string(str(patch["value"]), prop.length)
        elif patch["kind"] == "int":
            prop.value = _encode_int(int(patch["value"]), prop.length)
        else:
            raise RuntimeError(f"unsupported patch kind: {patch['kind']}")


def patch_device_tree_payload(payload: bytes | bytearray) -> bytes:
    root = _parse_payload(bytes(payload))
    _apply_patches(root)
    return _serialize_node(root)


def _load_input_payload(input_path: Path) -> bytes:
    if input_path.suffix.lower() == ".dtb":
        return input_path.read_bytes()
    if input_path.suffix.lower() != ".im4p":
        raise RuntimeError("input must be .im4p or .dtb")

    raw = input_path.read_bytes()
    im4p = IM4P(raw)
    if im4p.payload.compression:
        im4p.payload.decompress()
    return bytes(im4p.payload.data)


def _der_len(length: int) -> bytes:
    if length < 0:
        raise RuntimeError("negative DER length")
    if length < 0x80:
        return bytes([length])

    raw = bytearray()
    while length:
        raw.append(length & 0xFF)
        length >>= 8
    raw.reverse()
    return bytes([0x80 | len(raw)]) + bytes(raw)


def _der_tlv(tag: int, value: bytes) -> bytes:
    return bytes([tag]) + _der_len(len(value)) + value


def _build_im4p_der(fourcc: str, description: bytes, payload: bytes) -> bytes:
    if len(fourcc) != 4:
        raise RuntimeError(f"invalid IM4P fourcc: {fourcc!r}")
    if len(description) == 0:
        description = b""

    body = bytearray()
    body += _der_tlv(0x16, b"IM4P")  # IA5String
    body += _der_tlv(0x16, fourcc.encode("ascii"))  # IA5String
    body += _der_tlv(0x16, description)  # IA5String
    body += _der_tlv(0x04, payload)  # OCTET STRING
    return _der_tlv(0x30, bytes(body))  # SEQUENCE


def patch_dtree_file(
    input_file: str | Path,
    output_file: str | Path,
) -> Path:
    input_path = Path(input_file).expanduser().resolve()
    output_path = Path(output_file).expanduser().resolve()

    output_path.parent.mkdir(parents=True, exist_ok=True)

    payload = _load_input_payload(input_path)
    patched_payload = patch_device_tree_payload(payload)

    output_path.write_bytes(_build_im4p_der("dtre", b"", patched_payload))

    return output_path


def main() -> int:
    parser = argparse.ArgumentParser(description="Patch DeviceTree IM4P with fixed values")
    parser.add_argument("input", help="Path to DeviceTree .im4p or .dtb")
    parser.add_argument("output", help="Output DeviceTree .im4p")
    args = parser.parse_args()

    output_path = patch_dtree_file(
        input_file=args.input,
        output_file=args.output,
    )
    print(f"[+] wrote: {output_path}")
    return 0


if __name__ == "__main__":
    try:
        raise SystemExit(main())
    except RuntimeError as exc:
        print(f"[!] {exc}", file=sys.stderr)
        raise SystemExit(1)
@@ -1,240 +0,0 @@
#!/usr/bin/env python3
"""Generate patch reference JSON for each firmware component.

Runs each Python patcher in dry-run mode (find patches but don't apply)
and exports the patch sites with offsets and bytes as JSON.

Usage:
    source .venv/bin/activate
    python3 scripts/export_patch_reference.py ipsws/patch_refactor_input
"""

import json
import os
import struct
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__)))

from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN

_cs = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
_cs.detail = True


def disasm_one(data, off):
    insns = list(_cs.disasm(bytes(data[off:off + 4]), off))
    return insns[0] if insns else None


def disasm_bytes(b, addr=0):
    insns = list(_cs.disasm(bytes(b), addr))
    if insns:
        return f"{insns[0].mnemonic} {insns[0].op_str}"
    return "???"


def patches_to_json(patches, component):
    """Convert list of (offset, patch_bytes, description) to JSON-serializable records."""
    records = []
    for off, pb, desc in patches:
        records.append({
            "file_offset": off,
            "patch_bytes": pb.hex(),
            "patch_size": len(pb),
            "description": desc,
            "component": component,
        })
    return records


def load_firmware(path):
    """Load firmware file, decompress IM4P if needed."""
    with open(path, "rb") as f:
        raw = f.read()
    try:
        from pyimg4 import IM4P
        im4p = IM4P(raw)
        if im4p.payload.compression:
            im4p.payload.decompress()
        return bytearray(im4p.payload.data)
    except Exception:
        return bytearray(raw)


def export_avpbooter(base_dir, out_dir):
    """Export AVPBooter patch reference."""
    import glob
    paths = glob.glob(os.path.join(base_dir, "AVPBooter*.bin"))
    if not paths:
        print(" [!] AVPBooter not found, skipping")
        return

    path = paths[0]
    data = bytearray(open(path, "rb").read())
    print(f" AVPBooter: {path} ({len(data)} bytes)")

    # Inline the AVPBooter patcher logic (from fw_patch.py)
    from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN
    _ks = Ks(KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN)

    def asm(s):
        enc, _ = _ks.asm(s)
        return bytes(enc)

    patches = []
    DGST = struct.pack("<I", 0x44475354)
    off = data.find(DGST)
    if off < 0:
        print(" [!] AVPBooter: DGST marker not found")
        return

    insns = list(_cs.disasm(bytes(data[off:off + 0x200]), off, 50))
    for i, ins in enumerate(insns):
        if ins.mnemonic == "ret":
            prev = insns[i - 1] if i > 0 else None
            if prev and prev.mnemonic == "mov" and "x0" in prev.op_str:
                patches.append((prev.address, asm("mov x0, #0"),
                                "AVPBooter DGST bypass: mov x0, #0"))
            break

    records = patches_to_json(patches, "avpbooter")
    out_path = os.path.join(out_dir, "avpbooter.json")
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    print(f" → {out_path} ({len(records)} patches)")


def export_iboot(base_dir, out_dir):
    """Export iBSS/iBEC/LLB patch references."""
    from patchers.iboot import IBootPatcher

    components = [
        ("ibss", "Firmware/dfu/iBSS.vresearch101.RELEASE.im4p"),
        ("ibec", "Firmware/dfu/iBEC.vresearch101.RELEASE.im4p"),
        ("llb", "Firmware/all_flash/LLB.vresearch101.RELEASE.im4p"),
    ]

    for mode, rel_path in components:
        path = os.path.join(base_dir, rel_path)
        if not os.path.exists(path):
            print(f" [!] {mode}: {rel_path} not found, skipping")
            continue

        data = load_firmware(path)
        print(f" {mode}: {rel_path} ({len(data)} bytes)")

        patcher = IBootPatcher(data, mode=mode, verbose=True)
        patcher.find_all()
        records = patches_to_json(patcher.patches, mode)

        out_path = os.path.join(out_dir, f"{mode}.json")
        with open(out_path, "w") as f:
            json.dump(records, f, indent=2)
        print(f" → {out_path} ({len(records)} patches)")


def export_txm(base_dir, out_dir):
    """Export TXM patch reference."""
    from patchers.txm import TXMPatcher as TXMBasePatcher

    path = os.path.join(base_dir, "Firmware/txm.iphoneos.research.im4p")
    if not os.path.exists(path):
        print(" [!] TXM not found, skipping")
        return

    data = load_firmware(path)
    print(f" TXM: ({len(data)} bytes)")

    patcher = TXMBasePatcher(data, verbose=True)
    patcher.find_all()
    records = patches_to_json(patcher.patches, "txm")

    out_path = os.path.join(out_dir, "txm.json")
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    print(f" → {out_path} ({len(records)} patches)")


def export_kernel(base_dir, out_dir):
    """Export kernel patch reference."""
    from patchers.kernel import KernelPatcher

    path = os.path.join(base_dir, "kernelcache.research.vphone600")
    if not os.path.exists(path):
        print(" [!] kernelcache not found, skipping")
        return

    data = load_firmware(path)
    print(f" kernelcache: ({len(data)} bytes)")

    patcher = KernelPatcher(data, verbose=True)
    patcher.find_all()
    records = patches_to_json(patcher.patches, "kernelcache")

    out_path = os.path.join(out_dir, "kernelcache.json")
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    print(f" → {out_path} ({len(records)} patches)")


def export_dtree(base_dir, out_dir):
    """Export DeviceTree patch reference."""
    import dtree

    path = os.path.join(base_dir, "Firmware/all_flash/DeviceTree.vphone600ap.im4p")
    if not os.path.exists(path):
        print(" [!] DeviceTree not found, skipping")
        return

    data = load_firmware(path)
    print(f" DeviceTree: ({len(data)} bytes)")

    # dtree.patch_device_tree_payload returns list of patches
    patches = dtree.find_patches(data)
    records = []
    for off, old_bytes, new_bytes, desc in patches:
        records.append({
            "file_offset": off,
            "original_bytes": old_bytes.hex() if isinstance(old_bytes, (bytes, bytearray)) else old_bytes,
            "patch_bytes": new_bytes.hex() if isinstance(new_bytes, (bytes, bytearray)) else new_bytes,
            "patch_size": len(new_bytes) if isinstance(new_bytes, (bytes, bytearray)) else 0,
            "description": desc,
            "component": "devicetree",
        })

    out_path = os.path.join(out_dir, "devicetree.json")
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    print(f" → {out_path} ({len(records)} patches)")


def main():
    if len(sys.argv) < 2:
        print(f"Usage: {sys.argv[0]} <firmware_dir>")
        sys.exit(1)

    base_dir = os.path.abspath(sys.argv[1])
    out_dir = os.path.join(base_dir, "reference_patches")
    os.makedirs(out_dir, exist_ok=True)

    print(f"=== Exporting patch references from {base_dir} ===\n")

    # Change to scripts dir so imports work
    os.chdir(os.path.join(os.path.dirname(__file__)))
|
||||
|
||||
export_avpbooter(base_dir, out_dir)
|
||||
print()
|
||||
export_iboot(base_dir, out_dir)
|
||||
print()
|
||||
export_txm(base_dir, out_dir)
|
||||
print()
|
||||
export_kernel(base_dir, out_dir)
|
||||
print()
|
||||
# DeviceTree needs special handling - the dtree.py may not have find_patches
|
||||
# We'll handle it separately
|
||||
print(f"\n=== Done. References saved to {out_dir}/ ===")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
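The exporters above emit JSON records whose schema is defined by `patches_to_json` (`file_offset`, hex `patch_bytes`, `patch_size`, `description`, `component`). As a minimal sketch of how such a reference file can be replayed onto a firmware buffer, assuming that record schema (`apply_reference_patches` is a hypothetical helper, not part of the repo):

```python
def apply_reference_patches(data: bytearray, records: list) -> int:
    """Apply each exported patch record in place; return the count applied.

    Hypothetical helper; the record schema mirrors patches_to_json above.
    """
    applied = 0
    for rec in records:
        off = rec["file_offset"]
        pb = bytes.fromhex(rec["patch_bytes"])
        # patch_size is redundant with patch_bytes; use it as a sanity check.
        assert len(pb) == rec["patch_size"], "record is self-inconsistent"
        data[off : off + len(pb)] = pb
        applied += 1
    return applied

buf = bytearray(b"\x00" * 16)
records = [
    {"file_offset": 4, "patch_bytes": "deadbeef", "patch_size": 4,
     "description": "example", "component": "demo"},
]
n = apply_reference_patches(buf, records)
```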
@@ -1,150 +0,0 @@
#!/usr/bin/env python3
"""Generate patch reference JSON for ALL variants (regular + dev + jb).

Usage:
    source .venv/bin/activate
    python3 scripts/export_patch_reference_all.py ipsws/patch_refactor_input
"""

import json
import os
import struct
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__)))

from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN

_cs = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
_cs.detail = True


def disasm_one(data, off):
    insns = list(_cs.disasm(bytes(data[off:off + 4]), off))
    return insns[0] if insns else None


def patches_to_json(patches, component):
    records = []
    for off, pb, desc in patches:
        records.append({
            "file_offset": off,
            "patch_bytes": pb.hex(),
            "patch_size": len(pb),
            "description": desc,
            "component": component,
        })
    return records


def load_firmware(path):
    with open(path, "rb") as f:
        raw = f.read()
    try:
        from pyimg4 import IM4P
        im4p = IM4P(raw)
        if im4p.payload.compression:
            im4p.payload.decompress()
        return bytearray(im4p.payload.data)
    except Exception:
        return bytearray(raw)


def export_txm_dev(base_dir, out_dir):
    """Export TXM dev patch reference (base + dev patches)."""
    from patchers.txm import TXMPatcher as TXMBasePatcher
    from patchers.txm_dev import TXMPatcher as TXMDevPatcher

    path = os.path.join(base_dir, "Firmware/txm.iphoneos.research.im4p")
    if not os.path.exists(path):
        print(" [!] TXM not found, skipping txm_dev")
        return

    data = load_firmware(path)
    print(f" TXM dev: ({len(data)} bytes)")

    # Base TXM patches
    base = TXMBasePatcher(data, verbose=True)
    base.find_all()
    base_records = patches_to_json(base.patches, "txm_dev_base")

    # Dev TXM patches (on same data, without applying base)
    dev = TXMDevPatcher(bytearray(data), verbose=True)
    dev.find_all()
    dev_records = patches_to_json(dev.patches, "txm_dev")

    out_path = os.path.join(out_dir, "txm_dev.json")
    with open(out_path, "w") as f:
        json.dump({"base": base_records, "dev": dev_records}, f, indent=2)
    print(f" → {out_path} ({len(base_records)} base + {len(dev_records)} dev patches)")


def export_iboot_jb(base_dir, out_dir):
    """Export iBSS JB patch reference."""
    from patchers.iboot_jb import IBootJBPatcher

    path = os.path.join(base_dir, "Firmware/dfu/iBSS.vresearch101.RELEASE.im4p")
    if not os.path.exists(path):
        print(" [!] iBSS not found, skipping iboot_jb")
        return

    data = load_firmware(path)
    print(f" iBSS JB: ({len(data)} bytes)")

    patcher = IBootJBPatcher(data, mode="ibss", verbose=True)
    # Only find JB patches (not base)
    patcher.patches = []
    patcher.patch_skip_generate_nonce()
    records = patches_to_json(patcher.patches, "ibss_jb")

    out_path = os.path.join(out_dir, "ibss_jb.json")
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    print(f" → {out_path} ({len(records)} patches)")


def export_kernel_jb(base_dir, out_dir):
    """Export kernel JB patch reference."""
    from patchers.kernel_jb import KernelJBPatcher

    path = os.path.join(base_dir, "kernelcache.research.vphone600")
    if not os.path.exists(path):
        print(" [!] kernelcache not found, skipping kernel_jb")
        return

    data = load_firmware(path)
    print(f" kernelcache JB: ({len(data)} bytes)")

    patcher = KernelJBPatcher(data, verbose=True)
    patches = patcher.find_all()
    records = patches_to_json(patches, "kernelcache_jb")

    out_path = os.path.join(out_dir, "kernelcache_jb.json")
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    print(f" → {out_path} ({len(records)} patches)")


def main():
    if len(sys.argv) < 2:
        print(f"Usage: {sys.argv[0]} <firmware_dir>")
        sys.exit(1)

    base_dir = os.path.abspath(sys.argv[1])
    out_dir = os.path.join(base_dir, "reference_patches")
    os.makedirs(out_dir, exist_ok=True)

    print(f"=== Exporting dev/jb patch references from {base_dir} ===\n")
    os.chdir(os.path.join(os.path.dirname(__file__)))

    export_txm_dev(base_dir, out_dir)
    print()
    export_iboot_jb(base_dir, out_dir)
    print()
    export_kernel_jb(base_dir, out_dir)

    print(f"\n=== Done. References saved to {out_dir}/ ===")


if __name__ == "__main__":
    main()
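The commit message says the Swift firmware pipeline was verified for end-to-end parity against these Python exporters. A sketch of how two such record lists could be cross-checked, assuming the `patches_to_json` schema above (`diff_reference` is a hypothetical helper, not part of the repo):

```python
def diff_reference(a_records, b_records):
    """Return file offsets whose patch bytes differ or are absent in `a`.

    Hypothetical parity check over patches_to_json-style records.
    """
    by_off = {r["file_offset"]: r["patch_bytes"] for r in a_records}
    return [r["file_offset"] for r in b_records
            if by_off.get(r["file_offset"]) != r["patch_bytes"]]

# Toy data: b has one extra patch site that a lacks.
a = [{"file_offset": 0x40, "patch_bytes": "e0031faa"}]
b = [{"file_offset": 0x40, "patch_bytes": "e0031faa"},
     {"file_offset": 0x80, "patch_bytes": "1f2003d5"}]
mismatches = diff_reference(a, b)
```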
@@ -168,7 +168,7 @@ def main():
    m["SEP"] = entry(C, VP, "SEP")
    m["RestoreSEP"] = entry(C, VP, "RestoreSEP")

    # ── Kernel (vphone600, patched by fw_patch.py) ────────────────────
    # ── Kernel (vphone600, patched by the Swift firmware pipeline) ─────
    m["KernelCache"] = entry(C, VPR, "KernelCache")  # research
    m["RestoreKernelCache"] = entry(C, VP, "RestoreKernelCache")  # release
@@ -1,352 +0,0 @@
#!/usr/bin/env python3
"""
patch_firmware.py — Patch all boot-chain components for vphone600.

Run this AFTER prepare_firmware.sh from the VM directory.

Usage:
    python3 patch_firmware.py [vm_directory]

vm_directory defaults to the current working directory.
The script auto-discovers the iPhone*_Restore directory and all
firmware files by searching for known patterns.

Components patched (ALL dynamically — no hardcoded offsets):
  1. AVPBooter — DGST validation bypass (mov x0, #0)
  2. iBSS — serial labels + image4 callback bypass
  3. iBEC — serial labels + image4 callback + boot-args
  4. LLB — serial labels + image4 callback + boot-args + rootfs + panic
  5. TXM — trustcache bypass (mov x0, #0)
  6. kernelcache — 25 patches (APFS, MAC, debugger, launch constraints, etc.)
  7. patch_dtree — vphone600 DeviceTree patch + repack

Dependencies:
    pip install keystone-engine capstone pyimg4
"""

import sys, os, glob, subprocess, tempfile

from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN
from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE
from pyimg4 import IM4P

from patchers.kernel import KernelPatcher
from patchers.iboot import IBootPatcher
from patchers.txm import TXMPatcher
from dtree import patch_device_tree_payload

# ══════════════════════════════════════════════════════════════════
# Assembler helpers (for AVPBooter only — iBoot/TXM/kernel are
# handled by their own patcher classes)
# ══════════════════════════════════════════════════════════════════

_ks = Ks(KS_ARCH_ARM64, KS_MODE_LE)


def _asm(s):
    enc, _ = _ks.asm(s)
    if not enc:
        raise RuntimeError(f"asm failed: {s}")
    return bytes(enc)


MOV_X0_0 = _asm("mov x0, #0")
RET_MNEMONICS = {"ret", "retaa", "retab"}


# ══════════════════════════════════════════════════════════════════
# IM4P / raw file helpers — auto-detect format
# ══════════════════════════════════════════════════════════════════


def load_firmware(path):
    """Load firmware file, auto-detecting IM4P vs raw.

    Returns (im4p_or_None, raw_bytearray, is_im4p_bool, original_bytes).
    """
    with open(path, "rb") as f:
        raw = f.read()

    try:
        im4p = IM4P(raw)
        if im4p.payload.compression:
            im4p.payload.decompress()
        return im4p, bytearray(im4p.payload.data), True, raw
    except Exception:
        return None, bytearray(raw), False, raw


def save_firmware(path, im4p_obj, patched_data, was_im4p, original_raw=None):
    """Save patched firmware, repackaging as IM4P if the original was IM4P."""
    if was_im4p and im4p_obj is not None:
        if original_raw is not None:
            _save_im4p_with_payp(path, im4p_obj.fourcc, patched_data, original_raw)
        else:
            new_im4p = IM4P(
                fourcc=im4p_obj.fourcc,
                description=im4p_obj.description,
                payload=bytes(patched_data),
            )
            with open(path, "wb") as f:
                f.write(new_im4p.output())
    else:
        with open(path, "wb") as f:
            f.write(patched_data)


def _save_im4p_with_payp(path, fourcc, patched_data, original_raw):
    """Repackage as lzfse-compressed IM4P and append PAYP from original."""
    with (
        tempfile.NamedTemporaryFile(suffix=".raw", delete=False) as tmp_raw,
        tempfile.NamedTemporaryFile(suffix=".im4p", delete=False) as tmp_im4p,
    ):
        tmp_raw_path = tmp_raw.name
        tmp_im4p_path = tmp_im4p.name
        tmp_raw.write(bytes(patched_data))

    try:
        subprocess.run(
            [
                "pyimg4",
                "im4p",
                "create",
                "-i",
                tmp_raw_path,
                "-o",
                tmp_im4p_path,
                "-f",
                fourcc,
                "--lzfse",
            ],
            check=True,
            capture_output=True,
        )
        output = bytearray(open(tmp_im4p_path, "rb").read())
    finally:
        os.unlink(tmp_raw_path)
        os.unlink(tmp_im4p_path)

    payp_offset = original_raw.rfind(b"PAYP")
    if payp_offset >= 0:
        payp_data = original_raw[payp_offset - 10 :]
        output.extend(payp_data)
        old_len = int.from_bytes(output[2:5], "big")
        output[2:5] = (old_len + len(payp_data)).to_bytes(3, "big")
        print(f" [+] preserved PAYP ({len(payp_data)} bytes)")

    with open(path, "wb") as f:
        f.write(output)


# ══════════════════════════════════════════════════════════════════
# Per-component patch functions
# ══════════════════════════════════════════════════════════════════

# ── 1. AVPBooter ──────────────────────────────────────────────────
# Already dynamic — finds DGST constant, locates x0 setter before
# ret, replaces with mov x0, #0. Base address is irrelevant
# (cancels out in the offset calculation).

AVP_SEARCH = "0x4447"


def patch_avpbooter(data):
    md = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
    md.skipdata = True
    insns = list(md.disasm(bytes(data), 0))

    hits = [i for i in insns if AVP_SEARCH in f"{i.mnemonic} {i.op_str}"]
    if not hits:
        print(" [-] DGST constant not found")
        return False

    addr2idx = {insn.address: i for i, insn in enumerate(insns)}
    idx = addr2idx[hits[0].address]

    ret_idx = None
    for i in range(idx, min(idx + 512, len(insns))):
        if insns[i].mnemonic in RET_MNEMONICS:
            ret_idx = i
            break
    if ret_idx is None:
        print(" [-] epilogue not found")
        return False

    x0_idx = None
    for i in range(ret_idx - 1, max(ret_idx - 32, -1), -1):
        op, mn = insns[i].op_str, insns[i].mnemonic
        if mn == "mov" and op.startswith(("x0,", "w0,")):
            x0_idx = i
            break
        if mn in ("cset", "csinc", "csinv", "csneg") and op.startswith(("x0,", "w0,")):
            x0_idx = i
            break
        if mn in RET_MNEMONICS or mn in ("b", "bl", "br", "blr"):
            break
    if x0_idx is None:
        print(" [-] x0 setter not found")
        return False

    target = insns[x0_idx]
    file_off = target.address
    data[file_off : file_off + 4] = MOV_X0_0
    print(f" 0x{file_off:X}: {target.mnemonic} {target.op_str} -> mov x0, #0")
    return True


# ── 2–4. iBSS / iBEC / LLB ───────────────────────────────────────
# Fully dynamic via IBootPatcher — no hardcoded offsets.


def patch_ibss(data):
    p = IBootPatcher(data, mode="ibss", label="Loaded iBSS")
    n = p.apply()
    print(f" [+] {n} iBSS patches applied dynamically")
    return n > 0


def patch_ibec(data):
    p = IBootPatcher(data, mode="ibec", label="Loaded iBEC")
    n = p.apply()
    print(f" [+] {n} iBEC patches applied dynamically")
    return n > 0


def patch_llb(data):
    p = IBootPatcher(data, mode="llb", label="Loaded LLB")
    n = p.apply()
    print(f" [+] {n} LLB patches applied dynamically")
    return n > 0


# ── 5. TXM ───────────────────────────────────────────────────────
# Fully dynamic via TXMPatcher — no hardcoded offsets.


def patch_txm(data):
    p = TXMPatcher(data)
    n = p.apply()
    print(f" [+] {n} TXM patches applied dynamically")
    return n > 0


# ── 6. Kernelcache ───────────────────────────────────────────────
# Fully dynamic via KernelPatcher — no hardcoded offsets.


def patch_kernelcache(data):
    kp = KernelPatcher(data)
    n = kp.apply()
    print(f" [+] {n} kernel patches applied dynamically")
    return n > 0


def patch_dtree(data):
    patched = patch_device_tree_payload(data)
    data[:] = patched
    print(" [+] DeviceTree patches applied dynamically")
    return True


# ══════════════════════════════════════════════════════════════════
# File discovery
# ══════════════════════════════════════════════════════════════════


def find_restore_dir(base_dir):
    for entry in sorted(os.listdir(base_dir)):
        full = os.path.join(base_dir, entry)
        if os.path.isdir(full) and "Restore" in entry:
            return full
    return None


def find_file(base_dir, patterns, label):
    for pattern in patterns:
        matches = sorted(glob.glob(os.path.join(base_dir, pattern)))
        if matches:
            return matches[0]
    print(f"[-] {label} not found. Searched patterns:")
    for p in patterns:
        print(f" {os.path.join(base_dir, p)}")
    sys.exit(1)


# ══════════════════════════════════════════════════════════════════
# Main
# ══════════════════════════════════════════════════════════════════

COMPONENTS = [
    # (name, search_base_is_restore, search_patterns, patch_function, preserve_payp)
    ("AVPBooter", False, ["AVPBooter*.bin"], patch_avpbooter, False),
    ("iBSS", True, ["Firmware/dfu/iBSS.vresearch101.RELEASE.im4p"], patch_ibss, False),
    ("iBEC", True, ["Firmware/dfu/iBEC.vresearch101.RELEASE.im4p"], patch_ibec, False),
    (
        "LLB",
        True,
        ["Firmware/all_flash/LLB.vresearch101.RELEASE.im4p"],
        patch_llb,
        False,
    ),
    ("TXM", True, ["Firmware/txm.iphoneos.research.im4p"], patch_txm, True),
    ("kernelcache", True, ["kernelcache.research.vphone600"], patch_kernelcache, True),
    (
        "patch_dtree",
        True,
        ["Firmware/all_flash/DeviceTree.vphone600ap.im4p"],
        patch_dtree,
        False,
    ),
]


def patch_component(path, patch_fn, name, preserve_payp):
    print(f"\n{'=' * 60}")
    print(f" {name}: {path}")
    print(f"{'=' * 60}")

    im4p, data, was_im4p, original_raw = load_firmware(path)
    fmt = "IM4P" if was_im4p else "raw"
    extra = ""
    if was_im4p and im4p:
        extra = f", fourcc={im4p.fourcc}"
    print(f" format: {fmt}{extra}, {len(data)} bytes")

    if not patch_fn(data):
        print(f" [-] FAILED: {name}")
        sys.exit(1)

    save_firmware(path, im4p, data, was_im4p, original_raw if preserve_payp else None)
    print(f" [+] saved ({fmt})")


def main():
    vm_dir = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
    vm_dir = os.path.abspath(vm_dir)

    if not os.path.isdir(vm_dir):
        print(f"[-] Not a directory: {vm_dir}")
        sys.exit(1)

    restore_dir = find_restore_dir(vm_dir)
    if not restore_dir:
        print(f"[-] No *Restore* directory found in {vm_dir}")
        print(" Run prepare_firmware_v2.sh first.")
        sys.exit(1)

    print(f"[*] VM directory: {vm_dir}")
    print(f"[*] Restore directory: {restore_dir}")
    print(f"[*] Patching {len(COMPONENTS)} boot-chain components ...")

    for name, in_restore, patterns, patch_fn, preserve_payp in COMPONENTS:
        search_base = restore_dir if in_restore else vm_dir
        path = find_file(search_base, patterns, name)
        patch_component(path, patch_fn, name, preserve_payp)

    print(f"\n{'=' * 60}")
    print(f" All {len(COMPONENTS)} components patched successfully!")
    print(f"{'=' * 60}")


if __name__ == "__main__":
    main()
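The PAYP-preservation step in `_save_im4p_with_payp` above hinges on one piece of arithmetic: after appending the trailing PAYP blob, the outer ASN.1 length stored big-endian at bytes `[2:5]` must grow by the appended size. A minimal self-contained sketch of just that fixup, assuming (as the original code does) that the outer SEQUENCE uses a long-form 3-byte length:

```python
def extend_outer_length(im4p: bytearray, extra: bytes) -> bytearray:
    """Append `extra` and bump the outer 3-byte big-endian DER length.

    Sketch of the arithmetic in _save_im4p_with_payp; assumes the buffer
    starts with a SEQUENCE header using a long-form length (0x30, 0x83).
    """
    out = bytearray(im4p)
    out.extend(extra)
    old_len = int.from_bytes(out[2:5], "big")
    out[2:5] = (old_len + len(extra)).to_bytes(3, "big")
    return out

# Toy buffer: SEQUENCE header with 3-byte length 0x000010, then 16 zero bytes.
buf = bytearray(b"\x30\x83" + (0x10).to_bytes(3, "big") + b"\x00" * 0x10)
out = extend_outer_length(buf, b"PAYP-tail")
```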
@@ -1,88 +0,0 @@
#!/usr/bin/env python3
"""
fw_patch_dev.py — Patch boot-chain components using dev TXM patch set.

Usage:
    python3 fw_patch_dev.py [vm_directory]
"""

import os
import sys

from fw_patch import (
    find_file,
    find_restore_dir,
    patch_avpbooter,
    patch_ibec,
    patch_ibss,
    patch_kernelcache,
    patch_llb,
    patch_dtree,
    patch_component,
)
from patchers.txm_dev import TXMPatcher as TXMDevPatcher


def patch_txm_dev(data):
    if not patch_txm(data):
        return False
    p = TXMDevPatcher(data)
    n = p.apply()
    print(f" [+] {n} TXM dev patches applied dynamically")
    return n > 0


COMPONENTS = [
    # (name, search_base_is_restore, search_patterns, patch_function, preserve_payp)
    ("AVPBooter", False, ["AVPBooter*.bin"], patch_avpbooter, False),
    ("iBSS", True, ["Firmware/dfu/iBSS.vresearch101.RELEASE.im4p"], patch_ibss, False),
    ("iBEC", True, ["Firmware/dfu/iBEC.vresearch101.RELEASE.im4p"], patch_ibec, False),
    (
        "LLB",
        True,
        ["Firmware/all_flash/LLB.vresearch101.RELEASE.im4p"],
        patch_llb,
        False,
    ),
    ("TXM", True, ["Firmware/txm.iphoneos.research.im4p"], patch_txm_dev, True),
    ("kernelcache", True, ["kernelcache.research.vphone600"], patch_kernelcache, True),
    (
        "patch_dtree",
        True,
        ["Firmware/all_flash/DeviceTree.vphone600ap.im4p"],
        patch_dtree,
        False,
    ),
]


def main():
    vm_dir = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
    vm_dir = os.path.abspath(vm_dir)

    if not os.path.isdir(vm_dir):
        print(f"[-] Not a directory: {vm_dir}")
        sys.exit(1)

    restore_dir = find_restore_dir(vm_dir)
    if not restore_dir:
        print(f"[-] No *Restore* directory found in {vm_dir}")
        sys.exit(1)

    print(f"[*] VM directory: {vm_dir}")
    print(f"[*] Restore directory: {restore_dir}")
    print(f"[*] Patching {len(COMPONENTS)} boot-chain components (dev mode) ...")

    for name, in_restore, patterns, patch_fn, preserve_payp in COMPONENTS:
        search_base = restore_dir if in_restore else vm_dir
        path = find_file(search_base, patterns, name)
        patch_component(path, patch_fn, name, preserve_payp)

    print(f"\n{'=' * 60}")
    print(f" All {len(COMPONENTS)} components patched successfully (dev mode)!")
    print(f"{'=' * 60}")


if __name__ == "__main__":
    main()
@@ -1,146 +0,0 @@
#!/usr/bin/env python3
"""
fw_patch_jb.py — Patch boot-chain components using dev patches + JB extensions.

Usage:
    python3 fw_patch_jb.py [vm_directory]

This script extends fw_patch_dev with additional JB-oriented patches.
"""

import os
import sys

from fw_patch import (
    find_file,
    find_restore_dir,
    patch_avpbooter,
    patch_ibec,
    patch_ibss,
    patch_kernelcache,
    patch_llb,
    patch_dtree,
    patch_component,
)
from fw_patch_dev import patch_txm_dev
from patchers.iboot_jb import IBootJBPatcher
from patchers.kernel_jb import KernelJBPatcher


def _env_enabled(name, default=False):
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")


def patch_ibss_jb(data):
    p = IBootJBPatcher(data, mode="ibss", label="Loaded iBSS")
    n = p.apply()
    print(f" [+] {n} iBSS JB patches applied dynamically")
    return n > 0


def patch_kernelcache_jb(data):
    kp = KernelJBPatcher(data)
    n = kp.apply()
    print(f" [+] {n} kernel JB patches applied dynamically")
    return n > 0


# Base components — same as fw_patch_dev (dev TXM includes selector24 bypass).
COMPONENTS = [
    # (name, search_base_is_restore, search_patterns, patch_function, preserve_payp)
    ("AVPBooter", False, ["AVPBooter*.bin"], patch_avpbooter, False),
    ("iBSS", True, ["Firmware/dfu/iBSS.vresearch101.RELEASE.im4p"], patch_ibss, False),
    ("iBEC", True, ["Firmware/dfu/iBEC.vresearch101.RELEASE.im4p"], patch_ibec, False),
    (
        "LLB",
        True,
        ["Firmware/all_flash/LLB.vresearch101.RELEASE.im4p"],
        patch_llb,
        False,
    ),
    ("TXM", True, ["Firmware/txm.iphoneos.research.im4p"], patch_txm_dev, True),
    ("kernelcache", True, ["kernelcache.research.vphone600"], patch_kernelcache, True),
    (
        "patch_dtree",
        True,
        ["Firmware/all_flash/DeviceTree.vphone600ap.im4p"],
        patch_dtree,
        False,
    ),
]

# JB extension components — applied AFTER base components on the same files.
JB_COMPONENTS = [
    # (name, search_base_is_restore, search_patterns, patch_function, preserve_payp)
    ("iBSS (JB)", True, ["Firmware/dfu/iBSS.vresearch101.RELEASE.im4p"], patch_ibss_jb, False),
    (
        "kernelcache (JB)",
        True,
        ["kernelcache.research.vphone600"],
        patch_kernelcache_jb,
        True,
    ),
]


def main():
    vm_dir = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
    vm_dir = os.path.abspath(vm_dir)

    if not os.path.isdir(vm_dir):
        print(f"[-] Not a directory: {vm_dir}")
        sys.exit(1)

    restore_dir = find_restore_dir(vm_dir)
    if not restore_dir:
        print(f"[-] No *Restore* directory found in {vm_dir}")
        sys.exit(1)

    print(f"[*] VM directory: {vm_dir}")
    print(f"[*] Restore directory: {restore_dir}")
    print(f"[*] Patching {len(COMPONENTS)} boot-chain components (jailbreak mode) ...")

    allow_missing = _env_enabled("VPHONE_FW_PATCH_ALLOW_MISSING", default=False)
    skipped = []

    for name, in_restore, patterns, patch_fn, preserve_payp in COMPONENTS:
        search_base = restore_dir if in_restore else vm_dir
        try:
            path = find_file(search_base, patterns, name)
        except SystemExit:
            # AVPBooter is often absent in unpacked firmware-only directories.
            if name == "AVPBooter" or allow_missing:
                print(f"[!] Missing component '{name}', skipping this component")
                skipped.append(name)
                continue
            raise
        patch_component(path, patch_fn, name, preserve_payp)

    if JB_COMPONENTS:
        print(f"\n[*] Applying {len(JB_COMPONENTS)} JB extension patches ...")
        for name, in_restore, patterns, patch_fn, preserve_payp in JB_COMPONENTS:
            search_base = restore_dir if in_restore else vm_dir
            try:
                path = find_file(search_base, patterns, name)
            except SystemExit:
                if allow_missing:
                    print(f"[!] Missing component '{name}', skipping this component")
                    skipped.append(name)
                    continue
                raise
            patch_component(path, patch_fn, name, preserve_payp)

    print(f"\n{'=' * 60}")
    if skipped:
        print(
            f" Components patched with {len(skipped)} skipped missing components:"
            f" {', '.join(skipped)}"
        )
    else:
        print(f" All components patched successfully (jailbreak mode)!")
    print(f"{'=' * 60}")


if __name__ == "__main__":
    main()
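The `VPHONE_FW_PATCH_ALLOW_MISSING` escape hatch above accepts any of `1`, `true`, `yes`, `on` (case-insensitive, whitespace-tolerant). A standalone restatement of that truthy-string convention, useful for checking the flag's semantics without running the full patch pipeline (`env_enabled` here is a copy for illustration, not a repo API):

```python
import os

def env_enabled(name: str, default: bool = False) -> bool:
    """Same truthy-string convention as _env_enabled in fw_patch_jb above."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# Whitespace and case are tolerated; anything else reads as disabled.
os.environ["VPHONE_FW_PATCH_ALLOW_MISSING"] = " TRUE "
allow = env_enabled("VPHONE_FW_PATCH_ALLOW_MISSING")
os.environ["VPHONE_FW_PATCH_ALLOW_MISSING"] = "0"
deny = env_enabled("VPHONE_FW_PATCH_ALLOW_MISSING")
```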
@@ -38,6 +38,23 @@ echo " IPSWs: $IPSW_DIR"
echo " Output: $(pwd)/$IPHONE_DIR/"
echo ""

cleanup_old_restore_dirs() {
    local keep="$1"
    local found=0
    shopt -s nullglob
    for dir in *Restore*; do
        [[ -d "$dir" ]] || continue
        [[ "$dir" == "$keep" ]] && continue
        if [[ $found -eq 0 ]]; then
            echo "==> Removing stale restore directories ..."
            found=1
        fi
        echo " rm -rf $dir"
        rm -rf "$dir"
    done
    shopt -u nullglob
}

# ── Fetch (download or copy) ─────────────────────────────────────────
is_local() { [[ "$1" != http://* && "$1" != https://* ]]; }

@@ -87,6 +104,10 @@ extract() {
extract "$IPHONE_IPSW_PATH" "$IPHONE_CACHE" "$IPHONE_DIR"
extract "$CLOUDOS_IPSW_PATH" "$CLOUDOS_CACHE" "$CLOUDOS_DIR"

# Keep exactly one active restore tree in the working directory so fw_patch
# cannot accidentally pick a stale older firmware directory.
cleanup_old_restore_dirs "$IPHONE_DIR"

# ── Merge cloudOS firmware into iPhone restore directory ──────────────
echo "==> Importing cloudOS firmware components ..."
@@ -1,5 +1,7 @@
from .iboot import IBootPatcher
from .kernel import KernelPatcher
from .txm import TXMPatcher
"""patchers package.

__all__ = ["IBootPatcher", "KernelPatcher", "TXMPatcher"]
Only CFW-related helpers remain in Python. Firmware patching now lives in the
Swift `FirmwarePatcher` module and is invoked through `vphone-cli`.
"""

__all__ = []
@@ -1,496 +0,0 @@
#!/usr/bin/env python3
"""
iboot_patcher.py — Dynamic patcher for iBoot-based images (iBSS, iBEC, LLB).

Finds all patch sites by string anchors, instruction patterns, and unique
error-code constants — NO hardcoded offsets. Works across iBoot variants
as long as the code structure is preserved.

iBSS, iBEC, and LLB share the same raw binary; the difference is which
patches are applied:
  - iBSS: serial labels + image4 callback bypass
  - iBEC: iBSS + boot-args
  - LLB: iBEC + rootfs bypass (6 patches) + panic bypass

Dependencies: keystone-engine, capstone
"""

import struct
from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE
from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN

# ── Assembly / disassembly singletons ──────────────────────────
_ks = Ks(KS_ARCH_ARM64, KS_MODE_LE)
_cs = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
_cs.detail = True
_cs.skipdata = True


def _asm(s):
    enc, _ = _ks.asm(s)
    if not enc:
        raise RuntimeError(f"asm failed: {s}")
    return bytes(enc)


NOP = _asm("nop")
MOV_X0_0 = _asm("mov x0, #0")
PACIBSP = _asm("hint #27")


def _rd32(buf, off):
    return struct.unpack_from("<I", buf, off)[0]


def _wr32(buf, off, v):
    struct.pack_into("<I", buf, off, v)


def _disasm_one(data, off):
    insns = list(_cs.disasm(data[off : off + 4], off))
    return insns[0] if insns else None


def _disasm_n(data, off, n):
    return list(_cs.disasm(data[off : off + n * 4], off))


def _find_asm_pattern(data, asm_str):
    """Find all file offsets where the assembled instruction appears."""
    enc, _ = _ks.asm(asm_str)
    pattern = bytes(enc)
    results = []
    off = 0
    while True:
        idx = data.find(pattern, off)
        if idx < 0:
            break
        results.append(idx)
        off = idx + 4
    return results


def _encode_b(pc, target):
    """Encode an unconditional `b` instruction at pc targeting target."""
    offset = (target - pc) >> 2
    return 0x14000000 | (offset & 0x3FFFFFF)


def _encode_adrp(rd, pc, target):
    imm = ((target & ~0xFFF) - (pc & ~0xFFF)) >> 12
    imm &= (1 << 21) - 1
    return 0x90000000 | ((imm & 3) << 29) | ((imm >> 2) << 5) | (rd & 0x1F)


def _encode_add_imm12(rd, rn, imm12):
    return 0x91000000 | ((imm12 & 0xFFF) << 10) | ((rn & 0x1F) << 5) | (rd & 0x1F)
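The encoders above can be sanity-checked by hand. For `_encode_b`, the AArch64 `B` instruction is opcode `0b000101` followed by a signed 26-bit word offset from the instruction's own address, so `b .` is exactly `0x14000000`. A worked check with a standalone copy of the arithmetic:

```python
def encode_b(pc: int, target: int) -> int:
    """Same arithmetic as _encode_b above: imm26 is the signed word offset."""
    offset = (target - pc) >> 2
    return 0x14000000 | (offset & 0x3FFFFFF)

self_loop = encode_b(0x1000, 0x1000)  # b .      -> word offset 0
forward = encode_b(0x1000, 0x1008)    # b #+8    -> word offset 2
backward = encode_b(0x1000, 0x0FFC)   # b #-4    -> word offset -1, sign bits
                                      # fold into the 26-bit field
```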
# ── IBootPatcher ───────────────────────────────────────────────


class IBootPatcher:
    """Dynamic patcher for iBoot binaries (iBSS / iBEC / LLB).

    mode controls which patches are applied:
        'ibss' — serial labels + image4 callback
        'ibec' — ibss + boot-args
        'llb'  — ibec + rootfs bypass + panic bypass
    """

    BOOT_ARGS = b"serial=3 -v debug=0x2014e %s"
    CHUNK_SIZE, OVERLAP = 0x2000, 0x100

    def __init__(self, data, mode="ibss", label=None, verbose=True):
        self.data = data  # bytearray (mutable)
        self.raw = bytes(data)  # immutable snapshot
        self.size = len(data)
        self.mode = mode
        self.label = label or f"Loaded {mode.upper()}"
        self.verbose = verbose
        self.patches = []

    def _log(self, msg):
        if self.verbose:
            print(msg)

    # ── emit / apply ───────────────────────────────────────────
    def emit(self, off, patch_bytes, desc):
        self.patches.append((off, patch_bytes, desc))
        if self.verbose:
            before_insns = _disasm_n(self.raw, off, len(patch_bytes) // 4)
            after_insns = list(_cs.disasm(patch_bytes, off))
            b_str = "; ".join(f"{i.mnemonic} {i.op_str}" for i in before_insns) or "???"
            a_str = "; ".join(f"{i.mnemonic} {i.op_str}" for i in after_insns) or "???"
            print(f"  0x{off:06X}: {b_str} → {a_str} [{desc}]")

    def emit_string(self, off, data_bytes, desc):
        """Record a string/data patch (not disassemblable)."""
        self.patches.append((off, data_bytes, desc))
        if self.verbose:
            try:
                txt = data_bytes.decode("ascii")
            except Exception:
                txt = data_bytes.hex()
            print(f"  0x{off:06X}: → {repr(txt)} [{desc}]")

    def apply(self):
        """Find all patches, apply them, return count."""
        self.find_all()
        for off, pb, _ in self.patches:
            self.data[off : off + len(pb)] = pb

        if self.verbose and self.patches:
            self._log(f"\n  [{len(self.patches)} {self.mode.upper()} patches applied]")
        return len(self.patches)

    # ── Master find ────────────────────────────────────────────
    def find_all(self):
        self.patches = []

        self.patch_serial_labels()
        self.patch_image4_callback()

        if self.mode in ("ibec", "llb"):
            self.patch_boot_args()

        if self.mode == "llb":
            self.patch_rootfs_bypass()
            self.patch_panic_bypass()

        return self.patches

    # ═══════════════════════════════════════════════════════════
    # 1. Serial labels — find two long '====...' banner runs
    # ═══════════════════════════════════════════════════════════
    def patch_serial_labels(self):
        label_bytes = self.label.encode() if isinstance(self.label, str) else self.label
        eq_runs = []
        i = 0
        while i < self.size:
            if self.raw[i] == ord("="):
                start = i
                while i < self.size and self.raw[i] == ord("="):
                    i += 1
                if i - start >= 20:
                    eq_runs.append(start)
            else:
                i += 1

        if len(eq_runs) < 2:
            self._log("  [-] serial labels: <2 banner runs found")
            return

        for run_start in eq_runs[:2]:
            write_off = run_start + 1
            self.emit_string(write_off, label_bytes, "serial label")

    # ═══════════════════════════════════════════════════════════
    # 2. image4_validate_property_callback
    #    Pattern: b.ne + mov x0, x22 (preceded by cmp within 8 insns)
    #    Patch: b.ne → NOP, mov x0, x22 → mov x0, #0
    # ═══════════════════════════════════════════════════════════
    def patch_image4_callback(self):
        candidates = []
        for insns in self._chunked_disasm():
            for i in range(len(insns) - 1):
                if insns[i].mnemonic != "b.ne":
                    continue
                if not (
                    insns[i + 1].mnemonic == "mov" and insns[i + 1].op_str == "x0, x22"
                ):
                    continue
                addr = insns[i].address
                if not any(insns[j].mnemonic == "cmp" for j in range(max(0, i - 8), i)):
                    continue
                # Prefer candidate with movn w22 (sets -1) earlier
                neg1 = any(
                    (insns[j].mnemonic == "movn" and insns[j].op_str.startswith("w22,"))
                    or (
                        insns[j].mnemonic == "mov"
                        and "w22" in insns[j].op_str
                        and (
                            "#-1" in insns[j].op_str or "#0xffffffff" in insns[j].op_str
                        )
                    )
                    for j in range(max(0, i - 64), i)
                )
                candidates.append((addr, neg1))

        if not candidates:
            self._log("  [-] image4 callback: pattern not found")
            return

        # Prefer the candidate with the movn w22 (error return -1)
        off = None
        for a, n in candidates:
            if n:
                off = a
                break
        if off is None:
            off = candidates[-1][0]

        self.emit(off, NOP, "image4 callback: b.ne → nop")
        self.emit(off + 4, MOV_X0_0, "image4 callback: mov x0,x22 → mov x0,#0")

    # ═══════════════════════════════════════════════════════════
    # 3. Boot-args — redirect ADRP+ADD x2 to custom string
    # ═══════════════════════════════════════════════════════════
    def patch_boot_args(self, new_args=None):
        if new_args is None:
            new_args = self.BOOT_ARGS

        # Find the standalone "%s" format string near "rd=md0"
        fmt_off = self._find_boot_args_fmt()
        if fmt_off < 0:
            self._log("  [-] boot-args: format string not found")
            return

        # Find ADRP+ADD x2 referencing it
        adrp_off, add_off = self._find_boot_args_adrp(fmt_off)
        if adrp_off < 0:
            self._log("  [-] boot-args: ADRP+ADD x2 not found")
            return

        # Find a NUL slot for the new string
        new_off = self._find_string_slot(len(new_args))
        if new_off < 0:
            self._log("  [-] boot-args: no NUL slot")
            return

        self.emit_string(new_off, new_args, "boot-args string")
        new_adrp = struct.pack("<I", _encode_adrp(2, adrp_off, new_off))
        new_add = struct.pack("<I", _encode_add_imm12(2, 2, new_off & 0xFFF))
        self.emit(adrp_off, new_adrp, "boot-args: adrp x2 → new string page")
        self.emit(add_off, new_add, "boot-args: add x2 → new string offset")

    def _find_boot_args_fmt(self):
        anchor = self.raw.find(b"rd=md0")
        if anchor < 0:
            anchor = self.raw.find(b"BootArgs")
        if anchor < 0:
            return -1
        off = anchor
        while off < anchor + 0x40:
            off = self.raw.find(b"%s", off)
            if off < 0 or off >= anchor + 0x40:
                return -1
            if self.raw[off - 1] == 0 and self.raw[off + 2] == 0:
                return off
            off += 1
        return -1

    def _find_boot_args_adrp(self, fmt_off):
        for insns in self._chunked_disasm():
            for i in range(len(insns) - 1):
                a, b = insns[i], insns[i + 1]
                if a.mnemonic != "adrp" or b.mnemonic != "add":
                    continue
                if a.op_str.split(",")[0].strip() != "x2":
                    continue
                if len(a.operands) < 2 or len(b.operands) < 3:
                    continue
                if a.operands[0].reg != b.operands[1].reg:
                    continue
                if a.operands[1].imm + b.operands[2].imm == fmt_off:
                    return a.address, b.address
        return -1, -1

    def _find_string_slot(self, string_len, search_start=0x14000):
        off = search_start
        while off < self.size:
            if self.raw[off] == 0:
                run_start = off
                while off < self.size and self.raw[off] == 0:
                    off += 1
                if off - run_start >= 64:
                    write_off = (run_start + 8 + 15) & ~15
                    if write_off + string_len <= off:
                        return write_off
            else:
                off += 1
        return -1
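The slot search reserves an aligned start inside a long NUL run; the rounding expression `(run_start + 8 + 15) & ~15` (skip 8 guard bytes, then round up to a 16-byte boundary) can be sanity-checked in isolation:

```python
def aligned_slot(run_start):
    # skip 8 guard bytes, then round up to the next 16-byte boundary
    return (run_start + 8 + 15) & ~15

assert aligned_slot(0x14000) == 0x14010
assert aligned_slot(0x14007) == 0x14010
assert aligned_slot(0x14009) == 0x14020
```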
    # ═══════════════════════════════════════════════════════════
    # 4. LLB rootfs bypass — 6 patches in two functions
    # ═══════════════════════════════════════════════════════════
    def patch_rootfs_bypass(self):
        # ── 4a: cbz w0 → unconditional b (error code 0x3B7) ──
        self._patch_cbz_before_error(0x3B7, "rootfs: skip sig check (0x3B7)")

        # ── 4b: cmp x8, #0x400; b.hs → nop ────────────────────
        self._patch_bhs_after_cmp_0x400()

        # ── 4c: cbz w0 → unconditional b (error code 0x3C2) ──
        self._patch_cbz_before_error(0x3C2, "rootfs: skip sig verify (0x3C2)")

        # ── 4d: cbz x8 → nop (ldr xR, [xN, #0x78]) ──────────
        self._patch_null_check_0x78()

        # ── 4e: cbz w0 → unconditional b (error code 0x110) ──
        self._patch_cbz_before_error(0x110, "rootfs: skip size verify (0x110)")

    def _patch_cbz_before_error(self, error_code, desc):
        """Find unique 'mov w8, #<error_code>'; cbz/cbnz is 4 bytes before.
        Convert the conditional branch to an unconditional b to the same target."""
        locs = _find_asm_pattern(self.raw, f"mov w8, #{error_code}")
        if len(locs) != 1:
            self._log(
                f"  [-] {desc}: expected 1 'mov w8, #{error_code:#x}', "
                f"found {len(locs)}"
            )
            return

        err_off = locs[0]
        cbz_off = err_off - 4
        insn = _disasm_one(self.raw, cbz_off)
        if not insn or insn.mnemonic not in ("cbz", "cbnz"):
            self._log(
                f"  [-] {desc}: expected cbz/cbnz at 0x{cbz_off:X}, "
                f"got {insn.mnemonic if insn else '???'}"
            )
            return

        # Extract the branch target from the conditional instruction
        target = insn.operands[1].imm
        b_word = _encode_b(cbz_off, target)
        self.emit(cbz_off, struct.pack("<I", b_word), desc)

    def _patch_bhs_after_cmp_0x400(self):
        """Find unique 'cmp x8, #0x400', NOP the b.hs that follows."""
        locs = _find_asm_pattern(self.raw, "cmp x8, #0x400")
        if len(locs) != 1:
            self._log(
                f"  [-] rootfs b.hs: expected 1 'cmp x8, #0x400', found {len(locs)}"
            )
            return

        cmp_off = locs[0]
        bhs_off = cmp_off + 4
        insn = _disasm_one(self.raw, bhs_off)
        if not insn or insn.mnemonic != "b.hs":
            self._log(
                f"  [-] rootfs b.hs: expected b.hs at 0x{bhs_off:X}, "
                f"got {insn.mnemonic if insn else '???'}"
            )
            return

        self.emit(bhs_off, NOP, "rootfs: NOP b.hs size check (0x400)")

    def _patch_null_check_0x78(self):
        """Find 'ldr x8, [xN, #0x78]; cbz x8' preceding unique error 0x110.
        NOP the cbz."""
        locs = _find_asm_pattern(self.raw, "mov w8, #0x110")
        if len(locs) != 1:
            self._log(
                f"  [-] rootfs null check: expected 1 'mov w8, #0x110', "
                f"found {len(locs)}"
            )
            return

        err_off = locs[0]
        # Walk backwards from the error code to find the ldr+cbz pattern
        for scan in range(err_off - 4, max(err_off - 0x300, 0), -4):
            i1 = _disasm_one(self.raw, scan)
            i2 = _disasm_one(self.raw, scan + 4)
            if (
                i1
                and i2
                and i1.mnemonic == "ldr"
                and "#0x78" in i1.op_str
                and i2.mnemonic == "cbz"
                and i2.op_str.startswith("x")
            ):
                self.emit(scan + 4, NOP, "rootfs: NOP cbz x8 null check (#0x78)")
                return

        self._log("  [-] rootfs null check: ldr+cbz #0x78 pattern not found")

    # ═══════════════════════════════════════════════════════════
    # 5. LLB panic bypass
    #    Pattern: mov w8, #0x328; movk w8, #0x40, lsl #16;
    #             str wzr, ...; str wzr, ...; bl X; cbnz w0
    #    Patch: NOP the cbnz
    # ═══════════════════════════════════════════════════════════
    def patch_panic_bypass(self):
        mov328_locs = _find_asm_pattern(self.raw, "mov w8, #0x328")
        for loc in mov328_locs:
            # Verify movk w8, #0x40, lsl #16 follows
            next_insn = _disasm_one(self.raw, loc + 4)
            if not (
                next_insn
                and next_insn.mnemonic == "movk"
                and "w8" in next_insn.op_str
                and "#0x40" in next_insn.op_str
                and "lsl #16" in next_insn.op_str
            ):
                continue

            # Walk forward to find bl; cbnz w0
            for step in range(loc + 8, loc + 32, 4):
                i = _disasm_one(self.raw, step)
                if i and i.mnemonic == "bl":
                    ni = _disasm_one(self.raw, step + 4)
                    if ni and ni.mnemonic == "cbnz":
                        self.emit(step + 4, NOP, "panic bypass: NOP cbnz w0")
                        return
                    break

        self._log("  [-] panic bypass: pattern not found")

    # ── Chunked disassembly helper ─────────────────────────────
    def _chunked_disasm(self):
        off = 0
        while off < self.size:
            end = min(off + self.CHUNK_SIZE, self.size)
            insns = list(_cs.disasm(self.raw[off:end], off))
            yield insns
            off += self.CHUNK_SIZE - self.OVERLAP
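The `OVERLAP` in `_chunked_disasm` exists so a multi-instruction pattern straddling a chunk boundary is still seen in full by the next chunk (at the cost of possible duplicate hits, which callers must deduplicate). A toy byte-level version of the same idea:

```python
def chunked_find(data, pattern, chunk=16, overlap=4):
    # Scan fixed-size windows; the overlap re-covers boundary-straddling matches.
    hits = set()
    off = 0
    while off < len(data):
        window = data[off : off + chunk]
        i = window.find(pattern)
        while i >= 0:
            hits.add(off + i)  # set() dedupes matches seen by two windows
            i = window.find(pattern, i + 1)
        off += chunk - overlap
    return sorted(hits)

data = b"\x00" * 14 + b"ABCD" + b"\x00" * 14
assert chunked_find(data, b"ABCD") == [14]           # straddles the 16-byte boundary
assert chunked_find(data, b"ABCD", overlap=0) == []  # no overlap → match missed
```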
# ── CLI entry point ────────────────────────────────────────────
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(
        description="Dynamic iBoot patcher (iBSS / iBEC / LLB)"
    )
    parser.add_argument("firmware", help="Path to raw or IM4P iBoot image")
    parser.add_argument(
        "-m",
        "--mode",
        choices=["ibss", "ibec", "llb"],
        default="llb",
        help="Patch mode (default: llb = all patches)",
    )
    parser.add_argument(
        "-l", "--label", default=None, help="Serial label text (default: 'Loaded MODE')"
    )
    parser.add_argument("-q", "--quiet", action="store_true")
    args = parser.parse_args()

    print(f"Loading {args.firmware}...")
    with open(args.firmware, "rb") as f:
        file_raw = f.read()

    # Auto-detect IM4P
    try:
        from pyimg4 import IM4P

        im4p = IM4P(file_raw)
        if im4p.payload.compression:
            im4p.payload.decompress()
        payload = im4p.payload.data
        print(f"  format: IM4P (fourcc={im4p.fourcc})")
    except Exception:
        payload = file_raw
        print("  format: raw")

    data = bytearray(payload)
    print(f"  size: {len(data)} bytes ({len(data) / 1024:.1f} KB)\n")

    patcher = IBootPatcher(
        data, mode=args.mode, label=args.label, verbose=not args.quiet
    )
    n = patcher.apply()
    print(f"\n  {n} patches applied.")
@@ -1,113 +0,0 @@
#!/usr/bin/env python3
"""
iboot_jb.py — Jailbreak extension patcher for iBoot-based images.

Currently adds the iBSS-only nonce generation bypass used by fw_patch_jb.py.
"""

from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE
from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_REG, ARM64_REG_W0

from .iboot import IBootPatcher, _disasm_one

_ks = Ks(KS_ARCH_ARM64, KS_MODE_LE)


class IBootJBPatcher(IBootPatcher):
    """JB-only patcher for iBoot images."""

    def _asm_at(self, asm_line, addr):
        enc, _ = _ks.asm(asm_line, addr=addr)
        if not enc:
            raise RuntimeError(f"asm failed at 0x{addr:X}: {asm_line}")
        return bytes(enc)

    def apply(self):
        self.patches = []
        if self.mode == "ibss":
            self.patch_skip_generate_nonce()

        for off, pb, _ in self.patches:
            self.data[off : off + len(pb)] = pb

        if self.verbose and self.patches:
            self._log(
                f"\n  [{len(self.patches)} {self.mode.upper()} JB patches applied]"
            )
        return len(self.patches)

    def _find_refs_to_offset(self, target_off):
        refs = []
        for insns in self._chunked_disasm():
            for i in range(len(insns) - 1):
                a, b = insns[i], insns[i + 1]
                if a.mnemonic != "adrp" or b.mnemonic != "add":
                    continue
                if len(a.operands) < 2 or len(b.operands) < 3:
                    continue
                if a.operands[0].reg != b.operands[1].reg:
                    continue
                if a.operands[1].imm + b.operands[2].imm == target_off:
                    refs.append((a.address, b.address, b.operands[0].reg))
        return refs

    def _find_string_refs(self, needle):
        if isinstance(needle, str):
            needle = needle.encode()
        seen = set()
        refs = []
        off = 0
        while True:
            s_off = self.raw.find(needle, off)
            if s_off < 0:
                break
            off = s_off + 1
            for r in self._find_refs_to_offset(s_off):
                if r[0] not in seen:
                    seen.add(r[0])
                    refs.append(r)
        return refs

    def patch_skip_generate_nonce(self):
        refs = self._find_string_refs(b"boot-nonce")
        if not refs:
            self._log("  [-] iBSS JB: no refs to 'boot-nonce'")
            return False

        for _, add_off, _ in refs:
            for scan in range(add_off, min(add_off + 0x100, self.size - 12), 4):
                i0 = _disasm_one(self.raw, scan)
                i1 = _disasm_one(self.raw, scan + 4)
                i2 = _disasm_one(self.raw, scan + 8)
                if not i0 or not i1 or not i2:
                    continue
                if i0.mnemonic not in ("tbz", "tbnz"):
                    continue
                if len(i0.operands) < 3:
                    continue
                if not (
                    i0.operands[0].type == ARM64_OP_REG
                    and i0.operands[0].reg == ARM64_REG_W0
                ):
                    continue
                if not (
                    i0.operands[1].type == ARM64_OP_IMM and i0.operands[1].imm == 0
                ):
                    continue
                if i1.mnemonic != "mov" or i1.op_str != "w0, #0":
                    continue
                if i2.mnemonic != "bl":
                    continue

                target = i0.operands[2].imm
                self.emit(
                    scan,
                    self._asm_at(f"b #0x{target:X}", scan),
                    "JB: skip generate_nonce",
                )
                return True

        self._log("  [-] iBSS JB: generate_nonce branch pattern not found")
        return False
@@ -1,188 +0,0 @@
#!/usr/bin/env python3
"""
kernel.py — Dynamic kernel patcher for iOS prelinked kernelcaches.

Finds all patch sites by string anchors, ADRP+ADD cross-references,
BL frequency analysis, and Mach-O structure parsing. Nothing is hardcoded;
works across kernel variants (vresearch101, vphone600, etc.).

Dependencies: keystone-engine, capstone
"""
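ADRP+ADD cross-referencing, used throughout these patchers, resolves a page-relative instruction pair back to a flat address. A standalone decoder sketch (register fields omitted; the immediate field positions follow the A64 ADRP and ADD-immediate encodings):

```python
def adrp_add_target(adrp_word, add_word, pc):
    # ADRP: immlo in bits 30:29, immhi in bits 23:5 → signed 21-bit page delta
    immlo = (adrp_word >> 29) & 0x3
    immhi = (adrp_word >> 5) & 0x7FFFF
    delta = (immhi << 2) | immlo
    if delta & (1 << 20):  # sign-extend the 21-bit field
        delta -= 1 << 21
    page = (pc & ~0xFFF) + (delta << 12)
    # ADD (immediate): imm12 in bits 21:10
    return page + ((add_word >> 10) & 0xFFF)

# adrp x2, #0x2000 (0xD0000002) + add x2, x2, #0x34 (0x9100D042) at pc 0x1000 → 0x3034
assert adrp_add_target(0xD0000002, 0x9100D042, 0x1000) == 0x3034
```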
# Re-export asm helpers for backward compatibility (kernel_jb.py imports from here)
from .kernel_asm import (
    asm,
    NOP,
    MOV_X0_0,
    MOV_X0_1,
    MOV_W0_0,
    MOV_W0_1,
    RET,
    CMP_W0_W0,
    CMP_X0_X0,
    _rd32,
    _rd64,
    _asm_u32,
    _verify_disas,
)
from .kernel_base import KernelPatcherBase
from .kernel_patch_apfs_snapshot import KernelPatchApfsSnapshotMixin
from .kernel_patch_apfs_seal import KernelPatchApfsSealMixin
from .kernel_patch_bsd_init import KernelPatchBsdInitMixin
from .kernel_patch_launch_constraints import KernelPatchLaunchConstraintsMixin
from .kernel_patch_debugger import KernelPatchDebuggerMixin
from .kernel_patch_post_validation import KernelPatchPostValidationMixin
from .kernel_patch_dyld_policy import KernelPatchDyldPolicyMixin
from .kernel_patch_apfs_graft import KernelPatchApfsGraftMixin
from .kernel_patch_apfs_mount import KernelPatchApfsMountMixin
from .kernel_patch_sandbox import KernelPatchSandboxMixin


class KernelPatcher(
    KernelPatchSandboxMixin,
    KernelPatchApfsMountMixin,
    KernelPatchApfsGraftMixin,
    KernelPatchDyldPolicyMixin,
    KernelPatchPostValidationMixin,
    KernelPatchDebuggerMixin,
    KernelPatchLaunchConstraintsMixin,
    KernelPatchBsdInitMixin,
    KernelPatchApfsSealMixin,
    KernelPatchApfsSnapshotMixin,
    KernelPatcherBase,
):
    """Dynamic kernel patcher — all offsets found at runtime."""

    def find_all(self):
        """Find and record all kernel patches. Returns list of (offset, bytes, desc)."""
        self._reset_patch_state()
        self.patch_apfs_root_snapshot()                # 1
        self.patch_apfs_seal_broken()                  # 2
        self.patch_bsd_init_rootvp()                   # 3
        self.patch_proc_check_launch_constraints()     # 4-5
        self.patch_PE_i_can_has_debugger()             # 6-7
        self.patch_post_validation_nop()               # 8
        self.patch_post_validation_cmp()               # 9
        self.patch_check_dyld_policy()                 # 10-11
        self.patch_apfs_graft()                        # 12
        self.patch_apfs_vfsop_mount_cmp()              # 13
        self.patch_apfs_mount_upgrade_checks()         # 14
        self.patch_handle_fsioc_graft()                # 15
        self.patch_apfs_get_dev_by_role_entitlement()  # 16
        self.patch_sandbox_hooks()                     # 17-26
        return self.patches

    def apply(self):
        """Find all patches and apply them to self.data. Returns patch count."""
        self._patch_num = 0
        patches = self.find_all()
        # emit() already writes patches through to self.data,
        # but re-apply in case subclasses override find_all().
        for off, patch_bytes, desc in patches:
            self.data[off : off + len(patch_bytes)] = patch_bytes
        return len(patches)


# ── CLI entry point ──────────────────────────────────────────────
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(
        description="Dynamic kernel patcher — find & apply patches on iOS kernelcaches"
    )
    parser.add_argument("kernelcache", help="Path to raw or IM4P kernelcache")
    parser.add_argument(
        "-v",
        "--verbose",
        action="store_true",
        help="Show detailed before/after disassembly for each patch",
    )
    parser.add_argument(
        "-c",
        "--context",
        type=int,
        default=5,
        help="Instructions of context before/after each patch (default: 5, requires -v)",
    )
    args = parser.parse_args()

    path = args.kernelcache
    print(f"Loading {path}...")
    with open(path, "rb") as f:
        file_raw = f.read()

    # Auto-detect IM4P vs raw Mach-O
    if file_raw[:4] == b"\xcf\xfa\xed\xfe":
        payload = file_raw
        print("  format: raw Mach-O")
    else:
        try:
            from pyimg4 import IM4P

            im4p = IM4P(file_raw)
            if im4p.payload.compression:
                im4p.payload.decompress()
            payload = im4p.payload.data
            print(f"  format: IM4P (fourcc={im4p.fourcc})")
        except Exception:
            payload = file_raw
            print("  format: unknown (treating as raw)")

    data = bytearray(payload)
    print(f"  size: {len(data)} bytes ({len(data) / 1024 / 1024:.1f} MB)\n")

    kp = KernelPatcher(data, verbose=args.verbose)
    patches = kp.find_all()
    print(f"\n  {len(patches)} patches found")

    if args.verbose:
        # ── Print ranged before / after disassembly for every patch ──
        ctx = args.context

        print(f"\n{'═' * 72}")
        print(f"  {len(patches)} PATCHES — before / after disassembly (context={ctx})")
        print(f"{'═' * 72}")

        # Apply patches to get the "after" image
        after = bytearray(kp.raw)  # start from original
        for off, pb, _ in patches:
            after[off : off + len(pb)] = pb

        def fmt(insn):
            if insn is None:
                return " " * 33
            h = insn.bytes.hex()
            return f"0x{insn.address:07X} {h:8s} {insn.mnemonic:6s} {insn.op_str}"

        for i, (off, patch_bytes, desc) in enumerate(sorted(patches), 1):
            n_insns = len(patch_bytes) // 4
            start = max(off - ctx * 4, 0)
            end = off + n_insns * 4 + ctx * 4
            total = (end - start) // 4

            before_insns = kp._disas_n(kp.raw, start, total)
            after_insns = kp._disas_n(after, start, total)

            print(f"\n  ┌{'─' * 70}")
            print(f"  │ [{i:2d}] 0x{off:08X}: {desc}")
            print(f"  ├{'─' * 34}┬{'─' * 35}")
            print(f"  │ {'BEFORE':^33}│ {'AFTER':^34}")
            print(f"  ├{'─' * 34}┼{'─' * 35}")

            # Build line pairs
            max_lines = max(len(before_insns), len(after_insns))
            for j in range(max_lines):
                bi = before_insns[j] if j < len(before_insns) else None
                ai = after_insns[j] if j < len(after_insns) else None

                bl = fmt(bi)
                al = fmt(ai)

                # Mark if this address is inside the patched range
                addr = (bi.address if bi else ai.address) if (bi or ai) else 0
                in_patch = off <= addr < off + len(patch_bytes)
                marker = " ◄" if in_patch else " "

                print(f"  │ {bl:33s}│ {al:33s}{marker}")

            print(f"  └{'─' * 34}┴{'─' * 35}")
@@ -1,81 +0,0 @@
#!/usr/bin/env python3
"""Shared asm/constants/helpers for kernel patchers.

Split out of the original kernel_patcher.py dynamic kernel patcher.

Dependencies: keystone-engine, capstone
"""

import struct

from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE
from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN
from capstone.arm64_const import (
    ARM64_OP_REG,
    ARM64_OP_IMM,
    ARM64_REG_W0,
    ARM64_REG_X0,
    ARM64_REG_X8,
)

# ── Assembly / disassembly helpers ───────────────────────────────
_ks = Ks(KS_ARCH_ARM64, KS_MODE_LE)
_cs = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
_cs.detail = True


def asm(s):
    enc, _ = _ks.asm(s)
    if not enc:
        raise RuntimeError(f"asm failed: {s}")
    return bytes(enc)


NOP = asm("nop")
MOV_X0_0 = asm("mov x0, #0")
MOV_X0_1 = asm("mov x0, #1")
MOV_W0_0 = asm("mov w0, #0")
MOV_W0_1 = asm("mov w0, #1")
RET = asm("ret")
CMP_W0_W0 = asm("cmp w0, w0")
CMP_X0_X0 = asm("cmp x0, x0")


def _asm_u32(s):
    """Assemble a single instruction and return its uint32 encoding."""
    return struct.unpack("<I", asm(s))[0]


def _verify_disas(u32_val, expected_mnemonic):
    """Verify a uint32 encoding disassembles to the expected mnemonic via capstone."""
    code = struct.pack("<I", u32_val)
    insns = list(_cs.disasm(code, 0, 1))
    assert insns and insns[0].mnemonic == expected_mnemonic, (
        f"0x{u32_val:08X} disassembles to "
        f"{insns[0].mnemonic if insns else '???'}, expected {expected_mnemonic}"
    )
    return u32_val


# Named instruction constants (via keystone where possible, capstone-verified otherwise)
_PACIBSP_U32 = _asm_u32("hint #27")  # keystone doesn't know 'pacibsp'
_RET_U32 = _asm_u32("ret")
_RETAA_U32 = _verify_disas(0xD65F0BFF, "retaa")  # keystone can't assemble PAC returns
_RETAB_U32 = _verify_disas(0xD65F0FFF, "retab")  # verified via capstone disassembly
_FUNC_BOUNDARY_U32S = frozenset((_RET_U32, _RETAA_U32, _RETAB_U32, _PACIBSP_U32))


def _rd32(buf, off):
    return struct.unpack_from("<I", buf, off)[0]


def _rd64(buf, off):
    return struct.unpack_from("<Q", buf, off)[0]
@@ -1,679 +0,0 @@
"""Base class with all infrastructure for kernel patchers."""

import struct, plistlib, threading
from collections import defaultdict

from capstone.arm64_const import (
    ARM64_OP_REG,
    ARM64_OP_IMM,
    ARM64_REG_W0,
    ARM64_REG_X0,
    ARM64_REG_X8,
)

from .kernel_asm import (
    _cs,
    _rd32,
    _rd64,
    _PACIBSP_U32,
    _FUNC_BOUNDARY_U32S,
)


class KernelPatcherBase:
    def __init__(self, data, verbose=False):
        self.data = data  # bytearray (mutable)
        self.raw = bytes(data)  # immutable snapshot for searching
        self.size = len(data)
        self.patches = []  # collected (offset, bytes, description)
        self._patch_by_off = {}  # offset -> (patch_bytes, desc)
        self.verbose = verbose
        self._patch_num = 0  # running counter for clean one-liners
        self._emit_lock = threading.Lock()

        # Hot-path caches (search/disassembly is repeated heavily in JB mode).
        self._disas_cache = {}
        self._disas_cache_limit = 200_000
        self._string_refs_cache = {}
        self._func_start_cache = {}

        self._log("[*] Parsing Mach-O segments …")
        self._parse_macho()

        self._log("[*] Discovering kext code ranges from __PRELINK_INFO …")
        self._discover_kext_ranges()

        self._log("[*] Building ADRP index …")
        self._build_adrp_index()

        self._log("[*] Building BL index …")
        self._build_bl_index()

        self._find_panic()
        self._log(
            f"[*] _panic at foff 0x{self.panic_off:X} "
            f"({len(self.bl_callers[self.panic_off])} callers)"
        )

    # ── Logging ──────────────────────────────────────────────────
    def _log(self, msg):
        if self.verbose:
            print(msg)

    def _reset_patch_state(self):
        """Reset patch bookkeeping before a fresh find/apply pass."""
        self.patches = []
        self._patch_by_off = {}
        self._patch_num = 0

    # ── Mach-O / segment parsing ─────────────────────────────────
    def _parse_macho(self):
        """Parse top-level Mach-O: discover BASE_VA, segments, code ranges."""
        magic = _rd32(self.raw, 0)
        if magic != 0xFEEDFACF:
            raise ValueError(f"Not a 64-bit Mach-O (magic 0x{magic:08X})")

        self.code_ranges = []  # [(start_foff, end_foff), ...]
        self.all_segments = []  # [(name, vmaddr, fileoff, filesize, initprot)]
        self.base_va = None

        ncmds = struct.unpack_from("<I", self.raw, 16)[0]
        off = 32  # past mach_header_64
        for _ in range(ncmds):
            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
            if cmd == 0x19:  # LC_SEGMENT_64
                segname = self.raw[off + 8 : off + 24].split(b"\x00")[0].decode()
                vmaddr, vmsize, fileoff, filesize = struct.unpack_from(
                    "<QQQQ", self.raw, off + 24
                )
                initprot = struct.unpack_from("<I", self.raw, off + 60)[0]
                self.all_segments.append((segname, vmaddr, fileoff, filesize, initprot))
                if segname == "__TEXT":
                    self.base_va = vmaddr
                CODE_SEGS = ("__PRELINK_TEXT", "__TEXT_EXEC", "__TEXT_BOOT_EXEC")
                if segname in CODE_SEGS and filesize > 0:
                    self.code_ranges.append((fileoff, fileoff + filesize))
            off += cmdsize

        if self.base_va is None:
            raise ValueError("__TEXT segment not found — cannot determine BASE_VA")

        self.code_ranges.sort()
        total_mb = sum(e - s for s, e in self.code_ranges) / (1024 * 1024)
        self._log(f"    BASE_VA = 0x{self.base_va:016X}")
        self._log(
            f"    {len(self.code_ranges)} executable ranges, total {total_mb:.1f} MB"
        )

    def _va(self, foff):
        return self.base_va + foff

    def _foff(self, va):
        return va - self.base_va
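The load-command walk in `_parse_macho` can be exercised against a synthetic header. A minimal sketch; the 0xFEEDFACF magic, ncmds at byte 16, and the LC_SEGMENT_64 layout match the real mach_header_64 / segment_command_64 structs:

```python
import struct

def parse_segments(raw):
    # mach_header_64 is 32 bytes; ncmds lives at byte 16, commands start at byte 32
    assert struct.unpack_from("<I", raw, 0)[0] == 0xFEEDFACF
    ncmds = struct.unpack_from("<I", raw, 16)[0]
    segs, off = [], 32
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", raw, off)
        if cmd == 0x19:  # LC_SEGMENT_64
            name = raw[off + 8 : off + 24].split(b"\x00")[0].decode()
            vmaddr, vmsize, fileoff, filesize = struct.unpack_from("<QQQQ", raw, off + 24)
            segs.append((name, vmaddr, fileoff, filesize))
        off += cmdsize
    return segs

# Fake header (ncmds=1) plus one LC_SEGMENT_64 for __TEXT, cmdsize 72 (no sections)
hdr = struct.pack("<8I", 0xFEEDFACF, 0x0100000C, 0, 2, 1, 72, 0, 0)
seg = struct.pack("<II16sQQQQIIII", 0x19, 72, b"__TEXT", 0xFFFFFE0007004000,
                  0x4000, 0, 0x4000, 5, 5, 0, 0)
assert parse_segments(hdr + seg) == [("__TEXT", 0xFFFFFE0007004000, 0, 0x4000)]
```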
    # ── Kext range discovery ─────────────────────────────────────
    def _discover_kext_ranges(self):
        """Parse __PRELINK_INFO + embedded kext Mach-Os to find code section ranges."""
        self.kext_ranges = {}  # bundle_id -> (text_start, text_end)

        # Find __PRELINK_INFO segment
        prelink_info = None
        for name, vmaddr, fileoff, filesize, _ in self.all_segments:
            if name == "__PRELINK_INFO":
                prelink_info = (fileoff, filesize)
                break

        if prelink_info is None:
            self._log("  [-] __PRELINK_INFO not found, using __TEXT_EXEC for all")
            self._set_fallback_ranges()
            return

        foff, fsize = prelink_info
        pdata = self.raw[foff : foff + fsize]

        # Parse the XML plist
        xml_start = pdata.find(b"<?xml")
        xml_end = pdata.find(b"</plist>")
        if xml_start < 0 or xml_end < 0:
            self._log("  [-] __PRELINK_INFO plist not found")
            self._set_fallback_ranges()
            return

        xml = pdata[xml_start : xml_end + len(b"</plist>")]
        pl = plistlib.loads(xml)
        items = pl.get("_PrelinkInfoDictionary", [])

        # Kexts we need ranges for
        WANTED = {
            "com.apple.filesystems.apfs": "apfs",
            "com.apple.security.sandbox": "sandbox",
            "com.apple.driver.AppleMobileFileIntegrity": "amfi",
        }

        for item in items:
            bid = item.get("CFBundleIdentifier", "")
            tag = WANTED.get(bid)
            if tag is None:
                continue

            exec_addr = item.get("_PrelinkExecutableLoadAddr", 0) & 0xFFFFFFFFFFFFFFFF
            kext_foff = exec_addr - self.base_va
            if kext_foff < 0 or kext_foff >= self.size:
                continue

            # Parse this kext's embedded Mach-O to find __TEXT_EXEC.__text
            text_range = self._parse_kext_text_exec(kext_foff)
            if text_range:
                self.kext_ranges[tag] = text_range
                self._log(
                    f"  {tag:10s} __text: 0x{text_range[0]:08X} - 0x{text_range[1]:08X} "
                    f"({(text_range[1] - text_range[0]) // 1024} KB)"
                )

        # Derive the ranges used by patch methods
        self._set_ranges_from_kexts()

    def _parse_kext_text_exec(self, kext_foff):
        """Parse an embedded kext Mach-O header and return (__text start, end) in file offsets."""
        if kext_foff + 32 > self.size:
            return None
        magic = _rd32(self.raw, kext_foff)
        if magic != 0xFEEDFACF:
            return None

        ncmds = struct.unpack_from("<I", self.raw, kext_foff + 16)[0]
        off = kext_foff + 32
        for _ in range(ncmds):
            if off + 8 > self.size:
                break
            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
            if cmd == 0x19:  # LC_SEGMENT_64
                segname = self.raw[off + 8 : off + 24].split(b"\x00")[0].decode()
                if segname == "__TEXT_EXEC":
                    vmaddr = struct.unpack_from("<Q", self.raw, off + 24)[0]
                    filesize = struct.unpack_from("<Q", self.raw, off + 48)[0]
                    nsects = struct.unpack_from("<I", self.raw, off + 64)[0]
                    # Parse sections to find __text
                    sect_off = off + 72
                    for _ in range(nsects):
                        if sect_off + 80 > self.size:
                            break
                        sectname = (
                            self.raw[sect_off : sect_off + 16]
                            .split(b"\x00")[0]
                            .decode()
                        )
                        if sectname == "__text":
                            sect_addr = struct.unpack_from(
                                "<Q", self.raw, sect_off + 32
                            )[0]
                            sect_size = struct.unpack_from(
                                "<Q", self.raw, sect_off + 40
                            )[0]
                            sect_foff = sect_addr - self.base_va
                            return (sect_foff, sect_foff + sect_size)
                        sect_off += 80
                    # No __text section found, use the segment
                    seg_foff = vmaddr - self.base_va
                    return (seg_foff, seg_foff + filesize)
            off += cmdsize
        return None

    def _set_ranges_from_kexts(self):
        """Set patch-method ranges from discovered kext info, with fallbacks."""
        # Full __TEXT_EXEC range
        text_exec = None
        for name, vmaddr, fileoff, filesize, _ in self.all_segments:
            if name == "__TEXT_EXEC":
                text_exec = (fileoff, fileoff + filesize)
                break

        if text_exec is None:
            text_exec = (0, self.size)

        self.text_exec_range = text_exec
        self.apfs_text = self.kext_ranges.get("apfs", text_exec)
        self.amfi_text = self.kext_ranges.get("amfi", text_exec)
        self.sandbox_text = self.kext_ranges.get("sandbox", text_exec)
        # Kernel code = full __TEXT_EXEC (includes all kexts, but that's OK)
        self.kern_text = text_exec

    def _set_fallback_ranges(self):
        """Use __TEXT_EXEC for everything when __PRELINK_INFO is unavailable."""
        text_exec = None
        for name, vmaddr, fileoff, filesize, _ in self.all_segments:
            if name == "__TEXT_EXEC":
                text_exec = (fileoff, fileoff + filesize)
                break
        if text_exec is None:
            text_exec = (0, self.size)

        self.text_exec_range = text_exec
        self.apfs_text = text_exec
        self.amfi_text = text_exec
        self.sandbox_text = text_exec
        self.kern_text = text_exec

    # ── Index builders ───────────────────────────────────────────
    def _build_adrp_index(self):
        """Index ADRP instructions by target page for O(1) string-ref lookup."""
        self.adrp_by_page = defaultdict(list)
        for rng_start, rng_end in self.code_ranges:
            for off in range(rng_start, rng_end, 4):
                insn = _rd32(self.raw, off)
                if (insn & 0x9F000000) != 0x90000000:
                    continue
                rd = insn & 0x1F
                immhi = (insn >> 5) & 0x7FFFF
                immlo = (insn >> 29) & 0x3
                imm = (immhi << 2) | immlo
                if imm & (1 << 20):
                    imm -= 1 << 21
                pc = self._va(off)
                page = (pc & ~0xFFF) + (imm << 12)
                self.adrp_by_page[page].append((off, rd))

        n = sum(len(v) for v in self.adrp_by_page.values())
        self._log(f"  {n} ADRP entries, {len(self.adrp_by_page)} distinct pages")

    def _build_bl_index(self):
        """Index BL instructions by target offset."""
        self.bl_callers = defaultdict(list)  # target_off -> [caller_off, ...]
        for rng_start, rng_end in self.code_ranges:
            for off in range(rng_start, rng_end, 4):
                insn = _rd32(self.raw, off)
                if (insn & 0xFC000000) != 0x94000000:
                    continue
                imm26 = insn & 0x3FFFFFF
                if imm26 & (1 << 25):
                    imm26 -= 1 << 26
                target = off + imm26 * 4
                self.bl_callers[target].append(off)

    def _find_panic(self):
        """Find _panic: most-called function whose callers reference '@%s:%d' strings."""
        candidates = sorted(self.bl_callers.items(), key=lambda x: -len(x[1]))[:15]
        for target_off, callers in candidates:
            if len(callers) < 2000:
                break
            confirmed = 0
            for caller_off in callers[:30]:
                for back in range(caller_off - 4, max(caller_off - 32, 0), -4):
                    insn = _rd32(self.raw, back)
                    # ADD x0, x0, #imm
                    if (insn & 0xFFC003E0) == 0x91000000:
                        add_imm = (insn >> 10) & 0xFFF
                        if back >= 4:
                            prev = _rd32(self.raw, back - 4)
                            if (prev & 0x9F00001F) == 0x90000000:  # ADRP x0
                                immhi = (prev >> 5) & 0x7FFFF
                                immlo = (prev >> 29) & 0x3
                                imm = (immhi << 2) | immlo
                                if imm & (1 << 20):
                                    imm -= 1 << 21
                                pc = self._va(back - 4)
                                page = (pc & ~0xFFF) + (imm << 12)
                                str_foff = self._foff(page + add_imm)
                                if 0 <= str_foff < self.size - 10:
                                    snippet = self.raw[str_foff : str_foff + 60]
                                    if b"@%s:%d" in snippet or b"%s:%d" in snippet:
                                        confirmed += 1
                                        break
                        break
            if confirmed >= 3:
                self.panic_off = target_off
                return
        self.panic_off = candidates[2][0] if len(candidates) > 2 else candidates[0][0]

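The ADRP decode used by `_build_adrp_index` above can be exercised in isolation. A minimal standalone sketch (hypothetical helper name `decode_adrp`; the bit layout matches the index builder: `immlo` in bits [30:29], `immhi` in bits [23:5], sign-extended 21-bit page delta):

```python
def decode_adrp(insn: int, pc: int):
    """Decode an A64 ADRP instruction; return (rd, target_page) or None."""
    if (insn & 0x9F000000) != 0x90000000:  # not an ADRP opcode
        return None
    rd = insn & 0x1F
    immhi = (insn >> 5) & 0x7FFFF
    immlo = (insn >> 29) & 0x3
    imm = (immhi << 2) | immlo
    if imm & (1 << 20):  # sign-extend the 21-bit page count
        imm -= 1 << 21
    # ADRP computes a 4 KiB page address relative to the PC's page
    return rd, (pc & ~0xFFF) + (imm << 12)


# 0xB0000000 encodes `adrp x0, #+1 page`; from PC 0x1000 that lands on page 0x2000
assert decode_adrp(0xB0000000, 0x1000) == (0, 0x2000)
```

This mirrors why the index is keyed by target page: a later ADD supplies the low 12 bits, so only the page needs to match during string-reference lookup.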
    # ── Helpers ──────────────────────────────────────────────────
    def _disas_at(self, off, count=1):
        """Disassemble *count* instructions at file offset. Returns a list."""
        if off < 0 or off >= self.size:
            return []

        key = None
        if count <= 4:
            key = (off, count)
            cached = self._disas_cache.get(key)
            if cached is not None:
                return cached

        end = min(off + count * 4, self.size)
        code = bytes(self.raw[off:end])
        insns = list(_cs.disasm(code, off, count))

        if key is not None:
            if len(self._disas_cache) >= self._disas_cache_limit:
                self._disas_cache.clear()
            self._disas_cache[key] = insns

        return insns

    def _is_bl(self, off):
        """Return BL target file offset, or -1 if not a BL."""
        insns = self._disas_at(off)
        if insns and insns[0].mnemonic == "bl":
            return insns[0].operands[0].imm
        return -1

    def _is_cond_branch_w0(self, off):
        """Return True if instruction is a conditional branch on w0 (cbz/cbnz/tbz/tbnz)."""
        insns = self._disas_at(off)
        if not insns:
            return False
        i = insns[0]
        if i.mnemonic in ("cbz", "cbnz", "tbz", "tbnz"):
            return (
                i.operands[0].type == ARM64_OP_REG and i.operands[0].reg == ARM64_REG_W0
            )
        return False

    def find_string(self, s, start=0):
        """Find string, return file offset of the enclosing C string start."""
        if isinstance(s, str):
            s = s.encode()
        off = self.raw.find(s, start)
        if off < 0:
            return -1
        # Walk backward to the preceding NUL — that's the C string start
        cstr = off
        while cstr > 0 and self.raw[cstr - 1] != 0:
            cstr -= 1
        return cstr

    def find_string_refs(self, str_off, code_start=None, code_end=None):
        """Find all (adrp_off, add_off, dest_reg) referencing str_off via ADRP+ADD."""
        key = (str_off, code_start, code_end)
        cached = self._string_refs_cache.get(key)
        if cached is not None:
            return cached

        target_va = self._va(str_off)
        target_page = target_va & ~0xFFF
        page_off = target_va & 0xFFF

        refs = []
        for adrp_off, rd in self.adrp_by_page.get(target_page, []):
            if code_start is not None and adrp_off < code_start:
                continue
            if code_end is not None and adrp_off >= code_end:
                continue
            if adrp_off + 4 >= self.size:
                continue
            nxt = _rd32(self.raw, adrp_off + 4)
            # ADD (imm) 64-bit: 1001_0001_00_imm12_Rn_Rd
            if (nxt & 0xFFC00000) != 0x91000000:
                continue
            add_rn = (nxt >> 5) & 0x1F
            add_imm = (nxt >> 10) & 0xFFF
            if add_rn == rd and add_imm == page_off:
                add_rd = nxt & 0x1F
                refs.append((adrp_off, adrp_off + 4, add_rd))
        self._string_refs_cache[key] = refs
        return refs

    def find_function_start(self, off, max_back=0x4000):
        """Walk backwards to find PACIBSP or STP x29,x30,[sp,#imm].

        When STP x29,x30 is found, continues backward up to 0x20 more
        bytes to look for PACIBSP (ARM64e functions may have several STP
        instructions in the prologue before STP x29,x30).
        """
        use_cache = max_back == 0x4000
        if use_cache:
            cached = self._func_start_cache.get(off)
            if cached is not None:
                return cached

        result = -1
        for o in range(off - 4, max(off - max_back, 0), -4):
            insn = _rd32(self.raw, o)
            if insn == _PACIBSP_U32:
                result = o
                break
            dis = self._disas_at(o)
            if dis and dis[0].mnemonic == "stp" and "x29, x30, [sp" in dis[0].op_str:
                # Check further back for PACIBSP (prologue may have
                # multiple STP instructions before x29,x30)
                for k in range(o - 4, max(o - 0x24, 0), -4):
                    if _rd32(self.raw, k) == _PACIBSP_U32:
                        result = k
                        break
                if result < 0:
                    result = o
                break

        if use_cache:
            self._func_start_cache[off] = result
        return result

    def _disas_n(self, buf, off, count):
        """Disassemble *count* instructions from *buf* at file offset *off*."""
        end = min(off + count * 4, len(buf))
        if off < 0 or off >= len(buf):
            return []
        code = bytes(buf[off:end])
        return list(_cs.disasm(code, off, count))

    def _fmt_insn(self, insn, marker=""):
        """Format one capstone instruction for display."""
        raw = insn.bytes
        hex_str = " ".join(f"{b:02x}" for b in raw)
        s = f"  0x{insn.address:08X}: {hex_str:12s} {insn.mnemonic:8s} {insn.op_str}"
        if marker:
            s += f" {marker}"
        return s

    def _print_patch_context(self, off, patch_bytes, desc):
        """Print disassembly before/after a patch site for debugging."""
        ctx = 3  # instructions of context before and after
        # -- BEFORE (original bytes) --
        lines = [f"  ┌─ PATCH 0x{off:08X}: {desc}"]
        lines.append("  │ BEFORE:")
        start = max(off - ctx * 4, 0)
        before_insns = self._disas_n(self.raw, start, ctx + 1 + ctx)
        for insn in before_insns:
            if insn.address == off:
                lines.append(self._fmt_insn(insn, " ◄━━ PATCHED"))
            elif off < insn.address < off + len(patch_bytes):
                lines.append(self._fmt_insn(insn, " ◄━━ PATCHED"))
            else:
                lines.append(self._fmt_insn(insn))

        # -- AFTER (new bytes) --
        lines.append("  │ AFTER:")
        after_insns = self._disas_n(self.raw, start, ctx)
        for insn in after_insns:
            lines.append(self._fmt_insn(insn))
        # Decode the patch bytes themselves
        patch_insns = list(_cs.disasm(patch_bytes, off, len(patch_bytes) // 4))
        for insn in patch_insns:
            lines.append(self._fmt_insn(insn, " ◄━━ NEW"))
        # Trailing context after the patch
        trail_start = off + len(patch_bytes)
        trail_insns = self._disas_n(self.raw, trail_start, ctx)
        for insn in trail_insns:
            lines.append(self._fmt_insn(insn))
        lines.append("  └─")
        self._log("\n".join(lines))

    def emit(self, off, patch_bytes, desc):
        """Record a patch and apply it to self.data immediately.

        Writing through to self.data ensures _find_code_cave() sees
        previously allocated shellcode and won't reuse the same cave.
        """
        patch_bytes = bytes(patch_bytes)
        with self._emit_lock:
            existing = self._patch_by_off.get(off)
            if existing is not None:
                existing_bytes, existing_desc = existing
                if existing_bytes != patch_bytes:
                    raise RuntimeError(
                        f"Conflicting patch at 0x{off:08X}: "
                        f"{existing_desc!r} vs {desc!r}"
                    )
                return

            self._patch_by_off[off] = (patch_bytes, desc)
            self.patches.append((off, patch_bytes, desc))
            self.data[off : off + len(patch_bytes)] = patch_bytes
            self._patch_num += 1
            patch_num = self._patch_num
        print(f"  [{patch_num:2d}] 0x{off:08X}  {desc}")
        if self.verbose:
            self._print_patch_context(off, patch_bytes, desc)

    def _find_by_string_in_range(self, string, code_range, label):
        """Find string, find ADRP+ADD ref in code_range, return ref list."""
        str_off = self.find_string(string)
        if str_off < 0:
            self._log(f"  [-] string not found: {string!r}")
            return []
        refs = self.find_string_refs(str_off, code_range[0], code_range[1])
        if not refs:
            self._log(f"  [-] no code refs to {label} (str at 0x{str_off:X})")
        return refs

    # ── Chained fixup pointer decoding ───────────────────────────
    def _decode_chained_ptr(self, val):
        """Decode an arm64e chained fixup pointer to a file offset.

        - auth rebase (bit63=1): foff = bits[31:0]
        - non-auth rebase (bit63=0): VA = (bits[50:43] << 56) | bits[42:0]
        """
        if val == 0:
            return -1
        if val & (1 << 63):  # auth rebase
            return val & 0xFFFFFFFF
        else:  # non-auth rebase
            target = val & 0x7FFFFFFFFFF  # bits[42:0]
            high8 = (val >> 43) & 0xFF
            full_va = (high8 << 56) | target
            if full_va > self.base_va:
                return full_va - self.base_va
            return -1

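The chained-fixup decoding above is a pure bit-level transform, so it can be sketched as a free function and checked against synthetic pointers. A minimal sketch (hypothetical name `decode_chained_ptr`, taking `base_va` as a parameter; the auth/non-auth layouts follow the docstring of the method above):

```python
def decode_chained_ptr(val: int, base_va: int) -> int:
    """Decode an arm64e chained-fixup rebase pointer to a file offset, or -1."""
    if val == 0:
        return -1
    if val & (1 << 63):  # auth rebase: low 32 bits are the file offset
        return val & 0xFFFFFFFF
    target = val & 0x7FFFFFFFFFF   # bits[42:0] of the target VA
    high8 = (val >> 43) & 0xFF     # bits[50:43] become VA bits [63:56]
    full_va = (high8 << 56) | target
    return full_va - base_va if full_va > base_va else -1


# auth rebase: offset is carried directly in the low 32 bits
assert decode_chained_ptr((1 << 63) | 0x1234, 0) == 0x1234
```

With a synthetic non-auth pointer whose high8 byte reconstructs the VA top bits, the file offset falls out as `full_va - base_va`, which is how the ops-table reader above turns stored pointers back into scannable offsets.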
    # ═══════════════════════════════════════════════════════════════
    # Per-patch finders
    # ═══════════════════════════════════════════════════════════════

    _COND_BRANCH_MNEMONICS = frozenset(
        (
            "b.eq", "b.ne", "b.cs", "b.hs", "b.cc", "b.lo",
            "b.mi", "b.pl", "b.vs", "b.vc", "b.hi", "b.ls",
            "b.ge", "b.lt", "b.gt", "b.le", "b.al",
            "cbz", "cbnz", "tbz", "tbnz",
        )
    )

    def _decode_branch_target(self, off):
        """Decode conditional branch at off via capstone. Returns (target, mnemonic) or (None, None)."""
        insns = self._disas_at(off)
        if not insns:
            return None, None
        i = insns[0]
        if i.mnemonic in self._COND_BRANCH_MNEMONICS:
            # Target is always the last IMM operand
            for op in reversed(i.operands):
                if op.type == ARM64_OP_IMM:
                    return op.imm, i.mnemonic
        return None, None

    def _get_kernel_text_range(self):
        """Return (start, end) file offsets of the kernel's own __TEXT_EXEC.__text.

        Parses fileset entries (LC_FILESET_ENTRY) to find the kernel component,
        then reads its Mach-O header to get the __TEXT_EXEC.__text section.
        Falls back to the full __TEXT_EXEC segment.
        """
        # Try fileset entries
        ncmds = struct.unpack_from("<I", self.raw, 16)[0]
        off = 32
        for _ in range(ncmds):
            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
            if cmd == 0x80000035:  # LC_FILESET_ENTRY
                vmaddr = struct.unpack_from("<Q", self.raw, off + 8)[0]
                str_off_in_cmd = struct.unpack_from("<I", self.raw, off + 24)[0]
                entry_id = self.raw[off + str_off_in_cmd :].split(b"\x00")[0].decode()
                if entry_id == "com.apple.kernel":
                    kext_foff = vmaddr - self.base_va
                    text_range = self._parse_kext_text_exec(kext_foff)
                    if text_range:
                        return text_range
            off += cmdsize
        return self.kern_text

    @staticmethod
    def _is_func_boundary(insn):
        """Return True if *insn* typically ends/starts a function."""
        return insn in _FUNC_BOUNDARY_U32S

    def _find_sandbox_ops_table_via_conf(self):
        """Find Sandbox mac_policy_ops table via mac_policy_conf struct."""
        self._log("\n[*] Finding Sandbox mac_policy_ops via mac_policy_conf...")

        seatbelt_off = self.find_string(b"Seatbelt sandbox policy")
        sandbox_raw = self.raw.find(b"\x00Sandbox\x00")
        sandbox_off = sandbox_raw + 1 if sandbox_raw >= 0 else -1
        if seatbelt_off < 0 or sandbox_off < 0:
            self._log("  [-] Sandbox/Seatbelt strings not found")
            return None
        self._log(
            f"  [*] Sandbox string at foff 0x{sandbox_off:X}, "
            f"Seatbelt at 0x{seatbelt_off:X}"
        )

        data_ranges = []
        for name, vmaddr, fileoff, filesize, prot in self.all_segments:
            if name in ("__DATA_CONST", "__DATA") and filesize > 0:
                data_ranges.append((fileoff, fileoff + filesize))

        for d_start, d_end in data_ranges:
            for i in range(d_start, d_end - 40, 8):
                val = _rd64(self.raw, i)
                if val == 0 or (val & (1 << 63)):
                    continue
                if (val & 0x7FFFFFFFFFF) != sandbox_off:
                    continue
                val2 = _rd64(self.raw, i + 8)
                if (val2 & (1 << 63)) or (val2 & 0x7FFFFFFFFFF) != seatbelt_off:
                    continue
                val_ops = _rd64(self.raw, i + 32)
                if not (val_ops & (1 << 63)):
                    ops_off = val_ops & 0x7FFFFFFFFFF
                    self._log(
                        f"  [+] mac_policy_conf at foff 0x{i:X}, "
                        f"mpc_ops -> 0x{ops_off:X}"
                    )
                    return ops_off

        self._log("  [-] mac_policy_conf not found")
        return None

    def _read_ops_entry(self, table_off, index):
        """Read a function pointer from the ops table, handling chained fixups."""
        off = table_off + index * 8
        if off + 8 > self.size:
            return -1
        val = _rd64(self.raw, off)
        if val == 0:
            return 0
        return self._decode_chained_ptr(val)
@@ -1,173 +0,0 @@
"""kernel_jb.py — Jailbreak extension patcher for iOS kernelcache."""

import time

from .kernel_jb_base import KernelJBPatcherBase
from .kernel_jb_patch_amfi_trustcache import KernelJBPatchAmfiTrustcacheMixin
from .kernel_jb_patch_amfi_execve import KernelJBPatchAmfiExecveMixin
from .kernel_jb_patch_task_conversion import KernelJBPatchTaskConversionMixin
from .kernel_jb_patch_sandbox_extended import KernelJBPatchSandboxExtendedMixin
from .kernel_jb_patch_post_validation import KernelJBPatchPostValidationMixin
from .kernel_jb_patch_proc_security import KernelJBPatchProcSecurityMixin
from .kernel_jb_patch_proc_pidinfo import KernelJBPatchProcPidinfoMixin
from .kernel_jb_patch_port_to_map import KernelJBPatchPortToMapMixin
from .kernel_jb_patch_vm_fault import KernelJBPatchVmFaultMixin
from .kernel_jb_patch_vm_protect import KernelJBPatchVmProtectMixin
from .kernel_jb_patch_mac_mount import KernelJBPatchMacMountMixin
from .kernel_jb_patch_dounmount import KernelJBPatchDounmountMixin
from .kernel_jb_patch_bsd_init_auth import KernelJBPatchBsdInitAuthMixin
from .kernel_jb_patch_spawn_persona import KernelJBPatchSpawnPersonaMixin
from .kernel_jb_patch_task_for_pid import KernelJBPatchTaskForPidMixin
from .kernel_jb_patch_load_dylinker import KernelJBPatchLoadDylinkerMixin
from .kernel_jb_patch_shared_region import KernelJBPatchSharedRegionMixin
from .kernel_jb_patch_nvram import KernelJBPatchNvramMixin
from .kernel_jb_patch_secure_root import KernelJBPatchSecureRootMixin
from .kernel_jb_patch_thid_crash import KernelJBPatchThidCrashMixin
from .kernel_jb_patch_cred_label import KernelJBPatchCredLabelMixin
from .kernel_jb_patch_syscallmask import KernelJBPatchSyscallmaskMixin
from .kernel_jb_patch_hook_cred_label import KernelJBPatchHookCredLabelMixin
from .kernel_jb_patch_kcall10 import KernelJBPatchKcall10Mixin
from .kernel_jb_patch_iouc_macf import KernelJBPatchIoucmacfMixin


class KernelJBPatcher(
    KernelJBPatchKcall10Mixin,
    KernelJBPatchIoucmacfMixin,
    KernelJBPatchHookCredLabelMixin,
    KernelJBPatchSyscallmaskMixin,
    KernelJBPatchCredLabelMixin,
    KernelJBPatchThidCrashMixin,
    KernelJBPatchSecureRootMixin,
    KernelJBPatchNvramMixin,
    KernelJBPatchSharedRegionMixin,
    KernelJBPatchLoadDylinkerMixin,
    KernelJBPatchTaskForPidMixin,
    KernelJBPatchSpawnPersonaMixin,
    KernelJBPatchBsdInitAuthMixin,
    KernelJBPatchDounmountMixin,
    KernelJBPatchMacMountMixin,
    KernelJBPatchVmProtectMixin,
    KernelJBPatchVmFaultMixin,
    KernelJBPatchPortToMapMixin,
    KernelJBPatchProcPidinfoMixin,
    KernelJBPatchProcSecurityMixin,
    KernelJBPatchPostValidationMixin,
    KernelJBPatchSandboxExtendedMixin,
    KernelJBPatchTaskConversionMixin,
    KernelJBPatchAmfiExecveMixin,
    KernelJBPatchAmfiTrustcacheMixin,
    KernelJBPatcherBase,
):
    _TIMING_LOG_MIN_SECONDS = 10.0

    # Group A: Core gate-bypass methods.
    _GROUP_A_METHODS = (
        "patch_amfi_cdhash_in_trustcache",  # JB-01 / A1
        # "patch_amfi_execve_kill_path",  # JB-02 / A2 (superseded by C21 on current PCC 26.1 path; keep standalone only)
        "patch_task_conversion_eval_internal",  # JB-08 / A3
        "patch_sandbox_hooks_extended",  # JB-09 / A4
        "patch_iouc_failed_macf",  # JB-10 / A5
    )

    # Group B: Pattern/string anchored methods.
    _GROUP_B_METHODS = (
        "patch_post_validation_additional",  # JB-06 / B5
        "patch_proc_security_policy",  # JB-11 / B6
        "patch_proc_pidinfo",  # JB-12 / B7
        "patch_convert_port_to_map",  # JB-13 / B8
        "patch_bsd_init_auth",  # JB-14 / B13 (retargeted 2026-03-06 to real _bsd_init rootauth gate)
        "patch_dounmount",  # JB-15 / B12
        "patch_io_secure_bsd_root",  # JB-16 / B19 (retargeted 2026-03-06 to SecureRootName deny-return)
        "patch_load_dylinker",  # JB-17 / B16
        "patch_mac_mount",  # JB-18 / B11
        "patch_nvram_verify_permission",  # JB-19 / B18
        "patch_shared_region_map",  # JB-20 / B17
        "patch_spawn_validate_persona",  # JB-21 / B14
        "patch_task_for_pid",  # JB-22 / B15
        "patch_thid_should_crash",  # JB-23 / B20
        "patch_vm_fault_enter_prepare",  # JB-24 / B9 (retargeted 2026-03-06 to upstream cs_bypass gate)
        "patch_vm_map_protect",  # JB-25 / B10
    )

    # Group C: Shellcode/trampoline heavy methods.
    _GROUP_C_METHODS = (
        "patch_cred_label_update_execve",  # JB-03 / C21 (disabled: reworked on 2026-03-06, pending boot revalidation)
        "patch_hook_cred_label_update_execve",  # JB-04 / C23 (faithful upstream trampoline)
        "patch_kcall10",  # JB-05 / C24 (ABI-correct rebuilt cave)
        "patch_syscallmask_apply_to_proc",  # JB-07 / C22
    )

    # Active JB patch schedule (known failing methods are temporarily excluded).
    _PATCH_METHODS = _GROUP_A_METHODS + _GROUP_B_METHODS + _GROUP_C_METHODS

    def __init__(self, data, verbose=False):
        super().__init__(data, verbose)
        self.patch_timings = []

    def _run_patch_method_timed(self, method_name):
        before = len(self.patches)
        t0 = time.perf_counter()
        getattr(self, method_name)()
        dt = time.perf_counter() - t0
        added = len(self.patches) - before
        self.patch_timings.append((method_name, dt, added))
        if dt >= self._TIMING_LOG_MIN_SECONDS:
            print(f"  [T] {method_name:36s} {dt:7.3f}s (+{added})")

    def _run_methods(self, methods):
        for method_name in methods:
            self._run_patch_method_timed(method_name)

    def _build_method_plan(self):
        methods = list(self._PATCH_METHODS)
        final = []
        seen = set()
        for method_name in methods:
            if method_name in seen:
                continue
            if not callable(getattr(self, method_name, None)):
                continue
            seen.add(method_name)
            final.append(method_name)
        return tuple(final)

    def _print_timing_summary(self):
        if not self.patch_timings:
            return
        slow_items = [
            item
            for item in sorted(
                self.patch_timings, key=lambda item: item[1], reverse=True
            )
            if item[1] >= self._TIMING_LOG_MIN_SECONDS
        ]
        if not slow_items:
            return

        print(
            "\n  [Timing Summary] JB patch method cost (desc, >= "
            f"{self._TIMING_LOG_MIN_SECONDS:.0f}s):"
        )
        for method_name, dt, added in slow_items:
            print(f"    {dt:7.3f}s (+{added:3d}) {method_name}")

    def find_all(self):
        self._reset_patch_state()
        self.patch_timings = []

        plan = self._build_method_plan()
        self._log("[*] JB method plan: " + (", ".join(plan) if plan else "(empty)"))
        self._run_methods(plan)
        self._print_timing_summary()

        return self.patches

    def apply(self):
        patches = self.find_all()
        for off, patch_bytes, _ in patches:
            self.data[off : off + len(patch_bytes)] = patch_bytes
        return len(patches)

    # ══════════════════════════════════════════════════════════════
    # Group A: Existing patches (unchanged)
    # ══════════════════════════════════════════════════════════════
@@ -1,361 +0,0 @@
"""kernel_jb_base.py — JB base class with infrastructure methods."""

import struct
from collections import Counter

from .kernel_asm import _PACIBSP_U32
from capstone.arm64_const import (
    ARM64_OP_REG,
    ARM64_OP_IMM,
    ARM64_OP_MEM,
    ARM64_REG_X0,
    ARM64_REG_X1,
    ARM64_REG_W0,
    ARM64_REG_X8,
)

from .kernel import (
    KernelPatcher,
    NOP,
    MOV_X0_0,
    MOV_X0_1,
    MOV_W0_0,
    MOV_W0_1,
    CMP_W0_W0,
    CMP_X0_X0,
    RET,
    asm,
    _rd32,
    _rd64,
)


CBZ_X2_8 = asm("cbz x2, #8")
STR_X0_X2 = asm("str x0, [x2]")
CMP_XZR_XZR = asm("cmp xzr, xzr")
MOV_X8_XZR = asm("mov x8, xzr")


class KernelJBPatcherBase(KernelPatcher):
    def __init__(self, data, verbose=False):
        super().__init__(data, verbose)
        self._jb_scan_cache = {}
        self._proc_info_anchor_scanned = False
        self._proc_info_anchor = (-1, -1)
        self._build_symbol_table()

    # ── Symbol table (best-effort, may find 0 on stripped kernels) ──

    def _build_symbol_table(self):
        """Parse nlist entries from LC_SYMTAB to build symbol→foff map."""
        self.symbols = {}

        # Parse top-level LC_SYMTAB
        ncmds = struct.unpack_from("<I", self.raw, 16)[0]
        off = 32
        for _ in range(ncmds):
            if off + 8 > self.size:
                break
            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
            if cmd == 0x2:  # LC_SYMTAB
                symoff = struct.unpack_from("<I", self.raw, off + 8)[0]
                nsyms = struct.unpack_from("<I", self.raw, off + 12)[0]
                stroff = struct.unpack_from("<I", self.raw, off + 16)[0]
                self._parse_nlist(symoff, nsyms, stroff)
            off += cmdsize

        # Parse fileset entries' LC_SYMTAB
        off = 32
        for _ in range(ncmds):
            if off + 8 > self.size:
                break
            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
            if cmd == 0x80000035:  # LC_FILESET_ENTRY
                # fileoff is at off+16
                foff_entry = struct.unpack_from("<Q", self.raw, off + 16)[0]
                self._parse_fileset_symtab(foff_entry)
            off += cmdsize

        self._log(f"[*] Symbol table: {len(self.symbols)} symbols resolved")

    def _parse_fileset_symtab(self, mh_off):
        """Parse LC_SYMTAB from a fileset entry Mach-O."""
        if mh_off < 0 or mh_off + 32 > self.size:
            return
        magic = _rd32(self.raw, mh_off)
        if magic != 0xFEEDFACF:
            return
        ncmds = struct.unpack_from("<I", self.raw, mh_off + 16)[0]
        off = mh_off + 32
        for _ in range(ncmds):
            if off + 8 > self.size:
                break
            cmd, cmdsize = struct.unpack_from("<II", self.raw, off)
            if cmd == 0x2:  # LC_SYMTAB
                symoff = struct.unpack_from("<I", self.raw, off + 8)[0]
                nsyms = struct.unpack_from("<I", self.raw, off + 12)[0]
                stroff = struct.unpack_from("<I", self.raw, off + 16)[0]
                self._parse_nlist(symoff, nsyms, stroff)
            off += cmdsize

    def _parse_nlist(self, symoff, nsyms, stroff):
        """Parse nlist64 entries: add defined function symbols to self.symbols."""
        for i in range(nsyms):
            entry_off = symoff + i * 16
            if entry_off + 16 > self.size:
                break
            n_strx, n_type, n_sect, n_desc, n_value = struct.unpack_from(
                "<IBBHQ", self.raw, entry_off
            )
            if n_type & 0x0E != 0x0E:
                continue
            if n_value == 0:
                continue
            name_off = stroff + n_strx
            if name_off >= self.size:
                continue
            name_end = self.raw.find(b"\x00", name_off)
            if name_end < 0 or name_end - name_off > 512:
                continue
            name = self.raw[name_off:name_end].decode("ascii", errors="replace")
            foff = n_value - self.base_va
            if 0 <= foff < self.size:
                self.symbols[name] = foff

    def _resolve_symbol(self, name):
        """Look up a function symbol, return file offset or -1."""
        return self.symbols.get(name, -1)

    # ── Shared kernel anchor finders ──────────────────────────────

    def _find_proc_info_anchor(self):
        """Find `_proc_info` switch anchor as (func_start, switch_off).

        Shared by B6/B7 patches. Cached because searching this anchor in
        `kern_text` is expensive on stripped kernels.
        """
        if self._proc_info_anchor_scanned:
            return self._proc_info_anchor

        def _scan_range(start, end):
            """Fast raw matcher for:
                sub wN, wM, #1
                cmp wN, #0x21
            """
            key = ("proc_info_switch", start, end)
            cached = self._jb_scan_cache.get(key)
            if cached is not None:
                return cached

            scan_start = max(start, 0)
            limit = min(end - 8, self.size - 8)
            for off in range(scan_start, limit, 4):
                i0 = _rd32(self.raw, off)
                # SUB (immediate), 32-bit
                if (i0 & 0xFF000000) != 0x51000000:
                    continue
                if ((i0 >> 22) & 1) != 0:  # sh must be 0
                    continue
                if ((i0 >> 10) & 0xFFF) != 1:
                    continue
                sub_rd = i0 & 0x1F

                i1 = _rd32(self.raw, off + 4)
                # CMP wN,#imm == SUBS wzr,wN,#imm alias (rd must be wzr)
                if (i1 & 0xFF00001F) != 0x7100001F:
                    continue
                if ((i1 >> 22) & 1) != 0:  # sh must be 0
                    continue
                if ((i1 >> 10) & 0xFFF) != 0x21:
                    continue
                cmp_rn = (i1 >> 5) & 0x1F
                if sub_rd != cmp_rn:
                    continue

                self._jb_scan_cache[key] = off
                return off
            self._jb_scan_cache[key] = -1
            return -1

        # Prefer direct symbol when present.
        proc_info_func = self._resolve_symbol("_proc_info")
        if proc_info_func >= 0:
            search_end = min(proc_info_func + 0x800, self.size)
            switch_off = _scan_range(proc_info_func, search_end)
            if switch_off < 0:
                switch_off = proc_info_func
            self._proc_info_anchor = (proc_info_func, switch_off)
            self._proc_info_anchor_scanned = True
            return self._proc_info_anchor

        ks, ke = self.kern_text
        switch_off = _scan_range(ks, ke)
        if switch_off >= 0:
            proc_info_func = self.find_function_start(switch_off)
            self._proc_info_anchor = (proc_info_func, switch_off)
        else:
||||
self._proc_info_anchor = (-1, -1)
|
||||
|
||||
self._proc_info_anchor_scanned = True
|
||||
return self._proc_info_anchor
|
||||
|
||||
# ── Code cave finder ──────────────────────────────────────────
|
||||
|
||||
def _find_code_cave(self, size, align=4):
|
||||
"""Find a region of zeros/0xFF/UDF in executable memory for shellcode.
|
||||
Returns file offset of the cave start, or -1 if not found.
|
||||
Reads from self.data (mutable) so previously allocated caves are skipped.
|
||||
|
||||
Only searches __TEXT_EXEC and __TEXT_BOOT_EXEC segments.
|
||||
__PRELINK_TEXT is excluded because KTRR makes it non-executable at
|
||||
runtime on ARM64e, even though the Mach-O marks it R-X.
|
||||
"""
|
||||
EXEC_SEGS = ("__TEXT_EXEC", "__TEXT_BOOT_EXEC")
|
||||
exec_ranges = [
|
||||
(foff, foff + fsz)
|
||||
for name, _, foff, fsz, _ in self.all_segments
|
||||
if name in EXEC_SEGS and fsz > 0
|
||||
]
|
||||
exec_ranges.sort()
|
||||
|
||||
needed = (size + align - 1) // align * align
|
||||
for rng_start, rng_end in exec_ranges:
|
||||
run_start = -1
|
||||
run_len = 0
|
||||
for off in range(rng_start, rng_end, 4):
|
||||
val = _rd32(self.data, off)
|
||||
if val == 0x00000000 or val == 0xFFFFFFFF or val == 0xD4200000:
|
||||
if run_start < 0:
|
||||
run_start = off
|
||||
run_len = 4
|
||||
else:
|
||||
run_len += 4
|
||||
if run_len >= needed:
|
||||
return run_start
|
||||
else:
|
||||
run_start = -1
|
||||
run_len = 0
|
||||
return -1
|
||||
|
||||
# ── Branch encoding helpers ───────────────────────────────────
|
||||
|
||||
def _encode_b(self, from_off, to_off):
|
||||
"""Encode an unconditional B instruction."""
|
||||
delta = (to_off - from_off) // 4
|
||||
if delta < -(1 << 25) or delta >= (1 << 25):
|
||||
return None
|
||||
return struct.pack("<I", 0x14000000 | (delta & 0x3FFFFFF))
|
||||
|
||||
def _encode_bl(self, from_off, to_off):
|
||||
"""Encode a BL instruction."""
|
||||
delta = (to_off - from_off) // 4
|
||||
if delta < -(1 << 25) or delta >= (1 << 25):
|
||||
return None
|
||||
return struct.pack("<I", 0x94000000 | (delta & 0x3FFFFFF))
|
||||
|
||||
# ── Function finding helpers ──────────────────────────────────
|
||||
|
||||
def _find_func_end(self, func_start, max_size=0x4000):
|
||||
"""Find the end of a function (next PACIBSP or limit)."""
|
||||
limit = min(func_start + max_size, self.size)
|
||||
for off in range(func_start + 4, limit, 4):
|
||||
if _rd32(self.raw, off) == _PACIBSP_U32:
|
||||
return off
|
||||
return limit
|
||||
|
||||
def _find_bl_to_panic_in_range(self, start, end):
|
||||
"""Find first BL to _panic in range, return offset or -1."""
|
||||
for off in range(start, end, 4):
|
||||
bl_target = self._is_bl(off)
|
||||
if bl_target == self.panic_off:
|
||||
return off
|
||||
return -1
|
||||
|
||||
def _find_func_by_string(self, string, code_range=None):
|
||||
"""Find a function that references a given string.
|
||||
Returns the function start (PACIBSP), or -1.
|
||||
"""
|
||||
str_off = self.find_string(string)
|
||||
if str_off < 0:
|
||||
return -1
|
||||
if code_range:
|
||||
refs = self.find_string_refs(str_off, *code_range)
|
||||
else:
|
||||
refs = self.find_string_refs(str_off)
|
||||
if not refs:
|
||||
return -1
|
||||
func_start = self.find_function_start(refs[0][0])
|
||||
return func_start
|
||||
|
||||
def _find_func_containing_string(self, string, code_range=None):
|
||||
"""Find a function containing a string reference.
|
||||
Returns (func_start, func_end, refs) or (None, None, None).
|
||||
"""
|
||||
str_off = self.find_string(string)
|
||||
if str_off < 0:
|
||||
return None, None, None
|
||||
if code_range:
|
||||
refs = self.find_string_refs(str_off, *code_range)
|
||||
else:
|
||||
refs = self.find_string_refs(str_off)
|
||||
if not refs:
|
||||
return None, None, None
|
||||
func_start = self.find_function_start(refs[0][0])
|
||||
if func_start < 0:
|
||||
return None, None, None
|
||||
func_end = self._find_func_end(func_start)
|
||||
return func_start, func_end, refs
|
||||
|
||||
def _find_nosys(self):
|
||||
"""Find _nosys: a tiny function that returns ENOSYS (78 = 0x4e).
|
||||
Pattern: mov w0, #0x4e; ret (or with PACIBSP wrapper).
|
||||
"""
|
||||
# Search for: mov w0, #0x4e (= 0x528009C0) followed by ret (= 0xD65F03C0)
|
||||
mov_w0_4e = struct.unpack("<I", asm("mov w0, #0x4e"))[0]
|
||||
ret_val = struct.unpack("<I", RET)[0]
|
||||
for s, e in self.code_ranges:
|
||||
for off in range(s, e - 4, 4):
|
||||
v0 = _rd32(self.raw, off)
|
||||
v1 = _rd32(self.raw, off + 4)
|
||||
if v0 == mov_w0_4e and v1 == ret_val:
|
||||
return off
|
||||
# Also check with PACIBSP prefix
|
||||
if v0 == 0xD503237F and v1 == mov_w0_4e:
|
||||
v2 = _rd32(self.raw, off + 8)
|
||||
if v2 == ret_val:
|
||||
return off
|
||||
return -1
|
||||
|
||||
# ══════════════════════════════════════════════════════════════
|
||||
# Patch dispatcher
|
||||
# ══════════════════════════════════════════════════════════════
|
||||
|
||||
|
||||
# Re-export for patch mixins
|
||||
__all__ = [
|
||||
"KernelJBPatcherBase",
|
||||
"CBZ_X2_8",
|
||||
"STR_X0_X2",
|
||||
"CMP_XZR_XZR",
|
||||
"MOV_X8_XZR",
|
||||
"NOP",
|
||||
"MOV_X0_0",
|
||||
"MOV_X0_1",
|
||||
"MOV_W0_0",
|
||||
"MOV_W0_1",
|
||||
"CMP_W0_W0",
|
||||
"CMP_X0_X0",
|
||||
"RET",
|
||||
"asm",
|
||||
"_rd32",
|
||||
"_rd64",
|
||||
"struct",
|
||||
"Counter",
|
||||
"ARM64_OP_REG",
|
||||
"ARM64_OP_IMM",
|
||||
"ARM64_OP_MEM",
|
||||
"ARM64_REG_X0",
|
||||
"ARM64_REG_X1",
|
||||
"ARM64_REG_W0",
|
||||
"ARM64_REG_X8",
|
||||
]
|
||||
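The `_encode_b`/`_encode_bl` helpers above can be sketched as one standalone function; this is a minimal sketch of the AArch64 B/BL encoding they implement (26-bit word-granular offset, `None` beyond the ±128 MiB reach), not the patcher's own API:

```python
import struct


def encode_b(from_off, to_off, link=False):
    """Encode an AArch64 B (or BL when link=True) between two file offsets.

    The immediate is the signed word (4-byte) delta packed into 26 bits;
    out-of-range targets return None, as in the helpers above.
    """
    delta = (to_off - from_off) // 4
    if delta < -(1 << 25) or delta >= (1 << 25):
        return None  # beyond the ±128 MiB direct-branch reach
    base = 0x94000000 if link else 0x14000000
    return struct.pack("<I", base | (delta & 0x3FFFFFF))
```

A forward branch of 16 bytes encodes as `0x14000004`, and the same delta with `link=True` flips only the top bit to `0x94000004`.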
@@ -1,87 +0,0 @@
"""Mixin: KernelJBPatchAmfiExecveMixin."""

from .kernel_jb_base import MOV_W0_0, _rd32


class KernelJBPatchAmfiExecveMixin:
    def patch_amfi_execve_kill_path(self):
        """Bypass AMFI execve kill by changing the shared kill return value.

        All kill paths in the AMFI execve hook converge on a shared epilogue
        that does ``MOV W0, #1`` (kill) then returns. We change that single
        instruction to ``MOV W0, #0`` (allow), which converts every kill path
        to a success return without touching the rest of the function.

        The previous approach (patching early BL+CBZ/CBNZ sites) was incorrect:
        those are vnode-type precondition assertions, not the actual kill
        checks. Replacing BL with MOV X0,#0 triggered the CBZ → panic.
        """
        self._log("\n[JB] AMFI execve kill path: shared MOV W0,#1 → MOV W0,#0")

        str_off = self.find_string(b"AMFI: hook..execve() killing")
        if str_off < 0:
            str_off = self.find_string(b"execve() killing")
        if str_off < 0:
            self._log(" [-] execve kill log string not found")
            return False

        refs = self.find_string_refs(str_off, *self.kern_text)
        if not refs:
            refs = self.find_string_refs(str_off)
        if not refs:
            self._log(" [-] no refs to execve kill log string")
            return False

        patched = False
        seen_funcs = set()
        for adrp_off, _, _ in refs:
            func_start = self.find_function_start(adrp_off)
            if func_start < 0 or func_start in seen_funcs:
                continue
            seen_funcs.add(func_start)

            func_end = min(func_start + 0x800, self.kern_text[1])
            for p in range(func_start + 4, func_end, 4):
                d = self._disas_at(p)
                if d and d[0].mnemonic == "pacibsp":
                    func_end = p
                    break

            # Scan backward from function end for MOV W0, #1 (0x52800020)
            # followed by LDP x29, x30 (epilogue start).
            MOV_W0_1_ENC = 0x52800020
            target_off = -1
            for off in range(func_end - 8, func_start, -4):
                if _rd32(self.raw, off) != MOV_W0_1_ENC:
                    continue
                # Verify next instruction is LDP x29, x30, [sp, #imm]
                d1 = self._disas_at(off + 4)
                if not d1:
                    continue
                i1 = d1[0]
                if i1.mnemonic == "ldp" and "x29, x30" in i1.op_str:
                    target_off = off
                    break

            if target_off < 0:
                self._log(
                    f" [-] MOV W0,#1 + epilogue not found in "
                    f"func 0x{func_start:X}"
                )
                continue

            self.emit(
                target_off,
                MOV_W0_0,
                "mov w0,#0 [AMFI kill return → allow]",
            )
            self._log(
                f" [+] Patched kill return at 0x{target_off:X} "
                f"(func 0x{func_start:X})"
            )
            patched = True
            break

        if not patched:
            self._log(" [-] AMFI execve kill return not found")
        return patched
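The kill-path scan above matches the raw word `0x52800020` for `MOV W0, #1`; as a sanity check, that constant (and the `0x528009C0` noted beside `_find_nosys`) falls out of the MOVZ encoding. A minimal sketch of that encoding, not part of the patcher itself:

```python
def encode_movz_w(rd, imm16):
    """MOVZ Wd, #imm16 (LSL #0): 0x52800000 base, imm16 in bits 20:5, Rd in 4:0."""
    assert 0 <= rd < 32 and 0 <= imm16 < 0x10000
    return 0x52800000 | (imm16 << 5) | rd


# The raw word the backward scan above compares against:
MOV_W0_1_ENC = encode_movz_w(0, 1)
```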
@@ -1,89 +0,0 @@
"""Mixin: KernelJBPatchAmfiTrustcacheMixin."""

from .kernel_jb_base import MOV_X0_1, CBZ_X2_8, STR_X0_X2, RET


class KernelJBPatchAmfiTrustcacheMixin:
    def patch_amfi_cdhash_in_trustcache(self):
        """AMFIIsCDHashInTrustCache rewrite (semantic function matching)."""
        self._log("\n[JB] AMFIIsCDHashInTrustCache: always allow + store flag")

        def _find_after(insns, start, pred):
            for idx in range(start, len(insns)):
                if pred(insns[idx]):
                    return idx
            return -1

        hits = []
        s, e = self.amfi_text
        for off in range(s, e - 4, 4):
            d0 = self._disas_at(off)
            if not d0 or d0[0].mnemonic != "pacibsp":
                continue

            func_end = min(off + 0x200, e)
            for p in range(off + 4, func_end, 4):
                dp = self._disas_at(p)
                if dp and dp[0].mnemonic == "pacibsp":
                    func_end = p
                    break

            insns = []
            for p in range(off, func_end, 4):
                d = self._disas_at(p)
                if not d:
                    break
                insns.append(d[0])

            i1 = _find_after(
                insns, 0, lambda x: x.mnemonic == "mov" and x.op_str == "x19, x2"
            )
            if i1 < 0:
                continue
            i2 = _find_after(
                insns,
                i1 + 1,
                lambda x: x.mnemonic == "stp" and x.op_str.startswith("xzr, xzr, [sp"),
            )
            if i2 < 0:
                continue
            i3 = _find_after(
                insns, i2 + 1, lambda x: x.mnemonic == "mov" and x.op_str == "x2, sp"
            )
            if i3 < 0:
                continue
            i4 = _find_after(insns, i3 + 1, lambda x: x.mnemonic == "bl")
            if i4 < 0:
                continue
            i5 = _find_after(
                insns, i4 + 1, lambda x: x.mnemonic == "mov" and x.op_str == "x20, x0"
            )
            if i5 < 0:
                continue
            i6 = _find_after(
                insns,
                i5 + 1,
                lambda x: x.mnemonic == "cbnz" and x.op_str.startswith("w0,"),
            )
            if i6 < 0:
                continue
            i7 = _find_after(
                insns,
                i6 + 1,
                lambda x: x.mnemonic == "cbz" and x.op_str.startswith("x19,"),
            )
            if i7 < 0:
                continue

            hits.append(off)

        if len(hits) != 1:
            self._log(f" [-] expected 1 AMFI trustcache body hit, found {len(hits)}")
            return False

        func_start = hits[0]
        self.emit(func_start, MOV_X0_1, "mov x0,#1 [AMFIIsCDHashInTrustCache]")
        self.emit(func_start + 4, CBZ_X2_8, "cbz x2,+8 [AMFIIsCDHashInTrustCache]")
        self.emit(func_start + 8, STR_X0_X2, "str x0,[x2] [AMFIIsCDHashInTrustCache]")
        self.emit(func_start + 12, RET, "ret [AMFIIsCDHashInTrustCache]")
        return True
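The four-instruction rewrite above emits the `CBZ_X2_8` constant (`cbz x2, +8`); assuming the standard A64 compare-and-branch layout, its raw value can be derived with a small sketch (the constant name is the patcher's; the encoder below is illustrative only):

```python
def encode_cbz_x(rt, byte_off):
    """CBZ Xt, #byte_off: sf=1 compare-and-branch base 0xB4000000,
    word-granular imm19 in bits 23:5, Rt in bits 4:0."""
    imm19 = (byte_off // 4) & 0x7FFFF
    return 0xB4000000 | (imm19 << 5) | rt
```

Under that layout, `cbz x2, +8` encodes as `0xB4000042`, so a hit skips the `str x0, [x2]` when the out-pointer in `x2` is NULL.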
@@ -1,145 +0,0 @@
"""Mixin: KernelJBPatchBsdInitAuthMixin."""

from .kernel_jb_base import ARM64_OP_REG, ARM64_REG_W0, ARM64_REG_X0, NOP


class KernelJBPatchBsdInitAuthMixin:
    _ROOTVP_PANIC_NEEDLE = b"rootvp not authenticated after mounting"

    def patch_bsd_init_auth(self):
        """Bypass the real rootvp auth failure branch inside ``_bsd_init``.

        Fresh analysis on ``kernelcache.research.vphone600`` shows the boot
        gate is the in-function sequence:

            call vnode ioctl handler for ``FSIOC_KERNEL_ROOTAUTH``
            cbnz w0, panic_path
            bl imageboot_needed

        The older ``ldr/cbz/bl`` matcher was not semantically tied to
        ``_bsd_init`` and could false-hit unrelated functions. We now resolve
        the branch using the panic string anchor and the surrounding local
        control-flow instead.
        """
        self._log("\n[JB] _bsd_init: ignore FSIOC_KERNEL_ROOTAUTH failure")

        func_start = self._resolve_symbol("_bsd_init")
        if func_start < 0:
            func_start = self._func_for_rootvp_anchor()
        if func_start is None or func_start < 0:
            self._log(" [-] _bsd_init not found")
            return False

        site = self._find_bsd_init_rootauth_site(func_start)
        if site is None:
            self._log(" [-] rootauth branch site not found")
            return False

        branch_off, state = site
        if state == "patched":
            self._log(f" [=] rootauth branch already bypassed at 0x{branch_off:X}")
            return True

        self.emit(branch_off, NOP, "NOP cbnz (rootvp auth) [_bsd_init]")
        return True

    def _find_bsd_init_rootauth_site(self, func_start):
        panic_ref = self._rootvp_panic_ref_in_func(func_start)
        if panic_ref is None:
            return None

        adrp_off, add_off = panic_ref
        bl_panic_off = self._find_panic_call_near(add_off)
        if bl_panic_off is None:
            return None

        err_lo = bl_panic_off - 0x40
        err_hi = bl_panic_off + 4
        imageboot_needed = self._resolve_symbol("_imageboot_needed")

        candidates = []
        scan_start = max(func_start, adrp_off - 0x400)
        for off in range(scan_start, adrp_off, 4):
            state = self._match_rootauth_branch_site(
                off, err_lo, err_hi, imageboot_needed
            )
            if state is not None:
                candidates.append((off, state))

        if not candidates:
            return None

        if len(candidates) > 1:
            live = [item for item in candidates if item[1] == "live"]
            if len(live) == 1:
                return live[0]
            return None

        return candidates[0]

    def _rootvp_panic_ref_in_func(self, func_start):
        str_off = self.find_string(self._ROOTVP_PANIC_NEEDLE)
        if str_off < 0:
            return None

        refs = self.find_string_refs(str_off, *self.kern_text)
        for adrp_off, add_off, _ in refs:
            if self.find_function_start(adrp_off) == func_start:
                return adrp_off, add_off
        return None

    def _find_panic_call_near(self, add_off):
        for scan in range(add_off, min(add_off + 0x40, self.size), 4):
            if self._is_bl(scan) == self.panic_off:
                return scan
        return None

    def _match_rootauth_branch_site(self, off, err_lo, err_hi, imageboot_needed):
        insns = self._disas_at(off, 1)
        if not insns:
            return None
        insn = insns[0]

        if not self._is_call(off - 4):
            return None
        if not self._has_imageboot_call_near(off, imageboot_needed):
            return None

        if insn.mnemonic == "nop":
            return "patched"

        if insn.mnemonic != "cbnz":
            return None
        if len(insn.operands) < 2 or insn.operands[0].type != ARM64_OP_REG:
            return None
        if insn.operands[0].reg not in (ARM64_REG_W0, ARM64_REG_X0):
            return None

        target, _ = self._decode_branch_target(off)
        if target is None or not (err_lo <= target <= err_hi):
            return None

        return "live"

    def _is_call(self, off):
        if off < 0:
            return False
        insns = self._disas_at(off, 1)
        return bool(insns) and insns[0].mnemonic.startswith("bl")

    def _has_imageboot_call_near(self, off, imageboot_needed):
        for scan in range(off + 4, min(off + 0x18, self.size), 4):
            target = self._is_bl(scan)
            if target < 0:
                continue
            if imageboot_needed < 0 or target == imageboot_needed:
                return True
        return False

    def _func_for_rootvp_anchor(self):
        needle = b"rootvp not authenticated after mounting @%s:%d"
        str_off = self.find_string(needle)
        if str_off < 0:
            return None
        refs = self.find_string_refs(str_off, *self.kern_text)
        if not refs:
            return None
        fn = self.find_function_start(refs[0][0])
        return fn if fn >= 0 else None
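These matchers repeatedly decode raw unconditional-branch targets from 32-bit words; a minimal standalone sketch of that decode (sign-extending the 26-bit word offset; restricted to plain `B`, since `BL` also sets bit 31) under the same file-offset convention as the encoders:

```python
def decode_b_target(word, off):
    """Return the target file offset of an unconditional B at `off`, else None."""
    if (word & 0xFC000000) != 0x14000000:
        return None  # not a plain B (BL would have bit 31 set)
    imm26 = word & 0x03FFFFFF
    if imm26 & (1 << 25):
        imm26 -= 1 << 26  # sign-extend the 26-bit field
    return off + imm26 * 4
```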
@@ -1,335 +0,0 @@
"""Mixin: KernelJBPatchCredLabelMixin."""

from .kernel_jb_base import asm, _rd32


class KernelJBPatchCredLabelMixin:
    _RET_INSNS = (0xD65F0FFF, 0xD65F0BFF, 0xD65F03C0)
    _MOV_W0_0_U32 = int.from_bytes(asm("mov w0, #0"), "little")
    _MOV_W0_1_U32 = int.from_bytes(asm("mov w0, #1"), "little")
    _RELAX_CSMASK = 0xFFFFC0FF
    _RELAX_SETMASK = 0x0000000C

    def _is_cred_label_execve_candidate(self, func_off, anchor_refs):
        """Validate candidate function shape for _cred_label_update_execve."""
        func_end = self._find_func_end(func_off, 0x1000)
        if func_end - func_off < 0x200:
            return False, 0, func_end

        anchor_hits = sum(1 for r in anchor_refs if func_off <= r < func_end)
        if anchor_hits == 0:
            return False, 0, func_end

        has_arg9_load = False
        has_flags_load = False
        has_flags_store = False

        for off in range(func_off, func_end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            i = d[0]
            op = i.op_str.replace(" ", "")
            if i.mnemonic == "ldr" and op.startswith("x26,[x29"):
                has_arg9_load = True
                break

        for off in range(func_off, func_end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            i = d[0]
            op = i.op_str.replace(" ", "")
            if i.mnemonic == "ldr" and op.startswith("w") and ",[x26" in op:
                has_flags_load = True
            elif i.mnemonic == "str" and op.startswith("w") and ",[x26" in op:
                has_flags_store = True
            if has_flags_load and has_flags_store:
                break

        ok = has_arg9_load and has_flags_load and has_flags_store
        score = (
            anchor_hits * 10
            + (1 if has_arg9_load else 0)
            + (1 if has_flags_load else 0)
            + (1 if has_flags_store else 0)
        )
        return ok, score, func_end

    def _find_cred_label_execve_func(self):
        """Locate _cred_label_update_execve by AMFI kill-path string cluster."""
        anchor_strings = (
            b"AMFI: hook..execve() killing",
            b"Attempt to execute completely unsigned code",
            b"Attempt to execute a Legacy VPN Plugin",
            b"dyld signature cannot be verified",
        )

        anchor_refs = set()
        candidates = set()
        s, e = self.amfi_text

        for anchor in anchor_strings:
            str_off = self.find_string(anchor)
            if str_off < 0:
                continue
            refs = self.find_string_refs(str_off, s, e)
            if not refs:
                refs = self.find_string_refs(str_off)
            for adrp_off, _, _ in refs:
                anchor_refs.add(adrp_off)
                func_off = self.find_function_start(adrp_off)
                if func_off >= 0 and s <= func_off < e:
                    candidates.add(func_off)

        best_func = -1
        best_score = -1
        for func_off in sorted(candidates):
            ok, score, _ = self._is_cred_label_execve_candidate(func_off, anchor_refs)
            if ok and score > best_score:
                best_score = score
                best_func = func_off

        return best_func

    def _find_cred_label_return_site(self, func_off):
        """Pick a return site with full epilogue restore (SP/frame restored)."""
        func_end = self._find_func_end(func_off, 0x1000)
        fallback = -1
        for off in range(func_end - 4, func_off, -4):
            val = _rd32(self.raw, off)
            if val not in self._RET_INSNS:
                continue
            if fallback < 0:
                fallback = off

            saw_add_sp = False
            saw_ldp_fp = False
            for prev in range(max(func_off, off - 0x24), off, 4):
                d = self._disas_at(prev)
                if not d:
                    continue
                i = d[0]
                op = i.op_str.replace(" ", "")
                if i.mnemonic == "add" and op.startswith("sp,sp,#"):
                    saw_add_sp = True
                elif i.mnemonic == "ldp" and op.startswith("x29,x30,[sp"):
                    saw_ldp_fp = True

            if saw_add_sp and saw_ldp_fp:
                return off

        return fallback

    def _find_cred_label_epilogue(self, func_off):
        """Locate the canonical epilogue start (`ldp x29, x30, [sp, ...]`)."""
        func_end = self._find_func_end(func_off, 0x1000)
        for off in range(func_end - 4, func_off, -4):
            d = self._disas_at(off)
            if not d:
                continue
            i = d[0]
            op = i.op_str.replace(" ", "")
            if i.mnemonic == "ldp" and op.startswith("x29,x30,[sp"):
                return off

        return -1

    def _find_cred_label_csflags_ptr_reload(self, func_off):
        """Recover the stack-based `u_int *csflags` reload used by the function.

        We reuse the same `ldr x26, [x29, #imm]` form in the trampoline so the
        common C21-v1 cave works for both deny and success exits, even when the
        live x26 register has not been initialized on a deny-only path.
        """
        func_end = self._find_func_end(func_off, 0x1000)
        for off in range(func_off, func_end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            i = d[0]
            op = i.op_str.replace(" ", "")
            if i.mnemonic != "ldr" or not op.startswith("x26,[x29"):
                continue
            mem_op = i.op_str.split(",", 1)[1].strip()
            return off, mem_op

        return -1, None

    def _decode_b_target(self, off):
        """Return target of unconditional `b`, or -1 if instruction is not `b`."""
        insn = _rd32(self.raw, off)
        if (insn & 0x7C000000) != 0x14000000:
            return -1
        imm26 = insn & 0x03FFFFFF
        if imm26 & (1 << 25):
            imm26 -= 1 << 26
        return off + imm26 * 4

    def _find_cred_label_deny_return(self, func_off, epilogue_off):
        """Find the shared `mov w0,#1` kill-return right before the epilogue."""
        mov_w0_1 = self._MOV_W0_1_U32
        scan_start = max(func_off, epilogue_off - 0x40)
        for off in range(epilogue_off - 4, scan_start - 4, -4):
            if _rd32(self.raw, off) == mov_w0_1 and off + 4 == epilogue_off:
                return off

        return -1

    def _find_cred_label_success_exits(self, func_off, epilogue_off):
        """Find late success edges that already decided to return 0.

        On the current vphone600 AMFI body these are the final `b epilogue`
        instructions in the success tail, reached after the original
        `tst/orr/str` cleanup has already run.
        """
        exits = []
        func_end = self._find_func_end(func_off, 0x1000)
        for off in range(func_off, func_end, 4):
            target = self._decode_b_target(off)
            if target != epilogue_off:
                continue
            saw_mov_w0_0 = False
            for prev in range(max(func_off, off - 0x10), off, 4):
                if _rd32(self.raw, prev) == self._MOV_W0_0_U32:
                    saw_mov_w0_0 = True
                    break
            if saw_mov_w0_0:
                exits.append(off)

        return tuple(exits)

    def patch_cred_label_update_execve(self):
        """C21-v3: split late exits and add minimal helper bits on success.

        This version keeps the boot-safe late-exit structure from v2, but adds
        a small success-only extension inspired by the older upstream shellcode:

        - keep `_cred_label_update_execve`'s body intact;
        - redirect the shared deny return into a tiny deny cave that only
          forces `w0 = 0` and returns through the original epilogue;
        - redirect the late success exits into a success cave;
        - reload `u_int *csflags` from the stack only on the success cave;
        - clear only `CS_HARD|CS_KILL|CS_CHECK_EXPIRATION|CS_RESTRICT|
          CS_ENFORCEMENT|CS_REQUIRE_LV` on the success cave;
        - then OR only `CS_GET_TASK_ALLOW|CS_INSTALLER` (`0xC`) on the
          success cave;
        - return through the original epilogue in both cases.

        This preserves AMFI's exec-time analytics / entitlement handling and
        avoids the boot-unsafe entry-time early return used by older variants.
        """
        self._log("\n[JB] _cred_label_update_execve: C21-v3 split exits + helper bits")

        func_off = -1

        # Try symbol first, but still validate shape.
        for sym, off in self.symbols.items():
            if "cred_label_update_execve" in sym and "hook" not in sym:
                ok, _, _ = self._is_cred_label_execve_candidate(off, set([off]))
                if ok:
                    func_off = off
                    break

        if func_off < 0:
            func_off = self._find_cred_label_execve_func()

        if func_off < 0:
            self._log(" [-] function not found, skipping shellcode patch")
            return False

        epilogue_off = self._find_cred_label_epilogue(func_off)
        if epilogue_off < 0:
            self._log(" [-] epilogue not found")
            return False

        deny_off = self._find_cred_label_deny_return(func_off, epilogue_off)
        if deny_off < 0:
            self._log(" [-] shared deny return not found")
            return False

        deny_already_allowed = _rd32(self.data, deny_off) == self._MOV_W0_0_U32
        if deny_already_allowed:
            self._log(
                f" [=] shared deny return at 0x{deny_off:X} already forces allow; "
                "skipping deny trampoline hook"
            )

        success_exits = self._find_cred_label_success_exits(func_off, epilogue_off)
        if not success_exits:
            self._log(" [-] success exits not found")
            return False

        _, csflags_mem_op = self._find_cred_label_csflags_ptr_reload(func_off)
        if not csflags_mem_op:
            self._log(" [-] csflags stack reload not found")
            return False

        deny_cave = -1
        if not deny_already_allowed:
            deny_cave = self._find_code_cave(8)
            if deny_cave < 0:
                self._log(" [-] no code cave found for C21-v3 deny trampoline")
                return False

        success_cave = self._find_code_cave(32)
        if success_cave < 0 or success_cave == deny_cave:
            self._log(" [-] no code cave found for C21-v3 success trampoline")
            return False

        deny_branch_back = b""
        if not deny_already_allowed:
            deny_branch_back = self._encode_b(deny_cave + 4, epilogue_off)
            if not deny_branch_back:
                self._log(
                    " [-] branch from deny trampoline back to epilogue is out of range"
                )
                return False

        success_branch_back = self._encode_b(success_cave + 28, epilogue_off)
        if not success_branch_back:
            self._log(
                " [-] branch from success trampoline back to epilogue is out of range"
            )
            return False

        deny_shellcode = (
            asm("mov w0, #0") + deny_branch_back if not deny_already_allowed else b""
        )
        success_shellcode = (
            asm(f"ldr x26, {csflags_mem_op}")
            + asm("cbz x26, #0x10")
            + asm("ldr w8, [x26]")
            + asm(f"and w8, w8, #{self._RELAX_CSMASK:#x}")
            + asm(f"orr w8, w8, #{self._RELAX_SETMASK:#x}")
            + asm("str w8, [x26]")
            + asm("mov w0, #0")
            + success_branch_back
        )

        for index in range(0, len(deny_shellcode), 4):
            self.emit(
                deny_cave + index,
                deny_shellcode[index : index + 4],
                f"deny_trampoline+{index} [_cred_label_update_execve C21-v3]",
            )

        for index in range(0, len(success_shellcode), 4):
            self.emit(
                success_cave + index,
                success_shellcode[index : index + 4],
                f"success_trampoline+{index} [_cred_label_update_execve C21-v3]",
            )

        if not deny_already_allowed:
            deny_branch_to_cave = self._encode_b(deny_off, deny_cave)
            if not deny_branch_to_cave:
                self._log(
                    f" [-] branch from 0x{deny_off:X} to deny trampoline is out of range"
                )
                return False
            self.emit(
                deny_off,
                deny_branch_to_cave,
                f"b deny cave [_cred_label_update_execve C21-v3 exit @ 0x{deny_off:X}]",
            )

        for off in success_exits:
            branch_to_cave = self._encode_b(off, success_cave)
            if not branch_to_cave:
                self._log(
                    f" [-] branch from 0x{off:X} to success trampoline is out of range"
                )
                return False
            self.emit(
                off,
                branch_to_cave,
                f"b success cave [_cred_label_update_execve C21-v3 exit @ 0x{off:X}]",
            )

        return True
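The C21-v3 trampolines above rely on `_find_code_cave` (defined in the base class) to find scratch space in executable segments. Its run-length scan can be sketched over a plain buffer; this is an illustrative standalone version, not the patcher's method — the "cave word" set (zeros, erased `0xFFFFFFFF` padding, and the `0xD4200000` pad word) is taken from the original's comparison:

```python
import struct

# Padding words the original patcher treats as free space.
CAVE_WORDS = {0x00000000, 0xFFFFFFFF, 0xD4200000}


def find_code_cave(buf, size, align=4):
    """Return the offset of the first run of cave words >= the aligned size,
    or -1. Mirrors the run-start/run-length bookkeeping of _find_code_cave."""
    needed = (size + align - 1) // align * align  # round size up to align
    run_start, run_len = -1, 0
    for off in range(0, len(buf) - 3, 4):
        (val,) = struct.unpack_from("<I", buf, off)
        if val in CAVE_WORDS:
            if run_start < 0:
                run_start, run_len = off, 4
            else:
                run_len += 4
            if run_len >= needed:
                return run_start
        else:
            run_start, run_len = -1, 0
    return -1
```

Because the real method reads from the mutable `self.data` rather than `self.raw`, words already claimed by an earlier trampoline no longer look like cave words, so two allocations never overlap.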
@@ -1,106 +0,0 @@
"""Mixin: KernelJBPatchDounmountMixin."""

from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_REG

from .kernel_jb_base import NOP


class KernelJBPatchDounmountMixin:
    def patch_dounmount(self):
        """Match the known-good upstream cleanup call in dounmount.

        Anchor class: string anchor. Recover the dounmount body through the
        stable panic string `dounmount:` and patch the unique near-tail 4-arg
        zeroed cleanup call used by `/Users/qaq/Desktop/patch_fw.py`:

            mov x0, xMountLike
            mov w1, #0
            mov w2, #0
            mov w3, #0
            bl target
            mov x0, xMountLike
            bl target2
            cbz x19, ...

        This intentionally rejects the later `mov w1,#0x10 ; mov x2,#0 ; bl`
        site because that drifted away from upstream and represents a different
        call signature/control-flow path.
        """
        self._log("\n[JB] _dounmount: upstream cleanup-call NOP")

        foff = self._find_func_by_string(b"dounmount:", self.kern_text)
        if foff < 0:
            self._log(" [-] 'dounmount:' anchor not found")
            return False

        func_end = self._find_func_end(foff, 0x4000)
        patch_off = self._find_upstream_cleanup_call(foff, func_end)
        if patch_off is None:
            self._log(" [-] upstream dounmount cleanup call not found")
            return False

        self.emit(patch_off, NOP, "NOP [_dounmount upstream cleanup call]")
        return True

    def _find_upstream_cleanup_call(self, start, end):
        hits = []
        for off in range(start, end - 0x1C, 4):
            d = self._disas_at(off, 8)
            if len(d) < 8:
                continue
            i0, i1, i2, i3, i4, i5, i6, i7 = d
            if i0.mnemonic != "mov" or i1.mnemonic != "mov" or i2.mnemonic != "mov" or i3.mnemonic != "mov":
                continue
            if i4.mnemonic != "bl" or i5.mnemonic != "mov" or i6.mnemonic != "bl":
                continue
            if i7.mnemonic != "cbz":
                continue

            src_reg = self._mov_reg_reg(i0, "x0")
            if src_reg is None:
                continue
            if not self._mov_imm_zero(i1, "w1"):
                continue
            if not self._mov_imm_zero(i2, "w2"):
                continue
            if not self._mov_imm_zero(i3, "w3"):
                continue
            if not self._mov_reg_reg(i5, "x0", src_reg):
                continue
            if not self._cbz_uses_xreg(i7):
                continue
            hits.append(i4.address)

        if len(hits) == 1:
            return hits[0]
        return None

    def _mov_reg_reg(self, insn, dst_name, src_name=None):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return None
        dst, src = insn.operands
        if dst.type != ARM64_OP_REG or src.type != ARM64_OP_REG:
            return None
        if insn.reg_name(dst.reg) != dst_name:
            return None
        src_reg = insn.reg_name(src.reg)
        if src_name is not None and src_reg != src_name:
            return None
        return src_reg

    def _mov_imm_zero(self, insn, dst_name):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return False
        dst, src = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg) == dst_name
            and src.type == ARM64_OP_IMM
            and src.imm == 0
        )

    def _cbz_uses_xreg(self, insn):
        if len(insn.operands) != 2:
            return False
        reg_op, imm_op = insn.operands
        return reg_op.type == ARM64_OP_REG and imm_op.type == ARM64_OP_IMM and insn.reg_name(reg_op.reg).startswith("x")
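The `_find_upstream_cleanup_call` scan above is an instruction-shape search: slide an eight-instruction window over the function and accept only a unique match. A simplified, Capstone-free sketch of the same idea, matching on mnemonics alone (the real scan additionally validates operands and registers):

```python
def find_unique_shape(insns, shape=("mov", "mov", "mov", "mov", "bl", "mov", "bl", "cbz")):
    """insns: list of (offset, mnemonic) pairs for consecutive instructions.

    Slide a len(shape) window over the stream and collect the offset of
    the first BL (window index 4) for each exact mnemonic match.
    Return the single hit, or None when absent or ambiguous."""
    hits = [
        insns[i + 4][0]
        for i in range(len(insns) - len(shape) + 1)
        if tuple(m for _, m in insns[i:i + len(shape)]) == shape
    ]
    return hits[0] if len(hits) == 1 else None
```

Requiring exactly one hit is what makes the patch fail safe: an ambiguous shape on a new kernel build aborts instead of patching the wrong site.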
@@ -1,255 +0,0 @@
"""Mixin: KernelJBPatchHookCredLabelMixin."""

import struct

from .kernel_asm import asm, _PACIBSP_U32, _asm_u32
from .kernel_jb_base import _rd32, _rd64


class KernelJBPatchHookCredLabelMixin:
    _HOOK_CRED_LABEL_INDEX = 18
    _C23_CAVE_WORDS = 46
    _VFS_CONTEXT_CURRENT_SHAPE = (
        _PACIBSP_U32,
        _asm_u32("stp x29, x30, [sp, #-0x10]!"),
        _asm_u32("mov x29, sp"),
        _asm_u32("mrs x0, tpidr_el1"),
        _asm_u32("ldr x1, [x0, #0x3e0]"),
    )

    def _find_vnode_getattr_via_string(self):
        """Resolve vnode_getattr from a nearby BL around its log string."""
        str_off = self.find_string(b"vnode_getattr")
        if str_off < 0:
            return -1

        refs = self.find_string_refs(str_off)
        if not refs:
            return -1

        start = str_off
        for _ in range(6):
            refs = self.find_string_refs(start)
            if refs:
                ref_off = refs[0][0]
                for scan_off in range(ref_off - 4, ref_off - 80, -4):
                    if scan_off < 0:
                        break
                    insn = _rd32(self.raw, scan_off)
                    if (insn >> 26) != 0x25:
                        continue
                    imm26 = insn & 0x3FFFFFF
                    if imm26 & (1 << 25):
                        imm26 -= 1 << 26
                    target = scan_off + imm26 * 4
                    if any(s <= target < e for s, e in self.code_ranges):
                        self._log(
                            f" [+] vnode_getattr at 0x{target:X} "
                            f"(via BL at 0x{scan_off:X}, near string ref 0x{ref_off:X})"
                        )
                        return target
            next_off = self.find_string(b"vnode_getattr", start + 1)
            if next_off < 0:
                break
            start = next_off

        return -1

    def _find_vfs_context_current_via_shape(self):
        """Locate the concrete vfs_context_current body by its unique prologue."""
        key = ("c23_vfs_context_current", self.kern_text)
        cached = self._jb_scan_cache.get(key)
        if cached is not None:
            return cached

        ks, ke = self.kern_text
        hits = []
        pat = self._VFS_CONTEXT_CURRENT_SHAPE
        for off in range(ks, ke - len(pat) * 4, 4):
            if all(_rd32(self.raw, off + i * 4) == pat[i] for i in range(len(pat))):
                hits.append(off)

        result = hits[0] if len(hits) == 1 else -1
        if result >= 0:
            self._log(f" [+] vfs_context_current body at 0x{result:X} (shape match)")
        else:
            self._log(f" [-] vfs_context_current shape scan ambiguous ({len(hits)} hits)")
        self._jb_scan_cache[key] = result
        return result

    def _find_hook_cred_label_update_execve_wrapper(self):
        """Resolve the faithful upstream C23 target: sandbox ops[18] wrapper."""
        ops_table = self._find_sandbox_ops_table_via_conf()
        if ops_table is None:
            self._log(" [-] sandbox ops table not found")
            return None

        entry_off = ops_table + self._HOOK_CRED_LABEL_INDEX * 8
        if entry_off + 8 > self.size:
            self._log(" [-] hook ops entry outside file")
            return None

        entry_raw = _rd64(self.raw, entry_off)
        if entry_raw == 0:
            self._log(" [-] hook ops entry is null")
            return None
        if (entry_raw & (1 << 63)) == 0:
            self._log(
                f" [-] hook ops entry is not auth-rebase encoded: 0x{entry_raw:016X}"
            )
            return None

        wrapper_off = self._decode_chained_ptr(entry_raw)
        if wrapper_off < 0 or not any(s <= wrapper_off < e for s, e in self.code_ranges):
            self._log(f" [-] decoded wrapper target invalid: 0x{wrapper_off:X}")
            return None

        self._log(
            f" [+] hook cred-label wrapper ops[{self._HOOK_CRED_LABEL_INDEX}] "
            f"entry 0x{entry_off:X} -> 0x{wrapper_off:X}"
        )
        return ops_table, entry_off, entry_raw, wrapper_off

    def _encode_auth_rebase_like(self, orig_val, target_off):
        """Retarget an auth-rebase chained pointer while preserving PAC metadata."""
        if (orig_val & (1 << 63)) == 0:
            return None
        return struct.pack("<Q", (orig_val & ~0xFFFFFFFF) | (target_off & 0xFFFFFFFF))

    def _build_upstream_c23_cave(
        self,
        cave_off,
        vfs_context_current_off,
        vnode_getattr_off,
        wrapper_off,
    ):
        code = []
        code.append(asm("nop"))
        code.append(asm("cbz x3, #0xa8"))
        code.append(asm("sub sp, sp, #0x400"))
        code.append(asm("stp x29, x30, [sp]"))
        code.append(asm("stp x0, x1, [sp, #0x10]"))
        code.append(asm("stp x2, x3, [sp, #0x20]"))
        code.append(asm("stp x4, x5, [sp, #0x30]"))
        code.append(asm("stp x6, x7, [sp, #0x40]"))
        code.append(asm("nop"))

        bl_vfs_off = cave_off + len(code) * 4
        bl_vfs = self._encode_bl(bl_vfs_off, vfs_context_current_off)
        if not bl_vfs:
            return None
        code.append(bl_vfs)

        code.append(asm("mov x2, x0"))
        code.append(asm("ldr x0, [sp, #0x28]"))
        code.append(asm("add x1, sp, #0x80"))
        code.append(asm("mov w8, #0x380"))
        code.append(asm("stp xzr, x8, [x1]"))
        code.append(asm("stp xzr, xzr, [x1, #0x10]"))
        code.append(asm("nop"))

        bl_getattr_off = cave_off + len(code) * 4
        bl_getattr = self._encode_bl(bl_getattr_off, vnode_getattr_off)
        if not bl_getattr:
            return None
        code.append(bl_getattr)

        code.append(asm("cbnz x0, #0x4c"))
        code.append(asm("mov w2, #0"))
        code.append(asm("ldr w8, [sp, #0xcc]"))
        code.append(asm("tbz w8, #0xb, #0x14"))
        code.append(asm("ldr w8, [sp, #0xc4]"))
        code.append(asm("ldr x0, [sp, #0x18]"))
        code.append(asm("str w8, [x0, #0x18]"))
        code.append(asm("mov w2, #1"))
        code.append(asm("ldr w8, [sp, #0xcc]"))
        code.append(asm("tbz w8, #0xa, #0x14"))
        code.append(asm("mov w2, #1"))
        code.append(asm("ldr w8, [sp, #0xc8]"))
        code.append(asm("ldr x0, [sp, #0x18]"))
        code.append(asm("str w8, [x0, #0x28]"))
        code.append(asm("cbz w2, #0x14"))
        code.append(asm("ldr x0, [sp, #0x20]"))
        code.append(asm("ldr w8, [x0, #0x454]"))
        code.append(asm("orr w8, w8, #0x100"))
        code.append(asm("str w8, [x0, #0x454]"))
        code.append(asm("ldp x0, x1, [sp, #0x10]"))
        code.append(asm("ldp x2, x3, [sp, #0x20]"))
        code.append(asm("ldp x4, x5, [sp, #0x30]"))
        code.append(asm("ldp x6, x7, [sp, #0x40]"))
        code.append(asm("ldp x29, x30, [sp]"))
        code.append(asm("add sp, sp, #0x400"))
        code.append(asm("nop"))

        branch_back_off = cave_off + len(code) * 4
        branch_back = self._encode_b(branch_back_off, wrapper_off)
        if not branch_back:
            return None
        code.append(branch_back)
        code.append(asm("nop"))

        if len(code) != self._C23_CAVE_WORDS:
            raise RuntimeError(
                f"C23 cave length drifted: {len(code)} insns, expected {self._C23_CAVE_WORDS}"
            )
        return b"".join(code)

    def patch_hook_cred_label_update_execve(self):
        """Faithful upstream C23: wrapper trampoline + setugid credential fixup.

        Historical upstream behavior does not short-circuit the sandbox execve
        update hook. It redirects `mac_policy_ops[18]` to a code cave that:
        - fetches vnode owner/mode via vnode_getattr(vp, vap, vfs_context_current()),
        - copies VSUID/VSGID owner values into the pending new credential,
        - sets P_SUGID when either credential field changes,
        - then branches back to the original sandbox wrapper.
        """
        self._log("\n[JB] _hook_cred_label_update_execve: faithful upstream C23")

        wrapper_info = self._find_hook_cred_label_update_execve_wrapper()
        if wrapper_info is None:
            return False
        _, entry_off, entry_raw, wrapper_off = wrapper_info

        vfs_context_current_off = self._find_vfs_context_current_via_shape()
        if vfs_context_current_off < 0:
            self._log(" [-] vfs_context_current not resolved")
            return False

        vnode_getattr_off = self._find_vnode_getattr_via_string()
        if vnode_getattr_off < 0:
            self._log(" [-] vnode_getattr not resolved")
            return False

        cave_size = self._C23_CAVE_WORDS * 4
        cave_off = self._find_code_cave(cave_size)
        if cave_off < 0:
            self._log(" [-] no executable code cave found for faithful C23")
            return False

        cave_bytes = self._build_upstream_c23_cave(
            cave_off,
            vfs_context_current_off,
            vnode_getattr_off,
            wrapper_off,
        )
        if cave_bytes is None:
            self._log(" [-] failed to encode faithful C23 branch/call relocations")
            return False

        new_entry = self._encode_auth_rebase_like(entry_raw, cave_off)
        if new_entry is None:
            self._log(" [-] failed to encode hook ops entry retarget")
            return False

        self.emit(
            entry_off,
            new_entry,
            "retarget ops[18] to faithful C23 cave [_hook_cred_label_update_execve]",
        )
        self.emit(
            cave_off,
            cave_bytes,
            "faithful upstream C23 cave (vnode getattr -> uid/gid/P_SUGID fixup -> wrapper)",
        )
        return True
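`_find_vnode_getattr_via_string` above decodes raw `BL` words by hand: opcode check `(insn >> 26) == 0x25`, then a sign-extended 26-bit word offset. That decoder can be sketched standalone (an illustrative helper, not the project's own API):

```python
def decode_bl_target(insn, insn_off):
    """If the 32-bit word `insn` at file offset `insn_off` is an ARM64 BL,
    return the branch-target offset; otherwise return None.

    BL has opcode 0b100101 (0x25) in bits 31..26 and a signed 26-bit
    word displacement in bits 25..0."""
    if (insn >> 26) != 0x25:
        return None
    imm26 = insn & 0x3FFFFFF
    if imm26 & (1 << 25):      # sign-extend the 26-bit field
        imm26 -= 1 << 26
    return insn_off + imm26 * 4
```

This is the inverse of the branch encoder used elsewhere in the patcher: scanning backward from a string xref, any word that decodes into a known code range is taken as the candidate callee.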
@@ -1,106 +0,0 @@
"""Mixin: KernelJBPatchIoucmacfMixin."""


class KernelJBPatchIoucmacfMixin:
    def patch_iouc_failed_macf(self):
        """Bypass the narrow IOUC MACF deny branch after mac_iokit_check_open.

        Upstream-equivalent design goal:
        - keep the large IOUserClient open/setup path intact
        - keep entitlement/default-locking/sandbox-resolver flow intact
        - only force the post-MACF gate onto the allow path

        Local validated shape in `sub_FFFFFE000825B0C0`:
        - `BL <macf_aggregator>`
        - `CBZ W0, <allow>`
        - later `ADRL X0, "IOUC %s failed MACF in process %s\n"`

        Patch action:
        - replace that `CBZ W0, <allow>` with unconditional `B <allow>`
        """
        self._log("\n[JB] IOUC MACF gate: branch-level deny bypass")

        fail_macf_str = self.find_string(b"IOUC %s failed MACF in process %s")
        if fail_macf_str < 0:
            self._log(" [-] IOUC failed-MACF format string not found")
            return False

        refs = self.find_string_refs(fail_macf_str, *self.kern_text)
        if not refs:
            self._log(" [-] no xrefs for IOUC failed-MACF format string")
            return False

        def _has_macf_aggregator_shape(callee_off):
            callee_end = self._find_func_end(callee_off, 0x400)
            saw_slot_load = False
            saw_indirect_call = False
            for off in range(callee_off, callee_end, 4):
                d = self._disas_at(off)
                if not d:
                    continue
                ins = d[0]
                op = ins.op_str.replace(" ", "").lower()
                if ins.mnemonic == "ldr" and ",#0x9e8]" in op and op.startswith("x10,[x10"):
                    saw_slot_load = True
                if ins.mnemonic in ("blraa", "blrab", "blr") and op.startswith("x10"):
                    saw_indirect_call = True
                if saw_slot_load and saw_indirect_call:
                    return True
            return False

        for adrp_off, _, _ in refs:
            func_start = self.find_function_start(adrp_off)
            if func_start < 0:
                continue
            func_end = self._find_func_end(func_start, 0x2000)

            for off in range(max(func_start, adrp_off - 0x120), min(func_end, adrp_off + 4), 4):
                d0 = self._disas_at(off)
                d1 = self._disas_at(off + 4)
                if not d0 or not d1:
                    continue
                i0 = d0[0]
                i1 = d1[0]
                if i0.mnemonic != "bl" or i1.mnemonic != "cbz":
                    continue
                if not i1.op_str.replace(" ", "").startswith("w0,"):
                    continue

                bl_target = self._is_bl(off)
                if bl_target < 0 or not _has_macf_aggregator_shape(bl_target):
                    continue

                if len(i1.operands) < 2:
                    continue
                allow_target = getattr(i1.operands[-1], 'imm', -1)
                if not (off < allow_target < func_end):
                    continue

                fail_log_adrp = None
                for probe in range(off + 8, min(func_end, off + 0x80), 4):
                    d = self._disas_at(probe)
                    if not d:
                        continue
                    ins = d[0]
                    if ins.mnemonic == "adrp" and probe == adrp_off:
                        fail_log_adrp = probe
                        break
                if fail_log_adrp is None:
                    continue

                patch_bytes = self._encode_b(off + 4, allow_target)
                if not patch_bytes:
                    continue

                self._log(
                    f" [+] IOUC MACF gate fn=0x{func_start:X}, bl=0x{off:X}, cbz=0x{off + 4:X}, allow=0x{allow_target:X}"
                )
                self.emit(
                    off + 4,
                    patch_bytes,
                    f"b #0x{allow_target - (off + 4):X} [IOUC MACF deny → allow]",
                )
                return True

        self._log(" [-] narrow IOUC MACF deny branch not found")
        return False
@@ -1,294 +0,0 @@
"""Mixin: KernelJBPatchKcall10Mixin."""

from .kernel_jb_base import _rd64, struct
from .kernel import asm
from .kernel_asm import _PACIBSP_U32, _RETAB_U32


# Max sysent entries in XNU (dispatch clamps at 0x22E = 558).
_SYSENT_MAX_ENTRIES = 558
# Each sysent entry is 24 bytes.
_SYSENT_ENTRY_SIZE = 24
# PAC discriminator used by the syscall dispatch (MOV X17, #0xBCAD; BLRAA X8, X17).
_SYSENT_PAC_DIVERSITY = 0xBCAD

# Rebuilt PCC 26.1 semantics:
#   uap[0] = target function pointer
#   uap[1] = arg0
#   ...
#   uap[7] = arg6
# Return path:
#   store X0 as 64-bit into retval, expose through sy_return_type=UINT64
_KCALL10_NARG = 8
_KCALL10_ARG_BYTES_32 = _KCALL10_NARG * 4
_KCALL10_RETURN_TYPE = 7
_KCALL10_EINVAL = 22


class KernelJBPatchKcall10Mixin:
    def _find_sysent_table(self, nosys_off):
        """Find the real sysent table base.

        Strategy:
        1. Find any DATA entry whose decoded pointer == _nosys.
        2. Scan backward in 24-byte steps to find the true table start
           (entry 0 is the indirect syscall handler, NOT _nosys).
        3. Validate each backward entry: sy_call decodes to a code range
           AND the metadata fields (narg, arg_bytes) look reasonable.

        Previous bug: the old code took the first _nosys match as entry 0,
        but _nosys first appears at entry ~428 (varies by XNU build).
        """
        nosys_entry = -1
        seg_start = -1
        for seg_name, _, fileoff, filesize, _ in self.all_segments:
            if "DATA" not in seg_name:
                continue
            for off in range(fileoff, fileoff + filesize - _SYSENT_ENTRY_SIZE, 8):
                val = _rd64(self.raw, off)
                decoded = self._decode_chained_ptr(val)
                if decoded == nosys_off:
                    val2 = _rd64(self.raw, off + _SYSENT_ENTRY_SIZE)
                    decoded2 = self._decode_chained_ptr(val2)
                    if decoded2 > 0 and any(
                        s <= decoded2 < e for s, e in self.code_ranges
                    ):
                        nosys_entry = off
                        seg_start = fileoff
                        break
            if nosys_entry >= 0:
                break

        if nosys_entry < 0:
            return -1

        self._log(
            f" [*] _nosys entry found at foff 0x{nosys_entry:X}, "
            f"scanning backward for table start"
        )

        base = nosys_entry
        entries_back = 0
        while base - _SYSENT_ENTRY_SIZE >= seg_start:
            if entries_back >= _SYSENT_MAX_ENTRIES:
                break
            prev = base - _SYSENT_ENTRY_SIZE
            val = _rd64(self.raw, prev)
            decoded = self._decode_chained_ptr(val)
            if decoded <= 0 or not any(s <= decoded < e for s, e in self.code_ranges):
                break
            narg = struct.unpack_from("<H", self.raw, prev + 20)[0]
            arg_bytes = struct.unpack_from("<H", self.raw, prev + 22)[0]
            if narg > 12 or arg_bytes > 96:
                break
            base = prev
            entries_back += 1

        self._log(
            f" [+] sysent table base at foff 0x{base:X} "
            f"({entries_back} entries before first _nosys)"
        )
        return base

    def _encode_chained_auth_ptr(self, target_foff, next_val, diversity=0, key=0, addr_div=0):
        """Encode an arm64e kernel cache auth rebase chained fixup pointer."""
        val = (
            (target_foff & 0x3FFFFFFF)
            | ((diversity & 0xFFFF) << 32)
            | ((addr_div & 1) << 48)
            | ((key & 3) << 49)
            | ((next_val & 0xFFF) << 51)
            | (1 << 63)
        )
        return struct.pack("<Q", val)

    def _extract_chain_next(self, raw_val):
        return (raw_val >> 51) & 0xFFF

    def _extract_chain_diversity(self, raw_val):
        return (raw_val >> 32) & 0xFFFF

    def _extract_chain_addr_div(self, raw_val):
        return (raw_val >> 48) & 1

    def _extract_chain_key(self, raw_val):
        return (raw_val >> 49) & 3

    def _find_munge32_for_narg(self, sysent_off, narg, arg_bytes):
        """Find a reusable 32-bit munger entry with matching metadata.

        Returns `(target_foff, exemplar_entry, match_count)` or `(-1, -1, 0)`.
        Requires a unique decoded helper target across all matching sysent rows.
        """
        candidates = {}
        for idx in range(_SYSENT_MAX_ENTRIES):
            entry = sysent_off + idx * _SYSENT_ENTRY_SIZE
            cur_narg = struct.unpack_from("<H", self.raw, entry + 20)[0]
            cur_arg_bytes = struct.unpack_from("<H", self.raw, entry + 22)[0]
            if cur_narg != narg or cur_arg_bytes != arg_bytes:
                continue
            raw_munge = _rd64(self.raw, entry + 8)
            target = self._decode_chained_ptr(raw_munge)
            if target <= 0:
                continue
            bucket = candidates.setdefault(target, [])
            bucket.append(entry)

        if not candidates:
            return -1, -1, 0
        if len(candidates) != 1:
            self._log(
                " [-] multiple distinct 8-arg munge32 helpers found: "
                + ", ".join(f"0x{target:X}" for target in sorted(candidates))
            )
            return -1, -1, 0

        target, entries = next(iter(candidates.items()))
        return target, entries[0], len(entries)

    def _build_kcall10_cave(self):
        """Build an ABI-correct kcall cave.

        Contract:
            x0 = proc*
            x1 = &uthread->uu_arg[0]
            x2 = &uthread->uu_rval[0]

        uap layout (8 qwords):
            [0] target function pointer
            [1] arg0
            [2] arg1
            [3] arg2
            [4] arg3
            [5] arg4
            [6] arg5
            [7] arg6

        Behavior:
        - validates uap / retval / target are non-null
        - invokes target(arg0..arg6, x7=0)
        - stores 64-bit X0 into retval for `_SYSCALL_RET_UINT64_T`
        - returns 0 on success or EINVAL on malformed request
        """
        code = []
        code.append(struct.pack("<I", _PACIBSP_U32))
        code.append(asm("sub sp, sp, #0x30"))
        code.append(asm("stp x21, x22, [sp]"))
        code.append(asm("stp x19, x20, [sp, #0x10]"))
        code.append(asm("stp x29, x30, [sp, #0x20]"))
        code.append(asm("add x29, sp, #0x20"))
        code.append(asm(f"mov w19, #{_KCALL10_EINVAL}"))
        code.append(asm("mov x20, x1"))
        code.append(asm("mov x21, x2"))
        code.append(asm("cbz x20, #0x30"))
        code.append(asm("cbz x21, #0x2c"))
        code.append(asm("ldr x16, [x20]"))
        code.append(asm("cbz x16, #0x24"))
        code.append(asm("ldp x0, x1, [x20, #0x8]"))
        code.append(asm("ldp x2, x3, [x20, #0x18]"))
        code.append(asm("ldp x4, x5, [x20, #0x28]"))
        code.append(asm("ldr x6, [x20, #0x38]"))
        code.append(asm("mov x7, xzr"))
        code.append(asm("blr x16"))
        code.append(asm("str x0, [x21]"))
        code.append(asm("mov w19, #0"))
        code.append(asm("mov w0, w19"))
        code.append(asm("ldp x21, x22, [sp]"))
        code.append(asm("ldp x19, x20, [sp, #0x10]"))
        code.append(asm("ldp x29, x30, [sp, #0x20]"))
        code.append(asm("add sp, sp, #0x30"))
        code.append(struct.pack("<I", _RETAB_U32))
        return b"".join(code)

    def patch_kcall10(self):
        """Rebuilt ABI-correct kcall patch for syscall 439.

        The historical `kcall10` idea cannot be implemented as a literal
        10-argument Unix syscall on arm64 XNU. The rebuilt variant instead
        repoints `SYS_kas_info` to a cave that consumes the real syscall ABI:

            uap[0]    = target
            uap[1..7] = arg0..arg6

        It returns the 64-bit X0 result via `retval` and
        `_SYSCALL_RET_UINT64_T`.
        """
        self._log("\n[JB] kcall10: ABI-correct sysent[439] cave")

        nosys_off = self._resolve_symbol("_nosys")
        if nosys_off < 0:
            nosys_off = self._find_nosys()
        if nosys_off < 0:
            self._log(" [-] _nosys not found")
            return False

        sysent_off = self._find_sysent_table(nosys_off)
        if sysent_off < 0:
            self._log(" [-] sysent table not found")
            return False

        entry_439 = sysent_off + 439 * _SYSENT_ENTRY_SIZE

        munger_target, exemplar_entry, match_count = self._find_munge32_for_narg(
            sysent_off, _KCALL10_NARG, _KCALL10_ARG_BYTES_32
        )
        if munger_target < 0:
            self._log(" [-] no unique reusable 8-arg munge32 helper found")
            return False

        cave_bytes = self._build_kcall10_cave()
        cave_off = self._find_code_cave(len(cave_bytes))
        if cave_off < 0:
            self._log(" [-] no executable code cave found for kcall10")
            return False

        old_sy_call_raw = _rd64(self.raw, entry_439)
        call_next = self._extract_chain_next(old_sy_call_raw)

        old_munge_raw = _rd64(self.raw, entry_439 + 8)
        munge_next = self._extract_chain_next(old_munge_raw)
        munge_div = self._extract_chain_diversity(old_munge_raw)
        munge_addr_div = self._extract_chain_addr_div(old_munge_raw)
        munge_key = self._extract_chain_key(old_munge_raw)

        self._log(f" [+] sysent table at file offset 0x{sysent_off:X}")
        self._log(f" [+] sysent[439] entry at 0x{entry_439:X}")
        self._log(
            f" [+] reusing unique 8-arg munge32 target 0x{munger_target:X} "
            f"from exemplar entry 0x{exemplar_entry:X} ({match_count} matching sysent rows)"
        )
        self._log(f" [+] cave at 0x{cave_off:X} ({len(cave_bytes):#x} bytes)")

        self.emit(
            cave_off,
            cave_bytes,
            "kcall10 ABI-correct cave (target + 7 args -> uint64 x0)",
        )
        self.emit(
            entry_439,
            self._encode_chained_auth_ptr(
                cave_off,
                next_val=call_next,
                diversity=_SYSENT_PAC_DIVERSITY,
                key=0,
                addr_div=0,
            ),
            f"sysent[439].sy_call = cave 0x{cave_off:X} (auth rebase, div=0xBCAD, next={call_next}) [kcall10]",
        )
        self.emit(
            entry_439 + 8,
            self._encode_chained_auth_ptr(
                munger_target,
                next_val=munge_next,
                diversity=munge_div,
                key=munge_key,
                addr_div=munge_addr_div,
            ),
            f"sysent[439].sy_arg_munge32 = 8-arg helper 0x{munger_target:X} [kcall10]",
        )
        self.emit(
            entry_439 + 16,
            struct.pack("<IHH", _KCALL10_RETURN_TYPE, _KCALL10_NARG, _KCALL10_ARG_BYTES_32),
            "sysent[439].sy_return_type=7,sy_narg=8,sy_arg_bytes=0x20 [kcall10]",
        )
        return True
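The bit layout shared by `_encode_chained_auth_ptr` and the `_extract_chain_*` helpers (a 30-bit target, 16-bit diversity, addr-div flag, 2-bit key, 12-bit next offset, and the auth marker in bit 63, as this patcher assumes for arm64e kernel-cache auth-rebase entries) can be checked as a pure round trip:

```python
def encode_auth_ptr(target, diversity=0, key=0, addr_div=0, next_val=0):
    """Pack an auth-rebase chained fixup value (field layout as assumed
    by the kcall10 patcher, not a general dyld fixup implementation)."""
    return (
        (target & 0x3FFFFFFF)          # bits 0..29: rebase target
        | ((diversity & 0xFFFF) << 32) # bits 32..47: PAC diversity
        | ((addr_div & 1) << 48)       # bit 48: address diversity flag
        | ((key & 3) << 49)            # bits 49..50: PAC key (IA/IB/DA/DB)
        | ((next_val & 0xFFF) << 51)   # bits 51..62: next-in-chain delta
        | (1 << 63)                    # bit 63: auth encoding marker
    )

def extract_fields(val):
    """Inverse of encode_auth_ptr for the non-marker fields."""
    return {
        "target": val & 0x3FFFFFFF,
        "diversity": (val >> 32) & 0xFFFF,
        "addr_div": (val >> 48) & 1,
        "key": (val >> 49) & 3,
        "next": (val >> 51) & 0xFFF,
    }
```

Keeping the old entry's `next` field intact when retargeting `sy_call` is what preserves the fixup chain: the loader walks entries by that delta, so clobbering it would orphan every later fixup on the same page.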
@@ -1,74 +0,0 @@
"""Mixin: KernelJBPatchLoadDylinkerMixin."""


class KernelJBPatchLoadDylinkerMixin:
    def patch_load_dylinker(self):
        """Bypass load_dylinker policy gate in the dyld path.

        Raw PCC 26.1 kernels resolve this patch through a single runtime path:
        1. Anchor the containing function by a kernel-text reference to
           '/usr/lib/dyld'.
        2. Inside that function, find BL <check>; CBZ W0, <allow>.
        3. Replace BL with unconditional B to <allow>.
        """
        self._log("\n[JB] _load_dylinker: skip dyld policy check")

        str_off = self.find_string(b"/usr/lib/dyld")
        if str_off < 0:
            self._log(" [-] '/usr/lib/dyld' string not found")
            return False

        kstart, kend = self._get_kernel_text_range()
        refs = self.find_string_refs(str_off, kstart, kend)
        if not refs:
            self._log(" [-] no kernel-text code refs to '/usr/lib/dyld'")
            return False

        for adrp_off, _, _ in refs:
            func_start = self.find_function_start(adrp_off)
            if func_start < 0:
                continue
            func_end = self._find_func_end(func_start, 0x1200)
            result = self._find_bl_cbz_gate(func_start, func_end)
            if not result:
                continue
            bl_off, allow_target = result
            b_bytes = self._encode_b(bl_off, allow_target)
            if not b_bytes:
                continue
            self._log(
                f" [+] dyld anchor func at 0x{func_start:X}, "
                f"patch BL at 0x{bl_off:X}"
            )
            self.emit(
                bl_off,
                b_bytes,
                f"b #0x{allow_target - bl_off:X} [_load_dylinker policy bypass]",
            )
            return True

        self._log(" [-] dyld policy gate not found in dyld-anchored function")
        return False

    def _find_bl_cbz_gate(self, start, end):
        """Find BL <check>; CBZ W0,<allow>; MOV W0,#2 gate and return (bl_off, allow_target)."""
        for off in range(start, end - 8, 4):
            d0 = self._disas_at(off)
            d1 = self._disas_at(off + 4)
            d2 = self._disas_at(off + 8)
            if not d0 or not d1:
                continue
            i0 = d0[0]
            i1 = d1[0]
            if i0.mnemonic != "bl" or i1.mnemonic != "cbz":
                continue
            if not i1.op_str.startswith("w0, "):
                continue
            if len(i1.operands) < 2:
                continue
            allow_target = i1.operands[-1].imm

            # Keep selector strict: deny path usually sets errno=2 right after CBZ.
            if d2 and d2[0].mnemonic == "mov" and d2[0].op_str.startswith("w0, #2"):
                return off, allow_target
        return None
@@ -1,166 +0,0 @@
|
||||
"""Mixin: KernelJBPatchMacMountMixin."""
|
||||
|
||||
from .kernel_asm import _cs
|
||||
from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG, asm
|
||||
|
||||
|
||||
class KernelJBPatchMacMountMixin:
|
||||
def patch_mac_mount(self):
|
||||
"""Apply the upstream twin bypasses in the mount-role wrapper.
|
||||
|
||||
Preferred design target is `/Users/qaq/Desktop/patch_fw.py`, which
|
||||
patches two sites in the wrapper that decides whether execution can
|
||||
continue into `mount_common()`:
|
||||
|
||||
- `tbnz wFlags, #5, deny` -> `nop`
|
||||
- `ldrb w8, [xTmp, #1]` -> `mov x8, xzr`
|
||||
|
||||
Runtime design avoids unstable symbols by:
|
||||
1. recovering `mount_common` from the in-image `"mount_common()"`
|
||||
string,
|
||||
2. scanning only a bounded neighborhood for local callers of that
|
||||
recovered function,
|
||||
3. selecting the unique caller that contains both upstream gates.
|
||||
"""
|
||||
self._log("\n[JB] ___mac_mount: upstream twin bypass")
|
||||
|
||||
mount_common = self._find_func_by_string(b"mount_common()", self.kern_text)
|
||||
if mount_common < 0:
|
||||
self._log(" [-] mount_common anchor function not found")
|
||||
return False
|
||||
|
||||
search_start = max(self.kern_text[0], mount_common - 0x5000)
|
||||
search_end = min(self.kern_text[1], mount_common + 0x5000)
|
||||
candidates = {}
|
||||
for off in range(search_start, search_end, 4):
|
||||
target = self._is_bl(off)
|
||||
if target != mount_common:
|
||||
continue
|
||||
caller = self.find_function_start(off)
|
||||
if caller < 0 or caller == mount_common or caller in candidates:
|
||||
continue
|
||||
caller_end = self._find_func_end(caller, 0x1200)
|
||||
sites = self._match_upstream_mount_wrapper(caller, caller_end, mount_common)
|
||||
if sites is not None:
|
||||
candidates[caller] = sites
|
||||
|
||||
if len(candidates) != 1:
|
||||
self._log(f" [-] expected 1 upstream mac_mount candidate, found {len(candidates)}")
|
||||
            return False

        branch_off, mov_off = next(iter(candidates.values()))
        self.emit(branch_off, asm("nop"), "NOP [___mac_mount upstream flag gate]")
        self.emit(mov_off, asm("mov x8, xzr"), "mov x8,xzr [___mac_mount upstream state clear]")
        return True

    def _match_upstream_mount_wrapper(self, start, end, mount_common):
        call_sites = []
        for off in range(start, end, 4):
            if self._is_bl(off) == mount_common:
                call_sites.append(off)
        if not call_sites:
            return None

        flag_gate = self._find_flag_gate(start, end)
        if flag_gate is None:
            return None

        state_gate = self._find_state_gate(start, end, call_sites)
        if state_gate is None:
            return None

        return (flag_gate, state_gate)

    def _find_flag_gate(self, start, end):
        hits = []
        for off in range(start, end - 4, 4):
            d = self._disas_at(off)
            if not d:
                continue
            insn = d[0]
            if insn.mnemonic != "tbnz" or not self._is_bit_branch(insn, "w", 5):
                continue
            target = insn.operands[2].imm
            if not (start <= target < end):
                continue
            td = self._disas_at(target)
            if not td or not self._is_mov_w_imm_value(td[0], 1):
                continue
            hits.append(off)
        if len(hits) == 1:
            return hits[0]
        return None

    def _find_state_gate(self, start, end, call_sites):
        hits = []
        for off in range(start, end - 8, 4):
            d = self._disas_at(off, 3)
            if len(d) < 3:
                continue
            i0, i1, i2 = d
            if not self._is_add_x_imm(i0, 0x70):
                continue
            if not self._is_ldrb_same_base_plus_1(i1, i0.operands[0].reg):
                continue
            if i2.mnemonic != "tbz" or not self._is_bit_branch(i2, self._reg_name(i1.operands[0].reg), 6):
                continue
            target = i2.operands[2].imm
            if not any(target <= call_off <= target + 0x80 for call_off in call_sites):
                continue
            hits.append(i1.address)
        if len(hits) == 1:
            return hits[0]
        return None

    def _is_bit_branch(self, insn, reg_prefix_or_name, bit):
        if len(insn.operands) != 3:
            return False
        reg_op, bit_op, target_op = insn.operands
        if reg_op.type != ARM64_OP_REG or bit_op.type != ARM64_OP_IMM or target_op.type != ARM64_OP_IMM:
            return False
        reg_name = self._reg_name(reg_op.reg)
        if len(reg_prefix_or_name) == 1:
            if not reg_name.startswith(reg_prefix_or_name):
                return False
        elif reg_name != reg_prefix_or_name:
            return False
        return bit_op.imm == bit

    def _is_mov_w_imm_value(self, insn, imm):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return False
        dst, src = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and src.type == ARM64_OP_IMM
            and self._reg_name(dst.reg).startswith("w")
            and src.imm == imm
        )

    def _is_add_x_imm(self, insn, imm):
        if insn.mnemonic != "add" or len(insn.operands) != 3:
            return False
        dst, src, imm_op = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and src.type == ARM64_OP_REG
            and imm_op.type == ARM64_OP_IMM
            and self._reg_name(dst.reg).startswith("x")
            and self._reg_name(src.reg).startswith("x")
            and imm_op.imm == imm
        )

    def _is_ldrb_same_base_plus_1(self, insn, base_reg):
        if insn.mnemonic != "ldrb" or len(insn.operands) < 2:
            return False
        dst, src = insn.operands[:2]
        return (
            dst.type == ARM64_OP_REG
            and src.type == ARM64_OP_MEM
            and src.mem.base == base_reg
            and src.mem.disp == 1
            and self._reg_name(dst.reg).startswith("w")
        )

    def _reg_name(self, reg):
        return _cs.reg_name(reg)
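The flag-gate finder above accepts a `tbnz` only when its bit index and branch target check out. The bit index and displacement come straight out of the A64 TBZ/TBNZ word; a standalone stdlib sketch of that field layout (not part of the deleted patcher, which reads the same fields through Capstone operands):

```python
def decode_tb(word: int, off: int):
    """Decode an A64 TBZ/TBNZ word at file offset `off`.

    Field layout: b5[31] 011011[30:25] op[24] b40[23:19] imm14[18:5] Rt[4:0].
    Returns (mnemonic, reg_number, bit, target_offset) or None.
    """
    if (word >> 25) & 0x3F != 0b011011:
        return None
    mnemonic = "tbnz" if (word >> 24) & 1 else "tbz"
    bit = (((word >> 31) & 1) << 5) | ((word >> 19) & 0x1F)  # b5:b40
    imm14 = (word >> 5) & 0x3FFF
    if imm14 & (1 << 13):  # sign-extend the 14-bit word displacement
        imm14 -= 1 << 14
    rt = word & 0x1F
    return mnemonic, rt, bit, off + imm14 * 4

# tbnz w8, #5, #+0x20 at offset 0x1000
print(decode_tb(0x37280108, 0x1000))  # ('tbnz', 8, 5, 4128)
```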
@@ -1,45 +0,0 @@
"""Mixin: KernelJBPatchNvramMixin."""

from .kernel_jb_base import NOP


class KernelJBPatchNvramMixin:
    def patch_nvram_verify_permission(self):
        """NOP the verifyPermission gate in the `krn.` key-prefix path.

        Runtime reveal is string-anchored only: enumerate code refs to `"krn."`,
        recover the containing function for each ref, then pick the unique
        `tbz/tbnz` guard immediately before that key-prefix load sequence.
        """
        self._log("\n[JB] verifyPermission (NVRAM): NOP")

        str_off = self.find_string(b"krn.")
        if str_off < 0:
            self._log(" [-] 'krn.' string not found")
            return False

        refs = self.find_string_refs(str_off)
        if not refs:
            self._log(" [-] no code refs to 'krn.'")
            return False

        hits = []
        seen = set()
        for ref_off, _, _ in refs:
            foff = self.find_function_start(ref_off)
            if foff < 0 or foff in seen:
                continue
            seen.add(foff)
            for off in range(ref_off - 4, max(foff - 4, ref_off - 0x20), -4):
                d = self._disas_at(off)
                if d and d[0].mnemonic in ('tbnz', 'tbz'):
                    hits.append(off)
                    break

        hits = sorted(set(hits))
        if len(hits) != 1:
            self._log(f" [-] expected 1 NVRAM verifyPermission gate, found {len(hits)}")
            return False

        self.emit(hits[0], NOP, 'NOP [verifyPermission NVRAM]')
        return True
@@ -1,71 +0,0 @@
"""Mixin: KernelJBPatchPortToMapMixin."""

from .kernel_jb_base import ARM64_OP_IMM, NOP, struct, _rd32


class KernelJBPatchPortToMapMixin:
    def patch_convert_port_to_map(self):
        """Skip panic in _convert_port_to_map_with_flavor.

        Anchor: 'userspace has control access to a kernel map' panic string.

        The function flow around the kernel_map check is:
            CMP X16, X8       ; compare map ptr with kernel_map
            B.NE normal_path  ; if NOT kernel_map, continue normally
            ; fall through: set up panic args and call _panic (noreturn)

        Fix: walk backward from the string ref to find the B.cond that
        guards the panic fall-through, then make it unconditional.
        This causes the kernel_map case to take the normal path instead
        of panicking, allowing userspace to access the kernel map.
        """
        self._log("\n[JB] _convert_port_to_map_with_flavor: skip panic")

        str_off = self.find_string(b"userspace has control access to a kernel map")
        if str_off < 0:
            self._log(" [-] panic string not found")
            return False

        refs = self.find_string_refs(str_off, *self.kern_text)
        if not refs:
            self._log(" [-] no code refs")
            return False

        for adrp_off, add_off, _ in refs:
            # Walk backward from the string ADRP to find CMP + B.cond.
            # The pattern is: CMP Xn, Xm; B.NE target
            # We want to change B.NE to unconditional B (always skip panic).
            for back in range(adrp_off - 4, max(adrp_off - 0x60, 0), -4):
                d = self._disas_at(back, 2)
                if not d or len(d) < 2:
                    continue
                i0, i1 = d[0], d[1]
                # Look for CMP + B.NE/B.CS/B.HI (conditional branch away from
                # the panic path). The branch target should be AFTER the panic
                # call (i.e., forward past the string ref region).
                if i0.mnemonic != "cmp":
                    continue
                if not i1.mnemonic.startswith("b."):
                    continue
                # Decode the branch target
                target, kind = self._decode_branch_target(back + 4)
                if target is None:
                    continue
                # The branch should go FORWARD past the panic (beyond adrp_off)
                if target <= adrp_off:
                    continue

                # Found the conditional branch that skips the panic path.
                # Replace it with unconditional B to the same target.
                b_bytes = self._encode_b(back + 4, target)
                if b_bytes:
                    self.emit(
                        back + 4,
                        b_bytes,
                        f"b 0x{target:X} "
                        f"[_convert_port_to_map skip panic]",
                    )
                    return True

        self._log(" [-] branch site not found")
        return False
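The fix above rewrites a `B.cond` into an unconditional `B` to the same target via an `_encode_b` helper whose body is not shown here. A hedged standalone sketch of what such an encoder plausibly does (an unconditional A64 `B` is opcode `000101` plus a signed 26-bit word displacement):

```python
import struct

def encode_b(src_off: int, target_off: int):
    """Encode an unconditional A64 `B target` placed at offset `src_off`.

    Layout: 000101[31:26] imm26[25:0], imm26 = (target - src) / 4.
    Returns None when the displacement does not fit in signed 26 bits.
    """
    delta = (target_off - src_off) >> 2
    if not -(1 << 25) <= delta < (1 << 25):
        return None
    return struct.pack("<I", 0x14000000 | (delta & 0x3FFFFFF))

print(encode_b(0x1000, 0x1010).hex())  # 04000014, i.e. B #+0x10
```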
@@ -1,77 +0,0 @@
"""Mixin: KernelJBPatchPostValidationMixin."""

from .kernel_jb_base import ARM64_OP_REG, ARM64_OP_IMM, ARM64_REG_W0, CMP_W0_W0


class KernelJBPatchPostValidationMixin:
    def patch_post_validation_additional(self):
        """Rewrite the SHA256-only reject compare in AMFI's post-validation path.

        Runtime reveal is string-anchored only: use the
        `"AMFI: code signature validation failed"` xref, recover the caller,
        then recover the BL target whose body contains the unique
        `cmp w0,#imm ; b.ne` reject gate reached immediately after a BL.
        No broad AMFI-text fallback is kept.
        """
        self._log("\n[JB] postValidation additional: cmp w0,w0")

        str_off = self.find_string(b"AMFI: code signature validation failed")
        if str_off < 0:
            self._log(" [-] string not found")
            return False

        refs = self.find_string_refs(str_off, *self.amfi_text)
        if not refs:
            refs = self.find_string_refs(str_off)
        if not refs:
            self._log(" [-] no code refs")
            return False

        hits = []
        seen = set()
        for ref_off, _, _ in refs:
            caller_start = self.find_function_start(ref_off)
            if caller_start < 0 or caller_start in seen:
                continue
            seen.add(caller_start)

            func_end = self._find_func_end(caller_start, 0x2000)
            bl_targets = set()
            for scan in range(caller_start, func_end, 4):
                target = self._is_bl(scan)
                if target >= 0:
                    bl_targets.add(target)

            for target in sorted(bl_targets):
                if not (self.amfi_text[0] <= target < self.amfi_text[1]):
                    continue
                callee_end = self._find_func_end(target, 0x200)
                for off in range(target, callee_end, 4):
                    d = self._disas_at(off, 2)
                    if len(d) < 2:
                        continue
                    i0, i1 = d[0], d[1]
                    if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne":
                        continue
                    ops = i0.operands
                    if len(ops) < 2:
                        continue
                    if ops[0].type != ARM64_OP_REG or ops[0].reg != ARM64_REG_W0:
                        continue
                    if ops[1].type != ARM64_OP_IMM:
                        continue
                    has_bl = False
                    for back in range(off - 4, max(off - 12, target), -4):
                        if self._is_bl(back) >= 0:
                            has_bl = True
                            break
                    if has_bl:
                        hits.append(off)

        hits = sorted(set(hits))
        if len(hits) != 1:
            self._log(f" [-] expected 1 postValidation compare site, found {len(hits)}")
            return False

        self.emit(hits[0], CMP_W0_W0, "cmp w0,w0 [postValidation additional]")
        return True
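The patch replaces `cmp w0, #imm` with the imported `CMP_W0_W0` constant so the following `b.ne` can never fire (`w0 - w0` always sets Z). Assuming that constant encodes the usual `SUBS WZR, W0, W0` alias, a sketch of the field packing:

```python
import struct

def encode_cmp_w_regs(rn: int, rm: int) -> bytes:
    """Encode `CMP Wn, Wm` (alias of SUBS WZR, Wn, Wm, LSL #0).

    SUBS shifted-register layout: sf[31]=0 op[30]=1 S[29]=1 01011[28:24]
    shift[23:22]=0 0[21] Rm[20:16] imm6[15:10]=0 Rn[9:5] Rd[4:0]=WZR.
    """
    word = (0b011 << 29) | (0b01011 << 24) | (rm << 16) | (rn << 5) | 31
    return struct.pack("<I", word)

# `cmp w0, w0` always produces EQ, neutralizing a following `b.ne`.
print(encode_cmp_w_regs(0, 0).hex())  # 1f00006b
```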
@@ -1,52 +0,0 @@
"""Mixin: KernelJBPatchProcPidinfoMixin."""

from .kernel_jb_base import NOP


class KernelJBPatchProcPidinfoMixin:
    def patch_proc_pidinfo(self):
        """Bypass the two early pid-0/proc-null guards in proc_pidinfo.

        Reveal from the shared `_proc_info` switch-table anchor, then match the
        precise early shape used by upstream PCC 26.1:
            ldr x0, [x0,#0x18]
            cbz x0, fail
            bl ...
            cbz/cbnz wN, fail
        Patch only those two guards.
        """
        self._log("\n[JB] _proc_pidinfo: NOP pid-0 guard (2 sites)")

        proc_info_func, _ = self._find_proc_info_anchor()
        if proc_info_func < 0:
            self._log(" [-] _proc_info function not found")
            return False

        first_guard = None
        second_guard = None
        prologue_end = min(proc_info_func + 0x80, self.size)
        for off in range(proc_info_func, prologue_end - 0x10, 4):
            d0 = self._disas_at(off)
            d1 = self._disas_at(off + 4)
            d2 = self._disas_at(off + 8)
            d3 = self._disas_at(off + 12)
            if not d0 or not d1 or not d2 or not d3:
                continue
            i0, i1, i2, i3 = d0[0], d1[0], d2[0], d3[0]
            if (
                i0.mnemonic == 'ldr' and i0.op_str.startswith('x0, [x0, #0x18]') and
                i1.mnemonic == 'cbz' and i1.op_str.startswith('x0, ') and
                i2.mnemonic == 'bl' and
                i3.mnemonic in ('cbz', 'cbnz') and i3.op_str.startswith('w')
            ):
                first_guard = off + 4
                second_guard = off + 12
                break

        if first_guard is None or second_guard is None:
            self._log(' [-] precise proc_pidinfo guard pair not found')
            return False

        self.emit(first_guard, NOP, 'NOP [_proc_pidinfo pid-0 guard A]')
        self.emit(second_guard, NOP, 'NOP [_proc_pidinfo pid-0 guard B]')
        return True
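The guard-pair scan above is a four-wide sliding window of mnemonic/operand predicates over the function prologue. The same idea can be sketched generically over a toy listing (the predicates mirror the shape matched above; the listing itself is fabricated for illustration):

```python
def match_window(insns, pattern):
    """Index of the first window of len(pattern) instructions matching it.

    `insns` is a list of (mnemonic, op_str) pairs; `pattern` is one
    predicate per slot, checked in order.
    """
    width = len(pattern)
    for i in range(len(insns) - width + 1):
        if all(p(*insns[i + j]) for j, p in enumerate(pattern)):
            return i
    return -1

pattern = [
    lambda m, o: m == "ldr" and o.startswith("x0, [x0, #0x18]"),
    lambda m, o: m == "cbz" and o.startswith("x0, "),
    lambda m, o: m == "bl",
    lambda m, o: m in ("cbz", "cbnz") and o.startswith("w"),
]
listing = [
    ("stp", "x29, x30, [sp, #-0x10]!"),
    ("ldr", "x0, [x0, #0x18]"),
    ("cbz", "x0, #0x1a0"),
    ("bl", "#0xfff123"),
    ("cbnz", "w0, #0x1a0"),
]
print(match_window(listing, pattern))  # 1
```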
@@ -1,85 +0,0 @@
"""Mixin: KernelJBPatchProcSecurityMixin."""

from .kernel_jb_base import MOV_X0_0, RET, Counter, _rd32


class KernelJBPatchProcSecurityMixin:
    def patch_proc_security_policy(self):
        """Stub _proc_security_policy: mov x0,#0; ret.

        Anchor: find _proc_info via its distinctive switch-table pattern
        (sub wN,wM,#1; cmp wN,#0x21), then identify _proc_security_policy
        among BL targets — it's called 2+ times, is a small function
        (<0x200 bytes), and is NOT called from the proc_info prologue
        (it's called within switch cases, not before the switch dispatch).
        """
        self._log("\n[JB] _proc_security_policy: mov x0,#0; ret")

        # Find _proc_info by switch pattern:
        #   sub wN,wM,#1 ; cmp wN,#0x21
        proc_info_func, switch_off = self._find_proc_info_anchor()
        ks, ke = self.kern_text

        if proc_info_func < 0:
            self._log(" [-] _proc_info function not found")
            return False

        proc_info_end = self._find_func_end(proc_info_func, 0x4000)
        self._log(
            f" [+] _proc_info at 0x{proc_info_func:X} "
            f"(size 0x{proc_info_end - proc_info_func:X})"
        )

        # Count BL targets within _proc_info (only AFTER the switch dispatch,
        # since security policy is called from switch cases, not the prologue)
        bl_targets = Counter()
        for off in range(switch_off, proc_info_end, 4):
            insn = _rd32(self.raw, off)
            if (insn & 0xFC000000) != 0x94000000:
                continue
            imm26 = insn & 0x3FFFFFF
            if imm26 & (1 << 25):
                imm26 -= 1 << 26
            target = off + imm26 * 4
            if ks <= target < ke:
                bl_targets[target] += 1

        if not bl_targets:
            self._log(" [-] no BL targets found in _proc_info switch cases")
            return False

        # Find _proc_security_policy among candidates.
        # It's called 2+ times, is a small function (<0x300 bytes),
        # and is NOT a utility like copyio (which is much larger).
        for foff, count in bl_targets.most_common():
            if count < 2:
                break

            func_end = self._find_func_end(foff, 0x400)
            func_size = func_end - foff

            self._log(
                f" [*] candidate 0x{foff:X}: {count} calls, "
                f"size 0x{func_size:X}"
            )

            # Skip large functions (utilities like copyio are ~0x28C bytes)
            if func_size > 0x200:
                self._log(" [-] skipped (too large, likely utility)")
                continue

            # Skip tiny functions (< 0x40 bytes, likely trivial helpers)
            if func_size < 0x40:
                self._log(" [-] skipped (too small)")
                continue

            self._log(
                f" [+] identified _proc_security_policy at 0x{foff:X} "
                f"({count} calls, size 0x{func_size:X})"
            )
            self.emit(foff, MOV_X0_0, "mov x0,#0 [_proc_security_policy]")
            self.emit(foff + 4, RET, "ret [_proc_security_policy]")
            return True

        self._log(" [-] _proc_security_policy not identified among BL targets")
        return False
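The BL counting loop above decodes call targets inline: opcode `100101` in the top six bits, then a signed 26-bit word displacement. Pulled out as a standalone sketch with the same arithmetic:

```python
def decode_bl(word: int, off: int) -> int:
    """Return the BL target for instruction `word` at offset `off`, or -1.

    BL layout: 100101[31:26] imm26[25:0]; target = off + sext(imm26) * 4.
    """
    if (word & 0xFC000000) != 0x94000000:
        return -1
    imm26 = word & 0x3FFFFFF
    if imm26 & (1 << 25):  # sign-extend the 26-bit displacement
        imm26 -= 1 << 26
    return off + imm26 * 4

print(hex(decode_bl(0x94000040, 0x2000)))  # 0x2100 (forward by 0x100)
print(hex(decode_bl(0x97FFFFFF, 0x2000)))  # 0x1ffc (backward by one word)
```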
@@ -1,115 +0,0 @@
"""Mixin: KernelJBPatchSandboxExtendedMixin."""

from .kernel_jb_base import MOV_X0_0, RET, struct, _rd64


class KernelJBPatchSandboxExtendedMixin:
    def patch_sandbox_hooks_extended(self):
        """Retarget extended Sandbox MACF hooks to the common allow stub.

        Upstream `patch_fw.py` rewrites the `mac_policy_ops` entries rather
        than patching each hook body. Keep the same runtime strategy here:
        recover `mac_policy_ops` from `mac_policy_conf`, recover the shared
        `mov x0,#0; ret` Sandbox stub, then retarget the selected ops entries
        while preserving their chained-fixup/PAC metadata.
        """
        self._log("\n[JB] Sandbox extended hooks: retarget ops entries to allow stub")

        ops_table = self._find_sandbox_ops_table_via_conf()
        if ops_table is None:
            return False

        allow_stub = self._find_sandbox_allow_stub()
        if allow_stub is None:
            self._log(" [-] common Sandbox allow stub not found")
            return False

        hook_indices_ext = {
            "iokit_check_201": 201,
            "iokit_check_202": 202,
            "iokit_check_203": 203,
            "iokit_check_204": 204,
            "iokit_check_205": 205,
            "iokit_check_206": 206,
            "iokit_check_207": 207,
            "iokit_check_208": 208,
            "iokit_check_209": 209,
            "iokit_check_210": 210,
            "vnode_check_getattr": 245,
            "proc_check_get_cs_info": 249,
            "proc_check_set_cs_info": 250,
            "proc_check_set_cs_info2": 252,
            "vnode_check_chroot": 254,
            "vnode_check_create": 255,
            "vnode_check_deleteextattr": 256,
            "vnode_check_exchangedata": 257,
            "vnode_check_exec": 258,
            "vnode_check_getattrlist": 259,
            "vnode_check_getextattr": 260,
            "vnode_check_ioctl": 261,
            "vnode_check_link": 264,
            "vnode_check_listextattr": 265,
            "vnode_check_open": 267,
            "vnode_check_readlink": 270,
            "vnode_check_setattrlist": 275,
            "vnode_check_setextattr": 276,
            "vnode_check_setflags": 277,
            "vnode_check_setmode": 278,
            "vnode_check_setowner": 279,
            "vnode_check_setutimes": 280,
            "vnode_check_stat": 281,
            "vnode_check_truncate": 282,
            "vnode_check_unlink": 283,
            "vnode_check_fsgetpath": 316,
        }

        patched = 0
        for hook_name, idx in hook_indices_ext.items():
            entry_off = ops_table + idx * 8
            if entry_off + 8 > self.size:
                continue
            entry_raw = _rd64(self.raw, entry_off)
            if entry_raw == 0:
                continue
            entry_new = self._encode_auth_rebase_like(entry_raw, allow_stub)
            if entry_new is None:
                continue
            self.emit(
                entry_off,
                entry_new,
                f"ops[{idx}] -> allow stub [_hook_{hook_name}]",
            )
            patched += 1

        if patched == 0:
            self._log(" [-] no extended sandbox hooks retargeted")
            return False
        return True

    def _find_sandbox_allow_stub(self):
        """Return the common Sandbox `mov x0,#0; ret` stub used by patch_fw.

        On PCC 26.1 research/release there are two such tiny stubs in Sandbox
        text; the higher-address one matches upstream `patch_fw.py`
        (`0x23B73BC` research, `0x22A78BC` release). Keep the reveal
        structural: scan Sandbox text for 2-insn `mov x0,#0; ret` stubs and
        select the highest-address candidate.
        """
        sb_start, sb_end = self.sandbox_text
        hits = []
        for off in range(sb_start, sb_end - 8, 4):
            if self.raw[off:off + 4] == MOV_X0_0 and self.raw[off + 4:off + 8] == RET:
                hits.append(off)
        if len(hits) < 1:
            return None
        allow_stub = max(hits)
        self._log(f" [+] common Sandbox allow stub at 0x{allow_stub:X}")
        return allow_stub

    @staticmethod
    def _encode_auth_rebase_like(orig_val, target_off):
        """Retarget an auth-rebase chained pointer while preserving PAC bits."""
        if (orig_val & (1 << 63)) == 0:
            return None
        return struct.pack("<Q", (orig_val & ~0xFFFFFFFF) | (target_off & 0xFFFFFFFF))
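`_encode_auth_rebase_like` keeps the high half of the 64-bit chained-fixup entry (next-offset, PAC key/diversity, and the bit-63 auth flag) and only swaps the low 32-bit target. A standalone sketch with a fabricated example value, mirroring that split:

```python
import struct

def retarget_chained_ptr(orig_val: int, target_off: int):
    """Swap the low 32-bit target of a chained pointer, keeping the high
    metadata half intact. Returns None when bit 63 (the auth-style flag
    in this scheme) is clear."""
    if (orig_val & (1 << 63)) == 0:
        return None
    new_val = (orig_val & ~0xFFFFFFFF) | (target_off & 0xFFFFFFFF)
    return struct.pack("<Q", new_val)

orig = 0x8012_3456_0040_0000  # fabricated: auth bit set, old target 0x400000
patched = retarget_chained_ptr(orig, 0x23B73BC)
print(hex(struct.unpack("<Q", patched)[0]))  # high half preserved
```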
@@ -1,201 +0,0 @@
"""Mixin: KernelJBPatchSecureRootMixin."""

from .kernel_jb_base import ARM64_OP_IMM, asm


class KernelJBPatchSecureRootMixin:
    _SECURE_ROOT_MATCH_OFFSET = 0x11A

    def patch_io_secure_bsd_root(self):
        """Force the SecureRootName policy return to success.

        Historical versions of this patch matched the first BL* + CBZ/CBNZ W0
        inside the AppleARMPE secure-root dispatch function and rewrote the
        "SecureRoot" gate. That site is semantically wrong and can perturb the
        broader platform-function dispatch path.

        The correct minimal bypass is the final CSEL in the "SecureRootName"
        path that selects between success (0) and kIOReturnNotPrivileged.
        """
        self._log("\n[JB] _IOSecureBSDRoot: force SecureRootName success")

        func_candidates = self._find_secure_root_functions()
        if not func_candidates:
            self._log(" [-] secure-root dispatch function not found")
            return False

        for func_start in sorted(func_candidates):
            func_end = self._find_func_end(func_start, 0x1200)
            site = self._find_secure_root_return_site(func_start, func_end)
            if not site:
                continue

            off, reg_name = site
            patch_bytes = self._compile_zero_return_checked(reg_name)
            self.emit(
                off,
                patch_bytes,
                f"mov {reg_name}, #0 [_IOSecureBSDRoot SecureRootName allow]",
            )
            return True

        self._log(" [-] SecureRootName deny-return site not found")
        return False

    def _find_secure_root_functions(self):
        funcs_with_name = self._functions_referencing_string(b"SecureRootName")
        if not funcs_with_name:
            return set()

        funcs_with_root = self._functions_referencing_string(b"SecureRoot")
        common = funcs_with_name & funcs_with_root
        if common:
            return common
        return funcs_with_name

    def _functions_referencing_string(self, needle):
        func_starts = set()
        for str_off in self._all_cstring_offsets(needle):
            refs = self.find_string_refs(str_off, *self.kern_text)
            for adrp_off, _, _ in refs:
                fn = self.find_function_start(adrp_off)
                if fn >= 0:
                    func_starts.add(fn)
        return func_starts

    def _all_cstring_offsets(self, needle):
        if isinstance(needle, str):
            needle = needle.encode()
        out = []
        start = 0
        while True:
            pos = self.raw.find(needle, start)
            if pos < 0:
                break
            cstr = pos
            while cstr > 0 and self.raw[cstr - 1] != 0:
                cstr -= 1
            cend = self.raw.find(b"\x00", cstr)
            if cend > cstr and self.raw[cstr:cend] == needle:
                out.append(cstr)
            start = pos + 1
        return sorted(set(out))

    def _find_secure_root_return_site(self, func_start, func_end):
        for off in range(func_start, func_end - 4, 4):
            dis = self._disas_at(off)
            if not dis:
                continue
            ins = dis[0]
            if ins.mnemonic != "csel" or len(ins.operands) != 3:
                continue
            if ins.op_str.replace(" ", "").split(",")[-1] != "ne":
                continue

            dest = ins.reg_name(ins.operands[0].reg)
            zero_src = ins.reg_name(ins.operands[1].reg)
            err_src = ins.reg_name(ins.operands[2].reg)
            if zero_src not in ("wzr", "xzr"):
                continue
            if not dest.startswith("w"):
                continue
            if not self._has_secure_rootname_return_context(off, func_start, err_src):
                continue
            if not self._has_secure_rootname_compare_context(off, func_start):
                continue

            return off, dest
        return None

    def _has_secure_rootname_return_context(self, off, func_start, err_reg_name):
        saw_flag_load = False
        saw_flag_test = False
        saw_err_build = False
        lookback_start = max(func_start, off - 0x40)

        for probe in range(off - 4, lookback_start - 4, -4):
            dis = self._disas_at(probe)
            if not dis:
                continue
            ins = dis[0]
            ops = ins.op_str.replace(" ", "")

            if not saw_flag_test and ins.mnemonic == "tst" and ops.endswith("#1"):
                saw_flag_test = True
                continue

            if (
                saw_flag_test
                and not saw_flag_load
                and ins.mnemonic == "ldrb"
                and f"[x19,#0x{self._SECURE_ROOT_MATCH_OFFSET:x}]" in ops
            ):
                saw_flag_load = True
                continue

            if self._writes_register(ins, err_reg_name) and ins.mnemonic in ("mov", "movk", "sub"):
                saw_err_build = True

        return saw_flag_load and saw_flag_test and saw_err_build

    def _has_secure_rootname_compare_context(self, off, func_start):
        saw_match_store = False
        saw_cset_eq = False
        saw_cmp_w0_zero = False
        lookback_start = max(func_start, off - 0xA0)

        for probe in range(off - 4, lookback_start - 4, -4):
            dis = self._disas_at(probe)
            if not dis:
                continue
            ins = dis[0]
            ops = ins.op_str.replace(" ", "")

            if (
                not saw_match_store
                and ins.mnemonic == "strb"
                and f"[x19,#0x{self._SECURE_ROOT_MATCH_OFFSET:x}]" in ops
            ):
                saw_match_store = True
                continue

            if saw_match_store and not saw_cset_eq and ins.mnemonic == "cset" and ops.endswith(",eq"):
                saw_cset_eq = True
                continue

            if saw_match_store and saw_cset_eq and not saw_cmp_w0_zero and ins.mnemonic == "cmp":
                if ops.startswith("w0,#0"):
                    saw_cmp_w0_zero = True
                break

        return saw_match_store and saw_cset_eq and saw_cmp_w0_zero

    def _writes_register(self, ins, reg_name):
        if not ins.operands:
            return False
        first = ins.operands[0]
        if getattr(first, "type", None) != 1:
            return False
        return ins.reg_name(first.reg) == reg_name

    def _compile_zero_return_checked(self, reg_name):
        patch_bytes = asm(f"mov {reg_name}, #0")
        insns = self._disas_n(patch_bytes, 0, 1)
        assert insns, "capstone decode failed for secure-root zero-return patch"
        ins = insns[0]
        assert ins.mnemonic == "mov", (
            f"secure-root zero-return decode mismatch: expected 'mov', got '{ins.mnemonic}'"
        )
        got_dst = ins.reg_name(ins.operands[0].reg)
        assert got_dst == reg_name, (
            f"secure-root zero-return destination mismatch: expected '{reg_name}', got '{got_dst}'"
        )
        got_imm = None
        for op in ins.operands[1:]:
            if op.type == ARM64_OP_IMM:
                got_imm = op.imm
                break
        assert got_imm == 0, (
            f"secure-root zero-return immediate mismatch: expected 0, got {got_imm}"
        )
        return patch_bytes
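`_all_cstring_offsets` only accepts exact NUL-delimited matches so that an anchor like `"SecureRoot"` does not also fire inside `"SecureRootName"`. The same walk-back-to-NUL logic, extracted as a standalone sketch over a toy blob:

```python
def all_cstring_offsets(raw: bytes, needle: bytes):
    """Offsets of NUL-terminated C strings exactly equal to `needle`.

    For each substring hit, walk back to the start of the containing
    C string and keep it only when the whole string matches.
    """
    out = []
    start = 0
    while True:
        pos = raw.find(needle, start)
        if pos < 0:
            break
        cstr = pos
        while cstr > 0 and raw[cstr - 1] != 0:
            cstr -= 1
        cend = raw.find(b"\x00", cstr)
        if cend > cstr and raw[cstr:cend] == needle:
            out.append(cstr)
        start = pos + 1
    return sorted(set(out))

blob = b"\x00SecureRootName\x00SecureRoot\x00junk"
print(all_cstring_offsets(blob, b"SecureRoot"))  # [16] — the substring hit at 1 is rejected
```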
@@ -1,70 +0,0 @@
"""Mixin: KernelJBPatchSharedRegionMixin."""

from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_REG, CMP_X0_X0


class KernelJBPatchSharedRegionMixin:
    def patch_shared_region_map(self):
        """Match the upstream root-vs-preboot gate in shared_region setup.

        Anchor class: string anchor. Resolve the setup helper from the in-image
        `/private/preboot/Cryptexes` string, then patch the *first* compare that
        guards the preboot lookup block:

            cmp mount_reg, root_mount_reg
            b.eq skip_lookup
            ... prepare PREBOOT_CRYPTEX_PATH ...

        This intentionally matches `/Users/qaq/Desktop/patch_fw.py` by forcing
        the initial root-mount comparison to compare equal, rather than only
        patching the later fallback compare against the looked-up preboot mount.
        """
        self._log("\n[JB] _shared_region_map_and_slide_setup: upstream cmp x0,x0")

        foff = self._find_func_by_string(b"/private/preboot/Cryptexes", self.kern_text)
        if foff < 0:
            self._log(" [-] function not found via Cryptexes anchor")
            return False

        func_end = self._find_func_end(foff, 0x2000)
        str_off = self.find_string(b"/private/preboot/Cryptexes")
        if str_off < 0:
            self._log(" [-] Cryptexes string not found")
            return False

        refs = self.find_string_refs(str_off, foff, func_end)
        hits = []
        for adrp_off, _, _ in refs:
            patch_off = self._find_upstream_root_mount_cmp(foff, adrp_off)
            if patch_off is not None:
                hits.append(patch_off)

        if len(hits) != 1:
            self._log(" [-] upstream root-vs-preboot cmp gate not found uniquely")
            return False

        self.emit(
            hits[0], CMP_X0_X0, "cmp x0,x0 [_shared_region_map_and_slide_setup]"
        )
        return True

    def _find_upstream_root_mount_cmp(self, func_start, str_ref_off):
        scan_start = max(func_start, str_ref_off - 0x24)
        scan_end = min(str_ref_off, scan_start + 0x24)
        for off in range(scan_start, scan_end, 4):
            d = self._disas_at(off, 3)
            if len(d) < 3:
                continue
            cmp_insn, beq_insn, next_insn = d[0], d[1], d[2]
            if cmp_insn.mnemonic != "cmp" or beq_insn.mnemonic != "b.eq":
                continue
            if len(cmp_insn.operands) != 2 or len(beq_insn.operands) != 1:
                continue
            if cmp_insn.operands[0].type != ARM64_OP_REG or cmp_insn.operands[1].type != ARM64_OP_REG:
                continue
            if beq_insn.operands[0].type != ARM64_OP_IMM or beq_insn.operands[0].imm <= beq_insn.address:
                continue
            if next_insn.mnemonic != "str" or "xzr" not in next_insn.op_str:
                continue
            return cmp_insn.address
        return None
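String-anchored reveals throughout these mixins depend on `find_string_refs` locating ADRP+ADD pairs that materialize a string address. The ADRP half computes a 4 KiB page from a signed 21-bit page count; a standalone decoder sketch of just that step (the real helper's internals are not shown here):

```python
def decode_adrp(word: int, off: int):
    """Page address computed by an ADRP at offset `off`, or None.

    Layout: 1[31] immlo[30:29] 10000[28:24] immhi[23:5] Rd[4:0];
    target = (PC & ~0xFFF) + sext(immhi:immlo) * 0x1000.
    """
    if (word & 0x9F000000) != 0x90000000:
        return None
    imm = (((word >> 5) & 0x7FFFF) << 2) | ((word >> 29) & 0x3)
    if imm & (1 << 20):  # sign-extend the 21-bit page count
        imm -= 1 << 21
    return (off & ~0xFFF) + imm * 0x1000

# adrp x0, #+2 pages, placed at 0x1234 -> page 0x3000
print(hex(decode_adrp(0xD0000000, 0x1234)))  # 0x3000
```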
@@ -1,140 +0,0 @@
|
||||
"""Mixin: KernelJBPatchSpawnPersonaMixin."""
|
||||
|
||||
from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG, NOP
|
||||
|
||||
|
||||
class KernelJBPatchSpawnPersonaMixin:
|
||||
def patch_spawn_validate_persona(self):
|
||||
"""Restore the upstream dual-CBZ bypass in the persona helper.
|
||||
|
||||
Preferred design target is `/Users/qaq/Desktop/patch_fw.py`, which NOPs
|
||||
two sibling `cbz w?, deny` guards in the small helper reached from the
|
||||
entitlement-string-driven spawn policy wrapper.
|
||||
|
||||
Runtime design intentionally avoids unstable symbols:
|
||||
1. recover the outer spawn policy function from the embedded
|
||||
`com.apple.private.spawn-panic-crash-behavior` string,
|
||||
2. enumerate its local BL callees,
|
||||
3. choose the unique small callee whose local CFG matches the upstream
|
||||
helper shape (`ldr [arg,#8] ; cbz deny ; ldr [arg,#0xc] ; cbz deny`),
|
||||
4. NOP both `cbz` guards at the upstream sites.
|
||||
"""
|
||||
self._log("\n[JB] _spawn_validate_persona: upstream dual-CBZ bypass")
|
||||
|
||||
anchor_func = self._find_func_by_string(
|
||||
b"com.apple.private.spawn-panic-crash-behavior", self.kern_text
|
||||
)
|
||||
if anchor_func < 0:
|
||||
self._log(" [-] spawn entitlement anchor not found")
|
||||
return False
|
||||
|
||||
anchor_end = self._find_func_end(anchor_func, 0x4000)
|
||||
sites = self._find_upstream_persona_cbz_sites(anchor_func, anchor_end)
|
||||
if sites is None:
|
||||
self._log(" [-] upstream persona helper not found from string anchor")
|
||||
return False
|
||||
|
||||
first_cbz, second_cbz = sites
|
||||
self.emit(first_cbz, NOP, "NOP [_spawn_validate_persona pid-slot guard]")
|
||||
self.emit(second_cbz, NOP, "NOP [_spawn_validate_persona persona-slot guard]")
|
||||
return True
|
||||
|
||||
def _find_upstream_persona_cbz_sites(self, anchor_start, anchor_end):
|
||||
matches = []
|
||||
seen = set()
|
||||
for off in range(anchor_start, anchor_end, 4):
|
||||
target = self._is_bl(off)
|
||||
if target < 0 or target in seen:
|
||||
continue
|
||||
if not (self.kern_text[0] <= target < self.kern_text[1]):
|
||||
continue
|
||||
seen.add(target)
|
||||
func_end = self._find_func_end(target, 0x400)
|
||||
            sites = self._match_persona_helper(target, func_end)
            if sites is not None:
                matches.append(sites)

        if len(matches) == 1:
            return matches[0]
        if matches:
            self._log(
                "  [-] ambiguous persona helper candidates: "
                + ", ".join(f"0x{a:X}/0x{b:X}" for a, b in matches)
            )
        return None

    def _match_persona_helper(self, start, end):
        hits = []
        for off in range(start, end - 0x14, 4):
            d = self._disas_at(off, 6)
            if len(d) < 6:
                continue
            i0, i1, i2, i3, i4, i5 = d[:6]
            if not self._is_ldr_mem(i0, disp=8):
                continue
            if not self._is_cbz_w_same_reg(i1, i0.operands[0].reg):
                continue
            if not self._is_ldr_mem_same_base(i2, i0.operands[1].mem.base, disp=0xC):
                continue
            if not self._is_cbz_w_same_reg(i3, i2.operands[0].reg):
                continue
            deny_target = i1.operands[1].imm
            if i3.operands[1].imm != deny_target:
                continue
            if not self._looks_like_errno_return(deny_target, 1):
                continue
            if not self._is_mov_x_imm_zero(i4):
                continue
            if not self._is_ldr_mem(i5, disp=0x490):
                continue
            hits.append((i1.address, i3.address))

        if len(hits) == 1:
            return hits[0]
        return None

    def _looks_like_errno_return(self, target, errno_value):
        d = self._disas_at(target, 2)
        return len(d) >= 1 and self._is_mov_w_imm_value(d[0], errno_value)

    def _is_ldr_mem(self, insn, disp):
        if insn.mnemonic != "ldr" or len(insn.operands) < 2:
            return False
        dst, src = insn.operands[:2]
        return dst.type == ARM64_OP_REG and src.type == ARM64_OP_MEM and src.mem.disp == disp

    def _is_ldr_mem_same_base(self, insn, base_reg, disp):
        return self._is_ldr_mem(insn, disp) and insn.operands[1].mem.base == base_reg

    def _is_cbz_w_same_reg(self, insn, reg):
        if insn.mnemonic != "cbz" or len(insn.operands) != 2:
            return False
        op0, op1 = insn.operands
        return (
            op0.type == ARM64_OP_REG
            and op0.reg == reg
            and op1.type == ARM64_OP_IMM
            and insn.reg_name(op0.reg).startswith("w")
        )

    def _is_mov_x_imm_zero(self, insn):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return False
        dst, src = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and src.type == ARM64_OP_IMM
            and src.imm == 0
            and insn.reg_name(dst.reg).startswith("x")
        )

    def _is_mov_w_imm_value(self, insn, imm):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return False
        dst, src = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and src.type == ARM64_OP_IMM
            and src.imm == imm
            and insn.reg_name(dst.reg).startswith("w")
        )
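The matchers above classify capstone-decoded instructions, but the `mov w0, #errno` errno-return shape they look for can also be recognized on raw 32-bit words. A minimal raw-word sketch (the MOVZ field layout is standard ARM64; the helper name `is_movz_w_imm` is invented here for illustration and is not part of the patcher):

```python
def is_movz_w_imm(word, rd, imm16):
    # MOVZ Wd, #imm16 (sf=0, hw=0): 0x52800000 | imm16 << 5 | Rd
    if (word & 0xFFE0001F) != (0x52800000 | rd):
        return False
    return ((word >> 5) & 0xFFFF) == imm16

# `mov w0, #1` assembles to 0x52800020
assert is_movz_w_imm(0x52800020, rd=0, imm16=1)
assert not is_movz_w_imm(0x52800040, rd=0, imm16=1)  # that word is mov w0, #2
```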
@@ -1,280 +0,0 @@
"""Mixin: KernelJBPatchSyscallmaskMixin."""

from .kernel_jb_base import asm, _rd32, struct


class KernelJBPatchSyscallmaskMixin:
    _PACIBSP_U32 = 0xD503237F
    _SYSCALLMASK_FF_BLOB_SIZE = 0x100

    def _find_syscallmask_manager_func(self):
        """Find the high-level apply manager using its error strings."""
        strings = (
            b"failed to apply unix syscall mask",
            b"failed to apply mach trap mask",
            b"failed to apply kernel MIG routine mask",
        )
        candidates = None
        for string in strings:
            str_off = self.find_string(string)
            if str_off < 0:
                return -1
            refs = self.find_string_refs(str_off, *self.sandbox_text)
            if not refs:
                refs = self.find_string_refs(str_off)
            func_starts = {
                self.find_function_start(ref[0])
                for ref in refs
                if self.find_function_start(ref[0]) >= 0
            }
            if not func_starts:
                return -1
            candidates = func_starts if candidates is None else candidates & func_starts
            if not candidates:
                return -1

        return min(candidates)

    def _extract_w1_immediate_near_call(self, func_off, call_off):
        """Best-effort lookup of the last `mov w1, #imm` before a BL."""
        scan_start = max(func_off, call_off - 0x20)
        for off in range(call_off - 4, scan_start - 4, -4):
            d = self._disas_at(off)
            if not d:
                continue
            insn = d[0]
            if insn.mnemonic != "mov":
                continue
            op = insn.op_str.replace(" ", "")
            if not op.startswith("w1,#"):
                continue
            try:
                return int(op.split("#", 1)[1], 0)
            except ValueError:
                return None
        return None

    def _find_syscallmask_apply_func(self):
        """Find the low-level syscallmask apply wrapper used three times.

        On older PCC kernels this corresponds to the stripped function patched by
        the historical upstream C22 shellcode. On newer kernels it is the wrapper
        underneath `_proc_apply_syscall_masks`.
        """
        for name in ("_syscallmask_apply_to_proc", "_proc_apply_syscall_masks"):
            sym_off = self._resolve_symbol(name)
            if sym_off >= 0:
                return sym_off

        manager_off = self._find_syscallmask_manager_func()
        if manager_off < 0:
            return -1

        func_end = self._find_func_end(manager_off, 0x300)
        target_calls = {}
        for off in range(manager_off, func_end, 4):
            target = self._is_bl(off)
            if target < 0:
                continue
            target_calls.setdefault(target, []).append(off)

        for target, calls in sorted(target_calls.items(), key=lambda item: -len(item[1])):
            if len(calls) < 3:
                continue
            whiches = {
                self._extract_w1_immediate_near_call(manager_off, call_off)
                for call_off in calls
            }
            if {0, 1, 2}.issubset(whiches):
                return target

        return -1

    def _find_last_branch_target(self, func_off):
        """Find the last BL/B target in a function."""
        func_end = self._find_func_end(func_off, 0x280)
        for off in range(func_end - 4, func_off, -4):
            target = self._is_bl(off)
            if target >= 0:
                return off, target
            val = _rd32(self.raw, off)
            if (val & 0xFC000000) == 0x14000000:
                imm26 = val & 0x3FFFFFF
                if imm26 & (1 << 25):
                    imm26 -= 1 << 26
                target = off + imm26 * 4
                if self.kern_text[0] <= target < self.kern_text[1]:
                    return off, target
        return -1, -1

    def _resolve_syscallmask_helpers(self, func_off, helper_target):
        """Resolve the mutation helper and tail setter target deterministically.

        Historical C22 calls the next function after the pre-setter helper's
        containing function. On the upstream PCC 26.1 kernel this is the
        `zalloc_ro_mut` wrapper used by the original shellcode. We derive the
        same relation structurally instead of relying on symbol fallback.
        """
        if helper_target < 0:
            return -1, -1

        helper_func = self.find_function_start(helper_target)
        if helper_func < 0:
            return -1, -1

        mutator_off = self._find_func_end(helper_func, 0x200)
        if mutator_off <= helper_target or mutator_off >= helper_func + 0x200:
            return -1, -1

        head = self._disas_at(mutator_off)
        if not head:
            return -1, -1
        if head[0].mnemonic not in ("pacibsp", "bti"):
            return -1, -1

        _, setter_off = self._find_last_branch_target(func_off)
        if setter_off < 0:
            return -1, -1
        return mutator_off, setter_off

    def _find_syscallmask_inject_bl(self, func_off):
        """Find the pre-setter helper BL that upstream C22 replaced."""
        func_end = self._find_func_end(func_off, 0x280)
        scan_end = min(func_off + 0x80, func_end)
        seen_cbz_x2 = False
        for off in range(func_off, scan_end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            insn = d[0]
            op = insn.op_str.replace(" ", "")
            if insn.mnemonic == "cbz" and op.startswith("x2,"):
                seen_cbz_x2 = True
                continue
            if seen_cbz_x2 and self._is_bl(off) >= 0:
                return off
        return -1

    def _find_syscallmask_tail_branch(self, func_off):
        """Find the final tail `B` into the setter core."""
        branch_off, target = self._find_last_branch_target(func_off)
        if branch_off < 0:
            return -1, -1
        if self._is_bl(branch_off) >= 0:
            return -1, -1
        return branch_off, target

    def _build_syscallmask_cave(self, cave_off, zalloc_off, setter_off):
        """Build a C22 cave that forces the installed mask bytes to 0xFF.

        Semantics intentionally follow the historical upstream design: mutate the
        pointed-to mask buffer into an allow-all mask, then continue through the
        normal setter path.
        """
        blob_size = self._SYSCALLMASK_FF_BLOB_SIZE
        code_off = cave_off + blob_size
        code = []
        code.append(asm("cbz x2, #0x6c"))
        code.append(asm("sub sp, sp, #0x40"))
        code.append(asm("stp x19, x20, [sp, #0x10]"))
        code.append(asm("stp x21, x22, [sp, #0x20]"))
        code.append(asm("stp x29, x30, [sp, #0x30]"))
        code.append(asm("mov x19, x0"))
        code.append(asm("mov x20, x1"))
        code.append(asm("mov x21, x2"))
        code.append(asm("mov x22, x3"))
        code.append(asm("mov x8, #8"))
        code.append(asm("mov x0, x17"))
        code.append(asm("mov x1, x21"))
        code.append(asm("mov x2, #0"))

        adr_off = code_off + len(code) * 4
        blob_delta = cave_off - adr_off
        code.append(asm(f"adr x3, #{blob_delta}"))
        code.append(asm("udiv x4, x22, x8"))
        code.append(asm("msub x10, x4, x8, x22"))
        code.append(asm("cbz x10, #8"))
        code.append(asm("add x4, x4, #1"))

        bl_off = code_off + len(code) * 4
        branch_back_off = code_off + 27 * 4
        bl = self._encode_bl(bl_off, zalloc_off)
        branch_back = self._encode_b(branch_back_off, setter_off)
        if not bl or not branch_back:
            return None
        code.append(bl)
        code.append(asm("mov x0, x19"))
        code.append(asm("mov x1, x20"))
        code.append(asm("mov x2, x21"))
        code.append(asm("mov x3, x22"))
        code.append(asm("ldp x19, x20, [sp, #0x10]"))
        code.append(asm("ldp x21, x22, [sp, #0x20]"))
        code.append(asm("ldp x29, x30, [sp, #0x30]"))
        code.append(asm("add sp, sp, #0x40"))
        code.append(branch_back)

        return (b"\xFF" * blob_size) + b"".join(code), code_off, blob_size

    def patch_syscallmask_apply_to_proc(self):
        """Retargeted C22 patch based on the verified upstream semantics.

        Historical C22 does not early-return. It hijacks the low-level apply
        wrapper, rewrites the effective syscall/mach/kobj mask bytes to an
        allow-all blob via `zalloc_ro_mut`, then resumes through the normal
        setter path.
        """
        self._log("\n[JB] _syscallmask_apply_to_proc: retargeted upstream C22")

        func_off = self._find_syscallmask_apply_func()
        if func_off < 0:
            self._log("  [-] syscallmask apply wrapper not found (fail-closed)")
            return False

        call_off = self._find_syscallmask_inject_bl(func_off)
        if call_off < 0:
            self._log("  [-] helper BL site not found in syscallmask wrapper")
            return False

        branch_off, setter_off = self._find_syscallmask_tail_branch(func_off)
        if branch_off < 0 or setter_off < 0:
            self._log("  [-] setter tail branch not found in syscallmask wrapper")
            return False

        mutator_off, _ = self._resolve_syscallmask_helpers(func_off, self._is_bl(call_off))
        if mutator_off < 0:
            self._log("  [-] syscallmask mutation helper not resolved structurally")
            return False

        cave_size = self._SYSCALLMASK_FF_BLOB_SIZE + 0x80
        cave_off = self._find_code_cave(cave_size)
        if cave_off < 0:
            self._log("  [-] no executable code cave found for C22")
            return False

        cave_info = self._build_syscallmask_cave(cave_off, mutator_off, setter_off)
        if cave_info is None:
            self._log("  [-] failed to encode C22 cave branches")
            return False
        cave_bytes, code_off, blob_size = cave_info

        branch_to_cave = self._encode_b(branch_off, code_off)
        if not branch_to_cave:
            self._log("  [-] tail branch cannot reach C22 cave")
            return False

        self.emit(
            call_off,
            asm("mov x17, x0"),
            "mov x17,x0 [syscallmask C22 save RO selector]",
        )
        self.emit(
            branch_off,
            branch_to_cave,
            "b cave [syscallmask C22 mutate mask then setter]",
        )
        self.emit(
            cave_off,
            cave_bytes,
            f"syscallmask C22 cave (ff blob {blob_size:#x} + structural mutator + setter tail)",
        )
        return True
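The raw `B` decode in `_find_last_branch_target` is the standard ARM64 imm26 sign-extension. The same round trip can be sketched standalone, without a kernel image (the `encode_b`/`decode_b` names are illustrative, not the patcher's `_encode_b`):

```python
def encode_b(src, dst):
    # B: 0x14000000 | imm26, where imm26 = (dst - src) / 4, two's-complement
    delta = (dst - src) >> 2
    return 0x14000000 | (delta & 0x3FFFFFF)

def decode_b(src, word):
    assert (word & 0xFC000000) == 0x14000000
    imm26 = word & 0x3FFFFFF
    if imm26 & (1 << 25):  # sign bit of the 26-bit field
        imm26 -= 1 << 26
    return src + imm26 * 4

word = encode_b(0x1000, 0xF00)  # backward branch by -0x100
assert word == 0x17FFFFC0
assert decode_b(0x1000, word) == 0xF00
assert decode_b(0x2000, encode_b(0x2000, 0x2400)) == 0x2400  # forward branch
```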
@@ -1,238 +0,0 @@
"""Mixin: KernelJBPatchTaskConversionMixin."""

import os

from .kernel_jb_base import (
    ARM64_OP_REG,
    ARM64_OP_MEM,
    ARM64_REG_X0,
    ARM64_REG_X1,
    ARM64_REG_W0,
    CMP_XZR_XZR,
    asm,
    struct,
    _rd32,
)


def _u32(insn):
    return struct.unpack("<I", asm(insn))[0]


def _derive_mask_and_value(insns):
    vals = [_u32(i) for i in insns]
    mask = 0xFFFFFFFF
    for v in vals[1:]:
        mask &= ~(vals[0] ^ v)
    value = vals[0] & mask
    return mask, value


def _field_mask(total_bits=32, variable_fields=()):
    mask = (1 << total_bits) - 1
    for start, width in variable_fields:
        mask &= ~(((1 << width) - 1) << start)
    return mask & ((1 << total_bits) - 1)


class KernelJBPatchTaskConversionMixin:
    _ALLOW_SLOW_FALLBACK = (
        os.environ.get("VPHONE_TASK_CONV_ALLOW_SLOW_FALLBACK", "").strip() == "1"
    )

    # Build all matcher constants from keystone-assembled instruction bytes.
    # No hardcoded opcode constants.
    _CMP_XN_X0_MASK, _CMP_XN_X0_VAL = _derive_mask_and_value(
        ("cmp x0, x0", "cmp x1, x0", "cmp x30, x0")
    )
    _CMP_XN_X1_MASK, _CMP_XN_X1_VAL = _derive_mask_and_value(
        ("cmp x0, x1", "cmp x1, x1", "cmp x30, x1")
    )
    _BEQ_MASK = _field_mask(variable_fields=((5, 19),))
    _BEQ_VAL = _u32("b.eq #0x100") & _BEQ_MASK
    _LDR_X_UNSIGNED_MASK = _field_mask(variable_fields=((0, 5), (5, 5), (10, 12)))
    _LDR_X_UNSIGNED_VAL = _u32("ldr x0, [x0]") & _LDR_X_UNSIGNED_MASK
    _ADRP_MASK = 0x9F000000
    _ADRP_VAL = 0x90000000
    _BL_MASK = 0xFC000000
    _BL_VAL = 0x94000000
    _CBZ_W_MASK = 0x7F000000
    _CBZ_W_VAL = 0x34000000
    _CBNZ_W_VAL = 0x35000000
    _MOV_X19_X0 = _u32("mov x19, x0")
    _MOV_X0_X1 = _u32("mov x0, x1")

    def patch_task_conversion_eval_internal(self):
        """Allow task conversion: cmp Xn,x0 -> cmp xzr,xzr at unique guard site."""
        self._log("\n[JB] task_conversion_eval_internal: cmp xzr,xzr")

        ks, ke = self.kern_text
        candidates = self._collect_candidates_fast(ks, ke)
        # Fail-closed by default. Slow fallback can be explicitly enabled for
        # manual triage on unknown kernels.
        if len(candidates) != 1 and self._ALLOW_SLOW_FALLBACK:
            self._log(
                "  [!] fast matcher non-unique, trying slow fallback "
                "(VPHONE_TASK_CONV_ALLOW_SLOW_FALLBACK=1)"
            )
            candidates = self._collect_candidates_slow(ks, ke)

        if len(candidates) != 1:
            msg = (
                "  [-] expected 1 task-conversion guard site, found "
                f"{len(candidates)}"
            )
            if not self._ALLOW_SLOW_FALLBACK:
                msg += " (slow fallback disabled)"
            self._log(msg)
            return False

        self.emit(
            candidates[0], CMP_XZR_XZR, "cmp xzr,xzr [_task_conversion_eval_internal]"
        )
        return True

    @staticmethod
    def _decode_b_cond_target(off, insn):
        imm19 = (insn >> 5) & 0x7FFFF
        if imm19 & (1 << 18):
            imm19 -= 1 << 19
        return off + imm19 * 4

    def _is_candidate_context_safe(self, off, cmp_reg):
        # Require ADRP + LDR preamble for the same register.
        p2 = _rd32(self.raw, off - 8)
        if (p2 & self._ADRP_MASK) != self._ADRP_VAL:
            return False
        if (p2 & 0x1F) != cmp_reg:
            return False

        # Require the known post-compare sequence shape.
        if _rd32(self.raw, off + 16) != self._MOV_X19_X0:
            return False
        if _rd32(self.raw, off + 20) != self._MOV_X0_X1:
            return False

        i6 = _rd32(self.raw, off + 24)
        if (i6 & self._BL_MASK) != self._BL_VAL:
            return False

        i7 = _rd32(self.raw, off + 28)
        op = i7 & self._CBZ_W_MASK
        if op not in (self._CBZ_W_VAL, self._CBNZ_W_VAL):
            return False
        if (i7 & 0x1F) != 0:  # require w0
            return False

        # Both b.eq targets must be forward and nearby in the same routine.
        t1 = self._decode_b_cond_target(off + 4, _rd32(self.raw, off + 4))
        t2 = self._decode_b_cond_target(off + 12, _rd32(self.raw, off + 12))
        if t1 <= off or t2 <= off:
            return False
        if (t1 - off) > 0x200 or (t2 - off) > 0x200:
            return False
        return True

    def _collect_candidates_fast(self, start, end):
        cache = getattr(self, "_jb_scan_cache", None)
        key = ("task_conversion_fast", start, end)
        if cache is not None:
            cached = cache.get(key)
            if cached is not None:
                return cached

        out = []
        for off in range(start + 8, end - 28, 4):
            i0 = _rd32(self.raw, off)
            if (i0 & self._CMP_XN_X0_MASK) != self._CMP_XN_X0_VAL:
                continue
            cmp_reg = (i0 >> 5) & 0x1F

            p = _rd32(self.raw, off - 4)
            if (p & self._LDR_X_UNSIGNED_MASK) != self._LDR_X_UNSIGNED_VAL:
                continue
            p_rt = p & 0x1F
            p_rn = (p >> 5) & 0x1F
            if p_rt != cmp_reg or p_rn != cmp_reg:
                continue

            i1 = _rd32(self.raw, off + 4)
            if (i1 & self._BEQ_MASK) != self._BEQ_VAL:
                continue

            i2 = _rd32(self.raw, off + 8)
            if (i2 & self._CMP_XN_X1_MASK) != self._CMP_XN_X1_VAL:
                continue
            if ((i2 >> 5) & 0x1F) != cmp_reg:
                continue

            i3 = _rd32(self.raw, off + 12)
            if (i3 & self._BEQ_MASK) != self._BEQ_VAL:
                continue

            if not self._is_candidate_context_safe(off, cmp_reg):
                continue

            out.append(off)
        if cache is not None:
            cache[key] = out
        return out

    def _collect_candidates_slow(self, start, end):
        cache = getattr(self, "_jb_scan_cache", None)
        key = ("task_conversion_slow", start, end)
        if cache is not None:
            cached = cache.get(key)
            if cached is not None:
                return cached

        out = []
        for off in range(start + 4, end - 12, 4):
            d0 = self._disas_at(off)
            if not d0:
                continue
            i0 = d0[0]
            if i0.mnemonic != "cmp" or len(i0.operands) < 2:
                continue
            a0, a1 = i0.operands[0], i0.operands[1]
            if not (a0.type == ARM64_OP_REG and a1.type == ARM64_OP_REG):
                continue
            if a1.reg != ARM64_REG_X0:
                continue
            cmp_reg = a0.reg

            dp = self._disas_at(off - 4)
            d1 = self._disas_at(off + 4)
            d2 = self._disas_at(off + 8)
            d3 = self._disas_at(off + 12)
            if not dp or not d1 or not d2 or not d3:
                continue
            p = dp[0]
            i1, i2, i3 = d1[0], d2[0], d3[0]

            if p.mnemonic != "ldr" or len(p.operands) < 2:
                continue
            p0, p1 = p.operands[0], p.operands[1]
            if p0.type != ARM64_OP_REG or p0.reg != cmp_reg:
                continue
            if p1.type != ARM64_OP_MEM:
                continue
            if p1.mem.base != cmp_reg:
                continue

            if i1.mnemonic != "b.eq":
                continue
            if i2.mnemonic != "cmp" or len(i2.operands) < 2:
                continue
            j0, j1 = i2.operands[0], i2.operands[1]
            if not (j0.type == ARM64_OP_REG and j1.type == ARM64_OP_REG):
                continue
            if not (j0.reg == cmp_reg and j1.reg == ARM64_REG_X1):
                continue
            if i3.mnemonic != "b.eq":
                continue

            out.append(off)
        if cache is not None:
            cache[key] = out
        return out
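`_derive_mask_and_value` above needs keystone at import time, but the derivation itself is assembler-independent: bits that differ between any two variants of the same instruction are wildcarded, the rest become the fixed match value. A sketch with pre-assembled words (the three `cmp Xn, x0` encodings below are standard `SUBS XZR, Xn, X0` words, hardcoded here only so the example runs without keystone):

```python
def derive_mask_and_value(words):
    # Bits that vary across the sample encodings are masked out; what
    # remains is a fixed-bits matcher for the whole instruction family.
    mask = 0xFFFFFFFF
    for w in words[1:]:
        mask &= ~(words[0] ^ w) & 0xFFFFFFFF
    return mask, words[0] & mask

# cmp x0, x0 / cmp x1, x0 / cmp x30, x0 -- only the Rn field varies
mask, value = derive_mask_and_value([0xEB00001F, 0xEB00003F, 0xEB0003DF])
assert mask == 0xFFFFFC1F             # Rn field (bits 5..9) wildcarded
assert (0xEB0000BF & mask) == value   # cmp x5, x0 matches the family
assert (0x8B000000 & mask) != value   # add x0, x0, x0 does not
```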
@@ -1,135 +0,0 @@
"""Mixin: KernelJBPatchTaskForPidMixin."""

from .kernel_asm import _cs
from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG, NOP


class KernelJBPatchTaskForPidMixin:
    def patch_task_for_pid(self):
        """NOP the upstream early `pid == 0` reject gate in `task_for_pid`.

        Preferred design target is `/Users/qaq/Desktop/patch_fw.py`, which
        patches the early `cbz wPid, fail` gate before `port_name_to_task()`.

        Anchor class: heuristic.

        There is no stable direct `task_for_pid` symbol path on the stripped
        kernels, so the runtime reveal first recovers the enclosing function via
        the in-function string `proc_ro_ref_task`, then scans only that function
        and looks for the unique upstream local shape:

            ldr wPid, [xArgs, #8]
            ldr xTaskPtr, [xArgs, #0x10]
            ...
            cbz wPid, fail
            mov w1, #0
            mov w2, #0
            mov w3, #0
            mov x4, #0
            bl port_name_to_task-like helper
            cbz x0, fail
        """
        self._log("\n[JB] _task_for_pid: upstream pid==0 gate NOP")

        func_start = self._find_func_by_string(b"proc_ro_ref_task", self.kern_text)
        if func_start < 0:
            self._log("  [-] task_for_pid anchor function not found")
            return False
        search_end = min(self.kern_text[1], func_start + 0x800)

        hits = []
        for off in range(func_start, search_end - 0x18, 4):
            d0 = self._disas_at(off)
            if not d0 or d0[0].mnemonic != "cbz":
                continue
            hit = self._match_upstream_task_for_pid_gate(off, func_start)
            if hit is not None:
                hits.append(hit)

        if len(hits) != 1:
            self._log(f"  [-] expected 1 upstream task_for_pid candidate, found {len(hits)}")
            return False

        self.emit(hits[0], NOP, "NOP [_task_for_pid pid==0 gate]")
        return True

    def _match_upstream_task_for_pid_gate(self, off, func_start):
        d = self._disas_at(off, 7)
        if len(d) < 7:
            return None
        cbz_pid, mov1, mov2, mov3, mov4, bl_insn, cbz_ret = d
        if cbz_pid.mnemonic != "cbz" or len(cbz_pid.operands) != 2:
            return None
        if cbz_pid.operands[0].type != ARM64_OP_REG or cbz_pid.operands[1].type != ARM64_OP_IMM:
            return None

        if not self._is_mov_imm_zero(mov1, "w1"):
            return None
        if not self._is_mov_imm_zero(mov2, "w2"):
            return None
        if not self._is_mov_imm_zero(mov3, "w3"):
            return None
        if not self._is_mov_imm_zero(mov4, "x4"):
            return None
        if bl_insn.mnemonic != "bl":
            return None
        if cbz_ret.mnemonic != "cbz" or len(cbz_ret.operands) != 2:
            return None
        if cbz_ret.operands[0].type != ARM64_OP_REG or cbz_ret.reg_name(cbz_ret.operands[0].reg) != "x0":
            return None
        fail_target = cbz_pid.operands[1].imm
        if cbz_ret.operands[1].type != ARM64_OP_IMM or cbz_ret.operands[1].imm != fail_target:
            return None

        pid_load = None
        taskptr_load = None
        for prev_off in range(max(func_start, off - 0x18), off, 4):
            prev_d = self._disas_at(prev_off)
            if not prev_d:
                continue
            prev = prev_d[0]
            if pid_load is None and self._is_w_ldr_from_x_imm(prev, 8):
                pid_load = prev
                continue
            if taskptr_load is None and self._is_x_ldr_from_x_imm(prev, 0x10):
                taskptr_load = prev
        if pid_load is None or taskptr_load is None:
            return None
        if cbz_pid.operands[0].reg != pid_load.operands[0].reg:
            return None
        return cbz_pid.address

    def _is_mov_imm_zero(self, insn, dst_name):
        if insn.mnemonic != "mov" or len(insn.operands) != 2:
            return False
        dst, src = insn.operands
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg) == dst_name
            and src.type == ARM64_OP_IMM
            and src.imm == 0
        )

    def _is_w_ldr_from_x_imm(self, insn, imm):
        if insn.mnemonic != "ldr" or len(insn.operands) < 2:
            return False
        dst, src = insn.operands[:2]
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg).startswith("w")
            and src.type == ARM64_OP_MEM
            and insn.reg_name(src.mem.base).startswith("x")
            and src.mem.disp == imm
        )

    def _is_x_ldr_from_x_imm(self, insn, imm):
        if insn.mnemonic != "ldr" or len(insn.operands) < 2:
            return False
        dst, src = insn.operands[:2]
        return (
            dst.type == ARM64_OP_REG
            and insn.reg_name(dst.reg).startswith("x")
            and src.type == ARM64_OP_MEM
            and insn.reg_name(src.mem.base).startswith("x")
            and src.mem.disp == imm
        )
@@ -1,62 +0,0 @@
"""Mixin: KernelJBPatchThidCrashMixin."""

from .kernel_jb_base import _rd32, _rd64


class KernelJBPatchThidCrashMixin:
    def patch_thid_should_crash(self):
        """Zero out `_thid_should_crash` via the nearby sysctl metadata.

        The raw PCC 26.1 kernels do not provide a usable runtime symbol table,
        so this patch always resolves through the sysctl name string
        `thid_should_crash` and the adjacent `sysctl_oid` data.
        """
        self._log("\n[JB] _thid_should_crash: zero out")

        str_off = self.find_string(b"thid_should_crash")
        if str_off < 0:
            self._log("  [-] string not found")
            return False

        self._log(f"  [*] string at foff 0x{str_off:X}")

        data_const_ranges = [
            (fo, fo + fs)
            for name, _, fo, fs, _ in self.all_segments
            if name in ("__DATA_CONST",) and fs > 0
        ]

        for delta in range(0, 128, 8):
            check = str_off + delta
            if check + 8 > self.size:
                break
            val = _rd64(self.raw, check)
            if val == 0:
                continue
            low32 = val & 0xFFFFFFFF
            if low32 == 0 or low32 >= self.size:
                continue
            target_val = _rd32(self.raw, low32)
            if 1 <= target_val <= 255:
                in_data = any(s <= low32 < e for s, e in data_const_ranges)
                if not in_data:
                    in_data = any(
                        fo <= low32 < fo + fs
                        for name, _, fo, fs, _ in self.all_segments
                        if "DATA" in name and fs > 0
                    )
                if in_data:
                    self._log(
                        f"  [+] variable at foff 0x{low32:X} "
                        f"(value={target_val}, found via sysctl_oid "
                        f"at str+0x{delta:X})"
                    )
                    self.emit(low32, b"\x00\x00\x00\x00", "zero [_thid_should_crash]")
                    return True

        self._log("  [-] variable not found")
        return False


# ══════════════════════════════════════════════════════════════
# Group C: Complex shellcode patches
# ══════════════════════════════════════════════════════════════
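The sysctl walk above relies only on little-endian word reads (`_rd32`/`_rd64`) over the raw image. The scanning idea can be shown standalone on a synthetic blob (the offsets and the 0xFFFFFE... kernel-pointer prefix below are invented for illustration):

```python
import struct

def rd32(buf, off):
    return struct.unpack_from("<I", buf, off)[0]

def rd64(buf, off):
    return struct.unpack_from("<Q", buf, off)[0]

# Synthetic image: a small flag value at file offset 0x10, and a
# pointer-like qword at 0x20 whose low 32 bits are that file offset.
blob = bytearray(0x30)
struct.pack_into("<I", blob, 0x10, 1)
struct.pack_into("<Q", blob, 0x20, 0xFFFFFE0000000000 | 0x10)

low32 = rd64(blob, 0x20) & 0xFFFFFFFF
assert low32 == 0x10
assert 1 <= rd32(blob, low32) <= 255  # plausible small sysctl value
```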
@@ -1,138 +0,0 @@
"""Mixin: KernelJBPatchVmFaultMixin."""

from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_MEM, ARM64_OP_REG

from .kernel_jb_base import NOP


class KernelJBPatchVmFaultMixin:
    def patch_vm_fault_enter_prepare(self):
        """Force the upstream cs_bypass fast-path in _vm_fault_enter_prepare.

        Strict mode:
        - Resolve vm_fault_enter_prepare function via symbol/string anchor.
        - In-function only (no global fallback scan).
        - Require the unique `tbz Wflags,#3 ; mov W?,#0 ; b ...` gate where
          Wflags is loaded from `[fault_info,#0x28]` near the function prologue.

        This intentionally reproduces the upstream PCC 26.1 research-site
        semantics and avoids the old false-positive matcher that drifted onto
        the `pmap_lock_phys_page()` / `pmap_unlock_phys_page()` pair.
        """
        self._log("\n[JB] _vm_fault_enter_prepare: NOP")

        candidate_funcs = []

        foff = self._resolve_symbol("_vm_fault_enter_prepare")
        if foff >= 0:
            candidate_funcs.append(foff)

        str_off = self.find_string(b"vm_fault_enter_prepare")
        if str_off >= 0:
            refs = self.find_string_refs(str_off, *self.kern_text)
            candidate_funcs.extend(
                self.find_function_start(adrp_off)
                for adrp_off, _, _ in refs
                if self.find_function_start(adrp_off) >= 0
            )

        candidate_sites = set()
        for func_start in sorted(set(candidate_funcs)):
            func_end = self._find_func_end(func_start, 0x4000)
            result = self._find_cs_bypass_gate(func_start, func_end)
            if result is not None:
                candidate_sites.add(result)

        if len(candidate_sites) == 1:
            result = next(iter(candidate_sites))
            self.emit(result, NOP, "NOP [_vm_fault_enter_prepare]")
            return True
        if len(candidate_sites) > 1:
            self._log(
                "  [-] ambiguous vm_fault_enter_prepare candidates: "
                + ", ".join(f"0x{x:X}" for x in sorted(candidate_sites))
            )
            return False

        self._log("  [-] patch site not found")
        return False

    def _find_cs_bypass_gate(self, start, end):
        """Find the upstream-style cs_bypass gate in vm_fault_enter_prepare.

        Expected semantic shape:
            ... early in prologue: LDR Wflags, [fault_info_reg, #0x28]
            ... later: TBZ Wflags, #3, validation_path
            MOV Wtainted, #0
            B post_validation_success

        Bit #3 in the packed fault_info flags word is `cs_bypass`.
        NOPing the TBZ forces the fast-path unconditionally, matching the
        upstream PCC 26.1 research patch site.
        """
        flag_regs = set()
        prologue_end = min(end, start + 0x120)
        for off in range(start, prologue_end, 4):
            d0 = self._disas_at(off)
            if not d0:
                continue
            insn = d0[0]
            if insn.mnemonic != "ldr" or len(insn.operands) < 2:
                continue
            dst, src = insn.operands[0], insn.operands[1]
            if dst.type != ARM64_OP_REG or src.type != ARM64_OP_MEM:
                continue
            dst_name = insn.reg_name(dst.reg)
            if not dst_name.startswith("w"):
                continue
            if src.mem.base == 0 or src.mem.disp != 0x28:
                continue
            flag_regs.add(dst.reg)

        if not flag_regs:
            return None

        hits = []
        scan_start = max(start + 0x80, start)
        for off in range(scan_start, end - 0x8, 4):
            d0 = self._disas_at(off)
            if not d0:
                continue
            gate = d0[0]
            if gate.mnemonic != "tbz" or len(gate.operands) != 3:
                continue
            reg_op, bit_op, target_op = gate.operands
            if reg_op.type != ARM64_OP_REG or reg_op.reg not in flag_regs:
                continue
            if bit_op.type != ARM64_OP_IMM or bit_op.imm != 3:
                continue
            if target_op.type != ARM64_OP_IMM:
                continue

            d1 = self._disas_at(off + 4)
            d2 = self._disas_at(off + 8)
            if not d1 or not d2:
                continue
            mov_insn = d1[0]
            branch_insn = d2[0]

            if mov_insn.mnemonic != "mov" or len(mov_insn.operands) != 2:
                continue
            mov_dst, mov_src = mov_insn.operands
            if mov_dst.type != ARM64_OP_REG or mov_src.type != ARM64_OP_IMM:
                continue
            if mov_src.imm != 0:
                continue
            if not mov_insn.reg_name(mov_dst.reg).startswith("w"):
                continue

            if branch_insn.mnemonic != "b" or len(branch_insn.operands) != 1:
                continue
            if branch_insn.operands[0].type != ARM64_OP_IMM:
                continue

            hits.append(off)

        if len(hits) == 1:
            return hits[0]
        return None
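The gate matcher above keys on `tbz Wflags, #3`, a test of bit 3 (`cs_bypass`) in the packed fault-info flags word. The control-flow effect of NOPing that TBZ can be modeled in plain Python (field name per the docstring above; the function and argument names are illustrative only):

```python
CS_BYPASS_BIT = 3

def takes_fast_path(flags_word, tbz_nopped):
    # tbz wFlags, #3, validation_path -> the branch is taken when bit 3
    # is clear, sending the fault into code-signing validation.
    if not tbz_nopped and not (flags_word >> CS_BYPASS_BIT) & 1:
        return False  # fell into the validation path
    return True       # mov wTainted, #0 ; b post_validation_success

assert takes_fast_path(0b1000, tbz_nopped=False)      # cs_bypass set
assert not takes_fast_path(0b0000, tbz_nopped=False)  # bit clear -> validate
assert takes_fast_path(0b0000, tbz_nopped=True)       # NOP forces fast path
```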
@@ -1,116 +0,0 @@
"""Mixin: KernelJBPatchVmProtectMixin."""

from capstone.arm64_const import ARM64_REG_WZR

from .kernel_jb_base import ARM64_OP_IMM, ARM64_OP_REG


class KernelJBPatchVmProtectMixin:
    def patch_vm_map_protect(self):
        """Skip the vm_map_protect write-downgrade gate.

        Source-backed anchor: recover the function from the in-kernel
        `vm_map_protect(` panic string, then find the unique local block matching
        the XNU path that conditionally strips `VM_PROT_WRITE` from a combined
        read+write request before later VM entry updates:

            mov  wMask, #6
            bics wzr, wMask, wProt
            b.ne skip
            tbnz wEntryFlags, #22, skip
            ...
            and  wProt, wProt, #~VM_PROT_WRITE

        Rewriting the `b.ne` to an unconditional `b` preserves the historical
        patch semantics from `patch_fw.py`: always skip the downgrade block.
        """
        self._log("\n[JB] _vm_map_protect: skip write-downgrade gate")

        foff = self._find_func_by_string(b"vm_map_protect(", self.kern_text)
        if foff < 0:
            self._log(" [-] kernel-text 'vm_map_protect(' anchor not found")
            return False

        func_end = self._find_func_end(foff, 0x2000)
        gate = self._find_write_downgrade_gate(foff, func_end)
        if gate is None:
            self._log(" [-] vm_map_protect write-downgrade gate not found")
            return False

        br_off, target = gate
        b_bytes = self._encode_b(br_off, target)
        if not b_bytes:
            self._log(" [-] branch rewrite out of range")
            return False

        self.emit(br_off, b_bytes, f"b #0x{target - br_off:X} [_vm_map_protect]")
        return True

    def _find_write_downgrade_gate(self, start, end):
        hits = []
        for off in range(start, end - 0x20, 4):
            d = self._disas_at(off, 10)
            if len(d) < 5:
                continue

            mov_mask, bics_insn, bne_insn, tbnz_insn = d[0], d[1], d[2], d[3]
            if mov_mask.mnemonic != "mov" or bics_insn.mnemonic != "bics":
                continue
            if bne_insn.mnemonic != "b.ne" or tbnz_insn.mnemonic != "tbnz":
                continue
            if len(mov_mask.operands) != 2 or len(bics_insn.operands) != 3:
                continue
            if mov_mask.operands[0].type != ARM64_OP_REG or mov_mask.operands[1].type != ARM64_OP_IMM:
                continue
            if mov_mask.operands[1].imm != 6:
                continue

            mask_reg = mov_mask.operands[0].reg
            if bics_insn.operands[0].type != ARM64_OP_REG or bics_insn.operands[0].reg != ARM64_REG_WZR:
                continue
            if bics_insn.operands[1].type != ARM64_OP_REG or bics_insn.operands[1].reg != mask_reg:
                continue
            if bics_insn.operands[2].type != ARM64_OP_REG:
                continue
            prot_reg = bics_insn.operands[2].reg

            if len(bne_insn.operands) != 1 or bne_insn.operands[0].type != ARM64_OP_IMM:
                continue
            if len(tbnz_insn.operands) != 3:
                continue
            if tbnz_insn.operands[0].type != ARM64_OP_REG or tbnz_insn.operands[1].type != ARM64_OP_IMM or tbnz_insn.operands[2].type != ARM64_OP_IMM:
                continue

            target = bne_insn.operands[0].imm
            if target <= bne_insn.address or tbnz_insn.operands[2].imm != target:
                continue
            if tbnz_insn.operands[1].imm != 22:
                continue

            and_off = self._find_write_clear_between(tbnz_insn.address + 4, min(target, end), prot_reg)
            if and_off is None:
                continue

            hits.append((bne_insn.address, target))

        if len(hits) == 1:
            return hits[0]
        return None

    def _find_write_clear_between(self, start, end, prot_reg):
        for off in range(start, end, 4):
            d = self._disas_at(off)
            if not d:
                continue
            insn = d[0]
            if insn.mnemonic != "and" or len(insn.operands) != 3:
                continue
            dst, src, imm = insn.operands
            if dst.type != ARM64_OP_REG or src.type != ARM64_OP_REG or imm.type != ARM64_OP_IMM:
                continue
            if dst.reg != prot_reg or src.reg != prot_reg:
                continue
            imm_val = imm.imm & 0xFFFFFFFF
            if (imm_val & 0x7) == 0x3:
                return off
        return None
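The branch rewrite above relies on `self._encode_b`, whose implementation is outside this diff. A minimal standalone sketch, assuming the helper simply emits an A64 unconditional `B` (opcode `0b000101` in the top six bits, word displacement in a 26-bit two's-complement immediate) and signals out-of-range targets with an empty result:

```python
import struct

def encode_b(src_off: int, dst_off: int) -> bytes:
    """Encode an ARM64 unconditional B from src_off to dst_off.

    The displacement is stored as a word offset (byte offset >> 2) in a
    26-bit two's-complement field, giving a +/-128 MiB reach.
    Returns b"" when the target is misaligned or out of range.
    """
    disp = dst_off - src_off
    if disp % 4 or not (-0x8000000 <= disp < 0x8000000):
        return b""
    imm26 = (disp >> 2) & 0x03FFFFFF
    return struct.pack("<I", 0x14000000 | imm26)
```

`encode_b` is a hypothetical name for illustration; the real helper may take virtual addresses rather than file offsets, but the bit layout of the instruction itself is fixed by the A64 ISA.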
@@ -1,115 +0,0 @@
"""Mixin: APFS graft and fsioc helpers."""

from .kernel_asm import MOV_W0_0, _PACIBSP_U32, _rd32


class KernelPatchApfsGraftMixin:
    def _find_validate_root_hash_func(self):
        """Find validate_on_disk_root_hash function via 'authenticate_root_hash' string."""
        str_off = self.find_string(b"authenticate_root_hash")
        if str_off < 0:
            return -1
        refs = self.find_string_refs(str_off, *self.apfs_text)
        if not refs:
            return -1
        return self.find_function_start(refs[0][0])

    def patch_apfs_graft(self):
        """Patch 12: Replace BL to validate_on_disk_root_hash with mov w0,#0.

        Instead of stubbing _apfs_graft at entry, find the specific BL
        that calls the root hash validation and neutralize just that call.
        """
        self._log("\n[12] _apfs_graft: mov w0,#0 (validate_root_hash BL)")

        # Find _apfs_graft function
        exact = self.raw.find(b"\x00apfs_graft\x00")
        if exact < 0:
            self._log(" [-] 'apfs_graft' string not found")
            return False
        str_off = exact + 1

        refs = self.find_string_refs(str_off, *self.apfs_text)
        if not refs:
            self._log(" [-] no code refs")
            return False

        graft_start = self.find_function_start(refs[0][0])
        if graft_start < 0:
            self._log(" [-] _apfs_graft function start not found")
            return False

        # Find validate_on_disk_root_hash function
        vrh_func = self._find_validate_root_hash_func()
        if vrh_func < 0:
            self._log(" [-] validate_on_disk_root_hash not found")
            return False

        # Scan _apfs_graft for BL to validate_on_disk_root_hash
        # Don't stop at ret/retab (early returns) — only stop at PACIBSP (new function)
        for scan in range(graft_start, min(graft_start + 0x2000, self.size), 4):
            if scan > graft_start + 8 and _rd32(self.raw, scan) == _PACIBSP_U32:
                break
            bl_target = self._is_bl(scan)
            if bl_target == vrh_func:
                self.emit(scan, MOV_W0_0, "mov w0,#0 [_apfs_graft]")
                return True

        self._log(" [-] BL to validate_on_disk_root_hash not found in _apfs_graft")
        return False

    def _find_validate_payload_manifest_func(self):
        """Find the AppleImage4 validate_payload_and_manifest function."""
        str_off = self.find_string(b"validate_payload_and_manifest")
        if str_off < 0:
            return -1
        refs = self.find_string_refs(str_off, *self.apfs_text)
        if not refs:
            return -1
        return self.find_function_start(refs[0][0])

    def patch_handle_fsioc_graft(self):
        """Patch 15: Replace BL to validate_payload_and_manifest with mov w0,#0.

        Instead of stubbing _handle_fsioc_graft at entry, find the specific
        BL that calls AppleImage4 validation and neutralize just that call.
        """
        self._log("\n[15] _handle_fsioc_graft: mov w0,#0 (validate BL)")

        exact = self.raw.find(b"\x00handle_fsioc_graft\x00")
        if exact < 0:
            self._log(" [-] 'handle_fsioc_graft' string not found")
            return False
        str_off = exact + 1

        refs = self.find_string_refs(str_off, *self.apfs_text)
        if not refs:
            self._log(" [-] no code refs")
            return False

        fsioc_start = self.find_function_start(refs[0][0])
        if fsioc_start < 0:
            self._log(" [-] function start not found")
            return False

        # Find the validation function
        val_func = self._find_validate_payload_manifest_func()
        if val_func < 0:
            self._log(" [-] validate_payload_and_manifest not found")
            return False

        # Scan _handle_fsioc_graft for BL to validation function
        for scan in range(fsioc_start, min(fsioc_start + 0x400, self.size), 4):
            insns = self._disas_at(scan)
            if not insns:
                continue
            if scan > fsioc_start + 8 and insns[0].mnemonic == "pacibsp":
                break
            bl_target = self._is_bl(scan)
            if bl_target == val_func:
                self.emit(scan, MOV_W0_0, "mov w0,#0 [_handle_fsioc_graft]")
                return True

        self._log(" [-] BL to validate_payload_and_manifest not found")
        return False

    # ── Sandbox MACF hooks ───────────────────────────────────────
@@ -1,248 +0,0 @@
"""Mixin: APFS mount checks patches."""

from capstone.arm64_const import (
    ARM64_OP_IMM,
    ARM64_OP_REG,
    ARM64_REG_W0,
    ARM64_REG_W8,
    ARM64_REG_X0,
)

from .kernel_asm import CMP_X0_X0, MOV_W0_0, NOP, _PACIBSP_U32, _rd32


class KernelPatchApfsMountMixin:
    def patch_apfs_vfsop_mount_cmp(self):
        """Patch 13: cmp x0,x0 in _apfs_vfsop_mount (current_thread == kernel_task check).

        The target CMP follows the pattern: BL (returns current_thread in x0),
        ADRP + LDR + LDR (load kernel_task global), CMP x0, Xm, B.EQ.
        We require x0 as the first CMP operand to distinguish it from other
        CMP Xn,Xm instructions in the same function.
        """
        self._log("\n[13] _apfs_vfsop_mount: cmp x0,x0 (mount rw check)")

        refs_upgrade = self._find_by_string_in_range(
            b"apfs_mount_upgrade_checks\x00",
            self.apfs_text,
            "apfs_mount_upgrade_checks",
        )
        if not refs_upgrade:
            return False

        func_start = self.find_function_start(refs_upgrade[0][0])
        if func_start < 0:
            return False

        # Find BL callers of _apfs_mount_upgrade_checks
        callers = self.bl_callers.get(func_start, [])
        if not callers:
            for off_try in [func_start, func_start + 4]:
                callers = self.bl_callers.get(off_try, [])
                if callers:
                    break

        if not callers:
            self._log(" [-] no BL callers of _apfs_mount_upgrade_checks found")
            for off in range(self.apfs_text[0], self.apfs_text[1], 4):
                bl_target = self._is_bl(off)
                if bl_target >= 0 and func_start <= bl_target <= func_start + 4:
                    callers.append(off)

        for caller_off in callers:
            if not (self.apfs_text[0] <= caller_off < self.apfs_text[1]):
                continue
            # Scan a wider range — the CMP can be 0x800+ bytes before the BL
            caller_func = self.find_function_start(caller_off)
            scan_start = (
                caller_func
                if caller_func >= 0
                else max(caller_off - 0x800, self.apfs_text[0])
            )
            scan_end = min(caller_off + 0x100, self.apfs_text[1])

            for scan in range(scan_start, scan_end, 4):
                dis = self._disas_at(scan)
                if not dis or dis[0].mnemonic != "cmp":
                    continue
                ops = dis[0].operands
                if len(ops) < 2:
                    continue
                # Require CMP Xn, Xm (both register operands)
                if ops[0].type != ARM64_OP_REG or ops[1].type != ARM64_OP_REG:
                    continue
                # Require x0 as first operand (return value from BL)
                if ops[0].reg != ARM64_REG_X0:
                    continue
                # Skip CMP x0, x0 (already patched or trivial)
                if ops[0].reg == ops[1].reg:
                    continue
                self.emit(
                    scan,
                    CMP_X0_X0,
                    f"cmp x0,x0 (was {dis[0].mnemonic} {dis[0].op_str}) "
                    "[_apfs_vfsop_mount]",
                )
                return True

        self._log(" [-] CMP x0,Xm not found near mount_upgrade_checks caller")
        return False

    def patch_apfs_mount_upgrade_checks(self):
        """Patch 14: Replace TBNZ w0,#0xe with mov w0,#0 in _apfs_mount_upgrade_checks.

        Within the function, a BL calls a small flag-reading leaf function,
        then TBNZ w0,#0xe branches to the error path. Replace the TBNZ
        with mov w0,#0 to force the success path.
        """
        self._log("\n[14] _apfs_mount_upgrade_checks: mov w0,#0 (tbnz bypass)")

        refs = self._find_by_string_in_range(
            b"apfs_mount_upgrade_checks\x00",
            self.apfs_text,
            "apfs_mount_upgrade_checks",
        )
        if not refs:
            return False

        func_start = self.find_function_start(refs[0][0])
        if func_start < 0:
            self._log(" [-] function start not found")
            return False

        # Scan for BL followed by TBNZ w0
        # Don't stop at ret/retab (early returns) — only stop at PACIBSP (new function)
        for scan in range(func_start, min(func_start + 0x200, self.size), 4):
            if scan > func_start + 8 and _rd32(self.raw, scan) == _PACIBSP_U32:
                break
            bl_target = self._is_bl(scan)
            if bl_target < 0:
                continue
            # Check if BL target is a small leaf function (< 0x20 bytes, ends with ret)
            is_leaf = False
            for k in range(0, 0x20, 4):
                if bl_target + k >= self.size:
                    break
                dis = self._disas_at(bl_target + k)
                if dis and dis[0].mnemonic == "ret":
                    is_leaf = True
                    break
            if not is_leaf:
                continue
            # Check next instruction is TBNZ w0, #0xe
            next_off = scan + 4
            insns = self._disas_at(next_off)
            if not insns:
                continue
            i = insns[0]
            if i.mnemonic == "tbnz" and len(i.operands) >= 1:
                if (
                    i.operands[0].type == ARM64_OP_REG
                    and i.operands[0].reg == ARM64_REG_W0
                ):
                    self.emit(
                        next_off, MOV_W0_0, "mov w0,#0 [_apfs_mount_upgrade_checks]"
                    )
                    return True

        self._log(" [-] BL + TBNZ w0 pattern not found")
        return False

    def patch_apfs_get_dev_by_role_entitlement(self):
        """Patch 16: bypass APFS get-dev-by-role entitlement gate.

        In handle_get_dev_by_role, APFS checks:
          1) context predicate (BL ... ; CBZ X0, deny)
          2) entitlement check for "com.apple.apfs.get-dev-by-role"
             (BL ... ; CBZ W0, deny)

        mount-phase-1 for /private/preboot and /private/xarts can fail here with:
          "%s:%d: %s This operation needs entitlement" (line 13101)

        We NOP the deny branches so the function continues into normal role lookup.
        """
        self._log("\n[16] handle_get_dev_by_role: bypass entitlement gate")

        str_off = self.find_string(b"com.apple.apfs.get-dev-by-role")
        if str_off < 0:
            self._log(" [-] entitlement string not found")
            return False

        refs = self.find_string_refs(str_off, *self.apfs_text)
        if not refs:
            self._log(" [-] no code refs to entitlement string")
            return False

        def _is_entitlement_error_block(target_off, func_end):
            """Heuristic: target block sets known entitlement-gate line IDs."""
            scan_end = min(target_off + 0x30, func_end)
            for off in range(target_off, scan_end, 4):
                ins = self._disas_at(off)
                if not ins:
                    continue
                i = ins[0]
                # Keep scan local to the direct target block.
                # Crossing a call/unconditional jump usually means a different path.
                if i.mnemonic in ("bl", "b", "ret", "retab"):
                    break
                if i.mnemonic != "mov" or len(i.operands) < 2:
                    continue
                if (
                    i.operands[0].type == ARM64_OP_REG
                    and i.operands[0].reg == ARM64_REG_W8
                    and i.operands[1].type == ARM64_OP_IMM
                    and i.operands[1].imm in (0x332D, 0x333B)
                ):
                    return True
            return False

        for ref in refs:
            ref_off = ref[0]
            func_start = self.find_function_start(ref_off)
            if func_start < 0:
                continue
            func_end = min(func_start + 0x1200, self.size)

            # Hardened logic:
            # patch all CBZ/CBNZ on X0/W0 that jump into entitlement
            # error blocks (line 0x33xx logger paths).
            candidates = []
            for off in range(func_start, func_end, 4):
                ins = self._disas_at(off)
                if not ins:
                    continue
                i = ins[0]
                if i.mnemonic not in ("cbz", "cbnz") or len(i.operands) < 2:
                    continue
                if (
                    i.operands[0].type != ARM64_OP_REG
                    or i.operands[1].type != ARM64_OP_IMM
                ):
                    continue
                if i.operands[0].reg not in (ARM64_REG_W0, ARM64_REG_X0):
                    continue

                target = i.operands[1].imm
                if not (func_start <= target < func_end):
                    continue
                if target <= off:
                    continue
                if not _is_entitlement_error_block(target, func_end):
                    continue

                # Keep deterministic order; avoid duplicate offsets.
                if all(prev_off != off for prev_off, _, _ in candidates):
                    candidates.append((off, i.operands[0].reg, target))

            if candidates:
                for off, reg, target in candidates:
                    gate = "context" if reg == ARM64_REG_X0 else "entitlement"
                    self.emit(
                        off,
                        NOP,
                        f"NOP [handle_get_dev_by_role {gate} check -> 0x{target:X}]",
                    )
                return True

        self._log(" [-] handle_get_dev_by_role entitlement gate pattern not found")
        return False
@@ -1,48 +0,0 @@
"""Mixin: APFS seal broken patch."""

from .kernel_asm import NOP


class KernelPatchApfsSealMixin:
    def patch_apfs_seal_broken(self):
        """Patch 2: NOP the conditional branch leading to 'root volume seal is broken' panic."""
        self._log("\n[2] _authapfs_seal_is_broken: seal broken panic")

        str_off = self.find_string(b"root volume seal is broken")
        if str_off < 0:
            self._log(" [-] string not found")
            return False

        refs = self.find_string_refs(str_off, *self.apfs_text)
        if not refs:
            self._log(" [-] no code refs")
            return False

        for adrp_off, add_off, _ in refs:
            # Find BL _panic after string ref
            bl_off = -1
            for scan in range(add_off, min(add_off + 0x40, self.size), 4):
                bl_target = self._is_bl(scan)
                if bl_target == self.panic_off:
                    bl_off = scan
                    break

            if bl_off < 0:
                continue

            # Search backwards for a conditional branch that jumps INTO the
            # panic path. The error block may set up __FILE__/line args
            # before the string ADRP, so allow target up to 0x40 before it.
            err_lo = adrp_off - 0x40
            for back in range(adrp_off - 4, max(adrp_off - 0x200, 0), -4):
                target, kind = self._decode_branch_target(back)
                if target is not None and err_lo <= target <= bl_off + 4:
                    self.emit(
                        back,
                        NOP,
                        f"NOP {kind} (seal broken) [_authapfs_seal_is_broken]",
                    )
                    return True

        self._log(" [-] could not find conditional branch to NOP")
        return False
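The backward search above leans on `self._decode_branch_target`, whose body is not part of this diff. A self-contained sketch of what such a decoder plausibly does — classify the word as one of the three conditional-branch forms the patchers NOP (`B.cond`, `CBZ`/`CBNZ`, `TBZ`/`TBNZ`) and sign-extend its immediate — under the assumption that it works on raw file offsets:

```python
import struct

def decode_branch_target(raw: bytes, off: int):
    """Return (target_off, kind) for a conditional branch at off, else (None, None).

    B.cond and CBZ/CBNZ carry a 19-bit word offset; TBZ/TBNZ carry 14 bits.
    All are two's-complement and scaled by 4.
    """
    insn = struct.unpack_from("<I", raw, off)[0]
    if (insn & 0xFF000010) == 0x54000000:        # B.cond
        imm, bits, kind = (insn >> 5) & 0x7FFFF, 19, "b.cond"
    elif (insn & 0x7E000000) == 0x34000000:      # CBZ / CBNZ (w or x)
        imm, bits, kind = (insn >> 5) & 0x7FFFF, 19, "cbz/cbnz"
    elif (insn & 0x7E000000) == 0x36000000:      # TBZ / TBNZ
        imm, bits, kind = (insn >> 5) & 0x3FFF, 14, "tbz/tbnz"
    else:
        return None, None
    if imm & (1 << (bits - 1)):                  # sign-extend the word offset
        imm -= 1 << bits
    return off + (imm << 2), kind
```

`decode_branch_target` is a hypothetical reconstruction for illustration; the field positions and masks themselves follow the A64 branch encodings.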
@@ -1,50 +0,0 @@
"""Mixin: APFS root snapshot patch."""

from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_REG

from .kernel_asm import NOP


class KernelPatchApfsSnapshotMixin:
    def patch_apfs_root_snapshot(self):
        """Patch 1: NOP the tbnz w8,#5 that gates sealed-volume root snapshot panic."""
        self._log("\n[1] _apfs_vfsop_mount: root snapshot sealed volume check")

        refs = self._find_by_string_in_range(
            b"Rooting from snapshot with xid", self.apfs_text, "apfs_vfsop_mount log"
        )
        if not refs:
            refs = self._find_by_string_in_range(
                b"Failed to find the root snapshot",
                self.apfs_text,
                "root snapshot panic",
            )
        if not refs:
            return False

        for adrp_off, add_off, _ in refs:
            for scan in range(add_off, min(add_off + 0x200, self.size), 4):
                insns = self._disas_at(scan)
                if not insns:
                    continue
                i = insns[0]
                if i.mnemonic not in ("tbnz", "tbz"):
                    continue
                # Check: tbz/tbnz w8, #5, ...
                ops = i.operands
                if (
                    len(ops) >= 2
                    and ops[0].type == ARM64_OP_REG
                    and ops[1].type == ARM64_OP_IMM
                    and ops[1].imm == 5
                ):
                    self.emit(
                        scan,
                        NOP,
                        f"NOP {i.mnemonic} {i.op_str} "
                        "(sealed vol check) [_apfs_vfsop_mount]",
                    )
                    return True

        self._log(" [-] tbz/tbnz w8,#5 not found near xref")
        return False
@@ -1,46 +0,0 @@
"""Mixin: bsd_init rootvp patch."""

from .kernel_asm import MOV_X0_0, NOP


class KernelPatchBsdInitMixin:
    def patch_bsd_init_rootvp(self):
        """Patch 3: NOP the conditional branch guarding the 'rootvp not authenticated' panic."""
        self._log("\n[3] _bsd_init: rootvp not authenticated panic")

        str_off = self.find_string(b"rootvp not authenticated after mounting")
        if str_off < 0:
            self._log(" [-] string not found")
            return False

        refs = self.find_string_refs(str_off, *self.kern_text)
        if not refs:
            self._log(" [-] no code refs in kernel __text")
            return False

        for adrp_off, add_off, _ in refs:
            # Find the BL _panic after the string ref
            bl_panic_off = -1
            for scan in range(add_off, min(add_off + 0x40, self.size), 4):
                bl_target = self._is_bl(scan)
                if bl_target == self.panic_off:
                    bl_panic_off = scan
                    break

            if bl_panic_off < 0:
                continue

            # Search backwards for a conditional branch whose target is in
            # the error path (the block ending with BL _panic).
            # The error path is typically a few instructions before BL _panic.
            err_lo = bl_panic_off - 0x40 # error block start (generous)
            err_hi = bl_panic_off + 4 # error block end

            for back in range(adrp_off - 4, max(adrp_off - 0x400, 0), -4):
                target, kind = self._decode_branch_target(back)
                if target is not None and err_lo <= target <= err_hi:
                    self.emit(back, NOP, f"NOP {kind} (rootvp auth) [_bsd_init]")
                    return True

        self._log(" [-] conditional branch into panic path not found")
        return False
@@ -1,139 +0,0 @@
"""Mixin: debugger enablement patch."""

from .kernel_asm import MOV_X0_1, RET, _rd32, _rd64

_GPR_X8_NUM = 8


class KernelPatchDebuggerMixin:
    def _is_adrp_x8(self, insn):
        """Fast raw check: ADRP x8, <page>."""
        return (insn & 0x9F000000) == 0x90000000 and (insn & 0x1F) == _GPR_X8_NUM

    def _has_w_ldr_from_x8(self, func_off, max_insns=8):
        """Heuristic: first few instructions include ldr wN, [x8, ...]."""
        for k in range(1, max_insns + 1):
            off = func_off + k * 4
            if off >= self.size:
                break
            dk = self._disas_at(off)
            if (
                dk
                and dk[0].mnemonic == "ldr"
                and dk[0].op_str.startswith("w")
                and "x8" in dk[0].op_str
            ):
                return True
        return False

    def _find_debugger_by_bl_histogram(self, kern_text_start, kern_text_end):
        """Find target from BL call histogram to avoid full __text scan."""
        best_off = -1
        best_callers = 0
        for target_off, callers in self.bl_callers.items():
            n_callers = len(callers)
            # _PE_i_can_has_debugger is broadly used but far from panic-level fanout.
            if n_callers < 50 or n_callers > 250:
                continue
            if target_off < kern_text_start or target_off >= kern_text_end:
                continue
            if target_off + 4 > self.size or (target_off & 3):
                continue

            first_insn = _rd32(self.raw, target_off)
            if not self._is_adrp_x8(first_insn):
                continue

            if target_off >= 4 and not self._is_func_boundary(
                _rd32(self.raw, target_off - 4)
            ):
                continue

            if not self._has_w_ldr_from_x8(target_off):
                continue

            if n_callers > best_callers:
                best_callers = n_callers
                best_off = target_off

        return best_off, best_callers

    def patch_PE_i_can_has_debugger(self):
        """Patches 6-7: mov x0,#1; ret at _PE_i_can_has_debugger."""
        self._log("\n[6-7] _PE_i_can_has_debugger: stub with mov x0,#1; ret")

        # Strategy 1: find symbol name in __LINKEDIT and parse nearby VA
        str_off = self.find_string(b"\x00_PE_i_can_has_debugger\x00")
        if str_off < 0:
            str_off = self.find_string(b"PE_i_can_has_debugger")
        if str_off >= 0:
            linkedit = None
            for name, vmaddr, fileoff, filesize, _ in self.all_segments:
                if name == "__LINKEDIT":
                    linkedit = (fileoff, fileoff + filesize)
            if linkedit and linkedit[0] <= str_off < linkedit[1]:
                name_end = self.raw.find(b"\x00", str_off + 1)
                if name_end > 0:
                    for probe in range(name_end + 1, min(name_end + 32, self.size - 7)):
                        val = _rd64(self.raw, probe)
                        func_foff = val - self.base_va
                        if self.kern_text[0] <= func_foff < self.kern_text[1]:
                            first_insn = _rd32(self.raw, func_foff)
                            if first_insn != 0 and first_insn != 0xD503201F:
                                self.emit(
                                    func_foff,
                                    MOV_X0_1,
                                    "mov x0,#1 [_PE_i_can_has_debugger]",
                                )
                                self.emit(
                                    func_foff + 4, RET, "ret [_PE_i_can_has_debugger]"
                                )
                                return True

        # Strategy 2: pick candidates from BL histogram + lightweight signature checks.
        self._log(" [*] trying code pattern search...")

        # Determine kernel-only __text range from fileset entries if available
        kern_text_start, kern_text_end = self._get_kernel_text_range()

        best_off, best_callers = self._find_debugger_by_bl_histogram(
            kern_text_start, kern_text_end
        )

        if best_off >= 0:
            self._log(
                f" [+] code pattern match at 0x{best_off:X} ({best_callers} callers)"
            )
            self.emit(best_off, MOV_X0_1, "mov x0,#1 [_PE_i_can_has_debugger]")
            self.emit(best_off + 4, RET, "ret [_PE_i_can_has_debugger]")
            return True

        # Strategy 3 (fallback): full-range scan with raw opcode pre-filtering.
        # Keeps cross-variant resilience while avoiding capstone on every address.
        self._log(" [*] trying full scan fallback...")
        best_off = -1
        best_callers = 0
        for off in range(kern_text_start, kern_text_end - 12, 4):
            first_insn = _rd32(self.raw, off)
            if not self._is_adrp_x8(first_insn):
                continue
            if off >= 4 and not self._is_func_boundary(_rd32(self.raw, off - 4)):
                continue
            if not self._has_w_ldr_from_x8(off):
                continue

            n_callers = len(self.bl_callers.get(off, []))
            if 50 <= n_callers <= 250 and n_callers > best_callers:
                best_callers = n_callers
                best_off = off

        if best_off >= 0:
            self._log(
                f" [+] fallback match at 0x{best_off:X} ({best_callers} callers)"
            )
            self.emit(best_off, MOV_X0_1, "mov x0,#1 [_PE_i_can_has_debugger]")
            self.emit(best_off + 4, RET, "ret [_PE_i_can_has_debugger]")
            return True

        self._log(" [-] function not found")
        return False
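The `bl_callers` histogram used above is built from `self._is_bl`, which this diff calls everywhere but never defines. A standalone sketch of the likely decoder, assuming it takes raw bytes and a file offset: `BL` is `0b100101` in the top six bits plus a 26-bit two's-complement word displacement, so the call target is the offset plus the sign-extended immediate times four.

```python
import struct

def is_bl(raw: bytes, off: int) -> int:
    """Return the offset a BL at `off` calls, or -1 if not a BL."""
    if off < 0 or off + 4 > len(raw):
        return -1
    insn = struct.unpack_from("<I", raw, off)[0]
    if (insn & 0xFC000000) != 0x94000000:  # top 6 bits must be 0b100101
        return -1
    imm26 = insn & 0x03FFFFFF
    if imm26 & (1 << 25):                  # sign-extend the 26-bit field
        imm26 -= 1 << 26
    return off + (imm26 << 2)
```

`is_bl` is a hypothetical name mirroring the real helper; inverting every BL in `__text` into a target-to-callers map is what lets the histogram bound `_PE_i_can_has_debugger`'s fanout without disassembling each instruction.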
@@ -1,62 +0,0 @@
"""Mixin: dyld policy patch."""

from .kernel_asm import MOV_W0_1


class KernelPatchDyldPolicyMixin:
    def patch_check_dyld_policy(self):
        """Patches 10-11: Replace two BL calls in _check_dyld_policy_internal with mov w0,#1.

        The function is found via its reference to the Swift Playgrounds
        entitlement string. The two BLs immediately preceding that string
        reference (each followed by a conditional branch on w0) are patched.
        """
        self._log("\n[10-11] _check_dyld_policy_internal: mov w0,#1 (two BLs)")

        # Anchor: entitlement string referenced from within the function
        str_off = self.find_string(
            b"com.apple.developer.swift-playgrounds-app.development-build"
        )
        if str_off < 0:
            self._log(" [-] swift-playgrounds entitlement string not found")
            return False

        refs = self.find_string_refs(str_off, *self.amfi_text)
        if not refs:
            refs = self.find_string_refs(str_off)
        if not refs:
            self._log(" [-] no code refs in AMFI")
            return False

        for adrp_off, add_off, _ in refs:
            # Walk backward from the ADRP, looking for BL + conditional-on-w0 pairs
            bls_with_cond = [] # [(bl_off, bl_target), ...]
            for back in range(adrp_off - 4, max(adrp_off - 80, 0), -4):
                bl_target = self._is_bl(back)
                if bl_target < 0:
                    continue
                if self._is_cond_branch_w0(back + 4):
                    bls_with_cond.append((back, bl_target))

            if len(bls_with_cond) >= 2:
                bl2_off, bl2_tgt = bls_with_cond[0] # closer to ADRP
                bl1_off, bl1_tgt = bls_with_cond[1] # farther from ADRP
                # The two BLs must call DIFFERENT functions — this
                # distinguishes _check_dyld_policy_internal from other
                # functions that repeat calls to the same helper.
                if bl1_tgt == bl2_tgt:
                    continue
                self.emit(
                    bl1_off,
                    MOV_W0_1,
                    "mov w0,#1 (was BL) [_check_dyld_policy_internal @1]",
                )
                self.emit(
                    bl2_off,
                    MOV_W0_1,
                    "mov w0,#1 (was BL) [_check_dyld_policy_internal @2]",
                )
                return True

        self._log(" [-] _check_dyld_policy_internal BL pair not found")
        return False
@@ -1,38 +0,0 @@
"""Mixin: launch constraints patch."""

from .kernel_asm import MOV_W0_0, RET


class KernelPatchLaunchConstraintsMixin:
    def patch_proc_check_launch_constraints(self):
        """Patches 4-5: mov w0,#0; ret at _proc_check_launch_constraints start.

        The AMFI function does NOT reference the symbol name string
        '_proc_check_launch_constraints' — only the kernel wrapper does.
        Instead, use 'AMFI: Validation Category info' which IS referenced
        from the actual AMFI function.
        """
        self._log("\n[4-5] _proc_check_launch_constraints: stub with mov w0,#0; ret")

        str_off = self.find_string(b"AMFI: Validation Category info")
        if str_off < 0:
            self._log(" [-] 'AMFI: Validation Category info' string not found")
            return False

        refs = self.find_string_refs(str_off, *self.amfi_text)
        if not refs:
            self._log(" [-] no code refs in AMFI")
            return False

        for adrp_off, add_off, _ in refs:
            func_start = self.find_function_start(adrp_off)
            if func_start < 0:
                continue
            self.emit(
                func_start, MOV_W0_0, "mov w0,#0 [_proc_check_launch_constraints]"
            )
            self.emit(func_start + 4, RET, "ret [_proc_check_launch_constraints]")
            return True

        self._log(" [-] function start not found")
        return False
@@ -1,122 +0,0 @@
|
||||
"""Mixin: post-validation patches."""
|
||||
|
||||
from capstone.arm64_const import ARM64_OP_IMM, ARM64_OP_REG, ARM64_REG_W0
|
||||
|
||||
from .kernel_asm import CMP_W0_W0, NOP, _PACIBSP_U32, _rd32
|
||||
|
||||
|
||||
class KernelPatchPostValidationMixin:
|
||||
def patch_post_validation_nop(self):
|
||||
"""Patch 8: NOP the TBNZ after TXM CodeSignature error logging.
|
||||
|
||||
The 'TXM [Error]: CodeSignature: selector: ...' string is followed
|
||||
by a BL (printf/log), then a TBNZ that branches to an additional
|
||||
        validation path. NOP the TBNZ to skip it.
        """
        self._log("\n[8] post-validation NOP (txm-related)")

        str_off = self.find_string(b"TXM [Error]: CodeSignature")
        if str_off < 0:
            self._log(" [-] 'TXM [Error]: CodeSignature' string not found")
            return False

        refs = self.find_string_refs(str_off, *self.kern_text)
        if not refs:
            refs = self.find_string_refs(str_off)
        if not refs:
            self._log(" [-] no code refs")
            return False

        for adrp_off, add_off, _ in refs:
            # Scan forward past the BL (printf/log) for a TBNZ
            for scan in range(add_off, min(add_off + 0x40, self.size), 4):
                insns = self._disas_at(scan)
                if not insns:
                    continue
                if insns[0].mnemonic == "tbnz":
                    self.emit(
                        scan,
                        NOP,
                        f"NOP {insns[0].mnemonic} {insns[0].op_str} "
                        "[txm post-validation]",
                    )
                    return True

        self._log(" [-] TBNZ not found after TXM error string ref")
        return False

    def patch_post_validation_cmp(self):
        """Patch 9: cmp w0,w0 in postValidation (AMFI code signing).

        The 'AMFI: code signature validation failed' string is in the CALLER
        function, not in postValidation itself. We find the caller, collect
        its BL targets, then look inside each target for CMP W0, #imm + B.NE.
        """
        self._log("\n[9] postValidation: cmp w0,w0 (AMFI code signing)")

        str_off = self.find_string(b"AMFI: code signature validation failed")
        if str_off < 0:
            self._log(" [-] string not found")
            return False

        refs = self.find_string_refs(str_off, *self.amfi_text)
        if not refs:
            refs = self.find_string_refs(str_off)
        if not refs:
            self._log(" [-] no code refs")
            return False

        caller_start = self.find_function_start(refs[0][0])
        if caller_start < 0:
            self._log(" [-] caller function start not found")
            return False

        # Collect unique BL targets from the caller function.
        # Only stop at PACIBSP (new function), not at ret/retab (early returns).
        bl_targets = set()
        for scan in range(caller_start, min(caller_start + 0x2000, self.size), 4):
            if scan > caller_start + 8 and _rd32(self.raw, scan) == _PACIBSP_U32:
                break
            target = self._is_bl(scan)
            if target >= 0:
                bl_targets.add(target)

        # In each BL target in AMFI, look for: BL ... ; CMP W0, #imm ; B.NE
        # The CMP must check W0 (the return value of the preceding BL call).
        for target in sorted(bl_targets):
            if not (self.amfi_text[0] <= target < self.amfi_text[1]):
                continue
            for off in range(target, min(target + 0x200, self.size), 4):
                if off > target + 8 and _rd32(self.raw, off) == _PACIBSP_U32:
                    break
                dis = self._disas_at(off, 2)
                if len(dis) < 2:
                    continue
                i0, i1 = dis[0], dis[1]
                if i0.mnemonic != "cmp" or i1.mnemonic != "b.ne":
                    continue
                # Must be CMP W0, #imm (first operand = w0, second = immediate)
                ops = i0.operands
                if len(ops) < 2:
                    continue
                if ops[0].type != ARM64_OP_REG or ops[0].reg != ARM64_REG_W0:
                    continue
                if ops[1].type != ARM64_OP_IMM:
                    continue
                # Must be preceded by a BL within 2 instructions
                has_bl = False
                for gap in (4, 8):
                    if self._is_bl(off - gap) >= 0:
                        has_bl = True
                        break
                if not has_bl:
                    continue
                self.emit(
                    off,
                    CMP_W0_W0,
                    f"cmp w0,w0 (was {i0.mnemonic} {i0.op_str}) [postValidation]",
                )
                return True

        self._log(" [-] CMP+B.NE pattern not found in caller's BL targets")
        return False
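The patch above rewrites a `cmp w0, #imm` that guards the AMFI failure path into `cmp w0, w0`, so the following `b.ne` can never be taken. A minimal Python sketch (not from the repo) of the flag semantics that make this work:

```python
def z_flag_after_cmp(a, b):
    # AArch64 CMP is SUBS with the result discarded; the Z flag is set
    # exactly when the 32-bit subtraction a - b yields zero.
    return ((a - b) & 0xFFFFFFFF) == 0

# Original: cmp w0, #imm ; b.ne <fail>  -> branches whenever w0 != imm.
# Patched:  cmp w0, w0   ; b.ne <fail>  -> Z is always set, b.ne never taken.
for w0 in (0, 1, 0x1337, 0xFFFFFFFF):
    assert z_flag_after_cmp(w0, w0)   # comparing a register with itself
assert not z_flag_after_cmp(5, 7)     # a genuine mismatch would still branch
```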
@@ -1,46 +0,0 @@
"""Mixin: sandbox hook patches."""

from .kernel_asm import MOV_X0_0, RET


class KernelPatchSandboxMixin:
    def patch_sandbox_hooks(self):
        """Patches 17-26: Stub Sandbox MACF hooks with mov x0,#0; ret.

        Uses mac_policy_ops struct indices from XNU source (xnu-11215+).
        """
        self._log("\n[17-26] Sandbox MACF hooks")

        ops_table = self._find_sandbox_ops_table_via_conf()
        if ops_table is None:
            return False

        HOOK_INDICES = {
            "file_check_mmap": 36,
            "mount_check_mount": 87,
            "mount_check_remount": 88,
            "mount_check_umount": 91,
            "vnode_check_rename": 120,
        }

        sb_start, sb_end = self.sandbox_text
        patched_count = 0

        for hook_name, idx in HOOK_INDICES.items():
            func_off = self._read_ops_entry(ops_table, idx)
            if func_off is None or func_off <= 0:
                self._log(f" [-] ops[{idx}] {hook_name}: NULL or invalid")
                continue
            if not (sb_start <= func_off < sb_end):
                self._log(
                    f" [-] ops[{idx}] {hook_name}: foff 0x{func_off:X} "
                    f"outside Sandbox (0x{sb_start:X}-0x{sb_end:X})"
                )
                continue

            self.emit(func_off, MOV_X0_0, f"mov x0,#0 [_hook_{hook_name}]")
            self.emit(func_off + 4, RET, f"ret [_hook_{hook_name}]")
            self._log(f" [+] ops[{idx}] {hook_name} at foff 0x{func_off:X}")
            patched_count += 1

        return patched_count > 0
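Each stubbed hook gets the standard two-instruction "always allow" prologue. A small sketch (assumed encodings, not repo code) of what `MOV_X0_0` + `RET` write into the image:

```python
import struct

MOV_X0_0 = struct.pack("<I", 0xD2800000)  # movz x0, #0  (little-endian)
RET = struct.pack("<I", 0xD65F03C0)       # ret

def stub_hook(image: bytearray, func_off: int) -> None:
    # Overwrite the first two instructions of a MACF hook so it returns
    # 0 (allow) immediately, mirroring what patch_sandbox_hooks emits.
    image[func_off:func_off + 4] = MOV_X0_0
    image[func_off + 4:func_off + 8] = RET

img = bytearray(16)
stub_hook(img, 4)
assert bytes(img[4:12]) == b"\x00\x00\x80\xd2\xc0\x03\x5f\xd6"
```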
@@ -1,195 +0,0 @@
#!/usr/bin/env python3
"""
txm_patcher.py — Dynamic patcher for TXM (Trusted Execution Monitor) images.

Finds the trustcache hash lookup (binary search) in the AMFI certificate
verification function and bypasses it. NO hardcoded offsets.

Dependencies: keystone-engine, capstone
"""

import struct
from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE
from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN

# ── Assembly / disassembly singletons ──────────────────────────
_ks = Ks(KS_ARCH_ARM64, KS_MODE_LE)
_cs = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
_cs.detail = True
_cs.skipdata = True


def _asm(s):
    enc, _ = _ks.asm(s)
    if not enc:
        raise RuntimeError(f"asm failed: {s}")
    return bytes(enc)


MOV_X0_0 = _asm("mov x0, #0")


def _disasm_one(data, off):
    insns = list(_cs.disasm(data[off : off + 4], off))
    return insns[0] if insns else None


def _find_asm_pattern(data, asm_str):
    enc, _ = _ks.asm(asm_str)
    pattern = bytes(enc)
    results = []
    off = 0
    while True:
        idx = data.find(pattern, off)
        if idx < 0:
            break
        results.append(idx)
        off = idx + 4
    return results


# ── TXMPatcher ─────────────────────────────────────────────────


class TXMPatcher:
    """Dynamic patcher for TXM images.

    Patches:
      1. Trustcache binary-search BL → mov x0, #0
         (in the AMFI cert verification function identified by the
         unique constant 0x20446 loaded into w19)
    """

    def __init__(self, data, verbose=True):
        self.data = data
        self.raw = bytes(data)
        self.size = len(data)
        self.verbose = verbose
        self.patches = []

    def _log(self, msg):
        if self.verbose:
            print(msg)

    def emit(self, off, patch_bytes, desc):
        self.patches.append((off, patch_bytes, desc))
        if self.verbose:
            before_insns = list(_cs.disasm(self.raw[off : off + 4], off))
            after_insns = list(_cs.disasm(patch_bytes, off))
            b_str = (
                f"{before_insns[0].mnemonic} {before_insns[0].op_str}"
                if before_insns
                else "???"
            )
            a_str = (
                f"{after_insns[0].mnemonic} {after_insns[0].op_str}"
                if after_insns
                else "???"
            )
            print(f" 0x{off:06X}: {b_str} → {a_str} [{desc}]")

    def apply(self):
        self.find_all()
        for off, pb, _ in self.patches:
            self.data[off : off + len(pb)] = pb
        if self.verbose and self.patches:
            self._log(f"\n [{len(self.patches)} TXM patches applied]")
        return len(self.patches)

    def find_all(self):
        self.patches = []
        self.patch_trustcache_bypass()
        return self.patches

    # ═══════════════════════════════════════════════════════════
    # Trustcache bypass
    #
    # The AMFI cert verification function has a unique constant:
    #   mov w19, #0x2446; movk w19, #2, lsl #16   (= 0x20446)
    #
    # Within that function, a binary search calls a hash-compare
    # function with SHA-1 size:
    #   mov w2, #0x14; bl <hash_cmp>; cbz w0, <match>
    # followed by:
    #   tbnz w0, #0x1f, <lower_half>   (sign bit = search direction)
    #
    # Patch: bl <hash_cmp> → mov x0, #0
    # This makes cbz always branch to <match>, bypassing the
    # trustcache lookup entirely.
    # ═══════════════════════════════════════════════════════════
    def patch_trustcache_bypass(self):
        # Step 1: Find the unique function marker (mov w19, #0x2446)
        locs = _find_asm_pattern(self.raw, "mov w19, #0x2446")
        if len(locs) != 1:
            self._log(f" [-] TXM: expected 1 'mov w19, #0x2446', found {len(locs)}")
            return
        marker_off = locs[0]

        # Step 2: Find the containing function (scan back for PACIBSP)
        pacibsp = _asm("hint #27")
        func_start = None
        for scan in range(marker_off & ~3, max(0, marker_off - 0x200), -4):
            if self.raw[scan : scan + 4] == pacibsp:
                func_start = scan
                break
        if func_start is None:
            self._log(" [-] TXM: function start not found")
            return

        # Step 3: Within the function, find mov w2, #0x14; bl; cbz w0; tbnz w0, #0x1f
        func_end = min(func_start + 0x2000, self.size)
        insns = list(_cs.disasm(self.raw[func_start:func_end], func_start))

        for i, ins in enumerate(insns):
            if not (ins.mnemonic == "mov" and ins.op_str == "w2, #0x14"):
                continue
            if i + 3 >= len(insns):
                continue
            bl_ins = insns[i + 1]
            cbz_ins = insns[i + 2]
            tbnz_ins = insns[i + 3]
            if (
                bl_ins.mnemonic == "bl"
                and cbz_ins.mnemonic == "cbz"
                and "w0" in cbz_ins.op_str
                and tbnz_ins.mnemonic in ("tbnz", "tbz")
                and "#0x1f" in tbnz_ins.op_str
            ):
                self.emit(
                    bl_ins.address, MOV_X0_0, "trustcache bypass: bl → mov x0, #0"
                )
                return

        self._log(" [-] TXM: binary search pattern not found in function")


# ── CLI entry point ────────────────────────────────────────────
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Dynamic TXM patcher")
    parser.add_argument("txm", help="Path to raw or IM4P TXM image")
    parser.add_argument("-q", "--quiet", action="store_true")
    args = parser.parse_args()

    print(f"Loading {args.txm}...")
    file_raw = open(args.txm, "rb").read()

    try:
        from pyimg4 import IM4P

        im4p = IM4P(file_raw)
        if im4p.payload.compression:
            im4p.payload.decompress()
        payload = im4p.payload.data
        print(f" format: IM4P (fourcc={im4p.fourcc})")
    except Exception:
        payload = file_raw
        print(" format: raw")

    data = bytearray(payload)
    print(f" size: {len(data)} bytes ({len(data) / 1024:.1f} KB)\n")

    patcher = TXMPatcher(data, verbose=not args.quiet)
    n = patcher.apply()
    print(f"\n {n} patches applied.")
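The comment block in the deleted patcher explains why forcing `hash_cmp` to return 0 defeats the whole lookup: the binary search treats a zero compare result as "found", so the very first probe matches. A minimal Python model of that control flow (an illustration, not the TXM code):

```python
def trustcache_lookup(hashes, needle, cmp_fn):
    # Model of the binary search in the TXM AMFI verifier: cmp_fn's sign
    # picks the half to recurse into, and 0 means "hash found".
    lo, hi = 0, len(hashes) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        r = cmp_fn(needle, hashes[mid])
        if r == 0:
            return mid        # cbz w0, <match>
        if r < 0:
            hi = mid - 1      # tbnz w0, #0x1f -> lower half
        else:
            lo = mid + 1
    return -1

hashes = list(range(0, 100, 7))
real_cmp = lambda a, b: (a > b) - (a < b)
assert trustcache_lookup(hashes, 999, real_cmp) == -1      # not in the cache
patched_cmp = lambda a, b: 0                               # bl -> mov x0, #0
assert trustcache_lookup(hashes, 999, patched_cmp) != -1   # first probe "matches"
```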
@@ -1,562 +0,0 @@
#!/usr/bin/env python3
"""
txm_patcher.py — Dynamic patcher for TXM (Trusted Execution Monitor) images.

Finds TXM patch sites dynamically and applies trustcache/entitlement/developer
mode bypasses. NO hardcoded offsets.

Dependencies: keystone-engine, capstone
"""

import struct
from keystone import Ks, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN as KS_MODE_LE
from capstone import Cs, CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN

# ── Assembly / disassembly singletons ──────────────────────────
_ks = Ks(KS_ARCH_ARM64, KS_MODE_LE)
_cs = Cs(CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN)
_cs.detail = True
_cs.skipdata = True


def _asm(s):
    enc, _ = _ks.asm(s)
    if not enc:
        raise RuntimeError(f"asm failed: {s}")
    return bytes(enc)


MOV_X0_0 = _asm("mov x0, #0")
MOV_X0_1 = _asm("mov x0, #1")
MOV_W0_1 = _asm("mov w0, #1")
MOV_X0_X20 = _asm("mov x0, x20")
STRB_W0_X20_30 = _asm("strb w0, [x20, #0x30]")
NOP = _asm("nop")
PACIBSP = _asm("hint #27")


def _disasm_one(data, off):
    insns = list(_cs.disasm(data[off : off + 4], off))
    return insns[0] if insns else None


def _find_asm_pattern(data, asm_str):
    enc, _ = _ks.asm(asm_str)
    pattern = bytes(enc)
    results = []
    off = 0
    while True:
        idx = data.find(pattern, off)
        if idx < 0:
            break
        results.append(idx)
        off = idx + 4
    return results


# ── TXMPatcher ─────────────────────────────────────────────────


class TXMPatcher:
    """Dev/JB dynamic patcher for TXM images.

    Patches (base trustcache bypass is in txm.py):
      1. Selector24: force PASS return (mov w0, #0xa1 + b epilogue)
      2. get-task-allow entitlement check BL → mov x0, #1
      3. Selector42|29: shellcode hook + manifest flag force
      4. debugger entitlement check BL → mov w0, #1
      5. developer-mode guard branch → nop
    """

    def __init__(self, data, verbose=True):
        self.data = data
        self.raw = bytes(data)
        self.size = len(data)
        self.verbose = verbose
        self.patches = []

    def _log(self, msg):
        if self.verbose:
            print(msg)

    def emit(self, off, patch_bytes, desc):
        self.patches.append((off, patch_bytes, desc))
        if self.verbose:
            before_insns = list(_cs.disasm(self.raw[off : off + 4], off))
            after_insns = list(_cs.disasm(patch_bytes, off))
            b_str = (
                f"{before_insns[0].mnemonic} {before_insns[0].op_str}"
                if before_insns
                else "???"
            )
            a_str = (
                f"{after_insns[0].mnemonic} {after_insns[0].op_str}"
                if after_insns
                else "???"
            )
            print(f" 0x{off:06X}: {b_str} → {a_str} [{desc}]")

    def apply(self):
        self.find_all()
        for off, pb, _ in self.patches:
            self.data[off : off + len(pb)] = pb
        if self.verbose and self.patches:
            self._log(f"\n [{len(self.patches)} TXM patches applied]")
        return len(self.patches)

    def find_all(self):
        self.patches = []
        self.patch_selector24_force_pass()
        self.patch_get_task_allow_force_true()
        self.patch_selector42_29_shellcode()
        self.patch_debugger_entitlement_force_true()
        self.patch_developer_mode_bypass()
        return self.patches

    # ── helpers ──────────────────────────────────────────────────
    def _asm_at(self, asm_line, addr):
        enc, _ = _ks.asm(asm_line, addr=addr)
        if not enc:
            raise RuntimeError(f"asm failed at 0x{addr:X}: {asm_line}")
        return bytes(enc)

    def _find_func_start(self, off, back=0x1000):
        start = max(0, off - back)
        for scan in range(off & ~3, start - 1, -4):
            if self.raw[scan : scan + 4] == PACIBSP:
                return scan
        return None

    def _find_refs_to_offset(self, target_off):
        refs = []
        for off in range(0, self.size - 8, 4):
            a = _disasm_one(self.raw, off)
            b = _disasm_one(self.raw, off + 4)
            if not a or not b:
                continue
            if a.mnemonic != "adrp" or b.mnemonic != "add":
                continue
            if len(a.operands) < 2 or len(b.operands) < 3:
                continue
            if a.operands[0].reg != b.operands[1].reg:
                continue
            if a.operands[1].imm + b.operands[2].imm == target_off:
                refs.append((off, off + 4))
        return refs

    def _find_string_refs(self, needle):
        if isinstance(needle, str):
            needle = needle.encode()
        refs = []
        seen = set()
        off = 0
        while True:
            s_off = self.raw.find(needle, off)
            if s_off < 0:
                break
            off = s_off + 1
            for r in self._find_refs_to_offset(s_off):
                if r[0] not in seen:
                    seen.add(r[0])
                    refs.append((s_off, r[0], r[1]))
        return refs

    def _find_debugger_gate_func_start(self):
        refs = self._find_string_refs(b"com.apple.private.cs.debugger")
        starts = set()
        for _, _, add_off in refs:
            for scan in range(add_off, min(add_off + 0x20, self.size - 8), 4):
                i = _disasm_one(self.raw, scan)
                n = _disasm_one(self.raw, scan + 4)
                p1 = _disasm_one(self.raw, scan - 4) if scan >= 4 else None
                p2 = _disasm_one(self.raw, scan - 8) if scan >= 8 else None
                if not all((i, n, p1, p2)):
                    continue
                if not (
                    i.mnemonic == "bl"
                    and n.mnemonic == "tbnz"
                    and n.op_str.startswith("w0, #0,")
                    and p1.mnemonic == "mov"
                    and p1.op_str == "x2, #0"
                    and p2.mnemonic == "mov"
                    and p2.op_str == "x0, #0"
                ):
                    continue
                fs = self._find_func_start(scan)
                if fs is not None:
                    starts.add(fs)
        if len(starts) != 1:
            return None
        return next(iter(starts))

    def _find_udf_cave(self, min_insns=6, near_off=None, max_distance=0x80000):
        need = min_insns * 4
        start = 0 if near_off is None else max(0, near_off - 0x1000)
        end = self.size if near_off is None else min(self.size, near_off + max_distance)
        best = None
        best_dist = None
        off = start
        while off < end:
            run = off
            while run < end and self.raw[run : run + 4] == b"\x00\x00\x00\x00":
                run += 4
            if run - off >= need:
                prev = _disasm_one(self.raw, off - 4) if off >= 4 else None
                if prev and prev.mnemonic in (
                    "b",
                    "b.eq",
                    "b.ne",
                    "b.lo",
                    "b.hs",
                    "cbz",
                    "cbnz",
                    "tbz",
                    "tbnz",
                ):
                    # Leave a 2-word safety gap after the preceding branch.
                    padded = off + 8
                    if padded + need <= run:
                        return padded
                return off
            if near_off is not None and _disasm_one(self.raw, off):
                dist = abs(off - near_off)
                if best is None or dist < best_dist:
                    best = off
                    best_dist = dist
            off = run + 4 if run > off else off + 4
        return best
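`_find_refs_to_offset` resolves string references by pairing an `adrp` with the `add` on the same register and summing their immediates, which works here because the image is scanned at file offsets with an effective load base of zero. A sketch (assumed encoding, not repo code) of how ADRP's page-relative immediate is decoded:

```python
def adrp_target(insn_word: int, pc: int) -> int:
    # ADRP computes (pc & ~0xFFF) + (simm21 << 12), where simm21 is
    # immhi:immlo from the encoding, sign-extended, in 4 KiB page units.
    immlo = (insn_word >> 29) & 0x3
    immhi = (insn_word >> 5) & 0x7FFFF
    imm = (immhi << 2) | immlo
    if imm & (1 << 20):        # sign-extend the 21-bit page count
        imm -= 1 << 21
    return (pc & ~0xFFF) + (imm << 12)

# 0xB0000008 encodes `adrp x8, #0x1000` (page +1); a following
# `add x8, x8, #0x2a` would then point at file offset 0x102a.
assert adrp_target(0xB0000008, pc=0) == 0x1000
assert adrp_target(0xB0000008, pc=0x24) == 0x1000  # pc is page-aligned first
```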

    # ═══════════════════════════════════════════════════════════
    # Trustcache bypass
    #
    # The AMFI cert verification function has a unique constant:
    #   mov w19, #0x2446; movk w19, #2, lsl #16   (= 0x20446)
    #
    # Within that function, a binary search calls a hash-compare
    # function with SHA-1 size:
    #   mov w2, #0x14; bl <hash_cmp>; cbz w0, <match>
    # followed by:
    #   tbnz w0, #0x1f, <lower_half>   (sign bit = search direction)
    #
    # Patch: bl <hash_cmp> → mov x0, #0
    # This makes cbz always branch to <match>, bypassing the
    # trustcache lookup entirely.
    # ═══════════════════════════════════════════════════════════
    def patch_trustcache_bypass(self):
        # Step 1: Find the unique function marker (mov w19, #0x2446)
        locs = _find_asm_pattern(self.raw, "mov w19, #0x2446")
        if len(locs) != 1:
            self._log(f" [-] TXM: expected 1 'mov w19, #0x2446', found {len(locs)}")
            return
        marker_off = locs[0]

        # Step 2: Find the containing function (scan back for PACIBSP)
        pacibsp = _asm("hint #27")
        func_start = None
        for scan in range(marker_off & ~3, max(0, marker_off - 0x200), -4):
            if self.raw[scan : scan + 4] == pacibsp:
                func_start = scan
                break
        if func_start is None:
            self._log(" [-] TXM: function start not found")
            return

        # Step 3: Within the function, find mov w2, #0x14; bl; cbz w0; tbnz w0, #0x1f
        func_end = min(func_start + 0x2000, self.size)
        insns = list(_cs.disasm(self.raw[func_start:func_end], func_start))

        for i, ins in enumerate(insns):
            if not (ins.mnemonic == "mov" and ins.op_str == "w2, #0x14"):
                continue
            if i + 3 >= len(insns):
                continue
            bl_ins = insns[i + 1]
            cbz_ins = insns[i + 2]
            tbnz_ins = insns[i + 3]
            if (
                bl_ins.mnemonic == "bl"
                and cbz_ins.mnemonic == "cbz"
                and "w0" in cbz_ins.op_str
                and tbnz_ins.mnemonic in ("tbnz", "tbz")
                and "#0x1f" in tbnz_ins.op_str
            ):
                self.emit(
                    bl_ins.address, MOV_X0_0, "trustcache bypass: bl → mov x0, #0"
                )
                return

        self._log(" [-] TXM: binary search pattern not found in function")

    def patch_selector24_force_pass(self):
        """Force selector24 handler to return 0xA1 (PASS) immediately.

        Return code semantics (checked by caller via `tst w0, #0xff00`):
          - 0xA1    (byte 1 = 0x00) → PASS
          - 0x130A1 (byte 1 = 0x30) → FAIL
          - 0x22DA1 (byte 1 = 0x2D) → FAIL

        We insert `mov w0, #0xa1 ; b <epilogue>` right after the prologue,
        skipping all validation logic while preserving the stack frame for
        clean register restore via the existing epilogue.
        """
        for off in range(0, self.size - 4, 4):
            ins = _disasm_one(self.raw, off)
            if not (ins and ins.mnemonic == "mov" and ins.op_str == "w0, #0xa1"):
                continue

            func_start = self._find_func_start(off)
            if func_start is None:
                continue

            # Verify this is the selector24 handler by checking for the
            # characteristic pattern: LDR X1,[Xn,#0x38] / ADD X2,... / BL / LDP
            for scan in range(func_start, off, 4):
                i0 = _disasm_one(self.raw, scan)
                i1 = _disasm_one(self.raw, scan + 4)
                i2 = _disasm_one(self.raw, scan + 8)
                i3 = _disasm_one(self.raw, scan + 12)
                if not all((i0, i1, i2, i3)):
                    continue
                if not (
                    i0.mnemonic == "ldr"
                    and "x1," in i0.op_str
                    and "#0x38]" in i0.op_str
                ):
                    continue
                if not (i1.mnemonic == "add" and i1.op_str.startswith("x2,")):
                    continue
                if i2.mnemonic != "bl":
                    continue
                if i3.mnemonic != "ldp":
                    continue

                # Find prologue end: scan for `add x29, sp, #imm`
                body_start = None
                for p in range(func_start + 4, func_start + 0x30, 4):
                    pi = _disasm_one(self.raw, p)
                    if pi and pi.mnemonic == "add" and pi.op_str.startswith("x29, sp,"):
                        body_start = p + 4
                        break
                if body_start is None:
                    self._log(" [-] TXM: selector24 prologue end not found")
                    return False

                # Find epilogue: scan for retab/ret, walk back to first ldp x29
                epilogue = None
                for r in range(off, min(off + 0x200, self.size), 4):
                    ri = _disasm_one(self.raw, r)
                    if ri and ri.mnemonic in ("retab", "ret"):
                        for e in range(r - 4, max(r - 0x20, func_start), -4):
                            ei = _disasm_one(self.raw, e)
                            if ei and ei.mnemonic == "ldp" and "x29, x30" in ei.op_str:
                                epilogue = e
                                break
                        break
                if epilogue is None:
                    self._log(" [-] TXM: selector24 epilogue not found")
                    return False

                self.emit(
                    body_start,
                    _asm("mov w0, #0xa1"),
                    "selector24 bypass: mov w0, #0xa1 (PASS)",
                )
                self.emit(
                    body_start + 4,
                    self._asm_at(f"b #0x{epilogue:x}", body_start + 4),
                    "selector24 bypass: b epilogue",
                )
                return True

        self._log(" [-] TXM: selector24 handler not found")
        return False
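The return-code semantics in the selector24 docstring reduce to a single mask test. A tiny Python check of the `tst w0, #0xff00` logic described there (an illustration, not TXM code):

```python
def selector24_pass(ret: int) -> bool:
    # Caller-side check modelled on the docstring: `tst w0, #0xff00`
    # treats the result as PASS only when byte 1 of the return code is zero.
    return (ret & 0xFF00) == 0

assert selector24_pass(0xA1)         # byte 1 = 0x00 -> PASS
assert not selector24_pass(0x130A1)  # byte 1 = 0x30 -> FAIL
assert not selector24_pass(0x22DA1)  # byte 1 = 0x2D -> FAIL
```

Forcing `mov w0, #0xa1` therefore satisfies the caller unconditionally, which is why the patch only needs to reach the existing epilogue cleanly.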

    def patch_get_task_allow_force_true(self):
        """Force get-task-allow entitlement call to return true."""
        refs = self._find_string_refs(b"get-task-allow")
        if not refs:
            self._log(" [-] TXM: get-task-allow string refs not found")
            return False

        cands = []
        for _, _, add_off in refs:
            for scan in range(add_off, min(add_off + 0x20, self.size - 4), 4):
                i = _disasm_one(self.raw, scan)
                n = _disasm_one(self.raw, scan + 4)
                if not i or not n:
                    continue
                if (
                    i.mnemonic == "bl"
                    and n.mnemonic == "tbnz"
                    and n.op_str.startswith("w0, #0,")
                ):
                    cands.append(scan)

        if len(cands) != 1:
            self._log(
                f" [-] TXM: expected 1 get-task-allow BL site, found {len(cands)}"
            )
            return False

        self.emit(cands[0], MOV_X0_1, "get-task-allow: bl -> mov x0,#1")
        return True

    def patch_selector42_29_shellcode(self):
        """Selector 42|29 patch via dynamic cave shellcode + branch redirect."""
        fn = self._find_debugger_gate_func_start()
        if fn is None:
            self._log(" [-] TXM: debugger-gate function not found (selector42|29)")
            return False

        stubs = []
        for off in range(4, self.size - 24, 4):
            p = _disasm_one(self.raw, off - 4)
            i0 = _disasm_one(self.raw, off)
            i1 = _disasm_one(self.raw, off + 4)
            i2 = _disasm_one(self.raw, off + 8)
            i3 = _disasm_one(self.raw, off + 12)
            i4 = _disasm_one(self.raw, off + 16)
            i5 = _disasm_one(self.raw, off + 20)
            if not all((p, i0, i1, i2, i3, i4, i5)):
                continue
            if not (p.mnemonic == "bti" and p.op_str == "j"):
                continue
            if not (i0.mnemonic == "mov" and i0.op_str == "x0, x20"):
                continue
            if not (
                i1.mnemonic == "bl" and i2.mnemonic == "mov" and i2.op_str == "x1, x21"
            ):
                continue
            if not (
                i3.mnemonic == "mov"
                and i3.op_str == "x2, x22"
                and i4.mnemonic == "bl"
                and i5.mnemonic == "b"
            ):
                continue
            if i4.operands and i4.operands[0].imm == fn:
                stubs.append(off)

        if len(stubs) != 1:
            self._log(f" [-] TXM: selector42|29 stub expected 1, found {len(stubs)}")
            return False
        stub_off = stubs[0]

        cave = self._find_udf_cave(min_insns=6, near_off=stub_off)
        if cave is None:
            self._log(" [-] TXM: no UDF cave found for selector42|29 shellcode")
            return False

        self.emit(
            stub_off,
            self._asm_at(f"b #0x{cave:X}", stub_off),
            "selector42|29: branch to shellcode",
        )
        self.emit(cave, NOP, "selector42|29 shellcode pad: udf -> nop")
        self.emit(cave + 4, MOV_X0_1, "selector42|29 shellcode: mov x0,#1")
        self.emit(
            cave + 8, STRB_W0_X20_30, "selector42|29 shellcode: strb w0,[x20,#0x30]"
        )
        self.emit(cave + 12, MOV_X0_X20, "selector42|29 shellcode: mov x0,x20")
        self.emit(
            cave + 16,
            self._asm_at(f"b #0x{stub_off + 4:X}", cave + 16),
            "selector42|29 shellcode: branch back",
        )
        return True

    def patch_debugger_entitlement_force_true(self):
        """Force debugger entitlement call to return true."""
        refs = self._find_string_refs(b"com.apple.private.cs.debugger")
        if not refs:
            self._log(" [-] TXM: debugger refs not found")
            return False

        cands = []
        for _, _, add_off in refs:
            for scan in range(add_off, min(add_off + 0x20, self.size - 4), 4):
                i = _disasm_one(self.raw, scan)
                n = _disasm_one(self.raw, scan + 4)
                p1 = _disasm_one(self.raw, scan - 4) if scan >= 4 else None
                p2 = _disasm_one(self.raw, scan - 8) if scan >= 8 else None
                if not all((i, n, p1, p2)):
                    continue
                if (
                    i.mnemonic == "bl"
                    and n.mnemonic == "tbnz"
                    and n.op_str.startswith("w0, #0,")
                    and p1.mnemonic == "mov"
                    and p1.op_str == "x2, #0"
                    and p2.mnemonic == "mov"
                    and p2.op_str == "x0, #0"
                ):
                    cands.append(scan)

        if len(cands) != 1:
            self._log(f" [-] TXM: expected 1 debugger BL site, found {len(cands)}")
            return False

        self.emit(cands[0], MOV_W0_1, "debugger entitlement: bl -> mov w0,#1")
        return True

    def patch_developer_mode_bypass(self):
        """Developer-mode bypass: NOP conditional guard before deny log path."""
        refs = self._find_string_refs(
            b"developer mode enabled due to system policy configuration"
        )
        if not refs:
            self._log(" [-] TXM: developer-mode string ref not found")
            return False

        cands = []
        for _, _, add_off in refs:
            for back in range(add_off - 4, max(add_off - 0x20, 0), -4):
                ins = _disasm_one(self.raw, back)
                if not ins:
                    continue
                if ins.mnemonic not in ("tbz", "tbnz", "cbz", "cbnz"):
                    continue
                if not ins.op_str.startswith("w9, #0,"):
                    continue
                cands.append(back)

        if len(cands) != 1:
            self._log(
                f" [-] TXM: expected 1 developer mode guard, found {len(cands)}"
            )
            return False

        self.emit(cands[0], NOP, "developer mode bypass")
        return True


# ── CLI entry point ────────────────────────────────────────────
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Dynamic TXM patcher")
    parser.add_argument("txm", help="Path to raw or IM4P TXM image")
    parser.add_argument("-q", "--quiet", action="store_true")
    args = parser.parse_args()

    print(f"Loading {args.txm}...")
    file_raw = open(args.txm, "rb").read()

    try:
        from pyimg4 import IM4P

        im4p = IM4P(file_raw)
        if im4p.payload.compression:
            im4p.payload.decompress()
        payload = im4p.payload.data
        print(f" format: IM4P (fourcc={im4p.fourcc})")
    except Exception:
        payload = file_raw
        print(" format: raw")

    data = bytearray(payload)
    print(f" size: {len(data)} bytes ({len(data) / 1024:.1f} KB)\n")

    patcher = TXMPatcher(data, verbose=not args.quiet)
    n = patcher.apply()
    print(f"\n {n} patches applied.")
@@ -2,7 +2,7 @@
|
||||
"""
|
||||
build_ramdisk.py — Build a signed SSH ramdisk for vphone600.
|
||||
|
||||
Expects firmware already patched by patch_firmware.py.
|
||||
Expects the VM restore tree to have already been patched by the Swift firmware pipeline.
|
||||
Extracts patched components, signs with SHSH, and builds SSH ramdisk.
|
||||
|
||||
Usage:
|
||||
@@ -15,8 +15,8 @@ Directory structure:
|
||||
./Ramdisk/ — Final signed IMG4 output
|
||||
|
||||
Prerequisites:
|
||||
pip install keystone-engine capstone pyimg4
|
||||
Run patch_firmware.py first to patch boot-chain components.
|
||||
pip install pyimg4
|
||||
Run make fw_patch / make fw_patch_dev / make fw_patch_jb first to patch boot-chain components.
|
||||
"""
|
||||
|
||||
import gzip
|
||||
@@ -26,23 +26,11 @@ import plistlib
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
|
||||
# Ensure sibling modules (patch_firmware) are importable when run from any CWD
|
||||
_SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
if _SCRIPT_DIR not in sys.path:
|
||||
sys.path.insert(0, _SCRIPT_DIR)
|
||||
import tempfile
|
||||
|
||||
from pyimg4 import IM4M, IM4P, IMG4
|
||||
|
||||
from fw_patch import (
|
||||
load_firmware,
|
||||
_save_im4p_with_payp,
|
||||
patch_txm,
|
||||
find_restore_dir,
|
||||
    find_file,
)
from patchers.iboot import IBootPatcher
from patchers.kernel import KernelPatcher
_SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

# ══════════════════════════════════════════════════════════════════
# Configuration
@@ -53,6 +41,7 @@ TEMP_DIR = "ramdisk_builder_temp"
INPUT_DIR = "ramdisk_input"
RESTORED_EXTERNAL_PATH = "usr/local/bin/restored_external"
RESTORED_EXTERNAL_SERIAL_MARKER = b"SSHRD_Script Sep 22 2022 18:56:50"
DEFAULT_IBEC_BOOT_ARGS = b"serial=3 -v debug=0x2014e %s"

# Ramdisk boot-args
RAMDISK_BOOT_ARGS = b"serial=3 rd=md0 debug=0x2014e -v wdt=-1 %s"
@@ -88,6 +77,7 @@ SIGN_DIRS = [

# Compressed archive of ramdisk_input/ (located next to this script)
INPUT_ARCHIVE = "ramdisk_input.tar.zst"
PATCHER_BINARY_ENV = "VPHONE_PATCHER_BINARY"


# ══════════════════════════════════════════════════════════════════
@@ -204,6 +194,117 @@ def check_prerequisites():
        sys.exit(1)


def project_root():
    return os.path.abspath(os.path.join(_SCRIPT_DIR, ".."))


def patcher_binary_path():
    override = os.environ.get(PATCHER_BINARY_ENV, "").strip()
    if override:
        return os.path.abspath(override)
    return os.path.join(project_root(), ".build", "debug", "vphone-cli")


def run_swift_patch_component(component, src_path, output_path):
    """Patch a single component via the Swift FirmwarePatcher CLI."""
    binary = patcher_binary_path()
    if not os.path.isfile(binary):
        print(f"[-] Swift patcher binary not found: {binary}")
        print(" Run: make patcher_build")
        sys.exit(1)

    run(
        [
            binary,
            "patch-component",
            "--component",
            component,
            "--input",
            src_path,
            "--output",
            output_path,
            "--quiet",
        ]
    )

def load_firmware(path):
    """Load firmware file, auto-detecting IM4P vs raw."""
    with open(path, "rb") as f:
        raw = f.read()

    try:
        im4p = IM4P(raw)
        if im4p.payload.compression:
            im4p.payload.decompress()
        return im4p, bytearray(im4p.payload.data), True, raw
    except Exception:
        return None, bytearray(raw), False, raw


def _save_im4p_with_payp(path, fourcc, patched_data, original_raw):
    """Repackage as LZFSE-compressed IM4P and append PAYP from original."""
    with (
        tempfile.NamedTemporaryFile(suffix=".raw", delete=False) as tmp_raw,
        tempfile.NamedTemporaryFile(suffix=".im4p", delete=False) as tmp_im4p,
    ):
        tmp_raw_path = tmp_raw.name
        tmp_im4p_path = tmp_im4p.name
        tmp_raw.write(bytes(patched_data))

    try:
        subprocess.run(
            [
                "pyimg4",
                "im4p",
                "create",
                "-i",
                tmp_raw_path,
                "-o",
                tmp_im4p_path,
                "-f",
                fourcc,
                "--lzfse",
            ],
            check=True,
            capture_output=True,
        )
        output = bytearray(open(tmp_im4p_path, "rb").read())
    finally:
        os.unlink(tmp_raw_path)
        os.unlink(tmp_im4p_path)

    payp_offset = original_raw.rfind(b"PAYP")
    if payp_offset >= 0:
        payp_data = original_raw[payp_offset - 10 :]
        output.extend(payp_data)
        old_len = int.from_bytes(output[2:5], "big")
        output[2:5] = (old_len + len(payp_data)).to_bytes(3, "big")
        print(f" [+] preserved PAYP ({len(payp_data)} bytes)")

    with open(path, "wb") as f:
        f.write(output)


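The three-byte length fixup above assumes the rebuilt IM4P opens with a DER SEQUENCE whose length uses the long form `30 83 xx xx xx` (three length bytes, which is what pyimg4 emits at these payload sizes). A standalone sketch of the same arithmetic on synthetic data:

```python
def patch_der_length(im4p: bytearray, extra: int) -> bytearray:
    # Bytes 0..1 must be 0x30 (SEQUENCE) and 0x83 (long-form length,
    # three length bytes) for the output[2:5] slice to be valid.
    assert im4p[0] == 0x30 and im4p[1] == 0x83
    old_len = int.from_bytes(im4p[2:5], "big")
    im4p[2:5] = (old_len + extra).to_bytes(3, "big")
    return im4p

payp_tail = b"\x00" * 10 + b"PAYP" + b"\x00" * 6   # hypothetical 20-byte tail
blob = bytearray(b"\x30\x83\x00\x00\x08" + b"\x00" * 8)  # SEQUENCE of 8 bytes
patch_der_length(blob, len(payp_tail))
blob.extend(payp_tail)
print(int.from_bytes(blob[2:5], "big"))  # 8 + 20 = 28
```

If the container ever used a different length-of-length, the fixed `output[2:5]` slice would corrupt the header; the Swift port's `updateTopLevelDERLength` parses the length field instead of assuming this shape.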
def find_restore_dir(base_dir):
    for entry in sorted(os.listdir(base_dir)):
        full = os.path.join(base_dir, entry)
        if os.path.isdir(full) and "Restore" in entry:
            return full
    return None


def find_file(base_dir, patterns, label):
    for pattern in patterns:
        matches = sorted(glob.glob(os.path.join(base_dir, pattern)))
        if matches:
            return matches[0]
    print(f"[-] {label} not found. Searched patterns:")
    for pattern in patterns:
        print(f" {os.path.join(base_dir, pattern)}")
    sys.exit(1)


# ══════════════════════════════════════════════════════════════════
# Firmware extraction and IM4P creation
# ══════════════════════════════════════════════════════════════════
@@ -279,17 +380,9 @@ def derive_ramdisk_kernel_source(kc_src, temp_dir):
        return None

    print(f" deriving ramdisk kernel from pristine source: {pristine}")
    im4p_obj, data, was_im4p, original_raw = load_firmware(pristine)
    kp = KernelPatcher(data)
    n = kp.apply()
    print(f" [+] {n} base kernel patches applied for ramdisk variant")

    out_path = os.path.join(temp_dir, f"kernelcache.research.vphone600{RAMDISK_KERNEL_SUFFIX}")
    if was_im4p and im4p_obj is not None:
        _save_im4p_with_payp(out_path, im4p_obj.fourcc, data, original_raw)
    else:
        with open(out_path, "wb") as f:
            f.write(data)
    run_swift_patch_component("kernel-base", pristine, out_path)
    print(" [+] base kernel patches applied for ramdisk variant")
    return out_path


@@ -301,14 +394,13 @@ def derive_ramdisk_kernel_source(kc_src, temp_dir):
def patch_ibec_bootargs(data):
    """Replace normal boot-args with ramdisk boot-args in already-patched iBEC.

    Finds the boot-args string written by patch_firmware.py (via IBootPatcher)
    Finds the boot-args string written by the Swift firmware pipeline
    and overwrites it in-place. No hardcoded offsets needed — the ADRP+ADD
    instructions already point to the string location.
    """
    normal_args = IBootPatcher.BOOT_ARGS
    off = data.find(normal_args)
    off = data.find(DEFAULT_IBEC_BOOT_ARGS)
    if off < 0:
        print(f" [-] boot-args: existing string not found ({normal_args.decode()!r})")
        print(f" [-] boot-args: existing string not found ({DEFAULT_IBEC_BOOT_ARGS.decode()!r})")
        return False

    args = RAMDISK_BOOT_ARGS + b"\x00"
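An in-place swap like this only works when the new string plus its NUL terminator fits in the space occupied by the old one (string plus trailing padding). A minimal, hedged sketch of that pattern, reusing the two constants defined above:

```python
DEFAULT_IBEC_BOOT_ARGS = b"serial=3 -v debug=0x2014e %s"
RAMDISK_BOOT_ARGS = b"serial=3 rd=md0 debug=0x2014e -v wdt=-1 %s"

def swap_boot_args(data: bytearray) -> bool:
    off = data.find(DEFAULT_IBEC_BOOT_ARGS)
    if off < 0:
        return False
    new = RAMDISK_BOOT_ARGS + b"\x00"
    # Available room: old string, its NUL, plus any trailing NUL padding.
    room = len(DEFAULT_IBEC_BOOT_ARGS) + 1
    end = off + room
    while end < len(data) and data[end] == 0:
        end += 1
        room += 1
    if len(new) > room:
        return False  # would clobber whatever follows the string
    data[off:off + len(new)] = new
    return True

img = bytearray(b"\x00" * 8 + DEFAULT_IBEC_BOOT_ARGS + b"\x00" * 32)
print(swap_boot_args(img))  # True
```

The padding check is an assumption for illustration; the real patcher relies on the known iBEC layout rather than probing for NULs.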
@@ -695,10 +787,17 @@ def main():
        "TXM",
    )
    txm_raw = os.path.join(temp_dir, "txm.raw")
    im4p_obj, data, original_raw = extract_to_raw(txm_src, txm_raw)
    patch_txm(data)
    txm_patched_raw = os.path.join(temp_dir, "txm.patched.raw")
    im4p_obj, data, _, original_raw = load_firmware(txm_src)
    with open(txm_raw, "wb") as f:
        f.write(bytes(data))
    print(f" source: {txm_src}")
    print(f" format: IM4P, {len(data)} bytes")
    run_swift_patch_component("txm", txm_src, txm_patched_raw)
    with open(txm_patched_raw, "rb") as f:
        patched_txm = f.read()
    txm_im4p = os.path.join(temp_dir, "txm.im4p")
    _save_im4p_with_payp(txm_im4p, TXM_FOURCC, data, original_raw)
    _save_im4p_with_payp(txm_im4p, TXM_FOURCC, patched_txm, original_raw)
    sign_img4(
        txm_im4p, os.path.join(output_dir, "txm.img4"), im4m_path
    )

47
scripts/start_amfidont_for_vphone.sh
Normal file
@@ -0,0 +1,47 @@
#!/bin/zsh
# start_amfidont_for_vphone.sh — Start amfidont for the current vphone build.
#
# This is the README "Option 2" host workaround packaged for this repo:
# - computes the signed release binary CDHash
# - uses the URL-encoded project path form observed by AMFIPathValidator
# - starts amfidont in daemon mode so signed vphone-cli launches are allowlisted

set -euo pipefail

SCRIPT_DIR="${0:A:h}"
PROJECT_ROOT="${SCRIPT_DIR:h}"
RELEASE_BIN="${PROJECT_ROOT}/.build/release/vphone-cli"
AMFIDONT_BIN="${HOME}/Library/Python/3.9/bin/amfidont"

[[ -x "$AMFIDONT_BIN" ]] || {
    echo "amfidont not found at $AMFIDONT_BIN" >&2
    echo "Install it first: xcrun python3 -m pip install --user amfidont" >&2
    exit 1
}

[[ -x "$RELEASE_BIN" ]] || {
    echo "Missing release binary: $RELEASE_BIN" >&2
    echo "Run 'make build' first." >&2
    exit 1
}

CDHASH="$(
    codesign -dv --verbose=4 "$RELEASE_BIN" 2>&1 \
        | sed -n 's/^CDHash=//p' \
        | head -n1
)"
[[ -n "$CDHASH" ]] || {
    echo "Failed to extract CDHash for $RELEASE_BIN" >&2
    exit 1
}

ENCODED_PROJECT_ROOT="${PROJECT_ROOT// /%20}"

echo "[*] Project root: $PROJECT_ROOT"
echo "[*] Encoded AMFI path: $ENCODED_PROJECT_ROOT"
echo "[*] Release CDHash: $CDHASH"

exec sudo "$AMFIDONT_BIN" daemon \
    --path "$ENCODED_PROJECT_ROOT" \
    --cdhash "$CDHASH" \
    --verbose
@@ -142,6 +142,34 @@ public enum ARM64 {
    /// ldr x1, [x0, #0x3e0]
    static let ldr_x1_x0_0x3e0: UInt32 = 0xF941_F001

    // MARK: Syscallmask C22 Cave Instructions (verified via clang/as)

    static let syscallmask_cbzX2_0x6c: UInt32 = 0xB400_0362 // cbz x2, #+0x6c
    static let syscallmask_subSP_0x40: UInt32 = 0xD101_03FF // sub sp, sp, #0x40
    static let syscallmask_stpX19X20_0x10: UInt32 = 0xA901_53F3 // stp x19, x20, [sp, #0x10]
    static let syscallmask_stpX21X22_0x20: UInt32 = 0xA902_5BF5 // stp x21, x22, [sp, #0x20]
    static let syscallmask_stpFP_LR_0x30: UInt32 = 0xA903_7BFD // stp x29, x30, [sp, #0x30]
    static let syscallmask_movX19_X0: UInt32 = 0xAA00_03F3 // mov x19, x0
    static let syscallmask_movX20_X1: UInt32 = 0xAA01_03F4 // mov x20, x1
    static let syscallmask_movX21_X2: UInt32 = 0xAA02_03F5 // mov x21, x2
    static let syscallmask_movX22_X3: UInt32 = 0xAA03_03F6 // mov x22, x3
    static let syscallmask_movX8_8: UInt32 = 0xD280_0108 // mov x8, #8
    static let syscallmask_movX0_X17: UInt32 = 0xAA11_03E0 // mov x0, x17
    static let syscallmask_movX1_X21: UInt32 = 0xAA15_03E1 // mov x1, x21
    static let syscallmask_movX2_0: UInt32 = 0xD280_0002 // mov x2, #0
    static let syscallmask_udivX4_X22_X8: UInt32 = 0x9AC8_0AC4 // udiv x4, x22, x8
    static let syscallmask_msubX10_X4_X8_X22: UInt32 = 0x9B08_D88A // msub x10, x4, x8, x22
    static let syscallmask_cbzX10_8: UInt32 = 0xB400_004A // cbz x10, #+8
    static let syscallmask_addX4_X4_1: UInt32 = 0x9100_0484 // add x4, x4, #1
    static let syscallmask_movX0_X19: UInt32 = 0xAA13_03E0 // mov x0, x19
    static let syscallmask_movX1_X20: UInt32 = 0xAA14_03E1 // mov x1, x20
    static let syscallmask_movX2_X21: UInt32 = 0xAA15_03E2 // mov x2, x21
    static let syscallmask_movX3_X22: UInt32 = 0xAA16_03E3 // mov x3, x22
    static let syscallmask_ldpX19X20_0x10: UInt32 = 0xA941_53F3 // ldp x19, x20, [sp, #0x10]
    static let syscallmask_ldpX21X22_0x20: UInt32 = 0xA942_5BF5 // ldp x21, x22, [sp, #0x20]
    static let syscallmask_ldpFP_LR_0x30: UInt32 = 0xA943_7BFD // ldp x29, x30, [sp, #0x30]
    static let syscallmask_addSP_0x40: UInt32 = 0x9101_03FF // add sp, sp, #0x40

    // MARK: UInt32 Values (for pattern matching)

    public static let nopU32: UInt32 = 0xD503_201F
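The constants can be cross-checked against the A64 encoding rules: `MOVZ Xd, #imm16` (hw=0) is `0xD2800000 | imm16 << 5 | Rd`, and `ADD Xd, Xn, #imm12` (no shift) is `0x91000000 | imm12 << 10 | Rn << 5 | Rd`. A quick Python sketch verifying three of the values above:

```python
def movz_x(rd: int, imm16: int) -> int:
    # MOVZ Xd, #imm16 with hw=0: sf=1, opc=10, fixed bits 100101, hw=00
    return 0xD2800000 | (imm16 << 5) | rd

def add_x_imm(rd: int, rn: int, imm12: int) -> int:
    # ADD Xd, Xn, #imm12 (shift=0)
    return 0x91000000 | (imm12 << 10) | (rn << 5) | rd

print(hex(movz_x(8, 8)))        # 0xd2800108 -> syscallmask_movX8_8
print(hex(movz_x(2, 0)))        # 0xd2800002 -> syscallmask_movX2_0
print(hex(add_x_imm(4, 4, 1)))  # 0x91000484 -> syscallmask_addX4_X4_1
```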
@@ -149,6 +177,7 @@ public enum ARM64 {
    public static let retaaU32: UInt32 = 0xD65F_0BFF
    public static let retabU32: UInt32 = 0xD65F_0FFF
    public static let pacibspU32: UInt32 = 0xD503_237F
    public static let movX0_0_U32: UInt32 = 0xD280_0000 // MOV X0, #0 (MOVZ X0, #0)

    /// Set of instruction uint32 values that indicate function boundaries.
    public static let funcBoundaryU32s: Set<UInt32> = [

@@ -1,6 +1,6 @@
// AVPBooterPatcher.swift — AVPBooter DGST bypass patcher.
//
// Python source: scripts/fw_patch.py patch_avpbooter()
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// Strategy:
// 1. Disassemble the entire binary.
@@ -49,7 +49,9 @@ public final class AVPBooterPatcher: Patcher {

    @discardableResult
    public func apply() throws -> Int {
        let _ = try findAll()
        if patches.isEmpty {
            let _ = try findAll()
        }
        for record in patches {
            buffer.writeBytes(at: record.fileOffset, bytes: record.patchedBytes)
        }

@@ -2,6 +2,19 @@

import Foundation

extension Data {
    /// Load a little-endian integer without assuming the buffer is naturally aligned.
    @inlinable
    func loadLE<T: FixedWidthInteger>(_: T.Type, at offset: Int) -> T {
        precondition(offset >= 0 && offset + MemoryLayout<T>.size <= count)
        var value: T = .zero
        _ = Swift.withUnsafeMutableBytes(of: &value) { dst in
            copyBytes(to: dst, from: offset ..< offset + MemoryLayout<T>.size)
        }
        return T(littleEndian: value)
    }
}

/// A mutable binary buffer for reading and patching firmware data.
public final class BinaryBuffer: @unchecked Sendable {
    /// The mutable working data.
@@ -15,8 +28,10 @@ public final class BinaryBuffer: @unchecked Sendable {
    }

    public init(_ data: Data) {
        self.data = data
        original = data
        // Rebase to startIndex 0 so zero-based subscripts are always valid.
        let rebased = data.startIndex == 0 ? data : Data(data)
        self.data = rebased
        original = rebased
    }

    public convenience init(contentsOf url: URL) throws {
@@ -28,17 +43,13 @@
    /// Read a little-endian UInt32 at the given byte offset.
    @inlinable
    public func readU32(at offset: Int) -> UInt32 {
        data.withUnsafeBytes { buf in
            buf.load(fromByteOffset: offset, as: UInt32.self)
        }
        data.loadLE(UInt32.self, at: offset)
    }

    /// Read a little-endian UInt64 at the given byte offset.
    @inlinable
    public func readU64(at offset: Int) -> UInt64 {
        data.withUnsafeBytes { buf in
            buf.load(fromByteOffset: offset, as: UInt64.self)
        }
        data.loadLE(UInt64.self, at: offset)
    }

    /// Read bytes at the given range.

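The point of `loadLE` is that a raw typed load requires natural alignment, while a byte-copy read works at any offset. The same copy-based idea, expressed in Python over a buffer with a deliberately misaligned field:

```python
import struct

buf = bytes([0xAA, 0x01, 0x00, 0x00, 0x00, 0xBB])
# A u32 at offset 1 is misaligned; slicing copies the bytes first,
# so the read is safe regardless of alignment.
val = int.from_bytes(buf[1:5], "little")
print(val)  # 1
# struct.unpack_from performs the same unaligned little-endian read.
print(struct.unpack_from("<I", buf, 1)[0])  # 1
```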
@@ -5,6 +5,8 @@ import Img4tool

/// Handles loading, extracting, and re-packaging IM4P firmware containers.
public enum IM4PHandler {
    private static let paypPreservingFourCCs: Set<String> = ["trxm", "krnl"]

    /// Load a firmware file as IM4P or raw data.
    ///
    /// - Parameter url: Path to the firmware file.
@@ -37,16 +39,81 @@
        to url: URL
    ) throws {
        if let original = originalIM4P {
            // Re-package as IM4P with same fourcc and LZFSE compression
            // Rebuild the IM4P container with the patched payload. Do not force
            // a new compression mode here; the Python pipeline currently writes
            // these patched payloads back uncompressed and preserves any PAYP
            // metadata tail from the original container.
            let newIM4P = try IM4P(
                fourcc: original.fourcc,
                description: original.description,
                payload: patchedData,
                compression: "lzfse"
                payload: patchedData
            )
            try newIM4P.data.write(to: url)
            let output: Data = if paypPreservingFourCCs.contains(original.fourcc) {
                try appendPAYPIfPresent(from: original.data, to: newIM4P.data)
            } else {
                newIM4P.data
            }
            try output.write(to: url)
        } else {
            try patchedData.write(to: url)
        }
    }

    private static func appendPAYPIfPresent(from original: Data, to rebuilt: Data) throws -> Data {
        let marker = Data("PAYP".utf8)
        guard let markerRange = original.range(of: marker, options: .backwards),
              markerRange.lowerBound >= 10
        else {
            return rebuilt
        }

        let payp = original[(markerRange.lowerBound - 10) ..< original.endIndex]
        var output = rebuilt
        try updateTopLevelDERLength(of: &output, adding: payp.count)
        output.append(payp)
        return output
    }

    private static func updateTopLevelDERLength(of data: inout Data, adding extraBytes: Int) throws {
        guard data.count >= 2, data[0] == 0x30 else {
            throw Img4Error.invalidFormat("rebuilt IM4P missing top-level DER sequence")
        }

        let lengthByte = data[1]
        let headerRange: Range<Int>
        let currentLength: Int

        if lengthByte & 0x80 == 0 {
            headerRange = 1 ..< 2
            currentLength = Int(lengthByte)
        } else {
            let lengthOfLength = Int(lengthByte & 0x7F)
            let start = 2
            let end = start + lengthOfLength
            guard end <= data.count else {
                throw Img4Error.invalidFormat("invalid DER length field")
            }
            headerRange = 1 ..< end
            currentLength = data[start ..< end].reduce(0) { ($0 << 8) | Int($1) }
        }

        let replacement = derLengthBytes(currentLength + extraBytes)
        data.replaceSubrange(headerRange, with: replacement)
    }

    private static func derLengthBytes(_ length: Int) -> Data {
        precondition(length >= 0)
        if length < 0x80 {
            return Data([UInt8(length)])
        }

        var value = length
        var encoded: [UInt8] = []
        while value > 0 {
            encoded.append(UInt8(value & 0xFF))
            value >>= 8
        }
        encoded.reverse()
        return Data([0x80 | UInt8(encoded.count)] + encoded)
    }
}

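This helper pair implements the standard DER definite-length rules: lengths below 0x80 use the short form (one byte), otherwise the long form `0x80 | n` followed by n big-endian length bytes. A Python round-trip of the same rules:

```python
def der_length_bytes(length: int) -> bytes:
    if length < 0x80:
        return bytes([length])                     # short form
    body = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body        # long form

def parse_der_length(buf: bytes, at: int = 1):
    first = buf[at]
    if first & 0x80 == 0:
        return first, at + 1                       # (length, end of length field)
    n = first & 0x7F
    return int.from_bytes(buf[at + 1:at + 1 + n], "big"), at + 1 + n

print(der_length_bytes(0x7F).hex())    # 7f
print(der_length_bytes(0x1234).hex())  # 821234
length, end = parse_der_length(b"\x30" + der_length_bytes(0x1234))
print(length == 0x1234, end)  # True 4
```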
@@ -32,25 +32,25 @@ public enum MachOParser {
        var segments: [MachOSegmentInfo] = []
        guard data.count > 32 else { return segments }

        let magic: UInt32 = data.withUnsafeBytes { $0.load(as: UInt32.self) }
        let magic = data.loadLE(UInt32.self, at: 0)
        guard magic == 0xFEED_FACF else { return segments } // MH_MAGIC_64

        let ncmds: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: 16, as: UInt32.self) }
        let ncmds = data.loadLE(UInt32.self, at: 16)
        var offset = 32 // sizeof(mach_header_64)

        for _ in 0 ..< ncmds {
            guard offset + 8 <= data.count else { break }
            let cmd: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset, as: UInt32.self) }
            let cmdsize: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 4, as: UInt32.self) }
            let cmd = data.loadLE(UInt32.self, at: offset)
            let cmdsize = data.loadLE(UInt32.self, at: offset + 4)

            if cmd == 0x19 { // LC_SEGMENT_64
                let nameData = data[offset + 8 ..< offset + 24]
                let name = String(data: nameData, encoding: .utf8)?
                    .trimmingCharacters(in: CharacterSet(charactersIn: "\0")) ?? ""
                let vmAddr: UInt64 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 24, as: UInt64.self) }
                let vmSize: UInt64 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 32, as: UInt64.self) }
                let fileOff: UInt64 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 40, as: UInt64.self) }
                let fileSize: UInt64 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 48, as: UInt64.self) }
                let vmAddr = data.loadLE(UInt64.self, at: offset + 24)
                let vmSize = data.loadLE(UInt64.self, at: offset + 32)
                let fileOff = data.loadLE(UInt64.self, at: offset + 40)
                let fileSize = data.loadLE(UInt64.self, at: offset + 48)

                segments.append(MachOSegmentInfo(
                    name: name, vmAddr: vmAddr, vmSize: vmSize,
@@ -68,22 +68,22 @@ public enum MachOParser {
        var sections: [String: MachOSectionInfo] = [:]
        guard data.count > 32 else { return sections }

        let magic: UInt32 = data.withUnsafeBytes { $0.load(as: UInt32.self) }
        let magic = data.loadLE(UInt32.self, at: 0)
        guard magic == 0xFEED_FACF else { return sections }

        let ncmds: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: 16, as: UInt32.self) }
        let ncmds = data.loadLE(UInt32.self, at: 16)
        var offset = 32

        for _ in 0 ..< ncmds {
            guard offset + 8 <= data.count else { break }
            let cmd: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset, as: UInt32.self) }
            let cmdsize: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 4, as: UInt32.self) }
            let cmd = data.loadLE(UInt32.self, at: offset)
            let cmdsize = data.loadLE(UInt32.self, at: offset + 4)

            if cmd == 0x19 { // LC_SEGMENT_64
                let segNameData = data[offset + 8 ..< offset + 24]
                let segName = String(data: segNameData, encoding: .utf8)?
                    .trimmingCharacters(in: CharacterSet(charactersIn: "\0")) ?? ""
                let nsects: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 64, as: UInt32.self) }
                let nsects = data.loadLE(UInt32.self, at: offset + 64)

                var sectOff = offset + 72 // sizeof(segment_command_64) header
                for _ in 0 ..< nsects {
@@ -91,9 +91,9 @@ public enum MachOParser {
                    let sectNameData = data[sectOff ..< sectOff + 16]
                    let sectName = String(data: sectNameData, encoding: .utf8)?
                        .trimmingCharacters(in: CharacterSet(charactersIn: "\0")) ?? ""
                    let addr: UInt64 = data.withUnsafeBytes { $0.load(fromByteOffset: sectOff + 32, as: UInt64.self) }
                    let size: UInt64 = data.withUnsafeBytes { $0.load(fromByteOffset: sectOff + 40, as: UInt64.self) }
                    let fileOff: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: sectOff + 48, as: UInt32.self) }
                    let addr = data.loadLE(UInt64.self, at: sectOff + 32)
                    let size = data.loadLE(UInt64.self, at: sectOff + 40)
                    let fileOff = data.loadLE(UInt32.self, at: sectOff + 48)

                    let key = "\(segName),\(sectName)"
                    sections[key] = MachOSectionInfo(
@@ -129,19 +129,19 @@ public enum MachOParser {
    public static func parseSymtab(from data: Data) -> (symoff: Int, nsyms: Int, stroff: Int, strsize: Int)? {
        guard data.count > 32 else { return nil }

        let ncmds: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: 16, as: UInt32.self) }
        let ncmds = data.loadLE(UInt32.self, at: 16)
        var offset = 32

        for _ in 0 ..< ncmds {
            guard offset + 8 <= data.count else { break }
            let cmd: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset, as: UInt32.self) }
            let cmdsize: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 4, as: UInt32.self) }
            let cmd = data.loadLE(UInt32.self, at: offset)
            let cmdsize = data.loadLE(UInt32.self, at: offset + 4)

            if cmd == 0x02 { // LC_SYMTAB
                let symoff: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 8, as: UInt32.self) }
                let nsyms: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 12, as: UInt32.self) }
                let stroff: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 16, as: UInt32.self) }
                let strsize: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: offset + 20, as: UInt32.self) }
                let symoff = data.loadLE(UInt32.self, at: offset + 8)
                let nsyms = data.loadLE(UInt32.self, at: offset + 12)
                let stroff = data.loadLE(UInt32.self, at: offset + 16)
                let strsize = data.loadLE(UInt32.self, at: offset + 20)
                return (Int(symoff), Int(nsyms), Int(stroff), Int(strsize))
            }
            offset += Int(cmdsize)
@@ -157,8 +157,8 @@ public enum MachOParser {
            let entryOff = symtab.symoff + i * 16 // sizeof(nlist_64)
            guard entryOff + 16 <= data.count else { break }

            let nStrx: UInt32 = data.withUnsafeBytes { $0.load(fromByteOffset: entryOff, as: UInt32.self) }
            let nValue: UInt64 = data.withUnsafeBytes { $0.load(fromByteOffset: entryOff + 8, as: UInt64.self) }
            let nStrx = data.loadLE(UInt32.self, at: entryOff)
            let nValue = data.loadLE(UInt64.self, at: entryOff + 8)

            guard nStrx < symtab.strsize, nValue != 0 else { continue }

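The load-command walks above all follow the standard 64-bit Mach-O layout: a 32-byte `mach_header_64` with `ncmds` at offset 16, then load commands that each begin with `cmd`/`cmdsize`. A hedged Python sketch of the same traversal over a synthetic header (the segment command below is a hypothetical minimal example, not real firmware):

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF
LC_SEGMENT_64 = 0x19

def segment_names(data: bytes):
    magic, = struct.unpack_from("<I", data, 0)
    if magic != MH_MAGIC_64:
        return []
    ncmds, = struct.unpack_from("<I", data, 16)
    names, offset = [], 32  # sizeof(mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", data, offset)
        if cmd == LC_SEGMENT_64:
            # segname: 16 bytes at offset + 8, NUL padded
            names.append(data[offset + 8:offset + 24].rstrip(b"\0").decode())
        offset += cmdsize
    return names

# Synthetic header: one 72-byte LC_SEGMENT_64 named __TEXT.
header = struct.pack("<IIIIIIII", MH_MAGIC_64, 0, 0, 0, 1, 72, 0, 0)
seg = struct.pack("<II16s", LC_SEGMENT_64, 72, b"__TEXT") + b"\0" * 48
print(segment_names(header + seg))  # ['__TEXT']
```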
@@ -1,6 +1,6 @@
// DeviceTreePatcher.swift — DeviceTree payload patcher.
//
// Translated from Python source: scripts/dtree.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// Strategy:
// 1. Parse the flat device tree binary into a node/property tree.
@@ -17,6 +17,7 @@ public final class DeviceTreePatcher: Patcher {

    let buffer: BinaryBuffer
    var patches: [PatchRecord] = []
    var rebuiltData: Data?

    // MARK: - Patch Definitions

@@ -114,16 +115,24 @@ public final class DeviceTreePatcher: Patcher {

    public func findAll() throws -> [PatchRecord] {
        patches = []
        rebuiltData = nil
        let root = try parsePayload(buffer.data)
        try applyPatches(root: root)
        rebuiltData = serializePayload(root)
        return patches
    }

    @discardableResult
    public func apply() throws -> Int {
        let _ = try findAll()
        for record in patches {
            buffer.writeBytes(at: record.fileOffset, bytes: record.patchedBytes)
        if patches.isEmpty, rebuiltData == nil {
            let _ = try findAll()
        }
        if let rebuiltData {
            buffer.data = rebuiltData
        } else {
            for record in patches {
                buffer.writeBytes(at: record.fileOffset, bytes: record.patchedBytes)
            }
        }
        if verbose, !patches.isEmpty {
            print("\n [\(patches.count) DeviceTree patch(es) applied]")
@@ -132,7 +141,7 @@
    }

    public var patchedData: Data {
        buffer.data
        rebuiltData ?? buffer.data
    }

    // MARK: - Parsing
@@ -209,6 +218,39 @@
        return root
    }

    private func serializeNode(_ node: DTNode) -> Data {
        var out = Data()
        out.append(contentsOf: withUnsafeBytes(of: UInt32(node.properties.count).littleEndian) { Data($0) })
        out.append(contentsOf: withUnsafeBytes(of: UInt32(node.children.count).littleEndian) { Data($0) })

        for prop in node.properties {
            var name = Data(prop.name.utf8)
            if name.count >= 32 {
                name = Data(name.prefix(31))
            }
            name.append(contentsOf: [UInt8](repeating: 0, count: 32 - name.count))
            out.append(name)

            out.append(contentsOf: withUnsafeBytes(of: UInt16(prop.length).littleEndian) { Data($0) })
            out.append(contentsOf: withUnsafeBytes(of: prop.flags.littleEndian) { Data($0) })
            out.append(prop.value)

            let pad = Self.align4(prop.length) - prop.length
            if pad > 0 {
                out.append(Data(repeating: 0, count: pad))
            }
        }

        for child in node.children {
            out.append(serializeNode(child))
        }
        return out
    }

    private func serializePayload(_ root: DTNode) -> Data {
        serializeNode(root)
    }
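Reading the serializer above, each property is emitted as a 32-byte NUL-padded name, a 16-bit length, a 16-bit flags field, and the value padded to a 4-byte boundary. A sketch of that per-property layout in Python (the property name below is only an illustrative example, not one the patcher necessarily writes):

```python
import struct

def serialize_prop(name: str, value: bytes, flags: int = 0) -> bytes:
    # Layout mirrored from serializeNode: 32-byte NUL-padded name,
    # u16 length, u16 flags, value, then zero-pad to 4 bytes.
    raw_name = name.encode()[:31].ljust(32, b"\0")
    out = raw_name + struct.pack("<HH", len(value), flags) + value
    pad = (-len(value)) % 4
    return out + b"\0" * pad

blob = serialize_prop("secure-root-prefix", b"md\0")
print(len(blob))  # 32 + 4 + 3 + 1 pad = 40
```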

    // MARK: - Node Navigation

    /// Get the "name" property value from a node.
@@ -303,6 +345,10 @@
            try Self.encodeInteger(v, length: patch.length)
        }

        prop.length = patch.length
        prop.flags = patch.flags
        prop.value = newValue

        let record = PatchRecord(
            patchID: patch.patchID,
            component: component,
@@ -324,14 +370,3 @@
        }
    }
}

// MARK: - Data Helpers

private extension Data {
    /// Load a little-endian integer at the given byte offset.
    func loadLE<T: FixedWidthInteger>(_: T.Type, at offset: Int) -> T {
        withUnsafeBytes { buf in
            T(littleEndian: buf.load(fromByteOffset: offset, as: T.self))
        }
    }
}
@@ -1,6 +1,6 @@
// IBootJBPatcher.swift — JB-variant iBoot patcher (nonce bypass).
//
// Python source: scripts/patchers/iboot_jb.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.

import Capstone
import Foundation

@@ -1,6 +1,6 @@
// IBootPatcher.swift — iBoot chain patcher (iBSS, iBEC, LLB).
//
// Translated from Python: scripts/patchers/iboot.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
// Each patch mirrors Python logic exactly — no hardcoded offsets.
//
// Patch schedule by mode:
@@ -71,7 +71,9 @@ public class IBootPatcher: Patcher {

    @discardableResult
    public func apply() throws -> Int {
        let _ = try findAll()
        if patches.isEmpty {
            let _ = try findAll()
        }
        for record in patches {
            buffer.writeBytes(at: record.fileOffset, bytes: record.patchedBytes)
        }
@@ -188,7 +190,11 @@ public class IBootPatcher: Patcher {
    /// Find the two long '====...' banner runs and write the mode label into each.
    /// Python: `patch_serial_labels()`
    func patchSerialLabels() {
        let labelStr = "Loaded \(mode.rawValue.uppercased())"
        let labelStr = switch mode {
        case .ibss: "Loaded iBSS"
        case .ibec: "Loaded iBEC"
        case .llb: "Loaded LLB"
        }
        guard let labelBytes = labelStr.data(using: .ascii) else { return }

        // Collect all runs of '=' (>=20 chars) — same logic as Python.

@@ -1,6 +1,6 @@
// KernelJBPatchAmfiExecve.swift — JB kernel patch: AMFI execve kill path bypass (disabled)
//
// Python source: scripts/patchers/kernel_jb_patch_amfi_execve.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// Strategy: All kill paths in the AMFI execve hook converge on a shared
// epilogue that does `MOV W0, #1` (kill) then returns. Changing that single

@@ -1,6 +1,6 @@
// KernelJBPatchAmfiTrustcache.swift — JB kernel patch: AMFI trustcache gate bypass
//
// Python source: scripts/patchers/kernel_jb_patch_amfi_trustcache.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// Strategy (semantic function matching):
// Scan amfi_text for functions (PACIBSP boundaries) that match the

@@ -1,6 +1,6 @@
// KernelJBPatchBsdInitAuth.swift — JB: bypass FSIOC_KERNEL_ROOTAUTH failure in _bsd_init.
//
// Python source: scripts/patchers/kernel_jb_patch_bsd_init_auth.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// GUARDRAIL (CLAUDE.md): recover _bsd_init → locate rootvp panic block →
// find unique in-function call → cbnz w0/x0, panic → bl imageboot_needed → patch gate.

@@ -1,6 +1,6 @@
// KernelJBPatchCredLabel.swift — JB kernel patch: _cred_label_update_execve C21-v3
//
// Python source: scripts/patchers/kernel_jb_patch_cred_label.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// Strategy (C21-v3): Split late exits, add helper bits on success.
// - Keep _cred_label_update_execve body intact.

@@ -1,6 +1,6 @@
// KernelJBPatchDounmount.swift — JB: NOP the upstream cleanup call in dounmount.
//
// Python source: scripts/patchers/kernel_jb_patch_dounmount.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// Reveal: string-anchor "dounmount:" → find the unique near-tail 4-arg zeroed cleanup
// call: mov x0,xN ; mov w1,#0 ; mov w2,#0 ; mov w3,#0 ; bl ; mov x0,xN ; bl ; cbz x19,...

@@ -1,6 +1,6 @@
// KernelJBPatchHookCredLabel.swift — JB kernel patch: Faithful upstream C23 hook
//
// Python source: scripts/patchers/kernel_jb_patch_hook_cred_label.py
// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
//
// Strategy (faithful upstream C23): Redirect mac_policy_ops[18]
// (_hook_cred_label_update_execve sandbox wrapper) to a code cave that:
@@ -162,7 +162,7 @@ extension KernelJBPatcher {
            return nil
        }

        let entryRaw = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: entryOff, as: UInt64.self) }
        let entryRaw = buffer.readU64(at: entryOff)
        guard entryRaw != 0 else {
            log(" [-] hook ops entry is null")
            return nil
@@ -236,9 +236,11 @@ extension KernelJBPatcher {
        let refs = findStringRefs(searchStart)
        if let ref = refs.first {
            let refOff = ref.adrpOff
            // Scan back 80 bytes from the ref for a BL
            var scanOff = max(0, refOff - 80)
            while scanOff < refOff {
            // Python scans backward from the string ref so we prefer the
            // nearest call site rather than the first BL in the window.
            var scanOff = refOff - 4
            let scanLimit = max(0, refOff - 80)
            while scanOff >= scanLimit {
                let insn = buffer.readU32(at: scanOff)
                if insn >> 26 == 0b100101 { // BL
                    let imm26 = insn & 0x03FF_FFFF
@@ -250,7 +252,7 @@ extension KernelJBPatcher {
                        return target
                    }
                }
                scanOff += 4
                scanOff -= 4
            }
        }
        // Try next occurrence

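The scan recognizes BL by its top six bits (0b100101) and resolves the target from the signed 26-bit word offset in `imm26`. A Python sketch of that decode:

```python
def bl_target(insn: int, pc: int):
    # BL: bits [31:26] == 100101; imm26 is a signed word offset from pc.
    if insn >> 26 != 0b100101:
        return None
    imm26 = insn & 0x03FF_FFFF
    if imm26 & 0x0200_0000:          # sign-extend 26 bits
        imm26 -= 0x0400_0000
    return pc + imm26 * 4

# 0x97FFFFFF encodes `bl #-4`; 0x94000001 encodes `bl #+4`.
print(hex(bl_target(0x97FFFFFF, 0x1000)))  # 0xffc
print(hex(bl_target(0x94000001, 0x1000)))  # 0x1004
```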
@@ -1,6 +1,6 @@
 // KernelJBPatchIoucMacf.swift — JB kernel patch: IOUC MACF gate bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_iouc_macf.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Strategy:
 // 1. Locate the "IOUC %s failed MACF in process %s" format string.

@@ -117,27 +117,24 @@ extension KernelJBPatcher {

         let funcEnd = findFuncEnd(calleeOff, maxSize: 0x400)

-        // LDR X10, [X10, #0x9e8]:
-        // LDR (unsigned offset) Xt, [Xn, #imm]: size=11(X), opc=01
-        // bits[31:22]=1111_1001_01, imm12=offset>>3, Rn, Rt
-        // imm12 = 0x9e8 >> 3 = 0x13D
-        // Rn = X10 = 10, Rt = X10 = 10
-        // Full: 0xF9400000 | (0x13D << 10) | (10 << 5) | 10 = 0xF944F54A
-        let ldrX10SlotVal: UInt32 = 0xF944_F54A
-
-        // BLRAA X10 = 0xD73F0940, BLRAB X10 = 0xD73F0D40, BLR X10 = 0xD63F0140
-        let blraaX10: UInt32 = 0xD73F_0940
-        let blrabX10: UInt32 = 0xD73F_0D40
-        let blrX10: UInt32 = 0xD63F_0140
-
         var sawSlotLoad = false
         var sawIndirectCall = false

         var off = calleeOff
         while off < funcEnd {
-            let insn = buffer.readU32(at: off)
-            if insn == ldrX10SlotVal { sawSlotLoad = true }
-            if insn == blraaX10 || insn == blrabX10 || insn == blrX10 { sawIndirectCall = true }
+            guard let insn = disasAt(off) else {
+                off += 4
+                continue
+            }
+            let op = insn.operandString.replacingOccurrences(of: " ", with: "").lowercased()
+            if insn.mnemonic == "ldr", op.hasPrefix("x10,[x10"), op.contains(",#0x9e8]") {
+                sawSlotLoad = true
+            }
+            if insn.mnemonic == "blraa" || insn.mnemonic == "blrab" || insn.mnemonic == "blr",
+               op.hasPrefix("x10")
+            {
+                sawIndirectCall = true
+            }
             if sawSlotLoad, sawIndirectCall { return true }
             off += 4
         }
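The removed comment block derives the raw opcode words the old scanner matched byte-for-byte. Its arithmetic checks out and can be reproduced in a few lines of Python (the encoder helpers below are illustrative, not part of the repo):

```python
# Verify the hard-coded A64 opcode words from the removed IOUC MACF comment.
# Encodings per the Arm ARM; registers and offsets are taken from the diff above.

def ldr_x_uoff(rt: int, rn: int, offset: int) -> int:
    """LDR Xt, [Xn, #offset] — 64-bit load, unsigned scaled offset."""
    assert offset % 8 == 0, "offset must be 8-byte aligned for the X form"
    imm12 = offset >> 3                  # scaled by the 8-byte access size
    return 0xF9400000 | (imm12 << 10) | (rn << 5) | rt

def blr_x(rn: int) -> int:
    """BLR Xn — indirect call, no pointer auth."""
    return 0xD63F0000 | (rn << 5)

# LDR X10, [X10, #0x9e8] — the ops-table slot load (imm12 = 0x9e8 >> 3 = 0x13D)
assert ldr_x_uoff(10, 10, 0x9E8) == 0xF944F54A
# BLR X10 — the plain indirect-call variant listed alongside BLRAA/BLRAB
assert blr_x(10) == 0xD63F0140
```

The new code drops these literals in favor of disassembly matching, which survives register-allocation or offset churn between kernel builds.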
@@ -1,6 +1,6 @@
 // KernelJBPatchKcall10.swift — JB kernel patch: kcall10 ABI-correct sysent[439] cave
 //
-// Python source: scripts/patchers/kernel_jb_patch_kcall10.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Strategy: Replace SYS_kas_info (sysent[439]) with a cave implementing
 // the kcall10 primitive:

@@ -70,10 +70,10 @@ extension KernelJBPatcher {
             log("  [-] sysent[439] outside file")
             return false
         }
-        let oldSyCallRaw = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: entry439, as: UInt64.self) }
+        let oldSyCallRaw = buffer.readU64(at: entry439)
         let callNext = extractChainNext(oldSyCallRaw)

-        let oldMungeRaw = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: entry439 + 8, as: UInt64.self) }
+        let oldMungeRaw = buffer.readU64(at: entry439 + 8)
         let mungeNext = extractChainNext(oldMungeRaw)
         let mungeDiv = extractChainDiversity(oldMungeRaw)
         let mungeAddrDiv = extractChainAddrDiv(oldMungeRaw)

@@ -129,11 +129,11 @@ extension KernelJBPatcher {
         let sEnd = sStart + Int(seg.fileSize)
         var off = sStart
         while off + Self.sysent_entry_size <= sEnd {
-            let val = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: off, as: UInt64.self) }
+            let val = buffer.readU64(at: off)
             let decoded = decodeChainedPtr(val)
             if decoded == nosysOff {
                 // Confirm: next entry also decodes to a code-range address
-                let val2 = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: off + Self.sysent_entry_size, as: UInt64.self) }
+                let val2 = buffer.readU64(at: off + Self.sysent_entry_size)
                 let dec2 = decodeChainedPtr(val2)
                 let inCode = dec2 > 0 && codeRanges.contains { dec2 >= $0.start && dec2 < $0.end }
                 if inCode {

@@ -156,14 +156,14 @@ extension KernelJBPatcher {
         while base - Self.sysent_entry_size >= segStart {
             guard entriesBack < Self.sysent_max_entries else { break }
             let prev = base - Self.sysent_entry_size
-            let val = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: prev, as: UInt64.self) }
+            let val = buffer.readU64(at: prev)
             let decoded = decodeChainedPtr(val)
             guard decoded > 0 else { break }
             let inCode = codeRanges.contains { decoded >= $0.start && decoded < $0.end }
             guard inCode else { break }
             // Check narg and arg_bytes for sanity
-            let narg: UInt16 = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: prev + 20, as: UInt16.self) }
-            let argBytes: UInt16 = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: prev + 22, as: UInt16.self) }
+            let narg = buffer.data.loadLE(UInt16.self, at: prev + 20)
+            let argBytes = buffer.data.loadLE(UInt16.self, at: prev + 22)
             guard narg <= 12, argBytes <= 96 else { break }
             base = prev
             entriesBack += 1

@@ -186,10 +186,10 @@ extension KernelJBPatcher {
         for idx in 0 ..< Self.sysent_max_entries {
             let entry = sysEntOff + idx * Self.sysent_entry_size
             guard entry + Self.sysent_entry_size <= buffer.count else { break }
-            let curNarg: UInt16 = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: entry + 20, as: UInt16.self) }
-            let curArgBytes: UInt16 = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: entry + 22, as: UInt16.self) }
+            let curNarg = buffer.data.loadLE(UInt16.self, at: entry + 20)
+            let curArgBytes = buffer.data.loadLE(UInt16.self, at: entry + 22)
             guard curNarg == narg, curArgBytes == argBytes else { continue }
-            let rawMunge = buffer.data.withUnsafeBytes { $0.load(fromByteOffset: entry + 8, as: UInt64.self) }
+            let rawMunge = buffer.readU64(at: entry + 8)
             let target = decodeChainedPtr(rawMunge)
             guard target > 0 else { continue }
             candidates[target, default: []].append(entry)
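The kcall10 hunks above repeatedly replace raw `withUnsafeBytes` loads with `readU64`/`loadLE` helpers — explicit little-endian reads at absolute file offsets, which also sidestep Swift's alignment requirements for `load(fromByteOffset:)`. A minimal sketch of that access pattern (the 24-byte entry size is an assumption for illustration; only the `+20`/`+22` narg/arg_bytes offsets come from the diff):

```python
# Stand-ins for BinaryBuffer.readU64 / Data.loadLE: fixed-width little-endian
# reads at absolute offsets, safe regardless of alignment.
import struct

def read_u64(data: bytes, off: int) -> int:
    return struct.unpack_from("<Q", data, off)[0]

def read_u16(data: bytes, off: int) -> int:
    return struct.unpack_from("<H", data, off)[0]

# A fabricated sysent-style record: two 8-byte pointers, padding, then the
# 16-bit narg/arg_bytes fields the sanity check reads at +20 and +22.
entry = struct.pack("<QQIHH", 0x1122334455667788, 0x99AABBCCDDEEFF00, 0, 6, 48)
assert read_u64(entry, 0) == 0x1122334455667788   # sy_call-style pointer
assert read_u16(entry, 20) == 6                   # narg  (<= 12 passes the guard)
assert read_u16(entry, 22) == 48                  # arg_bytes (<= 96 passes)
```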
@@ -1,6 +1,6 @@
 // KernelJBPatchLoadDylinker.swift — JB: bypass load_dylinker policy gate in the dyld path.
 //
-// Python source: scripts/patchers/kernel_jb_patch_load_dylinker.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Reveal: string-anchor "/usr/lib/dyld" → kernel-text function containing the ref →
 // inside that function: BL <check>; CBZ W0, <allow>; MOV W0, #2 (deny path).
@@ -1,6 +1,6 @@
 // KernelJBPatchMacMount.swift — JB kernel patch: MAC mount bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_mac_mount.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.

 import Capstone
 import Foundation
@@ -1,6 +1,6 @@
 // KernelJBPatchNvram.swift — JB kernel patch: NVRAM permission bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_nvram.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.

 import Foundation

@@ -1,6 +1,6 @@
 // KernelJBPatchPortToMap.swift — JB: skip kernel-map panic in _convert_port_to_map_with_flavor.
 //
-// Python source: scripts/patchers/kernel_jb_patch_port_to_map.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Reveal: string-anchor "userspace has control access to a kernel map" →
 // walk backward from ADRP to find CMP + B.cond (conditional branch forward past panic) →
@@ -1,6 +1,6 @@
 // KernelJBPatchPostValidation.swift — JB: additional post-validation cmp w0,w0 bypass.
 //
-// Python source: scripts/patchers/kernel_jb_patch_post_validation.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Reveal: string-anchor "AMFI: code signature validation failed" → caller function →
 // BL targets in AMFI text → callee with `cmp w0,#imm ; b.ne` preceded by a BL.
@@ -1,6 +1,6 @@
 // KernelJBPatchProcPidinfo.swift — JB: NOP the two pid-0 guards in proc_pidinfo.
 //
-// Python source: scripts/patchers/kernel_jb_patch_proc_pidinfo.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Reveal: shared _proc_info switch-table anchor → function prologue (first 0x80 bytes) →
 // precise 4-insn pattern: ldr x0,[x0,#0x18] ; cbz x0,fail ; bl ... ; cbz/cbnz wN,fail.
@@ -1,6 +1,6 @@
 // KernelJBPatchProcSecurity.swift — JB: stub _proc_security_policy with mov x0,#0; ret.
 //
-// Python source: scripts/patchers/kernel_jb_patch_proc_security.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Reveal: find _proc_info by `sub wN,wM,#1 ; cmp wN,#0x21` switch pattern,
 // then identify _proc_security_policy among BL targets called 2+ times,
@@ -1,6 +1,6 @@
 // KernelJBPatchSandboxExtended.swift — JB kernel patch: Extended sandbox hooks bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_sandbox_extended.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Strategy (ops-table retargeting — matches upstream patch_fw.py):
 // 1. Locate mac_policy_conf via the "Seatbelt sandbox policy" and "Sandbox" strings

@@ -173,14 +173,11 @@ extension KernelJBPatcher {
         let sbRange = sandboxTextRange()
         let (sbStart, sbEnd) = (sbRange.start, sbRange.end)

-        let movX0_0: UInt32 = 0xD280_0000 // MOV X0, #0 (MOVZ X0, #0)
-        let retVal: UInt32 = 0xD65F_03C0 // RET
-
         var hits: [Int] = []
         var off = sbStart
         while off < sbEnd - 8 {
-            if buffer.readU32(at: off) == movX0_0,
-               buffer.readU32(at: off + 4) == retVal
+            if buffer.readU32(at: off) == ARM64.movX0_0_U32,
+               buffer.readU32(at: off + 4) == ARM64.retU32
            {
                 hits.append(off)
             }
@@ -1,6 +1,6 @@
 // KernelJBPatchSecureRoot.swift — JB: force SecureRootName policy to return success.
 //
-// Python source: scripts/patchers/kernel_jb_patch_secure_root.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Reveal: find functions referencing both "SecureRootName" and "SecureRoot" strings →
 // locate the final CSEL that selects between wzr (success) and kIOReturnNotPrivileged →
@@ -1,6 +1,6 @@
 // KernelJBPatchSharedRegion.swift — JB kernel patch: Shared region map bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_shared_region.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.

 import Capstone
 import Foundation
@@ -1,6 +1,6 @@
 // KernelJBPatchSpawnPersona.swift — JB kernel patch: Spawn validate persona bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_spawn_persona.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.

 import Capstone
 import Foundation
@@ -1,6 +1,6 @@
 // KernelJBPatchSyscallmask.swift — JB kernel patch: syscallmask C22 apply-to-proc
 //
-// Python source: scripts/patchers/kernel_jb_patch_syscallmask.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Strategy (retargeted C22): Hijack the low-level syscallmask apply wrapper.
 // 1. Replace the pre-setter helper BL with `mov x17, x0` (save RO selector).

@@ -283,31 +283,31 @@ extension KernelJBPatcher {
         var code: [Data] = []

         // 0: cbz x2, #exit (28 instrs * 4 = 0x70 — jump to after add sp)
-        code.append(ARM64.encodeU32(0xB400_0622)) // cbz x2, #+0x6c
+        code.append(ARM64.encodeU32(ARM64.syscallmask_cbzX2_0x6c))
         // 1: sub sp, sp, #0x40
-        code.append(ARM64.encodeU32(0xD101_03FF)) // sub sp, sp, #0x40
+        code.append(ARM64.encodeU32(ARM64.syscallmask_subSP_0x40))
         // 2: stp x19, x20, [sp, #0x10]
-        code.append(ARM64.encodeU32(0xA901_4FF3)) // stp x19, x20, [sp, #0x10]
+        code.append(ARM64.encodeU32(ARM64.syscallmask_stpX19X20_0x10))
         // 3: stp x21, x22, [sp, #0x20]
-        code.append(ARM64.encodeU32(0xA902_57F5)) // stp x21, x22, [sp, #0x20]
+        code.append(ARM64.encodeU32(ARM64.syscallmask_stpX21X22_0x20))
         // 4: stp x29, x30, [sp, #0x30]
-        code.append(ARM64.encodeU32(0xA903_7BFD)) // stp x29, x30, [sp, #0x30]
+        code.append(ARM64.encodeU32(ARM64.syscallmask_stpFP_LR_0x30))
         // 5: mov x19, x0
-        code.append(ARM64.encodeU32(0xAA00_03F3)) // mov x19, x0
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX19_X0))
         // 6: mov x20, x1
-        code.append(ARM64.encodeU32(0xAA01_03F4)) // mov x20, x1
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX20_X1))
         // 7: mov x21, x2
-        code.append(ARM64.encodeU32(0xAA02_03F5)) // mov x21, x2
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX21_X2))
         // 8: mov x22, x3
-        code.append(ARM64.encodeU32(0xAA03_03F6)) // mov x22, x3
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX22_X3))
         // 9: mov x8, #8
-        code.append(ARM64.encodeU32(0xD280_0108)) // movz x8, #8
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX8_8))
         // 10: mov x0, x17
-        code.append(ARM64.encodeU32(0xAA11_03E0)) // mov x0, x17
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX0_X17))
         // 11: mov x1, x21
-        code.append(ARM64.encodeU32(0xAA15_03E1)) // mov x1, x21
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX1_X21))
         // 12: mov x2, #0
-        code.append(ARM64.encodeU32(0xD280_0002)) // movz x2, #0
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX2_0))

         // 13: adr x3, #blobDelta (blob is at caveOff, code is at codeOff)
         let adrOff = codeOff + code.count * 4

@@ -321,13 +321,13 @@ extension KernelJBPatcher {
         code.append(ARM64.encodeU32(adrInsn))

         // 14: udiv x4, x22, x8
-        code.append(ARM64.encodeU32(0x9AC8_0AC4)) // udiv x4, x22, x8
+        code.append(ARM64.encodeU32(ARM64.syscallmask_udivX4_X22_X8))
         // 15: msub x10, x4, x8, x22
-        code.append(ARM64.encodeU32(0x9B08_5C8A)) // msub x10, x4, x8, x22
+        code.append(ARM64.encodeU32(ARM64.syscallmask_msubX10_X4_X8_X22))
         // 16: cbz x10, #8 (skip 2 instrs)
-        code.append(ARM64.encodeU32(0xB400_004A)) // cbz x10, #+8
+        code.append(ARM64.encodeU32(ARM64.syscallmask_cbzX10_8))
         // 17: add x4, x4, #1
-        code.append(ARM64.encodeU32(0x9100_0484)) // add x4, x4, #1
+        code.append(ARM64.encodeU32(ARM64.syscallmask_addX4_X4_1))

         // 18: bl mutatorOff
         let blOff = codeOff + code.count * 4

@@ -335,21 +335,21 @@ extension KernelJBPatcher {
         code.append(blMutator)

         // 19: mov x0, x19
-        code.append(ARM64.encodeU32(0xAA13_03E0)) // mov x0, x19
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX0_X19))
         // 20: mov x1, x20
-        code.append(ARM64.encodeU32(0xAA14_03E1)) // mov x1, x20
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX1_X20))
         // 21: mov x2, x21
-        code.append(ARM64.encodeU32(0xAA15_03E2)) // mov x2, x21
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX2_X21))
         // 22: mov x3, x22
-        code.append(ARM64.encodeU32(0xAA16_03E3)) // mov x3, x22
+        code.append(ARM64.encodeU32(ARM64.syscallmask_movX3_X22))
         // 23: ldp x19, x20, [sp, #0x10]
-        code.append(ARM64.encodeU32(0xA941_4FF3)) // ldp x19, x20, [sp, #0x10]
+        code.append(ARM64.encodeU32(ARM64.syscallmask_ldpX19X20_0x10))
         // 24: ldp x21, x22, [sp, #0x20]
-        code.append(ARM64.encodeU32(0xA942_57F5)) // ldp x21, x22, [sp, #0x20]
+        code.append(ARM64.encodeU32(ARM64.syscallmask_ldpX21X22_0x20))
         // 25: ldp x29, x30, [sp, #0x30]
-        code.append(ARM64.encodeU32(0xA943_7BFD)) // ldp x29, x30, [sp, #0x30]
+        code.append(ARM64.encodeU32(ARM64.syscallmask_ldpFP_LR_0x30))
         // 26: add sp, sp, #0x40
-        code.append(ARM64.encodeU32(0x9101_03FF)) // add sp, sp, #0x40
+        code.append(ARM64.encodeU32(ARM64.syscallmask_addSP_0x40))

         // 27: b setterOff (tail-call)
         let branchBackOff = codeOff + code.count * 4
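The cave listing above moves each hand-assembled word behind a named `ARM64.syscallmask_*` constant. Several of the old literals are MOV (register) instructions, which on A64 are an alias of `ORR Xd, XZR, Xm`; a short Python check (the encoder is illustrative, not repo code) confirms the removed values fall out of that one formula:

```python
# MOV Xd, Xm == ORR Xd, XZR, Xm (shifted-register form, shift #0).
# Base word 0xAA0003E0 already has Rn = XZR (31 << 5 = 0x3E0).

def mov_x(rd: int, rm: int) -> int:
    return 0xAA00_03E0 | (rm << 16) | rd

assert mov_x(19, 0)  == 0xAA0003F3   # mov x19, x0  — cave slot 5
assert mov_x(0, 17)  == 0xAA1103E0   # mov x0, x17  — cave slot 10
assert mov_x(3, 22)  == 0xAA1603E3   # mov x3, x22  — cave slot 22
```

Centralizing these words as named constants keeps the Swift cave emitter and any verification tooling agreeing on a single encoding.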
@@ -1,6 +1,6 @@
 // KernelJBPatchTaskConversion.swift — JB kernel patch: Task conversion eval bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_task_conversion.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.
 //
 // Strategy (fast raw scanner):
 // Locate the unique guard site in _task_conversion_eval_internal that performs:

@@ -24,8 +24,10 @@ extension KernelJBPatcher {
     func patchTaskConversionEvalInternal() -> Bool {
         log("\n[JB] task_conversion_eval_internal: cmp xzr,xzr")

-        guard let codeRange = codeRanges.first else { return false }
-        let (ks, ke) = (codeRange.start, codeRange.end)
+        guard let range = kernTextRange ?? codeRanges.first.map({ ($0.start, $0.end) }) else {
+            return false
+        }
+        let (ks, ke) = range

         let candidates = collectTaskConversionCandidates(start: ks, end: ke)

@@ -50,11 +52,11 @@ extension KernelJBPatcher {
         // CMP Xn, X0 = SUBS XZR, Xn, X0 → bits [31:21]=1110_1011_000, [20:16]=X0=00000,
         // [15:10]=000000, [9:5]=Rn, [4:0]=11111(XZR)
         // Mask covers the fixed opcode and X0 operand; leaves Rn free.
-        let cmpXnX0Mask: UInt32 = 0xFFE0_FC1F
+        let cmpXnX0Mask: UInt32 = 0xFFFF_FC1F
         let cmpXnX0Val: UInt32 = 0xEB00_001F // cmp Xn, X0 — Rn wildcard
 
         // CMP Xn, X1 = SUBS XZR, Xn, X1 → Rm=X1=00001
-        let cmpXnX1Mask: UInt32 = 0xFFE0_FC1F
+        let cmpXnX1Mask: UInt32 = 0xFFFF_FC1F
         let cmpXnX1Val: UInt32 = 0xEB01_001F // cmp Xn, X1 — Rn wildcard
 
         // B.EQ #offset → bits[31:24]=0101_0100, bit[4]=0, bits[3:0]=0000 (EQ cond)
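The mask change above is subtle but load-bearing: the old mask `0xFFE0_FC1F` left the Rm field (bits [20:16]) unconstrained, so *any* `cmp Xn, Xm` matched the "cmp Xn, X0" pattern; `0xFFFF_FC1F` pins Rm while still wildcarding Rn, exactly as the comment promises. A Python sketch (encoder is illustrative) demonstrates the false positive the new mask eliminates:

```python
# CMP Xn, Xm == SUBS XZR, Xn, Xm (shifted-register form, shift #0).
def cmp_xn_xm(rn: int, rm: int) -> int:
    return 0xEB00_001F | (rm << 16) | (rn << 5)

OLD_MASK, NEW_MASK = 0xFFE0_FC1F, 0xFFFF_FC1F
VAL = 0xEB00_001F                       # cmp Xn, X0 — Rn wildcard

assert cmp_xn_xm(9, 0) & NEW_MASK == VAL   # intended match still survives
assert cmp_xn_xm(9, 5) & OLD_MASK == VAL   # old mask: cmp x9, x5 false-positives
assert cmp_xn_xm(9, 5) & NEW_MASK != VAL   # new mask rejects Rm != X0
```

Against a whole kernel text segment, that single unmasked field is the difference between one guard site and a pile of spurious candidates.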
@@ -1,6 +1,6 @@
 // KernelJBPatchTaskForPid.swift — JB kernel patch: task_for_pid bypass
 //
-// Python source: scripts/patchers/kernel_jb_patch_task_for_pid.py
+// Historical note: derived from the legacy Python firmware patcher during the Swift migration.

 import Capstone
 import Foundation
Some files were not shown because too many files have changed in this diff.