Hello,

We are currently fuzzing a specific function within dwm.exe (dwmcore.dll), using a harness implemented as a DLL that is injected into the target process.
Our fuzzing architecture is as follows:
1. A new thread (hereafter referred to as Thread 1) is created inside dwm.exe, and our harness DLL is loaded within it.
2. The harness installs a trampoline hook at the return address of a specific handler function in dwmcore.dll (hereafter referred to as Function A). The trampoline is designed to invoke a Release Hypercall instead of returning normally from Function A. The goal is to let Function A fully process the fuzzed input and then trigger a snapshot restoration via hypercall, enabling the next fuzzing iteration.
3. The harness then issues a specific NT syscall, which queues a message for DWM's internal message handling thread (hereafter referred to as Thread 2).
4. Thread 2 processes the queued message by calling Function A.
5. When Function A completes, the trampoline triggers the Release Hypercall, restoring the VM snapshot and continuing the fuzzing loop (a minimal sketch of this release path follows this list).
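To make the release path concrete, here is a minimal sketch of the function our trampoline jumps to in place of Function A's normal return. It assumes the standard kAFL/Nyx agent hypercall wrapper (kAFL_hypercall() with HYPERCALL_KAFL_RELEASE, as provided by the Nyx agent header, here assumed to be nyx_api.h); hook installation, register preservation, and how the vmcall is actually emitted on MSVC are omitted, and the function name is hypothetical.

```c
#include <stdint.h>
#include "nyx_api.h"   /* kAFL/Nyx agent header: kAFL_hypercall(), HYPERCALL_KAFL_RELEASE */

/*
 * The trampoline installed at Function A's return address jumps here instead
 * of returning to the original caller. Issuing HYPERCALL_KAFL_RELEASE ends the
 * current fuzzing iteration; Nyx then restores the VM snapshot, so execution
 * never actually continues past the hypercall.
 */
void on_function_a_return(void)   /* hypothetical name */
{
    kAFL_hypercall(HYPERCALL_KAFL_RELEASE, 0);
}
```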
Current Issue:
The NT syscall issued by our harness (Thread 1) executes successfully.
However, Thread 2 (the original DWM message handling thread) does not appear to invoke Function A, meaning the trampoline is never reached.
As a result, the Release Hypercall is never triggered and QEMU remains stuck in a halted state. (We've set the timeout to 255 seconds, but the issue persists.)
We are currently considering two possible causes:
1. QEMU-NYX is configured to run with a single core and single thread, which may cause Thread 1 (our harness) to monopolize CPU time, preventing Thread 2 from running.
To address this, we attempted to yield execution using SwitchToThread() and similar mechanisms (see the yield sketch below), but this did not resolve the issue.
2. The submitted IP range for coverage feedback is limited to userland (dwm.exe, dwmcore.dll), and execution transitions into kernel space (win32k.sys) during the input processing path.
Specifically, the call path goes from userland (harness) → kernel (win32k.sys) → back to userland (dwmcore.dll), and we suspect that QEMU-NYX halts execution when it encounters instructions outside the submitted IP range (see the range-submission sketch below).
However, since we are not fully familiar with the internals of QEMU-NYX, this remains a hypothesis.
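Regarding cause 1: this is roughly the kind of yield loop we run in Thread 1 while waiting for Thread 2 to reach the hook (sketch only; g_hook_reached is a hypothetical flag that the trampoline would set).

```c
#include <windows.h>

/* Hypothetical flag, set by the trampoline once Thread 2 reaches Function A's return. */
static volatile LONG g_hook_reached = 0;

/* Repeatedly give up Thread 1's time slice so Thread 2 can run on the single vCPU. */
static BOOL wait_for_hook(DWORD max_iterations)
{
    for (DWORD i = 0; i < max_iterations; i++) {
        if (InterlockedCompareExchange(&g_hook_reached, 0, 0) != 0)
            return TRUE;                /* hook was reached */
        if (!SwitchToThread())          /* no other thread was ready to run */
            Sleep(1);                   /* block briefly so Thread 2 can be scheduled */
    }
    return FALSE;
}
```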
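Regarding cause 2: for context, this is how we understand the userland IP filter ranges are submitted from the agent side. This is a sketch assuming the standard HYPERCALL_KAFL_RANGE_SUBMIT interface and the {start, end, index} buffer layout used in the kAFL agent examples; the module lookup is our own code.

```c
#include <windows.h>
#include <psapi.h>     /* GetModuleInformation(); link with Psapi.lib */
#include <stdint.h>
#include "nyx_api.h"   /* kAFL_hypercall(), HYPERCALL_KAFL_RANGE_SUBMIT */

/* Submit one Intel PT filter range (slot 0..3) covering a loaded module.
 * The {start, end, index} layout follows the kAFL agent examples; the host
 * side may additionally expect page-aligned addresses. */
static void submit_module_range(const char *module_name, uint64_t index)
{
    MODULEINFO mi;
    HMODULE mod = GetModuleHandleA(module_name);

    if (mod == NULL ||
        !GetModuleInformation(GetCurrentProcess(), mod, &mi, sizeof(mi)))
        return;

    uint64_t range[3];
    range[0] = (uint64_t)(uintptr_t)mi.lpBaseOfDll;   /* start of module image */
    range[1] = range[0] + mi.SizeOfImage;             /* end of module image   */
    range[2] = index;                                 /* PT filter slot        */

    kAFL_hypercall(HYPERCALL_KAFL_RANGE_SUBMIT, (uintptr_t)range);
}

/* Usage during harness initialization, e.g.:
 *   submit_module_range("dwmcore.dll", 0);
 *   submit_module_range("dwm.exe", 1);
 */
```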
Additional Observations:
DWM does not start unless the -vga std option is passed to QEMU, so we included it to ensure the rendering device is initialized properly.
We also enabled -display vnc=:1 to observe the fuzzing state via VNC. After our harness is injected and the initial hypercall is issued, the screen appears frozen.
This could indicate that DWM, QEMU, or the VNC server has stalled. The relevant flags are shown below.
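For reference, these are the display-related flags in question (illustrative only; the disk image, the Nyx/kAFL-specific options, and the rest of the command line are omitted here and supplied by the fuzzer frontend in our setup).

```sh
qemu-system-x86_64 -vga std -display vnc=:1 [remaining Nyx/kAFL options...]
```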
Has anyone experienced similar issues, or do you have any insights into what might be going wrong in this setup?
We've attached our harness (DLL) source code (harness.zip) below for reference.
P.S. In the latest version of Windows 11, QEMU-NYX crashes after reaching "Getting devices ready... 100%". We are currently using Windows 11 23H2 (older release) for stability.
We have additionally identified that the DWM composition thread (Thread 2) is blocked inside the FlushAllRenderingTasks() function, specifically waiting on a NotificationEvent that is expected to be signaled when GPU rendering work completes.
However, it appears that under QEMU-NYX this signal is never issued. Our current hypothesis is that internal rendering components (DWM flush, drawing, etc.) are no longer functioning as expected after the hypercall is issued.
As a result, the synchronization object remains unsignaled, causing the thread to hang indefinitely and preventing our fuzzing harness from making progress.
Below is the thread stack dump and block list (UserRequest) captured while fuzzing.