(WIP) feat: More extensive execution of tgpu.fns on the CPU #1467

Draft · wants to merge 11 commits into main

Conversation

@iwoplaza (Collaborator) commented on Jul 9, 2025

No description provided.

OpenCode Assistant and others added 11 commits July 8, 2025 10:04
This plan outlines a focused one-week approach to enabling slots, derived values,
and privateVars to work on the CPU by creating a lightweight ExecutionCtx
abstraction. The key insight is that ResolutionCtx is an implementation detail
of GPU mode; the CPU side only needs simple slot and variable tracking.

Key changes planned:
- Create CpuExecutionCtx for CPU-side execution context
- Remove CPU mode restrictions from variables
- Enable variable assignment in CPU mode
- Update slot/derived access to work with ExecutionCtx
- Maintain full backward compatibility

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
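
The commit above only names the planned CpuExecutionCtx; as a rough sketch of what "simple CPU-side slot and variable tracking" could look like, assuming illustrative names like `CpuExecutionCtx`, `readSlot`, and `readVariable` that are not taken from the PR itself:

```ts
// Illustrative sketch only: a minimal CPU-side execution context with
// simple slot and variable tracking, as described in the plan above.
// The names here are assumptions, not the PR's actual API.

interface SlotLike<T> {
  defaultValue?: T;
}

class CpuExecutionCtx {
  // Slot bindings supplied by the caller.
  private readonly slotBindings = new Map<SlotLike<unknown>, unknown>();
  // Storage for private variables while running a function on the CPU.
  private readonly variables = new Map<object, unknown>();

  bindSlot<T>(slot: SlotLike<T>, value: T): void {
    this.slotBindings.set(slot, value);
  }

  readSlot<T>(slot: SlotLike<T>): T | undefined {
    // Fall back to the slot's default when no explicit binding exists.
    return this.slotBindings.has(slot)
      ? (this.slotBindings.get(slot) as T)
      : slot.defaultValue;
  }

  readVariable<T>(variable: object): T | undefined {
    return this.variables.get(variable) as T | undefined;
  }

  writeVariable<T>(variable: object, value: T): void {
    this.variables.set(variable, value);
  }
}
```
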
Corrected the plan to properly reflect TypeGPU's three-mode execution model:
- WGSL Mode: GPU shader code generation (current ResolutionCtx)
- COMPTIME Mode: Resolution-time computation for derived values
- JS Mode: Runtime JavaScript execution for dual implementations

Key corrections:
- Derived values run at resolution time (COMPTIME), not during CPU simulation
- Variables should work during shader preprocessing, not at runtime
- createDualImpl works in WGSL/JS modes only, undefined in COMPTIME
- Updated RuntimeMode to include COMPTIME alongside WGSL and JS
- Fixed variable access patterns and test scenarios

This aligns with the actual TypeGPU execution model where derived
computations happen during shader preprocessing, not GPU simulation.

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
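
As a simplified illustration of the three-mode split and the createDualImpl rule above (defined in WGSL and JS, undefined in COMPTIME), the sketch below assumes a module-level mode flag; the helper names and shapes are not the library's actual API:

```ts
// The mode names come from the commit message; everything else here is
// a simplified, hypothetical illustration.
type RuntimeMode = 'WGSL' | 'COMPTIME' | 'JS';

let currentMode: RuntimeMode = 'JS';

function runInMode<T>(mode: RuntimeMode, fn: () => T): T {
  const previous = currentMode;
  currentMode = mode;
  try {
    return fn();
  } finally {
    currentMode = previous;
  }
}

// A "dual implementation" has a codegen branch and a JS branch.
// Per the plan, it is defined in WGSL and JS modes and undefined in COMPTIME.
function createDualImpl<T>(impl: { wgsl: () => T; js: () => T }): () => T | undefined {
  return () => {
    if (currentMode === 'WGSL') return impl.wgsl();
    if (currentMode === 'JS') return impl.js();
    return undefined; // COMPTIME: dual implementations are not defined here
  };
}
```
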
Fixed the plan to properly reflect TypeGPU's execution model:

- WGSL Mode: Slots + Variables (shader generation)
- COMPTIME Mode: Slots only (dependency injection for derived values)
- JS Mode: Variables only (runtime state for dual implementations)

Key corrections:
- Variables should NOT be accessible in COMPTIME mode
- COMPTIME is purely for slot-based dependency injection
- Variables are execution constructs, not comptime constructs
- Derived values use slots for parameterization, not variables
- Removed all variable support from ComptimeExecutionCtx
- Added JS mode variable support for dual implementations

This aligns with the principle that slots are a comptime mechanism
for dependency injection, while variables are for actual execution.

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
…n PLAN.md

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
- WGSL → CODEGEN: Better reflects GPU shader code generation purpose
- JS → SIMULATE: Better reflects JavaScript simulation/runtime purpose
- COMPTIME: Remains unchanged as it clearly indicates compile-time evaluation

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
The overview now clearly states that the primary goal is enabling users to
simulate running shaders on the CPU for testing, debugging, and development.
This better aligns with the actual purpose of the ExecutionCtx system.

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
- Slots are available in CODEGEN, COMPTIME, and SIMULATE modes
- SIMULATE mode has both slots and variables for full CPU simulation
- Updated execution model and implementation details accordingly

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
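
Summarizing the availability rules above as an illustrative constant (not code from the PR):

```ts
// Per-mode availability of slots and variables, as described above.
const modeCapabilities = {
  CODEGEN:  { slots: true, variables: true },  // shader generation
  COMPTIME: { slots: true, variables: false }, // derived-value evaluation
  SIMULATE: { slots: true, variables: true },  // full CPU simulation
} as const;
```
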
- SIMULATE mode now has its own SimulationCtx that implements ExecutionCtx
- SimulationCtx handles slot bindings and variable storage for CPU simulation
- Added provideSimulationCtx function for context management
- ResolutionCtx remains unchanged for CODEGEN mode

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
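
One plausible shape for the provideSimulationCtx helper mentioned above, assuming a SimulationCtx that holds slot bindings and variable storage; every detail here is an assumption for illustration:

```ts
// Hypothetical sketch: install a simulation context for the duration of a
// callback and restore the previous one afterwards.
interface SimulationCtx {
  slotBindings: Map<object, unknown>;
  variables: Map<object, unknown>;
}

let activeSimulationCtx: SimulationCtx | undefined;

function provideSimulationCtx<T>(ctx: SimulationCtx, callback: () => T): T {
  const previous = activeSimulationCtx;
  activeSimulationCtx = ctx;
  try {
    return callback();
  } finally {
    // Restore whatever context was active before, so nested
    // simulations do not leak state into each other.
    activeSimulationCtx = previous;
  }
}
```
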
…ionship

- ExecutionCtxImpl used for both COMPTIME and SIMULATE modes
- ResolutionCtx is a subtype of ExecutionCtx (ExecutionCtx is the shared base interface), implemented by ResolutionCtxImpl
- ExecutionCtxImpl can share slot implementation with ResolutionCtxImpl
- Updated function names and references throughout

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
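
One plausible TypeScript reading of that relationship, treating ExecutionCtx as the shared base that ResolutionCtx builds on (member names below are placeholders; only the type names appear in the commit):

```ts
// Shared slot access available in every mode.
interface ExecutionCtx {
  readSlot<T>(slot: object): T | undefined;
}

// CODEGEN-only concerns (WGSL resolution) layered on top of the shared base.
interface ResolutionCtx extends ExecutionCtx {
  resolve(item: unknown): string;
}

// ExecutionCtxImpl (COMPTIME/SIMULATE) would implement ExecutionCtx, while
// ResolutionCtxImpl (CODEGEN) implements ResolutionCtx; the two can share
// their slot-handling logic.
```
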
- Add ExecutionCtx interface for unified slot access across all modes
- Update runtime mode system: CPU→SIMULATE, GPU→CODEGEN, add COMPTIME
- Implement ExecutionCtxImpl for COMPTIME and SIMULATE modes
- Update variable system to support SIMULATE mode for CPU execution
- Update slot and derived value systems to work with ExecutionCtx
- Update createDualImpl to work in CODEGEN/SIMULATE modes only
- Add comprehensive tests for all execution modes
- Enable CPU simulation of GPU shaders for testing and debugging

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
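
As a purely hypothetical example of the kind of test this enables (shown with a vitest-style runner; `simulateOnCpu` and the math inside are invented stand-ins, not the API added by this PR):

```ts
import { describe, expect, it } from 'vitest';

// Invented helper standing in for whatever SIMULATE-mode entry point the
// PR ultimately exposes; a real version would install a simulation context
// before invoking the function's JS branch.
function simulateOnCpu<T>(fn: () => T): T {
  return fn();
}

describe('shader helper (CPU simulation)', () => {
  it('computes the same value on the CPU as the shader would', () => {
    const result = simulateOnCpu(() => {
      // Plain JS math standing in for the function's JS implementation.
      const x = 0.25;
      return x * 2 + 1;
    });

    expect(result).toBeCloseTo(1.5);
  });
});
```
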