Feedbacks on documentation #3

Open: wants to merge 2 commits into base `main`.
docs/user_guides/templates/wgcore/buffers_initialization.mdx (2 additions, 2 deletions)
@@ -34,7 +34,7 @@ If you can’t derive `bytemuck::Pod` for your own type, consider the [solution
it can be passed to `GpuScalar::init`.
- Any type implementing `AsRef<[T]>` (like `Vec<T>`, `&[T]`, or `DVector` from [`nalgebra`](https://nalgebra.rs)) can be
passed to `GpuVector::init`.
- Any matrix type, parametrized by `T`, from the [`nalgebra`](https://nalgebra.rs) crate can
- Any matrix type, parameterized by `T`, from the [`nalgebra`](https://nalgebra.rs) crate can
be passed to `GpuMatrix::init`.
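As a plain-Rust illustration of the `AsRef<[T]>` flexibility described above (the `byte_len` helper below is hypothetical and not part of `wgcore`; it merely mirrors the kind of bound a constructor like `GpuVector::init` accepts):

```rust
// Hypothetical helper with an `AsRef<[T]>` bound, standing in for a
// buffer constructor. It only computes the byte size a buffer would need.
fn byte_len<T, V: AsRef<[T]>>(data: V) -> usize {
    data.as_ref().len() * std::mem::size_of::<T>()
}

fn main() {
    let vec: Vec<f32> = vec![1.0, 2.0, 3.0];
    let slice: &[f32] = &[1.0, 2.0];
    // A `Vec<f32>` (by reference) and a `&[f32]` satisfy the same bound.
    assert_eq!(byte_len::<f32, _>(&vec), 12);
    assert_eq!(byte_len::<f32, _>(slice), 8);
}
```

The same shape of bound is what lets `Vec<T>`, slices, and `nalgebra` vectors all be passed to the one constructor.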

(collapsed Rust code block)
@@ -68,7 +68,7 @@ instead by calling the `::encase` constructor of `GpuScalar/GpuVector/GpuMatrix`
it can be passed to `GpuScalar::encase`.
- Any type implementing `AsRef<[T]>` (like `Vec<T>`, `&[T]`, or `DVector` from [`nalgebra`](https://nalgebra.rs)) can be
passed to `GpuVector::encase`.
- Any matrix type, parametrized by `T`, from the [`nalgebra`](https://nalgebra.rs) crate can
- Any matrix type, parameterized by `T`, from the [`nalgebra`](https://nalgebra.rs) crate can
be passed to `GpuMatrix::encase`.


docs/user_guides/templates/wgcore/buffers_readback.mdx (1 addition, 1 deletion)
@@ -15,7 +15,7 @@ The code described in this section can be run from the
:::

After our buffers have been [initialized](./buffers_initialization.mdx) and our compute kernels have run, you might need
to read the results back to RAM for further processing on the CPU side. Reading the content of a GPU buffers require a
to read the results back to RAM for further processing on the CPU side. Reading the content of a GPU buffer requires a
few steps:
1. Be sure that the buffer you want to read from was initialized with `BufferUsages::COPY_SRC`:
(collapsed Rust code block)
docs/user_guides/templates/wgcore/hot_reloading.mdx (3 additions, 3 deletions)
@@ -45,16 +45,16 @@ Say you are working on an application’s code (like, for example, one of the ex
[wgebra](https://github.com/dimforge/wgmath)) using a [cargo patch](https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html).
Using the code snippet from the previous section, you can leverage hot-reloading on **any** shader, even the ones
from the local dependencies. The `derive(Shader)` macros will automatically figure out the absolute path of all the
shaders at compile-time so the can be watched with `Shader::watch_sources`.
shaders at compile-time so they can be watched with `Shader::watch_sources`.

:::danger
This automatic detection of shader paths might not work properly if you run your application from a directory that is
different from the root of the Rust workspace it was built from. This is due to some limitations in the Rust APIs
involved, which will hopefully be addressed in future versions of the compiler.
:::

This won’t work for shaders of a dependency that is not available locally on your machine, since there is no way to
actual shader file that could be modified (since they are embedded in the library directly). In order to make it work
This won’t work for shaders of a dependency that is not available locally on your machine, since there is no way that
the actual shader file could be modified (since they are embedded in the library directly). In order to make it work
for these shaders, you can [overwrite them](./overwriting_shaders.mdx) with a local version of the shader by specifying
their path with `Shader::set_wgsl_path`. After their path is overwritten, `Shader::watch_sources` needs to be called
and hot-reloading will work.
docs/user_guides/templates/wgcore/shaders_composition.mdx (3 additions, 3 deletions)
@@ -14,7 +14,7 @@ The code described in this section can be run from the
[end of this page](#complete-example).
:::

The main feature of `wgcore` is the provide the ability share and compose WGSL shaders easily across crates. Roughly
The main feature of `wgcore` is to provide the ability to share and compose WGSL shaders easily across crates. Roughly
speaking, `wgcore` exposes a trait and derive macro for generating the boilerplate needed by
[`naga-oil`](https://crates.io/crates/naga_oil) for shader composition.

@@ -26,7 +26,7 @@ since it’s a regular Rust `struct`, it can be exported and used across crates

Say you have two `.wgsl` shaders: `dependency.wgsl`, exposing a WGSL structure and a mathematical function, and
`kernel.wgsl`, defining a compute pipeline entrypoint that calls functions from `dependency.wgsl`. The dependency shader
contains a nana-oil `#define_import_path` statement indicating it is intended to be imported from other shaders. The
contains a naga-oil `#define_import_path` statement indicating it is intended to be imported from other shaders. The
kernel shader contains an `#import` statement indicating that it requires symbols exported by the dependency shader:

(collapsed `<Tabs>` block)
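The pattern described above can be sketched as follows (the module path and function names are hypothetical, not the ones from the collapsed example, and the `#import` line uses naga-oil's item-import form):

```wgsl
// dependency.wgsl: exposes a function behind a naga-oil import path.
#define_import_path my_crate::dependency

fn times_two(x: f32) -> f32 {
    return x * 2.0;
}
```

```wgsl
// kernel.wgsl: imports the function exported by dependency.wgsl.
#import my_crate::dependency::times_two

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let doubled = times_two(f32(gid.x));
}
```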
@@ -148,7 +148,7 @@ Your output will differ from this one due to various factors like different UUID
## Running the kernel

In addition to initializing the compute pipeline, `wgcore` exports some convenient utilities for actually running it.
First, the input buffers can be initialized easily using the `GpuScalar`, `GpuVector`, `GpuMatrix` wrappers (the all
First, the input buffers can be initialized easily using the `GpuScalar`, `GpuVector`, `GpuMatrix` wrappers (they all
initialize and contain a regular wgpu `Buffer`).

(collapsed Rust code block)
docs/user_guides/templates/wgcore/timestamp_queries.mdx (1 addition, 1 deletion)
@@ -72,7 +72,7 @@ let timestamps_read = timestamps.wait_for_results_ms(gpu.device(), gpu.queue());

Each compute pass requires **2 timestamps** for measuring its runtime: one for when the pass _starts_ and one for when it
_ends_. The actual runtime of the compute pass is the the difference between the two.
_ends_. The actual runtime of the compute pass is the difference between the two.

Note that there is currently no way to know which timestamp is related to which compute pass unless you know exactly in
which order all the calls to `queue.compute_pass` happened: their timestamps will be in the same order.
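Assuming a start/end timestamp pair per pass, laid out in submission order, pairing the readback into per-pass runtimes is simple arithmetic (a sketch with made-up numbers, not real `wgcore` output):

```rust
fn main() {
    // Timestamps in submission order, two per pass:
    // [start0, end0, start1, end1]. Values are milliseconds
    // (hypothetical, as if returned by a timestamp readback).
    let timestamps_ms = [0.0_f64, 1.5, 2.0, 4.5];
    let runtimes_ms: Vec<f64> = timestamps_ms
        .chunks_exact(2)
        .map(|pair| pair[1] - pair[0])
        .collect();
    // Pass 0 ran for 1.5 ms, pass 1 for 2.5 ms.
    assert_eq!(runtimes_ms, vec![1.5, 2.5]);
}
```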