
How to solve the problem of having 20M+ points #8

Open
meidachen opened this issue Apr 28, 2024 · 2 comments

@meidachen

Thank you for the great work. I'm trying your method to compress my large-scale scene, and everything works great until I try to visualize it using the viewer. It seems there are too many points to be rendered with the current implementation. Specifically, the error/limitation is:
Caused by:
In a ComputePass
note: encoder = render command encoder
In a dispatch command, indirect:false
note: compute pipeline = preprocess pipeline
Each current dispatch group size dimension ([153754, 1, 1]) must be less or equal to 65535

This originates from render.rs, line 409:
let wgs_x = (pc.num_points() as f32 / 256.0).ceil() as u32;
pass.dispatch_workgroups(wgs_x, 1, 1);
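(Sanity check: 153,754 workgroups × 256 threads per workgroup comes out to roughly 39.4 million points, well past wgpu's default ceiling of 65,535 workgroups per dispatch dimension.)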

Is there any workaround for this to handle more points in the viewer?

Thank you in advance for the help!

@KeKsBoTer
Owner

Hello,

Thanks for your interest!
I hope this fixes your issue:

You have to increase the limit for max_compute_workgroups_per_dimension.
How high you can set it depends on the limit of your GPU driver.

For the renderer you can edit the limits here:

required_limits: wgpu::Limits {
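For illustration, a minimal sketch of what requesting a higher limit at device creation could look like (field names are from wgpu; the value you can actually request is capped by whatever adapter.limits() reports):

let adapter_limits = adapter.limits();
let (device, queue) = adapter
    .request_device(
        &wgpu::DeviceDescriptor {
            required_limits: wgpu::Limits {
                // ask for the adapter's full per-dimension workgroup limit
                max_compute_workgroups_per_dimension: adapter_limits
                    .max_compute_workgroups_per_dimension,
                ..wgpu::Limits::default()
            },
            ..Default::default()
        },
        None,
    )
    .await
    .expect("failed to create device");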

If this does not solve the problem, one would need to invoke the shader multiple times (which would require some rework of the Rust and shader code).
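One alternative rework (sketched here, not part of the repo) keeps a single dispatch but spreads the workgroups over two dimensions, so that no single dimension exceeds the limit:

let max_dim = device.limits().max_compute_workgroups_per_dimension; // 65_535 by default
let total_wgs = (pc.num_points() as u32).div_ceil(256);
// split ceil(num_points / 256) workgroups across x and y
let wgs_x = total_wgs.min(max_dim);
let wgs_y = total_wgs.div_ceil(max_dim);
pass.dispatch_workgroups(wgs_x, wgs_y, 1);

The shader would then rebuild the flat index from the 2D workgroup id, e.g. idx = (workgroup_id.y * num_workgroups.x + workgroup_id.x) * 256 + local_invocation_index, and guard it with an explicit bounds check against num_points, since wgs_x * wgs_y can overshoot the exact workgroup count.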

@meidachen
Author

@KeKsBoTer, thanks for your response. It seems that max_compute_workgroups_per_dimension is already at the maximum by default. What would you suggest looking into? Does this mean the data itself also needs to be chunked, so that the shader can process each chunk and the results are eventually merged?
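For what it's worth, the point data itself would not necessarily need to be chunked: the points can stay in one buffer, and only the dispatch is split, with each invocation receiving a base point index. A rough sketch, assuming a push constant carrying the offset is added to the preprocess shader (the current code has no such parameter, and push constants require wgpu::Features::PUSH_CONSTANTS plus a matching push_constant_ranges entry in the pipeline layout):

let max_dim = device.limits().max_compute_workgroups_per_dimension;
let points_per_dispatch = max_dim * 256; // 65_535 * 256 ≈ 16.8M points per dispatch
let num_points = pc.num_points() as u32;
let mut base = 0u32;
while base < num_points {
    let remaining = num_points - base;
    let wgs_x = remaining.min(points_per_dispatch).div_ceil(256);
    // hand the shader the chunk's starting point index; inside the shader
    // the point index becomes base + global_invocation_id.x, again with a
    // bounds check against num_points
    pass.set_push_constants(0, bytemuck::bytes_of(&base));
    pass.dispatch_workgroups(wgs_x, 1, 1);
    base += points_per_dispatch;
}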
