pure python js_webgpu backend using pyodide #753
Conversation
---
There were many headaches around the type conversion which aren't well documented... but I got to the cube in the end. For example `self._internal.getMappedRange(offset=js_offset, size=data.nbytes)` vs `self._internal.getMappedRange(0, size)`, and the error you get is about … (video: browser_cube.mp4)

I will hopefully find some more time this coming week to continue and maybe get some more interesting examples to run (pygfx? fastplotlib?).
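The keyword-vs-positional difference above can be illustrated with a small pure-Python mock (the names `js_call` and `get_mapped_range` are mine, not Pyodide's API): as far as I understand, Pyodide bundles Python keyword arguments into a single trailing JS object when calling a JsProxy, while `getMappedRange(offset, size)` expects two plain numbers.

```python
def js_call(js_func, *args, **kwargs):
    """Simplified stand-in for how (I believe) Pyodide calls a JsProxy:
    Python keyword arguments are bundled into one trailing JS object."""
    if kwargs:
        args = args + (dict(kwargs),)
    return js_func(*args)

def get_mapped_range(offset=0, size=0):
    """Stand-in for WebGPU's getMappedRange(offset, size): wants numbers."""
    if not all(isinstance(a, (int, float)) for a in (offset, size)):
        raise TypeError("getMappedRange expected numbers")
    return (offset, size)

# The positional form matches the JS signature:
print(js_call(get_mapped_range, 0, 16))  # -> (0, 16)

# The keyword form sends a single dict as the first argument and fails:
try:
    js_call(get_mapped_range, offset=0, size=16)
except TypeError as err:
    print("keyword form failed:", err)
```

This matches the symptom above: the keyword call that looks natural in Python produces a confusing error on the JS side.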
---
Super exciting to get imgui working with a few tweaks! (video: imgui_example.mp4)

cc @pthom: thanks a lot for your article I read a few weeks ago, that motivated me to give it a try here!
---
@Vipitis: many thanks for the info, that looks very promising. Please keep me in the loop!
---
I was expecting codegen to go a long way here. The codegen knows when the arguments of a function are actually wrapped in a dict in the IDL, so we can generate the code to reconstruct the dict before passing it to the JS WebGPU API call.
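As a hedged sketch of what such generated code could look like (purely illustrative, not the actual wgpu-py codegen; `create_buffer` and the helper names are assumptions): rebuild the single descriptor dict the IDL expects, mapping Python snake_case keyword arguments to the camelCase keys JS wants.

```python
def to_camel(name: str) -> str:
    # "mapped_at_creation" -> "mappedAtCreation"
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def build_descriptor(**kwargs) -> dict:
    # Reconstruct the single descriptor object the WebGPU IDL wraps
    # the arguments in, before handing it to the JS API call.
    return {to_camel(k): v for k, v in kwargs.items()}

descriptor = build_descriptor(size=16, usage=8, mapped_at_creation=True)
# descriptor == {"size": 16, "usage": 8, "mappedAtCreation": True}
```

A generated method would then just pass `descriptor` through `to_js` and on to the JS call.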
---
I also think that codegen can do a lot, I just need to give it a try. The general pattern would be something like:

```python
def some_function(self, *args, **kwargs):
    js_args = to_js(args, eager_converter=js_accessor)
    js_kwargs = to_js(kwargs, eager_converter=js_accessor, dict_converter=Object.from_entries)
    self._internal.someFunction(*js_args, js_kwargs)
```
---
Whatever way this goes, what I care most about is that when the IDL changes for a certain method, it will place some FIXME comment in the code for the JS backend, so that we won't forget to update that method there.
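One hedged way such a guard could work (purely illustrative, not how the existing codegen does it; `sig_hash` and `check_method` are made-up names): stamp each generated method with a short hash of its IDL signature, and inject a FIXME comment whenever the stored hash no longer matches the current IDL.

```python
import hashlib

def sig_hash(idl_signature: str) -> str:
    # Short, stable fingerprint of the IDL signature for this method.
    return hashlib.sha1(idl_signature.encode()).hexdigest()[:8]

def check_method(source_line: str, idl_signature: str) -> str:
    # If the line doesn't carry the current hash, the IDL changed since
    # the method was last reviewed: flag it so it isn't forgotten.
    expected = sig_hash(idl_signature)
    if expected not in source_line:
        return source_line + f"  # FIXME: IDL changed (expected idl:{expected})"
    return source_line
```

On an IDL update, rerunning codegen would then leave FIXME markers exactly on the methods whose signatures drifted.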
---
I feel like I have finally moved past all the headaches and found a "general" approach to most functions. I switched to the pyodide dev branch, as the upcoming 0.29 release changes how dictionaries are converted... which has been a ton of pain, and the upcoming version seems to work much better. I couldn't find any release timeline, so it might still be months until there is a release...

Also got started with a codegen prototype, and I am feeling confident this is largely going to work; depends on how much time I find in the coming week. There was also some weirdness with CSS-scaled canvas for click events and resizing in the imgui example, so the rendercanvas PR likely needs some more fixes. I will see if I can find time for that too.
@Vipitis I had an idea to create a very simple 3D scene/animation editor which has its user-facing UI running in the browser on the client side. But once the scene is done, it's submitted to the backend, where it's rendered out as a video, headless. I want to share code between the two as much as possible, so the output on both ends stays closely matched.
Sounds similar to the hybrid rendering ideas right now: you render on the server and send back pixels as a video/image stream. This sorta already works. However, pyodide support will let you render in the user's browser directly, and it should even work in tandem with the JS ecosystem if the UI doesn't need to be portable.
Yes, I had this idea in mind, but I dismissed it because I was afraid it would work very poorly in terms of reactivity. Maybe I'm wrong, and it's not a big issue nowadays to manage latency as such. Thank you for the suggestion!
---
I am moving this out of draft, as it has been working for a while. I finally cleaned up the really messy … To try this right now, see the docs preview of this branch, or better yet pygfx, where the vast majority of gallery examples already work. I run this on Chrome on Windows; Firefox might not work due to JSPI, and Linux might not work due to WebGPU support (but I haven't tried either).

I am unhappy with the current codegen approach, where it duplicates code from one file into another. Most of the simple functions are generated, and I think all of the async methods can be generated as well. Only the API diff needs manual implementation, as well as a few constructors (which still have open TODOs and missing/buggy behaviour). I think using the …

Finally, I would like to mention that this whole approach might not be needed at all. (I will be on a trip the next week or two, so I can read and respond but won't be able to commit any code myself.)
---
Thanks for all the work so far! I will try to find time to have a proper look at this.
I wonder what the size of the wasm binary for wgpu-native would be. Because a pretty significant advantage of the JS approach could be that the wgpu-py wasm wheel can be really small, which helps reduce load times. That said, piggybacking on wgpu-core for wasm support does sound appealing. |
I honestly don't know. Theoretically it should be less than the Python mapping to JS, but that means we still have to include our mapping. If we really want to optimize the library size for the Pyodide use case, there are even bundlers available to minify the code. Modern browsers seem to do a good job at caching the wheel, but the wheel build can definitely shed a few more files to become smaller.
Potentially remaining tasks
- [ ] `jswriter.py` as a `Patcher`, and add the comment injector

---

new weekend, new project...
I think there are two options to get wgpu-py into the browser: compile wgpu-native to wasm and package that, or call the JS backend directly. I ran into compilation errors with the Rust code, so I gave up there... but:
I basically autocompleted my way through errors to see what kind of patterns are needed... everything around moving data requires more effort. While pyodide provides some functions, they feel buggy and unpredictable.
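One example of why moving data needs care (mocked here in pure Python, since the real `getMappedRange()`/`unmap()` only exist in the browser; `FakeMappedBuffer` is entirely made up): a view into a mapped range is only valid while the buffer stays mapped, so the bytes have to be copied out before unmapping.

```python
class FakeMappedBuffer:
    """Toy stand-in for a mapped GPU buffer, to show the copy-before-unmap rule."""
    def __init__(self, size: int):
        self._data = bytearray(size)
        self._mapped = True

    def get_mapped_range(self) -> memoryview:
        # Only valid while the buffer is mapped.
        assert self._mapped, "buffer is not mapped"
        return memoryview(self._data)

    def unmap(self) -> None:
        self._mapped = False

buf = FakeMappedBuffer(4)
view = buf.get_mapped_range()
view[:] = b"\x01\x02\x03\x04"
data = bytes(view)  # copy out while still mapped
buf.unmap()
# 'data' stays valid; 'view' must not be used after unmap.
```

In the real backend the same pattern applies, with the extra wrinkle that the view lives on the JS side and has to cross the Pyodide boundary.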
It is likely possible to codegen the vast majority of this and then fix up all the API diff - might get to that over the next few days.
structs have potential to make this easier.
I changed some of the examples to auto layout, since I couldn't get `create_bind_group_layout()` to work - and you don't need it with auto layout.

works with pygfx/rendercanvas#115

couldn't get the cube example to work just yet, but triangle does - so the potential is there
more to come