Upgrade to parallelproj 2.0 and use cuvec #1689
Conversation
@KrisThielemans: in case you are wondering why the tof_sino_fwd / back projections are slightly different compared to libparallelproj v1.x: that is expected. In the new version I make sure that the sum over TOF bins of a TOF fwd projection is the same as the non-TOF fwd projection (if the number of TOF bins is big enough), even with truncated TOF kernels.
Currently just getting zero in both fwd and backprojection...
|
The code is currently confusing as I tried to make minimal changes while taking the pre-processor symbol into account.
force-pushed from de95e7f to 791904e
At runtime, you can check whether libparallelproj was built with CUDA, and there is an equivalent check at CMake config time.
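A sketch of the CMake-config-time check. The imported target names `parallelproj::parallelproj_c` and `parallelproj::parallelproj_cuda` are assumptions here; consult the installed `parallelprojConfig.cmake` for the actual names.

```cmake
find_package(parallelproj REQUIRED)

# The CUDA target is only exported when libparallelproj was built with CUDA
# (assumed target names; verify against the installed config file).
if(TARGET parallelproj::parallelproj_cuda)
  message(STATUS "parallelproj was built with CUDA support")
else()
  message(STATUS "parallelproj was built without CUDA support")
endif()
```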
Sure, I meant that the old CUDA code is still present in the file, but it's intentionally never used as the preprocessor symbol isn't set.
force-pushed from 791904e to b357df6
The macOS failure is due to the unrelated #1691.
force-pushed from ba7b8cb to 858ddbf
Current status:
force-pushed from 6ada6f5 to a4005a5
Could it be related to device syncing?
Rebuilding the Docker image from scratch without cache resolved the issue, so it must have been an out-of-date dependency!
OK, that is clear.
OK. What about the following then: change the signature to `void array_to_host(Array<num_dimensions, elemT>& stir_array, const CuVec<elemT>& dev_data, bool sync = true)` and drop the explicit syncs that @denproc inserted. Good idea?
Dear Professor @KrisThielemans,
I've put the parallelproj 1.* compatibility in and tested it on my Linux system without CUDA. It'd be great if someone could test it with CUDA.
Just tested it - all fine.
|
Dear Professor @KrisThielemans,
I am not entirely sure whether the preferred route here is to open separate follow-up PRs. THANK YOU!!
I'll do this myself, as we need some documentation etc
I've created KrisThielemans#11
I think this is already merged on master? #1694
Are you referring to Dimitra-Kyriakopoulou@3b1ea8f? This is an interesting approach. I'd like to get @casperdcl's opinion on that. However, let's not do that here, but in #1679. See below.
Thanks! I generally prefer PRs with a small aim. Some of the above have nothing to do with this PR, really, but only with the overall project, so I'd keep those for separate PRs. It also keeps the discussion focused.
Could anyone check this, please?
Dear Professor @KrisThielemans,
However for safety, I think it would still be good if someone else could also check, because I already made errors today ...
Indeed! I am really sorry about that ...
Yes. I am really sorry for all the carelessness, and thank you so much for your reply!!
@casperdcl I'm afraid I don't have the time to fix the CMake for using
This should be fine now, aside from the release notes. I'd appreciate a full check and review from a few of you :-)
Tested it and reviewed the code. All good from my side.
[ci skip]
I'm not going to change history on this and merge. Thanks a lot all for your contributions! |
See https://github.com/KUL-recon-lab/libparallelproj
Currently this PR is on top of #1676, while at least initially there is no good reason for this. Therefore, look only at the last commit(s) and ignore the `test_Array` failure. Sorry! WARNING: commits here will be rebased/squashed etc. The PR will probably also be split into 2 or 3 other PRs.
@gschramm @markus-jehl feel free to comment :-)