New implementation of `_exp!` based on generated code (#64)
Conversation
Haha, this is probably good for precompiles. @ChrisRackauckas will like this :-p
src/exp2.jl (Outdated)

```julia
    return cache[k-1];
end

function _exp2!(A; caches=nothing, do_balancing = A isa StridedMatrix)
```
I assume this is an internal function that shouldn't be called except for testing?
I see `_exp2!` as equivalent to `_exp!`, or a replacement for it. When we are done comparing `_exp!` and `_exp2!`, we could consider renaming `_exp2!` to `_exp!`. The function `_exp!` is currently not exported, although it is documented on the main page.
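To make the comparison concrete, here is a hedged sketch of how the two internal functions could be exercised side by side (an illustration only: it assumes both overwrite `A` with `exp(A)` and that `_exp2!` is available on the PR branch; the random test matrix is invented):

```julia
using ExponentialUtilities, LinearAlgebra

# Hedged sketch: compare the existing and the generated-code implementation.
# Both are internal (not exported), so they are qualified with the module name.
A = randn(6, 6)
E1 = ExponentialUtilities._exp!(copy(A))   # existing non-allocating implementation
E2 = ExponentialUtilities._exp2!(copy(A))  # new generated-code implementation
@assert E1 ≈ E2                            # both implement the Higham 2008 algorithm
```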
Any specific thoughts about using generated code? The functions

It's fine, use the generated code.
Test failure looks real.
I believe the CI is complaining about something related to

Thoughts about replacing
Check whether the generated kernels work for CUDA (probably not without changing the LAPACK calls?). The main use cases are the exponential integrators: https://github.com/SciML/ExponentialUtilities.jl/blob/5460b95594a9e23f418710e568db3a92d7bcfd26/src/krylov_phiv.jl. It might be interesting to run the exprk benchmark in https://benchmarks.sciml.ai/html/MOLPDE/allen_cahn_fdm_wpd.html and see how that's affected.
Currently, CI is complaining about

If we disable balancing, there is only one LAPACK call: the linear solve (https://github.com/jarlebring/ExponentialUtilities.jl/blob/src/exp_generated/exp_5.jl#L77). What should the be in a CUDA setting?
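For context, a hedged sketch of what a backend-generic version of that single solve could look like. The function name `solve_in_place!` is invented; the sketch assumes the solve is written with `lu!`/`ldiv!`, which dispatch to LAPACK on strided CPU matrices, and whether equivalent methods exist for `CuArray` is exactly the open question above:

```julia
using LinearAlgebra

# Hedged sketch: the one remaining LAPACK call, written generically.
# On CPU matrices lu! lowers to LAPACK's getrf!; a GPU path would need
# corresponding factorization/solve methods (e.g. via CUSOLVER).
function solve_in_place!(P, Q)
    F = lu!(Q)       # in-place LU factorization of Q
    ldiv!(F, P)      # overwrite P with Q \ P, no extra allocation
    return P
end

Q = [4.0 1.0; 2.0 3.0]
P = [1.0 0.0; 0.0 1.0]
solve_in_place!(P, Q)    # P now holds inv(Q)
```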
Okay.
Rename the other to

And do a test on the exponential integrators, and this is good to go.

You mean
If we keep both, and see the exponential integrators as the main application, I think we need the feature to specify which of the two non-allocating exp implementations should be used, e.g. here: ExponentialUtilities.jl/src/krylov_phiv.jl, lines 85 to 90 at 5460b95.
I think such features should only be added if they add value.
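The suggested feature could be sketched roughly as follows (a hypothetical illustration: the keyword name `exp_method!`, the wrapper `step_matrix!`, and the stand-in `naive_exp!` are all invented, not the package's API):

```julia
using LinearAlgebra

# Hypothetical sketch: let callers choose which non-allocating exp
# implementation an integrator step uses, via a keyword argument.
naive_exp!(A) = copyto!(A, exp(A))   # stand-in for _exp! / _exp2!

function step_matrix!(A; exp_method! = naive_exp!)
    exp_method!(A)   # overwrite A with exp(A) using the chosen kernel
    return A
end

A = [0.0 1.0; 0.0 0.0]
step_matrix!(A)      # exp of this nilpotent matrix is [1 1; 0 1]
```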
While we are messing with this code anyway:
I think we mainly want to keep it around to potentially upstream it.
Alright. Fine with me. Changed to

But aren't they already tested, like in the Phi test?
```julia
_exp(A)
_exp!(A)
```
Oh, I just noticed: the README should get an update too. Follow-up PR?
Do you mean documentation of the kwargs in the README.md?
That no longer describes the method of `_exp!`, and `_baseexp!` is not documented, right?
Or is the new one based on the 2008 paper?
They are both implementations of the Higham 2008 paper.
Ahh okay, so all that's needed is a baseexp section then.
This is a new implementation of the non-allocating matrix exponential `_exp!`. This implementation is better in terms of efficiency: it fixes #63, and it also makes more use of dot fusion. The implementation is based on generated code. It is generated using the package https://github.com/matrixfunctions/GraphMatFun.jl, and the files `exp_XX.jl` in `exp_generated` are produced by the script https://github.com/jarlebring/ExponentialUtilities.jl/blob/master/src/exp_generated/generate_exp_code.jl.
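To illustrate the flavor of such generated kernels, here is a toy sketch (not the actual GraphMatFun.jl output): straight-line code with a preallocated work matrix, `mul!` for matrix products, and fused `@.` broadcasts for linear combinations, shown here for a degree-2 Taylor approximation of `exp(A)`:

```julia
using LinearAlgebra

# Toy sketch of the generated-code style: in-place products plus fused
# broadcasts, so the only allocation is the (optionally user-supplied) cache.
function exp_taylor2!(A, A2 = similar(A))
    mul!(A2, A, A)          # A2 = A^2 without allocating
    @. A = A + 0.5 * A2     # fused broadcast: A ← A + A²/2 in one pass
    for i in axes(A, 1)
        A[i, i] += 1        # add the identity term: exp(A) ≈ I + A + A²/2
    end
    return A
end

A = [0.0 0.1; 0.0 0.0]
exp_taylor2!(A)             # A is nilpotent, so the result is exactly I + A
```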