Unifying integral kernel NO architectures #239
Conversation
dario-coscia commented on Feb 9, 2024
- Unify the integral kernel NO architectures under a common base class, BaseNO (possibly with a better name); see the sketch below
- Implement FNO on top of BaseNO
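A minimal sketch of what the unification could look like, assuming a lifting / integral-kernel / projection composition; the class and argument names here are placeholders for illustration, not the final PINA API:

```python
# Sketch only: placeholder names, not the final PINA API.
# The idea is a single base module that composes lifting, a stack of
# integral-kernel layers, and projection; FNO would then only have to
# supply Fourier integral-kernel layers.
import torch.nn as nn


class BaseNOSketch(nn.Module):
    """Lifting -> stacked integral kernels -> projection."""

    def __init__(self, lifting, integral_kernels, projection):
        super().__init__()
        self.lifting = lifting
        self.integral_kernels = nn.ModuleList(integral_kernels)
        self.projection = projection

    def forward(self, x):
        x = self.lifting(x)
        for kernel in self.integral_kernels:
            x = kernel(x)
        return self.projection(x)
```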
Everything looks fine to me, please just fix the Codacy issues (https://app.codacy.com/gh/mathLab/PINA/pullRequest?prid=13602959).
pina/model/fno.py (outdated diff)
:param input_numb_fields: Number of input fields.
:type input_numb_fields: int
:param output_numb_fields: Number of output fields.
:type output_numb_fields: int
:param n_modes: Number of modes.
:type n_modes: int or list[int]
:param dimensions: Number of dimensions (1, 2, or 3).
:type dimensions: int
:param padding: Padding size, defaults to 8.
:type padding: int
:param padding_type: Type of padding, defaults to "constant".
:type padding_type: str
:param inner_size: Inner size, defaults to 20.
:type inner_size: int
:param n_layers: Number of layers, defaults to 2.
:type n_layers: int
:param func: Activation function, defaults to nn.Tanh.
:type func: torch.nn.Module
:param layers: List of layer sizes, defaults to None.
:type layers: list[int]
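For context, a hedged usage sketch of the constructor documented above; the import path, the parameter names, and the expected tensor layout are assumptions drawn from this docstring, not a verified API:

```python
# Hedged sketch: parameter names follow the docstring above; the import
# path and the (batch, points..., fields) tensor layout are assumptions.
import torch
from pina.model import FNO

model = FNO(
    input_numb_fields=1,       # one scalar input field
    output_numb_fields=1,      # one scalar output field
    n_modes=8,                 # Fourier modes kept per dimension
    dimensions=2,              # 2D problem
    padding=8,
    padding_type="constant",
    inner_size=20,
    n_layers=2,
)

x = torch.rand(10, 32, 32, 1)  # assumed layout: (batch, x, y, fields)
out = model(x)                 # expected to keep the spatial shape
```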
For primitive types, use the compact form.
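For example, the compact Sphinx form folds the type into the `:param` field instead of using a separate `:type` line (hypothetical excerpt, not the actual PINA source):

```python
def __init__(self, input_numb_fields, output_numb_fields, n_modes):
    """
    Compact form: the type and the parameter name share the :param field.

    :param int input_numb_fields: Number of input fields.
    :param int output_numb_fields: Number of output fields.
    :param int or list[int] n_modes: Number of modes.
    """
```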
done!
I tried to address as many as I could. The remaining ones are either not solvable (too many arguments in the input) or ones where I do not know how to avoid triggering the Codacy error (for example, the one on the forward pass...). Also, I think the test failing on Windows is due to some packages not being installed correctly on Windows...
Yes, just rebase to resolve the conflicts after #243 and we're done!
… it better)
* Implement FNO based on BaseNO
* modify doc for FNO and adding for FourierIntegralKernel, NeuralKernelOperator
* adding tests
Should be done!
* Unify integral kernel NO architectures with NeuralKernelOperator
* Implement FNO based on NeuralKernelOperator
* modify doc for FNO and add for FourierIntegralKernel, NeuralKernelOperator
* adding tests
---------
Co-authored-by: Dario Coscia <[email protected]>
Co-authored-by: Dario Coscia <[email protected]>