Conversation

@Riccardo231 commented Feb 8, 2025

Starting to create locally connected 1D layers. I was able to create a prototype of this, with the same syntax you used in the conv2d layer.

EDIT: I added two new files for generalized reshaping; I still get an error in nf_layer_constructors_submodule. I have to work on this, and I think you'll have to help me here because I am stuck.

This is what I get:

/home/r_orsi/neural-fortran/src/nf/nf_layer_constructors_submodule.f90:164:50:

164 | module function reshape_generalized(output_shape) result(res)
| 1
Error: Rank mismatch in argument 'output_shape' (1/0) at (1)
make[2]: *** [CMakeFiles/neural-fortran.dir/build.make:309: CMakeFiles/neural-fortran.dir/src/nf/nf_layer_constructors_submodule.f90.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:160: CMakeFiles/neural-fortran.dir/all] Error 2
make: *** [Makefile:146: all] Error 2

I know I added a lot of stuff; you don't have to look at all of it. Just look at the reshape_generalized files.

You have to help me here. I also added handling of scalar values in the input, but I keep getting the same error, and I don't understand what it is. If I change the name of the function, everything compiles fine, but then when trying reshape_generalized I get other shape errors. I truly don't understand what is going on.

I wanted to change reshape 3d directly into a generalized reshape, but that function was used in far too many places and I didn't want to take a bigger step than what I already did.

Don't look at the implementation of locally connected 1d yet; I think it is still not what is described in Keras.
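
For context on the error above: gfortran's "Rank mismatch in argument 'output_shape' (1/0)" typically means the dummy argument is declared with rank 1 in one place and rank 0 (a scalar) in another, most often when the submodule implementation doesn't match the interface declared in the parent module. A minimal sketch of the matching pair (hypothetical module names, not the actual neural-fortran code):

```fortran
module demo_m
  implicit none
  interface
    module function reshape_generalized(output_shape) result(res)
      integer, intent(in) :: output_shape(:)   ! rank-1 dummy
      integer :: res
    end function reshape_generalized
  end interface
end module demo_m

submodule (demo_m) demo_s
contains
  module function reshape_generalized(output_shape) result(res)
    ! This declaration must match the interface exactly: writing
    ! "integer, intent(in) :: output_shape" (a scalar) here instead
    ! triggers "Rank mismatch in argument 'output_shape' (1/0)".
    integer, intent(in) :: output_shape(:)
    integer :: res
    res = product(output_shape)
  end function reshape_generalized
end submodule demo_s
```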

@OneAdder
Collaborator

Wow, looks smart!

Figuring out your issue might not be super easy, given that everything here is about the shape.
I see that you're using the usual reshape in CNN 1D for MNIST. Is that intentional?
If this is not the problem, then I can only suggest compiling the reshape_generalized layer on its own and testing it for rank issues with output_shape...

@Riccardo231
Author

Hello, it is intentional. I still haven't gotten to the cnn_mnist_1d part because I was stuck compiling my reshape_generalized function.

@OneAdder
Collaborator

Hm. The only suggestion I have is to add some tests for the layers and figure out the issue that way. I usually prefer a test-driven approach when working with manual memory allocation.
Hopefully the issue can get resolved this way. So far I can only see that, for some reason, your changes break memory allocation for all layers. Writing tests for your layers can help identify where the issue lies.

@OneAdder
Collaborator

I tried compiling your code, but didn't manage to pinpoint the exact culprit. So, I can only suggest a way I would go about figuring it out 🙃

@Riccardo231
Author

> I tried compiling your code, but didn't manage to pinpoint the exact culprit. So, I can only suggest a way I would go about figuring it out 🙃

I patched the bug I was talking about. Now I need to implement the reshape_generalized function into the network, but I haven't written anything yet.

@OneAdder
Collaborator

The newer version does compile, but tests fail with Fortran runtime error: Dimension 1 of array 'res' has extent 0 instead of <some_random_unallocated_value>
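
For context, a runtime error of that shape usually means the function result's extent was computed from a value that was never set (or from an unallocated component), so `res` ends up with extent 0 while the assignment expects something larger. A minimal sketch of the failure mode (hypothetical names, not the actual neural-fortran code; compile with `gfortran -fcheck=bounds` to surface it):

```fortran
module extent_demo
  implicit none
  type :: reshape_layer
    ! Must be allocated and filled before forward() is called.
    integer, allocatable :: output_shape(:)
  end type reshape_layer
contains
  function forward(self, x) result(res)
    type(reshape_layer), intent(in) :: self
    real, intent(in) :: x(:)
    ! If output_shape holds a stale zero (e.g. allocated but never
    ! filled), product() is 0 and res is declared with extent 0; the
    ! assignment below then fails under bounds checking with an error
    ! like "Dimension 1 of array 'res' has extent 0 instead of ...".
    real :: res(product(self % output_shape))
    res = x
  end function forward
end module extent_demo
```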

@Riccardo231
Author

Thank you, I haven't tried the tests yet.

@milancurcic
Owner

Sorry that I only see this now and somehow missed the email notifications (too many).

Thank you for this addition; definitely needed and in scope.

I only need to figure out how to change this PR so that it's against modern-fortran/neural-fortran main, instead of my own fork. Perhaps the easiest approach could be, when this PR is ready, to merge it into milancurcic/neural-fortran main, and then I would open a new PR to merge from milancurcic/neural-fortran main into modern-fortran/neural-fortran main.

@Riccardo231
Author

Going to apply these changes today, but I still need to add a lot.
