Conversation

@fmassa (Contributor) commented Mar 14, 2016

Currently, every forward/backward creates a new output tensor (which shares the same storage). This PR makes it possible to keep track of the module's output/gradInput, as the tensor pointers won't change at every forward/backward pass.

I think that this behaviour should be enforced for all nn modules. If you agree, I can send a PR fixing this behaviour in the other modules. A minimal sketch of the pattern is shown below.
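To make the pattern concrete, here is a minimal sketch, assuming the nn.View:updateOutput signature and a self.size LongStorage field as in nn.View; the merged commit may differ in details such as batch-dimension handling:

```lua
-- Sketch: reuse one persistent output tensor instead of rebinding the field.
-- Before: self.output = input:view(self.size)  -- fresh tensor every pass
function View:updateOutput(input)
   -- lazily allocate a persistent tensor of the input's type
   self.output = self.output or input.new()
   -- set() rebinds self.output to share input's storage with the viewed
   -- sizes/strides, so the tensor pointer stays stable across passes
   self.output:set(input:view(self.size))
   return self.output
end
```

Downstream code can then cache a reference to module.output once and still observe fresh values after every forward pass.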

Allows keeping track of its output, as the tensors won't change at every forward/backward pass
soumith added a commit that referenced this pull request Mar 14, 2016
View reuses the same internal tensors
@soumith soumith merged commit 6dbdac2 into torch:master Mar 14, 2016
@soumith (Member) commented Mar 14, 2016

Totally agree. Have been writing new modules with this in mind.

@fmassa fmassa deleted the view_fix branch March 14, 2016 15:02
@fmassa (Contributor, Author) commented Mar 14, 2016

Great! :) I'll do a pass over the nn modules and prepare a PR later today (or when I find some time).

@nagadomi commented
This commit breaks older saved models: they do not have self.output.

In 14 module of nn.Sequential:
/home/nagadomi/torch/install/share/lua/5.1/nn/View.lua:80: attempt to index field 'output' (a nil value)

@fmassa (Contributor, Author) commented Mar 15, 2016

@nagadomi sorry for breaking your loading. I hadn't thought about the case where one saves a model before ever running a forward/backward pass. This should be fixed by #713. In the meantime, a workaround sketch for already-saved models is below.
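Until the fix lands, a possible workaround is to patch a loaded old model before its first forward. This is a sketch: findModules is part of nn.Module, but the checkpoint path 'old_model.t7' and the choice of initialization are assumptions for illustration.

```lua
require 'nn'

-- Sketch: add the missing self.output field to nn.View modules inside a
-- checkpoint saved before this change ('old_model.t7' is a placeholder).
local model = torch.load('old_model.t7')
for _, m in ipairs(model:findModules('nn.View')) do
   m.output = m.output or torch.Tensor()
end
-- forward now works; View:updateOutput can reuse m.output in place
```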
