Implement vectorized adstock transformations #114
Conversation
pymmmc/transformers.py
Outdated
@ricardoV94, is there another way to implement this shape broadcasting?
- If I'm not explicit about it, broadcasting of symbolic `x` and `w` fails and takes the shape of `x` as the correct one (maybe symbolic isn't the correct term; I mean tensors that have `None` in their shape).
- When I am explicit but the arrays fail to broadcast at run time, I get segfaults (probably due to the C backend trying to read or write memory locations that aren't part of the array).
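To make the broadcasting being discussed concrete, here is a minimal NumPy sketch of a vectorized geometric adstock that relies on exactly this kind of shape broadcasting between the lagged data and the decay weights. The function name, signature, and shapes are illustrative, not the PR's actual implementation:

```python
import numpy as np

def geometric_adstock(x, alpha, l_max=8):
    """Vectorized geometric adstock over the first (time) axis.

    x     : array of shape (time,) or (time, regions)
    alpha : scalar decay rate, or e.g. shape (regions,) for one rate
            per region; it must broadcast against x's trailing axes
    """
    alpha = np.asarray(alpha, dtype=float)
    x = np.asarray(x, dtype=float)
    # Lag weights alpha**0, alpha**1, ..., alpha**(l_max-1); reshaping the
    # lag axis to the front lets a (regions,) alpha broadcast to (l_max, regions)
    lags = np.arange(l_max).reshape((l_max,) + (1,) * alpha.ndim)
    w = alpha ** lags
    # Zero-pad the start of the series so every time step has l_max lags
    pad = np.zeros((l_max - 1,) + x.shape[1:])
    padded = np.concatenate([pad, x], axis=0)
    # windows[t, l] == x[t - l]; shape (time, l_max) + x.shape[1:]
    windows = np.stack(
        [padded[l_max - 1 - l : padded.shape[0] - l] for l in range(l_max)],
        axis=1,
    )
    # Broadcast the weights over the lag axis and sum the lags out
    return (windows * w).sum(axis=1)
```

A single impulse decays geometrically: `geometric_adstock([1., 0., 0., 0.], 0.5, l_max=3)` gives `[1., 0.5, 0.25, 0.]`, and a `(time, regions)` input with a `(regions,)` alpha broadcasts to a per-region result of the same shape.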
From a closer look at the source I think that there should be a couple of assert Ops there.
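For illustration, a small plain-Python sketch (not PyTensor code) of the kind of runtime guard such an assert Op could perform: fail with a clear error before mismatched runtime shapes ever reach compiled code. The helper name is hypothetical; `np.broadcast_shapes` requires NumPy >= 1.20:

```python
import numpy as np

def check_runtime_broadcast(*shapes):
    """Hypothetical sketch of the check an assert Op could run:
    raise a clear error instead of segfaulting when the runtime
    shapes cannot be broadcast together."""
    try:
        return np.broadcast_shapes(*shapes)
    except ValueError as e:
        raise AssertionError(
            f"Runtime shapes {shapes} cannot be broadcast together: {e}"
        ) from e
```

Compatible shapes pass through (`(52, 3)` and `(3,)` give `(52, 3)`); incompatible ones raise an `AssertionError` rather than reaching the C backend.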
Could I recommend:
Feature request: at the moment there's the assumption that our data is of a particular shape. Would it be possible to have the option (or an alternative function) that allows you to have different priors over predictors, applied across all regions? In this example, that would be data of size
This is already supported, just pass the parameters with shape
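For what it's worth, a small NumPy sketch of the broadcasting behaviour being described; the shapes here (52 weeks, 3 regions) are illustrative, not from the PR:

```python
import numpy as np

x = np.ones((52, 3))                    # e.g. 52 weeks of data for 3 regions

shared = 0.7                            # one parameter shared by all regions
per_region = np.array([0.5, 0.7, 0.9])  # shape (3,): one parameter per region

# Both broadcast against the trailing (regions) axis of x, so the same
# transformation code supports shared and per-region priors:
assert (x * shared).shape == (52, 3)
assert (x * per_region).shape == (52, 3)
```

With a per-region parameter, each column of the result is scaled by its own value, which is exactly the "different priors per region" behaviour requested above.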
Force-pushed from eea28d4 to 3b68971
Force-pushed from 3b68971 to 1573905
@ricardoV94, I'd like to ask for your input on the
Force-pushed from 1573905 to eb4c204
I opened pymc-devs/pytensor#152, related to the problem I found with
Force-pushed from eb4c204 to 9643df7
Pinging @ricardoV94 to have a look. I updated the
@lucianopaz sorry for the delay. I think pymc-devs/pytensor#175 is a better solution
I don’t agree. I think that PR addresses a separate and valid issue: |
This seems like an odd place to reimplement a utility like this. Right now the broadcastable flag is ignored by design all over PyTensor. Wouldn't that be the place to start fixing things?
Hey! Any blockers for this one? Anything I could do to support?
Would be good to see if we can remove the custom
Closed in favor of #221
Closes #22