Conversation

@dlfivefifty
Contributor

@dlfivefifty dlfivefifty commented Apr 1, 2021

Fixes #40277

@KristofferC KristofferC added backport 1.6 Change should be backported to release-1.6 compiler:inference Type inference labels Apr 1, 2021
@thchr
Contributor

thchr commented Apr 1, 2021

This may be a stupid question, but why does sum(map(length, V)) lead to successful inference here while sum(length, V) doesn't (which I thought would be more idiomatic)? Same question, essentially, for the original mapreduce(length, +, V)?

I realize V is a tuple, which I guess is a salient point, but I had imagined either of those variations would "work" the same way (e.g., they have the same performance and essentially the same native code in isolation; but I can vaguely appreciate that this in principle is separate from how they might interact with inference).
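One way to compare the two formulations is to ask inference directly what each returns for a heterogeneous tuple of vectors. A sketch only; the exact results depend on the Julia version, since this very PR changes inference behavior for the mapreduce form:

```julia
# Heterogeneous tuple of vectors, as in the vcat case under discussion.
V = ([1, 2], [3.0, 4.0, 5.0])

# Inferred return type of the map-then-sum form:
map_form = Base.return_types(v -> sum(map(length, v)), (typeof(V),))

# Inferred return type of the mapreduce form:
reduce_form = Base.return_types(v -> mapreduce(length, +, v), (typeof(V),))

# On affected 1.6-era versions the mapreduce form could widen to `Any`
# for tuples, while the map-then-sum form stayed concrete (`Int`).
println(map_form)
println(reduce_form)
```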

@JeffBezanson
Member

Not a stupid question at all; they should be the same. Maybe due to #35800.

@dlfivefifty
Contributor Author

I've had to work around quite a few type-inference issues and there are a number of rules of thumb I've picked up:

  1. map(f, v) is more reliable than broadcast(f, v)
  2. sum(map(f, v)) is more reliable than mapreduce(f, +, v)
  3. Sometimes it helps to write out a map or mapreduce explicitly: i.e. instead of mapreduce(f, +, v) one could write a function mapplus(v...) a la
mapplus() = 0
mapplus(x, y...) = f(x) + mapplus(y...)
@KristofferC
Member

I added an extra test based on a case reported in that issue.

AbstractVecOrTuple{T} = Union{AbstractVector{<:T}, Tuple{Vararg{T}}}

_typed_vcat_similar(V, ::Type{T}, n) where T = similar(V[1], T, n)
_typed_vcat(::Type{T}, V::AbstractVecOrTuple{AbstractVector}) where T =
Member

Can we make this change just for V::Tuple and keep using mapreduce for V::AbstractVector, since in that case, it is probably beneficial to avoid the extra allocation?

Contributor Author

Since vcat itself allocates, I don't think this will make any noticeable difference, but it's easy to add if there's a good argument for why it's still needed.
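The split suggested above could be sketched as two methods dispatching on the container type (a hypothetical illustration; the names here are not the actual Base code):

```julia
# Use the allocation-free map-then-sum only for tuples, where `map`
# returns another tuple, and keep mapreduce for AbstractVector inputs
# to avoid building an intermediate array of lengths.
_total_length(V::Tuple) = sum(map(length, V))
_total_length(V::AbstractVector) = mapreduce(length, +, V)

_total_length(([1, 2], [3.0]))  # tuple path
_total_length([[1, 2], [3]])    # vector path
```

Both paths compute the same total; the tuple method is the one whose inference this PR's change targets.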

@vtjnash vtjnash merged commit 03abc45 into JuliaLang:master Apr 3, 2021
KristofferC pushed a commit that referenced this pull request Apr 4, 2021
@KristofferC KristofferC mentioned this pull request Apr 4, 2021
KristofferC pushed a commit that referenced this pull request Apr 4, 2021
KristofferC pushed a commit that referenced this pull request Apr 4, 2021
@KristofferC KristofferC removed the backport 1.6 Change should be backported to release-1.6 label May 4, 2021
ElOceanografo pushed a commit to ElOceanografo/julia that referenced this pull request May 4, 2021
antoine-levitt pushed a commit to antoine-levitt/julia that referenced this pull request May 9, 2021
johanmon pushed a commit to johanmon/julia that referenced this pull request Jul 5, 2021
staticfloat pushed a commit that referenced this pull request Dec 23, 2022

Development

Successfully merging this pull request may close these issues.

reduce type inference on 1.6

6 participants