Reducing an Array of Float using scala.math.max


I'm confused by the following behavior: why does reducing an Array of Int work with math.max directly, while an Array of Float requires a wrapped function? I have a memory that this wasn't an issue in 2.9, but I'm not certain of that.

$ scala -version
Scala code runner version 2.10.2 -- Copyright 2002-2013, LAMP/EPFL

$ scala

scala> import scala.math._

scala> Array(1, 2, 4).reduce(max)
res47: Int = 4

scala> Array(1f, 3f, 4f).reduce(max)
<console>:12: error: type mismatch;
 found   : (Int, Int) => Int
 required: (AnyVal, AnyVal) => AnyVal
              Array(1f, 3f, 4f).reduce(max)
                                       ^

scala> def fmax(a: Float, b: Float) = max(a, b)
fmax: (a: Float, b: Float)Float

scala> Array(1f, 3f, 4f).reduce(fmax)
res45: Float = 4.0

Update: this works:

scala> Array(1f, 2f, 3f).reduce{(x, y) => math.max(x, y)}
res2: Float = 3.0

So why can't reduce(math.max) be used as a shorthand?

The first thing to note is that math.max is overloaded, and if the compiler has no hint about the expected argument types, it just picks one of the overloads (I'm not clear yet on which rules govern how the overload is picked, but it will become clear before the end of this post).

Apparently it favors the overload that takes Int parameters over the others. This can be seen in the REPL:

scala> math.max _
res6: (Int, Int) => Int = <function2>

That method is the most specific one, because the first of the following expressions compiles (by virtue of numeric widening conversions) and the second does not:

scala> (math.max: (Float, Float) => Float)(1, 2)
res0: Float = 2.0

scala> (math.max: (Int, Int) => Int)(1f, 2f)
<console>:8: error: type mismatch;
 found   : Float(1.0)
 required: Int
              (math.max: (Int, Int) => Int)(1f, 2f)
                                            ^

The specificity test is whether one function applies to the parameter types of the other, and that test includes conversions.

Now, the question is: why can't the compiler infer the correct expected type? It certainly knows that the type of Array(1f, 3f, 4f) is Array[Float].

We can get a clue if we replace reduce with reduceLeft: then it compiles fine.
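A quick check of that claim, wrapped into a runnable sketch (the object name and result-type ascription are my additions):

```scala
import scala.math.max

object ReduceLeftDemo extends App {
  // reduceLeft's op has type (B, A) => B, so its second parameter is
  // pinned to the element type Float; overload resolution can then
  // pick max(Float, Float) without any extra hints.
  val r: Float = Array(1f, 3f, 4f).reduceLeft(max)
  println(r) // prints 4.0
}
```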

So surely this has to do with a difference between the signatures of reduceLeft and reduce. We can reproduce the error with the following code snippet:

case class MyCollection[A]() {
  def reduce[B >: A](op: (B, B) => B): B = ???
  def reduceLeft[B >: A](op: (B, A) => B): B = ???
}

MyCollection[Float]().reduce(max)     // fails to compile
MyCollection[Float]().reduceLeft(max) // compiles fine

The signatures are subtly different.

In reduceLeft the second argument is forced to be A (the collection's element type), so type inference is trivial: if A == Float (which the compiler knows), then the compiler knows the only valid overload of max is the one that takes a Float as its second argument. The compiler finds exactly one (max(Float, Float)), and it just so happens that the other constraint (that B >: A) is trivially satisfied (as B == A == Float for this overload).

This is different with reduce: both the first and second arguments can be any (same) super-type of A (that is, of Float in our specific case). This is a much more lax constraint, and while it could be argued that in this specific case the compiler could see that there is only one possibility, the compiler is not smart enough here. Whether the compiler is supposed to be able to handle this case (meaning that this would be an inference bug) or not, I must say I don't know. Type inference is a tricky business in Scala, and as far as I know the spec is intentionally vague about what can be inferred or not.
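One way to confirm this reading in practice: if we pin down B ourselves with an explicit type argument, the lax constraint disappears and overload resolution succeeds. A minimal sketch (the explicit reduce[Float] call is my addition, not from the original question):

```scala
import scala.math.max

object ExplicitTypeArg extends App {
  // With reduce[Float], the expected type of op is (Float, Float) => Float,
  // so the compiler can select the Float overload of max before eta-expansion.
  val r = Array(1f, 3f, 4f).reduce[Float](max)
  println(r) // prints 4.0
}
```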

Since there are useful applications such as:

scala> Array(1f, 2f, 3f).reduce[Any](_.toString + "," + _.toString)
res3: Any = 1.0,2.0,3.0

trying overload resolution against every possible substitution of the type parameter would be expensive, and it would change the result depending on which expected type you wind up with; or would it have to issue an ambiguity error?

Using -Xlog-implicits -Yinfer-debug shows the difference between reduce(math.max), where overload resolution happens first, and the version where the type parameter is solved first:

scala> Array(1f, 2f, 3f).reduce(math.max(_, _))

[solve types] solving for A1 in ?A1
inferExprInstance {
  tree      scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 2.0, 3.0)).reduce[A1]
  tree.tpe  (op: (A1, A1) => A1)A1
  tparams   type A1
  pt        ?
  targs     Float
  tvars     =?Float
}
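To sum up the practical side: any hint that fixes the parameter types before overload resolution makes the short form work. A couple of equivalents of the wrapped-function workaround, sketched as a small program (the object name is mine):

```scala
import scala.math.max

object MaxHints extends App {
  val xs = Array(1f, 3f, 4f)

  // Type-ascribed placeholders: the ascriptions select the Float
  // overload of max before eta-expansion.
  val a = xs.reduce(max(_: Float, _: Float))

  // An explicit lambda, as in the question's update: the argument
  // types are inferred from the collection first.
  val b = xs.reduce((x, y) => max(x, y))

  println((a, b)) // prints (4.0,4.0)
}
```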
