Musings on Reduction

In one man’s humble opinion, reduction (⌿) is the Queen of Operators.

Each (¨) comes a close second, but doesn’t get the cigar because each can be written in terms of reduction.

Two special cases are of interest: reduction along axes of length 1 (or reduction of a scalar) and reduction along axes of length 0.

With a length-1 axis (or scalar), the operand function is not applied (+⌿'A' → 'A'). This can be useful as an implicit no-op – see the DFS video on YouTube.
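For example (a quick session sketch of my own):

      +⌿'A'                 ⍝ + is never applied, so no DOMAIN ERROR on the character
A
      ÷⌿1 1⍴5               ⍝ likewise for a length-1 axis of a matrix
5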

With a length-0 axis, a primitive operand returns its right identity item – but only if one is defined (⌊⌿⍬). Otherwise: DOMAIN ERROR.
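For instance (another session sketch of my own, contrasting the identity items of + and × with the absence of one for catenation):

      +⌿⍬                   ⍝ identity item of + is 0
0
      ×⌿⍬                   ⍝ identity item of × is 1
1
      ,⌿⍬                   ⍝ catenation has no identity item
DOMAIN ERROR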

Another way to think about the 0-length axis case is that a right identity item (if there is one) is catenated to the argument prior to the reduction. Functional Programming languages tend to define reduction in this way by supplying an explicit initial value (ival) to the reduction:

    fold fn ival [] = ival
    fold fn ival (x:xs) = fn x (fold fn ival xs)

We can write such a variant of reduction in APL, supplying the initial value as right operand ⍵⍵:

      fold ← {⍺⍺⌿⍵⍪⍵⍵}        ⍝ right operand ⍵⍵ is initial value

      × fold 1 ⊢2 3 4         ⍝ same as regular ×⌿
24

      {⍺×⍵}fold 1 ⊢2 3 4      ⍝ non-primitive operand function
24

      {⍺×⍵}fold 1 ⊢⍬          ⍝ initial value returned for empty argument
1

Whilst it doesn’t provide the no-op trick for length-1 axes, fold gives us better control for null cases than does primitive reduction, which relies on the single prototypical item of its argument array:

      ⊢mat ← 2 3 ∘.+ 0(0 0)
┌─┬───┐
│2│2 2│
├─┼───┤
│3│3 3│
└─┴───┘

Notice the discontinuity in the depth of the result with regular +⌿ as the number of rows reaches 0:

      +⌿ 2↑mat
┌─┬───┐
│5│5 5│
└─┴───┘

      +⌿ 1↑mat
┌─┬───┐
│2│2 2│
└─┴───┘

      +⌿ 0↑mat              ⍝ Eh?
0 0

Supplying the variant with a prototypical row produces a more uniform convergence:

      +fold 0(0 0) ⊢2↑mat
┌─┬───┐
│5│5 5│
└─┴───┘

      +fold 0(0 0) ⊢1↑mat
┌─┬───┐
│2│2 2│
└─┴───┘

      +fold 0(0 0) ⊢0↑mat   ⍝ Ah!
┌─┬───┐
│0│0 0│
└─┴───┘

A similar discontinuity can be seen even for axes of length 1, with non-scalar primitive operand functions:

      ⊢mat ← 3 3⍴⍳9
1 2 3
4 5 6
7 8 9

Now:

      ,⌿ 3↑mat              ⍝ join reduction
┌─────┬─────┬─────┐
│1 4 7│2 5 8│3 6 9│
└─────┴─────┴─────┘

      ,⌿ 2↑mat
┌───┬───┬───┐
│1 4│2 5│3 6│
└───┴───┴───┘

      ,⌿ 1↑mat              ⍝ Tsk!
1 2 3

      ,⌿ 0↑mat              ⍝ Bah!
DOMAIN ERROR

But:

      ,fold(⊂⍬) ⊢3↑mat
┌─────┬─────┬─────┐
│1 4 7│2 5 8│3 6 9│
└─────┴─────┴─────┘

      ,fold(⊂⍬) ⊢2↑mat
┌───┬───┬───┐
│1 4│2 5│3 6│
└───┴───┴───┘

      ,fold(⊂⍬) ⊢1↑mat      ⍝ Ooh!
┌─┬─┬─┐
│1│2│3│
└─┴─┴─┘

      ,fold(⊂⍬) ⊢0↑mat      ⍝ Aah!
┌┬┬┐
││││
└┴┴┘

Although we have no specific plans to do so, it is conceivable that this definition of fold could be introduced as a variant of primitive reduction:

        nums ← ⍠(⊂⍬)        ⍝ possible variant for numeric reduction

      ,⌿nums 1↑mat          ⍝ Ooh!
┌─┬─┬─┐
│1│2│3│
└─┴─┴─┘
      ,⌿nums 0↑mat          ⍝ Aah!
┌┬┬┐
││││
└┴┴┘
      ,/nums 3 1↑mat        ⍝ Mmm! reduction along last axis
┌─┬─┬─┐
│1│4│7│
└─┴─┴─┘

Notice that the left and right arguments of a reduction’s operand function need not be of the same kind. Using an informal type notation:

      ⌿ :: (⍺ ∇ ⍵ → ⍵) ∇∇ [⍺]⍪⍵ → ⊂⍵

which, given an argument of uniform kind, collapses to:

      ⌿ :: (⍺ ∇ ⍺ → ⍺) ∇∇ [⍺] → ⊂⍺

I hope to say more about this style of polymorphic type notation in a future posting. In the meantime, the significant point is only that, in the general case, the operand function is of “kind” (⍺ ∇ ⍵ → ⍵), which means that the kind of its left argument may differ from that of its right argument and result. See more discussion on this idea in the notes for foldl in dfns.dws.
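To make the point concrete, here is an operand whose left-argument kind differs from that of its right argument and result, used with the fold variant defined above (my own illustration; sumlen is not part of the original post):

      sumlen ← {(≢⍺)+⍵}     ⍝ ⍺: a character vector; ⍵ and result: a running total
      sumlen fold 0 ⊢'one' 'two' 'three'
11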

Solving the 2014 APL Problem Solving Competition – it’s as easy as 1 1 2 3…

The winners of the 2014 APL Problem Solving Competition were recognized at the Dyalog ’14 user meeting and had a chance to share their competition experiences. Emil Bremer Orloff won the student competition and received $2500 USD and an expenses-paid trip to Dyalog ’14, while Iryna Pashenkovska took first place among the non-student entries and received a complimentary registration to Dyalog ’14. Our congratulations to them for their fine efforts! This post is the first of a series where we will examine some of the problems selected for this year’s competition. The problem we’ll discuss first is Problem 3 from Phase I dealing with the Fibonacci sequence.

Problem 3 – Tell a Fib

Write a dfn that takes an integer right argument and returns that number of terms in the Fibonacci sequence. Test cases:

      {your_solution} 10
 1 1 2 3 5 8 13 21 34 55
      {your_solution} 1
 1
      {your_solution} 0   ⍝ should return an empty vector

Essentially, you start with the sequence 1 1 and concatenate the sum of the last 2 numbers to the end and repeat until the sequence has the correct number of terms. In Python, a solution might look something like this:

def fib(n): # return the first n elements of the Fibonacci sequence
    result = []
    a, b = 0, 1
    while 0 < n:
        result.append(b)
        a, b = b, a+b
        n -= 1
    return result

You can write nearly the same code in APL:

r←fibLooping n;a;b
  r←⍬
  (a b)←0 1
:While 0<n
  r,←b
  (a b)←b,a+b
  n-←1
:EndWhile
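A quick check (mine, not part of the original) that the looping version produces the expected sequence:

      fibLooping 10
1 1 2 3 5 8 13 21 34 55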

While it’s possible to write APL code that looks like Python (or many other languages), one of the judging criteria for the competition is the effective use of APL syntax – in this case, by avoiding the explicit loop. Two ways to do this are 1) using recursion and 2) using the power operator (⍣).

Recursive Solution

      fibRecursive←{⍵<3:⍵⍴1 ⋄ {⍵,+/¯2↑⍵}∇⍵-1}

The neat thing about recursion is that the function keeps calling itself with a “simpler” argument until it reaches a base case and then passes the results back up the stack. Here the base case occurs when the argument (⍵) is less than 3 and the function returns ⍵⍴1 – an empty vector, 1, or 1 1 for ⍵ values of 0, 1 and 2 respectively. It’s easy to illustrate the recursive process by adding some code (⎕←) to display information at each level of recursion.

      fibRecursive←{⍵<3:⎕←⍵⍴1 ⋄ {⎕←⍵,+/¯2↑⍵}∇⎕←⍵-1}
      fibRecursive 10
9
8
7
6
5
4
3
2
1 1
1 1 2
1 1 2 3
1 1 2 3 5
1 1 2 3 5 8
1 1 2 3 5 8 13
1 1 2 3 5 8 13 21
1 1 2 3 5 8 13 21 34
1 1 2 3 5 8 13 21 34 55

The recursive solution above is termed “stack recursive” in that it creates a new level on the function call stack for each recursive call. We can modify the code slightly to implement a “tail recursive” solution which Dyalog can detect and optimize by avoiding creating those additional function call stack levels. You can see the effect of this as each level computes its result immediately:

      fibTail←{⍺←0 1 ⋄ ⍵=0:⎕←1↓¯1↓⍺ ⋄ (⎕←⍺,+/¯2↑⍺)∇ ⎕←⍵-1}
      fibTail 10
9
0 1 1
8
0 1 1 2
7
0 1 1 2 3
6
0 1 1 2 3 5
5
0 1 1 2 3 5 8
4
0 1 1 2 3 5 8 13
3
0 1 1 2 3 5 8 13 21
2
0 1 1 2 3 5 8 13 21 34
1
0 1 1 2 3 5 8 13 21 34 55
0
0 1 1 2 3 5 8 13 21 34 55 89
1 1 2 3 5 8 13 21 34 55

Power Operator Solution

      fibPower←{⍵⍴({⍵,+/¯2↑⍵}⍣(0⌈⍵-1))1}

The power operator is defined as {R}←{X}(f⍣g)Y. When the right operand g is a numeric integer scalar, in this case (0⌈⍵-1), the power operator applies its left operand function f, in this case {⍵,+/¯2↑⍵}, cumulatively g times to its argument Y, which here is 1. Similar to the recursive solution, we can add ⎕← to see what happens at each application:

      fibPower←{⍵⍴({⎕←⍵,+/¯2↑⍵}⍣(0⌈⍵-1))1}
      fibPower 10
1 1
1 1 2
1 1 2 3
1 1 2 3 5
1 1 2 3 5 8
1 1 2 3 5 8 13
1 1 2 3 5 8 13 21
1 1 2 3 5 8 13 21 34
1 1 2 3 5 8 13 21 34 55
1 1 2 3 5 8 13 21 34 55
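The edge cases from the problem statement also come out right (a quick check of my own; fibPower 0 returns an empty vector, which displays as a blank line):

      fibPower 1
1
      fibPower 0            ⍝ empty vector, as required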

In discussing this blog post with my fellow Dyalogers, Roger Hui showed me a rather neat construct:

      fibRoger←{1∧+∘÷\⍵⍴1}     ⍝ Roger's original version
      fibBrian←{1∧÷(+∘÷)\⍵⍴1}  ⍝ tweaked to make result match contest examples
      fibRoger 10
1 2 3 5 8 13 21 34 55 89
      fibBrian 10
1 1 2 3 5 8 13 21 34 55

I’ve added the parentheses around +∘÷ to make it a bit clearer what the left operand to the scan operator \ is. Let’s break the code down…

      ⍵⍴1 ⍝ produces a vector of ⍵ 1's 
1 1 1 1 1 ⍝ for ⍵=5

Then we apply scan with the function +∘÷, which in this case has the effect of adding 1 to the reciprocal of the previous result:

      (+∘÷)\1 1 1 1 1
1 2 1.5 1.666666667 1.6

Note that the denominator of each term is the corresponding element of the Fibonacci series…

1 ÷ 1 = 1
2 ÷ 1 = 2
3 ÷ 2 = 1.5
5 ÷ 3 = 1.666666667
8 ÷ 5 = 1.6

To extract them, take the reciprocal and then the LCM (lowest common multiple) with 1.

      1∧÷1 2 1.5 1.666666667 1.6
1 1 2 3 5

What happens if you want to accurately compute larger Fibonacci numbers? There’s a limit to the precision based on the internal number representations. Because fibRoger and fibBrian delve into floating point representation, they’re the most limited. (Roger points out that the J language has native support for extended precision and does not suffer from this limitation.)

Dyalog has a system variable, ⎕FR, that sets how floating-point operations are performed: either IEEE 754 64-bit binary floating-point or IEEE 754-2008 128-bit decimal floating-point. For most applications, 64-bit operations are perfectly acceptable, but applications that demand a very high degree of precision can use 128-bit operations, albeit at some cost in performance. Using 64-bit operations, fibBrian loses accuracy after the 35th term, while 128-bit operations extend the accuracy to 68 terms.
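For reference, here is a minimal sketch of the settings involved (645 selects 64-bit binary floats, 1287 selects 128-bit decimal floats; 34 is the maximum print precision):

      ⎕FR←1287              ⍝ use 128-bit decimal floating-point
      ⎕PP←34                ⍝ display up to 34 significant digits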

Even setting ⎕FR to use 128-bit operations and the print precision, ⎕PP, to its maximum, we can only go as far as the 164th element, a 34-digit number (8404037832974134882743767626780173), before being forced into scientific notation.

Can we go further easily? Yes we can! Using Dyalog for Microsoft Windows’ .NET integration, we can seamlessly make use of the BigInteger class in the System.Numerics .NET namespace.
First we need to specify the namespace and where to look for it…

      ⎕USING←'System.Numerics,system.numerics.dll'

Then we make a trivial change to our code to use BigInteger instead of native data types…

      fibRecursive←{⍵<3:⍵⍴⎕NEW BigInteger 1 ⋄ {⍵,+/¯2↑⍵}∇ ⍵-1}

And we’re done!
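A quick smoke test of my own (the exact session display of .NET objects may differ slightly):

      fibRecursive 10       ⍝ the terms are now BigInteger values
1 1 2 3 5 8 13 21 34 55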

So the next time someone walks up to you and asks “What can you tell me about the 1,000th element of the Fibonacci series?”, you can confidently reply that it has 209 digits and a value of
43466557686937456435688527675040625802564660517371780402481729
08953655541794905189040387984007925516929592259308032263477520
96896232398733224711616429964409065331879382989696499285160037
04476137795166849228875.

For other APL Fibonacci implementations, check out this page.

A Speed-Up Story

The first e-mail of the work week came from Nicolas Delcros. He wondered whether anything clever can be done with ∘.≡ on enclosed character strings. I immediately thought of using “magic functions”, an implementation technique whereby interpreter facilities are coded as dfns. I thought of magic functions because the APL expressions involved in this case are particularly terse:

   t ←' zero one two three four five six seven eight nine'
   t,←' zéro un deux trois quatre cinq six sept huit neuf'
   t,←' zero eins zwei drei vier fünf sechs sieben acht neun'
   b←⌽a←1↓¨(' '=t)⊂t

   cmpx 'a∘.≡a' '∘.=⍨⍳⍨a'
  a∘.≡a   → 9.69E¯5 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  ∘.=⍨⍳⍨a → 8.21E¯6 | -92% ⎕⎕
   cmpx 'a∘.≡b' '(a⍳a)∘.=(a⍳b)'
  a∘.≡b         → 9.70E¯5 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  (a⍳a)∘.=(a⍳b) → 1.43E¯5 | -86% ⎕⎕⎕⎕

   y←⌽x←300⍴a

   cmpx 'x∘.≡x' '∘.=⍨⍳⍨x'
  x∘.≡x   → 9.53E¯3 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  ∘.=⍨⍳⍨x → 1.52E¯4 | -99% ⎕
   cmpx 'x∘.≡y' '(x⍳x)∘.=(x⍳y)'
  x∘.≡y         → 9.55E¯3 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  (x⍳x)∘.=(x⍳y) → 1.95E¯4 | -98% ⎕
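As a sanity check (my own), the terse replacements compute exactly the same results for this data; the items are character vectors, so tolerant comparison does not come into play here:

   (a∘.≡a) ≡ ∘.=⍨⍳⍨a
1
   (a∘.≡b) ≡ (a⍳a)∘.=(a⍳b)
1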

The advantage will be even greater if/when we speed up ⍳. (And, obviously, the idea is also applicable to ∘.≢: just replace ≡ with ≢ and = with ≠.)

Jay Foad objected that the comparisons above aren’t quite fair, as the more verbose expressions should check either that ⎕CT←0 or that the arguments do not contain any elements subject to tolerant comparison. I countered that the checking in C would not greatly affect the benchmarks, as the time to do the checking is O(m+n) while the benefits are O(m×n).

Since the factors are so promising and the coding relatively easy, I went ahead and did the work, with the following results:

            14.1       14.0    ratio   cost of checking
  a∘.≡a   9.16e¯6    9.69e¯5   10.58        1.09
  a∘.≡b   1.53e¯5    9.70e¯5    6.34        1.08
  x∘.≡x   1.48e¯4    9.53e¯3   64.39        1.00
  x∘.≡y   1.94e¯4    9.55e¯3   49.23        1.01

The last column (the cost of checking that tolerant comparison is not involved) is under 10% and decreases as the argument sizes increase.

This work is another illustration of the ubiquity and practical usefulness of the “selfie” concept — in the new (14.1) implementation, x∘.≡y is faster when the left and right arguments are the same than when they are not. In particular, the selfie x⍳x or ⍳⍨x occurs twice, and bolsters the point made in item 3 of Sixteen APL Amuse-Bouches:

x⍳x are like ID numbers; questions of identity on x can often be answered more efficiently on x⍳x than on x itself.

Finally, after all that, I recommend considering x⍳y before reaching for x∘.≡y: the latter requires O(m×n) space and O(m×n) time, and is inherently inefficient.
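As a closing illustration (my own, reusing a from above): the question that x∘.≡y is often used to answer, namely where each item of y occurs in x, is available directly and far more cheaply from ⍳ alone:

   a⍳'deux' 'trois' 'zero'                    ⍝ direct lookup
13 14 1
   {⍵⍳1}¨↓⍉a∘.≡'deux' 'trois' 'zero'          ⍝ same answer via the m×n outer product
13 14 1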