After last night’s wonderful banquet dinner, spent bonding with our fellow teammates from the Viking Challenge, we were invited to the Jorns Auditorium for the world premiere of the movies we had made earlier in the day. The screening and awards show was a roaring success, with everybody surprised and thrilled at the quality of what came out of the editing room. We would like to thank Filmteambuilding.dk for an incredibly enjoyable afternoon and evening.

In another world premiere, Adám Brudzewsky introduced us to APLcart this morning. This is the new answer to the question “how do I… in APL?”. Luckily, Adám’s presentation strategy of asking the audience for functionality to search for was a win-win – if APLcart had it then we were impressed, and if not, Adám had a new item to add to APLcart. Try it now and see if APLcart has what you’re looking for. If you can’t find it, Adám invites you to email the functions you want to see to adam(AT)aplcart.info.

Richard Park then gave his third and final presentation of the week on the theme of using APL for education. He showed us how quickly and easily you can create Dyalog Jupyter notebooks, and recommended using them for how-to guides, instructional documents and problem sets for students. You can view and download his presentation (which is itself a notebook) from GitHub, and interact with a live running copy of the notebook online.

We then had the final talk of the User Meeting. Tomas Gustafsson, creator of the Stormwind boating simulator, told us the fascinating story of the Finnish ship M/S Irma. It disappeared while travelling a common route in 1968 and became one of the greatest mysteries in Finnish maritime history. Eventually some wreckage was found near Åland and Tomas was able to use APL, reconstructing possible paths of the debris via simulation, to make an educated guess of where to search for the main wreckage.

Lastly Gitte expressed to us how enjoyable the week had been, and all in the audience seemed to agree. We thanked Helene, Karen, Jason, Fiona and all of the staff at Konventum for their hard work “behind the scenes” to make the User Meeting run smoothly.

For the last afternoon of the User Meeting three final workshops were held. Two focused on technical software development issues, with Morten and Josh answering users’ questions related to using text-based source with ]LINK and Git. Andy Shiers and John Daintree were generally helping users with application-related issues, but were especially helpful to some of the young new users of APL. Some of our delegates took on another challenge in the workshop on code golfing.

I think it is safe to say that we have all thoroughly enjoyed this week. You can look forward to seeing our commercials from the Viking Challenge as well as recordings of talks from this week at some point in the future on dyalog.tv.

In contrast with Monday night’s brain-bending puzzles, last night there was some lighter entertainment as Richard Park presented his molecular dynamics framework APLPhys. He showed us how elegantly APL can express mathematical equations, and we shared his fascination watching simulations of little balls flying around on his MiServer-based graphical interface.

This morning we got to hear from Roberto and his students again. Pietro, Alessandro and Gabriele told us how, after they were shown APL in school, their interest was sparked to the point that they would write APL during other, slightly duller lessons. They gave us more details on their competitive league scoring algorithm, which was used in Monday evening’s contest. Lastly, they explained how APL’s ability to make you think differently led them to develop their puzzle competition platform, MathMaze. They were familiar with Python but new to APL, so they used Py’n’APL to make Dyalog communicate with a Python-based Django server. In that way, MathMaze contestants can enter either a direct puzzle solution or an APL statement that is evaluated on the server to solve the puzzle.

Afterwards Stephen Taylor led the Young APLers panel. To begin, he introduced us to Josh David from the small town of Scranton, PA. We learned how he started working with APL at 15 years old after being introduced to it by his neighbour Paul Mansour of The Carlisle Group. Next was James Heslip from Optima, telling of his discovery of programming through Visual Basic. During and after university he wanted to pursue computing while keeping the maths aspect in his future work. After meeting Paul Grosvenor, he managed to convince Paul to take him on as an apprentice at Optima, and now APL allows him to write programs using mathematical notation. Yuliia Serhiienko from Ukraine came to the stage next, and said that she loved mathematics in school but never imagined becoming a programmer. She had been an actuary in a previous life but, in the end, her transition from Excel macros to APL turned out wonderfully. Alve Björk, last year’s competition winner, claimed to spend more time reading about programming languages than actually programming. He said that in many languages he will think of a program but not write it; however, since APL is terse, he sometimes actually tries out a program when he thinks of one. Alve also found it interesting that in APL, when you have a problem, going online to look for a ready-made solution is not the first thing you do.

All of the panelists discussed the importance of having a teacher and being able to ask questions. It was suggested that some kind of mentor system for APL could be fostered. Once again the idea of “spreading the gospel” and getting APL in front of more people was brought up, and how it may be necessary to do this in order for the community to grow – as much as some of us would like it to remain niche.

Finally, the moment we’d all been waiting for: the prize ceremony for this year’s problem solving competition. Brian Becker talked about how we had made the leap to “eat our own dog food”, having built and hosted the competition website using MiServer (you can still see it at dyalogaplcompetition.com). Many technologies came together so that Dyalog could have the Phase I “one-liner” problems automatically validated in collaboration with TiO.Run. We saw some statistics about registrations and submissions, and heard about the extremely high quality of both Phase I and Phase II entries this year.

Then Gitte presented the top professional and student competition winners with their prizes. Torsten Grust expressed how much fun he had thinking about the problems and how clever he felt when he managed to come up with his solutions.

The Grand Prize winner Jamin Wu told us how he discovered programming when he was looking into ways to solve problems using computers – something he still needs to do despite being a medical student – and how he had found the APL family of languages via the Project Euler website. Jamin then took us through some of his solutions, including his incredible invertible tacit functions for tap encoding and decoding. He described how pleasant it had been to tackle his implementation of the Romberg method of integration by solving the problem with pen and paper first and only then implementing the refined solution, since writing the APL was so cheap in terms of effort. We were enthralled by his brilliant explanations and incredibly impressed by his well-considered problem solutions.

After lunch we were kept extremely busy by the Viking Challenge. The delegates were split into teams and had to make short commercials emphasising a certain aspect of APL to a particular audience. We expect to see some Oscar-winning performances at the screening after the banquet dinner – so now it’s time to put on my Sunday best, ready for the prize acceptance speech I expect to make.

Last night Roberto Minervini and his students Pietro Pio Palumbo, Gabriele Meroni and Alessandro Laselli of Liceo Scientifico GB Grassi Saronno conducted A Puzzle League – sneakily introducing us to another APL. The delegates were divided into teams who competed to solve 18 maths and logic puzzles, which could be tackled either using APL or with good old pen and paper. The scoring system rewarded teams who solved puzzles that other teams did not, but it soon became clear that in many cases the real challenge was solving the puzzle at all. The furious cognitive battle lasted long into the night, but eventually the team “AdamsAPL” (Adám did not choose the name) beat “MKTeam” (Morten may well have chosen the name) to prove that, although Morten Kromberg is the CTO, he should be glad to rely on the problem-solving capabilities of Dyalog’s employees.

This morning the second day of talks commenced. While the first talk of the day had Marshall Lochbaum melting brains with the details of utilising CPU vector processing (among other techniques) for implementing fast reductions, the majority of the day’s talks focused on various tools for developing software applications with Dyalog.

Richard Park and Michael Baas gave an update on recent developments of the statistical package TamStat. The creator, Stephen Mansour, couldn’t be with us this week as he is using TamStat to teach his class at the University of Scranton, PA.

Erik Wallace gave us a view of the wide range of functions available in his cryptographic library Mystika. His talk mentioned promising work in combination with Aaron Hsu’s co-dfns compiler to give speed ups, as well as some of his own work on algorithms and implementation. Erik also expounded on some non-cryptographic use cases such as high precision squaring and inverses.

Stig Nielsen told us how SimCorp is moving Dimension to the cloud – an undertaking which requires that the business logic within its 2.5 million lines of APL code be moved from the desktop application to the server and run on multiple instances of Dyalog within a .NET process.

Asset and Liability Management has been getting more popular as the legal landscape changes and SimCorp Italiana have been swift to account for those needs. Francesco Garue gave an impression of the complexity of ALM and how SimCorp Italiana have been trying to tame their “pretty messy scheme” by, for example, removing the dependency on system-specific databases or tables.

Another asset management system was presented by Claus Madsen of FinE Analytics. Claus has been using Dyalog since version 6 so he has seen the evolution of APL user needs over the last thirty years. He showed us how he has been using .NET classes to allow his APL solution to integrate with other languages used by those he is working with, using an object oriented model to handle settings for various types of financial data.

The HTMLRenderer has been developed to enable easy creation of cross-browser user interfaces using web technologies. Brian Becker drew some analogies between his granddaughter and HTMLRenderer over their respective developments, from sometimes making a mess (SYSERROR) to becoming more able to communicate over time (WebSockets). He then introduced some recent changes and additions to the way the HTMLRenderer is used in Dyalog 17.1. However, he also explained that MiServer sites would need no change to their code in order to run as a desktop application using the HRServer (HTMLRenderer Server).

Josh David has recently transferred from Dyalog user to Dyalog employee. Neatly segueing from Brian’s talk, he gave us a demonstration of the tools he has created which use the HTMLRenderer to quickly and easily create graphical elements in the Dyalog session.

Somehow COO Andy Shiers of Dyalog keeps improving the fireside chat, and this year it has reached version 5! Jokes aside, his fireside chats are an opportunity to address a few points that might not fit into the scope of other individual talks. This year Andy explained why it is now really important that the interpreter knows its serial number, and gave us his usual smorgasbord of tips on subjects such as the new Windows “backtick” keyboard enhancement, resizing the language bar, the memory manager I-beam (2000⌶) and using ⎕NINFO on directories that contain inaccessible subdirectories. In return, he asked that we send him examples of APL errors that would be made clearer with better DMX messages.

Excel and APL: a match made in Windows – now available cross-platform. As new recruit Nathan Rogers demonstrated, since modern Excel spreadsheets are zipped XML documents using the OOXML specification, Excel spreadsheets can be created directly from APL arrays on any platform. Unfortunately, Nathan could not be here in person because an important performance of Argentine tango with his wife clashed with the User Meeting. However, he was still able to present via video link – what a time to be alive!

Don’t forget that tomorrow we will be live streaming the prize ceremony for the 2019 APL Problem Solving Competition on dyalog.tv from 11:00 until 12:00 (09:00 to 10:00 UTC).

As usual, we began the series of User Meeting talks with a warm welcome from Managing Director Gitte Christensen. This year Gitte’s message felt somewhat spiritual as she described the lore of Thor’s hammer Mjölnir and its place in formal ceremonies. The choice of logo for this year’s User Meeting feels appropriate to the current zeitgeist: the desire to make some order from the chaos we may feel around us. You could feel a sense of hope in hard times.

Hard and sad times have certainly befallen us recently, and we remember two APLers who sadly passed away this year: Harriett Neville, who had been due to attend the User Meeting, and John Scholes, who left us in February. However, we also say hello to new faces at this User Meeting – Josh David and Nathan Rogers, who were hired to form our new US consulting team.

In consideration of new APLers, Gitte also gave a call to arms that we should “spread the gospel” about APL. She expressed how Dyalog is making it easier to introduce people to APL by having unregistered copies of Dyalog available for non-commercial users in the near future, and also by encouraging people to share their APL tools freely online using services such as GitHub.

This was shortly followed by a road map of future Dyalog development from Technical Director Morten Kromberg. He also emphasised how Dyalog is making APL easier for new people to find, this time mentioning the APL Orchard chat room; the Dyalog webinars, which have been running for about two years now; our talks in the wider programming community at events like LambdaConf and FunctionalConf; and of course the open-source APL projects that Gitte had mentioned.

Morten also described how we have been working and continue to work to make APL applications easier to deploy, maintain, test and integrate with other frameworks and development processes.

There were no surprises in JD’s demonstration of the .NET Core Bridge. The functionality in terms of the .NET Framework remained, but JD showed us what it would look like if .NET “was as portable as they tell us it should be” and could be used on Windows, macOS and Linux with the same APL code.

Marshall took us in a more technical direction, although still pointed towards the future of APL. He showed us the new operators constant `⍨`, atop `⍤` and over `⍥`, which we can look forward to in version 18.0. He also gave the suggested new nomenclature for the several types of function composition which will be available when these function-composition operators are released.

Tommy Johannessen has been running his one-man company for some decades now, and we got to see a great user story as he demonstrated the interface to his school lunch system SkoleMad. It enables the delivery of 20,000 meals daily to 100,000 students!

Morten and Adám teamed up again to bring ]LINK to a wider audience and emphasize the importance of using text-based APL source files to modernize your APL development workflows. This is also a vital tool if you want to share your code easily with services like GitHub.

The theme of modern APL development continued seamlessly as Paul Mansour of The Carlisle Group presented a Git workflow for Dyalog APL using the Acre project management system. He demonstrated using AcreTools user commands and broke down a Git workflow into something accessible even to people who are new to Git and may find it slightly daunting to use.

The co-dfns compiler is a staple of the Dyalog User Meetings at this point. Aaron Hsu’s PhD project allows APL code to run fast on GPUs, and this year he was clearly excited to show us some of the revelations that have come from the development of co-dfns. These revelations came in the form of some quite high level development concepts for us to chew on and there are sure to be some interesting conversations as a result.

The future is here, the future is now and the future is cross-platform. Richard Smith brought us yet another tool in the future tool box for Dyalog: Cross-Platform Configuration Files. The project is in early development and so the majority of the talk became an interesting debate into the pros and cons of various ideas for the format. XML, JSON, YAML or another – who will win? Only time will tell…

Geoff told us the story of how a request to have functions loaded on demand led him to the germ of the idea and eventual implementation of shared code files. The audience was attentively, silently listening and the air of the Damgårdsalen was that of a village gathered around listening to their elder.

Richard Smith returned to show us some datetime functions to help us find out whether or not it is yet Christmas (SPOILER: it’s not Christmas yet). His slick demonstration reassured us that handling dates and times in Dyalog can and will be as painless as the idiosyncrasies of time and calendars permit – again, after some details have been worked out.

Now we are adjourning for dinner. Later this evening we will be puzzling some puzzles in the APL Team Contest, hosted by members of Liceo Scientifico GB Grassi Saronno (a scientific high school in Italy). Don’t forget to check out tomorrow’s blog post to see how things went!

This year once again the Dyalog User Meeting returns to beautiful Elsinore in Denmark. The historic seaside city is home to Kronborg castle, famously immortalised in Shakespeare’s *Hamlet* – and Kromberg castle, where Morten lives. We are holding the User Meeting at Konventum, on the western outskirts of Elsinore. It features winding corridors adorned with contemporary Danish art and many comfortable seating areas conducive to social engagement, so we hope that delegates will find themselves meeting new people and conjuring beautiful new ideas as the week progresses.

Today six half-day workshops were held, with topics ranging from source code management and graphical interfaces to cutting-edge APL techniques which have become available in the last decade of APL extensions in Dyalog.

Morten and Adám’s morning workshop focused on helping users collaborate on code with text-based source files. Adám introduced the ]LINK user command and Morten showed the way with Git – both guided the adoption of these technologies with worked examples.

We saw the delegates shine with their understanding of APL in the workshop on grouping and processing text. Nic helped us to understand the differences between partition `⊆` and partitioned enclose `⊂`. Powerful search and replace with `⎕R` and `⎕S` was elucidated by Richard Smith and, in this section, the ability of the participants to ask exactly the expected questions made the progression to understanding relatively smooth.

Brian Becker of the tools group and new recruit Josh David teamed up to introduce users to the new HTMLRenderer, which allows APLers to use web technologies to create cross-platform graphical user interfaces in Dyalog.

Function trains are a relatively recent addition to APL syntax, and with their terseness people can find them daunting both to read and to write. However, once again we saw delegates stepping up to the challenge, and it was a joy to see the variety of creative approaches people took to solving problems using only function trains. Marshall gave some details on the use of the rank operator `⍤` and, despite this formidable challenge of understanding, by the end people were starting to grasp the power of this operator.

The morning’s HTMLRenderer workshop was mirrored by another GUI workshop in the afternoon. Michael and Chris Hughes worked to help people take their ⎕WC graphical interfaces for Microsoft Windows and get them to work on macOS and Linux using their qWC functions.

The mainframe is now the cloud, and with the ability to share great computing resources has come the need to learn another sizeable set of technologies. Morten and Norbert Jurkiewicz helped to clear some of the fog on this recent computing paradigm.

For the rest of this week there will be many presentations from Dyalog employees and users – as well as an APL Team Challenge, the Viking Challenge, and of course the Banquet dinner on Wednesday evening.

We will be continuing to publish short daily recapitulations to give you a flavour of the talks and events of each day. However, if you are too impatient even for that, we will be streaming Monday morning’s talks live from 09:00 to 10:45 (07:00 to 08:45 UTC). Also, on Wednesday morning between 11:00 and 12:00 (09:00 to 10:00 UTC), the 2019 APL Problem Solving Competition will be concluded as this year’s grand prize winner Jamin Wu will be presented with his prize and will talk about his experience with the competition. These streams will be available to watch live on dyalog.tv, so make sure to tune in if you don’t want to wait until the talks are published later this year.


He is no stranger to APL. In Scranton, he was introduced to APL during an internship with The Carlisle Group. From there, he continued learning and developing in APL, and was one of the three grand prize winners in the 2016 APL Problem Solving Competition. Throughout his college career he sporadically worked on other APL projects, and he frequently pair-programmed with Stephen Mansour, who was conveniently teaching statistics at the same university! His interest is in computer science, and he has also done non-APL software development at his university and professionally during another internship with MetLife.

He will primarily be a contractor for North American clients. Some of his time will also be spent with Dyalog’s Tools group, developing tools to make APL programming easier, more powerful, and current with new technologies.

One particular area he wants to tackle is creating more libraries and interfaces in APL. With the recent push towards Git and source code in text files among the Dyalog APL community, he believes that now is a prime time to do this.

Nathan first came into contact with APL when discussing code obfuscation with other programmers, and a coworker mentioned K and APL. APL became an immediate obsession, and Nathan became a regular in the Stack Exchange chat room “The APL Orchard”. He quickly began spending all of his free time learning APL, building familiar applications and tools using this quirky language, and reading about its fascinating history. He finds it funny in hindsight that he was introduced to the language in a conversation about code obfuscation, only to now be an APL evangelist, believing the concepts of APL to be as fundamental to elevating the world of computer programming as Arabic numerals were to the study of mathematics. After a year or so, Nathan was put in touch with Morten Kromberg at Dyalog. The two began pair-programming projects, which quickly proved fruitful and led to Nathan joining the team soon after.

When Nathan isn’t working on consulting projects, or tools for Dyalog, you can typically find him behind his keyboard building his own tools and toy functions in APL, with two aims in mind: convert as many traditional programmers as possible to APL, and bring his knowledge and experience to bear on modernizing APL and its tools for the current and next generation of new programmers.

In Tolerated Comparison, Part 1, I discussed the structure of tolerant inequality with one argument fixed, and showed that:

- For any real number `B`, there’s another number `b` so that a number is *tolerantly* less than or equal to `B` if and only if it is *intolerantly* less than or equal to `b`.
- This number is equal to `B÷1-⎕CT` when `B>0`, and `B×1-⎕CT` otherwise.

But these results were proven only for mathematical real numbers, which have many properties among which is the complete inability to be implemented in a silicon chip. To actually apply the technique in Dyalog APL, we must know that it works for IEEE floats like Dyalog uses (we have not implemented tolerated comparison for the decimal floating-point numbers used when `⎕FR` is 1287, and there are serious concerns regarding precision which might make it impossible to tolerate values efficiently).

Why should we care if a tolerated value is off by one or a few units in the last place? It’s certainly unlikely to cause widespread chaos. But we think programmers should be able to expect, for instance, that after setting `i←v⍳x` it is always safe to assume that `v[i]=x`. A language that behaves otherwise can easily cause “impossible” bugs in programs that are provably correct according to Dyalog’s specification. And finding a value that lies just on the boundary of equality with `x` is not as obscure an issue as it may appear. With the default value `⎕CT←1E¯14`, there are at most about 180 numbers which are tolerantly equal to a typical floating-point number `x`. So it’s not much of a stretch to think that a program which handles a lot of similar values will eventually run into a problem with an inaccurate version of tolerated equality. And this is a really scary problem to debug—even the slightest difference in the values used would make it disappear, frustrating any efforts to track down the cause. We’ve dealt with tolerant comparison issues in the past and this kind of problem is certainly not something we want to stumble on in the future.
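The figure of about 180 neighbours can be checked empirically. The following Python sketch (ours, not part of the original post; the function names are made up) steps through adjacent floats with `math.nextafter` and counts how many are tolerantly equal to a given value:

```python
import math

def tolerant_eq(a, b, q=1e-14):
    # Dyalog-style tolerant equality: |a-b| <= q × (|a| max |b|)
    return abs(a - b) <= q * max(abs(a), abs(b))

def count_tolerantly_equal(x, q=1e-14):
    """Count the floats tolerantly equal to x by stepping ulp by ulp."""
    n = 1                                   # x itself
    a = math.nextafter(x, math.inf)
    while tolerant_eq(a, x, q):             # walk upwards
        n += 1
        a = math.nextafter(a, math.inf)
    a = math.nextafter(x, -math.inf)
    while tolerant_eq(a, x, q):             # walk downwards
        n += 1
        a = math.nextafter(a, -math.inf)
    return n
```

For `x = 1.0` the count comes to about 136; for a value near the top of a binade, such as 1.99, it approaches the stated maximum of about 180.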

On to floating-point numbers. I’m afraid this is not a primer on the subject, although I can point any interested readers to the excellent *What Every Computer Scientist Should Know About Floating-Point Arithmetic*. In brief, Dyalog’s floating-point numbers use 64 bits to represent a particular set of real numbers chosen to cover many orders of magnitude and to satisfy some nice mathematical properties. We need to know only a surprisingly small number of things about these numbers, though—see the short list below. Here we consider only *normal* numbers, and not denormal numbers, which appear at extremely small magnitudes. The important result of this post is still valid for denormal numbers, which have higher tolerance for error than normal numbers, but we will not demonstrate this detail here.

**Definitions**: In the discussion below, `q` is used as a short name for the value `⎕CT`. Unless stated otherwise, formulas below are to be interpreted *not* as floating-point calculations but as mathematical expressions—there is no rounding and all comparisons in formulas are intolerant. Evaluation order follows APL except that `=` is used as in mathematics: it has lower precedence and can be used multiple times in chains to show that many values are all equal to each other. The word “error” indicates absolute error, that is, the absolute distance of a computed value from some desired value. The value `ulp` (from “unit in the last place”) is used to indicate what some might denote ULP(1), the distance from 1 to the next higher floating-point number. It is equal to `2*¯52`, and it is an upper bound on the difference between two adjacent normal floating-point numbers divided by the smaller of their magnitudes.

We will require the following **facts about floating point numbers**:

- Two adjacent (normal, nonzero) floating-point numbers `a` and `b` differ by at least `0.5×(|a)×ulp` and at most `(|a)×ulp`.
- Consequently, the error introduced by exact rounding in a computation whose exact result is `x` is at most `(|x)×0.5×ulp`. The operations `+-×÷` are all exactly rounded.
- Sterbenz’s lemma: If `x` and `y` are two floating-point numbers with `x≤2×y` and `y≤2×x`, then the difference `x-y` is exactly equal to a floating-point number. Theorem 11 in the link above is closely related, and its proof indicates how one would prove this fact.
- Given a floating-point number, the next lower or next higher number can be efficiently computed (in fact, provided the initial number is nonzero, their binary representations differ from that number by exactly 1 when considered as 64-bit integers).

We’ll need **one other fact**, which Dyalog APL guarantees (other APLs might not). The maximum value of `⎕CT` is `2*¯32`, chosen so that two 32-bit integers can’t be tolerantly equal to each other. Otherwise, integers couldn’t be compared using the typical CPU instructions, which would be a huge performance problem. The value of `ulp` is `2*¯52` for IEEE doubles, so `⎕CT*2` is at most `ulp÷2*12`. The proof below holds for `⎕CT*2` as high as `ulp÷9`, but not for `⎕CT*2` higher than `ulp÷8`.
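In concrete terms, these bounds are easy to verify numerically (a quick check of our own, not from the post):

```python
ulp = 2.0 ** -52                    # ULP(1) for IEEE doubles
q_max = 2.0 ** -32                  # Dyalog's maximum permitted ⎕CT

# at the maximum, ⎕CT squared is exactly ulp ÷ 2*12 ...
assert q_max ** 2 == ulp / 2 ** 12
# ... which is comfortably below the ulp÷9 bound the proof requires
assert q_max ** 2 <= ulp / 9
```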

In the following discussion, we will primarily consider the case `B>0`. We want to define a function `tolerateLE` which, given `B`, returns the greatest floating-point value tolerantly less than or equal to `B`, and to show that every value smaller than `tolerateLE B` is also tolerantly less than or equal to `B`. The last post analysed this situation on real (not floating-point) numbers, and showed that in that case `tolerateLE B` is equal to `B÷1-q`.

The case `B<0` is substantially simpler to analyse, because the formula `B×1-q` for this case is more tractable. This case is not described fully but can be handled using the same techniques. Also not included is the case `B=0`: `tolerateLE 0` is zero, since comparison with zero is already intolerant.
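Before diving into the error analysis, it may help to see `tolerateLE` as an executable specification. This Python sketch (our own reference implementation, not Dyalog’s; a real interpreter would compute the answer directly rather than search) starts from the real-number answer and steps float by float to the exact boundary of Dyalog’s tolerant comparison `(a-B) ≤ q × 0⌈a⌈-B`:

```python
import math

def tolerant_le(a, b, q=1e-14):
    # Dyalog's tolerant a ≤ b:  (a-b) ≤ q × 0⌈a⌈-b
    return (a - b) <= q * max(0.0, a, -b)

def tolerate_le(b, q=1e-14):
    """Greatest float tolerantly ≤ b, found by stepping outwards from the
    real-number starting point b÷(1-q) (or b×(1-q) for negative b)."""
    if b == 0:
        return 0.0                       # comparison with zero is intolerant
    a = b / (1 - q) if b > 0 else b * (1 - q)
    while tolerant_le(a, b, q):          # walk up while still tolerantly ≤ b
        a = math.nextafter(a, math.inf)
    while not tolerant_le(a, b, q):      # step back down onto the boundary
        a = math.nextafter(a, -math.inf)
    return a
```

Every float at or below the returned value is tolerantly less than or equal to `b`, and the next float above it is not.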

(This section isn’t necessary for our proof. But it’s useful to see why the obvious formula isn’t good enough, and serves as a nice warmup before the more difficult computations later.)

When we compute `B÷1-q` on a computer, how does that differ from the result of computing `B÷1-q` using the mathematician’s technique of not actually computing it? There are two operations here, and each is subject to floating-point rounding afterwards. To compute the final error we must use an alternating procedure: for each operation, first find the greatest error that could happen if the operation were computed exactly, based on the error in its arguments. Then add another error term for rounding, which is based on the size of the operation’s result.

It’s helpful to know first how inverting a number close to 1 affects its error. Suppose `x` is such a number, and it has a maximum error `x×r`. We’ll get the largest possible error by comparing `y÷x×1-r` to the exact value `y÷x` (you can verify this by re-doing the calculation below using `1+r` instead). The error is

```
err = | (y÷x) - y÷x×1-r
    = (y÷x) × | 1 - ÷1-r
    = (y÷x) × | r÷1-r
```

Assuming `r<0.5`, which will be wildly conservative for our uses, we know that `(1-r)>0.5` and hence `(÷1-r)<2`. So if the absolute error in `x` is at most `x×r`, then the absolute error in `y÷x` (assuming `y` is exact, and before any rounding) is at most:

`err < (y÷x) × 2×r`

Now we can figure out the error when evaluating `B÷1-q`. At each step the rounding error is at most `0.5×ulp` times the current value.

```
⍝ computation   error before rounding     error after rounding
  1-q           0                         (1-q)×0.5×ulp
  B÷1-q         (B÷1-q) × 2×0.5×ulp       (B÷1-q)×1.5×ulp
```

The actual upper bound on error has a coefficient substantially less than `1.5`, since the error estimate for `B÷1-q` was very conservative. But the important thing is that it’s definitely greater than `1`. The value we compute could be one of the two closest to `B÷1-q`, but it could also be further out. Obviously we can’t guarantee this is the exact value that `tolerateLE B` should return. But what kind of bounds can we set on that value, anyway?

The last post showed that, when `B>0`, a value `a` is tolerantly less than or equal to `B` if and only if it is exactly less than or equal to `B÷1-q`. But that was based on perfectly accurate real numbers. What actually happens around this value for IEEE floats? Let's say `B` is some positive floating-point number and `at` is the exact value of `B÷1-q` (which might not be a floating-point number). Then suppose `a` is another floating-point number, and define `e` (another possibly-non-floating-point number) so that `a = at+e`. What is the result of evaluating the tolerant less-than formula below?

`(a-B) ≤ q × 0⌈a⌈-B`
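For reference, the formula transcribes directly into Python (a sketch; `q` is an assumed tolerance standing in for `⎕CT`, and the test values are arbitrary):

```python
def tolerant_le(a, b, q=2.0 ** -44):
    """Tolerant a≤b, following the formula (a-B) ≤ q × 0⌈a⌈-B."""
    return (a - b) <= q * max(0.0, a, -b)

# A value just above 1 compares tolerantly ≤ 1 while the gap stays well
# under q×1, and fails once the gap exceeds the tolerance.
assert tolerant_le(1.0 + 2.0 ** -50, 1.0)
assert not tolerant_le(1.0 + 2.0 ** -40, 1.0)
```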

The left-hand side turns out to be very easy to analyse due to Sterbenz's lemma, which states that if `x` and `y` are two floating-point numbers with `x≤2×y` and `y≤2×x`, then the difference `x-y` is exactly equal to a floating-point number, meaning that it will not be rounded at all when it is computed. It's easy to show that if `a>2×B` then `a` is tolerantly greater than `B`, and that if `B>2×a` then `a` is tolerantly less than or equal to `B`. So in the interesting case, where `a` is close to `B`, we know that the following chain of equalities holds exactly:

```
a-B = e + at-B
    = e + (B÷1-q)-B
    = e + B×(÷1-q)-1
    = e + B×q÷1-q
```
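Sterbenz's lemma itself is easy to check numerically. A Python sketch, again using `fractions` for an exact reference result (the sample pairs are arbitrary values within a factor of two of each other):

```python
from fractions import Fraction

def subtraction_is_exact(x, y):
    """True when the float result of x - y equals the exact difference."""
    return Fraction(x) - Fraction(y) == Fraction(x - y)

# Pairs satisfying x≤2×y and y≤2×x: the subtraction incurs no rounding.
assert subtraction_is_exact(1.1487, 1.0)
assert subtraction_is_exact(3.75, 2.5)
assert subtraction_is_exact(1.0 + 2.0 ** -30, 1.0)
```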

Now what about the right-hand side? Because `B>0` and (by our simplifying assumption in the previous paragraph) `a≥B÷2`, `a` is the largest of the three numbers in `0⌈a⌈-B`. Floating-point maximum is always exact (since it's equal to one of its arguments), so the right-hand side simplifies to `q×a`. This expression does end up rounding. Its value before rounding can be expressed in terms of `a-B` and `e`:

```
q×a = (q×at) + q×e
    = (B×q÷1-q) + q×e
    = (e + B×q÷1-q) - (e - q×e)
    = (a-B) - e×1-q
```

It's very helpful here to know that `a-B` is exactly a floating-point number! `q×a` will round to a value that is smaller than `a-B` (thus making the tolerant inequality `a≤B` come out false) when it is closer to the next-smallest floating-point number than to `a-B` (if it is halfway between, it could round either way depending on the last bit of `a-B`). This happens as long as `e×1-q` is larger than half the distance to that predecessor. The floating-point format guarantees that, as long as `a-B` is a normal number, this distance is between `0.25×ulp×a-B` and `0.5×ulp×a-B`, where `ulp` is the difference between 1 and the next floating-point number. Consequently, if `e` is less than `0.25×ulp×a-B`, we are sure that `a` will be found tolerantly less than or equal to `B`, and if `e` is greater than `0.5×ulp×a-B`, it won't be. If it falls in that range, we can't be sure.
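The guarantee about half the predecessor gap can be spot-checked in Python using `math.nextafter` (Python 3.9+); the sample values below are arbitrary positive normal numbers:

```python
import math

ULP = math.ulp(1.0)   # difference between 1 and the next float, 2*¯52

# For a normal positive float x, half the gap to its predecessor lies
# between 0.25×ulp×x and 0.5×ulp×x.
for x in [1.1487, 1.0e10, 3.0 ** 20]:
    half_gap = (x - math.nextafter(x, 0.0)) / 2
    assert 0.25 * ULP * x <= half_gap <= 0.5 * ULP * x
```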

The zone of uncertainty for the value `B←2*÷5` is illustrated above. It contains all the values of `a` for which we can't say for sure whether `a` is tolerantly less than or equal to `B`, or greater, without actually doing the computation and rounding (that is, the result will depend on specifics of the floating-point format and not just `ulp`). It's very small! It will almost never contain an actual floating-point value (one of the black ticks), but it could.

If there isn't a floating-point number in the zone of uncertainty, then `tolerateLE B` has to be the first floating-point number to its left. But if there is one, say `c`, then the value depends on whether `c` is tolerantly less than or equal to `B`: if it is, then `c = tolerateLE B`; if not, then `tolerateLE B` is again the nearest floating-point value to the left of the zone of uncertainty.

How can we compute `B÷1-q` more accurately than our first try? One good way of working with the expression `÷1-x` when `x` is between 0 and 1 is to use its well-known expansion as an infinite polynomial. A mathematically-inclined APLer (who prefers `⎕IO←0`) might write

`(÷1-x) = +/x*⍳∞`

where the right-hand side represents the infinite series 1+x+x²+x³+…. One fact that seems more obvious when thinking about the series than about the reciprocal is that, defining `z←÷1-x`, we know `z = 1+x×z`. So similarly,

`(B÷1-q) = B+q×B÷1-q`

But it turns out to be much easier than that! The difference between `1` and `÷1-q` is fairly close to `q`. So if we replace `÷1-q` by `1`, then we end up off by about `B×q×q`. Knowing that `q*2` is much smaller than `ulp`, we see that this difference is minuscule compared to `B`. So why don't we try the expression `B+q×B`?
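A quick numerical sanity check of the proposed formula in Python, comparing it against the exact quotient (the tolerance `q` is an assumed illustrative value):

```python
from fractions import Fraction
import math

q = 2.0 ** -44          # assumed stand-in for the comparison tolerance
B = 2.0 ** (1 / 5)      # the example value from the post

approx = B + q * B                          # the proposed cheap formula
exact = Fraction(B) / (1 - Fraction(q))     # the exact target value B÷1-q

# The cheap formula lands within one ulp of the exact quotient.
assert abs(Fraction(approx) - exact) <= exact * Fraction(math.ulp(1.0))
```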

The error in using `B` instead of `B÷1-q` is

```
(|B - B÷1-q) = |B × 1-÷1-q
             = |B × ((1-q)-1)÷1-q
             = B × q÷1-q
```

Multiplying by `q`, the absolute error of `q×B` is `q×B × q÷1-q`, which, knowing that `(÷1-q)<2`, is less than `B × 2×q*2`, and consequently less than, say, `B×0.05×ulp`.

```
⍝ computation   relative to   err before rounding   err after rounding
  q×B           q×B÷1-q       B×0.05×ulp            B×(0.05+q)×ulp
  B+q×B         B÷1-q         B×0.06×ulp
```

That's pretty close: the unrounded error is substantially less than the error that will be introduced by the final rounding (about `B×0.5×ulp`). Chances are, it's the closest floating-point number to `B÷1-q`. But it could wind up on either side of that value, so we will need to perform a final adjustment to obtain `tolerateLE B`.

Note that the new formula `B+q×B` is very similar to the formula `B×1-q` which is used when `B` is negative. In fact, calculating the latter value with the expression `B+q×-B` will also have a very low error. That means we can use `B+q×|B` for both cases! However, we will still need to distinguish between them when testing whether the value that we get is actually tolerantly less than or equal to `B`.

After we calculate `a←B+q×B`, we still don't know which way `a≤B` will go. There's just too much error to make sure it falls on one side or the other of the critical band. But we do know about the numbers just next to it: a value adjacent to `a` must be separated from the unrounded value of `B+q×B` by at least `0.25×B×(1+q)×ulp`, or else we would have rounded `a` towards it. That unrounded value differs from the true value `B÷1-q` by only `0.06×B×ulp` at most, so we know that these neighbors are at least `((0.25×1+q)-0.06)×B×ulp` or (rounding down some) `0.15×B×ulp` from `at`. But that's way outside of the zone of uncertainty, which goes out only to `0.5×ulp×a-B`, since `a-B` is somewhere around `q×B`.

So we know that the predecessor to `a` must be tolerantly less than or equal to `B`, and its successor must not be. That leaves us with only two possibilities: either `a` is tolerantly less than or equal to `B`, in which case it is the largest floating-point number with this property, or it isn't, in which case its predecessor is that number. In the diagram above, we can see that the range for `a` is a little bigger than the gap between ticks, but it's small enough that the ranges for its predecessor `P(a)` and successor `S(a)` don't overlap with `B÷1-⎕CT` or the invisibly small zone of uncertainty to its right. In this case `a` rounds left, so `a = tolerateLE B`, but if it rounded right, then we would have `(P(a)) = tolerateLE B`.

So that's the algorithm! Just compute `B+q×|B`, and compare to see if it is tolerantly less than or equal to `B`. If it is, return it; otherwise, return its predecessor, the next floating-point number in the direction of negative infinity. We also add checks to the Dyalog interpreter's debug mode to make sure the number returned is actually tolerantly less than or equal to `B`, and that the next larger one isn't.
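The whole algorithm fits in a few lines of Python as a sketch (not Dyalog's C implementation; `Q` is an assumed illustrative tolerance, and `math.nextafter` plays the role of stepping to the adjacent float):

```python
import math

Q = 2.0 ** -44   # assumed stand-in for the comparison tolerance

def tolerant_le(a, b, q=Q):
    # The tolerant inequality (a-B) ≤ q × 0⌈a⌈-B from earlier in the post.
    return (a - b) <= q * max(0.0, a, -b)

def tolerate_le(b, q=Q):
    """Largest float tolerantly ≤ b, for nonzero b."""
    a = b + q * abs(b)                    # within about one float of the answer
    if not tolerant_le(a, b, q):
        a = math.nextafter(a, -math.inf)  # overshot: step down to the predecessor
    return a

b = 2.0 ** (1 / 5)
t = tolerate_le(b)
assert tolerant_le(t, b)                                # t itself qualifies
assert not tolerant_le(math.nextafter(t, math.inf), b)  # its successor does not
```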

The following code implements the ideas above in APL. Note that it can give a domain error for numbers near the edges of the floating-point range; Dyalog's internal C implementation has checks to handle these cases properly. `adjFP` does some messy work with the binary representation of a floating-point value in order to add or subtract one from the integer it represents. Once that's out of the way, tolerated inequalities are very simple!

```
⍝ Return the next smaller floating-point number if ⍺ is ¯1, or the
⍝ next larger if ⍺ is 1 (default).
⍝ Not valid if ⍵=0.
adjFP ← {
    ⍺←1 ⋄ x←(⍺≥0)≠⍵≥0
    bo←,∘⌽(8 8∘⍴)⍣(~⊃83 ⎕DR 256)  ⍝ Order bits little-endian (self-inverse)
    ⊃645⎕DR bo (⊢≠¯1↓1,(∧\x≠⊢)) bo 11⎕DR ⊃0 645⎕DR ⍵
}
⍝ Tolerate the right-hand side of an inequality.
⍝ tolerateLE increases its argument while tolerateGE decreases it.
⍝ tolerateEQ returns the smallest and largest values equal to its argument.
tolerateLE ← { ¯1 adjFP⍣(t>⍵)⊢ t←⍵+⎕ct×|⍵ }
tolerateGE ← -∘tolerateLE∘-
tolerateEQ ← tolerateGE , tolerateLE
```
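For readers less fluent in APL, the bit-adjustment trick inside `adjFP` can be sketched in Python (an illustration, not Dyalog's implementation; `math.nextafter` serves as a cross-check):

```python
import math
import struct

def adj_fp(x, direction=1):
    """Adjacent float of nonzero x: next larger if direction is 1,
    next smaller if direction is -1, via the 64-bit integer representation."""
    bits = struct.unpack('<q', struct.pack('<d', x))[0]
    # For negative floats the integer representation runs backwards.
    step = direction if x > 0 else -direction
    return struct.unpack('<d', struct.pack('<q', bits + step))[0]

assert adj_fp(1.0) == math.nextafter(1.0, math.inf)
assert adj_fp(1.0, -1) == math.nextafter(1.0, -math.inf)
assert adj_fp(-1.0) == math.nextafter(-1.0, math.inf)
assert adj_fp(-1.0, -1) == math.nextafter(-1.0, -math.inf)
```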

We can see below that `tolerateEQ` returns values which are tolerantly equal to the original argument, but which are adjacent to other values that aren't.

```
      (⊢=tolerateEQ) 2*÷5
1 1
      (⊢=¯1 1 adjFP¨ tolerateEQ) 2*÷5
0 0
```

Of course, using `tolerateEQ` followed by intolerant comparison won't speed anything up in version 17.0: that's already been done!

We’ll be writing much more about version 17.1 soon, and next year’s 18.0 release in due course. The main purpose of this blog entry is to let you know about new members of the Dyalog team and, unfortunately, a couple of departures as well.

In February, John Scholes passed away. Together with Geoff Streeter, John was one of the original implementors of Dyalog APL in 1982-1983, a cornerstone of all aspects of the Dyalog language and business, and one of the pillars of the APL community. Many members of the community have paid tribute to our Genius, Gentleman and Mischievous Schoolboy at http://johnscholes.rip.

At the end of May 2019, Jay Foad is leaving Dyalog to return to his first love (as a software developer) and become a proper compiler geek again, after nearly a decade of helping move Dyalog APL forward and, for the last three years, helping to “herd the cats” as CTO. We will sorely miss Jay’s technical excellence but understand the desire to hit the sweet skill spot when the opportunity arises, and we wish him good fortune in that pursuit! You can read Jay’s farewell blog post here.

Jay's management responsibilities will be shared between Richard Smith, our Development Manager, and myself; I will be re-assuming the role of CTO until further notice.

The good news is that we will welcome several new people to Dyalog in 2019 – new hands to write code *in* APL, to work on the APL interpreter, and to write documentation and training materials to help new and old users get their work done more effectively.

In response to client requests and to help new clients get started writing their first APL systems, we are creating a consulting group in the USA. To date, we have recruited two members for this team: Nathan Rogers joined the team at the end of April and is based in Denver, Colorado, and Josh David starts work for Dyalog in early June (as soon as he graduates) and will be based in New Jersey. If you think you have heard of Josh before, that is probably because he was a winner of the Dyalog Problem-Solving Contest in 2016 (https://www.dyalog.com/news/112/420/2016-APL-Programming-Contest-Winners.htm) – and a runner up in 2015. Nathan found us thanks to Adam Brudzewsky’s work on Stack Exchange: https://chat.stackexchange.com/rooms/52405/the-apl-orchard. You can reach them both using e-addresses in the form firstname at dyalog.com.

When members of the consulting team are not working for clients, the intention is that they will be members of the APL Tools Group at Dyalog, working on new tools for APL application development and helping create test suites for Dyalog APL. They will also support Richard Park, who joined us late in 2018, to work on the creation of training materials and tutorials for new users.

Once we have a better idea of the demand for consulting in North America, we expect to grow the team. Please let us know if you could use hired APL hands – in any territory! If we don’t have the resources ourselves, we may be able to find someone else.

Nathan comes to us with experience from a broad set of tools and programming languages. In addition to writing tools in APL, he will be a part-time member of the core development team, working on the APL interpreter and its interfaces in C, C#, JavaScript, Python and other languages. However, he won’t spend enough time on this to make up for the loss of Jay, who (like most managers at Dyalog) spent a significant amount of his time writing code.

Therefore, as described at https://www.dyalog.com/careers.htm, we are recruiting at least one C / C++ programmer to help us grow the core team.

2019 is looking like an extremely busy year, with significant growth at Dyalog. As usual, our plan is to bring all the new (and old) hands to the Dyalog user meeting, which will be held in Elsinore, Denmark this year – September 8th to 12th. Details of the programme will soon start to appear at https://www.dyalog.com/user-meetings/dyalog19.htm. If you would like to present an APL-related experience to the user community, make proposals for new features of Dyalog products or suggest topics that you would like Dyalog to speak about at the user meeting, then please let us know as soon as possible!

When I joined Dyalog in 2010 I knew nothing about APL, so there was a really steep learning curve as I got to grips with both the language and its implementation. I was using some of my previous experience with compilers to improve the performance of the implementation, and thinking about ways to compile APL. This is a tough problem, and one that many people have worked on over the years (see for example Timothy Budd’s 1988 book An APL Compiler). My own ideas have shifted as I’ve gained more experience with APL and the way it is used. At first I thought “writing a compiler” was an obvious thing to do; now I think that hybrid compiler/interpreter techniques are much more promising, and Dyalog’s recent experiments with deferred execution and *thunks* are a good step in that direction.

At the same time, there was a lot of excitement around the APL language itself. Dyalog was working on APL#, a new .NET-based APL dialect (sadly abandoned as Microsoft’s own commitment to .NET waned). And Dyalog APL itself was starting to borrow more language features from the SharpAPL/J branch of the family tree, starting with the Rank operator and continuing over many years. This prompted me to delve more into the history of APL, to try to understand some of the fundamental differences between different implementations, so that we could reconcile those differences in Dyalog APL and provide, as far as possible, the best of both worlds. I think we’ve done pretty well in that, as evidenced by the fact that many APLers are happily using Rank, Key, function trains *et al* in an APL2-based language, something that seemed unthinkable a decade ago.

One of the most gratifying developments in the time I’ve been working with APL is the rapid growth of new APL implementations, open source projects and grass-roots enthusiasm. In particular, the open source movement has made it much easier for anyone with a good idea about language design to implement it, and share it with the world. We’ve seen a wide variety of new APLs and APL-inspired languages popping up over the years, ranging from full-featured to highly experimental, including but not limited to (in roughly the order I remember hearing about them): ELI, ngn/apl, GNU APL, Ivy, April, dzaima/APL and APL\iv.

And speaking of new APLs, of course there is Co-dfns, a compiled APL implementation that tries to solve another tough problem: harnessing the power of GPUs and other massively parallel hardware in a way that makes it accessible to the end user. This is something that many people are trying to do, in a wide variety of languages, but as far as I can tell no-one has quite succeeded yet. The state of the art is still that, in order to get good performance, you need quite a lot of mechanical sympathy for the underlying hardware. But Co-dfns has come a long way, and if any language is well-suited to run on parallel array processors then surely it is APL!

This brings me on neatly to my next job: I’ll be working on compilers for GPUs, the parallel computers that render 3D graphics. They are closely related to their “general purpose” cousins the GPGPUs, which are used for pure number crunching, and to so-called *tensor* processing units (TPUs) that simulate neural networks for use in machine learning and artificial intelligence. “Tensor” here just means an array of arbitrary rank, or as we would say: an array. For programming TPUs there is a Python-based framework called TensorFlow. But, look closely at the APIs for some of the core TensorFlow libraries, and you’ll see operations like reshape, reverse and transpose, which are eerily similar to their APL equivalents. There truly is nothing new under the sun!

With fond regards to all APLers,

Jay.