DYNA Fall 2025 – A Review

DYNA Fall 2025 brought together APL enthusiasts, customers, and Dyalog Ltd staff for two days of presentations and workshops in New York City. Attendance was strong, energy was high, and the sessions showcased both the maturity and momentum of the Dyalog ecosystem.

The event took place in the Jay Suites Conference Center at the end of September. We’d arranged a bigger venue this time, but it was still filled to capacity! There were eight members of Team Dyalog present, and the European contingent gathered at Brian Becker’s house for a few days beforehand for a “Conclave”. As we’re a geographically-distributed team, we try to take the opportunities that present themselves to work together in person – our thanks to Brian for hosting.

Day 1: Presentations

The Dyalog Road Map – Fall 2025 Edition
Morten Kromberg

First impressions: so many people! Old friends, new acquaintances, and a palpable buzz in the air. Our CTO, Morten, started proceedings with his customary roadmap and vision: what we have done, what we are doing now, and where we are going next. He highlighted both the necessary Sisyphean work, such as the ongoing improvements and interfaces to other systems, and the more exciting developments, such as new language features (array and namespace notation, ⎕VSET, ⎕VGET, and inline tracing). Morten emphasised our growth, in terms of both revenue and staff. He introduced our newest hires, including Asher Harvey-Smith, whose first day on the job was the first day of the event!

Dyalog and AI
Stefan Kruger

Following Morten’s presentation, Stefan talked about Dyalog and AI. AI is a hot topic that impacts most companies in some way, but because APL is a minority language, LLM performance on APL code has been poor. Stefan’s talk covered some of the recent developments in the field, highlighting the fact that today’s frontier models are capable of explaining even quite complex APL code correctly, but that they still struggle to write code unaided. He demonstrated an LLM agent capable of executing code, running tests, and reading documentation – showing that while this improves performance, we are still far from the productivity gains that a Python developer could expect.

JAWS – Jarvis And WebSockets
Brian Becker

Brian took over to present JAWS. No, not the 70s movie, but the WebSocket extension to Jarvis: JAWS = Jarvis And WebSockets (Jarvis is our web service framework). He outlined the use cases that plain HTTP serves well, contrasted them with the use cases for which you need something else, and explained how WebSockets can fill that role. Brian showed some practical examples of how WebSockets are used, and how the WebSocket functionality slots neatly into Jarvis.

A Dyalog Interface to Kafka
Martina Crippa

Martina introduced us to Apache Kafka, an event-streaming platform widely used as a backbone of the data infrastructure in large organisations, such as banks. Large cloud providers often use Kafka as the glue between their services. Martina has been working on a Dyalog interface to Kafka. She explained why it matters, and demonstrated the APL Kafka API live. The Dyalog Kafka interface is being built primarily in response to customer requests, but it is fully open source, and we will be offering optional paid support packages for those who want them.

Static Analysis of APL for Tooling and Compliance
Aaron Hsu and Brandon Wilson

The morning finished with Aaron and Brandon talking about the Static Analysis project. Static analysis is becoming increasingly important, especially in regulated industries such as finance, where compliance demands for such checks percolate all the way down to platform vendors. They demonstrated the foundation on which the Dyalog static analyser is being built: the parser from the Co-Dfns compiler, which has been extended to handle the whole of Dyalog APL. A clear visualisation of how the static analyser’s rules select features from the analysed code showcased the powerful potential of this approach. Presenting as a double act is hard to get right, but Aaron and Brandon’s presentation went down well.

Lessons Learned when Converting from APL+Win to Dyalog APL
Alex Holtzapple of Metsim International (MSI)

METSIM® is an all-in-one solution for mining and metallurgical operations; it is used in 57(!) different countries around the world – “in the remotest corners of the map”, as Alex put it. Over the past 18 months, Alex and MSI have worked closely with us to migrate METSIM® from APL+Win to Dyalog. After introducing himself and METSIM®, Alex described the process of working with Dyalog Ltd. He had a clear vision of what he wanted to achieve: he specifically wanted to preserve the UI’s design as it stands, because that places less of a burden on customers to modify their workflows and established processes. The migration has been a successful project, and the Dyalog-based product is now in the hands of their customers. Both MSI and Dyalog Ltd learned important lessons from the migration project. Alex said that they’d thought about this migration project for a long time, but what finally swung the decision was a visit to our HQ in Bramley, UK, to “look the team in the eye”.

Dyalog APL: Our (Not So) Secret Ingredient
Mark Wolfson of BIG

Mark Wolfson told us how BIG is disrupting the jewellery business, and how Dyalog forms a central component of that effort. BIG’s stack is a great example of a modern, heterogeneous services architecture: by the very nature of the business, they need to be able to consume data from a multitude of diverse systems and protocols. After this data has been transformed into a common format, it is consumed by several internal systems to “derive insight from chaos”. BIG has always been an APL promoter, and Mark is doubling down on this: BIG is increasingly reliant on APL, and they’re investing significantly in their APL capabilities.

The Data Science Journey
Josh David

Josh talked us through Dyalog’s data science journey. Dyalog APL has been a natural fit for data analysis since long before the term ‘Data Science’ became fashionable. Today, the field is – like so many others – dominated by Python and R. At Dyalog Ltd, we firmly believe that we have a role to play in this space, and we’re actively trying to attract new practitioners. Josh recounted his experience exhibiting at the 2025 Joint Statistical Meetings (JSM) in Nashville, together with Martina Crippa, Rich Park, and Steve Mansour. There was a lot of interest from delegates, especially when they were walked through the extremely compact formulation of the k-means clustering algorithm in APL. A renewed focus on the data science applications of Dyalog APL will inevitably impact our development roadmap – we need to improve both our data ingest story and the raw performance in some key areas.

Josh also showcased Steve Mansour’s statistics package TamStat. TamStat is primarily intended as a package for teaching statistics, but it has several other interesting facets, too: it can be used as a library for statistics routines that you can use in your own applications (it’s open source), and also as a “statistics DSL” – a compact, dedicated way to express and evaluate statistics formulae.

What Can Vectorised Trees Do For You?
Asher Harvey-Smith

Asher is the newest member of Team Dyalog, and he started his role by giving a presentation and hosting a workshop on the first two days of his employment! However, his association with Dyalog Ltd goes back further than that: he has previously completed two internships (the second as “senior intern”), and is already a seasoned presenter (he stepped up to the podium at the Dyalog ’24 user meeting in Glasgow). Asher wants to popularise the tree-manipulation techniques used in the Co-Dfns compiler and in the static analyser. Through a set of clear examples and animations, he has found a great pedagogical treatment of a set of techniques that many people have struggled to grapple with. He also outlined when the “parent vector” technique is not appropriate.

ArrayLab: Building a 3D APL Game with raylibAPL
Holden Hoover, University of Waterloo

Holden Hoover, the inaugural APL Forge winner in 2024, demonstrated a 3D game called ArrayLab that he’s been building on top of Brian Ellingsgaard’s raylibAPL (Brian is also a former summer intern at Dyalog Ltd). One purpose for developing the ArrayLab game was to test the raylibAPL bindings whilst simultaneously exercising the Dyalog interpreter. When working with native extensions there are a lot of things that can go wrong! Holden also had to learn a lot about game development, in particular in-game physics. The ArrayLab game is a work in progress, but he showed a live walk-through of his progress so far, demonstrating correct physics and collision detection.

The APL Trust US
Diane Hymas and Mark Wolfson, The APL Trust US

The first day concluded with Diane Hymas and Mark Wolfson reporting on the progress that they, together with a wider team, have been able to make on the APL Trust US. The purpose of the APL Trust is to “give back” to the community. If you have an idea for something that you want to do with APL, you can apply for funding help from the APL Trust – the application process is being formalised at the moment. The good news from Diane was that the APL Trust US has now been launched as a registered charity, so donations to it are tax-deductible.

Socialising!

An important part of multi-day gatherings of this kind is the impromptu hallway encounter – a networking opportunity where like-minded people meet and learn from one another. After the presentations had finished, we retreated to the Yard House restaurant a few blocks up the road for dinner, conversation, and making friends. For several of us, this was our first visit to New York, and taking in the sights and sounds of Times Square at night is definitely an experience!

Day 2: Workshops

Tuesday was workshop day. We had a packed programme across two streams. In the morning, you could choose between learning how to use Jarvis with Brian and Stefan, or a deep-dive into namespaces with Josh, Morten, and Martina.

We have found that the namespace aspect of APL is frequently misunderstood, and we’ve run the “Introduction to Namespaces” workshop a few times now. Josh managed a full house, with Morten and Martina assisting. In the room next door, Brian gave a hands-on, practical introduction to Jarvis and JAWS, with detailed explanations of the different use cases. Jarvis is already a core component in many users’ deployed Dyalog applications; with the introduction of WebSocket support, Jarvis/JAWS will find its way into even more application deployments.

In the afternoon you could choose between learning how to use Link with instruction from Morten and Stefan, or an introduction to the key operator with Asher, who also attracted a full house.

Key is one of the advanced operators in Dyalog APL, and mastering it unlocks a lot of applications, especially in the data science domain. Simplifying hugely, the science goes to the left and the data goes to the right! (A small example follows this paragraph.) Asher had a job on his hands teaching such a large group with a range of abilities, but he was ably assisted by Martina and Josh. Next door, Morten was helping the group through gradually more complex source-code scenarios with Link. Link is now in its fourth major version, and is a mature workflow component that lets Dyalog users take advantage of a range of external tools that expect to operate on text files, such as Git, GitHub, VS Code, and many others.
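
Here is that small illustration of the “keys to the left, data to the right” idea – my own example rather than anything taken from the workshop materials. With the dyadic form of key (⌸), the left argument supplies the grouping keys, the right argument supplies the data, and the operand function is applied once per distinct key:

      groups←'fruit' 'veg' 'fruit' 'veg' 'fruit'
      prices←3 2 4 1 5
      groups {⍺(+/⍵)}⌸ prices   ⍝ total price for each distinct key (fruit: 12, veg: 3)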

In Conclusion…

The 2025 Fall DYNA conference was a well-attended, well-received event, with a great mix of newcomers, veterans, customers, and members of Team Dyalog. The highlights for us were the two customer presentations from Alex Holtzapple and Mark Wolfson – it is always interesting for us to see how people use our product in the field! Alex is new to Dyalog, and it was fantastic to hear him reporting such a positive experience and outcome. Although Mark has been a Dyalog user for longer, he was no less enthusiastic, and to hear that they’re really growing their APL development team is a vote of confidence. It was also great to hear how The APL Trust US is taking off.

DYNA Fall 2025 reflected our growing momentum, both in technology and community. Each talk underscored a shared commitment to pushing APL forward, not just as a language, but as a living ecosystem shaped by its practitioners.


Materials from DYNA Fall 2025 are being uploaded to the event webpage as they become available.

APLearn: The Winning APL Forge 2025 Project

By: Borna Ahmadzadeh

Borna is the 2025 winner of the APL Forge, our annual competition designed to promote the use and development of APL by challenging participants to create innovative open-source libraries and commercial applications. Winners of the APL Forge are invited to present their winning work at the next Dyalog user meeting. As this will not be held until October 2026, he has written a blog post about his work as a preview of what to expect from his Dyalog ’26 presentation.


As a machine learning (ML) developer, I regularly utilise the popular Python package scikit-learn to handle many aspects of ML workflows, including data pre-processing, model training and evaluation, and more. Its simple, uniform interface makes it straightforward to design ML pipelines or switch between the myriad algorithms that it supplies without significant changes to the code. As a result, scikit-learn has seen extensive adoption across many domains and industries.

Having to deal with large arrays and costly operations, scikit-learn, though written in Python, is heavily dependent on NumPy, an APL-inspired numerical computing package. NumPy itself delegates expensive routines to C and potentially Fortran to maximise performance. Portions of scikit-learn are also written in Cython, a statically compiled, C-like superset of Python, to further improve efficiency. All of this means that a complete understanding of scikit-learn requires, in addition to a knowledge of Python, familiarity with Cython, C, and even Fortran. Although you can develop professional scikit-learn applications without having such a profound knowledge, modifying an algorithm or implementing one from scratch to experiment with research ideas can be very challenging without expertise in those languages. From a didactic perspective this situation leaves much to be desired, as learners will need to dedicate as much effort to the software-specific, low-level details of their code as to the actual algorithm. For instance, matrix multiplication, which can be described in a handful of steps – this constitutes its algorithmic intent – takes hundreds of lines to efficiently implement in C… then there is the issue of portability!

This forces ML developers and students, the majority of whom don’t have the background to write well-tuned low-level code, to compromise between efficiency and tweakability, flexibility, readability, and so on. My winning APL Forge submission, APLearn, is a case study investigating whether APL can reconcile these seemingly opposed objectives.

Why APL?

To many outsiders, APL is an arcane write-once-read-never language that is exclusively suited to manipulating tabular (typically financial) data; a glyph-heavy alternative to Microsoft Excel. There are many blog posts and real-world projects that dispel this mistaken belief, but I’d like to enumerate a few of APL’s advantages that are of particular relevance to APLearn:

  • Arrays: ML data is usually best represented as multi-dimensional arrays, which are manipulated in various ways to optimise model parameters, make predictions, and so on. Array programming is a natural fit for this (a small example follows this list).
  • Performance: Data parallelism is at the heart of APL, that is, operations are carried out across entire arrays at once. This can be exploited to run code very efficiently on single instruction, multiple data hardware, especially GPUs (for example, Co-dfns).
  • Algorithms, not software: Languages such as C have a tendency to pollute algorithmic intent with software “noise”, that is, they often get stuck on how to communicate with the computer, neglecting the algorithm itself. Careful memory management, complicated pointer usage, cache-friendly practices… they shift the focus away from what the code is doing (the algorithm) to how it’s doing it (the software). APL doesn’t suffer from this problem because it resembles mathematical notation and, therefore, the implementation is tied to the description, akin to declarative programming.
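
As a tiny, self-contained illustration of the first two points – my own example, not code from APLearn – consider one-hot encoding a vector of class labels, a routine pre-processing step in ML. In APL it is a single data-parallel expression:

      labels←2 7 2 5 7        ⍝ numeric class labels for five samples
      labels ∘.= ∪labels      ⍝ one row per sample, one column per distinct class
1 0 0
0 1 0
1 0 0
0 0 1
0 1 0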

This led me to ask: Can we efficiently and elegantly develop end-to-end ML workflows in APL to simultaneously facilitate practical applications, research exploration, and pedagogy?

Goals

APLearn is my attempt at recreating a small subset of scikit-learn in APL while being mindful of the distinct style called for by array programming. Key features include training and inference methods for common models, pre-processing modules for normalising or encoding data, and miscellaneous utilities like evaluation metrics. The core design principles of APLearn are:

  • Uniformity: Models and utilities follow a uniform interface for training, prediction, or data transformation, thereby reducing the learning curve for new users.
  • Transparency: The full algorithm behind a model is available in a single file with minimal software details.
  • Composability: Components can be chained together, reading like a sentence with verbs as ML methods (for example, make classifications using logistic regression) and nouns as data (for example, input samples).

The first of these isn’t unique to APLearn, and is a cornerstone of many mainstream data science or ML packages such as scikit-learn. However, APL’s true power shows when considering transparency and composability. With a concise grammar that is analogous to a supercharged form of linear algebra, the reader is spared the necessity of cutting through layers of software noise to arrive at the algorithm underlying the code. Additionally, APL’s syntax natively supports chaining without resorting to special objects like pipes in R or Pipeline in scikit-learn. These two work together to surmount the problems (discussed above) that scikit-learn faces.
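
As a sketch of what such chaining can look like – the functions here are illustrative placeholders of my own, not APLearn’s actual interface – an APL “pipeline” is simply monadic functions applied right to left:

    ⍝ illustrative placeholders – not APLearn's actual interface
    centre←{⍵-(+/⍵)÷≢⍵}          ⍝ subtract the mean from a numeric vector
    scale←{⍵÷⌈/|⍵}               ⍝ divide by the largest magnitude
    rms←{0.5*⍨(+/⍵*2)÷≢⍵}        ⍝ root mean square of the result
    rms scale centre 3 1 4 1 5   ⍝ reads right to left, with no pipeline objects needed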

Example

A user guide and several examples showing APLearn in action can be found in the APLearn GitHub repository. A video on YouTube demonstrates one of these examples.

Development Journey

In my (biased) opinion, the APLearn user experience is relatively smooth, and one only needs a very basic knowledge of scikit-learn and APL to be able to set up basic ML workflows. That, however, says nothing about my own developer experience working on this project – after all, the initial inquiry was how easy it would be to build, not use, something like scikit-learn in APL. I have multiple takeaways and observations to share:

  • Do arrays suffice?
    APL isn’t appropriate for problems that can’t be adequately reduced to array-like structures, so this entire project would be futile if ML processes couldn’t be captured through arrays. The good news is that, for the majority of algorithms, arrays are the ideal representation. Concrete examples include generalised linear models, statistic-based methods like normalisation or naïve Bayes, and classification or regression metrics. There is, however, one major exception – trees. The naïve way to represent trees in APL, employed by APLearn, is a recursive approach where the first element of a vector stands for the tree node and subsequent elements are sub-trees or leaves. The following line, used in APLearn’s decision tree implementation, sets the current parent node using the threshold function and recurses to construct the left and right children. Despite its simplicity, this scheme is undesirable for two reasons. First, it’s contrary to the data-parallel spirit of APL. Second, trees can quickly grow in size, and highly nested arrays are notoriously inefficient. Smarter alternatives exist, but they’re not easy to get the hang of, hurting the didactic aspect of APLearn. Although not many models rely on trees (the main exceptions being decision trees and space partitioning algorithms), this is an important caveat to consider.

    ⍝ first item: the current node, built with the threshold function; the remaining
    ⍝ two items are the recursively constructed left and right sub-trees
    (thresh i) ((⍺⍺ ∇∇ ⍵⍵)X_l y_l) ((⍺⍺ ∇∇ ⍵⍵)X_r y_r)
  • Is APL efficient?
    I didn’t expect (Dyalog) APL to keep up with scikit-learn performance-wise, and I wasn’t wrong; APLearn is about an order of magnitude slower on average, and even thousands of times slower for k-means clustering and random forests. The culprit for the inefficiency in k-means is the expensive inner product in the update step, depicted below. Luckily, this doesn’t make APLearn unusable, and many real-world workflows can feasibly be executed with it, albeit more slowly. Parts of the code are incompatible with Co-dfns at the time of writing, so that remains an exciting future avenue for unlocking substantial performance enhancements.

    upd←{
        ⍝ squared distances between every sample in X and every centroid in ⍵,
        ⍝ then grade each row to find the index of the nearest centroid per sample
        inds←⊃⍤1⍋⍤1⊢X+.{2*⍨⍺-⍵}⍉⍵
        ⍝ average the samples assigned to each of the st.k clusters to get the new centroids
        {(+⌿⍵)÷≢⍵}∘⊃⍤0⊢{X⌿⍨inds=⍵}¨⍳st.k
    }
  • How easy is the implementation process?
    I began with the impression that models such as ridge regression or k-nearest neighbours would be easy to implement because they’re based on standard linear algebra operations like matrix multiplication and Euclidean distance. Others, however, like lasso or k-means, daunted me at first because it wasn’t immediately obvious how they could be implemented without explicit loops or branches. Many struggles and clumsy lines of code later, I realised where my error lay: I was trying to translate the imperative pseudo-code of those models bit by bit into APL. This strategy is fine when porting, say, JavaScript to Python, but array processing is in a class of its own and must be treated as such. Instead of fruitlessly endeavouring to translate imperative instructions into array instructions, I decided to start from the conceptual, non-imperative description of an algorithm and implement it directly in APL. Thereafter, my job became significantly easier, and there were no more major hurdles in my way. This is the most impactful lesson I learned – being an effective array programmer requires thinking like one.
  • What degree of noise is there?
    Recall that “noise” in this context refers to code that is not integral to the algorithm but is necessary for effective communication with the computer. If our algorithmic intent is vector addition, for example, noise in the C implementation could include advancing pointers or allocating and freeing heap memory. APL mostly dispenses with this type of artifact, and there are no frills in the APLearn codebase; only the essentials are there. This helps the developer to focus on the algorithm instead of the software. For example, the following code is the implementation of APLearn’s metrics, containing practically nothing to distract from the mathematical definition of each. In contrast, a C implementation would be much longer and more distracting.

    mae←{{(+/⍵)÷≢⍵},|⍺-⍵}                ⍝ mean absolute error
    mse←{{(+/⍵)÷≢⍵},2*⍨⍺-⍵}              ⍝ mean squared error
    rmse←{0.5*⍨⍺ mse ⍵}                  ⍝ root mean squared error
    acc←{{(+/⍵)÷≢⍵},⍺=⍵}                 ⍝ accuracy
    prec←{(+/,⍺⊥⍵)÷+/,⍵}                 ⍝ precision
    rec←{(+/,⍺⊥⍵)÷+/,⍺}                  ⍝ recall
    f1←{2×p×r÷(p←⍺ prec ⍵)+(r←⍺ rec ⍵)}  ⍝ F1 score
  • Is it readable?
    Unreadability is the top accusation levelled against APL. To an extent, that’s a subjective judgement that can’t be argued away, although it’s not fair to call a language unreadable simply because its syntax or characters appear strange at first glance. Ultimately, the reader has to deliver the final verdict on readability, but one thing is certain: assuming a firm grasp of APL syntax, it’s far easier to understand what a piece of code written in APL is meant to achieve than one written in most other languages. For example, the APLearn snippet below calculates the parameters of ridge regression. It is a perfect mirror of the mathematical solution (spelled out after this list), reinforcing the argument above that APL, being concise and noise-free, directly reflects algorithmic intent, aiding comprehension.

    st.w←y+.×⍨(⌹(X+.×⍨⍉X)+reg×∘.=⍨⍳≢⍉X)+.×⍉X
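
For reference – and assuming the conventional formulation of ridge regression, with reg playing the role of the regularisation strength λ – the closed-form solution that the snippet above mirrors is

    w = (XᵀX + λI)⁻¹ Xᵀ y

where X is the matrix of input samples and y is the vector of target values.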

Conclusion

I wrote APLearn to assess APL’s viability for tackling ML problems in a manner that is friendly to learners, researchers, and practitioners. APLearn is not flawless, and there is plenty of room for improvement. For example, some models have fixed hyperparameters that ought to be customisable, random forests are slow, and errors should be handled more gracefully. However, I believe that APLearn proves that APL is a reasonable choice for ML development:

  • It has the potential to achieve cutting-edge performance thanks to data parallelism.
  • With an array-first mindset, it’s faster and easier to implement many ML algorithms in APL than in other languages.
  • APL acts like an extension of linear algebra, so model descriptions translate seamlessly into code. This is an incredible benefit for students and researchers who would like to understand how a model works or modify existing algorithms without being overwhelmed by software-exclusive details.

In the future, I’m planning on improving the performance of APLearn, especially random forests. I’d also like to incorporate better documentation and more robust unit tests.


Relevant links:

JSM 2025 – Introducing APL and TamStat to Statisticians

Joint Statistical Meetings (JSM) is an annual conference hosted by the American Statistical Association. Last month, Josh, Martina, and I attended JSM 2025 in Nashville, Tennessee, together with Professor Stephen Mansour. Josh, Martina, and I were exhibitors, hosting a stall in the exposition hall alongside other organisations. Steve exhibited TamStat (a free-to-use statistical package written in Dyalog APL) alongside us, but he was also an attendee and gave a presentation and a workshop at the conference.

Josh, Stephen, Rich, and Martina at the Dyalog Ltd booth

Our large poster included TamStat as one of several use cases, a visual hook comparing APL with traditional mathematical notation, an example of using APL to compute k-means clustering, and information about Dyalog features. It was designed to intrigue people from a distance and encourage them to come closer to read the details. The mathematics and the clustering example were chosen because they are things that statisticians are already familiar with. One person said the APL glyphs reminded them of Inuktitut syllabics.

Promotional poster used at JSM 2025

TamStat attracted a lot of interest from both students and teachers. Students were impressed by its simplicity, and some teachers were looking for web browser-based applications because they are easy to access on their institutions’ systems. Steve has distilled core statistical concepts into a very digestible syntax and vocabulary that echoes APL. In his workshop, the students remarked on how easily they were able to pick up the syntax and use the graphical interface to solve the exercises.

A sticker on the back window of a car that reads "=>÷"

While driving around the city, we noticed this sticker on a car window. I’m not sure the driver knows that it’s valid APL! Can you tell what it means in English? Can you tell what it does in APL?

On the first night there was an informal event hosted in the exposition hall, including “speed poster” presentations on an array of computer monitors; presenters had only one hour to show their work (as opposed to the larger poster sessions, which took up a whole morning or afternoon). There was karaoke and refreshments, and exhibitors were in their booths for discussions. People did come and talk to us, although that wasn’t the main purpose of the evening. Our highlight was seeing Steve’s incredible cover of Your Cheatin’ Heart by Hank Williams.

Twice a day there were large poster sessions where academics were available to explain their research. Many of these centred on assessing statistical, machine learning, and deep learning methods, especially for high-dimensional data (lots of parameters, not array axes) or limited available data, often in the context of supporting medical studies. Although biostatistics formed the plurality, there was a range of research on display, including statistical methods applied to exoplanet detection (astrophysics), medical studies, and an adventure board game for teaching statistics. Many established statistical software packages were employed – most people used R, although we heard of a few cases of Julia being used for heavy computation.

We met so many lovely people during our week in Nashville, and were happy to be able to introduce so many people to Dyalog, APL, and TamStat. We learned a lot about statisticians, their use cases, and their feelings about the software they use. It’s always great showing Dyalog outside the APL community, and even better in person!

————————————

For more information on TamStat, see:

Employee Spotlight: Neil

Neil looking for (and failing to find) a photo of himself to include in this blog post…

A year passes quickly, especially when it’s your first year with a new company! Neil has now been our “JavaScript guy” in the Tools group for a full trip round the sun, and we asked him how he’s enjoyed his first twelve months. Fortunately, he seems to have settled in well – “Most of all, I don’t think any company has done as much to make me feel welcome, or to check in sometimes. Being a remote worker, that was always greatly appreciated. Also, I’ve worked for quite a few companies – so I mean it.”

For anyone considering working at Dyalog Ltd, Neil has the following advice – “Be prepared to be involved in conversations that push you to the limit of both your practical and your theoretical understanding. I mostly get to watch the APL masters work from a slight distance. If anything, that was the reason I chose to join Dyalog Ltd. The patient mentorship from Adám (and glimpses into APL language decisions and history) has been a delight. Watching how quickly Morten can work, knowing every nook and cranny, has been… humbling. However, that’s where it’s interesting: there is so much skill and ability packed into such a tiny company. But they’re nice people and there’s nothing to fear.”

At Dyalog Ltd, Neil mainly works on a project that will allow cross-platform ⎕WC-style interfaces, optionally remote, and even embedded in webpages. He has also dedicated himself to steadily increasing his APL skills. His broad experience with non-APL environments and languages provides another useful outside perspective on our efforts to connect APL with the outside world.

When not working, Neil enjoys spending time outside. He spends a lot of time in nature, walking and enjoying the lovely lakes near where he lives in Germany. He also grows wildflowers in the hope that they will flourish without any help, but unfortunately this is not always the case: this year, he had only two flowers before August – it’s as if they hadn’t read the packet that very clearly stated that they should bloom in ‘June – July’!

ECOOP 2025 – Presenting APL Standards and Array Notation

The European Conference on Object-Oriented Programming (ECOOP), Europe’s longest-standing annual programming languages conference, brings together researchers, practitioners, and students to share their ideas and experiences on all topics related to programming languages, software development, systems, and applications. Every year, ECOOP happens at a new location; this year it was held on the main campus of the Western Norway University of Applied Sciences in Bergen, Norway’s second-largest city, where a quarter of a million people live nestled between seven mountains.
A view over Bergen, Norway, showing colourful buildings, lush green trees, and a calm river winding through the city with hills in the background under a partly cloudy sky.

ECOOP 2025 took place last week, and Karta and Adám attended and presented at the Programming Language Standardization and Specification (PLSS) workshop. PLSS is a workshop module of the week-long ECOOP conference: a smaller group of domain experts in a more intimate setting, with enough time to ask questions and discuss content during breaks. This year’s event was arranged by Dr. Mikhail Barash, a researcher at the Bergen Language Design Laboratory at the University of Bergen, and Yulia Startsev, who works on Firefox’s JavaScript engine at Mozilla and represents Mozilla at the standardising body for JavaScript. The audience was composed mostly of computer scientists – predominantly academics, but also people working on language design, standards, and specifications.

Adám presented APL Array Notation (slides), in which he related the decade-long story of designing, specifying, and implementing APL’s answer to JavaScript’s JSON. Although the event’s main theme was programming language standards, the resulting APL Array Notation specification document has not yet been adopted as an official standard; however, it has been designed and written in a way that would facilitate this process.
Adám Brudzewsky presenting at PLSS 2025 in Bergen, Norway. He stands at a lectern in front of a projection screen showing APL code and the Dyalog version 20.0 logo, with the topic "APL Array Notation".
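
To give a rough flavour of the notation – these are my own illustrative examples rather than anything taken from the talk or the specification document – array notation lets literal data, including namespaces, be written out directly, much as JSON does for JavaScript:

    (1 2 3 ⋄ 'text' ⋄ 42)              ⍝ a three-item vector of mixed items
    [1 2 3 ⋄ 4 5 6]                    ⍝ a 2×3 matrix, one row per statement
    (name: 'Dyalog' ⋄ version: 20.0)   ⍝ a namespace with two members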

APL was first standardised in 1989; the latest, extended, standard was published in 2001. Karta’s presentation on APL Standards (slides) discussed how Dyalog APL conforms to the latest APL standard, differences between Dyalog APL and the versions of APL implemented by other vendors, and places where Dyalog APL diverges from the standard, along with our rationale for doing so.
Karta Kooner presenting at PLSS 2025 in Bergen, Norway. He stands beside two large screens displaying APL glyphs, with the Dyalog logo shown in the top-left corner.

Our talks were well-received, and the audience seemed engaged, asking several questions. Hopefully we have inspired some participants to look further into APL!

Employee Spotlight: Stine

Five years have passed since Stine joined Dyalog Ltd, and in only a few weeks she will also celebrate a year and a half as our CEO. When asked about her first five years, Stine said “Time passes very quickly when you’re having fun, so for me, this anniversary has come far sooner than I was expecting! Taking over this role from Gitte meant not rocking the boat too much – if it’s not broken, there is no reason to fix it. I do make changes though – I try to use my fresh perspective to identify places where we can improve. These changes must be made while still staying true to the Dyalog culture and making people feel safe, both within Dyalog Ltd and in our user community. I try to make small incremental changes, and give people plenty of opportunities to contribute and complain along the way.”

The most important skills for a person in Stine’s position are patience and empathy. Change can be difficult, even if it’s for the better – this is especially true at Dyalog Ltd, where Stine’s favourite part of our culture is the passion. “Every employee is passionate about the company and our product. Dyalog Ltd is not just a workplace, it is a family…a lifestyle. I try to guard that passion so that it never burns out, while feeding it new firewood in the form of good working conditions, influence on the product, and a collaborative leadership style.”

Stine works from our office in Copenhagen. It is also home to her favourite ducks – part of a flock of 400 mini ducks that Martina placed around the office as an April Fools’ joke!

When asked about her most significant project in her time at Dyalog Ltd, Stine replied “Since I became CEO, things stopped being about me. My main task is to make sure that everyone else has the chance to shine, so I have nothing that I have done on my own”. If she must identify her proudest achievement so far, it is getting everyone to arrive on time for meetings (sometimes even five minutes early!).

Even though Stine grew up with APL being the language of choice at home, she never really learnt it (despite trying multiple times), mostly because her interest lies more with people and processes. Helping people grow and making their lives easier makes Stine happy, so even though she is still being teased about her lack of APL skills(!), we can understand that she prioritises other things.

Stine is working on making Dyalog Ltd future proof. She aims to ensure that knowledge and skills are shared from the more experienced employees to the newcomers, so we can continue to operate and support our users for many years to come. We have many key people in the older generation of employees, and the challenge of externalising years of accumulated knowledge and experience is one that Stine has been happy to take on. Some of us have been with Dyalog Ltd for as long as Stine has been alive, and no single individual joining us today can possibly assume their roles without extensive mentoring and knowledge transfer. For Stine, playing matchmaker and facilitator in this process is both exciting and very rewarding. She plans to serve Dyalog Ltd for many years, continuing to focus on improving and simplifying our work lives, so that we can deliver a good product that remains in touch with the latest technological developments.

Outside work, Stine enjoys dancing and reading books, as well as taking care of her proudest achievement: her children. Long-standing members of the community might remember a Dyalog user meeting in Elsinore where she led a Zumba class every afternoon for the whole week; Stine still does Zumba twice a week, and it is one of the things that helps her to stay sane and in shape.