Employee Spotlight: Andrea

When Andrea joined Dyalog Ltd a year ago, she quite literally took up residence next to Stine, as they shared a desk in the 4-by-3-metre office that Stine and Martina worked in. That changed in March, when Andrea oversaw moving the Copenhagen branch of Dyalog Ltd from an office hotel into a three-room office of their own. Handling the move was one of her first big tasks, apart from co-organising our internal company meetings. Now everyone in Copenhagen has room to work, and there’s even room for visitors! There are, of course, also 401 small ducks and four regular-sized ducks.

“In some ways, it doesn’t feel like it’s only been a year. I feel like I’ve been at Dyalog Ltd forever – in a good way!” Andrea says, before continuing “but on the other hand, I still have so much to learn, so I also still feel quite new. Next year will be my first user meeting, for example, so I’m looking forward to taking part in that.”

When not working, Andrea enjoys creating delicious treats to share with family and friends

Trying to summarise the last year can be hard. “My days are varied, and I do a lot of different things. Some days I do accounts, plan meetings, and write up minutes. Others I advise on communication and presentations, draft new processes, and facilitate discussions. This is part of what I like about working at Dyalog Ltd. I always have a plan for what I want to get done when I start my day. Sometimes that happens, and sometimes, other things turn out to be more pressing. I like the unpredictability, and I like trying to navigate that, keeping clarity high and chaos low.”

As an executive assistant, Andrea is involved in multiple varied projects across Dyalog Ltd, helping Stine and her colleagues wherever she can. “That’s how I like it, really. I like helping. I think it’s as basic as that. My best days and proudest moments are when I make it easier for others to do their job or make them feel more ready to give a presentation.”

Andrea plans to spend many years to come at Dyalog Ltd, doing exactly what she’s doing now. “It seems that as soon as I’ve finished one task, two more have appeared on my list, so I don’t fear running out of work. I’m excited to keep working on my tasks in communication, administration, and management support. Dyalog Ltd is a really warm and welcoming place to work, and even though most of my colleagues are far away physically, they are never far away socially or mentally.”

Updating us on her time outside of work, Andrea must admit that she still doesn’t spend as much time sipping coffee and reading the newspaper as she would like. She tends to pack her schedule tight, but her new goal is to have only two social events per week, leaving time for calling friends, spending time with her partner, and hopefully some of that quiet reading time!

DYNA Fall 2025 – A Review

DYNA Fall 2025 brought together APL enthusiasts, customers, and Dyalog Ltd staff for two days of presentations and workshops in New York City. Attendance was strong, energy was high, and the sessions showcased both the maturity and momentum of the Dyalog ecosystem.

The event took place in the Jay Suites Conference Center at the end of September. We’d arranged a bigger venue this time, but it was still filled to capacity! There were eight members of Team Dyalog present, and the European contingent gathered at Brian Becker’s house for a few days beforehand for a “Conclave”. As we’re a geographically distributed team, we try to take the opportunities that present themselves to work together in person – our thanks to Brian for hosting.

Day 1: Presentations

The Dyalog Road Map – Fall 2025 Edition
Morten Kromberg

First impressions: so many people! Old friends, new acquaintances, and a palpable buzz in the air. Our CTO, Morten, started proceedings with his customary roadmap and vision – what have we done, what are we doing now, and where are we going next. He highlighted both the necessary Sisyphean work, such as the ongoing improvements and interfaces to other systems, and the more exciting developments, such as new language features (array and namespace notation, ⎕VSET, ⎕VGET, and inline tracing). Morten emphasised our growth, in terms of both revenue and staff. He introduced our newest hires, including Asher Harvey-Smith, whose first day happened to be today!

Dyalog and AI
Stefan Kruger

Following Morten’s presentation, Stefan talked about Dyalog and AI. AI is a hot topic that impacts most companies in some way, but, APL being a minority language, LLM performance on it has been poor. Stefan’s talk covered some of the recent developments in the field, highlighting the fact that today’s frontier models are capable of correctly explaining even quite complex APL code, but that they still struggle to write code unaided. He demonstrated an LLM agent capable of executing code, running tests, and reading documentation – showing that while this improved performance, we’re still far from the productivity improvements that a Python developer could expect.

JAWS – Jarvis And WebSockets
Brian Becker

Brian took over to present JAWS. No, not the 70s movie, but the WebSocket extension to Jarvis: JAWS = Jarvis And WebSockets (Jarvis is our web service framework). He outlined the use cases that HTTP serves well, contrasted them with the use cases for which you need something else, and explained how WebSockets can fill that role. Brian showed some practical examples of how WebSockets are used, and how the WebSocket functionality slots neatly into Jarvis.

A Dyalog Interface to Kafka
Martina Crippa

Martina introduced us to Apache Kafka, an event-streaming platform widely used as the backbone of the data infrastructure in large organisations, such as banks. Large cloud providers often use Kafka as the glue between their services. Martina has been working on a Dyalog interface to Kafka. She explained why it matters, and demonstrated the APL Kafka API live. The Dyalog Kafka interface is being built primarily in response to customer requests, but it’s fully open source, and we will be offering optional paid support packages for those who want them.

Static Analysis of APL for Tooling and Compliance
Aaron Hsu and Brandon Wilson

The morning finished with Aaron and Brandon talking about the Static Analysis project. Static analysis is becoming increasingly important, especially in regulated industries such as finance, where compliance demands percolate all the way down to platform vendors and drive tooling requirements. They demonstrated the principles upon which the Dyalog Static Analyser is being built: the parser from the Co-dfns compiler, which has been extended to handle the whole of Dyalog APL. A clear visualisation of how the static analyser’s rules select features from the analysed code showcased the powerful potential of this approach. Presenting as a double act is hard to get right, but Aaron and Brandon’s presentation went down well.

Lessons Learned when Converting from APL+Win to Dyalog APL
Alex Holtzapple of Metsim International (MSI)

METSIM® is an all-in-one solution for mining and metallurgical operations; it is used in 57(!) different countries around the world – “in the remotest corners of the map”, as Alex put it. Over the past 18 months, Alex and MSI have worked closely with us to migrate METSIM® from APL+Win to Dyalog. After introducing himself and METSIM®, Alex described the process of working with Dyalog Ltd. He had a clear vision of what he wanted to achieve: he specifically wanted to preserve the UI’s design as it stands, because that places less of a burden on customers to modify their workflows and established processes. The migration has been a successful project, and the Dyalog-based product is now in the hands of their customers. Both MSI and Dyalog Ltd learned important lessons from the migration project. Alex said that they’d thought about this migration project for a long time, but what finally swung the decision was a visit to our HQ in Bramley, UK, to “look the team in the eye”.

Dyalog APL: Our (Not So) Secret Ingredient
Mark Wolfson of BIG

Mark Wolfson told us how BIG is disrupting the jewellery business, and how Dyalog forms a central component of that effort. BIG’s stack is a great example of a modern, heterogeneous services architecture: by the very nature of the business, they need to be able to consume data from a multitude of diverse systems and protocols. After this data has been transformed into a common format, it is consumed by several internal systems to “derive insight from chaos”. BIG has always been an APL promoter, and Mark is doubling down on this: BIG is increasingly reliant on APL, and they’re investing significantly in their APL capabilities.

The Data Science Journey
Josh David

Josh talked us through Dyalog’s data science journey. Dyalog APL has been a natural fit for data analysis since long before the term ‘Data Science’ became fashionable. Today, the field is – like so many others – dominated by Python and R. At Dyalog Ltd, we firmly believe that we have a role to play in this space, and we’re actively trying to attract new practitioners. Josh recounted his experience exhibiting at the 2025 Joint Statistical Meetings (JSM) in Nashville, together with Martina Crippa, Rich Park, and Steve Mansour. There was a lot of interest from delegates, especially when they were walked through the extremely compact formulation of the k-means clustering algorithm in APL. A renewed focus on the Data Science application of Dyalog APL will inevitably impact our development roadmap – we need to improve both our data-ingest story and the raw performance in some key areas.

Josh also showcased Steve Mansour’s statistics package TamStat. TamStat is primarily intended as a package for teaching statistics, but it has several other interesting facets, too: it can be used as a library for statistics routines that you can use in your own applications (it’s open source), and also as a “statistics DSL” – a compact, dedicated way to express and evaluate statistics formulae.

What Can Vectorised Trees Do For You?
Asher Harvey-Smith

Asher is the newest member of Team Dyalog, and he started his role by giving a presentation and hosting a workshop on the first two days of his employment! However, his association with Dyalog Ltd goes back longer than that: he has previously completed two internships (the second as “senior intern”), and is already a seasoned presenter (he stepped up to the podium at the Dyalog ’24 user meeting in Glasgow). Asher wants to popularise the tree-manipulation techniques used in the Co-dfns compiler and in the static analyser. Through a set of clear examples and animations, Asher has found a great pedagogical treatment of a set of techniques that many people have had difficulty grappling with. He also outlined when the “parent vector” technique is not appropriate.

ArrayLab: Building a 3D APL Game with raylibAPL
Holden Hoover, University of Waterloo

Holden Hoover, the inaugural APL Forge winner in 2024, demonstrated a 3D game called ArrayLab that he’s been building on top of Brian Ellingsgaard’s raylibAPL (Brian is also a former summer intern at Dyalog Ltd). One purpose for developing the ArrayLab game was to test the raylibAPL bindings whilst simultaneously exercising the Dyalog interpreter. When working with native extensions there are a lot of things that can go wrong! Holden also had to learn a lot about game development, in particular in-game physics. The ArrayLab game is a work in progress, but he showed a live walk-through of his progress so far, demonstrating correct physics and collision detection.

The APL Trust US
Diane Hymas and Mark Wolfson, The APL Trust US

The first day concluded with Diane Hymas and Mark Wolfson reporting on the progress they, together with a wider team, have been able to make on the APL Trust US. The purpose of the APL Trust is to “give back” to the community. If you have an idea for something that you want to do with APL, you can apply for funding help from the APL Trust – the application process is being formalised at the moment. The good news from Diane was that the APL Trust has now been launched as a registered tax-deductible charity.

Socialising!

An important part of multi-day gatherings of this kind is the impromptu hallway encounter – a networking opportunity where like-minded people meet and learn from one another. After the presentations had concluded, we retreated to the Yard House restaurant a few blocks up the road for dinner, conversation, and making friends. For several of us, this was our first visit to New York, and taking in the sights and sounds of Times Square at night is definitely an experience!

Day 2: Workshops

Tuesday was workshop day. We had a packed programme across two streams. In the morning, you could choose between learning how to use Jarvis with Brian and Stefan, or a deep-dive into namespaces with Josh, Morten, and Martina.

We have found that the namespace aspect of APL is frequently misunderstood, and we’ve run the “Introduction to Namespaces” workshop a few times now. Josh managed a full house, with Morten and Martina assisting. In the room next door, Brian gave a hands-on, practical introduction to Jarvis and JAWS, with detailed explanations of the different use cases. Jarvis is already a core component in many users’ deployed Dyalog applications; with the introduction of WebSocket support, Jarvis/JAWS will find its way into even more application deployments.

In the afternoon you could choose between learning how to use Link with instruction from Morten and Stefan, or an introduction to the key operator with Asher, who also attracted a full house.

Key is one of the advanced operators in Dyalog APL, and mastering it unlocks a lot of applications, especially in the Data Science domain. Simplifying hugely, the science goes to the left and the data goes to the right! Asher had a job on his hands teaching such a large group with a range of abilities, but he was ably assisted by Martina and Josh. Next door, Morten was helping the group through gradually more complex source-code scenarios with Link. Link is now into its fourth major version, and is a mature workflow component that lets Dyalog users take advantage of a range of external tools that expect to operate on text files, such as Git, GitHub, VS Code, and many others.

In Conclusion…

The 2025 Fall DYNA conference was a well-attended, well-received event, with a great mix of newcomers, veterans, customers, and members of Team Dyalog. The highlights for us were the two customer presentations from Alex Holtzapple and Mark Wolfson – it is always interesting for us to see how people use our product in the field! Alex is new to Dyalog, and it was fantastic to hear him reporting such a positive experience and outcome. Although Mark has been a Dyalog user for longer, he was no less enthusiastic, and to hear that they’re really growing their APL development team is a vote of confidence. It was also great to hear how The APL Trust US is taking off.

DYNA Fall 2025 reflected our growing momentum, both in technology and community. Each talk underscored a shared commitment to pushing APL forward, not just as a language, but as a living ecosystem shaped by its practitioners.


Materials from DYNA Fall 2025 are being uploaded to the event webpage as they become available.

APLearn: The Winning APL Forge 2025 Project

By: Borna Ahmadzadeh

Borna is the 2025 winner of the APL Forge, our annual competition designed to promote the use and development of APL by challenging participants to create innovative open-source libraries and commercial applications. Winners of the APL Forge are invited to present their winning work at the next Dyalog user meeting. As this will not be held until October 2026, he has written a blog post about his work as a preview of what to expect from his Dyalog ’26 presentation.


As a machine learning (ML) developer, I regularly use the popular Python package scikit-learn to handle many aspects of ML workflows, including data pre-processing, model training and evaluation, and more. Its simple, uniform interface makes it straightforward to design ML pipelines or switch between the myriad algorithms that it supplies without significant changes to the code. As a result, scikit-learn has seen extensive adoption across many domains and industries.
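The uniform interface described above can be sketched in a few lines of plain Python (my own illustration, not scikit-learn code): every model exposes the same fit/predict pair, so swapping algorithms requires no change to the surrounding pipeline.

```python
# Minimal sketch of a uniform estimator interface. The class names and
# models here are hypothetical stand-ins, not real scikit-learn estimators.

class MeanModel:
    """Predicts the mean of the training targets."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean_ for _ in X]

class LastValueModel:
    """Predicts the last training target seen."""
    def fit(self, X, y):
        self.last_ = y[-1]
        return self
    def predict(self, X):
        return [self.last_ for _ in X]

def run_pipeline(model, X_train, y_train, X_test):
    # Identical pipeline code regardless of which model is plugged in.
    return model.fit(X_train, y_train).predict(X_test)

X, y = [[1], [2], [3]], [10, 20, 30]
print(run_pipeline(MeanModel(), X, y, [[4]]))       # [20.0]
print(run_pipeline(LastValueModel(), X, y, [[4]]))  # [30]
```

scikit-learn’s estimators follow essentially this convention, which is what makes them interchangeable inside pipelines.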

Because it has to deal with large arrays and costly operations, scikit-learn, though written in Python, is heavily dependent on NumPy, an APL-inspired numerical computing package. NumPy itself delegates expensive routines to C and, potentially, Fortran to maximise performance. Portions of scikit-learn are also written in Cython, a statically compiled, C-like superset of Python, to further improve efficiency. All of this means that a complete understanding of scikit-learn requires, in addition to a knowledge of Python, familiarity with Cython, C, and even Fortran. Although you can develop professional scikit-learn applications without such profound knowledge, modifying an algorithm or implementing one from scratch to experiment with research ideas can be very challenging without expertise in those languages. From a didactic perspective this situation leaves much to be desired, as learners need to dedicate as much effort to the software-specific, low-level details of their code as to the actual algorithm. For instance, matrix multiplication, which can be described in a handful of steps – this constitutes its algorithmic intent – takes hundreds of lines to implement efficiently in C… and then there is the issue of portability!

This forces ML developers and students, the majority of whom don’t have the technical aptitude to write well-tuned low-level code, to compromise between efficiency and tweakability, flexibility, readability, and so on. My winning APL Forge submission, APLearn, is a case study investigating whether APL can reconcile these seemingly opposed objectives.

Why APL?

To many outsiders, APL is an arcane write-once-read-never language that is exclusively suited to manipulating tabular (typically financial) data; a glyph-heavy alternative to Microsoft Excel. There are many blog posts and real-world projects that dispel this mistaken belief, but I’d like to enumerate a few of APL’s advantages that are of particular relevance to APLearn:

  • Arrays: ML data is usually best represented as multi-dimensional arrays, which are manipulated in various ways to optimise model parameters, make predictions, and so on. Array programming is a natural fit for this.
  • Performance: Data parallelism is at the heart of APL, that is, operations are carried out across entire arrays at once. This can be exploited to run code very efficiently on single instruction, multiple data hardware, especially GPUs (for example, Co-dfns).
  • Algorithms, not software: Languages such as C have a tendency to pollute algorithmic intent with software “noise”, that is, they often get stuck on how to communicate with the computer, neglecting the algorithm itself. Careful memory management, complicated pointer usage, cache-friendly practices… they shift the focus away from what the code is doing (the algorithm) to how it’s doing it (the software). APL doesn’t suffer from this problem because it resembles mathematical notation and, therefore, the implementation is tied to the description, akin to declarative programming.
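As a tiny illustration of the last two points – using NumPy as a stand-in for an array language – compare an element-at-a-time loop with the equivalent whole-array expression:

```python
import numpy as np

# Imperative style: the loop is "software noise" around the intent a*x + b.
def scale_shift_loop(xs, a, b):
    out = []
    for x in xs:
        out.append(a * x + b)
    return out

# Array style: one data-parallel expression over the whole array,
# the direct analogue of writing b+a×x in APL.
xs = np.arange(5)
print(2 * xs + 1)  # [1 3 5 7 9]
```

The array form states the intent once, and the runtime is free to apply it across all elements at once – the property that SIMD hardware and projects like Co-dfns exploit.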

This led me to ask: Can we efficiently and elegantly develop end-to-end ML workflows in APL to simultaneously facilitate practical applications, research exploration, and pedagogy?

Goals

APLearn is my attempt at recreating a small subset of scikit-learn in APL while being mindful of the distinct style called for by array programming. Key features include training and inference methods for common models, pre-processing modules for normalising or encoding data, and miscellaneous utilities like evaluation metrics. The core design principles of APLearn are:

  • Uniformity: Models and utilities follow a uniform interface for training, prediction, or data transformation, thereby reducing the learning curve for new users.
  • Transparency: The full algorithm behind a model is available in a single file with minimal software details.
  • Composability: Components can be chained together, reading like a sentence with verbs as ML methods (for example, make classifications using logistic regression) and nouns as data (for example, input samples).

The first of these isn’t unique to APLearn, and is a cornerstone of many mainstream data science or ML packages such as scikit-learn. However, APL’s true power shows when considering transparency and composability. With a concise grammar that is analogous to a supercharged form of linear algebra, the reader is spared the necessity of cutting through layers of software noise to arrive at the algorithm underlying the code. Additionally, APL’s syntax natively supports chaining without resorting to special objects like pipes in R or Pipeline in scikit-learn. These two work together to surmount the problems (discussed above) that scikit-learn faces.

Example

A user guide and several examples showing APLearn in action can be found in the APLearn GitHub repository. A video on YouTube demonstrates one of these examples.

Development Journey

In my (biased) opinion, the APLearn user experience is relatively smooth, and one only needs a very basic knowledge of scikit-learn and APL to be able to set up basic ML workflows. That leaves out my own developer experience working on this project – after all, the initial inquiry was how easy it would be to build, not use, something like scikit-learn in APL. I have multiple takeaways and observations to share:

  • Do arrays suffice?
    APL isn’t appropriate for problems that can’t be adequately reduced to array-like structures, so this entire project would be futile if ML processes couldn’t be captured through arrays. The good news is that, for the majority of algorithms, arrays are the ideal representation. Concrete examples include generalised linear models, statistic-based methods like normalisation or naïve Bayes, and classification or regression metrics. There is, however, one major exception – trees. The naïve way to represent trees in APL, employed by APLearn, is a recursive approach where the first element of a vector stands for the tree node and subsequent elements are sub-trees or leaves. The following, used in APLearn’s decision tree implementation, sets the current parent node using the threshold function and recurses to construct the left and right children. Despite its simplicity, this scheme is undesirable for two reasons. First, it’s contrary to the data-parallel spirit of APL. Second, trees can quickly grow in size, and highly nested arrays are notoriously inefficient. Smarter alternatives exist, but they’re not easy to get the hang of, hurting the didactic aspect of APLearn. Although not many models rely on trees (the main exceptions being decision trees and space partitioning algorithms), this is an important caveat to consider.

    (thresh i) ((⍺⍺ ∇∇ ⍵⍵)X_l y_l) ((⍺⍺ ∇∇ ⍵⍵)X_r y_r)
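For readers who don’t speak APL, the same nested scheme can be sketched in Python (my illustration, with lists standing in for nested arrays): a tree is [node, left, right], where left and right are sub-trees or bare leaf values.

```python
# Hypothetical sketch of the nested-vector tree representation:
# each tree is [node, left, right]; left/right are sub-trees or leaves.

def build(values):
    """Recursively turn a sorted list into a balanced nested-list tree."""
    if not values:
        return None
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return [values[mid], build(values[:mid]), build(values[mid + 1:])]

def depth(tree):
    """Depth of the nested structure; leaves count as depth 1."""
    if not isinstance(tree, list):
        return 1
    return 1 + max(depth(tree[1]), depth(tree[2]))

t = build([1, 2, 3, 4, 5, 6, 7])
print(t)         # [4, [2, 1, 3], [6, 5, 7]]
print(depth(t))  # 3
```

Every level of the tree adds a layer of nesting, which is exactly the inefficiency noted above: deep trees become deeply nested (and therefore slow) arrays.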
  • Is APL efficient?
    I didn’t expect (Dyalog) APL to keep up with scikit-learn performance-wise, and I wasn’t wrong; APLearn is about an order of magnitude slower on average, or even thousands of times slower for k-means clustering and random forests. The culprit for the inefficiency in k-means is the expensive inner product in the update step, depicted below. Luckily, this doesn’t make it unusable, and many real-world workflows can feasibly be executed using APLearn, albeit more slowly. Parts of the code are incompatible with Co-dfns at the time of writing, so that remains an exciting future avenue for unlocking substantial performance enhancements.

    upd←{
        inds←⊃⍤1⍋⍤1⊢X+.{2*⍨⍺-⍵}⍉⍵
        {(+⌿⍵)÷≢⍵}∘⊃⍤0⊢{X⌿⍨inds=⍵}¨⍳st.k
    }
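In NumPy terms (my sketch, assuming centroids as a k×d array and samples X as an n×d array), the same update builds a pairwise squared-distance table – the counterpart of the X+.{2*⍨⍺-⍵}⍉⍵ inner product – assigns each point to its nearest centroid, and recomputes the cluster means:

```python
import numpy as np

def kmeans_update(X, centroids):
    """One k-means update: assign points, then recompute centroids."""
    # (n, k) table of squared distances from every point to every centroid
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    inds = d2.argmin(axis=1)  # index of the nearest centroid per point
    return np.stack([X[inds == j].mean(axis=0)
                     for j in range(len(centroids))])

X = np.array([[0.0, 0], [0, 1], [10, 10], [10, 11]])
c = np.array([[0.0, 0], [10.0, 10]])
print(kmeans_update(X, c))  # two cluster means: (0, 0.5) and (10, 10.5)
```

The (n, k) distance table is where the cost lives – just as in the APL version, it materialises the full point-by-centroid comparison on every iteration.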
  • How easy is the implementation process?
    I began with the impression that models such as ridge regression or k-nearest neighbours would be easy to implement because they’re based on standard linear algebra operations like matrix multiplication and Euclidean distance. Others, however, like lasso or k-means, daunted me at first because it wasn’t immediately obvious how they could be implemented without explicit loops or branches. Many struggles and clumsy lines of code later, I realised where my error lay: I was trying to translate the imperative pseudo-code of those models bit by bit into APL. This strategy is fine when porting, say, JavaScript to Python, but array processing is in a class of its own and must be treated as such. Instead of fruitlessly endeavouring to translate imperative instructions into array instructions, I decided to start from the conceptual, non-imperative description of an algorithm and implement it directly in APL. Thereafter, my job became significantly easier, and there were no more major hurdles in my way. This is the most impactful lesson I learned – being an effective array programmer requires thinking like one.
  • What degree of noise is there?
    Recall that “noise” in this context refers to code that is not integral to the algorithm but is necessary for effective communication with the computer. If our algorithmic intent is vector addition, for example, noise in the C implementation could comprise advancing pointers or allocating and freeing heap memory. APL mostly dispenses with this type of artefact, and there are no frills in the APLearn codebase; only the essentials are there. This helps the developer to focus on the algorithm instead of the software. For example, the following code is the implementation of APLearn’s metrics, containing practically nothing that would distract from the mathematical definition of each. In contrast, a C implementation would be much longer and more distracting.

    mae←{{(+/⍵)÷≢⍵},|⍺-⍵}
    mse←{{(+/⍵)÷≢⍵},2*⍨⍺-⍵}
    rmse←{0.5*⍨⍺ mse ⍵}
    acc←{{(+/⍵)÷≢⍵},⍺=⍵}
    prec←{(+/,⍺⊥⍵)÷+/,⍵}
    rec←{(+/,⍺⊥⍵)÷+/,⍺}
    f1←{2×p×r÷(p←⍺ prec ⍵)+(r←⍺ rec ⍵)}
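For comparison, NumPy renderings (my own sketch) of the first four metrics are nearly as direct; prec, rec, and f1 are omitted here because they depend on APLearn’s internal encoding of predictions:

```python
import numpy as np

# NumPy counterparts of mae, mse, rmse, and acc above (illustrative sketch):
mae  = lambda y, t: np.mean(np.abs(y - t))   # mean absolute error
mse  = lambda y, t: np.mean((y - t) ** 2)    # mean squared error
rmse = lambda y, t: np.sqrt(mse(y, t))       # root mean squared error
acc  = lambda y, t: np.mean(y == t)          # classification accuracy

y_pred = np.array([1.0, 2.0, 5.0])
y_true = np.array([1.0, 2.0, 3.0])
print(mae(y_pred, y_true))  # 2/3: mean of |0|, |0|, |2|
print(mse(y_pred, y_true))  # 4/3: mean of 0, 0, 4
```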
  • Is it readable?
    Unreadability is the top accusation levelled against APL. To an extent, that’s a subjective judgement that can’t be argued against, although it’s not fair to call a language unreadable simply because its syntax or characters appear strange at first glance. Ultimately, the reader needs to deliver the final verdict on readability, but one thing is certain: assuming a firm grasp of APL syntax, it’s far easier to understand what a piece of code written in APL is meant to achieve than one written in most other languages. For example, the APLearn snippet below calculates the parameters of ridge regression. It is a perfect mirror of the mathematical solution, reinforcing the argument above that APL, being concise and noise-free, directly reflects algorithmic intent, aiding comprehension.

    st.w←y+.×⍨(⌹(X+.×⍨⍉X)+reg×∘.=⍨⍳≢⍉X)+.×⍉X
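The same closed form, w = (XᵀX + reg·I)⁻¹ Xᵀ y, reads almost identically in NumPy (my sketch; I use solve rather than the explicit inverse that ⌹ denotes, which is the numerically safer equivalent):

```python
import numpy as np

def ridge(X, y, reg):
    """Closed-form ridge regression: w = (X'X + reg*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)

X = np.array([[1.0, 0], [0, 1], [1, 1]])
y = np.array([1.0, 2.0, 3.0])
print(ridge(X, y, 0.0))  # [1. 2.] – plain least squares when reg is 0
```

Increasing reg shrinks the weights towards zero, trading a little bias for numerical stability – exactly the role the reg×∘.=⍨⍳≢⍉X identity-matrix term plays in the APL line.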

Conclusion

I wrote APLearn to assess APL’s viability for tackling ML problems in a manner that is friendly to learners, researchers, and practitioners. APLearn is not flawless, and there is plenty of room for improvement. For example, some models have fixed hyperparameters that ought to be customisable, random forests are slow, and errors should be handled more gracefully. However, I believe that APLearn proves that APL is a reasonable choice for ML development:

  • It has the potential to achieve cutting-edge performance thanks to data parallelism.
  • With an array-first mindset, it’s faster and easier to implement many ML algorithms in APL than in other languages.
  • APL acts like an extension of linear algebra, so model descriptions seamlessly translate to code. This is an incredible benefit for students and researchers who would like to understand how an algorithm works, or modify existing algorithms, without being overwhelmed by software-exclusive details.

In the future, I’m planning on improving the performance of APLearn, especially random forests. I’d also like to incorporate better documentation and more robust unit tests.


Relevant links: