+/×∘. Array Programming
APL concepts in JavaScript
No libraries · Small runnable demos
Interactive explainer · Data structures in practice

Think in arrays,
not in loops

APL (1966) introduced a radical idea: operations apply to entire arrays at once, with no explicit loops. Today that idea powers NumPy, SQL window functions, Apache Arrow, GPU shaders, and every major data processing system.

+/
reduce (sum)
×/
reduce (product)
+\
scan (prefix sum)
∘.
outer product
⍉
transpose
⌸
group-by key
Rank & shape · Broadcasting · Reduce / Scan · Outer product · Tacit style · NumPy · SQL · Arrow · SIMD
§01

The core idea

In a conventional language you write a loop to add two arrays element-by-element. In an array language, the operator itself iterates — you write A + B and the language handles the traversal. This is not just syntactic sugar: because the whole operation is visible at once, the runtime can execute it with vectorised SIMD instructions on hardware, or distribute it across cores.

APL — c ← a + b
a ← 1 2 3 4 5
b ← 10 20 30 40 50
c ← a + b        ⍝ → 11 22 33 44 55
⍝ No loop. One operation.

JavaScript — element-wise add
const a = [1,2,3,4,5];
const b = [10,20,30,40,50];
const c = zip(a, b, (x,y) => x+y);
// → [11, 22, 33, 44, 55]
// Loop hidden inside zip()

The key shift is thinking in transformations on whole arrays rather than iterating over elements. This makes code shorter, easier to reason about in parallel, and directly maps to how hardware actually works (SIMD, GPU warps, vector registers).

array_primitives.js — the building blocks
// Array programming primitives — the foundation
// These replace explicit for-loops throughout the rest of this explainer

// Element-wise binary operation (APL: A f B)
const zip = (a, b, f) => a.map((v, i) => f(v, b[i]));

// Apply scalar f to every element (APL: f A)
const each = (a, f) => a.map(f);

// Reduce: collapse array to scalar (APL: f/A)
const reduce = (a, f, init) => init === undefined ? a.reduce(f) : a.reduce(f, init);

// Iota: integer sequence (APL: ⍳n)
const iota = n => Array.from({length: n}, (_, i) => i + 1);

// Reshape: reinterpret flat array as matrix (APL: m n ⍴ A)
const reshape = (a, rows, cols) =>
  Array.from({length: rows}, (_, r) => a.slice(r*cols, r*cols+cols));

// Grade up: return the index permutation that would sort the array (APL: ⍋A)
const gradeUp = a => [...a.keys()].sort((i, j) => a[i] - a[j]);

// Index-of: first occurrence (APL: A⍳B)
const indexOf = (haystack, needle) => haystack.indexOf(needle);

console.log("iota(8)  =", iota(8).join(" "));
console.log("each ×2  =", each(iota(8), x => x*2).join(" "));
console.log("zip + ×2 =", zip(iota(8), each(iota(8), x => x*2), (a,b) => a+b).join(" "));

const mat = reshape(iota(12), 3, 4);
console.log("reshape(⍳12, 3, 4):");
mat.forEach(row => console.log(" ", row.join("  ")));

const scrambled = [3, 1, 4, 1, 5, 9, 2, 6];
const order = gradeUp(scrambled);
console.log("gradeUp indices =", order.join(" "));
console.log("sorted via grade =", order.map(i => scrambled[i]).join(" "));

gradeUp does not return the sorted values themselves. It returns the positions of the values in sorted order. For example, if scrambled = [3,1,4,1], then gradeUp(scrambled) is [1,3,0,2]: read those positions back out of the original array, and you get the sorted result [1,1,3,4].
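A useful consequence of this index-returning behaviour is the double grade: grading the grade vector yields each element's rank, i.e. the position it would occupy after sorting (APL writes this ⍋⍋, and §09 uses the same trick for SQL-style RANK()). A minimal sketch, repeating gradeUp so it stands alone:

```javascript
// Double grade (APL: ⍋⍋A) — grading the grade gives each element's rank:
// rank[i] = position element i would occupy after sorting (0-based here).
const gradeUp = a => [...a.keys()].sort((i, j) => a[i] - a[j]);

const scrambled = [3, 1, 4, 1, 5, 9, 2, 6];
const ranks = gradeUp(gradeUp(scrambled));
console.log("ranks =", ranks.join(" "));  // → 3 0 4 1 5 7 2 6
// scrambled[0] = 3 is the 4th-smallest value, so its rank is 3.
```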

§02

Rank and shape

Every value in APL has a rank (number of dimensions) and a shape (size along each dimension). A single number is rank-0 (scalar). A list is rank-1 (vector). A table is rank-2 (matrix). Higher ranks are tensors.

Rank taxonomy
Rank 0 · scalar — e.g. 42
Rank 1 · vector — shape (5)
Rank 2 · matrix — shape (3×3)
Rank 3 · tensor — shape (2×3×3)

The shape determines which operations are valid and how they generalise. A function that works on a vector should work on each row of a matrix — this is the rank polymorphism that makes array languages composable.

rank_shape.js — shape, rank, and reshaping
// Rank and shape arithmetic — the type system of array programming

function shape(a) {
  const s = [];
  let cur = a;
  while (Array.isArray(cur)) { s.push(cur.length); cur = cur[0]; }
  return s;
}
const rank  = a => shape(a).length;
const numel = a => shape(a).reduce((p,d) => p*d, 1);

// Flatten any nested array (APL: ,A  "ravel")
const ravel = a => Array.isArray(a) ? a.flatMap(ravel) : [a];

// Reshape flat data into arbitrary shape (APL: s ⍴ A)
function reshapeND(flat, dims) {
  if (dims.length === 1) return flat.slice(0, dims[0]);
  const stride = dims.slice(1).reduce((p,d) => p*d, 1);
  return Array.from({length: dims[0]}, (_, i) =>
    reshapeND(flat.slice(i*stride), dims.slice(1)));
}

// Transpose (APL: ⍉A) — swap last two axes
function transpose(mat) {
  const rows = mat.length, cols = mat[0].length;
  return Array.from({length: cols}, (_, c) =>
    Array.from({length: rows}, (_, r) => mat[r][c]));
}

const flat = [1,2,3,4,5,6,7,8,9,10,11,12];
const m34  = reshapeND(flat, [3,4]);
const t234 = reshapeND(flat, [2,3,2]);

console.log("flat shape:",  shape(flat),  "rank:", rank(flat));
console.log("3×4  shape:",  shape(m34),   "rank:", rank(m34));
console.log("2×3×2 shape:", shape(t234),  "rank:", rank(t234));

console.log("\n3×4 matrix:");
m34.forEach(r => console.log(" ", r.join("  ")));

console.log("\nTransposed (4×3):");
transpose(m34).forEach(r => console.log(" ", r.join("  ")));

// Shape rules: reshape is just a different view — same data
console.log("\nravel(3×4):", ravel(m34).join(" "));
console.log("numel(3×4):", numel(m34), "(same as 12)");
§03

Scalar extension & broadcasting

When shapes don't match, array languages don't error — they broadcast: automatically extend the smaller array to match the larger one. A scalar extends to any shape. A vector of length n extends along any axis of size n.

broadcasting.js — scalar extension and shape coercion
// Broadcasting: automatic shape extension
// NumPy and APL both follow these rules

// Broadcast a scalar to any shape (APL: scalar f array → applies scalar to each)
const scalarBroadcast = (scalar, arr, f) => arr.map(v => f(scalar, v));

// Broadcast a vector along rows of a matrix (APL: mat + vec → add vec to each row)
const rowBroadcast = (mat, vec, f) =>
  mat.map(row => row.map((v, i) => f(v, vec[i])));

// Outer product IS broadcasting in disguise:
// col (n×1) + row (1×m) → matrix (n×m)
const outerAdd = (col, row) =>
  col.map(c => row.map(r => c + r));

const vec = [1, 2, 3, 4, 5];
const mat = [[1,2,3],[4,5,6],[7,8,9]];

console.log("Scalar 10 × vec:", scalarBroadcast(10, vec, (a,b) => a*b).join(" "));
console.log("Scalar 2 + mat:");
scalarBroadcast(2, mat, (s, row) => row.map(v => v + s))
  .forEach(r => console.log(" ", r.join("  ")));

console.log("\nRow broadcast: mat + [10,20,30]:");
rowBroadcast(mat, [10,20,30], (a,b) => a+b)
  .forEach(r => console.log(" ", r.join("  ")));

console.log("\nOuter add [1,2,3] ∘.+ [10,20,30,40]:");
outerAdd([1,2,3], [10,20,30,40])
  .forEach(r => console.log(" ", r.join("  ")));

// Broadcasting rule: shapes are compatible if, for each dimension,
// they are equal OR one of them is 1 (extends to match the other)
function broadcastShape(s1, s2) {
  const len = Math.max(s1.length, s2.length);
  const p1  = [...new Array(len - s1.length).fill(1), ...s1];
  const p2  = [...new Array(len - s2.length).fill(1), ...s2];
  return p1.map((d, i) => {
    if (d === p2[i]) return d;
    if (d === 1) return p2[i];
    if (p2[i] === 1) return d;
    throw new Error(`Incompatible: ${d} vs ${p2[i]}`);
  });
}
console.log("\nbroadcastShape([3,4],[4])    =", broadcastShape([3,4],[4]));
console.log("broadcastShape([3,1],[1,4])  =", broadcastShape([3,1],[1,4]));
console.log("broadcastShape([2,3,4],[3,4]) =", broadcastShape([2,3,4],[3,4]));
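broadcastShape computes the result shape but never materialises a broadcast. As a complement, here is a minimal 2-D sketch of the "stretch size-1 axes" rule in action — it assumes both operands are already matrices (wrap a vector as a 1×n or n×1 matrix first):

```javascript
// Generic 2-D broadcast: read each operand with its size-1 axes "stretched"
// to the common shape, then apply f element-wise.
const bcastGet = (m, r, c) => m[m.length === 1 ? 0 : r][m[0].length === 1 ? 0 : c];

function broadcast2D(A, B, f) {
  const rows = Math.max(A.length, B.length);
  const cols = Math.max(A[0].length, B[0].length);
  return Array.from({length: rows}, (_, r) =>
    Array.from({length: cols}, (_, c) => f(bcastGet(A, r, c), bcastGet(B, r, c))));
}

// col (3×1) + row (1×4) → 3×4 — the outer add from earlier, via broadcasting
const col = [[1],[2],[3]], row = [[10,20,30,40]];
broadcast2D(col, row, (a,b) => a+b).forEach(r => console.log(r.join(" ")));
// → 11 21 31 41 / 12 22 32 42 / 13 23 33 43
```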
§04

Reduce — collapsing dimensions

The reduce operator f/ inserts a binary function between every element of a vector, collapsing it to a scalar. +/ sums, ×/ multiplies, ⌈/ finds the maximum. Applied to a matrix, it reduces along a specified axis — giving a vector.

APL — f/ reduce
A ← 1 2 3 4 5
+/A          ⍝ → 15 (sum)
×/A          ⍝ → 120 (product)
⌈/A          ⍝ → 5 (max)
⌊/A          ⍝ → 1 (min)
M ← 3 4 ⍴ ⍳12
+/M          ⍝ → sum each row
+⌿M          ⍝ → sum each column

JavaScript — Array.reduce
const A = [1,2,3,4,5];
A.reduce((a,b)=>a+b)            // 15
A.reduce((a,b)=>a*b)            // 120
Math.max(...A)                  // 5
Math.min(...A)                  // 1
M.map(r=>r.reduce((a,b)=>a+b))  // row sums
colReduce(M, (a,b)=>a+b)        // col sums
reduce.js — reduce along any axis of a matrix
// Reduce (fold) — the universal aggregation primitive
// APL's f/A inserts f between every element: +/1 2 3 = 1+2+3 = 6

const reduceVec = (a, f) => a.slice(1).reduce(f, a[0]);

// Reduce along axis 0 (columns) or axis 1 (rows) of a matrix
function reduceMat(mat, f, axis = 1) {
  if (axis === 1) return mat.map(row => reduceVec(row, f));   // row reduction
  const cols = mat[0].length;
  return Array.from({length: cols}, (_, c) =>
    reduceVec(mat.map(row => row[c]), f));  // column reduction
}

const v = [3, 1, 4, 1, 5, 9, 2, 6];
console.log("Vector:", v.join(" "));
console.log("+/ sum    =", reduceVec(v, (a,b) => a+b));
console.log("×/ product =", reduceVec(v, (a,b) => a*b));
console.log("⌈/ max    =", reduceVec(v, (a,b) => Math.max(a,b)));
console.log("⌊/ min    =", reduceVec(v, (a,b) => Math.min(a,b)));
console.log("∨/ any>5  =", reduceVec(v.map(x => x > 5), (a,b) => a || b) ? "true" : "false");

const mat = [[3,1,4],[1,5,9],[2,6,5],[3,5,8]];
console.log("\n4×3 matrix:");
mat.forEach(r => console.log(" ", r.join("  ")));

console.log("+/ rows (axis=1):", reduceMat(mat, (a,b) => a+b, 1).join(" "));
console.log("+⌿ cols (axis=0):", reduceMat(mat, (a,b) => a+b, 0).join(" "));
console.log("⌈⌿ col max (axis=0):", reduceMat(mat, (a,b) => Math.max(a,b), 0).join(" "));

// Reduce a 3D tensor along each axis — mirrors NumPy's np.sum(T, axis=k)
function tensorReduceAxis0(tensor, f) {
  // tensor: [depth, rows, cols] → result: [rows, cols]
  const [D, R, C] = [tensor.length, tensor[0].length, tensor[0][0].length];
  return Array.from({length: R}, (_, r) =>
    Array.from({length: C}, (_, c) =>
      tensor.slice(1).reduce((acc, d) => f(acc, d[r][c]), tensor[0][r][c])));
}
const T = [[[1,2],[3,4]],[[5,6],[7,8]]];
console.log("\nTensor shape 2×2×2, sum along axis 0:");
tensorReduceAxis0(T, (a,b) => a+b).forEach(r => console.log(" ", r.join(" ")));
§05

Scan — the prefix operation

Where reduce collapses an array to a scalar, scan (f\) keeps all the intermediate results. +\ produces the running (prefix) sum. ×\ produces the running product. The output has the same length as the input.

Prefix sums are foundational in data processing: cumulative sales figures, running totals in SQL (SUM() OVER), histogram computation, and GPU parallel prefix algorithms all reduce to scan.
scan.js — prefix scans and their applications
// Scan (prefix reduction) — APL: f\A
// scan(f, [a,b,c,d]) = [a, f(a,b), f(f(a,b),c), f(f(f(a,b),c),d)]

const scan = (a, f) => {
  const result = [a[0]];
  for (let i = 1; i < a.length; i++)
    result.push(f(result[i-1], a[i]));
  return result;
};

// Application 1: prefix sums for histogram range queries
// Given freq[i] = count of items in bucket i,
// cumFreq[i] = total items in buckets 0..i
const freq    = [3, 7, 2, 9, 4, 1, 6];
const cumFreq = scan(freq, (a,b) => a+b);
console.log("freq:    ", freq.join("  "));
console.log("prefix+: ", cumFreq.join("  "));
// Range query: how many items in buckets 2..5?
console.log("items in buckets [2,5]:", cumFreq[5] - (cumFreq[1] ?? 0), "← O(1) after O(n) scan");

// Application 2: exclusive prefix sum (like SQL ROW_NUMBER with OVER)
const exclusiveScan = (a, f, identity) => [identity, ...scan(a.slice(0,-1), f)];
const offsets = exclusiveScan([5,3,8,2], (a,b) => a+b, 0);
console.log("\nlengths:  [5 3 8 2] (e.g. sizes of 4 string chunks)");
console.log("offsets:  ", offsets.join("  "), "← start byte of each chunk");

// Application 3: parallel prefix (how GPUs compute scans in O(log n) steps)
// Hillis-Steele algorithm — stride doubling
function parallelPrefixScan(a, f) {
  let cur = [...a];
  const n = a.length;
  let steps = 0;
  for (let stride = 1; stride < n; stride *= 2) {
    const next = [...cur];
    for (let i = stride; i < n; i++) next[i] = f(cur[i-stride], cur[i]);
    cur = next;
    steps++;
  }
  console.log(`  → ${steps} parallel steps (vs ${n-1} sequential)`);
  return cur;
}
console.log("\nParallel prefix scan of [1,1,1,1,1,1,1,1]:");
const ps = parallelPrefixScan([1,1,1,1,1,1,1,1], (a,b) => a+b);
console.log("  Result:", ps.join(" "));
§06

Outer product — ∘.f

The outer product A ∘.f B applies function f to every combination of elements from A and B, producing a matrix of shape ⍴A, ⍴B. With addition it produces an addition table; with equality it produces a membership matrix; with multiplication it's the multiplication table.

outer_product.js — outer product and its uses
// Outer product (APL: A ∘.f B)
// Every pair (a,b) from A×B — produces a len(A) × len(B) matrix

const outer = (a, b, f) => a.map(x => b.map(y => f(x, y)));

// Multiplication table (APL: A ∘.× A)
const iota9 = [1,2,3,4,5,6,7,8,9];
console.log("Multiplication table (first 5×5 of ∘.× ⍳9):");
outer(iota9.slice(0,5), iota9.slice(0,5), (a,b) => a*b)
  .forEach(r => console.log(" ", r.map(v => String(v).padStart(3)).join("")));

// Membership matrix: which elements of A appear in B?
// APL: A ∘.= B  (equivalent to np.equal.outer)
const A = [1,3,5], B = [1,2,3,4,5];
const membership = outer(A, B, (a,b) => a===b ? 1 : 0);
console.log("\nMembership: A=[1,3,5] ∘.= B=[1,2,3,4,5]");
membership.forEach((r, i) => console.log(`  A[${A[i]}] matches: ` + r.join(" ")));

// Pairwise distance matrix — used in k-nearest neighbours, clustering
const pts = [1,3,6,10];
const dists = outer(pts, pts, (a,b) => Math.abs(a-b));
console.log("\nPairwise |distance| matrix:");
dists.forEach(r => console.log("  ", r.map(v => String(v).padStart(3)).join("")));

// Boolean "less than" mask — gives a lower-triangular matrix
// Useful for masking in transformers (attention), graph adjacency
const n = 5, idx = [0,1,2,3,4];
const causal = outer(idx, idx, (r,c) => c <= r ? 1 : 0);
console.log("\nCausal mask (lower triangular) — used in GPT attention:");
causal.forEach(r => console.log("  ", r.join(" ")));
§07

The rank operator — f⍤k

The rank operator f⍤k applies function f to rank-k sub-arrays (cells) of its argument, then collects the results. sum⍤1 sums each rank-1 cell (each row) of a matrix. sort⍤1 sorts each row independently. In Python terms, this is closer to numpy.apply_along_axis or an axis-aware higher-order operation than to every use of NumPy's generic axis parameter.

APL — rank operator
M ← 3 4 ⍴ ⍳12
(+/)⍤1 M       ⍝ sum each row
(⌈/)⍤1 M       ⍝ max of each row
(⍋)⍤1 M        ⍝ grade-up each row
(⊂∘⍋⌷⊢)⍤1 M    ⍝ sort each row

JavaScript — row-wise special case
// A full rank operator is more general.
// Here we show the common "apply to each row" case.
const applyToRows = f => mat => mat.map(row => f(row));
applyToRows(sum)(M)      // row sums
applyToRows(max)(M)      // row maxes
applyToRows(gradeUp)(M)  // row grades
applyToRows(sort)(M)     // sort rows
rank_operator.js — applying functions along axes
// Rank-oriented array programming in JavaScript
// This is not a full general implementation of APL's f⍤k.
// It shows three important special cases: rows, columns, and pages.

const applyToRows = (f, mat) => mat.map(f);
const applyToCols = (f, mat) => {
  const T = mat[0].map((_, c) => mat.map(r => r[c])); // transpose
  return T.map(f);
};

// Rank-2 cells of a rank-3 tensor (each "page")
const applyToPages = (f, tensor) => tensor.map(f);

const sum     = a => a.reduce((x,y) => x+y, 0);
const norm    = a => Math.sqrt(a.reduce((x,y) => x + y*y, 0));
const sortAsc = a => [...a].sort((x,y) => x-y);
const gradeUp = a => [...a.keys()].sort((i,j) => a[i]-a[j]);
const softmax = a => {
  const m = Math.max(...a), exps = a.map(x => Math.exp(x-m));
  const s = sum(exps); return exps.map(x => +(x/s).toFixed(3));
};

const M = [[3,1,4,1],[5,9,2,6],[5,3,5,8]];
console.log("Matrix M:");
M.forEach(r => console.log("  ", r.join("  ")));

console.log("\nsum⍤1 (row sums):  ", applyToRows(sum, M).join("  "));
console.log("+⌿  (col sums):    ", applyToCols(sum, M).join("  "));
console.log("norm⍤1 (row norms): ", applyToRows(norm, M).map(v => v.toFixed(2)).join("  "));
console.log("sort⍤1 (sort rows):");
applyToRows(sortAsc, M).forEach(r => console.log("  ", r.join("  ")));

// Softmax of each row — key in transformer attention
const logits = [[2.0,1.0,0.1],[0.5,3.0,0.2]];
console.log("\nsoftmax⍤1 (each row of logit matrix):");
applyToRows(softmax, logits).forEach(r => console.log("  ", r.join(" ")));

// Rank-3: apply matrix operation to each page of a 3D tensor
const T = [[[1,2],[3,4]], [[5,6],[7,8]]];
const rowSumsPerPage = applyToPages(page => applyToRows(sum, page), T);
console.log("\nRow sums per page of 2×2×2 tensor:", JSON.stringify(rowSumsPerPage));
§08

Tacit / point-free style

APL encourages composing functions without naming intermediate values (tacit or point-free style). A fork or 3-train, (f g h) x, means g(f(x), h(x)) — useful for computing "mean = sum÷count" without an explicit variable. A 2-train in APL is an atop: (f g) x means f(g(x)). This section focuses on the fork pattern because it is the most visually distinctive and maps cleanly to the JavaScript combinators below.

APL — tacit definitions
⍝ Fork: mean = (+/ ÷ ≢)
mean ← +/ ÷ ≢
⍝ Normalise: subtract mean, divide by std
std  ← {((+/(⍵-mean ⍵)*2)÷≢⍵)*0.5}   ⍝ population std (dfn, not tacit)
norm ← (⊢-mean) ÷ std
⍝ Composition with ∘
sumSq ← +/∘(×⍨)                      ⍝ sum of squares

JavaScript — point-free equivalents
// fork(f, g, h)(x) = g(f(x), h(x))
const fork = (f,g,h) => x => g(f(x),h(x));
const mean = fork(sum, (a,b)=>a/b, len);
// compose: (f∘g)(x) = f(g(x))
const compose = (...fs) => x => fs.reduceRight((v,f)=>f(v),x);
const sumSq = compose(sum, a=>a.map(x=>x*x));
tacit.js — point-free composition and combinators
// Tacit/point-free style — functions defined by composition, not application
// This is how APL programmers write entire programs without naming variables

// Combinators
const compose = (...fs) => x => fs.reduceRight((v, f) => f(v), x);
const fork    = (f, g, h) => x => g(f(x), h(x));  // (f g h) x = g(f x, h x)
const atop    = (f, g) => x => f(g(x));            // f∘g
const over    = (f, g) => (x, y) => f(g(x), g(y)); // (f⍥g) x y = f(g x, g y)

// Primitives
const sum  = a => a.reduce((x,y) => x+y, 0);
const len  = a => a.length;
const sqr  = a => a.map(x => x*x);
const sqrt = Math.sqrt;

// Point-free definitions (no explicit data variable)
const mean  = fork(sum, (a,n) => a/n, len);
const sumSq = compose(sum, sqr);
const std   = a => sqrt(mean(sqr(a.map(x => x - mean(a)))));
const norm  = a => a.map(x => (x - mean(a)) / std(a));   // z-score normalise
const rms   = compose(sqrt, atop(mean, sqr));            // root mean square

// Dot product (APL: +.× inner product) — element-wise multiply, then sum
const dotProduct = (a, b) => sum(a.map((v, i) => v * b[i]));

const data = [2, 4, 4, 4, 5, 5, 7, 9];
console.log("data:  ", data.join(" "));
console.log("mean:  ", mean(data).toFixed(4));
console.log("std:   ", std(data).toFixed(4));
console.log("rms:   ", rms(data).toFixed(4));
console.log("sumSq: ", sumSq(data));
console.log("norm:  ", norm(data).map(v => v.toFixed(2)).join(" "));

// Pipeline-style (tacit chain) — APL reads right to left
// variance = mean of squared deviations from the mean
const variance = compose(mean, sqr, a => a.map(x => x - mean(a)));
console.log("\nvariance (composed):", variance(data).toFixed(4));
console.log("std² matches:      ", (std(data)**2).toFixed(4));

// Dot product — inner product with + and ×
const a = [1,2,3], b = [4,5,6];
console.log("\n[1,2,3] +.× [4,5,6] =", dotProduct(a, b));
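tacit.js defines the over combinator but never exercises it, and atop appears only inside rms. A short standalone sketch of the 2-train (atop) and ⍥ (over) in isolation, with the combinators repeated so the snippet runs on its own:

```javascript
// Standalone demo of the atop (2-train) and over combinators
const atop = (f, g) => x => f(g(x));               // (f g) x   = f(g x)
const over = (f, g) => (x, y) => f(g(x), g(y));    // (f⍥g) x y = f(g x, g y)
const sum  = a => a.reduce((x, y) => x + y, 0);

// Atop: negate-after-sum — APL (-+/) A
const negSum = atop(x => -x, sum);
console.log(negSum([1, 2, 3]));                    // → -6

// Over: compare two arrays by their totals — APL =⍥(+/)
const sameTotal = over((x, y) => x === y, sum);
console.log(sameTotal([1, 2, 3], [6]));            // → true
console.log(sameTotal([1, 2, 3], [7]));            // → false
```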
§09

Modern relevance

APL's notation was introduced in Kenneth Iverson's 1962 book A Programming Language and implemented as a language in 1966. Its ideas now underpin every major data processing system. The mapping is direct and often intentional — many of these systems were built by people who knew APL.

APL concept     | NumPy / PyTorch       | SQL                            | Streaming / GPU
f/ reduce       | np.sum, .reduce()     | SUM, COUNT, MAX                | Kafka aggregations
f\ scan         | np.cumsum, cummax     | SUM() OVER (...)               | GPU prefix sum
∘.f outer       | np.outer, einsum      | CROSS JOIN                     | tensor matmul
f⍤k rank        | np.apply_along_axis   | GROUP BY + window              | warp-level ops
broadcasting    | NumPy broadcasting    | scalar subqueries              | SIMD splat
⍋ grade-up      | np.argsort            | ROW_NUMBER() OVER (ORDER BY)   | radix sort
⌸ key           | pd.groupby            | GROUP BY                       | hash partitioning
modern_array.js — SQL window functions as array operations
// SQL window functions are array operations in disguise
// Every window function = a scan, rank, or reduce over a partition

// Data: sales records
const sales = [
  { dept: "eng", emp: "alice", amount: 120 },
  { dept: "eng", emp: "bob",   amount: 95  },
  { dept: "mkt", emp: "carol", amount: 200 },
  { dept: "mkt", emp: "dave",  amount: 150 },
  { dept: "eng", emp: "eve",   amount: 180 },
  { dept: "mkt", emp: "frank", amount: 90  },
];

// APL ⌸ (key operator) = GROUP BY
function groupBy(arr, keyFn) {
  const map = new Map();
  for (const item of arr) {
    const k = keyFn(item);
    if (!map.has(k)) map.set(k, []);
    map.get(k).push(item);
  }
  return map;
}

// SQL: SELECT dept, SUM(amount) FROM sales GROUP BY dept
// APL sketch: depts {+/⍵}⌸ amounts — key operator sums amounts per dept
const byDept = groupBy(sales, s => s.dept);
console.log("GROUP BY dept, SUM(amount):");
for (const [dept, rows] of byDept)
  console.log(`  ${dept}: ${rows.reduce((s,r) => s+r.amount, 0)}`);

// SQL: SUM(amount) OVER (PARTITION BY dept ORDER BY amount)
// This is a SCAN (prefix sum) within each partition
const withRunning = [...sales].sort((a,b) => a.dept.localeCompare(b.dept) || a.amount-b.amount);
const partitioned = groupBy(withRunning, s => s.dept);

console.log("\nSUM(amount) OVER (PARTITION BY dept ORDER BY amount) — running total:");
for (const [dept, rows] of partitioned) {
  let acc = 0;
  rows.forEach(r => {
    acc += r.amount;
    console.log(`  ${dept} | ${r.emp.padEnd(6)} | ${r.amount} | running: ${acc}`);
  });
}

// SQL: RANK() OVER (PARTITION BY dept ORDER BY amount DESC)
// APL: ⍋⍋ applied per partition (double grade = rank)
console.log("\nRANK() OVER (PARTITION BY dept ORDER BY amount DESC):");
for (const [dept, rows] of partitioned) {
  const sorted = [...rows].sort((a,b) => b.amount-a.amount);
  sorted.forEach((r, rank) => console.log(`  ${dept} | rank ${rank+1} | ${r.emp}: ${r.amount}`));
}
§10

Stream processing

Real-time stream processing systems like Kafka Streams, Apache Flink, and RxJS can be usefully understood through array programming primitives — they apply related ideas to time-ordered sequences rather than in-memory arrays. Map, filter, scan, windowed aggregation, and group-by are the family resemblance to look for here, even though production streaming systems add time semantics, distribution, state management, and recovery.

Stream operators — map to APL primitives

Source     | events[]        | ⍳∞
Map        | transform each  | f¨ each
Filter     | select elements | mask/A compress
Window     | sliding groups  | f⍤1 on matrix
Aggregate  | reduce window   | +/ reduce
Sink       | output[]        | ⍝ result
stream_processing.js — a streaming pipeline using array primitives
// A toy stream pipeline expressed with array-programming primitives
// The goal is conceptual correspondence, not a production streaming runtime.

class Stream {
  constructor(data) { this.data = data; }

  // APL: f¨  — apply f to each element
  map(f)    { return new Stream(this.data.map(f)); }

  // APL: mask/A — compress by boolean mask (mask ← f¨ data)
  filter(f) { return new Stream(this.data.filter(f)); }

  // APL: f/ — reduce to scalar
  reduce(f, init) { return this.data.reduce(f, init); }

  // APL: f\ — scan (running aggregate)
  scan(f, init) {
    const result = [];
    let acc = init;
    for (const x of this.data) { acc = f(acc, x); result.push(acc); }
    return new Stream(result);
  }

  // APL: f⍤1 on a matrix of overlapping rows — sliding window then reduce
  window(size, f) {
    const result = [];
    for (let i = size - 1; i < this.data.length; i++)
      result.push(f(this.data.slice(i - size + 1, i + 1)));
    return new Stream(result);
  }

  // APL: ⌸ (key operator) — group by key, then reduce each group
  groupReduce(keyFn, valFn, reduceFn) {
    const map = new Map();
    for (const x of this.data) {
      const k = keyFn(x), v = valFn(x);
      map.set(k, reduceFn(map.get(k) ?? 0, v));
    }
    return new Stream([...map.entries()].map(([k,v]) => ({key:k, val:v})));
  }

  collect() { return this.data; }
}

// Simulate a stream of 10 sensor readings
const readings = data => new Stream(data);
const sensor = readings([
  {t:0, sensor:"A", val:12}, {t:1, sensor:"B", val:30}, {t:2, sensor:"A", val:15},
  {t:3, sensor:"B", val:22}, {t:4, sensor:"A", val:18}, {t:5, sensor:"B", val:35},
  {t:6, sensor:"A", val:9},  {t:7, sensor:"B", val:28}, {t:8, sensor:"A", val:21},
  {t:9, sensor:"B", val:40},
]);

// Pipeline 1: values per sensor
const vals = sensor.map(x => x.val);
console.log("Values:", vals.collect().join(" "));

// Pipeline 2: 3-event sliding window mean (APL: mean⍤1 applied to overlapping rows)
const sma3 = vals.window(3, w => +(w.reduce((a,b)=>a+b,0)/w.length).toFixed(1));
console.log("3-event SMA:", sma3.collect().join(" "));

// Pipeline 3: running max (APL: ⌈\ scan)
const runMax = vals.scan(Math.max, -Infinity);
console.log("Running max:", runMax.collect().join(" "));

// Pipeline 4: anomaly detection — flag readings > 1.5× running mean
const runMean = sensor
  .scan((acc, x) => ({sum: acc.sum+x.val, n: acc.n+1}), {sum:0,n:0})
  .map(acc => acc.sum/acc.n);

console.log("\nAnomaly detection (val > 1.5× running mean):");
sensor.collect().forEach((x, i) => {
  const rm = runMean.collect()[i];
  const flag = x.val > 1.5 * rm ? " ← ANOMALY" : "";
  console.log(`  t=${x.t} sensor=${x.sensor} val=${x.val} runMean=${rm.toFixed(1)}${flag}`);
});

// Pipeline 5: group-reduce (APL ⌸ key): total per sensor
const totals = sensor.groupReduce(x => x.sensor, x => x.val, (a,b) => a+b);
console.log("\nTotal per sensor (⌸ group-reduce):");
totals.collect().forEach(({key, val}) => console.log(`  ${key}: ${val}`));
The Stream class above is a toy, eager, in-memory model, not a replacement for RxJS, Kafka Streams, or Flink. The point is that those systems expose many of the same conceptual building blocks: map (each), filter (compress), scan (prefix), window + aggregate, and groupBy + reduce (⌸). Array programming is one useful lens for understanding why those APIs feel so composable.