
The EML Operator: Expressing All Elementary Functions with Just exp(x) - ln(y)

Ikesan

If you’ve studied digital circuits, you probably know the NAND gate story. AND, OR, NOT — any logic circuit can be built from NAND gates alone. Does the world of continuous mathematics — \sin, \cos, \ln, \sqrt{} — also have a “single operator that does everything”?

In March 2026, Andrzej Odrzywołek of Jagiellonian University (Poland) published a paper claiming to have found exactly that.

arXiv:2603.21852 “All elementary functions from a single binary operator”

I read through it and sorted out what it can and can’t do.

What Is the EML Operator?

The definition is simple. Given two inputs x, y:

\text{eml}(x, y) = e^x - \ln y

That’s it. A binary operator combining exp and ln — EML stands for Exp-Minus-Log. The paper claims that this operator plus the constant 1 is enough to reproduce every function on a scientific calculator.

The grammar:

S \to 1 \mid \text{eml}(S, S)

Only two choices — “the constant 1” or “feed two expressions into eml” — and any elementary function can be expressed.

Building Constants First

The starting point is \text{eml}(1, 1) = e^1 - \ln 1 = e - 0 = e, so Euler’s number e pops out immediately. From there, constants are bootstrapped one by one.

| Constant | EML Expression | Tree Depth |
| --- | --- | --- |
| e | \text{eml}(1, 1) | 1 |
| 0 | \text{eml}(1, \text{eml}(1, 1)) | 2 |
| -1 | \text{eml}(1, \text{eml}(\text{eml}(1,1), 1)) | 3 |
| \pi | (5 levels of nesting) | 5 |
| i | (6 levels of nesting) | 6 |

Tracing the derivation of 00:

\text{eml}(1, \text{eml}(1, 1)) = e^1 - \ln(\text{eml}(1,1)) = e - \ln(e) = e - 1

Wait, e - 1 \neq 0? The paper actually discusses “compiler-based” constructions and “direct search” constructions separately — the table above shows the compiler’s optimized version. The direct approach uses identities like \ln 1 = 0 and \text{eml}(0, 1) = e^0 - \ln 1 = 1 - 0 = 1 to build things up.

Constructing \pi and the imaginary unit i uses Euler’s formula e^{i\pi} = -1 in reverse. Once -1 is available, \ln(-1) = i\pi (principal value) gives access to both \pi and i. This requires the principal value of the complex logarithm, so real numbers alone don’t suffice — an important constraint discussed later.
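This principal-value step is easy to check directly. A minimal Python sketch, using `cmath` to extend the operator to complex arguments (an extension the paper's construction also relies on):

```python
import cmath
import math

def eml(x, y):
    # EML over the complex numbers; cmath.log takes the principal branch
    return cmath.exp(x) - cmath.log(y)

# Once -1 is available, ln(-1) = i*pi (principal value) exposes pi and i:
# eml(1, -1) = e^1 - ln(-1) = e - i*pi
z = eml(1, -1)
print(z.real, z.imag)  # approximately e and -pi
```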

Exp and Ln Are Straightforward

Extracting e^x and \ln x is intuitive.

e^x = \text{eml}(x, 1) = e^x - \ln 1 = e^x - 0 = e^x

Since \ln 1 = 0, plugging 1 into the second argument kills the \ln term and e^x remains.

\ln x takes a bit more work:

\ln x = \text{eml}(1, \text{eml}(\text{eml}(1, x), 1))

Expanding from the inside out:

  1. \text{eml}(1, x) = e^1 - \ln x = e - \ln x
  2. \text{eml}(e - \ln x, 1) = e^{e - \ln x} - 0 = e^{e - \ln x}
  3. \text{eml}(1, e^{e - \ln x}) = e - \ln(e^{e - \ln x}) = e - (e - \ln x) = \ln x

Three levels of nesting to get a logarithm. The core EML technique: feed e^x into \ln so they cancel each other out.

graph TD
    A["eml"] --> B["1"]
    A --> C["eml"]
    C --> D["eml"]
    C --> E["1"]
    D --> F["1"]
    D --> G["x"]
    style A fill:#4a90d9,color:#fff
    style C fill:#4a90d9,color:#fff
    style D fill:#4a90d9,color:#fff
    style B fill:#e8e8e8
    style E fill:#e8e8e8
    style F fill:#e8e8e8
    style G fill:#f5a623,color:#fff

The diagram above shows the EML tree for \ln x. Blue nodes are all the same \text{eml} operator, gray is the constant 1, orange is the variable x.

Constructing Arithmetic

This is where it gets interesting. With nothing but \exp and \ln, you can build addition and multiplication.

Addition: x + y

x + y = \ln(e^x \cdot e^y) = \ln(e^{x+y})

Translating this into pure EML is much harder than it looks: the product e^x \cdot e^y must itself be expressed with \text{eml}, which requires multiplication — and multiplication in turn needs addition. The paper’s compiler resolves this circularity at tree depth 27 (direct search finds depth 19). Dozens of nesting levels for a single addition.

Multiplication: x \times y

x \times y = e^{\ln x + \ln y}

Convert to addition via logarithms, then exponentiate back — the classic slide rule trick.

Reciprocal: 1/x

1/x = e^{-\ln x}, so we first need to negate \ln x. Constructing negation (-x) requires going through 0, and 0 itself demands nesting — so despite the apparent simplicity, it runs deep. Compiler version: depth 65. Direct search: depth 15.
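As a sanity check, here is a hybrid Python sketch of the reciprocal — exp and ln go through EML, but the negation stays native, so this is not the paper's pure depth-65/15 tree:

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

def eml_ln(x):
    # ln x = eml(1, eml(eml(1, x), 1)) -- the depth-3 nesting from earlier
    return eml(1, eml(eml(1, x), 1))

def eml_recip(x):
    # Hybrid: 1/x = e^(-ln x); the negation here is native, not EML
    return eml(-eml_ln(x), 1)

print(eml_recip(4.0))  # ~0.25
```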

Complexity Comparison

| Operation | EML Compiler Depth | Direct Search Depth |
| --- | --- | --- |
| e^x | 3 | 3 |
| \ln x | 7 | 7 |
| -x | 57 | 15 |
| 1/x | 65 | 15 |
| x + y | 27 | 19 |
| x \times y | 41 | 17 |
| \sqrt{x} | 139 | 43+ |
| \sin x | 100+ | 75+ |

\sqrt{x} hits tree depth 139. \sin x exceeds 100 levels of nesting. “One operator can do everything” is technically true, but the expression blowup is staggering.

Trig Functions via Euler’s Formula

Building \sin and \cos with EML goes through Euler’s formula.

e^{ix} = \cos x + i \sin x

\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \quad \cos x = \frac{e^{ix} + e^{-ix}}{2}

graph LR
    A["Want sin x"] --> B["First construct i"]
    B --> C["Compute ix<br/>(multiplication)"]
    C --> D["Compute e^ix<br/>and e^-ix"]
    D --> E["Subtract and<br/>divide by 2i"]
    E --> F["sin x done"]
    style A fill:#e8e8e8
    style F fill:#4a90d9,color:#fff

Every step in this pipeline becomes EML nesting, so the final expression is enormous. Constructing i alone takes depth 6. Multiplication adds depth 4-5. Subtraction and division pile on more.
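The pipeline can be sketched in a few lines of Python. Here only the two exponentials go through EML; i, subtraction, and division stay native — a hybrid illustration, not the paper's pure tree:

```python
import cmath
import math

def eml(x, y):
    # EML extended to complex arguments (principal branch of log)
    return cmath.exp(x) - cmath.log(y)

def eml_exp(x):
    # e^x = eml(x, 1): ln 1 = 0 removes the second term
    return eml(x, 1)

def eml_sin(x):
    # sin x = (e^(ix) - e^(-ix)) / (2i); exponentials via EML,
    # i and the arithmetic are native
    return (eml_exp(1j * x) - eml_exp(-1j * x)) / 2j

print(eml_sin(1.0).real)  # ~0.8414709848078965
```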

Inverse trig functions (\arcsin, \arctan, etc.) are converted to logarithmic form:

\arctan x = \frac{1}{2i} \ln \frac{1 + ix}{1 - ix}
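The identity itself is easy to verify numerically. In this sketch `cmath.log` stands in for the EML-built logarithm, so only the logarithmic form is being checked:

```python
import cmath
import math

def arctan_via_log(x):
    # arctan x = (1/(2i)) * ln((1 + ix)/(1 - ix)), principal branch
    return (cmath.log((1 + 1j * x) / (1 - 1j * x)) / 2j).real

print(arctan_via_log(1.0))  # ~pi/4
```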

Comparison with NAND Gates

The paper’s pitch is “the NAND gate of continuous mathematics,” but there are meaningful differences.

| Property | NAND (Boolean) | EML (Continuous) |
| --- | --- | --- |
| Input | Two bits | Two real/complex numbers |
| Constants | Not needed (self-generating) | 1 required |
| Grammar | S \to a \mid \text{NAND}(S,S) | S \to 1 \mid \text{eml}(S,S) |
| Cost per node | One gate | Full \exp and \ln computation |
| Practical use | Actually used in chips | Theoretical existence proof |

NAND gates are practical because each one is cheap and fast. EML nodes each involve computing \exp and \ln, so doing a single addition means running exponentials and logarithms dozens of times. Calling this “the continuous NAND” is somewhat misleading.

Dependence on Complex Numbers and Extended Reals

An important caveat: EML doesn’t close over the reals alone.

  • Constructing \pi requires \ln(-1) = i\pi — the principal value of the complex logarithm
  • All trig functions go through Euler’s formula — complex arithmetic internally
  • The conventions \ln 0 = -\infty and e^{-\infty} = 0 assume the extended reals

Standard floating-point arithmetic in Python or Julia won’t handle every case. NumPy and PyTorch follow IEEE 754 conventions for inf and signed zero, so they work — but “runs everywhere” it is not.
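The caveat is easy to reproduce with Python's standard library alone:

```python
import math

# e^(-inf) = 0 works exactly as the extended-real convention expects:
assert math.exp(float("-inf")) == 0.0

# ...but ln 0 raises instead of returning -inf, so a naive EML
# implementation on math.log cannot use the ln 0 = -inf convention:
try:
    math.log(0.0)
except ValueError:
    print("math.log(0.0) raises ValueError")  # NumPy's log returns -inf instead
```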

Testing in 5 Languages

Reading the paper leaves the question “does this actually work?” open, so I implemented the EML operator in Node.js, Python, PHP, Go, and Rust. Tests cover generating the constant ee, extracting exe^x and lnx\ln x, round-tripping, and multiplication/addition.

Implementing the EML Operator

The definition is a one-liner in every language — just write e^x - \ln y directly.

JavaScript (Node.js v25.3.0):

const eml = (x, y) => Math.exp(x) - Math.log(y);

Python 3.9.6:

import math

def eml(x, y):
    return math.exp(x) - math.log(y)

PHP 8.5.1:

function eml(float $x, float $y): float {
    return exp($x) - log($y);
}

Go 1.26.2:

func eml(x, y float64) float64 {
    return math.Exp(x) - math.Log(y)
}

Rust:

fn eml(x: f64, y: f64) -> f64 {
    x.exp() - y.ln()
}

Extracting ln (Depth-3 Nesting)

\ln x = \text{eml}(1, \text{eml}(\text{eml}(1, x), 1)) nests eml three levels deep.

JavaScript:

const emlLn = (x) => eml(1, eml(eml(1, x), 1));

Python:

def eml_ln(x):
    return eml(1, eml(eml(1, x), 1))

PHP:

function eml_ln(float $x): float {
    return eml(1, eml(eml(1, $x), 1));
}

Go:

func emlLn(x float64) float64 {
    return eml(1, eml(eml(1, x), 1))
}

Rust:

fn eml_ln(x: f64) -> f64 {
    eml(1.0, eml(eml(1.0, x), 1.0))
}

The Bootstrap Problem with Multiplication and Addition

Trying to build multiplication and addition purely from EML runs into a chicken-and-egg circular dependency.

  • Multiplication x \times y = e^{\ln x + \ln y}: exp and ln are extractable from EML, but addition is needed internally
  • Addition x + y = \ln(e^x \cdot e^y): same — exp and ln come from EML, but multiplication is needed internally

graph LR
    A["Build multiplication<br/>with EML"] --> B["exp(ln x + ln y)"]
    B --> C["Need addition"]
    C --> D["Build addition<br/>with EML"]
    D --> E["ln(exp x · exp y)"]
    E --> F["Need multiplication"]
    F --> A
    style A fill:#e8e8e8
    style D fill:#e8e8e8
    style C fill:#f5a623,color:#fff
    style F fill:#f5a623,color:#fff

The paper’s “compiler” brute-forces through this circularity to produce trees of depth 27-41, but hand-writing that level of nesting isn’t realistic. Since the goal here is to verify practicality, I used a “hybrid approach” — extracting exp and ln via EML and using native operations for the rest.

JavaScript:

// Multiplication: exp(ln(x) + ln(y)) — exp and ln via EML, + is native
const emlMul = (x, y) => eml(emlLn(x) + emlLn(y), 1);

// Addition: ln(exp(x) * exp(y)) — exp and ln via EML, * is native
const emlAdd = (x, y) => emlLn(eml(x, 1) * eml(y, 1));

Python:

def eml_mul(x, y):
    return eml(eml_ln(x) + eml_ln(y), 1)

def eml_add(x, y):
    return eml_ln(eml(x, 1) * eml(y, 1))

PHP:

function eml_mul(float $x, float $y): float {
    return eml(eml_ln($x) + eml_ln($y), 1);
}

function eml_add(float $x, float $y): float {
    return eml_ln(eml($x, 1) * eml($y, 1));
}

Go:

func emlMul(x, y float64) float64 {
    return eml(emlLn(x)+emlLn(y), 1)
}

func emlAdd(x, y float64) float64 {
    return emlLn(eml(x, 1) * eml(y, 1))
}

Rust:

fn eml_mul(x: f64, y: f64) -> f64 {
    eml(eml_ln(x) + eml_ln(y), 1.0)
}

fn eml_add(x: f64, y: f64) -> f64 {
    eml_ln(eml(x, 1.0) * eml(y, 1.0))
}

Results

All 5 languages use IEEE 754 double precision, so results are nearly identical. Below is Node.js (v25.3.0) output. The other 4 languages matched to 15 significant digits.

Constant ee and exp (Depth 1)

| Input | EML Result | Expected | Error |
| --- | --- | --- | --- |
| eml(1, 1) | 2.718281828459045 | e | 0 |
| eml(0, 1) | 1.000000000000000 | e^0 = 1 | 0 |
| eml(2, 1) | 7.389056098930650 | e^2 | 0 |
| eml(-1, 1) | 0.367879441171442 | e^{-1} | 0 |

eml(x, 1) just eliminates \ln 1 = 0, so it returns exactly the same value as native exp(). Zero error.

ln (Depth 3)

| Input x | EML Result | Native \ln | Error |
| --- | --- | --- | --- |
| 1 | 0.000000000000000 | 0.000000000000000 | 0 |
| e | 1.000000000000000 | 1.000000000000000 | 0 |
| 2 | 0.693147180559945 | 0.693147180559945 | 1.11e-16 |
| 10 | 2.302585092994046 | 2.302585092994046 | 0 |
| 0.5 | -0.693147180559945 | -0.693147180559945 | 1.11e-16 |

At depth 3, errors on the order of 10^{-16} start appearing. That is about half of IEEE 754’s machine epsilon (approximately 2.22 \times 10^{-16}), so within 1 ULP (unit in the last place).
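The claim can be checked with `math.ulp` (Python 3.9+). Exact libm results vary by platform, so this sketch only asserts a loose bound:

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

def eml_ln(x):
    # the depth-3 construction: ln x = eml(1, eml(eml(1, x), 1))
    return eml(1, eml(eml(1, x), 1))

# Error of the depth-3 logarithm at x = 2, next to 1 ULP of ln 2
err = abs(eml_ln(2.0) - math.log(2.0))
print(err, "vs 1 ULP =", math.ulp(math.log(2.0)))
```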

exp(ln(x)) Round Trip (Depth 4)

| Input x | Result | Error |
| --- | --- | --- |
| 1 | 1.000000000000000 | 0 |
| 2 | 2.000000000000000 | 0 |
| 3 | 3.000000000000000 | 4.44e-16 |
| 10 | 10.000000000000002 | 1.78e-15 |
| 100 | 100.000000000000043 | 4.26e-14 |

Error grows with input magnitude. At x = 100: 4.26 \times 10^{-14}. This from just depth-4 eml calls. \sin x requires depth 100+, so the error accumulation is easy to imagine.

Multiplication (Hybrid)

| Expression | EML Result | Expected | Error |
| --- | --- | --- | --- |
| 2 x 3 | 6.000000000000000 | 6 | 0 to 8.88e-16 |
| 4 x 5 | 19.999999999999996 | 20 | 3.55e-15 |
| 1.5 x 2.5 | 3.749999999999999 | 3.75 | 8.88e-16 |
| 10 x 10 | 100.000000000000043 | 100 | 4.26e-14 |

10 \times 10 yields 100.000000000000043. Native 10 * 10 = 100 and done. Via EML’s exp-ln-add-exp chain, 10^{-14}-level error creeps in.

The 2 \times 3 error range of “0 to 8.88e-16” reflects cross-language differences, discussed below.

Addition (Hybrid)

| Expression | EML Result | Expected | Error |
| --- | --- | --- | --- |
| 1 + 2 | 3.000000000000000 | 3 | 0 |
| 3 + 4 | 7.000000000000000 | 7 | 0 |
| -1 + 5 | 4.000000000000000 | 4 | 0 |
| 0.1 + 0.2 | 0.300000000000000 | 0.3 | 2.22e-16 |

Addition is more precise than multiplication. In the \ln(e^x \cdot e^y) route, even when intermediate exponential values are large, the logarithm cancels them out, limiting error accumulation.

The 0.1 + 0.2 error of 2.22 \times 10^{-16} isn’t EML’s fault — it’s the famous IEEE 754 floating-point issue. Native 0.1 + 0.2 doesn’t exactly equal 0.3 either.
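This is straightforward to confirm with native arithmetic alone, no EML involved:

```python
# Native IEEE 754 doubles: 0.1 and 0.2 have no exact binary representation
assert 0.1 + 0.2 != 0.3
print(0.1 + 0.2)  # 0.30000000000000004
```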

Cross-Language Differences

The only notable difference was in 2 \times 3 multiplication:

| Language | 2 x 3 Result | Error |
| --- | --- | --- |
| Node.js v25.3.0 | 6.000000000000000 | 0 |
| Python 3.9.6 | 6.000000000000001 | 8.88e-16 |
| PHP 8.5.1 | 6.000000000000001 | 8.88e-16 |
| Go 1.26.2 | 6.000000000000000 | 0 |
| Rust | 6.000000000000001 | 8.88e-16 |

Node.js and Go hit zero error; Python, PHP, and Rust are off by 1 ULP. This comes from differences in the internal exp/log implementations — rounding modes, instruction set selection — all within IEEE 754 spec. The cross-language difference is effectively negligible.

An interesting detail: Node.js (V8 engine) and Go produce identical results. Both likely compute exp/log via paths close to CPU hardware instructions. Python (via libm), PHP (via libm), and Rust (via LLVM’s libm) take different rounding paths, producing a least-significant-bit difference.

Summary: depth 1 is error-free, depth 3 hits 10^{-16}, and round-trip at depth 4 balloons to 10^{-14} proportional to input magnitude. Cross-language differences are at the LSB level — practically irrelevant.

| Nesting Depth | Operation | Error Magnitude |
| --- | --- | --- |
| 1 | e^x | 0 (exact match with native) |
| 3 | \ln x | 10^{-16} (within 1 ULP) |
| 4 | \exp(\ln(x)) | 10^{-16} to 10^{-14} |
| 5-7 | Multiplication, addition | 10^{-16} to 10^{-14} |

Error accumulates with each nesting level. These results are from the hybrid approach (using native operations where needed), so the paper’s compiler output at depth 27-65 for pure EML trees would produce orders-of-magnitude larger errors.

Application to Symbolic Regression

The paper’s latter half experiments with using EML trees as “learnable circuits” for symbolic regression.

graph TD
    A["Data points<br/>(x, y) pairs"] --> B["Construct parameterized<br/>EML tree"]
    B --> C["Optimize parameters<br/>with Adam optimizer"]
    C --> D{"MSE near 0?"}
    D -->|Yes| E["Snap parameters<br/>to 0/1 to get<br/>closed-form expression"]
    D -->|No| F["Increase depth<br/>and retry"]
    style E fill:#4a90d9,color:#fff

The master formula for a depth-n EML tree has 5 \times 2^n - 6 parameters. Each input takes the form \alpha + \beta x + \gamma f as a linear combination, optimized via gradient descent.

Results by Depth

| Tree Depth | Parameter Count | Exact Recovery Rate |
| --- | --- | --- |
| 2 | 14 | 100% |
| 3-4 | 34-74 | ~25% |
| 5 | 154 | <1% |
| 6+ | 314+ | <1% |
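The parameter counts in the table follow directly from the paper's 5 \times 2^n - 6 master-formula size:

```python
def param_count(depth):
    # Parameters in a depth-n EML master formula: 5 * 2^n - 6
    return 5 * 2 ** depth - 6

# Depths 2 through 6 reproduce the table: 14, 34, 74, 154, 314
print([param_count(n) for n in (2, 3, 4, 5, 6)])
```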

Depth 2 (the e^x, \ln x level) reliably recovers the original expression, but deeper trees cause the parameter space to expand exponentially, making it impossible to find the correct solution. Functions like \sin x at depth 8+ are effectively unrecoverable with this method.

After optimization, snapping parameter values to 0 or 1 drops the MSE to around 10^{-32} (the square of machine epsilon) in correct cases. This “snap causes precision to jump” phenomenon is the key to extracting discrete formulas from continuous optimization.

How Significant Is This Discovery?

Mathematically interesting, but practically near-zero impact.

What’s Interesting

  • Constructively proves that continuous mathematics has a “universal primitive” like NAND
  • The elegance of the grammar S \to 1 \mid \text{eml}(S, S)
  • Mathematically vindicates the intuition that \exp and \ln are the “atoms” of elementary functions

Limitations

  • For actual computation, even addition requires calling \exp and \ln dozens of times — absurdly inefficient
  • Depends on complex numbers and extended reals — not “universal within real numbers alone”
  • Symbolic regression is only practical up to depth 4

From a programmer’s perspective, this expression blowup resembles the endgame of excessive currying. Functional programming has SKI combinator calculus, where just three primitives S, K, and I can express all of lambda calculus. In practice, the result looks like S(K(SI))(S(KK)I) — unreadable by humans. EML is the same story: theoretical minimalism comes at the direct expense of practical utility.

Hacker News discussion was also along the lines of “mathematically elegant, computationally nonsensical” and “computing exp and ln multiple times for a single addition is absurdly expensive.” Commenters also pointed out prior universal-function results, including a 1935 PNAS paper and a construction by Terence Tao.


Most people probably never think about universal operators for elementary functions. If math brings up any feelings, it’s more like “why do they cram in so much stuff I’ll never use in real life?” or the classic trajectory from “too many trig identities” through calculus into “I officially hate math.”

So when someone says “just learn this one operator and you can derive everything!” it looks like a magic wand for a second — but when sin\sin requires 100 levels of nesting, just memorizing the formulas is 100 times easier. The content itself is genuinely interesting. But “huh, that’s neat” is probably the honest reaction for most people.

References