The EML Operator: Expressing All Elementary Functions with Just exp(x) - ln(y)
If you’ve studied digital circuits, you probably know the NAND gate story. AND, OR, NOT — any logic circuit can be built from NAND gates alone. Does the world of continuous mathematics — exp, ln, sin, cos, and the rest — also have a “single operator that does everything”?
In March 2026, Andrzej Odrzywołek of Jagiellonian University (Poland) published a paper claiming to have found exactly that.
arXiv:2603.21852 “All elementary functions from a single binary operator”
I read through it and sorted out what it can and can’t do.
What Is the EML Operator?
The definition is simple. Given two inputs $x$ and $y$:

$$\mathrm{eml}(x, y) = e^x - \ln y$$

That’s it. A binary operator combining exp and ln — EML stands for Exp-Minus-Log. The paper claims that this operator plus the constant $1$ is enough to reproduce every function on a scientific calculator.
The grammar:

$$E ::= 1 \mid \mathrm{eml}(E, E)$$

Only two choices — “the constant 1” or “feed two expressions into $\mathrm{eml}$” — and any elementary function can be expressed.
Building Constants First
The starting point is $\mathrm{eml}(1, 1) = e^1 - \ln 1 = e$, so Euler’s number pops out immediately. From there, constants are bootstrapped one by one.
| Constant | EML Expression | Tree Depth |
|---|---|---|
| $e$ | $\mathrm{eml}(1, 1)$ | 1 |
| $e^e$ | $\mathrm{eml}(\mathrm{eml}(1, 1), 1)$ | 2 |
| $0$ | $\mathrm{eml}(1, \mathrm{eml}(\mathrm{eml}(1, 1), 1))$ | 3 |
| $\pi$ | (5 levels of nesting) | 5 |
| $i$ | (6 levels of nesting) | 6 |
Tracing the derivation of $0$:

$$\mathrm{eml}(1, \mathrm{eml}(\mathrm{eml}(1, 1), 1)) = e - \ln(e^e) = e - e = 0$$

Wait, $\pi$ in only 5 levels of nesting? The paper actually discusses “compiler-based” constructions and “direct search” constructions separately — the table above shows the compiler’s optimized version. The direct approach uses identities like $e^{i\pi} = -1$ and $\ln(-1) = i\pi$ to build things up.
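These constant constructions are easy to check numerically. A minimal sketch in Python — my own verification code, not code from the paper — using $\mathrm{eml}(1, \mathrm{eml}(\mathrm{eml}(1,1), 1)) = e - \ln(e^e) = 0$ as a depth-3 example:

```python
import math

# eml(x, y) = exp(x) - ln(y), as defined in the paper
def eml(x, y):
    return math.exp(x) - math.log(y)

e = eml(1, 1)                      # e^1 - ln 1 = e
zero = eml(1, eml(eml(1, 1), 1))   # e - ln(e^e) = e - e = 0

print(e)     # ≈ 2.718281828459045
print(zero)  # ≈ 0.0 (up to a few ULPs of rounding)
```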
Constructing $\pi$ and the imaginary unit $i$ uses Euler’s formula in reverse. Once $-1$ is available, $\ln(-1) = i\pi$ (principal value) gives access to both $\pi$ and $i$. This requires the principal value of the complex logarithm, so real numbers alone don’t suffice — an important constraint discussed later.
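The complex-log step is a one-liner to verify with Python’s cmath (my own check, not from the paper):

```python
import cmath

# Principal value of the complex log: log(-1) = i*pi
ipi = cmath.log(-1)
print(ipi)              # 3.141592653589793j
print(ipi.imag)         # 3.141592653589793

# Euler's formula run forward as a sanity check: e^{i*pi} = -1
print(cmath.exp(ipi))   # ≈ -1 (tiny imaginary residue from rounding)
```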
Exp and Ln Are Straightforward
Extracting $e^x$ and $\ln x$ is intuitive.

$$\mathrm{eml}(x, 1) = e^x - \ln 1 = e^x$$

Since $\ln 1 = 0$, plugging $1$ into the second argument kills the $\ln$ term and $e^x$ remains.
$\ln x$ takes a bit more work:

$$\ln x = \mathrm{eml}(1, \mathrm{eml}(\mathrm{eml}(1, x), 1))$$

Expanding from the inside out:

$$\mathrm{eml}(1, x) = e - \ln x$$
$$\mathrm{eml}(e - \ln x, 1) = e^{\,e - \ln x} = e^e / x$$
$$\mathrm{eml}(1, e^e / x) = e - \ln(e^e / x) = e - (e - \ln x) = \ln x$$
Three levels of nesting to get a logarithm. The core EML technique: feed the output of an exp into a ln so they cancel each other out.
graph TD
A["eml"] --> B["1"]
A --> C["eml"]
C --> D["eml"]
C --> E["1"]
D --> F["1"]
D --> G["x"]
style A fill:#4a90d9,color:#fff
style C fill:#4a90d9,color:#fff
style D fill:#4a90d9,color:#fff
style B fill:#e8e8e8
style E fill:#e8e8e8
style F fill:#e8e8e8
style G fill:#f5a623,color:#fff
The diagram above shows the EML tree for $\ln x$. Blue nodes are all the same eml operator, gray is the constant $1$, orange is the variable $x$.
Constructing Arithmetic
This is where it gets interesting. With nothing but exp and ln, you can build addition and multiplication.
Addition:

$$x + y = \ln(e^x \cdot e^y)$$

In eml notation this compiles to a tree of depth 5. Five levels of nesting for a single addition.
Multiplication:

$$x \cdot y = e^{\ln x + \ln y}$$

Convert to addition via logarithms, then exponentiate back — the classic slide rule trick.
Reciprocal:

$$\frac{1}{x} = e^{-\ln x}$$

so we first need to negate $\ln x$. Constructing negation ($-x$) requires going through $i\pi$, and $i$ itself demands depth-6 nesting — so despite the apparent simplicity, it runs deep. Compiler version: depth 65. Direct search: depth 15.
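The reciprocal identity itself is a quick sanity check with native exp/log (my own sketch; the minus sign comes for free here, whereas in pure EML it is the expensive part):

```python
import math

# 1/x = exp(-ln x); in pure EML the negation must be built through i*pi,
# but with a native minus sign the identity is trivial to verify
def recip(x):
    return math.exp(-math.log(x))

print(recip(4.0))   # ≈ 0.25
print(recip(2.0))   # ≈ 0.5
```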
Complexity Comparison
| Operation | EML Compiler Depth | Direct Search Depth |
|---|---|---|
| $\ln x$ | 3 | 3 |
| $x + y$ | 7 | 7 |
| $-x$ | 57 | 15 |
| $1/x$ | 65 | 15 |
| $x \cdot y$ | 27 | 19 |
| $x / y$ | 41 | 17 |
| $\sin x$ | 139 | 43+ |
| $\cos x$ | 100+ | 75+ |
$\sin x$ hits tree depth 139. $\cos x$ exceeds 100 levels of nesting. “One operator can do everything” is technically true, but the expression blowup is staggering.
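To get a feel for tree depth, here is a minimal EML expression-tree evaluator and depth counter — my own sketch, not the paper’s representation. Trees are nested tuples; leaves are the constant 1 or the variable 'x':

```python
import math

# A leaf is the constant 1 or the variable 'x'; an internal node is a
# pair (left, right) meaning eml(left, right) = exp(left) - ln(right).
def evaluate(tree, x):
    if tree == 1:
        return 1.0
    if tree == 'x':
        return x
    left, right = tree
    return math.exp(evaluate(left, x)) - math.log(evaluate(right, x))

def depth(tree):
    if tree in (1, 'x'):
        return 0
    left, right = tree
    return 1 + max(depth(left), depth(right))

# The depth-3 ln construction: ln x = eml(1, eml(eml(1, x), 1))
LN_X = (1, ((1, 'x'), 1))
print(depth(LN_X))            # 3
print(evaluate(LN_X, 10.0))   # ≈ 2.302585092994046 (ln 10)
```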
Trig Functions via Euler’s Formula
Building $\sin x$ and $\cos x$ with EML goes through Euler’s formula:

$$\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad \cos x = \frac{e^{ix} + e^{-ix}}{2}$$
graph LR
A["Want sin x"] --> B["First construct i"]
B --> C["Compute ix<br/>(multiplication)"]
C --> D["Compute e^ix<br/>and e^-ix"]
D --> E["Subtract and<br/>divide by 2i"]
E --> F["sin x done"]
style A fill:#e8e8e8
style F fill:#4a90d9,color:#fff
Every step in this pipeline becomes EML nesting, so the final expression is enormous. Constructing $i$ alone takes depth 6. Multiplication adds depth 4-5. Subtraction and division pile on more.
Inverse trig functions ($\arcsin$, $\arctan$, etc.) are converted to logarithmic form:

$$\arcsin x = -i \ln\!\left(ix + \sqrt{1 - x^2}\right)$$
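The logarithmic form of arcsin is straightforward to verify against the native function using Python’s cmath (my own check):

```python
import cmath
import math

# arcsin in logarithmic form: arcsin x = -i * ln(ix + sqrt(1 - x^2))
def arcsin_log(x):
    return (-1j * cmath.log(1j * x + cmath.sqrt(1 - x * x))).real

print(arcsin_log(0.5))   # ≈ 0.5235987755982989 (pi/6)
print(math.asin(0.5))    # 0.5235987755982989
```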
Comparison with NAND Gates
The paper’s pitch is “the NAND gate of continuous mathematics,” but there are meaningful differences.
| Property | NAND (Boolean) | EML (Continuous) |
|---|---|---|
| Input | Two bits | Two real/complex numbers |
| Constants | Not needed (self-generating) | $1$ required |
| Grammar | $E ::= x \mid \mathrm{nand}(E, E)$ | $E ::= 1 \mid \mathrm{eml}(E, E)$ |
| Cost per node | One gate | Full $e^x$ and $\ln y$ computation |
| Practical use | Actually used in chips | Theoretical existence proof |
NAND gates are practical because each one is cheap and fast. EML nodes each involve computing $e^x$ and $\ln y$, so doing a single addition means running exponentials and logarithms dozens of times. Calling this “the continuous NAND” is somewhat misleading.
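The cost claim is easy to make concrete. Instrumenting eml with a call counter — my own sketch — shows that even a hybrid-style addition (exp and ln via EML, native multiply) burns ten transcendental evaluations:

```python
import math

calls = {"exp": 0, "log": 0}

def eml(x, y):
    # Same operator, but counting every transcendental call
    calls["exp"] += 1
    calls["log"] += 1
    return math.exp(x) - math.log(y)

def eml_ln(x):
    return eml(1, eml(eml(1, x), 1))

def eml_add(x, y):
    # Hybrid-style addition: ln(exp x * exp y), with a native multiply
    return eml_ln(eml(x, 1) * eml(y, 1))

s = eml_add(3.0, 4.0)
print(s)       # ≈ 7.0
print(calls)   # {'exp': 5, 'log': 5}: ten transcendental calls for one addition
```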
Dependence on Complex Numbers and Extended Reals
An important caveat: EML doesn’t close over the reals alone.
- Constructing $i$ requires $\ln(-1) = i\pi$ — the principal value of the complex logarithm
- All trig functions go through Euler’s formula — complex arithmetic internally
- The convention $\ln 0 = -\infty$, $e^{-\infty} = 0$ assumes the extended reals
Standard floating-point arithmetic in Python or Julia won’t handle every case.
NumPy and PyTorch follow IEEE 754 conventions for inf and signed zero, so they work — but “runs everywhere” it is not.
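A quick stdlib check of the caveat: Python’s math module raises on log(0) instead of returning the extended-real value −∞, even though the float type itself handles infinities fine:

```python
import math

# IEEE 754 special values exist in Python floats...
print(math.exp(float("-inf")))   # 0.0

# ...but math.log follows the "raise on domain error" convention,
# so the extended-real rule ln 0 = -inf is not available directly:
try:
    math.log(0.0)
except ValueError as err:
    print(err)   # math domain error
```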
Testing in 5 Languages
Reading the paper leaves the question “does this actually work?” open, so I implemented the EML operator in Node.js, Python, PHP, Go, and Rust. Tests cover generating the constant $e$, extracting exp and ln, round-tripping $e^{\ln x}$, and multiplication/addition.
Implementing the EML Operator
The definition is a one-liner in every language — just write $e^x - \ln y$ directly.
JavaScript (Node.js v25.3.0):
const eml = (x, y) => Math.exp(x) - Math.log(y);
Python 3.9.6:
import math

def eml(x, y):
    return math.exp(x) - math.log(y)
PHP 8.5.1:
function eml(float $x, float $y): float {
    return exp($x) - log($y);
}
Go 1.26.2:
func eml(x, y float64) float64 {
	return math.Exp(x) - math.Log(y)
}
Rust:
fn eml(x: f64, y: f64) -> f64 {
    x.exp() - y.ln()
}
Extracting ln (Depth-3 Nesting)
$\ln x = \mathrm{eml}(1, \mathrm{eml}(\mathrm{eml}(1, x), 1))$ nests eml three levels deep.
JavaScript:
const emlLn = (x) => eml(1, eml(eml(1, x), 1));
Python:
def eml_ln(x):
    return eml(1, eml(eml(1, x), 1))
PHP:
function eml_ln(float $x): float {
    return eml(1, eml(eml(1, $x), 1));
}
Go:
func emlLn(x float64) float64 {
	return eml(1, eml(eml(1, x), 1))
}
Rust:
fn eml_ln(x: f64) -> f64 {
    eml(1.0, eml(eml(1.0, x), 1.0))
}
The Bootstrap Problem with Multiplication and Addition
Trying to build multiplication and addition purely from EML runs into a chicken-and-egg circular dependency.
- Multiplication $x \cdot y = e^{\ln x + \ln y}$: exp and ln are extractable from EML, but addition is needed internally
- Addition $x + y = \ln(e^x \cdot e^y)$: same — exp and ln are EML-expressible, but multiplication is needed internally
graph LR
A["Build multiplication<br/>with EML"] --> B["exp(ln x + ln y)"]
B --> C["Need addition"]
C --> D["Build addition<br/>with EML"]
D --> E["ln(exp x · exp y)"]
E --> F["Need multiplication"]
F --> A
style A fill:#e8e8e8
style D fill:#e8e8e8
style C fill:#f5a623,color:#fff
style F fill:#f5a623,color:#fff
The paper’s “compiler” brute-forces through this circularity to produce trees of depth 27-41, but hand-writing that level of nesting isn’t realistic. Since the goal here is to verify practicality, I used a “hybrid approach” — extracting exp and ln via EML and using native operations for the rest.
JavaScript:
// Multiplication: exp(ln(x) + ln(y)) — exp and ln via EML, + is native
const emlMul = (x, y) => eml(emlLn(x) + emlLn(y), 1);
// Addition: ln(exp(x) * exp(y)) — exp and ln via EML, * is native
const emlAdd = (x, y) => emlLn(eml(x, 1) * eml(y, 1));
Python:
def eml_mul(x, y):
    return eml(eml_ln(x) + eml_ln(y), 1)

def eml_add(x, y):
    return eml_ln(eml(x, 1) * eml(y, 1))
PHP:
function eml_mul(float $x, float $y): float {
    return eml(eml_ln($x) + eml_ln($y), 1);
}

function eml_add(float $x, float $y): float {
    return eml_ln(eml($x, 1) * eml($y, 1));
}
Go:
func emlMul(x, y float64) float64 {
	return eml(emlLn(x)+emlLn(y), 1)
}

func emlAdd(x, y float64) float64 {
	return emlLn(eml(x, 1) * eml(y, 1))
}
Rust:
fn eml_mul(x: f64, y: f64) -> f64 {
    eml(eml_ln(x) + eml_ln(y), 1.0)
}

fn eml_add(x: f64, y: f64) -> f64 {
    eml_ln(eml(x, 1.0) * eml(y, 1.0))
}
Results
All 5 languages use IEEE 754 double precision, so results are nearly identical. Below is Node.js (v25.3.0) output. The other 4 languages matched to 15 significant digits.
Constant and exp (Depth 1)
| Input | EML Result | Expected | Error |
|---|---|---|---|
| eml(1, 1) | 2.718281828459045 | $e \approx 2.718281828459045$ | 0 |
| eml(0, 1) | 1.000000000000000 | $e^0 = 1$ | 0 |
| eml(2, 1) | 7.389056098930650 | $e^2 \approx 7.389056098930650$ | 0 |
| eml(-1, 1) | 0.367879441171442 | $e^{-1} \approx 0.367879441171442$ | 0 |
eml(x, 1) just eliminates the $\ln 1$ term, so it returns exactly the same value as native exp(). Zero error.
ln (Depth 3)
| Input | EML Result | Native $\ln$ | Error |
|---|---|---|---|
| 1 | 0.000000000000000 | 0.000000000000000 | 0 |
| $e$ | 1.000000000000000 | 1.000000000000000 | 0 |
| 2 | 0.693147180559945 | 0.693147180559945 | 1.11e-16 |
| 10 | 2.302585092994046 | 2.302585092994046 | 0 |
| 0.5 | -0.693147180559945 | -0.693147180559945 | 1.11e-16 |
At depth 3, errors on the order of $10^{-16}$ start appearing. That is about half of IEEE 754’s machine epsilon ($\approx 2.22 \times 10^{-16}$), so within 1 ULP (unit in the last place).
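For reference, machine epsilon and the ULP at 1.0 can be read off directly in Python 3.9+:

```python
import math
import sys

print(sys.float_info.epsilon)   # 2.220446049250313e-16 (machine epsilon)
print(math.ulp(1.0))            # 2.220446049250313e-16 (1 ULP at 1.0)
# The ~1.11e-16 errors in the table are half an epsilon: the computed
# value is at most one representable double away from the true one.
```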
exp(ln(x)) Round Trip (Depth 4)
| Input | Result | Error |
|---|---|---|
| 1 | 1.000000000000000 | 0 |
| 2 | 2.000000000000000 | 0 |
| 3 | 3.000000000000000 | 4.44e-16 |
| 10 | 10.000000000000002 | 1.78e-15 |
| 100 | 100.000000000000043 | 4.26e-14 |
Error grows with input magnitude. At input 100: error $4.26 \times 10^{-14}$. This from just depth-4 eml calls. $\sin x$ requires depth 100+, so the error accumulation is easy to imagine.
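The growth is a fixed relative error turning into a growing absolute error, visible even with native functions (my own sketch; exact digits vary by platform):

```python
import math

# Round-trip exp(ln x): the relative error stays near 1 ULP,
# so the absolute error scales roughly linearly with x
for x in (1.0, 10.0, 100.0, 1000.0):
    err = abs(math.exp(math.log(x)) - x)
    print(x, err)
```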
Multiplication (Hybrid)
| Expression | EML Result | Expected | Error |
|---|---|---|---|
| 2 x 3 | 6.000000000000000 | 6 | 0 to 8.88e-16 |
| 4 x 5 | 19.999999999999996 | 20 | 3.55e-15 |
| 1.5 x 2.5 | 3.749999999999999 | 3.75 | 8.88e-16 |
| 10 x 10 | 100.000000000000043 | 100 | 4.26e-14 |
$10 \times 10$ yields 100.000000000000043.
Native 10 * 10 = 100 and done. Via EML’s exp-ln-add-exp chain, $10^{-14}$-level error creeps in.
The error range of “0 to 8.88e-16” reflects cross-language differences, discussed below.
Addition (Hybrid)
| Expression | EML Result | Expected | Error |
|---|---|---|---|
| 1 + 2 | 3.000000000000000 | 3 | 0 |
| 3 + 4 | 7.000000000000000 | 7 | 0 |
| -1 + 5 | 4.000000000000000 | 4 | 0 |
| 0.1 + 0.2 | 0.300000000000000 | 0.3 | 2.22e-16 |
Addition is more precise than multiplication. In the $\ln(e^x \cdot e^y)$ route, even when intermediate exponential values are large, the logarithm cancels them out, limiting error accumulation.
The error of 0.1 + 0.2 isn’t EML’s fault — it’s the famous IEEE 754 floating-point issue.
Native 0.1 + 0.2 doesn’t exactly equal 0.3 either.
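For completeness, the native behavior:

```python
# The classic IEEE 754 demonstration: 0.1 and 0.2 have no exact
# binary representation, so their native sum is not exactly 0.3
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
```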
Cross-Language Differences
The only notable difference was in multiplication:
| Language | 2 x 3 Result | Error |
|---|---|---|
| Node.js v25.3.0 | 6.000000000000000 | 0 |
| Python 3.9.6 | 6.000000000000001 | 8.88e-16 |
| PHP 8.5.1 | 6.000000000000001 | 8.88e-16 |
| Go 1.26.2 | 6.000000000000000 | 0 |
| Rust | 6.000000000000001 | 8.88e-16 |
Node.js and Go hit zero error; Python, PHP, and Rust are off by 1 ULP. This comes from differences in the internal exp/log implementations — rounding modes, instruction set selection — all within IEEE 754 spec. The cross-language difference is effectively negligible.
An interesting detail: Node.js (V8 engine) and Go produce identical results. Both likely compute exp/log via paths close to CPU hardware instructions. Python (via libm), PHP (via libm), and Rust (via LLVM’s libm) take different rounding paths, producing a least-significant-bit difference.
Summary: depth 1 is error-free, depth 3 hits $\sim 10^{-16}$, and the round trip at depth 4 balloons to $\sim 10^{-14}$ in proportion to input magnitude. Cross-language differences are at the LSB level — practically irrelevant.
| Nesting Depth | Operation | Error Magnitude |
|---|---|---|
| 1 | $e^x$ | 0 (exact match with native) |
| 3 | $\ln x$ | $\sim 10^{-16}$ (within 1 ULP) |
| 4 | $e^{\ln x}$ round trip | $10^{-16}$ to $10^{-14}$ |
| 5-7 | Multiplication, addition | $10^{-16}$ to $10^{-14}$ |
Error accumulates with each nesting level. These results are from the hybrid approach (using native operations where needed), so the paper’s compiler output at depth 27-65 for pure EML trees would produce orders-of-magnitude larger errors.
Application to Symbolic Regression
The paper’s latter half experiments with using EML trees as “learnable circuits” for symbolic regression.
graph TD
A["Data points<br/>(x, y) pairs"] --> B["Construct parameterized<br/>EML tree"]
B --> C["Optimize parameters<br/>with Adam optimizer"]
C --> D{"MSE near 0?"}
D -->|Yes| E["Snap parameters<br/>to 0/1 to get<br/>closed-form expression"]
D -->|No| F["Increase depth<br/>and retry"]
style E fill:#4a90d9,color:#fff
The master formula for a depth-$n$ EML tree has a parameter count that grows exponentially with $n$ (14 parameters at depth 2, 314+ at depth 6). Each input slot takes the form of a linear combination of the variable and the available constants, optimized via gradient descent.
Results by Depth
| Tree Depth | Parameter Count | Exact Recovery Rate |
|---|---|---|
| 2 | 14 | 100% |
| 3-4 | 34-74 | ~25% |
| 5 | 154 | <1% |
| 6+ | 314+ | <1% |
Depth 2 (the $e^x$, $\ln x$ level) reliably recovers the original expression, but deeper trees cause the parameter space to expand exponentially, making it impossible to find the correct solution. Functions that sit at depth 8+ are effectively unrecoverable with this method.
After optimization, snapping parameter values to $0$ or $1$ drops the MSE to around $10^{-32}$ (the square of machine epsilon) in correct cases. This “snap causes precision to jump” phenomenon is the key to extracting discrete formulas from continuous optimization.
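Incidentally, the parameter counts in the table above fit a simple doubling recurrence — an observed pattern on my part, not a formula quoted from the paper:

```python
# Parameter counts appear to follow p(d) = 2*p(d-1) + 6 with p(2) = 14,
# i.e. roughly doubling per level of tree depth. (Observed fit to the
# table values, not a formula stated in the paper.)
def params(depth):
    p = 14
    for _ in range(depth - 2):
        p = 2 * p + 6
    return p

print([params(d) for d in range(2, 7)])   # [14, 34, 74, 154, 314]
```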
How Significant Is This Discovery?
Mathematically interesting, but practically near-zero impact.
What’s Interesting
- Constructively proves that continuous mathematics has a “universal primitive” like NAND
- The elegance of the grammar $E ::= 1 \mid \mathrm{eml}(E, E)$
- Mathematically vindicates the intuition that $e^x$ and $\ln x$ are the “atoms” of elementary functions
Limitations
- For actual computation, even addition requires calling exp and ln dozens of times — absurdly inefficient
- Depends on complex numbers and extended reals — not “universal within real numbers alone”
- Symbolic regression is only practical up to depth 4
From a programmer’s perspective, this expression blowup resembles the endgame of excessive currying.
Functional programming has SKI combinator calculus, where just three primitives S, K, and I can express all of lambda calculus.
In practice, the result looks like S(K(SI))(S(KK)I) — unreadable by humans.
EML is the same story: theoretical minimalism and practical utility are in a perfect tradeoff.
Hacker News discussion was also along the lines of “mathematically elegant, computationally nonsensical” and “computing exp and ln multiple times for a single addition is absurdly expensive.” Commenters also pointed out prior universal-function results, including a 1935 PNAS paper and a construction by Terence Tao.
Most people probably never think about universal operators for elementary functions. If math brings up any feelings, it’s more like “why do they cram in so much stuff I’ll never use in real life?” or the classic trajectory from “too many trig identities” through calculus into “I officially hate math.”
So when someone says “just learn this one operator and you can derive everything!” it looks like a magic wand for a second — but when $\sin x$ requires 100 levels of nesting, just memorizing the formulas is 100 times easier. The content itself is genuinely interesting. But “huh, that’s neat” is probably the honest reaction for most people.