ramblings of @harudagondi

Torturing rustc by Emulating HKTs, Causing an Inductive Cycle and Borking the Compiler


Table of Contents

1.1 Prelude
1.2 Rust does not have HKTs, kinda
1.3 What’s 1 + 1? A trait requirement evaluation overflow, of course
1.4 A detour into mathematical proofs and logic
1.4.1 Induction beyond natural numbers
1.4.2 That moment when Curry and Howard did a joint yaoi slay and caused a motherquake measuring 9.9 on the cunter scale
1.4.3 The self-indulgent Lean 4 part of the article
1.5 We now return to your most programming language ever
1.5.1 Here’s the part where we return to that Discord convo and explain what inductive cycles are
1.5.2 The Coinductive Disaster
1.6 …In other words

Photo by Dynamic Wang.

Prelude

On the 28th day of February of the year 2026, I was attempting to start writing an FP scripting language for fun.

So I wrote up an Ast enum for a basic calculator to start:

pub enum Ast {
    Binary {
        left: Box<Ast>,
        operator: Token,
        right: Box<Ast>,
    },
    Unary {
        operator: Token,
        right: Box<Ast>,
    },
    Value {
        value: Value,
    },
}

I realized that I wanted a way to include spans only when I wanted them. So here's something I cooked up:

pub struct Spanned<T> {
    inner: T,
    span: std::range::Range<usize>,
}

pub struct Simple<T> {
    inner: T,
}

Of course, since this is my project and I wanted to be a little bit silly, I went ahead and abstracted this like a Java Enterprise Software Engineer, but in an alternative universe where everything is Haskell.

pub struct Wrapper<T, M> {
    inner: T,
    metadata: M,
}

type Spanned<T> = Wrapper<T, std::range::Range<usize>>;
type Simple<T> = Wrapper<T, ()>;

So uh, let's add that to our Ast enum.

pub enum Ast<W: ???> {
    Binary {
        left: Box<W<Ast>>,
        operator: Token,
        right: Box<W<Ast>>,
    },
    Unary {
        operator: Token,
        right: Box<W<Ast>>,
    },
    Value {
        value: Value,
    },
}

Uh oh! We forgot something. That is…

Rust does not have HKTs, kinda

Higher kinded types, or HKTs, are the notion that generics can have generics. That is, in something like struct Foo<T>(T<i32>);, T could be any type that itself accepts one generic type. In that sense, T is a type constructor with an arity of one: we construct a new type T<i32> by passing i32 as an argument to the T type constructor.

Okay let’s take a step back.

Type systems usually live in their own universe, with analogues to the one we live in. If we have values, the universe above us has types. If we have functions, the universe above us has type constructors. So T is a "function" that accepts a type and returns a new, different type.

Okay you got that? Uh? What’s the “kinded” part mean? What’s a universe?

Uhhhh.

Okay. Let’s think about types. Generic types, that is.

Contrary to some people’s belief, Vec is not a type.

Huh?

Well, let’s demonstrate:

fn accepts_a_type<T>() {}

accepts_a_type::<Vec>();

   Compiling playground v0.0.1 (/playground)
error[E0107]: missing generics for struct `Vec`
 --> src/main.rs:4:22
  |
4 |     accepts_a_type::<Vec>();
  |                      ^^^ expected at least 1 generic argument
  |
note: struct defined here, with at least 1 generic parameter: `T`
 --> /playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:438:12
  |
438 | pub struct Vec<T, #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global> {
    |            ^^^ -
help: add missing generic argument
  |
4 |     accepts_a_type::<Vec<T>>();
  |                         +++
  |
For more information about this error, try `rustc --explain E0107`.
error: could not compile `playground` (bin "playground") due to 1 previous error

Let’s follow the logic:

  1. accepts_a_type, well, accepts a type. Any type, in fact.
  2. We try to pass in a Vec.
  3. It fails to compile.
  4. Ergo, Vec is not a type.
  5. QED.

Such bulletproof logic.

So what is Vec? By itself, it’s not a type. Vec<i32> for instance? That’s a type. See:

accepts_a_type::<Vec<i32>>();

   Compiling playground v0.0.1 (/playground)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.65s
     Running `target/debug/playground`

Tada! Proof by compilation. Foreshadowing. 👻

So Vec<i32> is a type. And i32 is also a type. (The proof is left as an exercise to the reader.) But not Vec. What is it, exactly? In the parlance of type systems, Vec is a type constructor. You can think of it as fn(type) -> type: it accepts a type, e.g. i32, and spits out a new type, Vec<i32>. Let's consult the comparison table:

|          | Values                               | Types                      |
|----------|--------------------------------------|----------------------------|
| function | fn(bool) -> bool, e.g. unary not (!) | fn(type) -> type, e.g. Vec |
| argument | a bool, e.g. true                    | a type, e.g. i32           |
| output   | a bool, e.g. false                   | a type, e.g. Vec<i32>      |

Ah, so ! has a type which is effectively fn(bool) -> bool. (People would write it as bool -> bool, à la Haskell. Starting now, let's write it like that for simplicity and brevity.) And !true has the type bool. bool and i32 are what people consider base types, which are primitive to the language itself and cannot be formed from other types. These base types can be rearranged to form different new types, like bool -> bool and i32 -> String. An example of an i32 -> String value is i32::to_string.

Both bool -> bool and bool are types. In other words, bool -> bool and bool have the kind type.

If there are base types, then are there base kinds? Yes! Most languages only have one base kind, the type kind. Rust has two: the type kind and the lifetime kind. Although in this post, we are going to mostly ignore the lifetime kind.

Next, can we form new kinds by rearranging base kinds? Yes! If we had a bool -> bool type earlier, then we could have a type -> type kind. And according to the comparison table, we already have something with the kind type -> type: it's Vec!

Of course, higher kinded terms like these don't stop at type -> type. Result is type, type -> type (be quiet, Haskellers, I'm not writing actual Haskell here), and Cow<'a, T> is lifetime, type -> type.

Finally, we can define some terms. Things such as type -> type are called type operators, and type operators that evaluate to a type are called type constructors.

There are even higher order type operators, which are operators that take in other type operators, like (type -> type) -> type or fn(fn(type) -> type) -> type. These are what we consider higher kinded types, since they abstract over type constructors.

Does rust have higher kinded types?

No. because Rust is a bad language 😔

Well.

Sorta kinda.

The explanation earlier is not that original; I copied it from the Generic Associated Types RFC. And GATs are kinda discount HKTs, so uh, close enough?

The problem we have with the nonexistence of HKTs is that we don't have a way to pass type constructors to generics. Take this syntactically invalid example we introduced earlier.

struct Foo<T> {
    inner: T<i32>,
}

Theoretically we could pass in a Vec as a type constructor:

type Bar = Foo<Vec>;

Which when applied gives us effectively a:

struct Bar {
    inner: Vec<i32>,
}

But generics only accept types, not type constructors. We proved that with our bulletproof logic earlier. So what's the workaround?

Enter GATs.

Let’s make up a new trait:

trait Wrap {
    type Wrapper<T>;
}

Prior to Rust 1.65, the <T> part in type Wrapper<T> wasn't possible. Now, given this newfound power, we can define a dummy marker struct like so:

struct VecAsTypeConstructor;

impl Wrap for VecAsTypeConstructor {
    type Wrapper<T> = Vec<T>;
}

Now, since VecAsTypeConstructor is a type and not a type constructor, we can rewrite Foo and pass VecAsTypeConstructor as a concrete type for the generic.

struct Foo<T: Wrap> {
    inner: T::Wrapper<i32>,
}

type Bar = Foo<VecAsTypeConstructor>;

Let’s check if Bar’s inner field is actually of type Vec<i32>.

let bar = Bar {
    inner: Vec::new(),
};

dbg!(type_name_of_val(&bar.inner));

   Compiling playground v0.0.1 (/playground)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.62s
     Running `target/debug/playground`
[src/main.rs:23:1] type_name_of_val(&bar.inner) = "alloc::vec::Vec<i32>"

Yay! Gets? Okay gets.

Here’s the full code:

use std::any::type_name_of_val;

trait Wrap {
    type Wrapper<T>;
}

struct VecAsTypeConstructor;

impl Wrap for VecAsTypeConstructor {
    type Wrapper<T> = Vec<T>;
}

struct Foo<T: Wrap> {
    inner: T::Wrapper<i32>,
}

type Bar = Foo<VecAsTypeConstructor>;

fn main() {
    let bar = Bar {
        inner: Vec::new(),
    };

    dbg!(type_name_of_val(&bar.inner));
}

Just ugly and nasty. Too bad.

Ugh.

Okay, let’s go back and remind you of our initial problem.

pub enum Ast<W: ???> {
    Binary {
        left: Box<W<Ast>>,
        operator: Token,
        right: Box<W<Ast>>,
    },
    Unary {
        operator: Token,
        right: Box<W<Ast>>,
    },
    Value {
        value: Value,
    },
}

I wanted to make a wrapper type that abstracts over Simple and Spanned. Why? Because type-fu is fun!

Anyways, given the newfound Wrap superpower, let's rewrite this. (On second thought, wrapping every field in W::Wrapper was a stupid idea. It should have been just the Ast.)

pub enum Ast<W: Wrap> {
    Binary {
        left: Box<W::Wrapper<Ast<W>>>,
        operator: W::Wrapper<Token>,
        right: Box<W::Wrapper<Ast<W>>>,
    },
    Unary {
        operator: W::Wrapper<Token>,
        right: Box<W::Wrapper<Ast<W>>>,
    },
    Value {
        value: W::Wrapper<Value>,
    },
}

Uhhhh, W::Wrapper<Ast<W>>?

That looks recursive.

What’s 1 + 1? A trait requirement evaluation overflow, of course

Okay let’s make a simple test. "1 + 1" should parse to effectively Binary(Value(1), Plus, Value(1)) and evaluate to 2. A simple calculator! We derive Debug, PartialEq on all relevant types.

Then I wrote this test.

#[test]
fn one_plus_one_eq_two() {
    let source = "1 + 1";
    let tokens = tokenize(source);

    assert_eq!(
        &tokens[..],
        &[
            Spanned::with_span(Token::Number, Range::from(0..1)),
            Spanned::with_span(Token::Plus, Range::from(2..3)),
            Spanned::with_span(Token::Number, Range::from(4..5)),
        ][..]
    );

    let ast = parse(tokens);

    assert_eq!(
        ast,
        SpannedAst {
            inner: Ast::Binary {
                left: Box::new(SpannedAst {
                    inner: Ast::Value {
                        value: Spanned::with_span(Value::Number(1.0), Range::from(0..1)),
                    },
                    metadata: Range::from(0..1)
                }),
                operator: Spanned::with_span(Token::Plus, Range::from(2..3)),
                right: Box::new(SpannedAst {
                    inner: Ast::Value {
                        value: Spanned::with_span(Value::Number(1.0), Range::from(4..5)),
                    },
                    metadata: Range::from(4..5)
                }),
            },
            metadata: Range::from(0..5)
        }
    )
}

I was using Spanned as my wrapper in this case.

Anyways, I was gonna run this test.

It should’ve panicked. Because every function was stubbed with todo!().

But it gave a compiler error.

error[E0275]: overflow evaluating the requirement `Wrapper<Ast<SpannedMarker>, std::range::Range<usize>>: PartialEq`
   --> src\lib.rs:139:9
    |
139 | /         assert_eq!(
140 | |             ast,
141 | |             SpannedAst {
142 | |                 inner: Ast::Binary {
...   |
159 | |         )
    | |_________^
    |
note: required for `Ast<SpannedMarker>` to implement `PartialEq`
   --> src\lib.rs:76:17
    |
 76 | #[derive(Debug, PartialEq, Eq)]
    |                 ^^^^^^^^^ unsatisfied trait bound introduced in this `derive` macro
    = note: 1 redundant requirement hidden
    = note: required for `Wrapper<Ast<SpannedMarker>, std::range::Range<usize>>` to implement `PartialEq`

For more information about this error, try `rustc --explain E0275`.

Uh oh.

error[E0275]: overflow evaluating the requirement Wrapper<Ast<SpannedMarker>, std::range::Range<usize>>: PartialEq

What the hell is that?

I tried rewriting the PartialEq derive manually. No dice.

This error was confusing to me, so I—as a good rustacean would do—made a minimal reproducible example.

trait Wrap {
    type Wrapper<T>;
}

#[derive(Debug, PartialEq)]
struct Cons<W: Wrap> {
    inner: W::Wrapper<i32>,
    next: Option<Box<W::Wrapper<Cons<W>>>>,
}

#[derive(Debug, PartialEq)]
struct MyWrapper<T>(T);

#[derive(Debug, PartialEq)]
struct MyWrapperMarker;

impl Wrap for MyWrapperMarker {
    type Wrapper<T> = MyWrapper<T>;
}

fn main() {
    let x: Cons<MyWrapperMarker> = Cons {
        inner: MyWrapper(0),
        next: Some(Box::new(MyWrapper(Cons {
            inner: MyWrapper(1),
            next: None,
        }))),
    };
    let y: Cons<MyWrapperMarker> = Cons {
        inner: MyWrapper(0),
        next: Some(Box::new(MyWrapper(Cons {
            inner: MyWrapper(1),
            next: None,
        }))),
    };
    assert_eq!(x, y);
}

I guess a step down from a tree would be a cons list.

error[E0275]: overflow evaluating the requirement `Cons<MyWrapperMarker>: PartialEq`
  --> src/main.rs:36:5
   |
36 |     assert_eq!(x, y);
   |     ^^^^^^^^^^^^^^^^
   |
note: required for `MyWrapper<Cons<MyWrapperMarker>>` to implement `PartialEq`
  --> src/main.rs:12:8
   |
11 | #[derive(Debug, PartialEq)]
   |                 --------- in this derive macro expansion
12 | struct MyWrapper<T>(T);
   |        ^^^^^^^^^ - type parameter would need to implement `PartialEq`
   = help: consider manually implementing `PartialEq` to avoid undesired bounds
   = note: 1 redundant requirement hidden
   = note: required for `Cons<MyWrapperMarker>` to implement `PartialEq`

For more information about this error, try `rustc --explain E0275`.

Similar error.

I posted this snippet to the RPLCS Discord server. Fortunately, Helix (theemathas on GitHub, notable for finding a lot of bugs in rustc) helped me debug this snippet.

After some back and forth with him, the horrible truth set upon me.

Conversation between me and Helix on Discord. Below is the transcription:

Uh oh.

What the fuck is coinduction?

A detour into mathematical proofs and logic

Let’s open the Wikipedia article on “Coinduction”.

This article has multiple issues. Please help improve it or discuss these issues on the talk page. (Learn how and when to remove these messages)

This article may be too technical for most readers to understand. (October 2011)

This article relies excessively on references to primary sources. (November 2019)

Okay let’s not.

Let’s do a different link.

Mastering Coinduction in Logic

A Comprehensive Guide to Understanding Coinduction Principles in Mathematical Logic

Sarah Lee (AI generated) (Llama-4-Maverick-17B-128E-Instruct-FP8)

Okay I’m not reading that. I’m not gonna even link it here.

Uhhhhh.

Okay found another one. Oh my god it’s an academic article hell yeah.

Okay still a bit technical for me.

Let’s revisit what induction means first.

Mathematical induction is a method of proving that a statement is true for all natural numbers (natural numbers here meaning $n \ge 1$). Proof by induction works by splitting the proof into two parts: the base case and the inductive step.

As an example, let's take this problem from Math Is Fun: is it true that $3^n - 1$ is a multiple of 2? In other words,

$$3^n - 1 \mod 2 \equiv 0$$

The base case proves the statement when $n = 1$. No other stuff required. So let's test it out! Substituting $n = 1$, we get

\begin{equation}
\begin{split}
3^1 - 1 &= 3 - 1 \\
        &= 2
\end{split}
\end{equation}

$$2 \mod 2 = 0$$

QED or whatever.

Next, we assume that for $n = k$, the statement is true. Okay. $3^k - 1 \mod 2 \equiv 0$ is true. I assume it is true. Therefore it is true. Have you assumed it yet? Yeah you should assume it now. Did you assume it yet? If so, then you have assumed it. Therefore it is true. Because you assumed it.

Then, the meatiest part of this method is the inductive step. Given that $n = k$ is true (of course, we have just assumed that), then $n = k + 1$ is true. This is the "step" part of the inductive step; stepping through every number is enough to prove the statement, as long as we have a base case.

Okay, let's substitute $n = k + 1$ into $3^n - 1$.

\begin{equation}
\begin{split}
3^{k + 1} - 1 &= 3 \times 3^k - 1 \\
              &= 2 \times 3^k + 3^k - 1
\end{split}
\end{equation}

Remember that $3x$ can be rewritten as $2x + x$, so if we let $x = 3^k$, then $3 \times 3^k$ can be rewritten as $2 \times 3^k + 1 \times 3^k$. We omit the $1 \times$ for brevity.

Since $2 \times 3^k$ is a multiple of 2, and $3^k - 1$ is also a multiple of 2 (by assumption), then

$$3^{k + 1} - 1 \mod 2 \equiv 0$$

is true!

Okay, so this is the part where we marry the base case and the inductive step together. (In a gay way, of course. I don't believe in heterosexuality.) Given the base case $n = 1$, which we proved to be true in equation 1, we can try $n = 1 + 1 = 2$. We could manually evaluate it. But since we did prove $n = k + 1$ to be true, we can substitute $k = 1$ to get $n = 2$ and have $3^2 - 1 \mod 2 \equiv 0$ be true without bothering to evaluate it wholesale.

Our "hole" then lies in our assumption that $n = k$ is always true. However, given our base case $n = 1$ and inductive step $n = k + 1$, we can eventually prove the statement true by applying the inductive step over and over, forever.

Since the $n = 1$ case is true, then $n = 1 + 1 = 2$ is true, by applying the inductive step. Since the $n = 2$ case is true, then $n = 2 + 1 = 3$ is true, by applying it again. And again and again, to the point where the statement is true for every number ever.

Induction beyond natural numbers

Okay, that's nice, but what does that have to do with what we were doing earlier?

Well, okay, we need to generalize the concept of mathematical induction to work beyond natural numbers. How do we use induction on stuff that aren't numbers? Researching this topic led me to structural induction, which could be related, so I'll try talking about it here.

Structural induction is a way to prove that some property holds for recursively defined structures like lists, strings, and trees. Just like normal induction, structural induction has a base case and an inductive step. We can emulate the creation of such structures by defining them recursively.

Natural numbers could be defined recursively as follows:

  • Base case: $1 \in \mathbb{N}$
  • Recursive step: if $k \in \mathbb{N}$, then $k + 1 \in \mathbb{N}$

Integers:

  • Base case: $0 \in \mathbb{Z}$
  • Recursive step: if $x \in \mathbb{Z}$, then $x + 1 \in \mathbb{Z}$ and $x - 1 \in \mathbb{Z}$

Since these structures are defined inductively, we could prove some properties about them via induction too.

Let’s use this example from a random pdf I found on the internet:

Let $S$ be a set where:

  • Base case: $6 \in S$ and $15 \in S$
  • Recursive step: if $x, y \in S$, then $x + y \in S$

Now, let's say we want to prove the property that every element of the set $S$ is divisible by 3. How do we do that? By induction, of course!

As is tradition, let’s consider the base case first.

Is 6 divisible by 3? Yeah. Is 15 divisible by 3? Yup. Do we need to elaborate on that? No? Thank god. I thought I'd have to prove 1 + 1 = 2 or something. Anyways.

Let’s consider the inductive step:

For all numbers $x$ and $y$, we assume that $3 \mid x$ and $3 \mid y$. (That's notation for "3 divides x". Personally, I think it looks backwards.) Because of that, we can write $x = 3n$ for some integer $n$, and likewise $y = 3m$ for some integer $m$. Therefore, applying the recursive step gives us $x + y = 3n + 3m = 3(n + m)$. Wait, since $x + y$ takes the form $3p$ where $p = n + m$, it must be that $3 \mid x + y$.

By the power of induction, we have proved that all the elements of the set $S$ are divisible by 3.

Okay but what all of that have to do with anyth—

That moment when Curry and Howard did a joint yaoi slay and caused a motherquake measuring 9.9 on the cunter scale

damn curry and howard corresponded all over that thang

Why are we talking about proofs in the first place?

Well.

Haskell Curry (yes, that Haskell and that curry in FP parlance) and William Alvin Howard realized something: there is a connection between mathematical proofs and computer programs. To be more specific, the two are isomorphic to each other. In fact, there is a direct correspondence between logic terminology and programming terminology. See the table below, taken from Wikipedia:

| Logic side                             | Programming side                  |
|----------------------------------------|-----------------------------------|
| formula                                | type                              |
| proof                                  | term                              |
| formula is true                        | type has an element               |
| formula is false                       | type does not have an element     |
| logical constant ⊤ (truth)             | unit type                         |
| logical constant ⊥ (falsehood)         | empty type                        |
| implication                            | function type                     |
| conjunction                            | product type                      |
| disjunction                            | sum type                          |
| universal quantification               | dependent product type            |
| existential quantification             | dependent sum type                |
| Hilbert-style deduction system         | type system for combinatory logic |
| natural deduction                      | type system for lambda calculus   |
| hypotheses                             | free variables                    |
| implication elimination (modus ponens) | application                       |
| implication introduction               | abstraction                       |

Unfortunately, teaching about this in a detour section of an article about HKTs is kinda uhhhh. So I’ll let you watch this video first. Remember to come back!

You will.

The self-indulgent Lean 4 part of the article

Oh you’re back?

Hell yeah.

So this part is me trying out Lean 4 for the first time, for the following reasons:

  1. I want to have an excuse to try out Lean 4.
  2. lol
  3. Since Curry-Howard Isomorphism states that proofs == programs, I want to try rewriting the proof above as a program in Lean 4.
  4. I tried writing the proof in Rust, but then I got bitten by the conflicting trait implementation error. Curse you Rust!

So let’s start!

First, we have to define the set SS in Lean. How do we do that? We make a type, using the inductive keyword:

inductive S where
  | six : S
  | fifteen : S
  | step : S → S → S

six and fifteen are our base cases: constructors that take nothing and return an S. step is our recursive step: a constructor that takes two instances of S and returns an S.

Surprisingly, there's a Rust analogue for this: the enum! You can rewrite this in Rust as follows:

enum S {
    Six,
    Fifteen,
    Step(Box<S>, Box<S>),
}

Next step, we define a way to convert S to a natural number. We use natural numbers because uhhhhh I did the natural numbers game and that’s the only number set I know.

def toNat (s : S) : Nat := match s with
  | six => 6
  | fifteen => 15
  | step s1 s2 => toNat s1 + toNat s2

(s : S) is our argument list and : Nat is the return type part of the definition. It's similar to this:

fn to_nat(s: S) -> u32 {
    match s {
        S::Six => 6,
        S::Fifteen => 15,
        S::Step(s1, s2) => to_nat(*s1) + to_nat(*s2),
    }
}

Then, we define a proposition. This is the part where theorem provers and programming languages diverge a bit. Well not really. But sorta kinda. What am I saying.

Propositions are types.

In Lean, you can define the “divisible by 3” proposition as so:

def Threeven (i : Nat) := ∃ k, i = 3 * k

Let’s translate this.

We define a new proposition Threeven which takes in a natural number i. We state that given a number i that is Threeven, there exists a number k (that’s what we mean by ∃ k) where i = 3 * k.

If we wanna prove that i is threeven, then we just need to provide a value for k such that 3 * k equals i.

That seems easy!

So let’s try proving the base cases using this.

We use the theorem keyword to, well, introduce a theorem.

theorem three_divides_six : Threeven 6 :=
  have h : 6 = 3 * 2 := by simp
  Exists.intro 2 h

Yes I blatantly copied this from learnxinyminutes.

We introduce a hypothesis h that 6 = 3 * 2, then we simplify it to 6 = 6, which is trivially true. This is the part where I’m not sure what is happening, but my intuition tells me that Exists.intro 2 h applies with k := 2 and h := i = 3 * k. Since they match up together when i := 6, then the goals have been solved and the program typechecks.

We repeat this for the other base case.

theorem three_divides_fifteen : Threeven 15 :=
  have h : 15 = 3 * 5 := by simp
  Exists.intro 5 h

Now this is the part where I got stuck. I didn’t know how to prove the recursive step of the theorem:

theorem three_divides_threeven_plus_threeven
    (a : Nat) (b : Nat)
    (ha : Threeven a) (hb : Threeven b) : Threeven (a + b) :=

Translating this, we say that:

  1. The theorem three_divides_threeven_plus_threeven
  2. accepts two natural numbers a and b,
  3. and two hypotheses ha where a is threeven,
  4. and hb where b is also threeven,
  5. where we must prove that the sum of a and b is also a threeven.

I do have experience in the natural numbers game, but I don’t have experience in proving existentials.

So of course I asked rustaceans about this on Discord.

Thankfully, pitust and Sabrina were kind enough to help me with this.

First, I could've rewritten the above theorems as:

theorem three_divides_six : Threeven 6 := by exists 2
theorem three_divides_fifteen : Threeven 15 := by exists 5

Thank you tactics mode. Why didn’t I think of that.

Then, pitust gave the answer to three_divides_threeven_plus_threeven with this:

theorem three_divides_threeven_plus_threeven
    (a : Nat) (b : Nat)
    (ha : Threeven a) (hb : Threeven b) : Threeven (a + b) := by
  obtain ⟨a', ha⟩ := ha
  obtain ⟨b', hb⟩ := hb
  exists a' + b'
  rw [ha, hb, Nat.mul_add]

This is the first time I’ve seen obtain, so hopefully this explanation sounds good enough.

First, by entering tactics mode, we get a tactic state with a goal.

a b : Nat
ha : Threeven a
hb : Threeven b
⊢ Threeven (a + b)

Our goal is that given a and b as natural numbers and ha and hb being the hypotheses that the two numbers are threeven, then we must meet the goal that a + b is also threeven. (This tactic state appears when you put your text cursor after the word by)

After that, we use obtain, which is kinda like unpacking the Threeven a hypothesis into its parts. What are its parts, exactly?

Remember the definition: def Threeven (i : Nat) := ∃ k, i = 3 * k

The two parts here are the natural number k and the proposition i = 3 * k. So we split the Threeven a hypothesis into the number (renamed a') and the proposition a = 3 * a' (which keeps the name ha).

Our proof state now looks like:

a b : Nat
hb : Threeven b
a' : Nat
ha : a = 3 * a'
⊢ Threeven (a + b)

Let’s do the same for hb. Now we get:

a b a' : Nat
ha : a = 3 * a'
b' : Nat
hb : b = 3 * b'
⊢ Threeven (a + b)

Since the goal of our theorem is an existential, we can use the exists tactic to substitute k with anything we want. Here, we substitute a' + b' into the goal.

a b a' : Nat
ha : a = 3 * a'
b' : Nat
hb : b = 3 * b'
⊢ a + b = 3 * (a' + b')

Now it’s a matter of rewriting the goal. Yes that’s what rw stands for.

We can rewrite a to 3 * a' and b to 3 * b' with rw [ha, hb] to get this goal state:

a b a' : Nat
ha : a = 3 * a'
b' : Nat
hb : b = 3 * b'
⊢ 3 * a' + 3 * b' = 3 * (a' + b')

Of course, the last step would be us expanding 3 * (a' + b') via the distributive property of numbers. This is spelled out as rw [Nat.mul_add]. Why is it called that? Because a * (b + c) has a * first then + next. mul_add. Yup. That’s creative naming.

After the last statement, we get:

a b a' : Nat
ha : a = 3 * a'
b' : Nat
hb : b = 3 * b'
⊢ 3 * a' + 3 * b' = 3 * a' + 3 * b'

Notice how both sides are exactly the same. As such, the goal is met and the theorem typechecks. Hell yeah!

Now here is the fun stuff. We can use these theorems to prove the statement "all elements of $S$ are divisible by 3".

Let’s define this theorem right now.

theorem S.is_threeven (s : S) : Threeven s.toNat :=

pitust helpfully tells me I could use the name S.is_threeven for ease of use. I also redefined toNat to be defined on S. Anyways.

This theorem states that all elements of S are threeven. I'm not sure how the theorem as spelled out implies that, but apparently it is a feature of the calculus of constructions. I'm not really sure what that means. pitust says:

if you have a function which maps a value of type S into a proof that something is true for that value, then by definition that thing is true for all values of that type

I don’t know what that means.

Anyways.

This is what the first part of this section was leading up to.

Induction.

We do proof by induction on s:

theorem S.is_threeven (s : S) : Threeven s.toNat := by
  induction s with

and then we pattern match on s.

theorem S.is_threeven (s : S) : Threeven s.toNat := by
  induction s with
  | six => apply three_divides_six
  | fifteen => apply three_divides_fifteen
  | step a b ha hb => ...

Both the six and fifteen base cases are self-explanatory; we just proved them earlier, so we only have to apply them here.

step is a bit cute. With induction, we not only pattern match on its variables, but also on its propositions. What are these propositions, you ask?

That a and b are threevens.

This is The Big Assumption. That we know is True. By Assumption.

Starting with this branch we get the state:

case step
a b : S
ha : Threeven a.toNat
hb : Threeven b.toNat
⊢ Threeven (a.step b).toNat

First we rewrite (a.step b).toNat with its definition via rw [S.toNat], giving us:

a b : S
ha : Threeven a.toNat
hb : Threeven b.toNat
⊢ Threeven (a.toNat + b.toNat)

Wow, that looks awfully like our three_divides_threeven_plus_threeven theorem. Let's apply it.

| step a b ha hb =>
  rw [S.toNat]
  apply three_divides_threeven_plus_threeven

Gives us:

case step.ha
a b : S
ha : Threeven a.toNat
hb : Threeven b.toNat
⊢ Threeven a.toNat
case step.hb
a b : S
ha : Threeven a.toNat
hb : Threeven b.toNat
⊢ Threeven b.toNat

What the hell?

At first, I found this confusing, but thankfully Sabrina explained this to me:

apply matches up the type of the result of the proposition with the goal, and then looks at what hypotheses you need to prove to get there it creates new goals for each of those hypotheses

So since the goal matches the result of the sum-of-threevens theorem, it's up to us to prove the hypotheses we defined instead of the result itself. That's nice. Because our hypotheses are much simpler than our original goal, we can exploit those instead.

We got two cases: step.ha and step.hb. What do you know, the goal is the exact same as ha. Let’s use exact ha. And same for exact hb.

The final theorem:

theorem S.is_threeven (s : S) : Threeven s.toNat := by
  induction s with
  | six => apply three_divides_six
  | fifteen => apply three_divides_fifteen
  | step a b ha hb =>
    rw [S.toNat]
    apply three_divides_threeven_plus_threeven
    · exact ha
    · exact hb

Nice! I don’t know what the dot is for.

Here’s the full program if you are wondering.

inductive S where
  | six : S
  | fifteen : S
  | step : S → S → S

def S.toNat (s : S) : Nat := match s with
  | six => 6
  | fifteen => 15
  | step s1 s2 => toNat s1 + toNat s2

def Threeven (i : Nat) := ∃ k, i = 3 * k

theorem three_divides_six : Threeven 6 := by exists 2

theorem three_divides_fifteen : Threeven 15 := by exists 5

theorem three_divides_threeven_plus_threeven
    (a : Nat) (b : Nat)
    (ha : Threeven a) (hb : Threeven b) : Threeven (a + b) := by
  obtain ⟨a', ha⟩ := ha
  obtain ⟨b', hb⟩ := hb
  exists a' + b'
  rw [ha, hb, Nat.mul_add]

theorem S.is_threeven (s : S) : Threeven s.toNat := by
  induction s with
  | six => apply three_divides_six
  | fifteen => apply three_divides_fifteen
  | step a b ha hb =>
    rw [S.toNat]
    apply three_divides_threeven_plus_threeven
    · exact ha
    · exact hb

This proof is mathematically sound and valid and correct or whatever because the Lean compiler stopped yelling at me for “incomplete goals”. So uh.

QED.

We now return to your most programming language ever

Right.

Where was I?

Rust?

Oh yeah.

Let’s circle back on that.

The reason we talked about logic is that we are going to talk about the trait solver. Specifically, both chalk-based trait resolving and the next-generation trait solver, which is based on the previous chalk-based one.

Here’s a quote from the rustc-dev guide on chalk-based trait resolution:

The key observation here is that the Rust trait system is basically a kind of logic, and it can be mapped onto standard logical inference rules. We can then look for solutions to those inference rules in a very similar fashion to how e.g. a Prolog solver works. It turns out that we can’t quite use Prolog rules (also called Horn clauses) but rather need a somewhat more expressive variant.

Prolog??? I should’ve learned prolog instead of lean!!!

:clueless:

Looking at the example on the next-gen solver, they have:

fn uses_vec_clone<T: Clone>(x: Vec<T>) -> (Vec<T>, Vec<T>) {
    (x.clone(), x)
}

In this code, to be able to clone Vec<T>, then T must implement Clone. Since this function has T bound on Clone, we can safely

assume
that Vec<T> is also Clone. The statement “prove Vec<T> is Clone assuming T is Clone” is called a Goal in next-gen parlance. Assuming is fine here because we defer the bound to the caller of the function. The bounds Vec<T>: Clone and T: Clone are called Predicates.

Wow this looks a bit familiar.

So apparently, to prove Vec<T>: Clone we check its implementation, which is impl<T: Clone> Clone for Vec<T>. However, to use this impl, we also have to check that T: Clone holds. Since we have the *assumption* that T: Clone from uses_vec_clone, and there are no further nested goals from that, Vec<T>: Clone holds.
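This is observable in plain code. Here's a minimal sketch (the `cloned:` print is my own addition, not from the guide) of how the predicate gets discharged by the caller:

```rust
// The `T: Clone` predicate inside `uses_vec_clone` is assumed;
// the *caller* has to prove it by picking a concrete type.
fn uses_vec_clone<T: Clone>(x: Vec<T>) -> (Vec<T>, Vec<T>) {
    (x.clone(), x)
}

fn main() {
    // Here the solver proves `Vec<i32>: Clone` via `impl<T: Clone> Clone for Vec<T>`,
    // with the nested goal `i32: Clone` satisfied by a built-in impl.
    let (a, b) = uses_vec_clone(vec![1, 2, 3]);
    assert_eq!(a, b);
    println!("cloned: {:?}", a);
}
```

Had we called `uses_vec_clone` with a type that doesn't implement Clone, the caller, not the function body, would get the error.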

Here’s the part where we return to that Discord convo and explain what inductive cycles are

It is important to note that the default Rust trait solving is inductive. According to this page, proving something is true by induction must happen without any cycles.

So to inductively prove a goal there should be a finite proof tree.

For example, for the bound Vec<Vec<Vec<Vec<i32>>>>: Debug to hold, the tree would look like as follows:

  • Vec<Vec<Vec<Vec<i32>>>>: Debug is provable
    • Vec<Vec<Vec<i32>>>: Debug is provable
      • Vec<Vec<i32>>: Debug is provable
        • Vec<i32>: Debug is provable
          • i32: Debug is provable (which it is, we have an impl for that)

Let’s check if Vec<SomeRandomStruct>: Debug holds, assuming SomeRandomStruct doesn’t implement any trait.

  • Vec<SomeRandomStruct>: Debug is provable
    • SomeRandomStruct: Debug is provable (which it isn’t)

The trait resolver returns a success or an error depending on whether the bound can be proven to hold.
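Both outcomes are easy to see in actual code. A quick sketch (`HasDebug` and `SomeRandomStruct` are hypothetical names for illustration):

```rust
#[derive(Debug)]
struct HasDebug;

struct SomeRandomStruct; // implements no traits

fn main() {
    // `Vec<Vec<Vec<Vec<i32>>>>: Debug` holds: the solver walks the tree
    // down to `i32: Debug`, which has an impl.
    let v: Vec<Vec<Vec<Vec<i32>>>> = vec![vec![vec![vec![1]]]];
    println!("{:?}", v);

    // This line would not compile, since `SomeRandomStruct: Debug` fails:
    // println!("{:?}", vec![SomeRandomStruct]);
    let _unused = SomeRandomStruct;
    let _also_unused = HasDebug;
}
```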

However, let’s give an example of a recursive type.

struct List<T> {
    value: T,
    next: Option<Box<List<T>>>,
}

To prove that List<T>: Send, we need to construct a proof tree first, checking each part:

  • List<T>: Send is provable
    • T: Send is provable
    • Option<Box<List<T>>>: Send is provable
      • Box<List<T>>: Send is provable
        • List<T>: Send is prova—wait huh

Wait. This tree is infinitely large. It’s not well-founded.

We can have a proof tree that is theoretically infinite.

To be clear, recursive does not mean infinite; recursive things can eventually end if there's a base case.

So this isn’t reliant on induction. This requires something else.

And that’s

coinduction.
According to the rustc-dev guide:

To coinductively prove a goal the provided proof tree may be infinite.

So since infinite proof trees are fine, a cycle alone is enough to prove a coinductive goal, as long as no other requirements are left unproven.

Given List<i32>, we have the proof tree as follows:

  • List<i32>: Send is provable
    • i32: Send is provable (yes, i32 is Send)
    • Option<Box<List<i32>>>: Send is provable
      • Box<List<i32>>: Send is provable
        • List<i32>: Send is provable (uh, that’s a cycle)

Since the two terminal branches here are i32: Send and List<i32>: Send, we can say that List<i32> is Send, as Send is a coinductive trait.
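You can check this on your machine. A minimal sketch, where `assert_send` is a helper I made up for the occasion:

```rust
// This compiles only because `Send` is coinductive: the cycle
// List<i32> -> Option<Box<List<i32>>> -> List<i32> is accepted as proven.
#[allow(dead_code)]
struct List<T> {
    value: T,
    next: Option<Box<List<T>>>,
}

fn assert_send<T: Send>() {}

fn main() {
    assert_send::<List<i32>>();
    println!("List<i32>: Send holds");
}
```

If `Send` were an ordinary (inductive) trait, the solver would reject this exact proof tree, which is precisely where the next section is headed.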

According to the rustc-dev guide, only auto traits, Sized, and WF-goals (not sure what this means; does it mean “well-formed”?) are coinductive.

What about other traits?

Well.

The Coinductive Disaster

Let’s return to the actual original context.

After a little bit of time, I made the MRE even more minimal:

use std::marker::PhantomData;

trait Wrap {
    type Wrapper<T>;
}

#[derive(Debug, PartialEq)]
struct Noop;

impl Wrap for Noop {
    type Wrapper<T> = T;
}

#[derive(Debug, PartialEq)]
struct Foo<W: Wrap>(PhantomData<W::Wrapper<Foo<W>>>);

fn main() {
    let (x, y) = (Foo::<Noop>(PhantomData), Foo::<Noop>(PhantomData));
    dbg!(x == y);
}

So here’s the thing. Expanding the derives, cleaning up, and ignoring Debug we get:

impl PartialEq for Noop {
    fn eq(&self, other: &Noop) -> bool { true }
}

impl<W: PartialEq + Wrap> PartialEq for Foo<W>
where
    W::Wrapper<Foo<W>>: PartialEq,
{
    fn eq(&self, other: &Foo<W>) -> bool { self.0 == other.0 }
}

Let’s check if Foo<Noop>: PartialEq holds:

  • Foo<Noop>: PartialEq is provable
    • PhantomData<Noop::Wrapper<Foo<Noop>>>: PartialEq is provable
      • Noop::Wrapper<Foo<Noop>>: PartialEq is provable
        • Foo<Noop>: PartialEq is provable (uh, oh. A cycle!)

And since PartialEq is not a coinductive trait, we cannot use this proof tree to declare that Foo<Noop>: PartialEq holds. We just cannot *assume* that Foo<Noop>: PartialEq holds because Foo<Noop>: PartialEq holds, because that isn't induction. That is ✨coinduction✨.

Do they plan to support this?

Well I mean yeah. In the next generation solver.

It’s just that there are bugs.

After some trial and error, Helix found an ICE using the next-solver. Using the options -Copt-level=0 -Znext-solver=globally on rustc (godbolt link):

use std::marker::PhantomData;

pub trait Wrap {
    type Wrapper<T: PartialEq>: PartialEq;
}

#[derive(Debug, PartialEq)]
pub struct Noop;

impl Wrap for Noop {
    type Wrapper<T: PartialEq> = T;
}

#[derive(Debug, PartialEq)]
pub struct Foo<W: Wrap + PartialEq>(PhantomData<W::Wrapper<Foo<W>>>);

fn foo<W: Wrap + PartialEq>(x: &Foo<W>) {
    let _ = x == x;
}

pub fn main(x: &Foo<Noop>) {
    foo::<Noop>(x);
}

We get:

error: internal compiler error: /rustc-dev/b41f22de2a13a0babd28771e96feef4c309f54aa/compiler/rustc_middle/src/ty/instance.rs:569:21: failed to resolve instance for <&Foo<Noop> as PartialEq>::eq
--> <source>:18:13
|
18 | let _ = x == x;
| ^^^^^^
thread 'rustc' (3) panicked at /rustc-dev/b41f22de2a13a0babd28771e96feef4c309f54aa/compiler/rustc_middle/src/ty/instance.rs:569:21:
Box<dyn Any>
stack backtrace:
0: 0x77442855c54b - <<std[f377252fd1978e75]::sys::backtrace::BacktraceLock>::print::DisplayBacktrace as core[40ef8ddb62afa269]::fmt::Display>::fmt
1: 0x774428c1f288 - core[40ef8ddb62afa269]::fmt::write
2: 0x774428573556 - <std[f377252fd1978e75]::sys::stdio::unix::Stderr as std[f377252fd1978e75]::io::Write>::write_fmt
...

Okay I’m not gonna copy paste all of that.

But that’s crazy. I was that close to borking the compiler.

Anyways.

Helix did add a reply on his original issue, so all is well in the end.

I mean.

I still can’t use HKTs tho.

Goodbye Wrap trait. I will miss you!

This gave me hope though:

However, we’re currently also working on strenghening/formalizing our cycle semantics, and my current expectation is that the instantiated impl should soundly hold:

impl Trait for Thing<Identity> where Thing<Identity>: Trait {}

…In other words

Rust is a dogshit language that is so void of features that it is basically borderline useless because I can’t jerk off to use higher kinded types in my code. The fact that you can’t have monads makes this an unserious language and nobody should use it for anything serious ever. The fact that this bug will only get fixed after the next-gen solver lands means the rustc devs are lazy ass bums and should pull themselves up by their bootstraps.

But then, attempting to use higher kinded types made me learn logic and Lean 4 and the word coinduction which is complicated and therefore if the rustc devs added support for custom coinductive traits then everyone will have to learn what an inductive cycle means which makes Rust a complicated language and therefore it becomes a bloated and complex mess and Rust becomes a dogshit language. Because they’re adding woke to the language. Like monads.

Lol.

I guess this article is really an excuse for me to write more and basically make me learn about a bunch of things about type systems.

Which is very interesting, don’t get me wrong!

But wow this shit is complicated. I wish there was like a collection of resources about this kind of stuff, but everything is gatekept behind a bunch of pdfs of presentations from lectures in different universities.

So my knowledge of induction is basically from my weird advanced math tutor classes and the logic books I’ve read before. And then coinduction is basically from Wikipedia and a bunch of random people on Discord.

My fault for trying to learn an impenetrable topic :clueless:

I guess I should read more books. And finish that ebook about category theory for programmers.

Wait why am I reading that shit? I’m a chemist for goddamn sake!

I should read a book about abstract algebra for chemists so I can understand what the fuck is happening in inorganic chemistry!

Why am I writing this blog post!!!

Girl bye!


Friday, March 13th, 2026 #coinduction #induction #lean4 #logic #math #rust #type system

Liked this blog post and want some more? Consider donating to support the author!