Initial commit (up to Homework 7)

Jonathan Chan 2025-03-20 10:42:00 -04:00
commit 7c971a7989
11 changed files with 1061 additions and 0 deletions

.gitignore

@@ -0,0 +1 @@
build/

Homework 1.md

@@ -0,0 +1,31 @@
Here's my formal definition of substitution. Since I wasn't in class last week, I'm not sure what notational conventions are being used, so I'll define my grammar first.
```ebnf
n, m :: Nat
a, b, c :: Term ::= n | λ b | b a
r :: Nat -> Nat
```
I'm using simultaneous renamings r mapping from indices to indices. Lifting a renaming is what lets it go under a binder: the new index 0 (the bound variable) maps to itself, and each index n + 1 looks up n in the old renaming and increments the result, as defined below.
```haskell
lift :: (Nat → Nat) → Nat → Nat
lift r n = if n == 0 then 0 else (r (n - 1)) + 1
```
Applying a renaming requires lifting it when going under a binder.
```haskell
rename :: (Nat → Nat) → Term → Term
rename r n = r n
rename r (b a) = (rename r b) (rename r a)
rename r (λ b) = λ (rename (lift r) b)
```
I denote substitution as `b{c/n}`, replacing the index `n` by `c` inside of `b`. When going under a binder, `c` is renamed by the shift `(+ 1)` so that its free indices don't get captured, and the target index is incremented from `m` to `m + 1`.
```haskell
n{c/m} = if n == m then c else n
(b a){c/m} = b{c/m} a{c/m}
(λ b){c/m} = let c' = rename (+ 1) c in λ (b{c'/(m + 1)})
```
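To check the definitions, here's a self-contained, runnable Haskell version of the above (a sketch under my own naming: a concrete `Term` datatype with constructors `Var`, `Lam`, and `App`, using `Int` for indices in place of `Nat`):
```haskell
data Term = Var Int | Lam Term | App Term Term
  deriving Show

-- lift r maps 0 to 0, and n + 1 to r n + 1.
lift :: (Int -> Int) -> Int -> Int
lift r n = if n == 0 then 0 else r (n - 1) + 1

-- Apply a renaming, lifting it when going under a binder.
rename :: (Int -> Int) -> Term -> Term
rename r (Var n)   = Var (r n)
rename r (App b a) = App (rename r b) (rename r a)
rename r (Lam b)   = Lam (rename (lift r) b)

-- subst c m b computes b{c/m}: replace index m by c inside b.
subst :: Term -> Int -> Term -> Term
subst c m (Var n)   = if n == m then c else Var n
subst c m (App b a) = App (subst c m b) (subst c m a)
subst c m (Lam b)   = Lam (subst (rename (+ 1) c) (m + 1) b)
```
For example, `subst (Var 0) 0 (Lam (App (Var 1) (Var 0)))` yields `Lam (App (Var 1) (Var 0))`: under the binder, the substituted index shifts to 1 and the replacement shifts to `Var 1`.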

Homework 2.md

@@ -0,0 +1,67 @@
**N.B.** The definition of multistep reduction I use is composed of a reflexive case (`t →* t`) and a transitive case (`t₁ → t₂ →* t₃`). I need the following two lemmas throughout.
**Lemma** (transitivity): If `t₁ →* t₂ →* t₃` then `t₁ →* t₃`.
**Proof**: By induction on `t₁ →* t₂`. The reflexive case is trivial and the transitive case uses the induction hypothesis to combine the two multistep reductions.
**Lemma** (congruence): Multistep reduction is congruent in head positions, i.e. if `t₁ →* t₁'` then `if t₁ then t₂ else t₃ →* if t₁' then t₂ else t₃`, and similarly for `succ`, `pred`, and `iszero`.
**Proof**: If the reduction is reflexive, then the proof is trivial. Otherwise, the reduction is of the form `t₁ → t₁' →* t₁''`. By `E-If` (likewise `E-Succ`, `E-Pred`, `E-Iszero`), we have `if t₁ then t₂ else t₃ → if t₁' then t₂ else t₃`. By the induction hypothesis, we have `if t₁' then t₂ else t₃ →* if t₁'' then t₂ else t₃` (likewise for `succ`, `pred`, `iszero`). We have our goal by combining the two by transitivity.
----
**Exercise 3.5.16**: The two treatments of run-time errors agree if the following hold:
1. If `t` evaluates to a stuck state (using the rules without `E-*-Wrong`), then it evaluates to `wrong` (using the rules with `E-*-Wrong`). More precisely, if `t →* t' ↛` and `t' ≠ v`, then `t →* wrong`.
2. If `t` evaluates to `wrong` and does not contain `wrong` as a subterm, then `t` evaluates to a stuck state. This is the converse of the first statement, except we exclude cases where `t` was wrong to begin with.
**Lemma 0**: If `t ↛` (using the rules without `E-*-Wrong`) and `t ≠ v`, then `t →* wrong` (using the rules with `E-*-Wrong`).
**Proof**: By induction on the shape of `t`. The cases where `t` is `true`, `false`, `0`, or `succ nv` are impossible.
* Case `t = if t₁ then t₂ else t₃`: The term does not step by `E-IfTrue`, `E-IfFalse`, nor `E-If`, so `t₁` is not `true` nor `false`, and does not step. If `t₁` is a value, then it must be a numeric value `nv`, so `t₁` is a `badbool` and the whole expression steps to `wrong` by `E-If-Wrong`. Otherwise, by the induction hypothesis, `t₁ →* wrong`, so by congruence under `E-If`, the whole expression multisteps to `if wrong then t₂ else t₃`, which steps to `wrong` by `E-If-Wrong` since `wrong` is a `badbool`.
* Case `t = succ t₁` where `t₁ ≠ nv`: The term does not step by `E-Succ`, so `t₁` does not step. If `t₁` is a value, then it must be `true` or `false`, so `t₁` is a `badnat` and the whole expression steps to `wrong` by `E-Succ-Wrong`. Otherwise, by the induction hypothesis, `t₁ →* wrong`, so by congruence under `E-Succ`, the whole expression multisteps to `succ wrong`, which steps to `wrong` by `E-Succ-Wrong`.
* Case `t = pred t₁`: The term does not step by `E-PredZero`, `E-PredSucc`, nor `E-Pred`, so `t₁ ≠ nv` and does not step. Then the argument for the previous case follows, using instead `E-Pred-Wrong`.
* Case `t = iszero t₁`: The argument for the previous case follows, noting that the term does not step by `E-IszeroZero`, `E-IszeroSucc`, nor `E-Iszero`, and using `E-Iszero-Wrong` to step.
**Lemma 1**: If `t → t'` (using the rules with `E-*-Wrong`) and `t'` contains a `wrong` subterm, then either `t` contains a `wrong` subterm, or `t ↛` (using the rules without `E-*-Wrong`) and `t ≠ v`.
**Proof**: By induction on `t → t'`.
* Cases `E-IfTrue`, `E-IfFalse`: The reduction is `if true then t₁ else t₂ → t₁` or `if false then t₁ else t₂ → t₂`. If the RHS contains a `wrong` subterm, then evidently the LHS does as well.
* Cases `E-If`, `E-Pred`, `E-Iszero`: The three cases are similar; I use `E-Iszero` as representative. The reduction is `iszero t₁ → iszero t₂`, where `t₁ → t₂`, and `t₂` must contain a `wrong` subterm. By the induction hypothesis, either `t₁` contains a `wrong` subterm, in which case so does `iszero t₁`, or `t₁ ↛` and `t₁ ≠ v`. In this latter case, we have `iszero t₁ ↛`, since none of `E-IszeroZero`, `E-IszeroSucc`, and `E-Iszero` apply, and `iszero t₁` is obviously not a value.
* Case `E-Succ`: The reduction is `succ t₁ → succ t₂`, where `t₁ → t₂`, and `t₂` must contain a `wrong` subterm. By the induction hypothesis, either `t₁` contains a wrong subterm, in which case so does `succ t₁`, or `t₁ ↛` and `t₁ ≠ v`. In this latter case, we have `succ t₁ ↛` since `E-Succ` does not apply, and we know that `succ t₁` is not a value since `t₁` is not a value.
**Lemma 2**: If `t →* wrong` (using the rules with `E-*-Wrong`), then either `t` contains a `wrong` subterm, or `t →* t' ↛` (using the rules without `E-*-Wrong`) and `t' ≠ v`.
**Proof**: By induction on `t →* wrong`. The reflexive case `wrong →* wrong` is trivial. In the transitive case `t → t' →* wrong`, by the induction hypothesis, either `t'` contains a `wrong` subterm, or `t' →* t'' ↛` and `t'' ≠ v`. In the former case, we use Lemma 1 on the reduction `t → t'`. In the latter case, we have `t → t' →* t'' ↛` and `t'' ≠ v` as required.
**Proof of 3.5.16** (Condition 1): Suppose `t →* t' ↛` and `t' ≠ v`. Then `t' →* wrong` by Lemma 0, and `t →* wrong` by transitivity.
**Proof of 3.5.16** (Condition 2): Suppose `t →* wrong` and `t` does not contain a `wrong` subterm. Then by Lemma 2, it must be that `t →* t' ↛` and `t' ≠ v`.
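To make the two treatments concrete, here's a runnable Haskell sketch of the single-step semantics with the `E-*-Wrong` rules (constructor names are my own; `badBool` and `badNat` mirror the `badbool` and `badnat` grammars):
```haskell
data Tm = TTrue | TFalse | TIf Tm Tm Tm
        | TZero | TSucc Tm | TPred Tm | TIsZero Tm
        | TWrong
  deriving (Eq, Show)

isNv :: Tm -> Bool
isNv TZero     = True
isNv (TSucc t) = isNv t
isNv _         = False

badBool, badNat :: Tm -> Bool
badBool t = t == TWrong || isNv t                     -- badbool ::= wrong | nv
badNat  t = t == TWrong || t == TTrue || t == TFalse  -- badnat ::= wrong | true | false

-- One step of evaluation; Nothing means no rule applies.
step :: Tm -> Maybe Tm
step (TIf TTrue  t2 _)  = Just t2                     -- E-IfTrue
step (TIf TFalse _  t3) = Just t3                     -- E-IfFalse
step (TIf t1 t2 t3)
  | badBool t1 = Just TWrong                          -- E-If-Wrong
  | otherwise  = (\t1' -> TIf t1' t2 t3) <$> step t1  -- E-If
step (TSucc t)
  | badNat t   = Just TWrong                          -- E-Succ-Wrong
  | otherwise  = TSucc <$> step t                     -- E-Succ
step (TPred TZero) = Just TZero                       -- E-PredZero
step (TPred (TSucc t))
  | isNv t     = Just t                               -- E-PredSucc
step (TPred t)
  | badNat t   = Just TWrong                          -- E-Pred-Wrong
  | otherwise  = TPred <$> step t                     -- E-Pred
step (TIsZero TZero) = Just TTrue                     -- E-IszeroZero
step (TIsZero (TSucc t))
  | isNv t     = Just TFalse                          -- E-IszeroSucc
step (TIsZero t)
  | badNat t   = Just TWrong                          -- E-Iszero-Wrong
  | otherwise  = TIsZero <$> step t                   -- E-Iszero
step _ = Nothing
```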
----
**Exercise 3.5.17**: `t →* v` iff `t ⇓ v`.
**Lemma 3**: If `t₁ → t₂` and `t₂ ⇓ v`, then `t₁ ⇓ v`.
**Proof**: By induction on `t₁ → t₂`.
* Case `E-IfTrue`: The reduction is `if true then t₄ else t₅ → t₂`, with `t₂ ⇓ v`. By `B-Value`, `true ⇓ true`. Then by `B-IfTrue`, we construct `if true then t₄ else t₅ ⇓ v`.
* Case `E-IfFalse`: Similar as above, but using `B-IfFalse`.
* Case `E-If`: The reduction is `if t₃ then t₄ else t₅ → if t₃' then t₄ else t₅`, where `t₃ → t₃'`, with `if t₃' then t₄ else t₅ ⇓ v`. By inversion on the latter, the only possible cases are `B-IfTrue` and `B-IfFalse`.
* Subcase `B-IfTrue`: We have `t₃' ⇓ true` and `t₄ ⇓ v`. By the induction hypothesis, we have `t₃ ⇓ true`. Then by `B-IfTrue`, we construct `if t₃ then t₄ else t₅ ⇓ v`.
* Subcase `B-IfFalse`: Similar as above, but the induction hypothesis gives `t₃ ⇓ false`, and we use `B-IfFalse` instead.
* Case `E-Succ`: The reduction is `succ t₁ → succ t₂`, where `t₁ → t₂`, with `succ t₂ ⇓ v`. By inversion on the latter, either the derivation is `B-Succ`, with `v = succ nv` and `t₂ ⇓ nv`, or `succ t₂` is a value and the derivation is `B-Value`, in which case `t₂` is some `nv` and `t₂ ⇓ nv` by `B-Value`, again with `v = succ nv`. In both cases, by the induction hypothesis, we have `t₁ ⇓ nv`, and by `B-Succ`, we construct `succ t₁ ⇓ succ nv`.
* Case `E-PredZero`: The reduction is `pred 0 → 0`, with `0 ⇓ v`. By inversion on the latter, the only possible case is `B-Value`, with `v = 0`. The goal `pred 0 ⇓ 0` holds directly by `B-PredZero`.
* Case `E-PredSucc`: The reduction is `pred (succ nv) → nv`, with `nv ⇓ v`. By inversion on the latter, the only possible case is `B-Value`, with `v = nv`. The goal `pred (succ nv) ⇓ nv` holds directly by `B-PredSucc`, whose premise `succ nv ⇓ succ nv` holds by `B-Succ`, whose premise `nv ⇓ nv` holds by `B-Value`.
* Case `E-Pred`: The reduction is `pred t₁ → pred t₂`, where `t₁ → t₂`, with `pred t₂ ⇓ v`. By inversion on the latter, the only possible cases are `B-PredZero` and `B-PredSucc`.
* Subcase `B-PredZero`: We have `pred t₂ ⇓ 0`, where `t₂ ⇓ 0`. By the induction hypothesis, we have `t₁ ⇓ 0`. Then by `B-PredZero`, we construct `pred t₁ ⇓ 0`.
* Subcase `B-PredSucc`: We have `pred t₂ ⇓ nv`, where `t₂ ⇓ succ nv`. By the induction hypothesis, we have `t₁ ⇓ succ nv`. Then by `B-PredSucc`, we construct `pred t₁ ⇓ nv`.
* Case `E-IszeroZero`: The reduction is `iszero 0 → true`, with `true ⇓ v`. By inversion on the latter, the only possible case is `B-Value`, with `v = true`. The goal `iszero 0 ⇓ true` holds directly by `B-IszeroZero`, whose premise `0 ⇓ 0` holds by `B-Value`.
* Case `E-IszeroSucc`: The reduction is `iszero (succ nv) → false`, with `false ⇓ v`. By inversion on the latter, the only possible case is `B-Value`, with `v = false`. The goal `iszero (succ nv) ⇓ false` holds directly by `B-IszeroSucc`, whose premise `succ nv ⇓ succ nv` holds by `B-Succ`, whose premise `nv ⇓ nv` holds by `B-Value`.
* Case `E-Iszero`: The reduction is `iszero t₁ → iszero t₂`, where `t₁ → t₂`, with `iszero t₂ ⇓ v`. By inversion on the latter, the only possible cases are `B-IszeroZero` and `B-IszeroSucc`.
* Subcase `B-IszeroZero`: We have `iszero t₂ ⇓ true`, where `t₂ ⇓ 0`. By the induction hypothesis, we have `t₁ ⇓ 0`. Then by `B-IszeroZero`, we construct `iszero t₁ ⇓ true`.
* Subcase `B-IszeroSucc`: We have `iszero t₂ ⇓ false`, where `t₂ ⇓ succ nv`. By the induction hypothesis, we have `t₁ ⇓ succ nv`. Then by `B-IszeroSucc`, we construct `iszero t₁ ⇓ false`.
**Proof of 3.5.17** (LtR direction): By induction on `t →* v`. In the reflexive case, we have `v →* v`, and `v ⇓ v` holds by `B-Value`. In the transitive case, we have `t₁ → t₂ →* v`. By the induction hypothesis, we have `t₂ ⇓ v`. Then by Lemma 3, we have `t₁ ⇓ v`.
**Proof of 3.5.17** (RtL direction): By induction on `t ⇓ v`. I freely use the transitivity and congruence lemmata to combine multistep reductions.
* Case `B-Value`: By reflexivity.
* Case `B-IfTrue`: The evaluation is `if t₁ then t₂ else t₃ ⇓ v`, where `t₁ ⇓ true` and `t₂ ⇓ v`. By the induction hypothesis, we have `t₁ →* true` and `t₂ →* v`. Then we have `if t₁ then t₂ else t₃ →* if true then t₂ else t₃ →* t₂ →* v`, the middle step by `E-IfTrue`.
* Case `B-IfFalse`: Similar as above, but using `E-IfFalse`.
* Case `B-Succ`: The evaluation is `succ t ⇓ succ v`, where `t ⇓ v`. By the induction hypothesis, we have `t →* v`. Then we have `succ t →* succ v` by congruence.
* Case `B-PredZero`: The evaluation is `pred t ⇓ 0`, where `t ⇓ 0`. By the induction hypothesis, we have `t →* 0`. Then we have `pred t →* pred 0 →* 0`, the last step by `E-PredZero`.
* Case `B-PredSucc`: The evaluation is `pred t ⇓ nv`, where `t ⇓ succ nv`. By the induction hypothesis, we have `t →* succ nv`. Then we have `pred t →* pred (succ nv) →* nv`, the last step by `E-PredSucc`.
* Case `B-IszeroZero`: The evaluation is `iszero t ⇓ true`, where `t ⇓ 0`. By the induction hypothesis, we have `t →* 0`. Then we have `iszero t →* iszero 0 →* true`, the last step by `E-IszeroZero`.
* Case `B-IszeroSucc`: The evaluation is `iszero t ⇓ false`, where `t ⇓ succ nv`. By the induction hypothesis, we have `t →* succ nv`. Then we have `iszero t →* iszero (succ nv) →* false`, the last step by `E-IszeroSucc`.
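For comparison, here's big-step evaluation as a partial function, reusing `Tm` and `isNv` from the sketch under 3.5.16 above; `Nothing` corresponds to a term that does not evaluate to a value:
```haskell
-- eval t == Just v corresponds to t ⇓ v.
eval :: Tm -> Maybe Tm
eval t | isVal t = Just t          -- B-Value
  where isVal x = x == TTrue || x == TFalse || isNv x
eval (TIf t1 t2 t3) = do           -- B-IfTrue, B-IfFalse
  v1 <- eval t1
  case v1 of
    TTrue  -> eval t2
    TFalse -> eval t3
    _      -> Nothing
eval (TSucc t) = do                -- B-Succ
  v <- eval t
  if isNv v then Just (TSucc v) else Nothing
eval (TPred t) = do                -- B-PredZero, B-PredSucc
  v <- eval t
  case v of
    TZero            -> Just TZero
    TSucc n | isNv n -> Just n
    _                -> Nothing
eval (TIsZero t) = do              -- B-IszeroZero, B-IszeroSucc
  v <- eval t
  case v of
    TZero            -> Just TTrue
    TSucc n | isNv n -> Just TFalse
    _                -> Nothing
eval _ = Nothing
```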

Homework 3.md

@@ -0,0 +1,40 @@
**Definition** (negative formulation of hereditary termination):
* 🆕 HT{T₁ × T₂}(M) ≝ HT{T₁}(M.1) ∧ HT{T₂}(M.2)
* 🆕 HT{T₁ → T₂}(M) ≝ ∀M₁, HT{T₁}(M₁) ⊃ HT{T₂}(M M₁)
* HT{1}(M) ≝ M →* ⟨⟩
* HT{Ans}(M) ≝ M →* yes ∨ M →* no
**Lemma** (head expansion): If M' → M and HT{A}(M) then HT{A}(M').
**Theorem** (fundamental): If Γ ⊢ M : A then Γ ≫ M ∈ A.
**Proof**: By induction on the typing derivation. Assume γ and HT{Γ}(γ); the goal is to show HT{A}(γ̂(M)).
* Case Lam: The premise is Γ, x : A₁ ⊢ M₂ : A₂, with conclusion Γ ⊢ λx. M₂ : A₁ → A₂.
Assuming M₁ and HT{A₁}(M₁), the goal is to show HT{A₂}((λx. γ̂(M₂)) M₁).
Using the assumptions, we have HT{Γ, x : A₁}(γ, x ↦ M₁),
which we apply to the induction hypothesis on the premise to get HT{A₂}(γ̂(M₂)[x ↦ M₁]).
By head expansion on (λx. γ̂(M₂)) M₁ → γ̂(M₂)[x ↦ M₁], we obtain our goal.
* Case App: The premises are Γ ⊢ M : A₁ → A₂ and Γ ⊢ M₁ : A₁, with conclusion Γ ⊢ M M₁ : A₂; the goal is to show HT{A₂}(γ̂(M) γ̂(M₁)).
By the induction hypothesis on the first premise specialized to γ̂(M₁),
we have that HT{A₁}(γ̂(M₁)) implies HT{A₂}(γ̂(M) γ̂(M₁)), our goal.
It suffices to show the antecedent, which holds by the induction hypothesis on the second premise.
* Case Pair: The premises are Γ ⊢ M₁ : A₁ and Γ ⊢ M₂ : A₂, with conclusion Γ ⊢ ⟨M₁, M₂⟩ : A₁ × A₂.
The goal is to show HT{A₁}(⟨γ̂(M₁), γ̂(M₂)⟩.1) and HT{A₂}(⟨γ̂(M₁), γ̂(M₂)⟩.2).
By the induction hypotheses on the two premises, we have HT{A₁}(γ̂(M₁)) and HT{A₂}(γ̂(M₂)).
By head expansion on ⟨γ̂(M₁), γ̂(M₂)⟩.i → γ̂(Mᵢ), we have our goal.
* Case Left/Right: The premise is Γ ⊢ M : A₁ × A₂, and the conclusion is Γ ⊢ M.i : Aᵢ.
The goal is to show HT{Aᵢ}(γ̂(M).i).
By the induction hypothesis on the premise, we have HT{A₁ × A₂}(γ̂(M)), which by definition gives HT{Aᵢ}(γ̂(M).i) as needed.
* Cases Var, Yes, No, Unit: Unchanged.
----
**Lemma**: If M₁ and M₂[x ↦ M₁] are strongly normalizing, then M₂ is strongly normalizing.
**Proof**: By induction on M₂. In the case where M₂ is the variable x itself, M₂[x ↦ M₁] = M₁, whose strong normalization is assumed.
**Claim**: The largest saturated set is SN itself. Obviously there is no larger set, since all saturated sets are subsets of SN. To check saturation:
* If N... ∈ SN then x N... ∈ SN: By congruence of application, the only possible reduction paths are through reduction of N..., which are strongly normalizing.
* If M₁, N... ∈ SN and M₂[x ↦ M₁] N... ∈ SN then (λx. M₂) M₁ N... ∈ SN: The possible reduction paths are by congruence through M₁, M₂, or N..., or by β-reduction of the head redex.
* β-reduction: (λx. M₂) M₁ N... → M₂[x ↦ M₁] N..., which was assumed to be strongly normalizing.
* Congruence (M₁ and N...): They were assumed to be strongly normalizing.
* Congruence (M₂): By assumption, M₂[x ↦ M₁] N... is strongly normalizing, so M₂[x ↦ M₁] must be strongly normalizing.† Then by the above lemma, M₂ is strongly normalizing.
† I'm fairly certain this is true, but I'm now doubting that it's simple to prove rigorously...

Homework 4.md

@@ -0,0 +1,136 @@
**Exercise 13.3.1**: I don't think you could model automatic garbage collection using single step semantics, given that it has no information about the outer evaluation context and whether a label will be used or not. We could write single step evaluation with an explicit context like this:
```
E ⩴ _ | E t | v E | !E | E := t | v := E | ref E
t | μ → t' | μ'
---------------------
E[t] | μ → E[t'] | μ'
```
and analyse for labels in `E`, but I think it would be cleaner to instead have an explicit freeing operator and error value:
```
t ⩴ ... | free t | error
┌───────────────┐
│ t | μ → t | μ │
└───────────────┘
t | μ → t' | μ'
-------------------------
free t | μ → free t' | μ'
--------------------------------
free l | μ → unit | μ[l ↦ error]
-----------------------
error t | μ → error | μ
-----------------------
v error | μ → error | μ
-------------------------
ref error | μ → error | μ
----------------------
!error | μ → error | μ
--------------------------
error := t | μ → error | μ
--------------------------
v := error | μ → error | μ
┌───────────────┐
│ Γ | Σ ⊢ t : T │
└───────────────┘
Γ | Σ ⊢ t : Ref T
---------------------
Γ | Σ ⊢ free t : Unit
-----------------
Γ | Σ ⊢ error : T
```
`free`ing updates the store with an `error` value, so the store still never shrinks. `error`s propagate upward, so if one is encountered anywhere inside a computation, the entire computation `error`s. The preservation theorem remains the same, but the progress theorem now says that a well-typed term either evaluates to a value or evaluates to `error` (if you don't consider `error` to be a value).
Unfortunately, the type system isn't strong enough to rule out use after free (i.e. to guarantee that a well-typed term never errors), and I'm not sure whether it would be possible to design a type system that does. Originally, I was thinking you could remove locations from the store typing, but then it's unclear which location should be removed when typing `free`, and even more unclear how to fix the preservation theorem.
_Theorem_ (preservation): If `Γ | Σ ⊢ μ`, `Γ | Σ ⊢ t : T`, `t | μ → t' | μ'`, then there exists `Σ'` such that `Γ | Σ' ⊢ μ'`, `Γ | Σ' ⊢ t' : T`, and `Σ ??? Σ'`.
We would want to allow, for instance, `t₁ := free (ref t₂)` where `t₁ ≠ t₂`, but not `t₁ := free t₁`, nor `(free t₁; t₁) := t₂`. All of these would shrink the store typing in some way, but it doesn't seem like there would be a way to combine the store typings from the two premises of typing assignment in a way that rules out the last two but not the first.
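To make the dynamics concrete, here's a small runnable Haskell sketch of just the `free`/dereference fragment of the rules above (constructor names are my own; the store maps locations to terms, and freeing overwrites the location with `Error`):
```haskell
import qualified Data.Map as M

data Tm = Unit | Loc Int | Deref Tm | Free Tm | Error
  deriving (Eq, Show)

type Store = M.Map Int Tm

step :: (Tm, Store) -> Maybe (Tm, Store)
step (Free (Loc l), mu) = Just (Unit, M.insert l Error mu)  -- free l | μ → unit | μ[l ↦ error]
step (Free Error, mu) = Just (Error, mu)                    -- error propagates through free
step (Free t, mu) = do
  (t', mu') <- step (t, mu)                                 -- congruence under free
  Just (Free t', mu')
step (Deref (Loc l), mu) = do
  v <- M.lookup l mu                                        -- !l reads the store;
  Just (v, mu)                                              -- after a free, this yields Error
step (Deref Error, mu) = Just (Error, mu)                   -- !error → error
step (Deref t, mu) = do
  (t', mu') <- step (t, mu)
  Just (Deref t', mu')
step _ = Nothing
```
Freeing a location and then dereferencing it yields `Error`, which then propagates outward exactly as in the rules above.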
----
**Exercise 13.5.8**: First, suppose we had a sequencing operation `t₁; t₂`, which could be primitive, or just sugar for `(λ_. t₂) t₁` in our CBV language. Assuming the typing rules for the booleans and natural numbers from Chapter 8, as well as a `mult`iplication function, here is a factorial function:
```
(λn: Nat.
(λf: Ref (Nat → Nat).
λg: Ref (Nat → Nat).
f := λx: Nat.
if (iszero x)
then (succ 0)
else (mult x (!g (pred x)));
g := !f;
!g n)
(ref (λx: Nat. x))
(ref (λx: Nat. x)))
```
Given some concrete `n`, after evaluating the two initial references, this steps to the following:
```
lf ↦ λx: Nat. x
lg ↦ λx: Nat. x
lf := λx: Nat.
if (iszero x)
then (succ 0)
else (mult x (!lg (pred x)));
lg := !lf;
!lg n
```
After one step, we update `lf`:
```
lf ↦ λx: Nat.
if (iszero x)
then (succ 0)
else (mult x (!lg (pred x)))
lg ↦ λx: Nat. x
lg := !lf;
!lg n
```
After another step, we update `lg`:
```
lf ↦ λx: Nat.
if (iszero x)
then (succ 0)
else (mult x (!lg (pred x)))
lg ↦ λx: Nat.
if (iszero x)
then (succ 0)
else (mult x (!lg (pred x)))
!lg n
```
This finally yields the factorial function we expect (dropping `lf` now that we no longer need it):
```
lg ↦ λx: Nat.
if (iszero x)
then (succ 0)
else (mult x (!lg (pred x)))
if (iszero n)
then (succ 0)
else (mult n (!lg (pred n)))
```
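This backpatching idiom is directly expressible with mutable references; here's a hypothetical Haskell rendering using `IORef` in place of `ref` cells (a sketch mirroring the term above, not part of the exercise's calculus):
```haskell
import Data.IORef

factorial :: Integer -> IO Integer
factorial n = do
  f <- newIORef (\x -> return x)   -- ref (λx: Nat. x)
  g <- newIORef (\x -> return x)   -- ref (λx: Nat. x)
  writeIORef f $ \x ->             -- f := λx. if iszero x then 1 else mult x (!g (pred x))
    if x == 0
      then return 1
      else do
        fact <- readIORef g        -- !g
        r <- fact (x - 1)          -- (pred x)
        return (x * r)             -- mult x ...
  readIORef f >>= writeIORef g     -- g := !f
  fact <- readIORef g              -- !g
  fact n                           -- !g n
```
Evaluating `factorial 5` yields `120`: after the two assignments, the function stored in `g` calls back through `g` itself, closing the recursive knot.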

Homework 5.md

@@ -0,0 +1,52 @@
This lemma wasn't part of the exercises assigned but it's needed for the exercises and not proven in the book so I thought I'd sketch it out quickly.
**Lemma 15.3.2** (Inversion of the subtype relation):
1. If `S <: T₁ → T₂`, then `S = S₁ → S₂` such that `T₁ <: S₁` and `S₂ <: T₂`.
2. If `S <: {lᵢ : Tᵢ}`, then `S = {kⱼ : Sⱼ}` such that `{lᵢ} ⊆ {kⱼ}` and `Sⱼ <: Tᵢ` when `lᵢ = kⱼ`.
**Proof**:
1. By induction on the subtyping derivation. The applicable cases are `S-Refl` (holds directly), `S-Trans` (holds by induction hypothesis twice), and `S-Arrow` (holds directly).
2. By induction on the subtyping derivation. The applicable cases are `S-Refl` (holds directly), `S-Trans` (holds by induction hypothesis twice, using `S-Trans` and transitivity of subsets of labels), `S-RcdWidth` (holds directly using `S-Refl` and `{l₁..lₙ} ⊆ {l₁..lₙ₊ₖ}`), `S-RcdDepth` (holds directly), and `S-RcdPerm` (holds by induction hypothesis).
Here are the assigned exercises.
**Lemma 15.3.6** (Canonical forms):
1. If `v` is a closed value of type `T₁ → T₂`, then `v` has the form `λx: S₁. t₂`.
2. If `v` is a closed value of type `{lᵢ : Tᵢ}`, then `v` has the form `{kⱼ = vⱼ}`, where `{lᵢ} ⊆ {kⱼ}`.
**Proof**:
1. By induction on the typing derivation of `Γ ⊢ v : T₁ → T₂`. The only applicable cases are `T-Abs` and `T-Sub`; the first case holds directly. In the second case, we have `Γ ⊢ v : S` and `S <: T₁ → T₂`. By inversion on subtyping (Lemma 15.3.2), `S = S₁ → S₂`. Then `v` has the form of a function by the induction hypothesis.
2. By induction on the typing derivation of `Γ ⊢ v : {lᵢ : Tᵢ}`. We proceed similarly to the above, where the applicable cases are `T-Rcd` and `T-Sub`, the first case holds directly, and the second case holds using inversion (Lemma 15.3.2) and the induction hypothesis.
The top-level lemmas hold by instantiating `Γ = ∅`.
**Exercise 16.1.3**: If we add `Bool`, we also need to add the subtyping rule `Bool <: Bool` so that the judgement remains reflexive.
**Theorem 16.2.5**: If `Γ ⊢ t : T`, then `Γ ⊢> t : S` and `⊢> S <: T`.
**Proof**: By induction on the typing derivation.
* Case `T-Var`: Holds by `TA-Var`.
* Case `T-Abs`: The premise is `Γ, x: T₁ ⊢ t₂ : T₂`, concluding with `Γ ⊢ λx: T₁. t₂ : T₁ → T₂`. By the induction hypothesis, we have `Γ, x: T₁ ⊢> t₂ : S₂` and `⊢> S₂ <: T₂`. By `TA-Abs`, we have `Γ ⊢> λx: T₁. t₂ : T₁ → S₂`, and by reflexivity and `SA-Arrow` we have `⊢> T₁ → S₂ <: T₁ → T₂`.
* Case `T-App`: The premises are `Γ ⊢ t₁ : T₁₁ → T₁₂` and `Γ ⊢ t₂ : T₁₁`, concluding with `Γ ⊢ t₁ t₂ : T₁₂`. By the induction hypothesis, we have `Γ ⊢> t₁ : S₁`, `⊢> S₁ <: T₁₁ → T₁₂` for the first premise and `Γ ⊢> t₂ : S₂`, `⊢> S₂ <: T₁₁` for the second. By inversion on subtyping, `S₁ = S₁₁ → S₁₂`, `⊢> T₁₁ <: S₁₁`, and `⊢> S₁₂ <: T₁₂`. By transitivity, `⊢> S₂ <: T₁₁ <: S₁₁`. Then we can use `TA-App` to build `Γ ⊢> t₁ t₂ : S₁₂`, with `⊢> S₁₂ <: T₁₂` as needed.
* Case `T-Rcd`: The premises are `Γ ⊢ tᵢ : Tᵢ`, concluding with `Γ ⊢ {lᵢ = tᵢ} : {lᵢ : Tᵢ}`. By the induction hypothesis, we have `Γ ⊢> tᵢ : Sᵢ` and `⊢> Sᵢ <: Tᵢ`. By `TA-Rcd`, we have `Γ ⊢> {lᵢ = tᵢ} : {lᵢ : Sᵢ}`, and by `SA-Rcd`, we have `⊢> {lᵢ : Sᵢ} <: {lᵢ : Tᵢ}`.
* Case `T-Proj`: The premise is `Γ ⊢ t : {lᵢ : Tᵢ}`, concluding with `Γ ⊢ t.lⱼ : Tⱼ`. By the induction hypothesis, we have `Γ ⊢> t : S` and `⊢> S <: {lᵢ : Tᵢ}`. By inversion on subtyping, `S = {kᵢ : Sᵢ}`, where `{lᵢ} ⊆ {kᵢ}` and `⊢> Sᵢ <: Tⱼ` when `lⱼ = kᵢ`. Letting `lⱼ = kᵢ`, we can use `TA-Proj` to build `Γ ⊢> t.lⱼ : Sᵢ`, with `⊢> Sᵢ <: Tⱼ`.
* Case `T-Sub`: The premises are `Γ ⊢ t : S` and `S <: T`, concluding with `Γ ⊢ t : T`. By the induction hypothesis, we have `Γ ⊢> t : U` and `⊢> U <: S`. By completeness (Proposition 16.1.5), we get `⊢> S <: T`. Then by transitivity, we get `⊢> U <: S <: T` as needed.
**Exercise 16.2.6**: I think the system without `S-Arrow` would still have the minimal typing property. The proof would proceed exactly as in Theorem 16.2.5. Completeness (`S <: T` implies `⊢> S <: T`) still holds, since the induction would just be missing one case.
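To make the algorithmic system concrete, here's a small Haskell sketch of the algorithmic subtyping judgement `⊢> S <: T` (my own encoding, with records as label maps and a `TyBool` case per Exercise 16.1.3; this `Ty` is reused by the join/meet sketch at the end of this homework):
```haskell
import qualified Data.Map as M

data Ty = Top | TyBool | Arr Ty Ty | Rcd (M.Map String Ty)
  deriving (Eq, Show)

-- Algorithmic subtyping: syntax-directed, with no separate
-- reflexivity or transitivity rules.
subtype :: Ty -> Ty -> Bool
subtype _ Top = True                    -- SA-Top
subtype TyBool TyBool = True            -- Bool <: Bool (16.1.3)
subtype (Arr s1 s2) (Arr t1 t2) =
  subtype t1 s1 && subtype s2 t2        -- SA-Arrow: contravariant domain
subtype (Rcd ss) (Rcd ts) =
  M.isSubmapOfBy (flip subtype) ts ss   -- SA-Rcd: wider and deeper
subtype _ _ = False
```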
**Lemma** (existence of a subtype): For every `S`, `T`, either there exists some `N` such that `N <: S` and `N <: T`, or no such `N` exists.
**Proof**: By induction on the structure of `S` and `T`. If one of the types is `Top`, then the other type is a subtype of both by reflexivity and `S-Top`. If the types have different shapes (i.e. one is a function type, the other a record type), supposing we have a subtype `N`, by inversion on the subtyping derivations, `N` would have to be both a function type and a record type, which is a contradiction, so no such `N` exists. That leaves the cases where both types are function types or record types.
* Case `S₁ → S₂`, `T₁ → T₂`. By the induction hypothesis, either `S₂` and `T₂` have a subtype `N₂`, or no such subtype exists. If it exists, then `Top → N₂` is a subtype, since `S₁ <: Top`, `T₁ <: Top`, `N₂ <: S₂`, and `N₂ <: T₂`, and we construct `Top → N₂ <: S₁ → S₂` and `Top → N₂ <: T₁ → T₂` by `S-Arrow`. If it does not exist, then supposing we had a subtype `N₁ → N₂` of both `S₁ → S₂` and `T₁ → T₂`, by inversion, we have `N₂ <: S₂` and `N₂ <: T₂`, which is a contradiction, so there must be no such subtype.
* Case `{kᵢ : Sᵢ}`, `{lⱼ : Tⱼ}`. By the induction hypothesis, each matching pair `Sᵢ`, `Tⱼ` (with `kᵢ = lⱼ`) either has a common subtype `Nᵢⱼ` or has none. Let `{klᵢⱼ} = {kᵢ} ∪ {lⱼ}`.
* If all `Sᵢ` and `Tⱼ` for which `kᵢ = lⱼ` have a subtype, then we claim the subtype is `{klᵢⱼ : Uᵢⱼ}` such that `Uᵢⱼ = Nᵢⱼ` if `klᵢⱼ = kᵢ = lⱼ`, `Uᵢⱼ = Sᵢ` if `klᵢⱼ = kᵢ` only, or `Uᵢⱼ = Tⱼ` if `klᵢⱼ = lⱼ` only. We can construct `{klᵢⱼ : Uᵢⱼ} <: {kᵢ : Sᵢ}` by `S-Rcd`, since `{kᵢ} ⊆ {klᵢⱼ}`, and `kᵢ = klᵢⱼ` implies `Uᵢⱼ <: Sᵢ`, where `Uᵢⱼ = Sᵢ` or `Uᵢⱼ = Nᵢⱼ <: Sᵢ`. Similarly, we can construct `{klᵢⱼ : Uᵢⱼ} <: {lⱼ : Tⱼ}` by `S-Rcd`.
* If there exists some `klᵢⱼ` such that `klᵢⱼ = kᵢ = lⱼ` but `Sᵢ` and `Tⱼ` have no subtype, then we claim there is no subtype. Supposing there were a subtype `N`, by inversion we must have that `N = {nₘ : Nₘ}` where `kᵢ = nₘ` implies `Nₘ <: Sᵢ` and `lⱼ = nₘ` implies `Nₘ <: Tⱼ`. In the case where `nₘ = klᵢⱼ`, this gives us a subtype of `Sᵢ` and `Tⱼ`, which is a contradiction.
**Proposition 16.3.2**:
1. For every `S`, `T`, there exists some `J = S ∨ T`.
2. For every `S`, `T`, `N` such that `N <: S` and `N <: T`, there exists some `M = S ∧ T`.
**Proof**:
By induction on the structure of `S` and `T`, proving both parts simultaneously; I write them separately below, even though they use each other's induction hypotheses.
1. Goal `S ∨ T`. If one of the types is `Top`, then the join is `Top`. If the types have different shapes, then the join must be `Top`, since a function type cannot be the supertype of a record type and vice versa by inversion. That leaves the cases where both types are function types or record types.
* Case `S₁ → S₂`, `T₁ → T₂`. By the induction hypothesis, we know that `J₂ = S₂ ∨ T₂` exists. There are two possible subcases to consider, using the above lemma (existence of a subtype).
* If `S₁` and `T₁` have a subtype, then by the induction hypothesis, we know that `M₁ = S₁ ∧ T₁` exists. We claim that `M₁ → J₂` is the join; we know it is a supertype by `S-Arrow`, since `S₁ <: Top`... wait, rather since `M₁ <: S₁`, `M₁ <: T₁`, `S₂ <: J₂`, and `T₂ <: J₂`. It remains to show that it is minimal, i.e. for every `U` such that `S₁ → S₂ <: U` and `T₁ → T₂ <: U`, we have `M₁ → J₂ <: U`. If `U` is `Top`, then we are done. Otherwise, it must be that `U = U₁ → U₂`, and by inversion, `U₁ <: S₁`, `U₁ <: T₁`, `S₂ <: U₂`, and `T₂ <: U₂`. It suffices to show that `U₁ <: M₁` and `J₂ <: U₂`. The former holds by maximality of the meet `M₁ = S₁ ∧ T₁`, while the latter holds by minimality of the join `J₂ = S₂ ∨ T₂`.
* If `S₁` and `T₁` do not have a subtype, then we claim that `Top` is the join. We must show it to be minimal: for every `U` such that `S₁ → S₂ <: U` and `T₁ → T₂ <: U`, we have `Top <: U`. If `U` is `Top`, then we are done. Otherwise, it must be that `U = U₁ → U₂`, and by inversion, `U₁ <: S₁`, `U₁ <: T₁`, `S₂ <: U₂`, and `T₂ <: U₂`. However, this makes `U₁` a subtype of `S₁` and `T₁`, which is a contradiction.
* Case `{kᵢ : Sᵢ}`, `{lⱼ : Tⱼ}`. By the induction hypothesis, we know that every matching `Sᵢ`, `Tⱼ` have a join. Let `{klᵢⱼ | klᵢⱼ = kᵢ = lⱼ}` be the common labels, i.e. `{kᵢ} ∩ {lⱼ}`, and let `Jᵢⱼ = Sᵢ ∨ Tⱼ`. We claim that `{klᵢⱼ : Jᵢⱼ}` is the join; we know it is a supertype by `S-Rcd`. It remains to show that it is minimal, i.e. for every `U` such that `{kᵢ : Sᵢ} <: U` and `{lⱼ : Tⱼ} <: U`, we have `{klᵢⱼ : Jᵢⱼ} <: U`. If `U` is `Top`, then we are done. Otherwise, it must be that `U = {uₙ : Uₙ}`, and by inversion, `{uₙ} ⊆ {kᵢ}`, `{uₙ} ⊆ {lⱼ}`, `kᵢ = uₙ` implies `Sᵢ <: Uₙ`, and `lⱼ = uₙ` implies `Tⱼ <: Uₙ`. Then we know `{uₙ} ⊆ {kᵢ} ∩ {lⱼ} = {klᵢⱼ}`, and if `klᵢⱼ = uₙ`, then `Sᵢ <: Uₙ` and `Tⱼ <: Uₙ`. By minimality of the join `Jᵢⱼ`, we have `Jᵢⱼ <: Uₙ`. Then we can use `S-Rcd` to build `{klᵢⱼ : Jᵢⱼ} <: U`.
2. Goal `S ∧ T`. If one of the types is `Top`, then the meet is the other type. If the types have different shapes, then `N` being a subtype of both is a contradiction by inversion. That leaves the cases where both types are function types or record types.
* Case `S₁ → S₂`, `T₁ → T₂`. By inversion, `N = N₁ → N₂` where `S₁ <: N₁`, `T₁ <: N₁`, `N₂ <: S₂`, `N₂ <: T₂`. By the induction hypothesis, we know that `M₂ = S₂ ∧ T₂` and `J₁ = S₁ ∨ T₁` exist. We claim that `J₁ → M₂` is the meet; we know it is a subtype by `S-Arrow`. It remains to show that it is maximal, i.e. for every `U` such that `U <: S₁ → S₂` and `U <: T₁ → T₂`, we have `U <: J₁ → M₂`. By inversion, `U = U₁ → U₂` such that `S₁ <: U₁`, `T₁ <: U₁`, `U₂ <: S₂`, and `U₂ <: T₂`. It suffices to show that `J₁ <: U₁` and `U₂ <: M₂`. The former holds by minimality of the join `J₁ = S₁ ∨ T₁`, while the latter holds by maximality of the meet `M₂ = S₂ ∧ T₂`.
* Case `{kᵢ : Sᵢ}`, `{lⱼ : Tⱼ}`. By inversion, `N = {nₘ : Nₘ}` where `{kᵢ} ⊆ {nₘ}`, `{lⱼ} ⊆ {nₘ}`, `kᵢ = nₘ` implies `Nₘ <: Sᵢ`, and `lⱼ = nₘ` implies `Nₘ <: Tⱼ`. By the induction hypothesis, we know that every matching `Sᵢ`, `Tⱼ` have a meet. Let `{klᵢⱼ} = {kᵢ} ∪ {lⱼ}` be the union of all labels, and for every `klᵢⱼ`, let `Mᵢⱼ = Sᵢ ∧ Tⱼ` if `klᵢⱼ ∈ {kᵢ} ∩ {lⱼ}`, `Mᵢⱼ = Sᵢ` if `klᵢⱼ ∈ {kᵢ}` only, and `Mᵢⱼ = Tⱼ` if `klᵢⱼ ∈ {lⱼ}` only. We claim that `{klᵢⱼ : Mᵢⱼ}` is the meet; we know it is a subtype by `S-Rcd`. It remains to show that it is maximal, i.e. for every `U` such that `U <: {kᵢ : Sᵢ}` and `U <: {lⱼ : Tⱼ}`, we have `U <: {klᵢⱼ : Mᵢⱼ}`. By inversion, `U = {uₙ : Uₙ}` where `{kᵢ} ⊆ {uₙ}`, `{lⱼ} ⊆ {uₙ}`, `kᵢ = uₙ` implies `Uₙ <: Sᵢ`, and `lⱼ = uₙ` implies `Uₙ <: Tⱼ`. Then we know `{klᵢⱼ} = {kᵢ} ∪ {lⱼ} ⊆ {uₙ}`. Supposing `klᵢⱼ = uₙ`, either `uₙ = kᵢ = lⱼ`, in which case `Uₙ <: Sᵢ ∧ Tⱼ` by maximality of the meet, or `uₙ = kᵢ` only, in which case `Uₙ <: Sᵢ`, or `uₙ = lⱼ` only, in which case `Uₙ <: Tⱼ`, so in all cases `Uₙ <: Mᵢⱼ`. Then we can use `S-Rcd` to build `U <: {klᵢⱼ : Mᵢⱼ}`.
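The joins and meets constructed in this proof are computable; here's a Haskell sketch reusing `Ty` and the record encoding from the subtyping sketch above, where `join` always succeeds and `meet` returns `Nothing` exactly when the two types have no common subtype:
```haskell
join :: Ty -> Ty -> Ty
join (Arr s1 s2) (Arr t1 t2) =
  case meet s1 t1 of
    Just m1 -> Arr m1 (join s2 t2)        -- M₁ → J₂
    Nothing -> Top                        -- domains have no common subtype
join (Rcd ss) (Rcd ts) =
  Rcd (M.intersectionWith join ss ts)     -- common labels, joined pointwise
join s t = if s == t then s else Top      -- equal types, or Top otherwise

meet :: Ty -> Ty -> Maybe Ty
meet Top t = Just t
meet s Top = Just s
meet (Arr s1 s2) (Arr t1 t2) =
  Arr (join s1 t1) <$> meet s2 t2         -- J₁ → M₂
meet (Rcd ss) (Rcd ts) =                  -- all labels, met on the common ones
  Rcd <$> sequenceA (M.unionWith (\ms mt -> do { s <- ms; t <- mt; meet s t })
                                 (Just <$> ss) (Just <$> ts))
meet s t = if s == t then Just s else Nothing
```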

Homework 6.md

@@ -0,0 +1,333 @@
I have an issue with the notion of saturated sets as defined in class:
it seems to be tailored specifically to the exact system it's for,
and it's unclear how it should be extended to fit an extended system
with additional eliminators and reduction rules.
In the POPLMark reloaded[^1] paper,
they provide an inductive definition of strongly normalizing terms
and a notion of reducibility candidates/saturated sets based on this definition,
and I think it's much clearer how the proof of normalization can be extended.
To demonstrate, I'm going to consider System F extended with booleans,
and try to prove normalization directly using their inductive definition.
First, here are the complete typing rules including booleans and the conditional.
```
┌───────────┐
│ Γ ⊢ M : σ │
└───────────┘
x : σ ∈ Γ Γ, x : σ ⊢ M : τ Γ ⊢ M : σ → τ Γ ⊢ N : σ
--------- Var ----------------- Abs -------------------------- App
Γ ⊢ x : σ Γ ⊢ λx. M : σ → τ Γ ⊢ M N : τ
Γ ⊢ M : σ α ∉ FV(Γ) Γ ⊢ M : ∀α. σ α ∉ FV(τ)
---------------------- TAbs -------------------------- TApp
Γ ⊢ M : ∀α. σ Γ ⊢ M : σ{α ↦ τ}
--------------- True ---------------- False
Γ ⊢ true : Bool Γ ⊢ false : Bool
Γ ⊢ M : Bool Γ ⊢ N₁ : σ Γ ⊢ N₂ : σ
-------------------------------------- If
Γ ⊢ if M then N₁ else N₂ : σ
```
## Strongly normalizing terms
Strongly normalizing terms are defined mutually with
strongly normalizing neutral terms and strong head reduction.
For short, I interchangeably refer to strongly normalizing terms as "normal terms",
and strongly normalizing neutral terms as "neutral terms",
since there is no separate notion of normal and neutral terms that precludes reduction.
The underlying principles are:
1. Neutral terms are variables and eliminators neutral in the head position
and normal in all other arguments;
2. Normal terms are neutral terms and constructors with normal components,
backward closed under head reduction;
3. Head reduction consists of β-reduction on normal arguments,
and congruence rules for eliminators in the head position.
Reduction is defined mutually with normality to ensure backward closure of normal terms,
since otherwise looping terms may disappear after reduction,
e.g. the function application argument if it is unused, or the other branch of a conditional.
Using these principles, the definition in POPLMark reloaded can be extended to booleans.
(In contrast to the paper, I'll stick to untyped judgements, which is an orthogonal feature.)
```
┌─────────┐
│ M ∈ SNe │
└─────────┘
M ∈ SNe N ∈ SN
------- SNe-Var ----------------- SNe-App
x ∈ SNe M N ∈ SNe
M ∈ SNe N₁, N₂ ∈ SN
-------------------------- SNe-If
if M then N₁ else N₂ ∈ SNe
┌────────┐
│ M ∈ SN │
└────────┘
M ∈ SNe M ⇝ N N ∈ SN
------- SN-SNe --------------- SN-Red
M ∈ SN M ∈ SN
M ∈ SN
---------- SN-Abs ---------------- SN-True,False
λx. M ∈ SN true, false ∈ SN
┌───────┐
│ M ⇝ N │
└───────┘
N ∈ SN M ⇝ M'
-------------------- Red-β ---------- Red-App
(λx. M) N ⇝ M{x ↦ N} M N ⇝ M' N
N₂ ∈ SN
---------------------------- Red-True
if true then N₁ else N₂ ⇝ N₁
N₁ ∈ SN
----------------------------- Red-False
if false then N₁ else N₂ ⇝ N₂
M ⇝ M'
------------------------------------------- Red-If
if M then N₁ else N₂ ⇝ if M' then N₁ else N₂
```
I assume the following property because I don't want to prove it but I'm sure it's true.
**Proposition** (antisubstitution):
1. If N ∈ SN and M{x ↦ N} ∈ SN then M ∈ SN.
2. If N ∈ SN and M{x ↦ N} ∈ SNe then M ∈ SN.
3. If M{x ↦ N} ⇝ M' and M' ∈ SN then M ∈ SN.
Because these are inductive definitions, we will need some lemmas
for properties that formerly held by definition, namely inversion and congruence.
**Lemma** (Inv-App): If (M N) ∈ SN then M ∈ SN.
**Proof**: By induction on SN.
There are two possible cases:
* Case SN-SNe. We have (M N) ∈ SNe, and we wish to show M ∈ SN.
By inversion on neutrals, the only possible case is SNe-App,
where M ∈ SNe. Then M ∈ SN by SN-SNe.
* Case SN-Red. We have (M N) ⇝ N' ∈ SN, and we wish to show M ∈ SN.
By inversion on the reduction, there are two possible cases.
* Case Red-β. We have M = (λx. M'), N ∈ SN, and N' = M'{x ↦ N}.
By antisubstitution, M' ∈ SN, so (λx. M') ∈ SN.
* Case Red-App. We have M N ⇝ M' N = N' ∈ SN and M ⇝ M'.
By the induction hypothesis, M' ∈ SN. By SN-Red, M ∈ SN.
**Lemma** (Reds-If): If M ⇝* M' then (if M then N₁ else N₂) ⇝* (if M' then N₁ else N₂).
**Proof**: By induction on the reflexive, transitive closure of reduction,
using Red-If and the induction hypothesis in the transitive case.
**Lemma** (Reds-App): If M ⇝* M' then M N ⇝* M' N.
**Proof**: By induction on the reflexive, transitive closure of reduction,
using Red-App and the induction hypothesis in the transitive case.
## Reducibility candidates
Their definition of reducibility candidates (CR) then captures the same properties SAT did,
namely backward closure under head reduction,
and that SATs/CRs are squeezed between neutrals and normals.
**Definition**: A set R is a reducibility candidate (R ∈ CR) when the following hold:
1. R ⊆ SN;
2. If M ⇝ M' and M' ∈ R then M ∈ R; and
3. SNe ⊆ R.
I refer to these as CR1, CR2, and CR3.
We can prove the same lemmas for CR as we did for SAT,
with an additional backward closure under the reflexive, transitive closure of reduction.
**Lemma 0** (CR2*): Let R ∈ CR. If M ⇝* M' and M' ∈ R then M ∈ R.
**Proof**: By induction on the reflexive, transitive closure of reduction,
using CR2 and the induction hypothesis in the transitive case.
**Lemma 1**: SN ∈ CR.
**Proof**: CR1 holds trivially, CR2 holds by SN-Red, and CR3 holds by SN-SNe.
**Lemma 2**: If A, B ∈ CR, then A ⇒ B ∈ CR, where (M ∈ A ⇒ B) ⇔ for all N ∈ A, M N ∈ B.
**Proof**: Let A, B ∈ CR.
1. Suppose M ∈ A ⇒ B. We need to show that M ∈ SN.
Picking any variable x, we know x ∈ A since SNe ⊆ A ∈ CR by CR3.
Then by definition, M x ∈ B, so M x ∈ SN by CR1.
Then by Inv-App, M ∈ SN.
2. Suppose M ⇝ M' ∈ A ⇒ B. We need to show that M ∈ A ⇒ B,
i.e. supposing N ∈ A, M N ∈ B. By supposition, we know M' N ∈ B.
By Red-App, we know M N ⇝ M' N, and by CR2, we have M N ∈ B.
3. Suppose M ∈ SNe. We need to show that M ∈ A ⇒ B,
i.e. supposing N ∈ A, M N ∈ B.
By CR1, we know N ∈ A ⊆ SN, so by SNe-App, we have M N ∈ SNe.
Then by CR3, we have M N ∈ SNe ⊆ B.
**Lemma 3**: Let Aᵢ be some family of sets of terms indexed by i ∈ I (potentially infinite).
If Aᵢ ∈ CR, then ∩ᵢ Aᵢ ∈ CR.
**Proof**: Suppose Aᵢ ∈ CR.
1. Let M ∈ ∩ᵢ Aᵢ. We need to show that M ∈ SN.
By definition, M ∈ Aᵢ for some Aᵢ, and by CR1, we have M ∈ Aᵢ ⊆ SN.
2. Suppose M ⇝ M' ∈ ∩ᵢ Aᵢ. We need to show that M ∈ ∩ᵢ Aᵢ.
By definition, M' ∈ Aᵢ for every Aᵢ.
By CR2, we have M ∈ Aᵢ for every Aᵢ, so M ∈ ∩ᵢ Aᵢ.
3. Suppose M ∈ SNe. We need to show that M ∈ ∩ᵢ Aᵢ.
By CR3, we have M ∈ Aᵢ for every Aᵢ, so M ∈ ∩ᵢ Aᵢ.
## Type interpretations
With the addition of booleans, we need an interpretation of the type of booleans.
The positive formulation states that elements of the interpretation
reduce to a normal constructor form or to a neutral term;
the negative formulation states that elements of the interpretation
behave "as expected" when eliminated.
For booleans, the latter looks like this:
**_not_ the Definition** (semantic booleans): M ∈ ⟦Bool⟧ξ ⇔ for all σ and N₁, N₂ ∈ ⟦σ⟧ξ,
(if M then N₁ else N₂) ∈ ⟦σ⟧ξ.
However, Joe Cutler says this doesn't work (don't tell him I said he said this),
and I can't be bothered to check, so I'll stick with the positive formulation.
We also need to show that it, too, is a reducibility candidate.
**Definition** (semantic booleans): M ∈ ⟦Bool⟧ξ ⇔ M ⇝* N for some N ∈ SNe, or M ⇝* true, or M ⇝* false.
(The neutral disjunct is stated up to ⇝*, since taking M ∈ SNe directly would not be backward closed under reduction, which CR2 requires.)
**Lemma 4**: ⟦Bool⟧ξ ∈ CR.
**Proof**:
1. We need to show that if M ∈ ⟦Bool⟧ξ, then M ∈ SN.
In all three cases, M ⇝* N, where either N ∈ SNe, so N ∈ SN by SN-SNe,
or N is true or false, so N ∈ SN by SN-True,False.
Inducting on the reflexive, transitive closure,
the reflexive case is exactly N ∈ SN,
and the transitive case uses the induction hypothesis followed by SN-Red.
2. We need to show that if M ⇝ M' ∈ ⟦Bool⟧ξ, then M ∈ ⟦Bool⟧ξ.
In all three cases, M' ⇝* N, where N ∈ SNe, N = true, or N = false.
By transitivity, M ⇝ M' ⇝* N, so M ∈ ⟦Bool⟧ξ.
3. We need to show that if M ∈ SNe, then M ∈ ⟦Bool⟧ξ,
which holds by the reflexive reduction M ⇝* M.
With these four lemmas, the interpretation of types is a reducibility candidate.
**Definition** (type interpretation): Let ξ be a function from type variables to CRs.
We define ⟦·⟧ξ recursively on types:
* ⟦α⟧ξ ≔ ξ(α)
* ⟦σ → τ⟧ξ ≔ ⟦σ⟧ξ ⇒ ⟦τ⟧ξ
* ⟦∀α. σ⟧ξ ≔ ∩ₓ ⟦σ⟧{ξ, α ↦ X} where X ranges over CR
* ⟦Bool⟧ξ ≔ semantic booleans as above
**Lemma 5**: ⟦σ⟧ξ ∈ CR.
**Proof**: By induction on types, using Lemmas 2, 3, 4
for functions, polymorphism, and booleans, respectively.
## Soundness and normalization
The definitions of semantic typing remain unchanged, reproduced below,
using ρ for substitution maps from term variables to terms,
ξ for substitution maps from type variables to sets,
and M{ρ} for applying the substitutions of ρ in M.
**Definitions**:
* ⟦M⟧ρ ≔ M{ρ}
* ρ, ξ ⊧ Γ ⇔ ∀x : σ ∈ Γ, ρ(x) ∈ ⟦σ⟧ξ
* Γ ⊧ M : σ ⇔ ∀ρ, ξ ⊧ Γ, ⟦M⟧ρ ∈ ⟦σ⟧ξ
We can now prove the fundamental soundness theorem.
All of the old cases hold as usual;
I'll cover just the old polymorphism cases along with the new boolean cases.
**Theorem** (soundness): If Γ ⊢ M : σ then Γ ⊧ M : σ.
**Proof**: By induction on the typing derivation.
* Case TAbs: We have premises Γ ⊢ M : σ and α ∉ FV(Γ),
and we need to show Γ ⊧ M : ∀α. σ,
i.e. for every ρ, ξ ⊧ Γ, we have ⟦M⟧ρ ∈ ⟦∀α. σ⟧ξ = ∩ₓ ⟦σ⟧{ξ, α ↦ X}.
By the induction hypothesis, for every ρ', ξ' ⊧ Γ, we have ⟦M⟧ρ' ∈ ⟦σ⟧ξ'.
Consequently, for every X ∈ CR, we have that ρ, {ξ, α ↦ X} ⊧ Γ implies ⟦M⟧ρ ∈ ⟦σ⟧{ξ, α ↦ X}.
Since α ∉ FV(Γ), given ρ, ξ ⊧ Γ, we can harmlessly extend to ρ, {ξ, α ↦ X} ⊧ Γ.
Then for every X ∈ CR, ⟦M⟧ρ ∈ ⟦σ⟧{ξ, α ↦ X}, i.e. ⟦M⟧ρ ∈ ∩ₓ ⟦σ⟧{ξ, α ↦ X}.
* Case TApp: We have premises Γ ⊢ M : ∀α. σ and α ∉ FV(τ),
and we need to show Γ ⊧ M : σ{α ↦ τ},
i.e. for every ρ, ξ ⊧ Γ, we have ⟦M⟧ρ ∈ ⟦σ{α ↦ τ}⟧ξ = ⟦σ⟧{ξ, α ↦ ⟦τ⟧ξ}.
By the induction hypothesis, we have ⟦M⟧ρ ∈ ⟦∀α. σ⟧ξ = ∩ₓ ⟦σ⟧{ξ, α ↦ X}.
By Lemma 5, ⟦τ⟧ξ ∈ CR, so we can instantiate X with ⟦τ⟧ξ.
* Case True: We need to show Γ ⊧ true : Bool,
i.e. for every ρ, ξ ⊧ Γ, we have ⟦true⟧ρ ∈ ⟦Bool⟧ξ.
This holds trivially by true ⇝* true.
* Case False: As above.
* Case If: We have premises Γ ⊢ M : Bool, Γ ⊢ N₁ : σ, Γ ⊢ N₂ : σ,
and we need to show Γ ⊧ if M then N₁ else N₂ : σ,
i.e. for every ρ, ξ ⊧ Γ, we have (if ⟦M⟧ρ then ⟦N₁⟧ρ else ⟦N₂⟧ρ) ∈ ⟦σ⟧ξ.
By the induction hypotheses, we have ⟦M⟧ρ ∈ ⟦Bool⟧ξ and ⟦Nᵢ⟧ρ ∈ ⟦σ⟧ξ.
By Lemma 5, ⟦Bool⟧ξ and ⟦σ⟧ξ are CRs, so by CR1, we have ⟦M⟧ρ, ⟦Nᵢ⟧ρ ∈ SN.
By definition of ⟦Bool⟧ξ, there are three cases to consider.
* ⟦M⟧ρ ⇝* N ∈ SNe: By Reds-If, (if ⟦M⟧ρ then ⟦N₁⟧ρ else ⟦N₂⟧ρ) ⇝* (if N then ⟦N₁⟧ρ else ⟦N₂⟧ρ),
and the latter is in SNe by SNe-If, since ⟦N₁⟧ρ, ⟦N₂⟧ρ ∈ SN.
By CR3, it is in ⟦σ⟧ξ, and by CR2*, so is (if ⟦M⟧ρ then ⟦N₁⟧ρ else ⟦N₂⟧ρ).
* ⟦M⟧ρ ⇝* true or false: By Reds-If, Red-True/-False, and transitivity, we have
(if ⟦M⟧ρ then ⟦N₁⟧ρ else ⟦N₂⟧ρ) ⇝* ⟦N₁⟧ρ or ⟦N₂⟧ρ ∈ ⟦σ⟧ξ.
Then by CR2*, (if ⟦M⟧ρ then ⟦N₁⟧ρ else ⟦N₂⟧ρ) ∈ ⟦σ⟧ξ.
Strong normalization holds as a corollary like before.
**Corollary** (strong normalization): If Γ ⊢ M : σ then M ∈ SN.
**Proof**: By soundness, unfolding the definition of semantic typing, we have
∀ρ, ξ, (∀x : τ ∈ Γ, ρ(x) ∈ ⟦τ⟧ξ) ⇒ M{ρ} ∈ ⟦σ⟧ξ.
Pick ρ(x) = x and ξ(α) = SN, which is a CR by Lemma 1. We need to show that ∀x : τ ∈ Γ, x ∈ ⟦τ⟧ξ.
By SNe-Var, x ∈ SNe, and by CR3, SNe ⊆ ⟦τ⟧ξ, so x ∈ ⟦τ⟧ξ.
Then we have M{ρ} = M ∈ ⟦σ⟧ξ, and by CR1, M ∈ SN.
## Normalization and infinite reductions
However, this proof of strong normalization is still stated using the inductive definition,
not the original statement of strong normalization,
which states that there are no infinite reduction sequences,
where the usual notion of reduction is single-step congruent β-reduction.
We can show that the former implies the latter,
assuming the corresponding antisubstitution property for the latter,
as well as various inversion properties.
**Proposition** (antisubstitution): If N and M{x ↦ N} have no infinite reduction sequences,
then M has no infinite reduction sequences.
**Proposition** (inversion): If M has no infinite reduction sequences,
then neither do its syntactic components.
For example, if (M N) has no infinite reduction sequences, then neither do M nor N.
**Lemma** (soundness of SN):
1. If M ∈ SN then M has no infinite reduction sequences.
2. If M ∈ SNe then M has no infinite reduction sequences.
3. If M ⇝ M' and M' has no infinite reduction sequences,
then M has no infinite reduction sequences.
**Proof**: By mutual induction.
The first and second parts are fairly direct by the induction hypotheses.
We look at the third part in detail.
* Case SN-Red: We have M ⇝ N and N ∈ SN.
By the first induction hypothesis, N has no infinite reduction sequences.
Then by the third induction hypothesis, M has no infinite reduction sequences.
* Case Red-β: We have ((λx. M) N) ⇝ M{x ↦ N} and N ∈ SN,
where M{x ↦ N} has no infinite reduction sequences.
By the first induction hypothesis, N has no infinite reduction sequences.
By antisubstitution, M has no infinite reduction sequences,
so neither does ((λx. M) N).
* Case Red-App: We have (M N) ⇝ (M' N) and M ⇝ M',
where (M' N) has no infinite reduction sequences.
By inversion, M', N have no infinite reduction sequences.
By the third induction hypothesis, M has no infinite reduction sequences.
Then (M N) has no infinite reduction sequence.
* Case Red-True: We have (if true then N₁ else N₂) ⇝ N₁ and N₂ ∈ SN,
where N₁ has no infinite reduction sequences.
By the first induction hypothesis, N₂ has no infinite reduction sequences.
Then (if true then N₁ else N₂) has no infinite reduction sequences.
* Case Red-False: As above.
* Case Red-If: We have (if M then N₁ else N₂) ⇝ (if M' then N₁ else N₂) and M ⇝ M',
where (if M' then N₁ else N₂) has no infinite reduction sequences.
By inversion, M', N₁, and N₂ have no infinite reduction sequences.
By the third induction hypothesis, M has no infinite reduction sequences.
Then (if M then N₁ else N₂) has no infinite reduction sequences.
**Corollary** (no infinite reductions): If Γ ⊢ M : σ, then M has no infinite reduction sequences.
---
[^1]: Andreas Abel, Guillaume Allais, Aliya Hameer, Brigitte Pientka, Alberto Momigliano, Steven Schäfer, and Kathrin Stark. (2019). POPLMark reloaded: Mechanizing proofs by logical relations. Journal of Functional Programming, 29(e19). DOI: [10.1017/S0956796819000170](https://doi.org/10.1017/S0956796819000170)

Homework 7.md

@@ -0,0 +1,82 @@
**Exercise 22.3.9**: For clarity, the original constraint typing rules from class are reproduced below.
```
┌──────────────────┐
│ Γ ⊢ t : T | V; C │
└──────────────────┘
x : T ∈ Γ
---------------- CT-Var
Γ ⊢ x : T | ∅; ∅
Γ, x : T₁ ⊢ t₂ : T₂ | V; C
-------------------------------- CT-Abs
Γ ⊢ λx : T₁. t₂ : T₁ → T₂ | V; C
Γ ⊢ t₁ : T₁ | V₁; C₁
Γ ⊢ t₂ : T₂ | V₂; C₂
V₁ ∩ V₂ = V₁ ∩ FV(T₂) = V₂ ∩ FV(T₁) = ∅
X ∉ V₁, V₂, FV(T₁, T₂, C₁, C₂, Γ, t₁, t₂)
------------------------------------------------------ CT-App
Γ ⊢ t₁ t₂ : X | V₁ ∪ V₂ ∪ {X}; C₁ ∪ C₂ ∪ {T₁ = T₂ → X}
```
The new constraint typing rules that thread through unused type variables are below.
```
┌─────────────────────┐
│ Γ; F ⊢ t : T | F; C │
└─────────────────────┘
x : T ∈ Γ
------------------- FCT-Var
Γ; F ⊢ x : T | F; ∅
Γ, x : T₁; F ⊢ t₂ : T₂ | F'; C
------------------------------------ FCT-Abs
Γ; F ⊢ λx : T₁. t₂ : T₁ → T₂ | F'; C
Γ; F₁ ⊢ t₁ : T₁ | F₂; C₁
Γ; F₂ ⊢ t₂ : T₂ | F₃; C₂
X ∈ F₃ F₄ = F₃ \ {X}
----------------------------------------------------- FCT-App
Γ; F₁ ⊢ t₁ t₂ : X | F₄; C₁ ∪ C₂ ∪ {T₁ = T₂ → X}
```
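As a sanity check on the threading discipline, here's a hypothetical Haskell sketch in which the supply F is an explicit list of variable names that each judgement consumes from the left (the datatypes are my own minimal encoding):
```haskell
data Ty = TyId String | TyArr Ty Ty
  deriving (Eq, Show)
data Tm = Var String | Abs String Ty Tm | App Tm Tm
  deriving Show

type Ctx = [(String, Ty)]
type Constr = (Ty, Ty)

-- fct g f t returns (T, F', C) for Γ; F ⊢ t : T | F'; C.
fct :: Ctx -> [String] -> Tm -> Maybe (Ty, [String], [Constr])
fct g f (Var x) = do                 -- FCT-Var: supply untouched
  t <- lookup x g
  pure (t, f, [])
fct g f (Abs x t1 b) = do            -- FCT-Abs: supply threaded through the body
  (t2, f', c) <- fct ((x, t1) : g) f b
  pure (TyArr t1 t2, f', c)
fct g f1 (App t1 t2) = do            -- FCT-App: pop X from what t₁ and t₂ left over
  (ty1, f2, c1) <- fct g f1 t1
  (ty2, x : f4, c2) <- fct g f2 t2
  pure (TyId x, f4, (ty1, TyArr ty2 (TyId x)) : c1 ++ c2)
```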
A key property of the new rules is that the output variables
are indeed fresh with respect to the inputs and the other outputs.
**Lemma**: If Γ; Fᵢ ⊢ t : T | Fₒ; C and Fᵢ ∩ (FV(Γ) ∪ FV(t)) = ∅, then Fₒ ⊆ Fᵢ and Fₒ ∩ (FV(T) ∪ FV(C)) = ∅.
**Proof**: By induction on the derivation of the new typing rules.
* Case FCT-Var (Fₒ = Fᵢ = F): F ∩ FV(Γ) = ∅ and x : T ∈ Γ, so F ∩ FV(T) = ∅, and C = ∅.
* Case FCT-Abs: By supposition, F ∩ FV(T₁) = ∅.
By the induction hypothesis, F' ⊆ F and F' ∩ (FV(T₂) ∪ FV(C)) = ∅;
since F' ⊆ F, we also have F' ∩ FV(T₁) = ∅ by the supposition,
so F' ∩ FV(T₁ → T₂) = ∅ as needed.
* Case FCT-App: By supposition, F₁ ∩ FV(t₁) = F₁ ∩ FV(t₂) = ∅.
By the induction hypothesis on both premises in sequence,
we have F₃ ⊆ F₂ ⊆ F₁ and F₂ ∩ (FV(T₁) ∪ FV(C₁)) = F₃ ∩ (FV(T₂) ∪ FV(C₂)) = ∅.
By definition, F₄ ⊆ F₃, and X ∉ F₄.
Then F₄ ∩ (FV(T₁) ∪ FV(T₂) ∪ FV(C₁) ∪ FV(C₂)) = ∅ by subset,
so F₄ ∩ FV(C₁ ∪ C₂ ∪ {T₁ = T₂ → X}) = ∅ as well.
We can then show that the typing judgements imply one another.
**Lemma** (soundness of FCT wrt CT): Suppose Fᵢ ∩ (FV(Γ) ∪ FV(t)) = ∅.
If Γ; Fᵢ ⊢ t : T | Fₒ; C, then Γ ⊢ t : T | Fᵢ \ Fₒ; C.
**Proof**: By induction on the derivation of the new typing rules.
* Case FCT-App: The premises are Γ; F₁ ⊢ t₁ : T₁ | F₂; C₁ and Γ; F₂ ⊢ t₂ : T₂ | F₃; C₂,
along with X ∈ F₃ and F₄ = F₃ \ {X}.
The goal is to show that Γ ⊢ t₁ t₂ : X | F₁ \ F₄; C₁ ∪ C₂ ∪ {T₁ = T₂ → X}.
Applying CT-App, we need the following premises:
1. Γ ⊢ t₁ : T₁ | V₁; C₁ for some V₁;
2. Γ ⊢ t₂ : T₂ | V₂; C₂ for some V₂;
3. V₁ ∩ V₂ = V₁ ∩ FV(T₂) = V₂ ∩ FV(T₁) = ∅; and
4. X ∉ V₁, V₂, FV(T₁, T₂, C₁, C₂, Γ, t₁, t₂).
By the induction hypothesis, we have the first goal where V₁ = F₁ \ F₂.
By the above lemma, F₂ ⊆ F₁, so its variables are not free in Γ or t₂.
Then we apply the induction hypothesis again to yield the second goal where V₂ = F₂ \ F₃.
By set subtraction, we know that V₁ ∩ V₂ = ∅. The remaining disjointness conditions and the freshness of X follow from the above lemma, since V₁, V₂, and {X} are drawn from pairwise disjoint portions of the supply F₁, none of whose variables are free in Γ, t₁, t₂, or the outputs produced before them. Since F₁ \ F₄ = V₁ ∪ V₂ ∪ {X}, CT-App then gives the goal.
**Lemma** (completeness of FCT wrt CT): Suppose V ∩ (FV(Γ) ∪ FV(t)) = ∅.
If Γ ⊢ t : T | V; C, then Γ; V ⊢ t : T | ∅; C.

README.md

@@ -0,0 +1,15 @@
# CIS 6700: Type Systems - Spring 2025
## Homeworks
Exercise numbers refer to TAPL.
| # | Homework | Due |
| - | -------- | --- |
| 1 | Formal definition of substitution on de Bruijn terms | 27 January |
| 2 | 3.5.16, 3.5.17 | 3 February |
| 3 | [Exercise 2](https://www.cs.cmu.edu/~rwh/courses/atpl/pdfs/tait.pdf); largest and smallest saturated sets | 10 February |
| 4 | 13.3.1, 13.5.8 | 19 February |
| 5 | 15.3.6, 16.1.3, 16.2.5, 16.2.6, 16.3.2 | 24 February |
| 6 | Barendregt's or Gallier's normalization proof | 5 March |
| 7 | 22.3.9, 22.3.10, 22.4.6, 22.5.5 | 17 March |

TyRecon.idr

@@ -0,0 +1,300 @@
import Data.Fin
import Data.Vect
import Control.Monad.State
import Control.Monad.Reader
import Control.Monad.Either
%default total
{-- Basic data definitions:
* types;
* length-indexed contexts and lookup;
* well-scoped terms;
* constraint sets; and
* solutions.
--}
data Ty : Type where
TyId : String -> Ty
TyArr : Ty -> Ty -> Ty
TyBool : Ty
TyNat : Ty
FromString Ty where
fromString = TyId
0 Ctxt : Nat -> Type
Ctxt n = Vect n Ty
data Tm : Nat -> Type where
Var : forall n. Fin n -> Tm n
Abs : forall n. Ty -> Tm (S n) -> Tm n
App : forall n. Tm n -> Tm n -> Tm n
True : forall n. Tm n
False : forall n. Tm n
If : forall n. Tm n -> Tm n -> Tm n -> Tm n
Zero : forall n. Tm n
Succ : forall n. Tm n -> Tm n
Pred : forall n. Tm n -> Tm n
IsZero : forall n. Tm n -> Tm n
0 Constrs : Type
Constrs = List (Ty, Ty)
0 Solution : Type
Solution = (Ty, Constrs)
{-- Raw, unscoped terms to scoped terms --}
private infix 1 `Then`
data TmRaw : Type where
ι : String -> TmRaw
λ : String -> Ty -> TmRaw -> TmRaw
(#) : TmRaw -> TmRaw -> TmRaw
T : TmRaw
F : TmRaw
Then : TmRaw -> (TmRaw, TmRaw) -> TmRaw
ZZ : TmRaw
SS : TmRaw -> TmRaw
PP : TmRaw -> TmRaw
IZ : TmRaw -> TmRaw
FromString TmRaw where
fromString = ι
Iff : TmRaw -> TmRaw
Iff = id
private infix 2 `Else`
Else : TmRaw -> TmRaw -> (TmRaw, TmRaw)
Else = (,)
scoped : forall n. Vect n String -> TmRaw -> Either String (Tm n)
scoped g (ι x) =
case findIndex (== x) g of
Just n => pure $ Var n
Nothing => throwError "The variable \{x} is not in scope"
scoped g (λ x t b) = Abs t <$> scoped (x :: g) b
scoped g (b # a) = App <$> scoped g b <*> scoped g a
scoped g T = pure True
scoped g F = pure False
scoped g (b `Then` (c, d)) = If <$> scoped g b <*> scoped g c <*> scoped g d
scoped g ZZ = pure Zero
scoped g (SS n) = Succ <$> scoped g n
scoped g (PP n) = Pred <$> scoped g n
scoped g (IZ n) = IsZero <$> scoped g n
{-- Some instances for the above, Interpolation acting as pretty printer --}
Eq Ty where
TyId x == TyId y = x == y
TyArr tal tbl == TyArr tar tbr = tal == tar && tbl == tbr
TyBool == TyBool = True
TyNat == TyNat = True
_ == _ = False
isTerminal : forall n. Tm n -> Bool
isTerminal (Var _) = True
isTerminal True = True
isTerminal False = True
isTerminal Zero = True
isTerminal _ = False
Interpolation Ty where
interpolate (TyId x) = x
interpolate (TyArr ta@(TyArr _ _) tb) = "(\{ta}) -> \{tb}"
interpolate (TyArr ta tb) = "\{ta} → \{tb}"
interpolate TyBool = "Bool"
interpolate TyNat = "Nat"
Interpolation (Tm n) where
interpolate (Var n) = "v" ++ show n
interpolate (Abs ta b) = "λ_: \{ta}. \{b}"
interpolate (App b a) =
case b of
App _ _ => "\{b}"
_ => if isTerminal b then "\{b}" else "(\{b})"
++ " " ++ if isTerminal a then "\{a}" else "(\{a})"
interpolate True = "true"
interpolate False = "false"
interpolate (If b c d) = "if \{b} then \{c} else \{d}"
interpolate Zero = "0"
interpolate (Succ n) =
case n of
Succ _ => "\{n} + 1"
Pred _ => "\{n} + 1"
_ => if isTerminal n then "\{n} + 1" else "(\{n}) + 1"
interpolate (Pred n) =
case n of
Succ _ => "\{n} - 1"
Pred _ => "\{n} - 1"
_ => if isTerminal n then "\{n} - 1" else "(\{n}) - 1"
interpolate (IsZero n) =
if isTerminal n then "zero? \{n}" else "zero? (\{n})"
Interpolation (Ctxt n) where
interpolate [] = "·"
interpolate [t] = "\{t}"
interpolate (t :: ts) = "\{ts}, \{t}"
Interpolation Constrs where
interpolate [] = ""
interpolate [(ta, tb)] = "\{ta} = \{tb}"
interpolate ((ta, tb) :: cs) = "\{ta} = \{tb}, \{cs}"
{-- Constraint-based type checking monad with:
* state monad for generating fresh type variables; and
* reader monad for extending type context.
--}
0 CTM : Nat -> Type -> Type
CTM n = ReaderT (Ctxt n) (State Nat)
-- Reader.local doesn't work since extending the context changes its type
withTy : forall a, n. Ty -> CTM (S n) a -> CTM n a
withTy t ctm = MkReaderT (\ctxt => runReaderT (t :: ctxt) ctm)
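-- Mint a fresh type variable (?X_0, ?X_1, ...) by bumping the counter state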
nextVar : forall n. CTM n Ty
nextVar = do
c <- get
put (S c)
pure $ TyId ("?X_" ++ show c)
{-- Constraint-based type checking algorithm --}
ctype : forall n. Tm n -> CTM n Solution
ctype (Var n) = do
ctxt <- ask
pure (index n ctxt, [])
ctype (Abs ta b) = do
(tb, cb) <- withTy ta (ctype b)
pure (TyArr ta tb, cb)
ctype (App b a) = do
(tb, cb) <- ctype b
(ta, ca) <- ctype a
x <- nextVar
pure (x, (tb, TyArr ta x) :: cb ++ ca)
ctype True = pure (TyBool, [])
ctype False = pure (TyBool, [])
ctype (If b c d) = do
(tb, cb) <- ctype b
(tc, cc) <- ctype c
(td, cd) <- ctype d
pure (tc, (tb, TyBool) :: (tc, td) :: cb ++ cc ++ cd)
ctype Zero = pure (TyNat, [])
ctype (Succ n) = do
(tn, cn) <- ctype n
pure (TyNat, (tn, TyNat) :: cn)
ctype (Pred n) = do
(tn, cn) <- ctype n
pure (TyNat, (tn, TyNat) :: cn)
ctype (IsZero n) = do
(tn, cn) <- ctype n
pure (TyBool, (tn, TyNat) :: cn)
solve : forall n. Ctxt n -> Tm n -> Solution
solve ctxt = runIdentity . evalStateT 0 . runReaderT ctxt . ctype
{-- Substitutions are mappings from strings to types
and act on terms and types --}
0 Subst : Type
Subst = String -> Ty
(:<) : Subst -> (String, Ty) -> Subst
s :< (x, t) = \y => if x == y then t else s y
subst : Subst -> Ty -> Ty
subst s (TyId x) = s x
subst s (TyArr ta tb) = TyArr (subst s ta) (subst s tb)
subst s t = t
substTm : forall n. Subst -> Tm n -> Tm n
substTm s (Abs ta b) = Abs (subst s ta) (substTm s b)
substTm s (App b a) = App (substTm s b) (substTm s a)
substTm s (If b c d) = If (substTm s b) (substTm s c) (substTm s d)
substTm s a = a
{-- Constraint unification algorithm --}
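-- occurs t x: the occurs check, i.e. whether type variable x appears in t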
occurs : Ty -> String -> Bool
occurs (TyId y) x = x == y
occurs (TyArr ta tb) x = occurs ta x || occurs tb x
occurs _ _ = False
0 UCM : Type -> Type
UCM = StateT Subst $ EitherT String Identity
covering
unify : Constrs -> UCM ()
unify [] = pure ()
unify ((ta, tb) :: cs) = do
s <- get
let ta = subst s ta
let tb = subst s tb
if ta == tb then unify cs else
case (ta, tb) of
(TyId x, t) =>
if occurs t x
then throwError "The variable \{x} occurs in the right hand side \{t}"
else do
put $ s :< (x, t)
unify cs
(t, TyId x) =>
if occurs t x
then throwError "The variable \{x} occurs in the left hand side \{t}"
else do
put $ s :< (x, t)
unify cs
(TyArr tal tbl, TyArr tar tbr) =>
unify ((tal, tar) :: (tbl, tbr) :: cs)
_ => throwError "Cannot unify \{ta} with \{tb}"
covering
unifier : Constrs -> Either String Subst
unifier = runIdentity . runEitherT . execStateT TyId . unify
{-- Example expressions from 22.5.2, 22.5.5 --}
-- λx: X. x y
test0 : TmRaw
test0 = λ "x" "X" $ "x" # "y"
-- λx: X. x
test1 : TmRaw
test1 = λ "x" "X" "x"
-- λz: Z. λy: Y. z (y true)
test2 : TmRaw
test2 = λ "z" "Z" $ λ "y" "Y" $ "z" # ("y" # T)
-- λw: W. if true then false else w false
test3 : TmRaw
test3 = λ "w" "W" $ Iff T `Then` F `Else` ("w" # F)
-- λx: X. λy: Y. λz: Z. (x z) (y z)
test4 : TmRaw
test4 = λ "x" "X" $ λ "y" "Y" $ λ "z" "Z" $ ("x" # "z") # ("y" # "z")
-- λx: X. x x
test5 : TmRaw
test5 = λ "x" "X" $ "x" # "x"
covering
printSolveUnify : TmRaw -> IO ()
printSolveUnify a = do
let Right a = scoped [] a
| Left e => putStrLn "scope failed: \{e}"
let (ta, cs) = solve [] a
-- [context] ⊢ (term) : (type) | {constraints}
putStrLn "solve: ⊢ (\{a}) : (\{ta}) | {\{cs}}"
let Right s = unifier cs
| Left e => putStrLn "unify failed: \{e}"
-- [context{s}] ⊢ (term{s}) : (type{s})
putStrLn "unify: ⊢ (\{substTm s a}) : (\{subst s ta})"
covering
main : IO ()
main = do
traverse_ {t = List} printSolveUnify
[test0, test1, test2, test3, test4, test5]

tyrecon.ipkg

@@ -0,0 +1,4 @@
package tyrecon
modules = TyRecon
depends = contrib