\documentclass[12pt]{article}
\title{Introduction to the foundations of mathematics, using the Lestrade Type Inspector}
\author{Randall Holmes}
\date{4/17/2020: annotating for talk. If I like it, this might be the permanent version. 4/15/2020: I have enabled abstract iteration without rewriting. Further corrections to the final section.\\4/14/2020: The text is revised (largely) to conform with the new implementation, though it does not exhaustively survey new features.\\4/13/2020: revised to run under the reimplementation. Rewriting cannot be used, so the block about abstract iteration is not run. Substitution isn't working correctly for some reason. Only Lestrade code is executed: remarks about Lestrade still presume the old version.\\
The reason substitution didn't work had to do with the order in which its arguments were declared: the new version puts
implicit arguments in in the actual order in which they appear, and this can force a need for a different order.\\
10/31/2017 10:30 am: Systematic introduction of terms with bound variables.}
\usepackage{amssymb}
\begin{document}
\maketitle
\tableofcontents
\newpage
The purpose of this document is to introduce a reader to the foundations of logic and mathematics using the Lestrade Type Inspector, a piece of software designed to allow the specification of mathematical objects in a very general way. It could also be used as an introduction to the software for someone familiar with the foundational subject matter.
Lestrade implements a particular very general framework for the implementation of mathematical objects, statements, and proofs of statements. Part of the underpinning of the approach is that in this framework the statements and their proofs are viewed as particular kinds of mathematical object themselves.
The actual implementation of foundational concepts of logic and mathematics here is not dictated by Lestrade: there is considerable latitude for different design decisions in the implementation of logic and mathematics in the framework. We may sometimes indicate alternative approaches.
\section{Initial examples. Conjunction, implication, and their rules.}
We begin with the implementation of the very simple concepts of logical conjunction, the use of the word ``and" to link sentences, and logical implication, the use of ``if$\ldots$then$\ldots$" to link sentences.
\begin{verbatim}
begin Lestrade execution
>>> declare A prop
A : prop
{move 1}
>>> declare B prop
B : prop
{move 1}
end Lestrade execution
\end{verbatim}
Here is a bit of initial dialogue with Lestrade. Here we use the {\tt declare} command to introduce two variables, $A$ and $B$, of type {\tt prop},
the type inhabited by mathematical statements.
Lines starting with \verb|>>>| followed by Lestrade command names such as {\tt declare}, {\tt postulate}, {\tt define} are entered by the user. Other lines in these blocks are Lestrade responses to commands typed by the user.
\begin{verbatim}
begin Lestrade execution
>>> postulate & A B : prop
& : [(A_1 : prop), (B_1 : prop) =>
(--- : prop)]
{move 0}
>>> postulate -> A B : prop
-> : [(A_1 : prop), (B_1 : prop) =>
(--- : prop)]
{move 0}
end Lestrade execution
\end{verbatim}
Here we declare the operations of conjunction and implication. At the moment, they look just the same: the only thing Lestrade knows about them so far is that they are operations taking two proposition inputs to a proposition output. Details of the input and output of Lestrade itself (the things the user enters and the replies that Lestrade produces) will be analyzed more carefully as we go forward.
\begin{verbatim}
begin Lestrade execution
>>> define proptest A B : (A & B) -> \
A
proptest : [(A_1 : prop), (B_1 : prop) =>
({def} (A_1 & B_1) -> A_1 : prop)]
proptest : [(A_1 : prop), (B_1 : prop) =>
(--- : prop)]
{move 0}
end Lestrade execution
\end{verbatim}
We illustrate another Lestrade command, using {\tt define} to introduce a defined operation. The main point here is to notice that Lestrade supports
infix use of the conjunction and implication operators, though the Lestrade declaration commands require their use in prefix position when they are newly declared. The Lestrade user should get used to typing lots of parentheses, though she does not need to use as many as are displayed in the output: she does need to be aware that, in general, all infix (or mixfix) operations have the same precedence and group to the right when explicit parentheses are not provided, and that unary operations bind more tightly than binary or infix operations.
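The grouping convention just stated can be illustrated with a toy parser. This is our own illustrative Python, not Lestrade code: it models only the rule that all infix operators share one precedence level and group to the right.

```python
# Toy illustration (not Lestrade code): with a single precedence level,
# infix operators group to the right when parentheses are omitted.

def parse(tokens):
    """Parse a flat [operand, op, operand, op, ...] list, grouping right."""
    if len(tokens) == 1:
        return tokens[0]
    # take the first operand and operator, then recurse on the rest
    left, op, rest = tokens[0], tokens[1], tokens[2:]
    return (op, left, parse(rest))

# A & B -> C is read as A & (B -> C), not (A & B) -> C
print(parse(["A", "&", "B", "->", "C"]))  # ('&', 'A', ('->', 'B', 'C'))
```

So a user who intends $(A \wedge B) \rightarrow C$ must supply the parentheses explicitly.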
\begin{verbatim}
begin Lestrade execution
>>> open
{move 2}
>>> declare A1 prop
A1 : prop
{move 2}
>>> declare B1 prop
B1 : prop
{move 2}
>>> define proptest2 A1 B1 : (A1 & B1) -> \
A1
proptest2 : [(A1_1 : prop), (B1_1
: prop) => (--- : prop)]
{move 1}
>>> close
{move 1}
end Lestrade execution
\end{verbatim}
Here we do something subtle in the Lestrade declaration environment which we don't explain fully for now: the {\tt open}$\ldots${\tt close} environment creates
a separate little Lestrade context. The alternative version {\tt proptest2} of our defined notion will behave a little differently as we see at once.
\begin{verbatim}
begin Lestrade execution
>>> declare C prop
C : prop
{move 1}
>>> declare D prop
D : prop
{move 1}
>>> define zorch C D : proptest C & D, D -> \
C
zorch : [(C_1 : prop), (D_1 : prop) =>
({def} ((C_1 & D_1) proptest D_1) ->
C_1 : prop)]
zorch : [(C_1 : prop), (D_1 : prop) =>
(--- : prop)]
{move 0}
>>> define zorch2 C D : proptest2 C & D, D -> \
C
zorch2 : [(C_1 : prop), (D_1 : prop) =>
({def} (((C_1 & D_1) & D_1) ->
C_1 & D_1) -> C_1 : prop)]
zorch2 : [(C_1 : prop), (D_1 : prop) =>
(--- : prop)]
{move 0}
end Lestrade execution
\end{verbatim}
Here we use {\tt proptest} and {\tt proptest2} to define new operations {\tt zorch} and {\tt zorch2}. The interesting thing which happens is that the operation
{\tt proptest2} which was defined in its own little local context gets expanded when it is used, while {\tt proptest} (which ``means" the same thing) is left unexpanded. Expansion of definitions is the main kind of ``calculation" that Lestrade does, though we may detect it doing more complex things as we go forward.
Now we will return to our main line of development, introducing the machinery of proof in Lestrade.
\begin{verbatim}
begin Lestrade execution
>>> declare aa that A
aa : that A
{move 1}
>>> declare bb that B
bb : that B
{move 1}
end Lestrade execution
\end{verbatim}
We declare new variables {\tt aa} and {\tt bb}. The sorts of these variables require special explanation. With each proposition $p$ of sort
{\tt prop}, we associate a new sort {\tt that} $p$ inhabited by proofs of $p$, or, perhaps better, evidence that $p$ is true.
\begin{verbatim}
begin Lestrade execution
>>> postulate Andproof0 A B aa bb : that \
A & B
Andproof0 : [(A_1 : prop), (B_1 : prop), (aa_1
: that A_1), (bb_1 : that B_1) =>
(--- : that A_1 & B_1)]
{move 0}
>>> postulate Andproof aa bb : that A & B
Andproof : [(.A_1 : prop), (.B_1
: prop), (aa_1 : that .A_1), (bb_1
: that .B_1) => (--- : that .A_1
& .B_1)]
{move 0}
>>> define Selfand aa : Andproof aa aa
Selfand : [(.A_1 : prop), (aa_1 : that
.A_1) =>
({def} aa_1 Andproof aa_1 : that .A_1
& .A_1)]
Selfand : [(.A_1 : prop), (aa_1 : that
.A_1) => (--- : that .A_1 & .A_1)]
{move 0}
end Lestrade execution
\end{verbatim}
And now we introduce a rule of proof: if we have evidence that $A$ and evidence that $B$, we can conclude $A \wedge B$: to conclude $A \wedge B$ is equivalent to postulating or defining an object of sort {\tt that A \& B}. The symbol $\wedge$ is the standard representation of ``and" in formal logic; Lestrade uses {\tt \&} because of the limitations of the typewriter keyboard.
The fully verbose version {\tt Andproof0} takes the arguments $A$, $B$, {\tt aa} and {\tt bb}, and the Lestrade framework requires these arguments officially. Notice though that from the arguments {\tt aa} and {\tt bb} we can deduce what $A$ and $B$ have to be: the second version {\tt Andproof} uses the
``implicit argument inference" feature of Lestrade to allow the user to enter just the names of the proofs, deducing the names of the propositions proved. The declaration that Lestrade gives as a response makes it clear that it knows about the hidden arguments.
{\tt Selfand} is a defined operation on proofs: from a proof of $A$ it generates a proof of $A \wedge A$. We might think that this is a proof of ``If $A$ then $A\wedge A$", or $A \rightarrow (A\wedge A)$: the fact that we might think this is a hint as to how Lestrade represents proofs of implications. In fact, {\tt Selfand} is a rule of inference, not a proof of a conditional, but it can be used to prove the conditional as we will see below.
\begin{verbatim}
begin Lestrade execution
>>> declare xx that A & B
xx : that A & B
{move 1}
>>> postulate Simplification1 xx : that \
A
Simplification1 : [(.A_1 : prop), (.B_1
: prop), (xx_1 : that .A_1 & .B_1) =>
(--- : that .A_1)]
{move 0}
>>> postulate Simplification2 xx : that \
B
Simplification2 : [(.A_1 : prop), (.B_1
: prop), (xx_1 : that .A_1 & .B_1) =>
(--- : that .B_1)]
{move 0}
end Lestrade execution
\end{verbatim}
For completeness, we introduce the other two (quite obvious) rules of conjunction: from evidence {\tt xx} for $A \wedge B$, we can extract evidence for $A$ and evidence for $B$. We introduce them in forms which hide implicit arguments.
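Under the Curry-Howard reading developed in the next section, {\tt Andproof} corresponds to pairing and the two simplification rules to projections. A minimal Python sketch of that reading (the encoding and the lowercase names are ours, not Lestrade's):

```python
# Illustrative encoding (ours, not Lestrade's): evidence for A & B
# is modeled as a pair of evidence for A and evidence for B.

def andproof(aa, bb):
    """From evidence for A and evidence for B, build evidence for A & B."""
    return (aa, bb)

def simplification1(xx):
    """From evidence for A & B, extract evidence for A."""
    return xx[0]

def simplification2(xx):
    """From evidence for A & B, extract evidence for B."""
    return xx[1]

ev = andproof("evidence-for-A", "evidence-for-B")
print(simplification1(ev))  # evidence-for-A
print(simplification2(ev))  # evidence-for-B
```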
\begin{verbatim}
begin Lestrade execution
>>> declare cc that A -> B
cc : that A -> B
{move 1}
>>> postulate Mp aa cc : that B
Mp : [(.A_1 : prop), (.B_1 : prop), (aa_1
: that .A_1), (cc_1 : that .A_1
-> .B_1) => (--- : that .B_1)]
{move 0}
end Lestrade execution
\end{verbatim}
This snippet of code embodies the traditional rule of {\em modus ponens}: given evidence for $A$ and evidence for $A \rightarrow B$, we have evidence for $B$. We have only given the version with implicit arguments.
\begin{verbatim}
open
declare aaa that A
>> aaa: that A {move 2}
postulate ded aaa that B
>> ded: [(aaa_1:that A) => (---:that B)]
>> {move 1}
close
postulate Deduction ded : that A -> B
>> Deduction: [(.A_1:prop),(.B_1:prop),(ded_1:
>> [(aaa_2:that .A_1) => (---:that .B_1)])
>> => (---:that (.A_1 -> .B_1))]
>> {move 0}
end Lestrade execution
\end{verbatim}
Above is the old-style implementation of the deduction theorem, without variable binding terms.
\begin{verbatim}
begin Lestrade execution
>>> declare aaa1 that A
aaa1 : that A
{move 1}
>>> declare ded [aaa1 => that B]
ded : [(aaa1_1 : that A) => (--- : that
B)]
{move 1}
>>> postulate Deduction ded : that A -> \
B
Deduction : [(.A_1 : prop), (.B_1
: prop), (ded_1 : [(aaa1_2 : that
.A_1) => (--- : that .B_1)]) =>
(--- : that .A_1 -> .B_1)]
{move 0}
end Lestrade execution
\end{verbatim}
This piece of code implements a standard strategy for proving implications, in the more compact style which variable binding terms allow (it is not necessary to open a new move to declare {\tt ded} as in the code given above): if assuming $A$ allows us to deduce $B$, we can conclude $A \rightarrow B$. What is quite tricky is how Lestrade represents this. In the older style, we open a little environment in which we postulate the function {\tt ded} which takes
evidence {\tt aaa} for $A$ to evidence for $B$; we close this environment, and the symbol {\tt ded} remains as a variable representing a function of this sort. We are then able to postulate a function which takes any such function to evidence for $A \rightarrow B$. We {\em will\/} in due course have a careful discussion of Lestrade environments. For the moment, we will content ourselves with giving an example of how this is used.
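In the same illustrative Python encoding used for conjunction above (ours, not Lestrade's), {\tt Mp} is function application and {\tt Deduction} packages a rule as evidence for the implication. As the side remark below cautions, Lestrade does not literally identify a proof of an implication with a function, but the sketch conveys the shape of the two rules:

```python
# Illustrative encoding (ours, not Lestrade's): evidence for A -> B is
# modeled, loosely, as a function from evidence for A to evidence for B.

def mp(aa, cc):
    """Modus ponens: evidence for A plus evidence for A -> B gives evidence for B."""
    return cc(aa)

def deduction(ded):
    """From a rule taking evidence for A to evidence for B, conclude A -> B.
    In this toy encoding the rule itself serves as the evidence."""
    return ded

def andproof(aa, bb):
    """Pairing, as in the conjunction sketch."""
    return (aa, bb)

# Selfand2 in this encoding: for any A, evidence for A -> (A & A)
selfand2 = deduction(lambda aaa: andproof(aaa, aaa))
print(mp("evidence-for-A", selfand2))  # ('evidence-for-A', 'evidence-for-A')
```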
A side remark to those in the know: it is important to notice that a proof of an implication is not identified with a function from proofs of its antecedent to proofs of its consequent, but obtained from such a function by applying a constructor casting from a function sort to an object sort (see the next section on metaphysics of Lestrade for a discussion of object vs. function sorts).
\begin{verbatim}
open
declare aaa that A
>> aaa: that A {move 2}
define selfand aaa : Andproof aaa aaa
>> selfand: [(aaa_1:that A) => ((aaa_1
>> Andproof aaa_1):that (A & A))]
>> {move 1}
close
define Selfand2 A : Deduction selfand
>> Selfand2: [(A_1:prop) => (Deduction([(aaa_2:
>> that A_1) => ((aaa_2 Andproof aaa_2):
>> that (A_1 & A_1))])
>> :that (A_1 -> (A_1 & A_1)))]
>> {move 0}
end Lestrade execution
\end{verbatim}
A proof given in the old style, discussed below.
\begin{verbatim}
begin Lestrade execution
>>> define Selfand2 A : Deduction [aaa1 \
=> Andproof aaa1 aaa1]
Selfand2 : [(A_1 : prop) =>
({def} Deduction ([(aaa1_2 : that
A_1) =>
({def} aaa1_2 Andproof aaa1_2 : that
A_1 & A_1)]) : that A_1 -> A_1
& A_1)]
Selfand2 : [(A_1 : prop) => (--- : that
A_1 -> A_1 & A_1)]
{move 0}
end Lestrade execution
\end{verbatim}
Here we actually prove the theorem $A \rightarrow (A \wedge A)$ for any proposition $A$, in a very compact form allowed by use of the term {\tt [aaa1 => Andproof aaa1 aaa1]} for the function declared as {\tt selfand} in the open/close block in the original proof. It is interesting to observe that this is actually a function of the proposition $A$ rather than of a proof of $A$: whether $A$ itself is true or not, this theorem is true, and the definition of the function {\tt Selfand2} encapsulates reasoning justifying this: from a proposition $A$, we can postulate evidence for the proposition $A \rightarrow A \wedge A$.
An interesting feature of the Lestrade output is that it contains a mathematical expression $${\tt [(A_1:prop) => (Deduction([(aaa_2: that \,A_1) => ((aaa_2 \,Andproof\, aaa_2): that (A_1 \& A_1))]) : that\, (A_1 -> (A_1 \& A_1)))]}$$
standing for the proof as a mathematical object. Lestrade allows itself output notation significantly more complex than the user input notation, but with experience we will be able to read this.
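For readers who know a modern proof assistant, the theorem established by {\tt Selfand2} corresponds to a one-line Lean proof. Lean's encoding of proofs differs from Lestrade's (in particular, Lean does identify a proof of an implication with a function); it is offered only as a familiar point of comparison:

```lean
-- The theorem proved by Selfand2, for an arbitrary proposition A.
example (A : Prop) : A → A ∧ A :=
  fun aa => ⟨aa, aa⟩
```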
\newpage
\section{The Curry-Howard isomorphism: defining type constructors analogous to propositional connectives}
We recapitulate the basic declarations for the propositional connectives of conjunction and implication, and in parallel implement the type constructors which build Cartesian products and function spaces, along with the basic operations on the complex types. The analogy between the propositional constructions on the one hand and the type constructions on the other is known as the Curry-Howard isomorphism.
\begin{verbatim}
declare A prop
>> A: prop {move 1}
declare B prop
>> B: prop {move 1}
postulate & A B : prop
>> &: [(A_1:prop),(B_1:prop) => (---:prop)]
>> {move 0}
postulate -> A B : prop
>> ->: [(A_1:prop),(B_1:prop) => (---:prop)]
>> {move 0}
end Lestrade execution
\end{verbatim}
We declare the Cartesian product and function space constructors.
\begin{verbatim}
begin Lestrade execution
>>> declare At type
At : type
{move 1}
>>> declare Bt type
Bt : type
{move 1}
>>> postulate X At Bt : type
X : [(At_1 : type), (Bt_1 : type) =>
(--- : type)]
{move 0}
>>> postulate ->> At Bt : type
->> : [(At_1 : type), (Bt_1 : type) =>
(--- : type)]
{move 0}
end Lestrade execution
\end{verbatim}
\begin{verbatim}
declare aa that A
>> aa: that A {move 1}
declare bb that B
>> bb: that B {move 1}
postulate Andproof aa bb:that A & B
>> Andproof: [(.A_1:prop),(aa_1:that .A_1),(.B_1:
>> prop),(bb_1:that .B_1) => (---:that
>> (.A_1 & .B_1))]
>> {move 0}
declare xx that A & B
>> xx: that (A & B) {move 1}
postulate Simplification1 xx : that A
>> Simplification1: [(.A_1:prop),(.B_1:prop),
>> (xx_1:that (.A_1 & .B_1)) => (---:that
>> .A_1)]
>> {move 0}
postulate Simplification2 xx : that B
>> Simplification2: [(.A_1:prop),(.B_1:prop),
>> (xx_1:that (.A_1 & .B_1)) => (---:that
>> .B_1)]
>> {move 0}
end Lestrade execution
\end{verbatim}
We introduce the pair operation and projection functions, which we see are formally analogous to the logical rules of conjunction and simplification.
\begin{verbatim}
begin Lestrade execution
>>> declare aat in At
aat : in At
{move 1}
>>> declare bbt in Bt
bbt : in Bt
{move 1}
>>> postulate Pair aat bbt in At X Bt
Pair : [(.At_1 : type), (.Bt_1 : type), (aat_1
: in .At_1), (bbt_1 : in .Bt_1) =>
(--- : in .At_1 X .Bt_1)]
{move 0}
>>> declare xxt in At X Bt
xxt : in At X Bt
{move 1}
>>> postulate proj1 xxt in At
proj1 : [(.At_1 : type), (.Bt_1 : type), (xxt_1
: in .At_1 X .Bt_1) => (--- : in
.At_1)]
{move 0}
>>> postulate proj2 xxt in Bt
proj2 : [(.At_1 : type), (.Bt_1 : type), (xxt_1
: in .At_1 X .Bt_1) => (--- : in
.Bt_1)]
{move 0}
end Lestrade execution
\end{verbatim}
\begin{verbatim}
declare cc that A->B
>> cc: that (A -> B) {move 1}
postulate Mp aa cc: that B
>> Mp: [(.A_1:prop),(aa_1:that .A_1),(.B_1:prop),
>> (cc_1:that (.A_1 -> .B_1)) => (---:that
>> .B_1)]
>> {move 0}
open
declare aaa that A
>> aaa: that A {move 2}
postulate ded aaa that B
>> ded: [(aaa_1:that A) => (---:that B)]
>> {move 1}
close
postulate Deduction ded : that A -> B
>> Deduction: [(.A_1:prop),(.B_1:prop),(ded_1:
>> [(aaa_2:that .A_1) => (---:that .B_1)])
>> => (---:that (.A_1 -> .B_1))]
>> {move 0}
end Lestrade execution
\end{verbatim}
We introduce function application and the formation of function objects from functions (lambda abstraction), which we see are formally analogous to modus ponens and the deduction theorem. The arguments of function application are supplied in the opposite order to those of modus ponens (not because there is any virtue to this, but because existing text below, written earlier, would have had to be extensively revised to reverse the order of the arguments of {\tt Mp}).
\begin{verbatim}
begin Lestrade execution
>>> declare cct in At ->> Bt
cct : in At ->> Bt
{move 1}
>>> declare aat2 in At
aat2 : in At
{move 1}
>>> postulate Apply cct aat2 in Bt
Apply : [(.At_1 : type), (.Bt_1 : type), (cct_1
: in .At_1 ->> .Bt_1), (aat2_1 : in
.At_1) => (--- : in .Bt_1)]
{move 0}
>>> declare dedt [aat2 => in Bt]
dedt : [(aat2_1 : in At) => (--- : in
Bt)]
{move 1}
>>> postulate Lambda dedt in At ->> Bt
Lambda : [(.At_1 : type), (.Bt_1
: type), (dedt_1 : [(aat2_2 : in
.At_1) => (--- : in .Bt_1)]) =>
(--- : in .At_1 ->> .Bt_1)]
{move 0}
end Lestrade execution
\end{verbatim}
In the section on equality, we will introduce more primitives for the case of types, which would have analogues for propositions if we were working in a constructive logic and wanted to carry out formal operations on proofs.
\newpage
\section{A brief discussion of the metaphysics of Lestrade}
Probably we should explain ourselves a bit more.
The most general word used for things we talk about in Lestrade is {\em entity\/}. Entities are further partitioned into {\em objects} and {\em functions}.
Entities have sorts: the sort indicates what kind of thing we are talking about.
The sorts of object can be reviewed quickly:
\begin{enumerate}
\item {\tt prop} is the sort of propositions, i.e., mathematical statements.
\item For each proposition $p$, we provide a sort {\tt that} $p$ inhabited by evidence that $p$ is true. A proof of $p$ is such evidence, and explicitly constructed objects of sort {\tt that} $p$ will be referred to as ``proofs of $p$"; but to suppose that $p$ is true (to postulate an object of the sort {\tt that} $p$) is not the same thing as to suppose that $p$ has actually been proved or even can be proved.\footnote{A constructivist might presume that to suppose that $X$ is true is the same thing as to suppose that $X$ has been proved, but we do not presume a constructivist view here.}
\item {\tt obj} is a sort inhabited by untyped mathematical objects.
\item {\tt type} is a sort inhabited by ``type labels". An example of an object of sort {\tt type} would be the label {\tt Nat} for the sort ``natural number".
\item For each $\tau$ of sort {\tt type} we provide a sort {\tt in} $\tau$ inhabited by objects of type $\tau$. If $n$ is a natural number, it might be construed as of sort {\tt in Nat}. Something of sort {\tt in} $\tau$ may more briefly be said to be of type $\tau$.
\end{enumerate}
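The grammar of object sorts enumerated above can be summarized in a small Python model. This is an illustration of the shape of the sort system only, not of Lestrade's internals; the class names are ours, and {\tt Nat} is the hypothetical type label from the text:

```python
# Illustrative model (ours) of the grammar of Lestrade's object sorts.
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:        # prop: the sort of propositions
    pass

@dataclass(frozen=True)
class Obj:         # obj: the sort of untyped mathematical objects
    pass

@dataclass(frozen=True)
class Type:        # type: the sort of type labels
    pass

@dataclass(frozen=True)
class That:        # that p: the sort of evidence that proposition p is true
    p: object

@dataclass(frozen=True)
class In:          # in tau: the sort of objects of type tau
    tau: object

print(In("Nat"))       # In(tau='Nat'): the sort of natural numbers
print(That("A & B"))   # That(p='A & B'): evidence for A & B
```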
If the reader notices an analogy between {\tt prop}/{\tt that} and {\tt type}/{\tt in}, she is perceptive.
Functions are more complicated and their sorts are more complicated. A Lestrade function takes a list of arguments of a fixed length, each item of which is of a sort possibly determined by earlier arguments in the list, and yields output of a sort which may depend on its arguments. A lot of the logical power of this framework comes from the fact that the sort of an argument of a function may depend on the values of earlier arguments, and the sort of the output may depend on the values of the inputs. One mechanism which makes such dependencies possible is the fact that the object sorts of the form {\tt that} $p$ and {\tt in} $\tau$ may contain quite complex expressions abbreviated here by $p$, $\tau$; we have already seen this in Lestrade output above; function sorts also have complex internal structure which supports such dependencies.
The general notation for a function sort is $$(x_1:\tau_1),\ldots,(x_n:\tau_n) \Rightarrow (-,\tau)$$
The variables $x_i$ representing the arguments are dummy variables (they are ``bound" in this expression). Distinct function sort expressions (including ones which might appear as $\tau_i$'s or parts of $\tau_i$'s) have different dummy variables. Each $\tau_i$ is an expression representing the sort of $x_i$, which may be an object or a function sort and is allowed to include $x_j$'s only for $j<i$. The output sort $\tau$ will be an object sort, not a function sort, and may include any of the $x_i$.
A species of notation for a function used in Lestrade output is $$(x_1:\tau_1),\ldots,(x_n:\tau_n) \Rightarrow (y,\tau),$$ where $y$ is an expression for the value of the function which may of course include any or all of the $x_i$'s (and which must be of the object sort $\tau$): the formation rules for such an expression are the same as for function sorts; the function sort expression $(x_1:\tau_1),\ldots,(x_n:\tau_n) \Rightarrow (-,\tau)$ must be well-formed for the expression above to be well-formed.
When a function is declared in Lestrade and explicitly defined, the sort reported for it is actually this notation. The user will always refer to it using its name (the identifier declared with this sort): users do not enter function sort notations or function notations.
The account given here should allow the reader to make a stab at interpreting details of Lestrade responses to user commands which we skated over above.
\section{The care and feeding of declarations: the system of possible worlds or ``moves"}
We have to give an account of the declaration environments of Lestrade. We'll do this in the simplest way (in which all declared environments are anonymous and in a sense ephemeral: we will look at the consequences of allowing environments to be named and saved in an appended subsection).
In the simplest model of what we are doing, the Lestrade user is working in a finite sequence of environments indexed by natural numbers, called
``move 0", ``move 1",\ldots,``move $i$", ``move $i+1$". Move $i$ is called ``the last move" and move $i+1$ is called ``the next move" (elsewhere sometimes ``the current move"). There are always at least two moves, so all four explicitly given items are present, though they may not all be distinct. Each move contains an ordered list of declarations of identifiers as representing entities of given sorts. The sort of an identifier declared at a given move will not mention identifiers declared at moves of higher index or declared at the same move but later in the list of declarations. Entities declared at the last move or earlier moves are to be thought of as constant;
entities declared at the next move are to be thought of as variable.
By a fresh identifier we mean an identifier not declared at the moment in any move. It will never be the case that the same identifier is declared more than once.\footnote{In this simple model: saved moves make this possible, but Lestrade manages it with a minimum of fuss.}
The {\tt declare} command takes a fresh identifier and an object or function sort as its two arguments (in that order) and declares the identifier as a variable
of the given sort in the next move (placed last in the order on the move). Object sorts are represented by {\tt prop}, {\tt obj}, {\tt type}, or by {\tt that} $p$ when $p$ is of sort {\tt prop}, or {\tt in} $\tau$ when $\tau$ is of sort {\tt type}. Function sorts are represented by terms $[x_1,\ldots,x_n \Rightarrow \tau]$
where the $x_i$'s are variables declared in the next move and $\tau$ is an object term.
The {\tt postulate} command takes a fresh identifier followed by zero or more arguments (variables declared previously at the next move, appearing in the order in which their declarations appear in the next move), followed
by an object sort [optionally separated from the previous arguments by a colon ``{\tt :}"; this is sometimes mandatory for the sake of the parser]. If there are zero arguments, the identifier is declared as being of the given sort, but at (the end of) the last move rather than the next move. This can be thought of as declaring a constant (relatively speaking, as we will see). If there are arguments $x_i$ of sorts $\tau_i$
and the output sort is $\tau$, the identifier is declared at the last move (not at the next move!), appearing finally in the order on the last move, as a function of sort
$$(x_1:\tau_1),\ldots,(x_n:\tau_n) \Rightarrow (-,\tau)$$ (with the refinement that the names of the parameters, since they become bound, are systematically changed).
The {\tt postulate} command can be thought of as declaring axioms and primitive notions, when it is used when $i=0$. At higher indexed moves, what it is doing is subtler, but will become evident with experience: we will see that in combination with the {\tt open} and {\tt close} commands it allows
declaration of function variables.
The {\tt define} command is a sister command of the {\tt postulate} command: the keyword is followed by an identifier, then by zero or more arguments, variables $x_i$ of sorts $\tau_i$ appearing in the same order in which they were declared, then by a Lestrade expression $y$ of an object sort [always separated from the previous arguments by a colon {\tt :}]. The identifier is defined at the last move (not the next move), and finally in the order on the last move, as $$(x_1:\tau_1),\ldots,(x_n:\tau_n) \Rightarrow (y,\tau),$$ as long as sort checking reports that this is possible [in the case where there are no arguments, it is just defined as $y$]. Identifiers declared in this way are not eligible to serve as arguments of functions (they are not variables).
The {\tt open} command introduces a new move with the index $i+2$: as it were, the parameter $i$ is incremented, so that the old ``next move", move $i+1$,
becomes the new ``last move", and the new move $i+2$ is the new next move. We call this action ``opening move $i+2$".
The {\tt close} command erases all information in move $i+1$ and decrements the parameter $i$, if $i>0$; it is not possible to close move 1.
The old ``last move" move $i$ becomes the new next move, and move $i-1$ becomes the new last move. We call this action ``closing move $i+1$".
The {\tt clearcurrent} command removes all declarations from move $i+1$ but does not decrement the counter: at the end of this action,
move $i$ is unchanged and move $i+1$ is empty. This amounts to clearing accumulated variable declarations; it is needed because there is no other way
to remove declarations from move 1. It will be a while before we see uses of this command: over a large initial segment of the document, we will suppose that the program remembers all previous move 1 declarations (which can cause the namespace to get rather cluttered!).
There are devices whereby moves can be saved and then reopened after being closed, which lead to some complexities, but these can be ignored for the present.
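The behavior of {\tt declare}, {\tt postulate}, {\tt open}, {\tt close}, and {\tt clearcurrent} described above can be sketched as operations on a stack of declaration lists. Again this is our own illustrative Python, ignoring saved moves and all sort checking:

```python
# Illustrative model (ours) of the system of moves as a stack of
# declaration lists. moves[k] holds the declarations of move k; the
# next move is the last entry, and the last move is the one before it.

class Moves:
    def __init__(self):
        self.moves = [[], []]                 # moves 0 and 1 always exist

    def declare(self, name, sort):
        self.moves[-1].append((name, sort))   # variables go to the next move

    def postulate(self, name, sort):
        self.moves[-2].append((name, sort))   # constants go to the last move

    def open(self):
        self.moves.append([])                 # a new, empty next move

    def close(self):
        if len(self.moves) <= 2:
            raise ValueError("cannot close move 1")
        self.moves.pop()                      # erase the next move

    def clearcurrent(self):
        self.moves[-1] = []                   # empty the next move, keep the count

m = Moves()
m.open()                                      # move 2 is now the next move
m.declare("aaa", "that A")                    # a variable in move 2
m.postulate("ded", "[that A => that B]")      # lands in move 1, the last move
m.close()                                     # aaa is gone; ded survives as a variable
print(m.moves[1])  # [('ded', '[that A => that B]')]
```

The closing lines mirror how the function variable {\tt ded} was created for {\tt Deduction} earlier.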
It may seem that we cannot create a function variable (recall that we said above that functions can have functions as arguments), but we can, and in fact we have already illustrated this in an example above. One creates a function variable in move $i+1$ by opening move $i+2$, declaring the desired variable parameters, postulating a function of the desired type in move $i+1$ in its then role as the last move, and then closing move $i+2$, whereupon the constructed function is now a variable. We did this (and the reader may now review the example to see that it conforms with our account) in postulating {\tt Deduction} above, which needed the function parameter {\tt ded}.
Functions found in the next move which were introduced by the {\tt define} command when there were more moves do not become variables: they are as it were ``variable expressions", and a distinctive point about these is that where they are used in the final argument of a {\tt define} command they must be expanded out (as {\tt proptest2} was in an example above), since a defined identifier at move $i+1$ cannot appear in a declaration at move $i$. Where a defined operator declared at the last move is used in applied position, its application is carried out (suitable substitutions are made) as in the example above; where it appears as an argument it is replaced by its anonymous formal notation.
A further point about declarations of functions which must be noted, though its details are nasty, is the permission we give ourselves to omit some arguments of a function under certain circumstances. In fact, any non-defined identifier declared at the next move appearing in the sort of a variable appearing as an argument of a function must itself be an earlier argument of that function: the input/output mechanism of Lestrade itself allows us to hide this, omitting such arguments when their presence can be deduced. If we did not do this, we would have many arguments in argument lists which ``felt" redundant, like $A$ and $B$ as arguments of {\tt Andproof0} (it being evident from the sorts of {\tt aa} and {\tt bb} what $A$ and $B$ must be).
We make a philosophical remark at this point. The currently popular view of the nature of functions is that they are as it were actually infinite tables containing all their values. We resist this. We regard a function as determined by a specification of how a value is to be obtained (or, in the case of a primitive notion, simply {\em that\/} a value of given sort can be obtained) from {\em any given} sequence of inputs of appropriate sorts which may happen to be presented now or in the future, rather than by all possible such sequences given all at once. The arbitrary objects used as inputs in a function definition can each be viewed as a single object drawn from a ``possible world" (``the next move") accessible from the world which is our current standing point (``the last move"). Another metaphor which might be helpful is that objects at the next move are things to be chosen in the future; we do not know anything about them except what is given in their sort. When we declare a function as a primitive, we declare that there is a construction principle which for
any given inputs of the given types will give an output of the declared type: we do not presume that we have given such outputs for all possible inputs (such outputs are produced on demand when we apply the constructed function to specific inputs). In this way we preserve the possibility of the view that all infinities are potential, never completely realized. Nonetheless, the mathematical consequences of the particular Lestrade theory we present are fully classical.
\subsection{Namespace management refined: saving and retrieving environments}
With the limited environment handling given above, there is no way to remove or revise declarations of variables and variable expressions in move 1 other than clearing all of them. After a while, it is quite hard to remember what sorts have been assigned to parameters and variable expressions, and for that matter what order they appear in (recalling that parameters in {\tt postulate} and {\tt define} commands must appear in order of declaration). We have already noted
that the {\tt clearcurrent} command will clear all declarations at the next move.
More intelligent namespace management is supported by the full specification of the {\tt open}, {\tt clearcurrent}, and {\tt save} commands.
Each move is assigned a name. The default name is its numeral index (the $j$ such that it is move $j$). The command {\tt save envname} will save the next move under the name
{\tt envname}, associated with the list of names of the preceding moves at the time it is saved (a saved move is actually identified internally by the sequence of names of all moves at the time it is saved; this means that moves saved in different contexts can quite safely be tagged with the same name). The command {\tt open envname} will open an already existing move (of the right index, with the same preceding moves) with the name
{\tt envname}, or create a new blank move with that name if there is no such move. The command {\tt clearcurrent envname} will clear the next move and replace it with a move named {\tt envname} if there is such a move with the appropriate preceding move names associated with it, or replace it with a blank move of that name otherwise. A move cannot be saved or opened with its default numeral name: the reason for this is that we do not want the
parameterless {\tt open} or {\tt clearcurrent} command to unexpectedly invoke declarations from a saved environment. For the same reason, no move other than move 0 in the sequence of moves associated with a named move created by an application of {\tt save}, {\tt open}, or {\tt clearcurrent} may have its default numeral name.
If identifiers have been declared in earlier moves which conflict with those in a reopened saved move, Lestrade will interpret the identifier when entered by the user as referring to the instance from the latest move in which it appears, but will handle instances of earlier uses of the same identifier correctly in declarations from earlier moves (and will display them unhelpfully without additional annotation, at present).
The effect of all of this is that instead of having a linear sequence of moves, which we can think of as times or possible worlds, we have a tree structure.\footnote{which can still be thought of using a temporal metaphor as working out the consequences of different choices on alternative timelines, as it were.}
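The bookkeeping just described can be sketched in a few lines of Python (our own illustration; the names {\tt save} and {\tt open\_named} are assumptions for exposition, not Lestrade commands). Each saved move is keyed by the full sequence of move names in force when it was saved, which is exactly why moves saved in different contexts can safely share a name:

```python
# Toy model of saved environments, keyed by the sequence of move names.
saved = {}

def save(path, name, contents):
    """Save the next move under `name`, keyed by the names of preceding moves."""
    saved[tuple(path) + (name,)] = dict(contents)

def open_named(path, name):
    """Reopen a saved move with matching preceding names, else a blank move."""
    return dict(saved.get(tuple(path) + (name,), {}))
```

A move saved under the preceding names {\tt ("0", "1")} is invisible when the same name is opened under a different sequence of preceding names; a blank move is produced instead.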
\section{A proof as an example: $A \wedge B \rightarrow B \wedge A$.}
We give the proof of a simple theorem of propositional logic, then present the proof in the form of Lestrade declarations.
\begin{description}
\item[Theorem:] $A \wedge B \rightarrow B \wedge A$
Assume $A \wedge B$ for the sake of argument: our goal is to show that $B \wedge A$ follows.
$B$ follows from $A \wedge B$ by simplification. $A$ follows from $A \wedge B$ by simplification.
The local conclusion $B \wedge A$ follows by conjunction from $B$ and $A$.
By deduction, we can conclude $A \wedge B \rightarrow B \wedge A$.
\end{description}
\begin{verbatim}
begin Lestrade execution
>>> open
{move 2}
>>> declare yy that A & B
yy : that A & B
{move 2}
>>> define zz yy : Simplification1 \
yy
zz : [(yy_1 : that A & B) => (---
: that A)]
{move 1}
>>> define ww yy : Simplification2 \
yy
ww : [(yy_1 : that A & B) => (---
: that B)]
{move 1}
>>> define uu yy : Andproof (ww yy, zz \
yy)
uu : [(yy_1 : that A & B) => (---
: that B & A)]
{move 1}
>>> close
{move 1}
>>> define Andconj A B : Deduction uu
Andconj : [(A_1 : prop), (B_1 : prop) =>
({def} Deduction ([(yy_2 : that
A_1 & B_1) =>
({def} Simplification2 (yy_2) Andproof
Simplification1 (yy_2) : that
B_1 & A_1)]) : that (A_1 & B_1) ->
B_1 & A_1)]
Andconj : [(A_1 : prop), (B_1 : prop) =>
(--- : that (A_1 & B_1) -> B_1 & A_1)]
{move 0}
end Lestrade execution
\end{verbatim}
The Lestrade declarations given above embody this proof. One very subtle point is that the functions {\tt zz} and {\tt ww} are
distinct from {\tt Simplification1} and {\tt Simplification2}, because the latter functions take additional arguments which are not visible.
A point to note is that the argument under the hypothesis $A \wedge B$, assumed for the sake of argument, corresponds to the introduction of a new environment by the {\tt open} command in which the variable {\tt yy} of sort {\tt that} $(A \& B)$ is declared.
\section{The Lestrade user input language}
We discuss practical details of entering mathematical expressions in the language of Lestrade. This section concentrates on what users can enter at the keyboard.
Lestrade identifiers are the first detail of the syntax. An identifier is a string of characters of positive length, consisting of zero or one capital letters,
followed by zero or more lower case letters, followed by zero or more numerals.
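This grammar of identifiers is easily captured by a regular expression; the following Python sketch (our own illustration, not part of Lestrade) checks a string against the description just given:

```python
import re

# An identifier: zero or one capital letters, then zero or more lower-case
# letters, then zero or more numerals, with positive total length.
IDENT = re.compile(r'[A-Z]?[a-z]*[0-9]*')

def is_identifier(s):
    return len(s) > 0 and IDENT.fullmatch(s) is not None
```

Thus {\tt Andproof0}, {\tt yy}, and {\tt A} are identifiers, while the empty string, {\tt AB} (two capitals), and {\tt a1b} (a letter after a numeral) are not.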
A Lestrade object expression is either an identifier declared of an object sort, or an application expression $f(t_1,\ldots,t_n)$ where $f$ is
an identifier declared as of function type with $n$ arguments, and $t_1,\ldots,t_n$ are expressions of the correct sorts (some may be function expressions).
A mixfix expression $(t_1 \,f \,t_2, \ldots, t_n)$ is well-formed under the same conditions and has the same referent.
The parentheses and commas in these expressions may be omitted under some circumstances. All infix and mixfix operators have the same precedence and group to the right (in the absence of restrictive punctuation they will take as many arguments as they can). A function symbol used as an argument must be followed by a comma or parenthesis to prevent it from attempting to take the next expression as an argument. A parenthesis following a function symbol will always be taken as opening an argument list (so if one wants to enclose the first argument in parentheses one must also enclose the entire argument list in parentheses). A function symbol representing a function taking more than one argument
must be preceded by a comma when it might otherwise take a preceding object expression as a first argument [when a mixfix expression is being read]. A function symbol appearing as the first argument of a mixfix expression must be enclosed in parentheses to prevent the function symbol from trying to eat the mixfix expression.
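The right-grouping rule for operators of equal precedence can be illustrated by a toy parser in Python (our own sketch, emphatically not Lestrade's parser; it handles only binary infix operators on atomic operands):

```python
# Toy right-grouping parser: every infix operator takes as much as it can
# to its right, since all operators have the same precedence.
def parse(tokens, ops=frozenset({'&', '->', '=='})):
    head, rest = tokens[0], tokens[1:]
    if rest and rest[0] in ops:
        # The operator consumes the whole remainder as its second argument.
        return (rest[0], head, parse(rest[1:], ops))
    return head
```

So the token sequence {\tt A \& B -> C} is read as {\tt A \& (B -> C)}: the operator {\tt \&} takes everything to its right as its second argument.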
Function expressions include identifiers declared as of function type, and expressions $f(t_1,\ldots,t_m)$ where $m<n$, the number of arguments taken by $f$. Such expressions are understood as functions of $(x_{m+1},\ldots,x_n)$, and may only appear as arguments, not function or mixfix symbols. The parentheses around the argument list in such a function expression are mandatory. In addition, there are $\lambda$-terms of quite general form, $[x_1,\ldots,x_n \Rightarrow T]$ where the $x_i$'s are variables declared at the next move and $T$ is an object term.
An additional important punctuation device is the use of a colon {\tt :} to separate the final argument of a {\tt postulate} or {\tt define} command from the preceding arguments. The colon is optional in the {\tt postulate} command (in earlier versions it was sometimes needed if the final preceding argument was a function identifier; it is now (we believe) always optional); it is mandatory in the {\tt define} command. (The colon is neither needed nor allowed in the {\tt declare} command).
Lestrade output will use infix form for functions of two arguments where the first argument is not of function type. Lestrade output will never use mixfix notation for functions of more than two arguments.
In general, problems with parsing of input notation can be solved by explicitly writing more parentheses and commas. In Lestrade output, all parentheses and commas are shown.
\section{We begin considering ontology: equality primitives introduced. The biconditional as equality on propositions. Identification of proofs of the same proposition.}
We are by no means through with logic, but we will begin to consider the treatment of objects. In this section we introduce the notion of equality and its basic primitives. Equality is defined for typed mathematical objects: related notions applying to propositions and their proofs are discussed, and defining equality for untyped mathematical objects of sort {\tt obj} is straightforward but not discussed here.
\begin{verbatim}
begin Lestrade execution
>>> declare T type
T : type
{move 1}
>>> declare t in T
t : in T
{move 1}
>>> declare u in T
u : in T