Dana Scott's Stochastic Lambda-Calculus Lecture Transcribed by Assembly AI
Transcript Starts
SPEAKER A
I'd like to say a couple of things, if I may. I mean, he's received innumerable awards and distinctions, the Turing Award, the Rolf Schock Prize and many other things. But over and above that, if we think of the whole field of logic in computer science, it's pretty hard to imagine it as the thriving field of research that it is today without Dana's seminal and fundamental contribution, in particular to the logic and semantics area in his work on domain theory and denotational semantics. Beyond those technical contributions, Dana has connected the whole field with the great traditions of mathematical logic, to which he has also made, of course, huge contributions. And his vision of the subject, I think, has been a source of wisdom and guidance for all of us over the years. So I think it's very fitting that he should give the opening lecture of this boot camp and indeed of this program. So, Dana, please. Thank you.
SPEAKER B
Thank you, Samson, very much. And it's great to be able to greet all of you here. This has been a long time in planning, and so it's wonderful that it's actually taking place now. I must say that Berkeley is incredibly lucky to have the Simons Institute, and the Simons Institute is incredibly lucky to have this building. I was on the faculty here when it was originally built for Calvin, the Nobel Prize winner. He wanted a round building. You can read about it just outside the door here. He wanted a round building to make sure that collaboration took place. Of course, it was filled with laboratories for a long time. Now the building is filled with theory, but I hope it makes for collaboration. I'm a little bit alarmed by the glass offices up there. I'm thinking I'm going to bring a lot of crepe paper to cover the window so no one can see how much time I waste looking at political news every day. Just while I was waiting here, I looked on my iPhone at political news, and I want to tell you that Zuckerberg, Mark Zuckerberg, has visited the Vatican and had an audience with the Pope. He wore a suit and tie, which was very good. He gave the Pope a drone. I don't know what the Pope is going to do with that. I guess he could bless whole cities at the same time with a drone. Only someone from Silicon Valley would think of something like that. So I want to bring up my slides here. One thing I suggested to Prakash this morning is it might be possible to establish a Dropbox site for this semester, because there are lots of papers and things. I mean, we're covering a very wide range of subjects here. And I think one of the good outcomes of this could be that you all go home and teach a new course. But in order to do that, you need to have a lot of background material. So I think if we establish a Dropbox site to put key papers in, and maybe you can put comments on why you think the papers are important, then that will be a great thing to be able to use after you go home after this session. Many of you have heard the talk I'm about to give before, but that's okay, because what I'm emphasizing is the basis of it, and I'll explain why I want to emphasize that. Let's see if I can use the remote here. There we go. I gave this talk at CUNY in the summer. I was hoping to see Raymond Smullyan, an old, old friend, but his health is not too good at the moment. And I use as a theme his puzzle book To Mock a Mockingbird, where he pretends that there's this forest of birds that speak in combinators. Their bird calls are combinators. And then when you combine the combinators together, you get other interesting bird calls. And so I made this terrible joke that maybe the birds are speaking a kind of pidgin version of Curry's language there. It's not true, of course. Raymond Smullyan is absolutely formal in all these things. The rules he uses are exactly Curry's rules. But he does invent amusing stories as to what they're doing. So that might be a teaching aid, if you like that sort of thing. And I also collected some other quotations. One thing appropriate for the topic I'm emphasizing today is that Turing wanted to introduce random elements. I'm sure he thought in terms more of a physical device that would produce random numbers, and we use pseudo-random number generators now. But I'm trying to emphasize today that there is a theory where you can just call on ordinary probability theory to introduce random elements in a way that's very familiar from other areas of probability theory.
I also remember the late Alan Perlis. He loved epigrams about programming. They're fun to read. You can find them on the net. Some of them are a bit out of date now. He didn't like Turing machines very well, and so he had that sarcastic comment there. And he was very pleased with the pun he made about me because, of course, I studied with Alonzo Church, and then for the domain theory I introduced a lot of lattice stuff that he thought was much too theoretical. And so he put that in. I noticed when his department put up their version of his epigrams, they left that out, because they didn't understand the joke and they thought someone might sue them, especially the Mormons might sue them. So they left that out. So we can remind ourselves about Church's lambda calculus. And here's the standard notation that I'm using: lambda abstraction, and then you have some formula in here with a variable there. And alpha conversion means alphabetic conversion there — Curry's name for it. It means it doesn't matter which bound variable you use. But of course, you have to be careful about free and bound variables. And for computer science applications there's been a big development, and nominal logics for other ways of handling bound variables would be interesting to discuss. But I'm going to be very informal today about doing it. Now, the key thing about lambda calculus is, of course, if you form a lambda abstraction here, that's supposed to be the name of the function that does whatever you said it was supposed to be doing; then if you apply that function to an argument, you'd better get the answer you were supposed to get out of it. And so you assume that axiomatically; again, beta conversion is Curry's name for it. And the reason that it's called beta conversion is that beta is the second letter of the Greek alphabet. There is no metaphysical explanation for beta. For lambda, I once asked John Addison, Church's son-in-law: do you know why Church used lambda for lambda abstraction? He said, no, I don't. I'll write to my father-in-law and ask him. So he wrote a postcard: "Dear Professor Church" — John always referred to his father-in-law as Professor Church — "Russell had the iota operator and Hilbert had the epsilon operator. Why did you choose lambda for your operator?" There are various stories floating around about how it came about. Church just annotated the postcard, put it in an envelope, and sent it back. And his complete annotation was "eeny, meeny, miny, moe." There is no metaphysical significance to the choice of lambda. Some people think it's a circumflex that got transformed there, but I don't believe that's the story. I don't use the eta conversion here. Eta here, of course, stands for extensionality, and they think of this as extensionality. It's too strong: it says everything is a function. It's a little bit strong, and I don't assume that axiom here. Now, here's a model using enumeration operators. So in order to build up this model, you have the following ingredients. You have the integers, and we have a small amount of Gödel numbering on the integers. So every positive integer is uniquely factorable into an even part and an odd part there. And so that means every positive integer can stand for an ordered pair of the two components there. Of course, this is a rather wasteful Gödel numbering because it's exponential, and you can give polynomial ones. But it doesn't matter for this discussion here.
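A minimal Python sketch of that pairing, assuming the usual reading that the "even part" is the power of two, so the pair (a, b) is coded as 2^a · (2b + 1); the exact convention on the slides may differ.

```python
def pair(a: int, b: int) -> int:
    """Code the ordered pair (a, b) as the positive integer 2**a * (2*b + 1)."""
    return (2 ** a) * (2 * b + 1)

def unpair(n: int) -> tuple:
    """Recover (a, b): a is the exponent of 2, b comes from the odd part."""
    assert n > 0
    a = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    return a, (n - 1) // 2

# Every positive integer codes exactly one ordered pair.
assert all(unpair(pair(a, b)) == (a, b) for a in range(20) for b in range(20))
```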
We just have some Gödel numbering for ordered pairs, and then, by iterating ordered pairs — and this is even more wasteful here because it'll be exponentials of exponentials — you can find Gödel numbers for finite sequences. Gödel used prime factorization, but this is a very cheap and quick way to say, oh yes, there must be some Gödel numbering for finite sequences. And I use the finite sequences also to number finite sets. Now, these are always non-zero things, except for the empty sequence. But the empty sequence, or the number zero, can also stand for the empty set. And if you do this cons operation here, of course what this stands for is the set that you have in the first part here with one extra element on the end. So those are very simple-minded kinds of Gödel numberings. And then I use the notation of the Kleene star. You can think of X as an alphabet here, and so you ask yourself: what are the Gödel numbers of all the finite sequences that you can form, selecting elements from the set X? And so that's the meaning of this. Now, here is the definition in terms of enumeration operators. And of course, if you have some other Gödel numbering there, you can use the same kind of definition. But this Gödel numbering is especially convenient. It's important here that the Gödel number of a compound thing is strictly greater than the Gödel numbers of the pieces of the structure there. So the idea of an enumeration operator is you start enumerating X, and then look at all the finite subsets of X and note the Gödel numbers of the finite subsets, and then you start enumerating F. I call F a lookup table here. Whenever you match a finite set to the first component of something that you found in F, then finding that as a lookup in F gives you leave to output m as one element of the set that's being enumerated. So as you enumerate X and as you enumerate F — these are just sets of integers — you're enumerating what I'm calling here F of X. Now, you can just turn that definition around the other way there. The opposite thing is lambda abstraction: if you have a nice formula that should be pretty well behaved — and I'll explain what well behaved means in a minute here — then you stick in finite sets via their Gödel numbers to get a corresponding transformed set here according to the formula, whatever it is. And then you see what elements you would get as the value of that set there. And then you put that in the lookup table here, and, I'll tell you in a minute, I throw in the number zero, which is not an ordered pair, for a simple reason, for nice laws there.
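A finite sketch of the application F(X) just described. To keep it short, the lookup table F is represented directly as a set of (finite set, output) pairs rather than through their Gödel numbers; the content is the same.

```python
def apply_op(F: set, X: set) -> set:
    """F(X) = { m : some (E, m) in F with E a finite subset of X }."""
    return {m for (E, m) in F if E <= X}

# Example lookup table: "whenever you have seen {n}, you may output n + 1".
F = {(frozenset({n}), n + 1) for n in range(100)}
assert apply_op(F, {3, 7}) == {4, 8}
```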
So why does this model work as a model for the lambda calculus? First we'll recall that the power set of N is a topological space. It's a topological space in two familiar ways. If you think of it as two to the N, then you give it the product topology there and you get the Cantor space, a Hausdorff space. I'm using the weak topology here of positive information, where I only use, so to speak, half of the Cantor space topology there. You could think of it this way: sometimes you introduce the Sierpiński space, which is a two-element space that has one open point and one closed point. If you think of two as a discrete space, both points are open and closed at the same time, but the Sierpiński space is weaker. And so the topology that corresponds to that product topology is to take, as a basis for the open sets, those families of sets — because we're thinking of the points of the space as sets here — which have a certain finite amount of information to them. And so that makes a very nice topology there. I knew Professor Sierpiński; I met him several times in Warsaw and other places. It's very sad to have a two-point space named after you, it seems to me. Now, in terms of this topology, it's very easy to explain continuity here. It just says that a finite amount of information about the value of the function is already determined by a finite amount of information about the arguments. And you can make that entirely analogous to the epsilon-delta definition of continuity over the real numbers there. A finite approximation to the value you get will remain stable under a finite approximation to the arguments. And this topology is even easier than the real number topology there, because it's easy to combine continuous functions in a nice way. And now we can see immediately that, as a binary operation, this is continuous, because a finite subset of this is already determined by a finite subset of X and a finite subset of F. And so we immediately have lots of continuous functions — the binary operation for the lambda calculus model there. Also, it's very easy to see from the definition that if you have a continuous function of several variables and you apply lambda abstraction to it, then the result is continuous in this, because a finite subset of this will be determined by a finite subset here, and it will already be determined by a finite amount of information about the other arguments there. And the way I set it up here — because I put into the definition the taking of all possible ways of checking this out, and I threw in zero to make sure that every number is paid attention to — the meaning of lambda x phi of x, where phi is a continuous function, is the largest lookup table that gives you the right values when you use the binary operation there. And so you can see, regarding the eta law, that this is the largest lookup table that behaves the same way that this lookup table does. You can have very succinct lookup tables that give you a function there. But the way lambda is defined in this model, this is the largest table that will give you the right answers there. Once you see the role of continuity here, it's very easy to see that this model satisfies the Church–Curry axioms for lambda calculus.
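A small sketch of lambda abstraction as the collection of all (finite set, output) pairs validated by a continuous operator phi, restricted here to subsets of a small universe so that it terminates; it only illustrates the beta behavior on that finite fragment, not the full "largest lookup table".

```python
from itertools import combinations

def apply_op(F, X):
    return {m for (E, m) in F if E <= X}

def graph(phi, universe=range(8)):
    """All pairs (E, m) with E a finite subset of `universe` and m in phi(E):
    the (restricted) lookup table for phi."""
    table = set()
    for r in range(len(universe) + 1):
        for E in map(frozenset, combinations(universe, r)):
            for m in phi(E):
                table.add((E, m))
    return table

# Beta law on a small continuous operator: phi(X) = {x + 1 for x in X}.
phi = lambda X: {x + 1 for x in X}
assert apply_op(graph(phi), {2, 5}) == phi({2, 5})
```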
Now, every once in a while I wake up in the middle of the night at about 4:00, and I kick myself in the butt, because when I was a graduate student in Princeton under Church, I could have given this definition, and then I would be rich and famous. I don't care so much about rich, because all of this stuff is very simple-minded from the point of view of mathematics: a little bit of logic, a little bit of Gödel numbering, a little bit of topology there. And it was something I could have explained then. Even more embarrassing is that both Myhill and Shepherdson and Hartley Rogers had thought of what corresponds to enumeration operators. So Myhill and Shepherdson — I knew both of them very well — thought about defining partial functionals on partial functions. You remember Kleene's big advance: Gödel only used total functions. But Kleene realized that, since the recursion needn't complete its answer there, it's much better to use partial functions in defining recursive function theory. And so Myhill and Shepherdson said, good, let's think about operators on those, functionals on those. And in their paper, they say, wow — no, they didn't exactly say wow, but in effect they say, look at all the cool — no, they didn't say cool — look at all the operators we can define. They defined lambda abstraction for partial functionals so that you could get lots of partial functionals. On the other hand, with his student Friedberg, who solved Post's problem, Rogers, in his lectures at MIT, was trying to explain degree theory. And he thought: a Turing calculation has to use both positive and negative information about the input for assessing degrees. And so Rogers thought that just using positive information about it would give an easier theory of degrees. And so they introduced the idea of enumeration operators. And the definition I've given for F of X is, word for word, practically what's in Hartley Rogers' book, except for the choice of Gödel numbering there. Here's a strange thing. Both of them had good definitions. Neither group said, well, what's the algebra of these operators? They could have discovered lambda calculus just from the work that they had done there, but they didn't think in those terms. So that's why I kick myself every once in a while. Now, here are some easy properties that you can get. Instead of the eta rule of extensionality, we have a better kind of extensionality here. It says one lookup table is contained in another lookup table if and only if, pointwise, the first function is less than or equal to the second function. And in fact, this lambda abstraction preserves meet and join — union and intersection here. It's a lattice-theoretic morphism, because the function space is not exactly the same as the power set. The function space is a retract of the power set, and it has nice lattice-theoretic properties there. And it's very easy to prove a number of them from these definitions. And of course, from the point of view of computability, we can now give a definition of a computable continuous operator on the power set of the integers there. Because when you lambda abstract on all the variables, you make the full lookup table that gives you the total structure of the continuous function on all of its variables. If that lookup table is recursively enumerable, then that's a good notion of a computable operator, and we can do more computations there if we think of how to get recursive definitions. You can check quite easily from the definitions that lambda terms always define computable operators. If you just take pure lambda terms that have lambda and application as the only components, then they will be computable. But even better, you can do a recursion by getting a fixed point of a continuous operator. It's quite easy, lattice-theoretically, to show that every monotone operator has a least fixed point, and every continuous operator has an even more economical fixed point. But in terms of the model, Curry's paradoxical combinator here, by using self-application, gives you a formula in the lambda calculus for the least fixed point, and it's quite easy to prove that that is a fixed point, of course, just from the rules of lambda calculus.
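The least-fixed-point property itself can be sketched by Kleene iteration from the empty set; this shows the lattice-theoretic fact for continuous operators, not Curry's paradoxical combinator as a lambda term. The bounded example is hypothetical.

```python
def least_fixed_point(Phi, max_iter=1000):
    """Iterate empty set, Phi(empty), Phi(Phi(empty)), ...; for a continuous
    operator whose chain stabilizes, the result is the least fixed point."""
    X = frozenset()
    for _ in range(max_iter):
        Y = frozenset(Phi(X))
        if Y == X:
            return X
        X = Y
    return X  # best finite approximation reached within the bound

# The even numbers below 100 as a least fixed point.
evens = least_fixed_point(lambda X: {0} | {x + 2 for x in X if x < 98})
assert evens == frozenset(range(0, 100, 2))
```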
But because I set up the Gödel numbering to be a well-founded numbering — namely, the Gödel number of a finite structure is always larger than the Gödel numbers of its parts — that well-foundedness allows you to give a proof that this is in fact the least fixed point operator. And of course it follows that fixed points of computable operators are also computable. And we can push this a little bit further here, seeing how computable things arise, by just taking the arithmetic combinators on the power set here. We just promote successor, predecessor, and test for zero up one level here to the sets of integers by those obvious definitions, and then lambda calculus together with the arithmetic combinators gives you a notation, a functional programming language, that will give you every computable continuous function. And we can even do it by setting up Gödel numbers for the combinations here. And this is just a sketch here: you can control continuous functions so that they're really determined by what they do to singletons, and then the continuous function on a whole set is just the union of all the values on the singletons there, and it's quite easy to set this up with lambda calculus definitions. So Curry showed us that his combinators K and S give you all lambda-definable things. So if you combine Curry's combinators with the arithmetic combinators and just apply them one to the other, over and over again — so we can use Gödel numbers here, and we just keep applying one to the other — that will take in all the possible combinations there. And so you get everything that can be defined recursively, and you can cook up a lambda term with arithmetic combinators that does the recursion to define V there. And moreover, the range of V on the singletons is the whole collection of recursively enumerable sets. This is exactly the analog of the universal Turing machine. What I'm using is Gödel numbers of lambda expressions. And so it's just like the universal Turing machine, except it's using the mechanism of enumeration operators. And you can make finite approximations to this by restricting a set to being only a finite set on a certain initial segment of the integers there. It's quite easy to see from the definitions of application, for example, that a finite operator applied to a finite set has a finite value there, because there are only finitely many combinations you need to look through there. So you stay within the realm of finite sets. But this is a kind of space complexity here. As you increase the size of Gödel numbers that you're allowed to use, you get more results. You don't cancel any results. You just get more results. And in the limit, you get everything that you need out of there. And in fact, checking things on a finite range is recursive. So you get an analysis of the recursion that's needed there. And you can just apply Rosser's trick to give recursively inseparable sets here: you do a self-application, and everything is finite here, and so all of this is recursive. And so you say the zero-lovers are the things that get to the answer zero before they ever get to the answer one, and then the disjoint set there is where you get to the answer one before you get to the answer zero. Those two disjoint sets are easily seen to be recursively enumerable, and easily proved, in the usual way, to be inseparable. I could — in fact, I did — give a course in recursive function theory this way, not using Turing machines.
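One plausible reading, as a sketch, of the arithmetic combinators mentioned above — successor, predecessor, and test for zero promoted up one level to sets of integers; the slides' exact definitions may differ.

```python
def SUC(X: set) -> set:
    """Successor applied elementwise."""
    return {x + 1 for x in X}

def PRED(X: set) -> set:
    """Predecessor applied elementwise (0 has no predecessor)."""
    return {x - 1 for x in X if x > 0}

def IFZERO(X: set, A: set, B: set) -> set:
    """Test for zero, lifted: contribute A when 0 is in X and B when some
    positive number is in X (both may contribute for a non-singleton X)."""
    out = set()
    if 0 in X:
        out |= A
    if any(x > 0 for x in X):
        out |= B
    return out

assert SUC({0, 4}) == {1, 5}
assert IFZERO({0}, {7}, {9}) == {7}
```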
I hope Turing will forgive me, but I thought this was a very clean way of introducing things, and you can introduce many concepts with it. Now, the second part of the talk is: where do probability or random numbers come into the thing? So here's the idea. Dexter Kozen suggested it when he was once thinking of a simple semantics for while programs. Jerome Keisler thought of it in terms of model theory, because many structures can be thought of as being topological. And so, using the topology there, you say that a random variable is just parametrized, say by the unit interval — we take a standard sample space from probability theory. A random variable is just a parametrized family of sets of integers such that, under the topology on the power set of N, the inverse image of open sets is always measurable. That's exactly the definition we give for real-valued random variables: there we use the topological space of the reals; here I'm using the topological space of the power set of the integers. And you can apply that to lots of other structures too, where a closely associated topology is available, like the power set of N there. And so that's a really completely standard definition, and it's quite easy to prove that the random variables form a model for the lambda calculus. For real-valued random variables, sums and products of real-valued random variables are again random variables according to the definition, because sum and product are continuous functions, and you get nice measure-theoretic properties out of the property of continuity there. Of course, there are discontinuous functions that preserve things too, but plus and times are continuous, and so sums and products of real-valued random variables are random variables. Okay, all the lambda things here, because of the continuity principles here — all the lambda things mean that you can take the combinators, apply them to random variables, and the result is a random variable. The random variables are an expansion of the power set to having families of sets. And you think of it as thinking, with some probability, that you want to think of a particular set there — but you don't have to think of a particular set, you just have to think of the whole family there. And so you have expanded the model to random variables. Now, for further thinking about this — and I hope there will be time for some sessions where we can have a reading group or something to go over things there — it's even better if you expand from the classical definition of random variables to having more logic in the thing there, by thinking of having a Boolean-valued logic where the truth values are chosen from the measurable sets — measurable sets modulo sets of measure zero, that's a very fine complete Boolean algebra. And so you do Boolean-valued logic with a measure algebra, and that's the same as introducing random variables. That's easy to explain when the definitions are set out. And now what I want to do is to think about a random algorithm here. Well, that reminds me — it's a very good question. There's been a lot of discussion very recently on the FOM list. Yiannis Moschovakis has been asking for a very long time: what is an algorithm? We give the definition of computability, but that doesn't necessarily tell you what an algorithm is. And so I ask you to go and look at the FOM list about those things there. But I'm going to show you some algorithms that can involve random things here.
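A tiny sketch of that definition: a random variable is just a function from a sample point (here a number in [0,1), standing for the unit-interval sample space) to a set of integers, and applying a continuous operator pointwise yields another random variable. The names here are illustrative only, not from the slides.

```python
import random

def coin_set(t: float) -> set:
    """A random variable: the set {0} with probability 1/2, otherwise {1}."""
    return {0} if t < 0.5 else {1}

def apply_pointwise(op, rv):
    """Applying an operator to a random variable pointwise gives a random variable."""
    return lambda t: op(rv(t))

succ_of_coin = apply_pointwise(lambda X: {x + 1 for x in X}, coin_set)
sample = random.random()
assert succ_of_coin(sample) in ({1}, {2})
```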
Now, a coin toss is just a function — a measurable function — which takes heads and tails there with probability one half each: a fair coin there. And now we introduce on the power set of N some other combinators here, which of course are computable combinators. I just give the direct definitions here, but you could give lambda definitions if you stand on your head a little bit. You can take pairs of sets of integers by taking copies on the evens and the odds. And of course, you can decode a pair that way by looking at what happens on the evens and what happens on the odds there. So a process of flipping coins is not just one fair coin, because if you're doing random choices there, every time you ask for a new choice you should take an independent coin out of your pocket to flip. So you don't use the previous answer you got there over again; you get a fresh random answer out of it. So you have to have independent coins there. And it's an easy exercise in probability theory to show that there are infinite sequences of independent fair coin choices. And we can put them together by pairing there. And we can think of pairs of pairs of pairs of pairs as an infinite sequence of possibilities of coin tossing there. And so I use the pairing functions there to represent that. Now, we can put that into a very simple-minded algorithm language. We just use lambda calculus, with application and lambda abstraction there, with an extra stochastic choice thing here. It means: flip a coin and then do either M or N, depending upon whether you get heads or tails out of it. Now, I'm going to give a semantics for this. I didn't try to do it here, but I should have Gödel-numbered everything and thought of this as a syntactical component here. And so the semantics here is a function on syntactical things — integers, Gödel numbers there — which tells you what the behavior of this program is supposed to be. Now, it depends upon an environment for the free variables, and then I do it in terms of continuation semantics here. In other words, you evaluate a little bit of the program and then pass the result on to the continuation to do the rest of the program. But in the meantime, you have to keep track of your pocket of fair coins there, so that when you're asked to make a stochastic choice, you'll have a coin available out of your pocket to do it. So I haven't had any way to test this. I just wrote down these equations. I think they're pretty likely to be correct, but I would really like to discuss it more definitely here. So I'll only go over a couple of the steps here. If you're evaluating an application, that says: find out the value of the first part there, of the operator that's supposed to be applied, and pass that on to the continuation; and then evaluate the second term there in the environment and pass that on to the continuation. What is the continuation supposed to do? It says: oh, I had to apply one to the other, so go ahead and apply that, and now I'll know what to do for the rest of the program. So that's a standard kind of continuation-passing semantics that I think I wrote down correctly.
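A sketch of the pairing combinators described above — one set codes a pair by putting the first component on the evens and the second on the odds — which, iterated, lets a single set stand for a whole pocket of independent coins.

```python
def pair_sets(X: set, Y: set) -> set:
    """Code (X, Y) as one set of integers: X on the evens, Y on the odds."""
    return {2 * x for x in X} | {2 * y + 1 for y in Y}

def fst(Z: set) -> set:
    return {z // 2 for z in Z if z % 2 == 0}

def snd(Z: set) -> set:
    return {z // 2 for z in Z if z % 2 == 1}

X, Y = {1, 4}, {0, 3}
Z = pair_sets(X, Y)
assert fst(Z) == X and snd(Z) == Y
```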
Now, what about the probabilistic choice here? Well, what you do is you take the first coin and see whether it's heads or tails. If it's heads, you then evaluate the M and pass that value on to the continuation. If it's tails, you evaluate the other part and pass that on to the continuation. This will reduce to one or the other of them, depending on what the first coin does. In the meantime, you throw the first coin away and get ready to use the independent coin tossings in the future. So running a closed program means evaluating the program where, in the beginning, all the variables are just set at the empty set there. Of course, you could have some starting values if you wanted there. And then what you do in passing it to the continuation is you get the final value there, you throw away the coins, and that gives you the answer here. Of course, while you're doing this evaluation, you've kept your pocket of coins available there, but in the very end, of course, they've been used up sufficiently, so you throw them away. So I think that's a correct definition. And I think I could also prove things, especially if I could take more advantage of Boolean-valued logic. The secret is that the measure algebra — measurable sets modulo sets of measure zero — is a very symmetric measure algebra. It has lots of automorphisms that are probability-preserving, measure-preserving automorphisms. Any two events of the same probability are related by an automorphism. So, in other words, the value of the probability is the main thing there for the nature of the event. I believe I can just use standard exercises from probability to show that any two systems of coin tossing are related by an automorphism of the measure algebra. So, in other words, for one pocket of coins we only have to understand the distributions — one half, one half, independently there. Any pocket of coins looks like any other pocket of coins under an automorphism. Therefore, if you use the semantics and get a value — say that a certain program behaves to get a certain answer with probability greater than 0.7 — if that holds for one pocket of coins, it holds for every pocket of coins, by the automorphism argument. I think you could turn that into a proof that says: though I use specific events and specific random variables here to implement this denotationally, that automorphism argument shows that the operational running of a program like this gives the same answers as the denotational one there. But that needs to be worked out, and I hope someone will help me to work it out. So that's my cry for help here. I think the setup is good. It uses only generic ingredients. Generic is good for you, right? A little bit of sets, a little bit of Gödel numbering, a little bit of probability, a little bit of topology. I could teach this all at an undergraduate level, junior-senior level here, but of course proving some things about it might take more effort than you could expect of undergraduates. But I think the setup is all there, so we could investigate that, and I hope we will. I'm going to say one more thing here that's completely different from this talk. But if there are a few quick questions here, I'll take some questions. Yes, please.
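A direct-style sketch, not the continuation-passing formulation on the slides, of how the coin supply is threaded through an evaluator for lambda terms with a stochastic choice; the term constructors and the call-by-value evaluation order here are illustrative assumptions.

```python
import random

# Terms as tagged tuples:
#   ("var", x), ("lam", x, body), ("app", fun, arg), ("choice", m, n)
def evaluate(term, env, coins):
    """Call-by-value evaluation threading a stream of independent fair coins;
    a stochastic choice uses up the first coin and the rest stay available."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v}, coins)
    if tag == "app":
        _, fun, arg = term
        return evaluate(fun, env, coins)(evaluate(arg, env, coins))
    if tag == "choice":
        _, m, n = term
        return evaluate(m if next(coins) else n, env, coins)  # one coin consumed

coins = iter(lambda: random.random() < 0.5, None)  # an endless pocket of fair coins
prog = ("app", ("lam", "x", ("var", "x")),
        ("choice", ("lam", "y", ("var", "y")), ("lam", "z", ("var", "z"))))
value = evaluate(prog, {}, coins)  # the identity applied to a coin-chosen lambda
```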
SPEAKER A
Yes. So in denotational semantics, it's very important that it correspond to some kind of operational semantics that we understand. And I'm wondering what the denotational semantics you're proposing here corresponds to in terms of an operational picture.
SPEAKER B
Okay. I think with probabilistic computations we have a problem here. I'm using standard probability theory. When you do an operational semantics, you use pseudo-random number generators. So what is the connection, the precise connection, between pseudo-randomness and real randomness? There's a basic question there. I think we can argue — because of the structure of the answers, just like if people go back to Rabin and the probabilistic prime testing there — he can say, oh well, the pseudo-random number generator is so different from the algebra that you're using for carrying out the test there that the random number generator wouldn't be biased. And so we can really feel that prime testing is sound. But I don't think that's a rigorous argument. So that's the big problem, I think. Please.
SPEAKER A
I think somewhat related, it seems to me, with the continuation-passing semantics. So this stream of random variables is something like a state which is being accessed. I can see that if it's an IID sequence, then the order, as you were saying, isn't going to play a role. But I'm just wondering, if it wasn't an IID sequence, then what would kind of detect that?
SPEAKER B
Sure, because you can test your random choices there for bias. Yeah, sure. You could set up an algorithm that would say, are these coins really suitable or not? But of course, if we want to do really random things there, then we start with independent fair coins, right? But of course, other algorithms could misbehave if they're allowed to somehow test the coins for bias in some sense, yeah.
SPEAKER A
Another question, if I may. So if one took this Boolean-valued logic perspective that you were describing, do you see this model as sort of internally just a model of the lambda calculus in the topos you get from the Boolean algebra?
SPEAKER B
Yeah, sure. The Boolean-valued logic — we could think in terms of set theory, too, and introduce Boolean-valued sets. And so we're using the Boolean-valued power set with the definitions we have here. Any complete Boolean algebra could give you a logic there, and you'll be able to verify all the axioms of lambda calculus there, because those are theorems that we can prove on the basis of very elementary set theory. It just needs a few properties of the integers — induction — and a little bit of power set, to think in terms of sets of integers there. It's just the beginning of set theory, and of course various toposes. I think we could do the proofs intuitionistically as well. It's only based on positive information there, and negation doesn't come into it. So I think it holds intuitionistically as well, and in intuitionistic toposes. Okay, so just give me a moment here to step down. So this is a personal statement I want to make. My wife and I, who visited with Helmut in Vienna very intensively — two visits we had there — were devastated when we heard about his death. And you see, just at age 45. This is a good picture of Helmut. His hair was always wild, as far as I remember, and he forgot to shave as well. Now, his career was very, very active, and a very special note was that when he got his PhD in Vienna, it was considered so good that the president of the country came to give him his diploma as a special recognition there. Then he had some appointments in Darmstadt and Munich before he had a call to come back to Vienna, where he was incredibly active in doing things. And here are just a few of the awards that his work gained, some of his activities there. Oh yeah, and there's a picture — sometimes Helmut did cut his hair. He was one of the organizers of the Vienna Summer of Logic back in 2014. But he was on all kinds of committees and other things there — program committees, both as chair and on the committees, endless numbers of program committees. And if you look at the website there that they've kept up, it lists over 130 publications. So he was just amazingly active. Now, at the Vienna Summer of Logic, the Austrian magazine Profil, which is sort of the Time magazine of Austria, put out this publicity here that logic is the most important science of the world. I've copied the article for you, if you read German or can get someone who reads German to tell you. I'm going to make a notebook with this information in it, and also a copy of the Profil article there, and you can see if they justify that statement there. And there's that picture. You can see Helmut standing in the picture, and the German caption describes them as the supporters, advocates, representatives of logic. And some of the publicity they gave for themselves for the Vienna Summer of Logic was that it was the largest event in the history of logic. Well, it was a very large, interesting meeting and had a big turnout. It was really quite a success. And Helmut was very important in doing the organization. Here are some photographs, and there are more photographs online. And I also have a book of photographs from there. So those are the two chairs — chair and co-chair of it — with their wives: Matthias Baaz and Helmut, and Anna there. And then they took us to the Kunsthistorisches Museum to get our pictures taken there. And so there's Helmut with Ed Clarke, who was a great mentor and with whom Helmut worked there. And who is that other guy? Who's that other guy in the picture? Anyone recognize him?
SPEAKER A
Sophomore class? Aristotle.
SPEAKER B
Aristotle. Aristotle, yes. They have a famous copy of that statue. So that was great fun there. Oh, I need to quote something from Helmut that I had here in the Profil article. Of course, in writing up things about logic, they use quotations from Logicomix. I'm sure you know Logicomix. And one of the things that's emphasized in Logicomix is how many logicians were verifiably insane. And so they ask about that. But they gave Helmut the last word, the very last paragraph of the article. Helmut said, about some logicians being crazy: oh, that's all ancient history. The single behavioral peculiarity which we logicians show today is our general preference for the TV series The Big Bang Theory. So that was Helmut's joke. Now, I've been in contact with his wife. They were going to be here. In fact, I gave them all kinds of arguments to convince his family to come here — the son is high-school age, and Berkeley High would be okay — and so they were all ready to come here. So Anna decided that they would go away for a semester. So they're living in Princeton at the current time. And I've been in contact with her, and she just told me that the Formal Methods in Computer-Aided Design conference that takes place in October is going to have a session devoted to Helmut and his work. She's coming out for that to meet people. And you can find out about it on the website. One weekend we got notice that three elderly friends had passed away that weekend. Of course, they were elderly, and it was not unexpected. And then the message about Helmut came through. I was sitting in the gloom of my study feeling very, very, very depressed, and a thought occurred to me that I'd like to share with you. I hope I can say it without being overemotional. What I thought was this: life produces such surprises. Death removes such essentials. Thank you. I think we have a break. Can I add something, please?
SPEAKER A
After Helmut passed away, they founded a grant or something in Vienna that is given to students in his name.
SPEAKER B
Oh, yes, I'm going to find out about it. I found they gave a place where donations could be made to a memorial fund. The other thing, too, is that I've prepared a card. I couldn't bring it today, so we can sign it and send it to Anna. But I'll also find out about that scholarship fund, so we can make donations, if we like, to the scholarship fund in his name.