Cabal News and Meeting Recaps

November 21, 2016.
Brought to you by: Scott

Hey all,
Friday night we covered chapter 42 of PFPL on Modularity and Linking, which divided up into several interesting but connected discussions.

Next Friday will be chapter 43 on Singleton Kinds and Subkinding in the usual time and place, 6:30pm at Bento Miso.

Modularity is that thing every developer says their design has, but in practice modular code can be as elusive as a jackalope. In this chapter, Harper boils away the unnecessary concepts, leaving just the bare essentials for modularity: substitution and a type signature for the API. The type signature specifies a clean, type-safe boundary between modules, and substitution is what lets you put the code back in the right place.
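To make the signature-plus-substitution picture concrete, here is a minimal Python sketch (all names here, like `QueueSig` and `drain`, are invented for illustration, not anything from the chapter): the signature is an abstract interface, the client is written against it, and "linking" is just substituting a concrete module for the signature-typed parameter.

```python
from abc import ABC, abstractmethod

# The "signature": an API boundary that says nothing about implementations.
class QueueSig(ABC):
    @abstractmethod
    def empty(self): ...
    @abstractmethod
    def push(self, q, x): ...
    @abstractmethod
    def pop(self, q): ...

# One module satisfying the signature.
class ListQueue(QueueSig):
    def empty(self):
        return []
    def push(self, q, x):
        return q + [x]
    def pop(self, q):
        return q[0], q[1:]

# A client written against the signature alone; "linking" is substituting
# a concrete module for the signature-typed parameter M.
def drain(M: QueueSig, items):
    q = M.empty()
    for x in items:
        q = M.push(q, x)
    out = []
    while q:
        x, q = M.pop(q)
        out.append(x)
    return out

print(drain(ListQueue(), [1, 2, 3]))  # -> [1, 2, 3]
```

Swapping in a different implementation of `QueueSig` relinks `drain` without touching its code, which is the payoff of keeping the boundary in the signature.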

Harper uses the metaphor that APIs can be used a bit like lemmas in a mathematical proof, providing convenient shorthand for an earlier derivation that gets used frequently. This metaphor gets really interesting, though, if you take the Propositions as Types isomorphism into consideration: it means that an API isn't just like a lemma in mathematics, it is a lemma!

Linking is the important implementation glue that lets you take advantage of all this modularity, and Harper outlines a number of different approaches along with some of their strengths and weaknesses. I found the Cardelli paper, [Fragments, Linking, and Modularization](http://lucacardelli.name/Papers/Linking.A4.pdf), to be a good resource for reading more in depth about the linking process.

Until next week,
Scott

November 7, 2016.
Brought to you by: Scott

Hello fellow cs enthusiasts!

Don’t worry! Despite the lack of updates, the PFPL readings have been going apace and we are now at Chapter 41 on Distributed Computing.

Next Friday, Nov 11 at 6:30pm at Bento Miso/Gamma Space we will be revisiting chapter 41 on distributed computation and sharing any projects people are working on.

This week saw us explore distributed computation, not to be confused with parallel or concurrent computation. That is really the big idea here: you can have computation at different distributed locations, but the computations don't necessarily have to be concurrent. I found this really interesting because I'm not sure I've ever seen the distinction emphasized; usually you see parallel, concurrent, and distributed used almost as synonyms.

As an example of a distributed but not concurrent computation, imagine people sitting in a circle who pass around an object to mediate who is speaking: whoever holds the wooden spoon can speak, and everyone else must sit quiet and listen. If we think of each person as a 'site' in the distributed computation, then each site has a single process waiting for the spoon; once it receives the spoon it can say its bit and then pass the spoon along, conveniently leaving no time or processing power for listening, but that's not what was important here anyway.
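As a sketch of that spoon protocol (purely illustrative Python, not anything from the chapter), note that nothing here is concurrent: control simply moves from site to site with the token.

```python
# Each "site" holds a process that waits for the spoon (the token),
# speaks, then passes the spoon along. The computation is distributed
# across sites but never concurrent: exactly one site is active at a time.
def make_site(name, transcript):
    def process(token):
        transcript.append(f"{name} speaks")  # say its bit while holding the spoon
        return token                         # then pass the spoon along
    return process

transcript = []
sites = [make_site(n, transcript) for n in ["Ada", "Barbara", "Grace"]]

token = "wooden spoon"
for site in sites:  # control moves around the ring, one site at a time
    token = site(token)

print(transcript)  # -> ['Ada speaks', 'Barbara speaks', 'Grace speaks']
```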

We still have lots of parts to try and understand; looking forward to discussing it again next week,

Scott

July 26, 2016.
Brought to you by: Scott

Greetings fellow CS Enthusiasts!
Thanks for another wonderful Friday night, filled with some mysterious Cobol, a first appearance of the latest Toliver, and some more mysterious Dynamic Classification from Chapter 33 of PFPL.

Next week at the usual time and place, Friday night at Gamma Space (aka Bento Miso) at 6:30pm, we will continue into the long-awaited Chapter 34 on Modernized Algol. For the keeners among us (and who's kidding whom, that's everyone) there is a Modernized Algol interpreter from Jon Sterling available (Modernized-Algol). With it we should be able to work through more examples than usual, and it may help us get our many-flavours-of-PCF project off the ground.

The chapter on Dynamic Classification proved quite interesting, and the mechanism seems quite useful, especially once we hit on the use case of a server that executes callback functions submitted by users. In that situation, if an exception is raised by the server, it becomes entirely possible that the user code could improperly catch that exception and head off in a different direction. Defining exceptions dynamically makes that type of improper exception handling impossible. That application did lead us into some interesting discussions about what kinds of secrets and confidentiality we could expect to handle this way. Our interpretation was that these are less secrets and more ways of keeping the programmer from messing up, since anyone who steps outside of the system can always access the raw data however they please.
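Python can approximate this, with the caveat that this is only an analogy to PFPL's dynamic classification, not its actual mechanism (the names `fresh_exception` and `server` are invented): minting a fresh exception class at run time means user code has no name to catch it by.

```python
# Minting a brand-new exception class at run time gives the server a
# "dynamically classified" exception: user code has no name to catch it by.
# (Deriving from BaseException is a Python-specific trick so that even a
# greedy `except Exception` in user code cannot swallow it.)
def fresh_exception():
    return type("ServerAbort", (BaseException,), {})

def server():
    Abort = fresh_exception()  # fresh class, private to this server run

    def user_callback():
        try:
            raise Abort("internal")     # server-raised, passing through user code
        except Exception:               # the user's greedy handler...
            return "user swallowed it"  # ...never fires: Abort isn't an Exception

    try:
        return user_callback()
    except Abort:  # only the server can name the class
        return "server caught its own abort"

print(server())  # -> server caught its own abort
```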

Apologies for missing the update last week on Fluid Binding, but it was a busy week. I felt like I learned a lot over the course of the evening, as the whiteboard examples and exceptionally precise yet confusing questions helped us separate the concepts of scope and binding. In particular, Ben's example demonstrating how scoping depends on lexical visibility settled a misunderstanding I had brought with me about how the scope-free dynamics work. Dann summarized the effect nicely: scope-free doesn't mean that a symbol can exist free of a context, but it does mean that a symbol can be exported back into an existing context via a lambda.

Until next time,
Scott

July 10, 2016.
Brought to you by: Scott

Hey everyone, great to see you all! Hope you all had a good wizard nap, er summer break!

Wasting no time, we jumped right back into Chapter 31 "Symbols" of PFPL after a brief discussion concerning the overwhelming success rate of food-based Kickstarter projects. Next week will be Chapter 32 "Fluid Binding" at the usual time and place, Friday at 6:30pm at Gamma Space (Bento Miso).

Symbols provide us with a means for naming things and referencing them without placing any demands on their internal structure. This sounds straightforward, but as we found working through the mathematics in this section, symbols also leave ample room for confusion.

We spent a lot of time working through examples exploring how symbol references work and the typing requirements of the "is" expressions. After that we returned to the start of the chapter and discussed mobility and the two different dynamic ranges of symbols: scoped to an expression, or scope-free. These seem to loosely correspond to the demands of naming variables within a program or function, and of storing data in a permanent store to be accessed later, respectively.
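A rough Python analogy for symbols (my own illustration, not the book's formalism): a freshly allocated `object()` has no internal structure, compares only by identity, and a scope-free one can serve as a key into a persistent store.

```python
# A symbol is a fresh identity with no internal structure. In Python, a
# new object() is a decent stand-in: it carries no data, and two symbols
# are equal only if they are literally the same generation.
def new_symbol():
    return object()

a = new_symbol()
b = new_symbol()
assert a is a and a is not b  # identity is all a symbol has

# A scope-free symbol can outlive the expression that minted it, e.g. as
# a key filed into a persistent store and looked up later:
store = {}
store[a] = "record filed under symbol a"
print(store[a])
```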

We still have some open questions about the consequences of scope-free dynamics and if shadowing of symbols is possible, and are continuing to discuss it in the slack channel.

See you all next time!
Scott

May 23, 2016.
Brought to you by: Scott

Hi everyone!
Lots of cabal activity this week! Tuesday night we had the pleasure of meeting the author of PFPL, Bob Harper himself, as he kindly agreed to join us for dinner and drinks at the Red Room, and on Friday we discussed chapter 28 of PFPL, our first foray into control flow and the analysis of control stacks.

On Friday we will meet as usual to discuss chapter 29 "Exceptions" at 6:30pm at Bento Miso. Also, any and all side projects are welcome, so if you have some interesting things you would like to share, bring them! :)

Bob Harper was in Toronto this week for the Homotopy Type Theory Workshop at the Fields Institute, which Ben and Dann also attended, and so he was available to meet with our small collective of CS enthusiasts as well as some other workshop attendees. Apologies to those who couldn't make it out, you were missed, but hopefully we asked some of the questions you would have. It was a pleasure to meet Bob; he was quite relaxed and friendly and didn't seem to mind that we asked the same question a few times :).

It was really interesting to hear more perspective on some of the topics from PFPL. For instance, he went over his thoughts on the "Boolean blindness" problem from PFPL 25, and explained how languages shouldn't have this concept of a proposition testing for the class of something followed by different clauses; rather, the better way to approach it is to think of them as sum types from the outset, with class selection as a case analysis. He said that when teaching, he has even tried having students use languages where there are no "if" statements at all, only case selection.

Another topic he brought up was the second-order polymorphic lambda calculus and a result of Reynolds, "Polymorphism is not set-theoretic", which he considered one of the most beautiful results in CS. We've actually danced around some of the consequences of this result in chapter 17 when discussing representational independence. I found the Reynolds paper, which you can take a look at here: https://hal.inria.fr/inria-00076261/document .

All for now, thanks Bob for coming to hang out with us, and thanks Dann for orchestrating things!
Scott

May 1, 2016.
Brought to you by: Scott

Hey everyone!

Bit of a hiatus on recaps, but that hasn't slowed us down. Since the last time I wrote something up, we have covered chapters 23, 24, and 25 of PFPL, which all illustrate different ways of organizing your language: hybrid typing, structural typing, and type refinement, respectively.

Next week will be Chapter 26 on classes and methods, which I suspect is one of those topics everyone knows experientially but probably hasn't seen presented the PFPL way before. We are sticking to the usual time and place, Friday 6:30pm at Bento Miso / Gamma Space. I'd also like to demo the synesthetic numbers I've implemented in terralang, and if anyone else has something cool they'd like to show or just talk about, please bring your ideas.

Type refinement, as introduced in chapter 25, is a way of resolving the ambiguous nature of the dyn type in the hybrid-dynamic language presented in chapter 23. Applying the same generic dyn type to all dyn classes effectively erases all information about the underlying class and forces the language to treat every dyn variable the same way, thereby unnecessarily restricting the number of programs that can be safely typed.

Refinements provide a mechanism for capturing information about the class of a dynamic type as part of a refinement of that type. One way to think about this is that refinements represent subsets of the total class, so for instance the set of even integers is a refinement of the class of all integers.

The section on Boolean blindness seems like a very important thing to internalize, though I still don't know that I fully appreciate what is happening there. My understanding is that when handling conditional statements, if the language treats the condition as a single Boolean bit, then it will not have enough contextual information to type the subsequent then and else clauses. Harper's resolution is to disallow the separation of testing for a type class from casting a dyn type to that class, so that the information about the type being tested is directly tied to the clauses.

All for now, see you all next week,
Scott

March 21, 2016.
Brought to you by: Scott

Hi Everyone,

Double Recap for the last 2 weeks, so a few things have happened, but I’ll get to that.

Next week is Good Friday, so there will be no meeting as most people are away; we will reconvene the following week, Friday April 1st, 6:30pm at Bento Miso. Since many people were away for PFPL 22 this week, topics will be open to anything from PFPL 22 or 23, and all projects are welcome. Dann also suggested that discussing other people's projects you have found is fair game, and I agree. See you all then!

Two weeks back we covered PFPL 21.4 "Untyped Means Uni-typed", which evolved into a lot of discussion around dynamic and static typing and what they really mean. Taylor also pointed us at Harper's blog post on the topic. Later, Pete demoed his really cool System T interpreter in the browser; he kept trying to say it wasn't cool, but this of course is a patent lie not to be believed. I encourage you all to try to compute the largest factorial you can without crashing your browser; mine bombs out pretty quickly once it gets over 600MB of memory. You could use his decimal-number optimization, but where's the fun in that? Also, Jim's Cheatsheet is looking great and should be a great resource for future readers.

This last Friday we read through PFPL 22 "Dynamic Typing", which means Lisp! Harper provides the full dynamics rules for DPCF, and there are some interesting things lurking in these rules. One thing that popped out to me was that nowhere in the rules can you specify something like "this function expects an int", which is normally how one thinks about "typing" a function in a dynamic language; instead, the error will pop out later.
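A tiny example of that "error pops out later" behaviour (illustrative only, with Python standing in for DPCF):

```python
# There is nowhere to declare "this function expects an int"; the class
# check happens at each use, so a misuse only surfaces when evaluation
# reaches it.
def double(x):
    return x + x  # happy with ints *and* strings -- no error yet

result = double("oops")  # still no complaint: "oops" + "oops"
try:
    result + 1               # only a later use that demands an int fails,
except TypeError as e:       # far from the original mistake
    print("late failure:", e)
```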

After that, Harper goes into great detail on all the problems with dynamic languages. A lot of these seemed quite familiar, as they often come up in practice in many different languages; e.g. the difficulty of noticing a malformed list in Scheme, or the requirement to repeatedly type check the same argument over again in recursive calls. Jim had some interesting ideas on how you could build a system that could remove the repeated type checking, but my suspicion is that such a system would no longer be able to give the same dynamics rules as DPCF.

Near the end of the chapter and in the exercises, things get distinctly practical in nature, and I enjoyed going over the different approaches to implementing multi-argument or multi-return-value functions. Leo had lots of great examples of the different approaches taken by different languages, and even a quick example for me of using pattern matching on the number and type of return values, since apparently Haskell's read function does this.
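In Python the tuple approach looks like this (a toy sketch; `divmod_pair` and `divmod_curried` are invented names): multiple arguments and multiple return values are both just a single tuple, with currying as the one-argument-at-a-time alternative.

```python
# Multiple arguments as one tuple argument, and multiple results as one
# tuple result; "pattern matching" on the tuple recovers the components.
def divmod_pair(pair):
    n, d = pair
    return (n // d, n % d)

q, r = divmod_pair((17, 5))
print(q, r)  # -> 3 2

# The curried alternative passes one argument at a time instead.
def divmod_curried(n):
    return lambda d: (n // d, n % d)

print(divmod_curried(17)(5))  # -> (3, 2)
```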

All for now, see you all after Easter,
Scott

March 7, 2016.
Brought to you by: Scott

Hi Everyone,
Last Friday we pondered Church's Law and Scott's Theorem, both found in the untyped lambda calculus chapter (PFPL 21). It was also the first split theory/practice meeting; I think this worked well and hope to continue it in the future. It was not, however, the first time that the structure of the universe came up in conversation… nor do I expect it to be the last :)

Next Friday we are again dividing the session, covering PFPL 22, Dynamic Typing, and again allocating half the time to discussing implementations or other questions that come up.

This was a wide-ranging discussion. The statement of Church's Law (normally referred to as the Church-Turing Thesis) stimulated a lot of interesting tangents into what types of systems can be captured by computable functions over the natural numbers, and conversely what types of systems can be used to implement such a computation system. At this point I think we can safely say the outlook is unclear, though placing a computational hypothesis into the realm of a scientific theory is a tantalizing possibility.

In slightly more grounded concepts, we found that when unpacking the Y combinator, some of the properties we had observed of the fixed point operator in the self language from PFPL 20.3 no longer held in the untyped lambda calculus version. For instance, last week we found that applying the fix operator to a number would give back the number itself, so in the self language `fix(5) = 5`. However, this is no longer the case, and the fixed point of the Church encoding of 5 will be whatever `Y(5)` gives you. Similarly, unpacking `Y(succ)` shows that the fixed point of `succ` is definitely not `succ`, and is in fact quite a complex expression, despite the fact that `succ` is not a recursive function. I think this may be a manifestation of the `all elements have the recursive type` property of the untyped lambda calculus that Harper is developing in this chapter.
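For the curious, here is the eager-language variant of the Y combinator (usually called the Z combinator) in Python, since strict evaluation makes the plain Y combinator loop forever; this is my own illustration, not from the chapter.

```python
# The Z combinator: the eager variant of Y, with an eta-expansion
# (lambda v: ...) delaying the self-application so strict Python terminates.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Z finds fixed points of "generators" that take the recursive function
# as an explicit argument:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # -> 120

# A generator that ignores its recursive argument, mimicking succ:
succ_gen = lambda rec: lambda n: n + 1
print(Z(succ_gen)(4))  # -> 5
# Z(succ_gen) computes the same answers as succ, but as a term it is a
# sizeable lambda expression, not succ itself.
```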

Taylor also showed us the Turing fixed point combinator, which, though different from the more famous Y combinator, seems to share many properties with it. One thing that wasn't clear to me was whether the Turing and Y combinators would give you the same or different fixed points.

When we moved into the practical section, Ben gave us a lovely introduction to using the Twelf environment to define and prove properties of our languages in a syntax that is very similar to the presentation in the book. This seems like a very attractive tool and I'm looking forward to trying it out myself. Jim has also gotten the cheat sheet into quite a complete state, and it looks really useful for quickly identifying the many different syntaxes used in the book.

All in all, a most excellent evening, thanks everyone and looking forward to next time,
Scott

February 29, 2016.
Brought to you by: Scott

Hi everyone!
Thanks for a very interesting session on Friday finishing up PFPL 20 on recursive types!

Maybe it makes sense that a session on self reference started with all of us telling war stories of our first paid programming gigs :). As we learned though, self reference is not narcissistic at all but rather a fundamental building block of programming languages and as a bonus if you hold “self” up to a mirror, “fles” is rarely lethal.

Next week we are changing gears and splitting the session into “practical” and “theory” (impractical?) parts. One half will be discussions on the untyped lambda calculus from PFPL 21 and the other half is open ended discussion of any implementations of languages or exercises from or relating to the book.

We worked out some examples on the board using the self-reference language found in section 20.3. For me this was very helpful, and I think it more concretely pinned down what the fix operator is doing. One way of looking at this section is that it provides the details of how to implement "recursive functions" in a language; we often use this feature in other languages without thinking about what has to happen under the hood to make it work. I found that later reviewing the additional examples in chapter 7, "Recursive Functions", of Harper's SML tutorial (http://www.cs.cmu.edu/~rwh/smlbook/) helped ground the operation of the fix operator in some concrete code examples. We also looked at how recursive values are perfectly normal in a lazy language but require the insertion of a delay or thunk operation in an eager language.
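Here is a sketch of that thunk insertion in Python (my own toy `fix`, not PFPL's): the recursive occurrence is hidden behind a zero-argument lambda, so an eager evaluator doesn't unroll it until the body actually forces it.

```python
# fix in an eager language: hide the recursive occurrence behind a thunk
# (a zero-argument lambda) so evaluation doesn't unroll it forever.
def fix(f):
    return lambda *args: f(lambda: fix(f))(*args)

# The body receives a *delayed* self; calling the thunk forces one unrolling.
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self()(n - 1))
print(fact(5))  # -> 120
```

In a lazy language the thunk would be unnecessary, since the recursive occurrence is only evaluated on demand anyway.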

We ended up not leaving as much time for the origin-of-state section 20.4, but still got a poorly drawn RS latch on the board. Instead of probing into latches, we discussed what "state" is in this context: in particular, this language doesn't have truly mutable state, so instead you get larger expression trees that record their history and encode some notion of "time" as a discrete sequence of steps through that tree. Having a discrete notion of time like this makes resolving the zen koan, "What time is it between the 15th and the 16th day of the month?", trivial, since it is obviously not a time at all, but rather a state transition.

If only enlightenment was that easy,
Scott

February 21, 2016.
Brought to you by: Scott

Hey Team!
Thanks for another great meeting, we had a full crew for our first crack at Chapter 20 and System FPC for recursive types.

Next week at the usual time and place (Friday @ 6:30pm at Bento Miso) we will have a dual session: first we will continue with the rest of chapter 20, and then we will move into a meta session where we plan out what to do next. There seems to be some interest, myself included, in spending some time doing more hands-on implementations of the things we have been learning in PFPL. So bring your ideas next week!

I found this chapter quite challenging, and it still isn't sitting well, but I think that may be more a reflection of a lot of ideas built up over the last few chapters without ever being fully digested. We started by discussing Dann's resolution to the problem Pete posed last week: given an upper complexity bound, are you guaranteed to be able to find a language that has it as an upper bound on the programs that can be written in it?

We then went through a number of intriguing but difficult to understand statements in the introduction, and I think we just had to move on; Ben pointed out that perhaps we didn't have the necessary machinery to understand them just yet. The most useful thing for me was the concrete examples we worked out on the board (great idea, Pete); these helped us explore the meaning of the examples for natural numbers and lists that Harper gives in the text. An interesting question that came out of this was: how does one specify the type of arbitrarily long but finite lists?

Harper discusses at length the greater expressive power of eager languages in comparison to lazy ones, going through some examples and comparing with Haskell. I'm starting to get at what he is trying to say, though I can't say I fully appreciate why it is so important.

We ended the night going through the self-reference language examples but ran out of time to discuss state. Looking at the self-reference examples, it felt like working in a language with a bit more normalcy to it, e.g. with plus and numbers, might help to clarify what is going on there.

See you all next week,
Scott