CS Cabal

Toronto-based reading group that meets weekly to read computer science books and papers.

Currently reading: The Little Typer


Cabal News and Meeting Recaps

Turns out the base case is Pete

February 15, 2016.   Brought to you by: Scott sarostru

Hi everyone! In spite of the recent flurry of gravitational waves, Friday night saw us return to recursive functions in chapter 19, and Pete return to the fold; welcome back, we missed you :) This was an evening full of interesting digressions, which is more or less as planned, since we decided to spend another week on this chapter because of its many digression-worthy topics.

Next week we will move on to chapter 20, System FPC of Recursive Types, meeting at the usual time and place, 6:30pm Friday at Bento Miso.

I’m fairly certain we raised more questions than we resolved; it turns out Church’s Law, partial recursive functions, and Gödel numbering get one into fairly deep waters. I’m jotting down a few that I remember; feel free to add to the post with others that I may have forgotten.

  • If I write an interpreter for language L in language L, is that sufficient to show L is universal?
  • Is there an expressive ordering hierarchy among total languages? E.g. something like having a total language T0 in which you can write an interpreter for another total language T1, which in turn can be used to write an interpreter for T2, and so on.
  • Clarification around the distinction between partial functions and partial recursive functions still seems necessary.
  • Can I choose an order of maximal complexity, e.g. n^2, and then find a system such that only total programs up to that complexity may be written? This could be viewed as a variant on a language with primitive recursion but no higher-order functions (see the recursor sketch after this list).
  • How does Church’s law relate to the breakdown of languages in the lambda cube?
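
On the primitive recursion point, here is a rough Haskell sketch (my own, in the spirit of Gödel’s T from chapter 9, not anything from chapter 19): if the only way to recurse is through a recursor that strips one successor per step, every program you can write is total:

    -- A sketch of a T-style recursor, assuming the usual presentation:
    -- all recursion goes through 'rec', which peels off one constructor
    -- per step, so every recursion is structurally bounded by its input.
    data Nat = Zero | Succ Nat

    rec :: a -> (Nat -> a -> a) -> Nat -> a
    rec z _ Zero     = z
    rec z s (Succ n) = s n (rec z s n)

    -- Addition, with no general recursion in sight.
    plus :: Nat -> Nat -> Nat
    plus m = rec m (\_ acc -> Succ acc)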

Thanks everyone for the mind-bending questions; I may be recovering from this one for a while to come. Scott

(PFPL (PFPL (PFPL ...

February 8, 2016.   Brought to you by: Scott sarostru

Hey everyone, thanks for another stimulating session on Friday, starting our foray into general recursion. As it turns out, chapter 19 packs in so much that we will continue with it next week too and discuss the many intriguing comments in the second half of the chapter.

The meeting next week will be at the same time and place as usual, 6:30pm Friday at Bento Miso, and will start from section 19.3.

Since the last update we have covered a fair bit of ground in the book: Girard’s System F in chapter 16, abstract types in 17, and higher kinds in 18. Unfortunately I’ve been a bit remiss in making notes, so I don’t remember too much. I was spending most of my normal PFPL time instead reading about RPython and preparing for the Papers We Love Toronto talk on meta-tracing JITs, which was a lot of fun.

In chapters 16 and 17 we devoted significant discussion to comparing the presentation with the one we had seen in Cardelli and Wegner’s heavily cited paper, “On Understanding Types, Data Abstraction, and Polymorphism.” Higher kinds, in chapter 18, I don’t feel I understand deeply; however, you can see how they are used in many applications, so they feel like an eminently practical idea.

This week we went through the statics and dynamics of the language PCF, introduced in chapter 19. PCF has general recursion and so is the first language we have met that is not total, i.e. the first in which a well-typed program is not guaranteed to terminate. The first part of the chapter presents the fixed point operator in a mathematical way, which was pleasantly familiar from taking functional analysis courses back in university. Once we started going through it, though, I found the comparison perhaps a bit misleading, since what we are using the fix operator for here is very different from many of the tasks you might do with a fixed point in math class, e.g. finding a fixed point or proving the dimensionality of a null space.

Instead, the fixed point operator here is being used to allow us to reference the definition of a function within the definition of that function. Ben and Leo did a nice job illustrating with examples how fix and define are effectively equivalent in enabling general recursion. I’ve encountered the problem before: if you try using different data structures for the environment in a simple Scheme-like interpreter, you can find yourself missing a fix operator.
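
To make that concrete, here is a minimal Haskell sketch (my own illustration, not PFPL’s formal PCF): fix ties the recursive knot, so factorial never mentions its own name:

    -- A fixed-point operator; this version leans on Haskell's laziness
    -- and is an analogue of PCF's fix, not a transcription of the book.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Factorial without self-reference: the recursive call goes through
    -- the extra 'self' parameter that fix ties back to the function.
    factorial :: Integer -> Integer
    factorial = fix (\self n -> if n == 0 then 1 else n * self (n - 1))

    -- factorial 5 == 120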

Next week will, I think, stir up some interesting discussion, as there are more than a few brain-warping statements in the latter half of the chapter. See you all there, Scott

New Year's Resolutions and Other Fresh Renamings

January 11, 2016.   Brought to you by: Scott sarostru

Hi everyone! Welcome back, fellow PFPL reading group members. Friday night at Bento Miso we got caught up on chapters 14 and 15, among other diversions. Next Friday at 6:30pm at Bento Miso we will continue into chapter 16.

For the uninitiated, PFPL is an acronym for Bob Harper’s book, Practical Foundations for Programming Languages, which we have been reading since September 2015 at a rate of about a chapter a week. We meet Friday nights; however, we have an active discussion channel on Slack, so remote readers are welcome to join the fun.

Next week is exciting, as in chapter 16 Harper introduces us to System F, Girard’s highly influential typed lambda calculus, which includes polymorphic types and is the most powerful system we have seen so far in the book. Hopefully everyone will have been able to read chapter 16 by then, and for those who were there on Friday, your homework is exercise 15.1.

Last week saw us through most of chapter 15, which covers inductive and coinductive types. By adding inductive and coinductive types to our system we gain the ability to intrinsically define infinite types like the natural numbers.

Leo summed up the difference between inductive and coinductive types quite well: an inductive type starts from a single fixed starting point (e.g. zero) and builds outwards indefinitely. Coinductive types go in the opposite direction: you start with an infinitely large bag and then remove elements from it.

We then went through a number of fairly straightforward examples of the dynamics of nats and conats (introduced in section 15.4), and Ben gave a ton of helpful explanation, thanks Ben! I’ve started compiling some of the examples over on GitHub.
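
To put Leo’s metaphor in code, here is a rough Haskell sketch (mine, not from the book or the GitHub examples): an inductive natural is built up finitely from zero, while a coinductive stream is an infinite whole that you observe one element at a time:

    -- Inductive: built up finitely from a base case.
    data Nat = Zero | Succ Nat

    -- Coinductive (morally): consumed by peeling off observations.
    -- Haskell's laziness lets one declaration play both roles; a stream
    -- emphasizes the "infinitely large bag" reading.
    data Stream a = Cons a (Stream a)

    -- Build up: three, constructed from zero.
    three :: Nat
    three = Succ (Succ (Succ Zero))

    -- Tear down: the infinite stream of naturals, observed on demand.
    nats :: Stream Integer
    nats = go 0 where go n = Cons n (go (n + 1))

    takeS :: Int -> Stream a -> [a]
    takeS 0 _           = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs
    -- takeS 5 nats == [0,1,2,3,4]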

Another topic that came up and garnered a lot of attention was the difference between a language that is Turing complete and a language in which you can write an interpreter for itself. We weren’t able to resolve that question and it has been deferred for further investigation, with some already lengthy discussion in Slack.

Alrighty then, Scott

Final Session of 2015

December 30, 2015.   Brought to you by: Scott sarostru

Hey everyone! Hope you all had a great holiday, and I wish you a happy new year!

Back on December 18th we held our last session of the year, a festive one with oatmeal cookies, gluten-free cheese-filled monkey bread (thanks Leo!), brownies, some other tasty snacks, and a yellow owl-shaped fruit plate courtesy of the fine fruit slicers at Metro foods. We ate more than usual, reminisced on 2015, and discussed chapter 15 of PFPL, infinite types.

We will jump back in on Friday, January 8th, 6:30pm at Bento Miso, and bring a fresh 2016 perspective to bear on chapters 14 and 15 of PFPL. We have covered each already; however, the material seems important and worth further discussion.

Dann had the great idea to look back over our year and remember some of the things we did in 2015; hint: it turns out a lot! Some favorites that came up were the over-ambitious weekly paper readings from earlier in the year: Strachey in one week, Array Mapped Tries, and the garbage collection paper. Chapter 9 of PFPL (Gödel’s system T) was also mentioned, as well as a number of other things which I can’t remember at the moment but which we definitely agreed were more confusing than they needed to be. Or, alternatively, if maximizing confusion was the author’s aim, then they achieved a stunning success.

As for the infinite types material, I felt like I came away with a better grasp of how the finite types (i.e. product and sum), the infinite types (i.e. inductive and coinductive), and the languages introduced earlier in the book all fit together.

One question I came to the session with was: what exactly is infinite about these types? The motivating examples are natural numbers and infinite sequences of naturals, but we had naturals earlier in the book in the E family of languages, so what is different now that we decide to class them as infinite types? The answer is that in the E languages we were piggybacking on the mathematical definition of natural numbers, but the constructs in chapter 15 give you the power to define them without recourse to an external definition of numbers. This is similar to Gödel’s T language from chapter 9, where nats were defined in terms of the successor function.

This explains why the types in chapter 15 are called infinite but the product and sum types of chapters 10 and 11 are called finite types. The infinite refers to the infinity of natural numbers whose values can be captured by the inductive and coinductive type mechanisms. An infinite number of values cannot be expressed in terms of product or sum types since in those types the full set of possible index values must be known a priori.
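
As a rough sketch of that idea (my Haskell encoding, not the book’s notation, which writes nat as a least fixed point mu(t. unit + t)): an inductive type is the fixed point of a one-layer description, so the naturals can be defined with no appeal to an external notion of number:

    -- The fixed point of a functor: a sketch of inductive mu-types.
    newtype Mu f = In (f (Mu f))

    -- One "layer" of a natural number: either zero (unit) or a successor.
    data NatF r = ZeroF | SuccF r

    type Nat = Mu NatF

    zero :: Nat
    zero = In ZeroF

    suc :: Nat -> Nat
    suc n = In (SuccF n)

    -- The fold (recursor): consume a Nat one layer at a time.
    foldNat :: a -> (a -> a) -> Nat -> a
    foldNat z _ (In ZeroF)     = z
    foldNat z s (In (SuccF n)) = s (foldNat z s n)

    -- foldNat 0 (+ 1) (suc (suc (suc zero))) == 3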

All for now; looking forward to taking one more crack at these infinite types next week. See you all then, Scott

Classic Logic

December 6, 2015.   Brought to you by: Scott sarostru

Hey everyone! Friday night we discussed classical logic (chapter 13 of PFPL) in a decidedly non-classical way. It was somewhat surprising and mind-bending to treat the same old logic statements we know and love in a very different way, thanks Harper!

Next week we will meet on Friday at 6:30pm at Bento Miso to cover Chapter 14 of PFPL, Generic Programming.

Sorry for the hiatus on recaps; I’m hoping to fill in a couple of summary notes sections for chapters 10-12 rather than try to do catch-up recaps.

Chapters 12 and 13 of PFPL discuss logic systems and how to construct a logic grammar using the same principles used earlier in the book. Chapter 12 uses the constructivist approach, “only that which is proven is true,” whereas chapter 13 uses classical logic, the common one taught in school, “all statements are either true or false.” These may seem the same at first, but in constructivist logic equating truth with having a proof in hand gives rise to statements being “undecided,” which isn’t possible in classical logic. This in turn lays the groundwork for the propositions-as-types principle, in which there is an isomorphism between programs and proofs.

The classical logic section aims to replicate the mechanics of the normal logic operators you learn in school, i.e. and, or, not, and if-then, by building a logic grammar following the same principles used to build the constructivist logic grammar in chapter 12. The symmetry between truth and falsehood seems to give rise to a more complex set of grammar rules and more complex programs than in the constructivist logic. Harper later claims that classical logic is weaker than constructivist logic and connects this with the idea that adding new features to a language can sometimes actually weaken it. We were undecided ourselves on whether we agree with that or not.

We went over a variety of examples of writing programs to prove logic statements in constructivist logic, but when we tried to do the same in the classical formulation we found it significantly more challenging. As all proofs must be built upon reaching an absurdity (i.e. that truth is false), it requires jumping back and forth between refutations of statements and proofs of statements, and these are composed to construct a program. This seemed decidedly less intuitive than the constructivist approach, though it may just need more practice.
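
A small Haskell sketch of that flavor (my example, not the book’s): negation is “implies absurdity,” and while the law of the excluded middle has no constructive proof, its double negation does, and the proof is exactly this back-and-forth between a refutation and the statements it refutes:

    import Data.Void (Void)

    -- A refutation of a is a function from proofs of a to absurdity.
    type Not a = a -> Void

    -- Excluded middle, Either a (Not a), cannot be proven constructively,
    -- but its double negation can: given a refutation k of excluded
    -- middle, we play its two cases against each other.
    notNotLEM :: Not (Not (Either a (Not a)))
    notNotLEM k = k (Right (\a -> k (Left a)))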

Looking forward to next week! Scott

Lambdas > Lembas?

November 1, 2015.   Brought to you by: Scott sarostru

Hey everyone, Part 2 of PFPL, oh my! Friday night we covered chapter 8, where we start adding functions to our language E. This was, in short, awesome!

Next week we will move on to chapter 9, where we meet System T, and I am really excited to hear more about Pete’s implementation, https://github.com/pbevin/systemt . We meet as usual at 6:30pm at Bento Miso on Friday night.

Before discussing chapter 8, I wanted to talk briefly about Ben’s implementation of E, which he shared with us a couple of weeks back and which I have been remiss in recapping. Seeing this really brought out the concrete applicability of the ideas expressed in PFPL and gave life to the sometimes (always?) terse mathematics the book uses. Not sure about others, but I really want to have a go at doing that myself; it seems like an exercise that would be very rewarding.

Onto chapter 8: in 8.1 the language ED is introduced, which extends E with function definition and application but no lambdas. It’s kind of a funny extension; I think it is intended to introduce some concepts rather than be useful in its own right. We spent some time going through the different judgements and equations, and then wrote out an example of how to write a function of two arguments in this language.

In 8.2 we meet EF, a more fleshed-out extension of E, which promotes functions to first-class citizens, i.e. lambdas. We walked through the code example for the function k, which I personally found quite helpful, and then took a look at the preservation and progress theorems. Dann pointed out that preservation is proved using the dynamics of the language while progress is proved using the statics; the why of this remained unclear to me, though Ben had some ideas.

Section 8.3 introduces the concept of definitional equality, which on my first reading seemed like a technical detail, and I suggested maybe we skip the section, but I am really glad my idea was ignored :). This was perhaps the most interesting part for me once Pete and Ben started discussing the nuances of the definition. I’m going to take a stab at writing down my takeaway understanding; if it’s wrong, let me know and I will edit this.

Definitional equality is a construct introduced in order to identify when different ABTs are in fact equivalent to one another in some way that is useful to us as language designers. At one end of the spectrum we might say that an ABT is only equivalent to another if every node is equal, but this is too strict, since by that reckoning x+1+1 is completely different than x+2. At the other end of the spectrum we might say it only matters what an ABT evaluates to once all free variables have been replaced, but this is too hard to use, since often we want to work with trees containing unassigned variables. Definitional equality lands somewhere in the middle and tries to let us make as many equivalences as possible without compromising the integrity of our language. Even so, it is a best-effort attempt rather than a complete characterization: Harper brings up an example showing this in chapter 5, where commutativity of addition on the nat type cannot be proved true in general even though it can be shown true for every instance.
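
Here is a toy Haskell sketch of that middle ground (my own construction, not PFPL’s rules): we can judge x+1+1 and x+2 definitionally equal by normalizing constants away, without ever needing a value for x:

    -- A tiny expression language over one free variable x.
    data Exp = X | Lit Integer | Add Exp Exp
      deriving (Eq, Show)

    -- Normalize to (number of occurrences of x, constant offset).
    -- Both x+1+1 and x+2 normalize to (1, 2).
    norm :: Exp -> (Int, Integer)
    norm X         = (1, 0)
    norm (Lit n)   = (0, n)
    norm (Add a b) = let (xa, na) = norm a
                         (xb, nb) = norm b
                     in (xa + xb, na + nb)

    defEq :: Exp -> Exp -> Bool
    defEq a b = norm a == norm b

    -- defEq (Add (Add X (Lit 1)) (Lit 1)) (Add X (Lit 2)) == True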

We didn’t have too much time for 8.4, but dynamic binding today doesn’t seem nearly as attractive as it must once have been. I found it interesting, though, how placing it into the ABT framework used in PFPL makes it clear that dynamic binding requires giving up a lot of the structure we have built up over the last eight chapters, which hardly seems worthwhile.

Ok that got a bit longer than usual, see you all next week, Scott

Conveniently Nonlinear

October 5, 2015.   Brought to you by: Scott sarostru

Hey everyone, thanks for coming out on Friday for a dynamic session, dipping our toes into chapter 5 of PFPL and the pleasure that is “Dynamics”.

Next week (Friday night at 6:30 at Bento Miso) we will be covering sections 5.3, 5.4, and 5.5, and maybe some of us will overlap on some homework problems, who knows! The suggested exercises are 5.3 from the book, hw00.3, hw1.1.1, and hw1.1.2; for bonus cabal cred, hw01 sounds really interesting and Ben will be telling us a bit about it.

I had missed the last two weeks, so I was trying to catch up on the material and found the discussion quite helpful. Perhaps it’s just the presentation and the theorem-proof style, but I find I get into a mindset where I stop thinking about the meaning of things and just start churning through the math. In discussion, however, that viewpoint cannot stay :). It was great to hear others’ thoughts, to start relating some of the definitions in the text to the programming concepts we want to model, and to think about what problems might arise as we continue to explore. Also, I really appreciated seeing Ben’s concise proof approach; it’s been so long since I did math proofs that I’d forgotten some things. I was pleasantly relieved that my attempts seem to have some of the right ideas, which I conveniently scattered across the page in a non-linear pattern interspersed with scribbled-out notes.

Looking forward to seeing you all next week, Scott

Sanguine Moons

September 29, 2015.   Brought to you by: Dann dxnn

Hello, computer scientist enthusiasts!

This is a special Double Recap, so it’s extra short! We had a great meeting last Friday, and additionally had a second great meeting the week before! We made a glossary of terms, so now when someone says something like “gamma turnstile ex colon tau surfboard ex colon tau” we can all follow along. Which is really something, when you think about it.

This week we’ll be reading sections 5.1 and 5.2, doing book homework problems 5.1 and 5.2, reviewing hw1.1 and hw1.2 from the grad course, possibly discussing sections 2 and 3 of hw00 from the undergrad course, and generally having an amazingly good time. If you’re just tuning in, or trying to get caught up, I’d definitely recommend doing the hw00 undergrad coursework – it’s actually quite fluffy and fun.

I’m very much looking forward to seeing all of your smiling lambdas there, and to the return of our unofficial steganographer Scott, who usually publishes these missives but has been on a surfboard these past long weeks.

Sanguine moons, and happy coding!

Dann

The First Practical Friday

September 11, 2015.   Brought to you by: Scott sarostru

Hello fellow Cabalers, and welcome to all new Recruits! Last Friday we kicked off the Cabal year with a bang and started chapter 1 of Practical Foundations for Programming Languages! So exciting!

For those who couldn’t make it, this Friday, September 18th, 6:30pm at Bento Miso is still a great time to start, as we will blatantly ignore the advice of the bearers of the Holy Hand Grenade and proceed directly unto chapter 4. The reading for the week is sections 4.1 and 4.2, and Pete has already started some notes to help us, because he is awesome, here: Static and Dynamic, and here: Type Judgements.

After a round of introductions, we started on the preface, which, in typical Harper style, is short but incredibly dense; I suspect that other groups without wranglers as judicious as ours would never have escaped the preface. He asks some pretty fundamental questions about programming, e.g. what is a program made of? What structure does a program have? And though he wasn’t able to answer those questions yet, I think it will be good to keep them in the back of our minds even when wading through murkier waters.

In chapter 1 of PFPL, Harper starts laying some mathematical groundwork for what will follow, so we defined abstract syntax trees and sorts (not to be confused with types!), and introduced a variety of notation for manipulating variables and trees. Pete’s and Ben’s explanations surrounding sorts I found particularly helpful, as while reading I had thought of them as being like types, but they serve a different purpose here. I liked Pete’s analogy with the English language, where the types of a typed programming language like Haskell correspond to the meanings of words, whereas sorts correspond to the grammatical role a word plays. As an example, “The dog chased Sally” works as a sentence, but not “The dog swam Sally,” as the verb swam requires the preposition “to” to make a full English sentence. A type error, by contrast, concerns meaning: a sentence can be grammatically correct but make no sense, e.g. “The dog quoted Sally,” which works if the dog happens to be Mister Peabody but is otherwise impossible.
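
Here is a rough Haskell sketch of the distinction (my encoding, not the book’s): sorts carve the abstract syntax into separate grammatical categories, such as expressions versus types, before any question of type checking arises:

    -- Two sorts as two separate datatypes: a tree of sort Typ can never
    -- appear where sort Exp is expected, and vice versa. This is a
    -- grammatical constraint, prior to any type checking.
    data Typ
      = TNat
      | TArrow Typ Typ

    data Exp
      = Var String
      | Num Integer
      | Plus Exp Exp
      | Lam String Typ Exp   -- annotation of sort Typ, body of sort Exp
      | App Exp Exp

    -- Well-sorted but ill-typed, like "The dog quoted Sally":
    -- grammatically fine, rejected only later by the type checker.
    illTyped :: Exp
    illTyped = App (Num 1) (Num 2)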

Thanks to Dann and Bento Miso as always for doing a wonderful job, and see you all again soon, Scott

Recap Recap

August 14, 2015.   Brought to you by: Scott sarostru

Hey team! Friday night we met up to discuss the ‘Go Meta’ manifesto from SNAPL 2015. Dann had imagined this might wrap up our month of multi-stage programming, but as it turns out it was more of a beginning, and our one paper has exploded into a finite but large number of follow-on readings.

So next week we ‘Go Meta’ once more, but this time we are allocating everyone 10 minutes to cover a follow-up reference of interest to them from the paper. Also, we are having a potluck, so bring along some food. After that we will take a two-week hiatus before starting our next book-length commitment, Harper’s Practical Foundations for Programming Languages.

This manifesto really seems primed for spurring discussion and pushing readers to explore the many references. We tended to alternate between discussing the high-level ideas in the document and trying to comb through the many motivating examples, for example the 500-line SQL compiler or the parametrized code generator.

The authors put forward a development path that creates effective communication channels between the code a developer writes in a high-level language and the assembly code that gets generated on the other end. Every one of us has encountered the problems mentioned in the paper, I think because they come from the standard development flow: write a first version in a high-level interpreted language, then once all the kinks are worked out, throw it away and write it again in a lower-level language where you have more control over performance. Unfortunately, that often means your code loses a lot of its dynamism and makes future changes more expensive; in addition, there really is no one low-level version that will satisfy all possible demands, so you often end up with multiple versions and/or extensive parameter controls to guide the compilation.

The metaprogramming approach put forward in the paper isn’t by itself a solution to these problems, but it is a powerful tool that I think has the potential to become a very effective way to do development. So maybe we can do away with the disjoint two-stage development flow and create some better tools for ourselves.
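
To give a flavor of the staging idea, here is a toy Haskell sketch of my own, far simpler than the paper’s examples: a first stage specializes a function to a parameter known early, like unrolling power for a fixed exponent, so the generic version and the fast version come from one piece of code:

    -- Stage one "generates code" for a known exponent; here the generated
    -- program is just a string, where a real staged system would build
    -- and compile an actual intermediate representation.
    genPower :: Int -> String
    genPower n = "powerN x = " ++ go n
      where
        go 0 = "1"
        go k = "x * (" ++ go (k - 1) ++ ")"

    -- The same specialization without leaving the language: the recursion
    -- on n happens "early", yielding a function with n baked in.
    power :: Int -> (Integer -> Integer)
    power 0 = const 1
    power n = let rest = power (n - 1) in \x -> x * rest x

    -- genPower 3 == "powerN x = x * (x * (x * (1)))"
    -- power 3 2  == 8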

Until Friday, Scott