Cabal News and Meeting Recaps
May 26, 2022.
Brought to you by: Erik
TL;DR: We’re still alive, we read lots of stuff, and we’re starting The Seasoned Schemer soon.
We still exist! After finding out that a few of our friends only found us because of our web presence, I figured it was time to update the site with all the stuff we’ve been up to in the last 5 years.
Myself and a new group of folks who weren’t around for the original reading of SICP decided to spin off a group on Thursdays and work our way through it, and we kept going after that book was done. We alternated between books / courses and shorter papers to give ourselves a break between longer readings; some highlights are listed below:
In a bit of a departure from our usual CS or CS-adjacent topics, we spent a few months going through a big chunk of the MIT Underactuated Robotics course.
In the meantime, the Friday version of the reading group was still going strong, reading papers, Paradigms of AI Programming by Norvig, and plenty of other stuff I can’t remember (I’ll try to get someone to post a second update, but some of it is listed here). The two groups joined forces around this time and started working through PLFA, possibly the densest book we’ve attempted to date?
After those, we jumped back into some papers:
We then jumped into The Little Schemer with plans to read through the entire “Little” series with some breaks for more papers in between. We’ve just finished up that book and are working our way through another block of papers before we start The Seasoned Schemer:
Today we read Adding Interactive Visual Syntax to Textual Code by Anderson et al., and next week we have Go Meta! A Case for Generative Programming and DSLs in Performance Critical Systems by Rompf et al. lined up.
Then we’ll be onto The Seasoned Schemer! If this seems interesting to you and you’re in the GTA (or not, we’ve been meeting remotely since COVID started), feel free to reach out. Hopefully it won’t be 5 years until the next update - we’ll try to post some more thoughts on what we’re reading instead of just dumping a long list of books and papers in the future.
Erik
March 28, 2017.
Brought to you by: Inaimathi
This entry has been shamelessly cross-posted from langnostic.
For the past year or so, the Toronto Comp Sci Cabal has been reading a book called Practical Foundations for Programming Languages (PFPL). The sheer idea density in this thing was giving me enough fodder to chew over and blog about for something like a month after each reading. The trouble is, we met to discuss and read the next section about once a week, so chewing and blogging time has been getting very seriously truncated on a consistent basis. The end result of this will eventually be a series of posts, each focusing on trying to understand a single topic we covered, but I won’t embark on that journey today.
This past Friday, we held the concluding meeting of the reading of that book. It was pretty awesome. We got the entire group back together, and were joined by Bob Harper by video. Even though we’d met him before, it was nice to talk to him once we’d finished reading his work.
The conversation ranged from type theory, to language design, to the current state of programming as a discipline. The definite highlight for me was finally getting to thank him for explaining some of the things that get vaguely and inconclusively discussed in the comment sections of his blog. The big ones are
- the distinction between Concurrency, Parallelism and Distribution, which typically get conflated in general programming conversation, and are covered separately in PFPL chapters 37 through 41.
- the idea that dynamically typed languages are subsets of statically typed languages, which is something that seems deeply counter-intuitive from the perspective of most industry programmers. Bob mentioned that he’s in a very small minority of people that actually understand this, so I don’t feel bad for needing convincing. This one is covered in chapters 21 through 23.
Those are highlights, but they’re not even remotely the only ideas worth exploring in the book. In Harper’s own mind, some of the highlights of the tome are
- Dynamic Dispatch, covered in chapters 26 and 27, which defines and explains the entirety of the object oriented programming paradigm in a typed context
- Variables are not Assignables, covered in chapter 35. This is a conflation that most current languages accept, to the point that mainstream thought essentially makes them equivalent. I won’t try to explain the concept here, because it’s subtle and needs enough examples that it could be a blog-post on its own. The thumbnail is that a variable should be immutable; it’s a name for a value, not a name for a box where you might put a value (there’s a small sketch after this list).
- Exceptions are Shared Secrets, covered in 33 (as well as one of Bob’s blog posts), which presents an implementation of exceptions that avoids the typical pitfall of non-compositional handlers by making sure that exception classes are created at runtime. I’m honestly not sure about the general utility of this one, but it does seem like an interesting approach.
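As a minimal sketch of the variables-vs-assignables distinction (my own OCaml example, not Harper’s notation), note how the language keeps the two notions apart:

```ocaml
(* A variable is a name for a value; an assignable is a name for a
   mutable cell that holds a value. *)
let x = 5        (* a variable: just a name for the value 5 *)
(* "x := 6" would be a type error here: x is not a box you can fill *)

let r = ref 5    (* an assignable: a name for a cell containing 5 *)
let () =
  r := !r + 1;   (* mutate the cell's contents *)
  assert (!r = 6);
  assert (x = 5) (* the variable, of course, is untouched *)
```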
As that light sprinkling of topics should show you, this book is a comprehensive treatment of the art and science of programming language theory and design, from a perspective that isn’t exactly mainstream in the industry at the moment. Despite all the bitching and silence, I thoroughly recommend reading it. Especially if you think you couldn’t handle it at the moment. Because it will start you from the very basics of proofs, and take you to the height of building solid systems. Because it will challenge many notions that may have become intuitive, but are nevertheless wrong once you look at them closely. Because it will very probably force your brain through the wringer, and enlighten you as few other tomes can.
For our next book, Bob suggests we pick up Reynolds’ Theories of Programming Languages, which we very well may do. However, for the moment we’re gearing up to start on some interesting papers, right after a short and well-earned rest.
February 7, 2017.
Brought to you by: Scott
Hey everyone!
Friday night we finished off the rest of PFPL 46: Equational Reasoning in System T, digging into the definitions some more and trying to understand how observational and logical equivalence complement each other.
Next week the group will continue with PFPL 47: Equational Reasoning in PCF, otherwise known as “Coming to Grips with Fix”. The time and place will be the same as usual, Gamma Space at 6:30pm on Friday. Unfortunately I won’t be making it for the remainder of the book as I’m away on business, but I will try to keep up with the reading and follow along over Slack.
I really enjoyed the discussion this week and feel like it helped me understand the concepts of observational and logical equivalence. In some sense, they are intuitive in that we want to classify expressions into equivalence classes where expressions that always give the same result can be considered “equal”.
- Observational equivalence expresses this in a top-down kind of way: it says that for all possible programs with holes in them, two expressions are indistinguishable from one another based on the program output.
- Logical equivalence comes from the other direction: we start with some simple equivalences and then build up a full expression. The surprising part is that they coincide!
This led us to the following high-level interpretation of the chapter: observational equivalence is a highly desirable property of a language, but it’s hard to work with. How does one do computations over all possible programs? Logical equivalence at first sight seems less powerful, but is much easier to work with. Since the two happily coincide, we can establish results about observational equivalence while working only with logical equivalence relations.
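To pin the definitions down a little: logical equivalence is defined by induction on the type. Paraphrasing from memory (check the chapter for the official statement), it looks roughly like this:

$$
\begin{aligned}
e \sim e' : \mathtt{nat} \quad &\text{iff} \quad e \text{ and } e' \text{ evaluate to the same numeral}\\
e \sim e' : \tau_1 \to \tau_2 \quad &\text{iff} \quad e\,e_1 \sim e'\,e_1' : \tau_2 \text{ whenever } e_1 \sim e_1' : \tau_1
\end{aligned}
$$

The function case reduces equivalence at a higher type to equivalence at the argument and result types, which is what makes the relation something you can actually work with.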
This somewhat involved and technical work with the equivalences gives us, at the end of the chapter, the seemingly intuitive laws of equality in System T, but now on sound footing.
Induction anyone?
Til next time,
Scott
January 30, 2017.
Brought to you by: Scott
Hey everyone,
Friday night we discussed PFPL 46: Equality in System T. For those just joining in, System T is Gödel’s famous language, which he introduced (via his Dialectica interpretation) as a way to prove the consistency of arithmetic.
Next week we will be moving on to equality in richer climes in PFPL 47: Equality in PCF, meeting at the usual time and place, Friday at 6:30 at Gamma Space.
Equality, like most obvious things, is distressingly complex and difficult to define. I love this example from the opening of Girard’s book, Proofs and Types:
> It is an abuse (and this is not cheap philosophy – it is a concrete question) to say that 27 x 37 equals 999, since if the two things we have were the same then we would never feel the need to state their equality. Concretely we ask a question, 27 x 37, and get an answer, 999.
This observation starts to unpack what it means when you write an equation on a blackboard. What rules are we implicitly applying in our minds to make sense of these lines and squiggles? When confronted with a requirement for clear definition, pinning down precise rules becomes quite a challenge and leads to the types of equational equivalences presented in this chapter.
In the case of functions, we use logical equivalence: when two functions evaluate to the same result on the same inputs, we call them logically equivalent.
The opposing way of looking at this is from the perspective of a program with a hole in it, waiting for an expression. When, for all possible program-plus-holes (holy programs?), two expressions appear to be the same, we call them observationally equivalent.
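To make the hole metaphor concrete, here is a toy OCaml sketch (my own example, not from the book) that treats a context as a function from the candidate expression to an observable result:

```ocaml
(* One "program with a hole": plug in a candidate int -> int and observe. *)
let context f = f 3 + 1

(* Two syntactically different expressions... *)
let double_add x = x + x
let double_mul x = 2 * x

(* ...that this context (and, in fact, any context) cannot tell apart. *)
let () = assert (context double_add = context double_mul)  (* both give 7 *)
```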
It’s remarkable that even in the context of System T, a complete and total language, the task of comparing two expressions seems to transform into something akin to a scientific process, where you have to perform experiments in order to validate your hypotheses.
All for now, happy computing,
Scott
January 23, 2017.
Brought to you by: Scott
# Counting in hex
We all know how to count to 10; why, because we have ten fingers! So passé! Now that our fingers can touch keyboards and numpads have finally fallen out of fashion, we should be counting to 16.
## Naming
One of the first things I realized is that you can’t repurpose the word “ten” to refer to the numerals `10` in base 16; this quickly gets very confusing, as the concept of ten is so deeply ingrained. It’s a lot easier if `10` remains ten, so that if you happen to slip back into base 10 to do a calculation the meanings of words don’t depend on context. This then means we need new names for common sequences that clearly identify that we are in base 16. At first I replaced `t` everywhere with `h` or `hex` or `x`, but it didn’t always work, so eventually I made the transition of `x` to `z`, which works better, is pretty easy to pronounce, is more than a little hilarious, and makes you sound a bit like Bilbo saying how old he is. This swap will give you the following list of terms:
- `10`: zen
- `11`: zeleven
- `12`: zwelve
- the teens become zeens, i.e. thirzeen, fourzeen, up to eezeen, effzeen:
  - `13`: thirzeen
  - `1F`: effzeen
- double digits are twenzy, thirzy, …, eezy, effzy:
  - `31`: thirzy one
  - `3D`: thirzy dee
- `100`: zundred
- `152`: zundred and fifzy two
- `7DF`: seven zundred and deezy eff
- `1,000`: zousand
- `1,000,000`: milzion
- `1,000,000,000`: bilzion
- `1,000,000,000,000`: trilzion
## Multiplying
Now we get to learn our number tables all over again; many of the simple tricks we learn in high school for base ten carry over directly, since they are generic properties of any position-based number representation.
- `10`: zen times anything just shifts the number over and adds a zero, same as multiplying by ten in base ten
- `F`: eff times any single-digit number has a pattern similar to the nine times table in base ten: the first digit counts up, the second digit counts down
- `8`: eight works like five does in base ten, alternating a final digit of 8 or 0
- `4`: gets a nice pattern you don’t get in base ten, the final digit alternating between 0, 4, 8, C
- `2`: has the same even-number pattern as in base ten: a final digit of 0, 2, 4, 6, 8, A, C, E
- `F`: also gets a divisibility rule a bit like the “sum of digits must be divisible by 3” rule (this falls out of modulo arithmetic)
Since I’ve often used hex for bit patterns, I thought multiplying by 2 might be easier if you think of it in base 2, and that does seem to be the case, but it still won’t help you calculate at speed in your head; you need to memorize some tables.
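If you would rather check these patterns than take them on faith, here is a quick OCaml sketch (any language with hex formatting would do) that prints the eff times table:

```ocaml
(* Print the eff times table in hex: watch the first digit count up
   while the second digit counts down (F, 1E, 2D, 3C, ...). *)
let () =
  for i = 1 to 15 do
    Printf.printf "F x %X = %X\n" i (15 * i)
  done
```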
## Common big numbers
- 1024, KB: `400`, four zundred
- 1024^2, MB: `100,000`, zundred zousand
- 1024^3, GB: `40,000,000`, forzy milzion
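The same trick confirms the conversions above; a small OCaml check (this one assumes a 64-bit build so that 1024^3 fits in an int):

```ocaml
(* Hex renderings of the usual powers of 1024. *)
let () =
  Printf.printf "1024   = %X\n" 1024;                  (* 400      *)
  Printf.printf "1024^2 = %X\n" (1024 * 1024);         (* 100000   *)
  Printf.printf "1024^3 = %X\n" (1024 * 1024 * 1024)   (* 40000000 *)
```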
December 12, 2016.
Brought to you by: Scott
Hey everyone,
We were short a couple of regulars but still had enough to make quorum. We started on the modules chapter, PFPL 44, and Huma showed us an example dictionary implementation like the one in the text, using C++ concepts. Very neat!
This was the last meeting of the year, so everybody polish your modules over the holidays and we will continue with chapter 44 again in the new year, Friday Jan 6th, 6:30pm at Gamma Space.
This chapter is all about the modules, and by modules I mean interfaces, and by interfaces I mean one of the hardest things to get right in software. Even though modules may not help you choose a good API for your users, they will give you a language for selectively hiding or exposing implementation details across software interfaces. Signatures are introduced as a new construct, and they describe a module boundary. They can be fully abstract interfaces like in chapter 17, or they can act as partial views into an implementation, called type classes.
For instance, if you want to distribute a sorting function, `sort(keys, values)`, you might worry about:
- how to make it work for different types of keys?
- how to allow users to swap the compare function without hurting efficiency?
- how to keep users from depending on details of your implementation?
The last one we have seen earlier, in chapter 17: we can use type abstraction to hide all of the implementation details behind an abstract interface. Similarly, you could use type abstraction to solve the first worry as well, by making your `sort` function depend on abstract `Key`, `Value`, and `compare` modules. This can work quite well; in fact, the `qsort` function in the C standard library has an interface like this, where you pass in pointers to your data and a custom compare function.
Where this approach can fall short is that, as the implementor of the `sort` function, you can’t make any optimizations based on the type of the keys or the choice of compare function. For sorting performance this can be pretty important, since if you are sorting integers, for instance, it is often better to use a linear-time radix sort than quicksort or something else. To do this you could use type classes, which would let you take advantage of some implementation aspects of the `Key` type and comparison function.
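To make that concrete, here is a small OCaml sketch (my own illustration, not from the book) of the fully abstract approach: the functor sees only what the signature exposes, so callers can swap keys and comparisons, but the implementation cannot specialize on them:

```ocaml
(* The signature is the module boundary: a key type, a compare
   function, and nothing else. *)
module type KEY = sig
  type t
  val compare : t -> t -> int
end

(* A sorting module that works for any KEY; precisely because the
   interface is abstract, it cannot switch to, say, a radix sort
   when the keys happen to be integers. *)
module MakeSort (K : KEY) = struct
  let sort (keys : K.t list) = List.sort K.compare keys
end

(* Instantiating the boundary with a concrete key type. *)
module IntSort = MakeSort (struct
  type t = int
  let compare = Stdlib.compare
end)

let () = assert (IntSort.sort [3; 1; 2] = [1; 2; 3])
```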
Ok all for now, happy holidays, until next time!
Scott
December 5, 2016.
Brought to you by: Scott
Hey everyone!
On Friday, round 2 of PFPL 43 on Singleton and Dependent Kinds went off famously, and next we move on to chapter 44, for realz this time. Meeting at the usual time and place, 6:30pm at Gamma Space on Friday Dec 9th.
We had a longer than usual off-topic welcoming discussion that veered into Acadian history and the social organization of early North American colonies. With the arrival of Leo, though, we swung back on topic, into Singleton Kinds just after the equivalences in 43.2 but before 43.3, where we went through the illustrative examples. We discussed subkinding quite a bit, in particular the relation expressed in 43.3, and I’m not sure we ever landed on a consensus on how best to interpret the meaning of the subkinding relation. We are planning to revisit the question after seeing more applications in the next chapter.
Dependent Kinds at first seemed like a natural extension of the previous section; however, as we discussed them we identified several differences that we hadn’t noticed at first, for instance the covariance and contravariance relationships of the product and function kinds. We were able to put together some simple examples applying the different constructs, but I don’t think we ever hit on one that truly needed the full power of Dependent Kinds, so we will have to keep working on coming up with examples.
Near the end, Ben brought up the paper The Five Stages of Accepting Constructive Mathematics, which is worth taking a look at. As for me, I think I’m out of denial and into bargaining…
Until next time,
Scott
November 29, 2016.
Brought to you by: Scott
Hello fellow Cabalers!
Thanks for an insightful intro to higher kinds on Friday. We were ostensibly reading chapter 43 on Singleton Kinds, but we spent a lot of time on chapter 18 on Higher Kinds as well. The refresher was welcome, since 25 chapters is a long time to wait for an encore.
Next week we continue with chapter 44, which is a big one: type abstractions and type classes. Meeting as usual on Friday at 6:30pm at Bento Miso/Gamma Space.
The terminology and interplay between kinds and types provided us with a vast trove of confusion, but in spite of this I felt like I learned some things. The first was simply why we have kinds in the first place: why not just more types?
This can be motivated by the desire to express a concept independent of the underlying data type: we can employ type constructors to make new types out of old. For example, a list data structure `list<T>` should work for any element type `T`. This is a pretty common concept that people use quite frequently in many languages; however, it raises the question of what `list<>` is before the `T` has been supplied. Since a type constructor is not a type, we need a new concept, hence Kinds.
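For a rough illustration in OCaml terms (my gloss, not the book’s notation): `list` on its own is not a type you can have a value of, but you can still abstract over it at the module level:

```ocaml
(* 'a list is a type; list by itself is a type constructor.  At the
   module level we can abstract over such constructors, which is one
   informal reading of what a higher kind buys you. *)
module type CONTAINER = sig
  type 'a t                         (* a constructor, not a type *)
  val empty : 'a t
  val add : 'a -> 'a t -> 'a t
end

module ListContainer : CONTAINER = struct
  type 'a t = 'a list
  let empty = []
  let add x xs = x :: xs
end

(* Only applying the constructor to an element type yields a type
   we can actually build values of. *)
let _demo = ListContainer.add 2 (ListContainer.add 1 ListContainer.empty)
```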
Type constructors and type abstraction are important for providing polymorphic functions or data structures, like the list above, and also for modularity and packaging of code. Since we saw last week that an API is a type definition selected as a hard boundary between sections of otherwise independent code, type abstraction is important as it lets you supply implementations separate from the API proper.
Singleton Kinds introduced in 43 are a bit harder to motivate, but what they provide us with is a means of giving an identity to a Kind such that we can keep information about it with us to do type-checking even through type abstraction boundaries. The full import of this is still a bit hazy to me, but I think this is important for the packaging and module approaches we will look at in upcoming chapters.
Until next time,
Scott
p.s. It turns out you don’t bind types to type constructors in the same way as variables, so
Ben: You don’t bind it, but kind it
Leo: Be kind, don’t bind
November 21, 2016.
Brought to you by: Scott
Hey all,
Friday night we covered chapter 42 of PFPL on Modularity and Linking, which divided up into several interesting but connected discussions.
Next Friday will be chapter 43 on Singleton Kinds and Subkinding in the usual time and place, 6:30pm at Bento Miso.
Modularity is that thing that every developer says their design has, but in practice modular code can be as elusive as a Jackalope. In this chapter, Harper boils away the unnecessary concepts leaving just the bare essentials for modularity, namely substitution and a type signature for the API. The type signature specifies a clean type-safe boundary between modules and substitution is what lets you put the code back in the right place.
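As a tiny OCaml sketch of that boiled-down picture (the names are mine): the signature is the API, the structure is the implementation, and linking amounts to substituting the one behind the other:

```ocaml
(* The API: a type signature that is the entire module boundary. *)
module type COUNTER = sig
  type t
  val zero : t
  val tick : t -> t
  val read : t -> int
end

(* Sealing the implementation with : COUNTER is the boundary in
   action; clients can use only what the signature grants. *)
module Counter : COUNTER = struct
  type t = int
  let zero = 0
  let tick c = c + 1
  let read c = c
end

let () = assert (Counter.(read (tick (tick zero))) = 2)
```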
Harper uses the metaphor that APIs can then be used a bit like lemmas in a mathematical proof, providing convenient shorthand for an earlier derivation that gets used frequently. This metaphor gets really interesting, though, if you take into consideration the Propositions as Types isomorphism, which means that an API isn’t just like a lemma in mathematics, it is a lemma!
Linking is the important implementation glue that lets you take advantage of all this modularity, and Harper outlines a number of different approaches and some of their strengths and weaknesses. I found the Cardelli paper, [Fragments, Linking, and Modularization](http://lucacardelli.name/Papers/Linking.A4.pdf), to be a good resource to read more in depth about the linking process.
Until next week,
Scott
November 7, 2016.
Brought to you by: Scott
Hello fellow cs enthusiasts!
Don’t worry! Despite the lack of updates, the PFPL readings have been going apace and we are now at Chapter 41 on Distributed Computing.
Next Friday, Nov 11 at 6:30pm at Bento Miso/Gamma Space we will be revisiting chapter 41 on distributed computation and sharing any projects people are working on.
This week saw us explore distributed computation, not to be confused with parallel or concurrent computation. That is really the big idea here: you can have computation at different distributed locations, but it doesn’t necessarily have to be concurrent. This is really interesting because I’m not sure I’ve ever seen this distinction emphasized; usually you see parallel, concurrent, and distributed used almost as synonyms.
For a distributed but not concurrent computation, you could imagine people sitting in a circle who mediate who is speaking by passing around an object: whoever holds the wooden spoon can speak, and everyone else must sit quietly and listen. If we think of each person as a ‘site’ in the distributed computation, then each site would have a single process waiting for the spoon; once it receives the spoon it can say its bit and then pass the spoon along, conveniently leaving no time or processing power for listening, but that’s not what was important here anyway.
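Here is a toy sketch of that spoon protocol (mine, not the chapter’s formal machinery): the computation is spread across sites, yet nothing ever runs concurrently, because control simply moves with the token:

```ocaml
(* Each site may only "speak" while holding the token (the spoon). *)
let sites = [| "Alice"; "Bob"; "Carol" |]

(* Passing the spoon around the circle: sequential transfer of
   control between distributed sites, with no concurrency anywhere. *)
let () =
  let holder = ref 0 in
  for _ = 1 to 6 do
    Printf.printf "%s has the spoon and speaks\n" sites.(!holder);
    holder := (!holder + 1) mod Array.length sites  (* pass it on *)
  done
```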
We still have lots of parts to try and understand, looking forward to discussing it again next week,
Scott