Toronto-based reading group that meets weekly to read computer science books and papers.
Currently reading: The Little Typer
Cabal News and Meeting Recaps
On Nov. 14, John gave a talk on lessons learned from the Reasoned Schemer, here’s the recording:
Since our last update, the group has finished reading The Seasoned Schemer as well as The Reasoned Schemer, which we attempted (successfully, but occasionally painfully) to do in Clojure using core.logic.
On Nov. 14, John will be sharing a talk on lessons learned from The Reasoned Schemer, so if you're around Toronto, come hang out! Unfortunately this will be an in-person event only. Guild link here
Starting this Thursday (Oct 5 2023) we'll be back on Zoom (and DrRacket) reading The Little Typer. If you're interested in joining, join via the guild page here.
TL;DR: We’re still alive, we read lots of stuff, and we’re starting The Seasoned Schemer soon.
We still exist! After finding out that a few of our friends only found us because of our web presence, I figured it was time to update the site with all the stuff we’ve been up to in the last 5 years.
Myself and a new group of folks who weren't around for the original reading of SICP decided to spin off a group on Thursdays and work our way through it, and we kept going after that book was done. We alternated between books/courses and shorter papers to give ourselves a break between longer readings; some highlights are listed below:
In a bit of a departure from our usual CS or CS-adjacent topics, we spent a few months going through a big chunk of the MIT Underactuated Robotics course.
In the meantime, the Friday version of the reading group was still going strong, reading papers, Paradigms of AI Programming by Norvig, and plenty of other stuff I can't remember (I'll try to get someone to post a second update, but some of it is listed here). The two groups joined forces around this time and started working through PLFA, possibly the densest book we've attempted to date?
After those, we jumped back into some papers:
We then jumped into The Little Schemer with plans to read through the entire “Little” series with some breaks for more papers in between. We’ve just finished up that book and are working our way through another block of papers before we start The Seasoned Schemer:
Today we read Adding Interactive Visual Syntax to Textual Code by Anderson et al., and next week we have Go Meta! A Case for Generative Programming and DSLs in Performance Critical Systems by Rompf et al. lined up.
Then we’ll be onto The Seasoned Schemer! If this seems interesting to you and you’re in the GTA (or not, we’ve been meeting remotely since COVID started), feel free to reach out. Hopefully it won’t be 5 years until the next update - we’ll try to post some more thoughts on what we’re reading instead of just dumping a long list of books and papers in the future.
Erik
This entry has been shamelessly cross-posted from langnostic.
For the past year or so, the Toronto Comp Sci Cabal has been reading a book called Practical Foundations for Programming Languages (PFPL). The sheer idea density in this thing was giving me enough fodder to chew over and blog about for something like a month after each reading. The trouble is, we met to read and discuss the next section about once a week, so chewing and blogging time got very seriously truncated on a consistent basis. The end result will eventually be a series of posts, each focusing on trying to understand a single topic we covered, but I won't embark on that journey today.
This past Friday, we held the concluding meeting of the reading of that book. It was pretty awesome. We got the entire group back together, and were joined by Bob Harper by video. Even though we’d met him before, it was nice to talk to him once we’d finished reading his work.
The conversation ranged from type theory, to language design, to the current state of programming as a discipline. The definite highlight for me was finally getting to thank him for explaining some of the things that get vaguely and inconclusively discussed in the comment sections of his blog. The big ones are
Those are highlights, but they're not even remotely the only ideas worth exploring in the book. In Harper's own mind, some¹ of the highlights of the tome are
As that light sprinkling of topics should show you, this book is a comprehensive treatment of the art and science of programming language theory and design, from a perspective that isn’t exactly mainstream in the industry at the moment. Despite all the bitching and silence, I thoroughly recommend reading it. Especially if you think you couldn’t handle it at the moment. Because it will start you from the very basics of proofs, and take you to the height of building solid systems. Because it will challenge many notions that may have become intuitive, but are nevertheless wrong once you look at them closely. Because it will very probably force your brain through the wringer, and enlighten you as few other tomes can.
For our next book, Bob suggests we pick up Reynolds’ Theories of Programming Languages, which we very well may do. However, for the moment we’re gearing up to start on some interesting papers, right after a short and well-earned rest.
¹ He also mentioned that he half considered re-writing the whole thing on the basis of Concurrency and Dynamic Classification, which are apparently the two features you need in order to implement the rest of everything laid out in the book. I haven't tried to do this, but it does seem plausible, though difficult.
Hey everyone! Friday night we finished off the rest of PFPL 46: Equational Reasoning in System T, digging into the definitions some more and trying to understand how observational and logical equivalence complement each other.
Next week the group will continue with PFPL 47: Equational Reasoning in PCF, otherwise known as "Coming to Grips with Fix". The time and place will be the same as usual: Gamma Space at 6:30pm on Friday. Unfortunately I won't be making it for the remainder of the book, as I'm away on business, but I'll be trying to keep up on the reading and over Slack.
I really enjoyed the discussion this week and feel like it helped me understand the concepts of observational and logical equivalence. In some sense, they are intuitive in that we want to classify expressions into equivalence classes where expressions that always give the same result can be considered “equal”.
This led us to the following high-level interpretation of the chapter: observational equivalence is a highly desirable property of a language, but it's hard to work with (how does one do computations over all possible programs?). Logical equivalence at first sight seems less powerful, but is much easier to work with. Since the two happily coincide, we can establish results about observational equivalence while working only with logical equivalence relations.
This somewhat involved and technical work with the equivalences gives us, at the end of the chapter, our seemingly intuitive laws of equality in System T, but on a sound surfboard footing.
Induction anyone? 'Til next time, Scott
Hey everyone,
Friday night we discussed PFPL 46: Equality in System T. For those just joining in, System T is Gödel's famous language, which he introduced to prove the consistency of arithmetic, the subject of his yet more famous Incompleteness Theorems.
Next week we will be moving on to equality in richer climes in PFPL 47: Equality in PCF, meeting at the usual time and place, Friday at 6:30 at Gamma Space.
Equality, like most obvious things, is distressingly complex and difficult to define. I love this example from the opening of Girard's book, Proofs and Types:
> It is an abuse (and this is not cheap philosophy – it is a concrete question) to say that 27 x 37 equals 999, since if the two things we have were the same then we would never feel the need to state their equality. Concretely we ask a question, 27 x 37, and get an answer, 999.
This observation starts to unpack what it means when you write an equation on a blackboard. What rules are we implicitly applying in our minds to make sense of these lines and squiggles? When confronted with a requirement for clear definition, pinning down precise rules becomes quite a challenge and leads to the types of equational equivalences presented in this chapter.
In the case of functions, we use logical equivalence: when two functions evaluate to the same result on the same inputs, we call them logically equivalent. The opposing way of looking at this is from the perspective of a program with a hole in it, waiting for an expression. When two expressions appear to be the same under all possible program-plus-holes (holy programs?), we call them observationally equivalent.
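To make that concrete, here is a toy sketch in OCaml (my own example, not PFPL's formal apparatus; all the names are invented) of two expressions that are logically equivalent, plus one "program with a hole" observing them:

```ocaml
(* Two syntactically different functions that agree on every input:
   logically equivalent at type int -> int. *)
let double_a n = n + n
let double_b n = 2 * n

(* Logical equivalence at a base type is just equality of results. *)
let agree_on n = double_a n = double_b n

(* A "program with a hole" can be modelled as a function from the
   expression to an observable answer; observational equivalence says
   no such context can tell the two apart. *)
let context hole = hole 21 + 1

let () =
  assert (agree_on 21);
  assert (context double_a = context double_b)  (* both yield 43 *)
```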
It's remarkable that even in System T, a complete and total language, comparing two expressions turns into something akin to a scientific process, where you have to perform experiments in order to validate your hypotheses.
All for now, happy computing, Scott
# Counting in hex
We all know how to count to 10. Why? Because we have ten fingers! So passé! Now that our fingers can touch keyboards, and numpads have finally fallen out of fashion, we should be counting to 16.
## Naming
One of the first things I realized is that you can't repurpose the word "ten" to refer to the numeral `10` in base 16; this quickly gets very confusing, as the concept of ten is so deeply ingrained. It's a lot easier if `10` remains ten, so that if you happen to slip back into base 10 to do a calculation, the meanings of words don't depend on context. This means we need new names for common sequences that clearly identify that we are in base 16. At first I replaced `t` everywhere with `h` or `hex` or `x`, but it didn't always work, so eventually I switched from `x` to `z`, which works better, is pretty easy to pronounce, is more than a little hilarious, and makes you sound a bit like Bilbo saying how old he is. This swap will give you the following list of terms:
- `10`: zen
- `11`: zeleven
- `12`: zwelve
- `13`: thirzeen
- `1F`: effzeen
- `31`: thirzy one
- `3D`: thirzy dee
- `100`: zundred
- `152`: zundred and fifzy two
- `7DF`: seven zundred and deezy eff
- `1,000`: zousand
- `1,000,000`: milzion
- `1,000,000,000`: bilzion
- `1,000,000,000,000`: trilzion

Now we get to learn our number tables all over again. Many of the simple tricks we learn in high school for base ten carry over directly, since they are generic properties of any positional number representation.
- `10`: zen times anything just shifts the number over and adds a zero, same as multiplying by ten in base ten
- `F`: eff times any single digit number has a similar sequence to the nine times table in base ten; the first digit counts up, the second digit counts down
- `8`: eight works like five does in base 10, alternating final digit of 8 or 0
- `4`: gets a nice pattern you don't get in base ten, final digit alternating between 0, 4, 8, C
- `2`: has the same even number pattern as in base ten: final digit of 0, 2, 4, 6, 8, A, C, E
- `F`: gets a rule a bit like the sum-of-digits-must-be-divisible-by-3 rule in base ten (this falls out of modulo arithmetic)

Since I've often used hex for bit patterns, I thought multiplying by 2 might be easier if you think of it in base 2, and that does seem to be the case: `E,25F * 2 = 1C,4BE`. But it still won't help you calculate at speed in your head; you need to memorize some tables.
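If you want to play along, here is a throwaway OCaml snippet (mine, not from the original post) that prints the eff times table and checks the doubling example above:

```ocaml
(* Print the eff (F) times table: watch the first hex digit count up
   while the second counts down, just like the nines in base ten. *)
let () =
  for n = 1 to 15 do
    Printf.printf "F x %X = %02X\n" n (15 * n)
  done;
  (* And the doubling check from above: *)
  Printf.printf "E25F x 2 = %X\n" (0xE25F * 2)  (* prints 1C4BE *)
```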
## Common big numbers
- `400`: four zundred
- `100,000`: zundred zousand
- `40,000,000`: forzy milzion

Hey everyone! We were short a couple of regulars but still had enough to make quorum. We started on the modules chapter, PFPL 44, and Huma showed us an example dictionary implementation like the one in the text, using C++ concepts. Very neat!
This was the last meeting of the year, so everybody polish your modules over the holidays and we will continue with chapter 44 again in the new year, Friday Jan 6th, 6:30pm at Gamma Space.
This chapter is all about the modules, and by modules I mean interfaces, and by interfaces I mean one of the hardest things to get right in software. Even though modules may not help you choose a good API for your users, they will give you a language for selectively hiding or exposing implementation details across software interfaces. `Signatures` are introduced as a new construct; they describe a module boundary. They can be fully abstract interfaces, like in chapter 17, or they can act as partial views into an implementation, called type classes.
For instance, if you want to distribute a sorting function `sort(keys, values)`, you might worry about:
The last one we have seen earlier, in chapter 17: we can use type abstraction to hide all of the implementation details behind an abstract interface. Similarly, you could use type abstraction to address the first worry as well, by making your `sort` function depend on abstract `Key`, `Value`, and `compare` modules. This can work quite well; in fact the `qsort` function in the C standard library has an interface like this, where you pass in pointers to your data and a custom compare function.
Where this approach can fall short is that, as the implementor of the `sort` function, you can't make any optimizations based on the type of the keys or the choice of compare function. For sorting performance this can be pretty important: if you are sorting integers, for instance, it is often better to use a linear-time radix sort than quicksort or something else. To do this you could use type classes, which would let you take advantage of some implementation aspects of the `Key` type and comparison function.
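As a rough sketch of the difference (my own OCaml analogy; the module and function names are invented, and PFPL's formal account is in the chapter itself): a fully abstract signature hides the key type entirely, while a type-class-style signature reveals enough of it for the implementation to specialize.

```ocaml
(* Fully abstract, qsort-style: the sort knows nothing about keys
   beyond the comparison function. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module MakeSort (K : ORDERED) = struct
  (* Works for any key type, but can't pick a specialized algorithm. *)
  let sort (keys : K.t list) : K.t list = List.sort K.compare keys
end

(* Type-class style: a partial view revealing that keys are ints,
   so the implementation is free to switch to, say, radix sort. *)
module type INT_ORDERED = ORDERED with type t = int

module IntSort (K : INT_ORDERED) = struct
  let sort (keys : K.t list) : K.t list =
    (* a real implementation could dispatch to a linear-time radix
       sort here, since it knows K.t = int *)
    List.sort K.compare keys
end
```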
Ok all for now, happy holidays, until next time! Scott
Hey everyone! On Friday, round 2 of PFPL 43 on Singleton and Dependent Kinds went off famously, and next we move on to chapter 44, for realz this time. Meeting at the usual time and place, 6:30pm at Gamma Space on Friday Dec 9th.
We had a longer than usual off-topic welcoming discussion that veered into Acadian history and the social organization of early North American colonies. With the arrival of Leo, though, we swung back on topic, into Singleton Kinds just after the equivalences of 43.2 but before 43.3, where we went through the illustrative examples. We discussed subkinding quite a bit, in particular the relation expressed in 43.3, and I'm not sure we ever reached a consensus on how best to interpret the meaning of the subkinding relation. We are planning to revisit the question after seeing more applications in the next chapter.
Dependent Kinds at first seemed like a natural extension of the previous section; however, as we discussed them we identified several differences that we hadn't noticed at first, for instance the covariance and contravariance relationships of the product and function kinds. We were able to put together some simple examples applying the different constructs, but I don't think we ever hit on one that truly needed the full power of Dependent Kinds, so we will have to keep working on coming up with examples.
Near the end, Ben brought up the paper The Five Stages of Accepting Constructive Mathematics, which is worth taking a look at; as for me, I think I'm out of denial and into bargaining…
Until next time, Scott
Hello fellow Cabalers!
Thanks for an insightful intro to higher kinds on Friday, we were ostensibly reading chapter 43 on Singleton Kinds, but we spent a lot of time on Chapter 18 on Higher Kinds as well. The refresher was welcome since 25 chapters is a long time to wait for an encore.
Next week we continue with chapter 44, which is a big one, type abstractions and type classes. Meeting as usual on Friday at 6:30pm at Bento Miso/Gamma Space.
The terminology and interplay between kinds and types provided us with a vast trove of confusion, but in spite of this I felt like I learned some things. The first was simply why we have these kinds in the first place: why not just more types?
This can be motivated by the desire to express a concept independently of the underlying data type: we can employ type constructors to make new types out of old, e.g. a list data structure `list<T>` should work for any element type `T`. This is a pretty common concept that people use quite frequently in many languages, but it raises the question: what is `list<>` before the `T` has been supplied? Since a type constructor is not a type, we need a new concept, hence kinds.
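A rough illustration (my own OCaml sketch, not from the book; OCaml writes `'a t` where the text above writes `list<T>`): a type constructor lives at kind Type -> Type, and we can abstract over one with a signature.

```ocaml
(* 'a t below is a type constructor: not a type by itself, but a
   function from types to types, i.e. of kind Type -> Type. *)
module type MAPPABLE = sig
  type 'a t
  val map : ('a -> 'b) -> 'a t -> 'b t
end

(* list is one inhabitant of that kind. *)
module ListMappable : MAPPABLE with type 'a t = 'a list = struct
  type 'a t = 'a list
  let map = List.map
end
```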
Type constructors and type abstraction are important for providing polymorphic functions or data structures, like the list above, and also for the modularity and packaging of code. We saw last week that an API is a type definition selected as a hard boundary between sections of otherwise independent code; type abstraction matters because it lets you supply implementations separately from the API proper.
Singleton Kinds, introduced in chapter 43, are a bit harder to motivate, but what they provide is a means of giving an identity to a kind, such that we can keep information about it with us to do type-checking even through type abstraction boundaries. The full import of this is still a bit hazy to me, but I think this is important for the packaging and module approaches we will look at in upcoming chapters.
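One way I find helpful to think about it (a hedged analogy of mine, not the book's formalism): singleton kinds play a role like manifest type definitions in ML signatures, where an abstract `type t` forgets the identity of the type and `type t = int` remembers it across the abstraction boundary.

```ocaml
(* Abstract: like kind Type, the identity of t is forgotten at the
   module boundary, so clients can't use C.t as an int. *)
module type COUNTER = sig
  type t
  val zero : t
  val incr : t -> t
end

(* Manifest: like the singleton kind S(int), the signature records
   exactly which type t is, so the identity survives abstraction. *)
module type INT_COUNTER = COUNTER with type t = int

module C : INT_COUNTER = struct
  type t = int
  let zero = 0
  let incr n = n + 1
end

(* The type checker still knows C.t = int: *)
let n : int = C.incr C.zero
```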
Until next time, Scott
p.s. It turns out you don’t bind types to type constructors in the same way as variables, so
Ben: You don’t bind it, but kind it
Leo: Be kind, don’t bind