...what McCarthy was saying...look, what we are doing on the computer is simulating models; and the thing we're basically doing wrong is we're letting the CPU tell us what time it is. What we wanna do is: we should simulate time also. We'll use pseudotime. -- Alan Kay ([REBASE24] Interview)
Databases today do a very poor job with time. You know, before we had computers, we had the terms "memory" and "records" and they meant highly durable things. "Memory" meant remembering things, and "records" were things you kept, not things you kept writing over. -- Rich Hickey (The Datomic Architecture and Data Model)
Introduction
What if "time" is an "application-specific" concept? In many software systems, what we call "entities", change over time. How valuable to you is access to the past? Is it valuable to constrain what is and isn't allowed to be "simultaneous"? And if so, would all applications use the same constraints?
Let's explore what it's like to work with software that explicitly models its own sense of "time" - a "pseudotime" - and makes its rules of "simultaneity" explicit.
What is "pseudotime"?
If you google "pseudotime" or even "Alan Kay pseudotime", the search results are awful. But that doesn't mean the concept is too obscure. By zooming out and broadening our concept of "pseudotime", we don't have to limit ourselves to the authoritative, but niche, definitions. That said, let's start with what the Alan Kay quote above refers to.
One early discussion of the concept comes from John McCarthy, the famous inventor of the LISP programming language. According to Kay's retelling, McCarthy attached a "time"/"timestamp" to every "variable" that a robot/software/"advice taker" would reason about. He apparently called this "variable+timestamp" combo a "fluent", but the idea is the same as that of "pseudotime".
The abstract, mathematical sense in which McCarthy was thinking and writing about these things allows "time" to have a range of possible implementations. It doesn't have to be something like 2025-11-05T10:20:50-05:00 (an ISO 8601 timestamp); it could be something like a "version/revision number": 1, 2, 3, 4, 5, .... We can choose to model "time" however it makes sense for our software system.
The important ingredients for modelling time explicitly are:
- different time 'moments' happen one after the other (they are ordered / sortable)
- a piece of application state has one value at time T and can have another value at time T' (the "next" time after T)
- whether 2 or more different pieces of application state are allowed to belong to the same time T is a thoughtful, explicit decision
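To make these ingredients concrete, here's a minimal sketch in TypeScript (the names are made up for illustration) of a piece of application state that carries its own pseudotime as a plain revision counter rather than a wall-clock timestamp:

```typescript
// A hypothetical "versioned value": its pseudotime is just a revision counter.
type Versioned<T> = {
  time: number; // pseudotime: 1, 2, 3, ... (ordered, sortable)
  value: T;
};

// Advancing to the "next" time is an explicit step: the value at time T
// stays what it was; the new value lives at time T + 1.
function advance<T>(current: Versioned<T>, nextValue: T): Versioned<T> {
  return { time: current.time + 1, value: nextValue };
}

// Two pieces of state are "simultaneous" only because we deliberately
// stamped them with the same pseudotime.
const title: Versioned<string> = { time: 3, value: "Draft title, third revision" };
const body: Versioned<string> = { time: 3, value: "Third attempt at the text." };
console.log(title.time === body.time); // true: we *chose* to make them simultaneous

const title4 = advance(title, "Final title");
console.log(title4.time); // 4
```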
Real-world examples
Thinking broadly about pseudotime, simultaneity, and very careful concurrency (no race conditions allowed), let's see some examples of pseudotime already present in real, popular applications, even if they're usually not discussed in such terms.
Git
Most discussions of immutability draw an analogy to the Git version control system, and for good reason. Everyone is familiar with how useful Git's access to past revisions is. At first glance, "time" in Git is regular old "clock" time: commits have "timestamps" of when they were created. But that's not the interesting "time" in the Git experience and workflow. The interesting "time" in Git comes from the main thing we do in Git: we branch, and we add commits "on top" of other commits, each of which may consist of multiple separate files changing together.
So, in a branch with 3 commits (each pointing to the previous one), the answer to "what time is it?" is 3, because there have been 3 commits. "Merge commits" introduce a sense of "simultaneity": when merging 2 branches, the 2 branches are, in some sense, "at the same time" as far as the child commit is concerned. So when we interact with Git, we are explicit about what we do and don't allow to be simultaneous.
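As a rough illustration (this is not how Git actually stores or numbers commits), we could model a commit's pseudotime as one more than the largest pseudotime among its parents, so a merge commit sits "after" both of the branches it joins:

```typescript
// A toy model of commits, not Git's real data structures.
type Commit = {
  id: string;
  parents: Commit[];
  time: number; // pseudotime: how many commit-steps "deep" this commit is
};

function commit(id: string, parents: Commit[]): Commit {
  const time =
    parents.length === 0 ? 1 : Math.max(...parents.map((p) => p.time)) + 1;
  return { id, parents, time };
}

const a = commit("a", []);     // time 1
const b = commit("b", [a]);    // time 2, one branch
const c = commit("c", [a]);    // time 2, a parallel branch
const m = commit("m", [b, c]); // the merge commit: time 3
console.log(m.time);           // 3 -- b and c are "simultaneous" as far as m is concerned
```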
Chess and other games (even real-time ones)
If we see a chess game mostly in the starting position, except for pawns at e4 and e5 and the knight at f3, what time is it? The answer is 3: there have been 3 turns made, and the next turn would make 4 the new "time". I suspect many games and videogames have a notion of a "tick", a "game tick", which denotes "when" the game's state is.
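Here is a small sketch of that "game tick" idea (the names and shapes are invented): inputs may arrive whenever they like, but the game state only changes when the tick counter advances, and everything applied within one tick is, by construction, simultaneous:

```typescript
// A hypothetical tick-driven game: the tick counter is the pseudotime.
type Input = { player: string; action: string };
type GameState = { tick: number; history: Input[][] };

// All inputs gathered since the last tick are applied together,
// so they all belong to the same tick number.
function applyTick(state: GameState, pending: Input[]): GameState {
  return { tick: state.tick + 1, history: [...state.history, pending] };
}

let game: GameState = { tick: 0, history: [] };
game = applyTick(game, [{ player: "white", action: "e4" }]);
game = applyTick(game, [{ player: "black", action: "e5" }]);
game = applyTick(game, [{ player: "white", action: "Nf3" }]);
console.log(game.tick); // 3 -- "the time is 3"
```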
Event Sourcing
Similar to the chess example above: if your application has an event-sourced entity, that entity is guided through its changes by events, which form an ever-growing list. The size of the list is the "time". I must be brief with the Event Sourcing example, because that topic alone is too rich; one day, I'll write more about event sourcing specifically.
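As a quick sketch (the entity and event names here are invented, not a claim about any particular event sourcing library): the entity's current state is a fold over its event list, and the length of that list, its "version", serves as the pseudotime:

```typescript
// A hypothetical event-sourced bank account.
type Event =
  | { kind: "Opened"; owner: string }
  | { kind: "Deposited"; amount: number }
  | { kind: "Withdrawn"; amount: number };

type Account = { owner: string; balance: number; version: number };

// Applying one event advances the entity's version -- its pseudotime -- by one.
function apply(acc: Account, e: Event): Account {
  const next = { ...acc, version: acc.version + 1 };
  if (e.kind === "Opened") return { ...next, owner: e.owner };
  if (e.kind === "Deposited") return { ...next, balance: acc.balance + e.amount };
  return { ...next, balance: acc.balance - e.amount }; // "Withdrawn"
}

// Rebuilding ("replaying") the entity is a fold over the ever-growing event list.
function replay(events: Event[]): Account {
  return events.reduce(apply, { owner: "", balance: 0, version: 0 });
}

const events: Event[] = [
  { kind: "Opened", owner: "Ada" },
  { kind: "Deposited", amount: 100 },
  { kind: "Withdrawn", amount: 30 },
];
console.log(replay(events)); // { owner: "Ada", balance: 70, version: 3 }
```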
Fancy pseudotimes
What I like about the above examples of "pseudotime" is that they are accessible and easy for a lay programmer such as myself to reason about. I have to mention, though, that if we trace the lineage of this concept through the influences Alan Kay cites, there are discussions of "modelling time" that are far more advanced, but are kind of "hidden" from us day-to-day programmers:
- Perhaps the most impactful application of the "pseudotime" concept is the 1978 PhD thesis of David P. Reed, where the concept makes its appearance in chapter 3 ("Pseudo-Time and Possibilities"). The thesis is commonly cited as an overwhelming influence on MVCC (the concurrency control mechanism in the popular Postgres database and many others). It might even be the case that the "two-phase commit" protocol takes some influence from this "NAMOS" thesis.
- Kay also cites the "Tea Time" protocol as having pseudotime at its core. Not surprising, given that David P. Reed and Kay himself reportedly contributed to the protocol's design and implementation.
- Leslie Lamport is also a figure in the pseudotime neighbourhood. Among numerous contributions to the field of concurrency and distributed systems, Lamport created or advanced concepts such as "Lamport timestamps", "vector clocks", etc. Clearly he's into the "modelling time" idea.
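To give a flavour of that last item, here is a tiny sketch of a Lamport-style logical clock (a simplification for illustration, not a faithful transcription of Lamport's paper): each participant keeps a counter, ticks it on local events and message sends, and on receiving a message jumps to max(own, received) + 1:

```typescript
// A minimal Lamport-style logical clock (simplified illustration).
class LamportClock {
  private time = 0;

  // A local event, or stamping an outgoing message: just tick.
  tick(): number {
    this.time += 1;
    return this.time;
  }

  // Receiving a message stamped with the sender's clock:
  // jump past both our own time and the sender's.
  receive(senderTime: number): number {
    this.time = Math.max(this.time, senderTime) + 1;
    return this.time;
  }
}

const alice = new LamportClock();
const bob = new LamportClock();

const stamp = alice.tick();      // Alice sends a message: her time is 1
bob.tick();                      // Bob does something unrelated: his time is 1
console.log(bob.receive(stamp)); // Bob receives Alice's message: his time becomes 2
```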
These "fancy" pseudotime ideas are deeply embedded in the tech we build our applications on. In the databases (distributed or not), in concurrency mechanisms (e.g software-transactional memory), and probably other areas.
While these computer science giants are far superior masters of the pseudotime concept, and apply it to massive effect in the platforms we build on, I have a hunch that there are pseudotime techniques with a place higher up, at the "application level" where I work.
Conclusion
I hope I've shown above that "pseudotime" doesn't have to be a niche idea. With reverence to the computer science legends who created and developed this concept, I'd love to intrigue you with my quest to harness the power of applying various, relatively basic "poor man's" implementations of pseudotime to very practical software applications.
In the next post, we'll discuss why pseudotime is such a useful concept for both development methods and "business value". In future instalments, I'll present a few examples of how to apply the pseudotime technique to typical web applications.