Tackling the Awkward Squad for Reactive Programming

https://2020.ecoop.org/details/ecoop-2020-papers/19/Tackling-the-Awkward-Squad-for-Reactive-Programming-The-Actor-Reactor-Model

Sam Van den Vonder, Thierry Renaux, Bjarno Oeyen, Joeri De Koster, Wolfgang De Meuter

Reactive programming is a programming paradigm whereby programs are internally represented by a dependency graph, which is used to automatically (re)compute parts of a program whenever its input changes. In practice, reactive programming can only be used for some parts of an application: a reactive program is usually embedded in an application that is still written in ordinary imperative languages such as JavaScript or Scala. In this paper we investigate this embedding and distill “the awkward squad for reactive programming” as 3 concerns that are essential for real-world software development, but that do not fit within reactive programming. They are related to long-lasting computations, side-effects, and the coordination between imperative and reactive code. To solve these issues we design a new programming model called the Actor-Reactor Model, in which programs are split up into a number of actors and reactors. Actors and reactors enforce a strict separation of imperative and reactive code, and they can be composed via a number of composition operators that make use of data streams. We demonstrate the model via our own implementation in a language called Stella.
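
For readers new to the paradigm, the "recompute when input changes" behaviour the abstract refers to can be illustrated with a tiny, hypothetical dependency cell. This is a generic Haskell sketch of that core idea only; it is not Stella and not the paper's Actor-Reactor Model.

```haskell
import Data.IORef

-- A cell holds a current value plus the dependent computations that must
-- be re-run whenever that value changes.
data Cell a = Cell { value :: IORef a, deps :: IORef [a -> IO ()] }

newCell :: a -> IO (Cell a)
newCell x = Cell <$> newIORef x <*> newIORef []

-- Register a dependent computation; it runs once now and again on every update.
dependOn :: Cell a -> (a -> IO ()) -> IO ()
dependOn c f = do
  modifyIORef (deps c) (f :)
  readIORef (value c) >>= f

-- Updating the input automatically recomputes all dependents.
setCell :: Cell a -> a -> IO ()
setCell c x = do
  writeIORef (value c) x
  readIORef (deps c) >>= mapM_ ($ x)

main :: IO ()
main = do
  temperature <- newCell (20 :: Int)
  dependOn temperature (\t -> putStrLn ("display shows " ++ show t))
  setCell temperature 25   -- the display line is re-run automatically
```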

Transaction Machines

My solution to the awkward squad is the 'transaction machine'.

Transaction Machine: A deterministic transaction is evaluated repeatedly, in a non-deterministic environment.

It's a simple idea. A significant motive for its design is to simplify live coding. It's also good for asynchronous effects (e.g. mailboxes, channels). And conveniently, it implicitly supports process control and reactive programming:

When a deterministic transaction is unproductive, it will (deterministically) always be unproductive until the observed input changes. Thus, the system can arrange for the transaction to wait for a relevant change. Aborted transactions, read-only transactions, and transactions that perform only idempotent writes are all unproductive.
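
A minimal sketch of this behaviour in terms of Haskell's STM (my illustration, not a full transaction machine): `retry` marks a run as unproductive, and the runtime re-evaluates the transaction only after one of the variables it read has been changed.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Control.Monad (forever)

-- One deterministic transaction, evaluated repeatedly. When it is
-- unproductive it calls `retry`; the STM runtime then blocks it until a
-- TVar it read is written by someone else.
machine :: TVar Int -> TVar Int -> IO ()
machine input output = forever . atomically $ do
  i <- readTVar input
  o <- readTVar output
  if i == o
    then retry               -- unproductive: wait for the input to change
    else writeTVar output i  -- productive: propagate the new value

main :: IO ()
main = do
  input  <- newTVarIO (0 :: Int)
  output <- newTVarIO 0
  _ <- forkIO (machine input output)
  atomically (writeTVar input 42)  -- wakes the machine
  threadDelay 100000               -- give it time to run
  readTVarIO output >>= print      -- prints 42
```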

Large transactions are problematic: they observe more state, so they conflict more often and are re-run in full whenever any of that state changes. This is solved by introducing a `fork [1,2,3]` effect that replicates the transaction, returning a different value from the list to each replica. Logically, this is equivalent to a non-deterministic response from the environment: for isolated transactions, repetition and replication are logically indistinguishable.
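
To make the "replication is just non-deterministic choice" point concrete, here is a toy model (my own reading, not a specification of the effect) where `fork` is nondeterministic choice in Haskell's list monad; each list element yields one replica of the rest of the computation.

```haskell
-- Toy model: a replicated transaction body as a computation in the list
-- monad. `fork` contributes one replica per element of its argument.
fork :: [a] -> [a]
fork = id

replicas :: [String]
replicas = do
  i <- fork [1, 2, 3 :: Int]        -- three replicas, one per value
  pure ("replica handling value " ++ show i)

main :: IO ()
main = mapM_ putStrLn replicas
-- replica handling value 1
-- replica handling value 2
-- replica handling value 3
```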

Schedulers can deal much more easily with repeating transactions than with ad-hoc one-off transactions. For example, when two transactions are empirically observed to conflict frequently, they can be arranged to run sequentially. This makes ensuring fairness and progress simpler.
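
As a sketch of that scheduling heuristic (hypothetical names and threshold, not from the comment above): track how often each pair of machines conflicts and serialize a pair once its count passes a threshold.

```haskell
import qualified Data.Map.Strict as Map

type MachineId = Int

-- Conflict counts per unordered pair of machines.
type Conflicts = Map.Map (MachineId, MachineId) Int

recordConflict :: MachineId -> MachineId -> Conflicts -> Conflicts
recordConflict a b = Map.insertWith (+) (min a b, max a b) 1

-- Hypothetical policy: after 3 observed conflicts, stop running the pair
-- concurrently and schedule them back-to-back instead.
shouldSerialize :: Conflicts -> MachineId -> MachineId -> Bool
shouldSerialize cs a b = Map.findWithDefault 0 (min a b, max a b) cs >= 3

main :: IO ()
main = do
  let cs = foldr (uncurry recordConflict) Map.empty
             [(1, 2), (2, 1), (1, 2), (1, 3)]
  print (shouldSerialize cs 1 2)  -- True: three conflicts recorded
  print (shouldSerialize cs 1 3)  -- False: only one conflict
```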

Transaction machines seem much simpler and more robust than the solution described in the paper. However, the disadvantages would be the inability to express synchronous external effects and less precise control over performance.