Lecture 13: Concurrency Abstractions

24 February 2010

Lecture 13 (given 22nd February): further concurrency control and abstraction mechanisms provided in Java. Data abstraction: ensuring consistency by using library classes for atomic actions, or thread-safe datatypes such as the concurrent collections. Why these are still not enough. Control abstractions: fine-grained parallelism by decomposing processes into threads, then threads into tasks. Executors in Java. Callables and futures.
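The executor/task pattern from the lecture can be sketched as follows (a minimal example of our own devising, not taken from the slides; the class name SquareSum is hypothetical): tasks are expressed as Callables submitted to an ExecutorService, and Futures collect the results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: decomposing a computation into tasks run by an executor.
public class SquareSum {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<Future<Integer>>();
        for (int i = 1; i <= 5; i++) {
            final int n = i;
            // submit() returns immediately with a Future for the pending result
            results.add(pool.submit(new Callable<Integer>() {
                public Integer call() { return n * n; }
            }));
        }
        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get(); // get() blocks until that task has finished
        }
        pool.shutdown();
        System.out.println(total); // 1 + 4 + 9 + 16 + 25 = 55
    }
}
```

Note that the program never creates a Thread directly: the pool decides how tasks map onto threads, which is exactly the separation of concerns the executor framework provides.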

Link: Slides.

Homework

You should explore Java concurrency mechanisms in more detail, in particular, using some of the facilities provided by the java.util.concurrent packages.

  • Experiment with a program that exhibits a race condition, and make sure you can fix it in the ways described in the lecture slides.
  • Modify the Pigeon Fancier program you wrote earlier to use the executor framework and allow for dynamic reconfiguration of the pigeon coop.
  • Investigate some of the explicit locking facilities provided in java.util.concurrent and make sure you can explain situations when these might be needed.
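As a starting point for the first exercise, here is a hedged sketch (the class RaceDemo and its field names are ours): an unsynchronised counter exhibits a race because `++` is a read-modify-write, while AtomicInteger from java.util.concurrent.atomic is one of the fixes discussed in the lecture.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a lost-update race, and an atomic fix.
public class RaceDemo {
    static int unsafeCount = 0;                        // racy: ++ is not atomic
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 100000; i++) {
                        unsafeCount++;                 // updates may be lost
                        safeCount.incrementAndGet();   // atomic: never loses updates
                    }
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        // unsafeCount is usually less than 400000 on a multicore machine
        System.out.println(safeCount.get());           // always 400000
    }
}
```

Other fixes covered in the slides, such as a synchronized block around the increment or an explicit ReentrantLock, are worth trying on the same program for comparison.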

See the lecture slides for more detail. Please post questions and discussion here and we’ll follow up.


Lecture 12: Assignment review

18 February 2010

Review of the coursework assignment. Everyone had successfully submitted the first stage. We ran through the topic choices with a little bit of discussion on each, and talked about what is left to do for the final stage.
The coursework should include a convincing worked example of the programming language feature, and more investigation of the background and connections of the topic (with proper citations). Strong opinions are encouraged!

Link: Slides and assignment web page.


Lecture 11: Concurrency

15 February 2010

Concurrency: motivations for concurrent code. Derivation of Amdahl’s law, explaining the need to parallelise as much as possible and the opportunity for language design. Challenges with concurrent code, including liveness, fairness and scalability. Threads and processes. Java Thread basics. The need for mutual exclusion and serialisation. Monitors in Java: locks and the synchronized keyword; condition variables with wait() and notify().
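The monitor idiom from the lecture can be illustrated with a one-slot buffer (a sketch of our own, not from the slides): synchronized methods give mutual exclusion on the object's lock, while wait() and notifyAll() provide condition waiting.

```java
// Sketch: a one-slot buffer as a Java monitor.
public class Slot {
    private Object value;   // null means the slot is empty

    public synchronized void put(Object v) throws InterruptedException {
        while (value != null) wait();   // wait until the slot is empty
        value = v;
        notifyAll();                    // wake any thread blocked in take()
    }

    public synchronized Object take() throws InterruptedException {
        while (value == null) wait();   // wait until the slot is full
        Object v = value;
        value = null;
        notifyAll();                    // wake any thread blocked in put()
        return v;
    }

    public static void main(String[] args) throws InterruptedException {
        final Slot slot = new Slot();
        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 1; i <= 3; i++) slot.put(i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();
        for (int i = 0; i < 3; i++) System.out.println(slot.take());
        producer.join();
    }
}
```

Note the `while` (not `if`) around each wait(): a woken thread must re-check its condition, because notifyAll() wakes every waiter and another thread may have claimed the slot first.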

Link: Slides.

Homework

You should read Sun’s tutorial on concurrency to get an introduction to the concurrency facilities provided in Java if you don’t already know them well.
Experimentation is essential: try some of the examples inside NetBeans or Eclipse, and investigate the facilities for debugging concurrent code.

Try to write your own concurrent program from scratch that implements the Pigeon Fancier specification given in the lecture slides. Post questions and discussion here.


Lecture 10: State transformers

11 February 2010

Stateful computation, and how sometimes that’s just exactly what you need. Using the State monad to organise imperative code in a functional language: writing Haskell that looks imperative, but the rest of the program sees as functional.

State makes this possible, but it doesn’t make it happen. Refining this to the ST monad: abstracting the state type, but adding MutVar and MutArray for richer state manipulation. Running stateful code: how a rank-2 polymorphic type means that runST is sure to insulate local state from outside interference, and preserve outside code from the snares of mutable state.
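A small sketch of the idea (our own example; note that GHC's library spells the paper's MutVar as STRef, in Data.STRef): an imperative-looking accumulation, sealed by runST so the rest of the program sees an ordinary pure function.

```haskell
import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- Imperative code, pure interface: the STRef never escapes runST.
sumST :: [Int] -> Int
sumST xs = runST (do
  acc <- newSTRef 0                  -- allocate a mutable variable
  mapM_ (\x -> do                    -- imperative loop over the list
           s <- readSTRef acc
           writeSTRef acc (s + x)) xs
  readSTRef acc)                     -- the final value escapes; the STRef does not

main :: IO ()
main = print (sumST [1..10])
```

The rank-2 type of runST is what rejects any attempt to return the STRef itself: the reference's type is tagged with the state thread, which is out of scope outside the runST call.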

This is good, and raises the bar for modularity: not just separating routines so that neither need depend on the inner workings of the other; but so that they can even independently use distinct programming paradigms. We’ll see more of this multiparadigmatic metaprogramming later in the course.

How the IO monad is really an ST monad, but over a very particular state.

Link: Slides

Homework

Read the following paper, on which the lecture is based:

  • John Launchbury and Simon L Peyton Jones. Lazy functional state threads. In Proceedings of the 1994 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ’94), SIGPLAN Notices 29(6), pages 24–35. ACM Press, June 1994.

Lecture 9: Monads and I/O

8 February 2010

Organising imperative programs within a purely functional language; the identification of monads as a general notion of “computation”. Examples: exceptions, mutable store, input, output, nondeterminism, environment, … Metaprogramming: once you have code encapsulated in a monad, you can do more than just execute it.

Haskell do-syntax as sugar for monadic computation. How the IO monad took over the world of interactive functional programs; and how it can be efficiently implemented by an audacious one-off optimisation. Challenges with monads, and future directions for development.
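The exceptions example can be made concrete with the Maybe monad (safeDiv and calc are our own illustrative names, not from the slides): do-notation sequences the steps, and a Nothing at any step short-circuits the rest of the computation, like a thrown exception.

```haskell
-- Sketch: exceptions as a monad, using Maybe.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing               -- "throw": division by zero fails
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = do
  x <- safeDiv 10 2                 -- succeeds, binding x = 5
  y <- safeDiv x 0                  -- fails: the whole block becomes Nothing
  return (x + y)                    -- never reached

main :: IO ()
main = print calc                   -- prints Nothing
```

Desugared, the do-block is just `safeDiv 10 2 >>= \x -> safeDiv x 0 >>= \y -> return (x + y)`; the Maybe instance of `>>=` is what propagates the failure.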

Link: Slides

Homework

Find an online tutorial or other explanation of monads in programming, and post a link on the blog. Write a comment reviewing whether you found the explanation helpful, or otherwise. Bonus points if the language is not Haskell.

Read the following paper, which introduced the IO monad:

  • Simon L Peyton Jones and Philip Wadler. Imperative Functional Programming. In Conference Record of the Twentieth Annual ACM Symposium on Principles of Programming Languages (POPL ’93), pages 71–84. ACM Press, 1993.

Optional: The final slides from the lecture contain some further references. If you are interested in learning more about programming with monads, try the awkward squad tutorial.