Lecture 1: What’s So Important about Language?

Introduction to the course; review of some programming languages; how languages shape programs; what programming languages might do for us; some topics to be covered in the course.

Link: Slides; Sheet.

Homework

The next lecture is at 10am on Friday. It’s about programming for concurrency. Before then:

  1. Read the Wikipedia article on History of programming languages. (If you find it’s missing something, fix that.)
  2. Pick a programming language, and find out what support (if any) it offers for concurrency. Then post a brief comment here describing what you have found out. Try to avoid duplication: no more than one language each, and leave some for others.
  3. Find out about the Blub Paradox. Post citations here.

To leave comments on the blog you will need to log in: use the link in the “META” box at bottom right. The system uses your Informatics username and password. Once logged in, you can edit your profile to change how your name appears.

11 Responses to Lecture 1: What’s So Important about Language?

  1. s0788616 says:

    Concurrent programming in Java is mostly concerned with threads. From the programmer’s point of view, at the start there is only one thread (the main thread), which can then create other threads. Threads have to be properly synchronized so that they do not interfere with each other (http://en.wikipedia.org/wiki/Java_concurrency). Java has built-in constructs to support this coordination, the most important of which is the monitor. Monitors make it possible for an object to be accessed safely by more than one thread, because at most one thread at a time can execute the monitor’s methods. Other mechanisms to support concurrency include interrupts and locks. An interrupt is an indication to a thread that it should stop what it is doing and do something else instead (http://download.oracle.com/javase/tutorial/essential/concurrency/index.html). Locks are used both for enforcing exclusive access to an object’s state and for establishing happens-before relationships. There are also higher-level building blocks to support larger concurrent applications. These include executors, concurrent collections and atomic variables.

    • s0788616 says:

      This article on the Blub paradox seemed useful: http://c2.com/cgi/wiki?BlubParadox

      • Ian Stark says:

        Lots of discussion there, although quite a lot about Paul Graham personally rather than the paradox. Do you think that the “Blub” situation really arises?

        • s0788616 says:

          The discussion is a bit off-topic, but quite amusing nevertheless. Personally, I think some people rather shortsightedly tend to disregard languages they are not familiar with. Still, I tend to disagree with Paul Graham’s statement, and I think languages generally cannot be partially ordered: it seems to me that one language may be better than another for a specific domain, but not for all domains.

  2. s1019422 says:

    As a beginner in Haskell, I have found that its basic concurrency facilities are primitive but powerful.
    Haskell uses forkIO to spawn an independent thread: it takes an IO action and runs it in a new thread.
    Prelude Control.Concurrent> :t forkIO
    forkIO :: IO () -> IO ThreadId

    For synchronisation between threads, Haskell provides the synchronising variable type MVar a.
    It’s a location that can either be empty or hold a value of type a.
    Prelude Control.Concurrent> :t putMVar
    putMVar :: MVar a -> a -> IO ()
    Prelude Control.Concurrent> :t takeMVar
    takeMVar :: MVar a -> IO a

    If a running thread calls takeMVar in an attempt to take a value from an empty MVar, the thread will be blocked, i.e. put to sleep.
    Likewise, if it calls putMVar to fill an MVar that is already full, it will be put to sleep as well.
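
    For example, here is a minimal runnable sketch (assuming GHC’s Control.Concurrent; the delay and message are just for illustration) showing the blocking behaviour: the main thread’s takeMVar sleeps until the forked thread fills the MVar.

    import Control.Concurrent

    main :: IO ()
    main = do
      box <- newEmptyMVar
      _ <- forkIO $ do
        threadDelay 1000000                  -- pretend to do a second's work
        putMVar box "hello from the child"   -- fill the MVar
      msg <- takeMVar box                    -- blocks until the child calls putMVar
      putStrLn msg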

    1. “Real World Haskell”, Chapter 24 is devoted to concurrent programming.
    2. http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/ offers a very clear explanation.

    • Ian Stark says:

      OK. What happens if two threads try to write to an MVar at the same time?

      • s1019422 says:

        I need to amend what I wrote before:
        threads must update (take & put) an MVar in this order:

        do
          v <- takeMVar count      -- other threads calling takeMVar block here
          -- ... operations ...
          putMVar count (v + 1)    -- one of the waiting threads can now continue

        Updating in this manner guarantees that each update of the MVar is indivisible.
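
        For example, here is a runnable sketch (assuming GHC’s Control.Concurrent; the thread and loop counts are arbitrary) in which ten threads update the same counter this way. Because takeMVar leaves the MVar empty until the matching putMVar, the increments never interleave, and the program always prints 10000:

        import Control.Concurrent
        import Control.Monad (forM_, replicateM_)

        main :: IO ()
        main = do
          count <- newMVar (0 :: Int)
          done  <- newEmptyMVar
          forM_ [1 .. 10 :: Int] $ \_ -> forkIO $ do
            replicateM_ 1000 $ do
              v <- takeMVar count        -- other threads block here
              putMVar count (v + 1)      -- refill; a blocked thread may continue
            putMVar done ()              -- signal that this thread has finished
          replicateM_ 10 (takeMVar done) -- wait for all ten threads
          readMVar count >>= print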

  3. Ian Stark says:

    Yes, that’s how you can use an MVar for atomic update, or to write a monitor. But you can use MVars in other ways too: to queue up jobs for a single worker, or distribute them among many.
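
    For instance, here is a small sketch (assuming GHC’s Control.Concurrent; the job and worker counts are just for illustration) in which one MVar acts as a one-slot job queue shared by two workers, with a second MVar carrying results back:

    import Control.Concurrent
    import Control.Monad (forM_, forever, replicateM_)

    main :: IO ()
    main = do
      jobs    <- newEmptyMVar
      results <- newEmptyMVar
      -- two workers compete to take jobs from the same MVar
      forM_ [1, 2 :: Int] $ \w -> forkIO $ forever $ do
        n <- takeMVar jobs
        putMVar results (w, n * n)
      -- a producer thread feeds ten jobs into the queue
      _ <- forkIO (forM_ [1 .. 10 :: Int] (putMVar jobs))
      -- the main thread collects and prints the ten results
      replicateM_ 10 (takeMVar results >>= print)

    Whichever worker happens to be free takes the next job, so the work is shared automatically; with a single worker the same code is a simple job queue.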

  4. Laurentiu says:

    I am not very familiar with Python concurrency, but I thought I should give it a go and see what I can find out about it.
    Python uses the GIL (http://en.wikipedia.org/wiki/Global_Interpreter_Lock), which limits concurrency within a single interpreter process running multiple threads. This can harm the concurrency of Python programs if the GIL is not released (for instance by calls out to external code) before any heavy computation or other resource-consuming action takes place.

    Stackless Python (http://www.stackless.com/) allows you to use micro-threads, but exposes only a minimal amount of functionality through the stackless module.

    This article offers an interesting perspective on Python and concurrency: http://www.drdobbs.com/open-source/206103078;jsessionid=5GEW0YIVJUN1TQE1GHPCKH4ATMY32JVN?pgno=2

    • Laurentiu says:

      The article offers an interesting perspective on Python and concurrency and briefly describes the hurdles one needs to overcome (it is trying to address I/O concurrency on a webserver).
      Stackless Python copies the C stack that the standard interpreter uses and also implements an I/O scheduler in order to offer true concurrency. A spin-off of Stackless is Greenlets. The main difference is that you do not have to recompile the Python interpreter to use Greenlets, but they can be less efficient. More about this here: http://ptspts.blogspot.com/2010/01/emulating-stackless-python-using.html

      Main Python concurrency page: http://wiki.python.org/moin/Concurrency

  5. Ian Stark says:

    I didn’t know about the Python global interpreter lock. This means that even when a Python program has multiple independent threads, and you have a multicore system, no more than one thread can run at a time.

    The talk below claims that Python threads run slower the more cores you have, because of all the time spent fighting over the global lock.

    http://www.dabeaz.com/python/GIL.pdf

    Anyone able to confirm this?