Effective Concurrency (Europe)
March 16-18, 2009 Stockholm, Sweden
This is my only public European seminar in 2009. I'll
cover the following topics:
Fundamentals: Define basic concurrency goals and requirements · Understand applications' scalability needs · Key concurrency patterns
Isolation -- Keep work separate: Running tasks in isolation and communicating via async messages · Integrating multiple messaging systems, including GUIs and sockets · Building responsive applications using background workers · Threads vs. thread pools
Scalability -- Re-enable the Free Lunch: When and how to use more cores · Exploiting parallelism in algorithms · Exploiting parallelism in data structures · Breaking the scalability barrier
Consistency -- Don't Corrupt Shared State: The many pitfalls of locks -- deadlock, convoys, etc. · Locking best practices · Reducing the need for locking shared data · Safe lock-free coding patterns · Avoiding the pitfalls of general lock-free coding · Races and race-related effects
High Performance Concurrency: Machine architecture and concurrency · Costs of fundamental operations, including locks, context switches, and system calls · Memory and cache effects · Data structures that support and undermine concurrency · Enabling linear and superlinear scaling
Migrating Existing Code Bases to Use Concurrency
Near-Future Tools and Features
Machine Architecture: Things Your Programming Language Never Told You (Google video) (pdf slides)
September 19, 2007, Northwest C++ Users Group, Seattle, Washington, USA. Programmers are routinely surprised at what simple code actually does and how expensive it can be, because so many of us are unaware of the increasing complexity of the machine on which the program actually runs. This talk examines the "real meanings" and "true costs" of the code we write and run, especially on commodity and server systems, by delving into the performance effects of bandwidth vs. latency limitations, the ever-deepening memory hierarchy, the changing costs arising from the hardware concurrency explosion, memory model effects all the way from the compiler to the CPU to the chipset to the cache, and more -- and what you can do about them.
Effective Concurrency: Sharing Is the Root of All Contention
Dr. Dobb's Report, March 2009. From the article: "... In this article, I'll deliberately focus most of the examples on one important case, namely mutable (writable) shared objects in memory, which are an inherent bottleneck to scalability on multicore systems. But please don't lose sight of the key point, namely that 'sharing causes contention' is a general principle that is not limited to shared memory or even to computing. The Inherent Costs of Sharing: Here's the issue in one sentence: Sharing fundamentally requires waiting and demands answers to expensive questions. ..."

volatile vs. volatile
Dr. Dobb's Journal, February 2009. From the article: "What does the volatile keyword mean? How should you use it? Confusingly, there are two common answers, because depending on the language you use, volatile supports one or the other of two different programming techniques: lock-free programming, and dealing with 'unusual' memory. Adding to the confusion, these two different uses have overlapping requirements and impose overlapping restrictions, which makes them appear more similar than they are. Let's define and understand them clearly, and see how to spell them correctly in C, C++, Java and C# -- and not always as volatile. ..."
The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software. Slashdotted December 2004; in print in Dr. Dobb's Journal, 30(3), March 2005. The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency. This is the widely-cited landmark article that first coined the term "concurrency revolution" to describe the turn to parallel hardware and its impact on the future of software.
Software and the Concurrency Revolution (with Jim Larus), ACM Queue, September 2005. The concurrency revolution is primarily a software revolution. Soon all new machines will be multicore, and the difficult problem is programming this hardware so that mainstream applications benefit from the continued exponential growth in CPU performance.