
Seven Concurrency Models in Seven Weeks: When Threads Unravel



Your software needs to leverage multiple cores, handle thousands of users and terabytes of data, and continue working in the face of both hardware and software failure. Concurrency and parallelism are the keys, and Seven Concurrency Models in Seven Weeks equips you for this new world. See how emerging technologies such as actors and functional programming address the shortcomings of traditional threads-and-locks development. Learn how to exploit the parallelism in your computer’s GPU and leverage clusters of machines with MapReduce and Stream Processing. And do it all with the confidence that comes from using tools that help you write crystal-clear, high-quality code.

Watch Paul Butcher on the Mostly Erlang podcast.

And enjoy this bonus chapter.

Just as each new spoken language can make you smarter and increase your options, each programming language increases your mental tool kit, adding new abstractions you can throw at each new problem. Knowledge is power. The Seven in Seven series builds on that power across many different dimensions. Each chapter in each book walks you through some nontrivial problem with each language, or database, or web server. These books take commitment to read, but their impact can be profound.


About this Title

Pages: 296
Published: 2014-07-10
Release: P1.0 (2014-07-15)
ISBN: 978-1-93778-565-9

This book will show you how to exploit different parallel architectures to improve your code’s performance, scalability, and resilience. You’ll learn about seven concurrency models: threads and locks, functional programming, separating identity and state, actors, sequential processes, data parallelism, and the lambda architecture.

Learn about the perils of traditional threads and locks programming and how to overcome them through careful design and by working with the standard library. See how actors enable software running on geographically distributed computers to collaborate, handle failure, and create systems that stay up 24/7/365. Understand why shared mutable state is the enemy of robust concurrent code, and see how functional programming together with technologies such as Software Transactional Memory (STM) and automatic parallelism help you tame it.

You’ll learn about the untapped potential within every GPU and how GPGPU software can unleash it. You’ll see how to use MapReduce to harness massive clusters to solve previously intractable problems, and how, in concert with Stream Processing, big data can be tamed.

With an understanding of the strengths and weaknesses of each of the different models and hardware architectures, you’ll be empowered to tackle any problem with confidence.

Q&A with Author Paul Butcher:

Q: Concurrency and parallelism aren’t new; why are they such hot topics now?

A: Because processors have stopped getting faster. You can no longer make your code run faster by simply using newer hardware. These days, if you need more performance, you need to exploit multiple cores, and that means exploiting parallelism.

Q: Is it just about exploiting multiple cores?

A: Not at all. Concurrency also allows you to create software that’s responsive, fault-tolerant, geographically distributed, and (if you use the right approach) simpler than traditional sequential software.

Q: Aren’t threads and locks enough?

A: Getting multi-threaded code right is really hard (much harder than most people realize). There are better choices available that are easier to understand and easier to debug. And threads and locks give you no help when it comes to distribution, fault-tolerance, or exploiting data parallel architectures.
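A minimal sketch (mine, not from the book) of why multi-threaded code is harder than it looks: two threads each increment a shared counter. The unguarded increment is a non-atomic read-modify-write and usually loses updates; guarding it with a lock makes it correct.

```java
public class CounterRace {
    static int unguarded = 0;
    static int guarded = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                unguarded++;              // read-modify-write: not atomic
                synchronized (lock) {
                    guarded++;            // mutual exclusion makes this safe
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("guarded   = " + guarded);   // always 20000
        System.out.println("unguarded = " + unguarded); // often less than 20000
    }
}
```

The unsettling part is that the unguarded counter can come out correct on any given run — which is exactly why this class of bug is so hard to find by testing.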

Q: Data parallelism? What’s that?

A: We tend to think of parallel computer architecture in terms of multiple cores, but that’s just one of the ways to implement parallelism. You have a supercomputer hidden in your laptop—your graphics card is a very sophisticated data parallel processor. You can use data parallel programming techniques to unlock its potential, and when you do, its performance will blow you away.
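As a CPU-side analogue of that idea (a sketch, not actual GPGPU code), the data-parallel style means applying one function across every element of a large array, with the runtime free to fan the work out across hardware. Java's standard library offers this shape directly via `Arrays.parallelSetAll`:

```java
import java.util.Arrays;

public class DataParallelSketch {
    public static void main(String[] args) {
        double[] brightness = new double[1_000];
        // One kernel-like function, applied to every index in parallel:
        Arrays.parallelSetAll(brightness, i -> Math.min(1.0, i / 500.0));
        System.out.println(brightness[250]); // 0.5
        System.out.println(brightness[999]); // 1.0
    }
}
```

A GPU takes the same "same operation, many elements" shape and runs it across thousands of lightweight hardware threads rather than a handful of cores.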

Q: How does concurrency help with fault tolerance?

A: If your software is running on a single computer and that computer fails, there’s no way for your software to recover. True fault tolerance therefore requires more than one computer, which means the software has to be concurrent. Sequential software can never be as resilient as concurrent software.
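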

Q: How will this affect me?

A: Concurrency is everywhere—even client-side web programming is going concurrent. So no matter what kind of software you write, it’s going to be an increasingly important aspect of what you do. By giving you an overview of the concurrency landscape, I hope that this book will help you tackle your future projects with confidence.

Top 5 tips

1. Know where the dragons lie in wait.

Threads and locks are the most popular approach to concurrency, but they’re very difficult to get right. Do you know how to avoid deadlock? Livelock? Do you know what the memory model says about when it’s safe for one thread to read changes made by another? Or why you shouldn’t call a foreign function while holding a lock?
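One of those dragons, sketched here with illustrative names of my own choosing: deadlock arises when thread 1 takes lock A then B while thread 2 takes B then A, and each waits forever for the other. The standard cure is a global lock order — every thread acquires A before B, so a cycle of waiting threads can never form.

```java
public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static int completed = 0;

    // Every caller acquires the locks in the same global order,
    // so no cycle of waiting threads can ever form.
    static void withBothLocks(Runnable critical) {
        synchronized (lockA) {        // always A first...
            synchronized (lockB) {    // ...then B, in every thread
                critical.run();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> withBothLocks(() -> completed++));
        Thread t2 = new Thread(() -> withBothLocks(() -> completed++));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(completed); // 2: both threads finished, no deadlock
    }
}
```

Had one thread locked B before A instead, the program could hang forever — and, like the race above, it might pass a thousand test runs first.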

2. Understand the difference between parallelism and concurrency.

Although they’re often confused, parallelism and concurrency are different things. Concurrency is an aspect of the problem domain—your code needs to handle multiple simultaneous (or near simultaneous) events. Parallelism, by contrast, is an aspect of the solution domain—you want to make your program run faster by processing different portions of the problem in parallel. Some approaches are applicable to concurrency, some to parallelism, and some to both. Understand which you’re faced with and choose the right tool for the job.
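A small illustration of the distinction (my example, not the book's): the problem below is inherently sequential — sum the numbers 1 to 1000 — but we can choose a parallel solution that splits the range across cores. Concurrency, by contrast, would live in the problem itself, such as handling several simultaneous user requests.

```java
import java.util.stream.IntStream;

public class SumInParallel {
    public static void main(String[] args) {
        int sequential = IntStream.rangeClosed(1, 1000).sum();
        int parallel   = IntStream.rangeClosed(1, 1000).parallel().sum();
        // Parallelism changed how we computed the result, not what it is:
        System.out.println(sequential + " " + parallel); // 500500 500500
    }
}
```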

3. Brush up your functional programming.

You don’t need to write functional programs to write concurrent software, but even if you’re using a traditional imperative language like Java, understanding the functional principles of immutable data and referential transparency will be incredibly helpful.
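A hedged sketch of those habits in plain Java: the input list is never mutated, and `squareAll` is referentially transparent — same input, same output, no side effects — so it is trivially safe to call from many threads at once.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PureFunctions {
    // Pure function: builds a new list, never touches its argument.
    static List<Integer> squareAll(List<Integer> xs) {
        return xs.stream().map(x -> x * x).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3);  // immutable list
        List<Integer> output = squareAll(input);
        System.out.println(input);   // [1, 2, 3] — unchanged
        System.out.println(output);  // [1, 4, 9]
    }
}
```

Data that nothing can mutate needs no locks to share between threads — which is why immutability keeps reappearing throughout the book's seven models.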

4. Remember that it’s not just about multiple cores.

Although the resurgence of interest in concurrency is a result of the multicore crisis, concurrency is about much more than just exploiting multiple cores. Concurrency also allows you to create software that’s responsive, fault-tolerant, geographically distributed, and (if you use the right approach) simpler than traditional software.

5. Don’t be daunted.

Concurrent programming has a fearsome reputation, but that reputation mostly derives from the problems with threads and locks. If you use the right tools for the job, concurrent programming can be simple, expressive, and even fun.

Read the reviews.

What You Need

The example code can be compiled and executed on *nix, OS X, or Windows. Instructions on how to download the supporting build systems are given in each chapter.

Contents & Extracts

Bonus Chapter

  • Concurrent or Parallel?
  • Parallel Architecture
  • Concurrency: Beyond Multiple Cores
  • The Seven Models
  • Threads and Locks
    • The Simplest Thing That Could Possibly Work
    • Day 1: Mutual Exclusion and Memory Models
    • Day 2: Beyond Intrinsic Locks
    • Day 3: On the Shoulders of Giants
    • Wrap-Up
  • Functional Programming excerpt
    • If It Hurts, Stop Doing It
    • Day 1: Programming Without Mutable State
    • Day 2: Reducers
    • Day 3: Dataflow Programming with Futures and Promises
    • Wrap-Up
  • The Clojure Way—Separating Identity from State
    • The Best of Both Worlds
    • Day 1: Atoms and Persistent Data Structures
    • Day 2: Agents and Software Transactional Memory
    • Day 3: In Depth
    • Wrap-Up
  • Actors excerpt
    • More Object-Oriented than Objects
    • Day 1: Messages and Mailboxes
    • Day 2: Error Handling and Resilience
    • Day 3: Distribution
    • Wrap-Up
  • Data Parallelism
  • Lambda Architecture
  • Tuple Spaces
  • Wrapping Up


Paul Butcher has worked in diverse fields at all levels of abstraction, from microcode on bit-slice processors to high-level declarative programming, and all points in between. Paul’s experience comes from working for startups, where he’s had the privilege of collaborating with several great teams on cutting-edge technology. He is the author of Debug It!.