April 1st, 2008 | Published in Google Open Source
Early in March, Google's Open Source Team hosted a one-day concurrency summit organized by O'Reilly Media. The participants represented a broad spectrum of interests and perspectives: fast-paced startups, academic researchers, hardware companies with unique offerings in concurrent hardware or hardware acceleration for concurrent software, and large software companies looking to concurrency for the next generation of radical technology advances.

The discussion ran fast and furious over a variety of topics:

- the implications of increasingly concurrent hardware and software for power consumption
- whether locks should be considered harmful
- solutions for testing concurrent code
- the obstacle of legacy code in moving toward concurrent implementations
- REST as a model of distributed concurrency
- the benefits and limits of mathematical formalism in concurrency
- how to train the next generation of programmers for concurrent development so they can solve the next generation of concurrent problems

Much positive attention went to concurrency models based on components/boundaries and message passing. Energy demands and economics, rather than improved performance, were generally agreed to be the driving factors behind the current trend toward massively multicore machines.

All in all, it was an incredibly valuable day. My main takeaway is that we're still in the very early days of concurrency research and development. There was a time when people talked broadly about "artificial intelligence"; now we talk about genetic algorithms, neural networks, symbolic learning, fuzzy systems, certainty factors, and a host of other topics related to or inspired by that early research. Concurrency has a similar path ahead, where we stop thinking and talking about it as a monolithic solution and start treating it as a general field full of rich and diverse solutions to practical problems.
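For readers unfamiliar with the components-and-message-passing style that drew so much attention at the summit, here is a minimal sketch in Python. It is only an illustration of the general idea, not any specific system discussed at the summit: each component owns its own state and interacts with others exclusively by sending messages over queues, so no locks over shared data are needed in application code.

```python
import threading
import queue

def squarer(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """A component: owns its state, communicates only via messages."""
    while True:
        item = inbox.get()
        if item is None:  # sentinel message: shut down
            break
        outbox.put(item * item)  # reply with a result message

inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()

worker = threading.Thread(target=squarer, args=(inbox, outbox))
worker.start()

# Send work as messages rather than mutating shared structures.
for n in range(5):
    inbox.put(n)
inbox.put(None)  # ask the component to stop
worker.join()

results = sorted(outbox.get() for _ in range(5))
print(results)  # [0, 1, 4, 9, 16]
```

The queues here serve as the "boundaries" between components: all synchronization lives inside the channel, which is the essential property shared by actor systems and other message-passing designs.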