The Great Tanenbaum-Torvalds Debate Revisited
Unit 12
Welcome to Week 12, where we will revisit (and recreate, in our own small way) the Tanenbaum-Torvalds debate of the early nineties.
At the time of the original debate, Andrew Tanenbaum was a university professor who had created Minix – a small,
microkernel-based operating system that he used as a teaching tool. Linus Torvalds was a master's student who had completed a module in operating systems design.
Anecdotally, Torvalds claimed he was simply trying to create an OS kernel of his own that he could use with the GNU tools and that would make the best use of his (then new) 386-based PC.
He used Minix as the theoretical basis for his new system.
The world of computing was very different in 1992. Processor speeds were measured in MHz, not GHz.
There were no multi-core x86 CPUs available – and certainly not in desktop systems. As a result, decisions about how operating systems should be designed –
the trade-offs between single-address-space systems and user/kernel memory splits, and concerns about context-switch times – were much more acute in those days.
Today we live in a web-scale world: workloads run in virtual clouds with supposedly unlimited capacity for expansion, in both RAM and CPU. Malicious intent is everywhere – barely a day goes by without the announcement of another critical exploit or vulnerability in one system or another, and sometimes in the underlying hardware as well.
All of this should be borne in mind when considering the validity of both Tanenbaum’s and Torvalds’ arguments nearly thirty years on: is a monolithic kernel
still the best approach in a mostly distributed world? Does the use of microservices imply a need for microkernels? And how does the shadow of incessant cyber-attacks affect these decisions? Hopefully this debate will help us all to re-evaluate our opinions on the best approach to take.