Saturday, November 17, 2018

On virtual machines and computer architectures

There is a wide variety of instruction set architectures in use today, with varying degrees of complexity. The most barebones are the reduced instruction set computer (RISC) architectures, which offer little more than operations on registers and instructions for transferring data between registers and memory. These architectures are especially common in mobile devices, because it is believed that implementing more complex instructions places greater demands on power and heat budgets, and the rise of mobile devices has in turn made RISC more common still. CISC architectures offer somewhat more complex instructions, like instructions that combine ALU operations with loads and stores. High level computer architectures offer even more complex operations, but they are much less common. This wide variety of architectures is a natural consequence of, if nothing else, the vast number of people now involved in computing at this stage in history.

Given this wide variety of instruction set architectures, there is no reason to expect any particular complex feature to be available in a given processor. Indeed, if you selected a processor at random, there is a good chance it would be a RISC machine with no complex features at all, only the bare minimum needed to perform basic operations. The utility of a virtual machine, then, is its portability, and the extent to which it makes complex features portable along with it. The best way to make a program portable, or to get high level features, is to target a virtual machine.

The enormous success of the JVM is due to its portability. It was the first to really push the motto "write once, run anywhere." A program in any language that targets the JVM is compiled directly to JVM bytecode; there is no need to compile separately for each platform. Given a JVM program, you can send it to someone over the network without even knowing what platform they have, trusting only that they have a compliant JVM implementation. This is genuine, true portability. We live in a networked, interconnected world, so portability certainly cannot be ignored.
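To make that concrete, here is a minimal sketch (the class name and the use of os.name/os.arch are just illustrative): the source is compiled once, anywhere, and the resulting .class file of JVM bytecode runs unmodified on any platform with a compliant JVM.

    // Greeter.java: compile once with `javac Greeter.java`. The resulting
    // Greeter.class contains JVM bytecode rather than native machine code,
    // so the very same file runs with `java Greeter` on x86, ARM, or any
    // other processor that has a compliant JVM.
    public class Greeter {
        public static void main(String[] args) {
            System.out.println("Hello from " + System.getProperty("os.name")
                    + " on " + System.getProperty("os.arch"));
        }
    }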

The historical trend seems to be this: we started with simple computers, writing assembly language by hand for each physical machine, and that was one impetus for the development of higher level architectures, which made it easier to write assembly code for the machine. But then compilers got better, making that less necessary and leading back to more reduced instruction sets. At the same time, the physical high level architectures that did exist died out, due among other things to their software not being portable to the newer, more compiler-dependent reduced instruction sets. The Lisp machines died out at this time.

The next development was the virtual machine, which adapted to the reality of highly networked machines. This was best exemplified by the JVM, which first allowed people to compile a program once and run it anywhere. Its success, while other systems failed, was due to this feature. The next thing that happened was that specialized hardware was created to run the JVM. So the only successful high level computer architectures today are specialized processors designed to run a VM more efficiently, and the only way to get a successful physical machine with high level features today is to create a specialized processor for an existing virtual machine. All of which is to say: you need a virtual machine.

I have in mind a garbage collected stack machine for the implementation of Lisp. At the least, these high level features together make the implementation of a Lisp environment more effective. Of course, such features are hardly supported in stock hardware. Fortunately, the JVM (as well as the CLR) is already a stack machine with an excellent garbage collector. It's not clear what more features one could want for implementing Lisp, so the JVM may already be perfect as is. Hence, the utility of Clojure.
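As a rough sketch of those two features on the JVM (the class and its names are hypothetical, not taken from any particular Lisp implementation): arithmetic is evaluated on the operand stack, just as a stack machine for Lisp would evaluate it, and cons cells are heap-allocated with no explicit free, left entirely to the garbage collector.

    // StackDemo.java: an illustrative sketch, not a real Lisp runtime.
    public class StackDemo {
        // Disassembling with `javap -c StackDemo` shows this method as pure
        // stack machine code: iload_0, iload_1, iadd, ireturn. That is,
        // push both arguments on the operand stack, add the top two values,
        // and return. It is essentially what a Lisp compiler would emit
        // for (+ a b).
        static int add(int a, int b) {
            return a + b;
        }

        // A cons cell, the basic Lisp data structure. Cells are allocated
        // on the JVM heap and never explicitly freed; unreachable cells
        // are reclaimed by the garbage collector.
        static final class Cons {
            final Object car, cdr;
            Cons(Object car, Object cdr) { this.car = car; this.cdr = cdr; }
        }

        public static void main(String[] args) {
            Cons list = new Cons(1, new Cons(2, null)); // the list (1 2)
            System.out.println(add((Integer) list.car, 3)); // prints 4
        }
    }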
