Wednesday, January 2, 2019

2018 year in review

In terms of mathematics, one thing that was studied this year was topology. Topologies are ordinarily defined as set systems closed under arbitrary unions but only finite intersections; the restriction to finite intersections limits the neighbourhoods of points. An Alexandrov topology, on the other hand, drops this restriction and is closed under arbitrary intersections as well, so every point has a smallest neighbourhood, and such a topology is simply equivalent to a preorder. The open sets of an ordinary topology have an interior operation but not a closure operation, as a consequence of this fact, so they form a co-Moore family. The complementary family of closed sets is a Moore family, which does have a closure operation, but only an Alexandrov family of this sort has both a closure and an interior, and as described already it is determined by its specialization preorder. All finite topologies are Alexandrov topologies, so at least in the finite case, topology doesn't tell us anything more than order theory.
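The preorder-Alexandrov correspondence can be sketched in a few lines of Python. This is a minimal illustration on a hypothetical three-point preorder of my choosing: the open sets are the up-sets, every point gets a smallest neighbourhood, and the specialization preorder recovers the original relation.

```python
from itertools import combinations

# A small preorder on {0, 1, 2}: 0 <= 1 and 0 <= 2, plus reflexivity.
points = [0, 1, 2]
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}

def is_up_set(s):
    # A set is open in the Alexandrov topology iff it is upward closed.
    return all(y in s for x in s for (a, y) in leq if a == x)

def powerset(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

opens = [s for s in powerset(points) if is_up_set(s)]

# Every point has a smallest open neighbourhood: its up-set.
min_nbhd = {x: {y for (a, y) in leq if a == x} for x in points}
assert all(min_nbhd[x] in opens for x in points)

# Recover the specialization preorder: x <= y iff every open set
# containing x also contains y.
recovered = {(x, y) for x in points for y in points
             if all(y in u for u in opens if x in u)}
assert recovered == leq
```

The last assertion is the equivalence the paragraph describes: the topology and the preorder carry exactly the same information.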

A Frechet (T1) topology trivializes the specialization preorder of the topology. A Hausdorff space further requires that distinct points be separated by disjoint neighbourhoods, which is generally a good thing, as we want to deal with spaces in which points are separated in some sense. As a consequence, most of the time we want to deal with Hausdorff topologies. It happens that the order topologies generated by the open rays of a total order are all Hausdorff topologies, for example. All finite Frechet topologies, and hence all finite Hausdorff topologies, are discrete, so a Hausdorff topology can tell us nothing about a finite structure. There are other order topologies that are discrete, and all of them that are hereditarily discrete can be embedded within the integers. Order topology, however, describes a great many properties of other types of total orders, like scattered orders, dense orders, connected dense orders, etc.
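The claim that finite order topologies are discrete can be checked directly. A sketch on a four-element chain of my choosing: the open rays generate the topology, and finite intersections of rays already produce every singleton.

```python
from itertools import combinations

# Order topology on a finite chain: open rays generate it.
points = list(range(4))  # the chain 0 < 1 < 2 < 3

rays = [{x for x in points if x > a} for a in points] + \
       [{x for x in points if x < b} for b in points]

# Finite intersections of rays form a base; on a finite chain,
# pairwise intersections already suffice.
base = [r1 & r2 for r1, r2 in combinations(rays, 2)] + rays

# Every singleton is a basic open set, so the topology is discrete.
for x in points:
    assert {x} in base
```

Interior points like 1 arise as intersections such as (0, 2) = {x > 0} & {x < 2}, while the endpoints are open rays themselves.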

The most interesting and important aspect of order theory is the study of Boolean algebras and, relatedly, set theory. Set theory is the foundation from which all of mathematics inevitably derives. It is interesting, then, that the theory of total orders, which are so different from Boolean algebras, produces an interesting type of theory on its own. A large part of the reason this is the case is the study of order topology: topology provides an organizing principle for the study of total orders. This was demonstrated by my paper on the subject, which, unlike its predecessors dealing with total orders, used topology as its foundation. It is worth bearing in mind that order-topological principles can be applied to partial orders in general, by considering the total suborders of a partial order. For example, a partial order is hereditarily discrete if all its chains are, or likewise, it is scattered if all of its chains are. This means topological principles can be used to better understand all of order theory in general.

Even so, a large part of the reason for studying topology is to better understand metric geometry. Geometry plays an absolutely fundamental role in physics, and therefore in every field of science, so it is worth pursuing. It is generally agreed that real space is something like a manifold, and can be modeled as such. Every metric space has a topology of open sets associated with it, which as it happens is also a Hausdorff topology. I demonstrated that the order topology on the set of distances is related to the topology of the metric space, in that it partially determines what types of topologies can be formed. This helped to demonstrate the relation between order and metric, two structures that share the property of having a topology.
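The Hausdorff property of a metric topology is worth seeing concretely. A minimal sketch with the Euclidean metric on the plane (the points and the sample grid are illustrative): balls of radius d(p, q) / 2 around two distinct points can never overlap, by the triangle inequality.

```python
import math

# Any metric induces a topology via open balls; the triangle
# inequality makes it Hausdorff: distinct points p and q are
# separated by disjoint balls of radius d(p, q) / 2.
def d(p, q):  # Euclidean metric on the plane
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (0.0, 0.0), (3.0, 4.0)
r = d(p, q) / 2  # 2.5

# No point can lie in both balls: d(p, x) < r and d(q, x) < r
# would give d(p, q) <= d(p, x) + d(x, q) < 2r = d(p, q).
samples = [(x / 2, y / 2) for x in range(-4, 11) for y in range(-4, 13)]
in_both = [x for x in samples if d(p, x) < r and d(q, x) < r]
assert in_both == []
```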

In terms of computing, I studied the theory of stack machines. In particular, I studied the Java virtual machine (JVM) and its specification, learned how to write programs in Java bytecode, and made a compiler for a language targeting the platform. I read parts of the Java virtual machine specification many times. I also studied the CLR and programmed in CIL, and I intend to do so some more if I have the time. These are the two virtual stack machines that seem to be most widely used. Some Common Lisp implementations also use stack-based virtual machines, which was interesting to study as well. I intend to use many Java libraries in my code, regardless of which language I use, informed by an understanding of how the virtual machine handles them.
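The execution model behind all of these machines can be sketched in a few lines. This is an illustrative interpreter of my own, not the JVM itself, though the mnemonics mirror JVM bytecode: each instruction pops its operands from an operand stack and pushes its result back.

```python
# A minimal sketch of JVM-style bytecode execution: each instruction
# pops operands from and pushes results to an operand stack.
def run(code):
    stack = []
    for op, *args in code:
        if op == "iconst":
            stack.append(args[0])
        elif op == "iadd":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "imul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack[-1]

# (2 + 3) * 4, laid out the way a compiler would emit it:
program = [("iconst", 2), ("iconst", 3), ("iadd",),
           ("iconst", 4), ("imul",)]
assert run(program) == 20
```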

One of the great things about stack machines like the JVM and the CLR is that, being stack machines, they can easily be made to correspond to Lisp, because lists and stacks can be made to correspond to one another, since both are total orders. It doesn't hurt that there is also a vast number of libraries available on these platforms, including libraries dealing with things like machine learning, and so on. These platforms are a great thing for language developers as well, who get a high-level platform to target and who don't need to deal with register allocation, much less with the problems of producing code for particular hardware. LLVM, by contrast, is a low-level, register-based machine with a RISC-like instruction set, but at least it performs register allocation, so you don't have to deal with the concrete register files of different platforms; things like that should be dealt with by good libraries.
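The Lisp-to-stack-machine correspondence the paragraph describes can be made concrete: a postorder walk of a nested prefix expression yields exactly the instruction stream a stack machine needs. A minimal sketch, with expressions as nested Python tuples standing in for Lisp lists:

```python
# Compile a Lisp-style prefix expression to stack-machine code by
# postorder traversal: operands first, then the operator.
def compile_expr(expr):
    if isinstance(expr, int):
        return [("push", expr)]
    op, *args = expr            # e.g. ("+", 1, 2) for (+ 1 2)
    code = []
    for a in args:
        code += compile_expr(a)
    return code + [(op, len(args))]

def run(code):
    stack = []
    ops = {"+": sum, "*": lambda xs: xs[0] * xs[1]}
    for op, n in code:
        if op == "push":
            stack.append(n)
        else:
            args = [stack.pop() for _ in range(n)][::-1]
            stack.append(ops[op](args))
    return stack[0]

# (* (+ 1 2) 4) in list form:
assert run(compile_expr(("*", ("+", 1, 2), 4))) == 12
```

The compiler never needs a register allocator: the stack discipline of the target matches the list structure of the source directly.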

I like to think about programming from the perspective of a compiler writer, especially now that I have studied the theory of compilers to a greater extent than ever before. It's good to know all the steps the compiler performs, like parsing, semantic analysis, code generation, etc., and what it produces (like JVM or CLR bytecode), so that you are not programming against some black box. Certainly it is good to know how to use all the features of your machine, and to be better able to deal with problems of performance, so common when projects grow really large over the years. So compiler development is my great organizing principle of programming.

A recent thought I had on the theory of parsers, which I thought might be interesting, is that a parser could, when given a sequence, output a set system of indices of that sequence. This set system, partially ordered by inclusion, could even characterize the parse tree of the sequence, if it formed an upper tree under inclusion. This could work for both text files and perhaps even binary files in general. Each binary file format would have an appropriate parser of its own, which determines which parts of the binary file are important, so we have more than just some opaque stream of numbers. This would necessarily use set theory. In terms of code generation, I have been thinking about building an ontology of instructions, using pattern matching on instruction streams, and so on, to see how instruction sequences are isomorphic to certain programs that compile to them.
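The index-set-system idea can be sketched quickly. This is a hypothetical toy parser of my own for balanced parentheses: each group contributes the set of indices it covers, and when any two sets in the system are nested or disjoint, inclusion order makes the system a tree.

```python
# A parser as a map from a sequence to a set system of indices;
# if the system is laminar (any two sets nested or disjoint),
# its inclusion order is a tree -- the parse tree.
def spans(text):
    out, stack = [set(range(len(text)))], []
    for i, c in enumerate(text):
        if c == "(":
            stack.append(i)
        elif c == ")":
            out.append(set(range(stack.pop(), i + 1)))
    return out

def is_tree(system):
    return all(a <= b or b <= a or not (a & b)
               for a in system for b in system)

system = spans("(a (b c))")
assert is_tree(system)  # inclusion order on the spans is a tree
```

The same shape would work for a binary format: the parser decides which index sets are meaningful, and inclusion recovers the nesting.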
