Wednesday, January 30, 2019

Block cut tree

One of the most important concepts in graph theory is the block cut tree of a graph. The block cut tree is determined by splitting up the components of a graph into two parts: the cut vertices, whose removal disconnects the graph, and the blocks, which are the maximal biconnected components, having no cut vertices of their own. The block cut tree determines the betweenness relations of the graph, and in particular it completely determines the total betweenness relations, which are defined by the set of all points on paths between two points. To consider the block cut tree we will start with the example below.



One thing worth realizing about the block cut tree is that cut vertices are singletons while all blocks have two or more elements in them. This suggests a way of representing the block cut tree as a set system, so I decided to represent it in this way. To get the block cut tree in this form, the function bcset can be used.
(= (bcset graph)
    #{#{0} #{1} #{2} #{3} #{7} #{8}
      #{0 4} #{7 3} #{6 2} #{1 5} #{7 8} 
      #{9 10 8} #{0 1 3 2}})
This produces a height-two set system whose inclusion ordering is a forest, in which cut vertices belong to the blocks associated with them. The block cut tree is merely the inclusion ordering of this set system, which, as displayed below, is a tree.
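As a minimal sketch (in Python rather than the notation used on this blog), the blocks and cut vertices can be recomputed with the standard depth-first search for biconnected components. The edge list is my reconstruction of the example graph from the bcset output above, so it is an assumption:

```python
def blocks_and_cuts(adj):
    """Return (blocks, cut_vertices) of an undirected graph given as
    an adjacency dict {vertex: set of neighbours}."""
    disc, low, blocks, cuts = {}, {}, [], set()
    stack, counter = [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = counter[0]; counter[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:
                children += 1
                stack.append((u, v))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u is a cut vertex if no back edge from v's subtree
                # climbs strictly above u (the root is a special case)
                if (parent is None and children > 1) or \
                   (parent is not None and low[v] >= disc[u]):
                    cuts.add(u)
                if low[v] >= disc[u]:
                    # pop one biconnected component off the edge stack
                    block = set()
                    while stack:
                        x, y = stack.pop()
                        block.update((x, y))
                        if (x, y) == (u, v):
                            break
                    blocks.append(block)
            elif disc[v] < disc[u]:
                stack.append((u, v))
                low[u] = min(low[u], disc[v])

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return blocks, cuts

# assumed example graph: the 4-cycle 0-1-2-3 with pendants 4, 5, 6
# and a tail 3-7-8 ending in the triangle {8, 9, 10}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 5), (2, 6),
         (3, 7), (7, 8), (8, 9), (9, 10), (10, 8)]
adj = {v: set() for v in range(11)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

blocks, cuts = blocks_and_cuts(adj)
print(sorted(cuts))                       # the singletons of bcset
print(sorted(sorted(b) for b in blocks))  # the blocks of bcset
```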



One characteristic of the block cut tree is that every element is either a cut vertex or it is contained in some block. This relates the elements of the graph to the elements of the block cut tree. One interesting aspect of this set system representation is that the block or cut vertex of a point within the tree can be determined by getting the smallest set containing it. In other words, it can be computed by taking the intersection of all the sets that contain the point as a member. This is what the subdimembers function does with respect to the set system.
(= (subdimembers tree 2)
   #{2})
(= (subdimembers tree 10) 
   #{9 10 8})
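A minimal sketch of this computation in Python (the set system is the bcset output above; subdimembers here is a reimplementation for illustration, not the original function):

```python
# the bcset output above, as a list of Python sets
tree = [{0}, {1}, {2}, {3}, {7}, {8},
        {0, 4}, {3, 7}, {2, 6}, {1, 5}, {7, 8},
        {8, 9, 10}, {0, 1, 2, 3}]

def subdimembers(system, point):
    # intersect all sets of the system that contain the point
    containing = [s for s in system if point in s]
    out = set(containing[0])
    for s in containing[1:]:
        out &= s
    return out

print(subdimembers(tree, 2))   # {2}: a cut vertex
print(subdimembers(tree, 10))  # {8, 9, 10}: the block containing 10
```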
This relates the members of the graph to the members of the block cut tree. The next thing to be computed is the path between two points in the block cut tree as determined by the sets corresponding to those points. This determines a path between any two points in the block cut tree of the graph. All points between two points can be determined by the union of the elements in this path set system.
(= (bcpath graph 0 9) 
   #{#{0} #{3} #{7} #{8} #{3 7} #{7 8} #{9 10 8} #{0 1 3 2}})
(= (bcinterval graph 0 9) 
   #{0 1 2 3 7 8 9 10})
This demonstrates how the problem of determining all points that are between any two points in the graph, regardless of path length, can be solved by the block cut tree. As a tree is a type of graph with a unique path between any two points, all the paths between two points have the same length, so the metric betweenness and the total betweenness coincide. So while the total betweenness and the metric betweenness don't always coincide in general, they do coincide in the block cut tree, which determines the total betweenness.

Sunday, January 27, 2019

Directed and undirected trees

In both graph theory and order theory there is a concept of trees. In graph theory there are undirected trees, and in order theory there are directed trees, which are given some root. A tree graph is shown below; its lack of direction is because it is a graph.



One interesting property of graphs is that they have a set of metric intervals associated with them; these metric intervals contain all points between any two points. When the graph is not traceable, the set of metric intervals will not be upper bounded like it was with the path graph considered earlier. While the metric intervals of the path graph form a triangular structure, in the more general case only the maximal sets of the metric intervals of the tree form a triangular structure among their parts, as shown below.



While the set of intervals of the tree graph can be considerably larger than the original graph (to a roughly triangular, quadratic extent), it is interesting to consider how we can go from the original tree graph to a directed tree on the same set of vertices. In order to convert an undirected tree to a directed tree we need some special node to be the root. I realized that the directed version of a tree at some root is simply the set of intervals of the tree with the root as one of their endpoints. This turns into a lower tree with the root as the minimal element, as shown below for the case of the tree graph with the root chosen at zero.



Each set of intervals produced from some root produces a different directed version of the tree. The total set of intervals of the tree is therefore equal to the union of the different rooted orientations of the tree produced in this manner and the empty interval, with non trivial intervals repeated. This rooted set of intervals is also an order containment family. This means that the set of metric intervals centered around some point is also equal to the set of the principal ideals of the corresponding rooted tree.
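This correspondence can be sketched in Python on an assumed example tree: the interval from the root to each vertex, read off from BFS parent pointers, is exactly the vertex together with its ancestors, i.e. its principal ideal in the rooted tree:

```python
from collections import deque

edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]  # assumed example tree
adj = {v: set() for v in range(6)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

def rooted_parents(adj, root):
    # orient the tree away from the root by breadth-first search
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

parent = rooted_parents(adj, 0)

def interval_from_root(v):
    # the unique path from the root to v = v plus all of its ancestors
    out = set()
    while v is not None:
        out.add(v)
        v = parent[v]
    return out

print(interval_from_root(4))  # {0, 1, 4}
print(interval_from_root(5))  # {0, 2, 5}
```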

Friday, January 25, 2019

Triangular family of intervals

Intervals are one of the most basic concepts arising from betweenness relations. A particular family of intervals is shown below. This sort of family can be formed by metric betweenness on a path graph or by order betweenness on a total order. This set system is an atomistic Moore family with a triangular structure. The number of points at some covering distance from the maximum is the covering distance plus one.



The triangular structure of the intervals has three edges. These correspond to the join irreducibles and the meet irreducibles of the lattice. The edges on the sides are the meet irreducibles and the bottom edge is the join irreducibles. The meet irreducibles are the rays and the join irreducibles are the singleton intervals. Every interval can be expressed as an intersection of rays or as a union of singleton intervals. The inner elements of the triangle are neither join nor meet irreducible. The triangular structure seems to be the defining character of the set of intervals.
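The triangular structure can be sketched in Python on an assumed four-vertex path graph, grouping the intervals by size: the top level has one interval, and each level below it gains one more:

```python
n = 4  # assumed path graph 0 - 1 - 2 - 3
# the metric interval between i and j on a path is the integer range
intervals = [frozenset(range(i, j + 1)) for i in range(n) for j in range(i, n)]

# group intervals by size; size n - k has k + 1 intervals, giving
# the triangular shape, with the singletons along the bottom edge
by_size = {}
for s in intervals:
    by_size.setdefault(len(s), []).append(s)

for size in sorted(by_size, reverse=True):
    print(size, sorted(sorted(s) for s in by_size[size]))
```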

Thursday, January 24, 2019

Concepts of betweenness in graph theory

Given a metric space, metric betweenness can be defined by the metric betweenness formula. This states that a point is between two points if going through that point doesn't increase the distance between the endpoints; in other words, if it is on a shortest path between the two points. In the particular case of graphs, the metric can be defined by the shortest path length. Then this concept from the theory of metric spaces can be applied to the special case of graph theory.

$d(x,y)=d(x,z)+d(y,z)$

The formula $d(x,y)=d(x,z)+d(y,z)$ should be familiar from the triangle inequality, which is an axiom in the definition of metric spaces. The triangle inequality states that the distance between two points is less than or equal to the sum of their distances to a third point; when equality holds, that third point is between them. With this in mind, the axioms of a metric space make perfect sense. Metric betweenness is the main concept of betweenness used in both graph theory and metric geometry in general. A different notion is total betweenness, defined by all paths rather than just the shortest paths.
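A minimal sketch of the formula in Python, on an assumed path graph, with the graph metric computed by breadth-first search:

```python
from collections import deque

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # path graph 0 - 1 - 2 - 3

def dist_from(adj, src):
    # shortest path lengths from src by breadth-first search
    d = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                queue.append(v)
    return d

def between(adj, x, z, y):
    # z is between x and y when d(x, y) = d(x, z) + d(z, y)
    dx, dz = dist_from(adj, x), dist_from(adj, z)
    return dx[y] == dx[z] + dz[y]

# the closed metric interval between the endpoints is the whole path
print(sorted(z for z in adj if between(adj, 0, z, 3)))  # [0, 1, 2, 3]
```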

Total betweenness:
The most basic context in which to consider total betweenness is trees, where there is only a single path between any two points. In this context, the total betweenness and the metric betweenness coincide. So for example, with a path graph the central vertex is between its two neighbors.


Total betweenness is entirely determined by the cut vertices of a graph. To see this, consider that it is only when one reaches a cut vertex that one must make a choice of which direction to go, which can limit the set of vertices that can be reached between any two points. In a biconnected graph, every vertex lies on some path between any two other vertices. As a consequence, the total betweenness is entirely determined by the block cut tree of the graph. Given any two distinct vertices, all vertices on paths between them can be produced by determining all blocks and cut vertices between the two points in the block cut tree. As this is merely a process of determining betweenness on a tree, which is determined by metric betweenness, it can be said that the metric definition is the main form of betweenness worth considering.

Metric betweenness:
Perhaps the simplest case in which metric betweenness differs from total betweenness is the cycle graph on five vertices. Given two non-adjacent points on this graph, there is a path between them of length two and another one of length three. All the points are between them in the context of total betweenness, but only the one interior point of the shorter path is between them under metric betweenness. This demonstrates that metric betweenness can give us more information in some cases.
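This contrast can be sketched in Python on the 5-cycle: metric betweenness keeps only the points on shortest paths, while total betweenness keeps the points on all simple paths:

```python
from collections import deque

adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}  # cycle graph C5

def dist(src, dst):
    # shortest path length by breadth-first search
    d = {src: 0}; q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1; q.append(v)
    return d[dst]

def simple_paths(x, y, path=None):
    # enumerate all simple paths from x to y
    path = path or [x]
    if path[-1] == y:
        yield path; return
    for v in adj[path[-1]]:
        if v not in path:
            yield from simple_paths(x, y, path + [v])

metric = {z for z in adj if dist(0, z) + dist(z, 2) == dist(0, 2)}
total = set().union(*simple_paths(0, 2))
print(sorted(metric))  # [0, 1, 2]: only the shorter path
print(sorted(total))   # [0, 1, 2, 3, 4]: every vertex
```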



Once we have a concept of betweenness like this, it is useful to consider convex sets, which are subsets of the graph that contain every point between any two of their members. A closed metric interval is defined as the set of all points between two endpoints, together with the endpoints themselves. This leads to a better understanding of subsets of a graph. There is also a concept of betweenness on partial orders, which isn't considered here. This demonstrates how useful betweenness relations can be to a variety of different subjects in mathematics.

Sunday, January 20, 2019

Metric recovery theorem

The metric recovery theorem proven by Malament is perhaps the most important theorem for understanding actual space and time, and the relation between order and metric in spacetime. Basically, general relativity describes the structure of spacetime using the metric tensor, which is a four dimensional symmetric bilinear form. This means that the matrix form of the metric tensor is defined by ten independent components at each point in spacetime. The metric tensor used in general relativity is different from a typical metric used in classical models of spacetime, the Euclidean metric, because it carries with it a causal order structure on its points as well.

As the metric tensor used in special and general relativity has a causal factor associated with it, it is useful to consider which relativistic spacetimes can be recovered from a causal ordering relation. The metric recovery theorem basically demonstrates that the metric tensor can be recovered from the causal ordering up to a single conformal scale factor. So nine of the ten independent components of the metric tensor can be recovered. The only thing remaining is the conformal factor.

The basic issue of how to construct the conformal factor is then the most important thing that remains. Remember from order topology that a given partially ordered set can have different topologies associated with it: discrete, scattered, or continuous. One aspect of a discrete partially ordered set is that concepts like measure and volume come for free from it, and they don't need to be defined separately. It has thus been concluded that spacetime could be modeled as a discrete partial order, with the conformal factor being volume, and then there is no need for any extra structure. This is the program of causal set theory.

It has been said that spacetime looks an awful lot like a discrete partially ordered set. Whether it actually is one is a separate issue; it certainly looks like one. On the other hand, one can attach a conformal scale factor to a continuous partial order in order to allow for an effective continuous model of spacetime. This leads to the metric tensor. In either case, spacetime can be modeled as a partial order. It is either a discrete partial order or a continuous partial order equipped with a conformal scale factor.

References: Malament, David B. (July 1977). "The class of continuous timelike curves determines the topology of spacetime". Journal of Mathematical Physics.

Wednesday, January 16, 2019

World games

I will consider, for the purpose of philosophizing about space and time, world games which have spatial relations through a world and temporal relations through gameplay of some kind. Of particular interest is the kinematics of units, which can move in certain patterns in the world. I will start by considering combinatorial game theory in particular. In combinatorics, spatial relations can be modeled by graphs and time can be modeled by partial orders. Conway's game of life can be considered a zero-player combinatorial game; its spatial relations are basically a king's graph, where each position is related to its vertical, horizontal, and diagonal neighbors. Checkers has its own sort of graph, defined by the movements of units forwards and backwards.

Chess is a separate case because of the different kinds of motions of its units. This makes one wonder what the space of the world of chess means. Perhaps the most sensible interpretation, in general, is that the spatial relations are the union of the knight's graph and the queen's graph. This is a diameter two graph, which means that no matter which unit you are using there are squares that cannot be reached in a single move. This means chess space at least preserves some sense of locality, which wouldn't be there if the union graph had diameter one.

Then there are a variety of subgraphs of this graph that determine the kinematics of the different units. The minor piece graphs, that is the bishop graphs and the knight's graph, are all perfect. The bishop graphs are defined for the two square colors, and the knight's graph is also bipartite. The bishop graphs need to be defined on a subset of the total space, because bishops cannot change colors. The rook's graph has diameter two, which is shared by the queen's graph, since it is a supergraph of the rook's graph, as well as by the graph of the whole space. The king's graph is defined for the king. All of these are different graphs that describe the movement of different units. There are many different variants of chess as well.
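The diameter two claim can be checked with a short sketch in Python on an assumed 8x8 board; queen moves alone already give diameter two, so the union with the knight's graph does too:

```python
from collections import deque

def queen_moves(sq):
    # all squares a queen reaches in one move from sq on an 8x8 board
    r, c = sq
    out = set()
    for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1),
                   (1, 1), (1, -1), (-1, 1), (-1, -1)]:
        nr, nc = r + dr, c + dc
        while 0 <= nr < 8 and 0 <= nc < 8:
            out.add((nr, nc))
            nr += dr; nc += dc
    return out

def eccentricity(start, moves):
    # longest shortest path from start, by breadth-first search
    d = {start: 0}; q = deque([start])
    while q:
        u = q.popleft()
        for v in moves(u):
            if v not in d:
                d[v] = d[u] + 1; q.append(v)
    return max(d.values())

diameter = max(eccentricity((r, c), queen_moves)
               for r in range(8) for c in range(8))
print(diameter)  # 2
```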

Generally the most popular of these games tend to be board games, where the world is simply the board and units move around it. War games are another interesting case, where the games tend to have a world map. The world map can have rivers, roads, mountains, forests, plains, and other concepts analogous to real world geography. Likewise, units tend to come in a variety of forms and can be moved around in these worlds, which embody a concept of space. Analogous concepts occur in turn-based and real time strategy games, which have spatial maps that units can move around in. A different kind of game, something like an action game, tends to have only one main unit, but it still tends to have some spatial world it takes place in.

Tuesday, January 15, 2019

Metaphysics of space and time

In the post on mathematical abstractions of space and time, I described my view that space and time can be specified by the mathematical concepts of metric and order. These are mathematical abstractions of our physical concepts of space and time. When space and time are separated from their physical meaning in this way, they take on a metaphysical meaning.

When considering different metaphysical possibilities, like the notion of a two dimensional flat world like space, a space with more dimensions, or something like an automaton whose only notion of space is a graph and whose time is merely a discrete total order, quite obviously we are already divorcing the concepts of space and time from their actual physical meaning.

As space and time can be considered both metaphysically as well as physically, I would feel remiss without talking at least a little about the nature of physical space and time. There are two important sources of realizations about physical space and time: (1) special relativity and (2) general relativity. Special relativity at least tells us that the type of order that we will use for time must be a partial order rather than a total order. Many of the models mentioned before, like automata, tend to have absolute time, which makes them a bad match for reality. The partial order underlying spacetime could even be the fundamental structure of the universe, as described by causal set theory.

Then there is the issue of general relativity and the curvature of spacetime. There are two aspects to the curvature of spacetime: (1) the local curvature and (2) the global curvature. The global curvature appears to be mostly flat, so the curvature is largely caused by local phenomena in spacetime. The curvature is very much localized, because the gravitational force follows the inverse square law. It is localized both in space and time, because the curvature can change over time, which means there is not a constant geometry of the universe. This means that you cannot simply use a separate concept like a metric to describe space separately from time. Nonetheless, it is often useful to consider metrics as a separate concept, as described earlier.

Thursday, January 10, 2019

The nature of stacks

While registers can be better understood using graph theory, stacks can be better understood as total orders. Stacks, lists, vectors, and other related structures can all be classified as totally ordered data structures. Their underlying structure is that of a total order, but they are represented differently so that certain operations work differently on them. Stacks are optimized to deal with the top of the stack by pushing and popping elements from the top. The operations of push and pop are inverse to one another. This makes for an ideal implementation of stacks.
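A minimal sketch of this in Python, using a list as the totally ordered structure with the top of the stack at the end:

```python
stack = []
for x in [1, 2, 3]:
    stack.append(x)      # push onto the top of the stack

assert stack == [1, 2, 3]
assert stack.pop() == 3  # pop is the inverse of the last push
assert stack == [1, 2]   # the underlying total order is restored
```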

The relationship between lists and stacks is the reason why Lisp is so amenable towards programming on stack machines. Lists, which are fundamental to Lisp, and stacks are both totally ordered data structures. As a result, in order to make a Lisp dialect that works on a stack machine, all that you have to do is map between these two isomorphic data structures. Ideally, a Lisp dialect can be created for a stack machine which exposes all the aspects of its functionality to the user.

Wednesday, January 9, 2019

Register allocation through graph coloring

Instruction set architectures can be either register based or stack based. In general, register based machines are more low level, and they have a certain set of registers that variables can be stored in; otherwise values are spilled into memory. Register allocation is considerably important for at least two reasons. First, no significant high level programming system exposes registers to the programmer; generally, a high level architecture is based upon stacks, which abstract registers away, necessitating allocation. Secondly, even among register machines there is a great deal of difference between the register files they have, the number of registers, their size, and so on, so register allocation is needed for portability. The ideal today is to have a portable high level stack machine architecture (like the JVM or the CLR), which addresses both concerns.

The issue is really how to pass operands to functions. Registers place special conditions on how operands can be passed to functions, whereas stacks do not. A register based system of operand passing makes it so that you have to constantly consider where things are being allocated. Lisp requires a high level abstraction of operand passing, which is why it has always been so effective on top of a stack machine architecture. Any stack machine architecture can be made immediately amenable to Lisp by making push instructions into atomic forms and making other instructions into functional combiners. Fortunately, the problem of register allocation has an effective solution.

In order to deal with this problem, we start with a set of variables and perform liveness analysis in order to form an interference graph. The problem of register allocation can then be reduced to a problem of graph coloring, which we can deal with effectively using our understanding of graph theory. A coloring of a graph in this context is the same thing as a partition of the graph into independent sets. The independent sets correspond to registers, to which values can be allocated without affecting the other variables. One special case is SSA form (as used by the LLVM), for which the interference graph is chordal. Chordal graphs are a familiar class of graphs with polynomial time coloring, which makes register allocation even more readily computable.
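A minimal sketch of the reduction in Python (the interference graph is a made-up example, and greedy coloring is just one simple heuristic, not an optimal or chordal-aware allocator):

```python
# variables are nodes; an edge means the two variables are live at
# the same time and so cannot share a register (assumed example)
interference = {
    'a': {'b', 'c'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd'},
    'd': {'c'},
}

def greedy_color(graph):
    color = {}
    # color higher-degree nodes first, a common heuristic
    for node in sorted(graph, key=lambda n: -len(graph[n])):
        used = {color[n] for n in graph[node] if n in color}
        color[node] = next(c for c in range(len(graph)) if c not in used)
    return color

registers = greedy_color(interference)
print(registers)
# each color class is an independent set: one register per class
assert all(registers[u] != registers[v]
           for u in interference for v in interference[u])
```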

Since register allocation is a graph coloring problem necessary to program any register machine, you generally shouldn't program in the assembly language of a register machine like x86 or ARM unless you are a compiler developer. The register allocator and instruction optimizer can produce assembly code for these architectures better than, or at least as well as, any individual developer can. On the other hand, it is sometimes acceptable to program in a high level stack based assembly language, as such a language is sufficiently high level to be easily dealt with. Then you can understand a program all the way down to its assembly.

Sunday, January 6, 2019

Fundamentals of combinatorics

If any two types of structures can be considered to be the fundamentals of combinatorics it is graphs and orders. Other structures are constructed after these simpler ones, but these are two of the simplest and most fundamental structures. Orders such as lattices and graphs are widely used throughout combinatorics, which is why they are so fundamental.

It is worthwhile to consider the metric properties of graphs. Graphs aren't just another structure, they are a way of defining the distance relations between points. There are metric properties of graphs like eccentricity, center, radius, diameter, and so on. It is worth considering this, because metrics and orders are perhaps the most fundamental concepts in general. With all this about the central importance of graphs and orders said, it is worth considering that they are not completely separate subjects from one another. Orders and graphs can be formed from one another in the manner listed below.

Orders derived from graphs:
The most immediate order that can be derived from a graph is adjacency containment. This forms a preorder, in which one element precedes another when the set of nodes it is adjacent to is included in that of the other. For the class of trivially perfect graphs, this preorder fully determines the graph. For example, a cluster graph is equivalent to a symmetric preorder. Cotrivially perfect graphs are determined by their coadjacency containment. Otherwise, the preorder only provides partial information about the graph.
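A sketch of one common convention for this preorder in Python (the post doesn't fix a convention, so this choice is an assumption): x precedes y when every neighbour of x is y itself or a neighbour of y, shown on a star graph, which is trivially perfect:

```python
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}  # star graph K(1,3)

def containment_preorder(adj):
    # x <= y when N(x) is contained in N(y) together with y itself
    return {(x, y) for x in adj for y in adj
            if adj[x] <= adj[y] | {y}}

order = containment_preorder(adj)
# the center is maximal; the leaves are mutually equivalent below it
print(sorted(order))
```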

Graphs derived from orders:
The most immediate graph that can be formed from an order is the comparability graph. The complement of this is the cocomparability graph. Both the comparability graph and the cocomparability graph are instances of perfect graphs. The comparability graph is the most general graph that can be formed from a partial order, but it doesn't always suffice. Another graph that can be formed from a partial order, particularly a hereditary discrete partial order, is the covering graph, determined by the comparability of the transitive reduction of the partial order. This is a subgraph of the comparability graph.
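Both graphs can be sketched in Python on an assumed small partial order, divisibility on {1, 2, 3, 4, 6, 12}; the covering graph comes from the transitive reduction and is a subgraph of the comparability graph:

```python
elems = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0  # divisibility order

# comparability: pairs of distinct elements related either way
comparability = {(a, b) for a in elems for b in elems
                 if a != b and (leq(a, b) or leq(b, a))}

def covers(a, b):
    # b covers a: a < b with nothing strictly between them
    return leq(a, b) and a != b and not any(
        leq(a, c) and leq(c, b) and c not in (a, b) for c in elems)

covering = {(a, b) for a in elems for b in elems if covers(a, b)}
print(sorted(covering))
```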

Saturday, January 5, 2019

Mathematical abstractions of space, time, and kinematics

There is a question of what mathematical structures to use in order to represent space, time, and related concepts. It is well known that time can be represented as a partial order relating events that are caused by one another. Space, on the other hand, can be represented in multiple different ways, but generally we need some concept of a metric to determine distances between points. Entities in a space most readily affect other entities that are near to them, as determined by the distance relation.

One property of these abstractions of space and time is that they both have a topology associated with them: the metric topology and the order topology. There is a question as to whether space and time are discrete or continuous; this can be determined by the topology of these structures. In particular, a discrete space will have a discrete metric and discrete time will have a discrete order. Another issue is whether time is absolute or not. An absolute time will use a total order and a relativistic time will use a partial order. The overwhelming evidence for relativity favors a partial order for time. The theory of causal sets uses a discrete partial order to define spacetime, for example, but sometimes it might be useful to have a different abstraction of time, or to have a way of representing space, since geometry is clearly so fundamental.

  • Space : metric
  • Time : order

These are mathematical abstractions of space and time, and therefore they don't have to be used to model actual physical space; rather, they can be used to model anything analogous to them. If space is discrete, for example, it may be useful to define it simply by a connected graph; then the metric will simply be the path length. For example, for a cellular automaton like Conway's game of life, the space is the connected graph of the max degree eight neighborhood relation between squares, and the time will be an absolute, or total, discrete ordering. For a simple game like checkers, the space could be the squares pieces move on, and the time could be a discrete total ordering, and so on. The space could also be a manifold of any sort, like a two dimensional manifold, assuming there is a two dimensional space, as in flat world. Or one could imagine a manifold with more dimensions than our own, or with some extra structure to it. This gives us an idea as to how to model all the different possible universes and environments an agent can be embedded in or involved with.

Of course, space and time cannot be considered completely separate concepts, even though concepts like order and metric can be considered separately from one another. One reason that they are related to one another is kinematics, the study of motion, which connects the two together to form spacetime. If one accepts a process ontology, which says that everything is an event, then one could suppose that the meaning of space in a kinematic universe is to define the nature of motion events. It is helpful to examine the relation between them by determining whether the amount of spatial distance between events is greater than the temporal distance, to determine if they can be causally related. In particular, one can define the average speed as the spatial distance divided by the temporal distance, as it is typically defined in kinematics. In a discrete universe, the average speed is simply the average number of time units spent on discrete spatial jumps. With these concepts we can define the equivalent of the light cones used in relativity to determine causality.

Wednesday, January 2, 2019

2018 year in review

In terms of mathematics, one thing that was studied this year was topology. Topologies are ordinarily defined as certain set systems, with a restricted intersection closure condition, which limits the neighbourhoods of points. An Alexandrov topology, on the other hand, doesn't have this condition; it is simply equivalent to a preorder, and every point has a smallest neighbourhood set. A topology defined in the ordinary way typically has an interior but not a closure operation, as a consequence of this fact, so it is a Comoore family. There is a complementary Moore family which does have a closure operation, but only an Alexandrov family of this sort has both a closure and an interior, and as described already it is determined by its specialization preorder. All finite topologies are Alexandrov topologies, so at least in the finite case, topology doesn't tell us anything more than order theory.
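The finite case can be sketched in Python: the up-closed sets of a small assumed preorder form an Alexandrov topology, closed under arbitrary intersections as well as unions:

```python
from itertools import combinations

elems = ['a', 'b', 'c']
# assumed preorder: a <= b and a <= c, plus reflexivity
leq = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('a', 'b'), ('a', 'c')}

def is_upset(s):
    # a set is open when it is closed upward under the preorder
    return all(y in s for x in s for (x2, y) in leq if x2 == x)

# enumerate the whole powerset and keep the up-closed sets
opens = [frozenset(s) for r in range(4) for s in combinations(elems, r)
         if is_upset(frozenset(s))]

# Alexandrov condition: any intersection of open sets is open
for s in opens:
    for t in opens:
        assert s & t in opens

print(sorted(sorted(s) for s in opens))
```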

A Fréchet topology trivializes the specialization preorder aspect of the topology. A Hausdorff space further introduces the axiom that separate points are distinguished by separate neighborhoods, which is generally a good thing, as we want to deal with spaces in which points are separated in some sense. As a consequence, most of the time we want to deal with Hausdorff topologies. It happens that the order topologies defined by the open rays of a total order are all Hausdorff topologies, for example. All finite Fréchet topologies, and therefore all finite Hausdorff topologies as well, are discrete, so a Hausdorff topology can tell us nothing about a finite structure. There are other order topologies that are discrete, and all of them that are hereditary discrete can be embedded within the integers, as already mentioned. Order topology, however, describes a great deal of properties of other types of total orders, like scattered orders, dense orders, connected dense orders, etc.

The most interesting and important aspect of order theory is the study of boolean algebra and, relatedly, set theory. This study of set theory is the foundation from which all of mathematics inevitably derives. It is interesting, then, that the theory of total orders, which are so different from boolean algebras, produces an interesting type of theory on its own. A large part of the reason that this is the case is the study of order topology. Topology provides an organizing principle for the study of total orders. This was demonstrated by my paper on the subject, which, unlike its predecessors dealing with total orders, used topology as its foundation. It is worth bearing in mind that order topological principles can be applied to partial orders in general, by considering the total suborders of a partial order. For example, a partial order is hereditary discrete if all its chains are, or likewise, it is scattered if all of its chains are. This means topological principles can be used to better understand all of order theory in general.

Even so, a large part of the reason for studying topology is to better understand metric geometry. Geometry plays an absolutely fundamental role in physics and therefore in every field of science, so it is worth pursuing. It is generally agreed that real space is something like a manifold, and can be modeled as such. Every metric space has a topology of open sets associated with it, which as it happens is also a Hausdorff topology. I demonstrated that the order topology of the set of distances is related to the topology of the metric space, in that it partially determines what types of topologies can be formed. This helped to demonstrate the relation between order and metric, which are two structures that share the property of having a topology.

In terms of computing, I studied the theory of stack machines. In particular, I studied the Java virtual machine (JVM), its specification, how to write programs in Java bytecode, and I made a compiler for a language for the platform. I read parts of the Java virtual machine specification many times. I also studied the CLR and programmed in CIL, and I intend to do so some more if I have the time. These are the two virtual stack machines which seem to be most used. There are some Common Lisp implementations which also use stack-based virtual machines internally, which was interesting to study as well. I intend to use many Java libraries in my code, regardless of which language I use, with an understanding of how they are handled by the virtual machine.

One of the great things about stack machines like the JVM and the CLR is that, being stack machines, they can easily be made to correspond to Lisp, because lists and stacks can be made to correspond to one another, since they are both total orders. It doesn't hurt that there is also a vast number of libraries available on these platforms, including libraries dealing with things like machine learning, and so on. These platforms are a great thing for language developers as well, who have a high level platform to deal with, and who don't need to deal with register allocation, much less with the problems of producing code for particular platforms. The LLVM is a low level register based RISC-like machine, but at least it does register allocation, so you don't have to deal with the concrete register files of different platforms; things like that should be dealt with by good libraries.

I like to think about programming from the perspective of a compiler writer, especially now, as I have studied the theory of compilers to a greater extent than ever before. It's good to know all the steps the compiler is doing, like parsing, semantic analysis, code generation, etc., and what it is producing (like JVM or CLR bytecode), so that you are not programming with some black box. Certainly it is good to know how to use all the features of your machine, and to be better able to deal with problems of performance, so common when you deal with projects that get really large over the years. So compiler development is my great organizing principle of programming.

A recent thought I had on the theory of parsers, which I thought might be interesting, is to take a sequence and output a set system of indices of that sequence. This set system, partially ordered by inclusion, could even characterize the parse tree of the sequence, if it formed an upper tree under inclusion. This could work for both text files and perhaps even binary files in general. Binary file formats will each have an appropriate parser which determines which parts of the binary file are important, so that we have more than just some opaque stream of numbers. This would necessarily use set theory. In terms of code generation, I have been thinking about building an ontology of instructions, using pattern matching on instruction streams, and so on, to see how instruction sequences are isomorphic to certain programs that compile to them.
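A minimal sketch of this idea in Python, on an assumed s-expression-like input: each parenthesized group contributes the set of indices it spans, and these sets nest under inclusion, forming the parse tree:

```python
def span_sets(text):
    # collect, for each parenthesized group, the set of indices it spans
    spans, stack = [], []
    for i, ch in enumerate(text):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            start = stack.pop()
            spans.append(frozenset(range(start, i + 1)))
    return spans

spans = span_sets("(+ 1 (* 2 3))")
# the spans nest: each set is contained in the span of its parent node
print(sorted((min(s), max(s)) for s in spans))  # [(0, 12), (5, 11)]
```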