Friday, July 30, 2021

Graph theory on subsets

A common theme in the study of commuting graphs and their centralizers is the theory of adjacency relations between subsets of graphs. There are a number of possible definitions of adjacency between subsets, but for the purposes of algebraic graph theory and graph-theoretic algebra we are interested in only two:
  • Weak adjacency: $A \sim B \Leftrightarrow \exists a \in A, b \in B : a \sim b$
  • Strong adjacency: $A \sim B \Leftrightarrow \forall a \in A, b \in B : a \sim b$
The weak adjacency relation is used to define quotients in the category of graphs: if $P$ is a partition of $G$, then the relation of weak adjacency on $\frac{G}{P}$ makes the projection $\pi : G \to \frac{G}{P}$ into a graph homomorphism. The relation of strong adjacency is the key to the theory of centralizers: the centralizer of a set is the largest set it is strongly adjacent to.
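
As a concrete illustration, here is a minimal Python sketch of the two relations, assuming graphs are represented as adjacency dictionaries (the representation and function names are illustrative, not canonical):

```python
# A minimal sketch of the two adjacency relations, assuming a graph is given
# as an adjacency dictionary mapping each vertex to its set of neighbours.

def weakly_adjacent(graph, A, B):
    """A ~ B iff some a in A is adjacent to some b in B."""
    return any(b in graph[a] for a in A for b in B)

def strongly_adjacent(graph, A, B):
    """A ~ B iff every a in A is adjacent to every b in B."""
    return all(b in graph[a] for a in A for b in B)

# Example on the path graph a - b - c:
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
assert weakly_adjacent(path, {"a"}, {"b", "c"})        # a ~ b suffices
assert not strongly_adjacent(path, {"a"}, {"b", "c"})  # a is not adjacent to c
```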

These relations amongst subsets have an alternate categorical realisation: let $E \subseteq V^2$ be the symmetric set of ordered pairs of a graph and $\chi : V^2 \to 2$ its subobject classifier. Applying the image functor to the subobject classifier produces a function on subsets of $V^2$: \[ Im(\chi): \wp(V^2) \to \wp(2) \] Then the relationship between two vertex subsets of a graph can be determined by the image of the subobject classifier on the direct product $S \times T$ of the two subsets. Two subsets are strongly adjacent if $Im(\chi)(S \times T)$ is $\{1\}$, and they are weakly adjacent if $Im(\chi)(S \times T)$ is $\{1\}$ or $\{0,1\}$, so the basic relations between subsets of a graph can be formed this way.

The distance between two subsets $S$ and $T$ of $G$ can be defined as the minimal distance between any pair of elements of $S$ and $T$, although this does not form a metric. We can also define the boundary of a set $S$ to be all elements not in $S$ that are adjacent to a member of $S$; exterior elements are those in neither $S$ nor its boundary. Most aspects of graph theory can be generalized to sets in a number of different ways.
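
A sketch of these definitions, under the same hypothetical adjacency-dictionary representation as above:

```python
# Sketch of subset distance and boundary over the adjacency-dictionary
# representation; the distance is computed by breadth-first search from S.
def boundary(graph, S):
    """All vertices not in S that are adjacent to a member of S."""
    return {v for s in S for v in graph[s]} - set(S)

def subset_distance(graph, S, T):
    """Minimal distance between any pair of elements of S and T."""
    seen, frontier, d = set(S), set(S), 0
    while frontier:
        if frontier & set(T):
            return d
        frontier = {v for u in frontier for v in graph[u]} - seen
        seen |= frontier
        d += 1
    return float("inf")  # S and T lie in different components
```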

It is also customary to generalize operations in abstract algebra from elements to sets. This leads to the various quantales of sets of commutative rings, categories, semigroups, and so on. A case in point is the quantale of morphisms of a category. In graph theory, we can now engage in an analogous process of generalisation to subsets.

Proposition (relation between algebra and graph theory on subsets). Let $A$ be a semigroup and let $S,T$ be subsets of $A$. If $S$ and $T$ are strongly adjacent in the commuting graph of $A$, then \[ ST = TS \] As sets are the most basic objects in mathematics, it is fitting that the various operations in algebra and graph theory should be generalized from elements to sets.

See also:
Commuting graphs of subsemigroup lattices

Wednesday, July 28, 2021

Centralizer lattices of graphs

The commutativity necessary subalgebras of a semigroup or semiring form a lattice: the centralizer lattice of the commuting graph. Let $G$ be a graph and let $S,T$ be subsets of $G$. Then $S$ and $T$ are said to be strongly adjacent provided that every vertex of $S$ is adjacent to every vertex of $T$.

Definition. Let $G$ be a graph. Then subsets $S$ and $T$ are strongly adjacent iff \[ \forall s \in S, t \in T: s \sim t \] Let $\wp(G)^2$ be the product lattice of the power set of $G$. Then it can be shown that strong adjacency is hereditary within this lattice.

Proposition. Let $(A,B) \subseteq (C,D)$ in $\wp(G)^2$. Then $C \sim D \Rightarrow A \sim B$.

Proof. $(a,b) \in A \times B \implies (a,b) \in C \times D \implies a \sim b$. $\square$

The centralizer of a set $S$ is merely the largest set that is strongly adjacent to it. Thus, we have the following two containments arising from strong adjacency:

Proposition. Let $A \sim B$. Then $A \subseteq Centralizer(B)$ and $B \subseteq Centralizer(A)$.

The centralizer of a set is the intersection of the centralizers of its individual elements, so the centralizer map takes unions to intersections: it is a semilattice homomorphism from unions to intersections.
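
In code, this gives a one-pass computation of centralizers; again a hedged sketch over the adjacency-dictionary representation:

```python
# Sketch: the centralizer of a subset as the intersection of the neighbourhood
# sets (element centralizers) of its members, over an adjacency dictionary.
def centralizer(graph, S):
    result = set(graph)  # the centralizer of the empty set is everything
    for s in S:
        result &= graph[s]
    return result

# Lemma 1 below, in code: S <= centralizer(graph, centralizer(graph, S))
```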

Lemma 1. $A \subseteq Centralizer(Centralizer(A))$

Proof. The centralizer of a set is the largest set that forms a strongly adjacent pair with it. Therefore, $(A,Centralizer(A))$ forms a strongly adjacent pair. The centralizer of a set must be larger than any set that forms a strongly adjacent pair with it, so $A \subseteq Centralizer(Centralizer(A))$. $\square$

Lemma 2. The centralizer map $Centralizer : \wp(G) \to \wp(G)$ is antitone.

Proof. Let $S \subseteq T$ be subsets of $G$. Then \[ Centralizer(T) = Centralizer(S) \cap Centralizer(T-S) \subseteq Centralizer(S) \] Therefore, $Centralizer(T) \subseteq Centralizer(S)$. $\square$

Lemma 3. $Centralizer(Centralizer(Centralizer(A))) \subseteq Centralizer(A)$.

Proof. By lemma 1 we have that $A \subseteq Centralizer(Centralizer(A))$. By lemma 2, applying the centralizer to both sides reverses this inclusion. $\square$

These three basic lemmas are the building blocks for the characterization theorem of the centralizer function and its associated centralizer lattice.

Definition. Let $Centralizer : \wp(G) \to \wp(G)$ be the centralizer function. Then the centralizer lattice $L$ is the image of this function.

The centralizer function can clearly be restricted to the centralizer lattice, which produces a special action which is the subject of our theorem today.

Theorem. The edges of the centralizer function on $L$ are the maximal strongly adjacent pairs of $G$.

Proof. (1) Let $(A,B)$ be a maximal strongly adjacent pair. Then $A \subseteq Centralizer(B)$ and $B \subseteq Centralizer(A)$ by the definition of centralizers. By maximality, $Centralizer(B) \subseteq A$ and $Centralizer(A) \subseteq B$. Thus, $Centralizer(A) = B$ and $Centralizer(B) = A$, so that $A$ and $B$ are both centralizers of one another.

(2) Let $L$ be the centralizer lattice and let $C \in L$. It will be shown that $(C,Centralizer(C))$ forms a maximal strongly adjacent pair. By the fact that $C$ is a centralizer, there exists $A$ such that $C=Centralizer(A)$. Then consider the pair $(Centralizer(A),Centralizer(Centralizer(A)))$. The latter of these is maximal by the definition of the centralizer. The former is maximal by lemma 3, which together with lemma 1 shows that any set strongly adjacent to $Centralizer(Centralizer(A))$ is contained in $Centralizer(A)$. $\square$

Not only does this describe the effect of the centralizer function, it describes it on the level of individual edges and relates it to strong adjacency. By the fact that strong adjacency is hereditary, we can expect that it is fully determined by its maximal elements. These are precisely the edges of the centralizer action on the centralizer lattice.

The relation of strong adjacency is symmetric, so the set of all maximal strongly adjacent pairs is symmetric. The centralizer action merely switches between the two elements of a strongly adjacent pair. It follows that it is an involution. As the centralizer is antitone, it forms an isomorphism from $L$ to its dual.

Corollary. the centralizer lattice is self-dual

This is the one thing we can say for sure about the centralizer lattices of graphs in general. The centralizer lattice is generated by all element centralizers, so one other issue worthy of examination is the meet irreducibles of a centralizer lattice.

Proposition. Let $Centralizer(a) = Centralizer(B)$. Then $a$ is a lower bound of each element of $B$ in the adjacency preorder.

Proof. Let $b \in B$. By the fact that the centralizer is antitone, $Centralizer(B) \subseteq Centralizer(b)$, so by substitution $Centralizer(a) \subseteq Centralizer(b)$, whence $a \subseteq b$ with respect to the adjacency preordering. $\square$

Corollary. Let $G$ be a graph with an upper forest adjacency preorder. Then the element centralizers are all meet irreducible elements of the centralizer lattice.

Obvious examples where this is the case include triangle free graphs without leaf nodes and trivially perfect graphs. The key to constructing centralizer lattices of trivially perfect graphs is the following construction.

Definition. let $F$ be a family of lattices then the lattice theoretic disjoint union of $F$ is simply the lattice completion of the disjoint union of $F$ as posets.

The lattice congruence constructed by identifying all the connected components and the bounds separately has a height three lattice quotient.

Proposition. The centralizer lattices of trivially perfect graphs are precisely the lattices constructed from the single element lattice by repeated application of the lattice theoretic disjoint union.

Example. the following graph has a centralizer lattice of the form [figure omitted]. The centralizer lattices of trivially perfect graphs are planar. The centralizer lattices of graphs constructed from graph joins, like cographs, can also be computed, but they do not have as nice depictions, as they grow much faster. The centralizers in product graphs can also be determined componentwise.

See also:
Intersection semilattices of maximal cliques

Monday, July 26, 2021

Monotonicity of morphism properties

Recall that every category is associated with a suborder of the lattice of partitions of morphisms. It will be shown that the members of this lattice can be represented by either monotone or antitone maps over the preorders of a category. In order to do this, we must establish the following trinity of preorders on the elements of a category:
  • Objects
  • Hom classes
  • Morphisms
Then the monotonicity of the maps between these preorders will clarify the relationship between the object and morphism preorders and the order theoretic nature of categories.

Object preorder:
The object preorder is the most basic of the three. It fully determines thin categories.

Definition. let $C$ be a category, $Ob(C)$ its set of objects, and $x,y \in Ob(C)$. Then $x \subseteq y$ means that $\exists f : x \to y$.

Example. the total order $T_4$.

The hom class preorder:
The hom class preorder is simply the interval inclusion preorder of the object preorder.

Definition. let $(A,B)$ and $(C,D)$ be hom classes in $C$ then $(A,B) \subseteq (C,D)$ provided that $C \subseteq A$ and $B \subseteq D$.

Example. the interval inclusion ordering on $T_4$.
Morphic preordering:
The morphic preordering of a category is a generalisation of Green's $J$ preorder of a semigroup.

Definition. let $C$ be a category, $Arrows(C)$ its set or proper class of morphisms, and $x,y \in Arrows(C)$. Then $x \subseteq y$ provided that $\exists l,r : y = l\circ x \circ r$.

Monotonicity:
We can now show that the object and morphism preorders of a category are related by monotone and antitone relationships.

Theorem.
  1. $T: Arrows(C) \to Ob(C)^2$ the map from any morphism to its hom class is monotone
  2. $In : Arrows(C) \to Ob(C)$ the map from any morphism to its input object is antitone
  3. $Out : Arrows(C) \to Ob(C)$ the map from any morphism to its output object is monotone
Proof. (1) let $x : A \to B, y : C \to D \in Arrows(C)$ be morphisms, then $x \subseteq y$ implies that $y = l \circ x \circ r$ for some $l,r$. Thus, we have the chain of morphisms $C \xrightarrow{r} A \xrightarrow{x} B \xrightarrow{l} D$. By simple inspection we have $C \subseteq A$ and $B \subseteq D$, so that $(A,B) \subseteq (C,D)$ in the hom class ordering.

(2) $C \subseteq A$, so that the input object of $y$ is less than that of $x$. It follows that the input object map is antitone.

(3) $B \subseteq D$ implies that the output object map is monotone. $\square$

These are the three properties of morphisms inherent to the definition of a category, and each of them is either monotone or antitone. This defines the relationship between the object and morphism preorders of a category.

See also:
Categories for order theorists

Friday, July 23, 2021

Intersection semilattices of maximal cliques

Let $G$ be a graph, then the maximal cliques of the graph form a Sperner family whose meet subsemilattice closure is the semilattice $M(G)$ of intersections of maximal cliques.
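
A minimal sketch of computing $M(G)$, assuming the networkx library for maximal clique enumeration (the function name is hypothetical):

```python
# A sketch of M(G), assuming the networkx library: enumerate the maximal
# cliques, then close the family under pairwise intersection.
import networkx as nx

def intersection_semilattice(G):
    closed = {frozenset(c) for c in nx.find_cliques(G)}  # maximal cliques
    frontier = set(closed)
    while frontier:
        new = {a & b for a in frontier for b in closed} - closed
        closed |= new
        frontier = new
    return closed

# Example: the path graph P4 has maximal cliques {0,1},{1,2},{2,3}, whose
# intersections contribute {1}, {2}, and the empty set.
MG = intersection_semilattice(nx.path_graph(4))
```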

Example. let $A$ be a semigroup, ring, or semiring, then the maximal commuting clique intersections form a meet subsemilattice of $Sub(A)$.

The intersections of maximal cliques of $A$ are precisely the commutative subalgebras determined by the commuting graph. They produce a number of examples of commutative subalgebras in noncommutative algebra. Additionally, $M(G)$ provides an important order-theoretic account of graph properties.

Order theoretic properties of the intersection semilattice

The order types of the intersection semilattices $M(G)$ generated by maximal cliques have a complete order-theoretic classification.

Definition. let $a$,$b$ be elements of a poset then we say that $a \sim b$ provided that $\exists z: a \subseteq z \text{ and } b \subseteq z$.

Theorem. let $L$ be a meet semilattice, then it is a semilattice of intersections of the maximal cliques of a graph iff it satisfies the following two conditions:
  1. $L$ is generated by its maximal elements
  2. $L$ satisfies the upper bound sharing property: let $C$ be any clique in the upper bound sharing graph $\sim$ then $\exists z : \forall c \in C : c \subseteq z$.
Proof. Necessity of condition (1): the maximal cliques of a graph always form a Sperner family by maximality, so their intersections must always be generated by an antichain. In a meet semilattice, this antichain is the set of all maximal elements.

Necessity of condition (2): let $C_1,C_2 \in M(G)$ such that $C_1 \sim C_2$, then $C_1$ and $C_2$ are contained in some common clique $D$, which means that every element of $C_1$ is adjacent to every element of $C_2$. Let $F$ be a family of cliques of $M(G)$ that pairwise share upper bounds, then $F$ is pairwise adjacent, so that $\cup F$ forms a clique. There must be at least one maximal clique $Z$ with $\cup F \subseteq Z$, so that $\forall C \in F : C \subseteq Z$.

Constructive proof of sufficiency: let $L$ be a meet semilattice that satisfies these conditions. Then let $F$ be the maximal principal ideals of $L$. By condition (1) $F$ forms a sperner family. We can reconstruct the primal graph of the set system $F$ by $x \sim y$ if $\exists S \in F : \{x,y\} \subseteq S$, but this is precisely the upper bound sharing graph of $L$.

Let $C$ be any maximal clique of the upper bound sharing graph, then by condition (2) there is some clique $Z$ in $F$ that contains $C$, so that $C \subseteq Z$. It follows that any $\sim$ clique is contained in a clique of $F$. Let $S \in F$, then $S$ is a $\sim$ clique, because $S$ contains any pair of its own elements, and it is maximal because $F$ contains every $\sim$ clique. Therefore, $F$ forms a maximal clique family. $\square$

Suborder of adjacency principal filters:

Let $x \in G$. Then the intersection of all cliques of $M(G)$ containing $x$ is the adjacency principal filter $\uparrow x$ of $x$. This is the smallest member of $M(G)$ containing $x$. \[ \uparrow : G \to M(G) \] The image of $\uparrow$ is the subset of $M(G)$ consisting of all adjacency principal filters. The kernel of $\uparrow$ is the adjacency equality relation of $G$. The condensation of $G$ is the quotient of $G$ with respect to adjacency equality.
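
Continuing the sketch above, the adjacency principal filter of a vertex can be read off from $M(G)$ (this assumes the hypothetical `intersection_semilattice` sketch; since $M(G)$ is intersection closed, the result is itself a member of $M(G)$):

```python
# Sketch: the adjacency principal filter of x, computed as the intersection
# of all members of M(G) containing x.
def principal_filter(MG, x):
    containing = [S for S in MG if x in S]
    result = set(containing[0])
    for S in containing[1:]:
        result &= S
    return frozenset(result)

# The image of this map over all vertices is the suborder of adjacency
# principal filters, and its kernel is adjacency equality.
```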

Theorem. the join irreducibles (excluding the minimal element as the empty join) of $M(G)$ are necessarily adjacency principal filters.

Proof. let $S \in M(G)$ and suppose that $S$ is not an adjacency principal filter. Then $S$ has no elements unique to itself, so it is the union of its proper predecessors in $M(G)$, and hence their join, so it is join reducible. By contraposition, every join irreducible element of $M(G)$ is an adjacency principal filter. $\square$

We have the following bounds on the suborder of adjacency principal filters of $M(G)$: (1) the filters must contain all join irreducibles and (2) they can include up to the entirety of $M(G)$. The suborder of $M(G)$ consisting of adjacency principal filters is an important source of information, because it determines a graph up to adjacency equality.

Theorem. let $L$ be a semilattice with a subset $S$ consisting of adjacency principal filters. Then all graphs $G$ with $M(G)$ equal to $L$ whose adjacency principal filters form the suborder $S$ have the same condensation.

Proof. let $\uparrow : G \to L$ be the adjacency principal filter function for the semilattice $L$. Then we can recover $M(G)$ by recursion: \[ f(l) = \uparrow^{-1}(l) \cup \bigcup_{i \subset l} f(i)\] Since the graph $G$ can be recovered from its maximal cliques, every graph $G$ is determined by its $\uparrow$ function. The adjacency equality of $G$ is the kernel of $\uparrow$, which has no effect up to condensation. Condensed graphs are therefore determined by the image of $\uparrow$, which is a suborder of $M(G)$. $\square$

Corollary. the condensation types of graphs are determined by the images of $\uparrow$, which are subobjects of the semilattice $L$.

There are generally multiple condensation types associated to a given order type $L$. This is demonstrated below.

Example 1. the cyclic graph $C_4$ has a maximal clique intersection semilattice of the following form, with the adjacency principal filters highlighted: [figure omitted]. The following larger graph has the same intersection order type, but a different set of adjacency principal filters: [figure omitted].

Example 2. the path graph $P_4$ has an intersection semilattice of the following form: [figure omitted]. The following larger graph has a semilattice with more adjacency principal filters: [figure omitted]. In fact, every graph with this semilattice order type has the condensation of either a path, gem, bull, or the larger graph above.

Theorem. let $G$ be a graph, then $G$ is trivially perfect (i.e. $P_4,C_4$-free) iff $M(G)$ is a lower tree.

Proof. let $G$ be a trivially perfect graph, then removal of the center always disconnects the graph into connected components. The class of trivially perfect graphs is hereditary, so this can be applied recursively to get a lower tree decomposition of $G$. In the other direction, if the minimal element of $M(G)$ is a cut vertex, it can be removed to get connected components. In a lower tree this can be applied again recursively, to get a trivially perfect graph. $\square$

We can recover the clique graph of a graph, provided that we have the order type of $M(G)$ and its suborder of adjacency principal filters.

Proposition. let $G$ be a graph, with $M(G)$ its semilattice of intersections of maximal cliques, with $S$ its set of adjacency principal filters. Then if the minimal element of $M(G)$ is in $S$ the clique graph is a complete graph, otherwise it is equal to the independence graph of the maximal elements: two elements are independent provided they meet at the minimal element.

A graph is a clique graph provided that it has a Helly edge covering of its cliques. The Helly property of the independence graph of any semilattice $M(G)$ follows from the upper bound sharing property.

Graph theoretic properties:

The intersection semilattice has an order type and a suborder of adjacency principal filters, but it also contains additional data as a set system, such as the clique number of the graph, which would not be recorded otherwise.

Proposition. the clique number of $G$ is equal to the rank of $M(G)$

A quick look at some of the set-theoretic properties of a selected assortment of classes of graphs is presented below, pairing each class of graph $G$ with the corresponding form of $M(G)$:
  • Empty graph: rank one family
  • Complete graph: unique family
  • Cluster graph: pairwise disjoint family
  • Cocluster graph: convex family
  • Triangle free graph: rank two family
  • Cotriangle free graph: cotriangle free maximal intersections
  • Trivially perfect graph: lower tree ordered family
In a cluster graph, the maximal cliques correspond to connected components, so that $M(G)$ is a pairwise disjoint family. A cocluster graph is the graph join of empty graphs, which means that $M(G)$ is convex, running from the maximal cliques of $G$ down to the center.

Another case where we might want to compute $M(G)$ is when dealing with a $G$ composed from certain basic graph operations:
  • Graph coproducts: $M(C_1 + C_2) = M(C_1) \cup M(C_2) \cup \{\emptyset\}$.
  • Graph join: $M(\overline{\overline{C_1} + \overline{C_2}} ) = \{x \cup y: x \in M(C_1), y \in M(C_2)\}$
  • Graph products: $M(G \times H) = \{ C_1 \times C_2 : C_1 \in M(G), C_2 \in M(H) \}$
The computation of $M(G)$ from these elementary graph operations can aid in the computation of $M(G)$ for certain graphs $G$. Although it emerges as a subsemilattice of $Sub(A)$ for certain algebraic structures, $M(G)$ also encodes a great deal of information about graphs in general.

References:
[1] Clique graphs

[2] Trivially perfect graphs

Tuesday, July 20, 2021

Relation and matrix algebras

Matrices are sort of like labeled directed graphs. They both have a combinatorial part, determined by their underlying directed graphs, and an algebraic part determined by matrix semirings. The relational aspects of the matrix algebra will be briefly discussed.

Definition. let $A$ be a semiring, with matrix semiring $Mat_n(A)$. Then the underlying relation $R(M)$ of a matrix $M$ is the adjacency matrix with a one for each non-zero entry of $M$ and a zero otherwise. \[ R : Mat_n(A) \to Mat_n(T_2) \] The underlying relation $R(M)$ of a matrix $M$ satisfies an inequality, because relations form an ordered algebraic structure.
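
A minimal sketch of $R$, assuming matrices represented as nested Python lists over a semiring whose zero is the literal `0`:

```python
# Sketch of the underlying relation R: replace each non-zero entry with a one.
def underlying_relation(M):
    return [[1 if entry != 0 else 0 for entry in row] for row in M]
```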

Theorem. let $M,N$ be matrices in the semiring $Mat_n(A)$ then \[ R(M\circ N) \subseteq R(M) \circ R(N) \] Proof. let $(i,j)$ be an entry in the matrix $M \circ N$ then if $(i,j) \in R(M \circ N)$ this implies that $(M \circ N)_{(i,j)} \not= 0$. This entry can be written as a dot product $u \cdot v$, where $u$ is the $i$th row of $M$ and $v$ is the $j$th column of $N$: \[ u \cdot v = \sum_{k=1}^{n} u_k v_k\] Suppose that $u \cdot v \not= 0$, then there exists at least one index $k$ such that $u_k v_k \not= 0$, so that neither entry at that index is zero. If that were not the case, then $u \cdot v$ would have to be zero. To see this, notice that zero is an annihilator of every element in a semiring, so if at each index at least one of $u_k, v_k$ is zero then $\forall k : u_k v_k = 0$.

By substitution, if $\forall k : u_k v_k = 0$ then $u \cdot v = \sum_{k=1}^n 0$, which equals zero because the sum of any number of zeros is zero. It follows that there is at least one index at which both vectors are non-zero. Due to this entry, $R(u) \cdot R(v) = 1$, because all a dot product of $T_2$ vectors needs in order to equal one is a common non-zero entry at some index.

By the fact that $R(u) \cdot R(v) = 1$ we have that $(R(M) \circ R(N))_{(i,j)} = 1$, which implies that $(i,j) \in R(M) \circ R(N)$. Finally, by the fact that for all $(i,j)$ we have that $(i,j) \in R(M \circ N) \Rightarrow (i,j) \in R(M) \circ R(N)$ we know that $R(M \circ N) \subseteq R(M) \circ R(N)$. $\square$
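
A small illustration of the theorem, and of the fact that the containment can be strict over a ring such as $\mathbb{Z}$, where additive cancellation destroys non-zero entries (this reuses the hypothetical `underlying_relation` sketch above):

```python
# Over the integers, R(MN) can be strictly smaller than R(M) o R(N).
def bool_compose(P, Q):
    n = len(P)
    return [[1 if any(P[i][k] and Q[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]

M = [[1, 1], [0, 1]]
N = [[1, -1], [0, 1]]
MN = [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]                       # the identity matrix
assert underlying_relation(MN) == [[1, 0], [0, 1]]
assert bool_compose(underlying_relation(M), underlying_relation(N)) \
       == [[1, 1], [0, 1]]                     # strictly larger
```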

In a semiring without additive inverses or zero divisors, such as $\mathbb{N}$, we have $R(M\circ N) = R(M) \circ R(N)$, and $R$ turns into a homomorphism. This relationship allows us to form subalgebras of matrix rings from subalgebras of the relation algebra. In order to elaborate on this, we must first establish the following lemma.

Lemma. let $Mat_n(T_2)$ be a matrix semiring, then the power set $\wp(R)$ of a relation $R$ is a multiplicative set iff $R$ is transitive.

Proof. let $R$ be a relation, then in order for $\wp(R)$ to be composition closed, we must have for all $E_1, E_2 \in R$ that $\{E_1\} \circ \{E_2\} \subseteq R$. This implies $R^2 \subseteq R$, so that $R$ is a transitive relation. Conversely, if $R$ is a transitive relation and $A,B \subseteq R$, then $A \circ B$ consists of edges $E_1 \circ E_2$ such that $E_1 \in A$ and $E_2 \in B$, and all such edges are in $R$ since $R$ is transitive. Thus $\wp(R)$ is a multiplicative set. $\square$

We can use this to form subalgebras of matrix semirings. In particular, if $Mat_n(R)$ is a matrix semiring, then the subset of $Mat_n(R)$ consisting of matrices whose non-zero entries are all in a transitive relation $T$ forms a subalgebra. In short, this allows us to form subalgebras of matrices from ordering relations.

Theorem. let $R$ be a ring or a semiring, then the restriction $S$ of the matrix algebra $Mat_n(R)$ to those matrices whose non-zero entries are all in a transitive relation $T$ forms a subalgebra.

Proof. this proof splits up into additive and multiplicative components.

(1) addition is defined componentwise, so any restriction of $Mat_n(R)$ to a set of non-zero entries is an additive submonoid. Negation is also defined componentwise, so in the case $R$ is a ring, then the restriction $S$ is also an additive subgroup.

(2) let $M,N$ have their non-zero entries in $T$. Then because $\wp(T)$ forms a composition closed class of the matrix semiring $Mat_n(T_2)$ by the previous lemma, we have that $R(M) \circ R(N) \subseteq T$, and by the inequality theorem $R(M\circ N) \subseteq R(M) \circ R(N) \subseteq T$. This means that the non-zero entries of $M \circ N$ are a subset of $T$, which implies that $M \circ N \in S$, so that $S$ is multiplicatively closed. $\square$

It is worth pointing out that these restrictions don't necessarily have a multiplicative identity. This often happens in ring theory; for example, ideals need not contain the multiplicative identity.

Example 1. diagonal matrices are determined by an antichain $i = j$

Example 2. upper triangular matrices are determined by a total order $i \leq j$

Example 3. lower triangular matrices are determined by a total order $j \leq i$

Example 4. upper triangular matrices with zero main diagonal are defined by a strict total order $i \lt j$ and lower triangular matrices with zero main diagonal are defined by the strict total order $j \lt i$.

Example 5. take any adjacency matrix of a transitive relation, and you can form rings of matrices with non-zero entries in it
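
A hedged sketch testing the theorem on the upper triangular case of example 2, assuming numpy:

```python
# Sketch checking the theorem in a small case: integer matrices supported on
# the transitive relation i <= j (upper triangular) are closed under products.
import numpy as np

def supported_on(M, T):
    """True when every non-zero entry of M lies in the relation T."""
    return all((i, j) in T for i, j in zip(*np.nonzero(M)))

n = 3
T = {(i, j) for i in range(n) for j in range(n) if i <= j}  # transitive
A = np.triu(np.arange(1, 10).reshape(3, 3))
B = np.triu(np.ones((3, 3), dtype=int))
assert supported_on(A, T) and supported_on(B, T)
assert supported_on(A @ B, T)  # the product stays upper triangular
```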

References:
Triangular matrix

Saturday, July 17, 2021

Matrix algebras of distributive lattices

Let $L$ be a bounded distributive lattice, then $L$ is also a semiring. The idempotent semiring of matrices $Mat_n(L)$ has a natural presentation as an ordered algebraic structure, by its own additive partial order. Given a representation of $L$ by a family of sets then $Mat_n(L)$ consists of set valued matrices. Set valued matrices can then be defined by products of relation algebras.

Matrix semiring of $T_2$
The composition of two relations $R \circ S$ on a set $X$ is the relation whose entries $(a,b) \in R \circ S$ are defined by: \[ \exists x : (a,x) \in S, (x,b) \in R \] This can be expressed as a distributive lattice polynomial: \[ \bigvee_{x \in X} (a,x) \in S \wedge (x,b) \in R \] This distributive lattice polynomial corresponds to the inner product of $T_2$ valued vectors, so the entries of $R \circ S$ are the entries of a product of $T_2$ valued matrices. Thus, \[ Rel \cong Mat(T_2) \] In order to produce a proper correspondence, it is necessary to consider the different types of composition ordering and matrix representations. The most common representation of relations by matrices uses the row-first approach. In that case, the relation algebra might correspond to the opposite semiring.
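
A sketch of the correspondence, including the opposite-semiring caveat under the row-first convention (the helper names are illustrative):

```python
# Sketch of Rel ≅ Mat(T_2) on X = range(n), with relations as sets of pairs.
# With the row-first matrix convention, the composition R ∘ S defined above
# corresponds to the boolean matrix product taken in the opposite order.
def to_matrix(R, n):
    return [[int((i, j) in R) for j in range(n)] for i in range(n)]

def bool_product(P, Q):
    n = len(P)
    return [[int(any(P[i][k] and Q[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def compose(R, S):
    return {(a, c) for (a, x) in S for (x2, c) in R if x == x2}

R, S, n = {(0, 1), (1, 2)}, {(1, 0), (2, 1)}, 3
assert to_matrix(compose(R, S), n) == bool_product(to_matrix(S, n),
                                                   to_matrix(R, n))
```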

Set-valued matrices:
In order to construct a more general theory of matrices over distributive lattices, it is necessary to first construct a set-theoretic representation of $L$: \[ f : L \to \wp(X) \] This invites us to consider the theory of set-valued matrices. In these cases, the combination of set vectors $U,V$ with indices in $I$ is: \[ U \cdot V = \bigcup_{i \in I} U_i \cap V_i \] In that case, the membership of $x$ in the above set is a distributive lattice polynomial in $T_2$: \[ x \in U\cdot V = \bigvee_{i \in I} x \in U_i \wedge x \in V_i \] This suggests that the composition of set-valued matrices in $Mat_n(L)$ is a product of relation compositions in $Mat_n(T_2)$. This leads to the theory of relation decompositions.

Example. let $S = \{a,b,c\}$ \[ \begin{bmatrix} \{a\} & \{a,b\} & \{b\} \\ \{a,c\} & \{a\} & \{a,b\} \\ \{c\} & \{a,c\} & \{a\} \end{bmatrix} \] We can split this up into three adjacency matrices $M_a,M_b,M_c$: \[ M_a = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{bmatrix} , M_b = \begin{bmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, M_c = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix} \] This turns $M$ into an indexed family of relations. Composition of matrices in $Mat(L)$ then simply becomes the composition of a product of relations.
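
A sketch of this decomposition, with set-valued entries represented as Python sets:

```python
# Sketch of the relation decomposition of a set-valued matrix: one T_2
# adjacency matrix M_x per element x of the underlying set S.
def decompose(M, S):
    return {x: [[int(x in entry) for entry in row] for row in M] for x in S}

M = [[{"a"}, {"a", "b"}, {"b"}],
     [{"a", "c"}, {"a"}, {"a", "b"}],
     [{"c"}, {"a", "c"}, {"a"}]]
relations = decompose(M, {"a", "b", "c"})
assert relations["a"] == [[1, 1, 0], [1, 1, 1], [0, 1, 1]]  # M_a above
```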

The matrix algebra:
Let $L$ be a distributive lattice, with set theoretic representation $f : L \to \wp(S)$. Then we can naturally turn any $L$ matrix into a $S$ indexed family of relations, whose composition is the componentwise composition of relations. If $L$ is a finite boolean algebra, then this is equivalent to a product of relation algebras.

In the case that $L$ is not a boolean algebra, such as the ordered triple $T_3$ which has set theoretic representation $\emptyset, \{a\},\{a,b\}$ then we get an indexed family of relations $\{R_a,R_b\}$ except with the condition that $R_b \subseteq R_a$, so that matrices over $T_3$ are a totally ordered family of relations. The specialization order of the set theoretic representation corresponds to the inclusion ordering of the relation decomposition of the matrix representation.

The matrix semiring construction provides order-theoretic foundations for the relation algebra. The relation algebra $Rel$ can be construed as the matrix semiring over the ordered pair $T_2$, the simplest non-trivial case of a matrix semiring over a distributive lattice.

Category theory typically deals with the composition of morphisms one at a time; this limitation is dealt with by the use of idempotent semirings. The construction here provides a matrix theoretic representation for the simplest of these semirings, the semiring of families of morphisms of a complete thin groupoid.

Commutative monoids as semimodules

The generalization of rings and modules to semirings and semimodules is justified by the fact that we can use this new setting to make any commutative monoid into an $\mathbb{N}$ semimodule. Semimodules can then play a basic role in the theory of commutativity.

The basic construction:
Definition. let $R$ be a semiring and $M$ a commutative monoid, then $M$ is a (left) semimodule provided that it satisfies the following conditions:
  • Action $r(m+n) = rm + rn$.
  • Additivity $(r+s)m = rm + sm$.
  • Multiplicativity $(rs)m = r(sm)$.
  • Annihilation: $0m = r0 = 0$.
A unitary semimodule also has $1m = m$.

Theorem. let $M$ be a commutative monoid, then it is a $\mathbb{N}$ semimodule

Proof. (1) $(ab)^n = ababab...$ which by commutativity equals $a^n b^n$.
(2) $a^n a^m = \underbrace{a...}_{\text{n}}\underbrace{a...}_{\text{m}} = \underbrace{a...}_{\text{n+m}} = a^{n+m}$
(3) $(a^n)^m = \underbrace{(a^n)...}_{\text{m}} = a^{\underbrace{n...}_{\text{m}}} = a^{nm}$
(4) $a^0 = 1_M = (1_M)^n$. $\square$

This is a good first stepping stone towards a theory of commutative operations. The general idea is that commutative operations, as distinguished from non-commutative ones, are always defined over some semiring, which provides coefficients, multiplicities, etc.

Guide to commutative operations:
The fact that commutative groups are $\mathbb{Z}$ modules means that different types of commutative operations are interpreted in different kinds of ways. This is described below.
  • Commutative semigroups: let $S$ be a commutative semigroup, then we can adjoin an identity to it to get $S^1$ which is a $\mathbb{N}$ semimodule.
  • Commutative monoids are $\mathbb{N}$ semimodules.
  • Commutative groups are $\mathbb{Z}$ modules.
Commutative cancellative monoids can be embedded in $\mathbb{Z}$ modules. In particular, the free commutative $\mathbb{N}$ semimodule $F(S)$ can be embedded in the free commutative $\mathbb{Z}$ module $F^{\circ}(S)$. The techniques of module theory and semimodule theory are very useful in the theory of commutative operations.

Data structures:
The implementation of commutative operations in a computer algebra system requires the use of a number of different data structures:
  • Multisets: $\mathbb{N}$ semimodules
  • Signed multisets: $\mathbb{Z}$ modules
  • Real valued sets: $\mathbb{R}$ modules
The use of modules and semimodules allows us to better describe the sort of data structures that are used in commutative monoids and commutative groups. Commutative monoids ($\mathbb{N}$ semimodules) deal with multisets, and commutative groups ($\mathbb{Z}$ modules) deal with computations over signed multisets.
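
A sketch of the multiset case in Python, where `collections.Counter` plays the role of a multiset (the `scale` helper is illustrative):

```python
# Sketch: collections.Counter as a model of multisets, with the N-semimodule
# structure given by counter addition and scalar multiplication.
from collections import Counter

def scale(n, m):
    """The action of the natural number n on the multiset m."""
    return Counter({key: n * count for key, count in m.items()})

a, b = Counter("aab"), Counter("bc")
assert a + b == Counter({"a": 2, "b": 2, "c": 1})    # commutative addition
assert scale(2, a + b) == scale(2, a) + scale(2, b)  # the action distributes
```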

Friday, July 16, 2021

Additive preorders of semirings

Semirings come from two major sources: classical ring theory and ordered algebraic structures. Every semiring $R$ is associated with an additive preorder: the natural increasing action preorder of the commutative additive monoid of $R$, which has $a \sqsubseteq b \Leftrightarrow \exists c : a+c = b$. This breaks semirings up into two basic classes: those whose additive preorders are symmetric and those whose additive preorders are antisymmetric.

Those semirings with symmetric preorder are precisely rings. To see this, notice that all J-total monoids are groups. On the other hand, it can be shown that semirings with antisymmetric preorder are ordered algebraic structures. This divides semirings along the lines in which they most often appear in applications. Beyond these, there are all the in between cases. The additively ordered semiring $\mathbb{N}$ is a special case: although it is additively J-trivial, it is also cancellative, so that it can be embedded in the ring $\mathbb{Z}$. Most additively ordered semirings, such as the idempotent semirings emerging from quantales, cannot be embedded in rings under any circumstances. Numerical semigroups are also cancellative, commutative, and J-trivial, so they are another source of semirings of this type.

Semirings with antisymmetric addition also emerge from classical ring theory. The ideals of a commutative ring form an idempotent semiring by the addition and multiplication of ideals. Radical ideals form a bounded distributive lattice $Spec(R)$ which means that they can also be represented as semirings under their lattice operations.

Theorem. let $R$ be a semiring with additive preorder $\sqsubseteq$. Then addition and multiplication are both monotone over this preorder.

Proof. (1) suppose $a \sqsubseteq b$, then $a+c = b$ for some $c$. Let $d$ be another element, then $(d+a)+c = d+b$, so $d+a \sqsubseteq d+b$ by $c$.

(2) suppose $a \sqsubseteq b$, then $a+c = b$ for some $c$. Let $d$ be another element, then $d(a+c) = da + dc = db$, so $da \sqsubseteq db$ by $dc$. $\square$

By the preceding theorem, every semiring is a preordered algebraic structure. This is even true in the case of rings; it is just that the additive preorder on a ring is the complete relation, and every map to a complete relation is monotone. Complete preorders are maximal in the hom class comparison ordering induced by the underlying set functor.

The more interesting case is when the semiring is not a ring, so that the preorder carries real information. Further, every semiring with antisymmetric preorder is an ordered algebraic structure. $\mathbb{N}$ is a good first example: both addition and multiplication are monotone over the additive ordering of the natural numbers. This is the subject of the following corollary.

Corollary. let $R$ be a semiring with additive partial order $\sqsubseteq$. Then $R$ is an ordered semiring with respect to its additive ordering.

This covers the second basic case of semirings besides rings: those emerging from ordered algebraic structures. This idea neatly divides semirings into two basic classes: rings and ordered semirings. The other semirings have a mix of symmetry and antisymmetry of some kind.

Idempotent semirings are a very promising case, because they allow us to define all kinds of algebraic operations on sets: such as the arithmetic of ideals of semigroups and rings and the composition of morphism systems of a category. The use of idempotent semirings in category theory gets around the use of partial operations. There are also links between idempotent semirings, quantales, and hyperoperations that could be of use in number theory.

Wednesday, July 14, 2021

Commutativity necessary subrings

The commuting graphs of semigroups induce a number of different commutativity necessary subsemigroups: centralizers, maximal commuting cliques, commutative principal filters, the center, etc. As it happens, all of these concepts can be transferred to non-commutative rings, so that we can understand non-commutative algebra using graph-theoretic techniques.

Definition. let $R$ be a ring then the commuting graph $Com(R)$ of $R$ is the commuting graph of its multiplicative semigroup. \[ Com(R) = \{(x,y) : xy = yx \} \] The first step towards proving the existence of commutativity necessary subrings for a ring $R$ is to demonstrate that centralizers are subrings. The existence of the other types of commutativity necessary subrings immediately follows.

Theorem. let $R$ be a ring, $x$ an element of $R$, then the centralizer $S$ of $x$ is a subring. \[ S = \{ c : cx = xc \}\] Proof. (1) by basic semigroup theory, the centralizer is a multiplicative subsemigroup.

(2) by the nullary distributive law we have \[ \forall x : x*0 = 0 = 0*x \] It follows that $0$ is a central element, which implies $0 \in S$.

(3) by the binary distributive law we have \[ x(a+b) = xa + xb = ax + bx = (a+b)x \] So the centralizer $S$ is additively closed.

(4) let $c \in R$ then $(-c)x$ is equal to $-(cx)$ by the distributive law and the definition of additive inverses. \[ cx + (-c)x = (c-c)x = 0*x = 0 \] In the other direction, $x(-c)$ is equal to $-(xc)$ for the same reasons. \[ xc + x(-c) = x(c-c) = x*0 = 0 \] Thus if $c$ is in the centralizer $S$ then so is $-c$ \[ (-c)*x = -(cx) = -(xc) = x*(-c)\] Thus $S$ is also negation closed, so that $S$ is a subring. $\square$

The first three conditions are equally applicable to semirings. Semirings axiomatically have both the binary and nullary distributive laws, which ensures that centralizers are additive submonoids. In the ring case, we see that centralizers also form additive subgroups.

All other commutativity necessary subrings can be formed by intersections of centralizers. The center is the intersection of all centralizers. Maximal commuting cliques are equal to the intersection of the centralizers of all their elements. Commutative principal filters are formed by the centralizer of the centralizer.

Centralizers:
Let $Mat_n(F)$ be the non-commutative ring of matrices over a field $F$. Then the centralizers of $Mat_n(F)$ can be computed by solving systems of linear polynomial equations. In general, the previous theorem shows that centralizers are always subrings.
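
For a toy illustration, over a small finite coefficient ring the centralizer can even be computed by brute force rather than by linear algebra; a sketch over $Mat_2(\mathbb{Z}_2)$:

```python
# Sketch: brute-force computation of the centralizer of a matrix in the finite
# ring Mat_2(Z_2), in place of solving the linear commutation equations.
from itertools import product

def mul(A, B, p=2):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

X = ((1, 1), (0, 1))
mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
centralizer = [M for M in mats if mul(M, X) == mul(X, M)]
# By the theorem, this set is closed under addition and multiplication,
# so it forms a subring of Mat_2(Z_2).
```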

Centers:
The center of any non-commutative ring is a commutative ring, over which the non-commutative ring is a ring extension. For example, the center of the quaternions $\mathbb{H}$ is the reals $\mathbb{R}$, so that $\mathbb{H}$ is a four dimensional vector space over $\mathbb{R}$. In a non-commutative ring of matrices like $Mat_n(F)$, the center consists of the commutative ring of scalar matrices.

Commutative principal filters:
The commuting preorder of a ring is the adjacency preorder of its commuting graph. Then the multiplicative iteration preorder is a suborder of the commuting preorder. The principal filters of this preorder are all commutativity necessary subrings. The center is the minimal commutative principal filter.

Maximal commuting cliques:
Let $G$ be a finite graph, then every element of $G$ is contained in some maximal clique. In the infinite case we need Zorn's lemma, and the fact that the union of a chain of cliques is a clique. Then every element of an infinite graph is in a maximal clique as well. This implies that every element in a ring is contained in some commutative subring.

Theorem. monogenic rings are commutative

Proof. By Zorn's lemma every element is contained in a maximal clique, which is a commutative subring. Then every element of a ring is embedded in a commutative subring. A monogenic subring is the minimal subring containing a given element, so it must be embedded in some greater than or equal commutative subring. The property of commutativity is hereditary, so monogenic rings are commutative. $\square$

This should work unless there is some issue with Zorn's lemma in the infinite case. This generalizes the well known fact that all monogenic semigroups are commutative, and it shows that rings can be built up from commutative building blocks.

Implications of Lagrange's theorem:
Let $R$ be a finite ring, then Lagrange's theorem implies that all the different commutativity necessary subrings have orders that divide the order of the ring $R$. In particular, the degrees of each element of the commuting graph are divisors of its order. This means that commuting graphs of rings are a lot like those for groups, but they aren't completely the same.

For example, the commuting graphs of rings don't necessarily have the same degrees of commuting equality that are implied for groups by Cauchy's theorem and the number of iteration equal elements $\phi(n)$ of prime cyclic groups of order $n$. Even so, Lagrange's theorem alone places a significant constraint on the set of possible multiplicative semigroups of rings.

In the case of finite division rings, subalgebras must satisfy Lagrange's theorem in two different ways, corresponding to the two types of subgroups in a division ring: \[ d \mid n \] \[ (d-1) \mid (n-1) \] This places even greater constraints on the possible commuting graphs of division rings, but we can avoid worrying about that altogether by Wedderburn's little theorem [1], which shows that all finite division rings are commutative. The double Lagrange's theorem still applies to finite fields.

See also:
Commutativity necessary subsemigroups

Subalgebra lattices of finite fields

References:
[1] Wedderburn's little theorem

Monday, July 12, 2021

Categorical approach to relation algebra

In the same way that the composition of morphisms in a category is an extension of the composition in a preorder, the composition of sets of morphisms is an extension of relation algebra. The relation algebra on a set is the quantale of morphism systems of the complete thin groupoid. As a result, morphisms are enriched ordered pairs and morphism systems are enriched relations.

The composition of morphism systems in a category is simply the set of all compositions of morphisms in each set, whenever they exist. If there are no compatible morphisms then the empty set is returned. Aside from the empty compositions, the quantale of morphism systems is similar to the quantales of sets of other algebraic structures.

Semiring of morphism systems:
Let $C$ be a category and $F,G$ families of morphisms in $C$. Then $F \circ G$ is the set of all compositions of morphisms in $F$ and $G$, whenever they exist. This can be expressed in symbols as: \[ F \circ G = \bigcup_{m \in F, n \in G} \left\{ \begin{array}{ll} \emptyset & \text{Input}(m) \not= \text{Output}(n) \\ \{m \circ n\} & \text{otherwise} \end{array} \right. \] The resulting structure $(\wp(Arrows(C)),\circ)$ is a monoid with identity $I$ equal to the family of all identity morphisms of the category $C$. The composition function of a category has existence associativity, so that if $(ab)c$ exists then so does $a(bc)$, and the two coincide. Therefore, the composition of morphism systems is also associative.
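
A minimal sketch of this composition, specialized to the complete thin groupoid discussed later in this post, where morphisms are simply ordered pairs (the data structures are illustrative):

```python
# Morphism systems in the complete thin groupoid: morphisms are ordered pairs
# (a, b), with (b, c) ∘ (a, b) = (a, c); systems of them are relations.
def compose_systems(F, G):
    """{m ∘ n : m in F, n in G, whenever the composite exists}."""
    return {(n[0], m[1]) for m in F for n in G if m[0] == n[1]}

F, G, H = {(1, 2)}, {(0, 1)}, {(3, 1)}
assert compose_systems(F, G) == {(0, 2)}
assert compose_systems(G, F) == set()  # no composable pairs: empty set
# One of the distributive laws of the semiring:
assert compose_systems(F, G | H) == compose_systems(F, G) | compose_systems(F, H)
```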

A corresponding semilattice structure $(\wp(Arrows(C)), \vee)$ can be defined by the union of morphism systems. This is a unital semilattice, by the fact that $\emptyset$ is an identity with respect to unions. In order to show that $\wp(Arrows(C))$ is a semiring, we only now need to show that these two monoids are related to one another.

Theorem. $(\wp(Arrows(C)),\vee,\circ)$ is a semiring

Proof. for the sake of notation it is convenient to replace the piecewise description of the composition of two morphisms by the composition of two singleton sets $\{m\} \circ \{n\}$. Then we can simply rewrite the definition of composition, moving the unions around a little in order to get distributivity: \[ F \circ (G \cup H) \] \[ = \bigcup_{m \in F, n \in G \cup H} \{m\} \circ \{n\} \] \[=\bigcup_{m \in F} ( \bigcup_{n \in G \cup H} \{m\} \circ \{n\} )\] \[=\bigcup_{m \in F}( ( \bigcup_{n \in G } \{m\} \circ \{n\}) \cup ( \bigcup_{n \in H} \{m\} \circ \{n\})) \] \[ = (F\circ G) \cup (F \circ H) \] It follows that $F\circ(G \cup H) = F\circ G \cup F \circ H$; the other distributive law $(F \cup G) \circ H = F \circ H \cup G \circ H$ follows by exactly the same reasoning. In order to confirm the structure is a semiring, we finally need to demonstrate that the additive identity $\emptyset$ is a multiplicative zero. In fact it is, because $\emptyset \circ A = A \circ \emptyset = \emptyset$. It follows that $\wp(Arrows(C))$ is a semiring. $\square$

The relationship between the semiring of a category and its partial semigroup is that the semiring is a multi-valued extension of the ordinary composition of morphisms. In particular, the composition function of a category $(C,\circ)$ can be recovered as a partial semigroup from the subset of the composition monoid of morphism systems consisting of all singleton morphism systems.

The composition of singleton morphisms in the case where no composition exists is the empty set. The partiality of the composition function of a category can be gotten around by adjoining a zero element. This is not necessary when the category $C$ is a monoid.

Semiring of relations:
Let $K_X$ be the complete thin groupoid on $X$. Then the morphisms of $K_X$ are ordered pairs $(a,b)$ and composition of ordered pairs $(a,b)$ and $(b,c)$ is $(a,c)$. Then since morphisms are ordered pairs, morphism systems are relations. Then the composition of relations is a special case of the composition of families of morphisms in a category: \[ R \circ S = \{ (a,c) : \exists (a,b) \in S, (b,c) \in R : (a,b) \circ (b,c) = (a,c) \}\] It follows that $(\wp(Arrows(K_X)),\circ)$ is equal to the composition of relations, and so relation algebra can be recovered by the composition of families of morphisms in a category. By the preceding theorem, $(\wp(Arrows(K_X)),\vee,\circ)$ is also a semiring as a special case of a family of morphism systems of a category.

The semiring congruence of a category:
The composition of morphism systems of a category is an extension of the composition of relations. The semiring of morphism systems of a category extends relation algebra on the semiring level.

Definition. let $F \in \wp(Arrows(C))$ be a morphism system of a category $C$. Then the underlying binary relation $T(F)$ is the subset of $Ob(C)^2$ consisting of all ordered pairs of objects of the morphisms of $F$: \[ T : \wp(Arrows(C)) \to \wp(Ob(C)^2) \]

Theorem. let $C$ be a category then the underlying binary relation forms a congruence of the composition of morphism systems.

Proof. let $F,G$ be morphism systems with underlying binary relations $R,S$. Then if $m : A \to C \in F \circ G$ we have that there exists $f : B \to C$ in F and $g : A \to B$ in G such that $m = f \circ g$. Then $(B,C) \in R$ and $(A,B) \in S$ implies $(A,C) \in R \circ S$. In the other direction, suppose that $(A,C) \in R \circ S$ then there exists $f : B \to C$ and $g : A \to B$ such that $f \circ g : A \to C$, so that $(A,C)$ is in the underlying binary relation of $F \circ G$. $\square$

Theorem. let $C$ be a category then the underlying binary relation forms a congruence of the union of morphism systems.

Proof. let $F,G$ be morphism systems with underlying binary relations $R,S$. Then if $m : A \to B$ is in $F \cup G$ then either (1) $m \in F$ in which case $(A,B) \in R$ or (2) $m \in G$ in which case $(A,B) \in S$. Therefore, $(A,B) \in R \cup S$. In the other direction, every ordered pair in $R \cup S$ is an ordered pair of $F \cup G$, so the two coincide. $\square$

Corollary. the underlying binary relation forms a congruence of the semiring of morphism systems of a category.

The fact that the underlying binary relation is a semiring congruence means that we can say that the composition of morphism systems in a category is an extension of relation algebra. This is formalized by a semiring homomorphism.

Theorem. $T : \wp(Arrows(C)) \to \wp(Ob(C)^2)$ is a semiring homomorphism from the semiring of morphism systems of a category to the semiring of relations.

Proof. the function $T$ is defined by the image functor, so it is a union homomorphism. $T(\emptyset) = \emptyset$, so $T$ also preserves additive identities. Finally, each morphism in $R \circ S$ is some composite $g \circ f : A \to C$ with $f : A \to B \in S$ and $g : B \to C \in R$, such that $(A,B) \in T(S)$ and $(B,C) \in T(R)$, so $(A,C) \in T(R) \circ T(S)$. In the other direction, $T(R) \circ T(S)$ consists of all ordered pairs $(A,C)$ such that there is a morphism $f: A \to B \in S$ and a morphism $g : B \to C \in R$; then $g \circ f \in R \circ S$, so $(A,C) \in T(R \circ S)$. Therefore, $T(R \circ S) = T(R) \circ T(S)$. $\square$

The equivalence classes of $T$ do not form intervals, so the underlying binary relation cannot form a lattice congruence. It suffices then to show that they form a semiring congruence, which we have done here.

Relation algebra:
The definition of the composition of morphism systems is based upon the union of the compositions of each individual pair of morphisms, so that composition can be union decomposed. Therefore, $\wp(Arrows(C))$ clearly forms a quantale for any category $C$, but this isn't enough to recover the full relation algebra structure. In order to do that we must recall that $K_X$ is a groupoid: \[ R^{-1} = \{ m^{-1} : m \in R \} \] The inverse of a morphism system of a groupoid is simply the set of inverses of each of its morphisms. With this groupoid-theoretic characterization we have a full characterisation of the relation algebra: $(S,\vee,\wedge, 0,1,\circ,I,^{-1})$.
  • $\vee,\wedge$ are the union and intersection of relations. $0$ is the empty relation and $1$ is the complete relation.
  • $\circ$ is the composition of morphism systems
  • $I$ is the set consisting of all identity morphisms
  • $R^{-1}$ maps each morphism to its inverse
The residuals $R \triangleleft S$ and $R \triangleright S$ are $R \circ S^{-1}$ and $R^{-1} \circ S$, and so they are special cases of operations that can be defined for any morphism system of a groupoid. This turns the relation algebra of the complete thin groupoid into a residuated boolean algebra.

We can relate the relation algebra to general categories by the underlying binary relation congruence. In the special case of the morphism system of a groupoid, this also forms a unary congruence of the inverse operation.

Theorem. let $C$ be a groupoid with inverse operation $^{-1} : Arrows(C) \to Arrows(C)$. Then $T(R^{-1}) = T(R)^{-1}$: the underlying binary relation of the inverse of a morphism system is the inverse of its underlying binary relation.

Proof. let $f : A \to B$ be a morphism in a groupoid; it has ordered pair $(A,B)$, which has inverse $(B,A)$. Its inverse is $f^{-1} : B \to A$, which has ordered pair $(B,A)$, so the two coincide. This then applies to morphism systems on the level of images under these functions. $\square$

Although the morphism systems of categories always extend relations, the special properties of the relation algebra come from their embedding in a groupoid.

Properties of relations:
The properties of relations, expressed in relation algebra, are analogous to those which can be expressed in any category. The morphism system of a subsemigroupoid can be expressed by the condition: \[ R^2 \subseteq R \] The condition $R^2 \subseteq R$ in any quantale expresses composition closure. In a monoid, this defines subsemigroups of the monoid, but in the relation algebra this defines a transitive relation. Categories are basic extensions of transitivity, so transitive relations can be defined like this. The dual condition is: \[ R \subseteq R^2 \] This means that a given morphism system is self-factorizable. The idempotents of $\wp(Arrows(C))$ are clearly all morphism systems that satisfy both conditions. Relation composition forms an ordered semigroup, so that composition is bimonotone. In a number of cases it is also increasing.
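
A sketch testing these two quantale conditions for relations as sets of pairs:

```python
# Sketch: R² ⊆ R is transitivity (composition closure) and R ⊆ R² is
# self-factorizability; idempotents satisfy both.
def square(R):
    return {(a, c) for (a, b) in R for (b2, c) in R if b == b2}

R = {(0, 1), (1, 2), (0, 2)}
assert square(R) <= R            # transitive: R² ⊆ R
assert not (R <= square(R))      # but not self-factorizable
```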

When $R$ is reflexive, so that $I \subseteq R$, composition with $R$ is clearly increasing. The same is true of morphism systems of a category with respect to the identity system $I$: $I \subseteq R$ means that $S = S \circ I \subseteq S \circ R$ and $S = I \circ S \subseteq R \circ S$. The dual condition $I \cap R = \emptyset$ defines irreflexivity. Finally, coreflexive relations $R \subseteq I$ form a distributive lattice subsemiring.

Theorem. coreflexive relations $R \subseteq I$ form a distributive lattice subsemiring.

Proof. (1) union closure follows from the fact that the coreflexive relations $R \subseteq I$ form a principal down set, as does the membership of $\emptyset$. (2) two loop edges $(a,a)$ are only composable when they are equal, so the composition of coreflexive relations is intersection, which is again closed because $R \subseteq I$ forms a principal down set. This means multiplication is a semilattice operation, so that as a semiring the coreflexive relations form a distributive lattice. $\square$

Systems of identity morphisms still form a distributive lattice subsemiring in any category. The other properties of relations require the use of the inverse, which means that they are only applicable to a groupoid.
  • Symmetric : $R^{-1} = R$
  • Antisymmetric : $R^{-1} \cap R \subseteq I$
  • Functional : $R^{-1} \circ R \subseteq I$
  • Inverse functional: $R \circ R^{-1} \subseteq I$
Symmetry corresponds to inverse closure in a groupoid. Clearly, the combination of symmetry, transitivity, and reflexivity defines wide subgroupoids. In the case of the complete thin groupoid $K_X$, we already know that this corresponds to the lattice $Part(A)$.

Corollary. the lattice $Part(A)$ is isomorphic to the family of morphism systems $R$ of the complete thin groupoid that satisfy $I \subseteq R$, $R^{-1} \subseteq R$, and $R^2 \subseteq R$.

We see how a number of concepts like subsemigroupoids, wide subcategories, and wide subgroupoids can be defined by the algebra of morphism systems. In the case of wide thin groupoids, these create the familiar classes of relations.

Subsemigroups of relations:
Every relation algebra is associated with the following composition subsemigroups: $PT_X$ consisting of all partial transformations, $PS_X$ the symmetric inverse semigroup of charts, $T_X$ the full transformation monoid, and the symmetric group $S_X$ of permutations. Each of these subsemigroups is a suborder of $Rel$. The full transformation monoid and the symmetric group form antichains, so we can ignore them for now. On the other hand, $PT_X$ and $PS_X$ both have non-trivial partial orders so we would like to relate the order on them to their semigroups.

Recall that coreflexive relations form a distributive lattice subsemigroup. Let $E$ be the composition subsemigroup of coreflexive relations of $Rel$. The composition of coreflexive relations is decreasing, so its action preorder is a suborder of the dual of the inclusion order of relations.

Input composition by a coreflexive relation is a domain restriction and output composition is a codomain restriction. Thus, the decreasing left actions by $E$ form domain restrictions of relations, which can be used to recover the inclusion ordering of $PT_X$ and $PS_X$. The natural ordering of an inverse semigroup in this context is the same as the inclusion ordering of relations.
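
A sketch of domain restriction as composition with a coreflexive relation:

```python
# Sketch: composing with a coreflexive relation restricts the domain of a
# relation, recovering the inclusion order on partial transformations.
def restrict_domain(R, A):
    E = {(a, a) for a in A}  # a coreflexive relation: E ⊆ I
    return {(a, c) for (a, b) in E for (b2, c) in R if b == b2}

R = {(0, 5), (1, 6), (2, 7)}  # a partial transformation as a relation
assert restrict_domain(R, {0, 1}) == {(0, 5), (1, 6)}
```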

Wednesday, July 7, 2021

Lattice ordered semimodules of multisets

The free commutative monoid $F(S)$ on a set $S$ consists of all multisets on $S$. This has a natural semiring action by $\mathbb{N}$, so $F(S)$ is a semimodule. The algebraic preorder of $F(S)$ is not only a partial order, since $F(S)$ is J-trivial, but also a lattice.

Commutativity and semimodule theory:
A semiring action on a commutative monoid is defined by a structure preserving map from $R$ to $End(M)$. This requires that three conditions must be satisfied:
  • Action: $(ab)^n = a^nb^n$
  • Additivity: $a^n a^m = a^{n+m}$
  • Multiplicativity $(a^n)^m = a^{nm}$
Every commutative monoid is a $\mathbb{N}$ semimodule. In a non-commutative monoid we can get the last two conditions, but we cannot get the first. Thus, the techniques of semimodule theory cannot be applied to non-commutative monoids.

Commutativity and lattice theory:
The algebraic preordering of the free commutative semigroup is a lattice. This provides a natural link between commutative operations and lattices. Additionally, it is a distributive lattice of multisets.

On the other hand, the algebraic preordering on a non-commutative free semigroup is in general not a lattice. It is a partial order which orders lists by consecutive subsequences. Thus, the techniques of lattice theory cannot be applied to non-commutative monoids. It seems that the lattice ordered semimodule structure is something that can only be applied in the commutative case.

Examination of the ordered algebraic structure:
An ordered monoid is an internal monoid in the category of preorders and monotone maps. It is not hard to see that $(\mathbb{N},\le,+)$ is an ordered monoid, but it has the stronger property that its action is biextensive. This naturally transfers to the free commutative monoid $(F(S),+)$.
  • Monotone: $\forall a,b,c : a \leq b \implies a+c \leq b+c$
  • Biextensive: $\forall a,b : a \leq a + b$
The commutative monoid of all multiset addition actions is a submonoid of the monoid of all increasing monotone actions on the distributive lattice of multisets. We can therefore say that $F(S)$ is not only an ordered semimodule, its actions are also increasing.

Linear maps:
Recall from basic list processing that mapcat replaces each element of a list with another list and then concatenates them all together. The corresponding operation on multisets is the unordered mapcat, which replaces each element of a multiset with another multiset and then adds the results together.

The linear maps of free commutative semimodules $m : F(S) \to F(T)$ are precisely those defined by some unordered mapcat function $f : S \to F(T)$ that takes each element of the underlying set to a multiset in $F(T)$. This is also a monotone map of ordered semimodules. Therefore, the category of ordered semimodules can be used to understand the maps of free commutative monoids.
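A sketch of the unordered mapcat and its linearity, again modeling multisets as Counters; the function name unordered_mapcat is my own invention for illustration:

    from collections import Counter

    def unordered_mapcat(f, multiset):
        # extend f : S -> F(T) linearly to a map F(S) -> F(T): replace each
        # occurrence of an element with its image multiset and sum the results
        result = Counter()
        for element, count in multiset.items():
            for _ in range(count):
                result += f(element)
        return result

    f = lambda x: Counter({x: 2})   # duplicate every element

    assert unordered_mapcat(f, Counter("ab")) == Counter("aabb")

    # linearity: the extended map preserves multiset addition
    u, v = Counter("a"), Counter("b")
    assert unordered_mapcat(f, u + v) == unordered_mapcat(f, u) + unordered_mapcat(f, v)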

Tuesday, July 6, 2021

Commutative connected components

A number of different types of subsets of graphs provide examples of commutativity necessary subsemigroups. A special case worth examining is when the connected components of a commuting graph are subsemigroups. A further special case occurs when commutative connectivity forms a congruence: its quotient is then a rectangular band.

In the other direction, if a semigroup has a rectangular band quotient, then its commuting graph is necessarily disconnected. The congruence classes of the congruence with rectangular band quotient are all disjoint unions of connected components of the commuting graph.

In order for the commutative connectivity partition of a semigroup to be non-trivial, the commuting graph of the semigroup must be disconnected. This already rules out any finite semigroup whose idempotents commute, such as any finite inverse semigroup.

Theorem. let $S$ be a semigroup and let $C$ be a commutativity connected subsemigroup of $S$ then $C$ is a radical subsemigroup.

Proof. let $y$ be an element in $C$ and suppose that there exists $x \in S$ and $n \in \mathbb{Z}_+$ such that $x^n = y$. Then $x * x^n = x^{n+1} = x^n * x$, which implies that $xy=yx$. Now since $x$ and $y$ commute, $x$ is in the same commutativity connected component as $y$, therefore $x \in C$. It follows that $C$ is a radical subsemigroup. $\square$

An example is the free semigroup on $n$ generators: two words commute only when they are powers of a common word. The commutativity connected components of the free semigroup are all principal radical subsemigroups, and so the commuting graph of a free semigroup is a cluster graph. A cluster graph is a special case of a class of graphs whose connected components all form subsemigroups.

Theorem. let $G$ be a trivially perfect commuting graph of a semigroup. Then all connected components of $G$ are radical subsemigroups.

Proof. trivially perfect graphs are precisely the graphs for which every connected component has a universal vertex. Therefore, let $C$ be a connected component of $G$ and let $x$ be a universal vertex of $C$. Since $x$ is a universal vertex of $C$, $C$ is equal to the centralizer of $x$. Therefore, $C$ is a subsemigroup, and by the previous theorem it is a radical subsemigroup as well. $\square$

Every graph of order four or less is either trivially perfect or connected. It is therefore an immediate consequence that commutative connected components are subsemigroups for all semigroups of order four or less. It seems that in at least the simplest cases the connected components of the commuting graph are subsemigroups, but this doesn't appear to be the case in general.
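For small cases this is easy to test by machine. The following brute-force sketch (with a hypothetical two-element left-zero semigroup, where $xy = x$, as input) builds the commuting graph from a Cayley table, extracts its connected components, and verifies closure:

    def commuting_components(n, table):
        # connected components of the commuting graph, via union-find
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        for x in range(n):
            for y in range(n):
                if table[x][y] == table[y][x]:   # x and y commute
                    parent[find(x)] = find(y)
        groups = {}
        for x in range(n):
            groups.setdefault(find(x), set()).add(x)
        return list(groups.values())

    # left-zero semigroup on {0, 1}: distinct elements never commute
    table = [[0, 0], [1, 1]]
    comps = commuting_components(2, table)
    assert sorted(map(sorted, comps)) == [[0], [1]]

    # each component is closed under the operation
    for c in comps:
        assert all(table[x][y] in c for x in c for y in c)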

Monday, July 5, 2021

Categories for order theorists

Preorders can only describe the direction that objects move in; they cannot explain the dynamic forces responsible for the motion of objects. Categories are just one way to explain motion in a preorder, defined by axiomatizing moves from one point to another as morphisms. Monoid actions are another way to explain motion in a preorder.

The order-theoretic perspective on categories requires further examination beyond what was provided by my first post. In particular, we need order-theoretic foundations for functors. This is provided by a homomorphism theorem which relates functors to subalgebra and congruence lattices. This leads to the following more complete account of the order theoretic foundations of category theory.
  1. Order-theoretic introduction of categories
  2. Lattices of subcategories
  3. Lattices of congruences of categories
  4. Theory of functors
  5. Natural transformations
Notes:

[1] The idea of recasting "structure preserving maps" as structure-including maps provides a very direct account of how most categories are extensions of preorders. Monotone maps, continuous maps, relation homomorphisms, etc. all correspond to structural inclusions in corresponding partial orders. This is formalized by hom class comparisons.

[2] There is a lot more that can be done to put category theory on a good order-theoretic footing. There are also many partial orders related to categories that are worthy of further examination. Everything is a work in progress.

[3] In these posts I mostly deal with locally small categories as structured sets. There are also large categories which can be treated as structured proper classes. The differences can be worked out in an axiomatic set theoretic account of categories. The important point is that categories can be treated as a sort of structured collection.

Sunday, July 4, 2021

Natural transformations

The trinity of concepts of category theory: categories, functors, and natural transformations form an ordered triple. So it makes sense to introduce functors before dealing with natural transformations. The introduction of functors suggested a basic perspective in which they are treated like functions, and that in turn suggested certain things about the structure and parts of functors.

Natural transformations, however, force us to change our perspective on functors. Natural transformations treat functors not like functions, but rather like indexed families of objects and morphisms. This leads to a slight change of perspective on the nature of functors, which will be dealt with now.

Morphisms of morphisms:
Let $C$ be a category, then $C^{\to}$ is its arrow category, consisting of all morphisms of $C$ as objects, and as morphisms the ordered pairs $(i,o)$ of morphisms of $C$ that make the evident square commute: for objects $f : A \to B$ and $g : X \to Y$, the pair $(i : A \to X, o : B \to Y)$ is a morphism $f \to g$ provided that $o \circ f = g \circ i$. If the topos $Sets$ is the quintessential example of a category, then the topos of functions $Sets^{\to}$ is the quintessential example of an arrow category.

Functors as indexed families:
By turning the category of locally small categories into a concrete category, we considered functors as functions. Alternatively, a functor $F : C \to D$ can be viewed as a family of objects indexed by the category $C$: objects in the category $D$ and in the arrow category $D^{\to}$.
  • For each $X \in Ob(C)$ we have an object $F(X)$ in $D$
  • For each $m \in Arrows(C)$ we have an object $F(m)$ in $D^{\to}$
It would then seem that because functors are indexed families of objects, they can be treated as objects of a category themselves. This is indeed possible with functor categories and natural transformations, which operate on the objects indexed by a functor componentwise.

Natural transformations:
As functors are indexed families of objects, morphisms of functors are naturally morphisms of each object indexed by them. Let $F,G : C \to D$ be two functors and let $t : F \to G$ be a componentwise morphism of functors. Then $t$ maps categorical elements in $F$ to those of $G$. In particular,
  • $t$ associates to each object $X$ in $C$ a $D$ morphism from $F(X)$ to $G(X)$
  • $t$ associates to each morphism $m$ in $C$ a $D^{\to}$ morphism from $F(m)$ to $G(m)$
The standard way to define a componentwise morphism of functors is by defining the object mapping by a function $t: Ob(C) \to Arrows(D)$. Then the morphism mapping is defined by doubling up the object mapping, so that for any $C$ morphism $f: A \to B$ the pair $(t_A,t_B)$ is the morphism of morphisms $F(f) \to G(f)$. This simply means that $t$ satisfies the naturality square $G(f) \circ t_A = t_B \circ F(f)$.

The componentwise nature of natural transformations explains why everything in a functor category is defined by componentwise versions of constructions in certain underlying categories. For example, in topoi of set-valued functors all limits/colimits are defined componentwise.

The componentwise nature of functor categories, suggests that the best way to understand them is by their output category $D$ and its arrow category $D^{\to}$. In particular, the most useful tools for studying any topos of set-valued functors are $Sets$ and $Sets^{\to}$ and all other constructions are defined over them. Morphisms in $D$ and $D^{\to}$ are the basic units that make up all natural transformations.

Enriched categories:
Let $Ab$ be the category of commutative groups. Then componentwise addition in $Ab$ gives each hom class $Hom(G,H)$ of commutative groups the structure of a commutative group. In general, the idea of an enriched category is that we can take hom classes of any given category, and define certain additional componentwise algebraic operations on it.

The description of natural transformations simply makes the concrete category $Cat$ into an enriched category, so that each hom class $Hom(F,G)$ has the structure of a category. These hom classes $Hom(F,G)$ are then all the different categories of functors. This suggests that natural transformations are no rarer a concept than any componentwise operation on homomorphisms encountered in abstract algebra.

New perspective on functors:
Our previous analysis of functors focused on their relationship to functions, determined by making the category $Cat$ into a concrete category. Then parts of functions could be determined on the function-level, but the idea of making functors into families means that they have different types of parts.

In the same way that a family of sets not only has subsets of itself but also subsets of its members, a functor not only has parts of itself but also parts of its elements. Thus a subfunctor is defined by a natural transformation that takes each categorical element of a functor to a subobject, and a quotient functor does the same for quotients. We thus arrive at a different kind of construction on functors defined componentwise.

Saturday, July 3, 2021

Theory of functors

The category of locally small categories is a concrete category. As a consequence of this, every category has an underlying set and every functor has an underlying function. As functions are very well understood objects, we can understand functors by reference to their underlying functions. By this treatment, functors are simply the homomorphism functions of categories.

Parts referred to by functors:
The homomorphisms of basic algebraic structures like monoids are well known. These show that the kernel of any homomorphism is a congruence and the image is a subalgebra. We will now present the analogous result for categories.
  • The image of a functor is a categorical system of a subcategory.
  • The kernel of a functor is a categorical congruence.
The image is embedded in a subcategory by closure, as is the case for any categorical system. If a functor is object injective, then its image is necessarily a subcategory. These two homomorphism theorems for categories relate functors to the lattices of subalgebras $Sub(C)$ and congruences $Con(C)$ of a category.

Parts of functors:
A functor is a type of function, which means it is also an algebraic structure in its own right with its own subobjects and quotients. Subobjects are restrictions, and quotients are input/output relationships wherein a partition of the input determines a partition of the output.

Every functor can be restricted to an object function and a morphism function. A functor is then simply a coproduct of its object and morphism functions. On the most basic level, the subobjects of a functor are individual ordered pairs of categorical elements from the two categories. Another case when you might get a subobject of a functor is by restriction to a subcategory of the domain.

Quotients are generally more interesting because they can tell you more about the functioning of a functor, and there are some quotient relationships inherent to any functor. Given the morphism function part of a functor, the input object of an input morphism determines the input object of its output morphism; likewise, the output object of the input determines the output object of the output. In both cases, the quotient function is the object function. By combining these, we also get that the input/output object pair of a morphism determines the input/output object pair of its image.

Limits and colimits of categories:
The concrete category $Cat$ of locally small categories has all small limits/colimits. In particular, we can form products and coproducts of categories. The coproduct of two categories has the objects of the two categories in separate connected components. The product has as morphisms ordered pairs of morphisms of the two categories, with composition defined componentwise.

Universal algebraic properties of categories:
In conclusion, categories have all the trappings of any algebraic structure studied in universal algebra. They have underlying sets, homomorphisms, subalgebras, congruences, free objects, products, coproducts, varieties and pseudovarieties, and so on. The elementary theory of categories is provided by constructions of universal algebra applicable to any algebraic structure.

Friday, July 2, 2021

Lattices of congruences of categories

Every algebraic structure in universal algebra is associated with two types of lattices: lattices of subalgebras and lattices of congruences. Categories in this sense are no different than any other structure we have encountered in universal algebra. Let $C$ be a category, then its lattice of subcategories is $Sub(C)$ and its lattice of congruences is $Con(C)$, which we will now construct.

Categorical partitions:
Let $C$ be a category. Then its underlying set consists of both its objects and morphisms. Therefore, a partition of the domain of a category is a set partition with its set of objects and morphisms as a ground set. A noticeable property of categories is the separation of objects and morphisms. This separation holds for categorical partitions.
  • Object partition: a set partition of $Ob(C)$
  • Morphism partition: a set partition of $Arrows(C)$
There are therefore two ways to represent a categorical partition: either as a partition of the underlying set of objects and morphisms or as an ordered pair of partitions $(O,M)$ of the object and morphism sets.

Compositional congruences:
The composition function of a category $\circ : M_2^* \to M$ only affects the morphism system of the category. Compositional congruences can therefore be phrased entirely in terms of morphism partitions. A morphism partition $M$ is a compositional congruence provided that: \[ \forall a_1,a_2,b_1,b_2 : a_1 =_M a_2 \wedge b_1 =_M b_2 \Rightarrow a_1 \circ b_1 =_M a_2 \circ b_2 \] It follows from the definition of functors that for any functor $F$ we have $F(fg) = F(f)F(g)$ for all morphisms $f$ and $g$ in $C$. This clearly implies that the morphism partition induced by any functor must at least be a compositional congruence.

Congruence closure:
Let $(O,M)$ be a partition of the object and morphism sets of a category. Then in order for a partition of this form to be a congruence it must satisfy a number of preservation conditions defined by logical implications:
  • Composition: $a_1 =_M a_2 \wedge b_1 =_M b_2 \Rightarrow a_1 \circ b_1 =_M a_2 \circ b_2$
  • Identities: $1_X =_M 1_Y \Leftrightarrow X =_O Y$
  • Inputs: $f =_M g \Rightarrow Source(f) =_O Source(g)$
  • Outputs: $f =_M g \Rightarrow Target(f) =_O Target(g)$
Let $(O,M)$ be a partition of a category $C$. Then the congruence closure of $(O,M)$ is determined by running through all of the logical implications until a closed partition is produced. A partition that satisfies all four conditions is a categorical congruence.
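The one-object case already illustrates the fixpoint computation: below is a hedged Python sketch that closes an initial partition of a commutative monoid (here $\mathbb{Z}/4$ under addition, a hypothetical example) under the composition implication until nothing changes:

    n = 4
    op = lambda x, y: (x + y) % n   # Z/4 under addition

    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    parent[find(0)] = find(2)       # initial merge: 0 ~ 2

    changed = True
    while changed:                  # run the implication to a fixpoint;
        changed = False             # one-sided products suffice here since
        for a1 in range(n):         # the operation is commutative
            for a2 in range(n):
                if find(a1) == find(a2):
                    for b in range(n):
                        x, y = op(a1, b), op(a2, b)
                        if find(x) != find(y):
                            parent[find(x)] = find(y)
                            changed = True

    classes = {}
    for x in range(n):
        classes.setdefault(find(x), []).append(x)
    assert sorted(classes.values()) == [[0, 2], [1, 3]]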

Definition. a categorical congruence is a partition that separates objects from morphisms and preserves composition, identities, inputs, and outputs.

The four conditions correspond to the four conditions placed on any functor, so the equivalence relation induced by any functor is a categorical congruence.

Lattice of congruences of a category:
The set of congruences of a category forms a closure system with an upper bound equal to the maximal congruence, which makes all objects and morphisms equal. The closed sets of this closure operation therefore form a lattice.

Definition. let $C$ be a category then $Con(C)$ is the set of all categorical congruences on $C$ ordered by inclusion.

A well known special case consists of all object-injective congruences. These form an interval in the lattice of congruences of a category from the minimal congruence to the hom congruence, which equates all morphisms in each hom class.

Quotients:
Let $C$ be a category and let $P = (O,M)$ be a congruence of $C$. Then we can define a partial magmoid $\frac{C}{P}$ with $O$ classes as its objects and $M$ classes as its morphisms. The axioms of source preservation and target preservation mean that $\frac{C}{P}$ is a valid quiver, so it only remains to construct a valid composition operation for $\frac{C}{P}$ in order for it to be a partial magmoid.

Let the composition operation of the quiver $\circ$ be presented as a ternary relation $R \subseteq Arrows(C)^3$. Then we can construct a composition operation for $\frac{C}{P}$ by a map from $R$ to $M^3$ that maps each ordered triple in $R$ to an ordered triple in $M^3$ by the projection $\pi : Arrows(C) \to M$ applied to each component of the triple. \[ \pi^3 : R \to M^3 \] We are interested in the image $S$ of this projection map. By the fact that $M$ is a compositional congruence, this image is a single valued ternary relation, and so it can also be cast to a binary operation. Consider any triple $T$ in $S$. Then by the fact that $\frac{C}{P}$ is a valid quiver, each morphism class is associated to a hom class $(O_1,O_2)$ consisting of a pair of object classes.

The fact that any triple $T$ is in the image of $R$ means that there is a triple of morphisms $(f : A \to B, g : B \to C, g \circ f : A \to C)$ whose image under $\pi^3$ is equal to $T$. This image has hom classes $(\pi(A),\pi(B))$, $(\pi(B),\pi(C))$ and $(\pi(A),\pi(C))$. It follows that the object classes of any triple take the form $((O_1,O_2),(O_2,O_3),(O_1,O_3))$, which means that this composition respects quotient hom classes.

If we have a triple of morphisms $(f,g,h)$ then the two compositions $f \circ (g \circ h)$ and $(f \circ g) \circ h$ are equal, so they belong to the same $M$ class. It follows that quotient composition respects associativity. The partition preserves identities, so there is a quotient identity morphism for each object, which is clearly an identity for quotient composition. Therefore, $\frac{C}{P}$ is a partial magmoid.

Lattices of subcategories

Every structure is characterized by its component parts. In the case of categories, a locally small category $C$ can be considered to be a special type of structured set. The parts of a category are determined by subsets of its underlying set. Subcategories are parts of categories which are themselves categories.

We define the elements of a category to be either its objects or morphisms. It follows that the underlying set of a category is the coproduct of its object and morphism sets. This produces a combined ontology of categorical elements containing both objects and morphisms as special cases. The characterization of categories as structured sets produces a corresponding ontology of sets of elements of categories. These sets of categorical elements are called categorical systems, and they form a broad generalization of both object systems and morphism systems. Subcategories are merely a special case.

As a first step toward subcategories, we can require that a categorical system have all the objects of its morphisms. This produces the lattice of subquivers of a category, which forms a distributive lattice. This distributive lattice characterizes the fact that categorical elements have a dependency ordering in which morphisms are dependent on the objects they refer to.

The requirement that a categorical system preserves identities still produces a distributive lattice, in this case with identity morphisms equivalent to the objects they refer to. It is the condition of composition closure that makes the lattice of subcategories $Sub(C)$ of a category $C$ lose distributivity. The lattice of subsemigroupoids, which also has composition closure, also doesn't need to be distributive.

A number of special cases of categorical systems emerge from the various object and morphism preorders of a category. These are essentially the ideals of a category, commonly known as either sieves or cribles. These lattices, as lattices of ideals, are naturally distributive. An ontology of categorical systems like these is presented below.

A special case is the lattice of subgroupoids which is used in groupoid theory. The lattice of subcategories $Sub(C)$ contains a number of sublattices. The lattice of discrete subcategories is an example of a boolean sublattice. The lattice of wide subcategories consists of the interval of subcategories with the maximal discrete subcategory as its minimal element. Wide subgroupoids emerge from the combination of the two conditions.

The first special case we will consider is hom complete subcategories. These correspond naturally to subpreorders of the morphic preordering, so they naturally produce a link between categories and order theory. This will further illuminate the order-theoretic nature of categories.

Suborders of a category

Categories can be characterized as sorts of generalized posets. This is formalized by the functor that takes any category to its underlying morphic preorder, and any functor to its underlying monotone map. This functor reflects subcategories, which produces our notion of suborders of a category.

Theorem. the morphic preordering $R : Cat \to Ord$ is an epifunctor from the category of categories to the category of preorders and monotone maps. It takes each category to its underlying preorder, and each functor to its underlying monotone map.

Proof. (1) let $f : C \to D$ be a functor and $R(f) : R(C) \to R(D)$ its underlying object map between preorders. If $A \subseteq B$ in $R(C)$, then $\exists m \in Arrows(C)$ with $m : A \to B$. The output of $f$ on $m$ has signature $f(m) : f(A) \to f(B)$, so there exists $f(m) \in Arrows(D)$ with signature $f(A) \to f(B)$. Therefore, $f(A) \subseteq f(B)$ in $R(D)$. It follows that $R(f)$ is monotone.

(2) Let $1_C : C \to C$ be the identity endofunctor, then its underlying monotone map is $1_{R(C)} : R(C) \to R(C)$ which is an identity of preorders. Therefore, $R(C)$ preserves identities.

(3) Let $f : A \to B$ and $g : B \to C$ be functors. Then $R(g \circ f)$ takes $Ob(A) \to Ob(C)$ by $g \circ f$, so $R(g \circ f)(x) = g(f(x))$. This is equal to $(R(g) \circ R(f))(x)$, so $R(gf) = R(g) \circ R(f)$. $\square$

By presenting any preorder as a thin category, we can also get a functor that maps any category to its underlying preorder. This maps objects to objects and morphisms to edges in the preorder.

Theorem. let $C$ be a category and $\pi : C \to R(C)$ the map from $C$ to its morphic preordering. Then $\pi$ is an epimorphism of categories.

Proof. let $f,g \in Arrows(C)$ with $f : A \to B$ and $g : B \to C$. Then $\pi(f) = (A,B)$, $\pi(g) = (B,C)$ and $\pi(g \circ f) = (A,C) = (B,C) \circ (A,B)$. Therefore, $\pi(g \circ f) = \pi(g) \circ \pi(f)$, so $\pi$ is a functor. It is surjective on objects and, by the definition of the morphic preordering, on edges as well, so it is an epimorphism. $\square$

Lemma. let $F : C \to D$ be a functor, then $F$ reflects subcategories.

Proof. Let $S$ be a subcategory of $D$. Then this can be proven in three stages.

(1) let $f \in Arrows(C)$ with $f : A \to B$ such that $F(f) \in S$. Then $F(f) : F(A) \to F(B)$, so by the fact that $S$ is a subcategory, $F(A),F(B) \in S$. It follows that $F^{-1}(S)$ is at least a subquiver.

(2) let $X \in Ob(C)$ such that $F(X) \in S$ then $F(1_X) = 1_{F(X)}$. By the fact that $S$ is a subcategory, and $F(X) \in S$ we have that $1_{F(X)} \in S$. It follows that $F^{-1}(S)$ is identity closed.

(3) let $f : A \to B$ and $g : B \to C$ such that $f,g \in Arrows(C)$ and $F(f),F(g) \in S$. Then $F(g \circ f) = F(g) \circ F(f)$. By the fact that $S$ is a subcategory, and that $F(g) \in S$ and $F(f) \in S$ we have that $F(g) \circ F(f) \in S$. It follows that $F^{-1}(S)$ is composition closed. $\square$

By combining the previous two results we get that the map $\pi$ reflects subcategories, so the following corollary is immediate.

Corollary. $\pi$ reflects subcategories.

We can now define the suborders of a category to be the subcategories produced by reflection along the functor $\pi$, which maps each category to its underlying preorder.

Definition. let $C$ be a category with a suborder $S$ of its morphic preordering $R$. Then the corresponding suborder of the category is the reflection of $S$ with respect to the functor $\pi : C \to R$.

The lattice of preorders can be denoted $Po(R)$. The lattice of preorders of $R$ is clearly equal to the lattice of subcategories of $R$ presented as a thin category. At this point, we have a lattice monomorphism from the lattice of preorders $Po(R)$ to the lattice of subcategories of any category $C$. \[ s : Po(R) \to Sub(C) \] Every suborder of a category is hom-complete, meaning that for any hom class in the subcategory $S$, every morphism of $C$ in the same hom class is in $S$. This leads to the notion of hom completion, which completes a subcategory by adding all morphisms in hom classes of the parent category to it. Therefore, the image of this lattice monomorphism is the lattice of hom complete subcategories of a category.

The lattice of preorders doesn't appear as often as its special case: the lattice of equivalence relations $Part(A)$. This lattice is clearly a suborder of the lattice of preorders, and we will also show that it is a sublattice.

Lemma. let $R$ be a symmetric relation, then its transitive closure is symmetric

Proof. suppose $(A,B)$ belongs to the transitive closure of $R$, witnessed by a chain $(A,x_1),(x_1,x_2),\ldots,(x_n,B) \in R$. By symmetry the reversed pairs $(B,x_n),\ldots,(x_1,A)$ are in $R$ as well, and composing them by transitivity gives $(B,A)$. Therefore transitive closure preserves symmetry. $\square$

Theorem. $Part(A)$ is a sublattice of $Po(R)$

Proof. (1) the intersection of equivalence relations is an equivalence relation, so $Part(A)$ is clearly a meet subsemilattice. (2) The union of reflexive, symmetric relations is reflexive and symmetric, so in order to prove join closure all we need to show is that the transitive closure (which computes the join of preorders) preserves symmetry; by the previous lemma it does, so $Part(A)$ is a sublattice. $\square$
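A small machine verification of the lemma and the join construction, under an ad hoc encoding of relations as Python sets of pairs:

    def transitive_closure(r):
        r = set(r)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(r):
                for (c, d) in list(r):
                    if b == c and (a, d) not in r:
                        r.add((a, d))
                        changed = True
        return r

    X = {1, 2, 3}
    p = {(x, x) for x in X} | {(1, 2), (2, 1)}   # partition {{1,2},{3}}
    q = {(x, x) for x in X} | {(2, 3), (3, 2)}   # partition {{1},{2,3}}

    # the join of equivalence relations: transitive closure of the union
    join = transitive_closure(p | q)
    assert join == {(a, b) for a in X for b in X}   # everything collapses
    assert all((b, a) in join for (a, b) in join)   # closure stayed symmetric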

We can express this in categorical terms, by replacing the symmetric relations with groupoids. Equivalence relations are then equivalent to thin groupoids.

Theorem. $Part(A)$ is the sublattice of wide subgroupoids of the subcategory lattice of the complete thin groupoid.

A special case of a suborder is an induced suborder, which is the subset of a preorder determined by a set of vertices, in which all edges between those vertices are included. The corresponding categorical notion is a full subcategory. This produces a lattice monomorphism from subsets of $Ob(C)$ to $Sub(C)$, which is weaker than the suborder morphism. \[ f : \wp(Ob(C)) \to Sub(C) \] The corresponding type of category for which all subcategories are full is the class of discrete categories. The subcategory lattice of a discrete category is then a boolean algebra of sets. We therefore have a characterization of $\wp(S)$ and $Part(A)$ in terms of subcategory lattices.

Subcategory producing functions:

The most important types of morphisms like monomorphisms, epimorphisms, isomorphisms, and endomorphisms are composition closed. This means all these types of morphisms are associated to types of subcategories of any category $C$.

Theorem. let $C$ be a category, then the monos, epis, bis, split epis, split monos, isos, endos, and autos each form subcategories.

Proof. (1) let $f$ and $g$ be monos. Then $fx = fy \Rightarrow x=y$. Therefore, if we have $g(fx) = g(fy)$ this implies that $fx = fy$, which implies $x=y$, so being monic is composition closed. Dually for epis; bis are simply the intersection of epis and monos.

(2) Let $f,g$ be split epimorphisms with right inverses $f^{-1}$ and $g^{-1}$. Then a right inverse of $g \circ f$ is $f^{-1} \circ g^{-1}$, because $g \circ f \circ f^{-1} \circ g^{-1}$ clearly cancels to the identity. In the other direction, if $f,g$ are split monomorphisms with left inverses $f^{-1}$ and $g^{-1}$, then $g \circ f$ has left inverse $f^{-1} \circ g^{-1}$, so that $f^{-1} \circ g^{-1} \circ g \circ f$ clearly cancels to the identity. The composition closure of isomorphisms follows, as isos are the intersection of split monos and split epis.

(3) Endomorphisms are clearly composition closed, and they are hom-complete so they are also the suborder produced by the coreflexive relation on $Ob(C)$. $\square$

For each of these types, the morphisms of that type in a category $C$ form a subcategory. This can be applied to get the underlying groupoid of a category.

Definition. Let $C$ be a category, then the subcategory generated by its isomorphisms is the underlying groupoid of $C$.
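In the one-object case this construction returns the group of units of a monoid. A brief illustrative computation (hypothetical encoding, Cayley table of $\mathbb{Z}/4$ under multiplication):

    n = 4
    op = lambda x, y: (x * y) % n   # Z/4 under multiplication
    identity = 1

    # the isomorphisms of a one-object category are the invertible elements
    units = [x for x in range(n)
             if any(op(x, y) == identity and op(y, x) == identity
                    for y in range(n))]
    assert units == [1, 3]   # the group of units of Z/4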

We can get the underlying subgroupoid of a subcategory. In general, for any class of morphisms that generates types of subcategories, we can get an action on $Sub(C)$. \[ f : Sub(C) \to Sub(C) \] This construction of an action on the lattice of subcategories invites us to consider the effect of taking subcategories on the different types of morphisms.
  • Monos, epis, and bis are preserved under subcategories.
  • Isos are preserved under extensions.
  • Endos are preserved under both subcategories and extensions.
In the case of non-torsion groups, some subsemigroups do not preserve inverses, so isomorphisms cannot be preserved by all subcategories. Monomorphisms and epimorphisms forbid certain pairs of morphisms related to them, so clearly they are closed under subcategories. As isos are closed under extensions, the underlying groupoid construction is a monotone map.

Ideals theory:

Categories are certain types of partial semigroups on the edge sets of quivers, so it is not hard to see why the ideals theory of categories is essentially the same as that of semigroups. There are three different types of preorders inherent to any category:
  • Input action preorder: $f \subseteq g$ means $\exists h : f \circ h = g$
  • Output action preorder: $f \subseteq g$ means $\exists h : h \circ f = g$
  • Two sided action preorder: $f \subseteq g$ means $\exists h_1,h_2 : h_1 \circ f \circ h_2 = g$
These correspond to the left, right, and two sided preorders of a semigroup. The Green's relations of a category are then simply the strongly connected components of these preorders: R is the connectivity of the input action preorder, L is the connectivity of the output action preorder, and J is the connectivity of the two sided preorder. The Green's relations D and H can then be formed in the lattice of partitions $Part(A)$: D is the join of L and R, and H is their meet.
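In the one-object case the three preorders can be computed directly from their definitions. A hedged sketch over a monoid given by an operation table ($\mathbb{Z}/4$ under multiplication, an illustrative choice):

    elems = range(4)
    op = lambda x, y: (x * y) % 4

    below_r = {(f, g) for f in elems for g in elems
               if f == g or any(op(f, h) == g for h in elems)}
    below_l = {(f, g) for f in elems for g in elems
               if f == g or any(op(h, f) == g for h in elems)}
    below_j = {(f, g) for f in elems for g in elems
               if f == g or any(op(op(h1, f), h2) == g
                                for h1 in elems for h2 in elems)}

    assert (1, 0) in below_r and (0, 1) not in below_r   # 0 lies below 1
    assert below_r == below_l == below_j                 # commutative case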

It is self-evident that ideals form subcategories, because they are closed under composition with morphisms from the overall category $C$. An interesting construction occurs wherein we change the category of morphisms acting on an ideal $I$ to some other subcategory $S$. This produces a family of three types of monotone maps from the lattice of subcategories to the lattice of preorders: \[ L : Sub(C) \to Po(Ob(C)) \] \[ R : Sub(C) \to Po(Ob(C)) \] \[ J : Sub(C) \to Po(Ob(C)) \] As these preorders are constructed from the actions of subcategories, their ideals don't need to be subcategories. An important example is an isomorphism ideal, which is an ideal with respect to the two sided action of the underlying groupoid. Subcategories that are also closed under the underlying groupoid action are called replete.

Repletion forms a closure operation on the lattice of subcategories $Sub(C)$, so the family of all replete subcategories forms a lattice determined by the skeleton of the category. Action preorders can also be formed from the mono and epi subcategories.
  • The preorder of subobjects is the input action preorder of the mono subcategory.
  • The preorder of quotients is the output action preorder of the epi subcategory.
The preorders of subobjects and quotients are therefore simply special cases of preorders determined by the monotone maps from the lattice of subcategories to the lattice of preorders. Posets of subobjects and quotients are then the condensations of the principal filters of either preorder.

The monotonicity of the map from subcategories to preorders means that smaller subcategories can only have smaller preorders of subobjects and quotients. This is translated to the language of subobject and quotient posets by the condensation which produces partial orders from preorders.

Connected components:

The underlying morphic preorder on the objects of a category induces a partition of $Ob(C)$ into weakly connected components. The resulting partition topology on the objects of the category, consisting of all connectivity closed object systems, then induces a boolean algebra of full subcategories, each of which is clearly a prime two-sided ideal of $C$.

A category with non-trivial connected components admits a coproduct decomposition, and so this leads to the following theorem which characterizes the lattice of subcategories of a coproduct category in terms of product lattices.

Theorem. $Sub(C + D) = Sub(C) \times Sub(D)$.

Proof. this can be proven by a one-to-one correspondence. Let $C$ and $D$ be the connected components of the category $C + D$ and let $S$ and $T$ be subcategories of $C$ and $D$ respectively. Then by the fact that $C$ and $D$ are prime subcategories, $S \cup T$ is a subcategory of $C + D$, so the union gives an injective map from $Sub(C) \times Sub(D)$ to $Sub(C + D)$. In the other direction, any subcategory of $C+D$ decomposes into a subcategory of $C$ and a subcategory of $D$ by taking full subcategories with respect to $Ob(C)$ and $Ob(D)$. $\square$

This simple construction allows us to determine the lattices of subcategories of antichain categories from the lattices of subcategories of the endomorphism monoids of the category.

Families of subcategories:

If we take $Sub(C)$ to be the lattice of subcategories of a category $C$, then a family of subcategories is simply one of its suborders. These suborders come in a number of forms.

Lower sets:
If $F$ is a subalgebra closed family of categories, then the family of all subcategories of $C$ in $F$ forms a lower set of $Sub(C)$. For example, the family of all thin subcategories of a category is a lower set as is the lattice of all discrete subcategories.

Closure systems:
The families of subgroupoids, the various ideal-related constructions on categories like replete subcategories, and so on all form closure systems. These are upper bound maintaining meet subsemilattices of $Sub(C)$.

Sublattices:
Besides the lattice of discrete subcategories, the lattice of subgroupoids forms a sublattice of the lattice of subcategories of a groupoid. This can be proven in the following theorem.

Theorem. let $G$ be a groupoid and let $A,B \subseteq G$ be subgroupoids. Then the subcategory join $A \vee B$ is a subgroupoid.

Proof. the subcategory join $A \vee B$ consists of all composable words of the form $a_1 b_1 a_2 b_2 a_3 b_3 \ldots$, which have inverses of the form $\ldots b_3^{-1} a_3^{-1} b_2^{-1} a_2^{-1} b_1^{-1} a_1^{-1}$. The words of $A \vee B$ therefore have inverses, so the join preserves inverses. The fact that the subcategory join of subgroupoids is a subgroupoid means that the lattice of subgroupoids is a sublattice of the lattice of subcategories. $\square$