Monday, November 30, 2020

Order-theoretic introduction of categories

Ordinary ordered algebraic structures like quantales, residuated lattices, ordered groups, ordered fields, etc. add additional algebraic structure to the objects of a preorder. Categories, on the other hand, add algebraic structure to the edges of a preorder instead. In a category, the edges of the preorder are mapped to sets of morphisms by the hom set function, and it is these morphisms that are given additional algebraic structure. Categories are therefore a different type of preordered algebraic structure, and by changing the units of ordering, they present a different outlook on order relations and therefore potentially all of mathematics.

The essence of categories:


Input/output ordering:

As categories are such a general concept, we will first comment on the differences between functions and relations. In the theory of relations, there is not necessarily a preferred input for the relation, whereas functions are characterized by specific inputs and outputs. While this distinguishes functions from relations, it is also the source of their utility. Functions can be used as basic units of ordering, to describe any ordered process by which one thing leads to the next, as denoted by the arrow notation. \[ f: I \to O \]

Units of ordering and categories:

If we consider the general lattice of preorders, then the join irreducible elements are always either single elements or ordered pairs. Starting with these simple components, we can build up any preordering relation. In the case of categories, these same units of ordering are enhanced with additional algebraic structure. This leads to ordered algebraic units which are also called morphisms.
  • Ordered units
  • Algebraic ordered units

Morphism properties:

Once we have motivated the introduction of morphisms, we can then describe a hierarchy of properties of morphisms using a suborder of the lattice of apartness relations. Generally speaking, the identity of a morphism is not determined by its input/output pair unless the category is a preposetal category, as it is the distinct morphisms within hom classes that give categories their algebraic distinctiveness.

Ordered pairs of morphisms:

Once we have covered the basic properties of morphisms, like their input/output pair, their input object, and their output object, we can consider ordered pairs of morphisms. This naturally leads to a tree of properties descended from the first and second morphisms, but there are two other properties, mixing the objects of the two morphisms, that are of relevance in category theory: the inner objects and the outer objects. This produces the following partial order of properties:
The inner objects and the outer objects of an ordered pair of morphisms are relevant to the definition of composition. The other properties follow from the properties of morphisms and ordered pairs.
  • Inner objects: the domain of composition of a category consists of all inner-equal ordered pairs of morphisms. We can denote this domain of composition $M_{*}^2$.
  • Outer objects: the input-output pair of the composition of a pair of morphisms in a category is necessarily equal to its outer objects pair.
These two conditions, that the inner objects coincide and that the outer objects form a new input-output pair, make it so that composition generalizes transitivity. Indeed, the entire purpose of category theory is to examine algebraic generalizations of transitivity.

An equivalent definition:

We can now define categories using more order-theoretic terminology than done previously, as presented below. Equivalent definition. Let $O$ be a class of objects, $M$ be a class of morphisms, $R$ a binary relation on $O$, and $M_{*}^2$ be the class of inner-equal pairs of morphisms. Then we can define a pairing function $p$ and an associative partial binary operation $\circ$ of composition: \[ p : M \to R \] \[ \circ: M_{*}^2 \to M \] The structure forms a category if the following two conditions are satisfied: (1) generalized reflexivity: for every object $X$ there is an identity endomorphism $1_X$ such that $f \circ 1_X = f$ for all $f$ that begin at $X$ and $1_X \circ g = g$ for all $g$ that end at $X$, and (2) generalized transitivity: the input-output pair of the composition of any pair in $M_{*}^2$ is its outer pair of objects. The generalized reflexivity and transitivity conditions clearly make $R$ a preorder, which we can now call the morphic preorder of the category.
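To make this definition concrete, here is a minimal Python sketch (the encoding and names such as inner_equal_pairs are my own, not from the post) of a two-object finite category, checking generalized reflexivity and generalized transitivity directly:

from itertools import product

# A finite category given explicitly: each morphism is a name with a
# source and target object; compose[(g, f)] is defined exactly when
# the inner objects agree (target of f equals source of g).
objects = {"X", "Y"}
morphisms = {"1X": ("X", "X"), "1Y": ("Y", "Y"), "f": ("X", "Y")}
identity = {"X": "1X", "Y": "1Y"}
compose = {("1X", "1X"): "1X", ("1Y", "1Y"): "1Y",
           ("f", "1X"): "f", ("1Y", "f"): "f"}

def inner_equal_pairs():
    """The domain of composition M_*^2: pairs (g, f) whose inner objects agree."""
    return [(g, f) for g, f in product(morphisms, repeat=2)
            if morphisms[f][1] == morphisms[g][0]]

# Generalized transitivity: g o f is defined on all inner-equal pairs and
# its input/output pair is the outer pair of objects.
for g, f in inner_equal_pairs():
    gf = compose[(g, f)]
    assert morphisms[gf] == (morphisms[f][0], morphisms[g][1])

# Generalized reflexivity: identities are units for composition.
for m, (src, tgt) in morphisms.items():
    assert compose[(m, identity[src])] == m   # m o 1_src = m
    assert compose[(identity[tgt], m)] == m   # 1_tgt o m = m

print("category axioms hold")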

Elementary constructions:


Objects:

As objects are one of the basic elements of any category, we can naturally form a partially ordered set of classes of them. The main elementary classes of objects used in abstract category theory are initial and terminal objects. Taking an order-theoretic foundation, these are generalisations of minimal and maximal elements from order theory, with the extra condition that the relevant hom classes have only one element.

Morphisms:

Each morphism produces a left-action and a right-action on morphisms by composition (the two are distinguished because categories are non-commutative). A morphism is an epimorphism if its action by precomposition is cancellative and a monomorphism if its action by postcomposition is cancellative. We can therefore produce preorders on common output morphism systems and common input morphism systems for each object by how epi/mono they are, based upon degrees of reversibility. A morphism is split epi or split mono if it is not merely cancellative on one side but actually has a one-sided inverse there. The other classes of morphisms follow, producing the following partially ordered set of classes.

Object systems:

Several classes of collections of objects can be introduced based upon the morphism preordering and the symmetric preordering of isomorphism. Lower sets, upper sets, principal ideals, and principal filters of objects can be defined based upon the natural morphic preordering of any category. This is another way to describe the morphic preordering, only instead using a more set-theoretic approach.

Morphism systems:

Morphism systems are the basic objects of most of category theory. Hom classes appear here as the inverse images of the function that maps any morphism to its ordered pair, and they are non-empty for any ordered pair in the morphic preordering. In other words, there is a mapping $f : R \to Hom$ that maps each ordered pair to a non-empty set of morphisms, which are generalizations of the edges of the preorder. Other types of morphism systems include common input and common output families of morphisms, of which parallel morphism systems are the intersection. Common output systems are used in the definition of sites, for example. Hom classes are maximal parallel families of morphisms.

While we demonstrated how objects in a category are naturally preordered, it is not hard to see how the morphisms in a category are preordered as well. Algebraic preorders, introduced naturally in monoid theory, can equally be applied to categories to get a preordering of morphisms based upon reachability. In the case of single-object categories this of course naturally coincides with the monoid definition. Algebraic lower sets, upper sets, and principal ideals and filters in categories naturally arise as classes of morphism systems from the preordering on morphisms.

The class of composition-closed morphism systems is directly related to the lattice of subcategories, because the axioms of categories make it so that identity morphisms and objects coincide with one another. To some extent then, the study of subcategories can be reduced to composition-closed morphism systems. Another class that plays a central role in category-related applications is morphism sequences, as these are used to define exact sequences and chain complexes, which are central objects of study in homology.

Object properties:

We previously described properties of morphisms; naturally, therefore, we would be remiss if we didn't talk about properties of objects. The main property of objects in any category is the isomorphism type, which is a part of the isomorphism identity, except in the case of skeletal categories, in which the two equivalence classes coincide rather than one being a part of the other. The isomorphism type of an object is merely one part of a lattice of properties, where parts of the isomorphism type in the lattice are typically called invariants.

For example, the isomorphism type of the fundamental group is a part of the homotopy type, which is a part of the isomorphism type of a topology, which is in turn part of the topological identity. Parts are defined by the lattice of apartness relations in the natural manner. Another example is the algebraic preorder type of a commutative aperiodic semigroup, which is an invariant; we can use this to define a preorder on isomorphism types within each class of semigroups sharing the same algebraic preorder isomorphism type, which is useful in the study of commutative aperiodic semigroups.

Universal algebraic constructions:


Lattice of subcategories:

Let $C$ be a category; then we can naturally form a lattice of subcategories $Sub(C)$. This contains all categories on subsets of objects and morphisms, such that units and composition are preserved. As mentioned previously, $Sub(C)$ is essentially the same lattice as the family of unital composition-closed morphism systems of a category, because objects coincide with identity morphisms. The first thing we can notice about $Sub(C)$ is that the morphic preordering produces a monotone function (a morphism in the category of partial orders) from $Sub(C)$ to a lattice of preorders.

Proposition. the mapping $f : Sub(C) \to (F,\subseteq)$, where $(F,\subseteq)$ is the principal ideal generated by the morphic preordering of $C$ in the lattice of preorders, forms a monotone map.

Proof. in any subcategory, the collection of morphisms is a subclass of the morphisms of its supercategory. Therefore, by the monotonicity of images, the image of the pairing function on the supercategory, which is its morphic preordering, must be larger than the image of the pairing function on the subcategory. Therefore, the morphic preordering map is a monotone morphism of preorders.

Proposition. for a thin category $C$ the lattice of subcategories $Sub(C)$ is isomorphic to the lattice of subpreorders of the morphic preorder on objects of $C$.

Proof. This follows directly from the correspondence between generalized reflexivity and generalized transitivity in thin categories and the reflexivity and transitivity conditions in preorders. This shows that lattices of preorders emerge out of category theory.

Proposition. thin categories are subalgebra-closed.

Proof. Let $C$ be a thin category; then for all $a,b \in O$ we have $|Hom_C(a,b)| \le 1$. Let $S$ be a subcategory of $C$. Then it has a class $N$ of morphisms which is a subclass of the class $M$ of morphisms of $C$, and any two objects $a,b$ in $S$ are also in $C$; therefore $|Hom_S(a,b)| \le |Hom_C(a,b)| \le 1$.

Corollary. the subalgebra system of all thin subcategories of a category forms a lower class of $Sub(C)$.

Lattice of congruences:

Congruences on categories are defined by refinements of the hom equivalence relation that satisfy the standard input/output relationship used to define congruences. If a congruence is denoted $E$, then a quotient category $\frac{C}{E}$ can be defined by reducing classes of morphisms to individual morphisms. Based upon this definition, the quotient $\frac{C}{Hom}$ is clearly equal to the morphic preordering. It is in this sense that we can say that categories are merely algebraic generalisations of the transitivity relationship at the basis of order theory.

Higher category theory:

Treated as a branch of universal algebra, categories can be applied to any algebraic structure, including themselves. In particular, category theorists have established a total ordering on classes of categories based upon their height, so that categories themselves can be ordered based upon how nested they are. As a generalization of order theory, there are two types of functors between categories: covariant functors and contravariant functors, corresponding to monotone and antitone maps. The opposite category construction, for example, is contravariant, and it generalizes the dual ordering concept in order theory.

Products and coproducts:

As categories are a preordered algebraic structure, we can define morphic lower bounds and upper bounds in any category. A natural question is how we can generalize joins and meets to an arbitrary category. This requires two preliminaries:
  1. If $a,b$ are two objects, then for the meet $a \wedge b$ to be the greatest lower bound, for any other lower bound $L$ there must be a weak suborder of the form $[\{L\}, \{a \wedge b\}, \{a, b\}]$. The same applies for joins in the other direction.
  2. The commutative diagrams in category theory correspond to preposetal subcategories, and therefore commutative diagrams are an entirely order-theoretic construction.
The definition of a product in a category is that for any lower bound $L$ other than the product, there must be a unique weak-ordered preposetal subcategory of the form $[\{L\}, \{a \times b\}, \{a,b\}]$ (that is, there must be some unique morphism which makes this subcategory preposetal, which is for it to commute). Coproducts are defined dually. In other words, products and coproducts are generalizations of meets/joins:
  • Product $\leftrightarrow$ Meet
  • Coproduct $\leftrightarrow$ Join
The arithmetical properties of semilattices defined by commutative group-free semigroups are well-understood. Generally speaking, most semilattices are not multiplicative, and semilattices exist with arbitrary arithmetic properties. The one semilattice that describes products is the well-known meet operation in the lattice of equivalence relations. It follows that products and coproducts in category theory are abstract general concepts that describe operations that don't necessarily have anything to do with products.

The term product originates from the fact that the category-theoretic product in the category of sets is the direct product. But even in the category of sets, the coproduct is the disjoint union, which is an additive operation. Indeed, we define measures by a generalization of the additive property of the disjoint union. Products/coproducts are better thought of as general classes of operations which generalize the meets/joins of lattice theory.

Categories with additional structure:

So far we have only dealt with abstract categories with no additional structure on them. Firstly, recall that the concept of having additional structure is an order-theoretic concept explained by lattices of equivalence relations and their dual. We can preorder objects by how much mathematical structure they have; in this sense categories are extensions of preorders, as we have described. But there is no reason to stop our exploration of this preorder at ordinary categories.

The first thing worth mentioning is concrete categories. Technically, concrete categories are categories with additional structure, where that additional structure is a functor to the category of sets. This makes the morphisms in the category behave like functions and the objects behave like sets. These are probably the most important class of categories in most applications.

Another direction is adding additional structure to the hom classes of a category, which leads to preadditive categories, quantaloids, etc. Preadditive categories have a commutative group structure on each hom class, while quantaloids have a complete lattice structure subject to additional axioms. Abelian categories are a special case that is probably the most important in applications. Clearly, we can build a hierarchy of concepts by how much structure they have.

Monday, November 23, 2020

Ideal multiplication in Artinian rings

Previously, I described the fundamental importance of quantales in commutative algebra, number theory, etc. In particular, while the non-negative integers naturally form a commutative aperiodic quantale, multiplication in its extension structures is generally not aperiodic. We can recover this desired property of aperiodicity for general commutative rings by instead transitioning to ideal multiplication. This makes ideal multiplication an entirely order-theoretic concept.

Recall for example that the meet of two elements in a lattice is defined to be the "greatest lower bound." This very definition admits immediate order-theoretic generalisation, as we can consider other lower bounds and partially order them by how great they are. While basic order theory considers semilattices, a more advanced theory can produce all kinds of bounds-producing functions naturally associated to partial orders. The previously mentioned multiplication of the natural numbers arises naturally from the order dual of the lattice of set partitions. Ideal multiplication is an example of such a bound-producing function, and it produces a less great lower bound than intersection. This can be clarified by considering a special case.

Proposition. ideal multiplication in Artinian commutative rings has bounded index

Proof. ideal multiplication is a monotone and anti-extensive operation, therefore if we consider any ideal $I$ in an Artinian ring, then its power $I^2$ is going to be less than $I$, and this can be continued to get a descending chain $I \supseteq I^2 \supseteq I^3 \supseteq I^4 \supseteq \ldots$ which clearly must terminate after a finite number of steps. Therefore, there exists $n$ such that $I^{n+1} = I^n$, which is the index of the ideal. In other words, the ideal generates a commutative aperiodic monogenic semigroup with $n$ elements.
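As a concrete illustration (my own example, not from the post): the finite ring Z/12Z is Artinian, and the powers of the ideal (2) stabilize after two steps. A small Python sketch:

def ideal(gen, n=12):
    """The principal ideal (gen) in Z/nZ as a frozenset."""
    return frozenset(gen * k % n for k in range(n))

def ideal_product(I, J, n=12):
    """Product ideal IJ: the additive closure of all pairwise products."""
    prods = {a * b % n for a in I for b in J}
    closure = {0}
    frontier = {0}
    while frontier:
        frontier = {(x + p) % n for x in closure for p in prods} - closure
        closure |= frontier
    return frozenset(closure)

# The descending chain I, I^2, I^3, ... must stabilize in an Artinian ring.
I = ideal(2)            # (2) = {0,2,4,6,8,10} in Z/12Z
power = I
chain = [power]
while True:
    nxt = ideal_product(power, I)
    if nxt == power:
        break
    chain.append(nxt)
    power = nxt

print([sorted(p) for p in chain])   # (2) then (4); I^3 = I^2, so the index n is 2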

This naturally leads to the fact that Artinian integral domains are fields, and that Artinian rings have a T1 spectrum. However, the important point right now is that Artinian rings have a special property relating to ideal multiplication. To summarise:
  • Ideal multiplication semigroups are always group-free, but they are not always of finite index. In the quantale $Ideals(\mathbb{Z})$, for example, only the bounds have finite index.
  • Ideal multiplication is always of finite index for Artinian rings
In general, that a semigroup is aperiodic means that it is group-free. Ideal multiplication is always group-free, but it isn't always of finite index unless the commutative ring it arises from is Artinian.

Sunday, November 22, 2020

Compactness and limit points

In order topology, limit points can be defined for any total order. It is natural to ask, then, whether certain order-topological principles can be passed on from total orders to partial orders. Clearly, for example, we can define discrete and scattered partial orders based upon order-topological chain conditions. To some extent, compactness generalizes lower/upper isolated points from order topology.
  • A point is called lower isolated if it is an isolated point in its principal ideal.
  • A point is called upper isolated if it is an isolated point in its principal filter.
Compactness is a dual condition: an element can be either meet-compact or join-compact depending upon the semilattice operation being considered. In a total order, join-compactness coincides with lower-isolated points and meet-compactness coincides with upper-isolated points. To see this, consider that in a total order the join of any finite set is a member of that set, so the only way to generate a new element is with an infinite set. At the same time, the order topology for any total order is Fréchet, which means that all limit points are generated by infinite sets.

If an element is necessarily generated by an infinite set, it is non-compact by definition, and it necessarily has some element of that infinite set in each of its neighbourhoods, so it is also a limit point of its predecessors. Therefore, limit points and infinite joins/meets coincide. This is why the order type of the real numbers can either be defined by its bounded completeness as a lattice or by its topological completeness: these two principles coincide.

Clearly limit points and non-compactness coincide in total orders, but non-compactness applies to partial orders as well. It is immediately clear that all join-compact elements are lower isolated in all chains ending in them, and dually for meet-compact elements. If they are not, then clearly an infinite total order with no finite reduction produces them as a join/meet, which means they are not compact. In the other direction, there is a process whereby we can construct, from any infinite non-compact representation, a total order that has the element as a join, which demonstrates the similarity between non-compact elements and limit points in the other direction.

Theorem. let $S$ be a countable non-compact join representation of an element $x$; then we can construct from $S$ an ascending sequence that has $x$ as its join.

Proof. we will construct an infinite ascending sequence $T$ from the elements of $S$: (1) first take any element of $S$ and put it in $T$; (2) then we can always find another element $y$ of $S-T$ such that the join of $y$ and $T$ is greater than the current join of $T$, because if we could not, the join of $T$ would already be an upper bound for the entire set, and since $T$ is finite this would give a finite join representation of $x$, contradicting the assumption that the representation $S$ is non-compact; (3) we can apply this infinitely to get an ascending sequence of order type $\omega$ that has $x$ as its join.
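The proof is essentially a greedy algorithm. Here is a small Python sketch of the greedy step, run in the lattice of subsets of a finite set where joins are unions (my own illustration; with a genuinely infinite non-compact representation the same step would repeat forever, producing the sequence of order type omega):

def greedy_ascending_chain(S, join):
    """Greedily build a chain T whose running joins strictly ascend,
    drawing elements from the representation S (steps 1-2 of the proof)."""
    S = list(S)
    T = [S[0]]
    current = T[0]
    for y in S[1:]:
        if join(current, y) != current:   # y strictly increases the running join
            T.append(y)
            current = join(current, y)
    return T, current

union = lambda a, b: a | b
S = [{1}, {2}, {1, 2}, {3}, {2, 3}]       # a join representation of {1, 2, 3}
chain, top = greedy_ascending_chain(S, union)
print(chain, top)                          # [{1}, {2}, {3}] with join {1, 2, 3}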

Corollary. we can construct descending sequences from non-compact meet representations.

Saturday, November 21, 2020

Enriched subalgebra systems

A subalgebra system $S \subseteq Sub(A)$ for a given algebraic structure $A$ can be given additional mathematical structure. The obvious example is Spec(R) for a given commutative ring with identity $R$, which gives the prime ideals topological structure. Spec(R) is nothing more than a single example of the idea of an enriched subalgebra system. The points of Spec(R) are extrema-free.

Proposition. Prime ideals are finite extrema-free

Proof. They are union-free because prime ideals form additive subgroups, and subgroups are union-free. They are intersection-free because for any two order-incomparable prime ideals $I$, $J$ whose intersection is a prime ideal $P$, we can get elements $a \in I-J$ and $b \in J-I$; then $ab \in IJ \subseteq I \cap J = P$, yet $a \not\in P$ and $b \not\in P$, which contradicts the primality of $P$. Finally, they are extrema-free because they are both union-free and intersection-free.

Multichain families are the trivially finite extrema-free families in the ontology of set systems, but otherwise, generally speaking, extrema-free set systems aren't particularly common. Therefore, that prime ideals are extrema-free distinguishes them from their fellow set systems.

Ideal multiplication:


Ideal multiplication is monotone and decreasing:

Proposition. ideal multiplication is decreasing, meaning that for every ideal $I$, multiplication by any other ideal $J$ produces a smaller ideal, so that $IJ \subseteq I$.

Proof. the product ideal $IJ$ consists of all products of elements of $I$ and $J$, together with their sum closure. But by the definition of ideals, $I$ contains all products of elements of $I$ with any element of $R$, and it is closed under sums, so $IJ \subseteq I$. In order-theoretic terminology, this means that the action of ideal multiplication on inclusion is anti-extensive.

Alternatively, since $IJ \subseteq I \cap J$, this means that ideal multiplication is a lower-bound-producing function. The only difference is that the decreasing terminology classifies the individual actions of ideal multiplication on inclusion, rather than the operation itself.
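For principal ideals of the integers this is easy to check numerically, since (a)(b) = (ab) and (a) ∩ (b) = (lcm(a,b)), and containment of ideals in Z reverses divisibility. A small Python check (my own illustration):

from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# In Z, (a)(b) = (ab) and (a) ∩ (b) = (lcm(a, b)).  The product ideal is
# contained in the intersection exactly when lcm(a, b) divides ab.
for a in range(1, 50):
    for b in range(1, 50):
        assert (a * b) % lcm(a, b) == 0   # (ab) is contained in (lcm(a, b))

print("IJ is contained in I ∩ J for all sampled principal ideals of Z")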

Proposition. ideal multiplication is monotone

Proof. let $M$ be an ideal; then it produces an action on inclusion by ideal multiplication. Now, let $I$ and $J$ be two ideals such that $I \subseteq J$. $MJ$ contains all products of elements of $M$ and $J$ and their sums, so since $J$ contains $I$, $MJ$ also contains all products of elements of $M$ and $I$ and their sums; hence $MI \subseteq MJ$.

Remarks. subtraction by a positive integer is also decreasing and monotone. Ideal multiplication subtracts or removes certain elements from an ideal, but in a potentially much more interesting manner.

Ideal multiplication forms a commutative aperiodic semigroup:

Ideal multiplication clearly forms a commutative semigroup, because the multiplication of the underlying commutative ring with identity does. The difference is that multiplication generally is not aperiodic (for example on the integers it is not aperiodic) but it is for ideals.

Theorem. ideal multiplication is aperiodic

Proof. let $I$ and $J$ be ideals and let $x,y$ be ideals such that $Ix = J$ and $Jy = I$. Then by the previously-mentioned decreasing nature of ideal multiplication $I \subseteq J$ and $J \subseteq I$ but since the inclusion of sets is a partial order this means that $I = J$. Therefore, any pair of ideals which are multiplicatively reachable from one another are equal which means the semigroup is aperiodic.

Remarks. in general every bound-producing function on a partial order is aperiodic, in this case ideal multiplication is lower bound producing with respect to inclusion. Furthermore, since commutative aperiodic semigroups are used to describe bounds in order theory, this means that ideal multiplication is entirely an order-theoretic construction (much like Spec(R)). This clearly distinguishes ideal multiplication from ring multiplication.

Residuals:

As ideal multiplication is aperiodic, there is never a multiplicative inverse for a proper ideal, or anything of that sort. However, we can define the order-theoretic concept of a residual for a pair of ideals. As ideal multiplication is a decreasing function, it does not make sense to consider the smallest ideal such that the product is greater than something, because the product of ideals never gets larger.

Therefore, the only residual that makes sense is the largest ideal such that the product is less than something. This is the colon of ideals $I : J$ defined by $\{ x : xJ \subseteq I \}$. This means that if $x$ is an ideal such that $xJ \subseteq I$, then $x \subseteq I : J$, which makes this the residual operation for ideals. It is an entirely order-theoretic construction relative to ideal multiplication, and it makes the ideals into a residuated lattice.
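For principal ideals of the integers the colon works out to (a) : (b) = (a/gcd(a,b)). A brief Python sketch (my own example) checking the residual property on a sample range:

from math import gcd

def colon(a, b):
    """Generator of the colon ideal (a) : (b) for principal ideals of Z."""
    return a // gcd(a, b)

a, b = 12, 8
c = colon(a, b)                    # (12) : (8) = (3)
# Residual property: x * (b) is contained in (a)  iff  x lies in (c).
for x in range(-100, 101):
    assert ((x * b) % a == 0) == (x % c == 0)

print(f"({a}) : ({b}) = ({c})")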

Quantales:


Introducing quantales:

Definition. a structure $(Q,\vee,\wedge,*)$ is called a quantale if it forms a complete lattice, multiplication forms a semigroup, and multiplication distributes over arbitrary joins.

Clearly, $(Q, \vee, *)$ forms an idempotent semiring for any quantale, therefore another way of describing a quantale is that they are a combination of a complete lattice and an idempotent semiring. A quantale is called commutative if its multiplication is commutative, and it is commutative aperiodic if its multiplication is also aperiodic.

The definition of quantales requires that they be a complete lattice, so $Sub(Q)$ for any quantale consists of all multiplicatively closed complete sublattices. Perhaps, though, it would be more natural to define $Sub(Q)$ to consist of all sets closed under its three semigroup operations, which would lead to the concept of incomplete quantales. Later studies of quantales may need to make distinctions regarding completeness.

The fundamental quantale:

While quantales are usually an advanced construction, it is also the case that $(\mathbb{N},gcd,lcm,*)$ forms a commutative aperiodic quantale. Therefore, as a quantale can be constructed out of the non-negative integers, they are an entirely natural construction, much like rings, which combine addition and multiplication. This fundamental quantale combines all operations which are entirely multiplicative in nature, and which are entirely determined by prime factorisation.
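A quick numerical check (my own sketch) of the quantale axiom here: in the divisibility order the join is lcm, and multiplication distributes over it (and over gcd as well):

from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# In the divisibility order on the positive integers, join = lcm and
# meet = gcd.  Multiplication distributes over the join:
for a in range(1, 40):
    for b in range(1, 40):
        for c in range(1, 40):
            assert a * lcm(b, c) == lcm(a * b, a * c)
            assert a * gcd(b, c) == gcd(a * b, a * c)

print("multiplication distributes over lcm (and gcd)")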

Enriched subalgebra systems:


Ideal-related constructions:

It is not hard to see that the product of ideals distributes over the sum of ideals, because multiplication in any commutative ring distributes over addition. Therefore, the subalgebra system of ideals $Ideals(R) \subseteq Sub(R)$ can be enriched with additional structure to make it a commutative aperiodic quantale. We now have two fundamental constructions:
  • Prime ideals are enriched with a topological structure Spec(R)
  • Ideals are enriched with the structure of a commutative aperiodic quantale $(Ideals(R),+,\cap,*)$.
It is not hard to see that the quantale of ideals on the integers is isomorphic to the fundamental quantale on the non-negative integers. Quantales of ideals are a very general construction, applicable to any commutative ring. Therefore, the quantale of ideals and the topology of prime ideals are the two most basic enriched subalgebra systems in commutative algebra.

Thursday, November 19, 2020

Meet decompositions in the radical ideals lattice

Let $L$ be a lattice; then a semilattice decomposition (either meet or join) is described by a function from the lattice to the power set of the lattice: \[ f : L \to \mathcal{P}(L) \] There are numerous types of decompositions for lattice elements, but one particularly important, well-studied type is the complete irreducible decompositions. Using this purely lattice-theoretic notion, we can define Spec(R) for any ring by the complete meet irreducible decompositions of the radical ideals lattice. \[ f : RadicalIdeals(R) \to Spec(R) \] We know from basic order theory that this mapping $f$ is antitone, which means that the lattice Spec(R) is order-dual to the lattice of radical ideals. Additionally, Spec(R) is necessarily a Moore family, which means it is intersection-closed. It does not follow, however, from basic order theory that it is finite union-closed and therefore a cotopology.

To prove that it is finite union-closed requires us to drop back to commutative algebra to study properties of prime ideals, which are the meet irreducibles in the lattice of radical ideals. You can find this proof briefly outlined in Hartshorne's Algebraic Geometry, but I want to expand on it, since it is fundamental to proving that Spec(R) is a cotopology. The only thing needed is a single lemma.

Spectrum-construction lemma: Let $R$ be a commutative ring with identity. Let $I,J$ be ideals and let $P$ be a prime ideal. Then $IJ \subseteq P$ implies that $I \subseteq P \lor J \subseteq P$.

(1) suppose that $I \subseteq P$; then we already have that one of the two ideals is contained in $P$.
(2) if $I \not\subseteq P$ then there exists $a \in I - P$; that is, there is some element which is a member of $I$ but not of $P$. We will now prove what we want by universal quantification over the elements $b$ of $J$. The first thing to realize is that $ab$ is in $P$. To see this, notice that $ab$ is in $IJ$ because $a \in I$ and $b \in J$ and the product of two ideals contains all products of members of the two ideals. Further, we had $IJ \subseteq P$ by supposition, so by the transitivity of inclusion $ab \in P$. \[ \forall b \in J : ab \in IJ \subseteq P \implies ab \in P \] Once we have $ab \in P$, all it takes to see that $b \in P$ is that the definition of prime ideals states that if $ab \in P$ and $a \not\in P$ then $b \in P$, and both of those conditions are satisfied, so $b \in P$. \[ ab \in P \implies b \in P \] Therefore, for all $b \in J$, we have that $b \in P$, which means that $J \subseteq P$. Finally, by logical combination of the two cases (1) and (2), we know that either $I$ or $J$ is contained in $P$, which is what we wanted to show.

Theorem. Spec(R) is finite union closed.

Proof. let $I,J$ be two ideals. Then $IJ \subseteq I \cap J \subseteq I$ and $IJ \subseteq I \cap J \subseteq J$, so $I \subseteq P$ or $J \subseteq P$ implies $IJ \subseteq P$. The spectrum-construction lemma demonstrates the reverse direction, so $I \subseteq P \lor J \subseteq P \iff IJ \subseteq P$.

The elements of $Spec(R)$ are sets of prime ideals greater than a given ideal, that is, sets of the form $\{P : I \subseteq P\}$. The union of two of them is the set of primes satisfying $I \subseteq P \lor J \subseteq P$ for two ideals $I,J$, but we just showed that this is equal to the set of all prime ideals greater than $IJ$, which is therefore a member of $Spec(R)$. Therefore, $Spec(R)$ is finite union closed.

Order-theoretic corollaries:
  • Spec(R) forms a cotopology
  • The radical ideals lattice is distributive.
  • Radical ideals form a locale in the sense of pointless topology.
The additional properties of $Spec(R)$ related to compactness will be discussed later, but for now I wanted to demonstrate and confirm that this is a valid topological construction. The nice thing is that $Spec(R)$ is entirely a lattice-theoretic construction, so it is fully determined by its order-type. Therefore, $Spec(R)$ is a fundamental construction in commutative algebra which can be entirely understood using lattice theory.

Wednesday, November 18, 2020

Lattice-theoretic decompositions

We can represent a semigroup by a function $f : S^2 \to S$ on ordered pairs. In the case of semilattices, this can naturally be extended to a function on arbitrary subsets: \[ f: \mathcal{P}(S) \to S \] This naturally makes a semilattice operation a mapping between two ordered sets. The most fundamental mappings between ordered sets are ones that are either monotone or antitone. In this case, the two dual concepts of join and meet correspond to monotone and antitone mappings on sets.
  • Generalized join: monotone
  • Generalized meet: antitone
To see this, consider that joins are upper bounds and so they can only get bigger. Likewise, meets are lower bounds so they can only monotonically decrease. This property holds for any upper-bound producing function or lower-bound producing function, regardless of whether or not they are semilattices. Consider that $(\mathbb{N},+)$ produces only upper bounds, and therefore it acts monotonically on sets even though it is not a semilattice. In the case of semilattices, we are interested in inverse images of singletons. \[ f^{-1}(\{x\}) \] The elements of the inverse image are all the semilattice decompositions of the element. These decompositions are naturally partially ordered by inclusion. Indeed, as a set system we will show that they form an upper bounded convex family.

Theorem. semilattice decompositions form an upper bounded convex family of sets

Proof. Suppose the semilattice in question is a join semilattice. Then the principal ideal of $x$ forms the maximal join representation, as each element $x$ is the join of all of its predecessors. It follows that the inverse image $f^{-1}(\{x\})$ is upper bounded. To see convexity, consider that the join operation is monotone. Therefore, if a set lies between two decompositions of $x$, its join is squeezed between the joins of those two sets, both of which are $x$, so it must itself be a decomposition. Therefore, intermediate sets are in the family of decompositions, which implies that the set system is convex.

Theorem. the decompositions family of an irreducible element forms an interval of sets

Proof. the minimal decomposition of an irreducible element is the element itself, and it must be contained in every other decomposition, because if it were not, the element would not be irreducible. The decompositions form an upper bounded convex family, so being lower bounded as well, they form an interval.

Corollary. inverse images form a power set partition whose members are upper bounded convex families

Example. consider the weak order [1 3 1].


The following convex upper bounded family consists of all semilattice decompositions of the upper bound element; since that element is not irreducible, this does not form an interval.
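Since the original diagram is not reproduced here, the family can be enumerated directly. A small Python sketch (the element names are my own):

from itertools import combinations

# The weak order [1 3 1]: a bottom, three incomparable middle elements, a top.
elements = ["bot", "a", "b", "c", "top"]
below = {"bot": {"bot"}, "a": {"bot", "a"}, "b": {"bot", "b"},
         "c": {"bot", "c"}, "top": set(elements)}

def join(subset):
    """Least upper bound of a nonempty subset (this poset is a lattice)."""
    uppers = [x for x in elements if all(y in below[x] for y in subset)]
    return min(uppers, key=lambda x: len(below[x]))

# All join decompositions of the top element: subsets whose join is "top".
decomps = [set(s) for r in range(1, 6)
           for s in combinations(elements, r) if join(s) == "top"]

print(len(decomps))                          # 24 decompositions
print([d for d in decomps if len(d) == 1])   # [{'top'}]
# Minimal decompositions also include {'a','b'} etc., so "top" is reducible,
# and the family, while convex and upper bounded by the whole principal
# ideal, is not an interval of sets.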


Irreducible decompositions: a special case exists when we seek to decompose semilattice elements into irreducibles. Irreducible decompositions can also be partially ordered, and the maximal decomposition is the one that contains all irreducibles below the element. This can be described by a decomposition mapping from an element to its set of irreducible components. \[ g: S \to \mathcal{P}(S) \] Decomposition mappings are partial inverses to the set-theoretic generalization of the semilattice operation. In other words, we can go from the decomposition back to the element it was produced from by the semilattice operation.

Proposition. complete irreducible semilattice decompositions form a Moore family

Proof. Suppose that we have a set of irreducibles; then we can always get the complete set of irreducibles by applying the generalized set-theoretic semilattice mapping and then taking the complete set of irreducibles of the result. This forms a closure operation, so its lattice of closed sets naturally forms a Moore family.

In general, complete irreducible semilattice decompositions do not form cotopologies. In order for them to do so, the lattice clearly must at least be distributive. The key to demonstrating that complete representations form a cotopology is to prove finite union closure, which is possible for some distributive lattices.

Definition. an element of a semilattice is join-compact if all minimal join decompositions are finite, likewise it is called meet-compact if all minimal meet decompositions are finite.

Compactness can be phrased in terms of partially ordered sets of decompositions of elements. In this sense, compactness simply means that all minimal representations are finite.

Tuesday, November 17, 2020

Subalgebra-related congruences

The algebraic structures studied in classical abstract algebra: groups, rings, fields, vector spaces, modules, etc. allow theorists to brush aside congruences and focus on subalgebras instead. Most abstract algebra textbooks describe how normal subgroups, ideals, submodules, etc. produce the congruence relations. This can be formalized with a mapping between the two universal algebra lattices $Sub(A)$ and $Con(A)$. \[ f: (K \subseteq Sub(A)) \to Con(A) \] I call this the congruencization mapping, which is a term which probably hasn't been used before. The idea is that congruences are comparatively more complicated, so if we can construct them from simpler objects, that can save us a bit of work. Indeed, this is the case with groups and related structures, where $Con(A)$ is fully determined by the sublattice of normal subgroups. In semigroups, separate mappings to $Con(S)$ need to be considered. The obvious one that comes to mind is the mapping from the ideals of a semigroup to their Rees congruences. \[ f: (Ideals(S) \subseteq Sub(S)) \to Con(S) \] Once we have formally defined these mappings, we can separately consider their properties. It is clear that in both cases, the larger the subalgebra the bigger the congruence, so the mapping is monotone. In particular, the larger the ideal, the larger the Rees congruence associated with it.

Saturday, November 14, 2020

Inclusion-exclusion principle for set partitions

Set theory is essentially an additive logic, while partition theory is a multiplicative logic. Sets can be added together with respect to cardinality, and the difference from the sum can be determined by the intersection. This naturally leads one to wonder if a similar construct can be introduced for set partitions to measure how far they are from being multiplicative. To deal with this, I introduced the idea of the sparsity of two set partitions.

Definition. Let $P$ and $Q$ be two set partitions, and let $P \times Q$ be their cartesian product. Then the sparsity of $P$ and $Q$ is the number of non-intersecting pairs in $P \times Q$, defined by $|\{ (p,q) \in P \times Q : |p \cap q| = 0 \}|$.

This naturally leads to a generalization of the inclusion-exclusion principle to set partitions. Instead of intersection measuring how far two sets are from being additive, the sparsity measures how far two set partitions are from being multiplicative.

Theorem. Let $P$ and $Q$ be two equivalence relations with a finite number of equivalence classes $|P|$ and $|Q|$ in each of them. Then the number of equivalence classes in their meet in the lattice of equivalence relations is the product of the number of equivalence classes in each of them minus the sparsity. In other words, \[ |P \wedge Q| = |P|*|Q| - sparsity(P,Q) \]

Proof. Let $P \times Q$ be the cartesian product of the sets of equivalence classes of $P$ and $Q$. Then we can partition $P \times Q$ into two classes: $Z$, consisting of all pairs of equivalence classes with empty intersection, and $N$, consisting of all pairs of equivalence classes with non-empty intersection. Since these two classes are disjoint, by the inclusion-exclusion principle for sets: \[ |P \times Q| = |N| + |Z| \] The cartesian product of two finite sets $A \times B$ can be partitioned into $|A|$ different equivalence classes, each containing $|B|$ elements. Since each of these classes is disjoint, this can be expressed as a sum, which by the definition of multiplication is equal to $|A| * |B|$. \[ |A \times B| = \sum_{n=1}^{|A|} |B| = |A|*|B| \] Using this formula, we can substitute $|P|*|Q|$ for the cardinality of $P \times Q$, where $P$ and $Q$ are specifically the sets of equivalence classes. This produces the new formula below. \[ |P|*|Q| = |N| + |Z| \] We are almost done with the proof now, except that we need to relate this decomposition of the product to the lattice operation $P \wedge Q$. In order to do that, we must construct $P \wedge Q$. If we express $P$ and $Q$ as equivalence relations, this is $P \cap Q$, so $P \cap Q \subseteq P,Q$. If we take $P \cap Q$ and then break it down into equivalence classes, each equivalence class $C$ of $P \cap Q$ is a subset of exactly one equivalence class in $P$ and one in $Q$, since $P$ and $Q$ are partitions; this produces a single member of $P \times Q$ consisting of the two equivalence classes that contain $C$. These two classes have non-empty intersection, so the pair is a member of $N$, and distinct classes $C$ determine distinct pairs. In the other direction, for any pair $(p,q) \in N$ the intersection $p \cap q$ is a single equivalence class of $P \cap Q$, so every member of $N$ arises this way. Therefore, this defines a one-to-one mapping $f: P \wedge Q \to N$, which means that $|P \wedge Q|$ equals $|N|$, since one-to-one mappings preserve cardinality. Additionally, $|Z| = sparsity(P,Q)$ by definition. \[ |N| = |P \wedge Q| \] \[ |Z| = sparsity(P,Q) \] Finally, these two equalities can be substituted into the previous formula, and $sparsity(P,Q)$ can be subtracted from both sides to get the intended formula. \[ |P \wedge Q| = |P|*|Q| - sparsity(P,Q) \]

The two formulas compared:
We can now compare the two formulas used in additive logic and multiplicative logic side by side to get the analogy between the intersection of sets and the sparsity of partitions. \[ |A \cup B| = |A| + |B| - |A \cap B| \] \[ |A \wedge B| = |A| * |B| - sparsity(A,B) \] These are the fundamental formulas that relate logic and arithmetic. Partition logic can most accurately be defined as a higher form of set theory. While set lattices have $2^n$ members, the lattice of partitions has $2^{n-1}-1$ difference atoms. That is, the atoms in partition logic are sort of like sets, in that they are bits of information. Bits of information can then be used to construct all pieces of information in the partition lattice. As this is the fundamental logic of information, it's fair to say that partition logic is the natural logic of multiplication.
  • Addition -> set theory
  • Multiplication -> partition logic
Therefore, we can form certain partially ordered sets from which the addition and multiplication operations are abstracted. Numbers are already a highly abstract object, whereas sets are much more concrete, as specific examples of collections of objects emerge all the time. In the other direction, everyone is processing information, and partition logic deals with concrete examples of that. For example, the bits of information in a computer can be represented as atoms in the partition lattice, and they can be combined to construct any piece of information.
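As a sanity check on the sparsity formula above, here is a small Python sketch (my own example) using the partitions of Z/12Z by residue mod 4 and mod 6:

from itertools import product

# Partitions of {0,...,11} by residue mod 4 and mod 6.
ground = range(12)
P = [{x for x in ground if x % 4 == r} for r in range(4)]
Q = [{x for x in ground if x % 6 == r} for r in range(6)]

# Meet in the lattice of equivalence relations: the nonempty pairwise
# intersections of blocks; sparsity counts the empty ones.
meet = [p & q for p, q in product(P, Q) if p & q]
sparsity = sum(1 for p, q in product(P, Q) if not (p & q))

assert len(meet) == len(P) * len(Q) - sparsity
print(len(meet), len(P) * len(Q), sparsity)   # 12 = 24 - 12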

Direct products:
As we have seen how partitions multiply together, we can now consider special cases of multiplicative partitions. One such special case is that of a direct product.

Definition. two set partitions $P$ and $Q$ are direct products of one another if they are multiplicative and the intersections of equivalence classes taken from $P$ and $Q$ all have cardinality one.

This can be used to describe the cartesian product used in set theory. In a cartesian product, the kernels of the first and second projections are direct products of one another in the lattice of equivalence relations. On the same note, the direct product of two algebraic structures is defined by congruences that are direct products of one another.

References:
https://arxiv.org/abs/0902.1950

Friday, November 13, 2020

Difference join of input/output relations

Given a function $f: A \to B$, we have seen how we can establish a logical input/output relationship using the lattice of set partitions, which establishes that for certain ordered pairs of partitions $I$ and $O$ a mapping can be defined between the equivalence classes of $I$ and the equivalence classes of $O$. We will now see that the natural way of combining these input/output relations is given by the meet operation in the lattice of equivalence relations.

Lemma. let $A$ and $B$ be sets, let $P_1, P_2$ be set partitions of $A$, and let $Q_1, Q_2$ be set partitions of $B$, so that $P_1 \times Q_1$ and $P_2 \times Q_2$ are set partitions of $A \times B$. Then $(P_1 \times Q_1) \wedge (P_2 \times Q_2) = (P_1 \wedge P_2) \times (Q_1 \wedge Q_2)$.

Proof. define projection mappings to $P_1$ and $P_2$ on the first index and $Q_1$ and $Q_2$ on the second index. Then we can define a mapping to the ordered pair $(P_1, P_2)$, which has $P_1 \wedge P_2$ as a kernel, and a mapping to the ordered pair $(Q_1, Q_2)$, with $Q_1 \wedge Q_2$ as a kernel. Then mapping to both ordered pairs $((P_1,P_2),(Q_1,Q_2))$ produces a kernel of $(P_1 \wedge P_2) \times (Q_1 \wedge Q_2)$. By swapping the inner elements of this function we can get $((P_1,Q_1),(P_2,Q_2))$, whose kernel is the ordered pair expression of $(P_1 \times Q_1) \wedge (P_2 \times Q_2)$. Since these two mappings differ by a one-to-one correspondence, they have isomorphic kernel partitions.

Corollary. $P^2 \wedge Q^2 = (P \wedge Q)^2$.

Proof. this can be immediately achieved by a simple change of variables.

Proving the main theorem:
Theorem. let $f: A \to B$ be a function and let $I_1 \to O_1$ and $I_2 \to O_2$ be input/output relations for $f$; then $I_1 \wedge I_2 \to O_1 \wedge O_2$.

Proof. let $a =_{I_1 \wedge I_2} b$, then by the definition of intersection $a =_{I_1} b$ and $a =_{I_2} b$. Since $I_1 \to O_1$ by assumption, $f(a) =_{O_1} f(b)$ and likewise by $I_2 \to O_2$ we can infer $f(a) =_{O_2} f(b)$. Finally, by the definition of intersection of equivalence relations $f(a) =_{O_1 \wedge O_2} f(b)$.

Remarks. it makes perfect sense that given a collection of inputs you can get a collection of all their outputs. The interesting thing is that we can formally model this using the meet operation of the lattice of equivalence relations, which is the mathematical means of handling collections of information.

Constructing the congruence lattice:
Corollary. let $f : A^2 \to A$ be a binary operation and let $P,Q$ be congruence relations. Then their equality meet $P \wedge Q$ is a congruence.

Proof. congruence relations define mappings between equivalence classes from $P^2 \to P$ and from $Q^2 \to Q$. By the difference join of equivalence relations we can now get a mapping between equivalence classes $P^2 \wedge Q^2 \to P \wedge Q$. By the meet of cartesian products of partitions this is equal to $(P \wedge Q)^2 \to (P \wedge Q)$ which means that $P \wedge Q$ is a congruence.

Corollary. the set of congruences of an algebraic structure $A$ forms a lattice

Definition. the lattice of congruences of an algebraic structure $A$ will be denoted $Con(A)$.

Remarks. the lattice of congruences $Con(A)$ has more structure than the lattice of subalgebras $Sub(A)$, because it is a set of set systems rather than simply a set system. Therefore, it makes sense to consider $Sub(A)$ before moving on to $Con(A)$, which is potentially more complicated. Even then, $Con(A)$ has an intuitive basis in the idea of combining inputs and outputs, as we have seen here.

Properties of lattices of congruences:
By the theorems that we have proven here, we can immediately infer that the lattice of congruences $Con(A)$ is a lattice-ordered bounds-maintaining meet subsemilattice of the lattice of equivalence relations on the ground set of an algebraic structure. As equivalence relations are intersection closed, this means that $Con(A)$ forms a Moore family as a set system, just like $Sub(A)$.

Monday, November 9, 2020

Congruences of binary operations clarified

The definition of congruences of binary operations is well known, and they are widely used in semigroup theory and related branches of abstract algebra. We can recast the definition of a congruence as an input/output relation, and thereby make it more intuitive and easier to understand, and put it in the larger context of input/output relations. In order to do this, we must first introduce the idea of the cartesian product of two set partitions.

Cartesian product of set partitions:
Let $A$ and $B$ be sets, and let $P,Q$ be set partitions of $A$ and $B$ respectively. Then $P \times Q$ is the set partition on the cartesian product $A \times B$ defined by equality of $P$ in the first index and equality of $Q$ in the second index. \[ (a,b) =_{P \times Q} (c,d) \iff (a =_P c) \land (b =_Q d) \] It is not hard to see that a given set partition can have a product defined over itself by making both set partitions equal. This determines the input set partition for the congruences of a binary operation. \[ (a,b) =_{P^2} (c,d) \iff (a =_P c) \land (b =_P d) \] This sort of construction can even be generalized to equality of n-tuples for use in universal algebra. In this way, all congruences can be defined as input/output relations where some piece of information determines another piece of information within a function.

Congruences as input/output relations:
It is not hard to see, then, that congruences of a binary operation $f$ are defined by input/output relationships of the form $P^2 \to P$. In other words, \[ (a,b) =_{P^2} (c,d) \implies f(a,b) =_P f(c,d) \] As is the case with all input/output relations, this produces a quotient operation $P^2 \to P$ which describes how the $P^2$ input information completely determines the $P$ output information.
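A small Python sketch (my own example): addition mod 12 with the partition P by residue mod 3, checking the P^2 to P relationship and printing the quotient operation:

from itertools import product

n = 12
f = lambda a, b: (a + b) % n          # a binary operation on Z/12Z
block = lambda x: x % 3               # P: partition of Z/12Z by residue mod 3

# P is a congruence of f iff (a,b) =_{P^2} (c,d) implies f(a,b) =_P f(c,d).
for a, b, c, d in product(range(n), repeat=4):
    if block(a) == block(c) and block(b) == block(d):
        assert block(f(a, b)) == block(f(c, d))

# The quotient operation P^2 -> P: addition mod 3 on the blocks.
print({(p, q): (p + q) % 3 for p in range(3) for q in range(3)})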

Sunday, November 8, 2020

Congruences of unary operations

Input/output relations can be defined on any function from a partition of the input set and a partition of the output set. Suppose that we have a function $f : A \to A$; then a single set partition can be used on both the input set and the output set. This leads to the notion of a congruence of a unary operation.

Definition. let $f : A \to A$ be a unary operation, and let $P$ be a set partition of $A$. Then $P$ forms a congruence of $f$ provided that it satisfies the input/output relationship $P \to P$. In other words, \[ x =_P y \implies f(x) =_P f(y) \] Example one. let $\mathbb{C}$ be the set of complex numbers and consider the complex conjugation map $f : \mathbb{C} \to \mathbb{C}$ defined by $f(a+bi) = a-bi$. Then the set partitions for the real part R and the imaginary part I both form congruences. The quotient function f/R, which is the effect of the complex conjugation on the real part, is the identity, and the quotient function f/I is negation. In other words, f/I maps equivalence classes by imaginary number value to their corresponding negative equivalence class.

Example two. let sgn(x) be the sign function; it produces three equivalence classes. The sign partition is a congruence of the negation function on the real numbers $f : \mathbb{R} \to \mathbb{R}$. The quotient is a permutation on three elements which maps zero to itself, one to negative one, and negative one back to one.
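Example two can be checked mechanically; a small Python sketch (the finite sample of reals is my own simplification):

# Example two in code: the sign partition is a congruence of negation.
sample = [x / 2 for x in range(-10, 11)]      # a finite sample of reals

sgn = lambda x: (x > 0) - (x < 0)             # blocks: -1, 0, +1
f = lambda x: -x                              # negation on the reals

# Congruence check: x =_P y  implies  f(x) =_P f(y).
for x in sample:
    for y in sample:
        if sgn(x) == sgn(y):
            assert sgn(f(x)) == sgn(f(y))

# The quotient permutation on the three blocks: fixes 0, swaps +1 and -1.
print({s: sgn(f(s)) for s in (-1, 0, 1)})     # {-1: 1, 0: 0, 1: -1}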

Example three. let $f : \mathbb{R}^3 \to \mathbb{R}^3$ be the function defined by $f(x,y,z) = (1/x,-y,z^2)$. Then the first, second, and third coordinate partitions all produce congruences, with corresponding quotients of reciprocal, negation, and square. All the component functions operate on their inputs/outputs independently, and they have no effect on one another. In situations like these, we can get a commutative factorization, as all the functions can be applied independently without affecting one another's inputs or outputs.

Example four. let $P$ and $Q$ be two set partitions in which each pair of equivalence classes, one from each, intersects exactly once, and let $T$ be the transformation semigroup on their common input set. Then the functions which are congruences of $P$ with identity quotients and which are congruences of $Q$ form the subsemigroup of transformations of $Q$. The effect on $Q$ of any such transformation is its quotient. $P$ and $Q$ can be anything from indices or fields to any piece of information, as long as all their intersections are singular. In this way, unary congruences make getters and setters both axiomatic and comprehensible.

Saturday, November 7, 2020

Input/output relations

The lattice of set partitions is one of the most important constructions in mathematics, because of its key role in the theory of functions. As functions are relations between input and output, all aspects of the relationship between set partitions and functions can be described by input/output relationships. First, I will briefly discuss set partitions.

Lattice of set partitions:
Set partitions play two different roles: they represent equivalences and they represent differences. Therefore, they can be ordered in two different ways: the equivalence ordering and the differences ordering. The equivalence ordering is useful, for example, because there is a monotone mapping, defined by orbits, between subalgebras of the symmetric group (permutation groups) and the lattice of equivalence relations. The differences ordering is useful because it determines the information ordering of functions on a given input. A function has more information than another if it preserves more differences. In order to avoid confusion between these dual lattices, the semilattices can be labeled individually:
  • Differences join: the join in the differences ordering
  • Equality join: the join in the equivalence ordering
It may be useful to give these semilattices different labels as well, but in any case, which semilattice is being referred to will always be made clear. It is a basic fact of the lattice of set partitions that these two semilattices are actually very different. There are far more difference atoms than equality atoms: difference atoms are bits of information, while equality atoms are simply unordered pairs. The general behaviour of the two semilattices is very different as well, which is one reason they need to be considered separately and always distinguished from one another. This is not like the case in set theory, where union and intersection are very similar.

Input/output relations:
Suppose that we have a function $f : A \to B$ with an input set $A$ and an output set $B$. This essentially means that the function defines an input/output relationship between the inputs and the outputs: given any input, a single output is determined. We can relate this to set partitions by defining a set partition $I$ of the input set and a set partition $O$ of the output set. Using these set partitions, we can define a criterion for a functional input/output relationship between them: \[ x =_I y \implies f(x) =_O f(y) \] Intuitively this means that given the information in the input, we can determine the information in the output. Partitions can be defined from anything from indices of sequences to fields to any other piece of information, which makes this possible. So this general construction lets us determine when any piece of information determines another. Congruences defined in abstract algebra are also defined in this way.
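This criterion is directly computable. A minimal Python sketch (the names and the squaring example are my own) of the check:

def respects(f, domain, I_block, O_block):
    """Check the input/output relationship: x =_I y implies f(x) =_O f(y)."""
    return all(O_block(f(x)) == O_block(f(y))
               for x in domain for y in domain
               if I_block(x) == I_block(y))

domain = range(-5, 6)
square = lambda x: x * x

# Partition inputs by absolute value; partition outputs into singletons.
print(respects(square, domain, abs, lambda y: y))          # True: |x| determines x^2
print(respects(square, domain, lambda x: 0, lambda y: y))  # False: the trivial
# partition of the inputs does not carry enough information to determine x^2.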

Functions as composite objects:
Given an input/output relationship on a function, we can always get a quotient function which defines how an element of I determines which element of O is produced. That is, the quotient function defines how we can get the output information from the input information. In this way, we see how functions are partially ordered: that is, they have parts. As functions are input/output relationships, the most natural way to define their parts is not through subsets but rather through set partitions that determine input/output relationships that can produce new quotient functions. These are new input/output relationships which are parts of functions.