Tuesday, December 29, 2020

Condensation of commutative semigroups

A commutative semigroup is naturally preordered by the factorisation preorder $x \leq y \Leftrightarrow \exists z \in S^1 : xz = y$ (with an identity adjoined if necessary, so that the relation is reflexive). This generalizes the divisibility ordering of the natural numbers, which is the factorisation preorder of multiplication. Any preorder has symmetric components: the equivalence classes of the relation $x \leq y \wedge y \leq x$, and these partition the underlying set. Further, for any preorder we can get the condensation by collapsing each symmetric component to a single element. I will show that an analogous condensation is possible for commutative semigroups.

Theorem. the symmetric components of a commutative semigroup form a congruence

Proof. let $H$ denote the partition into symmetric components, and suppose that $a \equiv c\ [H]$ and $b \equiv d\ [H]$. That means there exist $x_1,x_2,y_1,y_2$ such that $ax_1=c$, $a = cy_1$, $bx_2 = d$, and $b = dy_2$. We now get that $ab = (cy_1)(dy_2)$, which by commutativity means that $ab = (y_1y_2)cd$, and dually $(ax_1)(bx_2) = cd$, which means that $(x_1x_2)ab = cd$. Therefore, $ab$ and $cd$ are both factors of one another, so they belong to the same symmetric component, and the symmetric components $H$ form a congruence.

Theorem. the quotient of the commutative semigroup by its symmetric components is antisymmetric.

Proof. let $H_1$, $H_2$ be symmetric components with $H_1 \leq H_2$ and $H_2 \leq H_1$ in the quotient, and suppose for contradiction that $H_1 \neq H_2$. Then there exist classes $I_1, I_2$ such that $I_1H_1 = H_2$ and $H_1 = H_2I_2$. Let $x$ be an element of $H_1$ and $y$ an element of $H_2$. Picking any $a_1 \in I_1$, the product $a_1x$ lies in $H_2$, and since $a_1x$ and $y$ are both in $H_2$ there exists some $b_1$ such that $a_1b_1x = y$. Dually, picking any $a_2 \in I_2$, the product $a_2y$ lies in $H_1$, and since $a_2y$ and $x$ are both in $H_1$ there exists some $b_2$ such that $a_2b_2y = x$. This means that $x$ and $y$ are factors of one another, so they belong to the same symmetric component, contradicting the assumption that $H_1 \neq H_2$. Therefore, $\frac{S}{H}$ is antisymmetric.

From now on we can refer to $\frac{S}{H}$ as the condensation, by analogy with preorders. Just as condensing the symmetric components of any preorder yields a poset, condensing the symmetric components of any commutative semigroup yields a posetal commutative semigroup: one whose factorisation preorder is a partial order. This quotient structure fully determines the order theory of commutative semigroups.
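To make this concrete, here is a minimal Clojure sketch of how the condensation of a finite commutative semigroup could be computed from its elements and operation. All the names here are my own and not from any library, and the operation is assumed to be given as a binary function on a finite set:

;; x <= y in the factorisation preorder (taken reflexively, as if an
;; identity element were adjoined)
(defn factor? [elems op x y]
  (or (= x y)
      (boolean (some #(= (op x %) y) elems))))

;; the set of all y that are mutual factors with x
(defn symmetric-component [elems op x]
  (set (filter #(and (factor? elems op x %)
                     (factor? elems op % x))
               elems)))

;; the H classes, with the induced operation on classes; the congruence
;; theorem above is what makes the induced operation well defined
(defn condensation [elems op]
  {:classes (set (map #(symmetric-component elems op %) elems))
   :op (fn [c1 c2]
         (symmetric-component elems op (op (first c1) (first c2))))})

For multiplication mod 4, for example, the classes work out to the units, the class of the zero divisor 2, and zero:

(:classes (condensation #{0 1 2 3} #(mod (* %1 %2) 4)))
; => #{#{1 3} #{2} #{0}}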

Commutative algebra:
Ideal multiplication forms a commutative posetal semigroup, even when the ring's multiplicative semigroup is not posetal. I will show here that in the special case of PIDs, ideal multiplication is simply the condensation of the multiplicative semigroup.

Theorem. in a PID the ideal multiplication is isomorphic to the condensation of the multiplicative semigroup.

Proof. every multiplicative principal ideal $aR$ in a PID is closed under sums: since a PID has an identity, any sum of elements of $aR$ has the form $ax + ay + \dots = (x + y + \dots)a$, which is again in $aR$, so $aR$ is a ring ideal. Two elements generate the same ideal precisely when each divides the other, that is, when they lie in the same symmetric component, so we have a natural one-to-one mapping $f : \frac{(R,*)}{H} \to (Ideals(R),*)$ from the condensation of the multiplicative semigroup to the multiplicative semigroup of ideals; it is onto because in a PID every ideal is principal.

Let $H_a,H_b$ be elements of the condensation semigroup, with representatives $a \in H_a$ and $b \in H_b$. Since the condensation is a quotient semigroup, $H_aH_b = H_{ab}$, and therefore $f(H_aH_b) = f(H_{ab}) = (ab) = (a)(b) = f(H_a)f(H_b)$, because the product of two principal ideals is the principal ideal generated by the product of their generators. In the other direction, given two principal ideals $(a)$ and $(b)$, their product is $(ab)$, so $f^{-1}((a)(b)) = f^{-1}((ab)) = H_{ab} = H_aH_b = f^{-1}((a))f^{-1}((b))$. Therefore, $f$ is an isomorphism.

Examples:
1. The condensation of a commutative semigroup is a semilattice iff it is a commutative Clifford semigroup.

2. In the multiplicative semigroup of a field the condensation is a two-element semilattice consisting of the class of units and the zero element. The multiplicative semigroups of fields are Clifford.

3. Let $(\mathbb{Z},*)$ be the multiplicative semigroup of the integers. Then the condensation $\frac{(\mathbb{Z},*)}{H}$ is isomorphic to $(\mathbb{N},*)$, because the absolute value of a product is determined by the absolute values of its arguments. Therefore, by modding out signs we get the condensation.

4. The condensation of a finite monogenic semigroup is a finite commutative aperiodic semigroup.

Notes:
The theory of $\frac{S}{H}$ was first explored by Kolibiarova in the late fifties. It is mentioned in chapter five of Grillet's Commutative Semigroups. It is the first thing anyone should know about commutative semigroups.

Monday, December 21, 2020

Properties of principal Moore families

I previously introduced principal Moore families, and I presented an algorithm to test for them. However, I didn't discuss any of their set-theoretic properties. In this post I will describe how principal Moore families are families of principal ideals (or principal filters) of certain preorders with lattice condensation. In order to do this, we need to describe properties of unions and intersections of ideals.

Proposition. principal Moore families are union-free

Proof. let $S$ and $T$ be two inclusion-incomparable sets of the Moore family and suppose their union $S \cup T$ is contained in the Moore family. Then by the definition of principal Moore families there exists an element $x$ such that $cl(\{x\}) = S \cup T$. But the closure of a set is the smallest closed set containing it, and $x$ belongs to either $S$ or $T$, both of which are closed sets strictly smaller than their union, so $cl(\{x\})$ must be contained in $S$ or in $T$ and cannot equal $S \cup T$. Therefore, by contradiction the family is union-free.

It is well known that the families of subgroups, submodules, vector subspaces, subrings, ideals, subalgebras, and so on are all union-free. It should not come as a surprise, then, that the family of ideals in a PID is union-free, but this proposition applies in full generality.

Question. when is the intersection of principal ideals of a preorder a principal ideal?

Answer. let $S = cl(\{s\})$ and $T = cl(\{t\})$ be principal ideals and consider their intersection. In order for this intersection to be a principal ideal there must be some element $x$ for which the principal ideal $\{ y : y \leq x \}$ is equal to $\{ y : y \leq s \wedge y \leq t \}$. In other words, $y \leq x \iff y \leq s \wedge y \leq t$. The forward implication (applied to $y = x$) means that $x$ must be a lower bound of $s$ and $t$, and the backward implication, which states that all common lower bounds are below $x$, means that $x$ must be a greatest lower bound. Therefore, intersections of principal ideals correspond to meets when they exist.

Theorem. principal Moore families are preorder containment families of upper-bounded meet-complete prelattices

Proof. let $M$ be a principal Moore family, and define the generation preorder by $a \leq b \iff cl(\{a\}) \subseteq cl(\{b\})$; then $M$ is the family of principal ideals of this preorder. To see this, note that $cl(\{a\}) \subseteq cl(\{b\})$ implies $\{a\} \subseteq cl(\{a\}) \subseteq cl(\{b\})$ because the closure operator is extensive, which means that $a \in cl(\{b\})$, so $cl(\{b\})$ contains every predecessor of $b$. Conversely, suppose an element $x$ is contained in $cl(\{b\})$: then $cl(\{x\}) \subseteq cl(\{b\})$, because $cl(\{b\})$ is a closed set containing $x$ and $cl(\{x\})$ is the smallest such set, so $x$ is a predecessor of $b$. It follows that each $cl(\{b\})$ is exactly the principal ideal of $b$, and $M$ is a preorder containment family.

Moore families, ordered by inclusion, are upper-bounded meet-complete lattices, so it further follows that this preorder has an upper-bounded meet-complete lattice as its condensation. In the other direction, in order for the family of all principal ideals to be a Moore family, it must be closed under all intersections, but intersections of principal ideals correspond to meets, so intersection closure follows from meet completeness. The upper bound is provided by assumption, so the family of principal ideals forms a principal Moore family. Therefore, the two concepts are equivalent.
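To make the generation preorder concrete, here is a small Clojure sketch in the same style as the code in the posts below; `cl` is a reconstruction of the closure operator (the intersection of all members of the family containing a given set) and `generation-preorder` is a hypothetical name of my own:

(require '[clojure.set :refer [union intersection subset?]])

;; the closure of s: the intersection of all members of family containing s
(defn cl [family s]
  (apply intersection (filter #(subset? s %) family)))

;; the generation preorder a <= b iff cl({a}) is a subset of cl({b}),
;; returned as a set of ordered pairs over the points of the family
(defn generation-preorder [family]
  (let [points (apply union family)]
    (set (for [a points, b points
               :when (subset? (cl family #{a}) (cl family #{b}))]
           [a b]))))

For the principal Moore family #{#{0} #{0 1} #{0 1 2 3}}, the predecessors of 1 in this preorder are exactly #{0 1} = (cl family #{1}), illustrating how each member is recovered as a principal ideal.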

Corollary. the family of principal ideals of a finite lattice is a principal Kolmogorov Moore family

The two different ways of presenting a preorder are as preorder containment families or as Alexandrov families. The only real difference between the two is that the former are union-free and the latter are union-closed. If we take a finite lattice and form the union closure of its principal Kolmogorov Moore family, then we get a Kolmogorov Alexandrov topology whose inclusion order is a distributive lattice with the original lattice as its suborder of join-irreducibles.

In the case of PIDs, it suffices to consider the ideal generation preorder on elements. This preorder is clearly a meet-complete upper-bounded prelattice, with the equivalence class of units as its preorder upper bound. The resulting family of ideals is entirely determined by this preorder, and individual principal ideals correspond to its symmetric components: the classes of associated elements. The lattice of ideals is determined by the condensation.

Saturday, December 19, 2020

V-equivalence classes

Previously we mentioned that the mapping $V : \wp(R[x_1,...,x_n]) \to \wp(\mathbb{A}^n)$ has partially ordered equivalence classes. As suborders of a power set $\wp(R[x_1,...,x_n])$ these V-equivalence classes also form set systems, and have set-theoretic features such as unions and intersections. Firstly, I need to show that $V$ is a complete semilattice homomorphism from union to intersection. This is a simple result of associativity and idempotence.

Proposition. $V\left(\bigcup f\right) = \bigcap_{f_i \in f} V(f_i)$

Proof. $V$ can be expressed as the intersection of the root sets of each of its polynomials. When the argument is given a union decomposition, $V$ can equivalently be expressed as the intersection of the root sets of the polynomials of each of its components. The nesting is cancelled by the associativity of intersection, and any overlap between sets in the union decomposition is cancelled by idempotence: \[ V\left(\bigcup f\right) = \bigcap_{p \in \bigcup f} \{ a \in \mathbb{A}^n : p(a) = 0 \} = \bigcap_{f_i \in f} \bigcap_{p \in f_i} \{ a \in \mathbb{A}^n : p(a) = 0 \} = \bigcap_{f_i \in f} V(f_i) \] These are two representations of the same set, distinguished only by nesting and repetition. As nesting and repetition cancel out in a semilattice, $V(\bigcup f) = \bigcap V(f_i)$ and $V$ is a complete semilattice homomorphism from union to intersection.
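As a sanity check, this identity can be verified by brute force over a small finite plane such as $\mathbb{F}_5^2$. A sketch, with polynomials represented simply as Clojure functions on points (the representation and names are mine, not from a library):

(require '[clojure.set :as set])

;; the finite plane F_5 x F_5
(def points (set (for [x (range 5), y (range 5)] [x y])))

;; the common roots of a polynomial system
(defn V [polys]
  (set (filter (fn [a] (every? #(zero? (% a)) polys)) points)))

;; two one-polynomial systems: {xy} and {x + y}
(def f1 #{(fn [[x y]] (mod (* x y) 5))})
(def f2 #{(fn [[x y]] (mod (+ x y) 5))})

;; the proposition: V of the union equals the intersection of the V's
(= (V (set/union f1 f2))
   (set/intersection (V f1) (V f2)))
; => true (both sides are #{[0 0]})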

This construction works for an arbitrary family of polynomial systems, even if they are not ideals. There is no similarly general decomposition available for intersections: it is true that the restriction of $V$ to ideals over an integral domain maps finite intersections to unions, but that works only for integral domains and ideals. With this out of the way, we get to the set-theoretic properties of V-equivalence classes.

Corollary. $V$ equivalence classes are completely union closed.

Proof. Let $f$ be a subclass of a V-equivalence class. This means that there exists $S$ such that $\forall f_i \in f : V(f_i) = S$. Therefore, by the idempotence of intersection, $\bigcap V(f_i) = S$. By the previous proposition $V(\bigcup f) = \bigcap V(f_i) = S$, so the union of the subclass belongs to the same V-equivalence class, which demonstrates union closure.

Corollary. V equivalence classes are upper bounded by $\cup V^{-1}(S)$.

This is the maximal polynomial system that produces a given algebraic set, and since ideal closure leaves $V$ invariant, this maximal polynomial system is an ideal. There is not necessarily a corresponding least element of a V-equivalence class. Consider two different pairs of lines through the same point, such as $\{x,y\}$ and $\{x+y,x-y\}$: both systems cut out the single point at the origin, but no common subsystem does, so even the equivalence class of a single point need not have a least element. We will therefore focus on the upper bounds instead.

It is clear that the maximal element of any V-equivalence class $V^{-1}(S)$ is equal to $\mathcal{I}(S)$. Every polynomial belonging to any member of the class vanishes on $S$, so every member is contained in $\mathcal{I}(S)$. For the reverse direction, $\mathcal{I}(S)$ contains a system whose vanishing set is $S$, so $V(\mathcal{I}(S)) \subseteq S$, and every point of $S$ is a common root of $\mathcal{I}(S)$, so $V(\mathcal{I}(S)) = S$ and $\mathcal{I}(S)$ itself belongs to the class. Therefore, if we take $V$ as our central object of study, we can describe algebraic sets as emerging from its image and $\mathcal{I}$ as emerging from its inverse images.

Let $M$ be the family of all maximal polynomial systems associated to algebraic sets and let $F$ be the family of all algebraic sets; then the restriction of $V$ to $M$ is one-to-one. \[ V|_M : M \to F \] This means that there is always a one-to-one mapping between a family of ideals $M$ and the family of all algebraic sets. Hilbert's Nullstellensatz simply says that $M$ consists of all radical ideals for a given algebraically closed field. The beauty of the Nullstellensatz is that it is a topological correspondence: the lattice of radical ideals has an order-dual set system presentation $Spec(R)$ that is order-isomorphic to the Zariski cotopology. As $Spec(R)$ is a sober topology, determined entirely by its order, this is a correspondence of topological properties between $Spec(R)$ and the Zariski topology.

Friday, December 18, 2020

Polynomial systems and algebraic varieties

This blog has mainly dealt with set systems, and so polynomial systems haven't been considered as much. I intend that to change. The way I think of it now is that set systems are the nicest objects of study in order theory (for example, every poset represented as a set system can be made into a lattice by adding certain missing sets) and polynomial systems are the nicest objects of study in commutative algebra. Let's get started.

Definition. let $R$ be a commutative ring, and let $R[x_1,...,x_n]$ be the polynomial ring in $n$ variables; then the complete family of polynomial systems is denoted $\wp(R[x_1,...,x_n])$.

Any non-trivial study of set systems must take certain monotone maps as its central objects of study. How fitting then that the central object of study in polynomial systems is an antitone map. In this post, we will consider this antitone map from polynomial systems to point sets. \[ V : \wp(R[x_1,...,x_n]) \to \wp(\mathbb{A}^n) \] This fundamental antitone map $V$ sends any polynomial system to its set of common roots. It's not hard to see that this map is antitone: the more polynomials you have, the fewer common roots there are. \[ V(S) = \{ a \in \mathbb{A}^n : \forall p \in S : p(a) = 0 \} \] The map $V$ is a morphism in the category of sets, but it need not be an epimorphism. We can consider the subset $F$ of $\wp(\mathbb{A}^n)$ which constitutes its image. We can say that the algebraic sets emerge here, essentially as elements of the image of the fundamental antitone mapping $V$. \[ F = image(V) \] The image of $V$, consisting of all algebraic sets, is clearly a set system. Basic set theory shows that this image is a Moore family: an intersection of algebraic sets $\bigcap V(f_i) = V(\bigcup f_i)$ is again algebraic, so the image has complete intersection closure. As an antitone function, $V$ maps the empty set to the largest algebraic set $\mathbb{A}^n$ and the set of all polynomials to the smallest algebraic set $\emptyset$. As is common in these cases, the only thing remaining is to show that this family has finite union closure, and then we will get a cotopology.

In order to get that this forms a cotopology, note that over an integral domain the roots of two polynomials $p$ and $q$ can be combined in their product polynomial $pq$: since integral domains have no non-trivial zero divisors, $p(a)q(a) = 0$ is logically equivalent to $p(a)=0$ or $q(a)=0$. Taking products over finite sets of polynomials extends this to finite union closure. It is not hard to see then that in the special case of an integral domain the algebraic sets form a cotopology (the Zariski cotopology). Now that we have considered the image of $V$, we can consider the inverse image of an algebraic set $S$. \[ V^{-1}(S) \] Given an algebraic set, we get a family of polynomial systems which produce it as an output. This family of polynomial systems is clearly partially ordered by inclusion, so we can get smaller or larger polynomial systems that have the same common roots. One way that we can get a larger polynomial system is by taking the ideal closure: if two polynomials vanish at a point then their sum vanishes there as well, and vanishing is preserved by multiplication by an arbitrary polynomial. This is the natural manner in which ideals emerge from polynomial systems.
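The product-polynomial argument can be checked by brute force over a small finite field. A sketch, again with hypothetical names of my own and polynomials represented as plain functions on points:

(require '[clojure.set :as set])

;; the affine line over F_7
(def points (set (range 7)))

(defn V [polys]
  (set (filter (fn [a] (every? #(zero? (% a)) polys)) points)))

(let [p  (fn [x] (mod (- x 2) 7))          ;; p(x) = x - 2, root at 2
      q  (fn [x] (mod (- x 5) 7))          ;; q(x) = x - 5, root at 5
      pq (fn [x] (mod (* (p x) (q x)) 7))] ;; the product polynomial
  (= (V #{pq}) (set/union (V #{p}) (V #{q}))))
; => true: both sides are #{2 5}, the union of the two root sets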

Further, this partially ordered family of polynomial systems has a maximal element $\mathcal{I}(S)$: the set of all polynomials that vanish on $S$. This is maximal because any polynomial system whose common roots include $S$ consists only of polynomials vanishing on $S$, and is therefore contained in $\mathcal{I}(S)$. In the special case of algebraically closed fields, Hilbert's Nullstellensatz states that these maximal elements of the inverse images are exactly the radical ideals, which means that there is a one-to-one restriction mapping of $V$ from radical ideals to algebraic sets.

Wednesday, December 16, 2020

Ontology of polynomials

When considering polynomials over a commutative ring, it is often useful to limit yourself to special cases. Even though the most interesting polynomials are multivariable, we tend to limit ourselves to univariate polynomials, or even further to monic polynomials, such as in the definition of integral extensions. The whole field of linear algebra is basically built around the simple idea of a linear polynomial. Clearly, a classification system for polynomials is necessary.
Polynomials are relatively simple data structures, which makes them a solid part of any computer algebra system. They can simply be represented by a collection of individual monomial terms, each of which only needs to contain a coefficient and some variables. With such a simple representation as a sum of terms, each of the classes below can easily be turned into a computable predicate on polynomials (a sketch follows the examples). This is then a hierarchy of computable predicates, which is generally the nicest kind of ontology. Here are some classes to start with:

Homogeneous polynomials: it is trivial to check if a given polynomial has only terms of the same degree, where degree means the sum of all exponents of variables in a term. For example, $x^2 + y^2 + xy$ is a homogeneous polynomial and a binary quadratic form, while $ax^3 + bx^2y + cxy^2 + dy^3$ is a binary cubic form. Homogeneous polynomials are fundamental in algebraic geometry because their root sets are invariant under scaling, which produces a natural link to projective geometry. Applying a little creativity, we could define anti-homogeneous polynomials to be ones whose terms all have different degrees.

Max power one polynomials: these are polynomials in which the exponent of each variable in each term is never greater than one. For example, $xy + yz + xz$ is a max power one quadratic form, but $x^2 + 2x + 1$ is not, because one of its variables has an exponent of two. Although not as familiar, these might appear from semirings in which multiplication is idempotent.

Diagonal polynomials: dual to the polynomials in which all the variables of a term are different are those in which each term involves only a single variable. We can call these diagonal polynomials, and diagonal forms are the special cases that are also homogeneous. For example, $x^3 + y^3 + z^3$ is a diagonal form.

Additional classes: this is a very limited upper ontology, which is applicable to general rings. We could also classify polynomials based upon the ring that they emerge from, so for example we could consider real polynomials, complex polynomials, etc, but these wouldn't fit into an upper ontology. As these classes are based upon the sum representation, factorisation-based considerations like separable and irreducible polynomials are not included. In algebraically closed fields the irreducible polynomials are simply the linear univariate ones, so such classes depend heavily on the base ring and must be dealt with separately.
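Here is a sketch of these classes as computable predicates, under my own assumed representation of a polynomial as a map from monomials to non-zero coefficients, where a monomial is a map from variables to exponents (so $x^2 + y^2 + xy$ becomes {{:x 2} 1, {:y 2} 1, {:x 1, :y 1} 1}):

;; the sum of all exponents of variables in a term
(defn monomial-degree [m]
  (reduce + (vals m)))

;; all terms have the same degree (assumes a non-empty polynomial)
(defn homogeneous? [poly]
  (apply = (map monomial-degree (keys poly))))

;; all terms have pairwise different degrees
(defn anti-homogeneous? [poly]
  (apply distinct? (map monomial-degree (keys poly))))

;; no variable occurs with an exponent greater than one in any term
(defn max-power-one? [poly]
  (every? (fn [m] (every? #(<= % 1) (vals m))) (keys poly)))

;; every term is a power of a single variable
(defn diagonal? [poly]
  (every? #(<= (count %) 1) (keys poly)))

For example, (homogeneous? {{:x 2} 1, {:y 2} 1, {:x 1, :y 1} 1}) and (diagonal? {{:x 3} 1, {:y 3} 1, {:z 3} 1}) both return true, while (max-power-one? {{:x 2} 1, {:x 1} 2, {} 1}) returns false because of the squared term.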

Tuesday, December 8, 2020

Principal Moore families

The concepts of principal ideal domains, principal ideal rings, and so on are set-system-theoretic classifications of the systems of ideals of certain rings or integral domains. This leads naturally to a set-theoretic generalization: the concept of principally generated Moore families of sets. These can be defined like so:
;; assumes moore-family? and the closure operator cl from earlier posts,
;; where (cl family s) is the smallest member of family containing s
(require '[clojure.set :refer [union]])

(defn principal-moore-family?
  [family]

  (and
   (moore-family? family)
   ;; every member of the family must be the closure of some singleton
   (let [singleton-closures (set
                             (map
                              (fn [i]
                                (cl family #{i}))
                              (apply union family)))]
     (= family singleton-closures))))
As a general concept applicable to any kind of set system (or hypergraph), whether it emerges from algebra, topology, analysis, or any other subject, this can also be applied to constructs besides commutative rings: $Sub(G)$ for a finite cyclic group $G$ also forms a principal Moore family. Here are some examples of this predicate in action:
; true example
(principal-moore-family? 
  #{#{0} #{0 1} #{0 1 2 3}}) 

; false example: #{0 1 2 3} is not the closure of any singleton
(principal-moore-family? 
  #{#{0} #{0 1} #{0 2} #{0 3} #{0 1 2 3}}) 
Some of these principal Moore families also form preorder containment families, and they can potentially have interesting intersections with other classes of set systems, which is a subject for further exploration.

Monday, December 7, 2020

Exact sequences and irreversibility

Recently, I defined morphism sequences and related concepts in any abstract category. There are many more concepts that can be defined in categories with additional structure, such as abelian categories, and exact sequences are a particularly important example. The key to the construction is that abelian categories have certain limit/colimit related properties, like the existence of kernels and cokernels for any morphism.

Ordering relations and their subrelations are characterized by the fact that given any two distinct comparable elements, there is no reverse path from the successor element back to its predecessor. In other words, partial orders are directed acyclic graphs, and acyclicity is path-irreversibility. In the other direction, morphisms are associated with degrees of irreversibility. This leads to the dual logics of set theory and partition logic in concrete categories.

Take the category of sets as an example: an irreversible morphism $f: A \to B$ uses only a subset of the input information of $A$ and reaches only a subset of the output values of $B$. That is, whenever we use an irreversible morphism, we are implicitly referring to an order relation. In the other direction, category theorists like to relate order to irreversible morphisms, in particular through monomorphisms and epimorphisms. Proper subobjects are simply defined by irreversible monomorphisms. All category-theoretic concepts of order come from different notions of irreversibility.

I would characterize exact sequences as a construction of this sort. The essence of exact sequences lies in how they allow for certain manipulations of degrees of reversibility. In doing so, they allow for a convenient expression of order-related concepts like group extensions in certain categories. In particular, while in the category of sets there are two different kinds of irreversibility, covered by partition logic and set theory, the concept of a kernel in an abelian category allows a relation between the two types of irreversibility.

Exact pairs:
Binary relations on morphisms are one of the foundational concepts of category theory; in particular, composition is defined on the relation $M^2_{*}$ of all inner-equal (composable) ordered pairs of morphisms, because composition is a partial operation. In that same vein, I would define exactness as a binary relation between morphisms: two inner-equal morphisms in an abelian category are exact if the image of the first morphism is equal to the kernel of the second.

To see how this exactness construction is a trick for manipulating degrees of reversibility, consider that $1 \to A \to B$ simply makes the morphism from $A$ to $B$ a monomorphism (and therefore an expression of subobjects), while $A \to B \to 1$ makes it into an epimorphism (and therefore an expression of quotients). Therefore, simple exact sequences let you express the dual irreversibility-related concepts of subobject and quotient in a category using nothing but morphisms.

By combining both constructions we can clearly make a morphism reversible. For example, the exact sequence $1 \to A \to B \to 1$ has a reversible central morphism. Reversibility is ensured because in an abelian category, every bimorphism is an isomorphism. In this case, no proper order-theoretic concept like a subobject or a quotient is described but rather what is described is the lack of category-theoretic distinctions.

Exact sequences and group extensions:
Category-theoretic irreversibility allows previously order-theoretic concepts like subobjects and quotients to be described in category-theoretic language. Using exact sequences we can describe properties of both subobjects and quotients at once. The decisive example comes from group extensions, as exact sequences can be used to describe how a quotient group emerges from a normal subgroup of a group. \[ 1 \to N \to G \to Q \to 1 \] This is the most common construction, and it provides a different language for expressing the already widely familiar concept of a group extension: for example, both $\mathbb{Z}/4$ and $\mathbb{Z}/2 \times \mathbb{Z}/2$ fit into an exact sequence of the form $1 \to \mathbb{Z}/2 \to G \to \mathbb{Z}/2 \to 1$, so the middle group carries genuinely more data than its endpoints. The reason this is possible is how exactness relates to the irreversibility orderings of a category, which lead to the dual notions of subobjects and quotients.

Friday, December 4, 2020

Fractional ideals

Neither ideal multiplication nor submodule multiplication has multiplicative inverses. Indeed, ideal multiplication forms a commutative H-trivial semigroup. Yet, in spite of this, we often talk of fractional ideals: a related semigroup on a set system in which certain sets have inverses. It is clear that in order to construct inverses we need a different sort of structure than a module or a ring. Instead we should turn to R-algebras.

Definition. let $R$ be a subring of another ring $K$, then $\frac{K}{R}$ forms an extension R-algebra with addition and multiplication provided by $K$ and scalar multiplication provided by the subring $R$.

Example. let $R$ be an integral domain and let $K$ be its field of fractions then $\frac{K}{R}$ forms an extension R-algebra.

The point is that now, unlike with rings and modules, we have two different types of multiplication to take care of: (1) scalar multiplication by the subring elements and (2) the multiplication operation of the R-algebra. In both submodules and ideals, the multiplication of sets was the same multiplication used to define the sets. By having two different kinds of multiplication, we make inverses and fractions possible, which is a significant difference from ideal multiplication.

The fractional ideals of the field of fractions extension R-algebra of an integral domain are defined as the submodules $I$, closed under addition and scalar multiplication, for which there is a nonzero element $d$ such that $dI \subseteq R$. The lattice operations are inherited from submodules, but fractional ideal multiplication is defined by the ordinary multiplication operation of the R-algebra, rather than the scalar multiplication used to define the fractional ideals themselves.

Multiplication of fractional ideals forms a semigroup: for any two fractional ideals $I,J$, the denominators $d_I, d_J$ can be multiplied to get $d_Id_J$, which clears out the denominators in $IJ$. Unlike the ideal multiplication semigroup, which is commutative and group-free, these semigroups have much more varied behavior, which is related to comparison with the identity element:
  • The full set of scalars $R$ is the identity element: $RI = I$ because $I$ is closed under scalar multiplication and contains $1 \cdot I$
  • Subidentity ideals ($I \subseteq R$) are decreasing, as a special case of multiplication by submodules: $IJ \subseteq RJ = J$
  • Superidentity ideals ($R \subseteq I$) are increasing, because they contain the multiplicative identity $1$, so $J \subseteq IJ$
The elements that are incomparable to $R$ have different and varied behavior. This shows that for fractional ideals, multiplication can be either increasing or decreasing, so it clearly forms a different kind of semigroup than ideal multiplication. Fractional ideals can even have inverses, and the set of all invertible fractional ideals, the group of units, forms a subgroup of the fractional ideal semigroup.
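To make the inverses concrete, take $R = \mathbb{Z}$ and $K = \mathbb{Q}$. Then $\frac{1}{2}\mathbb{Z}$ is a fractional ideal, with $d = 2$ clearing its denominators, and \[ \left(\tfrac{1}{2}\mathbb{Z}\right)(2\mathbb{Z}) = \mathbb{Z} \] so $2\mathbb{Z}$ and $\frac{1}{2}\mathbb{Z}$ are inverse elements of the fractional ideal semigroup of $\mathbb{Z}$.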

In the case of a Dedekind domain, we further know that all non-zero fractional ideals have inverses. This means we can go from a setting with no multiplicative inverses to one with all of them. Much as the commutative semigroup $(\mathbb{N},*)$ has no non-trivial inverses, yet all of its non-zero elements become invertible in $(\mathbb{Q}_+,*)$, in a Dedekind domain the non-zero ideals have no inverses, but once they are embedded in the fractional ideal semigroup they all do. So for a Dedekind domain, the introduction of fractional ideals is like the introduction of fractions in number theory.

Wednesday, December 2, 2020

Submodule lattices with operators

Modules are distinguished from commutative rings by the fact that they are defined by the action of an external set. Recall that commutative groups form an abelian category, so for any commutative group $G$ a ring action on $G$ can be defined by a ring homomorphism to the endomorphism ring $End(G)$. Each scalar then acts by a group endomorphism, which makes modules a special case of groups with operators. The transition to the submodule lattice $Sub(M)$ proceeds in a similar manner.

Lattices with operators:
Let $L$ be a lattice, then a lattice with operators can be defined by an indexed family of endomorphisms in the category of preorders and monotone maps. This clearly forms an abstract class of ordered algebraic structures like residuated lattices and quantales. We can make R-submodules into a lattice with operators in the standard manner: \[ \cdot : Ideals(R) \times Sub(M) \to Sub(M) \] The only thing that needs to be proven really is that ideal action on submodules is indeed monotone. Let $I$ be a fixed ideal and suppose that $M_1 \subseteq M_2$. Let $b$ be an element of $IM_1$ then $b = \sum a_i x_i$ for some $a \in I$ and $x_i \in M_1$. Now by the fact that $M_1 \subseteq M_2$ this means that $b \in \sum a_i x_i$ for some $a_i \in I$ and $x_i \in M_2$ so $b \in IM_2$ which implies $IM_1 \subseteq IM_2$. This confirms that ideal action is monotone. Therefore, we can distinguish between two cases: (1) ideals which form a quantale and (2) submodules which form a lattice with operators.