Monday, December 26, 2011

Everything is a function

Everything is a function. Associative arrays are just functions of their keys, lists are functions of the natural numbers, etc. These data structures are all groups of key/value pairs. There is only one problem: sometimes a value isn't given any key. In this case, we can automatically generate a key for it to turn it into a map:
(= (to-map #{1 2 3})
   {"#_2185" 3, "#_1873" 2, "#_0195" 1})
Now here is an example of the combination of a group of map structures:
(= (merge-maps {:x 10, :y 100} [1 2 3] #{1 2 3})
   {"#_0249" 1, "#_1847" 2, "#_212" 3, 0 1, 1 2, 2 3, :x 10, :y 100})
This shows how we can represent essentially any value as a map. This has the further advantage that we can easily add metadata to any object. Now a further concern comes when we want to describe links between these structures:
(add-edges [0 1] [1 2] [2 3] #{:x :y})
Any system of simple links can be represented in terms of adjacency matrices, which are predicate functions on pairs of natural numbers:
[[0 1 1 0]
 [1 0 0 1]
 [1 0 0 1]
 [0 1 1 0]]

[[0]
 [1 0]
 [1 1 0]
 [1 1 1 0]]
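A minimal sketch of reading such a matrix as a predicate on pairs of natural numbers (the helper name is my own):

(defn adjacency-predicate
  "Turn an adjacency matrix into a predicate on pairs of natural numbers."
  [matrix]
  (fn [i j]
    (= 1 (get-in matrix [i j]))))

;; ((adjacency-predicate [[0 1 1 0]
;;                        [1 0 0 1]
;;                        [1 0 0 1]
;;                        [0 1 1 0]]) 0 1) ; => true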
Otherwise the links may have complicated metadata, such as labels. Furthermore, sometimes we want to deal with hypergraphs, which requires yet more complicated edge objects.

Monday, December 19, 2011

History and versioning

Versioning occurs sequentially. That is, new versions of objects are created one after another. Nonetheless, it is often useful to organise our representation of versions hierarchically. For example, most software programs have versions such as 23.3.1, 5.25.1, etc.

Similarly, we organise dates hierarchically with specific orders of magnitude such as years, months, days, hours, minutes, and seconds. For example, the date December 19, 2011 can be represented as 2011.12.19. As these version numbers are just paths in a hierarchy, we can easily represent them in Lisp:
(date. 1957 6 12)
(date. 2008 9 3)
These paths are also similar to file system paths, which allows you to easily organise your files according to date. In Clojure, you can access the value of a data structure at a path using get-in.
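For example, a quick sketch with a nested map of my own:

(def releases
  {2011 {12 {19 "winter build"}}})

(= (get-in releases [2011 12 19])
   "winter build")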

Sunday, December 18, 2011

Emacs setup for Christmas break

I have created a special emacs setup for this Christmas break. I have further customised my keyboard with new hotkeys specifically for navigating S-expressions. I intend to use emacs as a general structured data editor for any tree-like data, such as file systems, lisp programs, and other formats such as XML. I am also exploring options for manipulating visual and graph-based systems.

Saturday, December 17, 2011

Applications of graph theory in computer science

Graph theory has a diverse range of applications in computer science. In the past people programmed using sequential side effects and that worked relatively well at the time. However, with the creation of the internet and the onset of the multicore age we need to view computation in terms of several interacting agents.

Graph theory is the mathematical mechanism which allows us to understand the interactions between a diverse range of agents in a network. As the Internet expands, computation will become increasingly distributed across the Internet, which will increase the importance of graph theory.

Tuesday, December 6, 2011

Linked data structures

Graph theory deals with collections of nodes and edges. Linked data structures are graphs such that each edge is a link between precisely two nodes. Each such structure can be expressed using an adjacency list:
{
  :a [[:- :a] [:> :b] [:> :c]]
  :b [[:- :b] [:> :c]]
  :c [[:- :c]]
}
To describe more complicated metadata, edges may also be stored outside the adjacency list and linked to, in which case our representation is closer to an incidence list.

Saturday, December 3, 2011

Infinitesimal analysis

In the absence of known discrete foundations for some mathematical system we can produce some other foundational abstraction. In particular, if elements occur at extremely small scales, the extent of which is also unknown, then infinitesimals are the particular class of abstraction that should be considered.

Once we have some notion of infinitesimals in our grasp, the next step is to describe the effect that infinitesimal changes in input have on a function's output. In particular, a function is called smooth if infinitesimal changes in input only result in infinitesimal changes in output.

Derivatives are just the slope of an infinitesimal secant line of a function, and integration is the process of summing infinitesimally thin areas.
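In symbols, for an infinitesimal change dx, the derivative is the standard part of the corresponding difference quotient:

$$f'(x) = \operatorname{st}\left(\frac{f(x + dx) - f(x)}{dx}\right)$$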

Thursday, November 24, 2011

Logic and binary trees

Lisp uses parenthesis notation to build tree structures. These trees are represented internally as binary trees that consist of cons cells with a car and cdr pointer.

Since all of our logic operations are based off of binary values, we can relate binary trees to boolean logic by associating the car node with false and the cdr node with true. For example, here are the four unary logic operations:
(def contradiction    '())
(def logical-identity '(nil true))
(def logical-not      '(true))
(def tautology        '(true true))
With these four logic operations in place, binary logic operations can be defined:
(def logical-and (list contradiction logical-identity))
(def logical-xor (list logical-identity logical-not))
(def logical-or  (list logical-identity tautology))
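A minimal sketch of evaluating these tree-encoded operations, assuming the convention above (false selects the first branch, true the second); the helper name is my own:

(defn apply-logic
  "Walk a tree-encoded logic operation: each boolean argument selects
   the car (false) or cdr (true) branch of the current node."
  [tree & args]
  (reduce
   (fn [node arg]
     (nth node (if arg 1 0) false))
   tree
   args))

(= (apply-logic logical-not false) true)
(= (apply-logic logical-xor true false) true)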
The correspondence between binary computers and binary trees is one thing in favor of current implementations of Lisp. Of course, Lisp is not implementation dependent so other representations of tree structures may be considered.

Monday, November 21, 2011

Computer graph system

My next programming project is to construct an effective computer graph system. Here are some examples of graphs this system might implement:
  • Dataflow graphs which model computation using arrows between components.
  • Molecular graphs which model physical structures in terms of chemical elements.
  • Categories which model many abstract structures.
This system will be based upon nodes which contain links to one another. For example, Lisp cons cells are one type of node that have a unique next link. This sort of cell is powerful enough to effectively model all data structures, which is a fact most Lispers are familiar with.

Functions are one type of node with an $I \to O$ interface which generates some new output value for any input value. The implementation of the node can be freely modified so long as its interface remains intact.

Morphisms are functions which contain another arrow $X \to Y$ representing a source set and a target set for the function. From there more complicated metadata can be used to describe functions, such as commutativity, associativity, invertibility, etc.

Wednesday, November 16, 2011

Tau radians

I believe that tau radians are a fundamentally advantageous angle representation because they are based upon fractions of a circle. This allows us to relate any fraction to an angle, for example 0.25 becomes 90 degrees.

Any complex number can then be approximated by a natural number and two fractions, the first one for the fractional part of the magnitude and the second one for the number's angle on the complex plane. For example, (3, 5) could be represented as (5 0.83 0.16).
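A minimal sketch of this representation (the function name and output format are my own):

(defn complex->tau-triple
  "Represent a+bi as [whole-magnitude fractional-magnitude angle-in-turns]."
  [a b]
  (let [magnitude (Math/sqrt (+ (* a a) (* b b)))
        turns     (/ (Math/atan2 b a) (* 2 Math/PI))]
    [(long (Math/floor magnitude))
     (- magnitude (Math/floor magnitude))
     (if (neg? turns) (inc turns) turns)]))

;; (complex->tau-triple 3 5) ; => approximately [5 0.83 0.16]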

Monday, October 24, 2011

Software agents

In cognitive science we are generally interested in studying software agents which are based upon the perception-action cycle. In particular, we are interested in control policies which react to our perceptions in a way that will move us towards some goal.

A reactive agent deterministically translates events into reactions. For example, I may want to declare that (= a (+ b c)); in this case, I am creating a reactive agent that translates new values of b and c into a new value for a.
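A minimal sketch of such an agent using Clojure atoms and watches (the event mechanism and names here are my own assumptions):

(def b (atom 1))
(def c (atom 2))
(def a (atom (+ @b @c)))

(defn react!
  "Recompute a whenever b or c changes."
  [_key _ref _old _new]
  (reset! a (+ @b @c)))

(add-watch b :recompute-a react!)
(add-watch c :recompute-a react!)

;; (reset! b 10) ; a now holds 12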

One important consideration when dealing with agents is their reaction time. This is especially important in turn-based games, which may have a reaction time set by a clock.

Wednesday, October 19, 2011

Jason Spisak's Laws of Interface Design

* Fitts was right and no one listened.

I listened and I then implemented corner targets in GoldOS. However, now I use a large monitor at home and I use a tablet in public.  I also suspect I am not unique in this regard.

In neither setup are corner targets a good idea: on a very large monitor the size of the target doesn't make up for the long travel distance, and on a tablet you have to consider Fitts's law in three dimensions.

A much better solution is to use a pie menu with four elements that represent each corner target. Then users can access any of the targets by moving the mouse in the direction of any of the four corners, and this will eventually become a part of muscle memory. Similarly, a pie menu can be accessed using gestures on a touch screen.

* Nested menus are evil.

* Pop up dialogs and balloons are a horrible interface tool.

Nested menus, pop-up dialogs, balloons, etc., are all products of placing things in menus rather than in place, as happens in a ZUI. I strongly oppose all forms of popups, something I have written about extensively in the past.

* Scrolling sucks.

I think what you mean here is that forced scrolling sucks. We should have a ZUI which allows you to zoom out and get a bird's-eye view of things whenever you want to.

* The drag-and-drop desktop and its icons are the junk-drawer of the modern computer and should be eliminated.

I totally agree with this. The desktop is an unzoomable pane with space constraints. It is not a viable part of any ZUI.

* Drill down interfaces are evil

* Configuration gluttony must be stopped.

In modern operating systems in order to find what you want you have to search through cumbersome drill down interfaces, and then to get the system to behave the way you want you have to manually specify it in configuration files.

On the other hand, an AI system uses intelligent search algorithms to satisfy the user's every want and need, regardless of whether they are specified as a long-term configuration file or as a short-term query.

* Consistency is worth more than multiple placement.

Consistency certainly is an important part of HCI design, and the biggest source of inconsistency is applications. In an AI system, there are no applications; there is just a unified system which does whatever you want it to do.

Saturday, October 15, 2011

The evolution of computer input models

Keyboard era
Typewriters were created in the 19th century, and at that time the QWERTY layout began to define keyboard layouts, as it still largely does to this day. QWERTY was designed to prevent jams in these early typewriters, so it is an inefficient layout and relatively hard to learn.

The keyboard was an important technology even before the invention of computers, and when computers were invented, keyboards played an important role in defining their input models.

At first computers couldn't even accept keyboard input directly, so you had to use keyboards separately to create punched cards that were then fed into the computer. This was known as the batch input model.

The next development was the CLI input model, in which computers directly accepted keyboard input and responded to it in various ways, first with printers, and later with CRT displays like the one the IBM 5100 had. The CLI computers were generally used for programming in BASIC or some other programming language.

The first challenge to the use of the keyboard as an input device came from indirect pointing devices like the mouse; however, the mouse alone was far from a challenge to the dominance of the keyboard. The PARC input model, built around the mouse, was introduced to the world with the Apple Macintosh and subsequently came to dominate the computing world for nearly thirty years. Most PARC-based computers used a desktop environment with windows, menus, toolbars, and icons.

Post-keyboard era
The "tablet revolution" was partially launched by the introduction of the Apple iPad in April 2010, is based upon multitouch and voice input. These input methods allow you to avoid using hard keyboard using at least three separate means:

1) The use of a virtual keyboard or optical character recognition to input text through the touch screen.

2) The use of speech recognition to input text through voice sensors.

These two methods allow you to eliminate the use of hard keyboards in most cases. One effect of this is that laptops are now obsolete, since they carry hard keyboards which add extra weight and which aren't ergonomic either. I have given away all of my laptops so that I can use tablets instead.

Now I require that all the keyboards I use are ergonomic keyboards with a Dvorak layout, which significantly reduces the strain on my fingers and helps prevent repetitive strain injury, a condition that affects many heavy computer users from the keyboard age. Now that we have tablets, which are no longer dependent upon hard keyboards, here are the directions in which I think we should head:

1) Transition from the old WIMP GUI to a multitouch ZUI and effectively do away with all sorts of popups, including menus and dialog windows, which will have the added effect of moving us towards eliminating the distinction between web applications and desktop applications.

2) Make commands rather than applications central to the user interface using NLP algorithms and command packages. Commands can be accessed using speech recognition. One important command should be undo, so that the system can forgive mistakes.

These two ideas were first proposed by design expert Jef Raskin; however, he didn't yet realise that they are best implemented on tablet systems with multitouch input for the ZUI and speech input for the commands. I look forward to seeing these ideas implemented in the future.

Tuesday, October 4, 2011

Implementing Cauchy sequences

I have decided to work on constructing some of the computable numbers using Cauchy sequences. Generally, this is as simple as mapping over the counting numbers. Here is a construction of e:

(def e
 (map
  (fn [i]
    (apply + (map (comp / factorial) (range i))))
  (range)))

Evaluating this sequence gives:

(0 1 2 5/2 8/3 65/24 163/60 1957/720 ...)

In this way we obtain successive approximations of e. Here are some additional sequences:

(def pi
 (infinite-series
  (fn [n]
    (/ (* 4 (Math/pow -1 n))
       (+ (* 2 n) 1)))))

(def golden-ratio
 (map
  (fn [i]
    (/ (nth fib (inc i))
       (nth fib i)))
  (rest (range))))
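The pi and golden-ratio definitions above assume an infinite-series helper and a fib sequence that aren't shown; a minimal sketch of what they might look like (these are my own guesses):

(defn infinite-series
  "Lazy sequence of partial sums of (term 0), (term 1), ..."
  [term]
  (reductions + (map term (range))))

(def fib
  ;; Lazy Fibonacci sequence: 1 1 2 3 5 8 ...
  (map first (iterate (fn [[a b]] [b (+ a b)]) [1 1])))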

These should make for relatively good approximations of these irrational numbers.

Monday, October 3, 2011

Stanford AI

I am taking two AI classes from Stanford. The first is the introduction to artificial intelligence; the second is on machine learning.

http://www.ai-class.com/newsfeed/
http://www.ml-class.org/course/auth/welcome

Tuesday, September 20, 2011

Types of data

A fundamental distinction is the distinction between space and time. This leads us to the distinction between extensional definitions and intensional definitions: extensional definitions enumerate every matching object in space, while intensional definitions describe some temporal process for generating objects.

The field of mathematics is focused on the study of a particular form of temporal process: the pure function. The only noticeable effect of a pure function is the creation of some new value.

Tuesday, September 6, 2011

Antireduce

Antireduce is a useful function in sequence pattern recognition. Antireduce takes a binary operation (usually the inverse of some other binary operation) and returns the result of applying it across a sequence of values:

(defn antireduce
  [op coll]

  (map
   (fn [i]
     (if (= i 0)
       (first coll)
       (op (nth coll i) (nth coll (dec i)))))
   (range (count coll))))

For example, we can apply this function to the triangular-numbers:

(= (take 4 (iterate (partial antireduce -)
                    [0 1 3 6 10 15 21 28 36 45]))

   '((0 1 3 6 10 15 21 28 36 45)
     (0 1 2 3 4 5 6 7 8 9)
     (0 1 1 1 1 1 1 1 1 1)
     (0 1 0 0 0 0 0 0 0 0)))

Alternatively, we can use this on the factorials by using the inverse of the next higher hyper-operation: division.

(= (take 2 (iterate (partial antireduce /)
                    [1 1 2 6 24 120 720 5040 40320 362880]))

   '((1 1 2 6 24 120 720 5040 40320 362880)
     (1 1 2 3 4 5 6 7 8 9)))

The antireduce function itself can be reversed:

(defn reducify
   [op coll]

   (map
    (fn [i]
      (reduce op (map (partial nth coll) (range (inc i)))))
    (range (count coll))))
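For example, reducify with + rebuilds the triangular numbers from their first differences:

(= (reducify + [0 1 2 3 4 5 6 7 8 9])
   '(0 1 3 6 10 15 21 28 36 45))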

This function should allow us to automatically recognise some basic sequences.

Sunday, September 4, 2011

PI is wrong

At the core of any Lisper's understanding is the fact that mathematical notation sucks. We recognise just how painful it is to have to parse expressions using a horrid order-of-operations table like PEMDAS. We reject the use of such syntax in program source code, which is written with the luxury of a keyboard, but we are tolerant of such vices when writing with pencil and paper.

As we have a skeptical outlook of mathematical notation from the get-go, you will be hard-pressed to find one of us that isn't also receptive of new mathematical symbols like Tau:

$$\tau = 2\pi = 8 \sum _{n=0}^{\infty} \frac{(-1)^n}{2n+1}$$

I am one of those people who has constantly failed to make sense of the mind-boggling concept of pi radians, so when I read Michael Hartl's "Tau Manifesto" I was a quick convert.

The reason that Tau makes more sense than Pi in most cases is that Tau is based upon the distance from the center to any point on the circle, the radius, rather than the diameter, which is relatively rarely used.

$$\tau = {C \over r}$$
$$\pi = {C \over D}$$

Tau makes trigonometry much easier, because tau is the period of the sine and cosine functions, and Euler's identity becomes $e^{\tau i} = 1$, which is far more elegant. Even the circular area makes more sense with tau, as it becomes tau times the antiderivative of r:

$$A = \frac{\tau r^2}{2}$$

Regardless of these considerations mathematicians still overuse pi, which is another example of a flaw in our notation systems. However, I would still say that most mathematical theorems are factually correct, despite our flawed attempts at expressing them.

Friday, September 2, 2011

On the Lisp machines

The Lisp machines had hardware support for automating memory management, persistence, and many other tasks we could do with automating today. They achieved this by using a single-address-space tagged architecture.

However, the real advantage of the Lisp machines is their use of Lisp all the way down. You could trace anything all the way down to the device drivers, and because the system was based upon Lisp, you could effectively automate anything.

Thursday, August 25, 2011

Cauchy sequences and intervals

The counting numbers are fundamental to all of computation. Using collections of counting numbers we can construct more complex objects, for example, the rational numbers can be constructed with three counting numbers: a numerator, a denominator, and a sign.

The rational numbers allow us to effectively confront most challenges; however, there are some numbers, known as the irrational numbers, which can never be expressed as rational numbers. Just as you can never write down infinity because there is always a greater number, there is always a better rational approximation for any irrational number. Such values, which can never be written down exactly because there is always another step that brings you closer to the target point, are the convergence values of Cauchy sequences.

For example, one of the first irrational numbers discovered was the square root of two, which can be expressed in terms of the following Cauchy sequence:

$$\sqrt{2} = \sum _{n = 0}^{\infty} {(-1)^{n + 1} \frac{(2n-3)!!}{(2n)!!}}$$

Most other popular irrational numbers like e, pi, or the golden ratio can be expressed in terms of Cauchy sequences:

$$e = \sum _{n=0}^{\infty} {\frac{1}{n!}}$$
$$\pi = 4 \sum _{n=0}^{\infty} {\frac{(-1)^n}{2n+1}}$$
$$\varphi = [1; \overline{1}]$$

Although we can never acquire any of these values, we can approximate them in terms of intervals. For example, we know pi is in the interval between 3 and 4, and as we use our Cauchy sequence we progressively narrow that range. Once we have constructed intervals, we are going to want to perform some operations on them, leading to interval arithmetic. These same methods of approximation can be used for physical measurements.
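A minimal sketch of interval arithmetic on [lower upper] pairs (the representation and helper names here are my own):

(defn interval-add [[a b] [c d]]
  [(+ a c) (+ b d)])

(defn interval-mul [[a b] [c d]]
  (let [products [(* a c) (* a d) (* b c) (* b d)]]
    [(apply min products) (apply max products)]))

;; (interval-add [3 4] [2 3]) ; => [5 7]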

Tuesday, August 23, 2011

Properties of operations

Some binary operations are either commutative or associative, as defined by properties over their domain:

(defn commutative?
  [op]
  
  (∀ [a b]
    (= (op a b) 
       (op b a))))

(defn associative? 
  [op]
  
  (∀ [a b c]
    (= (op (op a b) c) 
       (op a (op b c)))))

An example binary operation is nand:

(defn nand
  [& nums]
  
  (case nums
    [false false] true
    [false true ] true
    [true  false] true
    [true  true ] false))

From its truth table you can immediately see that it is commutative. However, it is not associative. If we extend it to three arguments by nesting to the left, as in (nand (nand a b) c), it yields true for exactly the following triples:

[[false false false] 
 [false true  false] 
 [true  false false] 
 [true  true  false] 
 [true  true  true ]]

Nesting to the right instead yields a different truth table, which implies, for example, that the following two expressions differ:

(not= (nand (nand false false) true)
      (nand false (nand false true)))

Many functions are both associative and commutative, such as addition and multiplication. If both of these properties apply then the function's arguments are orderless.

If the function's arguments are orderless then they can be thought of as a multiset. Multiple instances of the same member can be replaced using a higher hyper-operation:

(= (+ x y z x x y)
   (+ (* x 3)
      (* y 2)
      (* z 1)))

Other properties of binary operations are the identity element and the inverse element. If an operation has all these properties over a set, then it forms a group. If it is also commutative it is an abelian group.

Thursday, August 18, 2011

Latex DSL

As far as I am able to understand, Latex commands take the form:

\command[option1,option2]{arg1}{arg2}

This can easily be translated into Lisp:

(command [option1 option2] arg1 arg2)
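A minimal sketch of such a translator (the function name is my own, and it ignores nesting and escaping):

(defn to-latex
  "Convert (command [opt1 opt2] arg1 arg2) into \\command[opt1,opt2]{arg1}{arg2}."
  [[command options & args]]
  (str "\\" (name command)
       (when (seq options)
         (str "[" (apply str (interpose "," (map name options))) "]"))
       (apply str (map #(str "{" % "}") args))))

;; (to-latex '(command [option1 option2] arg1 arg2))
;; => "\\command[option1,option2]{arg1}{arg2}"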

That way I can use Lisp to produce Latex.

Tuesday, August 16, 2011

JavaScript DSL

I have created a DSL that directly translates to JavaScript.

(function factorial [n]
  (return
    (ternary-operator (== n 0)
      1
      (* n (factorial (- n 1))))))

The above code directly translates into the JavaScript factorial function. Along with prxml, this allows me to create entire web pages entirely in Lisp:

[:html
  [:head
    [:title "Welcome"]]
  [:body
    [:script {:type "text/javascript"}
      [:raw! (to-js '(alert "Hello World"))]]]]

I look forward to being able to use this to develop lots of webpages.

Monday, August 15, 2011

Numeral systems

Here is a function that can be used to evaluate any numeral:

(defn evaluate-numeral
  [numerals radix]

  (long
   (apply +
          (map-indexed
           (fn [i v]
             (* v (Math/pow radix (- (dec (count numerals)) i))))
           numerals))))

(= (evaluate-numeral '(1 0 1 0 1) 2)
   21)

You can also evaluate the amount of digits needed for a number in any numeral system:

(defn digits-in-number
  [num radix]

  (long (inc (Math/floor (/ (Math/log num) (Math/log radix))))))

(= (digits-in-number 255 2)
   8)

Basic predicate functions

After discussing sets and predicates in my previous post I have come up with some basic predicate functions:

(def every-true? (partial every? true?))
(def some-true? (comp not nil? (partial some #{true})))

(defn union
  [& predicates]

  (fn [& obj]
    (some-true? (map (fn [predicate] (apply predicate obj)) predicates))))

(defn intersection
  [& predicates]

  (fn [& obj]
    (every-true? (map (fn [predicate] (apply predicate obj)) predicates))))

(defn cartesian-product
  [& predicates]

  (fn [& obj]
    (and
     (= (count obj) (count predicates))
     (every-true?
      (map
       (fn [i]
         (apply (nth predicates i) (nth obj i)))
       (range 0 (count obj)))))))
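For example, treating union as logical or and intersection as logical and over the predicates (a quick sketch):

(= ((union even? (fn [n] (<= 0 n 10))) 7) true)
(= ((intersection even? (fn [n] (<= 0 n 10))) 12) false)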

Saturday, August 13, 2011

Sets and predicates

Set theory doesn't make any distinction between sets defined extensionally (sets) or sets defined intensionally (predicates). Here are some sets:

#{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
#{0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20}

Here are the same things as predicates:

(fn [n] (<= 0 n 10))
(fn [n] (and (even? n) (<= 0 n 20)))

Some means must be developed to deal with these two different presentations of classes.

Thursday, August 11, 2011

Mathematical universe hypothesis

Not only am I a mathematical realist, I go one step further by stating that mathematical structures are all that exist (mathematical monism). This is epitomised by Max Tegmark's mathematical universe hypothesis (MUH).


For me, this belief arose from constantly thinking in terms of immutability, which eventually led me to think of spacetime as an immutable object with time as another dimension.

Additionally, we keep discovering mathematical regularities in spacetime, so it would seem that spacetime isn't just some randomly configured object - but rather a mathematical structure. Max Tegmark has described this really elegantly.

This philosophy has gotten me interested in physics again; however, if I do get into physics it would have to be timeless physics, just as when I do programming it has to be declarative programming. Mathematics will definitely continue to be my primary area of study because it is intrinsically declarative.

Counting number representations

We can define the counting numbers based upon successive calls to the inc function:

(defn expand-number*
  [n]

  (if (zero? n)
    0
    (cons 'inc (list (expand-number* (dec n))))))

(defmacro expand-number
  [n]

  (expand-number* n))

We can also define them in terms of the unary numeral system:

(defmacro tally-marks
  [n]

  (cons '+ (map (constantly 1) (range 0 n))))

Here is how these functions work:

(= (macroexpand-1 '(expand-number 5))
   '(inc (inc (inc (inc (inc 0))))))

(= (macroexpand-1 '(tally-marks 5))
   '(+ 1 1 1 1 1))

Saturday, July 2, 2011

Functional geometry

When I started using Clojure I could only find imperative graphics manipulation systems. I eventually realised that functional geometry is the Lispy way of dealing with graphics so I created a functional geometry library:

(polygon [100 100] [[0 0] [0 100] [100 100] [100 0] [0 0]])

Each function creates a picture data-type. The procedure display! can be used to display any picture on a graphics instance:

(display! (select-region g [[0 0] [100 100]]) picture)

Monday, June 27, 2011

Case insensitivity

My first programming experience was with VB.NET in Visual Studio. The IDE would automatically correct case for me, so I was saved from worrying about case until much later in my programming experience.

I am glad that the IDE did this for me. Case is akin to syntax highlighting: it is a way of presenting code graphically, so it should be controlled by the IDE. Here are more reasons for this:

1. English is case insensitive.
2. It can be cumbersome to click shift.
3. There is no good way of presenting case through sound.
4. There are numerous contradictory case conventions, for example Java uses lower camel case for methods while C# uses upper camel case. Clearly case is a personal preference, so it should just be decided by the IDE.

If there is any central tenet of Lisp it is that data and presentation should be separate. This also applies to case as it is just a way of presenting letters graphically.

Common Lisp is case insensitive by default; however, Clojure isn't. Clojure chooses to have case sensitivity, not necessarily because it is a good thing, but rather because it is absolutely necessary for interoperating with the modern C-based computing world.

Tuesday, June 21, 2011

Nested upto

A simple upto function might look like this:

(defn upto
  "Get the numbers from 1 to n."
  [n]

  (range 1 (inc n)))

This can be extended to a nested upto function:

(defn nested-upto
  "Compute the numbers upto n nested by k."
  [n k]

  (cond
   (= k 0) 1
   (= k 1) n   
   :else (map #(nested-upto % (dec k)) (range 1 (inc n)))))

Here is upto 5 at different levels of nesting to show what I mean:

(1 2 3 4 5)

((1) (1 2) (1 2 3) (1 2 3 4) (1 2 3 4 5))

(((1))
 ((1) (1 2))
 ((1) (1 2) (1 2 3))
 ((1) (1 2) (1 2 3) (1 2 3 4))
 ((1) (1 2) (1 2 3) (1 2 3 4) (1 2 3 4 5)))

The multichoose function I talked about previously can now be implemented using nested ranges:

(defn multichoose
  [n k]
  (apply + (flatten (list (nested-upto n k)))))
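A couple of quick checks of this definition:

(= (multichoose 5 2) 15)
(= (multichoose 5 3) 35)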

This demonstrates the relationship between nested ranges and the multichoose function.

Saturday, June 18, 2011

List processing functions continued

I am going to implement all of the array methods from the JavaScript prelude. First of all, here are the mutator methods:

(swap! coll rest)                 ; shift
(swap! coll butlast)              ; pop
(swap! coll concat [1])           ; push
(swap! coll (partial concat [1])) ; unshift

(swap! coll reverse) 
(swap! coll (partial sort <))
The only one that really requires special attention is splice.
(defn splice
  [coll start-index how-many & insertions]

  (let [start-coll (map coll (range 0 start-index))
        end-coll (map coll (range (+ start-index how-many) (count coll)))]
    (concat start-coll insertions end-coll)))
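For example, assuming coll is a vector, so that it can be called as a function of its indices:
(= (splice [1 2 3 4 5] 1 2 :a :b)
   '(1 :a :b 4 5))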
The accessor method concat is already implemented in Clojure, and here are the rest of the methods:
(defn join
  [coll separator]

  (cond
   (empty? coll) ""
   (= (count coll) 1) (str (first coll))
   :else (str (first coll) separator (join (rest coll) separator))))

(defn to-string
  [coll]

  (join coll ","))

(defn to-source
  [coll]
  
  (str "[" (join coll ", ") "]"))

(defn slice
  [coll start end]

  (map coll (range start end)))

(defn index-of
  [coll elt]

  (cond
   (nil? ((set coll) elt)) -1
   (= (first coll) elt) 0
   :else (inc (index-of (rest coll) elt))))

(defn last-index-of
  [coll elt]

  (- (dec (count coll)) (index-of (reverse coll) elt)))

The methods filter, map, some, every?, and reduce are already implemented in Clojure. The only thing we don't already have is for-each. This can be implemented as a macro that uses the loop primitive.
(defmacro for-each
  [coll func]

  (let [i (gensym)]
    `(loop [~i 0]
       (when-not (= ~i (count ~coll))
         (do
           (~func (nth ~coll ~i))
           (recur (inc ~i)))))))
This macro is actually quite useful; for example, we can use it to print out the elements of a list:
(for-each [1 2 3 4 5] prn)

Wednesday, June 15, 2011

List processing functions push, pop, shift, unshift

In Clojure most functions are purely functional, and side effects occur by applying pure functions to objects. So instead of the shift and pop functions, we have rest and butlast:

(swap! coll rest)
(swap! coll butlast)

We can also take a slice of the collection by mapping it, as a function of its indices, over a range:

(swap! coll map (range 0 2))

Adding new elements to the collection, as with unshift and push, is a bit more complicated. You can use conj to add new elements to either end depending on whether it is a list or a vector; however, that is not necessarily the same thing as push and unshift, so instead we should use concat:

(swap! coll concat [1 2 3])

Unshift is the hardest one to implement:

(defn unshift 
  "Add the elements of the first collections behind the later ones." 
  [& colls]

  (apply concat (reverse colls)))

(swap! coll unshift [1 2 3])

Now we can do these familiar array operations in Clojure. I think this shows that the application of pure functions - which is the fundamental basis of Clojure's effect system - is a legitimate and powerful model of computation.

Due to the advantages of purity, we should limit our impure functions to a minimum. What few impure functions we do have should be distinguished with an exclamation mark, like the swap! function.

Saturday, June 11, 2011

Implementing get-in

This is an implementation of matrix point access:

(defn curried-apply*
  "Apply args to the curried function func."
  [func args]

  (if (= (count args) 1)
    (func (first args))
    (curried-apply* (func (first args)) (rest args))))

(defn decurrify
  "Take a curried function and return an uncurried function."
  [func]

  (fn [& args]
    (curried-apply* func args)))

(defn get-in*
  "Get the value of the matrix at a point."
  [matrix point]

  (apply (decurrify matrix) point))
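For example, since Clojure vectors can be called as functions of their indices, a nested vector already behaves like a curried matrix:

(def matrix [[1 2] [3 4]])

(= (get-in* matrix [1 0]) 3)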

This really demonstrates the power of functional programming. The function to decurrify a matrix can also be used to get the value at a point, since a matrix is actually just a curried function.

Friday, June 10, 2011

Programming Clojure

Now I am going through the Programming Clojure book. It provides many useful insights about the language.

Tuesday, June 7, 2011

Dvorak

For years I was a qwerty typist, and I used Algol-based languages such as C, Perl, and JavaScript, which benefit from the position of the punctuation keys on qwerty. Now, I have discovered a superior keyboard layout: dvorak.


The most important letters are placed on the home row and all of the other letters are placed in accessible positions. This makes typing English much easier, which is especially helpful when typing lisp, as lisp is essentially english words structured with parenthesis.

Dvorak moves the dash symbol to the home row, which is a big boost to Lispers, as we use the dash as our word separator, whereas most programming languages only use it for subtraction.

The semicolon is moved off of the home row to the inaccessible position below the left pinky, and the brackets are also moved further away. This is fine for Lispers; the parentheses are what matter anyway.

Monday, May 30, 2011

The flare programming language

One of the earliest contributions of our friends at the Singularity Institute for Artificial Intelligence (SIAI) was the flare programming language. When I first saw this language, I was quite impressed. It has strong support for annotations and metadata and some other relatively impressive features.

However, as an educated Lisper I now know better. Flare uses XML rather than sexps, and its purported advantages are derived from this. This is explained in detail on the site for the language:

LISP, of course, is the traditional king of self-modifying languages, because LISP uses the same representation for program code and program data. Flare also uses the same representation for code and data, except that the common representation is extensible tree structures (XML) rather than lists. The difference is a major one; in a list structure, an object's role is determined by where it is. In Flare, an object's role is determined by its name and its metadata. Extensible tree structures are thus less breakable; in LISP lists, meaning is often conveyed by position, and inserting new elements can change the position of other elements in a list. Adding another Flare subelement, by contrast, does not at all change the behavior or appearance of the other elements.

In Lisp you cannot insert elements into a list without disturbing the positions of the other elements. However, you can instead insert a special kind of node that does not affect the list's order. Let's call this sort of node an attribute and express it as (:name value).

(html
  (head
    (title "Main Page"))
  (body
    (a (:href "home.html") "Welcome")))

And now, with this new construct, you have XML in Lisp. Actually what you have is better than XML, because attributes in XML are almost entirely unstructured and they instantiate the attribute/element false dichotomy.

The other alleged advantage of flare is its use of metadata, however, XML leads to a distorted view and improper usage of metadata.

If you want to develop a programming language, don't base it upon XML when Lisp is a much better alternative. Even the developers of flare knew that XML is painful, which is why they created FlareSpeak to be used instead. However, FlareSpeak offers nothing over existing languages such as Python; it is just another camel-case-using, infix-ridden, heteroiconic programming language.

Ultimately the flare programming language was abandoned and the SIAI went on to address more important problems like AI morality. For further reading, Erik Naggum provided some interesting insights on XML and flare:

http://genius.cat-v.org/erik-naggum/sgml-attributes-metadata
http://www.xach.com/naggum/articles/3206985430398054@naggum.net.html

Thursday, May 26, 2011

Solving 4Clojure Problems

I have been solving problems on the 4clojure website. In order to solve the problems on this site I used clojure.walk/macroexpand-all and I defined all my functions as macros in order to get past the def problem.

(require 'clojure.walk)

(clojure.walk/macroexpand-all (quote (func)))

This seems to work quite well. Another option is to use let to define everything.

Wednesday, May 25, 2011

Emacs user interface

In my opinion Emacs represents the state of the art in user interface design. When Emacs was invented in the 70s, most computers used text-based user interfaces. Later, graphical WIMP interfaces were developed as a "successor" to textual user interfaces; however, I don't believe they are a true successor, as they are deeply flawed and in many cases actually reduce my productivity. Zooming user interfaces (ZUIs) are the true successor.

The main problem with traditional GUIs is that they are full of pop-ups, such as alerts, prompts, confirmation dialogs, menus, and tooltips. These things cause more problems than they solve. Dialogs break the user's focus, and menus are hard to navigate and have to be squeezed into the screen when you open them near the corners.

Pop-ups are also a security problem, as they essentially mean that a program is drawing outside of the space secured for it. This is why most modern web browsers come with pop-up blockers. To move into the web age we should completely abolish pop-ups.

To disable pop-ups in emacs add this to your .emacs file:

(setq use-dialog-box nil)
(menu-bar-mode -1)

ZUIs solve this problem by placing content in place rather than hiding it in a pop-up, and they make more efficient use of the user's spatial memory, which is why I think they are the true successor to textual user interfaces. Unfortunately, there is no practical ZUI available right now.

The problem we have to address now is creating a synergy between textual user interfaces like emacs and the ZUI paradigm; then we will have the humane interface that Jef Raskin envisioned decades ago.

Wednesday, May 18, 2011

Embedding XML trees in Lisp

Like Lisp expressions, XML documents are essentially just trees. However, since traditional XML notation is verbose and inefficient, it makes sense to embed XML in Lisp:
[:html
 [:head
  [:title "Main Page"]]
 [:body
  [:p "Welcome"]]]

This new notation also allows us to integrate Lisp with many XML-based data formats such as HTML and SVG.
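A minimal sketch of serialising this notation back to XML (my own simplified helper, ignoring attributes and escaping):

(defn tree->xml
  "Serialise [:tag child ...] vectors into an XML string."
  [node]
  (if (vector? node)
    (let [[tag & children] node]
      (str "<" (name tag) ">"
           (apply str (map tree->xml children))
           "</" (name tag) ">"))
    (str node)))

;; (tree->xml [:p "Welcome"]) ; => "<p>Welcome</p>"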

Sunday, May 15, 2011

Converting finite functions into discrete maps

Maps are the extensional analogue of functions. For example, here is the nand function expressed as a map:
(defn unapplify
  [func]

  (fn [& args]
    (func args)))

(def nand
  (unapplify
   '{(false false) true
     (false true)  true
     (true  false) true
     (true  true)  false}))
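A quick check that the map behaves like the function it encodes:
(= (nand false false) true)
(= (nand true true) false)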

Now if a function is finite and we have a complete description of its domain, then we can turn it into a map:
(defn function-to-map
  [func domain]

  (zipmap
   domain
   (map (partial apply func) domain)))

Now we have the machinery necessary to prove De Morgan's Laws:

(def boolean-pairs
 '((false false)
   (false true )
   (true  false)
   (true  true )))

(= (function-to-map
     (fn [a b] 
       (not (and a b)))
     boolean-pairs)
  
   (function-to-map
     (fn [a b]
       (or (not a) (not b)))
     boolean-pairs))

(= (function-to-map
     (fn [a b] 
       (not (or a b)))
     boolean-pairs)
  
   (function-to-map
     (fn [a b]
       (and (not a) (not b)))
     boolean-pairs))

This method works for all finite functions. On the other hand, infinite functions require more sophisticated mathematical machinery.

Friday, May 13, 2011

Factorials and triangular numbers

These two functions are:

$$T_n = \sum_{i=1}^n {i}$$
$$n! = \prod_{i=1}^n {i}$$

There are also raising and falling versions of the factorial:

$$x^{\overline{n}} = \prod_{i=0}^{n-1} {(x+i)}$$
$$x^{\underline{n}} = \prod_{i=0}^{n-1} {(x-i)}$$

Here is the corresponding code:

(def triangular-number 
  (let [sum (partial apply +)]
    (comp sum range inc)))

(def factorial
  (let [product (partial apply *)
        upto (comp (partial map inc) range)]
    (comp product upto)))

(defn rising-factorial
  [x n]
  (apply * (map (partial + x) (range 0 n))))

(defn falling-factorial 
  [x n]
  (apply * (map (partial - x) (range 0 n))))

(defn choose
  [n k]
  (/ (falling-factorial n k)
     (factorial k)))

(defn multichoose 
  [n k]
  (/ (rising-factorial n k)
     (factorial k)))

The multichoose function is incredibly important because all polytopic numbers, including triangular numbers and tetrahedral numbers, can be expressed with it.

(def triangular-number
  #(multichoose % 2))

(def tetrahedral-number
  #(multichoose % 3))

(def pentatope-number
  #(multichoose % 4))
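A few quick checks of these definitions:

(= (triangular-number 4) 10)
(= (tetrahedral-number 3) 10)
(= (choose 5 2) 10)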

This is the relationship between factorials and triangular numbers. Triangular numbers can be expressed in terms of factorials probably because the higher hyper-operation, multiplication, encodes more information than addition.

Sunday, May 8, 2011

Properties of zero

In mathematics zero is considered to be unsigned and therefore neither positive nor negative. However, some floating point representations have signed zero.

(= (pos? 0) false)
(= (neg? 0) false)

Zero is also the first counting number, a convention embraced by Lisp. Earlier programming languages used one as the first counting number.

(= (nth ["a" "b" "c"] 0) "a")

Perhaps the most interesting property of zero is zero to the power of zero. My hyper-operator implementation in my last blog post leads me to the empty product:

(= (hyper-operation 3 0 0)
   (apply (partial hyper-operation 2) (repeat 0 0))
   (hyper-operation 2)
   (*))

The JVM seems to agree with me on this:

(= (Math/pow 0 0) 1)

However, some people consider zero to the power of zero to be an indeterminate form; I think it is better to define it as one rather than raising a disruptive error. It also just so happens that the value of one is necessary for various formulas to work, most notably the series expansion for Math/exp.

Saturday, May 7, 2011

Hyper-operations

Hyper-operations are an infinite sequence of arithmetic operations, which may be implemented on lists of unsigned integers with the following algorithm:

(defmulti hyper-operation
  (fn [n & nums] (zero? n)))

(defmethod hyper-operation true
  [n & nums] (+ (first nums) (count nums)))

(defmethod hyper-operation false
  ([n] (if (<= n 1) 0 1))
  ([n a] a)
  ([n a b]
    (if (and (= n 1) (zero? b)) 
      a
      (apply hyper-operation (dec n) (repeat b a))))
  ([n a b & args]
    (reduce (partial hyper-operation n) (hyper-operation n a b) args)))
The first five hyper-operations have names of their own, which can be defined by partial application:
(def zeration (partial hyper-operation 0))
(def addition (partial hyper-operation 1))
(def multiplication (partial hyper-operation 2))
(def exponentiation (partial hyper-operation 3))
(def tetration (partial hyper-operation 4))
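A few quick checks of the definitions above:

(= (addition 2 3) 5)
(= (multiplication 4 3) 12)
(= (hyper-operation 3 2 10) 1024)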

Friday, May 6, 2011

Open Computer Science Problems

I found this new problems site. The first problem is simple:

(apply + 
  (map
    (fn [i]
      (Math/pow 3 (dec i)))
    (range 1 16)))

The second problem isn't so simple. I implemented the procedure the site described, however, it doesn't work because the range is way too large:

(let [n (fn [i]
          (count (filter #(= \1 %) (Integer/toString i 2))))]
  (map n (range 1 (inc (Math/pow 2 50)))))

Instead I solved this problem using mathematical simplification, which led me to:

(inc (* 50 (Math/pow 2 49)))

Then I got to the third problem. Apparently they want me to parse some expression! I am a Lisp programmer for a reason: I don't like to parse. So instead I made a macro based upon this problem:

(defmacro ternary-expression 
  [if-true expr if-false]
  `(if ~expr
     ~if-true
     ~if-false))