Interface Theory: A paradigm for interface design
December 17th, 2017
Manifest Statement: Computational Programming as a discipline deserves theoretical spaces for the design of interface models, which themselves are the foundations for navigating, translating, and solving computational problems.
I assert that as toolmakers—as designers of tools—problem solving starts with interfaces. As with any language, we first learn to read, then to write; we then express and manipulate a language's interfaces to solve its problems. A poor or badly chosen interface hinders our capacity to construct solutions to the problems we set out to solve.
An interface is a space of names to which we navigate. We express our problem as an initial location, and determine a path to a terminal location as our solution. In practice, our computational space is a data structure. This is significant as our paradigm is best served by modularizing structure from data, which is possible since it is known in general that structure and content are orthogonal.
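To make the orthogonality of structure and content concrete, here is a minimal sketch; the names `tree` and `fmap_tree` are my own illustrative choices, not part of the theory. The same tree structure can hold any content, which is what lets us modularize the two.

```python
# A minimal sketch of structure/content orthogonality: the same tree
# structure (shape) can hold any content, so the two modularize cleanly.

def tree(value, children=()):
    """A node is a (content, structure) pair: a value plus child subtrees."""
    return (value, tuple(children))

def fmap_tree(f, node):
    """Transform content at every location while leaving structure intact."""
    value, children = node
    return (f(value), tuple(fmap_tree(f, c) for c in children))

t = tree(1, [tree(2), tree(3, [tree(4)])])
doubled = fmap_tree(lambda x: x * 2, t)
# The shape is unchanged; only the content differs.
```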
Interface Template
A general and potent template for cleanly designing any interface is as follows:
- Constructively define the structure being modeled, starting with the irreducible names.
- Define every possible substructure; not only are these navigable spaces themselves, they are also names.
- Find all possible algebras of names; frequently these are monoidal spaces.
- Compress the names within each algebra; these are the interface layers.
- Determine how each interface layer relates to every other interface layer.
This template as an algorithm suffers from a computational asymmetry: Using it to find a solution to an interface design tends to be computationally expensive, while verifying such a solution tends to be computationally inexpensive.
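As a hedged illustration of the template and its asymmetry, consider finite strings over an alphabet: characters are the irreducible names, substrings are the substructures, and concatenation is a monoidal algebra over them. Enumerating the design space is the expensive half; verifying the algebra's laws is the cheap half. The function name below is my own.

```python
# Applying the template to a tiny structure: finite strings.
# Characters are the irreducible names; substrings are the substructures;
# concatenation is the monoidal algebra of names.

def substructures(s):
    """Every contiguous substructure of s; each is itself a navigable name."""
    subs = {""}  # the empty string is the identity name
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            subs.add(s[i:j])
    return subs

names = substructures("abc")  # finding names: the expensive direction

# Verifying the monoid laws on these names: the cheap direction.
assert all(n + "" == n == "" + n for n in names)   # identity
a, b, c = "a", "b", "c"
assert (a + b) + c == a + (b + c)                  # associativity
```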
I declare then that this theoretical paradigm is primarily applicable as a diagnostic: Any heuristic design can be tested against this template to see if it satisfies it. With committed practice, this template can then be used to shape one's habits of design, allowing a good designer to build an inventory of best practices within this genre of interface model.
Orders of Design
The secondary application of this theoretical paradigm is in deducing best practices, known here as orders of design. This separates such practices from induced best practices—ones learned from inductive experience with (as yet) no known deductive mechanism.
A guiding principle of our orders of design (alluded to above) I now state as follows: Literacy is mastered first by reading, then by writing. In the context of an interface, this translates to first designing the navigational operators, then designing the mutative operators (assuming they exist, or there is sufficient privilege).
With this guiding principle comes our first order of design:
Batching
Structure and content are orthogonal in specification, but in implementation, when one is concerned with various efficiency optimizations, they are not. A functional map (fmap) is often used to refactor the process of applying the same operator to the content of several locations of a structure: fmap, then, as a generic operator, intersects both structure and content. This is relevant as there is a special situation where several fmaps operate over the same locations.
I submit that when optimizing for process efficiency, it is best to refactor separate fmaps by the navigational path of the shared locations being acted upon, as doing so reduces parse cycles. This is known as batch mapping, and it implies that every navigational operator, which in theory is independent of content mutation, should in practice allow for the batching of mutative operations. Let me restate this for clarity: Design for navigation, then add batch mapping, acknowledging it as a deviation from design.
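The following minimal sketch (with illustrative names, taking lists as the structure) contrasts two separate fmaps, which cost two parse cycles, with a batch map that composes the mutations into a single traversal of the shared locations.

```python
# Batch mapping: refactor separate fmaps over the same locations
# into one traversal, reducing parse cycles.

def fmap(f, xs):
    """One parse cycle, one operator."""
    return [f(x) for x in xs]

def batch_fmap(fs, xs):
    """One parse cycle, applying each operator in fs in order."""
    def composed(x):
        for f in fs:
            x = f(x)
        return x
    return [composed(x) for x in xs]

data = [1, 2, 3]
inc = lambda x: x + 1
dbl = lambda x: x * 2

separate = fmap(dbl, fmap(inc, data))   # two parse cycles
batched = batch_fmap([inc, dbl], data)  # one parse cycle, same result
assert separate == batched == [4, 6, 8]
```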
Algebra
We often seek monoidal binary operators which allow us to locally navigate existing objects within the interface layer. Such operators allow us to relocate ourselves by knowing our current location as well as a path to the point of relocation.
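As a sketch, assume locations in a structure are named by paths—here, tuples of child indices, an illustrative encoding of my own. Paths then form a monoid under concatenation: a current location plus a relative path yields the relocation.

```python
# Monoidal navigation: paths (tuples of child indices) compose under
# concatenation, with the empty path as identity.

def navigate(location, path):
    """Relocate by extending the current location with a relative path."""
    return location + path

here = (0, 2)               # current location
step = (1,)                 # path to the point of relocation
there = navigate(here, step)
assert there == (0, 2, 1)

# The monoid laws for this navigational operator:
p, q, r = (0,), (1, 1), (2,)
assert (p + q) + r == p + (q + r)   # associativity
assert p + () == () + p == p        # () is the identity path
```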
When looking to determine an algebra for a given interface layer, a common practical heuristic is to look at the lifecycle ecology of the names as objects. The focus is not on the names themselves, but to use them to reveal their lifecycle operators. This might not always expose an underlying algebra directly, but it often reduces the search to a limited handful of operators for which to abstract a common factor. If nothing else, it also leaves the designer with an intuitive operator basis to add as a convenient library module for implementation.
The above stated heuristic is partitioned into the following categorical stages and their questions:
- Birth: What names can be constructed into the interface layer?
- Growth: Given an existing name, how can it be extended structurally?
- Change: Given an existing name, how can its structure mutate while otherwise remaining invariant?
- Decline: Given an existing name, how can it be reduced structurally?
- Death: What names can be destroyed from the interface layer?
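The stages above can be sketched against a familiar interface layer—a stack—where each stage of the lifecycle ecology reveals an operator. The class and method names here are illustrative choices, not part of the theory.

```python
# The lifecycle ecology of a stack: each stage exposes an operator,
# leaving an intuitive operator basis for the interface layer.

class Stack:
    def __init__(self):            # Birth: construct a name into the layer
        self.items = []
    def push(self, x):             # Growth: extend the structure
        self.items.append(x)
    def replace_top(self, x):      # Change: mutate while otherwise invariant
        self.items[-1] = x
    def pop(self):                 # Decline: reduce the structure
        return self.items.pop()
    def clear(self):               # Death: destroy the structure's content
        self.items = []

s = Stack()
s.push(1); s.push(2)
s.replace_top(3)
assert s.pop() == 3
s.clear()
assert s.items == []
```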
Arithmetic
A given ecology or algebra allows for navigation to all possible names within an interface layer. Although we seek to specify certain operators, the ultimate focus remains on the objects being navigated, with the goal of knowing we can access or reach them all. Another operator-related order of design, though, is to instead find an arithmetic of navigational operators, which allows us to construct the operators within our ecologies and our algebras.
Finding arithmetics reveals the bare minimum we'd need to equip an operator library with. In particular, we seek the constructive grammatical elements that allow us to build any other grammatical element.
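As a hedged sketch of such an arithmetic, assume fold (reduce) is taken as the constructive grammatical element; map and filter—and many other navigational operators—can then be built from it. The `built_` names are my own.

```python
# An arithmetic of operators: fold as the constructive grammatical
# element from which other operators are built.

from functools import reduce

def fold(f, init, xs):
    return reduce(f, xs, init)

def built_map(f, xs):
    """map, constructed from fold."""
    return fold(lambda acc, x: acc + [f(x)], [], xs)

def built_filter(p, xs):
    """filter, constructed from fold."""
    return fold(lambda acc, x: acc + [x] if p(x) else acc, [], xs)

assert built_map(lambda x: x * x, [1, 2, 3]) == [1, 4, 9]
assert built_filter(lambda x: x % 2 == 0, [1, 2, 3, 4]) == [2, 4]
```

Equipping an operator library with such a basis reveals the bare minimum it needs: everything else is derivable.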
Transitivity
A chain of transitive interface layers is a sequence of layers, where each subsequent layer is not only an interface to the original, it is an interface to all layers in the sequence that came before it. Discovering such sequences in practice allows you to optimize by inlining. The idea is that any indirect interface to the original may be translated into a direct interface. Optimization occurs during translation as a change of face often allows for reduction of what is otherwise unnecessary overhead.
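A minimal sketch of such a chain, with illustrative names: each layer wraps the one before it, and because the chain is transitive, the outermost indirect interface may be inlined into a direct interface to the original, removing the intermediate overhead.

```python
# A transitive chain of interface layers: LayerC interfaces LayerB,
# which interfaces Original, so LayerC transitively interfaces Original.

class Original:
    def read(self, key):
        return {"a": 1, "b": 2}[key]

class LayerB:                       # an interface to Original
    def __init__(self, inner): self.inner = inner
    def read(self, key): return self.inner.read(key)

class LayerC:                       # an interface to LayerB, hence to Original
    def __init__(self, inner): self.inner = inner
    def read(self, key): return self.inner.read(key)

chained = LayerC(LayerB(Original()))

def inline(layer):
    """Translate an indirect interface into a direct one."""
    while hasattr(layer, "inner"):
        layer = layer.inner
    return layer

direct = inline(chained)
assert isinstance(direct, Original)
assert direct.read("a") == chained.read("a") == 1
```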
Security
Interface security is a major concern of modern digital technologies. Any interface we heuristically design must also be tested against chosen security models. Here I only (briefly) consider how poor interface design can lead to insecure systems.
Within industry, there exist many languages with grammar designed for optimization of code (as a means of reducing cost-in-use), but not designed for security of code. As these languages are used to build larger interfaces, and as these interfaces tend to evolve organically, many bugs, or rather opportunities for exploitation, are introduced along the way as well.
Adhering to more powerful models of interface design, such as functional programming equipped with type theory, offers a means of mitigating the complexity of security. As significant as this form of denotational semantics is, it rests on systems which prevent optimization beyond a certain level. There are still many industries which require a higher level of optimization, and as such we still need supplementary strategies for secure design. I claim the template offered here is one such strategy.
Higher optimization occurs in relation to higher entropy of concurrency or side effects. That is to say, parallel programming, as well as state-based functions with side effects, each allow for the flexibility needed in higher orders of optimization. The chains of an interface within the above template act as a bridge between these higher and lower entropies, allowing for both optimization and security in an ideal way.
Designing lower level layers with weaker, more flexible specifications—greater internal access for those with privilege—allows us to optimize our interfaces. Using higher level layers with stronger, less flexible specifications—lesser internal access for those without privilege—allows us to secure our interfaces. A given implementation would create the necessary transition from optimization to security, allowing for the best of both.
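This transition can be sketched as follows; all names are illustrative assumptions, not a definitive implementation. A privileged low-level layer offers an unchecked, optimized write, while the unprivileged high-level layer above it validates before delegating.

```python
# The optimization/security transition between layers: a weakly
# specified privileged layer beneath a strongly specified public one.

class LowLevelBuffer:
    """Privileged layer: no bounds checks, maximal flexibility."""
    def __init__(self, size):
        self.cells = [0] * size
    def write_unchecked(self, index, value):
        self.cells[index] = value          # optimized: trusts the caller

class SecureBuffer:
    """Unprivileged layer: stronger spec, lesser internal access."""
    def __init__(self, size):
        self._inner = LowLevelBuffer(size)
    def write(self, index, value):
        if not 0 <= index < len(self._inner.cells):
            raise IndexError("write outside buffer bounds")
        self._inner.write_unchecked(index, value)

buf = SecureBuffer(4)
buf.write(2, 9)
assert buf._inner.cells == [0, 0, 9, 0]
try:
    buf.write(7, 1)
except IndexError:
    pass  # the secure layer rejects what the privileged layer would allow
```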