I recently watched a talk by Bob Martin in which he says that we have "basically discovered all the programming paradigms". He believes this because, in his view, programming paradigms are just restrictions on what programmers can do. I disagree: my view is that a good programming language is the minimal addition needed on top of a model of computation. Let me explain what I mean.
According to Wikipedia, a model of computation is "a model which describes how an output of a mathematical function is computed given an input". The page lists many such models, including Turing machines, the lambda calculus, combinatory logic, rewriting systems, and the actor model. As you can see, there are many models of computation to choose from.
What do I mean by "minimal addition"? A model of computation only describes mathematical functions, which are pure, but we would like our programs to interact with the outside world. This means some abilities must be added to a model of computation to allow that interaction. The gold standard here is Haskell, which models effects monadically. In a good language, effects should be expressed through some dedicated language construct, such as Haskell's monads, though I expect the right construct to vary from one model of computation to another.
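As a small illustration of this separation, here is a sketch in Haskell of a pure core wrapped in a thin effectful layer (the function names `double` and `greetAndDouble` are mine, invented for the example):

```haskell
-- A pure function: a plain mapping from input to output, with no effects.
double :: Int -> Int
double x = 2 * x

-- Effects are confined to values of type IO a. The monad is the
-- construct Haskell adds on top of its core calculus so that pure
-- functions can be combined with interaction.
greetAndDouble :: Int -> IO Int
greetAndDouble x = do
  putStrLn ("doubling " ++ show x)  -- effect: write to the outside world
  return (double x)                 -- pure core, lifted into IO

main :: IO ()
main = greetAndDouble 21 >>= print
```

The pure part stays a plain mathematical function and can be tested and reasoned about on its own; only the boundary touches the world.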
I will show how the best-designed languages today are simply the minimal additions to already good models of computation.
| Model of Computation | Language |
|---|---|
| Untyped Lambda Calculus | Lisp |
| Typed Lambda Calculus | Haskell |
| Predicate Logic | Prolog |
| Rewriting System | Wolfram |
| Actor Model | Smalltalk |
| Interaction Nets | Kind2 |
| Combinatory Logic | APL |
Hopefully you see that all the languages in the right column are widely admired for being well-designed. And what unifies them? They are all strictly based on a model of computation.
Let us take Lisp as an example. Lisp is a minimal wrapper on top of the untyped lambda calculus: it adds no features beyond what is needed to unleash the inherent power of that calculus. This, to me, is the main difference between Smalltalk and Java. Both descend from the same message-passing, actor-style view of objects, but Smalltalk stays faithful to its model of computation, while Java confusedly borrows concepts from other languages.
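The sense in which the lambda calculus already carries all the needed power can be seen in Church encodings, where numbers and arithmetic are nothing but lambda terms. Here is a sketch in Haskell (the names `Church`, `suc`, `add`, and `toInt` are my own, not standard library functions):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral applies a function s to a base value z some number
-- of times: the number *is* the lambda term.
type Church = forall a. (a -> a) -> a -> a

zero, one :: Church
zero = \_ z -> z
one  = \s z -> s z

-- Successor: apply s one more time.
suc :: Church -> Church
suc n = \s z -> s (n s z)

-- Addition: apply s "m times" on top of "n times".
add :: Church -> Church -> Church
add m n = \s z -> m s (n s z)

-- Convert back to an ordinary Int for inspection.
toInt :: Church -> Int
toInt n = n (+ 1) 0
```

Nothing beyond abstraction and application is used; the language merely exposes what the calculus already provides.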
Due to Turing completeness, all of these models are equally expressive, so expressiveness alone cannot decide between them. It remains to choose the model that best fits our needs.
Given all these models of computation, which should we choose as the basis for our languages? I think it comes down to two criteria:

1. Simplicity: the model should be easy to state and to explain.
2. Efficiency: the model should map well onto the hardware we actually have.
For the first point, the model should be very simple. The SK calculus is excellent in this regard; it may be the simplest model of computation there is. Turing machines, although the most well-known model of computation, are actually quite complex and hard to explain.
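To show just how small the SK calculus is, here is a complete evaluator for it, sketched in Haskell (the names `Term`, `step`, and `nf` are mine; the two rewrite rules are the standard ones, K x y → x and S f g x → f x (g x)):

```haskell
-- A term is one of the two combinators, or an application.
data Term = S | K | App Term Term
  deriving (Eq, Show)

-- Perform one leftmost-outermost reduction step, if any applies.
step :: Term -> Maybe Term
step (App (App K x) _)           = Just x                        -- K x y -> x
step (App (App (App S f) g) x)   = Just (App (App f x) (App g x)) -- S f g x -> f x (g x)
step (App f x) = case step f of
  Just f' -> Just (App f' x)
  Nothing -> App f <$> step x
step _ = Nothing

-- Reduce to normal form by iterating step.
nf :: Term -> Term
nf t = maybe t nf (step t)
```

For example, `S K K` behaves as the identity combinator: applying it to any term reduces back to that term. The whole model fits in two rewrite rules.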
For the second point, I mean that the model has properties that make it easy to implement and fast to run on our hardware. One very promising model is interaction nets, because they have these properties:

- Reduction is local: each rewrite involves only the two nodes that interact.
- Reduction is strongly confluent: redexes can be fired in any order without changing the result.
- As a consequence, independent redexes can be reduced in parallel.
Together, these properties make a very fast implementation possible, so I encourage you to try writing one.
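The payoff of confluence can be demonstrated with a toy rewriting system. The following Haskell sketch is not an interaction-net implementation; it is a minimal stand-in (names `Expr`, `stepL`, `stepR`, `normalize` are invented for the example) showing that when redexes are local and the system is confluent, the reduction order does not matter:

```haskell
-- A tree of additions; a redex is an Add node whose children are literals.
data Expr = Lit Int | Add Expr Expr
  deriving (Eq, Show)

-- Fire one redex, preferring the left subtree.
stepL :: Expr -> Maybe Expr
stepL (Add (Lit a) (Lit b)) = Just (Lit (a + b))
stepL (Add l r) = case stepL l of
  Just l' -> Just (Add l' r)
  Nothing -> Add l <$> stepL r
stepL (Lit _) = Nothing

-- Fire one redex, preferring the right subtree.
stepR :: Expr -> Maybe Expr
stepR (Add (Lit a) (Lit b)) = Just (Lit (a + b))
stepR (Add l r) = case stepR r of
  Just r' -> Just (Add l r')
  Nothing -> (`Add` r) <$> stepR l
stepR (Lit _) = Nothing

-- Iterate any step strategy to a normal form.
normalize :: (Expr -> Maybe Expr) -> Expr -> Expr
normalize step e = maybe e (normalize step) (step e)
```

Both strategies reach the same normal form, which is exactly the property that lets a real interaction-net runtime hand independent redexes to different threads.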
If you want to design a language, here is my advice. First, choose a model of computation that has not yet been well explored. For example, I would not design a language based on the lambda calculus, because that space has already been perfected by Lisp and Haskell, and I would not design a language based on predicate logic, because Prolog already does a good job there. I would pick a model that has good properties (it can be executed quickly on hardware) and has not yet been claimed by another language. Right now, I think linear logic lacks an industrial-strength programming language of its own. If you think you can design such a language, I encourage you to go for it.
In short, a programming language is nothing but an implementation of a model of computation.