There are over 700 programming languages and approximately 10 software development paradigms. How do we choose the programming language we use to implement a project? What are the evaluation criteria we use to decide the fate of our new project?
What are these 700 programming languages? Are they just reinterpretations of the same principle with slight syntactic differences? This is definitely not so. The answer we are looking for lies in the space of software development paradigms. Software development paradigms do not oppose each other; they are rather orthogonal.
We present development from the point of view of WHAT it can solve (the project objective) and HOW it can approach project goals (the way in which the programming language tackles a problem).
The imperative paradigm requires an explicit control flow: each command specifies how the computation advances, step by step, and how each step affects the overall state of the computation.
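As a rough sketch (Python is an arbitrary choice here, and the snippet is purely illustrative), an imperative fragment advances the computation step by step, each statement mutating the explicit state:

    # Imperative style: the program advances step by step,
    # and each statement mutates the explicit state (total, i).
    numbers = [3, 1, 4, 1, 5]

    total = 0
    i = 0
    while i < len(numbers):
        total = total + numbers[i]  # each step updates the overall state
        i = i + 1

    print(total)  # 14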
The functional paradigm is based on mathematical functions, in particular lambda functions. Control flow arises from the composition of functions and the binding of values to working variables. The paradigm's power comes from passing functions to other functions (and returning functions from functions).
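A minimal, hedged sketch of the same idea in a functional style, again in Python and with illustrative names: behaviour is expressed by composing functions and passing them to other functions:

    from functools import reduce

    # Functional style: no explicit loop counter or mutable accumulator;
    # behaviour is expressed by passing functions to other functions.
    numbers = [3, 1, 4, 1, 5]

    double = lambda x: 2 * x                    # a lambda function
    doubled = list(map(double, numbers))        # map takes a function as argument
    total = reduce(lambda acc, x: acc + x, doubled, 0)

    # A function can also return another function (a closure).
    def adder(n):
        return lambda x: x + n

    add_ten = adder(10)
    print(total, add_ten(total))                # 28 38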
In the logic paradigm, developers declare the relations between data, as facts or as rules. For example, the facts can record which superclass each entity belongs to, and a rule can state that class A is the grandparent of class B if A is the parent of an intermediate class which is itself the parent of B.
Unlike in other development paradigms, in logic programming we do not design the solution itself: we write neither functions nor imperative commands. We simply describe rules that define the relations among data entities and then query for the data that satisfies those rules.
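Logic programming is normally written in a dedicated language such as Prolog; the following is only a rough Python sketch of the same idea, with invented names: we state facts (parent relations) and a grandparent rule, then query for the data that satisfies it:

    # Facts: (a, b) means a is the parent of b.
    parents = {("A", "B"), ("B", "C"), ("B", "D")}

    # Rule: a is a grandparent of c if a is the parent of some x
    # and x is the parent of c. We describe the relation and query it.
    def grandparents():
        return {(a, c)
                for (a, x) in parents
                for (x2, c) in parents
                if x == x2}

    print(grandparents())  # the pairs ('A', 'C') and ('A', 'D'), in any order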
The object-oriented paradigm is probably the most commonly used in commercial environments. Here the separation between data and the actions that affect the data disappears: the two are bound together inside objects. This is an artificial, yet useful, unification.
An object has internal data and a set of functions, called methods. It is possible to create several replicas (instances) of an object if necessary; each instance has its own data space, where the object keeps its information private. Some of the object's functions can be invoked from parts of the program that do not belong to the object's definition. The outer environment cannot access the data space directly, since it is private, but it can query it by invoking a function that belongs to the object. Such functions form the object's interface, and invoking one is called message passing.
In addition to data hiding, the OOP paradigm brings other concepts, such as inheritance: a developer can define a new object starting from an already defined one. The new object inherits all the definitions of the existing object and can extend them or add new ones.
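A minimal, hedged sketch of these concepts in Python (the class names are invented for illustration): an object with private data, methods that form its interface, and a second class that inherits from and extends the first:

    class Counter:
        """An object with private data and a public interface."""

        def __init__(self):
            self.__count = 0          # private data space (name-mangled in Python)

        def increment(self):          # part of the object's interface
            self.__count += 1

        def value(self):              # the outside world queries, never touches, the data
            return self.__count


    class StepCounter(Counter):
        """Inherits everything from Counter and extends it."""

        def __init__(self, step):
            super().__init__()
            self.__step = step

        def increment(self):          # extends the inherited behaviour
            for _ in range(self.__step):
                super().increment()


    c = StepCounter(3)
    c.increment()                     # "message passing": invoking a method
    print(c.value())                  # 3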
All these features help represent real problems much more naturally and produce reusable pieces of code, improving the maintenance of large code bases to which tens or hundreds of developers contribute.
Concurrent development is a paradigm that aims to solve a problem by engaging several processors at the same time. Each processor acts independently on its assigned task and solves a part of the problem; afterwards, the partial solutions are combined into the final solution.
The difficulty of this paradigm lies in managing the control flow correctly, as well as the input/output data that feeds each independently running piece of code.
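As a hedged illustration, again in Python, of splitting a problem across several workers and then combining their partial results (the chunking scheme is just one possible choice):

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        # Each worker solves its assigned part of the problem independently.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1, 1001))
        chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

        # Each chunk is handed to a separate worker process.
        with ProcessPoolExecutor() as pool:
            partials = list(pool.map(partial_sum, chunks))

        # The partial solutions are combined into the final solution.
        print(sum(partials))  # 500500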
In event-driven development, the program consists of events and actions tied to those events. When an event occurs, the action attached to it is triggered automatically. Such an action can compute a result, but it can also trigger or affect other events. The developer's role is to design this clockwork mechanism of chain-reacting actions and to implement the actions as small program units (functions).
The architecture must account for all the events that may occur, avoid deadlocks, and be able to handle an event while another action is still in progress.
This paradigm is extensively used in applications that have an elaborate GUI.
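A minimal, hedged sketch of the event/action wiring described above (the dispatcher and event names are invented for illustration; real GUI frameworks provide this machinery):

    # A tiny event dispatcher: actions (functions) are attached to events,
    # and raising an event triggers them automatically.
    handlers = {}

    def on(event, action):
        handlers.setdefault(event, []).append(action)

    def emit(event, *args):
        for action in handlers.get(event, []):
            action(*args)

    # Actions can themselves emit further events (chain reactions).
    on("button_clicked", lambda: emit("status_changed", "saving..."))
    on("status_changed", lambda text: print("status:", text))

    emit("button_clicked")   # prints: status: saving...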
A hyperspace has a set of coordinates (axes), and the axes of our programming hyperspace are the development paradigms themselves. It is not easy to picture, but with a little effort we can imagine it. Inside this hyperspace we can represent each programming language as a cloud, placed according to how strongly the language supports each paradigm: the cloud sits closer to, farther from, or not at all along the axis that represents that paradigm.
To take a simple example, we can choose only three paradigms. This gives a three-dimensional space in which we can assess, say, three low-level programming languages. The shape of the clouds, more elongated or flatter, is not random: it highlights the way in which a language occupies the space between the hyperspace axes.
Moreover, we visualize languages as clouds, not as dots, because:
We deal with fuzzy concepts that cannot be quantified along a single scalar dimension: it is simply not meaningful to say that C is 81% functional (rather than 80% or 82% functional).
Each developer also uses a language in a personal way; that personal mark should be represented inside the cloud, and it should float dynamically.
Is there a programming language that touches all the paradigms? Is there a cloud with high values along all the hyperspace axes? The answer lies in the Minotaur's labyrinth, that is, in how far we are willing to accept the existence of such surreal creatures.
The truth is that the creation of such a super-language would simply stall due to conflicting requirements, and even if these conflicts could be overcome, the language would not be versatile enough, fast enough, portable enough, and so on. This is why current programming languages cover at most two or three paradigms.
All paradigms, all programming languages and all processors are Turing-equivalent: two computers, P and Q, are Turing-equivalent if P can simulate Q and Q can simulate P.
To put it plainly: a man with a shovel and a man with a bulldozer are equivalent, as long as the criterion is whether a large volume of soil can be moved. The key to this issue lies in "efficiency" and "ease": Turing equivalence does not take such concepts into account; it considers only whether the mission can be accomplished.
We must, therefore, choose an efficient, easy-to-use language. Our daily decisions are based on these criteria.
Yet, there are several other factors involved in choosing a programming language and paradigm:
The scope and the technical aspects of the problem (objective factor)
Extensibility: some projects require support for many simultaneous users accessing the database behind a web page.
Adequate tools: some programming languages have native features that make implementing the solution easier.
The culture of the organization building the software (subjective factor)
The organizational policy: every company has its own rules about which hardware and software it uses (some favor Microsoft, some Apple, others open source).
The availability of the resources which are already engaged by the programming language.
Development and maintenance costs: some languages cannot guarantee bug-fix support and future updates.
The constraints that derive from various circumstances:
It is an empirical fact that development is subject to time constraints. Sometimes these constraints are so drastic that you know from the very start you will settle for a suboptimal solution. However, beware of stereotypes.
The development of an experimental product (a prototype, a PoC) and the development of a marketable product require completely different approaches:
A PoC is created to understand the concept and/or the potential of a solution; it is built for you, and you alone, to understand.
Fashionable programming languages come equipped with plugins that target contemporary issues.
There is strong professional interest in creating something better, which also means that solutions appear faster on forums and in the literature (books, videos), etc.
There are large collections of libraries/plugins, most of them free.
In theory, it is easy to replace a developer when needed. In practice, a limited pool of such resources can easily trigger a deadlock, because all the products need the same type of resource.