Language influences and limits one's ability to express (and even formulate) ideas, because people tend to "think in a language". Many CS 1 students, for example, have difficulties because they don't yet know the programming language well enough to know what it can do.
By knowing about the various abstraction mechanisms available in various languages (e.g., recursion, objects, associative arrays, functions as "first-class" entities, etc.), a programmer can more easily solve problems, even if programming in a language lacking an abstraction relevant to the solution.
"If all you know is how to use a hammer, every problem looks like a nail."
All general-purpose programming languages are equivalent (i.e., Turing universal) in terms of capability, but, depending upon the application, one language may be better suited than another.
Examples: COBOL was designed with business applications in mind, FORTRAN for scientific applications, C for systems programming, SNOBOL for string processing.
Given how frequently new programming languages rise in popularity, the ability to pick up new languages quickly is an important skill.
E.g., FORTRAN was designed to be fast; the IBM 704 had three index registers, so arrays were limited to be no more than 3-dimensional.
E.g., Why are there separate types for integers and reals?
E.g., Why have "associative arrays" become common as built-in constructs only in recently-introduced languages? Does it have to do with complexity of implementation?
By learning about programming language constructs in general, you may come to understand (and thus begin making use of) features/constructs in your "favorite" language that you may have not used before.
Here, Sebesta argues that, if programmers (in general) had greater knowledge of programming language concepts, the software industry would do a better job of adopting languages based upon their merits rather than upon political and other forces. (E.g., Algol 60 never made large inroads in the U.S., despite being superior to FORTRAN. Eiffel is not particularly popular, despite being a great language!)
So he sets forth a few evaluation criteria (namely readability, writability, reliability, and cost) and several characteristics of programming languages that should be considered when evaluating a language with respect to those criteria.
See Table 1.1 on page 8. Then, for each of the criteria, Sebesta discusses how each of the characteristics relates to it.
1.3.1 Readability: This refers to the ease with which programs (in the language under consideration) can be understood. This is especially important for software maintenance.
One can write a hard-to-understand program in any language, of course (e.g., by using non-descriptive variable/subprogram names, by failing to format code according to accepted conventions, by omitting comments, etc.), but a language's characteristics can make it easier, or more difficult, to write easy-to-read programs.
If a language has very few basic constructs (e.g., assembly language), code can be hard to read because what may be a single operation, conceptually, could require several instructions to encode it.
In the context of a programming language, a set of features/constructs is said to be orthogonal if those features can be used freely in combination with each other. In particular, the degree of orthogonality is lessened if certain combinations of constructs are disallowed, or if a construct behaves differently depending upon the context in which it appears.
Examples of non-orthogonality in C:
Example from assembly languages: In VAX assembler, the instruction for 32-bit integer addition is of the form
ADDL3 op1, op2, op3
where each of op1, op2, and op3 can refer to either a register or a memory location. This is nicely orthogonal.
In contrast, in the assembly languages for IBM mainframes, there are two separate analogous ADD instructions, one of which requires op1 to refer to a register and op2 to refer to a memory location, the other of which requires both to refer to registers. This is lacking in orthogonality.
Too much orthogonality? As with almost everything, one can go too far. Algol 68 was designed to be very orthogonal, and turned out to be too much so, perhaps. As B.T. Denvir wrote (see page 18 in "On Orthogonality in Programming Languages", ACM SIGPLAN Notices, July 1979, accessible via the ACM Digital Library):
Intuition leads one to ascribe certain advantages to orthogonality: the reduction in the number of special rules or exceptions to rules should make a language easier "to describe, to learn, and to implement" — in the words of the Algol 68 report. On the other hand, strict application of the orthogonality principle may lead to constructs which are conceptually obscure when a rule is applied to a context in an unusual combination. Likewise the application of orthogonality may extend the power and generality of a language beyond that required for its purpose, and thus may require increased conceptual ability on the part of those who need to learn and use it.

As an example of Algol 68's extreme orthogonality, it allows the left-hand side of an assignment statement to be any expression that evaluates to an address!
Adequate facilities for defining data types and structures aid readability. E.g., early FORTRAN had no record/struct construct, so the "fields" of an "object" could not be encapsulated within a single structure (that could be referred to by one name).
Primitive/intrinsic data types should be adequate, too. E.g., early versions of C had no boolean type, forcing the programmer to use an int to represent true/false (0 is false, everything else is true, so flag = 1; would be used to set flag to true). How about this statement fragment:
1.3.2 Writability: This refers to the ease with which programs can be written in the language. One relevant characteristic is support for abstraction, in two flavors: data abstraction and process (or procedural) abstraction.
Typically, assembly/machine languages lack expressivity in that each operation does something relatively simple, which is why a single instruction in a high-level language could translate into several instructions in assembly language.
Functional languages tend to be very expressive, in part because functions are "first-class" entities. In Lisp, you can even construct a function and execute it!
1.3.3 Reliability: This is the property of performing to specifications under all conditions.
In Java, for example, type checking during compilation is so tight that just about the only type errors that will occur during run-time result from explicit type casting by the programmer (e.g., when casting a reference to an object of class A into one of subclass B in a situation where that is not warranted) or from an input being of the wrong type/form.
As an example of a lack of reliability, consider earlier versions of C, in which the compiler made no attempt to ensure that the arguments being passed to a function were of the right types! (Of course, this is a useful trick to play in some cases.)
1.3.4 Cost: The following contribute to the cost of using a particular language:
Other criteria (not deserving separate sections in textbook):
Portability: the ease with which programs that work on one platform can be modified to work on another. This is strongly influenced by the degree to which a language is standardized.
Generality: Applicability to a wide range of applications.
Well-definedness: Completeness and precision of the language's official definition.
The criteria listed here are neither precisely defined nor exactly measurable, but they are, nevertheless, useful in that they provide valuable insight when evaluating a language.
1.4.1 Computer Architecture: By 1950, the basic architecture of digital computers had been established (and described nicely in John von Neumann's EDVAC report). A computer's machine language is a reflection of its architecture, with its assembly language adding a thin layer of abstraction for the purpose of making easier the task of programming. When FORTRAN was being designed in the mid to late 1950's, one of the prime goals was for the compiler to generate code that was as fast as the equivalent assembly code that a programmer would produce "by hand". To achieve this goal, the designers —not surprisingly— simply put a layer of abstraction on top of assembly language, so that the resulting language still closely reflected the structure and operation of the underlying machine. To have designed a language that deviated greatly from that would have been to make the compiler more difficult to develop and less likely to produce fast-running machine code.
The style of programming exemplified by FORTRAN is referred to as imperative, because a program is basically a bunch of commands. (Recall that, in English, a command is referred to as an "imperative" statement, as opposed to, say, a question, which is an "interrogative" statement.)
This style of programming has dominated for the last fifty years! Granted, many refinements have occurred. In particular, OO languages put much more emphasis on designing a program based upon the data involved and less on the commands/processing. But the notion of having variables (corresponding to memory locations) and changing their values via assignment commands is still prominent.
Functional languages (in which the primary means of computing is to apply functions to arguments) have much to recommend them, but they've never gained wide popularity, in part because they tend to run slowly on machines with a von Neumann architecture. (The granddaddy of functional languages is Lisp, developed in about 1958 by McCarthy at MIT.)
The same could be said for Prolog, the most prominent language in the logic programming paradigm.
Interestingly, as long ago as 1977 (specifically, in his Turing Award Lecture, with the corresponding paper appearing in the August 1978 issue of Communications of the ACM), John Backus (famous for leading the team who designed and implemented FORTRAN) harshly criticized imperative languages, asking "Can Programming be Liberated from the von Neumann Style?" He set forth the idea of an FP (functional programming) system, which he viewed as being a superior style of programming. He also challenged the field to develop an architecture well-suited to this style of programming.
Here is an interesting passage from the article:
Conventional programming languages are basically high level, complex versions of the von Neumann computer. Our thirty year old belief that there is only one kind of computer is the basis of our belief that there is only one kind of programming language, the conventional —von Neumann— language. The differences between Fortran and Algol 68, although considerable, are less significant than the fact that both are based on the programming style of the von Neumann computer. ...

Von Neumann programming languages use variables to imitate the computer's storage cells; control statements elaborate its jump and test instructions; and assignment statements imitate its fetching, storing, and arithmetic. The assignment statement is the von Neumann bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer's bottleneck does.
1.4.2 Programming Method(ologie)s: Advances in methods of programming also have influenced language design, of course. Refinements in thinking about flow of control led to better language constructs for selection (i.e., if statements) and loops that force the programmer to be disciplined in the use of jumps/branching (by hiding them). This is called structured programming.
An increased emphasis on data (as compared to process) led to better language support for data abstraction. This continued to the point where now the notions of abstract data type and module have been fused into the concept of a class in object-oriented programming.
There are three general translation methods: compilation, interpretation, and a hybrid of the two.
1.7.1 Compilation: Here, a compiler translates each compilation unit (e.g., class, module, or file, depending upon the programming language) into an object module containing object code, which is like machine code except that two kinds of references have not yet been put into machine code form: external references (i.e., references to entities in other modules) and relative references (i.e., references expressed as an offset from the location of the module itself). Also —for the purpose of making subsequent steps in the translation possible— an object module contains tables in which are listed the names the module defines (and their locations within it) and the external names it references (and the places where they are used), so that a later linking step can resolve those references.
See Figure 1.3 for a depiction of the various phases that occur in compilation. The first two phases, lexical and syntax analysis, are covered in Chapter 4. The job of a lexical analyzer, or scanner, is to transform the text comprising a program unit (e.g., class, module, file) into a sequence of tokens corresponding to the logical units occurring in the program. (For example, the substring while is recognized as being one unit, as is each occurrence of an identifier, each operator symbol, etc.) The job of the syntax analyzer is to take the sequence of tokens yielded by the scanner and to "figure out" the program's structure, i.e., how those tokens relate to each other.
To draw an analogy with analyzing sentences in English, lexical analysis identifies the words (and possibly their parts of speech) and punctuation, which the syntax analyzer uses to determine the boundaries between sentences and to form a diagram of each sentence. Example sentence: The gorn killed Kirk with a big boulder.
[Sentence diagram, reconstructed from the notes: subject "gorn" (with modifier "The"), verb "killed" (modified by the prepositional phrase "with a big boulder", in which the adjectives "a" and "big" modify "boulder"), direct object "Kirk".]
1.7.2 Pure Interpretation: Let X be a programming language. An X interpreter is a program that simulates a computer whose "native language" is X. That is, the interpreter repeatedly fetches the "next" instruction (from the X program being interpreted), decodes it, and executes it. A computer is itself an interpreter of its own machine language, except that it is implemented in hardware rather than software.
1.7.3 Hybrid: Here, a program is translated (by the same means as a compiler) not into machine code but rather into some intermediate language, typically one that is at a level of abstraction strictly between language X and machine code. Then the resulting intermediate code is interpreted. This is the usual way that Java programs are processed, with the intermediate language being Java bytecode (as found in .class files) and the Java Virtual Machine (JVM) acting as the interpreter.
Alternatively, the intermediate code produced by the compiler can itself be compiled into machine code and saved for later use. In a Just-in-Time (JIT) scenario, this latter compilation step is done on a piecemeal basis on each program unit the first time it is needed during execution. (Subsequent uses of that unit result in directly accessing its machine code rather than re-translating it.)