Under construction ....

Notes on Chapter 1 of Sebesta's Concepts of Programming Languages (9th ed.)

Chapter Outline

  1. Reasons for Studying Concepts of Programming Languages
  2. Programming Domains
  3. Language Evaluation Criteria
  4. Influences on Language Design
  5. Language Categories
  6. Language Design Trade-Offs
  7. Implementation Methods
  8. Programming Environments

1.1 Reasons for Studying Concepts of Programming Languages

1.2 Programming Domains

Computers have been used to solve problems in a wide variety of application areas, or domains. Many programming languages were designed with a particular domain in mind.

1.3 Language Evaluation Criteria

Aside from examining the concepts that underlie the various constructs/features of programming languages, Sebesta also aims to evaluate those features with respect to how they affect the software development process, including maintenance.

So he sets forth a few evaluation criteria (namely readability, writability, reliability, and cost) and several characteristics of programming languages that should be considered when evaluating a language with respect to those criteria.

See Table 1.1 on page 8. Then, for each of the criteria, Sebesta discusses how each of the characteristics relates to it.

1.3.1 Readability: This refers to the ease with which programs (in the language under consideration) can be understood. This is especially important for software maintenance.

One can write a hard-to-understand program in any language, of course (e.g., by using non-descriptive variable/subprogram names, by failing to format code according to accepted conventions, by omitting comments, etc.), but a language's characteristics can make it easier, or more difficult, to write easy-to-read programs.

1.3.2 Writability: This is a measure of how easily a language can be used to develop programs for a chosen problem domain.

1.3.3 Reliability: This is the property of performing to specifications under all conditions.

1.3.4 Cost: The following contribute to the cost of using a particular language:

Other criteria (not deserving separate sections in the textbook):

Portability: the ease with which programs that work on one platform can be modified to work on another. This is strongly influenced by the degree to which the language is standardized.

Generality: Applicability to a wide range of applications.

Well-definedness: Completeness and precision of the language's official definition.

The criteria listed here are neither precisely defined nor exactly measurable, but they are, nevertheless, useful in that they provide valuable insight when evaluating a language.

1.6 Language Design Trade-Offs

Not surprisingly, a language feature that makes a language score higher on one criterion may make it score lower on another. Examples:

1.4 Influences on Language Design

1.4.1 Computer Architecture: By 1950, the basic architecture of digital computers had been established (and described nicely in John von Neumann's EDVAC report). A computer's machine language is a reflection of its architecture, with its assembly language adding a thin layer of abstraction to ease the task of programming. When FORTRAN was being designed in the mid-to-late 1950s, one of the prime goals was for the compiler to generate code that was as fast as the equivalent assembly code that a programmer would produce "by hand". To achieve this goal, the designers, not surprisingly, simply put a layer of abstraction on top of assembly language, so that the resulting language still closely reflected the structure and operation of the underlying machine. A language that deviated greatly from that architecture would have made the compiler more difficult to develop and less likely to produce fast-running machine code.

The style of programming exemplified by FORTRAN is referred to as imperative, because a program is basically a sequence of commands. (Recall that, in English, a command is referred to as an "imperative" sentence, as opposed to, say, a question, which is an "interrogative" sentence.)

This style of programming has dominated for the last fifty years! Granted, many refinements have occurred. In particular, OO languages put much more emphasis on designing a program based upon the data involved and less on the commands/processing. But the notion of having variables (corresponding to memory locations) and changing their values via assignment commands is still prominent.

Functional languages (in which the primary means of computing is to apply functions to arguments) have much to recommend them, but they've never gained wide popularity, in part because they tend to run slowly on machines with a von Neumann architecture. (The granddaddy of functional languages is Lisp, developed in about 1958 by McCarthy at MIT.)
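To make the contrast concrete, here is a sketch in Python (not itself a functional language, but expressive enough for the illustration): the imperative version computes a sum by repeatedly assigning to a variable, while the functional version only applies functions to arguments.

```python
from functools import reduce

# Imperative style: a sequence of commands that update memory
# (the variable 'total') one word at a time.
def sum_imperative(numbers):
    total = 0
    for n in numbers:
        total = total + n   # assignment: Backus's "von Neumann bottleneck"
    return total

# Functional style: the result is obtained purely by applying a
# function to arguments; no variable is ever reassigned.
def sum_functional(numbers):
    return reduce(lambda acc, n: acc + n, numbers, 0)

print(sum_imperative([1, 2, 3, 4]))  # 10
print(sum_functional([1, 2, 3, 4]))  # 10
```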

The same could be said for Prolog, the most prominent language in the logic programming paradigm.

Interestingly, as long ago as 1977 (specifically, in his Turing Award Lecture, with the corresponding paper appearing in the August 1978 issue of Communications of the ACM), John Backus (famous for leading the team that designed and implemented FORTRAN) harshly criticized imperative languages, asking "Can Programming Be Liberated from the von Neumann Style?" He set forth the idea of an FP (functional programming) system, which he viewed as embodying a superior style of programming. He also challenged the field to develop an architecture well-suited to this style of programming.

Here is an interesting passage from the article:

Conventional programming languages are basically high level, complex versions of the von Neumann computer. Our thirty year old belief that there is only one kind of computer is the basis of our belief that there is only one kind of programming language, the conventional —von Neumann— language. The differences between Fortran and Algol 68, although considerable, are less significant than the fact that both are based on the programming style of the von Neumann computer. ...

Von Neumann programming languages use variables to imitate the computer's storage cells; control statements elaborate its jump and test instructions; and assignment statements imitate its fetching, storing, and arithmetic. The assignment statement is the von Neumann bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer's bottleneck does.

1.4.2 Programming Method(ologie)s:   Advances in methods of programming also have influenced language design, of course. Refinements in thinking about flow of control led to better language constructs for selection (i.e., if statements) and loops that force the programmer to be disciplined in the use of jumps/branching (by hiding them). This is called structured programming.

An increased emphasis on data (as compared to process) led to better language support for data abstraction. This continued to the point where now the notions of abstract data type and module have been fused into the concept of a class in object-oriented programming.

1.5 Language Categories

The four categories usually recognized are imperative, object-oriented, functional, and logic. Sebesta seems to doubt that OO is deserving of a separate category, because one need not add all that much to an imperative language, for example, to make it support the OO style. (Indeed, C++, Java, and Ada 95 are all quite imperative.) (And even functional and logic languages have had OO constructs added to them.)

1.7 Implementation Methods

Computers execute machine code. Hence, to run code written in any other language, first that code has to be translated into machine code. Software that does this is called a translator. If you have a translator that allows you to execute programs written in language X, then, in effect, you have a virtual X machine. (See Figure 1.2.)

There are three general translation methods: compilation, interpretation, and a hybrid of the two.

1.7.1 Compilation: Here, a compiler translates each compilation unit (e.g., class, module, or file, depending upon the programming language) into an object module containing object code. Object code is like machine code, except that two kinds of references have not yet been put into machine code form: external references (i.e., references to entities in other modules) and relative references (i.e., references expressed as an offset from the location of the module itself). Also, for the purpose of making subsequent steps in the translation possible, an object module contains tables listing the entities that the module defines (together with their locations) and the external entities that it references.

A linker is responsible for linking together the object modules that make up a program; that is, it uses the tables in each object module to "resolve" all the external references. The linker's output is a load module, which is a "relocatable" machine code program, i.e., one in which the only unresolved references are the relative references. When the time comes to execute the program, a relocating loader puts the code into the appointed area in memory, at the same time replacing all relative references by actual memory addresses.
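As a toy sketch (not how any real linker works internally), the bookkeeping can be modeled in a few lines of Python; the module names, table format, and fixed module size are all made up for illustration:

```python
# Hypothetical object modules: each lists the symbols it defines
# (with offsets relative to the module) and the symbols it uses.
modules = {
    "main.o": {"defines": {"main": 0}, "uses": ["helper"]},
    "util.o": {"defines": {"helper": 0}, "uses": []},
}

def link(modules):
    # Lay the modules out one after another, recording where each
    # definition lands (still relative to the start of the load module).
    symbol_table, base = {}, 0
    for name, mod in modules.items():
        for sym, offset in mod["defines"].items():
            symbol_table[sym] = base + offset
        base += 100  # pretend every module is 100 words long
    # Resolve every external reference against the combined table.
    for name, mod in modules.items():
        for sym in mod["uses"]:
            if sym not in symbol_table:
                raise NameError(f"unresolved external reference: {sym}")
    return symbol_table

print(link(modules))  # {'main': 0, 'helper': 100}
```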

See Figure 1.3 for a depiction of the various phases that occur in compilation. The first two phases, lexical and syntax analysis, are covered in Chapter 4. The job of a lexical analyzer, or scanner, is to transform the text comprising a program unit (e.g., class, module, file) into a sequence of tokens corresponding to the logical units occurring in the program. (For example, the substring while is recognized as being one unit, as is each occurrence of an identifier, each operator symbol, etc.) The job of the syntax analyzer is to take the sequence of tokens yielded by the scanner and to "figure out" the program's structure, i.e., how those tokens relate to each other.

To draw an analogy with analyzing sentences in English, lexical analysis identifies the words (and possibly their parts of speech) and punctuation, which the syntax analyzer uses to determine the boundaries between sentences and to form a diagram of each sentence. Example sentence: The gorn killed Kirk with a big boulder.

                   S        V        D.O.
                 gorn  |  killed  |  Kirk
                  \               \
                   The (adj)       with (prep.)
                                     \
                                     boulder
                                     /     \
                                    a       big
1.7.2 Pure Interpretation: Let X be a programming language. An X interpreter is a program that simulates a computer whose "native language" is X. That is, the interpreter repeatedly fetches the "next" instruction (from the X program being interpreted), decodes it, and executes it. A computer is itself an interpreter of its own machine language, except that it is implemented in hardware rather than software.
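The fetch-decode-execute cycle can be sketched in Python for a made-up three-instruction stack language:

```python
# A toy interpreter for a hypothetical language with three instructions:
# ("PUSH", n), ("ADD",), and ("PRINT",). Real interpreters follow the
# same fetch-decode-execute loop, just with far more instruction kinds.
def interpret(program):
    stack, pc = [], 0
    while pc < len(program):
        instruction = program[pc]      # fetch the "next" instruction
        op = instruction[0]            # decode it
        if op == "PUSH":               # ... and execute it
            stack.append(instruction[1])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack[-1])
        pc += 1
    return stack

interpret([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PRINT",)])  # prints 5
```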

1.7.3 Hybrid: Here, a program is translated (by the same means as a compiler) not into machine code but rather into some intermediate language, typically one that is at a level of abstraction strictly between language X and machine code. Then the resulting intermediate code is interpreted. This is the usual way that Java programs are processed, with the intermediate language being Java bytecode (as found in .class files) and the Java Virtual Machine (JVM) acting as the interpreter.
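Python happens to use the same hybrid scheme, and its standard library exposes both halves: compile() translates source into bytecode, which the CPython virtual machine then interprets (and dis shows the intermediate instructions, much as javap -c does for Java bytecode):

```python
import dis

source = "total = 6 * 7"

# Step 1 (compile): translate source into intermediate code (bytecode).
code = compile(source, "<example>", "exec")

# Step 2 (interpret): the CPython virtual machine executes the bytecode.
namespace = {}
exec(code, namespace)
print(namespace["total"])  # 42

# Show the intermediate instructions, analogous to Java bytecode.
dis.dis(code)
```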

Alternatively, the intermediate code produced by the compiler can itself be compiled into machine code and saved for later use. In a Just-in-Time (JIT) scenario, this latter compilation step is done piecemeal: each program unit is compiled the first time it is needed during execution. (Subsequent uses of that unit directly access its machine code rather than re-translating it.)

1.8 Programming Environments

A programming environment is a collection of tools that aid in the program development process.