H.x = { f.x        if b.x
      { H.(g.x)    otherwise (i.e., ¬b.x)

where b, f, and g are functions that can be defined without reference to H. For some types S and T, the signatures of these functions are

b : S → boolean
f : S → T
g : S → S

so that H itself has signature H : S → T.
Note that either or both of the types S and T may themselves be Cartesian products of types (e.g., S = S_{1} × S_{2}), so that the definition of tail recursive function should be understood to apply not only to functions of one argument but also to multi-argument functions. If, for example, we had S = S_{1} × S_{2} and we preferred to view H (as well as f, b, and g) as being two-argument functions, we could write the definition of H as follows:
H.x_{1}.x_{2} = { f.x_{1}.x_{2}                              if b.x_{1}.x_{2}
                { H.(g_{1}.x_{1}.x_{2}).(g_{2}.x_{1}.x_{2})   otherwise (i.e., ¬b.x_{1}.x_{2})

(Here, g_{1} and g_{2} are such that g.x_{1}.x_{2} = (g_{1}.x_{1}.x_{2}, g_{2}.x_{1}.x_{2}).)
The generalization of this to S = S_{1} × S_{2} × ... × S_{k} (k > 2) should be obvious.
What makes these definitions "tail" recursive is that, in the recursive case,
the result is simply an application of the function being defined (with no
other operators having to be applied thereafter). Thus, when evaluating
the function at a given value, the recursive "call" is the very last thing
you do (i.e., the tail). Compare this to, say, the standard (non-tail)
recursive definition of the factorial function:
Fact.n = { 1                if n=0
         { n × Fact.(n-1)   otherwise (i.e., n>0)

In evaluating Fact.k (for any k>0), we would (recursively) evaluate Fact.(k-1) and then multiply the result by k.
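The distinction can be seen in a Python sketch (our own rendering; passing b, f, and g as parameters to a generic H is a framing not used in the text): the tail-recursive H returns its recursive call directly, whereas fact must still multiply after its recursive call returns.

```python
def fact(n):
    """Non-tail recursive: after fact(n - 1) returns, a multiplication remains."""
    if n == 0:
        return 1
    return n * fact(n - 1)

def H(x, b, f, g):
    """Tail recursive: the recursive application is the very last thing done."""
    if b(x):
        return f(x)
    return H(g(x), b, f, g)
```

For instance, with b.x ≡ (x ≥ 10), f the identity, and g.x = x + 1, H.3 evaluates to 10.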
Exploring the function H described above, we find that
   H.x
=    H.(g.x)             (assuming ¬b.x)
=    H.(g.(g.x))         (assuming ¬b.(g.x))
=    H.(g.(g.(g.x)))     (assuming ¬b.(g.(g.x)))
     ...
=    H.(g^{[k]}.x)       (assuming ¬b.(g^{[i]}.x) for all i satisfying 0≤i<k)
=    f.(g^{[k]}.x)       (assuming b.(g^{[k]}.x))

where by g^{[k]}.x we mean g.(g.(....(g.x)....)), in which g occurs k times. (Formally, we define g^{[0]}.x = x and, for j≥0, g^{[j+1]}.x = g.(g^{[j]}.x).) In other words, assuming that there exists some i≥0 for which b.(g^{[i]}.x) holds, we find that

H.x = f.(g^{[K]}.x)

where K is the minimum such i. That is, K = (min i | 0≤i ∧ b.(g^{[i]}.x) : i).
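The quantity K can be computed by iterating g, as in this Python sketch (the function name and the sample b and g are our own illustrative choices):

```python
def min_unfold_count(b, g, x):
    """Compute K = (min i | 0 <= i and b(g^[i](x)) : i).

    Assumes such an i exists; otherwise the loop never terminates."""
    k = 0
    while not b(x):
        x, k = g(x), k + 1
    return k
```

With b.x ≡ (x = 0) and g.x = x div 10, starting from 462 we get K = 3, since 462 → 46 → 4 → 0.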
This suggests the following iterative program for establishing y = H.X:

x, k := X, 0;
{ invariant I : 0 ≤ k ≤ K  ∧  x = g^{[k]}.X }
{ bound t : K - k }
do k ≠ K →
   x, k := g.x, k+1
od;
{ x = g^{[K]}.X, hence b.x ∧ H.x = H.X }
y := f.x
{ y = H.X }
The code above suffers from the fact that it relies upon "knowing" (magically, apparently) the value of K. From the loop invariant (and our definition of K) it follows that the loop guard is equivalent to ¬b.x. By using this as our guard, we remove the program's dependence upon K.
Further observation of the code reveals that, except insofar as the loop invariant refers to it, the variable k is useless. It is worth investigating, then, whether we can restate the invariant so as not to mention k. It turns out that, indeed, we can, by observing that, as x assumes the values

X, g.X, g^{[2]}.X, g^{[3]}.X, ...

on successive iterations of the loop, the property H.x = H.X is preserved. Hence, we can state the loop invariant as

I : H.x = H.X

and we can omit the variable k from the program altogether.

Let us prove that this I is, indeed, a loop invariant. To do so, it suffices to prove that the truth of I is established by the initialization code and that its truth is preserved by an arbitrary iteration of the loop. (These correspond to proof obligations (i) and (ii) in the loop checklist.) That I is established by the initialization is obvious (a proof of {true} x := X {H.x = H.X} is left to the reader); that I is preserved by each loop iteration is proved by showing the Hoare triple

{I ∧ ¬b.x} x := g.x {I}

which is equivalent to

I ∧ ¬b.x ⇒ wp.(x := g.x).I
Here is the proof:
Assume I (i.e., H.x = H.X) and ¬b.x.

   wp.(x := g.x).I
=    < wp assignment law >
   I(x := g.x)
=    < defn of I >
   (H.x = H.X)(x := g.x)
=    < textual substitution >
   H.(g.x) = H.X
=    < assumption ¬b.x; by defn of H, ¬b.x implies H.x = H.(g.x) >
   H.x = H.X
=    < assumption I >
   true
Unfortunately, the bound function still refers not only to k but also to K. Although it is not as elegant as the original, the bound function can be described as

t.x = (min i | 0 ≤ i ∧ b.(g^{[i]}.x) : i)
In other words, t is the number of (additional) times that g must be applied to x in order for the result to satisfy b. Of course, whether or not such a number exists depends upon g, b, and x. In the typical case, x will be an integer, b will be true when x is sufficiently close to zero, and g.x will be a number closer to zero than is x.
The final version of the program is as follows:

x := X;
{ invariant I : H.x = H.X }
do ¬b.x →
   x := g.x
od;
{ H.x = H.X ∧ b.x }
y := f.x
{ y = H.X }
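The final loop translates directly into a Python sketch (our own parameterization by b, f, and g):

```python
def eval_H(b, f, g, X):
    """Iteratively compute H.X: loop until b holds, then apply f."""
    x = X
    while not b(x):   # loop guard is ¬b.x
        x = g(x)      # each iteration preserves H.x = H.X
    return f(x)       # on exit b.x holds, so H.x = f.x
```

For example, with b.x ≡ (x < 10), f the identity, and g.x = x - 10, this computes x mod 10 for nonnegative x.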
As a concrete example of a tail recursive function definition, we offer this:
H.n = { n          if 0≤n≤1
      { H.(n-2)    otherwise (i.e., n>1)
It should not take you long to recognize that this function, when applied to a natural number n, yields zero if n is even and one if n is odd.
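Rendered in Python (the function name parity is our own):

```python
def parity(n):
    """Tail recursive: yields 0 if the natural number n is even, 1 if odd."""
    if 0 <= n <= 1:
        return n
    return parity(n - 2)   # the recursive call's result is returned unchanged
```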
In general, however, few "natural" recursive function definitions (i.e., ones that someone is likely to devise via intuition in formally defining a function) are tail recursive. However, some non-tail recursive function definitions can be transformed to obtain a tail recursive definition of a function having one extra argument and in terms of which the original function can be defined directly.
In particular, such a transformation can be applied to any definition having the following "pseudo-tail recursive" form:

G.x = { f.x              if b.x
      { h.x ⊕ G.(g.x)    otherwise (i.e., ¬b.x)
where ⊕ : T × T → T is an associative operator having an identity element e, where b, f, and g are functions as described before, and where h : S → T.
Note: The right-hand side of the recursive case in the definition of G could have been G.(g.x) ⊕ h.x (even if ⊕ were not commutative/symmetric). This would simply mean that, in what follows, the two operands in every subexpression of the form a ⊕ b should be swapped. End of note.
Examples of pseudo-tail recursive function definitions:
Example 1: the classic factorial function
Fact.n = { 1                if n=0
         { n × Fact.(n-1)   otherwise (i.e., n>0)
Example 2: a function that calculates the sum of the digits in the decimal (base ten) numeral describing a number. (In 462, for instance, the sum of the digits is 4+6+2 = 12.)
digit_sum.n = { 0                               if n=0
              { (n mod 10) + digit_sum.(n/10)   otherwise
Example 3: a function that reports whether or not a specified value (x) occurs among the values in the prefix of a specified length (n) of a specified array (b). That is, it answers the question, Does x occur in b[0..n)?
occurs_in.x.b.n = { false                                   if n=0
                  { (b.(n-1) = x) ∨ occurs_in.x.b.(n-1)     otherwise
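All three examples can be rendered in Python (our own renderings; in occurs_in the array b is modeled as a Python list). In each, note the operator (×, +, or ∨) applied after the recursive call returns, which is exactly what keeps these definitions from being tail recursive:

```python
def fact(n):
    # ⊕ is ×, with identity e = 1 and h.n = n
    return 1 if n == 0 else n * fact(n - 1)

def digit_sum(n):
    # ⊕ is +, with identity e = 0 and h.n = n mod 10
    return 0 if n == 0 else (n % 10) + digit_sum(n // 10)

def occurs_in(x, b, n):
    # ⊕ is ∨ (or), with identity e = false and h = (b.(n-1) = x)
    return False if n == 0 else (b[n - 1] == x) or occurs_in(x, b, n - 1)
```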
We now show that, for any function that can be defined via a pseudo-tail recursive definition, there exists a "more general" function that can be defined via tail recursion.
Let G be the function defined via the pseudo-tail recursive definition template above. Define H as follows:

H.x.y = y ⊕ G.x    (for all x in S and y in T)
Notice that H has one "extra" argument, y. One might call this the "accumulating argument" in that (something close to) the result of the function application "accumulates" in it as we go deeper and deeper into the recursive applications of the function. This will become evident when we do a concrete example.
Consider the two cases b.x and ¬b.x:
Case b.x:

   H.x.y
=    < defn of H >
   y ⊕ G.x
=    < defn of G; assumption b.x >
   y ⊕ f.x

Case ¬b.x:

   H.x.y
=    < defn of H >
   y ⊕ G.x
=    < defn of G; assumption ¬b.x >
   y ⊕ (h.x ⊕ G.(g.x))
=    < associativity of ⊕ >
   (y ⊕ h.x) ⊕ G.(g.x)
=    < defn of H, with x,y := g.x, y ⊕ h.x >
   H.(g.x).(y ⊕ h.x)
What this establishes is that we may characterize H as follows:
H.x.y = { y ⊕ f.x              if b.x
        { H.(g.x).(y ⊕ h.x)    otherwise (i.e., ¬b.x)
But this has the format of a (two-argument) tail recursive function definition. Hence, H is tail recursive!
Taken together with the fact that G.x = e ⊕ G.x = H.x.e (recall that e denotes the identity element of ⊕), we have that G is directly definable in terms of a tail recursive function.
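The whole transformation can be captured generically in Python (a sketch under the stated assumptions: op is associative with identity e, and b, f, g, h are as above; all names are ours):

```python
def make_tail_recursive(b, f, g, h, op, e):
    """From G's pseudo-tail recursive ingredients, build the tail-recursive
    H.x.y = y op G.x and return a function computing G.x = H.x.e."""
    def H(x, y):
        if b(x):
            return op(y, f(x))         # base case: y ⊕ f.x
        return H(g(x), op(y, h(x)))    # tail call: H.(g.x).(y ⊕ h.x)
    return lambda x: H(x, e)           # G.x = H.x.e

# Factorial as an instance: b.n ≡ (n = 0), f.n = 1, g.n = n - 1, h.n = n,
# ⊕ = ×, e = 1.
fact = make_tail_recursive(
    lambda n: n == 0, lambda n: 1, lambda n: n - 1, lambda n: n,
    lambda a, b: a * b, 1)
```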
Let us carry out this transformation on a concrete example. Take the function digit_sum defined above:
digit_sum.n = { 0                               if n=0
              { (n mod 10) + digit_sum.(n/10)   otherwise
In accord with the procedure suggested above, we define

digit_sum'.n.m = m + digit_sum.n

from which we derive (in a manner analogous to our analysis of H above) that
digit_sum'.n.m = { m + 0                                if n=0
                 { digit_sum'.(n/10).(m + (n mod 10))   otherwise

(Of course, we can omit the "+ 0" in the base case.)
Using this characterization of digit_sum' together with the fact that digit_sum.n = digit_sum'.n.0, we carry through a particular application of digit_sum:
   digit_sum.462
=    < digit_sum.n = digit_sum'.n.0 for all n >
   digit_sum'.462.0
=    < defn of digit_sum', recursive case >
   digit_sum'.46.(0+2)
=    < defn of digit_sum', recursive case >
   digit_sum'.4.(0+2+6)
=    < defn of digit_sum', recursive case >
   digit_sum'.0.(0+2+6+4)
=    < defn of digit_sum', base case >
   0+2+6+4
=    < arithmetic >
   12
This illustrates the idea that the "extra" argument that was introduced in transforming pseudo-tail recursive digit_sum into (fully) tail recursive digit_sum' serves to accumulate the result.
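The accumulating version can be written in Python (our own rendering, with digit_sum_prime standing in for digit_sum'):

```python
def digit_sum_prime(n, m):
    """Tail recursive: the extra argument m accumulates the digits seen so far."""
    if n == 0:
        return m        # the "+ 0" has been absorbed into m
    return digit_sum_prime(n // 10, m + n % 10)

def digit_sum(n):
    return digit_sum_prime(n, 0)   # digit_sum.n = digit_sum'.n.0
```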
End of Concrete Example

Let us now return to our generic function G (having a pseudo-tail recursive definition) and the function H (with the (fully) tail recursive definition) satisfying G.x = H.x.e, for all x in G's domain (where e is the identity of ⊕). Suppose that we wish to develop a program that, given input X, establishes z = G.X. Then, because G.X = H.X.e, it suffices to construct a program that establishes the equivalent z = H.X.e. Such a program would be as follows:

x, y := X, e;
{ invariant I : H.x.y = H.X.e }
do ¬b.x →
   x, y := g.x, y ⊕ h.x
od;
{ I ∧ b.x, hence y ⊕ f.x = H.X.e }
z := y ⊕ f.x
{ z = H.X.e, i.e., z = G.X }
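A Python sketch of this generic program (parameter names ours; op and e play the roles of ⊕ and its identity):

```python
def eval_G(b, f, g, h, op, e, X):
    """Establish z = G.X by iterating the tail-recursive H: G.X = H.X.e."""
    x, y = X, e
    while not b(x):
        x, y = g(x), op(y, h(x))   # maintains H.x.y = H.X.e
    return op(y, f(x))             # b.x holds, so H.x.y = y ⊕ f.x
```

Instantiated with the digit_sum ingredients (b.n ≡ (n = 0), f.n = 0, g.n = n div 10, h.n = n mod 10, ⊕ = +, e = 0), this computes digit sums.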
Returning to our concrete example pertaining to the functions digit_sum and digit_sum', the above translates into this program:

n, m := X, 0;
{ invariant I : digit_sum'.n.m = digit_sum'.X.0 }
do n ≠ 0 →
   n, m := n/10, m + (n mod 10)
od;
z := m
{ z = digit_sum.X }
Consider now a function definition having the following, more general, form:

H.x = { f_{0}.x       if b_{0}.x
      { f_{1}.x       if b_{1}.x
      { H.(g_{0}.x)   if c_{0}.x
      { H.(g_{1}.x)   if c_{1}.x

where [b_{0}.x ∨ b_{1}.x ∨ c_{0}.x ∨ c_{1}.x] (i.e., for every x, at least one of b_{0}, b_{1}, c_{0}, or c_{1} holds).
This particular example has exactly two base cases and two recursive cases, but it can be easily generalized to any number of each. Strictly speaking, this does not qualify as a tail recursive definition. However, it can be shown that such a definition can be transformed into an equivalent one that is tail recursive. (Exactly how this is accomplished is beyond the scope of this document.) What is important to us is how to transform a function definition such as this into a program that computes the defined function. Well, here it is:

x := X;
do c_{0}.x → x := g_{0}.x
 [] c_{1}.x → x := g_{1}.x
od;
{ ¬c_{0}.x ∧ ¬c_{1}.x, hence b_{0}.x ∨ b_{1}.x }
if b_{0}.x → y := f_{0}.x
 [] b_{1}.x → y := f_{1}.x
fi
{ y = H.X }
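A Python sketch of such a program (all names ours; the guards and case functions are passed in as parameters, and we assume that for every x at least one of the four guards holds):

```python
def eval_multicase_H(b0, f0, b1, f1, c0, g0, c1, g1, X):
    """Compute H.X for a definition with two base and two recursive cases."""
    x = X
    while c0(x) or c1(x):                 # some recursive case applies
        x = g0(x) if c0(x) else g1(x)
    return f0(x) if b0(x) else f1(x)      # some base case now applies
```

For example, taking b_{0}.x ≡ (x = 0), f_{0}.x = 0, b_{1}.x ≡ (x = 1), f_{1}.x = 1, c_{0}.x ≡ (x ≥ 2), g_{0}.x = x - 2, and c_{1} identically false, this computes the parity function from earlier.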
Copyright Robert McCloskey 2004-2018