Backtracking may occur in daily life when a problem has to be solved that offers a number of alternative solutions, which must be examined to select the best one. For example, if the shortest route from one city to another has to be found, often a number of possible ways are available. The same applies to solving a puzzle, or playing chess. In all these cases, first one or more provisional moves are made in a forward direction to explore a possible path; then, after an evaluation of the alternative, backtracking is applied to return to a previous position, often to examine other alternatives.
In most conventional languages the execution of a program can only proceed in a forward direction. In Elisa it is possible to go in two directions: forward and backward. The system may proceed forward but can also retreat backward. The combination of these two provides its great expressive power. In this chapter we will discuss how and when backtracking occurs and what the implications are for the underlying execution model.
But before that, we will discuss the differences between reading a definition declaratively or imperatively.
There are two ways to read a definition: the first is to read it declaratively, the other is to read it imperatively. The difference between a declarative and an imperative reading is the difference between what a given definition means and how the definition is executed.
Consider the following definition:
F (integer, integer) -> integer; F (m, n) = G (m) * H (n);
where G and H are references to other definitions.
A declarative reading of a definition describes what is to be computed rather than how the computation should proceed. A declarative reading of F may be:

    the result of F applied to m and n is the product of G applied to m and H applied to n.
An imperative reading of a definition describes its execution as a number of steps. So, an imperative reading of F may be:

    to compute F for given m and n: first compute G (m), then compute H (n), then multiply the two results; the product is the result of F.
The declarative reading is concerned with what the meaning of a definition is as expressed in terms of other definitions. On the other hand, the imperative reading is focusing on how the computation may proceed and how the steps of a definition are evaluated by the system. Declarative reading concentrates on the static aspects of a definition, while imperative reading is mainly concerned with the dynamic aspects of its execution.
Many definitions can be read declaratively. This is of practical importance because the declarative aspects of a definition are usually easier to understand than the imperative details. In addition, it allows the user of a definition to consider its declarative meaning relatively independently of its imperative meaning and of the context in which the definition is used. Furthermore, a declarative reading often makes it easier to reason about a definition and to draw conclusions about the range of its results.
Unfortunately, however, the declarative approach is not sufficient in all cases. Imperative aspects cannot always be ignored, especially when definitions change or update global or local information, as will be discussed later, or when it is necessary to study execution efficiency. In the following we will show how declarative and imperative readings are related in a number of situations.
In the examples of definitions we used so far, the right-hand sides of the definition rules were simple expressions to express the results. However, in many situations, simple expressions are not adequate to define the desired effects. Often other language constructs are needed for the computation. Let us start with a discussion of some problems of simple expressions.
One of the problems is how to describe common sub-expressions only once, as in:
( 2 * a + b) + square ( 2 * a + b)
If we are using this expression as it is, it is reasonable to expect that, without optimizations, the sub-expression 2 * a + b is computed twice. This can be avoided by using a so-called block, such as:
[ P = 2 * a + b; P + square ( P) ]
In this block 2 * a + b is computed only once and the result is given the name P. As we have seen, the construct P = 2 * a + b is a name definition. P is a local name which is not known outside the block. We say that the scope of P starts at the definition point of P and ends at the closing bracket of the block. The type and quantifier of P are determined by means of type and quantifier propagation. The result of the block is represented by the expression P + square( P).
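In a language with local bindings the same factoring-out can be mimicked. A rough Python analogue of this block (the names block and square below are ours, chosen for illustration, not Elisa constructs):

```python
def square(x):
    # stand-in for Elisa's square: returns x * x
    return x * x

def block(a, b):
    p = 2 * a + b          # name definition: the sub-expression is computed once
    return p + square(p)   # the last element gives the result of the block

print(block(1, 2))  # p = 4, so the block yields 4 + 16 = 20
```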
Let us use another example:
sin (a) * cos (b) + f( cos (b), sin (a))
This expression may be rewritten as:
[ S = sin (a); T = cos (b); S * T + f (T, S) ]
In this block there are two name definitions introduced. The names S and T are both local names which are not visible outside the block.
In general, there are no limitations to the number of name definitions in a block. However, each local name should be unique within a block and should not be in conflict with already existing names.
Blocks may also be nested. For example,
a * b + 2 * ( p * q - square (p * q) + a * b) * c
may be rewritten as:
[ A = a * b; A + 2 * [ B = p * q; B - square (B) + A ] * c ]
As a check, substitution of the local names by their expressions and removal of superfluous brackets should give back the original expression.
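That check can itself be carried out mechanically. A small Python sketch (our analogy; the variable values are arbitrary and square is written out as x * x) comparing the nested-block form with the original expression:

```python
def square(x):
    return x * x

a, b, c, p, q = 2, 3, 4, 5, 6

# nested-block form: each local name is computed once
A = a * b
B = p * q
nested = A + 2 * (B - square(B) + A) * c

# original expression, with the local names substituted back
original = a * b + 2 * (p * q - square(p * q) + a * b) * c

print(nested == original)  # the two forms agree
```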
The last element of a block determines the type of a block. For example, if the last element of a block represents an integer expression then the block will produce, at execution-time, an integer value. But if the last element of a block represents an expression which returns nothing then the block will return nothing. Take as an example:
[ A = 3; B = A + 7; print (A + B) ]
In this example, A and B are both local names valid till the end of the block. The last element of the block determines the outcome of the block. We assume that the print function will return nothing, so our block will return nothing.
Often we will make a distinction between two kinds of blocks: expression blocks and statement blocks.
Expression blocks are blocks whose last element is a value expression. A value expression may produce zero, one, or more data items. Its quantifier may be multi, single, or optional.
Statement blocks are blocks whose last element is a statement. Statements never return values; they always have the nothing quantifier.
The elements of a block are statements, name-definitions, and, only for the last element, expressions. An element of a block may also be another block which means that blocks may be nested. The elements of a block are evaluated from left to right.
In our examples so far, it was always possible to understand the results of a multi-value expression without knowing exactly how those results are obtained. However, if more complex expressions are used, this will not always be so obvious. For that reason this section is devoted to an examination of how stream expressions are evaluated and how their results are determined.
As an example: what happens if we are using an expression such as:
print( (1 .. m) * (1 .. n) )
Let us first try to understand what this means declaratively:
print( (1 .. m) * (1 .. n) ) =
print( (1, ..., n, ....., m, ..., m * n) ) =
print(1); ... print(n); ..... print(m); ... print(m * n);
From the original expression we derived, based on the rules for stream operations as discussed in the preceding chapter, a description of what the results are.
So, by taking a declarative view and by using corresponding transformation rules the results can be deduced without knowing how they are obtained.
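A declarative reading translates naturally into a comprehension. In Python (our analogy, not Elisa syntax) the stream of products for m = 2 and n = 3 can be written down as one value, without tracing any steps:

```python
m, n = 2, 3

# the stream (1 .. m) * (1 .. n), written declaratively as a comprehension
products = [i * j for i in range(1, m + 1) for j in range(1, n + 1)]
print(products)  # -> [1, 2, 3, 2, 4, 6]
```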
Let us now take an imperative approach. For that purpose, we first will transform the print expression into a block:
[ i = 1..m; j = 1..n; print ( i * j) ]
This block is equivalent to the original print statement. In this block, i is the name of a stream of integer values from 1 to m, and j is the name of another stream of integer values ranging from 1 to n.
We can use the block to follow the individual steps in its execution. Suppose m = 2 and n = 3, then we can trace the execution of the block:
[ i = 1..2; j = 1..3; print (i * j) ]
The trace is:
i = 1;  j = 1; print(1);
        << backtrack to j >>
        j = 2; print(2);
        << backtrack to j >>
        j = 3; print(3);
        << backtrack to j >>
        j = << end of j stream >>
<< backtrack to i >>
i = 2;  j = 1; print(2);
        << backtrack to j >>
        j = 2; print(4);
        << backtrack to j >>
        j = 3; print(6);
        << backtrack to j >>
        j = << end of j stream >>
<< backtrack to i >>
i = << end of i stream >>
<< backtrack from block >>
The last line indicates that after the end of the i stream has been detected, the system will backtrack. Backtracking means that execution is resumed at a previous point in the program. The point where execution is resumed depends on the dynamic context of the expression being executed: in other words, it depends on where the expression was activated. For example, the expression may be part of a larger expression, or it may be the right-hand side of a definition rule. In both cases further backtracking may occur, as will be discussed in the following sections. Here it is sufficient to understand how backtracking works within two nested loops.
This example could have been written in Pascal in the following way:
for i := 1 to 2 do
    for j := 1 to 3 do
        print (i * j);
As we see there are two nested loops. Later on, we will show other examples which cannot be written as nested Pascal loops.
Let us now investigate what will happen in case of a multi-value expression as given in the following example:
2 * (1 .. 3)
This is declaratively equivalent to:
( 2, 4, 6 )
Imperatively, we may rewrite the expression as:
[ i = 1 .. 3; 2 * i ]
The trace is:
i = 1; value = 2;
<< backtrack to i >>
i = 2; value = 4;
<< backtrack to i >>
i = 3; value = 6;
<< backtrack to i >>
i = << end of i stream >>
<< backtrack from block >>
The values produced by this imperative approach are in agreement with the declarative results obtained earlier.
The next question is: what will happen if a multi-value expression is part of another expression, which may also be a multi-value expression. To explain the interactions between different multi-value expressions we will use the following example:
2 * (1 .. 3) - 3 * (1 .. 2)
which is equivalent to:
[ i = 1 .. 3; 2 * i ] - [ j = 1 .. 2; 3 * j ]
In the following trace we will use the name T1 to denote the temporary value of the first block and T2 for the temporary value of the second block:
i = 1; T1 = 2;
        j = 1; T2 = 3; T1 - T2 = -1;
        << backtrack to j >>
        j = 2; T2 = 6; T1 - T2 = -4;
        << backtrack to j >>
        j = << end of j stream >>
<< backtrack to i >>
i = 2; T1 = 4;
        j = 1; T2 = 3; T1 - T2 = 1;
        << backtrack to j >>
        j = 2; T2 = 6; T1 - T2 = -2;
        << backtrack to j >>
        j = << end of j stream >>
<< backtrack to i >>
i = 3; T1 = 6;
        j = 1; T2 = 3; T1 - T2 = 3;
        << backtrack to j >>
        j = 2; T2 = 6; T1 - T2 = 0;
        << backtrack to j >>
        j = << end of j stream >>
<< backtrack to i >>
i = << end of i stream >>
<< backtrack from expression >>
This trace shows how the interactions between multi-value expressions can be understood in terms of the interactions between corresponding blocks.
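The same interaction can be mimicked with two nested Python loops (our analogy): re-entering the inner loop corresponds to backtracking to j, re-entering the outer loop to backtracking to i:

```python
values = []
for i in range(1, 4):          # i = 1 .. 3
    t1 = 2 * i                 # temporary value of the first block
    for j in range(1, 3):      # j = 1 .. 2; re-entering models "backtrack to j"
        t2 = 3 * j             # temporary value of the second block
        values.append(t1 - t2)

print(values)  # -> [-1, -4, 1, -2, 3, 0]
```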
The same technique can be applied to more complicated expressions. The reader may, as an exercise, determine the values of:

( 5 - 1 .. 2) * ( (1 .. 3) + 7)

11 .. 13 * ( 51 .. 53, 17) + 3 * 14 .. 12 - 5
In this section we will discuss how and when backtracking occurs in and from definitions and what the implications are for the underlying execution model for definitions.
As we saw earlier, backtracking occurs when the end of a stream is detected. Backtracking means that the execution of the program is resumed at an earlier point in the program. In the case of definitions this means that definition boundaries may be crossed, as we will see in the following example.
Assume the following definition:
F( integer) -> multi ( integer); F( n) = 5 * (1 .. n);
Suppose this function is called in an expression E in the following way:
E = 2 * F( 3);
Now the question is: what are the values of E?
First, we start with a declarative approach using the substitution rules for operating on streams:
E = 2 * F(3);
  = 2 * ( 5 * (1 .. 3));
  = 2 * ( 5 * (1, 2, 3));
  = 2 * ( 5, 10, 15);
  = ( 10, 20, 30);
In this approach the definition call is substituted by the corresponding expression of the definition rule to compute the stream values.
The imperative view is quite different. From previous discussions we know that a multi-value expression may be replaced by an equivalent block. This means that our definition may be rewritten as:
F( integer) -> multi (integer); F( n) = [ I = 1 .. n; 5 * I ];
If we are now calling this function in the expression E as
E = 2 * F ( 3);
we can trace its execution as follows:
        inside F              |  outside F
------------------------------------------------------
I = 1; result F = 5;          |  E = 2 * 5 = 10;
<< backtrack to I in F >>     |
------------------------------------------------------
I = 2; result F = 10;         |  E = 2 * 10 = 20;
<< backtrack to I in F >>     |
------------------------------------------------------
I = 3; result F = 15;         |  E = 2 * 15 = 30;
<< backtrack to I in F >>     |
------------------------------------------------------
I = << end of stream >>       |
<< backtrack from F >>        |  E = << end of stream;
                              |       backtrack from E >>
------------------------------------------------------
The execution is performed line after line. It shows that execution alternates between F and E. As soon as F has a result, it gives it to E. E doubles that value and, in its turn, gives it to its neighbor, which may be a following expression, and so on. After the value of E has been processed, the system backtracks to I for a new value. We see that the information of F is maintained even while execution continues outside F. Only after all values of I have been produced and the end of the stream of I has been detected does backtracking from F occur. Because there is no result of F anymore, the end of the stream of E values also occurs.
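A Python generator gives a close imperative analogue (ours, not Elisa): each yield hands one value of the stream to the caller, and resuming the generator plays the role of backtracking to I in F:

```python
def F(n):
    # multi(integer): a stream of values, produced one at a time
    for i in range(1, n + 1):
        yield 5 * i        # resuming after this yield is the "backtrack to I in F"

E = [2 * v for v in F(3)]  # E consumes each value of F as soon as it is produced
print(E)  # -> [10, 20, 30]
```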
There are some general rules for backtracking from a definition:

- a multi definition may produce zero, one, or more values; after every value it produces, backtracking into the definition is possible to obtain the next value;
- an optional definition may produce zero or one value; if no value is produced, backtracking occurs;
- a single definition produces exactly one value; no backtracking into the definition is possible;
- a nothing definition produces no value; execution continues in a forward direction.
So, we see that only in the case of multi and optional definitions backtracking is possible. In the case of single and nothing only forward execution is defined.
An important consequence of the execution model is that the elements of a stream are produced and processed one by one. Each element is processed before the following element is requested. This creates a push and pull mechanism. The push occurs when an element is produced and is pushed on to be processed; the pull occurs when a new element is needed.
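The pull side of this mechanism is exactly what Python's generators provide (our analogy): a value is computed only when the consumer asks for the next element:

```python
stream = (2 * i for i in range(1, 4))  # nothing is computed yet

print(next(stream))  # -> 2   the consumer pulls; the producer pushes one value
print(next(stream))  # -> 4   each element is processed before the next is made
print(next(stream))  # -> 6
```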
The interaction between both mechanisms facilitates so-called pipeline programming. Pipeline programming provides a neat solution for many producer-consumer problems. Because expressions may be nested, producer-consumer combinations may be repeated several times. Let us assume that, besides F, there are two other definitions, called G and H, where G triples its input and where H selects only the even values:
F( integer) -> multi ( integer); F( n) = 5 * (1 .. n);
G( integer) -> integer; G( i) = 3 * i;
H( integer) -> optional ( integer); H( i) = i when mod (i, 2) == 0;
The mod function used in the last definition rule is a predefined function which computes the remainder of the division of i by 2.
We may now nest definition calls for F, G, and H, which have all different result-specifications, in the following expression:
H (G (F (4) ) )
The result of the evaluation of this expression is shown in the following diagram:
          -----                     -----                       -----
    4     |   |   20, 15, 10, 5     |   |   60, 45, 30, 15      |   |   60, 30
  ------> | F | ------------------> | G | --------------------> | H | -------->
          |   | <================== |   | <==================== |   | <===
          -----                     -----                       -----

A diagram of three nested definition calls.
This diagram shows that F, G, and H form producer-consumer combinations. It shows how one input value for F generates a stream of 4 output values. These 4 values are used as input for G, which produces one output value for each input value. The 4 output values of G are used as input values for H. H, as a filter for even numbers, rejects the odd numbers and passes through the even numbers. The <== lines represent backtracking.
Note that, in reality, there is always only one value in a pipeline. For example, the first value produced by F is 5, which is processed by G and transformed into 15, which is processed by H. H will reject 15 because it is odd. This rejection will cause a pull for the next value of F, being 10, which will be processed by G, and so on.
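The whole pipeline can be sketched with chained Python generators (our analogy; the names follow the definitions above): only one value is in flight at any moment, and each rejection by H pulls a fresh value out of F:

```python
def F(n):                      # multi: the stream 5, 10, ..., 5 * n
    for i in range(1, n + 1):
        yield 5 * i

def G(xs):                     # one output value for each input value: triple it
    for i in xs:
        yield 3 * i

def H(xs):                     # optional: pass a value only 'when' it is even
    for i in xs:
        if i % 2 == 0:
            yield i

print(list(H(G(F(4)))))  # -> [30, 60]
```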
Another language which has backtracking is Prolog. Backtracking in Prolog is described by Bratko (1990), Clocksin and Mellish (1984), and Sterling and Shapiro (1986).