Chapter 1: Building Abstractions with Procedures
The acts of the mind, wherein it exerts its power over simple ideas, are chiefly these three: 1. Combining several simple ideas into one compound one, and thus all complex ideas are made. 2. The second is bringing two ideas, whether simple or complex, together, and setting them by one another so as to take a view of them at once, without uniting them into one, by which it gets all its ideas of relations. 3. The third is separating them from all other ideas that accompany them in their real existence: this is called abstraction, and thus all its general ideas are made.
——John Locke, An Essay Concerning Human Understanding (1690)
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules called a program.
1.1 The Elements of Programming
Every powerful language has three mechanisms for combining simple ideas to form more complex ideas:
- primitive expressions, which represent the simplest entities the language is concerned with.
- means of combinations, by which compound elements are built from simpler ones.
- means of abstraction, by which compound elements can be named and manipulated as units.
1.1.1 Expressions
Expressions formed by delimiting a list of expressions within parentheses, in order to denote procedure application, are called combinations.
The leftmost element in the list is called the operator, and the other elements are called operands.
Lisp obeys the convention that every expression has a value.
1.1.2 Naming and the Environment
We say that the name identifies a variable whose value is the object.
In the Scheme dialect of Lisp, we name things with
define.
The interpreter must maintain some sort of memory that keeps track of the name-object pairs, and this memory is called the environment.
1.1.3 Evaluating Combinations
To evaluate a combination, the interpreter does the following:
- Evaluate the subexpressions of the combination.
- Apply the procedure that is the value of the leftmost subexpression (the operator) to the arguments that are the values of the other subexpressions (the operands).
Thus, the evaluation rule is recursive in nature.
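A nested combination makes the recursion visible; each operand is itself evaluated by the same rule before the operator is applied:

```scheme
(* (+ 2 (* 4 6))
   (+ 3 5 7))
; inner combinations first: (* 4 6) => 24, so (+ 2 24) => 26,
; and (+ 3 5 7) => 15; then apply *: (* 26 15) => 390
```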
define is not a combination but a special form.
(e.g. evaluating
(define x 3) does not apply
define to two arguments, since the purpose of define is precisely to associate x with a value.)
Each special form has its own evaluation rule.
1.1.4 Compound Procedures
The general form of a procedure definition is
```scheme
(define (<name> <formal parameters>) <body>)
```
The body of the procedure can be a sequence of expressions. In this case, the interpreter evaluates each expression in the sequence in turn and returns the value of the final expression as the value of the procedure application.
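A minimal sketch of this form, using the classic square and sum-of-squares definitions:

```scheme
(define (square x) (* x x))

; compound procedures can be used as building blocks
; for other compound procedures
(define (sum-of-squares x y)
  (+ (square x) (square y)))

(sum-of-squares 3 4) ; => 25
```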
1.1.5 The Substitution Model for Procedure Application
The substitution model is a model that determines the “meaning” of procedure application: to apply a compound procedure to arguments, evaluate the body of the procedure with each formal parameter replaced by the corresponding argument.
(When we address in Chapter 3 the use of procedures with “mutable data”, we will see that the substitution model breaks down and must be replaced by a more complicated model.)
The interpreter actually uses the “evaluate the arguments and then apply” method, which is called applicative-order evaluation.
The alternative “fully expand and then reduce” evaluation method, known as normal-order evaluation, would not evaluate the operands until their values were needed (e.g. Exercise 1.5).
For procedure applications that can be modeled using substitution and that yield legitimate values, normal-order and applicative-order evaluation produce the same value (see Exercise 1.5 for an instance of an “illegitimate” value).
Lisp uses applicative-order evaluation, partly because normal-order evaluation becomes much more complicated to deal with when we leave the realm of procedures that can be modeled by substitution.
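A sketch of the two evaluation orders on the same expression, assuming square is defined as (* x x):

```scheme
(define (square x) (* x x))

; applicative order: evaluate the operand first
; (square (+ 2 3)) -> (square 5) -> (* 5 5) -> 25

; normal order: fully expand, then reduce
; (square (+ 2 3)) -> (* (+ 2 3) (+ 2 3)) -> (* 5 5) -> 25
; note that (+ 2 3) is evaluated twice under normal order

(square (+ 2 3)) ; => 25 either way
```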
1.1.6 Conditional Expressions and Predicates
The general form of a conditional expression is

```scheme
(cond (<p1> <e1>)
      (<p2> <e2>)
      ...
      (<pn> <en>))
```
It is a special form: the predicate <p1> is evaluated first; if its value is false, <p2> is evaluated, and so on. This process continues until a predicate is found whose value is true, in which case the interpreter returns the value of the corresponding consequent expression <e> of that clause as the value of the conditional expression.
else is a special symbol that can be used in place of the <p> in the final clause of a cond.
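For instance, absolute value written with cond, including an else clause:

```scheme
(define (abs x)
  (cond ((> x 0) x)
        ((= x 0) 0)
        (else (- x))))

(abs -7) ; => 7
```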
The general form of an if expression is
```scheme
(if <predicate> <consequent> <alternative>)
```
It is a special form: the interpreter starts by evaluating the <predicate>; if it is true, the interpreter evaluates the <consequent> and returns its value; otherwise it evaluates and returns the <alternative> instead.
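The same absolute-value procedure, written with if (only one branch is ever evaluated):

```scheme
(define (abs x)
  (if (< x 0)
      (- x)
      x))

(abs -7) ; => 7
```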
Three logical composition operations:
```scheme
(and <e1> ... <en>)
(or <e1> ... <en>)
(not <e>)
```
and and or are special forms because of their short-circuit evaluation, just like cond and if above; not is an ordinary procedure.
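A small illustration of why short-circuiting matters (positive-reciprocal? is a made-up helper for this sketch):

```scheme
; and stops at the first false subexpression, so (/ 1 x)
; is never evaluated when x is 0:
(define (positive-reciprocal? x)
  (and (not (= x 0)) (> (/ 1 x) 0)))

(positive-reciprocal? 0)  ; => #f, without dividing by zero
(positive-reciprocal? 4)  ; => #t
```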
1.1.7 Example: Square Roots by Newton’s Method
In mathematics we are usually concerned with declarative (what is) descriptions,
whereas in computer science we are usually concerned with imperative (how to) descriptions.
Exercise 1.6 shows why
if needs to be provided as a special form.
1.1.8 Procedures as Black-Box Abstractions
We say that the procedure definition binds its formal parameters. If a variable is not bound, we say that it is free.
The set of expressions for which a binding defines a name is called the scope of that name.
We allow a procedure to have internal definitions that are local to that procedure, which is called block structure.
(Embedded definitions must come first in a procedure body.)
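A block-structured sqrt illustrates both ideas: the helpers are local to sqrt, and x appears as a free variable inside them, bound by the enclosing definition:

```scheme
(define (sqrt x)
  ; x is free in the internal definitions below;
  ; it gets its value from the enclosing sqrt
  (define (good-enough? guess)
    (< (abs (- (* guess guess) x)) 0.001))
  (define (improve guess)
    (/ (+ guess (/ x guess)) 2))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
```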
1.2 Procedures and the Processes They Generate
A procedure is a pattern for the local evolution of a computational process. It specifies how each stage of the process is built upon the previous stage.
In this section we will examine some common “shapes” for processes generated by simple procedures.
We will also investigate the rates at which these processes consume the important computational resources of time and space.
1.2.1 Linear Recursion and Iteration
In general, an iterative process is one whose state can be summarized by a fixed number of state variables, together with a fixed rule that describes how the state variables should be updated as the process moves from state to state and an (optional) end test.
The program variables provide a complete description of the state of the iterative process at any point, while there is some additional “hidden” information in the recursive process.
An iterative process will be executed in constant space, even if it is described by a recursive procedure. An implementation with this property is called tail-recursive.
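The two shapes can be seen in factorial. The first version generates a linear recursive process (deferred multiplications pile up); the second generates an iterative process whose state is fully captured by two variables:

```scheme
; recursive process: the interpreter must remember
; the chain of pending multiplications
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

; iterative process: product and counter summarize the
; entire state, so it runs in constant space
(define (factorial-iter n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product) (+ counter 1))))
  (iter 1 1))

(factorial 5)      ; => 120
(factorial-iter 5) ; => 120
```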
1.2.2 Tree Recursion
In general, the evolved process looks like a tree.
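The canonical example is computing Fibonacci numbers: each call branches into two subcalls, so the evolved process forms a tree:

```scheme
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

(fib 8) ; => 21
```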
1.2.3 Orders of Growth
In general, order of growth provides a useful indication of how we may expect the behavior of the process to change as we change the size of the problem.
1.2.4 Exponentiation
This section shows an example: fast exponentiation by successive squaring.
In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.
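A sketch of the iterative fast-exponentiation algorithm built around such an invariant: the product a·b^n stays unchanged from state to state, so when n reaches 0, a is the answer:

```scheme
; invariant: a * b^n is the same in every state
(define (fast-expt b n)
  (define (iter a b n)
    (cond ((= n 0) a)
          ((even? n) (iter a (* b b) (/ n 2)))  ; square the base
          (else (iter (* a b) b (- n 1)))))     ; fold one b into a
  (iter 1 b n))

(fast-expt 2 10) ; => 1024
```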
1.2.5 Greatest Common Divisors
This section shows an example: GCD.
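Euclid's algorithm rests on the observation that gcd(a, b) = gcd(b, a mod b), which yields a compact iterative process:

```scheme
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))

(gcd 206 40) ; => 2
```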
1.2.6 Example: Testing for Primality
This section shows an example: Miller-Rabin test.
1.3 Formulating Abstractions with Higher-Order Procedures
We have introduced compound procedures as a mechanism for abstracting patterns of numerical operations so as to make them independent of the particular numbers involved.
We began to see a more powerful kind of abstraction: procedures used to express general methods of computation, independent of the particular functions involved.
Procedures that manipulate procedures are called higher-order procedures which express patterns among different procedures as concepts.
1.3.1 Procedures as Arguments
A procedure that expresses the concept of summation itself, rather than only procedures that compute particular sums:
```scheme
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))
```
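Particular sums then become one-liners; for instance, the sum of cubes from a to b:

```scheme
(define (inc n) (+ n 1))
(define (cube x) (* x x x))

; supply the term and next procedures to the general sum
(define (sum-cubes a b)
  (sum cube a inc b))

(sum-cubes 1 10) ; => 3025
```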
1.3.2 Constructing Procedures Using lambda
In general, lambda is used to create procedures in the same way as define, except that no name is specified for the procedure:
```scheme
(lambda (<formal parameters>) <body>)
```
We can use lambda in any context where we would normally use a procedure name.
Another use of
lambda is in creating local variables.
The general form of a let expression is
```scheme
(let ((<var1> <exp1>)
      (<var2> <exp2>)
      ...
      (<varn> <expn>))
  <body>)
```
No new mechanism is required in the interpreter in order to provide local variables.
A
let expression is simply syntactic sugar for the underlying
lambda application:
```scheme
((lambda (<var1> ... <varn>)
   <body>)
 <exp1>
 ...
 <expn>)
```
Therefore, the variables’ values are computed outside the
let.
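A small example makes the scoping concrete: the let binding shadows x only within its body, and the outer x is used both for the binding's context and outside:

```scheme
(define x 5)

(+ (let ((x 3))
     (+ x (* x 10)))  ; inside the body, x is 3
   x)                 ; outside, x is still 5
; => 38
```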
(Sometimes we can use internal definitions to get the same effect as with let; however, we prefer to use internal define only for internal procedures.)
1.3.3 Procedures as General Methods
some examples…
1.3.4 Procedures as Returned Values
some examples…
In general, programming languages impose restrictions on the ways in which computational elements can be manipulated.
Elements with the fewest restrictions are said to have first-class status.
Some of the “rights and privileges” of first-class elements are:
- They may be named by variables.
- They may be passed as arguments to procedures.
- They may be returned as the results of procedures.
- They may be included in data structures.
Lisp awards procedures full first-class status, which poses challenges for efficient implementation, but the resulting gain in expressive power is enormous.
Collected solutions to the fixed-point exercises:
```scheme
#lang sicp

; fixed point
(define tolerance 0.00001)

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (display guess)
    (newline)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          guess
          (try next))))
  (try first-guess))

; golden ratio
(fixed-point (lambda (x) (+ 1 (/ 1 x))) 1.0)

; x^x = 1000
(fixed-point (lambda (x) (/ (log 1000) (log x))) 6)

; transform
(define (fixed-point-of-transform g transform guess)
  (fixed-point (transform g) guess))

; average damp
(define (average-damp f)
  (define (average a b) (/ (+ a b) 2))
  (lambda (x) (average x (f x))))

(define (sqrt x)
  (fixed-point-of-transform
   (lambda (y) (/ x y)) average-damp 1.0))

; newton's method
(define (deriv g)
  (define dx 0.00001)
  (lambda (x) (/ (- (g (+ x dx)) (g x)) dx)))

(define (newton-transform g)
  (lambda (x) (- x (/ (g x) ((deriv g) x)))))

; redefines the sqrt above, this time via Newton's method
(define (sqrt x)
  (define (square x) (* x x))
  (fixed-point-of-transform
   (lambda (y) (- (square y) x)) newton-transform 1.0))

(sqrt 9.0)

; n-th root
(define (compose f g) (lambda (x) (f (g x))))

(define (repeated f n)
  (if (= n 0)
      (lambda (x) x)
      (compose f (repeated f (- n 1)))))

(define (n-th-power x n)
  (if (= n 0) 1 (* x (n-th-power x (- n 1)))))

(define (lg x) (/ (log x) (log 2)))

(define (n-th-root x n)
  (fixed-point-of-transform
   (lambda (y) (/ x (n-th-power y (- n 1))))
   (repeated average-damp (floor (lg n)))
   1.1))

(n-th-root 1024 50)

; iterative improve
(define (iterative-improve test improve)
  (define (try guess)
    (let ((next (improve guess)))
      (if (test guess next)
          guess
          (try next))))
  (lambda (guess) (try guess)))

; redefines fixed-point in terms of iterative-improve
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  ((iterative-improve close-enough? f) first-guess))
```