So here we have two pieces of information that I think are very valuable. In fact, I know it from my own experience. Number 1, languages vary in power. Number 2, most managers deliberately ignore this. Between them, these two facts are literally a recipe for making money. ITA is an example of this recipe in action. If you want to win in a software business, just take on the hardest problem you can find, use the most powerful language you can get, and wait for your competitors' pointy-haired bosses to revert to the mean.
13.7. Appendix: Power
As an illustration of what I mean about the relative power of programming languages, consider the following problem. We want to write a function that generates accumulators—a function that takes a number n, and returns a function that takes another number i and returns n incremented by i. (That's incremented by, not plus. An accumulator has to accumulate.)
In Common Lisp this would be:
(defun foo (n)
  (lambda (i) (incf n i)))
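To see what the spec means in practice, here is a quick sketch of my own (not part of the original example) showing an accumulator at work; funcall is just the standard Common Lisp way to call a closure:
(let ((acc (foo 10)))
  (funcall acc 1)    ; returns 11
  (funcall acc 10))  ; returns 21, not 20, because the closure accumulates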
In Ruby it's almost identical:
def foo (n)
  lambda {|i| n += i }
end
Whereas in Perl 5 it's
sub foo {
  my ($n) = @_;
  sub {$n += shift}
}
which has more elements than the Lisp/Ruby version because you have to extract parameters manually in Perl.
In Smalltalk the code is also slightly longer than in Lisp and Ruby:
foo: n
  |s|
  s := n.
  ^[:i| s := s+i. ]
because although in general lexical variables work, you can't do an assignment to a parameter, so you have to create a new variable s to hold the accumulated value.
In Javascript the example is, again, slightly longer, because Javascript retains the distinction between statements and expressions, so you need explicit return statements to return values:
function foo (n) {
  return function (i) {
    return n += i } }
(To be fair, Perl also retains this distinction, but deals with it in typical Perl fashion by letting you omit returns.)
If you try to translate the Lisp/Ruby/Perl/Smalltalk/Javascript code into Python you run into some limitations. Because Python doesn't fully support lexical variables, you have to create a data structure to hold the value of n. And although Python does have a function data type, there is no literal representation for one (unless the body is only a single expression), so you need to create a named function to return. This is what you end up with:
def foo (n):
  s = [n]
  def bar (i):
    s[0] += i
    return s[0]
  return bar
Python users might legitimately ask why they can't just write
def foo (n):
  return lambda i: return n += i
or even
def foo (n):
  lambda i: n += i
and my guess is that they probably will, one day. (But if they don't want to wait for Python to evolve the rest of the way into Lisp, they could always just...)
In OO languages, you can, to a limited extent, simulate a closure (a function that refers to variables defined in surrounding code) by defining a class with one method and a field to replace each variable from an enclosing scope. This makes the programmer do the kind of code analysis that would be done by the compiler in a language with full support for lexical scope, and it won't work if more than one function refers to the same variable, but it is enough in simple cases like this.
Python experts seem to agree that this is the preferred way to solve the problem in Python, writing either
def foo (n):
  class acc:
    def __init__ (self, s):
      self.s = s
    def inc (self, i):
      self.s += i
      return self.s
  return acc (n).inc
or
class foo:
  def __init__ (self, n):
    self.n = n
  def __call__ (self, i):
    self.n += i
    return self.n
I include these because I wouldn't want Python advocates to say I was misrepresenting the language, but both seem to me more complex than the first version. You're doing the same thing, setting up a separate place to hold the accumulator; it's just a field in an object instead of the head of a list. And the use of these special, reserved field names, especially __call__, seems a bit of a hack.
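The caveat above, that the simulation won't work if more than one function refers to the same variable, is worth a concrete illustration. Here is a sketch of my own in Common Lisp (the names are invented), where two closures simply share the same binding of n; to simulate this with one-method classes you would have to give each object its own copy of n, and the copies would immediately fall out of sync:
(defun make-acc-and-reader (n)
  ;; illustration only: two closures closing over one lexical binding
  (values (lambda (i) (incf n i))   ; increments the shared n
          (lambda () n)))           ; reads that same n

(multiple-value-bind (inc peek) (make-acc-and-reader 0)
  (funcall inc 5)
  (funcall peek))   ; returns 5, because both closures see one n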
In the rivalry between Perl and Python, the claim of the Python hackers seems to be that Python is a more elegant alternative to Perl, but what this case shows is that power is the ultimate elegance: the Perl program is simpler (has fewer elements), even if the syntax is a bit uglier.
How about other languages? In the other languages mentioned here—Fortran, C, C++, Java, and Visual Basic—it does not appear that you can solve this problem at all. Ken Anderson says this is about as close as you can get in Java:
public interface Inttoint {
  public int call (int i);
}

public static Inttoint foo (final int n) {
  return new Inttoint () {
    int s = n;
    public int call (int i) {
      s = s + i;
      return s;
    }};
}
which falls short of the spec because it only works for integers.
It's not literally true that you can't solve this problem in other languages, of course. The fact that all these languages are Turing-equivalent means that, strictly speaking, you can write any program in any of them. So how would you do it? In the limit case, by writing a Lisp interpreter in the less powerful language.
That sounds like a joke, but it happens so often to varying degrees in large programming projects that there is a name for the phenomenon, Greenspun's Tenth Rule:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
If you try to solve a hard problem, the question is not whether you will use a powerful enough language, but whether you will (a) use a powerful language, (b) write a de facto interpreter for one, or (c) yourself become a human compiler for one. We see this already beginning to happen in the Python example, where we are in effect simulating the code that a compiler would generate to implement a lexical variable.
This practice is not only common, but institutionalized. For example, in the OO world you hear a good deal about "patterns." I wonder if these patterns are not sometimes evidence of case (c), the human compiler, at work. When I see patterns in my programs, I consider it a sign of trouble. The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least, that I'm using abstractions that aren't powerful enough—often that I'm generating by hand the expansions of some macro that I need to write.
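To make that last point concrete, here is a small Common Lisp sketch of my own; with-timing and do-the-work are invented names, not anything from this book. If the same timing boilerplate keeps getting written out by hand around different bodies of code, that recurring shape is just the expansion of a macro waiting to be written:
(defmacro with-timing (&body body)
  ;; gensym keeps the timer variable from capturing any START in the caller's code
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "took ~a ticks~%"
                 (- (get-internal-real-time) ,start))))))

;; every hand-expanded copy of that pattern then collapses to
(with-timing (do-the-work))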
Chapter 14. The Dream Language
Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive.
C. S. LEWIS
A friend of mine once told an eminent operating systems expert that he wanted to design a really good programming language. The expert said that it would be a waste of time, that programming languages don't become popular or unpopular based on their merits, and so no matter how good his language was, no one would use it. At least, that was what had happened to the language he had designed.
What does make a language popular? Do popular languages deserve their popularity? Is it worth trying to define a good programming language? How would you do it?
I think the answers to these questions can be found by looking at hackers, and learning what they want. Programming languages are for hackers, and a programming language is good as a programming language (rather than, say, an exercise in denotational semantics or compiler design) if and only if hackers like it.
14.1. The Mechanics of Popularity
It's true, certainly, that most people don't choose programming languages simply based on their merits. Most programmers are told what language to use by someone else. And yet I think the effect of such external factors on the popularity of programming languages is not as great as it's sometimes thought to be. I think a bigger problem is that a hacker's idea of a good programming language is not the same as most language designers'.