An e-mail discussion can be rendered into print in several ways. Rather than imitating a genuine conversation, this rendering takes the form of a personal essay into which comments and replies from the other contributors are interpolated. Most of the substantial points made in the e-mail discussion appear here, although not always in the order in which they were made.
The problem posed was to give a characterisation of a computer. According to Searle, the notions of ``computer'' and ``computation'' are not natural kinds, and anything with a sufficient number of states can be thought of as performing any computation, simply by interpreting its states to have appropriate meanings; this is always legitimate since such external interpretation is the only means by which the purely formal patterns in a conventional computer can be given meanings in any case. Thus a claim that something is a computer, or operates computationally, is simply vacuous. For example, a wall can be said to be running Wordstar, and our digestive systems have as much claim to being computational as our nervous systems do.
Many agree that this conclusion seems evidently wrong, but it is not clear how to refute it. One approach, suggested by Valerie Hartcastle, is to allow Searle his logical point but to deny its claimed significance. Hayes suggests that characterising computers in some way other than as the ``abstract mechanisms'' of conventional computability theory might avoid Searle's reductio ad trivium. Selmer Bringsjord disagrees and urges that the point simply needs some modal care: while it may be logically possible for anything to be a computer, it is not physically possible. Istvan Berkeley raises an interesting issue by claiming that ``computation'' is inherently ambiguous. George McKee suggests that something is a computer when it is being programmed, albeit with a very broad notion of ``programming''; and Rob Stufflebeam suggests the broadest definition of all: a computer is something that implements a function.
One interesting disagreement concerned whether or not the human nervous system - the brain, in short - is a computer. While all the contributors agree that it probably is one, Hayes takes this to be an empirical question, while Hartcastle and Berkeley want to make it a matter of definition. This difference seems to reflect a well-known methodological split within cognitive science, between those who see the discipline as defined by a collection of ideas emerging from computer science, and those who regard it as defined by its subject matter, i.e. human and animal cognition. If history proves that the best explanation of human cognition is provided by quantum physics and microbiology, then from the first perspective cognitive science will have failed; from the second, it will simply have found a better theory. The natural move, then, for someone who insists that brains are computers would be to stretch the notion of ``computer'' to include such apparently noncomputational phenomena, as McKee suggests.
I begin with a summary of the ``target'' article, which I wrote to stimulate discussion.