"Mesoscopic Processes In Biocomputing:
The Role Of Randomness And Determinism"

Felix T. Hong
Dept. of Physiology,
Wayne State University, Detroit, MI 48201 USA


This paper helps show why biological processes are so fundamental, and critical, to producing intelligence, and perhaps even to what is called 'free will.'

Editorial Preface: It has been known for some time that there are a myriad of sophisticated biochemical processes within the brain which are responsible for producing human intelligence. For example, over twenty years ago, 'second messengers' were found inside brain cells that help effect a requested action. It is biochemical processes such as these that are the sole province of the 'wet' brain.

Which is why this paper by Felix Hong is so important. It helps provide clues as to what kind of biochemical processes are ultimately responsible for the manifestation of intelligence. It also suggests possible bottlenecks or impasses future neural network research may face, and why molecular electronics may be a viable alternative. Dr. Hong's research was briefly mentioned in the Molecular Computing Overview that appeared in the Jan. 1, and Jan. 15, 1996, issues of 21st.

[Some Artificial Neural Network Technical Background: Computer-based neural networks are parallel, distributed models of computation whose design is very loosely based on the organization of natural nervous systems. Neural nets consist of collections of interconnected nodes. The mechanisms of interconnection can vary widely. Neural network architectures are usually trained to do a specific task via some type of learning procedure or mechanism. Generally, the learning goal is to determine how best to allocate weights in a multilevel network.

The learning technique of back propagation was initially developed to solve the problem of how to set the weights in a hidden layer of neurons, i.e., a layer lying between the input layer and the output layer. Such learning generally presumes the presence of a 'teacher', i.e., the learning is actively supervised.

Back propagation learning techniques of various types have been successfully applied to a wide variety of neural networks. Typically, back propagation techniques require that the actual neural network output be compared with the desired output.

There are some neural net architectures that do not require a target output to learn (ART, Kohonen, etc.). Training a network without such supervision is called unsupervised learning. But the most popular learning techniques use some form of back propagation.

In feedforward neural networks, an input layer of neurons connects to an output layer through one or more intermediate or hidden layers in a unidirectional fashion. In feedforward-only networks, the input and output sets of the neurons do not overlap; their intersection is empty.
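The supervised, feedforward training described above can be sketched in a few dozen lines. The network below compares actual and desired outputs and propagates the error back through the hidden layer; the 2-2-1 layout, the XOR task, the learning rate, and the epoch count are all illustrative choices, not details from the preface.

```python
import math, random

# A minimal sketch of supervised back-propagation training, assuming a
# 2-2-1 feedforward network and the XOR task (both illustrative choices).
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# w1[j] = [weight from input 0, weight from input 1, bias] for hidden unit j;
# w2 = [weight from hidden 0, weight from hidden 1, bias] for the output unit.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                                # error signal at the output
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]  # propagated back to hidden layer
        for j in range(2):                                         # update output weights
            w2[j] -= lr * d_o * h[j]
        w2[2] -= lr * d_o
        for j in range(2):                                         # update hidden weights
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
            w1[j][2] -= lr * d_h[j]
err_after = total_error()
```

The squared error over the four patterns shrinks as the weights are adjusted; comparing err_before and err_after makes the improvement explicit.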

But in recurrent networks, feedback loops are allowed. In feedback networks, there are at least some common elements between the input and output sets. Recurrent architectures may range from restricted classes of feedback to full interconnection between every neuron in the network. A number of generalizations of the back propagation training algorithm have been developed for recurrent neural networks (Pearlmutter).

Discrete time neural network systems have outputs that change discontinuously in time. Continuous time neural networks have outputs that change smoothly in time.

Continuous time neural networks may offer certain adaptability advantages, as they are less likely to be upset by an unexpected temporal event. Continuous time networks can also store information for as long as it is needed and can combine data arriving at different times.

The ability of recurrent networks to accommodate feedback between layers, coupled with the benefits of continuous-time output, might offer a more dynamical neural network architecture (Beer & Gallagher).
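A continuous-time recurrent neural network of the kind Beer and Gallagher study can be sketched by Euler-integrating its differential equation. The two-neuron weights, biases, and time constants below are illustrative values, not parameters taken from their paper.

```python
import math

# Sketch of a continuous-time recurrent neural network (CTRNN), integrated
# with the Euler method:
#   tau_i * dy_i/dt = -y_i + sum_j w[i][j] * sigma(y_j + theta_j) + I_i
# All parameter values are illustrative, not taken from Beer & Gallagher.

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

tau = [1.0, 2.5]          # time constants
theta = [-2.75, -1.75]    # biases
w = [[4.5, -1.0],         # w[i][j]: weight from neuron j to neuron i
     [1.0, 4.5]]
I = [0.0, 0.0]            # external inputs
y = [0.0, 0.0]            # state variables
dt = 0.01                 # integration step

trace = []                # output of neuron 0 over time
for _ in range(5000):
    s = [sigma(y[j] + theta[j]) for j in range(2)]
    dy = [(-y[i] + sum(w[i][j] * s[j] for j in range(2)) + I[i]) / tau[i]
          for i in range(2)]
    y = [y[i] + dt * dy[i] for i in range(2)]
    trace.append(y[0])
```

Because the state advances by small increments, the recorded output changes smoothly rather than discontinuously, which is exactly the property the preface attributes to continuous time networks.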

However, while general estimation techniques are available for those instances when no particular desired output exists, most of these techniques do not perform very well when confronted with the need for finding good maxima or minima for systems containing many dozens of (usually unknown) parameters.

Training neural networks with little or no outside active teaching, or where there is not much reinforcement, is a difficult learning problem because one does not know or cannot otherwise specify in advance what the 'correct' outputs should be at each time step of the process.

For reinforcement learning applications, the set of target outputs that correspond to some set of inputs to train the net is not known a priori. Rather, the evaluation of the network is performance based (Schaffer, Whitley, Eshelman). It is therefore up to the neural network to figure out a successful solution using mostly its own initiative, with successful results selectively encouraged, or reinforced. E.g., picture a rat in a maze. You cannot explicitly instruct the rat how to find some food. But you can reinforce a rat's successful food-finding behavior within the maze.]
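The rat-in-a-maze idea can be made concrete with tabular Q-learning, one standard reinforcement learning technique. The six-state corridor, the reward value, and the learning constants below are all invented for illustration; only reaching the food is ever rewarded, and the correct moves are never specified explicitly.

```python
import random

# Tabular Q-learning sketch of a "rat" in a one-dimensional corridor.
# States 0..5, food at state 5; the reward signal alone guides learning.
random.seed(1)
N, FOOD = 6, 5
Q = [[0.0, 0.0] for _ in range(N)]     # Q[state][action]; action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration rate

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == FOOD else 0.0)

for _ in range(200):                   # 200 foraging episodes
    s = 0
    while s != FOOD:
        if random.random() < eps:      # occasional random exploration
            a = random.randrange(2)
        else:                          # otherwise act greedily on current estimates
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After learning, the greedy policy runs straight to the food.
path = [0]
while path[-1] != FOOD and len(path) < 20:
    s = path[-1]
    a = 0 if Q[s][0] > Q[s][1] else 1
    path.append(step(s, a)[0])
```

The learned path goes directly from state 0 to the food, even though no "teacher" ever told the network which action was correct at any step.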

Computer-based neural nets are thus valuable and perform extremely important work. In the future, this will be even more true. But 'top down' artificial neural networks will always confront a fundamental problem: by the nature of their silicon materials and construction, they have little or nothing to do with the biochemically based neural networks found in the human brain.

Thus, when true molecular devices made out of active biochemical materials finally appear, the stage will at last be set for a true information processing revolution. --Francis Vale.


The control laws that relate the inputs of biocomputing to the outputs are examined in the framework of the macroscopic-microscopic scheme previously proposed by Conrad. Between the random processes of molecular diffusion and chemical reactions and the well defined processes of neural network signaling lie the mesoscopic processes in the membrane and its vicinity. Mesoscopic processes, which depend on medium-range intermolecular forces, considerably diminish the randomness in microscopic processes and render the processes at the macroscopic level more deterministic. However, the control laws do not follow strict determinism. We speculate that the controlled randomness may allow intelligence and perhaps also free will to emerge. We suggest that future development of molecular electronic and bioelectronic devices should exploit the controlled randomness in structural organization.


Biocomputing (biological information processing) differs from digital computing mainly in its dependence on the materials used for the construction of the biological structure. The exploitation of the underlying chemical phenomena of biomaterials in addition to physical phenomena makes it possible to utilize analog and digital computing alternatively at different stages.

Conrad [1] has compared biocomputing and digital computing and described biocomputing in terms of a macroscopic-microscopic (M-m) scheme, in which the microscopic scheme of intracellular information processing and the macroscopic scheme of neural network processing are coupled vertically via second messenger-mediated signal transduction.

Second-messenger-mediated processes take place in the membrane and its vicinity, the dimension of which is too large to be considered microscopic but too small to be considered macroscopic, and is usually referred to as mesoscopic.

However, mesoscopic processes are not mere coupling links between the macroscopic level and the microscopic level, because they exhibit a network-like structure with rich internal dynamics. It seems justified to elevate mesoscopic processes to a distinct level and to extend the M-m scheme to a macroscopic-mesoscopic-microscopic (3M) scheme.

In considering these internal dynamics, one obvious question to ask is how random processes of molecular diffusion and biochemical reactions give rise to highly organized processes which are the characteristics of life.

By examining some of the microscopic and mesoscopic dynamics, it becomes apparent that many of these processes are not completely random but are under different degrees of control as a result of molecular interactions involving various intermolecular forces.

In considering the systems performance of a living organism, another question to raise is how deterministic is life as a whole process. With regard to this latter question, one cannot avoid dealing with a problem that dates back to St. Augustine, namely the conflict of free will and determinism [2].

The determinism here is strict determinism as stipulated by Laplace in accordance with classical Newtonian mechanics. But is life strictly deterministic? Our answer is no. From the computing point of view, a key element is the control law that relates the output to the inputs of a computing process.

In biocomputing, the control laws are neither completely deterministic nor completely random. We shall illustrate the mesoscopic processes with additional examples and examine the control laws governing these processes.

The Membrane As A Mesoscopic Substrate

The importance of the plasma membrane cannot be overemphasized. Aside from the obvious role of the membrane in macroscopic neural signaling, many important local events occur in the mesoscopic range at the membrane surface and in the membrane interior (see, for example, [3]).

Two features of biomembranes are particularly important for mesoscopic processes: the small but finite thickness (of the same dimension as many macromolecules) and the membrane fluidity.

The membrane fluidity allows membrane-bound macromolecules to diffuse laterally in the plane of the membrane and to rotate about an axis perpendicular to the membrane surface. Examples in biocomputing that exploit this feature are the electron transport in the mitochondrial inner membrane and in the thylakoid membrane of the photosynthetic apparatus.

Several supramolecular complexes of redox centers in mitochondria and the two reaction centers in green plant chloroplasts constitute the "hard-wired" electron paths. The connection between them is accomplished through lateral diffusion of several quinone-like compounds that serve as electron shuttles.

This arrangement allows for dynamic allocation of biocomputing resources in response to changing environmental conditions. Together these components form 2-dimensional networks, but the connectivity is not fixed: the electron transfer paths are partly "hard-wired" and partly loosely connected by lateral diffusion. In other words, they form a dynamic network.

In addition to lateral diffusion of macromolecules within the membrane, there is a proton conduction network on the surface of the membrane. Teissié and his colleagues [4] have detected rapid lateral movement of protons on a phospholipid monolayer-water interface by a number of measurements. They found that the proton conduction along the surface is considerably faster than proton conduction in the bulk phase (2-3 minutes vs. 40 minutes for a comparable distance in their setup).

This novel conduction mechanism is proton-specific, as has been confirmed by replacement with deuterated water. The conduction mechanism is truly mesoscopic because it is present only in the liquid expanded (fluid) state of the monolayer but disappears in the liquid condensed (gel) state.

These authors indicated that the enhanced lateral proton movement occurs along a hydrogen-bonded network on the membrane surface similar to proton conduction in ice crystals. Their findings helped resolve a long standing controversy in bioenergetic conversion research known as the "Pacific Ocean effect" [5].

It is well known that the bioenergetic conversion process of electron transfer in mitochondria and in chloroplasts is coupled to ATP production via a mesoscopic state -- the transmembrane electrochemical gradient of protons.

The controversy is about the postulated local proton path leading from the electron transfer site to the ATP production site -- a role readily filled by the lateral proton conduction network. The alternative of dumping protons into the vast ocean of extracellular space was considered energetically unacceptable to investigators who championed the local proton path.

Another important aspect of mesoscopic dynamics is the effect of electric fields near and inside the membrane. Aside from the well known diffusion potentials which neurons use for intercellular signaling, there are localized electric fields generated by charged membrane surfaces or charged surface domains of macromolecules.

Charged surface domains of macromolecules are commonly involved in the formation of salt bridges which stabilize the docking of two macromolecules to form a complex. Salt bridges can also form intramolecularly and are important in stabilizing macromolecular conformations in addition to hydrogen bonding and other intermolecular forces.

There is another effect of surface charges, namely the generation of a surface potential on the membrane surface and a zeta potential on the surface of a macromolecule. This effect is not generally recognized in solution phase biochemistry, partly because surface potentials are thought to vanish, like diffusion potentials, upon disruption or rupture of a membrane.

The truth is that these localized potentials do not require an intact membrane to be sustained. A charged membrane surface causes excess counterions to accumulate in the nearby diffuse double layer. Because of the charge screening effect of the double layers, this polarization effect does not extend very far into the bulk solution, and the extent is indicated by the Debye length which is a function of ionic strength.

Because of the small Debye length (of the order of 5 Å in physiological solutions) there exists a steep electric potential gradient and therefore an intense electric field inside the membrane. Thus, the sudden appearance of a surface potential causes a concentration jump of counterions and can be used as a biochemical switch, transforming the random encounter of molecules with the membrane into a more deterministic process.
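As a rough check on the scale involved, the Debye length of a 1:1 electrolyte follows from the standard screening formula. The constants below are textbook values, and the ~150 mM ionic strength is an assumed physiological figure; the result, just under a nanometer, is of the same angstrom order as the figure quoted above.

```python
import math

# Back-of-the-envelope Debye length for a 1:1 electrolyte:
#   lambda_D = sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * I)),  I in mol/m^3.
# Constants are textbook values; ~150 mM is an assumed physiological ionic strength.
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 80.0         # relative permittivity of water
kB = 1.381e-23       # Boltzmann constant, J/K
T = 310.0            # body temperature, K
NA = 6.022e23        # Avogadro's number, 1/mol
e = 1.602e-19        # elementary charge, C
I = 150.0            # ionic strength, mol/m^3 (i.e., 150 mM)

lambda_D = math.sqrt(eps_r * eps0 * kB * T / (2 * NA * e ** 2 * I))
print("Debye length: %.1f Angstrom" % (lambda_D * 1e10))   # roughly 8 Angstrom
```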

We have previously speculated that a light-induced surface potential is a trigger to the visual transduction process [6]. The activation of protein kinase C (PKC) by a negatively charged membrane surface is partially an electrostatic event and also partially due to a specific interaction of PKC with phosphatidyl serine [7]. A surface potential can thus serve as a homing mechanism.

A similar situation occurs at the surface and in the interior of a macromolecule. Many macromolecules contain charged groups at the exposed hydrophilic domain. These surface charges confer a zeta potential on the macromolecule and an intense but short-ranged electric field at its surface. Likewise, surface charges or charged groups buried in the hydrophobic domain exert electric forces on the interior of the macromolecule, and may affect its overall conformation.

Superficially, the intracellular dynamics appears to be rather stochastic in nature because diffusion is a random process and chemical reactions in solutions are mediated by random collisions. One may wonder how coupling of reactions and diffusion can lead to meaningful events characteristic of intelligent life. An examination of the factors that govern the intracellular dynamics is in order.

First of all, the cell is highly compartmentalized and this compartmentalization is made possible by an extensive intracellular membrane system. Distributed among various compartments and often linking various molecular components is the cytoskeleton, an intricate network formed by a variety of macromolecules which can undergo rapid polymerization and depolymerization in response to some mesoscopic factors.

Thus, desirable reactants tend to be grouped together by transport processes, some of which are mediated by membranes and some others by the cytoskeleton. Additional factors are also at work to make desirable chemical reactions less random and more deterministic.

Another important factor is the electrostatic interaction mentioned above. Reactant molecules find each other by random collisions, but they are just as easily deflected from each other before they have time to initiate the desired reaction. Electrostatic interactions in the form of salt bridges tend to stabilize the reaction complex and ensure the consummation of the desired reaction.

In addition, charged groups placed at strategic locations have the effect of making molecular recognition more specific than mere shape-fitting. Specificity conferred by matching pairs of surface charges also guarantees that the matching macromolecules have the correct relative orientation for bimolecular reactions such as intermolecular electron transfers to take place.

Many enzymes are activated by phosphorylation, which attaches charged phosphate groups; thus electrostatic interactions also play a prominent role in enzyme activation [8]. Other mechanisms, such as allosteric effects and steric hindrance, sometimes also contribute to the activation. Through judicious activation of selected enzymes, Nature is able to preferentially channel reactions in desirable directions and to maintain optimal concentrations of various molecules. By virtue of a concerted regulatory regime, a high degree of determinacy can be achieved.

Controlled Randomness Of The Macroscopic Dynamics

The control laws in biocomputing range from highly random to highly deterministic. The mechanism of neural excitation provides insights into the inner working of these dynamics. It is well known that neural excitability arises from the voltage-dependence of sodium ion channels in the membrane: the increase of sodium conductance is critically dependent on the extent of depolarization of the membrane potential.

The control law at the macroscopic level of nerve impulse generation is fairly well defined. However, the activity of individual ion channels is highly random, and the control law is only statistically defined. The conductance of individual channels is not voltage-dependent, but the probability of the opening of ion channels is voltage-dependent [9].

What is remarkable here is the transformation of a statistical control law at the mesoscopic level into a well defined control law at the macroscopic level. Nevertheless, the control laws at the macroscopic level are not strictly deterministic. This is in part a consequence of mixing digital and analog processing. Signal transmission by means of nerve impulses and the release of neurotransmitters are digital processes, but the mesoscopic events linking the two are analog in nature.
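This statistical-to-deterministic transformation is easy to demonstrate numerically. In the sketch below, each of N identical channels opens independently with a voltage-dependent probability; the Boltzmann-type curve and every parameter value are illustrative assumptions, not measurements from [9]. The fluctuation of the summed conductance shrinks roughly as 1/sqrt(N).

```python
import math, random

# Toy ensemble of identical two-state ion channels. The Boltzmann-type
# open-probability curve and all parameter values are illustrative assumptions.
random.seed(2)

def p_open(V, V_half=-40.0, k=5.0):
    # Probability that a single channel is open at membrane potential V (mV).
    return 1.0 / (1.0 + math.exp(-(V - V_half) / k))

def fraction_open(p, N):
    # One stochastic trial: each of N channels opens independently.
    return sum(random.random() < p for _ in range(N)) / N

p = p_open(-40.0)          # at V_half, each channel is open half the time
spread = {}
for N in (10, 100, 10000):
    samples = [fraction_open(p, N) for _ in range(100)]
    mean = sum(samples) / len(samples)
    spread[N] = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
# spread[N] falls roughly as 1/sqrt(N): single channels gate at random,
# but the summed conductance of many channels is nearly deterministic.
```

A patch with ten channels fluctuates visibly from trial to trial, while a patch with ten thousand behaves almost like a deterministic conductance, which is the essence of the statistical control law described above.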


The control laws that link the input to the output in various steps of biocomputing are not as random as suggested by molecular diffusion and chemical reactions. Many mesoscopic mechanisms can bring the randomness under control (controlled randomness).

But the control laws are not strictly deterministic as in digital computing. This weaker form of determinism in which the control laws exhibit small but nonzero error is the price paid for unleashing the internal dynamics of biocomputing. The error fuels evolution but it may also be the source of intelligence and creativity in a living organism.

Since biocomputing does not practice strict determinism, the conflict between free will and determinism does not really exist. Free will, however, cannot be experimentally proved or disproved by conventional science because it is impossible to rigorously repeat an experiment to allow for either time-averages or ensemble-averages to be determined.

Ensemble-average is impossible because a particular experiment must be performed on the same individual in order to be meaningful whereas time-average is impossible because repeating the same experiment on the same individual at a later time is meaningless.

With regard to applications to machine intelligence, exploiting the carbon-based chemistry (organic materials science) is the first step. However, the randomness must be brought under control. Fabrication technology is thus of paramount importance at this stage of development of molecular electronic and bioelectronic devices.

But if the randomness is excessively contained, the "intelligence" inherent in the materials may not be fully unleashed. Future research should exploit the possibility of introducing structural variability (e.g., exploiting membrane fluidity) into the molecular electronic and bioelectronic devices.


Hong Article References:

1. M. Conrad, Microscopic-macroscopic interface in biological information processing, BioSystems 16:345-363 (1984).

2. A. Goldman, Action and free will, in: Visual Cognition and Action: An Invitation to Cognitive Science, Vol. 2, Osherson, D.N., Kosslyn, S.M., and Hollerbach, J.M., Eds., pp. 315-340, MIT Press, Cambridge and London (1990).

3. F. T. Hong, Do biomolecules process information differently than synthetic organic molecules?, BioSystems 27:189-194 (1992).

4. J. Teissié and B. Gabriel, Lateral proton conduction along lipid monolayers and its relevance to energy transduction, in: Proc. 12th School on Biophysics of Membrane Transport, S. Przestalski, J. Kuczera and H. Kleszczynska, Eds., Vol. II, pp. 143-157, Agricultural University of Wroclaw, Wroclaw, Poland (1994).

5. R. J. P. Williams, The history and the hypotheses concerning ATP-formation by energized protons, FEBS Lett. 85:9-19 (1978).

6. F. T. Hong, Electrochemical approach to the design of bioelectronic devices, in: Proc. of the 2nd International Symposium on Bioelectronic and Molecular Electronic Devices, M. Aizawa, Ed., pp. 121-124, R & D Association for Future Electron Devices, Tokyo, Japan (1988).

7. A. C. Newton, Interaction of proteins with lipid headgroups: lessons from protein kinase C, Annu. Rev. Biophys. Biomol. Struct. 22:1-25 (1993).

8. L. N. Johnson and D. Barford, The effects of phosphorylation on the structure and function of proteins, Annu. Rev. Biophys. Biomol. Struct. 22:199-232 (1993).

9. F. J. Sigworth and E. Neher, Single Na channel currents observed in cultured rat muscle cells, Nature (Lond.) 287:447-449 (1980).

Full Citation Of This Article:

Extended Abstracts of the 5th International Symposium on
Biomolecular and Molecular Electronic Devices and the 6th
International Conference on Molecular Electronics and Biocomputing,
Sponsored by Research and Development Association for Future
Electron Devices, November 28-30, 1995, Tokyo, Japan, pp. 281-284.

Hong Article Copyright by:

Research and Development Association for Future Electron Devices
Sumitomofudosan Akasaka Bldg., 8-10-24, Akasaka, Minato-ku,
Tokyo 107 Japan
Fax : +81-3-3423-1680

This article was reprinted with the permission of the Research and Development Association for Future Electron Devices, Okinawa, Japan.

21st Editor's Preface References:

1. Anderson, C.S. (1990) Learning to control an inverted pendulum using neural networks. IEEE Control Systems Magazine, 9, 31-37.

2. Anderson, C.W. & Miller, W.T. (1990) A challenging set of control problems. In T. Miller, R. Sutton and P. Werbos (Eds.) Neural Networks for Control (pp. 475-510). Cambridge, MA: MIT Press.

3. Beer, R.D. & Gallagher, J.C. (1992, Summer) Evolving dynamical neural networks for adaptive behavior. Adaptive Behavior. (pp. 91-122).

4. de Garis, H. (1989). WALKER, A genetically programmed, time dependent, neural net which teaches a pair of sticks to walk. Technical report Fairfax, VA: Center for AI, George Mason University.

5. Ichikawa, Y. & Sawa, T. (1992, March). Neural network application for direct feedback controllers. IEEE Transaction on Neural Networks, (pp.224-231).

6. Pearlmutter, B.A. (1989). Learning state space trajectories in recurrent neural networks. Neural Computation 1:263-269

7. Torreele, J. (1991). Temporal processing with recurrent networks: An evolutionary approach. In R.K. Belew & L.B. Booker (Eds.), Fourth international conference on genetic algorithms (pp. 555-561). San Mateo, CA: Morgan Kaufmann.

21st Editor's Preface, Copyright 1996, Francis Vale All Rights Reserved,

21st, The VXM Network,