
The Outside, Naturalised

An Exercise in Speculative Evolutionary Dynamics

Anyone can invoke the Real, but unless there’s some mechanism that provides, not a voice for the Outside, but an actual functional intervention from the Outside, so it has a selective function, then the language is empty.

—Nick Land[1]

It is, without any doubt, the most radical parade of possibilities to ever trammel my imagination—a truly post-intentional philosophy—and I feel as though I have just begun to chart the extent of its implicature.

—Scott Bakker[2]

So Simple a Beginning

Let’s start with an exercise of the imagination. Think of yourself as a generic mammal who has just been born. Utterly dependent, with your faculties yet to be developed, it is rather unlikely that you would survive, left to your own devices. Not only do you lack the strength or the speed to feed yourself and avoid the relevant threats, but you lack the tools required to navigate your environment. In adult retrospect, the world seems so beautifully ordered. Sharp distinctions and strong oppositions interact seamlessly around you without any conscious acknowledgement. But, looked at closely, our world is anything but. It is, rather, a confusing mess of stimuli, whose underlying patterns can be interpreted and rearranged in multiple ways.

In this context, one of the crucial tasks awaiting this young mammal consists in developing categories for its unlabelled world. It needs to produce an ordered representation of its confusing stream of stimuli which, above all, has to score highly on adaptive value—meaning that such a representation should serve it well, given its particular ecological niche. Needless to say, not all the categories that such a mammal acquires during its lifecycle develop in this way. Some of those categories (in general, some behaviours) are already encoded in its genetic material, and the relative proportions will depend on the idiosyncrasies of the relevant species. This genetic component in the development of perceptual categories will also play a part in the overall adaptiveness of the animal in question. But the underlying point persists: a mammal needs to respond to the ambivalence of its environment by developing a set of categories that will allow it to navigate the said environment. These categories, in turn, need to capture the spatial and temporal invariances of the stream of stimuli.

There’s a catch, however. If the environment is ambivalent, then there will always be a multiplicity of available spatiotemporal invariances to pick up on. Truth, by itself, is not adaptive. And what does and does not count as adaptive behaviour will depend on circumstantial causes, each of which will likely maximise completely disparate parameters. This implies that the functional network in which each of our faculties—including cognition—is embedded will likely respond to a circumstantial set of ecological causes that have no interest in truth or the real. Intentional cognition can, on this basis, be framed as the set of tools that we have developed to respond to those circumstantial, ecologically determined causes. Take away the ecological invariants that sustain our cognition and, of course, the adaptiveness of cognition falls with them.

This is Scott Bakker’s form of ecological determinism: the idea that whatever reaches conscious cognition will be couched in terms of the ecological function it is meant to serve. Take all those categories that you use to navigate your environment. It doesn’t matter how high-level: from basic orientation invariances in stimuli to complex semantic relationships between words in our language. All such invariances, as we’ve already mentioned, do not capture any inherent properties of the sensory stimuli. They are but one selected set among all possible patterns, and the function they play in our behaviour is an expression of the relative advantage they displayed in the environment where they arose.

A natural consequence of this ecological tractability of cognition is that our categories, both perceptual and semantic, display a tendency to interpret stimuli in a way that reinforces their own function. We see examples of this every day and everywhere. Take our innate capacity to skilfully recognise faces. This is a highly localised ability, processed mainly in the right fusiform gyrus, on the ventral (inferior) surface of the temporal lobe.

Figure 1. Location of the Fusiform Face Area.[3]

One of the main consequences of strong localisation, which is crucial for several reasons, is that damage usually leads to relatively isolated, but sharp, loss of function.[4] What is most important for our purposes, however, is that such localisation is evidence for the adaptive value of the function that the area supports—specialisation indicates sustained selection pressures. The recognition of individuals of our own species (that is, of friend or foe) presumably played a key role in enabling human social behaviour. As a consequence, we are bound to see faces everywhere. Not only that, but we are bound to see them exactly as we’re meant to see them, even under very strange perceptual conditions. The typical example: rotate a face 180º while leaving the cues we usually use for recognition intact. We perceive the entire face as upside down and notice nothing bizarre (see Figure 2).

Figure 2. Facial illusion showing holistic face coding.[5]

For Bakker, this characteristic is generalised all the way through to cognition (broadly construed), on the cheap condition of assuming ecological conditioning and adaptive value. This means not only that some cognitive abilities are bound to their ecology, but that our entire cognitive apparatus is moulded by the ecology in which it developed and geared around it. To put it in somewhat scholastic terms, the faculty of cognition tends towards its proper object. Here, it is worth quoting Bakker’s lengthy yet sharp summary of this predicament in his review of Pinker’s Enlightenment Now:

Human intentional cognition neglects the intractable task of cognising natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense, it is constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.[6]

And so we arrive at the possibility of the “collapse” of intentional cognition: what if the ecological invariants that have sustained the adaptiveness of our categories—or, beyond that, the very meaning with which we relate to our world—fail to obtain? This collapse of meaning is what Bakker dubs the semantic apocalypse: the threshold at which the last thread connecting our world with its outside is severed; more generally, the threshold that separates adaptive from non-adaptive behaviour. But Bakker’s cheerful recounting of the facts does not end here.

Presumably, intentional cognition depends on the possibility of making accessible to consciousness at least a part of the information processed in the brain. That is, intentional cognition depends, via conscious access, on the recursive accessibility of a limited proportion of all neurally available information. The Blind-Brain Theory of consciousness (BBT) suggests that the possibilities available to meta-cognition are hence limited in a structural manner—that there is a strong threshold to the information that can possibly be made accessible to consciousness, and with it to intentional cognition. Meaning, then, becomes incompatible with the natural, insofar as its conditions of possibility require the reduction of its environment to a low-dimensional caricature. Such a low-dimensional cartoon is imposed by the structure of consciousness itself, condemned as it is to what Thomas Metzinger has called “transparency”:[7] the self’s inability to perceive the model it produces of the world as a model. Under these conditions, the semantic apocalypse becomes the result of an asymmetry between the static loop of intentional cognition and its rapidly evolving environment. The specifically human cognitive ecology simply cannot keep up: it has lost the very ability to adapt to its environment.

Bakker’s assumptions make complete sense in evolutionary terms. In fact, one cannot avoid the thought that the relevance and the availability of information within human cognition stand in inverse proportion: the more critical for survival, the more likely it will simply be assumed (i.e. not made recursively available). It makes sense, then, not only that this type of information is unavailable to the accessible space of the self-model, but that it is constitutively incapable of entering it. The resulting picture is that of an entity which, in order to optimally replicate itself, has developed a fundamentally limited cognitive range (the information that it is able to process through its self-model) plus a constitutive incapacity to access that very information. Nature has thus got its way: it has assembled a cognitively dependent entity while only presenting it with limited relevant information. As a result, the actual behaviour of the entity is meticulously controlled. Its agency range (the extent to which this entity is able to intentionally intervene in its environment) is always, at best, its cognitive range or epistemic threshold—the point beyond which information cannot be integrated into the self-model. The semantic apocalypse represents the point in history where such a threshold is crossed for good.

This presents us with a cognitive model of the organism whose fundamental value is the closure of the system. Nature is endowed with the capacity to 1) make it constitutively impossible for the system to reach any information that is not relevant to its reproduction (consistent with the evolutionary premise), and above all 2) maintain this closure uninterruptedly. Notice the crucial move that has been made by BBT: structural inaccessibility implies a static threshold, and a static threshold implies constitutive inadaptability. But where is this threshold? One may be tempted to negate the second condition. Is there an in-principle limit to adaptation? Put another way: is this threshold truly static, or could it be constitutively dynamic, capable of being indefinitely moulded by the sensory stream, condemned to follow the whims of its outside? I suspect the latter is the case.

Neural Darwinism

The question of the ecological determinants of our cognitive abilities boils down to function: what was the function that those abilities exercised in the environmental context in which they displayed differential reproductive advantage? This is all well and good. And yet, this way of framing the problem seems pitched at too high a level. We are examining complex categories in already complex environments and surmising the whole set of biological invariances that support them. This procedure, then, does not preclude a question about the neural structures and mechanisms that make possible the very dynamic development of such categories. It seems intuitive to jump from the existence of an ecological niche to the acquisition of the cognitive abilities that allow its exploitation. Rarely does the neural black box in between ever get opened. Let’s do that.

The fact that we are dealing with a black box has not gone unnoticed by the early practitioners of cognitive science. In fact, a way out readily presents itself. Assume a task—say, face recognition. Subdivide that task into multiple subtasks (such as calculating feature proportions or inducing lighting invariances across time). Finally, make the brain compute given outputs, like the identification of a face, from the aforementioned subtasks. In a word, the brain would map some input information (e.g. feature proportions) to some output information (“This is Barry”). By interpreting the problem in informational terms, one gets rid of the issue of the actual biological mechanisms that underpin a process of such complexity. This is what the psychologist David Marr did in his now classic Vision,[8] where he distinguished between three separate levels of analysis in a process like this (a toy sketch follows the list below):

  1. The computational level is the one we have just been talking about;
  2. The algorithmic level involves the manner in which an information-processing system represents to itself its inputs and outputs, as well as the transformations required to go from one to the other;
  3. And, finally, the implementational level, that is, “the details of how the algorithm and representation are realized physically”.[9]
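To make the computational-level framing concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the feature vectors, the `KNOWN_FACES` table, the names); the point is only that, at Marr’s first level, face recognition is specified as a bare input–output mapping, with the algorithm and the physical implementation left entirely open.

```python
# Marr's computational level as a bare input-output mapping.
# All data and names here are hypothetical illustrations.

# Subtask: reduce a face "image" to a vector of feature proportions.
# We simply pretend the image already comes as such a vector.
def feature_proportions(image):
    return tuple(image)

# What is computed: a mapping from feature proportions to identities.
KNOWN_FACES = {
    (0.42, 0.31, 0.27): "Barry",
    (0.38, 0.35, 0.27): "Alice",
}

def identify(image):
    """Computational level: *what* is computed, not *how* neurons,
    or silicon, or anything else would actually compute it."""
    return KNOWN_FACES.get(feature_proportions(image), "unknown")

print(identify([0.42, 0.31, 0.27]))  # -> Barry
```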

Notice how whole strata are cashed out in an entirely top-down fashion: given a task, we can then think of the physical “details” that would implement it. In this framework, it is assumed that typologies and categories of the physical world are amenable to processing in a program-like manner. In its more extreme versions, like Chomsky’s Rules and Representations,[10] extremely complex objects, like the rules of syntactic structure in natural language, are simply posited to map onto corresponding neural structures.

But implementation comes back to haunt us. For starters, the brain is far removed from the one-to-one wiring required for such computational tasks. Neural variability runs rampant. And even if we assume that neural structures are fixed, pharmacological studies (ancient by now) have shown how these repetitive structures can use multiple neurotransmitters or display chemical heterogeneity at different locations.[11] What is more, if we examine even the most elementary psychophysical tasks (those that have to do with the way our mind processes the physical world), it quickly becomes evident that they are not accomplished by a unitary neural structure but by a plurality of them, even for one and the same task.[12]

These arguments converge around a simple idea: the brain is composed of a population of structures that do not allow for one-to-one mapping. The immunologist and neuroscientist Gerald Edelman took this insight in the eighties and used it to develop a theory about the mechanisms underlying the brain’s functional organisation, called the theory of neuronal group selection (TNGS), which will be the main object of our discussion in this section.[13]

TNGS explains the origin of neural categories via the selection of variant groups of neurons. As such, it inherits the classical three conditions of any general process of selection: 1. variability in the population; 2. some mechanism of inheritance; 3. some differential capacity for reproduction (what is canonically referred to as “fitness”).[14] Ironically, these three requirements function like an algorithm that allows for their implementation in an immense array of systems. If such an algorithm is provided with the characteristics of a population, such as the distribution of traits or the reproduction rate, it generates selection dynamics in biological, computational and, as we shall see, neural substrata. These selection dynamics can be typified in multiple ways, but the three basic types are directional, disruptive and stabilising selection (see Figure 3, and the sketch that follows it). It is worth pointing out, however, that selection dynamics do not automatically imply the emergence of evolution, especially in the biological domain. This is because most of the time selection processes contribute to the trajectory of the population rather than determine it. Depending on its complexity, the evolutionary landscape may, for instance, face severe topological constraints that restrict the available evolutionary space.[15] But, in other cases, selection may act as a stabilising force that halts the evolutionary process (see Figure 3).

Figure 3. The three main types of selection processes.[16] Stabilising selection represses the extreme traits of the population; disruptive selection promotes the differentiation of the extremes; while directional selection occurs when only one of the extremes is favoured.
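The three regimes are easy to exhibit in a toy simulation. The following Python sketch implements the bare selection algorithm on a population of scalar traits; the fitness functions are illustrative assumptions of mine, not anything prescribed by the theory, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, generations=100, n=1000, sigma=0.05):
    """Iterate the bare selection algorithm: variation (noise),
    inheritance (copying), differential reproduction (fitness-
    proportional sampling of parents)."""
    traits = rng.normal(0.0, 1.0, n)                  # initial variability
    for _ in range(generations):
        w = fitness(traits)
        parents = rng.choice(traits, size=n, p=w / w.sum())
        traits = parents + rng.normal(0.0, sigma, n)  # heritable variation
    return traits

regimes = {
    "directional": lambda x: np.exp(x),        # favour one extreme
    "stabilising": lambda x: np.exp(-x ** 2),  # repress both extremes
    "disruptive":  lambda x: 0.1 + x ** 2,     # favour both extremes
}

for name, fitness in regimes.items():
    pop = evolve(fitness)
    print(f"{name:12s} mean={pop.mean():+5.2f} spread={pop.std():.2f}")
```

Directional selection drags the mean off in one direction; stabilising selection collapses the spread; disruptive selection inflates it as the population splits towards the extremes.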

TNGS is somewhat opposed to the usual, intuitive notion of selection acting from generation to generation, because it understands the brain as undergoing an iterated process of somatic selection, i.e. selection occurring within the individual organism. In the theory, the brain reflects population-like variability in the formation of two types of neuronal groups or repertoires. The first, or “primary”, repertoire consists of the variable wiring that emerges in the process of development (during the embryological stages, for example), which forms part of the conspecific neuroanatomy of a given species. Such wiring, however, varies wildly from individual to individual. This is where the first kind of neural selection, developmental selection, makes an appearance, and its product is the variant neuroanatomical structure of each individual. An organism uses its primary repertoire as a basis for engaging in a multiplicity of behaviours throughout its lifecycle. The connections within and among these neuronal groups are then strengthened or weakened according to those behaviours, roughly following Hebb’s rule: neurons that fire together, wire together—more specifically, variant groups that fire together, wire together. Such selective weighting of synapses dependent on experience forms the second type of selection, experiential selection, whose product is the “secondary” repertoire. Somatic selection, then, occurs on these two types of neuronal groups, which constitute the basic units of selection in the theory.
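Experiential selection admits an equally compact sketch. Below, a toy Hebbian update strengthens the connections among a set of co-active neuronal groups and lets unused ones decay; the group count, learning rate and decay are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_groups = 8
# A stand-in for the primary repertoire: variable initial wiring.
weights = rng.normal(0.0, 0.1, (n_groups, n_groups))
np.fill_diagonal(weights, 0.0)

def hebbian_step(weights, activity, lr=0.1, decay=0.02):
    """Groups that fire together, wire together; idle synapses decay."""
    coactivity = np.outer(activity, activity)
    np.fill_diagonal(coactivity, 0.0)
    return (1 - decay) * weights + lr * coactivity

# Experiential selection: a recurring behaviour co-activates groups 0-3.
pattern = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
for _ in range(50):
    weights = hebbian_step(weights, pattern)

within = weights[:4, :4][~np.eye(4, dtype=bool)].mean()
between = weights[:4, 4:].mean()
print(f"within co-active cluster : {within:.2f}")   # strengthened
print(f"between clusters         : {between:.2f}")  # decayed toward zero
```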

The very notion of neuronal group variability, however, depends on the assumption of degeneracy: the fact that for each function there is a multiplicity of neuronal groups that are able to carry it out.[17] Degeneracy, then, entails that “some non-isomorphic groups must be isofunctional”.[18] An example is illustrated in Figure 4. Assume we assign a task of signal recognition to two types of neural repertoires: one where there is no degeneracy—hence there is one-to-one mapping—and one where there is. Under these simple conditions, the non-degenerate neuronal group will fail to adequately recognise the input signals. And there is a simple, albeit rather unintuitive, reason why.

Figure 4. Degeneracy in the mapping between cell groups and signals.[19] In the case with no degeneracy, each cell group maps onto a unique signal, leading to failure in the recognition task. For the degenerate repertoire, however, contravariance leads to an increased success rate.

As the number of neuronal groups that could intervene in solving a complex task (in this case, recognising a multiplicity of signals) is higher than in the recognition of one signal per group, the number of dimensions relevant to the problem increases exponentially. One may quickly infer that this makes matters worse for the repertoire involved. It would seem that the number of dimensions increases in proportion to the difficulty of the problem: the more available solutions, the more factors likely to be involved in finding them. However, this is not the case. Put in evolutionary terms, the number of constraints is, in fact, inversely proportional to the difficulty of the computational task. Think of it this way. If you only have a hammer, the likelihood of being able to fix a simple breakdown is relatively lower than if you possess a set of tools, even when the target problem is much more complex. This is because the number of constraints increases the number of available solutions that can be creatively explored and selected. This is what Cao and Yamins call the contravariance principle,[20] which applies both to neural systems (like brains) and to artificial neural networks.
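In the spirit of Figure 4, here is a toy Python comparison of a one-to-one repertoire with a degenerate one on a noisy recognition task. All the numbers (dimensionality, noise levels, group counts) are assumptions chosen for illustration; the point is simply that many variant, non-isomorphic groups per signal can outperform a single dedicated detector.

```python
import numpy as np

rng = np.random.default_rng(2)

dim, n_signals = 10, 5
prototypes = rng.normal(0, 1, (n_signals, dim))   # the signals to recognise

def make_repertoire(groups_per_signal, tuning_noise=0.8):
    """Each cell group is an imperfect, variant detector of one signal:
    non-isomorphic (different weights) but isofunctional (same target)."""
    groups, labels = [], []
    for s in range(n_signals):
        for _ in range(groups_per_signal):
            groups.append(prototypes[s] + rng.normal(0, tuning_noise, dim))
            labels.append(s)
    return np.array(groups), np.array(labels)

def accuracy(groups, labels, trials=2000, input_noise=1.0):
    hits = 0
    for _ in range(trials):
        s = rng.integers(n_signals)
        signal = prototypes[s] + rng.normal(0, input_noise, dim)
        best = np.argmin(np.linalg.norm(groups - signal, axis=1))
        hits += labels[best] == s          # the best-responding group answers
    return hits / trials

one_to_one = make_repertoire(groups_per_signal=1)   # no degeneracy
degenerate = make_repertoire(groups_per_signal=10)  # degenerate repertoire

print(f"one-to-one accuracy: {accuracy(*one_to_one):.2f}")
print(f"degenerate accuracy: {accuracy(*degenerate):.2f}")
```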

Degeneracy therefore provides the link between function and selection, connecting repertoire variability with a target for successful deployment. But we still need one final ingredient. To carry out these functions, the primary and secondary repertoires require maps. Both are composed of an enormous array of parallel and reciprocal connections, which form robust clusters of connectivity across the neural architecture. When a signal is processed in the brain, it is recursively passed and transformed through these clusters, in a process of reentry (see Figure 5, and the sketch after it). Reentry is the process that allows for the robust linkage between the selection of neuronal groups and target functions. Via reentry, top-down constraints emerge as the result of topographical connectivity—and given the massive interconnectivity between multiple neuronal groups, selection of multiple repertoires can happen in parallel. An important characteristic of the reentry of signals is that the dynamical flow of successive neural maps produces new types of signals, signals which may not have an outside origin and can be made recursively available to cognition. We see the possibility of complex levels of organisation emerge and, above all, the recursive availability of a limited proportion of the total incoming signals.

Figure 5. The general schema of reentry in terms of a classification couple.[21] An input is sampled by two independent networks—a detector and correlator of features—which the brain relates via mutual mapping. In this sense, the signal is “reentered” through the totality of the neuronal group.
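A classification couple of this kind can be caricatured in a few lines of Python. Two randomly wired maps (standing in for the feature detector and the feature correlator) respond to the same inputs, and a Hebbian reentrant mapping is built between their co-firing units; afterwards, activity in one map alone recreates a fair guess of the other’s response: a signal with no outside origin. Every parameter here is an illustrative assumption of mine, not something fixed by Edelman’s theory.

```python
import numpy as np

rng = np.random.default_rng(3)

dim, units = 12, 20
W_a = rng.normal(0, 1, (units, dim))   # map A: "feature detector"
W_b = rng.normal(0, 1, (units, dim))   # map B: "feature correlator"
reentry = np.zeros((units, units))     # reentrant connections A -> B

def respond(W, x):
    """Thresholded response of a map to an input signal."""
    return (W @ x > 0).astype(float)

# Build the reentrant mapping: units of the two maps that co-fire
# on the same inputs become mutually connected (Hebbian).
for x in rng.normal(0, 1, (500, dim)):
    a, b = respond(W_a, x), respond(W_b, x)
    reentry += np.outer(b, a) / 500

# Reentry in action: map A's activity alone now recreates a decent
# guess of map B's activity for a new input.
x_new = rng.normal(0, 1, dim)
a = respond(W_a, x_new)
b_true = respond(W_b, x_new)
scores = reentry @ a
b_guess = (scores > np.median(scores)).astype(float)
print(f"agreement with B's direct response: {(b_guess == b_true).mean():.2f}")
```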

If this is starting to sound a lot like Bakker, that’s because it is. The difference being that, under TNGS, we are also able to account for the genesis of the categories which structure cognition—and to explain what adaptiveness means in neural terms. Categories, then, are contingent clusters of neuronal groups which are reentrantly selected on the basis of functional deployment. Precisely for that reason, however, they are dynamic assemblages open to both developmental and experiential transformation. Via the reentry of signals along neuronal groups, we can explain the formation of recursive structures in the brain—and, with it, the origins of limited informational availability. Compared to BBT’s static epistemic threshold, TNGS provides us with a dynamic threshold that is constantly reshaped by the transformation of its inputs. In the end, BBT seems all too top-down.

What about the pay-off? Well, in a way, Bakker’s radical proposal hath given what it hath taken away. Thought through like this, the conclusion is almost inescapable: the human cognitive apparatus is doomed to disaster, apocalypse even. But the reason for this is that it is too strong; it opposes perfect resistance to its outside, exercises an absolute closure able to withstand all the forces of nature. It is, in a sense, supernatural. Bakker’s ecological determinism, it seems, simply had to be intensified, made to point towards a cognitive hyperecology where the environment does, indeed, hold all the cards. Who would have thought that the human cognitive assemblage would win? Examined closely, the seemingly omnipotent, abstract recursive system of consciousness dissolves in the acid of neural selection. And if the human recursive system shows resistance, all the worse for the human recursive system.

This essay could stop here. But I take it that some speculative implications follow. We have shown how the human cognitive assemblage, via neural Darwinism, is constantly subjected to selection from its outside. This process does not happen sporadically; it is uninterruptedly sustained, insofar as it results from the very mechanism that underpins the brain’s functional organisation. A question begins to impose itself: what can we say about a system that is constantly on the verge of selection from its outside? And, beyond that, if such a system has lost all identity, if it has become nothing more than an expression of its deterritorialisation, could it, like a mirror, allow us to sustain inferences about the outside itself?

A Direction of Selection

We have alluded to the fact that the brain is constantly being selected by its outside. From a certain perspective, this is incorrect: the brain is rather constantly selecting itself in its successive changes. Given the neural architecture that characterises it, each signal is reprocessed and transformed, every step of the way, across the network. Take vision as an example: naked photons cannot intervene functionally in any nervous system. In fact, the visual cortex is hierarchically “layered” into five separable areas—V1, V2, V3, V4 and V5—with a range of functions, from relatively simple to increasingly abstract ones. V1 (the primary visual cortex), for instance, is specialised in edge detection, while V5 holistically integrates information received from the other areas. But before it is even taken in by the cortex, information needs to be transformed into electrical impulses by the retina and projected to the lateral geniculate nucleus.[22]

All this makes it seem as though the visual system were something akin to a “feature detector” that picks up salient characteristics of the environment and synthesises them into the visual scene. The understanding of perception as feature detection is predicated on the idea that processing occurs unidirectionally, that is, that the projection of neural signals only happens outwards, towards the cortex and other areas associated with higher brain function. But this is not the case: the brain does not restrict the direction in which projection occurs across the cortical hierarchy. In the visual cortex, signals are transmitted not only from V1 to V5, but also from V5 to V1.

As an illustration of the concept, consider Figure 6. We operate under the assumption that light brightens the colour of surfaces. As a result, when a shadow is cast over a surface, we expect its colour to darken. This is a perfectly reasonable heuristic, encoded in the way our expectations condition the very sensations we experience. Of course, this is a case where our expectations fail to deliver on reality, but it is precisely for this reason that it pushes to the fore the kind of high-level shortcuts our brain uses all the time: we don’t passively wait for stimuli to feed the visual scene; rather, the brain tries to predict its sensations on the basis of previously stored information about the world.

Figure 6. The checker shadow illusion.[23] The squares A and B are the same shade of grey. The brain is led, via visual cortex, to expect that the shadow cast on B dims its colour.

This idea that the brain is engaged in a constant process of predicting its own sensations is called predictive processing (PP). In PP, all the neural maps we have talked about encode massive probability distributions which model the brain’s entire environment—past, present and future. In perceiving, the brain takes its own model of sensations as input. But such a probability distribution over the stimuli and their causes must be massive, because the environment is an equally daunting beast. This becomes especially problematic if we assume that perception is so self-contained. Constantly varying in both space and time, and rabidly non-linear—how is the brain to get a handle on such an environment? How is it to acquire the approximately correct assumption that shadows dim colour, for example? Put more generally, how does a system like that learn?

One might (very reasonably) ask what the role of the entire perceptual system even is, if we perceive by predicting. The answer is that the stimuli provide feedback on the brain’s assumptions about the world. Feedback is the way the brain updates its model of the environment and improves its predictions of future sensations. The senses, then, inform of prediction error, which encodes the divergence between the model that the brain has of its environment and the actual input that the environment provides. In short, the brain adapts dynamically to its environment via the feedback provided by prediction error. What emerges is a dual functional architecture of the brain, structured around the asymmetry between prediction (feedforward) and prediction error (feedback). This constant exchange between prediction and error units constitutes a cycle that spans the entire network of the brain (see Figure 7).
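A single level of this cycle can be sketched in a few lines of Python. A hidden environmental regularity produces noisy sensations; the model’s prediction is corrected, step by step, by prediction-error feedback. The learning rate and noise level are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

hidden_cause = 0.8   # an environmental regularity to be learned
prediction = 0.0     # the brain's current guess (a "state unit")
lr = 0.1             # how strongly error feedback updates the model

errors = []
for _ in range(200):
    sensation = hidden_cause + rng.normal(0, 0.1)  # noisy stimulus
    error = sensation - prediction                 # prediction error (feedback)
    prediction += lr * error                       # update the prediction
    errors.append(abs(error))

print(f"final prediction            : {prediction:.2f}")
print(f"mean |error|, first 20 steps: {np.mean(errors[:20]):.2f}")
print(f"mean |error|, last 20 steps : {np.mean(errors[-20:]):.2f}")
```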

This brain-spanning network is formed of clusters of neuronal groups, which in turn form the reentrant maps that support the complex architecture needed for correspondingly complex functions. There is thus a convergence of TNGS and PP along a generalised selectionism, both at the implementational (TNGS) and the algorithmic level (PP). To see this, we must return to the idea that the brain encodes a probability distribution over its environment. We mentioned that the non-linearity of environmental processes poses a challenge for the brain, insofar as there is always a multiplicity of causes that might explain the stimuli. One of the reasons why a probability distribution is a useful tool for modelling these cases is that it allows the brain to weight a manifold of causes that may map onto the observed phenomenon. Neurally, this corresponds to the strength of the synapses that make up the secondary repertoire and form the units of experiential selection. In this context, feedback is the part of the mechanism that adapts the probability distribution in the light of experience.
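The weighting of causes can be made concrete with a toy Bayesian update, one standard way of formalising this part of PP. The candidate causes, noise level and observations below are all hypothetical; what matters is that sensory feedback re-weights the distribution over causes, just as experiential selection re-weights synapses.

```python
import numpy as np

# Candidate hidden causes of a stimulus, and the brain's prior weights
# over them (the probabilistic analogue of synaptic strengths).
causes = np.array([0.0, 0.5, 1.0])
weights = np.array([1 / 3, 1 / 3, 1 / 3])   # initially indifferent

def likelihood(observation, causes, noise=0.2):
    """How strongly each candidate cause predicts the observation."""
    return np.exp(-((observation - causes) ** 2) / (2 * noise ** 2))

# The sensory stream: feedback that re-weights the manifold of causes.
for obs in [0.9, 1.1, 0.95, 1.05]:
    weights = weights * likelihood(obs, causes)
    weights = weights / weights.sum()        # renormalise the distribution

for c, w in zip(causes, weights):
    print(f"cause {c:.1f}: weight {w:.3f}")  # mass concentrates near 1.0
```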

Figure 7. Modified representation of PP’s functional architecture across cortical regions, from the deepest (R1) to the most superficial (R3).[24] Rightward arrows stand for bottom-up projections sent from the darker “error units” towards superficial regions. Leftward arrows depict top-down signals emitted from the lighter “state units”—what I previously called “prediction units”—towards deeper regions of the brain. The triangles represent pyramidal cells that send the predictions, while the circles represent inhibitory neurons (by inhibiting top-down projections, they correct and modulate their contribution). As we can see, the duality of prediction and prediction error is replicated at every level of processing.

Prediction error, however, is in reality an approximation of the quantity through which the brain encodes the divergence between the environment and its model. A more accurate quantity is surprisal, a concept that originates in statistical physics and measures the negative logarithm of the probability of an event, given a model of the world in which it occurs:[25]

$$h(r) = -\ln P(r)$$

where $h$ is the surprisal and $P(r)$ the probability of event $r$. This means that as the probability of an event decreases, its surprisal increases. Surprisal, moreover, can be averaged over events, weighting each by its probability: how “surprised” I am in total, on this assumption, is the weighted sum of how “surprised” I am by each event separately. The key is that, under these conditions, the average of surprisal $h$ is the entropy $H$:[26]

$$H = \sum_r P(r)\,h(r) = -\sum_r P(r)\ln P(r)$$

The brain, in this picture, minimises overall prediction error insofar as its total predictive projections produce a better grasp of future stimuli. If prediction error minimisation applies to the brain in its entirety, that means that the brain, at every level of organisation, is driven by the minimisation of entropy, through the minimisation of surprisal. Of course, specific levels in the hierarchy may increase surprisal or prediction error. The key is to bear in mind how the overall minimisation of prediction error nonetheless defines the general direction of activity in this iterative process—like an underlying structural cause, responsible for a global pattern of activity. The cyclic processing of prediction and prediction error leads to an overall asymmetrical process, characterised by the minimisation of surprisal and, ultimately, of entropy.
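A small numerical example, with made-up distributions, shows what is at stake: a brain whose model better matches the statistics of its environment incurs lower average surprisal on the sensory stream, with the environment’s own entropy as the floor.

```python
import numpy as np

rng = np.random.default_rng(5)

# True statistics of the environment over four event types (made up).
environment = np.array([0.7, 0.1, 0.1, 0.1])

# Two internal models of those statistics.
poor_model = np.array([0.25, 0.25, 0.25, 0.25])
good_model = np.array([0.65, 0.12, 0.12, 0.11])

events = rng.choice(4, size=10_000, p=environment)  # the sensory stream

for name, model in [("poor model", poor_model), ("good model", good_model)]:
    avg_surprisal = (-np.log(model[events])).mean()  # h(r) = -ln P(r), averaged
    print(f"{name}: average surprisal = {avg_surprisal:.3f} nats")

# The floor of average surprisal is the environment's entropy H.
H = -(environment * np.log(environment)).sum()
print(f"entropy of the environment   = {H:.3f} nats")
```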

We arrive at a crucial result: both at the cognitive and the neural level, the brain is functionally structured for neural selection. This structure results in a process which consists, given its architecture, in the minimisation of surprisal—i.e. the minimisation of entropy. But then again, this twinning of abstract selection processes and the minimisation of entropy should not surprise us. Selection dynamics emerge when there is an iterated sampling of a population, which results in a progressive evolution of the population’s characteristics. Entropy, in its classical formulation, measures the proportion of energy involved in a process that cannot be converted back into mechanical energy—that is, energy that cannot be cyclically reinserted into the process (this is the Carnot cycle). In both cases, we are dealing with measures of irreversibility that apply to the evolution of spaces of possibilities: the bigger the proportion of the population that does not replicate—the bigger the proportion of energy that cannot be fed back into the cycle—the higher the irreversibility of the process.

In this sense, selection processes are structured to occur in the direction of entropy minimisation. Of course, this does not mean that they fully compensate for the increase in entropy prescribed by the second law of thermodynamics; the minimising direction is rather a property of the structure of the process, part of the way it is set up, which establishes the conditions for its occurrence. One way to see this is through the relationship between optimisation processes and selection. Although selection processes usually do not lead to an outcome that can be considered absolutely optimal in any meaningful sense, they are driven by processes of optimisation—like a drive that pulls the system at every instant but does not follow any long-term teleology. Optimality is a global property; optimisation is a local process. Similarly, the minimisation of entropy, via surprisal, via prediction error, is the drive that directs the selection process, even though it need not lead to a total compensation of the expected increase in entropy.

We are now in a position to offer a more complete picture. The brain is an assemblage of degenerate neuronal groups whose distribution evolves dynamically as a function of selection. At the level of its functional organisation, this process is expressed as the minimisation of surprisal, via the constant selection that feedback from the sensory stream performs on the probabilistic model. And since average surprisal is a measure of entropy, we find that the brain is geared towards the minimisation of entropy, in which we find the ultimate cause that shapes its very functional organisation.

The Outside, Naturalised

It’s time to wrap things up. Let’s come back to the very first quote of this essay, where Land posited the criterion that, in order for the Real to have any sort of effective intervention, it must display a selective function. That is precisely what we have arrived at. We have shown how, by enacting a generalised selective mechanism at both the algorithmic and the implementational level, the brain fulfils the requirement of an entity that effectively mobilises its Outside.

Notice, however, that we are in a fundamentally new scenario. If for BBT the epistemic threshold is fundamentally rigid and static, incapable of being overcome by the human or reshaped by its experience, TNGS renders such a threshold dynamic, and thereby functionally open to its outside, which can then do whatever it may with the recipient (us). Once we take this insight into consideration, we begin to see an important point of convergence between Land and Bakker. For them, the Outside plays a structural role in delimiting what it means to be human, and what the human can thus do. The human may change, the human may reterritorialise, and the Outside may be pulling the strings behind the curtain, but the Outside can never be named. For Bakker, the Outside is clearly the array of non-intentional processes that determine intentional cognition, and which intentional cognition can never even dream of grasping. For Land, it can only be invoked, alluded to as the grand point of singularity where the transcendental temporal structure converges; any specification of just how it does so is doomed to failure. The Outside functions as a negative counterpart of the positive processes observed: it unites all positive properties, but is thereby incapable of expressing itself in the concrete. Land’s and Bakker’s theory of the Outside thus takes the form of a negative theology, as if it were a remnant of our intentional modes of cognition, functioning by coarse characterisations and low dimensionality.

From this point of view, it does not seem very surprising that accelerationism succumbed to magical thought and resorted to the occultist tradition. The very agent that was meant to be the lever of historical change is taken to be the only real process but, by the same token, is rendered ineffective, explanatorily empty. And indeed, perhaps looking directly at the Outside was too blinding. That is why we have turned around and looked inside, to the effects that the Outside displays in our own constitution. And what we have seen is a generalised selective mechanism that optimises the human assemblage as a function of entropic irreversibility. Looking inwards, we have discovered the Real that determines it: the Real is that which selects.[27]

If we abandon all magical connotations about the selective mechanisms that enact the functional intervention of the Outside, perhaps we can start improving the ways in which we accelerate its bifurcations and optimise the processes of deterritorialisation. We might, for instance, start paying attention to the conditions of the abstract space that allow for the emergence of a time singularity. One question whose answer we have assumed, but which remains very much open, is: is there a single attractor to which all these different selection processes tend—be they neural selection, capitalism, modernity or what have you? Or are there many? Put in terms of the selection dynamics we mentioned before: is the current historical attractor subject to directional or to disruptive selection? Could we even be in a process of stabilisation? What kind of circumstances might favour a bifurcation towards one or the other? And even beyond that: is it possible that this attractor is maintained only under certain sustaining conditions, so that if those conditions are not given, the attractor may be lost with them? But asking these questions implies denying the assumed understanding of the Outside as a negative, totalising opposition to the positive processes we observe happening around us. It implies, on the contrary, taking the Outside as a positive process (or processes) that very much determines them. Only then, I think, will it be possible to appreciate how much resistance to it is not only futile, but impossible, given the workings of nature.

  • 1

    LAND, N., “Edino, kar bi uvedel, je fragmentacija”, in: BAUER, M. and TOMAŽIN, A. (eds.), ŠUM, 7, 2018. English translation available at: https://syntheticzero.net/2017/06/19/the-only-thing-i-would-impose-is-fragmentation-an-interview-with-nick-land/.

  • 2

    BAKKER, S., “Cognition Obscura”, in: Three Pound Brain, 2013, https://rsbakker.wordpress.com/2013/08/04/cognition-obscura-i/.

  • 3

    Wikimedia Commons.

  • 4

    BLOOM, F. E., FLINT BEAL, M., KUPFER, D. J., The Dana Guide to Brain Health, Dana Press, 2006.

  • 5

    MCKONE, E., AIMOLA DAVIES, A., DARKE, H., CROOKES, K., WICKRAMARIYARATNE, T., ZAPPIA, S., FIORENTINI, C., FAVELLE, S., BROUGHTON, M., FERNANDO, D., “Importance of the Inverted Control in Measuring Holistic Face Processing with the Composite Effect and Part-Whole Effect”, in: Frontiers in Psychology, 2013, 10.3389/fpsyg.2013.00033.

  • 6

    BAKKER, S., “Enlightenment How? Pinker’s Tutelary Natures”, in: Three Pound Brain, 2018, https://rsbakker.wordpress.com/2018/03/20/enlightenment-how-pinkers-tutelary-natures/.

  • 7

    METZINGER, T., Being No One: The Self-Model Theory of Subjectivity, Cambridge, MA: MIT Press, 2003.

  • 8

    MARR, D., Vision, San Francisco: W. H. Freeman, 1982.

  • 9

    Ibid., p. 25.

  • 10

    CHOMSKY, N., Rules and Representations, New York: Columbia University Press, 1980.

  • 11

    CHAN-PALAY, V., NILAVER, G., PALAY, S. L., BEINFELD, M. C., ZIMMERMAN, E. A., WU, J. Y., O’DONOHUE, T. L., “Chemical Heterogeneity in Cerebellar Purkinje Cells: Existence and Coexistence of Glutamic Acid Decarboxylase-like and Motilin-like Immunoreactivities”, in: Proc. Natl. Acad. Sci., Vol. 78, 12, 1981, pp. 7787–7791.

  • 12

    INGRAM, V. M., OGREN, M. P., CHALOT, C. L., GASSELO, J. M., OWENS, B. B., “Diversity among Purkinje Cells in the Monkey Cerebellum”, in: Proc. Natl. Acad. Sci., 82, 1985, pp. 7131–7135.

  • 13

    The theory of neuronal group selection is described in many different texts. For an accessible introduction, see Edelman’s Bright Air, Brilliant Fire: On the Matter of the Mind (New York: Basic Books, 1992, ch. 9). My discussion will be mostly based on his monograph on the subject, Neural Darwinism (New York: Basic Books, 1987). For a later paper, see: “Neural Darwinism: Selection and Reentrant Signalling in Higher Brain Function”, in: Neuron, Vol. 10, 1993, pp. 115–125.

  • 14

    Peter Godfrey-Smith has a wonderful monograph in which he analyses these requirements in detail, as well as, more generally, these types of abstract characterisations of evolutionary processes. See: GODFREY-SMITH, P., Darwinian Populations and Natural Selection, Oxford University Press, 2011.

  • 15

    See: KAUFFMAN, S., The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, 1993.

  • 16

  • 17

    For a recent review, see: PRICE, C. & FRISTON, K., “Degeneracy and Cognitive Anatomy”, in: Trends in Cognitive Sciences, 6, 10, 2002, pp. 416–421.

  • 18

    EDELMAN, Neural Darwinism, p. 49.

  • 19

    Ibid.

  • 20

    CAO, R. & YAMINS, D., “Explanatory Models in Neuroscience: Part 2 – Constraint-Based Intelligibility”, 2021, preprint available at: https://arxiv.org/abs/2104.01489.

  • 21

    EDELMAN, Neural Darwinism, p. 62.

  • 22

    The nomenclature and function attribution of some areas is still subject to discussion. Some include areas like V6 or V7, and others include them under other names. For a review of these issues, see: WANDELL, B. A., DUMOULIN, S. O., BREWER, A. A., “Visual Field Maps in Human Cortex”, in: Neuron, 56, 2, 2007, pp. 366–383.

  • 23

    ADELSON, E. H., Checkershadow illusion, 1995, http://persci.mit.edu/gallery/checkershadow.

  • 24

    SETH, A. K., “Interoceptive inference, emotion, and the embodied self”, in: Trends in Cognitive Sciences, 17, 11, 2013, pp. 565–573.

  • 25

    TRIBUS, M., Thermostatistics and Thermodynamics, Princeton University Press, 1961; FRISTON, K., “The free-energy principle: A rough guide to the brain?”, in: Trends in Cognitive Sciences, 13, 7, 2009, pp. 293–301; see also HOHWY, J., The Predictive Mind, Oxford University Press, 2013.

  • 26

    It is worth highlighting the similarity of H with the classical (Gibbsian) entropy $S = -k_B \sum_i p_i \ln p_i$, where $k_B$ is Boltzmann’s constant: again we see entropy measured as a probability-weighted sum over the events that compose the system of interest.

  • 27

    Note that this does not amount to the thesis that natural selection is the only cause of biological evolution. When rendered as the recurring sampling (be it in discrete steps or in a continuous transition) of abstract spaces of possibilities, selection may also apply to conditions like the topological properties of the space. Such topological conditions contribute to what is even possible in the first place, and so they are already included in the population on which selection acts. On the flip side, the products of selection will constitute the future populations whose distribution of traits informs the applicable topological conditions. Biological evolution is thus the result of this interchange between selection and the structure of the population (among other factors).

Bosco García

Bosco García is a graduate student in Philosophy at the University of California, San Diego. His research is mainly on how biological systems, including the brain, are informed by physical principles and modelling. He also maintains an interest in the history of science, particularly of physics, and in how scientific development interacts with processes of a non-scientific nature. Most of his online output is at @_infinitography.