
The Nihil of Time

Unilateral Limiting and the Decomposability of Human Agency

I. Corrosive Constraints

It is natural enough, therefore, that critique is an instrument of dissolution; a regression to conditions—to the magmic power of presupposition—upon which all order floats.

Nick Land[1]

The precedent of humanism as a sign of retrogressive philosophical and cultural agendas, and its corresponding mirroring in the ideologically contaminated waters of long-termism,[2] presses us with the task of sustaining a critical assessment that must be constructed from the grounds of an open yet strategic probe: how can we bootstrap the programmatic dissolution of any given substantiality (i.e. an ideological purview that has no scale-sensitive positionings toward such-and-such conditions) ascribed to a determined yet monotonic human subject as a universal paradigm? In other words, when faced with the isomorphism between humanism and corrupt, reactionary so-called universalist projects founded on an implicit western-centric exceptionalism,[3] we have to reassess both pragmatic and conceptual schemas, necessarily driving us toward a continuum of critical self-effacement.[4] For this task, we will specifically unchain the consequences of the negative temporal chemistry threading out of the project of Kant’s critical philosophy onto the context-specific proposal of inhumanism,[5] from which we can productively decant a dispute of human supervenience that would be antagonistic to the particularities of Bostromian long-termism. Within the context of the aforementioned proposal of inhumanism, human supervenience is to be considered as a set of ordered cognitive structurings that hierarchically position human sapience in regard to outward sense-data influxes and that, in a manner of maximal unforeclosure, from structuring to decomposability, open the gates to de-individuation.

Particularly for the inhumanist proposal, the conception of the transcendental order of space and time that Kant develops can be seen as a corrosive structural constraint that helps to undermine human subjectivity via retrochronical abjection,[6] inevitably locking human sapience back onto the aprioristic transcendental or deep temporality[7] of the inorganic.[8] This means that the finer-grained sense data that predates a coarser-grained[9] state of representational coding and structuration (i.e. the morphogenetic field underlying posterior states of becoming) has a revengeful surfeit[10] on the cognitive structurings of human sapience, resulting in the gaping decomposition of our bioimmunological apparatus[11] into the utmost excess of primordial data-bleed. Thus, under this view, we can posit a decomposability of the human subject, defined here as a top-down approach that begins from a stabilized and maximally representational cognitive layering that gets retroactively underpinned by a destabilizing ground, jeopardizing what we consider to be our “locating beliefs”[12] as rational human agents in the world. Furthermore, the aforementioned strategy of subtraction would also serve to put into view the diagonalization or non-effective computability, following both Cantor and Putnam’s “Diagonal Argument”,[13] of the One or the I when considered to be an inductively confirmed generality within the context of what a universal learning machine is capable of computationally achieving in terms of a closed set of predictive results.

To this end, Putnam contests the apparent infallibility of Rudolf Carnap’s degree of confirmation of a hypothesis, or DC,[14] here applied to the aforementioned set of closed predictive results, as follows: “Let T be any learning machine, and consider what T predicts if the first, second … and so on balls are all red. Sooner or later (if T is not hopelessly weak as a learning device) there must come an n such that T predicts ‘the nth ball will be red’. Call this number n1. If we let n1 be black, and then the next ball after that be red, and the next after that again be red, and so on, then two things can happen. T’s confidence may have been so shaken by the failure of its prediction at n1 that T refuses to ever again predict that a future ball will be red. In this case we make all the rest of the balls red. Then the regularity ‘all the balls with finitely many exceptions are red’ is a very simple regularity that T fails to extrapolate. So T is certainly not a ‘cleverest possible’ learning device.”[15]
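To fix the idea, a minimal computational sketch of the construction Putnam describes might look as follows; the predictor interface and the naive learner used here are hypothetical illustrations of ours, not Putnam’s formalism:

# A minimal sketch, for illustration only, of the adversarial construction
# Putnam describes above; `predict` stands in for the learning machine T,
# mapping the colour history seen so far to a guess about the next ball.

def diagonalize(predict, horizon=20):
    """Build a ball-colour sequence that defeats the predictor `predict`:
    whenever T predicts 'red' the next ball is made black, otherwise it is
    made red. Either T is falsified every time it ventures 'red', or it stops
    predicting 'red' and so fails to extrapolate the remaining all-red run."""
    history = []
    for _ in range(horizon):
        guess = predict(history)
        history.append("black" if guess == "red" else "red")
    return history

# Example: a naive majority-vote learner is defeated by the construction.
def majority_learner(history):
    reds = history.count("red")
    return "red" if reds >= len(history) - reds else "black"

print(diagonalize(majority_learner))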

The diagonal argument put to work by Putnam undermines an inadequate framework of explication that extracts its short-sighted conclusions from a minimal informational pool, using the example of a hypothetical universal learning machine and bringing to the forefront an initially contingent drift brought about by the inclusion of further evidence. The appearance of a contingent drift can help us illustrate how a compressed picturing of human subjectivity, unmoored from a coarser-grained or minimal yet inadequate framework, will eventually lead to a chasm when faced with a finer-grained and complexity-laden framework of explication, as highlighted above. Taking this into account, we can take a detour toward a brief observation Nick Land makes about the diagonal argument, with the idea in mind that a formalist inductive method such as Rudolf Carnap’s is too restrictive to consider other diverging frameworks that would be open to the influx of evidence in excess. In our case, this evidence in excess corresponds to the temporal and spatial transcendental order theorized by Kant that, when seen under the purview of Cantorian mathematics[16] underlying the diagonal argument, leads toward the application of non-denumerable[17] sets of intensive magnitudes:[18] “Cantor systematizes the Kantian intuition of a continuum into transinfinite mathematics, demonstrating that every rational (an integer or fraction) number is mapped by an infinite set of infinite sequences of irrational numbers. Since every completable digit sequence is a rational number, the chance that any spatial or temporal quantity is accurately digitizable is indiscernibly proximal to zero.”[19]
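As a minimal gloss of our own on the mathematics Land gestures at (standard set-theoretic and measure-theoretic facts, not Land’s own formalism):

ℚ is countable, while ℝ is uncountable (Cantor’s diagonal argument); hence for any atomless measure μ on ℝ, μ(ℚ) = Σ_{q∈ℚ} μ({q}) = 0.

A magnitude drawn from a continuum is therefore, with probability one, not a rational number, i.e. not expressible as a finite, completable digit sequence.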

Therefore, in order for us to posit a complexity-laden framework, we would need to consider that such a framework must be open to the influx of divergent and initially incompatible evidence that would hazardously contrast with the conventional and shorthand evidence taken to be a given within a restricted framework. In turn, this contrast would initially show itself as contingent but would ultimately end up being a reconstructive contrast, one that would oblige us to inhabit a far richer language than the formalist one proposed by Carnap’s DC (degree of confirmation), while encapsulating the inclusion of transcendental conditions of the type of the space-time order[20] and, although not endorsed by Putnam, the implicit idea that the application of intensive magnitudes corresponding to the space-time order can be seen as a corrosive yet necessary constraint on human agency.

II. Astronomical Enclosement

When weighing in on the latter, we must also consider the seedy arguments posed by long-termism: even when they picture, to a certain capacity, the corrosive constraints of the space-time order on human agency, these arguments carry a grandiose teleological aftertaste that precisely seeks to counter the catastrophe-laden consequences of transcendental temporality in the name of the conservatorship of human supervenience. In what could be considered the founding move of long-termism, “Astronomical Waste”, Nick Bostrom commits the two-fold sin of exaggerating the role of the human subject against the backdrop of an explicit authoritarian project of spatial colonization: he begins by examining the wastage of human potential when put against the ruthless and maximally entropic asymmetry of temporal scales, ultimately facing the loss of capitalizable opportunities at every turn of the clock,[21] whilst continuing the boastful and uncritical expansion of the western human subject toward the very subjection of the stars.

In regard to the latter, Bostrom writes: “We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost.”[22] And even more bluntly, reaching the following bioimmunological conclusion: “Clearly, avoiding existential calamities is important, not just because it would truncate the natural lifespan of six billion or so people, but also—and given the assumptions this is an even weightier consideration—because it would extinguish the chance that current people have of reaping the enormous benefits of eventual colonization.”[23]

Accordingly, if we stop to examine the bare-bones utilitarian rhetoric of Bostromian long-termism, we will be unsurprised to find the most erratic display of the degree-of-confirmation framework, which, as previously mentioned, takes a minimal pool of information as its basis and which in this case is ascribed to a determined type of human agency (i.e. a predatory colonial western subject), with the intention of acridly grounding an inflationary conception of the potentialities and statistical growth of human intellect when projected toward the future. Nevertheless, the apparently firm prognosis that corresponds to the speculative progress of the western subject in time becomes voided by disregarding any kind of complexifying or open conclusions that would have at their center paradigmatic counter-evidence: contemporary theories in the field of physics that have put forward the caustic hypotheses of a symmetry of temporal scales and backward causation,[24] endangering the predominant and supposedly determinate phenomenological perspective of human agency that delves exclusively within asymmetrical temporal scales,[25] and, most importantly, the explicit muting of cosmic trauma that is precisely brought to the fore by the overwriting of transcendental temporality over our own whimsical and fairly lackluster empirical givenness within the indifferent contingent flow of the universe.

In light of this, to avoid the outlandish assertions of long-termism and pick apart its embeddedness in the suppurating drawl of inert enlightened humanism and its ideological vows to contemporary technocapitalism, we are faced, as mentioned at the start of this writing, with the necessity of sketching out a critical apparatus. Considering what we have previously observed, this critical apparatus will have as its starting point the overlapping of diagonalization or non-computability with the ramifying consequences surrounding the question of time as we have found them at the edges of inhumanism. This overlapping must venture outward to its own logical terminus, where we would embrace a horizon event that relishes the annihilation of any fixed locating beliefs corresponding to a poorly substantiated “I”. To put it shortly, the imperative scalpel we intend to employ has at its mire the unfeasibility of closure made explicit within the monadic play of mirrors that is long-termism, the effects of which would mean the hard limiting of human agency to the potentialities found in the virtual structuring of the transcendental space-time order.[26] If we can identify the hectic trajectories to which we are led by the irruption of the above-named virtual structuring, while considering how this structuring gets traumatically embodied in the actual as a factor of decomposability, we can then overcome the plighted biases of a single-sided frame of a rigidly enclosed human subject.

For this, we must highlight and recontextualize one of the main tenets of inhumanism: the convergence toward the abstraction of time in-itself, a tenet that in turn must become a reconfiguring and dialectical fixture of the human subject, while also considering the perceptual and conceptual irreversibility of the ripples produced by the collapse and further immersion toward and inside the space-time order (in the lingua franca of Landianism: k-space) onto our empirical selves:[27] “In this sense K-space plugs into a sequence of nominations for intensive or convergent real abstraction (time in itself): body without organs, plane of consistency, planomenon, a plateau.”[28] From this passage we can extract the following thought: the convergence toward the abstraction of time in-itself can be explicated as the unraveling of a clash between intensive and extensive qualities, one that becomes traversable when unmaking and re-arranging a given state of things into a dynamic refiguring toward a yet-to-be-known state of things. Put differently, according to the vocabulary belonging to the rationalist continuum leading from Spinoza to Deleuze, the weaving between real and numerical distinctions[29] can be used to implement the Kantian-critical purview of temporality and the measuring of its effects by way of intensive qualities, i.e. the unmaking and re-arranging of a given state of things, and the notation of the cascading immersion of these effects on our empirical register by utilizing extensive quantities, i.e. a yet-to-be-known state of things that can be made visible by way of bare statistical measurements. Using these ideas as a form of grounding, we can find ways to revel in the tools offered by contemporary science and physics (an ordeal beyond the limits of this writing) that have at their center the intensive qualities of time, against the impoverished utilitarian perspective of long-termism, which banishes any kind of depth from the merely extensive measurements used to eternalize an undisputed western human subject.

III. Multiscalar Temporality as a Heuristic Critique

Following our line of argument, and in order to sketch out a potentially robust naturalization of the negative impact of time as a structural constraint on human subjectivity, we need to observe that general contentions about inhumanism’s decomposability of human agency become inevitable, and to this end we will first take into consideration two concrete instances of fallibility. The first is the seemingly unorthodox interpretation of Boltzmann’s theory of thermodynamics, read by Land as a transcendental law and as working proof of the fixed tendency of matter toward its own dissolution in time.[30] We will argue that Land’s deployment of this proof to undermine any teleological conception of time, and therefore any telos embedded within human agency, reverses precisely into an axiomatic view that predetermines the behavior of phenomena in time. Land delights in his own fallible device when writing the following: “Any process of organization is necessarily aberrational within the general economy, a mere complexity or detour in the inexorable death-flow, a current in the informational motor, energy cascading downstream, dissipation. There are no closed systems, no stable codes, no recuperable origins. There is only the thermospasmic shock wave, tendential energy flux, degradation of energy.”[31]

Toward this end, we will use an exemplary lesson taken from Mark Wilson’s critique of “ersatz rigorism”[32] in analyzing the contextual ills of a reductive axiomatic formalism that is taken to be epistemologically complex while remaining fundamentally bare.[33] Following Wilson, we can write off Landian temporal catastrophe as a variant of what we can call a “non-linear ersatz complexity”: a supposed increase of complexity gained by utilizing non-linear systems as a model, and in particular the dynamics of non-asymmetrical temporal flows, which nevertheless does not suffice, as it fails to go beyond a bare axiomatic proposal centered on the idea of primordial entropic unbecoming as a deregulative fixture haunting any kind of negentropic formation. On this point, Reza Negarestani poses a related critique of Land’s apparent complexity in using non-linear models of explication to prove the dissolution of matter and order, models that in the end become undermined by their binding to fundamentally closed systems that depend on a predictive economy of self-regulation and largely unrevised locating beliefs: “The dissipative rate is energetically conceived as an economical (and hence, restricted) correlation; its existence is dictated by the exorbitant index of exteriority but its modi operandi are conditioned by the affordability of the interiorized horizon of the organism.”[34]

With this in mind, the second instance we will consider is the “non-successive and unsegmented zero of intensive extinction” behind “flatlining”,[35] which further compromises the complexities suggested by Land, as it also implicitly suggests a flat picture of time acting as a unilateral or unidirectional limit whose consequence is an orthodox form of eschatology[36] rather than an intensive and multiscalar[37] conception of time. For the latter, we will also build on the observations made by Wilson that pose a direct response to the depthless formalist rigor of the so-called scientific epistemologies developed by much of the European and Anglo philosophy of the 20th century, drawing on recent developments in contemporary science and computation that utilize multiscalar architectures capable of tackling the problem of having various interacting submodels within a dominant structure (and accordingly, a dominant behavior), whilst not subtracting the importance of what happens whenever we immerse downwards and inwards to further levels of complexity: “The basic trick is to position a variety of individual modeling tactics (called ‘submodels’) within a coordinated architecture that shifts between these components in a controlled, checks-and-balances manner. By doing so, a multiscale plotting can resolve the computational hazards that Terence Tao calls the ‘curse of dimensionality’: keeping track at one time of all the interactive variables relevant to a complex system will easily swamp the capacities of the most compendious of imaginable computers.”[38]
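By way of illustration only, and without claiming to reproduce Wilson’s formalism, a toy rendering of such a coordinated, checks-and-balances architecture might look as follows; all names, the toy submodels and the escalation criterion are our own assumptions:

# A minimal sketch of a coordinator that holds several submodels tuned to
# different scales and shifts between them, descending to a finer-grained
# submodel only where the coarse description's correction exceeds a tolerance.

from typing import Callable, List

class Submodel:
    def __init__(self, name: str, resolution: float, step: Callable[[float], float]):
        self.name = name
        self.resolution = resolution   # characteristic scale this submodel handles
        self.step = step               # evolves a local state variable one step

class MultiscaleCoordinator:
    def __init__(self, submodels: List[Submodel], tolerance: float):
        # order submodels from coarsest to finest resolution
        self.submodels = sorted(submodels, key=lambda m: -m.resolution)
        self.tolerance = tolerance

    def evolve(self, state: float, steps: int) -> float:
        for _ in range(steps):
            for model in self.submodels:
                candidate = model.step(state)
                # "checks and balances": accept the coarsest submodel whose
                # correction stays within tolerance; otherwise descend a level
                if abs(candidate - state) <= self.tolerance or model is self.submodels[-1]:
                    state = candidate
                    break
        return state

# toy usage: a coarse drift and a fine corrective submodel
coarse = Submodel("coarse", resolution=1.0, step=lambda x: x + 0.5)
fine = Submodel("fine", resolution=0.1, step=lambda x: x + 0.05)
print(MultiscaleCoordinator([coarse, fine], tolerance=0.1).evolve(0.0, steps=10))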

Thus, if we now turn to Huw Price’s contention that time as phenomenologically perceived by human agents, in an asymmetrical manner and with a tendential behavior that affects things in-time, becomes strained when put side by side with the subjectless perspective from the nowhen, which does not take the forward causation belonging to the phenomenological perspective as its basis, we can then try to piece together the idea that there might be a yet-to-be-deciphered dominant behavior of time, rather than a restricted nature of time, one that would include as submodels both asymmetrical and symmetrical directional flows. The enrichment of perspectival and non-perspectival temporalities and behaviors permitted by a multiscalar conception of time willfully demonstrates that Land’s inhumanist idea of time is deeply conservative when contrasted with developments that might contain and defy human agency without resorting to implicitly threadbare concepts such as the nature of time et al.

With the above, we would like to suggest that even if we are prone to find evident dangers leading toward orthodoxy and dogmatism in Land’s decomposability of human agency, we can also find productive ways of circumventing the substantive status surrounding human agency by using post-Landian responses as a starting point[39] toward a robust hybridization of conceptual strategies and tools across continental and analytical philosophical traditions, one that can ultimately re-adjust rather than abandon the open-ended intellectual persuasions left behind by the inhumanist proposal. Thus, we will stand by a critique of a substantive “I” as a perspectival given in philosophy by way of what has been termed an “inessential indexical”,[40] a conception rooted in the conflict between relational properties and self-names that Ruth Garrett Millikan develops in “The Myth of the Essential Indexical”.[41] On this note, we would highlight that the Millian[42] observation underlying Millikan’s critique of Lewis’ and Perry’s essential indexicality, an observation that considered through and through has the capability of underwriting self-names,[43] carries the repercussion of espousing not organismic and holistic but decomposable and process-laden approaches to self-situatedness within the world and to the inner-workings and external-networkings that come to be associated with the act of self-naming.

Put differently, and while borrowing from Patricia Reed’s parallel remarks on this subject,[44] one does not begin from homophily in order to circumscribe oneself within homophily; rather, when one subjects homophily from the inside to the knifings of immanent critique, it reaches outward to the evidence in excess that challenges our perspectival givens. From this, we can observe an act of inward folding that unfolds toward alienation, which would be another name for what in Mark Fisher’s gothic materialist proposal is known as an implex (the fold outwards from within),[45] i.e. a Leibnizian-Spinozist kernel that feeds back onto an immanent critique of indexicality, generating strategies in which human subjectivity can be undermined while situated fully within the grasp of determinate material constraints. In turn, we could haphazardly define these material constraints as those belonging to our developing historical context as they adapt to our own cognitive domain,[46] whilst jeopardizing any agentic givens that are and must be dialectically revisable through the conflict between the manifestly ideological and the scientific critique of our own positioning in the world. This could also bring forward a necessary engagement with Louis Althusser’s Marxist critique of humanism as an ideological construct when faced with contingent and time-sensitive material constraints that define the relations of socioeconomic production, relations that have in turn also produced the mirage of substantive selves.[47]

Finally, and as an ongoing project, we would like to lead these conceptual syntheses to their ultimate consequences as a painstaking subtractive labor that could eventually piece together a discerning view of the unidirectional “nihil of time” via speculative physics alongside the anthropic perspective developed by Boltzmann. What would this imply? By taking into account Price’s reconstruction of Boltzmann’s discovery of statistics as a turning point in his theory of entropy, prompted by the mathematicians Zermelo and Poincaré’s dispute over the second law of thermodynamics, in which they highlight that states of lower entropy or organizational influx in the universe, even when considered to be of bare statistical importance, do inevitably happen, we are led to tentatively conjecture that what is most improbable within an asymmetric time scale can and will happen. That is, the aprioristic flux of transcendental space-time conditions does eventually make itself tangible when formalized within our bare perceptual view of phenomena in time (i.e. necessary entropy), meaning that we could also come to consider the convergence of initially divergent senses of time revolutionizing our sense of selves. Price writes the following: “Life on Earth depends on a continual flow of energy from the sun, for example, and would not be possible without a low-entropy hot-spot of this kind. In light of considerations of this kind, Boltzmann suggests that it isn’t surprising that we find ourselves in a low-entropy region, rare as these might be in the universe as a whole. This is an early example of what has come to be called anthropic reasoning in cosmology. By way of comparison, we do not think that it is surprising that we happen to live on the surface of a planet, even though such locations are no doubt rather abnormal in the universe as a whole.”[48]

By following Boltzmann’s purview, we can find in the idea of implausible hypotheses becoming plausible in given time an attractive strategy that can be filtered through the lens of both a heuristic method[49] of critique as developed by complexity theory and the epistemics of surprisal, methods that are entropy-tolerant but not overrun or ruled exclusively by entropy. For the idea of heuristics, we must turn to the observation made by William C. Wimsatt in regard to evidential excess: rather than becoming a limiting factor to be discarded in the building of scientific theories and research, this excess can become a reconfiguring factor toward the implementation of models that are error-prone or that contend with the eventuality of a surplus of evidence. Wimsatt writes: “A more realistic model of the scientist as problem solver and decision maker includes the existence of such limitations and is capable of providing real guidance and a better fit with actual practice in all of the sciences. In this model, the scientist must consider the size of computations, the cost of data collection, and must regard both processes as ‘noisy’ or error-prone.”[50] This, in turn, becomes even more crucial when faced with the entropy-tolerant framework proposed by Anil Cavia, where our own situatedness and posterior representational coding of what happens in the world is necessarily bootstrapped by initial states of contingency and entropy; rather than giving in to the informational saturation of entropy, we might as well face the corrosion of impact and build upon the ruins of our now dislocated beliefs.[51]
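Before turning to Cavia’s own formulation, the standard Shannon quantities that this vocabulary of entropy and surprisal draws on can be sketched minimally (these are textbook information-theoretic definitions, not Cavia’s formalism):

# Surprisal is the information content of an outcome, -log2 p(x); entropy is
# the expected surprisal over a distribution. Rare events carry more
# information than common ones, which is the sense in which learning "absorbs
# entropy as information".

import math

def surprisal(p: float) -> float:
    """Information content, in bits, of an outcome with probability p."""
    return -math.log2(p)

def entropy(distribution: dict) -> float:
    """Expected surprisal of a distribution {outcome: probability}."""
    return sum(p * surprisal(p) for p in distribution.values() if p > 0)

print(surprisal(0.5))                       # 1.0 bit
print(surprisal(0.01))                      # ~6.64 bits
print(entropy({"red": 0.9, "black": 0.1}))  # ~0.47 bits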

Cavia writes: “All the knowledge we have is of uncertainty, there is no means of disentangling judgement from contingency. Surprisal is precisely the idea that our capacity to learn is grounded in an attempt to absorb new forms of entropy as information, and that the negation of intelligence is a reversion to pattern. Here, encoding is an in-situ theory of knowledge in formation, an ontogenesis founded in the tension between freedom and constraint, not so much a dialectics as an informatics of pattern and surprisal.”[52] The play of informational flux, as has been highlighted circularly throughout this writing, is a key factor that leads to a revision of the status of our own agency as humans, all the more so when the informational flux can potentially be utilized for the purpose of modifying determinate mechanisms and functions within a complex system. We can illustrate this by moving forward with William Bechtel and Robert Richardson’s remarks on the issue of localization and decomposition, using the French chemist Lavoisier’s “structural decomposition of water into constituent molecules”:[53] most functions and operations belonging to a particular mechanism can, once located, be potentially decomposed so as to be structurally recomposed in an isomorphic manner, with the corresponding functions potentially enabled to be mapped on a one-to-one basis. The compelling factor of decomposition and posterior structural recomposition is the failure or divergence of functions,[54] which could likely, when taking both heuristics and the epistemics of surprisal to their limit, synthesize diverging functions that put at stake any configuring factors taken to be essential within the mechanisms of a complex system.

Thus, if we take the monstrosity of decomposition seriously, what speculative stakes could we bet upon? Laura Tripaldi poses a challenge when analyzing soft intelligent technologies such as spider silk and the respective complexities in function and location that make it a difficult subject for structural recomposition via biomimicry,[55] observing that the diverging results, although functionally efficient, remain botched in replicating a one-to-one mapping and lacking in imagination.[56] With this, we can elaborate some closing observations: surprisal and heuristics can help rebound speculative monstrosity to the inbound conditions of human agency not only by way of decomposition, when taking in the escalating effects of the intensive order of time, but also by contemplating what comes thereafter, structurally recomposing our agency so as to shake the retrogressive beliefs condensed in humanism, leading us to stranger ways of inhabiting a world full of statistical possibilities and to reversibly tread the neighborhoods of escalating outcomes that at this point in time inevitably rear us toward our own extinction. We turn to facility zero in order to jump in degrees from the despotic order of representation to the inevitable intrusion of intensive futurity.

  • 1

    LAND, Nick, The Thirst for Annihilation, UK: Routledge, 1992.

  • 2

    BOSTROM, Nick, “Astronomical Waste”, in: Utilitas, 15(3), 2003, pp. 308–314.

  • 3

  • 4

    We leave open a revisionary outlook of Mark Fisher’s strategies of self-effacement in the landscape of what self-representation would entail when faced with technological enclosement, and in particular the concept of the body without image. See FISHER, Mark, Flatline Constructs: Gothic Materialism and Cybernetic Theory-Fiction, USA: Exmilitary, 2018.

  • 5

    LAND, The Thirst for Annihilation.

  • 6

    In reference to the opening and intrusion of time-rifts. See CCRU, Collected Writings 1997–2003, UK: Urbanomic, 2020, pp. 33–52.

  • 7

Deep temporality is thought under the register of ontogenesis: a field of pure potentiality that permits the establishing of forms in time. This would also correspond to the remarks Deleuze and Guattari have made on absolute deterritorialization not as the careless acceleration of time but rather as the constant return of the long-term chain of effects of this field on constituted and yet-to-be-constituted beings. See DELEUZE, Gilles & GUATTARI, Felix, A Thousand Plateaus, USA: University of Minnesota Press, 1987, pp. 52–57.

  • 8

    See FREUD, Sigmund, Standard Edition of The Complete Works of Sigmund Freud Vol. 18, UK: The Hogarth Press, 1955, pp. 6–69.

  • 9

    See WILSON, Mark, Physics Avoidance: Essays In Conceptual Strategy, USA: Oxford University Press, 2017, p. 231.

  • 10

    See SCHOPENHAUER, Arthur, On the Fourfold Root of the Principle of Sufficient Reason and Other Writings, USA: Cambridge University Press, 2012, pp. 303–448.

  • 11

See HAMILTON GRANT, Iain, “Black Ice”, in: BROADHURST DIXON, Joan & CASSIDY, Eric J. (eds.), Virtual Futures: Cyberotics, Technology and Posthuman Pragmatism, UK: Urbanomic, 1998, pp. 132–143.

  • 12

    PERRY, John, “The Problem of the Essential Indexical”, in: Noûs, 13(1), 1979, pp. 3–21.

  • 13

    STERKENBURG, Tom F., “Putnam’s Diagonal Argument and the Impossibility of a Universal Learning Machine”, in: Erkenntnis, 84, 2018, pp. 633–656.

  • 14

    See CARNAP, Rudolf, Logical Foundations of Probability, USA: University of Chicago Press, 1962.

  • 15

    PUTNAM, Hilary, Philosophical Papers Volume I (Mathematics, Matter and Method), USA: Cambridge University Press, 1975, p. 299.

  • 16

    PUTNAM, Philosophical Papers Volume I, pp. 12–42.

  • 17

    See MUCKENHEIM, Walter, Infinite sets are non-denumerable, 2003, https://arxiv.org/ftp/math/papers/0305/0305310.pdf.

  • 18

    See JANKOWIAK, Tim, “Kant’s Argument for the Principle of Intensive Magnitudes”, in: Kantian Review, 18 (3), 2013, pp. 386–414.

  • 19

    LAND, Nick, Fanged Noumena: Collected Writings 1987–2007, UK: Urbanomic, 2011.

  • 20

    PUTNAM, Philosophical Papers Volume I (Mathematics, Matter and Method), p. 271.

  • 21

“Given these estimates, it follows that the potential for approximately 10³⁸ human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10²⁹ potential human lives per second.” (BOSTROM, Astronomical Waste, p. 309.)

  • 22

    BOSTROM, Astronomical Waste, p. 311.

  • 23

    Ibid.

  • 24

    See Price’s discussion on the conventions of asymmetrical time and the probabilistic counter-evidence that develop a perspectival shift toward symmetrical time and backwards causation in Time’s Arrow and Archimedes’ Point (USA: Oxford University Press, 1996, pp. 162–194).

  • 25

    And therefore any ends of time ascribed to the teleology of utilitarian long-termism.

  • 26

“As a result, it is useful for us—the best we can do, in fact, in many cases—to arm our future selves with a description of the world which is rich in ‘potentiality’; in other words, a description which yields useful information about a wide range of possible futures. Potentiality is the best substitute for knowledge of the actual future itself. It gives us a kind of generic knowledge, useful in each of a wide range of future circumstances, many of which may be compatible with what presently we know.” (PRICE, Huw, “Backward causation, hidden variables and the meaning of completeness”, in: Pramana, 56 (1 & 2), 2001, p. 204.) In light of Price’s optimistic overview of the potentialities of time, we state that such potentialities can also be catastrophically counterfactual to what we presently know.

  • 27

In reference to Freud’s earliest engagement with the theory of the embodiment of trauma inside our perceptual-cognitive apparatus in time. See FREUD, Sigmund, Standard Edition of The Complete Works of Sigmund Freud Vol. 1, UK: The Hogarth Press, 1966, pp. 295–387.

  • 28

    LAND, Fanged Noumena.

  • 29

    See DELEUZE, Gilles, Expressionism In Philosophy: Spinoza, USA: Zone Books, 1990, pp. 27–40.

  • 30

    LAND, The Thirst for Annihilation, pp. 27–57.

  • 31

    Ibid., p. 43.

  • 32

    See WILSON, Mark, Imitation of Rigor, USA: Oxford University Press, 2021.

  • 33

The idea that rigor must be seen as a form of subtractive axiomatic method is delicately woven through Wilson’s genealogy, starting from European physicists and mathematicians of the 19th century, more specifically the figure of Heinrich Hertz, and condensed in the work of the logical empiricists of the early 20th century. See WILSON, Imitation of Rigor.

  • 34

    NEGARESTANI, Reza, Accelerationism and the problem of (un)binding, 2010, https://www.urbanomic.com/accelerationism-and-the-problem-of-unbinding/.

  • 35

    LAND, Fanged Noumena, p. 370.

  • 36

We can highlight here the critique made by Negarestani of the reactionary eschatological consequences that come with the elimination of modularity in becoming when posed only with the output or horizon of human extinction. Even if Negarestani poses a modularity of becoming by considering the effects of temporality on our empirical selves and its respective potentialities, we could postulate a modulation of unbecoming that is crucial to dispute our own perspectival and sense-bound locatedness within the world and the outbounds of the universe. See NEGARESTANI, Reza, Intelligence & Spirit, UK: Urbanomic, 2018, pp. 233–348.

  • 37

    WILSON, Imitation of Rigor.

  • 38

    Ibid., p. 90.

  • 39

NEGARESTANI, Reza, “The Labour of the Inhuman”, in: MACKAY, Robin & AVANESSIAN, Armen (eds.), #Accelerate: The Accelerationist Reader, UK: Urbanomic, 2014, pp. 321–331.

  • 40

    “We also think the substantive conclusion of the tradition is wrong, and that there is no deep or philosophically interesting notion of perspectival content. We think that contents are, and are used as, tools for representing (and, of course, sometimes misrepresenting) the objective state of the world. Some of the states represented are ‘perspectival’ in the minimal sense that they are facts about our immediate environment, or facts about how things are in relation to us. Some of our representational systems are indexical in the minimal (Kaplanian) sense that they represent as they do in part in virtue of where they are situated in the world. But there’s nothing more to the phenomenon than that—fundamentally, all information is objective information, and is used indifferently by us as such.” (CAPPELEN, Herman & DEVER, Josh, The Inessential Indexical: On the Philosophical Insignificance of Perspective and the First Person, USA: Oxford University Press, 2013, p. 173.)

  • 41

    MILLIKAN, Ruth Garrett, “The myth of the essential indexical”, in: Noûs, 24(5), 1990, pp. 723–734.

  • 42

    See KRIPKE, Saul, Philosophical Troubles: Collected Papers Vol. I, USA: Oxford University Press, 2011, pp. 52–72.

  • 43

    MILLIKAN, The myth of the essential indexical.

  • 44

REED, Patricia, “Xenophily and Computational Denaturalization”.

  • 45

    “The implex describes less a relationship between objects than a transformation that happens to a system. Implex designates a process of folding, or unfolding: thus cyberspace is neither ‘inside’ nor ‘outside’ the world, it constitutes a fold in the world that is nevertheless a real production–an addition—to the world as such.” (FISHER, Flatline Constructs, p. 144.)

  • 46

With this, I would leave open posterior developments that could tentatively link up Louis Althusser’s idea of ideological interpellation with its repercussions on human agents at a cognitive-behavioral level. Even if it might sound tautological, work must be done to sufficiently explore this subject. See ALTHUSSER, Louis, On the Reproduction of Capitalism, USA: Verso Books, 2014.

  • 47

    See ALTHUSSER, Louis, For Marx, USA: Verso Books, 2005, pp. 221–247.

  • 48

    PRICE, Time’s Arrow and Archimedes’ Point, p. 34.

  • 49

    “Heuristics are just the sort of decision rules that recognize cognitive limitations and their impact on choices made in complex circumstances.” (BECHTEL, William & RICHARDSON, Robert C., Discovering Complexity, USA: MIT Press, 2000, p. xxiv.)

  • 50

WIMSATT, William C., Re-engineering Philosophy for Limited Beings: Piecewise Approximations to Reality, USA: Harvard University Press, 2007, p. 78.

  • 51

Following Cavia’s concluding remarks, disorientation does not amount to a discarding of the possibility of grounding knowledge. Disorientation produced by divergences in localization, and specifically by the divergences found in time, can enrich our own purview and, following the initial remarks of this writing, the strategies of self-effacement at hand: “If we’ve learned anything at all, it’s that the future does not look like the past—an epistemics of surprisal posits that this is necessarily all we could ever learn, it renders both reasoning and mattering as encodings informed by the unfolding of uncertainty we call time.” (CAVIA, Anil, Shannon’s Demon, 2022, https://tripleampersand.org/shannons-demon/.)

  • 52

    CAVIA, Shannon’s Demon.

  • 53

BECHTEL & RICHARDSON, Discovering Complexity, p. xxxviii.

  • 54

    “Not infrequently, these efforts result in failure: the reconstituted system does not generate the original phenomenon or generates only some aspects of it. One possibility is that the researchers failed to identify some critical component. Another is that the organization was not adequately recovered in the reconstituted system and it contributed in essential ways to the production of the phenomenon. In either case, failure to reconstitute the system’s behavior signals a failure of the proposed account of the mechanism.” (BECHTEL & RICHARDSON, Discovering Complexity, p. xxxviii.)

  • 55

    See TRIPALDI, Laura, Parallel Minds, UK: Urbanomic, 2022, pp. 7–32.

  • 56

    TRIPALDI, Parallel Minds, p. 31.

Federico Nieto

Certificate student and researcher in the Critical Philosophy program at The New Centre for Research & Practice, with an MA in Philosophy from the National University of Colombia. His current research focuses on novel epistemological approximations to political organization and subjectivity under the influx of contemporary analytical philosophy, complexity theory and structural Marxism.