Platform for art and theory/fiction

Any retrospective on transformations in our “modes of seeing”[1] is a messy business. Looking backwards is always a looking from, and looking forwards from looking backwards is all the more tainted. The paradigm shift from an optic to an entoptic media regime brought about by computational technologies is an exemplary case. While we have clearly been on the yonder side of this shift for some time—by some estimates, decades at the least[2]—intermediary terms in theory such as “post-perceptual media”[3] (my emphasis) or my own “thingly uncertainty”[4] still persist. In light of its omnipresence, it is increasingly challenging to appreciate what it was precisely that made “entoptic” the governing figure of experience, that is, what provided the ground for this novel paradigm and, more importantly, whether this figure has now fully transitioned into being a ground in itself. At the same time, it is equally obscure where the ease with which we refer to obsolete technologies as “optic” media (just think of the interludes of Google Glass or Musk’s Neuralink) has come from. Similarly to Uexküll’s ‘ur-cybernetics’[5], the notion of entoptic media emerged first as a useful model for the activities in other worlds that did not seem to fully align with that of human knowing. The watershed year bringing forth what we now casually refer to as entoptic media, as most will agree, was 2020, the first year of the Covid-19 pandemic. Computational technologies began to defy bodily-perceptual experience to such an extent that the once holy division between truth and illusion quickly became irrelevant. One was able to feel the heat of blood-red skies, yet saw the calm of blue as “technological expectancy”[6] saturated our vision. The formerly vague hint of a fundamental restructuring of selfhood in practice, rather than transhumanist fantasy, became an item in election manifestos due to pioneering computational solutions to the folding-problem.[7]

Against this backdrop, it was clear that optic media as a conceptual, post-Foucauldian paradigm no longer aligned with the computational technologies surrounding the watershed moments of 2020. While the corporate complex of the time branded the then unfolding “technological cocoon”[8] with all kinds of optic metaphors, the consensus emerged that there was indeed very little about these beings that actually warranted comparison to vision in a phenomenological sense. The long-ruling optic model of human-media relations as composed of reciprocal information-processors turned out to be a rather simplistic Cartesian “enframing”[9] of computational technologies winking a friendly eye at their human operators. In contrast, it has become clear that post-2020 media incorporating computational technologies indeed did not operate spatiotemporally on “the same scale as the operator”[10]. Rather, the ‘autocorrect’ at work in the computationally governed filters encroaching on ‘reality’ on the planetary scale called forth a wholesale re-evaluation of tired metaphors such as the panopticon or surveillance capitalism. In short, around this time, human knowing as “violence” (technē in its most genuine sense according to Heidegger[11]) needed to come to terms with what was unfolding within a sphere it had always thought its own—and as always, it needed a conceptual lynchpin for this coming-to-terms.

Entoptics as technologies facilitating socio-somatic consensual hallucinations, explicitly named as such in Reynolds’s fiction in the form of machine-generated and neurally induced holograms,[12] slipped into this role due to a combination of features making it a perfect ‘conceptual malaphor’—that is, a blend of related concepts from two disparate fields. First, entoptic phenomena as used in the medical sense refer to experiential phenomena induced solely by the structural makeup of the eye[13] and date back to the mid-19th century as a concept. Floaters, seeing purple dots upon standing up too quickly—it is those ineffable happenings that are entoptic phenomena, undoubtedly experiential, yet hardly intuitable. And, most importantly, as opposed to more common optical illusions (e.g. magic eye graphics), entoptic phenomena in the medical sense are common experiences as a category, but cannot in themselves be shared intersubjectively. Each body’s entoptic phenomena are its own. As a sidenote, there is a curious parallel between how the term is used in the medical sense and the ubiquitous Helmholtz machines[14] of today, as Helmholtz was among the first to discuss such phenomena.[15] Second, entoptic phenomena as used in the archaeological sense were introduced much later, initially by Lewis-Williams and Dowson.[16] Expanding the focus on the eye towards an interplay of the cornea and cortex, Lewis-Williams and Dowson argued that early human cultures deliberately induced altered states of consciousness (e.g. in shamanic rituals) to experience entoptic phenomena (in the medical sense); literally in-forming the semantic-symbolic imagery coming to dominate such cultures.[17] An important difference from the medical use of entoptics is indeed in the latter part: the authors propose that the subjects deliberately integrated such phenomena into geometric (e.g. a representation of the entoptic phenomena themselves) or iconic imagery (e.g. entoptic zig-zags becoming the structural feature of an arrangement of goats).

The malaphor of the entoptic media paradigm, then, is based on a particular historical-ontological convergence of both senses of the term: the technologically facilitated embedding of the human being in a probabilistic interplay of data-processing (the “cornea”) and pattern-inference (the “cortex”). The current media regime is labelled ‘entoptic’ because (1) all phenomenally accessible, semantic-symbolic forms of our everyday being are (2) scaled up from artefacts constituted within the predictive interplay of data and patterns in our media apparatuses. And, more to the point, the “corneas” of our apparatuses are not only data as a given but may rather also be data on patterns, and their “cortex” is not only the inferred patterns of data but may also be the “patterns of patterns”[18]. Due to this interoperability of representation, inference and prediction, the practices we enact through entoptic media (i.e. operating ‘on’ symbolically encoded entoptic phenomena) are re-integrated into the entoptic interplay, and so on, in our contemporary consensual hallucination. Put differently, with entoptic media we casually express that a tight integration of multiple, inferred internal optics as a “blindly realized possibility”[19] has superseded the Cartesian grid of a priori scripting in computational media. Indeed, the entoptically mediated fluidity of being that we enjoy now often seems to have appeared ex nihilo precisely due to its phenomenological withdrawal, a smoothly facilitated transparency of experience—with little opportunity to distinguish the status quo from a historical antecedent because the latter is always-already entoptically integrated and mediated.

To advance towards the goal of this essay, unfolding the becoming of the entoptic media regime, I adopt Löffler’s principle of “process-emulative recursions”[20]. Briefly, this principle concerns co-evolutionary developments in technology and civilizational forms, and is fundamentally based on the finding that “every new tool contains the abstraction of already instrumentally discretized processes”[21]. A handy example is the bow: (1) the previous tool-at-hand, i.e. the spear, (2) the instrumental processes associated with it, i.e. throwing it, and (3) the kind of world (i.e. legitimate targets, manufacturing processes) the spear-human assemblage disclosed as a whole—all become abstracted, emulated and integrated in the bow-hunter assemblage. Evidently, the string and arrow emulate the arm-spear interaction. The consequence of this process-emulative recursion, however, is ontological as much as it is functional. The newly constituted bow-hunter assemblage, already embodying the world disclosed by the spear-hunter module, unfolds a new resolution of the world—constituting a new ontological plateau from which new phenomena, such as particularly effective hit zones or new hunting practices, can be “isolated”[22]. Such phenomena, however, do not stand on their own, as associated practices, tools, processes, etc. unfold with them, thereby co-constituting a new “mode of seeing”[23], or, put differently, a new “resolution”[24] to the world and the “horizons of reference and validity”[25] enfolding it.

Returning to the subject of this essay, in my brief recall of the concept of entoptic media above, we can start to appreciate an apparent recursion in the paradigm shift from optic to entoptic media—for instance, that contemporary media apparatuses employ a form of internal optics (e.g. inferred patterns of data, patterns of patterns). Hence, given that, following Löffler’s principle, we are apparently dealing with a new ontological plateau, there are serious implications that should lead us to question the ease of our “attunement”[26] to the new media regime, particularly regarding how smoothly it cocoons both our everyday experience and existential dilemmas. Inasmuch as Löffler originally conducted his macroscopic study of patterns in civilizational development so as to apprehend the possibilities of transcending the catastrophic “path dependency”[27] of late capitalism (e.g. environmental collapse), the transition towards the entoptic media regime requires a broader in-depth inquiry so as to gain some certainty about the stakes confronting us, and the particular possibilities and dependencies we ought to be aware of.

In this context, it is fortunate in the extreme that a relatively obscure computing company, Iskra Delta (ID), has become the subject of an intense interdisciplinary study. It is fortunate not because there is any single figure of genius (human or conceptual) contained within the company’s products or research, but rather due to its unique place in time and space. Founded in 1974 in the Socialist Federal Republic of Yugoslavia (SFRY) and existing until its dissolution in 1992, ID pursued an approach to computing that must first be placed within a continuum of two important historical developments: (1) the first AI winter following the disappointing results of the symbolic AI approach in the 1970s, and (2) the emergence of novel computing paradigms such as networked computing and parallel distributed processing in the 1980s. As ID became one of the leading computer manufacturers in the SFRY, its research and development of hardware and software ran parallel with advances in computing which characterize the optic/entoptic paradigm shift. At the same time, ID was also comparatively unencumbered by the particular enframing of the West, in that its computing technology could operate under a distinctly different, equally longevous systemic paradigm. Owing to a group of dedicated former employees, researchers and directors, my retrospective can dive into a plethora of product brochures, software emulations, research articles, internal reports and technical diagrams. As an exploratory investigation, I will focus on three diagrams (now referred to as the Raziskave diagrams) found in this archive, which bear a (tantalizingly obscure) prophetic trace of the process-emulative recursions in computing technology which engendered the shift to entoptic media—not only as a symbolic figure, but as the principal ground of experience.

While the authorship and origin of these diagrams are disputed, the archive custodians hold that they must have come from the Raziskave in razvoj računalniških informacijskih sistemov (Research and Development of Computer Information Systems, henceforth Raziskave) unit and date from between 1984 and ’86. The latter estimate stems from the fact that the components of the diagrams can be traced, at least for the most part, to specific publications. One is a 1984 article by Knop, Szymanski and Trinastic on “Future Developments in Computer Architecture” from the SFRY journal Informatica, wherein the authors discuss various currents of the then-emerging parallel distributed processing paradigm. The others are corporate publications: the images in diagrams #1 and #3 originate from a 1983 company profile, whereas the factory component diagram in #2 was used in a 1984 prospectus for the Računalniški Sistem Delta 800 computing terminal. Equally, while no accompanying texts are preserved, the diagrams themselves seem to integrate, but make no explicit reference to, then-emerging quintessential concepts such as parallel distributed processing[28] as a formal framework, backpropagation[29] as well as approximation theory[30]. Furthermore, the use of English hints at either planned publication, the involvement of international colleagues, or both.

Diagram #1 from the ID archive, ca. 1984–’86. Unknown authors, presumably from the Raziskave unit. Note the use of graphics from the Knop et al. 1984 journal article and the image from the ID 1983 company profile.

The Raziskave diagrams share some commonalities. Each is divided into a graphic and a photographic section. Diagrams #1 and #2, however, share more obvious similarities, both in terms of geometry as well as image material. Diagram #1 seems to model a relationship between a local computer and a computational network (as described and presented in Knop et al., 1984), and a relationship between that local computer and some form of terminal access (displaying an ID interface, also pictured on the right). An arc labelled “isomorphism1” connects the former’s relation with the latter’s. This terminology, implying a first-order isomorphism, suggests that the authors were concerned with how terminal access on a messy, phenomenal level should map to the strictly defined topologies of networked computing of the time. Taking up Knop et al.’s vocabulary in assigning the currently accessed computer a “local complexity” as opposed to the “global complexity,” the diagram suggests, within the context of the then-unfolding parallel distributed processing framework, a concern that the complexity of networked computing (“[1,4]n”) cannot produce an isomorphism commensurate with phenomenal appearance, given the need for an a priori semantic definition of phenomena in hard-coded topologies. From today’s point of view, we can apply the critique of Cartesian conceits of optic media to the diagram: demanding semantic-symbolic equivalence of information is a brute-force ‘perspectivization’ of this uniquely versatile medium. The latter is key because, as Löffler showed, the virtuality of information lies in the foundation of the coupling of heterogeneous processes[31]—which is the core possibility for transcending late capitalism’s systemic path dependency. In effect, with pre-ordained, semantic-symbolic information ordering, this possibility is concealed by modes of seeing preoccupied with the kind of symbols that slot into its particular world resolution (e.g. late capitalism’s profit margins, environmental exploitation and legacy wealth).
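As a gloss on the vocabulary (my own rendering, not the Raziskave unit’s notation): an isomorphism in the standard mathematical sense is a structure-preserving bijection, and it is exactly this demand for total structural correspondence that the diagram appears to strain against:

```latex
% Isomorphism: a bijection that preserves structure in both directions.
f : A \to B \ \text{is an isomorphism iff } f \text{ is bijective and}
\quad f(a \star a') = f(a) \circ f(a') \quad \text{for all } a, a' \in A
```

On this definition, terminal access and network topology would have to mirror one another exactly, which is precisely the demand the diagram marks as a concern.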

Diagram #2 from the ID archive, ca. 1984–’86. Unknown authors, presumably from the Raziskave unit. Note the use of graphics from the Knop et al. 1984 journal article and the graphic from the 1984 Računalniški Sistem Delta 800 prospectus.

The isomorphic concern continues in diagram #2. Shifting their focus further inward and illustrating it photographically with a processual rendering of factory operations, the authors specify the relation between the local computer and the computing network with an arc labelled “isomorphism2” spanning over a relational database—presumably indicating that, rather than simple equivalency, isomorphic relations could be qualitatively and quantitatively differentiated, leading to local access being more profoundly geared into networked processes (now illustrated by a cybernetic factory plan). However, since such an isomorphism itself turns inwards, becoming a second-order relation in name, it nonetheless remains optic: the world streams inwards, arrayed in neat order, as if ordained by the eternal ratio. The manner of local access (which, of course, would include the access to the relational database not represented in #2) conditions the complexity of representation.

Diagram #3 from the ID archive, ca. 1984–’86. Unknown authors, presumably from the Raziskave unit. Note the use of graphics from the Knop et al. 1984 journal article and the image from the ID 1983 company profile. The origin of the three-dimensional topographic mesh (bottom left) and the images in the area labelled ‘local staging’ are unclear.

Lastly, diagram #3 represents the defining instance of the Raziskave unit’s speculative perceptiveness of the entoptic media regime. As noted, while key publications were not yet available at the time, diagram #3 hints at concepts such as approximation theory, the manifold hypothesis and backpropagation. The topographic map in the lower left corner even suggests the loss landscape of later convolutional neural networks[32]. The concerns of isomorphism are replaced by the keyword “homeomorphism,” i.e. a continuous, invertible mapping between topological spaces or, in more everyday terms, a reshaping of space that neither cuts nor tears it. A feedback loop labelled thus is located in between two projections linking the main components of the graphic section, labelled “Local Staging”[33] (dashed outline rectangle with two unclear objects,[34] presumably sketches of visual interface elements) and “Global Manifold” (the topographic map), respectively. Through Informatica’s editor-in-chief Anton P. Železnikar,[35] who was the technological development advisor at ID at the time, we can gain an appreciation for how the term ‘homeomorphism’ came into consideration. Železnikar pursues a Heideggerian critique of AI as it was conceived at the time (though thoroughly unlike e.g. Dreyfus’s) and argues that the being of information, rather than intelligence, ought to become a more direct object of inquiry within the field. He states that “informing […] is governed by two basic informational principles: information embedding and information arising”[36]. The use of ‘homeomorphism’ by the Raziskave unit, then, is precisely in line with Bawa-Cavia and Reed’s much later concept of the co-dependence of inductive embedding and deductive encoding[37]. As is the case now, the informatic interplay of embedding and encoding—the entoptic inference of a functional relationship based on patterns in data—is what guarantees the integration of heterogeneous processes. The concern was no longer with one specific function (embedding) guaranteeing one specific kind of semantic-symbolic representation (encoding). On the contrary, in the interplay of data and patterns, all that was needed was an inference space, with functions continuously shaping themselves based on their relata.
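The interplay described above, a function continuously deforming toward its relata rather than being scripted a priori, can be sketched in miniature. The following is my own illustrative reconstruction, not anything found in the ID archive: a single sigmoid unit, in the spirit of the approximation results cited above, fitted to a step-like pattern by gradient descent.

```python
import math

# Illustrative sketch only (my reconstruction, not ID's method): a single
# sigmoid unit shapes itself toward a step-like pattern in its "relata"
# via gradient descent, standing in for an inferred, rather than
# pre-scripted, functional relationship.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The relata: sample points of a step pattern the function must infer.
data = [(x / 10.0, 1.0 if x > 5 else 0.0) for x in range(11)]

# Parameters of f(x) = w2 * sigmoid(w1 * x + b1) + b2, initialised naively.
w1, b1, w2, b2 = 1.0, 0.0, 1.0, 0.0

def loss():
    # Mean squared error between the current function and its relata.
    return sum((w2 * sigmoid(w1 * x + b1) + b2 - y) ** 2
               for x, y in data) / len(data)

initial = loss()
lr = 0.3
for _ in range(5000):
    # Backpropagation in miniature: mean gradients of the squared error.
    gw1 = gb1 = gw2 = gb2 = 0.0
    for x, y in data:
        s = sigmoid(w1 * x + b1)
        err = 2 * (w2 * s + b2 - y) / len(data)
        gw2 += err * s
        gb2 += err
        gw1 += err * w2 * s * (1 - s) * x
        gb1 += err * w2 * s * (1 - s)
    w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

final = loss()
print(final < initial)  # → True: the function has reshaped toward its pattern
```

The point is not the particular arithmetic but that no semantic mapping is scripted in advance; the function’s eventual shape exists only as an inferred relation to its data.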

With this in mind, turning our attention from the homeomorphic feedback loop to the structure of diagram #3 as a whole, we can see what precisely was intended by this early intuition. First, the two homeomorphically linked projections, unlike in #1 or #2, do not spring forth from one endpoint towards the manner of local access. Notably, rather than a projection (note the optic conceit in this alone) towards a local or terminal access as in #1 or #2, the area now labelled simply “Local Staging” is constituted co-extensively with the relations in a global manifold. Second, both areas are labelled with pseudo-formulas containing the prior main elements: “∑{[icon of local complexity]}𝔫” and “∑{[icon of global complexity]}𝔫.” From this heuristic notation,[38] we can explicate the following: the topographic map is an encoding of the current state of all instances of global complexity required of the local staging. Equally, the local staging is an encoding of access in line with the former global complexity. Third, this diagram is illustrated with an image of lightning striking a city—labelled “Planet.” Beyond the “terminal” (#1) or the “system” (#2), the Raziskave unit predicted a time when the question of semantic-phenomenal alignment, i.e. optic isomorphism, would no longer enframe the concerns of computation. Rather, the integration and reciprocal co-constitution of each local encoding (of either ‘staging’ or ‘manifold’) in a continuous interplay came to the fore. These characteristics affirm the homeomorphic turn of the Raziskave unit’s work expressed in these diagrams and show an early anticipation of the entoptic media regime. The researchers envisioned technical integration not as dependent on a priori scripting, but rather as an ad hoc arising of semantic-symbolic forms and the functional relationship of their constituent data.

Following this reading of the Raziskave diagrams, we see that a Löfflerian process-emulative recursion towards entoptic media in the commonly understood sense had already been, if not fully apprehended, then at least speculated about, from ID’s unique vantage point—the latter being key, as ID operated under the assumption that systemic change may not be impossible, and the late capitalist path dependency accordingly avoidable. The Raziskave diagrams do indeed show that a part of the world had already more conclusively realized the necessary “leap in civilizational time”[39]. In a sense, this long-forgotten ‘spark’ (the English translation of “Iskra”) is tragic—the historic collapse of systemic alternatives to late capitalism has long denied us our peculiar manifestation of this particular Yugo-futurism. But it is also instructive—as we now traverse an entoptically mediated socio-somatic plane, in perpetual sync with synthetic organisms enlivening our barren earth and warmed by the thrusters of government-issued micro-atmospheric injectors, we should not assume every catastrophe has been avoided. Instead, we should take heed of the sparks now arising, and watch for the indicators of the next leap required of us.

  • 1

    Ernst CASSIRER, in: Aud Sissel HOEL and Ingvild FOLKVORT (eds.), Ernst Cassirer on Form and Technology: Contemporary Readings, Basingstoke and New York: Palgrave Macmillan, 2012, p. 17.

  • 2

    Concerning the macro-economic view of a post-Web 2.0 shift towards “heteromation,” cf. Hamid EKBIA and Bonnie NARDI, “Heteromation and Its (Dis)Contents: The Invisible Division of Labor between Humans and Machines”, First Monday 19 (6), 2014, https://doi.org/10.5210/fm.v19i6.5331.

  • 3

    Shane DENSON, Discorrelated Images, Durham: Duke UP, 2020, p. 240.

  • 4

    Jesse Josua BENJAMIN, Arne BERGER, Nick MERRILL and James PIERCE, “Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry”, in: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 1–14; p. 10, https://doi.org/10.1145/3411764.3445481.

  • 5

    Davor LÖFFLER, Generative Realitäten I Die Technologische Zivilisation als neue Achsenzeit und Zivilisationsstufe Eine Anthropologie des 21. Jahrhunderts. Weilerswist-Metternich: Velbrück Wissenschaft, 2019, p. 152.

  • 6

    Ian BOGOST, “Your Phone Wasn’t Built for the Apocalypse”, The Atlantic, 11/09 2020, https://www.theatlantic.com/technology/archive/2020/09/camera-phone-wildfire-sky/616279/.

  • 7

    The AlphaFold Team, “AlphaFold: a solution to a 50-year-old grand challenge in biology”, Deepmind (blog), 30/11 2020, https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology.

  • 8

    Don IHDE, Technology and the Lifeworld: From Garden to Earth, Bloomington and Indianapolis: Indiana UP, 1990, p. 10.

  • 9

    Martin HEIDEGGER, The Question Concerning Technology, and Other Essays, New York: Garland, 1977, p. 19.

  • 10

Gilbert SIMONDON, in: Arne DE BOEVER, Alex MURRAY, Jon ROFFE and Ashley WOODWARD (eds.), Gilbert Simondon: Being and Technology, Edinburgh: Edinburgh UP, 2013, p. 6.

  • 11

    Martin HEIDEGGER, Introduction to Metaphysics, New Haven: Yale UP, 2014, p. 177.

  • 12

    Cf. “entoptics”, http://www.alastairreynolds.com/rs-universe/rs-glossary/, accessed 05/08 2021.

  • 13

s.v. ‘entoptic phenomena’, Concise Medical Dictionary, Oxford UP, 2010, https://www.oxfordreference.com/view/10.1093/acref/9780199557141.001.0001/acref-9780199557141-e-3227, accessed 19/08 2021.

  • 14

    Peter DAYAN, Geoffrey E. HINTON, Radford M. NEAL and Richard S. ZEMEL, “The Helmholtz Machine”, Neural Computation 7 (5), 1995, pp. 889–904, https://doi.org/10.1162/neco.1995.7.5.889.

  • 15

Hermann von HELMHOLTZ, Handbuch der Physiologischen Optik, published as Helmholtz’s Treatise on Physiological Optics, translated from the third German edition, James P. C. Southall (ed.), The Optical Society of America, 1925.

  • 16

    David J. LEWIS-WILLIAMS and Thomas A. DOWSON, “The Signs of All Times: Entoptic Phenomena in Upper Palaeolithic Art [and Comments and Reply]”, Current Anthropology 29 (2), 1988, pp. 201–245.

  • 17

LEWIS-WILLIAMS and DOWSON, “The Signs of All Times”.

  • 18

    Luciana PARISI, “Xeno-Patterning”, Angelaki 24 (1), 2019, pp. 81–97, https://doi.org/10.1080/0969725X.2019.1568735; p. 89.

  • 19

    Vilém FLUSSER, Into the Universe of Technical Images, Minneapolis, London: U of Minnesota P, 2011, p. 16.

  • 20

    Davor LÖFFLER, “Distributing Potentiality. Post-Capitalist Economies and the Generative Time Regime”, Identities: Journal for Politics, Gender and Culture 15 (1–2), 2018, pp. 8–44; p. 18.

  • 21

    Ibid.

  • 22

    Ibid., p. 20.

  • 23

    CASSIRER, in: HOEL and FOLKVORT (eds.), “Ernst Cassirer on Form and Technology”, p. 17.

  • 24

LÖFFLER, “Distributing Potentiality”, p. 43.

  • 25

    Saulius GENIUSAS, The Origins of the Horizon in Husserl’s Phenomenology, Dordrecht; New York: Springer Netherlands, 2012, p. 28.

  • 26

    Martin HEIDEGGER, Being and Time. Edited by Dennis J. Schmidt and translated by Joan Stambaugh, Albany: State U of New York P, 2010, p. 347.

  • 27

    LÖFFLER, “Distributing Potentiality”, p. 43.

  • 28

David E. RUMELHART, Geoffrey E. HINTON, and James L. MCCLELLAND, “A General Framework for Parallel Distributed Processing”, in: Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations, MIT Press, 1987, pp. 45–76, https://ieeexplore.ieee.org/document/6302935.

  • 29

David E. RUMELHART, Geoffrey E. HINTON, and Ronald J. WILLIAMS, “Learning Representations by Back-Propagating Errors”, Nature 323 (6088), 1986, pp. 533–536.

  • 30

    George CYBENKO, “Approximation by Superpositions of a Sigmoidal Function”, Mathematics of Control, Signals and Systems 2 (4), 1989, pp. 303–314, https://doi.org/10.1007/BF02551274.

  • 31

    LÖFFLER, “Distributing Potentiality”, p. 35.

  • 32

    Hao LI, Zheng XU, Gavin TAYLOR, Christoph STUDER, and Tom GOLDSTEIN, “Visualizing the Loss Landscape of Neural Nets”, Advances in Neural Information Processing Systems, 2018, pp. 6389–6399.

  • 33

As a sidenote: The dashed line demarcating its area furthermore suggests that the local staging’s particular modality (e.g. visual, neuronal, olfactory or proprioceptive) may be diverse.

  • 34

    It is possible that Andrej Terčelj, the author of “Fraktali – Grafične skrivnosti računalniških umetnikov” (“Fractals – Graphic Secrets of Computer Artists”, Informatica 11/3/87), was involved given the resemblance to the fractal graphics in his article.

  • 35

    It is not clear whether Železnikar himself was among the diagrams’ authors, but the use of graphics from Informatica articles and images from ID suggest that there was a degree of cross-pollination.

  • 36

    Anton P. ŽELEZNIKAR, “Artificial Intelligence Experiences Its Own Blindness”, Informatica 11 (3), 1987, pp. 25–28; p. 27.

  • 37

    Anil BAWA-CAVIA and Patricia REED, “Site as Procedure as Interaction”, in: Construction Site for Possible Worlds, Amanda BEECH, Robin MACKAY and James WITTGEN (eds.), Falmouth: Urbanomic, 2020, pp. 83–99; pp. 90–93.

  • 38

    Possibly taking inspiration from early work by Martin-Löf[40] which later led to the expectation maximization algorithm.

  • 39

    LÖFFLER, “Distributing Potentiality”, p. 37.

  • 40

    Per MARTIN-LÖF, Statistics from the point of view of statistical mechanics (lecture notes), Aarhus: Mathematical Institute of Aarhus University, 1966 (“Sundberg formula” credited to Anders Martin-Löf).

Jesse Josua Benjamin

Jesse Josua Benjamin is a PhD candidate in Philosophy of Technology. He combines practice-based research with philosophical analyses to explicate phenomena of technologically induced changes in modes of seeing.