
December 8, 2009 Leave a comment

A brain-computer interface device that is currently on the market.

A teardown of the device, explaining what all the parts do (also includes some very interesting reader comments)

Categories: Uncategorized

From Digital to Bio-Chemical Computation

December 7, 2009 Leave a comment

In the context of the class, we have been focusing on the relations between literature and digital technologies. As my own work deals with bio or living architecture, I thought I would post on digital architecture in order to articulate some relations it might share with living architecture. In fact, it is possible to make a direct connection between the two. My readings on digital architecture (even though they are not exhaustive) have led me to understand that much of the vocabulary used to qualify the new possibilities offered in the digital realm is drawn from the biological realm: variation, evolution, adaptation, mutation, etc. Various recent publications also foreground the relations between the digital and the biological: Greg Lynn’s Folds, Bodies and Blobs, Lars Spuybroek’s The Architecture of Variation, and the work of Brian Massumi, who has published extensively on architecture. Here I will focus on Spuybroek and Massumi.

How to trigger change?

Digital Architecture and Dynamic Forms

“Deleuze and Guattari, following Bergson, suggest that the virtual is the mode of reality implicated in the emergence of new potentials. In other words, its reality is the reality of change: the event. (…) Technology, while not constituting change in itself, can be a powerful conditioner of change, depending on its composition or how it integrates into the built environment.”1

In the context of my work, as I explained in my previous post, I am interested in the potential for change: the potential for technology to facilitate the reconfiguration of our social ecology of practices. Here I would like to see what changes digital architecture can trigger in the built environment. Lars Spuybroek’s most recent publication addresses that question by looking at the ways in which architects today are “resetting the tools for design and creating a language that integrates variation and complexity.”

The book he edited on the topic, The Architecture of Variation, contains an interview with the architect Ali Rahim. At the beginning of the interview, Rahim argues that one should understand the use of digital technologies not in relation to the possibility of increasing efficiency (which for him is the way that has mainly been foregrounded in architecture) but rather in relation to (1) “further design innovation and producing proliferating cultural effects” and (2) increasing the potential for collaboration and cross-fertilization between different areas (for example, as I discuss here, the cross-fertilization between the digital and the biological).

According to him, what digital technologies have brought to architectural practice is the possibility of integrating real-time feedback from the environment into the design process. In this perspective, he says, digital architecture reverses the traditional design process: instead of integrating a pre-conceived design into the environment, it integrates the environment into the design process. As Brian Massumi explains, “this is because the software put into use [are] evolutionary rather than representational2.” “Rather than using traditional CAD software, where basic geometrical forms are reproduced and then modified or rearranged, architects employed special effects software where you start by programming a set of modifications before you have an object to modify — a potential modification”3. This way of doing architecture negates the linear cause/effect model and insists rather on feedback loops. Hence, it seems that what digital technologies bring to architecture is the potential for generating a reflexive and symmetrical dialogue between the built form and the environment: to consider them as co-operating and co-evolving. Accordingly, it would be correct to say that digital technologies insist on the processual dimensions of form generation. In this perspective, the digital holds the potential to negate hylomorphism (the imposition of a form over matter) and to insist instead on “formation,” i.e. on the form’s processual dimensions, on the dynamism of its generation, and on its potential to vary over time.
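The logic of “evolutionary rather than representational” software can be loosely sketched in code: a set of modifications is programmed before any final form exists, and the environment’s simulated response selects among variations. The sketch below is my own illustration, not any actual architectural software; the function names, the single “form parameter,” and the toy environment function are all assumptions.

```python
import random

def evolve_form(environment_response, generations=200, seed=0):
    """Evolve a single form parameter (e.g. a facade opening ratio)
    by mutating it and keeping the variants the environment rewards.

    environment_response: a function scoring how well a candidate
    form fits its (simulated) environment; higher is better.
    """
    rng = random.Random(seed)
    form = 0.5  # arbitrary starting parameter, not a designed value
    best = environment_response(form)
    for _ in range(generations):
        candidate = form + rng.gauss(0, 0.05)  # a potential modification
        score = environment_response(candidate)
        if score > best:  # feedback loop: the environment selects
            form, best = candidate, score
    return form

# A toy "environment" that happens to favour a ratio near 0.8.
target = lambda x: -(x - 0.8) ** 2
result = evolve_form(target)
```

The point of the sketch is the reversal Rahim describes: no final form is drawn in advance; the designer only programs the space of modifications and the feedback criterion.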

In Spuybroek’s edited book, Manuel De Landa argues for something similar when he draws a distinction between properties and capacities. For him, capacities are relational: a capacity is always a capacity to affect and to be affected. Following Rahim and De Landa, digital technologies foreground the capacity for an affective design, where the form not only affects the environment but can also be affected by it (and that at the level of design, not only once the form is physically built).

The key point is that digital technologies bring processes of evolution, variability, and connectivity to design. The abstraction made possible by digital technologies is what makes these processes possible. As Massumi argues, “architecture has always involved, as an integral part of its creative process, the production of abstract spaces from which concretizable forms are drawn.” Now, with digital technologies, “the abstract space of design is populated by virtual forces of deformation” and, I insist, of transformation. He adds, though, that while “[t]he virtual is a mode of abstraction, the converse is not true. Abstraction is not necessarily virtual.” For architects, the question concerns the ways in which the abstraction process made possible by digital technologies (the virtual forces of deformation and transformation it entails) can operate on the level of the virtual, that is, how the abstraction process can trigger deformation and transformation.

Smart/Responsive Buildings

I think that what digital technologies have realized so far in relation to “living architecture” is the creation of responsive buildings. Of course, they have also helped with the production of buildings that exhibit living qualities (mainly on the level of the visual form). However, my interest lies not in buildings that look like living entities (visual form) but rather in buildings that behave like living entities. Responsive buildings are an example. Even though they don’t necessarily behave like living entities, for me they embody the first step towards the production of a real living architecture. I will give an example here showing that these buildings can adapt themselves to their environment but that, unlike living entities, they don’t have the capacity to evolve, to change over time.

The new Arts and Engineering building at Concordia University in Montreal is considered a smart or responsive building. One of the problems I have, especially with the discourses surrounding smart or intelligent architecture, is that they are mainly framed in terms of efficiency, something that Rahim pointed out in his interview. In this efficiency perspective, intelligent/smart buildings seem to be mainly associated with two ideas: (1) environmental friendliness and (2) sustainability. At Concordia they created the building in this perspective. For instance, they equipped the building with motion sensors tied to the lighting. When there is no movement, the lights shut down. This might sound smart, but it seems the engineers did not integrate the social environment’s feedback into the design. Indeed, they did not take into account the fact that sometimes scholars only read in their offices and that, as a consequence, their movements are fairly limited. You sometimes see scholars waving their hands above their heads to get the light back in their office! It seems that responsive environments deal with the question of the “information required to get a complex response”4 and that the architects/engineers of this building did not integrate sufficient information (virtual forces) into their design process.

Even though Rahim argues that “designing with the virtual abolishes fixed types and programs. Rather than housing a static, predetermined arrangement of functions within an established representational envelope, formations develop uses in response to their occupants and context. These uses are connected to the form directly rather than through representation,” it seems that this still remains on the discursive level, as buildings today are still pre-programmed for a variety of uses and don’t necessarily hold the potential to evolve once they are built, that is, to catalyze new uses. The Concordia building is one example.

Even though I agree with Rahim that we should not think design only on the level of efficiency, design must have an objective. It seems that most discourses that deal with the integration of living materials and processes frame their goals in relation to environmental development. In this perspective, Rachel Armstrong argues that projects like the Concordia Arts and Engineering building deal with the “conservation of energy: alternative energy sources, efficiency and recycling which buy us time by reducing the production of greenhouse gases but do not combat the fundamental causes of climate change.” According to her, “these designs can be impressive in their complexity and metaphorical sentiment,” but they only help us gain some time, as fundamentally, she says, “they change nothing.” I think it would be correct to say that they represent initial steps towards the emergence of a real reflexive and symmetrical relation between buildings and the environment, without fully actualizing that relation.

Beyond Gravity

Here I would like to discuss the work of the Polish architect Zbigniew Oksiuta. Oksiuta creates what he calls biological habitats: spaces with dynamic membranes. He argues that the construction of a spatial boundary between an inside and its environment is the most elementary task of architecture. He adds that “naturally, separating oneself from the environment, creating barrier and walls, is also a central human activity5”. His creations speculate on systems/environments whose dividing border between the inside and the outside is not a foreign body but rather an immanent component. Oksiuta creates spaces of dynamic liminality, transformative instances, uncertain spaces, spaces that act as associated milieus, as milieus of association. The link between his practice and digital architecture lies in the fact that he grows his dynamic membranes under water in order to approximate micro-gravity conditions. In fact, many forms generated by computer-based design cannot be built in the physical environment, as they do not respect the constraints of gravity. Consequently, Oksiuta’s creations provide a point of passage: they can be seen as a current model (an extension or a prolongation) of what is being done in the digital realm. In addition, his practice aims more towards a living architecture, as it is a form of liquid architecture, and science has shown that life requires a liquid medium to emerge. Following my readings on vital individuation, it also seems that in order to be alive, a system must have a membrane, but also a space of interiority. I think that the problem with responsive buildings is that they only succeed at generating a membrane that is unfortunately freed from a space of interiority, where the potential for evolution actually resides.

From Binary to Chemical Computation

“The architect’s job is in a sense catalytic, no longer orchestrating. He or she is more a chemist (or perhaps alchemist) staging catalytic reactions in an abstract matter of variation, than a maestro pulling fully formed rabbits of genius from thin air with a masterful wave of the drafting pencil.”6

As I explained in my previous post, I recently developed an interest in protocell architecture. Protocells have not been fully designed in laboratories so far, as nobody has been able to ensure their division/reproduction successfully. However, the use of digital technologies is important in that field, as scientists use simulation processes in their experiments. Computation is related to evolvability and programmability and can be extremely useful for the study of biological entities. It seems, though, that the Turing machine and its related binary or digital code might not be of best use in the field of synthetic biology (the field in which scientists are concerned with the design of protocells). Rachel Armstrong notes that Ikegami argued that the only semantics we have so far is that of the binary code and that it would be necessary, in the long run, to develop a “chemical computation” based on shape-shape relations rather than binary ones (which would mean the development of a shape grammar). She says, following Ikegami, that “the semantics of chemical computing pose a significant obstacle to interpreting the results of chemical interactions since our current understanding of computer code is based on binary systems that are not expressed in more complex, analog systems like chemical reactions.7” In this perspective, she adds that

“Material computation is performed by molecules that are able to make decisions about their environment and which can respond to local cues in complex ways that result in a change of their fundamental form, function or appearance. Material computers are responsive to their environment and make decisions that result in physical outcomes like changes in form, growth and differentiation. These have already been demonstrated to take place in non-biological systems as early as the latter half of the 19th Century when life-like behaviours were reported from nonliving systems that were not based on cells or even cell extracts. There are many differences between material and digital computers but most arise as a consequence of the information in material computers being embodied in a molecular scale, physical system that possesses both mass and volume. The main advantage of material computers over digital computers is that these systems exhibit almost unlimited parallel processing power, which enables huge amounts of information to be processed and allows for multiple solutions to be found for any given problem. However, material computers are also limited by their physical embodiment, which slows down their huge powers of processing and contrasts dramatically with the instantaneous, massless computation that is characteristic of the digital domain.”8

This might be a very interesting analysis to produce on the semantic level, i.e. to look at the convergences and divergences between digital and chemical computing and to question how they could mutually influence each other. I think, though, that we would first need a model to refer to that would help us understand how chemical computing differs from the Turing machine.

Lastly, I think that digital technologies offer very interesting tools for reflecting upon bioarchitecture, but that the potential for generating a real bioarchitecture might in fact reside in pushing the limits of the digital realm to its extreme. Digital technologies can help us think about how a bioarchitecture could emerge, but it might not be the digital realm that will ensure its actualization.

1 Massumi, B. (1998) Sensing the Virtual, Building the Insensible


3 Ibid.

4 Kirschner, M. (2009) Variations in Evolutionary Biology in The Architecture of Variation, p. 30

5 Oksiuta, Z. (2008) Biological Habitat: Developing Living Spaces in Sk-Interfaces: Exploring Borders – Creating Membranes in Art, Technology and Society. Foundation for Arts and Creative Technologies/Liverpool University Press. p. 134

6 Massumi, B. (1998) Sensing the Virtual, Building the Insensible


8 Ibid.

Categories: Uncategorized

Some Thoughts on Interactivity

December 7, 2009 Leave a comment

In The Language of New Media, Lev Manovich explores the topic of interactivity with new media objects.  I have attempted to summarize his claims concerning this topic:

Contrary to her impression upon using the new media object, the user is not a co-author; she is instead forced to follow a predetermined path, stripping her of agency and, eventually, of her ability to think for herself.  The user is allowed to select chunks of content, offering her the illusion of interactivity.  The only interactivity actually occurring here, however, is the utilization of the user’s cognitive output as program input as she adopts the structure of the program as her own.  The user, through this structured selection, sees the program as fully customized and reflective of her personal preferences and ideas, assuring her of her uniqueness, and thereby supplanting her need for personal associations with hyperlink associations generated by the program, which the user then accepts as an externalization of her own thought process.  The user thus learns to prioritize selection within the context of any program over personal evaluation, and the line between information access and psychological engagement is blurred, making both navigation and immersion difficult and leaving the user dependent on the program for navigation as well as for providing the path to a finished creative product that once would have been the result of her own psychological engagement.

My purpose is to explore and also to contest these ideas in light of personal experience with and knowledge of a few current new media objects.

A new type of interaction with new media objects has begun, with applications that perform a task so specific that the act of simply activating them is a declaration of intent.  Instead of searching and directing our own navigation on an all-purpose search engine, we search a library of applications in order to find one that will navigate these types of queries for us.  We navigate the world of many mini-navigations.  Instead of trying different combinations of keywords in search of the phrase that will yield “good Lebanese restaurants within 20 miles that are within 5 miles of a Target,” we search for “iPhone applications restaurant locator augmented reality,” download the app, and then activate it by touching the Yelp! icon when we are in need.  In the most absolute sense, the user is following a “pre-programmed” course of “objectively existing associations” (61); in fact, the only user input here (once the application has been downloaded) is the choice to activate the app and the automatically determined GPS coordinate location of the user.

In this way, the user’s direct engagement with the application is structured and objectively orchestrated, but on a different level, she is an agent.  Yes, she has selected a certain group of smartphone applications from a library or database that she has somehow decided will serve as her tool set for performing daily tasks, thereby literally manifesting Manovich’s ‘selection logic,’ but she has consciously chosen the structure of her tool.  In the case of Yelp!, she has chosen an application that uses the method of collective filtering as the primary structural element directing navigation; in choosing this application, she has also chosen collective filtering as her own method of addressing the task at hand.  In this case, the method of collective filtering is transparent; it is the marketed feature of the application.  Niche-use applications such as Yelp! compete for users not based on what they do but on how they do it.  Featuring the way the program sources information has become essential, as the usefulness of an application is dependent not on what it does, since the user is assumed to have already bought into this concept, but on how well its method of doing what it does works.  There is a clear goal that the program is designed to reach, and its ability to do so relies on its method.  The user chooses to outsource navigation of a specific kind to a program that uses a certain method for anticipating the user’s desires and access/engagement needs.  The user could have chosen other applications serving the same information access purpose that run on completely different but equally apparent predetermined methods of data processing.  Other niche-use applications, such as Pandora, rely on algorithmic analysis of program data combined with user-history filters that generate “suggestions” based on certain matches of specific data characteristics, i.e. “if you liked this, you probably will like this similar media object.”  Thus, in consciously choosing her method, the user is providing for at least the possibility of conscious cognitive synthesis, considering the output of an application one factor in a much more complicated network of associations, interpretations, and information leading to the formation of a decision.
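The “if you liked this, you probably will like this similar media object” matching described above can be sketched in a few lines. This is a hypothetical illustration, not Pandora’s or Yelp!’s actual algorithm: each media object is reduced to a set of characteristics, and unheard objects are ranked by how many characteristics they share with the user’s history (all names and traits below are invented).

```python
def suggest(history, library):
    """Rank unheard objects by characteristics shared with the history."""
    liked_traits = set()
    for item in history:
        liked_traits |= library[item]  # pool the traits of liked objects
    candidates = [name for name in library if name not in history]
    # Score = number of traits an object shares with the user's history.
    return sorted(candidates,
                  key=lambda name: len(library[name] & liked_traits),
                  reverse=True)

# Hypothetical catalogue: names and traits are illustrative only.
library = {
    "song_a": {"acoustic", "female_vocals", "folk"},
    "song_b": {"acoustic", "folk", "slow"},
    "song_c": {"electronic", "dance"},
}
ranking = suggest({"song_a"}, library)  # → ["song_b", "song_c"]
```

The sketch makes the point about method transparency concrete: the user who knows her tool ranks by trait overlap can weigh its suggestions as one factor among her own associations, rather than mistaking them for her own taste.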

Given my lauding of the agency inherent in the opportunity to select an application based on its method of processing data, it is easy to assume that I address only applications with a single purpose and scope.  This claim of agency in transparency of method is problematized by applications that have developed multiple functionalities with different methods of processing and presenting data, as well as different methods of structuring interaction among those functionalities.  UrbanSpoon (also a restaurant locator) is an application with the singular function of providing access to and navigation of information.  Other applications, now including Yelp!, have developed, often through a series of ‘upgrades,’ additional functionalities, including that of social and psychological engagement.  In Yelp!, the user can review restaurants herself as well as access the reviews of others and, using the GPS coordinate location input, can view other users’ current locations.  The user now has the ability to initiate a real-time online chat with other users.  This is often used to initiate online conversation with a user at a restaurant or bar with the intent of requesting current information about the restaurant, such as how crowded it is or whether the specials are any good that day.  This function can also be used to organize ‘spontaneous’ get-togethers of friends who happen to be in the same area near a favorite location, or even as an advanced online dating tool, allowing users to view one another’s profiles to determine compatibility before initiating chat and utilizing mutual knowledge of geo-location to orchestrate a meet-up.

While these applications may not encourage the same conscious choice of method on the part of the user, the user still forms the personal associations particular to internal cognition and agency.  Manovich’s notion that “before we would look at an image and mentally follow our own private associations to other images, [whereas] [n]ow interactive computer media asks us instead to click on an image in order to go to another image” (61) excludes the possibility that personal associations might be interwoven with the hyperlink associations generated by the program.  Upon activating Yelp! while driving, the user might see highlighted a Moroccan restaurant coming up on her left.  This result may be highlighted as a result of her past frequent high ratings of Lebanese restaurants, of the consistently good ratings this restaurant has received from other users, or even of the presence of a friend at the restaurant at that time.  However, the user is by no means guaranteed to unquestioningly turn left, enter the restaurant, and enjoy the food.  The user may have a particular dislike of Moroccan food or a desire to eat alone.  She may even choose to disable the app in order to listen to a podcast of which a personal association with Moroccan food reminded her.  The restaurant app did not anticipate, nor does it benefit from, this kind of tangential personal association.  The limited purview of many new media applications may actually reinforce the user’s consciousness of the role of her personal evaluation, as well as an awareness of her intent in using the application, whether for navigation and access or psychological engagement or both.  The user may find psychological engagement with another user via chatting on Yelp! difficult, but it may be the possibility of access to real-life initiation of psychological engagement with fellow restaurant-goers for which the user has selected the application.  The user may fully intend to use the information gained from this application to generate and control an immersive psychological experience with another person, during which she would be assessing compatibility based solely on the interaction itself and not on the person’s user profile.  The user’s choice in managing and layering the influence of the program data and her parallel real-life informational input in order to create a mixed reality represents a different type of user agency.

This user has selected an application and will process the application’s output in a way that utilizes personal associations and occurs as some form of internal cognition; but through what process does the user do so, and, given this, what is the user’s output?  The application, here Yelp! again, is a hypermediated environment in which the user has agency through the act of remediation.  She imports the application’s output, separating the spatialized augmented reality model, the user reviews, the real-time interactions, and the representation not only of physical locations but also of possible physical experiences (the implied act of eating, enjoying entertainment, etc.), and then processes it in an internal cognitive space in which personal associations augment and complicate the output, which has become user input; she then eliminates, translates, links, problematizes, resolves, analyzes, and refashions this complex media input and her associated information.  The result is an action or decision that is much larger than the output of the predetermined structure of the program itself.  This action or decision may manifest in a future rating, review, or locative event that will feed back as input to the application.  However, the user willingly offers up her cognitive labor as program input with the goal of receiving more efficient and organized access to a vast database of information previously unknown to the user, of such a great quantity that the user could not sort it on an unguided trajectory (or does not wish to), thereby making accessible information she would, in all likelihood, not have been able to access without a programmatic structure.  This then informs her personal cognitive processes, which she can then choose to offer up to the application as input, with the goal of increasing the future usefulness of the application for her own purposes.

The user also engages in a kind of macro creativity of association, combination, and, sometimes, advanced augmentation.  This kind of augmentation occurs on a spectrum.  A user may choose to use a variety of applications in order to foster real-life collaboration.  The user and her collaborator may employ a mixture of new media applications, old media objects, online interactions, and real-life interactions.  This collaborative project does not preclude immersion in either engagement with the collaborator or the process of creating the project.  The user switches between accessing information, identifying and sharing tools, immersing in work, and immersing in the collaboration, all the while layering and selecting the tools most suited to the task, time, or location.  This example of the creation of a mixed reality (as discussed above) has not caused the users “to mistake the structure of somebody else’s mind for [their] own” (61).  It has allowed them to utilize the work of the minds of others to engage in a different way with work of their own.

On a different end of the spectrum of augmentation, the user may engage in hacking or re-programming of a new media object itself.  This may involve exploiting the hardware of a device through re-programming it to run custom software that allows it to perform a different extended function that the user deems more personally useful.  Or it may involve creating a personal software application complete with the desired tools or functions through the utilization of both/either the framework/structure of the program itself (algorithmic processes) or the ideas behind the structure of the program (input and output formats, means of access, interface style, method of acquiring user input or external input, etc.).  As Manovich points out, “instead of identical copies, a new media object typically gives rise to many different versions” (36).  However, these versions are not necessarily generated from a selection of templates or by “simply clicking on ‘cut’ and ‘paste,'” (130) as Manovich has claimed are the operations possible in new media applications.  Here new media inspires a kind of variability not generated from a predetermined tree of selection options but from genuine creativity on the part of the user.

While “[p]ulling elements from databases and libraries” may be the default method of operation within a particular application, it is misleading to say that “creating [elements] from scratch becomes the exception” (130).  The user may select applications from libraries and use discrete ‘elements’ from databases, which are processed according to the specific structuring method of the application; however, agency and creativity have not necessarily been supplanted.  The user’s agency and creativity lie in the conscious choice of program structure, in personal associations and interpretations of program output data, in the re-programming of the applications themselves, and in immersion in experiences that are inspired by and interwoven with these applications.

Categories: Uncategorized

What to do with “I”

December 7, 2009 Leave a comment

It just occurred to me, as I start to file away the various materials that I collected over the span of this semester, that a good portion of the work we discussed in this class won’t fit, one way or another, into the extra-large manila file that I labelled “Art & Lit in Dig Dom.”  I don’t have a copy of the Breathing Wall, or the requisite system/accessories to run it; I don’t know how long many of the “assigned” links will remain active; my class notes criss-cross from a composition pad to floating Word documents; the final project that Whitney and I produced isn’t in either of our possession. Without dragging out the all-too-obvious allusion to Diana Taylor’s lecture “The Digital as Anti-Archive,” this situation prods me to consider what is to come of our involuntary memory systems, especially with respect to the arts.  Back in August, a friend and fellow poet recommended this class to me, urging me on the grounds that it would introduce me to new ways of thinking about poetic form.  Gesturing to the material conditions of digital poetry, he made a comparison to the popularity of typewriters with modernist writers, how the ability to set your own type relaxed the conception that poetry is smooth on one side and bumpy on the other.  Looking back, however, over my errant swaths of notes, it seems that the biggest change intrinsic to the switch from paper-based composition to computer-based composition is one that is at the heart of generic theories of the lyric: the distance between the speaker of the poem, the figure of the poet, and the flesh-and-blood human who in most cases composes the verse.  Marshall Brown sums this contention up in his essay “Negative Poetics”:

“The poem says,” “the speaker says,” “Wordsworth says.” In our everyday critical usage these three assertions become indistinguishable. But they shouldn’t be. Instead, there is a speaker and there is a poet who gives the speaker voice. And there is a poem, which is a combination of the two voices – speaker and poet. The speech and the voice that re-cites the speech operate in tandem to give poems their depth.

To see the personal pronoun “I” in a paper-based poem is to ask the question: to whom does it refer?  If the same pronoun shows up in a digital poem, the question of authorship, especially in the case of a randomly generated, recombinable poem, is severely complicated.  Does such a complication compel us to refigure the already muddled discussion of the lyric genre in order to account for this new signification?  Or does it pose the “I” as ineffable and de-emphasize staid notions of subjectivity and authorship?  I leave you/us with a quote from Christopher Funkhouser (see his essay “Digital Poetry” in A Companion to Digital Literary Studies) on the banal possibilities and marvelous impossibilities that await the poet who longs to make poems that won’t fit, one way or another, into a manila folder:

Author(s) or programmer(s) of such works presumably have a different sense of authorial control, from which a different sort of result and artistic expectation would arise; consequently, the purpose and production would veer from the historical norm. Because of this shift in psychology and practice, digital poetry’s formal qualities (made through programming, software, and database operations) are not as uniquely pointed and do not compare to highly crafted, singular exhortations composed by historic poets.

Categories: Uncategorized

mapping movement with moving maps: unfixed grids and dynamic meaning

December 4, 2009 Leave a comment

Here is the link to our final project:

In this project, we examine a variety of critiques articulated in discourses on the history of cartography. At the dawn of wide-scale technical and cultural transformations made possible by the current development of digital technologies, we propose a number of visual strategies for engaging creatively and critically with these critiques.

The design gestures toward our specific interests in mapping: experiments with surface and depth; background and foreground; scale; layering; connectivity and disjunction; shifting, stasis, and time; the visible and invisible; unhinged structure; and, importantly, navigation that continuously reformulates through interaction. The design is a type of aesthetic mapping that moves us toward the kinds of maps we envision.

Four different node clusters cut across the design experiments. Their relationalities emerge as you explore and experience the site, generating a map in dynamic flux.

This site is best viewed at 1920 x 1200 resolution on a large screen with the Firefox web browser.

Categories: Uncategorized

Charles Bernstein on Seriality

December 4, 2009 Leave a comment

In reading Charles Bernstein’s essay on Charles Reznikoff, “Reznikoff’s Nearness,” I found a quote that, at least for me, helps to locate digital poetics in the context of print-based models.  His closing remarks on hypertext’s distinct penchant for nonlinear readings point to the medium’s potential for poetic seriality.  A question that occurred to me is how much a work of recombinate literature privileges readings that weigh heavier on form than content (by this I mean addressing the text, the content, only as a means to discuss the more conspicuous element, its recombinable form).  I shudder at dividing a poem, no matter its medium, into categories of form and content, but it seems to me that recombinate poems share with sound poetry a resistance to close reading.  I’m wondering if this is a characteristic generalizable to the bulk of contemporary conceptual writing.  Another trait unifying paper and digital poetries?

(By the way, if you’re not up on Reznikoff, I highly recommend his book Testimony…a collection of poems that take their language and occasion from early twentieth century court records.)

Here’s the Bernstein:

There are a number of serial works that are not intended to be read only or principally in the order in which they are printed.  (Serial reading opens all works to recombination.  My favorite image of readerly seriality is David Bowie in Nicholas Roeg’s “The Man Who Fell to Earth,” watching a bank of TVs all of which were rotating their channels.)  Robert Grenier’s Sentences—five hundred discrete articulations each on a separate index card and housed in a blue Chinese box—is the best example I know of extrinsic seriality, though two other boxes of cards also come to mind: Jerome Rothenberg and Harris Lenowitz’s Gematria 27 (twenty-seven recombinable numeric word equivalences) and Thomas McEvilley’s cubo-serial 4 (forty-four four-line poems).  In principle, hypertext is an ideal format for this mode of composition since it allows a completely nonlinear movement from link to link: no path need be specified, and each reading of the database creates an alternative series. (The Objectivist Nexus 222)

Categories: Uncategorized