Some comments on, and two reservations about, #AltLit

I’ve recently joined (found?) the massively distributed online (sub?)culture that calls itself (is called?) Alt.Lit. My journey of discovery is not so important, but it started on Twitter, moved to Facebook, and now I see it everywhere. And let’s be honest, it’s not really a literature movement anymore; it’s become a cultural juggernaut steamrolling everything in its path. Instead: ‘Welcome to the age of feelings’. In a different vein, but from the same cultural milieu: Welcome to the age of,

…I and most people published that I know of…honestly believe that there is no good or bad in art (for example I 100% believe a 10-year-old’s writing is not less good than James Joyce’s, or replace either with any people)

It’s almost impossible to take that statement seriously (do I even need to mention Freud’s ‘reversal into the opposite’?). No one has talked seriously about art/literature/whatever in explicitly good or bad terms since at least the 60s (good/bad relative to some ‘end’ or cultural/communal/artistic standard, sure, but that’s a far cry from a 10-year-old being as good as Joyce).

If I wanted to get Nostradamic I’d be tempted to say that “Alt Lit is the current vision of young Americans’ cultural future”. On what grounds do I make this claim? On no grounds – and that’s the point; a little taste of ‘alt lit’ (Alt LITE?). A blog post I read proclaimed that ‘Postmodernism is dead; Long Live Alt Lit’ and I quite earnestly had no idea whether the détournement of that phrase was intentional – whether the writer was aware of the irony, or of the effect of twisting the original. Did they mean to do that to the meaning of “The King is dead; Long live the King”? Yeah, there’s a long tradition of pop-cultural mangling and repurposing of the phrase. What, after all, did the original mean? Is that “the point”, or am I giving them too much credit? I don’t think they care. Certainly, no one else in Alt.Lit seems to. Which brings me to,

1) the first of my two reservations about Alt.Lit: How does Alt.Lit do criticism?

How do you criticise something that is (very often) intentionally bad? What would Alt.Lit criticism even look like? Is it all a mask, a shield to forestall criticism? “Hey guys, I know, we’ll never make anything bad if we turn bad into a virtue!” Which seems horribly defeatist to me, but then again perhaps I haven’t “been” defeated in the same way.

It must be pretty terrible to be a young American right now (how quickly things change – remember when everyone hated you guys? Remember Bush?). And I’m not even talking about the economic climate, per se, rather about the libidinal crushing that America faced when the promised “greatest country in the world” never eventuated. Instead you guys got George Dubya and “Don’t panic! Keep shopping!” I mean, fuck, you guys were promised that you were the best! I’d be mad. I’d be mad as fucking hell.

Either that, or be crushed. So in that sense, the reflex to avoid criticism makes sense.

But to have a mature and developed form (if Alt Lit even aspires to such – and I have my doubts about that too) means to have “better” and “poorer” examples of the genre. So far all the criticism I’ve seen has been pretty polarised – “quickshit” as a meme (as if a meme even counts as criticism), or “BOOST” the best stuff.

Is Alt.Lit an experiment in excising negative criticism from the entire system? Forget about anything that isn’t worth “Boosting” and just “Live ur lief” instead? Maybe… but isn’t that almost worse? Neglect is the ultimate “fuck you”. I don’t even care enough to say I think this isn’t good.

What’s perhaps worse is the possibility that instead people just don’t say what they mean when someone isn’t ‘getting’ or doing good Alt Lit or something. When someone is just not doing it very well, does anyone actually say so or does the collective just pass over like the Angel of the Lord? Alt Lit can’t be “everything” – there must be better and worse examples and approaches and goodness knows what else. Leaving those things unarticulated and tacit brings certain political obligations (which I don’t think have been properly addressed… but we’ll come back to that at No.2).

I realised while writing this that I hadn’t actually read enough Beach Sloth to know for sure if he really does much ‘criticism’ or just doesn’t mention the not great stuff. Here’s what I found instead:

I don’t know even what is going on in these three songs. Ghostandthesong makes no sense. This may be one of the most baffling, incoherent journeys ever put into MP3 format. I mean that as the sincerest complement possible.

Which is genuinely funny, and a nice deconstruction of mainstream music reviews… but what would it truly mean for a medium to treat incomprehension in a work as a virtue? Not a kind of “anything goes” postmodern relativity – but instead an absolutely radical, nihilistic, all-encompassing rejection of attempts at comprehension? Probably something excitingly different to Alt Lit, to be honest, because I rather suspect it doesn’t live up to such a stratospheric standard (maybe some of it does – which is what I find exciting).

And I’m not trying to step to Beach Sloth here – I have never met or even interacted with the guy (I don’t think), and too many people I respect have spoken highly of him for me to think differently. Plus – mad props to a fellow curator. I did the hard yards at Critical Distance for a few years so I know what it’s like being an often reluctant gatekeeper for a community. I also dealt with many of the same issues. I usually did just pass over the not-great stuff, but sometimes I did mention it. Sometimes you do need to editorialise, y’know? Anyway. Respect for the Beach Sloth.

And that’s the thing – I really, genuinely like all of the people I know and have met in and through the Alt Lit community. And I really value that. But I do worry that the relentless positivity covers up some (mostly) invisible community effects.

While researching for this piece I googled “Alt Lit criticism” and all I found was this one piece on the Bangolit blog, which echoed many of my own points:

I haven’t seen a single mildly critical, or even questioning, comment on a piece of flarf in a while. The review sites are often not much better—since boosting caught on, their fangs have been pulled. Tiptoe around things you don’t like, hem and haw. To openly dislike something can result in public evisceration (see: Hazel Cummings). Not that it comes up often. Everyone is positive about everything, to a fault.

Not making things any easier w/r/t Flarf is the fact that there is a real history of explicit ‘badness’ to the form, beyond just “crappy” badness à la Facebook. This page (offline? Try a Wayback archive) featuring comments and explanations by many of the pioneers of Flarf mentions several times that racial slurs were an important part of making the early Flarf poems. Gary Sullivan defined Flarf as: “A quality of intentional or unintentional ‘flarfiness.’ A kind of corrosive, cute, or cloying, awfulness. Wrong. Un-P.C. Out of control. ‘Not okay.’”

And here’s my take on this sort of thing: in your closed community you can pretty much say whatever you like. If you and your mates wanna use whatever horrible slur you like in private, go nuts! But as soon as you get out into the world-around-internet you aren’t in a private space anymore. Someone will stumble upon something you’ve written and find it genuinely offensive, horrible, and reinforcing of privilege/oppression/racism/sexism etc. – and they wouldn’t be wrong just because they don’t have your community context. ‘Authorial intention’ (or lack thereof) doesn’t wash. Outsiders misunderstanding it, not getting the “irony” of your subversive/reflexive redeployment of the term “wetback” or “cock-boy” or whatever, doesn’t make it any less of an example of real and actual oppression. Which brings me to my second reservation…

2) And that is that Alt.Lit, as far as I can tell, is so white, so middle class.

If you’re going to do “internet community” as the main exercise of your art scene/movement/etc, and you’re not going to do it in a private forum or whatever – if you couch it as art or literature – then you don’t get a free pass on issues of diversity and inclusiveness and politics. Whether you want to be or not, you are a part of the world, and the world is political. Don’t misunderstand me – it isn’t about being explicitly political, in fact it’s better if you aren’t, but get the political dimensions of what you do and say and who you hang with and BOOST and whatever else.

I’m not wrong, am I? Alt Lit has a diversity problem, in both race and class – it’s pretty great, actually, that there seems to be quite a bit of gender diversity (you’re beating videogames!), but it’s still a pretty huge whitewash. This is a weird position for me to be in because, as an Australian, I am surrounded by whiteness where I live and where I grew up. The stereotypes are kinda true.

I’d love to be wrong about this point, and in fact about all of it, but these points seem pretty important to me (admittedly, a bit of an outsider). The Alt Lit community’s irrepressible positivity is a bit of a problem, even if it is (or has been) such a strength. It’s a bit of a paradox. If Alt Lit is going to be influential outside of just “white kids making stuff for other white kids”, can it keep its positivity and aversion to critique? Because that’s kind of been a defining feature of it… and maybe that’s not always a great thing…

Anyway. I don’t have an answer for either of these reservations, but I’m keen to hear from anyone who has an opinion or suggestion on them, or even just a different take. Beach Sloth! If you read this and want to tell me what you think, my email is on the sidebar to the right. The same goes for anyone else involved in the scene, basically.

Jon McCalmont on Prometheus, Myths and Calvinball stories

At his consistently exceptional ‘Ruthless Culture’ blog Jonathan McCalmont has a great meta-review of Prometheus, in which he locates the film within the broader constellation of ‘myth-making’ in films and modern popular culture that is so prevalent right now. Having not seen Prometheus yet, I can’t really agree or disagree, but his analysis of the wider popular-cultural obsession with mythmaking is very convincing. I had a few reservations, though – possible fault lines in his argument, I guess you could call them.

McCalmont (rather convincingly) argues that “as a culture, Westerners no longer crave stories… they crave mythologies” and he suggests that Prometheus is an attempted critique of that obsession with mythologies (coming at the expense of the ‘neat, self-contained story’, which has indeed rather taken a backseat to trilogies, series and the rise of the ‘franchise’). McCalmont says:

I believe that Prometheus is best understood as vicious critique of the tendency to seek answers to big questions and to weave these answers into some kind of escapist fantasy. Far from providing us with a mythology that makes sense and answers all questions, Prometheus suggests that life is nothing more than a series of random events leading not to Tolkien’s meaningful ‘turn’ but to a sense of profound bafflement.

As I said, I can’t really comment on this aspect of the film, and whether or not it succeeds. But there’s something funny about the way he mixes up the difference between “Big Questions” (aka the metanarratives that post-modernism has been so utterly against since forever, but which it has never really gotten rid of) and questions of the decidedly non-big variety. He notes that,

Though ostensibly a mystery, the plot of Prometheus is really nothing more than a series of doors slammed in characters’ faces by a cruelly indifferent universe. The film begins with a group of humans voyaging to the stars in search of Big Answers to Big Questions.

But some of the questions he lists are not big questions: they are (or should be) answerable, quite straightforwardly, e.g.:

  • What did the android say to the alien upon its awakening?
  • Why did the alien respond to a first contact situation with psychotic violence?

The answers to these are not “because there is a god” or anything meta like that. And that’s the crux of it, I think: if Prometheus is like LOST and other “Calvinball”-type stories, it’s only because such straightforward questions are warped, twisted, or deliberately obscured, as if obscurantism were somehow a statement about the degeneracy of meta-narratives (or even a statement about anything at all other than the arbitrary whim of a storyteller/mythmaker). And this is why I was a bit on the fence when McCalmont stated his thesis as the following:

To my mind…attempts to wring meaning from the text of the film are hopelessly deluded as Prometheus is quite explicitly a film about the absolute futility of seeking Big Answers to Big Questions.

But obscurantism is not anti-metanarrative; in fact it’s just a reinforcement of the meta-narrative of an “indifferent” universe. McCalmont makes the claim that “Mythologies differ from scientific explanations in so far as the logic they use to explain events is narrative rather than causal”, which I’m also not so sure about. Science is, after all, its own mythology. Chris Bateman’s forthcoming “The Mythology of Evolution” touches on some of these issues, with Bateman saying,

the imagery of evolution threatens to distort our understanding of the incredible history of our planet. There is no science without mythology, and the only way to reveal the facts is to understand the fictions.

Bruno Latour has a great quote about the operation of science, saying (and I’m paraphrasing) that it has to explain one thing in terms of another thing, and then that thing in terms of a third, and so on until it ends up looking more and more like a fairytale. Count the number of intermediaries between “you” and the alleged Higgs boson.

So where are we, then, on the issue of Big Questions or metanarratives, and why does McCalmont’s piece seem so indicative of the current moment? I agree wholeheartedly with his assessment, and his term “geek spiritualism” encapsulates it perfectly, but I don’t think we’re even remotely close to a myth-less state, and I don’t think narrative obscurantism actually points to a lacuna or disavowal of metanarratives. I think we’re in a situation where we’ve internalised the post-modern disavowal of metanarratives (the “Big Questions” will never be answered satisfactorily) but perhaps the effort has not been taken seriously, since we can’t disavow the metanarrative of science, as it works so damn well for us at present. (As an aside, many critics of postmodernism have pointed out, since the very earliest phase of its adoption, that a disavowal of metanarratives can itself become a metanarrative.)

I find myself agreeing with McCalmont’s analysis of the dual cultural and market forces that are driving the increased mythologisation of popular culture:

The problem highlighted by the very existence of Prometheus is that the demand for synthetic mythologies is now so intense that it is beginning to distort the nature of popular culture. With fans demanding mythological depth and investors demanding the type of monies that accompany owning people’s fantasy lives, the market for self-contained stories is beginning to shrink.

But I think his argument is a bit of a kludge – narrative obscurantism of the Calvinball type isn’t the same as a real or genuine disavowal of metanarratives (including the metanarratives and myths of science). To my knowledge, one of the few people to take seriously the challenge of a meaningless, indifferent universe is Quentin Meillassoux, with his acausality. But again we find the same tension as in McCalmont’s piece – Meillassoux believes in a fundamental, hyperchaotic and meaningless layer of reality as the only necessary and non-contingent layer of the universe, yet at the same time, the universe at present remains contingent and explanatory mechanisms like science remain accurate, and may remain so until long after humans have disappeared from the universe.

McCalmont ends his essay by saying that he fears for the future of “self-contained stories” in the face of increased myth-making, and that Prometheus, while terrible, perhaps “contains the future of all popular culture.” Which I think is an accurate assessment, but I don’t agree that self-contained stories are a “solution” to the problem of metanarratives. But I remain sympathetic to the desire for less mythologising – though perhaps only because most, if not all, modern attempts at it are so utterly shit.

Attention and Immersion

A video essay about the term ‘immersion’ and why I think it should be replaced.

While video of Richard Lemarchand’s GDC talk is behind the GDC Vault paywall, his slides and text from the talk are online here.

I’ve received a few very nice replies and comments – Shawn Trautman wrote out some of his reservations and emailed them to me, and he quoted some of my reply here.

Robert Yang commented on the video when I posted it to my Facebook account, and he had the following to say:

I’d get more polemical than you and say that the industry actively encourages this fallacy, and the idea of the holodeck / this technology fetish results in an “immersion industry” where the Crysis 3 box says ridiculous shit like “aliens behaving realistically” — what the hell could that even possibly mean and who wrote that copy? But no one cares, because we’re used to it, and that’s what’s worth $60 instead of 6 indie games.


Determinism (mostly for Jenn Frank (but you might be interested also?))

So Jenn Frank wrote an astonishingly great piece ‘On games of chance, cheating, and religion’ and JP Grant added some thoughts of his own about the notion of ‘fairness’ in games, in an equally excellent response, ‘Fair Play’. Go read both of them now if you haven’t yet.

But I wanted to add a little something about the notion of determinism, the spectre of which Jenn mentioned in relation to things like the location of gold veins, being able to win at Jeopardy or the scratch lottery, the notion of a ‘solved game’, and the Christian theological tradition following Calvin.

In essence, if anything is ‘solved’ or ‘fated’ or ‘pre-destined’, what we’re saying is that it is determined in advance, usually by some set of rules which may or may not be discoverable. That’s kind of fine – there are some things which can always be determined in advance, like 8 plus 9, or that a (non-contradictory) square will always have four sides, but all these things only happen in the realm of ideas, as abstractions, or in artificially (arbitrarily?) closed systems. Determinism as a philosophy, ideology or religious doctrine concerns the nature of everything. Whether it’s Calvinism, Newtonian physics, belief in the Roman god Fortuna, or a new-age sense of fate, they’re really all saying much the same thing – that everything is predestined, predetermined. Why? Because if any part of the universe is ‘out of control’ for whichever force does the determining (even the laws of physics) then the whole thing becomes irredeemably corrupted. One atom left beyond the powerful reach of our Calvinist God’s control could – no, would – undermine the whole basis of determinism. Even if this Calvinist deity is omnipotent and knows what this ‘out of control’ (hello, free will) atom will do, the deity reduces the real agency of the free atom utterly, and we’re now splitting semantic hairs over our definition of determinism (“If I have ‘free will’ but nothing I do could possibly ever change anything from its set course… how is that free again?”). And if it’s left up to “chance”… well, who’s omnipotent now? The point about a philosophy of a determinist universe is that it is so utterly totalising – it’s all or nothing, otherwise it’s not determinism.

But maybe you’re not convinced – after all, how do we know that it’s not deterministic? Well, here’s where it gets a bit tricky, because we really come to this question with a lot of baggage. Like Jenn says, we worry about the answers to these kinds of questions, and that makes us want to stay away from them, or at least makes us anxious about asking them. It’s also difficult because we’re already treading on the toes of philosophers, who all come with their own historically specific baggage, which in turn is already affecting how we’re even talking about this issue right now…

So if we’ve got all this baggage, where do we start? One way is to start by pinching the best idea that Science ever had, which is to begin from a position of utter, naïve openness to revision – no problem is ever permanently closed to inquiry; no question is beyond asking; no contrary evidence is ever ignored for the sake of preserving our current (even working!) answers. This kind of attitude has actually gotten a bit of a bad rap lately because it’s been perverted and selectively deployed to spectacular effect by people with an agenda other than inquiry-for-inquiry’s-sake. As an aside, in Australia in 2007 over half the population polled in the affirmative when asked whether or not they believed in human-influenced climate change. Since then that number has plummeted as tabloid media and right-wingers colluded to cast unreasonable doubt on the issue. We used to believe, but now it’s “not a settled science” once more. That’s not what I’m talking about – these people are no more presenting real challenges to climate science than Ron Paul is really going to take a libertarian position on women’s reproductive rights.

But back to the issue of determinism. What are the odds that the universe is deterministic? Okay, ‘odds’ is not a good way to phrase it. How about, ‘What are the possibilities with respect to whether or not the universe is deterministic?’ That’s a much better frame for the question, because now we can see that, actually, there are only two options – either it is, or it isn’t.

Well, actually we’ve already seen a bit of a third option, and that is that determinism is ‘unevenly distributed’ around the cosmos, or occasionally pops up in localised regions of time or space. But as we said at the outset, that’s not determinism – it’s all or nothing, baby! Either there’s an actual, real chance that an atomic speck influences the fate of the rest of the cosmos, or there’s not. Implicit within our culturally overburdened notion of ‘determinism’ is the assumption that all of the universe is consistently deterministic, otherwise… it’s not really determinism! Ta da! So we’re back to two options. The universe and everything in it is either deterministic or it isn’t.

From here we can go in a number of directions – perhaps we can draw on some fancy modern science and apply what we know about popular theories in advanced theoretical physics like string theory, ‘M-theory’ and other quantum-mechanical frameworks. Or alternatively we could take the Pratchett-esque route and say that it’s ‘turtles all the way down’, and that rather than having a ‘bottom’, the universe just… keeps on going, all the way down, down, down into the depths of Hades and beyond. It’s hard to imagine such a thing, but it’s really quite difficult to say that it’s beyond the realm of plausibility. Still, it’s just as hard to imagine that this never-ending, fractal-esque universe would behave in anything resembling a determinist manner. Part of the appeal of determinism stems from its finitude, in the sense that something starts a chain that is predictable and utterly determined from the very outset.

So whether the universe contains an infinite regress of ‘things’, of increasingly ultra-tiny bits of stuff, also impacts our assessment of the question of a determinist universe. If the very bottom level (let’s just say it’s quantum strings) is all irreducibly small and made of the same ‘stuff’, then how that ‘stuff’ behaves makes a difference to the nature of the universe. In fact, all the universe is is that stuff, and if that ‘stuff’ really is strings, current thinking (as I understand it) is that rather than being deterministic, strings are so weird that they behave based on probability. So whether or not you get out of bed and brush your teeth in the morning is underpinned by strange stringy bits with 26 dimensions all behaving in a probabilistic manner… and by that stage we’re not living in a determinist universe.

But before we go home with our newfound suspicion that we’re probably (ah! ahahahahaha!) not living in a determinist universe, we should make one small detour back up to the realm of medium-sized objects and remind ourselves where a limited kind of determinism does exist – and that is in abstractions, ideas and in arbitrariness.

And this is where we come back to games, because most games are exactly that – abstractions, rules, ideas, and arbitrariness incarnate. In their ‘pure’ (think Platonic) form, every game probably could be deterministic, but games don’t exist as pure thought or rules, because games are done, or they are played. Where are they played? In the universe. What is the universe? Probably not deterministic. And despite our best efforts, our lucky or careful organisation, there really is no predicting when the indeterminacy of the universe will intrude. Even these machines – these localised realms of determinacy we call ‘computers’ – depend on other things, like the continued operation of the laws of electro-conductance, as well as on the manufacturing standards at Xbox HQ. And while it might even look as though certain ‘universal laws’ like electron conductivity are themselves ‘deterministic’ from the point of view of an engineer or software developer, we would do well to remember that these laws themselves are contingent. That is, at a certain point in the far, far, far, far distant future, at the end of the universe even, according to physicists these laws are going to break down. If they’re right then the universe will eventually have expanded enough to rip apart even atoms themselves. Try running your Xbox in that kind of an environment.
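To make that ‘localised realm of determinacy’ a bit more concrete, here’s a minimal sketch (the dice-rolling game and seed value are my own illustrative choices, not anything from Jenn’s or JP’s pieces): a game’s random number generator, given the same starting seed, replays exactly the same sequence every single time. Deterministic in the abstract – even though the physical machine it runs on is at the universe’s mercy.

```python
import random

def roll_dice(seed, n=5):
    """Simulate n six-sided dice rolls starting from a fixed seed."""
    rng = random.Random(seed)  # a self-contained, seeded generator
    return [rng.randint(1, 6) for _ in range(n)]

# The 'pure' game is deterministic: same seed, same rolls, every time.
first_play = roll_dice(seed=42)
replay = roll_dice(seed=42)
assert first_play == replay  # identical sequences, guaranteed by the abstraction
```

Even the “chance” in a videogame, in other words, is usually just determinism wearing a funny hat – the indeterminacy only gets in from outside the abstraction.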

But hey, these predictions could be wrong – remember, we’re not allowing ourselves the option of shutting down necessary revisions early. But at the same time, that’s also kind of appropriate. If we do live in a probabilistic universe, we may never really, truly and necessarily be able to prove it. That makes sense, I think, and it seems like a beautiful kind of symmetry, wouldn’t you say?

Audio: Neuroscience, Technospectacularism and the Mind

On Wednesday the 9th of November I presented a paper to the Knowledge/Culture/Social Change International Conference at UWS Parramatta. The title of my paper was “Neuroscience, Technospectacularism and the Mind” and I recorded my talk, which you can listen to below. The 29-minute recording includes some questions asked by the audience at the end – the first from Greg Haigne, Associate Professor in the School of Languages & Comparative Cultural Studies at UQ (who presented a very interesting paper the previous day), and one from Professor Penny Harvey from the University of Manchester.

[haiku url=”http:/wp-content/uploads/2011/11/Ben-Abraham-Neuroscience-technospectacularism-1.mp3″ title=”Ben Abraham – Neuroscience, Technospectacularism and the Mind”]

Direct download.

The phrase “Technospectacularism” is an adapted version of a phrase from the opening pages of Ian Bogost’s How To Do Things With Videogames, and I think it’s an incredibly apt phrase to describe our time. The thesis of the paper itself is a reaction to what I see as an upswing in the use of neuroscientific findings as a blunt weapon of persuasion by academics, journalists and authors outside of – or on the periphery of – the field itself. To counter this dangerous misuse of the unfinished science of the brain I drew on William Uttal’s critically important work suggesting that the brain–mind problem may be intractable. From there I spun out a hypothesis based on the “extended mind” thesis of Andy Clark and David Chalmers, as well as Graham Harman’s Object-Oriented Philosophy, suggesting that the mind is a real object with just as much reality as the touchable stuff of the brain, despite being made up of different “stuff” to the brain alone.

I’m very interested to hear your comments or concerns, and will certainly entertain requests for clarification – my email address is on the sidebar.

Ada Lovelace Day 2011: Talking ’bout Ms Morrison

So it’s Ada Lovelace Day again, and I thought I’d talk briefly about the impact that Ms Morrison, my year 11 & 12 maths teacher, had on me.

She was the first (and only) maths teacher to succeed at making me face up to the importance of laying down foundations for higher order mathematical thinking. In my case, it was learning the times tables properly, which I’d never done in all my years of prior schooling. Amazing, I know – I got through about 10 years of school and high school without properly memorising my times tables.

Somehow, from the ages of about seven to seventeen I’d gotten through by relying on my ability to recollect some easy ones, and extrapolating from those I knew (5×5=25!) by doing some quick arithmetic to get the ones I couldn’t ever remember for the life of me. The unappealing practice of classical ‘rote learning’ was something I tried my very hardest to avoid, both in and outside of school, preferring to rely on the rule that the interesting things are more memorable anyway.

It’s an approach that’s held me in reasonable stead, as it comes with an uncanny ability to recall contextual memories. Hey, remember that time we carried a railway sleeper all the way up a 500m escarpment track in 40-degree heat because you wanted to build a tree-house in your backyard and I got sticky sap on my Transformers t-shirt? Then we could barely fit the huge thing into the tiny Suzuki Swift and I had to sit in the front seat with it half crushing me, all the way home! Yeah, good times.

There’s something about the essence of the fact floating unconnected from the reasons why it’s a fact that seems to make it particularly hard for me to remember. Why, after all, do seven bunches of six total forty-two? What rhyme or reason is there for it? It just purely is. It’s a product of the base-10 system, but that really doesn’t explain all that much.

So Ms Morrison helped me realise that times tables were worth the trouble of rote learning. But she didn’t just make me do it (no teacher can make any student do anything) – she convinced me. How? Through a number of things that add up to her being a god damn amazing high school teacher: she treated her class as adults, she was brilliant and creative in her explanations and demonstrations, and she was a real human being. When she was annoyed with the highly authoritarian, aspirational waffle our principal liked to recite at assembly (“Teachers teach! Learners learn!” was a favourite) she’d agree with us when we grumbled about it in class afterwards. When we couldn’t understand how exactly calculus worked, she’d come up with another demonstration of how the semantic language of maths performs the crazy conceptual work of slicing the area under a curve into infinitely small sections and measuring them. And when we didn’t come to class she’d hassle us for missing actual things we had learnt, rather than simply for “missing classes”.
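That area-under-a-curve demonstration can be sketched in a few lines of Python (the particular curve and slice counts here are purely my own illustrations, not hers): slice the region under the curve into thin rectangles, add up their areas, and watch the sum converge as the slices get thinner.

```python
def riemann_area(f, a, b, slices):
    """Approximate the area under f between a and b using left-hand rectangles."""
    width = (b - a) / slices
    return sum(f(a + i * width) * width for i in range(slices))

# Area under y = x^2 between 0 and 1 (the exact answer is 1/3):
for slices in (10, 100, 10_000):
    print(slices, riemann_area(lambda x: x * x, 0, 1, slices))
# Each refinement creeps closer to 1/3 - the 'infinitely small
# sections' of calculus are just the limit of this slicing process.
```

The conceptual leap she kept finding new ways to demonstrate is exactly the jump from these finite slices to the infinitely thin ones of the integral.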

See that’s the thing – when you’re teaching your students important and useful things, you actually get the right to harass them for not coming and learning. That’s the key point that many of the stricter teachers missed: you have to have something worth learning to get the right to be annoyed, angry, upset, &tc. when students skip your classes to go to the shops and get some lunch from The Professors’ Charcoal Chicken. Ms Morrison quite reasonably got that.

And the devotion she showed to her work, while supremely evident each day, didn’t just end at 3:30. She is literally the only teacher who ever gave out her email address to us, with the offer of answering questions we didn’t understand so we wouldn’t have to wait a full week to get the answers in class. And ask we did! In her emailed answers she’d scan whole pages of working-out to show us where we were going wrong, what we should be doing, and why.

She was savvy too. She didn’t just teach us the interesting or worthwhile parts – she taught us what we needed to know for the exams, and made reasonably prescient predictions as to what, generally, would be in them based on past papers, trends, and the kinds of questions the papers had asked lately. She played the examination game, essentially, and played it on our behalf.

Our cadre of 3-unit maths students dwindled from a full class to about 8 students as the two years went on, with several people dropping out along the way. Ms Morrison never took it personally or viewed it as a failure though – she was quite tactical about it. If changing to 2-unit even a month before the exams was going to get you your best results, with scaling taken into account, and if you were going to use the remaining time to get marks elsewhere, then go for it.

She encouraged our class’ sense of camaraderie, celebrating birthdays with cakes and other occasional rewards and encouragements, and she stirred our competitive sides to our collective benefit. After occasional topic tests a list of the ‘Top 5’ students would be placed in a prominent position in the room, and I still remember fondly the one time I beat my friend Lachlan at a test – such victory! Such a sense of achievement! Fuck badges and points and all the rest of that shit; the payoff from working your ass off at something and finally beating your friends, beating someone who has in the past always been better than you, is worth more than money.

Ms Morrison of Blaxland High invested in her students. We felt like we let her down when we didn’t find the time (or make the time) to do the exercises and homework she knew we needed to really comprehend a mathematical approach. When we did well in exams, I felt like she did well too. We were, after all, the culmination of two years of her hard mental and emotional work – it took far more than just opening the book and explaining it to ‘teach’ our class, and it was just our luck that we had a teacher as brilliant and dedicated as her to do it.

Addendum: An interesting point of note – 3 of the 7 or 8 students of Ms Morrison’s who went on to finish 3unit maths in 2004 have gone on to PhD level study. Not a bad success rate.

Orwell on miners; the working-class; and unemployment in the 30s

I’m reading George Orwell’s The Road to Wigan Pier, as a re-tweet from my diligent twitter friend Sam_Crisp alerted me to the fact that the University of Adelaide is periodically releasing out-of-copyright e-books (and Orwell, having been dead more than 50 years, is now out of copyright in Australia. I think he’d be pleased with that, actually).

Wigan Pier, which was written in and around a number of mining towns and heavy industrial cities in Northern England, has got a number of fantastic passages and I thought I’d just highlight a few of them. The opening chapter describes his time lodging with a couple who ran a boarding house/store type establishment and it is a pure and unmitigated horror. Bugs, horrible food, cold and smelly, five men to a room coming and going at different times; it’s a kind of unimaginably Dickensian existence that you wouldn’t believe unless it were described to you by someone who actually lived it, as Orwell did. He was something of an investigative novelist, a slower counterpart of the investigative journalist, and it allowed him to get a real sense of (what Latour would call) the whole State of Affairs. There’s something in his perfect descriptions that is very much an ancestor of Latour’s own methodological approach. There is no ‘explanation’ lacking after one has read an Orwellian description; everything is entirely laid bare.

In chapter 2 he outlines in captivating prose just how tough it is being a miner – how much your whole body is moulded into the task of mining, no doubt coming to define your very existence:

Before I had been down a mine I had vaguely imagined the miner stepping out of the cage and getting to work on a ledge of coal a few yards away. I had not realized that before he even gets to work he may have had to creep along passages as long as from London Bridge to Oxford Circus. In the beginning, of course, a mine shaft is sunk somewhere near a seam of coal. But as the seam is worked out and fresh seams are followed up, the workings get further and further from the pit bottom. If it is a mile from the pit bottom to the coal face, that is probably an average distance; three miles is a fairly normal one; there are even said to be a few mines where it is as much as five miles. But these distances bear no relation to distance above ground. For in all that mile or three miles as it may be, there is hardly anywhere outside the main road, and not many places even there, where a man can stand upright… what I want to emphasize is this. Here is this frightful business of crawling to and fro, which to any normal person is a hard day’s work in itself; and it is not part of the miner’s work at all, it is merely an extra, like the City man’s daily ride in the Tube. The miner does that journey to and fro, and sandwiched in between there are seven and a half hours of savage work. I have never travelled much more than a mile to the coal face; but often it is three miles, in which case I and most people other than coal-miners would never get there at all. This is the kind of point that one is always liable to miss. When you think of the coal-mine you think of depth, heat, darkness, blackened figures hacking at walls of coal; you don’t think necessarily of those miles of creeping to and fro. There is the question of time, also. A miner’s working shift of seven and a half hours does not sound very long, but one has got to add on to it at least an hour a day for ‘travelling’, more often two hours and sometimes three.
Of course, the ‘travelling’ is not technically work and the miner is not paid for it; but it is as like work as makes no difference. It is easy to say that miners don’t mind all this. Certainly, it is not the same for them as it would be for you or me. They have done it since childhood, they have the right muscles hardened, and they can move to and fro underground with a startling and rather horrible agility… But it is quite a mistake to think they enjoy it. I have talked about this to scores of miners and they all admit that the ‘travelling’ is hard work; in any case when you hear them discussing a pit among themselves the ‘travelling’ is always one of the things they discuss. It is said that a shift always returns faster than it goes; nevertheless the miners all say that it is the coming away after a hard day’s work, that is especially irksome. It is part of their work and they are equal to it, but certainly it is an effort. It is comparable, perhaps, to climbing a smallish mountain before and after your day’s work.

And yet such tiring work often left them barely above the poverty line, to say nothing of the precarious nature of miners’ work. They were quite often at the mercy of the ebbs and flows of work and supply/demand (in other words, at the mercy of capital) and what did they get for it? Mostly, poverty. More than that though, as members of the working-class they were kept perpetually down. If they were injured or out of work, to collect their allowance they had to spend interminable hours waiting around at the mercy of the disburser. Orwell describes the effects of this treatment in the closing of Chapter 3:

This business of petty inconvenience and indignity, of being kept waiting about, of having to do everything at other people’s convenience, is inherent in working-class life. A thousand influences constantly press a working man down into a passive role. He does not act, he is acted upon. He feels himself the slave of the mysterious authority and has a firm conviction that ‘they’ will never allow him to do this, that, and the other.

The last passage I wanted to reproduce here has to do with unemployment, as a great number of people in the 30s (and today) were out of work or did not get enough work to support themselves fully. Having spent all of one year in a state of chronic underemployment, living off my parents essentially, I totally and completely empathise with the out-of-work and the underemployed. Here’s Orwell describing the effects of it, and countering the myth that unemployment is a time for productive self-directed work or leisure. Keep in mind this is pre-WWII:

But there is no doubt about the deadening, debilitating effect of unemployment upon everybody, married or single, and upon men more than upon women. The best intellects will not stand up against it. Once or twice it has happened to me to meet unemployed men of genuine literary ability; there are others whom I haven’t met but whose work I occasionally see in the magazines. Now and again, at long intervals, these men will produce an article or a short story which is quite obviously better than most of the stuff that gets whooped up by the blurb-reviewers. Why, then, do they make so little use of their talents? They have all the leisure in the world; why don’t they sit down and write books? Because to write books you need not only comfort and solitude — and solitude is never easy to attain in a working-class home — you also need peace of mind. You can’t settle to anything, you can’t command the spirit of hope in which anything has got to be created, with that dull evil cloud of unemployment hanging over you.

A magnificent summary.

Facebook, lolcats, and matters-of-concern

So an unexpected thing happened recently: my Facebook-ing practice rather drastically changed. Primarily I used to use Facebook as a place to post interesting links to things worth reading – new research and reports on social trends for good or ill; or a particularly insightful piece of political analysis; or something about a new bit of technology that has interesting implications for living. Whatever it was, the implicit understanding was that I wanted my friends to read it and see what I saw of value in the story.

But for some reason I’ve almost entirely stopped doing that now. Why?

The thing I spend most of my time doing on Facebook now is, perhaps unsurprisingly, arguing. The number of “serious” conversations I’ve gotten into on FB over the past few months is a bit embarrassing. Almost always they’re about religion, or religious attitudes and behaviours towards women, gays, minorities, etcetera, and occasionally they are with strangers, but usually they’re with ‘friends’ or at least acquaintances. Why this sudden change? I’m beginning to think that perhaps it’s because, for all my linking and leaving the ‘evidence’ out there for people to find, many people just haven’t been noticing or haven’t taken it on board. My ‘links’ don’t seem to be having the desired effect.

At the same time as this, I’ve shifted away from using actual pictures of myself as profile pictures, towards baby photos, memes and photos of famous individuals – Fidel Castro, in particular. I’ve also changed my display name to ‘Comrade Ben Abraham’, a thing that started as a joke but which seems to fit within this same pattern.

So am I a Facebook activist? Not quite. Rather than activism I’d like to connect my practice with a different (and newer) tradition addressing Bruno Latour’s matters-of-concern. A practical demonstration is in order. The comments thread at the bottom of the piece I wrote for Gamasutra responding to the ongoing conditions of inequality in game development (and criticism) is a fantastic example of where the new battles are being fought.

Presented with incontrovertible evidence that sexism is produced through unequal wages (just one powerful example, and one with much ‘hard evidence’ – or so I thought!) many commenters decided not to accept my matters-of-fact that ‘sexism exists’ and that ‘it’s a really big deal’ and instead attempted to debunk my position. Reading through these comments I began to deeply empathise with and understand Latour when he expressed his doubt and fears in his excellent paper, ‘Why has critique run out of steam?’, saying:

Remember the good old days when revisionism arrived very late, after the facts had been thoroughly established, decades after bodies of evidence had accumulated? Now we have the benefit of what I call instant revisionism. The smoke of the event has not yet finished settling before dozens of conspiracy theories begin revising the official account, adding even more ruins to the ruins, adding even more smoke to the smoke.

To see that in action, one needs only glance over the comments. The very first commenter, one Robert Ferris, says:

Alison Croggon’s claim that “It’s just a fact…you can just take it as read that if there’s a woman’s name attached to something it will attract less notice” makes no sense. You can’t simply take judicial notice in a societal discussion. You must back it up with something, because (as with the wage study above) you will either gain a weapon to bludgeon people into action, or (and, yes, this is a real possibility) you will learn that your premise is wrong.

Oh dear. Apparently Croggon’s ‘facts’ aren’t really facts at all – they need revising. Curiously enough, he goes on later to state some of his own facts, but we’ll ignore that. After all, the point of all this analysis is to come to the realisation that it’s never really about the facts themselves; it’s all rhetorical anyway. It’s about winning the argument and (in the process) feeling okay about the way women, or gays and lesbians and transgendered persons, are treated. Because the facts are on my side and blow the rest of it.

Latour again, expresses the frustration with this new form of strategic critique, saying:

What has become of critique when my neighbour in the little Bourbonnais village where I live looks down on me as someone hopelessly naïve because I believe the United States had been attacked by terrorists? Remember the good old days when University professors could look down on unsophisticated folks because those hillbillies naïvely believed in church, motherhood, and apple pie? Things have changed a lot, at least in my village. I am now the one who naively believes in some facts because I am educated, while the other guys are too unsophisticated to be gullible….

He goes on to connect the same action with the long and storied history of paranoia that is conspiracy theories, and says that the same explanatory action is at work behind the debunking:

…after disbelief has struck and an explanation is requested for what is really going on, in both cases again it is the same appeal to powerful agents hidden in the dark acting always consistently, continuously, relentlessly.

After my ‘naïve’ belief in sexism is exposed, the explanation is offered: I am out to destroy the very foundation of western democracy! Mr Cheng Ling explains it all:

This article is incredibly flawed, but more than that, it’s hilarious. The arms-behind-the-head coolness of Mr. Sensitive Pony Tail Man, telling us all how horrible we are and how we can all be great like him. All we have to do is everything he says. Ben, thank you for epitomizing everything wrong with Western society.

What a stunningly powerful critique, even if it is utterly inaccurate (I haven’t worn a pony-tail by choice since I was 16!). But the point is that my sinister motivation is all the explanation this commenter needs for why I am so ruthlessly attacking his privilege. So he turns my own tools – critique! – back upon me.

So what’s our next move in this arms race of critical weaponry? We can hardly move back to naïve facts; indeed, as Latour says, “The question was never to get away from facts but closer to them” by showing how constructed so-called facts really are. A lot of effort has gone into the production of even something as simple and plainly matter-of-fact as 1+1=2. Think of the great history of mathematical proof, of mathematical teaching institutionalising this most basic piece of knowledge and disseminating it around the world to children and adults alike…

So how do we win this argument for the side of good? Latour again:

Critique has not been critical enough in spite of all its sore-scratching. Reality is not defined by matters of fact. Matters of fact are not all that is given in experience. Matters of fact are only very partial and, I would argue, very polemical, very political renderings of matters of concern and only a subset of what could also be called states of affairs.

And how do we reach these matters-of-concern? How do we uncover (if that’s the appropriate term, as it has about it the whiff of the sceptical debunker) states of affairs? Perhaps we shouldn’t talk in terms of ‘uncoverings’ at all, and instead in terms of aesthetics, or commitments, or of imperatives, or even ethics? Latour recognises this:

My question is thus: can we devise another powerful descriptive tool that deals this time with matters of concern and whose import then will no longer be to debunk but to protect and to care, as Donna Haraway would put it? Is it really possible to transform the critical urge in the ethos of someone who adds reality to matters of fact and not subtract reality?

To return to my Facebook practice as an example – I think this is something that I am trying to do with my new habits. My use of memes and pictures of Castro and all my strenuous efforts at arguing (politically, tactically and rhetorically!) with friends is some attempt at getting to a place where I can deal in matters of concern (or states of affairs). Christian McCrea has been doing this since at least 2009, when he was banned for Faceholing. What does it mean to find abandoned groups with no admins left, and rename them counter to their original purposes? It’s dealing with matters of concern. It’s more aesthetic than it is science; more ‘play’ than it is ‘fun’; it’s serious but at the same time… it’s hard for people to take you at your most polemical when your display picture is a cat.

Some thoughts on Pete Ashton’s FYPA.net

Alright, so this is going to be some nit-picky bullshit, because that’s what I’m training to do. A Doctor of Philosophy is a nit-picky asshole, and eventually I’m going to have to prove to the world academy that I know what I’m talking about via a fuck-off 80,000-word thesis that’s going to be all about this kind of crap – social media, or rather, media that becomes the social.

So what’s going on here? Well, earlier today I read a post by the artist Pete Ashton (who also uses the totally super cool awesome subdomain ‘’, hey buddy! Great minds, &tc.) that I thought was super interesting, and a very encouraging sign that there are still some people out there willing to question the received wisdom about platforms/social media/whatever bullshit buzzword we’re using today/technology… because that’s what we’re talking about. Technology. At the end of the day, anything that has been made or designed by a human mind (even a semi-human mind! I discriminate not against venture capitalist startup types!) counts as technology. Or maybe it doesn’t because fuck, even a stick in the hands of a monkey hungry for some god damn ants in a rotten log counts as a piece of technology under the right circumstances (intentions? uh oh), so hey why not just say everything is technology, but we’re not going to go there tonight… that way madness lies.

So anyway, Pete Ashton. Top bloke, I reckon. Don’t know him from Adam, but he seems like my kind of thinker. Except… here’s the thing. There were some statements in the original post that were a bit… malformed, you could say (if you’re being a nitpicky jerk – which I am, do not forget that!). Or if we’re trying to be charitable to a relative stranger on the internet (and that’s always a nice thing to be), we’d say they were a bit imprecise perhaps.

To summarise the post, Ashton has decided some of the technology he uses (twitter; tumblr; facebook; websites) probably isn’t doing the job the way he wants it to, and so he’s decided to go start his own website called FYPA.net (Fuck Yeah, Pete Ashton!). tl;dr – I have some comments about his thoughts and the rationale he uses to get to his conclusions, and so, rather than plop them down in his comments like some kind of well-meaning-but-rude-nitpicking-asshole, here they are, ripped from their context.

Yes, I’m almost certainly reading them too specifically and not in the context he was likely to be writing them in (i.e. not academia), yes, he’s probably thought about all these issues but here’s the thing with the internet (and, to a lesser extent, writing more generally) – it’s really hard to know what you might mean, and a lot easier to try and figure out what you do mean.

That preamble out of the way, here are my comments on Ashton’s words. He says,

There’s no such thing as a “free” service – someone somewhere is paying for it and that makes them the customer, not you. And as anyone who’s worked in retail knows, the customer is who the company listens to, not the suppliers. Users is not the same as customers and in Facebook and Google’s case the customer is the advertiser. You and your content is the product while the service is merely the conduit between the two.

True, yes! But it’s hardly a new concept. Cf: “Television delivers people” from 1973. Still, it’s probably worth being reminded of every so often, so point well made. However, his analogy with the retail industry breaks down when examined. Facebook users often protest changes to the Facebook service, and while they often just get the hell over it (remember the “new” Facebook boycotts? ha!) they’ve sometimes been successful! Remember the hoo-ha over all the privacy settings last year (2010)? Users got mad, the press got mad (even the NY Times!) and people generally made quite a bit of noise, till eventually Zuckerberg and co. felt such pressure, presumably from their users (the suppliers, not the customers, in this analogy!), that they added all sorts of privacy customisations.

Sure, I’m critiquing an analogy – way to fucking go, Ben, you criticised something most people know is imperfect anyway, well done. But the point I want to stress here is that we need new ways of talking about technology. Analogies don’t work! Assuming some sort of inevitable, inexorable McLuhan-esque media effect doesn’t work either, except when it does, and that’s just as much of a problem! When is it inevitable? When is it not? How can we tell? Technology (everything?) is dynamic and unpredictable, and past performance is no indication of future behaviour. We need better ways to talk about the specificities of technology, and precious few seem to recognise it as a problem. But before we lose too much hair, let’s move on.

Ever since it was discovered that you could only back up your most recent 3200 tweets, followed by the realisation that Twitter was never going to “fix” their search to go back more than a week, I’d been wondering why I was pumping all my valuable stuff into this black hole.

No! No no no! Twitter may not let you search back that far, but there are tools that can! Snapbird lets you search far, far further back than a week (to years, even, when it works). It is a myth that tweets go into a black hole. They actually never go away unless they are deleted. And that is almost an even scarier prospect.

Instead of disappearing, they just become nigh-impossible to find without a saved hyperlink or some way of forming the URL for that individual tweet. Want proof? Here’s a tweet from September 23rd, 2008. Don’t want to have to save links, and instead back up all your tweets on an ongoing basis? Here: ThinkUp runs on your own webserver. Ashton seems pretty keen on keeping control (as we’ll see), and it’s actually a worthwhile pursuit – never mind that he probably doesn’t even own his own webserver, but rather rents it from a rack in some giant cloud warehouse somewhere. Again the specificities slip so casually away, eliding serious implications… the only person I know who has actual, physical control of his own webserver is David Carlton. If the Feds ever want to raid the rackspace of mine or Ashton’s website, all they’ve gotta do is go to that giant warehouse with a warrant and there they are, where I can’t stop them, collecting all my damning anti-capitalist statements to use against me at my inevitable trial for treason-against-the-dollar. At least David Carlton could smash the crap out of his hard drives if they came knocking (provided he was home, had time, etcetera, etcetera (OMG DETAILS FUUUUU–)). That’s the thing with talking about anything with any certainty. There are always caveats and provisions. We need better ways of talking about this stuff.
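That “way of forming the URL” is just a fixed pattern; a quick sketch (the username and ID here are invented, and I’m writing the pattern in its modern https form):

```python
def tweet_url(username, tweet_id):
    # Twitter permalinks follow a fixed pattern, so a username plus
    # the tweet's numeric ID is enough to reconstruct the link.
    return "https://twitter.com/{0}/status/{1}".format(username, tweet_id)

print(tweet_url("example_user", "12345678"))
# https://twitter.com/example_user/status/12345678
```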

Moving right along.

…being in around 2,500 Twitter streams certainly hasn’t done me any harm. There’s no denying that Twitter increases reach. But what concerns me in the manner in which it does so, something I’m going to call the flashbang effect.

I don’t post that frequently to ASH-10 so it’s maybe not the best example to use but it’s certainly the most dramatic. On the whole I don’t get much traffic there – 10 to 20 visitors a day. But when I write something and post a link to Twitter then, boom, 150+ hits to that post. That’s great, but within a day I’m back down to the trickle.

NEWSFLASH[bang]: No, that’s not just twitter, that’s BEING LINKED ANYWHERE that has a ‘stream’-like operation/audience. Blogs do it too (but the flash happens over days, not hours); forums do it too, and I have no way to describe the way that audience comes in (omg details). Point being: the duration of the flash is not some cumulative function or algorithm of the media that links to it and the way that audience uses that media – the duration of the flash is an abstraction of something real happening. X visitors clicking a hyperlink. Joe Goofey clicking it at 11:59:11 on 12/4/11, and Mary Magdalene clicking the same link another 17 seconds later, etc, etc. The former is prescriptive, the latter descriptive (details! always with the details!) and this is no coincidence. There’s a reason I keep harping on about that Bruno Latour character – he’s on to something.
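To put that abstraction in concrete terms (with made-up names and timestamps, reading 12/4/11 as 12 April 2011): the “flash” a stats page shows is just individual click events collapsed into hourly buckets.

```python
from collections import Counter
from datetime import datetime

# Each entry is one real event: a particular person clicking the link
# at a particular moment (names and times are invented).
clicks = [
    ("Joe Goofey",     datetime(2011, 4, 12, 11, 59, 11)),
    ("Mary Magdalene", datetime(2011, 4, 12, 11, 59, 28)),
    ("Visitor 3",      datetime(2011, 4, 12, 12, 3, 2)),
    ("Visitor 4",      datetime(2011, 4, 13, 9, 15, 40)),
]

# The "flash" on an analytics graph is these events collapsed into
# per-hour totals -- the abstraction, not the happenings themselves.
per_hour = Counter(t.strftime("%Y-%m-%d %H:00") for _, t in clicks)
for hour, count in sorted(per_hour.items()):
    print(hour, count)
```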

FYPA.NET is my response to all this and the process by which I’ll be dealing with it.

To begin with it will house any links to websites I might have posted to Twitter or pics / videos I might have sent to Tumblr. I am effectively reclaiming my short-form blogging and putting it into a space over which I have complete control.

Really? Complete control? Complete is pretty freaking complete. Do you have the rackspace or a local server in your house? And are you always going to “be there” to control it? When you leave the room and your site suddenly gets DDoS’d or hacked by LulzSec or whatever are you still in control?

Okay, okay, I’m being a dick I know. Ashton and his readers probably don’t care all that much about this kind of eventuality – Ashton has comparatively more control.

The weirdest thing about all this is going back to the slow burn. The only publicity I’ve done for the site has been a daily Twitter post linking to the home page. As such there’s been a bit of traffic going there but it’s not much, and if I don’t do the tweet then the traffic dries up completely – the flashbang effect.

Ha! Funny thing: I’d never heard of Pete Ashton till I heard about what he was doing with FYPA (via twitter retweets from both Mat Wall-Smith and Andrew Murphie). That’s the funny thing about the internet: it’s unpredictable. And yet, at the same time, sometimes we feel like we get a handle on what works (cat pictures; tumblrs starting with Fuck Yeah; etc) but really… when we do, we’re in a sense deluding ourselves.

Blogging is something I do to get my ideas straight, to make tangible a narrative which I can revisit, extend and continue for my own benefit. If others choose to follow my narrative, or to twist it into their own where appropriate, then that is fine, but it’s not the primary goal.

Yup. Me too. Blogging is way cool.

This is not done “for the community” because that is not sustainable. It’s not done to promote or sell because that leads to craven begging and desperate number chasing. It’s done for personal reasons in a public way because that’s the only thing that makes sense to me.

Aww, seriously? Doing stuff for other people is “unsustainable”? Well, maybe it is, but it can be totally kick-ass and amazing while it lasts. There is a very real benefit gained by the blogger by blogging (tweeting? tumblr-ing?), even if it’s an intangible thing like ‘reputation’ as an expert, a go-to person, or just being known as a hilarious captioner of cat pictures or whathaveyou. Have you seen? People have been commenting on your stuff! How can you say you’re completely ignoring doing it for “the community” when people are commenting? And commercialisation of blogging is some kind of sell-out? Dude, living off blogging is like living the dream! It doesn’t have to kill off integrity, etc, etc. My favourite example is Rock Paper Shotgun, a professional, independently started and owned blog now doing reasonably well for itself commercially. That took hard work and I don’t begrudge them their wage at all.

To do what we did in the fanzine days and build my own publication that I, and only I, have control over. To be a part of the network that is autonomous and free, and stronger because of it.

Um, really? Autonomous is stronger, necessarily? You’re sounding a bit techno-utopian (and I guess I really shouldn’t be surprised by this point), but could I politely suggest Evgeny Morozov’s excellent ‘The Net Delusion’?

Aaaand on that half-finished note I think I’m done, just because I’ve exhausted myself and my ideas. I hope there’s something in there for everyone.

Abstract: Neuroscience and the digital community: what next for the notion of ‘the individual’?

The following abstract was accepted for the international conference ‘Knowledge/Culture/Social Change‘ to be held in Sydney, Australia in November.

The ‘individual’ has attained an unparalleled level of success and acceptance, with the DNA of all major political and economic theories now permeated with the assumption of real existing ‘individuals’. Modern neuroscientific developments however are challenging this assumption, and in this paper I propose to deal with two challenges to the notion of the individual, the ‘extended mind’ theory and ‘eliminative materialism’, attempting a reconciliation within the context of productive internet communities. The goal of the paper will be to outline some of the important ramifications for humanity and the liberal/progressive project.

Firstly, theories of mind such as Andy Clark and David Chalmers’s “extended mind” suggest a counter-intuitive redrawing of the boundaries of the mind beyond the limits of the cranial cavity and even the body itself. Consider the example of the Alzheimer’s patient who supplements his failing memory with diligent note-keeping and diarising. Information stored in the patient’s diary now becomes his memory, and as such informs his beliefs, knowledge, actions, etc. The film Memento (2000), in which an amnesiac tattoos messages to himself onto his body, functions on a similar premise.

Secondly, the model of mind proposed by Paul and Patricia Churchland, dubbed ‘eliminative materialism’, suggests that when neuroscientific advances progress to a point of near-complete modelling of the human brain, we may well discover that no structural or literal brain functions represent our common-sense ‘manifest image’ of mental function. What happens when no place, structure, or function of the brain can be found to account for “beliefs”, “ideas”, “thoughts”, etc.?

In the paper I propose to attempt a reconciliation of the ‘extended mind’ thesis with the promise of ‘eliminative materialism’, by way of the internet technologies that connect so-called “individuals” together into communities. If parts of our minds can be said to be outsourced to the digital tools we use for communication, storage, and transmission, and if these tools overlap, what kind of entity arises? The paper will draw on the findings of my earlier work in characterising internet communities as a post-human (or post-individual) subject of knowledge and expertise.

Bibliography (incomplete)

Brassier, Ray. Nihil Unbound. Basingstoke: Palgrave Macmillan, 2010.

Clark, Andy. Being There: Putting Brain, Body and World Together Again. Cambridge: MIT Press, 1997.

Clark, Andy & Chalmers, David. “The Extended Mind”. Analysis, Vol. 58, No. 1, Jan. 1998.

Dennett, Daniel. Consciousness Explained. Harmondsworth: Penguin, 1993.

Meillassoux, Quentin. After Finitude. London: Continuum, 2008.