Leiter Reports: A Philosophy Blog

News and views about philosophy, the academic profession, academic freedom, intellectual culture, and other topics. The world’s most popular philosophy blog, since 2003.


What the Unabomber Got Wrong

Over the past year I have been involved in a couple of panels on Ted Kaczynski (aka the Unabomber: a serial killer, currently in a supermax prison, who in the 1980s and 90s mailed bombs to people involved with technology). Organized by Jeffrey Young of the Chronicle of Higher Education, the panels involved David Skrbina of UM-Dearborn providing a partial defense of the Unabomber and me offering a criticism. We've done the panel at Rutgers and at SXSW in Austin.

In my view the "Unabomber manifesto" is a smorgasbord of critical reasoning failures, and you could use it in a critical reasoning course if you wanted to task introductory students with finding basic fallacies in a written work (see my slides here). But I also have a positive thesis, which is related to my interest in hacking but which, so far as I know, is not advocated by anyone in the philosophy of technology.

The Unabomber believes that any post-hunter-gatherer technology is alienating and dehumanizing. But I believe that, like bees and beavers and spiders and birds, we are technological creatures. In my view, alienation comes not from technology (any more than a beaver can be alienated from its dam or a bird from its nest) but rather emerges when technology is "jailed" and we can't tear apart the object and see what makes it tick. So it isn't technology that alienates us, but corporate control of technology – for example, Apple, when it makes it difficult for us to reprogram our iPhones.

This is the point where hacking becomes important. Hacking is fundamentally about our having the right and responsibility to open up the technologies of our everyday lives, learn how they work, and repurpose them.

Before I start reinventing wheels here, does anyone know of
people in the philosophy of technology who have articulated such a view?  Beyond that, any comments and criticism?


22 responses to “What the Unabomber Got Wrong”

  1. What do you think about the following idea?

    There is ample room for something like alienation, even if there is no corporate control and secrecy about technology (e.g. even if all software were open source and there were no hardware trade secrets), purely due to its complexity overreaching what most people can in practise get their heads around. (For example, it would be difficult for most of us to reprogram our iPhones even if Apple hadn't taken any measures to make it so.)

  2. Lawyers aren't philosophers of technology, but I think Matthew Rimmer (http://works.bepress.com/matthew_rimmer/) fits. Cf., e.g., his 'The Freedom to Tinker: Patent Law and Experimental Use', Expert Opinion on Therapeutic Patents (2005).

  3. To what extent does open-source technology solve the problem? I suppose it is easiest to see it as doing so when the following conditions are both fulfilled:

    (a) It is cheap and practical for people to create. This condition is satisfied with a lot of software. But you also need the hardware to be available and open to re-programming, and UEFI is unlikely to help. The condition is not satisfied with devices in which the hardware needs to be precisely as it is – bits of cars, for example – because machining parts is difficult.

    (b) If devices need to interact with one another, the interface standards are completely open. You need, for example, an open format for documents, one which everyone can read and for which anyone can produce devices. Then anyone can use open source without risking being cut off from others. You can still get a bit of irritation when a proprietary provider decides to invent its own open standard (such as a new XML document standard) when there is already a perfectly good open standard in use.

    Your reference to a right to open up technology is interesting. One can imagine a functioning society without intellectual property rights, in a way that one cannot imagine a functioning society without physical property rights, because if you take my idea, I can still use it, but if you take my food, I cannot still eat it. Is that what we should seek, or should we sit back and wait for the spread of GPL and copyleft, a spread that might be market-driven because products will be cheaper if people don't have to pay royalties, so long as there are enough people willing to give away their work?

  4. not sure of anyone in philo of tech who puts it in terms of rights/responsibilities in the way that say Rushkoff did but lots of discussions about hacking and its potentials/values in STS circles like @ http://turbulence.org and of course folks like Bernard Stiegler talk about our uses/interfaces with technologies and the need to engineer our own individuation processes, hacking writ large also seems to fit in with Derrida's points about bricolage and engineering, do you know McKenzie Wark's:
    http://www.academia.edu/182789/A_Hacker_Manifesto ?

  5. Personally, I'd like to know more about how exactly your view differs from Andrew Feenberg's, esp. with regard to his criticism of essentialism about technology and his distinction between "capitalist rationalization" and "democratic rationalization" (this is all in his "Questioning Technology" (1999), which I haven't read since grad school).

  6. Interesting but sounds very similar to the concept of "black-boxing" in STS; see the Feenberg book Carl cites.

  7. Peter –

    Ken Wark is a place to start. Biella Coleman too. For lawyers, Julie Cohen's recent book is sort of on point. But I take it you know them and you're looking for philosophers per se? I'm not too savvy about philosophers who write about hacking (other than you).

    – Greg

  8. The view you outline (about the "jailing of technology") is very close to Thorstein Veblen's argument about the control of technology by "business." See his "Theory of Business Enterprise" and "The Place of Science in Modern Civilization."

  9. This sounds to me like the gist of what Richard Stallman has been saying for decades.

  10. Matthew Crawford's book Shop Class as Soulcraft is relevant here, although he presents our alienation from much of our technology as part of a larger cultural shift toward valuing knowledge over manual competence. The essay on which the book is based appears here:
    http://www.thenewatlantis.com/publications/shop-class-as-soulcraft

  11. "When the design process is complete, the value-laden choices that went into it are ‘black-boxed’, sealed into ‘the technical code’ (p. 88). This hard-wiring of specific cultural values into our technical devices obscures the fact that these values were chosen, and this reinforces a fatalistic attitude toward technology." (From my "From the Question Concerning Technology to the Quest for a Democratic Technology: Heidegger, Marcuse, Feenberg," p. 211.)

  12. It's hard for me to take this suggestion seriously. A big prima facie obstacle is that, if you put a person in a room with a Mac, they'll figure it out and have at least some fun using it for email, work, surfing the web, etc. If you put them in a room with a Linux computer, they'll find it very frustrating–especially if they feel like they need to use it for email or work or play. Note that it isn't true that these people will eventually be happier once they "take command" and learn how to use Linux: most people don't have the time or interest to learn how to use Linux effectively. A Mac is a much better choice for them, in both the short and long term.

    (I guess I also think that Apple's extraordinarily high customer satisfaction numbers indicate that customers don't find their products dehumanizing/alienating. Kaczynski wasn't an Apple user, was he?)

    Anyway, *if* you were to convince me that Apple products are especially alienating, I don't think that will help your cause either. The *vast majority* of technology is jailed. TVs, VCRs, dishwashers, ballpoint pens, etc. Even basic farming technology is jailed, in the sense that most farmers (historically at least) didn't understand why doing x,y, and z produced more/better crops. They just knew that it did.

    So I think that if jailed technology alienates us, then the Unabomber was basically right.

  13. I dunno — I feel alienated and dehumanized just from my family and friends' IT questions.

  14. Maybe it's because most technology is now so complicated that we are much better off not interfering with it unless we are experts? Imagining an "open access" steel factory where anybody can come and tinker around and hack the code sounds to me like a singularly terrible idea. It is not inaccessible – if one is willing to put a lot of study into it. In any case, I have yet to own a device that I would not be able to tear apart to figure out how it works. It's just that some technologies are not accessible to the naked eye anymore. Hacking software (as opposed to hardware) is not such a big deal – that's not where the real progress comes from, I think.

  15. I’m not a philosopher, but I work in software, and I’ve gotten interested around the edges of this question. I feel like as a techie, what you describe is the problem. Personally, I’ve experienced the difference between having access to knowledge about my tools’ internals and not even being able to get answers to my more superficial questions. This seems to match RMS’s concern. If I click “yes” to have a software company monitor my computer usage, supposedly to help them solve any problems I may have later on, what am I actually enabling? If I use this software tool, how can I be sure what it will do in my own case if I’m prevented from seeing how it works?

    But often in discussing things with non-techies, they don’t seem to feel that’s what’s alienating about technology. I think the split, for them, is more between seeing technology in terms of how things work and seeing it in ordinary human terms. Imagine a conversation where the non-techie says, “I wish this worked in some other way,” and gets an answer that relies on apparently technical reasons why it doesn’t. The non-techie will reasonably feel that the techie is missing the point, and is missing some normal human sense of what a good reason is and what a good tool should do. Explaining how the system works doesn’t decrease their sense that the technology is alienating. The fact that it has to be explained means it’s alienating (ISTM).

  16. One more non-philosopher: Hector Postigo has been talking a lot about this lately.
    Hector studied with Langdon Winner.
    http://innovate.ucsb.edu/wp-content/uploads/2010/02/Winner-Do-Artifacts-Have-Politics-1980.pdf

  17. This is not on point but I can't resist mentioning that the Unabombing might have been the result of a study by Henry Murray, a sadistic Harvard psychologist. See http://www.theatlantic.com/past/docs/issues/2000/06/chase.htm

  18. The Unabomber was a reader of Jacques Ellul, an influential French sociologist who wrote "The Technological Society," a highly perceptive and prophetic work about the underlying technological infrastructure of the modern world. In this book, Ellul documents the effects not of "technology" or "technologies" — in other words, tools that are used for human ends — but of "la technique," the overarching "whole" of the technological society that, according to Ellul, is by definition "a system of means with no ends." Technique has become so all-encompassing that it is described as "the totality of methods rationally arrived at, and having absolute efficiency (for a given stage of development) in every field of human activity." This is what makes it dehumanizing, since its insistence upon efficiency and its lack of ends disposes of any need for humanity.

    Much of Ellul's work was purely descriptive, and its overarching pessimism is only due to its honest depiction of the bleakness inherent in the total technological system. However, unlike the Unabomber, Ellul refused to give in to the pessimistic view of the whole that he described. What we need to recover is a philosophy of individual technologies, or technologies geared toward human ends — parts set against the whole. If hacking is such a meaningful use of technology, by all means we should pursue it as such.

  19. I agree with previous comments (Iain Thomson, Carl Sachs) that Feenberg's work on the concept of technical codes and the way they tend to be "sealed" is relevant here (one could also think of Albert Borgmann's useful distinction between focal things and devices). But it seemed to me that Peter was getting at another point here, which I don't think Feenberg directly points to.

    I took Peter to be articulating the particular role of hacking (or "hacktivism"), which I have not seen Feenberg discuss directly – though I may have missed it.

    A similar point is made by Sherry Turkle. She points out that since the 1980s users of computers and their software have become less and less interested in understanding how the software works and put more emphasis on functioning effectively within the software design they are presented with. In a sense, users put more emphasis on striving to play the game well than on questioning the rules of the game. As Turkle puts it, this can compromise our “sense that understanding is accessible and action is possible.”

    Turkle also does not point to hacking as a form of possible counter-movement, but one can see her discussion as implying that the attitudes described may compromise our political imagination – thus making hacking an important disruptive political act.

    [For Turkle's piece see: Turkle, S. (2003). From powerful ideas to PowerPoint. Convergence, 9, 2, 19-25.]

  20. Matthew J. Brown

    Wendell Berry, in his classic "Why I am Not Going to Buy a Computer," articulates nine "standards for technological innovation." Number 6 reads, "It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools." Berry implies that computers do not meet this criterion, and I think that was not really true of most computers when he wrote the piece in 1987, but it is much closer to the case today.

  21. Stephen Clark (retired from Philosophy at Liverpool) argued that we were at a point between two eras in the history of technology: Until recently most people could (if they wished) understand most technologies in everyday use – cars, electricity and lighting, household appliances, etc. But this is becoming less true by the day, and we will shortly be in an era where most people will not be able, even if they wished, to understand the technologies in use in their everyday life. How many people, for instance, know sufficient number theory to understand the basis of web encryption?
