As Erik nicely re-articulated, Heidegger defines technology as something more than a mere means to an end -- to him, technology can eventually turn human beings into "no more than resources," bringing about the crisis of technology and making us forget the human-ness in us. To me, this concept of a crisis brought by the "danger" of technology sounds dangerous in itself.
To begin with, no matter how Heidegger defines the essence of technology, technology has long been thought of as a means to achieve an end, as Heidegger himself explained. In fact, the readings on the census serve as a good example of how human civilization has used technology as a means to an end. Across the different cultures and eras portrayed in the readings, people in power often struggled to justify racism. To achieve this goal, they developed new technologies and enhanced existing ones: they came up with eugenics, new uses of computers for the census, different applications of IQ measurements, and so on. People in power also manipulated and reformulated racial categorization, which can itself be considered a governing technology. The Nazis (re)defined the term "Jew," and Americans came up with the whole idea of the one-drop rule. Even today, new words and concepts such as "Asian American" and "Hispanic American" are created to support the goal of governing society efficiently.
We create, shape, and constantly reshape technology for our use -- in this way, technology serves as a means to an end. In other words, we as human beings have ample control over technology. This is why I say that the "danger of (the essence of) technology" is a very dangerous concept. Whatever tragedy occurs through the exploitation or misuse of technology, we, humankind, are responsible for that exploitation or misuse. So here is my stance on Erik's question: there is no one else to blame here -- not even technology -- but us.
I am not rejecting Heidegger's premise that our way of thinking has been shaped by technology. If I were to reject that idea completely, it would be like randomly picking a side in a chicken-or-egg sort of debate. I do recognize that human beings and technology are in a relationship of constant dialogue and mutually impact each other. But whether that impact becomes a synergy or a disaster is in our hands, and technology is not a victim to be blamed.
I'm interested in talking about this more, and I really appreciate you engaging in dialogue on that point. I don't wish to imply that technology is to blame, in the sense that we are a blameless or naive party who has simply been subjected to this "evil" of modernity. I agree with you that it is unconscionable to blame this nebulous body of technology for our self-inflicted ills.
The critical issue I find worth going deeper into is exactly this: "Whatever tragedy occurs with exploitation or misuse of the technology, we the human kind are responsible for that exploitation or misuse." I wouldn't challenge that concept, but where does that leave us?
My argument is that technology *seems* to have a will of its own, in that it can bring about new crises that its creators did not foresee or intend. We certainly have plenty of opportunities to wield it toward clear ends, but technologies can be so powerfully interconnected that cause and effect may lie beyond the reach of our vision at any one time. This seems to be part of Heidegger's argument as well: what counts as cause and what counts as effect is a matter of uncertainty in our discourse. The meanings of those words have changed, and the field of modern technology upsets our preconceived notions of these terms. To speak colloquially, we don't know what we are dealing with.
This is certainly within the realm of "our" culpability, and I wouldn't blame technology for it. But who is "we the human kind" in this case? Is every human citizen responsible for all technology? Only for the devices they use? How do we define use? Or should companies be held responsible? Or governments? This may seem a bit of an absurd and legalistic question at first glance, but I think it is really of paramount importance in the modern age. Technology is progressing at a rate that often seems to outstrip our ability, as humankind, to legislate its use, and then to actually control its applications based on that legislation. Any time I see new concerns about online privacy (what is considered personal information and what is public domain, what can be collected, what can be sold), it seems that we are charging forward into a realm where we don't have adequate answers a priori, and therefore where people are not ready to deal with any intended or unintended misuses or exploitations of technology.
I'm not arguing at all that we should smash the computers, or stop the development of new technologies, but that we must come to understand how we intend to deploy ourselves, as a species, to control technologies, or at least to mitigate their ill effects. I'm thinking about what role private citizens and electronic civil disobedience might play here, and how effective or ineffective government is at actually understanding the stakes of our applications of technologies. And since technology allows us so effectively to view, categorize, track, authenticate, &c. human beings, and continues to offer us yet more means to discriminate by whatever characteristics we choose, I think it is possible that dealing with race has to start at the level of equality in technology.
And just to extend more speculatively to the next crisis I see on the horizon: what happens when human augmentation enters the realm -- the door to which is being opened even now -- of incorporating myriad electronic devices into our bodies? What does race look like then? Where will the new lines of discrimination fall, and how will we prevent the same mistakes we've made before? Here, I see the possibility for technology to literally create new "races," a prospect which I think only increases the importance of understanding that technology may not be as under our control as we think it is.