Michael Bugeja’s recent article at IHE, “Harsh Realities About Virtual Ones”, attempts to find fault in current uses of technologies at universities, laying the blame on our drive for “engagement” for various problems: rising tuition, a new digital divide, corporate sponsorship, and an increasing interpersonal divide. I think Bugeja raises some very valid points, especially that “engagement” has been under-theorized by administrators and that we should be concerned about where public moneys are being spent (corporations), but his argument rests on a few faulty assumptions:
1. That tuition is rising because the costs of technologies are rising. He doesn’t back this assumption up, and I’ll admit it: I’m going to be lazy and not do research myself. However, I do know that the major reason tuition has been rising over the last few years is that states no longer put as much money into their schools as they used to, so students have to foot the bill. I’m not a numbers person, but it would be interesting to see how much of a university’s budget goes to these technologies that Bugeja writes against. I doubt it accounts for the 12 percent rise in tuition or the 5 percent rise in cost of room and board in the last two years. Bugeja claims that these rates are driven by “an engagement industry, largely corporate, relying on wireless campuses to vent virtual products and on teaching excellence centers to advertise their brands in the name of engagement.” I sense that these rates are actually driven by a politics of privatization, in which the state withdraws support from schools and expects individuals to be responsible for their education.
2. That virtual technologies are vastly different from other technologies — and by extension, digital technology companies are different from non-digital technology companies. Bugeja complains that universities “are unwittingly underwriting Second Life, Facebook, Twitter and G-Mail, among other applications.” However, he doesn’t take note of how these very same universities are underwriting Prentice Hall, Bedford St. Martin’s, Houghton Mifflin, HarperCollins, and W. W. Norton, among others. Take away universities and some of these companies would crumble, or at least be seriously maimed. And those are at least classroom related. What about the contracts many universities have with clothing companies they underwrite? I know I’m probably stretching my point thin by bringing up the clothing companies, but universities seem to have, for quite a long time, promoted consumerism.
3. That there is a separation between the virtual world and the real world. How is the virtual world not the real world? Don’t words and images still hurt and help people online? Don’t issues of materiality still play roles? Don’t the actions of people online come out of and then also influence the actions of people offline? I agree with Bugeja’s desire for studies to see the interrelations between virtual interactions and face-to-face interactions (in the forms of ethnographies, case studies, etc.), but I don’t agree with his strict dichotomy.
4. That commitment is a better virtue/action than engagement. Bugeja concludes his article:
Those [unwitting business] models perpetuate rampant consumerism, undermining standards that have endured for decades, if not centuries, emphasizing commitment rather than engagement so as to prepare learners for the challenges that await them in the real world rather than the virtual world.
I think Ira Socol’s comment on Bugeja’s article gets at this point: that the last few decades (or century) of learning haven’t necessarily prepared students “for the challenges that await them in the real world.” If they had, then wouldn’t fewer students say they learned the most in college outside of their classrooms? I wish Bugeja would have explicated his theory of commitment, especially as he chastises administrators for not explicating engagement. Perhaps Bugeja does in his books, which sound interesting (Interpersonal Divide: The Search for Community in a Technological Age and Living Ethics: Across Media Platforms). Nevertheless, he does not here.
Allow me to theorize commitment, briefly and probably with some banality. To be committed to something means, according to the dictionary on my MacBook, to be devoted to it, to pledge to it, to, in a sense, be loyal to it. Bugeja goes to his dictionary for engagement, and one of his definitions is “to bind, as by pledge, promise, contract, or oath; make liable,” which oddly sounds like commitment. In fact, definition #2 for commitment in my MacBook’s dictionary reads: “an engagement or obligation that restricts freedom of action.” Perhaps commitment and engagement are actually in debt to each other in learning: one must commit (devote) oneself to something, and then one must engage (act) with that something.
EDIT: I forgot perhaps the most obvious assumption of all: “the [digital] divide has been bridged.” *shakes head*
He starts his article by saying that the digital divide has been bridged…that’s like saying that everyone born during a particular time period is a “millennial.” It’s simply untrue to say that the digital divide has been bridged.
I agree with your thoughts – and I’d expand them. First, your reference to publishers is not nearly powerful enough. A company like Pearson International now controls perhaps more than half the books used on any university campus. It is every bit as powerful and ubiquitous as Microsoft, and its motives far less clear.
Second, Michael Bugeja (whom I do consider a friend) consistently distinguishes between “tools” and “technologies,” falling into the trap which defines “technology” as “something invented after you were born.” As I try to say, the book is a technology, and so are the classroom, the blackboard, the lecture hall, and the class schedule. All are non-natural inventions, and all create good and bad in their wake.
As I’ve said many times – the printing press spread knowledge widely, but it also wiped out half of Europe’s languages in just its first two centuries. And as Dr. Bugeja has said about the same, Gutenberg’s biggest business was “junk mail”: the endless printing of indulgences.
Likewise – these 21st Century technologies will do (have done) fantastic things, and they will do damage. The art is learning to use them for their best… it’s worthless to sit around and complain.
Quick response – you and Eric are right and Bugeja is wrong – the digital divide has not at all been bridged and in fact is probably worse now, partly due to increasing (not decreasing) economic and educational inequities, and partly, less visibly, due to the greater challenges of thinking clearly and critically as a media-literate consumer.
Further, you are right – digital technologies are only different in that they require more thinking-mental interaction perhaps – which makes you also right in that the virtual IS the real, only some folks don’t yet realize that, which is another part of the increasing digital divide.
Sigh. More later.
Thanks for addressing my concerns. I cannot provide as much detail as I would like but the comments are misinformed, especially about technology driving tuition. See my Feb. 1, 2007 piece in the Chronicle of Higher Education on curricular glut. Also note how technology can save costs. It’s in that article.
And please stop the idealistic talk about the digital divide not being bridged. We’re talking about it being bridged here in academe. If you’re talking about Africa, for instance, and Negroponte’s One Laptop Per Child, he and you are lost in your own brand of technostalgia, trying to get cheap laptops to people who need nets for malaria rather than Internet. Moreover, studies show whoever gets his or her hands on a laptop in any country with access navigates to most of the corporate sites, especially since Google, MSN, Yahoo, and AOL own 55% of online ad revenue.
We don’t have a global village, friends; we have a global mall.
Thank you, Socol and Bugeja, for your comments. I think largely our interests are the same: de-corporatization of education and society in general. If we look at the example of Oregon State, which invests a lot of money into open-source labs and research, but then invests in and pretty much requires the use of Blackboard, we can see something is wrong. Why isn’t that money being spent on open-source options that aren’t proprietary, and why don’t we divest from Blackboard and stop our corporate sponsorship of that program?
I agree with your assessment that “global mall” is a better metaphor than “global village” for globalization. I am not a fan of globalization in its current form: global corporatization and the domination of various peoples by corporate-state machines. I also agree that we need to focus on economic and health parity worldwide most importantly.
For the sake of argument, let’s limit our discussion of the “digital divide” to American academia, as you suggest. First, I’d say that “digital divide” is as much a mystification tool as “global village,” in that it obfuscates what we are really talking about: a classed and raced society. The so-called “digital divide” isn’t bridged in academia. I’ve been to community colleges where students are coming in without experience on computers, and where there are only a few computer labs on campus — not enough to support the student populace. At Iowa State, where you teach and I went to undergrad, and Oregon State, where I teach, certainly most students do have access to computers (and by limiting the discussion to computers, I am admittedly being simplistic about this divide), but even here there are students who before college didn’t have access to computers. In many parts of this country, public primary and secondary schools don’t have money for technology, and some students are still handwriting their assignments. Arguably, these students aren’t going to college, but hopefully some are, and they all should have both access to technology and access to higher ed.
When we talk about the digital divide, we’re really talking about economic disparity, and to say that the divide doesn’t exist ignores the disparity within this country, especially in regards to race and socioeconomic class (especially the rural poor). When these students come to college (if they come to college at all), they are “behind” in technology. I’m not stating this to argue for faux-educational bills that are actually economic stimulus packages (as Cynthia Selfe has smartly noted of Clinton’s 1990s bill to get more computers into classrooms). I’m saying this because we have to be cognizant both of economic differences and of our students who come to college without the tools that we “expect” them to have.
I’d argue with Dr. Bugeja – and say, that as the Open University’s DEEP project seems to prove, Africans need both malaria nets and information, and that the best way to provide that information exchange is via mobile phone (not OLPC which is a cute technology experiment, but an antiquated idea at the moment of delivery), just as mobile phones are the best way to offer universal information access in the US (want to enable poor kids and technology? forget the school computers and use the phones they already have).
Because the digital divide is real, and it is real even in US universities, because “rich kids” (etc.) get the training at home that schools refuse to offer poor kids. No student today should leave high school for college without a clear understanding of how to search, how to access digital data, of the etiquette of business email and business texting. Of how to set up a professional-sounding voicemail. Of how to respond to digital inquiries. These are not “natural” skills – and those who learn them because of privileged backgrounds get further and further ahead.
Ira – I really appreciate your comments on how class plays a vital role in technology literacy.
Dr. Bugeja – I guess I had not realized that the digital divide had been “bridged” at ISU.
Another facet of the digital divide that Dr. Bugeja has blatantly overlooked is the digital divide in relation to accessibility.
The digital divide in academe is ever present today, as most university web applications (portals, Blackboard, Banner, PeopleSoft, etc.) and web sites are not fully accessible to students who have visual or motor impairments. In fact, accessibility is rarely brought up as part of the overall discussion of the digital divide.
To dismiss the digital divide as only something that remains outside of the academe is to truly marginalize students who are impacted directly by classism and ableism in digital environments on a daily basis.
Eric – ableism is such a dangerous thing, and it is rampant in US universities. Those who block technology block format flexibility, which is essential for all students who differ from “the norm” in any way. Why, as I asked recently on my blog, can any student turn a class digital document into ink-on-paper form without question, but if – say – a dyslexic student wants to change a printed book into a document read to him by his smartphone he needs to humiliate himself with a dozen university staffers and professors?
The most powerful thing technology offers is the ability of students of all kinds to access information and communication in the way most comfortable to themselves. Those who want to preserve the primacy of antique forms are also (intentionally or unintentionally) preserving the inherited ‘rights’ of the elite.
Thanks, Eric and Ira, for your input. I agree that ability largely goes unnoticed and is made invisible in our conceptions of all forms of media. At conferences and presentations, accessibility for the deaf and hard of hearing is the last thing considered. Online, even on this blog, accessibility for the blind is hardly considered. I’ve actually been meaning to write about this, because I’ve recently come across some accessibility tools online that I hadn’t been aware of before.
Again, thanks.