Annotated Bibliography Entry: Crow in DWAE

[Image: Film DVD cases]

Crow, A. (2013). Managing datacloud decisions and “big data”: Understanding privacy choices in terms of surveillant assemblages. In H. A. McKee & D. N. DeVoss (Eds.), Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae/02_crow.html

Crow addresses the ethics of assessment by defining online composition portfolios as surveillant assemblages: collections of electronic student data that may be used to create increasingly accurate aggregate student profiles. Composition studies seeks assessment techniques, strategies, and technologies that are effective and fair. As big data continues to proliferate, Crow argues that we need to understand and communicate the specific ways that student data are used in surveillance. Our goal, Crow contends, should be to move toward the caring end of a surveillance continuum that runs from caring to control.

[Image: Google Drawing visualization of the surveillance continuum]

For-profit assessment platforms, from Google Apps to ePortfolio companies, have sharing and profiling policies that are troubling and that may sit closer to the controlling end of the continuum than to the caring end. These controlling policies may remove agency from students, faculty, and composition or English departments and transfer it to university IT departments, university governance, or even corporate entities. Crow concludes that the best option would be a discipline-specific and discipline-informed DIY assessment technology that takes these real concerns about surveillant assemblages into consideration.

The surveillant assemblage is a network concept: a dynamic collection of student information that grows ever larger as student files are added. Crow demonstrates that electronic portfolios used for assessment are networked collections of files, gathered over time for assessment purposes, that build a (potentially) dangerously accurate aggregate profile of the student, one that can be used for extra-assessment purposes through data mining.
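
To make the mechanics concrete, here is a minimal sketch (in Python, with entirely hypothetical records and field names; no real platform's schema) of how discrete portfolio artifacts aggregate into a student profile:

```python
from collections import defaultdict

# Entirely hypothetical artifact records; real platforms store far more metadata.
artifacts = [
    {"student": "s001", "course": "ENGL101", "words": 1200, "submitted": "2013-09-14"},
    {"student": "s001", "course": "ENGL215", "words": 2400, "submitted": "2014-02-03"},
    {"student": "s001", "course": "HIST110", "words": 900,  "submitted": "2014-03-21"},
]

# Each new file enriches the aggregate: the assemblage reveals more than any
# single classroom's records would.
profiles = defaultdict(list)
for a in artifacts:
    profiles[a["student"]].append((a["course"], a["words"], a["submitted"]))

for student, history in profiles.items():
    courses = {course for course, _, _ in history}
    total_words = sum(words for _, words, _ in history)
    print(f"{student}: {len(history)} artifacts across {len(courses)} courses, "
          f"{total_words} words available for mining")
```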

Contemporary networks make privacy a complicated issue and a moving target, one that requires participants to decide what level of privacy they expect.

“[I]n the midst of venues that facilitate social networks, and in the midst of increasing technology capabilities by corporations and nation states, conceptions of privacy are changing shape rapidly, and individuals draw on a range of sometimes unconscious rubrics to determine whether they will opt in to systems that require a degree of personal datasharing” (Crow, 2013).

Crow responds that English studies as a (supra)discipline has a responsibility to investigate the effects of surveillant assemblage collections and to maintain student, faculty, and departmental or disciplinary agency in technology and network selection and implementation.

Miller’s genre, Bazerman’s genre set, and Popham’s boundary genre all demonstrate the socially active nature of genres and genre collections. Crow makes similar observations about student files as surveillant data collections: they take on a social activity of their own that can’t necessarily be predicted or controlled. As networked action, genre can expand within its framework and, in the case of boundary genre, expand into interdisciplinary spaces. Tension and contradiction (à la Foucault) are continually present in such networks, including surveillant assemblages, and unexpected results can occur, and likely will, if disciplinary agency is not maintained: the superimposition of business on medical practice seen in Popham’s analysis, for example, or the potential marketing of aggregated student data from assessment processes and results mentioned in Lundberg’s foreword.

I’ve been working on my Twitter identity this past week, and a tweet from @google about its transparency efforts caught my eye in relation to Crow’s article.

The tweet links to an entry in Google’s Official Blog, “Shedding some light on Foreign Intelligence Surveillance Act (FISA) requests,” dated Monday, February 3, 2014, which reports that Google is now legally able to share how many FISA requests it receives. The blog entry, in turn, links to Google’s Transparency Report, which “disclose[s] the number of requests we [Google] receive from each government in six-month periods with certain limitations.”

What struck me about the Transparency Report, the blog post, and the Twitter post in relation to Crow’s article is the important role reporting plays in my willingness to contribute to my own surveillant assemblage. I feel a little better knowing that Google reports on such requests in an open and relatively transparent way, even if I also know that Google uses my data to create a profile of me that feeds me advertising and other profile-specific messages. This is my own “sometimes unconscious rubric” to which I turn when making decisions about how much and whether to opt in. The question it raises is whether we give our students, faculty, staff, and prospects the agency to make these opt-in decisions, consciously or unconsciously. As a Google Analytics and web metrics consumer, I deal with these especially sensitive issues on a daily basis.

[CC licensed image from flickr user Richard Smith]

4 thoughts on “Annotated Bibliography Entry: Crow in DWAE”

  1. Hi, Daniel:
    I always enjoy following your thinking through things and the connections you make.

    Crow’s article brings up the “dark side” of the cloud, much as White’s afterword brings up the “dark side” of technology hyper-mediating our experience. You’re going to be a bit ahead when we read Deleuze and Guattari for this class, as Crow makes use of their concept of assemblage, which, unlike Foucault’s panopticon, theorizes what happens as a result of the newly created entity that is the sum of the surveillance. It is true that the cloud, and e-portfolios as a part of it, creates a convergence of once discrete surveillance systems, this “surveillant assemblage.” A classroom is one such discrete surveillance system, but when you create a portfolio of artifacts from multiple classrooms, you create such a convergence: performance is gathered across the boundaries of the individual performances for individual teachers, who formerly surveilled only their own students but now have access to products from beyond the borders of their classrooms, from students who were not “their own.” The performance revealed in the portfolio is something new in and of itself; it is more than the sum of the discrete performances in particular classes. Furthermore, the audience is extended, especially depending on who has access to the e-portfolio.

    I have often thought about this “surveillant assemblage” (without using that vocabulary term) when it comes to SafeAssign, the anti-plagiarism tracking tool in use at many high schools, colleges, and universities (or another program of the same ilk, such as TurnItIn). Students don’t REALLY have a choice about putting their work into the database. Sure, they have the disclaimer in front of them that says they voluntarily agree to add their information to this network, but they cannot turn the assignment in to the teacher without agreeing to the terms. The vastness of the information contained in the SafeAssign database, including personal identifying information, reflections, etc., is amazing to think about. It is in the hands of a multi-billion-dollar corporation (Blackboard, which is owned by an investment group: http://www.forbes.com/2011/07/01/blackboard-to-be-taken-private-marketnewsvideo.html). What do they do with the data? What *could* they do?

    I am careful to have students put their personal narratives, profiles of partners, and professional writing that includes their names, addresses, and resumes into Blackboard but *not* SafeAssign for this very reason. Their papers are still on the VCCS Blackboard server, and potentially viewable by others besides me, but that is not the same as being added to a large, multi-school database. My institution requires me to use SafeAssign for certain assignments, either as a justification for paying for it or as an attempt to monitor plagiarism (which sometimes subsumes the purpose of writing, making it about “catching” wrongdoing).
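
    To make concrete why opting out is effectively impossible, here is a toy sketch of the kind of fingerprint matching such tools rely on (purely illustrative; SafeAssign’s actual algorithms are proprietary, and every name here is invented): each submission must be retained so that future papers can be compared against it.

    ```python
    # Toy n-gram ("shingle") fingerprinting in the spirit of SafeAssign-like
    # tools. Purely illustrative; not any vendor's real algorithm.

    def shingles(text: str, n: int = 5) -> set:
        """Break text into overlapping n-word sequences."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    database = {}  # paper id -> shingle set; grows with every submission

    def submit(paper_id: str, text: str) -> dict:
        """Compare a new paper against every retained one, then retain it too."""
        new = shingles(text)
        overlaps = {
            pid: len(new & old) / max(len(new | old), 1)  # Jaccard similarity
            for pid, old in database.items()
        }
        database[paper_id] = new  # the student's text now lives in the database
        return overlaps
    ```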

  2. Pingback: Digital Writing Assessment – Comments on Others’ Posts | Live Action Network Theory

  3. Crow’s argument implies important recommendations for English studies, not just in helping students learn to compose in ever-more digital spaces, but also in helping them (and ourselves) responsibly navigate the difficult world of contributing to and existing in mass data compilations. Admittedly, I seldom think about how these digital profiles can be surveilled, how my contributions to class wikis or Google Docs might construct an “idea” of Suzanne, or how the assignments I ask students to complete might do the same for them. Privacy seems to be a buzzword lately in the wake of Snowden’s leaked documents about American surveillance, yet the conversation is seemingly just beginning in the discipline. The arguments that “English studies as a (supra)discipline has a responsibility to investigate the effects of surveillant assemblage collections” and to maintain agency are likely to be of great concern for the future of the digital humanities. Crow seems to be at the beginning of this complicated issue. Now that we are more aware of the potential ethical concerns of composing in digital spaces that also collect those contributions, how can we ensure privacy and agency? The idea that we tend toward the “caring” end of the spectrum, giving students greater control, is good, and the assertion that we build our own assessment technology is a good suggestion. However, I have questions about the practicality and immediacy of that. Who can build these technologies? Won’t they want a stake in the ownership of the data encumbered therein? How long would it take to develop workable solutions, and what do we do in the meantime? I am struck by the idea that scholars in digital fields must move more toward the critical making advocated by some in the discipline. The responsibility for knowing how to create technology, and not just use technology, clearly supported here by Crow, lies more and more with discipline participants rather than being outsourced to IT companies.

    • Suzanne, I love your questions: “Who can build these technologies? Won’t they want a stake in the ownership of the data encumbered therein? How long would it take to develop workable solutions, and what do we do in the meantime?”

      I’m thinking of my own skill sets. I think I could consult on a project, possibly even lead a project, that attempted to create a caring and responsible network for collecting and hosting digital compositions. But I would have a very hard time creating that network without being tempted, SORELY tempted, to aggregate the profile data collected, like upload time and date, number of downloads, majors, and seniority (i.e., year in college), to draw correlations between senior English majors and their likelihood to complete a composition early. Could I resist the temptation, especially given the attractiveness of publishing results using aggregated datasets, or the requirements to report on project success to accrediting agencies and central authorities at the local, state, and national levels? I admit that I don’t know. And I’m struggling with whether it’s ethical for me this semester to require students to compose and link their digital portfolios in Google Drive. For many of my students, it’s their first time using Google Drive; they have a pristine, un-surveilled profile in Google Drive that I’m destroying. In 25 years, will this requirement become a serious liability to any of them? Or to me?
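
      To show how little code the temptation requires, here is a hypothetical sketch (invented records and field names; nothing from a real platform):

      ```python
      from statistics import correlation  # Pearson's r, Python 3.10+

      # Hypothetical upload metadata of the kind a portfolio platform accumulates.
      records = [
          {"year_in_college": 4, "days_early": 5.0},
          {"year_in_college": 1, "days_early": 0.5},
          {"year_in_college": 3, "days_early": 2.0},
          {"year_in_college": 2, "days_early": 1.0},
          {"year_in_college": 4, "days_early": 4.0},
      ]

      # A few lines of analysis turn assessment metadata into a publishable
      # "finding": do senior students complete compositions earlier?
      r = correlation([rec["year_in_college"] for rec in records],
                      [rec["days_early"] for rec in records])
      print(f"Pearson r, seniority vs. early completion: {r:.2f}")
      ```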
