Category Archives: Discussion

Bob Stein, “A Unified Field Theory of Publishing in the Networked Era”: Readers, Community, and the Future of the Book

Just as think-pieces on “what is the definition of digital humanities” have proliferated in recent years, textual scholars are increasingly invested in the question of “what is the future of the book.” A final paragraph on the possibilities of digital textuality is now almost ubiquitous in books and articles on the topic, which is productive and necessary, but can often feel like a gesture towards a topic–the book in the digital age–that in fact deserves a deeper dive.

To that end, I appreciate the format of Bob Stein’s “A Unified Field Theory of Publishing in the Networked Era” for its transparency–bullet points suggest ideas-in-progress, steps to be debated, questions ready for answers. The robust comments section, too, suggests the very communities of readers that Stein believes are the key to understanding potential forms of publication in a networked age.

For Stein, the networked era requires a shift in consciousness from bibliographic, or physical, forms of books to practices of readership, or as he states, “how [books] are used” (1). In his list of key questions, he asks how we might “account for the range of behaviors that comprise reading in the era of the digital network,” and goes on to consider ways to “engage readers with the author’s conclusions at a deeper, more satisfying level” (2). Ultimately, his answer rests in the idea of communities of readers and authors that exist within a publishing business model. He suggests “a new formulation might be that publishers and editors contribute to building a community that involves an author and a group of readers who are exploring a subject” (4).

In the comments section, Michael Jensen wonders about the time demands of the in-depth, reader-meets-author communities of reading that Stein suggests, noting that “most of us simply have too little time to really investigate/explore/expand out” to participate and read in the way that Stein describes. With this, I wonder: what other facets of a community of readers might we examine to determine a better way of producing sustained engagement? Should we be looking not only at reading practices, but also at what readers do generally as embodied beings with obligations, lives, and schedules?

Thinking also of the balance between readership and physical form, Stein’s vision of the future indicates that “novels will not continue to be the dominant form of fiction”; rather, participatory games will be, based on their narrative capacities (5). In considering the futures of the book, it stands to reason that Stein’s turn from content or appearance towards reading practices also suggests that we should look to other non-book forms that are “read.” Of this idea, I might ask: what other media, beyond games, shed light on narrative and reading practices today?

I was surprised (and okay, a little excited!) to see Cory Doctorow, of #ITPCore1 syllabus fame, in the comments, too. He points out moments in Stein’s argument that fall prey to what he terms the “futurismic fallacy,” and states that “tomorrow will be like today, but more so.” I have been thinking about that phrase since I read it. I’ve found that in articles and books on the future of publishing and the potential of hypertext, authors will readily list off statements about what technology might be able to do in the future–as Vannevar Bush illustrates, this is ultimately productive–but these anticipations often feel impossible since they are not well scaffolded onto our current technologies of reading. In short, it can feel like there are chasms between Point A and Point B. But the idea that “tomorrow will be like today, but more so” calls for a re-examination of what exactly constitutes reading practice in our current moment–and the answer to this will be the basis upon which we can envision the types of new forms that books and reading will take. So, what does characterize reading and books today, and how might we distill these characteristics into principles for future publishing?

Scheinfeldt and Flanders: Alt-ac

Tom Scheinfeldt – “Toward a Third Way: Rethinking Academic Employment”

My guess is that Scheinfeldt intended this as an appeal to the DH community – those inside and influenced by the Roy Rosenzweig Center for History and New Media (Scheinfeldt’s then employer; he’s now on a faculty line at UConn) and other DH loci dotted around the DC metro area. Scheinfeldt is talking to DHers he hopes might eventually take over his CHNM role (hard or soft money allowing), and to the newer academic crowd he might employ in the consistently funded digital humanities positions he describes working tirelessly to maintain. I can imagine juggling soft money to create stable employment beyond a grant cycle was tough work, and it sounds completely impossible from a CUNY perspective, what with CUNYfirst and our lean HR support. I would guess it helped, finance-wise, that CHNM is a separate research center and has built an endowment.

Scheinfeldt’s appeal makes a lot of sense when I consider his role in DH. He and his colleagues have worked hard and made a success of this type of alternative academic employment (which I think is awesome, btw), so it is only natural that he would use his position and this post as recruitment for bright new PhDs to go work for him. It’s not that Scheinfeldt is merely concerned with preserving a legacy; he believes this work is just as valuable as traditional tenure-track work. I agree that a tenure position is but one of many important roles in an academic institution, and I also lament that it still carries a particularly coveted patina. Insofar as Scheinfeldt can be a mentor and enabler for alt-ac folks to forge ahead with the support of leaders like him, I applaud his words.

But I have fundamental disagreements with the tenor he uses to structure his argument, and I question whether this post would have any impact beyond the DH community. Let me try to explain “tenor” here. When I read this, I heard someone speaking from experience – a very exceptional and privileged one. I did not get the impression that Scheinfeldt had a true understanding of the realities of librarians or tenure-track faculty. Or maybe he didn’t think revealing such an understanding was important to this discussion? Mentions of tenure-track faculty and librarians, even libraries, were invoked in rather crude terms, seemingly for the sole purpose of advancing his own agenda (though I agree with asserting alt-ac equality). Last time I checked, the library, second only to staff and students, was at the center of the university. Obviously, I’m biased. I was also not so sold when he brought up soft/hard money, the polarities of job security, and the Walmart/university simile.

My trouble with this piece, and why I think it falls short of being effective, is basically my trouble with academia (and it extends beyond university grounds). As a reader who wouldn’t consider myself Scheinfeldt’s target audience (I’m a librarian BUT I have faculty status), I think the tone of this discussion accentuates a pervasive lack of awareness of, or concern for, the units (human or otherwise) that constitute a more complete, albeit flawed, organism. This absence, in academia and elsewhere, may be partly attributable to ignorance, inexperience, fear, selfishness, bullying, transactionality, the bureaucratic beast, institutional siloing, not enough hours in the day…etc.

When we position ourselves as, or conceive of those beyond us as, “other” or “outside,” absence is produced. The results of this are felt by many graduate students as their advisers and departments endeavor to mold them in their own likenesses (traditional faculty roles), and as members of their cohort attain competitive and prestigious postdocs, followed by tenure-track positions. It’s all well and good for Scheinfeldt to preach for alt-ac, but the reality is that people feel pressure to perform and to compete in these conventional academic roles, and it’s hard to make a leap without feeling much risk, and the deep possibility of failure. The preening of the academic can be a major exercise in solitude and insulation. Preparing an annual tenure review packet feels much the same. You are forced to report all of your work under one of three columns: teaching, service, or scholarship. No double dipping allowed, although I would have guessed conveying that my work crossed every domain was a win!

So I think reforming the faculty system of tenure and reward is very important. Scheinfeldt speaks of the changing nature of digital humanities scholarship and work, and I think the challenges of the digital humanities are a proxy for the challenges inherent to the whole university. Fundamental questions pressing the academic organism are the shift in scholarly communications and what it means to do collaborative, digital research. I believe teaching must move out from behind scholarship and become equally important. Incidentally, I think Scheinfeldt’s work is great on this front. In my institution, for instance, there are a number of pedagogical programs that instructors participate in (beyond classroom teaching) that are usually a ton of work, but also extremely important to the college community – and looked upon very highly by college administration. Yet translating this labor into the tenure portfolio still requires juggling, and faculty can start to second-guess how they choose to spend their time. Do something to benefit the college, or to benefit themselves? This is silliness. The college wants to employ good pedagogues and scholars, so can’t we find a way to reward this in practical terms?

And, for possible context, my position is quite rare for an academic librarian. My faculty status is equal to that of teaching faculty; equal in the sense that we librarians at CUNY have the exact same tenure guidelines and review as other faculty. This has mostly been a net positive for me so far, and I’ll briefly explain why. Libraries and/or librarians are perceived to, and often do, operate at the behest of and in service to others. Providing a service is fine. But the reality is that academic libraries are more than a service. Libraries have their own agenda, mission, and expertise. Of course, a huge aspect of that is to serve and provide resources, but many faculty can be largely unaware of the rest of the library’s goals and initiatives. One such example would be the library as purveyor of critical information literacy. I’m pretty certain I wouldn’t feel as empowered to place my work at the same level of importance as that of faculty outside the library if I didn’t have faculty status. I’ll often hear librarian colleagues – and have observed myself – becoming preoccupied with how to help and serve the college community, as if we’ve always been fighting for relevance through someone or something else. I think this is a misrepresentation. I believe we’ve always been relevant, but that we have the unique privilege and curse of having much less time to push or advocate on our own behalf because we are often working to advance others. I think this reality produces many librarians who are keen to collaborate and are particularly receptive to stuff “outside” the library. But I fear many folks beyond the library have much less practice or incentive to improve on this. And while I concede that it can be frustrating for the library to balance its roles, I think, or at least would like to think, that we’ve been working in an environment that is far more complementary than (some) other units of the university have. We could use more of this.

Julia Flanders, “Time, Labor, and ‘Alternate Careers’ in Digital Humanities Knowledge Work,” in Debates in the Digital Humanities

Reading Julia Flanders’s chapter after writing the above was kind of amazing. I was particularly struck by this:

“Self-consciousness in the consultant arises partly from habitual exposure to infinite variety of beliefs, ways of doing things, and systems of value and partly from the constant projection of oneself into other people’s imaginative spaces. The consultant must identify, however briefly and professionally, with the client’s situation…”

Flanders’s chapter is a critical read and observation of “para-academic” roles. I’m particularly interested in her comments on consulting and on observing transactions with clients: the clients’ relief at not having to be responsible for certain knowledge, and the satisfaction they find in a consultant’s answers, with its monetary, transactional dimension. Also, the notion of hourly/work-for-hire labor versus the traditional academic labor paradigm, and the resulting quantifiable labor and outcomes, is a really important piece of the conversation that Scheinfeldt’s piece did not tackle at all.

Keeping this short since I already wrote a lot: any reactions to the following statements?

“By formalizing humanities research practices and rendering explicit the premises on which they rest, digital humanists also make possible critique and change.” – Flanders

Re: the fractionalization of workers: “…it constitutes a displacement of autonomy concerning what to work on when and how long to take…a reversal of the classic narrative of academic work.” – Flanders

Are there exceptional models that we celebrate without daring to imitate? Are there exceptional models we are enacting?

It would be great also to hear examples of how we’ve negotiated our choices around investing time and labor as academics.


Steve Jones and the Humanities, Everted

Steve Jones was a Distinguished Visiting Professor for the Advanced Research Collaborative at The Graduate Center last year (2014-2015), and as a result, I had the chance to hear him speak a few times. One of the features I admire about his work is the way it traces beginnings to moments of critical mass–certainly a goal of the introduction and first chapter of his 2013 book, The Emergence of the Digital Humanities. As I understand it, Jones is now working on a history of Father Busa, the so-called founding father of digital humanities-type research, who produced a concordance of Saint Thomas Aquinas using IBM’s computers around the 1950s. This project, like his others, suggests a common methodology: return to historical roots for new ways of thinking, uncover institutional forces that shaped movements, and interrogate these systems to highlight their current digital and networked instantiations.

Something that has struck me throughout this course is the intense relevance of science fiction, and thinking to Cory Doctorow, young adult versions of this genre, to imagining digital futures. Jones uses the work of William Gibson–who also coined the term “cyberspace”–to refine the term “eversion” (also Gibson’s word) for conceiving of our relationship to technology anew. For Jones, “eversion” is the idea that we no longer tune in to digital worlds, or engage with networks by booting up or down a computer, but that the omnipresence of the network creeps outward into our daily lives and physical space. The WiFi waves that surround our bodies when we’re in networked buildings, the GPS in our phones (GPS is a huge turning point for Jones’ argument about eversion, perhaps worthy of classroom discussion) that tracks our location on grids, gaming devices like the Wii, all indicate that we are surrounded by the stuff of digitality and can no longer contain it in a tiny screen or device. This idea dovetails with Hayles’ argument from How We Became Posthuman that information is material, suggesting, in part, that what’s at stake in Jones’ argument–although he doesn’t necessarily pick this up–is what it means to be human in an everted age. Perhaps Haraway might have something to add!

Jones covers much ground in the first two sections of The Emergence of the Digital Humanities, but by far the most resonant and applicable idea that I’ve extracted is that of “eversion.” Since this term is also the organizing principle for his book, in lieu of a blow-by-blow of the readings, I’ll trot right to the provocations:

***The introduction ends with Jones’ statement that “the digital humanities is the humanities everted” (16). As evidence, he suggests that “DH has the potential to facilitate…productive breaches, to afford the kinds of cultural exchange that have shaped the new DH since its emergence” both inside and outside of the academy (16). Do you agree with his assessment of DH as constituting an everted humanities? I’ve been chewing on this one a while.

***Related to eversion, Jones suggests that “the new DH starts from the assumption of a new, mixed-reality humanities” (32) that functions “less like an academic movement and more like a transitional set of practices at a crucial juncture, on the one hand moving between old ideas of the digital and of the humanities, and on the other hand, moving toward new ideas about both.” Looping back around to Haraway and Hayles (very poetic at the end of the semester), how might we build further nuance into this argument? Do “mixed-reality humanities” depend on student or institutional economic stability/wealth, on ideological systems, or perhaps on combinations of other factors?

***Jones makes an important distinction in his definition of eversion by noting that the network doesn’t turn “itself inside out,” but rather “human agency” accomplishes this task–just as “games require players” and “digital humanities research requires scholar-practitioners” (36). Many of our course themes have attempted to account for human elements in digital research and pedagogy–it always comes back to the embodied self. How do we continue to negotiate the balance between concepts and theories like eversion and the human elements that are inherent in their animation?

Fights over software and the web

The Free Software Definition and Vaidhyanathan, and O’Reilly in the Social Media Reader

At first glance, these reading selections may appear a bit dry, but they’re valuable to our conversations because they represent different perspectives from certain “stakeholders” of the internet and computing. In the case of the Free Software Definition, the declaration represents a specialized computing community (or, as Gates called some of them, personal computer hobbyists) that strongly supports a political and ethical imperative. Vaidhyanathan represents the academic perspective of critic and problem-poser, and O’Reilly represents the perspective of a long-term businessperson whose company has straddled tech and created an interesting niche in tech business and culture through software manual publishing and conference hosting.

Their importance as records of the state of computing in the early to mid 2000s (and in some cases its forecasted future) is also their weakness – they are words from the usual suspects. Yet the content of their discussions crosses beyond the materiality of the internet and computing, and into economies of culture and capital that affect all of us. The internet’s capacity to intersect expression, innovation, collaboration, and commodification is unlike any other, it seems. Is it even possible for a conversation about the future of the internet not to come down, ultimately, to fundamental questions of freedom and control (regulation)? I think Vaidhyanathan does a laudable job of speaking to the complex politics with respect to legacy copyright laws and the rhetoric of free/open source. Speaking of the open-source model, Vaidhyanathan speaks to my concern over the voices shaping the conversation:

“It has been difficult to court mainstream acceptance for such a tangle of seemingly technical ideas when its chief advocates have been hackers and academics.” 

These works all have in common a reaction to (Free Software Definition and Vaidhyanathan) and/or a dialogue with (O’Reilly) the proprietization of software and computing, as against the historical and romantically rooted philosophy of hacker culture and free/open-source software radiating from academics and researchers – the likes of whom founded the Free Software Foundation, and many of whom have supported the development of the GNU/Linux OS.


Who can make conversations about the future of the internet/computing, and about open versus proprietary software, relevant to all users – and how?

What are some examples of a successful strategy that’s gotten the general public in dialogue? What is the role of media and government? What about privacy and security?

Are the business systems that support Web 2.0 competencies here to stay? How do they advance or hinder internet and computing?

Notes on the Free Software Definition

The free software definition is more than a definition; it’s a declaration that free software is an extension of the fundamental freedom of speech. It is an evolving statement that traces the history and revisions of the very political definition from 2001 to the present (a nod to wiki edit history), though the fundamental concepts have been systematically advancing since the early-to-mid 1980s with Richard Stallman’s work to develop a completely open OS through the GNU project.

The free software definition consists of four main freedoms:

  1. freedom to run a program as you wish
  2. freedom to learn how the program works and the freedom to change the program to your own specifications
  3. freedom to distribute copies to your neighbor
  4. freedom to distribute copies of your modified versions

Most of these freedoms cannot be achieved unless software source code is open – free for anyone to access and use. The definition also stresses that free software is not about cost. In fact, the FSF explicitly permits (even encourages) distributing copies of free software for a price. More on that here. The definition also comes out pretty hard against a group advancing the term “open source software” instead of “free software.” The FSF believes the two are fundamentally different. Richard Stallman writes:

“The two terms describe almost the same category of software, but they stand for views based on fundamentally different values. Open source is a development methodology; free software is a social movement. For the free software movement, free software is an ethical imperative, essential respect for the users’ freedom. By contrast, the philosophy of open source considers issues in terms of how to make software “better”—in a practical sense only. It says that nonfree software is an inferior solution to the practical problem at hand.”

The Free Software Definition talks about copyright and recommends copyleft licensing, which requires that any future modification of the existing software be licensed in exactly the same way, so that no one can convert the software into a proprietary (nonfree) version. In general, though, the Definition does not go into detail about the range of software licenses available, and the style of the post reads sort of like a one-sided conversation, with an air of superiority.
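For concreteness (this example is mine, not from the Definition): in practice, copyleft works by attaching a standard notice to each source file, so the license terms travel with the code and with every modified copy. The GNU GPL’s own recommended notice reads, in part:

```text
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.
```

Because the GPL requires that derivative works be distributed under these same terms, the four freedoms propagate downstream: no redistributor can strip them away.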

Notes on Web 2.0 by Tim O’Reilly 

Tim O’Reilly and his colleagues at O’Reilly Media introduced the term “Web 2.0” after the dot-com bubble burst in the early 2000s. Since its coining, the term has taken on a monstrous life of its own, and O’Reilly writes to explain its original intentions.

What makes something on the internet Web 2.0 instead of Web 1.0? O’Reilly describes 2.0 as “…principles and practices that tie together a veritable solar system of sites” that exhibit some/all of the following principles or “core competencies”:

  • services, not packaged software
  • architecture of participation
  • cost-effective scalability
  • remixable data source and data transformations
  • software above the level of a single device
  • harnessing collective intelligence

from Figure 4.1, the Web 2.0 Meme map

Fundamental features of Web 2.0 include the web as platform, integration of transformative social networking technology with blogging and RSS, strategic management of the data supply an application works off of, and constant maintenance and iterative improvements to the product. The most enduring concept throughout O’Reilly’s discussion is really about shifting business models in the likeness of Web 2.0 companies that have had huge success (Google, Amazon, eBay). O’Reilly emphasizes that companies are best positioned when they work with the network: this includes leveraging the user community, for instance Amazon tracking user activity to improve search results, and Flickr categorizing content with user generated folksonomies. Web 2.0 also pushes the option for modular product development that takes existing independent components and assembles something of new value.

I have several concerns with O’Reilly’s section on improving the user experience: the emergence of cross-platform access is convenient, but it can also degrade user privacy, and open-source software didn’t get explored in great detail.


Whose Commons is it, anyway?

The beginning of Lewis Hyde’s “Common As Air” threw me for quite a loop. I did not expect to see the all-too-familiar and rote trotting out of the “state of nature”/social contract theory philosophers John Locke and Thomas Hobbes. Maybe that exposes my naivete, but the idea of commons seems quite at odds with the practised ideology of these philosophers. John Locke once wrote that “every man has a property in his own person: this no body has any right to but himself.” James Madison, fourth president of the United States, championed the three-fifths compromise while invoking the biblical codeword ‘dominion’ to justify the Louisiana Purchase – the acquisition of what Locke called “wild woods and uncultivated waste,” an embarrassment of riches ripe for the taking.

Hyde continues with this troubling mirroring of imperialist/settler-colonial language on page 24. He writes,

Invocations of the commons can carry with them a promise that more than air can be like air, always there for the inhaling lung: infinite bandwidth, unlimited acorns and deer, all of literature instantly available on the computer screen, unfenced prairies stretching to an unowned ocean, ‘that great and still remaining common of land’ (Locke). There are psychological, spiritual, and mythic elements to ‘the commons’ and it is worth marking at the outset so as to be alert to how they might refract our thinking about other, more concrete commons.

Hyde may only draw these parallels in order to invoke the bevy of riches currently available and yet to be made available because of the internet. Still, connecting unlimited bandwidth with John Locke’s image of America as a cornucopia of unbridled sustenance links the potential of digital spaces to the old Manifest Destiny doctrine. These are some of the cautionary tales that need to remain at the forefront of creating commons.

In this way, the text leads me to wonder: in Hyde’s somewhat glossed over history of property commons, who benefits and who remains locked out of the commons system? How can we build systems that resist “the free market”? What role can educational cyber commons play in capitalist societies? We’ve been exploring these questions for a while now, but I think they are worthwhile to keep in mind as we shift our thinking towards our ITP projects.

Response to A. Hyde et al. and L. Hyde: Collaboration, Sharing, Ownership

That there were two articles written by different first authors with the same last name this week, both of whom described issues related to property, ownership, and sharing, seemed to add a new layer of complexity to the issues at hand. How can one prove ownership of property if one cannot prove to be oneself and no one else?

In any case, the two articles demonstrated different aspects of collaboration and openness with respect to the distribution and use of digital property. A. Hyde et al. (2012) provide an overview of what constitutes sharing and collaboration of intellectual property. Drawing a distinction between sharing and collaboration, the authors suggest that to share content involves treating it as a social object that can be directly linked to an author, whereas in collaboration, the direct linkage between an author and the content produced is less clearly observed. In the case of Wikipedia, all edits are preserved, but the final written article as it appears may consist of multiple edits. Though Wikipedia articles are in some ways culturally constructed, there are safeguards against the falsification of information, as illustrated by the Colbert Report incident. How might having a distributed network of authors affect the product of a collaboration, and is the accuracy of the information source any more or less questionable than a piece written by a solitary author?

A. Hyde et al. (2012) continue by outlining criteria for a successful collaboration in the form of questions. Included are questions of intention, goals, self-governance, coordination mechanisms, knowledge transfer, identity, scale, network topology, accessibility, and equality. The question of network topology stuck out as an important issue, yet one that I had not considered before as an aspect of collaboration. In the case of Wikipedia, contributions appear to be individually connected, unless there is a conflict between two editors working at the same time. In any given collaboration, is it possible to sketch out a model of the roles and tasks of the individuals or entities involved? Is it always feasible to do so?

Whereas A. Hyde et al. (2012) discuss the process of collaboration, L. Hyde focuses on the proprietary aspects of collaboration, specifically the “commons.” In contrast to views that place the idea of a commons outside the realm of physical property, L. Hyde speculates that the commons is in fact property, and by definition, “a right to action.” Later, he elaborates by stating that “a commons is a kind of property in which more than one person has rights” (p. 27), suggesting that a commons may be inclusive of larger units of contributors. The word “commons” itself apparently derives from proprietary feudal systems, in which such a thing would ultimately be under the ownership of nobility, and in order to use it, others would have to contribute certain goods or resources in exchange. In this case, a commons was typically a piece of land jointly used by multiple individuals for agrarian purposes. These types of systems strictly controlled the use of the commons as well as any product reaped from it. According to the author, a modern commons is a “kind of property in which more than one person has a right of action” (p. 43). As “commoners,” how should contributors view their contributions? Can one reasonably expect to have sole ownership of property once it’s been submitted to a commons?



This week, we again consider the issue of ownership of intellectual property. A. Hyde et al. (2012) prompt us to consider the complexities of collaboration, and to think about ways to structure successful collaborations, while L. Hyde describes the evolution of the modern commons as property with collective ownership. As teachers and academics, in what ways can we effectively structure collaboration and the sharing of knowledge in a commons? What recommendations would you have for students and peers to form constructive models of knowledge generation and sharing?



Lewis Hyde (2010). Common as Air: Revolution, Art, and Ownership. New York, NY: Farrar, Straus and Giroux, pp. 23–38.

Adam Hyde, et al. (2012). What Is Collaboration Anyway? In Mandiberg (Ed.), The Social Media Reader, pp. 53–67.

Workshop and Skills Needs

Specific Topics

  • HTML
  • Data Visualization
  • Web Scraping
  • Tools for online teaching (Common LMSs, Voicethread etc)
  • Setting up course sites and organizing digital work for courses
  • Pedagogical discussions
  • CBOX Administration
  • Mobile apps
  • Gaming

General Comments
Workshops can wane in usefulness; would like to have time to ask directed questions applicable to specific projects. Digital Fellows office hours might be good for that. Maybe start a working group (e.g. Python group)? Need avenues to support longer-term skill development.

Rethink structure of workshop requirements? Make more time for differentiated/self-organized play with tools through refresher.

Provide space to discuss pedagogical projects and show/share examples (projects, syllabi etc)

Share workshop materials for those who can’t attend or want to refresh

Look at CityTech L4 Living Lab submission tool for materials submission

To the extent possible, make workshops project-based

Provide materials in advance (tutorials) and use workshop time as a working session (flipped classroom style)

Coordinate between workshop leaders so that material is not redundant. Consider having prerequisites for some workshops (example: HTML as a prereq for Bootstrap)

Opportunities to respond (gone, but not forgotten)

Timeline (Manovich & Douglass, 2010)

It occurred to me that I had initially agreed to provide a provocation to Manovich’s piece last week, and I do apologize for not constructing a post in response to the readings and visualizations that were assigned. While it doesn’t seem fair to detract from the current week’s readings, I would like to pose a very simple question. If one assumes that common knowledge is reproducible, yet creativity and other forms of nontangible and cultural knowledge are unique sources of information, does the sum of all nontangible knowledge ever approach a mass store of common knowledge? In an era when we can quantify cultural knowledge through advanced data science, are we harming the generativity of knowledge? Or are we simply pushing the boundaries of knowledge and creativity by reproducing and re-representing information in unique forms?