Category Archives: Discussion


Lessig, Benenson, Mandiberg

REMIX

In this talk, Lessig argues that copyright laws and policies are outdated in the context of digital culture, causing problems that ultimately harm democracy. He proposes legal changes and cultural practices, rejecting both copyright extremism and copyright abolitionism.

Writing is “an essentially democratic form of expression; the freedom to take and use freely is built into our assumptions about how we create what we write.” The observation that follows is that digital media has also been democratized, both in one’s access to diverse cultural content and in one’s ability to create content. It is the popular medium of the 21st century, even more so than writing.

But the traditional copyright model, which tries to protect works from being copied, fails to reflect the fact that digital media necessarily involves duplication; as a result, it over-restrains amateurs’ freedom of use. Moreover, the war on piracy is not serving its original purpose of protecting creators’ rights, but is really just criminalizing more people.

In order to preserve the positive function of copyright, providing incentives to professional creators, while also pursuing the democratic value of freedom of use, Lessig argues for a law that focuses not on whether something has been copied but on context: whether something is a mere duplication or a creative remix, and whether it is a professional act or an amateur act. The law should provide control over professional copies and encourage amateur remix, while professional remixes and amateur copying should be subject to detailed negotiation.

Piracy should not be dealt with through ineffective mass criminalization but through legal changes that facilitate compensation in the current state of technology; proposals such as compulsory licenses (government-granted use without permission, but involving a set fee) or voluntary collective licenses (subscription-based file sharing networks) should be incorporated.

Alongside these legal issues, the potential of an internet-driven hybrid economy, where economic value is created from people’s acts of sharing, should also be harnessed, and in a just way that minimizes exploitation; his proposal on this matter is Creative Commons licenses.

  • What are current challenges in your field that involve copyright and intellectual property? One thing that comes to mind is the firewall of commercial databases that Micki mentioned.
  • With online stores for video, music, and apps seemingly stabilizing as platforms, is piracy still an important issue? What are the things to think about?

On the Fungibility and Necessity of Cultural Freedom

Benenson discusses how non-copyright licensing should be approached with regards to cultural works. Some points:

  • One difference between Creative Commons and its precedents, notably the idea of Free Software and the GPL, is that CC licensing offers more restrictive options, the choice of which is up to the author, whereas Free Software puts more emphasis on keeping things open as a principle, not only on the author’s side but all along the distribution process.
  • There have been arguments for an extended application of free-software principles to cultural works, which would enforce free use—something that CC licensing offers only selectively.
  • Such “user-generated utopianism” assumes that cultural works, like tools, are fungible.
  • The fungibility of software like kernels and compilers has been crucial to the success of the free software movement; this doesn’t necessarily apply to cultural works, where authorship must be valued.
  • While copyright laws must be adjusted to the contemporary context, we don’t need to throw them away completely, as they do protect values important for cultural creation.

As we increasingly see works that exist across the boundaries of software/tools and cultural works, the question of articulating an appropriate mode of licensing becomes more relevant.

  • Benenson’s discussion relies on the separate categories of software/tools and cultural/creative works. What considerations come into play when trying to articulate an appropriate mode of licensing for works that cross these boundaries?
  • While I can agree with Benenson’s argument that universal openness will not necessarily encourage the creation and sharing of works, the claim that “user-generated utopianism challenges us to believe that all cultural objects are effectively fungible” sounds like a hasty reduction of the logic behind Free Software advocacy; I would like to hear your thoughts on this.

Giving Things Away Is Hard Work

Mandiberg examines the collaborative effect that open licensing can bring when applied to projects, especially physical designs. This approach to open licensing is summed up as a cycle in which “participation breeds creative mutation, and creative mutation leads to better ideas through this collaborative process.” The insight here, as I read it, is that one should strategically consider both materiality and work process: the project’s functionality in its shared form, modes of collaboration, degrees of access depending on skill levels, and methods of production.

The choice to go on Kickstarter for Bright Bikes was interesting; since then, crowdfunding platforms seem to have made this type of approach an almost standardized practice.

As I was reading the texts, I also had the chance to get nostalgic about a project I did with some friends a couple of years after the time of these articles. Our choice to go with a CC-BY license was partly logistical (putting in the time and effort to deal with copyright just didn’t make sense); but I also remember the optimistic vibe around free culture and the possibilities of the internet, which was very much a real thing at the time.

  • I am curious what Benenson’s response would be to the quote relating to Lady Ada: “this is a success: the practice has become so pervasive that the origins are no longer important.” Do Fried’s contributions count as fungible tools, or do they fall into some middle ground?

Sources

Fred Benenson, “On the Fungibility and Necessity of Cultural Freedom”; Lawrence Lessig, “REMIX: How Creativity Is Being Strangled by the Law”; and Michael Mandiberg, “Giving Things Away is Hard Work: Three Creative Commons Case Studies” in Mandiberg, The Social Media Reader, Part V: Law.


Visualizing Impossibility: Thoughts on Lauren Klein

In Lauren Klein’s “The Image of Absence: Archival Silence, Data Visualization, and James Hemings,” we search alongside her for ghosts, silences, and absences in the archive. Over the course of the article, she seeks to illuminate the life and contributions of James Hemings within the Papers of Thomas Jefferson, a digital archive made available through ROTUNDA, University of Virginia Press, and in doing so, discusses the possibilities and pitfalls of data visualization in this process. For Klein, digital technology has the capacity to render visible the invisibilities of archival gaps, and at the same time expose the limits of our knowledge as productive space with which to think.

Recalling last week’s conversation about narrative and database, Klein suggests that archival silences can be produced, in part, by metadata and data structuring decisions (663). This claim dovetails with Lisa Brundage’s suggestion that the most essential word in database theory is the “you,” or human agency responsible for decisions regarding information. In the context of Klein, the locus of “you” as human interacting with or producing an archive becomes a space for determining the nature of archival imbalances, power, and structure—particularly when Klein asks, “How does one account for the power relations at work in the relationships between the enslaved men and women who committed their thoughts to paper, and the group of (mostly white) reformers who edited and published their works?” (664)

This same question of the “you” that must be accounted for appears in the data visualists’ role in rendering information visually, and is part of Klein’s call for a greater theorization of the digital humanities. She states, “the critic’s involvement in the design and implementation—or at the least, the selection and application—of digital tools demands an acknowledgment of his or her critical agency” (668). In Klein’s scholarship, qualifying and elucidating the role of “you” is paramount to understanding the archive, the visualization, and the data collected.

Critique without suggesting an alternative is all too easy, and I admire the way in which Klein posits data visualization as an antidote to archival silences while also deeply engaging the fraught history of its practice (665). She engages visualization’s vexed history through the figure of Thomas Jefferson himself, who underwent training in early forms of data visualization with William Small at the College of William and Mary. In this section of the article, we gain a sense of how complex it is to engage these forms: can the same tool that Jefferson was so fond of also be a tool for scholars to resurrect the memories and presence of the slaves he owned, centuries later?

Klein also explores the ways in which Jefferson’s note-taking and records use representation in diagrams, charts, and tables to suggest that he was engaged in using data visualization as a “form of subjugation and control—that is, the reduction of persons to objects, and stories to names,” which points to the reductiveness and potential for violence in these types of visual display (679). Klein’s portrayal of Jefferson here, as an unthinking white man who recorded Hemings as empirical evidence, to be charted and claimed as such, is emblematic of the central question of her piece: how can we visualize without appropriation, acknowledge incompleteness, and, to paraphrase Marcus and Best, let ghosts be ghosts without claiming them for our own purposes or meanings?

Evoking Stephen Ramsay’s idea of “deformance,” or the creative manipulation and interpretation of textual materials, Klein ultimately suggests that rendering Hemings in an act of visual deformance makes legible “possibilities of recognition” that the actual textual content of the Papers of Thomas Jefferson resists, while “expos[ing] the impossibilities of recognition—and of cognition—that remain essential to our understanding of the archive of slavery” in contemporary studies (682).

Provocations

When confronted with archival ghosts, Klein seems to suggest that the best policy is: illuminate, not explicate. How do you negotiate the difference between these two words, and can you share with us the ways it influences your pedagogy and scholarship?

Is there ever truly a safe way to visualize data, particularly regarding people and especially those who have been silenced, ghosted, or violated, in a way that rhetorically privileges stories and narrative over names and numbers?

To what extent does digital technology provide solutions of access for archival materials, but at the same time reproduce power structures that perpetuate silences? Can digital technology increasingly address this question through innovation, or is this a question of institutional change?

Klein’s argument regarding silences in digital archives seems to address the question of mark-up and encoding, whose granularity is often determined by institutional funding. In a recent conversation, Erin Glass (of Social Paper, an amazing platform for student-centered writing that you should check out!) and I noted that the first invisible document of any archive, institution, or project is often a grant. This document lays out the rationale, timeline, and required resources that shape the development of the project, but it is rarely discussed once secured, and is often invisible except in gestures toward sponsorship or funding. ROTUNDA is part of the University of Virginia Press, but its digitization work is funded through grants. It is likely that decisions about encoding granularity were built into the grant itself and into the time requirements of the project.

So, at the roots of the process of creating digital archives, how might we conceive of the entire process, from grant onwards, as a new space to intervene in inclusive, even collaborative, editing processes that produce richer metadata? Does this help address archival silences, or instead offer more opportunities to reproduce them?

Because I am Trying to Conceptualize Leaves of Grass as a Database…

Ed Folsom’s semi-anecdotal opening to “Database as Genre: The Epic Transformation of Archives” took me back to the late ’80s and early ’90s. My parents, in an attempt to find economic solutions to grocery shopping for a family of nine, frequented the generic detergent, cold cereal, hot cereal, and toiletries sections of the grocery store. I was conditioned to avoid the bright colorful pictures, turning my gaze instead to the black background with the white Times New Roman printing of “Toasted Oats.”

Folsom’s start—an opening frustration with the abundance of lifelessness in the realm of the generic—is a smart preface to his discussion of Walt Whitman and genre. Whitman, even in his labeling, defied the laws of genre as he teased the boundaries of poetry, prose, and everything near or in between. This is no surprise when one considers how Whitman’s writing, if not his very existence, tore at the seams of the very fabric of sexual identity and philosophical thought. He was somewhere between transcendentalism and realism, somewhere between fifty shades of sexual orientation, and somewhere between anti-slavery and white supremacy. Whitman was not one to easily follow a prescribed agenda, and Folsom speaks to how this plays out in Whitman’s description of genre: “peculiar to that person, period, or place—not universal” (1572). Whitman was frustrated with the narrowness, the lack of transport-friendly-interconnectedness that comes along with genre. He did not want to be placed in a box, and Folsom is suggesting that the reason behind his refusal was a lack of options.

Recognizing this “ongoing battle with genre” (1572), Folsom offers up the database as the best description of Whitman’s work. He credits Lev Manovich with introducing the conceptualization of the database as genre, and he adds to the conversation by asserting that for Whitman, “the world was a kind of preelectronic database” (1574). He supports this claim by referring to Whitman’s multiple edits, last-minute edits, antebellum and postbellum coverage, and strategic posting of lines from poetry as markers or code within the text. This problematizing of Whitman as database then leads to a conversation about archive vs. database. Seeking to separate Derrida’s concept of “archive fever” from the database, Folsom contends that the archive is much more associated with physical space, the actual housing of artifacts, whereas the database is more a digital linking of information concerning a particular subject or combination of subjects. He establishes the database as a new genre, one that can make a fitting generic home for Whitman’s works.

Provocation:
To be completely honest, I struggled with this piece. At times I jumped in, ready to find a place for Whitman, willing to re-embrace him as low-tech visionary and genius. And then there were times when my spidey senses tingled: How dare he box the unboxed Whitman? Why must “archive” exist in such limited terms? Being mindful of these tensions, I pose three questions. As with my previous provocation, feel free to respond to one or none of the following:

1) How do you think Whitman would respond to Folsom’s reading of his work?
2) Given our readings this week and last week, what do you think of Ed Folsom’s description of “archive” and “database”? Would you reframe them?
3) What does Folsom’s act of naming database as a genre do for the field of the humanities? What is its effect?

Citation:
Folsom, Ed. “Database as Genre: The Epic Transformation of Archives.” PMLA 122.5 (2007): 1571-79. Print.

Lev Manovich, The Language of New Media: “The Forms”

[Image: Minecraft Creeper novelty wallet]

In this selection from The Language of New Media, Lev Manovich observes shifts in visual culture and their underlying organization. To begin, he sketches a portrait of New York web development in 1999. He observes the iconographic migrations of browser buttons to wallets and of filing cabinets to computer icons to illustrate the cross-pollination of “virtual” forms, tracing the movements of cultural metaphor: those grafted into computer practices and those conceptualizations based on computers. Manovich goes on to distinguish and blur the computer database and 3-D virtual space as arenas of work and fun in computers. He refers to two of Janet Murray’s four essential properties of digital environments, the encyclopedic and the spatial, to elaborate the aims of new media design, and draws attention to the “opposition characteristic of new media — between action and representation” (Manovich 216). His call for “info-aesthetics” corresponds with much of his art; he considers data the new medium, as film and photography once were. Take, for example, his Timeline.

Introducing the database as “the key form of cultural expression of the modern age,” Manovich traces a theoretical descent from Panofsky’s art-historical description of perspective, to Lyotard’s cultural-theoretical Postmodern Condition, to Berners-Lee’s computer-science proposal of the World Wide Web (218-19). Threading together these disciplinary developments, he demonstrates the broadly strewn, networked fields of cultural productivity. The refresh-append-amend nature of the Web, he contends, lends itself to organization by collection rather than by completed narratives. Apparent narratives, i.e., computer games, depend on players reverse-learning algorithms. Thus the “ontology of the world according to computers” is reduced to data structures and algorithms (223). Describing the complementary nature of database and algorithm, he shows how the map of our information is greater than the territory: our indices eclipse our information. Positing database in contrast to narrative, he addresses how our meaning-making shifts accordingly.

He goes on to describe the structure of new media in semiotic, structuralist terms (following Barthes), contending that language-like sequencing is a holdover from the cinema. Manovich’s frame-by-frame sequence of cinema, as differentiated from all-images-at-once spatialized visual culture, does not entirely hold up, especially with today’s view-as-you-please, stop-and-go, on-demand video media. I wonder if the database articulation Manovich extols in Whitney’s Catalog really changed the course of how we perceive visual culture. The effects Whitney developed certainly contributed to the visual amplifications made by computers, but do they really mark any sort of break in the database/narrative tension?

Manovich seems to suggest that chronological linearity is narrative, and that artists trying to undermine it are attempting to express the database — or all options at once. He considers Peter Greenaway a prominent “database filmmaker” (239).

[Video: excerpt from Peter Greenaway’s The Falls, 1982]

I am not sure that these catalogs of effects achieve the non-narrative. There are certainly differences, but do these assemblages constitute paradigm over syntagm?


Provocation:

What would a radical break from narrative to database look like? Do those things which stubbornly persist through restructuration (Manovich citing Jameson) have something to them which is, dare I say, essentially human? Or are our formal expressions discrete, replaceable, and bound to evolve beyond recognition? Can the paradigm, the vast array of associations, truly be manifested in the database, if we as readers still depend on syntagms (what the screen or interface can render)?

Citation:

Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2001.


[Image: Etsy floppy disk notebook]

Cohen on Data Mining

http://www.dlib.org/dlib/march06/cohen/03cohen.html

Cohen argues that computational methods for analyzing, manipulating, and retrieving data from large corpora will provide new tools for academic research, including in the humanities. He provides two examples, both projects he worked on. Syllabus Finder, a document classification tool for aggregating and searching course syllabi, finds and collects documents that show similar patterns in their use of words; it can also differentiate documents that share the same keywords by analyzing their use of other words. The other example is H-Bot, a question-answering tool that takes queries in natural language (instead of code), transforms each query using predetermined rules, and conducts a web search before outputting the answer the tool decides is relevant.
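To make the document-similarity idea concrete, here is a minimal sketch of how one might score documents by their overall patterns of word use rather than by shared keywords alone. This is my own illustration using TF-IDF vectors and cosine similarity, not Cohen’s actual implementation:

    # Sketch: compare documents by similarity of word-use patterns,
    # in the spirit of Syllabus Finder (illustrative, not Cohen's code).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Course syllabus: weekly readings, grading policy, office hours",
        "Syllabus for HIST 201: schedule of readings and assignments",
        "Recipe: mix flour and sugar, then bake for thirty minutes",
    ]

    # Weight each word by how distinctive it is across the collection.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

    # Documents with similar word-use patterns score close to 1.0.
    print(cosine_similarity(vectors[0], vectors))

The two syllabi end up close together because they share many distinctive words, while the recipe scores near zero; the same vector comparison is what lets such a tool tell apart documents that share a keyword but differ in the rest of their vocabulary.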

Lessons that Cohen learned while building these tools:

  • APIs are good
    • they offer possibilities for combining various resources (which facilitates the use of less rigorous but more accessible corpora)
    • third-party development can lead to unexpected and positive results
  • open resources are better than restricted ones (access makes up for quality)
  • large quantity can make up for quality

Just in case: an API is a way of making it easy for our software to get data (instead of our doing it manually) from another piece of software (usually on another computer, like a web server). The following is one of the more concise and less technical explanations I found online: https://www.quora.com/What-is-an-API
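As a rough sketch of what calling an API looks like in practice, the lines below ask a web server for collection data and print part of the structured (JSON) response. The endpoint URL and field names here are hypothetical placeholders, not any real institution’s API:

    # Sketch of a typical API call (endpoint and fields are made up).
    import requests

    response = requests.get(
        "https://api.example-library.org/v1/collection",  # hypothetical URL
        params={"q": "Walt Whitman", "format": "json"},
    )
    response.raise_for_status()  # stop here if the server reported an error

    # The server returns structured data instead of a human-oriented web page.
    for item in response.json().get("results", []):
        print(item.get("title"), item.get("date"))

The point is that the data arrives in a machine-readable form, so our software can recombine it with other resources, which is exactly the possibility Cohen highlights.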

Also, I feel that The Lexicon of DH workshop slides provide a good overview of the coming week’s theme.

So indeed, the use of APIs has become more common outside the IT field since 2006. The New York Public Library, the Cooper-Hewitt Museum, and the New York Times, among many others, provide APIs that allow access to their digital collections through software. MoMA provides its collection data on GitHub.

The technologies used for document searching and question answering, the two examples that Cohen provides, have developed into something arguably more reliable, faster, and easier to use. For example, we don’t even need to build a tool in order to be able to ask some questions in natural language:

[Image: We'll remember you, H-Bot.]

Relating back to the discussions of previous weeks, what do you think are the impacts or implications of the increase in digital collections and APIs, along with developments in data collection and analysis technologies, for teaching (or for broader aspects of life and research)? How does this fit together with more traditional modes of teaching, like textbooks?

Another question I have relates to the fact that both examples mentioned in the article are no longer functioning. The latest update on Syllabus Finder that I could find explains that a system change in the Google search API effectively deprecated the tool; it also provides a download link to the database of syllabi—but only a small part of it. H-Bot is online, but sadly doesn’t seem able to answer me:

[Image: Oh, H-Bot.]

I can easily imagine the difficulties of maintaining such a digital project, and I am under the impression that eventually becoming outdated is the fate of many digital projects. They require a different type of effort than, say, putting out journal articles: maintenance requires manpower, and manpower requires funds. I also have the ambivalent feeling that it may not necessarily be a bad thing for some projects to finish their life cycle, though it would be great if those projects were archived somehow (in a functioning state). I guess I feel more personally involved since I will probably build something or other during my time here; I would love to hear your thoughts on this matter.


On McGann (and a little bit on Weinberg)

I had some help deconstructing the McGann text, as I was not familiar with the various methodologies/typologies of editing presented in “The Rationale of Hypertext” or their significance to scholarship. So I will begin by presenting that as background for discussing the direction of academic writing and our work as students and educators.

There are various types of editions, each serving a different purpose in the publishing world (see http://isites.harvard.edu/fs/docs/icb.topic453618.files/Central/editions/edition_types.html#diplomatic_edition and http://www.ualberta.ca/~sreimer/ms-course/course/editns.htm):

  1. Diplomatic editions
  2. Eclectic editions
  3. Facsimile editions
  4. Critical editions
  5. Parallel-text editions
  6. Hypertext editions

The basic premise of these different editions is how the editor/publisher seeks to present the author’s work alongside the process of editing and alteration it goes through over time. These different types of editions are mechanisms of reading that reflect the interests, concerns, and needs of a literary readership. Or as McGann explains, “[s]cholarly editions comprise the most fundamental tools in literary studies. Their development came in response to the complexity of literary works, especially those that have evolved through a long historical process.” A critical edition, for example, will try to present the most “authentic” edition, the one closest to the author’s original intent, by comparing various editions and pieces of information and collating them into something that most resembles the author’s original work. Parallel-text editions, on the other hand, provide multiple versions/iterations of the work alongside one another.

What McGann is most concerned with in this text is highlighting the limitations of the various types of editions embedded in codex form (books), and the capacity of hypertextuality to subsume all the practices of the codex editions (1 through 6) and open up scholarship to more possibility than is allowed or possible in book form. In essence, he asserts that hypertext/hyper-editing is a vastly different “set of scholarly tools” that can offer a different way of doing scholarship. In other words, he argues that the technology of the book is antiquated (in certain spheres and cases), and argues for a different type of textuality that is layered, complex, multimodal, dynamic, and responsive.

I was put off a bit by this rather strict denouncement of book/codex technology (how quickly we condemn “old” technology when a newer and hotter thing comes along). Nevertheless, as I read both his piece and a critical reception of it (http://www.jpwalter.com/cyber-rhetoric/archives/449), I found validity in the claims he makes: that we can leverage hypertext (as opposed to codex) textuality in developing and evolving different forms of scholarship (and writing, reading, researching, and learning). However, the transition from one technology to another is not so smooth and not so simple. A lot of rhetoric around education and technology has hastened the process, and in doing so has cheated students and educators out of the real potential for technological change in how learning and scholarship can happen. What Thomas S. Kuhn’s The Structure of Scientific Revolutions tells us is that these quantum leaps in techno-scientific invention (or revolutions) push society not forward along some linear trajectory of “progress” but out of one epistemic paradigm into another. The mental model by which we come to know the world, in a sense, becomes radically different. As such, simply copy-pasting your five-paragraph essay into a WordPress blog does not a Digital Humanities project make. What McGann was trying to get across, from what I can gather, is that hypertextuality is a different technology rooted in the capacity for a different form of scholarship, one divorced from the logic of codex technology. What this may look like is something I am personally still exploring and grappling with in my own thinking and work. As students and scholars conducting research and writing essays, how could we leverage hypertextuality in teaching and learning? I am very interested in your thoughts.

On the Weinberg piece that we almost read: I see a lot of potential for teaching historical thinking through hypertextuality. In Spring 2015 I piloted a course on the Great Migration that utilized counter-narratives as a critical lens through which to understand contemporary issues and experiences vis-à-vis exploring the conditions that the black community faced at the abolition of slavery. Specifically, examining the mental framework of the South as blacks and whites alike tried to navigate the social, cultural, economic, and political/legal implications of a newly freed population became the focus of reading historical narratives. The counter-ness of the counter-narrative came from juxtaposing popular stereotypes and issues in contemporary society with developments in race relations during the sixty-year movement. In addition, the question of historical texts and narratives was braided throughout our class discussions. We talked about what we read, but we also talked about coming to this information for the first time (for the majority of the class, who were youth of color); we talked about how history was taught in their personal experiences at school; and we talked about the ways that narratives shape our worldview. What also emerged was a musing on the concept of leaderless movements, and an African-American/Black history that did not center the major figures (MLK, Malcolm X, etc.) but rather focused on the daily experiences of regular folk as they grappled with whether to stay or leave, and navigated a world that offered both potential for progress and more fear. Hypertextuality offers a way of writing about experiences that could potentially braid in several narratives (in a way similar to parallel text), offer critical annotation through a close and reflexive process of historical reading/thinking, and embed beneath the text more and more information and ideas, as though the practice of reading, writing, and research involves mining iterations of the truth and layers of voices that provide a more complex, nuanced, and probably messy “text.” My second provocation, then, is: how does historical thinking matter to you, and in what ways can you see hypertextuality playing a role in your work?

To re-cap: Two provocations-

  1. As students and scholars conducting research and writing essays, how could we leverage hypertextuality in teaching and learning?
  2. How does historical thinking matter to you, and in what ways can you see hypertextuality playing a role in your work?

“Experience and Education,” or balancing social and individual knowledge

In “Experience and Education,” John Dewey describes a balance between the personal nature of learning and the importance of acquiring knowledge in an organized manner. According to Dewey, this problem requires “a well thought-out philosophy of the social factors that operate in the constitution of individual experience” (p. 7). In many ways, his philosophy is much more theoretically pragmatic than it is readily applied in practice. This can make Dewey seem unapproachable to some, while very profound to others.

Dewey describes the significance of organizing material and experience in a way that progressively builds upon itself; the structure of an authority that does not facilitate learning and experience in such a manner is thus in question. While reading, I considered the arrangement of more traditional classrooms, which entailed a dyadic teacher-student relationship and did not necessarily account for the dynamic construction of knowledge within the classroom through the direct and vicarious interactions of teachers and students, students and students, and of teachers and students with others outside of the classroom. This traditional classroom dynamic seems to have arisen out of the Common Schools Movement, for which Horace Mann is famous, which sought to equalize education for everyone. At the time Dewey was writing, an education considered equal was likely one in which a teacher, serving as an authority, had control over the learning experience of the students within the classroom.

In some ways, Dewey’s ideas seem like a philosophical reversion to more organic forms of learning, like those of an apprenticeship or mentorship. In a model of education that can be progressive by maximizing the personal aspects of learning and experience while also providing a social structure of knowledge, Dewey describes three phases that are essential for creating knowledge through observation and judgment. The first phase involves observation of certain conditions; the second involves a recollection of the past; and the third involves a judgment that puts together what has been observed and what is recalled, and how these two experiences relate.

Provocation:

  1. What roles does a teacher have in terms of facilitating the phases of observation and judgment-formation described by Dewey in chapter 6, The Meaning of Purpose? How does the definition of purpose in this chapter relate to these phases of knowledge construction? Who should assume the primary role of shaping the purpose of learning?
  2. To what extent should the teacher be responsible for transmitting cultural knowledge? In chapter 7, Progressive Organization of Subject Matter, Dewey appears to grapple with this issue in the last paragraph on page 33. Though he suggests that adequate knowledge of how systems have arisen can be used to counter their problems, he appears to remain impartial on the teaching of the histories of social systems. He writes,

“On the one hand, there [will] be reactionaries that claim that the main, if not the sole, business of education is transmission of the cultural heritage. On the other hand, there will be those who hold that we should ignore the past and deal only with the present and future.”

What role does knowledge of history play in our systems of education? Which side do you support: transmit or ignore? Is it possible to reconcile the two?

Reference

Dewey, J. (1998). Experience and Education. Kappa Delta Pi.

Thinking through technology & learning: Bass’s Engines of Inquiry

The imaginary/conceptual “game of perfect information” holds that, with the right setup, computers can satisfy all our informational needs. When the language of this game enters into the conversation about technology and education, the conversation goes awry. According to Bass, when attempting to discern the impact of technology on learning, we must consider (a) how teaching/learning is a complex process that occurs and builds knowledge over time, and (b) how learning contexts must be analyzed ecologically, with the understanding that learning does not happen in one place, one way, or via one device or method.

Before considering technology, instructors may need to take a step back and ask basic questions about their own teaching. From these considerations, we can ask: “what aspects of good teaching, and contexts of good learning, do particular technologies serve well?” Rather than being an add-on to our pedagogy, technology can act as a medium for our own pedagogical goals and aspirations. According to Bass, just as our questions as scholars drive our desire to learn, students often engage and learn the most when driven by questions that interest them. Questioning our motivations to learn and our pedagogy allows us to better assess the role that technology can play in facilitating and energizing our students’ engines of inquiry.

According to Bass, technology can help facilitate six aspects of quality learning: distributive learning, authentic tasks, dialogic learning, public accountability, and reflective and critical thinking. With increased access to information, responsibility for knowledge creation can be distributed. Students are able to engage deeply with rich, diverse, and expansive resources via tech platforms and digital mediums. Technologies can open up lines of communication, leveling discussion and participation and making them less high-stakes and more democratic. Digital spaces allow for small-group interaction, collaborative writing, and active reading where students can go at their own pace and draw their own connections (which they could later share with others in the space). Often some or all of these spaces are public; students can be held accountable and often take their work more seriously. And if instructors desire that their students begin to think reflectively and critically, they must often begin by reflecting on their own teaching structures and habits.

Integrating technology into a course may reshape its overall structure, requiring a reconsideration of location, course architecture, and assessment possibilities. Courses have always had multiple learning spaces; in the past these have typically been defined as the classroom and elsewhere. Thoughtfully integrating technology into pedagogy requires a re-imagining and deeper conceptualization of ‘elsewhere’. Technologies allow instructors to choose and define these new engagement spaces and promote quality learning within them, and they can coherently and easily connect these spaces to foster deeper engagement and communication. Connecting these spaces may give students a better understanding of how different aspects of the course come together, and technologies can help connect concepts, integrate new viewpoints and resources, and allow students to develop their own constructive projects connected to the course.

Reimagining course structure rests on the assumption that the “course” should be an independent unit with specific goals. But if we are reimagining the structure and practice of courses, why stop there? Course, disciplinary, and institutional boundaries often divide people, ideas, and applicable skills. When rethinking pedagogy and how technology can support our teaching, it might be fruitful to use the intersection between tech and pedagogy to rethink how higher education functions to produce a well-rounded, proficient graduate.

This raises the question: in 2015, how do we define the well-rounded, proficient graduate? A person who can get a job? A person who has transferable skills? A digitally literate person? Someone who has found a passion? Fights for a cause? Our answers to these questions are both ideological and pedagogical. If our main goal in teaching is to help our students get a job, do we only reinforce the capitalistic structures that often oppress and dominate the very students we teach? Can certain pedagogies allow us to prepare our students for the workforce while also providing them with the vision and tools to resist oppressive and dominant forces?

Reading Bass, at some points I wondered if his view of technology was too utopian. For example, yes, technology can help level communication and open up dialogue; but I have also encountered students who resist any type of online discussion or engagement. And yes, public accountability can be beneficial, but it can also put students at risk if they hold radical views or feel pressured to conform to the status quo. In the end, though, I think this is where Bass’s question regarding how technologies can serve good teaching becomes most salient. How do we choose the technologies that best support our pedagogy? What questions can we ask ourselves to be sure that a technology works with our pedagogical needs and goals? And, if attempting to break down arbitrary disciplinary and institutional boundaries, what types of knowledge and skills would we want our students to develop in order to have coherent experiences across various courses?