At the Bard Graduate Center, where I run the Digital Media Lab, we have been experimenting with a wide array of digital applications for the study of material culture. We have supported thesis-level born-digital projects, are working with digital interactives in gallery spaces, are using wikis as course software, have an NEH-funded startup grant for a multi-year faculty project, and are experimenting with 3D printing and scanning and the question of how to represent materiality in a digital age. I would like to propose a session for anyone interested in the use of digital technology in the study of material culture, the epistemological questions that digital technology raises about material culture, 3D printing and scanning, museums and technology, or any other related questions. Some people at TCNY may have attended the THATCamp Museums NYC that we hosted in the spring, and I see this session as a possible bridge between the two events.
I’d like to have a working session to pick the brains of other campers for ideas on overhauling NYU’s Literature, Arts, and Medicine database. The database holds thousands of annotations on works of literature, art, theater, and film that relate to the practice of medicine, and it’s very much in need of an overhaul. Specifically, the database is still stuck in a pre-social-web format, and I’d like to discuss ways to integrate and engage user participation, moving the site away from a “static repository of information” model and toward an active, dynamic, collaborative platform for showcasing the combined efforts of humanists, medical professionals, artists, and developers. I’d also like to discuss sustainability: how to make the database a self-supporting entity that doesn’t rely on grant funds. I would love to hear from anyone who is or has been involved with similar projects, or who knows of someone doing similar work that I might be able to learn from.
For THATCamp this Saturday, I would like to have a general discussion about the relationship between the digital humanities and scholarly communication, and about the different forms of online scholarly communication, such as MediaCommons Press and hypertext. This week in my DH class, Kathleen Fitzpatrick, director of scholarly communication at the MLA and author of the DH book Planned Obsolescence, discussed online peer review, blogging, and how these changes are affecting professors seeking tenure. It was interesting and very insightful.
I would like campers to share and discuss the different software and platforms currently available for mapping and timelines. A quick starting list (please add more in the comments): for timelines, Simile, Chronos, and Verite; for mapping (and timelines as well), Omeka, Viewshare, and Historypin.
What are the strengths and weaknesses of these platforms (perhaps with an emphasis on the level of programming/technology knowledge required)? I personally use Omeka for my site, but my experience there has led me to what I hope is a good segue into debating DH skill sets. I do not mean to rehash the longstanding debate, but possible questions include: When does programming knowledge become a gatekeeper, limiting the implementation of an idea or research project? Do advanced programming skills reduce, or even eliminate, the need to collaborate with those who have them? If someone does not have these skills, what can the DH community do (and what is it already doing) to help people acquire them? Is a workshop of an hour, or even several hours, enough to acquire them? Are W3Schools or Codecademy useful self-teaching platforms? Are funding issues complicating big ideas for addressing these questions?
I realize this panel proposal slightly overlaps with other proposals and may be better suited to two different sessions, perhaps leading to workshops led by experts in attendance. I look forward to hearing more ideas in the comments section.
Many wikis, including Wikipedia, encourage user-centered curation through a combination of discussion and notification: Talk pages and Watching. Talk pages are forums affiliated with each content page, where users can defend or question posted information, providing both a history of how the wiki page arrived at its present form and a rationale for why it should or shouldn’t change in the future. When Watching is enabled, users can opt to receive email or RSS notifications whenever specified pages are modified; in this way users can help to maintain the quality of content for which they have the greatest expertise.
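The Watching mechanism described above is essentially a publish/subscribe pattern: users register interest in pages, and page edits generate notifications to registered watchers. Here is a minimal sketch of that pattern; all class and method names (`WatchList`, `watch`, `notify`) are invented for illustration and do not correspond to MediaWiki’s or Drupal’s actual APIs.

```python
# Minimal sketch of wiki-style "watching": users subscribe to pages and
# collect notifications whenever a watched page changes. A real wiki
# would deliver these by email or RSS; names here are illustrative only.
from collections import defaultdict

class WatchList:
    def __init__(self):
        # page title -> set of usernames watching that page
        self._watchers = defaultdict(set)

    def watch(self, user, page):
        self._watchers[page].add(user)

    def unwatch(self, user, page):
        self._watchers[page].discard(user)

    def notify(self, page, summary):
        """Return the (user, message) notifications a change to `page`
        would generate, one per watcher."""
        return [(user, f"{page} changed: {summary}")
                for user in sorted(self._watchers[page])]

wl = WatchList()
wl.watch("alice", "Talk:Material Culture")
wl.watch("bob", "Talk:Material Culture")
msgs = wl.notify("Talk:Material Culture", "added a counterargument")
```

In a CMS, the `notify` step would typically be triggered by a save hook on the content item rather than called directly.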
In this session, I’d like to work on implementing these features within a Drupal 7 site, using modules such as Rules and QuickTabs, to enable watching and tab-separated comments at the level of individual nodes.
I haven’t done this before, so any expert help is welcome, but so are any other intermediate (or even beginner) Drupal users who are interested in figuring it out alongside me.
I’ll be working with a development version of writingstudiestree.org, a D7-based crowdsourced academic genealogy of writing studies, composition and rhetoric.
I’d like to discuss building a central textual annotation website, which would allow for open, collaborative literary marginalia on a line-by-line or word-by-word basis.
I’d also like to talk about using a similar engine to allow for open, collaborative textual editing. In cases where a literary text has several flawed editions, creating a wiki-style collaborative editing platform might help to facilitate the creation of accurate, crowd-sourced digital editions.
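An annotation engine like the one proposed could start from a data model as simple as this sketch: each note anchors to a line (and optionally a word range within it), so multiple readers can layer marginalia on the same passage. The class and field names are my own assumptions, not an existing platform’s API.

```python
# Sketch of a data model for line-by-line collaborative marginalia.
# Each annotation anchors to a line number, and optionally a word span
# within that line, so notes from many readers can coexist on one text.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Annotation:
    author: str
    text: str
    line: int                                # 1-indexed line of the source text
    word_span: Optional[Tuple[int, int]] = None  # (start_word, end_word) in that line

class AnnotatedText:
    def __init__(self, lines: List[str]):
        self.lines = lines
        self.annotations: List[Annotation] = []

    def annotate(self, author, text, line, word_span=None):
        self.annotations.append(Annotation(author, text, line, word_span))

    def marginalia(self, line: int) -> List[Annotation]:
        """All annotations anchored to a given line, in posting order."""
        return [a for a in self.annotations if a.line == line]

poem = AnnotatedText(["April is the cruellest month, breeding",
                      "Lilacs out of the dead land, mixing"])
poem.annotate("reader1", "Inverts Chaucer's sweet April.", line=1)
poem.annotate("reader2", "Note the enjambment.", line=1, word_span=(6, 6))
notes = poem.marginalia(1)
```

For the collaborative-editing case, the same anchoring scheme could attach proposed emendations (rather than comments) to lines, with a revision history per anchor.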
If anyone’s interested: a general discussion about Wikipedia and libraries; using Wikipedia in classes (Campus Ambassador Program: en.wikipedia.org/wiki/Wikipedia:United_States_Education_Program/Courses/Present).
I’d like to have a discussion about which web platforms folks are using for publishing scholarly journals and other publications online, and what their successes and wish lists are within these applications. I’m thinking WordPress, OJS, and Ambra, plus any others. I’d also love to hear about any original scholarly publishing projects built outside of these, and the reasons for going off the grid.
Within this topic, some areas to consider: working with born-digital content versus print-to-digital conversion; developing for mobile consumption; incorporating rich metadata; and breaking away from traditional peer-review models.
I use Omeka as a major component of Creating Digital History, a graduate course for NYU’s Archives and Public History Program. Students in the course locate, digitize, and contribute digital items to the Greenwich Village History Digital Archive, learning how to create metadata, map their items, and create an exhibit on some aspect of Greenwich Village history. Some of the issues that have come up in using Omeka are:
- Tech skills versus history skills – The range of technical skills that students bring to the class varies greatly. Omeka works well out of the box, but to create customized exhibits, students need to know HTML, CSS, or PHP. Should we be attempting to teach that in addition to digital history skills and practices? How much emphasis should we put on learning technical skills in a history course?
- Enhancing exhibits – Because students are contributing to a group-created digital archive, they do not have much flexibility in how they enter metadata or how the archive appears. Where they do have creative license is in their exhibits, which they create on their own or in a self-selected team. Without programming skills, changing the look of an exhibit is not easy. I am working with a programmer to develop a theme that can be customized more easily, enabling user-defined fonts, colors, backgrounds, and navigation. I am interested in talking about the options students should have in exhibit layout, and in whether anyone has new ideas on how to structure exhibits within the section-and-page format that Omeka imposes.
- Structured versus unstructured tagging – The first two years that I taught the course, students tagged their items as they thought best, and the results were a mishmash of tags with little rhyme or reason. While working on the new theme, I decided that a controlled tag vocabulary would be more useful as a tool for searching the items and exhibits. This will be the first year we use the newer tags, and I’d be interested in talking with those who have built larger collections (ours has about 900 items after two years) about how they use tags.
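One lightweight way to move from free-form tags to a controlled vocabulary is to normalize student input against an approved term list before items are saved. The sketch below illustrates the idea only; the vocabulary, the synonym map, and the function name are invented examples, and Omeka would need a plugin or workflow step to do this, not this exact code.

```python
# Sketch of controlled-vocabulary tagging: free-form input is normalized
# (case, whitespace, known synonyms) and split into accepted terms versus
# rejects that need review. Vocabulary and synonyms are invented examples.
VOCAB = {"architecture", "immigration", "nightlife", "preservation"}
SYNONYMS = {"buildings": "architecture", "immigrants": "immigration"}

def normalize_tags(raw_tags):
    """Map user-entered tags onto the controlled vocabulary.
    Returns (accepted, rejected) lists of normalized terms."""
    accepted, rejected = [], []
    for tag in raw_tags:
        term = tag.strip().lower()
        term = SYNONYMS.get(term, term)   # fold known synonyms together
        (accepted if term in VOCAB else rejected).append(term)
    return accepted, rejected

ok, bad = normalize_tags(["Buildings", " nightlife", "bohemians"])
# ok  -> ["architecture", "nightlife"]
# bad -> ["bohemians"]
```

The rejected list is useful pedagogically: it shows which tags students reached for that the vocabulary does not yet cover, which can inform revisions to the term list.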
I am proposing a session for those creating new visualization tools, about how programmers can work with humanists to organize content in a way that responds to dynamic user interests. I am particularly interested in how to create narratives through the display of archival documents.
I have been working with a collaborative interdisciplinary team that includes a computer scientist, an interactive graphic designer, a historian, and a cartographer to design and develop a new digital tool called Mix D that visualizes historical journeys. The prototype (based on the actor Ira Aldridge) aims to create multiple narratives that will enhance the user’s understanding of a historical figure and go beyond a bare reading of the data. Some of the visualizations are prescribed (timelines and maps), while others depend on user interests.
Anita Gonzalez is a Professor of Theatre Arts at SUNY New Paltz.