Category Archives: Readings

Week 9

I enjoyed reading these brief articles; they raise interesting points and call attention to important issues in this field.

First, the natural debate between the “I code, you code, we code…Why Code?” and “Learn to Code; Learn Code Culture” articles. The idea that coding is a skill that must be learned for the legitimacy of the work has apparently inspired some debate. Personally, I believe that the Luddite view, to use the author’s words (that not knowing everything about coding is okay), represents the minimum involvement necessary. A digital humanist needs, at the very least, to understand the concept, and enough of the technical side to stay properly informed about new practices and developments in the field. Not that everyone must learn everything there will ever be to know, but I also believe that learning as much of the technical skill and culture as possible can only be beneficial. The technical part of digital projects can only be limited, its effectiveness stunted, by limited understanding. Collaboration is and will remain an important aspect of the digital humanities, but expanding the knowledge and skill base of each individual can only help the field, not hurt it.

Now, the “Some things to think about before you exhort everyone to code” article proves exactly why learning the culture of coding is important. Based on my very limited involvement with CIS people, I had no reason to believe that there was such a large gender gap in the world of coding. In conversations and discussions since reading this article, I’ve learned that my experience is atypical. As strange as it sounds, I wouldn’t have known that; it was necessary to look beyond the purely technical readings so that I might understand the world those technical skills are used in. And it suggests a possible future study: women’s digital history, an examination of the gender gap in the world of coding in general or in the digital humanities specifically.

Week 9 Readings

These readings constituted a peek into coding culture as experienced by digital humanists.

Ghajar

It seems Ghajar’s main point is that historians are always going to want context in relation to their digital projects. Just learning to code isn’t a destination in itself, but the historian who wants to be involved in the digital humanities will need to know the why of the tools, not just the how. Why should we be able to design a database, or write code for a project? What are the possibilities in collaboration with web developers, and why would we collaborate in the first place? Why does learning to code affect our methodologies, or become a new methodology to use?  I always say, if I understand why, it helps me understand how.

Widner

I think Widner wants people to understand that there is a culture which goes along with coding.  Coding is a language with its own ideals and lifestyle, just like any foreign culture; it’s not just syntax, grammar and vocabulary.  There are also shortcuts and best practices that help digital humanists use their time productively if they are involved in coding, so it pays to understand the culture.  In our class discussion we agreed that it made sense for liberal arts people to be familiar with technology and digital culture, because everyone else must be familiar with the humanities as a core part of education.  So coding literacy would be one way for liberal arts people and humanists to get familiar with digital technology and culture.  I think that is a great idea and would go a long way towards bringing together science and the liberal arts, which we have blogged about before.

Posner

Posner addressed the issue of exhorting women to code.  She agreed that coding is a good skill to have, but she also talked about the reality of a coding culture made up of “middle-class white men” who have had greater access to technology, and for far longer, than women.  I don’t know what else to say about it, but gender struggles in technology fields are a problem throughout society.  Posner really wants that part of technology culture to change, to become more welcoming to diversity.  Of course I agree.

Week Nine

Coding is a topic about which I have mixed emotions.  This set of articles explains the difficulty of coding as a historian and highlights the importance of embracing the “culture of coding” with a humanist stance in mind.  In terms of “learning code for the sake of coding,” Cafferata suggests that “As decontextualized rote response mechanisms, they are retrograde pedagogical steps in an era when critical thinking ought to be a hallmark of educational effectiveness.”  This seems important not just to coding, but to understanding the role that we, as budding digital historians, should play in developing our profession.

I have been guilty of the attitude expressed by Widner; I have considered digital history, and coding in particular, a “necessary means to an end,” and this has left me with a dislike of, and even a philosophical push against, the digital humanities.  We have been told we must code in order to save our profession.  What if we opened ourselves to using code for our own ends and not the pragmatic ends of the academy (which needs to increase its revenue and student body)?  What if we looked at it as fun and exploratory?

Week six- Meta to my MetaData

Kramer

This week’s readings were interesting as they explored the implications of digital history for the way we view our profession.  Kramer suggests that perhaps “all” that historians do is add meta to metadata.  While at first I found this insulting, eventually I saw the value of the label.  History adds information to and about the primary source documents.  What we do is analysis, and by placing the historian and the archivist together in this way, the act of “doing” history becomes more clearly defined and accessible.

The historiography is where Kramer runs into trouble.  I do not believe that the entire historiography of a source can be layered onto a primary source and still be understood.  Perhaps arguments and counterarguments could be layered into the system; for instance, full access for people who would find that meta useful, and common access for the average person who cannot, without training, understand the implications of what he/she is looking at.

Olsen and Argamon

Text mining is an interesting concept which seems filled with both problems and solutions.  When historians work with primary sources, they are limited in the number of texts they can use for any given project.  A computer is not human; it can sort through many documents much more quickly, which means faster work with a more complete body of sources.  This sounds like it would be extremely helpful in the humanities, but again, a computer is not human.

A computer cannot understand the text it reads; it cannot place value in circumstances, and it cannot feel the era it “reads.”  A computer only recognizes symbols and their recurrence in a particular set of data.  As Dr. Church said, it is difficult for the computer to mine symbols and metaphors in primary source documents.  This is a red flag for the use of text mining, because it is limited in scope; the historian can view patterns through it, but computers have a limited capacity.  Perhaps this is comforting.

The article suggests that one goal of text mining is to make very large data sets “manageable and meaningful.”  In this pursuit, I believe that text mining provides the ability to organize and work with a large set of primary source documents; however, it does not provide meaning.  Meaning comes from the human ability to grapple with data.  Meaning comes from the interpretation of the historian.  Text mining can lead the professional to explore different research questions and broaden his/her ability to work with sources, but it cannot be relied upon to give meaning to sources.
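To make concrete what “recognizing symbols and their recurrence” amounts to, here is a minimal Python sketch of term counting, the basic building block of text mining. The tiny corpus and whitespace tokenization are invented for illustration; real projects use far richer pipelines, and, as argued above, the counts only become meaningful once a historian interprets them.

```python
from collections import Counter

def term_frequencies(documents):
    """Count how often each token recurs across a set of documents.

    The machine sees only symbols and their recurrence; it attaches
    no meaning to them -- interpretation stays with the historian.
    """
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return counts

# Hypothetical mini-corpus standing in for a large set of primary sources.
corpus = [
    "liberty and property and liberty",
    "property rights and liberty",
]
freqs = term_frequencies(corpus)
# freqs now records, e.g., that "liberty" recurs three times.
```

The pattern ("liberty" dominates this corpus) is visible to the machine; whether that pattern matters historically is not.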

Week 7

I thought that McDonough’s article for this week was very informative about a problem that is easy to overlook. Being new to XML, I had no idea that the language’s being designed with variability in mind makes interoperability difficult. It is an interesting point that the different needs of different library communities are creating social differences in the use, and the miscommunication, of XML-based tools.

I also felt that the author’s call for some sort of standardization runs a little counter to the entire philosophy of XML. As he says, “In the designers’ world view, a key benefit to XML is the freedom it provides users to define their own structure for documents and data, using their own semantics, and to escape restrictions software vendors might wish to impose on their users.” In this light, isn’t standardizing the language to some degree contrary to its design? This is definitely an interesting problem, as I don’t know a potential solution, or even if one is required.
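The interoperability problem can be made concrete with a toy sketch in Python. The two records and their schemas below are invented for illustration, not drawn from the article: two libraries describe the same book in different but equally valid XML, and a tool written against one schema silently fails on the other.

```python
import xml.etree.ElementTree as ET

# Two hypothetical libraries encode the same book with different,
# equally valid XML structures -- exactly the freedom the designers intended.
record_a = "<book><title>Common Sense</title><year>1776</year></book>"
record_b = "<item><name>Common Sense</name><date>1776</date></item>"

def title_from_a(xml_text):
    """A tool written against library A's schema."""
    # findtext returns None when the element is absent.
    return ET.fromstring(xml_text).findtext("title")

print(title_from_a(record_a))  # prints: Common Sense
print(title_from_a(record_b))  # prints: None -- the schemas don't interoperate
```

Both records are perfectly well-formed XML; the failure is social and semantic, not syntactic, which is the heart of McDonough’s argument.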

And maybe it’s the comic book lover in me, but I found Walsh’s article (look at all the superheroes!!!) to be very interesting. The idea that a language can be designed that is capable of examining documents with images and text is extremely beneficial for the future.

Reading the article, I kept thinking of the X-Men. As a comic book series, it is analogous to the American civil rights movement. This example demonstrates the literary value of such a markup language, but its use for exploring items such as tapestries or other historical documents is obvious. If a historical work makes heavy use of primary sources such as images or maps, an accurate digital rendition could easily live online, either by itself or as a complement to a monograph or other type of publication.

Week 7 Readings

The readings this week address digital preservation, digital ontology, and interoperability. Much of the information in these readings was somewhat abstract and technical, but I think I understand what is at stake.

McDonough

This article talks about the unintended consequences of  XML use in the digital library community, mainly why there are failures of interoperability between institutions, developers and other users of XML.  The whole idea of XML was to have flexibility within a generalized, standard mark-up language so that different programs could communicate within it.  What has happened, though, is not quite what the creators of XML envisioned.  McDonough says this is because of culture and social relationships between content creators and software vendors (par. 13), among other things.

His solution is to establish “common principles of structural encoding and standardized stylesheets for translation” (par. 36).  The main problem I see is that a standard, or universal, mark-up language is probably impossible.  We don’t have a standard, universal human language; we don’t have a standard, universal culture; and we don’t have standard, universal social relationships.  Human flexibility makes it hard to standardize the digital.

Kirschenbaum

We are again dealing with archives in this article.  The author tells a great story about preservation and possibility for research within the born-digital archives with his example of MITH.  His hook is the “.txtual condition.” He says the catechism of this condition is “Access is duplication, duplication is preservation, and preservation is creation – and recreation”  (par. 16).  I took this to mean that archiving is all about making things ready for recall.

In the digital realm, an archivist has to “be able to identify and retrieve all its digital components,” as Kenneth Thibodeau puts it (par. 16).  What this means is that hardware and software may need to be physically available, or emulated, to access old digital information.  This is very important to a historian because records are primary sources.  Digital records will need to be accessible if future historians are to study the digital age.

Evens

This is a very abstract article about the abstraction of the digital world.  It is about binary code (0, 1) being behind everything digital.  We don’t think about that as we go about our digital lives, but it is real.  Evens says, “The digital has an ontology, a way of being, and products and processes generated through digital technologies bear traces of this way of being” (par. 9).  Later, Evens says it’s not just about technology; binary code has abstract, ideal and discrete characteristics that end up being expressed in human relations (par. 15).  I found this article difficult to digest, but I do think it would be an interesting area of study for philosophers.
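The claim that binary code sits behind everything digital can be seen in a couple of lines of Python: even ordinary text is, to the machine, nothing but bits. (The ASCII/Unicode encoding shown is one common convention for letters, chosen here for illustration; Evens does not specify an encoding.)

```python
# Every digital artifact ultimately reduces to sequences of 0s and 1s.
# Here each letter of "DH" is mapped to its Unicode code point and then
# rendered as the eight bits a machine actually stores.
text = "DH"
bits = " ".join(format(ord(ch), "08b") for ch in text)
print(bits)  # prints: 01000100 01001000
```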

Endless Possibilities

Just wanted to check in and see if anyone else loves CSS and HTML. There is so much we can do with them, and this is only the beginning. I think that learning web design has so much potential for the field of Public History. The “public” sees the world in these formats (i.e. web pages), so we should know how to show them history through the lens with which they are most comfortable. Accessibility is key here.

I do want to reiterate my earlier statements: I believe that this is the job of the Public Historian, not the Academic. We need trained professionals (digital historians) in academia to teach these skills for use by the public historian. It is easy to get lost in learning web design elements and lose what is important about what we are doing (or trying to do) as academics/professionals.

There is a great work that predates the digital revolution that is poignant to me as we work through these problems and you can access it here.

Now you may like this, Christina!

Week 6

Argamon & Olsen

This article does an excellent job highlighting the potential of computers as tools in the humanities. Similar to the Gibbs and Owens article, it shows the use of computers to help brainstorm and articulate research questions. One particular example from the article is the new connections discovered between Diderot and d’Alembert’s Encyclopédie and the Journal de Trévoux. With the discovery of new connections and historical questions to be asked, who’s to say that we won’t discover new connections in centuries-old texts, or even millennia-old texts?

It also seems to me that there may be potential in using machine learning to examine content or stylistic differences and similarities in unattributed historical writings. There are numerous texts written by that pesky anonymous person. Without machine learning, these writings may remain unattributed to any particular author and instead continue to be wondered about. Such tools may not be able to definitively tell us who wrote what, but they may be able to provide insight and provide us with new potential authors to theorize about. In large enough bodies of work from a similar time period, could such a tool help us discover if not one author, perhaps a style linked to an institution or an organization? Could such a discovery help to discuss and/or further develop certain hegemonic theories?
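As a rough illustration of that stylometric idea, here is a minimal Python sketch: compare an anonymous text’s function-word profile to those of known authors and report the nearest match. The word list, sample texts, and cosine measure are my own simplifications for illustration, not a method from the article, and, as noted above, the result is a clue to theorize about, never a definitive attribution.

```python
import math
from collections import Counter

# Common "function words" carry stylistic signal because authors use them
# unconsciously and consistently; this short list is a toy simplification.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def closest_author(anonymous_text, known_texts):
    """Return the known author whose style profile is nearest to the
    anonymous text -- a lead for further research, not proof."""
    anon = profile(anonymous_text)
    return max(known_texts,
               key=lambda author: cosine(anon, profile(known_texts[author])))
```

A real study would use hundreds of function words and long texts; the structure of the comparison, though, is exactly this.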

One other brief point (at the risk of beating a dead horse…with an already-dead horse) that Argamon & Olsen are careful to make: new technology can lead to new interpretations of the past; however, any new interpretations must still be researched and developed by professional historians, as context must be acknowledged, especially in new historical connections.

Kramer

I agree with Kramer’s argument that now, during the emergence of the digitization of the humanities, is the time to reconsider how we think about and approach the interpretation of history. Not that we have to completely change and revolutionize the practice of history, but an incorporation of a new way of thinking could definitely freshen up the field and the methods used to define the field. Regarding the increasing significance of the digital archive, thinking of digital and physical archives as a historiography in and of themselves is an important step to building a dynamic fluidity between not only primary and secondary sources, as Kramer asserts, but also between scholarly works of history and public history as well.

Week 6 Readings

Machine Learning, Text Mining and Archives

Argamon and Olsen

This article from 2009 introduces the challenge humanities scholars face in dealing with voluminous digital/digitized sources, or the “global digital library.”  The authors believe text mining is complementary to traditional textual analysis.  They used predictive text mining, comparative text mining, and clustering/similarity analysis on three different projects.  In two cases, the digital tools supported the original, traditional scholarly conclusions, and one case found new connections previously unnoticed by traditional scholarly methods.
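The similarity analysis the authors mention can be illustrated with a toy Python sketch using vocabulary overlap (Jaccard similarity). The mini-corpus below is invented for illustration; real comparative text mining uses far richer statistical models, but pairwise similarity of this kind is the building block that clustering elaborates on.

```python
def jaccard(doc1, doc2):
    """Vocabulary overlap between two documents (0 = disjoint, 1 = identical)."""
    a, b = set(doc1.lower().split()), set(doc2.lower().split())
    return len(a & b) / len(a | b)

# Hypothetical mini-corpus: which article is a candidate "new connection"
# to the query text?
query = "philosophy of the encyclopedia"
articles = {
    "jesuit_review": "a review of the encyclopedia and its philosophy",
    "farming_notes": "crop rotation and soil drainage",
}
best = max(articles, key=lambda name: jaccard(query, articles[name]))
# best -> "jesuit_review": the review shares vocabulary with the query.
```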

I thought the warnings were interesting.  You could create the results you want to find based on the construction of the task itself, and there is anxiety inherent in doing criticism by algorithm.   I think these warnings are born of normal professional concerns for historians, whether digital tools are used or not.  I mean, is the evidence there, or are we forcing the evidence to fit our argument?

Kramer

I did not know there were any issues between historians and archivists.  Kramer’s article makes it sound like historians take archivists for granted, and that historians think the archive is for their use alone.  I don’t like the sound of that, but the article is from summer 2014, so apparently it’s a thing.  I agree with his point that digitization will bring the archival and history professions closer together.  I like the idea of digital historiography and all the possibilities that digitization opens up for historical transparency, accessibility and openness.  I think historians and archivists all need to reconsider their relevance in the digital age, so this kind of discussion makes sense.

The history of historical inquiry could become much more interesting than reading traditional historiography.  To follow intellectual journeys in the digital world sounds more fun than reading dry bibliographies to me.  I think this is what Kramer meant when he said we could dynamically link primary sources and their subsequent interpretations.

The “hegefox”

This article was much more realistic than the others we have read and asks questions similar to those that have cropped up in class.  Obviously the fields of history, the humanities, and academia are changing as a result of the digital era.  This may not always be for the better, but historians, and humanities professionals in general, have an opportunity and a responsibility to preserve what is traditionally important while taking on digital tools.

The term “hegefox” was discussed, and I think it perfectly describes the conundrum and perhaps the solution.  Leavis in the ’60s, and the author now, speculated on how we, the professionals, could deliberate over new technologies and evaluate them properly: how the humanities could thrive in a new environment without losing the precious pillars of our profession and/or becoming obsolete.  The idea of the hegefox was his suggestion: we must preserve traditional methodology while also incorporating new digital tools.

The author presents the same question I have been asking myself: “…to what extent will those of us who care about the humanities be allowed to fret about the present state and future of our disciplines in the same way that Leavis wondered about what can and should be done, without being considered ‘highbrow,’ elitist snobs?”  It is just as important, I would argue, for the hedgehog to respect the fox as it is for the fox to respect the hedgehog during this debate.  Those who want to dive head first into quantitative research and digital analysis without considering the importance of traditional methodology do not allow for the evaluation necessary for viable and long-lasting solutions.

I wonder if the digital age, because of the fast pace at which it evolves, produces a moment in which we must choose.  Instead of being able to evaluate and progress, will the humanities have to let go of the past and embrace a future that steps further and further away from the “human” element?  Social media, as mentioned at the beginning of the article, has driven people both closer to and further from one another; we may be losing part of our human connection.  Is this what will happen to the humanities?

This article made me look closely at the database work that we have been doing.  How did people feel about the primary sources after working with them in a database?  I saw that it would be possible to come to new conclusions, draw correlations I may not have seen before, and work with large loads of data; but I also felt distant from the subject matter, the period of time, and my own connection with the material.  Maybe because this method is new to me? Maybe not.