Leave it to an archaeologist, but when I heard the CFP from Digital Humanities Now on ‘evaluating’ digital work, I immediately started thinking about typologies, about categorizing. If it is desirable to have criteria for evaluating DH work, then we should know roughly the different kinds of DH work, right? The criteria for determining ‘good’ or ‘relevant’, or other indications of value, will probably differ for different kinds of work.


In which case, I think there are at least two dimensions, though likely more, for creating typologies of DH work. The first – let’s call it the Owens dimension, in honour of Trevor’s post on the matter – extends along a continuum we could call ‘purpose’, from ‘discovery’ through to ‘justification’. In that vein, I was mulling over the different kinds of digital archaeological work a few days ago. I decided that the closer to ‘discovery’ the work was, the more it fell within the worldview of the digital humanities.


The other dimension concerns computing skill/knowledge, and its explication. There are lots of levels of skill in the digital humanities. Me, I can barely work Git or other command-line interventions, though I’m fairly useful at agent simulation in Netlogo. But it’s not those kinds of skills I’m thinking about here; rather, it’s how well we fill in the blanks for others. There is so much tacit knowledge in the digital world. Read any tutorial, and there’s always some little bit that the author has left out because, well, isn’t that obvious? Do I really need to tell you that? I’m afraid the answer is yes. “Good” work on this dimension is work that provides an abundance of detail about how the work was done, so that a complete neophyte can replicate it. This doesn’t mean that it has to be right there in the main body of the work – it could be in a detailed FAQ, a blog post, a stand-alone site, a post at Digital Humanities Q&A, whatever.


For instance, I’ve recently decided to start a project that uses Neatline. Having put together a couple of Omeka sites before, and having played around with adding plugins, I found that (for me) the documentation supporting Neatline is quite robust. Nevertheless, I became (and remain) stumped by the problem of running a geoserver to serve up my georectified historical maps. Over the course of a few days, I discovered that since Geoserver is Java-based, most website hosting companies charge a premium or a monthly fee to host it. Not only that, it needs Apache Tomcat installed on the server first, to act as a ‘container’. I eventually found a site – Openshift – that would host all of this for free (cost always being an issue for the one-man-band digital humanist!), but this required me to install Ruby and Git on my machine, then to clone the repository to my own computer, then to drop a WAR file (as nasty as it sounds) into the webapps folder (but which one? There are two separate webapps folders!), then to ‘commit, push’ everything back to Openshift. Then I found some tutorials that were explicitly about putting Geoserver on Openshift, so I followed them to the letter… turns out they’re out of date, and a lot can change online quite quickly.
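
For the record, the steps boil down to something like the following sketch. This is a sketch only: the app name and git URL are placeholders rather than the ones Openshift actually gives you, and which of the two webapps folders is the right target is exactly the part that stumped me.

    # clone the application repository that Openshift creates for you
    # (placeholder URL - yours will have your own app id and namespace)
    git clone ssh://appid@myapp-mynamespace.rhcloud.com/~/git/myapp.git
    cd myapp

    # drop the Geoserver WAR into the webapps folder
    # (assuming this is the right one of the two!)
    cp ~/Downloads/geoserver.war webapps/

    # commit and push everything back to Openshift,
    # which redeploys the app inside its Tomcat container
    git add webapps/geoserver.war
    git commit -m "add geoserver WAR"
    git push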


If you saw any of my tweets on Friday, you’ll appreciate how much time all of this took… and at the end of the day, still nothing to show for it (though I did manage to delete the default html). Incidentally, Steve from Openshift saw my tweets and is coaching me through things, but still…


So: an important axis for evaluating work in the digital humanities is explication. Since so much of what we do consists of linking together lots of disparate parts, we need to spell out how all the different bits fit together and what the neophyte needs to do to replicate what we’ve just done. (Incidentally, I’m not slagging the Neatline or Omeka folks; Wayne Graham and James Smithies have been brilliant in helping me out – thank you, gentlemen!) The Programming Historian has an interesting workflow in this regard. The piece that Scott, Ian, and I put together on topic modelling was reviewed by folks who were definitely in the digital humanities world, but not necessarily well-versed in the skills that topic modelling requires. Their reviews, going over our step-by-step instructions, pointed out the many, many places where we were blind to our assumptions about the target audience. If that tutorial has been useful to anyone, it’s entirely thanks to the reviewers, John Fink, Alan MacEachern, and Adam Crymble.


So, it’s late. But measure digital humanities work along these two axes, and I think you’ll get useful clusters with which to further ‘evaluate’ the work.
