#MLA15: A First Look. Comparison with #MLA14

Thank you to everyone who attended our MLA and Data panel in Vancouver. I had a great time at the conference in spite of being ill, and I was pleasantly surprised by its relaxed, friendly atmosphere. I regretted not being well enough to attend all the panels I had meant to; this meant I could not say hi to a number of people I’ve been hoping to meet in person for years now. Such is life: “between the idea/And the reality/Between the motion/And the act/Falls the Shadow.”

I have finished cleaning the data from my #MLA15 archive; some deduplication remains to be done, but in the meantime I thought I’d share a quick comparison between the number of tweets during conference days last year and this year:

mla14-15 comparison

I still need to compare the numbers of unique users over the same period of time in 2014 and 2015, but in the meantime, at least according to the latest version of my dataset, there was a considerable increase in tweet volume: from 21,915 tweets in 2014 (conference days only) to 23,609 tweets in 2015 (conference days only), i.e. 1,694 more tweets.
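The arithmetic behind the comparison is trivial, but for the record, a minimal sketch using the conference-day totals quoted above:

```python
# Conference-day tweet totals from the current version of the dataset.
totals = {2014: 21_915, 2015: 23_609}

increase = totals[2015] - totals[2014]
pct_change = 100 * increase / totals[2014]

print(f"{increase} more tweets in 2015 ({pct_change:.1f}% increase)")
```

That is an increase of 1,694 tweets, or roughly 7.7% year on year.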

Once again, the usual caveat applies: the map is not the territory, and these numbers represent only the dataset in question.

More, I hope, soon.

#MLA14: A First Look (IV)

The Story So Far

We have been looking at an archive of tweets tagged with #MLA14, which corresponded to the 2014 MLA (Modern Language Association) Annual Convention. It was held in Chicago from Thursday 9 to Sunday 12 January 2014. You can still browse or search 2014 sessions in the online Program.

The studied archive comprises a dataset of 27,491 unique tweets, collected between Sunday September 01 2013 at 20:35:07 and Wednesday January 15 2014 at 16:16:41, Central Time.

The dataset studied in this series of posts was collected and cleaned by Chris Zarate and myself.

After deduplication we were down to 27,491 tweets; a subset containing only the tweets posted during the actual convention days totals 21,915 tweets.

We have been offering some key figures and some basic visualisations of the data.

For the first part of this series, click here.

For the second part of this series, click here.

For the third part of this series, click here.

Text Analysis

We used Voyant Tools (previously the unfortunately named Voyeur), a web-based reading and analysis environment for digital texts developed by Stéfan Sinclair and Geoffrey Rockwell, to obtain the most frequent words in the total set of tweets (including RTs and replies) posted with #MLA14 during each day of the convention.

Below we share some word clouds to visualise this. Word clouds are visual presentations of keywords extracted from a text, visually differentiated according to their frequency of use in that text. Voyant uses Cirrus, a “visualization tool that displays a word cloud relating to the frequency of words appearing in one or more documents. […] The larger the word, the more frequent the term.”

In this case we are sharing static image files exported from Voyant itself, together with the top 5 most frequent words in each set of tweets. In all cases we applied globally a customised English (“Taporware”) stop-word list, which included words like #mla14, MLA, RT, panel, session, http, t.co, etc.

Numbered hashtags corresponding to sessions were not included in the stop-word list, as one of our intentions was to reveal which sessions were mentioned most frequently each day. (To find out which session corresponds to each numbered hashtag, check the online Program.)
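For readers who want to replicate this outside Voyant, the core operation (tokenise, drop stop words, count) can be sketched in a few lines of Python. The stop-word list below is a tiny illustrative sample, not the actual Taporware list we used:

```python
import re
from collections import Counter

# Illustrative subset of stop words; the real analysis used a customised
# English ("Taporware") list plus noise terms like #mla14, RT and t.co.
STOP_WORDS = {"#mla14", "mla", "rt", "panel", "session", "http", "t.co",
              "the", "a", "an", "and", "of", "to", "in", "on", "at", "is"}

def top_words(tweets, n=5):
    """Return the n most frequent words in a list of tweet texts.
    Numbered session hashtags (e.g. #s80) are deliberately kept so that
    frequently mentioned sessions surface in the counts."""
    counts = Counter(
        token
        for text in tweets
        for token in re.findall(r"#?\w[\w.]*", text.lower())
        if token not in STOP_WORDS
    )
    return counts.most_common(n)

sample = ["RT @a: great #s80 talk on humanities #MLA14",
          "#s80 packed! humanities at #mla14"]
print(top_words(sample, 3))
```

On the two toy tweets above this surfaces #s80 and humanities as the most frequent terms, which is exactly the kind of signal we were after with the session hashtags.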

Limitations and Fair Warning

After running the four corpora through Voyant more than once, we discovered the tool did not reproduce the same results, particularly for word and unique-word counts. The top 5 most frequent words remained stable, with minimal variations of little significance, which suggests the results we share in that regard are broadly reliable, though not 100% exact.

We were understandably disappointed at the failure to ensure reproducibility using the same corpora and the same tool (we do not consider any of the corpora too large for reliable text analysis). We will keep looking into it, will keep aiming for reproducibility of the results with different tools, and will update any findings here.

Here we present, as a research progress update, only the figures and clouds obtained after the fourth trial, having cleared caches and ensured the corpora were complete.

Thursday 9 January 2014

Total number of tweets: 4,558

Total number of words: 71,630

Total number of unique words: 9,142

Top 5 most frequent words in the corpus: #s80 (271), #s66 (199), humanities (188), #s130 (156), #s173 (150).

#mla14 Thursday 9 January Cirrus Word Cloud. Retrieved January 22, 2014 from http://voyeurtools.org/tool/Cirrus/

Friday 10 January 2014

Total number of tweets: 7,417

Total number of words: 131,500

Total number of unique words: 13,367

Top 5 most frequent words in the corpus: data (381), #s299 (378), students (354), #s339 (342), reading (342).

#mla14 Friday 10 January Cirrus Word Cloud. Retrieved January 22, 2014 from http://voyeurtools.org/tool/Cirrus/

Saturday 11 January 2014

Total number of tweets: 6,265

Total number of words: 112,482

Total number of unique words: 11,954

Top 5 most frequent words in the corpus: #s577 (562), digital (543), work (413), humanities (340), #medievaltwitter (283).

#MLA14 Saturday 11 January Cirrus Word Cloud. Retrieved January 22, 2014 from http://voyeurtools.org/tool/Cirrus/

Sunday 12 January 2014

Total number of tweets: 3,675

Total number of words: 66,426

Total number of unique words: 8,206

Top 5 most frequent words in the corpus: #s679 (626), digital (266), #s738 (212), @adelinekoh (174), #s708 (173).

#MLA14 Sunday 12 January Cirrus Word Cloud. Retrieved January 22, 2014 from http://voyeurtools.org/tool/Cirrus/


Tool Citation

Sinclair, S. and G. Rockwell (2014). Voyant Tools: Reveal Your Texts. Voyant. Retrieved January 22, 2014 from http://voyeurtools.org/

For the first part of this series, click here.

For the second part of this series, click here.

For the third part of this series, click here.



#MLA14: A First Look (III)

[For the first part of this series, click here.

For the second part, click here.

For the fourth part, click here.]

A Summary

The dataset we have includes 27,491 unique tweets, collected between Sunday September 01 2013 at 20:35:07 and Wednesday January 15 2014 at 16:16:41, Central Time.

We have now created a subset containing the tweets posted during the actual convention, i.e. from Thursday 9 January at 6:04:45 AM to Sunday 12 January 2014 at 23:32:46, Central Time. The total number of tweets in this period is 21,915.

The table below gives totals per day and a grand total for the whole event:

#mla14 Summary Table. CC-BY Ernesto Priego and Chris Zarate

Visualised as bar charts it looks like this:

#mla14 Summary Chart (with Total) CC-BY Ernesto Priego and Chris Zarate

#mla14 Summary (without Total), Thurs 9 – Sun 12 January 2014

#mla14 Average Tweet Rate Per Minute chart, CC-BY Ernesto Priego and Chris Zarate

According to our data, Friday saw the most activity on Twitter, with 7,417 tweets. There were fewer tweets at the beginning and end of the convention. According to the program, panels finished at 3:00pm on Sunday, the day that shows significantly fewer tweets.
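The arithmetic behind the table and charts can be cross-checked quickly; the per-day counts below are the figures reported in this series:

```python
# Per-day tweet totals for the convention days.
per_day = {
    "Thursday 9 January": 4558,
    "Friday 10 January": 7417,
    "Saturday 11 January": 6265,
    "Sunday 12 January": 3675,
}

total = sum(per_day.values())             # grand total for the event
peak_day = max(per_day, key=per_day.get)  # busiest day on Twitter

print(total)     # 21915, matching the convention-days subset
print(peak_day)  # Friday 10 January
```

The per-day figures sum exactly to the 21,915 tweets of the convention-days subset, which is a reassuring internal consistency check on the dataset.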

For more, watch this space.






#MLA14: A First Look (II)

[For the first part of this series, click here.

For the third part of this series, click here.]

As we said in the previous post, the dataset we have includes 27,491 unique tweets, collected between Sunday September 01 2013 at 20:35:07 and Wednesday January 15 2014 at 16:16:41, Central Time.

(Needless to say, Twitter activity with #MLA14 has continued, but Wednesday January 15 at 16:16:41 is when the archive we are focusing on ends.)

Another Finding: How Many Unique Twitter Usernames

There are 3,545 unique usernames in the dataset. Naturally, not all users tweet with the same volume or frequency.

This number does not mean that 3,545 unique “real” people tweeted with the hashtag: some Twitter users participated in the backchannel with more than one username or account (for example, a personal and an organisational or institutional one), and this is not always easy to identify. It is also possible that more than one person manages a single account.
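Counting unique usernames amounts to normalising case (Twitter usernames are case-insensitive) and counting distinct values. A toy sketch, assuming archive rows keyed by a `from_user` field as in TAGS exports:

```python
from collections import Counter

# Toy stand-in for archive rows; a real TAGS export carries 18 columns
# of tweet metadata, including the sender's username.
rows = [
    {"from_user": "Alice", "text": "#MLA14 opening session"},
    {"from_user": "bob",   "text": "RT @Alice: #MLA14 opening session"},
    {"from_user": "alice", "text": "Heading to #s80 #MLA14"},
]

# Normalise case before counting, since "Alice" and "alice" are the
# same account as far as Twitter is concerned.
tweets_per_user = Counter(row["from_user"].lower() for row in rows)

print(len(tweets_per_user))      # number of unique usernames
print(tweets_per_user["alice"])  # tweets by one user
```

The same `Counter` also gives the per-user tweet distribution, which is where the unevenness mentioned above (some users tweeting far more than others) becomes visible.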

The following chart compares the number of Twitter usernames that tweeted with #MLA14 during the period of collection described above with the official number of registered participants in the program* and an approximate number of paid attendees.

#MLA14 Comparative Participants Chart, CC-BY Chris Zarate and Ernesto Priego

Many questions arise about the relationships between those attending the convention, those registered in the program (who are a subset of the former) and those participating via the backchannel.

Determining nuanced relationships between these groups might shed some light on the role of tweeting within the context of the convention and of live-tweeting from the convention itself. Is the backchannel a significant method of “amplification” beyond the convention’s venue? Can current data answer this question and help lay out trends for the future?

There are of course many other questions arising from the data. We’ll be looking at them gradually, some here and hopefully with more detail in a future publication.

*Chris Zarate released program data on GitHub in XML and JSON format: https://github.com/mlaa/mla14.org

Please check John Mulligan’s blog for some very interesting visualisations of scholarly networks including #MLA14.

#MLA14: A First Look (I)

Twitter Research and Academic Conferences

[For part II of this series, click here.

For part III of this series, click here.

For part IV of this series, click here.]

The 2014 MLA (Modern Language Association) Annual Convention, was held in Chicago from 9 to 12 January 2014. You can still browse or search 2014 sessions in the online Program.

As I said in a previous post (Priego, 17/12/2013),

The MLA has been a pioneering academic organization in embracing Twitter. Since 2007 the so-called “conference back channel” has been growing considerably. Adoption of Twitter amongst scholars and students seems on the rise as well, and reporting live from the conference is no longer an underground, parallel activity but pretty much a recognized, encouraged aspect of the event.

As explained by Ross et al. (2011) [PDF],

Microblogging, with special emphasis on Twitter.com, the most well known service, is increasingly used as a means of undertaking digital “backchannel” communication (non-verbal, real-time, communication which does not interrupt a presenter or event, (Ynge 1970, Kellogg et al 2006). Digital backchannels are becoming more prevalent at academic conferences, in educational use, and in organizational settings. Frameworks are therefore required for understanding the role and use of digital backchannel communication, such as that provided by Twitter, in enabling participatory cultures.

Ross et al. studied the Twitter activity around three digital humanities conferences (#dh09, #thatcamp, and #drha09/#drha2009), collecting and analysing a corpus of 4,574 tweets (4,259 original tweets, around 90%, and only 313 retweets).

Though this activity took place in 2009, around events considerably smaller than the MLA, the study by Ross et al. remains an important reference for studies of Humanities scholars’ use of Twitter in general, for the data collection I have been conducting (not only of the MLA backchannel), and for the research I have been meaning to publish eventually.

As a comparison from another discipline, Desai et al. (2012) collected and analysed 993 tweets over the five days of the American Society of Nephrology (ASN) annual scientific conference in 2011 (#kidneywk11).

There is still a paucity of reliable, timely research on how scholars use Twitter around (before, during, after) academic conferences in different disciplines. Part of the problem is that studies of social media are often not disseminated through social media channels (either as fragmentary outputs on Twitter or as blog posts), and the “publishing delay” involved in peer-reviewed formal publication means that the data reaches us, as in the two cases cited above, two years later.

The Methods

I have been following and participating remotely in the MLA convention through Twitter since 2010, attempting different ways of both engaging with and analysing the scholarly activity taking place under the hashtag(s) associated with the event. This year, #MLA14 (or #mla14; it’s not case sensitive) seemed to surpass all expectations of adoption by far.

I have been using Martin Hawksey‘s Twitter Archiving Google Spreadsheet, TAGS (now in its fifth version), for a few years now, and it is what I used to start collecting tweets tagged with #MLA14 from 1 September 2013. In Hawksey’s words, TAGS is “a quick way to collect tweets, make publicly available and collaborate exploring the data.”

The archives I set up updated automatically every minute, but the limit imposed by Google Sheets is 400,000 cells per sheet, and TAGS populates 18 columns with the tweets and associated metadata.

This means that the spreadsheets can fill up very quickly and scripts can become unresponsive. I knew that if I wanted to collect as much as possible from what would be a very busy feed, I would require more than one archive, and I would have to hope to be able to deduplicate and collate the data into more manageable chunks later. In practical terms it meant being very attentive, monitoring both the feed and the Google spreadsheets, following the event on Twitter almost as if I were literally there, and starting a new archive before the previous one had collapsed.
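The cell limit translates into a hard ceiling on tweets per archive; a back-of-the-envelope calculation (ignoring the header row) shows why a busy feed needs several overlapping archives:

```python
CELL_LIMIT = 400_000  # Google Sheets cap per sheet at the time
COLUMNS = 18          # columns TAGS populates per tweet

max_rows = CELL_LIMIT // COLUMNS  # tweets one archive can hold

# At the peak rate of 21.1 tweets per minute observed during the
# convention, a fresh archive fills in well under a day.
hours_to_fill = max_rows / 21.1 / 60

print(max_rows)                 # 22222
print(round(hours_to_fill, 1))  # roughly 17.6 hours at peak rate
```

Around 22,000 tweets per archive, against a weekend that produced over 21,000 tweets on conference days alone, is exactly why a single spreadsheet was never going to be enough.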

After the conference I was contacted by Chris Zarate from the MLA, who had also been archiving the #MLA14 feed with TAGS. He had some gaps in his data, and so did I; only by working together have we managed to glimpse a more or less complete dataset of #MLA14 tweets.

A First Finding: How Many

Chris and I had more than 75,000 tweets in our combined sets; after deduplicating them with OpenRefine we were down to 27,491 tweets.
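OpenRefine did the actual deduplication; conceptually it reduces to keeping one row per tweet id. A minimal Python equivalent, assuming each row carries the tweet’s unique id in an `id_str` field (the field name is an assumption here for illustration):

```python
def deduplicate(rows, key="id_str"):
    """Keep the first occurrence of each tweet id, preserving order."""
    seen = set()
    unique = []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            unique.append(row)
    return unique

# Two archives of the same feed overlap heavily; deduplication
# keeps a single copy of each tweet.
combined = [
    {"id_str": "101", "text": "Landed in Chicago #MLA14"},
    {"id_str": "102", "text": "#s80 starting now #MLA14"},
    {"id_str": "101", "text": "Landed in Chicago #MLA14"},  # duplicate
]
print(len(deduplicate(combined)))  # 2
```

Deduplicating on the tweet id rather than the text matters because distinct tweets can share identical text (retweets, repeated announcements).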

The MLA annual convention may be a mega-conference (around 7,500 paid attendees this year, according to Rosemary Feal), but 27,491 tweets is still a remarkably healthy figure, reflecting substantial adoption of Twitter by humanities scholars.

Chris did a quick plot over 9–12 January 2014 (the actual conference days). It is possible we missed some tweets here and there due to Twitter API rate-limiting, but there are no glaring gaps:

#mla14 conference days activity plot. Chart cc-by Chris Zarate and Ernesto Priego

Not surprisingly, the overall Twitter activity peaked in the afternoon of Saturday 11 January (remember the conference took place from 9 to 12 January 2014). It was that morning, Central Time, that I tweeted that the #MLA14 feed was receiving 21.1 tweets per minute.

Naturally, many research questions arise.

What’s Next: More Soon

Chris and I are still working on the dataset so as to have it in different, more manageable forms that allow for easier qualitative and quantitative analysis.

We are also looking forward to eventually sharing a CSV file containing data and metadata of tweets posted between Sunday September 01 2013 at 20:35:07 and Wednesday January 15 2014 at 16:16:41 (Central Time).

If you have a dataset including #MLA14 tweets from before Sunday September 01 2013 at 20:35:07, we would love to hear from you.

I will keep sharing some insights from the dataset here. Hopefully I’ll have another post on this blog tomorrow with some interesting findings.

N.B. Sadly, in spite of constant efforts by me and many other colleagues to encourage the recognition of blog posts as academic outputs, research of this type that is not presented in traditional academic venues (read: peer-reviewed academic articles or monographs) rarely gets cited, which is frankly disappointing. I therefore regret that I will be unable to blog the complete analysis or share the whole dataset until I have secured at least one formal output for this ongoing research. Were I at a different stage of my career I could probably afford to, but that is not the case at the moment.

Again, with many thanks to Chris Zarate for collaborating in this project.


Desai, T., Shariff, A., Shariff, A., Kats, M., Fang, X., Christiano, C., & Ferris, M. (2012). “Tweeting the meeting: an in-depth analysis of Twitter activity at Kidney Week 2011”. (V. Gupta, Ed.) PLoS ONE, 7(7), e40253. doi:10.1371/journal.pone.0040253. Accessed 16 January 2014.

Priego, E. (2013, December 17). “Live-Tweeting the MLA: Suggested Practices”. MLA Convention blog guest post, MLA Commons. http://convention.commons.mla.org/2013/12/17/live-tweeting-the-mla-suggested-practices/ . Accessed 16 January 2014.

Priego, Ernesto (ernestopriego). “More than 14,000 tweets in my #mla14 archive (surely incomplete) since September. At the moment 21.1 tweets per minute. *Back*channel?!”. 11 Jan 2014, 16:40 UTC. Tweet, https://twitter.com/ernestopriego/status/422045270688288768. Accessed 16 January 2014.

Feal, Rosemary G. (rgfeal). “@ernestopriego around 7,500”. 16 Jan 2014, 18:39 UTC. Tweet, https://twitter.com/rgfeal/status/423887347734687744. Accessed 16 January 2014.

Ross, C., Terras, M., Warwick, C., & Welsh, A. (2011). “Enabled backchannel: conference Twitter use by digital humanists”. Journal of Documentation, Emerald Group Publishing. Retrieved from UCL Discovery (Open Access): http://discovery.ucl.ac.uk/155116/1/Terras_EnabledBackchannel.pdf . Accessed 16 January 2014.

For part II of this series, click here.


MLA Discussion Group on Comics and Graphic Narratives Sessions at MLA 2014


[Reblogged from http://blog.comicsgrid.com/2014/01/comics-studies-sessions-mla-2014/]

[updated 8 January 2014 8:30am GMT]

The Modern Language Association’s Annual Convention for 2014 takes place in Chicago this week, from January 9th to 12th, 2014.

The following information was kindly shared with us by Charles Hatfield.

The MLA Discussion Group on Comics and Graphic Narratives are pleased to make available complete descriptions, with abstracts, of their three panels at the MLA 2014 convention*:

Scholars interested in comics and comics scholars at MLA, please join these panels if you can.

There will also be a Friday night cash bar and reception, jointly hosted with the MLA Division on Children’s Literature.

*For other comics events at MLA 2014, look at this list compiled by Charles Hatfield. (Thanks to Nick Sousanis for providing this link).

Follow #MLA14 on Twitter for updates from/about the conference.

Please check out the MLA Discussion Group on Comics and Graphic Narratives blog  for information on other comics-related events at MLA 2014.

As always, MLA scholars presenting on any aspect related to comics are invited to submit their papers to The Comics Grid: Journal of Comics Scholarship.

We are always open to receiving reports with photos from comics sessions at academic conferences for The Comics Grid’s blog. Information on how to contribute to the blog is available here.

Open Access: Getting Things Right

I reblog here a post I published on my home site earlier today. I thought it could potentially reach a different audience if I share it here as well.

Cameron Neylon published a very interesting and timely opinion piece in the Times Higher Education titled “Let’s get this right” (28 March 2013).

[Unfortunately, as frequent readers of THE online will know, registered users of the site can be blocked from viewing more articles if they have exceeded their allowed quota… which is not ideal, particularly considering the subject debated. Hence this post].

Cameron’s article is short and to the point, and I cannot but agree with his call to 1) not associate ‘Green’ and ‘Gold’ with specific business models, by which he means whether Article Processing Charges are implemented or not, 2) refer to the ‘Gold’ and ‘Green’ options as publication channels, and 3) use robust evidence when advocating Open Access.

He writes:

[…]the terms “green” and “gold” have very specific technical meanings. They refer to mechanisms of access: “green” means access provided through repositories to author manuscripts; and “gold” means access provided to the final published version of papers in journals.

They explicitly do not refer to business models. Gold does not necessarily mean that article fees apply. The majority of outlets registered on the Directory of Open Access Journals website do not charge any fee, and some of these are very prestigious in their fields. According to a definitive 2012 study by Mikael Laakso and Bo-Christer Björk of the Hanken School of Economics in Helsinki, at least 30 per cent and possibly as many as 60 per cent of articles made immediately accessible on publication are in journals that do not charge article fees. Yet, over the past 12 months, reports, arguments and parliamentary questions have all uncritically repeated the assumption that public access through journals entails such fees.


The terms “green” and “gold” are now so debased that we should simply stop using them. Let’s talk instead about channels of publication, repositories and journals, and new blends that blur these distinctions. Let’s talk about the services we want and whether they are best delivered by commercial providers or by the community: peer review, copy editing, archiving and indexing. And let’s talk about the full range of sustainable open-access models and how they are appropriate, or not, in different research domains and settings.

Perhaps more importantly, Cameron calls for the use of robust evidence when lobbying for Open Access, some of which already exists. He is entirely correct that “for a robust scholarly debate to proceed, we need more evidence to be published and reviewed.” (Some of us think here: “Yes! For the win! If only there were funding, more time and channels available to me to do this!”)

I hurried a comment to his article, which I have copied and pasted below with some minor corrections I could not make in the THE interface:

Cameron is right to call for a correct use of the terminology. I would say though that what is “poisoning the debate” is not necessarily an imprecise usage (or rather, understanding or application) of ‘Green’ and ‘Gold’, but a certain unwillingness to

  1. have engaged with open access before it became a governmental imposition and
  2. accept that open access always-already posits that the traditional business model of paid-subscription or paywalled journals is not working.

To be honest, I may myself have referred to ‘Green’ and ‘Gold’ as ‘business models’ within the context of [discussions around] the Finch report, but not because I ignore the fact that, as Cameron rightly points out, they do not necessarily refer to specific business models per se, if by that we understand the charging (or not) of article processing fees. Why has this equation of ‘Gold’ Open Access with paying fees taken place?

In the traditional and conservative discursive universe around the Finch report, at least as I have experienced it online and in the dozens of academic workshops, lectures and conferences I have attended in the last year, there are no journals other than the traditional ones, which would not embrace open access unless they charged APCs. (This is a generalisation, of course, since not all traditional journals will or would go this route.) Such journals do exist, but my point is that part of the problem is precisely that, instead of thinking of journals other than those that traditionally imposed a paid-subscription model, when some people think of going ‘Gold’ they are thinking of publishing in those same (often ‘legacy’) journals that have been or will be ‘forced’ to offer an open access model.

For many, ‘Gold’ equals ‘Unpayable APCs For Which Almost Nobody Has Funding’. In my view, this (alas, incorrect and biased) definition is rooted in an unwillingness to interrogate the academic publishing system as a whole, including the reasons why people publish in paid-subscription journals in the first place, and the many years in which the structural inequality of access to academic knowledge (limiting access for many outside a few elite institutions in the ‘developed’ West, fee waivers and discounts or not) remained largely unquestioned.

So yes indeed, ‘Green’ and ‘Gold’ do not equal ‘business models’, if by that we mean whether authors have to pay to publish or not. As I have already suggested, the question is why this definition has become so widespread, and I would suggest one reason is a lack of imagination, and even of the courage to dare a more thorough transformation of the academic publishing landscape.

This does not mean that OA advocacy of this type is calling for the ‘destruction’ of academic publishing as we have known it, as some anti-OA colleagues may suggest. On the contrary, it means calling for a realistic, intelligent, ethical renovation of the sector in the context of the radical transformations to the conditions of production and reproducibility of academic knowledge in the age of the Web.

Indeed, equating Gold OA with paying APCs is incorrect, and it undermines the OA ethos, because it merely shifts the economic burden from libraries to individual authors. This is not the point, and it is only bound to promote a kind of inequality that OA also seeks to tackle.

In the end, Open Access is more than an ethical stance: it is a technology and a specific redefinition of traditional publishing business models, because it posits that charging ridiculously expensive institutional subscription fees is not fit for purpose, since (amongst other reasons) it alienates non-elite academics and non-elite-academic taxpayers, leaving them in many cases without access to content that discusses their own situation, was authored by them, or was funded indirectly through their taxes.

Proposing that publicly funded research should be openly accessible to the taxpayers who funded it is an ethical proposition, but it is also a particular kind of business model. Proposing that academic publishing currently has a business model that is very likely to become unsustainable and that in many cases exploits the labour of academics, and that therefore something has to be done, is a call for the discovery of new business models.

New business models often require radical exercises of imagination: we cannot make a successful transition to OA by leaving things as they are, reacting to imposition rather than acting by will, and without a desire (importantly, from Early Career Researchers as well) to interrogate the most obvious foundations of academic publishing and innovate accordingly.

This is a quick blog post and therefore not an academic article. It is an opinion piece and it should be interpreted with that framework in mind.

Around the DH World in 80 Days

Global Outlook DH banner

I have copied and pasted below a message sent by Alex Gil. I added some hyperlinks for context. Links open in new windows/tabs. Apologies for cross-posting.

“It is my pleasure to introduce to you one of our first pilot projects at GO::DH, Around DH in 80 Days!

[Global list: https://docs.google.com/spreadsheet/ccc?key=0AmgLcm5jfVhSdGlPNm1WQ0hRYjFTU1E5QnBDdlZMQWc&usp=sharing#gid=0]

AroundDH hopes to be a fun way to introduce the work of colleagues around the world to those who are just starting out. Every day for 80 days we will visit a group or projects across the globe. An editorial board will select a total of 80 groups or projects, one for each day. Groups in the list will be approached to describe themselves and highlight their work in 200 words or less. We will do our best to bring attention to digital scholarship outside of Canada, Europe, the US and Japan. In that sense, we are departing from a broad and inclusive vision of DH. Besides the audience of newcomers, the global scope of the tour should also attract some of the more seasoned DH’ers. The greatest challenge of the editorial board is to balance the geographical margins with the greatest hits of the northern mainstream. The greatest hope of the project is to paint enough of a broad picture of digital humanities to redefine it in the process. Thus, AroundDH can be read not only as a tour of the globe, but also as a dance around the periphery of DH.

The project began as an email experiment. One email was sent daily from my outbox to all the librarians in the H&H division at Columbia with the subject “The DH Daily.” Every day, our librarians, who are in the middle of a 2-year professional development program to become the consultation arm of our Digital Humanities Center at Columbia, would visit a different DH center or project. Others outside of Columbia heard about the experiment and wanted to be included in the email list. The appeal was the small dosages. Like the librarians, the rise of DH across the land has brought crowds of DH-curious academic professionals and students to our doors asking ‘where do I begin?’ At the same time that the emails were going out, I was slowly but surely becoming part of the conversations around Global Outlook DH. There we were trying to discover as much as we could about the world outside the fields of vision of the member-nations of the ADHO. Eventually these two sets of concerns blended into one, and thus was born the idea for Around DH in 80 days.

The project is currently being developed by Ryan Cordell’s Doing Digital Humanities graduate class (#s13dh). You are still welcome to contribute to our global list. After Ryan’s class develops the first stage of the project, the project will be passed around the world for refinement. Around DH indeed!”

Global Monitoring Report Releases New Policy Paper

UNESCO’s Global Monitoring Report (http://www.unesco.org/new/en/education/themes/leading-the-international-agenda/efareport/) has released a new policy paper, “Private sector should boost finance for education.” [Click on title to download PDF].

There’s also an excellent write-up and commentary on the World Education Blog (http://efareport.wordpress.com/2013/01/18/business-leaders-at-the-world-economic-forum-must-boost-finance-for-education/).

I thought it would be of interest to MLA members.

En Español y en Inglés: Global DH/HD Globales

Spanish announcement followed by English.

En nombre de la Alianza de Organizaciones de Humanidades Digitales (o Humanística Digital) (ADHO por sus siglas en inglés), es un placer anunciar la creación de nuestro primer Grupo de Interés Especial: Perspectivas Globales a las Humanidades Digitales (GO: DH por sus siglas en inglés) y a invitarles a participar.

GO: DH es una comunidad de intereses cuyo propósito es superar las barreras que limitan la comunicación y colaboración entre investigadorxs y estudiantes de los sectores del arte digital, las humanidades, o herencia cultural en economías de alto, medio y bajo nivel de desarrollo.

Las actividades centrales del GO: DH son el descubrimiento, construcción de comunidades, investigación y promoción. Su objetivo es vincular las fortalezas, intereses, habilidades y experiencias complementarias de sus participantes a través de proyectos especiales, eventos, acciones promocionales, y al apoyar la colaboración entre individuos, proyectos e instituciones. Se funda en el principio de que este trabajo se está haciendo en muchos países y regiones y tenemos que aprender mucho de modo mutuo.

La participación en el GO: DH está abierta a todas las personas que compartan estos objetivos. Si tienes interes en participar, puedes visitar la web del GO: DH  http://www.globaloutlookdh.org/, unirte a la lista de correo  http://listserv.uleth.ca/mailman/listinfo/globaloutlookdh-l y/o seguirnos en Facebook o Twitter (@globaloutlookdh).

Spanish text is a slightly-modified version of the translation kindly provided by Yasmín Portales Machado.

The Alliance of Digital Humanities Organizations (ADHO) is an umbrella organisation whose goals are to promote and support digital research and teaching across arts and humanities disciplines, drawing together humanists engaged in digital and computer-assisted research, teaching, creation, dissemination, and beyond, in all areas reflected by its diverse membership. (Read more about the ADHO here).

The ADHO has announced the creation of its first Special Interest Group (SIG): Global Outlook::Digital Humanities (GO::DH).

GO::DH is a Community of Interest whose purpose is to address barriers that hinder communication and collaboration among researchers and students of the Digital Arts, Humanities, and Cultural Heritage sectors across and between High, Mid, and Low Income Economies.

The core activities of GO::DH are Discovery, Community-Building, Research, and Advocacy. Its goal is to leverage the complementary strengths, interests, abilities, and experiences of participants through special projects and events, profile and publicity activity, and by encouraging collaboration among individuals, projects, and institutions. It is not an aid programme. Instead it recognises that work is being done in many countries and regions and that we all have much to learn from each other.

Participation in GO::DH is open to all who share its aims. If you are interested in participating in this initiative, you can visit the GO::DH website http://www.globaloutlookdh.org/, join the GO::DH mailing list (http://listserv.uleth.ca/mailman/listinfo/globaloutlookdh-l), or follow us on Facebook or Twitter (@globaloutlookdh).


Please help us spread the word in any way you can. Every little helps. If you know other languages apart from English do spread the word as well. This is a community effort and it will only achieve its true goal if we make it reach those we don’t already know of or those who don’t already know of us. Thank you.

On a related note, you might be interested in reading my interview with Alex Gil, who is part of the GO::DH group, at 4Humanities.