Over the last decade the National Library of Wales has developed an award-winning volunteer program, with hundreds of volunteers working to enrich data and our understanding of our collections through a range of tasks, from transcription and indexing to the tagging of photographs.
The library also has a long-standing partnership with Wikimedia, the organisation behind Wikipedia and Wikidata – a huge linked open database of people, places and all manner of things. During the Covid lockdowns we trialled the use of Wikidata and IIIF to add descriptive tags to images using open-source software developed by the Wiki community, before embedding this process into our own digital crowdsourcing platform.
Whilst the use of the IIIF image standard has long been established at the National Library, the use of Wikidata to describe our collections is still more experimental. The main advantages we see in this approach are multilingualism and enriched data.
Wikidata allows users to add labels to items in multiple languages. For example, there is only one item in the dataset for a tree, with a unique identifier, but it can be labelled and described in hundreds of languages, including Welsh. This means our volunteers can work in English or Welsh and we can capture and present that data in any language we choose. It also gives us access to a rich array of additional data about the places, people and things tagged in our collections.
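As a rough illustration of what this looks like in practice (a minimal sketch, not part of our platform's actual code), the public Wikidata API can return those multilingual labels for any item. The item ID Q10884 is used here on the assumption that it is the Wikidata item for a tree:

```python
import requests

# Minimal sketch: fetch the English and Welsh labels for one Wikidata item.
# Q10884 is assumed here to be the Wikidata item for 'tree'.
params = {
    "action": "wbgetentities",
    "ids": "Q10884",
    "props": "labels",
    "languages": "en|cy",   # English and Welsh
    "format": "json",
}
response = requests.get("https://www.wikidata.org/w/api.php", params=params, timeout=30)
labels = response.json()["entities"]["Q10884"]["labels"]

for language, label in labels.items():
    print(language, label["value"])   # e.g. "en tree" / "cy coeden"
```

Because the tag is a single identifier rather than a text string, the same volunteer contribution can be displayed in whichever language the reader chooses.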
Whilst using a controlled vocabulary like Wikidata means we can create structured data, as opposed to just collecting strings of text where one item might be described in multiple different ways by different volunteers, there are still challenges with our methodology.
Wikidata contains over 100 million items of data on all sorts of things, much of which is irrelevant to our users, meaning there is a risk of tagging the wrong thing. This might be accidental. For example, in one image a boy could be seen kneeling and our volunteers used the Wikidata item for ‘Kneeling Boy’ to tag the image. However, ‘Kneeling Boy’ is actually the title of a painting, so the wrong tag was applied.
Tags may also be applied in good faith, but the complex nature of Wikidata’s ontology means the wrong item is chosen, such as using ‘male’ (a sex or gender) instead of ‘man’ (a male adult human) to tag a man in a photograph.
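As an illustrative sketch only (this is not part of our tagging platform), looking up a candidate item's description and its 'instance of' (P31) statements before applying it is one way to catch this kind of confusion. The item IDs below are assumed to be the easily confused pair described above:

```python
import requests

API = "https://www.wikidata.org/w/api.php"

def describe_item(item_id: str) -> None:
    """Print an item's English label, description and 'instance of' values."""
    params = {
        "action": "wbgetentities",
        "ids": item_id,
        "props": "labels|descriptions|claims",
        "languages": "en",
        "format": "json",
    }
    entity = requests.get(API, params=params, timeout=30).json()["entities"][item_id]
    label = entity["labels"]["en"]["value"]
    description = entity["descriptions"]["en"]["value"]
    # P31 ('instance of') reveals what kind of thing the item is, e.g. a painting
    # title or a sex/gender value rather than a class of person.
    instance_of = [
        claim["mainsnak"]["datavalue"]["value"]["id"]
        for claim in entity.get("claims", {}).get("P31", [])
        if claim["mainsnak"].get("datavalue")
    ]
    print(f"{item_id}: {label} – {description} (instance of: {instance_of})")

# Assumed IDs: Q6581097 ('male', a sex or gender) vs Q8441 ('man', male adult human).
for candidate in ["Q6581097", "Q8441"]:
    describe_item(candidate)
```

A check like this will not catch every mistake, but it makes it easier to spot when a tag points at something other than the kind of thing a volunteer intended.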
The goal of the photo tagging project is to add tags to a large collection of 19th-century photo albums, providing more detailed data than is held in our catalogue. Over the last 12 months more than 100 volunteers have taken part in the tagging task on our crowdsourcing platform, contributing a total of 900 hours. The most active volunteers are those who are part of the library’s in-house volunteer team, although the project is open for anyone to participate.
More than 20,000 tags have been added to the photograph collection to date.
So, when a Master’s student in Library and Information Science at the University of Maryland enquired about a field placement, we saw a fantastic opportunity to review the standard of tagging by our volunteers so far. Amelia Eldridge, our Master’s student, had first visited the National Library while in Wales for an artist residency in 2020. She felt that a field placement with the Library would be an amazing way to combine her interest in Welsh culture with a graduation requirement.
Amelia set about reviewing a random sample of over 3,000 tags. She was looking for incorrect tags and for missing ones, where volunteers had perhaps passed up an opportunity to add useful data.
Out of 3,315 tags reviewed, 191 were marked as incorrect, an error rate of under 6%. A further 671 new tags were added to albums considered ‘complete’ (a 20% increase), suggesting that volunteers sometimes missed opportunities to tag certain things.
Amelia explains:
“The most added missing tag was ‘caption’ – lines of text used to explain or elaborate an illustration, figure, table, or photograph – with 155 tags added. I added this tag when images or illustrations had descriptive captions within the photo or illustration itself, not handwritten in graphite below it. The second most added was ‘Victorian fashions’ – fashions and trends in British culture during the Victorian era – with 45 tags added. I added this tag to portrait-style photos, where it seemed to me that the fashions the people were sporting were important to the description of the image.
I did not add this to images where people were clearly wearing Victorian fashions but the photo was not taken in portrait style. I would, however, not mark it incorrect if another volunteer did. This is an example of ‘tagger bias’, and I found it interesting to see how different people would describe a photo. In most instances I didn’t mark these differences as wrong; rather, they caused me to engage in self-reflection.”
The observed ‘tagger bias’ is a reminder that crowdsourcing descriptive data, whatever the methodology, is likely to lack consistency as people will tend to tag the things that interest them, or that they notice more prominently when examining an image. The ability to see tags added by others on the platform, however, does allow users to reflect on their own tagging.
When it came to the incorrect use of tags there was a clear pattern, as Amelia explains:
“I did mark certain tags as incorrect. The top three relate to gender. The most marked incorrect tag was ‘man’ (male adult human), with 74 tags marked incorrect. I would mark this tag as incorrect when there were multiple male-presenting people being tagged as only one man. I felt that the correct tag for these images, because there were multiple men depicted, was ‘group of men’. Then came ‘male’, which is intended only to be used to describe sex or gender; 45 tags of this kind were marked. I would either correct these to ‘man’ or ‘group of men’ depending on how many male-presenting people were in the actual image. The third most corrected tag was ‘woman’, with 18 incorrectly tagged. I would correct this tag if, as with the men, multiple female-presenting people were tagged as only one. They would be changed to ‘group of women’. ‘Female’ was also used incorrectly to describe a female person, but only twice. ‘Female’ and ‘male’ were used in early albums I evaluated, and it seems the volunteers corrected themselves quite quickly.”
The fact that so many of the incorrect tags result from an honest misunderstanding of the data suggests that providing greater guidance and training resources for volunteers could significantly reduce the error rate.
There were also a few issues regarding ethnicity, where individuals were tagged as Italian, Chinese or Native American. As Amelia was keen to stress, “we cannot assume identity”. Wikidata does have items for identifying residents of a place regardless of ethnicity, and Amelia suggests that using these would be less problematic, although even assuming that people in a photograph taken in Italy are residents of Italy is difficult to assert with any authority. For example, Amelia suggests that when “‘Native Americans in the United States’ was tagged within an image, changing it to ‘Indigenous Peoples of the Americas’ may be more inclusive and accurate.” Again, providing clear guidance for volunteers may help reduce examples of this problem.
I asked Amelia what her recommendations would be for reducing the number of errors.
“It is my feeling that many of the tags marked as incorrect could be avoided by training the volunteers not to add them – for example, avoid tagging ethnicity, or the gendered tags when describing a male or female. I would hesitate to have a specific set of predefined vocabulary tags, just because I wouldn’t want to limit the volunteers. As I’ve mentioned, something interesting about this project for me was seeing how there are different approaches to describing an image. And, as I’ve also mentioned, by and large the volunteers are already doing a good job at deciphering and tagging what is within the photo albums.
Another suggestion: do the volunteers learn any background on the photo albums before they begin their tagging work? Maybe a short talk with the curator in charge of them? Or a pre-recorded video for remote workers? I think some would find this interesting, and it would provide an opportunity to see another side of the library (curatorial).”
Amelia’s work to review the tagged albums and to identify patterns in user behaviour will be incredibly valuable as we look to develop and progress our crowdsourcing opportunities. Her perspective as someone who also contributed to the tagging as a volunteer will help us improve our service moving forward. The overwhelming conclusion is that the volunteers have done a great job of tagging the albums, with impressive accuracy. Amelia’s suggestions for training resources, and for asking curators to give some history and context for the collections being tagged, are incredibly useful and something I hope we can develop for our next tagging project.
So a massive thank you to Amelia for this work. We wish her all the best with her Master’s and hope she got as much out of her field placement as we did!