Advanced Archaeological Training

(University of Basel, Digital Humanities Lab, Switzerland)

Keywords: AI, objects, metadata, photography, research

Description: Research using digital visual methods and artificial intelligence is an increasingly important field in archaeology and the object sciences in general. We would like to offer our course as advanced archaeological training at CHNT25; it is part of a project we are currently undertaking in collaboration with the Classical Studies Department of the University of Basel and the Antikenmuseum Basel.
In this interdisciplinary project, led by the Digital Humanities Lab of the University of Basel, the aim is to apply computational photography and machine learning methods to assets of cultural relevance.
We combine these methods with traditional archaeological research by focusing on a specific class of objects: Roman clay lamps. We will have recorded these with Reflectance Transformation Imaging (RTI) by the time the conference is held. Together with scientists and archaeologists, we will then generate metadata to train the machine learning approach; the annotated images serve as reference data for AI image classification.
The small collection of Roman clay lamps gives us the opportunity to work on the documentation of archaeological objects and to collect human-generated datasets of annotated images for use in AI image classification.
In our hands-on training on working with photographic resources of archaeological heritage, we examine how single photographs as well as image groups are described. Based on these experiences, we discuss the potential of machine learning components for semi-automatic image annotation and clustering. We are interested not only in object-specific meta-information but also in contextual metadata that describe the connections between objects and serve as the criteria for clustering. The combination of rich semantic metadata and machine learning increases the functionality and value of digital source material for archaeological research.
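To illustrate how contextual metadata can act as clustering criteria, the following is a minimal sketch in plain Python. The lamp records, tags, and similarity threshold are entirely hypothetical and not drawn from the project's actual data or tooling; the idea is simply that objects sharing enough metadata tags fall into the same group.

```python
# Hypothetical lamp records: each object carries a set of metadata tags
# (object-specific features and contextual information).
lamps = {
    "lamp_01": {"volute nozzle", "gladiator motif", "1st c. AD"},
    "lamp_02": {"volute nozzle", "rosette motif", "1st c. AD"},
    "lamp_03": {"factory lamp", "maker's mark", "2nd c. AD"},
    "lamp_04": {"factory lamp", "2nd c. AD"},
}

def jaccard(a, b):
    """Similarity of two metadata tag sets: intersection over union."""
    return len(a & b) / len(a | b)

def cluster(records, threshold=0.3):
    """Greedy single-pass clustering: assign each object to the first
    cluster whose seed shares enough metadata, else start a new cluster."""
    clusters = []  # list of (seed tag set, [object ids])
    for obj_id, tags in records.items():
        for seed, members in clusters:
            if jaccard(seed, tags) >= threshold:
                members.append(obj_id)
                break
        else:
            clusters.append((tags, [obj_id]))
    return [members for _, members in clusters]

print(cluster(lamps))
# → [['lamp_01', 'lamp_02'], ['lamp_03', 'lamp_04']]
```

A real pipeline would of course operate on richer, standardized metadata and learned image features rather than hand-written tag sets, but the principle is the same: shared metadata defines the similarity on which clusters are built.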

Max. 10-15 participants
Please bring some examples from your own collection and ideas for object-specific meta-information and semantic information (no technical equipment is needed).