On 21 October, the Centre for Digital Humanities and Information Society launches its lecture series TÜ Digihum Talks, which aims to offer talks on digital humanities topics. The talks take place every other Friday from 14:00 to 16:00, and all lectures are streamed online (via Zoom). Some lectures also take place on site in Tartu, at Jakobi 2 – 103, with light snacks and drinks. Information is shared via the centre's website, mailing lists, and the centre's Facebook page.
The Zoom link is here once again.
"From artificial intelligence to artificial stupidity. Mapping the dominant enthusiasms and concerns related to the use of AI technologies in education"
Jakobi 2 and Zoom
In recent years, many educational institutions around the world, from early-years settings to higher education, have become accustomed to using AI-based educational technologies in a variety of ways: for predicting outcomes and preventing risks, for providing insights into the processes of learning, or for personalizing education around each student's needs. While some have greeted this uptake of AI-supported applications in the education sector with enthusiastic techno-optimism stressing the revolutionary powers of technology, others have been concerned about the potential harms such applications might do to students' and teachers' agency, privacy, and rights.
In my talk I will map currently ongoing debates on the use of AI in the education sector to highlight the main enthusiasms and concerns these debates have triggered, both in the scholarly community and in the international news media. The talk is based on findings from several empirical case studies: 1) the uptake of proctoring software by higher education institutions to combat academic dishonesty; 2) the use of student online-activity monitoring software by secondary schools to ease concerns about students' wellbeing; 3) the algorithmic grading fiasco with the A-level exam results in the UK; and 4) the public launch of ChatGPT.
"An Agent-based modelling approach to finding risks and opportunities for deliberative communication in European media landscapes"
Jakobi 2 – 103 and Zoom
This presentation will focus on the application of computational techniques to different media scenarios and the systems they are set in. We will apply the agent-oriented modelling methodology, focusing on media in European nations and how it has changed in recent decades. This innovative approach allows us to model not only the structure of the systems but also the various agents, their behaviours, and their interactions.
We will discuss hypothetical scenarios that are currently being developed, e.g., the impact of changes in the ability to monitor media standards. Simulations of these scenarios will be implemented in the NetLogo software package, and they may identify major risks and opportunities related to, for example, that capacity to monitor.
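To give a flavour of the agent-based approach, here is a minimal sketch in Python of the kind of scenario described above (the actual MEDIADELCOM simulations are built in NetLogo). All names and parameters here are hypothetical, purely for illustration: agents consume media items whose reliability depends on a "monitoring strength" parameter, and their trust in the media system drifts accordingly.

```python
import random

class Citizen:
    """A media consumer whose trust in news shifts with what they encounter."""
    def __init__(self):
        self.trust = 0.5  # trust in the media system, between 0 and 1

    def consume(self, item_is_reliable):
        # Trust drifts up after reliable items, down after unreliable ones.
        self.trust += 0.05 if item_is_reliable else -0.05
        self.trust = min(1.0, max(0.0, self.trust))

def run_scenario(monitoring_strength, n_agents=200, steps=100, seed=42):
    """Mean trust after `steps` rounds; in this toy model, stronger
    monitoring simply raises the share of reliable items in circulation."""
    rng = random.Random(seed)
    agents = [Citizen() for _ in range(n_agents)]
    reliable_share = 0.4 + 0.5 * monitoring_strength  # an assumption of the sketch
    for _ in range(steps):
        for agent in agents:
            agent.consume(rng.random() < reliable_share)
    return sum(a.trust for a in agents) / n_agents

weak = run_scenario(0.1)
strong = run_scenario(0.9)
print(f"mean trust, weak monitoring:   {weak:.2f}")
print(f"mean trust, strong monitoring: {strong:.2f}")
```

Comparing the two runs shows how a single policy-relevant parameter can shift a system-level outcome, which is the kind of risk/opportunity comparison such simulations support.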
This research is part of the MEDIADELCOM project, which is developing a diagnostic tool aimed at policy makers, educators, media critical bodies and institutions, as well as for media experts and journalists.
I am an interdisciplinary researcher spanning the sciences and the humanities. My current research is based in the Institute of Computer Science in association with the Institute of Social Science. Of particular interest to me is the application of quantitative and computational methods to research questions that involve people. My PhD examined historical price dynamics in Europe between 1500 and 1800 CE (Department of History, Trinity College Dublin, 2021). Previously, I spent over a decade as an analyst in financial services. Before that, I completed a research master's degree in pure mathematics (Department of Mathematics, Trinity College Dublin, 2005).
"Strengths and Pitfalls of Large-Scale Text Mining for DH"
In this talk, I will discuss how we can use digital methods to generate sustainable knowledge in the humanities. I will give an overview of the data-intensive research methodology and discuss how methods, results, and data relate to each other and must be evaluated as parts of a whole: there is no such thing as a good method, nor is there a way to know whether the results are good, without considering the data. I will discuss results as a window through which we can see our data, and how we can reason about the results of digital methods. Finally, I will present the Change is Key! research program and describe our efforts to connect computational research with research questions from the humanities and social sciences.
Related reading here.
Nina Tahmasebi is an associate professor in Natural Language Processing (NLP) at the University of Gothenburg. She studies lexical semantic change from a computational perspective, developing theory, methods, evaluation techniques, and resources. She also works with text mining and AI for digital humanities, both practically and with regard to epistemological questions arising from a data-intensive research methodology: how text mining can be used to generate stable knowledge in text-based humanities and social sciences. She leads Change is Key!, a six-year research program on computational semantic change for the humanities and social sciences, funded by Riksbankens Jubileumsfond with 33.5 million SEK and starting in 2022.
"Literary and linguistic computing: from authorship attribution to assessing language change"
The presentation will focus on computer-assisted text analysis, understood as measuring textual similarities with statistical techniques. The talk will start with a concise introduction to authorship attribution, followed by a discussion of how attribution techniques can be extended to assess stylistic variation in (large) collections of texts. Since one of the core ideas in attribution is to identify the works that are stylistically (whatever that really means) most similar to the anonymous sample in question, the same idea can easily be adapted to assess any text collection. Instead of looking for authorship, however, such an analysis aims to trace other stylometric signals: genre, gender, chronology, intertextuality, and so forth.
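A classic way to measure the stylistic similarity the abstract refers to is Burrows's Delta: texts are profiled by the relative frequencies of very common function words, those frequencies are z-scored across the corpus, and the candidate with the smallest mean absolute z-score difference from the disputed text is the stylistically closest. The following is only a toy sketch with an invented three-text "corpus"; real stylometric work (e.g. with the 'Stylo' package) uses hundreds of marker words and much longer texts.

```python
from collections import Counter
from statistics import mean, pstdev

def relative_freqs(text, vocab):
    """Relative frequency of each marker word in one text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    return [counts[w] / len(tokens) for w in vocab]

def burrows_delta(disputed, candidates, vocab):
    """Burrows's Delta: z-score each marker word's frequency across the
    corpus, then score each candidate by the mean absolute difference of
    z-scores. A lower Delta means a stylistically closer candidate."""
    profiles = {name: relative_freqs(t, vocab) for name, t in candidates.items()}
    disputed_profile = relative_freqs(disputed, vocab)
    corpus = list(profiles.values()) + [disputed_profile]
    mus = [mean(col) for col in zip(*corpus)]
    sds = [pstdev(col) or 1.0 for col in zip(*corpus)]  # avoid division by zero

    def z(vec):
        return [(v - m) / s for v, m, s in zip(vec, mus, sds)]

    dz = z(disputed_profile)
    return {name: mean(abs(a - b) for a, b in zip(z(p), dz))
            for name, p in profiles.items()}

# Toy example: function words are the classic stylometric markers.
vocab = ["the", "of", "and", "to", "in"]
candidates = {
    "author_A": "the cat sat on the mat and the dog lay in the sun of the yard",
    "author_B": "to be or not to be that is the question to ponder",
}
disputed = "the bird flew over the fence and into the garden of the house"
scores = burrows_delta(disputed, candidates, vocab)
print(min(scores, key=scores.get))  # the stylistically closest candidate
```

The same machinery, applied with labels other than authorship, is what lets the distances trace genre, gender, or chronology instead.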
Prof. Maciej Eder is the director of the Institute of Polish Language (Polish Academy of Sciences), chair of the Committee of Linguistics at the Polish Academy of Sciences, principal investigator of the project Computational Literary Studies Infrastructure, co-founder of the Computational Stylistics Group, and the main developer of the R package ‘Stylo’ for performing stylometric analyses. He is interested in European literature of the Renaissance and the Baroque, classical heritage in early modern literature, and quantitative approaches to style variation. These include measuring style using statistical methods, authorship attribution based on quantitative measures, as well as “distant reading” methods to analyze dozens (or hundreds) of literary works at a time.
"Hexameters, genres and geniuses: formal approaches to the relationship between poetic meter and meaning in European verse"
Jakobi 2 – 427 and Zoom
Recent advances in cultural analytics and computational studies of art, literature and film often show that long-term change in artistic works happens gradually. These findings suggest that the conservative forces shaping creative domains are often underestimated. The talk will demonstrate how this conservative retention of features happens in European poetic traditions (Czech, Dutch, English, German, Russian) by computationally addressing the problem of the "semantic halo of meter": a theorized relationship between a poetic form and its meaning. Using topic modeling and a series of clustering and classification analyses, it is possible to trace the effect of the semantic halo across all the accentual-syllabic traditions in question. The talk will focus on the potential mechanisms behind the observed effect and use simple agent-based models to show how cultural transmission, symbolic inequality and structural differences might be responsible for the long-lasting power of poetic forms to shape meaning and retain genre memory for hundreds of years.
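To illustrate the kind of mechanism such agent-based models probe (this is my own toy sketch, not a model from the talk), consider a population of poets who each link a given meter to one of two themes, with a small prestigious elite that is copied far more often than anyone else. Symbolic inequality of this sort is enough to keep a meter-theme association stable across many generations.

```python
import random

def simulate(generations=200, n_elite=10, n_common=90,
             prestige_weight=10, seed=1):
    """Each agent links a fixed meter to one of two themes (0 or 1).
    A small prestigious elite always models theme 1 and carries a much
    higher copying weight (symbolic inequality); common poets re-sample
    their link each generation from the weighted population."""
    rng = random.Random(seed)
    elite = [1] * n_elite                       # elites never change their link
    common = [rng.randint(0, 1) for _ in range(n_common)]
    history = []
    for _ in range(generations):
        pool = elite + common
        weights = [prestige_weight] * n_elite + [1] * n_common
        common = [rng.choices(pool, weights=weights)[0] for _ in range(n_common)]
        history.append((sum(elite) + sum(common)) / (n_elite + n_common))
    return history

trajectory = simulate()
print(f"share of poets keeping the elite meter-theme link: {trajectory[-1]:.2f}")
```

In this sketch the elite's association tends to spread through biased copying rather than die out, which is one candidate explanation for why poetic forms retain "genre memory" for so long.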
Artjoms Šeļa is currently doing postdoctoral research at the Methodology department of the Institute of Polish Language (PAN, Kraków) and is a research fellow at the University of Tartu. He holds a PhD in Russian Literature and uses computational methods to understand historical change in literature and culture. His main research interests include stylometry, verse studies, and cultural evolution. Sometimes he forays into digital preservation and the history of quantitative methods in the humanities.
"Lessons Learned From Running The First Digital Scholarship Lab at a National Library"
Jakobi 2 – 427 and Zoom
Mahendra Mahey will tell the story of setting up, running and maintaining the first experimental Digital Scholarship Lab at a National Library anywhere in the world, British Library Labs. He will highlight some of the hundreds of pioneering and innovative projects that were developed by scholars, artists, educators, entrepreneurs and communities that reused and remixed the British Library’s digitised cultural heritage collections and data.
He will reflect on the lessons learned on experimenting with digitised collections through competitions, awards, projects and exhibitions at the British Library and on the activities of an international community of professionals working in Galleries, Libraries, Archives and Museums’ Labs (or GLAM Labs).
Mahendra will shed light on some of the myths and assumptions many make about cultural heritage organisations and will address some of the significant issues and challenges faced when working with digital collections and data (e.g. legal, technical, human etc.).
The session will end with a look at the idea of ‘Cultural Remix’ followed by questions and a discussion.
Mahendra has over 30 years’ experience working in education, as an educator, adviser, manager, researcher and community builder. Since March 2022, he has been working as Senior Adviser in Research and Development at Tallinn University helping academics develop funded projects.
From 2013 to 2021, he built and managed the pioneering British Library Labs, which encouraged people to experiment with the British Library’s digitised cultural heritage data (such as digitised books, newspapers, maps, sheet music, manuscripts, audio/TV recordings as well as born digital archived websites, personal digital archives, electronic books, radio, performances and artworks). He established and expanded the global Galleries, Libraries, Archives, and Museums (GLAM Labs) network.
Previously, he has built several international communities in academia within software development and educational technology. Mahendra has expertise in research management information systems, research data, educational technology, teaching social sciences, and communicating to various groups by creating innovative meeting spaces and events, both physically and online.
Mahendra occasionally tweets at @mahendra_mahey.