Call for Abstracts: LLM fails – Failed experiments with Generative AI and what we can learn from them

**automatic English translation below**

Workshop on 8 and 9 April 2025 at the Leibniz-Institut für Deutsche Sprache, Mannheim

Visit our workshop website to stay up to date.

Organizers: Annelen Brunner, Christian Lang, Ngoc Duyen Tanja Tu

Failed experiments usually find no place in scientific discourse; they are discarded and not published. We believe that potential insights are lost as a result. After all, systematically reflecting on the reasons for failure makes it possible to question and/or improve the methods that were used.

[...]

Source: https://dhd-blog.org/?p=21632

Read more

2 PhD positions in computational/sociolinguistics at the University of Luxembourg

Dear colleagues,

As part of a new research project (TRAVOLTA – Tracing Attitudes and Variation in Online Luxembourgish Text Archives), we currently have two PhD positions to fill for candidates with a background in NLP, computational linguistics, sociolinguistics or variationist linguistics. The candidates will become members of the Department of Humanities of the University of Luxembourg.

The project will be the first to trace the development of Luxembourgish into a fully-fledged written language on the basis of data from the news portal RTL.lu. We will develop new tools and pipelines for NLP purposes and use them to investigate variation and social positioning in written texts.

[...]

Source: https://dhd-blog.org/?p=18695

Read more

2 postdoc positions at the Culture & Computation Lab (University of Luxembourg)

The Department of Humanities at the University of Luxembourg is currently advertising two postdoctoral positions in the area of Cultural Data Science. The positions are based at the newly established Culture & Computation Lab, a transversal initiative of all institutes of the department, and are initially awarded for 5 years (3+2).

The goal of the lab is a comprehensive analysis of the complex interactions between culture and digitality from a humanities perspective. In addition to further developing and teaching approaches from the Digital Humanities and computer science, the work also centres on the key role of the humanities in critically questioning current technical developments. More information about the lab can be found at cucolab.uni.lu.

Position 1)

  • Postdoctoral researcher in the area of Cultural Data Science


  • [...]

Source: https://dhd-blog.org/?p=17658

Read more

Job advertisement: Student assistant (m/f/d): Data analysis on the discourse about the regulation of the digital public sphere (location: Leipzig), application deadline: 22 August 2021

For a cooperation project between the University of Bremen and the German National Library (DNB) on the discourse about the regulation of the digital public sphere on German IT blogs and in the print media, we are looking, at the Leipzig site, for a

student assistant.

The core task is supporting our analysis of various digital text corpora. For this, knowledge of natural language processing (NLP) and experience with Python are important prerequisites.

Tasks:
• Supporting the analysis of digital text corpora at the Leipzig site
• Preparing the data for analysis with various methods: machine learning models, topic modelling, network analysis, statistical evaluation, data visualization
• Content research on questions of internet policy

We offer:
• a stimulating and inspiring working environment
• the opportunity to contribute to a current and socially highly relevant research question
• the opportunity to apply and deepen your academic skills and competencies

Requirements:
• Availability to work once a week at the DNB in Leipzig.

[...]

Source: https://dhd-blog.org/?p=16355

Read more

PhD Position Digital Humanities and Environmental Risks (m/w/d)

The UFZ will establish a Graduate School within its Research Unit “Environment and Society” on the topic “Thirsty Cities: Pathways for Water-Resilient Urban Transformation and Agricultural Adaptation.” The Graduate School consists of committed PhD researchers who will participate in a structured training program that facilitates integration and exchange among the different projects involved as well as within the larger UFZ research context. The projects will investigate possible pathways to make cities and adjacent rural areas more resilient to future water scarcity. Within this framework, the Department of Urban and Environmental Sociology is now offering a:

PhD Position Digital Humanities and Environmental Risks

Part-time 65% (25.35 h per week), limited to 3 years

Your tasks:

The candidate will assess water-scarcity-related impacts at the urban and rural scales in Germany and the policy responses adopted. To this end, text-based data (e.g. policy documents, newspapers, regulations) will be used. In addition, the PhD candidate will develop a new mixed-methods approach for assessing future scenarios of how stakeholders prioritize different mitigation and adaptation measures.

[...]

Source: https://dhd-blog.org/?p=14773

Read more

Final CfP: Workshop Teach4DH – Teaching NLP for Digital Humanities, 12.09.2017, Berlin

The workshop addresses both computational linguists and researchers in the Digital Humanities who teach DH modules, and NLP in particular. The workshop alternates between talks and discussions in order to exchange experiences, discuss best practices, present teaching concepts and demonstrate existing technologies. It also provides a forum for addressing requirements and support for future developments of DH curricula towards computational linguistics. The workshop is intended to foster cooperation and to cross-fertilize knowledge and approaches across the DH.

Teach4DH is co-organized by the GSCL SIG Education and Profession and takes place together with GSCL 2017.

Further information: see below; detailed information is also available at: https://teach4dh.github.io/cfp.

[...]

Source: http://dhd-blog.org/?p=8249

Read more

CfP: Workshop Teach4DH – Teaching NLP for Digital Humanities, 12.09.2017, Berlin

The workshop addresses both computational linguists and researchers in the Digital Humanities who teach DH modules, and NLP in particular. The workshop alternates between talks and discussions in order to exchange experiences, discuss best practices, present teaching concepts and demonstrate existing technologies. It also provides a forum for addressing requirements and support for future developments of DH curricula towards computational linguistics. The workshop is intended to foster cooperation and to cross-fertilize knowledge and approaches across the DH.

Teach4DH is co-organized by the GSCL SIG Education and Profession and takes place together with GSCL 2017.

Further information: see below; detailed information is also available at: https://teach4dh.github.io/cfp.

[...]

Source: http://dhd-blog.org/?p=7932

Read more

Explore, play, analyse your corpus with TXM

A short introduction to TXM by José Calvo and Silvia Gutiérrez

 

On February 6-7, 2014, the Department for Literary Computing, Würzburg University, organized a DARIAH-DE workshop called “Introduction to the TXM Content Analysis Platform”. The workshop leader was Serge Heiden (ENS-Lyon), who is in charge of conceptualizing and implementing TXM at the ICAR Laboratory in France.

The workshop included a brief explanation of TXM’s background, but it concentrated on a very practical approach. We learned about the “Corpora options” (that is, what you can know about your corpus: POS descriptions, text navigation), but also what you can do with it: find Key Words In Context (KWIC), retrieve parts of speech, and analyse these results further by querying for the most frequent words or for cooccurrences.

In the evening of day one, we got an overview of the state of the art of “Natural Language Processing for Historical Texts” in a keynote by Michael Piotrowski (IEG Mainz). He started by defining historical texts as all those texts that bring major problems to NLP. To clarify this definition, Dr. Piotrowski listed some of the greatest difficulties:

  • Medium and integrity: we have to remember that in order to analyse an old script that was written on clay tablets or marble, it is necessary first to find a way to transfer this information into a digital format (not an easy task); plus, some texts are defective or unclear, and transcriptions may introduce new errors
  • Language, writing system and spelling: many historical texts were written in extinct languages or in variants different from today’s; as for the writing system, the many abbreviation forms and the variety of typefaces are more or less problematic; finally, we should not forget the little problem of non-standardized spelling!
  • State of the art: historical languages are less-resourced languages, there are few texts available, and NLP for historical languages is carried out in specific projects; that is, there are no common standards and everyone has to start from zero.

Not to discourage his audience, he then offered an overview of what can be done: part-of-speech tagging. Creating a tagger for a historical language can be done with the following methods (a minimal code sketch of the general idea follows the list):

  1. From scratch: manually annotating your text
  2. Using a modern tagger and manually correcting all errors
  3. Modernizing spelling
  4. Bootstrapping a POS tagger (with many versions of the same text, like the Bible)
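As a rough illustration of the mechanics behind method 2 (start from existing annotated data or an existing tagger and correct from there), here is a minimal sketch using NLTK and the modern Brown corpus as a stand-in for a historical corpus; it is only an illustration, not part of Piotrowski's talk:

    # Minimal sketch: train a simple POS tagger on existing annotated data,
    # here NLTK's copy of the (modern) Brown corpus as a stand-in for a
    # manually annotated historical corpus. Illustration only.
    import nltk
    from nltk.corpus import brown

    nltk.download("brown", quiet=True)

    tagged = brown.tagged_sents(categories="news")
    train, test = tagged[:4000], tagged[4000:]

    # Unigram tagger with a default-tag fallback for unseen word forms
    tagger = nltk.UnigramTagger(train, backoff=nltk.DefaultTagger("NN"))

    print(tagger.accuracy(test))   # use .evaluate(test) on older NLTK versions
    print(tagger.tag("The present study is presented here .".split()))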

Now let’s get back to the TXM workshop. In this post, you will find a brief practical introduction to this tool. We will give you a rough idea of what this software is about and what you can do with it. If you would like to learn more, do check the links we have shared towards the end of this post. By the way, all words marked with a little * are explained at the end, in the “Vocabulary” section.

What is TXM?

This software is at the juncture of linguistics and scholarly editing, and it is made to help scholars analyse the content of any kind of digital text (Unicode-encoded raw texts or XML/TEI-tagged texts).

To learn more about the TXM background, don’t miss Serge Heiden’s workshop slides (see the TXM background link in the “Useful links” section at the end of this post).

Where can I work with it?

You may work on the desktop (download page) or online version of the tool. Both platforms have advantages and disadvantages. The online version allows you to start the work without downloading or installing anything, and share your corpora with other colleagues. With the desktop version, you can easily lemmatize and analyse the Parts of Speech (POS*) of your own texts.

So that you can get a better idea of the way it works, we’ll guide you through some practical examples. Say you want to search for the lemma politics in the Brown Corpus*. First you have to open the Index option:

[Screenshot: opening the Index option]

Then you use the query box to type in the query, using the following structure from the CQL* query language: [enlemma="politics"]. In the desktop version, the results will look as follows (the web version is very similar):

[Screenshot: KWIC results for the politics query in the desktop version]
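Outside TXM, a comparable KWIC lookup can be sketched in a few lines of Python with NLTK’s copy of the Brown corpus; note that this searches the word form politics rather than the lemma, and is meant as an illustration, not a description of TXM’s internals:

    # Sketch: a KWIC-style concordance for "politics" in the Brown corpus,
    # a rough analogue of the TXM Index/KWIC query shown above.
    import nltk
    from nltk.corpus import brown

    nltk.download("brown", quiet=True)

    text = nltk.Text(brown.words())
    text.concordance("politics", width=80, lines=10)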

What can I do with TXM?

Explore your corpus

Corpora options

In the first column of both interfaces there is a list of the corpora you can work with (in this case DISCOURS, GRAAL, BROWN). When you right-click one of your corpora, you will see a list of icons:

[Screenshot: the TXM tool icons]
These are the main tools of TXM and you will use one of these to analyse your corpus in different ways.

Corpus description (Dimensions)

Before you start with the fun, you should click the “Dimensions” option and have a look at some general information about the corpus (number of words, properties and structural units, as well as the properties of the lexical and structural units). This information is richer in the desktop version:

[Screenshot: corpus description (Dimensions) in the desktop version]

Text navigation

A very practical TXM feature is the text display. If you wish to open a list of the corpus’ elements, you just have to click on the book icon (called “Texts” in the online version and “Open edition” in the desktop version). A list like the following will be shown:

[Screenshot: list of the corpus’ texts]

Moreover, if you click on the book icon in the “edition” column, TXM will open a readable version of your text:

[Screenshot: readable edition of a text]

Play with your corpus

Key Words In Context (KWIC)

A very typical visualization of a corpus is the so-called KWIC view, which you have already seen displayed in the politics lemma example.

With TXM you can sort the results by different criteria, organizing them according to the right or left context of your word, the word form, etc.; besides, you can choose which elements you want to visualize. Say you’re searching for collocations of present as an adjective and NOT the data related to the noun or to the verb form (to present). First of all you need to go to the Index.

Once you open it, you can set the options in the “Keyword” column to visualize the grammatical category along with the word form. Then you type “JJ_present”, where “JJ” means “adjective” and “present” is the word form, so that only those instances of the graphical form present are selected which are adjectives. It is also possible to order this data by different criteria.

As you can see in the next screenshot, you are looking for the lemma present. Therefore, you should set the first “Sort keys” menu to “Left context” and the second one to “Keyword”; what you are telling the software is that you want all the examples sorted by the left context as the first criterion and by the keyword as the second. In the “Keyword” > “View” menu we have set “enpos, word”. With that we are telling TXM to show us not just the word form, but also the POS. That is why we see the keywords as “VVN_present” (that is, present as a verb form) or “JJ_present” (present as an adjective):

[Screenshot: Index results sorted by left context and keyword]

Parts of Speech

Another way to display specific words according to their POS is the Index tool (A|Z icon), from a lexicologist’s point of view one of the most interesting options of TXM. If you search again for the lemma present and, in the properties box, choose to see not only the word form but also the POS, TXM will show you the frequency, word form and POS of each different word form found in the corpus:

[Screenshot: Index results showing frequency, word form and POS]

If you only want the word forms of the verb to present, you can add the POS information to the query: [enlemma="present" & enpos="VV.*"]

The Index tool can also be used to create lists of n-grams. Let’s search for the most frequent words that appear after the lemma present:

[Screenshot: n-gram list of words following the lemma present]
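For comparison, here is a small Python sketch of the same idea outside TXM: counting which words most often follow a given word form (not the lemma) in the Brown corpus. It is only a rough approximation of what the Index tool does:

    # Sketch: most frequent words that follow the word form "present" in the
    # Brown corpus -- a rough analogue of the n-gram list built with the Index.
    from collections import Counter
    import nltk
    from nltk import bigrams
    from nltk.corpus import brown

    nltk.download("brown", quiet=True)

    following = Counter(
        w2.lower()
        for w1, w2 in bigrams(brown.words())
        if w1.lower() == "present"
    )
    print(following.most_common(10))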

Quantitative analysis

Most Frequent Words

To query something you need to have a specific question and know some basic information, for instance: in which language is the corpus? A way to get a general idea of the texts is the Lexicon option, the icon with A and Z on a white background. When you click on it, you will see a list of the most frequent word forms:

[Screenshot: list of the most frequent word forms]

 

You can change the settings of the query and ask it to count not the word forms but the lemmas. In that case the verb to be climbs up some positions, now that is, are, were, been, etc. count as one single unit:

[Screenshot: list of the most frequent lemmas]
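A comparable count can be sketched outside TXM with Python; the lemmatization step below (WordNet, verbs only) is a crude stand-in for TXM’s lemma property, just to show why collapsing is/are/were/been into be changes the ranking:

    # Sketch: most frequent word forms in the Brown corpus, then the same count
    # after a crude verb lemmatization, which merges is/are/were/been into "be".
    from collections import Counter
    import nltk
    from nltk.corpus import brown
    from nltk.stem import WordNetLemmatizer

    nltk.download("brown", quiet=True)
    nltk.download("wordnet", quiet=True)

    words = [w.lower() for w in brown.words() if w.isalpha()]
    print(Counter(words).most_common(10))

    lemmatizer = WordNetLemmatizer()
    lemmas = [lemmatizer.lemmatize(w, pos="v") for w in words]
    print(Counter(lemmas).most_common(10))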

 

Cooccurrences

Another quantitative analysis concerns cooccurrences, that is, the words (or other units) that frequently appear close to a specific word (or other unit). Unlike n-grams, cooccurrences do not have to appear exactly after or before the unit; they just have to be somewhere close to it.

The Brown corpus was compiled in the 1960s in the United States, during the main years of the Cold War. So let’s see which vocabulary is related to the words United States and which to Soviet Union:

[Screenshot: cooccurrences of United States and Soviet Union]
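As a rough outside-of-TXM sketch, window-based cooccurrence counts can be computed as below; real cooccurrence tools rank context words with association measures rather than raw counts, and the single-word target soviet is used here only because handling the two-word unit Soviet Union would complicate the example:

    # Sketch: words occurring within a +/- 5-word window of a target word in
    # the Brown corpus -- a crude analogue of TXM's cooccurrence analysis.
    from collections import Counter
    import nltk
    from nltk.corpus import brown

    nltk.download("brown", quiet=True)

    def cooccurrences(target, window=5):
        words = [w.lower() for w in brown.words()]
        counts = Counter()
        for i, w in enumerate(words):
            if w == target:
                counts.update(words[max(0, i - window):i])   # left context
                counts.update(words[i + 1:i + 1 + window])   # right context
        return counts

    print(cooccurrences("soviet").most_common(15))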

Progression

Another statistical option that exists in the desktop version is the Progression (icon with an arrow). This option helps visualize how many times a unit appears in a corpus or a text. This can be interesting for seeing the progress of a word between two dates or its development across the different parts of a text.

For the next example, the text of Bram Stoker’s novel Dracula was imported (the version used is from the University of Adelaide). With the information about the chapters kept in XML elements, you can look for the names of the main characters and see how many times and where they appear. The next screenshot shows the complete query:

[Screenshot: Progression query for the character names]

To understand the next graphic, you have to keep in mind that if a line ascends, the name has been mentioned; if the line continues horizontally, the name does not appear any more.

 

[Screenshot: Progression graph for Dracula, Lucy and Van Helsing]

 

As you can see, Count Dracula (yellow) is the most mentioned name in the first four chapters, but he almost disappears towards the 17th chapter. In this gap, Lucy (blue) becomes the main character and, from the 9th chapter, Professor Van Helsing (red) takes the “leading” role. It is also remarkable that this last character is not only the most frequent, but also the most stable.
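The same kind of progression can be approximated outside TXM with a few lines of Python; dracula.txt below is a hypothetical local plain-text copy of the novel, and the chapter split simply assumes headings that contain the word CHAPTER:

    # Sketch: cumulative mentions of three character names per chapter in a
    # plain-text Dracula file -- a rough analogue of TXM's Progression graph.
    # "dracula.txt" is a hypothetical local copy of the novel.
    import re

    with open("dracula.txt", encoding="utf-8") as f:
        chapters = re.split(r"\bCHAPTER\b", f.read())[1:]

    for name in ("Dracula", "Lucy", "Helsing"):
        per_chapter = [len(re.findall(name, ch)) for ch in chapters]
        cumulative = [sum(per_chapter[:i + 1]) for i in range(len(per_chapter))]
        print(name, cumulative)   # rising values mirror the ascending curve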

Sub-corpora and partitions

You can divide your corpus in two ways: sub-corpora and partitions. With a sub-corpus you can choose some texts from a corpus and work with them. With a partition, you can split the corpus into more than one part and easily compare the results of the different parts. In the next screenshot, you can see the menu where a partition called “Fiction and Press partition” is being created, using the XML element “text” and its property “type” to choose which kind of text is wanted. This partition will have two parts, one called “Fiction” and the other called “Press”, and each of them will contain the respective type of texts.

[Screenshot: creating the “Fiction and Press partition”]
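A very rough analogue of such a partition outside TXM is to compare relative frequencies across the Brown corpus categories; the category names news and fiction below are NLTK’s Brown categories, not the parts of the partition shown in the screenshot:

    # Sketch: compare the relative frequency of a word across two Brown corpus
    # categories, as a crude stand-in for comparing the parts of a partition.
    from collections import Counter
    import nltk
    from nltk.corpus import brown

    nltk.download("brown", quiet=True)

    def rel_freq(category, word):
        words = [w.lower() for w in brown.words(categories=category)]
        return Counter(words)[word] / len(words)

    for word in ("government", "love"):
        print(word, rel_freq("news", word), rel_freq("fiction", word))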

Useful links

“A gentle introduction to TXM key concepts in 90 minutes” by Serge Heiden: http://sourceforge.net/projects/txm/files/documentation/IQLA-GIAT%202013%20TXM-workshop.pdf/download

Tutorial video introducing TXM 0.4.6 (warning: the software, especially its interface, is now very different): http://textometrie.ens-lyon.fr/IMG/html/intro-discours.htm

TXM background http://fr.slideshare.net/slheiden/txm-background

TXM import process http://fr.slideshare.net/slheiden/txm-import-process

Vocabulary

 

Brown Corpus

The Brown corpus consists of 500 English-language texts, with roughly one million words, compiled from works published in the United States in 1961. You can learn more about it here.

CQL

TXM uses an underlying Contextual Query Language, which is a formal system for representing queries to information retrieval systems such as web indexes, bibliographic catalogues and museum collection information. More information on the official web page: http://www.loc.gov/standards/sru/cql/

 

POS

Here is a useful alphabetical list of part-of-speech tags used in the Penn Treebank Project (tag and description): https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html

Source: http://dhd-blog.org/?p=3384

Read more