The Speech Prosody SIG Lecture Series, a new initiative of the Speech Prosody SIG Officers, aims to (1) offer the Speech Prosody community a comprehensive view of themes and methods in speech prosody; (2) introduce new perspectives and foster debate; (3) stimulate collaborations among speech prosody researchers, including by making the community aware of public repositories of data, corpora, joint projects seeking collaborators, and other resources that can be freely shared. Lectures will be presented live on YouTube, with Q&A handled through YouTube's chat feature.



The speech synthesis phoneticians need is both realistic and controllable: A survey and a roadmap towards modern synthesis tools for phonetics.
Zofia Malisz, KTH Royal Institute of Technology.
April 17th, 2 pm (Brasilia time). viewing link

ABSTRACT
In the last decade, data- and machine-learning-driven methods for speech synthesis have greatly improved its quality, so much so that the realism achievable by current neural synthesisers can rival natural speech. However, modern neural synthesis methods have not yet been adopted as tools for experimentation in the speech and language sciences. This is because modern systems still lack the ability to manipulate low-level acoustic characteristics of the signal, such as formant frequencies.
In this talk, I survey recent advances in speech synthesis and discuss their potential as experimental tools for phonetic research. I argue that speech scientists and speech engineers would benefit from working more closely with each other again, in particular in the pursuit of prosodic and acoustic parameter control in neural speech synthesis. I showcase several approaches to fine-grained synthesis control that I have implemented with colleagues: WavebenderGAN and a system that mimics the source-filter model of speech production. These systems make it possible to manipulate formant frequencies and other acoustic parameters with the same or better accuracy than tools such as Praat, but with far superior signal quality.
Finally, I discuss ways to improve synthesis evaluation paradigms so that the benchmarks of speech science experimentation, not only those of industry, are met. My hope is to inspire more students and researchers to take up these research challenges and explore the potential of working at the intersection of speech technology and speech science.

Outline:
1. I briefly discuss the history of advances in speech synthesis, starting with the formant synthesis era, and explain where the improvements came from.
2. I present experiments of mine showing that modern synthetic speech is processed no differently from natural speech by humans in a lexical decision task, as evidence that the realism ("naturalness") goal has largely been achieved.
3. I explain how realism came at the expense of controllability, and show why controllability is an indispensable feature if speech synthesis is to be adopted in phonetic experimentation. I survey the current state of research on controllability in speech engineering, concentrating on prosodic and formant control.
4. I propose how we can fix this by explaining the work I have done with colleagues on several systems that feature both realism and control.
5. I sketch a roadmap for improving synthesis tools for phonetics, with a focus on benchmarking systems against scientific criteria.

  • TBD. Gabriel Skantze, KTH, May 15.
  • TBD. Simon Roessig, York, September.
  • TBD. Sam Tilsen, October.
  • TBD. Sasha Calhoun, November.
  • TBD. Robert Xu, December.

    Archived Lectures

    Tackling prosodic phenomena at their roots

    Professor Yi Xu, University College London. October 25, 2023.

    Abstract: Rather than being a coherent whole, speech prosody consists of highly diverse phenomena that are best understood in terms of their communicative functions, together with specific mechanisms of articulatory encoding and perceptual decoding. The understanding of these root causes is therefore key to further advances in prosody research.

    archived talk at YouTube and at bilibili


    Segmental Articulations and Prosody

    Malin Svensson Lundmark, Lund University, November 23rd, 2023

    Abstract: This lecture addresses an aspect of the articulatory-acoustic relationship that is rarely discussed but is both stable and robust across, e.g., places of articulation, tonal contexts, and prosodic levels: the acceleration and deceleration of articulatory movements and how they coincide with acoustic segment boundaries.

    archived talk at YouTube and at bilibili


    How to handle variability in the study of intonation

    Amalia Arvaniti, Radboud University, Netherlands. December 14, 2023, 1-2 pm (Brasilia time, UTC-3)

    Abstract: This talk will give an overview of the issue of variability in intonation and present methodological approaches that make variability easier to handle. These methodologies are illustrated through a case study of the English pitch accents H* and L+H*, which are treated as distinct phonological entities in some accounts but as endpoints of a continuum in others. The research presented sheds light on the reasons for the disagreement between analyses, and on the discrepancies between analyses and empirical evidence, by examining both production data from British English unscripted speech and perceptual data; the latter also link the processing of the two accents to participants' levels of empathy, musicality, and autistic-like traits.

    archived talk at YouTube and at bilibili


    Host: Plinio A. Barbosa, University of Campinas, Brazil