%0 Conference Proceedings
%T Automatic Segmentation of Sign Language into Subtitle-Units
%+ Laboratoire Interdisciplinaire des Sciences du Numérique (LISN)
%+ Architectures et Modèles pour l'Interaction (AMI)
%+ Information, Langue Ecrite et Signée (ILES)
%A Bull, Hannah
%A Gouiffès, Michèle
%A Braffort, Annelies
%< peer-reviewed
%( Computer Vision – ECCV 2020 Workshops
%B Sign Language Recognition, Translation & Production workshop
%C Glasgow (virtual), United Kingdom
%8 2020-08-23
%D 2020
%R 10.1007/978-3-030-66096-3_14
%K Sign Language
%K Segmentation
%K Sentence
%K Subtitle
%K Graph Neural Network
%K Skeleton Keypoints
%Z Computer Science [cs]/Artificial Intelligence [cs.AI]
%Z Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
%X We present baseline results for a new task of automatic segmentation of Sign Language video into sentence-like units. We use a corpus of natural Sign Language video with accurately aligned subtitles to train a spatio-temporal graph convolutional network with a BiLSTM on 2D skeleton data to automatically detect the temporal boundaries of subtitles. In doing so, we segment Sign Language video into subtitle-units that can be translated into phrases in a written language. We achieve a ROC-AUC statistic of 0.87 at the frame level and 92% label accuracy within a time margin of 0.6s of the true labels.
%G English
%2 https://hal.science/hal-03098684/document
%2 https://hal.science/hal-03098684/file/nf7881.pdf
%L hal-03098684
%U https://hal.science/hal-03098684
%~ CNRS
%~ INRIA
%~ CENTRALESUPELEC
%~ UNIV-PARIS-SACLAY
%~ INRIA-AUT
%~ TEST-HALCNRS
%~ UNIVERSITE-PARIS-SACLAY
%~ LISN
%~ GS-ENGINEERING
%~ GS-COMPUTER-SCIENCE
%~ LISN-AMI
%~ LISN-ILES
%~ LISN-STL
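
The abstract describes the architecture only at a high level: a spatio-temporal graph convolution over 2D skeleton keypoints, followed by a BiLSTM that produces per-frame subtitle-boundary scores. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: the joint count, layer widths, temporal kernel size, uniform adjacency matrix, and joint-pooling step are all assumptions made for the example.

```python
# Minimal sketch (NOT the paper's code) of an ST-GCN + BiLSTM frame-level
# classifier over 2D skeleton keypoints, as outlined in the abstract.
# Joint count, layer sizes, and the adjacency matrix are assumptions.

import torch
import torch.nn as nn

NUM_JOINTS = 25    # assumed skeleton size (e.g. OpenPose-style body keypoints)
IN_CHANNELS = 2    # (x, y) coordinates per joint


class STGCNBlock(nn.Module):
    """One spatial graph convolution (fixed adjacency) plus a temporal conv."""

    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)  # (V, V), row-normalised
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):  # x: (N, C, T, V)
        # Mix joint features along graph edges, then convolve over time.
        x = torch.einsum("nctv,vw->nctw", self.spatial(x), self.A)
        return self.relu(self.temporal(x))


class SubtitleUnitSegmenter(nn.Module):
    """Per-frame logits: how likely each frame is a subtitle-unit boundary."""

    def __init__(self, adjacency, hidden=64):
        super().__init__()
        self.gcn = nn.Sequential(
            STGCNBlock(IN_CHANNELS, 32, adjacency),
            STGCNBlock(32, hidden, adjacency),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):  # x: (N, 2, T, V)
        feats = self.gcn(x).mean(dim=-1)           # pool joints -> (N, C, T)
        seq, _ = self.lstm(feats.transpose(1, 2))  # (N, T, 2*hidden)
        return self.head(seq).squeeze(-1)          # (N, T) frame logits


# Smoke test with a uniform adjacency as a stand-in for a real skeleton graph.
A = torch.full((NUM_JOINTS, NUM_JOINTS), 1.0 / NUM_JOINTS)
model = SubtitleUnitSegmenter(A)
logits = model(torch.randn(1, IN_CHANNELS, 100, NUM_JOINTS))  # 100 frames
print(logits.shape)  # torch.Size([1, 100])
```

Under these assumptions, training such a model would amount to binary classification with per-frame labels derived from the aligned subtitle timestamps (e.g. `nn.BCEWithLogitsLoss` on the logits), which is consistent with the frame-level ROC-AUC evaluation reported in the abstract.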