
Field: Value (Language)

dc.contributor.author: McConvell, Patrick
dc.date.accessioned: 2007-01-23
dc.date.available: 2007-01-23
dc.date.issued: 2004-01-01
dc.identifier.citation: McConvell, Patrick. “Multilingual Multiperson Multimedia: Linking Audio-Visual with Text Material in Language Documentation”. Researchers, Communities, Institutions, Sound Recordings, eds. Linda Barwick, Allan Marett, Jane Simpson and Amanda Harris. Sydney: University of Sydney, 2003. (en)
dc.identifier.uri: http://hdl.handle.net/2123/1429
dc.description.abstract: Language documentation for endangered and Indigenous languages has been rapidly moving towards a more holistic view of what is to be captured, including a range of genres, conversation as well as narrative. Most of the languages concerned also exist in a multilingual, multivariety language ecology, in which different age groups may speak, and switch between, different varieties. This inevitably becomes part of what is being recorded and is crucial in the understanding of language shift and maintenance. Added to this is the growing realisation of the importance of paralinguistic elements such as gesture even to the basic interpretation of utterances. For proper documentation, what is required now is a system that can handle video, audio, transcription, translation and other annotation, synchronically linked. In this paper I will investigate the functionality of the CLAN system of a/v-transcript linking, widely used for child language and multilingual studies, and briefly compare this to other available alternatives. As for archival holdings of a/v and transcriptions, most of what already exists cannot be immediately moved into such a/v-text linking systems, because of the enormous amount of work involved. There is a need however for some standard system for preliminary digital linking of a/v with existing transcripts, translations and annotations, which may be separated from each other physically and institutionally. From this, more robust linking for analysis and multimedia presentation can be developed. This paper reviews some of the systems being used and the extent to which the metadata element Relation can be refined to carry out this task. (en)
dc.description.sponsorship: Australian Academy of the Humanities; Australian E-Humanities Network; Research Institute for Humanities and Social Sciences, University of Sydney; School of Society, Culture and Performance, Faculty of Arts, University of Sydney (en)
dc.format.extent: 353404 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en (en)
dc.publisher: Open Conference Systems, University of Sydney, Faculty of Arts (en)
dc.rights: This material is copyright. Other than for the purposes of and subject to the conditions prescribed under the Copyright Act, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be altered, reproduced, stored in a retrieval system or transmitted without prior written permission from the University of Sydney Library and/or the appropriate author. (en)
dc.subject: language documentation (en)
dc.subject: linguistics (en)
dc.subject: endangered languages (en)
dc.subject: indigenous languages (en)
dc.subject: audio (en)
dc.subject: video (en)
dc.subject: documentary linguistics (en)
dc.subject: sign language (en)
dc.subject: translation (en)
dc.subject: transcription (en)
dc.title: Multilingual Multiperson Multimedia: Linking Audio-Visual with Text Material in Language Documentation (en)
dc.type: Conference paper (en)

