Corpus linguistics is the study of language as expressed in samples (corpora) of "real world" text. This method represents a data-driven approach to deriving the set of abstract rules by which a natural language is governed, or by which it relates to another language. Originally compiled by hand, corpora are now largely built by automated processes whose output is then corrected.
The corpus approach runs counter to Noam Chomsky's view that real language is riddled with performance-related errors, thus requiring careful analysis of small speech samples obtained in a highly controlled laboratory setting.
The problem of laboratory-selected sentences is similar to that facing lab-based psychology: researchers do not have any measure of the ethnographic representativeness of their data.
Corpus linguistics does away with Chomsky's competence/performance split; adherents believe that reliable language analysis is best carried out on field-collected samples, in natural contexts and with minimal experimental interference. Within corpus linguistics there are divergent views on the value of corpus annotation, ranging from John Sinclair, who advocates minimal annotation so that texts can 'speak for themselves', to the Survey of English Usage team, who advocate annotation as a path to greater linguistic understanding and rigour.
A landmark in modern corpus linguistics was the publication by Henry Kučera and Nelson Francis of Computational Analysis of Present-Day American English in 1967, a work based on the analysis of the Brown Corpus, a carefully compiled selection of current American English totalling about a million words drawn from a wide variety of sources. Kučera and Francis subjected it to a variety of computational analyses, from which they compiled a rich and variegated opus combining elements of linguistics, language teaching, psychology, statistics, and sociology. A further key publication was Randolph Quirk's 'Towards a description of English Usage' (1960), in which he introduced the Survey of English Usage.
Shortly thereafter, Boston publisher Houghton-Mifflin approached Kučera to supply a million-word, three-line citation base for its new American Heritage Dictionary, the first dictionary to be compiled using corpus linguistics. The AHD made the innovative step of combining prescriptive elements (how language should be used) with descriptive information (how it actually is used).
Other publishers followed suit. The British publisher Collins' COBUILD monolingual learner's dictionary, designed for users learning English as a foreign language, was compiled using the Bank of English. The Survey of English Usage Corpus was used in the development of one of the most important corpus-based grammars, A Comprehensive Grammar of the English Language (Quirk et al. 1985).
The Brown Corpus has also spawned a number of similarly structured corpora: the LOB Corpus (1960s British English), the Kolhapur Corpus (Indian English), the Wellington Corpus (New Zealand English), the Australian Corpus of English (Australian English), the Frown Corpus (early 1990s American English), and the FLOB Corpus (1990s British English). Other corpora represent many languages, varieties and modes; they include the International Corpus of English and the British National Corpus, a 100-million-word collection of a range of spoken and written texts created in the 1990s by a consortium of publishers, universities (Oxford and Lancaster) and the British Library. For contemporary American English, work has stalled on the American National Corpus, but the 360-million-word Corpus of Contemporary American English (COCA) (1990-present) is now available.
The first computerized corpus of transcribed spoken language, containing one million words, was constructed in 1971 by the Montreal French Project; it inspired Shana Poplack's much larger corpus of spoken French collected in the Ottawa-Hull area.
Corpus linguistics has generated a number of research methods that attempt to trace a path from data to theory. Wallis and Nelson (2001) introduced what they called the 3A perspective: Annotation, Abstraction and Analysis.
- Annotation consists of the application of a scheme to texts. Annotations may include structural markup, POS-tagging, parsing, and numerous other representations.
- Abstraction consists of the translation (mapping) of terms in the scheme to terms in a theoretically motivated model or dataset. Abstraction typically includes linguist-directed search, but may also include, for example, rule-learning for parsers.
- Analysis consists of statistically probing, manipulating and generalising from the dataset. Analysis might include statistical evaluations, optimisation of rule-bases or knowledge discovery methods.
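The three stages above can be sketched in a few lines of Python. This is a minimal illustration only: the tagset, the toy annotated sentence, and the content/function mapping are invented for the example, not drawn from Wallis and Nelson.

```python
from collections import Counter

# 1. Annotation: each token carries a part-of-speech tag from the scheme
#    (a toy tagset; real corpora use schemes such as CLAWS or the Penn tagset).
annotated = [
    ("the", "DET"), ("cat", "NOUN"), ("sat", "VERB"),
    ("on", "PREP"), ("the", "DET"), ("mat", "NOUN"),
    ("and", "CONJ"), ("the", "DET"), ("dog", "NOUN"), ("barked", "VERB"),
]

# 2. Abstraction: map terms of the annotation scheme onto the categories of a
#    theoretically motivated model -- here, just "content" vs "function" words.
CONTENT_TAGS = {"NOUN", "VERB"}
dataset = ["content" if tag in CONTENT_TAGS else "function"
           for _, tag in annotated]

# 3. Analysis: generalise statistically over the abstracted dataset.
counts = Counter(dataset)
print(counts["content"], counts["function"])  # 5 content vs 5 function words
```

In a real study, stage 1 would be performed by a tagger or parser over millions of words, stage 2 by corpus queries, and stage 3 by statistical evaluation, but the division of labour is the same.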
Most lexical corpora today are POS-tagged. However, even corpus linguists who work with 'unannotated plain text' inevitably apply some method to isolate the terms they are interested in from surrounding words. In such situations, annotation and abstraction are combined in a lexical search.
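A lexical search of this kind is typically presented as a key-word-in-context (KWIC) concordance. The following is a minimal sketch (the `kwic` function and the sample sentence are invented for illustration):

```python
def kwic(text, keyword, width=3):
    """Return each occurrence of keyword with `width` words of context."""
    tokens = text.lower().split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            hits.append((left, tok, right))
    return hits

sample = "Time flies like an arrow and fruit flies like a banana"
for left, kw, right in kwic(sample, "flies"):
    print(f"{left:>20} | {kw} | {right}")
```

Concordancing tools used in practice add tokenisation, sorting on left or right context, and regular-expression or tag-sensitive queries, but the underlying operation is this scan.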
The advantage of publishing an annotated corpus is that other users can then perform experiments on the corpus. Linguists with other interests and differing perspectives than the originators can exploit this work. By sharing data, corpus linguists are able to treat the corpus as a locus of linguistic debate, rather than as an exhaustive fount of knowledge.
- Sinclair, J. (1992). 'The automatic analysis of corpora', in Svartvik, J. (ed.) Directions in Corpus Linguistics (Proceedings of Nobel Symposium 82). Berlin: Mouton de Gruyter.
- Wallis, S. (2007). 'Annotation, Retrieval and Experimentation', in Meurman-Solin, A. & Nurmi, A.A. (eds.) Annotating Variation and Change. Helsinki: Varieng, University of Helsinki. e-Published.
- Quirk, R. (1960). 'Towards a description of English Usage', Transactions of the Philological Society, 40-61.
- Quirk, R., Greenbaum, S., Leech, G. & Svartvik, J. (1985). A Comprehensive Grammar of the English Language. London: Longman.
- Sankoff, D. & Sankoff, G. (1973). 'Sample survey methods and computer-assisted analysis in the study of grammatical variation', in Darnell, R. (ed.) Canadian Languages in their Social Context. Edmonton: Linguistic Research Incorporated, 7-64.
- Poplack, S. (1989). 'The care and handling of a mega-corpus', in Fasold, R. & Schiffrin, D. (eds.) Language Change and Variation. Amsterdam: Benjamins, 411-451.
- Wallis, S. & Nelson, G. (2001). 'Knowledge discovery in grammatically analysed corpora', Data Mining and Knowledge Discovery 5: 307-340.
There are several international peer-reviewed journals dedicated to corpus linguistics, for example, Corpora, Corpus Linguistics and Linguistic Theory, ICAME Journal and the International Journal of Corpus Linguistics.
- Biber, D., Conrad, S., Reppen R. Corpus Linguistics, Investigating Language Structure and Use, Cambridge: Cambridge UP, 1998. ISBN 0-521-49957-7
- McCarthy, D., and Sampson G. Corpus Linguistics: Readings in a Widening Discipline, Continuum, 2005. ISBN 0-826-48803-X
- Facchinetti, R. Theoretical Description and Practical Applications of Linguistic Corpora. Verona: QuiEdit, 2007 ISBN 978-88-89480-37-3
- Facchinetti, R. (ed.) Corpus Linguistics 25 Years on. New York/Amsterdam: Rodopi, 2007 ISBN 978-90-420-2195-2
- Facchinetti, R. and Rissanen M. (eds.) Corpus-based Studies of Diachronic English. Bern: Peter Lang, 2006 ISBN 3-03910-851-4
- Concordance (KWIC)
- Collostructional analysis
- Keyword (linguistics)
- Linguistic Data Consortium
- Machine translation
- Natural Language Toolkit
- Search engines: they access the "web corpus".
- Semantic prosody
- Text corpus
- Translation memory
- Bookmarks for Corpus-based Linguists: a comprehensive site with categorized and annotated links to language corpora, software, references, etc.
- Corpora discussion list
- Freely-available, web-based corpora (100 million - 360 million words each): American, British (BNC), TIME, Spanish, Portuguese
- Manuel Barbera's overview site
- Przemek Kaszubski's list of references
- Corpus4u Community a Chinese online forum for corpus linguistics
- McEnery and Wilson's Corpus Linguistics Page
- Corpus Linguistics with R mailing list
- Research and Development Unit for English Studies
- Survey of English Usage
- The Centre for Corpus Linguistics at Birmingham University
- Gateway to Corpus Linguistics on the Internet: an annotated guide to corpus resources on the web
- Biomedical corpora
- Linguistic Data Consortium, a major distributor of corpora
- XAIRA: a general purpose XML aware open-source corpus analysis tool
- Corsis: (formerly Tenka Text) an open-source (GPLed) corpus analysis tool
- ICECUP and Fuzzy Tree Fragments
- Discussion group text mining
This page uses Creative Commons licensed content from Wikipedia.