Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach
It has been argued that most of corpus linguistics involves one of four fundamental methods: frequency lists, dispersion, collocation, and concordancing. All these presuppose (if only implicitly) the definition of a unit: the element whose frequency in a corpus, in corpus parts, or around a search word is counted (or quantified in other ways). Usually and with most corpus-processing tools, a unit is an orthographic word. However, it is obvious that this is a simplifying assumption borne out of convenience: clearly, it seems more intuitive to consider because of or in spite of as one unit each rather than two or three. Some work in computational linguistics has developed multi-word unit (MWU) identification algorithms, which typically involve co-occurrence token frequencies and association measures (AMs), but these have not become widespread in corpus-linguistic practice despite the fact that recognizing MWUs like the above will have a profound impact on just about all corpus statistics that involve (simplistic notions of) words/units. In this programmatic proof-of-concept paper, I introduce and exemplify an algorithm to identify MWUs that goes beyond frequency and bidirectional association by also involving several well-known but underutilized dimensions of corpus-linguistic information: frequency: how often does a potential unit (like in_spite_of) occur?; dispersion: how widespread is the use of a potential unit?; association: how strongly attracted are the parts of a potential unit?; entropy: how variable is each slot in a potential unit? The proposed algorithm can use all these dimensions and weight them differently. I will (i) present the algorithm in detail, (ii) exemplify its application to the Brown corpus, (iii) discuss its results on the basis of several kinds of MWUs it returns, and (iv) discuss next analytical steps.
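The abstract sketches an algorithm that scores candidate units along four separately weighted dimensions: frequency, dispersion, association, and slot entropy. As a rough illustration of how such a multi-dimensional score could be assembled for bigrams, here is a minimal, hypothetical Python sketch; the concrete measures chosen below (log frequency, range-based dispersion across corpus parts, pointwise mutual information, mean slot entropy) and the simple additive weighting are illustrative assumptions only, not the measures or weighting actually used in the paper.

```python
# Hypothetical sketch of a multi-dimensional MWU score for bigrams.
# The measures and the weighting scheme are illustrative assumptions,
# not the ones used in Gries (2022).
import math
from collections import Counter, defaultdict

def score_bigrams(corpus_parts, w_freq=1.0, w_disp=1.0, w_assoc=1.0, w_entr=1.0):
    """Score bigrams on frequency, dispersion, association, and slot entropy."""
    unigrams, bigrams = Counter(), Counter()
    parts_containing = defaultdict(set)   # bigram -> indices of corpus parts it occurs in
    right_fillers = defaultdict(Counter)  # w1 -> Counter of words that follow it
    left_fillers = defaultdict(Counter)   # w2 -> Counter of words that precede it

    for i, part in enumerate(corpus_parts):
        toks = part.lower().split()
        unigrams.update(toks)
        for w1, w2 in zip(toks, toks[1:]):
            bigrams[(w1, w2)] += 1
            parts_containing[(w1, w2)].add(i)
            right_fillers[w1][w2] += 1
            left_fillers[w2][w1] += 1

    n_tokens = sum(unigrams.values())
    n_parts = len(corpus_parts)

    def entropy(counter):
        total = sum(counter.values())
        return -sum((c / total) * math.log2(c / total) for c in counter.values())

    scores = {}
    for (w1, w2), f in bigrams.items():
        freq = math.log2(f + 1)                                   # (log) token frequency
        disp = len(parts_containing[(w1, w2)]) / n_parts          # range-based dispersion
        pmi = math.log2(f * n_tokens / (unigrams[w1] * unigrams[w2]))  # association (PMI)
        # slot variability: mean entropy of what fills each slot (lower = more unit-like)
        entr = (entropy(right_fillers[w1]) + entropy(left_fillers[w2])) / 2
        scores[(w1, w2)] = (w_freq * freq + w_disp * disp +
                            w_assoc * pmi - w_entr * entr)
    return scores

if __name__ == "__main__":
    parts = ["in spite of the rain we left in time",
             "she succeeded in spite of the odds",
             "because of the delay we stayed in spite of everything"]
    for bg, s in sorted(score_bigrams(parts).items(), key=lambda kv: -kv[1])[:5]:
        print(bg, round(s, 3))
```

In practice the four dimensions would have to be normalized to comparable scales before being weighted; the point of the sketch is only that each dimension can be computed and weighted independently of the others.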
Saved in:
| Main Author: | Stefan Th. Gries |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Université Jean Moulin - Lyon 3, 2022-03-01 |
| Series: | Lexis: Journal in English Lexicology |
| Subjects: | corpus linguistics; multi-word units; n-grams; frequency; dispersion; association |
| Online Access: | https://journals.openedition.org/lexis/6231 |
| _version_ | 1846132402699632640 |
|---|---|
| author | Stefan Th. Gries |
| author_facet | Stefan Th. Gries |
| author_sort | Stefan Th. Gries |
| collection | DOAJ |
| description | It has been argued that most of corpus linguistics involves one of four fundamental methods: frequency lists, dispersion, collocation, and concordancing. All these presuppose (if only implicitly) the definition of a unit: the element whose frequency in a corpus, in corpus parts, or around a search word is counted (or quantified in other ways). Usually and with most corpus-processing tools, a unit is an orthographic word. However, it is obvious that this is a simplifying assumption borne out of convenience: clearly, it seems more intuitive to consider because of or in spite of as one unit each rather than two or three. Some work in computational linguistics has developed multi-word unit (MWU) identification algorithms, which typically involve co-occurrence token frequencies and association measures (AMs), but these have not become widespread in corpus-linguistic practice despite the fact that recognizing MWUs like the above will have a profound impact on just about all corpus statistics that involve (simplistic notions of) words/units. In this programmatic proof-of-concept paper, I introduce and exemplify an algorithm to identify MWUs that goes beyond frequency and bidirectional association by also involving several well-known but underutilized dimensions of corpus-linguistic information: frequency: how often does a potential unit (like in_spite_of) occur?; dispersion: how widespread is the use of a potential unit?; association: how strongly attracted are the parts of a potential unit?; entropy: how variable is each slot in a potential unit? The proposed algorithm can use all these dimensions and weight them differently. I will (i) present the algorithm in detail, (ii) exemplify its application to the Brown corpus, (iii) discuss its results on the basis of several kinds of MWUs it returns, and (iv) discuss next analytical steps. |
| format | Article |
| id | doaj-art-95d25939ad2f49978bd61a0fc2f3e5e0 |
| institution | Kabale University |
| issn | 1951-6215 |
| language | English |
| publishDate | 2022-03-01 |
| publisher | Université Jean Moulin - Lyon 3 |
| record_format | Article |
| series | Lexis: Journal in English Lexicology |
| spelling | doaj-art-95d25939ad2f49978bd61a0fc2f3e5e02024-12-09T14:52:34ZengUniversité Jean Moulin - Lyon 3Lexis: Journal in English Lexicology1951-62152022-03-011910.4000/lexis.6231Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approachStefan Th. GriesIt has been argued that most of corpus linguistics involves one of four fundamental methods: frequency lists, dispersion, collocation, and concordancing. All these presuppose (if only implicitly) the definition of a unit: the element whose frequency in a corpus, in corpus parts, or around a search word are counted (or quantified in other ways). Usually and with most corpus-processing tools, a unit is an orthographic word. However, it is obvious that this is a simplifying assumption borne out of convenience: clearly, it seems more intuitive to consider because of or in spite of as one unit each rather than two or three. Some work in computational linguistics has developed multi-word unit (MWU) identification algorithms, which typically involve co-occurrence token frequencies and association measures (AMs), but these have not become widespread in corpus-linguistic practice despite the fact that recognizing MWUs like the above will have a profound impact on just about all corpus statistics that involve (simplistic notions of) words/units. In this programmatic proof-of-concept paper, I introduce and exemplify an algorithm to identify MWUs that goes beyond frequency and bidirectional association by also involving several well-known but underutilized dimensions of corpus-linguistic information: frequency: how often does a potential unit (like in_spite_of) occur?; dispersion: how widespread is the use of a potential unit?; association: how strongly attracted are the parts of a potential unit?; entropy: how variable is each slot in a potential unit? The proposed algorithm can use all these dimensions and weight them differently. I will (i) present the algorithm in detail, (ii) exemplify its application to the Brown corpus, (iii) discuss its results on the basis of several kinds of MWUs it returns, and (iv) discuss next analytical steps.https://journals.openedition.org/lexis/6231corpus linguisticsmulti-word unitsn-gramsfrequencydispersionassociation |
| spellingShingle | Stefan Th. Gries Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach Lexis: Journal in English Lexicology corpus linguistics multi-word units n-grams frequency dispersion association |
| title | Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach |
| title_full | Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach |
| title_fullStr | Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach |
| title_full_unstemmed | Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach |
| title_short | Multi-word units (and tokenization more generally): a multi-dimensional and largely information-theoretic approach |
| title_sort | multi word units and tokenization more generally a multi dimensional and largely information theoretic approach |
| topic | corpus linguistics multi-word units n-grams frequency dispersion association |
| url | https://journals.openedition.org/lexis/6231 |
| work_keys_str_mv | AT stefanthgries multiwordunitsandtokenizationmoregenerallyamultidimensionalandlargelyinformationtheoreticapproach |