Sunday, October 25, 2015

Quantifying Overlap: A Shiny App for NWAV 44

TL;DR
Pillai (actually 1 - Pillai):
df <- data.frame(x = F2, y = F1, class = vowel.class)
m <- lm(cbind(x, y) ~ class, data = df)
pillai <- 1 - anova(m)["class", "Pillai"]

Bhattacharyya affinity:
library(adehabitatHR)  # also attaches sp, which provides SpatialPointsDataFrame
df <- data.frame(x = F2, y = F1, class = vowel.class)
spdf <- SpatialPointsDataFrame(cbind(df$x, df$y), data.frame(class = df$class))
ba <- kerneloverlap(spdf, method = "BA", kern = "epa")[1, 2]  # "epa" = Epanechnikov kernel



Today at NWAV in Toronto, I ran out of time, but managed to present most of this talk. It was not a set of PowerPoint slides but a Shiny app, which contains an interactive Overlap Simulator and an ANAE Explorer for the low back vowels. I think that interactive apps like this can be very useful as part of presentations and even publications, as we move away from the model of the traditional paper journal. The R/Shiny code that makes up the app is here and here, but please note that this was my first time trying to write this kind of code! Included in the code are functions for the Pillai score, the Bhattacharyya affinity, and the "Closer Centroid Correct" measure discussed in the text.

To summarize my talk, the popular Pillai score -- as noted by Nycz & Hall-Lew -- is not really a measure of the overlap of two clouds of points, such as vowel tokens from two different word classes. Pillai (a parametric statistic that makes several assumptions about the data) is more like an R-squared measure, asking what proportion of the total variability in the data is "explained" by the difference in means between the two categories. Even when two clouds are clearly non-overlapping, there is still residual variation within each cloud, so Pillai will not come out as 1. On the other hand, if the means of the two groups are equal, Pillai will always come out as 0, even if the clouds have different shapes and therefore do not technically show complete overlap. Finally, Pillai is sensitive to imbalance in token numbers between word classes: if one class has more data than the other, Pillai suggests that there is more overlap than if the numbers of tokens were equal.
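
The first of these properties is easy to verify by simulation. Here is a minimal sketch in base R (the cloud positions and sizes are invented for illustration), showing that even two visibly separated clouds yield a Pillai score short of 1:

```r
set.seed(1)
n <- 100
# two clouds 10 standard deviations apart: no visual overlap at all
cloud1 <- cbind(rnorm(n, mean = 0),  rnorm(n, mean = 0))
cloud2 <- cbind(rnorm(n, mean = 10), rnorm(n, mean = 10))
class <- factor(rep(c("a", "b"), each = n))
m <- lm(rbind(cloud1, cloud2) ~ class)
pillai <- anova(m)["class", "Pillai"]
pillai  # very high, but still below 1: residual variation remains
```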

The Overlap Simulator allows the user to observe these drawbacks of the Pillai score, and to note that the Bhattacharyya affinity (or coefficient) generally does not suffer from the same problems (although it too is skewed, to a lesser degree, when tokens are imbalanced across groups). BA was explicitly designed as a measure of the overlap of two continuous distributions, and it has a very simple mathematical formula: multiply the class probability densities, take the square root, and integrate over the plane. To estimate BA in R, though, a few parameters must be set: the type of kernel, the kernel bandwidth, and the grid size. I have mainly used the default values for these.
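
That formula can be sketched directly in a few lines. This toy illustration uses MASS::kde2d on a shared grid rather than adehabitatHR's kernelUD, and the sample data, grid limits, and grid size here are my own assumptions, not the app's settings:

```r
library(MASS)
set.seed(1)
a <- cbind(rnorm(200, 0), rnorm(200, 0))   # class 1 tokens
b <- cbind(rnorm(200, 1), rnorm(200, 1))   # class 2 tokens, shifted
lims <- c(-5, 6, -5, 6)                    # common evaluation window
da <- kde2d(a[, 1], a[, 2], n = 100, lims = lims)
db <- kde2d(b[, 1], b[, 2], n = 100, lims = lims)
cell <- diff(da$x[1:2]) * diff(da$y[1:2])  # area of one grid cell
ba <- sum(sqrt(da$z * db$z)) * cell        # multiply, square root, integrate
```

For two unit-variance normal clouds whose means differ by one unit in each dimension, this comes out close to the theoretical Bhattacharyya coefficient of about .78.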

Another measure of overlap, which (as far as I know) I came up with, is the Closer Centroid Correct Criterion (CCCC). It seems to perform similarly to BA, although it tends to have a lower value (when converted to a scale where 0 means no overlap and 1 means complete overlap). One possible advantage of CCCC is that its calculation is very simple: it represents the chance that a point is closer to the centroid (mean) of its own class than to that of the other class. This seems like it could reflect the amount of confusion a listener might have in distinguishing two vowel classes in another person's speech, and it would presumably (?) be much more readily computationally/brain-instantiable than the Bhattacharyya method, which involves the estimation, multiplication, and integration of two-dimensional probability distributions.
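
Under my reading of that description, CCCC can be sketched as follows (the exact rescaling from proportion-correct to the 0-to-1 overlap scale is my own assumption, not necessarily the app's):

```r
cccc <- function(xy, class) {
  class <- factor(class)
  cls <- levels(class)
  m1 <- colMeans(xy[class == cls[1], , drop = FALSE])  # centroid of class 1
  m2 <- colMeans(xy[class == cls[2], , drop = FALSE])  # centroid of class 2
  d1 <- sqrt(rowSums(sweep(xy, 2, m1)^2))  # distances to centroid 1
  d2 <- sqrt(rowSums(sweep(xy, 2, m2)^2))  # distances to centroid 2
  correct <- mean(ifelse(class == cls[1], d1 < d2, d2 < d1))
  2 * (1 - correct)  # chance-level .5 correct maps to complete overlap (1)
}

set.seed(1)
xy <- rbind(cbind(rnorm(100, 0),  rnorm(100, 0)),
            cbind(rnorm(100, 10), rnorm(100, 10)))
cl <- rep(c("a", "b"), each = 100)
cccc(xy, cl)  # near 0: almost every token is closer to its own centroid
```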

While the results from the ANAE Explorer were preliminary, it was clear that the Pillai metric failed to reflect the degree of low back separation for some speakers in the Mid-Atlantic and Inland North regions. It also makes a big difference whether the overlap of the LOT and THOUGHT vowels is assessed with or without an adjustment for phonetic environment.

Experimenting with this adjustment -- which amounts to working with the residuals from a regression model that fits preceding- and following-segment coefficients pooled across all speakers -- shows that LOT and THOUGHT usually appear to overlap more once phonetic effects are taken into account. An extreme example of this is Gus K. from Nashville, TN, whose unadjusted BA was .320, but whose adjusted BA was .818. However, factoring out phonetic environment can sometimes have the opposite effect, like for Tony M. from Knoxville, TN, whose unadjusted BA was .690 and whose adjusted BA was .234.
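
For concreteness, the adjustment amounts to something like the following sketch, run here on invented data (the data frame, column names, and segment coding are hypothetical, not the actual ANAE fields):

```r
set.seed(1)
anae <- data.frame(
  F1 = rnorm(300, 700, 50), F2 = rnorm(300, 1100, 80),
  prec.seg = sample(c("t", "d", "n"), 300, replace = TRUE),
  foll.seg = sample(c("t", "d", "n"), 300, replace = TRUE)
)
# fit preceding- and following-segment effects pooled across all speakers
adj <- lm(cbind(F1, F2) ~ prec.seg + foll.seg, data = anae)
# the residuals are the environment-adjusted formants
anae$F1.adj <- resid(adj)[, "F1"]
anae$F2.adj <- resid(adj)[, "F2"]
# overlap (Pillai, BA, CCCC) is then computed per speaker on F1.adj/F2.adj
```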

Sunday, October 4, 2015

Slash Non-Fiction

There may be times when you do not wish to exclude [tokens with] a factor from your entire analysis, but you do want to exclude [that factor] from the results of a given factor group. - Goldvarb 2001 Users' Manual

Since Rbrul was created in 2008, users have asked about the possibility of this type of partial exclusion. Until now, this capability has been available only in Goldvarb. While Goldvarb is still the only way to achieve more complex exclusions (for example, excluding tokens from the results of one factor group based on the values of another factor group), Rbrul now emulates the Goldvarb "slash operator" in its primary usage.

If one or more tokens have "/" (slash) for a certain predictor (factor group), then regardless of the value(s) of the dependent variable for those tokens, the log-odds coefficient for the slashed group is forced to zero (factor weight .500), although this is reported as "NA" for the sake of clarity. In effect, these tokens are ignored in the calculation of the other results for that predictor, while being included normally in the calculations for the other predictors.
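
I won't show Rbrul's internals here, but the effect can be sketched by hand in glm() on a toy dataset (variable names invented): give the slashed tokens a zero in the predictor's design column, so they inform the intercept and the other predictors but contribute nothing to the slashed factor group.

```r
d <- data.frame(
  y     = c(1, 0, 1, 1, 0, 0, 1, 0),
  style = c("formal", "casual", "formal", "casual",
            "formal", "casual", "/", "/"),   # last two tokens are slashed
  age   = c("old", "old", "young", "young", "old", "young", "old", "young")
)
# sum-coded column for style, zeroed out for the slashed tokens
d$style.s <- ifelse(d$style == "formal", 1,
                    ifelse(d$style == "casual", -1, 0))
m <- glm(y ~ style.s + age, data = d, family = binomial)
coef(m)["style.s"]  # estimated without direct influence from the "/" tokens
```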

I have not tested this new function very thoroughly, only making sure that the output matched Goldvarb on a few simple models. So please give me feedback (danielezrajohnson@gmail.com) if it seems to have problems, or even if it seems to be working properly.

Thanks for your patience, and happy slashing!