New approaches for delineating n‐dimensional hypervolumes

By Benjamin Blonder, Cecina Babich Morrow, Brian Maitner, David J. Harris, Christine Lamanna, Cyrille Violle, Brian J. Enquist, Andrew J. Kerkhoff

August 9, 2017

Abstract

Hutchinson’s n‐dimensional hypervolume concept underlies many applications in contemporary ecology and evolutionary biology. Estimating hypervolumes from sampled data has been an ongoing challenge due to conceptual and computational issues. We present new algorithms for delineating the boundaries and probability density within n‐dimensional hypervolumes. The methods produce smooth boundaries that can fit data either more loosely (Gaussian kernel density estimation) or more tightly (one‐class classification via support vector machines). Further, the algorithms can accept abundance‐weighted data, and the resulting hypervolumes can be given a probabilistic interpretation and projected into geographic space. We demonstrate the properties of these methods on a large dataset that characterises the functional traits and geographic distributions of thousands of plant species. The methods are available in version ≥ 2.0.7 of the hypervolume R package. These new algorithms provide: (i) a more robust approach for delineating the shape and density of n‐dimensional hypervolumes; (ii) more efficient performance on large and high‐dimensional datasets; and (iii) improved measures of functional diversity and environmental niche breadth.
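As a minimal sketch of the two estimation methods named in the abstract, the snippet below builds a loose (Gaussian kernel density) and a tight (one‐class SVM) hypervolume from the `quercus` climate dataset shipped with the package. It assumes the hypervolume package (version ≥ 2.0.7) is installed; the column names follow the package's bundled data.

```r
# Sketch only: assumes the 'hypervolume' package (>= 2.0.7) is installed.
library(hypervolume)

# Climate observations for two oak species, bundled with the package.
data(quercus)
d <- subset(quercus, Species == "Quercus alba")[,
       c("annual_mean_temperature", "annual_precipitation")]

# z-score the axes so kernel bandwidths are comparable across dimensions.
d <- scale(d)

# Looser boundary: Gaussian kernel density estimation.
hv_gauss <- hypervolume_gaussian(d)

# Tighter boundary: one-class classification via support vector machine.
hv_svm <- hypervolume_svm(d)

# The SVM boundary hugs the data, so its volume is typically smaller.
get_volume(hv_gauss)
get_volume(hv_svm)
```

Both constructors return `Hypervolume` objects, so downstream operations (set operations, plotting, geographic projection) apply to either regardless of which boundary method was used.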
