Machine learning may be neck and neck with “big data” in the race to be Hollywood’s favorite catch-all term for technology as magic.
Many theorems in economics can be proven (and hypotheses shown to be false) with “quantifier elimination.” Results from real algebraic geometry, such as Tarski’s quantifier elimination theorem and Collins’ cylindrical algebraic decomposition algorithm, are applicable because economic hypotheses, especially those that leave functional forms unspecified, can be represented as systems of multivariate polynomial equalities and inequalities.
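The classic textbook instance of quantifier elimination is Tarski’s example for the quadratic: the quantified statement “there exists a real x with x² + bx + c = 0” is equivalent to the quantifier-free condition b² − 4c ≥ 0. A minimal numerical sketch of that equivalence (an illustration only, not the paper’s method; names and the grid of test values are my own):

```python
import numpy as np

def has_real_root(b, c):
    """Check directly whether x^2 + b*x + c = 0 has a real solution,
    i.e. evaluate the quantified statement 'there exists a real x ...'."""
    roots = np.roots([1.0, b, c])
    return bool(np.any(np.abs(roots.imag) < 1e-6))

def quantifier_free(b, c):
    """The quantifier-free equivalent produced by elimination: b^2 - 4c >= 0."""
    return b * b - 4.0 * c >= 0.0

# The two predicates agree everywhere on a grid of coefficient values.
for b in np.linspace(-3.0, 3.0, 13):
    for c in np.linspace(-3.0, 3.0, 13):
        assert has_real_root(b, c) == quantifier_free(b, c)
```

Cylindrical algebraic decomposition generalizes this: it mechanically turns any quantified system of polynomial equalities and inequalities into an equivalent quantifier-free one, which is why hypotheses with unspecified functional forms become decidable once written in polynomial form.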
We introduce sparse random projection, an important dimension-reduction tool from machine learning, for the estimation of discrete choice models with high-dimensional choice sets. First, the high-dimensional data are compressed into a lower-dimensional Euclidean space using random projections. Estimation then proceeds using cyclic monotonicity moment inequalities implied by the multinomial choice model; the procedure is semi-parametric and requires no explicit distributional assumptions on the random utility errors.
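The compression step can be sketched in a few lines. This illustration assumes an Achlioptas-style sparse projection matrix (entries ±1 with probability 1/6 each, 0 with probability 2/3); the paper’s exact construction and estimation step may differ, and the data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_projection(X, k):
    """Project rows of X (n x d) down to k dimensions using a sparse
    random matrix; the sqrt(3/k) scaling preserves Euclidean
    distances in expectation (Johnson-Lindenstrauss)."""
    d = X.shape[1]
    R = rng.choice([1.0, 0.0, -1.0], size=(d, k), p=[1/6, 2/3, 1/6])
    return X @ (np.sqrt(3.0 / k) * R)

# Toy data: 100 attribute vectors over a 5000-alternative choice set.
X = rng.standard_normal((100, 5000))
Z = sparse_random_projection(X, k=500)

# Pairwise distances survive the compression, so moment inequalities
# built from the projected data remain informative.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Z[0] - Z[1])
print(d_proj / d_orig)  # roughly 1
```

The sparsity is what makes the approach practical at scale: about two-thirds of the projection matrix is zero, so the matrix product touches far fewer entries than a dense Gaussian projection while delivering the same distance-preservation guarantee.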
In the 1940s and ’50s, the Cowles Commission, then at the University of Chicago, brought economic scholars together with eminent statisticians and applied mathematicians who pioneered exciting new lines of research in mathematically oriented economic theory and econometrics. Their intell