The Sapir-Whorf Hypothesis and Probabilistic Inference: Evidence from the Domain of Color – Cibelli et al. (2016)

The Sapir-Whorf hypothesis holds that our thoughts are shaped by our native language, and that speakers of different languages therefore think differently. This hypothesis is controversial in part because it appears to deny the possibility of a universal groundwork for human cognition, and in part because some findings taken to support it have not reliably replicated. We argue that considering this hypothesis through the lens of probabilistic inference has the potential to resolve both issues, at least with respect to certain prominent findings in the domain of color cognition. We explore a probabilistic model that is grounded in a presumed universal perceptual color space and in language-specific categories over that space. The model predicts that categories will most clearly affect color memory when perceptual information is uncertain. In line with earlier studies, we show that this model accounts for language-consistent biases in color reconstruction from memory in English speakers, modulated by uncertainty. We also show, to our knowledge for the first time, that such a model accounts for influential existing data on cross-language differences in color discrimination from memory, both within and across categories. We suggest that these ideas may help to clarify the debate over the Sapir-Whorf hypothesis.

http://www.cs.toronto.edu/~yangxu/cibelli_xu_austerweil_griffiths_regier_2016_whorfcolor.pdf
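
The core mechanism the abstract describes is a form of Bayesian cue combination: a color reconstructed from memory is a precision-weighted average of a noisy fine-grained trace and the category's prototype, so the category pulls harder as perceptual uncertainty grows. Below is a minimal Python sketch of that idea, assuming a one-dimensional hue dimension with Gaussian noise and a Gaussian category; the function name and numeric values are illustrative, not the paper's code or fitted parameters.

```python
import numpy as np

def reconstruct_hue(memory_trace, sigma_mem, cat_prototype, sigma_cat):
    """Posterior-mean reconstruction of a hue from a noisy memory trace.

    Combines the fine-grained trace (Gaussian, std sigma_mem) with a
    Gaussian category centered on cat_prototype (std sigma_cat). The
    precision-weighted average shifts toward the prototype as memory
    uncertainty grows -- the model's core prediction.
    """
    w_mem = 1.0 / sigma_mem**2          # precision of the memory trace
    w_cat = 1.0 / sigma_cat**2          # precision of the category prior
    return (w_mem * memory_trace + w_cat * cat_prototype) / (w_mem + w_cat)

# Toy example: a hue remembered near a "blue" prototype (illustrative units).
stimulus, prototype = 200.0, 230.0
for sigma in (5.0, 20.0, 60.0):         # increasing perceptual uncertainty
    est = reconstruct_hue(stimulus, sigma, prototype, sigma_cat=30.0)
    print(f"sigma_mem={sigma:5.1f} -> reconstruction {est:6.1f}")
```

Running this shows the reconstruction drifting from the stimulus toward the category prototype as the memory noise increases, which is the uncertainty-modulated bias the paper tests.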

ImageNet Classification with Deep Convolutional Neural Networks – Krizhevsky, Sutskever, and Hinton (2012)

We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers, we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
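
The abstract spells out the layer stack, so it can be rendered concretely. Below is a minimal PyTorch sketch of an AlexNet-style network, assuming 227×227 RGB inputs; it omits the paper's local response normalization and two-GPU split, so it illustrates the architecture rather than reproducing the authors' implementation.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Five conv layers (some followed by max-pooling), then three FC layers."""

    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4),    # conv1
            nn.ReLU(inplace=True),                          # non-saturating units
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),   # conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),  # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),  # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),  # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),                  # dropout in the FC layers
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),       # 1000-way output
        )

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Sanity check on a dummy batch: logits of shape (2, 1000).
print(AlexNetSketch()(torch.randn(2, 3, 227, 227)).shape)
```

The ReLU units are the "non-saturating neurons" the abstract credits with faster training, and the two Dropout layers are the regularization it mentions; the final Linear layer emits 1000 logits, with the softmax conventionally folded into the cross-entropy loss during training.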

Deep Learning – LeCun, Bengio, and Hinton (2015)

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

https://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf
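
The abstract's one mechanical claim is that backpropagation tells each layer how to change its parameters given the representation computed by the previous layer. A minimal NumPy sketch of that chain-rule flow through a two-layer ReLU network on a toy regression problem; the shapes, data, and learning rate here are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a two-layer network: x -> ReLU(x @ W1) -> h @ W2 -> y_hat
x = rng.normal(size=(32, 4))             # batch of 32 inputs
y = rng.normal(size=(32, 1))             # regression targets
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
lr = 0.1

for step in range(100):
    # Forward pass: each layer's representation is computed from the previous one.
    z = x @ W1
    h = np.maximum(z, 0.0)               # ReLU hidden representation
    y_hat = h @ W2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: the chain rule propagates the loss gradient layer by
    # layer, indicating how each weight matrix should change.
    d_yhat = 2.0 * (y_hat - y) / len(x)
    dW2 = h.T @ d_yhat
    dh = d_yhat @ W2.T
    dz = dh * (z > 0.0)                  # gradient gated by the ReLU
    dW1 = x.T @ dz

    W1 -= lr * dW1                       # gradient-descent parameter updates
    W2 -= lr * dW2
    if step % 25 == 0:
        print(step, round(loss, 4))      # loss should decrease over steps
```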

Machine learning: Overview of the recent progresses and implications for the process systems engineering field – Lee, Shin, and Realff (2018)

Machine learning (ML) has recently gained in popularity, spurred by well-publicized advances like deep learning and widespread commercial interest in big data analytics. Despite the enthusiasm, some renowned experts in the field have expressed skepticism, which is justifiable given the disappointment with the previous wave of neural networks and other AI techniques. On the other hand, new fundamental advances, like the ability to train neural networks with a large number of layers for hierarchical feature learning, may present significant new technological and commercial opportunities. This paper critically examines the main advances in deep learning. In addition, connections with reinforcement learning, another branch of ML, are elucidated, and its role in control and decision problems is discussed. Implications of these advances for the fields of process and energy systems engineering are also discussed.

https://www.sciencedirect.com/science/article/pii/S0098135417303538
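
For readers unfamiliar with the reinforcement-learning branch the abstract connects to control and decision problems, the tabular Q-learning update is its simplest concrete instance. A minimal sketch on a made-up five-state chain task; the environment and hyperparameters are illustrative, not drawn from the paper.

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain (illustrative, not from the paper):
# actions 0/1 move left/right; reaching state 4 pays reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action choice balances exploration and exploitation
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.round(Q, 2))   # the right-moving action should dominate in every state
```

The same target-and-update structure underlies the deep RL methods the paper surveys, with the table replaced by a neural network; that substitution is what makes RL a candidate for the control and decision problems the authors discuss.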