The Sapir-Whorf Hypothesis and Probabilistic Inference: Evidence from the Domain of Color – Cibelli et al. (2016)

The Sapir-Whorf hypothesis holds that our thoughts are shaped by our native language, and that speakers of different languages therefore think differently. This hypothesis is controversial in part because it appears to deny the possibility of a universal groundwork for human cognition, and in part because some findings taken to support it have not reliably replicated. We argue that considering this hypothesis through the lens of probabilistic inference has the potential to resolve both issues, at least with respect to certain prominent findings in the domain of color cognition. We explore a probabilistic model that is grounded in a presumed universal perceptual color space and in language-specific categories over that space. The model predicts that categories will most clearly affect color memory when perceptual information is uncertain. In line with earlier studies, we show that this model accounts for language-consistent biases in color reconstruction from memory in English speakers, modulated by uncertainty. We also show, to our knowledge for the first time, that such a model accounts for influential existing data on cross-language differences in color discrimination from memory, both within and across categories. We suggest that these ideas may help to clarify the debate over the Sapir-Whorf hypothesis.
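
The core computation behind this account is simple enough to sketch as one-dimensional Gaussian cue combination: a noisy memory trace of the stimulus hue is averaged with the mean of the named color category, weighted by their relative reliabilities. The Python sketch below illustrates that idea only; it is not the authors’ full model, and the hue values, category parameters, and noise variances are invented for the example.

```python
import numpy as np

def reconstruct_hue(trace, trace_var, cat_mean, cat_var):
    """Posterior mean of a remembered hue, combining a noisy memory
    trace with a Gaussian color-category prior (1-D sketch).

    With likelihood N(trace, trace_var) and prior N(cat_mean, cat_var),
    the posterior mean is the reliability-weighted average of the cues.
    """
    w_trace = (1 / trace_var) / (1 / trace_var + 1 / cat_var)
    return w_trace * trace + (1 - w_trace) * cat_mean

# Hypothetical numbers: hue on a 0-360 circle, treated locally as linear.
green_mean, green_var = 120.0, 400.0   # assumed "green" category prior
stimulus = 100.0                       # hue actually shown

# Low perceptual uncertainty: reconstruction stays near the stimulus.
print(reconstruct_hue(stimulus, trace_var=25.0,
                      cat_mean=green_mean, cat_var=green_var))   # ~101.2

# High uncertainty (e.g., after a memory delay): reconstruction is
# pulled toward the category mean -- the language-consistent bias.
print(reconstruct_hue(stimulus, trace_var=400.0,
                      cat_mean=green_mean, cat_var=green_var))   # 110.0
```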

ImageNet Classification with Deep Convolutional Neural Networks – Krizhevsky, Sutskever, and Hinton (2012)

We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state of the art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers, we employed a recently developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
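
The architecture the abstract describes can be sketched in a few lines of modern PyTorch. This is a reconstruction for illustration, not the authors’ original two-GPU CUDA implementation; filter counts and kernel sizes follow the published paper, and the softmax is folded into the loss function as is now conventional.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """AlexNet-style network: five convolutional layers (some followed
    by max pooling), non-saturating ReLU units, and three fully
    connected layers feeding a 1000-way classifier."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),                       # "dropout" regularization
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, num_classes),          # softmax applied in the loss
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = AlexNetSketch()(torch.randn(1, 3, 227, 227))  # one 227x227 RGB image
```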

Is Psychology Still a Science of Behavior? – Dolinski (2018)

Since the 1970s, social psychology has examined real human behavior to an ever-decreasing degree. This article analyzes the reasons why this is so. The author points out that the otherwise valuable cognitive shift, which occurred in social psychology precisely in the 1970s, naturally boosted psychologists’ interest in such phenomena as stereotypes, attitudes, and values; at the same time, it unfortunately decreased interest in others, such as aggression, altruism, and social influence. In recent decades, we have also witnessed a growing conviction among psychologists that explaining why people display certain reactions matters more than demonstrating the conditions under which they display them. This assumption has been accompanied by the spread of statistical analysis of empirical data, which has led researchers today to generally prefer survey studies (even when these are embedded in experiments) over the analysis of behavioral variables. The author analyzes the contents of the most recent volume of the “Journal of Personality and Social Psychology” and argues that it is essentially devoid of empirical studies in which human behavior is examined. This raises the question of whether social psychology remains a science of behavior, and whether such a condition of the discipline is desirable.

Pretend Play – Weisberg (2015)

Pretend play is a form of playful behavior that involves nonliteral action. Although on the surface this activity appears to be merely for fun, recent research has discovered that children’s pretend play has connections to important cognitive and social skills, such as symbolic thinking, theory of mind, and counterfactual reasoning. The current article first defines pretend play and then reviews the arguments and evidence for these three connections. Pretend play has a nonliteral correspondence to reality; hence, pretending may give children practice in navigating symbolic relationships, which may strengthen their language skills. Pretend play and theory-of-mind reasoning share a focus on others’ mental states in order to correctly interpret behavior; hence, pretending and theory of mind may be mutually supportive in development. Pretend play and counterfactual reasoning both involve representing nonreal states of affairs; hence, pretending may facilitate children’s counterfactual abilities. These connections make pretend play an important phenomenon in cognitive science: studying children’s pretend play can provide insight into these other abilities and their developmental trajectories, and thereby into human cognitive architecture and its development.

Core Knowledge – Spelke and Kinzler (2008)

Human cognition is founded, in part, on four systems for representing objects, actions, number, and space. It may be based, as well, on a fifth system for representing social partners. Each system has deep roots in human phylogeny and ontogeny, and it guides and shapes the mental lives of adults. Converging research on human infants, non-human primates, children, and adults in diverse cultures can aid both understanding of these systems and attempts to overcome their limits.

Deep Learning – LeCun, Bengio, and Hinton (2015)

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
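
The central mechanism named here, backpropagation adjusting each layer’s parameters using error signals from the layer above, can be made concrete with a minimal one-hidden-layer network in NumPy. Everything below (the XOR task, layer sizes, learning rate) is an arbitrary choice for the illustration, not anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR with a one-hidden-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer computes its representation
    # from the representation in the previous layer.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error flow from the
    # output back through the hidden layer (chain rule).
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * (1 - h ** 2)

    # Parameter updates indicated by backpropagation.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

print(p.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```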

Machine learning: Overview of the recent progresses and implications for the process systems engineering field – Lee, Shin, and Realff (2018)

Machine learning (ML) has recently gained in popularity, spurred by well-publicized advances like deep learning and widespread commercial interest in big data analytics. Despite the enthusiasm, some renowned experts in the field have expressed skepticism, which is justifiable given the disappointment with the previous wave of neural networks and other AI techniques. On the other hand, new fundamental advances, such as the ability to train neural networks with a large number of layers for hierarchical feature learning, may present significant new technological and commercial opportunities. This paper critically examines the main advances in deep learning. In addition, connections with another branch of ML, reinforcement learning, are elucidated, and its role in control and decision problems is discussed. Implications of these advances for the fields of process and energy systems engineering are also discussed.
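
The role of reinforcement learning in control and decision problems that the paper discusses can be illustrated with tabular Q-learning on a toy sequential decision task. The chain environment, rewards, and hyperparameters below are invented for the sketch and do not come from the paper or any process-engineering case study.

```python
import numpy as np

# Toy control problem: move along a 5-state chain; reaching the
# rightmost state pays +1, and every intermediate step costs -0.01.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left / step right
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))   # state-action value table
alpha, gamma, eps = 0.1, 0.95, 0.1       # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy behavior: mostly exploit, sometimes explore.
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else -0.01
        # Q-learning update: bootstrap from the best next-state action.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# Learned policy: action 1 ("right") in each non-terminal state.
print(Q[:-1].argmax(axis=1))
```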