CogSci 2018: Madison
July 25–28, 2018
Quantum Models of Cognition and Decision
Peter Bruza, Jerome Busemeyer, Peter Kvam, and Joyce Wang
This tutorial is an exposition of a rapidly growing alternative approach to building computational models of cognition and decision based on quantum theory. It will lay out the basic assumptions of classical versus quantum theories by reviewing them side by side, in a parallel and elementary manner. We will show that quantum theory provides a unified and powerful explanation for a wide variety of paradoxes found in human cognition and decision, spanning attitude, inference, causal reasoning, judgment and decision making, and memory. Computer programs for computing model predictions are described and made available. No previous experience or background with quantum theory will be assumed.
Statistics as pottery: Bayesian Data Analysis using Probabilistic Programs
Michael Tessler, Noah Goodman
Probability theory is the logic of science (Jaynes, 2003), and Bayesian data analysis (BDA) is the glue that brings that logic to data. BDA is a general, flexible alternative to standard statistical approaches (e.g., Null Hypothesis Significance Testing) that gives scientists a clear and direct way to address their own scientific questions. Doing BDA in a probabilistic programming language (PPL) affords additional advantages: a compositional approach to writing models, separation of model specification from algorithmic implementation (à la lm() in R), and continuity from articulating data-analytic models to formalizing Bayesian cognitive models. Furthermore, specifying one's model and data analysis in a PPL makes it possible to search for "optimal experiments" essentially for free. This tutorial will walk participants from the basics of BDA to state-of-the-art applications, using an interactive online web-book and tools for integrating BDA into their existing workflow. Check out the tutorial website (http://stanford.edu/~mtessler/short-courses/2018-bdappl-cogsci/).
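The core BDA step the abstract describes — conditioning a prior on data to obtain a posterior — can be sketched in a few lines of base R (this is a minimal grid-approximation example, not the tutorial's own PPL tools; the data values are hypothetical):

```r
# Posterior for a coin-weight parameter theta via grid approximation,
# given hypothetical data of 7 successes in 10 trials.
theta <- seq(0, 1, length.out = 1001)    # grid over the parameter
prior <- dbeta(theta, 1, 1)              # uniform Beta(1, 1) prior
likelihood <- dbinom(7, size = 10, prob = theta)
posterior <- prior * likelihood
posterior <- posterior / sum(posterior)  # normalize over the grid
sum(theta * posterior)                   # posterior mean
```

A PPL expresses the same model compositionally (prior + likelihood) and hands inference to a generic algorithm, which is the separation of specification from implementation the abstract highlights.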
Mixed Models in R – An Applied Introduction
Mixed models are a generalization of ordinary regression models that explicitly capture dependencies among related data points via random-effects parameters. Such dependencies are ubiquitous in cognitive science, which routinely collects more than one data point from the same participant and/or the same item. Compared to traditional analysis approaches that ignore these dependencies, mixed models provide more accurate (and generalizable) estimates, improved statistical power, and non-inflated Type I errors. The tutorial will introduce the functionality of lme4, the gold standard for estimating mixed models in R. In addition, it will introduce the functionality of afex, which simplifies many aspects of using lme4, such as the calculation of p-values for mixed models. The tutorial also covers the basic statistical modeling knowledge in R needed to use mixed models competently. Attendees are expected to have some basic knowledge of R.
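A typical call of the kind the tutorial covers looks like the following sketch (the data frame and variable names here are hypothetical):

```r
library(lme4)
# Fixed effect of condition, plus by-participant and by-item random
# intercepts to capture the repeated-measures dependencies.
m <- lmer(rt ~ condition + (1 | participant) + (1 | item), data = d)
summary(m)

library(afex)
# afex::mixed fits the same model and adds p-values for the fixed effects.
m2 <- mixed(rt ~ condition + (1 | participant) + (1 | item), data = d)
```

The `(1 | participant)` and `(1 | item)` terms are the random-effects parameters the abstract refers to; afex's `mixed()` wraps lme4 and supplies the significance tests that `lmer()` alone does not report.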
A behavioral measure of mindfulness for local and online data collection
Samuel Nordli, Thomas Gorman