binaryRL: Reinforcement Learning Tools for Two-Alternative Forced Choice Tasks
Steps:
  Step 1: Building reinforcement learning model
  Step 2: Generating fake data for parameter and model recovery
  Step 3: Optimizing parameters to fit real data
  Step 4: Replaying the experiment with optimal parameters

Functions:
  Function: Learning Rate
  Function: Utility Function
  Function: Exploration Strategy
  Function: Upper-Confidence-Bound
  Function: Soft-Max Function

Processes:
  Process: Simulating Fake Data
  Process: Optimizing Parameters
  Process: Recovering Fake Data

Models:
  Model: TD
  Model: RSTD
  Model: Utility

Methods:
  S3 method: summary
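The building blocks listed above (a learning rule plus a soft-max choice rule) can be sketched in a few lines of base R. The function and argument names below are illustrative only, not the binaryRL API:

```r
# Minimal sketch of one trial of a Rescorla-Wagner update paired with
# a soft-max choice rule. Names here are hypothetical, not exported
# binaryRL functions.

softmax_prob <- function(values, tau = 1) {
  # Probability of choosing each option given its current value;
  # tau is the temperature of the soft-max
  exp(values / tau) / sum(exp(values / tau))
}

rw_update <- function(values, choice, reward, eta = 0.1) {
  # Prediction error: obtained reward minus current expected value
  pe <- reward - values[choice]
  values[choice] <- values[choice] + eta * pe
  values
}

values <- c(A = 0, B = 0)
p <- softmax_prob(values, tau = 0.5)   # equal values give p = (0.5, 0.5)
values <- rw_update(values, choice = "A", reward = 1, eta = 0.1)
# values[["A"]] is now 0.1; values[["B"]] is unchanged at 0
```

The same two pieces reappear in each of the four steps: a model simulates choices (Steps 1-2), its free parameters (here eta and tau) are fit to data (Step 3), and the experiment is replayed with the fitted values (Step 4).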
Tools for building Rescorla-Wagner models for Two-Alternative Forced Choice tasks, commonly employed in psychological research. Most concepts and ideas in this R package are drawn from Sutton and Barto (2018) <ISBN:9780262039246>. The package allows RL models to be defined intuitively with simple if-else statements, and the three basic models built into the package follow Niv et al. (2012) <doi:10.1523/JNEUROSCI.5498-10.2012>. Our approach to constructing and evaluating these computational models is informed by the guidelines proposed in Wilson & Collins (2019) <doi:10.7554/eLife.49547>. The example datasets included with the package are sourced from the work of Mason et al. (2024) <doi:10.3758/s13423-023-02415-x>.
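As an illustration of the if-else style of model definition described above: a risk-sensitive TD (RSTD) model in the sense of Niv et al. (2012) applies different learning rates to positive and negative prediction errors, which reduces to a single if-else branch. The names below are hypothetical, not the package's exported functions:

```r
# Hypothetical learning-rate rule written as a plain if-else:
# RSTD-style models learn at different rates from gains and losses
# by selecting between two learning rates.
rstd_learning_rate <- function(prediction_error, eta_pos = 0.3, eta_neg = 0.1) {
  if (prediction_error >= 0) {
    eta_pos   # positive surprise: apply the gain learning rate
  } else {
    eta_neg   # negative surprise: apply the loss learning rate
  }
}

v <- 0.5
reward <- 0
pe <- reward - v                        # -0.5, a negative prediction error
v <- v + rstd_learning_rate(pe) * pe    # updated with eta_neg = 0.1
# v is now 0.45
```

Setting eta_pos equal to eta_neg recovers a standard TD learner, which is one way such nested models can be compared during model recovery.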