Toolbox for Error-Driven Learning Simulations with Two-Layer Networks
Calculate the change in activation for a specific cue or set of cues.
Calculate the activations for each learning event.
Calculate the activations for one or a set of cues.
Calculate the activations for all outcomes in the data.
Remove empty cues and/or outcomes.
Check whether cues and outcomes exist in a weight matrix and optionall...
Create event training data from a frequency data frame.
Create empty weight matrix based on a set of cues and outcomes.
Create a 'cue window', for overlapping or continuous cues.
Toolbox for Error-Driven Learning Simulations with Two-Layer Networks
Function to calculate the activations.
Extract cues from a list of weight matrices.
Retrieve the lambda values for all or specific outcomes for each learn...
Extract outcomes from a list of weight matrices.
Retrieve the weight updates and their change for each learning event.
Retrieve all cues from a vector of text strings.
Extract the change of connection weights between a specific cue and al...
Extract the change of connection weights between all cues and a specif...
Function implementing the Luce choice rule.
Visualize the change of connection weights between a specific outcome ...
Visualize the change of connection weights between a specific cue and ...
Return strong weights.
Visualize the change of connection weights between a specific outcome ...
Function implementing the Rescorla-Wagner learning.
Function implementing the Rescorla-Wagner learning.
Function implementing the Rescorla-Wagner learning equations without c...
Function implementing the Rescorla-Wagner learning equations without o...
Set the value of the background cue.
Function implementing the Rescorla-Wagner learning for a single learni...
Function implementing the Rescorla-Wagner learning equations without c...
Function implementing the Rescorla-Wagner learning equations without o...
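The Luce choice rule listed above maps outcome activations onto choice probabilities: an option is chosen with probability equal to its value divided by the summed values of all options. A minimal sketch (shown here in Python for illustration, not the package's own R implementation; the function name `luce_choice` is ours):

```python
def luce_choice(value, all_values):
    """Luce choice rule: the probability of choosing an option is its
    value divided by the summed values of all competing options.
    Assumes all values are positive."""
    return value / sum(all_values)

# An option holding half the total value is chosen half the time:
p = luce_choice(2.0, [2.0, 1.0, 1.0])  # 2 / (2 + 1 + 1) = 0.5
```

By construction the probabilities over all options sum to one.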
Error-driven learning, based on the Widrow & Hoff (1960) <https://isl.stanford.edu/~widrow/papers/c1960adaptiveswitching.pdf> learning rule and essentially the same as the Rescorla-Wagner learning equations (Rescorla & Wagner, 1972, ISBN: 0390718017), which are also at the core of Naive Discrimination Learning (Baayen et al., 2011, <doi:10.1037/a0023851>), can be used to explain bottom-up human learning (Hoppe et al., <doi:10.31234/osf.io/py5kd>), but is also at the core of artificial neural network applications in the form of the delta rule. This package provides a set of functions for building small-scale simulations to investigate the dynamics of error-driven learning and its interaction with the structure of the input. For modeling error-driven learning with the Rescorla-Wagner equations, the package 'ndl' (Baayen et al., 2011, <doi:10.1037/a0023851>) is available on CRAN at <https://cran.r-project.org/package=ndl>. However, that package currently only allows tracing of a single cue-outcome combination, rather than returning the learned networks. To fill this gap, we implemented a new package with a few functions that facilitate inspection of the networks in small error-driven learning simulations. Note that our functions are not optimized for training large data sets (no parallel processing), as they are intended for small-scale simulations and course examples. (Consider the Python implementation 'pyndl' <https://pyndl.readthedocs.io/en/latest/> for that purpose.)
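The Rescorla-Wagner update that drives these simulations can be sketched in a few lines. The following is a minimal, language-agnostic illustration (written in Python here, not the package's own R implementation); the function name `rw_update` and the dictionary-based weight representation are ours, while `alpha` (cue salience), `beta1`/`beta2` (learning rates for present/absent outcomes), and `lambda_` (maximum activation) follow the conventional notation:

```python
def rw_update(weights, cues, outcomes, all_outcomes,
              alpha=0.1, beta1=0.1, beta2=0.1, lambda_=1.0):
    """One Rescorla-Wagner learning event.

    weights:      dict mapping (cue, outcome) pairs to connection weights
    cues:         cues present in this learning event
    outcomes:     outcomes present in this learning event
    all_outcomes: every outcome known to the network
    """
    for outcome in all_outcomes:
        # Activation of an outcome: summed weights of all cues present.
        activation = sum(weights.get((c, outcome), 0.0) for c in cues)
        if outcome in outcomes:
            # Outcome present: push activation toward lambda (positive evidence).
            delta = alpha * beta1 * (lambda_ - activation)
        else:
            # Outcome absent: push activation toward zero (negative evidence).
            delta = alpha * beta2 * (0.0 - activation)
        # All present cues share the same weight change (cue competition).
        for c in cues:
            weights[(c, outcome)] = weights.get((c, outcome), 0.0) + delta
    return weights

# Repeated exposure to the outcome "plural" after the cue "hands"
# strengthens that connection, approaching lambda with diminishing steps
# (about 0.096 after 10 events with these default parameters).
w = {}
for _ in range(10):
    w = rw_update(w, cues=["hands"], outcomes=["plural"],
                  all_outcomes=["plural", "singular"])
```

Because the error term shrinks as the activation approaches `lambda_`, the weight changes diminish over learning events, which is the learning-curve behavior these simulation functions let you inspect.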