Functions to Interact with the 'FAIR Data Pipeline'
add_read
add_write
check_config
check_dataproduct_exists
check_datetime
Check if entry exists in the data registry
check_field
check_fields
check_handle
check_integer
check_local_repo
check_string
Check if table exists
check_yaml_write
Clean query
create_config
create_index
Create version number
Download source file from database
Download source file from URL
extract_id
fair_init
fair_run
fdp_resolve_read
fdp_resolve_write
fdp-class
Finalise code run
Find matching read aliases in config file
Find matching write aliases in config file
findme
get_author_url
Get H5 file components
get_dataproduct
Get entity from URL
Return all fields associated with a table entry in the data registry
Return all entries posted to a table in the data registry
Get fields from table
Calculate hash from file
Get current GitHub hash
Get ID
get_index
get_max_version
Get storage location from URL
Get optional fields
Get queryable fields
Get readable fields
Get required fields
Get writable fields
Get tables from registry
get_token
Get URL
List files in GitHub repository
increment_filename
Initialise code run
Check whether fields are queryable
Link path to external format data
Link path for external format data
Post entry to author table
Post entry to code_repo_release table
Post entry to code_run table
Post entry to data_product table
Post entry to external_object table
Post entry to file_type table
Post entry to issue table
Post entry to keyword table
Post entry to licence table
Post entry to namespace table
Post entry to object_component table
Post entry to object table
Post entry to quality_controlled table
Post entry to storage_location table
Post entry to storage_root table
Post entry to user_author table
Check whether paper exists
Patch entry in data registry
Post entry to data registry
Raise issue with config file
Raise issue with remote repository
Raise issue with submission script
raise_issue
random_hash
rDataPipeline
Read array component from HDF5 file
Read distribution component from TOML file
Read estimate component from TOML file
Read table component from HDF5 file
register_issue_dataproduct
register_issue_script
remove_empty_parents
resolve_read
resolve_version
resolve_data_product
Validate fields
Write array component to HDF5 file
Write distribution component to TOML file
Write estimate component to TOML file
Write table component to HDF5 file
R implementation of the 'FAIR Data Pipeline API'. The 'FAIR Data Pipeline' is intended to enable tracking of the provenance of FAIR (findable, accessible, interoperable, and reusable) data used in epidemiological modelling.
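Most of the topics indexed above are internal helpers; the user-facing workflow they support is to initialise a code run, read and write data-product components, and then finalise the run so provenance is recorded in the registry. A minimal sketch of that workflow follows, based on the listed topics ("Initialise code run", "Read array component from HDF5 file", "Write array component to HDF5 file", "Finalise code run"); the exact function signatures and argument names are assumptions and should be checked against the package reference.

```r
# Hedged sketch of a FAIR Data Pipeline code run in R.
# Function names and arguments are assumed; consult the
# rDataPipeline reference for the released signatures.
library(rDataPipeline)

# Register a code run against a user-written config file and
# submission script, returning a handle for subsequent calls
handle <- initialise(config = "config.yaml", script = "script.sh")

# Read an array component from an HDF5-backed data product
# (data product and component names here are placeholders)
dat <- read_array(handle = handle,
                  data_product = "example/namespace/input",
                  component = "array")

# Write a derived array component back as a new data product
write_array(array = dat,
            handle = handle,
            data_product = "example/namespace/output",
            component = "array",
            description = "Copy of the input array")

# Record the code run and post its metadata to the data registry
finalise(handle)
```

The handle returned by the initialisation step threads through every read and write, which is what lets the finalisation step assemble the full provenance record for the run.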