rDataPipeline 0.60.0 package

Functions to Interact with the 'FAIR Data Pipeline'

add_read
add_write
check_config
check_dataproduct_exists
check_datetime
check_exists: Check if entry exists in the data registry
check_field
check_fields
check_handle
check_integer
check_local_repo
check_string
check_table_exists: Check if table exists
check_yaml_write
clean_query: Clean query
create_config
create_index
create_version_number: Create version number
download_from_database: Download source file from database
download_from_url: Download source file from URL
extract_id
fair_init
fair_run
fdp_resolve_read
fdp_resolve_write
fdp-class
finalise: Finalise code run
find_read_match: Find matching read aliases in config file
find_write_match: Find matching write aliases in config file
findme
get_author_url
get_components: Get HDF5 file components
get_dataproduct
get_entity: Get entity from URL
get_entry: Return all fields associated with a table entry in the data registry
get_existing: Return all entries posted to a table in the data registry
get_fields: Get fields from table
get_file_hash: Calculate hash from file
get_github_hash: Get current GitHub hash
get_id: Get ID
get_index
get_max_version
get_storage_location: Get storage location from URL
get_table_optional: Get optional fields
get_table_queryable: Get queryable fields
get_table_readable: Get readable fields
get_table_required: Get required fields
get_table_writable: Get writable fields
get_tables: Get tables from registry
get_token
get_url: Get URL
github_files: List files in GitHub repository
increment_filename
initialise: Initialise code run
is_queryable: Check whether fields are queryable
link_read: Link path to external format data
link_write: Link path for external format data
new_author: Post entry to author table
new_code_repo_release: Post entry to code_repo_release table
new_code_run: Post entry to code_run table
new_data_product: Post entry to data_product table
new_external_object: Post entry to external_object table
new_file_type: Post entry to file_type table
new_issue: Post entry to issue table
new_keyword: Post entry to keyword table
new_licence: Post entry to licence table
new_namespace: Post entry to namespace table
new_object_component: Post entry to object_component table
new_object: Post entry to object table
new_quality_controlled: Post entry to quality_controlled table
new_storage_location: Post entry to storage_location table
new_storage_root: Post entry to storage_root table
new_user_author: Post entry to user_author table
paper_exists: Check whether paper exists
patch_data: Patch entry in data registry
post_data: Post entry to data registry
raise_issue_config: Raise issue with config file
raise_issue_repo: Raise issue with remote repository
raise_issue_script: Raise issue with submission script
raise_issue
random_hash
rDataPipeline-package: rDataPipeline
read_array: Read array component from HDF5 file
read_distribution: Read distribution component from TOML file
read_estimate: Read estimate component from TOML file
read_table: Read table component from HDF5 file
register_issue_dataproduct
register_issue_script
remove_empty_parents
resolve_read
resolve_version
resolve_write: resolve_data_product
validate_fields: Validate fields
write_array: Write array component to HDF5 file
write_distribution: Write distribution component to TOML file
write_estimate: Write estimate component to TOML file
write_table: Write table component to HDF5 file

R implementation of the 'FAIR Data Pipeline API'. The 'FAIR Data Pipeline' is intended to enable tracking of the provenance of FAIR (findable, accessible, interoperable, and reusable) data used in epidemiological modelling.
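A typical code run uses the core user-facing functions listed above: initialise to open a code run, read_array and write_array to exchange HDF5 components with the registry, and finalise to record metadata and close the run. The sketch below is illustrative only; the data product and component names are invented, and argument names are assumptions based on the function titles, so they may differ from the released API.

```r
library(rDataPipeline)

# `fair run` produces a working config file; pass it, together with the
# submission script, to open a code run and obtain a handle.
handle <- initialise(config = "config.yaml", script = "script.sh")

# Read an array component from a registered data product
# ("example/input" and "component" are hypothetical names).
input <- read_array(handle = handle,
                    data_product = "example/input",
                    component = "component")

# Write a derived array back to the registry as a new data product.
write_array(array = input * 2,
            handle = handle,
            data_product = "example/output",
            component = "component",
            description = "Doubled input values",
            dimension_names = list(rowvalue = rownames(input),
                                   colvalue = colnames(input)))

# Record run metadata in the data registry and close the code run.
finalise(handle)
```

Note that this requires a local FAIR data registry to be installed and running, and is normally executed via `fair run` from the fair command-line tool rather than invoked directly.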

  • Maintainer: Ryan Field
  • License: GPL (>= 3)
  • Last published: 2024-10-08