DataLad provides fine-grained data access down to the level of individual files, and allows for tracking future updates. It is a free and open source command line tool, available for all major operating systems, and builds upon Git and git-annex to share, synchronize, and version control collections of large files.
Get the Dataset
The dataset can be cloned by running:
datalad clone https://github.com/psychoinformatics-de/studyforrest-data
Once the dataset is cloned, it is a lightweight directory on your local machine. At this point, it contains only metadata and information on the identity of the files in the dataset, but not the actual content of the (sometimes large) data files.
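The lightweight clone can be inspected before any content is downloaded. A minimal sketch, assuming DataLad and git-annex are installed and using the clone URL above:

```shell
# Clone the dataset; only metadata is transferred, not file content
datalad clone https://github.com/psychoinformatics-de/studyforrest-data
cd studyforrest-data

# Annexed files appear as broken symlinks until their content is retrieved;
# this reports annex information for the dataset
datalad status --annex
```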
Retrieve Dataset Content
After cloning the dataset, you can retrieve file contents by running:
datalad get path/to/directory/or/file
This command will trigger a download of the files, directories, or subdatasets you have specified.
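For larger retrievals, downloads can be parallelized, and content that is no longer needed locally can be dropped again to free disk space without losing the ability to re-obtain it. A sketch, assuming a recent DataLad version; the paths are placeholders as above:

```shell
# Retrieve a directory's content using 4 parallel download jobs
datalad get -J 4 path/to/directory

# Later, drop the local content again; the metadata stays in place,
# so the same files can be re-retrieved with 'datalad get' at any time
datalad drop path/to/directory
```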
This dataset contains other datasets, so-called subdatasets. If you clone only the top-level dataset, the subdatasets do not yet contain metadata or information on the identity of their files, and appear to be empty directories. In order to retrieve file availability metadata in a subdataset, run:
datalad get -n path/to/subdataset
Afterwards, you can browse the retrieved metadata to find out about the subdataset's contents, and retrieve individual files with datalad get. If you run datalad get path/to/subdataset without the -n flag, all contents of the subdataset will be downloaded at once.
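Putting the two modes together, a typical subdataset workflow might look like this (the paths are placeholders, matching the notation above):

```shell
# Step 1: install the subdataset and fetch only its metadata (-n = no data)
datalad get -n path/to/subdataset

# Step 2: browse the now-visible file tree and retrieve individual files
datalad get path/to/subdataset/some/file

# Alternatively, omit -n to download the entire subdataset's content at once
datalad get path/to/subdataset
```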
All data are released to the public under the ODC Public Domain Dedication and Licence (PDDL). Offering these data for download, or providing access through other means, is encouraged; we only ask that you add a reference to this website. To help us maintain a comprehensive overview of entities hosting these data, or any derived data artifacts, please let us know at email@example.com what data access you are providing.
How to Cite
If you use these data, please follow good scientific practice and cite any relevant publications. A list of all publications can be found on the Publications Page.
We are grateful to our data hosting providers for their support, sponsored bandwidth, and storage capacity.