@@ -17,20 +17,25 @@ image is based on the HAPI FHIR
 [Synthea](https://synthea.mitre.org/downloads) stored in the container itself.
 To load this dataset into the HAPI FHIR image, do the following:
 
-1. Run a local version of the HAPI FHIR server:
+1. Run a local version of the HAPI FHIR server. Note that by default it uses
+   an in-memory database; since we want to persist the uploaded data, we
+   change the configuration through the following environment variables:
 
    ```
-   docker run --rm -d -p 8080:8080 --name hapi_fhir hapiproject/hapi:latest
+   docker run --rm -d -p 8080:8080 --name hapi-fhir-add-synthea \
+     -e spring.datasource.url='jdbc:h2:file:/app/data/hapi_db;DB_CLOSE_ON_EXIT=FALSE;AUTO_RECONNECT=TRUE' \
+     -e spring.jpa.hibernate.ddl-auto=update \
+     hapiproject/hapi:latest
    ```
 
-2. Download the `1K Sample Synthetic Patient Records, FHIR R4` dataset:
+2. Download the `1K+ Sample Synthetic Patient Records, FHIR R4` dataset:
 
    ```
    wget https://synthetichealth.github.io/synthea-sample-data/downloads/synthea_sample_data_fhir_r4_sep2019.zip \
      -O fhir.zip
    ```
 
-3. Unzip the file, a directory named `fhir` should be created containig JSON
+3. Unzip the file; a directory named `fhir` should be created containing JSON
    files:
 
    ```
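+   These two settings are ordinary Spring Boot properties passed as
+   environment variables; for reference only (no file is needed when using
+   the `-e` flags above), the equivalent `application.yaml` fragment would be:
+
+   ```yaml
+   spring:
+     datasource:
+       url: 'jdbc:h2:file:/app/data/hapi_db;DB_CLOSE_ON_EXIT=FALSE;AUTO_RECONNECT=TRUE'
+     jpa:
+       hibernate:
+         ddl-auto: update
+   ```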
@@ -39,8 +44,15 @@ To load this dataset into the HAPI FHIR image, do the following:
 
 4. Use the Synthetic Data Uploader from the
    [FHIR Analytics](https://github.com/GoogleCloudPlatform/openmrs-fhir-analytics/tree/master/synthea-hiv)
-   repo to upload the files into the HAPI FHIR container
-   `docker run -it --network=host \ -e SINK_TYPE="HAPI" \ -e FHIR_ENDPOINT=http://localhost:8080/fhir \ -e INPUT_DIR="/workspace/output/fhir" \ -e CORES="--cores 1" \ -v $(pwd)/fhir:/workspace/output/fhir \ us-docker.pkg.dev/cloud-build-fhir/fhir-analytics/synthea-uploader:latest`
+   repo to upload the files into the HAPI FHIR container. Note that instead
+   of uploading all patients, you can pick a small subset; in that case,
+   adjust `INPUT_DIR` accordingly. Using the whole dataset increases the
+   container init time by a few minutes (and slows down the e2e tests that
+   depend on it):
+
+   ```
+   docker run -it --network=host \
+     -e SINK_TYPE="HAPI" \
+     -e FHIR_ENDPOINT=http://localhost:8080/fhir \
+     -e INPUT_DIR="/workspace/output/fhir" \
+     -e CORES="--cores 1" \
+     -v $(pwd)/fhir:/workspace/output/fhir \
+     us-docker.pkg.dev/cloud-build-fhir/fhir-analytics/synthea-uploader:latest
+   ```
 
 5. As the uploader uses `POST` to upload the JSON files, the server will create
    the ID used to refer to resources. We would like to upload a patient list
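+   To sanity-check the upload, you can ask the server how many `Patient`
+   resources it now holds via a standard FHIR `_summary=count` search (a
+   sketch; the port assumes the container started in step 1):
+
+   ```
+   curl -s 'http://localhost:8080/fhir/Patient?_summary=count'
+   ```
+
+   The response is a `Bundle` resource whose `total` field reports the number
+   of matching resources.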
@@ -82,7 +94,7 @@ To load this dataset into the HAPI FHIR image, do the following:
 7. Commit the Docker container. This saves its state into a new image
 
    ```
-   docker commit hapi_fhir us-docker.pkg.dev/fhir-proxy-build/stable/hapi-synthea:latest
+   docker commit hapi-fhir-add-synthea us-docker.pkg.dev/fhir-proxy-build/stable/hapi-synthea:latest
    ```
 
 8. Push the image