{ "resourceType": "Bundle", "id": "dae635b6-1525-4371-86c8-ec0c2d2c49ea", "meta": { "lastUpdated": "2024-03-19T01:33:23.622+00:00" }, "type": "searchset", "total": 16, "link": [ { "relation": "self", "url": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication" } ], "entry": [ { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20221117_hapi_fhir_6_2_0", "resource": { "resourceType": "Communication", "id": "20221117_hapi_fhir_6_2_0", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Release" } ] }, "language": "en", "text": { "status": "generated", "div": "
Welcome to the winter release of HAPI FHIR! Support has been added for FHIR R4B (4.3.0). See the R4B Documentation for more information on what this means. Now onto the rest!
\nThe ActionRequestDetails class has been dropped (it has been deprecated since HAPI FHIR 4.0.0). This class was used as a parameter to the SERVER_INCOMING_REQUEST_PRE_HANDLED interceptor pointcut, but can be replaced in any existing client code with RequestDetails. This change also removes an undocumented behaviour where the JPA server internally invoked SERVER_INCOMING_REQUEST_PRE_HANDLED a second time from within various processing methods. This behaviour caused performance problems for some interceptors (e.g. SearchNarrowingInterceptor) and no longer offers any benefit, so it is being removed.
\n…the reindex-terminology command.
\nPreviously, the :nickname qualifier only worked with the predefined name and given SearchParameters. This has been fixed, and now the :nickname qualifier can be used with any string SearchParameter.
\nPreviously, when a Binary resource was requested with an Accept header that matched the contentType of the stored resource, the server would return an XML representation of the Binary resource. This has been fixed, and a request with a matching Accept header will receive the stored binary data directly as the requested content type.
\nA new pointcut, STORAGE_TRANSACTION_PROCESSING, has been added. Hooks for this pointcut can examine and modify FHIR transaction bundles being processed by the JPA server before processing starts.
\n…the $meta operation against the deleted resource, and will remain if the resource is brought back in a subsequent update.
\n…Accept header.
\n…when the _outputFormat parameter was omitted. This behaviour has been fixed, and if omitted, it will now default to the only legal value, application/fhir+ndjson.
\nAdded support for [fhir base]/Patient/[id]/$export, which will export only the records for one patient. Additionally, added support for the patient parameter in Patient Bulk Export, which is another way to get the records of only one patient.
\nUpdated the $poll-export-status endpoint so that when a job is complete, this endpoint now correctly includes the request and requiresAccessToken attributes.
\nWhen the $export operation receives a request that is identical to one that has been recently processed, it will attempt to reuse the batch job from the former request. A new configuration parameter has been added to control this behaviour.
\n$mdm-submit can now be run as a batch job, which will return a job ID that can be polled for status. This can be accomplished by sending a Prefer: respond-async header with the request.
\nFixed an issue where the $reindex operation failed with a ResourceVersionConflictException, and improved the related handling of ResourceVersionConflictException during the $reindex operation. In addition, the ResourceIdListStep was submitting one more resource than expected (i.e. 1001 records processed during a $reindex operation if only 1000 resources were in the database). This has been corrected.
\nIn the upload-terminology operation of the HAPI-FHIR CLI, you can pass the -s or --size parameter to specify the maximum size that will be transmitted to the server before a local file reference is used. This parameter can be filled in using a human-readable format; for example, upload-terminology -s \\"1GB\\" will permit zip files up to 1 gigabyte, and anything larger than that will default to using a local file reference.
\nPreviously, the import-csv-to-conceptmap command in the CLI successfully created ConceptMap resources without a ConceptMap.status element, which is against the FHIR specification. This has been fixed by adding a required status option to the command.
\n…the reindex-terminology command.
\nPreviously, when saving a DocumentReference with an Attachment containing a URL over 254 characters, an error was thrown. This has been corrected, and now an Attachment URL can be up to 500 characters.
\nProcessing of the _include and _revinclude parameters in the JPA server has been streamlined, which should improve performance on systems where includes are heavily used.
\n…the reindex-terminology command.
\nPreviously, the :text qualifier was not performing advanced search. This has been corrected.
\nAdded support for mapping MAP_TO properties defined in the MapTo.csv input file to TermConcept(s).
\nAdded a reloadExisting attribute in PackageInstallationSpec. It defaults to true
, which is the existing behaviour. Thanks to Craig McClendon (@XcrigX) for the contribution!
\nWelcome to a new major release of HAPI FHIR! Since there are many breaking changes, we are making this a major release.
\n…tenantID. This issue has been fixed.
\nA new pointcut, STORAGE_PRESTORAGE_CLIENT_ASSIGNED_ID, has been added, which is invoked when a user attempts to create a resource with a client-assigned ID.
\nWhen searching with _revinclude, the results sometimes incorrectly included resources that were reverse included by other search parameters with the same name. Thanks to GitHub user @vivektk84 for reporting and to Jean-Francois Briere for proposing a fix.
\n…:not-in queries.
\n…a code:in or code:not-in expression, for mandating that results must be in a specified list of codes.
\nA GET for a resource with _total=accurate and _summary=count with the consent service enabled should throw an InvalidRequestException. This issue has been fixed.
\n…the not-equal (ne) prefix.
\n…IDomainResource. This caused a few resource types to be missed. This has been corrected, and the resource type is now set on the id element for all IBaseResource instances instead.
\n…JpaStorageSettings.setStoreResourceInLuceneIndex().
\n…the _has
parameter.
\nWelcome to the winter-ish release of HAPI-FHIR 5.6.0!
\nHAPI FHIR 5.6.0 (Codename: Raccoon) brings a whole bunch of great new features, bugfixes, and more.
\nHighlights of this release are shown below. See the Changelog for a complete list. There will be a live Webinar (recording available on-demand afterward) on August 18 2021. Details available here: https://www.smilecdr.com/quarterly-product-release-webinar-reminder
\n…nl instead of nl-DE or nl-NL.
\nThe % symbol was causing searches to fail to return results. This has been corrected.
\nThe _language search parameter has been dropped.
\nAn _id parameter has been added to the Patient/$everything type-level operation so you can narrow down a specific list of patient IDs to export.
\n_mdm parameter support has been added to the $everything operation.
\nThe $mdm-clear operation has been refactored to use Spring Batch.
\n…the $mdm-create-link
operation.
\nAnother quarter gone by, another HAPI-FHIR release.
\nHAPI FHIR 5.5.0 (Codename: Quasar) brings a whole bunch of great new features, bugfixes, and more.
\nHighlights of this release are shown below. See the Changelog for a complete list. There will be a live Webinar (recording available on-demand afterward) on August 18 2021. Details available here: https://www.smilecdr.com/quarterly-product-release-webinar-reminder
\nSupport has been added for wildcard includes such as _include=Observation:*.
\nA new setting, Tag Versioning Mode, determines how tags are maintained.
\nA new interceptor, ForceOffsetSearchModeInterceptor, has been added. This interceptor forces all searches to be offset searches, instead of relying on the query cache.
\nA new $reindex operation creates a Spring Batch job to reindex selected resources.
\nA limit has been introduced on the number of _include and _revinclude resources that can be added to a single search page result. In addition, the include/revinclude processor has been redesigned to avoid accidentally overloading the server if an include/revinclude would return unexpectedly massive amounts of data.
\nWhen searches are not cached (e.g. with Cache-Control: no-store), the loading of _include and _revinclude
will now factor in the maximum include count.
\nIt's time for another release of HAPI FHIR.
\nHAPI FHIR 5.4.0 (Codename: Pangolin) brings a whole bunch of great new features, bugfixes, and more.
\nHighlights of this release are shown below. See the Changelog for a complete list. There will be a live Webinar (recording available on-demand afterward) on May 20 2021. Details available here: https://www.smilecdr.com/quarterly-product-release-webinar-reminder
\nSupport for the Prefer: handling=lenient header has been added via an optional interceptor.
\nThe _list search parameter has been added to the JPA server.
\nA :contained modifier has been added, allowing searches to select from data in contained resources found within the resource being searched. Note that this feature is disabled by default and must be enabled if needed.
\n…Resource.meta.
\nA new X-Upsert-Extistence-Check header (note there is a typo in the name; this will be addressed in the next release of HAPI FHIR, so please be aware if you are planning on using this feature) can be added, which avoids existence checks when using client-assigned IDs to create new records. This can speed up performance.
\nMDM-aware searching: Observation?patient:mdm=Patient/123 can be used to search for Observation resources belonging to Patient/123
but also to other MDM-linked patient records.
\nIt's August, so it's time for our next quarterly release: HAPI FHIR 5.2.0 (Codename: Numbat).
\nSecurity Notice:
\nMajor New Features:
\nThe JPA SearchBuilder (which turns FHIR searches into SQL statements to be executed by the database) has been completely rewritten to not use Hibernate. This allows for much more efficient SQL to be generated in some cases. For some specific queries on a very large test repository running on Postgresql this new search builder performed 10x faster. Note that this new module is enabled by default in HAPI FHIR 5.2.0 but can be disabled via a JpaStorageSettings setting. It is disabled by default in Smile CDR 2020.11.R01 but will be enabled by default in the next major release.
\nSupport for RDF Turtle encoding has been added, finally bringing native support for the 3rd official FHIR encoding to HAPI FHIR. This support was contributed by Josh Collins and Eric Prud'hommeaux of the company Janeiro Digital. We greatly appreciate the contribution! To see an example of the RDF encoding: hapi.fhir.org/baseR4/Patient?_format=rdf
\nTerminology Enhancements:
\nIntegration with remote terminology services has been improved so that required bindings to closed valuesets are no longer delegated to the remote terminology server. This improves performance since there is no need for remote services in this case.
\nThe CodeSystem/$validate-code
operation has been implemented for R4+ JPA servers.
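As a quick illustration, a client can invoke this operation with a simple GET; the base URL and the LOINC code below are illustrative, not part of the release notes:

```python
from urllib.parse import urlencode

# Illustrative base URL; substitute your own JPA server endpoint.
BASE = "http://hapi.fhir.org/baseR4"

def validate_code_url(system_url: str, code: str) -> str:
    """Build a type-level CodeSystem/$validate-code request URL."""
    query = urlencode({"url": system_url, "code": code})
    return f"{BASE}/CodeSystem/$validate-code?{query}"

url = validate_code_url("http://loinc.org", "8867-4")
# The response is a Parameters resource with a boolean "result"
# parameter (and a "display" parameter when the code is found).
```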
The JPA Terminology Server is now version aware, meaning that multiple versions of a single CodeSystem can now be stored in a single FHIR terminology server repository. ValueSet expansion, CodeSystem lookup, and ConceptMap translation are all now fully version aware. Note that implementing support for fully versioned terminology is mostly complete, but some validation operations may still not work. This should be completed by our next major release.
\nValueSet expansion with filtering (e.g. using the filter
parameter on the $expand
operation) has now been implemented in such a way that it fully supports filtering on pre-expanded ValueSets, including using offsets and counts. This is a major improvement for people building picker UIs leveraging the $expand operation.
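A sketch of what such a filtered, paged expansion request can look like (the ValueSet URL and paging values here are illustrative):

```python
from urllib.parse import urlencode

BASE = "http://hapi.fhir.org/baseR4"  # illustrative base URL

def expand_url(valueset_url: str, filter_text: str, offset: int, count: int) -> str:
    """Build a ValueSet/$expand request that filters the expansion
    and pages through the matching codes."""
    query = urlencode({
        "url": valueset_url,     # canonical URL of the ValueSet to expand
        "filter": filter_text,   # text filter applied to the expansion
        "offset": offset,        # number of matching codes to skip
        "count": count,          # page size
    })
    return f"{BASE}/ValueSet/$expand?{query}"

url = expand_url("http://hl7.org/fhir/ValueSet/administrative-gender", "fem", 0, 10)
```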
EMPI Improvements:
\nIdentifier matchers have been added, providing native FHIR support for matching on resource identifiers
\nThe $empi-clear
operation performance has been greatly improved
Other Notable Improvements:
\nA new combined "delete+expunge" mode has been added to the DELETE operation in the JPA server. This mode deletes resources and expunges (physically deletes) them in a single fast operation. Note that this mode must be explicitly enabled, and that it completely bypasses the interceptor hooks that notify registered listeners that data is being deleted and expunged. It is several orders of magnitude faster when deleting large sets of data, and is generally intended for test scenarios.
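In HAPI's JPA server this is requested by adding the _expunge=true parameter to a conditional DELETE URL. A minimal sketch, assuming that parameter name and an illustrative base URL and search criteria (the mode must also be enabled in the server's storage settings):

```python
from urllib.parse import urlencode

BASE = "http://localhost:8080/fhir"  # illustrative server base

def delete_expunge_url(resource_type: str, criteria: dict) -> str:
    """Build a conditional DELETE URL asking the server to delete the
    matching resources and physically expunge them in one step."""
    params = dict(criteria)
    params["_expunge"] = "true"  # HAPI-specific switch for delete+expunge
    return f"{BASE}/{resource_type}?{urlencode(params)}"

url = delete_expunge_url("Patient", {"identifier": "http://example.org/mrn|12345"})
```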
\nThe Package Server module now supports installing non-conformance resources from packages.
\nThe _typeFilter
parameter has been implemented for the $bulk-export module.
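Putting the bulk-export pieces together, a kickoff request combining _type, _typeFilter, and _outputFormat might be sketched like this (the base URL and filter values are illustrative; the kickoff must be asynchronous):

```python
from urllib.parse import urlencode

BASE = "http://localhost:8080/fhir"  # illustrative base URL

def bulk_export_request(types, type_filters):
    """Build the URL and headers for a system-level $export kickoff,
    restricting both the exported resource types and the rows within them."""
    query = urlencode({
        "_type": ",".join(types),
        "_typeFilter": ",".join(type_filters),
        "_outputFormat": "application/fhir+ndjson",
    })
    headers = {
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",  # bulk export kickoff is asynchronous
    }
    return f"{BASE}/$export?{query}", headers

url, headers = bulk_export_request(["Patient"], ["Patient?active=true"])
```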
As always, see the changelog for a full list of changes.
\nThanks to everyone who contributed to this release!
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20200813_hapi_fhir_5_1_0", "resource": { "resourceType": "Communication", "id": "20200813_hapi_fhir_5_1_0", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Release" } ] }, "language": "en", "text": { "status": "generated", "div": "It's August, so it's time for our next quarterly release: HAPI FHIR 5.1.0 (Codename: Manticore).
\nNotable changes in this release include:
\nAn XSS vulnerability has been fixed in the testpage overlay project. This issue affects only the testpage overlay module, but users of this module should upgrade immediately. A CVE number for this issue has been requested and will be updated here when it is assigned.
\nSupport for the new FHIR NPM Package spec has been added. Currently this support is limited to JPA servers, and support should be added to plain servers in the next release. Packages can be imported on startup, either by supplying NPM files locally or by downloading them automatically from an NPM server such as packages.fhir.org. Package contents (the StructureDefinition, CodeSystem, ValueSet, etc. resources in the package) can be installed into the repository, or can be stored in a dedicated set of tables and made available to the validator without actually being installed in the repository.
\nSupport for the Observation/$lastn
operation has been implemented thanks to a partnership with LHNCBC/NIH. This operation uses ElasticSearch to support querying for recent Observations over a set of test codes for one or more patients in a very efficient way.
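The shape of a $lastn query, with parameter names as defined by the FHIR specification (the base URL and patient ID are illustrative):

```python
from urllib.parse import urlencode

BASE = "http://localhost:8080/fhir"  # illustrative; requires the ElasticSearch-backed setup

def lastn_url(patient: str, category: str, max_per_code: int) -> str:
    """Build an Observation/$lastn query for the most recent N
    Observations per code for a single patient."""
    query = urlencode({
        "patient": patient,
        "category": category,
        "max": max_per_code,  # how many most-recent Observations per code
    })
    return f"{BASE}/Observation/$lastn?{query}"

url = lastn_url("Patient/123", "vital-signs", 3)
```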
The FHIR PATCH operation now supports FHIRPatch in addition to the already supported XML and JSON Patch specs. FHIRPatch is a very expressive mechanism for creating patches and can be used to supply very precise patches.
\nA new operation called $diff has been added. Diff can be used to generate a FHIRPatch diff between two resources, or between two versions of the same resource. For example: http://hapi.fhir.org/baseR4/Patient/example/$diff
Several performance problems and occasional failures in the resource expunge operation have been corrected
\nThe memory use for Subscription delivery queues has been reduced
\nSnapshot generation now uses a single snapshot generator codebase for generating snapshots across all versions of FHIR. This makes ongoing maintenance much easier and fixes a number of version-specific bugs.
\nThe maximum cascade depth for cascading deletes is now configurable.
\nAuthorizationInterceptor can now fully authorize GraphQL calls, including allowing/blocking individual resources returned by the graph.
\nGraphQL now supports POST form (thanks to Kholilul Islam!)
\nThe LOINC uploader now supports LOINC 2.68
\nA new batch job framework has been introduced, leveraging the Spring Batch library. Initial jobs to use this new framework are the Bulk Export and EMPI modules, but eventually all long processes will be adapted to use this new framework.
\nThe HAPI FHIR built-in Terminology Server now includes support for validating UCUM (units of measure), BCP-13 (mimetypes), ISO 4217 (currencies), ISO 3166 (countries), and USPS State Codes.
\nIt is now possible to disable referential integrity for delete operations for specific reference paths.
\nA regression has been fixed that significantly degraded validation performance in the JPA server for validation of large numbers of resources.
\nUnit tests have been migrated to JUnit 5. This change has no user visible impacts, but will help us continue to improve ongoing maintenance of our test suites.
\nAs always, see the changelog for a full list of changes.
\nThanks to everyone who contributed to this release!
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20200602_hapi_fhir_5_0_2", "resource": { "resourceType": "Communication", "id": "20200602_hapi_fhir_5_0_2", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Release" } ] }, "language": "en", "text": { "status": "generated", "div": "A second point release of the HAPI FHIR 5.0 (Labrador) release cycle has been pushed to Maven Central.
\nThis release corrects only two issues:
\nA snapshot dependency was accidentally left in the POM for HAPI FHIR 5.0.1. This has been corrected.
\nThe default setting for the new partitioning feature "Add Partition to Search Indexes" was incorrectly set to enabled (true). It has been set to false, which was the intended default for this setting.
\nIt's time for another release of HAPI FHIR!
\nThis release brings some good stuff, including:
\nA new feature called Partitioning has been added to the JPA server. This can be used to implement multitenancy, as well as other partitioned/segregated/sharded use cases.
\nThe IValidationSupport interface has been completely redesigned for better flexibility, extensibility and to enable future use cases. Any existing implementations of this interface will need to be adjusted.
\nMany improvements to performance have been implemented
\nFHIR R5 draft definitions have been updated to the latest FHIR 4.2.0 (Preview 2) definitions
\nThe Gson JSON parser has been replaced with Jackson for better flexibility and performance
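To make the Partitioning feature above more concrete: conceptually, each incoming request is resolved to a partition before anything touches storage. The sketch below is a self-contained illustration of that routing idea only; the tenant names and registry are hypothetical, and the real JPA server resolves partitions through interceptors rather than a static map:

```java
import java.util.*;

public class PartitionRoutingSketch {
    // Hypothetical tenant-name -> partition-id registry.
    static final Map<String, Integer> PARTITIONS = Map.of("tenant-a", 1, "tenant-b", 2);

    // Pick the partition for a request based on a tenant header, falling back
    // to the default partition (id 0 here) when no tenant is supplied.
    static int resolvePartition(String tenantHeader) {
        if (tenantHeader == null) {
            return 0; // default partition
        }
        Integer id = PARTITIONS.get(tenantHeader);
        if (id == null) {
            throw new IllegalArgumentException("Unknown tenant: " + tenantHeader);
        }
        return id;
    }

    public static void main(String[] args) {
        // Prints 2: requests for tenant-b are stored in partition 2.
        System.out.println(resolvePartition("tenant-b"));
    }
}
```

Because every stored resource carries its partition id, the same mechanism supports multitenancy as well as other segregated/sharded use cases.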
\nAs always, see the changelog for a full list of changes.
\nThanks to everyone who contributed to this release!
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20200215_hapi_fhir_4_2_0", "resource": { "resourceType": "Communication", "id": "20200215_hapi_fhir_4_2_0", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Release" } ] }, "language": "en", "text": { "status": "generated", "div": "It's time for another release of HAPI FHIR!
\nThis release brings some good stuff, including:
\nA new database migrator for the JPA server has been introduced, based on FlywayDB.
\nA major performance enhancement has been added to the parser, which decreases the parse time when parsing large Bundle resources by up to 50%.
\nSupport for positional (near) search using geo-coordinates and positional distance has been added. This support currently uses a "bounding box" algorithm, and may be further enhanced to use a radius circle in the future.
\nSupport for LOINC 2.67 has been added
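The bounding-box approach behind the new positional (near) search can be sketched in a few lines. This is an illustrative, self-contained version of the idea (the coordinates below are arbitrary), not the actual HAPI FHIR implementation:

```java
public class NearSearchSketch {
    static final double EARTH_RADIUS_KM = 6371.0;

    // Compute a latitude/longitude box around (lat, lon) that contains every
    // point within distanceKm. A box overshoots at the corners, which is why
    // a radius-circle refinement is a possible future enhancement.
    static double[] boundingBox(double lat, double lon, double distanceKm) {
        double latDelta = Math.toDegrees(distanceKm / EARTH_RADIUS_KM);
        double lonDelta = Math.toDegrees(
                distanceKm / (EARTH_RADIUS_KM * Math.cos(Math.toRadians(lat))));
        return new double[] { lat - latDelta, lat + latDelta, lon - lonDelta, lon + lonDelta };
    }

    // Candidate filter: is (lat, lon) inside the box [minLat, maxLat, minLon, maxLon]?
    static boolean inBox(double[] box, double lat, double lon) {
        return lat >= box[0] && lat <= box[1] && lon >= box[2] && lon <= box[3];
    }

    public static void main(String[] args) {
        double[] box = boundingBox(43.65, -79.38, 10.0); // ~10 km around downtown Toronto
        System.out.println(inBox(box, 43.70, -79.40)); // nearby point: true
        System.out.println(inBox(box, 45.50, -73.57)); // Montreal: false
    }
}
```

The box comparisons are cheap range predicates, which is what makes this shape of query easy to index in a relational database.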
\nAs always, see the changelog for a full list of changes.
\nThanks to everyone who contributed to this release!
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20191113_hapi_fhir_4_1_0", "resource": { "resourceType": "Communication", "id": "20191113_hapi_fhir_4_1_0", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Release" } ] }, "language": "en", "text": { "status": "generated", "div": "It's time for another release of HAPI FHIR!
\nThis release brings some good stuff, including:
\nStructures JARs have been updated to incorporate the latest technical corrections. DSTU3 structures are upgraded to FHIR 3.0.2, R4 structures are upgraded to\tFHIR 4.0.1, and R5 draft structures are upgraded to the October 2019 draft revision.
\nValueSets are now automatically pre-expanded by the JPA server into a dedicated set of database tables. This "precalculated expansion" provides much better performance for validation and expansion operations, and introduces the ability to successfully expand very large ValueSets such as the LOINC implicit (all codes) ValueSet.
\nSupport for the FHIR Bulk Export specification has been added. We are now\tworking on adding support for Bulk Import!
\nFirst-order support has been added for ElasticSearch as a full-text and terminology service backend implementation. At this time, both raw Lucene and ElasticSearch are supported (this may change in the future, but we do not have any current plans to deprecate Lucene).
\nLive Terminology Service operations for terminology file maintenance based on delta files have been added.
\nBinary resources and Media/DocumentReference instances with binary attachments stored in the FHIR repository can now take advantage of externalized binary storage for the binary content when that feature is enabled. This allows much better scalability of repositories containing large amounts of binary content (e.g. document repositories).
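The precalculated ValueSet expansion mentioned above boils down to trading one up-front expansion for cheap lookups afterwards. Here is a minimal sketch of that idea; the ValueSet URL and codes are made up, and an in-memory Set stands in for the dedicated database tables:

```java
import java.util.*;

public class PreExpansionSketch {
    // Simulated pre-expansion table: valueset URL -> the flat set of codes
    // produced by expanding it once, up front.
    static final Map<String, Set<String>> EXPANSIONS = new HashMap<>();

    static void preExpand(String valueSetUrl, Collection<String> expandedCodes) {
        EXPANSIONS.put(valueSetUrl, new HashSet<>(expandedCodes));
    }

    // With the expansion precalculated, validate-code becomes a constant-time
    // membership check instead of a walk of the ValueSet's include/exclude rules.
    static boolean validateCode(String valueSetUrl, String code) {
        Set<String> codes = EXPANSIONS.get(valueSetUrl);
        return codes != null && codes.contains(code);
    }

    public static void main(String[] args) {
        preExpand("http://example.org/fhir/ValueSet/eye-colour", List.of("blue", "green", "brown"));
        // Prints true
        System.out.println(validateCode("http://example.org/fhir/ValueSet/eye-colour", "green"));
    }
}
```

For a very large ValueSet the expansion step is expensive, but it happens once in the background rather than on every validation call.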
\nAs always, see the changelog for a full list of changes.
\nThanks to everyone who contributed to this release!
\nAlso, as a reminder, if you have not already filled out our annual user survey, please take a moment to do so. Access the survey here:\thttp://bit.ly/33HO4cs (note that this URL was originally posted incorrectly. It is now fixed)
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20190814_hapi_fhir_4_0_0", "resource": { "resourceType": "Communication", "id": "20190814_hapi_fhir_4_0_0", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Release" } ] }, "language": "en", "text": { "status": "generated", "div": "The next release of HAPI has now been uploaded to the Maven repos and GitHub's releases section.
\nThis release features a number of significant performance improvements, and has some notable changes:
\nA new consent framework called ConsentInterceptor that can be used to apply local consent directives and policies, and potentially filter or mask data has been added.
\nInitial support for draft FHIR R5 resources has been added.
\nSupport for GraphQL and the _filter search parameter has been added.
\nThe ability to perform cascading deletes has been added.
\nAs always, see the changelog for a full list of changes.
\nThanks to everyone who contributed to this release!
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20180211_fhir_and_the_hype_cycle", "resource": { "resourceType": "Communication", "id": "20180211_fhir_and_the_hype_cycle", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Hype Cycle" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "HL7" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "HL7v2" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "FHIR" } ] }, "language": "en", "text": { "status": "generated", "div": "One of the things we often talk about in the FHIR standards development community is where FHIR currently sits on Gartner's Hype Cycle. The hype cycle is a coarse measure of the trajectory of new technologies on a journey from being "new and exciting silver bullets" to eventually being "boring useful technologies".
\nWhen you are a proponent of a new technology (as I certainly am with FHIR), probably the most important aspect to remember about the hype cycle is that you really only ever know where you are at any given time long after that time has passed. In other words, it's fun to ask yourself "have we passed the Peak of Inflated Expectations yet?" but you really won't know until much later.
\nSpeculating is perhaps a fool's errand. I probably shouldn't try but I can't help but wonder if we have passed the peak yet.
\nThe trajectory of HAPI FHIR's growth is interesting. FHIR has been growing over the last few years by all kinds of metrics. The connectathons keep getting bigger, the number of vendors participating keeps on getting bigger, and FHIR DevDays keeps on getting bigger.
\nIf I look at our website in Google Analytics, I am curious about the trajectory.
\n\nWhile HAPI FHIR has seen pretty steady growth over the last few years, that growth has been either tapering or at least very unstable over the last 8 months.
\nCertainly I don't think HAPI FHIR has stopped growing. The number of messages on the support forum and the number of people with big production implementations these days certainly doesn't suggest that; however, things have certainly been weird the last 8 months.
\nLet's look at interest in FHIR overall. The next thing to look at is the FHIR Google Trends graph, which measures the number of people searching for terms on Google (a pretty decent indicator of general interest). The following graph shows the last 4 years for FHIR.
\n\nIt would seem that FHIR itself saw a crazy explosion of interest back in May, too. That makes sense since FHIR R3 was released right before that peak.
\nLet's compare that with the graph for IHE. I don't think anyone would disagree that IHE sits firmly atop the Plateau of Productivity. Most people in the world of health informatics know what can be accomplished with IHE's profiles, and certainly I've worked with many organizations who use them to accomplish good things.
\nThe FHIR and IHE Graph shows interest in FHIR in BLUE and IHE in RED.
\n\nSo what can we take from this? I think the right side of the graph is quite interesting. FHIR itself has kind of levelled off recently and has hit similar metrics to those of a very productive organization.
\nI probably shouldn't attach too much meaning to these graphs, but I can't help but wonder...
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20170208_custom_search_parameters", "resource": { "resourceType": "Communication", "id": "20170208_custom_search_parameters", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "SearchParameters" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "HAPI" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Development" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "FHIR" } ] }, "language": "en", "text": { "status": "generated", "div": "HAPI FHIR's JPA Module lets you quickly set up a FHIR server, complete with a database for whatever purpose you might have.
\nOne of the most requested features in the last year has been for support of custom search parameters on that server. Out of the box, the JPA server has always supported the default/built-in search parameters that are defined in the FHIR specification.
\nThis means that if you store a Patient
resource in the database, the Patient.gender
field will be indexed with a search parameter called gender
, the Patient.birthDate
field will be indexed with a search parameter called birthdate
, etc.
To see a list of the default search parameters for a given resource, you can see a table near the bottom of any resource definition. For example, here are the Patient search parameters.
\nThe built-in parameters are great for lots of situations but if you're building a real application backend then you are probably going to come up with a need that the FHIR specification developers didn't anticipate (or one that doesn't meet FHIR's 80% rule).
\nThe solution for this is to introduce a custom search parameter. Search parameters are defined using a resource that is – unsurprisingly – called SearchParameter
. The idea is that you create one of these SearchParameter resources and give it a code
(the name of the URL parameter), a type
(the search parameter type), and an expression
(the FHIRPath expression which will actually be indexed).
In HAPI FHIR's JPA server, custom search parameters are indexed just like any other search parameter. A new mechanism has been introduced in HAPI FHIR 2.3 (to be released soon) that parses the expression, adds any new or updated search parameters to an internal registry of indexed paths, and marks any existing resources that are potential candidates for this new search parameter as requiring reindexing.
\nThis means that any newly added search parameters will cover resources added after the search parameter was added, and it will also cover older resources after the server has had a chance to reindex them.
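Conceptually, the mechanism described above amounts to updating a registry of indexed paths and flagging existing resources as reindexing candidates. The following is a minimal, hypothetical sketch of that bookkeeping, not the actual HAPI FHIR internals:

```java
import java.util.*;

public class ReindexSketch {
    // Indexed-path registry: resource type -> search parameter code -> FHIRPath expression.
    static final Map<String, Map<String, String>> REGISTRY = new HashMap<>();
    // Ids of resources already stored, by type, plus the set flagged for reindexing.
    static final Map<String, List<String>> STORED = new HashMap<>();
    static final Set<String> NEEDS_REINDEX = new HashSet<>();

    // Registering a new search parameter updates the registry and marks every
    // existing resource of the base type as a reindexing candidate.
    static void registerSearchParam(String baseType, String code, String expression) {
        REGISTRY.computeIfAbsent(baseType, t -> new HashMap<>()).put(code, expression);
        NEEDS_REINDEX.addAll(STORED.getOrDefault(baseType, List.of()));
    }

    public static void main(String[] args) {
        STORED.put("Patient", List.of("Patient/1", "Patient/2"));
        registerSearchParam("Patient", "eyecolour", "Patient.extension('http://acme.org/eyecolour')");
        // Prints true: older resources are queued for background reindexing.
        System.out.println(NEEDS_REINDEX.contains("Patient/1"));
    }
}
```

New writes are indexed against the updated registry immediately; the flagged set is what the background reindexing job works through.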
\nThis also means that you definitely want to make sure you have properly secured the /SearchParameter
endpoint since it can potentially cause your server to do a lot of extra work if there are a lot of resources present.
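One simple policy that follows from this advice is to allow writes to /SearchParameter only for administrators. The check below is a hypothetical, self-contained illustration of that policy, not the HAPI FHIR authorization API:

```java
public class SearchParamGuardSketch {
    // Only administrators may write to the /SearchParameter endpoint, because
    // each new or changed SearchParameter can trigger a lot of reindexing work.
    static boolean isRequestAllowed(String httpMethod, String path, boolean callerIsAdmin) {
        boolean isWrite = httpMethod.equals("POST") || httpMethod.equals("PUT")
                || httpMethod.equals("DELETE") || httpMethod.equals("PATCH");
        if (isWrite && (path.equals("/SearchParameter") || path.startsWith("/SearchParameter/"))) {
            return callerIsAdmin;
        }
        return true; // everything else is decided by the server's normal rules
    }

    public static void main(String[] args) {
        System.out.println(isRequestAllowed("POST", "/SearchParameter", false)); // false
        System.out.println(isRequestAllowed("GET", "/SearchParameter", false));  // true
    }
}
```

Reads stay open so clients can discover which parameters are available, while writes are limited to callers you trust to pay the reindexing cost.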
To show how this works, here is an example of a search parameter on an extension. We'll suppose that in our system we've defined an extension for patients' eye colour. Patient resources stored in our database will have the eye colour extension set, and we want to be able to search on this extension, too.
\n1. Create the Search Parameter
\nFirst, define a search parameter and upload it to your server. In Java, this looks as follows:
\n// Create a search parameter definition\nSearchParameter eyeColourSp = new SearchParameter();\neyeColourSp.addBase("Patient");\neyeColourSp.setCode("eyecolour");\neyeColourSp.setType(org.hl7.fhir.dstu3.model.Enumerations.SearchParamType.TOKEN);\neyeColourSp.setTitle("Eye Colour");\neyeColourSp.setExpression("Patient.extension('http://acme.org/eyecolour')");\neyeColourSp.setXpathUsage(org.hl7.fhir.dstu3.model.SearchParameter.XPathUsageType.NORMAL);\neyeColourSp.setStatus(org.hl7.fhir.dstu3.model.Enumerations.PublicationStatus.ACTIVE);\n// Upload it to the server\nclient\n\t.create()\n\t.resource(eyeColourSp)\n\t.execute();
\nThe resulting SearchParameter resource looks as follows:
\n{\n\t"resourceType": "SearchParameter",\n\t"title": "Eye Colour",\n\t"base": [ "Patient" ],\n\t"status": "active",\n\t"code": "eyecolour",\n\t"type": "token",\n\t"expression": "Patient.extension('http://acme.org/eyecolour')",\n\t"xpathUsage": "normal"\n}
\n2. Upload Some Resources
\nLet's upload two Patient resources with different eye colours.
\nPatient p1 = new Patient();\np1.setActive(true);\np1.addExtension().setUrl("http://acme.org/eyecolour").setValue(new CodeType("blue"));\nclient\n\t.create()\n\t.resource(p1)\n\t.execute();\nPatient p2 = new Patient();\np2.setActive(true);\np2.addExtension().setUrl("http://acme.org/eyecolour").setValue(new CodeType("green"));\nclient\n\t.create()\n\t.resource(p2)\n\t.execute();
\nHere's how one of these resources will look when encoded.
\n{\n "resourceType": "Patient",\n "extension": [\n {\n "url": "http://acme.org/eyecolour",\n "valueCode": "blue"\n }\n ],\n "active": true\n}
\n3. Search!
\nFinally, let's try searching:
\nBundle bundle = ourClient\n\t.search()\n\t.forResource(Patient.class)\n\t.where(new TokenClientParam("eyecolour").exactly().code("blue"))\n\t.returnBundle(Bundle.class)\n\t.execute();\nSystem.out.println(myFhirCtx.newJsonParser().setPrettyPrint(true).encodeResourceToString(bundle));
\nThis produces a search result that contains only the matching resource:
\n{\n "resourceType": "Bundle",\n "id": "bc89e883-b9f7-4745-8c2f-24bf9277664d",\n "meta": {\n "lastUpdated": "2017-02-07T20:30:05.445-05:00"\n },\n "type": "searchset",\n "total": 1,\n "link": [\n {\n "relation": "self",\n "url": "http://localhost:45481/fhir/context/Patient?eyecolour=blue"\n }\n ],\n "entry": [\n {\n "fullUrl": "http://localhost:45481/fhir/context/Patient/2",\n "resource": {\n "resourceType": "Patient",\n "id": "2",\n "meta": {\n "versionId": "1",\n "lastUpdated": "2017-02-07T20:30:05.317-05:00"\n },\n "text": {\n "status": "generated",\n "div": "<div xmlns=\\"http://www.w3.org/1999/xhtml\\"><table class=\\"hapiPropertyTable\\"><tbody/></table></div>"\n },\n "extension": [\n {\n "url": "http://acme.org/eyecolour",\n "valueCode": "blue"\n }\n ],\n "active": true\n },\n "search": {\n "mode": "match"\n }\n }\n ]\n}
\nNaturally, this feature will soon be available in Smile CDR. Previous versions of Smile CDR had a less elegant solution to this problem; however, now that we have a nice elegant approach to custom parameters that is based on FHIR's own way of handling this, Smile CDR users will see the benefits quickly.
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20170201_gitlab_and_exactly_how_to_handle_an_outage", "resource": { "resourceType": "Communication", "id": "20170201_gitlab_and_exactly_how_to_handle_an_outage", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Outages" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "DevOps" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Git" } ] }, "language": "en", "text": { "status": "generated", "div": "I love GitLab. Let's get that out of the way.
\nBack when I first joined the HAPI project, we were using CVS for version control, hosted on SourceForge. Sourceforge was at that point a pretty cool system. You got free project hosting for your open source project, a free website, and shell access to a server so you could run scripts, edit your raw website, and whatever else you needed to do. That last part has always amazed me; I've always wondered what lengths SourceForge must have had to go to in order to keep that system from being abused.
\nNaturally, we eventually discovered GitHub and happily moved over there – and HAPI FHIR remains a happy resident of GitHub. We're now in the progress of migrating the HAPI Hl7v2.x codebase over to a new home on GitHub, too.
\nThe Smile CDR team discovered GitLab about a year ago. We quickly fell in love: easy self-hosting, a UI that feels familiar to a GitHub user yet somehow slightly more powerful in each part you touch, and a compelling set of features in the enterprise edition as well once you are ready for them.
\nOn Tuesday afternoon, Diederik noticed that GitLab was behaving slowly. I was curious about it since GitLab's @gitlabstatus Twitter mentioned unknown issues affecting the site. As it turned out, their issues went from bad, to better, and then to much worse. Ultimately, they wound up being unavailable for all of last night and part of this morning.
\nGitLab's issues were slightly hilarious but also totally relatable to anyone building and deploying big systems for any length of time. TechCrunch has a nice writeup of the incident if you want the gory details. Let's just say they had slowness problems caused by a user abusing the system, and in trying to recover from that a sysadmin accidentally deleted a large amount of production data. Ultimately, he thought he was in a shell on one (bad) node and just removing a useless empty directory but he was actually in a shell on the (good) master node.
\nI read a few meltdowns about this on reddit today, calling the sysadmin inexperienced, inept, or worse, but I also saw a few people saying something that resonated with me much more: if you've never made a mistake on a big complicated production system, you've probably never worked on a big complicated production system.
\nThese things happen. The trick is being able to recover from whatever has gone wrong, no matter how bad things have gotten.
\nThis is where GitLab really won me over. Check their Twitter for yourself. There was no attempt to mince words. GitLab engineers were candid about what had happened from the second things went south.
\nGitLab opened a publicly readable Google Doc where all of the notes of their investigation could be read by anyone wanting to follow along. When it became clear that the recovery effort was going to be long and complicated, they opened a YouTube live stream of a conference bridge with their engineers chipping away at the recovery.
\nThey even opened a live chat with the stream so you could comment on their efforts. Watching it was great. I've been in their position many times in my life: tired from being up all night trying to fix something, and sitting on an endless bridge where I'm fixing one piece, waiting for others to fix theirs, and trying to keep morale up as best I can. GitLab's engineers did this, and they did it with cameras running.
\nSo this is the thing: I bet GitLab will be doing a lot of soul-searching in the next few days, and hopefully their tired engineers will get some rest soon. In the end, the inconvenience of this outage will be forgotten but I'm sure this won't be the last time I'll point to the way they handled a critical incident with complete transparency, and set my mind at ease that things were under control.
\nI love GitLab. Let's get that out of the way.
\nBack when I first joined the HAPI project, we were using CVS for version control, hosted on SourceForge. SourceForge was at that point a pretty cool system. You got free project hosting for your open source project, a free website, and shell access to a server so you could run scripts, edit your raw website, and do whatever else you needed to do. That last part has always amazed me; I've always wondered what lengths SourceForge must have had to go to in order to keep that system from being abused.
\nNaturally, we eventually discovered GitHub and happily moved over there – and HAPI FHIR remains a happy resident of GitHub. We're now in the process of migrating the HAPI HL7v2.x codebase over to a new home on GitHub, too.
\nThe Smile CDR team discovered GitLab about a year ago. We quickly fell in love: easy self-hosting, a UI that feels familiar to a GitHub user yet somehow slightly more powerful in every part you touch, and a compelling set of enterprise-edition features once you're ready for them.
\nOn Tuesday afternoon, Diederik noticed that GitLab was behaving slowly. I was curious about it since GitLab's @gitlabstatus Twitter mentioned unknown issues affecting the site. As it turned out, their issues went from bad, to better, and then to much worse. Ultimately, they wound up being unavailable for all of last night and part of this morning.
\nGitLab's issues were slightly hilarious but also totally relatable to anyone building and deploying big systems for any length of time. TechCrunch has a nice writeup of the incident if you want the gory details. Let's just say they had slowness problems caused by a user abusing the system, and in trying to recover from that a sysadmin accidentally deleted a large amount of production data. Ultimately, he thought he was in a shell on one (bad) node and just removing a useless empty directory but he was actually in a shell on the (good) master node.
\nI read a few meltdowns about this on Reddit today, calling the sysadmin inexperienced, inept, or worse, but I also saw a few people saying something that resonated with me much more: if you've never made a mistake on a big complicated production system, you've probably never worked on a big complicated production system.
\nThese things happen. The trick is being able to recover from whatever has gone wrong, no matter how bad things have gotten.
\nThis is where GitLab really won me over. Check their Twitter for yourself. There was no attempt to mince words. GitLab engineers were candid about what had happened from the second things went south.
\nGitLab opened a publicly readable Google Doc where all of the notes of their investigation could be read by anyone wanting to follow along. When it became clear that the recovery effort was going to be long and complicated, they opened a YouTube live stream of a conference bridge with their engineers chipping away at the recovery.
\nThey even opened a live chat with the stream so you could comment on their efforts. Watching it was great. I've been in their position many times in my life: tired from being up all night trying to fix something, and sitting on an endless bridge where I'm fixing one piece, waiting for others to fix theirs, and trying to keep morale up as best I can. GitLab's engineers did this, and they did it with cameras running.
\nSo this is the thing: I bet GitLab will be doing a lot of soul-searching in the next few days, and hopefully their tired engineers will get some rest soon. In the end, the inconvenience of this outage will be forgotten but I'm sure this won't be the last time I'll point to the way they handled a critical incident with complete transparency, and set my mind at ease that things were under control.
\n" } ] } }, { "fullUrl": "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20170117_hl7_wgm_san_antonio", "resource": { "resourceType": "Communication", "id": "20170117_hl7_wgm_san_antonio", "meta": { "versionId": "1", "tag": [ { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "HL7" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Meetings" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "Connectathon" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "FHIR" }, { "system": "https://smilecdr.com/hapi-fhir/blog/tag", "code": "WGM" } ] }, "language": "en", "text": { "status": "generated", "div": "It's January again, which of course means it's time for the January HL7 Working Group Meeting. As always, the first two days of the HL7 meeting brings FHIR Connectathon, and this was Connectathon 14.
\nI feel like every time I visit one of these meetings, the scale of the meeting astounds me and I can't imagine it being any bigger... and then that happens again the next time. The final tally at the September 2016 (Baltimore) Connectathon was 170 people. The final tally here in San Antonio was 209 so we continue to beat expectations.
\nI think we are finally passing the point where it's feasible to fit everyone in a half-size hotel ballroom. We may well face some hard decisions about whether the format still works or whether we need to turn people away in September.
\nAlso amazing to me was the number of new faces. On the first day, Ewout Kramer asked the room for anyone who was a first-time attendee to a FHIR Connectathon to raise their hand. It looked like about half the room raised their hand so we're really expanding the pool of interested people right now. Exciting days for FHIR!
\nMonday night brought our usual HAPI & .NET Users Group. We discussed a proposal we're working on for a template-based approach to automatic resource narrative generation. There will be more on that in a future post.
\n" } ] } } ] }