Response generated in 34ms.

HTTP 200 OK

Response Headers

Date: Wed, 30 Nov 2022 21:55:50 GMT
X-Powered-By: HAPI FHIR 6.3.2-SNAPSHOT/1ea3ff5094/2022-11-27 REST Server (FHIR Server; FHIR 4.0.1/R4; Raccoon Nation Edition)
Content-Type: application/x-turtle;charset=utf-8
X-Request-ID: PQ7w3Bzs0l7hfGAO
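The `Content-Type: application/x-turtle` header above is the result of FHIR content negotiation: the server chooses the wire format from the request's `Accept` header (HAPI also honours a `_format` query parameter with equivalent values). A minimal stdlib-only sketch of building such a request — the request is constructed but deliberately not sent, and the URL is the resource address from the response body below:

```python
import urllib.request

# Common FHIR R4 MIME types a HAPI server can negotiate. "application/x-turtle"
# is the value seen in this response's Content-Type header.
FHIR_MIME = {
    "json": "application/fhir+json",
    "xml": "application/fhir+xml",
    "turtle": "application/x-turtle",
}

def build_request(base_url, resource_ref, fmt="turtle"):
    """Build (but do not send) a GET request for one resource version,
    selecting the response format via the Accept header."""
    req = urllib.request.Request(f"{base_url}/{resource_ref}")
    req.add_header("Accept", FHIR_MIME[fmt])
    return req

req = build_request(
    "https://smilecdr.com/hapi-fhir/blog/fhir/baseR4",
    "Communication/20221117_hapi_fhir_6_2_0/_history/1",
)
print(req.get_header("Accept"))  # application/x-turtle
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) against a live server should yield a body like the one shown below.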

Response Body

@prefix fhir: <http://hl7.org/fhir/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sct: <http://snomed.info/id#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<https://smilecdr.com/hapi-fhir/blog/fhir/baseR4/Communication/20221117_hapi_fhir_6_2_0/_history/1>
rdf:type fhir:Communication ;
fhir:Communication.payload [ fhir:Communication.payload.contentString
[ fhir:value "<p>Welcome to the winter release of HAPI FHIR! Support has been added for FHIR R4B (4.3.0). See the <a href=\"/hapi-fhir/docs/getting_started/r4b.html\">R4B Documentation</a> for more information on what this means. Now onto the rest!</p>\n<h3>Breaking Changes</h3>\n<ul>\n<li>The <code>ActionRequestDetails</code> class has been dropped (it has been deprecated\nsince HAPI FHIR 4.0.0). This class was used as a parameter to the\n<code>SERVER_INCOMING_REQUEST_PRE_HANDLED</code> interceptor pointcut, but can be\nreplaced in any existing client code with <code>RequestDetails</code>. This change\nalso removes an undocumented behaviour where the JPA server internally\ninvoked the <code>SERVER_INCOMING_REQUEST_PRE_HANDLED</code> a second time from\nwithin various processing methods. This behaviour caused performance\nproblems for some interceptors (e.g. <code>SearchNarrowingInterceptor</code>) and\nno longer offers any benefit so it is being removed.</li>\n<li>Previously when ValueSets were pre-expanded after loinc terminology upload, expansion was failing with an exception for each ValueSet\nwith more than 10,000 properties. This problem has been fixed.\nThis fix changed some freetext mappings (definitions about how resources are freetext indexed) for terminology resources, which requires\nreindexing those resources. To do this use the <code>reindex-terminology</code> command.&quot;</li>\n<li>Removed Flyway database migration engine. The migration table still tracks successful and failed migrations\nto determine which migrations need to be run at startup. Database migrations no longer need to run differently when\nusing an older database version.</li>\n<li>The interceptor system has now deprecated the concept of ThreadLocal interceptors. 
This feature was\nadded for an anticipated use case, but has never seen any real use that we are aware of and removing it\nshould provide a minor performance improvement to the interceptor registry.</li>\n</ul>\n<h3>Security Changes</h3>\n<ul>\n<li>Upon hitting a subscription delivery failure, we currently log the failing payload which could be considered PHI. Resource content is no longer written to logs on subscription failure.</li>\n</ul>\n<h3>General Client/Server/Parser Changes</h3>\n<ul>\n<li>Previously, Celsius and Fahrenheit temperature quantities were not normalized. This is now fixed.\nThis change requires reindexing of resources containing Celsius or Fahrenheit temperature quantities.</li>\n<li>Fixed bug where searching with a target resource parameter (Coverage:payor:Patient) as value to an _include parameter would fail with a 500 response.</li>\n<li>Previously, DELETE request type is not supported for any operations. DELETE is now supported, and is enabled for operation $export-poll-status to allow cancellation of jobs</li>\n<li>Previously, when a client would provide a requestId within the source uri of a Meta.source, the provided requestId would get discarded and replaced by an id generated by the system. This has been corrected</li>\n<li>In the JPA server, when a resource is being updated, the response will now include any tags or\nsecurity labels which were not present in the request but were carried forward from the previous\nversion of the resource.</li>\n<li>Previously, if the Endpoint Base URL is set to something different from the default value, the URL that export-poll-status returned is incorrect.\nAfter correcting the export-poll-status URL, the binary file URL returned is also incorrect. 
This error has also been fixed and the URLs that are returned\nfrom $export and $export-poll-status will not contain the extra path from 'Fixed Value for Endpoint Base URL'.</li>\n<li>Previously, the <code>:nickname</code> qualifier only worked with the predefined <code>name</code> and <code>given</code> SearchParameters.\nThis has been fixed and now the <code>:nickname</code> qualifier can be used with any string SearchParameters.</li>\n<li>Previously, when executing a '[base]/_history' search, '_since' and '_at' shared the same behaviour. When a user searched for the date between the records' updated date with '_at', the record of '_at' time was not returned.\nThis has been corrected. '_since' query parameter works as it previously did, and the '_at' query parameter returns the record of '_at' time.</li>\n<li>Previously, creating a DSTU3 SearchParameter with an expression that does not start with a resource type would throw an error. This has been corrected.</li>\n<li>There was a bug in content-type negotiation when reading Binary resources. Previously, when a client requested a Binary resource and with an <code>Accept</code>\nheader that matched the <code>contentType</code> of the stored resource, the server would return an XML representation of the Binary resource. This has been fixed, and a request with a matching <code>Accept</code> header will receive\nthe stored binary data directly as the requested content type.</li>\n<li>A new built-in server interceptor called the InteractionBlockingInterceptor has been added. This interceptor\nallows individual operations to be included/excluded from a RestfulServer's exported capabilities.</li>\n<li>The OpenApi generator now allows additional CSS customization for the Swagger UI page, as well as the\noption to disable resource type pages.</li>\n<li>Modified BinaryAccessProvider to use a safer method of checking the contents of an input stream. 
Thanks to @ttntrifork for the fix!</li>\n<li>Fixed issue where adding a sort parameter to a query would return an incomplete result set.</li>\n<li>Added new attribute for the @Operation annotation to define the operation's canonical URL. This canonical URL value will populate\nthe operation definition in the CapabilityStatement resource.</li>\n<li>A new interceptor pointcut <code>STORAGE_TRANSACTION_PROCESSING</code> has been added. Hooks for this\npointcut can examine and modify FHIR transaction bundles being processed by the JPA server before\nprocessing starts.</li>\n<li>In the JPA server, when deleting a resource the associated tags were previously deleted even though\nthe FHIR specification states that tags are independent of FHIR versioning. After this fix, resource tags\nand security labels will remain when a resource is deleted. They can be fetched using the <code>$meta</code> operation\nagainst the deleted resource, and will remain if the resource is brought back in a subsequent update.</li>\n<li>Fixed a bug which caused a failure when combining a Consent Interceptor with version conversion via the <code>Accept</code> header.</li>\n</ul>\n<h3>Bulk Export</h3>\n<ul>\n<li>Previously, bulk export for Group type with _typeFilter did not apply the filter if it was for the patients, and returned all members of the group.\nThis has now been fixed, and the filter will apply.</li>\n<li>A regression was introduced in 6.1.0 which caused bulk export jobs to not default to the correct output format when the <code>_outputFormat</code> parameter was omitted. 
This behaviour has been fixed, and if omitted, will now default to the only legal value <code>application/fhir+ndjson</code>.</li>\n<li>Fixed a bug in Group Bulk Export where the server would crash in oracle due to too many clauses.</li>\n<li>Fixed a Group Bulk Export bug which was causing it to fail to return resources due to an incorrect search.</li>\n<li>Fixed a Group Bulk Export bug in which the group members would not be expanded correctly.</li>\n<li>Fixed a bug in Group Bulk Export: If a group member was part of multiple groups , it was causing other groups to be included during Group Bulk Export, if the Group resource type was specified. Now, when\ndoing an export on a specific group, and you elect to return Group resources, only the called Group will be returned, regardless of cross-membership.</li>\n<li>Previously, Patient Bulk Export only supported endpoint [fhir base]/Patient/$export, which exports all patients.\nNow, Patient Export can be done at the instance level, following this format: <code>[fhir base]/Patient/[id]/$export</code>, which will export only the records for one patient.\nAdditionally, added support for the <code>patient</code> parameter in Patient Bulk Export, which is another way to get the records of only one patient.</li>\n<li>Fixed the <code>$poll-export-status</code> endpoint so that when a job is complete, this endpoint now correctly includes the <code>request</code> and <code>requiresAccessToken</code> attributes.</li>\n<li>Fixed a bug where /Patient/$export would fail if _type=Patient parameter\nwas not included.</li>\n<li>Previously, Group Bulk Export did not support the inclusion of resources referenced in the resources in the patient compartment.\nThis is now supported.</li>\n<li>A previous fix resulted in Bulk Export files containing mixed resource types, which is\nnot allowed in the bulk data access IG. 
This has been corrected.</li>\n<li>A previous fix resulted in Bulk Export files containing duplicate resources, which is\nnot allowed in the bulk data access IG. This has been corrected.</li>\n<li>Previously, the number of resources per binary file in bulk export was a static 1000. This is now configurable by a new DaoConfig property called\n'setBulkExportFileMaximumCapacity()', and the default value is 1000 resources per file.</li>\n<li>By default, if the <code>$export</code> operation receives a request that is identical to one that has been recently\nprocessed, it will attempt to reuse the batch job from the former request. A new configuration parameter has been</li>\n<li>introduced that disables this behavior and forces a new batch job on every call.</li>\n<li>Bulk Group export was failing to export Patient resources when Client ID mode was set to: ANY. This has been fixed</li>\n<li>Previously, Bulk Export jobs were always reused, even if completed. Now, jobs are only reused if an identical job is already running, and has not yet completed or failed.</li>\n</ul>\n<h3>Other Operations</h3>\n<ul>\n<li>Extend $member-match to validate matched patient against family name and birthdate</li>\n<li><code>$mdm-submit</code> can now be run as a batch job, which will return a job ID, and can be polled for status. This can be accomplished by sending a <code>Prefer: respond-async</code> header with the request.</li>\n<li>Previously, if the <code>$reindex</code> operation failed with a <code>ResourceVersionConflictException</code> the related<br />\nbatch job would fail. This has been corrected by adding 10 retry attempts for transactions that have\nfailed with a <code>ResourceVersionConflictException</code> during the <code>$reindex</code> operation. In addition, the <code>ResourceIdListStep</code>\nwas submitting one more resource than expected (i.e. 1001 records processed during a <code>$reindex</code> operation if only 1000\n<code>Resources</code> were in the database). 
This has been corrected.</li>\n</ul>\n<h3>CLI Tool changes:</h3>\n<ul>\n<li>Added a new optional parameter to the <code>upload-terminology</code> operation of the HAPI-FHIR CLI. you can pass the <code>-s</code> or <code>--size</code> parameter to specify\nthe maximum size that will be transmitted to the server, before a local file reference is used. This parameter can be filled in using human-readable format, for example:\n<code>upload-terminology -s \\&quot;1GB\\&quot;</code> will permit zip files up to 1 gigabyte, and anything larger than that would default to using a local file reference.</li>\n<li>Previously, using the <code>import-csv-to-conceptmap</code> command in the CLI successfully created ConceptMap resources\nwithout a <code>ConceptMap.status</code> element, which is against the FHIR specification. This has been fixed by adding a required\noption for status for the command.</li>\n<li>Added support for -l parameter for providing a local validation profile in the HAPI FHIR CLI.</li>\n<li>Previously, when the upload-terminology command was used to upload a terminology file with endpoint validation enabled, a validation error occurred due to a missing file content type.\nThis has been fixed by specifying the file content type of the uploaded file.</li>\n<li>For SNOMED CT, upload-terminology now supports both Canadian and International edition's file names for the SCT Description File</li>\n<li>Documentation was added for <code>reindex-terminology</code> command.</li>\n</ul>\n<h3>JPA Server General Changes</h3>\n<ul>\n<li>Changed Minimum Size (bytes) in FHIR Binary Storage of the persistence module from an integer to a long. This will permit larger binaries.</li>\n<li>Previously, if a FullTextSearchSvcImpl was defined, but was disabled via configuration, there could be data loss when reindexing due to transaction rollbacks. This has been corrected. 
Thanks to @dyoung-work for the fix!</li>\n<li>Fixed a bug where the $everything operation on Patient instances and the Patient type was not correctly propagating the transactional semantics. This was causing callers to not be in a transactional context.</li>\n<li>Previously when updating a phonetic search parameter, any existing resource will not have its search parameter String updated upon reindex if the normalized String is the same letter as under the old algorithm (ex JN to JAN). Searching on the new normalized String was failing to return results. This has been corrected.</li>\n<li>Previously, when creating a <code>DocumentReference</code> with an <code>Attachment</code> containing a URL over 254 characters\nan error was thrown. This has been corrected and now an <code>Attachment</code> URL can be up to 500 characters.</li>\n</ul>\n<h3>JPA Server Performance Changes</h3>\n<ul>\n<li>Initial page loading has been optimized to reduce the number of prefetched resources. This should improve the speed of initial search queries in many cases.</li>\n<li>Cascading deletes don't work correctly if multiple threads initiate a delete at the same time. Either the resource won't be found or there will be a collision on inserting the new version. This changes fixes the problem by better handling these conditions to either ignore an already deleted resource or to keep retrying in a new inner transaction.</li>\n<li>When using SearchNarrowingInterceptor, FHIR batch operations with a large number\nof conditional create/update entries exhibited very slow performance due to an\nunnecessary nested loop. This has been corrected.</li>\n<li>When using ForcedOffsetSearchModeInterceptor, any synchronous searches initiated programmatically (i.e. through the\ninternal java API, not the REST API) will not be modified. 
This prevents issues when a java call requests a synchronous\nsearch larger than the default offset search page size</li>\n<li>Previously, when using _offset, the queries will result in short pages, and repeats results on different pages.\nThis has now been fixed.</li>\n<li>Processing for <code>_include</code> and <code>_revinclude</code> parameters in the JPA server has been streamlined, which should\nimprove performance on systems where includes are heavily used.</li>\n</ul>\n<h3>Database-specific Changes</h3>\n<ul>\n<li>Database migration steps were failing with Oracle 19C. This has been fixed by allowing the database engine to skip dropping non-existent indexes.</li>\n</ul>\n<h3>Terminology Server, Fulltext Search, and Validation Changes</h3>\n<ul>\n<li>With Elasticsearch configured, including terminology, an exception was raised while expanding a ValueSet\nwith more than 10,000 concepts. This has now been fixed.</li>\n<li>Previously when ValueSets were pre-expanded after loinc terminology upload, expansion was failing with an exception for each ValueSet\nwith more than 10,000 properties. This problem has been fixed.\nThis fix changed some freetext mappings (definitions about how resources are freetext indexed) for terminology resources, which requires\nreindexing those resources. To do this use the <code>reindex-terminology</code> command.&quot;</li>\n<li>Added support for AWS OpenSearch to Fulltext Search. If an AWS Region is configured, HAPI-FHIR will assume you intend to connect to an AWS-managed OpenSearch instance, and will use\nAmazon's <a href=\"https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html\">DefaultAwsCredentialsProviderChain</a> to authenticate against it. If both username and password are provided, HAPI-FHIR will attempt to use them as a static credentials provider.</li>\n<li>Search for strings with <code>:text</code> qualifier was not performing advanced search. 
This has been corrected.</li>\n<li>LOINC terminology upload process was enhanced to consider 24 additional properties which were defined in\nloinc.csv file but not uploaded.</li>\n<li>LOINC terminology upload process was enhanced by loading <code>MAP_TO</code> properties defined in MapTo.csv input file to TermConcept(s).</li>\n</ul>\n<h3>MDM (Master Data Management)</h3>\n<ul>\n<li>MDM messages were using the resource id as a message key when it should be using the EID as a partition hash key.\nThis could lead to duplicate golden resources on systems using Kafka as a message broker.</li>\n<li><code>$mdm-submit</code> can now be run as a batch job, which will return a job ID, and can be polled for status. This can be accomplished by sending a <code>Prefer: respond-async</code> header with the request.</li>\n</ul>\n<h3>Batch Framework</h3>\n<ul>\n<li>Fast-tracking batch jobs that produced only one chunk has been rewritten to use Quartz triggerJob. This will\nensure that at most one thread is updating job status at a time. Also jobs that had FAILED, ERRORED, or been CANCELLED\ncould be accidentally set back to IN_PROGRESS; this has been corrected</li>\n<li>All Spring Batch dependencies and services have been removed. Async processing has fully migrated to Batch 2.</li>\n<li>In HAPI-FHIR 6.1.0, a regression was introduced into bulk export causing exports beyond the first one to fail in strange ways. This has been corrected.</li>\n<li>A remove method has been added to the Batch2 job registry. This will allow for dynamic job registration\nin the future.</li>\n<li>Batch2 jobs were incorrectly prevented from transitioning from ERRORED to COMPLETE status.</li>\n</ul>\n<h3>Package Registry</h3>\n<ul>\n<li>Provided the ability to have the NPM package installer skip installing a package if it is already installed and matches the version requested. This can be controlled by\nthe <code>reloadExisting</code> attribute in PackageInstallationSpec. 
It defaults to <code>true</code>, which is the existing behaviour. Thanks to Craig McClendon (@XcrigX) for the contribution!</li>\n</ul>\n" ] ;
fhir:index 0
] ;
fhir:Communication.sender [ fhir:Reference.display [ fhir:value "James Agnew" ] ;
fhir:Reference.reference [ fhir:value "https://smilecdr.com/about_us/#james" ]
] ;
fhir:Communication.sent [ fhir:value "2022-11-17T08:00:00-05:00"^^xsd:dateTime ] ;
fhir:Communication.status [ fhir:value "completed" ] ;
fhir:DomainResource.text [ fhir:Narrative.div "<div xmlns=\"http://www.w3.org/1999/xhtml\"><p>Welcome to the winter release of HAPI FHIR! Support has been added for FHIR R4B (4.3.0). See the <a href=\"/hapi-fhir/docs/getting_started/r4b.html\">R4B Documentation</a> for more information on what this means. Now onto the rest!</p>\n<h3>Breaking Changes</h3>\n<ul>\n<li>The <code>ActionRequestDetails</code> class has been dropped (it has been deprecated\nsince HAPI FHIR 4.0.0). This class was used as a parameter to the\n<code>SERVER_INCOMING_REQUEST_PRE_HANDLED</code> interceptor pointcut, but can be\nreplaced in any existing client code with <code>RequestDetails</code>. This change\nalso removes an undocumented behaviour where the JPA server internally\ninvoked the <code>SERVER_INCOMING_REQUEST_PRE_HANDLED</code> a second time from\nwithin various processing methods. This behaviour caused performance\nproblems for some interceptors (e.g. <code>SearchNarrowingInterceptor</code>) and\nno longer offers any benefit so it is being removed.</li>\n<li>Previously when ValueSets were pre-expanded after loinc terminology upload, expansion was failing with an exception for each ValueSet\nwith more than 10,000 properties. This problem has been fixed.\nThis fix changed some freetext mappings (definitions about how resources are freetext indexed) for terminology resources, which requires\nreindexing those resources. To do this use the <code>reindex-terminology</code> command.&quot;</li>\n<li>Removed Flyway database migration engine. The migration table still tracks successful and failed migrations\nto determine which migrations need to be run at startup. Database migrations no longer need to run differently when\nusing an older database version.</li>\n<li>The interceptor system has now deprecated the concept of ThreadLocal interceptors. 
This feature was\nadded for an anticipated use case, but has never seen any real use that we are aware of and removing it\nshould provide a minor performance improvement to the interceptor registry.</li>\n</ul>\n<h3>Security Changes</h3>\n<ul>\n<li>Upon hitting a subscription delivery failure, we currently log the failing payload which could be considered PHI. Resource content is no longer written to logs on subscription failure.</li>\n</ul>\n<h3>General Client/Server/Parser Changes</h3>\n<ul>\n<li>Previously, Celsius and Fahrenheit temperature quantities were not normalized. This is now fixed.\nThis change requires reindexing of resources containing Celsius or Fahrenheit temperature quantities.</li>\n<li>Fixed bug where searching with a target resource parameter (Coverage:payor:Patient) as value to an _include parameter would fail with a 500 response.</li>\n<li>Previously, DELETE request type is not supported for any operations. DELETE is now supported, and is enabled for operation $export-poll-status to allow cancellation of jobs</li>\n<li>Previously, when a client would provide a requestId within the source uri of a Meta.source, the provided requestId would get discarded and replaced by an id generated by the system. This has been corrected</li>\n<li>In the JPA server, when a resource is being updated, the response will now include any tags or\nsecurity labels which were not present in the request but were carried forward from the previous\nversion of the resource.</li>\n<li>Previously, if the Endpoint Base URL is set to something different from the default value, the URL that export-poll-status returned is incorrect.\nAfter correcting the export-poll-status URL, the binary file URL returned is also incorrect. 
This error has also been fixed and the URLs that are returned\nfrom $export and $export-poll-status will not contain the extra path from 'Fixed Value for Endpoint Base URL'.</li>\n<li>Previously, the <code>:nickname</code> qualifier only worked with the predefined <code>name</code> and <code>given</code> SearchParameters.\nThis has been fixed and now the <code>:nickname</code> qualifier can be used with any string SearchParameters.</li>\n<li>Previously, when executing a '[base]/_history' search, '_since' and '_at' shared the same behaviour. When a user searched for the date between the records' updated date with '_at', the record of '_at' time was not returned.\nThis has been corrected. '_since' query parameter works as it previously did, and the '_at' query parameter returns the record of '_at' time.</li>\n<li>Previously, creating a DSTU3 SearchParameter with an expression that does not start with a resource type would throw an error. This has been corrected.</li>\n<li>There was a bug in content-type negotiation when reading Binary resources. Previously, when a client requested a Binary resource and with an <code>Accept</code>\nheader that matched the <code>contentType</code> of the stored resource, the server would return an XML representation of the Binary resource. This has been fixed, and a request with a matching <code>Accept</code> header will receive\nthe stored binary data directly as the requested content type.</li>\n<li>A new built-in server interceptor called the InteractionBlockingInterceptor has been added. This interceptor\nallows individual operations to be included/excluded from a RestfulServer's exported capabilities.</li>\n<li>The OpenApi generator now allows additional CSS customization for the Swagger UI page, as well as the\noption to disable resource type pages.</li>\n<li>Modified BinaryAccessProvider to use a safer method of checking the contents of an input stream. 
Thanks to @ttntrifork for the fix!</li>\n<li>Fixed issue where adding a sort parameter to a query would return an incomplete result set.</li>\n<li>Added new attribute for the @Operation annotation to define the operation's canonical URL. This canonical URL value will populate\nthe operation definition in the CapabilityStatement resource.</li>\n<li>A new interceptor pointcut <code>STORAGE_TRANSACTION_PROCESSING</code> has been added. Hooks for this\npointcut can examine and modify FHIR transaction bundles being processed by the JPA server before\nprocessing starts.</li>\n<li>In the JPA server, when deleting a resource the associated tags were previously deleted even though\nthe FHIR specification states that tags are independent of FHIR versioning. After this fix, resource tags\nand security labels will remain when a resource is deleted. They can be fetched using the <code>$meta</code> operation\nagainst the deleted resource, and will remain if the resource is brought back in a subsequent update.</li>\n<li>Fixed a bug which caused a failure when combining a Consent Interceptor with version conversion via the <code>Accept</code> header.</li>\n</ul>\n<h3>Bulk Export</h3>\n<ul>\n<li>Previously, bulk export for Group type with _typeFilter did not apply the filter if it was for the patients, and returned all members of the group.\nThis has now been fixed, and the filter will apply.</li>\n<li>A regression was introduced in 6.1.0 which caused bulk export jobs to not default to the correct output format when the <code>_outputFormat</code> parameter was omitted. 
This behaviour has been fixed, and if omitted, will now default to the only legal value <code>application/fhir+ndjson</code>.</li>\n<li>Fixed a bug in Group Bulk Export where the server would crash in oracle due to too many clauses.</li>\n<li>Fixed a Group Bulk Export bug which was causing it to fail to return resources due to an incorrect search.</li>\n<li>Fixed a Group Bulk Export bug in which the group members would not be expanded correctly.</li>\n<li>Fixed a bug in Group Bulk Export: If a group member was part of multiple groups , it was causing other groups to be included during Group Bulk Export, if the Group resource type was specified. Now, when\ndoing an export on a specific group, and you elect to return Group resources, only the called Group will be returned, regardless of cross-membership.</li>\n<li>Previously, Patient Bulk Export only supported endpoint [fhir base]/Patient/$export, which exports all patients.\nNow, Patient Export can be done at the instance level, following this format: <code>[fhir base]/Patient/[id]/$export</code>, which will export only the records for one patient.\nAdditionally, added support for the <code>patient</code> parameter in Patient Bulk Export, which is another way to get the records of only one patient.</li>\n<li>Fixed the <code>$poll-export-status</code> endpoint so that when a job is complete, this endpoint now correctly includes the <code>request</code> and <code>requiresAccessToken</code> attributes.</li>\n<li>Fixed a bug where /Patient/$export would fail if _type=Patient parameter\nwas not included.</li>\n<li>Previously, Group Bulk Export did not support the inclusion of resources referenced in the resources in the patient compartment.\nThis is now supported.</li>\n<li>A previous fix resulted in Bulk Export files containing mixed resource types, which is\nnot allowed in the bulk data access IG. 
This has been corrected.</li>\n<li>A previous fix resulted in Bulk Export files containing duplicate resources, which is\nnot allowed in the bulk data access IG. This has been corrected.</li>\n<li>Previously, the number of resources per binary file in bulk export was a static 1000. This is now configurable by a new DaoConfig property called\n'setBulkExportFileMaximumCapacity()', and the default value is 1000 resources per file.</li>\n<li>By default, if the <code>$export</code> operation receives a request that is identical to one that has been recently\nprocessed, it will attempt to reuse the batch job from the former request. A new configuration parameter has been</li>\n<li>introduced that disables this behavior and forces a new batch job on every call.</li>\n<li>Bulk Group export was failing to export Patient resources when Client ID mode was set to: ANY. This has been fixed</li>\n<li>Previously, Bulk Export jobs were always reused, even if completed. Now, jobs are only reused if an identical job is already running, and has not yet completed or failed.</li>\n</ul>\n<h3>Other Operations</h3>\n<ul>\n<li>Extend $member-match to validate matched patient against family name and birthdate</li>\n<li><code>$mdm-submit</code> can now be run as a batch job, which will return a job ID, and can be polled for status. This can be accomplished by sending a <code>Prefer: respond-async</code> header with the request.</li>\n<li>Previously, if the <code>$reindex</code> operation failed with a <code>ResourceVersionConflictException</code> the related<br/>\nbatch job would fail. This has been corrected by adding 10 retry attempts for transactions that have\nfailed with a <code>ResourceVersionConflictException</code> during the <code>$reindex</code> operation. In addition, the <code>ResourceIdListStep</code>\nwas submitting one more resource than expected (i.e. 1001 records processed during a <code>$reindex</code> operation if only 1000\n<code>Resources</code> were in the database). 
This has been corrected.</li>\n</ul>\n<h3>CLI Tool changes:</h3>\n<ul>\n<li>Added a new optional parameter to the <code>upload-terminology</code> operation of the HAPI-FHIR CLI. you can pass the <code>-s</code> or <code>--size</code> parameter to specify\nthe maximum size that will be transmitted to the server, before a local file reference is used. This parameter can be filled in using human-readable format, for example:\n<code>upload-terminology -s \\&quot;1GB\\&quot;</code> will permit zip files up to 1 gigabyte, and anything larger than that would default to using a local file reference.</li>\n<li>Previously, using the <code>import-csv-to-conceptmap</code> command in the CLI successfully created ConceptMap resources\nwithout a <code>ConceptMap.status</code> element, which is against the FHIR specification. This has been fixed by adding a required\noption for status for the command.</li>\n<li>Added support for -l parameter for providing a local validation profile in the HAPI FHIR CLI.</li>\n<li>Previously, when the upload-terminology command was used to upload a terminology file with endpoint validation enabled, a validation error occurred due to a missing file content type.\nThis has been fixed by specifying the file content type of the uploaded file.</li>\n<li>For SNOMED CT, upload-terminology now supports both Canadian and International edition's file names for the SCT Description File</li>\n<li>Documentation was added for <code>reindex-terminology</code> command.</li>\n</ul>\n<h3>JPA Server General Changes</h3>\n<ul>\n<li>Changed Minimum Size (bytes) in FHIR Binary Storage of the persistence module from an integer to a long. This will permit larger binaries.</li>\n<li>Previously, if a FullTextSearchSvcImpl was defined, but was disabled via configuration, there could be data loss when reindexing due to transaction rollbacks. This has been corrected. 
Thanks to @dyoung-work for the fix!</li>\n<li>Fixed a bug where the <code>$everything</code> operation on Patient instances and the Patient type was not correctly propagating transactional semantics. This was causing callers to not be in a transactional context.</li>\n<li>Previously, when updating a phonetic search parameter, an existing resource would not have its search parameter String updated upon reindex if the normalized String had the same first letter as under the old algorithm (e.g. JN to JAN). Searching on the new normalized String was failing to return results. This has been corrected.</li>\n<li>Previously, when creating a <code>DocumentReference</code> with an <code>Attachment</code> containing a URL over 254 characters,\nan error was thrown. This has been corrected, and now an <code>Attachment</code> URL can be up to 500 characters.</li>\n</ul>\n<h3>JPA Server Performance Changes</h3>\n<ul>\n<li>Initial page loading has been optimized to reduce the number of prefetched resources. This should improve the speed of initial search queries in many cases.</li>\n<li>Cascading deletes did not work correctly if multiple threads initiated a delete at the same time: either the resource would not be found, or there would be a collision on inserting the new version. This change fixes the problem by better handling these conditions, either ignoring an already deleted resource or retrying in a new inner transaction.</li>\n<li>When using SearchNarrowingInterceptor, FHIR batch operations with a large number\nof conditional create/update entries exhibited very slow performance due to an\nunnecessary nested loop. This has been corrected.</li>\n<li>When using ForcedOffsetSearchModeInterceptor, any synchronous searches initiated programmatically (i.e. through the\ninternal Java API, not the REST API) will not be modified. 
This prevents issues when a Java call requests a synchronous\nsearch larger than the default offset search page size.</li>\n<li>Previously, when using <code>_offset</code>, queries would result in short pages and repeat results on different pages.\nThis has now been fixed.</li>\n<li>Processing for <code>_include</code> and <code>_revinclude</code> parameters in the JPA server has been streamlined, which should\nimprove performance on systems where includes are heavily used.</li>\n</ul>\n<h3>Database-specific Changes</h3>\n<ul>\n<li>Database migration steps were failing with Oracle 19C. This has been fixed by allowing the database engine to skip dropping non-existent indexes.</li>\n</ul>\n<h3>Terminology Server, Fulltext Search, and Validation Changes</h3>\n<ul>\n<li>With Elasticsearch configured, including terminology, an exception was raised while expanding a ValueSet\nwith more than 10,000 concepts. This has now been fixed.</li>\n<li>Previously, when ValueSets were pre-expanded after LOINC terminology upload, expansion was failing with an exception for each ValueSet\nwith more than 10,000 properties. This problem has been fixed.\nThis fix changed some freetext mappings (definitions of how resources are freetext indexed) for terminology resources, which requires\nreindexing those resources. To do this, use the <code>reindex-terminology</code> command.</li>\n<li>Added support for AWS OpenSearch to Fulltext Search. If an AWS Region is configured, HAPI FHIR will assume you intend to connect to an AWS-managed OpenSearch instance, and will use\nAmazon's <a href=\"https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html\">DefaultAwsCredentialsProviderChain</a> to authenticate against it. If both username and password are provided, HAPI FHIR will attempt to use them as a static credentials provider.</li>\n<li>Searching for strings with the <code>:text</code> qualifier was not performing an advanced search. 
This has been corrected.</li>\n<li>The LOINC terminology upload process was enhanced to consider 24 additional properties which were defined in the\nloinc.csv file but not uploaded.</li>\n<li>The LOINC terminology upload process was enhanced by loading <code>MAP_TO</code> properties defined in the MapTo.csv input file into TermConcept(s).</li>\n</ul>\n<h3>MDM (Master Data Management)</h3>\n<ul>\n<li>MDM messages were using the resource id as a message key when they should have been using the EID as a partition hash key.\nThis could lead to duplicate golden resources on systems using Kafka as a message broker.</li>\n<li><code>$mdm-submit</code> can now be run as a batch job, which will return a job ID that can be polled for status. This can be accomplished by sending a <code>Prefer: respond-async</code> header with the request.</li>\n</ul>\n<h3>Batch Framework</h3>\n<ul>\n<li>Fast-tracking of batch jobs that produce only one chunk has been rewritten to use Quartz <code>triggerJob</code>. This will\nensure that at most one thread is updating job status at a time. Also, jobs that had FAILED, ERRORED, or been CANCELLED\ncould be accidentally set back to IN_PROGRESS; this has been corrected.</li>\n<li>All Spring Batch dependencies and services have been removed. Async processing has fully migrated to Batch 2.</li>\n<li>In HAPI FHIR 6.1.0, a regression was introduced into bulk export, causing exports beyond the first one to fail in strange ways. This has been corrected.</li>\n<li>A remove method has been added to the Batch2 job registry. This will allow for dynamic job registration\nin the future.</li>\n<li>Batch2 jobs were incorrectly prevented from transitioning from ERRORED to COMPLETE status.</li>\n</ul>\n<h3>Package Registry</h3>\n<ul>\n<li>Provided the ability to have the NPM package installer skip installing a package if it is already installed and matches the requested version. This can be controlled by\nthe <code>reloadExisting</code> attribute in PackageInstallationSpec. 
It defaults to <code>true</code>, which is the existing behaviour. Thanks to Craig McClendon (@XcrigX) for the contribution!</li>\n</ul>\n</div>" ;
fhir:Narrative.status [ fhir:value "generated" ]
] ;
fhir:Resource.id [ fhir:value "20221117_hapi_fhir_6_2_0" ] ;
fhir:Resource.language [ fhir:value "en" ] ;
fhir:Resource.meta [ fhir:Meta.tag [ fhir:Coding.code [ fhir:value "Release" ] ;
fhir:Coding.system [ fhir:value "https://smilecdr.com/hapi-fhir/blog/tag"^^xsd:anyURI ] ;
fhir:index 0
] ;
fhir:Meta.versionId [ fhir:value "1" ]
] ;
fhir:nodeRole fhir:treeRoot .