Welcome to the winter release of HAPI FHIR! Support has been added for FHIR R4B (4.3.0). See the R4B Documentation for more information on what this means. Now onto the rest!
The ActionRequestDetails class has been dropped (it has been deprecated since HAPI FHIR 4.0.0). This class was used as a parameter to the SERVER_INCOMING_REQUEST_PRE_HANDLED interceptor pointcut, but can be replaced in any existing client code with RequestDetails. This change also removes an undocumented behaviour where the JPA server internally invoked SERVER_INCOMING_REQUEST_PRE_HANDLED a second time from within various processing methods. This behaviour caused performance problems for some interceptors (e.g. SearchNarrowingInterceptor) and no longer offers any benefit, so it is being removed.
reindex-terminology command.
The :nickname qualifier only worked with the predefined name and given SearchParameters. This has been fixed, and the :nickname qualifier can now be used with any string SearchParameter.
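As an illustrative sketch (the parameter value here is hypothetical), the qualifier is simply appended to any string search parameter in the query URL:

```http
GET [base]/Patient?name:nickname=Bob
```

A search like this is expected to match related name variants (for example, Robert), subject to the server's nickname expansion data.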
Previously, when a Binary resource was requested with an Accept header that matched the contentType of the stored resource, the server would return an XML representation of the Binary resource. This has been fixed, and a request with a matching Accept header will receive the stored binary data directly as the requested content type.
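For example, assuming a stored Binary resource whose contentType is application/pdf, the corrected behaviour sketched here returns the raw content:

```http
GET [base]/Binary/[id]
Accept: application/pdf
```

The response now carries the stored bytes with Content-Type: application/pdf, rather than an XML-encoded Binary resource.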
A new pointcut, STORAGE_TRANSACTION_PROCESSING, has been added. Hooks for this pointcut can examine and modify FHIR transaction bundles being processed by the JPA server before processing starts.
Tags are no longer removed when a resource is deleted: they can be seen by invoking the $meta operation against the deleted resource, and will remain if the resource is brought back in a subsequent update.
Accept header.
Previously, the _outputFormat parameter was not handled correctly when omitted from a Bulk Export request. This behaviour has been fixed: if omitted, it will now default to the only legal value, application/fhir+ndjson.
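A minimal Bulk Export kick-off sketch (endpoint and headers per the FHIR Bulk Data specification; the _type value is just an example), showing that _outputFormat may now be omitted:

```http
GET [base]/$export?_type=Patient,Observation
Accept: application/fhir+json
Prefer: respond-async
```

With _outputFormat omitted, the server behaves as if _outputFormat=application/fhir+ndjson had been supplied.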
Support has been added for the patient-level Bulk Export endpoint [fhir base]/Patient/[id]/$export, which will export only the records for one patient.
Additionally, support has been added for the patient parameter in Patient Bulk Export, which is another way to get the records of only one patient.
The $poll-export-status endpoint has been fixed so that when a job is complete, it now correctly includes the request and requiresAccessToken attributes.
When the $export operation receives a request that is identical to one that has been recently processed, it will attempt to reuse the batch job from the former request. A new configuration parameter has been added to control this behaviour.
$mdm-submit can now be run as a batch job, which will return a job ID that can be polled for status. This can be accomplished by sending a Prefer: respond-async header with the request.
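A sketch of the asynchronous form (the URL is illustrative):

```http
POST [base]/$mdm-submit
Prefer: respond-async
```

The server responds with a job ID, which can then be polled for the status of the batch job.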
Previously, the $reindex operation could fail with a ResourceVersionConflictException. Handling has been improved to avoid the related ResourceVersionConflictException during the $reindex operation. In addition, the ResourceIdListStep was submitting one more resource than expected (i.e. 1001 records processed during a $reindex operation when only 1000 resources were in the database). This has been corrected.
In the upload-terminology operation of the HAPI FHIR CLI, you can pass the -s or --size parameter to specify the maximum size that will be transmitted to the server before a local file reference is used instead. This parameter can be supplied in human-readable format; for example, upload-terminology -s "1GB" will permit zip files up to 1 gigabyte, and anything larger than that will default to using a local file reference.
Previously, the import-csv-to-conceptmap command in the CLI created ConceptMap resources without a ConceptMap.status element, which is against the FHIR specification. This has been fixed by adding a required status option to the command.
Previously, when creating a DocumentReference with an Attachment containing a URL over 254 characters, an error was thrown. This has been corrected, and an Attachment URL can now be up to 500 characters.
Handling of the _include and _revinclude parameters in the JPA server has been streamlined, which should improve performance on systems where includes are heavily used.
The :text qualifier was not performing advanced search. This has been corrected.
Support has been added for applying MAP_TO properties defined in the MapTo.csv input file to TermConcept(s).
A new reloadExisting attribute has been added to PackageInstallationSpec. It defaults to true, which preserves the existing behaviour. Thanks to Craig McClendon (@XcrigX) for the contribution!
Welcome to a new major release of HAPI FHIR! Because it contains many breaking changes, this release is designated a new major version.
tenantID. This issue has been fixed.
A new pointcut has been added: STORAGE_PRESTORAGE_CLIENT_ASSIGNED_ID, which is invoked when a user attempts to create a resource with a client-assigned ID.
When searching with _revinclude, the results sometimes incorrectly included resources that were reverse-included by other search parameters with the same name. This has been fixed. Thanks to GitHub user @vivektk84 for reporting this and to Jean-Francois Briere for proposing a fix.
:not-in queries.
Support has been added for the code:in and code:not-in expressions, for mandating that results must be in a specified list of codes.
Previously, a GET for a resource with _total=accurate and _summary=count while a consent service was enabled did not throw the expected InvalidRequestException. This issue has been fixed.
the not-equal (ne) prefix.
Previously, the resource type was only set on the id element for instances of IDomainResource. This caused a few resource types to be missed. This has been corrected, and the resource type is now set on the id element for all IBaseResource instances instead.
JpaStorageSettings.setStoreResourceInLuceneIndex()
_has parameter
Welcome to the winter-ish release of HAPI-FHIR 5.6.0!
HAPI FHIR 5.6.0 (Codename: Raccoon) brings a whole bunch of great new features, bugfixes, and more.
Highlights of this release are shown below. See the Changelog for a complete list. There will be a live Webinar (recording available on-demand afterward) on August 18 2021. Details available here: https://www.smilecdr.com/quarterly-product-release-webinar-reminder
nl instead of nl-DE or nl-NL.
The % symbol was causing searches to fail to return results. This has been corrected.
The _language search parameter has been dropped.
The _id parameter has been added to the Patient/$everything type-level operation, so you can narrow down a specific list of patient IDs to export.
_mdm parameter support has been added to the $everything operation.
The $mdm-clear operation has been refactored to use Spring Batch.
Support has been added for the $mdm-create-link operation.
Another quarter gone by, another HAPI-FHIR release.
HAPI FHIR 5.5.0 (Codename: Quasar) brings a whole bunch of great new features, bugfixes, and more.
Highlights of this release are shown below. See the Changelog for a complete list. There will be a live Webinar (recording available on-demand afterward) on August 18 2021. Details available here: https://www.smilecdr.com/quarterly-product-release-webinar-reminder
Support has been added for wildcard includes, e.g. _include=Observation:*.
A new setting called Tag Versioning Mode has been added, which determines how tags are maintained.
A new interceptor has been added: ForceOffsetSearchModeInterceptor. This interceptor forces all searches to be offset searches, instead of relying on the query cache.
A new $reindex operation has been added, which creates a Spring Batch job to reindex selected resources.
A limit has been placed on the number of _include and _revinclude resources that can be added to a single search page result. In addition, the include/revinclude processor has been redesigned to avoid accidentally overloading the server if an include/revinclude would return unexpectedly massive amounts of data.
When loading search results in a non-cached fashion (e.g. with Cache-Control: no-store), the loading of _include and _revinclude resources will now factor in the maximum include count.
It's time for another release of HAPI FHIR.
HAPI FHIR 5.4.0 (Codename: Pangolin) brings a whole bunch of great new features, bugfixes, and more.
Highlights of this release are shown below. See the Changelog for a complete list. There will be a live Webinar (recording available on-demand afterward) on May 20 2021. Details available here: https://www.smilecdr.com/quarterly-product-release-webinar-reminder
Support for the Prefer: handling=lenient header has been added via an optional interceptor.
The _list search parameter has been added to the JPA server.
A new :contained modifier has been added, allowing searches to select from data in contained resources found within the resource being searched. Note that this feature is disabled by default and must be enabled if needed.
Resource.meta
A new header, X-Upsert-Extistence-Check (note there is a typo in the name; this will be addressed in the next release of HAPI FHIR, so please be aware if you are planning on using this feature), can be added to avoid existence checks when using client-assigned IDs to create new records. This can speed up performance.
Observation?patient:mdm=Patient/123 can be used to search for Observation resources belonging to Patient/123 but also to other MDM-linked patient records.
It's November, so it's time for our next quarterly release: HAPI FHIR 5.2.0 (Codename: Numbat).
Security Notice:
Major New Features:
The JPA SearchBuilder (which turns FHIR searches into SQL statements to be executed by the database) has been completely rewritten to not use Hibernate. This allows much more efficient SQL to be generated in some cases. For some specific queries on a very large test repository running on PostgreSQL, this new search builder performed 10x faster. Note that this new module is enabled by default in HAPI FHIR 5.2.0 but can be disabled via a JpaStorageSettings setting. It is disabled by default in Smile CDR 2020.11.R01 but will be enabled by default in the next major release.
Support for RDF Turtle encoding has been added, finally bringing native support for the 3rd official FHIR encoding to HAPI FHIR. This support was contributed by Josh Collins and Eric Prud'hommeaux of the company Janeiro Digital. We greatly appreciate the contribution! To see an example of the RDF encoding: hapi.fhir.org/baseR4/Patient?_format=rdf
Terminology Enhancements:
Integration with remote terminology services has been improved so that required bindings to closed valuesets are no longer delegated to the remote terminology server. This improves performance since there is no need for remote services in this case.
The CodeSystem/$validate-code operation has been implemented for R4+ JPA servers.
The JPA Terminology Server is now version aware, meaning that multiple versions of a single CodeSystem can now be stored in a single FHIR terminology server repository. ValueSet expansion, CodeSystem lookup, and ConceptMap translation are all now fully version aware. Note that implementing support for fully versioned terminology is mostly complete, but some validation operations may still not work. This should be completed by our next major release.
ValueSet expansion with filtering (e.g. using the filter parameter on the $expand operation) has now been implemented in such a way that it fully supports filtering on pre-expanded ValueSets, including using offsets and counts. This is a major improvement for people building picker UIs leveraging the $expand operation.
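For example, a filtered expansion sketch against a pre-expanded ValueSet (the filter value and paging values are illustrative; filter, offset, and count are standard $expand parameters):

```http
GET [base]/ValueSet/[id]/$expand?filter=press&offset=0&count=10
```

This returns up to 10 matching concepts starting at the given offset, which is well suited to driving a type-ahead picker UI.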
EMPI Improvements:
Identifier matchers have been added, providing native FHIR support for matching on resource identifiers.
The performance of the $empi-clear operation has been greatly improved.
Other Notable Improvements:
A new combined "delete+expunge" mode has been added to the DELETE operation in the JPA server. This mode deletes resources and expunges (physically deletes) them in a single fast operation. Note that this mode must be explicitly enabled, and it completely bypasses the interceptor hooks that notify registered listeners that data is being deleted and expunged. It is several orders of magnitude faster when deleting large sets of data, and is generally intended for test scenarios.
The Package Server module now supports installing non-conformance resources from packages.
The _typeFilter parameter has been implemented for the $bulk-export module.
As always, see the changelog for a full list of changes.
Thanks to everyone who contributed to this release!
It's August, so it's time for our next quarterly release: HAPI FHIR 5.1.0 (Codename: Manticore).
Notable changes in this release include:
An XSS vulnerability has been fixed in the testpage overlay project. This issue affects only the testpage overlay module, but users of this module should upgrade immediately. A CVE number for this issue has been requested and will be updated here when it is assigned.
Support for the new FHIR NPM Package spec has been added. Currently this support is limited to JPA servers, and support should be added to plain servers in the next release. Packages can be imported on startup, either by supplying NPM files locally or by downloading them automatically from an NPM server such as packages.fhir.org. Package contents (the StructureDefinition, CodeSystem, ValueSet, etc. resources in the package) can be installed into the repository, or can be stored in a dedicated set of tables and made available to the validator without actually being installed in the repository.
Support for the Observation/$lastn operation has been implemented thanks to a partnership with LHNCBC/NIH. This operation uses ElasticSearch to support querying for recent Observations over a set of test codes for one or more patients in a very efficient way.
The FHIR PATCH operation now supports FHIRPatch in addition to the already supported XML and JSON Patch specs. FHIRPatch is a very expressive mechanism for creating patches and can be used to supply very precise patches.
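As a sketch of the FHIRPatch format (the path and value here are illustrative), a patch is a Parameters resource containing one or more operation parameters:

```json
{
  "resourceType": "Parameters",
  "parameter": [ {
    "name": "operation",
    "part": [
      { "name": "type", "valueCode": "replace" },
      { "name": "path", "valueString": "Patient.active" },
      { "name": "value", "valueBoolean": false }
    ]
  } ]
}
```

Submitted via an HTTP PATCH with a FHIR content type, a patch like this would replace the value of Patient.active on the target resource.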
A new operation called $diff has been added. Diff can be used to generate a FHIRPatch diff between two resources, or between two versions of the same resource. For example: http://hapi.fhir.org/baseR4/Patient/example/$diff
Several performance problems and occasional failures in the resource expunge operation have been corrected.
The memory use for Subscription delivery queues has been reduced.
Snapshot generation now uses a single snapshot generator codebase for generating snapshots across all versions of FHIR. This makes ongoing maintenance much easier and fixes a number of version-specific bugs.
The maximum cascade depth for cascading deletes is now configurable.
AuthorizationInterceptor can now fully authorize GraphQL calls, including allowing/blocking individual resources returned by the graph.
GraphQL now supports the POST form (thanks to Kholilul Islam!)
The LOINC uploader now supports LOINC 2.68.
A new batch job framework has been introduced, leveraging the Spring Batch library. Initial jobs to use this new framework are the Bulk Export and EMPI modules, but eventually all long processes will be adapted to use this new framework.
The HAPI FHIR built-in Terminology Server now includes support for validating UCUM (units of measure), BCP-13 (mimetypes), ISO 4217 (currencies), ISO 3166 (countries), and USPS State Codes.
It is now possible to disable referential integrity for delete operations for specific reference paths.
A regression has been fixed that significantly degraded validation performance in the JPA server for validation of large numbers of resources.
Unit tests have been migrated to JUnit 5. This change has no user visible impacts, but will help us continue to improve ongoing maintenance of our test suites.
As always, see the changelog for a full list of changes.
Thanks to everyone who contributed to this release!
A second point release of the HAPI FHIR 5.0 (Labrador) release cycle has been pushed to Maven Central.
This release corrects only two issues:
A snapshot dependency was accidentally left in the POM for HAPI FHIR 5.0.1. This has been corrected.
The default setting for the new partitioning feature "Add Partition to Search Indexes" was incorrectly set to enabled (true). It has been set to false, which was the intended default for this setting.
It's time for another release of HAPI FHIR!
This release brings some good stuff, including:
A new feature called Partitioning has been added to the JPA server. This can be used to implement multitenancy, as well as other partitioned/segregated/sharded use cases.
The IValidationSupport interface has been completely redesigned for better flexibility, extensibility and to enable future use cases. Any existing implementations of this interface will need to be adjusted.
Many improvements to performance have been implemented.
FHIR R5 draft definitions have been updated to the latest FHIR 4.2.0 (Preview 2) definitions.
The Gson JSON parser has been replaced with Jackson for better flexibility and performance.
As always, see the changelog for a full list of changes.
Thanks to everyone who contributed to this release!
It's time for another release of HAPI FHIR!
This release brings some good stuff, including:
A new database migrator for the JPA server has been introduced, based on FlywayDB.
A major performance enhancement has been added to the parser, which decreases the parse time when parsing large Bundle resources by up to 50%.
Support for positional (near) search using geo-coordinates and positional distance has been added. This support currently uses a "bounding box" algorithm, and may be further enhanced to use a radius circle in the future.
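A positional search sketch (coordinates are illustrative), using the near parameter's latitude|longitude|distance|units format:

```http
GET [base]/Location?near=43.6532|-79.3832|10|km
```

Because of the bounding-box algorithm described above, matches slightly outside a true 10 km radius circle may also be returned.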
Support for LOINC 2.67 has been added.
As always, see the changelog for a full list of changes.
Thanks to everyone who contributed to this release!
It's time for another release of HAPI FHIR!
This release brings some good stuff, including:
Structures JARs have been updated to incorporate the latest technical corrections. DSTU3 structures are upgraded to FHIR 3.0.2, R4 structures are upgraded to FHIR 4.0.1, and R5 draft structures are upgraded to the October 2019 draft revision.
ValueSets are now automatically pre-expanded by the JPA server into a dedicated set of database tables. This "precalculated expansion" is used to provide much better performance for validation and expansion operations, and introduces the ability to successfully expand very large ValueSets such as the LOINC implicit (all codes) ValueSet.
Support for the FHIR Bulk Export specification has been added. We are now working on adding support for Bulk Import!
First-order support for ElasticSearch as a full-text and terminology service backend implementation has been added. At this time, both raw Lucene and ElasticSearch are supported (this may change in the future but we do not have any current plans to deprecate Lucene).
Live Terminology Service operations for terminology file maintenance based on delta files have been added.
Binary resources and Media/DocumentReference instances with binary attachments stored in the FHIR repository can now take advantage of externalized binary storage for the binary content when that feature is enabled. This allows much better scalability of repositories containing large amounts of binary content (e.g. document repositories).
As always, see the changelog for a full list of changes.
Thanks to everyone who contributed to this release!
Also, as a reminder, if you have not already filled out our annual user survey, please take a moment to do so. Access the survey here: http://bit.ly/33HO4cs (note that this URL was originally posted incorrectly. It is now fixed)
The next release of HAPI has now been uploaded to the Maven repos and GitHub's releases section.
This release features a number of significant performance improvements, and has some notable changes:
A new consent framework called ConsentInterceptor that can be used to apply local consent directives and policies, and potentially filter or mask data has been added.
Initial support for draft FHIR R5 resources has been added.
Support for GraphQL and the _filter search parameter has been added.
The ability to perform cascading deletes has been added.
As always, see the changelog for a full list of changes.
Thanks to everyone who contributed to this release!
One of the things we often talk about in the FHIR standards development community is where FHIR currently sits on Gartner's Hype Cycle. The hype cycle is a coarse measure of the trajectory of new technologies on a journey from being "new and exciting silver bullets" to eventually being "boring useful technologies".
When you are a proponent of a new technology (as I certainly am with FHIR), probably the most important aspect to remember about the hype cycle is that you really only ever know where you are at any given time long after that time has passed. In other words, it's fun to ask yourself "have we passed the Peak of Inflated Expectations yet?" but you really won't know until much later.
Speculating is perhaps a fool's errand. I probably shouldn't try but I can't help but wonder if we have passed the peak yet.
The trajectory of HAPI FHIR's growth is interesting. FHIR has been growing over the last few years by all kinds of metrics. The connectathons keep getting bigger, the number of vendors participating keeps on getting bigger, and FHIR DevDays keeps on getting bigger.
If I look at our website in Google Analytics, I am curious about the trajectory.
While HAPI FHIR has seen pretty steady growth over the last few years, that growth has been either tapering or at least very unstable over the last 8 months.
Certainly I don't think HAPI FHIR has stopped growing. The number of messages on the support forum and the number of people with big production implementations these days certainly doesn't suggest that; however, things have certainly been weird the last 8 months.
Let's look at interest in FHIR overall. The next thing to look at is the FHIR Google Trends graph, which measures the number of people searching for terms on Google (a pretty decent indicator of general interest). The following graph shows the last 4 years for FHIR.
It would seem that FHIR itself saw a crazy explosion of interest back in May, too. That makes sense since FHIR R3 was released right before that peak.
Let's compare that with the graph for IHE. I don't think anyone would disagree that IHE sits firmly atop the Plateau of Productivity. Most people in the world of health informatics know what can be accomplished with IHE's profiles, and certainly I've worked with many organizations who use them to accomplish good things.
The FHIR and IHE Graph shows interest in FHIR in BLUE and IHE in RED.
So what can we take from this? I think the right side of the graph is quite interesting. FHIR itself has kind of levelled off recently and has hit similar metrics to those of a very productive organization.
I probably shouldn't attach too much meaning to these graphs, but I can't help but wonder...
HAPI FHIR's JPA Module lets you quickly set up a FHIR server, complete with a database for whatever purpose you might have.
One of the most requested features in the last year has been for support of custom search parameters on that server. Out of the box, the JPA server has always supported the default/built-in search parameters that are defined in the FHIR specification.
This means that if you store a Patient resource in the database, the Patient.gender field will be indexed with a search parameter called gender, the Patient.birthDate field will be indexed with a search parameter called birthdate, etc.
To see a list of the default search parameters for a given resource, you can see a table near the bottom of any resource definition. For example, here are the Patient search parameters.
The built-in parameters are great for lots of situations but if you're building a real application backend then you are probably going to come up with a need that the FHIR specification developers didn't anticipate (or one that doesn't meet FHIR's 80% rule).
The solution for this is to introduce a custom search parameter. Search parameters are defined using a resource that is – unsurprisingly – called SearchParameter. The idea is that you create one of these SearchParameter resources and give it a code (the name of the URL parameter), a type (the search parameter type), and an expression (the FHIRPath expression which will actually be indexed).
In HAPI FHIR's JPA server, custom search parameters are indexed just like any other search parameter. A new mechanism has been introduced in HAPI FHIR 2.3 (to be released soon) that parses the expression, adds any new or updated search parameters to an internal registry of indexed paths, and marks any existing resources that are potential candidates for this new search parameter as requiring reindexing.
This means that any newly added search parameters will cover resources added after the search parameter was added, and it will also cover older resources after the server has had a chance to reindex them.
This also means that you definitely want to make sure you have properly secured the /SearchParameter endpoint, since it can potentially cause your server to do a lot of extra work if there are a lot of resources present.
To show how this works, here is an example of a search parameter on an extension. We'll suppose that in our system we've defined an extension for patients' eye colour. Patient resources stored in our database will have the eye colour extension set, and we want to be able to search on this extension, too.
1. Create the Search Parameter
First, define a search parameter and upload it to your server. In Java, this looks as follows:
// Create a search parameter definition
SearchParameter eyeColourSp = new SearchParameter();
eyeColourSp.addBase("Patient");
eyeColourSp.setCode("eyecolour");
eyeColourSp.setType(org.hl7.fhir.dstu3.model.Enumerations.SearchParamType.TOKEN);
eyeColourSp.setTitle("Eye Colour");
eyeColourSp.setExpression("Patient.extension('http://acme.org/eyecolour')");
eyeColourSp.setXpathUsage(org.hl7.fhir.dstu3.model.SearchParameter.XPathUsageType.NORMAL);
eyeColourSp.setStatus(org.hl7.fhir.dstu3.model.Enumerations.PublicationStatus.ACTIVE);
// Upload it to the server
client
.create()
.resource(eyeColourSp)
.execute();
The resulting SearchParameter resource looks as follows:
{
"resourceType": "SearchParameter",
"title": "Eye Colour",
"base": [ "Patient" ],
"status": "active",
"code": "eyecolour",
"type": "token",
"expression": "Patient.extension('http://acme.org/eyecolour')",
"xpathUsage": "normal"
}
2. Upload Some Resources
Let's upload two Patient resources with different eye colours.
Patient p1 = new Patient();
p1.setActive(true);
p1.addExtension().setUrl("http://acme.org/eyecolour").setValue(new CodeType("blue"));
client
.create()
.resource(p1)
.execute();
Patient p2 = new Patient();
p2.setActive(true);
p2.addExtension().setUrl("http://acme.org/eyecolour").setValue(new CodeType("green"));
client
.create()
.resource(p2)
.execute();
Here's how one of these resources will look when encoded.
{
"resourceType": "Patient",
"extension": [
{
"url": "http://acme.org/eyecolour",
"valueCode": "blue"
}
],
"active": true
}
3. Search!
Finally, let's try searching:
// Perform the search using the same client as above
Bundle bundle = client
.search()
.forResource(Patient.class)
.where(new TokenClientParam("eyecolour").exactly().code("blue"))
.returnBundle(Bundle.class)
.execute();
System.out.println(client.getFhirContext().newJsonParser().setPrettyPrint(true).encodeResourceToString(bundle));
This produces a search result that contains only the matching resource:
{
"resourceType": "Bundle",
"id": "bc89e883-b9f7-4745-8c2f-24bf9277664d",
"meta": {
"lastUpdated": "2017-02-07T20:30:05.445-05:00"
},
"type": "searchset",
"total": 1,
"link": [
{
"relation": "self",
"url": "http://localhost:45481/fhir/context/Patient?eyecolour=blue"
}
],
"entry": [
{
"fullUrl": "http://localhost:45481/fhir/context/Patient/2",
"resource": {
"resourceType": "Patient",
"id": "2",
"meta": {
"versionId": "1",
"lastUpdated": "2017-02-07T20:30:05.317-05:00"
},
"text": {
"status": "generated",
"div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"><table class=\"hapiPropertyTable\"><tbody/></table></div>"
},
"extension": [
{
"url": "http://acme.org/eyecolour",
"valueCode": "blue"
}
],
"active": true
},
"search": {
"mode": "match"
}
}
]
}
Naturally, this feature will soon be available in Smile CDR. Previous versions of Smile CDR had a less elegant solution to this problem; however, now that we have a nice elegant approach to custom parameters that is based on FHIR's own way of handling this, Smile CDR users will see the benefits quickly.
I love GitLab. Let's get that out of the way.
Back when I first joined the HAPI project, we were using CVS for version control, hosted on SourceForge. Sourceforge was at that point a pretty cool system. You got free project hosting for your open source project, a free website, and shell access to a server so you could run scripts, edit your raw website, and whatever else you needed to do. That last part has always amazed me; I've always wondered what lengths SourceForge must have had to go to in order to keep that system from being abused.
Naturally, we eventually discovered GitHub and happily moved over there – and HAPI FHIR remains a happy resident of GitHub. We're now in the process of migrating the HAPI HL7v2.x codebase over to a new home on GitHub, too.
The Smile CDR team discovered GitLab about a year ago. We quickly fell in love: easy self-hosting, a UI that feels familiar to a GitHub user yet somehow slightly more powerful in each part you touch, and a compelling set of features in the enterprise edition as well once you are ready for them.
On Tuesday afternoon, Diederik noticed that GitLab was behaving slowly. I was curious about it since GitLab's @gitlabstatus Twitter mentioned unknown issues affecting the site. As it turned out, their issues went from bad, to better, and then to much worse. Ultimately, they wound up being unavailable for all of last night and part of this morning.
GitLab's issues were slightly hilarious but also totally relatable to anyone building and deploying big systems for any length of time. TechCrunch has a nice writeup of the incident if you want the gory details. Let's just say they had slowness problems caused by a user abusing the system, and in trying to recover from that a sysadmin accidentally deleted a large amount of production data. Ultimately, he thought he was in a shell on one (bad) node and just removing a useless empty directory but he was actually in a shell on the (good) master node.
I read a few meltdowns about this on reddit today, calling the sysadmin inexperienced, inept, or worse, but I also saw a few people saying something that resonated with me much more: if you've never made a mistake on a big complicated production system, you've probably never worked on a big complicated production system.
These things happen. The trick is being able to recover from whatever has gone wrong, no matter how bad things have gotten.
This is where GitLab really won me over. Check their Twitter for yourself. There was no attempt to mince words. GitLab engineers were candid about what had happened from the second things went south.
GitLab opened a publicly readable Google Doc where all of the notes of their investigation could be read by anyone wanting to follow along. When it became clear that the recovery effort was going to be long and complicated, they opened a YouTube live stream of a conference bridge with their engineers chipping away at the recovery.
They even opened a live chat with the stream so you could comment on their efforts. Watching it was great. I've been in their position many times in my life: tired from being up all night trying to fix something, and sitting on an endless bridge where I'm fixing one piece, waiting for others to fix theirs, and trying to keep morale up as best I can. GitLab's engineers did this, and they did it with cameras running.
So this is the thing: I bet GitLab will be doing a lot of soul-searching in the next few days, and hopefully their tired engineers will get some rest soon. In the end, the inconvenience of this outage will be forgotten but I'm sure this won't be the last time I'll point to the way they handled a critical incident with complete transparency, and set my mind at ease that things were under control.
It's January again, which of course means it's time for the January HL7 Working Group Meeting. As always, the first two days of the HL7 meeting brings FHIR Connectathon, and this was Connectathon 14.
I feel like every time I visit one of these meetings, the scale of the meeting astounds me and I can't imagine it being any bigger... and then that happens again the next time. The final tally at the September 2016 (Baltimore) Connectathon was 170 people. The final tally here in San Antonio was 209 so we continue to beat expectations.
I think we are finally passing a point where it's feasible to fit everyone in a half-size hotel ballroom. We may well have some hard decisions about whether the format still works or whether we need to turn people away in September.
Also amazing to me was the number of new faces. On the first day, Ewout Kramer asked the room for anyone who was a first-time attendee to a FHIR Connectathon to raise their hand. It looked like about half the room raised their hand so we're really expanding the pool of interested people right now. Exciting days for FHIR!
Monday night brought our usual HAPI & .NET Users Group. We discussed a proposal we're working on for a template-based approach to automatic resource narrative generation. There will be more on that in a future post.