HaloConnect

Lessons learned about memory usage

Over the years, practice management systems collect a lot of data. Every patient, every appointment, every result and referral — it all gets stored in perpetuity. Extracting that data as part of an integration, though, can get complicated, especially when it comes to memory usage for large data extractions.

For a smaller or newer practice, the size of the data extraction is manageable. However, the older and bigger the practice (or practice group) gets, the larger each data extraction gets as well. As Halo Connect rolled out to more and more practices, we discovered that, for some practices, hitting system memory limits is a very real concern.

This prompted us to overhaul how Halo Link handles and uploads result data.

Memory disappears quickly if you’re not careful

The problem first reared its head in the form of out-of-memory errors. Our customers would try to extract data from a practice, and the query result would fail to upload to Halo Cloud. This meant integrators had to work around the memory limit; otherwise, practices using their integration would be missing data.

After some investigation, we discovered Halo Link was hitting Windows' per-process memory limit (for backward compatibility, Halo Link runs as a 32-bit process, which caps its usable address space at 2GB by default).

The problem was that Halo Link was loading the entire result set into RAM while preparing it for upload. The paging and uploading process was also somewhat naive, causing the result data to use 3.5 times as much RAM as needed.
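To illustrate how that kind of multiplication happens (a contrived Python sketch, not Halo Link's actual code), consider a naive pipeline that serialises, encodes, and slices a fully loaded result: each step keeps another complete copy of the data alive at the same time.

```python
import base64
import json

def naive_upload(rows):
    """Illustrative only: every step below holds a full copy of the
    data simultaneously, multiplying peak memory usage."""
    payload = json.dumps(rows)               # copy 1: full JSON string
    raw = payload.encode("utf-8")            # copy 2: encoded bytes
    encoded = base64.b64encode(raw)          # copy 3: base64 (~1.33x larger)
    pages = [encoded[i:i + 1_000_000]        # copy 4: sliced into pages
             for i in range(0, len(encoded), 1_000_000)]
    return pages

rows = [{"id": i, "value": "x" * 100} for i in range(1000)]
pages = naive_upload(rows)
```

Counting the copies (and the base64 overhead), peak usage is already several times the size of the result itself, before any upload buffering is added on top.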

This wasn’t going to scale. So we set out to overhaul how Halo Link handled data (and maybe improve a couple of other things as we went).

What is paging?

Pagination is the practice of dividing a set of results into smaller chunks to make it easier to handle. This lets us upload results in smaller pieces, without worrying about connection timeouts on large uploads. It also lets us interleave uploading one query's results with executing other queries, preventing a large query from blocking the execution of everything else.
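As an illustrative sketch (Python, not Halo Link's actual implementation), a paginator can yield fixed-size pages from a row stream, so only one page is ever materialised in memory at a time; the 1MB default mirrors the page size mentioned below.

```python
def paginate(rows, page_size_bytes=1_000_000):
    """Yield successive pages of serialised rows, each at most
    page_size_bytes, holding only one page in memory at a time."""
    page, size = [], 0
    for row in rows:
        encoded = row.encode("utf-8")
        # Flush the current page before it would exceed the size limit.
        if page and size + len(encoded) > page_size_bytes:
            yield b"".join(page)
            page, size = [], 0
        page.append(encoded)
        size += len(encoded)
    if page:  # flush the final partial page
        yield b"".join(page)
```

Because it is a generator, the caller can upload one page, then hand control back to other queries before fetching the next.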


Cutting down memory usage

Instead of loading the data into RAM, Halo Link now streams result data to encrypted files in a protected system folder on the local hard disk. Each file is a fixed size matching our 1MB page size, which removes the need to double-handle the data when splitting it into pages. The files are then uploaded one at a time to our cloud cache, ready to be retrieved by the integrator.
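A rough sketch of the spooling step (Python for illustration; Halo Link additionally encrypts the page files, and the file names here are assumptions): incoming chunks are written straight into fixed-size page files, rolling over to a new file whenever the current one is full.

```python
import os
import tempfile

PAGE_SIZE = 1_000_000  # 1MB page files, matching the upload chunk size

def spool_to_pages(chunks, directory):
    """Stream incoming data chunks straight into fixed-size page files
    on disk, so the full result set is never held in RAM."""
    paths, current, written = [], None, 0
    for chunk in chunks:
        while chunk:
            if current is None:
                path = os.path.join(directory, "page-%06d.bin" % len(paths))
                current = open(path, "wb")
                paths.append(path)
                written = 0
            room = PAGE_SIZE - written      # bytes left in the current page
            current.write(chunk[:room])
            written += min(len(chunk), room)
            chunk = chunk[room:]            # carry any overflow to a new page
            if written == PAGE_SIZE:
                current.close()
                current = None
    if current is not None:
        current.close()                     # close the final partial page
    return paths

# Each page file can then be uploaded (and deleted) one at a time.
workdir = tempfile.mkdtemp()
pages = spool_to_pages(iter([b"x" * 300_000] * 9), workdir)  # ~2.7MB in
```

Only one chunk and one open file handle are live at any moment, so peak memory stays flat no matter how large the extraction is.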

By streaming data directly from the database to the hard drive, and from there to the cloud, we never hold the full result set in RAM. This keeps RAM utilisation below 100MB, even for the largest queries: we've successfully run tests with queries up to and including 1GB. In fact, it ran so smoothly we couldn't tell it was working until it was done, so we added more logging around page uploads to track progress and help debug any issues.

Fewer interruptions for integrators and practices

The obvious benefit of this improvement is increased reliability for integrators, since it mitigates the risk of these out-of-memory failures. Integrators no longer need to develop workarounds for the issue, and the practices using their services face fewer potential interruptions.

However, this improvement is also a win for the general health of practice servers. Minimising Halo Link’s RAM utilisation reduces the potential impact on other programs and services running on the server. This ensures everything runs smoother — both on the server and in the practice.
