Building Hungary's largest telemedicine platform for diabetic patients
Dcont - A diabetes platform for patients and doctors
Completely replacing a legacy system is always a challenging task, especially when thousands of active users and millions of health records are involved. In the case of Dcont, a subsidiary of the health-tech giant 77 Elektronika, this was exactly the challenge they wanted us to help with. They had a 10+ year-old telemedicine system connecting diabetic patients to doctors and helping them manage their diabetes journey. The service was slow, a pain to use, and buried in technical debt.
Legacy system displacement is an enormous topic on its own, with different approaches in terms of both method and outcome. In order to make an informed decision on what angle to take, we needed to be immersed in the technical and architectural design of the current solution and, just as importantly, the requirements of the desired system.
Pre-planning
After a few consulting sessions, the following high-level architecture was defined:
SPA frontend
It quickly became clear that the long-term vision of the Dcont platform could hardly be achieved with traditional, server-rendered web practices, so we decided on a Single Page Application for the web frontend, since we needed a tool to efficiently and quickly craft robust, highly interactive web interfaces. We weighed our framework options and ended up choosing Vue.js. Most modern web frontend frameworks, such as React or Angular, are more than capable of doing the job; it was simply a matter of preference.
Unified backend with a REST API
One of the main issues with the preceding system was that the data flow was scattered across multiple channels. The web application used server-rendered pages to display data, communicating directly with the underlying database. There was also a web API of sorts with its own logic for data retrieval and storage; it was highly incoherent, and the mobile dev teams were yearning for a standardised, consistent interface that is not a pain to interact with. Each of these data channels had its own parameters, validation, authorisation, and error handling, which led to cases where a data insertion flow was accepted on one channel but refused on the others.
The solution came in the form of a unified web API responsible for communicating with all the clients of the Dcont system: web, mobile, and desktop applications. REST is the most popular web API architecture nowadays, and since the goal from the start was to build a system that could be maintained by internal teams, it seemed reasonable to go with REST.
Of course, a highly scalable web service was needed behind the interface. The client was mostly familiar with the .NET ecosystem, because other product teams relied heavily on C# and the .NET Framework. At the time, .NET Core was on the rise with its cross-platform capabilities, and it provided a robust toolset for web development in general, especially for web API development with its MVC architecture.
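To illustrate what the unified interface means in practice, here is a minimal ASP.NET Core controller sketch. The route, DTOs, and IMeasurementService are hypothetical stand-ins, not the actual Dcont code; the point is that web, mobile, and desktop clients all hit the same endpoint with the same validation, authorisation, and error handling.

```csharp
using System;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

// Hypothetical DTOs and application service; the real domain types differ.
public record CreateMeasurementDto(DateTime MeasuredAt, decimal GlucoseMmolPerL);
public record MeasurementDto(Guid Id, DateTime MeasuredAt, decimal GlucoseMmolPerL);

public interface IMeasurementService
{
    Task<MeasurementDto> CreateAsync(string userId, CreateMeasurementDto dto);
    Task<MeasurementDto?> FindAsync(Guid id);
}

[ApiController]                       // enforces DTO validation the same way for every client
[Route("api/v1/measurements")]
[Authorize]                           // one authorisation policy instead of per-channel rules
public class MeasurementsController : ControllerBase
{
    private readonly IMeasurementService _measurements;

    public MeasurementsController(IMeasurementService measurements)
        => _measurements = measurements;

    [HttpPost]
    public async Task<IActionResult> Create(CreateMeasurementDto dto)
    {
        // The caller's identity comes from the shared authentication pipeline.
        var userId = User.FindFirst(ClaimTypes.NameIdentifier)!.Value;
        var created = await _measurements.CreateAsync(userId, dto);
        return CreatedAtAction(nameof(GetById), new { id = created.Id }, created);
    }

    [HttpGet("{id:guid}")]
    public async Task<IActionResult> GetById(Guid id)
    {
        var measurement = await _measurements.FindAsync(id);
        return measurement is null ? NotFound() : Ok(measurement);
    }
}
```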
Modular monolith vs micro-services
In this case, choosing between micro-services and a modular monolith was simple. Contrary to popular belief, you can build highly resilient systems using a monolithic architecture. Our rule of thumb for this question is whether the client's organisational structure supports a micro-service approach or not. Here, the same engineers would have worked on all the services plus the added infrastructure, which would have rendered one of the key benefits of the micro-service architecture useless. Although the platform was popular and handled relatively high traffic, it was not on a global scale, so micro-services would have added substantial infrastructural and architectural overhead for no measurable benefit.
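As a rough sketch of what "modular monolith" means in practice: modules live in the same deployable, but they only talk to each other through narrow public interfaces, so the boundaries that would otherwise become service boundaries stay explicit. The module and type names below are purely illustrative, not the actual Dcont module layout.

```csharp
using System;
using System.Threading.Tasks;

namespace Dcont.Modules.Measurements
{
    // The only surface other modules are allowed to depend on.
    public interface IMeasurementsModule
    {
        Task<decimal> GetAverageGlucoseAsync(Guid patientId, DateTime from, DateTime to);
    }
}

namespace Dcont.Modules.Reports
{
    using Dcont.Modules.Measurements;

    // The reports module consumes the measurements module through its public
    // interface only: an in-process call instead of a network hop.
    public class WeeklyReportGenerator
    {
        private readonly IMeasurementsModule _measurements;

        public WeeklyReportGenerator(IMeasurementsModule measurements)
            => _measurements = measurements;

        public Task<decimal> AverageForLastWeekAsync(Guid patientId) =>
            _measurements.GetAverageGlucoseAsync(
                patientId, DateTime.UtcNow.AddDays(-7), DateTime.UtcNow);
    }
}
```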
About using 3rd party services
Lastly, we decided to use third-party services where reasonable. In cases where the development time of a feature was unreasonably high compared to its benefits to users, we looked for a 3rd party service to do the job. Of course, there were some restrictions, considering that the system is responsible for managing highly confidential medical records, so we drew a hard line regarding privacy concerns. For features like push notifications, mass distribution of non-confidential emails, and such, we used external services tailored for those purposes to speed up the development process.
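One way to keep that hard line clean, sketched below with hypothetical names, is to hide each external service behind a small interface, so that only non-confidential payloads ever reach the vendor and the provider can be swapped without touching business logic.

```csharp
using System.Threading.Tasks;

// Hypothetical abstraction over an external push-notification provider.
// Only non-confidential content (titles, generic reminders) crosses this
// boundary; medical data never leaves the system.
public interface IPushNotificationSender
{
    Task SendAsync(string deviceToken, string title, string body);
}

// Assumed thin wrapper around whichever vendor SDK is used.
public interface IVendorPushClient
{
    Task PushAsync(string deviceToken, string title, string body);
}

// The concrete implementation wraps the vendor client; replacing the provider
// later only means writing another implementation of the same interface.
public class VendorPushNotificationSender : IPushNotificationSender
{
    private readonly IVendorPushClient _client;

    public VendorPushNotificationSender(IVendorPushClient client) => _client = client;

    public Task SendAsync(string deviceToken, string title, string body) =>
        _client.PushAsync(deviceToken, title, body);
}
```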
Deployment
One major decision when it comes to replacing a legacy system is the method of deploying the new software. After thorough consideration, we ended up with the most basic solution: shut down the old system and launch the new one. Although it really was one of the most straightforward answers, we needed to plan ahead to overcome its drawbacks. We started by asking simple questions:
How will the data get from the old system to the new one?
In order to move data from the system marked for sunsetting, we needed a solid data migration pipeline since:
- The data in the legacy relational database was not normalised, and the database design was inconsistent and poor.
- Not only the structure but also the types of specific data were different between the systems.
- The data flow was almost constant, with only unpredictable breaks late at night.
This meant that we needed to make sure that, when the time came, we could safely and, most importantly, predictably migrate data over to the new system. We introduced a three-stage deployment setup: development, staging, and production. This allowed us to run trial migrations and tests to ensure no data was lost and no anomalies were introduced to the data.
This also meant that some maintenance downtime was inevitable; the only question was how long it would take to finish the migration process, which was answered later during the tests.
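Below is a simplified sketch of what one step of such a pipeline can look like: read legacy rows, normalise their structure and types, then write them into the new schema. The table shapes, interfaces, and field names are assumptions for illustration; the real migration covered far more tables, conversions, and consistency checks, and was rehearsed on the development and staging environments first.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Threading.Tasks;

// Hypothetical legacy row (loosely typed strings) and new, normalised entity.
public record LegacyMeasurementRow(int Id, string PatientCode, string MeasuredAt, string Value);
public record Measurement(Guid Id, Guid PatientId, DateTime MeasuredAtUtc, decimal GlucoseMmolPerL);

public interface ILegacyReader { IAsyncEnumerable<LegacyMeasurementRow> ReadMeasurementsAsync(); }
public interface INewWriter { Task InsertAsync(Measurement measurement); }
public interface IPatientIdMap { Guid Resolve(string legacyPatientCode); }

public class MeasurementMigrationStep
{
    private readonly ILegacyReader _legacy;     // assumed reader over the old database
    private readonly INewWriter _target;        // assumed writer into the new database
    private readonly IPatientIdMap _patientMap; // maps legacy patient codes to new ids

    public MeasurementMigrationStep(ILegacyReader legacy, INewWriter target, IPatientIdMap patientMap)
        => (_legacy, _target, _patientMap) = (legacy, target, patientMap);

    public async Task<int> RunAsync()
    {
        var migrated = 0;
        await foreach (var row in _legacy.ReadMeasurementsAsync())
        {
            // Legacy values were stored as strings; parse strictly so anomalies
            // fail loudly during trial runs instead of corrupting the new data set.
            var measuredAt = DateTime.Parse(row.MeasuredAt, CultureInfo.InvariantCulture).ToUniversalTime();
            var value = decimal.Parse(row.Value, CultureInfo.InvariantCulture);

            var measurement = new Measurement(
                Guid.NewGuid(),
                _patientMap.Resolve(row.PatientCode),
                measuredAt,
                value);

            await _target.InsertAsync(measurement);
            migrated++;
        }
        return migrated;
    }
}
```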
Is there any data that can't be migrated?
When it came to encrypted or hashed data such as passwords, we needed custom implementations in place. For passwords, it was fairly simple: let the user log in with their existing password the old way, and if the login succeeds, store the password in the new system under its own hashing scheme.
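A minimal sketch of this "migrate on first successful login" idea is shown below; the legacy verifier, the new hasher, and the user store are hypothetical interfaces standing in for the real implementations.

```csharp
using System.Threading.Tasks;

public class User
{
    public string Email { get; set; } = "";
    public string LegacyPasswordHash { get; set; } = "";
    public string? NewPasswordHash { get; set; }
}

public interface ILegacyPasswordVerifier { bool Verify(string password, string legacyHash); }
public interface IPasswordHasher { string Hash(string password); bool Verify(string password, string hash); }
public interface IUserStore { Task<User?> FindByEmailAsync(string email); Task SaveAsync(User user); }

public class LoginService
{
    private readonly ILegacyPasswordVerifier _legacy; // checks against the old hash format
    private readonly IPasswordHasher _hasher;         // the new system's hashing scheme
    private readonly IUserStore _users;

    public LoginService(ILegacyPasswordVerifier legacy, IPasswordHasher hasher, IUserStore users)
        => (_legacy, _hasher, _users) = (legacy, hasher, users);

    public async Task<bool> LoginAsync(string email, string password)
    {
        var user = await _users.FindByEmailAsync(email);
        if (user is null) return false;

        // Already migrated: verify with the new scheme only.
        if (user.NewPasswordHash is not null)
            return _hasher.Verify(password, user.NewPasswordHash);

        // Not migrated yet: verify the old way, then store a new hash so the
        // legacy credential never needs to be touched again.
        if (!_legacy.Verify(password, user.LegacyPasswordHash)) return false;

        user.NewPasswordHash = _hasher.Hash(password);
        await _users.SaveAsync(user);
        return true;
    }
}
```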
For more advanced scenarios, the data had to be re-requested from the user during the first-login onboarding flow.
How will the client apps handle the change?
The simple answer is to have the client apps conform to the new API: from a certain version upwards, data flows through the new channels, while older versions keep using the legacy interface. However, there were two major issues with this approach:
- It would have required a bridge between the old and new systems, so when apps use the legacy API, data would be automatically transferred to the new system. When stacked against other options, this one had a much more significant impact on the scope and budget of the project.
- The previously mentioned inconsistent data channels would still be open, and that could reintroduce anomalies to the new system.
Since the feature set of the new platform was magnitudes bigger than that of the legacy platform, we decided to go with the "forced update" route. This meant introducing a mechanism so that, from a certain version upwards, all apps would force the user to update to the latest version. Unfortunately, there was no way to do this through the designated app stores, so we had to prioritise the implementation of this feature to make sure most clients would have this logic in place on launch day.
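A minimal sketch of what such a gate can look like on the API side is shown below, assuming the clients send their version in a request header. The header name, minimum version, and status code are illustrative assumptions; the real mechanism also needed client-side logic to show a blocking update screen.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Sketch of a minimum-client-version gate implemented as ASP.NET Core middleware.
public class MinimumClientVersionMiddleware
{
    private static readonly Version MinimumVersion = new("2.0.0"); // assumed cut-off version
    private readonly RequestDelegate _next;

    public MinimumClientVersionMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var header = context.Request.Headers["X-Client-Version"].ToString();

        // Clients below the minimum version get a dedicated status code that the
        // apps translate into a blocking "please update" screen.
        if (Version.TryParse(header, out var clientVersion) && clientVersion < MinimumVersion)
        {
            context.Response.StatusCode = StatusCodes.Status426UpgradeRequired;
            await context.Response.WriteAsync("A newer version of the app is required.");
            return;
        }

        await _next(context);
    }
}
```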
After answering all these questions, we had a solid foundation for a production deployment process that later proved successful. Fortunate circumstances allowed (and considering the time and budget limits, in a way forced) us to execute the simplest strategy there is: out with the old system, in with the new.
The development stage
From the beginning, the client's plan was to eventually bring the development in-house for tighter control over the implementation, faster feature shipping, and cost efficiency. This meant that we had to work closely with the in-house product team so they could gather as much knowledge about the new system as possible. Part of the reason we came on board was the client's lack of experience with web technologies and complex web-based systems, so the plan was to lay a solid foundation for their developers so they would be able to maintain and extend the system with moderate effort.
This sounds much easier than it is, since productivity quickly takes a hit from the overhead of managing and educating new members during the development of a project. To solve this, we applied a custom flavour of battle-tested development methodologies to stay in the goldilocks zone of maximum productivity with minimum management.
In order to achieve a smooth transition, we had to:
- Choose technologies that are as close to the client's stack as possible.
- Involve the in-house teams in all major architectural decisions and the overall development process.
- Provide training for devs in technologies they are completely unfamiliar with.
- Come up with a progressive transitioning roadmap for the in-house developers so they can gradually produce more and more value for the project.
At levoolabs, we are big fans of the managed development team approach, a solid middle ground between staff augmentation (where the client gets raw manpower to boost their output) and outsourcing (where the development is a complete black box for the client). In our experience, this is the most efficient and most convenient way of collaborating on projects of this magnitude. In this particular case, it was a method that contributed greatly to the project's success.
The method
The general approach was to pick a slice of the planned architecture, do the groundwork and the complex implementations, and provide concrete examples of their usage that the in-house team could use as a sample. In other words, we built vertically while the in-house dev team did the horizontal development. This worked quite well, because a part, or a thinner layer, of a system is much easier to understand, and highly efficient work can be done without fully understanding the system as a whole, much like with any modern framework, where you can be quite effective and build wonderful things without ever taking a peek under the hood.
Another thing we did to maximise knowledge distribution within the team was to require at least two approvals on every pull request: one from an architect or senior engineer who understood the underlying system, and one from an in-house member. This gave the team the opportunity to learn from each other's mistakes and understand the system better with every approved pull request.
In a general sense, we used a scrum-like methodology for the development process, which served as a dual-purpose framework:
- It helped us manage planning and development in an agile way.
- It helped team members get a sense of current tasks and roadblocks, where they could ask questions or even suggest solutions. It also meant that when the time came for them to onboard, they were not entirely unfamiliar with the features and the technology.
These techniques and rules helped us move along the development roadmap as planned while helping the client's in-house dev team understand the system and, on top of that, efficiently contribute to the project itself.
Launch day
Project estimations are hard and often subject to change (usually more than once). That being said, we hit the milestones of the two-year roadmap like clockwork, with occasional replanning, reallocation of resources, and a few clever bends of the scope that ended up saving not only time but also future expenses for the client.
We managed to ship the project's original scope and more, consisting of a web application, two updated mobile apps, a redesigned desktop client and a backend service to serve as a unified backbone for the entire ecosystem.
Our delivery pipelines ensured a safe and stable deployment process, so fortunately, there were no hiccups on the day of the launch of the new Dcont ecosystem.
We took part in every step of the development, from managing the project and defining the roadmap to planning the architecture and topology and, of course, the development itself.
The client, and most importantly the users, were extremely satisfied with the new look and feel of the Dcont platform, which now serves thousands of patients daily and helps them navigate their diabetes journey.
Numbers
1.25x increase in monthly user registrations
85% of users noticed an improvement in performance
137% more data uploaded to the system
77 Elektronika
77 Elektronika was ranked the 12th most valuable Hungarian-owned company in 2021. They are a major global developer, manufacturer, and supplier of in-vitro diagnostic medical devices, mainly urine analyzers, blood glucose meters, and their consumables.
"Our experience with levoolabs has been fantastic. We now know what to include in our service to help people ensure healthy lives through their diabetes journey."