At foobar Agency, tech is part of our core. We understand the importance of technology in effecting positive change and solving complex problems. For us, technology is not only a delivery department but rather an integral part of the solution. We are passionate about writing code that produces stunning interfaces and user experiences, whether that be in the form of webpages, mobile apps, or other products.
There is a limit to writing code, though. We’re not writing code for the sake of writing code. The measure of success isn’t “SLOC” (source lines of code), because the length of the code says nothing about its value. Rather, we see success in KPIs that indicate value for the end user. For customer-facing websites, for example, this could be web vitals, or the stability of systems and processes. These technical factors translate into tangible benefits, such as higher search rankings, increased organic traffic and easier use of the application. And ultimately, we measure validated (or invalidated) hypotheses and then business metrics. That is what really counts! What’s the point if we’ve written the most beautiful lines of code, but they don’t bring any value to the end user? What’s the point of a great CI/CD pipeline if there is nothing to deploy that will bring value to the user? Exactly: nothing.
This means we have to constantly ask ourselves whether the first solution that springs from our developer brains is actually the right solution in the current situation. It may be that our first idea would be architecturally cleaner. It may be that it would scale better in a future that we don't yet see. It may be the "current state of the art" for a technical problem. But, to paraphrase Marie Kondo: "does it spark value"?
There are a lot of things you could talk about in this context. SaaS and PaaS, for example. Or, is the CI/CD pipeline really the first thing a team should set up? But today we want to talk about low-code platforms.
What are low-code development platforms?
Let’s make a brief excursion to understand what low-code development platforms actually are. As we’re exploring new concepts and ideas, I think it’s only fitting that I let ChatGPT provide more insights about this:
A low-code development platform is a software development environment that enables developers to build applications with minimal or no manual coding. The platform provides visual tools and pre-built components that allow developers to create, test, and deploy applications faster and with less effort. The goal of low-code platforms is to make app development accessible to a broader range of users, including non-technical users and business stakeholders, to help organizations quickly respond to changing business needs. — ChatGPT, as of February 4th
Sounds legit. But as always, there are two sides to the coin. A system should never be introduced carelessly: a look at security, compliance and governance is just as important as a look at the GDPR. There are also people who are fundamentally critical of such platforms. They argue that using these platforms can lead to a lack of knowledge and understanding of the development process and the code behind it, which in turn could lead to security and compliance problems. Whatever your opinion on the matter may be, the most important thing is to know what goal you are pursuing with such a platform, and to be aware of both the benefits and the limits of its use. While such platforms can offer many advantages, they are not a silver bullet for every challenge.
From our point of view, a very good use-case for low-code platforms is during the validation of hypotheses. Especially in business-building projects, we start with the client by formulating hypotheses, which then need to be validated fast and efficiently. This is where low-code platforms come into play, as they provide us with the ability to rapidly develop prototypes, test versions, or just parts of an application, depending on the use case. In this way, low-code platforms help us to validate the hypotheses quickly and accurately, providing us with the insight needed to make the best possible decisions.
An example use case
Let's take a look at an example where we could proceed with traditional engineering methods, and what an approach using low-code development platforms could look like instead.
The client has a legacy system, which is the authoritative data source for certain information. Let's assume it holds product data, including inventory for an online store. Let's further assume that we are rebuilding and scaling up the online business for the client. However, the legacy system is slow to respond and is not designed to handle heavy loads.
Given these circumstances, we have a few options on how to deal with it. The following are a few typical scenarios:
1) We provide the legacy system with a REST interface and a performance upgrade
The obvious approach is to extend the legacy system and enable it to handle the new or future requirements. However, often the existing developers are busy with maintenance and developing smaller features, so we have to consider pausing these tasks or bringing new developers into the team. Both possible solutions have consequences:
If we pull developers away from maintenance and minor feature development, other issues take a back seat. Can the company afford these opportunity costs? Normally, the answer is a clear "no", because the ongoing business pays for further development and therefore must not be torpedoed.
So if we want to bring in new developers, we have a number of other things to sort out:
- Can we get new developers on the team in the short term and for a reasonable price? The language of the legacy application, the specialization in the product and the associated price of the developers are variables that we need to resolve in order to quantify the costs involved.
- Once we have found developers, we need to budget for onboarding. Deployment, pipelines, and testing need to be clarified. Often such legacy systems lack modern deployment workflows and are unavailable during each deployment. Is this acceptable within our time window?
- How high is the risk of changes? Does a suitable test environment exist to extensively test the changes in a short time? What are the implications on other business units should a deployment go wrong?
There are certainly more questions that need to be answered. The key questions, however, are how long the solutions will take, what complexity they will bring, and how much they will cost.
2) Decoupling the legacy system
This is another approach, which we often choose when the previous option is too costly, too risky, or the changes are too complex. In this case, we decouple the system from direct access by the online store.
We would create an application that periodically reads data from the legacy system, e.g. via a database connection, SOAP interface, FTP or whatever (let your imagination run wild, it can be anything 😅), and writes it to a database. Then we would create a REST or GraphQL interface through which the online store can read the above data.
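The core of this decoupling can be sketched in a few lines. The sketch below is purely illustrative: the record shape and the `fetch_from_legacy` callable stand in for a real database, SOAP or FTP reader, and the plain dict stands in for the fast database behind the REST/GraphQL interface.

```python
def sync_products(fetch_from_legacy, store: dict) -> int:
    """Upsert legacy records into the fast read store; return the number of changes."""
    changed = 0
    for record in fetch_from_legacy():
        key = record["sku"]
        if store.get(key) != record:
            store[key] = dict(record)  # insert or update a copy of the record
            changed += 1
    return changed

# Example: the first sync writes everything, a repeat sync is a no-op.
legacy_reader = lambda: [
    {"sku": "A-1", "name": "Widget", "stock": 12},
    {"sku": "B-2", "name": "Gadget", "stock": 3},
]
read_store: dict = {}
sync_products(legacy_reader, read_store)  # 2 changes
sync_products(legacy_reader, read_store)  # 0 changes
```

The online store then reads exclusively from the fast store (in practice a managed database behind the REST or GraphQL endpoint) and never touches the legacy system directly.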
To read the data from the legacy system on a regular basis, we would need a cron job or similar (e.g. AWS Batch job scheduler, GCP Cloud Scheduler, etc.). There are a few things to consider here, such as race conditions, batch processing and slicing, and more.
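To make the race-condition point concrete: one typical issue is the scheduler firing again while the previous sync run is still in progress. A simple single-process guard is a non-blocking lock, so an overlapping run is skipped rather than stacked up. This is only a sketch of the idea; in a distributed setup you would reach for an advisory lock in the database or a Redis lock instead.

```python
import threading

_sync_lock = threading.Lock()

def run_sync_once(do_sync) -> bool:
    """Run do_sync only if no other run is in progress; return whether it ran."""
    if not _sync_lock.acquire(blocking=False):
        return False  # a previous run is still busy; skip this tick
    try:
        do_sync()
        return True
    finally:
        _sync_lock.release()
```

Every one of these details is solvable, but each one is code we have to write, test and operate ourselves.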
There are solutions to all these issues that developers can implement. However, this takes time and eats up resources that might be better invested in work that brings real value to the user or the business.
3) We use a creative solution of low-code development platforms and SaaS products
In this approach, we would largely avoid custom development.
We would use the existing interface (remember: database, SOAP, FTP, or other) and integrate iPaaS tools like Zapier, Prismatic, Celigo or IFTTT.
👉 I will continue with Zapier in this example, but the actual choice would depend on the circumstances. There are even open-source iPaaS options, such as Cenit IO.
In this case, we probably need to write a connector, called a ‘trigger’ in Zapier, to make Zapier pick up new data from the legacy system.
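Zapier triggers are actually written in JavaScript using the Zapier Platform CLI, but the core idea of a polling trigger is easy to sketch: each poll returns the latest records, each carrying a unique `id`, and Zapier's deduplication only fires the Zap for ids it has not seen before. Below is that logic sketched in Python for consistency with the earlier examples; `fetch_recent_products` is a hypothetical call into the legacy system.

```python
def poll_trigger(fetch_recent_products, seen_ids: set) -> list:
    """Return only the records a deduplicating poller would treat as new."""
    new_items = []
    for record in fetch_recent_products():
        if record["id"] not in seen_ids:
            seen_ids.add(record["id"])  # remember the id, like Zapier's deduper
            new_items.append(record)
    return new_items
```

In a real Zapier integration, the seen-id bookkeeping is handled by Zapier itself; the trigger code only has to return items with stable, unique `id` fields.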
We would connect Zapier to a Cloud Database such as Firebase or DynamoDB, chosen here as two examples of many. There are already connections from Zapier to both databases and both provide a REST interface.
Thus, with relatively little effort, we have built a system that behaves like a custom-built application, but with high stability and little time investment. This is because we don't have to deploy and host the systems ourselves, nor do we have to ensure that data is fetched from the legacy system at regular intervals and without race conditions.
Which scenario is the best?
Unfortunately, this question cannot be answered in general. Both the example and the possible scenarios are heavily simplified.
In a real project, there are many more aspects to consider, and the scenarios could overlap. For example, instead of a cloud iPaaS, we could also use Kafka. The decision comes down to the wider circumstances, and the strategy and policies around security, compliance and governance.
However, in an environment of scarce resources and time, it makes sense to consider and deploy alternative tools.
What about you?
What challenges do you face? We're happy to provide support and show alternative approaches. In doing so, we always keep an eye on the big picture, so that technology becomes an integral part of the solution, pays into the business KPIs, and is not just a cost factor.