
How to minimize the challenges of developing a notification engine for successful user engagement

Notifications are critical to user engagement and interaction. Imagine being able to send and track every type of notification (Email, SMS, WhatsApp, IVR, Push, Web Push) from one highly scalable, flexible system. It would then be easy to notify your customers or associates about every significant event, while letting customers choose which channels and which types of notifications they want to receive.


We at Accion Labs believe in polyglot architecture, and that is reflected in the technology choices we made when designing our Notification Engine. The MEAN stack sits at the heart of the Notification Engine: we primarily use Node.js coupled with Express and an event-driven approach. As we handle many different types of notifications, we preferred a non-relational database, MongoDB. APIs are exposed through an API gateway, Kong, leaving the services free to focus on core application and business logic.


Users have the flexibility to specify at which event or activity they want to initiate a communication, and at what frequency, so customers are sufficiently engaged without being overwhelmed by irrelevant and redundant messages. Customers can also control which mode of communication they want for each event: SMS, Email, IVR calls, Web Notifications or Push Notifications.

The engine uses a template engine and hence separates view from data. An administrative team can therefore configure the communication message without touching code, and this flexibility allows them to test various interactions to find out what is most relevant to the customers or associates. Most notification systems are designed purely around REST APIs, but at Accion Labs we provide both REST and an event-driven approach to trigger notifications. Individual micro-services communicate with each other through Kafka events. The engine is composed of the following modules:


  1. Notification API: This module receives the notification request and pushes it to the Kafka queue. All validations are performed against a schema using the Joi validator; the API user needs to provide the details in the specified JSON format.
  2. Dispatcher: This module bifurcates the received payload, inserts it into the database, fetches any preferences, and prepares the packet required by the relevant plugin service based on the notification type.
  3. Preferences: This module stores user details (such as name, contact number and email address) and user preferences, for example which channels a user wants to receive notifications on.
  4. Plugins: Individual service plugins that actually send the respective notifications, such as email, SMS, etc.

These plugin micro-services are kept flexible enough to incorporate any other channel provider easily: implement just the provider's core logic in the respective plugin and add it to the configuration. Currently, we have implemented widely used channel providers such as Twilio, Nexmo, etc.
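As a sketch, the kind of JSON payload the Notification API might accept, and the schema check it performs before queueing. The real implementation uses the Joi validator; the field names and allowed values below are illustrative assumptions, not the engine's actual schema.

```javascript
// Illustrative notification request; field names are assumptions.
const request = {
  notificationType: "email",          // email | sms | whatsapp | ivr | push | webpush
  recipients: ["jane@example.com"],
  templateId: "welcome-v1",
  data: { name: "Jane" }              // dynamic values for the template
};

// Minimal hand-rolled check standing in for the Joi schema validation
// the Notification API performs before pushing to the Kafka queue.
function validateRequest(req) {
  const errors = [];
  const allowed = ["email", "sms", "whatsapp", "ivr", "push", "webpush"];
  if (!allowed.includes(req.notificationType)) errors.push("invalid notificationType");
  if (!Array.isArray(req.recipients) || req.recipients.length === 0) errors.push("recipients required");
  if (typeof req.templateId !== "string") errors.push("templateId required");
  return errors;
}

console.log(validateRequest(request));            // []
console.log(validateRequest({ recipients: [] })); // lists all three errors
```

Only requests that pass validation are produced to the queue; invalid ones are rejected at the API boundary.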


User-controlled channel configurations:

Usually there is a set of patterns used for triggering notifications, where a similar type of notification is sent to a group of users. To achieve this we introduced a concept called notification agents: an agent holds the predefined configuration of a particular workflow. A notification agent can be created to provide clients with customizations, wherein the client or admin configures which channel should be preferred, or which template should be sent, for a given group of users.
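A notification agent's predefined configuration could be sketched like this; the field names and the merge behaviour are hypothetical illustrations, not the engine's actual agent model.

```javascript
// Hypothetical agent configuration for one workflow.
const orderShippedAgent = {
  agentId: "order-shipped",
  channels: ["email", "sms"],        // preferred channels for this workflow
  templateId: "order-shipped-v2",    // template rendered for each recipient
  audience: "customers-with-open-orders"
};

// The dispatcher could fill in agent defaults for fields
// the request leaves unspecified.
function applyAgent(agent, request) {
  return {
    channels: agent.channels,
    templateId: agent.templateId,
    ...request                       // explicit request fields win
  };
}

const msg = applyAgent(orderShippedAgent, { recipients: ["jane@example.com"] });
console.log(msg.channels);   // [ 'email', 'sms' ]
console.log(msg.templateId); // order-shipped-v2
```

With this shape, triggering the workflow only needs the agent id and the recipients; everything else comes from the agent's configuration.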

Scalable and robust:

Using the event-driven approach, we wanted to guarantee delivery of triggered notifications while handling as many as millions of them. We handle this using Kafka, as it provides a pub-sub messaging service with strong support for distributed architectures. A Node library called node-rdkafka is used for the Kafka integration.

Schema validation:

When working with Kafka events, we wanted to ensure that valid data is received whenever a message is produced to Kafka. For this, schema validation using Avro is implemented, which guarantees that each message follows the predefined schema format. This keeps malformed messages and irrelevant information off the queue.
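An Avro schema for the notification message might look like the following sketch; the record name, namespace and fields are assumptions for illustration.

```json
{
  "type": "record",
  "name": "NotificationMessage",
  "namespace": "com.example.notifications",
  "fields": [
    { "name": "notificationType", "type": "string" },
    { "name": "recipient", "type": "string" },
    { "name": "templateId", "type": ["null", "string"], "default": null },
    { "name": "payload", "type": { "type": "map", "values": "string" } }
  ]
}
```

Messages that do not serialize against the schema are rejected before they ever reach the queue.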

Personalized Content with dynamic templates:

We created a template engine micro-service that is responsible for storing the templates created by the administrator and rendering them with the customer's dynamic data. Once created, these templates can be configured in agents or overridden while producing a message to the Kafka queue.
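As a sketch of the rendering step, the template service could substitute a customer's dynamic data into placeholders like this. The `{{name}}` placeholder syntax and the field names are assumptions, not the engine's actual template language.

```javascript
// Minimal placeholder substitution standing in for the template engine.
// The {{key}} syntax is an illustrative assumption.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : ""
  );
}

const template = "Hi {{name}}, your order {{orderId}} has shipped.";
console.log(render(template, { name: "Jane", orderId: "A-102" }));
// Hi Jane, your order A-102 has shipped.
```

Because the template is data rather than code, an administrator can edit the message copy without a deployment.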

Data centralization and logging:

The status of each notification sent to a customer is stored in one spot in a centralized database, which can be viewed in the front-end. The triggered events are also persisted so that they can be replayed when needed, in case of any failure in the micro-services. Errors that occur are tracked in the database as well.
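The centralized record for one notification might look like the following MongoDB document; the field names and values are illustrative assumptions.

```json
{
  "notificationId": "ntf-8841",
  "notificationType": "email",
  "recipient": "jane@example.com",
  "status": "delivered",
  "attempts": 1,
  "error": null,
  "createdAt": "2020-06-01T10:15:00Z",
  "updatedAt": "2020-06-01T10:15:04Z"
}
```

Keeping status, attempt count and any error on one document is what makes both the front-end status view and failure replay possible.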

Schedule Notifications:

Notifications can be scheduled for a later date and time by assigning the scheduled date and time in the message packet.
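A sketch of how a scheduled send could be expressed in the message packet and checked on the dispatch side; the `scheduledAt` field name is an assumption.

```javascript
// The message packet carries an optional scheduled date/time;
// 'scheduledAt' is an illustrative field name.
const packet = {
  notificationType: "push",
  recipient: "device-token-123",
  scheduledAt: "2030-01-01T09:00:00Z"
};

// Send now only if there is no schedule, or the scheduled time has passed.
function isDue(pkt, now = new Date()) {
  return !pkt.scheduledAt || new Date(pkt.scheduledAt) <= now;
}

console.log(isDue(packet, new Date("2029-12-31T00:00:00Z"))); // false
console.log(isDue(packet, new Date("2030-01-02T00:00:00Z"))); // true
```

Packets that are not yet due would be held (or re-queued) until their scheduled time arrives.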

Provide flexibility to send notifications through multiple channel providers:

The administrator can configure multiple channel providers for a single notification channel and set a default provider to be used when none is specified.
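Such a configuration could be sketched as follows; the provider names come from the article, but the config layout and lookup function are assumptions.

```javascript
// Hypothetical channel-provider configuration: several providers per
// channel, with a default used when none is requested.
const channelConfig = {
  sms: { providers: ["twilio", "nexmo"], default: "twilio" }
};

function resolveProvider(config, channel, requested) {
  const entry = config[channel];
  if (requested && entry.providers.includes(requested)) return requested;
  return entry.default;
}

console.log(resolveProvider(channelConfig, "sms"));          // twilio
console.log(resolveProvider(channelConfig, "sms", "nexmo")); // nexmo
```

A lookup like this also gives a natural failover point: if the requested provider is unavailable, the dispatcher can fall back to the default.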

Dockerizing multiple micro-services:

We followed a multiple-containers-in-one-pod approach to handle all the micro-services.
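As a sketch, a Kubernetes pod spec following the multiple-containers-in-one-pod approach might look like this; the image names and port are assumptions, not the actual deployment.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: notification-engine
spec:
  containers:
    - name: notification-api
      image: example/notification-api:latest
      ports:
        - containerPort: 3000
    - name: dispatcher
      image: example/dispatcher:latest
    - name: email-plugin
      image: example/email-plugin:latest
```

Co-locating the containers in one pod lets them share a network namespace, at the cost of scaling them together rather than independently.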