
~ Latest Posts ~

I recently followed a course on LinkedIn Learning titled Software Architecture: Patterns for Developers by Peter Morlion. In this course, I learned about the Command Query Responsibility Segregation (CQRS) pattern and wanted to try it out in ASP.NET Core. In this article, I will walk you through my approach to implementing CQRS in an ASP.NET Core Web API project.

Understanding CQRS

Command Query Responsibility Segregation (CQRS) is a software architectural pattern that separates the responsibility of reading data from that of writing data. In a traditional application, a single model is used for both reads and writes. CQRS introduces two separate models: a read model optimized for querying data, and a write model optimized for updating it.

Let’s define Commands and Queries:

  1. Commands: operations that change the system’s state and may or may not return data.
  2. Queries: operations that retrieve data from the system without modifying its state (see the sketch below).
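
For illustration, a command and a query might be modeled in C# as follows. These two types are hypothetical examples, not part of the sample application:

public class RenameUserCommand   // changes state; may or may not return data
{
    public string UserId { get; set; }
    public string NewName { get; set; }
}

public class UserNameQuery       // reads state; never modifies it
{
    public string UserId { get; set; }
}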

Why take the trouble? What benefits justify the effort of implementing this pattern?

Benefits of CQRS

One of the main advantages of CQRS is the ability to scale the read and write operations independently. In a traditional application, both must be scaled together. Isolating the read and write models also adds flexibility: each model can be changed without affecting the other, and each can be optimized separately for performance.

However, implementing CQRS comes with trade-offs.

Considerations and Trade-offs

One of the main disadvantages is the complexity the pattern adds to the system. CQRS is not suitable for simple CRUD applications, as it would add unnecessary complexity. It is best suited for complex business domains where the benefits outweigh the added complexity.

Another concern is maintaining consistency between the read model and the write model. Additional effort is required to synchronize data between the two. Eventual consistency is a common approach: the read model is allowed to lag briefly behind the write model before catching up.

CQRS can also lead to code duplication, and the learning curve associated with the pattern can increase development time.

Implementation of CQRS

Now let’s dive into the implementation.

First, I built a simple CRUD application for managing contacts. The application has three entities: User, Contact, and Address. A user can have contacts of different types (email, phone, etc.) and addresses in different states.

You might notice that a simple CRUD application like this has no real use case for CQRS. However, we will use it to demonstrate how the pattern can be implemented.

I created the domain models and a repository which uses an SQLite database to persist the data. Tables for each entity were created in the database.

public class User
{
    public string Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public List<Contact> Contacts { get; set; }
    public List<Address> Addresses { get; set; }
}

public class Contact
{
    public string Id { get; set; }
    public string Type { get; set; }
    public string Detail { get; set; }
    public string UserId { get; set; }
}

public class Address
{
    public string Id { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Postcode { get; set; }
    public string UserId { get; set; }
}

public interface IUserRepository
{
    User Get(string userId);
    void Create(User user);
    void Update(User user);
    void Delete(string userId);
}
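
As a rough sketch, the repository’s Get method might look something like this with Microsoft.Data.Sqlite (the Users table name and the database file name are my assumptions, and loading the user’s contacts and addresses is omitted for brevity):

using Microsoft.Data.Sqlite;

public class UserRepository
{
    // Assumed database file name; the actual repository may differ.
    private const string ConnectionString = "Data Source=contactbook.db";

    public User Get(string userId)
    {
        using var connection = new SqliteConnection(ConnectionString);
        connection.Open();

        var command = connection.CreateCommand();
        command.CommandText = "SELECT Id, FirstName, LastName FROM Users WHERE Id = $id";
        command.Parameters.AddWithValue("$id", userId);

        using var reader = command.ExecuteReader();
        if (!reader.Read())
        {
            return null;
        }

        return new User
        {
            Id = reader.GetString(0),
            FirstName = reader.GetString(1),
            LastName = reader.GetString(2)
        };
    }
}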

At this point, I could expose these simple CRUD operations from the repository as a service, use it in an ASP.NET Core controller, and the application would work just fine.

Instead, to implement the CQRS pattern, I need to separate the read and write operations into separate models.

First I implemented the write side of the application.

I defined two commands, CreateUserCommand and UpdateUserCommand, which are used to create users and to update their contacts and addresses.

public class CreateUserCommand
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class UpdateUserCommand
{
    public string Id { get; set; }
    public List<Contact> Contacts { get; set; }
    public List<Address> Addresses { get; set; }
}

To handle the write operations, I created a new repository UserWriteRepository based on the previous UserRepository implementation.

public interface IUserWriteRepository
{
    User Get(string userId);
    void Create(User user);
    void Update(User user);
    void Delete(string userId);
    Contact GetContact(string contactId);
    void CreateContact(Contact contact);
    void UpdateContact(Contact contact);
    void DeleteContact(string contactId);
    Address GetAddress(string addressId);
    void CreateAddress(Address address);
    void UpdateAddress(Address address);
    void DeleteAddress(string addressId);
}

Next, I implemented a service named UserWriteService which uses the UserWriteRepository to handle the write operations.

public interface IUserWriteService
{
    User HandleCreateUserCommand(CreateUserCommand command);
    User HandleUpdateUserCommand(UpdateUserCommand command);
}
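
Here is a minimal sketch of how the create handler might be implemented; the Guid-based Id and the constructor shape are my assumptions, not taken from the repository:

public class UserWriteService : IUserWriteService
{
    private readonly IUserWriteRepository _userWriteRepository;

    public UserWriteService(IUserWriteRepository userWriteRepository)
    {
        _userWriteRepository = userWriteRepository;
    }

    public User HandleCreateUserCommand(CreateUserCommand command)
    {
        // Build the write-model entity from the command and persist it.
        var user = new User
        {
            Id = Guid.NewGuid().ToString(),
            FirstName = command.FirstName,
            LastName = command.LastName,
            Contacts = new List<Contact>(),
            Addresses = new List<Address>()
        };
        _userWriteRepository.Create(user);
        return user;
    }

    // The update handler is shown later in the article, together with the
    // read-model projection step.
    public User HandleUpdateUserCommand(UpdateUserCommand command)
    {
        throw new NotImplementedException();
    }
}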

And that completes the write side of the application.

Next, we will implement the read side of the application. Note that the read data model should be independent of the write model and optimized purely for reading data.

We need to define the read model suited to the read operations that we have in the application. In this ASP.NET Core application, end users should be able to get the contact details of a particular user according to the contact type, and the address of a particular user according to the state.

To achieve this, I have defined two queries:

public class ContactByTypeQuery
{
    public string UserId { get; set; }
    public string ContactType { get; set; }
}

public class AddressByStateQuery
{
    public string UserId { get; set; }
    public string State { get; set; }
}

I will define two models, UserAddress and UserContact, to represent the read model.

public class UserAddress
{
    public string UserId { get; set; }
    public Dictionary<string, AddressByState> AddressByStateDictionary { get; set; }
}

public class UserContact
{
    public string UserId { get; set; }
    public Dictionary<string, ContactByType> ContactByTypeDictionary { get; set; }
}
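
The ContactByType and AddressByState types referenced above are not shown in the article; based on the read-model tables that follow, they presumably look like this (a reconstruction, not copied from the repository):

public class ContactByType
{
    public string Id { get; set; }
    public string Type { get; set; }
    public string Detail { get; set; }
}

public class AddressByState
{
    public string Id { get; set; }
    public string State { get; set; }
    public string City { get; set; }
    public string Postcode { get; set; }
}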

And I will create the following database tables in the read model:

CREATE TABLE UserAddresses
(
    UserId NVARCHAR(255) NOT NULL,
    AddressByStateId NVARCHAR(255) NOT NULL,
    FOREIGN KEY(AddressByStateId) REFERENCES AddressByState(Id),
    PRIMARY KEY(UserId, AddressByStateId)
);

CREATE TABLE AddressByState
(
    Id NVARCHAR(255) PRIMARY KEY,
    State NVARCHAR(255) NOT NULL,
    City NVARCHAR(255) NOT NULL,
    Postcode NVARCHAR(255) NOT NULL
);

CREATE TABLE UserContacts
(
    UserId NVARCHAR(255) NOT NULL,
    ContactByTypeId NVARCHAR(255) NOT NULL,
    FOREIGN KEY(ContactByTypeId) REFERENCES ContactByType(Id),
    PRIMARY KEY(UserId, ContactByTypeId)
);

CREATE TABLE ContactByType
(
    Id NVARCHAR(255) PRIMARY KEY,
    Type NVARCHAR(255) NOT NULL,
    Detail NVARCHAR(255) NOT NULL
);

To handle the read operations, we need to define a new repository called UserReadRepository.

public interface IUserReadRepository
{
    UserContact GetUserContact(string userId);
    UserAddress GetUserAddress(string userId);
}

Next, I implemented a service named UserReadService which uses the UserReadRepository to handle the read operations.

public interface IUserReadService
{
    ContactByType Handle(ContactByTypeQuery query);
    AddressByState Handle(AddressByStateQuery query);
}
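
A minimal sketch of this service might look as follows, assuming the read-model dictionaries are keyed by contact type and by state respectively:

public class UserReadService : IUserReadService
{
    private readonly IUserReadRepository _userReadRepository;

    public UserReadService(IUserReadRepository userReadRepository)
    {
        _userReadRepository = userReadRepository;
    }

    public ContactByType Handle(ContactByTypeQuery query)
    {
        // The read model stores contacts keyed by type,
        // so the query becomes a single dictionary lookup.
        UserContact userContact = _userReadRepository.GetUserContact(query.UserId);
        return userContact.ContactByTypeDictionary[query.ContactType];
    }

    public AddressByState Handle(AddressByStateQuery query)
    {
        UserAddress userAddress = _userReadRepository.GetUserAddress(query.UserId);
        return userAddress.AddressByStateDictionary[query.State];
    }
}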

Now we can read the required data from the read model tables and return the expected results.

But you’ll notice that the read model tables are still empty; nothing has written any data to them.

How do these tables get updated when a user is created or updated through the write model? We need a mechanism that synchronizes the data between the read and write models.

I have implemented a UserProjector to handle this synchronization.

public interface IUserProjector
{
    void Project(User user);
}
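
The projector’s implementation is not shown in the article. Here is one possible sketch; the IUserReadStore persistence interface and its Save methods are hypothetical, and it assumes a user has at most one contact per type and one address per state (matching the dictionary keys):

public class UserProjector : IUserProjector
{
    // Hypothetical abstraction over the read-model tables.
    private readonly IUserReadStore _readStore;

    public UserProjector(IUserReadStore readStore)
    {
        _readStore = readStore;
    }

    public void Project(User user)
    {
        // Reshape the write-model entity into the read model:
        // contacts keyed by type, addresses keyed by state.
        var userContact = new UserContact
        {
            UserId = user.Id,
            ContactByTypeDictionary = user.Contacts.ToDictionary(
                c => c.Type,
                c => new ContactByType { Id = c.Id, Type = c.Type, Detail = c.Detail })
        };

        var userAddress = new UserAddress
        {
            UserId = user.Id,
            AddressByStateDictionary = user.Addresses.ToDictionary(
                a => a.State,
                a => new AddressByState { Id = a.Id, State = a.State, City = a.City, Postcode = a.Postcode })
        };

        _readStore.Save(userContact);   // hypothetical
        _readStore.Save(userAddress);   // hypothetical
    }
}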

Ideally, this synchronization should happen asynchronously so that it does not block the write operations. For simplicity, I call the Project method synchronously in the UserWriteService.

public User HandleUpdateUserCommand(UpdateUserCommand command)
{
    // Load the current state from the write model.
    User user = _userWriteRepository.Get(command.Id);

    // Apply the changes carried by the command.
    user.Contacts = UpdateContacts(user, command.Contacts);
    user.Addresses = UpdateAddresses(user, command.Addresses);

    // Synchronously project the updated state into the read model.
    _userProjector.Project(user);
    return user;
}

With the synchronization in place, the read model tables will be updated whenever a user is updated.
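
To tie everything together, an ASP.NET Core controller can expose the commands and queries as API endpoints. The following is a sketch of how those endpoints might look; the routes and controller shape are my assumptions, not taken from the repository:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase
{
    private readonly IUserWriteService _userWriteService;
    private readonly IUserReadService _userReadService;

    public UsersController(IUserWriteService userWriteService, IUserReadService userReadService)
    {
        _userWriteService = userWriteService;
        _userReadService = userReadService;
    }

    // Write side: commands arrive in the request body.
    [HttpPost]
    public ActionResult<User> CreateUser(CreateUserCommand command)
    {
        return _userWriteService.HandleCreateUserCommand(command);
    }

    [HttpPut]
    public ActionResult<User> UpdateUser(UpdateUserCommand command)
    {
        return _userWriteService.HandleUpdateUserCommand(command);
    }

    // Read side: queries are built from the route parameters.
    [HttpGet("{userId}/contacts/{contactType}")]
    public ActionResult<ContactByType> GetContactByType(string userId, string contactType)
    {
        return _userReadService.Handle(new ContactByTypeQuery { UserId = userId, ContactType = contactType });
    }

    [HttpGet("{userId}/addresses/{state}")]
    public ActionResult<AddressByState> GetAddressByState(string userId, string state)
    {
        return _userReadService.Handle(new AddressByStateQuery { UserId = userId, State = state });
    }
}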

And that completes the implementation of the CQRS pattern in the application.

The source code for this application can be found in the GitHub repo.

Follow the below steps to run the application:

  1. Clone the repository.
  2. Run the recreate-database.bat file to create the database tables.
  3. Open the ContactBook.sln file in Visual Studio.
  4. Run the application.
  5. Use the Swagger UI to test the API endpoints for creating, updating, and reading user contacts.

Conclusion

In this article, my aim was to demonstrate how to implement the CQRS pattern in an ASP.NET Core application. In my next article, I will try to implement Event Sourcing along with CQRS in this application.

If you have any questions or feedback, please feel free to share.

Thank you for reading!

See Also

  1. https://github.com/gregoryyoung/m-r/tree/master/SimpleCQRS

  2. https://www.baeldung.com/cqrs-event-sourcing-java

  3. https://www.confluent.io/learn/cqrs/

This article was originally published on Towards Dev.

Monitoring an application helps us track aspects like resource usage, availability, performance, and functionality. Azure Monitor is a service in Microsoft Azure that delivers a comprehensive solution for collecting, analyzing, and acting on telemetry data from our environments.

Application Insights is an extension of Azure Monitor that provides Application Performance Monitoring both proactively and reactively. Today I’m going to tell you how you can easily use Azure Application Insights to monitor Java applications.

Please note that I have used WSO2 Identity Server in this article to demonstrate how to enable and configure Azure Application Insights for a Java Application.

Prerequisites

  • Java application (using Java 8+)
  • Azure subscription

  1. As the first step, we need to create an Application Insights resource in Azure as follows. You can find how to create an Application Insights resource from here.

Creating an Application Insights resource

  2. Next, we need to copy the connection string from the Application Insights resource we just created as shown below.

Application Insights resource connection string

  3. Create a file named applicationinsights.json with the following content. You should replace <CONNECTION_STRING> with the connection string copied above.

{
  "connectionString": "<CONNECTION_STRING>"
}

  4. Next, download the Application Insights agent for Java from here. You should keep the applicationinsights-agent-x.x.x.jar file in the same directory as the applicationinsights.json file created above.

  5. In our Java application, we should add a JVM argument as shown below to point the JVM to the agent jar file. You need to update the path to the agent jar file.

-javaagent:"path/to/applicationinsights-agent-x.x.x.jar"

In my WSO2 Identity Server configuration, I have added the JVM argument in the /bin/wso2server.sh file as shown below.

Adding the JVM argument

  6. When Azure Application Insights is correctly configured for the Java application, you can see it in the logs as shown below when the application starts.

Application Insights Java Agent logs

Now we can observe real-time telemetry data from our Java application running on Azure through the Application Insights resource.

Application Insights Live Metrics

We can get information like CPU percentage, committed memory, request duration and request failure rate from Application Insights under Live Metrics.

Additionally, we can use features like Log Analytics, Availability tests and Alerting through Azure Application Insights. You can read more about Application Insights here.

Thank you for taking the time to read. Have a nice day!

References

  1. https://learn.microsoft.com/en-us/azure/azure-monitor/overview

  2. https://learn.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview?tabs=java

  3. https://wso2.com/identity-server/

  4. https://learn.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource

This article was originally published on Towards Dev.

Azure Membership Scheme

Have you ever deployed WSO2 products in Azure virtual machines?

Today I’m going to tell you how you can easily run a cluster of WSO2 Identity Server on Azure virtual machines. To automatically discover the Identity Server nodes on Azure, we can use the Azure Membership Scheme.

Please note that this article is written based on using WSO2 Identity Server 5.11 with the Azure Membership Scheme. You can use the Azure Membership Scheme with other WSO2 products as well.

You can get a basic understanding about clustering in WSO2 Identity Server by reading this doc.

How the Azure Membership Scheme works

You can find the Azure Membership Scheme in this repository.

When a Carbon server is configured to use the Azure Membership Scheme, it queries the Azure services during startup for the IP addresses of the virtual machines in the given cluster.

To discover the IP addresses, the name of the Azure resource group where the virtual machines are assigned must be provided. After discovery, the Hazelcast network configuration is updated with the acquired IP addresses, and the Hazelcast instance connects to all the other members in the cluster.

In addition, when a new member is added to the cluster, all other members will get connected to the new member.

The following two approaches can be used for discovering Azure IP addresses.

Using the Azure REST API

The Azure REST API is used to get the IP addresses of the virtual machines in the resource group and provide them to the Hazelcast network configuration.

Using the Azure SDK

The Azure Java SDK is used to query the IP addresses of the virtual machines in the resource group and provide them to the Hazelcast network configuration.

By default, the Azure Membership Scheme uses the Azure REST API to discover the Azure virtual machines (if you want to use the Azure Java SDK instead, please refer to https://github.com/pabasara-mahindapala/azure-membership-scheme/blob/master/README.md).

How to use the Azure Membership Scheme

Follow the given steps to use the Azure Membership Scheme.

  • Clone the Azure Membership Scheme repository and run the following command from the azure-membership-scheme directory:

mvn clean install

  • Copy the following JAR file from azure-membership-scheme/target to the <carbon_home>/repository/components/lib directory of the Carbon server:

azure-membership-scheme-1.0.0.jar

  • Copy the following dependencies from azure-membership-scheme/target/dependencies to the <carbon_home>/repository/components/lib directory of the Carbon server:

azure-core-1.23.1.jar
content-type-2.1.jar
msal4j-1.11.0.jar
oauth2-oidc-sdk-9.7.jar

  • Configure the membership scheme in the <carbon_home>/repository/conf/deployment.toml file as shown below:

[clustering]
membership_scheme = "azure"
local_member_host = "127.0.0.1"
local_member_port = "4000"

[clustering.properties]
membershipSchemeClassName = "org.wso2.carbon.membership.scheme.azure.AzureMembershipScheme"
AZURE_CLIENT_ID = ""
AZURE_CLIENT_SECRET = ""
AZURE_TENANT = ""
AZURE_SUBSCRIPTION_ID = ""
AZURE_RESOURCE_GROUP = ""
AZURE_API_ENDPOINT = "https://management.azure.com"
AZURE_API_VERSION = "2021-03-01"

  • When the server starts, you will see logs related to the cluster initialization.

I have explained the parameters required to configure the Azure Membership Scheme below.

  • AZURE_CLIENT_ID - Azure Client ID should be obtained by registering a client application in the Azure Active Directory tenant. The client app needs to have the necessary permissions assigned to perform the action Microsoft.Network/networkInterfaces/read [1].

eg: 53ba6f2b-6d52-4f5c-8ae0-7adc20808854

  • AZURE_CLIENT_SECRET - Azure Client Secret generated for the application.

eg: NMubGVcDqkwwGnCs6fa01tqlkTisfUd4pBBYgcxxx=

  • AZURE_TENANT - Azure Active Directory tenant name or tenant ID.

eg: default

  • AZURE_SUBSCRIPTION_ID - ID of the subscription for the Azure resources.

eg: 67ba6f2b-8i5y-4f5c-8ae0-7adc20808980

  • AZURE_RESOURCE_GROUP - Azure Resource Group to discover IP addresses.

eg: wso2cluster

  • AZURE_API_ENDPOINT - Azure Resource Manager API Endpoint.

eg: https://management.azure.com

  • AZURE_API_VERSION - Azure API Version.

eg: 2021-03-01

If you have any suggestions or questions about the Azure Membership Scheme, don’t forget to leave a comment.

Have a nice day!

References

  1. https://learn.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#microsoftnetwork

This article was originally published on JavaScript in Plain English.

We come across numerous kinds of search bars and search dropdowns as we browse the internet every day. Most of these search bars call a backend API to query results based on our input.

Think about a search bar for countries that displays results as you type.

Assume a user wants to search for Colombia and types in “col”. If the search function is triggered on each keystroke, there will be three separate search requests, for “c”, “co” and “col” (see below).

Unwanted API requests are being sent

But only the results from the last request, the one that searched for “col”, are actually required. The requests for “c” and “co” are sent to the backend, but their results are never used.

To prevent a search bar from sending unwanted requests like this for each letter the user types, we can use the debounce operator in RxJS.

By definition, the debounce operator emits a notification from the source Observable only after a particular time span, determined by another Observable, has passed without another source emission.

In simple words, it holds back the request to the API until a configured time (let’s say 500ms) has passed after an input without another input arriving. Since the time between the user typing “c”, “o” and “l” is less than this value, no API requests are sent for the intermediate terms. But once 500ms have passed after the user typed “col”, the API request for the search is sent.

I have created a sample search bar in Angular with debounce. Here, I have configured the debounce time as 500ms.

import { fromEvent } from 'rxjs';
import { debounceTime, filter, pluck, switchMap, tap } from 'rxjs/operators';

ngAfterViewInit() {
    fromEvent(this.searchInput.nativeElement, 'input')
      .pipe(
        // Read the current value of the input element from the event.
        pluck('target', 'value'),
        // Ignore search terms shorter than the configured minimum length.
        filter((searchTerm: string) => {
          return (
            searchTerm.trim().length >= (this.minLength ? this.minLength : 1)
          );
        }),
        // Wait until 500ms have passed without another keystroke.
        debounceTime(500),
        tap((_) => {
          this.showList = true;
        }),
        // Cancel any in-flight request and switch to the latest search.
        switchMap((value) => {
          return this.searchRequest(value);
        })
      )
      .subscribe((result: any) => {
        this.resultList = result;
      });
}

After the debounce time is reached, the searchRequest() method is called with the search term. Then it will query the API and return the results as an observable.

You can see a StackBlitz demo of the search bar below. Check the console to see the requests being made.

Don’t forget to check the full source code on GitHub!