Friday, January 31, 2020

How To move to Spring Boot 2 and Flyway 5 from Spring Boot 1.5 and Flyway 3

I had to figure this out the hard way: as soon as we upgraded our project from Spring Boot 1.5 to Spring Boot 2, all our existing migrations written with Flyway 3 started bombing.

After reading the docs I realised that the migration schema used by Flyway 3 isn't compatible with Flyway 5. What's more, there is no direct upgrade path from Flyway 3 to Flyway 5.

According to the official documentation we need to do the following:
  • First upgrade your 1.5.x Spring Boot application to Flyway 4 (4.2.0 at the time of writing), see the instructions for Maven and Gradle
  • Once your schema has been upgraded to Flyway 4, upgrade to Spring Boot 2 and run the migration again to port your application to Flyway 5.
This essentially means we would have to do two releases, which I didn't really want to do. Another approach to fixing this is described here. I haven't tried it, but it should work.

For me, all I wanted was to get the project upgraded and working. I had a few migrations, but I also had lots of DB snapshots that we could use to restore the old database. Hence, I decided to take the easy way out and looked for ways to ignore the existing migrations. In the Flyway documentation I found two settings that could be used.

# flyway.baselineOnMigrate
# Whether to automatically call baseline when migrate is executed against a non-empty schema with no schema history
# table. This schema will then be initialized with the baselineVersion before executing the migrations.
# Only migrations above baselineVersion will then be applied.
# This is useful for initial Flyway production deployments on projects with an existing DB.
# Be careful when enabling this as it removes the safety net that ensures
# Flyway does not migrate the wrong database in case of a configuration mistake! (default: false)

# flyway.baselineVersion
# The version to tag an existing schema with when executing baseline. (default: 1)

As the documentation suggests, this makes Flyway ignore all migrations up to and including version 20200131132000, and that's exactly what we wanted to do.

When using these settings with Spring Boot you need to prefix them with "spring.". Hence, the final settings added to the application.properties file are:
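Based on the two settings described above and the baseline version mentioned earlier, here is a sketch of what those final entries might look like (in Spring Boot 2 the relocated property names use the spring.flyway prefix):

```properties
spring.flyway.baseline-on-migrate=true
spring.flyway.baseline-version=20200131132000
```

With these in place, Flyway baselines the existing schema instead of trying to re-run the old Flyway 3 migrations.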


That did it for us: the project was up and running with Spring Boot 2 and Flyway 5. From that point on, any new migrations we wrote worked as well.

Sunday, December 29, 2019

How To Stream data to Redshift via Firehose

We had a backend system that wrote data to a PostgreSQL RDS instance. I wanted to copy this data to our Redshift cluster too, in a near-realtime manner. There are many ways to copy data from a PostgreSQL RDS instance to Redshift; however, most of them are not realtime solutions. Thankfully, Amazon Firehose exists and can be used to stream data to Redshift.

In this post we will see the steps needed to stream data to Redshift via Firehose using a Kotlin application.

The Solution

This post assumes that you already have an Amazon AWS account.
  • Let's start by creating a new Kinesis Data Firehose delivery stream by clicking here. We will be creating this delivery stream in the Oregon region.
  • Enter the name of the stream, select the source as Direct PUT or other sources and click Next

  • Next, this screen lets you transform the data into a different format. For simplicity, we are not going to do any of this; select Disabled for both options.

  • The next step is by far the most important one in the process, so pay attention :D
  • There are three important things to configure in this step. 
    • Choose Amazon Redshift and fill in the connection details for your Redshift cluster

    • Next, you need to provide S3 bucket details where the data will be held temporarily. Create an S3 bucket called test-delivery-streams in the same Oregon region which we will use as our temporary S3 location.

    • And finally, the Amazon Redshift COPY command: here you can specify various options for the Redshift COPY command. Since we will be streaming data in JSON format, you need to put format as json 'auto' in the COPY options - optional section.

    • Right below it, the screen will also show the actual COPY command that will be used; please review this and make sure that the COPY options have been added successfully.
    • Click Next and move to the final step
  • In this step, the only thing we need to change is the IAM role in the Permissions section.

    • Click that button and a new window will pop up, which will create a new IAM role with access to the various AWS services needed to stream data to Redshift.
    • No need to change anything here, click Allow
    • After this the popup will close and we can click Next.

  • The final step will show you all the information you have entered. Once you are happy with everything shown there, click on Create Delivery Stream.
  • This creates the delivery stream; however, we are missing one final step: creating the table in Redshift that will hold the streamed data.
  • This table needs to mimic the JSON data that we are going to ingest into the Kinesis stream. For simplicity we will create a table with only 2 columns. Here is the create table script that we will use
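As a sketch, assuming we stream JSON objects with an id and a message field (the table and column names here are hypothetical), the create table script could look like this:

```sql
-- Hypothetical two-column table mirroring the JSON records we will stream
CREATE TABLE firehose_test (
    id      INTEGER,
    message VARCHAR(256)
);
```

Whatever name you choose must match the table configured in the delivery stream's COPY command.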

  • Finally our delivery stream is ready to stream data to Redshift. 
  • Here is the sample Kotlin code to PUT data into the Kinesis stream, which should eventually end up in the Redshift table.
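A minimal sketch of such a Kotlin producer, using the AWS SDK for Java v1. The delivery stream name and the JSON record shape are assumptions matching the setup above:

```kotlin
import com.amazonaws.regions.Regions
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClientBuilder
import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest
import com.amazonaws.services.kinesisfirehose.model.Record
import java.nio.ByteBuffer

// Build a JSON record; the trailing newline keeps records separated once they land in S3.
fun buildRecord(id: Int, message: String): String =
    """{"id": $id, "message": "$message"}""" + "\n"

// PUT a single record onto the delivery stream. "test-delivery-stream" is a placeholder --
// use the name of the delivery stream you created above.
fun sendToFirehose(id: Int, message: String) {
    val firehose = AmazonKinesisFirehoseClientBuilder.standard()
        .withRegion(Regions.US_WEST_2) // the Oregon region used throughout this post
        .build()

    val request = PutRecordRequest()
        .withDeliveryStreamName("test-delivery-stream")
        .withRecord(Record().withData(ByteBuffer.wrap(buildRecord(id, message).toByteArray())))

    firehose.putRecord(request)
}
```

Calling sendToFirehose(1, "hello") would enqueue one JSON record on the stream.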
  • After invoking this code, the Kinesis stream will push the data to our temporary S3 location.
  • You can view the files in that location and validate that the data has reached S3

  • After about 5 minutes you should be able to see the same data in the Redshift cluster too!

  • If you click on the Kinesis delivery stream, it has a nifty Monitoring tab which shows information like how much data has been written to S3, how much has been written to Redshift, and so on.

Saturday, November 30, 2019

How to do iOS Receipt Validation in Objective-C

Until you do it yourself, receipt validation can feel like one of the most unclear topics an app developer faces when verifying In-App Purchases (IAP) on the iOS platform.

This post documents the exact steps and code needed to perform receipt validation with the App Store.

The Receipt

As soon as the app is installed or updated, Apple puts a purchase receipt (signed by Apple via the App Store) in the app's main bundle.

Think of the receipt as the trusted record of a purchase. It also includes any in-app purchases that the user might have made.

Receipt Validation

By verifying the receipts, App Developers can protect their revenue and enforce their business model directly in their application. Receipt Validation plays a key role in verifying whether the auto-renewing subscription is currently active or not.

Here are the steps needed to validate the receipt.
  • The first step in the process is to load the receipt from the app's main bundle
  • If the receipt is not found, we could refresh the receipt using SKReceiptRefreshRequest
    • This is especially helpful during development and debugging.
  • Refreshing the receipt tells the system that the application needs to retrieve a new receipt
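A sketch of these two steps in Objective-C, assuming the surrounding class adopts SKRequestDelegate:

```objc
#import <Foundation/Foundation.h>
#import <StoreKit/StoreKit.h>

// Load the receipt that Apple placed in the app's main bundle.
NSURL *receiptURL = [[NSBundle mainBundle] appStoreReceiptURL];
NSData *receiptData = [NSData dataWithContentsOfURL:receiptURL];

if (receiptData == nil) {
    // No receipt on disk -- ask the system to fetch a fresh one.
    SKReceiptRefreshRequest *refreshRequest =
        [[SKReceiptRefreshRequest alloc] initWithReceiptProperties:nil];
    refreshRequest.delegate = self; // SKRequestDelegate callbacks fire on completion
    [refreshRequest start];
}
```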

  • Before we can send the receipt information to the App Store, we will need the app's Shared Secret.
  • To get this, log on to itunesconnect and navigate to My Apps -> Click on your App -> Click on Features 
  • Click on the In-App Purchases section. On the right side you will see the link App-Specific Shared Secret, click this link to generate the App-Specific Shared Secret. It will pop up a dialog where you could generate a new secret or view the existing secret.
  • Note down the generated secret somewhere; we need to pass this value to the App Store for receipt verification.

  • Next, we need to send the receipt details along with the App-Specific Shared Secret to the App Store. We have to hit a different URL depending on whether the app is running in the sandbox environment or the production environment.
  • Now we need to send the Base64-encoded receipt information along with the App-Specific Shared Secret to the App Store API. The response of this API is a JSON object with details about the various purchases the user has made in the app.
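A hedged sketch of that request; sharedSecret holds the App-Specific Shared Secret from the previous step, and the endpoints are Apple's documented production and sandbox verifyReceipt URLs:

```objc
// Use https://sandbox.itunes.apple.com/verifyReceipt while testing in the sandbox.
NSURL *verifyURL = [NSURL URLWithString:@"https://buy.itunes.apple.com/verifyReceipt"];

NSDictionary *payload = @{ @"receipt-data" : [receiptData base64EncodedStringWithOptions:0],
                           @"password"     : sharedSecret };
NSData *body = [NSJSONSerialization dataWithJSONObject:payload options:0 error:nil];

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:verifyURL];
request.HTTPMethod = @"POST";
request.HTTPBody = body;

[[[NSURLSession sharedSession] dataTaskWithRequest:request
    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // On success, hand `data` to your JSON parsing code.
}] resume];
```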
  • A full list of the receipt fields in this JSON response can be found here
  • "latest_receipt_info" field is part of the JSON response. It is an Array containing the details of IAP and Subscription purchases made by the user on the app. 
  • Each purchase holds information like
    • Which product was purchased: "product_id"
    • When was the purchase made: "original_purchase_date"
    • Whether or not the auto-renewing subscription is running the trial period: "is_trial_period"
    • When the auto-renewing subscription expires: "expires_date"
    • Why the subscription expired: "expiration_intent"
  • Now all that is left is to parse the JSON response and iterate through the contents of "latest_receipt_info" field. Here's the code that does it.
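A sketch of that parsing step, assuming data is the NSData body returned by the verifyReceipt call:

```objc
NSError *jsonError = nil;
NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data
                                                     options:0
                                                       error:&jsonError];
NSArray *latestReceiptInfo = json[@"latest_receipt_info"];

for (NSDictionary *purchase in latestReceiptInfo) {
    NSString *productId     = purchase[@"product_id"];
    NSString *purchaseDate  = purchase[@"original_purchase_date"];
    NSString *isTrialPeriod = purchase[@"is_trial_period"];
    NSString *expiresDate   = purchase[@"expires_date"];
    NSLog(@"product=%@ purchased=%@ trial=%@ expires=%@",
          productId, purchaseDate, isTrialPeriod, expiresDate);
}
```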

That's about all that is needed to perform receipt validation.

However, please note that since we can't build a trusted connection between a user's device and the App Store directly, we should always call the App Store receipt validation API from a trusted server. The details of how to do that are out of scope for this post.

Thursday, October 31, 2019

How to upgrade PostgreSQL to 11.4 from 10 on MacOS

I recently updated my PostgreSQL installation from version 10.0 to 11.4. After the upgrade, I realised that I wasn't able to start my PostgreSQL server. It kept giving me the following error:

The data directory was initialized by PostgreSQL version 10.0, which is not compatible with this version 11.4.

I had to follow a bunch of steps to get my old databases working with PostgreSQL 11.4. This post is an attempt to document those steps for future reference.

The Solution

  • Install the older version of PostgreSQL using the following command
  • Output would be very similar to these messages
  • Unlink the newly installed older version of PostgreSQL. Brew will spit out a message confirming that the unlinking was successful.
  • Link the latest version of PostgreSQL. As before, brew will spit out a message stating that the linking was successful.
  • Move the data directory from default location to another location
  • Use initdb to initialise a new and empty data directory.
  • Output might look somewhat like this
  • Copy over the timezone and timezonesets directory to /usr/local/share/postgresql10
  • Upgrade the data directory using the following command
  • It will do a bunch of things and might spit out messages like these
  • Moment of truth, start the PostgreSQL server
  • If everything goes through fine, you should see a message that states that PostgreSQL server was started successfully.
  • Cleanup steps
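The steps above could be sketched as the following shell session. The Homebrew paths and the postgresql@10 formula layout are assumptions; double-check them on your machine before running anything destructive:

```shell
# Install the old version alongside the new one, then make the latest the linked version
brew install postgresql@10
brew unlink postgresql@10
brew link postgresql

# Move the old data directory aside and initialise a fresh, empty one for 11.4
mv /usr/local/var/postgres /usr/local/var/postgres10
initdb /usr/local/var/postgres

# Give the old binaries their timezone files (paths assumed)
mkdir -p /usr/local/share/postgresql10
cp -R /usr/local/share/postgresql/timezone     /usr/local/share/postgresql10/
cp -R /usr/local/share/postgresql/timezonesets /usr/local/share/postgresql10/

# Upgrade the old data directory into the new one
pg_upgrade \
  --old-bindir  "$(brew --prefix postgresql@10)/bin" \
  --new-bindir  "$(brew --prefix postgresql)/bin" \
  --old-datadir /usr/local/var/postgres10 \
  --new-datadir /usr/local/var/postgres

# Moment of truth: start the server
brew services start postgresql

# Cleanup once everything checks out
brew uninstall postgresql@10
rm -rf /usr/local/var/postgres10
```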
That's about it! We have successfully upgraded PostgreSQL from 10 to 11.4.

Sunday, September 29, 2019

How to setup an Alarm when RDS is running on low free disk space

Yea, that happened to me!

The Problem

My RDS instance suddenly ran out of space and some of our applications started failing left, right and centre. It was a disaster and a fair bit of firefighting was involved.

I asked myself, how did this happen? I should have put checks in place to ensure it didn't. I should have added some sort of alarm to warn when free disk space is low.

To deal with this, we first wanted to set up an alarm to notify the team when the RDS instance is running low on free disk space. We looked at the AWS console to create the alarm, but - I must admit - we were a bit surprised to find that there isn't a straightforward way to create this type of alarm.

The Solution

After a little googling, we found a way to set up the alarm. This post documents the steps involved in getting this done so that I do not forget them :D

We basically need to do the following

  • Create an SNS topic that can send emails
  • Subscribe the team email address to the SNS topic
  • Confirm the email subscription by clicking on the link that AWS sends.
  • Create a CloudWatch alarm to send the alert when the RDS free disk space is less than the chosen threshold
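As a sketch, the same four steps could be done with the AWS CLI. The topic name, account ID, email address, instance identifier and the 5 GiB threshold below are all placeholder assumptions:

```shell
# 1. Create an SNS topic that can send emails
aws sns create-topic --name rds-low-disk-alerts

# 2. Subscribe the team email address to the topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-west-2:123456789012:rds-low-disk-alerts \
  --protocol email \
  --notification-endpoint team@example.com

# 3. Confirm the subscription via the link AWS emails you, then...

# 4. Create a CloudWatch alarm on the RDS FreeStorageSpace metric (measured in bytes)
aws cloudwatch put-metric-alarm \
  --alarm-name rds-low-free-storage \
  --namespace AWS/RDS \
  --metric-name FreeStorageSpace \
  --dimensions Name=DBInstanceIdentifier,Value=my-rds-instance \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 5368709120 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-west-2:123456789012:rds-low-disk-alerts
```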

That's all there is to it!

Saturday, August 31, 2019

How to restart AWS Elastic Beanstalk instances on a schedule

We wanted to restart all AWS Elastic Beanstalk instances at a given time each day. There are many ways to perform this task; some involve setting up a Lambda function and writing some code. What we wanted was something quick that wouldn't become a maintenance nightmare.

Hence, I thought: all we need to do is restart the instances on a schedule, so what's the best way to do it? Cron jobs flashed as a possible answer!

The Problem

The environment we were talking about was an Elastic Beanstalk auto-scaling environment, which meant that EC2 instances would be added and removed on demand.

If we were going to use cron jobs, we needed to make sure that whatever instances were currently in service all honoured the cron job at all times. This means new instances that come into service on demand should automatically have the cron job set up on them too!

The Solution

To do this in the easiest possible way, we ended up using Advanced Environment Customization with Configuration Files (.ebextensions). AWS Elastic Beanstalk has a feature where we can provide configuration files bundled with our web application source code. These files live under a directory named ".ebextensions".

Now, to set up the cron job reliably on Elastic Beanstalk instances, all we needed to do was add a "cron-linux.config" file under the ".ebextensions" directory and bundle it with the application source code that gets deployed to the environment.

The folder structure would look like this
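Something along these lines (the application name is a placeholder):

```
my-web-app/
├── .ebextensions/
│   └── cron-linux.config
└── ...rest of the application source...
```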

The cron-linux.config is a YAML file. Here are sample contents of the cron-linux.config file; this setup restarts the Elastic Beanstalk instances every day at 0530 hrs.
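A sketch of such a cron-linux.config. The file name under /etc/cron.d and the use of shutdown -r to restart the instance are assumptions; the 0530 schedule comes from the requirement above:

```yaml
files:
  "/etc/cron.d/restart-instance":
    mode: "000644"
    owner: root
    group: root
    content: |
      # m h dom mon dow user command -- restart every day at 0530 hrs
      30 5 * * * root /sbin/shutdown -r now

commands:
  remove_old_cron:
    # Clean up backup copies Elastic Beanstalk may leave behind on redeploys
    command: "rm -f /etc/cron.d/restart-instance.bak"
```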

That's about it! Deploying this along with your source bundle ensures that the environment instances are restarted every day at 0530 hrs without fail!

Wednesday, July 31, 2019

Majestic Meghalaya - Part 3

This is the last post covering the remainder of our mesmerising trip to Meghalaya. If you haven't already checked out Part 1 and Part 2 of this series, I would strongly encourage you to do so right away.

After a full-day trek to the Double Decker Living Root Bridge, it was time to relax and spend the evening at the hotel. The next day we visited the beautiful town of Cherrapunji, which has some breathtaking scenery. We saw the NohKalikai waterfall, a very tall waterfall with a beautiful stream of water falling throughout the year. There is an interesting back story to why it was named NohKalikai; I am not going to tell it here because the locals do an amazing job of narrating it and I don't want to spoil the fun.

After breathing in the beauty of the NohKalikai waterfall, we moved on to visit the Mawsmai Cave. The caves are beautiful and well maintained. At some points you have to cross natural water, while at others you have to pass through very narrow passages; overall, a great experience.

Next we visited one of the lesser-known places, the Garden of Caves near Cherrapunji. It's a beautiful place with lots of things to see. We also drank the medicinal mountain water and collected a bit of it in a lush green bamboo stick to carry along with us.

It was now time to say goodbye to Cherrapunji and move on to the last destination of our trip: the formidable Kaziranga National Park. By late evening we reached our hotel at the park. We were all very excited for the next day, especially because in the morning we would be visiting the park for an elephant-back safari!

The next day, we all woke up very early and reached the Kaziranga National Park gate well in time. I had been on many Jeep and bus safaris before, but an elephant-back safari was a first for me. It is a one-of-a-kind experience! You sit on the elephant while the mahout takes you into the jungle to see rhinos, tigers, deer, birds and other wild creatures.

A funny incident happened during our ride: our elephant decided to race with another elephant nearby, and for a good minute or so we were forced participants in our very first elephant race! It was both funny and adventurous at the same time :D

After this safari, we did two more Jeep safaris, one in the evening and another the next morning. In addition to the wild animals, Kaziranga National Park is filled with rich flora and fauna.

In the evenings, after the safari, we watched the traditional north-east cultural program. Lots of talented artists can be seen performing while wearing traditional north-east attire. I would strongly recommend everyone see it at least once.

Truly, the trip to Meghalaya was majestic and mesmerizing. We came back with our hearts filled with joy and lots of love for the beautiful state and its lovely people!
