Friday, August 31, 2018

How to find APNS Device Token of a Production iOS app

I wanted to test out the look and feel of a push notification on our production iOS app. For that, I needed to know the device token of my device.

The Problem

A quick Google search suggests that we can get the device token from the app delegate callback method. However, this method doesn't help for a production app, since we cannot attach a debugger to see the token it prints. So how do we get the device token of a production iOS app?
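For reference, this is the app delegate callback in question; a sketch in Swift (it only helps when you can actually see its log output, e.g. in a debug build):

```swift
import UIKit

// Inside your UIApplicationDelegate implementation: the standard callback
// that receives the APNS device token after registration succeeds.
func application(_ application: UIApplication,
                 didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
    // Convert the raw token bytes to a hex string and log it.
    // This print is only visible with a debugger or console attached,
    // which is exactly why it does not help for a production build.
    let token = deviceToken.map { String(format: "%02x", $0) }.joined()
    print("APNS device token: \(token)")
}
```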

The Solution

The solution is pretty straightforward and extremely low tech :D. We can get the device token from Xcode using the following steps:

  • Connect your device to the Mac
  • Open Xcode and click on Window -> Devices and Simulators
  • When the Devices and Simulators window opens up, click on "Open Console"
  • The device console opens up
  • Launch the app and accept the popup to receive notifications
  • In the device console, search for "Request per-app token with token identifier"
  • You should see the device token, which is in the format "4D2338E0-1D8F-490A-9C8E-F5A4FEA2CFFF"
Just use the device token with any push notification service to send a push notification to your device!

Tuesday, July 31, 2018

How to paginate faster in PostgreSQL with big offset values

I was surprised to learn how inefficient pagination can be when it's done with LIMIT and OFFSET.

Everything is fine and dandy as long as the OFFSET value is in the hundreds and you are dealing with a relatively small dataset. With huge datasets (5-10 million+ records), the performance degrades pretty fast as the offset value increases.

The Problem

Offset inefficiency creeps in because of the delay incurred by skipping over a large number of rows. Even in the presence of an index, the database must still count off the skipped rows one by one. To utilise an index we would have to filter a column by a value, but OFFSET asks for a certain number of rows irrespective of their column values.

Moreover, rows can be of different sizes in storage, and some may be marked for deletion, so the database cannot use simple arithmetic to find a location on disk to begin reading results.

The Solution

It's best to demonstrate the solution with an example. Let's say we have a table called "events" with primary key column "id". We fetch 30 records per page, and now want to skip 100,000 records and get the next 30. The query to do this would look like this:
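A sketch of the naive query, using the table and column names described above:

```sql
SELECT *
FROM events
ORDER BY id
LIMIT 30 OFFSET 100000;
```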

This query would be pretty slow for the reasons mentioned above. To get around this problem, we can tweak the query as follows, and it should start running faster (unbelievably fast).
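A sketch of the tweaked query, known as keyset pagination: instead of an offset, we filter on the last "id" seen on the previous page (the literal 100000 here assumes gap-free sequential ids; in general you would plug in the last id you actually fetched):

```sql
SELECT *
FROM events
WHERE id > 100000   -- last id seen on the previous page
ORDER BY id
LIMIT 30;
```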

The reason for this significant improvement in performance is the WHERE clause on "id": the database can use the primary key index to jump right to the given row and then fetch the next 30 records!

There you have it: a simple tweak in the query can be the difference between a crawling query and a blazing fast one!

Saturday, June 30, 2018

How to map a PostgreSQL JSON column with a Hibernate value type and Kotlin

At makkajai, there has been no dearth of challenging problems :). Recently we moved to a new analytics partner. I will not bore you with the details of why we had to move, but what is significantly more interesting is how we executed the move. Some key requirements for the move were:

  • Migrate all the data collected by the previous analytics partner, i.e. around 40 million events, to the new partner.
  • Honour the concurrency limits of the old and new analytics partners. If we didn't honour them, they would stop responding for a period of 10 minutes (which would be a costly 10 minutes).
  • The old analytics partner had a limit of 3 concurrent requests.
  • The new analytics partner had a limit of 1000 events per second.
  • The migration had to be reliable and fault tolerant. For example, we should be able to run it multiple times during the migration window.
I am not going to go into the details of how we solved the whole problem (maybe some other time); in this blog I am going to focus on a very small part of it.

The Problem

The PostgreSQL JSON column type has great querying features, and I wanted to use it to save parts of the events JSON response received from our old analytics partner. For this to happen, I needed to map the PostgreSQL JSON column type to a Hibernate value type. This blog post documents the steps needed to achieve this using Kotlin.

The Solution


There are 4 steps involved in making this work:

  • Adding a custom PostgreSQL dialect to register the JSON column type against a Kotlin String.
  • Registering the custom PostgreSQL dialect in application.properties.
  • Adding a custom user type class to map a Kotlin String to the PostgreSQL JSON column.
  • Annotating the model classes to use the custom user type class.
Here is the exact code needed to achieve the 4 steps mentioned above.
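The original code listing did not survive; a sketch of what the dialect could look like, assuming Hibernate 5 (the class name is illustrative):

```kotlin
import org.hibernate.dialect.PostgreSQL95Dialect
import java.sql.Types

// Illustrative dialect: registers PostgreSQL's "json" column type
// against the generic JAVA_OBJECT JDBC type.
class JsonPostgreSQLDialect : PostgreSQL95Dialect() {
    init {
        registerColumnType(Types.JAVA_OBJECT, "json")
    }
}
```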

Above is the custom PostgreSQL dialect to register the JSON column type with a Kotlin String.

Sample Application properties to register the custom dialect.
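A sketch of the relevant application.properties entry, assuming a Spring Boot application (the package name is illustrative):

```properties
# Point Hibernate at the custom dialect instead of the stock PostgreSQL one.
spring.jpa.properties.hibernate.dialect=com.makkajai.config.JsonPostgreSQLDialect
```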

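A sketch of the user type class, assuming Hibernate 5 (the class name JsonStringUserType is illustrative):

```kotlin
import org.hibernate.engine.spi.SharedSessionContractImplementor
import org.hibernate.usertype.UserType
import java.io.Serializable
import java.sql.PreparedStatement
import java.sql.ResultSet
import java.sql.Types

// Maps a Kotlin String property to a PostgreSQL "json" column.
class JsonStringUserType : UserType {
    override fun sqlTypes() = intArrayOf(Types.JAVA_OBJECT)
    override fun returnedClass(): Class<*> = String::class.java
    override fun equals(x: Any?, y: Any?) = x == y
    override fun hashCode(x: Any?) = x?.hashCode() ?: 0

    override fun nullSafeGet(rs: ResultSet, names: Array<String>,
                             session: SharedSessionContractImplementor,
                             owner: Any?): Any? =
        rs.getString(names[0]) // read the JSON column back as a plain String

    override fun nullSafeSet(st: PreparedStatement, value: Any?, index: Int,
                             session: SharedSessionContractImplementor) {
        // Bind as Types.OTHER so the driver sends the string as JSON and
        // PostgreSQL validates it on insert.
        st.setObject(index, value, Types.OTHER)
    }

    override fun deepCopy(value: Any?) = value
    override fun isMutable() = false
    override fun disassemble(value: Any?) = value as? Serializable
    override fun assemble(cached: Serializable?, owner: Any?): Any? = cached
    override fun replace(original: Any?, target: Any?, owner: Any?) = original
}
```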
Above is the custom user type mapping class. It is used to map a Kotlin String to the PostgreSQL JSON column.

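A minimal sketch of the model class (the table name, column definition, and annotations are illustrative; "properties" is the JSON-mapped field mentioned below):

```kotlin
import javax.persistence.*
import org.hibernate.annotations.Type
import org.hibernate.annotations.TypeDef

@Entity
@Table(name = "user_events")
@TypeDef(name = "JsonString", typeClass = JsonStringUserType::class)
class UserEvent(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    var id: Long = 0,

    // Stored in a PostgreSQL "json" column via the custom user type.
    @Type(type = "JsonString")
    @Column(columnDefinition = "json")
    var properties: String = "{}"
)
```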
Above is the simple UserEvent model class that uses the String property "properties" and maps it to the PostgreSQL JSON column type.

That's about it! When we create an instance of the UserEvent class and set the value of "properties", it will be correctly saved in the PostgreSQL JSON column. PostgreSQL will also validate that it is a valid JSON string before saving the information.

Tuesday, May 29, 2018

How To Print 1x1 Shipping label on a 2x2 A4 Generic Sticker Paper

When you are running a startup, you will face numerous business problems on a daily basis. Some problems are within your core competency, and some fall outside your comfort zone. As a startup founder, you really can't afford to leave a problem unsolved just because it falls outside your comfort zone. At times, you only need to be street smart to solve the problem and move on :)

I recently faced one such not-so-interesting problem, but it was essential for me to solve it.

Background

We ship books to our customers in India, and we recently moved to Delhivery as our delivery partner. Have you noticed the stickers on the packaging when you receive deliveries from Amazon/Flipkart/Delhivery etc.? Those stickers are called shipping labels. A shipping label has a bunch of information like:

  • Who the package is for.
  • Where it is coming from.
  • Contact details of the client.
  • Contents of the package and their approximate value.
  • And many other things.
This is how a Shipping Label looks:

When we create a shipment in the Delhivery portal, they generate the shipping label for us. Shippers are supposed to print it and affix it to the shipment. So far this feels like business as usual, so what exactly is the problem?

The Problem

We usually ship in bulk, to hundreds of our customers in one batch. Delhivery generates a PDF with one shipping label on every page; if we are sending 100 shipments, Delhivery generates a PDF with 100 pages, i.e. one label per page in a 1x1 format.

If we had access to a specialised printer that could print these stickers on a sticker roll, then we would be sorted. But unfortunately, we didn't have such a printer.

There are generic sticker papers available in the market to print shipping labels. However, each A4-size generic sticker sheet costs around Rs. 5. It's not optimal, from a cost as well as a resources perspective, to print just one sticker on an entire A4 page.

Considering the size of a shipping label, we could easily print 4 shipping labels on one A4 page. If we could do that, the cost of printing one label would drop 4-fold. Something like this:


Initially you might think: why is saving a few bucks so important? That's because of a simple concept called unit economics. If you ever want to get your startup into the successful zone, you need to get the unit economics right :D!

The Solution

Now that we know what the problem is and why we need to solve it, let's focus on how I got it done.

Speed of execution is everything in the startup world. I had to solve this in a way that's easily doable by any non-tech operations person; at the same time, I didn't have the luxury of building a sophisticated custom solution.

So, what did I do? I broke the problem down into smaller steps and solved each of those steps in turn.

  • The first thing I observed in the shipping label PDF was that there was some extra information around the shipping labels, like the page footer and other unimportant stuff.
  • In order to arrange the labels in a 2x2 format, I needed to trim the unimportant stuff. For this I looked for a site that could trim all the PDF pages in one go. Sejda was perfect for this. The free plan has some restrictions, but we could live with those.
  • You upload the PDF and crop all pages with a single mask in one go. What I got after that was a PDF with all pages containing only the important stuff.
  • Next, I needed to export each page as a separate image in PNG or JPEG format. This was necessary so that I could use mail merge to actually arrange the shipping labels in a 2x2 format.
  • I exported the PDF pages to PNG using the pdf2png site. The result was a zip file with all the PDF pages exported as PNG files.
  • The final step was to use Microsoft Word Mail Merge to arrange these images in a 2x2 grid. I followed this nice article to get that done.
  • Once I followed the steps, I finally had a 2x2 grid of all my shipping labels. These could be printed on generic A4 paper with a 2x2 sticker grid.
  • In the end, I had a very low-tech solution (easily followed by any operations person) to a business problem.
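As an aside, the crop-and-export steps above could also be done from the command line; a sketch assuming poppler-utils is installed (the crop box values are illustrative, measure them against your own label):

```shell
# Crop every page to the label area (-x/-y offsets and -W/-H size are in
# pixels at the given resolution) and export each page as a numbered PNG.
pdftoppm -png -r 150 -x 40 -y 40 -W 800 -H 1100 labels.pdf label
```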
The entire solution is in line with the theory suggested by Mark Watney from The Martian :)
You solve one problem... and you solve the next one... and then the next. And If you solve enough problems, you get to come home!


VoilĂ , my job here was done!

Friday, April 27, 2018

How to get Snowplow-Mini running on AWS

While looking at various analytics engines, we came across Snowplow Analytics. We wanted to give it a shot and experience it first hand. Luckily, they have something called Snowplow-Mini. It's an easily deployable, single-instance version of Snowplow. It essentially gives us a taste of what Snowplow can do for us as far as data collection, processing and analytics are concerned!

We started with the quick start guide and usage guide, and performed all the steps mentioned there to get the Snowplow-Mini instance working. However, we did face two annoying issues; investigating and fixing them wasted a few hours. This post is about those two issues, so that my fellow developers do not have to waste any time on them.

Unable to: Generate a pair of read/write API keys for the local Iglu schema registry

We followed all the steps mentioned in the usage guide, but we were unable to generate the keys:
  • Navigate to http://<public dns>/iglu-server
  • Input the super API key set up in the previous step into the input box in the top right corner
  • Expand the keygen section
  • Expand the POST /api/auth/keygen operation
  • Input the appropriate vendor_prefix for this API key
  • Click Try it out!
At this point, it should have generated the read and write keys for us. But all it did instead was show a progress bar that ran forever without returning.

Investigating it in the Chrome developer console revealed that the calls were failing with 401 Unauthorized. After googling the error a bit, I found that someone else was facing a similar problem. Their solution was to do the HTTP POST via curl, and that seemed to work for them. However, it didn't work for us.

I looked around for ways to debug the problem:
  • I connected to the Snowplow-Mini instance via SSH (refer to the AWS documentation on how to do this)
  • Checked the config under the "snowplow" directory on the instance. Could not spot anything unusual there, not that I knew much about it anyway :D
  • Checked the logs under the "/var/logs" directory. Found a few things but could not really solve the problem.
  • Connected to the PostgreSQL DB on the instance using the following command
    • psql --host=localhost --port=5432 --username=snowplow --dbname=iglu
      # Password is "snowplow"
  • Ran a query to check the API key
    • select * from apikeys;
  • What I saw next made my jaw drop in disbelief!
  • The API key is case-sensitive, and the key Snowplow-Mini had saved was all lowercase, even though I had entered it in all caps.
  • Passing the key in lowercase and making the following call did generate the read/write API keys for the local Iglu schema registry
    • curl http://<IP address of your server>/api/auth/keygen -X POST -H "apikey: <your case sensitive API key>" -d "vendor_prefix=com.makkajai"
  • Duh! Yeah, I know.

  • I got to know how to connect to the Snowplow-Mini PostgreSQL DB from here
I must have easily wasted an hour trying to fix this problem. I hope others can save that time!

Unable to: See events in Kibana Dashboard

This was a tricky one. After raising sample events, I was unable to see them in the Kibana dashboard. This happens mainly because "snowplow_stream_enrich" is not able to connect to the Elasticsearch service.

How did I figure it out?
  • SSH into the Snowplow-Mini instance
  • I checked the logs under the "/var/logs" directory
  • The logs were filled with exceptions like
    • Exception in thread "main" java.net.UnknownHostException: ip-xx-xx-xx-xx: ip-xx-xx-xx-xx: unknown
  • Googled it a bit and found the solution here
  • Edit the file "/etc/hosts" and add the IP address information in that file as follows
    • sudo vim /etc/hosts
    • xx.xx.xx.xx ip-xx-xx-xx-xx localhost
  • xx.xx.xx.xx being the AWS local IP address
  • Save, exit and restart all services from the Snowplow-Mini console
  • Generate a few events and open the Kibana dashboard. It worked this time!
After these two problems were out of the way, my Snowplow-Mini instance was fully up and running on AWS!

Saturday, March 31, 2018

How to Debug A Pre-built APK

Android Studio 3.0 added a nifty little feature: the ability to debug and profile pre-built APKs. For developers working with a mix of native (C/C++) and Java code in their applications, this is an extremely valuable feature.

I stumbled upon it while I was looking for something else. I wrote this post with the intent that more people will be able to find this feature and make good use of it!

Here are the steps needed to debug a pre-built APK:
  • On the launch screen of Android Studio 3.0+, select the option "Profile or debug APK"

  • It will open up a dialog that lets you choose the APK you want to debug.
  • Ensure that the APK is built with debugging enabled.
  • Next, Android Studio will create a new project in the ~/ApkProjects folder.
  • Once it finishes loading the APK, it will open up a screen that looks like this
  • As you can see, it has unpacked the APK. It shows the various parts of the APK along with their sizes.
  • It has not fully decompiled the *.dex files into *.java files; it shows them as *.smali files.
  • When you open a *.smali file, it will give you an opportunity to Attach Java Sources.
  • Clicking the Attach Java Sources link will open up a dialog that lets you select the folder where the Java sources are located.
  • Once you do this, you should see the Java classes in their full glory. You can now set breakpoints and debug through the APK as if it were real source code.
  • If your project contains native code, it will let you attach a library containing debug symbols for that too.
  • Hitting the Debug or Run icon in the IDE will pop up a dialog that lets you select the device on which to install the APK and start debugging.

  • On selecting the device, the IDE will install the APK on the device and attach the debugger.
  • You should see a screen like this on the device

  • In a second or so, you should see your app's first screen loaded and ready to be debugged!
  • That's about all that is needed; you can debug, step through the code, evaluate variables and what not!
As you can see, it's a valuable little feature that can help locate bugs in tricky situations!
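As an aside, the "built with debugging enabled" step above usually just means building the debug variant; a sketch, assuming a standard Gradle Android project:

```shell
# Build the debug variant, which has android:debuggable enabled by default.
./gradlew assembleDebug
# The debuggable APK typically lands under:
#   app/build/outputs/apk/debug/app-debug.apk
```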

Saturday, March 17, 2018

How to get to Tadoba and things to take along!

We had an opportunity to visit the Tadoba Tiger Reserve, a pristine and unique ecosystem situated in the Chandrapur district of Maharashtra, India. It contains some of the best forest tracts and is endowed with rich biodiversity. It's a brilliant place for nature lovers.

When I was planning the trip, one thing I found particularly challenging was getting the pinpoint locations of the various safari gates and figuring out how to get there. This blog is an effort to help people like me, with landmarks to locate the various safari gates. This post also lists certain do's and don'ts to best enjoy the safari.

Safari Gates and Getting There

Tadoba is close to a city called Chandrapur, which is around 140 km from Nagpur. I drove to Chandrapur from Nagpur in a Zoomcar (use my referral code MTI5NDU while making your first booking and get a flat 15% off (max discount ₹1500)).

Chandrapur has a bunch of decent hotels and is about 25-45 km away from the Tadoba gates. The safari gates are well spread out, so make sure you book a safari for a gate that is close to where you are staying.

As a landmark, locate Hotel Siddharth Chandrapur on Google Maps; the road to the various gates of Tadoba is bang opposite this hotel.

We visited the following gates:
  • Devada Adegaon Agarzari Zone
  • Agarzari Zone
  • Moharli
  • Junona Zone
  • Zhari (Kolsa)




The Devada Adegaon Agarzari Zone and Agarzari Zone gates are right opposite each other. They are about 25 km from Hotel Siddharth.

Moharli and Junona are pretty close by as well, about 35 km from Hotel Siddharth. These gates are around 11 km from the Agarzari Zone gates; basically you follow the same road but go a little further.

Zhari (Kolsa) is on the other side, about 35 km from Hotel Siddharth.

Here is an interactive map to give you a better idea


Buffer Or Core Zone?

Tadoba has two types of zones: buffer and core. Some people say there is a higher probability of spotting a tiger in the core zone, but my personal experience is that it's all about a little bit of luck and timing. You need to be at the right place, at the right time, to spot the tiger. To put things in perspective, out of the 4 buffer-zone safaris we did, we spotted a tiger in 3 of them!

Safaris are done in open jeeps with one guide and one driver. One safari typically lasts around 3.5 to 4 hours. You can book the safaris online from here.


How many safaris to do?

To have a decent chance of spotting a tiger, I would recommend doing at least 4-5 safaris. If you do just one or two, there is a good chance you will return without spotting one.

We did a total of 5 safaris (4 in the buffer zone and 1 in the core zone). In one of the safaris, we were extremely lucky to witness a gaur fight. The idea is not to keep looking only for the tiger; there are plenty of other animals that are equally majestic. Trust me, these safaris are well worth the time and money.

Some random clicks:
Spot the tiger!

I am watching you!
Do's and Don'ts
  • A lot of dust will settle on you and your clothes during the safaris. It's advisable to carry a scarf or handkerchief to tie around your head and mouth.
  • It will be cold, especially during the morning safari. Do carry something to cover yourself so that you do not feel too cold.
  • Take ample water to drink and keep yourself hydrated.
  • Carry some lightweight food, like sandwiches, to eat if you feel hungry during the safari, especially if you have young children.
  • Food options near and around the Tadoba gates are pretty limited. There is a nice MTDC resort nearby with a decent restaurant.
  • Please don't litter, anywhere!
  • Be patient, and don't get disappointed if you don't spot a tiger. There are lots of other animals in the forest to see too. Even the flora has a lot to offer!

What a hot afternoon!



Closing Remarks

It's a must-visit place, in my opinion. If you are a nature lover, you will love Tadoba. It's a very well-maintained and protected national park too!
Have some Fun!