Work

ION Group

Database Optimization
SaaS
Multitenant Architecture

I was responsible for scaling an emerging commodity-trading product, where getting it right the first time was crucial for our clients. I identified and implemented strategies and performance improvements that had not previously been considered.


Two of the five core values at ION Group are rigor and entrepreneurship. For a finance company whose software is deployed in critical pipelines at all major banks in Europe, the ability to mitigate mistakes is foundational. Deciphering customer demand and bringing automation into financial workflows, in turn, requires creativity and far-sighted vision. My work at ION revolved around expanding the scope of my product and delivering solutions right the first time.

I worked on a product whose unique selling point was being robust yet inexpensive. It could securely run multiple client instances on the same machine while meeting, and often exceeding, the feature requirements expected by the market. It gave clients the freedom to write scripts to transform their data and to create their own APIs. The product was also renowned for being quick and snappy, and it would run on just about any machine that supported a web browser. However, it ran into scaling issues under large volumes of data and growing numbers of concurrent users. My vision was to lift the product out of its reputation as a solution only for “small-scale” clients; looking at the feature set, I knew it had the potential to rival the best competitors in the market. Two problems stood between me and that goal: supporting more concurrent users and scaling out the database efficiently. I was blessed with managers, Amol Chikhalkar and Harshawardhan Wankhade, who gave me the tools, the time, and the latitude to work on the first obstacle.

The product was more than capable of handling multiple requests and compiling large amounts of data to generate reports. So what was stopping more users from jumping in? The database used by the product allowed only a single write transaction at a time: if any background writing activity was going on, no other user could write to the database. Modern databases typically use either two-phase locking (pessimistic) or optimistic concurrency control to let users run transactions concurrently. Our product used a custom database written before optimistic concurrency could be implemented efficiently; it could handle multiple concurrent reads (using snapshot isolation, which was contemporary when the database was written), but it had never incorporated optimistic concurrency for writes. I undertook the implementation of optimistic concurrency under the guidance of our chief engineer, Vasily Zhelezny. During my time with the company, I partially implemented and tested the technique, showing significant improvement in the user experience.
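To make the idea concrete, here is a minimal sketch of optimistic concurrency control in Python. It is not ION's engine (the actual database internals are proprietary); it only illustrates the scheme: transactions read without locking, remember the versions they saw, and validate those versions in a short critical section at commit time, retrying on conflict.

```python
import threading

class OptimisticStore:
    """Toy optimistic concurrency control: reads are lock-free,
    conflicts are detected by version checks at commit time."""

    def __init__(self):
        self._data = {}  # key -> (version, value)
        self._commit_lock = threading.Lock()  # held only during validation

    def read(self, key):
        """Return (version, value); the caller records the version it saw."""
        return self._data.get(key, (0, None))

    def commit(self, read_set, write_set):
        """read_set: {key: version seen}; write_set: {key: new value}.
        Returns True on success, False if a conflicting write slipped in
        (the caller then simply retries the transaction)."""
        with self._commit_lock:
            for key, seen in read_set.items():
                current, _ = self._data.get(key, (0, None))
                if current != seen:  # someone else wrote since our read
                    return False
            for key, value in write_set.items():
                current, _ = self._data.get(key, (0, None))
                self._data[key] = (current + 1, value)
            return True

store = OptimisticStore()
store.commit({}, {"position": 100})  # initial write
seen, value = store.read("position")
print(store.commit({"position": seen}, {"position": value + 10}))  # True
```

The payoff is that writers no longer serialize behind one another for the duration of a transaction; they only contend during the brief validation step, and a transaction that loses the race retries.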

Further, I worked on various other performance-related issues during my time with the company. One noteworthy achievement was improving the performance of a calculation that would drive the application out of memory. In commodity trading, companies move products through shipments and warehouses, and the valuation of a shipment or warehouse uses financial conventions such as weighted average or first-in-first-out (FIFO). Our product could follow the smallest quantity of a product across the chain and show users its valuation between the first and final points. When weighted averages were applied at shipments or warehouses, the number of possible paths through the chain exploded combinatorially, making the calculation extremely intensive. To solve this, I translated the problem into a graph and gave users the ability to compute and view a bounded subset of the solution space without driving the system out of memory.
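The sketch below (with hypothetical names, not the product's actual data model) shows the two ingredients: a weighted-average valuation at a point where lots merge, and a depth-first path enumeration that is capped so only a subset of the exploded path space is ever materialized.

```python
def weighted_average_cost(lots):
    """lots: list of (quantity, unit_cost) pairs merged at a warehouse."""
    total_qty = sum(qty for qty, _ in lots)
    return sum(qty * cost for qty, cost in lots) / total_qty

def bounded_paths(graph, start, max_paths):
    """Depth-first enumeration of custody-chain paths from `start`,
    capped at `max_paths` so the search cannot exhaust memory."""
    paths, stack = [], [(start, (start,))]
    while stack and len(paths) < max_paths:
        node, path = stack.pop()
        successors = graph.get(node, ())
        if not successors:  # terminal node: final delivery point
            paths.append(path)
            continue
        for nxt in successors:
            stack.append((nxt, path + (nxt,)))
    return paths

# A toy chain: a purchase splits into two shipments that fan out again.
chain = {"purchase": ["shipment_a", "shipment_b"],
         "shipment_a": ["warehouse_1", "warehouse_2"],
         "shipment_b": ["warehouse_2"]}
print(bounded_paths(chain, "purchase", max_paths=2))
print(weighted_average_cost([(100, 52.0), (50, 55.0)]))  # blended unit cost
```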

Additionally, under heavy memory stress the application faced recurring memory outages. A senior developer and I were tasked with devising a proactive method to predict these outages and enable timely intervention. I started by investigating garbage-collector logs from both clean and dirty runs of the application. Using data-visualization concepts and tools, I analyzed the logs to select effective features, then built a model tailored to predict memory issues within the application. Integration with a log parser and event-management system designed by my teammates validated the system's efficacy: in tests on intentionally overloaded instances, simulating scenarios leading to memory errors, the model accurately predicted outages one hour before the actual error in most cases.
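As an illustration of that pipeline (the log format, feature set, and classifier below are hypothetical stand-ins, not the production system), the sketch parses GC events, summarizes a window of them into features such as post-GC heap occupancy and full-GC frequency, and fits a simple classifier on windows labeled by whether the run later crashed.

```python
import re
from statistics import mean
from sklearn.linear_model import LogisticRegression

# Hypothetical GC log format for illustration; real JVM GC logs vary
# by collector and version, and the production parser was a teammate's.
GC_LINE = re.compile(
    r"\[(?P<t>[\d.]+)s\] (?P<kind>Full GC|GC) "
    r"(?P<before>\d+)M->(?P<after>\d+)M\((?P<heap>\d+)M\) (?P<pause>[\d.]+)s")

def parse_gc_log(text):
    """Extract (time, kind, post-GC occupancy ratio, pause) per GC event."""
    return [(float(m["t"]), m["kind"],
             int(m["after"]) / int(m["heap"]), float(m["pause"]))
            for m in GC_LINE.finditer(text)]

def window_features(events):
    """Features over a window of GC events: final occupancy, occupancy
    growth, fraction of full GCs, and mean pause time."""
    occ = [e[2] for e in events]
    return [occ[-1], occ[-1] - occ[0],
            sum(1 for e in events if e[1] == "Full GC") / len(events),
            mean(e[3] for e in events)]

# Label a window 1 if its run later hit an OutOfMemoryError ("dirty"),
# 0 otherwise ("clean"), then fit a simple classifier on the features.
clean = "[1.0s] GC 900M->300M(4096M) 0.02s\n[2.0s] GC 950M->320M(4096M) 0.02s"
dirty = "[1.0s] GC 3000M->2800M(4096M) 0.10s\n[2.0s] Full GC 3900M->3700M(4096M) 0.55s"
X = [window_features(parse_gc_log(clean)), window_features(parse_gc_log(dirty))]
y = [0, 1]
model = LogisticRegression().fit(X, y)
print(model.predict_proba([window_features(parse_gc_log(dirty))]))
```

In the real system the features were chosen from the visual analysis of clean versus dirty runs, and the model's predictions fed the event-management system so operators could intervene before the outage.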

I learned a lot about corporate culture, professionalism, rigor, and creativity during my time at ION Group. My experience there played a large role in my interest in bringing machine-learning solutions to database technologies, to build more robust, efficient, and distributed data-management frameworks.