
Datacop Case Study: E-commerce Growth Hacking in Practice

Updated: Apr 7, 2021

Using big tech data techniques to increase a mid-sized e-shop’s mobile web revenue by 24.5% in 4 months.

What is growth hacking?

It is a careful process involving data analytics, solution design and A/B test experimentation. This process of growth hacking is one of the core drivers of success for digital players like Facebook, Google and Amazon. Due to their unprecedented scale and the regular activity of their users, they were in a unique position to pioneer the technique. Each big tech firm had an early user version of its service - the early e-shop, the early social network, the early search engine. Each proved its basic potential, but as the firms experienced exponential user growth, their system designs quickly became obsolete. As in any company, there were many competing ideas and beliefs about the design and look of core features like Facebook’s “Feed”, Google’s “Search Results” and Amazon’s “Homepage”. What should or should not the user see? What if the user is already registered? How big should the search bar be? Where can users find their profile? Where each of these three tech firms succeeded was in taking a data-driven approach to designing and adapting their respective platforms to match the growing needs of their user bases.

Have a look at the major changes Amazon’s homepage underwent between 1995 and 2019. Many aspects of an e-shop’s homepage that we take for granted today - the dominance of the search bar, the “categories sub-menu”, the location of the cart in the top right-hand corner, how many pieces of “content” to show, etc. - have been shaped by the process of growth hacking. One major change on Amazon’s homepage that resulted from such testing was “decluttering” - simplifying and personalising the homepage, leaving only information that is relevant to the customer based on what is already known about them. Many changes were made over more than a decade of tests and evaluated against desired objectives - sometimes a specific activity (such as registering), but almost always customer lifetime value.

Amazon’s homepage in 2000 for unregistered new users. Notice how prominent the elements regarding their books are. This was a website in transition, taking on many more verticals quickly, while still not quite letting go of its old identity yet.


Amazon’s homepage in 2010 for unregistered new users. Notice how cluttered the homepage is compared to today’s Amazon homepage. Also, it was a time when BlackBerries were a thing.

Amazon’s homepage in 2020 for unregistered new users. Here we can see many of the current e-commerce best practices: the prominence of the search bar, a Top Bar Main Menu containing the basket icon in the top right-hand corner, and a Top Bar Sub-Menu populated with the shop’s categories.

The Homepage is one of many critical touchpoints on an e-shop. The same process applies to many other customer touchpoints, such as product_view pages, the checkout process and search results, as well as to various customer segments: unregistered visitors, returning customers, high-value customers, etc. Amazon had to pioneer many of these findings on its own by analysing its data, understanding its customers, designing hypotheses, and systematically A/B testing and evaluating. The process is not without its limitations, and not every test is successful. However, each test enriches your company’s knowledge about your customers. Even today, in the constantly changing landscape of the internet, big tech firms rely on their highly skilled and well-developed growth hacking teams and processes. It is said that each of the big tech firms conducts more than 10,000 A/B tests annually on its platform.

From a Big Tech Secret to a SaaS Industry

Back then, this kind of large-scale, continuous use of data to systematically support the design process of a core product was unheard of. Thanks to their unique market positions, the big tech companies were able to maximise their revenue potential. It was an expensive and risky undertaking, as each company had to build its own server warehouses, data architecture, and A/B testing and automation software. Additionally, there was a personnel shortage: few people were doing this work, and even fewer understood how to do it properly. Now a process similar to the one Amazon, Google and Facebook underwent is occurring in thousands of maturing digital firms across the globe. This is partly due to a dramatic fall in costs. Today, a growth hacking team has become accessible to any e-commerce company that wants one. There is easy-to-use and relatively affordable data and marketing automation software out there, such as Exponea, Synerise, Optimizely and Dynamic Yield. These vendors have turned a decade of experience into practice and developed the supporting infrastructure for any digital company with sufficient size and data to take advantage of this powerful technique. MarketsandMarkets estimates the market size of CDPs, one of the core pieces of infrastructure behind growth hacking, at $2.4bn in 2020, expected to grow to a whopping $10.3bn by 2025.

Additionally, more than a decade after the term ‘growth hacking’ was coined, there are experienced specialists for hire who can do this for you. At Datacop, we have more than 5 years of experience working with CDPs to maximise the value of the data of 25+ e-commerce companies, primarily in Fashion, Electronics and Furniture. One of the more advanced services we provide is growth hacking. In this article we want to show you a case study of this process with Tempo Kondela, a Slovak re-seller of furniture and household equipment, to illustrate how it works in 4 steps: identifying the opportunity, developing solutions, evaluating results and deploying results.

The Case Study: Tempo Kondela

Step 1: Identifying the Opportunity

The first step of growth hacking is to identify the opportunity. In complex digital systems, there are many touchpoints that influence the decision making of users and customers. It is tempting to try to fix everything at once, only to find you lack the time and resources to do so.

Our first step was a month-long “audit” of the client’s data. This involves going through all of their data and existing digital processes, discussing priorities with the client, understanding the customer base and comparing the performance of key metrics with industry best practice. The analysis yielded a number of key insights that gave us an idea of where the main opportunities for improvement were, but also raised many questions about some of the patterns in their data.

It is important to stress that the process yields optimal results only if all members of the team - the analysts, the developer and the client - are aligned. We discussed the results of the audit with the client in an open, in-depth discussion. The biggest opportunity we discovered was to improve the product detail page for mobile users coming from three specific traffic sources. This segment made up approximately a quarter of all traffic and suffered from a below-average conversion rate (%) to purchase compared to the rest of the customers. At the same time, we found that even a small increase in this segment’s conversion would yield larger benefits than optimising other channels that showed solid potential for percentage increases but, given their smaller traffic, would yield lower absolute benefits. Lastly, we studied the behaviour and key metrics of this particular segment to better understand their needs and customer journey.
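This prioritisation logic can be sketched as a simple expected-uplift calculation: rank segments by traffic × conversion rate × plausible relative uplift × order value, rather than by conversion-rate headroom alone. All figures below are invented for illustration, not the client’s actual numbers.

```python
# Illustrative opportunity sizing: a high-traffic segment with a modest
# plausible uplift can beat smaller channels with bigger % potential.
segments = [
    # (name, monthly sessions, conversion rate, plausible relative uplift, avg order value EUR)
    ("mobile / 3 referrers", 250_000, 0.012, 0.15, 180),
    ("desktop organic",       30_000, 0.030, 0.30, 200),
    ("email returning",       15_000, 0.040, 0.35, 190),
]

def expected_uplift_eur(sessions, conv, uplift, aov):
    """Extra monthly purchases implied by a relative conversion uplift, times order value."""
    return sessions * conv * uplift * aov

ranked = sorted(segments, key=lambda s: -expected_uplift_eur(*s[1:]))
for name, *figures in ranked:
    print(f"{name:24s} ~{expected_uplift_eur(*figures):,.0f} EUR / month")
```

With these made-up inputs the mobile referrer segment comes out on top despite having the smallest percentage headroom, which mirrors the reasoning above.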

Step 2: Developing Three Solutions

Developing solutions is about 4 things: customer understanding, hypothesis formulation, solution development and quality assurance before launch.

Firstly, it is about customer understanding - in our example, understanding the customers in the underperforming segment. This consists of a few parts: quantitative insights, qualitative insights and client knowledge. The critical quantitative insight from our audit was that this segment began their journey on the shop at the product page, coming from a referrer website. Then we qualitatively assessed all the roadblocks the customers may face on the way to conversion. You want to be able to put yourself in their shoes, so to speak, and see the customer journey from their perspective. Lastly, the information provided by the client is similarly important - it polishes draft ideas into their final forms.

Altogether these sources of insights allowed us to formulate three working hypotheses, based on which we designed our solutions:

Hypothesis 1: Visitors from this segment predominantly have no experience with the e-shop’s brand, so they may trust the brand less than other segments do. Increasing customers’ perception of trust increases purchase value.

Hypothesis 2: The product page UX could be optimised to decrease friction towards the next funnel step - adding a product to the cart. An increased total number of unique cart updates will result in more purchases.

Hypothesis 3: The visitors come from a referrer that aggregates product catalogues of various brands, so they are likely in the middle of their customer journey, and we hypothesised that they do a lot of browsing on the referrer site. If we could keep customers from returning to browse on the referrer and instead have them continue their browsing journey on the brand’s shop, conversions would increase.

Then, it is about actually developing the solutions with your front-end specialists. Each working hypothesis had its own solution:

Solution 1: The brand has good ratings on a popular local third-party review site. Inform users of this segment of this good rating as they enter the website. This should help with the trust issues new customers have with websites they are unfamiliar with.

Solution 2: Identify all the web elements of the product page that are critical to a customer’s decision to add the item to the cart, and give those elements the most prominent space on the page.

Solution 3: Help users see the next most relevant product on the page as easily as possible.

Lastly, it is about quality assurance before launching the solutions live on the site. Websites in the 2020s have grown into complex creatures, with hundreds of sub-pages and many key touchpoints, and even small changes can have unexpected impacts on user experience. Test on different devices, have a few people test your designs from all angles, and test different user journeys. This will help you catch unexpected bugs. For instance, in our QA we discovered that users visiting for the first time would have their add-to-cart button “covered” by the cookie banner.

Final versions of solutions:

(from the left) Variant 1, Variant 2:

The main feature of Variant 1 is the green banner stating that 90% of customers would recommend the brand to their friends, with over 10,000 such reviews submitted. The main features of Variant 2 are the floating add-to-cart button and a cleaner UX.

(from the left) Variant 3, Control Group:

Variant 3 was Variant 2 with the addition of a recommended item most similar to the product being viewed. Notice the placement of the product name in the Control Group and Variant 1. We hypothesised that it took up too much space, and placed it less prominently, below the fold, in Variants 2 & 3.

Step 3: Evaluating the A/B Test

Once you have double- and triple-tested your variants, it is time to set the A/B test intervention live. It is important to have monitoring and reporting in place and to check every day for the first 7 days that everything is going according to plan. The A/B test ran at a randomised 25%, 25%, 25%, 25% split across the three Variants and the Control Group. After 25 days, the experiment showed the following results with 99.9% confidence, as shown in the table below:
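As a rough sketch of the kind of significance check behind such a readout, a two-proportion z-test compares a variant’s conversion rate against the control’s. The conversion counts below are invented for illustration; they are not the experiment’s actual data, and real CDP tooling runs this for you.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score and two-sided p-value for the difference between two conversion rates,
    using the pooled standard error and the normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: control converts 400/50,000 sessions, a variant 520/50,000.
z, p = two_proportion_z(conv_a=400, n_a=50_000, conv_b=520, n_b=50_000)
print(f"z = {z:.2f}, p = {p:.5f}")  # p < 0.001, i.e. significant at 99.9% confidence
```

A p-value below 0.001 is what “99.9% confidence” refers to in this context; with four arms, production setups typically also correct for multiple comparisons.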

All % are in comparison to Control Group Performance

The attribution window for % to purchase and EUR revenue was 24 hours since the last product viewed for both the Variants and the Control Group.

Had Variant 3 been set at 100% during the same 25-day period, the revenue generated by its uplift would represent 48% extra revenue for that segment. Including the entire mobile traffic in the calculation, it would represent 24.5% extra revenue; for the entire shop, both desktop and mobile, 11.01% extra revenue. To contextualise these results, consider the above uplifts applied to approximately a quarter of your traffic. If you are a 5,000,000 EUR a month digital business and you see results like Variant 3’s, this represents approximately 550,000 EUR of uplift a month from these changes.
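The back-of-the-envelope extrapolation above works out as follows, using the shop-wide uplift figure from the text and the hypothetical 5M EUR/month business:

```python
# Reproduce the extrapolation from the text.
monthly_revenue_eur = 5_000_000   # hypothetical shop from the example
shop_wide_uplift = 0.1101         # Variant 3 uplift across desktop + mobile

extra_monthly_revenue = monthly_revenue_eur * shop_wide_uplift
print(f"~{extra_monthly_revenue:,.0f} EUR extra per month")  # ~550,500 EUR
```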

Aside from the positive effect of the intervention on revenue, the third column of the table shows that Variants 2 & 3 had an approximately 60% uplift in the rate at which visitors added the product to their cart. We attribute this difference primarily to the floating add-to-cart button. Because we know that approximately a third of all purchases happen more than 24 hours after the cart_update, part of the positive effect on revenue is likely not captured, as the evaluation only counts a 24-hour window since the last product viewed. This also raises new opportunities. The intervention was very successful in converting visitors to the next funnel step, from view_item to cart_update; across the e-shop, the conversion rate between these two steps is only around 7-8%, so that is in itself highly valuable. However, some of this success did not translate into the % to purchase uplift, which suggests that growth hacking the cart_update to purchase step is an opportunity for future success.
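The funnel framing in this step can be expressed as per-step conversion rates, which makes the weakest link visible at a glance. The event counts below are illustrative, not the client’s data; only the ~7.5% view_item-to-cart_update rate echoes the figure quoted above.

```python
# Illustrative funnel: conversion rate at each step reveals where to focus next.
funnel = [("view_item", 100_000), ("cart_update", 7_500), ("purchase", 2_100)]

for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_n / n:.1%}")
```

Reading the output, a strong uplift on the first step that does not fully carry through to purchases points at the second step as the next optimisation target, which is exactly the conclusion drawn above.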

Step 4: Deployment of Results

The results were above our initial expectations. Nevertheless, this is not the time to stop. A thorough evaluation of an ambitious A/B test will always yield interesting insights that should be documented and discussed before deciding which to apply in practice.

Insight 1: Both Variants 2 and 3 performed strongly. The floating add-to-cart feature has a clear positive impact on the segment’s likelihood to add the product to their cart. At the same time, Variant 3 outperforms Variant 2 on revenue and purchases by 4 and 10 percentage points respectively. This suggests a positive effect of the recommended item on visitors in this segment.

Action 1: Deploy Variant 3 at 100% for the segment.

Insight 2: The success of the second hypothesis suggests a cleaner UX could potentially improve every segment’s performance.

Action 2: Plan and Deploy a Variant 3+ A/B Test on the rest of the visitors with an 80/20 split.

Insight 3: Variant 1 has a positive effect on revenue but a neutral effect on conversion rates. Its effect was most pronounced on a particular sub-segment within the segment. Clearly, the positive reviews are a useful asset; perhaps they can be used at other touchpoints.

Action 3: Put it in the good ideas drawer, as its benefits are lower than Action 2.

To sum up, successful long-term growth hacking takes the form of the Build - Measure - Learn cycle. You start with the data, quantitative and qualitative, to learn as much as possible about the problem you are trying to improve. Then you form educated ideas based on what you have learned. Those ideas are built into solutions, which are then tested. The results are measured and fed back into the original body of learnings and insights, from which you can form even more precisely educated hypotheses about what will improve your digital product, website or application. Continuing this cycle is not easy. However, done correctly and with the right team, it will produce innovative solutions to your problems, a high chance of systematic growth and a great deal of understanding about your business and customers.

A word from Tempo Kondela:

“Our main motivation to work with Datacop at Tempo Kondela, was to explore complex analytics and A/B testing on our e-shops. Together with Datacop we have managed to push our utilisation of marketing tools to a new level. In particular, we pursued the optimisation of conversion rates of our acquisition traffic on our mobile product pages. The project brought us both immediate positive results on our bottom line as well as new perspectives on how to apply personalisation techniques to different customer segments at different touchpoints of the customer journeys. I would like to thank our colleagues from Datacop for a professional and fair attitude, analytical expertise and lastly for patience in our discussions.”


We would like to thank Tempo Kondela and Mr Žák, the E-commerce Manager at Tempo Kondela, who owned the project process on the client side. As mentioned earlier in the article, his knowledge, customer understanding and experience in their market were important to achieving strong results.

We would also like to thank Mr Bohony, our front-end specialist who jumped onto the project during the difficult development and user testing stage with us and put together great-looking solutions for our ideas.

- DR & LB



Part of Datacop's Blog Series on Data Science in the Digital Economy (#5). A version of this article was posted in Slovak in Bridge, the Ecommerce Magazine in the CZ/SK market.
