Building on our first exploration of the intricate field of dynamic pricing, we delve deeper into the practicalities of implementing a dynamic pricing strategy.

In this follow-up article, we aim to equip project managers and practitioners with the knowledge to navigate the challenges of dynamic pricing projects effectively. Beyond the allure of technology and the promise of optimized profits lies a terrain filled with pitfalls and complexities that can derail your strategy.

Drawing from our direct experience with a project that forecasted demand by integrating product features and advertisement efforts, we offer a granular look into the "how" of dynamic pricing.

The How

Usually, selling prices are determined by business experts who start with the manufacturer's purchase price and have a great intuition about the product's expected performance, taking into account internal influence factors (e.g. brand) and external ones (e.g. economic situation, geo-location, etc.).

The idea behind smart pricing projects is to automate that process by using machine learning combined with business rules that transfer domain knowledge into code.

One approach is to first forecast demand for the product based on historical performance in the existing data. One may wonder here about new products that have never been sold before (the cold-start problem), or about the diverse spectrum of product features. In this case, we can resort to a clustering approach: predicting demand not for a specific product but for a product category, and training one model per product cluster. The clustering can be based on calculated similarity with existing products, or on a simple grouping of products in the same sub-category (e.g. Pets > Dogs > Medicine > Ticks Collar).
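To make the per-cluster setup concrete, here is a minimal sketch that groups products by sub-category and fits one demand model per cluster. The file name and column names ("sub_category", "price", "ad_hours", "units_sold") are illustrative assumptions, not the project's actual schema:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical sales data, one row per product-period.
sales = pd.read_csv("sales_history.csv")

# Train one demand model per product cluster (here: per sub-category).
models = {}
for sub_category, group in sales.groupby("sub_category"):
    X = group[["price", "ad_hours"]]
    y = group["units_sold"]
    models[sub_category] = GradientBoostingRegressor().fit(X, y)
```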

As products within the same cluster may exhibit similar demand patterns, this clustering approach can result in better predictive performance within each product category.

However, this approach increases the complexity of managing multiple models, and some clusters might end up with too little data. At the other extreme, very high-level clusters that average over many products will result in poor predictions at the product level, even if the error at the cluster level is low!

An alternative to having a different model for each cluster is to have one complex model whose inputs describe the type of the cluster.

At this step, you can employ machine learning models to learn product- and context-dependent (e.g. seasonal) price-demand relations.

As advertisement time (duration and schedule) plays an important role in driving up sales, these are important inputs to your model for making accurate predictions.
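A sketch of this alternative, single-model setup follows: one pipeline whose inputs describe the cluster and the context alongside price and advertisement. Again, the feature names and the choice of one-hot encoding are assumptions for illustration:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

sales = pd.read_csv("sales_history.csv")  # hypothetical historical data

features = ["price", "ad_hours", "sub_category", "season"]
pipeline = Pipeline([
    # One-hot encode the categorical cluster/context descriptors;
    # price and ad_hours pass through as numeric features.
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["sub_category", "season"])],
        remainder="passthrough",
    )),
    ("model", GradientBoostingRegressor()),
])
pipeline.fit(sales[features], sales["units_sold"])
```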

However, it's crucial not to overlook the application of business rules, which bring domain knowledge into the automated pricing process.

We try to find an optimal price by learning a demand curve using machine learning, and then optimizing revenue [revenue = demand × selling price] or gross margin [gross margin = demand × (selling price − purchase price)].
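The optimization step itself can be as simple as sweeping candidate prices through the learned demand model and keeping the one that maximizes gross margin. A minimal sketch, assuming a fitted demand model like the pipeline above and an illustrative feature layout:

```python
import numpy as np
import pandas as pd

def optimal_price(demand_model, purchase_price, context, price_grid):
    """Sweep candidate prices; return the one maximizing gross margin."""
    best_price, best_margin = None, -np.inf
    for price in price_grid:
        features = pd.DataFrame([{**context, "price": price}])
        demand = demand_model.predict(features)[0]
        margin = demand * (price - purchase_price)  # demand * (sell - buy)
        if margin > best_margin:
            best_price, best_margin = price, margin
    return best_price

# e.g. optimal_price(pipeline, purchase_price=4.0,
#                    context={"ad_hours": 2.0, "sub_category": "Ticks Collar",
#                             "season": "summer"},
#                    price_grid=np.linspace(5.0, 15.0, 101))
```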

It is important to underline here that the main goal is not to have the most accurate product-wise demand forecast, but to correctly model the relationship patterns between demand, price, and advertisement.

Finally, addressing the risk of customer dissatisfaction due to frequent price changes is paramount. One effective way to ensure recommended prices bolster sales without alienating customers is A/B testing, the gold-standard test of whether your model is actually working: if your ML model does not work properly, the A/B test will show suboptimal performance.
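Evaluating such a test can be straightforward. The sketch below compares revenue per customer between the control group (baseline prices) and the treatment group (model prices) with a two-sample t-test; the data layout and column names are assumptions:

```python
import pandas as pd
from scipy import stats

# Hypothetical per-customer revenue with a group assignment column.
results = pd.read_csv("ab_test_results.csv")
control = results.loc[results["group"] == "control", "revenue"]
treatment = results.loc[results["group"] == "treatment", "revenue"]

# Welch's t-test: does the model group differ significantly from control?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() / control.mean() - 1
print(f"revenue lift: {lift:+.1%}, p-value: {p_value:.3f}")
```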

Moreover, to further mitigate potential dissatisfaction among customers, some businesses have proactively adopted a price-guarantee feature. This business rule ensures that if the price of an item drops within a certain period following a purchase, the customer is entitled to a refund of the difference. This not only enhances customer trust but also solidifies the brand's commitment to fairness and value.
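Such a rule is easy to express in code. A minimal sketch, where the length of the guarantee window is an assumed policy parameter:

```python
from datetime import date, timedelta

GUARANTEE_WINDOW = timedelta(days=30)  # assumed policy window

def refund_due(purchase_price: float, purchase_date: date,
               current_price: float, today: date) -> float:
    """Return the refund owed under the price guarantee, if any."""
    within_window = today - purchase_date <= GUARANTEE_WINDOW
    if within_window and current_price < purchase_price:
        return purchase_price - current_price
    return 0.0

# e.g. refund_due(19.99, date(2024, 5, 1), 14.99, date(2024, 5, 10)) -> 5.0
```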

Is it really working?

As skeptical data scientists, we therefore need to implement tracking measures.

Usually in ML, one minimizes the demand prediction error (using metrics like R² or SMAPE). Nevertheless, it is crucial that the dependency of the demand prediction on price and advertisement is correct.
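For reference, one common definition of SMAPE (symmetric mean absolute percentage error), which we would track alongside R²:

```python
import numpy as np

def smape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """SMAPE in percent; 0 is perfect, 200 is the worst case."""
    denominator = (np.abs(actual) + np.abs(predicted)) / 2
    return float(np.mean(np.abs(actual - predicted) / denominator) * 100)
```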

Underpredicting demand at high discounts and overpredicting it at low discounts leads to consistently too-low recommended discounts. Therefore, we need to measure whether the model's demand prediction has a bias that depends on the price and/or advertisement strength, and of course take measures to minimize such a bias.
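One way to surface this is to bucket historical predictions by discount level and compare predicted versus actual demand per bucket. A sketch, with assumed column names:

```python
import pandas as pd

def bias_by_discount(df: pd.DataFrame, n_buckets: int = 5) -> pd.Series:
    """Mean (predicted - actual) demand per discount bucket."""
    buckets = pd.qcut(df["discount"], q=n_buckets)
    return (df["predicted_demand"] - df["actual_demand"]).groupby(buckets).mean()

# Values far from zero that drift with the bucket, e.g. negative at high
# discounts and positive at low ones, signal exactly the price-dependent
# bias described above.
```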

Note that these quality measures should also be monitored in the future to assess whether the models adapt to market changes and maintain accuracy over time.

And always keep in mind 'if you don't measure it, you can't manage it'; so please don't assume that an unmeasured KPI is on target just because you don't know that it isn't.

Why things can go wrong

A challenging reality to acknowledge is the high failure rate of many pricing data science projects. Usually, business owners rush into AI projects and expect magical outcomes.

On one side, the fancy marketing around AI dynamic pricing may lead to misconceptions about ML and data science.

People tend to overlook the "science" part and think it's a pure engineering discipline, whereas you need to combine both science and engineering to run properly in production!

On the other side, data scientists might be over-confident, whereas the attitude toward building a model (e.g. a demand model) should be that "we basically don't know what we are doing, but we're trying to make it work." We need to admit that we are operating in a complex economic and social space. We shouldn't underestimate the complexity of market dynamics or over-rely on raw data without proper analysis.

We don’t understand all the dynamics of the market and the users.

If we build a model without verifying it, it will most likely not work.

Therefore it is essential to continuously measure, iterate, and discover how to build a working model tailored to specific contexts and situations.

As the German astrophysicist Dr. Harald Lesch said, "we err upwards" (original German: "wir irren uns empor"). We need strategic iterations: a cyclical approach to model development that relies on continuous feedback and refinement through trial and error.

In our case of the pricing model, we have one "true" measure of performance: the A/B test, where we measure the impact of our dynamic prices compared to a control group.

The control group can be a small fraction of the main customer group that is demographically and socio-economically representative of the whole. If a target-audience split is not possible, we can run the A/B test on a small fraction of products, to which we apply a baseline pricing strategy.

However, this is comparatively slow, as it takes time to collect statistics, and it might also be expensive, as a bad model yields bad prices with suboptimal revenue.

Therefore, during model development we need to focus on building approximate verification metrics that can be evaluated quickly and cheaply using historical data. They will not be perfect, but they can steer model development; the good candidate models are then verified using the more expensive A/B testing.
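One example of such a fast, cheap offline check: all else being equal, the predicted demand curve should not rise as the price rises. A sketch, assuming a fitted model and the feature layout from the earlier snippets:

```python
import numpy as np
import pandas as pd

def demand_is_monotone(demand_model, context: dict, price_grid) -> bool:
    """Check that predicted demand never increases when the price increases."""
    rows = pd.DataFrame([{**context, "price": p} for p in price_grid])
    demand = demand_model.predict(rows)
    return bool(np.all(np.diff(demand) <= 0))
```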

In a nutshell, what a successful dynamic pricing project needs above all are reliable verification tests: fast approximate metrics on historical data, backed by A/B tests.

The quality of these tests is crucial for the success of the project. We believe that the focus should first and foremost be on these tests rather than on the model development itself.

How leadership can help

Management can facilitate successful project outcomes by fostering a culture of bold but critical innovation and providing resources for rigorous testing.

As the development of tests adds time to the project plan, this scope needs to be understood and backed by the managers and stakeholders of the project.

Data scientists and software engineers often face considerable pressure to release updates quickly or add more features. This can derail the focus from good testing, and sooner or later the expected results fail to materialize.

Measuring and verifying the impact of a model should be developed from the beginning. Usually, data scientists start with a simple benchmark model that establishes threshold metrics.

Having good and constant verification is THE success factor and should be the focus of managing the project; quality measures should be transparent and aligned with stakeholders.

Conclusion

In summary, it's clear that success in dynamic pricing requires more than just technological adoption. It demands a thoughtful approach that considers market complexities, data intricacies, and the human element of pricing strategies.

By fostering a culture of continuous learning, cyclic innovation with a testing focus, and adaptive leadership, businesses can leverage their data, turning data-driven insights into competitive advantages and sustainable profitability.

We shall not shy away from the challenges. Instead, let's view them as opportunities to innovate, iterate, and lead our industries forward.