
The importance of experimentation when developing AI
By Hanneke Stellink, Director AI Products, ING


In my professional experience working with large financial institutions, I have found that many people perceive experimentation as hard, especially when it comes to AI. This comes down to culture and mindset. Not everyone understands the importance of experimentation or how to set it up right. People might be reluctant to put a low-quality solution in front of a user. The required data might also not exist yet, be of low quality, or not fit the objective of the experiment.
I encountered this first hand when I was consulting for a large pension fund in the UK. A data science team worked on an application that predicts when customers want to take out their money as a lump sum. The idea was to propose a better offer to prevent them from leaving. As you can imagine, there was a large business case for retaining those funds. In three months' time, the team had built a very good predictive model. Unfortunately, it then turned out that customers acted through a financial advisor. The data used represented advisor behaviour, not customer behaviour. A much easier - and cheaper - solution to keep customers on board was to educate the advisors on the pension fund's offerings, because what customers really needed was better advice.
The model never made it into production. Time spent and costs could have been minimized by early experimentation, specifically by testing the assumptions that the data represented customer behaviour and that the customer needed a better offer.
Three recommendations to get it right
So, how can you tackle the perception of many that experimentation is hard, and test your ideas early in the process to avoid wasting resources?
1. Have a clear vision and defined user need.
This is your starting point. Make sure you define the riskiest assumptions in your AI solution and test these first. Often, these assumptions are not about whether you can build a working AI model (feasibility), but about understanding the user's needs (desirability). If you do not know what the user needs, you simply cannot measure the success of an experiment. Often, your riskiest assumptions can be tested without much real data or modelling. When you can put a prototype in front of the user that imitates the functionality, you gather feedback very early and avoid wasting development effort if the experiment shows no desire for your solution.
2. Have the right experimentation setup.
First, define how to measure success, based on your user needs, and make this measurable, e.g. an increase in returning visits or a reduction in time spent on a task. Next, assess whether the data you have is suitable for the purpose. In most cases the model is not the problem; the problem is in the input data. For example, when you are building a solution that predicts car damage and you use a data set of insurance claims, you are not predicting damage, you are predicting claims. So, check thoroughly that the data matches the intended use. In some cases, the right data does not exist at all, and you need to create it yourself. Take the start-up Mapper, which builds and sells maps for self-driving cars: it hires local drivers to collect geographic data and then converts that data into 3-D maps.
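To make "define how to measure success" concrete, here is a minimal sketch of how a team might check whether an uplift in a success metric - say, returning visits - is large enough to trust. This is an illustrative two-proportion z-test using hypothetical numbers, not a description of any specific ING or pension-fund setup.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is the uplift in the treatment group
    larger than random noise would explain?"""
    p_a = success_a / n_a            # metric rate in the control group
    p_b = success_b / n_b            # metric rate in the experiment group
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se          # z above ~1.96 => significant at 5%

# Hypothetical experiment: 120 of 1,000 users returned without the
# prototype, 156 of 1,000 returned after seeing it.
z = two_proportion_z(120, 1000, 156, 1000)
print(f"z = {z:.2f}")                # here z exceeds 1.96, so the uplift is significant
```

Defining the metric and its success threshold before the experiment runs is what turns "the users seemed to like it" into an answer you can act on.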
3. Create that experimentation culture.
Convince your stakeholders of the importance of experimentation. For instance, a data scientist may be averse to testing a scrappy model in order to gather first user feedback. It is up to you to make them understand that the user determines whether something fulfils their needs or not. When an engineer raises concerns about a patchy prototype that does not hold up to the highest engineering standards, explain that 80% is good enough to test with. Working for weeks on a perfect solution only to find out that the user does not need it is not a sensible alternative.
Convince your business stakeholders to have the patience to test before building. In the end, it is all about finding a compromise between a solution that is technically sensible, addresses a real problem and is viable to build. This could mean multiple iterations to find the right experiment setup, whether by restating the problem to be solved, by collecting new data, or both. Open communication and a relationship of trust between all parties is key here.
So, when it comes to building AI applications, experimentation is paramount if you want to build a successful solution with minimal waste. And while this can be challenging when there is a lack of understanding of its importance, or of the competence to do it well, there is really no alternative.