Innovation is perhaps the buzzword of the past decade. But is there another term you’ve heard buzzing around quite a bit?
Artificial intelligence (AI). As one of the hottest topics on the agenda of the world’s top companies, AI is considered a vital part of their business success. However, only 31 percent of companies feel capable of applying AI, according to Deloitte’s “2019 Global Human Capital Trends” report. The reality is that it takes more than a brilliant team of scientists to set up successful AI projects. So how can your company speed up its digital transformation?
While there’s no easy recipe, there are steps that companies can take to develop a stronger AI innovation process.
Step one: AI starts with data.
To lay the groundwork for successful AI projects, set up a data-analytics platform in your company through which scientists can explore data, build prototypes and experiment with AI models and the tooling needed to scale them. Many companies have their proprietary data organised in a data lake. In itself, this isn’t enough. Data has to be sorted, organised and labeled, a task that typically falls to the data-management team. For scientists to use it, the data has to be interpreted and sometimes even “cracked” to make it valuable, which is normally done by data engineers. This process, known as data wrangling, can take up a sizeable share of the time you spend on a project (sometimes most of it!). Needless to say, to use the data you will need to adhere to applicable laws and regulations, and you will have to take into account access rights, privacy laws and so on.
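To give a flavour of what wrangling looks like in practice, here is a minimal sketch in plain Python. The records, field names and label variants are all invented for illustration; the point is simply that raw data from a lake rarely arrives in a form a model can use:

```python
# Hypothetical example: raw records as they might sit in a data lake.
# Field names and label variants are invented for illustration.
raw_records = [
    {"customer": "a1", "country": "NL"},
    {"customer": "a2", "country": "Netherlands"},
    {"customer": "a3", "country": "nl "},
    {"customer": "a4", "country": None},
]

# A small mapping table such as a data-management team might maintain.
COUNTRY_ALIASES = {"nl": "NL", "netherlands": "NL"}

def normalise_country(value):
    """Map free-text country labels onto one canonical code, or None."""
    if value is None:
        return None
    key = value.strip().lower()
    return COUNTRY_ALIASES.get(key, value.strip().upper())

cleaned = []
for record in raw_records:
    country = normalise_country(record["country"])
    if country is None:
        continue  # uninterpretable record: set aside for manual review
    cleaned.append({**record, "country": country})

print(cleaned)  # three records, all with country "NL"
```

Multiply this by hundreds of fields and inconsistent sources, and it becomes clear why wrangling can consume most of a project.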
A common misunderstanding is that data has to be of perfect quality. It certainly helps if it is, but lower-quality data shouldn’t be used as an excuse not to provide it to an AI team. Data scientists are capable of filling the “gaps” with advanced analytical techniques.
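To make that concrete, the sketch below (plain Python, invented numbers) shows one of the simplest gap-filling techniques, mean imputation; in practice teams would reach for far more advanced methods, but the principle is the same:

```python
from statistics import mean

# Hypothetical readings with gaps (None marks a missing value).
readings = [12.0, None, 11.5, 13.0, None, 12.5]

# Impute each gap with the mean of the observed values.
observed = [r for r in readings if r is not None]
fill_value = mean(observed)
imputed = [fill_value if r is None else r for r in readings]

print(imputed)  # → [12.0, 12.25, 11.5, 13.0, 12.25, 12.5]
```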
Step two: Set up a diverse team with a broad skill set.
AI is not about data science alone; it takes a village to build an AI model. Designers, data engineers, developers and product managers have to collaborate closely and stay connected to the business to create a solution that is of real service to customers or employees. This helps ensure that AI prototypes can be scaled into production rather than merely yielding interesting insights before landing in the “graveyard of models”. It also means you need tooling that scores high on user experience, so that people actually want to use it.
An often overlooked aspect of building a strong AI team is diversity, which matters as much as a broad range of hard skills. It’s how you avoid one of the biggest risks in building AI models: bias. Take, for example, a model that projects the success of entrepreneurs in a certain country. In the Netherlands, where approximately 37 percent of entrepreneurs are women, correlating successful startups with gender will tell you that men are significantly more successful in business than women. That conclusion is unfair because the model doesn’t take into account the gender imbalance in the population of entrepreneurs. The more diverse your team, the more diverse the questions it will ask and the perspectives from which it will examine models, leading to better, more accurate outcomes.
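The trap in that example can be shown with a few lines of arithmetic. The counts below are invented, and the sketch assumes men and women succeed at exactly the same rate; raw counts still favour men purely because of the 63/37 population split:

```python
# Hypothetical population of 1,000 entrepreneurs, 37% of them women,
# with an identical 10% success rate for both groups (invented numbers).
women, men = 370, 630
success_rate = 0.10

successful_women = int(women * success_rate)  # 37
successful_men = int(men * success_rate)      # 63

# A naive model correlating success with gender sees far more successful men...
print(successful_men, successful_women)  # → 63 37

# ...but the per-group rates, the fair comparison, are identical.
print(successful_women / women == successful_men / men)  # → True
```

A team that asks “successful compared to what base rate?” catches this before the model ships.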
Step three: Is your line of defense strong enough?
There are many other risks in AI innovation that can have unwanted consequences for your models. The use and management of data has to adhere to applicable laws and regulations. Critical to achieving this is a strong support system involving departments such as Compliance, Non-financial Risk (NFR), Legal and Human Resources (HR), as well as Quality Assurance, Research and Ethics teams:
- NFR, Compliance, Legal
The role of NFR teams is to challenge and advise innovation teams on the non-financial risks of an AI project. First-line NFR officers and legal experts should work closely with data scientists to ensure that they tackle risk and compliance issues efficiently. Then come the second-line risk officers, who can take part in the validation process of AI models, advising on compliance issues as well as on the cybersecurity and operational challenges of AI innovation. Finally, Legal will weigh in on assessing intellectual property and licensing considerations.
- Human Resources
HR aspects also need to be considered. Set up clear career paths and offer coaching to retain top AI talent. A lack of interesting problems to solve, or of progress in professional development, is often why tech companies lose talent. As exciting as it sounds, a job in AI usually involves considerable amounts of preliminary work, a slow and sometimes tedious process. Set the right expectations from the start, and leave room for the team to work on exciting projects.
For a boost of energy and creativity, you can organise “experimentation weeks” a couple of times a year, during which teams can work on whatever they find exciting. That’s often how great ideas come to life.
- Quality Assurance and Research
Helping data scientists improve their craft is crucial. That’s where Quality Assurance teams come in. These are reviewers, usually senior data scientists, who analyse the models and the techniques that were used to build them before validating projects and sending them into the world. Another way of doing quality assurance is through peer reviews. Having someone else look at a model and the logic used to develop it will often prevent scientists from falling into ethical or bias traps. And scientists love a good challenge!
Although the field of artificial intelligence has been around for 60 years, we’re only at the beginning of applying it in practice. To stay at the forefront of innovation in the field, set up Research teams. Work with universities to exchange knowledge and work on algorithms that go beyond the immediate needs of your business. Find a way to give back to the community and contribute to the development of AI technology. This will also help your company attract Millennial talent, who are very much driven by purpose.
- Ethics: Doing the right thing
Think about what is acceptable now and what will be acceptable in five years—an essential exercise to avoid reputational damage. Collaborate with ethics councils and boards, and provide training for your staff on how to manage dilemmas about their use of data. Data is the driving force behind artificial intelligence, yet there are many cultural and social sensitivities about how it’s used and the purposes for which it’s used. For example, will your data amplify unfair biases against certain groups of people?
Having a strong line of defense in place ensures that risks are measured, assessed and acted upon, and that no harm is done through AI innovation. Doing good with AI should be deeply ingrained in your company’s philosophy.
Step four: Stay connected with the business and your customers.
Stay focused on the needs of your business and your customers. Have regular feedback loops. What often happens when developing AI models is that teams tend to get engrossed in projects that aren’t properly validated with the business and its customers. It’s not uncommon to work on a solution for a problem that doesn’t exist. Therefore, first make sure that there is a big problem to solve. The bigger the problem you are solving, the higher the likelihood that people will use your model.
Step five: Fail fast; keep on exploring.
Create a culture of failing fast. In corporate cultures that don’t allow people to fail fast and bounce back even faster, teams won’t be open about their failures. This is yet another obstacle that leads to biased or unethical models. It’s okay to say, “This didn’t work out as planned.”
Never stop exploring and experimenting with data. There’s no one-size-fits-all process, and it can take years for a company to develop an AI-innovation process that fits its business best.