The Hows And Whys Of Integrating DevOps Practices Into AI/ML Lifecycle


Automation, AI and DevOps

Surveys suggest that roughly 90% of startups fail, and only the remaining 10% go on to make a significant impact. Let’s be real: startups are chaotic. Putting a system in place to reach product-market fit quickly is no small feat. The past decade has seen the introduction and evolution of DevOps best practices in the startup ecosystem to help them succeed. But today, the ever-evolving complexities that companies face have made it evident that a more streamlined, refined and data-driven pipeline is needed, especially for companies that deal with Artificial Intelligence (AI) and Machine Learning (ML): Automation, AI and DevOps.

With AI models frequently driving products and their features, it makes sense to introduce DevOps practices and culture into the AI/ML workstream. AI implementation can’t focus only on the technology side. Culture and communication are an indispensable part of any AI-based organisational success, and this part of the pie is brought to the table by strong adherence to agile and DevOps practices. Young companies can’t wait six months to build a product only to find that customers don’t like it, which is what happens in most cases. Early customer feedback helps companies steer in the right direction and build what customers need, not what they think customers need. ‘Fail fast, fail early’ must be one of the pillars of AI startups in general. Solid DevOps practices enforce continuous integration, delivery and deployment of software, with midcourse corrections along the way. DevOps teams build agile enterprises by enabling three things: delivering new solutions, running and planning the infrastructure around those solutions, and taking care of midcourse changes.

MLOps

The four steps that AI organisations need to spend time and resources on are:

  1. Collecting data: Make data simple and accessible. Enable flexibility by collecting any and every relevant data regardless of where it resides.  
  2. Organising: All data should be organized into a trusted, business-ready foundation with built-in governance, protection, and compliance. 
  3. Analysing: Build and scale AI with trust and transparency. Analysing data in smarter ways and drawing on AI models empowers organizations to gain new insights and make smarter, more informed decisions. 
  4. Infusing: AI should then be operationalised through the business. 
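The four steps above can be sketched, very loosely, as a tiny data pipeline. Everything here is hypothetical and stdlib-only; a real organisation would use dedicated tooling for ingestion, governance and model serving, but the shape of the flow is the same: collect, organise, analyse, infuse.

```python
# Illustrative sketch of the four steps as a pipeline (all names hypothetical).
from statistics import mean

def collect(sources):
    """Step 1: pull raw records from every relevant source."""
    return [record for source in sources for record in source]

def organize(records):
    """Step 2: validate and clean into a trusted, business-ready foundation."""
    return [r for r in records if r.get("value") is not None]

def analyze(records):
    """Step 3: derive an insight (here, a trivial average as a stand-in model)."""
    return mean(r["value"] for r in records)

def infuse(threshold):
    """Step 4: operationalize the insight inside a business process."""
    def decide(new_value):
        return "approve" if new_value >= threshold else "review"
    return decide

sources = [
    [{"value": 10}, {"value": None}],  # e.g. a CRM export with gaps
    [{"value": 20}, {"value": 30}],    # e.g. clickstream events
]
baseline = analyze(organize(collect(sources)))  # invalid record dropped
decide = infuse(baseline)
print(baseline, decide(25), decide(5))
```

The point of the sketch is the ordering: analysis only happens on data that has already been organised, and the business-facing decision function is the last thing produced, not the first.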

DevOps approach towards AI and ML

DevOps approaches to AI and ML are limited by the fact that machine learning models differ from traditional application development in many ways. For one, ML models are highly dependent on data: training data, test data, validation data and, of course, the real-world data used in inferencing. Simply building a model and pushing it to operations is not sufficient to guarantee performance. DevOps approaches for ML also treat models as “code”, which makes them somewhat blind to issues that are strictly data-based: in particular, the management of training data, the need for retraining of models, and concerns about model transparency and explainability.

As organizations move their AI projects out of the lab and into production across multiple business units and functions, the processes by which models are created, operationalized, managed, governed, and versioned need to be made as reliable and predictable as the processes by which traditional application development is managed. 

DevOps for AI/ML has the potential to stabilize and streamline the model release process. It is often paired with different practices and toolsets to support CI/CD cycles. Here are some ways to consider CI/CD for AI/ML workstreams:
  • The AI/ML process relies on experimentation and iteration of models, and it can take hours or days for a model to train and test. Carve out a separate workflow to accommodate the timelines and artefacts of a model build-and-test cycle, and avoid gating time-sensitive application builds on AI/ML model builds.
  • For AI/ML teams, think about models as expected to deliver value over time rather than a one-time construction of the model. Adopt practices and processes that plan for and allow a model lifecycle and evolution.
  • DevOps is often characterized as bringing together business, development, release, and operational expertise to deliver a solution. Ensure that AI/ML is represented on feature teams and is included throughout the design, development, and operational sessions.
  • For production systems, CI/CD provides a way to monitor, govern and reliably operate models at scale.
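The first practice in the list, decoupling the model build from the application build, can be sketched as a standalone quality gate that the model pipeline runs on its own cadence. The threshold, metric and artifact format below are all assumptions for illustration; the idea is simply that a model is promoted only when it clears an agreed bar, and the application pipeline consumes the last promoted artifact rather than waiting on training.

```python
# Hypothetical quality gate for a separate model-build workflow.
import json
import os
import sys
import tempfile

ACCURACY_THRESHOLD = 0.80  # assumed acceptance bar agreed with the feature team

def train_and_evaluate():
    # Stand-in for an hours-long train/test cycle.
    predictions = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
    labels      = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    correct = sum(p == y for p, y in zip(predictions, labels))
    return {"accuracy": correct / len(labels), "version": "2024.1"}

def publish(metrics, path):
    # Writing the artifact metadata is the "release" the app build consumes.
    with open(path, "w") as f:
        json.dump(metrics, f)

metrics = train_and_evaluate()
if metrics["accuracy"] >= ACCURACY_THRESHOLD:
    artifact = os.path.join(tempfile.gettempdir(), "model_metadata.json")
    publish(metrics, artifact)
    print("published", metrics["accuracy"])
else:
    # Model is simply not promoted; time-sensitive app builds are never blocked.
    sys.exit("model below threshold; previous promoted version stays live")
```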

To address these needs, tools and solutions are emerging in the market that treat ML models as distinct from code and can handle the range of ML-model-specific management needs. They not only manage the process of model creation and operationalization, but also monitor the data used at training time and in real time, track the performance of models over time, and address specific needs for model governance, security and transparency.

Model Lifecycle Management – Similar to the needs for application development processes in traditional DevOps tools, MLOps tools need to help manage the lifecycle for model development, training, deployment, and operationalization and provide consistent, reliable processes for moving models from the data science environment to the production environment.

Model Versioning & Iteration – MLOps solutions provide capabilities to operationalize different versions of a model, support multiple versions in operation as needed, notify model users of version changes, offer visibility into model version history, and help ensure that obsolete models are not used.
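A toy registry makes these versioning behaviours concrete: multiple live versions, version history, deprecation and change notifications. Real systems would use a dedicated model registry service; the class and method names here are purely illustrative.

```python
# Minimal sketch of a model registry with versioning and notifications.
class ModelRegistry:
    def __init__(self):
        self._versions = {}     # model name -> {version: {"obsolete": bool}}
        self._subscribers = {}  # model name -> [callback]

    def register(self, name, version):
        """Add a new version and notify subscribed model users."""
        self._versions.setdefault(name, {})[version] = {"obsolete": False}
        for notify in self._subscribers.get(name, []):
            notify(name, version)

    def deprecate(self, name, version):
        """Mark a version obsolete so it is no longer served as latest."""
        self._versions[name][version]["obsolete"] = True

    def latest(self, name):
        """Return the newest non-obsolete version."""
        live = [v for v, meta in self._versions[name].items()
                if not meta["obsolete"]]
        return max(live)

    def history(self, name):
        """Full version history, obsolete versions included."""
        return sorted(self._versions[name])

    def subscribe(self, name, callback):
        self._subscribers.setdefault(name, []).append(callback)

registry = ModelRegistry()
events = []
registry.subscribe("churn", lambda n, v: events.append((n, v)))
registry.register("churn", "1.0")
registry.register("churn", "1.1")
registry.deprecate("churn", "1.0")
print(registry.latest("churn"), registry.history("churn"), events)
```

Note that `history` still returns "1.0" after deprecation: visibility into obsolete versions is exactly what audit and rollback scenarios need, even though `latest` will never serve them.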

Model Monitoring and Management – Since the real world continues to change and doesn’t match up to the world used in training data, MLOps solutions need to monitor and manage model usage, consumption, and results of models to make sure that their accuracy, performance, and other measures continue to provide acceptable results. Such solutions can provide visibility into data and model “drift” and keep an eye on various measures of model performance against thresholds and benchmarks.
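One common way such solutions quantify data drift is the Population Stability Index (PSI), which compares the live feature distribution against the training distribution; values above roughly 0.2 are conventionally treated as significant drift. The bucket edges, sample values and threshold below are assumptions for illustration.

```python
# Hedged sketch of drift detection with the Population Stability Index.
import math

def proportions(values, edges):
    """Histogram a feature into the given bucket edges, as proportions."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    total = len(values)
    return [c / total for c in counts]

def psi(expected, actual):
    """PSI over two lists of per-bucket proportions."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # guard against empty buckets
        score += (a - e) * math.log(a / e)
    return score

edges = [0, 25, 50, 75, 100]                      # assumed feature buckets
training = [10, 20, 30, 40, 60, 70, 80, 90]       # distribution at train time
live     = [80, 85, 90, 95, 70, 75, 60, 65]       # live data, shifted upward

drift = psi(proportions(training, edges), proportions(live, edges))
print("drift" if drift > 0.2 else "stable", round(drift, 2))
```

In a monitoring pipeline this check would run on a schedule per feature, with scores logged against the thresholds and benchmarks mentioned above so that drift triggers an alert or a retraining job rather than a manual inspection.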

Model Governance – Models that drive real-world decisions need to be trustworthy. MLOps platforms provide capabilities for auditing, compliance, governance, and access control. This includes features for model and data provenance (tracing data changes to model changes), model access control, prioritizing model access, providing transparency into how models use data, and meeting any regulatory or compliance requirements for model usage.
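Two of these governance capabilities, access control and provenance, can be sketched as a guarded prediction call that writes an append-only audit trail tying each prediction back to the model and data versions that produced it. The role names, fields and scoring logic are all hypothetical.

```python
# Illustrative sketch of access control plus an audit trail with provenance.
from datetime import datetime, timezone

AUDIT_LOG = []
PERMITTED_ROLES = {"credit_model": {"analyst", "risk_officer"}}  # assumed policy

def predict(model_name, user, role, features, model_version, data_version):
    if role not in PERMITTED_ROLES.get(model_name, set()):
        AUDIT_LOG.append({"event": "denied", "user": user, "model": model_name})
        raise PermissionError(f"{role} may not call {model_name}")
    score = sum(features) / len(features)  # stand-in scoring logic
    AUDIT_LOG.append({
        "event": "predict",
        "user": user,
        "model": model_name,
        "model_version": model_version,   # provenance: which model produced it
        "data_version": data_version,     # provenance: which training data
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return score

score = predict("credit_model", "alice", "analyst",
                [0.2, 0.4], "3.1", "2024-05-01")
try:
    predict("credit_model", "bob", "intern", [0.5], "3.1", "2024-05-01")
except PermissionError as e:
    print("blocked:", e)
print(score, len(AUDIT_LOG))
```

Logging denied attempts alongside successful calls matters: auditors typically need to see who tried to use a model, not just who succeeded.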

Model Discovery – MLOps solutions can provide model registries or catalogues for models produced within the tool ecosystem as well as a searchable model marketplace that provides a way to locate consumable models. These model discovery solutions provide sufficient information to be able to ascertain the relevance, quality, data origination, transparency of model generation, and other factors for a particular model.

Model Security – Models are assets that need to be protected. MLOps solutions can provide the functionality to protect models from being corrupted by tainted data, being overwhelmed by denial of service attacks, attacked through adversarial means, or being inappropriately accessed by unauthorized users.

As with any other product development effort, AI/ML or not, the key to making it work, among many other factors, is adapting to changes and problems, finding solutions, and improvising to overcome them.

If you have questions or suggestions, or would generally like to discuss your AI/ML infrastructures in place, please do connect with us. We are available to connect through call – +918002985878,+91439857338, or mail – info@aidetic.in

Resources: https://www.oreilly.com/online-learning/report/The-AI-Ladder.pdf
