AI DevOps Lifecycle
Test-Driven Design (TDD) and Development
Automate AI dialogues and one-shot completions, asserting expected responses with pass/fail test results. Run the full unit test suite as a regression before deploying updates to confirm changes have not altered AI behavior, and schedule nightly automated runs to monitor AI/LLM drift and entropy.
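A minimal sketch of what such a dialogue test could look like in Python's standard unittest framework. The complete() helper is a hypothetical stand-in for whatever completion API the org exposes, stubbed here so the example runs as-is; the assertions illustrate testing behavior rather than exact wording, which keeps minor model drift from producing false failures.

```python
import unittest


def complete(prompt: str) -> str:
    """Hypothetical stand-in for the org's completion API.

    Stubbed with a canned response so the test runs as-is;
    swap in a real client call in practice.
    """
    return "To reset your password, open Settings and choose Reset Password."


class PasswordResetDialogueTest(unittest.TestCase):
    """One-shot completion test: assert the response stays on script."""

    def test_mentions_reset_path(self):
        response = complete("How do I reset my password?")
        # Assert on expected behavior, not exact wording, so minor
        # model drift does not cause false failures.
        self.assertIn("Reset Password", response)

    def test_no_hallucinated_urls(self):
        response = complete("How do I reset my password?")
        self.assertNotIn("http://", response.lower())


if __name__ == "__main__":
    unittest.main()
```

Running this suite nightly (see the Monitor step) turns each assertion into a drift detector: a failure signals the model's behavior has moved away from what was asserted at deploy time.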
AI Agent Development / Skill Builder
Develop AI Models, Flows, and Assistants in a Dev or Sandbox org, with immediate feedback from live chatbots and unit tests. Define grounding prompts, skills, and conversation starters.
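A sketch of how an assistant definition might be structured. The field names and dataclass shape are illustrative assumptions, not the product's actual schema; they simply show the three pieces named above (grounding prompt, skills, conversation starters) living in one definition.

```python
from dataclasses import dataclass, field


@dataclass
class AssistantDefinition:
    """Hypothetical assistant definition; field names are illustrative."""
    name: str
    grounding_prompt: str  # system-level instructions that anchor behavior
    skills: list = field(default_factory=list)
    conversation_starters: list = field(default_factory=list)


support_bot = AssistantDefinition(
    name="Support Assistant",
    grounding_prompt=(
        "You are a support assistant. Answer only from the knowledge "
        "base and escalate anything you cannot verify."
    ),
    skills=["lookup_order_status", "create_case"],
    conversation_starters=[
        "Where is my order?",
        "I need help with my account.",
    ],
)
```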
Package
Prepare AI Models, Skills, and Eval tests for deployment. Define packaging rules, including the package name, description, and install passcode.
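One way a package manifest could look. The keys below are assumptions for illustration, not the product's actual manifest format; the point is that models, skills, and their eval tests travel together as one versioned, passcode-gated unit.

```python
# Illustrative packaging manifest; keys are assumptions, not the
# product's actual format.
package_manifest = {
    "name": "SupportAssistant",
    "description": "Grounded support assistant with order-lookup skills",
    "version": "1.2.0",
    "install_passcode": "CHANGE-ME",  # gate installs to known parties
    "contents": {
        "models": ["support-assistant-v3"],
        "skills": ["lookup_order_status", "create_case"],
        "eval_tests": ["PasswordResetDialogueTest"],
    },
}
```

Shipping the eval tests inside the package means every target org can re-run the same assertions after install, closing the loop with the TDD step.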
Publish & Listing
Optionally publish packages to the iDialogue AppStore to share AI models, assistants, and GPT Flows with customers.
Deploy, Install and Config
Install AI models in target orgs, integrating with existing metadata deployment processes (GitHub, Copado, Gearset, etc.).
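A rough sketch of how a pipeline step might trigger an install. The endpoint URL, payload fields, and environment variable names are all placeholder assumptions; adapt them to whatever deployment hook your existing pipeline already calls for metadata deploys.

```python
import json
import os
import urllib.request

# Placeholder endpoint; not a real API. Substitute the install hook
# your deployment tooling actually exposes.
INSTALL_URL = "https://example.com/api/packages/install"

payload = json.dumps({
    "package": "SupportAssistant",
    "version": "1.2.0",
    "passcode": os.environ.get("INSTALL_PASSCODE", ""),
    "target_org": os.environ.get("TARGET_ORG", "staging"),
}).encode()

req = urllib.request.Request(
    INSTALL_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```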
Onboarding
For end-user-facing applications, formalize communications that help users adopt the newly deployed functionality.
Monitor: Ongoing Alignment and Regression Testing
Run unit tests automatically every night and audit dialogues on an automated schedule. Escalate when responses deviate from test assertions.
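A minimal nightly driver, assuming the regression tests live in a tests/ directory discoverable by Python's unittest runner; the escalation here is just a print, standing in for whatever alerting channel the team already uses.

```python
import subprocess

# Nightly driver: run the full regression suite and escalate on deviation.
result = subprocess.run(
    ["python", "-m", "unittest", "discover", "-s", "tests"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # Deviation from test assertions: escalate rather than fail silently.
    # Replace these prints with a page, ticket, or chat alert.
    print("ESCALATE: AI regression detected")
    print(result.stdout)
    print(result.stderr)
```

Scheduling this via cron or the pipeline's existing nightly job gives the drift monitoring described in the TDD step without any new infrastructure.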
Retrospective
Periodically review AI models and Skills: initially daily, then weekly, biweekly, and monthly. Curate backlog requests and prioritize AI Model enhancements for ongoing development, then return to step 1 and define test assertions for the new features.