Uniphore is one of the largest B2B AI-native companies: decades-proven, built for scale, and designed for the enterprise. The company drives business outcomes across multiple industry verticals and enables the largest global deployments.
Uniphore infuses AI into every part of the enterprise that impacts the customer. We deliver the only multimodal architecture centered on customers that combines Generative AI, Knowledge AI, Emotion AI, workflow automation and a co-pilot to guide you. We understand better than anyone how to capture voice, video and text and how to analyze all types of data.
As AI becomes more powerful, every part of the enterprise that impacts the customer will be disrupted. We believe the future will run on the connective tissue between people, machines and data: all in the service of creating the most human processes and experiences for customers and employees.
Job Description:
Key Responsibilities
Lead QA strategy and roadmap for Uniphore’s AI and SaaS platforms, driving continuous improvements in automation, coverage, and release quality.
Design and implement scalable automation frameworks for API, UI, and data validation testing, applying expertise with AI tools for automation.
Partner with Data Science and Engineering teams to validate AI/ML workflows, analytics pipelines, and model integration within production environments.
Lead efforts in performance, security, and regression testing, ensuring Uniphore’s enterprise platforms deliver reliability at scale.
Perform root cause and trend analysis to identify process gaps and recommend automation or process enhancements.
Document QA processes, release notes, and best practices in Confluence / Jira and maintain alignment with Agile/SAFe practices.
Ensure end-to-end testing coverage across microservices, APIs, mobile, and web-based solutions in a multi-tenant cloud environment.
Qualifications
Bachelor’s degree in Computer Science or Engineering.
7–10+ years of experience in Software Quality Assurance, including significant automation experience.
Expert-level proficiency in at least one programming language (e.g., Java, Python, JavaScript, TypeScript).
Ability to understand complex features and write quality test cases and automation test scripts with the right level of validation and assertions.
Strong hands-on experience creating and maintaining custom test automation frameworks.
Experience with end-to-end, longevity, reliability, migration, and performance testing of complex systems.
Experience building and deploying AI testing strategies for foundation models, prompt engineering, fine-tuning, Retrieval-Augmented Generation (RAG), and Generative AI frameworks (e.g., LangChain, Guidance).
Deep understanding of CI/CD workflows and tools (e.g., Jenkins, GitHub Actions, GitLab CI, CircleCI).
Proven ability to understand complex, distributed systems and design test strategies for them.
Excellent analytical, debugging, and problem-solving skills.
Preferred Skills
Familiarity with microservices architectures and related testing methodologies.
Experience with performance/load testing tools (JMeter, Gatling, k6).
Hands-on experience with cloud platforms; familiarity with at least one of GCP, AWS, or Azure is ideal.
Experience working in Agile/Scrum environments.
Prior mentorship or leadership experience in QA teams.
Prior experience testing data-driven applications.
Location preference:
Uniphore is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
For more information on how Uniphore uses AI to unify—and humanize—every enterprise experience, please visit www.uniphore.com.


