AI Quality Assurance/Tester (Senior)
Accepting applications · Maryland Department of Information Technology · Linthicum, MD
Full-Time · Mid-Senior · AI
Posted: 6d ago
Category: Test
Experience: Mid-Senior
Country: United States
Introduction
Maryland Benefits (MD Benefits) is a dynamic, cloud-based platform. This enterprise-wide digital service allows organizations to build, test, host, operate, and integrate mission-driven applications, data, and emerging technologies. MD Benefits offers cloud-based Platform-as-a-Service (PaaS) capabilities, a shared data architecture, and product development services, all developed by the State of Maryland to help multiple agencies deliver and manage health, human, and social service benefits and programs. On July 1, 2025, the operation of the MD Benefits shared platform and statewide applications transitioned from the Department of Human Services (DHS) to the Department of Information Technology (DoIT).
***This is a contractual position with limited benefits***
Main Purpose
The AI Quality Assurance Tester provides quality management for information systems, applying standard methodologies, techniques, and metrics to assure product quality and carry out key quality-management activities. This individual is responsible for performing the following tasks:
Position Duties
The responsibilities of an AI Quality Assurance/Tester (Senior) include, but are not limited to, the following duties:
Establishing capable processes; monitoring and controlling critical processes and product mechanisms for performance feedback; implementing an effective root-cause analysis and corrective action system; and driving continuous process improvement;
Providing strategic quality plans in targeted areas of the organization;
Providing QA strategies to ensure continuous production of products consistent with established industry standards, government regulations, and customer requirements; and
Developing and implementing life-cycle and QA methodologies, educating staff on them, and implementing QA metrics.
Define and execute comprehensive QA strategies for AI-enabled software systems, including testing for model accuracy, bias, drift, and output consistency.
Design test cases for APIs or UIs that consume predictions from NLP models, classifiers, or AI assistants.
Validate data flows from ingestion pipelines through model inference and response rendering across multiple systems.
Partner with data engineers and scientists to verify pre-processing logic, validate predictions, and interpret edge-case outcomes.
Develop test cases and scenarios for model explainability (e.g., SHAP, LIME) and human-in-the-loop validation workflows.
Participate in agile sprint activities and act as QA lead for releases involving AI/ML features.
Perform database queries and SQL validation to confirm training and inference dataset consistency.
Maintain and enhance automated regression and integration test suites using tools like PyTest, Postman, Cypress, JMeter, or Selenium.
Support testing of user-facing AI features like chatbots, recommendations, smart prompts, or classification-driven workflows.
Conduct 508 accessibility, performance, and cross-browser testing for intelligent UI components.
Collaborate with developers and MLOps engineers to debug pipeline errors and track model prediction anomalies.
Monitor and test AI system behavior after model retraining, deployment, or feedback loop adjustment.
Minimum Qualifications
Education: This position requires a Bachelor’s degree from an accredited college or university in Engineering, Computer Science, Information Systems, or a related discipline. Seven (7) years of relevant experience will be accepted in lieu of a degree.
General Experience: The proposed candidate must have at least eight (8) years of information systems quality assurance experience.
Specialized Experience: The proposed candidate must have at least five (5) years of experience working with statistical methods and quality standards. This individual must have working QA/process knowledge and possess superior written and verbal communication skills.
(AI/ML testing or data science coursework/certification a plus)
At least 8 years of software quality assurance experience, with increasing responsibility in testing enterprise systems.
Minimum 3 years working with or supporting projects involving AI/ML services or data science teams.
Experience testing AI/ML model integration in enterprise applications (e.g., validation of model inferences, confidence scores, and response behaviors).
Familiarity with ML model lifecycle, training/inference pipelines, and feedback loop workflows.
Hands-on experience testing RESTful APIs, data APIs, or AWS-hosted AI services (e.g., SageMaker).
Experience with automated test frameworks and performance testing tools (e.g., JMeter, PyTest, Selenium, Postman, Newman).
Strong skills in writing and executing SQL for test data validation and pre/post inference checks.
Experience with JSON-based payloads, OpenAPI/Swagger, and mock service tools.
Ability to triage and analyze AI prediction issues related to data quality, model logic, or system design.
Familiarity with ethical AI practices, including model bias testing, fairness, and transparency, is a plus.
Excellent communication skills for bridging technical and non-technical stakeholders around complex AI test cases.
Experience in Agile teams, working with tools like JIRA, Confluence, GitHub, TestRail, or similar.