Job Description
Summary
As a member of the LLM Quality Assurance (QA) team, your main role is to evaluate how well our large language model (LLM) performs by comparing its decisions with human labels. When the two disagree, you’ll analyze the discrepancies to determine why they happened and help improve the model. Your work helps ensure our AI moderation system is as accurate and fair as possible. You’ll do this by identifying common issues, spotting patterns, and supporting model improvements through Root Cause Analysis (RCA).
Key Responsibilities:
- Video and Image Comparison: Review and compare new and old videos or images to identify similarities and differences, including visual and audio details.
- Annotation: Clearly label video/image content with important details like timestamps, objects shown, and key changes or similarities.
- Data Quality Assurance: Make sure all annotations are accurate and consistent by following set guidelines and standards.
- Documentation: Keep clear records of your work, including any issues or findings from the comparison process.
- Continuous Improvement: Suggest ways to improve tools and processes based on your experience and insights from the tasks.
- Floor Support: Provide on-the-floor support, answer associate queries, and clarify quality-related concerns in real time.
- Reporting: Publish and distribute daily QA performance reports to relevant stakeholders.
Qualifications:
- Experience: Previous experience in quality assurance or content moderation is highly preferred
- Language Skills: Proficient in English (B2 level or higher) in reading, writing, speaking, and listening
- Education: A college degree is required, although candidates without a degree but with relevant experience will be considered
- Work Arrangement: Candidates must be willing to work on-site on a flexible schedule at our office locations in Imus or Bacoor, Cavite
Skills
- Communication Skills
- Content Moderation
- Quality Assurance
- Team Collaboration
- Writing

