Job Title: AI Ethics & Safety Specialist
Location: Remote / Hybrid / [City, State]
Job Type: Full-time
Department: AI Governance / Research / Risk & Compliance
About the Role
We are seeking an AI Ethics & Safety Specialist to help shape the responsible development and deployment of artificial intelligence systems. In this role, you will assess ethical risks, guide the design of safe and trustworthy AI, and help implement standards that align with both organizational values and regulatory requirements.
This role sits at the intersection of technology, ethics, and policy, and requires close collaboration with engineering, product, legal, and executive teams.
Key Responsibilities
- Evaluate ethical, safety, and social implications of AI models and applications
- Develop and maintain AI governance frameworks, including model audits, fairness assessments, and documentation protocols
- Collaborate with product and engineering teams to integrate ethical principles into the AI development lifecycle
- Lead or support the implementation of red teaming exercises, impact assessments, and alignment techniques
- Monitor evolving regulations and best practices related to AI ethics and compliance
- Conduct research or analysis on emerging risks (e.g., bias, misuse, misinformation, algorithmic harm)
- Promote internal awareness of AI safety principles through training, workshops, and documentation
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Ethics, Philosophy, Law, Public Policy, or a related field
- Demonstrated experience in AI safety, responsible AI, digital ethics, or algorithmic auditing
- Understanding of machine learning fundamentals and how AI models operate in practice
- Familiarity with fairness and interpretability tools (e.g., SHAP, LIME, Aequitas)
- Strong analytical and communication skills; ability to work across technical and non-technical teams
Preferred Qualifications
- Experience with impact assessments (e.g., data protection, algorithmic accountability, bias audits)
- Background in regulatory frameworks such as the EU AI Act, GDPR, or NIST AI RMF
- Research experience or publications in AI ethics or safety
- Knowledge of generative AI and associated risks (e.g., hallucinations, deepfakes, misuse)
What We Offer
- A mission-driven culture focused on building safe and equitable AI
- Opportunities to shape groundbreaking AI systems and influence their societal impact
- Collaboration with top researchers, engineers, and policy experts
- Competitive compensation, remote flexibility, and professional development support
How to Apply
Please submit your resume along with a short statement (or writing sample) describing your perspective on responsible AI or a past project involving AI ethics or safety.