AI Security Engineer
Irvine Technology Corporation - Irvine, CA
Job Description
AI Security Engineer (Remote)

The AI Security Engineer serves as the organization's dedicated subject matter expert at the intersection of artificial intelligence and cybersecurity within a regulated healthcare environment. This role is responsible for evaluating AI vendors and technologies, establishing and enforcing secure AI implementation standards, and providing hands-on guidance to development and engineering teams adopting AI platforms such as Microsoft Copilot Studio, Azure AI Foundry, Snowflake Cortex, Claude Code, and other large language model (LLM)-powered tooling.

Operating within the HIPAA-regulated landscape, this engineer will ensure that AI integrations, including Model Context Protocol (MCP) servers, agentic workflows, command-line interfaces (CLIs), APIs, and third-party AI extensions, are architected and deployed in a manner consistent with the NIST AI Risk Management Framework (AI RMF), HITRUST, and organizational security policies. The role acts as a trusted advisor, security gatekeeper, and enabler for responsible AI adoption across the enterprise.

Job Type: Contract to hire
Location: Remote, working Pacific hours
Compensation: This job is expected to pay about $65 - $95 per hour, plus benefits
No visa sponsorship is available for this role.

What You'll Do:

AI Vendor & Technology Evaluation
- Lead security assessments of AI vendors and platforms prior to adoption or renewal
- Evaluate data handling, model transparency, and platform security controls
- Produce vendor risk reports with ratings, controls, and recommendations
- Maintain an AI technology inventory with risk classifications and review cycles

Secure AI Implementation Guidance
- Advise engineering and data teams on secure AI adoption and architecture
- Define and enforce secure configurations and least-privilege access
- Review AI integrations for authentication, encryption, and prompt injection risks
- Establish security standards for AI development tools and conduct code reviews
- Develop reference architectures, templates, and best practices

AI Risk Management & Compliance
- Maintain an AI risk register aligned to the NIST AI RMF
- Ensure compliance with HIPAA and applicable privacy regulations
- Conduct threat modeling and AI-focused security testing (e.g., prompt injection, data leakage)
- Monitor emerging AI threats and contribute to governance policies

Security Integration Reviews
- Assess AI architectures for data flow, segmentation, and trust boundaries
- Ensure proper handling of sensitive data (e.g., PHI) in AI systems
- Evaluate RAG and agentic workflows for access and escalation risks
- Provide security approval through change management processes

Training, Awareness & Policy
- Deliver AI security training across technical and clinical teams
- Develop and maintain AI security policies and usage standards
- Publish internal guidance and threat intelligence updates

What Gets You the Job:
- Bachelor's degree in Cybersecurity, Computer Science, Information Systems, or a closely related field; Master's degree preferred, with equivalent professional experience considered
- 5+ years of progressive experience in information security, including a minimum of 2 years focused on AI/ML security or applied AI technology evaluation
- Must have hands-on experience configuring and operating AI security controls in Microsoft environments (Copilot, Azure AI, data protection, logging)
- Proven direct ownership of AI security guardrails
- Must have demonstrated hands-on experience with Copilot Studio and Azure AI Foundry, including a deep understanding of backend functionality: plugin manifest security review, connector authentication, sensitivity label enforcement, identity configuration, private endpoints, content filtering policy management, model deployment governance, etc.
- Demonstrated hands-on experience with one or more of the following is a plus: Claude / Anthropic APIs, OpenAI API, GitHub Copilot, or LLM agentic frameworks (LangChain, AutoGen, Semantic Kernel)
- Experience working in a regulated environment; healthcare industry background strongly preferred
- Proven track record of conducting vendor risk assessments and producing executive-level risk documentation
- Strong grounding in security fundamentals, including IAM (OAuth 2.0, OIDC, SAML, managed identities, workload identity federation), API security, and network security; SIEM/SOAR integration for AI audit log ingestion, anomaly detection, and automated response; and threat modeling methodologies such as STRIDE, PASTA, or application of the MITRE ATT&CK and ATLAS frameworks
- Certifications (CISSP, CSSLP, OSCP/OSWE, CEH, AWS/Azure AI Security, Microsoft SC-100, Google PCSAE, CCSP, HCISPP, HITRUST CCSFP, CIPP/US, CRISC) are a plus

About Irvine Technology Corporation:
Irvine Technology Corporation (ITC) connects top talent with exceptional opportunities in IT, Security, Engineering, and Design. From startups to Fortune 500s, we partner with leading companies nationwide. Our AI recruiter, Avery, helps streamline the first step of your journey so we can focus on what matters most: helping you grow. Join us. Let us ELEVATE your career!

Irvine Technology Corporation provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, Irvine Technology Corporation complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.
Created: 2026-05-12