Cybersecurity Consultant
Prophet Town - Berkeley, CA
Job Description

This is a remote position.

Cybersecurity Consultant
Position Description
Type: Consultant (1099 or W2)
Location: Remote (with occasional travel to partner and client sites)
Organization: Prophet Town LLC
Duration: Potentially 6+ months of full-time work, followed by an ongoing retainer

Position Overview
We work with Open Science foundations committed to radical transparency. Every line of code, standard, database schema, and dataset we produce is, or will be, published openly. At the same time, we must protect unpublished datasets, partner intellectual property, personal data, and the integrity of AI-driven research workflows. We are seeking a consultant who combines deep cybersecurity and threat-analysis expertise with hands-on DevOps/DevSecOps capability, specialized knowledge of AI Agent security, and familiarity with academic/industry partner obligations.

You will not merely advise: you will assess threats with our clients, prioritize them quantitatively (Threat = Probability × Severity), lead tabletop exercises, and then roll up your sleeves to guide our DevOps teams in deploying production-grade, elegant, inexpensive mitigations.

Key Responsibilities
- Partner with clients to perform rapid threat assessments and produce prioritized risk registers (probability × severity scoring).
- Design and facilitate realistic tabletop exercises covering both conventional and AI-Agent scenarios.
- Lead DevOps/DevSecOps teams in implementing mitigations: infrastructure-as-code, CI/CD pipeline hardening, container security, secrets management, monitoring, and automated policy enforcement.
- Develop and maintain data-governance frameworks that protect sensitive/unpublished assets while preserving our clients' 100% open-publication mandate.
- Ensure all security controls satisfy contractual, regulatory, and ethical obligations to academic and industry partners (data-use agreements, IP clauses, SOX, GDPR/HIPAA-equivalent rules, export-control rules, etc.).
- Stay current on, and mitigate, both standard organizational threats and the evolving threat landscape specific to autonomous AI Agents.

Requirements
Required Expertise - Threat Landscape
You must be fluent in both the standard cybersecurity threats every organization faces and the specialized threats to AI Agents. Demonstrated ability to explain, model, and mitigate the following concepts is mandatory.

Standard Cybersecurity Threats (all organizations)
- Phishing, spear-phishing, and social-engineering attacks
- Ransomware and malware families (viruses, trojans, spyware, cryptojackers)
- Distributed Denial-of-Service (DDoS) and resource-exhaustion attacks
- Injection attacks (SQL, command, LDAP, etc.)
- Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), and other web/application-layer vulnerabilities
- Man-in-the-Middle (MitM), session hijacking, and credential theft (brute-force, credential stuffing, password spraying)
- Supply-chain and third-party dependency attacks (e.g., compromised open-source libraries or CI/CD pipelines)
- Cloud and container misconfigurations leading to unintended data exposure
- Advanced Persistent Threats (APTs) and nation-state campaigns
- Zero-day exploits, malware, and unpatched vulnerabilities
- Physical security breaches and insider-enabled network access
- Broken authentication, insecure deserialization, and insufficient logging/monitoring (OWASP Top 10 categories)

AI Agent-Specific Threats (in addition to the above)
Adversaries include nation states, malicious insiders, collaborators, partners, organized crime, individual criminals, hobby hackers, and adversaries that themselves deploy malicious AI Agents.
- Data Exfiltration: Bad actors (insider or external) steal sensitive data, including proprietary information, intellectual property, personal information, personal financial information, personal health information, or unpublished datasets not yet ready for publication.
- Data Poisoning: A bad actor aims to undermine the delegator's objective by returning subtly corrupted data, either in its scheduled monitoring updates or in the final artifact (Cinà et al., 2023).
- Verification Subversion: An adversary uses prompt injection or a related method to jailbreak the AI Critic agent used to verify research task completion (Liu et al., 2023), falsifying the research.
- Resource Exhaustion: An adversary mounts a (distributed) denial-of-service attack by intentionally consuming excessive computational or physical resources or overwhelming shared APIs.
- Unauthorized Access: An adversary uses malware to obtain permissions and privileges within the network that it would not otherwise have received, violating SOX or other academic or regulatory requirements.
- Backdoor Implanting: An adversary successfully completes a task but additionally embeds concealed triggers or vulnerabilities within the generated artifacts that can be exploited later, either by the adversary itself or by a third party (Rando and Tramèr, 2024; Wang et al., 2024c). Unlike data poisoning, which degrades performance, backdoors preserve immediate task utility to evade identification while compromising future security.

Required Technical & Operational Skills
- Hands-on DevOps/DevSecOps proficiency (Terraform, Ansible, Kubernetes security, GitHub Actions/GitLab CI hardening, container image signing, SBOM generation, etc.).
- Ability to write and review infrastructure-as-code, security policies (OPA, Kyverno), and monitoring rules that our teams can deploy immediately.
- Threat modeling (STRIDE, PASTA, or equivalent) and quantitative risk scoring.
- Familiarity with open-source software supply-chain security (dependency scanning, reproducible builds, sigstore, etc.).

Qualifications
- Demonstrated expertise in the threat areas listed above (via prior projects, research, publications, or certifications such as CISSP, CISM, CRISC, OSCP, or equivalent).
- Ability to translate complex threats into actionable DevOps deliverables.
- Excellent communication skills: equally comfortable briefing a principal investigator or pair-programming with a DevOps engineer.
- Experience level: We value depth of expertise and proven ability over years in title. Strong candidates with academic, research-lab, or early-career backgrounds are encouraged.
- Nice-to-have: Direct familiarity with academic research workflows in Biology, Bioinformatics, or Proteomics (understanding of genomic/proteomic data sensitivity, research reproducibility requirements, and typical partner data-sharing agreements).

Benefits
What We Offer
- Competitive salary, daily rate, or project rate
- Direct impact on a mission-driven open science organization
- Opportunity to shape security practices at the intersection of radical openness and cutting-edge AI research
- Collaborative environment with biologists, bioinformaticians, and DevOps engineers who value transparency and rigor

If you are excited by the challenge of securing 100% open science while protecting AI Agents from both garden-variety malware and esoteric backdoor-implanting attacks, and you can both strategize and ship code, please send your CV and a short note describing one threat-mitigation project you have led (standard or AI-Agent related). We look forward to working with you.
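The quantitative prioritization this role calls for (Threat = Probability × Severity) can be sketched in a few lines. This is only an illustrative toy, not our methodology; the threat names, probabilities, and severities below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Threat:
    name: str
    probability: float  # estimated likelihood over the planning horizon, in [0, 1]
    severity: int       # impact if realized, on a 1-10 scale

    @property
    def score(self) -> float:
        # The scoring rule from the position overview: Threat = Probability x Severity
        return self.probability * self.severity


# A miniature risk register with hypothetical entries.
register = [
    Threat("Phishing campaign against lab staff", 0.6, 7),
    Threat("Prompt injection against the AI Critic agent", 0.3, 9),
    Threat("Backdoor implanted in a generated artifact", 0.1, 10),
]

# Rank the register so the highest-risk items surface first.
for t in sorted(register, key=lambda t: t.score, reverse=True):
    print(f"{t.score:4.1f}  {t.name}")
```

In practice the probabilities and severities would come from the client workshops and tabletop exercises described above, and the sorted register would drive which mitigations the DevOps team ships first.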
Created: 2026-04-17