Mission and scope
HRAH advances public-benefit safety and security work at the intersection of artificial intelligence, human rights,
and societal resilience. We specialize in identifying and analyzing AI-driven risks, documenting vulnerabilities, and developing
practical mitigation approaches — including software tools — that reduce abuse, fraud, and unlawful exploitation enabled by AI.
Our approach is interdisciplinary: we integrate technical security analysis with legal, policy, ethical, and operational perspectives
to produce defensible, actionable recommendations for government, industry, and civil-society stakeholders.
What we do (in one view)
- AI threat and vulnerability research across model, data, and deployment layers
- Applied development of protective tools and workflows (defense-in-depth)
- Monitoring and analysis of unlawful AI use, including deepfakes and AI-enabled fraud
- Operational guidance for organizations deploying or relying on AI systems
Who we work with
- Government and public-sector stakeholders (where lawful and appropriate)
- Private organizations seeking risk reduction and responsible AI deployment
- Research partners and professional experts across multiple countries
- Civil-society entities impacted by AI misuse and digital harms
We structure engagements to be practical and measurable: clear problem definition, threat model, method, outputs, and documentation suitable for internal governance or external reporting.
Research and capabilities
HRAH conducts end-to-end AI security work: from vulnerability discovery and threat modeling to mitigation design,
implementation, and validation. We prioritize repeatable methods and evidence-based conclusions.
1) Vulnerability research
- Model-level risks: prompt injection, jailbreaking, misuse pathways, unsafe tool use
- Data-level risks: poisoning, leakage, membership inference, dataset integrity issues
- System-level risks: insecure integrations, agentic abuse, escalation via connectors/tools
- Evaluation methods and red-team style testing (case-dependent)
2) Protective engineering
- Detection and response workflows for AI incidents
- Content authenticity and deepfake countermeasures (verification pipelines)
- Risk controls: policy enforcement, guardrails, auditing, rate limits, access control
- Security-by-design recommendations for AI product lifecycles
3) Interdisciplinary analysis
- Legal and regulatory mapping relevant to AI misuse and digital harms
- Human impact analysis: fraud, coercion, exploitation, reputational harm
- Operational risk: insider threat, supply chain, third-party dependencies
- Policy design for responsible adoption and accountability
4) Partner-based research model
- Engaging vetted specialists and professional collaborators internationally
- Research coordination, QA review, and structured evidence collection
- Cross-domain expertise: software security, OSINT, law, ethics, operations
- Clear deliverables and documentation for stakeholders
Threat landscape addressed
Our work focuses on real-world abuse pathways where AI capabilities are operationalized for unlawful, deceptive,
coercive, or high-impact harmful outcomes.
AI misuse and unlawful applications
- Deepfake abuse and impersonation (voice, video, identity)
- AI-enabled fraud, social engineering, and automated deception
- Harassment, coercion, and exploitation amplified by synthetic media
- Disinformation operations using AI-assisted content generation
AI system security and reliability risks
- Prompt injection and tool/agent compromise
- Data exfiltration, privacy leakage, and sensitive inference
- Model supply-chain risks and unsafe dependency chains
- Weak governance: lack of auditability, monitoring, and incident readiness
We do not position AI safety as a purely technical issue. We treat it as a combined security, governance, and societal risk problem — with technical mitigations matched to practical operational controls.
Outputs and deliverables
HRAH produces stakeholder-ready outputs that translate technical findings into actionable controls and defensible documentation.
Deliverables are tailored to the mission and needs of public-sector, private-sector, and civil-society partners.
Typical deliverables
- Threat model and risk assessment (scope, assets, adversaries, scenarios)
- Vulnerability findings report with evidence and mitigation guidance
- Security and governance recommendations (controls, audit, monitoring)
- Policy or standards-aligned implementation roadmap
Applied software and tooling
- Prototype defensive tools (case-dependent and stakeholder-approved)
- Content authenticity verification workflows and automation
- Detection pipelines for synthetic media and AI-assisted fraud patterns
- Operational playbooks for incident response and escalation
Cooperation model with public and private organizations
We collaborate in a disciplined, compliance-aware format designed to reduce risk and increase impact.
Engagements can be advisory, research-driven, or engineering-supported, depending on partner needs and legal constraints.
Engagement formats
- Joint research and assessment projects
- Independent review of AI systems and deployment architectures
- Partner-supported applied development (tools, workflows, prototypes)
- Briefings and training for stakeholder teams (case-dependent)
What partners typically provide
- Problem statement, scope, and operational context
- System access requirements (as lawful and appropriate)
- Constraints: security, privacy, reporting, timelines
- Point-of-contact for technical and compliance coordination
Governance and safeguards
HRAH emphasizes responsible research practices, transparency, and safeguards that reduce legal, security, and reputational risk.
We align research execution with lawful access and stakeholder-approved boundaries.
- Defined scope, threat model, and success criteria before execution
- Documentation standards: evidence, reproducibility, and clear mitigation rationale
- Partner-aware confidentiality and secure handling of sensitive materials (as agreed)
- Conflict-of-interest and independence posture where relevant to the engagement
- Focus on defensive, protective outcomes and responsible disclosure principles (case-dependent)
We are prepared to provide concise governance documentation for due diligence (organization identifiers, short capability memo, engagement outline, and a COI statement if needed).
Suggested collaboration language (MOU / statement of work)
Partners may adapt the sample language below to their procurement and compliance requirements.
Option A — Research & Assessment
HUMAN RIGHTS & ANALYTICAL HOUSE, INC. (HRAH), a U.S. 501(c)(3) nonprofit organization, will conduct an AI safety and security assessment focused on identifying vulnerabilities, threat scenarios, and operational risks relevant to the Partner’s stated objectives. HRAH will deliver a written risk assessment, prioritized mitigation recommendations, and supporting documentation suitable for governance and reporting purposes. Scope, access, confidentiality, and reporting requirements will be defined in writing prior to work commencement.
Option B — Applied Engineering / Defensive Tooling
HRAH will support the Partner through applied development of defensive workflows and/or software prototypes designed to reduce unlawful or harmful AI use (including deepfakes and AI-enabled fraud), subject to agreed scope and constraints. Deliverables may include verification workflows, detection pipelines, operational playbooks, and implementation guidance. All deliverables are intended for protective, public-benefit, and risk-reduction purposes.
Option C — Multi-Party Collaboration
The Parties may coordinate with qualified professional partners (including international collaborators) to support interdisciplinary analysis and specialized technical tasks, provided that such participation is disclosed to the Partner and remains within agreed security, compliance, and confidentiality requirements. HRAH will coordinate quality assurance, documentation standards, and delivery of consolidated outputs to the Partner.
Documentation available upon request
We can provide documentation to facilitate due diligence and partner onboarding, including:
- Proof of 501(c)(3) status and basic organizational information
- Capability memo (AI safety/security scope, methods, and deliverables)
- Engagement outline (research plan, reporting format, timelines)
- Conflict-of-Interest (COI) statement (if required)
- Partner-ready summary for internal review or governance submission