allocator.tech is a public directory of Filecoin Plus (Fil+) allocators. It helps the ecosystem discover allocator pathways, understand who operates them, and review their scope and governance posture.
Applicants deciding where to request DataCap
Allocators and pathway operators
Governance reviewers and auditors
Storage Providers assessing allocator behavior and deal flow
Browse allocator pathways and profiles
Find key allocator details (pathway type, links, contacts, governance notes)
Compare allocators side-by-side
Find your listing and confirm that the basics are correct (name, addresses, pathway type, links).
Publish operational links (application form, documentation, contact channel).
Keep governance information current: refresh cadence, rules, and any requirements for clients/SPs.
Use the directory as the canonical reference when directing clients and reviewers to your pathway.
Allocator discovery is not a nice-to-have; it is part of governance scalability.
It reduces application routing mistakes (clients applying to the wrong pathway).
It lowers reviewer overhead (one place to find pathway context).
It standardizes the public footprint of allocators (less ambiguity, fewer ad-hoc clarifications).
Frontend: https://github.com/filecoin-project/filplus-registry
Backend: https://github.com/filecoin-project/filplus-backend
Dashboards, reports, and compliance signals for Filecoin Plus
datacapstats.io is a public analytics and reporting site for the Filecoin Plus (Fil+) program. It aggregates on-chain signals and derives metrics that help allocators, governance reviewers, clients, and Storage Providers understand how DataCap is being used and whether program expectations are being met.
Allocators: self-audit pathway behavior, spot client issues early, generate reports.
Clients: understand allocation history, claims, and provider behavior.
Storage Providers: see how Fil+ deal flow is distributed and what signals are being measured.
Governance teams: run audits with consistent evidence and structured reporting.
Dashboard: high-level “State of Fil+” view with headline statistics.
Allocators: allocator list, allocator-level metrics, and report generation.
Clients: client list, client-level metrics, and report generation.
Storage Providers: provider-level views of Fil+ activity and related signals.
Alerts: allocator-level checks and summaries of failures.
The Dashboard provides quick program health indicators, intended for fast orientation:
Approved and active allocator counts
Compliance proportions
High-level daily statistics
This section is designed for operational and governance workflows.
Allocator list + overview
Browse approved allocators
View remaining DataCap (where available)
Navigate into allocator detail pages
Allocator performance views (examples)
Retrievability Score: a derived indicator of storage-provider retrievability behavior across allocator deal flow.
Client Diversity: a view into concentration risk and how broadly an allocator serves clients.
Size of Biggest Client Allocation: a concentration signal to highlight single-client dominance.
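These views are derived signals rather than published formulas, but the underlying ideas are simple concentration statistics. A minimal TypeScript sketch of the two concentration signals, assuming per-client allocation totals for one allocator (the field names and the use of HHI are illustrative assumptions, not the dashboard's actual implementation):

```typescript
// Hypothetical shape: total DataCap (in bytes) allocated to each client
// by a single allocator. Field names are illustrative, not a real schema.
interface ClientAllocation {
  clientId: string;
  totalDataCap: number;
}

// Share of the allocator's DataCap held by its single largest client.
// A value near 1.0 signals single-client dominance.
function biggestClientShare(allocs: ClientAllocation[]): number {
  const total = allocs.reduce((sum, a) => sum + a.totalDataCap, 0);
  if (total === 0) return 0;
  const biggest = Math.max(...allocs.map((a) => a.totalDataCap));
  return biggest / total;
}

// One common way to express client diversity is the Herfindahl-Hirschman
// Index (HHI): the sum of squared shares. Lower values mean the allocator
// serves clients more broadly. (Whether the dashboard uses HHI specifically
// is an assumption; it illustrates the concentration idea.)
function clientConcentrationHHI(allocs: ClientAllocation[]): number {
  const total = allocs.reduce((sum, a) => sum + a.totalDataCap, 0);
  if (total === 0) return 0;
  return allocs
    .map((a) => a.totalDataCap / total)
    .reduce((sum, share) => sum + share * share, 0);
}
```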
Allocator reports
Allocator pages include report generation and structured report views. A report typically summarizes:
Allocator overview and configuration signals
Client-level issues discovered by checks
Storage provider distribution and duplication signals
Allocator Score & Ranking
Allocator score and ranking provide a simplified summary of multiple signals. The goal is not to “name and shame”, but to:
help allocators quickly see what requires attention,
allow reviewers to prioritize audits,
and make it easier for clients to choose pathways responsibly.
Fil+ incentives are powerful. Without measurement, the program becomes harder to govern and easier to misuse.
The metrics focus on three program health goals:
Data usefulness and accessibility: proof that stored data can be retrieved and validated.
Good-faith participation: evidence that pathways follow expected allocation rules and do not concentrate rewards in ways that undermine the program.
Operational integrity: structured reporting so that governance decisions can be explained and repeated consistently.
Client pages focus on DataCap usage and storage behavior.
Client detail pages typically include:
Latest claims
Storage provider usage
Allocation history
Client reports, including report generation and comparison over time
Alerts provide a compact “what needs attention” view. Examples of checks surfaced include:
Clients receiving DataCap from more than one allocator
Replica requirement failures
Unspent DataCap with prolonged inactivity
Tranche schedule deviations
IPNI misreporting signals
Low retrievability performance signals
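As a concrete illustration, the first check above reduces to a group-by over allocation records. A minimal sketch, with a hypothetical record shape:

```typescript
// Hypothetical allocation record; the real datasets expose richer fields.
interface AllocationRecord {
  clientId: string;
  allocatorId: string;
}

// Flag clients whose DataCap comes from more than one allocator.
function clientsWithMultipleAllocators(
  records: AllocationRecord[],
): Map<string, string[]> {
  const byClient = new Map<string, Set<string>>();
  for (const r of records) {
    const allocators = byClient.get(r.clientId) ?? new Set<string>();
    allocators.add(r.allocatorId);
    byClient.set(r.clientId, allocators);
  }
  const flagged = new Map<string, string[]>();
  for (const [clientId, allocators] of byClient) {
    if (allocators.size > 1) flagged.set(clientId, [...allocators]);
  }
  return flagged;
}
```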
https://github.com/fidlabs/filecoin-plus-webapp-next
Operations portal for Fil+ governance workflows
apply.allocator.tech supports the operational side of Fil+ governance and DataCap administration. It is built for day-to-day workflows that otherwise require manual coordination across multiple repositories and spreadsheets.
Fil+ governance team: manage new allocator applications and refresh requests.
Pathway/Metaallocator managers: assign DataCap to allocators under those pathways.
Root Key Holders (RKH): support multisig workflows (minting and removing DataCap).
Intake and manage allocator applications
Track refresh requests and reviewer workflows
Support RKH operations for minting / removal processes
https://github.com/fidlabs/rkh-frontend
https://github.com/fidlabs/allocator-rkh-backend
A Fil+ sandbox for testing new allocation models safely
The Experimental Pathway Metaallocator (EPMA) is a Filecoin Plus (Fil+) pathway designed to give builders and emerging allocators a safe, structured way to test novel DataCap allocation approaches—for example new evaluation metrics, automation, user interfaces, or incentive designs.
It exists to support iteration without putting the wider Fil+ program at risk.
Teams building automated / market-based allocator models that need DataCap to validate the system in real conditions.
New allocator operators proposing a novel allocation mechanism that cannot be evaluated using only existing manual pathway standards.
Applicants submit a structured proposal describing the experiment, success criteria, and safeguards.
Approved participants receive an allocation ceiling intended to be large enough to test meaningfully, but small enough to contain risk (1 PiB).
Participants publish reporting and learnings so governance can evaluate what should graduate into a main pathway.
Lower DataCap ceilings: allocators in the pathway receive up to 1 PiB.
Faster review cycles: tighter feedback loops for active iteration.
Transparency through reporting: participants are expected to publish metrics and insights.
EPMA creates a clear, legible “innovation lane” for Fil+. It helps the ecosystem:
test new allocation logic with real data onboarding,
collect evidence about what works (and what doesn’t),
and graduate successful models into long-term pathways (Manual, Market-based, or Automated).
https://github.com/fidlabs/Experimental-Pathway-Metaallocator
FIDL has developed smart contracts that allow different allocation policies to be executed with stronger controls and clearer auditability.
Allocator: an entity or pathway that distributes DataCap to clients.
MetaAllocator: a higher-level allocator that can deploy and manage allocators with consistent policy controls (for example: to standardize rules across a set of allocator instances, or to enforce specific constraints).
Enables deployment and management of MetaAllocators and allocator instances.
Allows allocators to grant DataCap allowances that are only usable with Storage Providers defined by the client.
https://www.fidl.tech/news/improvements-to-datacap-management
https://github.com/fidlabs/contract-docs/blob/main/ddo-contract.png
https://github.com/fidlabs/contract-docs/blob/main/ddo-contract-sequence.png
Designed for on-ramps that can present real-world contracts to governance reviewers. It provides finer control and supports higher DataCap allocations when appropriate.
https://github.com/fidlabs/metaallocator-dapp/blob/main/OnRampContractManagement.md
https://github.com/fidlabs/contract-onramp?tab=readme-ov-file#example-deployment-procedure
On-chain SLAs, off-chain measurements, and verifiable scoring for Storage Providers
This stack connects three things into one loop:
Clients define SLIs and SLAs and register agreements on-chain.
SLIs are measured off-chain using auditable data sources and repeatable measurement pipelines.
SLA performance is computed on-chain from the latest SLI attestations.
The goal is to make “storage quality” legible and enforceable, rather than a best-effort promise.
The reference oracle attestation includes:
Availability
Latency
Indexing
Retention
Bandwidth
Stability
SLARegistry stores SLA agreements (client ↔ provider) and defines how they are scored.
Oracle contracts hold the latest SLI attestations for providers.
Oracle service pulls SLI metrics from the Compliance Data Platform (CDP) API and posts them on-chain.
Beneficiary / payout logic can use SLA scores to compute a weighted performance outcome per provider.
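The authoritative scoring lives in the contracts, but the shape of the computation can be sketched. A minimal example of a weighted performance outcome over the six reference SLI dimensions (value ranges and weights here are assumptions for illustration):

```typescript
// The six reference SLI dimensions from the oracle attestation.
// Value ranges and weightings are assumptions for illustration;
// the authoritative scoring lives in the SLARegistry contracts.
interface SliAttestation {
  availability: number; // e.g., 0..100
  latency: number;
  indexing: number;
  retention: number;
  bandwidth: number;
  stability: number;
}

type Weights = { [K in keyof SliAttestation]: number };

// Weighted performance outcome per provider: a single score that
// beneficiary/payout logic could consume.
function slaScore(sli: SliAttestation, weights: Weights): number {
  const keys = Object.keys(weights) as (keyof SliAttestation)[];
  const totalWeight = keys.reduce((sum, k) => sum + weights[k], 0);
  const weighted = keys.reduce((sum, k) => sum + sli[k] * weights[k], 0);
  return totalWeight === 0 ? 0 : weighted / totalWeight;
}
```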
Filecoin deals are not all the same. By making expectations measurable, the network can:
support paid and regulated workloads with explicit quality targets,
reward Storage Providers who consistently meet client requirements,
and reduce governance overhead by making performance review more evidence-driven.
Oracle service: https://github.com/fidlabs/filecoin-oracle-service
Contracts: https://github.com/fidlabs/sla-allocator-contracts
Auditable measurement of Storage Providers’ public HTTP retrieval endpoints
This measurement stack produces repeatable metrics about Storage Providers (SPs) serving data over public HTTP retrieval endpoints.
It is built from two components:
Provider Sample URL Finder (URL Finder) — discovers working public HTTP endpoints and performs lightweight retrievability checks.
Bandwidth Measurement System (BMS) — performs controlled, parallel downloads to estimate throughput and network performance.
Retrievability: whether /piece/{PieceCID} is reachable and responds successfully under test conditions.
Consistency: whether repeated requests for the same piece behave stably and appear valid.
Bandwidth / throughput: aggregate download speed under parallel load.
Latency signals: Time to HEAD response (TTHR) and Time to First Byte (TTFB).
Sector utilization (optional): how served headers/sizes relate to expected deal/piece sizes.
Discover SP endpoints
Obtain SP peer ID via Lotus RPC.
Fetch advertised endpoints from cid.contact.
Parse multiaddrs and extract usable HTTP addresses.
Sample PieceCIDs
Query PieceCIDs associated with the SP (optionally filtered by client).
Randomly sample up to 100 pieces to keep tests repeatable and cost-bounded.
Build URLs and test
Construct http://{endpoint}/piece/{pieceCID}.
Run concurrent HTTP HEAD checks (e.g., up to 20 at once).
Output
Store/return at least one working URL.
Compute a sample retrievability percentage.
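Steps 3 and 4 amount to bounded, concurrent HEAD probing. A minimal sketch under those assumptions (helper names, batching strategy, and the timeout are illustrative; the concurrency cap mirrors the description above):

```typescript
// Probe piece URLs with HTTP HEAD and compute a sample retrievability
// percentage. Concurrency is capped (the real system uses up to 20).
async function sampleRetrievability(
  pieceUrls: string[], // e.g., http://{endpoint}/piece/{pieceCID}
  concurrency = 20,
  timeoutMs = 10_000,
): Promise<{ workingUrl?: string; retrievabilityPct: number }> {
  let ok = 0;
  let workingUrl: string | undefined;

  // Process in batches to bound concurrent connections.
  for (let i = 0; i < pieceUrls.length; i += concurrency) {
    const batch = pieceUrls.slice(i, i + concurrency);
    const results = await Promise.all(
      batch.map(async (url) => {
        try {
          const res = await fetch(url, {
            method: "HEAD",
            signal: AbortSignal.timeout(timeoutMs),
          });
          return res.ok ? url : undefined;
        } catch {
          return undefined; // unreachable or timed out
        }
      }),
    );
    for (const url of results) {
      if (url) {
        ok += 1;
        workingUrl ??= url; // keep at least one working URL
      }
    }
  }
  const pct = pieceUrls.length === 0 ? 0 : (ok / pieceUrls.length) * 100;
  return { workingUrl, retrievabilityPct: pct };
}
```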
Consistency checks determine whether an endpoint serves stable, valid responses for the same piece across repeated requests (“double-tap” testing). A response is treated as valid when:
reported Content-Length meets a minimum threshold, and
the content begins with a valid CAR header.
This avoids counting error pages or misconfigured endpoints as “retrievable”.
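A heuristic version of that validity test can be sketched as follows. A CARv1 stream begins with an unsigned varint header length followed by a DAG-CBOR map, so inspecting the leading bytes catches most error pages (the thresholds are illustrative assumptions):

```typescript
// Heuristic validity check for a "double-tap" response: the reported
// Content-Length must meet a minimum, and the body must begin with a
// plausible CARv1 header. Thresholds are illustrative assumptions.
const MIN_CONTENT_LENGTH = 1024;
const MAX_CAR_HEADER_LEN = 4096;

function decodeUvarint(
  buf: Uint8Array,
): { value: number; bytes: number } | undefined {
  let value = 0;
  let shift = 0;
  for (let i = 0; i < buf.length && shift < 35; i++) {
    value |= (buf[i] & 0x7f) << shift;
    if ((buf[i] & 0x80) === 0) return { value, bytes: i + 1 };
    shift += 7;
  }
  return undefined; // truncated or implausibly long varint
}

function looksLikeCar(firstBytes: Uint8Array, contentLength: number): boolean {
  if (contentLength < MIN_CONTENT_LENGTH) return false;
  const header = decodeUvarint(firstBytes);
  if (!header || header.value <= 0 || header.value > MAX_CAR_HEADER_LEN) {
    return false;
  }
  // The CARv1 header block is a DAG-CBOR map; CBOR maps start with
  // major type 5 (high bits 101), e.g., 0xa2 for a two-entry map.
  const next = firstBytes[header.bytes];
  return next !== undefined && (next & 0xe0) === 0xa0;
}
```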
BMS runs coordinated range downloads to estimate throughput under load.
Uses a scheduler + distributed workers (with PostgreSQL + RabbitMQ) to coordinate tests.
Workers use random byte-range requests, synchronized in time windows.
Includes a warm-up stage and staged load tests (e.g., 80% then 100% worker saturation) to detect saturation and instability.
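The inner measurement step of a worker can be pictured as a timed, random byte-range download. This sketch omits the scheduler, PostgreSQL, and RabbitMQ plumbing, and the range size is an assumption:

```typescript
// One worker's measurement step: download a random byte range from a
// known-working piece URL and report observed throughput. In the real
// system, workers are synchronized into time windows by the scheduler.
async function measureRangeThroughput(
  url: string,
  pieceSize: number, // known/expected size of the piece in bytes
  rangeBytes = 8 * 1024 * 1024, // 8 MiB per request (assumed)
): Promise<{ bytes: number; seconds: number; mbps: number }> {
  const start = Math.floor(
    Math.random() * Math.max(1, pieceSize - rangeBytes),
  );
  const end = start + rangeBytes - 1;

  const t0 = performance.now();
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (!res.ok) throw new Error(`range request failed: ${res.status}`);
  const body = new Uint8Array(await res.arrayBuffer());
  const seconds = (performance.now() - t0) / 1000;

  const mbps = (body.byteLength * 8) / 1e6 / seconds;
  return { bytes: body.byteLength, seconds, mbps };
}
```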
Retrievability is a sample-based signal, not a guarantee that all data is always retrievable.
Low retrievability can reflect endpoint advertisement issues, instability, rate limiting, or genuine availability problems.
Similar throughput across the staged load levels (e.g., 80% and 100%) suggests saturation; large divergence can indicate concurrency limits or instability.
These measurements can be surfaced in Fil+ dashboards and used as inputs to SLIs (for example availability, latency, bandwidth, and stability). The Compliance Data Platform (CDP) is the aggregation and serving layer that makes these signals easy to consume.
Structured compliance data and APIs for Fil+ dashboards, reporting, and automation
The Compliance Data Platform (CDP) is a NestJS-based microservice that tracks, analyzes, and reports on Filecoin Plus (Fil+) compliance metrics. It aggregates allocator, client, and storage-provider signals into a consistent set of tables and API endpoints that power dashboards and automated workflows.
Allocator management: allocator metadata, compliance scores, audit states, and allocation patterns.
Storage Provider monitoring: retrievability and performance signals over time.
Client tracking: DataCap usage and distribution across providers.
Compliance scoring: multi-metric scoring and ranking for allocators.
Report generation: scheduled allocator and client reports.
Aggregation jobs: hourly/daily/weekly runners producing analytical tables.
Historical analysis: week-over-week and month-over-month trend views.
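Because CDP is NestJS-based, the aggregation runners can be pictured as scheduled services. A minimal sketch using @nestjs/schedule (the service, schedule, and steps are hypothetical, not the actual CDP code):

```typescript
import { Injectable, Logger } from "@nestjs/common";
import { Cron, CronExpression } from "@nestjs/schedule";

// Hypothetical hourly runner: recompute an analytical table from the
// latest raw signals. The real CDP runners and table names differ.
@Injectable()
export class AllocatorAggregationRunner {
  private readonly logger = new Logger(AllocatorAggregationRunner.name);

  @Cron(CronExpression.EVERY_HOUR)
  async runHourlyAggregation(): Promise<void> {
    this.logger.log("recomputing allocator compliance aggregates");
    // 1. read raw allocation/claim signals from the sources listed below
    // 2. compute per-allocator metrics (scores, distributions, trends)
    // 3. upsert results into the analytical tables served by the API
  }
}
```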
CDP is designed to combine multiple sources into one auditable dataset, including:
DMOB database (deal and program data) accessed via SSH tunnel
GitHub (allocator registry and client bookkeeping)
Lotus RPC (Filecoin network state)
CID Contact API (provider endpoint discovery)
BMS API (retrievability data)
Filscan API (blockchain data)
IPNI advertisement data (via a dedicated fetcher)
Documentation: https://cdp.allocator.tech/docs
Architecture: https://github.com/fidlabs/compliance-data-platform/blob/documentation-draft/ARCHITECTURE.md
System documentation: https://github.com/fidlabs/compliance-data-platform/blob/documentation-draft/DOCUMENTATION.md
A portal integrating Gitcoin Passport verification to support identity checks in Fil+ workflows.
https://github.com/fidlabs/filplus-kyc-portal
Utility tooling that supports smaller or test-oriented DataCap flows.
https://github.com/fidlabs/filplus-faucet
Dashboards and automated governance workflows are only as good as the data they are built on. FIDL’s approach is to combine reliable on-chain sources, operational program datasets, and active measurements, then serve them through a documented API layer.
On-chain history and state
Lotus RPC: live chain context for miners, claims, and state-derived signals.
Full historical traces (when needed): state traces from genesis can be used to reconstruct historical transitions and improve auditability for long-range analytics.
Program datasets and registries
DMOB database: deal and allowance data used for Fil+ analytics and reporting.
Zondax data traces: the processing pipelines are implemented in https://github.com/fidlabs/tracer
GitHub registries: allocator registry and bookkeeping data used for pathway metadata and operational context.
Provider endpoint and indexing signals
CID Contact API: provider-advertised endpoints used for discovery of public HTTP retrieval addresses.
IPNI advertisement data: indexing/discovery signals used in compliance-style checks.
Active measurement systems
Network Measurements System: auditable endpoint reachability, consistency checks, and bandwidth/latency measurements.
SLA measurements
Collect raw signals from sources above (on-chain, program DBs, discovery systems, and measurements).
Aggregate into analytical tables using scheduled jobs (hourly / daily / weekly).
Serve via CDP APIs (documented at cdp.allocator.tech/docs) so dashboards and other services consume consistent datasets.
The same datasets feed both dashboards (e.g., allocator/client reports, rankings, alerts) and on-chain SLA attestations via the oracle service.
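Downstream consumers read those datasets over HTTP. A minimal sketch of a consumer call (the endpoint path and response shape are placeholders; the documented API lives at https://cdp.allocator.tech/docs):

```typescript
// Fetch a dataset from the CDP API. The path below is a hypothetical
// placeholder; consult https://cdp.allocator.tech/docs for the real
// endpoints and response schemas.
async function fetchAllocatorSignals(allocatorId: string): Promise<unknown> {
  const res = await fetch(
    `https://cdp.allocator.tech/example/allocators/${allocatorId}`, // hypothetical path
  );
  if (!res.ok) throw new Error(`CDP request failed: ${res.status}`);
  return res.json();
}
```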