Documentation ¶
Index ¶
- Variables
- func Cache(ctx context.Context, blobIndex blobindexlookup.BlobIndexLookup, ...) error
- func Publish(ctx context.Context, blobIndex blobindexlookup.BlobIndexLookup, ...) error
- type IndexingService
- func (is *IndexingService) Cache(ctx context.Context, provider peer.AddrInfo, claim delegation.Delegation) error
- func (is *IndexingService) Get(ctx context.Context, claim ipld.Link) (delegation.Delegation, error)
- func (is *IndexingService) Publish(ctx context.Context, claim delegation.Delegation) error
- func (is *IndexingService) Query(ctx context.Context, q types.Query) (types.QueryResult, error)
- type Option
Constants ¶
This section is empty.
Variables ¶
var ErrUnrecognizedClaim = errors.New("unrecognized claim type")
Functions ¶
func Cache ¶ added in v1.0.0
func Cache(ctx context.Context, blobIndex blobindexlookup.BlobIndexLookup, claims contentclaims.Service, provIndex providerindex.ProviderIndex, provider peer.AddrInfo, claim delegation.Delegation) error
func Publish ¶ added in v1.0.0
func Publish(ctx context.Context, blobIndex blobindexlookup.BlobIndexLookup, claims contentclaims.Service, provIndex providerindex.ProviderIndex, provider peer.AddrInfo, claim delegation.Delegation) error
Types ¶
type IndexingService ¶
type IndexingService struct {
// contains filtered or unexported fields
}
IndexingService implements read/write logic for indexing data with IPNI, content claims, sharded DAG indexes, and a cache layer.
func NewIndexingService ¶
func NewIndexingService(blobIndexLookup blobindexlookup.BlobIndexLookup, claims contentclaims.Service, publicAddrInfo peer.AddrInfo, providerIndex providerindex.ProviderIndex, options ...Option) *IndexingService
NewIndexingService returns a new indexing service.
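A minimal construction sketch (not taken from the package docs): it assumes the blob index lookup, content claims service, and provider index have been built elsewhere, and the import paths are guesses at the module layout rather than confirmed paths.

package example

import (
	"github.com/libp2p/go-libp2p/core/peer"

	// Assumed import paths; check the module for the real package locations.
	"github.com/storacha/indexing-service/pkg/blobindexlookup"
	"github.com/storacha/indexing-service/pkg/providerindex"
	"github.com/storacha/indexing-service/pkg/service"
	"github.com/storacha/indexing-service/pkg/service/contentclaims"
)

// newService wires an IndexingService from dependencies constructed elsewhere
// (caches, stores, an IPNI client, and so on).
func newService(
	blobIndex blobindexlookup.BlobIndexLookup,
	claims contentclaims.Service,
	provIndex providerindex.ProviderIndex,
	publicAddrInfo peer.AddrInfo, // addresses other peers can use to reach this node
) *service.IndexingService {
	return service.NewIndexingService(blobIndex, claims, publicAddrInfo, provIndex)
}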
func (*IndexingService) Cache ¶ added in v1.0.0
func (is *IndexingService) Cache(ctx context.Context, provider peer.AddrInfo, claim delegation.Delegation) error
Cache caches a claim without publishing it to IPNI. It is used to cache a location commitment that comes from a storage provider on blob/accept without publishing it, since the storage provider will publish it themselves (a delegation for the location commitment is already generated on blob/accept). Ideally, IPNI would support UCAN chains for publishing, so the claim could be published directly from the storage service; it does not for now, so storage providers publish themselves and cache directly with us.
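A hedged sketch of the blob/accept path described above: the provider's AddrInfo and the location commitment delegation are assumed to arrive with the blob/accept invocation, the function name is illustrative, and the import paths are assumptions.

package example

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p/core/peer"

	// Assumed import paths.
	"github.com/storacha/go-ucanto/core/delegation"
	"github.com/storacha/indexing-service/pkg/service"
)

// cacheLocationCommitment caches a storage provider's location commitment
// without publishing it to IPNI -- the provider publishes it themselves.
func cacheLocationCommitment(
	ctx context.Context,
	is *service.IndexingService,
	provider peer.AddrInfo, // the storage provider that issued the commitment
	locationCommitment delegation.Delegation, // generated on blob/accept
) error {
	if err := is.Cache(ctx, provider, locationCommitment); err != nil {
		return fmt.Errorf("caching location commitment: %w", err)
	}
	return nil
}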
func (*IndexingService) Get ¶ added in v1.0.0
func (is *IndexingService) Get(ctx context.Context, claim ipld.Link) (delegation.Delegation, error)
func (*IndexingService) Publish ¶ added in v1.0.0
func (is *IndexingService) Publish(ctx context.Context, claim delegation.Delegation) error
Publish caches and publishes a content claim. Publishing a claim is intended to work as follows:
- For all claims except index claims, just use the publish API on ProviderIndex.
- For index claims, assume publishing fails if a location claim for the index CAR CID has not already been published. The service should look up the location claim for the index CID, fetch the ShardedDagIndexView, then use the hashes inside it to assemble all the multihashes for the index advertisement.
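A hedged sketch of publishing a claim; the dispatch between index and non-index claims happens inside Publish per the steps above, so a caller just passes the delegation. Names and import paths below are assumptions.

package example

import (
	"context"

	// Assumed import paths.
	"github.com/storacha/go-ucanto/core/delegation"
	"github.com/storacha/indexing-service/pkg/service"
)

// publishClaim caches a claim and publishes it to IPNI. For index claims the
// service is expected to resolve the index's location claim, fetch the
// ShardedDagIndexView, and expand the advertisement to every multihash in it.
func publishClaim(ctx context.Context, is *service.IndexingService, claim delegation.Delegation) error {
	return is.Publish(ctx, claim)
}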
func (*IndexingService) Query ¶
func (is *IndexingService) Query(ctx context.Context, q types.Query) (types.QueryResult, error)
Query returns relevant content claims for the given query using the following steps:
1. Query the ProviderIndex for all matching records.
2. For any index records, query the ProviderIndex for any location claims for that index CID.
3. For any index claims, query the ProviderIndex for location claims for the index CID.
4. Query the BlobIndexLookup to get the full ShardedDagIndex for any index claims.
5. Query the ProviderIndex for location claims for any shards that contain the multihash, based on the ShardedDagIndex.
6. Read the requisite claims from the ClaimLookup.
7. Return all discovered claims and sharded DAG indexes.
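A hedged sketch of running a query end to end; building the types.Query and reading the types.QueryResult are left to the caller because their fields and methods are not shown in this documentation, and the import paths are assumptions.

package example

import (
	"context"
	"fmt"

	// Assumed import paths.
	"github.com/storacha/indexing-service/pkg/service"
	"github.com/storacha/indexing-service/pkg/types"
)

// findClaims runs the multi-step lookup described above for a pre-built query.
func findClaims(ctx context.Context, is *service.IndexingService, q types.Query) (types.QueryResult, error) {
	result, err := is.Query(ctx, q)
	if err != nil {
		return result, fmt.Errorf("querying indexing service: %w", err)
	}
	// result carries the discovered content claims and sharded DAG indexes;
	// see the types package for how to read them out.
	return result, nil
}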
type Option ¶
type Option func(is *IndexingService)
Option configures an IndexingService
func WithConcurrency ¶
WithConcurrency causes the indexing service to process find queries in parallel, with the given concurrency.
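A sketch of supplying options at construction time. WithConcurrency's signature is not shown above; an int argument giving the degree of parallelism is assumed, as is the service import path.

package example

import (
	// Assumed import path.
	"github.com/storacha/indexing-service/pkg/service"
)

// defaultOptions returns the options to pass as the trailing variadic
// arguments of NewIndexingService.
func defaultOptions() []service.Option {
	return []service.Option{
		// Assumed signature: WithConcurrency(concurrency int) Option.
		service.WithConcurrency(8), // handle up to 8 find queries in parallel
	}
}

These would then be passed as the trailing arguments to the constructor, e.g. NewIndexingService(blobIndex, claims, publicAddrInfo, provIndex, defaultOptions()...).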