ipld

package module
v0.12.1
Published: Aug 30, 2021 License: MIT Imports: 10 Imported by: 624

README

go-ipld-prime

go-ipld-prime is an implementation of the IPLD spec interfaces, batteries-included codec implementations of IPLD for CBOR and JSON, and tooling for basic operations on IPLD objects (traversals, etc).

API

The API is split into several packages based on the responsibility of the code. The most central interfaces are in the base package, but you'll certainly need to import additional packages to get concrete implementations into action.

Roughly speaking, the core package interfaces are all about the IPLD Data Model; the codec/* packages contain functions for parsing serial data into the IPLD Data Model, and converting Data Model content back into serial formats; the traversal package is an example of higher-order functions on the Data Model; concrete ipld.Node implementations ready to use can be found in packages in the node/* directory; and several additional packages contain advanced features such as IPLD Schemas.

(Because the codecs, as well as higher-order features like traversals, are implemented in a separate package from the core interfaces or any of the Node implementations, you can be sure they're not doing any funky "magic" -- all this stuff will work the same if you want to write your own extensions, whether for new Node implementations, new codecs, or new higher-order functions!) A short sketch of these packages working together follows the list below.

  • github.com/ipld/go-ipld-prime -- imported as just ipld -- contains the core interfaces for IPLD. The most important interfaces are Node, NodeBuilder, Path, and Link.
  • github.com/ipld/go-ipld-prime/node/basicnode -- imported as basicnode -- provides concrete implementations of Node and NodeBuilder which work for any kind of data.
  • github.com/ipld/go-ipld-prime/traversal -- contains higher-order functions for traversing graphs of data easily.
  • github.com/ipld/go-ipld-prime/traversal/selector -- contains selectors, which are sort of like regexps, but for trees and graphs of IPLD data!
  • github.com/ipld/go-ipld-prime/codec -- parent package of all the codec implementations!
  • github.com/ipld/go-ipld-prime/codec/dagcbor -- implementations of marshalling and unmarshalling as CBOR (a fast, binary serialization format).
  • github.com/ipld/go-ipld-prime/codec/dagjson -- implementations of marshalling and unmarshalling as JSON (a popular human readable format).
  • github.com/ipld/go-ipld-prime/linking/cid -- imported as cidlink -- provides concrete implementations of Link as a CID. Also, the multicodec registry.
  • github.com/ipld/go-ipld-prime/schema -- contains the schema.Type and schema.TypedNode interface declarations, which represent IPLD Schema type information.
  • github.com/ipld/go-ipld-prime/node/typed -- provides concrete implementations of schema.TypedNode which decorate a basic Node at runtime to have additional features described by IPLD Schemas.
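
As promised above, here is a minimal sketch (not part of the original README) showing a few of these packages in concert: a Node is built with basicnode, then handed to the dagcbor codec. Only the import paths listed above are assumed:

package main

import (
	"bytes"
	"fmt"

	"github.com/ipld/go-ipld-prime/codec/dagcbor"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	// Pick a prototype from node/basicnode, and build a Node with it.
	nb := basicnode.Prototype.String.NewBuilder()
	if err := nb.AssignString("hello"); err != nil {
		panic(err)
	}
	n := nb.Build()

	// Hand the Node to a codec from codec/* -- the codec sees only the Node interface.
	var buf bytes.Buffer
	if err := dagcbor.Encode(n, &buf); err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", buf.Bytes())
}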

Other IPLD Libraries

The IPLD specifications are designed to be language-agnostic. Many implementations exist in a variety of languages.

For overall behaviors and specifications, refer to the IPLD website, or its source, in the IPLD meta repo:

  • https://ipld.io/
  • https://github.com/ipld/ipld/ -- you should find specs in the specs/ dir there, human-friendly docs in the docs/ dir, and information about why things are designed the way they are mostly in the design/ directories.

Distinctions from go-ipld-interface & go-ipld-cbor

This library ("go ipld prime") is the current head of development for golang IPLD, and we recommend new developments in golang be done using this library as the basis.

However, several other libraries exist in golang for working with IPLD data. Most of these predate go-ipld-prime and no longer receive active development, but since they do support a lot of other software, you may continue to see them around for a while. go-ipld-prime is generally serially compatible with these -- just like it is with IPLD libraries in other languages.

In terms of programmatic API and features, go-ipld-prime is a clean take on the IPLD interfaces, and chose to address several design decisions very differently than the older generation of libraries:

  • The Node interfaces map cleanly to the IPLD Data Model;
  • Many features known to be legacy are dropped;
  • The Link implementations are purely CIDs (no "name" nor "size" properties);
  • The Path implementations are provided in the same box;
  • The JSON and CBOR implementations are provided in the same box;
  • Several odd dependencies on blockstore and other interfaces that were closely coupled with IPFS are replaced by simpler, less-coupled interfaces;
  • New features like IPLD Selectors are only available from go-ipld-prime;
  • New features like ADLs (Advanced Data Layouts), which provide features like transparent sharding and indexing for large data, are only available from go-ipld-prime;
  • Declarative transformations can be applied to IPLD data (defined in terms of the IPLD Data Model) using go-ipld-prime;
  • and many other small refinements.

In particular, the clean and direct mapping of "Node" to concepts in the IPLD Data Model ensures a much more consistent set of rules when working with go-ipld-prime data, regardless of which codecs are involved. (Codec-specific embellishments and edge-cases were common in the previous generation of libraries.) This clarity is also what provides the basis for features like Selectors, ADLs, and operations such as declarative transformations.

Many of these changes had been discussed for the other IPLD codebases as well, but we chose a clean-break v2 as a more viable project-management path. Both go-ipld-prime and these legacy libraries can co-exist on the same import path, and both refer to the same kinds of serial data. Projects wishing to migrate can do so smoothly and at their leisure.

We now consider many of the earlier golang IPLD libraries to be de facto deprecated, and you should expect new features here, rather than in those libraries. (Those libraries still won't be going away anytime soon, but we really don't recommend new construction on them.)

unixfsv1

Be advised that faculties for dealing with unixfsv1 data are still limited. You can find some tools for dealing with dag-pb (the underlying codec) in the ipld/go-codec-dagpb repo, and there are also some tools retrofitting some of unixfsv1's other features to be perceivable using an ADL in the ipfs/go-unixfsnode repo... however, a "some assembly required" advisory may still be in effect; check the readmes in those repos for details on what they support.

Change Policy

The go-ipld-prime library is already usable. It is also still in development, and we may still change things.

A changelog can be found at CHANGELOG.md.

Using a commit hash to pin versions precisely when depending on this library is advisable (as it is with any other).

We may sometimes tag releases, but it's just as acceptable to track commits on master without the indirection.

The following are all norms you can expect of changes to this codebase:

  • The master branch will not be force-pushed.
    • (exceptional circumstances may exist, but such exceptions will only be considered valid for about as long after push as the "$N-second-rule" about dropped food).
    • Therefore, commit hashes on master are gold to link against.
  • All other branches will be force-pushed.
    • Therefore, commit hashes not reachable from the master branch are inadvisable to link against.
  • If it's on master, it's understood to be good, in as much as we can tell.
  • Development proceeds -- both starting from and ending on -- the master branch.
    • There are no other long-running supported-but-not-master branches.
  • The existence of tags at any particular commit does not indicate that we will consider starting a long running and supported diverged branch from that point, nor start doing backports, etc.
  • All changes are presumed breaking until proven otherwise; and we don't have the time and attention budget at this point for doing the "proven otherwise".
    • All consumers updating their libraries should run their own compiler, linking, and test suites before assuming the update applies cleanly -- as is good practice regardless.
    • Any idea of semver indicating more or less breakage should be treated as a street vendor selling potions of levitation -- it's likely best disregarded.

None of this is to say we'll go breaking things willy-nilly for fun; but it is to say:

  • Staying close to master is always better than not staying close to master;
  • and trust your compiler and your tests rather than tea-leaf patterns in a tag string.

Version Names

When a tag is made, version number steps in go-ipld-prime advance as follows:

  1. the number bumps when the lead maintainer says it does.
  2. even numbers should be easy upgrades; odd numbers may change things.
  3. the version will start with v0. until further notice.

This is WarpVer.

These version numbers are provided as hints about what to expect, but ultimately, you should always invoke your compiler and your tests to tell you about compatibility.

Updating

Read the CHANGELOG.

Really, read it. We put exact migration instructions in there, as much as possible. Even outright scripts, when feasible.

An even-number release tag is usually made very shortly before an odd number tag, so if you're cautious about absorbing changes, you should update to the even number first, run all your tests, and then upgrade to the odd number. Usually the step to the even number should go off without a hitch, but if you do get problems from advancing to an even number tag, A) you can be pretty sure it's a bug, and B) you didn't have to edit a bunch of code before finding that out.

Documentation

Overview

go-ipld-prime is a series of Go interfaces for manipulating IPLD data.

See https://ipld.io/ for more information about the basics of "What is IPLD?".

Here in the godoc, the first couple of types to look at should be:

  • Node
  • NodeBuilder and NodeAssembler
  • NodePrototype.

These types provide a generic description of the data model.

A Node is a piece of IPLD data which can be inspected. A NodeAssembler is used to create Nodes. (A NodeBuilder is just like a NodeAssembler, but allocates the memory it fills in, whereas a NodeAssembler fills up memory it is given; using these carefully allows construction of very efficient code.)

Different NodePrototypes can be used to describe Nodes which follow certain logical rules (e.g., we use these as part of implementing Schemas), and can also be used so that programs can use different memory layouts for different data (which can be useful for constructing efficient programs when data has known shape for which we can use specific or compacted memory layouts).

If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next types you should look at are:

  • LinkSystem
  • ... and its fields.

The most typical use of LinkSystem is to use the linking/cid package to get a LinkSystem that works with CIDs:

lsys := cidlink.DefaultLinkSystem()

... and then assign the StorageWriteOpener and StorageReadOpener fields in order to control where data is stored to and read from. Methods on the LinkSystem then provide the functions typically used to get data in and out of Nodes so you can work with it.
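
For illustration, here is a rough sketch of wiring both openers to a toy in-memory map, continuing with the lsys from above (the map is a stand-in for real storage; imports of bytes, io, and the root ipld package for the type aliases are assumed):

// A toy in-memory "block store", keyed by the link's string form.
store := map[string][]byte{}

lsys.StorageWriteOpener = func(_ ipld.LinkContext) (io.Writer, ipld.BlockWriteCommitter, error) {
	buf := &bytes.Buffer{}
	return buf, func(lnk ipld.Link) error {
		// The committer runs after the write completes, once the hash (and thus the link) is known.
		store[lnk.String()] = buf.Bytes()
		return nil
	}, nil
}
lsys.StorageReadOpener = func(_ ipld.LinkContext, lnk ipld.Link) (io.Reader, error) {
	return bytes.NewReader(store[lnk.String()]), nil
}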

This root package gathers some of the most important ease-of-use functions all in one place, but mostly aliases out to features originally found in other, more specific sub-packages. (If you're interested in keeping your binary sizes small, and don't use some of the features of this library, you'll probably want to look into using the relevant sub-packages directly.)

Particularly interesting subpackages include:

  • datamodel -- the most essential interfaces for describing data live here, describing Node, NodePrototype, NodeBuilder, Link, and Path.
  • node/* -- various Node + NodeBuilder implementations.
  • node/basicnode -- the first Node implementation you should try.
  • codec/* -- functions for serializing and deserializing Nodes.
  • linking -- the LinkSystem, which is a facade to all data loading and storing and hashing.
  • linking/* -- ways to bind concrete Link implementations (namely, the linking/cid package, imported as cidlink, which connects the go-cid library to our datamodel.Link interface).
  • traversal -- functions for walking Node graphs (including automatic link loading) and visiting them programmatically.
  • traversal/selector -- functions for working with IPLD Selectors, which are a language-agnostic declarative format for describing graph walks (a small example follows this list).
  • fluent/* -- various options for making datamodel Node and NodeBuilder easier to work with.
  • schema -- interfaces for working with IPLD Schemas, which can bring constraints and validation systems to otherwise schemaless and unstructured IPLD data.
  • adl/* -- examples of creating and using Advanced Data Layouts (in short, custom Node implementations) to do complex data structures transparently within the IPLD Data Model.
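
As a small taste of traversal and traversal/selector together, the sketch below decodes a tiny map and walks all of it. It assumes the premade CommonSelector_MatchAllRecursively value in traversal/selector/parse; error handling is abbreviated:

package main

import (
	"fmt"
	"strings"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/node/basicnode"
	"github.com/ipld/go-ipld-prime/traversal"
	"github.com/ipld/go-ipld-prime/traversal/selector"
	selectorparse "github.com/ipld/go-ipld-prime/traversal/selector/parse"
)

func main() {
	// Decode a small map to walk over.
	nb := basicnode.Prototype.Any.NewBuilder()
	if err := dagjson.Decode(nb, strings.NewReader(`{"a":{"b":true}}`)); err != nil {
		panic(err)
	}
	n := nb.Build()

	// Compile a premade "match everything, recursively" selector.
	sel, err := selector.ParseSelector(selectorparse.CommonSelector_MatchAllRecursively)
	if err != nil {
		panic(err)
	}

	// Visit every matched node; Progress carries the path walked so far.
	traversal.WalkMatching(n, sel, func(prog traversal.Progress, n datamodel.Node) error {
		fmt.Printf("%q -> %s\n", prog.Path.String(), n.Kind())
		return nil
	})
}
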
Example (CreateDataAndMarshal)

Example_createDataAndMarshal shows how you can feed data into a NodeBuilder, and also how to then hand that to an Encoder.

Often you'll encode implicitly through a LinkSystem.Store call instead, but you can do it directly, too.

package main

import (
	"os"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	np := basicnode.Prototype.Any // Pick a prototype: this is how we decide what implementation will store the in-memory data.
	nb := np.NewBuilder()         // Create a builder.
	ma, _ := nb.BeginMap(2)       // Begin assembling a map.
	ma.AssembleKey().AssignString("hey")
	ma.AssembleValue().AssignString("it works!")
	ma.AssembleKey().AssignString("yes")
	ma.AssembleValue().AssignBool(true)
	ma.Finish()     // Call 'Finish' on the map assembly to let it know no more data is coming.
	n := nb.Build() // Call 'Build' to get the resulting Node.  (It's immutable!)

	dagjson.Encode(n, os.Stdout)

}
Output:

{"hey":"it works!","yes":true}

Example (UnmarshalData)

Example_unmarshalData shows how you can use a Decoder and a NodeBuilder (or NodePrototype) together to do unmarshalling.

Often you'll do this implicitly through a LinkSystem.Load call instead, but you can do it directly, too.

package main

import (
	"fmt"
	"strings"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	serial := strings.NewReader(`{"hey":"it works!","yes": true}`)

	np := basicnode.Prototype.Any // Pick a style for the in-memory data.
	nb := np.NewBuilder()         // Create a builder.
	dagjson.Decode(nb, serial)    // Hand the builder to decoding -- decoding will fill it in!
	n := nb.Build()               // Call 'Build' to get the resulting Node.  (It's immutable!)

	fmt.Printf("the data decoded was a %s kind\n", n.Kind())
	fmt.Printf("the length of the node is %d\n", n.Length())

}
Output:

the data decoded was a map kind
the length of the node is 2

Constants

const (
	Kind_Invalid = datamodel.Kind_Invalid
	Kind_Map     = datamodel.Kind_Map
	Kind_List    = datamodel.Kind_List
	Kind_Null    = datamodel.Kind_Null
	Kind_Bool    = datamodel.Kind_Bool
	Kind_Int     = datamodel.Kind_Int
	Kind_Float   = datamodel.Kind_Float
	Kind_String  = datamodel.Kind_String
	Kind_Bytes   = datamodel.Kind_Bytes
	Kind_Link    = datamodel.Kind_Link
)

Variables

var (
	Null   = datamodel.Null
	Absent = datamodel.Absent
)
var (
	KindSet_Recursive  = datamodel.KindSet_Recursive
	KindSet_Scalar     = datamodel.KindSet_Scalar
	KindSet_JustMap    = datamodel.KindSet_JustMap
	KindSet_JustList   = datamodel.KindSet_JustList
	KindSet_JustNull   = datamodel.KindSet_JustNull
	KindSet_JustBool   = datamodel.KindSet_JustBool
	KindSet_JustInt    = datamodel.KindSet_JustInt
	KindSet_JustFloat  = datamodel.KindSet_JustFloat
	KindSet_JustString = datamodel.KindSet_JustString
	KindSet_JustBytes  = datamodel.KindSet_JustBytes
	KindSet_JustLink   = datamodel.KindSet_JustLink
)

Future: These aliases for the `KindSet_*` values may be dropped someday. I don't think they're very important to have cluttering up namespace here. They're included for a brief transitional period, largely for the sake of codegen things which have referred to them, but may disappear in the future.

Functions

func DeepEqual added in v0.10.0

func DeepEqual(x, y Node) bool

DeepEqual reports whether x and y are "deeply equal" as IPLD nodes. This is similar to reflect.DeepEqual, but based around the Node interface.

This is exactly equivalent to the datamodel.DeepEqual function.
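
For example (a sketch; imports and error handling elided):

// Nodes compare by kind and contents, not by implementation type.
a, _ := ipld.Decode([]byte(`{"x":1}`), dagjson.Decode)
b, _ := ipld.Decode([]byte(`{"x":1}`), dagjson.Decode)
fmt.Println(ipld.DeepEqual(a, b)) // true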

func Encode added in v0.12.1

func Encode(n Node, encFn Encoder) ([]byte, error)

Encode serializes the given Node using the given Encoder function, returning the serialized data or an error.

The exact result data will depend on the node content and on the encoder function, but for example, using a json codec on a node with kind map will produce a result starting in `{`, etc.

Encode will automatically switch to encoding the representation form of the Node, if it discovers the Node matches the schema.TypedNode interface. This is probably what you want, in most cases; if this is not desired, you can use the underlying functions directly (just look at the source of this function for an example of how!).

If you would like this operation, but applied directly to a golang type instead of a Node, look to the Marshal function.
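
A typical call pairs it with one of the codec packages, since any codec's Encode function satisfies the Encoder type (sketch; n is some Node built or decoded elsewhere):

data, err := ipld.Encode(n, dagjson.Encode)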

func EncodeStreaming added in v0.12.1

func EncodeStreaming(wr io.Writer, n Node, encFn Encoder) error

EncodeStreaming is like Encode, but emits output to an io.Writer.

func Marshal added in v0.12.1

func Marshal(encFn Encoder, bind interface{}, typ schema.Type) ([]byte, error)

Marshal accepts a pointer to a Go value and an IPLD schema type, and encodes the representation form of that data (which may be configured with the schema!) using the given Encoder function.

Marshal uses the node/bindnode subsystem. See the documentation in that package for more details about its workings. Please note that this subsystem is relatively experimental at this time.

The schema.Type parameter is optional, and can be nil. If given, it controls what kind of schema.Type (and what kind of representation strategy!) to use when processing the data. If absent, a default schema.Type will be inferred based on the golang type (so, a struct in go will be inferred to have a schema with a similar struct, and the default representation strategy (e.g. map), etc). Note that not all features of IPLD Schemas can be inferred from golang types alone. For example, to use union types, the schema parameter will be required. Similarly, to use most kinds of non-default representation strategy, the schema parameter is needed in order to convey that intention.
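
For example, a sketch relying on an inferred schema (the Point type here is hypothetical; per the above, the schema.Type parameter may be nil):

type Point struct {
	X int64
	Y int64
}

data, err := ipld.Marshal(dagjson.Encode, &Point{X: 1, Y: 2}, nil)
// data should now hold the dag-json form of the map representation, e.g. {"X":1,"Y":2}.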

func MarshalStreaming added in v0.12.1

func MarshalStreaming(wr io.Writer, encFn Encoder, bind interface{}, typ schema.Type) error

MarshalStreaming is like Marshal, but emits output to an io.Writer.

Types

type ADL added in v0.9.0

type ADL = adl.ADL

type BlockReadOpener added in v0.9.0

type BlockReadOpener = linking.BlockReadOpener

type BlockWriteCommitter added in v0.9.0

type BlockWriteCommitter = linking.BlockWriteCommitter

type BlockWriteOpener added in v0.9.0

type BlockWriteOpener = linking.BlockWriteOpener

type Decoder added in v0.9.0

type Decoder = codec.Decoder

type Encoder added in v0.9.0

type Encoder = codec.Encoder

type ErrHashMismatch added in v0.9.0

type ErrHashMismatch = linking.ErrHashMismatch

Future: These error type aliases may be dropped someday (the same note applies to each of the Err* aliases below). Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.

type ErrInvalidKey added in v0.0.2

type ErrInvalidKey = schema.ErrInvalidKey


type ErrInvalidSegmentForList added in v0.4.0

type ErrInvalidSegmentForList = datamodel.ErrInvalidSegmentForList


type ErrIteratorOverread

type ErrIteratorOverread = datamodel.ErrIteratorOverread


type ErrMissingRequiredField added in v0.0.3

type ErrMissingRequiredField = schema.ErrMissingRequiredField


type ErrNotExists

type ErrNotExists = datamodel.ErrNotExists


type ErrRepeatedMapKey added in v0.0.3

type ErrRepeatedMapKey = datamodel.ErrRepeatedMapKey


type ErrWrongKind

type ErrWrongKind = datamodel.ErrWrongKind


type Kind added in v0.7.0

type Kind = datamodel.Kind

type Link

type Link = datamodel.Link

type LinkContext

type LinkContext = linking.LinkContext

type LinkPrototype added in v0.9.0

type LinkPrototype = datamodel.LinkPrototype

type LinkSystem added in v0.9.0

type LinkSystem = linking.LinkSystem

type ListAssembler added in v0.0.3

type ListAssembler = datamodel.ListAssembler

type ListIterator

type ListIterator = datamodel.ListIterator

type MapAssembler added in v0.0.3

type MapAssembler = datamodel.MapAssembler

type MapIterator

type MapIterator = datamodel.MapIterator

type Node

type Node = datamodel.Node

func Decode added in v0.12.1

func Decode(b []byte, decFn Decoder) (Node, error)

Decode parses the given bytes into a Node using the given Decoder function, returning a new Node or an error.

The new Node that is returned will be the implementation from the node/basicnode package. This implementation of Node will work for storing any kind of data, but note that because it is general, it is also not necessarily optimized. If you want more control over what kind of Node implementation (and thus memory layout) is used, or want to use features like IPLD Schemas (which can be engaged by using a schema.TypedPrototype), then look to the DecodeUsingPrototype family of functions, which accept more parameters in order to give you that kind of control.

If you would like this operation, but applied directly to a golang type instead of a Node, look to the Unmarshal function.
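
A minimal sketch of usage (error handling abbreviated):

n, err := ipld.Decode([]byte(`{"hey":"it works!"}`), dagjson.Decode)
if err != nil {
	panic(err)
}
fmt.Println(n.Kind()) // map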

func DecodeStreaming added in v0.12.1

func DecodeStreaming(r io.Reader, decFn Decoder) (Node, error)

DecodeStreaming is like Decode, but works on an io.Reader for input.

func DecodeStreamingUsingPrototype added in v0.12.1

func DecodeStreamingUsingPrototype(r io.Reader, decFn Decoder, np NodePrototype) (Node, error)

DecodeStreamingUsingPrototype is like DecodeUsingPrototype, but works on an io.Reader for input.

func DecodeUsingPrototype added in v0.12.1

func DecodeUsingPrototype(b []byte, decFn Decoder, np NodePrototype) (Node, error)

DecodeUsingPrototype is like Decode, but with a NodePrototype parameter, which gives you control over the Node type you'll receive, and thus control over the memory layout, and ability to use advanced features like schemas. (Decode is simply this function, but hardcoded to use basicnode.Prototype.Any.)

DecodeUsingPrototype internally creates a NodeBuilder, and throws it away when done. If building a high performance system, and creating data of the same shape repeatedly, you may wish to use NodeBuilder directly, so that you can control and avoid these allocations.

For symmetry with the behavior of Encode, DecodeUsingPrototype will automatically switch to using the representation form of the node for decoding if it discovers the NodePrototype matches the schema.TypedPrototype interface. This is probably what you want, in most cases; if this is not desired, you can use the underlying functions directly (just look at the source of this function for an example of how!).
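
The call shape is the same as Decode, plus the prototype (a sketch; basicnode.Prototype.Map is just one choice):

// An ordinary NodePrototype works here; a schema.TypedPrototype would engage schema behaviors.
n, err := ipld.DecodeUsingPrototype([]byte(`{"a":1}`), dagjson.Decode, basicnode.Prototype.Map)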

func Unmarshal added in v0.12.1

func Unmarshal(b []byte, decFn Decoder, bind interface{}, typ schema.Type) (Node, error)

Unmarshal accepts a pointer to a Go value and an IPLD schema type, and fills the value with data by decoding into it with the given Decoder function.

Unmarshal uses the node/bindnode subsystem. See the documentation in that package for more details about its workings. Please note that this subsystem is relatively experimental at this time.

The schema.Type parameter is optional, and can be nil. If given, it controls what kind of schema.Type (and what kind of representation strategy!) to use when processing the data. If absent, a default schema.Type will be inferred based on the golang type (so, a struct in go will be inferred to have a schema with a similar struct, and the default representation strategy (e.g. map), etc). Note that not all features of IPLD Schemas can be inferred from golang types alone. For example, to use union types, the schema parameter will be required. Similarly, to use most kinds of non-default representation strategy, the schema parameter is needed in order to convey that intention.

In contrast to some other unmarshal conventions common in golang, notice that we also return a Node value. This Node points to the same data as the value you handed in as the bind parameter, while making it available to read and iterate and handle as an ipld datamodel.Node. If you don't need that interface, or intend to re-bind it later, you can discard that value.

The 'bind' parameter may be nil. In that case, the type of the nil is still used to infer what kind of value to return, and a Node will still be returned based on that type. bindnode.Unwrap can be used on that Node and will still return something of the same golang type as the typed nil that was given as the 'bind' parameter.
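
A sketch relying on an inferred schema (the Person type here is hypothetical; imports elided):

type Person struct {
	Name string
}

var p Person
n, err := ipld.Unmarshal([]byte(`{"Name":"Ada"}`), dagjson.Decode, &p, nil)
// On success, p.Name == "Ada", and n is a datamodel.Node view over the same data.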

func UnmarshalStreaming added in v0.12.1

func UnmarshalStreaming(r io.Reader, decFn Decoder, bind interface{}, typ schema.Type) (Node, error)

UnmarshalStreaming is like Unmarshal, but works on an io.Reader for input.

type NodeAssembler added in v0.0.3

type NodeAssembler = datamodel.NodeAssembler

type NodeBuilder

type NodeBuilder = datamodel.NodeBuilder

type NodePrototype added in v0.5.0

type NodePrototype = datamodel.NodePrototype

type NodeReifier added in v0.10.0

type NodeReifier = linking.NodeReifier

type Path

type Path = datamodel.Path

func NewPath added in v0.0.2

func NewPath(segments []PathSegment) Path

NewPath is an alias for datamodel.NewPath.

Pathing is a concept defined in the data model layer of IPLD.

func ParsePath

func ParsePath(pth string) Path

ParsePath is an alias for datamodel.ParsePath.

Pathing is a concept defined in the data model layer of IPLD.

type PathSegment added in v0.0.2

type PathSegment = datamodel.PathSegment

func ParsePathSegment added in v0.0.2

func ParsePathSegment(s string) PathSegment

ParsePathSegment is an alias for datamodel.ParsePathSegment.

Pathing is a concept defined in the data model layer of IPLD.

func PathSegmentOfInt added in v0.0.2

func PathSegmentOfInt(i int64) PathSegment

PathSegmentOfInt is an alias for datamodel.PathSegmentOfInt.

Pathing is a concept defined in the data model layer of IPLD.

func PathSegmentOfString added in v0.0.2

func PathSegmentOfString(s string) PathSegment

PathSegmentOfString is an alias for datamodel.PathSegmentOfString.

Pathing is a concept defined in the data model layer of IPLD.

Directories

Path -- Synopsis

  • adl
  • adl/rot13adl -- rot13adl is a demo ADL -- its purpose is to show what an ADL and its public interface can look like.
  • codec/dagcbor -- The dagcbor package provides a DAG-CBOR codec implementation.
  • codec/dagjson2 -- Several groups of exported symbols are available at different levels of abstraction: you might just want the multicodec registration, and then never deal with this package directly again.
  • codec/jst -- "jst" -- JSON Table -- is a format that's parsable as JSON, while sprucing up the display to humans by using the non-significant whitespace cleverly.
  • codec/raw -- Package raw implements IPLD's raw codec, which simply writes and reads a Node which can be represented as bytes.
  • datamodel -- The datamodel package defines the most essential interfaces for describing IPLD data -- such as Node, NodePrototype, NodeBuilder, Link, and Path.
  • fluent -- The fluent package offers helper utilities for using NodeAssembler more tersely, by providing an interface that handles all errors for you and allows use of closures for recursive assembly, so that creating trees of data results in indentation for legibility.
  • fluent/qp -- qp helps to quickly build IPLD nodes.
  • linking/cid
  • must -- Package 'must' provides another alternative to the 'fluent' package, providing many helpful functions for wrapping methods with multiple returns into a single return (converting errors into panics).
  • node -- The 'node' package gathers various general purpose Node implementations; the first one you should jump to is 'node/basicnode'.
  • node/basic -- This is a transitional package: please move your references to `node/basicnode`.
  • node/bindnode -- Package bindnode provides a datamodel.Node implementation via Go reflection.
  • node/tests/corpus -- The corpus package exports some values useful for building tests and benchmarks.
  • schema/dmt
  • storage -- Storage contains some simple implementations for the ipld.BlockReadOpener and ipld.BlockWriteOpener interfaces, which are typically used by composition in a LinkSystem.
  • storage/bsadapter (Module)
  • storage/bsrvadapter (Module)
  • storage/dsadapter (Module)
  • traversal -- This package provides functional utilities for traversing and transforming IPLD nodes.
  • traversal/selector/parse -- The selectorparse package contains some helpful functions for parsing the serial form of Selectors.
