Documentation ¶
Overview ¶
go-ipld-prime is a set of Go interfaces for manipulating IPLD data.
See https://ipld.io/ for more information about the basics of "What is IPLD?".
Here in the godoc, the first couple of types to look at should be:
- Node
- NodeBuilder and NodeAssembler
- NodePrototype
These types provide a generic description of the data model.
A Node is a piece of IPLD data which can be inspected. A NodeAssembler is used to create Nodes. (A NodeBuilder is just like a NodeAssembler, but allocates memory, whereas a NodeAssembler fills up memory that has already been allocated; using these carefully allows construction of very efficient code.)
Different NodePrototypes can be used to describe Nodes which follow certain logical rules (e.g., we use these as part of implementing Schemas), and can also be used so that programs can use different memory layouts for different data (which can be useful for constructing efficient programs when data has known shape for which we can use specific or compacted memory layouts).
If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next types you should look at are:
- LinkSystem
- ... and its fields.
The most typical use of LinkSystem is to use the linking/cid package to get a LinkSystem that works with CIDs:
lsys := cidlink.DefaultLinkSystem()
... and then assign the StorageWriteOpener and StorageReadOpener fields in order to control where data is stored to and read from. Methods on the LinkSystem then provide the functions typically used to get data in and out of Nodes so you can work with it.
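As a rough sketch of what that wiring can look like, the two opener fields might be assigned like this. The in-memory map used as a block store here is an illustrative assumption, not part of the library; real programs typically plug in a persistent storage implementation.

```go
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/linking"
	cidlink "github.com/ipld/go-ipld-prime/linking/cid"
)

func main() {
	lsys := cidlink.DefaultLinkSystem()

	// A toy in-memory block store, keyed by the link's string form.
	store := map[string][]byte{}

	// StorageWriteOpener returns a writer to buffer the serialized data, plus a
	// committer that files the buffer under its link once the hash is known.
	lsys.StorageWriteOpener = func(lctx linking.LinkContext) (io.Writer, linking.BlockWriteCommitter, error) {
		buf := &bytes.Buffer{}
		return buf, func(lnk datamodel.Link) error {
			store[lnk.String()] = buf.Bytes()
			return nil
		}, nil
	}

	// StorageReadOpener looks a block back up by its link.
	lsys.StorageReadOpener = func(lctx linking.LinkContext, lnk datamodel.Link) (io.Reader, error) {
		data, ok := store[lnk.String()]
		if !ok {
			return nil, fmt.Errorf("block not found: %s", lnk)
		}
		return bytes.NewReader(data), nil
	}

	_ = lsys // now ready for LinkSystem.Store and LinkSystem.Load calls.
}
```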
This root package gathers some of the most important ease-of-use functions all in one place, but is mostly aliases out to features originally found in other more specific sub-packages. (If you're interested in keeping your binary sizes small, and don't use some of the features of this library, you'll probably want to look into using the relevant sub-packages directly.)
Particularly interesting subpackages include:
- datamodel -- the most essential interfaces for describing data live here, describing Node, NodePrototype, NodeBuilder, Link, and Path.
- node/* -- various Node + NodeBuilder implementations.
- node/basicnode -- the first Node implementation you should try.
- codec/* -- functions for serializing and deserializing Nodes.
- linking -- the LinkSystem, which is a facade to all data loading and storing and hashing.
- linking/* -- ways to bind concrete Link implementations (namely, the linking/cidlink package, which connects the go-cid library to our datamodel.Link interface).
- traversal -- functions for walking Node graphs (including automatic link loading) and visiting them programmatically.
- traversal/selector -- functions for working with IPLD Selectors, which are a language-agnostic declarative format for describing graph walks.
- fluent/* -- various options for making datamodel Node and NodeBuilder easier to work with.
- schema -- interfaces for working with IPLD Schemas, which can bring constraints and validation systems to otherwise schemaless and unstructured IPLD data.
- adl/* -- examples of creating and using Advanced Data Layouts (in short, custom Node implementations) to do complex data structures transparently within the IPLD Data Model.
Example (CreateDataAndMarshal) ¶
Example_createDataAndMarshal shows how you can feed data into a NodeBuilder, and also how to then hand that to an Encoder.
Often you'll do this encoding implicitly through a LinkSystem.Store call instead, but you can do it directly, too.
package main

import (
	"os"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	np := basicnode.Prototype.Any // Pick a prototype: this is how we decide what implementation will store the in-memory data.
	nb := np.NewBuilder()         // Create a builder.
	ma, _ := nb.BeginMap(2)       // Begin assembling a map.
	ma.AssembleKey().AssignString("hey")
	ma.AssembleValue().AssignString("it works!")
	ma.AssembleKey().AssignString("yes")
	ma.AssembleValue().AssignBool(true)
	ma.Finish()     // Call 'Finish' on the map assembly to let it know no more data is coming.
	n := nb.Build() // Call 'Build' to get the resulting Node.  (It's immutable!)

	dagjson.Encode(n, os.Stdout)
}
Output:

{"hey":"it works!","yes":true}
Example (UnmarshalData) ¶
Example_unmarshalData shows how you can use a Decoder and a NodeBuilder (or NodePrototype) together to do unmarshalling.
Often you'll do this implicitly through a LinkSystem.Load call instead, but you can do it directly, too.
package main

import (
	"fmt"
	"strings"

	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	serial := strings.NewReader(`{"hey":"it works!","yes": true}`)

	np := basicnode.Prototype.Any // Pick a style for the in-memory data.
	nb := np.NewBuilder()         // Create a builder.
	dagjson.Decode(nb, serial)    // Hand the builder to decoding -- decoding will fill it in!
	n := nb.Build()               // Call 'Build' to get the resulting Node.  (It's immutable!)

	fmt.Printf("the data decoded was a %s kind\n", n.Kind())
	fmt.Printf("the length of the node is %d\n", n.Length())
}
Output:

the data decoded was a map kind
the length of the node is 2
Index ¶
- Constants
- Variables
- func DeepEqual(x, y Node) bool
- func Encode(n Node, encFn Encoder) ([]byte, error)
- func EncodeStreaming(wr io.Writer, n Node, encFn Encoder) error
- func Marshal(encFn Encoder, bind interface{}, typ schema.Type) ([]byte, error)
- func MarshalStreaming(wr io.Writer, encFn Encoder, bind interface{}, typ schema.Type) error
- type ADL
- type BlockReadOpener
- type BlockWriteCommitter
- type BlockWriteOpener
- type Decoder
- type Encoder
- type ErrHashMismatch
- type ErrInvalidKey
- type ErrInvalidSegmentForList
- type ErrIteratorOverread
- type ErrMissingRequiredField
- type ErrNotExists
- type ErrRepeatedMapKey
- type ErrWrongKind
- type Kind
- type Link
- type LinkContext
- type LinkPrototype
- type LinkSystem
- type ListAssembler
- type ListIterator
- type MapAssembler
- type MapIterator
- type Node
- func Decode(b []byte, decFn Decoder) (Node, error)
- func DecodeStreaming(r io.Reader, decFn Decoder) (Node, error)
- func DecodeStreamingUsingPrototype(r io.Reader, decFn Decoder, np NodePrototype) (Node, error)
- func DecodeUsingPrototype(b []byte, decFn Decoder, np NodePrototype) (Node, error)
- func Unmarshal(b []byte, decFn Decoder, bind interface{}, typ schema.Type) (Node, error)
- func UnmarshalStreaming(r io.Reader, decFn Decoder, bind interface{}, typ schema.Type) (Node, error)
- type NodeAssembler
- type NodeBuilder
- type NodePrototype
- type NodeReifier
- type Path
- type PathSegment
Examples ¶
Constants ¶
const (
	Kind_Invalid = datamodel.Kind_Invalid
	Kind_Map     = datamodel.Kind_Map
	Kind_List    = datamodel.Kind_List
	Kind_Null    = datamodel.Kind_Null
	Kind_Bool    = datamodel.Kind_Bool
	Kind_Int     = datamodel.Kind_Int
	Kind_Float   = datamodel.Kind_Float
	Kind_String  = datamodel.Kind_String
	Kind_Bytes   = datamodel.Kind_Bytes
	Kind_Link    = datamodel.Kind_Link
)
Variables ¶
var (
	Null   = datamodel.Null
	Absent = datamodel.Absent
)
var (
	KindSet_Recursive  = datamodel.KindSet_Recursive
	KindSet_Scalar     = datamodel.KindSet_Scalar
	KindSet_JustMap    = datamodel.KindSet_JustMap
	KindSet_JustList   = datamodel.KindSet_JustList
	KindSet_JustNull   = datamodel.KindSet_JustNull
	KindSet_JustBool   = datamodel.KindSet_JustBool
	KindSet_JustInt    = datamodel.KindSet_JustInt
	KindSet_JustFloat  = datamodel.KindSet_JustFloat
	KindSet_JustString = datamodel.KindSet_JustString
	KindSet_JustBytes  = datamodel.KindSet_JustBytes
	KindSet_JustLink   = datamodel.KindSet_JustLink
)
Future: These aliases for the `KindSet_*` values may be dropped someday. I don't think they're very important to have cluttering up the namespace here. They're included for a brief transitional period, largely for the sake of codegen tools which have referred to them, but may disappear in the future.
Functions ¶
func DeepEqual ¶ added in v0.10.0
DeepEqual reports whether x and y are "deeply equal" as IPLD nodes. This is similar to reflect.DeepEqual, but based around the Node interface.
This is exactly equivalent to the datamodel.DeepEqual function.
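As a brief sketch of its use: two nodes decoded from the same serial data compare as deeply equal, regardless of which Node implementation ends up backing them.

```go
package main

import (
	"fmt"

	ipld "github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/codec/dagjson"
)

func main() {
	// Decode the same serial form twice; the resulting Nodes are distinct
	// values in memory, but structurally identical.
	a, _ := ipld.Decode([]byte(`{"x":1}`), dagjson.Decode)
	b, _ := ipld.Decode([]byte(`{"x":1}`), dagjson.Decode)
	fmt.Println(ipld.DeepEqual(a, b)) // reports whether a and b are deeply equal
}
```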
func Encode ¶ added in v0.12.1
Encode serializes the given Node using the given Encoder function, returning the serialized data or an error.
The exact result data will depend on the node content and on the encoder function, but for example, using a json codec on a node with kind map will produce a result starting in `{`, etc.
Encode will automatically switch to encoding the representation form of the Node, if it discovers the Node matches the schema.TypedNode interface. This is probably what you want, in most cases; if this is not desired, you can use the underlying functions directly (just look at the source of this function for an example of how!).
If you would like this operation, but applied directly to a golang type instead of a Node, look to the Marshal function.
func EncodeStreaming ¶ added in v0.12.1
EncodeStreaming is like Encode, but emits output to an io.Writer.
func Marshal ¶ added in v0.12.1
Marshal accepts a pointer to a Go value and an IPLD schema type, and encodes the representation form of that data (which may be configured with the schema!) using the given Encoder function.
Marshal uses the node/bindnode subsystem. See the documentation in that package for more details about its workings. Please note that this subsystem is relatively experimental at this time.
The schema.Type parameter is optional, and can be nil. If given, it controls what kind of schema.Type (and what kind of representation strategy!) to use when processing the data. If absent, a default schema.Type will be inferred based on the golang type (so, a struct in go will be inferred to have a schema with a similar struct, and the default representation strategy (e.g. map), etc). Note that not all features of IPLD Schemas can be inferred from golang types alone. For example, to use union types, the schema parameter will be required. Similarly, to use most kinds of non-default representation strategy, the schema parameter is needed in order to convey that intention.
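For instance, a minimal use of Marshal with an inferred schema might look roughly like this. The Person struct is a hypothetical example type, not part of the library; passing nil as the schema.Type lets bindnode infer a struct type with the default (map) representation.

```go
package main

import (
	"fmt"

	ipld "github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/codec/dagjson"
)

// Person is a hypothetical example type; bindnode will infer a schema
// struct type for it with the default map representation strategy.
type Person struct {
	Name string
	Age  int64
}

func main() {
	data, err := ipld.Marshal(dagjson.Encode, &Person{Name: "Ann", Age: 30}, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // dag-json: a map with "Name" and "Age" keys.
}
```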
Types ¶
type BlockReadOpener ¶ added in v0.9.0
type BlockReadOpener = linking.BlockReadOpener
type BlockWriteCommitter ¶ added in v0.9.0
type BlockWriteCommitter = linking.BlockWriteCommitter
type BlockWriteOpener ¶ added in v0.9.0
type BlockWriteOpener = linking.BlockWriteOpener
type ErrHashMismatch ¶ added in v0.9.0
type ErrHashMismatch = linking.ErrHashMismatch
Future: These error type aliases may be dropped someday. Being able to see them as having more than one package name is not helpful to clarity. They are left here for now for a brief transitional period, because it was relatively easy to do so.
type ErrInvalidKey ¶ added in v0.0.2
type ErrInvalidKey = schema.ErrInvalidKey
type ErrInvalidSegmentForList ¶ added in v0.4.0
type ErrInvalidSegmentForList = datamodel.ErrInvalidSegmentForList
type ErrIteratorOverread ¶
type ErrIteratorOverread = datamodel.ErrIteratorOverread
type ErrMissingRequiredField ¶ added in v0.0.3
type ErrMissingRequiredField = schema.ErrMissingRequiredField
type ErrNotExists ¶
type ErrNotExists = datamodel.ErrNotExists
type ErrRepeatedMapKey ¶ added in v0.0.3
type ErrRepeatedMapKey = datamodel.ErrRepeatedMapKey
type ErrWrongKind ¶
type ErrWrongKind = datamodel.ErrWrongKind
type LinkContext ¶
type LinkContext = linking.LinkContext
type LinkPrototype ¶ added in v0.9.0
type LinkPrototype = datamodel.LinkPrototype
type LinkSystem ¶ added in v0.9.0
type LinkSystem = linking.LinkSystem
type ListAssembler ¶ added in v0.0.3
type ListAssembler = datamodel.ListAssembler
type ListIterator ¶
type ListIterator = datamodel.ListIterator
type MapAssembler ¶ added in v0.0.3
type MapAssembler = datamodel.MapAssembler
type MapIterator ¶
type MapIterator = datamodel.MapIterator
type Node ¶
type Node = datamodel.Node
func Decode ¶ added in v0.12.1
Decode parses the given bytes into a Node using the given Decoder function, returning a new Node or an error.
The new Node that is returned will be the implementation from the node/basicnode package. This implementation of Node will work for storing any kind of data, but note that because it is general, it is also not necessarily optimized. If you want more control over what kind of Node implementation (and thus memory layout) is used, or want to use features like IPLD Schemas (which can be engaged by using a schema.TypedPrototype), then look to the DecodeUsingPrototype family of functions, which accept more parameters in order to give you that kind of control.
If you would like this operation, but applied directly to a golang type instead of a Node, look to the Unmarshal function.
func DecodeStreaming ¶ added in v0.12.1
DecodeStreaming is like Decode, but works on an io.Reader for input.
func DecodeStreamingUsingPrototype ¶ added in v0.12.1
DecodeStreamingUsingPrototype is like DecodeUsingPrototype, but works on an io.Reader for input.
func DecodeUsingPrototype ¶ added in v0.12.1
func DecodeUsingPrototype(b []byte, decFn Decoder, np NodePrototype) (Node, error)
DecodeUsingPrototype is like Decode, but with a NodePrototype parameter, which gives you control over the Node type you'll receive, and thus control over the memory layout, and ability to use advanced features like schemas. (Decode is simply this function, but hardcoded to use basicnode.Prototype.Any.)
DecodeUsingPrototype internally creates a NodeBuilder, and throws it away when done. If building a high performance system, and creating data of the same shape repeatedly, you may wish to use NodeBuilder directly, so that you can control and avoid these allocations.
For symmetry with the behavior of Encode, DecodeUsingPrototype will automatically switch to using the representation form of the node for decoding if it discovers the NodePrototype matches the schema.TypedPrototype interface. This is probably what you want, in most cases; if this is not desired, you can use the underlying functions directly (just look at the source of this function for an example of how!).
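As a small sketch, decoding into a specific prototype might look like this. Here basicnode.Prototype.Map is chosen to tell decoding which in-memory layout to build; a schema.TypedPrototype could be supplied instead to engage schema validation.

```go
package main

import (
	"fmt"

	ipld "github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/codec/dagjson"
	"github.com/ipld/go-ipld-prime/node/basicnode"
)

func main() {
	// Prototype.Map commits to a map-shaped in-memory layout up front,
	// rather than the fully general Prototype.Any.
	n, err := ipld.DecodeUsingPrototype([]byte(`{"hey":"it works!"}`), dagjson.Decode, basicnode.Prototype.Map)
	if err != nil {
		panic(err)
	}
	fmt.Printf("decoded a %s kind\n", n.Kind())
}
```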
func Unmarshal ¶ added in v0.12.1
Unmarshal accepts a pointer to a Go value and an IPLD schema type, and fills the value with data by decoding into it with the given Decoder function.
Unmarshal uses the node/bindnode subsystem. See the documentation in that package for more details about its workings. Please note that this subsystem is relatively experimental at this time.
The schema.Type parameter is optional, and can be nil. If given, it controls what kind of schema.Type (and what kind of representation strategy!) to use when processing the data. If absent, a default schema.Type will be inferred based on the golang type (so, a struct in go will be inferred to have a schema with a similar struct, and the default representation strategy (e.g. map), etc). Note that not all features of IPLD Schemas can be inferred from golang types alone. For example, to use union types, the schema parameter will be required. Similarly, to use most kinds of non-default representation strategy, the schema parameter is needed in order to convey that intention.
In contrast to some other unmarshal conventions common in golang, notice that we also return a Node value. This Node points to the same data as the value you handed in as the bind parameter, while making it available to read and iterate and handle as an ipld datamodel.Node. If you don't need that interface, or intend to re-bind it later, you can discard that value.
The 'bind' parameter may be nil. In that case, the type of the nil is still used to infer what kind of value to return, and a Node will still be returned based on that type. bindnode.Unwrap can be used on that Node and will still return something of the same golang type as the typed nil that was given as the 'bind' parameter.
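A minimal sketch of Unmarshal, filling a Go value and getting the Node view of the same data back. The Person struct is a hypothetical example type for the 'bind' parameter, not part of the library.

```go
package main

import (
	"fmt"

	ipld "github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/codec/dagjson"
)

// Person is a hypothetical example type for the 'bind' parameter; with a
// nil schema.Type, bindnode infers a struct schema with map representation.
type Person struct {
	Name string
	Age  int64
}

func main() {
	var p Person
	// n views the same data as &p, but through the datamodel.Node interface.
	n, err := ipld.Unmarshal([]byte(`{"Name":"Ann","Age":30}`), dagjson.Decode, &p, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Name, p.Age) // the Go value is filled in...
	fmt.Println(n.Length())    // ...and the Node view is usable too.
}
```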
type NodeAssembler ¶ added in v0.0.3
type NodeAssembler = datamodel.NodeAssembler
type NodeBuilder ¶
type NodeBuilder = datamodel.NodeBuilder
type NodePrototype ¶ added in v0.5.0
type NodePrototype = datamodel.NodePrototype
type NodeReifier ¶ added in v0.10.0
type NodeReifier = linking.NodeReifier
type Path ¶
func NewPath ¶ added in v0.0.2
func NewPath(segments []PathSegment) Path
NewPath is an alias for datamodel.NewPath.
Pathing is a concept defined in the data model layer of IPLD.
type PathSegment ¶ added in v0.0.2
type PathSegment = datamodel.PathSegment
func ParsePathSegment ¶ added in v0.0.2
func ParsePathSegment(s string) PathSegment
ParsePathSegment is an alias for datamodel.ParsePathSegment.
Pathing is a concept defined in the data model layer of IPLD.
func PathSegmentOfInt ¶ added in v0.0.2
func PathSegmentOfInt(i int64) PathSegment
PathSegmentOfInt is an alias for datamodel.PathSegmentOfInt.
Pathing is a concept defined in the data model layer of IPLD.
func PathSegmentOfString ¶ added in v0.0.2
func PathSegmentOfString(s string) PathSegment
PathSegmentOfString is an alias for datamodel.PathSegmentOfString.
Pathing is a concept defined in the data model layer of IPLD.
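Putting these together, constructing a Path segment by segment might look like the following sketch (the path "friends/0/name" is an arbitrary example):

```go
package main

import (
	"fmt"

	ipld "github.com/ipld/go-ipld-prime"
)

func main() {
	// Build the path "friends/0/name" from individual segments:
	// string segments for map keys, an int segment for a list index.
	p := ipld.NewPath([]ipld.PathSegment{
		ipld.PathSegmentOfString("friends"),
		ipld.PathSegmentOfInt(0),
		ipld.PathSegmentOfString("name"),
	})
	fmt.Println(p.Len())    // number of segments
	fmt.Println(p.String()) // the path in its string form
}
```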
Source Files ¶
Directories ¶
Path | Synopsis
---|---
rot13adl | rot13adl is a demo ADL -- its purpose is to show what an ADL and its public interface can look like.
dagcbor | The dagcbor package provides a DAG-CBOR codec implementation.
dagjson2 | Several groups of exported symbols are available at different levels of abstraction: you might just want the multicodec registration, and never deal with this package directly again.
jst | "jst" -- JSON Table -- is a format that's parsable as JSON, while sprucing up the display to humans by using the non-significant whitespace cleverly.
raw | Package raw implements IPLD's raw codec, which simply writes and reads a Node which can be represented as bytes.
datamodel | The datamodel package defines the most essential interfaces for describing IPLD data -- such as Node, NodePrototype, NodeBuilder, Link, and Path.
fluent | The fluent package offers helper utilities for using NodeAssembler more tersely, by providing an interface that handles all errors for you and allows use of closures for recursive assembly, so that creating trees of data results in indentation for legibility.
qp | qp helps to quickly build IPLD nodes.
must | Package 'must' provides another alternative to the 'fluent' package, providing many helpful functions for wrapping methods with multiple returns into a single return (converting errors into panics).
node | The 'node' package gathers various general purpose Node implementations; the first one you should jump to is 'node/basicnode'.
basic | This is a transitional package: please move your references to `node/basicnode`.
bindnode | Package bindnode provides a datamodel.Node implementation via Go reflection.
tests/corpus | The corpus package exports some values useful for building tests and benchmarks.
storage | Storage contains some simple implementations for the ipld.BlockReadOpener and ipld.BlockWriteOpener interfaces, which are typically used by composition in a LinkSystem.
bsadapter | (Module)
bsrvadapter | (Module)
dsadapter | (Module)
traversal | This package provides functional utilities for traversing and transforming IPLD nodes.
selector/parse | selectorparse package contains some helpful functions for parsing the serial form of Selectors.