Package plugin

v0.11.0

Warning: this package is not in the latest version of its module.

Published: Nov 3, 2023 · License: MIT · Imports: 5 · Imported by: 0

README

go-ds-s3-plugin

Installation

On a brand-new instance:

  1. Copy the go-ds-s3-plugin binary to ~/.ipfs/plugins.
  2. Run ipfs init.
  3. Update the datastore configuration in ~/.ipfs/config as explained below. This does not happen automatically.
  4. Start Kubo (ipfs daemon). The plugin should be loaded automatically and the S3 backend should be used.
Configuration

The config file (~/.ipfs/config) should include the following; it must be edited manually after initializing Kubo. The first mount stores blocks in S3, while the second is Kubo's default leveldb datastore for everything else:

{
  "Datastore": {
    ...

    "Spec": {
      "mounts": [
        {
          "child": {
            "type": "s3ds",
            "region": "us-east-1",
            "bucket": "$bucketname",
            "rootDirectory": "$bucketsubdirectory",
            "accessKey": "",
            "secretKey": ""
          },
          "mountpoint": "/blocks",
          "prefix": "s3.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    }
  }
}

If the access and secret key are blank, they will be loaded from the usual ~/.aws/ location. If you are on another S3-compatible provider, e.g. Linode, the child block should additionally set regionEndpoint; the rest of the Spec is unchanged:

"child": {
  "type": "s3ds",
  "region": "us-east-1",
  "bucket": "$bucketname",
  "rootDirectory": "$bucketsubdirectory",
  "regionEndpoint": "us-east-1.linodeobjects.com",
  "accessKey": "",
  "secretKey": ""
}

If you are configuring a brand-new IPFS instance without any data, you can overwrite the ~/.ipfs/datastore_spec file with:

{"mounts":[{"bucket":"$bucketname","mountpoint":"/blocks","region":"us-east-1","rootDirectory":"$bucketsubdirectory"},{"mountpoint":"/","path":"datastore","type":"levelds"}],"type":"mount"}

Otherwise, you will need to run a datastore migration.

Documentation

Constants

This section is empty.

Variables

var Plugins = []plugin.Plugin{
	&S3Plugin{},
}
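
Kubo discovers an external plugin through this exported Plugins slice when it loads the binary from ~/.ipfs/plugins. Below is a minimal sketch of inspecting the slice directly, assuming the import path github.com/ipfs/go-ds-s3/plugin (an assumption; adjust to the actual module path):

package main

import (
	"fmt"

	// Assumed import path for this package; adjust if it differs.
	s3plugin "github.com/ipfs/go-ds-s3/plugin"
)

func main() {
	// Every entry satisfies Kubo's plugin.Plugin interface,
	// which provides Name() and Version().
	for _, p := range s3plugin.Plugins {
		fmt.Printf("plugin %s, version %s\n", p.Name(), p.Version())
	}
}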

Functions

This section is empty.

Types

type S3Config

type S3Config struct {
	// contains filtered or unexported fields
}

func (*S3Config) Create

func (s3c *S3Config) Create(path string) (repo.Datastore, error)

func (*S3Config) DiskSpec

func (s3c *S3Config) DiskSpec() fsrepo.DiskSpec

type S3Plugin

type S3Plugin struct{}

func (S3Plugin) DatastoreConfigParser

func (s3p S3Plugin) DatastoreConfigParser() fsrepo.ConfigFromMap

func (S3Plugin) DatastoreTypeName

func (s3p S3Plugin) DatastoreTypeName() string

func (S3Plugin) Init

func (s3p S3Plugin) Init(env *plugin.Environment) error

func (S3Plugin) Name

func (s3p S3Plugin) Name() string

func (S3Plugin) Version

func (s3p S3Plugin) Version() string
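
Together, these methods satisfy Kubo's datastore-plugin interface: DatastoreTypeName reports the datastore type matched against the "type" field of the config's child block (s3ds), and DatastoreConfigParser returns the fsrepo.ConfigFromMap that turns such a block into an S3Config. Below is a hedged sketch of that flow outside the daemon; the bucket, directory, and repo path are placeholders, the import path is an assumption, and the map keys mirror the Configuration section above:

package main

import (
	"fmt"
	"log"

	s3plugin "github.com/ipfs/go-ds-s3/plugin" // assumed import path
)

func main() {
	p := s3plugin.S3Plugin{}
	fmt.Println("datastore type:", p.DatastoreTypeName()) // the type matched in the Spec

	// Parse a map shaped like the "child" block from the Configuration section.
	parser := p.DatastoreConfigParser()
	cfg, err := parser(map[string]interface{}{
		"type":          "s3ds",
		"region":        "us-east-1",
		"bucket":        "mybucket", // placeholder bucket name
		"rootDirectory": "mysubdir", // placeholder subdirectory
		"accessKey":     "",         // blank: fall back to the usual ~/.aws/ credentials
		"secretKey":     "",
	})
	if err != nil {
		log.Fatal(err)
	}

	// DiskSpec is the identifying subset of the config that Kubo
	// persists to the repo's datastore_spec file.
	fmt.Println("disk spec:", cfg.DiskSpec())

	// Create opens the datastore for the repo at the given path.
	ds, err := cfg.Create("/path/to/.ipfs") // placeholder repo path
	if err != nil {
		log.Fatal(err)
	}
	defer ds.Close()
}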
