teleport

package module
v1.3.3-0...-f31c912
Published: Jan 9, 2024 License: AGPL-3.0 Imports: 6 Imported by: 975

README

Teleport provides connectivity, authentication, access controls and audit for infrastructure.

Here is why you might use Teleport:

  • Set up SSO for all of your cloud infrastructure [1].
  • Protect access to cloud and on-prem services using mTLS endpoints and short-lived certificates.
  • Establish tunnels to access services behind NATs and firewalls.
  • Provide an audit log with session recording and replay for various protocols.
  • Unify Role-Based Access Control (RBAC) and enforce the principle of least privilege with access requests.

[1] The open source version supports only GitHub SSO.

Teleport works with SSH, Kubernetes, databases, RDP, and web services.


Table of Contents

  1. Introduction
  2. Installing and Running
  3. Docker
  4. Building Teleport
  5. Why Did We Build Teleport?
  6. More Information
  7. Support and Contributing
  8. Is Teleport Secure and Production Ready?
  9. Who Built Teleport?

Introduction

Teleport includes an identity-aware access proxy, a CA that issues short-lived certificates, a unified access control system and a tunneling system to access resources behind the firewall.

We have implemented Teleport as a single Go binary that integrates with multiple protocols and cloud services.

You can set up Teleport as a Linux daemon or a Kubernetes deployment.

Teleport focuses on best practices for infrastructure security:

  • No need to manage shared secrets such as SSH keys or Kubernetes tokens: it uses certificate-based auth with certificate expiration for all protocols.
  • Two-factor authentication (2FA) for everything.
  • Collaboratively troubleshoot issues through session sharing.
  • Single sign-on (SSO) for everything via GitHub Auth, OpenID Connect, or SAML with endpoints like Okta or Active Directory.
  • Infrastructure introspection: Use Teleport via the CLI or Web UI to view the status of every SSH node, database instance, Kubernetes cluster, or internal web app.

Teleport uses Go crypto. It is fully compatible with OpenSSH (sshd servers and ssh clients), Kubernetes clusters, and more.

Project links:

  • Teleport Website: the official website of the project.
  • Documentation: admin guide, user manual, and more.
  • Blog: where we publish Teleport news.
  • Forum: ask us a setup question, or post your tutorial, feedback, or ideas.
  • Slack: need help with your setup? Ping us in our Slack channel.
  • Cloud-hosted: we offer Teleport Enterprise with a Cloud-hosted option for teams that require easy and secure access to their computing environments.

Installing and Running

To set up a single-instance Teleport cluster, follow our getting started guide. You can then register your servers, Kubernetes clusters, and other infrastructure with your Teleport cluster.

You can also get started with Teleport Team, a managed Teleport deployment that makes it easier for small organizations to enable secure access to their infrastructure.

Sign up for a free trial of Teleport Team.

Follow our guide to register your first server with Teleport Team.

Docker

Deploy Teleport

If you wish to deploy Teleport inside a Docker container, see the installation guide.

For Local Testing and Development

Follow the instructions in the docker/README file.

To run a full test suite locally, see the test dependencies list.

Building Teleport

The teleport repository contains the Teleport daemon binary (written in Go) and a web UI written in JavaScript (a git submodule located in the webassets/ directory).

If you intend to build and deploy Teleport for use in production infrastructure, use a released tag. The default branch, master, is the current development branch for an upcoming major version. Get the latest release tags listed at https://goteleport.com/download/ and then use that tag in the git clone. For example, git clone https://github.com/gravitational/teleport.git -b v13.0.0 gets release v13.0.0.

Dockerized Build

It is often easiest to build with Docker, which ensures that all required tooling is available for the build. To execute a dockerized build, ensure that docker is installed and running, and execute:

make -C build.assets build-binaries

Local Build

Dependencies

Ensure you have installed correct versions of necessary dependencies:

  • Go version from go.mod
  • If you wish to build the Rust-powered features like Desktop Access, see the Rust and Cargo version in build.assets/Makefile (search for RUST_VERSION)
  • For tsh version > 10.x with FIDO support, you will need libfido and openssl 1.1 installed locally
  • To build the web UI, yarn (< 2.0.0) is required.
    • If you prefer not to install/use yarn, but have docker available, you can run make docker-ui instead.

For an example of Dev Environment setup on a Mac, see these instructions.

Perform a build

Important

  • The Go compiler is somewhat sensitive to the amount of memory: you will need at least 1GB of virtual memory to compile Teleport. A 512MB instance without swap will not work.
  • This will build the latest version of Teleport, regardless of whether it is stable. If you want to build the latest stable release, check out the corresponding tag and update submodules (for example, run git checkout v8.0.0 and git submodule update --recursive) before performing a build.

Get the source

git clone https://github.com/gravitational/teleport.git
cd teleport

To perform a build

make full

To build tsh with Apple TouchID support enabled:

Important

tsh binaries with Touch ID support are only functional using binaries signed with Teleport's Apple Developer ID and notarized by Apple. If you are a Teleport maintainer, ask the team for access.

make build/tsh TOUCHID=yes

To build tsh with libfido:

make build/tsh FIDO2=dynamic

  • On a Mac, with libfido and openssl 1.1 installed via Homebrew:

    export PKG_CONFIG_PATH="$(brew --prefix openssl@1.1)/lib/pkgconfig"
    make build/tsh FIDO2=dynamic

Build output and running locally

If the build succeeds, the binaries are placed in the build directory.

Before starting, create default data directories:

sudo mkdir -p -m0700 /var/lib/teleport
sudo chown $USER /var/lib/teleport
Running Teleport in hot reload mode

To speed up your development process, you can run Teleport using CompileDaemon. This will build and run the Teleport binary, and then rebuild and restart it whenever any Go source files change.

  1. Install CompileDaemon:

    go install github.com/githubnemo/CompileDaemon@latest
    

    Note that we use go install instead of the suggested go get, because we don't want CompileDaemon to become a dependency of the project.

  2. Build and run the Teleport binary:

    make teleport-hot-reload
    

    By default, this runs a teleport start command. If you want to customize the command, for example by providing a custom config file location, you can use the TELEPORT_ARGS parameter:

    make teleport-hot-reload TELEPORT_ARGS='start --config=/path/to/config.yaml'
    

Note that if you modify any Protocol Buffers files, you still need to run make grpc to regenerate the Go sources; regenerating these sources should in turn cause CompileDaemon to rebuild and restart Teleport.

Web UI

The Teleport Web UI resides in the web directory.

Rebuilding Web UI for development

To rebuild the Teleport UI package, run the following command:

make docker-ui

Then you can replace Teleport Web UI files with the files from the newly-generated /dist folder.

To enable speedy iterations on the Web UI, you can run a local web-dev server.

You can also tell Teleport to load the Web UI assets from the source directory. To enable this behavior, set the environment variable DEBUG=1 and rebuild with the default target:

# Run Teleport as a single-node cluster in development mode:
DEBUG=1 ./build/teleport start -d

Keep the server running in this mode, and make your UI changes in the /dist directory. For instructions on how to update the Web UI, read the web README.

Managing dependencies

All dependencies are managed using Go modules. Here are the instructions for some common tasks:

Add a new dependency

Latest version:

go get github.com/new/dependency

and update the source to use this dependency.

To get a specific version, use go get github.com/new/dependency@version instead.

Set dependency to a specific version
go get github.com/new/dependency@version
Update dependency to the latest version
go get -u github.com/new/dependency
Update all dependencies
go get -u all
Debugging dependencies

Why is a specific package imported?

go mod why $pkgname

Why is a specific module imported?

go mod why -m $modname

Why is a specific version of a module imported?

go mod graph | grep $modname

Devbox Build (experimental)

Note: Devbox support is still experimental. It's very possible that things may not work as intended.

Teleport can be built using devbox. To use devbox, follow the instructions to install devbox here and then run:

devbox shell

This will install Teleport's various build dependencies and drop you into a shell with these dependencies. From here, you can build Teleport normally.

flake.nix

A Nix flake located in build.assets/flake allows for installation of Teleport's less common build tooling. If this flake is updated, run:

devbox install

in order to make sure the changes in the flake are reflected in the local devbox shell.

Why Did We Build Teleport?

The Teleport creators used to work together at Rackspace. We noticed that most cloud computing users struggle with setting up and configuring infrastructure security because popular tools, while flexible, are complex to understand and expensive to maintain. Additionally, most organizations use multiple infrastructure form factors such as several cloud providers, multiple cloud accounts, servers in colocation, and even smart devices. Some of those devices run on untrusted networks, behind third-party firewalls. This only magnifies complexity and increases operational overhead.

We had a choice: either start a security consulting business, or build a solution that's dead-easy to use and understand. A real-time representation of all of your servers in the same room as you, as if they were magically teleported. Thus, Teleport was born!

More Information

Support and Contributing

We offer a few different options for support. First of all, we try to provide clear and comprehensive documentation. The docs are also in GitHub, so feel free to create a PR or file an issue if you have ideas for improvements. If you still have questions after reviewing our docs, you can also:

  • Join Teleport Discussions to ask questions. Our engineers are available there to help you.
  • If you want to contribute to Teleport or file a bug report/issue, you can create an issue here in GitHub.
  • If you are interested in Teleport Enterprise or more responsive support during a POC, we can also create a dedicated Slack channel for you during your POC. You can reach out to us through our website to arrange for a POC.

Is Teleport Secure and Production-Ready?

Yes -- Teleport is production-ready and designed to protect and facilitate access to the most precious and mission-critical applications.

Teleport has completed several security audits from nationally and internationally recognized technology security companies.

We publicize some of our audit results, security philosophy and related information on our trust page.

You can see the list of companies who use Teleport in production on the Teleport product page.

Who Built Teleport?

Teleport was created by Gravitational, Inc. We built Teleport by drawing on our previous experiences at Rackspace. Learn more about Teleport and our history.

Documentation

Overview

Gravitational Teleport is a modern SSH server for remotely accessing clusters of Linux servers via SSH or HTTPS. It is intended to be used instead of sshd.

Teleport enables teams to easily adopt the best SSH practices like:

  • No need to distribute keys: Teleport uses certificate-based access with automatic expiration time.
  • Enforcement of second-factor authentication.
  • Cluster introspection: every Teleport node becomes a part of a cluster and is visible on the Web UI.
  • Record and replay SSH sessions for knowledge sharing and auditing purposes.
  • Collaboratively troubleshoot issues through session sharing.
  • Connect to clusters located behind firewalls without direct Internet access via SSH bastions.
  • Ability to integrate SSH credentials with your organization identities via OAuth (Google Apps, GitHub).
  • Keep the full audit log of all SSH sessions within a cluster.

Teleport web site:

https://gravitational.com/teleport/

Teleport on Github:

https://github.com/gravitational/teleport

Code generated by "make version". DO NOT EDIT.

Index

Constants

View Source
const (
	// SSHAuthSock is the environment variable pointing to the
	// Unix socket the SSH agent is running on.
	SSHAuthSock = "SSH_AUTH_SOCK"
	// SSHAgentPID is the environment variable pointing to the agent
	// process ID
	SSHAgentPID = "SSH_AGENT_PID"

	// SSHTeleportUser is the current Teleport user that is logged in.
	SSHTeleportUser = "SSH_TELEPORT_USER"

	// SSHSessionWebProxyAddr is the address of the web proxy.
	SSHSessionWebProxyAddr = "SSH_SESSION_WEBPROXY_ADDR"

	// SSHTeleportClusterName is the name of the cluster this node belongs to.
	SSHTeleportClusterName = "SSH_TELEPORT_CLUSTER_NAME"

	// SSHTeleportHostUUID is the UUID of the host.
	SSHTeleportHostUUID = "SSH_TELEPORT_HOST_UUID"

	// SSHSessionID is the UUID of the current session.
	SSHSessionID = "SSH_SESSION_ID"

	// EnableNonInteractiveSessionRecording can be used to record non-interactive SSH sessions.
	EnableNonInteractiveSessionRecording = "SSH_TELEPORT_RECORD_NON_INTERACTIVE"
)
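
As a minimal, illustrative sketch (not Teleport code), a program running inside a Teleport SSH session could read these variables with os.LookupEnv:

package main

import (
	"fmt"
	"os"
)

func main() {
	// These variables are set by Teleport inside SSH sessions; the names
	// mirror the constants above.
	for _, name := range []string{
		"SSH_TELEPORT_USER",         // SSHTeleportUser
		"SSH_TELEPORT_CLUSTER_NAME", // SSHTeleportClusterName
		"SSH_SESSION_ID",            // SSHSessionID
		"SSH_SESSION_WEBPROXY_ADDR", // SSHSessionWebProxyAddr
	} {
		if v, ok := os.LookupEnv(name); ok {
			fmt.Printf("%s=%s\n", name, v)
		}
	}
}
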
View Source
const (
	// TOTPValidityPeriod is the number of seconds a TOTP token is valid.
	TOTPValidityPeriod uint = 30

	// TOTPSkew adds that many periods before and after to the validity window.
	TOTPSkew uint = 1
)
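
As a worked illustration of how these two values combine (an assumption based on the comments above, not Teleport's validation code): a token for the current 30-second period is also accepted one period before and after, giving a 90-second window.

package main

import "fmt"

func main() {
	// Illustration only: with a 30-second validity period and a skew of 1,
	// tokens from one period before and one after are also accepted,
	// giving an effective window of 30 * (1 + 2*1) = 90 seconds.
	const period, skew uint = 30, 1
	fmt.Println("acceptance window (seconds):", period*(1+2*skew))
}
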
View Source
const (
	// ComponentMemory is a memory backend
	ComponentMemory = "memory"

	// ComponentAuthority is a TLS and an SSH certificate authority
	ComponentAuthority = "ca"

	// ComponentProcess is a main control process
	ComponentProcess = "proc"

	// ComponentServer is a server subcomponent of some services
	ComponentServer = "server"

	// ComponentACME is ACME protocol controller
	ComponentACME = "acme"

	// ComponentReverseTunnelServer is reverse tunnel server
	// that together with the agent establishes a bi-directional SSH reverse tunnel
	// to bypass firewall restrictions
	ComponentReverseTunnelServer = "proxy:server"

	// ComponentReverseTunnelAgent is reverse tunnel agent
	// that together with the server establishes a bi-directional SSH reverse tunnel
	// to bypass firewall restrictions
	ComponentReverseTunnelAgent = "proxy:agent"

	// ComponentLabel is a component label name used in reporting
	ComponentLabel = "component"

	// ComponentProxyKube is a kubernetes proxy
	ComponentProxyKube = "proxy:kube"

	// ComponentAuth is the cluster CA node (auth server API)
	ComponentAuth = "auth"

	// ComponentGRPC is gRPC server
	ComponentGRPC = "grpc"

	// ComponentMigrate is responsible for data migrations
	ComponentMigrate = "migrate"

	// ComponentNode is SSH node (SSH server serving requests)
	ComponentNode = "node"

	// ComponentForwardingNode is SSH node (SSH server serving requests)
	ComponentForwardingNode = "node:forward"

	// ComponentProxy is SSH proxy (SSH server forwarding connections)
	ComponentProxy = "proxy"

	// ComponentProxyPeer is the proxy peering component of the proxy service
	ComponentProxyPeer = "proxy:peer"

	// ComponentApp is the application proxy service.
	ComponentApp = "app:service"

	// ComponentDatabase is the database proxy service.
	ComponentDatabase = "db:service"

	// ComponentDiscovery is the Discovery service.
	ComponentDiscovery = "discovery:service"

	// ComponentAppProxy is the application handler within the web proxy service.
	ComponentAppProxy = "app:web"

	// ComponentWebProxy is the web handler within the web proxy service.
	ComponentWebProxy = "web"

	// ComponentDiagnostic is a diagnostic service
	ComponentDiagnostic = "diag"

	// ComponentClient is a client
	ComponentClient = "client"

	// ComponentTunClient is a tunnel client
	ComponentTunClient = "client:tunnel"

	// ComponentCache is a cache component
	ComponentCache = "cache"

	// ComponentBackend is a backend component
	ComponentBackend = "backend"

	// ComponentSubsystemProxy is the proxy subsystem.
	ComponentSubsystemProxy = "subsystem:proxy"

	// ComponentSubsystemSFTP is the SFTP subsystem.
	ComponentSubsystemSFTP = "subsystem:sftp"

	// ComponentLocalTerm is a terminal on a regular SSH node.
	ComponentLocalTerm = "term:local"

	// ComponentRemoteTerm is a terminal on a forwarding SSH node.
	ComponentRemoteTerm = "term:remote"

	// ComponentRemoteSubsystem is subsystem on a forwarding SSH node.
	ComponentRemoteSubsystem = "subsystem:remote"

	// ComponentAuditLog is audit log component
	ComponentAuditLog = "audit"

	// ComponentKeyAgent is an agent that has loaded the sessions keys and
	// certificates for a user connected to a proxy.
	ComponentKeyAgent = "keyagent"

	// ComponentKeyStore is all sessions keys and certificates a user has on disk
	// for all proxies.
	ComponentKeyStore = "keystore"

	// ComponentConnectProxy is the HTTP CONNECT proxy used to tunnel connection.
	ComponentConnectProxy = "http:proxy"

	// ComponentSOCKS is a SOCKS5 proxy.
	ComponentSOCKS = "socks"

	// ComponentKeyGen is the public/private keypair generator.
	ComponentKeyGen = "keygen"

	// ComponentFirestore represents firestore clients
	ComponentFirestore = "firestore"

	// ComponentSession is an active session.
	ComponentSession = "session"

	// ComponentDynamoDB represents dynamodb clients
	ComponentDynamoDB = "dynamodb"

	// ComponentPAM is the pluggable authentication module (PAM)
	ComponentPAM = "pam"

	// ComponentUpload is a session recording upload server
	ComponentUpload = "upload"

	// ComponentWeb is a web server
	ComponentWeb = "web"

	// ComponentUnifiedResource is a cache of resources meant to be listed and displayed
	// together in the web UI
	ComponentUnifiedResource = "unified_resource"

	// ComponentWebsocket is websocket server that the web client connects to.
	ComponentWebsocket = "websocket"

	// ComponentRBAC is role-based access control.
	ComponentRBAC = "rbac"

	// ComponentKeepAlive is keep-alive messages sent from clients to servers
	// and vice versa.
	ComponentKeepAlive = "keepalive"

	// ComponentTeleport is the "teleport" binary.
	ComponentTeleport = "teleport"

	// ComponentTSH is the "tsh" binary.
	ComponentTSH = "tsh"

	// ComponentTBot is the "tbot" binary
	ComponentTBot = "tbot"

	// ComponentKubeClient is the Kubernetes client.
	ComponentKubeClient = "client:kube"

	// ComponentBuffer is in-memory event circular buffer
	// used to broadcast events to subscribers.
	ComponentBuffer = "buffer"

	// ComponentBPF is the eBPF package.
	ComponentBPF = "bpf"

	// ComponentRestrictedSession is restriction of user access to kernel objects
	ComponentRestrictedSession = "restrictedsess"

	// ComponentCgroup is the cgroup package.
	ComponentCgroup = "cgroups"

	// ComponentKube is a Kubernetes API gateway.
	ComponentKube = "kubernetes"

	// ComponentSAML is a SAML service provider.
	ComponentSAML = "saml"

	// ComponentMetrics is a metrics server
	ComponentMetrics = "metrics"

	// ComponentWindowsDesktop is a Windows desktop access server.
	ComponentWindowsDesktop = "windows_desktop"

	// ComponentTracing is a tracing exporter
	ComponentTracing = "tracing"

	// ComponentInstance is an abstract component common to all services.
	ComponentInstance = "instance"

	// ComponentVersionControl is the component common to all version control operations.
	ComponentVersionControl = "version-control"

	// ComponentUsageReporting is the component responsible for reporting usage metrics.
	ComponentUsageReporting = "usage-reporting"

	// ComponentAthena represents athena clients.
	ComponentAthena = "athena"

	// ComponentProxySecureGRPC represents secure gRPC server running on Proxy (used for Kube).
	ComponentProxySecureGRPC = "proxy:secure-grpc"

	// ComponentAssist represents Teleport Assist
	ComponentAssist = "assist"

	// VerboseLogsEnvVar forces all logs to be verbose (down to DEBUG level)
	VerboseLogsEnvVar = "TELEPORT_DEBUG"

	// IterationsEnvVar sets tests iterations to run
	IterationsEnvVar = "ITERATIONS"

	// DefaultTerminalWidth defines the default width of a server-side allocated
	// pseudo TTY
	DefaultTerminalWidth = 80

	// DefaultTerminalHeight defines the default height of a server-side allocated
	// pseudo TTY
	DefaultTerminalHeight = 25

	// SafeTerminalType is the fallback TTY type to use when $TERM
	// is not defined
	SafeTerminalType = "xterm"

	// DataDirParameterName is the name of the data dir configuration parameter passed
	// to all backends during initialization
	DataDirParameterName = "data_dir"

	// KeepAliveReqType is an SSH request type to keep the connection alive. A client and
	// a server keep pinging each other with it.
	KeepAliveReqType = "keepalive@openssh.com"

	// ClusterDetailsReqType is the name of a global request which returns cluster details like
	// if the proxy is recording sessions or not and if FIPS is enabled.
	ClusterDetailsReqType = "cluster-details@goteleport.com"

	// JSON means JSON serialization format
	JSON = "json"

	// YAML means YAML serialization format
	YAML = "yaml"

	// Text means text serialization format
	Text = "text"

	// PTY is a raw PTY session capture format
	PTY = "pty"

	// Names is for formatting node names in plain text
	Names = "names"

	// LinuxAdminGID is the ID of the standard adm group on linux
	LinuxAdminGID = 4

	// DirMaskSharedGroup is the mask for a directory accessible
	// by the owner and group
	DirMaskSharedGroup = 0o770

	// FileMaskOwnerOnly is the file mask that allows read/write access
	// to owners only
	FileMaskOwnerOnly = 0o600

	// On means mode is on
	On = "on"

	// Off means mode is off
	Off = "off"

	// GCSTestURI turns on GCS tests
	GCSTestURI = "TEST_GCS_URI"

	// AZBlobTestURI specifies the storage account URL to use for Azure Blob
	// Storage tests.
	AZBlobTestURI = "TEST_AZBLOB_URI"

	// AWSRunTests turns on tests executed against AWS directly
	AWSRunTests = "TEST_AWS"

	// AWSRunDBTests turns on tests executed against AWS databases directly.
	AWSRunDBTests = "TEST_AWS_DB"

	// Region is AWS region parameter
	Region = "region"

	// Endpoint is an optional Host for non-AWS S3
	Endpoint = "endpoint"

	// Insecure is an optional switch to use HTTP instead of HTTPS
	Insecure = "insecure"

	// DisableServerSideEncryption is an optional switch to opt out of SSE in case the provider does not support it
	DisableServerSideEncryption = "disablesse"

	// ACL is the canned ACL to send to S3
	ACL = "acl"

	// SSEKMSKey is an optional switch to use a KMS CMK key for S3 SSE.
	SSEKMSKey = "sse_kms_key"

	// SchemeFile configures local disk-based file storage for audit events
	SchemeFile = "file"

	// SchemeStdout outputs audit log entries to stdout
	SchemeStdout = "stdout"

	// SchemeS3 is used for S3-like object storage
	SchemeS3 = "s3"

	// SchemeGCS is used for Google Cloud Storage
	SchemeGCS = "gs"

	// SchemeAZBlob is the Azure Blob Storage scheme, used as the scheme in the
	// session storage URI to identify a storage account accessed over https.
	SchemeAZBlob = "azblob"

	// SchemeAZBlobHTTP is the Azure Blob Storage scheme, used as the scheme in the
	// session storage URI to identify a storage account accessed over http.
	SchemeAZBlobHTTP = "azblob-http"

	// LogsDir is a log subdirectory for events and logs
	LogsDir = "log"

	// Syslog is a mode for syslog logging
	Syslog = "syslog"

	// HumanDateFormat is a human readable date formatting
	HumanDateFormat = "Jan _2 15:04 UTC"

	// HumanDateFormatMilli is a human readable date formatting with milliseconds
	HumanDateFormatMilli = "Jan _2 15:04:05.000 UTC"

	// DebugLevel is a debug logging level name
	DebugLevel = "debug"

	// MinimumEtcdVersion is the minimum version of etcd supported by Teleport
	MinimumEtcdVersion = "3.3.0"
)
View Source
const (

	// OIDCPromptSelectAccount instructs the Authorization Server to
	// prompt the End-User to select a user account.
	OIDCPromptSelectAccount = "select_account"

	// OIDCAccessTypeOnline indicates that OIDC flow should be performed
	// with Authorization server and user connected online
	OIDCAccessTypeOnline = "online"
)
View Source
const (
	// AuthorizedKeys are public keys that check against User CAs.
	AuthorizedKeys = "authorized_keys"
	// KnownHosts are public keys that check against Host CAs.
	KnownHosts = "known_hosts"
)
View Source
const (
	// CertExtensionPermitX11Forwarding allows X11 forwarding for certificate
	CertExtensionPermitX11Forwarding = "permit-X11-forwarding"
	// CertExtensionPermitAgentForwarding allows agent forwarding for certificate
	CertExtensionPermitAgentForwarding = "permit-agent-forwarding"
	// CertExtensionPermitPTY allows user to request PTY
	CertExtensionPermitPTY = "permit-pty"
	// CertExtensionPermitPortForwarding allows user to request port forwarding
	CertExtensionPermitPortForwarding = "permit-port-forwarding"
	// CertExtensionTeleportRoles is used to propagate teleport roles
	CertExtensionTeleportRoles = "teleport-roles"
	// CertExtensionTeleportRouteToCluster is used to encode
	// the target cluster to route to in the certificate
	CertExtensionTeleportRouteToCluster = "teleport-route-to-cluster"
	// CertExtensionTeleportTraits is used to propagate traits about the user.
	CertExtensionTeleportTraits = "teleport-traits"
	// CertExtensionTeleportActiveRequests is used to track which privilege
	// escalation requests were used to construct the certificate.
	CertExtensionTeleportActiveRequests = "teleport-active-requests"
	// CertExtensionMFAVerified is used to mark certificates issued after an MFA
	// check.
	CertExtensionMFAVerified = "mfa-verified"
	// CertExtensionPreviousIdentityExpires is the extension that stores an RFC3339
	// timestamp representing the expiry time of the identity/cert that this
	// identity/cert was derived from. It is used to determine a session's hard
	// deadline in cases where both require_session_mfa and disconnect_expired_cert
	// are enabled. See https://github.com/gravitational/teleport/issues/18544.
	CertExtensionPreviousIdentityExpires = "prev-identity-expires"
	// CertExtensionLoginIP is used to embed the IP of the client that created
	// the certificate.
	CertExtensionLoginIP = "login-ip"
	// CertExtensionImpersonator is set when one user has requested certificates
	// for another user
	CertExtensionImpersonator = "impersonator"
	// CertExtensionDisallowReissue is set when a certificate should not be allowed
	// to request future certificates.
	CertExtensionDisallowReissue = "disallow-reissue"
	// CertExtensionRenewable is a flag to indicate the certificate may be
	// renewed.
	CertExtensionRenewable = "renewable"
	// CertExtensionGeneration counts the number of times a certificate has
	// been renewed.
	CertExtensionGeneration = "generation"
	// CertExtensionAllowedResources lists the resources which this certificate
	// should be allowed to access
	CertExtensionAllowedResources = "teleport-allowed-resources"
	// CertExtensionConnectionDiagnosticID contains the ID of the ConnectionDiagnostic.
	// The Node/Agent will append connection traces to this diagnostic instance.
	CertExtensionConnectionDiagnosticID = "teleport-connection-diagnostic-id"
	// CertExtensionPrivateKeyPolicy is used to mark certificates with their supported
	// private key policy.
	CertExtensionPrivateKeyPolicy = "private-key-policy"
	// CertExtensionDeviceID is the trusted device identifier.
	CertExtensionDeviceID = "teleport-device-id"
	// CertExtensionDeviceAssetTag is the device inventory identifier.
	CertExtensionDeviceAssetTag = "teleport-device-asset-tag"
	// CertExtensionDeviceCredentialID is the identifier for the credential used
	// by the device to authenticate itself.
	CertExtensionDeviceCredentialID = "teleport-device-credential-id"
	// CertExtensionBotName indicates the name of the Machine ID bot this
	// certificate was issued to, if any.
	CertExtensionBotName = "bot-name@goteleport.com"

	// CertCriticalOptionSourceAddress is a critical option that defines IP addresses (in CIDR notation)
	// from which this certificate is accepted for authentication.
	// See: https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?annotate=HEAD.
	CertCriticalOptionSourceAddress = "source-address"
)
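
As a hedged sketch (not Teleport's certificate-issuing code), these names appear as keys in the Extensions and CriticalOptions maps of a golang.org/x/crypto/ssh certificate:

package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Illustration only: populate the permission maps of an SSH certificate
	// with a few of the extension and critical-option names defined above.
	cert := &ssh.Certificate{
		CertType: ssh.UserCert,
		Permissions: ssh.Permissions{
			Extensions: map[string]string{
				"permit-pty":              "", // CertExtensionPermitPTY
				"permit-port-forwarding":  "", // CertExtensionPermitPortForwarding
				"permit-agent-forwarding": "", // CertExtensionPermitAgentForwarding
			},
			CriticalOptions: map[string]string{
				// CertCriticalOptionSourceAddress restricts the source addresses
				// from which the certificate is accepted; the CIDR is an example.
				"source-address": "10.0.0.0/8",
			},
		},
	}
	fmt.Println("extensions set:", len(cert.Permissions.Extensions))
}
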
View Source
const (
	// NetIQ is an identity provider.
	NetIQ = "netiq"
	// ADFS is Microsoft Active Directory Federation Services
	ADFS = "adfs"
	// Ping is the common backend for all Ping Identity-branded identity
	// providers (including PingOne, PingFederate, etc).
	Ping = "ping"
	// Okta should be used for Okta OIDC providers.
	Okta = "okta"
	// JumpCloud is an identity provider.
	JumpCloud = "jumpcloud"
)

Note: when adding new providers to this list, consider updating the help message for the --provider flag of the `tctl sso configure oidc` and `tctl sso configure saml` commands, as well as the docs at https://goteleport.com/docs/enterprise/sso/#provider-specific-workarounds

View Source
const (
	// RemoteCommandSuccess is returned when a command has successfully executed.
	RemoteCommandSuccess = 0
	// RemoteCommandFailure is returned when a command has failed to execute and
	// we don't have another status code for it.
	RemoteCommandFailure = 255
	// HomeDirNotFound is returned when the "teleport checkhomedir" command cannot
	// find the user's home directory.
	HomeDirNotFound = 254
)
View Source
const (
	// CertificateFormatOldSSH is used to make Teleport interoperate with older
	// versions of OpenSSH.
	CertificateFormatOldSSH = "oldssh"

	// CertificateFormatUnspecified is used to check if the format was specified
	// or not.
	CertificateFormatUnspecified = ""
)
View Source
const (
	// TraitInternalPrefix is the role variable prefix that indicates it's for
	// local accounts.
	TraitInternalPrefix = "internal"

	// TraitExternalPrefix is the role variable prefix that indicates the data comes from an external identity provider.
	TraitExternalPrefix = "external"

	// TraitTeams is the name of the role variable used to store team
	// membership information.
	TraitTeams = "github_teams"

	// TraitJWT is the name of the trait containing JWT header for app access.
	TraitJWT = "jwt"

	// TraitInternalLoginsVariable is the variable used to store allowed
	// logins for local accounts.
	TraitInternalLoginsVariable = "{{internal.logins}}"

	// TraitInternalWindowsLoginsVariable is the variable used to store
	// allowed Windows Desktop logins for local accounts.
	TraitInternalWindowsLoginsVariable = "{{internal.windows_logins}}"

	// TraitInternalKubeGroupsVariable is the variable used to store allowed
	// kubernetes groups for local accounts.
	TraitInternalKubeGroupsVariable = "{{internal.kubernetes_groups}}"

	// TraitInternalKubeUsersVariable is the variable used to store allowed
	// kubernetes users for local accounts.
	TraitInternalKubeUsersVariable = "{{internal.kubernetes_users}}"

	// TraitInternalDBNamesVariable is the variable used to store allowed
	// database names for local accounts.
	TraitInternalDBNamesVariable = "{{internal.db_names}}"

	// TraitInternalDBUsersVariable is the variable used to store allowed
	// database users for local accounts.
	TraitInternalDBUsersVariable = "{{internal.db_users}}"

	// TraitInternalDBRolesVariable is the variable used to store allowed
	// database roles for automatic database user provisioning.
	TraitInternalDBRolesVariable = "{{internal.db_roles}}"

	// TraitInternalAWSRoleARNs is the variable used to store allowed AWS
	// role ARNs for local accounts.
	TraitInternalAWSRoleARNs = "{{internal.aws_role_arns}}"

	// TraitInternalAzureIdentities is the variable used to store allowed
	// Azure identities for local accounts.
	TraitInternalAzureIdentities = "{{internal.azure_identities}}"

	// TraitInternalGCPServiceAccounts is the variable used to store allowed
	// GCP service accounts for local accounts.
	TraitInternalGCPServiceAccounts = "{{internal.gcp_service_accounts}}"

	// TraitInternalJWTVariable is the variable used to store JWT token for
	// app sessions.
	TraitInternalJWTVariable = "{{internal.jwt}}"
)
View Source
const (
	// PresetEditorRoleName is a name of a preset role that allows
	// editing cluster configuration.
	PresetEditorRoleName = "editor"

	// PresetAccessRoleName is a name of a preset role that allows
	// accessing cluster resources.
	PresetAccessRoleName = "access"

	// PresetAuditorRoleName is a name of a preset role that allows
	// reading cluster events and playing back session records.
	PresetAuditorRoleName = "auditor"

	// PresetReviewerRoleName is a name of a preset role that allows
	// for reviewing access requests.
	PresetReviewerRoleName = "reviewer"

	// PresetRequesterRoleName is a name of a preset role that allows
	// for requesting access to resources.
	PresetRequesterRoleName = "requester"

	// PresetGroupAccessRoleName is a name of a preset role that allows
	// access to all user groups.
	PresetGroupAccessRoleName = "group-access"

	// PresetDeviceAdminRoleName is the name of the "device-admin" role.
	// The role is used to administer trusted devices.
	PresetDeviceAdminRoleName = "device-admin"

	// PresetDeviceEnrollRoleName is the name of the "device-enroll" role.
	// The role is used to grant device enrollment powers to users.
	PresetDeviceEnrollRoleName = "device-enroll"

	// PresetRequireTrustedDeviceRoleName is the name of the
	// "require-trusted-device" role.
	// The role is used as a basis for requiring trusted device access to
	// resources.
	PresetRequireTrustedDeviceRoleName = "require-trusted-device"

	// SystemAutomaticAccessApprovalRoleName names a preset role that may
	// automatically approve any Role Access Request
	SystemAutomaticAccessApprovalRoleName = "@teleport-access-approver"

	// ConnectMyComputerRoleNamePrefix is the prefix used for roles prepared for individual users
	// during the setup of Connect My Computer. The prefix is followed by the name of the cluster
	// user. See teleterm.connectmycomputer.RoleSetup.
	ConnectMyComputerRoleNamePrefix = "connect-my-computer-"
)
View Source
const (
	// RemoteClusterStatusOffline indicates that cluster is considered as
	// offline, since it has missed a series of heartbeats
	RemoteClusterStatusOffline = "offline"
	// RemoteClusterStatusOnline indicates that cluster is sending heartbeats
	// at expected interval
	RemoteClusterStatusOnline = "online"
)
View Source
const (
	// SharedDirMode is a mode for a directory shared with group
	SharedDirMode = 0o750

	// PrivateDirMode is a mode for private directories
	PrivateDirMode = 0o700
)
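
For example (a generic sketch, not Teleport's startup code), a data directory with PrivateDirMode permissions can be created like this:

package main

import (
	"log"
	"os"
)

func main() {
	// Create a directory readable and writable by the owner only (0o700),
	// matching PrivateDirMode above. The path is just an example.
	if err := os.MkdirAll("/tmp/teleport-example-data", 0o700); err != nil {
		log.Fatal(err)
	}
}
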
View Source
const (
	// SessionEvent is sent by servers to clients when an audit event occurs on
	// the session.
	SessionEvent = "x-teleport-event"

	// VersionRequest is sent by clients to server requesting the Teleport
	// version they are running.
	VersionRequest = "x-teleport-version"

	// ForceTerminateRequest is an SSH request to forcefully terminate a session.
	ForceTerminateRequest = "x-teleport-force-terminate"

	// TerminalSizeRequest is a request for the terminal size of the session.
	TerminalSizeRequest = "x-teleport-terminal-size"

	// MFAPresenceRequest is an SSH request to notify clients that MFA presence is required for a session.
	MFAPresenceRequest = "x-teleport-mfa-presence"

	// EnvSSHJoinMode is the SSH environment variable that contains the requested participant mode.
	EnvSSHJoinMode = "TELEPORT_SSH_JOIN_MODE"

	// EnvSSHSessionReason is a reason attached to started sessions meant to describe their intent.
	EnvSSHSessionReason = "TELEPORT_SESSION_REASON"

	// EnvSSHSessionInvited is an environment variable listing people invited to a session.
	EnvSSHSessionInvited = "TELEPORT_SESSION_JOIN_MODE"

	// EnvSSHSessionDisplayParticipantRequirements is set to true or false to indicate if participant
	// requirement information should be printed.
	EnvSSHSessionDisplayParticipantRequirements = "TELEPORT_SESSION_PARTICIPANT_REQUIREMENTS"

	// SSHSessionJoinPrincipal is the SSH principal used when joining sessions.
	// This starts with a hyphen so it isn't a valid unix login.
	SSHSessionJoinPrincipal = "-teleport-internal-join"
)
View Source
const (
	// EnvKubeConfig is environment variable for kubeconfig
	EnvKubeConfig = "KUBECONFIG"

	// KubeConfigDir is a default directory where k8s stores its user local config
	KubeConfigDir = ".kube"

	// KubeConfigFile is a default filename where k8s stores its user local config
	KubeConfigFile = "config"

	// KubeRunTests turns on kubernetes tests
	KubeRunTests = "TEST_KUBE"

	// KubeSystemAuthenticated is a builtin group that allows
	// any user to access common API methods, e.g. discovery methods
	// required for initial client usage
	KubeSystemAuthenticated = "system:authenticated"

	// UsageKubeOnly specifies certificate usage metadata
	// that limits certificate to be only used for kubernetes proxying
	UsageKubeOnly = "usage:kube"

	// UsageAppsOnly specifies certificate metadata that only allows it to be
	// used for proxying applications.
	UsageAppsOnly = "usage:apps"

	// UsageDatabaseOnly specifies certificate usage metadata that only allows
	// it to be used for proxying database connections.
	UsageDatabaseOnly = "usage:db"

	// UsageWindowsDesktopOnly specifies certificate usage metadata that limits
	// certificate to be only used for Windows desktop access
	UsageWindowsDesktopOnly = "usage:windows_desktop"
)
View Source
const (
	// NodeIsAmbiguous serves as an identifying error string indicating that
	// the proxy subsystem found multiple nodes matching the specified hostname.
	NodeIsAmbiguous = "err-node-is-ambiguous"

	// MaxLeases serves as an identifying error string indicating that the
	// semaphore system is rejecting an acquisition attempt due to max
	// leases having already been reached.
	MaxLeases = "err-max-leases"
)
View Source
const (
	// OpenBrowserLinux is the command used to open a web browser on Linux.
	OpenBrowserLinux = "xdg-open"

	// OpenBrowserDarwin is the command used to open a web browser on macOS/Darwin.
	OpenBrowserDarwin = "open"

	// OpenBrowserWindows is the command used to open a web browser on Windows.
	OpenBrowserWindows = "rundll32.exe"

	// BrowserNone is the string used to suppress the opening of a browser in
	// response to 'tsh login' commands.
	BrowserNone = "none"
)
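
A hypothetical helper showing how these commands map to platforms (the Windows invocation needs additional rundll32.exe arguments and is omitted here):

package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// openBrowser picks the per-platform command named by the constants above.
// This is an illustrative sketch, not tsh's actual implementation.
func openBrowser(url string) error {
	switch runtime.GOOS {
	case "darwin":
		return exec.Command("open", url).Start() // OpenBrowserDarwin
	case "linux":
		return exec.Command("xdg-open", url).Start() // OpenBrowserLinux
	default:
		return fmt.Errorf("no browser command configured for %s", runtime.GOOS)
	}
}

func main() {
	if err := openBrowser("https://goteleport.com"); err != nil {
		fmt.Println(err)
	}
}
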
View Source
const (
	// ExecSubCommand is the sub-command Teleport uses to re-exec itself for
	// command execution (exec and shells).
	ExecSubCommand = "exec"

	// ForwardSubCommand is the sub-command Teleport uses to re-exec itself
	// for port forwarding.
	ForwardSubCommand = "forwardv2"

	// CheckHomeDirSubCommand is the sub-command Teleport uses to re-exec itself
	// to check if the user's home directory exists.
	CheckHomeDirSubCommand = "checkhomedir"

	// ParkSubCommand is the sub-command Teleport uses to re-exec itself as a
	// specific UID to prevent the matching user from being deleted before
	// spawning the intended child process.
	ParkSubCommand = "park"

	// SFTPSubCommand is the sub-command Teleport uses to re-exec itself to
	// handle SFTP connections.
	SFTPSubCommand = "sftp"

	// WaitSubCommand is the sub-command Teleport uses to wait
	// until a domain name stops resolving. Its main use is to ensure no
	// auth instances are still running the previous major version.
	WaitSubCommand = "wait"
)
View Source
const (
	// ChanDirectTCPIP is a SSH channel of type "direct-tcpip".
	ChanDirectTCPIP = "direct-tcpip"

	// ChanSession is a SSH channel of type "session".
	ChanSession = "session"
)
View Source
const (
	// GetHomeDirSubsystem is an SSH subsystem request that Teleport
	// uses to get the home directory of a remote user.
	GetHomeDirSubsystem = "gethomedir"

	// SFTPSubsystem is the SFTP SSH subsystem.
	SFTPSubsystem = "sftp"
)
View Source
const (
	// AppJWTHeader is the header containing the JWT assertion passed to the
	// internal application being proxied.
	AppJWTHeader = "teleport-jwt-assertion"

	// HostHeader is the name of the Host header.
	HostHeader = "Host"
)
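
As an illustrative sketch of the consuming side, an internal application behind Teleport application access could read the assertion header like this (JWT verification is intentionally omitted):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// AppJWTHeader carries the JWT assertion added by the proxy.
		token := r.Header.Get("teleport-jwt-assertion")
		if token == "" {
			http.Error(w, "missing Teleport JWT assertion", http.StatusUnauthorized)
			return
		}
		// Verifying the token against the cluster's JWT keys is omitted here.
		fmt.Fprintln(w, "assertion received")
	})
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}
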
View Source
const (
	// KubeSessionDisplayParticipantRequirementsQueryParam is the query parameter used to
	// indicate that the client wants to display the participant requirements
	// for the given session.
	KubeSessionDisplayParticipantRequirementsQueryParam = "displayParticipantRequirements"
	// KubeSessionReasonQueryParam is the query parameter used to indicate the reason
	// for the session request.
	KubeSessionReasonQueryParam = "reason"
	// KubeSessionInvitedQueryParam is the query parameter used to indicate the users
	// to invite to the session.
	KubeSessionInvitedQueryParam = "invite"
)
View Source
const (
	// MetricGenerateRequests counts how many generate server keys requests
	// are issued over time
	MetricGenerateRequests = "auth_generate_requests_total"

	// MetricGenerateRequestsThrottled measures how many generate requests
	// are throttled
	MetricGenerateRequestsThrottled = "auth_generate_requests_throttled_total"

	// MetricGenerateRequestsCurrent measures current in-flight requests
	MetricGenerateRequestsCurrent = "auth_generate_requests"

	// MetricGenerateRequestsHistogram measures generate requests latency
	MetricGenerateRequestsHistogram = "auth_generate_seconds"

	// MetricServerInteractiveSessions measures interactive sessions in flight
	MetricServerInteractiveSessions = "server_interactive_sessions_total"

	// MetricProxySSHSessions measures sessions in flight on the proxy
	MetricProxySSHSessions = "proxy_ssh_sessions_total"

	// MetricRemoteClusters measures connected remote clusters
	MetricRemoteClusters = "remote_clusters"

	// MetricTrustedClusters counts trusted clusters
	MetricTrustedClusters = "trusted_clusters"

	// MetricClusterNameNotFound counts times a cluster name was not found
	MetricClusterNameNotFound = "cluster_name_not_found_total"

	// MetricFailedLoginAttempts counts failed login attempts
	MetricFailedLoginAttempts = "failed_login_attempts_total"

	// MetricConnectToNodeAttempts counts ssh attempts
	MetricConnectToNodeAttempts = "connect_to_node_attempts_total"

	// MetricFailedConnectToNodeAttempts counts failed ssh attempts
	MetricFailedConnectToNodeAttempts = "failed_connect_to_node_attempts_total"

	// MetricUserMaxConcurrentSessionsHit counts number of times a user exceeded their max concurrent ssh connections
	MetricUserMaxConcurrentSessionsHit = "user_max_concurrent_sessions_hit_total"

	// MetricProxyConnectionLimitHit counts the number of times the proxy connection limit was exceeded
	MetricProxyConnectionLimitHit = "proxy_connection_limit_exceeded_total"

	// MetricUserLoginCount counts user logins
	MetricUserLoginCount = "user_login_total"

	// MetricHeartbeatConnectionsReceived counts heartbeat connections received by auth
	MetricHeartbeatConnectionsReceived = "heartbeat_connections_received_total"

	// MetricCertificateMismatch counts login failures due to certificate mismatch
	MetricCertificateMismatch = "certificate_mismatch_total"

	// MetricHeartbeatsMissed counts the nodes that failed to heartbeat
	MetricHeartbeatsMissed = "heartbeats_missed_total"

	// MetricWatcherEventsEmitted counts watcher events that are emitted
	MetricWatcherEventsEmitted = "watcher_events"

	// MetricWatcherEventSizes measures the size of watcher events that are emitted
	MetricWatcherEventSizes = "watcher_event_sizes"

	// MetricMissingSSHTunnels returns the number of missing SSH tunnels for this proxy.
	MetricMissingSSHTunnels = "proxy_missing_ssh_tunnels"

	// MetricMigrations tracks for each migration if it is active or not.
	MetricMigrations = "migrations"

	// TagMigration is a metric tag for a migration
	TagMigration = "migration"

	// MetricIncompleteSessionUploads returns the number of incomplete session uploads
	MetricIncompleteSessionUploads = "incomplete_session_uploads_total"

	// TagCluster is a metric tag for a cluster
	TagCluster = "cluster"

	// MetricTotalInstances provides an instance count
	MetricTotalInstances = "total_instances"

	// MetricEnrolledInUpgrades provides total number of instances that advertise an upgrader.
	MetricEnrolledInUpgrades = "enrolled_in_upgrades"

	// MetricUpgraderCounts provides instance count per-upgrader.
	MetricUpgraderCounts = "upgrader_counts"

	// TagUpgrader is a metric tag for upgraders.
	TagUpgrader = "upgrader"

	// MetricAccessRequestsCreated provides total number of created access requests.
	MetricAccessRequestsCreated = "access_requests_created"
	// TagRoles is a number of roles requested as a part of access request.
	TagRoles = "roles"
	// TagResources is a number of resources requested as a part of access request.
	TagResources = "resources"

	// MetricUserCertificatesGenerated provides total number of user certificates generated.
	MetricUserCertificatesGenerated = "user_certificates_generated"
	// TagPrivateKeyPolicy is a private key policy associated with a user's certificates.
	TagPrivateKeyPolicy = "private_key_policy"
)
View Source
const (
	// MetricProcessCPUSecondsTotal measures CPU seconds consumed by process
	MetricProcessCPUSecondsTotal = "process_cpu_seconds_total"
	// MetricProcessMaxFDs shows maximum amount of file descriptors allowed for the process
	MetricProcessMaxFDs = "process_max_fds"
	// MetricProcessOpenFDs shows process open file descriptors
	MetricProcessOpenFDs = "process_open_fds"
	// MetricProcessResidentMemoryBytes measures bytes consumed by process resident memory
	MetricProcessResidentMemoryBytes = "process_resident_memory_bytes"
	// MetricProcessStartTimeSeconds measures process start time
	MetricProcessStartTimeSeconds = "process_start_time_seconds"
)
View Source
const (
	// MetricGoThreads is amount of system threads used by Go runtime
	MetricGoThreads = "go_threads"

	// MetricGoGoroutines measures current number of goroutines
	MetricGoGoroutines = "go_goroutines"

	// MetricGoInfo provides information about Go runtime version
	MetricGoInfo = "go_info"

	// MetricGoAllocBytes measures allocated memory bytes
	MetricGoAllocBytes = "go_memstats_alloc_bytes"

	// MetricGoHeapAllocBytes measures heap bytes allocated by Go runtime
	MetricGoHeapAllocBytes = "go_memstats_heap_alloc_bytes"

	// MetricGoHeapObjects measures count of heap objects created by Go runtime
	MetricGoHeapObjects = "go_memstats_heap_objects"
)
View Source
const (
	// MetricBackendWatchers is a metric with backend watchers
	MetricBackendWatchers = "backend_watchers_total"

	// MetricBackendWatcherQueues is a metric with backend watcher queues sizes
	MetricBackendWatcherQueues = "backend_watcher_queues_total"

	// MetricBackendRequests measures count of backend requests
	MetricBackendRequests = "backend_requests"

	// MetricBackendReadHistogram measures histogram of backend read latencies
	MetricBackendReadHistogram = "backend_read_seconds"

	// MetricBackendWriteHistogram measures histogram of backend write latencies
	MetricBackendWriteHistogram = "backend_write_seconds"

	// MetricBackendBatchWriteHistogram measures histogram of backend batch write latencies
	MetricBackendBatchWriteHistogram = "backend_batch_write_seconds"

	// MetricBackendBatchReadHistogram measures histogram of backend batch read latencies
	MetricBackendBatchReadHistogram = "backend_batch_read_seconds"

	// MetricBackendWriteRequests measures backend write requests count
	MetricBackendWriteRequests = "backend_write_requests_total"

	// MetricBackendWriteFailedRequests measures failed backend write requests count
	MetricBackendWriteFailedRequests = "backend_write_requests_failed_total"

	// MetricBackendBatchWriteRequests measures batch backend writes count
	MetricBackendBatchWriteRequests = "backend_batch_write_requests_total"

	// MetricBackendBatchFailedWriteRequests measures failed batch backend requests count
	MetricBackendBatchFailedWriteRequests = "backend_batch_write_requests_failed_total"

	// MetricBackendReadRequests measures backend read requests count
	MetricBackendReadRequests = "backend_read_requests_total"

	// MetricBackendFailedReadRequests measures failed backend read requests count
	MetricBackendFailedReadRequests = "backend_read_requests_failed_total"

	// MetricBackendBatchReadRequests measures batch backend read requests count
	MetricBackendBatchReadRequests = "backend_batch_read_requests_total"

	// MetricBackendBatchFailedReadRequests measures failed backend batch read requests count
	MetricBackendBatchFailedReadRequests = "backend_batch_read_requests_failed_total"

	// MetricLostCommandEvents measures the number of command events that were lost
	MetricLostCommandEvents = "bpf_lost_command_events"

	// MetricLostDiskEvents measures the number of disk events that were lost.
	MetricLostDiskEvents = "bpf_lost_disk_events"

	// MetricLostNetworkEvents measures the number of network events that were lost.
	MetricLostNetworkEvents = "bpf_lost_network_events"

	// MetricLostRestrictedEvents measures the number of restricted events that were lost
	MetricLostRestrictedEvents = "bpf_lost_restricted_events"

	// MetricState tracks the state of the teleport process.
	MetricState = "process_state"

	// MetricNamespace defines the teleport prometheus namespace
	MetricNamespace = "teleport"

	// MetricConnectedResources tracks the number and type of resources connected via keepalives
	MetricConnectedResources = "connected_resources"

	// MetricBuildInfo tracks build information
	MetricBuildInfo = "build_info"

	// MetricCacheEventsReceived tracks the total number of events received by a cache
	MetricCacheEventsReceived = "cache_events"

	// MetricStaleCacheEventsReceived tracks the number of stale events received by a cache
	MetricStaleCacheEventsReceived = "cache_stale_events"

	// MetricRegisteredServers tracks the number of Teleport servers that have successfully registered with the Teleport cluster and have not reached the end of their ttl
	MetricRegisteredServers = "registered_servers"

	// MetricRegisteredServersByInstallMethods tracks the number of Teleport servers, and their installation method,
	// that have successfully registered with the Teleport cluster and have not reached the end of their ttl
	MetricRegisteredServersByInstallMethods = "registered_servers_by_install_methods"

	// MetricReverseSSHTunnels defines the number of connected SSH reverse tunnels to the proxy
	MetricReverseSSHTunnels = "reverse_tunnels_connected"

	// MetricHostedPluginStatus tracks the current status
	// (as defined by types.PluginStatus) for a plugin instance
	MetricHostedPluginStatus = "hosted_plugin_status"

	// MetricTeleportServices tracks which services are currently running in the current Teleport Process.
	MetricTeleportServices = "services"

	// TagRange is a tag specifying backend requests
	TagRange = "range"

	// TagReq is a tag specifying backend request type
	TagReq = "req"

	// TagTrue is a tag value to mark true values
	TagTrue = "true"

	// TagFalse is a tag value to mark false values
	TagFalse = "false"

	// TagResource is a tag specifying the resource for an event
	TagResource = "resource"

	// TagVersion is a prometheus label for version of Teleport built
	TagVersion = "version"

	// TagGitref is a prometheus label for the gitref of Teleport built
	TagGitref = "gitref"

	// TagGoVersion is a prometheus label for version of Go used to build Teleport
	TagGoVersion = "goversion"

	// TagCacheComponent is a prometheus label for the cache component
	TagCacheComponent = "cache_component"

	// TagType is a prometheus label for type of resource or tunnel connected
	TagType = "type"

	// TagServer is a prometheus label to indicate what server the metric is tied to
	TagServer = "server"

	// TagClient is a prometheus label to indicate what client the metric is tied to
	TagClient = "client"

	// TagInstallMethods is a prometheus label to indicate what installation methods
	// were used for the agent.
	// This value comes from UpstreamInventoryAgentMetadata (sourced in lib/inventory/metadata.fetchInstallMethods).
	TagInstallMethods = "install_methods"

	// TagServiceName is the prometheus label to indicate what services are running in the current proxy.
	// Those services are monitored using the Supervisor.
	// Only a subset of services are monitored. See [lib/service.metricsServicesRunningMap]
	// Eg, discovery_service
	TagServiceName = "service_name"
)
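
These names pair with Prometheus registration; the following is a generic sketch (not Teleport's own registration code) that combines the "teleport" namespace (MetricNamespace) with one of the metric names defined earlier:

package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Illustration only: register a counter under the "teleport" namespace
	// using the name held by MetricConnectToNodeAttempts above.
	attempts := prometheus.NewCounter(prometheus.CounterOpts{
		Namespace: "teleport",
		Name:      "connect_to_node_attempts_total",
		Help:      "Number of SSH connection attempts.",
	})
	prometheus.MustRegister(attempts)
	attempts.Inc()
}
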
View Source
const (
	// MetricUsageEventsSubmitted is a count of usage events that have been generated.
	MetricUsageEventsSubmitted = "usage_events_submitted_total"

	// MetricUsageBatches is a count of batches enqueued for submission.
	MetricUsageBatches = "usage_batches_total"

	// MetricUsageEventsRequeued is a count of events that were requeued after a
	// submission failed.
	MetricUsageEventsRequeued = "usage_events_requeued_total"

	// MetricUsageBatchSubmissionDuration is a histogram of durations it took to
	// submit a batch.
	MetricUsageBatchSubmissionDuration = "usage_batch_submission_duration_seconds"

	// MetricUsageBatchesSubmitted is a count of event batches successfully
	// submitted.
	MetricUsageBatchesSubmitted = "usage_batch_submitted_total"

	// MetricUsageBatchesFailed is a count of event batches that failed to
	// submit.
	MetricUsageBatchesFailed = "usage_batch_failed_total"

	// MetricUsageEventsDropped is a count of events dropped due to the
	// submission buffer reaching a length limit.
	MetricUsageEventsDropped = "usage_events_dropped_total"
)
View Source
const (
	// MetricParquetlogConsumerBatchPorcessingDuration is a histogram of durations it
	// took to process single batch of events.
	MetricParquetlogConsumerBatchPorcessingDuration = "audit_parquetlog_batch_processing_seconds"
	// MetricParquetlogConsumerS3FlushDuration is a histogram of durations it took to
	// flush and close parquet files on s3.
	MetricParquetlogConsumerS3FlushDuration = "audit_parquetlog_s3_flush_seconds"
	// MetricParquetlogConsumerDeleteEventsDuration is a histogram of durations it
	// took to delete events from SQS.
	MetricParquetlogConsumerDeleteEventsDuration = "audit_parquetlog_delete_events_seconds"
	// MetricParquetlogConsumerBatchSize is a histogram of sizes of single batch of events.
	MetricParquetlogConsumerBatchSize = "audit_parquetlog_batch_size"
	// MetricParquetlogConsumerBatchCount is a count of number of events in single batch.
	MetricParquetlogConsumerBatchCount = "audit_parquetlog_batch_count"
	// MetricParquetlogConsumerLastProcessedTimestamp is a timestamp of last finished consumer execution.
	MetricParquetlogConsumerLastProcessedTimestamp = "audit_parquetlog_last_processed_timestamp"
	// MetricParquetlogConsumerOldestProcessedMessage is age of oldest processed message.
	MetricParquetlogConsumerOldestProcessedMessage = "audit_parquetlog_age_oldest_processed_message"
	// MetricParquetlogConsumerCollectFailed is a count of the number of errors received from SQS collect.
	MetricParquetlogConsumerCollectFailed = "audit_parquetlog_errors_from_collect_count"
)

Athena audit log metrics.

View Source
const AdminRoleName = "admin"

AdminRoleName is the name of the default admin role for all local users if another role is not explicitly assigned

View Source
const (
	// HTTPNextProtoTLS is the NPN/ALPN protocol negotiated during
	// HTTP/1.1's TLS setup.
	// https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids
	HTTPNextProtoTLS = "http/1.1"
)
View Source
const (
	// KubeLegacyProxySuffix is the suffix used for legacy proxy services when
	// generating their Server names.
	KubeLegacyProxySuffix = "-proxy_service"
)
View Source
const MaxEnvironmentFileLines = 1000

MaxEnvironmentFileLines is the maximum number of lines in an environment file.

View Source
const MaxHTTPRequestSize = 10 * 1024 * 1024

MaxHTTPRequestSize is the maximum accepted size (in bytes) of the body of a received HTTP request. This limit is meant to be used with utils.ReadAtMost to prevent resource exhaustion attacks.

View Source
const MaxHTTPResponseSize = 10 * 1024 * 1024

MaxHTTPResponseSize is the maximum accepted size (in bytes) of the body of a received HTTP response. This limit is meant to be used with utils.ReadAtMost to prevent resource exhaustion attacks.
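
A generic stand-in for the bounded-read pattern described above (utils.ReadAtMost is Teleport's own helper; the function below is a hypothetical equivalent, not its actual implementation):

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

// readAtMost reads up to limit bytes and fails if the reader holds more,
// mirroring the resource-exhaustion guard described above.
func readAtMost(r io.Reader, limit int64) ([]byte, error) {
	data, err := io.ReadAll(io.LimitReader(r, limit+1))
	if err != nil {
		return nil, err
	}
	if int64(len(data)) > limit {
		return nil, errors.New("body exceeds the size limit")
	}
	return data, nil
}

func main() {
	const maxHTTPResponseSize = 10 * 1024 * 1024 // mirrors MaxHTTPResponseSize
	body, err := readAtMost(strings.NewReader("small body"), maxHTTPResponseSize)
	fmt.Println(string(body), err)
}
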

View Source
const MaxResourceSize = 1000000

MaxResourceSize is the maximum size (in bytes) of a serialized resource. This limit is typically only enforced against resources that are likely to arbitrarily grow (e.g. PluginData).

const SCP = "scp"

SCP is Secure Copy.

const StandardHTTPSPort = 443

StandardHTTPSPort is the default port used for the https URI scheme, cf. RFC 7230 § 2.7.2.

const (
	// SystemAccessApproverUserName names a Teleport user that acts as
	// an Access Request approver for access plugins
	SystemAccessApproverUserName = "@teleport-access-approval-bot"
)
const UserSingleUseCertTTL = time.Minute

UserSingleUseCertTTL is a TTL for per-connection user certificates.

const UserSystem = "system"

UserSystem defines a user as system.

const Version = "15.0.0-dev"
const WebAPIVersion = "v1"

WebAPIVersion is the current web API version.

Variables

var Gitref string

Gitref is set to the output of "git describe" during the build process.

var MinClientVersion string

MinClientVersion is the minimum client version required by the server.

Functions

func Component added in v1.0.0

func Component(components ...string) string

Component generates "component:subcomponent1:subcomponent2" strings used in debugging.
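A short usage example of the documented format (output shown as a comment):

package main

import (
	"fmt"

	"github.com/gravitational/teleport"
)

func main() {
	// Component joins its arguments with ":" to form names such as
	// "component:subcomponent1:subcomponent2", as documented above.
	fmt.Println(teleport.Component("proxy", "web")) // proxy:web
}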

func NewWebAssetsFilesystem

func NewWebAssetsFilesystem() (http.FileSystem, error)

NewWebAssetsFilesystem is a no-op in this build mode.

Types

type Principal

type Principal string

A principal name for use in SSH certificates.

const (
	// The localhost domain, for talking to a proxy or node on the same
	// machine.
	PrincipalLocalhost Principal = "localhost"
	// The IPv4 loopback address, for talking to a proxy or node on the same
	// machine.
	PrincipalLoopbackV4 Principal = "127.0.0.1"
	// The IPv6 loopback address, for talking to a proxy or node on the same
	// machine.
	PrincipalLoopbackV6 Principal = "::1"
)
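As a small illustrative sketch, the loopback principals can be collected into a plain string slice, for example when building the principal list of a certificate that must validate for connections to a proxy or node on the same machine:

package principalexample

import "github.com/gravitational/teleport"

// localPrincipals returns the principals that cover connections to a proxy
// or node on the same machine, converted from the typed constants above.
func localPrincipals() []string {
	return []string{
		string(teleport.PrincipalLocalhost),
		string(teleport.PrincipalLoopbackV4),
		string(teleport.PrincipalLoopbackV6),
	}
}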

Directories

Path Synopsis
api module
build.assets
kubectl-version
Command version-check that outputs the version of kubectl used by the build.
tooling Module
docs
e2e
aws
Package e2e contains end-to-end tests for Teleport's AWS integrations.
examples
jwt
gen
integration
Package integration tests Teleport at a high level, creating clusters of servers in memory, connecting them together and connecting to them.
db
hsm
integrations
kube-agent-updater
Code generated by "make version".
kube-agent-updater/hack
cosign-fixtures is a tool to generate valid and invalid cosign-signed artifacts. This is used to test the Cosign validator implementation.
kube-agent-updater/pkg/img
Package img contains the required interfaces and logic to validate that an image has been issued by Teleport and can be trusted.
lib
operator/apis/resources/v1
Package v1 contains API Schema definitions for the resources v1 API group +kubebuilder:object:generate=true +groupName=resources.teleport.dev
operator/apis/resources/v2
Package v2 contains API Schema definitions for the resources v2 API group +kubebuilder:object:generate=true +groupName=resources.teleport.dev
operator/apis/resources/v3
Package v3 contains API Schema definitions for the resources v3 API group +kubebuilder:object:generate=true +groupName=resources.teleport.dev
operator/apis/resources/v5
Package v5 contains API Schema definitions for the resources v5 API group +kubebuilder:object:generate=true +groupName=resources.teleport.dev
operator/apis/resources/v6
Package v6 contains API Schema definitions for the resources v6 API group +kubebuilder:object:generate=true +groupName=resources.teleport.dev
terraform Module
lib
agentless
Package agentless provides functions to allow connecting to registered OpenSSH (agentless) nodes.
ai
asciitable
Package asciitable implements a simple ASCII table formatter for printing tabular values into a text terminal.
auth
Package auth implements the certificate signing authority and access control server. The Authority server is composed of several parts:
auth/accesspoint
package accesspoint provides helpers for configuring caches in the context of setting up service-level auth access points.
auth/authclient
Package authclient contains common code for creating an auth server client which may use SSH tunneling through a proxy.
auth/keystore
Package keystore provides a generic client and associated helpers for handling private keys that may be backed by an HSM or KMS.
auth/test
package test contains CA authority acceptance test suite.
auth/testauthority
Package testauthority implements a wrapper around native.Keygen that uses pre-computed keys.
auth/webauthn
Package webauthn implements server-side support for the Web Authentication specification.
auth/webauthncli
Package webauthncli provides the client-side implementation for WebAuthn.
auth/webauthntypes
Package webauthntypes provides WebAuthn types and conversions for both client-side and server-side implementations.
auth/webauthnwin
Package webauthnwin is a wrapper around the Windows webauthn API.
backend
Package backend provides a storage backend abstraction layer.
backend/dynamo
Package dynamo implements DynamoDB storage backend for Teleport auth service, similar to etcd backend.
backend/etcdbk
Package etcdbk implements an Etcd-powered backend.
backend/firestore
Package firestore implements Firestore storage backend for Teleport auth service, similar to DynamoDB backend.
backend/kubernetes
Package kubernetes implements Kubernetes Secret backend used for persisting identity and state for agents running in Kubernetes clusters.
backend/lite
Package lite implements SQLite backend used for local persistent caches in proxies and nodes and for standalone auth service deployments.
backend/memory
Package memory implements backend interface using a combination of Minheap (to store expiring items) and B-Tree for storing sorted dictionary of items.
backend/test
Package test contains a backend acceptance test suite that is backend-implementation independent; each backend uses the suite to test itself.
benchmark
Package benchmark provides tools to run progressive or independent benchmarks against Teleport services.
bpf
cache
Package cache implements an event-driven cache layer that is used by auth servers, proxies and nodes.
client/db
Package db contains methods for working with database connection profiles that combine connection parameters for a particular database.
client/escape
Package escape implements client-side escape character logic.
client/identityfile
Package identityfile handles formatting and parsing of identity files.
cloud
Package cloud contains common methods and utilities for integrations with various cloud providers such as AWS, GCP or Azure.
config
Package config provides facilities for configuring Teleport daemons including
defaults
Package defaults contains default constants set in various parts of the Teleport codebase.
devicetrust/native
Package native implements OS-specific methods required by Device Trust.
events
Package events implements the audit log interface events.IAuditLog using filesystem backend.
events/firestoreevents
Package firestoreevents implements Firestore storage backend for Teleport event storage.
events/gcssessions
Package gcssessions implements GCS storage for Teleport session recording persistence.
gcp
httplib
Package httplib implements common utility functions for writing classic HTTP handlers.
joinserver
Package joinserver contains the implementation of the JoinService gRPC server which runs on both Auth and Proxy.
jwt
Package jwt is used to sign and verify JWT tokens used by application access.
kube
Package kube contains subpackages with utility functions and proxies to intercept and authenticate Kubernetes API traffic.
kube/kubeconfig
Package kubeconfig manages Teleport entries in a local kubeconfig file.
labels
Package labels provides a way to get dynamic labels.
limiter
Package limiter implements connection and rate limiters for Teleport.
modules
Package modules allows external packages to override certain behavioral aspects of Teleport.
multiplexer
Package multiplexer implements SSH and TLS multiplexing on the same listener.
pam
player
Package player includes an API to play back recorded sessions.
reversetunnel
Package reversetunnel sets up a persistent reverse tunnel between a remote site and the Teleport proxy: site agents dial the Teleport proxy's socket, and the Teleport proxy can connect to any server through this tunnel.
reversetunnel/track
Package track provides a simple interface to keep track of proxies as described via "gossip" messages shared by other proxies as part of the reverse tunnel protocol, and to decide if and when it's appropriate to attempt a new connection to a proxy load balancer at any given moment.
secret
Package secret implements an authenticated encryption with associated data (AEAD) cipher to be used when symmetric encryption is required in Teleport.
service
Package service implements the Teleport running service, taking care of initialization, cleanup and shutdown procedures.
service/servicecfg
Package servicecfg contains the runtime configuration for Teleport services.
services
Package services implements stateful services provided by Teleport, like certificate authority management, user and web sessions, events and logs.
services/local
Package local implements services interfaces using the abstract key-value backend provided by lib/backend, which makes it possible for Teleport to run using boltdb or etcd.
session
Package session is used for bookkeeping of SSH interactive sessions that happen in real time across the Teleport cluster.
srv
srv/app
Package app runs the application proxy process.
srv/db/common
Package common provides common utilities used by all supported database implementations.
srv/db/mongodb
Package mongodb implements database access proxy that handles authentication, authorization and protocol parsing of connections from MongoDB clients to MongoDB clusters.
srv/db/mongodb/protocol
Package protocol implements reading/writing MongoDB wire protocol messages from/to client/server and converting them into parsed data structures.
srv/db/mysql
Package mysql implements MySQL protocol support for database access.
srv/db/mysql/protocol
Package protocol implements parts of the MySQL wire protocol which are needed for the service to be able to interpret the protocol messages but are not readily available in a convenient form in the vendored MySQL library.
srv/db/postgres
Package postgres implements components of the database access subsystem that proxy connections between Postgres clients (like psql or pgAdmin) and Postgres database servers with full protocol awareness.
srv/db/redis
Package redis implements database access proxy that handles authentication, authorization and protocol parsing of connections from Redis clients to Redis standalone or Redis clusters.
srv/db/secrets
Package secrets implements clients for managing secret values using secret management tools like AWS Secrets Manager.
srv/db/sqlserver/kinit
Package kinit provides utilities for interacting with a KDC (Key Distribution Center) for Kerberos5.
srv/desktop
Package desktop implements Desktop Access services, like windows_desktop_access.
srv/desktop/rdp/rdpclient
Package rdpclient implements an RDP client.
srv/desktop/tdp
Package tdp implements the Teleport desktop protocol (TDP) encoder/decoder.
srv/regular
Package regular implements an SSH server that supports multiplexing, tunneling and SSH connection proxying, and only supports key-based auth.
srv/uacc
Package uacc concerns itself with updating the user account database and log on nodes that a client connects to with an interactive session.
sshca
Package sshca specifies interfaces for SSH certificate authorities.
sshutils
Package sshutils contains the implementations of the base SSH server used throughout Teleport.
sshutils/scp
Package scp handles file uploads and downloads via the SCP command.
sshutils/sftp
Package sftp handles file transfers client-side via SFTP.
sshutils/x11
Package x11 contains the SSH client/server helper functions for performing X11 forwarding.
tlsca
Package tlsca provides the internal TLS certificate authority used for mutual TLS authentication between the auth server, internal Teleport components and external clients.
utils/parse
TODO(nklaassen): evaluate the risks and utility of allowing traits to be used as regular expressions.
utils/socks
package socks implements a SOCKS5 handshake.
utils/typical
typical (TYPed predICAte Library) is a library for building better predicate expression parsers faster.
web
Package web implements the web proxy handler that provides a web interface to view and connect to Teleport nodes.
web/app
Package app connects to applications over a reverse tunnel and forwards HTTP requests to them.
tool
teleport/testenv
Package testenv provides functions for creating test servers for testing.
tsh
