config

package
v0.0.0-...-37c3036
Published: May 10, 2024 License: Apache-2.0 Imports: 46 Imported by: 0

Documentation

Overview

Package config knows how to read and parse config.yaml.

Index

Constants

const (
	// DefaultJobTimeout represents the default deadline for a prow job.
	DefaultJobTimeout = 24 * time.Hour

	// DefaultMoonrakerClientTimeout is the default timeout for all Moonraker
	// clients. Note that this is a client-side timeout, and does not affect
	// whether Moonraker itself will finish doing the Git fetch/parsing in the
	// background (esp. for new repos that need the extra cloning time).
	DefaultMoonrakerClientTimeout = 10 * time.Minute

	ProwImplicitGitResource = "PROW_IMPLICIT_GIT_REF"

	// ConfigVersionFileName is the name of a file that will be added to
	// all configmaps by the configupdater and contains the git SHA that
	// triggered said config update. The config loading in turn will pick
	// it up if present. This allows components to include the config version
	// in their logs, which can be useful for debugging.
	ConfigVersionFileName = "VERSION"

	DefaultTenantID = "GlobalDefaultID"

	ProwIgnoreFileName = ".prowignore"
)

Variables

var ReProwExtraRef = regexp.MustCompile(`PROW_EXTRA_GIT_REF_(\d+)`)
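
A small, hedged sketch of how this regexp might be used to recover the index from an extra-ref environment variable name (the variable name below is illustrative):

if m := ReProwExtraRef.FindStringSubmatch("PROW_EXTRA_GIT_REF_2"); m != nil {
	// m[1] holds the captured digits, "2" in this case.
	index, _ := strconv.Atoi(m[1])
	fmt.Println(index)
}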

Functions

func BaseSHAFromContextDescription

func BaseSHAFromContextDescription(description string) string

BaseSHAFromContextDescription is used by Tide to decode a baseSHA from a github status context description created via ContextDescriptionWithBaseSha. It will return an empty string if no valid sha was found.

func BranchRequirements

func BranchRequirements(branch string, jobs []Presubmit, requireManuallyTriggeredJobs *bool) ([]string, []string, []string)

BranchRequirements partitions status contexts for a given org/repo branch into three buckets:

  • contexts that are always required to be present
  • contexts that are required, _if_ present
  • contexts that are always optional

func ClearCompiledRegexes

func ClearCompiledRegexes(presubmits []Presubmit)

ClearCompiledRegexes removes compiled regexes from the presubmits, useful for testing when deep equality is needed between presubmits

func ContainsInRepoConfigPath

func ContainsInRepoConfigPath(files []string) bool

ContainsInRepoConfigPath indicates whether the specified list of changed files (repo relative paths) includes a file that might be an inrepo config file.

This function could report a false positive as it doesn't consider .prowignore files. It is designed to be used to help short circuit when we know a change doesn't touch the inrepo config.

func ContextDescriptionWithBaseSha

func ContextDescriptionWithBaseSha(humanReadable, baseSHA string) string

ContextDescriptionWithBaseSha is used by the GitHub reporting to store the baseSHA of a context in the status context description. Tide will read this if present using the BaseSHAFromContextDescription func. Storing the baseSHA in the status context allows us to store job results pretty much forever, instead of having to rerun everything after sinker cleaned up the ProwJobs.
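
A minimal sketch of the round trip between ContextDescriptionWithBaseSha and BaseSHAFromContextDescription (the description text and SHA below are made up):

baseSHA := "0123456789abcdef0123456789abcdef01234567"
desc := ContextDescriptionWithBaseSha("Job succeeded.", baseSHA)
// Later, Tide recovers the baseSHA from the stored status context description;
// BaseSHAFromContextDescription returns "" when no valid SHA is found.
fmt.Println(BaseSHAFromContextDescription(desc) == baseSHA) // true, assuming baseSHA passes the validity check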

func DefaultAndValidateProwYAML

func DefaultAndValidateProwYAML(c *Config, p *ProwYAML, identifier string) error

func DefaultRerunCommandFor

func DefaultRerunCommandFor(name string) string

DefaultRerunCommandFor returns the default rerun command for the job with this name.

func DefaultTriggerFor

func DefaultTriggerFor(name string) string

DefaultTriggerFor returns the default regexp string used to match comments that should trigger the job with this name.
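
For a hypothetical job named pull-foo-unit, the defaults look roughly like this (the exact strings are an assumption, not a contract):

fmt.Println(DefaultTriggerFor("pull-foo-unit"))      // a regexp string that matches comments such as "/test pull-foo-unit"
fmt.Println(DefaultRerunCommandFor("pull-foo-unit")) // a rerun command such as "/test pull-foo-unit"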

func GetAndCheckRefs

func GetAndCheckRefs(
	baseSHAGetter RefGetter,
	headSHAGetters ...RefGetter) (string, []string, error)

GetAndCheckRefs resolves all uniquely-identifying information related to the retrieval of a *ProwYAML.

func GetCMMountWatcher

func GetCMMountWatcher(eventFunc func() error, errFunc func(error, string), path string) (func(ctx context.Context), error)

GetCMMountWatcher returns a function that watches a configmap mounted directory and runs the provided "eventFunc" every time the directory gets updated and the provided "errFunc" every time it encounters an error. Example of a possible eventFunc:

func() error {
	value, err := RunUpdate()
	if err != nil {
		return err
	}
	globalValue = value
	return nil
}

Example of errFunc:

func(err error, msg string) {
	logrus.WithError(err).Error(msg)
}

func GetFileWatcher

func GetFileWatcher(eventFunc func(*fsnotify.Watcher) error, errFunc func(error, string), files ...string) (func(ctx context.Context), error)

GetFileWatcher returns a function that watches the specified file(s), running the "eventFunc" whenever an event for the file(s) occurs and the "errFunc" whenever an error is encountered. In this function, the eventFunc has access to the watcher, allowing the eventFunc to add new files/directories to be watched as needed. Example of a possible eventFunc:

func(w *fsnotify.Watcher) error {
	value, err := RunUpdate()
	if err != nil {
		return err
	}
	globalValue = value
	newFiles := getNewFiles()
	for _, file := range newFiles {
		if err := w.Add(file); err != nil {
			return err
		}
	}
	return nil
}

Example of errFunc:

func(err error, msg string) {
	logrus.WithError(err).Error(msg)
}

func IsConfigMapMount

func IsConfigMapMount(path string) (bool, error)

IsConfigMapMount determines whether the provided directory is a configmap mounted directory

func IsNotAllowedBucketError

func IsNotAllowedBucketError(err error) bool

func ListCMsAndDirs

func ListCMsAndDirs(path string) (cms sets.Set[string], dirs sets.Set[string], err error)

ListCMsAndDirs returns two sets of strings containing the paths of configmap-mounted directories and standard directories, respectively, starting from the provided path. This can be used to watch a large number of files, some of which may be populated via configmaps.
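
A hedged sketch of pairing ListCMsAndDirs with the watcher helpers above (the path is illustrative):

cms, dirs, err := ListCMsAndDirs("/etc/config")
if err != nil {
	logrus.WithError(err).Error("Could not list configmaps and directories.")
}
for cm := range cms {
	logrus.Infof("configmap-mounted dir (candidate for GetCMMountWatcher): %s", cm)
}
for dir := range dirs {
	logrus.Infof("regular dir (candidate for GetFileWatcher): %s", dir)
}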

func NotAllowedBucketError

func NotAllowedBucketError(err error) error

NotAllowedBucketError wraps an error and returns a notAllowedBucketError error.

func OrgReposToStrings

func OrgReposToStrings(vs []OrgRepo) []string

OrgReposToStrings converts a list of OrgRepo to its String() equivalent.

func ReadFileMaybeGZIP

func ReadFileMaybeGZIP(path string) ([]byte, error)

ReadFileMaybeGZIP wraps os.ReadFile, returning the decompressed contents if the file is gzipped, or otherwise the raw contents.

func SetPostsubmitRegexes

func SetPostsubmitRegexes(ps []Postsubmit) error

SetPostsubmitRegexes compiles and validates all the regular expressions for the provided postsubmits.

func SetPresubmitRegexes

func SetPresubmitRegexes(js []Presubmit) error

SetPresubmitRegexes compiles and validates all the regular expressions for the provided presubmits.

func SplitRepoName

func SplitRepoName(fullRepoName string) (string, string, error)

func ValidateController

func ValidateController(c *Controller, templateFuncMaps ...template.FuncMap) error

ValidateController validates the provided controller config.

func ValidatePipelineRunSpec

func ValidatePipelineRunSpec(jobType prowapi.ProwJobType, extraRefs []prowapi.Refs, spec *pipelinev1beta1.PipelineRunSpec) error

func ValidateRefs

func ValidateRefs(repo string, jobBase JobBase) error

ValidateRefs validates the extra refs on a presubmit for one repo.

Types

type Agent

type Agent struct {
	// contains filtered or unexported fields
}

Agent watches a path and automatically loads the config stored therein.

func (*Agent) Config

func (ca *Agent) Config() *Config

Config returns the latest config. Do not modify the config.

func (*Agent) Set

func (ca *Agent) Set(c *Config)

Set sets the config. Useful for testing. Also used by statusreconciler to load last known config

func (*Agent) SetWithoutBroadcast

func (ca *Agent) SetWithoutBroadcast(c *Config)

SetWithoutBroadcast sets the config, but does not broadcast the event to those listening for config reload changes. This is useful if you want to modify the Config in the Agent, from the point of view of the subscriber to the new one that was detected from the DeltaChan; if you just used Set() instead of this in such a situation, you would end up clogging the DeltaChan because you would be acting as both the consumer and producer of the DeltaChan.

func (*Agent) Start

func (ca *Agent) Start(prowConfig, jobConfig string, additionalProwConfigDirs []string, supplementalProwConfigsFileNameSuffix string, additionals ...func(*Config) error) error

Start will begin polling the config file at the path. If the first load fails, Start will return the error and abort. Future load failures will log the failure message but continue attempting to load.

func (*Agent) StartWatch

func (ca *Agent) StartWatch(prowConfig, jobConfig string, supplementalProwConfigDirs []string, supplementalProwConfigsFileNameSuffix string, additionals ...func(*Config) error) error

StartWatch will begin watching the config files at the provided paths. If the first load fails, Start will return the error and abort. Future load failures will log the failure message but continue attempting to load. This function will replace Start in a future release.

func (*Agent) Subscribe

func (ca *Agent) Subscribe(subscription DeltaChan)

Subscribe registers the channel for messages on config reload. The caller can expect a copy of the previous and current config to be sent down the subscribed channel when a new configuration is loaded.
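
A hedged end-to-end sketch of typical Agent usage (the paths are illustrative; no supplemental config dirs or filename suffix are used here):

ca := &Agent{}
if err := ca.Start("/etc/config/config.yaml", "/etc/job-config", nil, ""); err != nil {
	logrus.WithError(err).Fatal("Error starting config agent.")
}

// Always read the latest snapshot through Config(); do not hold on to it across reloads.
cfg := ca.Config()
logrus.Infof("loaded %d periodic job(s)", len(cfg.AllPeriodics()))

// Optionally, get notified about reloads.
deltas := make(chan Delta)
ca.Subscribe(deltas)
go func() {
	for d := range deltas {
		logrus.Infof("config reloaded: %d -> %d periodic job(s)", len(d.Before.Periodics), len(d.After.Periodics))
	}
}()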

type AllowedApiClient

type AllowedApiClient struct {
	// ApiClientGcp contains GoogleCloudPlatform details about a web API client.
	// We currently only support GoogleCloudPlatform but other cloud vendors are
	// possible as additional fields in this struct.
	GCP *ApiClientGcp `json:"gcp,omitempty"`

	// AllowedJobsFilters contains information about what kinds of Prow jobs this
	// API client is authorized to trigger.
	AllowedJobsFilters []AllowedJobsFilter `json:"allowed_jobs_filters,omitempty"`
}

func (*AllowedApiClient) GetApiClientCloudVendor

func (allowedApiClient *AllowedApiClient) GetApiClientCloudVendor() (ApiClientCloudVendor, error)

type AllowedJobsFilter

type AllowedJobsFilter struct {
	TenantID string `json:"tenant_id,omitempty"`
}

AllowedJobsFilter defines filters for jobs that are allowed by an authenticated API client.

func (AllowedJobsFilter) Validate

func (ajf AllowedJobsFilter) Validate() error

type ApiClientCloudVendor

type ApiClientCloudVendor interface {
	GetVendorName() string
	GetRequiredMdHeaders() []string
	GetUUID() string
	Validate() error
}

type ApiClientGcp

type ApiClientGcp struct {
	// EndpointApiConsumerType is the expected value of the
	// x-endpoint-api-consumer-type HTTP metadata header. Typically this will be
	// "PROJECT".
	EndpointApiConsumerType string `json:"endpoint_api_consumer_type,omitempty"`
	// EndpointApiConsumerNumber is the expected value of the
	// x-endpoint-api-consumer-number HTTP metadata header. Typically this
	// encodes the GCP Project number value, which uniquely identifies a GCP
	// Project.
	EndpointApiConsumerNumber string `json:"endpoint_api_consumer_number,omitempty"`
}

ApiClientGcp encodes GCP Cloud Endpoints-specific HTTP metadata header information, which is expected to be populated by the ESPv2 sidecar container for GKE applications (in our case, the gangway pod).

func (*ApiClientGcp) GetRequiredMdHeaders

func (gcp *ApiClientGcp) GetRequiredMdHeaders() []string

func (*ApiClientGcp) GetUUID

func (gcp *ApiClientGcp) GetUUID() string

func (*ApiClientGcp) GetVendorName

func (gcp *ApiClientGcp) GetVendorName() string

func (*ApiClientGcp) Validate

func (gcp *ApiClientGcp) Validate() error

type Branch

type Branch struct {
	Policy `json:",inline"`
}

Branch holds protection policy overrides for a particular branch.

type BranchProtection

type BranchProtection struct {
	Policy `json:",inline"`
	// ProtectTested determines if branch protection rules are set for all repos
	// that Prow has registered jobs for, regardless of if those repos are in the
	// branch protection config.
	ProtectTested *bool `json:"protect-tested-repos,omitempty"`
	// Orgs holds branch protection options for orgs by name
	Orgs map[string]Org `json:"orgs,omitempty"`
	// AllowDisabledPolicies allows a child to disable all protection even if the
	// branch has inherited protection options from a parent.
	AllowDisabledPolicies *bool `json:"allow_disabled_policies,omitempty"`
	// AllowDisabledJobPolicies allows a branch to choose to opt out of branch protection
	// even if Prow has registered required jobs for that branch.
	AllowDisabledJobPolicies *bool `json:"allow_disabled_job_policies,omitempty"`
	// ProtectReposWithOptionalJobs will make the Branchprotector manage required status
	// contexts on repositories that only have optional jobs (default: false)
	ProtectReposWithOptionalJobs *bool `json:"protect_repos_with_optional_jobs,omitempty"`
}

BranchProtection specifies the global branch protection policy

func (BranchProtection) GetOrg

func (bp BranchProtection) GetOrg(name string) *Org

GetOrg returns the org config after merging in any global policies.

func (BranchProtection) HasManagedBranches

func (bp BranchProtection) HasManagedBranches() bool

HasManagedBranches returns true if the global branch protector's config has managed branches

func (BranchProtection) HasManagedOrgs

func (bp BranchProtection) HasManagedOrgs() bool

HasManagedOrgs returns true if the global branch protector's config has managed orgs

func (BranchProtection) HasManagedRepos

func (bp BranchProtection) HasManagedRepos() bool

HasManagedRepos returns true if the global branch protector's config has managed repos

type Brancher

type Brancher struct {
	// Do not run against these branches. Default is no branches.
	SkipBranches []string `json:"skip_branches,omitempty"`
	// Only run against these branches. Default is all branches.
	Branches []string `json:"branches,omitempty"`
	// contains filtered or unexported fields
}

Brancher is for shared code between jobs that only run against certain branches. An empty brancher runs against all branches.

func (*Brancher) DeepCopy

func (in *Brancher) DeepCopy() *Brancher

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Brancher.

func (*Brancher) DeepCopyInto

func (in *Brancher) DeepCopyInto(out *Brancher)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (Brancher) Intersects

func (br Brancher) Intersects(other Brancher) bool

Intersects checks if other Brancher would trigger for the same branch.

func (Brancher) RunsAgainstAllBranch

func (br Brancher) RunsAgainstAllBranch() bool

RunsAgainstAllBranch returns true if both branches and skip_branches are unset.

func (Brancher) ShouldRun

func (br Brancher) ShouldRun(branch string) bool

ShouldRun returns true if the input branch matches, given the allow/deny list.

type Branding

type Branding struct {
	Logo string `json:"logo,omitempty"`
	// Favicon is the location of the favicon that will be loaded in deck.
	Favicon string `json:"favicon,omitempty"`
	// BackgroundColor is the color of the background.
	BackgroundColor string `json:"background_color,omitempty"`
	// HeaderColor is the color of the header.
	HeaderColor string `json:"header_color,omitempty"`
}

Branding holds branding configuration for deck.

type BypassRestrictions

type BypassRestrictions struct {
	Users []string `json:"users,omitempty"`
	Teams []string `json:"teams,omitempty"`
}

BypassRestrictions defines who can bypass PR restrictions. Users and Teams items are appended to parent lists.

type CacheKey

type CacheKey string

CacheKey acts as a key to the InRepoConfigCache. We construct it by marshaling CacheKeyParts into a JSON string.

type CacheKeyParts

type CacheKeyParts struct {
	Identifier string   `json:"identifier"`
	BaseSHA    string   `json:"baseSHA"`
	HeadSHAs   []string `json:"headSHAs"`
}

The CacheKeyParts is a struct because we want to keep the various components that make up the key separate to help keep tests readable. Because the headSHAs field is a slice, the overall CacheKey object is not hashable and cannot be used directly as a key. Instead we marshal it to JSON first, then convert its type to CacheKey.

Users should take care to ensure that headSHAs remains stable (order matters).

func (*CacheKeyParts) CacheKey

func (kp *CacheKeyParts) CacheKey() (CacheKey, error)

CacheKey converts a CacheKeyParts object into a JSON string (to be used as a CacheKey).
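
A small sketch of building a key (the identifier and SHAs are made up; keep the HeadSHAs order stable so equal inputs map to equal keys):

parts := CacheKeyParts{
	Identifier: "org/repo",
	BaseSHA:    "abc123",
	HeadSHAs:   []string{"def456", "0987fe"},
}
key, err := parts.CacheKey()
if err != nil {
	logrus.WithError(err).Error("Could not construct cache key.")
}
logrus.Infof("cache key (JSON encoding of parts): %s", key)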

type ChangedFilesProvider

type ChangedFilesProvider func() ([]string, error)

ChangedFilesProvider returns a slice of modified files.

func NewGitHubDeferredChangedFilesProvider

func NewGitHubDeferredChangedFilesProvider(client githubClient, org, repo string, num int) ChangedFilesProvider

NewGitHubDeferredChangedFilesProvider uses a closure to lazily retrieve the file changes only if they are needed. We only have to fetch the changes if there is at least one RunIfChanged/SkipIfOnlyChanged job that is not being force run (due to a `/retest` after a failure or because it is explicitly triggered with `/test foo`).

type Config

type Config struct {
	JobConfig
	ProwConfig
}

Config is a read-only snapshot of the config.

func Load

func Load(prowConfig, jobConfig string, supplementalProwConfigDirs []string, supplementalProwConfigsFileNameSuffix string, additionals ...func(*Config) error) (c *Config, err error)

Load loads and parses the config at path.
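
A hedged startup sketch (the paths are illustrative; no supplemental config dirs or filename suffix are used):

cfg, err := Load("/etc/config/config.yaml", "/etc/job-config", nil, "")
if err != nil {
	logrus.WithError(err).Fatal("Error loading config.")
}
logrus.Infof("loaded %d periodic job(s) and %d repo(s) with static presubmits", len(cfg.AllPeriodics()), len(cfg.PresubmitsStatic))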

func LoadStrict

func LoadStrict(prowConfig, jobConfig string, supplementalProwConfigDirs []string, supplementalProwConfigsFileNameSuffix string, additionals ...func(*Config) error) (c *Config, err error)

LoadStrict loads and parses the config at path. Unlike Load it unmarshalls yaml with strict parsing.

func (*Config) BranchProtectionWarnings

func (c *Config) BranchProtectionWarnings(logger *logrus.Entry, presubmits map[string][]Presubmit)

BranchProtectionWarnings logs two sets of warnings:

  • The list of repos with unprotected branches,
  • The list of repos with disabled policies, i.e. Protect set to false, because any branches not explicitly specified in the configuration will be unprotected.

func (*Config) DefaultPeriodic

func (c *Config) DefaultPeriodic(periodic *Periodic) error

DefaultPeriodic defaults (mutates) a single Periodic.

func (*Config) GetBranchProtection

func (c *Config) GetBranchProtection(org, repo, branch string, presubmits []Presubmit) (*Policy, error)

GetBranchProtection returns the policy for a given branch.

Handles merging any policies defined at repo/org/global levels into the branch policy.

func (*Config) GetPolicy

func (c *Config) GetPolicy(org, repo, branch string, b Branch, presubmits []Presubmit, protectedOnGitHub *bool) (*Policy, error)

GetPolicy returns the protection policy for the branch, after merging in presubmits.

func (*Config) GetPostsubmits

func (c *Config) GetPostsubmits(gc git.ClientFactory, identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) ([]Postsubmit, error)

GetPostsubmits will return all postsubmits for the given identifier. This includes Postsubmits that are versioned inside the tested repo, if the inrepoconfig feature is enabled. Consumers that pass in a RefGetter implementation that calls GitHub and also need the result of that call can keep a pointer to its result, but must nil-check that pointer before accessing it.

func (*Config) GetPostsubmitsStatic

func (c *Config) GetPostsubmitsStatic(identifier string) []Postsubmit

GetPostsubmitsStatic will return postsubmits for the given identifier from Prow's main (static) config; it does not include postsubmits versioned inside the tested repo.

func (*Config) GetPresubmits

func (c *Config) GetPresubmits(gc git.ClientFactory, identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) ([]Presubmit, error)

GetPresubmits will return all presubmits for the given identifier. This includes Presubmits that are versioned inside the tested repo, if the inrepoconfig feature is enabled. Consumers that pass in a RefGetter implementation that calls GitHub and also need the result of that call can keep a pointer to its result, but must nil-check that pointer before accessing it.
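
A sketch of wrapping already-known SHAs in getters (this assumes RefGetter is a plain func() (string, error); cfg is a *Config and gc a git.ClientFactory obtained elsewhere):

baseSHAGetter := func() (string, error) { return "abc123", nil }
headSHAGetter := func() (string, error) { return "def456", nil }
presubmits, err := cfg.GetPresubmits(gc, "org/repo", "main", baseSHAGetter, headSHAGetter)
if err != nil {
	logrus.WithError(err).Error("Could not get presubmits.")
} else {
	logrus.Infof("found %d presubmit(s) for org/repo", len(presubmits))
}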

func (*Config) GetPresubmitsStatic

func (c *Config) GetPresubmitsStatic(identifier string) []Presubmit

GetPresubmitsStatic will return presubmits for the given identifier from Prow's main (static) config; it does not include presubmits versioned inside the tested repo.

func (*Config) GetProwJobDefault

func (c *Config) GetProwJobDefault(repo, cluster string) *prowapi.ProwJobDefault

GetProwJobDefault finds the resolved prowJobDefault config for a given repo and cluster.

func (Config) GetTideContextPolicy

func (c Config) GetTideContextPolicy(gitClient git.ClientFactory, org, repo, branch string, baseSHAGetter RefGetter, headSHA string) (*TideContextPolicy, error)

GetTideContextPolicy parses the prow config to find context merge options. If none are set, it will use the configured prow jobs and the default github combined status. Otherwise, it will use the branch protection settings or the listed jobs.

func (*Config) IdentifyAllowedClient

func (c *Config) IdentifyAllowedClient(md *metadata.MD) (*AllowedApiClient, error)

IdentifyAllowedClient looks at the HTTP request headers (metadata) and tries to match it up with an allowlisted Client already defined in the main Config.

Each supported client type (e.g., GCP) has custom logic around the HTTP metadata headers to know what kind of headers to look for. Different cloud vendors will have different HTTP metadata headers, although technically nothing stops users from injecting these headers manually on their own.

func (*Config) InRepoConfigAllowsCluster

func (c *Config) InRepoConfigAllowsCluster(clusterName, identifier string) bool

InRepoConfigAllowsCluster determines if a given cluster may be used for a given repository. Assumes that the config will not include http:// or https://.

func (*Config) InRepoConfigEnabled

func (c *Config) InRepoConfigEnabled(identifier string) bool

InRepoConfigEnabled returns whether InRepoConfig is enabled for a given repository. No assumption is made about whether the identifier includes an http:// or https:// prefix.

func (*Config) ValidateJobConfig

func (c *Config) ValidateJobConfig() error

ValidateJobConfig validates that all the jobspecs/presets are valid. If you are mutating the jobs, please add it to finalizeJobConfig above.

func (*Config) ValidateStorageBucket

func (c *Config) ValidateStorageBucket(bucketName string) error

ValidateStorageBucket validates a storage bucket (unless the `Deck.SkipStoragePathValidation` field is true). The bucket name must be included in any of the following:

  1. Any job's `.DecorationConfig.GCSConfiguration.Bucket` (except jobs defined externally via InRepoConfig).
  2. `Plank.DefaultDecorationConfigs.GCSConfiguration.Bucket`.
  3. `Deck.AdditionalAllowedBuckets`.

type ContextPolicy

type ContextPolicy struct {
	// Contexts appends required contexts that must be green to merge
	Contexts []string `json:"contexts,omitempty"`
	// Strict overrides whether new commits in the base branch require updating the PR if set
	Strict *bool `json:"strict,omitempty"`
}

ContextPolicy configures required github contexts. When merging policies, contexts are appended to context list from parent. Strict determines whether merging to the branch invalidates existing contexts.

type Controller

type Controller struct {
	// JobURLTemplateString compiles into JobURLTemplate at load time.
	JobURLTemplateString string `json:"job_url_template,omitempty"`
	// JobURLTemplate is compiled at load time from JobURLTemplateString. It
	// will be passed a prowapi.ProwJob and is used to set the URL for the
	// "Details" link on GitHub as well as the link from deck.
	JobURLTemplate *template.Template `json:"-"`

	// ReportTemplateString compiles into ReportTemplate at load time.
	ReportTemplateString string `json:"report_template,omitempty"`

	// ReportTemplateStrings is a mapping of template comments.
	// Use `org/repo`, `org` or `*` as a key.
	ReportTemplateStrings map[string]string `json:"report_templates,omitempty"`

	// ReportTemplates is a mapping of templates that is compiled at load
	// time from ReportTemplateStrings.
	ReportTemplates map[string]*template.Template `json:"-"`

	// MaxConcurrency is the maximum number of tests running concurrently that
	// will be allowed by the controller. 0 implies no limit.
	MaxConcurrency int `json:"max_concurrency,omitempty"`

	// MaxGoroutines is the maximum number of goroutines spawned inside the
	// controller to handle tests. Defaults to 20. Needs to be a positive
	// number.
	MaxGoroutines int `json:"max_goroutines,omitempty"`
}

Controller holds configuration applicable to all agent-specific prow controllers.

func (*Controller) ReportTemplateForRepo

func (c *Controller) ReportTemplateForRepo(refs *prowapi.Refs) *template.Template

ReportTemplateForRepo returns the template that belongs to a specific repository. If the repository doesn't exist in the report_templates configuration, it will inherit the values from its organization; otherwise, the default values will be used.

type CopyableRegexp

type CopyableRegexp struct {
	*regexp.Regexp
}

CopyableRegexp wraps around regexp.Regexp. Its sole purpose is to allow us to create a manual DeepCopyInto() method for it, because the standard library's regexp package does not define one for us (making it impossible to generate DeepCopy() methods for any type that uses the regexp.Regexp type directly).

func (*CopyableRegexp) DeepCopy

func (in *CopyableRegexp) DeepCopy() *CopyableRegexp

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CopyableRegexp.

func (*CopyableRegexp) DeepCopyInto

func (in *CopyableRegexp) DeepCopyInto(out *CopyableRegexp)

type Deck

type Deck struct {
	// Spyglass specifies which viewers will be used for which artifacts when viewing a job in Deck.
	Spyglass Spyglass `json:"spyglass,omitempty"`
	// TideUpdatePeriod specifies how often Deck will fetch status from Tide. Defaults to 10s.
	TideUpdatePeriod *metav1.Duration `json:"tide_update_period,omitempty"`
	// HiddenRepos is a list of orgs and/or repos that should not be displayed by Deck.
	HiddenRepos []string `json:"hidden_repos,omitempty"`
	// ExternalAgentLogs ensures external agents can expose
	// their logs in prow.
	ExternalAgentLogs []ExternalAgentLog `json:"external_agent_logs,omitempty"`
	// Branding of the frontend
	Branding *Branding `json:"branding,omitempty"`
	// GoogleAnalytics, if specified, include a Google Analytics tracking code on each page.
	GoogleAnalytics string `json:"google_analytics,omitempty"`
	// RerunAuthConfigs is not deprecated, but DefaultRerunAuthConfigs should be preferred.
	// It remains a part of Deck for the purposes of backwards compatibility.
	// RerunAuthConfigs is a map of configs that specify who is able to trigger job reruns. The field
	// accepts a key of: `org/repo`, `org` or `*` (wildcard) to define what GitHub org (or repo) a particular
	// config applies to and a value of: `RerunAuthConfig` struct to define the users/groups authorized to rerun jobs.
	RerunAuthConfigs RerunAuthConfigs `json:"rerun_auth_configs,omitempty"`
	// DefaultRerunAuthConfigs is a list of DefaultRerunAuthConfigEntry structures that specify who can
	// trigger job reruns. Reruns are based on whether the entry's org/repo or cluster matches with the
	// expected fields in the given configuration.
	//
	// Each entry in the slice specifies Repo and Cluster regexp filter fields to
	// match against jobs and a corresponding RerunAuthConfig. The most specific
	// entry matching the job is used for authentication purposes.
	//
	// This field is smarter than the RerunAuthConfigs, because each
	// entry includes additional Cluster regexp information that the old format
	// does not consider.
	//
	// This field is mutually exclusive with the RerunAuthConfigs field.
	DefaultRerunAuthConfigs []*DefaultRerunAuthConfigEntry `json:"default_rerun_auth_configs,omitempty"`
	// SkipStoragePathValidation skips validation that restricts artifact requests to specific buckets.
	// By default, buckets listed in the GCSConfiguration are automatically allowed.
	// Additional locations can be allowed via `AdditionalAllowedBuckets` fields.
	// When unspecified (nil), it defaults to false
	SkipStoragePathValidation *bool `json:"skip_storage_path_validation,omitempty"`
	// AdditionalAllowedBuckets is a list of storage buckets to allow in artifact requests
	// (in addition to those listed in the GCSConfiguration).
	// Setting this field requires "SkipStoragePathValidation" also be set to `false`.
	AdditionalAllowedBuckets []string `json:"additional_allowed_buckets,omitempty"`
	// AllKnownStorageBuckets contains all storage buckets configured in all of the
	// job configs.
	AllKnownStorageBuckets sets.Set[string] `json:"-"`
}

Deck holds config for deck.

func (*Deck) FinalizeDefaultRerunAuthConfigs

func (d *Deck) FinalizeDefaultRerunAuthConfigs() error

FinalizeDefaultRerunAuthConfigs prepares the entries of Deck.DefaultRerunAuthConfigs for use in finalizing the job config. It parses either d.RerunAuthConfigs or d.DefaultRerunAuthConfigEntries, not both. Old format: map[string]*prowapi.RerunAuthConfig where the key is org, org/repo, or "*". New format: []*DefaultRerunAuthConfigEntry. If the old format is parsed, it is converted to the new format, then all filter regexps are compiled.

func (*Deck) GetRerunAuthConfig

func (d *Deck) GetRerunAuthConfig(jobSpec *prowapi.ProwJobSpec) *prowapi.RerunAuthConfig

func (*Deck) Validate

func (d *Deck) Validate() error

Validate performs validation and sanitization on the Deck object.

type DefaultDecorationConfigEntry

type DefaultDecorationConfigEntry struct {

	// OrgRepo matches against the "org" or "org/repo" that the presubmit or postsubmit
	// is associated with. If the job is a periodic, extra_refs[0] is used. If the
	// job is a periodic without extra_refs, the empty string will be used.
	// If this field is omitted all jobs will match.
	OrgRepo string `json:"repo,omitempty"`
	// Cluster matches against the cluster alias of the build cluster that the
	// ProwJob is configured to run on. Recall that ProwJobs default to running on
	// the "default" build cluster if they omit the "cluster" field in config.
	Cluster string `json:"cluster,omitempty"`

	// Config is the DecorationConfig to apply if the filter fields all match the
	// ProwJob. Note that when multiple entries match a ProwJob they are all used
	// by sequentially merging with later entries overriding fields from earlier
	// entries.
	Config *prowapi.DecorationConfig `json:"config,omitempty"`
}

DefaultDecorationConfigEntry contains a DecorationConfig and a set of regexps. If the regexps here match a ProwJob, then that ProwJob uses defaults by looking at the DecorationConfig defined here in this entry.

If multiple entries match a single ProwJob, the multiple entries' DecorationConfigs are merged, with later entries overriding values from earlier entries. Then finally that merged DecorationConfig is used by the matching ProwJob.

func DefaultDecorationMapToSliceTesting

func DefaultDecorationMapToSliceTesting(m map[string]*prowapi.DecorationConfig) []*DefaultDecorationConfigEntry

DefaultDecorationMapToSliceTesting is a convenience function that is exposed to allow unit tests to convert the old map format to the new slice format. It should only be used in testing.

type DefaultRerunAuthConfigEntry

type DefaultRerunAuthConfigEntry struct {

	// OrgRepo matches against the "org" or "org/repo" that the presubmit or postsubmit
	// is associated with. If the job is a periodic, extra_refs[0] is used. If the
	// job is a periodic without extra_refs, the empty string will be used.
	// If this field is omitted all jobs will match.
	OrgRepo string `json:"repo,omitempty"`
	// Cluster matches against the cluster alias of the build cluster that the
	// ProwJob is configured to run on. Recall that ProwJobs default to running on
	// the "default" build cluster if they omit the "cluster" field in config.
	Cluster string `json:"cluster,omitempty"`

	// Config is the RerunAuthConfig to apply if the filter fields all match the
	// ProwJob. Note that when multiple entries match a ProwJob the entry with the
	// highest specification is used.
	Config *prowapi.RerunAuthConfig `json:"rerun_auth_configs,omitempty"`
}

type Delta

type Delta struct {
	Before, After Config
}

Delta represents the before and after states of a Config change detected by the Agent.

type DeltaChan

type DeltaChan = chan<- Delta

DeltaChan is a channel to receive config delta events when config changes.

type DismissalRestrictions

type DismissalRestrictions struct {
	Users []string `json:"users,omitempty"`
	Teams []string `json:"teams,omitempty"`
}

DismissalRestrictions limits who can merge. Users and Teams items are appended to parent lists.

type ExternalAgentLog

type ExternalAgentLog struct {
	// Agent is an external prow agent that supports exposing
	// logs via deck.
	Agent string `json:"agent,omitempty"`
	// SelectorString compiles into Selector at load time.
	SelectorString string `json:"selector,omitempty"`
	// Selector can be used in prow deployments where the workload has
	// been sharded between controllers of the same agent. For more info
	// see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors.
	Selector labels.Selector `json:"-"`
	// URLTemplateString compiles into URLTemplate at load time.
	URLTemplateString string `json:"url_template,omitempty"`
	// URLTemplate is compiled at load time from URLTemplateString. It
	// will be passed a prowapi.ProwJob and the generated URL should provide
	// logs for the ProwJob.
	URLTemplate *template.Template `json:"-"`
}

ExternalAgentLog ensures an external agent like Jenkins can expose its logs in prow.

type FailoverScheduling

type FailoverScheduling struct {
	// ClusterMappings maps a cluster to another one. It is used when we
	// want to schedule a ProwJob to a cluster other than the one it was
	// configured to in the first place.
	ClusterMappings map[string]string `json:"mappings,omitempty"`
}

FailoverScheduling is a configuration for the Failover scheduling strategy

type GCSBrowserPrefixes

type GCSBrowserPrefixes map[string]string

type Gangway

type Gangway struct {
	// AllowedApiClients encodes identifying information about API clients
	// (AllowedApiClient). An AllowedApiClient has authority to trigger a subset
	// of Prow Jobs.
	AllowedApiClients []AllowedApiClient `json:"allowed_api_clients,omitempty"`
}

func (*Gangway) Validate

func (g *Gangway) Validate() error

type Gerrit

type Gerrit struct {
	// TickInterval is how often we do a sync with bound gerrit instance.
	TickInterval *metav1.Duration `json:"tick_interval,omitempty"`
	// RateLimit defines how many changes to query per gerrit API call
	// default is 5.
	RateLimit int `json:"ratelimit,omitempty"`
	// DeckURL is the root URL of Deck. This is used to construct links to
	// job runs for a given CL.
	DeckURL        string                `json:"deck_url,omitempty"`
	OrgReposConfig *GerritOrgRepoConfigs `json:"org_repos_config,omitempty"`
	// AllowedPresubmitTriggerRe is used to match presubmit test related commands in comments
	AllowedPresubmitTriggerRe          *CopyableRegexp `json:"-"`
	AllowedPresubmitTriggerReRawString string          `json:"allowed_presubmit_trigger_re,omitempty"`
}

Gerrit is config for the gerrit controller.

func (*Gerrit) DefaultAndValidate

func (g *Gerrit) DefaultAndValidate() error

func (*Gerrit) IsAllowedPresubmitTrigger

func (g *Gerrit) IsAllowedPresubmitTrigger(message string) bool
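
A rough sketch (the trigger regexp is illustrative, and it is assumed that DefaultAndValidate compiles AllowedPresubmitTriggerRe from the raw string):

g := &Gerrit{AllowedPresubmitTriggerReRawString: `(?mi)^/test\s.+$`}
if err := g.DefaultAndValidate(); err != nil {
	logrus.WithError(err).Error("Could not default and validate Gerrit config.")
}
logrus.Infof("allowed: %t", g.IsAllowedPresubmitTrigger("/test all"))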

type GerritOrgRepoConfig

type GerritOrgRepoConfig struct {
	// Org is the name of the Gerrit instance/host. It's required to keep the
	// https:// or http:// prefix.
	Org string `json:"org,omitempty"`
	// Repos are a slice of repos under the `Org`.
	Repos []string `json:"repos,omitempty"`
	// OptOutHelp is the flag for determining whether the repos defined here
	// opt out of help or not. If this is true, Prow will not comment the help
	// message in response to comments like `/test ?`, `/retest ?`, `/test
	// job-not-exist`, `/test job-only-available-from-another-prow`.
	OptOutHelp bool `json:"opt_out_help,omitempty"`
	// Filters are used for limiting the scope of querying the Gerrit server.
	// Currently supports branches and excluded branches.
	Filters *GerritQueryFilter `json:"filters,omitempty"`
}

GerritOrgRepoConfig is config for repos.

type GerritOrgRepoConfigs

type GerritOrgRepoConfigs []GerritOrgRepoConfig

GerritOrgRepoConfigs is config for repos.

func (*GerritOrgRepoConfigs) AllRepos

func (goc *GerritOrgRepoConfigs) AllRepos() map[string]map[string]*GerritQueryFilter

func (*GerritOrgRepoConfigs) OptOutHelpRepos

func (goc *GerritOrgRepoConfigs) OptOutHelpRepos() map[string]sets.Set[string]

type GerritQueryFilter

type GerritQueryFilter struct {
	Branches         []string `json:"branches,omitempty"`
	ExcludedBranches []string `json:"excluded_branches,omitempty"`
	// OptInByDefault indicates that all of the PRs are considered by Tide from
	// these repos, unless `Prow-Auto-Submit` label is voted -1.
	OptInByDefault bool `json:"opt_in_by_default,omitempty"`
}

type Getter

type Getter func() *Config

Getter returns the current Config in a thread-safe manner.

type GitHubOptions

type GitHubOptions struct {
	// LinkURLFromConfig is the string representation of the link_url config parameter.
	// This config parameter allows users to override the default GitHub link url for all plugins.
	// If this option is not set, we assume "https://github.com".
	LinkURLFromConfig string `json:"link_url,omitempty"`

	// LinkURL is the url representation of LinkURLFromConfig. This variable should be used
	// in all places internally.
	LinkURL *url.URL `json:"-"`
}

GitHubOptions allows users to control how prow applications display GitHub website links.

type GitHubReporter

type GitHubReporter struct {
	// JobTypesToReport is used to determine which type of prowjob
	// should be reported to github.
	//
	// defaults to both presubmit and postsubmit jobs.
	JobTypesToReport []prowapi.ProwJobType `json:"job_types_to_report,omitempty"`
	// NoCommentRepos is a list of orgs and org/repos for which failure report
	// comments should not be maintained. Status contexts will still be written.
	NoCommentRepos []string `json:"no_comment_repos,omitempty"`
	// SummaryCommentRepos is a list of orgs and org/repos for which failure report
	// comments are only sent when all jobs for the current SHA are finished. Status
	// contexts will still be written.
	SummaryCommentRepos []string `json:"summary_comment_repos,omitempty"`
}

GitHubReporter holds the config for report behavior in github.

type Horologium

type Horologium struct {
	// TickInterval is the interval in which we check if new jobs need to be
	// created. Defaults to one minute.
	TickInterval *metav1.Duration `json:"tick_interval,omitempty"`
}

Horologium is config for the Horologium.

type InRepoConfig

type InRepoConfig struct {
	// Enabled describes whether InRepoConfig is enabled for a given repository. This can
	// be set globally, per org or per repo using '*', 'org' or 'org/repo' as key. The
	// narrowest match always takes precedence.
	Enabled map[string]*bool `json:"enabled,omitempty"`
	// AllowedClusters is a list of allowed clusternames that can be used for jobs on
	// a given repo. All clusters that are allowed for the specific repo, its org or
	// globally can be used.
	AllowedClusters map[string][]string `json:"allowed_clusters,omitempty"`
}

type InRepoConfigCache

type InRepoConfigCache struct {
	*cache.LRUCache
	// contains filtered or unexported fields
}

InRepoConfigCache is the user-facing cache. It acts as a wrapper around the generic LRUCache, by handling type casting in and out of the LRUCache (which only handles empty interfaces).

func NewInRepoConfigCache

func NewInRepoConfigCache(
	size int,
	configAgent prowConfigAgentClient,
	gitClientFactory git.ClientFactory) (*InRepoConfigCache, error)

NewInRepoConfigCache creates a new LRU cache for ProwYAML values, where the keys are CacheKeys (that is, JSON strings) and values are pointers to ProwYAMLs.
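
A hedged construction sketch (the size is arbitrary; configAgent is assumed to satisfy the unexported config-agent interface, in practice an Agent from this package, and gitClientFactory is created elsewhere; the base SHA getter is a plain closure):

cache, err := NewInRepoConfigCache(1000, configAgent, gitClientFactory)
if err != nil {
	logrus.WithError(err).Fatal("Error creating InRepoConfigCache.")
}
presubmits, err := cache.GetPresubmits("org/repo", "main", func() (string, error) { return "abc123", nil })
if err != nil {
	logrus.WithError(err).Error("Could not get presubmits from cache.")
} else {
	logrus.Infof("found %d presubmit(s)", len(presubmits))
}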

func (*InRepoConfigCache) GetInRepoConfig

func (cache *InRepoConfigCache) GetInRepoConfig(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) (*ProwYAML, error)

GetInRepoConfig just wraps around GetProwYAML().

func (*InRepoConfigCache) GetPostsubmits

func (cache *InRepoConfigCache) GetPostsubmits(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) ([]Postsubmit, error)

GetPostsubmits attempts to use a cache lookup to get the *ProwYAML value (cache hit), instead of computing it from scratch (cache miss). It also stores the *ProwYAML into the cache if there is a cache miss.

func (*InRepoConfigCache) GetPresubmits

func (cache *InRepoConfigCache) GetPresubmits(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) ([]Presubmit, error)

GetPresubmits uses a cache lookup to get the *ProwYAML value (cache hit), instead of computing it from scratch (cache miss). It also stores the *ProwYAML into the cache if there is a cache miss.

func (*InRepoConfigCache) GetProwYAML

func (cache *InRepoConfigCache) GetProwYAML(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) (*ProwYAML, error)

GetProwYAML returns the ProwYAML value stored in the InRepoConfigCache.

func (*InRepoConfigCache) GetProwYAMLWithoutDefaults

func (cache *InRepoConfigCache) GetProwYAMLWithoutDefaults(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) (*ProwYAML, error)

type InRepoConfigGetter

type InRepoConfigGetter interface {
	GetInRepoConfig(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) (*ProwYAML, error)
	GetPresubmits(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) ([]Presubmit, error)
	GetPostsubmits(identifier, baseBranch string, baseSHAGetter RefGetter, headSHAGetters ...RefGetter) ([]Postsubmit, error)
}

InRepoConfigGetter defines a common interface that both the Moonraker client and raw InRepoConfigCache can implement. This way, Prow components like Sub and Gerrit can choose either one (based on runtime flags), but regardless of the choice the surrounding code can still just call this GetProwYAML() interface method (without being aware whether the underlying implementation is going over the network to Moonraker or is done locally with the local InRepoConfigCache (LRU cache)).

type JenkinsOperator

type JenkinsOperator struct {
	Controller `json:",inline"`
	// LabelSelectorString compiles into LabelSelector at load time.
	// If set, this option needs to match --label-selector used by
	// the desired jenkins-operator. This option is considered
	// invalid when provided with a single jenkins-operator config.
	//
	// For label selector syntax, see below:
	// https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
	LabelSelectorString string `json:"label_selector,omitempty"`
	// LabelSelector is used so different jenkins-operator replicas
	// can use their own configuration.
	LabelSelector labels.Selector `json:"-"`
}

JenkinsOperator is config for the jenkins-operator controller.

type JenkinsSpec

type JenkinsSpec struct {
	// Job is managed by the GH branch source plugin
	// and requires a specific path
	GitHubBranchSourceJob bool `json:"github_branch_source_job,omitempty"`
}

JenkinsSpec holds optional Jenkins job config

type JobBase

type JobBase struct {
	// The name of the job. Must match regex [A-Za-z0-9-._]+
	// e.g. pull-test-infra-bazel-build
	Name string `json:"name"`
	// Labels are added to prowjobs and pods created for this job.
	Labels map[string]string `json:"labels,omitempty"`
	// MaximumConcurrency of this job, 0 implies no limit.
	MaxConcurrency int `json:"max_concurrency,omitempty"`
	// Agent that will take care of running this job. Defaults to "kubernetes"
	Agent string `json:"agent,omitempty"`
	// Cluster is the alias of the cluster to run this job in.
	// (Default: kube.DefaultClusterAlias)
	Cluster string `json:"cluster,omitempty"`
	// Namespace is the namespace in which pods schedule.
	//   nil: results in config.PodNamespace (aka pod default)
	//   empty: results in config.ProwJobNamespace (aka same as prowjob)
	Namespace *string `json:"namespace,omitempty"`
	// ErrorOnEviction indicates that the ProwJob should be completed and given
	// the ErrorState status if the pod that is executing the job is evicted.
	// If this field is unspecified or false, a new pod will be created to replace
	// the evicted one.
	ErrorOnEviction bool `json:"error_on_eviction,omitempty"`
	// SourcePath contains the path where this job is defined
	SourcePath string `json:"-"`
	// Spec is the Kubernetes pod spec used if Agent is kubernetes.
	Spec *v1.PodSpec `json:"spec,omitempty"`
	// PipelineRunSpec is the tekton pipeline spec used if Agent is tekton-pipeline.
	PipelineRunSpec *pipelinev1beta1.PipelineRunSpec `json:"pipeline_run_spec,omitempty"`
	// TektonPipelineRunSpec is the versioned tekton pipeline spec used if Agent is tekton-pipeline.
	TektonPipelineRunSpec *prowapi.TektonPipelineRunSpec `json:"tekton_pipeline_run_spec,omitempty"`
	// Annotations are unused by prow itself, but provide a space to configure other automation.
	Annotations map[string]string `json:"annotations,omitempty"`
	// ReporterConfig provides the option to configure reporting on job level
	ReporterConfig *prowapi.ReporterConfig `json:"reporter_config,omitempty"`
	// RerunAuthConfig specifies who can rerun the job
	RerunAuthConfig *prowapi.RerunAuthConfig `json:"rerun_auth_config,omitempty"`
	// Hidden defines if the job is hidden. If set to `true`, only Deck instances
	// that have the flag `--hiddenOnly=true` or `--show-hidden=true` set will show it.
	// Presubmits and Postsubmits can also be set to hidden by
	// adding their repository to Deck's `hidden_repos` setting.
	Hidden bool `json:"hidden,omitempty"`
	// ProwJobDefault holds configuration options provided as defaults
	// in the Prow config
	ProwJobDefault *prowapi.ProwJobDefault `json:"prowjob_defaults,omitempty"`
	// Name of the job queue specifying maximum concurrency, omission implies no limit.
	// Works in parallel with MaxConcurrency and the limit is selected from the
	// minimal setting of those two fields.
	JobQueueName string `json:"job_queue_name,omitempty"`

	UtilityConfig
}

JobBase contains attributes common to all job types

func (*JobBase) DeepCopy

func (in *JobBase) DeepCopy() *JobBase

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobBase.

func (*JobBase) DeepCopyInto

func (in *JobBase) DeepCopyInto(out *JobBase)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (JobBase) GetAnnotations

func (jb JobBase) GetAnnotations() map[string]string

func (JobBase) GetLabels

func (jb JobBase) GetLabels() map[string]string

func (JobBase) GetName

func (jb JobBase) GetName() string

func (JobBase) GetPipelineRunSpec

func (jb JobBase) GetPipelineRunSpec() (*pipelinev1beta1.PipelineRunSpec, error)

func (JobBase) HasPipelineRunSpec

func (jb JobBase) HasPipelineRunSpec() bool

type JobConfig

type JobConfig struct {
	// Presets apply to all job types.
	Presets []Preset `json:"presets,omitempty"`
	// .PresubmitsStatic contains the presubmits in Prow's main config.
	// **Warning:** This does not return dynamic Presubmits configured
	// inside the code repo, hence giving an incomplete view. Use
	// `GetPresubmits` instead if possible.
	PresubmitsStatic map[string][]Presubmit `json:"presubmits,omitempty"`
	// .PostsubmitsStatic contains the Postsubmits in Prow's main config.
	// **Warning:** This does not return dynamic postsubmits configured
	// inside the code repo, hence giving an incomplete view. Use
	// `GetPostsubmits` instead if possible.
	PostsubmitsStatic map[string][]Postsubmit `json:"postsubmits,omitempty"`

	// Periodics are not associated with any repo.
	Periodics []Periodic `json:"periodics,omitempty"`

	// AllRepos contains all Repos that have one or more jobs configured or
	// for which a tide query is configured.
	AllRepos sets.Set[string] `json:"-"`

	// ProwYAMLGetterWithDefaults is the function to get a ProwYAML with
	// defaults based on the rest of the Config. Tests should provide their own
	// implementation.
	ProwYAMLGetterWithDefaults ProwYAMLGetter `json:"-"`

	// ProwYAMLGetter is like ProwYAMLGetterWithDefaults, but does not default
	// the retrieved ProwYAML with defaulted values. It is mocked by
	// TestGetPresubmitsAndPostubmitsCached (and in production, prowYAMLGetter()
	// is used).
	ProwYAMLGetter ProwYAMLGetter `json:"-"`

	// DecorateAllJobs determines whether all jobs are decorated by default.
	DecorateAllJobs bool `json:"decorate_all_jobs,omitempty"`

	// ProwIgnored is a well known, unparsed field where non-Prow fields can
	// be defined without conflicting with unknown field validation.
	ProwIgnored *json.RawMessage `json:"prow_ignored,omitempty"`
}

JobConfig is config for all prow jobs.

func ReadJobConfig

func ReadJobConfig(jobConfig string, yamlOpts ...yaml.JSONOpt) (JobConfig, error)

ReadJobConfig reads the JobConfig yaml, but does not expand or validate it.

func (*JobConfig) AllPeriodics

func (c *JobConfig) AllPeriodics() []Periodic

AllPeriodics returns all prow periodic jobs.

func (*JobConfig) AllStaticPostsubmits

func (c *JobConfig) AllStaticPostsubmits(repos []string) []Postsubmit

AllStaticPostsubmits returns all static prow postsubmit jobs in repos. If repos is empty, return all postsubmits. Be aware that this does not return Postsubmits that are versioned inside the repo via the `inrepoconfig` feature and hence this list may be incomplete.

func (*JobConfig) AllStaticPresubmits

func (c *JobConfig) AllStaticPresubmits(repos []string) []Presubmit

AllStaticPresubmits returns all static prow presubmit jobs in repos. If repos is empty, return all presubmits. Be aware that this does not return Presubmits that are versioned inside the repo via the `inrepoconfig` feature and hence this list may be incomplete.

func (*JobConfig) SetPostsubmits

func (c *JobConfig) SetPostsubmits(jobs map[string][]Postsubmit) error

SetPostsubmits updates c.PostsubmitsStatic to jobs, after compiling and validating their regexes.

func (*JobConfig) SetPresubmits

func (c *JobConfig) SetPresubmits(jobs map[string][]Presubmit) error

SetPresubmits updates c.PresubmitsStatic to jobs, after compiling and validating their regexes.

type LensConfig

type LensConfig struct {
	// Name is the name of the lens.
	Name string `json:"name"`
	// Config is some lens-specific configuration. Interpreting it is the responsibility of the
	// lens in question.
	Config json.RawMessage `json:"config,omitempty"`
}

LensConfig names a specific lens, and optionally provides some configuration for it.

type LensFileConfig

type LensFileConfig struct {
	// RequiredFiles is a list of regexes of file paths that must all be present for a lens to appear.
	// The list entries are ANDed together, i.e. all of them are required. You can achieve an OR
	// by using a pipe in a regex.
	RequiredFiles []string `json:"required_files"`
	// OptionalFiles is a list of regexes of file paths that will be provided to the lens if they are
	// present, but will not preclude the lens being rendered by their absence.
	// The list entries are ORed together, so if only one of them is present it will be provided to
	// the lens even if the others are not.
	OptionalFiles []string `json:"optional_files,omitempty"`
	// Lens is the lens to use, alongside any lens-specific configuration.
	Lens LensConfig `json:"lens"`
	// RemoteConfig specifies how to access remote lenses.
	RemoteConfig *LensRemoteConfig `json:"remote_config,omitempty"`
}

LensFileConfig is a single entry under Lenses, describing how to configure a lens to read a given set of files.

type LensRemoteConfig

type LensRemoteConfig struct {
	// The endpoint for the lens.
	Endpoint string `json:"endpoint"`
	// The parsed endpoint.
	ParsedEndpoint *url.URL `json:"-"`
	// The endpoint for static resources.
	StaticRoot string `json:"static_root"`
	// The human-readable title for the lens.
	Title string `json:"title"`
	// Priority for lens ordering, lowest priority first.
	Priority *uint `json:"priority"`
	// HideTitle defines if we will keep showing the title after lens loads.
	HideTitle *bool `json:"hide_title"`
}

LensRemoteConfig is the configuration for a remote lens.

type ManagedWebhookInfo

type ManagedWebhookInfo struct {
	TokenCreatedAfter time.Time `json:"token_created_after"`
}

ManagedWebhookInfo contains metadata about the repo/org which is onboarded.

type ManagedWebhooks

type ManagedWebhooks struct {
	RespectLegacyGlobalToken bool `json:"respect_legacy_global_token"`
	// Controls whether org/repo invitation for prow bot should be automatically
	// accepted or not. Only admin level invitations related to orgs and repos
	// in the managed_webhooks config will be accepted and all other invitations
	// will be left pending.
	AutoAcceptInvitation bool                          `json:"auto_accept_invitation"`
	OrgRepoConfig        map[string]ManagedWebhookInfo `json:"org_repo_config,omitempty"`
}

ManagedWebhooks contains information about all the repos/orgs which are onboarded with auto-generated tokens.

type Moonraker

type Moonraker struct {
	ClientTimeout *metav1.Duration `json:"client_timeout,omitempty"`
}

func (*Moonraker) Validate

func (m *Moonraker) Validate() error

type Org

type Org struct {
	Policy `json:",inline"`
	Repos  map[string]Repo `json:"repos,omitempty"`
}

Org holds the default protection policy for an entire org, as well as any repo overrides.

func (Org) GetRepo

func (o Org) GetRepo(name string) *Repo

GetRepo returns the repo config after merging in any org policies.

func (Org) HasManagedBranches

func (o Org) HasManagedBranches() bool

HasManagedBranches returns true if the org has managed branches

func (Org) HasManagedRepos

func (o Org) HasManagedRepos() bool

HasManagedRepos returns true if the org has managed repos

type OrgRepo

type OrgRepo struct {
	Org  string
	Repo string
}

OrgRepo supersedes org/repo string handling.

func NewOrgRepo

func NewOrgRepo(orgRepo string) *OrgRepo

NewOrgRepo creates an OrgRepo from an org/repo string.

func StringsToOrgRepos

func StringsToOrgRepos(vs []string) []OrgRepo

StringsToOrgRepos converts a list of org/repo strings to its OrgRepo equivalent.

func (OrgRepo) String

func (repo OrgRepo) String() string
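
A small sketch tying the OrgRepo helpers together (the repo name is illustrative):

org, repo, err := SplitRepoName("kubernetes/test-infra")
if err == nil {
	fmt.Printf("org=%s repo=%s\n", org, repo) // org=kubernetes repo=test-infra
}

or := NewOrgRepo("kubernetes/test-infra")
// or.String() == "kubernetes/test-infra"
// OrgReposToStrings([]OrgRepo{*or}) == []string{"kubernetes/test-infra"}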

type OwnersDirDenylist

type OwnersDirDenylist struct {
	// Repos configures a directory denylist per repo (or org).
	Repos map[string][]string `json:"repos,omitempty"`
	// Default configures a default denylist for all repos (or orgs).
	// Some directories like ".git", "_output" and "vendor/.*/OWNERS"
	// are already preconfigured to be denylisted, and need not be included here.
	Default []string `json:"default,omitempty"`
	// By default, some directories like ".git", "_output" and "vendor/.*/OWNERS"
	// are preconfigured to be denylisted.
	// If set, IgnorePreconfiguredDefaults will not add these preconfigured directories
	// to the denylist.
	IgnorePreconfiguredDefaults bool `json:"ignore_preconfigured_defaults,omitempty"`
}

OwnersDirDenylist is used to configure regular expressions matching directories to ignore when searching for OWNERS{,_ALIAS} files in a repo.

func (OwnersDirDenylist) ListIgnoredDirs

func (o OwnersDirDenylist) ListIgnoredDirs(org, repo string) (ignorelist []string)

ListIgnoredDirs returns regular expressions matching directories to ignore when searching for OWNERS{,_ALIAS} files in a repo.

type Periodic

type Periodic struct {
	JobBase

	// (deprecated) Interval to wait between two runs of the job.
	// Consecutive jobs are run at `interval` duration apart, provided the
	// previous job has completed.
	Interval string `json:"interval,omitempty"`
	// MinimumInterval to wait between two runs of the job.
	// Consecutive jobs are run at `interval` + `duration of previous job` apart.
	MinimumInterval string `json:"minimum_interval,omitempty"`
	// Cron representation of job trigger time
	Cron string `json:"cron,omitempty"`
	// Tags for config entries
	Tags []string `json:"tags,omitempty"`
	// contains filtered or unexported fields
}

Periodic runs on a timer.
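A hedged sketch of a periodic entry; the `periodics` list key and the `name` field come from the job config and JobBase documented elsewhere, and are assumed here for illustration:

periodics:
  - name: example-nightly-cleanup   # assumed JobBase field, placeholder name
    cron: "0 2 * * *"               # run daily at 02:00 UTC
    tags:
      - "nightly"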

func (*Periodic) GetInterval

func (p *Periodic) GetInterval() time.Duration

GetInterval returns interval, the frequency duration it runs.

func (*Periodic) GetMinimumInterval

func (p *Periodic) GetMinimumInterval() time.Duration

GetMinimumInterval returns minimum_interval, the minimum frequency duration it runs.

func (*Periodic) SetInterval

func (p *Periodic) SetInterval(d time.Duration)

SetInterval updates interval, the frequency duration it runs.

func (*Periodic) SetMinimumInterval

func (p *Periodic) SetMinimumInterval(d time.Duration)

SetMinimumInterval updates minimum_interval, the minimum frequency duration it runs.

type Plank

type Plank struct {
	Controller `json:",inline"`
	// PodPendingTimeout defines how long the controller will wait to perform a garbage
	// collection on pending pods. Defaults to 10 minutes.
	PodPendingTimeout *metav1.Duration `json:"pod_pending_timeout,omitempty"`
	// PodRunningTimeout defines how long the controller will wait to abort a prowjob pod
	// stuck in running state. Defaults to two days.
	PodRunningTimeout *metav1.Duration `json:"pod_running_timeout,omitempty"`
	// PodUnscheduledTimeout defines how long the controller will wait to abort a prowjob
	// stuck in an unscheduled state. Defaults to 5 minutes.
	PodUnscheduledTimeout *metav1.Duration `json:"pod_unscheduled_timeout,omitempty"`

	// DefaultDecorationConfigs holds the default decoration config for specific values.
	//
	// Each entry in the slice specifies Repo and Cluster regexp filter fields to
	// match against jobs and a corresponding DecorationConfig. All entries that
	// match a job are used. Later matching entries override the fields of earlier
	// matching entries.
	//
	// In FinalizeDefaultDecorationConfigs(), this field is populated either directly from
	// DefaultDecorationConfigEntries, or from DefaultDecorationConfigsMap after
	// it is converted to a slice. These fields are mutually exclusive, and
	// defining both is an error.
	DefaultDecorationConfigs []*DefaultDecorationConfigEntry `json:"-"`
	// DefaultDecorationConfigsMap is a mapping from 'org', 'org/repo', or the
	// literal string '*', to the default decoration config to use for that key.
	// The '*' key matches all jobs. (Periodics use extra_refs[0] for matching
	// if present.)
	//
	// This field is mutually exclusive with the DefaultDecorationConfigEntries field.
	DefaultDecorationConfigsMap map[string]*prowapi.DecorationConfig `json:"default_decoration_configs,omitempty"`
	// DefaultDecorationConfigEntries is used to populate DefaultDecorationConfigs.
	//
	// Each entry in the slice specifies Repo and Cluster regexp filter fields to
	// match against jobs and a corresponding DecorationConfig. All entries that
	// match a job are used. Later matching entries override the fields of earlier
	// matching entries.
	//
	// This field is smarter than the DefaultDecorationConfigsMap, because each
	// entry includes additional Cluster regexp information that the old format
	// does not consider.
	//
	// This field is mutually exclusive with the DefaultDecorationConfigsMap field.
	DefaultDecorationConfigEntries []*DefaultDecorationConfigEntry `json:"default_decoration_config_entries,omitempty"`

	// JobURLPrefixConfig is the host and path prefix under which job details
	// will be viewable. Use `org/repo`, `org` or `*` as key and a URL as value.
	JobURLPrefixConfig map[string]string `json:"job_url_prefix_config,omitempty"`

	// JobURLPrefixDisableAppendStorageProvider disables that the storageProvider is
	// automatically appended to the JobURLPrefix.
	JobURLPrefixDisableAppendStorageProvider bool `json:"jobURLPrefixDisableAppendStorageProvider,omitempty"`

	// BuildClusterStatusFile is an optional field used to specify the blob storage location
	// to publish cluster status information.
	// e.g. gs://my-bucket/cluster-status.json
	BuildClusterStatusFile string `json:"build_cluster_status_file,omitempty"`

	// JobQueueCapacities is an optional field used to define job queue max concurrency.
	// Each job can be assigned to a specific queue which has its own max concurrency,
	// independent from the job's name. Setting the concurrency to 0 will block any job
	// from being triggered. Setting the concurrency to a negative value will remove the
	// limit. An example use case would be easier scheduling of jobs using boskos resources.
	// This mechanism is separate from ProwJob's MaxConcurrency setting.
	JobQueueCapacities map[string]int `json:"job_queue_capacities,omitempty"`
}

Plank is config for the plank controller.
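A config.yaml sketch limited to the fields shown above (URLs, bucket, and queue names are placeholders):

plank:
  pod_pending_timeout: 15m
  pod_running_timeout: 48h
  pod_unscheduled_timeout: 5m
  job_url_prefix_config:
    "*": https://deck.example.com/view/
  build_cluster_status_file: gs://example-bucket/cluster-status.json
  job_queue_capacities:
    expensive-queue: 2    # at most 2 jobs from this queue run at once
    blocked-queue: 0      # 0 blocks triggering entirely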

func (*Plank) FinalizeDefaultDecorationConfigs

func (p *Plank) FinalizeDefaultDecorationConfigs() error

FinalizeDefaultDecorationConfigs prepares the entries of Plank.DefaultDecorationConfigs for use in finalizing the job config. It populates p.DefaultDecorationConfigs from either the old map format or the new slice format:

Old format: map[string]*prowapi.DecorationConfig, where the key is org, org/repo, or "*".

New format: []*DefaultDecorationConfigEntry.

If the old format is parsed, it is converted to the new format, and then all filter regexps are compiled.

func (Plank) GetJobURLPrefix

func (p Plank) GetJobURLPrefix(pj *prowapi.ProwJob) string

GetJobURLPrefix gets the job url prefix from the config for the given refs.

func (*Plank) GuessDefaultDecorationConfig

func (p *Plank) GuessDefaultDecorationConfig(repo, cluster string) *prowapi.DecorationConfig

GuessDefaultDecorationConfig attempts to find the resolved default decoration config for a given repo and cluster. It is primarily used for best effort guesses about GCS configuration for undecorated jobs.

func (*Plank) GuessDefaultDecorationConfigWithJobDC

func (p *Plank) GuessDefaultDecorationConfigWithJobDC(repo, cluster string, jobDC *prowapi.DecorationConfig) *prowapi.DecorationConfig

GuessDefaultDecorationConfigWithJobDC attempts to find the resolved default decoration config for a given repo, cluster and job DecorationConfig. It is primarily used for best effort guesses about GCS configuration for undecorated jobs.

type Policy

type Policy struct {
	// Unmanaged makes us not manage the branchprotection.
	Unmanaged *bool `json:"unmanaged,omitempty"`
	// Protect overrides whether branch protection is enabled if set.
	Protect *bool `json:"protect,omitempty"`
	// RequiredStatusChecks configures github contexts
	RequiredStatusChecks *ContextPolicy `json:"required_status_checks,omitempty"`
	// Admins overrides whether protections apply to admins if set.
	Admins *bool `json:"enforce_admins,omitempty"`
	// Restrictions limits who can merge
	Restrictions *Restrictions `json:"restrictions,omitempty"`
	// RequireManuallyTriggeredJobs enforces the presence of a context for jobs that run
	// conditionally but not automatically, i.e. jobs configured with always_run: false,
	// optional: false, and with neither skip_if_only_changed nor run_if_changed set.
	RequireManuallyTriggeredJobs *bool `json:"require_manually_triggered_jobs,omitempty"`
	// RequiredPullRequestReviews specifies github approval/review criteria.
	RequiredPullRequestReviews *ReviewPolicy `json:"required_pull_request_reviews,omitempty"`
	// RequiredLinearHistory enforces a linear commit Git history, which prevents anyone from pushing merge commits to a branch.
	RequiredLinearHistory *bool `json:"required_linear_history,omitempty"`
	// AllowForcePushes permits force pushes to the protected branch by anyone with write access to the repository.
	AllowForcePushes *bool `json:"allow_force_pushes,omitempty"`
	// AllowDeletions allows deletion of the protected branch by anyone with write access to the repository.
	AllowDeletions *bool `json:"allow_deletions,omitempty"`
	// Exclude specifies a set of regular expressions which identify branches
	// that should be excluded from the protection policy, mutually exclusive with Include
	Exclude []string `json:"exclude,omitempty"`
	// Include specifies a set of regular expressions which identify branches
	// that should be included in the protection policy, mutually exclusive with Exclude
	Include []string `json:"include,omitempty"`
}

Policy for the config/org/repo/branch. When merging policies, a nil value results in inheriting the parent policy.
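A hedged branch-protection sketch showing how Policy fields nest at org, repo, and branch level; the `branch-protection`/`orgs` wrapper keys come from the BranchProtection config documented elsewhere, the `contexts` field of ContextPolicy is assumed, and all org/repo/branch/context names are placeholders:

branch-protection:
  orgs:
    example-org:
      protect: true
      required_status_checks:
        contexts:                 # assumed ContextPolicy field, for illustration
          - "ci/required-check"
      repos:
        example-repo:
          enforce_admins: true
          branches:
            main:
              required_linear_history: true
              allow_force_pushes: false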

func (Policy) Apply

func (p Policy) Apply(child Policy) Policy

Apply returns a policy that merges the child into the parent

func (Policy) Managed

func (p Policy) Managed() bool

Managed returns true if Unmanaged is false in the policy

type Postsubmit

type Postsubmit struct {
	JobBase

	// AlwaysRun determines whether we should try to run this job (or not run it).
	// The key difference with the AlwaysRun field for Presubmits is that here, we
	// essentially treat "true" as the default value, as Postsubmits by default run
	// unless there is some falsifying condition.
	//
	// The use of a pointer allows us to check if the field was or was not
	// provided by the user. This is required because otherwise when we
	// Unmarshal() the bytes into this struct, we'll get a default "false" value
	// if this field is not provided, which is the opposite of what we want.
	AlwaysRun *bool `json:"always_run,omitempty"`

	RegexpChangeMatcher

	Brancher

	// TODO(krzyzacy): Move existing `Report` into `Skip_Report` once this is deployed
	Reporter

	JenkinsSpec *JenkinsSpec `json:"jenkins_spec,omitempty"`
}

Postsubmit runs on push events.
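A hedged sketch of a postsubmit entry; the `postsubmits` map key and the `name`/`branches` fields come from the job config, JobBase, and Brancher documented elsewhere:

postsubmits:
  example-org/example-repo:
    - name: example-post-build     # assumed JobBase field, placeholder name
      branches:                    # assumed Brancher field
        - "^main$"
      run_if_changed: "^images/"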

func (Postsubmit) CouldRun

func (ps Postsubmit) CouldRun(baseRef string) bool

CouldRun determines if the postsubmit could run against a specific base ref

func (*Postsubmit) DeepCopy

func (in *Postsubmit) DeepCopy() *Postsubmit

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Postsubmit.

func (*Postsubmit) DeepCopyInto

func (in *Postsubmit) DeepCopyInto(out *Postsubmit)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (Postsubmit) ShouldRun

func (ps Postsubmit) ShouldRun(baseRef string, changes ChangedFilesProvider) (bool, error)

ShouldRun determines if the postsubmit should run in response to a set of changes. This is evaluated lazily, if necessary.

type Preset

type Preset struct {
	Labels       map[string]string `json:"labels"`
	Env          []v1.EnvVar       `json:"env"`
	Volumes      []v1.Volume       `json:"volumes"`
	VolumeMounts []v1.VolumeMount  `json:"volumeMounts"`
}

Presets can be used to re-use settings across multiple jobs.

func (*Preset) DeepCopy

func (in *Preset) DeepCopy() *Preset

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Preset.

func (*Preset) DeepCopyInto

func (in *Preset) DeepCopyInto(out *Preset)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type Presubmit

type Presubmit struct {
	JobBase

	// AlwaysRun automatically for every PR, or only when a comment triggers it.
	AlwaysRun bool `json:"always_run"`

	// Optional indicates that the job's status context should not be required for merge.
	Optional bool `json:"optional,omitempty"`

	// Trigger is the regular expression to trigger the job.
	// e.g. `@k8s-bot e2e test this`
	// RerunCommand must also be specified if this field is specified.
	// (Default: `(?m)^/test (?:.*? )?<job name>(?: .*?)?$`)
	Trigger string `json:"trigger,omitempty"`

	// The RerunCommand to give users. Must match Trigger.
	// Trigger must also be specified if this field is specified.
	// (Default: `/test <job name>`)
	RerunCommand string `json:"rerun_command,omitempty"`

	// RunBeforeMerge indicates that a job should always run by Tide as long as
	// Brancher matches.
	// This is used when a prowjob is so expensive that it's not ideal to run on
	// every single push from all PRs.
	RunBeforeMerge bool `json:"run_before_merge,omitempty"`

	Brancher

	RegexpChangeMatcher

	Reporter

	JenkinsSpec *JenkinsSpec `json:"jenkins_spec,omitempty"`
	// contains filtered or unexported fields
}

Presubmit runs on PRs.
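A hedged sketch of a presubmit entry; the `presubmits` map key and the `name` field come from the job config and JobBase documented elsewhere, and the trigger/rerun values mirror the defaults noted above:

presubmits:
  example-org/example-repo:
    - name: example-unit-test      # assumed JobBase field, placeholder name
      always_run: true
      optional: false
      trigger: "(?m)^/test (?:.*? )?example-unit-test(?: .*?)?$"
      rerun_command: "/test example-unit-test"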

func (Presubmit) ContextRequired

func (ps Presubmit) ContextRequired() bool

ContextRequired checks whether a context is required from the GitHub point of view (i.e. a required status check).

func (Presubmit) CouldRun

func (ps Presubmit) CouldRun(baseRef string) bool

CouldRun determines if the presubmit could run against a specific base ref

func (*Presubmit) DeepCopy

func (in *Presubmit) DeepCopy() *Presubmit

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Presubmit.

func (*Presubmit) DeepCopyInto

func (in *Presubmit) DeepCopyInto(out *Presubmit)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (Presubmit) NeedsExplicitTrigger

func (ps Presubmit) NeedsExplicitTrigger() bool

NeedsExplicitTrigger determines if the presubmit requires a human action to trigger it or not.

func (Presubmit) ShouldRun

func (ps Presubmit) ShouldRun(baseRef string, changes ChangedFilesProvider, forced, defaults bool) (bool, error)

ShouldRun determines if the presubmit should run against a specific base ref, or in response to a set of changes. The latter mechanism is evaluated lazily, if necessary.

func (Presubmit) TriggerMatches

func (ps Presubmit) TriggerMatches(body string) bool

TriggerMatches returns true if the comment body should trigger this presubmit.

This is usually a /test foo string.

func (Presubmit) TriggersConditionally

func (ps Presubmit) TriggersConditionally() bool

TriggersConditionally determines if the presubmit triggers conditionally (if it may or may not trigger).

type ProwConfig

type ProwConfig struct {
	// The git sha from which this config was generated.
	ConfigVersionSHA     string               `json:"config_version_sha,omitempty"`
	Tide                 Tide                 `json:"tide,omitempty"`
	Plank                Plank                `json:"plank,omitempty"`
	Sinker               Sinker               `json:"sinker,omitempty"`
	Deck                 Deck                 `json:"deck,omitempty"`
	BranchProtection     BranchProtection     `json:"branch-protection"`
	Gerrit               Gerrit               `json:"gerrit"`
	GitHubReporter       GitHubReporter       `json:"github_reporter"`
	Horologium           Horologium           `json:"horologium"`
	SlackReporterConfigs SlackReporterConfigs `json:"slack_reporter_configs,omitempty"`
	InRepoConfig         InRepoConfig         `json:"in_repo_config"`

	// Gangway contains configurations needed by the Prow API server of the
	// same name. It encodes an allowlist of API clients and what kinds of Prow
	// Jobs they are authorized to trigger.
	Gangway Gangway `json:"gangway,omitempty"`

	// Moonraker contains configurations for Moonraker, such as the client
	// timeout to use for all Prow services that need to send requests to
	// Moonraker.
	Moonraker Moonraker `json:"moonraker,omitempty"`

	// Scheduler contains configuration for the additional scheduler.
	// It has to be explicitly enabled.
	Scheduler Scheduler `json:"scheduler,omitempty"`

	// TODO: Move this out of the main config.
	JenkinsOperators []JenkinsOperator `json:"jenkins_operators,omitempty"`

	// ProwJobNamespace is the namespace in the cluster that prow
	// components will use for looking up ProwJobs. The namespace
	// needs to exist and will not be created by prow.
	// Defaults to "default".
	ProwJobNamespace string `json:"prowjob_namespace,omitempty"`
	// PodNamespace is the namespace in the cluster that prow
	// components will use for looking up Pods owned by ProwJobs.
	// The namespace needs to exist and will not be created by prow.
	// Defaults to "default".
	PodNamespace string `json:"pod_namespace,omitempty"`

	// LogLevel enables dynamically updating the log level of the
	// standard logger that is used by all prow components.
	//
	// Valid values:
	//
	// "debug", "info", "warn", "warning", "error", "fatal", "panic"
	//
	// Defaults to "info".
	LogLevel string `json:"log_level,omitempty"`

	// PushGateway is a prometheus push gateway.
	PushGateway PushGateway `json:"push_gateway,omitempty"`

	// OwnersDirDenylist is used to configure regular expressions matching directories
	// to ignore when searching for OWNERS{,_ALIAS} files in a repo.
	OwnersDirDenylist *OwnersDirDenylist `json:"owners_dir_denylist,omitempty"`

	// Pub/Sub Subscriptions that we want to listen to.
	PubSubSubscriptions PubsubSubscriptions `json:"pubsub_subscriptions,omitempty"`

	// PubSubTriggers defines Pub/Sub Subscriptions that we want to listen to,
	// can be used to restrict build cluster on a topic.
	PubSubTriggers PubSubTriggers `json:"pubsub_triggers,omitempty"`

	// GitHubOptions allows users to control how prow applications display GitHub website links.
	GitHubOptions GitHubOptions `json:"github,omitempty"`

	// StatusErrorLink is the url that will be used for jenkins prowJobs that can't be
	// found, or have another generic issue. The default that will be used if this is not set
	// is: https://github.com/kubernetes/test-infra/issues.
	StatusErrorLink string `json:"status_error_link,omitempty"`

	// DefaultJobTimeout is the default deadline for prow jobs. This value is used when
	// no timeout is configured at the job level. Defaults to 24 hours.
	DefaultJobTimeout *metav1.Duration `json:"default_job_timeout,omitempty"`

	// ManagedWebhooks contains information about all github repositories and organizations which are using
	// non-global HMAC tokens.
	ManagedWebhooks ManagedWebhooks `json:"managed_webhooks,omitempty"`

	// ProwJobDefaultEntries holds a list of defaults for specific values.
	// Each entry in the slice specifies Repo and Cluster regexp filter fields to
	// match against the jobs and a corresponding ProwJobDefault. All entries that
	// match a job are used. Later matching entries override the fields of earlier
	// matching entries.
	ProwJobDefaultEntries []*ProwJobDefaultEntry `json:"prowjob_default_entries,omitempty"`

	// DisabledClusters holds a list of disabled build cluster names. The same context names will be ignored while
	// Prow components load the kubeconfig files.
	DisabledClusters []string `json:"disabled_clusters,omitempty"`
}

ProwConfig is config for all prow controllers.
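A small config.yaml sketch of the top-level ProwConfig scalars shown above (the issue-tracker URL and cluster name are placeholders):

prowjob_namespace: default
pod_namespace: test-pods
log_level: info
default_job_timeout: 24h
status_error_link: https://github.com/example-org/example-repo/issues
disabled_clusters:
  - retired-build-cluster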

func (*ProwConfig) HasConfigFor

func (pc *ProwConfig) HasConfigFor() (global bool, orgs sets.Set[string], repos sets.Set[string])

type ProwJobDefaultEntry

type ProwJobDefaultEntry struct {

	// OrgRepo matches against the "org" or "org/repo" that the presubmit or postsubmit
	// is associated with. If the job is a periodic, extra_refs[0] is used. If the
	// job is a periodic without extra_refs, the empty string will be used.
	// If this field is omitted all jobs will match.
	OrgRepo string `json:"repo,omitempty"`
	// Cluster matches against the cluster alias of the build cluster that the
	// ProwJob is configured to run on. Recall that ProwJobs default to running on
	// the "default" build cluster if they omit the "cluster" field in config.
	Cluster string `json:"cluster,omitempty"`

	// Config is the ProwJobDefault to apply if the filter fields all match the
	// ProwJob. Note that when multiple entries match a ProwJob they are all used
	// by sequentially merging with later entries overriding fields from earlier
	// entries.
	Config *prowapi.ProwJobDefault `json:"config,omitempty"`
}
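A hedged sketch of prowjob_default_entries in config.yaml; the `tenant_id` field of prowapi.ProwJobDefault is assumed here for illustration:

prowjob_default_entries:
  - repo: "example-org"        # matches all repos in the org
    cluster: "*"               # matches any build cluster
    config:
      tenant_id: "example-tenant"   # assumed prowapi.ProwJobDefault field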

type ProwYAML

type ProwYAML struct {
	Presets     []Preset     `json:"presets"`
	Presubmits  []Presubmit  `json:"presubmits"`
	Postsubmits []Postsubmit `json:"postsubmits"`

	// ProwIgnored is a well known, unparsed field where non-Prow fields can
	// be defined without conflicting with unknown field validation.
	ProwIgnored *json.RawMessage `json:"prow_ignored,omitempty"`
}

ProwYAML represents the content of a .prow.yaml file used to version Presubmits and Postsubmits inside the tested repo.
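A hedged sketch of a repo-local .prow.yaml; the `name` field comes from JobBase (documented elsewhere) and is a placeholder:

presubmits:
  - name: example-inrepo-test    # assumed JobBase field, placeholder name
    always_run: true
prow_ignored:
  # arbitrary, unparsed data for non-Prow tooling
  notes: "anything here is ignored by Prow"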

func ReadProwYAML

func ReadProwYAML(log *logrus.Entry, dir string, strict bool) (*ProwYAML, error)

ReadProwYAML parses the .prow.yaml file or .prow directory; no commit checkout or defaulting is included.

func (*ProwYAML) DeepCopy

func (in *ProwYAML) DeepCopy() *ProwYAML

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProwYAML.

func (*ProwYAML) DeepCopyInto

func (in *ProwYAML) DeepCopyInto(out *ProwYAML)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type ProwYAMLGetter

type ProwYAMLGetter func(c *Config, gc git.ClientFactory, identifier, baseBranch, baseSHA string, headSHAs ...string) (*ProwYAML, error)

ProwYAMLGetter is used to retrieve a ProwYAML. Tests should provide their own implementation and set that on the Config.

type PubSubTrigger

type PubSubTrigger struct {
	Project         string   `json:"project"`
	Topics          []string `json:"topics"`
	AllowedClusters []string `json:"allowed_clusters"`
	// MaxOutstandingMessages is the maximum number of messages being processed at once; the default is 10.
	MaxOutstandingMessages int `json:"max_outstanding_messages"`
}

PubSubTrigger contains pubsub configuration for a single project.

type PubSubTriggers

type PubSubTriggers []PubSubTrigger

PubSubTriggers contains pubsub configurations.

type PubsubSubscriptions

type PubsubSubscriptions map[string][]string

PubsubSubscriptions maps GCP project IDs to a list of subscription IDs.
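A config.yaml sketch combining both Pub/Sub settings shown above (project, topic, and subscription names are placeholders):

pubsub_subscriptions:
  example-gcp-project:
    - example-subscription
pubsub_triggers:
  - project: example-gcp-project
    topics:
      - example-topic
    allowed_clusters:
      - "*"
    max_outstanding_messages: 10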

type PushGateway

type PushGateway struct {
	// Endpoint is the location of the prometheus pushgateway
	// where prow will push metrics to.
	Endpoint string `json:"endpoint,omitempty"`
	// Interval specifies how often prow will push metrics
	// to the pushgateway. Defaults to 1m.
	Interval *metav1.Duration `json:"interval,omitempty"`
	// ServeMetrics tells whether or not the components serve metrics.
	ServeMetrics bool `json:"serve_metrics"`
}

PushGateway is a prometheus push gateway.
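A config.yaml sketch of this section (the endpoint is a placeholder):

push_gateway:
  endpoint: http://pushgateway.example-monitoring.svc:9091
  interval: 1m
  serve_metrics: true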

type QueryMap

type QueryMap struct {
	sync.Mutex
	// contains filtered or unexported fields
}

QueryMap is a struct mapping from "org/repo" -> TideQueries that apply to that org or repo. It is lazily populated, but threadsafe.

func (*QueryMap) ForRepo

func (qm *QueryMap) ForRepo(repo OrgRepo) TideQueries

ForRepo returns the tide queries that apply to a repo.

type RefGetter

type RefGetter = func() (string, error)

RefGetter is used to retrieve a Git Reference. Its purpose is to defer calling out to GitHub in the context of inrepoconfig, so the call is only made when we actually need that info.

type RefGetterForGitHubPullRequest

type RefGetterForGitHubPullRequest struct {
	// contains filtered or unexported fields
}

RefGetterForGitHubPullRequest is used to get the Presubmits for a GitHub PullRequest when that PullRequest hasn't been fetched yet. It will only fetch it if someone calls its .PullRequest() func. It is threadsafe.

func NewRefGetterForGitHubPullRequest

func NewRefGetterForGitHubPullRequest(ghc refGetterForGitHubPullRequestClient, org, repo string, number int) *RefGetterForGitHubPullRequest

NewRefGetterForGitHubPullRequest returns a brand new RefGetterForGitHubPullRequest.

func (*RefGetterForGitHubPullRequest) BaseSHA

func (rg *RefGetterForGitHubPullRequest) BaseSHA() (string, error)

BaseSHA is a RefGetter that returns the baseRef for the PullRequest.

func (*RefGetterForGitHubPullRequest) HeadSHA

func (rg *RefGetterForGitHubPullRequest) HeadSHA() (string, error)

HeadSHA is a RefGetter that returns the headSHA for the PullRequest.

func (*RefGetterForGitHubPullRequest) PullRequest

func (rg *RefGetterForGitHubPullRequest) PullRequest() (*github.PullRequest, error)

type RegexpChangeMatcher

type RegexpChangeMatcher struct {
	// RunIfChanged defines a regex used to select which subset of file changes should trigger this job.
	// If any file in the changeset matches this regex, the job will be triggered.
	// Additionally, AlwaysRun is mutually exclusive with RunIfChanged.
	RunIfChanged string `json:"run_if_changed,omitempty"`
	// SkipIfOnlyChanged defines a regex used to select which subset of file changes should trigger this job.
	// If all files in the changeset match this regex, the job will be skipped.
	// In other words, this is the negation of RunIfChanged.
	// Additionally, AlwaysRun is mutually exclusive with SkipIfOnlyChanged.
	SkipIfOnlyChanged string `json:"skip_if_only_changed,omitempty"`
	// contains filtered or unexported fields
}

RegexpChangeMatcher is for code shared between jobs that run only when certain files are changed.
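A hedged sketch contrasting the two matchers on presubmits (the `presubmits` key and job names are assumed, from the job config and JobBase):

presubmits:
  example-org/example-repo:
    - name: example-go-build               # placeholder name
      run_if_changed: '\.go$'              # run when any changed file matches
    - name: example-docs-lint              # placeholder name
      skip_if_only_changed: '^docs/|\.md$' # skip when every changed file matches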

func (RegexpChangeMatcher) CouldRun

func (cm RegexpChangeMatcher) CouldRun() bool

CouldRun determines if it's possible for a set of changes to trigger this condition.

func (*RegexpChangeMatcher) DeepCopy

func (in *RegexpChangeMatcher) DeepCopy() *RegexpChangeMatcher

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RegexpChangeMatcher.

func (*RegexpChangeMatcher) DeepCopyInto

func (in *RegexpChangeMatcher) DeepCopyInto(out *RegexpChangeMatcher)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (RegexpChangeMatcher) RunsAgainstChanges

func (cm RegexpChangeMatcher) RunsAgainstChanges(changes []string) bool

RunsAgainstChanges returns true if any of the changed input paths match the run_if_changed regex; OR if any of the changed input paths *don't* match the skip_if_only_changed regex.

func (RegexpChangeMatcher) ShouldRun

func (cm RegexpChangeMatcher) ShouldRun(changes ChangedFilesProvider) (determined bool, shouldRun bool, err error)

ShouldRun determines if we can know for certain that the job should run. We can either know for certain that the job should or should not run based on the matcher, or we can not be able to determine that fact at all.

type Repo

type Repo struct {
	Policy   `json:",inline"`
	Branches map[string]Branch `json:"branches,omitempty"`
}

Repo holds protection policy overrides for all branches in a repo, as well as specific branch overrides.

func (Repo) GetBranch

func (r Repo) GetBranch(name string) (*Branch, error)

GetBranch returns the branch config after merging in any repo policies.

func (Repo) HasManagedBranches

func (r Repo) HasManagedBranches() bool

HasManagedBranches returns true if the repo has managed branches

type Reporter

type Reporter struct {
	// Context is the name of the GitHub status context for the job.
	// Defaults: the same as the name of the job.
	Context string `json:"context,omitempty"`
	// SkipReport skips commenting and setting status on GitHub.
	SkipReport bool `json:"skip_report,omitempty"`
}

type RerunAuthConfigs

type RerunAuthConfigs map[string]prowapi.RerunAuthConfig

RerunAuthConfigs represents the configs for rerun authorization in Deck. Use `org/repo`, `org` or `*` as key and a `RerunAuthConfig` struct as value.

type Restrictions

type Restrictions struct {
	Apps  []string `json:"apps,omitempty"`
	Users []string `json:"users,omitempty"`
	Teams []string `json:"teams,omitempty"`
}

Restrictions limits who can merge. Apps, Users and Teams items are appended to parent lists.

type ReviewPolicy

type ReviewPolicy struct {
	// DismissalRestrictions appends users/teams that are allowed to merge
	DismissalRestrictions *DismissalRestrictions `json:"dismissal_restrictions,omitempty"`
	// DismissStale overrides whether new commits automatically dismiss old reviews if set
	DismissStale *bool `json:"dismiss_stale_reviews,omitempty"`
	// RequireOwners overrides whether CODEOWNERS must approve PRs if set
	RequireOwners *bool `json:"require_code_owner_reviews,omitempty"`
	// Approvals overrides the number of approvals required if set
	Approvals *int `json:"required_approving_review_count,omitempty"`
	// BypassRestrictions appends users/teams that are allowed to bypass PR restrictions
	BypassRestrictions *BypassRestrictions `json:"bypass_pull_request_allowances,omitempty"`
}

ReviewPolicy specifies github approval/review criteria. Any nil values inherit the policy from the parent, otherwise bool/ints are overridden. Non-empty lists are appended to parent lists.

type Scheduler

type Scheduler struct {
	Enabled bool `json:"enabled,omitempty"`

	// Scheduling strategies
	Failover *FailoverScheduling `json:"failover,omitempty"`
}

type Sinker

type Sinker struct {
	// ResyncPeriod is how often the controller will perform a garbage
	// collection. Defaults to one hour.
	ResyncPeriod *metav1.Duration `json:"resync_period,omitempty"`
	// MaxProwJobAge is how old a ProwJob can be before it is garbage-collected.
	// Defaults to one week.
	MaxProwJobAge *metav1.Duration `json:"max_prowjob_age,omitempty"`
	// MaxPodAge is how old a Pod can be before it is garbage-collected.
	// Defaults to one day.
	MaxPodAge *metav1.Duration `json:"max_pod_age,omitempty"`
	// TerminatedPodTTL is how long a Pod can live after termination before it is
	// garbage collected.
	// Defaults to matching MaxPodAge.
	TerminatedPodTTL *metav1.Duration `json:"terminated_pod_ttl,omitempty"`
	// ExcludeClusters are build clusters that don't want to be managed by sinker.
	ExcludeClusters []string `json:"exclude_clusters,omitempty"`
}

Sinker is config for the sinker controller.
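A config.yaml sketch of this section (the cluster name is a placeholder; the durations mirror the documented defaults except for the TTL):

sinker:
  resync_period: 1h
  max_prowjob_age: 168h       # one week
  max_pod_age: 24h
  terminated_pod_ttl: 30m
  exclude_clusters:
    - example-unmanaged-cluster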

type SlackReporter

type SlackReporter struct {
	JobTypesToReport            []prowapi.ProwJobType `json:"job_types_to_report,omitempty"`
	prowapi.SlackReporterConfig `json:",inline"`
}

SlackReporter represents the config for the Slack reporter. The channel can be overridden on the job via the .reporter_config.slack.channel property.

func (*SlackReporter) DefaultAndValidate

func (cfg *SlackReporter) DefaultAndValidate() error

type SlackReporterConfigs

type SlackReporterConfigs map[string]SlackReporter

SlackReporterConfigs represents the config for the Slack reporter(s). Use `org/repo`, `org` or `*` as key and a `SlackReporter` struct as value.
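A hedged sketch of this section; the `channel` field belongs to the inlined prowapi.SlackReporterConfig and is assumed here, as is the channel name:

slack_reporter_configs:
  "*":
    job_types_to_report:
      - presubmit
      - postsubmit
    channel: example-alerts    # assumed prowapi.SlackReporterConfig field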

func (SlackReporterConfigs) GetSlackReporter

func (cfg SlackReporterConfigs) GetSlackReporter(refs *prowapi.Refs) SlackReporter

func (SlackReporterConfigs) HasGlobalConfig

func (cfg SlackReporterConfigs) HasGlobalConfig() bool

type Spyglass

type Spyglass struct {
	// Lenses is a list of lens configurations.
	Lenses []LensFileConfig `json:"lenses,omitempty"`
	// Viewers is deprecated, prefer Lenses instead.
	// Viewers was a map of Regexp strings to viewer names that defines which sets
	// of artifacts need to be consumed by which viewers. It is copied in to Lenses at load time.
	Viewers map[string][]string `json:"viewers,omitempty"`
	// RegexCache is a map of lens regexp strings to their compiled equivalents.
	RegexCache map[string]*regexp.Regexp `json:"-"`
	// SizeLimit is the max size artifact in bytes that Spyglass will attempt to
	// read in entirety. This will only affect viewers attempting to use
	// artifact.ReadAll(). To exclude outlier artifacts, set this limit to
	// expected file size + variance. To include all artifacts with high
	// probability, use 2*maximum observed artifact size.
	SizeLimit int64 `json:"size_limit,omitempty"`
	// GCSBrowserPrefix is used to generate a link to a human-usable GCS browser.
	// If left empty, the link will not be shown. Otherwise, a GCS path (with no
	// prefix or scheme) will be appended to GCSBrowserPrefix and shown to the user.
	GCSBrowserPrefix string `json:"gcs_browser_prefix,omitempty"`
	// GCSBrowserPrefixesByRepo are used to generate a link to a human-usable GCS browser.
	// They are mapped by org, org/repo or '*' which is the default value.
	// These are the most specific and will override GCSBrowserPrefixesByBucket if both are resolved.
	GCSBrowserPrefixesByRepo GCSBrowserPrefixes `json:"gcs_browser_prefixes,omitempty"`
	// GCSBrowserPrefixesByBucket are used to generate a link to a human-usable GCS browser.
	// They are mapped by bucket name or '*' which is the default value.
	// They will only be utilized if there is not a GCSBrowserPrefixesByRepo for the org/repo.
	GCSBrowserPrefixesByBucket GCSBrowserPrefixes `json:"gcs_browser_prefixes_by_bucket,omitempty"`
	// If set, Announcement is used as a Go HTML template string to be displayed at the top of
	// each spyglass page. Using HTML in the template is acceptable.
	// Currently the only variable available is .ArtifactPath, which contains the GCS path for the job artifacts.
	Announcement string `json:"announcement,omitempty"`
	// TestGridConfig is the path to the TestGrid config proto. If the path begins with
	// "gs://" it is assumed to be a GCS reference, otherwise it is read from the local filesystem.
	// If left blank, TestGrid links will not appear.
	TestGridConfig string `json:"testgrid_config,omitempty"`
	// TestGridRoot is the root URL to the TestGrid frontend, e.g. "https://testgrid.k8s.io/".
	// If left blank, TestGrid links will not appear.
	TestGridRoot string `json:"testgrid_root,omitempty"`
	// HidePRHistLink allows prow to hide the PR History link from deck; this is handy especially for
	// prow instances that only serve gerrit.
	// This might become obsolete once https://github.com/kubernetes/test-infra/issues/24130 is fixed.
	HidePRHistLink bool `json:"hide_pr_history_link,omitempty"`
	// PRHistLinkTemplate is the template for constructing the href of the `PR History` button;
	// by default it's "/pr-history?org={{.Org}}&repo={{.Repo}}&pr={{.Number}}"
	PRHistLinkTemplate string `json:"pr_history_link_template,omitempty"`
	// BucketAliases permits a naive URL rewriting functionality.
	// Keys represent aliases and their values are the authoritative
	// bucket names they will be substituted with
	BucketAliases map[string]string `json:"bucket_aliases,omitempty"`
}

Spyglass holds config for Spyglass.
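A hedged sketch of Spyglass settings; Spyglass is configured under the Deck section (documented elsewhere), and the URLs and bucket names are placeholders:

deck:
  spyglass:
    size_limit: 500000000        # bytes
    gcs_browser_prefixes:
      "*": https://gcsweb.example.com/gcs/
    gcs_browser_prefixes_by_bucket:
      example-bucket: https://gcsweb.example.com/gcs/
    announcement: "Browsing artifacts under {{.ArtifactPath}}"
    bucket_aliases:
      old-example-bucket: new-example-bucket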

func (Spyglass) GetGCSBrowserPrefix

func (s Spyglass) GetGCSBrowserPrefix(org, repo, bucket string) string

GetGCSBrowserPrefix determines the GCS Browser prefix by checking for a config in the following order:

  1. If org (and optionally repo) is provided, resolve the GCSBrowserPrefixesByRepo config.
  2. If bucket is provided, resolve the GCSBrowserPrefixesByBucket config.
  3. If neither yields a match, fall back to the default ('*') entry of GCSBrowserPrefixesByRepo, and then of GCSBrowserPrefixesByBucket.

type Tide

type Tide struct {
	Gerrit *TideGerritConfig `json:"gerrit,omitempty"`
	// SyncPeriod specifies how often Tide will sync jobs with GitHub. Defaults to 1m.
	SyncPeriod *metav1.Duration `json:"sync_period,omitempty"`
	// MaxGoroutines is the maximum number of goroutines spawned inside the
	// controller to handle org/repo:branch pools. Defaults to 20. Needs to be a
	// positive number.
	MaxGoroutines int `json:"max_goroutines,omitempty"`
	// BatchSizeLimitMap is a key/value pair of an org or org/repo as the key and
	// integer batch size limit as the value. Use "*" as key to set a global default.
	// Special values:
	//  0 => unlimited batch size
	// -1 => batch merging disabled :(
	BatchSizeLimitMap map[string]int `json:"batch_size_limit,omitempty"`
	// PrioritizeExistingBatches configures on org or org/repo level if Tide should continue
	// testing pre-existing batches instead of immediately including new PRs as they become
	// eligible. Continuing on an old batch allows reusing all existing test results, whereas
	// starting a new one requires starting new instances of all tests.
	// Use '*' as key to set this globally. Defaults to true.
	PrioritizeExistingBatchesMap map[string]bool `json:"prioritize_existing_batches,omitempty"`

	TideGitHubConfig `json:",inline"`
}

Tide is config for the tide pool.
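A config.yaml sketch of the core Tide knobs shown above (org/repo names are placeholders):

tide:
  sync_period: 1m
  max_goroutines: 20
  batch_size_limit:
    "*": 5
    example-org/expensive-repo: -1   # -1 disables batch merging
  prioritize_existing_batches:
    "*": true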

func (*Tide) BatchSizeLimit

func (t *Tide) BatchSizeLimit(repo OrgRepo) int

func (*Tide) GetPRStatusBaseURL

func (t *Tide) GetPRStatusBaseURL(repo OrgRepo) string

func (*Tide) GetTargetURL

func (t *Tide) GetTargetURL(repo OrgRepo) string

func (*Tide) MergeCommitTemplate

func (t *Tide) MergeCommitTemplate(repo OrgRepo) TideMergeCommitTemplate

MergeCommitTemplate returns a struct with Go template string(s) or nil

func (*Tide) MergeMethod

func (t *Tide) MergeMethod(repo OrgRepo) types.PullRequestMergeType

MergeMethod returns the merge method to use for a repo. The default of merge is returned when not overridden.

func (*Tide) OrgRepoBranchMergeMethod

func (t *Tide) OrgRepoBranchMergeMethod(orgRepo OrgRepo, branch string) types.PullRequestMergeType

OrgRepoBranchMergeMethod returns the merge method to use for a given triple: org, repo, branch. The following matching criteria apply; the priority goes from the highest to the lowest:

  1. kubernetes/test-infra@main: rebase org/repo@branch shorthand

  2. kubernetes: test-infra: ma(ster|in): rebase branch level regex

  3. kubernetes/test-infra: rebase org/repo shorthand

  4. kubernetes: test-infra: rebase repo-wide config

  5. kubernetes: rebase org shorthand

  6. default to "merge"

func (*Tide) PrioritizeExistingBatches

func (t *Tide) PrioritizeExistingBatches(repo OrgRepo) bool

type TideBranchMergeType

type TideBranchMergeType struct {
	MergeType types.PullRequestMergeType
	Regexpr   *regexp.Regexp
}

func (TideBranchMergeType) MarshalJSON

func (tbmt TideBranchMergeType) MarshalJSON() ([]byte, error)

func (TideBranchMergeType) Match

func (tbmt TideBranchMergeType) Match(branch string) bool

func (*TideBranchMergeType) UnmarshalJSON

func (tbmt *TideBranchMergeType) UnmarshalJSON(b []byte) error

type TideContextPolicy

type TideContextPolicy struct {
	// whether to consider unknown contexts optional (skip) or required.
	SkipUnknownContexts       *bool    `json:"skip-unknown-contexts,omitempty"`
	RequiredContexts          []string `json:"required-contexts,omitempty"`
	RequiredIfPresentContexts []string `json:"required-if-present-contexts,omitempty"`
	OptionalContexts          []string `json:"optional-contexts,omitempty"`
	// Infer required and optional jobs from Branch Protection configuration
	FromBranchProtection *bool `json:"from-branch-protection,omitempty"`
}

TideContextPolicy configures options about how to handle various contexts.

func ParseTideContextPolicyOptions

func ParseTideContextPolicyOptions(org, repo, branch string, options TideContextPolicyOptions) TideContextPolicy

func (*TideContextPolicy) IsOptional

func (cp *TideContextPolicy) IsOptional(c string) bool

IsOptional checks whether a context can be ignored. It returns true if:

  - the context is registered as optional, or
  - required contexts are registered and the provided context is not among them.

Otherwise it returns false: every context is then considered required.

func (*TideContextPolicy) MissingRequiredContexts

func (cp *TideContextPolicy) MissingRequiredContexts(contexts []string) []string

MissingRequiredContexts discards the optional contexts and only looks for required contexts that are not provided.

func (*TideContextPolicy) Validate

func (cp *TideContextPolicy) Validate() error

Validate returns an error if any contexts are listed more than once in the config.

type TideContextPolicyOptions

type TideContextPolicyOptions struct {
	TideContextPolicy `json:",inline"`
	// GitHub Orgs
	Orgs map[string]TideOrgContextPolicy `json:"orgs,omitempty"`
}

TideContextPolicyOptions holds the default policy, and any org overrides.
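A hedged sketch of tide context options; the repo- and branch-level nesting follows TideOrgContextPolicy and TideRepoContextPolicy (shown later), and the org/repo/context names are placeholders:

tide:
  context_options:
    from-branch-protection: true
    skip-unknown-contexts: true
    orgs:
      example-org:
        required-contexts:
          - "ci/required-check"
        repos:
          example-repo:
            branches:
              main:
                optional-contexts:
                  - "ci/flaky-check"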

type TideGerritConfig

type TideGerritConfig struct {
	Queries GerritOrgRepoConfigs `json:"queries"`
	// RateLimit defines how many changes to query per gerrit API call
	// default is 5.
	RateLimit int `json:"ratelimit,omitempty"`
}

TideGerritConfig contains all Gerrit related configurations for tide.

type TideGitHubConfig

type TideGitHubConfig struct {
	// StatusUpdatePeriod specifies how often Tide will update GitHub status contexts.
	// Defaults to the value of SyncPeriod.
	StatusUpdatePeriod *metav1.Duration `json:"status_update_period,omitempty"`
	// Queries represents a list of GitHub search queries that collectively
	// specify the set of PRs that meet merge requirements.
	Queries TideQueries `json:"queries,omitempty"`

	// A key/value pair of an org/repo as the key and merge method to override
	// the default method of merge. Valid options are squash, rebase, and merge.
	MergeType map[string]TideOrgMergeType `json:"merge_method,omitempty"`

	// A key/value pair of an org/repo as the key and Go template to override
	// the default merge commit title and/or message. Template is passed the
	// PullRequest struct (prow/github/types.go#PullRequest)
	MergeTemplate map[string]TideMergeCommitTemplate `json:"merge_commit_template,omitempty"`

	// URL for tide status contexts.
	// We can consider allowing this to be set separately for separate repos, or
	// allowing it to be a template.
	TargetURL string `json:"target_url,omitempty"`

	// TargetURLs is a map from "*", <org>, or <org/repo> to the URL for the tide status contexts.
	// The most specific key that matches will be used.
	// This field is mutually exclusive with TargetURL.
	TargetURLs map[string]string `json:"target_urls,omitempty"`

	// PRStatusBaseURL is the base URL for the PR status page.
	// This is used to link to a merge requirements overview
	// in the tide status context.
	// Will be deprecated on June 2020.
	PRStatusBaseURL string `json:"pr_status_base_url,omitempty"`

	// PRStatusBaseURLs is the base URL for the PR status page
	// mapped by org or org/repo level.
	PRStatusBaseURLs map[string]string `json:"pr_status_base_urls,omitempty"`

	// BlockerLabel is an optional label that is used to identify merge blocking
	// GitHub issues.
	// Leave this blank to disable this feature and save 1 API token per sync loop.
	BlockerLabel string `json:"blocker_label,omitempty"`

	// SquashLabel is an optional label that is used to identify PRs that should
	// always be squash merged.
	// Leave this blank to disable this feature.
	SquashLabel string `json:"squash_label,omitempty"`

	// RebaseLabel is an optional label that is used to identify PRs that should
	// always be rebased and merged.
	// Leave this blank to disable this feature.
	RebaseLabel string `json:"rebase_label,omitempty"`

	// MergeLabel is an optional label that is used to identify PRs that should
	// always be merged with all individual commits from the PR.
	// Leave this blank to disable this feature.
	MergeLabel string `json:"merge_label,omitempty"`

	// TideContextPolicyOptions defines merge options for context. If not set it will infer
	// the required and optional contexts from the prow jobs configured and use the github
	// combined status; otherwise it may apply the branch protection setting or let users
	// define their own options in case branch protection is not used.
	ContextOptions TideContextPolicyOptions `json:"context_options,omitempty"`

	// BatchSizeLimitMap is a key/value pair of an org or org/repo as the key and
	// integer batch size limit as the value. Use "*" as key to set a global default.
	// Special values:
	//  0 => unlimited batch size
	// -1 => batch merging disabled :(
	BatchSizeLimitMap map[string]int `json:"batch_size_limit,omitempty"`

	// Priority is an ordered list of sets of labels that would be prioritized before other PRs
	// PRs should match all labels contained in a set to be prioritized. The first entry has
	// the highest priority.
	Priority []TidePriority `json:"priority,omitempty"`

	// DisplayAllQueriesInStatus controls if Tide should mention all queries in the status it
	// creates. The default is to only mention the one to which we are closest (Calculated
	// by total number of requirements - fulfilled number of requirements).
	DisplayAllQueriesInStatus bool `json:"display_all_tide_queries_in_status,omitempty"`
}

TideGitHubConfig is the tide config for GitHub.
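A hedged sketch of the GitHub-specific Tide settings; the query fields come from TideQuery (shown below), and all org/repo/label/URL values are placeholders:

tide:
  queries:
    - orgs:
        - example-org
      labels:
        - lgtm
        - approved
      missingLabels:
        - do-not-merge/hold
  merge_method:
    example-org/example-repo: squash
  target_urls:
    "*": https://deck.example.com/tide
  blocker_label: merge-blocker
  priority:
    - labels:
        - priority/critical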

type TideMergeCommitTemplate

type TideMergeCommitTemplate struct {
	TitleTemplate string `json:"title,omitempty"`
	BodyTemplate  string `json:"body,omitempty"`

	Title *template.Template `json:"-"`
	Body  *template.Template `json:"-"`
}

TideMergeCommitTemplate holds templates to use for merge commits.

type TideOrgContextPolicy

type TideOrgContextPolicy struct {
	TideContextPolicy `json:",inline"`
	Repos             map[string]TideRepoContextPolicy `json:"repos,omitempty"`
}

TideOrgContextPolicy overrides the policy for an org, and any repo overrides.

type TideOrgMergeType

type TideOrgMergeType struct {
	Repos     map[string]TideRepoMergeType
	MergeType types.PullRequestMergeType
}

func (TideOrgMergeType) MarshalJSON

func (tomt TideOrgMergeType) MarshalJSON() ([]byte, error)

When TideOrgMergeType.MergeType is present, marshal into:

kubernetes: squash

when TideOrgMergeType.Repos is not empty, marshal into:

kubernetes:
  test-infra: squash

func (*TideOrgMergeType) UnmarshalJSON

func (tomt *TideOrgMergeType) UnmarshalJSON(b []byte) error

Org-wide configuration:

kubernetes: merge

unmarshal into types.PullRequestMergeType.

Full configuration:

kubernetes:
  test-infra:
    main: merge

unmarshal into map[string]TideRepoMergeType.

type TidePriority

type TidePriority struct {
	Labels []string `json:"labels,omitempty"`
}

TidePriority contains a list of labels used to prioritize PRs in the merge pool

type TideQueries

type TideQueries []TideQuery

TideQueries is a TideQuery slice.

func (TideQueries) OrgExceptionsAndRepos

func (tqs TideQueries) OrgExceptionsAndRepos() (map[string]sets.Set[string], sets.Set[string])

OrgExceptionsAndRepos determines which orgs and repos a set of queries cover. Output is returned as a mapping from 'included org'->'repos excluded in the org' and a set of included repos.

func (TideQueries) QueryMap

func (tqs TideQueries) QueryMap() *QueryMap

QueryMap creates a QueryMap from TideQueries

type TideQuery

type TideQuery struct {
	Author string `json:"author,omitempty"`

	Labels        []string `json:"labels,omitempty"`
	MissingLabels []string `json:"missingLabels,omitempty"`

	ExcludedBranches []string `json:"excludedBranches,omitempty"`
	IncludedBranches []string `json:"includedBranches,omitempty"`

	Milestone string `json:"milestone,omitempty"`

	ReviewApprovedRequired bool `json:"reviewApprovedRequired,omitempty"`

	Orgs          []string `json:"orgs,omitempty"`
	Repos         []string `json:"repos,omitempty"`
	ExcludedRepos []string `json:"excludedRepos,omitempty"`
}

TideQuery is turned into a GitHub search query. See the docs for details: https://help.github.com/articles/searching-issues-and-pull-requests/

func (TideQuery) ForRepo

func (tq TideQuery) ForRepo(repo OrgRepo) bool

ForRepo indicates if the tide query applies to the specified repo.

func (*TideQuery) OrgQueries

func (tq *TideQuery) OrgQueries() map[string]string

OrgQueries returns the GitHub search string for the query, sharded by org.

func (*TideQuery) Query

func (tq *TideQuery) Query() string

Query returns the corresponding github search string for the tide query.

func (TideQuery) TenantIDs

func (q TideQuery) TenantIDs(cfg Config) []string

func (*TideQuery) Validate

func (tq *TideQuery) Validate() error

Validate returns an error if the query has any errors.

Examples include:

  - an org name that is empty or includes a /
  - repos that are not org/repo
  - a label that is in both the labels and missing_labels sections
  - a branch that is in both the included and excluded branch sets

type TideRepoContextPolicy

type TideRepoContextPolicy struct {
	TideContextPolicy `json:",inline"`
	Branches          map[string]TideContextPolicy `json:"branches,omitempty"`
}

TideRepoContextPolicy overrides the policy for repo, and any branch overrides.

type TideRepoMergeType

type TideRepoMergeType struct {
	Branches  map[string]TideBranchMergeType
	MergeType types.PullRequestMergeType
}

func (TideRepoMergeType) MarshalJSON

func (trmt TideRepoMergeType) MarshalJSON() ([]byte, error)

When TideRepoMergeType.MergeType is present, marshal into:

kubernetes: squash

when TideRepoMergeType.Branches is not empty, marshal into:

kubernetes:
  main: squash

func (*TideRepoMergeType) UnmarshalJSON

func (trmt *TideRepoMergeType) UnmarshalJSON(b []byte) error

Full configuration:

test-infra:
  main: merge

unmarshal into map[string]TideBranchMergeType

Repo-wide configuration:

test-infra: merge

unmarshal into types.PullRequestMergeType

type UtilityConfig

type UtilityConfig struct {
	// Decorate determines if we decorate the PodSpec or not
	Decorate *bool `json:"decorate,omitempty"`

	// PathAlias is the location under <root-dir>/src
	// where the repository under test is cloned. If this
	// is not set, <root-dir>/src/github.com/org/repo will
	// be used as the default.
	PathAlias string `json:"path_alias,omitempty"`
	// CloneURI is the URI that is used to clone the
	// repository. If unset, will default to
	// `https://github.com/org/repo.git`.
	CloneURI string `json:"clone_uri,omitempty"`
	// SkipSubmodules determines if submodules should be
	// cloned when the job is run. Defaults to false.
	SkipSubmodules bool `json:"skip_submodules,omitempty"`
	// CloneDepth is the depth of the clone that will be used.
	// A depth of zero will do a full clone.
	CloneDepth int `json:"clone_depth,omitempty"`
	// SkipFetchHead tells prow to avoid a git fetch <remote> call.
	// The git fetch <remote> <BaseRef> call occurs regardless.
	SkipFetchHead bool `json:"skip_fetch_head,omitempty"`

	// ExtraRefs are auxiliary repositories that
	// need to be cloned, determined from config
	ExtraRefs []prowapi.Refs `json:"extra_refs,omitempty"`

	// DecorationConfig holds configuration options for
	// decorating PodSpecs that users provide
	DecorationConfig *prowapi.DecorationConfig `json:"decoration_config,omitempty"`
}

UtilityConfig holds decoration metadata, such as how to clone the repository, which extra refs to fetch, and other pod utility settings.
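A hedged sketch of UtilityConfig fields on a job; the surrounding `postsubmits`/`name` keys come from the job config and JobBase documented elsewhere, and the prowapi.Refs field names under extra_refs are assumed for illustration:

postsubmits:
  example-org/example-repo:
    - name: example-mirror-job       # assumed JobBase field, placeholder name
      decorate: true
      path_alias: example.com/example-org/example-repo
      clone_uri: https://github.com/example-org/example-repo.git
      clone_depth: 50
      skip_submodules: true
      extra_refs:
        - org: example-org           # assumed prowapi.Refs fields
          repo: other-example-repo
          base_ref: main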

func (*UtilityConfig) DeepCopy

func (in *UtilityConfig) DeepCopy() *UtilityConfig

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UtilityConfig.

func (*UtilityConfig) DeepCopyInto

func (in *UtilityConfig) DeepCopyInto(out *UtilityConfig)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*UtilityConfig) Validate

func (u *UtilityConfig) Validate() error

Validate ensures all the values set in the UtilityConfig are valid.

Directories

Path Synopsis
Package secret implements an agent to read and reload the secrets.
