
<Please fill out the Overview, Design and User Experience sections for an initial review of the proposed feature.>

Overview

<Briefly describe the problem being solved, not how the problem is solved, just focus on the problem. Think about why the feature is needed, and what is the relevant context to understand the problem.>

Currently, the edge cluster agent is installed with cluster wide permissions, which must be granted by the Kubernetes cluster admin. This means that a DevOps team wishing to make use of Open Horizon is required to engage the Ops team responsible for providing Kubernetes services. To enable DevOps teams to be more self-sufficient, this barrier needs to be removed. In effect, a DevOps team needs to be able to install an edge cluster agent with permission to a specific namespace, so that the agent can manage service deployments in that namespace and only in that namespace. As a result, such a namespace scoped edge cluster agent is no longer able to deploy services into any namespace, as the current cluster scoped agent can.

Once this barrier is removed, another set of seemingly disjoint use cases is also solved. When multiple DevOps teams utilize an edge cluster in this way, they are effectively using it in a pseudo multi-tenant fashion. That is, each DevOps team would expect to be able to manage its own agents, and the services deployed by those agents, without interference from agents in other namespaces within the same cluster. To the extent that Kubernetes administration enables multi-tenancy within a cluster, a namespace scoped agent supports those goals. Thus, a provider of Kubernetes services could enable each of their customers to independently exploit OH in their own namespace.

The use cases for a single cluster scoped agent with cluster wide permissions are still valid and are not altered by this design. Further, it is desirable that OH can support a single edge cluster containing both a cluster scoped agent and one or more namespace scoped agents.

It is not a goal of this design to provide an edge cluster agent that is scoped to more than one namespace but less than the entire cluster.

Design

<Describe how the problem is fixed. Include all affected components. Include diagrams for clarity. This should be the longest section in the document. Use the sections below to call out specifics related to each aspect of the overall system, and refer back to this section for context. Provide links to any relevant external information.>

Assumptions:

This design assumes that when edge cluster deployers are deploying a given service, they will be dealing primarily with namespace scoped nodes or cluster scoped nodes, but not a mix. Therefore, the design should enable a simple experience for these two cases. Further, the design assumes that it MUST be possible for deployers to work with a mix of namespace scoped and cluster scoped nodes for a given service, but that these situations are more complex and therefore require more cognitive effort to understand.

Agent Install:

The agent install script is updated to include a namespace flag indicating the target namespace of the agent:

./agent_install.sh --namespace MyProjectNamespace ...

The user invoking the install script MUST have permission to MyProjectNamespace, otherwise the install will fail. The absence of the --namespace flag indicates a desire to install the agent with cluster wide permissions, in which case the agent is installed into the openhorizon-agent namespace.

Node Properties:

A new built-in node property called openhorizon.kubernetesNamespace is introduced; its value reflects the namespace in which the agent is installed. This property is read-only: it is always set by the OH runtime and cannot be set by any user role. This property MAY be used in a deployment policy constraint expression.
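For example, a deployer could limit service placement to nodes whose agents are installed in a particular namespace. The following deployment policy fragment is a minimal sketch; the property name comes from this design, while the namespace value is illustrative:

"constraints": [
    "openhorizon.kubernetesNamespace == MyProjectNamespace"
]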


Service Definition:

When a service definition is published, the operator definition it contains is introspected for namespace definitions. If one is found, the CLI user receives a warning.

In addition, a new built-in service property called openhorizon.service.kubernetesNamespace is introduced. The Agbot sets it to the calculated target namespace of the service, so that node owners can write constraint expressions referring to that namespace; see the Deployment section below for details.

Deployment:

When an edge cluster service is deployed, by default, it is deployed into the same namespace as the agent.

When deploying an edge cluster service, the service deployer MAY write a constraint expression referencing the built-in openhorizon.kubernetesNamespace property in order to limit the placement of the edge service onto nodes in a specific namespace or set of namespaces.

When deploying an edge cluster service to cluster scoped nodes, the service deployer needs a way to indicate the target namespace. A new field is added to the service section of a deployment policy, indicating the target namespace for the service's deployment.

"service": {
    ...
    "cluster_namespace": <string>
}

This field is optional and ignored for services deployed to a device. If a deployment policy constraint expression chooses a namespace scoped node as a deployment target, this field acts as a built-in constraint: namespace scoped nodes in namespaces other than the one specified by this field are eliminated as deployment targets.
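As an illustrative sketch (the service name, org, arch, and namespace values are hypothetical; only cluster_namespace is new in this design), a deployment policy aimed primarily at cluster scoped nodes might look like:

"service": {
    "name": "my.cluster.service",
    "org": "myorg",
    "arch": "amd64",
    "serviceVersions": [ ... ],
    "cluster_namespace": "MyProjectNamespace"
}

Cluster scoped nodes selected by this policy deploy the service into MyProjectNamespace, while namespace scoped nodes remain eligible targets only if their agents are installed in MyProjectNamespace.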


The OH cluster scoped agent already allows an edge cluster service definition to contain a Kubernetes namespace definition (YAML) embedded within the operator definition. The namespace definition indicates the target namespace into which the service should be deployed. There are two problems with this feature. First, it puts the function in the wrong place: the namespace in which a service runs is a deployment concern, not an implementation concern. Second, it creates a semantic conflict when the deployer tries to deploy to a namespace scoped node in a different namespace.

The first problem is solved by the introduction of the "cluster_namespace" field in the deployment policy. This field allows deployers to have control of the target namespace, especially when the deployer is primarily dealing with cluster scoped nodes.


A namespace specified in the deployment policy overrides any namespace defined in the operator definition.

The Agbot calculates the target namespace of a cluster-based service as follows (a worked example follows the list):

  1. If present, use the namespace in the deployment policy.
  2. If present, use the namespace in the service definition.
  3. Use openhorizon-agent namespace (this is the default namespace where the cluster scoped agent is installed).
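As an illustration (the namespace names are hypothetical): if a deployment policy sets "cluster_namespace": "team-a" while the service's operator definition embeds namespace team-b, the Agbot targets team-a (rule 1). If the policy omits cluster_namespace, the Agbot targets team-b (rule 2). If neither specifies a namespace, the target is the openhorizon-agent namespace (rule 3).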

Once the Agbot has calculated the target namespace, it:

  1. Uses this namespace as a built-in constraint when searching for deployment targets (nodes) that are not in the openhorizon-agent namespace.
  2. Ignores this namespace for nodes in the openhorizon-agent namespace (these nodes are assumed to have cluster scope permissions and are therefore valid targets for services in any namespace). That is, there is no built-in constraint on deployments for nodes in the openhorizon-agent namespace.
  3. Includes this namespace as a built-in service property (openhorizon.service.kubernetesNamespace), so that the node owner can create constraint expressions referring to the target namespace of a service.

Note: The node owner is always free to configure a constraint expression that limits the namespaces of the services that may be deployed to the node, for example by referencing the openhorizon.service.kubernetesNamespace property.
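A minimal sketch of such a node policy constraint (the namespace value is illustrative; the property name comes from this design):

"constraints": [
    "openhorizon.service.kubernetesNamespace == MyProjectNamespace"
]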

Patterns:

A new field is added to the schema of a pattern (as a top level field in the schema), indicating the target namespace for the pattern's deployment.

 "namespace": <string>


The namespace field is optional and ignored for patterns deployed to a device.
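A minimal sketch of a pattern using the new field (all other fields and values are illustrative; only the top-level namespace field is new in this design):

{
    "label": "My Edge Cluster Pattern",
    "namespace": "MyProjectNamespace",
    "services": [ ... ]
}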

A namespace specified in the pattern overrides any namespace defined in the operator definition of all services in the pattern.

A pattern is in error if it attempts to deploy services to a namespace scoped node when the collection of services in the pattern is NOT deployable to the same namespace. Clearly, this can only happen when the namespace is NOT specified in the pattern definition but is contained within the operator definition of one or more services.

User Experience

<Describe which user roles are related to the problem AND the solution, e.g. admin, deployer, node owner, etc. If you need to define a new role in your design, make that very clear. Remember this is about what a user is thinking when interacting with the system before and after this design change. This section is not about a UI, it's more abstract than that. This section should explain all the aspects of the proposed feature that will surface to users.>


Terminology:

Cluster scoped agent - An OH agent installed in an edge cluster, where the agent has permission to deploy services into any namespace.

Namespace scoped agent - An OH agent installed in an edge cluster where the agent has permission to deploy services into ONLY the namespace where it is installed.

DevOps user - a conflation of roles found in the practice of DevOps, e.g. service developer or service deployer.


Usage scenarios:

As a DevOps user, I want to install the OH agent into one or more namespaces that I have permission to use for my project.

As a DevOps user, I want to select the namespace into which a service is deployed, for both cluster scoped and namespace scoped agents.

As a node owner, I want OH to ensure that DevOps teams using my edge cluster are isolated from each other, based on the namespace(s) I have given to each team.


Command Line Interface

<Describe any changes to the hzn CLI, including before and after command examples for clarity. Include which users will use the changed CLI. This section should flow very naturally from the User Experience section.>


External Components

<Describe any new or changed interactions with components that are not the agent or the management hub.>


Affected Components

<List all of the internal components (agent, MMS, Exchange, etc) which need to be updated to support the proposed feature. Include a link to the github epic for this feature (and the epic should contain the github issues for each component).>


Security

<Describe any related security aspects of the solution. Think about security of components interacting with each other, users interacting with the system, components interacting with external systems, permissions of users or components>


APIs

<Describe and new/changed/deprecated APIs, including before and after snippets for clarity. Include which components or users will use the APIs.>


Build, Install, Packaging

<Describe any changes to the way any component of the system is built (e.g. agent packages, containers, etc), installed (operators, manual install, batch install, SDO), configured, and deployed (consider the hub and edge nodes).>


Documentation Notes

<Describe the aspects of documentation that will be new/changed/updated. Be sure to indicate if this is new or changed doc, the impacted artifacts (e.g. technical doc, website, etc) and links to the related doc issue(s) in github.>


Test

<Summarize new automated tests that need to be added in support of this feature, and describe any special test requirements that you can foresee.>
