Motivation

This proposal sets up a communication channel from the app instance to the controller in which EVE does not store anything: EVE simply passes on to the controller whatever the app instance sends. Downloading the kubeconfig from a K3S cluster instance is the first use case of the proposal. The kubeconfig is used to configure access to a K3S cluster instance when used in conjunction with the kubectl command-line tool or any other client.

EVE and Controller Communication Proposal

EVE will use the existing device info API to send app instance metadata to the controller. We will introduce a new structure for app instance metadata that carries the application UUID, the data, and the data type.

// ZInfoAppInstMetaData carries the metadata of an application instance,
// e.g. a kubeconfig.
// Size of app instance metadata must be <= 32 KB.
message ZInfoAppInstMetaData {
  string uuid = 1;
  AppInstMetaDataType type = 2;
  bytes data = 3;
}

// Different types of app instance metadata
enum AppInstMetaDataType {
  APP_INST_META_DATA_TYPE_NONE = 0;
  APP_INST_META_DATA_TYPE_KUBE_CONFIG = 1;
}
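
For illustration, here is a hypothetical Go sketch of how zedagent could populate this message before attaching it to the periodic device info report. The import path and the generated identifier names are assumptions derived from the proto definition above; only the field layout and the 32 KB limit come from the proposal.

// Hypothetical zedagent-side helper; all identifiers below are assumed to be
// produced by protoc-gen-go from the proto definition above.
package zedagent

import (
    "fmt"

    info "github.com/lf-edge/eve/api/go/info" // assumed location of the generated code
)

// maxAppInstMetaDataSize mirrors the <= 32 KB limit stated in the proto comment.
const maxAppInstMetaDataSize = 32 * 1024

// encodeAppInstMetaData wraps a kubeconfig blob for a given app instance UUID
// so that it can be embedded in the device info message sent to the controller.
func encodeAppInstMetaData(appUUID string, kubeconfig []byte) (*info.ZInfoAppInstMetaData, error) {
    if len(kubeconfig) > maxAppInstMetaDataSize {
        return nil, fmt.Errorf("app instance metadata is %d bytes, above the %d-byte limit",
            len(kubeconfig), maxAppInstMetaDataSize)
    }
    return &info.ZInfoAppInstMetaData{
        Uuid: appUUID,
        Type: info.AppInstMetaDataType_APP_INST_META_DATA_TYPE_KUBE_CONFIG,
        Data: kubeconfig,
    }, nil
}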

App Instance and EVE Communication Proposal

We need a mechanism in EVE to enable file exchange between EVE and the application instance. Zedrouter already implements API endpoints at http://169.254.169.254/ from which application instances can fetch data. We could add new EVE-specific POST APIs under http://169.254.169.254/eve/v1.
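
As a concrete illustration, below is a minimal Go sketch of how zedrouter could serve such a POST endpoint with the standard net/http package, using the kubeconfig endpoint introduced in the next paragraph as the example. The handler name, the hand-off to zedagent, and the listening setup are assumptions, not the actual zedrouter implementation.

// Minimal sketch of a POST endpoint under http://169.254.169.254/eve/v1.
// Everything except the URL path and the ~32 KB cap is an assumption.
package main

import (
    "io"
    "log"
    "net/http"
)

// maxMetaDataSize mirrors the <= 32 KB cap proposed for app instance metadata.
const maxMetaDataSize = 32 << 10

// metaDataHandler accepts a POST body from an app instance; in zedrouter this
// payload would be published to zedagent together with the app instance UUID,
// here we only log its size.
func metaDataHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "only POST is supported", http.StatusMethodNotAllowed)
        return
    }
    body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maxMetaDataSize))
    if err != nil {
        http.Error(w, "payload too large or unreadable", http.StatusRequestEntityTooLarge)
        return
    }
    log.Printf("received %d bytes on %s from %s", len(body), r.URL.Path, r.RemoteAddr)
    w.WriteHeader(http.StatusOK)
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/eve/v1/kubeconfig", metaDataHandler)
    // Zedrouter already serves 169.254.169.254 per network instance; binding
    // to it directly here is purely illustrative.
    log.Fatal(http.ListenAndServe("169.254.169.254:80", mux))
}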

For downloading the kubeconfig, we will add a POST API at http://169.254.169.254/eve/v1/kubeconfig, to which the seed server application instance of the K3S cluster will publish its kubeconfig. Zedrouter will receive the kubeconfig and forward it to zedagent, which will send it to the controller in the device info message. Note that this only works if the size of the data stays below roughly 32 KB.
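
From the application side, the seed server only needs to issue an HTTP POST to that endpoint. The sketch below is a hypothetical client: the kubeconfig path (/etc/rancher/k3s/k3s.yaml, the K3S default) and the content type are assumptions, only the endpoint URL and the ~32 KB limit come from the proposal, and the exact payload format (raw YAML vs. the JSON body shown in the appendix) is left to the implementation.

// Hypothetical client run inside the seed server app instance.
package main

import (
    "bytes"
    "log"
    "net/http"
    "os"
)

const (
    kubeConfigURL = "http://169.254.169.254/eve/v1/kubeconfig"
    maxSize       = 32 * 1024 // proposed metadata size limit
)

func main() {
    // Default K3S kubeconfig path, used here purely as an example.
    data, err := os.ReadFile("/etc/rancher/k3s/k3s.yaml")
    if err != nil {
        log.Fatalf("reading kubeconfig: %v", err)
    }
    if len(data) > maxSize {
        log.Fatalf("kubeconfig is %d bytes, above the proposed %d-byte limit", len(data), maxSize)
    }
    resp, err := http.Post(kubeConfigURL, "application/yaml", bytes.NewReader(data))
    if err != nil {
        log.Fatalf("posting kubeconfig: %v", err)
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        log.Fatalf("unexpected status from EVE: %s", resp.Status)
    }
    log.Println("kubeconfig published to EVE")
}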

Appendix

  • Sample K3S Cluster KubeConfig File

    apiVersion: v1
    kind: Config
    clusters:
    - name: "test-cluster"
      cluster:
        server: "https://104.211.222.233/k8s/clusters/c-6n45m"
        certificate-authority-data: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpRENDQ\
          VM2Z0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQTdNUnd3R2dZRFZRUUtFeE5rZVc1aGJXbGoKY\
          kdsemRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR1Z1WlhJdFkyRXdIa\
          GNOTWpBeApNREV5TURjMU5UQXpXaGNOTXpBeE1ERXdNRGMxTlRBeldqQTdNUnd3R2dZRFZRUUtFe\
          E5rZVc1aGJXbGpiR2x6CmRHVnVaWEl0YjNKbk1Sc3dHUVlEVlFRREV4SmtlVzVoYldsamJHbHpkR\
          1Z1WlhJdFkyRXdXVEFUQmdjcWhrak8KUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVJza1A3cjNCU3VYd\
          1I2d3pIQ0N1NVovVzNaZGxlQlpZSDN5cW1vVHBrNgoxLzhGSkdiMVhNSE01d3JxSUU0WVJZYTJmd\
          3FPdkFjM2VKL2xJSGxCd0RZVm95TXdJVEFPQmdOVkhROEJBZjhFCkJBTUNBcVF3RHdZRFZSMFRBU\
          UgvQkFVd0F3RUIvekFLQmdncWhrak9QUVFEQWdOSUFEQkZBaUF5YnRRSEpINEsKZVJucW9MajduM\
          WdTSEZ0aFZDOURxSm1DeUtrUzduSE9RZ0loQU9uNCtpbElXd0hyVXBxMFp2bFhIc1BLaENRawpnM\
          GVYaGkwOS9zSlQ0V1E2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0="

    users:
    - name: "test-cluster"
      user:
        token: "kubeconfig-user-5vmrh:858wtpjhnkstqv5c9wgbjgjw5scxx4l5hqdfwprrkpcpsvbzws6qlz"

    contexts:
    - name: "test-cluster"
      context:
        user: "test-cluster"
        cluster: "test-cluster"

    current-context: "test-cluster"

  • Sample request body sent from application instance to EVE

    {
      "kind": "Config",
      "apiVersion": "v1",
      "preferences": {},
      "clusters": [
        {
          "name": "multi-device-cluster-seedserver",
          "cluster": {
            "server": "https://192.168.254.180:6443",
            "certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTWpZd09EUTNNREF3SGhjTk1qRXdOekV5TVRBeE1UUXdXaGNOTXpFd056RXdNVEF4TVRRdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTWpZd09EUTNNREF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTa2FadjI4ekcyK21HNkJBTnFFdEx6ZFR0c2F4a1RnMmpmTm82eStLaXMKakdqUWlUWDN3TjcyUjdFRWdYZ1AvK0JwTnZER09ZZmRzSm1nVnFHcUMvbytvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVR2V0JXTWtyUzlEeXdJMzBhU3Q4CktLU0xucVl3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnTGs2SmdEanNVRGtOeHlvMUMrWlpCek43aWtGUnJhT3IKbXozREs3R1NNeG9DSUZjSVk0TzlvVnN5UTBreDlpMU4wWnYzeThTSzI3cENxSmt4RW9IQXhNUTgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
          }
        }
      ],
      "users": [
        {
          "name": "multi-device-cluster-seedserver",
          "user": {
            "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6Imk1aWxkNXBMVktkYnVpZTFFS0xrX3JpRTZOcE05RFpmM2xjbnBzZTJXSlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtdWx0aS1kZXZpY2UtY2x1c3Rlci1zZWVkc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im11bHRpLWRldmljZS1jbHVzdGVyLXNlZWRzZXJ2ZXItdG9rZW4tZG1wbnoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibXVsdGktZGV2aWNlLWNsdXN0ZXItc2VlZHNlcnZlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImIxNDM3NzZhLTRmMzQtNGI5MS04NDAwLThjNzliYjc4MzEyNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptdWx0aS1kZXZpY2UtY2x1c3Rlci1zZWVkc2VydmVyOm11bHRpLWRldmljZS1jbHVzdGVyLXNlZWRzZXJ2ZXIifQ.rYN1Rmutaw1AAyx-3pIOv_hdaiXGnpbEQI2nCiS9sGEjK49ZqSmNasx9mHM62YBF1IBkmJOez95rkjdouiSOU_08DpBoxOEGzd2r_01kk-96LsEUXb7D0gkwWN5Fvr-CKIk2dRK3k0v40U9j4Vsx8Va3rGjjRm0C9n0LO7PPkbEk1Ox0S8sWe9qsOpCt3cVwFF57LwAb6hQbZR8LvJUNcw25_fgF1qBCitGB4WccB1z5_DKNJ2B1bPISJQyPWxMAnQvrJNVX--IVrXelb7DBl-J47S6rEXWq9GetAv2mmgUte6jvsB5j7yLWzcN1xDk_rBLZsOhOghf6ckZp8bLBbg"
          }
        }
      ],
      "contexts": [
        {
          "name": "multi-device-cluster-seedserver",
          "context": {
            "cluster": "multi-device-cluster-seedserver",
            "user": "multi-device-cluster-seedserver"
          }
        }
      ],
      "current-context": "multi-device-cluster-seedserver"
    }

7 Comments

  1. Does this assume that something in the guest VM formats the YAML as JSON?

    Why not just carry the bytes unmodified end to end so that the guest VM can just use e.g., curl to send the file content to 169.254.169.254?

  2. There is already a case in EVE where config is being pulled from an application to EVE: https://github.com/lf-edge/eve/pull/2105 (local profile)

    For the local profile, Petr Fedchenkov chose an approach in which EVE periodically pulls the profile from an app configured as the "profile server". In each iteration, EVE finds the application IP address and obtains the profile with an HTTP GET request.

    Here, the opposite approach is chosen: application pushes configuration into EVE through the HTTP server, which zedrouter deploys for each network instance.

    I wonder if we could unify these cases (and any future ones) and agree on the best approach for propagating configuration from an application into EVE (and potentially further to zedcloud).

  3. Milan Lenco: The local profile server is different because EVE is told through config which domain name/IP address to use to contact it, whereas for applications using services from EVE (such as this push, and also the pull of cloud-init/cloud-config) we have the fixed 169.254.169.254, which the applications can use to pull and push, respectively.

    But we might see more local services on 169.254.169.254 down the road.

  4. Rishabh Gupta, my biggest meta-comment is that I really think this needs to be framed as a proposal for a communication channel between the app instance and the controller. I understand that your particular use case is K3S, but I think we should approach it as a generic mechanism, with K3S being just one app that may benefit from it.

    If you agree with the above framing, then your proposal is incomplete, since its focus should be describing the data flow between EVE and the controller. Let's start with that, and then we can figure out the mechanism of data exchange within EVE itself.

    One last point when it comes to that mechanism: I feel it would be prudent for us to consider the pros and cons of a POSIX mechanism like 9P, which is already available by default in how we launch containers.

    Do you think you can modify your proposal accordingly?

  5. Roman Shaposhnik: I had made this comment in GH, but it seems to have disappeared.

    I agree the proposal should describe app to EVE and EVE to controller in one place.


    However, using a distributed filesystem like 9P as a communication channel doesn't seem like an improvement.

    While one can build a communication channel on top of a file system (using the atomic-rename property, file locking, fsnotify, etc.), that is unneeded complexity when we already have the ability to communicate over TCP/IP. So to me it makes no sense to consider a file system approach for this.

  6. Rishabh Gupta: A bit terse, but this has the key information for the end-to-end flow, so I think it is good enough.

  7. But maybe the title should be "Passing application meta-data from app via EVE to controller" with a subtitle of "Passing KubeConfig".