Create LDAPS self-signed certificates for Pinniped

In order to simplify authentication to Kubernetes clusters operating on different clouds, VMware has developed the Pinniped project, available as open source. Pinniped has been integrated by default into the VMware Tanzu Kubernetes Grid (TKG) offering since version 1.3, replacing Gangway. Pinniped allows authentication from OIDC or LDAP sources. With an LDAP source, Pinniped does not connect directly to LDAP but currently relies on the Dex component, as Gangway did before it.

When a user runs a Kubernetes command for the first time or after a certain period of inactivity, they are prompted to authenticate only once with their corporate credentials and can then consume multiple Kubernetes clusters.

I wanted to test this functionality in my lab with an LDAPS / Active Directory server running Windows 2019, and I quickly ran into the eternal problem of certificates not signed by a known authority. So I had to create a certificate recognized by the Active Directory server. After hours of searching the internet, I ended up finding an article by Peter Mescalchin that worked on the first try: Enable LDAP over SSL (LDAPS) for Microsoft Active Directory servers.

However, when I wanted to use this procedure with Pinniped, it did not work because the SAN (Subject Alternative Name) information was not present in the certificate. By cross-referencing several articles on the subject, I was able to adapt Peter Mescalchin’s solution so that the certificates include the SAN information. Here is the result:

Creation of the Root certificate

With OpenSSL (I used an Ubuntu Linux machine), create a private key (ca.key in my example), then use it to create the root certificate (ca.crt in my example). The first command will ask you for a password and the second for your organization information.

$ openssl genrsa -aes256 -out ca.key 4096
$ openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
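If you prefer to script this step rather than answer the prompts, the same key/certificate pair can be generated non-interactively; the password and subject values below are placeholders, not values from the procedure:

```shell
# Scriptable variant of the two commands above (password and subject are example placeholders)
openssl genrsa -aes256 -passout pass:changeme -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -passin pass:changeme \
  -subj "/C=FR/O=Example Lab/CN=Example Lab Root CA" -out ca.crt
# Sanity check: print the subject and validity dates of the new root certificate
openssl x509 -in ca.crt -noout -subject -dates
```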


Import the Root certificate on the AD server

From the AD server, type the command certlm, or via the Control Panel, type computer certificates in the search bar:

Be careful to choose “Manage computer certificates” and not “Manage user certificates”

Import the previously generated ca.crt in the “Trusted Root Certification Authorities \ Certificates” section


Creation of the Client certificate

Still from the Active Directory server, create a file; in our example it is named request.inf. Compared with the original procedure, I added the SAN information (the _continue_ lines in the Extensions section). Be careful to put the FQDN of the AD server in the CN of the Subject. The _continue_ = “dns=…” and _continue_ = “ipaddress=…” values carry the SAN entries, i.e. the other possible names and addresses by which the AD server can be referenced.

[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "
KeySpec = 1
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]
OID = ; Server Authentication

[Extensions]
; SANs can be included in the Extensions section by using the following text format.
; Note is the OID for a SAN extension.
 = "{text}"
_continue_ = "dns=ad-server&"
_continue_ = ""
_continue_ = ""
_continue_ = "ipaddress="

Generate the client.csr file with the command below:

c:\> certreq -new request.inf client.csr

From the Linux machine:

Create an extension file; in our example it is named v3ext.txt. Compared with the initial procedure, I added the SAN information under the [ v3_ca ] section, which is referenced in the next command.


# These extensions are added when 'ca' signs a request.
[ v3_ca ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 =
DNS.2 = ad-server
IP.1 =

Still from the Linux machine, create the client.crt certificate from the files generated in the previous steps (ca.crt, ca.key, client.csr and v3ext.txt). Compared with the command of the initial procedure, the -extfile v3ext.txt and -extensions v3_ca options have been added:

$ openssl x509 -req -days 3650 -in client.csr -CA ca.crt -CAkey ca.key -extfile v3ext.txt -set_serial 01 -out client.crt -extensions v3_ca


To verify the presence of SAN information

$ openssl x509 -in client.crt -text
X509v3 extensions:
      X509v3 Subject Alternative Name:
, DNS:ad-server, IP Address:
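If you want to rehearse the whole signing flow locally before involving the AD server, the sequence can be simulated entirely with OpenSSL (throwaway names; the openssl-generated CSR stands in for the one certreq produces):

```shell
# Throwaway CA, for the rehearsal only (no passphrase)
openssl genrsa -out demo-ca.key 2048
openssl req -new -x509 -days 365 -key demo-ca.key -subj "/CN=Demo Root CA" -out demo-ca.crt
# Stand-in for the certreq-generated CSR
openssl req -new -newkey rsa:2048 -nodes -keyout demo-client.key \
  -subj "/CN=ad-server.example.local" -out demo-client.csr
# Same structure as the v3ext.txt extension file
printf '%s\n' '[ v3_ca ]' 'subjectAltName = @alt_names' '' \
  '[ alt_names ]' 'DNS.1 = ad-server.example.local' 'DNS.2 = ad-server' > demo-v3ext.txt
# Sign exactly as in the real procedure
openssl x509 -req -days 365 -in demo-client.csr -CA demo-ca.crt -CAkey demo-ca.key \
  -extfile demo-v3ext.txt -set_serial 01 -out demo-client.crt -extensions v3_ca
# The SAN entries must appear in the signed certificate
openssl x509 -in demo-client.crt -noout -text | grep -A1 'Subject Alternative Name'
```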


Import the Client certificate

From the AD server

C:\> certreq -accept client.crt

The certificate should appear in “Personal \ Certificates”

For the certificate to be taken into account, you must either restart the AD server or force LDAPS to load the certificate with the procedure below:

Still from the AD server, create a text file; in our example it is called ldap-renewservercert.txt, with the content below (note that the end of the file includes a line with a single - (a dash)):

dn:
changetype: modify
add: renewServerCertificate
renewServerCertificate: 1
-

Then type the command below:

c:\> ldifde -i -f ldap-renewservercert.txt

To test that the certificate has been taken into account, use the ldp.exe utility: select port 636 (or another if customized) and check the SSL box.

Once the whole procedure is done, retrieve the ca.crt generated in the first step and give it to Pinniped. This can be done either when the TKG management cluster is created or afterwards.

If the management cluster has not yet been created:

$ tanzu management-cluster create --ui

In my test I chose vSphere as the platform; at the identity management step, you will need to paste the certificate into the ROOT CA field.

If the TKG management cluster has already been created and you want to update it:

From the Kubernetes context of the management cluster, encode the root certificate of the AD server with the base64 command and copy the result:

$ base64 -w 0 ca.crt
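The -w 0 option matters: it disables line wrapping so the whole certificate is encoded as a single line, which is what you want to paste into the configmap. A quick round trip shows the behaviour (the demo file name is arbitrary):

```shell
# Create a small demo file and encode it without line wrapping
printf 'demo' > demo.txt
base64 -w 0 demo.txt              # prints ZGVtbw== on one single line
base64 -w 0 demo.txt | base64 -d  # decodes back to the original content: demo
```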

Replace the certificate in the dex configmap with the result of the previous command:

$ kubectl edit configmap -n tanzu-system-auth dex
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
apiVersion: v1
  config.yaml: |
        theme: tkg
        tlsCert: /etc/dex/tls/tls.crt
        tlsKey: /etc/dex/tls/tls.key
        signingKeys: 90m
        idTokens: 5m
        level: info
        format: json
        - id: pinniped-client-id
          name: pinniped-client-id
          secret: 089db7e23b19cb628ba841b17cc32ea4
        - type: ldap
          id: ldap
          name: LDAP
            insecureSkipVerify: false
            bindDN: cn=administrator,cn=Users,dc=velocity,dc=local
            bindPW: $BIND_PW_ENV_VAR
            usernamePrompt: LDAP Username
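The excerpt above is truncated by the editor view; for orientation, a Dex LDAP connector section typically looks roughly like this (a sketch based on Dex's documented LDAP connector fields; the host, DNs and the rootCAData value are placeholders, rootCAData being where the base64 result from the previous step goes):

```yaml
connectors:
  - type: ldap
    id: ldap
    name: LDAP
    config:
      host: ad-server.example.local:636       # placeholder LDAPS endpoint
      insecureSkipVerify: false
      rootCAData: <base64-of-ca.crt>          # paste the base64 result here
      bindDN: cn=administrator,cn=Users,dc=velocity,dc=local
      bindPW: $BIND_PW_ENV_VAR
      usernamePrompt: LDAP Username
      userSearch:
        baseDN: cn=Users,dc=velocity,dc=local # placeholder search base
        username: sAMAccountName
```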

Restart the dex pod in the tanzu-system-auth namespace for the change to take effect.


Once the management cluster has the right certificate

From there, create a workload cluster:

$ tanzu cluster create my-cluster -f <environment-file>

Import the administration Kubeconfig of the created workload cluster:

$ tanzu cluster kubeconfig get my-cluster --admin

Connect to the workload cluster with the admin context; as admin there is no need for a directory account:

$ kubectl config use-context my-cluster-admin@my-cluster

Create a cluster role binding with the role that interests you (here cluster-admin) for the desired users; this will allow each user to use the cluster once authenticated:

$ kubectl create clusterrolebinding admin-fbenrejdal --clusterrole cluster-admin --user fbe@velocity.local

Export the workload cluster kubeconfig. This is the kubeconfig to hand to users: it has no admin context and requires user authentication. The user will consume the cluster according to the rights defined in the clusterrolebinding from the previous step:

$ tanzu cluster kubeconfig get my-cluster --export-file my-cluster-kubeconfig

Issue a Kubernetes command with the generated kubeconfig file; this will open a browser for authentication:

$ kubectl get pods -A --kubeconfig my-cluster-kubeconfig

You should be redirected to a browser with a web page asking for your username and password:

Once they are entered, the result of the previous command is displayed:

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system antrea-agent-q9xpg 2/2 Running 0 7d15h
kube-system antrea-agent-qlmj8 2/2 Running 0 7d15h
kube-system antrea-controller-6bb57bd84-6cj58 1/1 Running 0 7d15h
kube-system coredns-68d49685bd-bjcps 1/1 Running 0 7d15h
kube-system coredns-68d49685bd-vttdw 1/1 Running 0 7d15h
kube-system etcd-my-cluster-control-plane-48n9f 1/1 Running 0 7d15h
kube-system kube-apiserver-my-cluster-control-plane-48n9f 1/1 Running 0 7d15h
kube-system kube-controller-manager-my-cluster-control-plane-48n9f 1/1 Running 0 7d15h
kube-system kube-proxy-dntrc 1/1 Running 0 7d15h
kube-system kube-proxy-k5m9g 1/1 Running 0 7d15h
kube-system kube-scheduler-my-cluster-control-plane-48n9f 1/1 Running 0 7d15h
kube-system kube-vip-my-cluster-control-plane-48n9f 1/1 Running 0 7d15h
kube-system metrics-server-66cb4fb659-xlprc 1/1 Running 0 7d15h
kube-system vsphere-cloud-controller-manager-vmfwl 1/1 Running 1 7d15h
kube-system vsphere-csi-controller-bd8b6cc8c-8ljl8 6/6 Running 0 7d15h
kube-system vsphere-csi-node-6xqf5 3/3 Running 0 7d15h
kube-system vsphere-csi-node-vmbmq 3/3 Running 0 7d15h
pinniped-concierge pinniped-concierge-dcd587f97-lk9n5 1/1 Running 0 7d15h
pinniped-concierge pinniped-concierge-dcd587f97-zrnb7 1/1 Running 0 7d15h
pinniped-concierge pinniped-concierge-kube-cert-agent-8a8e3e38 1/1 Running 0 7d15h
pinniped-supervisor pinniped-post-deploy-job-4ldt7 0/1 Completed 0 7d15h
pinniped-supervisor pinniped-post-deploy-job-m74gz 0/1 Error 0 7d15h
tkg-system kapp-controller-69c4d4bbb4-kwk5l 1/1 Running 0 7d15h

Deploy VM in and via Kubernetes

Applications are often made up of Kubernetes PODs and VMs. The most common example is a database in the form of a VM and the rest of the application in the form of PODs. By reflex, rightly or wrongly, whatever requires data persistence is put in the form of VMs.

The vSphere with Tanzu platform is also a platform that allows simultaneous and native hosting of Kubernetes PODs and VMs.

Until now, VMs and PODs were deployed using different methods and connected to different networks, which delayed development environment provisioning and introduced a risk of connection failures: developers had to submit a request to the team that manages the infrastructure to get a VM deployed.

To reduce the time impact and the risk of errors, infrastructure teams have implemented automation tools via a ticketing system or a self-service portal to give developers a certain autonomy. Deployment becomes much simpler, but it is still not sufficient, because it requires the developer to learn and use additional tools and to retrieve the connection details of the deployed VM. The self-service portal is not obsolete though: it has many other values, such as governance management; I hope I will have the opportunity to write an article detailing it.


Diagram showing a developer who clicks on his portal to deploy a VM that will be connected to a network.
This same developer uses the Kubernetes kubectl command to deploy their PODs. Kubernetes uses its own network.


Since vSphere 7U2a it is now possible to provision VMs the same way one deploys PODs, using the Kubernetes kubectl command. To be more precise, since the beginning of vSphere with Tanzu (originally called Project Pacific) it has been possible to deploy virtual machines from Kubernetes, but they were reserved for internal Kubernetes use, such as the creation of Tanzu Kubernetes Clusters.

Now developers can also deploy their own virtual machines, and they will be connected to the same network as the PODs. The waste of time and the risk of error are thus eliminated. I did the test on my demo environment, which is shared with my other colleagues: it takes less than 3 minutes to get a freshly installed MongoDB database from a completely fresh Ubuntu Linux image.


Diagram showing a developer who uses both the Kubernetes kubectl command to deploy their PODs and VMs.
Everything will be connected to the same Kubernetes network.
What is the scope of each persona?
There are two: the resource provider and the consumer. The resource provider is the infrastructure administrator who presents the resources to be consumed and, if necessary, caps them. The consumer is the developer who uses these resources through Kubernetes to develop their application.
The infrastructure persona, with their usual tool (the vSphere client), creates a resource namespace, grants access rights to the developer, and defines the service classes (number of CPUs, amount of RAM) and the VM image library the developer will be entitled to use.
The developer connects with their account to the provided namespace and creates YAML files defining the resource needs of their virtual machine(s), customizing them if they wish in order to install their tools and the services they need.
In summary, vSphere with Tanzu leaves the developer the choice of having their application components developed and hosted on PODs or on VMs, using the same tool, the same network and the same platform. This saves deployment and development time and offers more agility.
If you want to lift the hood, I invite you to read this article: Steps for creating VM through Kubectl

Steps for creating VM through Kubectl


To create a virtual machine with vSphere with Tanzu via the kubectl command, there are steps to follow for the administrator and for the developer. They are very simple, but that did not prevent me from wasting a little time on the OS customization side.

I recommend this article to you to understand the interest of deploying VMs through Kubernetes: Deploy VM in and via Kubernetes. My colleague’s blog: Introducing Virtual Machine Provisioning, via Kubernetes with VM service | VMware is also very well detailed.

In the last part of this article, I will provide some details on the Content Library part and on the YAML part. But first, let’s review the parts to be done on the administrator side and the developer side.



Regarding the administrator

The first step is to download the VM images, which are different from those used for TKC (Tanzu Kubernetes Cluster, aka Guest Cluster). The images are available in the VMware marketplace; at the time of writing this article there are two (Ubuntu and CentOS). The current Ubuntu image does not allow the use of persistent volumes (PVC) because it is based on virtual hardware version 10 while at least version 12 is required; this problem will soon be corrected.

You have to go to the marketplace and search for the keyword “vm service”; this filters (a little) the compatible images => VMware Marketplace.

Then click on the desired image and log in with your MyVMware account.

You have two options: download the image and then upload it to a local content library, or retrieve the subscription URL to create a content library that will synchronize with the one hosted by VMware.

Once the image is loaded or the link is filled in, you should have a content library like this:

Still from the vSphere interface, we must now create a namespace, grant users the rights to connect to it, and assign the VM class, the content library and the storage class, which should give this:

The example above shows, once the namespace has been created, how to assign a VM class and a content library, authorize the developers who can consume this namespace, choose which storage class to use and finally, if necessary, cap the CPU, memory and storage resources.

That’s all there is to it on the infrastructure administrator side.

Regarding the developer

You need a YAML description for:

  • The configmap which contains the customization of the VM
  • The creation of the VM
  • The Network service if you want to connect to it from an outside network (optional)
  • The PVC if you want to use persistent volumes (optional)
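Since the list mentions an optional PVC, here is a minimal sketch of such a claim (the name loeil-du-se-pvc and the storage class silver-storage-policy are taken from the examples later in this article; the requested size is an arbitrary assumption):

```yaml
# Hypothetical PVC matching the claimName used in the commented volumes section below
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loeil-du-se-pvc
  namespace: loeil-du-se
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # assumed size for illustration
  storageClassName: silver-storage-policy
```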

Via the Kubernetes command line, the developer connects with their account to the provided namespace; they can list the service classes they may use as well as the images they can deploy.

kubectl get virtualmachineclass
NAME                  CPU   MEMORY   AGE
best-effort-2xlarge   8     64Gi     22d
best-effort-4xlarge   16    128Gi    22d
best-effort-8xlarge   32    128Gi    22d
best-effort-large     4     16Gi     22d
best-effort-medium    2     8Gi      31d
best-effort-small     2     4Gi      31d
best-effort-xlarge    4     32Gi     22d
best-effort-xsmall    2     2Gi      22d
guaranteed-xsmall     2     2Gi      22d


kubectl get virtualmachineimage
NAME                                                         VERSION                           OSTYPE                FORMAT   AGE

centos-stream-8-vmservice-v1alpha1-1619529007339                                               centos8_64Guest       ovf      4h8m
ob-15957779-photon-3-k8s-v1.16.8—vmware.1-tkg.3.60d2ffd    v1.16.8+vmware.1-tkg.3.60d2ffd    vmwarePhoton64Guest   ovf      2d19h
ob-16466772-photon-3-k8s-v1.17.7—vmware.1-tkg.1.154236c    v1.17.7+vmware.1-tkg.1.154236c    vmwarePhoton64Guest   ovf      2d19h
ob-16545581-photon-3-k8s-v1.16.12—vmware.1-tkg.1.da7afe7   v1.16.12+vmware.1-tkg.1.da7afe7   vmwarePhoton64Guest   ovf      2d19h
ubuntu-20-1621373774638                                                                        ubuntu64Guest         ovf      4h8m

They can then create their YAML descriptive files to define the resource requirements of their virtual machine(s) and, if they wish, customize them in order to install their tools.

The configmap descriptive file includes the customization of the VM. The 3 important fields to fill in for personalization are:

  • The hostname, which contains the OS hostname
  • The public-keys, which contains the public key of the computer from which we will connect to the OS via ssh
  • The user-data part, which is, if you wish, the place to put the contents of the Cloud Init configuration file; it must be encoded with the base64 command

cat loeil-du-se-vm-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
    name: loeil-du-se-vm-configmap # The name of the ConfigMap, must be the same as in the VirtualMachine
    namespace: loeil-du-se
data:
  # OVF Keys values required by the VM at provision time
  hostname: loeil-du-se
  public-keys: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC4Cclh3rN/l70lBNlwQyK6ZtugxqT/7HeerHZKPSO0hcl5ZWLvz2+7QG5FqvYbkPP6EomvyDhE2MPnQ0kWaIrumVxYXAbVdpBBKKdTI3xJpewWB2syxgVOXP2ZOrw4cRLFv18rnESGHfsohedyaSB1qvubPWAqBFa+PSS4xh3zKalUknwc7Bs14fci8tEwEg8cpvNsqvrPZliJ6qTYFGqKuG6Ct+y449JNW6k6itTepgSYvUdJfjBTxk5tDzBdWz28km5N7lxgUB0rIWgSDl1XLCBrmm+H6bkHtD59MxAuxwLjih4tS4PzspcVjwWiJhd0HH7u2wbsPLCrrAX7am4EP40zphu9IR+fVxk+2jp7eD2uXPS6p9sDPEWHl6wGclI7pnfuoyvcn+CIwCtMweLuUw5MPj2eIIXcBhqUffeVAXVHrx8+e7+yHvqfyhqm2J9Ay3yt3zvAcXW0VqDxfvnfmv8sc9VNUW+8fUeyoo4b4uZRLLSf2DHM8= root@fbenrejdal-z01 # the public key to be able to do ssh without password from my laptop
  user-data: | # optional: paste the base64-encoded cloud init file here; the result can be a single line or span multiple lines. Watch out for the indentation: each line must start under the “r” of user-data

The base64 is obtained as follows:

base64  loeil-du-se-vm-cloud-init.yaml

Its content in clear text:

cat  loeil-du-se-vm-cloud-init.yaml

#cloud-config
# WATCHOUT the first line must start with #cloud-config
groups:
  - devops
users:
  - default # Create the default user for the OS
  - name: fbe
    ssh-authorized-keys: # the public key of my laptop, it could also be filled in the OVF property
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC4Cclh3rN/l70lBNlwQyK6ZtugxqTG7HeerHZKPSO0hcl5ZWLvz2+7QG5FqvYbkPP6EomvyDhE2MPnQ0kWaIrumVxYXAbVdpBBKKdTI3xJpewWB2syxgVOXP2ZOrw4cRLFv18rnESGHf+sohedyaSB1qvubPWAqBFa+PSS4xh6D3zKalUknwc7Bs14fci8tEwEg8cpvNsqvrPZliJ6qTYFGqKuG6Ct+y449JNW6k6itTepgSYvUdJfjBTxk5tDzBdWz28km5N7lxgUB0rIWgSDl1XLCBrmm+H6bkHtD59MxAuxwLjih4tS4PzspcVjwWiJhd0HH7u2wbsPLCrrAX7am4EP40zphu9IR+fVxk+2jp7eD2uXPS6p9sDPEWHl6wGclI7pnfuoyvcn+CIwCtMweLuUw5MPj2eIIXcBhqUffeVAXVHrx8+e7+yHvqfyhqm2J9Ay3yt3zvAcXW0VqDxfvnfmv8sc9VNUW+8fUeyoo4b4uZRLLSf2DHM8= root@fbenrejdal-z01
    groups: sudo, devops
    shell: /bin/bash
    passwd: VMware1!
    sudo: ['ALL=(ALL) NOPASSWD:ALL'] # the user fbe will not need to enter a password when using sudo
ssh_pwauth: true
chpasswd:
  list: |
    fbe:VMware1! # in case you want to change the password of users
  expire: false # if you don't want your password to expire
runcmd: # Example of runcmd to install MongoDB. Cloud Init also has an APT keyword to do installations
  - wget -qO - | apt-key add -
  - echo "deb [ arch=amd64,arm64 ] focal/mongodb-org/4.4 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-4.4.list
  - apt-get update
  - apt-get install -y mongodb-org
  - echo "mongodb-org hold" | dpkg --set-selections
  - echo "mongodb-org-server hold" | dpkg --set-selections
  - echo "mongodb-org-shell hold" | dpkg --set-selections
  - echo "mongodb-org-mongos hold" | dpkg --set-selections
  - echo "mongodb-org-tools hold" | dpkg --set-selections
  - sed -i 's/' /etc/mongod.conf
  - ufw allow from any to any port 27017 proto tcp
  - sleep 2
  - systemctl start mongod

Very very important, the file must absolutely start with # cloud-config and nothing else. It’s still a classic Cloud Init file. If you’re not too familiar with Cloud Init, I’ve put some comments there to make it a bit more readable.
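A quick check you can run before encoding the file (a sketch; the filename matches the example above):

```shell
# Verify the mandatory first line before base64-encoding the file
head -n 1 loeil-du-se-vm-cloud-init.yaml | grep -qx '#cloud-config' \
  && echo "header OK" || echo "ERROR: first line must be exactly #cloud-config"
```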

The VM description file

cat loeil-du-se-vm-deployment.yaml

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: loeil-du-se-vm
  namespace: loeil-du-se
  labels:
    vm: loeil-du-se-vm
spec:
  imageName: ubuntu-20-1621373774638 # the image must exist in the content library and be listed by the command kubectl get virtualmachineimage
  className: best-effort-xsmall
  powerState: poweredOn
  storageClass: silver-storage-policy
  networkInterfaces:
    - networkType: nsx-t # must be nsx-t or vsphere-distributed depending on your install
      # networkName: if vsphere-distributed you must specify the name of the network portgroup
  vmMetadata:
    configMapName: loeil-du-se-vm-configmap # The K8s configmap where personalization is stored
    transport: OvfEnv
#  when writing this article, the available Ubuntu image (ubuntu-20-1621373774638) is not able to use volumes because it uses virtual hardware version 10
#  Instead you can use the centos image (centos-stream-8-vmservice-v1alpha1-1619529007339)
#  volumes: # when writing this article, the volume mount parameter is not used; the volume is seen in the guest but has to be formatted and mounted manually
#    - name: loeil-du-se-volume
#      persistentVolumeClaim:
#        claimName: loeil-du-se-pvc
#        readOnly: false

Optional: the description file of the network service. In my example, I created a service of type LoadBalancer to connect via ssh from an external network to the POD network.

Please note, the kind is not Service as usual but VirtualMachineService

cat loeil-du-se-vm-service.yaml

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: loeil-du-se-vm
spec:
  selector:
    vm: loeil-du-se-vm
  type: LoadBalancer
  ports:
    - name: ssh
      port: 22
      protocol: TCP
      targetPort: 22

Once the YAML files are created, all that remains is to have them taken into account by Kubernetes.

kubectl create -f loeil-du-se-vm-configmap.yaml

kubectl create -f loeil-du-se-vm-deployment.yaml

kubectl create -f loeil-du-se-vm-service.yaml

To verify the creation of the VM:

kubectl get vm

To know more about it:

kubectl describe vm loeil-du-se-vm

It remains a classic VM, so it benefits from HA and vMotion (via DRS or host maintenance mode). On the other hand, it is “Developer Managed”, that is to say it cannot be managed via vCenter; you will not see the contents of the console, for example.

One tip though, check which ESXi the VM is running on, then connect directly to the ESXi through a browser and there you will have access to the console.

To connect via ssh, if you have access through a load balancer like me, you can connect to it directly; otherwise you will have to go through a jump POD (busybox, alpine or other) and ssh to the IP address on the POD network. You can find it as follows:

kubectl get vm loeil-du-se-vm -o jsonpath='{.status.vmIp}'; echo

The ssh must be done with the user defined in the Cloud Init file; I had set fbe. It looks like this:

kubectl get svc loeil-du-se-vm
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
loeil-du-se-vm   LoadBalancer   22:32148/TCP   2d22h

ssh fbe@

To run a command as administrator (user “root”), use “sudo <command>”.
See “man sudo_root” for details.

If the ssh does not work, the user was probably not taken into account by Cloud Init; try with root to learn the default user, generally ubuntu for Ubuntu and cloud-user for CentOS:

ssh root@
Please login as the user “ubuntu” rather than the user “root”.

If you have the error below, it is because the laptop from which you are connecting does not have its public ssh key entered, or there is an error in it; check the key appearing in the configmap file:

fbe@ Permission denied (publickey,password).

To debug Cloud Init, connect to the VM OS via ssh or via the console and look at the log /var/log/cloud-init-output.log

There you go, feel free to ping me if you need more information.