On June 28th, 2017, the Azure Container Service team announced that a new version of the service was deployed in the UK region. This version exposes some cool new features, one of them being the ability to deploy DockerCE (swarm mode) clusters. In this article, you will see how the Azure CLI can be used to deploy a DockerCE cluster in Azure Container Service. Once the cluster is deployed, you can manage it with the docker command-line tool and deploy your Linux containers.
This tutorial requires the Azure CLI version 2.0.4 or later. Run az --version
to find the version. If you need to upgrade, see Install Azure CLI 2.0. You can also use the embedded shell in Azure Portal, called Azure Cloud Shell.
If you don’t have an Azure subscription, create a free account before you begin.
Log in to Azure
Log in to your Azure subscription with the az login command and follow the on-screen directions.
az login
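If your account has access to more than one subscription, it is worth confirming which one is active before creating resources. The commands below are standard Azure CLI 2.0 account commands; the subscription name is a placeholder you would replace with your own.
az account show --output table
az account set --subscription "My Subscription Name"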
Create a resource group
Create a resource group with the az group create command. An Azure resource group is a logical group in which Azure resources are deployed and managed.
The following example creates a resource group named myDockerCEGroup in the ukwest location. The uksouth location can also be used.
az group create --name myDockerCEGroup --location ukwest
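As an optional sanity check, you can query the group back to confirm it was created in the expected location:
az group show --name myDockerCEGroup --output table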
Create a Swarm Mode (DockerCE) cluster
Next, you will create a DockerCE cluster in Azure Container Service with the az group deployment create command.
The following example creates a cluster with one Linux master node and two Linux agent nodes. First, you need to download two files that will help in the cluster deployment. Both exist in the Azure/ACS GitHub repo.
wget https://raw.githubusercontent.com/Azure/ACS/master/docs/Simple/azuredeploy.json
wget https://raw.githubusercontent.com/Azure/ACS/master/docs/Simple/azuredeploy.params.dockerce.json
You must set the proper values for these variables in the azuredeploy.params.dockerce.json file:
- masterDnsNamePrefix
- agentDnsNamePrefix
- sshRSAPublicKey
The masterDnsNamePrefix and agentDnsNamePrefix variables must be assigned unique strings, whereas sshRSAPublicKey needs to have the value of an SSH public key. Check here to see how to create one on Linux/Mac and here for Windows instructions.
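If you don't already have an SSH key pair, the usual ssh-keygen command on Linux/Mac will create one (the default file locations below can be changed if you prefer):
ssh-keygen -t rsa -b 2048
cat ~/.ssh/id_rsa.pub
As a rough sketch, an edited azuredeploy.params.dockerce.json could look like the following, assuming it uses the standard ARM parameters-file layout; the downloaded file may contain additional parameters, and the DNS prefixes and public key here are placeholders:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "masterDnsNamePrefix": { "value": "mydockercemaster" },
    "agentDnsNamePrefix": { "value": "mydockerceagents" },
    "sshRSAPublicKey": { "value": "ssh-rsa AAAA...yourkey... azureuser@yourmachine" }
  }
}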
After you finish modifying the variables, you are now ready to create the DockerCE cluster.
az group deployment create -g myDockerCEGroup --template-file azuredeploy.json --parameters azuredeploy.params.dockerce.json
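If you want to follow the deployment's progress from another terminal, you can query its provisioning state. The deployment name below assumes the CLI default, which normally matches the template file's base name (azuredeploy here):
az group deployment show --resource-group myDockerCEGroup --name azuredeploy --query properties.provisioningState --output tsv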
After several minutes, the command completes and shows information about your deployment. In the output you will see the ssh command that you can use to ssh into the master node. It will look similar to this:
ssh azureuser@masterDnsNamePrefix.ukwest.cloudapp.azure.com -A -p 2200
The output will also contain the URL that you can use to reach the agent nodes; make sure to keep it handy. It will be in the format agentDnsNamePrefix.ukwest.cloudapp.azure.com.
You'll also see that a new resource group has been created, with a name in the format {originalResourceGroupname}_containerservice-{originalResourceGroupname}_{location}. The resource group you created originally (myDockerCEGroup in this example) contains the Azure Container Service instance, whereas the new one contains the other necessary cluster resources (VMs, load balancers, storage disks, etc.). For this example, the name of the new resource group is myDockerCEGroup_containerservice-myDockerCEGroup_ukwest, and the name of the Azure Container Service instance is containerservice-{originalResourceGroupname}, i.e. containerservice-myDockerCEGroup.
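If you want to see these pieces for yourself, two optional queries do the job; the second one uses the auto-generated resource group name from this example:
az acs list --resource-group myDockerCEGroup --output table
az resource list --resource-group myDockerCEGroup_containerservice-myDockerCEGroup_ukwest --output table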
Manage the cluster
Once you ssh into the master node, you are ready to manage your newly created cluster. One of the benefits of the DockerCE cluster is that it uses the same command-line tool that is used to manage local Docker containers, i.e. the docker command-line client.
To verify that the cluster is up and running, run this command:
docker node ls
docker node ls lists the master and agent nodes as well as their statuses:
ID                          HOSTNAME                           STATUS   AVAILABILITY   MANAGER STATUS
9dzwar4eo3vcml8pq1f2ckbgb   swarmm-agentpools-15168259000001   Ready    Active
f59050uz7crli6rtc8byix8dp * swarmm-master-15168259-0           Ready    Pause          Leader
x7vy7geknu0mkge3tfmimhwmp   swarmm-agentpools-15168259000000   Ready    Active
The star (*) next to the master ID shows that you are connected to the master node.
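To drill into a single node, you can pass any ID or hostname from the list above to docker node inspect; the --pretty flag prints a human-readable summary instead of raw JSON:
docker node inspect --pretty swarmm-master-15168259-0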
Deploy an NGINX container
You can easily run a Docker container on the agent nodes.
The following command starts the NGINX Docker container on a DockerCE node. In this case, the container runs the NGINX web server, pulled from an image on Docker Hub.
docker service create --replicas 1 --name mynginx nginx
To see that the container is running, run:
docker service ls
You will see something like:
ID             NAME      MODE         REPLICAS   IMAGE
1lo5m92i7izf   mynginx   replicated   1/1        nginx:latest
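You can also check which node the task was scheduled on:
docker service ps mynginx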
View the NGINX welcome page
To make the NGINX server accessible to the world through the agent nodes' URL, type the following command:
docker service update --publish-add 80:80 mynginx
With this command, the cluster makes port 80 of the NGINX container accessible via port 80 on the agent nodes' URL. You can use a web browser of your choice to browse to that URL and see the default NGINX welcome page, or just run a curl command:
curl agentDnsNamePrefix.ukwest.cloudapp.azure.com
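If you want to confirm which ports the service publishes, docker service inspect can print just that part of the service definition:
docker service inspect --format '{{ json .Endpoint.Ports }}' mynginx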
Delete cluster
When the cluster is no longer needed, you can use the az group delete command to remove the resource group, container service, and all related resources.
First, you should delete the resource group that was created by the az group deployment create command. For this example, the command would be:
az group delete --name myDockerCEGroup_containerservice-myDockerCEGroup_ukwest
Alternatively, you can delete the Azure Container Service instance. This command will delete the resource group that was created by the deployment command and will leave the original resource group empty.
az acs delete --name containerservice-myDockerCEGroup --resource-group myDockerCEGroup
Finally, you can delete the resource group you initially created:
az group delete --name myDockerCEGroup
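If you prefer to skip the confirmation prompt and not wait for the deletion to finish, the --yes and --no-wait flags do that:
az group delete --name myDockerCEGroup --yes --no-wait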
Next steps
In this quick start article, you deployed a DockerCE cluster, connected to it using the docker command-line utility, and deployed an NGINX container. See more examples of using Azure CLI 2.0 commands with Azure Container Service.
Hi, so far so good. I have my cluster up, but I can't connect to it.
I grabbed the SSH command, which in my case is: ssh azureuser@ohl-sawrm-dnsmgmt.westus.cloudapp.azure.com -A -p 2200
and I get this response…
(By the way, I'm connecting from an Ubuntu VM created in the same Resource Group.)
The authenticity of host '[ohl-sawrm-dnsmgmt.westus.cloudapp.azure.com]:2200 ([13.93.154.12]:2200)' can't be established.
ECDSA key fingerprint is SHA256:HDH7X20ySL53hjnyPreuHwVKQ/NVeGvlWie7nqNA9sE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[ohl-sawrm-dnsmgmt.westus.cloudapp.azure.com]:2200,[13.93.154.12]:2200' (ECDSA) to the list of known hosts.
Permission denied (publickey).
Do you have the private key in the machine you’re connecting from (in this case, the Ubuntu VM)?
In the directory /home/adminorion/ I have two files, id_rsa and id_rsa.pub, and only there. Should I move them to another folder?
Are you referencing these files when you try to ssh? Try moving them to ~/.ssh folder
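Alternatively, you can point ssh at the private key explicitly with -i (assuming the id_rsa file you mentioned is the private key that matches the public key used for the cluster):
ssh -i /home/adminorion/id_rsa azureuser@ohl-sawrm-dnsmgmt.westus.cloudapp.azure.com -A -p 2200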