Possible Solution
May have a solution:
- Created a new VNET
- Address range 10.60.0.0/23
- Created subnet Containers 10.60.0.0/24
- Created an Azure Firewall resource in the VNET
- Created subnet AzureFirewallSubnet 10.60.1.0/24 (Azure Firewall requires this exact subnet name)
- Assigned a static public IP address to the firewall resource
Now the Firewall "Rules" allow for the following:
- NAT rules - inbound DNAT / port translation
- Network rules - allow traffic by source/destination IP address and port
- Application rules - allow traffic by FQDN
Working to deploy the container to this dev subnet; on the face of it all the options are there: redirect by port, IP or FQDN. The game changer is the ability to assign a static public IP address to a resource within the VNET and let NAT, network or application rules redirect traffic, as sketched below.
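A minimal sketch of the inbound DNAT redirect described above, assuming the azure-firewall CLI extension is installed and the firewall sits in the AzureFirewallSubnet created earlier. All resource names and the container's private address (10.60.0.4) are illustrative:

# Attach the static public IP to the firewall
az network firewall ip-config create \
  --firewall-name myFirewall \
  --resource-group myResourceGroup \
  --name fwConfig \
  --public-ip-address myFwPublicIP \
  --vnet-name myVNet

# DNAT rule: forward port 443 on the firewall's public IP
# to the container's private IP in the Containers subnet
az network firewall nat-rule create \
  --firewall-name myFirewall \
  --resource-group myResourceGroup \
  --collection-name containerDnat \
  --name https-to-container \
  --priority 100 \
  --action Dnat \
  --protocols TCP \
  --source-addresses '*' \
  --destination-addresses <firewall-public-ip> \
  --destination-ports 443 \
  --translated-address 10.60.0.4 \
  --translated-port 443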
Will update the thread on result tomorrow.
Update Feb 2019
OK, so don't use an Azure Firewall resource. It's very expensive and in my case not in any way cost effective at approx £500 per month. I did not get time to test the theory using the firewall, but given the cost there was no point in pursuing it further.
Azure Container Instances enables exposing containers directly to the internet with an IP address and a fully qualified domain name (FQDN). When you create a container instance, you can specify a custom DNS name label so your application is reachable at customlabel.azureregion.azurecontainer.io. Unfortunately, static public IP addresses are not supported on ACI at the moment.
Certain limitations apply when you deploy container groups to a virtual network.
To deploy container groups to a subnet, the subnet cannot contain any other resource types. Remove all existing resources from an existing subnet prior to deploying container groups to it, or create a new subnet.
Container groups deployed to a virtual network do not currently support public IP addresses or DNS name labels.
Due to the additional networking resources involved, deploying a container group to a virtual network is typically somewhat slower than deploying a standard container instance.
https://feedback.azure.com/forums/602224-azure-container-instances
Solution Deployed
- Ubuntu VM created using Azure Image
- Static public address assigned to the VM
- API and Service deployed in a Docker image to the VM
- ARM template used for deployment, integrated with the DevOps build and release pipelines
- Cost per month £23.52 (2 cores, 3 GB RAM, 16 GB disk)
This was the solution initially, but offloading and managing the SSL cert added complexity.
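For reference, a rough CLI equivalent of what the ARM template provisioned (names, size and image placeholder are illustrative, not the exact values from the template):

# Ubuntu VM with a static public IP
az vm create \
  --resource-group myResourceGroup \
  --name myDockerVM \
  --image UbuntuLTS \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys \
  --public-ip-address-allocation static

# Install Docker and start the API/Service container on the VM
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myDockerVM \
  --command-id RunShellScript \
  --scripts "curl -fsSL https://get.docker.com | sh && docker run -d -p 443:443 <your-image>"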
Update March 2019 - New Solution Deployed
If anyone is interested (not many, based on the number of times this thread has been viewed), the final solution deployed was this:
- Provision App Service Plan
- Deployed "API" App Service using a Container Instance to host the API on port 443.
- Dynamic address and standard SSL cert deployed to "API" App Service.
- Deployed "Service" App Service using a Container Instance to host the Service port 80.
- Static address and IP based SSL cert deployed to "Service" App Service. This has the effect of fixing the IP address on the service and meeting my "i need a static ip address" condition.
- Costs about £65 a month to host approx.
Worth noting the only reason the cert was deployed was to fix the IP address on the "Service" App Service. It's a workaround for Azure's current lack of support for applying a static IP address to a container instance.
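A minimal CLI sketch of the "Service" half of this setup (plan name, app name, image and cert details are all illustrative; the "API" app follows the same pattern without the IP-based binding):

# Linux App Service plan to host the containers
az appservice plan create \
  --name myPlan \
  --resource-group myResourceGroup \
  --is-linux \
  --sku S1

# "Service" web app running a container image
az webapp create \
  --name myServiceApp \
  --plan myPlan \
  --resource-group myResourceGroup \
  --deployment-container-image-name myregistry.azurecr.io/service:latest

# Upload the cert and bind it IP-based; the IP-based binding
# is what pins a static inbound IP on the app
az webapp config ssl upload \
  --name myServiceApp \
  --resource-group myResourceGroup \
  --certificate-file service.pfx \
  --certificate-password <pfx-password>
az webapp config ssl bind \
  --name myServiceApp \
  --resource-group myResourceGroup \
  --certificate-thumbprint <thumbprint> \
  --ssl-type IP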
Update March 2020
Post updated in March 2020 for reference, due to the introduction of the following feature on 20/03/20:
This article shows one way to expose a static, public IP address for a container group by using an Azure application gateway. Follow these steps when you need a static entry point for an external-facing containerized app that runs in Azure Container Instances.
In this article you use the Azure CLI to create the resources for this scenario:
- An Azure virtual network
- A container group deployed in the virtual network (preview) that hosts a small web app
- An application gateway with a public frontend IP address, a listener to host a website on the gateway, and a route to the backend container group
As long as the application gateway runs and the container group exposes a stable private IP address in the network's delegated subnet, the container group is accessible at this public IP address.
Create virtual network
First create a resource group with the az group create command:
az group create --name myResourceGroup --location eastus
Create a virtual network with the az network vnet create command. This command creates the myAGSubnet subnet in the network.
az network vnet create \
--name myVNet \
--resource-group myResourceGroup \
--location eastus \
--address-prefix 10.0.0.0/16 \
--subnet-name myAGSubnet \
--subnet-prefix 10.0.1.0/24
Use the az network vnet subnet create command to create a subnet for the backend container group. Here it's named myACISubnet.
az network vnet subnet create \
--name myACISubnet \
--resource-group myResourceGroup \
--vnet-name myVNet \
--address-prefix 10.0.2.0/24
Use the az network public-ip create command to create a static public IP resource. In a later step, this address is configured as the front end of the application gateway.
az network public-ip create \
--resource-group myResourceGroup \
--name myAGPublicIPAddress \
--allocation-method Static \
--sku Standard
Create container group
Run the following az container create command to create a container group in the virtual network you configured in the previous step.
The group is deployed in the myACISubnet subnet and contains a single instance named appcontainer that pulls the aci-helloworld image. As shown in other articles in the documentation, this image packages a small web app written in Node.js that serves a static HTML page.
az container create \
--name appcontainer \
--resource-group myResourceGroup \
--image mcr.microsoft.com/azuredocs/aci-helloworld \
--vnet myVNet \
--subnet myACISubnet
When successfully deployed, the container group is assigned a private IP address in the virtual network. For example, run the following az container show command to retrieve the group's IP address:
az container show \
--name appcontainer --resource-group myResourceGroup \
--query ipAddress.ip --output tsv
Output is similar to: 10.0.2.4.
For use in a later step, save the IP address in an environment variable:
ACI_IP=$(az container show \
--name appcontainer \
--resource-group myResourceGroup \
--query ipAddress.ip --output tsv)
Create application gateway
Create an application gateway in the virtual network, following the steps in the application gateway quickstart. The following az network application-gateway create command creates a gateway with a public frontend IP address and a route to the backend container group. See the Application Gateway documentation for details about the gateway settings.
az network application-gateway create \
--name myAppGateway \
--location eastus \
--resource-group myResourceGroup \
--capacity 2 \
--sku Standard_v2 \
--http-settings-protocol http \
--public-ip-address myAGPublicIPAddress \
--vnet-name myVNet \
--subnet myAGSubnet \
--servers "$ACI_IP"
It can take up to 15 minutes for Azure to create the application gateway.
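To check on progress, you can poll the gateway's operational state until it reports Running:

az network application-gateway show \
  --name myAppGateway \
  --resource-group myResourceGroup \
  --query operationalState --output tsv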
Test public IP address
Now you can test access to the web app running in the container group behind the application gateway.
Run the az network public-ip show command to retrieve the frontend public IP address of the gateway:
az network public-ip show \
--resource-group myResourceGroup \
--name myAGPublicIPAddress \
--query [ipAddress] \
--output tsv
Output is a public IP address, similar to: 52.142.18.133.
When successfully configured, navigate to the gateway's public IP address in your browser to view the running web app; you should see the aci-helloworld page.
[Browser screenshot in the source article: the application running in an Azure container instance]
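Or check from the command line, reusing the public-ip query above:

# Expect an HTTP 200 from the sample app
curl -I "http://$(az network public-ip show \
  --resource-group myResourceGroup \
  --name myAGPublicIPAddress \
  --query ipAddress --output tsv)"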
Ref
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-application-gateway
Jan 2021
Final edit, to document this article from July 2020, in which you set up container groups in a virtual network behind an Azure Firewall. You configure a user-defined route and NAT and application rules on the firewall. By using this configuration, you set up a single, static IP address for ingress and egress from Azure Container Instances.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-egress-ip-address
Documents the use of:
- Container group
- Azure Firewall
- Reserved public IP
- Routing traffic
The downside is the expense of the Azure Firewall resource. That aside, the solution works.
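The heart of that egress setup is a user-defined route forcing the container subnet's outbound traffic through the firewall. A minimal sketch, assuming $FW_PRIVATE_IP holds the firewall's private address and reusing the VNET/subnet names from the March 2020 steps:

# Route table sending all outbound traffic via the firewall
az network route-table create \
  --name myRouteTable \
  --resource-group myResourceGroup

az network route-table route create \
  --name to-firewall \
  --route-table-name myRouteTable \
  --resource-group myResourceGroup \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address "$FW_PRIVATE_IP"

# Associate the route table with the container subnet
az network vnet subnet update \
  --name myACISubnet \
  --vnet-name myVNet \
  --resource-group myResourceGroup \
  --route-table myRouteTable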
Aug 2022
Another project, and this issue has popped up again. It's possible that Azure Load Balancer, which currently only supports VM backend pools, may support Azure Container Instances (ACI) some time soon.
Project 1
In the meantime we needed a single-instance container to serve firmware (FW) images to devices over a TCP connection. Devices look for a server IP endpoint. So, hosting in an ACI resource, the service does a DNS lookup on its azurecontainer.io host name and serves the device with the most recent public IP address. The device then routes in to that IP. A handy workaround for us when the server side is initiating the command to a device polling into another service.
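The lookup itself is just a DNS query; something like this from inside the container (FQDN illustrative, assuming dig is available in the image) yields the instance's current public address:

# Resolve the ACI group's FQDN to its current public IP;
# tail keeps the final A record if a CNAME chain is returned
CURRENT_IP=$(dig +short mydevservice.uksouth.azurecontainer.io | tail -n 1)
echo "$CURRENT_IP"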
Project 2
As for the other device service, hosting a container on an Azure D-series VM behind an Azure Load Balancer, with an Azure reserved IP attached to the WAN (frontend interface) of the load balancer, is working OK. But to run Docker on the VM you need an expensive D-series resource. Still cheaper than Azure Firewall. An alternative is running as a Windows service on the VM, which allows the use of an $11 VM resource, but that small VM cannot support containers if using a Windows OS.
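Roughly what that front end looks like in CLI terms (names illustrative; backend pool membership, health probe and load-balancing rules omitted for brevity):

# Static (reserved) public IP for the load balancer frontend
az network public-ip create \
  --name myReservedIP \
  --resource-group myResourceGroup \
  --sku Standard \
  --allocation-method Static

# Standard load balancer using the reserved IP as its frontend
az network lb create \
  --name myLoadBalancer \
  --resource-group myResourceGroup \
  --sku Standard \
  --public-ip-address myReservedIP \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackendPool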
Long Term
I think the ideal situation would be an Azure Load Balancer with a fixed public IP pointing at a backend pool of multiple small, cheap ACI instances, so we don't have to worry about OS management. Azure VM Scale Sets for the backend pool also have some pretty nice autoscale tools and DevOps integration.
Thanks