SSW Foursquare

Rules to Better Azure - 38 Rules

The Azure cloud platform is more than 200 products and cloud services designed to help you bring new solutions to life—to solve today's challenges and create the future. Build, run, and manage applications across multiple clouds, on-premises, and at the edge, with the tools and frameworks of your choice.

Need help with Microsoft Azure? Check SSW's Azure consulting page.

  1. Do you know how to choose Azure services?

    Getting application architecture right is hard, and choosing the wrong architecture at the start of a project often causes immense pain further down the line, once the limitations start to become apparent.

    Azure has hundreds of offerings, and it can be hard to know which services to choose for any given application.

    However, there are a few questions that Azure MVP Barry Luijbregts has come up with to help narrow down the right services for each business case.

    Video: How to choose which services to use in Azure | Azure Friday

    There are 2 overarching questions to ask when building out Azure architecture:

    1. How do you run the app?

    Azure offers heaps of models for running your app. So, to choose the right one you need to break this question down into 3 further parts:

    1.1 Control - How much is needed?

    There are many different levels of control that can be provided. From a VM which provides complete control over every aspect, to an out-of-the-box SaaS solution which provides very little control.

    Keep in mind, that the more control you have, the more maintenance will be required meaning more costs. It is crucial to find the sweet spot for control vs maintenance costs - is the extra control gained actually necessary?

    • Infrastructure as a Service (IaaS)

      • Consumer responsible for everything beyond the hardware

      e.g. Azure VM, AKS

    • Platform as a Service (PaaS)

      • Consumer responsible for App configuration, building the app and server configuration

      e.g. Azure App Service

    • Functions as a Service (FaaS)

      • Consumer responsible for App configuration and building the app

      e.g. Azure Functions, Azure Logic Apps

    • Software as a Service (SaaS)

      • Consumer responsible for only App configuration

    Figure: The different levels of control

    1.2 Location - Where do I need the app to run?

    Choosing where to run your app

    • Azure
    • On-Premises
    • Other Platforms e.g. AWS, Netlify, GitHub Pages
    • Hybrid

    1.3 Frequency - How often does the app need to run?

    Evaluating how often an app needs to run is crucial for determining the right costing model. A website or app that needs to be available 24/7 is suited to a different model than something which is called infrequently such as a scheduled job that runs once a day.

    There are 2 models:

    • Runs all the time

      • Classic (Pay per month) e.g. Azure App Service, Azure VM, AKS
    • Runs Occasionally

      • Serverless (Pay per execution) e.g. Azure Functions, Azure Logic Apps

    2. How do you store your data?

    Azure has tonnes of ways to store data that have vastly different capabilities and costing models. So to get it right, ask 2 questions.

    2.1 Purpose - What will the data be used for?

    The first question is what is the purpose of the data. Data that is used for everyday apps has very different storage requirements to data that is used for complex reporting.

    So data can be put into 2 categories:

    • Online Transaction Processing (OLTP)

      • For general application usage e.g. storing customer data, invoice data, user data etc
    • Online Analytical Processing (OLAP)

      • For data analytics e.g. reporting

    2.2 Structure - What type of data is going to be stored?

    Data comes in many shapes and forms. For example, it might have been normalized into a fixed structure or it might come with variable structure.

    Classify it into 2 categories:

    • Relational data e.g. a fully normalized database
    • Unstructured data e.g. document data, graph data, key/value data

    Example Scenario

    These questions can be applied to any scenario, but here is one example:

    Let's say you have a learning management system running as a React SPA that stores information about companies, users, learning modules and learning items. Additionally, the application administrators can build up learning items with a variable number of custom fields, images, videos, documents and other content.

    It also has a scheduled job that runs daily, picks up all the user data and puts it into a database for reporting. This database for reporting needs to be able to store data from many different sources and process billions of records.

    Q1: The App - Where to run the app?

    Control - The customer doesn't need fine-tuned control but does need to configure some server settings for the website.

    Location - The app needs to run in Azure.

    Frequency - The scheduled job runs occasionally (once a day...) while the website needs to be up all the time.

    A1: The App - The best Azure services are

    • An Azure App Service for the website, since it is a PaaS offering that provides server configuration and constant availability
    • An Azure function for the scheduled Job, since it only runs occasionally and no server configuration is necessary

    Q2: Data - How to store it?

    Purpose - The data coming in for everyday usage is largely transactional while the reporting data is more for data analytics.

    Structure - The data is mostly structured except for the variable learning items.

    A2: Data - The best Azure Services are

    • Azure SQL for the main everyday usage
    • CosmosDB for the variable learning items
    • Azure Synapse for the data analytics
  2. Do you know the best tools for learning Azure?

    Azure is a beast of a product with hundreds of services. When you start learning Azure, it can be overwhelming to think about all the different parts and how they fit together. So, it is crucial to know the right tools to make the process as pain free as possible.

    There are heaps of great tools out there with differing pricing models and learning styles.

    YouTube - $0

    YouTube is a great resource for those who love audio-visual learning. It is completely free and there are heaps of industry experts providing content.

    Microsoft Learn - $0

    Microsoft Learn is the best free tool out there. It provides hundreds of practical tutorials, heaps of video content and even lets you spin up little Azure sandboxes to try out Azure functionality. It is officially supported by Microsoft and so is one of the best ways to get ready for certifications.

    Online Learning Platforms - $100 - 500 AUD

    Online learning platforms provide high quality technical training from your browser. These courses include lectures, tutorials, exams and more so you can learn at your own pace.

    Some of the options:

    • Udemy - $~100 AUD - Budget Option, no instructor vetting
    • LinkedIn Learning $~300 AUD
    • Pluralsight - $~500 AUD - Gold Standard ⭐
  3. Do you know the relevant Azure certifications and associated exams?

    Whether you're an expert or just getting started, working towards gaining a new certification is a worthwhile investment.

    Microsoft provides numerous certifications and training options to help you:

    • Learn new skills
    • Fill technical knowledge gaps
    • Boost your productivity
    • Prove your competence

    azure certification branch
    Figure: Microsoft Certification RoadMap


    If you're just getting started, take a look at:

    Microsoft Certified: Azure Fundamentals

    Earn this certification to prove you have a foundational knowledge of cloud services and how those services are provided with Microsoft Azure.

    You will need to pass Exam AZ-900: Microsoft Azure Fundamentals.

    Microsoft Certified: Azure Data Fundamentals

    Earn this certification to prove you have foundational knowledge of core data concepts and how they are implemented using Microsoft Azure data services.

    You will need to pass: Exam DP-900: Microsoft Azure Data Fundamentals.


    Once you've mastered the fundamentals, developers should move on to:

    Microsoft Certified: Azure Developer Associate

    Earn this certification to prove your subject matter expertise in designing, building, testing, and maintaining cloud applications and services on Microsoft Azure.

    You will need to pass: Exam AZ-204: Developing Solutions for Microsoft Azure.

    Microsoft Certified: Azure Data Engineer Associate

    Earn this certification to prove you have subject matter expertise integrating, transforming, and consolidating data from various structured and unstructured data systems into structures that are suitable for building analytics solutions.

    You will need to pass: Exam DP-203: Data Engineering on Microsoft Azure.

    Microsoft Certified: Azure Security Engineer Associate

    Earn this certification to prove your subject matter expertise implementing security controls and threat protection, managing identity and access, and protecting data, applications, and networks in cloud and hybrid environments as part of an end-to-end infrastructure.

    You will need to pass: Exam AZ-500: Microsoft Azure Security Technologies.

    Microsoft Certified: Azure Data Scientist Associate

    Earn this certification to prove you have subject matter expertise applying data science and machine learning to implement and run machine learning workloads on Azure.

    You will need to pass: Exam DP-100: Designing and Implementing a Data Science Solution on Azure.

    Microsoft Certified: Azure Administrator Associate

    Earn this certification to prove you understand how to implement, manage and monitor an organization's Azure environment.

    You will need to pass: Exam AZ-104: Microsoft Azure Administrator.


    Cosmos is becoming a very popular database solution. Learn more by completing:

    Microsoft Certified: Azure Cosmos DB Developer Specialty

    Earn this certification to prove that you have strong knowledge of the intricacies of Azure Cosmos DB.

    You will need to pass: Exam DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB


    Eventually, all rock star developers and solution architects should set their sights on:

    Microsoft Certified: Azure Solutions Architect Expert

    Earn this certification to prove your subject matter expertise in designing and implementing solutions that run on Microsoft Azure, including aspects like compute, network, storage, and security. Candidates should have intermediate-level skills for administering Azure. Candidates should understand Azure development and DevOps processes.

    You will need to pass: Exam AZ-305: Designing Microsoft Azure Infrastructure Solutions.

    Now that you can build awesome cloud applications, you might want to Deploy your applications to Microsoft Azure:

    Microsoft Certified: DevOps Engineer Expert

    Earn this certification to prove your subject matter expertise working with people, processes, and technologies to continuously deliver business value.

    You will need to pass: Exam AZ-400: Designing and Implementing Microsoft DevOps Solutions.

    screen shot 2022 01 06 at 10 17 14 pm
    Figure: Get the poster to see Microsoft's certifications

    Check the Become Microsoft Certified poster for details of exams required for each of the certifications.

    Preparing for exams can involve a lot of work, and in some cases stress and anxiety. But remember, you're not in school anymore! You've chosen to take this exam, and no one is forcing you. So just sit back and enjoy the journey - you should feel excited by the new skills you will soon learn. If you want some great advice and tips, be sure to check out Successfully Passing Microsoft Exams by @JasonTaylorDev.

    Good luck!

  4. Do you know the 9 important parts of Azure?

    To help you out, here is a list of the top 9 Azure services you should be using:

    1. Computing: App Services
    2. Best practices: DevOps Project
    3. Data management: Azure Cosmos DB (formerly known as Document DB)
    4. Security: Azure AD (Active Directory)
    5. Web: API Management
    6. Automation: Logic Apps
    7. Automation: Cognitive Services
    8. Automation: Bots
    9. Storage: Containers

    Watch the video

    More details on Adam's Blog - The 9 knights of Azure: services to get you started

  5. Do you have a Cloud Architect in your projects?

    The goal of a modern complex software project is to build software with the best software architecture and great cloud architecture. Software developers should be focusing on good code and good software architecture. Azure and AWS are big beasts, and choosing between their services should be a specialist's responsibility.

    Many projects, for budget reasons, have the lead developer making the cloud choices. This runs the risk of choosing the wrong services and baking in bad architecture. The associated code is hard and expensive to change, and the monthly bill can also be higher than needed.

    The focus must be to build solid foundations and a rock-solid API. The reality is even 1 day of a Cloud Architect at the beginning of a project, can save $100K later on.

    2 strong developers (say Solution Architect and Software Developer)
    No Cloud Architect
    No SpendOps

    Figure: Bad example of a team for a new project

    2 strong developers (say Solution Architect and Software Developer)
    + 1 Cloud Architect (say 1 day per week, or 1 day per fortnight, or even 1 day per month) who, after choosing the correct services, looks after the 3 horsemen:

    • Load/Performance Testing
    • Security choices
    • SpendOps

    Figure: Good example of a team for a new project

    Problems that can happen without a Cloud Architect:

    • Wrong tech chosen e.g. nobody wants to accidentally build something that later needs to be thrown away
    • Wrong DevOps e.g. using plain old ARM templates that are not easy to maintain
    • Wrong Data story e.g. defaulting to SQL Server, rather than investigating other data options
    • Wrong Compute model e.g. Choosing a fixed price, always-on, slow scaling WebAPI for sites that have unpredictable and large bursts of traffic
    • Security e.g. this word should be enough
    • Load/Performance e.g. not getting the performance to $ spend ratio right

    Finally, at the end of a project, you should go through a "Go-Live Audit". The Cloud Architect should review and sign off that the project is good to go. They mostly check the 3 horsemen (load, security, and cost).

    MS Cloud Design Patterns Infographic SSW Edited

  6. Do you use Azure Architecture Center?

    In a Specification Review you should include an architecture diagram so the client has a visual idea of the plan. There are lots of tools to help build out an architecture diagram, but the best one is Azure Architecture Center.

    It is a one-stop shop for all things Azure architecture. It has a library of reference implementations to get you started, and lots of information on best practices - from the big decisions you need to make down to the little details that can make a huge difference to how your application behaves.

    Video: Discovering the Azure Architecture Center | Azure Tips and Tricks (2 mins)

    Reference Architectures

    Figure: Use Browse Architectures to find a reference architecture that matches your application

    The architectures presented fit into 2 broad categories:

    • Complete end-to-end architectures. These architectures cover the full deployment of an application.
    • Architectures of a particular feature. These architectures explain how to incorporate a particular element into your architecture. The Caching example above explains how you might add caching into your application to improve performance.

    Each architecture comes with comprehensive documentation providing all the information you need to build and deploy the solution.

    Best Practices

    Figure: Use Explore Best Practices to find information on particular best practice

    The Best Practices section is a very broad set of documentation, covering everything from performance tuning through to designing for resiliency, plus some of the more common types of applications and their requirements. Because of this, there is almost always something useful, no matter what stage your application is at. Many teams will add a Sprint goal of looking at one best practice per Sprint or at regular intervals. The Product Owner would then help prioritise which areas should be focused on first.

  7. Do you use the Well-Architected Framework?

    The Well-Architected Framework is a set of best practices which form a repeatable process for designing solution architecture, to help identify potential issues and optimize workloads.

    waf diagram revised
    Figure: The Well-Architected Framework includes the five pillars of architectural excellence. Surrounding the Well-Architected Framework are six supporting elements

    The 5 Pillars

    • Cost Optimization
    • Operational Excellence
    • Performance Efficiency
    • Reliability
    • Security

    There are trade-offs to be made between these pillars. E.g. improving reliability by adding Azure regions and backup points will increase the cost.

    Why use it?

    Thinking about architecting workloads can be hard – you need to think about many different issues and trade-offs, with varying contexts between them. WAF gives you a consistent process for approaching this to make sure nothing gets missed and all the variables are considered.

    Just like Agile, this is intended to be applied for continuous improvement throughout development and not just an initial step when starting a new project. It is less about architecting the perfect workload and more about maintaining a well-architected state and an understanding of optimizations that could be implemented.

    What to do next?

    Assess your workload against the 5 Pillars of WAF with the Microsoft Azure Well-Architected Review and add any recommendations from the assessment results to your backlog.

    waf assessment
    Figure: Some recommendations will be checked, others go to the backlog so the Product Owner can prioritize

    waf reliability results 2
    Figure: Recommended actions results show things to be improved

    waf tech debt backlog northwind
    Figure: Good example - WAF is very visible to the Product Owner on the backlog

  8. Visualizing - Do you have an Azure resources diagram?

    Looking at a long list of Azure resources is not the best way to be introduced to a new project. It is much better to visualize your resources.

    You need an architecture diagram, but this is often high level, just outlining the most critical components from the 50,000ft view, often abstracted into logical functions or groups. So, once you have your architecture diagram, the next step is to create your Azure resources diagram.

    Video: Azure resource diagrams in Visual Studio Code - Check out this awesome extension! (6 min)

    Option A: Just viewing a list of resources in the Azure portal

    Note: When there are a lot of resources this doesn't work.

    azure resources
    Figure: Bad Example – Using the Azure Portal to view your resources

    Option B: Visually viewing the resources

    Figure: Good Example – Viewing the resources in VS Code using the ARM Template Viewer extension

    ssw rewards resource github
    Figure: Good Example - ARM template and automatically generated Azure resources diagram in the SSW Rewards repository on GitHub

    sswrewards azure resources new
    Figure: Good Example - The Azure resources diagram generated by the ARM Template Viewer extension for SSW Rewards

    Install ARM Template Viewer from the Visual Studio Marketplace.

    Suggestion to Microsoft: Add an auto-generated diagram in the Azure portal. Have an option in the combo box (in addition to List View) for Diagram View.

    Update: This is now happening.

    Scrum Warning: Like the architecture diagram, this is technical debt as it needs to be kept up to date each Sprint. However, unlike the architecture diagram, this one is much easier to maintain as it can be refreshed with a click. You could reduce this technical debt by automatically saving the .png to the same folder as your architecture diagram. It is easy to do this by using Azure Event Grid and Azure Functions to generate these for you when you make changes to your resources.

  9. UX - Do you rename Azure’s default URL?

    If you use the default Azure staging website URL, it can be difficult to remember and a waste of time trying to lookup the name every time you access it. Follow this rule to increase your productivity and make it easier for everyone to access your staging site.

    Default Azure URL:

    Figure: Bad example - Site using the default URL (hard to remember!!)

    Customized URL:

    Figure: Good example - Staging URL with "staging." prefix

    How to setup a custom URL

    1. Add a CName for the default URL to your DNS server

    2015 03 10 17 13 55
    Figure: CName being added to DNS for the default URL

    2. Instruct Azure to accept the custom URL

    custom domains
    Figure: Azure being configured to accept the CName
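    These two steps can also be captured as code instead of portal clicks. Below is a minimal Bicep sketch, assuming a hypothetical App Service named northwind-staging and the custom domain staging.northwind.com (the CName from step 1 must already resolve before deployment):

    ```bicep
    // Sketch only - the App Service name and domain below are hypothetical
    // Reference the existing App Service that currently uses the default URL
    resource site 'Microsoft.Web/sites@2022-09-01' existing = {
      name: 'northwind-staging'
    }

    // Bind the custom domain to the site
    resource binding 'Microsoft.Web/sites/hostNameBindings@2022-09-01' = {
      parent: site
      name: 'staging.northwind.com'
    }
    ```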

  10. Search - Do you consider Azure Search for your website?

    AzureSearch is designed to work with Azure-based data and runs on ElasticSearch, but it doesn't expose all of ElasticSearch's advanced search features. You may be reluctant to choose it as your search engine because of those missing features and what seems to be an expensive monthly fee ($250 as of today). If this is the case, follow this rule:

    Consider AzureSearch if your website:

    • Uses SQL Azure (or other Azure based data such as DocumentDB), and
    • Does not require advanced search features

    Consider ElasticSearch if your website:

    • Requires advanced search features that aren't supported by AzureSearch

    Keep in mind that:

    1. Hosting a full-text search service yourself costs you labour to set up and maintain the infrastructure, and
    2. A single Azure VM can cost you up to $450. So do not drop the AzureSearch option unless the missing features are absolutely necessary for your site

    9c0754 Untitled2
    Figure: Good example - Azure website using AzureSearch for what it can deliver today

    Figure: Bad example - Azure website using ElasticSearch for a simple search that AzureSearch can do

  11. Do you know how to create Azure resources?

    We've been down this road before where developers had to be taught not to manually create databases and tables. Now, in the cloud world, we're saying the same thing again: Don't manually create Azure resources.

    Manually Creating Resources

    This is the most common and the worst. This is bad because it requires manual effort to reproduce and leaves margin for human error. Manually provisioning resources can also lead to configuration drift, which is to say that over time it can be difficult to keep track of which deployment configurations were made and why.

    • Creating resources in Azure without saving a script

    Figure (animated gif): Bad example - Creating resources manually

    Manually creating and saving the script

    Some people half solve the problem by manually creating and saving the script. This is also bad because it’s like eating ice cream and brushing your teeth – it doesn’t solve the health problem.

    azure bad 1
    Figure: Bad example – Exporting your Resource Group as an ARM template defined in JSON

    azure bad 2
    Figure: Warning - The templates are crazy verbose. They often don't work and need to be manually tweaked

    Tip: Save infrastructure scripts/templates in a folder called 'infra'.

    So if you aren't manually creating your Azure resources, what options do you have?

    Option A: Farmer

    Farmer - Making repeatable Azure deployments easy!

    • IaC using F# as a strongly typed DSL
    • Generates ARM templates from F#
    • Add a very short and readable F# project in your solution
    • Tip: The F# solution of scripts should be in a folder called Azure

    Figure: Farmer was our favourite until Bicep was supported by Microsoft


    Option B: Bicep

    Bicep - a declarative language for describing and deploying Azure resources

    • Is free and fully supported by Microsoft
    • Has 'az' command line integration
    • Awesome extension for VS Code to author ARM Bicep files ⭐️
    • Under the covers - Compiles into an ARM JSON template for deployment
    • Improves the repeatability of your deployment process, which can come in handy when you want to stage your deployment configuration
    • Much simpler syntax than ARM JSON
    • Handles resource dependencies automatically
    • Private Module Registries for publishing versioned and reusable architectures

    Tip: If you are creating a role assignment using Bicep, make sure the same assignment doesn't already exist (you can check in the Azure Portal), otherwise the deployment will fail.

    Announcement info: Project Bicep – Next Generation ARM Templates

    Example Bicep files: Fullstack Webapp made with Bicep

    Figure: Good example - Author your own Bicep templates in Visual Studio Code using the Bicep Extension
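    To give a feel for the syntax, here is a minimal, hypothetical Bicep template (the names and SKU are illustrative assumptions, not part of this rule) showing how Bicep handles the dependency between an App Service and its plan automatically:

    ```bicep
    // main.bicep - minimal sketch with hypothetical names
    // Deploy with: az deployment group create -g <resource-group> -f main.bicep
    param location string = resourceGroup().location

    resource plan 'Microsoft.Web/serverfarms@2022-09-01' = {
      name: 'northwind-dev-plan'
      location: location
      sku: { name: 'B1' }
    }

    resource app 'Microsoft.Web/sites@2022-09-01' = {
      name: 'northwind-dev-web'
      location: location
      properties: {
        serverFarmId: plan.id // referencing plan.id creates the dependency - no dependsOn needed
      }
    }
    ```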

    Option C: Enterprise configuration management $$$

    The other option when moving to an automated Infrastructure as Code (IaC) solution is to move to a paid provider like Pulumi or Terraform. These solutions are ideal if you are using multiple cloud providers or if you want to control the software installation as well as the infrastructure.

    • Both tools are great and have free tiers available
    • Paid tiers provide more benefits for larger teams and help manage larger infrastructure solutions
    • Terraform uses HashiCorp Configuration Language HCL

    • Pulumi uses real code (C#, TypeScript, Go, and Python) as infrastructure rather than JSON/YAML

    Figure: Good example - Code from the Pulumi Azure NextGen provider demo with Azure resources defined in C#

    Figure: Good example - From the console simply run 'pulumi up' to deploy your resources to Azure

    Tip: After you’ve made your changes, don’t forget to visualize your new resources.

  12. Bicep - Do you use User-defined Data Types?

    User-defined data types in Bicep allow you to create custom data structures for better code organization and type safety. They enhance reusability, abstraction, and maintainability within projects.

    When creating a cloud resource, numerous parameters are typically required for configuration and customization. Organizing and naming these parameters effectively is increasingly important.

    @allowed(['Basic', 'Standard'])
    param skuName string = 'Basic'
    @allowed([5, 10, 20, 50, 100])
    param skuCapacity int = 5
    param skuSizeInGB int = 2

    Bad example - Relying on parameter prefixes and order leads to unclear code, high complexity, and increased maintenance effort

    param sku object

    Bad example - When declaring a parameter as an untyped object, bicep cannot validate the object's properties and values at compile time, risking runtime errors.

    // User-defined data type
    type skuConfig = {
      name: 'Basic' | 'Standard'
      capacity: 5 | 10 | 20 | 50 | 100
      sizeInGB: int
    }

    param sku skuConfig = {
      name: 'Basic'
      capacity: 5
      sizeInGB: 2
    }
    Good example - A user-defined data type provides type safety and enhanced readability, and makes maintenance easier

  13. Do you name your Azure resources correctly?

    Video: Hear from Luke Cook about how organizing your cloud assets starts with good names and consistency!

    icon naming azure

    kv bad name
    The scariest resource name you can find

    Organizing your cloud assets starts with good names, used consistently.

    Azure defines some best practices for naming and tagging your resources.

    Having inconsistent resource names across projects creates all sorts of pain

    • Developers will struggle to find a project's resources and identify what those resources are being used for
    • Developers won't know what to call new resources they need to create.
    • You run the risk of creating duplicate resources... created because a developer has no idea that another developer created the same thing 6 months ago, under a different name, in a different Resource Group!

    Keep your resources consistent

    If you're looking for resources, it's much easier to have a pattern to search for. At a bare minimum, you should keep the name of the product in the resource name, so finding them in Azure is easy. One good option is to follow the "productname-environment" naming convention, and most importantly: keep it consistent!

    bad azure name example 1
    Bad Example - Inconsistent resource names. Do these belong to the same product?

    Name your resources according to their environment

    Resource names can impact things like resource addresses/URLs. It's always a good idea to name your resources according to their environment, even when they exist in different Subscriptions/Resource Groups.

    better example
    Good Example - Consistent names, using lowercase letters and specifying the environment. Easy to find, and easy to manage!

    Plan for the exceptions

    Some resources won't play nicely with your chosen naming convention (for instance, storage accounts do not accept kebab-case). Acknowledge these, and have a rule in place for how you will name these specific resources.

    Automate resource deployment

    ClickOps can save your bacon when you quickly need to create a resource and need to GSD. But since we are all human and humans make mistakes, there will be times when someone creating resources via ClickOps fails to follow the team's standards and name their resources consistently.

    Instead, it is better to provision your Azure Resources programmatically via Infrastructure as Code (IaC) using tools such as ARM, Bicep, Terraform and Pulumi. With IaC you can have naming conventions baked into the code and remove the thinking required when creating multiple resources. As a bonus, you can track any changes in your standards over time since (hopefully) your code is checked into a source control system such as Git (or GitHub, Azure Repos, etc.).
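    As an illustration of baking the convention into code, here is a hypothetical Bicep fragment using the "productname-environment" pattern (the product name 'northwind' and the environment list are assumptions), including handling for the storage account exception mentioned above:

    ```bicep
    // Sketch - names and environments are illustrative assumptions
    @allowed(['dev', 'staging', 'prod'])
    param environment string

    var product = 'northwind'
    var baseName = '${product}-${environment}' // e.g. northwind-dev

    // Exception: storage account names can't contain dashes, so strip them
    var storageName = replace(baseName, '-', '') // e.g. northwinddev

    resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
      name: storageName
      location: resourceGroup().location
      sku: { name: 'Standard_LRS' }
      kind: 'StorageV2'
    }
    ```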

    You can also use policies to enforce naming convention adherence. Making this part of your pipeline ensures robust naming conventions that remove developer confusion and lower cognitive load.

    For more information, see our rule: Do you know how to create Azure resources?

    Want more Azure tips? Check out our rule on Azure Resource Groups.

  14. Resource Groups - Do you know how to arrange your Azure resources?

    icon naming azure 1710232021931

    Naming your Resource Groups

    Resource Groups should be logical containers for your products. They should be a one-stop shop where a developer or sysadmin can see all resources being used for a given product, within a given environment (dev/test/prod). Keep your Resource Group names consistent across your business, and have them identify exactly what's contained within them.

    Name your Resource Groups as Product.Environment. For example:

    • Northwind.Dev
    • Northwind.Staging
    • Northwind.Production

    There are no cost benefits in consolidating Resource Groups, so use them! Have a Resource Group per product, per environment. And most importantly: be consistent in your naming convention.
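    The Product.Environment convention can even be enforced when the Resource Groups themselves are provisioned as code. Here is a hypothetical subscription-scoped Bicep sketch (the product name and region are assumptions):

    ```bicep
    // Sketch - creates one Resource Group per environment, named Product.Environment
    targetScope = 'subscription'

    @allowed(['Dev', 'Staging', 'Production'])
    param environment string

    resource rg 'Microsoft.Resources/resourceGroups@2022-09-01' = {
      name: 'Northwind.${environment}'
      location: 'australiaeast' // assumed region
    }
    ```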

    Keep your resources in logical, consistent locations

    You should keep all of a product's resources within the same Resource Group. Your developers can then find all associated resources quickly and easily, which also helps minimize the risk of duplicate resources being created. It should be clear what resources are being used in the Dev environment vs. the Production environment, and Resource Groups are the best way to manage this.

    rogue resource
    Bad example - A rogue dev resource in the Production RG

    Don't mix environments

    There's nothing worse than opening up a Resource Group and finding several instances of the same resources, with no idea which belong to dev/staging/production. Similarly, if you find a single instance of a Notification Hub, how do you know whether it is being used in the test environment or is a legacy resource needed in production?

    bad azure environments
    Bad example - Staging and Prod resources in the same RG

    Don't categorize Resource Groups based on resource type

    There is no cost saving in grouping resources of the same type together. For example, there is no reason to put all your databases in one place. It is better to provision the database in the same Resource Group as the application that uses it.

    arrange azure resources bad
    Figure: Bad example - SSW.SQL has all the Databases for different apps in one place

    rg good
    Figure: Good example (for all the above) - Resource Group contains all staging resources for this product

  15. Resource Groups- Do you apply Tags to your Azure Resource Groups?

    To help maintain order and control in your Azure environment, applying tags to resources and resource groups is the way to go.

    Azure has the Tag feature, which allows you to apply different Tag Names and values to Resources and Resource Groups:

    tags in resources group
    Figure: Little example of Tags in Resource Groups

    You can leverage this feature to organize your resources in a logical way, rather than relying on names alone. E.g.

    • Owner tag: You can specify who owns that resource
    • Environment tag: You can specify which environment that resource is in

    Tip: Do not forget to have a strong naming convention document stating how those tags and resources should be named. You can use this Microsoft guide as a starter point: Recommended naming and tagging conventions.
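As a sketch of how such a standard might be checked, a small function can flag resources that are missing mandatory tags. The required tag names here are assumptions for your own standard, not an Azure or SSW requirement.

```python
REQUIRED_TAGS = {"Owner", "Environment"}  # assumed mandatory tags in your standard

def missing_tags(resource_tags: dict) -> set:
    """Return the names of required tags that are absent from a resource."""
    return REQUIRED_TAGS - resource_tags.keys()
```

For example, a resource tagged only with an Owner would be flagged as missing its Environment tag.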

  16. Cost - Do you have an Azure Spend $ master?

    Azure is Microsoft's Cloud service. However, you have to pay for every little bit of service that you use.

    Before diving in, it is good to have an understanding of the basic built-in user roles:

    Figure: Roles in Azure

    More info:

    It's not a good idea to give everyone 'Contributor' access to Azure resources in your company. The reason is cost: Contributors can add and modify the resources used, and therefore increase your Azure bill at the end of the month. Although a single change might represent 'just a couple of dollars', everything summed up may increase the bill significantly.

    The best practice is to have an Azure Spend Master. This person controls the level of access granted to users, providing "Reader" access to users that do not need to (or should not) make changes to Azure resources, and "Contributor" access to those users that need to add or modify resources, bearing in mind the cost of every change.

    Also, keep in mind that you should be giving access to security groups and not individual users. It is easier, simpler, and keeps things much better structured.

    Figure: Bad example - Contributor access to the Developers group

    Figure: Good example - Reader access to the Developers group

  17. Do you proactively notify about expected spikes in Azure Resource costs?

    Always inform stakeholders in advance if you anticipate a significant increase in Azure resource costs. This proactive communication is crucial for budget planning and avoiding unexpected expenses.

    Why This Matters

    1. Budget Management: Sudden spikes in costs can disrupt budget allocations and financial planning.
    2. Transparency: Keeping stakeholders informed fosters trust and transparency in operations.
    3. Planning: Advance notice allows for better resource allocation and potential cost optimization strategies.

    How to Implement

    • Communicate Early: As soon as a potential cost increase is identified, communicate this to relevant stakeholders.
    • Provide Details: Include information about the cause of the spike, expected duration, and any steps being taken to mitigate costs.


    A team needs to perform a bulk update on millions of records in an Azure Cosmos DB instance, a task that requires scaling up the throughput units substantially. They proceed without notifying anyone, assuming the cost would be minimal as usual. However, the intensive usage for a week leads to an unexpectedly high bill, causing budgetary concerns and dissatisfaction among stakeholders.

    Figure: Bad example - Nobody likes a surprise bill

    Before running a large-scale data migration on Azure SQL Database, which is expected to significantly increase DTU (Database Transaction Unit) consumption for a week, the team calculates the expected cost increase. They inform the finance and management teams, providing a detailed report on the reasons for the spike, the benefits of the migration, and potential cost-saving measures.

    Then send an email (as per the template below)

    Figure: Informing and emailing stakeholders before a spike makes everyone happy

    Email template

    Remember, effective communication about cost management is key to maintaining a healthy and transparent relationship with all stakeholders involved in your Azure projects.

  18. Do you use Entra Access Packages to give access to resources?

    In today's complex digital landscape, managing user access to resources can be a daunting task for organizations. Entra Access Packages emerge as a game-changer in this scenario, offering a streamlined and efficient approach to identity and access management.

    By bundling related resources into cohesive packages, they simplify the process of granting, reviewing, and revoking access. This not only reduces administrative overhead but also enhances security by ensuring that users have the right permissions at the right time. Furthermore, with built-in automation features like approval workflows and periodic access reviews, organizations can maintain a robust and compliant access governance structure. Adopting Entra Access Packages is a strategic move for businesses aiming to strike a balance between operational efficiency and stringent security.

    ❌ Bad Example - Manually Requesting Access via Email

    In the old-fashioned way, users would send an email to the SysAdmins requesting access to a specific resource. This method is prone to errors, lacks an audit trail, and can lead to security vulnerabilities.

    Figure: Bad example - This requires manual changes by a SysAdmin

    ✅ Good Example - Requesting Access via the My Access portal

    Instead of manually sending emails, users can request access through the My Access portal, which provides a streamlined, auditable, and secure method.

    1. Navigate to the My Access portal

      screenshot 2023 08 23 214846
      Figure: Navigate to the My Access portal

    2. Search for the desired resource or access package.

      screenshot 2023 08 23 215159
      Figure: Search for the required resource

    3. Request Access by selecting the appropriate access package and filling out any necessary details.

      screenshot 2023 08 23 215532
      Figure: Request Access

    4. Wait for approval from the people responsible for the resource

      If you require immediate access ping them on Teams

    Steps to Create an Access Package

    1. Open Azure Portal: Navigate to Azure Active Directory | Identity Governance | Access packages.

      screenshot 2023 08 23 220334
      Figure: Navigate to Azure portal | Access packages | New Access package

    2. New Access Package: Click on + New access package.
    3. Fill Details: Provide a name, description, and select the catalog for the access package.

      screenshot 2023 08 23 221623
      Figure: Fill out the details and choose a catalog

    4. Define Resources: Add the resources (applications, groups, SharePoint sites) that users will get access to when they request this package.

      screenshot 2023 08 23 222048
      Figure: Add the required resources

    5. Set Policies: Define who can request the package, approval workflows, duration of access, and other settings.

      screenshot 2023 08 23 222124
      Figure: Choose the types of users that can request access

      screenshot 2023 08 23 222210
      Figure: Choose policies that match the level of access

    6. Review and Create: Ensure all details are correct and then create the access package.

      screenshot 2023 08 23 222746
      Figure: Review the settings and create the policy

  19. Cost - Do you know how to be frugal with Azure Storage Transactions?

    Azure transactions are CHEAP. You get tens of thousands for just a few cents. What is dangerous though is that it is very easy to have your application generate hundreds of thousands of transactions a day.

    Every call to Windows Azure Blobs, Tables and Queues counts as 1 transaction. Windows Azure diagnostic logs, performance counters, trace statements and IIS logs are written to Table Storage or Blob Storage.

    If you are unaware of this, it can quickly add up and either burn through your free trial account, or even create a large unexpected bill.

    Note: Calls to SQL Azure do not count as Azure Storage transactions.

    Be aware that Azure Functions Queue and Event Hub Triggers can cause lots of transactions

    Both of these triggers can cause a lot of transactions. Typically this is controlled by the batch size you configure. What happens is that the Functions runtime needs to read and write a watermark into blob storage. This is a record of what items have been read from the Queue or Event Hub. So the bigger the batch size, the less often these records get written. If you expect your function to potentially be triggered a lot, make the batch size bigger.

    Many people set the batch size to 1, which results in ~2 storage transactions per trigger – this can get expensive quite fast.
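A rough back-of-the-envelope model shows why batch size matters. This sketch assumes ~2 watermark transactions (one read, one write) per batch, as described above; real numbers will vary.

```python
def daily_watermark_transactions(messages_per_day: int, batch_size: int) -> int:
    """Rough estimate: ~2 storage transactions (watermark read + write) per batch."""
    batches = -(-messages_per_day // batch_size)  # ceiling division
    return batches * 2
```

At 100,000 messages a day, a batch size of 1 gives roughly 200,000 transactions, while a batch size of 16 cuts that to about 12,500.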

    Ensure that Diagnostics are Disabled for your web and worker roles

    Having Diagnostics enabled can contribute 25 transactions per minute – that is 36,000 transactions per day.
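The arithmetic behind that figure:

```python
transactions_per_minute = 25
transactions_per_day = transactions_per_minute * 60 * 24  # = 36,000 per day
```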


    azure check properties
    Figure: Check the properties of your web and worker role configuration files

    azure disable diagnostics
    Figure: Disable diagnostics

    Disable IntelliTrace and Profiling

    azure publishing settings
    Figure: When publishing, ensure that IntelliTrace and Profiling are both disabled


    Use robots.txt to control search engine crawlers

    Search bots crawling your site to index it will generate a lot of transactions. Especially for web "applications" that do not need to be searchable, use robots.txt to save transactions.

    azure robots
    Figure: Place robots.txt in the root of your site to control search engine indexing

    Continuous Deployment

    When deploying to Azure, the deployment package is loaded into the Storage Account. This will also contribute to the transaction count.

    If you have enabled continuous deployment to Azure, you will need to monitor your transaction usage carefully.


  20. Do you know how to backup data on SQL Azure?

    Microsoft Azure SQL Database has built-in backups to support self-service Point in Time Restore and Geo-Restore for Basic, Standard, and Premium service tiers.

    You should use the built-in automatic backup in Azure SQL Database versus using T-SQL.

    T-SQL: CREATE DATABASE destinationdatabasename AS COPY OF [source_server_name].sourcedatabasename

    Figure: Bad example - Using T-SQL to restore your database

    Azure restore
    Figure: Good example - Using the built-in SQL Azure Database automatic backup system to restore your database

    Azure SQL Database automatically creates backups of every active database using the following schedule: Full database backup once a week, differential database backups once a day, and transaction log backups every 5 minutes. The full and differential backups are replicated across regions to ensure the availability of the backups in the event of a disaster.
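From that schedule, the number of backups taken per week works out as follows (simple arithmetic over the stated frequencies):

```python
full_backups_per_week = 1                # full backup once a week
diff_backups_per_week = 7                # differential backup once a day
log_backups_per_week = 7 * 24 * 60 // 5  # transaction log backup every 5 minutes
```

That is 1 full, 7 differential and 2,016 transaction log backups in a typical week.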

    Backup Storage

    Backup storage is the storage associated with your automated database backups that are used for Point in Time Restore and Geo-Restore. Azure SQL Database provides backup storage of up to 200% of your maximum provisioned database storage at no additional cost.

    | Service Tier | Geo-Restore | Self-Service Point in Time Restore | Backup Retention Period | Restore a Deleted Database |
    |--------------|-------------|------------------------------------|-------------------------|----------------------------|
    | Web | Not supported | Not supported | n/a | n/a |
    | Business | Not supported | Not supported | n/a | n/a |
    | Basic | Supported | Supported | 7 days | Supported |
    | Standard | Supported | Supported | 14 days | Supported |
    | Premium | Supported | Supported | 35 days | Supported |

    Figure: All the modern SQL Azure Service Tiers support backup. Web and Business tiers are being retired and do not support backup. Check the Web and Business Edition Sunset FAQ for up-to-date retention periods

    Learn more on Microsoft documentation:

    Other ways to back up Azure SQL Database:

  21. Security - Do you configure your web applications to use specific accounts for database access?


    An application's database access profile should be as restricted as possible, so that in the case that it is compromised, the damage will be limited.

    Application database access should also be restricted to only the application's own database, and none of the other databases on the server.

    Figure: Bad example – Contract Manager Web Application using the administrator login in its connection string

    Figure: Good example – Application specific database user configured in the connection string

    Most web applications need full read and write access to one database. In the case of EF Code First migrations, they might also need DDL admin rights. These are built-in database roles:

    | Role | Description |
    |------|-------------|
    | db_ddladmin | Members of the db_ddladmin fixed database role can run any Data Definition Language (DDL) command in a database. |
    | db_datawriter | Members of the db_datawriter fixed database role can add, delete, or change data in all user tables. |
    | db_datareader | Members of the db_datareader fixed database role can read all data from all user tables. |

    Table: Database roles taken from Database-Level Roles

    If you are running a web application on Azure, you should configure your application to use its own specific account with restricted permissions. The following script demonstrates setting up a SQL user for myappstaging (repeat the same steps for myappproduction) that also supports EF Code First migrations:

    USE master;
    CREATE LOGIN myappstaging WITH PASSWORD = '************';
    CREATE USER myappstaging FROM LOGIN myappstaging;
    -- Now connect directly to myapp-staging-db (Azure SQL Database does not support USE to switch databases) and run:
    CREATE USER myappstaging FROM LOGIN myappstaging;
    EXEC sp_addrolemember 'db_datareader', 'myappstaging';
    EXEC sp_addrolemember 'db_datawriter', 'myappstaging';
    EXEC sp_addrolemember 'db_ddladmin', 'myappstaging';

    Figure: Example script to create a service user for myappstaging

    Note: If you are using stored procedures, you will also need to grant execute permissions to the user. E.g.

    GRANT EXECUTE TO myappstaging

    Data Source=xyzsqlserver.database.windows.net,1433; Initial Catalog=myapp-staging-db; User ID=myappstaging@xyzsqlserver; Password='*************'

    Figure: Example connection string

  22. Security - Do you give users least privileges?

    Like other services, it is important that your company has a structured and secure approach to managing Azure Permissions.

    First a little understanding of how Azure permissions work. For each subscription, there is an Access Control (IAM) section that will allow you to grant overall permissions to this Azure subscription. It is important to remember that any access that is given under Subscriptions | "Subscription Name" | Access Control (IAM), will apply to all Resource Groups within the Subscription.

    azure permissions bad
    Figure: Bad example - Too many people have Owner permission on the subscription level

    azure permissions good
    Figure: Good example - Only Administrators that will be managing overall permissions and content have been given Owner/Co-administrator

    From the above image, only the main Administrators have been given Owner/Co-administrator access, all other users within the SSWDesigners and SSWDevelopers Security Groups have been given Reader access. The SSWSysAdmins Security group has also been included as an owner which will assist in case permissions are accidentally stripped from the current Owners.

  23. Do you know how to find the closest Azure Data Centre for your next project?

    Here's a cool site that tests the latency of Azure Data Centres from your machine. It can be used to work out which Azure Data Centre is best for your project based on the target user audience:

    As well as testing latency it has additional tests that come in handy like:

    • CDN Test
    • Upload Test
    • Large File Upload Test
    • Download Test

    azure speed
    Figure: Example latency test results

  24. Do you know to pay for Azure WordPress databases?

    Setting up a WordPress site hosted on Windows Azure is easy and free, but you only get 20MB of MySQL data on the free plan.

    wp db azure1
    Figure: Once you approach your 20MB limit you will receive a warning that your database may be suspended

    wp db azure2
    Figure: If you are serious about your blog and including content on it, you should configure a paid Azure Add-on to host your MySQL Database when you set it up

    wp db azure3
    Figure: If you have already created your blog, navigate to your website within the Azure portal, select 'Linked Resources', select the line for the MySQL Database and click the 'Manage link'. This will open the ClearDb portal. Go to the Dashboard and click 'Upgrade'

    References: John Papa: Tips for WordPress on Azure.

  25. Do you know when to use Geo Redundant Storage?

    Data in Azure Storage accounts is protected by replication. Deciding how far to replicate it is a balance between safety and cost.

    azure graphic
    Figure: It is important to balance safety and pricing when choosing the right replication strategy for Azure Storage Accounts

    Locally redundant storage (LRS)

    • Maintains three copies of your data.
    • Is replicated three times within a single facility in a single region.
    • Protects your data from normal hardware failures, but not from the failure of a single facility.
    • Less expensive than GRS
    • Use when:

      • Data is of low importance – e.g. for test websites, or testing virtual machines
      • Data can be easily reconstructed
      • Data is non-critical
      • Data governance requirements restrict data to a single region

    Geo-redundant storage (GRS)

    • The default when you create storage accounts.
    • Maintains six copies of your data.
    • Data is replicated three times within the primary region, and is also replicated three times in a secondary region hundreds of miles away from the primary region
    • In the event of a failure at the primary region, Azure Storage will failover to the secondary region.
    • Ensures that your data is durable in two separate regions.
    • Use when:

      • Data cannot be recovered if lost

    Read access geo-redundant storage (RA-GRS)

    • Replicates your data to a secondary geographic location (same as GRS)
    • Provides read access to your data in the secondary location
    • Allows you to access your data from either the primary or the secondary location, in the event that one location becomes unavailable.
    • Use when:

      • Data is critical, and access is required to both the primary and the secondary regions
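The copy counts above can be summarised in a small lookup table. This is a sketch for clarity, not an Azure API:

```python
# Copies of your data kept by each Azure Storage replication option
COPIES = {"LRS": 3, "GRS": 6, "RA-GRS": 6}

def total_copies(option: str) -> int:
    """How many copies of the data the given redundancy option maintains."""
    return COPIES[option]
```

Note that RA-GRS keeps the same six copies as GRS; the difference is read access to the secondary region, not extra copies.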

    More information:

  26. Do you shutdown VMs when you no longer need them?

    Often we use Azure VMs for presentations, training and development. As there is a cost involved in storing and using the VM, it is important to ensure that the VM is shut down when it is no longer required.

    Stopping (deallocating) the VM prevents compute charges from accruing. There is still a cost for storing the VHD files, but these charges are much lower than the compute charges.

    Please note that this is for Visual Studio subscriptions.

    You can stop the VM from the Azure portal, which deallocates it so compute charges stop. Note: shutting down the operating system from within a remote desktop session leaves the VM allocated, and compute charges continue until it is stopped (deallocated) from the portal.

    Figure: Azure Portal

  27. Do you use Azure Policies?

    If you use a strong naming convention and use Tags to their full extent in Azure, then it is time for the next step.

    Azure Policy is a powerful tool to help govern your Azure subscription. With it, it is easier to fall into The Pit of Success when creating or updating resources. Some of its features:

    1. You can deny creation of a Resource Group that does not comply with the naming standards
    2. You can deny creation of a Resource if it doesn't possess the mandatory tags
    3. You can append tags to newly created Resource Groups
    4. You can audit the usage of specific VMs or SKUs in your Azure environment
    5. You can allow only a set of SKUs within Azure

    Azure Policy also lets you create initiatives (groups of policies) that together achieve an objective, e.g. an initiative to audit all tags within a subscription, or to allow creation of only certain types of VMs.

    You can delve deeper in the Azure Policy documentation:

    compliant initiative azure policy
    Figure: Good Example - A fully compliant initiative in Azure Policy

  28. Do you use Azure Machine Learning to make predictions from your data?

    Azure Machine Learning provides an easy-to-use yet feature-rich platform for conducting machine learning experiments. This introduction provides an overview of ML Studio functionality, and how it can be used to model and predict interesting real-world problems.

  29. Do you use Azure Notebooks to learn your data?

    Azure Notebooks offer a simple, transparent and complete technology for analysing data and presenting the results. They are quickly becoming the default way to conduct data analysis in the scientific and academic community.

  30. Do you set up Azure alert emails to go to a Teams channel?

    Most sysadmins set up Azure alerts to go to a few people, who then have the job of forwarding the email to the right people every time there is a problem. What happens when they are away? And why should they keep adding and removing emails as people join and leave the team?

    There is a better way: have those emails go to the Team. Every Teams channel has its own email address, and team members can pin the important emails so they sit right at the top.

    azure alert emails teams channel
    Figure: Good example – Set Azure alert emails to go to a Team and not to specific people

  31. Redundancy - Do you use Azure Site Recovery?

    Azure Site Recovery is the best way to ensure business continuity by keeping business apps and workloads running during outages. It is one of the fastest ways to get redundancy for your VMs on a secondary location. For on-premises local backup see Do you know why to use Data Protection Manager?

    Ensuring business continuity is a priority for the System Administrator team, and is part of any good disaster recovery plan. Azure Site Recovery allows an organization to replicate and sync Virtual Machines from on-premises (or even different Azure regions) to Azure. This replication can be set to whatever frequency the organization deems to be required, from daily/weekly through to constant replication.

    When there is an issue, restoration can be in minutes - you just switch over to the VMs in Azure! They will keep the business running while the crisis is dealt with. The server will be in the same state as the last backup. Or if the issue is software you can restore an earlier version of the virtual machine within a few minutes as well.

    azure backup
    Figure: Azure Backup and Site Recovery backs up on-premises and Azure Virtual Machines

  32. Do you know how to use slot deployment on Azure?

    Azure App Services are powerful and easy to use. Lots of developers choose it as the default hosting option for their Web Apps and Web APIs. However, setting up a staging environment and managing deployments to it can be tricky.

    We can choose to create a second resource group or subscription to host our staging resources. As a great alternative, we can use a fully-fledged feature of App Service called deployment slots.

    How to use deployment slots

    To start using deployment slots, we can spin up another web app – it sits next to your original web app with a different URL. For example, your production URL could be yourapp.azurewebsites.net, with the corresponding staging slot at yourapp-staging.azurewebsites.net (the slot name is appended to the site name). Your users access your production web app while you deploy a new version to the staging slot. That way, the updated web app can be tested before it goes live. You can then swap the staging and production slots with a single click. See figures 1 to 5 below.

    Other benefits of deployment slots

    The benefit of using deployment slots is that if anything goes wrong on your production web app, you can easily roll back by swapping with the staging slot – your previous version sits on the staging slot, ready to be swapped back anytime before a newer version is pushed to it.

    Deployment slots can also work hand in hand with a blue-green deployment strategy – you can gradually opt users in to beta features on the staging slot.

    azure slot 1
    Figure 1: Before Swap - Production slot

    azure slot 2
    Figure 2: Before swap - Staging slot

    azure slot 3
    Figure 3: Swap the slot with one click

    azure slot 4
    Figure 4: After swap – Production slot

    azure slot 5
    Figure 5: After swap – Staging slot

  33. Budgets - Do you monitor your costs in Azure?

    Azure costs can be difficult to figure out and it is important to make sure there are no hidden surprises. To avoid bill shock, it is crucial to be informed. Once you are informed, then you can make the appropriate actions to reduce costs.

    Let's have a look at the tools and processes that can be put in place to help manage Azure costs:

    Video: Monitoring your Azure $ costs with Warwick Leahy (4 min)

    Budgets - Specify how much you aim to spend

    Budgets are a tool that allow users to define how much money is spent on either an Azure Subscription or a specific resource group.

    It is critical that an overarching budget is set up for every subscription in your organization. The budget figure should define the maximum amount expected to be spent every month.

    In addition to the overarching budget, specific apps can be targeted to monitor how much is being spent on them. Each time a new service is proposed, it is a good idea to have a cost conversation. Remember to jump into Azure and create a new budget to monitor that app.

    Subscriptions - Split costing by environment

    In addition to budgets, it's also a good idea to split costing between production and non-production scenarios. This can help diagnose why there are unexpected spend fluctuations e.g. performed load testing on the test site. Also, there are sometimes discounts that can be applied to a subscription only used for dev/test scenarios.

    Figure: Bad example - No budget has been set up, disaster could be imminent and no one would know 🥶!

    Figure: Good example - Budgets have been set up 😎

    Cost alerts - Make sure you know something has gone wrong

    Once a budget is set up, cost alerts are the next important part for monitoring costs. Cost alerts define the notifications that are sent out when budget thresholds are being exceeded. For example, it might be set to send out an alert at 50%, 75%, 100% and 200%.
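Given a monthly budget and the thresholds above, the dollar amounts at which each alert fires are easy to derive. This is an illustrative sketch; the actual threshold percentages are whatever you configure in Azure.

```python
def alert_amounts(monthly_budget: float, thresholds=(0.5, 0.75, 1.0, 2.0)) -> list:
    """Dollar amount at which each cost alert threshold fires."""
    return [monthly_budget * t for t in thresholds]
```

For a $1,000 monthly budget, alerts would fire at $500, $750, $1,000 and $2,000.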

    Make sure to set up alerts on all the thresholds that are important to the company.

    If the company is really worried about costs, an Azure runbook could even be set up to disable resources after exceeding the budget limit. However, that isn't a very common practice since nobody wants the company website to go down randomly!

    Figure: Bad example - No cost alerts, a recipe for disaster 😞!

    Figure: Good example - Cost alerts have been set up ✨

    Cost analysis - What if you get an alert?

    It can be scary when you get an alert. Luckily, Azure has a nice tool for managing costs, called Cost Analysis. You can break down costs by various attributes (e.g. resource group or resource type).

    Using this tool helps identify where the problem lies, and then you can build a plan of attack for handling it.

    Note: If your subscription is a Microsoft Sponsored account, you can't use the Cost Analysis tool to break down your costs, unfortunately. Microsoft has this planned for the future, but it's not here yet.

    Tag your resources - Make it easier to track costs

    Adding a tag of cost-category to each of your resources makes it easier to track costs over time. This will allow you to see the daily costs of your Azure resources based on whether they are Core, Value adding or Dev/Test. Then you can quickly turn off resources to save money if you require. It also helps you to see where money is disappearing.

    Running a report every fortnight (grouped by the cost-category tag) will highlight any spikes in resource costs - daily reports are probably too noisy, while monthly reports have the potential for overspend to last too long.
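The grouping behind such a report can be sketched like this. The resource records here are hypothetical; in practice the data would come from an Azure cost export.

```python
from collections import defaultdict

def costs_by_category(resources: list) -> dict:
    """Sum daily cost per cost-category tag; untagged resources grouped separately."""
    totals = defaultdict(float)
    for resource in resources:
        totals[resource.get("cost-category", "untagged")] += resource["daily_cost"]
    return dict(totals)
```

Grouping untagged resources separately also makes it obvious which resources still need a cost-category tag.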

    azurecostsbycategory 1710232021930
    Figure: Daily costs by category

    Approval process - Don't let just anyone create resources

    Managing the monthly spend on cloud resources (e.g. Azure) is hard. It gets harder for the Spend Master (e.g. SysAdmins) when developers add services without sending an email to aid in reconciliation.

    Developers often have high permissions (e.g. Contributor permissions to a Resource Group), and are able to create resources without the Spend Master knowing, and this will lead to budget and spending problems at the end of the billing cycle.

    For everyone to be on the same page, the process a developer should follow is:

    1. Use the Azure calculator - work out the monthly resource $ price
    2. Email the Spend Master with the $ figure and a request to create resources in Azure, like the below:
    3. If the request is approved, remember to add a cost-category tag to the new resource once it is created

    Make sure you include all resources you intend to create, even if they should be free. For example, you might create an App Service on an existing, shared App Service Plan. The Spend Master will still need to be aware of this, in case the App Service Plan needs to be scaled up.

  34. Do you reduce your Azure costs?

    Dealing with questions from Product Owners about expenses related to applications hosted on Azure can be a real headache 🥲

    Get ready to empower your Product Owners! When it comes to Azure expenses, you want to be informed and monitor your costs. You can also have a solution that not only helps you understand where the spending is coming from, but also helps you find ways to optimize it. With Azure Cost Analysis, you can confidently provide your Product Owners with insights and recommendations that will save time and money, and make everyone's day a little brighter ✨

    Always tackle the biggest 3 costs first. In most instances they will be upwards of 98% of your spend, particularly if you are in a wasteful environment. I have seen MANY projects where the largest cost by a significant margin was Application Insights.
    - Bryden Oliver, Azure expert

    Video: Managing your Azure Costs | Bryden Oliver | SSW Rules (5 min)

    Azure Cost Analysis gives you a detailed breakdown of where any Azure spending is coming from. It breaks down your cost by:

    • Scoped Area e.g. a subscription
    • Resource Group e.g. Northwind.Website
    • Location e.g. Australia East
    • Service type e.g. Azure App Service

    Note: You can also 'filter by' any of these things to give you a narrowed down view.

    Analysing the expenditure - Finding the big dogs 🐶

    To optimize spending, analyze major costs in each category. Generally, it's a good idea to focus on the top 3 contributors - optimizing beyond that is usually not worth the effort.

    Key questions to ask:

    • Do you need that resource?
    • Can you scale down?
    • Can you refactor your application to consume less?
    • Can you change the type of service or consumption model?
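
    The "top 3 contributors" analysis above can be sketched in code. This is a minimal, hypothetical example - the service names and dollar figures are made up, and real numbers would come from Azure Cost Analysis or its export:

    ```python
    from collections import defaultdict

    # Hypothetical cost rows - in practice these would come from an
    # Azure Cost Analysis export, not hard-coded values
    costs = [
        {"service": "Application Insights", "cost": 340.0},
        {"service": "Azure App Service",    "cost": 120.0},
        {"service": "SQL Database",         "cost": 95.0},
        {"service": "Application Insights", "cost": 60.0},
        {"service": "Storage",              "cost": 4.0},
    ]

    def top_contributors(rows, key, n=3):
        """Aggregate cost rows by `key` and return the n biggest contributors."""
        totals = defaultdict(float)
        for row in rows:
            totals[row[key]] += row["cost"]
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

    # The top 3 usually account for the vast majority of spend
    for service, total in top_contributors(costs, "service"):
        print(f"{service}: ${total:.2f}")
    ```

    The same grouping works for any of the dimensions below - just change the key from service type to resource group or location.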

    Scoped Area

    The cumulative cost of a selected area over a given time period e.g. the cost of a subscription charted over the last year, highlighting periods of sudden or sustained growth during that time.

    azure area chart
    Figure: Azure Portal | Cost Analysis | Scoped Area Chart e.g. in February it was deployed and in August a marketing campaign caused more traffic

    Resource Group

    The cost of each resource group in the scoped area e.g. the cost of the Northwind website infrastructure.

    Look at the most expensive resource group and try to reduce it. Ignore the tiny ones.

    resource groups
    Figure: Azure Portal | Cost Analysis | Resource Group Breakdown


    Location

    The cost of each location e.g. Australia East.

    If you have your applications spread across multiple locations, this chart can help figure out if one of those locations is costing more than others. Consider scaling each location to the scale of usage in that location ⚖️.

    Figure: Azure Portal | Cost Analysis | Location breakdown

    Service type

    The cost of each service used e.g. Azure App Service.

    If a specific service is costing a lot of money, consider whether another service might be better suited, or whether its consumption model can be adapted to better fit your usage levels.

    Figure: Azure Portal | Cost Analysis | Service type breakdown

    What if you suspect a specific resource is a problem?

    The Azure Cost Analysis tool also allows different views to be selected. If you suspect a specific resource is causing a problem, select the "CostByResource" view to see the cost of each component of that resource. That way you can identify an area which can be improved 🎯.

    service breakdown
    Figure: Azure Portal | Cost Analysis | View | CostByResource | Resource breakdown

  35. Do you keep track of expiring app registration secrets and certificates?

    In Azure AD, App Registrations are used to establish a trust relationship between your app and the Microsoft identity platform. This allows you to give your app access to various resources, such as Graph API.

    App Registrations use secrets or certificates for authentication. It is important to keep track of the expiry date of these authentication methods, so you can update them before things break.

    Use a PowerShell script to check expiry dates

    An easy way to do this is to run a PowerShell script that checks the expiry date of all app registration secrets and certificates. This requires the AzureAD module; after signing in with Connect-AzureAD, Get-AzureADApplication returns each app registration with its PasswordCredentials (secrets) and KeyCredentials (certificates), both of which expose an EndDate.

    There's an example of a working script here:

    To extend the example above, you can run the script on a schedule using Task Scheduler or an Azure Automation Runbook, and send an email with Send-MailMessage.

    Note: To run this on a schedule, you should create an app registration to authenticate the script. The app registration will need the role Cloud Application Administrator.
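
    Whichever tool fetches the credentials, the expiry check itself is simple. Here is a minimal Python sketch of the logic - the app names, credential data and the expiring_soon helper are all hypothetical; real data would come from the AzureAD cmdlets or Microsoft Graph:

    ```python
    from datetime import datetime, timedelta, timezone

    # Hypothetical credential data - in practice this comes from
    # Get-AzureADApplication (PasswordCredentials / KeyCredentials)
    # or the Microsoft Graph API
    credentials = [
        {"app": "Northwind.Website", "name": "client-secret", "end": "2024-02-01T00:00:00+00:00"},
        {"app": "Northwind.Api",     "name": "signing-cert",  "end": "2025-06-30T00:00:00+00:00"},
    ]

    def expiring_soon(creds, now, days=30):
        """Return credentials that have expired or expire within `days` days."""
        cutoff = now + timedelta(days=days)
        return [c for c in creds if datetime.fromisoformat(c["end"]) <= cutoff]

    # Checking on 15 Jan 2024 flags only the Northwind.Website secret,
    # which falls inside the 30-day warning window
    now = datetime(2024, 1, 15, tzinfo=timezone.utc)
    for cred in expiring_soon(credentials, now):
        print(f"{cred['app']}: {cred['name']} expires {cred['end']}")
    ```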

    Use a Logic App to check expiry dates

    If you prefer working with Logic Apps, there's an example of how it can be done here:

    You will also need an App Registration to authenticate your Logic App. Notifications can then be sent to email or a Teams channel.

    app reg email
    Figure: Example email, listing app registration secrets that are expiring soon

  36. Do you use Configuration over Key Vault?

    We all know that we should Store Secrets Securely using Key Vault. But did you know that, rather than having developers deal with a combination of Key Vault and Configuration, you can abstract Key Vault out of your application code entirely, leaving developers to deal only with Configuration?

    Figure: Bad example - Having to wire up Key Vault unnecessarily

    A feature of Azure App Service is the ability to use secrets from Key Vault as configuration values. This allows you to set up a link between your App Service and a Key Vault, and have a configuration value point to a Key Vault entry.

    So now, rather than developers having to think about whether a value is a secret or configuration, it's always configuration. It just might have its value stored securely in Key Vault.
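
    For example, an App Service app setting can reference a Key Vault secret directly using the documented Key Vault reference syntax. The vault and secret names below are hypothetical:

    ```
    ConnectionStrings__Northwind = @Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/NorthwindDb/)
    ```

    Your code then reads ConnectionStrings__Northwind as ordinary configuration, and App Service resolves the secret from Key Vault at runtime. Note the App Service's managed identity needs permission to read secrets from the vault.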

    Figure: Good Example - Developers don't need to know anything about Key Vault

  37. Do you clean up your groups with Entra Access Reviews?

    When you have multiple ongoing projects with people moving in and out of project teams, you can end up with too many people in the related groups - especially if you are using public Microsoft 365 groups that anyone in the organization can join.

    With Access Reviews, you can automate cleaning up these groups and make sure only the right people have ongoing access.

    Why use Access Reviews?

    In today's digital landscape, ensuring the right people have the right access to resources is paramount. Over time, as employees change roles, projects evolve, or external collaborators come and go, permissions can become outdated. This can lead to excessive access rights or, conversely, insufficient access, both of which pose risks. Excessive access can open doors to potential security breaches, while insufficient access can hinder productivity.

    "Entra Access Reviews" provides a systematic way to review and validate user access rights regularly. By conducting periodic access reviews, organizations can identify and rectify any inappropriate permissions, reducing the risk of unauthorized access or data breaches. Moreover, it ensures that users have the necessary access to perform their roles efficiently. Access reviews also support compliance efforts, as many regulatory frameworks require periodic reviews of access rights. With "Entra Access Reviews", organizations can automate this process, ensuring a consistent, auditable, and efficient approach to maintaining secure and compliant access controls.

    User Experience During an Access Review

    When it's time for an access review, users receive a notification prompting them to validate their access rights. The process is designed to be intuitive, guiding users step-by-step through a clear list of the resources they currently have access to and asking them to confirm whether they still need it. This self-review makes users part of the security and compliance process, ensuring they only have access to what they genuinely need. Below is a screenshot of what users see during this process:

    Figure: Reviewing your access is as simple as clicking a link in an email

    Creating an Access Review

    1. Go to the Azure Portal | Identity Governance | Access Reviews
    2. Click + New Access Review

    access review 1
    Figure: New Access Review

    1. Under Select what to review, choose Teams + Groups
    2. Under Review scope, choose Select Teams + Groups
    3. Click on + Select groups and choose the group you want to review
    4. Under Scope select All users
    5. Click Next: Reviews

    access review 2
    Figure: Access Reviews | Review type

    1. Check the Multi-stage review box
    2. Under First stage review | Select reviewers, choose Users review their own access
    3. Select a stage duration (default is 3 days)
    4. Under Second stage review | Select reviewers, choose Group owner(s)
    5. Select a stage duration again (default is 3 days)

    access review 3
    Figure: Access Review | Stages

    1. Under Specify recurrence of review, select a Review recurrence and Start date
    2. Under Specify reviewees to go to next stage, choose Approved reviewees
    3. Click Next: Settings

    access review 4
    Figure: Access Reviews | recurrence & reviewees

    1. Under Upon completion settings, tick Auto apply results to resource
    2. Under If reviewers don't respond, choose Remove access

    access review 5
    Figure: Access Reviews | Upon completion

    Under Advanced Settings

    1. Turn off Justification required
    2. Under Additional content for reviewer email, add an explanation so there's no confusion over what this email is.
    3. Click Next: Review + Create

    access review 6
    Figure: Access Reviews | Advanced settings

    1. Under Name new access review, add a name and description
    2. Review the details and click Create

    access review 7
    Figure: Access Review | Review + Create

    The Results

    At the end of the review, we get to see the results.

    Figure: At the conclusion we see these great stats!!

  38. Do you know which environments you need to provision when starting a new project?

    Before any project can be used by a customer, it must first be published to a production environment. However, to provide a robust and uninterrupted service to customers, the production environment should not be used for development or testing. To ensure this, we must set up a separate environment for each of these purposes.

    bad example skipping environments
    Bad example - Skipping environments

    Skipping environments in a feeble attempt to save money will result in untested features breaking production.

    What is each environment for?

    • Production: Real data being used by real customers. This is the baseline/high watermark of all your environments. Lower environments will be lower spec, fewer redundancies, less frequent backups, etc.
    • Staging: 'Production-like' environment used for final sign-off. Used for testing and verification before deploying to Production. Should be as close to Production as possible e.g. access (avoid giving developers admin rights) and specs (especially during performance testing), although cost often forces compromises. What matters is that Staging is 'logically equivalent' to Production: the same level of redundancy (e.g. Regions + Zones), back-ups, permissions, and service SLAs.
    • Development: A place to verify code changes. Typically a simpler or lower-spec version of the Staging or Production environment, aiding early identification and troubleshooting of issues (especially integration issues).
    • Ephemeral: Short-lived environment that is spun up on demand for isolated testing of a branch, and then torn down when the branch is merged or deleted. See rule on ephemeral environments.
    • Local: Developer environment running on their local machine. May be self-contained or include hosted services depending on the project's needs.

    What environments should I create for a new project?

    Large or Multi-Team Projects

    complex environments
    Good example - Large or Multi-Team Projects tend to have more environments

    For large projects it's recommended to run 4 hosted environments + 1 local:

    • Production
    • Staging
    • Development
    • Ephemeral (if possible)
    • Local

    The above is a general recommendation. Depending on your project's needs you may need to add additional environments e.g. support, training, etc.

    Internal or Small Projects

    simple environments
    Good example - Internal or Small Projects have fewer environments

    For smaller projects we can often get away without having a dedicated development environment. In this scenario we have 2 hosted environments + 1 local:

    • Production
    • Staging
    • Local