Tips on Using CloudEndure

Sometimes a product is the little engine that could, while everyone pays attention to the bigger boy in the family. Documentation-wise, there are some things that are better stated in a simple Q&A format:

What are some setup tips for using CloudEndure? – The staging systems should be placed in a network with an Internet Gateway (IGW). They require a public IP to communicate directly over port 1500. The conversion system should use a very large/expensive instance type, as that decreases the time it takes to convert the system…and it will only cost $4-20 for the whole effort.

Does CloudEndure support migrating from one AWS account to another? – It does not obviously state this, but if you specify the other source account as “Other Infrastructure,” you should be able to migrate EC2 systems (compute and SQL Server) between accounts pretty easily.

How does CloudEndure compare to using Commvault or Zerto? – Working in infrastructure sometimes means you wish you didn’t have to provision tons of infrastructure to make things work. The key differences are that less source infrastructure is required and that CloudEndure is a SaaS service. Although Zerto and Commvault both use OS-level agents, you still need to provision source infrastructure for each region you’re in. Imagine you have 5 regions that will DR to 1 centralized region. Would you rather have 1 server or 5 servers to do the same thing?

How do I ensure the target system connects to the AD server? – Make sure to add the destination DNS server, disable Network Level Authentication, and restart the system. Restarting the system ensures it connects to the destination DC rather than the source DC.

Could this whole process be done manually? Of course, but it’s a bit slower and not automated…ha! I have not tested it, but it is perhaps worth trying to automate:

  1. Stop instance (and poll to wait for it to stop)
  2. Create a Snapshot and Create an AMI from it (and poll to wait for image to complete)
  3. If encrypted with a Customer Managed Key (CMK), share the key. If encrypted with the built-in key, copy the image to a shared CMK (you can’t share the aws/ebs key). Wait for the copy to complete.
  4. Record the ImageID
  5. Share AMI image and snapshot
  6. Record relevant information that may be important (availability zone, instance size, IP address, tags, etc.)
  7. Switch role to other account
  8. Convert any relevant information captured in step 6 (e.g. desired subnet)
  9. Launch instance from ImageID
  10. Apply tags
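I have not tested this end to end, but a minimal boto3 sketch of steps 1-5 might look like the following. The instance ID, KMS key ARN, and target account ID are placeholders, and error handling plus the step 6 tag capture are trimmed for brevity:

```python
# Hypothetical sketch of steps 1-5: stop, image, re-encrypt to a shareable CMK, share.
# Assumes boto3 credentials for the source account; all IDs are caller-supplied placeholders.

def share_instance_image(instance_id, cmk_arn, target_account, region="us-east-1"):
    import boto3  # imported inside so the sketch can be read without boto3 installed
    ec2 = boto3.client("ec2", region_name=region)

    # 1. Stop the instance and poll until it is stopped
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # 2. Create an AMI and wait for the image to complete
    image_id = ec2.create_image(InstanceId=instance_id, Name=f"migrate-{instance_id}")["ImageId"]
    ec2.get_waiter("image_available").wait(ImageIds=[image_id])

    # 3. Re-encrypt to a customer-managed key (the default aws/ebs key cannot be shared)
    copied_id = ec2.copy_image(
        SourceImageId=image_id, SourceRegion=region,
        Name=f"migrate-{instance_id}-cmk", Encrypted=True, KmsKeyId=cmk_arn,
    )["ImageId"]
    ec2.get_waiter("image_available").wait(ImageIds=[copied_id])

    # 4-5. Share the copied AMI with the target account
    ec2.modify_image_attribute(
        ImageId=copied_id,
        LaunchPermission={"Add": [{"UserId": target_account}]},
    )
    return copied_id  # record this ImageID for the target-account side
```

The target account would then switch roles, translate subnets/tags, and launch from the returned ImageID (steps 7-10).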

If you are dealing with Windows instances, you will likely also need to fix the static routes to the 169.254.169.* endpoints, as they are created during deployment based on the original VPC configuration.

source: cloudnewbie1 on Reddit of all places….

This article explains it better: https://aws.amazon.com/blogs/security/how-to-create-a-custom-ami-with-encrypted-amazon-ebs-snapshots-and-share-it-with-other-accounts-and-regions/

Tips on Using Power Virtual Agents ( PVA )

Last updated: 11/18/2020 ( Geesh! )

Here are some tips that may help you when using Power Virtual Agents.

Is there a way to bypass certain restrictions on urging users to submit a ticket? – If your bot allows users to submit a ticket, one thing to know is that you get to the point of needing synonyms, as using the word “ticket” in your qnamaker.ai bot will confuse the bot in the long run, and it will no longer be able to answer, say, its /ticket command or intent. I had to host images on Azure and then put the word “ticket” in the image itself to get past this limitation.

What is the easiest way to implement a knowledge base in PVA? – Use qnamaker.ai through Microsoft Power Automate. As of 10/24/2020, there is a pre-existing flow to do this, but it does not currently have the multi-turn feature implemented. Once the Adaptive Card feature is implemented, we should be able to program the “click a button to type it into the window” kind of functionality.

Is there an easier way to manipulate JSON in Microsoft Power Automate? – Not really, but if you already know PowerShell…then you can use Azure Functions as an HTTP call from Power Automate (P.A.) to do the proper manipulation. It is pretty much free if you’re not going to call the functions millions of times.

Is Power Virtual Agents in Teams less costly? – Yes. It only allows you to use the bot in Teams, and only with services that are within the Microsoft Power ecosystem. Great for Q&A bots that answer basic questions, not great for advanced bots that allow the user to complete a workflow.

What is one way to implement custom workflows that involve on-premises integration with your bot? – Typically the way is a custom connector connected through the on-premises data gateway, but you need to know how to set up a custom API to make it all work. I was able to use a product called PowerShell Universal API by Ironman Software to create a PowerShell-based API…as I already know how to use PowerShell and install software.

Which topic should be used with QnAMaker? – The Fallback topic. As the Fallback topic corresponds to anything that is not recognized as a topic/intent in PVA, it can then respond to all the questions in QnAMaker.

Are adaptive cards or directly inserted images supported in PVA? – Sadly, they are not 🙁 Don’t get me started on not being able to change the colors either….

Tips on using Microsoft Bot Framework Composer

I do not see a lot of people documenting their experiences using the Microsoft Bot Framework Composer. I wanted to present some tips that will help with creating a bot with most of the functionality that may be used in a customer service scenario.

  1. What are the acceptable user input properties? – You cannot simply name a scope $name unless you initialize it. You have access to the existing user or dialog scopes. E.g., user.name, user.age, and user.location are acceptable scopes that can be used. Variables can then be used by referencing them in the following manner: ${variable_name}. In this case, the variable can be referred to later as ${user.problem}.

2. How do you use luis.ai (known as the LUIS recognizer) to differentiate between QnAMaker intents and intents that you have manually defined in Composer? – The first thing to configure is the recognizer: ensure it is of the type LUIS.

Add a new trigger corresponding to the unrecognized intent and connect it to your QnAMaker knowledge base. This way, when utterances or “trigger keywords” mentioned by the user are not recognized and there is no condition associating them with a certain intent, they will trigger the answers in your QnAMaker knowledge base instead of the recognized intents in Composer.

Now that we have defined the “unknown” intents that should route to your QnAMaker knowledge base, we will define the “known” intents that we have set up as workflows in Composer. We now need to associate the luis.ai intents so that they can trigger one of our known intents in Composer. The way we do that is by defining the intent score condition, which is trained by the utterances in Composer. The format is: #INTENT_Name.score {comparator} score_value. So it requires that you place a hashtag in front of the name of the intent and then compare it against the score that you see in luis.ai when training the model for a percent accuracy score.

Ex. If I am using an intent called Help, and I train it to have an accuracy of greater than 90%, I would use #Help.score > 0.90, as the percentage is expressed as a numerical value.

To put it all together, here are some screenshots that associate the “known” intent of submitting a ticket as the trigger and the “BeginDialog” action to route it to the known intent. In this case, it is using a LUIS-type recognizer. I have programmed it so that when the user types the word “ticket,” the ticket dialog is triggered, as LUIS recognized the trigger words as having a 93% (0.93) score of likely relating to the submit_ticket trigger. The condition for the trigger in this case is #submit_ticket.score > 0.92.

3. What are some ways in which I can troubleshoot conflicting intents between QnAMaker and intents defined within Composer? – Generally, you will want to make sure all of your “known” intents outside of QnAMaker do not conflict with each other, and that when you mention certain known triggers, they always and consistently route to the intent you think they should. Once that is troubleshot, you can then enable the QnAMaker unrecognized intent and make sure that does not conflict either.

4. Is the dispatch method supported in Bot Framework Composer? – Yes, but doing it the way above is much easier to understand, and there does not seem to be a GUI way to do it in Composer as of yet. Just keep testing your triggers and intents in the Bot Framework Emulator.

5. What is one way to test my chatbot without deploying services to Azure? – Use the Bot Framework Emulator application. It may require you to sign up for Azure Bot Service, luis.ai, and qnamaker.ai accounts.

6. Is there a way to invoke Power Automate workflows directly from Composer? – Compared to the Microsoft Power Virtual Agents product, you would simply invoke an HTTP POST API call from Composer to a workflow in Power Automate that can be triggered through an HTTP POST request.

7. What is one way to run an on-premises API in a customized manner? – This definitely requires some ingenuity, but one way it can be accomplished is by using the Logic Apps data gateway, which allows you to install an application on one of your servers so that it can be invoked by a workflow from Logic Apps (think of it as an older and more mature brother of Power Automate). From there, you would set up an Azure HTTP workflow to invoke a command from Jenkins. As creating an API endpoint can be pretty hard, it is much easier to use Jenkins to expose your PowerShell script as a REST API endpoint and go from there.

8. How can I train or better associate certain utterances/trigger keywords with particular intents? – One way to directly change the trigger keywords and get the recognition score immediately is to go to luis.ai to “train” your model; once you are satisfied, publish your model in production mode. In luis.ai, click on your model and then on Intents. Click the intent you want to train and add more keywords until you are satisfied. Click Train, and you should see the training score get higher. You can then click Test in the upper right-hand part of the window to test the trigger words and see the LUIS recognition score they receive.

9. Is there a way to simply format a response with a certain formatted card? – This page does it more justice than I can explain. Generally, cards refer to templates that are defined in the .lg file, and that is how the format template is created. It can be found in the template tab; edit the template accordingly.

10. How do you publish your Composer app directly from Composer? – By clicking on the Publish tab.

Cost Optimization for 1 scenario when backing up to AWS for CommVault Backups

What is one way to optimize costs? – You can measure how many copies of the data need to be stored per retention standard. With full deduplication, a copy measure of approximately 1.XX is a good measure of efficiency. In addition to the copy measure, you can always measure the cost in TB/month. The lower the cost relative to other storage classes, the more efficiently you are doing your backups.

Ex. Our retention standard is 15 months. To minimize copies, you can create a new backup set in which backups occur on the following schedule: 1x full every 15 months, retained for 30 months. The reason the full needs the longer retention is that if the full copy were deleted after 15 months, the differential chain would no longer work. As differential backups are cumulative backups of data, the differentials can have a retention standard of only 15 months. This way, it is optimized.

Ex. Our retention standard is 15 months. To be even more efficient and store only 1 copy of the full, you can create 1x full every 30 months with a retention standard of 30 months.
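To make the two examples concrete, here is a small, illustrative calculation of the worst-case number of full copies retained at any one time, which is what drives the copy measure:

```python
import math

def max_full_copies(full_interval_months, full_retention_months):
    """Worst-case number of full backups retained at once, given how often
    a full is taken and how long each full is kept."""
    return math.ceil(full_retention_months / full_interval_months)

# 15-month retention standard, full every 15 months kept for 30 months:
print(max_full_copies(15, 30))  # 2 fulls coexist (plus the differential chain)

# Full every 30 months kept for 30 months: only 1 full is ever retained
print(max_full_copies(30, 30))  # 1
```

The second schedule halves the full-copy overhead, at the cost of a much longer differential chain.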

As always, there are tradeoffs in using any approach.

Which tier is the Cheapest when it comes down to Cloud Storage Backups / Archives on AWS?

If you are okay with backups being restored within 12 hours, then you can consider the following. If the minimum size of a Commvault backup is its full’s size, then that is the easiest way to calculate cost over the long term; but there are additional tricks you can use in the cloud to spend even less.

Scenario: I would like to retain backups for 12 months in us-east-1. Which backup tier will cost the least, if the cost of the S3 tiers from most expensive to least expensive per TB/month is $24, $12, $4, and $1 (S3, S3 IA, Glacier, Glacier D.A.)? If we are using deduplication/synthetic fulls in this scenario, over 12 months the most ideal tier is S3 IA. As archive tiers cannot combine fulls through synthetic fulls, we simply have to store one full for each month of retention.

Ex. S3 = $24 x 1 (syn. full), S3 IA (syn. full) = $12 x 1, S3 Glacier = $4 x 12, S3 Glacier D.A. = $1 x 12. $12 from S3 IA is the same as S3 Glacier D.A., but it makes more sense to choose the tier that allows for faster restores. In this case, it would cost $12/TB a month, plus running the MediaAgent for about 7 days a month, for a grand total of Cost = $20 (M.A. cost for stores of less than 50 TB) + $12 x the number of TB.
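The arithmetic above can be sketched as follows. The tier prices are the rounded per-TB/month figures from the scenario, and the $20 MediaAgent figure is the small-store assumption from the example:

```python
# Monthly storage cost per TB over a 12-month retention, per tier.
# Archive tiers cannot consolidate synthetic fulls, so they hold 12 fulls.
TIERS = {
    "S3":         (24, 1),   # ($/TB/month, fulls retained)
    "S3 IA":      (12, 1),
    "Glacier":    (4, 12),
    "Glacier DA": (1, 12),
}

def monthly_cost_per_tb(tier):
    price, fulls = TIERS[tier]
    return price * fulls

for tier in TIERS:
    print(tier, monthly_cost_per_tb(tier))
# S3 24, S3 IA 12, Glacier 48, Glacier DA 12 -> S3 IA ties Glacier DA
# but restores far faster, so it wins.

def total_monthly_cost(tb, media_agent=20):
    # Winning tier (S3 IA) plus the ~7-days-a-month MediaAgent assumption
    return media_agent + monthly_cost_per_tb("S3 IA") * tb
```

For example, `total_monthly_cost(1)` gives $32/month for a 1 TB store.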

Scenario 2: It’s true that S3 IA can be the least expensive, but can you go even lower than that? – Yes, you can. If you use 1 backup set with faster restores for local retention and simply need a solution that allows long-term retention for compliance reasons, you could in theory just use the Commvault combined S3 IA/Glacier tier to get costs pretty low. You could get away with just having 1x fulls in Glacier D.A. and get a cost factor of nearly 1.10x in the States and about 2.10x everywhere else.

Ex. I store 10 TB of data. My costs could be as low as $10/month for storage, plus $20 to have the Commvault MediaAgent on for only 7 days in the whole month, for a total cost of $30/month to store the data. I would create 1 full per year, and the remaining months would use incremental backups.

Troubleshooting AWS AD Connector for AWS Chime

In troubleshooting the AD Connector, I learned the following:

The primary requirements are:

- Open ports: 53, 88, 389 (TCP/UDP)

- A service account that is contained within that domain (multi-forest configuration is not supported by the AD Connector)

- A firewall rule that allows the system to connect to the DNS server

- A firewall rule that allows the system to connect to any domain controller in that domain

Other things to consider when migrating a domain from one account to another to make it work with AWS Chime:

- Only one domain of your forest and AD Connector directory service can be configured in AWS Chime. If you are using 1 e-mail domain worldwide but have 1 AD domain in each of 4 regions, you would have to use 4 e-mail domain addresses as proxy addresses for those users in order to authenticate them worldwide.

- The documentation mentions that either the EmailAddress attribute or the proxyAddresses attribute can be used for that domain of the account. When it comes to migrating to another account, you cannot use the proxyAddresses approach on the user’s primary account, as the domain has already been claimed as active on the account you are trying to migrate away from. You must delete the domain from the old Chime account to make sure there are no conflicts in using the proxy approach.

Since the AD Connector can only be provisioned in AWS, when the service account queries for the IP address of a domain controller in that domain, it will be given the IP address of any domain controller in that domain, even if there is a perfectly suitable domain controller in the same subnet as the AD Connector. Since the domain controller it queries is random, creating constrained firewall rules is harder. What worked for us in this case was to temporarily open up the firewall rules so that the connection is not…random, and to give it a chance to actually connect successfully. Once it successfully connects, we can think about limiting the firewall rules at that point. Logically, we sometimes think it should route to the closest available system for its configuration, but sometimes the program just thinks: “I’m just gonna use any system, as it is valid in your list!!!!”

AWS Networking: VPC Peering Vs. Resource Access Manager

Some of the interesting features of AWS appear when you are trying to share resources between different regions and different accounts. The following are some of the scenarios in which you would use VPC Peering, Resource Access Manager (RAM), or both.

Assuming that you have a centralized resource that you would like to share:

  1. Is the resource and the destination to share in the same region in different accounts? – AWS RAM
  2. Is the resource and destination to share in a different region? – VPC Peering
  3. Is the resource and the destination to share in different accounts in the same region? – AWS RAM
  4. If RAM is used to share the subnet/network with a child account and the resources are in different regions, you will generally need to: connect the resource’s VPC from the source region to the destination region using VPC Peering, and then use RAM to share the subnet with the child account in the same region.

Break Even point for using LTO6 Tapes compared to Cloud Archiving Systems using CommVault

Simplistic assumptions: your physical agents with support cost $12.5K over 5 years, and your HP MSL 4048 tape array costs $15.7K over 5 years. Each LTO tape costs $25. Iron Mountain costs are assumed to be $1 per tape, including the monthly Iron Mountain service costs.

Simply put: if you have 1 array, your break-even point against the majority of cloud-based systems, which price their archive tiers at about $1-4/TB/month, is at least 500-600 TB backed up to make it worth it. That does not include the labor costs of having to physically load and unload a tape system. If you have 3 offices, your break-even point would be 300-400 TB at the low end and about ~1.5 PB at the high end to be cost-competitive with cloud-based systems.
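The break-even math is rough, but it can be sketched as follows. The LTO6 native capacity of 2.5 TB and the reading of Iron Mountain as $1 per tape per month are my assumptions; under different assumptions the break-even figures shift considerably:

```python
# Fixed 5-year tape costs from the assumptions above: agent + one tape array.
FIXED = 12_500 + 15_700            # $28,200 over 60 months
TAPE_TB = 2.5                      # assumed LTO6 native capacity per tape
TAPE_COST = 25                     # $ per tape
IRON_MOUNTAIN = 1                  # assumed $ per tape per month
MONTHS = 60

def breakeven_tb(cloud_per_tb_month):
    """TB at which 5-year tape cost equals 5-year cloud archive cost."""
    tapes_per_tb = 1 / TAPE_TB
    tape_variable = tapes_per_tb * (TAPE_COST + IRON_MOUNTAIN * MONTHS)  # $34/TB
    cloud_variable = cloud_per_tb_month * MONTHS
    return FIXED / (cloud_variable - tape_variable)

print(round(breakeven_tb(2)))   # ~328 TB at $2/TB/month
print(round(breakeven_tb(4)))   # ~137 TB at $4/TB/month
```

The cheaper the cloud archive tier, the more TB you need on tape before the fixed hardware spend pays off.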

Considerations when Backing up and Archiving in the Cloud

Here are some things to consider when backing up and archiving data in the cloud.

  1. Cost to Store the Data – generally you will use the following formulas to calculate your costs on a monthly, yearly, or full-lifecycle basis.
    • Cost per TB = [Cost per GB] x 1,024 (or x 1,000 if the provider prices in decimal TB)
    • Cost per Year = [Cost per TB per Month] x 12
    • Full Lifecycle Cost (5 years) = [Cost per TB per Month] x 60
  2. Cost to Restore the Data – Generally there is a Restore cost and a data transfer cost, depending on where it is restored.
    • Ex. In AWS, it costs $0.09/GB to restore over the internet and $0.04 over a VPN. If stored in the S3 IA tier, it costs $0.01/GB to retrieve. In total, it will cost $0.05/GB to restore through a VPN and $0.10/GB through the internet.
  3. Time to Restore Data – Generally, the more expensive the tier the data is stored in, the less time it takes to restore said data. If you are comparing against restore times from tape/using Iron Mountain, restore times within 1 day are usually acceptable. If you bemoan the thought of having to manually restore data for users who think it grows on trees, you generally will want active restore tiers and not glacial/archive tiers.
    • AWS has the S3 IA tier, which is ideal for immediate restores for archiving purposes. Generally, data restore requests fall off after 6 years. It also costs 12x as much as S3 Glacier Deep Archive.
    • AWS has the S3 Glacier Deep Archive tier, which is optimized for storing data that is normally stored on tape. As it costs only $1/TB/month, use it as your eventual storage tier. Standard restore allows for restores within 12 hours, while bulk restore allows for restores within 48 hours.

4. Advanced – Storage Tiering to Optimize Long-Term Costs – Generally, time the archive tiering to match the expected frequency of restores. For archiving purposes, you could store your data in the following way. As data restore requests normally fall off after 6 years, your archiving rules could A) archive data after 2 years, B) store it in S3 IA for 3 years, C) store it in Glacier for another 2 years, and D) transition it to S3 Glacier Deep Archive for the rest of the time the data is usable.
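As an illustration of item 4, using the per-TB/month tier prices from the earlier S3 scenario (my rounded assumptions), the archive-phase cost per TB of the suggested schedule works out to:

```python
# Per-TB/month prices assumed from the earlier scenario.
PRICES = {"S3 IA": 12, "Glacier": 4, "Deep Archive": 1}

# Suggested schedule after the 2-year active period: 3 years in IA,
# 2 years in Glacier, then Deep Archive for (say) a final 2 usable years.
schedule = [("S3 IA", 36), ("Glacier", 24), ("Deep Archive", 24)]

archive_cost_per_tb = sum(PRICES[tier] * months for tier, months in schedule)
print(archive_cost_per_tb)  # 12*36 + 4*24 + 1*24 = 432 + 96 + 24 = 552
```

Keeping that same TB in S3 IA for all 84 months would instead cost $1,008, so the tiering schedule roughly halves the archive-phase spend.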

AWS Billing, Oh My!

Here are some of my thoughts on the use of Reserved Instances, Savings Plans, and the like.

When should I use a Reserved Instance? – Once the on-demand utilization of your computing instance exceeds 33% of the year (about 4 months for muggles) or meets 50-70 percent of your base computing needs on a yearly basis.

Are Reserved Instances, in a multi-tenant environment, truly enforceable with no or partial upfront costs? – The answer is sadly…no. Although a user or tenant might agree to have a Reserved Instance for the year, in an AWS Organizations scenario, if they simply delete their computing instance, there is no financial tagging mechanism to really follow through with that commitment to the end. The better and more enforceable mechanism is a convertible, all-upfront purchase, in which, if they do decide to stop using it, other users/agencies can benefit, as it should really be a “use it or lose it” scenario. Not a “let’s get married, and I might cheat later” kind of scenario.

When is it really practical to use Savings Plans? – As they don’t really define a particular minimum, they really only make sense when you have a couple of machines that share the same computing instance family, and they make it a bit easier to shrink or grow your instances, e.g., instead of 1 machine at 36 vCPUs, perhaps 2 machines at 18 vCPUs, etc.

Does it really make sense to use a Volume Gateway with AWS Storage Gateway and Commvault? – If you are rolling in money and can’t be bothered because it is annoying, it might be worth it at that point. Otherwise, does it really make sense to pay $200+ to retrieve 1 file from, say, a whole 3+ TB “virtual tape”? Not really! Stick with using the File Gateway, as S3 is more financially efficient in this case.

Is there a point in deploying an AWS machine when you don’t require external access or Infrastructure as Code? – Yes, spend more money than you have to in order to have a hipster and expensive machine that you are renting.

Can you manage systems in AWS like you do with traditional infrastructure? – Why not? You didn’t care when it was hosted onsite; just spend 2-3x more to host it in AWS. Although, if you don’t like to move, it’s definitely worth it at that point!

What if the number of running EC2 instances exceeds my Reserved Instances? – The excess is magically on-demand, and you are overpaying for it.

What is the best-case scenario for sharing your Reserved Instances in an AWS Organizations and Resource Access Manager scenario? – If you implement an AWS Service Control Policy (SCP) to limit what types of EC2 instances can and cannot be deployed in an account, a Reserved Instance or Savings Plan can definitely help.

What if I got 1 Reserved Instance, which saves 70 percent, for, say, 3 on-demand instances? – Well, instead of paying 300 percent, you are now paying 230% for the same 3 machines.
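The arithmetic: with each on-demand machine counted as 100%, one RI at a 70% discount covers one machine at 30% while the other two stay at 100% each:

```python
def blended_percent(machines, reserved, discount_pct=70):
    """Total cost as a percent of one on-demand machine, when `reserved` of
    `machines` instances are covered by an RI at `discount_pct` percent off."""
    covered = reserved * (100 - discount_pct)
    uncovered = (machines - reserved) * 100
    return covered + uncovered

print(blended_percent(3, 1))  # 30 + 200 = 230 percent instead of 300
```

One discounted machine out of three only shaves the bill from 300% to 230%; the savings scale with how many machines the reservations actually cover.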