Break-Even Point for Using LTO-6 Tapes Compared to Cloud Archiving Systems Using CommVault

Simplistic Assumptions: your Physical Agents with Support cost $12.5K over 5 years and your HP MSL 4048 Tape Array costs $15.7K over 5 years. Each LTO tape costs $25. Iron Mountain costs are assumed to be $1 per tape, which includes the monthly Iron Mountain service costs.

Simply: if you have 1 array, compared to the majority of cloud-based systems, which price their archive tiers at about $1–4 / TB / month, you would have to back up at least 500–600 TB for tape to break even. That figure does not include the labor costs of physically loading and unloading a tape system. If you have 3 offices, your break-even point would be roughly 300–400 TB at the low end and about ~1.5 PB at the high end to stay cost-competitive with cloud-based systems.
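The break-even arithmetic above can be sketched in a few lines of Python. The dollar figures come from the assumptions above; the 2.5 TB native capacity per LTO-6 tape is my own added assumption:

```python
import math

# 5-year fixed costs from the assumptions above (hypothetical figures)
AGENT_COST = 12_500      # physical agent + support, over 5 years
ARRAY_COST = 15_700      # HP MSL 4048 tape array, over 5 years
TAPE_COST = 25           # per LTO-6 tape
IRON_MOUNTAIN = 1        # per tape, includes monthly service costs
TAPE_CAPACITY_TB = 2.5   # assumed LTO-6 native capacity

def tape_cost_5yr(tb: float) -> float:
    """Total 5-year cost of backing up `tb` terabytes to tape."""
    tapes = math.ceil(tb / TAPE_CAPACITY_TB)
    return AGENT_COST + ARRAY_COST + tapes * (TAPE_COST + IRON_MOUNTAIN)

def cloud_cost_5yr(tb: float, per_tb_month: float) -> float:
    """Total 5-year cost of the same data in a cloud archive tier."""
    return tb * per_tb_month * 60  # 60 months in 5 years

def break_even_tb(per_tb_month: float) -> int:
    """Smallest TB figure at which cloud stops being cheaper than tape."""
    tb = 1
    while cloud_cost_5yr(tb, per_tb_month) < tape_cost_5yr(tb):
        tb += 1
    return tb
```

At $1 / TB / month this lands just under 570 TB, consistent with the 500–600 TB range above; at $4 / TB / month the break-even drops to well under 200 TB.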

Considerations when Backing up and Archiving in the Cloud

Here are some things to consider when backing up and Archiving Data in the Cloud

  1. Cost to Store the Data – generally you will use the following formulas to calculate your costs on a monthly, yearly, or full-lifecycle basis.
    • Cost per TB = [Cost per GB] x 1024
    • Cost per Year = [Cost per TB per Month] x 12
    • Full Lifecycle Cost (5 years) = [Cost per TB per Month] x 60
  2. Cost to Restore the Data – Generally there is a Restore cost and a data transfer cost, depending on where it is restored.
    • ex. In AWS, data transfer out costs $0.09 / GB when restored over the internet and roughly $0.04 / GB when restored over a VPN. If the data is stored in the S3 Standard-IA (Infrequent Access) tier, retrieval adds $0.01 / GB. In total, it costs roughly $0.05 / GB to restore through the VPN and $0.10 / GB through the internet.
  3. Time to Restore Data – Generally, the more a restore costs, the less time it takes. If you are comparing against restore times from tape / Iron Mountain, restore times within 1 day are usually acceptable. If you bemoan the thought of manually restoring data for users and believe money grows on trees, you generally want active restore tiers rather than Glacier / Archive tiers.
    • AWS has the S3 Standard-IA (Infrequent Access) tier, which is ideal for immediate restores for archiving purposes. Generally, restore demand falls off after 6 years. It also costs about 12x as much as S3 Glacier Deep Archive.
    • AWS has the S3 Glacier Deep Archive tier, which is optimized for data that would normally be stored on tape. As it costs only about $1 / TB / month, use it as your eventual storage tier. Standard retrieval allows for restores within 12 hours, while Bulk retrieval allows for restores within 48 hours.

  4. Advanced – Storage Tiering to Optimize Long-Term Costs – Generally, time the archive tiering to the expected frequency of restores. As restore demand normally falls off after 6 years, your archiving rules could A) archive data after 2 years, B) store it in S3 Standard-IA for 3 years, C) store it in Glacier for another 2 years, and D) transition it to S3 Glacier Deep Archive for the rest of the time the data is usable.
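A small sketch of what such a tiering schedule costs per TB. The per-tier prices here are my own rough assumptions (Standard-IA ~$12.5 / TB / month, Glacier ~$4, Deep Archive ~$1), not quotes:

```python
def lifecycle_cost(tb: float, schedule) -> float:
    """Cost of `tb` terabytes across a schedule of (months, usd_per_tb_per_month) stages."""
    return sum(tb * months * price for months, price in schedule)

# Tiering per the rules above: 3 years Standard-IA, 2 years Glacier,
# then 5 years Deep Archive (stage lengths and prices are assumptions).
tiered = [(36, 12.5), (24, 4.0), (60, 1.0)]

# For comparison: parking the same data in Standard-IA for all 10 years.
ia_only = [(120, 12.5)]
```

With these assumed prices, 1 TB costs $606 over 10 years under the tiered schedule versus $1,500 if left in Standard-IA — which is the whole argument for timing the tier transitions.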

AWS Billing, Oh My!

Here are some of my thoughts on the use of Reserved Instances, Savings Plans, and so on.

When should I use a Reserved Instance? – Once the on-demand utilization of your computing instance exceeds 33% of the year (about 3 months for muggles) or covers 50–70 percent of your base computing needs on a yearly basis.
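That 33% rule of thumb can be checked with a quick sketch. The hourly rate and RI price below are made-up illustrative numbers, not AWS pricing:

```python
HOURS_PER_YEAR = 8760

def ri_breaks_even(od_hourly: float, ri_yearly: float, hours_used: float) -> bool:
    """True if a 1-year Reserved Instance is cheaper than paying
    on-demand for the hours you actually use."""
    return ri_yearly < od_hourly * hours_used

def break_even_utilization(od_hourly: float, ri_yearly: float) -> float:
    """Fraction of the year you must run on-demand before the RI wins."""
    return ri_yearly / (od_hourly * HOURS_PER_YEAR)
```

For a hypothetical $0.10/hr instance with a $290/yr RI, the break-even utilization works out to roughly 33% of the year — i.e., about 3 months of continuous use, matching the rule of thumb above.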

Are Reserved Instances, in a multi-tenant environment, truly enforceable with no or partial upfront costs? – The answer is sadly… no. A user or tenant might agree to keep a reserved instance for the year, but in an AWS Organizations scenario they can simply delete their computing instance, and there is no financial tagging mechanism to hold them to the commitment. The better and more enforceable approach is a convertible, all-upfront purchase: if they decide to stop using it, other users / agencies can benefit, as it should really be a “use it or lose it” scenario. Not a “let's get married, and I might cheat later” kind of scenario.

When is it really practical to use Savings Plans? – As they don't define a particular minimum, they really only make sense when you have a couple of machines that share the same computing instance family, making it a bit easier to shrink or grow your instances. ex. instead of 1 machine at 36 vCPUs, perhaps 2 machines at 18 vCPUs, etc.

Does it really make sense to use a Volume Gateway with AWS Storage Gateway and Commvault? – If you are rolling in money and can't be bothered, it might be worth it at that point. Otherwise, does it really make sense to pay $200+ to retrieve 1 file from a whole 3+ TB “Virtual Tape”? Not really! Stick with the File Share Gateway, as S3 is more financially efficient in this case.

Is there a point in deploying an AWS machine when you don't require external access or Infrastructure as Code? – Yes, if you enjoy spending more money than you have to on a hipster and expensive machine that you are renting.

Can you manage systems in AWS like you do with traditional infrastructure? – Why not? You didn't care when it was hosted onsite, so just spend 2–3x more to host it in AWS. Then again, if you don't like to move, it's definitely worth it at that point!

What if the number of running EC2 Instances exceeds my Reserved Instances? – The excess is magically billed on-demand and you are overpaying for it.

What is the best-case scenario for sharing your Reserved Instances in an AWS Organizations and Resource Access Manager scenario? – If you implement an AWS Service Control Policy (SCP) to limit which EC2 instance types can and cannot be deployed in an account, a Reserved Instance or Savings Plan can definitely help in this case.

What if I get 1 Reserved Instance, which saves 70 percent, for say 3 on-demand instances? – Well, instead of paying 300 percent, you are now paying 230% for the same 3 machines.
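The 230% figure above is easy to verify with a one-liner-ish sketch (costs expressed as a percentage of one on-demand instance-year):

```python
def fleet_cost_pct(n_instances: int, n_reserved: int, ri_discount: float) -> float:
    """Total fleet cost as a percentage of a single on-demand instance-year.
    Reserved instances pay (1 - discount); the rest pay full on-demand."""
    reserved = min(n_reserved, n_instances)
    on_demand = n_instances - reserved
    return reserved * (1 - ri_discount) * 100 + on_demand * 100
```

One RI at a 70% discount covering 1 of 3 machines: 30% + 100% + 100% = 230%, as stated above.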

Using an IR Probe on a BigTreeTech SKR 1.4 Board

Sometimes the best solutions… require out-of-the-box thinking. Today, I was working on my 3D printer, trying to set up the IR probe as the Z-min endstop. After running the M119 command (get endstop status), I noticed that it would never trigger, even when the red LED status was on. The problem I was having was, “How come my Z-endstop IR probe will not trigger?” Directly from the Mini Height Sensor board documentation, it mentions:

So… it says something about installing a pull-up resistor. Since you would have to wait a week for shipping — and you can't buy one from RadioShack, as they went bankrupt back in the day — one solution that was posed was to change the pin-out to a port I would likely never use. In this case I will simply move the pins to a port that gets around the missing 10K resistor…

and mainly change the pins in my pins_BTT_SKR_V1_4.h file to use P1_25 instead of the original P1_27 from the documentation. That of course means I have to plug it into a different port… but it beats soldering!
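For reference, the edit in pins_BTT_SKR_V1_4.h might look something like this. The macro name is an assumption based on Marlin's pin-definition convention — check it against your own copy of the file:

```c
/* pins_BTT_SKR_V1_4.h (Marlin) — sketch of the re-mapping, not a verbatim diff */
//#define Z_MIN_PIN  P1_27   /* original port from the documentation */
#define Z_MIN_PIN    P1_25   /* re-mapped port used as the workaround */
```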

Re-compiled the firmware… and presto!

As the Z-endstop state showed TRIGGERED instead of open, I just have to invert my Z-endstop logic state:

How do I find the Certificate Chain on a Third-Party, Private GoDaddy SSL Certificate?

The problem I was having today was… what the hell is a certificate chain? As I just touched AWS Certificate Manager today, I want to share one solution for importing SSL certificates into AWS Certificate Manager.

  1. Go to AWS Certificate Manager and click on Provision Certificates

  2. Click on the blue heading at the top – Import a Certificate

  3. Here is the question of the day – how do I fill it out?

Certificate Body – The contents of the .crt file, .pem file, or the contents between BEGIN CERTIFICATE and END CERTIFICATE. Note this may not be the same BEGIN CERTIFICATE…END CERTIFICATE block as in your original request.

Certificate Private Key – The private key corresponding to your certificate. The vendor will not provide this to you; it's something you saved already, right?

Certificate Chain – Since we got our cert from GoDaddy, I looked up the keyword ‘GoDaddy Certificate Chain’ and got this wonderful page:

If you open the .crt file, you will notice it mentions Starfield Certificate Authority – G2. In this case, we will use the Starfield Certificate Bundle – G2 and copy and paste the contents of that .crt file into our Certificate Chain area.

Amazon AWS Governance Reminders and Billing

Tips and tricks to help with administering an AWS instance and getting infrastructure paid for in a clear way.

FAQS / Tips

  1. Account Separation and Billing – the most important thing to do in AWS is to separate accounts by who in the organization is willing to pay for said infrastructure. If you have a developer group, they get one account. If a group is on another continent, they get another account, etc. If you don't know how to charge back costs, at least they can be paid back at an account level.

2. Use Tagging to Allocate Billing by Tags – Whether done by project or organization code, anything that can cost a significant amount of money should carry a tag. AWS Config can be used to ensure all relevant resources are tagged properly.

3. Shut Down All Compute Resources After 2 Hours if They Do Not Have the Correct Tagging – Especially for unapproved resources, create a no-tolerance policy on tagging. Shut down all resources that do not follow the tagging mandate.
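A sketch of the bookkeeping behind such a policy. In a real deployment the instance records would come from an EC2 API listing; the required tag keys and the record shape here are assumptions for illustration:

```python
REQUIRED_TAGS = ("Project", "Owner")   # assumed mandatory tag keys
GRACE_HOURS = 2                        # grace period from the rule above

def instances_to_stop(instances, required_tags=REQUIRED_TAGS):
    """Given records like {'id': ..., 'tags': {...}, 'hours_running': n},
    return the ids that outlived the grace period without every required tag."""
    return [
        inst["id"]
        for inst in instances
        if inst["hours_running"] > GRACE_HOURS
        and not all(key in inst.get("tags", {}) for key in required_tags)
    ]
```

The actual stop call would then be issued against the returned ids through your tooling of choice.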

4. Limit Accounts, Especially If Using AWS Organizations, With Minimal Service and Region Policies – Implement guardrails that prevent users from accessing IAM and limit them to known services, known deployment regions, and known subnets within the VPC.

5. Implement Guardrails That Prevent Users From Disabling CloudTrail and CloudWatch at the AWS Organization Level – As CloudWatch monitors resources and CloudTrail monitors API actions, it is most important to ensure these services are not turned off.

6. Mandatory Key Rotation – Keys are associated with users, and if those users are terminated, processes must be in place to replace those keys with newer keys. Encourage users to utilize Secrets Manager and make sure it is accounted for in their workflow.

7. Limit Infrastructure Provisioning to Less Expensive Types in Testing/QA Environments – Use IAM policies to limit users to deploying only the less expensive resource types in a QA/testing context.

8. Use AWS Organizations to Create an Account Hierarchy With Relevant Service Control Policies (SCPs) to Limit Access at the Account Level – Use hierarchies, similar to Active Directory GPOs, to apply organization to account groups and apply limitations at the account level.

9. Although AWS Organizations lets you define Service Control Policies at every level, it is more straightforward to apply broadly limiting policies at the root level and define finer limitations at the account level – Since every level from the account up to the root must allow a permission, it can get very confusing. Make it easier on yourself and deal with less abstraction by concentrating only on the root level and the account level. It's okay to define FullAdmin for every level in between.

10. Use Resource Groups to… Group Resources by Tag – This is mainly useful for splitting up costs by particular resources. They can also be used to apply automations to these groups of tags from Systems Manager.

11. Use Cost Allocation Tags – Generic tags are only usable for billing analysis if they have been activated in the Cost Allocation Tags menu. Otherwise they will not show up for billing purposes.

12. Depending on Project Requirements, Define a Non-Default KMS Key and Enable Encryption by Default for Each Region – This allows keys to be applied even if the admin/developer forgets to do so. Best practice is to encrypt on a project-by-project basis.

13. Deny… User Creation, Key Creation, Attaching and Detaching IAM Policies to Users, Creating Identity Providers, Creating and Deleting VPCs, Creating and Deleting Subnets, Changing IGWs or Route Tables, and Peering Connections for All Non-Admin Users – The idea is to lock down networking, authentication, and unintended sources of automation. Imagine if a hacker had to slowly provision instances through the console…

14. Deny… the Create Identity Provider IAM Permission – Not that they will care about permission federation after they have already accessed your account.

15. Deny… Resource Sharing – If you think they should not be allowed to share, well, do make sure to send a letter to their mother.

16. Use Resource Access Manager to Share Subnets From the Master Account to the Child Accounts – You can lock down networking as a whole by sharing certain networks with certain accounts.

17. Use SNS Billing Alerts – These can indicate either overspending or an unusual amount of resource utilization in your account. They should give stakeholders an idea of how much resources really cost and whether they are forecast to go over budget.

18. Require Tagging Before Instances Can Be Run or Accessed – Use IAM policies to mandate that certain tag fields be present and filled out. Otherwise, in the case of EC2, the instance will not run, and in the case of S3, the bucket cannot be accessed.

19. IAM Provides a Feature Called Service Last Accessed – Although you can see this in the console on a per-policy basis, it is probably better to script this feature to see the last-accessed dates for all relevant services. This helps with keeping users at least privilege.

20. Deny… Inline Policies – They are neither scalable nor easy to audit. Just don't allow inline policies.

21. Have an Onboarding Questionnaire – Provision only what they think they need, and give them a month to change it. It should at least get you going on the right track.

22. If Developers Are Using Git, Make Sure They Use git-secrets – It scans for credentials and the like and keeps them out of any git repository.
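To make the require-tagging idea above concrete, here is a minimal sketch of an IAM policy statement that refuses ec2:RunInstances unless a Project tag is supplied at launch. The tag key is an assumption — adapt it to your own tag schema:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRunInstancesWithoutProjectTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": { "aws:RequestTag/Project": "true" }
      }
    }
  ]
}
```

The `Null` condition with `aws:RequestTag/<key>` is the standard pattern for “this tag must be present in the request.”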

Amazon VPC’s Reminders

I honestly forget how to remember AWS networking. Here are some alternative ways of understanding the principles of networking in AWS… in a dumber but more interesting way. I would recommend training on A Cloud Guru first.

FAQs / Mnemonics / Breaking Down the Concepts

Do devices in different Availability Zones in the same VPC communicate with each other? – Yes. A VPC spans all of the AZs in a region, and the default local route lets subnets talk to each other (note that each individual subnet lives in exactly one AZ).

What makes a VPC subnet available to the public internet? – It requires a combination of an Internet Gateway (a connection to the internet) attached to the VPC and an entry in the subnet's route table with a destination of 0.0.0.0/0 and a target of the Internet Gateway's id. Alternate Thinking – Think of the Internet Gateway (IGW) as your cable modem and the route table entry as plugging your ethernet cable into the right place on your switch.
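As a sketch, a public subnet's route table ends up looking roughly like this (the CIDR and IGW id are hypothetical):

```
Destination      Target
10.0.0.0/16      local                  <- built-in route within the VPC
0.0.0.0/0        igw-0123456789abcdef0  <- everything else exits via the Internet Gateway
```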

What makes a VPC subnet not available to the public internet for hackers to attack? – As stated previously, when there is no Internet Gateway attached and no route in the subnet's route table to an Internet Gateway. Alternate Thinking – If you don't have a cable modem and no cable is connected, how can it be connected to the internet? DUH… it's so obvious.

How can I provide the internetz to my EC2 instances for patching and such without exposing them to the general internet? – Instead of adding a route to an Internet Gateway in your route table, add a route to a NAT Gateway instead (the NAT Gateway itself sits in a public subnet).

What the hell is a Security Group as it relates to VPCs? – A security group is essentially the basic Windows Firewall… defined! Like the Windows Firewall when it is turned on, it denies everyone that comes into your house, but is more than happy to kick out the very same people… kinda like my in-laws during Thanksgiving… You vet the in-laws for invitation to your network/home by giving them ports to open, like port 443 for HTTPS and port 22 for SSH, and a source, like your public IP or the addresses of the rest of the neighborhood, for access.

How are Security Groups different from NACLs? – Security groups are like a firewall at the machine / instance / virtual machine level, or they can even act as a firewall for a group of systems. As for NACLs, think of them like your curmudgeon grandfather: they don't like anyone… but if you happen to come by, they won't let go of you either, as they have to tell you all their stories from 'Nam. Since NACLs work at the subnet level and are stateless, you have to define what traffic is allowed both coming in and going out. Grandfather would like to tell you his stories of 'Nam: One time in 'Nam…

What is the purpose of a route table? – It's mainly directions for which networks a subnet is allowed to talk to, whether another subnet or some other networking component. Think of it like Google Maps for the network. Give it the wrong directions… and you will be late to your interview. Give it the right directions… you might still get lost, but at least you are going the right way.

(Sarcastically) How do I definitely ensure a hacker can access my network from the internet? – Spin up an instance in AWS, use the default password, define the correct security group rules, define the correct NACL rules, correctly attach the internet gateway, make the right route table adjustments, test to make sure it is available on the public internet, and make sure to post the address and password on Reddit!

What are some ways I can connect to my EC2 instance? – If it is in a public subnet with the right NACL and security group rules: A. By using the private SSH key you created when the EC2 instance was launched and limiting the security group source to only your public IP. B. Use a VPN such as OpenVPN that connects to the VPC network and SSH into the instance. C. Use a bastion host that lives in a public subnet as a jump box to SSH into the private instance.

If my EC2 instance is in a private subnet with no internet access… how do I get it internet access?! – Assuming you need it for things like Windows Updates or upgrading your Linux instance, you can either attach a NAT Gateway to that private subnet and add the NAT Gateway as a target in the route table… or, if you connect a site-to-site VPN to that network, the instance will not get internet directly; rather, all the traffic will route through the site-to-site VPN connection.

What is a good scenario for VPC Peering? – If the other network is not in your AWS account / AWS Organization, use VPC Peering. Company just merged with another company? VPC Peering! It works within a region and also across regions (inter-region VPC peering).

How is a static Elastic IP elastic? Can it stretch? – AWS says we should not treat our machines as pets; rather, we should treat them as cattle. For comparison purposes, let's just assume that DNS names are the same thing as IPs (they are not… but keep reading). Betsyserver01 has an IP of X. Since AWS is ruthless and does not care whether a certain IP stays associated with a certain instance/name, it will send Betsyserver01 to the glue factory — Betsyserver01 is gonna become Thanksgiving dinner — and be done with it. Once Betsyserver02 comes along, it can be given the same identity/IP as Betsyserver01. HOW BRUTAL! That re-attachability is what makes an Elastic IP "elastic."

From the guy in the mainframe department: how does an EC2 instance (a machine) access an S3 bucket (NAS / SAN)? – Sadly… this has nothing to do with networking, but rather IAM permissions… ouch! Apparently, word on the street is the machine has to play its 'Role' when meeting up with the other guy, the S3 bucket. Only while it is 'role playing' (i.e., has an IAM role attached) can it talk to the bucket. Sounds like someone drank too much water…

What are load balancers? – They distribute incoming traffic across multiple instances. If you are from traditional IT and don't have to deal with developers or even the term “DevOps,” you probably don't really need this concept yet.

What is a good scenario for subnet sharing? – If you would like to share the same network with other accounts, especially when using AWS Organizations, it allows you to create guardrails for certain accounts. A good example is allowing Account X to deploy only to a certain region and subnet.

What is a good scenario for the Transit Gateway? – As it can present certain shared resources, a good use is for one entity to own the Active Directory / VPN infrastructure and share that infrastructure within the conglomerate to the other operating companies.

What is the point of a VPC Endpoint? – Let's use AWS S3 (the general object storage service) as the example. As VPCs are all about security and limiting access to who, what, and where, the purpose of VPC endpoints is to further restrict that access to only certain networks. Alternate Thinking – Without a VPC endpoint, S3 is reached over its public endpoints by anyone with a valid AWS account/key. Tie access to a VPC endpoint, and suddenly the bucket can be restricted to that particular VPC… and is a bit more secure.
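One hedged sketch of the idea: an S3 bucket policy that denies any request not arriving through a specific VPC endpoint. The bucket name and endpoint id below are made up for illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnlessThroughOurVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": { "aws:SourceVpce": "vpce-0123456789abcdef0" }
      }
    }
  ]
}
```

The `aws:SourceVpce` condition key is populated only on requests that traverse a VPC endpoint, which is what makes this restriction work.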

Formal Documentation:

Some Slides for your entertainment:

One simple diagram covering network security:

Part 3 – Generating AWS Organizations IAM Policies that are restricted by Service

Simply restrict your lab environment to only the essential services that users need.

The IAM policy does the following:

  1. Deprovisioning resources becomes much easier when the policy is limited to 1 AWS region.
  2. As the time window is specified in UTC, ideally run the command at the start of class and allow no more than the time it takes to run the lab. The policy only grants the users in that group permissions for that particular window, which basically cuts off access and allows for grading afterwards.
  3. Of course, only allow the services that the students need access to.

The JSON output of the command above:


In PowerShell, use nested hashtables together with the great ConvertTo-Json feature to create the IAM policies.


  1. List the services you want to allow and generate the Allow IAM list.
  2. Import the list of all IAM services and generate the Deny IAM list.
  3. Run the Generate-AWSIAMOrgPolicy command and output it to a .txt file for usage.


ConvertTo-Json has a default depth of 2 (the -Depth parameter). Specify at least -Depth 10, which should more than cover the nesting in the policy's JSON hierarchy.
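The same allow-list/deny-list generation can be sketched outside PowerShell. This is not the author's Generate-AWSIAMOrgPolicy, just a Python illustration of the three steps above (service codes like `ec2` or `s3` are the usual IAM action prefixes):

```python
import json

def generate_org_policy(allowed_services, all_services) -> str:
    """Build an Allow/Deny policy document as JSON text: allow the listed
    services, deny every other known service."""
    denied = sorted(set(all_services) - set(allowed_services))
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowLabServices", "Effect": "Allow",
             "Action": [f"{svc}:*" for svc in sorted(allowed_services)],
             "Resource": "*"},
            {"Sid": "DenyEverythingElse", "Effect": "Deny",
             "Action": [f"{svc}:*" for svc in denied],
             "Resource": "*"},
        ],
    }
    # json.dumps never truncates nesting, unlike ConvertTo-Json's default depth
    return json.dumps(policy, indent=2)
```

Feeding it an allow list of `["ec2", "s3"]` against a full service list yields an Allow statement for those two and a Deny statement for everything else.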

Part 2 – Using AWS Organizations and creating Lab Accounts

The first step is to create an AWS Organization, designate the account you want to be your lab account, and have fun running AWS Nuke on those lab resources later!

Step 1. Create the Organization

Step 2. Click the Add Button

Step 3. Create the Account

Step 4. Fill in the Account Details

Presto! The Account has been created.

The very first reason to use AWS Organizations in your lab environment? It provides a consistent way to “nuke” or clean up your lab/dev environment without affecting your main/production account hosting your WordPress website, dear photos, and other stuff that should not be affected by your lab environment.

Part 1: Managing IAM Users and their groups; managing user deprovisioning

The journey of writing a user provisioning and deprovisioning process for AWS labs… in PowerShell.

Action                  | PowerShell Command
Create User             | New-IAMUser -UserName $user
Create Console Password | New-IAMLoginProfile -UserName $user -Password $password
Change Console Password | Update-IAMLoginProfile -UserName $user -Password $password
Add User to Group       | Add-IAMUserToGroup -UserName $user -GroupName $group
Remove User from Group  | Remove-IAMUserFromGroup -UserName $user -GroupName $group
Remove the IAM User     | Remove-IAMUser -UserName $user
Get the Account ID and use the following URL for login

Generic Steps

  1. Create the Console users and give the user the option for either users based on pattern or from a given .csv file.
  2. Create the Console user and create a password for that user.
  3. Add the User to a particular group which has the IAM Policies attached to that group.
  4. The script will then either wait a certain amount of time after the users were created, or the admin can manually run the user deletion script. The user will then be removed from the group (thus removing access) and the password will be changed. The easiest way to remove access entirely is to delete the user.
  5. Later scripts will somehow delete the resources that were created at the end of the day.

The code thus far to Create the Lab User Account and delete the lab users accounts can be found here: