AWS Made Easy

Ask Us Anything: Episode 1 – AWS Cost Optimization

With Badri Varadarajan, EVP of product at CloudFix

Join AWS superfan and CTO of ESW Capital, Rahul Subramaniam, and EVP of product at CloudFix, Badri Varadarajan, as they share their insights on AWS cost optimization, provide an overview of what CloudFix is and can do, and answer AWS questions from the audience.


Summary

In this episode, we begin by introducing Rahul Subramaniam and Badri Varadarajan. Rahul discusses the big bet that ESW Capital has made on AWS. Then, we take some time to talk about CloudFix, by Aurea. Stephen, Rahul, and Badri discuss how CloudFix works, the types of changes that it can suggest, and the types of savings that you can expect. Finally, we take some questions from the audience.


Transcript

  1. Stephen Barr

    Hello everyone, and welcome to the AWS Insiders Ask Us Anything. We are really happy to be here, and this is the
    first of hopefully a long series of livestreams. Before we start, I’d like to thank our ASL interpreters,
    Stephanie and Rachel; it’s really nice to have you with us. The idea of the show is an “ask us anything” series,
    it’s all in the name. We’re gonna have Rahul and me, and a rotation of guest speakers with different experiences
    in all things cloud and AWS. We want to share some stories, some demos, and some ideas, but the majority of the
    content will be answering your questions and talking about the real AWS issues that you’re facing, whether
    you’re just getting started or you’re very experienced. We just want to talk shop with you on your journey.
    We’ve got years of experience amongst us, and it’s a really fun time we’re living in, not just with on-demand
    compute resources but with all these higher-level services available at our fingertips. Before we get started,
    I’d like to do a round of introductions. First I’d like to introduce Rahul Subramaniam. He’s the CTO of ESW
    Capital, which includes a whole family of companies, and he’s also the head of innovation across those
    companies. He is truly a visionary thinker. He’s the inventor of CloudFix, a tool that can scan your AWS account
    and find non-disruptive changes that can instantly start to save money. We’re gonna hear more about CloudFix
    later. Finally, Rahul is a self-described AWS superfan. Rahul, can you tell us more about yourself?

  2. Rahul Subramaniam

    Yeah, thanks Stephen, and it’s a pleasure to be here today at the Ask Us Anything livestream. I’ve been with this
    group of companies for almost two decades. That’s a really long time. During this period, we’ve worked very
    closely with AWS literally since 2008 and migrated or moved over 150 enterprise software companies over to the
    cloud. And over the last two decades, I have enough scar tissue on me to talk about all the things that worked
    and didn’t work as we made all these big transitions. So AWS is definitely one of the big passions and interests
    that I have. Apart from that, I am a Lego hobbyist and a 3D Printing hobbyist, which I love to do in my spare
    time. So I would love to have, you know, a conversation about that someday. But we’re here to talk about AWS and
    CloudFix today.

  3. Stephen Barr

    Well, the scope of anything I think can be expanded to include Legos and 3D prints at some point. It’s really
    exciting to have you with us, and I’m looking forward to sharing some of these war stories and experiences and
    just talking about AWS. Next, I’d like to introduce Badri Varadarajan, who’s the CTO of a portfolio of products
    under the Trilogy brand. He’s been part of ESW since about 2019. Badri, can you tell us a bit more about
    yourself and your experience with the cloud?

  4. Badri Varadarajan

    Hey, thanks, Stephen. At ESW Capital, my team owns the technical roadmap of 150 products. We start with the
    diligence of products we’re going to acquire or incubate, and the very first thing we do is move them onto the
    public cloud and write a roadmap for how to add more valuable features for our customers. One of the things we
    do is focus first only on the technical merits of the products, and do cost optimization after we have products
    that our customers love, and I guess we’ll talk a little bit about that today. That’s been an interesting
    journey for me, because before ESW I worked primarily on the network edge, deploying signal processing and
    inference engines spanning telco and computer vision applications in different verticals. One of the key
    realizations I had was that even if you’re operating completely at the network edge, you still need the cloud to
    configure, control, and monitor all the products that you’re relying on. So in a way, every company needs to
    have a cloud story. I live in Silicon Valley, so I guess we pay a “beauty tax,” but I do get to go to the ocean,
    or to the mountains, when I choose to. So that’s great.

  5. Stephen Barr

    Oh, thank you for introducing yourself, and it was great meeting you in person recently. We’re an all-remote
    setup, but we luckily had the chance to meet in real life. I’d like to introduce myself quickly. My name is
    Stephen Barr, and I think I’ve got the coolest job in the world. My title is Chief Technology Evangelist, and
    it’s kind of a funny combination of words; there are two main roles. On the one hand, “evangelist” means
    spreading the good news. In our case we’re talking about software, talking about our software, and promoting it
    within the developer community. But at the same time, evangelist is also as much a listening role. I want to
    make sure that what we’re doing is actually what is needed, and that we’re interacting with the greater
    community and playing a part in that. It’s really exciting to be at the nexus of technology, innovation, and
    customer success. So like I said, it’s the coolest job I can imagine. As for my personal background: I’m from
    Seattle, and I’m there now. The plan was originally to split some time between Australia and Seattle, but
    Australia decided to hold on to us for a little longer, so we’re making up for a bit of lost time in Seattle.
    All right, so why are we here? Rahul, I wanted to ask you this: what do you see as the purpose of this weekly
    series? What are we aiming to achieve over the long run?

  6. Rahul Subramaniam

    Yes, so I think the short version of that is that we’ve built enough scar tissue around working with AWS,
    because we made such a big bet on it, and I think the world is making a big bet on AWS. There are loads of
    insights and learnings that we’ve had that we would really love to share with the community, but at the same
    time, I am hoping that with the variety of guests that we invite to the Ask Us Anything livestream over the
    next few weeks, we get to learn from their experiences too, and I think that shared experience is what we want
    to accomplish. The longer version is that we made the big bet on AWS back in 2008 and we’ve had the advantage
    of learning along with, and growing with, AWS, and I think that’s something that a lot of newcomers to the AWS
    ecosystem are missing today. The sheer volume and scale of the AWS footprint can seem very scary for a lot of
    newcomers who are trying to be on AWS and trying to understand this new cloud-native, event-driven paradigm of
    building applications, and that’s something we are hoping we can help with: easing the getting-started
    process. The second aspect is that 90% of the codebase that we own in our applications today is basically
    utility functions. What AWS has been doing diligently over the past decade or more is taking a bunch of these
    utilities and commoditizing them. AWS has really changed the game on how applications can be built, and it’s
    basically becoming the operating system of the future. We want to talk more about that. We want to share
    everyone’s experiences, ours and other experts’, and how they’ve been able to leverage these services to
    accomplish outcomes that historically might have been impossible to do. So sharing and learning for ourselves
    is basically what we hope to accomplish with this series.

  7. Stephen Barr

    Oh, thank you. And I can definitely appreciate what you’re saying: on the one hand, we get to stitch together
    all these complex services. I was playing with the transcription of a phone call recently. AWS has this really
    neat service that can take a recording of a phone call between a customer and an agent, and it’ll give you,
    word by word, sentence by sentence, syllable by syllable actually, what’s being said, but not just that: what
    was the intent of the conversation, what are the action items, what’s the sentiment, and all of those things.
    That would have been phenomenally hard to put together just a couple of years ago, and it still would be hard,
    but now you can literally drop an MP3 into an S3 bucket and in a couple of seconds you’ve got this full
    analysis, and you can tuck that into an analysis pipeline. It’s really exciting, the size of the building
    blocks that we get to play with. There are some other places where they’re still reversing linked lists as an
    exercise, but we get to play with these huge building blocks and do really exciting work. It’s great to have
    your 10,000-foot perspective here. Alright, so before we get started with the content, I just want to thank
    everyone who has joined the stream. We’re on LinkedIn, we’re on Twitch, and we’re on Twitter; please feel free
    to ask any questions in the chat and we’ll see them. Also, if you want to use hashtag AWS or hashtag CloudFix,
    we’re keeping track of those. We’re going to give away a CloudFix hoodie a bit later, and as you comment, we
    aggregate all the usernames and we’ll do that drawing at the end. Again, thank you to our two ASL interpreters,
    Rachel and Stephanie, I really appreciate that, and please let us know what else we can do to make this as
    accessible as we can. Alright, so now we’re going to transition to talking about CloudFix. We’re going to play
    a little video and then we’ll get started with that.

    AD READ: AWS is a critical part of your business, but managing what you’re spending on AWS is a pain in the…
    well, you know what we mean. It’s true: 20% of cloud spending is just flat-out wasted. At CloudFix we’ve
    optimized over 45,000 AWS accounts with zero downtime and no performance degradation. We’ve saved companies
    more than 20% on AWS costs, no BS. CloudFix is aligned with the latest AWS cost optimization guidelines and
    savings plans, so you know our recommendations are legit. And the CloudFix AWS cost optimization engine is
    fully automated and simple to use. No IT time or resources are needed. Installation? Five minutes max, and
    CloudFix does the rest with just one click. It’s as simple as find, fix, and save. The road to AWS cost
    optimization is ahead. Start your test drive of CloudFix today.

  8. Stephen Barr

    Alright, so that is our one minute of CloudFix. Now let’s dive into it. So, Rahul, can you give us the
    backstory, where did CloudFix come from?

  9. Rahul Subramaniam

    Yes. So one of the consequences of having over 150 companies in the portfolio is that you aggregate a whole lot
    of AWS accounts over a period of time. There was a time, a few years ago, when we had over 45,000 AWS accounts
    that we had to manage, and for those of you who are not familiar with that number, it’s insanely large, to the
    point that AWS themselves cannot put 45,000 AWS accounts under one master payer. We actually needed five
    separate master payer accounts, with roughly 10,000 accounts each across those master payers. We suddenly found
    that our costs were literally blown out of proportion. So the first thing that we did was go out and look at
    every tool in the market that would help us solve our cost problems. Initially it was really exciting, because
    a lot of these tools were telling us that we would easily save 50 or 60% of our spend. So we put together a
    SWAT team and we started going after all these savings. Unfortunately, we found very quickly that we were
    making absolutely no progress, and for a few reasons. Number one was that most of these tools give us really
    nice visuals; they give us the equivalent of Cost Explorer, where we could slice and dice the data across our
    accounts and figure out where we were spending money. But realizing the savings was really our problem, and
    none of these tools actually realized the savings for us. Number two was that, more often than not, a lot of
    these savings required massive open-heart surgery on your applications, and we could never get all members of
    our engineering teams, product management teams, and DevOps teams to agree on the changes that needed to be
    made, and therefore we never really managed to realize any of those savings. So after a bunch of failures
    trying out a lot of these tools in the market, we decided to take a step back and rethink how we would go about
    all these problems. It boiled down to something very simple: we started looking at every AWS recommendation
    that was being made around optimizing costs, and we filtered them down to the ones that were completely
    non-disruptive in nature. I mean ones that wouldn’t require you to restart an instance, ones that did not
    require you to change your application code, ones that did not require you to even change the configuration of
    your deployment; things that were completely non-disruptive, where you didn’t need buy-in from your teams to
    get going, because everything just continues to work as-is. So we took some of those, started automating those
    particular fixes, and started rolling them out across all of our accounts. Over a period of time, as we started
    talking to a lot of our customers, we realized that they had the same problem that we were having, and so
    sometime last year we made this product available externally to our customers and to the rest of the world.

  10. Stephen Barr

    Well, thank you so much for explaining that, we really appreciate it. Isn’t it fascinating how you start
    playing with the cloud and it’s really fun, and then you look at the bill and, wow, okay, let’s rein this in
    for a moment? But it’s a complex thing in and of itself. We’ve also got some great comments in the chat. I
    love that you mentioned the 45,000 accounts; The Signal says that’s certainly a few more accounts than they
    know of. When you said that number, I instinctively reached for my coffee. Oh my gosh, 45,000. Alright, well,
    that’s pretty neat to think about. Oh, and we just got a great question about multi-cloud; we’re gonna save
    that for the third half of the show. Looking forward to that one. Badri, I’ve got a question for you: what are
    the hidden costs that you’ve seen in typical AWS deployments?

  11. Badri Varadarajan

    So yeah, I think Rahul alluded to this earlier: when people start their cloud cost optimization journey, you
    can start with the tools that the cloud providers themselves give you, just to get a sense of what it is
    you’re spending money on. I’ll share my screen to show you the AWS tool for that; that’s Cost Explorer.

  12. Rahul Subramaniam

    While we’re working on that we are completely new at this and we’re still figuring out all the technical issues
    around how we do this. Again, tips and tricks around this would also be welcome from the audience.

  13. Stephen Barr

    And then there’s the comment from The Signal: “I love that you guys solve your own problem.” I really like
    that about the company culture, really digging in, being introspective, and figuring out, okay, what are we
    doing, because the things that we’re doing are at scale. And then taking that solution and generalizing it:
    now that we’ve solved it for ourselves, can we solve this generally? I’m certain other people can benefit from
    that work. Alright, I think we might have to punt on the screen share.

  14. Badri Varadarajan

    So when you start with this, you first want to get a basic sense of what you’re spending money on, and do it
    by service first. What we find with most of our customers is they spend about 30% on EC2 instances. The next
    big category tends to be storage: the block store, which shows up under “EC2 – Other” in AWS for some weird
    reason, and S3, which is object storage. That’s where most people are in their cloud journey: they’ve
    basically taken a bunch of on-prem stuff and hosted it in the cloud using the layer-one services that AWS or
    any of the cloud providers provide. And talking of hidden costs, you can save a bunch of money simply by
    finding out what’s unused and turning it off. That number is actually surprisingly high for lots of folks,
    10-15%. Rahul was talking about the 45,000 accounts earlier; one of the things you want to do is go and tag
    not thousands but probably hundreds of thousands of resources, and simple tricks work. I think the other
    thing you want to do is categorize the accounts themselves: what is production, what is non-production, that
    sort of thing. By doing that, and by turning on some aggressive policies in non-production in terms of
    turning off untagged resources, you can get 10-20% there, but it takes a bunch of legwork. And that’s where
    our insight comes in, and Rahul will talk about that more: there are things you can do without all that
    tagging, without all that potential performance impact, that will still save you 10%, completely automated.
    And that was the insight that led to CloudFix, the product.
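
    For illustration, here is a minimal boto3 sketch of the kind of non-production policy Badri describes: find
    running EC2 instances that are missing a required tag and stop them. The tag key (“owner”) and the dry-run
    default are assumptions for the example, not part of CloudFix.

        import boto3

        ec2 = boto3.client("ec2")

        def stop_untagged_instances(required_tag="owner", dry_run=True):
            """List running instances missing `required_tag`; stop them unless dry_run."""
            untagged = []
            for page in ec2.get_paginator("describe_instances").paginate(
                Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
            ):
                for reservation in page["Reservations"]:
                    for instance in reservation["Instances"]:
                        tags = {t["Key"] for t in instance.get("Tags", [])}
                        if required_tag not in tags:
                            untagged.append(instance["InstanceId"])
            if untagged and not dry_run:
                ec2.stop_instances(InstanceIds=untagged)
            return untagged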

  15. Stephen Barr

    So, just to summarize: with tagging, it’s also about understanding the basic breakdown, what are our dev
    costs, what are our production costs, instead of just looking at it as one big bucket. Okay. One of our
    follow-up questions was about getting started, but you really addressed that already: understanding your
    costs. So the next one is building the plan. Once you know where your costs are going, how do you execute on
    that?

  16. Badri Varadarajan

    So one thing that we find useful is that if you just try and build a plan in a vacuum, you sort of get
    confused. It’s good to start with goals, and there are some good goals to keep in mind. You can eventually get
    to 50% cost reductions in almost all cases, unless you’ve already done a whole bunch of optimization. If
    you’ve just started on your cloud journey, your target is that 50-90% of your costs can be reduced; that’s
    where you need to go. But you don’t start by saying “I’m going to solve that first.” You take it 10% at a
    time; that’s a good way to do it. Turn on something like CloudFix, which does the automatic, no-regrets stuff
    that will get you your easiest 10%. Then the next easiest thing tends to be reservations, pure financial
    engineering, and I think we’ll talk about that a little more in depth later, but that will give you another
    10-20%. That’s a good thing to spend time on, because you don’t have to impact your customers or engineering
    teams; all you’re doing is forecasting properly and making reservations. And then you can start sprints that
    get you somewhere between 30% and 60%, and that’s sort of application-dependent. Sprints are actually a good
    way to do it because you time-bound everything, you scope-bound everything, and you only look at what’s
    important.

  17. Stephen Barr

    Thanks for that answer. Alright. So first you run CloudFix, then you look at the financial side, reserved
    instances versus on-demand, and then you do targeted sprints to go after specific things.

  18. Rahul Subramaniam

    I’d add one other thing to that particular note from Badri. I think there is a tremendous amount of savings to
    be had just by shutting off machines that you don’t use anymore, in general. From the last 15 years of
    experience working with all kinds of different accounts in AWS, I think that’s the number one area that you
    can get started with, simply because it’s easy to do and it’s one of the things that everyone misses. It’s
    just so easy, with a simple API call or by logging into the console, to turn on a machine, and no one ever
    pays attention to shutting them off. The cost per hour shows up as a fraction of a cent on a lot of instances
    and resources, and it doesn’t feel like much, but when you leave tens of thousands of these resources on
    without remembering to turn them off, they add up to a lot of money. So I’d say start with just looking at
    stuff that you can shut off, and then move on to the sort of stuff that Badri was talking about a few minutes
    ago.

  19. Stephen Barr

    That makes a lot of sense. If you have an EC2 instance that’s your personal dev box sitting there, you’re only
    going to work 8, 10, maybe 12 hours, hopefully not, but maybe 12 hours in a day. That’s still 12 hours of idle
    time where, as you said, a simple API call can just stop that instance while you’re not using it, and you
    regain that reserve capacity and you regain that cost.

  20. Rahul Subramaniam

    We have an interesting war story that Badri and I were just talking about last week, and I’d just like to
    bring that up. Back in 2008-2009, AWS didn’t have IAM. So when we had all of these users, we had to build a
    system of our own that basically acted as an IAM, which allowed multiple tenants or multiple users to access
    resources within the same account. And one of the very early automations that we put in place was that every
    user had to put their working hours into the system, saying “I work these eight hours in a day,” and we would
    automatically go shut off their instances and put them in hibernate mode, so that we weren’t wasting 16 hours
    of computing time just because the user was only using it eight hours a day. And that makes a big difference:
    that’s a 66% cost reduction on every machine that everybody launched, and we’re talking about 14 years ago.
    So it’s just a discipline that you need to build from the get-go, and it makes a big difference to your bills
    as well.
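
    A rough sketch of the “working hours” automation Rahul describes, not the original system: a Lambda handler,
    run on a schedule such as an hourly EventBridge rule, that stops any running instance whose hypothetical
    “working-hours” tag (for example “9-17”, in UTC) does not cover the current hour.

        from datetime import datetime, timezone
        import boto3

        ec2 = boto3.client("ec2")

        def handler(event, context):
            now_hour = datetime.now(timezone.utc).hour
            to_stop = []
            pages = ec2.get_paginator("describe_instances").paginate(
                Filters=[
                    {"Name": "tag-key", "Values": ["working-hours"]},
                    {"Name": "instance-state-name", "Values": ["running"]},
                ]
            )
            for page in pages:
                for reservation in page["Reservations"]:
                    for instance in reservation["Instances"]:
                        tags = {t["Key"]: t["Value"] for t in instance["Tags"]}
                        start, end = (int(x) for x in tags["working-hours"].split("-"))
                        if not (start <= now_hour < end):
                            to_stop.append(instance["InstanceId"])
            if to_stop:
                ec2.stop_instances(InstanceIds=to_stop)
            return {"stopped": to_stop}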

  21. Stephen Barr

    We’re getting some great feedback on that. That’s some great savings. And speaking of discipline, here’s
    another war story from a long time ago (luckily you’re better protected from this type of error now): I
    accidentally committed some keys to a public GitHub repository, and my AWS account very quickly turned into a
    massive Bitcoin mining cluster. That was my scariest AWS bill experience. Luckily we were able to rectify it
    very quickly; I could not afford to maintain the cluster that these folks started on my behalf. We’ve got a
    great question from the audience here, let me post it: for someone who’s just getting started out and doesn’t
    know what their costs are gonna be, are there smart suggestions for what to turn off based on predicted usage?
    I guess you alluded to that earlier, but do you have any more to say about it?

  22. Badri Varadarajan

    Yeah. So the Cost Explorer tool does have cost forecasting, but I think you want to start with some really
    simple rules. Steve’s point about Bitcoin mining is actually a really good one: you want to block the
    GPU-heavy instance types in all your non-production accounts, basically. Service control policies are your
    friend. They’re not intended to be very detailed or anything, and you only get five service control policies
    per account. I’d go turn off everything that’s untagged; that’s one good service control policy. The other
    one is specific kinds of instances: just block them. You can also do something with regions. If you’re just
    getting started out, you’re probably just going to use one region, not all of them, so just turn the rest
    off and don’t use anything there. And in general, if you’re really starting a new project, you should start
    with something serverless if at all possible, because that way you’re paying for use. To Rahul’s point
    earlier, most of your cost is going to be an EC2 instance or an RDS instance somebody turned on and never
    used.
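
    A minimal sketch of the guardrails Badri mentions, written as a service control policy and attached with
    boto3. The blocked instance families, the allowed region, and the policy name are illustrative assumptions,
    not a recommendation for any particular account.

        import json
        import boto3

        SCP = {
            "Version": "2012-10-17",
            "Statement": [
                {   # Block launching large GPU / memory-heavy instance types.
                    "Sid": "DenyExpensiveInstanceTypes",
                    "Effect": "Deny",
                    "Action": "ec2:RunInstances",
                    "Resource": "arn:aws:ec2:*:*:instance/*",
                    "Condition": {"StringLike": {"ec2:InstanceType": ["p3.*", "p4d.*", "x2*"]}},
                },
                {   # Deny activity outside the one region the team actually uses.
                    "Sid": "DenyOtherRegions",
                    "Effect": "Deny",
                    "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
                    "Resource": "*",
                    "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-west-2"}},
                },
            ],
        }

        org = boto3.client("organizations")
        policy = org.create_policy(
            Name="nonprod-guardrails",
            Description="Block expensive instance types and unused regions",
            Type="SERVICE_CONTROL_POLICY",
            Content=json.dumps(SCP),
        )
        # Then attach it to the non-production OU or account:
        # org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="<ou-or-account-id>")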

  23. Rahul Subramaniam

    Yeah, there’s one other factor which I’ve seen very commonly for someone who’s just starting out. Let’s say
    you just start a new account and start working with it; very soon you start hitting API limits or different
    resource limits on that particular account, and there’s a general tendency to over-provision those limits,
    because you’re thinking, “well, I don’t want to keep filing support tickets to get these limits raised.” But
    I’d say keep them conservative. If you need five machines, don’t turn that into a 20-machine limit, because
    those are things that come back and bite you later, and you want to avoid creating very open limits. Think of
    it almost like your credit card limit: if you have a high credit card limit, you are likely to spend more. So
    just like your credit card, keep your limits conservative, so that even in the worst-case scenario you have
    certain checks and balances in place. And like Badri was saying, service control policies are really awesome;
    use them. The other way you can actually get around the limitation of five service control policies per
    account is by splitting your resources out across a few accounts. That way, in case you have problems, like a
    key getting exposed or something going wrong with a particular account, you can keep your resource limits
    fairly tight within that account and limit the blast radius of any runaway costs to just that one set of
    resources, instead of letting it affect one account that has everything in it. So those would be my two other
    pieces of advice.

  24. Badri Varadarajan

    Yeah, I actually agree with that. Anytime you can focus your attention on a small set of things, that really
    helps you. So you can focus by account. There are three axes which are useful: accounts is one; services is
    another, and that’s where you start, by looking at what services you’re spending money on. I’d even go, if
    you’re getting started and if you can get away with it, to the nuclear option of a service catalog: don’t
    permit anybody to spin up a service that you’re not explicitly authorizing them to use. And the third one is
    the application-level focus, but that’s a little bit harder, because applications are more complex beasts and
    you need to involve large engineering teams to drill down into application-level costs. But services and
    accounts should be something that you can do at the (??) level, and that saves a lot of money.

  25. Stephen Barr

    I really like the idea of limiting the blast radius. I remember the first time I launched a really big
    cluster. I had this for loop and said, okay, this is gonna run, it’s gonna start 600 instances, and I really
    want to make sure this is doing exactly what I think it’s doing. Especially at the time: I was, you know, a
    broke graduate student, and I really couldn’t afford to make that mistake. So limiting the blast radius is a
    really good idea, and as you said, don’t have that credit card with a huge limit on it; at some point you’re
    gonna hit that limit.

    Alright. We have another question; we talked about serverless, so let’s talk about this one: in terms of
    stacks and getting started with AWS, if you were getting started today, what would you start with? Would you
    go serverless, or would you just start with EC2? Rahul, what do you think?

  26. Rahul Subramaniam

    Yeah, I think the stack has changed over the last few years and it’s continuing to evolve and change. I think
    serverless is the way to go, and even within that, it’s changing quite a bit. Just a few months ago, I would
    have said something like RDS with Lambda, an API Gateway to front it, and, if you really needed it, S3 and a
    CDN for any front end that you’re building; that would be the basic stack to go with. My view on that has
    changed, just given all the stuff that’s happening. If you’re primarily building very transaction-oriented
    applications, a better bet would probably be something like DynamoDB with Lambda, or else you probably want to
    look at something like Amplify Studio to build up your UI and front end, and you can model your backend within
    Amplify Studio as well. With the sheer number of tools that AWS is coming up with every year, this is an area
    that’s constantly going to change and evolve. There’s lots of very exciting new serverless stuff coming up and
    we couldn’t be more excited to see where that’s headed. If you’re dealing with certain areas of ML, there’s
    tons of stuff coming up around SageMaker Studio and that world. There is also AWS Glue Studio, which helps you
    do a bunch of ETL and data transformations, and that’s pretty neat to get started with right out of the box.
    There are a lot of serverless tools now in the AWS portfolio that you should go explore. This space is
    constantly evolving, and again, it is very cheap. The cost to get started, I find, keeps coming down every
    year. What used to be $100 is probably $80 now, and if you got started with something like DynamoDB and
    Lambda, it would probably be under $10, depending on how much data you actually are going to put into the
    system.
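
    A minimal sketch of the DynamoDB-plus-Lambda pattern Rahul mentions: one Lambda handler behind an API Gateway
    proxy integration that writes and reads items. The table name (“orders”), the key schema, and the event shape
    are assumptions for illustration.

        import json
        import os
        import boto3

        table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "orders"))

        def handler(event, context):
            if event.get("httpMethod") == "POST":
                # parse_float=str avoids DynamoDB's rejection of Python floats.
                item = json.loads(event["body"], parse_float=str)
                table.put_item(Item=item)
                return {"statusCode": 201, "body": json.dumps(item)}
            # Otherwise treat it as GET /orders/{id}.
            order_id = event["pathParameters"]["id"]
            result = table.get_item(Key={"id": order_id})
            # default=str renders any DynamoDB Decimal values.
            return {"statusCode": 200, "body": json.dumps(result.get("Item", {}), default=str)}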

  27. Stephen Barr

    I really like that perspective of starting with serverless and letting AWS deal with the margins and the
    excess capacity. I think it’s almost similar to what you were saying about over-provisioning your account,
    right? It’s easy to think about scale right out of the gate, but saying “no, I’ll just go serverless and let
    AWS deal with that” is a much easier way. It’s fun to think, “oh, my API is gonna need to handle a million
    requests per second,” but even if it does, still leave that to AWS instead of trying to engineer your own
    solution. Alright, here’s another viewer-submitted question. Let’s look at this one: what clients tend to see
    the most savings through these optimizations? And can we share any specific situations or stories about cost
    savings and their outcomes?

  28. Badri Varadarajan

    Sure. What clients tend to see the most cost savings? That’s a good question. It’s also a meta question, which
    is: what are people actually spending money on in AWS anyway? And it’s surprising how often the same pattern
    repeats. You can almost blindly make this bet: take anybody’s bill and I can do an over-under on it. It’s
    going to be roughly 40% EC2, 30% EBS volumes, 20% S3, and then 10% managed services for folks just getting
    started with them. Sorry, where I said EBS, that’s block storage, and that’s also where the databases sit. Out
    of these, where do people really save money? They save a whole bunch of money on storage, because that’s the
    place where AWS has invested in a whole bunch of cost-optimization tools, and really all you have to do is
    turn them on and configure them properly. CloudFix has a bunch of that, actually; we have a bunch of
    advisories that can’t quite be automated, but you should do them anyway. The compute savings, that’s 40% of
    your bill, so anything you save there is a big chunk of your bill that goes away. Focus on reservations there.
    Folks who are paying anything on-demand at all are the ones who save a bunch of money when they start this
    journey; your aim should be to pay $0 on-demand. In terms of setting stretch goals, when you start that
    journey, look at Cost Explorer: it will give you an option to look at your cost by purchase option. If you see
    on-demand spend, that’s money that you’re throwing away. AWS has a bunch of tools such as savings plans and
    convertible reserved instances; basically, invest in them. There’s a little bit of a fear of overestimating
    demand, and I think we can talk in detail about how that’s a kind of misplaced fear, but you should go spend
    some time getting those costs down. Once you get into managed services, it’s a little bit more complicated to
    save money, but there are easy savings there in terms of trying to use cheaper instances, like Graviton
    instances, for almost every single service. But essentially what I’m getting at is that where you can save
    money is strongly correlated with where you’re spending money, and that really goes to storage, it goes to
    EC2, and then to whatever you’re running your managed services on. I think the next level is trying to get
    into enterprise discount plans and so on, and I think Rahul has lots of insights on that stuff.
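
    A small sketch of the “look at your cost by purchase option” check Badri describes, using the Cost Explorer
    API. The time window is illustrative; the group keys come back as labels such as “On Demand Instances”.

        import boto3

        ce = boto3.client("ce")

        resp = ce.get_cost_and_usage(
            TimePeriod={"Start": "2022-01-01", "End": "2022-02-01"},
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "DIMENSION", "Key": "PURCHASE_TYPE"}],
        )
        for group in resp["ResultsByTime"][0]["Groups"]:
            purchase_type = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(f"{purchase_type}: ${amount:,.2f}")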

  29. Rahul Subramaniam

    Yeah, but before we jump on that, I think there’s another way I look at this. There are basically three
    dimensions along which things are changing. Number one, AWS is constantly coming up with a whole lot of new
    services that make some of the older services or resources look more expensive, because that’s the AWS
    mechanism now. Back in the day, if you were an early adopter of AWS services, what AWS would do is, year over
    year, literally every quarter, reduce the prices of their existing resources, and you could basically stay on
    the same resource and it just kept getting cheaper year over year. They stopped doing that about seven or
    eight years ago, and instead they started creating these new services, or new resource types. Very recently,
    for example, AWS released a new EBS volume type called GP3 that made the older, de facto GP2 volume look
    worse, because the new GP3 volumes are more performant and 20% cheaper than GP2. But AWS doesn’t switch them
    over for you. You have to do that; they give you guidance on how to do it, but they don’t switch these
    resources for you. So when there are these new services or new resources being created by AWS, and if you’ve
    been a user of AWS services for a while, there’s a bunch of savings in moving to the latest and greatest
    versions of those resources, because they are almost always going to be cheaper and more performant. If you
    actually look at the price-performance of the newest resource types, they blow all the old ones out of the
    park. So you want to get there, and for customers who’ve been on AWS for a while, there are tons of savings
    on that dimension. The second dimension is customers who are constantly working on a lot of development
    activities and moving new workloads into AWS. When you’re constantly moving new stuff into AWS, things aren’t
    optimized at the get-go, so there’s a lot of clean-up and a lot of optimization that can happen, and does
    happen, over a period of time right after the move into AWS. That’s the second dimension where a lot of the
    savings come in: folks who are just moving in, after they’ve done their lift-and-shifts, after they’ve done
    the basic movement into AWS, go into optimization. The third dimension is that tools like CloudFix are
    constantly adding new fixes from the recommendations that come from AWS. There are new kinds of resources and
    new kinds of insights that they go and seek, and they’re therefore able to realize a lot of savings just
    because of the new recommendations and new tools. Like Badri said, there are a lot of tools that AWS creates
    for you to save money. S3 Intelligent-Tiering is a perfect example of that: they’ve done all the heavy lifting
    to figure out what the age of your objects in S3 is, and all you have to do is turn it on. If you turn it on,
    your cost of storing objects in S3 could drop by 90% or more. If you go to deep archive, of course, you have
    latency penalties in those cases, but there are infrequent access tiers with absolutely no latency penalty,
    and it can still be 60% cheaper to move to that infrequent access tier. So AWS does a lot of the heavy lifting
    for you, and you need tools to be able to just enable those for your accounts and your resources. Tools like
    CloudFix do that, and as we get more insights we just automate that for you. So depending on where you are
    across these three dimensions of leveraging AWS, the cost savings are different, and the mechanics of getting
    those savings are also different.
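
    A sketch of the GP2-to-GP3 change Rahul describes. ModifyVolume converts a volume in place, with no detach or
    restart; whether to batch the calls, throttle them, or first check that a volume does not need more than
    GP3’s baseline IOPS is left out here (that is the sort of checking tools like CloudFix automate).

        import boto3

        ec2 = boto3.client("ec2")

        def migrate_gp2_volumes(dry_run=True):
            """Return all gp2 volume IDs; convert them to gp3 unless dry_run."""
            migrated = []
            pages = ec2.get_paginator("describe_volumes").paginate(
                Filters=[{"Name": "volume-type", "Values": ["gp2"]}]
            )
            for page in pages:
                for volume in page["Volumes"]:
                    migrated.append(volume["VolumeId"])
                    if not dry_run:
                        ec2.modify_volume(VolumeId=volume["VolumeId"], VolumeType="gp3")
            return migrated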

  30. Stephen Barr

    Alright, so just to summarize: we have these advisories from AWS, like GP2 versus GP3, and they come out with
    this big advisory, but the long and the short of it is that in almost all cases it’s worth making the switch.
    Or with S3 Intelligent-Tiering, there’s almost no reason not to turn it on, but yet we don’t do it, for
    whatever reason. So you’re saying that if we just run CloudFix, it’s gonna tell us to do that, and then we hit
    the button and it’ll do it. Is that what we’re saying here?

  31. Rahul Subramaniam

    Pretty much, that’s exactly what it’s meant to do. These are completely non-disruptive, super simple
    recommendations that just maintain the health and hygiene of your accounts, based on the recommendations that
    AWS makes.

  32. Stephen Barr

    So this is basically money on the table. It doesn’t affect the end user, it doesn’t affect the performance of
    the application in any way; it’s literally just doing it better.

  33. Rahul Subramaniam

    Yeah, that’s basically what it is.

  34. Stephen Barr

    No, can’t complain about that. All right, well, here’s a follow-up question, and it’s kind of a funny one. So,
    we talk about saving money on our AWS bills and we want to help other people save. Well, the question is, what
    does AWS think about that? I mean, don’t they want us to use more and more computing and not save money? What’s
    their perspective on this?

  35. Rahul Subramaniam

    So in my experience, having worked so closely with AWS, the one thing that most people miss in that area is
    that Amazon in general is probably one of the most customer-success-oriented companies out there. They want
    you to be successful, because if you are successful and you love AWS as a service, you’re going to use more of
    it. So they go out of their way to help you use resources optimally. I have never come across anyone at AWS
    who said, “oh, we’d love for you to waste your resources so we can make more money.” They will come to you,
    they will take the initiative to tell you where you might be wasting money and ask you to optimize it. They
    have invested so much in these frameworks and in all of these tools. If you just look at the advisories that
    AWS puts out around cost, they are phenomenal. They built tools like savings plans, they built tools like S3
    Intelligent-Tiering, and they’ve built all these infrequent access tiers across all of these managed services,
    including DynamoDB and so on. They do all this engineering for you so that you can save money and utilize the
    right resources for the right workloads. That’s what they want you to do: they want you to do things in the
    most efficient manner, and they do not want you to waste money. So I have never, in my experience, met anyone
    from AWS who said, “we love the fact that you’re wasting so much money, it meets my sales quota,” or whatever
    mechanics they have in there. Instead, I’ve heard them come back and say, “hey, you guys are wasting a ton of
    money, why are you doing that? What can we do to help you save all that money, so that you’re utilizing us in
    the right way?” That’s what they’re motivated by.

  36. Stephen Barr

    Alright. At the end of the day, if we as customers feel like we’re getting good utilization, then we’re gonna
    keep coming back for more and more, and that’s their long-term strategy. So if we’re using them more
    efficiently in the short run, we’re going to end up giving them more money in the long run. But it’s a win for
    everyone: we’re being efficient, and they’re helping us to be efficient.

  37. Rahul Subramaniam

    Exactly.

  38. Stephen Barr

    Okay, fantastic. Can’t complain about that. We have another question from the audience. I think we should wrap
    up the CloudFix part of this and go to general AWS questions, so please, everyone in the audience, keep the
    questions coming in the chat; any general AWS questions. We’ve got one from Curtis Young. Let’s put this up on
    the screen and give it a shot. This is a good question: AWS has been heavily focused on the automation
    approach for information security to improve scalability. Do you have any insights regarding currently
    available security automation vectors that can be taken within AWS?

  39. Rahul Subramaniam

    Yeah. So actually there are just so many of them. There’s of course the entire Well-Architected Framework,
    which lays out all the dimensions around information security that you should take the time to understand so
    you can architect your applications accordingly. In terms of access and permission schemes, AWS takes a shared
    security model approach; your side of it is primarily around IAM, where they basically take a zero-trust
    stance. They start by saying you have permission to do nothing, and you have to explicitly grant permissions
    for whatever access you might want someone to have. So I think understanding the whole permissions and access
    mechanism is going to be critical for you to build a solution that has the right security parameters in place.
    On top of that, AWS has a ton of very interesting services that actually help you filter out potential issues.
    Just as an example, for almost all of our groups we try to make sure that we never, ever have to deal with
    PII. That’s something that, as much as possible, we never want to touch, never want to get, and never want to
    deal with. So AWS has a bunch of tools, whether it be Macie or Comprehend, for any data that’s coming into our
    AWS accounts. And AWS just last week announced a new GA for Glue supporting PII redaction and being able to
    deal with PII data: the Glue ETL tool itself has built-in capabilities that allow you to filter out this PII
    data. So AWS has a bunch of these tools that you should look to leverage so that you never have to deal with a
    lot of this in the first place. And again, you’ll find a lot of this in the AWS Well-Architected Framework,
    which gives you a decision-making framework for how to architect your application for the right levels of
    security and compliance. I think it’s something that you should go through in depth; it’s time well spent.
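
    A tiny sketch of the “never hold PII yourself” idea using Amazon Comprehend’s PII detection; Macie and the
    Glue PII transform Rahul mentions cover similar ground for S3 objects and ETL jobs. The masking convention
    here is an assumption.

        import boto3

        comprehend = boto3.client("comprehend")

        def redact_pii(text, language="en"):
            entities = comprehend.detect_pii_entities(Text=text, LanguageCode=language)["Entities"]
            # Replace spans right-to-left so earlier offsets stay valid.
            for entity in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
                text = text[:entity["BeginOffset"]] + f"[{entity['Type']}]" + text[entity["EndOffset"]:]
            return text

        print(redact_pii("Contact Jane Doe at jane@example.com"))  # e.g. "Contact [NAME] at [EMAIL]"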

  40. Stephen Barr

    I’m used to working with medical data, and I think it’s best to treat PII like a hot potato: you don’t want to
    be holding on to it. You want to hand it off to AWS as fast as you can, because of the liability associated
    with it. You want to leverage the fact that AWS has a lot of other people sharing in that investment in not
    messing this up.

  41. Rahul Subramaniam

    Yeah. Sorry, Badri, were you going to add something?

  42. Badri Varadarajan

    Just one thing to add to what you said: Trusted Advisor is useful. The only problem is that Trusted Advisor
    has ten times as many security advisories as it does cost-savings ones. That’s the good and bad of it: it
    probably contains everything you need to know, but you need to go through what feels like 100,000 line items.
    It does have a severity rating, though, which is pretty useful, so I’d start there. Talking of tools that are
    available, right after the Well-Architected Framework, Trusted Advisor is a good place to go look for issues
    that you could be running into.

  43. Stephen Barr

    Oh, thank you, Badri. And thank you to our audience; keep putting those questions in the chat. We have
    probably about three or four more minutes of Q&A, and then we want to do that hoodie drawing and wrap up. I
    want to respect your time; it’s been about 15 minutes, and we’ll wrap up right at 10 am Pacific time. Here’s
    another question that I think is interesting and worth thinking about; let’s post it to the screen. This one
    came up earlier in the comments and now it’s time to look at it. Here we go, here’s a good one: we have been
    advised to look into a multi-cloud strategy in case something happens to Amazon, or to Google and Microsoft.
    So what do we think about this multi-cloud approach? Do we have to hedge our bets against AWS pulling the
    plug, or what do we think?

  44. Rahul Subramaniam

    So let me take the first pass at it, and then I’ll let Badri share his thoughts. I think the multi-cloud
    strategy comes from a very traditional view of procurement, where you’re worried about one of your suppliers
    or vendors going out of business, and so you hedge your bets by going with multiple vendors, or at least
    multiple quotes, and then figuring out how to mitigate those risks. The cloud is a completely different game,
    and the traditional procurement strategies don’t really work. I think most people don’t realize that between
    Google’s and Amazon’s servers you probably have the vast majority of today’s internet traffic moving through
    their services, and if Amazon or Google or one of those top three providers were to go down, you have larger
    problems in the world than just your application not working for that small amount of time. AWS has probably
    one of the largest teams working on building out data centers across the world, and the kind of redundancy
    they have is not just one data center. Every availability zone is a series of data centers that work together,
    a region is built up of multiple availability zones, and you then have 20-plus regions across the world. So,
    again, back to the Well-Architected Framework: you can architect your application to have all the redundancy
    required to deal with whatever disaster recovery scenario you need to work through. Just think of it this way:
    amazon.com doesn’t really ever go down. How many times have you experienced amazon.com going down? They have
    built all the redundancy and disaster recovery that they need, and if you need that level of redundancy for
    your applications, you can absolutely build it on a provider like AWS, or technically even Google or
    Microsoft. I would really question the need to go multi-cloud, because that just adds a ton of complexity to
    the way you design your applications and build them out. More often than not, you have to resort to the least
    common denominator of services, which is literally the compute and the raw storage, and you have to build
    everything else on top of it. So unless you have a real use case, and I’ve rarely come across one that has
    made sense, unless you have an absolute need that even the likes of AWS can’t satisfy, then sure, go for it.
    But in 99.9% of cases, I don’t think you need to go down a multi-cloud path.

  45. Stephen Barr

    Trying to be more redundant than AWS: you couldn’t possibly have a bigger interest in being redundant and
    reliable than they do. So unless you really are convinced that you can do better, stick with them and let them
    work that out; they have every reason in the world to. Would that be a fair summary?

  46. Rahul Subramaniam

    Absolutely

  47. Stephen Barr

    Badri, do you have any more thoughts on this one?

  48. Badri Varadarajan

    I think I agree with that. The one thing I’d add, from what we’ve seen, is about folks who decide they need to
    do multi-cloud for whatever reason. One of the good frameworks I’ve seen is to find out: is it real
    multi-cloud or fake multi-cloud? Effectively, if you run the same thing in three different clouds, or three
    different regions within the same cloud, but your keepalive was running in AWS us-east-1, then you’re only as
    reliable as that link, and when that thing goes down, everything goes down anyway, irrespective of how many
    other things you were running. So that’s one. The other one is to find out what the cost of being multi-cloud
    is. What I’ve seen it translate to is that folks want to run everything in Kubernetes so that they think they
    can deploy anywhere. Again, first: is that real or not? Can you really deploy it anywhere? And second: what
    are you not doing because of that? What are you not shipping because you chose to do that instead of running,
    say, SageMaker or Glue or whatever it is? So there is an opportunity cost to trying to be multi-cloud, which
    should be measured in terms of feature velocity as well. Those are the two things I’d watch out for.

  49. Stephen Barr

    That’s a great point. So you’re saying that with multi-cloud, you’re really reduced down to simple compute and
    storage, you’re reinventing all those higher-order building blocks yourself instead of leveraging the ones
    that are specific to each provider, and you still have some single points of failure that you just can’t get
    around anyway. So it would be really hard to make the argument that it’s worth it.

  50. Badri Varadarajan

    Right. And I think the last point worth making is transfer costs. Transfer costs are high between clouds, and
    I think Ravelstein had a great infographic (I don’t know if we have the link, but we can post it in the show
    notes) that described those egress costs. I’ve seen some crazy strategies as well, like folks trying to host
    compute in GCP and storage in AWS; the transfer cost them more than the data itself.

  51. Stephen Barr

    All right. We have one last question that maybe we can spend 30 seconds on, and then we’re gonna give away
    that hoodie and call it a show: Amazon S3 is good for hosting data lakes but can get very pricey; what
    recommendations do we have? We have 30 seconds. Badri, what do you think? S3: how do we save on that data lake
    cost?

  52. Badri Varadarajan

    Actually, if you use the S3-native tools such as Intelligent-Tiering properly, particularly with what they
    announced at re:Invent in 2021, which was the instant Glacier story, as I call it (I think AIA, Archive
    Instant Access, is the acronym for it), you can get a long way. A useful frame of reference here is: if you’re
    spending more than $100 per terabyte per year on S3, you’re doing it wrong.
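
    The rough arithmetic behind that rule of thumb, using approximate us-east-1 list prices per GB-month around
    the time of the recording (check current pricing before relying on these numbers):

        PRICES_PER_GB_MONTH = {
            "S3 Standard": 0.023,
            "S3 Standard-IA": 0.0125,
            "S3 Glacier Instant Retrieval": 0.004,
        }
        for tier, per_gb_month in PRICES_PER_GB_MONTH.items():
            per_tb_year = per_gb_month * 1024 * 12
            print(f"{tier}: ~${per_tb_year:,.0f} per TB-year")
        # S3 Standard: ~$283 per TB-year
        # S3 Standard-IA: ~$154 per TB-year
        # S3 Glacier Instant Retrieval: ~$49 per TB-year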

  53. Stephen Barr

    That’s a great rule of thumb: $100 per terabyte per year, max. I’m posting that in the chat, as it’s a good
    one to remember. So if you’re crossing that threshold, you need to reevaluate. Rahul, any other thoughts?

  54. Rahul Subramaniam

    Yeah, I think, like Badri just said, use the tools that are available from AWS, like S3 Intelligent-Tiering,
    and get to the right infrequent access tier. When you have data lakes, a lot of that data is just sitting
    there for a long time and you may not actually be accessing a large part of it, so it makes sense to move it
    to the infrequent access tiers, and you should be able to reduce your costs quite substantially by doing that.
    Keeping everything in the Standard tier, you know, is literally the wrong way to do it if you have a large
    data lake.
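
    A sketch of what “just turn it on” can look like for a data lake bucket: a lifecycle rule that moves new
    objects into Intelligent-Tiering, plus an Intelligent-Tiering configuration that opts cold objects into the
    archive tiers. The bucket name and day thresholds are illustrative assumptions.

        import boto3

        s3 = boto3.client("s3")
        bucket = "my-data-lake-bucket"  # placeholder

        # Transition objects to Intelligent-Tiering right after upload.
        s3.put_bucket_lifecycle_configuration(
            Bucket=bucket,
            LifecycleConfiguration={
                "Rules": [{
                    "ID": "to-intelligent-tiering",
                    "Status": "Enabled",
                    "Filter": {},
                    "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
                }]
            },
        )

        # Opt objects that go cold into the archive access tiers.
        s3.put_bucket_intelligent_tiering_configuration(
            Bucket=bucket,
            Id="archive-cold-objects",
            IntelligentTieringConfiguration={
                "Id": "archive-cold-objects",
                "Status": "Enabled",
                "Tierings": [
                    {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
                    {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
                ],
            },
        )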