AWS Made Easy

Ask Us Anything: Episode 18

In this episode, Rahul and Stephen recap “Behind the Scenes” Part I, discuss a few new AWS announcements, and plan for “Behind the Scenes” Part II.


Summary

Review Blogpost from Ep 17

In this segment, Rahul and Stephen review the Episode 17 “Behind the Scenes Part I” blogpost.


Behind the Scenes Part 2 Coming Next Week

In this segment, Rahul and Stephen talk about the upcoming “Behind the Scenes” Part II, coming next week.


sam-lambda-py-fn-graviton

In this segment, Rahul and Stephen discuss the Lambda function URL template, which is perfect to use with DevSpaces. See https://github.com/AWSMadeEasy/sam-lambda-py-graviton for the repository, and go to https://ey-graviton.devspaces.com to try it out with DevSpaces. All you need to do is configure your DevSpace with the relevant environment variables; deploying a Lambda function in Python, complete with a function URL, couldn’t be easier.
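As a rough sketch of the pattern the template uses, a function URL invokes a plain Python handler with an API Gateway v2-style event. The handler below is a hypothetical illustration, not the repository's actual app.py:

```python
import json

def lambda_handler(event, context):
    # A Lambda function URL delivers an API Gateway v2-style event:
    # the HTTP method and path live under requestContext.http.
    http = event.get("requestContext", {}).get("http", {})
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "message": f"Hello, {name}!",
            "method": http.get("method"),
            "path": http.get("path"),
        }),
    }

# Local smoke test with a minimal function-URL-shaped event
demo_event = {"requestContext": {"http": {"method": "GET", "path": "/"}},
              "queryStringParameters": {"name": "AWS Made Easy"}}
print(lambda_handler(demo_event, None)["statusCode"])
```

With SAM, exposing a handler like this publicly is a single `FunctionUrlConfig` property on the function resource; no API Gateway setup is needed.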


Easily install and update the CloudWatch Agent with Quick Setup

In this segment, Rahul and Stephen discuss the insights that can be gained by using the CloudWatch agent. In particular, the CloudWatch agent can give you tremendous insight into how to right-size an EC2 instance.
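Once the agent is running, its metrics can be queried like any other CloudWatch metric. The sketch below assumes the agent's Linux defaults (the `CWAgent` namespace and the `mem_used_percent` metric name); the instance ID is a placeholder.

```python
from datetime import datetime, timedelta, timezone

def mem_metric_request(instance_id, hours=24):
    """Build a GetMetricStatistics request for the agent's
    mem_used_percent metric, published under the CWAgent
    namespace by the agent's default Linux config."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "CWAgent",
        "MetricName": "mem_used_percent",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,  # 5-minute buckets
        "Statistics": ["Average", "Maximum"],
    }

def print_memory_stats(instance_id):
    """Needs AWS credentials; the instance ID is a placeholder."""
    import boto3
    cw = boto3.client("cloudwatch")
    stats = cw.get_metric_statistics(**mem_metric_request(instance_id))
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1), "%")

# print_memory_stats("i-0123456789abcdef0")  # uncomment with credentials configured
```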


Amazon RDS Proxy now supports Amazon RDS for SQL Server

In this segment, Rahul and Stephen discuss the new support for SQL Server in RDS Proxy. This is a surprising but welcome move by AWS that makes it faster to connect to and scale SQL Server databases.
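From the application's side, adopting the proxy is mostly a matter of pointing the connection at the proxy endpoint instead of the instance endpoint. A minimal sketch, with a hypothetical endpoint and pyodbc assumed as the SQL Server client:

```python
def proxy_conn_str(proxy_endpoint, database, user, password):
    """Build an ODBC connection string that targets the RDS Proxy
    endpoint rather than the DB instance; the proxy pools and reuses
    the underlying SQL Server connections, which is what makes
    connecting and scaling faster."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={proxy_endpoint},1433;"
        f"DATABASE={database};UID={user};PWD={password}"
    )

def query_version(conn_str):
    """Requires pyodbc and network access to the (hypothetical) proxy."""
    import pyodbc
    with pyodbc.connect(conn_str) as conn:
        return conn.execute("SELECT @@VERSION").fetchone()[0]

# Hypothetical proxy endpoint; the rest of the app is unchanged.
cs = proxy_conn_str("my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",
                    "appdb", "app_user", "secret")
```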


AWS Fargate increases compute and memory by 4x

In this last segment of the show, Rahul and Stephen talk about AWS Fargate. This is a very welcome improvement in the capabilities of Fargate, and it is amazing how AWS keeps pushing the bounds of what is possible with serverless. They compare this to the recent auto-scaling improvements to ECS. While new features are welcome, we would much prefer not to think about managing a cluster at all. Thus, we strongly prefer Fargate over ECS where possible!



Transcript

  1. Stephen

    Right. Hello, and welcome to the “AWS Made Easy: Ask Us Anything” live stream, episode number 18. Hello, welcome. I’m Stephen Barr, the co-host. And Rahul, how are you doing?

  2. Rahul

    Hey, everyone. Doing very well, Stephen. How are things at your end?

  3. Stephen

    Oh, really good. So we just had the last weekend of summer.

  4. Rahul

    Oh, man. End of summer is good.

  5. Stephen

    But in true cynical Seattle fashion we’re excited for summer to be over.

  6. Rahul

    I can imagine what you look forward to in that dreary winter season over there.

  7. Stephen

    So we just had some fires on the other side of the mountains, and the smoke blew into Seattle for a few days. But before that, we had temperatures dipping into the 40s Fahrenheit for the first time since June. I had to reply to my friend here: bring on the cold. That’s the general consensus. I like it a lot.

  8. Rahul

    Yeah, I think I read an article or it was a tweet that somebody had sent out that said, “For a day or so Seattle had the worst air quality in the world.” Like, Delhi and Beijing and all the other places that typically kind of are the top of the list.

  9. Stephen

    That was my Seattle weather blogger friend who put that out. Let’s see. He’s good at monitoring that. Let’s see if I can find his tweet here. Well, it was pretty bad for a few days. Yeah, I can imagine. Here we go. This is what it was like here. You see on the screen? Yeah.

  10. Rahul

    Yeah. Oh, man, that looks really bad.

  11. Stephen

    So luckily, we had a pretty relaxed weekend. How were you? What did you do?

  12. Rahul

    Well, as you know, we are a family that plays a lot of board games and a bunch of card games. We’ve discovered this new game called Phase 10, which is the latest craze with the kids. If you haven’t played it, check it out. It’s pretty neat. But I am terrible at shuffling cards. And whether it be cards related to Rails and Sails, or Ticket to Ride or any of those games, or even, you know, regular card games, it’s just something that I’m so bad at. And there’s always this thing that, you know, I didn’t shuffle the cards properly. So this weekend, I spent my time tweaking a design that I found from someone; I’ll try to find the creator so I can give credit when I post it. I took this person’s card shuffler design, printed it out, tweaked it a little bit, and built it. And it actually shuffles cards reasonably well. Better than how I do it.

  13. Stephen

    So how does it work? So you put half the deck in each half?

  14. Rahul

    Correct, you put half the deck in each half, and then you have this little crank. And the crank basically moves these two spindles, which you can see over here, the spindles have O-rings attached to them, which basically grip the cards. And the two spindles are at different heights. So while the cards come out at the same time, they come out at different levels.

  15. Stephen

    Oh, okay.

  16. Rahul

    And that’s what causes the interleaving. What I love about the simplicity of the design is just the fact that, you know, the creator decided that having this separation in height would achieve the same thing as trying to figure out how to interleave the cards together. And it’s pretty neat. And it works pretty well. I need to refine it a little bit more. I’m gonna motorize it at some point. I love creating electronic gadgets like that.

  17. Stephen

    That’s a really good idea. I have to get your design on that. I ended up making two independent piles and I try to shuffle or destroy the edges of the cards trying to meld them together.

  18. Rahul

    When you have two packs of cards, two cards shuffled together, or if you have an entire set of UNO or one of those games, it’s hard. It’s really hard. So yeah, so that is my weekend. My weekend project. How about you?

  19. Stephen

    Well, I guess we had a pretty quiet weekend. The only other thing I can think of that was really relaxing was reading “Lord of the Rings” with my two older kids.

  20. Rahul

    Oh, wow.

  21. Stephen

    And we’re in the second part of the “Fellowship of the Ring.” And we read the Council of Elrond yesterday. I don’t know what the word is, but they’re experiencing it for the first time. And just seeing them, you know, because that chapter is where the whole rest of the book kind of gets planned, where they’re gonna have to do this big quest. And it’s just exciting kind of what’s ahead of us there.

  22. Rahul

    Oh, man that’s pretty exciting. My older one just finished the entire “Harry Potter” series, like he’s just buried in books at the moment. He’s just finished that, but I’m not sure it was very wise to let him read the last two books, it gets pretty dark for an eight-year-old.

  23. Stephen

    Yeah. That’s pretty intense. I think they were written so that they kind of would age with the audience. So I think, since he read them all in one shot, we have to revisit those in a year, maybe in two or three years.

  24. Rahul

    Yeah, So I’m not entirely sure that was a very wise decision, but he kind of just went with it.

  25. Stephen

    Yeah, there’s not much that you can do to stop that momentum. You can’t say, “Oh, wait two years to read this.”

  26. Rahul

    Correct. But my elder one actually got glasses last week, as well. Because I mean, the strain in his eyes from reading, of course. I mean, it also is hereditary. So yeah, a lot of changes.

  27. Stephen

    That makes sense. Sounds really good. I think that’s exciting. I have to share some book notes.

  28. Rahul

    Yeah, absolutely. We should put out a book review, not just for the kids, but for the adults as well.

  29. Stephen

    Oh, it will be fun. That’s a great idea. All right, well…

  30. Rahul

    Right. So what do you have in store today?

  31. Stephen

    So I think the first segment we want to cover is kind of a review: last week we did the first of our behind-the-scenes episodes. So I wanted to kind of review the blog post I put out. Let me cue our segment. All right. So last week, we talked about the first of what I think is going to be a three-part behind-the-scenes series. And as a companion to that, I put out a blog post; it just got published. I want to share the screen and kind of go over and recap what we talked about, because, putting together the highlights, it was a pretty dense episode.

    So here’s episode 17, behind-the-scenes, part one, where we decided we wanted to talk about the phases of a live stream: how to organize a live stream using ClickUp, webhooks, and scheduling the live stream in Google Calendar. And we used Devflows. And as part of that, we realized we need to do a full Devflows episode. And then planning the live stream and generating the transitions with Lambda and Shotstack. And then using Rekognition segment detection to detect the timestamps of the segments. So that’s as far as we got in one hour. And again, part of our learning in this process is: take what you think is gonna be an hour and divide it by two.

  32. Rahul

    Yeah. It could also be that we talk a lot.

  33. Stephen

    Yes. Oh, and I guess tying this in with the weekend, you know, as part of developing this automation, I think, okay, it’s really fun, it’s really exciting. We think, am I doing enough automation, is this right? And then, so last Friday I wandered downtown to kind of take a walk in the afternoon. And to be honest, sometimes Amazon puts on these free ice cream events. And so my two older kids and I thought, “Let’s wander down and see if we get some free ice cream.” And it turned out to be AWS Builder’s Day. And there was a really good talk by Swami Sivasubramanian and Peter DeSantis on automation. I wanted to play a little clip that I got from that, that made me feel a bit better.

  34. Swami

    It’s really to focus on actually adding value and really look at how you can delight customers, but also, at our scale, look for ways we can optimize it. You know, Peter jokingly mentioned three general automation. That is an example of why we do a lot of wasted manual work. That’s a great example of not having to do manual work. We hire these amazing minds, and we made them actually run scripts and do manual work. These are examples of, again, making things better for all of us and also for future builders to come and join our team.

  35. Man

    That was a good answer. It’s hard to improve on it. You know, we always start with the customer. And that’s like the easiest litmus test for how to figure out what’s the most important.

  36. Stephen

    All right. So I guess my takeaway from that is: even at AWS, they’re still working on their automation. So it’s okay if, 17 episodes in, there are still some rough spots around the edges. But part of documenting this process, part of building out this blog post, it was really good because, you know, you find that as you write it down, you realize, oh, actually, there’s a little bit of a logical hole here, I need to clean this up. So that’s why it took a few extra days to get this out.

  37. Rahul

    No, I think that goes back to the culture of writing. I think writing stuff down makes it so much better in terms of giving you clarity of thought. I think it’s an exercise that we’ve learned, or from what AWS does internally, and I think everyone should try it out. It makes a big difference to the way you think about anything that you’re doing or any decisions you’re making.

  38. Stephen

    It’s looping back to the kids, we’re actually having this discussion with our older ones just recently saying, when you write things down, exactly, you realize, either, sometimes you can have a great idea, it sounds great in your head because you haven’t taken the time to write it out. And once you write it out, you realize oh, maybe it’s circular. It’s not as good as you thought or it is good. And you can publish it.

  39. Rahul

    Correct. Completely agree.

  40. Stephen

    Well, as part of what we went over, we talked about, you know, how the automation works, the live stream or going through a couple of different phases, we start with…

  41. Rahul

    Hey, speaking of I think we need to bring that up on the screen.

  42. Stephen

    Oh, sorry. Let me do that. Thank you. Let’s put this in. There we go.

  43. Rahul

    Great.

  44. Stephen

    So we talked about how a live stream, it starts in a phase so for example, there’s the planning phase, where it’s just we have a guest. And then we have a scheduled phase. So once we have a guest, and an email address and a date and time nailed down, we can then move the phase from plan to scheduled and that can fire off a webhook, which can then send a calendar invite. So we showed how to do that. In the blog post, we talk about how, in general, these webhooks work. So we have a transition from one state to another, it’s gonna fire off some information about the event. And then our API endpoint can handle that event and do whatever it wants to it. So we talk about that. We talk about in our case, we’re using the ClickUp API. So we say that thing goes from plan to scheduled so okay, we have a schedule, so let’s send out a Google Calendar invite.

    And then we use Devflows to do that. And then we put that data back in, so it kind of closes the loop. And here’s an example of the Devflow that we showed. So this is the endpoint that receives that event. And then we apply some JSON structure to it. And then we can send it out to Google Calendar, here’s a date for the live stream, here’s a date for maybe a planning meeting that can be the day before a few days before, there’s a link to this really neat language called JSONata, which is the computation language that joins the nodes. So you can reshape one JSON into another.
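(As a purely illustrative sketch of that reshaping step, here is the same idea in Python. The payload fields are hypothetical stand-ins for what ClickUp actually sends; in the show's pipeline, a JSONata node inside Devflows does this work.)

```python
def to_calendar_event(webhook_payload):
    """Reshape a ClickUp-style webhook payload into the fields a
    Google Calendar invite needs. Field names here are illustrative."""
    task = webhook_payload["task"]
    return {
        "summary": f"Live stream: {task['name']}",
        "start": task["due_date"],
        "attendees": [task["guest_email"]],
    }

payload = {"event": "taskStatusUpdated",  # fired on the plan -> scheduled move
           "task": {"name": "AUA Episode 19",
                    "due_date": "2022-09-20T10:00:00-07:00",
                    "guest_email": "guest@example.com"}}
print(to_calendar_event(payload)["summary"])
```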

    Again, we’ll get into this in the full Devflows episode. Let’s see we covered planning the live stream and generating transitions using Lambda function and Shotstack. So I have really cool videos that we play in between the segments, they start off with this empty box here. And then we can put a piece of text here. And then we talked about how these videos are engineered with a few frames of black in the beginning, before that little jingle, and that what we’re trying to find is, I’ll put that here, is this. This is how we can think about how our livestream is broken down. So we have that little black frame, and then our transition jingle, which is nine seconds. And then this is us talking right now this content, and then it will be bordered by the next one.

  45. Rahul

    Yep.

  46. Stephen

    And then let’s see, we talked about how this is the architecture for this. We have… This is a very standard pattern when you’re dealing with webhooks, because they want to return right away. So whatever is calling this function, it can return right away, but it can also push a job to a queue. And this can take a few minutes, a few seconds, and then update. Give us the information that we need. In this case, we get the little mp4 file that has that transition. And then we can put it into place, it’s ready to go. We talked about Rekognition. Rekognition is where we take our recording, we put it into S3, we launch the Rekognition job. This goes into SNS. And then SNS will push to a queue, which will then tell us… it will find the start and end time.

    So the start and end time of this particular piece of content, it will put that back into ClickUp where I can then adjust it, or play with it, preview it, and then send it on to the rest of our highlight process. So it was a lot for an hour, but I thought it was a really successful episode. I want to ask you, Rahul, after kind of going over that and planning for next week, what are some of the patterns that came out? Thinking about automation in general.
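(The acknowledge-then-queue webhook pattern Stephen describes can be sketched with an in-process queue; a real deployment would use SQS and a Lambda worker, and the "render" step here is just a stand-in sleep.)

```python
import queue
import threading
import time

jobs = queue.Queue()
results = []

def handle_webhook(event):
    """Acknowledge immediately; the slow work happens off the request path."""
    jobs.put(event)
    return {"statusCode": 202, "body": "accepted"}

def worker():
    while True:
        event = jobs.get()
        if event is None:        # shutdown sentinel
            break
        time.sleep(0.01)         # stand-in for rendering the transition mp4
        results.append(f"rendered:{event['segment']}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
ack = handle_webhook({"segment": "intro"})  # caller gets a 202 right away
jobs.join()                                  # the work finishes asynchronously
jobs.put(None)
t.join()
print(ack["statusCode"], results)
```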

  47. Rahul

    So I think the first one would basically be that automation needs some kind of a trigger, whether it be webhooks, whether it be, you know, an explicit call, whether it be some kind of a message via an SNS topic or, you know, an SQS message in a queue. But you basically need some kind of a trigger. And that’s where the entire event-driven architecture, kind of building your stuff like an event-driven architecture, makes so much sense. It just enables automation in little pieces. If you had to, you know, design automation for an entire monolithic system, taking into account all those different pieces, and latencies, and all the other criteria, automation is hard. You want to make it easy: keep it event-driven, keep it in small chunks. That way you can carve out the pieces you want to automate. And if everything talks asynchronously to each other, like events, it just makes it really easy. Then I think we talked about Lambda function URLs. That’s something that I am a big fan of.

  48. Stephen

    We’re going to talk about those. Yeah, we’re gonna talk about that really soon. Actually, yeah, we covered Lambda function URLs. And that’s actually how these are all triggered, right? Because before, you had to use API Gateway, even for a quick 10-line Python thing, and mess around with CORS, API Gateway, all the settings. Not having to do that anymore, I think, is a real win.

  49. Rahul

    I agree. For automation, I think Lambda function URLs are one of the most amazing recent developments that has come about. The other one was Devflows, where I think the whole reason why we created Devflows, and why I love it, is because I found it to be one of the simplest ways to stitch together a bunch of different services to achieve outcomes. And my brain now naturally gravitates towards the flow programming paradigm, where we’re talking about data flowing through a whole series of processing nodes and coming out of it. And by the way, for me, the big aha moment was when I started doing a lot of IoT-related stuff. For my automation at home, I started using Node-RED. And if you haven’t used Node-RED for simple home automation and stuff like that, try it out. It’s awesome. But unfortunately, that doesn’t scale to large enterprise applications. So we had to kind of redo a version of Node-RED that was built on AWS as the underlying platform and has the same flow programming paradigm under the hood. So yeah, I think that was one of the other takeaways from last week.

  50. Stephen

    Yeah, absolutely. I really like the flow programming paradigm, and drawing out an acyclic graph, it’s very easy to reason about, to think about. And if you can represent your problem that way, like on a whiteboard, and then basically move it into something like Devflows, it’s a really nice way of keeping that mental model in sync with the actual process.

  51. Rahul

    Exactly. Great, and I think the next takeaway was that making automation idempotent is really key, especially in an event-driven architecture. And that sometimes is harder than we think it is. Because the one thing that’s never promised is events being in sequence. So the ordering of events is something that we just take for granted. It’s the natural, you know, way that we think about things. But in reality, when you’re doing automation, you have to plan for the fact that these events may not be in order. And so you have to account for sequencing and idempotency, which means that if one item has already been processed and it shows up again, it shouldn’t make any difference.
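(A minimal sketch of both properties Rahul names: deduplicating repeated deliveries, and buffering out-of-order events until they can be applied in sequence. The event shape is hypothetical.)

```python
def make_processor():
    """Process each event at most once and tolerate out-of-order delivery
    by buffering until the next expected sequence number arrives."""
    seen, buffer = set(), {}
    state = {"applied": [], "next_seq": 1}

    def process(event):
        if event["id"] in seen:              # duplicate delivery: ignore
            return
        seen.add(event["id"])
        buffer[event["seq"]] = event
        while state["next_seq"] in buffer:   # apply in order when possible
            state["applied"].append(buffer.pop(state["next_seq"])["payload"])
            state["next_seq"] += 1

    return process, state

process, state = make_processor()
for e in [{"id": "b", "seq": 2, "payload": "transition"},   # arrives early
          {"id": "a", "seq": 1, "payload": "record"},
          {"id": "b", "seq": 2, "payload": "transition"}]:  # duplicate
    process(e)
print(state["applied"])  # ['record', 'transition']
```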

  52. Stephen

    There’s this great joke about… Okay, here we go. This is a great programmer joke, I want to put this on the screen, that I think captures what you were saying: “There are only two hard problems in distributed systems: 2. Exactly-once delivery. 1. Guaranteed order of messages. 2. Exactly-once delivery.”

  53. Rahul

    Absolutely.

  54. Stephen

    I think that really captures it, right? You send this message out into the world, and you have to plan for these scenarios of it might not happen in order, it might happen twice.

  55. Rahul

    Correct. And so that is… Yeah, that’s an incredibly hard problem. And you have to build your systems to be resilient to that. So yeah, that’s something to take care of. Then you have to have some sort of monitoring or observability for whatever automation you’re creating, because with more pieces kind of talking to each other asynchronously, you want to be able to know where everything is, what state something is in. And as a corollary to that, you should be able to start off from any state and be able to proceed, just in case everything collapses at that particular point in time. So that’s really important, too.

  56. Stephen

    I was experimenting with this over the weekend. I hooked up some of the automation to a Slack channel using this AWS SNS-to-Slack publisher. And one thing that’s really interesting is that there’s a certain rhythm to what you end up hearing: okay, I know that it takes two seconds for the transition video to generate, and then it’ll take another second for it to post back. And so you hear this click, click, click. And there’s that little pattern recognizer in your head, like, oh, wait. So you’ll know when it sounds off, just like, I don’t know, if you’re listening to a song and they leave a note out. It’s neat to have that.

  57. Rahul

    True. Yeah, I mean, that’s something to kind of watch out for, because in an asynchronous event-driven model, you’re waiting for an event, right? You don’t know when that event may come. There’s no timeline associated with it. So you have to have an explicit timeout. But then what if the message comes back after the timeout? Again, that’s when idempotency plays a role. And yeah, you have to basically prepare for that kind of mechanism to operate. So I’d say having some kind of monitoring that tells you when something is off is incredibly important to that setup. And yeah. Anything else we missed?

  58. Stephen

    No, I think that’s great. All right. Well, I wanted to do another segment to talk about what’s coming up next week; that’ll be a short one. And then we’re gonna skip ahead to talking about the SAM Lambda functions that we talked about using with DevSpaces. I’ll cue in the next transition; here it goes. All right, so in this one, I just wanted to talk about some of the things we’ll talk about next week. One of them that I’m really excited to show is to go a bit more in-depth about Rekognition, and then showing how that loop actually closes. So right now we’re talking through a number of segments that we’ve kind of pre-planned out, and then Rekognition is going to go and it’s gonna give us the segments it found.

    And then the way our behind-the-scenes automation works is: if our pre-planned segments match the number of detected segments, then it will do a bunch of the legwork of cropping the videos, trimming the videos, attaching the thumbnails, basically making them ready so that we can then upload them to LinkedIn, Twitter, and YouTube throughout the week. But I wanted to take a quick moment to kind of brainstorm about other things that we could do. One of the things I’ve been thinking about: we talked a few weeks ago about a custom entity recognition model. I think that it’d be fun to run on some of the AWS services that we talk about a lot. And then we can build up a heat map and see, okay, how often do we mention EC2 or Lambda, or something like that.
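(The segment-detection step can be sketched with the Rekognition video APIs. The bucket and file names below are hypothetical, and in the show's pipeline the job-completion notification flows through SNS to a queue rather than being handled inline.)

```python
def shot_boundaries(segments):
    """Pull (start, end) millisecond pairs for SHOT segments out of a
    GetSegmentDetection response."""
    return [(s["StartTimestampMillis"], s["EndTimestampMillis"])
            for s in segments if s["Type"] == "SHOT"]

def start_job(bucket, key):
    """Kick off segment detection; needs AWS credentials. Completion can
    also be pushed to SNS via the NotificationChannel parameter."""
    import boto3
    rek = boto3.client("rekognition")
    job = rek.start_segment_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": key}},
        SegmentTypes=["SHOT"])
    return job["JobId"]

# Shape of the segments in a finished job's response:
sample = [{"Type": "TECHNICAL_CUE", "StartTimestampMillis": 0,
           "EndTimestampMillis": 500},
          {"Type": "SHOT", "StartTimestampMillis": 500,
           "EndTimestampMillis": 9000}]
print(shot_boundaries(sample))  # [(500, 9000)]
```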

  59. Rahul

    Yeah, I think there is a model for AWS services somewhere, but we should definitely take a look at it. I think we also discussed it last week. So yeah, I agree. Having that entity recognition for AWS services, because that’s the most relevant one for these conversations, could be incredibly valuable.

  60. Stephen

    And the other one, I was thinking, I really liked your idea of using Selenium to make a StreamYard API, and then… Unless, StreamYard, you’re listening and you don’t want us to do that for some reason; then forget we said that. And speaking of communications, I wanted to use Devflows so that whenever we post something on the social media channels, it goes to a Google Hangouts channel that can be internal to the company, so there’s visibility internally; people know the things that we’re doing, and then they can reshare to their own networks.

  61. Rahul

    Yeah, I think that makes a lot of sense. That would be pretty neat to have.

  62. Stephen

    And that’s a really straightforward Devflow, right? It’s just taking an incoming webhook, then reshaping the data, making it nice, and sending it out to a Hangouts channel.

  63. Rahul

    Correct.

  64. Stephen

    So we’ll post that next week for completeness. But I want to ask: anything else you can think of that we should try to do for next week, for behind-the-scenes part two?

  65. Rahul

    So I would love a mechanism by which, especially when we do our section on AWS announcements, where we review those. I wish because there’s just so many of them. I mean, we spend a bunch of time kind of filtering through all of those announcements and picking maybe five or six that we can cover in a session, it’d be great to figure out a way to just click on those and have those, you know, the links, the thumbnails, that stuff kind of show up in one place, or in our recording sheet that we use to reference all of this material. That’ll be neat if we have some way of doing that.

  66. Stephen

    All right, well, let’s see what we can deliver on for next week. That’ll be a lot of fun.

  67. Rahul

    Awesome.

  68. Stephen

    All right. Well, let’s switch gears again. And I wanted to… I cleaned up the DevSpaces demo repository. And I wanted to put that out there because it’s ready for public use. So let’s switch gears over to that, take a quick break. And then we’ll do that.

  69. Woman

    Public cloud costs going up, and your AWS bill growing without the right cost controls? CloudFix saves you 10% to 20% on your AWS bill by focusing on AWS-recommended fixes that are 100% safe, with zero downtime and zero degradation in performance. The best part: with your approval, CloudFix finds and implements AWS fixes to help you run more efficiently. Visit cloudfix.com for a free savings assessment.

  70. Stephen

    Okay, I’m going to share my screen again. There we go. I put a link to this repository; it’s on the AWS Made Easy GitHub, called sam-lambda-py-graviton. We talked about it last week, but it’s been cleaned up a little bit, and I just wanted to put it out there. I think this is the easiest and fastest way to put some Python out there and have a Lambda and use a Lambda function URL. So all you need to do, you can go to this repository. And it’s a template repository, so you can just say “use this template,” and I’ll keep it in there. It was really easy. I click “sam-hello-world,” create that repository. It’s going to make a copy of it from that point. And then all I have to do is start a DevSpace. And this immediately starts a… this will take a moment. Interesting. This will start a DevSpace based on the Graviton processor. Let me see if I have one running. Here we go. Let’s try this one. You can tell we use this one quite a bit. And it’s got all of the environment variables built in. It has SAM pre-installed. And then I’ve set up my DevSpaces so that for any AWS Made Easy repositories, I’ve got my AWS credentials built in. So all I have to do is… Where’s my other one?

  71. Rahul

    I think it’s the last one.

  72. Stephen

    There we go. And also, one thing that we’ve made a lot easier: rather than dealing with Lambda layers, because you’re using SAM, you can just put something in requirements.txt, and then sam build will work. So it’s all set up, it’s ready to go. And there’s actually a pretty neat script in here called test-lambda-local, which will iterate over some local JSON events and test them. The one environment variable that you need to set for this is the stack name for the Python Lambda function. So let me make sure I have that set. And then I can run test-lambda-local; it’s going to run sam build, and then it’s going to run it with a local event, hello world. And then I can literally just deploy it to AWS. And that’s it. And so if you have a little bit of Python code, and you want that to exist publicly with a Lambda function URL, this is the easiest way to do it. All you have to do is make sure that you have your credentials set up in your DevSpaces variables, and then they’re automatically populated for you; it’s ready to go. All you really need to do is modify app.py and requirements.txt.

  73. Rahul

    And just confirming: so this does set up the function URL, it does set up CORS, all of that stuff, right?

  74. Stephen

    It’s all done. You can see it in your… yep, it’s setting up… see if you see the resources that it’s creating. Let’s see if I can… yeah, it’s setting up a Py function. And then it’s going to set up a function URL. That’s the last thing it sets up, because everything else has to be in place. There it is, the Lambda function URL, the Py function URL. And yeah, you get that ready to go. And what’s really neat is that you can set up all those variables if you’re doing projects for different people. There we go. Right away, there’s the Py function endpoint; we can command-click on this. Oh, I forgot that one. I’m still working on the requests part. But anyway, you get the Python function. What did I do here? app.py. Anyway… Oh, I took away requests. That’s what it was.

  75. Rahul

    Okay, you can do it again.

  76. Stephen

    Yep. That’ll be just fine now. I was experimenting with different ways of packaging, but this is all set now. So it will do that update, it generates the change set, and then you have a publicly available Lambda function ready to go. So this will take another couple of seconds. I’ve put the URL in the chat; it’s out there for people to experiment with and try. And this forms the foundation of a lot of our automation: having these Lambda functions with function URLs.

  77. Rahul

    Yeah, this is, by the way, incredibly valuable, even in all the other automations that you do. When you have little pieces, you can just delegate all the automation into Lambda functions, organize them well using SAM, and then, yeah, it becomes really easy to access and work with those.

  78. Stephen

    Cool. All right, well, there it is. Let’s switch gears; we’re gonna review a couple of articles, and yeah, there’s been a lot of announcements. It’s been a while since we’ve done some articles, so let’s go and switch over to that. Here we go. Okay, this one is “Easily install and update the CloudWatch agent with Quick Setup.”

  79. Rahul

    Yep. Okay, so this one is very near and dear to me, because I’ve lived through an insane amount of pain because of, you know, the CloudWatch agent not being there on every instance. Now, if you did not know this: when you use EC2 instances and you’re trying to figure out what the right size is, you kind of need four metrics, right? You need CPU usage, you need network I/O, you need disk I/O, and you need memory usage. You will get the other three metrics by default from CloudWatch, because AWS can externally look at what is being allocated and what’s happening; however, memory usage is something internal to the VM.

    So you need an agent running inside your instance to be able to figure out what’s running in memory, so that you can export that out into CloudWatch, or at least figure out the usage and ship that out to CloudWatch. If you don’t have memory utilization numbers, you really can’t right-size any instance or figure out how to optimize things. And AWS, just because they will not do anything inside your instance, will never set it up by default. There are, you know, some AMIs where the CloudWatch agent is installed by default.

    But by and large, when you look at instances, the CloudWatch agent is not installed by default. So the Systems Manager Quick Setup is an easy way to centralize the setup of CloudWatch agents across all of your devices, or all of your instances, sorry. And now that it can automatically update and set up and do all of that stuff, I think it makes the life of most SysOps and FinOps, you know, folks way easier, because this is a big piece of information, or data, that’s been missing that allows us to right-size or optimize our costs.
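The memory gap Rahul describes is exactly what the agent’s configuration file closes. As a sketch, assuming the agent’s documented Linux metric names and default `CWAgent` namespace, the minimal piece of config that adds memory metrics looks like this:

```python
import json

# Minimal CloudWatch agent config that adds the memory metrics EC2
# doesn't report by default. The "mem" section is the key piece; CPU,
# network, and disk I/O already come from CloudWatch without an agent.
agent_config = {
    "metrics": {
        "namespace": "CWAgent",
        "metrics_collected": {
            "mem": {
                "measurement": ["mem_used_percent", "mem_available"],
                "metrics_collection_interval": 60,  # seconds
            }
        },
    }
}

# This JSON is what you'd place at the agent's config path (or push
# out via Systems Manager Parameter Store for Quick Setup to use).
config_json = json.dumps(agent_config, indent=2)
print(config_json)
```

Quick Setup’s value is distributing and updating exactly this kind of config fleet-wide instead of hand-installing it per instance.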

  80. Stephen

    Okay, awesome. That’s really, really useful, to be able to, like you said, have that data to make the decision on what instance size you need. I looked up the CloudWatch agent; I’m very glad that it’s open source. Obviously, if they’re gonna ask you to install an agent, it better be open source. But it’s a pretty busy repository. Looks like they can gather a lot of really useful data here. And, yeah, they have the full list of the metrics here. Like you said, CPU, CPU time, idle, I/O, there’s lots of different things that they can gather here.

  81. Rahul

    Correct. For me, the most important one, which you can’t get from anywhere else, is the memory stuff. Because that is so incredibly important. Yep, all of this: how much memory is available, how much memory is buffered, how much is free, how much is inactive. All of that stuff is incredibly valuable. Sometimes you have processes that use up all the CPU and use almost no memory; when you have those kinds of scenarios, you want to move to a compute-optimized instance, because that’s probably gonna be a better bang for your buck, and you’ll get, you know, lower costs overall. If, however, you have stuff that is hardly using any CPU but using tons of memory, you want to switch over to the memory-optimized instances. Like databases, right? They’re gonna need as much memory as you can give them, and they will use all of it. And the CPU utilization may not peak at all.

    So for those kinds of workloads, go with the memory-optimized instances. Or if you find that your I/O is, you know, peaking and hitting limits, you don’t want to keep bumping up your CPU and your memory, because that costs way more. Instead, you could go to one of the I/O-optimized instances, the i-types, or the i-family. And those will help you do a much better job on, you know, those kinds of workloads. So without these metrics, you really wouldn’t be able to make the right choice of instances that your workload should be on.
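That decision logic can be sketched as a toy heuristic. The thresholds and family labels below are purely illustrative, not AWS guidance:

```python
def suggest_family(cpu_util: float, mem_util: float, io_bound: bool) -> str:
    """Very rough right-sizing heuristic from the discussion above.

    cpu_util / mem_util are peak utilization fractions (0.0-1.0) taken
    from CloudWatch (the memory number requires the CloudWatch agent);
    io_bound flags workloads hitting disk/network I/O limits.
    """
    if io_bound:
        # Don't buy CPU/RAM you don't need just to get more I/O.
        return "storage-optimized (i-family)"
    if cpu_util >= 0.7 and mem_util < 0.3:
        return "compute-optimized (c-family)"
    if mem_util >= 0.7 and cpu_util < 0.3:
        return "memory-optimized (r-family)"
    return "general-purpose (m-family)"


# e.g. a CPU-hungry encoder vs. a RAM-hungry cache:
print(suggest_family(0.9, 0.2, False))  # compute-optimized (c-family)
print(suggest_family(0.1, 0.9, False))  # memory-optimized (r-family)
```

The point is simply that without the memory input, two of the four branches are unreachable, which is why the agent matters for right-sizing.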

  82. Stephen

    Well, and just being able to sort of right-size… it’s kind of a funny analogy: I was at an Amazon logistics talk, and the person was saying that just by optimizing the way they pack things on a shelf, they were able to increase the amount of physical goods stored on that shelf by 500%. There are some boxes that are long and wide, some boxes that are tall, some boxes that are flat. And it’s basically the same problem; there’s a couple of different dimensions. At the end of the day, you have a big compute instance, managed by Nitro, that will have some amount of CPU and some memory available to it, and how you want to allocate that really depends on your problem. It’s just the same thing: by having more observability and by right-sizing your instance, you’re really making it more efficient for everyone.

  83. Rahul

    Exactly. So yeah, not having these… oh, and by the way, one of the other areas where we actually save tons of cost is by really understanding these metrics: it makes it easier to make a decision to use Spot Instances. When you know that, you know, it’s occasional use, or you just need bursts and stuff like that, that’s when you can actually cut your cost by 90% and leverage Spot Instances.

  84. Stephen

    Oh, that’s a really, really good point.

  85. Rahul

    So not having these metrics makes it really hard to make any of those decisions.

  86. Stephen

    Yeah, yeah, that makes a lot of sense. Well, anything else on this one before we go on?

  87. Rahul

    Oh, yeah, I think this is incredibly valuable. So I think it definitely simplifies things. Because imagine thousands of instances, installing CloudWatch agents manually: absolute nightmare. So this definitely goes a long way in simplifying all of that stuff, keeping all the CloudWatch agents up to date with all the latest releases, security patches, all of that stuff. Yeah. This is a no-brainer.

  88. Stephen

    All right. And then let’s see, in terms of the article, it seems really straightforward.

  89. Rahul

    Yeah, I think it’s really straightforward. And I think the product page has great documentation that it points to. So yeah, I’d give this a four or a five.

  90. Stephen

    There we go, we get four and a half. And there’s even a screenshot; look at that, it’s pretty good. All right. Well, let’s talk about the next one, which is going to be RDS. Let me cue the transition. And here we go. There we go. All right, RDS Proxy supports RDS for SQL Server.

  91. Rahul

    Got it. I think we need to take down the scores. While we’re discussing this particular…

  92. Stephen

    Oh, good point. Can’t bias this one.

  93. Rahul

    Yeah. All right. So, RDS Proxy. This one I really like. The problem that we used to run into with the advent of Lambda was with most of our code bases: if you write code, or a code snippet, that actually connects to a database and does anything with it, what do you need? You basically set up the code, set up a connection with the database, then you start doing all your transactions, and then you close the connection, right? Now, with Lambda, you’re launching tens of thousands of Lambdas at any point in time, because, I mean, if you have a queue that’s being processed with Lambdas, each Lambda survives maybe less than a second. If you were to set up a connection every time, and it’s a TCP-based connection, you’re setting up a connection with the database server, doing that entire, you know, handshake, and then going through the exercise of tearing down the connection at the end of the transaction. It just feels like an incredibly inefficient way of doing things, because you’re probably spending more time establishing the connection than actually transacting that record manipulation. That was a big challenge. So RDS Proxy came about, first for MySQL and PostgreSQL, with the idea that you would have this proxy that sits between your database and your Lambdas and holds a pool of connections.

  94. Stephen

    So kind of like a warm pool, so it’s cheap to re-establish the session.

  95. Rahul

    Exactly.

  96. Stephen

    Okay, got it.

  97. Rahul

    And Lambdas can request a quick connection from the pool without having to really establish a full-fledged connection with the database. They get a connection, they do whatever they do, and then they release the connection. That way, you don’t have that overhead of talking to the database over and over again. And you can scale up with Lambdas without giving either the Lambdas too much processing to do, or the database too much processing to do. It becomes a way more efficient way of doing things. Now, with this announcement, they seem to have extended this to SQL Server, which is awesome. I’m actually surprised that they did SQL Server, because SQL Server and Oracle traditionally have been issues for Amazon, since they are proprietary protocols under the hood, and Amazon wouldn’t do it. So I’m actually very curious as to how they went about doing this. And also curious to see if Oracle’s round the corner, because that would make a big difference.

  98. Stephen

    And so it’s interesting, right? Because even though like you said there might be some proprietary code in there, at the end of the day, the vendor would want their database to be used more and so has an incentive to kind of support this particular feature.

  99. Rahul

    Yeah, I mean, this kind of load-balancing mechanism definitely benefits the vendors as well. But both Microsoft and Oracle kind of set up their licensing in a manner that makes it preferable for them to have their customers on their own clouds, rather than on a vendor like AWS. So it’s still a bit surprising to me, but I’m really glad that they did it.

  100. Stephen

    Well, this is just classic, what do you call it, coopetition, where…

  101. Rahul

    Exactly.

  102. Stephen

    I mean, they want to make money either way, right? And so I guess they’re humble enough to say, okay, people are gonna be using AWS, not everyone’s on our cloud; that’s where everyone is. So might as well get a slice of that pie.

  103. Rahul

    Yeah, it’s probably just that. So yeah, curious. But this is a really neat development. Now you can write .NET-based Lambdas that can talk to your SQL Server, or even Python-based Lambdas that can talk to SQL Server, doesn’t matter. There are typically a lot of .NET scripts that just talk to SQL Server and do some transactions; turn them into Lambdas, and now they’re event-driven.

  104. Stephen

    Oh, and speaking about Lambdas: it’s been bugging me since we did that little DevSpaces demo why it didn’t work. I fixed it on the side; I know what it was now. My sample event was depending on a field being there that wasn’t in the main code. So I just wanted to go back to it for a second. This does work; you get a function URL, hello world. I’ll update the template. I realized my local event wasn’t the same as my GET request; I was testing it against a POST that had a field. But it does work, so I’ll update that. For completeness, sorry, that was bugging me in the back of my mind. I was thinking, now, okay, we could have a template for C#/.NET-based Lambdas as well, that you develop in DevSpaces and then talk to SQL Server. I think that’d be a lot of fun.

  105. Rahul

    Yeah, I’ve actually had almost no experience writing C-sharp Lambdas. But it’ll be interesting. Yeah, it would definitely be interesting to see how those turn out.

  106. Stephen

    All right. Well, how do we rate this article, then?

  107. Rahul

    This is neat. I love this. This definitely simplifies stuff for anyone wanting to get onto SQL Server, or if they have scripts for their automation. So definitely, this is between a four and a five.

  108. Stephen

    I guess, another four and a half. Nice work. Amazon doing very well today. All right, cool.

  109. Rahul

    I don’t know if it’s data or if it is selection bias.

  110. Stephen

    That’s an interesting point, right? We might only pick the articles we’re interested in. Okay, so that’s another bit of automation: we should review one that’s completely randomly taken off the RSS feed.

  111. Rahul

    Yeah, correct.

  112. Stephen

    Okay. Well, let’s go ahead and talk about the next article. I will cue up the transition right now. I think we’re talking about Fargate next. Okay: AWS Fargate increases compute and memory resource configurations by 4x.

  113. Rahul

    So this basically allows you to run really large containers in a completely serverless model. And you don’t have to worry about anything, including scaling or, you know, capacity, any of that stuff. And the fact that these containers are now gonna be, what, 16 vCPUs and 120 gigs of RAM. That’s plenty to run most workloads. It’s really a lot.

  114. Stephen

    And 120 gigs of RAM, which is also 4x. This is great.

  115. Rahul

    Yeah. So, I mean, it just makes the case for serverless so much stronger, right? I mean, what workload would not fit on a 16 vCPU instance? And, you know, scale infinitely and have 120 gigs of RAM. The set of workloads that don’t fit within those limits is just shrinking dramatically.
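To put numbers on it: ECS task sizes are declared in CPU units (1024 per vCPU) and MiB of memory, and each vCPU tier only permits certain memory values. Here is a sketch of a validity check for the new top tier; the 32–120 GB range in 8 GB steps reflects my reading of the Fargate task-size tables and should be treated as an assumption, not an authoritative validator:

```python
def valid_16vcpu_task(cpu_units: int, memory_mib: int) -> bool:
    """Check a task definition against the (announced) 16 vCPU tier:
    memory from 32 GB to 120 GB in 8 GB increments at that vCPU count.
    CPU is in ECS CPU units (1024 per vCPU), memory in MiB."""
    if cpu_units != 16 * 1024:
        return False
    return 32768 <= memory_mib <= 122880 and memory_mib % 8192 == 0


# The largest new size: 16 vCPU with 120 GB (122880 MiB) of memory.
print(valid_16vcpu_task(16384, 122880))
```

Before this change, the ceiling was a quarter of that, so workloads in this band had to fall back to managing EC2 capacity themselves.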

  116. Stephen

    It’s really neat to think about how the amount of resources that can be immediately marshaled in a serverless way keeps growing, right? Like when Lambda originally came out, it was very, very, very lightweight. And now you can have a Lambda with, what, 10 gigabytes of ephemeral storage that you can have, you know, at a few milliseconds’ notice. Now with this thing with Fargate, being able to have 120 gigabytes of memory accessible in a serverless way. You’re right, that’s a big deal.

  117. Rahul

    Yeah, that’s a really big deal. And for me, the more I think about it… I mean, Lambda started at one end of the spectrum and Fargate started at the other end of the spectrum, with containers. But when I start seeing the features being added to both, I feel like they’re moving closer and closer together. And it might just end up overlapping a lot. Like, if you really think about…

  118. Stephen

    You can say on AWS, there might be more than one way to do something.

  119. Rahul

    Yeah, I mean, think about it: the amount of compute, memory, and stuff that’s been added to Lambdas, and the fact that they can now run for 15 minutes at a time, makes a huge difference. There was a time when Lambdas would not run for more than a minute; then they increased that to three minutes, then five minutes, and so on. And I’d say long-running is relative: even five minutes would have been long-running from the early Lambda standpoint. Today, a long-running job would be anything in excess of 15 minutes. If you were running a job like that, then Fargate is your choice.

    But I wouldn’t be surprised if Lambda now goes to 30 minutes, and then they have another way of doing it. The other thing that kind of blurred the separation is that, pretty much both on Fargate and on Lambda, you can package up your entire code as a container and deploy it on either one of those two services. So there’s really no distinction that I see in terms of the way you package up your code, or write your code, or whatever. You just make a judgment call on whether something’s long-running or not, and decide where to host it. That’s fundamentally the difference. And now, 15 minutes, 30 minutes, I mean, most jobs should not take that long. So the distinction is blurring a lot for me.

  120. Stephen

    And so it will be interesting if in the future, we do have, like 30 minutes or one hour or arbitrarily long Lambda functions, and how the availability of those changes how we use them.

  121. Rahul

    Yeah. I couldn’t agree more. So the only thing that I feel is between Lambda and Fargate, I think it makes decisions a lot harder for the end user. Like, how do you decide when to use Lambda and when to use Fargate? I would use one of those two for sure. But when do you use which?

  122. Stephen

    So yeah, that’s an interesting question. Let’s see, anything else comes up on this article. Let’s see. I liked it. It supports x86 and ARM.

  123. Rahul

    I think there’s one other thing that I wanted to bring up, which is, I think I saw another article that talked about an announcement about ECS auto-scaling. In comparison… yep. That might seem really neat, but with everything that’s going on with Fargate and stuff, this kind of announcement just brings into question why you would want to manage all of this stuff yourself. Like, why would you want to manage an entire cluster and a fleet, and figure out how to auto-scale it quickly, figure out, you know, warm-up times, figure out how to keep your images cached if you have heterogeneous workloads? Because just loading up all these images on a new node takes its own sweet time. When a node goes down, if you have heterogeeneous workloads on it, you know, you lose a lot of images that are stored locally. How do you bring that all back? There’s a cold-start time associated with those. And you have to manage your auto-scaling groups: figure out how they will scale, when they will scale, figure out all the rules and events that drive that. It’s just too much overhead. I just don’t know why…

  124. Stephen

    I can feel my heart rate racing just thinking about it: scaling to 1,000, scaling back to 100. I’ve done that kind of stuff manually before, and it’s unnerving having to do all of it. And maybe some institutions or users really like the control, I suppose. But I guess often you find that that control doesn’t buy you as much as you think it does. And it’s a lot more expensive than you think it is.

  125. Rahul

    It’s way more expensive, for sure. Why do all of this yourself? Because then the cost of a mistake is way higher than you’d ever imagine, right?

  126. Stephen

    Yeah.

  127. Rahul

    And if you are managing all of this stuff, and that becomes your primary job, it’s hard to do it. You’d much rather let someone like AWS do it, where they are managing it as a service and the SLAs are very crisp, very clear. And on a lot of these things, you know, I’m coming to this conclusion: we get taught in tech not to walk down the beaten path. It’s, you know, the slogan that drives almost all of tech, which basically says, hey, do something new, do something different, don’t go do what someone else did. But when it comes to infrastructure, when it comes to deploying these things, you want exactly the opposite. Do things that a million other people have done before, so that it is reliable, it is stable. This is the foundation on which you build your tech differentiation. That foundation has to be absolutely rock solid, and tried a million times before.

  128. Stephen

    Yeah, you don’t want your plumber to look at your pipes and say, “Wow, I’ve never seen anyone else do it this way before.” You want: yeah, this is exactly the same as everyone else does it.

  129. Rahul

    That is, I think, the best way to explain this to somebody. I’m just visualizing the shock on my plumber’s face if he saw something bespoke that he doesn’t have any clue about, how it operates.

  130. Stephen

    You’re going to get a bespoke bill at the end of that process. That’s actually a great plug for… well, this reminds me of the talk that you gave a few weeks ago at the Partner Summit: bespoke problems have bespoke costs, or bespoke code has a bespoke cost to maintain.

  131. Rahul

    Correct. Now, I think that goes with every aspect of our software development lifecycle. I think the only thing that needs to be bespoke is the idea. Beyond that idea, everything else should be something that you can, you know, pick off the shelf, something that everyone understands something that’s easy to manage, easy to maintain, and most importantly, easy to replace. If it is not easy to replace, you’re gonna live with it forever. And it’s gonna go with you to your grave. It’s that bad. So you have to design things that are easy to delete, easy to replace.

  132. Stephen

    Yeah, and you’re right. Often the core idea is so small, right? And everything else is getting the data there, serving the data, validating if an email address is a valid email address, all that stuff, right? That’s been done a thousand times before, and it’s much better to get someone else to do it.

  133. Rahul

    Exactly. And there is no technical differentiation in that work. It’s commodity at this point when a million people have done things a certain way, tag it as commodity. And if something is commodity, you got to figure out how to leverage it, don’t go build another custom solution around it.

  134. Stephen

    And also, thinking about that example of an email address, right? Email addresses can be more and more complicated. It’s easy to think, okay, I could write a regular expression to validate an email address, I don’t want to pull in some library. But no way, you know; there’s a thousand edge cases that have been well trotted out. Same with any time you handle strings: there’s this great GitHub repository, I think it’s called the Big List of Naughty Strings. Okay, here it is, I’ll put it in the chat. And these are strings that are valid, that if you want to handle a string, you should be able to deal with, right? You want to let someone else deal with this, unless your core business is handling strings.
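A quick illustration of why hand-rolled email regexes fall short. The naive pattern below is my own strawman, of the kind people commonly write, and every address in the list is a valid one that it wrongly rejects:

```python
import re

# A naive "looks reasonable" email regex of the kind people hand-roll.
NAIVE = re.compile(r"^\w+@\w+\.\w+$")

# All of these are valid addresses that the naive pattern rejects,
# which is the point: edge cases pile up fast, so lean on a maintained
# validation library unless string handling is your core business.
valid_but_rejected = [
    "user+tag@example.com",      # plus-addressing
    "first.last@example.com",    # dot in the local part
    "user@mail.example.co.uk",   # multi-level domain
]

rejected = [a for a in valid_but_rejected if not NAIVE.match(a)]
print(len(rejected))  # 3 of 3 valid addresses rejected
```

And that’s before the genuinely exotic cases (quoted local parts, internationalized domains) that the Big List of Naughty Strings and the RFCs are full of.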

  135. Rahul

    Completely agree. Completely agree. And yeah, so I think we kind of overdo the custom, “not built here” kind of syndrome. We see too much of that in tech, and we kind of need to change our attitude towards that.

  136. Stephen

    And I think there’s room for it, but it’s just maybe not in production code, right? It’s for things that are meant to be works of art. Like, fun.

  137. Rahul

    Yeah. And there are some things that are gonna be unique, but that’s got to be a core value differentiator a core value proposition. Okay, and 90% of your codebase can’t be that.

  138. Stephen

    Yeah. Well, it looks like we’re hitting 10:00 Pacific time; I think you’re at 9:30 p.m. Well, next week, we’re gonna have our behind-the-scenes part two. We’re gonna see what we can do about indexing all the Amazon articles we’ve already talked about, and show you some fun with entity recognition. Hopefully, we can find that custom entity recognition model and be able to tag all of our transcripts with which AWS services we talked about, and see how we can tie that all together. So, looking forward to behind-the-scenes part two next week. Thanks, Rahul, for a really good episode. And thank you, audience, for joining in. We will see you next week.

  139. Rahul

    Thanks, everyone, and let us know what else you would like us to talk about on the show. We’ll follow up on it.

  140. Stephen

    Well, have a good afternoon, evening. Bye-bye.

  141. Rahul

    Bye-bye.

  142. [music]

  143. Woman

    Is your AWS public cloud bill growing? While most solutions focus on visibility, CloudFix saves you 10% to 20% on your AWS bill by finding and implementing AWS-recommended fixes that are 100% safe. There’s zero downtime and zero degradation in performance. We’ve helped organizations save millions of dollars across tens of thousands of AWS instances. Interested in seeing how much money you can save? Visit cloudfix.com to schedule a free assessment and see your yearly savings.