AWS Made Easy

Ask Us Anything: Episode 16

In this “What’s New Review” post, Rahul and Stephen go over a variety of announcements from AWS. Most of the articles rated very well, with the exception of the SageMaker Pipelines announcement.


Summary

High memory instances

These instances are HUGE: 12 terabytes of RAM available. The application that demands this much memory is, of course, SAP HANA and the related SAP Business Suite applications. For institutions looking to transition away from expensive on-prem hardware, these can be a very good option. The running costs are not cheap, coming in at $109.20/hr for the largest of the large instances. In this segment, Rahul wonders whether it would be possible to get these instances as Spot Instances; at some point we will have to run an experiment to find out. Fascinatingly, even if you saturated the 100-gigabit network writing directly into RAM, it would still take about 16 minutes to fill it up. That is an enormous amount of bandwidth and data, and it may limit the utility of Spot Instances. For companies that need to run SAP on AWS, this will be a welcome addition to the family of EC2 instances.
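The figures in this segment are easy to sanity-check with a few lines of arithmetic (the instance size, network speed, and hourly price are from the episode; everything else is plain math):

```python
# u-12tb1.112xlarge back-of-the-envelope numbers
RAM_TB = 12                 # terabytes of RAM
NETWORK_GBITS = 100         # network bandwidth, gigabits per second
ON_DEMAND_PER_HR = 109.20   # on-demand price, USD per hour

# Time to fill 12 TB of RAM over a saturated 100-gigabit link
fill_seconds = (RAM_TB * 1e12 * 8) / (NETWORK_GBITS * 1e9)
print(f"fill time: {fill_seconds / 60:.0f} minutes")      # 16 minutes

# A year of on-demand usage, "the least efficient way possible"
annual = ON_DEMAND_PER_HR * 24 * 365
print(f"on-demand, one year: ${annual:,.0f}")             # $956,592

# Rahul's Spot experiment: 100 instances for one hour at ~1/10 the price
spot_hour = 100 * ON_DEMAND_PER_HR / 10
print(f"spot experiment: ${spot_hour:,.0f}")              # $1,092
```

Sixteen minutes to fill RAM at line rate shows just how much data these machines hold, and a year of on-demand really does land just shy of the million dollars discussed in the transcript.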

Rating: 4 / 5 clouds

SageMaker Pipelines local testing

This segment was, to be quite honest, a bit baffling. Rahul and Stephen struggled to think of a serious use case for running machine learning workflows on a developer’s local machine, and this opens the door to a lot of complexity as AWS loses control of the hardware and the underlying operating system. It seems to be a concession to developers without regard for the long-term plan: AWS would be better served making the testing of ML workflows in the cloud a more streamlined experience, rather than trying to support SageMaker Pipelines locally. It will get especially complex if you want to use GPUs or other specialized hardware. For these reasons, Rahul and Stephen give this announcement a “Complexity Alert” and 2 / 5 clouds.

Rating: 2 / 5 clouds
Verdict: Complexity alert

Amazon Lex now supports conditional branching

Amazon Lex is a very useful tool for building a conversational interface, via voice or text, on top of an existing backend. Until now, branching a conversation meant gluing separate Lex dialogues together with custom Lambda code; with this addition, Lex bots can branch on conditions over the slot values captured from an intent, directly in the bot designer. It is also nice to see the feature available in all regions at once. Rahul is a big fan of Lex in general, and this feature makes it even better. This article earns a nearly perfect 4.5 clouds / 5, and a “Simplifies” tag!
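Conceptually, the new conditions replace the Lambda glue code that used to route a conversation. A toy Python sketch of what such branching expresses (this is not the Lex API; the slot and step names are invented for illustration):

```python
# Route the next dialogue step from a captured slot value, the way a
# Lex conditional branch now does declaratively instead of in Lambda.
def next_step(slots: dict) -> str:
    if slots.get("ReservationType") == "Hotel":
        return "ElicitCheckInDate"
    if slots.get("ReservationType") == "Car":
        return "ElicitPickUpCity"
    return "AskReservationType"  # default branch when nothing matches

print(next_step({"ReservationType": "Hotel"}))  # ElicitCheckInDate
print(next_step({}))                            # AskReservationType
```

The final `return` plays the role of the default condition mentioned in the announcement, so the conversation always has somewhere to go when no condition is triggered.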

Rating: 4.5 / 5 clouds
Verdict: Simplifies

Amazon CloudFront now supports HTTP/3 powered by QUIC

CloudFront now supports HTTP/3, the latest evolution of one of the core protocols that keep the internet running. Updating your CloudFront distribution to enable it is straightforward. The article links to the relevant documentation, the feature is available in all regions at no additional charge, and it is completely backwards compatible. For all these reasons, this article earns our first 5 clouds / 5!
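Enabling it amounts to changing one field on the distribution: the HTTP version becomes `http2and3`, so HTTP/3 is served with HTTP/2 as the fallback for clients that cannot speak QUIC. A minimal sketch of that config change (the dict mirrors a CloudFront DistributionConfig; actually applying it would go through the usual get-distribution-config / update-distribution flow, omitted here):

```python
# Flip a CloudFront distribution config to HTTP/3 with HTTP/2 fallback.
# "http2and3" is the HttpVersion value that enables both protocols,
# which is why the change is completely backwards compatible.
def enable_http3(distribution_config: dict) -> dict:
    updated = dict(distribution_config)
    updated["HttpVersion"] = "http2and3"
    return updated

cfg = {"Comment": "example distribution", "HttpVersion": "http2"}
print(enable_http3(cfg)["HttpVersion"])  # http2and3
```

Because older clients simply keep negotiating HTTP/2 or HTTP/1.1, nothing else in the distribution needs to change.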

Rating: 5 / 5 clouds

AWS IoT TwinMaker launches enhancements for scaling digital twins and building data connectors

The idea of “digital twinning” is fascinating. A digital twin is essentially a simulation of a physical system in the digital space: if a real device carries IoT sensors, its readings can be processed both on the physical device and on the digital twin. One way of making a digital twin is to model the fundamental physics of the system; another is to build a data-driven twin learned from the sensor data itself. With this announcement, AWS is making it easier to build digital twins at scale and to connect them to data sources. For those in the IoT space, this may be a useful feature!
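As a toy illustration of the data-driven flavor (this has nothing to do with the TwinMaker API; the device, numbers, and threshold are all invented), a minimal twin can be a model fitted to historical sensor data that live readings are then checked against:

```python
# Fit a least-squares line temp ≈ a * rpm + b from historical readings,
# then use it as a "twin" that predicts what a healthy pump should read.
def fit_twin(rpms, temps):
    n = len(rpms)
    mean_r, mean_t = sum(rpms) / n, sum(temps) / n
    a = sum((r - mean_r) * (t - mean_t) for r, t in zip(rpms, temps)) \
        / sum((r - mean_r) ** 2 for r in rpms)
    b = mean_t - a * mean_r
    return lambda rpm: a * rpm + b

twin = fit_twin([1000, 2000, 3000], [40.0, 50.0, 60.0])
predicted = twin(2500)
print(round(predicted, 6))          # 55.0
# Flag a live reading that drifts too far from the twin's prediction
print(abs(62.0 - predicted) > 5.0)  # True: anomalous reading
```

Real digital twins are far richer than a fitted line, of course, but the pattern is the same: the twin runs alongside the physical device and its predictions are compared against the live sensor stream.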



Transcript

  1. Stephen

    There we go. Hello, and welcome to “AWS Made Easy.” We’re on episode number 16. This is the Ask Us Anything podcast and welcome, everyone. Wow, it’s been a busy couple of days. Rahul, how are you doing? If you were to take yourself off mute and try that one again.

  2. Rahul

    Yeah, sure. So, yeah, it has been a little crazy at this end. We were in Anaheim last week? I seemed to have lost track of time, but about a week before. Anyway, I got back late last week from the trip to the AWS Summit, and, yeah, still a little jet-lagged, kind of at the fog end of my day. And as a result, you might see me yawning for a bit. But yeah, this is an exciting episode. So, looking forward to it. Spent the weekend just being with family and spending some time at home, a pretty relaxing weekend. What about you?

  3. Stephen

    Well, it was really good. Got back from Anaheim, trying to get back organized and back on schedule. I was going over the trip footage and here I’ll show you the flight on the way in, we got a really, really neat view out of the windows. I’ll put that up on the screen. [inaudible 00:07:12] depending on how nice the weather is and the plane [inaudible 00:07:18].

  4. Rahul

    I think, Stephen, your audio is completely suppressed by the aircraft.

  5. Stephen

    Let’s see. There we go. There’s the downtown. That’s pretty cool. Really fun to see that view. If you ever fly into Seattle and you’re lucky to come in on a nice day and the plane is back around, you’ve got a nice downtown view.

  6. Rahul

    Awesome.

  7. Stephen

    So, that was good. Let’s see, what other things do we do? We went over to the Ballard Locks this weekend with my sisters.

  8. Rahul

    Oh, yeah? And did the salmon or witnessed salmon kind of doing the crossing?

  9. Stephen

    Oh, that’s a perfect question.

  10. Rahul

    Oh, wow.

  11. Stephen

    It was really neat.

  12. Rahul

    I think the last time I saw this was summer 2018, I think, where we went to Ballard Locks and saw the…what it’s called? The salmon bridge or salmon…

  13. Stephen

    Well, the fish ladder.

  14. Rahul

    The ladder. Yeah, ladder is what it’s called.

  15. Stephen

    So, for those of you who don’t know, salmon have to swim upstream to go back to wherever they were born. They’re born in these little freshwater streams and they swim downstream and swim around in the ocean for a couple of years, grow up, mature, and then their instincts kick in and they go back up to the very stream where they were born. But in Seattle, there’s kind of a natural barrier called the Ballard Locks and they’re about 100 years old. But in order to preserve their…and these Ballard Locks exist because Lake Washington and Puget Sound are at different levels, and so in order to not drain Lake Washington, this has to exist. It’s like an elevator for boats.

    But in order for the fish to continue to have their natural upstream behavior, they built the system where the fish make these jumps in these steps called the ladder. And so, it’s really neat to watch, you can see the salmon just going in and going upstream. So, it’s a pretty neat phenomena to observe. I remember I remarked to my sister, I said, “We’ve kind of come full circle,” because my dad always took us to the Ballard Locks and we always complained as kids, “It’s so boring, we just watch the ships go up,” and now we are doing that to our kids.

  16. Rahul

    They have to go through that rite of passage.

  17. Stephen

    Yeah, but I understand what he was getting at now, it’s pretty neat to watch.

  18. Rahul

    I think the appreciation comes much later as they’re adulting.

  19. Stephen

    Well, yeah, 20 years. Well, should we jump into the first…our first segment is some very, very large instances made available to us from AWS. I’ll get that queued up here and we will jump right in.

  20. Rahul

    Absolutely. Let’s get started.

  21. Stephen

    Okay, here it goes. Okay, the first one is Amazon EC2 High Memory instances are now available in US East, South America, and the Asia Pacific regions. So, these are EC2 instances with 12 terabytes of memory. This is pretty incredible.

  22. Rahul

    It is insane. 12 TB of RAM on these instances? Can you bring up…I think the announcement has kind of gone off-page or it’s not coming up on the page.

  23. Stephen

    Yeah, there we go. There we go.

  24. Rahul

    There we go. Okay, so these are some super large instances, 3 TB, 6 TB, 12 TB of RAM depending on what size you pick, 56 or 112. But yeah, it’s these SAP HANA use cases, which is what has always driven these right from the time that the x1.32xlarge came out, and this is maybe seven, eight years ago. Ever since those have come out, they’ve always been driven by SAP HANA. That seems to be the only use case. But when you really think about it, I wonder why these very large instances are not being used for databases in general, given that the more you can load up into memory, the faster those databases should really be. And, yeah, this is a…

  25. Stephen

    It’s not general purpose, these are just for SAP HANA. Is that right? Because it said…

  26. Rahul

    I mean, these are available as general-purpose instances and you can run SAP HANA workloads on them. The problem is they’re not available with RDS, like, these instances are actually not available with RDS. So, that just seems tricky. In fact, my struggle with that is RDS still hasn’t…you know, for the likes of Aurora, they still haven’t launched the 24XL Graviton instances. Like, Graviton instances are still running, I think, the 12 XL, if I’m not mistaken, or 16 XL. So, they’re not yet at 24 XL. And I would love to understand the constraints of launching all of these.

  27. Stephen

    Because at some level, a certain CPU can only talk to a certain amount of memory, but we don’t know what that constraint is for the Graviton3, do we, of how large of an instance or how much memory a Graviton3 can address?

  28. Rahul

    Correct, but even the Graviton2s are not available with the 24 XL. And I would assume that given that it’s a 64-bit chip, it should be able to address a 24 XL. I mean, the Intel ones do. So, in terms of addressable memory, I don’t think that would be the constraint but it’d be interesting to understand what they’ve tuned those instances to.

  29. Stephen

    Yeah, so it’s not an architecture issue, it is an instrumentation bundling for whatever reason, or maybe it’s just the demand, they haven’t gotten the demand for it.

  30. Rahul

    Yeah. By the way, just as a side note, there is an interesting conversation going on in the community about today’s earlier announcement about the UAE region being opened up or being made available. And the conversation is that most services are actually not showing up over there or not yet available in the UAE region, so it feels like a very incomplete deployment of an AWS region. There is no Graviton, you know, those CPU types and processor types are not yet available. You don’t have Version 2 of the API Gateway. There are so many other services missing that it feels like a very incomplete deployment. So, I think it’d be good to figure out from AWS what their strategy for rolling out a new data center looks like, like, how do they prioritize what to put in and what not to put in?

  31. Stephen

    Yeah, certainly. I mean, I wonder if it’s based on what hardware is available at the time and we all know that there’s global supply chain issues, so is it this is what they can get? Or is it this is a strategic order? Or is it demand-based? Or are there certain hardware that can serve multiple different configurations and this is however the Nitro configuration that they chose to go with to allow these particular instances to be served? Yeah, I’m not sure.

  32. Rahul

    Yeah, it’s an open question that we hope someday we can get some experts from AWS to come in on the show and answer for us, but it baffles me that they’ve only got such a half-baked region going. I also wonder whether they actually did this for very specific customers who are local and local only to get them on board. Having lived in Dubai for 12-plus years and having worked very closely with the UAE AWS team, I know for a fact that they’ve always wanted to have the big telcos and the government institutions get on to AWS and there was a big demand from them to, you know, set up this region. I wonder if initially, they built it just for those use cases and, you know, will slowly over a period of time bring in those other general purpose use cases and work, you know, and customer loads in it.

  33. Stephen

    Yeah. Okay, so that brings a bigger question of, like you said, we would like to speak to someone from AWS about what’s the strategy of what order do things get rolled out to a region? And is it a general strategy? Or is it driven by local demand or even demands of a particular customer? Or, you know, in this case, the government, the UAE government within that region?

  34. Rahul

    Yep.

  35. Stephen

    Go ahead.

  36. Rahul

    No, just coming back to this announcement. Yeah, this is exciting, I just…I don’t know if, you know, in a lot of these regions, there’s that kind of demand for these instances. I think you and I were looking at some of this data earlier and it looks like each one of these instances, what, a million bucks?

  37. Stephen

    Well, let’s go through the calculation. So, on the pricing page, that’s the u-12tb1.112xlarge, and that’s $109.20 an hour.

  38. Rahul

    And it has 448 cores, correct?

  39. Stephen

    Yes, 448 cores for 12 terabytes of RAM, EBS only. So, $109.20 is your on-demand pricing. So, if you were to do this in the least efficient way possible and get on-demand for a year, it would be almost $1 million.

  40. Rahul

    Correct.

  41. Stephen

    And then thinking about reserved instances, typically about half, it’s still half a million a year.

  42. Rahul

    Yeah, that’s a pretty pricey machine to keep running all the time.

  43. Stephen

    I was trying to think about, you know, when you go to a restaurant and the saying is that you buy a glass of wine at what the restaurant paid for the bottle. That’s typically restaurant pricing. I would love to know what Amazon’s pricing is for, “This is what we paid for the server and this is how much we rented out in terms of ratios.” I was trying…I bought my first large workstation from these guys, Thinkmate, a long time ago, and I was trying to configure the most RAM that I could and I got to…

  44. Rahul

    Did you get 12 TB? I think they’ve got a couple of 12 TB.

  45. Stephen

    Well, I have to look through the options more. I got it to 6 TB with 48 DIMM, each 128 gigabytes, and so far, we’re at $126,000. So, you can get two of these…you can get three of these but again, you’re not just paying for the hardware, you’re paying for the entire experience, the entire managed, all the AWS system.

  46. Rahul

    The infrastructure, yeah.

  47. Stephen

    The power, the reliability, the network, all that stuff.

  48. Rahul

    Yeah, I mean, I would imagine that these machines burn an insane amount of power.

  49. Stephen

    So, this one says 1425.6 Watts estimated, and again, that is…

  50. Rahul

    Wait, how many cores does this have?

  51. Stephen

    This one is not as many, this is a 4X 28-core Xeon.

  52. Rahul

    Yeah, there’s 28 cores. I mean, you’re gonna have 448 cores in that instance. So, multiply this by 20 and that’s your power consumption.

  53. Stephen

    But are those actual cores that you’d see in…?

  54. Rahul

    No, these are vCPU, so half of them.

  55. Stephen

    vCPU, yeah.

  56. Rahul

    Yeah, these are half of them, so if you…these are just Intel. So, these are Intel architecture, right?

  57. Stephen

    Yeah, these are the Xeon Phi, large memory.

  58. Rahul

    Yeah. So, you’re still looking at 224 cores, so still multiply by 10. That’s the power consumption.

  59. Stephen

    Well, so in terms of this article and in terms of this announcement, what do we think in terms of… I mean, obviously, if you have a need for this, then this will simplify your life greatly. And what were the customers who needed this before and didn’t have access to it? They either use it in a different data center or using on-prem hardware. So, in this case, this massively adds simplification.

  60. Rahul

    Yep. I would love to, if anyone out there has a real-world need for these kind of instances, please reach out to us, we’d love to have you on the show. We’d love for you to tell us what the experience of these instances is and, yeah, we’d love to hear your story. I am actually dying to do an experiment to see if these instances are available as Spot Instances and I wonder if AWS is going to give me a call if I decide to launch 100 of these.

  61. Stephen

    Well, you do that from your account.

  62. Rahul

    How much is it gonna cost for an hour if I ran 100 of these machines, 10 bucks a machine? Ten thousand bucks.

  63. Stephen

    That’s a $3,900 experiment.

  64. Rahul

    It’s actually just 1,000 bucks, isn’t it?

  65. Stephen

    No, 1,000 would be 10.

  66. Rahul

    No, but it’s one-tenth of the price. Spot Instances are at one-tenth of the price.

  67. Stephen

    Oh, okay. Okay, and then you’re gonna use it for the full hour?

  68. Rahul

    Yeah, and if I use it…if I use these instances, so it’s 10 bucks an hour.

  69. Stephen

    No, this is…okay, no, the 12 TB is $109.20 per hour.

  70. Rahul

    Correct, but if I did Spot Instances…

  71. Stephen

    Okay, I got it. Okay, sorry.

  72. Rahul

    This is all under one pricing, right? But if it’s Spot Instance, I’d pay about 10 bucks or let’s say 11 bucks, and then I run 100 of them. That’s 1,100 bucks. That’s not too shabby for an experiment.

  73. Stephen

    The issue would be how in the world you’re gonna get that much data in there that quickly?

  74. Rahul

    Who cares about the data? Run these machines and see what they can do.

  75. Stephen

    Multiply some big matrices together, I suppose. Yeah, a giant matrix.

  76. Rahul

    I mean, you have at this point, with 100 machines, what? Twelve TB, right? So, 1200 terabytes of RAM. That’s 1.2 petabytes or roughly 1 petabyte of RAM. What can you do with 1 petabyte of RAM?

  77. Stephen

    You could load the…no, you could load it the top 1,000 movies of Netflix…no, no, no, you could probably load a big chunk of Netflix all at once. I don’t know what you do with it.

  78. Rahul

    Interesting exercise. Okay, we’ll see if we can figure out if we can run the Spot Instances and have some fun with them. I doubt if they’ll be available as Spot but…

  79. Stephen

    No, I would think that given the size of this, it will be very rare to see one pop up as Spot.

  80. Rahul

    True.

  81. Stephen

    All right. Well, in terms of the article quality itself, it seems very clear. It would be interesting if there was a non-SAP HANA example.

  82. Rahul

    I think for all of these, everything from the x1 onwards, as soon as you add 1 TB or more of RAM, those are all designed for HANA and HANA alone.

  83. Stephen

    Yeah.

  84. Rahul

    I don’t know of any other use case that has come up that says, “I need a terabyte of RAM.” It’s just those; I haven’t come across any others.

  85. Stephen

    So, what do we rate this in terms of clouds? Let’s say…

  86. Rahul

    Four clouds.

  87. Stephen

    Four? All right.

  88. Rahul

    Yeah, I think we’re good with four.

  89. Stephen

    And you see we’ve got our new clouds that are visible.

  90. Rahul

    Yay, the one with the orange borders.

  91. Stephen

    All right. Well, let’s close out this segment and go on to the next one, which is SageMaker local. All right, this one’s a bit of a funny one, I think. This is, “Amazon SageMaker Pipelines now supports testing of machine learning workflows in your local environment.” Now, I’ll put this article in the chat for anyone who’s following along. Okay, this to me seems a bit backwards. “SageMaker Pipelines is a tool that helps you build machine learning pipelines that take advantage of direct SageMaker integration. Pipelines now supports creating and testing pipelines in your local machine,” e.g., your computer, to clarify what local means. Which they should, right? “Wait, am I reading this right? Do you mean local as in my local region?” No, they’re saying the actual physical piece of hardware on your desk.

  92. Rahul

    Correct.

  93. Stephen

    So, “With this launch, you can test SageMaker Pipelines scripts and parameters compatibility locally before running them on SageMaker in the cloud.” Why would they do this? I mean, it seems, obviously, it is a cloud-native setup, SageMaker is meant to be on the cloud and not locally. This seems like…I think we discussed this beforehand, is this a…how do they say? Is this kind of pandering to people who have requested this? I mean, obviously, this is not the direction they want to go in the long run but why move this particular step locally?

  94. Rahul

    Actually, they’ve done a number of services where they’ve started supporting local debugging and development. And I do agree with you that this feels like pandering to the developers who are reluctant to adopt all these cloud-native tools in a cloud-native way where you don’t have to do stuff on your local machine. I think the analogy that I can give you is the cloud-native IDEs, you know, the likes of Dev Spaces, Gitpod, Codespaces, and so on. All of them have so much reluctance from developers. Somehow, they feel like if they’re not using their local IDE, they’re not doing real-world development and that just feels messed up in terms of logic.

    In this particular case, all these notebooks and all these pipelines and stuff, is it making it available locally? Yeah, it feels like diluting the value, like, why would you want to keep stuff locally? It just adds more complexity, it adds more variables, it adds more of, you know, failure scenarios. Like, the setup that you have local is not the same as the setup that you have for your pipelines in the cloud. There is no…the hardware is totally different.

  95. Stephen

    They have to deal with the architecture and all this other stuff now.

  96. Rahul

    Yeah, exactly. So, I don’t really understand why someone might want to do this. By the way, I still have a very skewed view, you know, of how things should be. For me, for example, even IDEs being local is virtually blasphemy. At this point, given all the tools that are out there, you shouldn’t have to launch your local IDE anymore. And therefore, when someone says that you can do all these pipeline stuff locally, I cringe. I cringe that you just use a really messy machine with messed up configuration to decide how your production workloads should operate and then debug it there. It doesn’t make sense.

  97. Stephen

    Yeah, I agree 100%, it seems so…why would you want to do this? It seems really backwards. I mean, maybe you’d say, “Okay, well, we already bought all these brand-new MacBook Pros for the team, so we need to figure out how to get some mileage out of them,” and this is being friendly, I suppose. It’s a bit odd. I guess this is…going into the rating of this article, they don’t really say why you’d want to do this.

  98. Rahul

    I am pretty sure that a lot of developers came to them and said, “Oh, unless we can do things locally, we are not going to use the service.” And so, they were kind of pandering to that. Yeah, I have no doubt that that was the scenario. I actually think that if I were the product manager of this particular product, I wouldn’t add that as a feature even if the developers were asking for it. It just adds to the complexity.

  99. Stephen

    Yeah, it’s just going to add and make problems later on.

  100. Rahul

    Yeah, exactly.

  101. Stephen

    You have to deal with architectures, what about when you want to train with GPUs, and then someone has an ATI Radeon and someone an Nvidia? Yeah, it’s fraught with issues.

  102. Rahul

    Correct.

  103. Stephen

    I’m inclined to give this a…okay, in terms of an article, I mean, it doesn’t give an example of why you’d want to do this. But I guess from an engineering standpoint, it’s kind of cool in the sense that it’s a new ability that didn’t exist before so that’s, I guess, a good thing.

  104. Rahul

    Yeah.

  105. Stephen

    But on the overall, I want to give this a complexity…

  106. Rahul

    So, I think when I give a rating for this one, it’s less to do with this article or this blog post in particular, but more to do with… I think the score that I’m about to give or that we are about to give is, I think, a reflection of the product management for this particular feature. I would actually count it as adding complexity and not something that I would have picked. Of all the things to do on SageMaker, trying to support local was not what I thought as a value-add.

  107. Stephen

    Yeah, no, I agree completely. I think that this exists and now they have to support it and the support forums are gonna fill up about, “Why this doesn’t work on my particular bespoke configuration?”

  108. Rahul

    Correct. Yeah, and there’s just so many variables to take care of on the local machine like…yeah, it makes no sense to me.

  109. Stephen

    All right. Well, I think then I’m going to throw out the two clouds unless you can think of another cloud to give it.

  110. Rahul

    No, I think that seems fine and fair for this particular work.

  111. Stephen

    Yes. Well, I guess, thinking about this and thinking about the dangers of running things locally, I wanted to do kind of a fun transition segment. There was a neat article on Hacker News recently and it’s based on…it was from the author of a bit of software that I use called Lunar, and I’ll put this in the chat. So, Lunar is…oops, I just unplugged my keyboard. Lunar is a brightness adjustment software for macOS and it handles a lot of edge cases. For example, sometimes, if you have a batch of monitors, the factory will get lazy and it will put the same serial number on all of them. And this is an issue because when you have a left and a right, your Mac can’t tell which one is which.

    Well, one interesting issue that got discussed in the comments is that there’s some known issue with monitors that have very poorly shielded HDMI cables. And the issue is that when you sit on a standard office chair, you sit on a standard office chair, there is one metal cylinder moving inside of another and this situation can sometimes induce an electromagnetic spike. And if you have poorly shielded video cables, this can actually…your cable, or you can disturb the input. And they’ve been able to reproduce this, I’ll add this, of what it looks like. There’s this engineer here sitting. And as he gets off, his screens turn off. I’m gonna come back here.

    There you go. So, they’re on, he sits down, and they turn off again. And so, here’s the point of why we bring this up, in addition to this being kind of fascinating from a nerdy tech perspective, is just that with local setups, there are so many edge cases that you have to deal with. And this person who was talking about the monitors just kind of…it was an interesting blog post because he talked about, “Here’s all of these different edge cases that have come up over the years of monitors with identical serial numbers, the HDMI issue, all sorts of stuff.”

    And so, that’s why we always encourage, “Use managed services,” because then you have this great team that’s taking care of all this stuff for you. And in the next day or two, a clip from the talk you gave in Anaheim about tier two of the cost savings, which is, “Use managed services.” Why would you want to deal with all these little edge cases when you don’t have to and you could pay someone else at a much bigger scale to deal with them?

  112. Rahul

    I agree. Completely agree.

  113. Stephen

    All right. Well, I posted the article in the chat. It’s a fun article, so have a good read of that and then you go on and do the Aurora. All right, let’s take a quick break. When we come back, we’ll be talking about the next article, about Lex.

  114. Woman

    Public cloud costs going up and your AWS bill growing without the right cost controls? CloudFix saves you 10% to 20% on your AWS bill by focusing on AWS-recommended fixes that are 100% safe with zero downtime and zero degradation in performance. The best part? With your approval, CloudFix finds and implements AWS fixes to help you run more efficiently. Visit cloudfix.com for a free savings assessment.

  115. Rahul

    Whoa, we got a bunch of echo on the last one.

  116. Stephen

    I think that was my fault. I unplugged what I thought was my keyboard. For some reason, I misconfigured my…I was rearranging my office a little bit. Yeah, I missed…anyway, I unplugged my mic by mistake. All right, Amazon Lex. Let’s bring this on the screen. Here we go. “Amazon Lex now supports conditional branching for simplified dialogue management.” I’ll put this one over here in the…

  117. Rahul

    Do we need a segment separator on this one?

  118. Stephen

    Yes, sorry about that. We do. So, take two.

  119. Rahul

    Just for the audience, the reason why I call that out is because…and I’ll let Stephen describe that a little bit more. We use these segment separators for a lot of automation that allows us to slice these videos up and post them as segments on various social media channels. And that automation requires the presence of these segments. So, go ahead, let’s see if you can elaborate on that a little bit more.

  120. Stephen

    We’re going to do a special behind-the-scenes episode, it’s now going to be two weeks from today and we are very excited about that. Again, it shows the process that we use to automate bits of our live stream. We believe in, you know, using the stuff that we actually…that we talked about, AWS services, one of them is segment detection. And so, by having these introductory segments, we can get a lot of guidance because after this video is posted in full, we go back and chop it up into bits based on what we talked about and we use some machine learning, specifically AWS Rekognition segment detection, to tell us where the different highlights are. And then, of course, we validate them by hand before we post them, but it makes it a bit faster than going through by hand and scrolling through. So, that’s why we do that. And it’s kind of fun, it gives us a little chance to take even that little 10-second breather to kind of clear your brain and get ready for the next, it is useful.

  121. Rahul

    Completely agree. Okay, back to Lex.

  122. Stephen

    Lex is a service for building conversational interfaces into any application using text.

  123. Rahul

    So, this is all your chatbots and IVR systems, right? And just for contrast, what used to happen till now is that you would create these flows of dialogues with the customer but anytime you had to make different choices and pick the right choice, you would basically glue up different Lex dialogues via some custom code. And now for the first time, you actually have conditionals that you can do on intents or rather the slots that you get out of the intents, and you can use those conditionals to direct different dialogue flows.

  124. Stephen

    Okay. On the one hand, this is going to be very helpful if you’re already using Lex. It’s funny, I’m surprised that this didn’t exist earlier.

  125. Rahul

    Yeah, I think the way Lex kind of developed, there was always a lambda handler for a bunch of stuff, and, therefore, you know, there was custom code to be written. It is part of the way you dealt with things. And now it’s neat that, you know, you have the conditional right in there within your designer, your chatbot designer, and you can use that to go around different parts.

  126. Stephen

    So, this is the mechanism for Lex to get simpler, to have UIs, and make this a lot easier to use for someone who wants to use Lex but maybe isn’t as comfortable with lambda.

  127. Rahul

    Yeah. I'd say this: I think it doesn't make Lex simpler. I actually think that it makes Lex a little more complex, but it makes building chatbots and IVR systems way easier because you're now no longer dealing with custom code to stitch these things together. So, you know, I am always for, you know, figuring out simpler ways to solve the problem rather than trying to just simplify a tool, so I would absolutely give this a “Simplifies.” Even though Lex is getting just a tiny bit more complex, I think that's totally a valid trade-off for…

  128. Stephen

    You know, because in aggregate, the complexity is going down even though Lex, in particular, gets a little bit more complicated. But if you can avoid having to write a lambda function, then you can see all the logic right in one place. Because otherwise, you have to open up the lambda function and then you have to…you know, it's always a…you have to repackage the lambda function, and if you don't remember how to do that…there are a couple of steps to zip up your Python code and upload it as a lambda function. And if you don't remember all that, having that logic in one place, yeah, I can't see a reason why you wouldn't want to do that.
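
    The repackaging ritual Stephen mentions is roughly the following. This is a sketch only: the handler contents, file names, and function name are placeholders, and the actual upload call is commented out because it needs AWS credentials and a real function.

```python
import pathlib
import zipfile

# A trivial placeholder handler, purely for illustration.
pathlib.Path("lambda_function.py").write_text(
    "def lambda_handler(event, context):\n    return {'statusCode': 200}\n"
)

# Zip up the code -- the step that's easy to forget between edits.
with zipfile.ZipFile("function.zip", "w") as zf:
    zf.write("lambda_function.py")

# The upload would then be (hypothetical function name, needs credentials):
# import boto3
# boto3.client("lambda").update_function_code(
#     FunctionName="my-lex-handler",
#     ZipFile=pathlib.Path("function.zip").read_bytes(),
# )
print(zipfile.ZipFile("function.zip").namelist())  # ['lambda_function.py']
```

    It is only a few steps, but it is exactly the kind of ceremony the new in-designer conditionals let you skip.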

    So, let’s see. And then also, there’s a default condition to manage the conversation in case none of the conditions are triggered. That makes sense. “You reduce dependency on custom code and expedite the design and delivery of conversational interfaces.” This is nice. It’s available in all AWS regions at once, so you don’t have to deal with rollouts, a few regions at a time.

  129. Rahul

    So, this is one service that’s available in the UAE region that just got announced today.

  130. Stephen

    Fantastic. So, okay, I can’t really fault this at all.

  131. Rahul

    Yeah, I think this one is good. I would completely agree that this one is very, very good. Do we have a four and a half? Or does it go only to five?

  132. Stephen

    There you go. No, there’s that half-cloud.

  133. Rahul

    Nice.

  134. Stephen

    Yeah, five is reserved for…well, I don’t know, we’ll know when we get there.

  135. Rahul

    Yeah, I think five is for ones which have a lot of examples, which have, you know, literally, “Here’s the Follow Me guide,” kind of setup. So, I think those would fall under the examples. I think we have one service that we’re going to talk about today that has that.

  136. Stephen

    Well, let’s transition over, then, to see what are we doing next. We’re doing that IoT TwinMaker. All right.

  137. Rahul

    That’s the one. That’s the one.

  138. Stephen

    Here you go. Okay, this is, “AWS IoT TwinMaker launches enhancements for scaling digital twins and building data connectors.” So, digital twin, that’s an interesting idea.

  139. Rahul

    That idea is a fascinating idea. It is part of the Industry 4.0 revolution. And just for the audience, the digital twin is nothing but a virtual representation of your physical infrastructure. So, let’s say you had a machine that was producing something, or you had an assembly line, or you had some kind of workshop that’s producing certain goods, or you have an entire supply chain, doesn’t matter. Whatever your physical system is, you can actually start…you can use CAD modeling to create a virtual setup of that and then start streaming all of the sensor data that you are collecting across all the sensors in the factory or the workshop or whatever you’ve got, take all of that data and material, and pass it on to your virtual device.

    You can then train that virtual version of that device to mimic the behavior of your real physical system. Now, imagine what that means for factories or imagine what that means for plants and machineries. You can use that mechanism to predict failure of the plant or device or specific parts, you can use it as a mechanism to figure out, you know, how key metrics would operate, you could run a bunch of different simulations, there are numerous use cases that come out of the IoT TwinMaker service. So, yeah, I’m actually pretty hip about it. To be honest, we haven’t built anything with it yet but we are just about to kick start a project where we’re going to be leveraging the IoT TwinMaker to really understand what’s happening with a lot of our own internal processes.

  140. Stephen

    Well, that’s exciting. It’ll be fun, I think we should do a special episode dedicated to that when it’s all up and running. So, I’ll mark that one for a future one.

  141. Rahul

    Great.

  142. Stephen

    It’s interesting. I think about a long time ago in grad school, I had a roommate, and he was in operations management. And he was looking at…no, he’s a classmate of mine, and he was looking at a hospital and he was designing a new process for the elevator management of this hospital. And I can see, you know, he had to do all these calculus-based queuing theory stuff on service time and inter-arrival processes. And being able to validate that or pair that with a simulation based on…well, at first, you would do a digital twin and then you could also do real data based on, say, the sensors at the elevator, that would be a really neat way to kind of make an empirical addition to that research.

  143. Rahul

    Yeah, completely agree. And by the way, since you brought up elevators, I’ve been fascinated by some of these elevators that are now available where the button you press to get to your floor is outside the elevator and outside of the…you know, like, you don’t make the choice of the floor you want to go to inside the elevator, you make it outside and it tells you which one of the elevators to go to based on whatever optimization it runs. And when you really think about it, that is such a…that is so simple to implement and it’s so much more effective in reducing congestion. I wonder why no one thought about it earlier.

  144. Stephen

    I used to work in a building a long time ago in Brisbane, Australia that had one of those. And it was neat because there’s peak periods, right, when everyone is getting back from lunch or everything, where they can do pre-emptive things like they can have a policy of if the elevator takes someone up to the 10th floor, just return it right back to the lobby.

  145. Rahul

    Correct.

  146. Stephen

    And it leaves so much room for that. And it’s actually interesting, I looked at this a long time ago: the elevators are a huge portion of a building’s footprint, especially in skyscrapers. And so, if you can reduce the need for them while maintaining quality of service, you get a huge amount of savings, especially over the lifetime of the building, right? How much is 100 floors of, you know, 50 square feet over 50 years worth? That’s a lot of savings. Although I think the way they used to be when they first came out, where the elevators were like ski lifts and you just had to jump in as they were going all the way up, that would be terrifying.

  147. Rahul

    Yeah, agreed. Yeah, so TwinMaker is actually really neat. You guys should all try it out. We’re going to do the same at our end with this new project that we’re just kicking off. But yeah, I think the TwinMaker setup with…I think there’s a service called SiteWise as well, which manages time-series data. So, yeah, this is an amazing tool for, you know, understanding what’s going on at the process level. So, yeah, folks should use it. You can figure out whether your parts are about to die or, you know, get ruined, whether you need replacement parts, whether you’re operating them at peak efficiency or not; you can gather all of that data in one place.

  148. Stephen

    I’m curious to see how much work it is to get your twin to truly mimic the more complex behaviors of the actual system that you’re trying to model.

  149. Rahul

    Yep, I think the amount of data that you collect becomes a big driving factor for that. The more data you have, the more the twin is able to learn from it and create all these interesting scenarios that can then…yeah, it’s fascinating.

  150. Stephen

    Yeah, and it represents a bit of a shift in thinking about models, right? Because from an academic perspective, a model meant, you know, equations that govern the fundamental physics or fundamental properties of the system. But if this is based on actual data and it’s saying, “Well, I know that for any scenario, if this goes in, this goes out,” it’s more like an empirical model but it’s equally useful, if not more so. All right. Well, anything else you want to say about Twins before we move on?

  151. Rahul

    No, I think that’s it on the digital twins.

  152. Stephen

    Got it. All right. Well, let’s do a transition…let’s do a quick break, and then when we shift back, we will be on QUIC.

  153. Woman

    Is your AWS bill going up? CloudFix makes AWS cost savings easy and helps you with your cloud hygiene. Think of CloudFix as the Norton Utilities for AWS. AWS recommends hundreds of new fixes each year to help you run more efficiently and save money but it’s difficult to track all these recommendations and tedious to implement them. Stay on top of AWS-recommended savings opportunities and continuously save 25% off your AWS bill. Visit cloudfix.com for a free savings assessment.

  154. Stephen

    All right, so in our final article of today, we’re looking at CloudFront supporting HTTP/3 powered by QUIC. All right, so this actually came out about 10 days…August 15th, so time flies, almost two weeks ago. We didn’t make it in the last ones but we’re bringing this up now because it is interesting. I mean, HTTP is a pretty fundamental protocol in all of our lives. So, the support for the relatively new HTTP/3, how is this going to affect our day-to-day operations?

  155. Rahul

    So, this one is actually really interesting. Just for some history, HTTP was originally a protocol that ran purely over TCP. Then came HTTP/2, which actually came out of a bunch of work at Google and then became a standard over a period of time; it basically brought in a bunch of efficiencies which HTTP/1.1 did not have, and it pushed SSL and, you know, security further into the HTTP protocol. What we started realizing is that, at some point in time, there was a need to get more efficient in the way the page loads were happening.

    And so, again, at Google, they created the first version of the HTTP/3 protocol, where instead of making these really heavy-duty connections…you know, the connection setup in TCP is pretty heavy duty. So, instead of doing that, they started leveraging a UDP-based connection, which became a lot more efficient. And of course, all the reliability guarantees that came with TCP were implemented in a slightly different manner in this new protocol. But in effect, HTTP/3 is about…was it 30% faster? Yeah, somewhere between 20% and 30% more performant.

    I think the way it compresses stuff, the way transfers can happen, the way site page loads come up, they are about 20% to 30% more performant than HTTP/2. And interestingly, the standards committee is still deliberating on what HTTP/3 should have. Right now there’s an ad hoc series of documents, which are being used by all the different browser vendors, so whether that be Microsoft or Google or Brave or, you know, you name it, they’ve all implemented a version of HTTP/3 on their platforms.

  156. Stephen

    [crosstalk 00:54:04] is RFC 9000, supported by RFCs 9001, 9002, and 8999. So, QUIC Version 1 has now been formalized, but yeah, there’s still…

  157. Rahul

    Correct. So, this is the QUIC version, so this is the transport protocol, but HTTP/3’s protocol definition has not yet been standardized, it’s still in deliberation. So, they expect the RFC to come out probably by the end of the year. But most major browsers already support HTTP/3, at least in the way it stands today; there are edge cases that are different. But we expect that the IETF will make this a standard by hopefully the end of the year.

  158. Stephen

    Well, it’s amazing how they even quantify the amount of work they’re putting into talking about it, 5 years, 26 in-person meetings, you know, 1,750 issues, and thousands of emails. And it’s good that they can release a new standard that, like they said, that is not…you’re not locked into the previous things. There’s certain things that we do now, we do a lot more streaming these days than we used to do, and so it’s good to have support for things where you don’t necessarily need to know if every single packet got delivered, but you want to get those packets as quickly as possible. And thus…

  159. Rahul

    Yeah, I think one of the other factors is also backwards compatibility. That’s kind of like a mandate for these protocols. And making sure the new protocols are completely backwards compatible, I think, is the big debate and argument right now in the committee for this particular protocol. I think I was reading about it somewhere. So, yeah, I mean, it’s no surprise, because when you actually look at the RFC when it gets published, it has all the information about every discussion that happened around each decision. Yeah, it’s no surprise that it takes an entire committee, with tons of deliberation and tons of work going on behind the scenes, to come up with these.

  160. Stephen

    Yeah. I mean, I still follow…I don’t use it very much anymore but I still follow the evolution of the C++ compiler and the C++ standard and reading how that evolves. Yeah, it’s a lot of work to really fight feature creep and have that balance between backwards compatibility and looking forward and it’s a tricky thing to navigate. So, I’m glad that I’m not doing that but I can enjoy the benefits of it. And so, for this particular article itself with CloudFront, it supports HTTP/3 powered by QUIC, it’s looking ahead…

  161. Rahul

    You just have to click one checkbox, I think, if I’m not mistaken, and it turns it on for you automatically. It is completely backwards compatible, so any browser that doesn’t support HTTP/3 will continue to work via HTTP/2 and that will just work seamlessly, you wouldn’t have to do anything else differently.

  162. Stephen

    Well, looking at this. So, this is great as we were talking about in terms of article quality, so they had, “To enable it on your distributions, you can edit through the console, through an API action, or a CloudFormation template,” and they give you a link to a CloudFormation template that you can use, and this is still going to work using previous versions. And look, “It’s available on all the edge locations at once.” That’s really good. There’s no extra charge. This is great. It’s hard to say…okay, it’s available everywhere, here’s one CloudFormation template you can run, here’s interesting links. I think this is doing pretty well as far as…
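
    Besides the console checkbox and the CloudFormation template the post links to, the API route boils down to one field on the distribution config. Below is a hedged sketch: the distribution ID is a placeholder, the boto3 calls are shown but commented out (they need credentials and a real distribution), and a plain dict stands in for the fetched config.

```python
# The real flow would be (calls commented out; "E1234EXAMPLE" is a placeholder):
# import boto3
# cf = boto3.client("cloudfront")
# resp = cf.get_distribution_config(Id="E1234EXAMPLE")
# config, etag = resp["DistributionConfig"], resp["ETag"]

# Stand-in for the fetched config, just to show the one-field change:
config = {"HttpVersion": "http2"}

# "http2and3" serves HTTP/3 to clients that support it and falls back to
# HTTP/2 for everyone else -- the backwards compatibility discussed above.
config["HttpVersion"] = "http2and3"

# cf.update_distribution(Id="E1234EXAMPLE", DistributionConfig=config, IfMatch=etag)
print(config["HttpVersion"])  # http2and3
```

    The fallback behavior is why the change is safe to roll out everywhere at once: nothing about the existing HTTP/2 path is removed.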

  163. Rahul

    It is pretty good. I actually think this one is really good. It covered a lot of the technical details, and I was able to research, you know, a lot of stuff around this HTTP/3 implementation from AWS relatively easily. So, I would give it a pretty high score. A four and a half? Five? Four and a half?

  164. Stephen

    All right. Well, I mean, we’re setting a precedent with this five.

  165. Rahul

    I hope that we are constantly raising the bar. Was that a four and a half? Yeah, it looks like a four and a half.

  166. Stephen

    It was four and a half.

  167. Rahul

    Yeah.

  168. Stephen

    I don’t know, I could have done, should we…?

  169. Rahul

    Sorry, go ahead.

  170. Stephen

    There we go. Let’s put it at five. That’s pretty good. They’ve got: it’s all regions, it doesn’t cost anything extra, here’s a CloudFormation template, here’s a reasonable amount of text about how it works.

  171. Rahul

    Yeah, I think this blog post is pretty good. And overall, I think it also simplifies. So, yeah, I would give it the Simplify stack.

  172. Stephen

    Awesome. Yeah. And especially, it’s backwards compatible; all you have to do is run a CloudFormation template and nothing is going to break, because if the client doesn’t support the new version, it will fall back to the previous one.

  173. Rahul

    Exactly. They’re trying to be straightforward.

  174. Stephen

    All right. Well, I think this is a good place to stop for the day. It’s been a really good discussion, some interesting articles. It’s really fun just seeing how far we’ve gone. I remember when we were talking about that 12 terabytes of RAM, I remember being a young teenager and saving up for one stick of 32 megabytes EDO RAM and thinking that we’ve gone from megabytes to gigabytes to terabytes. So, I think in another decade, we might be seeing this as petabytes and this instance that we’re talking about now might be, you know, the size of the Raspberry Pi 7 or maybe Raspberry Pi 10.

  175. Rahul

    By the way, the first machine we were able to afford to buy was actually a 386. And you got 32 MB of RAM? I had 4 MB of RAM, and that is a big trade-off because 4 MB of RAM cost about half the cost of the entire machine. It was that expensive back then.

  176. Stephen

    Did you have…did your machine have a turbo button?

  177. Rahul

    Yes. So, this was a 386 DX2/66, which operated at 44 hertz…44 megahertz, and it had a turbo button that would bump it up to 66 megahertz. [crosstalk 01:00:24] we got megahertz and not even gigahertz.

  178. Stephen

    I had a little two-digit, you know, eight-segment display and ours was from 8 megahertz to 40 megahertz. And I remember this because you can play Tetris and you hit turbo and just have all the pieces just [vocalization]. Oh, man.

  179. Rahul

    Oh, that’s a 5x improvement in processing speed. Yeah, we used to go…so, my machine used to go from, you know, 40 or 45 megahertz all the way up to 66 megahertz because it had the DX2/66 processor, and the 386 had that. Even though I originally started working with just the 80 machines where you had two floppy drives: you plugged the first one in, which would boot up half the OS, then plugged in the second one, which had the second part of the OS (DOS only, of course), and it would boot up, and then you would put in the third floppy to do anything else that you really wanted to do.

  180. Stephen

    Well, there is an old computer museum over here in Seattle. So, next time you’re in town, go over there, bring the families and let them see these machines that we worked on and maybe they’ll appreciate the CPU power they have today.

  181. Rahul

    And there’s an interesting one in Palo Alto as well if anyone wants to go check that one out.

  182. Stephen

    That would be cool. All right.

  183. Rahul

    Awesome.

  184. Stephen

    Well, thanks again. It’s been a really good segment and we will see you all next time.

  185. Rahul

    Thanks, everyone. See you all next time. Bye-bye.

  186. [music]

  187. Woman

    Is your AWS public cloud bill growing? While most solutions focus on visibility, CloudFix saves you 10% to 20% on your AWS bill by finding and implementing AWS-recommended fixes that are 100% safe. There’s zero downtime and zero degradation in performance. We’ve helped organizations save millions of dollars across tens of thousands of AWS instances. Interested in seeing how much money you can save? Visit cloudfix.com to schedule a free assessment and see your yearly savings.