AWS Made Easy

Ask Us Anything: Episode 11

In this episode, Rahul and Stephen review a selection of announcements from AWS. But first, we learn that Rahul has a special hobby – collecting musical instruments during his travels. This includes his most recent trip to Hawaii, and he gives us a small sample of his ukulele playing.



Also, as an announcement, Rahul and Stephen will be at the AWS Summit in Anaheim on August 18th, 2022.

After these announcements, they begin the “What’s New Review” segment with a discussion of EBS Elastic Volumes, followed by three announcements about Redshift. They then discuss new features for Timestream and Connect.

New Amazon EBS Elastic Volumes automated performance settings make it even easier to modify volumes and save costs

This feature makes it easy to convert gp2 volumes to gp3. However, the default volume type is still gp2, which is hard to explain. Rahul does a live demo of how simple it is to modify an Elastic Volume to gp3. CloudFix can make these modifications automatically and at scale.
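For readers who want to try this outside the console, the decision rule is simple enough to sketch. In the snippet below, the volume dict mimics the shape of one entry in EC2’s DescribeVolumes response, the helper name is our own invention, and the command in the comment is the standard `aws ec2 modify-volume` CLI call:

```python
# Sketch: decide whether a gp2 volume is a safe candidate for gp3.
# gp3's baseline is 3,000 IOPS, and it is ~20% cheaper than gp2, so a
# gp2 volume at or below that baseline loses nothing by converting.
GP3_BASELINE_IOPS = 3000

def is_gp3_candidate(volume: dict) -> bool:
    """`volume` mimics one entry of the EC2 DescribeVolumes response."""
    return volume["VolumeType"] == "gp2" and volume["Iops"] <= GP3_BASELINE_IOPS

# The actual conversion is a single call, e.g. with the AWS CLI:
#   aws ec2 modify-volume --volume-id vol-0abc... --volume-type gp3

vol = {"VolumeId": "vol-0abc", "VolumeType": "gp2", "Iops": 100}
print(is_gp3_candidate(vol))  # a 100-IOPS gp2 volume qualifies: True
```

Volumes consuming more than the gp3 baseline would need their provisioned IOPS carried over, which is exactly what the new automated performance settings handle.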

Redshift Announcements

Amazon Redshift Serverless is now generally available

Redshift Serverless makes it much cheaper to use Redshift, without worrying about the details of provisioning a cluster and sizing the resources appropriately.

Amazon Redshift announces support for Row-Level Security (RLS)

This adds finer-grained security for tables where some rows contain sensitive data, such as billing data.

Amazon Redshift improves cluster resize performance and flexibility of cluster restore

If you are using provisioned Redshift, rather than Redshift Serverless, then you may want to change the size of your cluster to adjust to the load. This announcement states that cluster resize is now faster!

Amazon Timestream announces improved cost-effectiveness with updates to metadata metering

Timestream is a time series database with a somewhat complicated pricing model. This announcement makes pricing more straightforward. Rahul makes an interesting observation about the relationship between Timestream and CloudWatch, in that logs are also time series data.

Amazon Connect Customer Profiles now enables you to integrate unified customer information into your custom agent applications

Amazon Connect Customer Profiles now has a JS library for embedding information about a customer inside another application. This empowers agents to have the data they need to offer a personalized experience. This is another great release by the Connect team, which has been mentioned in many of the other “What’s New Review” segments.

AWS Made Easy


  1. Stephen

    Hello, and welcome to “AWS Made Easy,” episode number 11. I’m Stephen Barr, joined by my co-host Rahul Subramaniam.

  2. Rahul

    Hey, everyone. Welcome back. I can’t believe we’re on episode number 11. Last week was pretty awesome. We did the 10th episode, and it feels awesome to have this consistency.

  3. Stephen

    Yeah. Really enjoying the process. I still can’t believe that we can do this…we don’t have a production crew in the background. It’s you and I and a bit of help from a designer with some of the clips. I still remember the part in “Back to the Future” where the 1955 Doc Brown looks at the video camera and marvels at a portable television studio. And I think this is almost the extension of that, where now we have a whole live TV studio and we can achieve pretty good production value, I think.

  4. Rahul

    I know. I mean. And if it weren’t for the cloud, I don’t even know how this would be possible.

  5. Stephen

    Oh, gosh.

  6. Rahul

    If you’d thought about this a few years ago, yeah, it would have been just impossible.

  7. Stephen

    And all the little details. We’re using a thing called StreamYard to manage the multiple destinations. We’ve got people on Twitch, on LinkedIn, we’re doing Facebook Live now, and Twitter live streaming. It handles all the bandwidth and the distribution, all the sessions and the tokens, and even some of the things behind the scenes, like the audio levels, it works out automatically. It’s pretty neat. And you can do it all yourself.

  8. Rahul

    No. Yeah, I mean, I think this is quite revolutionary. And given how many people are now independent podcasters, and YouTube broadcasters, and so on. I think platforms like these are incredible. And I think we should one day, you know, invite someone from StreamYard to come in and talk about how they’ve created this service and what goes behind the scenes of something like StreamYard. I think that’d be a very interesting episode to have.

  9. Stephen

    Yeah. I’m gonna put that down on my notes. I’ll reach out. StreamYard.

  10. Rahul


  11. Stephen

    That’d be a lot of fun

  12. Rahul

    Which also reminds me, I think we need to also get on YouTube. I just realized we’re not streaming to YouTube. So, next week, let’s see if we can get there as well.

  13. Stephen

    Yeah, I’ll add that to the destination list. It’s really neat. Yeah. I can do that.

  14. Rahul

    And it makes it so easy, right? I mean, you just decide all the destinations that you want to be on and go configure those. Fairly straightforward.

  15. Stephen

    All right. All right. So, we’ll see you next week on five platforms. That’s Facebook, LinkedIn, Twitch, Twitter, and YouTube.

  16. Rahul

    Absolutely. So, how was your weekend, Stephen?

  17. Stephen

    Really good. We had some family come in from out of town. So, a lot of eating, a lot of tourism. And it was really nice to catch up with people I hadn’t seen since before COVID.

  18. Rahul

    Awesome. Did you just have the wedding in your family, or is it yet to happen?

  19. Stephen

    No, no. That happened.

  20. Rahul

    Oh, that happened. Awesome.

  21. Stephen

    Yeah. That was…

  22. Rahul

    Did you manage to fit everyone in the protocol? Which is the big challenge.

  23. Stephen

    Yeah. It worked out surprisingly well. One of those things where it’s hard to see it in advance, but then yeah, it actually worked out really well. So…

  24. Rahul

    And Seattle weather held up?

  25. Stephen

    We were actually really lucky that it was just a little bit overcast, because I was worried that if we had one of those really nice hot days, you’d get, you know, one of those classic fail videos where someone passes out during the wedding party. Luckily, it was a little bit overcast. I think the photographers were happy, and it was nice and cool. So I think it was just perfect.

  26. Rahul

    That’s awesome.

  27. Stephen

    And then it rained the next day. So, threaded the needle on that one. And what about you? How was your weekend?

  28. Rahul

    Weekend was good. I celebrated my birthday.

  29. Stephen

    Oh, happy birthday.

  30. Rahul

    Yeah. Thank you so much. Give me just one second. Yeah. So, I celebrated my birthday. And yeah, had lots of family around. So, that was pretty awesome. And, yeah, just a quiet weekend with the family. Nothing earth-shattering and oh, yeah, I spent a little time practicing my new acquisition, which is a ukulele that I’d picked up from Hawaii.

  31. Stephen

    Let’s hear it.

  32. Rahul

    I don’t know if it’s tuned. I’m still learning the chords. So, someday when we don’t have an AWS announcement, which seems very unlikely, I might actually play a song.

  33. Stephen

    Well, actually, would you do like some intro music? That’d be kind of cool.

  34. Rahul

    All right. Let’s try that.

  35. Stephen

    Yeah. We’ll mix it in. It’s hard to find good royalty-free music. So, if you do some good intro music and we’ll mix it in, I think that’d be fun for some of the transition music.

  36. Rahul

    Yep. We can definitely give it a shot. Would be an interesting experiment. So, yeah. One of my hobbies is I collect musical instruments from wherever I travel, something that’s unique to the location, and try to learn how to play it. So, yeah. It’s fun.

  37. Stephen

    We’re excited to have a ukulele intro and outro coming out soon.

  38. Rahul

    Let’s add that to our list for the show. Okay. So, we have a ton of announcements that, as usual, came up this week. Shall we dive right into it?

  39. Stephen

    Yeah. Oh, well, one last little announcement: I’ll share the screen. You might be able to meet us in person in August. We’ll be in Anaheim, not for Disney World or Disneyland, but for the AWS Summit. Though maybe we’ll go on a roller coaster after. AWS Summit, Anaheim, 2022. So, if you happen to be there, it’s August 18th. We might see you there.

  40. Rahul

    Yep. And I’m really excited to say I’m gonna be doing two talks at the Anaheim Summit. The first one is going to be about why you should think of code as liability and discuss the future of how apps should and could be built. And the second one is how you can systematically get to 70%, 80% of cost savings on your AWS bill, and what are the methodologies that we’ve tried and experimented with and refined over the last 14 years trying to get there. So, we’ll be sharing a lot of those insights. So, we’d love for all of you to come and join us in Anaheim. And if no one shows up, I am bringing my ukulele along. So…

  41. Stephen

    All right. So, you will find us either in the convention center or outside.

  42. Rahul


  43. Stephen

    It’s funny. I love that “code is liability.” I don’t want to give it away, but it’s almost like that word serverless. Right? When you think code, you think that’s my asset, that’s my IP. Just like serverless: where’s my server? But that’s such a great… I won’t give it away. That’s an exciting talk that’s coming up.

  44. Rahul

    Yeah. And I would love to have an awesome discussion at the end of it. So, yeah. Please come and join us at Anaheim. Looking forward to it.

  45. Stephen

    All right. Well, let’s cue our transition, and then we will be talking about EBS Elastic Volumes. So, here we go.

    Okay. So, the first announcement that we want to talk about is Amazon EBS Elastic Volumes automated performance settings. The big deal, and I think we’ve been talking about this particular change for a while, is it’s saying that when customers use Elastic Volumes to change from gp2 to gp3, EBS will automatically provision the target gp3 volume with the equivalent IOPS and throughput. So, gp2 vs. gp3, what’s the big distinction here? And why would we want to do this?

  46. Rahul

    So, coming up on about a year and a half ago, AWS announced a new volume type called gp3, which basically has a minimum of 3,000 IOPS. It’s significantly more performant than gp2 and it is 20% cheaper. That’s the beauty of gp3. And what this announcement says is that every time you modify your volume, if it is the elastic volume type, and I forget what the cutoff date was, but beyond that date, all the EBS volumes that you launch afresh are of the elastic volume type, so it’s very easy to change them. But when you move those from gp2 to gp3, there’s a bunch of settings that get carried over. So, for example, no matter what your IOPS is, it will set the IOPS to either the gp3 minimum, which is 3,000 IOPS, or to whatever IOPS you had on your gp2 volume, automatically, without you having to specify it explicitly. So, yeah. It’s actually pretty straightforward. My only grouse about this with AWS is that the default for launching an instance is still gp2, and I don’t understand why. They go to such lengths to try and save money for the customer, but they’re literally missing the most obvious thing. If we can just share my screen.

  47. Stephen

    Yeah. Let’s do that.

  48. Rahul

    I actually wanna show this. I wanted to try something out over here. It’s the first time that we’re trying… I’m trying to screen share and do a live demo. Thankfully, it’s an AWS system, so I don’t have to worry about things going awfully wrong. But if you look at the default storage, it always shows up as gp2. And you have to click this and select gp3 yourself if you want it. I don’t see any reason why it shouldn’t be gp3 by default. If somebody wanted something more specific, fine, go choose something specific. But I still don’t understand why AWS has left this to be gp2 by default, and not gp3. Now, let’s just go ahead and launch this instance, hopefully.

  49. Stephen

    Well, I guess it’s to give us something to talk about, about why we need to make it easy.

  50. Rahul

    No, absolutely. I mean, that is 20% savings left on the table for every single instance you launch. Like, imagine the friction that exists because you have to scroll down that huge list of configuration parameters, and then choose something that’s not the default. Right? That just makes no sense at all. So, let’s go to this particular volume. Let’s pick our storage, and here’s our volume. If you go to this particular volume, you can now say modify. And because this is an elastic volume type, you can choose to make it gp3. And if you look at it, the IOPS and the throughput got automatically populated. Okay? Here, because on gp2 the default was about 100 IOPS, it defaulted to 3,000 IOPS. So, you get way more performance by default on gp3, at a 20% lower price. I mean, it couldn’t be more of a no-brainer to start with a default of gp3. And if you’re using anything less than 3,000 IOPS, just switch over to gp3. I don’t even know how much more straightforward that could be, but they just don’t make it that simple. You know, they complicate it with this.

    So, for those of you following along, this particular announcement basically says: A, you start with the gp2 default. And then when you go to modify your volumes, as long as it’s an elastic volume, past that cutoff date, these two parameters, the IOPS and the throughput, are automatically going to be set to whatever your gp2 settings were, or to the minimum of what gp3 supports, which is significantly greater than the gp2 default. So, that’s it. I think that brings us to CloudFix.

  51. Stephen

    So, it’s really neat to see the ability to do it with a click through the console. But what if you have hundreds of volumes, you’re not gonna do that. Is there a way we can automate this?

  52. Rahul

    Absolutely. I think that’s where CloudFix comes into play. I mean, you could write a very simple script that constantly scans all your volumes, but there’s just a lot of operational overhead in doing that yourself, so we’ve got a tool called CloudFix that does pretty much exactly that. It scans through all of your volumes that are gp2, and any gp2 volume that is using less than 3,000 IOPS, it’ll convert to gp3. And this announcement brings up a really neat possibility: for gp2 volumes with more IOPS, we’d have to look at some of the end use cases, but we might be able to move those by default as well, carrying over all the performance parameters, because that’s taken care of by AWS automatically for us. So, it actually opens up a lot more volumes for CloudFix to automatically fix.
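As an aside for readers, the sweep Rahul describes could be sketched like this. The volume dicts mirror the shape of EC2’s DescribeVolumes output, the function name is our own, and the boto3 call in the closing comment is the real `modify_volume` API:

```python
# Sketch of a CloudFix-style sweep: scan a list of volumes (shaped like
# the EC2 DescribeVolumes output) and plan gp2 -> gp3 conversions for
# every gp2 volume at or below the 3,000-IOPS gp3 baseline.
def plan_gp3_conversions(volumes, baseline_iops=3000):
    return [v["VolumeId"] for v in volumes
            if v["VolumeType"] == "gp2" and v["Iops"] <= baseline_iops]

volumes = [
    {"VolumeId": "vol-01", "VolumeType": "gp2", "Iops": 100},
    {"VolumeId": "vol-02", "VolumeType": "gp2", "Iops": 9000},  # above baseline: needs a closer look
    {"VolumeId": "vol-03", "VolumeType": "gp3", "Iops": 3000},  # already converted
]
print(plan_gp3_conversions(volumes))  # ['vol-01']

# Each planned ID would then get a boto3 call such as:
#   ec2.modify_volume(VolumeId=vol_id, VolumeType="gp3")
```

With the new automated performance settings, the IOPS and throughput carry over on that modify call, which is what makes an automated sweep like this safe.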

  53. Stephen

    Okay. So, as you’re doing the transition, you’re looking at the usage and then setting the settings on the new volume type so it’s literally as good or better than the previous one while still giving you a savings?

  54. Rahul

    Exactly. So, yeah. Check out CloudFix. If you have loads of volumes, I’m pretty sure you’re gonna be exhausted clicking through every single volume. You just saw the exercise that I had to go through. You had to click on the… Should we go over that again?

  55. Stephen

    Yep. We’ll put that in the highlight reel of what to do, what not to do. You just do it once or twice, but not 100 or 1,000.

  56. Rahul

    Not 100. Yeah. It’s just too painful to go through every single volume, then click on modify, then go into the settings, and change it to gp3 from the drop-down. Yeah. You just don’t wanna do that. You want something that constantly sweeps your accounts, looking for these kind of volumes, and fixes them automatically. Just takes care of that for you.

  57. Stephen

    Perfect. Yeah. And I guess these types of fixes, it’s easy to forget to do them. I know I still have a few S3 buckets that really should be in Glacier. I haven’t accessed them in a long time. And there’s not much in them, but again, it’s easy to forget, and there’s still money on the table. So, I have to get those done. That’s my CloudFix for my account. All right. Anything else to say about EBS Elastic Volumes?

  58. Rahul

    No. Just: AWS, please, default to gp3. They just built an entire new console with a new UX, all of that stuff. Do the simplest thing first: just default instance launches to gp3. You’ll save people tons of money. You’ll make customers very, very happy. So, please, do that.

  59. Stephen

    All right. I’m gonna post that particular plea as a highlight so we could say, “All right. Please do this. Save some money.” All right. Well, let’s transition over to Redshift serverless.

    Okay. So, the next announcement we’re gonna talk about, and I’ll put this in the comments so you can follow along with the link. And even this part, like we talked about with StreamYard: I’m putting it in one chat window, and it goes out into the chats on Facebook and Twitch and Twitter. That’s pretty neat. There’s a lot of API integrations you do not have to worry about. So, [inaudible 00:17:14] on Redshift Serverless. Now generally available. So, Redshift Serverless… The idea of Redshift is that you get this big data warehouse. And I personally don’t have any data needs that are big enough to use Redshift, but I want to use it. And I don’t wanna have a Redshift cluster sitting there to, you know, analyze my couple of gigabytes of data. That seems like overkill. But now, with Redshift Serverless, I can. I can have some things in an S3 bucket or an Aurora Serverless that then get pulled into Redshift Serverless.

  60. Rahul

    Correct. Yeah. So, I think serverless is the trend. I think they figured out the serverless pattern very well with the Aurora team. And, you know, Aurora went through two revisions of serverless, V1 and now V2. And I believe that this version of Redshift Serverless is built pretty much like the V2 version, where you only pay for whatever compute and time you’re actually using. A big deterrent for Redshift really is the cost, right? I mean, you don’t really run your analytics workloads that frequently. And when you actually do want that, if you were to keep those, you know, D-family instances running for a really long time, it can cost you a bomb. So, instead of running those instances all the time, the fact that it can now scale up and down as needed, and you pay only for compute when stuff is actually getting used, is pretty remarkable. I like this. I like the variety as well within the AWS ecosystem: if, for analytics workloads, you don’t care so much about near real-time latencies, then S3 with Athena is a really good option. But if you really wanted near real-time results in your dashboards and stuff like that, then yeah, Redshift is really valuable, but the costs are significantly greater. And so, this just makes it more economical to use Redshift.

  61. Stephen

    And what it’s saying is that billing is on a per-second basis, right? So, it’s not rounding to the nearest hour or half hour; it’s per second. That’s pretty phenomenal that they can do the billing down to that fine a grain. So, you really wouldn’t hesitate to use this. You could monitor your plant irrigation system with Redshift if you really wanted to now. Even if you’d never have considered doing that before, you could.
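To make the billing granularity concrete, here is a toy back-of-the-envelope comparison. Both prices are hypothetical placeholders, not actual AWS rates, and the workload numbers are made up for illustration:

```python
# Toy cost comparison, with HYPOTHETICAL prices, showing why per-second
# serverless billing matters for bursty analytics workloads.
provisioned_price_per_hour = 5.00     # hypothetical always-on cluster rate
serverless_price_per_rpu_hour = 0.36  # hypothetical per-RPU-hour rate

# Workload: one 15-minute query burst per day at 32 RPUs of capacity.
burst_hours_per_day = 0.25
rpus = 32

provisioned_per_day = provisioned_price_per_hour * 24
serverless_per_day = serverless_price_per_rpu_hour * rpus * burst_hours_per_day
print(round(provisioned_per_day, 2), round(serverless_per_day, 2))  # 120.0 2.88
```

The exact numbers don’t matter; the point is that an always-on cluster bills for all 24 hours while serverless only bills for the 15 minutes of actual compute.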

  62. Rahul

    Yeah. I generally use that phrase, you know, bringing out a cannon to swat a fly. This is one of those scenarios: using something like Redshift for a home IoT system would literally be that analogy. So, considered and rejected.

  63. Stephen

    Very well. I do like the simplicity here: no need to choose node types, workload management, scaling, all that stuff. It really feels like it’s another level of abstraction above. You know, once you start going through the menu of regular Redshift, there’s just a lot to think about, and a lot of decisions you have to make upfront. Whereas here, you’re just scaling as you need.

  64. Rahul

    Exactly. So, I think this is pretty neat in terms of enabling people to use those analytics tools really quickly and get started. The fact that they’re actually taking away a bunch of decisions makes it simpler and easier for people to adopt these services, unlike another announcement that we’re gonna be talking about in just a bit, which does exactly the opposite. So, we get mixed signals from AWS, but we’ll come to that in just a bit. But yeah, this is definitely simpler. I love the simplicity aspect of it. And the more choices that AWS takes away, just giving you good defaults and taking care of stuff behind the scenes, the better it is for customers, in my view.

  65. Stephen

    And one thing I really like about Redshift as well, in terms of adding practical features for customers: they even have pre-loaded datasets of the kind of background data that you would want. Say you have sales data by county or by postcode; they have pre-loaded population data keyed the same way, so you can look at some metrics. You don’t have to go hunt that down from the Bureau of Labor Statistics, or wherever it happens to be, and then clean it and parse it. That’s already done. That can be really difficult for someone who’s just getting started, finding some sample data when they just want to explore. So, I like that there’s sample data available for experimentation.

  66. Rahul

    Exactly. I mean, all of those things help a ton. They really do help a ton to get started. There are actually a bunch of patterns that you can get started with right out of the box. There are some CDK templates and CloudFormation templates that get you set up with a basic Redshift, Athena, QuickSight kind of setup. So, you can get started with stuff right out of the box. I would encourage folks to go try that out, and then you can load other datasets into it to get your analytics workloads going. So, yeah. [crosstalk 00:23:16] I’d say thumbs up on this one. AWS simplicity and serverless, definitely the way to go.

  67. Stephen

    Yeah. Absolutely. You know, I was just thinking about, I want to have a little OBS overlay that we can give like a score to each announcement and just push a button. I’ll give this a nine out of 10 or four stars or five stars, whatever it is. I like this. Thumbs up. Redshift serverless is very exciting. Any other announcements before we move on to…let’s see, move on to Timestream?

  68. Rahul

    I think there were a couple of more announcements.

  69. Stephen

    Oh, yeah.

  70. Rahul

    Redshift announced support for the R6i if I’m not mistaken. And row-level security.

  71. Stephen

    Where’s the R6i?

  72. Rahul

    Sorry. No, R6i was on Postgres, if I’m not mistaken.

  73. Stephen

    Yeah. There’s row-level security. I’ll put that in the chat.

  74. Rahul

    So, this one basically just gives you more control over what you expose. Now, typically, in a data warehouse, you’ve got, you know, data from all over the place, and you want to restrict at times what data gets exposed by a query. You don’t want all the data across all the joins, because effectively what you have in Redshift is flattening out, or denormalizing, a whole lot of data from different sources and then dumping it all in one massive columnar store. So, you really don’t want to expose everything to everybody. Now you get row-level control to make sure that you’re exposing only what is relevant.
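As an aside, the idea can be sketched in a few lines of Python. The roles, rows, and policy functions below are invented for illustration; in Redshift itself, RLS policies are defined in SQL and attached to tables and roles:

```python
# Toy analogy for row-level security: a policy maps the querying user's
# role to a per-row predicate, and queries only ever see rows that pass.
rows = [
    {"patient": "A", "kind": "billing",   "amount": 120},
    {"patient": "A", "kind": "diagnosis", "code": "J45"},
    {"patient": "B", "kind": "billing",   "amount": 80},
]

policies = {
    "billing_clerk": lambda row: row["kind"] == "billing",
    "clinician":     lambda row: row["kind"] == "diagnosis",
    "admin":         lambda row: True,  # unrestricted
}

def query(role):
    allowed = policies[role]
    return [r for r in rows if allowed(r)]

print(len(query("billing_clerk")))  # sees only the 2 billing rows
```

The win with native RLS is that this filtering happens inside the warehouse on every query, instead of being re-implemented (and possibly forgotten) in every application.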

  75. Stephen

    And especially, for example, I used to work in healthcare IT, and not everyone should have access to some of the billing data, just as not everyone should have access to, say, diagnoses or medical codes, and vice versa. And so, you’re able to have these policies without, like you said, having to make some really complicated schema or something like that. You have column-level and now row-level access control.

  76. Rahul

    Correct. So, this one is neat. And I think there was one more Redshift announcement.

  77. Stephen

    It was the cluster resize.

  78. Rahul

    All right. So, again, if you did not want to go with serverless, then…

  79. Stephen

    Here we go.

  80. Rahul

    Cluster resize performance carries over, you know, pretty much the same way. So, cluster resize used to take forever to run because they had to replicate a whole lot of stuff and make it all work. Now, that performance has been improved pretty significantly.

  81. Stephen

    I mean, it seems like a lot of the performance improvements that we’re seeing are really functions of Nitro, and of how quickly data can be moved around, like, behind the scenes.

  82. Rahul

    Yeah. And the other thing that I really wonder is if a lot of these instances are actually now using Graviton, where they have possibly tuned a bunch of the operational pipelines for Redshift and some of the more cloud-native services. So, that’d be an interesting one to find out, whether they’re using Graviton to do these.

  83. Stephen

    Yeah. No. I would love to know the behind-the-scenes Graviton underpinning. So, let’s try and see if we can find someone to talk about the behind-the-scenes of this Redshift cluster resizing and other similar things, to see how they’re doing that, if they’re willing to talk about some of the detail. Some of it is, I don’t wanna say classified, but sensitive. But it’s so interesting, though.

  84. Rahul

    Absolutely. And we should probably have someone from the Graviton team come in and talk about this.

  85. Stephen

    Yeah. All right.

  86. Rahul

    Yeah. Okay. So, yeah. I think those were the announcements on the Redshift one.

  87. Stephen

    Well, let’s take a quick 30-second break, and then we will switch gears over to Timestream.

  88. Rahul

    Sounds great.

  89. Announcer

    Is your public cloud cost going up and your AWS bill growing without the right cost controls? CloudFix saves you 10% to 20% on your AWS bill. They focus on AWS-recommended fixes that are 100% safe, with zero downtime and zero degradation in performance. The best part: with your approval, CloudFix finds and implements AWS fixes to help you run more efficiently. Visit CloudFix for a free savings assessment.

  90. Stephen

    All right. We are back. Amazon Timestream announces improved cost-effectiveness with updates to metadata metering. So, I haven’t used Timestream too much. It’s a time series database. And the idea is… Okay. Let’s just parse this together: Timestream will no longer charge customers for the dimension names and measure names associated with ingesting, storing, and querying data written after July 8th, 2022. All right.

  91. Rahul

    Yeah. So, Timestream is neat. For those of you who haven’t used one, a time series database is a purpose-built database for data where the date-time is your primary key, or an index. And that allows you to partition your data by time, right?

  92. Stephen

    So, well, I guess one very interesting example at the moment: exchange rate data. That’s a time series.

  93. Rahul

    Correct. Or stock prices. That’s another kind of time series data.

  94. Stephen

    Or Bitcoin prices.

  95. Rahul

    Logs. Event logs. So, let’s say you have a system that’s just generating a ton of events. All of those event logs, network traffic, monitoring like your VPC logs, and stuff like that, that’s time series data as well. And Timestream actually provides a very simple interface: you take all of the data and load it up, it stores it internally very sequentially, so it’s easy to index, easy to retrieve, and then you can perform time-related operations. So, if you want aggregations over a day, over a week, over a year, things like that become easier to do with a time series database than trying to do the same thing with a conventional relational data store.
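As an aside, the kind of time-bucketed rollup Rahul describes can be sketched in plain Python. The events and totals below are made up, and in Timestream itself this aggregation would happen server-side in SQL:

```python
# Sketch: the kind of time-bucketed aggregation a time series database
# makes cheap. Events carry an epoch-second timestamp and a value;
# here we roll them up into per-UTC-day totals.
from collections import defaultdict
from datetime import datetime, timezone

events = [
    (1657238400, 3.0),  # 2022-07-08 00:00 UTC
    (1657242000, 5.0),  # 2022-07-08 01:00 UTC
    (1657324800, 2.0),  # 2022-07-09 00:00 UTC
]

def daily_totals(events):
    buckets = defaultdict(float)
    for ts, value in events:
        day = datetime.fromtimestamp(ts, tz=timezone.utc).date().isoformat()
        buckets[day] += value
    return dict(buckets)

print(daily_totals(events))  # {'2022-07-08': 8.0, '2022-07-09': 2.0}
```

With data sorted and partitioned by time, a store like Timestream can answer this kind of query without scanning unrelated partitions at all.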

  96. Stephen

    And again, as with a lot of technology, it seems simple: it’s just sorted by one column. But this stuff gets really tricky to deal with in practice. Yeah. I used to do some graduate-level time series analysis. We did these autoregressive integrated moving average models, called ARIMA, where you predict something using the moving average of, say, the past hour or the past day or the past month, with different weights on different pieces of time. And you have to adjust for seasonality, because, say, sales is often very seasonal, so you want to adjust for the holidays, or now Prime Day, things like that. It just gets tricky to do in practice. So having a purpose-built data structure that can scale and handle this at any scale you want, trillions of records per day, that’s really exciting.

  97. Rahul

    Yeah. Absolutely. And the only confusion for me, and I’m not yet coming to the pricing part of it, but the part that I don’t get in terms of AWS’s vision, and something I’ve brought up a few times: because AWS has different teams, they take completely different paths. CloudWatch is itself a time series data store, if you really think about it. You can keep reporting events into CloudWatch, and you have all the aggregations with metrics. And then you also have dashboards that come right out of the box for you. And then you can have alarms and alerts on that time series data. So, CloudWatch actually seems like a pretty good encapsulation over something like Timestream, which gives you all these other benefits. But if you literally needed to use all that raw data and write your own queries and manage your own metrics and aggregations and all of that stuff, then maybe you’d want to use Timestream. But I haven’t come across a scenario where I would pick Timestream over CloudWatch by default yet. I would love to hear from the audience: if there are scenarios where you believe you’d pick Timestream over CloudWatch, I’d love to hear.

  98. Stephen


  99. Rahul

    I mean, I know CloudWatch is meant for logging, fundamentally. But logging is just another way of thinking about… I mean, logging is almost synonymous with taking a bunch of events and ordering them along the axis of time.

  100. Stephen

    That’s a good way of thinking, right? What is the real fundamental difference between log analytics and any other type of time series analytics? And so, it might be…basically, you wanna ask the same questions like what happened in the past, what are the patterns, what are the anomalies, what are cyclical variations, and what’s gonna happen in the future?

  101. Rahul

    Correct. I think, in fact, something like logs needs additional features like full-text search on them. But net-net, I feel like something like CloudWatch would actually be more popular than Timestream for that very reason. You get all of that stuff out of the box as a service. And so, I’m still confused about who uses Timestream as their time series data store as against CloudWatch, and why they made that decision.

  102. Stephen

    That is a really interesting point. Well, to any users of Timestream, or Timestream developer advocates: we’d love to know some use cases, so please reach out. I’ll start looking on Twitter to see where Timestream is mentioned.

  104. Stephen

    Neat. All right.

  105. Rahul

    Coming back to this cost announcement over here. Just gonna get into the point of the announcement.

  106. Stephen

    Yeah. Immediately. Yeah. Good point.

  107. Rahul

    So, AWS had a crazy multi-dimensional costing matrix that… I mean, I would always be so confused by it every time I looked at it. Thankfully, no one ever used it, at least within our org. So, we never had to go through the details of, you know, how we priced it and how we planned for it. But yeah. It was insanely complex. Now, hopefully, you know, with some of the stuff being simplified, people might actually stop getting confused by just the pricing and the documentation, and start using it more often. So…

  108. Stephen

    I guess that’s one of the things we heard. What was it? It was the DynamoDB team when they said at one point they had pricing that was a little bit confusing, and they’ve had to simplify that. So, there is a pattern across different AWS services of simplifying pricing. Because although the team that develops a service has a model of all the different dimensions and the constraints, ideally, that complexity should be somewhat opaque to the end user.

  109. Rahul

    Yep. I just thought of something. You know, from our next episode onwards, we should figure out if there’s a way to have a stamp come in on these announcements where we say, “Simplified. Thumbs up.” That kind of a stamp. And then there are announcements that make things more complex and give, you know, AWS customers so many more hard choices to make. We have opinions about which of these are simplifying things and which are making them more complex. So, yeah. Let’s share that with everyone.

  110. Stephen

    Yeah. All right. That sounds really fun.

  111. Rahul

    This one is certified: this definitely simplifies the AWS world.

  112. Stephen

    Yes. And it is a trade-off, right? Because as the services become more generic, or I guess as more use cases come up in practice and features get requested, obviously, the scope enlarges over time; that’s the general trend. And so, it’s really difficult to simplify things and get that exact level of usability where almost every use case is covered and yet it’s still approachable for someone who’s new to it, without having to read a couple hundred pages of manuals or make an expensive mistake.

  113. Rahul

    Yeah. I think that’s where you kind of have two variants, right? I mean, with something like Aurora, you have the serverless and the non-serverless version. But the serverless part of it addresses 80% of use cases. And for those who are on the edge, you know, you’ve got your levels of control as need be. But to take something that’s complex and simplify it with serverless is great. Simplifying the pricing and, you know, the costing mechanics definitely helps. It definitely helps with adoption, without a doubt.

  114. Stephen

    Yeah. It’s funny, I was reading a very similar discussion. One of my favorite hobbyist languages is Haskell. And they’re at this point where the compiler is somewhat straightforward to extend. But they’ve made so many syntax extensions and other additions to the base language over the years that now, unless you’re an expert, and I’ve been following it for the last couple of years, it’s really hard to just jump in. And so, there have been discussions about, well, should there be a simplified common Haskell variant which takes some of the established, you know, middle-of-the-bell-curve patterns and says, “Okay, here’s where you start,” and there’s all this stuff at the periphery that you probably don’t need. And I think that’s a good thing to do as things evolve: highlight the common path.

  115. Rahul

    I think it’s the same thing with Lisp. I mean, you went with Haskell and functional programming; I went with Lisp because I’m an Emacs person. I hope there are no rotten tomatoes thrown at me. But yeah, I’m an Emacs guy. I’ve always been an Emacs user, and Elisp was the way to kind of live in that world. And Lisp went through its cycle of all kinds of different variants. In fact, we had a product that we acquired, I think around 2005, 2006, which was basically a real-time process control engine written in a variant of Lisp that existed in the 1970s. And the company that built the compiler for it, again a variant of Lisp, went out of business somewhere in the ’80s. So when we acquired the product, there wasn’t a compiler. There wasn’t a system on which we could continue working on that process control engine. It was really important because significant industries and, you know, big infrastructure organizations used and continue to use that platform. So, we spent a ton of time translating and moving all of that stuff to Common Lisp. Unlike Haskell, the Lisp community was able to actually standardize on a variant called Common Lisp. And we were able to migrate all of that code base to that Common Lisp variant.

  116. Stephen

    Oh, we have to have like a story hour where you give us the details. That seems pretty fascinating.

  117. Rahul

    Yep. Now we went through some harrowing times with that version of the product.

  118. Stephen

    So, I guess to summarize it all: we are in favor of simplicity, and yet we still like the extensibility. It’s good to be able to communicate, through the UI, the documentation, or both, when you’re on the common path and when you’re not.

  119. Rahul

    I’d go one step further. I’d say default on simplicity. And if you have additional bandwidth and you want to showcase something, then have that as an extra. But default always to the simple, easy, straightforward solution so that it’s easy to adopt.

  120. Stephen

    Cool. I like it. All right.

  121. Rahul

    Awesome. Next one.

  122. Stephen

    Next one. Okay. We are going to be talking about…let me load up the banner. We will be talking about “Connect Customer Profiles.” Give me one second here.

    All right. Amazon Connect Customer Profiles. And this is saying it now enables you to integrate unified customer information into your custom agent application. So, the idea of this is, say, you’re at a call center, or in some other way you are interacting with customers. And as you’re interacting with them, you want all the background information to come up. Is that the idea? And then you want to know… I see it’s leveraging machine learning-based entity resolution to create a single unified profile for each of your customers. That’s a big win.

  123. Rahul

    So, Connect has something called “Find Matches.” What it does is fuzzy-match the idea of a profile. So, let’s say you had NetSuite, which is your financial system; you have Salesforce, which is a CRM; you have Zendesk, which is your support system; and a few other systems that you use. Each one of those systems has a different concept of a customer, right? You’ve got emails that might vary, the person might vary, the name of the customer or the company might vary: in one place, you might say IBM, and in another you might say IBM Asia, or IBM U.S., or whatever. Others might call it IBM Corp. You and I know that it’s IBM, but for a machine to know that IBM and IBM Corp and, you know, some other subsidiary might all refer to the same customer and the same person and the same team, that kind of resolution is hard.

    So, what Find Matches does is, A, you know, Connect basically can act as your central manager of your customer profile, where you can bring in data from all of these different systems and create this amazing 360-degree view of your customer across all of them. All of their interactions with support, your interactions with your sales team, all the interactions that they’ve had on billing and finance and purchases and stuff like that, you can now bring it all into one view. And that’s what Connect does. It disambiguates the idea of a customer across all of these different systems that you pull data from and gives you that one view. What this announcement says is you can now take this profile and export it into whatever system you use for your agents. So, let’s say you were using, I don’t know, Twilio to contact your customers; then in your Twilio dashboard, or your agent application, you could bring in this customer profile information and leverage it. So, I love Connect for two reasons. One is if you just looked at the pace at which they’ve been bringing some amazing insights into this platform, it’s been crazy. I think, including this one, if you look at the last 11 episodes, we brought up Connect in maybe seven of them.

    So, the rate at which they’re announcing new stuff coming to Connect is just absolutely mind-blowing. And they are turning into an amazing, amazing platform. So, if you haven’t checked out Connect yet, please do. It’s really awesome. It’s a super simple way to get your call center going. Creating a unified 360-degree profile of your customer, integrating all these other peripheral systems that you use, is just awesome. And for me, the underlying Find Matches algorithm is just amazing.
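
As a rough intuition for the disambiguation described above, normalization plus fuzzy string similarity gets you part of the way. This is a toy sketch only; the actual Find Matches feature uses machine learning-based entity resolution, and the suffix list and threshold here are invented for illustration:

```python
import difflib

# Legal suffixes to drop during normalization (an illustrative list).
SUFFIXES = {"corp", "inc", "ltd", "llc", "co"}

def normalize(name):
    """Lowercase, strip punctuation, and drop common legal suffixes."""
    words = name.lower().replace(".", "").replace(",", "").split()
    return " ".join(w for w in words if w not in SUFFIXES)

def same_company(a, b, threshold=0.8):
    """Fuzzy-match two company names after normalization."""
    ratio = difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()
    return ratio >= threshold

# "IBM" and "IBM Corp." normalize to the same string; unrelated names do not.
```

Real entity resolution goes far beyond this, combining many fields and learned weights, which is exactly why having it as a managed feature is valuable.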

  124. Stephen

    A couple of years ago, I wrote a master patient index for a healthcare provider, and it’s a similar idea, right? They have, say, one doctor… So, my name is Stephen, obviously, but sometimes people will write it down S-T-E-V-E-N, sometimes people will spell it my way, S-T-E-P-H-E-N, and then sometimes it’ll have a middle initial. Then there’s what often happens in international situations: in the U.S. we still use month, day, year, but elsewhere it’s often day, month, year. I’m glad I was born after the 12th, so I don’t make that mistake. But when I was living in Australia, it took me a really long time to get the day, month, year thing. Is it November 10th or October 11th? It was really hard, and those kinds of swaps happen. And so, you have a machine learning system that can look for those types of things and say, “Okay. This is probably the same person.” You know, Steven, spelled E-V-E-N, and Stephen, P-H-E-N, but with the same birthday and the same social security number: yeah, we’re the same person. And that’s how identity resolution works: matching, you know, a certain number of points that have to match to cross a threshold. That’s a really big deal for data quality and keeping control over how many actual entities you have. And I really like this idea of agents being able to have a quick profile of the customer to, like I said, provide a more personalized service.
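
The point-based matching described here can be sketched as a weighted score compared against a threshold. The field weights and helper below are invented for illustration, not taken from any real master patient index:

```python
import difflib

def swap_day_month(dob):
    """'10/11/1985' -> '11/10/1985': the month/day transposition
    that shows up in international records."""
    m, d, y = dob.split("/")
    return f"{d}/{m}/{y}"

def match_score(a, b):
    """Weighted point score: SSN agreement is strong evidence,
    birthday and name similarity are weaker signals.
    (Weights and thresholds here are illustrative.)"""
    score = 0.0
    if a["ssn"] == b["ssn"]:
        score += 0.5
    if a["dob"] in (b["dob"], swap_day_month(b["dob"])):
        score += 0.3  # tolerate the day/month swap
    name_sim = difflib.SequenceMatcher(
        None, a["name"].lower(), b["name"].lower()).ratio()
    if name_sim >= 0.75:
        score += 0.2  # "Stephen" vs "Steven" still counts
    return score

def same_person(a, b, threshold=0.7):
    return match_score(a, b) >= threshold

rec_a = {"name": "Stephen", "dob": "10/11/1985", "ssn": "123-45-6789"}
rec_b = {"name": "Steven", "dob": "11/10/1985", "ssn": "123-45-6789"}
```

No single field decides the match; it is the accumulation of agreeing points that crosses the threshold, which is the core idea behind probabilistic record linkage.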

  125. Rahul

    Exactly. And getting this right is incredibly hard. Like, you know, we’ve made multiple attempts over the years, back when the technology didn’t exist, that tried to disambiguate. And more often than not, the data is so dirty that doing it is incredibly hard. But the Connect team seems to be doing incredibly well in, you know, getting all of that lined up and working. So, kudos to the team. Kudos to the pace at which they are releasing all of these new features. And I think on our simple versus not-simple meter, this one is a hard one. I think I would still put it in the simpler category, because they are taking away a whole bunch of stuff that is today very custom, tons of custom code that people are still writing, and moving it into a simple API call. So, by commoditizing a lot of this function, they’re simplifying it. So, I would give it the “simplifies” stamp. But it’ll be interesting to see as they increase the number of features. If they start exposing a lot of this to the customers and having them control a lot of these parameters, then I think they’d be going down the wrong direction. So, I love where they’re at right now. I love what they’ve done so far. But I really hope and pray that they don’t go down the path of giving too much control to the customers and overcomplicating it in the process.

  126. Stephen

    Yeah. That makes a lot of sense because the number of dimensions and variables here could just explode. And then you can…

  128. Stephen

    Yeah. Well, all right. We will have those banners and pop-ups. So, the next episode I know is gonna be a lot of fun. All right. Well…

  129. Rahul

    We do need a big, big hand coming in with a stamp.

  130. Stephen

    Yeah. Okay. All right. We’re gonna have some fun with this. All right. Well, let’s take another 30-second break and we come back we’ll be talking about Aurora Postgres.

  131. Announcer

    Is your AWS bill going up? CloudFix makes AWS cost savings easy and helps you with your cloud hygiene. Think of CloudFix as the Norton Utilities for AWS. AWS recommends hundreds of new fixes each year to help you run more efficiently and save money. But it’s difficult to track all these recommendations and tedious to implement them. Stay on top of AWS-recommended savings opportunities, and continuously save 25% off your AWS bill. Visit for a free savings assessment.

  132. Stephen

    All right. We’re back with Amazon Aurora PostgreSQL-Compatible Edition now supports R6i instances. The R6i, with Intel Xeon Scalable processors. Let’s see. These are hu…well, they go up to 32xlarge. So, that’s 128 vCPUs and a terabyte of memory. That’s really impressive.

  133. Rahul

    Yes. No. I think with this announcement, basically, the thing with databases is that you always want to have the biggest, largest instance. Okay? Especially when it comes to relational data stores, you want to load up as much in memory as you possibly can. Or let me clarify: you want the biggest instance in terms of memory. And you want really fast IOPS, because you want to load up as much data as you possibly can in memory so that your responses are super quick, and you wanna be able to read from disk as quickly as possible to load it into memory. I think the CPU doesn’t matter quite as much, unless you abuse the database with stored procedures. And I would really advise: do not use stored procedures with databases, because it’s the old-school way of really messing up your databases and creating a Frankenstein monster for you in the long run. I’ve seen people do everything from ETL to business logic to, you know, all kinds of other operations with stored procs and mess up everything. But if you use it right, you shouldn’t need tons of CPU. The ratio of CPU to memory should be really low.

  134. Stephen

    Okay. So, this is reminding me of a storytime episode we’ll have to do. I was a machine learning consultant, and I uncovered a machine learning model implemented as a mountain of 1,200 stored procedures, and no one knew how it worked, and then, “Why is this taking so long?” And yeah. That was quite the story of unpacking this model. Of course, the person who wrote it was on their way out. And that was a fun experience.

  135. Rahul

    Okay. I think I win this one. The worst one I have seen is about six and a half thousand stored procs. And this was an acquisition that came out of bankruptcy. So, there was no team left. And unraveling six and a half thousand stored procs across, if I’m not mistaken, almost 400,000 lines of code…

  136. Stephen

    Oh, wow.

  137. Rahul

    …was an absolute nightmare. And this was an on-premise database that we were trying to bring to AWS. So, I think I win that storytime challenge.

  138. Stephen

    One of my favorite sci-fi authors, Vernor Vinge, talks about how, in a thousand years, there’s gonna be the profession of computational archaeologist, someone who’s gonna have to dig through thousands and thousands and thousands of layers of things, just like we look at sediment built up under rocks. And that’s obviously stored procedures calling stored procedures calling stored procedures. What is the actual table here?

  139. Rahul

    I swear. And it’s really hard to even navigate all of that code, because it’s all embedded and sitting inside your database. The logging is crap, the traceability is terrible, and going and unearthing stuff is painful. I think, if I look back historically, the fact that we did not have much network bandwidth available, so pulling out data and doing something with it was expensive, made people do all kinds of stuff: just cram everything into the database and do it there. That’s kind of how they went about it, and that was the legacy way of doing it. So, now that you have got all these powerful machines, I think people might still have a tendency to do a lot of that. But please, I urge you not to do that. You’re just creating a monster to deal with at a later date. Keep the data store just for data, and put all your ETL and business logic and all of that stuff elsewhere. AWS makes it really easy to use other services to do all of that really well. Use the database just to store data and retrieve it. CRUD operations. That’s all. That’s what you need to do there.
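
The discipline described above, keeping the database to CRUD and the logic in the application layer, can be illustrated with a hypothetical aggregation that might otherwise live in a stored procedure. The table and column names are invented for the example:

```python
# Fetch plain rows with a simple SELECT (CRUD), then aggregate in
# application code, where the logic is testable, traceable, and loggable.

def monthly_totals(rows):
    """rows: (month, amount) tuples as returned by a plain SELECT."""
    totals = {}
    for month, amount in rows:
        totals[month] = totals.get(month, 0) + amount
    return totals

# Hypothetical result set from `SELECT month, amount FROM invoices`.
rows = [("2022-07", 100), ("2022-07", 50), ("2022-08", 75)]
summary = monthly_totals(rows)
```

Because the aggregation lives outside the database, it can be unit-tested, logged, and version-controlled like any other code, which is exactly what embedded stored procedures make hard.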

  140. Stephen

    I agree 100%. Yeah. And even with this 32xlarge, I think 6,000 stored procedures will still figure out a way to slow it down.

  141. Rahul

    Absolutely. So, I think the most important thing here is that this gives you nearly a terabyte of memory, which is awesome, right? I mean, you want as much memory as you possibly can. I wish they gave a little more flexibility in choosing the ratio, but I understand they have to do those families and, you know, handle it that way. On my simplify versus complexify meter, if that’s a word, I would give this a complexify stamp, because I shouldn’t have to worry about this. If my database just loads everything in memory, and it uses a terabyte of RAM to do it, great. Just let me use that. And all the other free CPU, use it for something else. For me to now have to choose between all the different instance types: you have your CPU-optimized instances, your memory-optimized instances, your general-purpose instances, then you’ve got your IO-optimized instances, which is, you know, ones like these. I think I would be paralyzed trying to figure out which one is right. And then, of course, you have your burstable instances with the T family. So, yeah. I think I would be completely paralyzed trying to figure out which one to pick for a production workload.

  142. Stephen

    Well, I guess it’s probably moving in the direction computers have moved, where unless you’re a gamer, you probably aren’t going to be too worried about the specs of your personal computer. I bought a new monitor recently on Prime Day, and I felt that paralysis just trying to sort through the specs: “Do I need 144 hertz of refresh?” And, you know, all the different trade-offs that you make. You’re right. You shouldn’t need to worry unless you have extremely specific requirements. Otherwise, the system should just deliver you the performance you want, or stay under your budget. And you really need to just specify constraints rather than looking at it from… well, like the other announcement where we had to deal with the number of nodes in the cluster. You shouldn’t have to deal with this. And maybe we’re at the tail end of most people worrying about the particular instance type they’re using. And maybe in five years, we won’t be having this conversation; we’ll just say everything is serverless, or auto-scaling, except for very specific use cases.

  143. Rahul

    Yeah. I think my hypothesis is slightly different, in that Aurora is actually turning into a setup where you can bring in your legacy on-premise databases and just host them in the cloud as efficiently as possible. And that’s the use case. But if somebody was creating a very specific performance-oriented use case, I think the way to go for a cloud-native architecture is to split out your data stores based on your use case. So, if you need a transactional data store, use something like DynamoDB. If you need analytics, you need OLAP, use something like Redshift or S3, depending on what your latency requirements are. If you need full-text search, use something like Elasticsearch or Kendra, depending on your use case. If you need a graph data store, use Neptune. And if you need caching, where you just need to pull out data right then and there, use something like Redis via ElastiCache in the case of…

  144. Stephen

    Or MemoryDB.

  145. Rahul

    Yeah, I mean, whatever variant of ElastiCache you wanna pick out of the database services. I think that’s the way to go about it when it comes to building a truly cloud-native, specialized, high-performance data store.

  146. Stephen

    Well, even in the name Redshift, the idea was you’re shifting out of big red.

  147. Rahul

    Yes. That was more on the OLAP side. I do think that that’s the way to go. Initially, I thought, “Hey, Aurora is an awesome cloud-native database, and that will become the foundation for the future.” But I think I’m changing my views on that. Aurora is a great midway point for moving your old traditional relational data stores to the cloud, but you’re still carrying all those stored procs, you’re still carrying all that data, you’re still carrying the old relational mindset of denormalized data, sorry, of normalized data that you put in and then you do these tons of joins and stuff like that. That old mindset, those old ways of thinking about your data and how it gets accessed, where you want one generalized monkey to do all your tasks: that is basically what the relational data store was, and I think continues to be for those old workloads. If I was building a new solution, I’d think very differently about it today. I’d separate out the stores based on what they need to do.

  148. Stephen

    Yeah. I totally agree. You shouldn’t be reaching for Postgres necessarily. You start with Dynamo. If you are doing something from scratch, start with Dynamo or Neptune, or something depending on your use case.

  150. Stephen

    But we’re not living in a vacuum. There’s a huge number of SQL databases out there that exist, and they need to be brought in over time, in a way that doesn’t break everything all at once.

  151. Rahul

    Yep. Exactly.

  152. Stephen

    All right. Well, it looks like we’ve hit our…speaking of constraints, we’ve hit our time constraint for the day. Any other messages for the audience or…? I think we’ve got a few different things we’re gonna try for next week. I’m pretty excited about those.

  153. Rahul

    No. I think I like this new framework. Audience, let us know if you like this new rubber stamp that we put on all of these announcements on whether they’re simplifying stuff or complexifying your life. These are our opinions, but we’d love for you to opine on it.

  154. Stephen

    Cool. All right. Well, thanks, everyone. Thanks, Rahul. It was really a pleasure, and we’ll see you next week. And hopefully, we’ll see you next month in Anaheim. Maybe we’ll be jamming on some ukuleles together.

  155. Rahul

    Please don’t make me play the ukulele on stage. So, please show up.

  156. Stephen

    All right. Bye, everybody.

  157. Rahul

    Awesome. Thanks, everyone. Bye-bye.

  158. Announcer

    Is your AWS public cloud bill growing? While most solutions focus on visibility, CloudFix saves you 10% to 20% on your AWS bill by finding and implementing AWS-recommended fixes that are 100% safe. There’s zero downtime and zero degradation in performance. We’ve helped organizations save millions of dollars across tens of thousands of AWS instances. Interested in seeing how much money you can save? Visit to schedule a free assessment and see your yearly savings.