AWS Made Easy

Ask Us Anything: Episode 14

In this week’s episode, Rahul and Stephen review a couple of AWS announcements and demo AWS Comprehend in the new “Let’s Code” segment.



Amazon ElastiCache now supports AWS Graviton2-based T4g, M6g, R6g

In this segment, we discuss how ElastiCache instances now support the AWS Graviton2 processor. We expect this follows a trend among managed services: there will be a Graviton-based offering, and it will be the best choice in most cases. We are happy about this trend and hope to see it continue.


AWS Lambda announces tiered pricing

In this segment, we discuss the new Lambda tiered pricing. Although this is a net cost savings for all Lambda users, it makes cost modeling much more complicated. Additionally, there are different tiers based on architecture, which seems unnecessarily complex. We would have appreciated some hard numbers, e.g. how many customers this affects.

Complexity alert

Amazon QuickSight launches API-based domain allow listing for developers to scale embedded analytics across different applications

We are big users of QuickSight dashboards, and this announcement makes it much easier to embed QuickSight dashboards in applications while making domain management configurable via API. The article is short but links to a detailed blog post and documentation. Additionally, we appreciate that this new feature is available in all regions simultaneously. For these reasons, this announcement gets the “Simplifies” tag and 4 clouds out of 5.


AWS Comprehend lowers annotation limits for training custom entity recognition models

This announcement is a dramatic reduction in the amount of data needed to train custom entity recognition models. The previous requirement was 250 documents with 100 labels per entity; this has been reduced to 3 documents containing 25 annotations per entity, a drastic reduction in the effort required to use Comprehend. As part of the new “Let’s Code” segment, Rahul and Stephen give a demo of using Comprehend: Rahul shows how to use core Comprehend features with Python, and Stephen shows how to train a custom entity recognition model using just 3 tagged documents.
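For a flavor of the “core Comprehend features with Python” part of the demo, here is a minimal sketch using boto3. The helper name is our own illustration; the calls inside `run_demo` hit the real DetectEntities and DetectSentiment APIs and assume configured AWS credentials.

```python
"""Sketch of calling Amazon Comprehend's DetectEntities and DetectSentiment
APIs via boto3. Only run_demo() touches AWS; it is not called at import time."""


def build_comprehend_request(text: str, language_code: str = "en") -> dict:
    # DetectEntities and DetectSentiment both take these two parameters.
    return {"Text": text, "LanguageCode": language_code}


def run_demo() -> None:
    import boto3

    comprehend = boto3.client("comprehend")
    req = build_comprehend_request("AWS announced tiered pricing for Lambda.")
    for entity in comprehend.detect_entities(**req)["Entities"]:
        print(entity["Type"], entity["Text"], round(entity["Score"], 3))
    print("Overall sentiment:", comprehend.detect_sentiment(**req)["Sentiment"])
```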




  1. [music]

  2. Stephen

    Hello, everyone. Welcome to “AWS Made Easy: Ask Us Anything.” This is episode number 14 with me, your host, Stephen Barr, and co-host Rahul Subramaniam. Rahul, how are you?

  3. Rahul

    Doing very well. Hi, everyone. Welcome to yet another episode. I’m glad we’re going strong up to 14, so yeah, very, very exciting. How was your weekend, Stephen?

  4. Stephen

Really good. We went up to the San Juan Islands, which are these little islands…a group of islands up near the Canadian border, and just had a really nice time exploring. It’s really green and lush. And I went fishing and swam off the dock and all this kind of stuff. It was really nice.

  5. Rahul

    That sounds like a lot of fun.

  6. Stephen

    How about you?

  7. Rahul

    Good. By the way, just before I jump to tell you what my weekend was like, how’s the weather in Seattle? I heard there’s a crazy heat wave.

  8. Stephen

    Cloudy and gray and everyone’s relieved. It was hot yesterday, but today it’s a bit cloudier.

  9. Rahul

    Oh, glad that there’s some respite. So, at this end, it was a fairly relaxed weekend. Did nothing. I got to spend a little bit of time doing some electronics work. Like I said, I’m trying to brush up on a little bit of my robotics and electronics building. So, this side of my office looks crazy filthy with everything on the table. I have my breadboards. I have my components. I have my chips. I have my microcontrollers, all of that strewn all over the place. But, yeah, I’ve been trying to build some stuff, and I taught my little one, an 8-year-old, how to solder.

  10. Stephen

    Oh, cool.

  11. Rahul

    And his soldering was shockingly very professional.

  12. Stephen

    I think it’s a challenge not to burn your fingers, so that’s fantastic that he was able to do that [inaudible 00:07:00.070] quality results.

  13. Rahul

    The quality was…I was very pleasantly surprised, and he seemed super keen on doing all the soldering from then on. So, yeah, it was a good, fun, relaxed weekend at this end. Still a little hot, but signs of rain.

  14. Stephen


  15. Rahul

    Okay, so we have a pretty packed agenda today.

  16. Stephen

    All right. Well, let’s dive right into our first article. Here we go. There we go. I put the article in the chat for everyone to follow along. The first one is Amazon ElastiCache now supports AWS Graviton2-based T4g, M6g, R6g instances in Europe, and R6gd in Europe-Paris region.

  17. Rahul

Yeah. So, it looks like the Gravitons are everywhere. The way Amazon is pushing the Graviton chips, the deployments are at massive scale pretty much everywhere. I think what I heard was that over 50% of the processors they’re actually deploying in all the different data centers are Graviton processors, split between Graviton2 and the C7gs, which are the Graviton3-based instances. But the rollout is huge. Most importantly, what they’re doing with their managed services is absolutely amazing. They’re taking every one of the services they manage themselves and natively supporting them on Graviton. And most importantly, the pricing for the services offered on Graviton is lower than what you see on x86. So, you know, they’re pushing those Graviton chips really hard.

  18. Stephen

    And for something like ElastiCache, where it’s a managed service, is there any reason why you’d even care what the architecture was as long as it performed the way you wanted it to?

  19. Rahul

    Really, no because I mean, ElastiCache doesn’t even let you, you know, get into the nitty-gritty details of the system. It’s accessed via the API, so you really wouldn’t care. You just want a performant ElastiCache instance that’s cheaper.

  20. Stephen

I mean, maybe if you were working on some weird DEC Alpha system with big-endian encoding and you were caching binary data. I can’t really think of a reason why you’d care about the architecture of this. So, if it’s cheaper, then yeah, go for it.

  21. Rahul

    Yeah, and this is ElastiCache as a managed service, right? I mean, someone who’s on ElastiCache shouldn’t really care. The first thing that they should be doing is turning off the…just changing the instance type for ElastiCache.

  22. Stephen

    Yeah. So, this is a no-brainer. This is cheaper. It’s really hard to argue why you wouldn’t do this.

  23. Rahul

Correct. And I think we discussed this on a previous episode. We’ll try and find that particular episode. ElastiCache is significantly cheaper when you move from x86 to ARM. I’m trying to remember all the versions, but I think Redis 6… Sorry, ElastiCache 6 is only on x86, but from 6.2 onwards the migration to ARM becomes seamless and the performance improvements are really, really big. They’re like 60% or more. So, you would actually end up spending way less on those instances than you would otherwise. Yeah, it’s right there: with Redis version 6.2 and above on Graviton2-based R6g nodes, you get up to 5x more capacity.

  24. Stephen

    Wow. Looking at the pricing page, so looking at the different cache class instances, right? We’ve got the R6g, this is the Ohio region, R6g at [inaudible 00:11:45.900].

  25. Rahul

    Yeah. Pick an R5… Let’s pick an R5 2xlarge, which is the 8 core and 52 gigs of RAM. That’s 8 cents an hour. Oh, sorry, 86 cents an hour.

  26. Stephen

    Yeah, and then I guess comparable would be this, the R6g.2xlarge.

  27. Rahul


  28. Stephen

    Which is slightly cheaper and much better [crosstalk 00:12:12.102].

  29. Rahul

    Eighty-two. But 60% better performance.

  30. Stephen

    Let’s see. Wow.

  31. Rahul

You could probably get away with using a smaller instance, given the performance. You probably don’t need that massive an instance.
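The price/performance math behind this exchange can be written out. Using the on-demand hourly rates read off the pricing page during the episode ($0.86 for the R5.2xlarge, $0.82 for the R6g.2xlarge) and the quoted ~60% throughput gain as a rough 1.6x factor:

```python
# Effective cost per unit of throughput: R5 vs Graviton2 R6g, using the
# hourly rates quoted in the episode and a rough 1.6x throughput factor.
r5_hourly, r6g_hourly = 0.86, 0.82
throughput_gain = 1.6  # "60% better performance"

cost_per_unit_r5 = r5_hourly / 1.0
cost_per_unit_r6g = r6g_hourly / throughput_gain
savings = 1 - cost_per_unit_r6g / cost_per_unit_r5
print(f"R6g effective cost per unit of work: ${cost_per_unit_r6g:.3f}/hr-equivalent")
print(f"Price/performance improvement: {savings:.0%}")
```

That is roughly a 40% price/performance improvement, which is why a smaller Graviton instance can often replace a larger x86 one.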

  32. Stephen

    Okay. Here’s the R6gd.2xlarge, $1.56 an hour, 10 gigabytes.

  33. Rahul

    Yeah, but then you get a significantly greater SSD attached to it.

  34. Stephen

That’s interesting. So, this is a very simple announcement in one sense, right? It’s just a different instance type, but at the same time, it’s free money kind of just put on the table by AWS as part of the Graviton adoption. I think in the future we wouldn’t even want to know the architecture if it’s a managed service. But I guess at this point it’s good to know, and it gives us peace of mind of, “Okay, what are we running on?”

  35. Rahul

    I really wonder if, at some point in time, all of these managed services would only be available on Graviton. Like, as they move to serverless on all of these services, I guess, it will all be on Graviton and you wouldn’t care about what’s running under the hood. So, the more stuff moves to serverless, that conversation kind of doesn’t exist anymore. But the way they are rolling out the Graviton, I mean, it’s absolutely mind-boggling.

  36. Stephen

    It’s amazing to think of the physicality of the process of having to produce these. Do we know where they’re produced? I imagine they ship these as completed racks that have the Nitro Cards, Gravitons all ready to go, get this to the data center, get them installed, integrate them into whatever manages the region level, and then have it be ready to be offered. It’s pretty incredible they can roll out all of these things at different regions across the globe.

  37. Rahul

    Correct. And it makes sense for AWS to do this. I think last week we were discussing this with Jeff as well. The fact that on one rack, they are able to pack in three of these processors on a single motherboard. So, it’s one Nitro Card and three 64 core processors. That’s staggering. It just makes sense to do that for the entire data center.

  38. Stephen

    Yeah. [Crosstalk 00:14:42.634] lowers the power footprints. I’d imagine that as you switch to these, as we talked about yesterday, you looked at that carbon footprint calculator, it would lower a bit as well, just because these are using so much less power.

  39. Rahul

That’s actually right. The carbon footprint calculator… so, there are multiple dimensions to this. One, the power consumption overall is way less, right? So, you’re going to end up saving, or being greener, on that front. I think the numbers were like 40%, 45%… these chips have up to 60% lower power consumption overall. Then the next dimension is, if the performance of these chips is 60% better on a workload like ElastiCache, then you have that win as well, which means you’d end up using 60% fewer resources to keep things, you know, up and running. So, this has a multiplier effect on so many dimensions. You could end up cutting your energy bills, or your contribution to, you know, carbon emissions, by 70%, 80% easily by switching over.

    So, it just makes sense for organizations on so many different fronts. And you save money doing it. Usually, for a lot of these exercises where you get such massive wins, the costs associated with them are crazy high. In this case, you save money, you have lower energy consumption, and performance is better. So, I don’t know who loses. [crosstalk 00:16:30.302].
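The compounding Rahul describes can be made concrete. Treating his two figures as rough inputs (60% lower power draw per instance, 60% higher throughput per instance), the energy spent per unit of work drops multiplicatively:

```python
# Energy per unit of work compounds two effects: lower power draw and
# higher throughput. Figures are the rough percentages quoted in the
# discussion, not official AWS numbers.
power_factor = 0.40       # 60% lower power consumption
throughput_factor = 1.60  # 60% better performance

energy_per_unit_work = power_factor / throughput_factor
print(f"Relative energy per unit of work: {energy_per_unit_work:.2f}")
print(f"Overall reduction: {1 - energy_per_unit_work:.0%}")
```

With those inputs the reduction comes out at 75%, consistent with the “70%, 80%” range mentioned.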

  40. Stephen

    Yeah, I think someone loses, but that’s okay. They had a pretty good run and they’ll come back. It’s part of the competition, right?

  41. Rahul

    The cycles. Yeah.

  42. Stephen

    I was trying to find the highlight of putting in the… Oh, here we go. I’ll put this into the chat now. This is that segment from last week where we’re talking about Graviton and AWS’s metrics for energy consumption. So, just put that in the chat and it reminds me, everyone check out our highlights on Twitch, and we also post them on our Twitter feed, so make sure to follow us on @awsmadeeasy on Twitter.

    We post highlights from previous episodes, and it’s a great way to kind of just get reminded about the things we’ve talked about. It has really good segments, and taking them in all at once is a lot, but then when you actually break it down into the highlights, I find those really helpful. Yeah, we had a really good discussion and I relistened to them because there’s insights that come up in real-time that we don’t think about. So, I had to relisten to them after the fact. All right, well. So, thinking about this, we’re trying out a new system here. So, our overall grade of this particular article, what do we think about this?

  43. Rahul

    So, I think this is a… I’d put this at a four out of five. And the reason why I’m picking four out of five is I wish they had more data about how seamless it is to migrate over. And if there were, like, a two-step or a single click or a two-click migration that’s provided out of the box from your existing ElastiCache instance, I think that would be neat. Like, just two more screenshots over here saying, “Here’s how you do it,” would make such a massive difference to adoption. Right now, this just feels like a… I say most blog posts from AWS look like legalese. This just feels like another cookie-cutter announcement, like a press release. If there is an announcement blog, having those two steps would be great.

  44. Stephen

Yeah, I agree. I’d just like to be able to click and say, “Well, here’s how to transition.” Or even an example of how to change your CloudFormation template, or whatever it is, to transition over to a Graviton instance. But overall, I think this is… you know, overall, it’s a win. Obviously, we’re big fans of Graviton, but as for the content of the post itself, it’d be nice if there was a little more information on how to transition.
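For what it’s worth, the transition being asked for here is a single API call. A hedged sketch using boto3’s ElastiCache client, where the replication group ID and target node type are placeholders and you would want to confirm engine-version compatibility first:

```python
"""Sketch: moving an existing ElastiCache replication group to a Graviton2
node type via ModifyReplicationGroup. IDs below are placeholders; only
run_demo() touches AWS and it is not called at import time."""


def build_modify_request(replication_group_id: str, node_type: str) -> dict:
    return {
        "ReplicationGroupId": replication_group_id,
        "CacheNodeType": node_type,
        "ApplyImmediately": True,  # or False to wait for the maintenance window
    }


def run_demo() -> None:
    import boto3

    elasticache = boto3.client("elasticache")
    # e.g. move an R5 group onto the cheaper Graviton2 R6g equivalent
    resp = elasticache.modify_replication_group(
        **build_modify_request("my-redis-group", "cache.r6g.2xlarge")
    )
    print(resp["ReplicationGroup"]["Status"])
```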

  45. Rahul

    Yep, completely agree. Okay, and then we have the other rating that you are doing, right?

  46. Stephen


  47. Rahul

    Look at this…

  48. Stephen

    Does this make [crosstalk 00:19:17.883]?

  49. Rahul

    Simplifies and complicates.

  50. Stephen

    I think this is an obvious simplify.

  51. Rahul

    Yeah, I think this is pretty straightforward.

  52. Stephen

    There we go.

  53. Rahul

    Wow, you’ve got new graphics.

  54. Stephen

    Yeah. Our designer, she’s working hard behind the scenes to really up our production value. So, this is simplifies. This simplifies life. We have less power, more bang for your buck. All you really need to do is change the instance type. How do you go wrong? All right. Well, anything else before we transition over to Lambda tiered pricing?

  55. Rahul

    Yeah, let’s move on to that.

  56. Stephen

All right, here goes. All right. AWS Lambda announces tiered pricing. And I’ll put this article in the chat as well. There we go. So, tiered pricing for monthly Lambda function duration in gigabyte-seconds of usage. The additional pricing tiers are discounts on aggregate monthly on-demand duration. So, looking at the article, the tiers are at the first 6 billion gigabyte-seconds, and then an additional 9 billion gigabyte-seconds. It goes down by about… this is about 10%.

  57. Rahul

    Yeah, but it was…yeah.

  58. Stephen

And then again, it goes down after 15 billion gigabyte-seconds per month.
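The tier structure being described can be turned into a small calculator. The per-GB-second rates below are the x86 figures as listed on the Lambda pricing page at the time; treat them as illustrative, since rates vary by region and architecture:

```python
# Monthly Lambda duration cost under tiered pricing (x86 rates as quoted
# on the pricing page at the time; illustrative only).
TIERS = [
    (6e9, 0.0000166667),           # first 6B GB-seconds
    (9e9, 0.0000150000),           # next 9B GB-seconds (~10% discount)
    (float("inf"), 0.0000133334),  # everything above 15B GB-seconds
]


def duration_cost(gb_seconds: float) -> float:
    cost, remaining = 0.0, gb_seconds
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost


print(f"6B GB-s:  ${duration_cost(6e9):,.0f}")   # exactly fills the first tier
print(f"20B GB-s: ${duration_cost(20e9):,.0f}")  # spans all three tiers
```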

  59. Rahul

Yeah. Actually, this announcement is way oversimplified. I think it gets a little more complicated. So, can we pull up the Lambda pricing? Yeah, let’s pull up the Lambda pricing. So, this is really the announcement over here. So, you have the new tiers. This is a much better way to understand how the pricing goes. So, first… I mean, it’s just obvious: the ARM prices are lower than the x86 prices. Like, look at the duration column, and it shows you how much they charge for every GB-second. The difference, though, which I’m curious about, is that they keep the slightly higher pricing for a little longer on ARM. So, for the first 7.5 billion GB-seconds, you are in the most expensive tier. And for x86, that’s down at 6 billion.

  60. Stephen

You really wonder if that extra 1.5 billion gigabyte-seconds, for what is this, a 100,000th of a cent? Was it really worth adding that extra… a whole other… doubling the size of this table?

  61. Rahul

Correct. That part just seems a little, you know, it just feels weird. And can you imagine the complexity behind the scenes for billing? Because they then have to… it’s not simple tiered pricing, and you have to worry about how many billion GB-seconds per month you’re consuming. So I’ll step back. I’ll say two things. Number one, I don’t know how many customers use 6 billion GB-seconds. I don’t have a sense of how much usage that actually is.

  62. Stephen

That would be really interesting for them to put in the blog post: we estimate this will impact our top 30% or 50% of users, or whether this is only for the top 5% who are heavy users. It would be interesting to know who this impacts.

  63. Rahul

Correct. And I think the numbers, like, at first glance, when you look at 6 billion GB-seconds, that number just seems insanely high. And my first thought was, I don’t think we’ll ever touch that kind of number. But I guess if you start looking at it in terms of the number of hours that you’re actually going to run this, it might be significant.

  64. Stephen

    Well, I’ve got some…and we’ll see this in the behind-the-scenes episode next week. I’ve got some Lambda functions to deal with video encoding that take on the order of, you know, a low amount of minutes in some cases. That adds up, I mean, 6 billion is still a lot, but it adds up. I don’t think [crosstalk 00:23:53.060].

  65. Rahul

Let’s say we had a concurrency of 1,000, right? There are 1,000 Lambdas running all at the same time within an org. So, take the 6 billion, divide that by 1,000, and you’re left with 6 million seconds. And then divide that by 3,600 and that’s about 1,600, 1,700 hours.
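Written out, Rahul’s back-of-the-envelope division goes like this (it implicitly assumes 1 GB functions, so one GB-second equals one second of runtime, and the 1,600 to 1,700 figure is hours of sustained runtime):

```python
# How long 1,000 concurrently-running 1 GB Lambdas take to consume the
# first pricing tier (6 billion GB-seconds).
tier_gb_seconds = 6e9
concurrency = 1_000                            # 1,000 Lambdas at once, 1 GB each
seconds_each = tier_gb_seconds / concurrency   # 6 million seconds
hours = seconds_each / 3600
print(f"{hours:,.0f} hours of sustained 1,000-way concurrency")
```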

  66. Stephen

    Okay. So, a moderately big organization would hit this.

  67. Rahul

Oh, actually, this is not incredibly large. Like, anyone running 3,000 or 4,000 Lambdas… but this is on a per-month basis. Actually, that’s still pretty large. You would need to run almost 50,000 Lambdas concurrently to hit this number.

  68. Stephen

    Yeah. So it would be really interesting to know, “Okay, we estimate that this will save money for x% of our customers.”

  69. Rahul

    Yeah, that would be interesting to know.

  70. Stephen

Well, all right. I don’t know if there’s too much more to talk about here. Nothing else changes, but if you’re using a lot of Lambdas, then as you use more, it gets cheaper on a per-unit basis. Three tiers, and slightly different tiers depending on whether you’re on x86 or ARM.

  71. Rahul

Okay. Can you go back to pricing? There’s actually one more table at the bottom that I wanted to bring up. No, the table on top, where it compares x86 pricing and…

  72. Stephen

    This one?

  73. Rahul

    No, the one below that.

  74. Stephen

    There’s data transfer, Lambda@Edge.

  75. Rahul

    No, not Lambda@Edge. The one on top. There is a section that talks about per second. There’s a table. Yep, right there. There. That one.

  76. Stephen

    Okay. Oh, I see.

  77. Rahul

Okay. So this one slices the same thing by MB and gives you the per-millisecond pricing. And if you compare the x86 price versus the ARM price, you’ll find that there’s about a 20% price difference. 20%?

  78. Stephen

    You think there’s room for one more column on this table?

  79. Rahul

    I don’t know if you noticed, but now the pricing changes from a per-second basis to a per-millisecond basis.

  80. Stephen

    The price for one millisecond. Price for one… Oh, okay. No, it’s still price per-one millisecond.

  81. Rahul

    Yeah, but then all the other stuff is in terms of GB-seconds.

  82. Stephen

    I see. Okay. So then you have to deal with the extra couple of zeros.
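The two tables are the same price in different units: converting a per-GB-second rate into the per-millisecond figure for a given memory size is one multiplication and one division. The rate below is the first-tier x86 number, used for illustration:

```python
# A per-GB-second rate and a per-millisecond rate are the same price in
# different units: multiply by memory (GB), divide by 1,000 ms per second.
RATE_PER_GB_SECOND = 0.0000166667  # first-tier x86 rate, for illustration


def price_per_ms(memory_gb: float) -> float:
    return RATE_PER_GB_SECOND * memory_gb / 1000


print(f"128 MB: ${price_per_ms(0.125):.10f} per ms")
print(f"1 GB:   ${price_per_ms(1.0):.10f} per ms")
```

Hence the “extra couple of zeros”: the per-millisecond numbers are tiny fractions of a cent even though the underlying rate is identical.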

  83. Rahul

    Like, could we make this more complicated to comprehend?

  84. Stephen

Okay, so how do we rate this article? Because on the one hand, you don’t have to do anything. Nothing really changes. But on the other hand, the way they communicate it could be a lot simpler.

  85. Rahul

    Yeah, I think I’d give this one a one or a two maybe. I feel a little generous [crosstalk 00:27:33.010] give it two.

  86. Stephen

    Here you go, two clouds.

  87. Rahul

But I think they’ve just complicated the communication to a point where a lot of customers are going to say, “I know what my…” Yeah, this is definitely very, very high on the complexity alert. A lot of customers are going to be scared of figuring out what their net costs are going to be. Like, they’ve been using Lambda, they know what their current costs are, and this doesn’t provide a clear indication that they should move. It’s a complex calculation that someone has to perform to figure out whether this is actually going to be cheaper for them or not. My instinct tells me it’s going to be about 20% cheaper, but they’ve represented this in such a convoluted manner that I don’t think anyone takes that away from it.

  88. Stephen

And these multi-tab tables, we’ve got to get rid of them… they only make sense if you have a lot of different columns; otherwise it doesn’t make any sense. All right. Well, okay, so overall, a good thing, but a lot of extra complexity in the communication.

  89. Rahul


  90. Stephen

    Cool. Well, let’s do a quick break and then when we get back, we’re going to be talking about QuickSight.

  91. Female Speaker

    Public cloud costs going up and your AWS bill growing without the right cost controls? CloudFix saves you 10% to 20% on your AWS bill by focusing on AWS recommended fixes that are 100% safe with zero downtime and zero degradation in performance. The best part, with your approval, CloudFix finds and implements AWS fixes to help you run more efficiently. Visit for a free savings assessment.

  92. Stephen

    All right. Long title on this one. “Amazon QuickSight launches API-based domain allow listing for developers to scale embedded analytics across different applications.”

  93. Rahul

Yeah. So, this one is actually quite interesting. Unlike most other services at AWS, QuickSight kind of evolved in a completely different manner. The way it is integrated with the AWS suite is completely different. It feels like a single-tenant application that’s just offered on the console, rather than a native, API-first service, which is what pretty much all the other services are. The mechanism of authenticating users or bringing them on board is done in a completely different manner: you have to authorize users in the QuickSight app. Fundamentally, as you start embedding these dashboards inside other applications, you run into the problem of multi-tenancy, right?

You have one account and one QuickSight dashboard that’s templated, which you want to use across multiple tenants. You want to be able to do different permission schemes and so on. That just becomes insanely hard. And a lot of this was not available via API earlier. You had to go into the console and set it up. You had to go into the QuickSight dashboard, go to the settings, and set it up there. So, it was kind of all over the place, and given that there’s now an API for domain allow listing… we have to try this out a little bit more, but I think this is the mechanism by which you’ll now be able to serve multi-tenant solutions that use QuickSight dashboards.

So, this is actually a really big deal for anyone embedding those dashboards. If you’re just using it within your org as simple dashboards where someone logs into the console, yeah, not that big a deal. But the more you’re embedding these dashboards inside applications that you’re shipping, the more you need, you know, to create these domains and control access. This is your multi-tenant solution.
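A hedged sketch of what the API-driven flow looks like with boto3’s QuickSight client. The account ID, user ARN, dashboard ID, and domain list are all placeholders, and only `run_demo` (not called at import time) assumes configured AWS credentials:

```python
"""Sketch: runtime domain allow listing when generating a QuickSight embed
URL, via the AllowedDomains parameter on GenerateEmbedUrlForRegisteredUser."""


def build_embed_request(account_id: str, user_arn: str,
                        dashboard_id: str, domains: list) -> dict:
    return {
        "AwsAccountId": account_id,
        "UserArn": user_arn,
        "ExperienceConfiguration": {
            "Dashboard": {"InitialDashboardId": dashboard_id}
        },
        # The new piece: per-request domain allow listing through the API,
        # instead of a static list maintained in the admin console.
        "AllowedDomains": domains,
    }


def run_demo() -> None:
    import boto3

    quicksight = boto3.client("quicksight")
    resp = quicksight.generate_embed_url_for_registered_user(
        **build_embed_request(
            "111122223333",
            "arn:aws:quicksight:us-east-1:111122223333:user/default/embed-user",
            "my-dashboard-id",
            ["https://tenant-a.example.com"],
        )
    )
    print(resp["EmbedUrl"])
```

Because the domain list is supplied per request, each tenant’s application can be allow-listed independently, which is what makes the multi-tenant embedding story workable.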

  94. Stephen

    That’s really interesting. It almost reminds me of how Amplify felt initially when you had the Amplify console. This user model is detached from everything else I’m doing.

  95. Rahul


  96. Stephen

    I need one second. Oh, I need to close the window. One second. Oh, I apologize for that. Someone started a leaf blower outside. You know, Elon Musk, if you happen to be watching, please, invent an electric leaf blower. Add to the Tesla lineup. That would be so much appreciated.

  97. Rahul

Yeah, multi-tenancy is really a big thing for most of these applications. I mean, QuickSight has the scale, but if you can’t serve many thousands of customers in different domains and be able to partition and segment the customers who access these, it’s really not valuable. The other problem that QuickSight has, and hopefully this resolves it, is that for any of the dashboards created in that account, you can decide which dashboard a particular user has access to, but if you have edit access, you pretty much have edit access to most stuff, and you have the ability to look at the underlying set of dashboards. So, the permission schemes are a little convoluted, and, you know, the more we can segregate customers, the better it gets.

  98. Stephen

    Got it. Well, I think this is a fairly straightforward announcement. How…

  99. Rahul

Yeah, this is fairly straightforward. Given that they have the blog post going with it, which actually covers code examples and the request responses, I’d give this a four out of five. QuickSight is complicated as a product, which is why I’m taking away one point, but from an article standpoint, they’ve done a pretty good job of communicating it effectively and making it easy for someone to try it out, use it, and figure out what to do with it. So, I’ll give it a four.

  100. Stephen

Yeah, definitely. If you saw this come up in your RSS feed and it was relevant, you’d right away jump into the blog post and have a look. Like you said, they link right to the blog post, and the blog post has examples, screenshots, a lot of [crosstalk 00:34:43.460].

  101. Rahul

Yeah. This is literally the thing that I was hoping they would do, you know, for the first article we discussed. Like, if ElastiCache had done a half-page blog post that said, “Go to your console, click this button, change your instance type to a Graviton instance, and save up to 60%.”

  102. Stephen

So then overall, do we think this gives us some simplicity? Like you said, it’s fundamentally not a simple thing, right? Because it’s access to data within your organization. But being able to manage this via the API, I would think, makes the administrator’s life simpler.

  103. Rahul

I absolutely agree. For embedded applications, you have to do things via API. You cannot do this from the console; I think that expectation’s ancient. So this definitely simplifies.

  104. Stephen

Excellent. All right. Well, I guess that’s a good thing to keep in mind for other teams: have that blog post. There’s nothing better than an example and a screenshot. One API request really clarifies a lot more than a big paragraph of text.

  105. Rahul

    Yeah, completely agree.

  106. Stephen

    All right. Well, anything else before we move on to Comprehend?

  107. Rahul

    No, I think this is fairly straightforward. We can move on to the next one.

  108. Stephen

All right. Let’s do it. All right. This one’s a really fun one. This is “AWS Comprehend lowers annotation limits for training custom entity recognition models.” So, there’s a bit of jargon that we have to go over first. Comprehend is part of the text-analysis family of services that can look at things like sentiment and named entity recognition. Named entity recognition is when you have, say, proper nouns or quantities or things that are of interest. It’s basically trying to extract the structure of your document and the key things it’s talking about.

Now, with custom entity recognition, what we’re saying is, “Okay, Comprehend can look at things by default, like places,” right? There’s a finite list of places we can talk about, and that’s straightforward across all text. But then there are domain-specific things, and we’ll show some different examples in a bit. For example, video games, or in pharmaceuticals it could be the names of certain drugs (there’s actually Comprehend Medical for that), or any domain-specific language where you have a corpus of text specific to your application that Comprehend wouldn’t necessarily pick up, like even Amazon service names. That’s a whole other body of vocabulary.

So, to train a Comprehend model, it used to require 250 documents and 100 annotations per entity. An entity is the type of thing you’re looking for. 250 documents is quite a bit… this is a pretty big reduction: now it’s 25 annotations per entity type with as few as 3 annotated documents. Almost two orders of magnitude. That’s pretty big. This is [crosstalk 00:38:16.080].
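Kicking off training against those (now much smaller) annotated datasets is a single API call. A sketch with boto3, where the recognizer name, role ARN, entity type, and S3 URIs are all placeholders:

```python
"""Sketch: starting custom entity recognizer training in Comprehend via
CreateEntityRecognizer. All names, ARNs, and S3 URIs are placeholders;
only run_demo() touches AWS and it is not called at import time."""


def build_recognizer_request() -> dict:
    return {
        "RecognizerName": "video-game-entities",
        "DataAccessRoleArn": "arn:aws:iam::111122223333:role/ComprehendRole",
        "LanguageCode": "en",
        "InputDataConfig": {
            "EntityTypes": [{"Type": "GAME_TITLE"}],
            # As of this change: as few as 3 documents / 25 annotations
            # per entity type, down from 250 documents / 100 annotations.
            "Documents": {"S3Uri": "s3://my-bucket/train/documents/"},
            "Annotations": {"S3Uri": "s3://my-bucket/train/annotations.csv"},
        },
    }


def run_demo() -> None:
    import boto3

    comprehend = boto3.client("comprehend")
    resp = comprehend.create_entity_recognizer(**build_recognizer_request())
    print(resp["EntityRecognizerArn"])  # training proceeds asynchronously
```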

  109. Rahul

Yeah, that’s good. And I guess the algorithms that train these models are just improving by leaps and bounds. And it’s pretty amazing what a service like this can do. I mean, imagine, you know, you wanted to get alerted. You have different news streams coming in and you wanted to get alerted any time anything was said about a particular company or a particular person, or you wanted to figure out whether there was any notification about an event happening today or tomorrow or on a particular date.

    It would be virtually impossible to write a system that could filter through all of that, but tools like Comprehend or services like Comprehend can help you navigate through all of that. It can also figure out whether there is content that is not safe for work or things like that. There are tons of amazing things that Comprehend can do for you, and it’s really easy to get started with it. But do we want to show how Comprehend works?

  110. Stephen

    Yeah. Should we just jump right into it? That is a great idea. Well, then with that we want to introduce a new type of segment called “Let’s Code.” Let’s introduce the segment. All right, so Rahul we shall switch to your screen and you can show us [crosstalk 00:39:49.480].

  111. Rahul

Yeah, let’s go ahead and do just that. By the way, I have one complaint: the graphic shows me in a suit. I can’t remember the last time I was in a suit.

  112. Stephen

    All right. You’ll have that fixed for next time.

  113. Rahul

    Great. Okay, so for our viewers, we’ve created this new repository called AWS Made Easy. It’s a public repository where we hope to post some real live code that you can try out, you can play around with. Hopefully, it will help you get started with these AWS services a lot easier. And we will try to bring in a few new snippets of code every single week as part of this conversation. Feel free to ask questions.

This is a live coding exercise, so we will do our best to keep up with your questions, and feel free to contribute to these repositories as well. This will all be public. We just created it today, so it’s probably going to take us another few hours to clean it up and put it out there. It’s currently available in the ep14 branch. You can follow along right there or you can just try it out.

  114. Stephen

    All right. I’ll put it in the banner and it’s also…there’s a link in the comments. All right, let’s go ahead.

  115. Rahul

    Perfect. So, we’re going to use something called DevSpaces. DevSpaces is our Graviton-enabled IDE running in AWS. It gives us Visual Studio Code in the browser. And instead of talking about it, I’m just going to show it to you. So, we have a little browser plug-in that allows us to click and get right into DevSpaces straight from our GitHub repositories. And it basically pulls in a container in AWS, gets everything started. It’s Visual Studio Code, as you can see, in the browser. It’s loading up a bunch of extensions, it’ll load up all the code. It’ll get everything up and running for us. And I think we are up and running. We’ll just give it a second.

  116. Stephen

    And so in that second, it’s allocated the resources, cloned the repository, set up Visual Studio Code, set up this temporary URL for you to access your workspace. All that in the two or three seconds that you’ve been waiting. It’s pretty neat. I take it for granted now that I have completely [inaudible 00:42:20.180] environments.

  117. Rahul

    No, absolutely. It’s crazy how easy it makes life. Okay, let me see if this happens live. How far can I go? This much?

  118. Stephen

    Yeah, that’s fine. We can even do full screen. You can go a lot wider, but that’s fine.

  119. Rahul

    Okay, so let’s start with this. Okay, so this is a very simple piece of code that I wrote up a few minutes before this session. What we’re doing is this simple Python code, it uses the Boto3 library and it’s importing JSON because JSON is the format in which we process all the responses. I’ve written three very simple functions out here. The first one is called detect entities. And let’s look at the Boto documentation here. The Boto documentation is really, really simple. All you need to do to get started with Comprehend is import Boto3. It tells you how to instantiate the client. And then here are all the amazing methods that you can start using to bring in, you know, all of these neat insights.

    So today, we’re going to talk about three: detect entities, detect key phrases, and detect sentiment. So, let’s start with detect entities. Okay? So, in detect entities, you basically pass it some text and you decide what language encoding or what language you’re using. There’s a code for it. “en” is English, “es” is Spanish, “fr” is French, and so on. And it’s really that simple. And then in response, you’ll get a score, you will get the type. So, it will give you a classification. It will tell you whether this entity that it found in the text is a person, location, organization, a commercial item, event, date, whatever, right? There are a bunch of different categories that it has.

    So, it’ll give you all of that, and it’ll give you the text that it found, and it’ll tell you where in the file this is. So, there’s the beginning and end offset, which tells you exactly where in the file that particular entity exists. So, you can look at it, pull it up, and then use it as you please. So, let’s just run this code. Okay, so I am going to need some permissions. So, we need some AWS credentials. I am going to take myself off screen for just a second as I put in my credentials. Give me one second here and I’m coming back on screen. So my environment should be set up, hopefully. Someone test it out, but let’s get right into it.
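A rough sketch of the call being described, assuming boto3’s Comprehend client (the wrapper and helper names here are my own; the boto3 import is deferred so the summarizing helper runs without AWS credentials):

```python
def detect_entities(text, language_code="en"):
    """Call Comprehend's DetectEntities API (requires AWS credentials)."""
    import boto3  # imported lazily so the helper below works without AWS access
    client = boto3.client("comprehend")
    # "en" is English, "es" Spanish, "fr" French, and so on.
    return client.detect_entities(Text=text, LanguageCode=language_code)

def summarize_entities(response, min_score=0.5):
    """Reduce a DetectEntities response to (text, type, score) tuples,
    keeping only entities above a confidence threshold."""
    return [
        (e["Text"], e["Type"], round(e["Score"], 2))
        for e in response.get("Entities", [])
        if e["Score"] >= min_score
    ]
```

Each entity in the raw response also carries `BeginOffset` and `EndOffset` fields, which is how the API tells you exactly where in the text the match sits.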

    So, I’m going to get into the folder where I have all of my files. The first thing that I’m gonna have to do is install all the dependencies. That was a typo. Okay, so this is installing all the dependencies that we need to run Boto. Great. Everything should be ready to go. Let’s just run the script. So, for right now, we should be actually pulling up raw text. If you remember our very first article where Amazon ElastiCache supports AWS Graviton T4g instances, I literally just copy-pasted that entire text and dumped it in here to see what it finds.

    Let’s just look at the code and make sure that we are doing some interesting stuff. So for now, we’re just detecting the entities. That’s the function that I’m calling over here. And then I’m going to just dump out the JSON response that comes out of this simple function call. Okay. So, let’s do… I can…

  120. Stephen


  121. Rahul

    …just start Comprehend.

  122. Stephen

    Credentials, uh?

  123. Rahul

    I think I need…my credentials expired. Word of advice for anyone working with AWS and AWS-related services, do not use your keys directly. It’s always a good practice to use temporary keys because you never know where you might check it in or make it available to the rest of the world depending on whether you’re doing a demo like this or doing something else. Give me just one second to get a new set of keys and generate…

  124. Stephen

    No problem. That makes a lot of sense to use those temporary keys. I had a few close calls with checked-in keys, and luckily GitHub catches those now and fails the commit. But still, it’s a pretty bad practice. It’s easy to put a private key in a repository and say, “Okay, I’m doing some testing, I’ll just deal with that later.” And then you commit the file and your AWS account is a bitcoin mining machine, very expensive, very quickly. So, a good idea even though it can be slightly more cumbersome to use those temporary keys. [Crosstalk 00:48:00.725].
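The temporary-credentials practice they recommend can be sketched with STS’s GetSessionToken (a sketch of one way to do it, not their exact workflow; the function names are mine):

```python
def temporary_session_credentials(duration_seconds=3600):
    """Ask STS for short-lived credentials instead of using long-term keys."""
    import boto3  # imported lazily; the call itself needs valid AWS credentials
    sts = boto3.client("sts")
    resp = sts.get_session_token(DurationSeconds=duration_seconds)
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

def as_env_exports(creds):
    """Format an STS Credentials dict as shell export lines for a demo session."""
    return "\n".join([
        f"export AWS_ACCESS_KEY_ID={creds['AccessKeyId']}",
        f"export AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']}",
        f"export AWS_SESSION_TOKEN={creds['SessionToken']}",
    ])
```

Because the credentials expire on their own, a leaked set from a live demo like this one stops working after the session window closes.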

  125. Rahul

    Now I just have one other problem, which is that I’ve lost the window. I’ve lost my window where I had…oh, there we are. Great. Okay, just give me one second to set up my credentials.

  126. Stephen

    No problem.

  127. Rahul

    And we should be good to go. Sorry, give me one sec. Okay, so we’re gonna try this one more time. Great, [inaudible 00:48:53.010]. Okay, this is where things start getting interesting. So, here are all the entities that are detected. The first one was the text Amazon. In fact, let me bring up the raw text right here. So, Amazon, it detected as an organization with a score of 0.99. These scores are out of one. So, it seems pretty confident of that. ElastiCache it guesses, with a score of 0.66, is a commercial item. AWS is an organization. Graviton2 is a commercial item, right? There are some other very interesting ones.

    So Paris is a location. This announcement says that this was launched in Paris and Milan. I think there was Milan in here somewhere as well. Europe is a location. These different instance types R6gd, that’s other. It doesn’t know how to classify that. Then you’ve got dates like August 4th, 2022, which is the date of this blog post. You’ll see this right up here on top. So, it does a pretty neat job of identifying all the key entities. Now, let’s say you wanted to create an index of all of your documents so that it’s searchable. Just figuring out what these key elements are, you can take all of this and start creating good neat indexes out of this that will make searching all this content a lot easier.

    Now, here’s another very interesting feature. If you wanted the searching to be a little more interesting, you can start using a new or another API called “Detect Key Phrases.” So, instead of just looking for words or single entities, you can start looking for key phrases. Because sometimes you want natural language-based searching. You want to look for these phrases. So let’s just try that out. Let’s see what we get. This time when we run this with key phrases, you’ll see that you’ve got long sentences like the Graviton2-based T4g, M6g, R6g, and R6gd instances. Or if you go up a little further, it says, the general purpose M6g, or it figures out that this is some quantity and that’s relevant. NVMe-based SSD storage is another key phrase that you might want to search for. It’s not just storage. It’s not just SSD. It’s not just NVMe. The fact that it’s NVMe-based SSD storage is important for good natural searches.
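The key-phrases call can be sketched the same way (again, my own wrapper and helper names, assuming boto3; the ranking helper is just one plausible way to build a search index from the response):

```python
def detect_key_phrases(text, language_code="en"):
    """Call Comprehend's DetectKeyPhrases API (requires AWS credentials)."""
    import boto3  # imported lazily so the helper below works without AWS access
    client = boto3.client("comprehend")
    return client.detect_key_phrases(Text=text, LanguageCode=language_code)

def top_phrases(response, n=5):
    """Return the n highest-confidence phrases, preferring longer phrases on ties,
    so 'NVMe-based SSD storage' would outrank a bare 'storage'."""
    phrases = response.get("KeyPhrases", [])
    ranked = sorted(phrases, key=lambda p: (p["Score"], len(p["Text"])), reverse=True)
    return [p["Text"] for p in ranked[:n]]
```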

    So, it provides a pretty neat way of collecting all this information without you having to write much code at all. I mean, this is what? This effectively is one line of code. Yeah, it looks like four, because…

  128. Stephen

    Yeah, you’re just wrapping [crosstalk 00:51:55.230].

  129. Rahul

    …one line for Boto, one line for this. Like five, six lines in total, and I didn’t really have to do very much to get all of this. And it probably cost me a fraction of a fraction of a cent.

  130. Stephen

    [Crosstalk 00:52:12.920] through all this data science, machine learning lectures, learning about TF-IDF and learning the n-grams and the bigrams, and building up indexes and manually labeling all this stuff. And even if you went through all that work and did all of that, it wouldn’t be nearly as good as this because it’s been trained on so much data.

  131. Rahul

    Exactly. Last, I am going to try this one. And I was writing this up just as you were starting this episode. I wanted to really understand how well it does on sentiment. Okay. So, I pulled up this “Top Gun: Maverick” review that someone had posted and brought it into the code. Actually, let’s copy this and replace this raw text with that. Okay, I’ll be really interested to see what it does on the entities. Again, live coding. I don’t know if this will work. A test for the Comprehend team. Let’s see…

  132. Stephen

    Let’s see.

  133. Rahul

    …how well it does on detecting the entities. Okay, let’s see what entities it found. It found Top Gun. It found Maverick. That’s the title. Then tonight is a date. Tony Scott is a person. ’80s is also a date.

  134. Stephen

    The neat thing about that is that it’s not just looking at, you know, syntax, but it’s looking at the way, what do you call it? It’s looking at the embedding. So, it’s like way back in the ’80s and that embedding. It’s not just looking for number, number S, right? This is not regex. This is embedding and context and how these particular phrases fit in and how they’re used in the language.

  135. Rahul

    Correct. One of the best… This one is interesting. “One of the best” it tagged as a quantity. I think this one, movies, I don’t know in what context, or it may be “one of the best movies.” This one seems off. Tom Cruise is a person. Cruise is a person. Let’s see, any other interesting ones here?

  136. Stephen

    Well, again, if you were talking about a cruise ship, it would figure that out based on the context.

  137. Rahul


  138. Stephen

    He was talking about Mr. Cruise or Cruise did a great job where cruise ship wouldn’t make sense even though it’s the same word.

  139. Rahul

    Exactly. Okay, let’s try out key phrases. Let’s see what it produces for key phrases. By the way, a shout-out to the DevSpaces team. As you can see, I’m doing this all live in a browser. Okay? DevSpaces makes it so easy to go straight from code without having to set up the environment. The environment is all provided as infrastructure as code. And by the way, if you don’t provide it, the default image that it uses has support for 25 languages right out of the box. So, you really don’t need to do very much. You can get started right away and start playing around with code.

  140. Stephen

    All you need is a GitHub ID. I put a banner up showing where to try it out. Literally, you don’t have to sign up with us, you just put in your GitHub ID and authenticate that, and right away you can just use DevSpaces on the Graviton3. It’s a hell of a lot easier than SSHing into a box and getting it all set up and remembering to close it down. It’s just right there, ready to go. I love this product.

  141. Rahul

    Correct. And if you close the browser, it’s ephemeral, so it goes away. So literally, for every PR, for every commit that you want to look at, you can create a completely separate environment, spin it up for the 10, 15, 20 minutes that you’re going to play around with it, and then just kill it or it’ll die automatically. So, it’s really simple. Okay, let’s come back to Comprehend. So, look at the key phrases. These are actually pretty neat. ’80s people is the key phrase. It wasn’t just ’80s the time, but it was the ’80s people. Everything else, friendship, loyalty, romance, old-school action movie, playful wings, and embellishments, absolute best, supporting cast. It’s pretty neat. Original “Top Gun” leather jacket.

  142. Stephen

    That’s [inaudible 00:56:55.250].

  143. Rahul

    Okay, now for the hard one. To be able to figure out sentiment, okay? I’m going to do a line-by-line sentiment over here because I want to really understand whether it’s able to provide that level of detail. So, let’s go ahead and do this. Oh, I dumped everything, sorry. I dumped everything in one go, but we could turn this into a line-by-line one. Okay, so this was everything, and it’s 94% positive, or a score of 0.94. So, I have very high confidence that this is a great positive review for the movie, and it’s just that simple.

    Can you imagine doing this for your internal documents, your blog, or to figure out the sentiment of all of your employees or of the communication that they are having, or to give feedback to someone? You know, when you’re sending out communication internally, you want to make sure that that communication is neutral or positive, and you’re not communicating negatively, because that’s not productive. Imagine this being a filter on your email before you send it out.
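The line-by-line sentiment idea could look roughly like this (a sketch with my own function names; DetectSentiment returns a Sentiment label plus a SentimentScore dict with Positive, Negative, Neutral, and Mixed keys):

```python
def sentiment_per_line(text, language_code="en"):
    """Run Comprehend's DetectSentiment on each non-empty line of text
    (requires AWS credentials)."""
    import boto3  # imported lazily so the helper below works without AWS access
    client = boto3.client("comprehend")
    results = []
    for line in filter(None, (ln.strip() for ln in text.splitlines())):
        resp = client.detect_sentiment(Text=line, LanguageCode=language_code)
        results.append((line, resp["Sentiment"], resp["SentimentScore"]))
    return results

def dominant_sentiment(score):
    """Pick the label with the highest score from a SentimentScore dict."""
    return max(score, key=score.get).upper()
```

Applied to a whole review at once, a SentimentScore like `{"Positive": 0.94, ...}` is where the “94% positive” reading above comes from.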

  144. Stephen

    Some people don’t recognize that their tone can be misperceived. And this being able to say, “Okay, before I send this email, oh, this is being read as more negative than I intended, but it should be neutral.” It’s really neat to be able to get that feedback from an essentially objective source.

  145. Rahul

    Yeah. One concrete data point I do have is that Comprehend does not understand sarcasm yet.

  146. Stephen

    Well, many people don’t.

  147. Rahul

    That’s true too.

  148. Stephen

    So, I think that’s a pretty high bar.

  149. Rahul

    Yeah, I agree.

  150. Stephen

    Those are some of the most complicated things that are out there.

  151. Rahul


  152. Stephen

    Well, I’ll share my screen real quick. I know we’ve got about five minutes. I wanted to share along the custom entity part. So what I did to train a custom named entity is I looked at video game names. So just to show the path of how this works. The steps are basically these: first, you want to upload some training data to S3. Then you can annotate that data using SageMaker Ground Truth, and then you train the model, and then you can test the model, and then you can run it in an endpoint. So, to show what that looks like, I’ll start with Comprehend. So let me open up a new tab here. So if I go to Comprehend…sorry, custom entity recognition. Go back… Okay, I see. I have this one set up for real-time analysis. So, this is going to use my model. I’ll show you how this model got set up. Let’s go to the training data. Okay. So first, I put some training data here, and it’s just a few text files that are articles from Blizzard press releases or video game press releases.

    So, if you look at what this looks like, similar to Rahul’s, it’s just text. And the next thing you do is you go into SageMaker Ground Truth and you can annotate that. And let’s see if we can find that. It basically looks like Mechanical Turk. And what you get at the end is you get a list of tags of all the different entities. So, you work through all the different words. You highlight them and say, “Okay, this word is a video game name.” And now I’ll show what it looks like at the very end. So, I’ve trained the model, and now I can say I want to do a real-time analysis on my game recognizer endpoint. Let’s clear this text and let’s go over to…here we go. Here’s Activision and here’s a press release for Diablo IV.

    Now, I didn’t train it on this particular game, but let’s paste in the text. Okay, we’re within the character limit, and let’s hit analyze. And there we go… So, it came up with Diablo IV is a game name, Diablo III: Reaper of Souls is a game name. So, it was able to figure it out and, again, I didn’t train it on these particular ones. It’s not looking for words in the text. It’s looking for the way that those particular types of entities are talked about, right? “I had a lot of fun playing blank, although I thought the gameplay itself was difficult,” or however it goes; that’s the kind of structure it’s picking up. And I trained this on literally 5 text files and spent 20 minutes annotating them.

    So, back to the original announcement, if I was to do this with 250 documents, it wouldn’t happen, I wouldn’t have been able to. But 3 documents, 25 annotations, yeah, I can power through that in a few minutes. And so it was really neat to be able to see that this model, even though it didn’t have very much training data at all, was able to work and produce some useful output.
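For the real-time custom analysis shown in the console, the equivalent API sketch passes an endpoint ARN instead of a language code (the GAME_NAME label and both function names are hypothetical, following the video game example; the endpoint ARN would come from the trained recognizer):

```python
def detect_custom_entities(text, endpoint_arn):
    """Real-time analysis against a trained custom entity recognizer endpoint.
    For custom models, DetectEntities takes EndpointArn instead of LanguageCode."""
    import boto3  # imported lazily so the helper below works without AWS access
    client = boto3.client("comprehend")
    return client.detect_entities(Text=text, EndpointArn=endpoint_arn)

def entities_of_type(response, entity_type):
    """Filter a DetectEntities response down to one custom type, e.g. GAME_NAME."""
    return [e["Text"] for e in response.get("Entities", []) if e["Type"] == entity_type]
```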

  153. Rahul

    No, this is pretty neat. To be able to do this without much heavy lifting, without really having to know much about ML or NLP or all the algorithms, and keep pace with the advancements that are happening on that front. I mean, this is absolutely amazing. So, you know, leverage it is my advice. These are some really amazing services that are at your disposal. Costs almost nothing to use. And the fact that you can run this, you can go create custom entities, custom models for your data, which is specific to your domain, I think opens up so many amazing use cases that, you know, I can’t wait to see what people come up with.

  154. Stephen

    Yeah. I’ve got this little personal Raspberry Pi-based dashboard and it has an RSS feed, and I want to try… Okay, can I monitor an RSS feed for a certain entity that I’m interested in and, whenever that comes up, alert me on the dashboard? And it should be able to extract that entity or even negative sentiment around that entity.

  155. Rahul

    Yeah. By the way, we use Comprehend behind the scenes to keep pace with all of these announcements that AWS is making and to bring to our attention the most important things that we can bring back to you. So, you know, we use Comprehend a lot, and, yeah, we’d love to hear from you about your experiences and your feedback on it.

  156. Stephen

    All right. So, final ratings on this article. I think this is a…what do you think? I would think this simplifies.

  157. Rahul

    Yeah, absolutely. This simplifies stuff. I mean, it just makes the product so much more usable from an end-user standpoint. I totally get what complexity must have gone on behind the scenes to create these models that can just work with a fraction of the documents and annotations, so kudos to the team. Really neatly done. And the quality of this article is pretty neat as well. It’s pretty straightforward. And I do believe… I wish that they just added the… Oh, there is a blog post, so we should be… Yeah, there is a blog post.

  158. Stephen

    I already clicked on it, so it went dark. There’s a blog post, you get sample data, you get an endpoint, you get [crosstalk 01:04:45.310].

  159. Rahul

    Correct. Their blog walks through exactly, step by step, what to do to get started with it.

  160. Stephen

    Yep. Links to a sample data set.

  161. Rahul

    Correct. So I give it a four out of five as well.

  162. Stephen

    You give it a four? All right. I was thinking four and a half. All right, let’s see what earns a four and a half at a future date. Solid four. Good example sample data. Great. All right, well, I think that wraps us up for today. Anything else, Rahul?

  163. Rahul

    No. We look forward to the next episode, which will be episode number 15.

  164. Stephen

    And that’s going to be a behind-the-scenes look at how we manage the live stream using our own tools, using serverless, and it’s going to be a lot of fun. Looking forward to sharing that with you.

  165. Rahul

    Yep, I am really looking forward to learning as well.

  166. Stephen

    All right, we’ll see you all next time.

  167. Rahul

    [Crosstalk 01:05:34.220] Comprehend behind-the-scenes.

  168. Stephen

    All right. Oh, have a good week…

  169. Rahul

    Great. See you all.

  170. Stephen


  171. Rahul

    Very good.

  172. [music]

  173. Female Speaker

    Is your AWS public cloud bill growing? While most solutions focus on visibility, CloudFix saves you 10% to 20% on your AWS bill by finding and implementing AWS-recommended fixes that are 100% safe. There’s zero downtime and zero degradation in performance. We’ve helped organizations save millions of dollars across tens of thousands of AWS instances. Interested in seeing how much money you can save? Visit to schedule a free assessment and see your yearly savings.