AWS Made Easy

Ask Us Anything: Episode 3 – AWS EKS Anywhere and more

In this episode, Rahul and Stephen discuss five recent announcements from “What’s New with AWS?”

Article #1 – AWS Compute Optimizer adds four new Trusted Advisor Checks

Note that both Trusted Advisor and CloudFix can help save money on your AWS bill. The key difference is that CloudFix’s suggestions have no impact on the user and do not require any changes in order to be implemented. You should use them both!

Article #2 – Lenovo ThinkEdge SE70 device, powered by AWS Panorama, is now available for sale

AWS Panorama devices allow AWS’s image and video recognition algorithms to be used at the edge, meaning situations where high speed / low latency is critical, bandwidth is limited, and/or data cannot leave the physical premises due to data governance requirements.

Try to identify the 1980s factory footage in this segment!

Article #3 – Amazon EFS now supports a larger number of concurrent file locks

In this article, we talk about EFS in general, and how useful it is in maintaining continuity and state when using ephemeral services such as Lambda and spot instances. This announcement increases the number of concurrent file locks from 8,192 to 65,536. This greatly increases the usefulness of EFS in some key dimensions. One example would be running databases backed by EFS.

This limitation has been known for some time, and it is really cool to see an 8x increase in the capabilities of EFS.

Article #4 – Amazon Lex now supports custom vocabulary

We had a lot of fun with this segment. Amazon Lex is AWS’s voice and text recognition service. Rather than simply “transcribing” voice, Lex parses the structure of the language and infers intent. For example, “Book me a flight from San Diego to Seattle” has an intent of “Book a flight” with parameters being “San Diego” and “Seattle”.

With this new feature, Lex can now support custom vocabularies. As an example of this, we use the American Diner Lingo. For example, ordering “Cowboy with spurs, make it cry, extra axle grease, and a mug of murk” would get you a western omelet with fries and onions, some butter, and a black coffee. It would be very difficult for Lex to make sense of this, as Diner Lingo uses words that are not in common parlance in regular conversational English.
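As a toy illustration of what a custom vocabulary buys you, here is a plain Python sketch. This is not the actual Lex API; the phrase-to-meaning mappings are just the Diner Lingo examples above, and the point is that without a domain vocabulary a transcriber has no way to recover the intended order.

```python
# Illustrative only: a plain speech transcriber has no mapping for
# Diner Lingo, so intent parsing fails unless the domain phrases are
# normalized first. Mappings below are the examples from the episode.
DINER_VOCAB = {
    "cowboy with spurs": "western omelet with fries",
    "make it cry": "with onions",
    "axle grease": "butter",
    "mug of murk": "black coffee",
}

def normalize_order(utterance: str) -> str:
    """Replace Diner Lingo phrases with standard menu terms."""
    text = utterance.lower()
    for slang, standard in DINER_VOCAB.items():
        text = text.replace(slang, standard)
    return text

print(normalize_order(
    "Cowboy with spurs, make it cry, extra axle grease, and a mug of murk"
))
# → western omelet with fries, with onions, extra butter, and a black coffee
```

A real Lex custom vocabulary works at the recognition layer rather than via string replacement, but the effect is the same: domain terms become first-class tokens the intent model can act on.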

Speech recognition interfaces are certainly the future of computing, as is shown in many science fiction movies. I showed one of my favorite scenes from Star Trek 4: The Voyage Home. Amazon Lex is one of the building blocks of the user interfaces of the future.

Article #5 – Amazon EKS Anywhere curated packages are now in public preview

EKS Anywhere is a service to allow AWS’s Kubernetes Cluster Management to work on on-prem clusters. The 10,000ft view is that with EKS Anywhere and these new curated packages, there will be a continuum between EKS Anywhere and EKS in AWS, allowing companies to leverage their existing infrastructure while limiting the amount of parallel effort necessary for managing this infrastructure.

Kevin Duconge


  1. Stephen Barr

    Hello there and welcome to our AWS Made Easy Ask Us Anything livestream. This is episode number three. We’re doing a weekly series, Tuesdays at nine a.m. Thank you for joining us today. How are you doing, Rahul?

  2. Rahul Subramaniam

    Doing very well. Looking forward to the discussion today.

  3. Stephen Barr

    Yeah, me too. We’re going to try a different format of going over what’s new in AWS. And again, for everyone in the audience, please feel free to ask us questions about anything AWS, or anything at all; we’ll do our best. Before we start, I’d like to thank our ASL interpreter, Erica. It’s really nice to have you with us today. One other thing to watch for, I don’t know if you saw it in the comments, but there are three subtle sci-fi references in this livestream. So keep an eye out for those, try to spot them in a comment, and see what they are. You know, make it fun and mix it up. And this is an Ask Us Anything, so we want to spend time answering your questions and going over real AWS issues that you’re facing, whether you’re just getting started or you have years of experience. We want to talk to you. Alright, let’s first just catch up. How was your weekend, Rahul?

  4. Rahul Subramaniam

    Really good. For the audience: both Stephen and I are big Lego fans. We just spent the last four months working on the Bugatti Chiron, and that is an insanely complex piece. It took us about four months to build it. What really amazed me was how much concentration it drew from the kids; they sat for hours on end trying to put together these pieces, following hundreds of pages of instructions. It was a really, really interesting learning experience for all of us.

  5. Stephen Barr

    That’s incredible. I remember doing the original Technic Supercar with my dad, and I think they’ve gotten, what would you say, at least two orders of magnitude more complicated.

  6. Rahul Subramaniam

    I think so, I think these have gotten really, really, seriously complicated. For these ones, you have to build the engine with all the camshafts, the transmissions, the differentials, the steering mechanisms, and all of that actually works. They’ve tried to make it very complex and as realistic as possible.

  7. Stephen Barr

    Oh, wow, that’s fantastic. So we ended up going to The Marriage of Figaro over at the Seattle Opera. It was really fun. And you talk about attention span: my two older kids love it, and they were riveted the whole time. I mean, it was a fascinating story, but I’m just amazed at their ability to hold their attention on this kind of complicated, convoluted story for a couple of hours.

  8. Rahul Subramaniam

    No, it’s fantastic, and it’s great exposure for the kids as well, to get to go through these experiences. Definitely very, very enriching.

  9. Stephen Barr

    Yeah, it was a blast. It’s been a bit cold and rainy here in Seattle, unseasonably, but I imagine it’s a bit hotter for you.

  10. Rahul Subramaniam

    Pretty much. We had a number of showers over the last two or three days, but now it’s back to being hot, sultry and uh yeah, we’re gonna live through that for the next few weeks.

  11. Stephen Barr

    Oh, fantastic. Alright, well, should we dive into what’s new in AWS?

  12. Rahul Subramaniam


  13. Stephen Barr

    So let’s pull this up. We decided to pick out five items from What’s New with AWS. Keeping up with AWS is a bit like drinking from a firehose; there’s so much going on, and so many incremental changes to each one of their different services. So we thought we’d just go over a few of them and see what’s happening across the spectrum of AWS services. And by the way, for the audience: if there are other announcements you want to pick out, or you want to ask questions about some of the others, like Stephen said, it really is a firehose that you feel like you’re being bombarded by, and keeping track of all of that is really hard. So we’re just taking a few today, but we’d love your feedback and we’ll pick whatever you’re interested in talking about. Just post the URL in the chat and we’ll open it up right here and go. So the first one we wanted to look at, and this was posted on May the fourth, on Star Wars Day, is the Trusted Advisor one.

  14. Stephen Barr

    So just looking at the text: “AWS Compute Optimizer adds four new Trusted Advisor checks,” meaning Trusted Advisor adds four checks automatically ingested from AWS Compute Optimizer. AWS Trusted Advisor provides recommendations that help you follow AWS best practices. Okay, so just breaking it down into pieces: Compute Optimizer analyzes your AWS workload and helps you choose optimal configurations, in particular for EC2, EBS, and Lambda, based on your utilization. So say you had a very large instance with very low but very predictable utilization; Compute Optimizer might tell you, hey, your instance is too big for what you’re using it for. So Rahul, what’s the relationship between Trusted Advisor and Compute Optimizer?

  15. Rahul Subramaniam

    Yeah. So think of Trusted Advisor as your place to go for a lot of the AWS best practice recommendations, for which they have built a bunch of finders; that’s the place where they bring in all of these recommendations. They have seven core categories under which a lot of these insights or advisories come in. For services, primarily S3, you’ll see security groups, you’ll see IAM, you’ll see multi-factor authentication and your permission schemes, and you’ll see some recommendations on RDS, EBS, service limits, and so on. Broadly, I think the vast majority of recommendations that exist today fall under the security category. So even though they span all of these services, the first ones they focused on, and rightly so, are all the security ones where developers or customers end up leaving loopholes in their accounts: “don’t log in with your root account” kinds of recommendations, using IAM roles for your services, using tokens that expire regularly and refresh tokens to operate your services, and not opening everything up to a CIDR of 0.0.0.0/0 for your resources. Those are the kinds of best practices they put into Trusted Advisor. Now, over a period of time, what Trusted Advisor has been doing is drawing from a bunch of other tools that AWS has built, and Compute Optimizer happens to be one of them. Where does Compute Optimizer help? It brings in certain insights related typically to cost and appropriate resource utilization. They started off with recommendations around EC2 instances, where they would look at your CPU utilization, memory, network I/O, and disk I/O, and give you a few options as recommendations. Once you picked one, it would help you run a few what-if scenarios to understand what might happen if you chose one of the three or four options they provide, and that helps you make better decisions.

    Now, just to be clear, these are advisories where you still need to do a bunch of work. You still need to go look at those options and make tradeoffs as necessary. You also need to make sure that you’ve covered some of the fundamental requirements. So, for example, please, I ask everyone: install the SSM agent on all your instances. It doesn’t happen by default. Install it so that you can actually start capturing memory metrics and make better optimization decisions; without simple metrics like that, it’s really hard to decide what the right size for your instance is going to be. The next thing I recommend everyone does is really think about all the variables involved in making a change on these instances, because you have to think about what is actually running in there. Is it really stateless? Is there stuff in memory that’s not being persisted? Because when you change your instance type, you’re going to have to take a snapshot, shut down your machine, start up the new instance, and then start using the new instance. So if there’s stuff still in memory that’s not persisted, you might end up losing it. So be cautious about all of those things. But if you actually get to the right resource utilization, you suddenly find that everything is far more performant; you’ve balanced all of your tradeoffs between IOPS, CPU utilization, and memory utilization, and everything just works seamlessly. So there’s a bunch of work you have to do.

    The other area where we’ve found Compute Optimizer to be incredibly valuable is Lambdas. Really trying to figure out how much compute and memory you need for Lambda functions is incredibly hard because it’s serverless: you don’t have control over the server, and you basically have to look at executions over a vast number of Lambda calls. What Compute Optimizer does is all that hard work for you: it collects all of that information about how much CPU got used and how much memory got used, does all the math, and tells you what the right size for cores and memory should be. And they just announced memory size over-provisioning as one of the insights out of Compute Optimizer, which is awesome. They were doing CPU earlier; now it’s memory as well. So you get to optimize your CPU and memory and reduce your Lambda costs quite significantly.
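As a rough sketch of how you might pull these Lambda right-sizing recommendations programmatically, here is some hedged boto3 code. The `get_lambda_function_recommendations` call and the response field names below are as we understand the Compute Optimizer API; verify them against your SDK version before relying on this, and note it requires opting in to Compute Optimizer plus AWS credentials.

```python
def top_memory_option(recommendation):
    """Pick the rank-1 memory option from one function's recommendation.

    The dict shape mirrors Compute Optimizer's
    GetLambdaFunctionRecommendations response as we understand it.
    """
    options = recommendation.get("memorySizeRecommendationOptions", [])
    if not options:
        return None
    best = min(options, key=lambda o: o.get("rank", 999))
    return best["memorySize"]

def fetch_recommendations(region="us-east-1"):
    """Fetch live recommendations; needs credentials and opt-in."""
    import boto3  # imported here so the rest of the sketch runs without it
    client = boto3.client("compute-optimizer", region_name=region)
    resp = client.get_lambda_function_recommendations()
    return resp.get("lambdaFunctionRecommendations", [])

# Example with a response-shaped stub (no AWS call made):
sample = {
    "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:demo",
    "currentMemorySize": 1024,
    "memorySizeRecommendationOptions": [
        {"rank": 2, "memorySize": 512},
        {"rank": 1, "memorySize": 256},
    ],
}
print(top_memory_option(sample))  # → 256
```

The function ARN and option values in `sample` are made up for illustration; in practice you would feed `fetch_recommendations()` output into `top_memory_option` and review each suggestion before applying it.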

  16. Stephen Barr

    So just to recap: first, I really like the idea of having them focus on security first, because that has got to be the priority. That said, with Lambda, I’ve definitely fallen into this trap where, when you’re developing, you way over-provision it, and then once you get the immediate task done, you can forget to adjust it back down, and you’re running this Lambda that’s way too big for what you need. And the same thing with under-provisioning: say you’re using a Lambda function that’s loading up all of your users or all of your transactions, and that might grow over time, and eventually you’re going to hit that out-of-memory error, and you want to be able to catch that at scale. So Trusted Advisor and Compute Optimizer are watching for these types of issues. Trusted Advisor is, I suppose, the general framework of advice for managing your AWS account, and Compute Optimizer is one of the sources of data it can draw upon to give you that advice, correct? Cool. I can see this being really, really useful in helping people save on their AWS bills. Do you see EBS volumes being over- or under-provisioned? How do you see that cropping up in practice?

  17. Rahul Subramaniam

    So this is an interesting one. In my experience, I actually find that developers either operate in powers of two or powers of ten. So when they provision EBS volumes, it’s either eight gigs, 16 gigs, 64 gigs, in those powers of two, or they’ll do 10 gigs, 100 gigs, a terabyte, that kind of volume provisioning. And to be honest, I don’t think developers spend enough time thinking about how much disk space they actually need. On the face of it, it looks really cheap to take a one-gig or 10-gig or 100-gig EBS volume and just slap it onto your instance. But over a period of time, as you do a lot more of these, you suddenly find that you’re wasting so much space on those volumes; you really never needed that much, and in most cases I’ve found volumes to be provisioned in excess of 10x their actual needs. So there’s tons of room for right-sizing them, and having Compute Optimizer do that for you out of the box is pretty neat. It’s actually fairly straightforward to resize your EBS volumes too, so if they’re small and you need to resize them at a later stage, you can do that with a very simple API call. So it makes sense to right-size them at regular intervals and make sure you have them running optimally.
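The “very simple API call” here is EBS `ModifyVolume`. A hedged sketch follows, together with a small helper for picking a right-sized capacity; the helper’s name and headroom target are ours, not anything AWS-defined. Note that EBS volumes can only be grown in place (shrinking means migrating data to a smaller volume), and after growing you still have to extend the partition and filesystem from inside the instance (e.g. `growpart` plus `resize2fs` on ext4).

```python
import math

def suggest_size(used_gib, target_utilization=0.8):
    """Smallest whole-GiB size keeping utilization at/below the target.

    Hypothetical helper for illustration; pick your own headroom.
    """
    return max(1, math.ceil(used_gib / target_utilization))

def grow_volume(volume_id, new_size_gib, region="us-east-1"):
    """Issue the EBS ModifyVolume call; requires AWS credentials.

    EBS only grows in place; afterwards you must still extend the
    partition/filesystem inside the instance.
    """
    import boto3  # imported here so the sketch runs without boto3 installed
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.modify_volume(VolumeId=volume_id, Size=new_size_gib)
    return resp["VolumeModification"]["ModificationState"]

print(suggest_size(64))  # → 80  (64 GiB used on an 80 GiB volume is 80%)
```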

  18. Stephen Barr

    Yeah, that makes a lot of sense, and I definitely fall into that habit of powers-of-two thinking. This would be a pretty nerdy diversion, but when you make your RSA keys, you say, okay, I’ll take the 4,096-bit RSA key. And I saw this advice once that it doesn’t need to be that size; you could make it, say, 3,997 bits, because if there’s ever a giant RSA rainbow table, it will be built around common sizes like 4,096, since that’s where the search space is going to be. And like you said, it’s a couple of pennies when you’re dealing with one volume, but all of a sudden that one has become thousands, and you want to right-size. Absolutely. So the last question before we move on to the next one is about our sponsor, CloudFix, and how it relates to Compute Optimizer. At first pass, they’re both looking at optimizing the usage of AWS resources, but how are they different, or how do they overlap? What’s the Venn diagram?

  19. Rahul Subramaniam

    So I think this set of recommendations is disjoint from what CloudFix does. CloudFix focuses primarily on cost advisories from AWS that are completely nondestructive. So with literally a couple of clicks, you can start saving money without worrying about restarting instances, something getting lost, or disruption to your service. These recommendations, by contrast, are disruptive: they either need an instance restart, as we were just discussing, or an EBS volume resize, which, depending on the scenario and which way you’re going, may require you to detach the volume, work on it, and then reattach it. And Lambdas, when you resize them, may cause a little bit of disruption as they turn over, unless you manage the mechanism by which these changes are made. So I would say they are potentially disruptive, and not 100% settled: you do need to make some tradeoffs at times. So the Venn diagram stands disjoint at this point between what Compute Optimizer is doing and what CloudFix does.

  20. Stephen Barr

    I got it. So the optimizer is saying, okay, here are some changes you could make, but don’t just close your eyes and push the button; whereas with CloudFix there’s really no disruption: you go ahead and do what it tells you, there’s going to be no performance hit, and you’re just going to flat out save money.

  21. Rahul Subramaniam

    Correct. And by the way, one other thing: there isn’t one answer to these Compute Optimizer recommendations. If you go to the EC2 ones, they typically list three or four options with different profiles. Do you care about finishing the workload faster? In which case, if you find that it’s multi-threaded, they bump up the CPUs and you can probably resize to compute-optimized instances. Or if you want to reduce your cost, it might be memory-optimized instances. If your network I/O and the like are the primary drivers, you might decide to go with certain other kinds of I/O-optimized instances or EBS-optimized volumes on those instances. So there are different tradeoffs you have to make depending on what your motivation is for that workload, and that’s not something a machine can decide just out of the box. Given that, you as a human have to go in, look at these options, make tradeoffs, make a decision, and then execute on it.

  22. Stephen Barr

    Okay, so it really is an advisor in the true sense: it’s giving you some advice, but it’s not going to be the push of a button. You have to think about what changes you need to make in order to take this advice on board.

  23. Rahul Subramaniam

    That’s correct.

  24. Stephen Barr

    Alright, well, that was really fun to think about. It is pretty neat, just in general, thinking about how to more efficiently utilize these instances and resources. Alright, so the next one is a bit more specific, but it’s fun to talk about the broader context it sits in. This is the ThinkEdge SE70, powered by AWS Panorama, now available for sale. In order to discuss what this is, we have to think about what edge computing is. Edge computing is for applications where, due to bandwidth restrictions or data governance reasons, you’ve got data you want to do something with, but it can’t leave where it is. An example would be computer vision in manufacturing: you have a manufacturing pipeline, things are happening really fast, you’ve got an IoT device like the SE70 taking a photo, and you want to run some image recognition on it. But manufacturing is fast; it might be hundreds of items per second. So this is where you want something on the edge. I’m going to show a video of where this fits in.

  25. [Video of recycling center powered by machine learning]

  26. Stephen Barr

    I thought this was pretty neat. That’s a recycling center, and they’re taking pictures of the different bottles and using machine learning to sort and classify the bottles into different types of plastic, which makes the recycling more efficient. But they couldn’t take a picture and send it all the way to AWS: they’re doing one bottle every tenth of a second, so that would be too slow. Maybe if you had a great Internet connection, but you’d still have to send the image to S3, make an API call, and process the result. Do you want to do that in real time? So where do you see edge computing fitting in?
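The latency argument can be made concrete with some back-of-the-envelope numbers. The figures below are illustrative guesses, not measurements: the point is only that a cloud round trip blows the 100 ms budget that a bottle-per-tenth-of-a-second line imposes, while local inference fits easily.

```python
# One bottle every 0.1 s: the whole capture-classify-actuate loop
# must fit in 100 ms. All path timings are illustrative guesses.
BUDGET_MS = 100

cloud_path = {"upload_image": 120, "api_call": 40, "inference": 30, "response": 20}
edge_path = {"capture": 10, "local_inference": 30, "actuate": 10}

cloud_total = sum(cloud_path.values())  # 210 ms: misses the budget
edge_total = sum(edge_path.values())    # 50 ms: fits with headroom

print(cloud_total <= BUDGET_MS, edge_total <= BUDGET_MS)  # → False True
```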

  27. Rahul Subramaniam

    Yeah. So I think the way you should think about edge computing is that there are certain scenarios where the cloud compute infrastructure has to extend to the edge because of constraints around bandwidth and latency. You have to have considerations around what the workload really is and how much data you actually end up processing. There are times when the sheer volume of data you’re processing is so large that, no matter how large the pipes connecting you to the cloud are, there’s no way you can get through all of that data. That’s a lot of the work Panorama is doing. For more details, by the way, there’s an amazing podcast I did with Omar, the GM of AWS Panorama, just a few weeks ago, where he talks about where Panorama excels in its compute and the kinds of use cases they’ve been working on, but I’ll try to cover some of that here.

  28. Stephen Barr

    I’m gonna put the link to that in the chat.

  29. Rahul Subramaniam

    Yeah, so some of the examples Omar talked about in that particular episode: they’ve got warehouses and big supply chain logistics partners, where what you have on premises is all of these CCTV cameras literally monitoring and managing everything, but that data has not been made use of at all. With this edge appliance, you can take machine learning models and deploy them on the edge remotely from your AWS console. You can literally just connect your IP cameras; you don’t need to change any of your infrastructure or hardware. Just add this new appliance to your rack, put it on the same network as your cameras, start getting the data streams from your cameras, and then you can start applying machine learning models to whatever is happening over there. Some use cases were pretty amazing. In certain hospitals, they’re looking at patient care: making sure patients don’t fall off their beds, or predicting if a patient is likely to fall off their bed while they’re in the hospital. Things like that are really remarkable use cases. I think Omar talked about another case where, on remote farms, humans are sometimes responsible for inflicting cruelty on the animals, and being able to detect that remotely using machine learning models and act on it immediately is something very remarkable. Understanding what’s happening around security and logistics, where packages are, where shipments and containers are moving in massive dockyards: those are all use cases Panorama is now enabling, and having this new device available at a lower price point is pretty amazing.

    The original AWS device was based on the NVIDIA Jetson AGX Xavier processor. I think you have the Xavier NX over there; the AGX is what goes into the AWS-built device, and that has a lot more throughput and a lot more bandwidth. If I’m not mistaken, it supports somewhere from 8 to 20 cameras, though that also depends on what kind of models you’re running and how much compute your machine learning models actually use. But the new ones just launched by Lenovo are pretty neat too. This is some powerful, powerful kit: 384 NVIDIA CUDA cores, 48 Tensor Cores, and six ARM cores. And they’re hardened devices; if you look at this photo, it’s a fanless design for industrial applications, so it could be used on a loud, busy factory floor or in farming applications, and you’re able to have that image recognition right there. And also, you mentioned medical: this can do image or voice recognition where the data isn’t leaving the premises, which is really important in some of these regulated environments, while you still take advantage of AWS’s scale in developing these large models. That’s pretty neat. I wish I personally had an application for this; I think for me, a Raspberry Pi and an Arduino are probably going to be sufficient, but if I did have a factory, I would definitely want one of these. I actually ended up getting one of these Jetson Nanos, and I’m still working on a machine learning model for the security of my house. I have 11 cameras around my place, and I’m looking forward to experimenting with the Jetson Nano to see what kind of models I can come up with.

  30. Stephen Barr

    You know, those Jetsons are really neat. I did some work with a drone company once, trying to get one of the Jetsons up in the air. They’re a lot of fun devices, and it really shows the versatility of the ARM platform. Just thinking about this in the context of factories, it’s amazing how much those have developed. I love watching these factory videos; there’s a subreddit called r/automate. When I first thought of a factory, this is what I always pictured. Alright, did anyone catch where this particular factory footage comes from? It is amazing, the speed and throughput of these.

  31. [80s factory video]

  32. Stephen Barr

    So this is from the 80s, obviously, but when we think about what’s happening these days, it looks more like this.

  33. [Video of fast pick-and-place machine]

  34. Stephen Barr

    So this is a pick-and-place machine, where it’s taking these one-millimeter components and picking them up off of these strips: strips of resistors, strips of capacitors. It picks them up, takes a picture, figures out exactly where each one needs to go, and places it, and this all has to happen really, really fast. And this particular one is building a circuit board, and the circuit board can then pass by an edge device that can do something like defect detection.

  35. Rahul Subramaniam

    There’s actually some really interesting work being done in defect detection. If you look at Lookout for Vision, one of the AWS services, I think if you give it something like 10 training images of defective and non-defective pieces, it does a pretty neat job of identifying anomalies or defects in all future pieces and parts that come through the line. So check out AWS Lookout for Vision as well. It’s a very, very neat service in the manufacturing and industrial space. If you have a use case, we’d love to know.

  36. Stephen Barr

    Yeah, awesome. And I like how AWS really has this continuum from edge to cloud; we’ll see this again with EKS Anywhere, one of the other things we’re going to talk about soon. We got a question from the audience that I think we should talk about. Let’s put this on the screen.

  37. Rahul Subramaniam

    So the question says: in what situation would you recommend using an auto-scaling EC2 volume, from a cost reduction perspective? I’m a little confused, to be honest, about this particular question. I’m wondering if it’s about EBS volumes or about auto-scaling EC2, because those are two different things. But let me talk about using auto-scaling EC2 instances. I think that’s the right way to do it, because the whole point of leveraging the cloud is that you have elastic workloads, elastic compute being leveraged to perform processing on your workloads. As long as your processing is elastic, scaling up and down depending on how much workload is coming in, you’re actually optimizing costs. So auto scaling groups should be leveraged by default. It does require you to rethink your core architecture: statefulness doesn’t really work with auto scaling; it has to be fairly stateless to operate. So if you have stateful workloads, stateful processing happening within your processes, that’s where it gets tricky: you have to figure out how to extract that out and make it stateless, because once you do, you can auto-scale up and down and not have to worry about managing state. There are some workarounds; we’re going to talk shortly about EFS as one of the ways to manage some of the statefulness that lives in file systems. But in general, I think statelessness is where you need to be in the long run. And then EBS: we don’t really think of EBS volumes as auto scaling, but you can resize EBS volumes as and when you need, which is actually really handy. So my only advice there is to have regular checks and set thresholds for your EBS volume utilization. For example, say you want your EBS volume to operate between 60 and 80% utilization of its space.

    Have a trigger at 60 and a trigger at 80. As soon as it crosses the 80% threshold, trigger an API call that resizes your volume up to the point where utilization comes back to 60. If it drops below 60, have a trigger that resizes it down so that the utilization is back at 80. And you can do this with a very simple Lambda function that calls the API to do the resizing.
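The 60/80% band described above boils down to a small piece of arithmetic that a Lambda function behind a CloudWatch alarm could run. Here is a pure-logic sketch; the thresholds are the ones from the episode, the function name is ours, and remember that growing an EBS volume is in-place while “resizing down” really means migrating data to a smaller volume.

```python
import math

LOW, HIGH = 0.60, 0.80  # keep utilization inside this band

def next_volume_size(size_gib, used_gib):
    """Return the new size in GiB, or None if utilization is in band."""
    utilization = used_gib / size_gib
    if utilization > HIGH:
        # grow until utilization falls back to the low mark
        return math.ceil(used_gib / LOW)
    if utilization < LOW:
        # shrink (via migration) until utilization rises to the high mark
        return max(1, math.ceil(used_gib / HIGH))
    return None

print(next_volume_size(100, 85))  # → 142  (85% used: grow to 142 GiB)
print(next_volume_size(100, 70))  # → None (in band: leave it alone)
```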

  38. Stephen Barr

    All right. Yeah, I really like that idea, and I think that’s a great segue into EFS, where EFS is this persistent file system that can hold the state, while the actual EC2 instance working with it can be ephemeral: it can vanish or be replaced with a bigger one as needed. And I like that idea in general of being able to leverage the cloud to scale up very quickly. This would be another good old-tech reference, but we used to talk about websites getting slashdotted, meaning you have your website running on your little Pentium Pro 200 and it’s fine 99.9% of the time, and then it makes the front page of Slashdot and all of a sudden you get this 10,000-fold increase in traffic and it goes down. We don’t really see that much anymore: if something shows up on Hacker News, or still on Slashdot, it’s usually hosted in a way that it can scale up to meet demand, so for that one-time 10,000x surge in traffic, it will handle it. Yeah, absolutely. And the cloud, and it sounds like AWS in particular, has really made that possible; managing it yourself used to be too hard. Alright, let’s go on.

  39. Stephen Barr

    [Talks to kid who just walked into the room] I'm talking right now. Can you talk to mom? Hello? Sorry, I got a little visitor; my two-year-old daughter just came to say hello.

  40. Stephen Barr

    Alright, let's talk about EFS. So EFS now supports a larger number of concurrent file locks. This is a really important one. For background, like you said, EFS is the Elastic File System, and the idea is that it can persist and share data amongst serverless applications. Whether it's EC2, a Lambda, or a container in EKS, most of the compute services can use EFS, so you can have very short-lived compute paired with very long-lived storage. And the announcement today, let's read it right here, here's the big part of it: this update increases the number of simultaneous file locks to 65,536, from 8,192; again, those powers of two. This means that EFS can be used for a broader set of applications that use file locking. There was a great blog post I was reading, called something like "the first limit you'll hit on AWS EFS is locks"; I'll put it in the chat, I thought it was a pretty good post. It seems like so many locks, so where do you see this limitation being hit? The example that the blog post gave was databases.

  41. Rahul Subramaniam

    Mm hmm. Yeah. So just for some background context, for those not familiar with EFS: think of it as a managed version of a service like NFS. And there are two areas where you end up needing something like this. The first one, like I said, is when you want ephemeral compute, where the compute is short-lived. For example, you need tons and tons of compute to do something, but you don't want to spend an insane amount doing it, so you want to leverage as many spot instances as possible. The drawback of spot instances is that they can be taken away at any point in time, but spot instances can be 90% cheaper than your on-demand instances, right? But then it comes to state: where do you store state when you're processing tons and tons of workload? You can store state in a database, but a lot of older applications store state in the file system; you took some data, you did some processing, and you dumped it back into your file system. So the file system was one of the mechanisms of storing state. Now, when you don't have something like NFS, you only have EBS volumes, and your file system is local to your machine, your instance. The only way you can take that data and put it somewhere else is to either detach the volume and attach it somewhere else, or actually try to do some kind of live transfer from one instance to another, and when everything is ephemeral, orchestrating that is an absolute nightmare. So EFS comes in as an amazing bridge that helps you resolve that gap, in being able to deal with some of the older workloads that still don't use things like object stores, or are required to use a file system. I'll give you an example of a place where we started using NFS very extensively. Today, across the entire portfolio, we manage about 2 to 2.5 billion lines of code, and they are all in git repositories.
    We have, I think, nearly 100,000 git repositories, if not more; I've forgotten how many there are now. But one of the things we do as part of our analysis, whenever we acquire a company, whenever we are really trying to understand what's happening with the code bases, is we always want to know how we got here. Typically the companies that we acquire are 10, 20, 30 years old, and they have tens of thousands of commits, if not hundreds of thousands, across different branches. So one of the most important things we try to identify is how we got to the state of the code base that we acquired. We literally start analyzing the code from commit number one and then go commit after commit to understand what changes got made, what technical debt got added at what step, why it got added, and what the evolution of the different features was. And you have to do this commit by commit across hundreds of thousands of code bases, so you want ephemeral compute to do all of this. Unfortunately, because this is all in git, we couldn't use S3 as a persistence mechanism, because git operates on a file system; you can't run git on S3. We were limited by the fact that each commit is very beautifully packaged in your git repository, but it always runs on the file system. So we needed some sort of file system to operate on, and EBS wouldn't work, because the average git repository that we have is somewhere around five to six gigs. Every time you needed to take that six-gig repository and split the work out across 1,000 compute units, you'd be copying six gigs 1,000 times, and that is not cheap and that is not fast.

  42. Stephen Barr

    So: 100,000 repositories, average size six gigabytes, and you can have them sitting there on EFS, ready to be accessed by Lambdas, by spot instances, by anything you want. And it seems like EFS has really made sure that it's accessible from the majority of these compute services. That's really impressive, and it's really nice to know you can use it that way. That is an interesting use case.

  43. Rahul Subramaniam

    Right, we need tons and tons of files that we write to. You should think of it literally as a pipeline of processes that are just running step after step. Every step writes back to EFS with some of its analysis, and then other processes pick it up: they pick up a commit, they pick up the previous analysis, because you need to do a diff between your previous analysis of the previous commit and the current analysis that you just got on the current commit, and then you write that diff as the analysis for the next step, and so on. So locks started becoming an integral part of what we had to watch for. Even though we were using Lambdas and using spot instances for managing our compute, we were limited by the number of locks we had, because that was literally the number of Lambdas you could run, the number of spot instance jobs you could run at the same time, since each of these processes is holding, say, 100 locks. If you grew your compute beyond that limit, you'd basically start receiving tons of errors, and you'd have to throttle your entire pipeline.
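On Linux, each worker in a pipeline like this would typically take an advisory lock on the file it is writing, and on EFS (which speaks NFS) each held lock counts against the lock limit the announcement raises. A minimal sketch of one pipeline step follows; the path and contents are hypothetical, and `fcntl` advisory locks behave the same way on a local file system, which is what makes this runnable anywhere.

```python
import fcntl
import os
import tempfile

# One step of the analysis pipeline: take an exclusive advisory lock on
# the result file, write this step's analysis, then release the lock so
# the next stage can pick it up. On EFS, every lock held like this
# counts toward the file-lock limit, which is what capped concurrency.
path = os.path.join(tempfile.gettempdir(), "commit-1234.analysis")
with open(path, "w") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # block until we own the lock
    f.write("commit 1234: analysis complete\n")
    fcntl.flock(f, fcntl.LOCK_UN)   # release for the next stage
```

With thousands of workers each holding dozens of locks like this, the old 8,192-lock ceiling is easy to hit, which is exactly the throttling Rahul describes.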

  44. Stephen Barr

    So that's pretty awesome news, that the limit has grown. If you were to do this today, you could do it eight times faster?

  45. Rahul Subramaniam

    Absolutely, we could run eight times faster when we acquire a company, which is awesome, because right now it takes us a few days to finish parsing, in some cases, nearly millions of commits.

  46. Stephen Barr

    I think that we need to do a separate episode solely devoted to this, the architecture of this, this code analysis because this is really fascinating. I think it would be a really fun episode to deep dive into that.

  47. Rahul Subramaniam

    Absolutely, we’d love to do it

  48. Stephen Barr

    And then just one more thing. This is not the first time EFS has done a big magnitude increase like this; you don't usually get an 8x boost very often, so congrats to the EFS team for launching this. I guess in 2020 they did a 400% increase in read operations per second, so they're really pushing the envelope on performance, in leaps and bounds.

  49. Rahul Subramaniam

    Now, actually, EFS, given the old heritage of NFS, is one of the services that has so much room for improvement in a cloud native environment. You can start rethinking what it means to build storage that is accessible across multiple computers, with multiple writes happening at the back end; you can truly build it in a collaborative manner. So there's tons of headroom to grow.

  50. [Stephen’s kid walks in again]

  51. Stephen Barr

    Yeah, absolutely, one second. Can you go visit mom for a moment? We need 20 more minutes. Okay, see you soon.

  52. Stephen Barr

    All right, well, let's move on to Amazon Lex. So Amazon Lex is their service where you can build conversational interfaces into any application, using either voice or text, and I think Lex is the part of Alexa that does the voice recognition. Now, whenever I think about voice recognition with computers, this scene comes to mind, so I'm going to put it on the screen.

  53. [Video from Star Trek 4]

  54. Stephen Barr

    So I think it's a universal truth in science fiction that the future of interaction with computers is voice driven. We've seen that a lot, and it's nice to know that we're moving closer to the world of the starship Enterprise every day; I think Lex and things like it are going to be among the building blocks that make it possible. So, pulling up the announcement again: before this, Lex recognized English generally, but languages in general are interesting because we get these domain-specific vocabularies that evolve in particular contexts. One interesting, fun example is American diners. There's this thing called diner lingo; I'll put a link in the chat about what it is, but the idea is that there's this whole language that's evolved in American diners, funny names for food, and I'll play an example.

  55. [Video of diner orders]

    Two eggs scrambled on toast. Sure thing, honey, Adam and Eve on a raft, wreck 'em. I'd like a hot dog with ketchup and some Jell-O, please. Paint a bow-wow red and a side of nervous pudding. Can I have a well-done burger with lettuce and tomato? Burn one, drag it through the garden, pin a rose on it.

  56. Stephen Barr

    So that's a custom vocabulary, right? A set of words that only makes sense in a particular context. And if you were to build a chatbot, some special Alexa that could recognize speech and take orders in a 1950s diner, what you'd do is build up this custom vocabulary so it would understand what you're talking about. If you look up this blog post on diner lingo, it's a truly unique set of phrases that wouldn't make sense in any other context. So what other good uses do you see for this, Rahul? And being conscious of time, because we have two more announcements we want to talk about and about 12 minutes left.

  57. Rahul Subramaniam

    Yeah, so I'll answer this very quickly. Lex is actually one of those services that I absolutely love. If our audience hasn't looked at it, I'll break it down very simply: Lex has this concept of an intent, and an intent is basically what you intend to do. An intent is constructed from a set of phrases that you might say, the utterances, and a bunch of slots. Think of slots as variables. So for example, you might ask a question like, "Book me a ticket from L.A. to New York on the 25th of December." If you had a phrase like that, booking a ticket is your intent. The utterances are "book me a ticket" and the two or three different ways of saying it. And then your slots are your origin, your destination, and your date; those are your variables, and Lex does the job of resolving them.
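Rahul's booking example can be sketched as a toy intent matcher. Lex resolves intents and slots with machine-learned language models, not regular expressions; the regex here is purely illustrative of the data model (one intent, one utterance pattern, three slots), and all names are made up.

```python
import re

# Toy stand-in for Lex's intent/slot model: the named groups play the
# role of slots, and a successful match plays the role of intent
# recognition. Real Lex matches many utterance variants with ML.
UTTERANCE = re.compile(
    r"book me a ticket from (?P<origin>[\w. ]+?)"
    r" to (?P<destination>[\w. ]+?) on (?P<date>.+)",
    re.IGNORECASE,
)

def recognize(text):
    """Return the recognized intent and filled slots, or None."""
    m = UTTERANCE.match(text)
    if not m:
        return None
    return {"intent": "BookTicket", "slots": m.groupdict()}
```

For instance, `recognize("Book me a ticket from L.A. to New York on the 25th of December")` yields the `BookTicket` intent with origin, destination, and date slots filled.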

  58. Stephen Barr

    So for my diner bot, my intent would be "order breakfast," and if I said, and I'm going to put this phrase in the chat, Lex is clearly going to have to translate this one, "cowboy with spurs, make it cry, extra axle grease, and a mug of murk," those phrases are going to fill my slots. What I'm really saying is: a western omelet with fries, and onions, that's "make it cry," some butter, the "axle grease," and a black coffee, the "mug of murk." So now, instead of having to teach a different vocabulary to this bot, or to the people interacting with the bot, they can use the familiar vocabulary, the diner lingo, and Lex will know: okay, I know this body of words I'm searching, I know I'm looking for "mug of murk" and "axle grease" and "make it cry," all these funny phrases that wouldn't be used in a normal English conversation.
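A custom vocabulary is, at heart, a mapping from domain phrases to their meanings. A toy version of the diner-lingo translation, using just the handful of phrases from the episode:

```python
# Diner-lingo phrases from the episode mapped to what they stand for.
DINER_LINGO = {
    "cowboy with spurs": "western omelet with fries",
    "make it cry": "add onions",
    "axle grease": "butter",
    "mug of murk": "black coffee",
}

def translate_order(order):
    """Translate a comma-separated diner order into plain English,
    leaving any phrase we don't recognize untouched."""
    parts = [p.strip().lower() for p in order.split(",")]
    return [DINER_LINGO.get(p, p) for p in parts]
```

Lex's custom vocabulary feature works at the speech-recognition level rather than as a literal lookup table, but the effect is similar: phrases outside everyday English become recognizable terms the bot can act on.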

  59. Rahul Subramaniam

    Absolutely. And actually, one of the things that I really love about Lex, and I wish more people would do this: whenever you're building an application, you invariably get inhibited by the UI and the UX you have to build so that customers can access the set of APIs you've built. Engineers are really good at building APIs, because APIs are to the point: the API call does one thing, it has a set of variables that you pass as parameters, and it produces an output that you can consume. But for an API to be leveraged, you build this whole massive UI and UX around it to make sure people find that API call, know how to get at it, pass the right parameters to it, and then make sure the data is presented back to them in a manner that is digestible and consumable, and all of that stuff. The conversational bots that you can build with Lex are amazing, because you can literally build one to do one thing really well, which is the job the API does for you. You ask a question, that question maps directly to a single API call, the slots in your intent get passed in as parameters to the API call, and whatever the output is goes directly into your chat client, and you get the answer then and there. It's probably the path with the least friction compared to the UIs and UXs people have built over the years, the easiest way to get what you want out of a whole bunch of applications, especially when it comes to data analytics and analysis, slicing and dicing data. Using a chatbot to do that could potentially be revolutionary.
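The one-intent-to-one-API-call pattern Rahul describes can be sketched in a few lines; the handler name and signature below are hypothetical, standing in for whatever backend call the bot fronts.

```python
# Each intent maps to exactly one API call; the slots the bot resolved
# become that call's keyword arguments, and the return value goes
# straight back to the chat client as the answer.
def book_ticket(origin, destination, date):
    return f"Booked {origin} to {destination} on {date}."

INTENT_HANDLERS = {"BookTicket": book_ticket}

def fulfill(intent_name, slots):
    """Dispatch a recognized intent to its single backing API call."""
    return INTENT_HANDLERS[intent_name](**slots)
```

The appeal of the pattern is that there is no UI layer to design: the conversational front end, the intent's slots, and the API's parameters line up one to one.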

  60. Stephen Barr

    Yeah, absolutely, I think it's really neat. And as they said, Lex right now is not artificial general intelligence; it won't understand every last thing. But what you can do is map your API to a set of questions and answers, and then that's your interface. Now, the one thing Lex won't do, and I'll play our last clip for the day.

  61. [Clip from 2001 A space Odyssey]

    Open the pod bay doors. Hal I’m sorry. Dave, I’m afraid I can’t do that. What’s the problem? I think, you know what the problem is just as well as I do.

  62. Stephen Barr

    So Lex might be able to figure out what you want, and the intent here is to open the pod bay doors, but it doesn't mean your API is going to do it. That's still on you.

  63. Rahul Subramaniam


  64. Stephen Barr

    And if your API goes rogue, that's another episode. All right, well, maybe we should just do one more instead of trying to push through both, so let's do EKS Anywhere. This announcement says Amazon Elastic Kubernetes Service Anywhere now allows you to enable Amazon-curated software packages that extend the core functionalities of Kubernetes on your EKS Anywhere clusters. To give some context, what is EKS Anywhere? I mentioned this when we were talking about Panorama: EKS Anywhere allows you to use the Amazon interface to Kubernetes on your own physical infrastructure. And the packages they're talking about: there's Harbor, which is for scanning the local containers you're going to run to make sure they don't have any security issues, viruses, that sort of thing, and there's another one called MetalLB, which is a load balancer for bare-metal Kubernetes clusters. Now, Rahul, you're much more knowledgeable about the Kubernetes ecosystem, so what are the implications of this announcement?

  65. Rahul Subramaniam

    So, you should think of Kubernetes more as a framework. It's basically an orchestration framework that manages the life cycles of your containers. But there is so much more that you need: there are other packages you need to install, for network layers, or for different kinds of resources that you want to mount onto that Kubernetes cluster, or for services like load balancers. There are a bunch of different proxies and sidecars that you might want. So many things usually go into that Kubernetes ecosystem; it's never just "I started Kubernetes and I'm ready to go." You need a lot of other stuff that goes with it, and with different versions and different variants of how Kubernetes gets deployed, it's hard to get things working right from the get-go. You have to be a master at tinkering with all those packages, and there are different mechanisms for installing these things, so you really need to be a master at all of that, and that's hard. What AWS has done here is curate a set of packages for you, and I'm sure this list is going to keep growing over time, but the ones they have work out of the box: with a click you say "I want this deployed." It makes it really easy to get all this stuff working without having to be an expert at Kubernetes YAML files or at dealing with these particular packages.

  66. Stephen Barr

    So I guess, wrapping up for time, the 10,000-foot view is that with EKS Anywhere and these packages, you can really have a continuum of usage between your own infrastructure and EKS in AWS, and that will help you treat them as one pool of resources. Eventually, as your own physical infrastructure depreciates, the way most things are going, you move more and more of it into AWS, but this minimizes the amount of parallel effort you need: you can use Kubernetes everywhere, and then as things start shifting into AWS, it's a much more gradual, incremental process. Would that be a fair way of summarizing it?

  67. Rahul Subramaniam

    I think that's absolutely right. About 90% of workloads today are still on premise, and there is a shift towards the cloud, but that's going to be gradual; there are a lot of hurdles to overcome. So services that allow you to start at the edge and then quickly bring you onto the cloud are always welcome.

  68. Stephen Barr

    All right, well, I think we're just about ready to wrap up. It's been really fun to have a big variety of topics to discuss. And Erica, thank you; I know it's been a long session, and thank you so much for your translation. Huge appreciation for what you do, it's phenomenally complicated. And thank you to our viewers, we really appreciate it, and we hope to see you next Tuesday at 9 AM Pacific time.

  69. Rahul Subramaniam

    Yeah, thanks,

  70. Stephen Barr

    Thanks, it was really fun catching up with you and having this great chat. We'll see everyone next time.

  71. Rahul Subramaniam

    Likewise, see you all next week. Bye. All right, thanks.