AWS Made Easy

AWS Insiders Podcast: Episode 4 – Exploring Computer Vision at the Edge with AWS Panorama

With Omar Zarka, GM of AWS Panorama

On this episode, Omar Zarka, GM of AWS Panorama, dives deep into the structure of AWS Panorama, a machine learning appliance and software development kit that brings computer vision to on-premises internet protocol cameras. Omar discusses how AWS Panorama can make accurate predictions, how to reduce operational overhead, and how to improve the experience for customers.


Speakers


Rahul Subramaniam

AWS superfan and CTO of ESW Capital

Rahul is currently the CEO of CloudFix and DevGraph and serves as Head of Innovation at ESW Capital. Over the last 13 years, he has acquired and transformed 140+ software products.

Omar Zarka

GM of AWS Panorama

Omar Zarka has been with Amazon for nine years. He started at Lab126, working on devices such as the Echo, tablets, the Kindle, and Fire TV. After seven years in the devices division, Omar joined AWS Panorama, focusing on computer vision at the edge.

Transcript

  1. Rahul Subramaniam

    Hi, everyone, welcome to another episode of AWS Insiders. We have a very exciting conversation lined up today,
    about a relatively new edge computing service, called AWS Panorama. And my guest today is the GM of AWS
    Panorama, Omar Zarka. Omar, it’s an absolute pleasure to have you on the show today, and I’m really excited to
    learn more about the service.

  2. Omar Zarka

    Thank you, Rahul. I’m excited to be here. Thanks for having me.

  3. Rahul Subramaniam

    Awesome. So, to get started, we’d love to know a little bit more about you, Omar. Why don’t you tell us a little
    bit about your history with AWS and how you started on the Panorama project?

  4. Omar Zarka

    I’ve actually been with Amazon now for nine years. In April, it’ll be nine years. I started out at Lab126,
    which is the devices division. So the Echo, the tablets, the Kindle e-reader, Fire TV, et cetera. And I joined
    when that team was relatively new and growing very, very fast, and got to see all kinds of different products
    ship and hit very important milestones, like millions or tens of millions of devices. And I learned a lot
    specifically about devices, which became very important for my journey here on Panorama. As I was reaching
    about seven and a half years or so, I reconnected with a colleague that I had worked with at Lab126, who was
    leading AWS Panorama at the time. And we connected, we were talking about Panorama, which I found very
    exciting, computer vision at the edge. And he thought that they had some roles that would be a good fit. So, one
    thing led to another and I joined the team.

  5. Rahul Subramaniam

    That’s awesome. So what was the genesis of Panorama as a service? Was there a particular problem that you were
    trying to solve, or did the service have more organic roots?

  6. Omar Zarka

    That’s a great question. The vast majority of services at AWS start with a customer signal. And the
    Panorama one was particularly interesting, in my opinion. We didn’t start by trying to solve a business problem.
    We actually started by trying to solve an education problem. As AI and ML became more prevalent in the cloud,
    with deep learning revolutionizing how easy it was to develop models, there was a skillset gap between what
    practitioners and builders were able to do and the skills they needed to be able to take advantage of these new
    technologies. So, what AWS did was create programs to educate people on these new concepts, so that
    they could get hands-on and see how they could apply to their particular context or business problems. One of
    the first initiatives was called DeepLens. And DeepLens was a fully packaged edge camera in a box, with edge
    compute, that allowed you to run machine learning models.

  7. Omar Zarka

    And the idea was to give folks a platform to learn what it meant to run these deep learning models on computer
    vision use cases. It was a revolutionary idea. People loved it and built all kinds of interesting use cases,
    whether it was sign language interpretation, or homework help, or reading a book for kids, very creative
    applications. One of the other things that came out of it, though, is businesses started taking DeepLens,
    buying them in large quantities and trying to deploy them into their business processes. So,
    the team would see these orders and be like, “Well, we should reach out and figure out what they’re trying to
    do, because the pattern doesn’t quite match.” And as they got deeper, they learned that customers were intrigued
    by the simplicity of DeepLens. The all-in-one package that allowed them to easily deploy machine learning or
    computer vision to the edge for their business problems was very compelling.

  8. Omar Zarka

    They dug a few layers deeper, and for those who know a little bit about Panorama, we actually ended up not
    building a camera; we ended up building what’s called an appliance. And it’s like a computer, a
    mini-server, that you can attach to your network. It turns out that it’s very expensive to replace cameras
    or to change out the infrastructure. And so the idea of being able to take your existing cameras and connect
    them to a smart appliance that makes it easy to deploy these rich applications that can do the visual inspection
    that customers want was very compelling. And that’s how the journey started.

  9. Rahul Subramaniam

    That sounds really interesting. I think I’d like to dive a little bit deeper and talk about edge computing. The
    vast majority of AWS services are based on cloud-side compute, right? And we’ve all come to see its benefits:
    the cost, the scale, the elasticity, as well as the security. But in the last two or three years, there’s been an
    explosion of edge compute and edge computing services on the AWS side. How does AWS think of edge computing, as
    compared to the more traditional cloud-side computing that we’ve all been familiar with?

  10. Omar Zarka

    So I’ll start by saying, when we talk to our customers, we actually see that they have several use cases at the
    edge. And there’s no one-size-fits-all, I would say. And you actually see this in AWS’s portfolio, where you have
    a variety of edge solutions that try to solve the edge, or hybrid, or edge-to-cloud, or cloud-to-edge
    problem in different ways. Whether it’s the Snow family of devices, which are about capturing data at the edge and
    then getting it into the cloud, or Outposts, which extends the cloud into a customer’s facility. And
    there are a lot of others, like IoT and Greengrass and SageMaker Edge, et cetera. But in the case of Panorama, the
    signal that we saw was a little bit different. We saw that customers had use cases that needed to bring the
    power of AI in the cloud, which has been going on for five or six years, but they needed it specifically at the
    edge, with the scalability that the cloud brought.

  11. Omar Zarka

    And the reason for that is there are just some use cases that are more suited for the edge. And I see three
    signals when I talk to customers. And we actually challenge them, by the way. When I talk to a customer, I
    challenge them: why do you need the edge? Because the edge, as we’ll probably get into later, is in many ways more
    complicated, right? And that’s not a bad thing, but we want to make sure that customers are ending up with the
    right solution to solve their specific business problem. So the first of the three signals that I tend to see is
    cost. And this is why specifically cameras are interesting, because cameras produce a lot of data, right?

  12. Omar Zarka

    An exorbitant amount of data, which is actually very expensive to get to the cloud for a number of reasons. One
    is just the bandwidth cost of streaming that data constantly. And that assumes you actually have the connection
    necessary. Many facilities, manufacturing facilities, boats in the ocean, et cetera, do not even have the
    option of getting that kind of connectivity. So, that’s one barrier. The second barrier is latency and
    reliability. And what I mean by this is, some use cases require sub-100 millisecond latency, right? You need to
    be able to respond to an event, such as a worker safety event, or a safety event such as a machine potentially
    colliding with something, or the like.

  13. Omar Zarka

    Or you may have a factory where every minute, or every 10 minutes, or every hour of downtime is millions of
    dollars of production. And being down for 30 minutes is just not acceptable, right? So that’s the second factor.
    And then the third factor is actually data regulation, right? Having more control over where your data goes
    and doesn’t go, the ability to discard data immediately after processing it, or to store it locally instead of
    sending it to the cloud, to have better controls. That’s the third factor.

  14. Rahul Subramaniam

    That sounds like a really good framework to decide whether you should go for edge computing, or just rely on the
    elasticity and the cloud economics, basically. In general, do you find that edge computing is more expensive
    than cloud computing?

  15. Omar Zarka

    I would say that all the reasons why the cloud has revolutionized the entire world over the last 20, 25
    years are the reasons why you should choose the cloud first, right? And basically, it’s simple. You don’t have
    to buy hardware up front. It’s not a capital expense; it’s an operational expense. Your ability to scale, your
    elasticity, is super high. Your time to market and time to value is super fast. Your costs can
    be managed day over day, as opposed to planning hardware purchases over six, nine, 12
    months. So, the cloud is infinitely more flexible, but the edge has some use cases that are just not solvable
    right now in the cloud. It’s going to be a while, at least until our network infrastructure is strong
    enough that everywhere in the world, more or less, has the bandwidth required to use the cloud. And
    that’s many, many years out, so there’s a lot of opportunity here for us at the edge.

  16. Rahul Subramaniam

    Do you see that changing if solutions like 5G become more commonplace?

  17. Omar Zarka

    I think the dynamics will always change. But if you look at this historically, the swing between cloud
    and edge, distributed versus local, has happened repeatedly over time. So, I think there’s
    always going to be value and use cases at the edge, for a variety of reasons. And we want to meet customers
    where they are, based on their needs, so that we can provide them those solutions.

  18. Rahul Subramaniam

    Let’s start by talking about the hardware. So what is it that customers install on-site? And what is in that
    hardware?

  19. Omar Zarka

    Yeah, so we basically have a little box. It’s the size of a laptop, maybe a little bit thicker, but the same
    dimensions. And the box connects to your network over Ethernet. And the basic premise is, you want to be
    able to stream data into the box so that you can process that data. And we are focused on computer vision, so
    IP cameras: a lot of businesses have cameras distributed throughout their facility for a variety of use
    cases. So by connecting to the local network, you can tap into that video data, and then stream it through the
    Panorama appliance. And the appliance actually doesn’t have very much storage. We have some other hardware
    variants that will come out with more storage in the next month or two. But right now, we’re really focused on
    processing, because that’s the main use case. That’s the main gap.

  20. Omar Zarka

    A lot of customers already have storage on site, but the main gap that they have today is they can’t do this
    intelligent processing. The main hardware component is the Nvidia Jetson AGX Xavier, which is a system on module
    produced by Nvidia that is designed for AI and machine learning workloads. It leverages some of their most
    popular GPU architecture and their very popular software stacks, such as DeepStream, et cetera. And it
    allows customers to deploy those models to this device, so that they can do things like detect objects, identify
    when there’s a collision about to happen, identify when there’s a pattern that’s not expected, identify objects
    that are out of specification or need to be handled or inspected differently. A variety of use cases. Almost
    anything that you can think of doing with visual inspection, you can almost find a way to do with
    computer vision these days.

  21. Rahul Subramaniam

    That sounds like a really tiny box that you have to install. And I hear it’s weatherproof, can be
    installed outdoors, and is pretty resilient.

  22. Omar Zarka

    Yeah, we are dust and water resistant. So in a lot of environments, unless you’re in a heavy water environment,
    chances are it’s going to be fine. Also, for folks that are like, “I want to install this in a server room,” it
    installs into server racks. You can put two side by side; it’s 1U, a single rack unit. One of our customers
    has 10 of them stacked right next to each other in their data center. It’s very interesting to see. But
    yeah, it’s very versatile. We designed it with a lot of manufacturing and industrial use cases in mind.

  23. Rahul Subramaniam

    What is the elasticity of these units? How many cameras or streams does each unit handle?

  24. Omar Zarka

    Our general guidance is to assume between 10 and 20, but we have customers getting more out of it. And it really
    depends on the workload that you’re trying to deploy. In principle, the Nvidia chip that’s on here can decode
    dozens of streams at a time. What tends to be the bottleneck is the complexity of the machine learning model.
    And that’s very use-case specific. We’re working with customers to actually help them maximize the value of
    their hardware, to be able to stream cameras intelligently or swap between different cameras. Our goal here is
    to make sure that customers get the most out of their investment, and are able to scale in a familiar way, as they
    would in the cloud, where they can add more cameras and more applications and distribute across multiple devices
    over time.

  25. Rahul Subramaniam

    So, I see there are multiple different elements. There’s of course the hardware part of it, which you install
    on-prem, making sure that this device is connected to all the IP cameras that you want to stream out of. But then
    the second half of the equation is your machine learning models, and what you want to do with them. So, what does
    a customer typically go through when they get started? How do they go about training? How do they typically
    capture the data from all of their cameras? What approaches do customers take?

  26. Omar Zarka

    There are actually a few layers to that. The first is, one thing that customers love a lot about the
    Panorama experience is how easy it is to set up the device. Right now, if you want to take a device and connect it
    to IoT or any web service, there’s actually a lot involved. And to do that in a scaled way requires a lot of heavy
    lifting from the customer. And to layer on top of that, doing it in a secure way is even harder. So,
    we’ve abstracted all of that for our customers. When you buy a device, in five minutes you have your device set
    up on your network. Not only is the device software stack secured top to bottom, but it’s connected to our
    service, and you have a reliable connection to the service. We’ve taken days or weeks of work, I would say
    sometimes even months, depending on the security profile that you’re trying to achieve, down to just a few
    minutes for customers. So, that’s one really important thing.
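
As a rough illustration of that provisioning flow, the sketch below registers an appliance through the boto3 `panorama` client and saves the certificate bundle that gets transferred to the device. The device name and output path are placeholders.

```python
import boto3

# Sketch: provision a Panorama appliance. The service returns a
# configuration archive containing the certificates the device uses to
# establish its secure connection back to AWS. Names and paths are
# hypothetical.
panorama = boto3.client("panorama")

response = panorama.provision_device(
    Name="factory-floor-appliance-01",       # hypothetical device name
    Description="Panorama appliance on line 3",
)

with open("factory-floor-appliance-01-certs.zip", "wb") as f:
    f.write(response["Certificates"])        # binary certificate bundle

print("Device ID:", response["DeviceId"], "| Status:", response["Status"])
```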

  27. Omar Zarka

    So once you’ve provisioned your device, like you said, you connect it to the network and you’re able to identify
    the cameras. But the main thing is, it’s a platform to build these applications. And the core of the
    application is the model, as you suggested. So, we can talk about maybe the model and the applications, and then
    connecting the cameras. So from a model perspective, there are two big things that a customer needs to think
    about. One is, you need to make sure that the model you’re selecting and you’re going to train is going to solve
    the use case that you want to solve, right? And so there are a lot of models out there, there’s a lot of
    selection, and there are a lot of different ways to train your model to do that.

  28. Omar Zarka

    So we try to simplify this for customers by doing two things. One is we pre-select models that we qualify on
    our device. And we provide these in a set of sample applications. So these are open-source models that are easy
    to retrain for a specific use case, and customers can deploy them in a matter of minutes. The second thing we
    do is we have really deep relationships with partners, who are experts at retraining and deploying these models.
    It turns out that a lot of customers don’t have the data science expertise to be able to retrain these models in
    a way that will solve their use cases in a satisfactory fashion. So, we work with our partners to develop these
    models, or to retrain these models, such that customers can then use them for their use cases. So, a typical flow
    would go like this. A customer buys a device, they deploy the device in a lab environment where they can
    access a couple of cameras or the like.

  29. Omar Zarka

    And they usually start with a sample application, just to get a sense of, “Hey, what does detection look like?
    What kind of data am I getting back? Et cetera.” And then we go through a process with the customer of
    identifying, okay, for your use case, what does success look like? Right? Are we trying to achieve very high
    accuracy? Are we trying to achieve very low latency? Are we trying to just have an integrated system, where you
    used to have a disconnected environment, and now you want to get data back to the cloud? Maybe not necessarily
    images, but inference data. And based on those factors and the number of cameras they want to connect in the
    environment, we help identify the model, and then train the model for them, with our partners.

  30. Omar Zarka

    We don’t do it as AWS, but the partners will do it in collaboration with the customer. And so then you end up
    with a proof of concept. And that proof of concept allows customers to see the business value that they can
    derive from using machine learning at the edge, or AI at the edge. And then we start to talk about, okay, what
    does it mean to actually deploy now? Because you need to do everything we did so far, but you
    need to do it across tens or thousands of sites, right? So you need to be able to manage fleets. You need to be
    able to manage models at scale. And so that’s the core of the prototype. And then once you get into building
    something more complex, now we start to think about, okay, how do I attach cameras at scale? How do I package
    and bundle and deploy my applications? How do I connect them with my other business systems? What kind of
    dashboards do I want to be able to see and manage in the cloud, alerting, monitoring, et cetera?

  31. Omar Zarka

    So it tends to go, devices and provisioning, making sure it works on your network and works for your use case.
    Then building the model that makes sense, and then applications and the dashboards, et cetera.
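
For a sense of what that deployment step can look like programmatically, here is a hedged sketch that pushes a packaged application to a provisioned appliance with the boto3 `panorama` client. The manifest is assumed to have been generated already (for example, from one of the sample applications); the app name, manifest path, and device ID are placeholders.

```python
import boto3

# Sketch: deploy a packaged application (model plus business logic) to a
# provisioned appliance. The manifest JSON is assumed to come from a
# sample application; the name, path, and device ID are placeholders.
panorama = boto3.client("panorama")

with open("graphs/my-app/graph.json") as f:   # hypothetical manifest path
    manifest = f.read()

response = panorama.create_application_instance(
    Name="people-counter",                         # hypothetical app name
    ManifestPayload={"PayloadData": manifest},
    DefaultRuntimeContextDevice="device-abc123",   # placeholder device ID
)
print("Application instance:", response["ApplicationInstanceId"])
```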

  32. Rahul Subramaniam

    And if I understand right, you can have one device actually deploy multiple models?

  33. Omar Zarka

    That’s right.

  34. Rahul Subramaniam

    So that on the same set of streams that are coming in, you could actually do multiple different kinds of
    analysis, triggering different alerts. Is that accurate?

  35. Omar Zarka

    That’s absolutely correct. Yes. It’s like any computer, right? You can max out the CPU or the GPU, more or less,
    but yes, we find that so far, most customers are able to run multiple models and still serve their use case.
    There’s always a trade-off that you need to make, as you would with any compute environment, but it’s very
    doable. That’s why we picked the AGX, by the way. The AGX is one of the most powerful edge chips available on the
    market.
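
As a quick way to see the multiple applications running on a single appliance, one can list its application instances. A minimal sketch, with a placeholder device ID:

```python
import boto3

# Sketch: list the application instances deployed to one appliance.
# The device ID is a hypothetical placeholder.
panorama = boto3.client("panorama")

response = panorama.list_application_instances(DeviceId="device-abc123")
for app in response["ApplicationInstances"]:
    print(app["Name"], app["Status"], app["HealthStatus"])
```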

  36. Rahul Subramaniam

    What are the different kinds of models or analysis that are commonplace? Is there a marketplace for standard
    models that you could possibly apply right out of the box and start using? Do you see customers using them,
    or does everyone go and build custom models for their use cases?

  37. Omar Zarka

    So, I think the pattern that we see is that everybody starts with some sort of open-source model. Either from a
    model zoo that we have connected, where we provide guidance on how to pull those models into the Panorama
    experience and deploy them to our device, or with the sample applications. Our sample applications represent
    the most common use cases that we’re seeing so far. So that usually gets customers started. But what we notice
    is that once you have a general proof of concept, to really get the business value that you want, you really
    need to spend time training for your specific use case. We haven’t yet found a model that is just a catch-all,
    where you can deploy the single model as-is and it just works. And we talk very closely with our solution
    providers, et cetera, and this seems to be a common thing in the industry in general: there needs to be
    some amount of retraining, or training specific to the use case that the customer has.

  38. Rahul Subramaniam

    What are your favorite use cases that wouldn’t really be possible if Panorama wasn’t around?

  39. Omar Zarka

    So my favorite use case I can’t talk about yet, unfortunately. But I plant that seed because I hope to be able
    to talk about it with you in the next few months. But there’s a lot of really interesting stuff that you would
    never expect, that makes everybody’s lives easier and is for the good of everybody. So, I hope to be able to
    talk about more of them, but one of the unexpected use cases that I can talk about today is AI for
    animals, which Deloitte in the Netherlands created. And the idea here is to detect animal abuse in real-time. And
    so it’s not just about identifying when the abuse has happened, but it’s also about
    intervening to minimize the abuse. So, to me, this combines a lot of the advantages of edge AI, and it is for the
    general good of the world, right?

  40. Omar Zarka

    So, one is, it’s very hard to have humans watching all the time. Two is, humans themselves are part of the
    problem at times. And then three is being able to respond in real-time, or in locations that don’t have the
    bandwidth or the infrastructure necessary to be streaming videos to the cloud. That makes it a perfect storm.

  41. Rahul Subramaniam

    Can you give some examples of industrial applications of Panorama? I assume that those are far more common, and
    so what kind of scenarios are people using Panorama in?

  42. Omar Zarka

    So the first thing I’ll say, and this might resonate with some of your listeners, is I think we’ve all felt the
    pinch of the supply chain crunch, right? So, right now, supply in almost every industry is a problem. And
    actually, a lot of that challenge is sometimes not the supply itself, but the ability to get goods to where they
    need to go in an efficient way. One of the use cases that we’re consistently seeing across industries, that is
    resonating with customers and that customers are deploying into production, is using Panorama to optimize
    logistics: getting visibility as to where containers are in the process, so that they can inform their
    customers in real time about where their containers are, how long they’ll take in the process, et cetera. It also
    gives them the insight they need to see where they have bottlenecks in the process.

  43. Omar Zarka

    Similarly, completely different industry but similar use case: Tyson, in their
    production of foods, needs better insight as to which SKUs, which products, are being produced at which rate, and
    where they’re seeing slowdowns in their factories. So they use computer vision to be able to do this in
    real-time, so that their operators can make more intelligent decisions about where to spend their time, across
    which lines and which areas, et cetera. We also see, again, a similar use case but a very different industry:
    Cincinnati Airport is using Panorama computer vision to be able to monitor the curbside at the airport. I think
    a lot of people have had the experience of trying to drop somebody off or pick somebody up and you just can’t get
    to the curb, right? And you have to loop back around.

  44. Omar Zarka

    And so the idea is very simple. Instead of having somebody standing there, intimidating everybody, and just… I
    can only imagine what it’s like to be a person monitoring the curbside. They’ve abstracted all that with a
    very simple solution. They just see cars coming in and set a timer: how long has that
    car been there? And if a car has been there past whatever threshold, they send a notification to an
    employee’s Apple Watch, and they go politely tell the car to keep moving. It doesn’t have to be binary, where
    replacing something gives you one benefit and then you lose something else. So yeah, just between those four
    use cases that we described, you can see the core is very similar. There’s an object, and I’m tracking
    something, or I’m tracking some behavior, but then the use cases and the consequences to the business are very
    different.

  45. Rahul Subramaniam

    Do you see applications of Panorama in areas like healthcare and sports as well? That seems to be a very
    interesting domain. You’ve started seeing AWS themselves involved in a lot of sports broadcasting now, where
    there are streams of data coming in and there’s tons of AI being applied right there, on everything from
    statistics to probabilities of certain outcomes as the game is being played. Is Panorama being used in scenarios
    like that, or are you seeing applications like that evolve?

  46. Omar Zarka

    So, we are seeing applications like that evolve. We have been in conversation with multiple customers in
    healthcare and in sports, trying to identify how we can make their solutions better for their customers. And it
    could be anything from, in the healthcare space for example, fall detection, right? Identifying when somebody
    needs help. Or in sports, there’s a ton of applications coming out now, and AWS is pioneering with the NFL in
    this space. There’s nothing to announce yet, but I would say there’s a lot of opportunity here and we’re excited
    to work with our partners and customers there.

  47. Rahul Subramaniam

    I want to jump in and ask you about the three best practices that you’d recommend customers follow when they
    decide that they want to start with AWS Panorama. What would be the three pieces of advice that you’d give them
    to get started?

  48. Omar Zarka

    Yeah. Great question. So the first piece of advice that I give to a customer is to engage with a practitioner
    who understands how CV, computer vision, works at the edge and what can be enabled. And the reason I say that is
    there’s a lot of brainstorming that happens around the use case. In all of these use cases that we discussed, the
    core is very similar, but the actual implementation, how you get
    it done and what the important differentiators are for that use case in that particular situation, can vary a
    lot. So I think this is where our practitioners and our partners really help change the conversation and
    identify the best way to achieve a particular solution. So, that’s the first thing I would say. The second thing
    I would say, and this goes back to earlier in our conversation, is really think about why the edge, right?

  49. Omar Zarka

    There are a lot of benefits to the cloud. It’s funny because my business is the edge, but at the same time,
    we really push our customers on why the edge. Because if you go down this route, businesses
    are seeing dramatic improvements in their business processes and what they’re achieving, but it has to fit,
    right? You don’t want to fit a square peg into a round hole, et cetera. So really be precise about why the edge
    matters in your use case. And we’re happy to help you figure that out, as are our partners.

  50. Rahul Subramaniam

    I think the framework that you described earlier is a really good way to think about why edge, if at all.

  51. Omar Zarka

    Yes. 100%. For sure. Just to recap, right? Cost, usually driven by bandwidth or lack of availability of
    network. The second is latency, as in it needs to be real-time, or you need to have very, very high reliability
    for processes that require one-millisecond or hundreds-of-milliseconds processing times. And
    then the last one is really refined control of data, right? Where data ends up and how you use data. Those
    are the three that really make a big difference.

  52. Rahul Subramaniam

    Great. And was there a third best practice recommendation you had?

  53. Omar Zarka

    I mean, the third one is like the first, but I think it’s important for folks to realize that a lot of our
    customers, even those with a lot of data science expertise, are able to get through the initial phases, technical
    validation, business validation, on their own, which is fantastic. But really, to scale: if you think of
    AWS and the way you want to scale in the cloud, provisioning is so easy, right? You just add
    instances; there’s very little thought that you have to put into it. Here at the edge, it’s very different.
    Managing the fleets at the edge and rolling out to the networks, et cetera. It can be complicated and can
    require a lot of boots on the ground, in a lot of geographies, that not all customers have.

  54. Omar Zarka

    So because of that, I encourage all of our customers to work with partners. That’s where we see the best
    success. And we have a great network of partners who have a lot of experience in the space, across a ton
    of use cases, and they can help customers be successful.

  55. Rahul Subramaniam

    I think you’ve covered some of it, in terms of things to consider and not do. But could you specifically
    dive deep into the cost-related practices that you would advise our customers to follow as best
    practices for the service?

  56. Omar Zarka

    So, I think in our case with the edge, start by doing the technical and business validation before you scale.
    It seems like an obvious thing, but I’m going to point out why it’s more important here: because
    there’s a hardware purchase involved. Actually doing the profiling up front helps a lot in being able to
    understand the devices you need and how you need them. Whereas in the cloud, you have a little more fungibility,
    in terms of, if you underestimate or overestimate, you can deprovision or reprovision. So, you want to be a
    little more intentional up front about the hardware. But, for what it’s worth, I don’t think we have yet seen a
    customer run into this problem. I think the natural cycle of how this goes at the edge ends up where most
    customers are able to get ahead of it. We do see some customers that want to jump ahead, they’re excited,
    which is fantastic.

  57. Omar Zarka

    We try to make sure that they understand the importance of doing this up front, and we walk them through it. I
    think that is the main one. I mean, the rest is pretty straightforward. We only bill by camera, right? On the
    number of cameras that you have connected. So, if you’re getting value out of cameras, have them connected. If
    not, it’s very easy to disconnect them and we stop billing for them. We were very, very intentional about keeping
    the billing very simple, because of exactly what you described. And I think keeping it along just the one
    dimension of charging per camera has helped mitigate some of these cost overruns that could potentially happen.
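
Since the bill tracks connected cameras, one simple hygiene practice is to periodically audit which camera streams are registered. A minimal sketch of that, assuming the boto3 `panorama` client, might look like the following.

```python
import boto3

# Sketch: audit registered camera (media-source) nodes so that streams
# no longer providing value can be spotted and disconnected.
panorama = boto3.client("panorama")

response = panorama.list_nodes(Category="MEDIA_SOURCE")
for node in response["Nodes"]:
    print(node["Name"], node["PackageName"], node["CreatedTime"])
```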

  58. Rahul Subramaniam

    Yeah. I think what we see across a large number of other AWS services is that the elastic nature of the service
    itself, where you auto-scale up or down, causes that bill shock once in a while, because you don’t expect the
    system to scale up suddenly, it is unanticipated, and it probably didn’t scale down quite as fast as you
    anticipated. And that causes large bills to accumulate. But in the model that you have now for
    Panorama, it looks like the costs are actually very predictable, because it all depends on the number of cameras
    that you add, or the streams that you add for analysis. And it’s just that. And that’s not something that you
    see as being elastic in nature.

  59. Rahul Subramaniam

    You don’t auto scale it up the number of cameras or stream. So, that seems like a pretty good fail safe that’s
    built into the service pricing itself. Great. And one last thing. So how does someone start with the hardware? I
    hear you can buy this now on the Amazon store, but what are the mechanisms of getting hold of the hardware?

  60. Omar Zarka

    We made it very simple. You can literally buy it on amazon.com with your amazon.com account. So, for those who
    don’t know, on amazon.com there are two ways. You can use your personal account, and it’ll be shipped to you in
    two days if you have Prime. Or you can use Amazon Business, which allows businesses to use the purchase order
    process, so that they can match their business processes. We also have what’s called AWS Elemental, which is a
    manual purchase order workflow that’s supported through the AWS console, for those that prefer that.

  61. Rahul Subramaniam

    And is the hardware going to be Amazon only, or are there plans to have other partners offer the hardware as
    well?

  62. Omar Zarka

    We’re very much keen on working with our hardware partners to enable more SKUs and more variety for our
    customers. So, Lenovo is the first to launch. They’re launching here very soon. We announced with them a couple
    of times last year; they are producing a variant of this device, a different form factor, but also a different
    chip. So it’s the same Jetson family of chips from Nvidia, but it’s what’s called the NX, which is a lower price
    point, but still very powerful and very applicable to a lot of use cases.

  63. Rahul Subramaniam

    I have to say that this has been an absolutely fascinating conversation. And thank you, Omar, for being so
    generous with your time and sharing your insights with us. I would love to have you back on the show whenever
    you’re ready to discuss the top secret and unexpected use case you spoke about earlier. For the
    audience, I encourage you to go and try out AWS Panorama. We’d love to hear from you about the use cases
    that excite you. If you enjoyed this conversation, please take a few moments to review the show. And I hope to
    see you on the next episode of AWS Insiders. Until then, goodbye.
