Renato @ AWS FM podcast

Renato joins Adam to discuss the differences between Aurora Serverless v1 and v2, how he’s used AWS certifications to learn topics he might not dive into otherwise, and the benefits of speaking at conferences when you’re introverted.

Dice, Skylines and CloudWatch Anomaly Detection

I am a lazy cloud architect with a background in site reliability engineering. That’s why I immediately fell in love with the idea behind CloudWatch Anomaly Detection when it was announced almost three years ago.

What is anomaly detection?

Regardless of the algorithm used to determine the outliers, anomaly detection is the process of discovering values that differ considerably from the majority of the data and should raise suspicions and alarms. The availability of a managed service, based on machine learning, that alerts an SRE if something goes wrong is too good to be ignored. CloudWatch Anomaly Detection is that option, without integrating third-party tools or relying on more complex services like Amazon Lookout for Metrics.

Configuring CloudWatch Anomaly Detection

In a few seconds you can add an alarm that will help monitor even the simplest website, with pricing that is neither high nor complicated. What can go wrong with Anomaly Detection? Not much, as long as you do not consider it a catch-all alarm replacing every other one you have configured in CloudWatch.

While the expected values represent normal metric behavior, the threshold of Anomaly Detection is based on standard deviation, as the label in the console suggests: “Based on a standard deviation. Higher number means thicker band, lower number means thinner band”.

The only non-trivial step in the setup is deciding the threshold: what is a good number? A small one, with possibly many false alarms? A high one, with the chance of missing some outliers? A bigger challenge is to remember that the algorithm cannot know the constraints of your system or the logic behind your product. Let’s give it a try.

Monitoring coconut orders

Let’s assume you have a successful website where you sell coconuts and you want to monitor the number of completed purchases per minute. You have thousands of orders at peak time and a few hundred during the night, with some daily and weekly patterns. Lucky you, that is a lot of coconuts! How can you monitor the online shop? How do you adapt the alarms for seasonality and trend changes?

Without Anomaly Detection, you should have at least two static alarms in CloudWatch to catch the following cases:

  • the “Zero Orders” scenario: it likely indicates that something is broken in the shop. A simple static alarm, catching zero values for the shortest sensible period, will not raise many false positives (see the sketch after this list).
  • the “Black Friday” scenario: it is much harder to define a safe upper boundary, but you can for example create an alarm at 130% of the maximum value you achieved in the previous month.
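
A minimal sketch of the “Zero Orders” alarm with the AWS CLI, assuming a hypothetical my-shop namespace and completed-orders custom metric; treating missing data as breaching also catches the shop silently no longer emitting the metric:

# static alarm: trigger when completed orders drop below 1 for 3 minutes
aws cloudwatch put-metric-alarm \
    --alarm-name "zero-orders" \
    --namespace "my-shop" --metric-name "completed-orders" \
    --statistic Sum --period 60 --evaluation-periods 3 \
    --threshold 1 --comparison-operator LessThanThreshold \
    --treat-missing-data breaching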

Falling coconuts

Neither of these two static alarms helps if the orders fall by half during the day or if the pattern suddenly changes and you lose 30% of your daily orders. You still do not account for seasonality, but these static alarms are better than no monitoring.

Here comes CloudWatch Anomaly Detection: with a few clicks, you can configure an alarm and be notified when the pattern of the orders changes.
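
Behind the scenes, the alarm is based on a metric math expression. A minimal sketch with the AWS CLI, reusing the hypothetical my-shop namespace and completed-orders metric from the sketch above and the documented ANOMALY_DETECTION_BAND expression, here with a band of 2 standard deviations:

# anomaly detection alarm: trigger when orders leave the expected band
aws cloudwatch put-metric-alarm \
    --alarm-name "orders-out-of-band" \
    --comparison-operator LessThanLowerOrGreaterThanUpperThreshold \
    --evaluation-periods 2 --threshold-metric-id ad1 \
    --metrics '[
      {"Id": "m1", "ReturnData": true, "MetricStat": {"Metric":
        {"Namespace": "my-shop", "MetricName": "completed-orders"},
        "Period": 60, "Stat": "Sum"}},
      {"Id": "ad1", "Expression": "ANOMALY_DETECTION_BAND(m1, 2)"}
    ]'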

Can you simply configure the smart alarm, discard the static ones and trust the magic of machine learning? Let’s take a step back and look at one of the very first presentations of Anomaly Detection.

The example used to highlight the seasonality and the benefits of the new option shows a range band – regardless of how many standard deviations – with negative values. But the ConsumedWriteCapacityUnits metric cannot be negative. A subpar example?

Going below zero

ConsumedWriteCapacityUnits is not a corner case: most AWS and custom metrics have only positive values. Randomly selecting some metrics in the dashboard:

  • you cannot have negative orders in the coconut (custom) metric
  • you cannot have negative IOPS on RDS
  • you cannot have a negative CPU or ACU for Aurora Serverless

Considering hundreds of metrics, there are only a few that can occasionally go below zero. But the gray band in Anomaly Detection often does.

If you set up a static zero alarm as previously discussed, keep it: one based on Anomaly Detection might not react as quickly as a static one. The ML option can help find outliers, but it is not the fastest way to catch a broken system with no orders.

For example, during the quieter hours, a “zero orders” scenario would not be immediately an outlier.

Ideally there should be a flag in CloudWatch to enforce positive values. But only you know the pattern of your service, and a strength of CloudWatch Anomaly Detection is the simple setup. It just works.

Let’s do a simple test to show the difference between constrained values and an algorithm based on machine learning. Let’s roll a dice.

Rolling a dice

One dice, six faces, and numbers between 1 and 6. No pattern and no outliers. There are no 0s and no 7s, no values outside the fixed range when you roll a dice. But Anomaly Detection cannot know that.

How can we test it? Let’s roll a dice in CloudWatch with the AWS CLI and a one-line bash script, roll-a-dice:

aws cloudwatch put-metric-data --namespace "cloudiamo.com" --metric-name "dice-1m" --unit Count --value $(( $RANDOM % 6 + 1 ))

Adding the script to the crontab, we can have a new random value in CloudWatch every minute.

* * * * * /home/ubuntu/roll-a-dice

We now set up Anomaly Detection on the custom dice metric, wait a few days, and see what the AWS algorithm thinks of the random pattern. How is it going to apply machine learning algorithms to the dice’s past data and create a model of the expected values?
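
The detector itself can be created in the console or with a single CLI call; a sketch, assuming the same namespace and metric name used by roll-a-dice:

# train an anomaly detection model on the custom dice metric
aws cloudwatch put-anomaly-detector \
    --namespace "cloudiamo.com" --metric-name "dice-1m" --stat Average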

Anomaly Detection is doing a good job given the circumstances but a zero or a seven might not (immediately) trigger an alarm.

Rolling a dice is way too simple and it has no predictable patterns, but if you have hard boundaries in your values, you should have a separate static alarm for them. Relying only on Anomaly Detection is suboptimal. Let’s now challenge CloudWatch and the AWS algorithm with something more complicated: a skyline.

Drawing the NYC skyline

Last year I presented a session at re:Invent, drawing the NYC skyline with Aurora Serverless v2. A SQL script triggered the spikes in the CPU and the Aurora Capacity Unit (ACU) of the serverless database, drawing a basic skyline of New York City in CloudWatch.

Let’s run that SQL script multiple times, for days, for weeks. Is CloudWatch Anomaly Detection going to forecast the NYC skyline?

Reusing the same logic from re:Invent, we can run it on an Aurora Serverless v2 endpoint, adding a 30-minute sleep between executions and looping. This translates to a single bash command:

while true; do mysql -h nyc.cluster-cbnlqpz*****.eu-west-1.rds.amazonaws.com -u nyc < nyc.sql; sleep 1800; done;

Unfortunately, even after a couple of weeks, the range of Anomaly Detection is still not acceptable.

What is the problem here? A key sentence explains how the service works: “Anomaly detection algorithms account for the seasonality and trend changes of metrics. The seasonality changes could be hourly, daily, or weekly”.

Our loop has a fixed period, but it is not hourly, daily, or weekly: it is 30 minutes plus the execution time of the SQL script. The data points at 7:47 UTC and 8:47 UTC are unrelated. The data points at 7:47 UTC on different days have nothing in common; we do not have a standard, supported seasonality.

But is this really the problem? Let’s change the approach slightly and run the SQL script hourly. It is a single line in the crontab:

0 * * * * mysql -h nyc2.cluster-cbnlqpz*****.eu-west-1.rds.amazonaws.com -u nyc < nyc.sql

Does the new period work better with Anomaly Detection? Let’s wait a few days and see the new forecasted range.

After a couple of days the overlap is still not perfect and the baseline for the CPU is generous, but there is now a clear pattern. The outliers are not too different from the ones we saw with the coconuts.

If we suddenly change the crontab entry from hourly to every two hours, we notice that Anomaly Detection was indeed forecasting an hourly pattern.
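
For reference, the every-two-hours schedule is a small change to the same crontab entry:

0 */2 * * * mysql -h nyc2.cluster-cbnlqpz*****.eu-west-1.rds.amazonaws.com -u nyc < nyc.sql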

The seasonality of the data is a key element. A periodic pattern is not enough; an hourly, daily, or weekly one is required.

Conclusions

What did we learn? Is it worth using CloudWatch Anomaly Detection?

  • CloudWatch Anomaly Detection is easy to configure, almost free, and is a great addition to a monitoring setup. There are very few reasons not to use it.
  • You should add Anomaly Detection to your existing static alarms in CloudWatch, not simply replace them.
  • Make sure that your pattern is hourly, daily, or weekly.
  • There is much more you can do going forward: Amazon CloudWatch now supports anomaly detection on metric math expressions.
  • Take a look at Amazon Lookout for Metrics if you need a more powerful tool and are planning to automatically detect anomalies in business and operational data. Consider CloudWatch Application Insights if you need automated setup of observability for enterprise applications.

Thanks for making it this far! I am always looking for feedback to make it better, so please feel free to reach out to me via LinkedIn or email.

Credits

Coconut photo by Tijana Drndarski and dice photo by Riho Kroll. Re:Invent photo by Goran Opacic. All other photos and screenshots by the author. The AWS bill for running these tests was approximately 120 USD, mainly ACU for Aurora Serverless. Thanks AWS for the credits. Thanks to Stefano Nichele for some useful discussions about the benefits and challenges of CloudWatch Anomaly Detection.

InfoQ – July 2022

From AMD R6a instances to Rocky Linux on Google Cloud, from Amazon Redshift Serverless to API backend options for Azure Static Web Apps: a recap of my articles for InfoQ in July.

AWS Announces AMD Based R6a Instances for Memory-Intensive Workloads

AWS recently announced the general availability of the R6a instances, EC2 instances designed for memory-intensive workloads like SQL and NoSQL databases. The new instances are built on the AWS Nitro System and are powered by AMD Milan processors.

Google Cloud Introduces Optimized Rocky Linux Images for Customers Moving off CentOS

Google recently announced the general availability of Rocky Linux optimized for Google Cloud. The new images are customized variants of Rocky Linux, the open-source enterprise distribution compatible with Red Hat Enterprise Linux.

Amazon Redshift Serverless Generally Available to Automatically Scale Data Warehouse

Amazon recently announced the general availability of Redshift Serverless, an elastic option to scale data warehouse capacity. The new service allows data analysts, developers and data scientists to run and scale analytics without provisioning and managing data warehouse clusters.

Amazon Announces General Availability of EC2 M1 Mac Instances to Build and Test on macOS

AWS recently announced the general availability of the EC2 M1 Mac instances, based on the Apple ARM-based processor and designed for CI/CD of Apple-based applications. The M1 Mac option is faster and cheaper than the existing x86-based Mac version but still requires a minimum 24-hour commitment.

Azure Static Web Apps Introduces API Backend Options

Azure recently announced the preview of new API backend options in Azure Static Web Apps. Developers can now create an end-to-end authenticated application calling APIs hosted on Azure App Service, Azure Container Apps, or Azure API Management.

Amazon Aurora Supports PostgreSQL 14

Amazon recently announced that Aurora PostgreSQL supports PostgreSQL major version 14. The new release adds performance improvements and new capabilities, including support for SCRAM password encryption.

PostgreSQL Interface for Cloud Spanner Now Generally Available

Google Cloud recently announced the general availability of the PostgreSQL interface for Cloud Spanner. The new interface increases the portability of workloads to and from Spanner and provides a globally distributed option to developers already familiar with PostgreSQL.

TLS 1.2 Becoming the Minimum TLS Protocol Level on AWS

AWS recently announced that TLS 1.2 is going to become the minimum protocol level for API endpoints. The cloud provider will remove backward compatibility and support for versions 1.0 and 1.1 on all APIs and regions by June 2023.

More news? A recap of my articles for InfoQ in June.

Amazon Redshift Serverless Is GA

Amazon recently announced the GA of Redshift Serverless, an elastic option to scale data warehouses. The new service allows developers and data analysts to run analytics without worrying about provisioning and managing data warehouse clusters.

Announced in preview at the latest re:Invent, Redshift Serverless is designed for variable workloads, unexpected spikes, and development environments. The cluster capacity scales vertically and automatically based on the workload and shuts down during periods of inactivity. Danilo Poccia, chief evangelist EMEA at AWS, explains the main benefits:

This allows more companies to build a data strategy, especially for use cases where analytics workloads are not running 24/7 and the data warehouse is not always active. It is also a good fit for companies where data usage expands across the organization and users in new departments want to run analytics without having to manage the data warehouse infrastructure.

Developers can connect to a Redshift endpoint using a JDBC/ODBC connection or the Redshift Query Editor v2. It is also possible to access the database and run queries using the Redshift Data API, integrating with Lambda functions and SageMaker. Before Redshift Serverless, developers used open-source tools to pause and resume Redshift clusters with AWS Lambda, CloudWatch, and Step Functions.
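
As a sketch of how a Data API call looks against a serverless endpoint, assuming a hypothetical default workgroup, a dev database, and a placeholder orders table:

# submit a query to a Redshift Serverless workgroup (asynchronous call)
aws redshift-data execute-statement \
    --workgroup-name default --database dev \
    --sql "SELECT COUNT(*) FROM orders"

# fetch the result later, using the statement id returned by the call above
aws redshift-data get-statement-result --id <statement-id>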

The serverless option is not the only feature announced for Redshift. The cloud provider recently introduced Row-Level Security, the ability to restrict access to a subset of rows within a table based on the user’s role and permissions. Offering the same performance as user-created materialized views, the new Automated Materialized Views are now GA and help reduce query latency for repeatable workloads. Replying to a tweet about the costs of the serverless option, Poccia writes:

Redshift measures data warehouse capacity in RPUs. You pay for the workloads you run, with a minimum charge of 60 seconds. You can specify a base capacity between 32 and 512 RPUs. You can also configure daily, weekly, or monthly usage limits in RPU hours to keep your costs under control.

Redshift Serverless is currently available only in some AWS regions, including Northern Virginia, Frankfurt, and Ireland. AWS reduced the RPU price by 25% compared to the preview period, with costs starting at 0.375 USD per RPU-hour in us-east-1.

Want to read more AWS news?

Amazon Announces EC2 M1 Mac Instances to Build and Test Applications on macOS

AWS recently announced the availability of the M1 Mac instances, based on the Apple ARM processor and designed for CI/CD of applications for the macOS platform. The M1 Mac option is faster and cheaper than the existing x86-based Mac version, but it still requires a minimum commitment of 24 hours.

The M1 Mac instances are dedicated Mac mini computers attached to the AWS Nitro System through the Thunderbolt interface. The Mac mini behaves like a traditional EC2 instance and can be used to build or test apps for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari.

Sébastien Stormacq, principal developer advocate at AWS, explains:

The availability of the EC2 M1 Mac instances lets you access machines based on the Apple-designed M1 System on Chip (SoC). If you are a Mac developer and are designing your apps to natively support the new Macs, you can now build and test your apps and take advantage of all the benefits of AWS. Developers building applications for iPhone, iPad, Apple Watch, and Apple TV will benefit from faster builds. EC2 M1 Mac instances deliver up to 60% better price performance compared to the x86-based EC2 Mac instances.

The AWS CLI, Systems Manager, and CloudWatch come preinstalled on the M1 Mac instances. In a Twitter thread, some developers raised concerns about the minimum 24-hour commitment, an Apple requirement for the macOS license, and about the inability to run operating system updates on the instances. Corey Quinn points out instead that compliance is one of the main benefits of using a Mac instance.

The new instances integrate with other AWS services, such as EFS, Auto Scaling, and Secrets Manager. The M1 Mac instances cost a minimum of 15.6 USD per day and are currently available only in the Northern Virginia, Oregon, Dublin, and Singapore regions. The price of the M1 Mac instance is significantly lower than the Intel Mac one, which instead starts at 26 USD per day.
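
As with the x86 Mac instances, a Dedicated Host is required before launching the instance. A minimal sketch with the AWS CLI, where the AMI ID is a placeholder for a macOS ARM image in your region:

# allocate a dedicated mac2 host (billing starts here, minimum 24 hours)
aws ec2 allocate-hosts --instance-type mac2.metal \
    --availability-zone us-east-1a --quantity 1

# launch a macOS instance on the dedicated host
aws ec2 run-instances --instance-type mac2.metal \
    --image-id <macos-arm-ami-id> --placement Tenancy=host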

Want to read more AWS news?

TLS 1.2 Becomes the Minimum TLS Protocol on AWS

AWS recently announced that the TLS 1.2 cryptographic protocol will become the minimum level for API endpoints. The cloud provider will remove backward compatibility and support for versions 1.0 and 1.1 on all APIs and regions by June 2023.

Janelle Hopper, senior technical program manager at AWS, Daniel Salzedo, senior specialist technical account manager at AWS, and Ben Sherman, software development engineer at AWS, explain:

Until now, we have kept AWS support for TLS versions 1.0 and 1.1 to maintain compatibility with older clients that are hard to update, such as embedded devices. Furthermore, we have implemented mitigation measures that help protect your data from the issues identified in the older versions. The time has now come to remove TLS 1.0 and 1.1, because a growing number of customers have requested this change to simplify their compliance processes, and fewer and fewer projects are still using those versions.

According to AWS, 95% of deployments already use the more recent cryptographic protocols, and the most common use of TLS 1.0 or 1.1 today comes from .NET Framework versions older than 4.6.2. Colm MacCárthaigh, VP and distinguished engineer at AWS, writes:

At AWS we almost never deprecate anything, but TLS 1.0 and TLS 1.1 are the exception! Very few customers still use those versions, and you can check your CloudTrail logs to see whether you have any such requests.

Using the new tlsDetails attribute, AWS CloudTrail logs can be monitored to identify whether outdated TLS versions are currently being used without anyone realizing it. AWS recommends analyzing the records with CloudTrail Lake, CloudWatch Logs Insights, or Athena. CloudWatch Logs Insights provides two new sample queries to find the logs where TLS 1.0 or 1.1 was used and to count the number of requests per service that used outdated TLS versions.
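
A sketch of such a check with CloudWatch Logs Insights from the CLI, assuming CloudTrail events are delivered to a hypothetical log group named cloudtrail-logs; the TLSv1 and TLSv1.1 values follow the naming used in the tlsDetails attribute:

# count the API calls that used outdated TLS versions in the last week
aws logs start-query --log-group-name cloudtrail-logs \
    --start-time $(date -d '7 days ago' +%s) --end-time $(date +%s) \
    --query-string 'filter tlsDetails.tlsVersion in ["TLSv1", "TLSv1.1"]
      | stats count(*) as outdatedTlsCalls by eventSource, eventName'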

Source: https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/

To minimize the impact on services, AWS will roll out the changes endpoint by endpoint over the coming months, starting with the endpoints where no TLS 1.0 or 1.1 connections are detected. After June 28, 2023, AWS will update the configuration of all remaining endpoints, even if customers still have connections using the unsupported versions.

Want to read more AWS news?

AWS IoT ExpressLink Software for Hardware Modules Is GA

Flagging Flags: Nine Numbers with Amazon Rekognition

I recently published an article where I played with Amazon Rekognition and flags from around the world. A few friends and developers asked for more numbers, either out of curiosity or because they had some suggestions or doubts.

Is the “Stars and Stripes” the flag with the highest confidence for the label “American Flag”? Are all the flags labelled as “Flag”? Does the quality of the PNG file affect the label detection? 

Before training a model to better recognize flags with Rekognition Custom Labels, I decided to publish more results and the full dataset. Here are nine numbers and trends for the 255 flags available in the repository. Once more I rely on the images from the open-source region-flags project, a collection of flags for geographic region and subregion codes maintained by Google.

128 Flags

The outcome is a coin toss: almost a perfect 50% (128 out of 255) of the flags are labelled “Flag”. Only 98 of them have a confidence above 90%, 73 above 98%, and 56 above 99%. OK, a flag is not always a flag. If in doubt, toss a coin: a cheaper algorithm than an API request.

Flag is a Flag

98 American Flag

As we already noticed, many flags, including the Cuban and Malaysian ones, are labelled as “American Flag”. How many PNG files are decoded as “American Flag”? There are 98 of them, with 27 above 90% and one above 98%. A high confidence level alone is not always a safety net.

Flag US

Only one Stars and Stripes

The flag with the highest confidence for “American Flag” is not the one of the United States but the one of Peru. No kidding, a very high 98.3%. The real Stars and Stripes is actually not even in the top ten for “American Flag”.

Flag Peru

No Syrup, two Maples

Two labels, “Maple” and “Maple Leaf”, matched the Canadian flag. And only the Canadian one. A perfect match. Well done, Rekognition!

Flag Canada

Eleven Outdoor Flags

What do Slovenia, Laos and Kosovo have in common? Their flags are all labelled “Outdoor”, Laos with a staggering 99.47% confidence level. Whether you are looking for rock climbing in Nong Khiaw or kayaking through Si Phan Don, the flag of Laos is apparently the country’s best marketing tool.

Flag Laos

One Lollipop

There is only one lollipop detected. And we cannot even share it. The flag of Dominica, which features a sisserou parrot, the national bird emblem, got the only (incorrect) candy.

Flag Dominica

61 Stars

Almost a quarter of the flags have the “Star Symbol” label, 25 of them with a confidence above 90% and the European Union leading at 97%. Brexit or not, the twelve golden stars on a blue background are an easy catch for Rekognition.

Flag EU

239 Symbols

Almost every flag has “Symbol” as a label (94%), with 75 of them at 99% confidence level. India, Georgia and Peru are all above 99.99%. Whatever Symbol means.

Flag India

19 Animals

From Kiribati to Mexico, from American Samoa to Uganda, Rekognition does a good job finding animals inside flags: of the 19 decoded, only 3 are false positives. While the parent label (Rekognition uses a hierarchical taxonomy of ancestor labels) is good, the species itself is often wrong: a “Penguin” for Uganda, a “Chicken” for Mexico. Whoops.

Flag Uganda

Size of PNG is not significant

There is no significant discrepancy when testing the flags at 1000px or 250px: the confidence levels are slightly higher or lower, but without a meaningful pattern. This is somehow expected, as the models are likely trained with images scaled down to a fixed size to reduce the computational load.

Testing All Flags

How can you quickly test all the flags? Amazon Rekognition, the AWS CLI, and a while loop are the answer. You upload the dataset to an S3 bucket and run a simple command with the AWS CLI:

# list all the flag files in the bucket and save the object keys to a file
aws s3api list-objects --bucket <my-bucket> \
    --query 'Contents[].{Key: Key}' | jq -r '.[].Key' > list-countries.csv
# run label detection on every flag, saving one JSON output per flag
cat list-countries.csv | while read flag
do
   aws rekognition detect-labels \
       --image "{\"S3Object\":{\"Bucket\":\"<my-bucket>\",\"Name\":\"$flag\"}}" > "$flag".json
done

Not elegant, but it just works. Every output file is a JSON document, one for every flag. Here is an example output (Italy) and here is a zip with the output for all the countries.

Conclusions

Maybe there is not one single Stars and Stripes but we have only one Lollipop. 

The decoding of flags on Amazon Rekognition is quite unreliable but, with a few exceptions, the decoding of objects and animals inside the flags is accurate. 

Please don’t take the numbers too seriously. As already acknowledged by AWS Support, Rekognition is currently not trained to identify flags. These numbers are just a warning and a reminder that results from image recognition have to be validated and used carefully.

How can we improve our results and have some confidence in the flag detection process?  We will soon play with Rekognition Custom Labels and discuss the results in a separate article. 

Thanks for making it this far! I am always looking for feedback to make it better, so please feel free to reach out to me via LinkedIn or email.

Credits

All the screenshots are from the author and PNG files are from the region-flags repository.

AWS IoT ExpressLink Software for Hardware Modules Is GA

Amazon recently announced the GA of AWS IoT ExpressLink. The connectivity software powers wireless hardware modules used to build IoT solutions connected to AWS services.

Announced in preview at re:Invent, AWS IoT ExpressLink is a service introduced to build secure IoT devices while removing the complexity of implementing cloud connectivity, a process that usually requires multiple steps such as wiring the hardware, managing the TCP/IP communication, and using an MQTT broker.

Espressif, Infineon Technologies, and u-blox are the first AWS partners to have developed wireless modules. Channy Yun, principal developer advocate at AWS, uses a pool pump as an example to explain the benefits of the new service:

Some devices are too resource-constrained to support cloud connectivity, meaning their processors are too small or too slow to handle the additional code. For example, a small pool pump may contain a tiny processor optimized to control a particular type of motor, but it lacks the memory or the performance needed to handle both the motor and a cloud connection. (…) Thanks to the new service, you can use the tiny processor in the pump and delegate the heavy lifting of the cloud connection to IoT ExpressLink.

Source: https://aws.amazon.com/blogs/aws/aws-iot-expresslink-now-generally-available-quickly-develop-devices-that-connect-securely-to-aws-cloud/

According to AWS, the service is compatible with devices of any size and resource constraint, making it possible to keep existing processors unchanged. IoT ExpressLink integrates with other AWS IoT services, such as AWS IoT Jobs and IoT over-the-air (OTA) updates, to schedule and run updates, and AWS IoT Device Management.

A repository with IoT ExpressLink examples is available on GitHub. It is possible to order evaluation kits and send telemetry data to the cloud through the serial interface. The price of a module with IoT ExpressLink depends on the features and the hardware manufacturer, starting at around 26 USD. AWS does not charge additional costs for using the service.
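
As an illustration only, publishing a telemetry data point from a Linux host to an evaluation kit over the serial port could look like the following sketch; the AT-style commands reflect the ExpressLink command set, but the exact syntax, topic configuration, and device path depend on the module and its programmer’s guide:

# hypothetical serial session with an ExpressLink evaluation kit
stty -F /dev/ttyUSB0 115200                                  # serial port setup
echo -e "AT+CONF Topic1=pool/telemetry\r" > /dev/ttyUSB0     # configure a topic
echo -e "AT+CONNECT\r" > /dev/ttyUSB0                        # connect to AWS IoT Core
echo -e "AT+SEND1 {\"temperature\": 22.5}\r" > /dev/ttyUSB0  # publish a message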

Want to read more AWS news?

Cockroach Labs 2022 Cloud Report: AMD Better than Intel?