Location-based services and countries on AWS

A few weeks ago, in The (accidental) political software developer, I covered my experience and the challenges of working with geolocation technologies for Funambol and RaceBase World.

How does it work using AWS services?

As the focus of this blog is AWS technologies, what about Amazon and their location-based services? Any way to keep those geolocation issues under control?

First of all, let’s see which services Amazon offers that include geolocation capabilities.

AWS offers almost nothing. If we compare the AWS products with the Google Maps API or GeoNames, there is nothing yet with comparable geolocation capabilities. Of course you can run a third-party AMI from the Marketplace, like MaxMind – GeoIP or IP2Location Geolocation. But that is just taking advantage of an EC2 instance, it is not a service directly provided by AWS.

Even Amazon Rekognition, a deep learning-based image analysis service, does not offer the full set of features that Google Vision has and that could potentially trigger location-based issues. If you upload a picture to Google Vision, thanks to Landmark Detection you might end up with a location or a questionable place on Earth. But if you upload a picture of the Eiffel Tower to Amazon Rekognition, you get as a result a disappointing but very safe 99.2% tower.

[Screenshot: Amazon Rekognition labeling a picture of the Eiffel Tower simply as a tower]

And you get something similar with a picture of the Western Wall in the Old City of Jerusalem. Definitely not very accurate, but not a result that can create many controversies.

[Screenshot: Amazon Rekognition labels for a picture of the Western Wall]
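For the record, the same test is easy to reproduce programmatically. Below is a minimal Node.js sketch using the AWS SDK, assuming the picture has already been uploaded to an S3 bucket (bucket and key names are made up):

// Detect labels for a picture stored on S3 (bucket and key are made up)
var AWS = require('aws-sdk');
var rekognition = new AWS.Rekognition({ region: 'eu-west-1' });

rekognition.detectLabels({
    Image: { S3Object: { Bucket: 'my-test-pictures', Name: 'eiffel-tower.jpg' } },
    MaxLabels: 10,
    MinConfidence: 70
}, function (err, data) {
    if (err) return console.error(err);
    // Expect generic labels such as "Tower", never a landmark or a location
    data.Labels.forEach(function (label) {
        console.log(label.Name + ': ' + label.Confidence.toFixed(1) + '%');
    });
});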

What about Amazon user interfaces and products for end users? Many photo sharing and storage services like Google Photos, Apple Photos or OneMediaHub (a white label solution provided by Funambol) create tags according to the GPS coordinates (EXIF data) of the pictures uploaded by a user, with the challenges of defining the best tag for Hong Kong or determining whether Crimea is a Russian or Ukrainian territory. Prime Photos from Amazon does not. No feature, no issues.

Really nothing on Amazon or AWS?

Amazon of course still has to provide a user interface and a chance for the user to add and validate an address. They have their own interesting choices (no Kosovo, for example) and some entries in the list might be disputed as countries by some users – somehow Google’s approach of labeling the drop-down “location” rather than “country” feels safer. But that is hardly interesting for a developer.

[Screenshot: the country drop-down in the Amazon address form]

To summarize, I have never had to deal with geolocation issues or controversies while working directly with AWS services or the Amazon platform.

But there is something new that might pose a potential challenge: the Alexa SDK and the built-in slot types that define how the data in a slot is recognized and handled. And Amazon Lex, the “conversational interfaces for your applications, powered by the same deep learning technologies as Alexa”.

Alexa relies on slots, which are lists of values, many of them predefined by Amazon. For example, the slot AMAZON.Country is a ready-made list of (English) names of countries around the world, and AMAZON.DE_CITY provides recognition for German and world cities commonly used by speakers in Germany and Austria.
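In the interaction model of a skill, using one of these built-in slots is a single line. A minimal sketch of an intent schema (the intent and slot names are made up):

{
  "intents": [
    {
      "intent": "GetPopulationIntent",
      "slots": [
        { "name": "Country", "type": "AMAZON.Country" }
      ]
    }
  ]
}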

Anything to be worried about? Let’s test it first.

Big World: Alexa and geolocation

After attending a presentation at Factory Berlin and a talk at the AWS summit, both from Memo Döring and both very inspiring, I decided to build a simple dynamic skill for Alexa. The skill, called Big World, relies on data from Population.io, a project of the World Data Lab that aims to make demography accessible to a wider audience. Given a country name, the very simple skill returns today’s and tomorrow’s population, using the values from the World Population API. Below are a screenshot and a short audio demo; the code is available on GitHub.

[Screenshot: the Big World skill in the Alexa app]
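The interesting part of the Lambda function boils down to a single GET request to the World Population API. A minimal Node.js sketch, assuming the today-and-tomorrow endpoint and the response format documented by Population.io at the time (error handling mostly omitted):

// Ask the World Population API for today's and tomorrow's population
var http = require('http');

function getPopulation(countryName, callback) {
    var url = 'http://api.population.io/1.0/population/' +
        encodeURIComponent(countryName) + '/today-and-tomorrow/';
    http.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var days = JSON.parse(body).total_population;
            // days[0] is today, days[1] is tomorrow
            callback(null, days[0].population, days[1].population);
        });
    }).on('error', callback);
}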


Any problem?

Going back to the topic of this post, the only challenge was to match the names from the built-in Alexa slot to the countries of the Population.io API. The backend supports only values such as Arab Rep of Egypt, Islamic Republic of Iran, West Bank and Gaza or Hong Kong SAR-China – names that a user is very unlikely to say while talking to a voice assistant like Amazon Echo and that therefore require mapping.

var PS = 'West Bank and Gaza';
// (...)
// Map the spoken country name to the one the backend expects
else if (countryName.toUpperCase() === 'PALESTINE') {
    countryName = PS;
}
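With more than a handful of special cases, a lookup table keeps the mapping more maintainable than a chain of else-if blocks. A possible refactoring – the West Bank and Gaza entry comes from the snippet above, the other keys are illustrative guesses at the spoken names:

// Spoken country names mapped to the names the Population.io backend expects
var COUNTRY_ALIASES = {
    'PALESTINE': 'West Bank and Gaza',
    'IRAN': 'Islamic Republic of Iran',
    'EGYPT': 'Arab Rep of Egypt',
    'HONG KONG': 'Hong Kong SAR-China'
};

function normalizeCountry(countryName) {
    return COUNTRY_ALIASES[countryName.toUpperCase()] || countryName;
}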

But unless you say the name entirely wrong, there is really no big challenge, and the answer is still not controversial as you are dealing only with the name and not the location itself. For example:

Q. Alexa ask Big World the population of Palestine.

A. You are not alone in this world. The population of West Bank and Gaza today is 4916233, tomorrow there will be 368 people more.

or in a simpler scenario:

Q. Alexa ask Big World how many people live in England.

A. The world population is growing as we speak. The population of United Kingdom today is 65473338, tomorrow there will be 1101 people more.

The answer might not be 100% accurate but it is the best approximation available using the data from Population.io. As with any spoken conversation, an audio interaction with a smart speaker is more forgiving than an incorrect point or country name on a website.

Of course some users might still not be able to find results for specific, perfectly valid country names, but that comes down to poor coding and logic in the Lambda function and not to the Alexa slot itself.

At the end…

Due to the lack of real location-based features, the services available on AWS today do not present most of the challenges covered in the previous post. But for the very same reason they do not provide a solution or any help to the developer in addressing or mitigating them.


Finding marathon cheats using Amazon Rekognition

In the last few weeks many articles have been published about the problem of cheating in the biggest running events, for example “Beijing marathon to use facial recognition in cheating crackdown”.

There is apparently even a former marathoner and business analyst, Derek Murphy, who devotes his time to catching the cheats, as the BBC recently reported: “The man who catches marathon cheats – from his home”. The boom of the biggest international marathons and the growth of qualifying events for the most prestigious ones make cheating more likely, with bib-swappers who give their chips to a faster runner and bib-mules who carry more than one chip during the entire race.

Is it really cheating?

Let me start by saying that I am not a big fan of blaming “marathon cheats” in public forums. There are scenarios where a runner might decide to take part in a race using someone else’s number, and most of them do not hurt the community or other runners. Qualifying for the UTMB or for the Boston Marathon at the expense of other runners is of course not one of them. There are a few hundred trail events that allow runners to collect points for the UTMB, and even more road marathons that can give you an official time good enough to go to Boston. You can find most of them on RaceBase World.

[Screenshot: qualifying races listed on RaceBase World]

I have run almost 100 races in different countries in the last 15 years, I have volunteered in a dozen running events in Berlin alone, and I do not need face recognition to figure out that a few runners in every race are indeed running with someone else’s number. And I never did anything to stop them.

Data privacy and marathon running

Before testing Amazon Rekognition as a tool to find cheats in marathons, we might want to discuss whether facial recognition technology is really a threat to privacy here, and how we can get the data without contacting the race organizer.

Unfortunately, runners’ data has been exposed for some time. While you might take good care of your data online, make sure not to post to social networks or remove geolocation information from your pictures, there is nothing you can really do as a runner to avoid your personal data being shared everywhere.

You run 42 kilometers with a number on your shirt, and everyone can find all the pictures of a given runner and his personal data (often including date of birth and residence) on public websites. Last year I was able to figure out personal information about the girlfriend of an old schoolmate I had not met in 20 years, simply by looking at the pictures and results of an old Berlin Marathon.

Is face recognition during a marathon actually possible?

Using facial recognition for a cheating crackdown in a recreational event feels like using a machete to cut a salad, but does it actually deliver? Can Amazon Rekognition help in validating the results of qualifying races for UTMB or Boston?

As a first test, I took the images of the latest marathon I ran (OK, I barely finished walking), the Berlin Marathon 2016. And I of course compared the very first picture in the set associated with my number (the one before the start) with the last one, after crossing the finish line.

[Image: Amazon Rekognition face comparison between the start and finish pictures]

Amazon Rekognition simply confirms something that runners have always known, a marathon changes you. After completing a 42K you are not the same person anymore.

Jokes aside, this is just a one-picture test on one runner, but it highlights the challenges of tracking a runner along the race using face recognition alone. It might help combined with other technologies, but it is likely to generate a significant number of false positives.
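For reference, the comparison itself is a single CompareFaces call. A minimal sketch, assuming both pictures are already on S3 (bucket and key names are made up):

// Compare the pre-start picture with the finish-line picture of the same bib
var AWS = require('aws-sdk');
var rekognition = new AWS.Rekognition({ region: 'eu-west-1' });

rekognition.compareFaces({
    SourceImage: { S3Object: { Bucket: 'race-photos', Name: 'bib-12345-start.jpg' } },
    TargetImage: { S3Object: { Bucket: 'race-photos', Name: 'bib-12345-finish.jpg' } },
    SimilarityThreshold: 70
}, function (err, data) {
    if (err) return console.error(err);
    if (data.FaceMatches.length === 0) {
        console.log('No match found: a marathon changes you');
    } else {
        console.log('Similarity: ' + data.FaceMatches[0].Similarity.toFixed(1) + '%');
    }
});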

Testing a few more pictures from my previous races still available online (the non-existent data privacy for runners), I had mixed results. Some pictures match easily, others do not. Some are false positives, others are not. And I was definitely younger.

[Screenshot: Amazon Rekognition face comparison on pictures from older races]
A random test with the London Marathon

Let’s instead limit the goal to confirming that a runner taking part in an event matches the gender and age of the category she was registered for. That is the most common cheating scenario, and one impossible to catch using split times along the course.

Think about your younger and fitter cousin setting your PB so you can qualify for the Boston Marathon, or your long-term running pal who collects a few points for you so you can qualify for the UTMB next year.

Last Sunday the London Marathon took place, the largest spring marathon in Europe and, together with New York, Berlin and Paris, one of the biggest in the world.

All the results of the race are of course available online, and there is a simple GET request that returns the data for a given bib number:

http://results-2017.virginmoneylondonmarathon.com/2017/?event=MAS&pid=search&search%5Bstart_no%5D=*****&search%5Bsex%5D=%25&search%5Bnation%5D=%25&search_sort=name

In the same way you can access all the pictures of all the runners on MarathonFoto and retrieve the pictures of a runner using, again, the start number and the last name you got from the previous request (the RaceOID here is the one of the London Marathon 2017):

http://www.marathonfoto.com/index.cfm?RaceOID=19802017S3&LastName=****&BibNumber=*****
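Putting the two together, a couple of helper functions are enough to go from a bib number to a runner's pictures. A sketch using the URLs above exactly as they worked in 2017 (they may have changed since):

// Results page for a given bib number (London Marathon 2017)
function resultsUrl(bibNumber) {
    return 'http://results-2017.virginmoneylondonmarathon.com/2017/' +
        '?event=MAS&pid=search&search%5Bstart_no%5D=' + bibNumber +
        '&search%5Bsex%5D=%25&search%5Bnation%5D=%25&search_sort=name';
}

// Pictures of the runner on MarathonFoto, given bib number and last name
function photosUrl(lastName, bibNumber) {
    return 'http://www.marathonfoto.com/index.cfm?RaceOID=19802017S3' +
        '&LastName=' + encodeURIComponent(lastName) +
        '&BibNumber=' + bibNumber;
}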

We do not want (yet) to process 30K or 40K runners, so let’s use a very small sample to see how Amazon Rekognition works. Let’s use Random.org to get 10 numbers we can test.

[Screenshot: the random bib numbers generated on Random.org]

Three of them did not match any runner (not all numbers are assigned for the race) and one runner did not start at the event. What about the other 6 runners? We know the category (age range) and the gender for all of them.

- 31724 (18-39, male) 
- 10297 (18-39, male)
- 12471 (18-39, male) 
- 19412 (18-39, female) 
- 17970 (45-49, female) 
- 21095 (18-39, female)

Retrieving the first image of each runner’s set using the MarathonFoto URL, we can double-check the runners, using Amazon Rekognition to match the data above against what the face analysis detects.
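Age range and gender come from a single DetectFaces call with all facial attributes enabled. A minimal sketch (bucket and key names are made up):

// Estimate age range and gender for the first face in a runner's picture
var AWS = require('aws-sdk');
var rekognition = new AWS.Rekognition({ region: 'eu-west-1' });

rekognition.detectFaces({
    Image: { S3Object: { Bucket: 'race-photos', Name: 'bib-31724.jpg' } },
    Attributes: ['ALL'] // the default attribute set has no age or gender
}, function (err, data) {
    if (err) return console.error(err);
    var face = data.FaceDetails[0]; // beware: there might be more faces in the shot
    console.log('Age range: ' + face.AgeRange.Low + '-' + face.AgeRange.High);
    console.log('Gender: ' + face.Gender.Value +
        ' (' + face.Gender.Confidence.toFixed(1) + '%)');
});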

How did Amazon Rekognition score?

- 31724 (35-52, male 99.9%)
- 10297 (29-45, male 99.9%)
- 12471 (14-23, male 99.9%)
- 19412 (23-38, female 100%)
- 17970 (20-38, female 100%)
- 21095 (30-47, female 100%)

It did very well. All runners matched the expected gender.

Even if there was a bit of luck: in one of the pictures Amazon Rekognition selected a different, incorrect runner in the corner of the photo. And this will be the biggest challenge for an automatic bot: we first need to match the bib number in the image with the race number, as there might be multiple runners in the same shot. That means combining the technology already used to map bib numbers to photos with face recognition.

Did all runners match the expected age range as well? In short, yes.

The only failure (12471) is due to picking up the wrong face in the picture, but once that is addressed, the result is correct too. Note as well that for some runners the overlap between the detected and the expected age range is minimal. But even when the overlap is minimal, the correct age is in the range (you can find that information searching for the athlete’s name online: 31724 is 39 and 21095 is 33).

Limitations

So we have a few limitations here that working with a race organizer can easily address:

  1. The London Marathon is sensible enough to publish the age category but not the year or date of birth – data they of course have, and that many race organizers (wrongly) make public.
  2. The pictures are screenshots from the MarathonFoto site and are not of the best quality (I did not pay for them).
  3. We need to parse multiple photos of each runner to reach significant confidence, and that might increase the cost of the solution.

Conclusions

Of course a very small sample of runners is not expected to catch cheaters (and I would not publish their data anyway), but it confirms that using face recognition to catch cheats in marathons is feasible, even if it most likely needs other tools too to reach a reasonable level of accuracy.

Running a full data validation for a big event requires just the collaboration of the race organizer, a few dollars, maybe an instance running Scrapy and a couple of AWS Lambda functions. But even without the organizer, anyone today can create profiles of thousands of marathon runners around the world and verify their data. Whether that is good or not.

But I still believe that at the moment the claim of the Beijing Marathon is more a PR stunt than the way they are actually going to address the issue.

The (accidental) political software developer

As a software developer, the chance to discuss politics is high at the coffee machine or after a couple of beers in the evening, but not while writing code. Somehow the last few weeks proved me wrong: I managed to discuss controversial borders and disputed countries on more than one occasion. And all thanks to the now ubiquitous geolocation and image recognition services.

The Hong Kong user

The founders of RaceBase World, a service to discover and rate running events around the world, are based in Hong Kong. And they were not too impressed when their profile page stated China as their home country. The geolocation labeling was provided by Mapbox, one of the largest providers of custom online maps. While the labeling might be justified – most of their users consider Hong Kong to be part of China – the choice is controversial for many runners based in the territory. And using the official name, the Hong Kong Special Administrative Region of the People’s Republic of China, is not really a feasible option.

Shenzhen anyone?

And it’s not the only service affected.

When I then uploaded a trail picture taken in Hong Kong, not far from mainland China, to OneMediaHub (a cloud solution provided by Funambol), the result was even more bizarre. The picture was labeled with the location “Shenzhen, China”, even if downtown Shenzhen is not exactly a paradise for trail running and is quite far away.

[Image: a Hong Kong trail picture labeled “Shenzhen, China” on OneMediaHub]

In this scenario the problem was both in the algorithm used to match the EXIF data and in the accuracy of the open source geolocation database used, GeoNames.

To make matters worse, a picture taken in the West Bank, not far from Jerusalem, ended up for the very same reason with the location “Jerusalem, Israel” on OneMediaHub.com. Again, the author was not too happy.

[Image: a West Bank picture labeled “Jerusalem, Israel” on OneMediaHub]
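To see how such a mislabeling happens, it is enough to feed the EXIF coordinates to a reverse geocoding endpoint: the nearest known populated place wins, whichever side of a border it sits on. A minimal Node.js sketch against the GeoNames findNearbyPlaceNameJSON service (the demo account is rate limited, replace it with your own username):

// Reverse geocode a pair of EXIF coordinates with GeoNames
var http = require('http');

function reverseGeocode(lat, lng, callback) {
    var url = 'http://api.geonames.org/findNearbyPlaceNameJSON' +
        '?lat=' + lat + '&lng=' + lng + '&username=demo';
    http.get(url, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var place = JSON.parse(body).geonames[0];
            // The nearest populated place, not necessarily the right country
            callback(null, place.name + ', ' + place.countryName);
        });
    }).on('error', callback);
}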

Is that really so bad?

Most geolocation decoding services are pretty accurate and the error margin is very low. The vast majority of users are hardly affected by the issues above, which are corner cases. And even if one of your summer pictures gets tagged with the next town on the Costa Brava, you are hardly going to complain. Or be offended. You might not even notice the bug.

But the issue is that a small percentage of the scenarios where the algorithm fails, or where the decoding is controversial, fall in disputed territories and partially recognized states. And that introduces some challenges for the developer who does not want to deal with politics while writing code.

It’s only geolocation!

Actually, even a simple signup form where the user has to choose a country can be controversial. Sadly, not everyone in the world agrees on the status of Kosovo. Or Palestine. Or even on their names.

[Screenshot: the Google signup form with “Palestine” in the location drop-down]

Google uses “Palestine” (but labels the field “location”) while Amazon goes for a neutral “Palestinian territories”.

[Screenshot: the Amazon drop-down with “Palestinian territories”]

Relying on the official UN status might be a safer option, but it does not make local users very happy either. Let’s go back to the Mapbox example with RaceBase World.

[Screenshot: the Palestine Marathon on RaceBase World, on a Mapbox map]

Mapbox works for the Palestine Marathon and makes most (if not all) of the runners attending the event happy, but let’s assume a (fictitious) marathon is taking place in Simferopol, the largest city on the Crimean peninsula. Would most local runners be OK with Ukraine as the country? Runners in Germany and runners in Russia usually have a different opinion about the status of Crimea. And there are many more similar examples without even considering war zones.

[Screenshot: a map of Simferopol, Crimea]

How to fix those issues?

As a developer, if you have only a local audience it’s relatively easy, and you can minimize the controversies. If not, you can use workarounds or hacks for challenging names, or simply hide them (pretend that the automatic decoding did not work). RaceBase World, for example, now shows Hong Kong for new registrations in the autonomous territory.
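A workaround of that kind can be as simple as a post-processing step on the decoded result. A sketch of the idea – the entries are illustrative, not the actual RaceBase World code:

// Replace controversial automatic labels before displaying them
var DISPLAY_OVERRIDES = {
    'HK': 'Hong Kong', // instead of "China"
    'XK': 'Kosovo'     // often missing from country lists altogether
};

function displayCountry(countryCode, decodedName) {
    return DISPLAY_OVERRIDES[countryCode] || decodedName;
}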

Better, but with significantly higher development costs, you could show localized names according to where the audience is.

But at the end of the day the big players drive the geolocation databases, and they care more about where most of their users are. When “2.8 million people took part in marathons in China in 2016, almost twice the number from the previous year”, as the Telegraph recently reported, it’s hard to argue with Mapbox’s approach to what China is and what China is not. Runners in Hong Kong might not be their first audience or growing market.

If you want to test how your website performs in critical areas, you do not even need real pictures: just edit the EXIF data of a random picture using Photo Exif Editor or a similar application and enjoy the challenge. You are ready to go.

How does it work with AWS services?

What about Amazon and AWS services? Any way to limit or keep the above issues under control? This will be covered soon in the second part of this post.

Build an Alexa skill in (about) one hour

Since a few weeks ago, the Amazon Echo Dot is finally available in Germany, the first non-English-speaking market. Time for me to test the cloud-based virtual assistant and see if it is really possible to build a trivia skill in under an hour, as Amazon claims. And hopefully find out a bit more about how AWS Lambda is used by Alexa.

Any other benefit?

A new challenge aside, Amazon offered two main benefits for developing new (German) skills for Alexa.

  • developers who published a German Alexa skill in March could get an Alexa hoodie

[Screenshot: the Alexa hoodie offer]

What’s the challenge?

No real technical challenge. Setting up a few trivia sentences in German was the hardest task. I had never written a Node.js Lambda function before (I am more familiar with Java and Python), but there was not really any code to be written; it was simply a matter of changing the template that Amazon set up. The goal was simply to get a meaningful skill live in the shortest time frame possible.
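For context, the trivia template keeps all the content in a plain list of questions where the first answer is the correct one, so changing the template really means editing data, not code. Roughly like this (the entry below is a made-up example, not one of the real lauf lauf questions):

// One object per question; the first answer in the list is the correct one
var questions = [
    {
        'Wie lang ist ein Marathon?': [
            '42,195 Kilometer',
            '40 Kilometer',
            '45 Kilometer',
            '42,5 Kilometer'
        ]
    }
];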

How long did it really take?

About one hour to set up the skill. A little longer to decide the topic, find an easy name (lauf lauf) for the trivia skill and the keywords to trigger it. Less than two hours to collect a few running trivia questions and write them in German. And then submit the new skill for certification. Altogether a not-too-long evening in front of the laptop. The skill was approved after less than four days.

[Screenshot: the lauf lauf skill live in the Alexa app]

How does it sound?

[Audio demo of the lauf lauf skill]
What about privacy concerns?

You might not be a strong supporter of virtual assistants always in standby mode. I do not have Alexa on all the time, I do not need to. I just set a timer to switch it off at scheduled times and I changed the default configuration a bit to avoid storing all the information in the Alexa application too. Note as well that you do not need a real Echo device to develop a skill, but testing it in a real scenario is the fun part.

Was it worth it?

Yes. Not a technical challenge (it’s really too easy to set up a trivia skill) but a useful and interesting introduction to the world of Alexa and Lambda integration. And I am still impressed that in one evening I took something I was not familiar with from (almost) zero to live. The next step is to build a real skill, possibly using the Alexa Skills Kit SDK for Node.js.

Base performance and EC2 T2 instances

Almost three years ago AWS launched the now very popular T2 instances, EC2 servers with burstable performance. As Jeff Barr wrote in 2014:

Even though the speedometer in my car maxes out at 150 MPH, I rarely drive at that speed (and the top end may be more optimistic than realistic), but it is certainly nice to have the option to do so when the time and the circumstances are right. Most of the time I am using just a fraction of the power that is available to me. Many interesting compute workloads follow a similar pattern, with modest demands for continuous compute power and occasional needs for a lot more.

It took a while for users to fully understand the benefits of the new class and how to compute and monitor CPU credits, but the choice between the different instance types was very straightforward.

A bit of history…

Originally there were only 3 instance types (t2.micro, t2.small and t2.medium), and base performance, RAM and CPU credits were very clear, growing linearly.

Instance     Base    RAM (GiB)    Credits/hr

t2.micro     10%       1.0             6
t2.small     20%       2.0            12
t2.medium    40%       4.0            24

And the price too: a t2.medium was effectively equivalent to 2 small instances or 4 micro instances, in credits, base rate and price alike. So far so good.

At the end of 2015, AWS introduced an even smaller instance, the t2.nano, and the approach was still the same:

Instance     Base    RAM (GiB)    Credits/hr

t2.nano       5%       0.5             3


Now large and even bigger!

But AWS also extended the T2 class into the large range, adding first a t2.large in June 2015, then t2.xlarge and t2.2xlarge at the end of 2016. A lot more flexibility, and a class that can cover many use cases with the chance of vertical scaling, but the linear growth was finally broken:

Instance      vCPU    RAM (GiB)    Price/hr

t2.large        2         8          $0.094
t2.xlarge       4        16          $0.188
t2.2xlarge      8        32          $0.376

So far so good, the price per hour doubles according to vCPU and Memory. So a t2.2xlarge is equivalent to 4 t2.large. But what about the base performance?

Instance    Base Performance

t2.large      60% (of 200%)
t2.xlarge     90% (of 400%)
t2.2xlarge   135% (of 800%)

A t2.2xlarge is not equivalent to 4 t2.large.

Running 4 t2.large nodes, I have a better base rate per vCPU that I can sustain forever (I would be able to average 30% on every vCPU) than running a single t2.2xlarge (where the base performance is less than 17% on every vCPU) for the very same price.
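The arithmetic, spelled out:

// Base performance per vCPU at the same hourly price
var fourLarge = (4 * 0.60) / (4 * 2); // 4 nodes, 60% base, 2 vCPUs each
var one2xlarge = 1.35 / 8;            // 1 node, 135% base, 8 vCPUs

console.log('4x t2.large:   ' + (fourLarge * 100) + '% per vCPU');              // 30%
console.log('1x t2.2xlarge: ' + (one2xlarge * 100).toFixed(1) + '% per vCPU');  // 16.9%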

The bigger you go, the lower the base performance per vCPU is.

So what?

Even without the loss in terms of base performance, you have many reasons to choose more, smaller instances: better HA across multiple AZs, easier horizontal scaling, better N+1 metrics.

But with the larger T2 instances, even the AWS pricing strategy pushes you away from a single big instance.

Unless you have an application that definitely benefits from a single larger T2 instance (for example a database server), spread your load across smaller instances; with the T2 class you have one more reason to do so.

The recently announced instance size flexibility for EC2 reserved instances makes it even easier to adapt the instance type even if you have a lot of RI capacity.

A wish list for the Amazon Elastic Transcoder

1 – Real-time video encoding

There is no real-time video encoding with the Amazon Elastic Transcoder.

How long does it take to transcode a job? It depends – usually not too long, but if you need real-time or almost real-time encoding you need to look somewhere else, or wait and hope they will add the feature in the future.

Over a year ago Amazon apparently paid $296 million to acquire Elemental Technologies, a company that provides real-time video and audio encoding and still operates as a standalone company. But who knows, maybe there is some hope for real-time video encoding in the AWS offering in the future.

2 – Filter out items that already match the preset

Let’s start with a simple example to clarify the problem. Let’s say I submit a job for a video (for example dummy.mov, Resolution 1920 x 1080, Frame Rate 29.97 fps, File Size 27.4 MB, Duration 0:00:13.980) and I use a simple preset like “Generic 480p 4:3” (1351620000001-000030). My output is as expected:

Output Key playback.mp4
Output Duration 0:00:14.095
Output Resolution 640 x 360
Frame Rate 29.97 fps
File Size 1.7 MB

I pay for one minute of encoding, I have a nice output and I am a happy customer. I now take the output and reiterate the process: same pipeline, same preset, just a new job. I might hope to get an output identical to the input – the Amazon Elastic Transcoder has already transcoded it, as is obvious from the output metadata. Instead a new output is generated, I pay for one more minute and I can keep iterating. Each output is similar, but the quality decreases every time.

OK, I submitted the job, but I am not sure it makes sense to generate an output that is not as good as the input and that already matches the preset as closely as the underlying FFmpeg transcoder could manage. Would it not be better to avoid any transcoding operation when the input is already within a “margin of error” of the preset’s specs? Output files cannot (obviously) be better than the input ones. But AWS will try, and charge for it, anyway.

All this might sound like a corner case, but think of a scenario where your end users upload videos from their iPhones, tablets or laptops. You are in for surprises if you do not have good filtering in front of the pipeline.
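Until such a filter exists in the service itself, a thin guard in front of CreateJob avoids paying for the pointless minute. A sketch of the idea, assuming a probeVideo helper (hypothetical, e.g. backed by FFprobe) that returns the input resolution; the pipeline id is made up:

// Skip transcoding when the input already fits the preset's resolution
var AWS = require('aws-sdk');
var transcoder = new AWS.ElasticTranscoder({ region: 'eu-west-1' });

var PRESET_480P = { id: '1351620000001-000030', maxWidth: 640, maxHeight: 480 };

function submitIfNeeded(inputKey, outputKey, done) {
    probeVideo(inputKey, function (err, meta) { // hypothetical helper
        if (err) return done(err);
        if (meta.width <= PRESET_480P.maxWidth &&
            meta.height <= PRESET_480P.maxHeight) {
            return done(null, 'input already matches the preset, skipping');
        }
        transcoder.createJob({
            PipelineId: '1111111111111-abcde1', // made-up pipeline id
            Input: { Key: inputKey },
            Output: { Key: outputKey, PresetId: PRESET_480P.id }
        }, done);
    });
}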

3 – Processing a few seconds of video

Actually the title is not entirely true: the Amazon Elastic Transcoder can process videos of any length, even just a few seconds, whether they are 6.5-second looping videos (Vines), short videos from Snapchat or clips from any other source at a higher resolution. But it might not be the most effective way, and it can be quite costly.

“Fractional minutes are rounded up. For example, if your output duration is less than a minute, you are charged for one minute.”

So for your 3 seconds you pay for 60 seconds on every output. Any way around it? There are two more efficient ways to process short videos. You can use the new Clip Stitching feature, which allows you to stitch together parts, or clips, from multiple input files to create a single output, so fractional minutes are rounded up only once for the whole batch; but you then need to handle the splitting of the output yourself. Or you can replace the Amazon Elastic Transcoder with aws-lambda-ffmpeg or similar open source projects based on AWS Lambda + FFmpeg, which are more cost effective for very short videos. You can then determine the minimum length that triggers a job on the Elastic Transcoder.
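With Clip Stitching, a single job takes a list of inputs, so a batch of short clips is billed as one combined output. A minimal sketch of such a job (keys and pipeline id are made up):

// Stitch several short clips so fractional minutes are rounded up only once
var AWS = require('aws-sdk');
var transcoder = new AWS.ElasticTranscoder({ region: 'eu-west-1' });

transcoder.createJob({
    PipelineId: '1111111111111-abcde1', // made-up pipeline id
    Inputs: [
        { Key: 'clips/vine-001.mp4' },
        { Key: 'clips/vine-002.mp4' },
        { Key: 'clips/vine-003.mp4' }
    ],
    Output: { Key: 'stitched/batch-001.mp4', PresetId: '1351620000001-000030' }
}, function (err, data) {
    if (err) return console.error(err);
    console.log('Job ' + data.Job.Id + ' submitted');
});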

4 – Run a spot job & bidding price

I can bid on the price of an EC2 instance. I can run Elasticsearch on Spot Instances. I can rely on cheaper spot options in an Amazon EMR cluster. But I have no real way to lower the costs of running the Amazon Elastic Transcoder. I cannot (at least automatically, from the console) commit to a certain amount of encoded hours to get reserved capacity at a lower price. Many times, when I do not need real-time video encoding, I have no strong constraint of having the items ready in just a few minutes. The Amazon Elastic Transcoder does a great job processing them (usually) in a few minutes, but I would not mind the option to bid a price, or to accept a significantly longer queue time in exchange for a lower price when the service is less busy. To achieve that today, you are back to a cluster of FFmpeg encoders running on EC2 instances.


5 – Encoding available in every region

The Amazon Elastic Transcoder is currently available only in a subset of regions (8) and in Europe only in Ireland. From a technical point of view, you can always configure the Amazon Elastic Transcoder to handle the S3 bucket in one region and run the encoding in a different one. Latency is not that big of an issue, given that it’s anyway an asynchronous operation. And the cost of data transfer is as well negligible compared to the cost of the service itself. But the blocker is often a legal requirement and you cannot process a user video outside a specific region where the data is stored.