Notes from AWS Summit

I attended the AWS Summit in Chicago this week. Like many other tech companies, we at Gesture use various AWS products, so it was a nice opportunity to see what is becoming possible with their platform.

Additionally, looking into what AWS is up to is a great way to see where technology is headed in the near future (and the near future increasingly looks like something out of a sci-fi movie).

Though many of their services focus on backend and DevOps work, as a front-end developer I still found plenty that appealed to me.

Lightsail
A lightweight VPS offering for hosting simple sites. This is the AWS alternative to the various shared hosting providers out there. Want to spin up a quick WordPress server or marketing site? This is the AWS service for you.

Athena
Serverless, interactive query service that makes it easy to analyze big data in S3 using standard SQL. In other words, chuck a JSON file onto S3, define your data structure, and you can run SQL queries against it. It would be even cooler if they had a service that could look at a file on S3 and figure out the data structure on its own. I wouldn't be surprised if something like that is coming soon.
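To make that concrete, here is a rough sketch (not anything shown at the Summit) of kicking off an Athena query from the Node.js SDK; the database, table, and bucket names are placeholders I made up.

```javascript
// Sketch only: start an Athena query over JSON files in S3 from Node.js.
// The database, table and bucket names here are made up for illustration.
const AWS = require('aws-sdk');
const athena = new AWS.Athena({ region: 'us-east-1' });

athena.startQueryExecution({
  QueryString: 'SELECT user_id, COUNT(*) AS events FROM app_logs GROUP BY user_id LIMIT 10',
  QueryExecutionContext: { Database: 'my_logs_db' },                   // hypothetical database
  ResultConfiguration: { OutputLocation: 's3://my-athena-results/' }   // results land back in S3
}, (err, data) => {
  if (err) return console.error(err);
  console.log('Query started:', data.QueryExecutionId);
  // Poll getQueryExecution / getQueryResults with this ID to fetch the rows.
});
```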

Rekognition, Polly and Lex
This is the next-level stuff: APIs hooking into AI services. Rekognition does image analysis, making it easy to integrate things like facial recognition and object detection into your applications. Polly and Lex are a killer combination: lifelike text-to-speech (Polly) and speech recognition and natural-language understanding (Lex), which together let you add powerful conversational interfaces to applications (the same tech that powers Alexa).
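As a rough illustration of my own (not a demo from the Summit), here is what calling Rekognition and Polly from the Node.js SDK can look like; the bucket, image key, and voice are placeholder choices.

```javascript
// Minimal sketch of Rekognition label detection and Polly text-to-speech.
// Bucket, image key and voice are placeholders, not anything from the event.
const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition({ region: 'us-east-1' });
const polly = new AWS.Polly({ region: 'us-east-1' });

// Label detection on an image already sitting in S3.
rekognition.detectLabels({
  Image: { S3Object: { Bucket: 'my-photos', Name: 'team.jpg' } },
  MaxLabels: 5,
  MinConfidence: 80
}, (err, data) => {
  if (err) return console.error(err);
  data.Labels.forEach(l => console.log(`${l.Name}: ${l.Confidence.toFixed(1)}%`));
});

// Turn text into humanlike speech; the result is an MP3 audio buffer.
polly.synthesizeSpeech({
  Text: 'Welcome to the AWS Summit!',
  OutputFormat: 'mp3',
  VoiceId: 'Joanna'
}, (err, data) => {
  if (err) return console.error(err);
  require('fs').writeFileSync('welcome.mp3', data.AudioStream);
});
```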

Serverless (Lambda + API Gateway)
I’ve been following serverless architectures for a while. It is a rapidly evolving area that opens up a lot of possibilities, especially when plugged into services like those above.
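For anyone who hasn't seen the pattern, a minimal sketch of the Lambda side looks something like this; the handler assumes an API Gateway Lambda proxy integration, and the logic is just a placeholder.

```javascript
// Bare-bones sketch of the serverless pattern: a Node.js Lambda handler that
// API Gateway (Lambda proxy integration) can invoke over HTTP.
exports.handler = (event, context, callback) => {
  const name = (event.queryStringParameters || {}).name || 'world';

  // API Gateway proxy integrations expect this response shape.
  callback(null, {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}!` })
  });
};
```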

Mobile Hub
A really cool configurator for building mobile apps, including mobile web, which is where my interests lie. It makes it really easy to get up and running with services like user authentication and management (Cognito), a NoSQL database (DynamoDB), storage (S3), and so on. These baseline services are low cost and scale well; plenty of popular apps and companies have gotten their start on them and continued to rely on them after scaling.
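As a sketch of the kind of baseline wiring this involves (my own example, not generated Mobile Hub code), here is how the SDK can pick up guest credentials from a Cognito identity pool and make a DynamoDB call; the pool ID and table name are placeholders.

```javascript
// Sketch: unauthenticated (guest) credentials from a Cognito identity pool,
// then a DynamoDB read using them. Pool ID and table name are placeholders.
const AWS = require('aws-sdk');

AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000' // placeholder
});

const docClient = new AWS.DynamoDB.DocumentClient();
docClient.get({ TableName: 'AppSettings', Key: { settingId: 'theme' } }, (err, data) => {
  if (err) return console.error(err);
  console.log(data.Item);
});
```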

As I write this, I am having fun doing the AWS Game Day challenge with my team at Gesture (currently we're in first place!).

AWS Notes: Mastering NoSQL – Advanced Amazon DynamoDB Design Patterns for Ultra-High Performance Apps

DynamoDB engineer David Yanacek started things off with an overview of the types of tables and queries in a typical social network app and how they look in DynamoDB. His second example was image tagging: queries for images by user, by date, by tag, and combinations of those. He showed how to set up DynamoDB queries that are useful and inexpensive for these kinds of datasets. He introduced the Local Secondary Index, an alternate range key that DynamoDB maintains alongside a table's hash key, letting you run additional sorted lookups within the same partition. For tags, you would have another table in DynamoDB tying tags to image IDs.
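The talk's exact code isn't reproduced here, but a query like "this user's images for November" against a table keyed on userId (hash) and uploadDate (range) might look like this with today's DocumentClient; the table and attribute names are my own.

```javascript
// Sketch of the image-tagging example: query one user's images in a date
// range. Table and attribute names are assumptions, not from the talk.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

docClient.query({
  TableName: 'Images',                                   // hypothetical table
  KeyConditionExpression: 'userId = :u AND uploadDate BETWEEN :from AND :to',
  ExpressionAttributeValues: {
    ':u': 'user-123',
    ':from': '2013-11-01',
    ':to': '2013-11-30'
  }
}, (err, data) => {
  if (err) return console.error(err);
  console.log(`Found ${data.Count} images`);
});
```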

Then he revealed a new feature for DynamoDB: the Global Secondary Index. This lets you run fast, inexpensive queries with much more flexibility, indexing on attributes beyond the table's primary key.
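Here is a hedged sketch of what defining such an index looks like: the base Images table keyed on userId/uploadDate, plus a Global Secondary Index keyed on tag so you can query by tag directly. All names and throughput numbers are placeholders of mine.

```javascript
// Sketch: create an Images table with a Global Secondary Index on "tag".
// Table, index and attribute names are made up for illustration.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

dynamodb.createTable({
  TableName: 'Images',
  AttributeDefinitions: [
    { AttributeName: 'userId', AttributeType: 'S' },
    { AttributeName: 'uploadDate', AttributeType: 'S' },
    { AttributeName: 'tag', AttributeType: 'S' }
  ],
  KeySchema: [
    { AttributeName: 'userId', KeyType: 'HASH' },
    { AttributeName: 'uploadDate', KeyType: 'RANGE' }
  ],
  GlobalSecondaryIndexes: [{
    IndexName: 'tag-index',
    KeySchema: [
      { AttributeName: 'tag', KeyType: 'HASH' },
      { AttributeName: 'uploadDate', KeyType: 'RANGE' }
    ],
    Projection: { ProjectionType: 'KEYS_ONLY' },
    ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
  }],
  ProvisionedThroughput: { ReadCapacityUnits: 10, WriteCapacityUnits: 10 }
}, (err, data) => {
  if (err) return console.error(err);
  console.log('Table status:', data.TableDescription.TableStatus);
});
```

(Items with multiple tags would still need the separate tag-to-image table mentioned above; a GSI indexes a single attribute value per item.)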

The next example was managing the state of a tic-tac-toe game, showing some problems that arise and how to avoid them. David showed how a player could cheat by sending multiple update-item requests, perhaps from separate browser windows. You can solve that with conditional writes: add an Expected clause to the request so that a write is rejected unless the item is still in the state you expect.
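The API of the time expressed this with the Expected parameter; today's SDK spells the same idea as a ConditionExpression. A sketch of the fix, with my own table and attribute names rather than the talk's:

```javascript
// Sketch of the conditional-write fix: the move is only written if the game
// item is still in the state the client thinks it is in, so a duplicate
// request from a second browser window fails.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

docClient.update({
  TableName: 'Games',                                   // hypothetical table
  Key: { gameId: 'game-42' },
  UpdateExpression: 'SET topLeft = :player, turn = :next',
  ConditionExpression: 'turn = :player AND attribute_not_exists(topLeft)',
  ExpressionAttributeValues: {
    ':player': 'X',
    ':next': 'O'
  }
}, (err) => {
  if (err && err.code === 'ConditionalCheckFailedException') {
    console.log('Rejected: not your turn, or that square is already taken.');
  } else if (err) {
    console.error(err);
  } else {
    console.log('Move accepted.');
  }
});
```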

He moved on to finer-grained access control for user data. He proposed moving from a three-tier architecture to a two-tier one, where users write directly to DynamoDB rather than going through a service layer. Having just heard the ___ presentation, I knew where he was going with this: using Web Identity Federation to get temporary credentials and a session token. A new DynamoDB feature lets you limit access to particular hash keys and attributes. For example, allow all authenticated Facebook users to query the images table, but only for items whose hash key matches their Facebook ID. This lets you expose the database directly to users, lowering latency, cost, and complexity. The downsides: less visibility into user behavior, changes are harder to make without a service layer, and it takes work to scope items to specific users.
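Here is my sketch of what such a policy can look like, written as a JavaScript object for readability; it would be attached to the IAM role that federated users assume. The account ID, table ARN, and attribute list are placeholders.

```javascript
// Sketch of a fine-grained access control policy: an authenticated Facebook
// user may only query items whose hash key equals their own Facebook ID.
// ARN and attribute names are placeholders.
const imagesPolicy = {
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Action: ['dynamodb:Query', 'dynamodb:GetItem'],
    Resource: ['arn:aws:dynamodb:us-east-1:123456789012:table/Images'],
    Condition: {
      'ForAllValues:StringEquals': {
        // The federation variable for the logged-in Facebook user.
        'dynamodb:LeadingKeys': ['${graph.facebook.com:id}'],
        'dynamodb:Attributes': ['userId', 'uploadDate', 'tag']
      }
    }
  }]
};
```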

Next up was David Tuttle, Engineering Manager at Devicescape, on how they implemented DynamoDB in their service. Before adopting DynamoDB, they struggled with scaling, database maintenance, slow queries on large tables, and difficulty making schema changes. DynamoDB forces a different way of thinking about data.

He quickly ran through a bunch of pro tips: organize data into items around a primary key, evenly distribute the workload across hash keys, and use parallel scans for high-speed jobs. DynamoDB will not give you the expected throughput unless you distribute across hash keys. By using conditional writes and an atomic increment on the first item, they avoid collisions. Plan for failure: use local files to resume data operations after a failure, extract data to S3 for non-realtime work like analytics, and use DynamoDB Local to speed up the developer workflow.
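A rough sketch of the conditional-write-plus-atomic-increment idea (the table and attribute names are mine, not Devicescape's):

```javascript
// Sketch: bump a counter with an atomic ADD, guarded by a condition, so
// concurrent writers cannot collide on the same slot.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

docClient.update({
  TableName: 'Batches',                      // hypothetical table
  Key: { batchId: 'batch-2013-11-14' },
  UpdateExpression: 'ADD itemCount :one',    // atomic increment
  ConditionExpression: 'attribute_not_exists(itemCount) OR itemCount < :max',
  ExpressionAttributeValues: { ':one': 1, ':max': 1000 },
  ReturnValues: 'UPDATED_NEW'
}, (err, data) => {
  if (err) return console.error(err);
  // The returned counter value tells this writer which slot it owns.
  console.log('Claimed slot', data.Attributes.itemCount);
});
```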

Last up was Greg Nelson from Dropcam, the Wi-Fi-enabled camera company that does intelligent activity recognition and cloud recording. They moved from managed hosting to AWS and eventually to DynamoDB. He showed how they use it to manage all of their camera data as well as user sessions, and drilled into what their table updates, queries, and deletes look like for cuepoints (which track important footage within the video streams).

He talked about some of the complexity within DynamoDB, such as the concept of “eventual consistency”. With NoSQL, choosing a hash key is the most important up-front design decision, and you need to pay close attention to I/O. Throttling behavior is opaque, so you have to actively watch for problems.
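For reference, the consistency knob is per request: reads are eventually consistent by default, and you opt into strongly consistent reads (at higher cost) with a flag. A sketch with made-up names:

```javascript
// Sketch: a strongly consistent read is just an extra parameter on the call.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

docClient.get({
  TableName: 'Cuepoints',          // hypothetical table
  Key: { cameraId: 'cam-7', timestamp: '2013-11-14T10:00:00Z' },
  ConsistentRead: true             // strongly consistent; omit for eventual
}, (err, data) => {
  if (err) return console.error(err);
  console.log(data.Item);
});
```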

AWS Notes: Writing JavaScript Applications with the AWS SDK

AWS developer Loren Segal introduced the AWS SDK for Node.js. It is open source, Apache-licensed, and on GitHub, with full service coverage across 30 services.

Install through npm, of course:
npm install aws-sdk

Configuration is easy and can be done programmatically (there’s a global configuration object). Loren ran through a bunch of code snippets showing how to do various things, and it wasn’t long before he got into live coding in the terminal. First up were some simple S3 requests and response handling. Next he demoed response pagination for scanning large DynamoDB datasets. Then he showed request event handling and the SDK's global events object, which lets you handle, say, every successful response in one place. Last, he went over programmatically configuring credentials, showing that you can set temporary credentials that expire automatically.
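I didn't capture Loren's exact snippets, but the flavor of that walkthrough looks something like this; the bucket and table names are placeholders of my own.

```javascript
// Sketch: global configuration, a global event handler, a simple S3 request,
// and paginating a large DynamoDB scan. Bucket and table names are placeholders.
const AWS = require('aws-sdk');

// Programmatic global configuration.
AWS.config.update({ region: 'us-east-1' });

// Global event handling: log every successful request in the process.
AWS.events.on('success', resp => {
  console.log('OK:', resp.request.operation);
});

// Simple S3 request and response handling.
const s3 = new AWS.S3();
s3.listObjects({ Bucket: 'my-bucket' }, (err, data) => {
  if (err) return console.error(err);
  data.Contents.forEach(obj => console.log(obj.Key));
});

// Response pagination over a large DynamoDB scan.
const dynamodb = new AWS.DynamoDB();
dynamodb.scan({ TableName: 'BigTable' }).eachPage((err, page) => {
  if (err) { console.error(err); return false; }
  if (page) console.log('Got', page.Items.length, 'items');
  return true; // keep fetching; the callback receives null when done
});
```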

Loren moved on to show off the AWS SDK for JavaScript in the Browser, which is now in developer preview. This literally allows you to build web applications out of nothing but static files. Things you could easily build this way: forums, blogs, commenting systems, browser extensions, and mobile apps. This is freaking awesome.

He created a sample blogging application (check it out on GitHub). The app.js script is a mere 260 lines of code.

Loren deployed the app live. It had permissions set up, so he did a social login, then created a blog post using Markdown and an open source WYSIWYG editor. Did I mention this is all static HTML/CSS/JS files?! Nice. There was a live demo as well, though the link is no longer available.

He went over the key differences between a traditional three-tier architecture and two-tier development with AWS in the browser. He did say that in order to use the SDK with S3, you need to configure the bucket's CORS settings; for other services, CORS setup is not necessary because the requests to them are already authenticated.
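For S3, the CORS setup can itself be done through the SDK; here is a sketch with a placeholder bucket and origin.

```javascript
// Sketch: set a CORS policy on an S3 bucket so the browser SDK can talk to it
// directly. Bucket name and allowed origin are placeholders.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });

s3.putBucketCors({
  Bucket: 'my-static-blog-data',
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://example.com'],
      AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000
    }]
  }
}, (err) => {
  if (err) return console.error(err);
  console.log('CORS rules applied.');
});
```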

He stressed the importance of not hardcoding your credentials. The SDK gets around this with Web Identity Federation, which trades access tokens from identity providers like Facebook, Google, or Amazon for temporary AWS credentials. He quickly showed how to set up a Facebook application, create IAM roles, and set permissions for users, then walked through some code for working with Facebook access tokens.
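His exact code isn't reproduced here, but the browser-side flow looks roughly like this, assuming the AWS browser SDK and the Facebook JS SDK are loaded via script tags; the role ARN and bucket are placeholders.

```javascript
// Sketch of Web Identity Federation in the browser: trade a Facebook access
// token for temporary AWS credentials tied to an IAM role (placeholder ARN).
FB.login(response => {                       // Facebook JS SDK, loaded separately
  if (!response.authResponse) return;

  AWS.config.region = 'us-east-1';
  AWS.config.credentials = new AWS.WebIdentityCredentials({
    RoleArn: 'arn:aws:iam::123456789012:role/fb-blog-users',  // placeholder ARN
    ProviderId: 'graph.facebook.com',
    WebIdentityToken: response.authResponse.accessToken
  });

  // Any SDK client created after this point uses the temporary credentials.
  const s3 = new AWS.S3({ params: { Bucket: 'my-static-blog-data' } });
  s3.listObjects({}, (err, data) => {
    if (err) return console.error(err);
    console.log(data.Contents.map(o => o.Key));
  });
});
```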

He finished by talking about the open source community. They love to get feedback, pull requests, issue reports and third-party plugins.