
29 posts tagged with "AWS"


· 6 min read
Alex DeBrie

This is part 2 of a two-part post on DynamoDB Transactions. Check out part 1 in this series for a look at how and when to use DynamoDB Transactions.

In this post, we're going to do some performance testing of DynamoDB Transactions as compared to other DynamoDB API calls. As a reminder from the last post, you can use DynamoDB Transactions to make multiple requests in a single call. Your entire request will succeed or fail together -- if a single write cannot be satisfied, all other writes will be rolled back as well.
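
As a taste of what that comparison looks like, here's a minimal sketch that times a single-item PutItem against a single-item TransactWriteItems call. It assumes a hypothetical TransactionsTest table with a PK partition key; it is not the post's actual benchmark harness.

```python
import time
import uuid

import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "TransactionsTest"  # hypothetical table with a 'PK' string partition key


def time_call(fn, iterations=25):
    """Run fn repeatedly and return the average latency in milliseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations * 1000


def single_put():
    # A plain, non-transactional write.
    dynamodb.put_item(
        TableName=TABLE,
        Item={"PK": {"S": f"ITEM#{uuid.uuid4()}"}},
    )


def transact_put():
    # The same write wrapped in a one-item transaction.
    dynamodb.transact_write_items(
        TransactItems=[
            {"Put": {"TableName": TABLE, "Item": {"PK": {"S": f"ITEM#{uuid.uuid4()}"}}}}
        ]
    )


print(f"PutItem:            {time_call(single_put):.1f} ms")
print(f"TransactWriteItems: {time_call(transact_put):.1f} ms")
```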

· 14 min read
Alex DeBrie

Amazon's DynamoDB was released in 2012 and has been adding a drumbeat of new features ever since. It's hard to believe now, but the original version of DynamoDB didn't have DynamoDB Streams, parallel scans, or even secondary indexes.

One of the more exciting feature releases from DynamoDB in recent years has been the addition of DynamoDB Transactions at re:Invent 2018. With DynamoDB Transactions, you can write or read a batch of items from DynamoDB, and the entire request will succeed or fail together.

This feature release simplified a number of workflows that previously required complex versioning and multiple requests to work correctly across multiple items. In this post, we'll review how and why to use DynamoDB Transactions.
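
To show the shape of such a call, here's a minimal sketch with boto3 -- the table name, key design, and email-uniqueness pattern are illustrative, not from the post. The two writes either both succeed or both roll back.

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

# Illustrative example: create a user and claim their email address
# atomically. If either condition fails, neither write happens.
try:
    dynamodb.transact_write_items(
        TransactItems=[
            {
                "Put": {
                    "TableName": "Users",
                    "Item": {"PK": {"S": "USER#alexdebrie"}},
                    "ConditionExpression": "attribute_not_exists(PK)",
                }
            },
            {
                "Put": {
                    "TableName": "Users",
                    "Item": {"PK": {"S": "EMAIL#alex@example.com"}},
                    "ConditionExpression": "attribute_not_exists(PK)",
                }
            },
        ]
    )
except ClientError as e:
    if e.response["Error"]["Code"] == "TransactionCanceledException":
        print("Transaction canceled; no writes were applied.")
    else:
        raise
```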

· 18 min read
Alex DeBrie

I've become a big proponent of DynamoDB over the past few years. DynamoDB provides many benefits that other databases don't, such as a flexible pricing model, a stateless connection model that works seamlessly with serverless compute, and consistent response time even as your database scales to enormous size.

Yet data modeling with DynamoDB is tricky for those used to the relational databases that have dominated for the past few decades. There are a number of quirks around data modeling with DynamoDB, but the biggest one is the recommendation from AWS to use a single table for all of your records.

In this post, we'll do a deep dive on the concepts behind single-table design.
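
Before diving in, here's a minimal sketch of the idea, with illustrative table and key names: different entity types share one table by encoding their type into generic partition and sort key attributes.

```python
import boto3

# Hypothetical table named 'AppTable' with generic 'PK'/'SK' key attributes.
table = boto3.resource("dynamodb").Table("AppTable")

# A customer item.
table.put_item(Item={
    "PK": "CUSTOMER#alexdebrie",
    "SK": "CUSTOMER#alexdebrie",
    "Username": "alexdebrie",
})

# An order item for that customer, stored in the same table. Sharing the
# partition key lets a single Query fetch a customer and their orders together.
table.put_item(Item={
    "PK": "CUSTOMER#alexdebrie",
    "SK": "ORDER#2020-03-04#1452",
    "Amount": "58.32",
})
```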

· 12 min read
Alex DeBrie

When building an application with AWS Lambda, you may need to host your Lambda function in a VPC. The most common reason for this is because your Lambda function will use other resources which aren't accessible from the public internet, such as a relational database or Redis instance.

Since the improvement of VPC cold starts in 2019, hosting a Lambda function inside a VPC is more feasible even for user-facing workflows. However, by default, your Lambda function in a VPC won't have access to the public internet. This is fine for many use cases, as you may have an HTTP endpoint that uses your database in the VPC and responds to the user without making any public internet calls.

But even a single endpoint like this can be a pain if you're using a service like Amazon CloudWatch Metrics to store metrics about your function's execution. Like other AWS services, CloudWatch Metrics exposes a public API, so publishing metric data from your Lambda function requires access to the public internet.

In this post, we'll see three ways to use AWS services from your Lambda function in a VPC:
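
To make the problem concrete, here's a sketch of one option along these lines (not necessarily how the post enumerates them): creating an interface VPC endpoint for the CloudWatch Metrics API with boto3. All resource IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# An interface VPC endpoint lets functions in private subnets reach the
# CloudWatch Metrics API without public internet access.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.monitoring",  # CloudWatch Metrics
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```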

· 15 min read
Alex DeBrie

Over the past few years, I've helped people design their DynamoDB tables. For many, it's a struggle to unlearn the concepts of a relational database and learn the unique structure of a DynamoDB single table design. Primary keys, secondary indexes, and DynamoDB streams are all new, powerful concepts for people to learn.

Yet there's one feature that's consistently a red herring for new DynamoDB users -- filter expressions. I'm going to shout my advice here so all can hear:
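
For grounding, a filter expression tacks a post-read condition onto a Query or Scan. A minimal sketch with boto3, using a hypothetical Orders table and attributes:

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("Orders")

# The filter runs *after* DynamoDB reads the items, so you consume read
# capacity for everything the key condition matches, not just the items
# that survive the filter -- which is why filters surprise newcomers.
response = table.query(
    KeyConditionExpression=Key("PK").eq("CUSTOMER#alexdebrie"),
    FilterExpression=Attr("Status").eq("SHIPPED"),
)
print(response["Items"])
```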

· 27 min read
Alex DeBrie

Over the past few years, DynamoDB has gotten more and more popular as a database. This is for a few reasons: it fits well with serverless architectures using AWS Lambda or AWS AppSync, and a growing community has formed around DynamoDB data modeling, spurred by talks from the incredible Rick Houlihan.

When I talk to people who are new to DynamoDB, I often hear the same initial grumblings:

· 10 min read
Alex DeBrie

DynamoDB is a solid, well-loved product, but that hasn't stopped the DynamoDB team from innovating. At last year's re:Invent, we saw two huge announcements. DynamoDB Transactions made it easier to handle complex, multi-item operations in a single, all-or-nothing request. DynamoDB On-Demand Pricing let you forget about capacity planning and only pay for what you use.

But as AWS customers, we still want more. It's the reason Jeff Bezos loves us -- we are 'divinely discontent'. In this post, I lay out my two big #awswishlist items for DynamoDB.

· 18 min read
Alex DeBrie

AWS just announced the release of S3 Batch Operations. This is a hotly anticipated release that was originally announced at re:Invent 2018. With S3 Batch, you can run tasks on existing S3 objects. This will make it much easier to run previously difficult tasks like retagging S3 objects, copying objects to another bucket, or processing large numbers of objects in bulk.

In this post, we'll do a deep dive into S3 Batch. You will learn when, why, and how to use S3 Batch. First, we'll do an overview of the key elements involved in an S3 Batch job. Then, we'll walk through an example by doing sentiment analysis on a group of existing objects with AWS Lambda and Amazon Comprehend.
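
As a preview of the moving pieces, here's a sketch of creating a job with boto3 that invokes a Lambda function for each object listed in a CSV manifest. Every ARN, account ID, and ETag below is a placeholder, not a value from the walkthrough.

```python
import boto3

s3control = boto3.client("s3control")

s3control.create_job(
    AccountId="123456789012",
    ConfirmationRequired=False,
    # Invoke a Lambda function once per object in the manifest.
    Operation={
        "LambdaInvoke": {
            "FunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:analyze-sentiment"
        }
    },
    # The manifest is a CSV of bucket/key pairs already in S3.
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::my-manifest-bucket/manifest.csv",
            "ETag": "60e460c9d1046e73f7dde5043ac3ae85",
        },
    },
    # Write a completion report for every task.
    Report={
        "Bucket": "arn:aws:s3:::my-report-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "ReportScope": "AllTasks",
    },
    Priority=10,
    RoleArn="arn:aws:iam::123456789012:role/s3-batch-role",
)
```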

· 20 min read
Alex DeBrie

In a previous post, we looked at how to use CloudFormation Macros to provide a simpler DSL on top of CloudFormation or to set company-wide defaults for particular resources.

However, sometimes you need more than what CloudFormation currently offers. Perhaps CloudFormation doesn't have support for a resource that you need. Or maybe you want to use a third-party resource, like Auth0 or Algolia, in your application.

In this post, we'll learn about CloudFormation custom resources. Custom resources greatly expand what you can do with CloudFormation by letting you run custom logic as part of your CloudFormation deployment.
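
To preview the mechanics, here's a minimal sketch of a Lambda-backed custom resource handler. The provisioning logic is a stand-in; the essential part is the response protocol, where the function reports success or failure by PUTing JSON to the pre-signed ResponseURL that CloudFormation includes in the event.

```python
import json
import urllib.request


def handler(event, context):
    response = {
        "Status": "SUCCESS",
        "PhysicalResourceId": "my-custom-resource",  # a stable ID you choose
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {},
    }
    try:
        if event["RequestType"] == "Create":
            pass  # call out to your third-party API here
        elif event["RequestType"] == "Update":
            pass  # apply changes to the external resource
        elif event["RequestType"] == "Delete":
            pass  # clean up the external resource
    except Exception as e:
        response["Status"] = "FAILED"
        response["Reason"] = str(e)

    # Tell CloudFormation the result; without this, the stack hangs
    # until the resource times out.
    body = json.dumps(response).encode("utf-8")
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    req.add_header("Content-Type", "")
    urllib.request.urlopen(req)
```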