Introduction#

  • Admin management tasks
  • Import and export data
  • Python and Go ORM
  • Connect Lambda via RDS Proxy (GitHub)

Admin Management#

First, connect to the database instance

psql -h database-1.c9y4mg20eppz.ap-southeast-1.rds.amazonaws.com -p 5432 -U postgresql -d demo

Check the current connection and user

\conninfo

List users and roles

\du

Query the permissions or privileges of a user

select * from information_schema.role_table_grants where grantee = 'dev';

Create a role that can log in with a password

create role dev login password 'Admin2024';

Then log in as the newly created role

psql -h database-1.c9y4mg20eppz.ap-southeast-1.rds.amazonaws.com -p 5432 -U dev -d demo

Can this role delete an existing database?

DROP DATABASE IF EXISTS demo;

This fails with an error

ERROR: must be owner of database demo
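Only the owner of a database (or a superuser) can drop it. To see who owns each database, you can query pg_database, which mirrors the Owner column of \l

SELECT datname, pg_get_userbyid(datdba) AS owner
FROM pg_database;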

As the admin user, connect back to the demo database

\c demo

Create a table

CREATE TABLE IF NOT EXISTS book (
  id serial PRIMARY KEY,
  author TEXT,
  title TEXT,
  amazon TEXT,
  image TEXT
);

List schemas

\dn+

Describe a table

SELECT
  table_name,
  column_name,
  data_type
FROM
  information_schema.columns
WHERE
  table_name = 'book';

Insert a row, with the id generated automatically by the serial data type

INSERT INTO book (author, title, amazon, image)
values ('Hai Tran', 'Deep Learning', '', 'dog.jpg') RETURNING id;

Insert a row, specifying the id explicitly

INSERT INTO book (id, author, title, amazon, image)
values (1, 'Hai Tran', 'Deep Learning', '', 'dog.jpg') RETURNING id;
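Note that inserting an explicit id does not advance the serial sequence, so a later auto-generated insert can collide with that id. A minimal fix, using the built-in sequence helpers, is to bump the sequence to the current maximum

SELECT setval(pg_get_serial_sequence('book', 'id'), (SELECT max(id) FROM book));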

User and Grant#

Now test what the dev role can do. Connect as the admin user

psql -h database-1.c9y4mg20eppz.ap-southeast-1.rds.amazonaws.com -p 5432 -U postgresql -d demo

Then switch to the dev role inside the session

select current_user, session_user;
set role dev;
select current_user, session_user;
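Here current_user reflects the active role, while session_user stays the role you logged in with. To switch back to the login role

reset role;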

Test whether dev can query the book table in the demo database, owned by the admin user postgresql

select * from book;

This should fail with a permission denied error, since dev has no table privileges yet

ERROR: permission denied for table book

Go back to the admin role and grant permissions to the dev user

set role postgresql;

Grant permissions to the dev role. First, grant database-level privileges (these cover connecting and creating objects, not table access)

GRANT ALL PRIVILEGES ON DATABASE demo to dev;

Second, grant table-level privileges. Any of the following works; note that tables are qualified by schema (public), not by database name

\c demo
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO dev;
GRANT ALL PRIVILEGES ON TABLE public.book TO dev;
GRANT ALL PRIVILEGES ON TABLE book TO dev;
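These grants only apply to tables that already exist. To also cover tables the admin creates later, standard PostgreSQL default privileges can be used

ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO dev;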

Restore and Dump#

Let's dump an existing database to a file

pg_dump -h database-1.c9y4mg20eppz.ap-southeast-1.rds.amazonaws.com -p 5432 -U postgresql -Fc -b -v -f data.sql -d dvdrental;

Download the sample dvdrental data

wget https://www.postgresqltutorial.com/wp-content/uploads/2019/05/dvdrental.zip
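The archive contains dvdrental.tar. Unzip it, and make sure an empty dvdrental database exists before restoring (assuming the same host and admin user as above)

unzip dvdrental.zip
psql -h database-1.c9y4mg20eppz.ap-southeast-1.rds.amazonaws.com -p 5432 -U postgresql -d postgres -c 'CREATE DATABASE dvdrental;'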

Restore from a dump file

pg_restore -v -h <hostname> -U <username> -d <database_name> -j 2 <dumpfilelocation.sql>

For example

pg_restore -v -h database-1.c9y4mg20eppz.ap-southeast-1.rds.amazonaws.com -U postgresql -d dvdrental -j 2 dvdrental.tar
pg_restore -v -h database-1.c9y4mg20eppz.ap-southeast-1.rds.amazonaws.com -U postgresql -d dvdrental dvdrental.tar

Restore to a PostgreSQL server running on the local machine

pg_restore --dbname=dvdrental --verbose dvdrental.tar

Import and Export Data#

  • Install the aws_s3 extension
  • Create an S3 URI
  • Grant permissions
  • Set up a VPC endpoint
  • Export and import with S3

  1. Install the aws_s3 extension

First, connect as admin

psql --host=111122223333.aws-region.rds.amazonaws.com --port=5432 --username=postgres --password

Then install the extension

CREATE EXTENSION aws_s3 CASCADE;

Then verify that the extension is installed

\dx

  2. Create an S3 URI

The \gset at the end stores the query output in the psql variable s3_uri_1, which later statements reference as :'s3_uri_1'

SELECT aws_commons.create_s3_uri(
  'postgresql-bucket-demo',
  'data',
  'ap-southeast-1'
) AS s3_uri_1 \gset

Double-check that the engine version supports S3 export

aws rds describe-db-engine-versions --region ap-southeast-1 --engine postgres --engine-version 15.4 | grep s3Export

The output should contain s3Export.

  3. Grant permissions and set up a VPC endpoint

[!IMPORTANT]

Create an IAM role with permissions to write to S3 and attach it to the database instance from the RDS console. Also create an S3 VPC endpoint so the RDS DB instance can upload the exported data to S3.
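The same attachment can also be done with the AWS CLI. A sketch; the account id and role name below are placeholders

aws rds add-role-to-db-instance \
    --db-instance-identifier database-1 \
    --feature-name s3Export \
    --role-arn arn:aws:iam::111122223333:role/rds-s3-export-role \
    --region ap-southeast-1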

  4. Export data

Let's export data to S3. Three equivalent forms: using the stored URI variable, passing export options, and building the URI inline

SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM book', :'s3_uri_1');
SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM book', :'s3_uri_1', options := 'format text');
SELECT * FROM aws_s3.query_export_to_s3('select * from book', aws_commons.create_s3_uri('postgresql-bucket-demo', 'sample-filepath', 'ap-southeast-1'));
  5. Import data

Drop the table

drop table book;

Recreate the book table

CREATE TABLE IF NOT EXISTS book (
  id serial PRIMARY KEY,
  author TEXT,
  title TEXT,
  amazon TEXT,
  image TEXT
);

Then import the data into the book table

SELECT aws_s3.table_import_from_s3(
  'book',
  'id,author,title,amazon,image',
  '(format text)',
  :'s3_uri_1'
);
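Verify that the rows arrived

SELECT count(*) FROM book;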

AWS PostgreSQL#

List all databases

\l

Some databases already exist by default

postgres
rdsadmin
template0
template1

Create a new database

CREATE DATABASE demo;

Connect to a database and list its tables

\c demo
\dt

Delete a single row, or all rows, in a table

DELETE FROM book WHERE id=1;
DELETE FROM book;

Drop a table

DROP TABLE book;

Lambda Function#

  • Lambda in a VPC
  • Deploy Lambda with dependencies

Let's create a Lambda function that connects to a VPC

import * as path from 'path'
import * as cdk from 'aws-cdk-lib'
import { Construct } from 'constructs'

interface LambdaProps extends cdk.StackProps {
  host: string
  password: string
  user: string
}

export class LambdaDemoStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: LambdaProps) {
    super(scope, id, props)

    const vpc = cdk.aws_ec2.Vpc.fromLookup(this, 'VpcLookup', {
      vpcId: 'vpc-0d08a1e78d2e91cb0',
      vpcName: 'demo'
    })

    const securityGroup = cdk.aws_ec2.SecurityGroup.fromSecurityGroupId(
      this,
      'SecurityGroupforLambda',
      'sg-014fee8db3f201b09'
    )

    const subnet = cdk.aws_ec2.Subnet.fromSubnetId(
      this,
      'SubnetLambda',
      'subnet-0fc863892f278dabe'
    )

    new cdk.aws_lambda.Function(this, 'LambdaDemo', {
      functionName: 'LambdaDemo',
      handler: 'index.handler',
      code: cdk.aws_lambda.Code.fromAsset(path.join(__dirname, './../lambda/')),
      memorySize: 512,
      timeout: cdk.Duration.seconds(15),
      runtime: cdk.aws_lambda.Runtime.PYTHON_3_9,
      vpc: vpc,
      allowPublicSubnet: true,
      vpcSubnets: { subnets: [subnet] },
      securityGroups: [securityGroup],
      environment: {
        // make the vendored dependencies in lambda/package importable
        PYTHONPATH: '/var/task/package',
        HOST: props.host,
        PASSWORD: props.password,
        USER: props.user
      }
    })
  }
}
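The PYTHONPATH above assumes dependencies are vendored into a package folder inside the Lambda asset. One way to do that, assuming you build on Linux (or with wheels compatible with the Lambda runtime)

pip install "psycopg[binary]" --target lambda/package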

Use psycopg to connect to the RDS PostgreSQL instance

# Note: the module name is psycopg, not psycopg3
import os

import psycopg

dbname = "demo"
user = os.environ["USER"]
password = os.environ["PASSWORD"]
host = os.environ["HOST"]
port = "5432"

response = []

# Connect to an existing database; this runs once per Lambda cold start
with psycopg.connect(
    f"dbname={dbname} user={user} port={port} host={host} password={password}"
) as conn:
    # Open a cursor to perform database operations
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM book")
        # Iterate over the cursor; cur.fetchone(), cur.fetchmany() or
        # cur.fetchall() would also work here
        for record in cur:
            response.append(record)
            print(record)


def handler(event, context):
    # Lambda handlers take (event, context), in that order
    print("Hello")
    return response
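Invoke the function to test it (assuming the function name from the stack above)

aws lambda invoke --function-name LambdaDemo response.json
cat response.json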

RDS Instance#

Let's create an RDS database instance with a proxy. First, create a subnet group

const subnetGroup = new cdk.aws_rds.SubnetGroup(this, 'SubnetForRdsDemo', {
  description: 'subnetgroup for rds demo',
  vpc: vpc,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  subnetGroupName: 'SubnetForRdsDemo',
  vpcSubnets: {
    subnetType: cdk.aws_ec2.SubnetType.PUBLIC
  }
})

Second, create a secret for the database credentials and for the proxy

const unsafeSecret = new cdk.aws_secretsmanager.Secret(
  this,
  'UnsafeSecretDemo',
  {
    secretName: 'UnsafeSecretDemo',
    secretObjectValue: {
      username: cdk.SecretValue.unsafePlainText('postgresql'),
      password: cdk.SecretValue.unsafePlainText('Admin2024'),
      engine: cdk.SecretValue.unsafePlainText('postgres')
      // resourceId: cdk.SecretValue.unsafePlainText(""),
      // dbInstanceIdentifier: cdk.SecretValue.unsafePlainText("")
    }
  }
)

And an IAM role for the proxy

const proxyRole = new cdk.aws_iam.Role(this, 'RoleForProxyRdsDemo', {
  roleName: 'RoleForProxyRdsDemo',
  assumedBy: new cdk.aws_iam.ServicePrincipal('rds.amazonaws.com')
})

proxyRole.addToPolicy(
  new cdk.aws_iam.PolicyStatement({
    effect: cdk.aws_iam.Effect.ALLOW,
    actions: ['secretsmanager:GetSecretValue'],
    // the secret generated for the database instance (rds is the
    // DatabaseInstance defined in the full stack below)
    resources: [rds.secret!.secretArn]
  })
)

proxyRole.addToPolicy(
  new cdk.aws_iam.PolicyStatement({
    effect: cdk.aws_iam.Effect.ALLOW,
    actions: ['kms:Decrypt'],
    resources: ['*']
  })
)

Third, create a proxy targeting the database instance

const proxy = new cdk.aws_rds.DatabaseProxy(this, 'RdsProxyDemo', {
  dbProxyName: 'RdsProxyDemo',
  proxyTarget: cdk.aws_rds.ProxyTarget.fromInstance(rds),
  secrets: [rds.secret!],
  role: proxyRole,
  vpc,
  clientPasswordAuthType: cdk.aws_rds.ClientPasswordAuthType.POSTGRES_MD5,
  securityGroups: [securityGroup],
  requireTLS: false
})

Finally, here is the full database instance stack

import * as cdk from 'aws-cdk-lib'
import { Construct } from 'constructs'

export class RdsProxyDemo extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props)

    const vpc = cdk.aws_ec2.Vpc.fromLookup(this, 'VpcLookup', {
      vpcId: 'vpc-0d08a1e78d2e91cb0',
      vpcName: 'demo'
    })

    const securityGroup = cdk.aws_ec2.SecurityGroup.fromSecurityGroupId(
      this,
      'SecurityGroupforLambda',
      'sg-014fee8db3f201b09'
    )

    const subnet = cdk.aws_ec2.Subnet.fromSubnetId(
      this,
      'SubnetLambda',
      'subnet-0fc863892f278dabe'
    )

    const unsafeSecret = new cdk.aws_secretsmanager.Secret(
      this,
      'UnsafeSecretDemo',
      {
        secretName: 'UnsafeSecretDemo',
        secretObjectValue: {
          username: cdk.SecretValue.unsafePlainText('postgresql'),
          password: cdk.SecretValue.unsafePlainText('Admin2024'),
          engine: cdk.SecretValue.unsafePlainText('postgres')
          // resourceId: cdk.SecretValue.unsafePlainText(""),
          // dbInstanceIdentifier: cdk.SecretValue.unsafePlainText("")
        }
      }
    )

    const credentials =
      cdk.aws_rds.Credentials.fromGeneratedSecret('postgresql')

    const subnetGroup = new cdk.aws_rds.SubnetGroup(this, 'SubnetForRdsDemo', {
      description: 'subnetgroup for rds demo',
      vpc: vpc,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      subnetGroupName: 'SubnetForRdsDemo',
      vpcSubnets: {
        subnetType: cdk.aws_ec2.SubnetType.PUBLIC
      }
    })

    const rds = new cdk.aws_rds.DatabaseInstance(this, 'RdsDbInstanceDemo', {
      databaseName: 'RdsDbInstanceDemo',
      engine: cdk.aws_rds.DatabaseInstanceEngine.postgres({
        version: cdk.aws_rds.PostgresEngineVersion.VER_15_4
      }),
      instanceType: cdk.aws_ec2.InstanceType.of(
        cdk.aws_ec2.InstanceClass.T3,
        cdk.aws_ec2.InstanceSize.MICRO
      ),
      // credentials: cdk.aws_rds.Credentials.fromGeneratedSecret("postgresql"),
      // credentials: cdk.aws_rds.Credentials.fromSecret(secret),
      // credentials: {
      //   username: "postgresql",
      //   password: cdk.SecretValue.unsafePlainText("Admin2024")
      // },
      credentials: credentials,
      vpc: vpc,
      subnetGroup: subnetGroup,
      // vpcSubnets: { subnetType: cdk.aws_ec2.SubnetType.PUBLIC },
      securityGroups: [securityGroup],
      port: 5432
    })

    const proxyRole = new cdk.aws_iam.Role(this, 'RoleForProxyRdsDemo', {
      roleName: 'RoleForProxyRdsDemo',
      assumedBy: new cdk.aws_iam.ServicePrincipal('rds.amazonaws.com')
    })

    proxyRole.addToPolicy(
      new cdk.aws_iam.PolicyStatement({
        effect: cdk.aws_iam.Effect.ALLOW,
        actions: ['secretsmanager:GetSecretValue'],
        resources: [rds.secret!.secretArn]
      })
    )

    proxyRole.addToPolicy(
      new cdk.aws_iam.PolicyStatement({
        effect: cdk.aws_iam.Effect.ALLOW,
        actions: ['kms:Decrypt'],
        resources: ['*']
      })
    )

    const proxy = new cdk.aws_rds.DatabaseProxy(this, 'RdsProxyDemo', {
      dbProxyName: 'RdsProxyDemo',
      proxyTarget: cdk.aws_rds.ProxyTarget.fromInstance(rds),
      secrets: [rds.secret!],
      role: proxyRole,
      vpc,
      clientPasswordAuthType: cdk.aws_rds.ClientPasswordAuthType.POSTGRES_MD5,
      securityGroups: [securityGroup],
      requireTLS: false
    })
  }
}
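To find the proxy endpoint used by the psql commands below, one option (a small addition inside the constructor above) is a stack output

new cdk.CfnOutput(this, 'ProxyEndpoint', {
  value: proxy.endpoint
})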

Troubleshooting#

  • Connect via the proxy endpoint to the default database name
psql -h PROXY_ENDPOINT -p 5432 -U postgresql -d postgres
  • Attach an s3Import role in the RDS console when importing data from S3

  • Attach an s3Export role in the RDS console when exporting data to S3