AWS Database Exam Preparation
We can install a database on an EC2 instance, but then backups, patching, replication, and failover must be handled by us; all admin tasks are manual.
RDS automates all of these admin tasks.
- RDS supports six database engines: MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, and Amazon Aurora.
- Keeps a standby copy of the DB isolated from the primary instance (in a different AZ).
- Supports automatic and manual snapshots.
- Snapshots are stored in S3.
- Multi-AZ – synchronous replication to decrease RPO (recovery point objective) and fast failover to decrease RTO (recovery time objective).
- If Multi-AZ is enabled, the primary DB instance automatically fails over to the standby replica.
- You can also create read replicas within a Region or between Regions.
- Encrypted at rest with AWS Key Management Service (KMS).
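The RDS features above map to parameters of boto3's `create_db_instance` call. A hedged sketch of those kwargs follows; the identifier, instance class, and credentials are made-up example values, and no API call is made here:

```python
# Illustrative kwargs for boto3's RDS create_db_instance, showing where
# Multi-AZ, automatic backups, and at-rest encryption are switched on.
multi_az_db = {
    "DBInstanceIdentifier": "exam-prep-db",   # hypothetical name
    "Engine": "mysql",
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": 20,                   # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",        # placeholder
    "MultiAZ": True,                # synchronous standby in another AZ
    "BackupRetentionPeriod": 7,     # days of automatic snapshots (kept in S3)
    "StorageEncrypted": True,       # KMS encryption, only at creation time
}

# With real credentials you would pass these to the client, e.g.:
# import boto3
# boto3.client("rds").create_db_instance(**multi_az_db)
print(multi_az_db["MultiAZ"], multi_az_db["StorageEncrypted"])
```

Note that `StorageEncrypted` can only be set at creation time, matching the point below that an existing unencrypted database cannot be encrypted in place.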
- Amazon Redshift is AWS's data warehouse service.
- Columnar storage on high-performance disk.
- You can enable database encryption for your clusters
- Redshift uses AWS KMS (or an HSM) for key management, with a four-tier key hierarchy: a master key, a cluster encryption key (CEK), a database encryption key (DEK), and data encryption keys.
- Redshift supports cross-region replication (CRR) of snapshots for clusters.
- Instead of storing data as a series of rows, Amazon Redshift organizes the data by column.
- Redshift Spectrum enables you to run queries against exabytes of data in Amazon S3.
- Dense compute (DC) nodes allow you to create very high-performance data warehouses using fast CPUs, large amounts of RAM, and SSDs.
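The columnar-storage point above can be illustrated with a toy model (this is plain Python, not Redshift itself): an aggregate over one column only touches that column's values, instead of scanning every field of every row.

```python
# Row-oriented layout: one record per entry, all fields together.
rows = [
    {"user": "a", "sales": 100, "region": "us-east-1"},
    {"user": "b", "sales": 250, "region": "eu-west-1"},
    {"user": "c", "sales": 175, "region": "us-east-1"},
]

# Column-oriented layout: one list per column.
columns = {
    "user": ["a", "b", "c"],
    "sales": [100, 250, 175],
    "region": ["us-east-1", "eu-west-1", "us-east-1"],
}

# Row store: the scan must read all three fields of every record.
total_rows = sum(r["sales"] for r in rows)

# Column store: the scan reads only the "sales" column.
total_cols = sum(columns["sales"])

assert total_rows == total_cols == 525
```

This is why "columnar DB + analytics" points at Redshift: analytic queries typically aggregate a few columns over many rows.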
- DynamoDB is a non-relational, schema-less database.
- Autoscaling is available for DynamoDB.
- Throughput: one read capacity unit covers items up to 4 KB per read.
- Use Case- Storing user preferences, session details, logs for further analysis.
- A local secondary index lets you query over a single partition, as specified by the hash key value in the query.
- A global secondary index lets you query over the entire table, across all partitions.
- Applications can connect to the Amazon DynamoDB service endpoint.
- Use primary keys (partition and sort keys) and secondary indexes for performance.
- Amazon DynamoDB Accelerator (DAX) provides a read-through/write-through distributed caching tier in front of the database, supporting the same API as Amazon DynamoDB, but providing sub-millisecond latency for entities that are in the cache.
- Up to 4 KB/sec = 1 strongly consistent read capacity unit, or 2 eventually consistent reads per unit.
- Up to 1 KB/sec = 1 write capacity unit.
- Data plane operations let you do create, read, update, and delete (also called CRUD) actions on data in a table.
- Although all reads from a DynamoDB table are eventually consistent by default, strongly consistent reads can be specified.
- Amazon DynamoDB supports cross-region replication.
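The capacity-unit arithmetic above can be sketched as a helper. This is a simplified model of DynamoDB provisioned throughput (real billing rounds item sizes per request, which this model also does):

```python
import math

def read_capacity_units(item_size_kb: float, reads_per_sec: int,
                        strongly_consistent: bool = True) -> int:
    """RCUs needed: 1 RCU = 1 strongly consistent read/sec of up to 4 KB,
    or 2 eventually consistent reads/sec."""
    units_per_read = math.ceil(item_size_kb / 4)
    total = units_per_read * reads_per_sec
    if not strongly_consistent:
        total = math.ceil(total / 2)   # eventual consistency costs half
    return total

def write_capacity_units(item_size_kb: float, writes_per_sec: int) -> int:
    """WCUs needed: 1 WCU = 1 write/sec of up to 1 KB."""
    return math.ceil(item_size_kb) * writes_per_sec

# 10 strongly consistent reads/sec of 6 KB items -> 2 units/read -> 20 RCUs
print(read_capacity_units(6, 10))          # 20
# Same workload, eventually consistent -> 10 RCUs
print(read_capacity_units(6, 10, False))   # 10
# 5 writes/sec of 1.5 KB items -> 2 units/write -> 10 WCUs
print(write_capacity_units(1.5, 5))        # 10
```

Exam questions on provisioned throughput usually reduce to exactly this ceiling-then-multiply arithmetic.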
- Supports read replicas for MySQL, PostgreSQL, and Aurora.
- Aurora schema changes can be done without downtime.
- If you see NoSQL, select DynamoDB; this works in most cases.
- If you see "fully managed, highly scalable & available", select Aurora most of the time.
- If you see columnar DB, it's Redshift (Aurora is row-based).
- Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
- An existing database cannot be encrypted in place; encryption must be enabled when the RDS DB instance is created.
- You should use a combination of Read Replicas and ElastiCache to help offload the traffic.
- By default, customers are allowed up to a total of 40 Amazon RDS DB instances (max 10 each for Oracle and SQL Server).
- ElastiCache is a better answer for serving repeated requests or when the DB server is underperforming.
- RDS does not support Autoscaling.
- Read replicas are eventually consistent and may lag slightly behind the primary DB.
- If you see columnar db and analytics , use Redshift.
- For Redshift, if you want the LOAD or COPY traffic to go through a VPC, enable Redshift Enhanced VPC Routing.
- Amazon Redshift stores these snapshots internally in Amazon S3 by using an encrypted Secure Sockets Layer (SSL) connection.
- Hot/cold data separation for DynamoDB performance: hot data is accessed frequently, like recent replies in the example forums application; cold data is accessed infrequently or never, like forum replies from several months ago.
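The hot/cold split above can be sketched as follows. This is a minimal illustration, not a DynamoDB API call: items newer than a cutoff stay in the "hot" (high-throughput) table, older items move to a "cold" table with lower provisioned capacity. The 90-day cutoff and the dates are illustrative choices.

```python
from datetime import datetime, timedelta

# Anything older than ~90 days before this reference date counts as cold.
CUTOFF = datetime(2024, 1, 1) - timedelta(days=90)

replies = [
    {"id": 1, "posted": datetime(2023, 12, 20)},  # recent  -> hot
    {"id": 2, "posted": datetime(2023, 6, 1)},    # months old -> cold
    {"id": 3, "posted": datetime(2023, 11, 15)},  # recent  -> hot
]

# Route each item to the table matching its access pattern.
hot = [r for r in replies if r["posted"] >= CUTOFF]
cold = [r for r in replies if r["posted"] < CUTOFF]

print([r["id"] for r in hot], [r["id"] for r in cold])  # [1, 3] [2]
```

Keeping cold items out of the hot table lets you provision (and pay for) high read/write capacity only where it is actually needed.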