GCP BigTable Flashcards
Is BigTable a low-cost database for production?
No. BigTable requires a minimum of 3 nodes to form a production cluster, and a cluster costs roughly $1,500 per month.
I have 3 BigTable nodes and I have reached the maximum transactions per second; what can I do to increase transactions per second?
BigTable scales by adding nodes, so you would add one or more nodes. The same applies to storage.
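To make this concrete, here is a minimal sketch of adding a node with the google-cloud-bigtable Python client; the project, instance, and cluster IDs are hypothetical.

```python
# Minimal sketch: resize an existing cluster by bumping its node count.
# "my-project", "my-instance" and "my-cluster" are hypothetical IDs.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)   # admin API access
cluster = client.instance("my-instance").cluster("my-cluster")

cluster.reload()            # load current settings, including serve_nodes
cluster.serve_nodes += 1    # add a node for more throughput and storage headroom
cluster.update()            # returns a long-running resize operation; the cluster stays online
```

Scaling back down after a one-off workload is the same call with a lower serve_nodes value.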
Is GCP BigTable a global service?
No. Nodes are deployed in a single zone in a region. You can deploy a replica cluster in a separate zone in the same region.
Is GCP BigTable a noSQL database?
Sort of. It is what we call a wide-column database, a sort of persistent hash table. There is only one index: the row key.
Is GCP BigTable a wide column database?
Yes
What type of DB is BigTable?
It is a wide-column database, a sort of persistent hash table. There is only one index on the row: the key.
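As a sketch of what "one index, the row key" means in practice, the only cheap lookup is a point read (or range scan) on the key; the table and key names below are illustrative.

```python
# Minimal sketch: a point lookup by row key, the only index Bigtable keeps.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("metrics")   # hypothetical table

row = table.read_row(b"user#1234")        # lookup strictly by key
if row is not None:
    for family, columns in row.cells.items():
        for qualifier, cells in columns.items():
            print(family, qualifier.decode(), cells[0].value)
```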
Can I store large amounts of data?
Yes. BigTable is designed for petabyte-scale data; it is not a good fit for data sets below 1 TB.
Is BigTable slow to put data in?
No, BigTable is very fast; you can stream data into the database at high throughput.
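For example, writes can be batched so many mutations go out in one RPC; a rough sketch with hypothetical table and column-family names:

```python
# Minimal sketch: batch many row mutations into a single mutate_rows call.
import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("events")    # hypothetical table

rows = []
for i in range(1000):
    row = table.direct_row(f"event#{i:08d}".encode())
    row.set_cell("cf1", b"payload", b"...", timestamp=datetime.datetime.utcnow())
    rows.append(row)

statuses = table.mutate_rows(rows)                 # one batched RPC
failed = [s for s in statuses if s.code != 0]      # check per-row status codes
```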
Is BigTable a DB that has managed nodes or is it just a service you use?
It is a service with managed nodes; the nodes are managed by Google for you.
Is BigTable restricted to a region or a zone?
A zone, but you can have a replica cluster in a separate zone within the same region.
Can you scale out the node count to thousands of nodes?
Yes, BigTable scales very well.
Can we refer to BigTable as a single key database?
Yes, this is because BigTable has just one index: the row key.
I need to use HBase; is this supported by BigTable?
Yes, 100%; HBase is a supported interface.
I have an existing cluster and I need to process a one-off workload; what are my options?
Increase the number of BigTable nodes and decrease it again after the workload finishes.
Is creating a BigTable cluster fast?
Yes, 1 to 2 seconds
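A rough sketch of provisioning with the Python admin client, using hypothetical IDs; the create call returns a long-running operation you wait on:

```python
# Minimal sketch: create an instance with a single 3-node SSD cluster.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance", instance_type=enums.Instance.Type.PRODUCTION)
cluster = instance.cluster(
    "my-cluster",
    location_id="europe-west1-b",                  # clusters live in one zone
    serve_nodes=3,                                 # production minimum per the card above
    default_storage_type=enums.StorageType.SSD,
)
operation = instance.create(clusters=[cluster])
operation.result(timeout=300)                      # block until provisioning completes
```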
Should I be concerned about the performance of BigTable, given that it is a fully managed service?
No, BigTable will split data and place it onto new nodes as needed.
I have a 50-node BigTable cluster and I am storing data; do I use a sequential key or a random key?
Use a random key; this ensures data is distributed across all nodes in the cluster.
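One common way to get a well-distributed key from an otherwise sequential id is to salt it with a hash-based prefix; a small illustrative sketch:

```python
# Minimal sketch: salt a sequential id with a hash bucket so consecutive
# writes land on different tablets/nodes instead of hot-spotting.
import hashlib

def distributed_key(sequential_id: int, buckets: int = 50) -> bytes:
    bucket = int(hashlib.md5(str(sequential_id).encode()).hexdigest(), 16) % buckets
    return f"{bucket:02d}#{sequential_id}".encode()

# Adjacent ids map to different buckets, spreading load across the cluster.
print(distributed_key(1000001), distributed_key(1000002))
```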
Is data compressed in BigTable?
Yes
I have less than 1 TB of data; is BigTable a good solution?
No, BigTable is designed for large-scale data sets; you should be storing more than 1 TB.
What sorts of data is BigTable good for?
- Time-series data (see the key-design sketch after this list)
- Marketing data
- Graph data
- Internet of Things (IoT) data
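For the time-series case, a typical pattern is to promote the series id into the key and reverse the timestamp so recent rows sort first; the names here are illustrative, not a prescribed schema:

```python
# Minimal sketch: a time-series row key of the form "<device>#<reversed ts>".
import sys
import time

def timeseries_key(device_id: str, event_time: float) -> bytes:
    reversed_ts = sys.maxsize - int(event_time * 1000)   # newest rows sort first
    return f"{device_id}#{reversed_ts}".encode()

key = timeseries_key("sensor-42", time.time())
```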
If the BigTable master sees a spike in a table, what will it do?
It will split the table's data and redistribute it across nodes.
Why do we want to have distributed writes and reads in the BigTable cluster?
Distributed reads and writes ensure scalability.
I have a data set of less than 1 TB; is BigTable a good option?
No. The cost of running BigTable is a minimum of 3 nodes in a zone, and you may also have a replica cluster of 3 nodes in another zone.
Is BigTable multi-region?
No, BigTable clusters are created in a single zone in a region, with the option to create a replica cluster in a separate zone.
I require HBase-compatible storage; what GCP options do I have?
BigTable has a compatible HBase interface.