ACG Notes Flashcards
3 V’s of Big Data
- ) Volume – Scale of data being handled by systems (can it be handled by a single server?)
- ) Velocity – the speed at which data is being processed
- ) Variety – The diversity of data sources, formats, and quality
What is a Data Warehouse?
- ) Data Warehouse
a. Structured and/or processed
b. Ready to use
c. Rigid structures – hard to change, may not be the most up to date either
What is a data lake?
- ) Data Lake
a. Raw and/or unstructured
b. Ready to analyze – more up to date but requires more advanced tools to query
c. Flexible – no structure is enforced
4 Stages of a Data Pipeline
- ) Ingestion
- ) Storage
- ) Processing
a. ETL – Data is taken from a source, manipulated to fit the destination
b. ELT – data is loaded into a data lake and transformations can take place later
c. Common transformations: Formatting / Labeling / Filtering / Validating
- ) Visualization
Cloud Storage
o Unstructured object storage
o Regional, dual-region, or multi-region
o Standard, Nearline, Coldline, or Archive storage classes
o Storage event triggers (Pub/Sub)
o Usually the first step in a cloud data pipeline
Cloud Bigtable
o Petabyte-scale NoSQL database
o High-throughput and scalability
o Wide column key/value data
o Time-series, transactional, IoT data
Cloud BigQuery
o Petabyte-scale analytics DW
o Fast SQL queries across large datasets
o Foundations for BI and AI
o Useful public datasets
Cloud Spanner
o Global SQL-based relational database
o Horizontal scalability and HA
o Strong consistency
o Not cheap to run, usually used in financial transactions
Cloud SQL
o Managed MySQL, PostgreSQL, and SQL Server instances
o Built-in backups, replicas, and failover
o Does not scale horizontally, but does scale vertically
Cloud Firestore
o Fully managed NoSQL document database
o Large collections of small JSON documents
o Realtime database with mobile SDKs
o Strong consistency
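A minimal sketch of writing and reading a document with the Python client (google-cloud-firestore); the collection, document, and field names are made up for illustration:

```python
# pip install google-cloud-firestore
from google.cloud import firestore

db = firestore.Client()  # uses Application Default Credentials

# Write a small JSON-like document into a collection
doc_ref = db.collection("users").document("alice")
doc_ref.set({"name": "Alice", "cart_items": 2})

# Read it back (strongly consistent)
snapshot = doc_ref.get()
print(snapshot.to_dict())
```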
Cloud Memorystore
o Managed Redis instances
o In-memory DB, cache, or message broker
o Built-in HA
o Vertically scalable by increasing the amount of RAM
Cloud Storage (GCS) at a high level
o Fully managed object storage
o For unstructured data: Images, videos, etc.
o Access via API or programmatic SDK
o Multiple storage classes
o Instant access in all classes, also has lifecycle management
o Secure and durable (HA and maximum durability)
GCS Concepts, what is GCS, where can buckets be?
o A bucket is a logical container for an object
o Buckets exist within projects (named within a global namespace)
o Buckets can be:
o Regional $
o Dual-regional $$ (HA)
o Multi-regional $$$ (data replicated across multiple regions in a large geographic area; lowest latency for widely distributed users)
4 GCS Storage Classes
o Standard
o Nearline
o Coldline
o Archive
Describe standard GCS Storage Class
$0.02 per GB
99.99% regional availability
>99.99% availability in multi and dual-regions
Describe Nearline GCS Storage Class
30 day minimum storage
$0.01 per GB up / down
99.9% regional availability
99.95% availability in multi and dual regions
Describe Coldline GCS Storage Class
90 days minimum storage
$0.004 per GB stored
$0.02 per GB up/down
99.9% regional availability
99.95% availability in multi and dual regions
Describe Archive GCS Storage Class
365 days minimum storage
$0.0012 per GB stored
$0.05 per GB up/down
99.9% regional availability
99.95% availability in multi and dual regions
Objects in cloud storage (encryption, changes)
o Encrypted in flight and at rest
o Objects are immutable – to change one you must overwrite it (an atomic operation)
o Objects can be versioned
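A sketch of enabling versioning plus lifecycle rules with the Python client (google-cloud-storage); the bucket name and rule ages are examples only:

```python
# pip install google-cloud-storage
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-example-bucket")  # hypothetical bucket

# Turn on object versioning so overwrites keep prior generations
bucket.versioning_enabled = True

# Lifecycle management: move objects to Coldline after 90 days, delete after 365
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_delete_rule(age=365)

bucket.patch()  # apply the changes to the bucket
```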
name the 5 “advanced” features of GCS
o Parallel uploads of a single object
o Integrity checking – pre-calculate an md5 hash, compared to the one Google calculates
o Transcoding for compression
o Requester can pay, if desired
o Pub/Sub notifications
New-file notifications commonly trigger a data pipeline
What is Cloud Transfer Service?
o Transfers data from a source to a sink (a GCS bucket); supported sources: Amazon S3, HTTP/HTTPS endpoints, other Cloud Storage buckets
o Transfers can be filtered based on names/dates
Schedule it for one run or periodically (can delete in source or destination after transfer is confirmed)
What is BigQuery Data Transfer Service?
o Automates data transfer to BigQuery
o Data is loaded on a regular basis
o Backfill can recover from gaps or outages
o Supported sources: Cloud Storage, Merchant Center, Google Play, S3, Teradata, Redshift
What is a transfer appliance?
physical rack storage device, 100 TB and 480 TB versions
What are the top 3 features of Cloud SQL?
o Managed SQL instances (creation, replication, backups, patches, updates)
o Multiple DB engines (MySQL, PostgreSQL, SQL Server)
o Scalability – vertically to 64 cores and 416 GB of RAM, HA options are available
Describe regional configuration of Cloud Spanner
o Regional Replication:
3 read-write replicas
Every mutation requires a write quorum
This is different from traditional HA in that it’s a read AND a write replica in each zone.
Regional Cloud Spanner best practices
o Design a performant schema
o Spread reads/writes around the database, avoid write hot spots
o Co-locate compute workloads in the same region
o Provision nodes to keep average CPU utilization under 65%
Multi-regional Cloud Spanner benefits
Five 9s SLA – 99.999%
Reduce latency with distributed data
External consistency
• Concurrency control for transactions; guarantees transactions appear to execute sequentially, even across the globe
Multi-regional Cloud Spanner best practices:
Design a performant schema to avoid hotspots
Co-locate write-heavy compute workloads in the same region as the leader
Spread critical workloads across two regions
Provision nodes to keep average CPU under 45%
Cloud Spanner data model
• Data model:
o Relational database tables
o Strongly typed (you must conform to a strict schema)
o Parent-child relationships are declared with primary keys and create interleaved tables
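A hedged sketch of declaring a parent-child (interleaved) table with the Python client (google-cloud-spanner); the instance, database, table, and column names are hypothetical:

```python
# pip install google-cloud-spanner
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-db")

# Parent-child relationship declared via primary keys + INTERLEAVE IN PARENT
operation = database.update_ddl([
    """CREATE TABLE Customers (
         CustomerId INT64 NOT NULL,
         Name STRING(MAX)
       ) PRIMARY KEY (CustomerId)""",
    """CREATE TABLE Orders (
         CustomerId INT64 NOT NULL,
         OrderId INT64 NOT NULL,
         Total FLOAT64
       ) PRIMARY KEY (CustomerId, OrderId),
       INTERLEAVE IN PARENT Customers ON DELETE CASCADE""",
])
operation.result()  # wait for the schema change to complete
```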
Cloud Spanner transactions
• Transactions:
o Locking read-write
o Read-only
o Partitioned DML
o Regular transactions using ANSI SQL best practices
Top 2 features of Cloud MemoryStore
o Fully managed Redis instance
o 2 tiers:
Basic tier – make sure your app can withstand full data flush
Standard tier – adds cross zone replication and automatic failover
Benefits of managed Redis (MemoryStore)
o No need to provision VMs
o Scale instances with minimal impact
o Private IPs and IAM
o Automatic replication and failover
Cloud MemoryStore use cases
o Session cache – store logins or shopping carts
o Message queue – loosely couple micro services
o Pub/sub – Redis supports publish/subscribe patterns, but also consider Cloud Pub/Sub
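Memorystore is wire-compatible with open-source Redis, so a session cache or simple queue is a plain redis-py call; the host IP, key names, and values below are placeholders:

```python
# pip install redis
import redis

# Connect to the Memorystore instance's private IP (placeholder address)
r = redis.Redis(host="10.0.0.3", port=6379)

# Session cache: store a shopping cart that expires after one hour
r.setex("session:user42", 3600, "cart=widget,qty=2")
print(r.get("session:user42"))

# Simple message queue: loosely couple two services with a list
r.lpush("orders", "order-1001")
print(r.rpop("orders"))
```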
Storage options: Low latency vs. Warehouse
Low latency (use Cloud Bigtable)
- petabyte scale
- single-key rows
- Time series or IoT data
Warehouse (use BigQuery)
- Petabyte scale
- Analytics warehouse
- SQL queries
Storage options: Horizontal vs. Vertical scaling
Horizontal scaling (Cloud Spanner)
- ANSI SQL
- Global replication
- High Availability and consistency
Vertical Scaling (use Cloud SQL)
- MySQL or PostgreSQL
- Managed service
- High availability
Storage options: NoSQL vs Key/Value
NoSQL
- Fully managed document database
- Strong consistency
- Mobile SDKs and offline data
Key/Value
- Managed Redis instances
- Does what Redis does
What is MapReduce
A distributed implementation of the map and reduce programming model; provides a common interface to program these operations while abstracting away all of the systems management
4 Core modules of Hadoop & HDFS
Hadoop Common – base files
Hadoop Distributed File System (HDFS) – distributed fault tolerant file system
Hadoop YARN – resource management / job scheduling
Hadoop MapReduce – Hadoop's own implementation of the MapReduce programming model
Apache Pig
language for analyzing large datasets, essentially an abstraction for MapReduce
o High level framework for running MapReduce jobs on Hadoop clusters
Apache Spark
general purpose cluster-computing framework
o Has largely replaced Hadoop MapReduce; can run on top of Hadoop (YARN/HDFS) and is much faster
Hadoop stores data in blocks on disk before, during, and after computation
Spark stores data in memory, enabling parallel operations on that data
Hadoop vs. Spark
Hadoop
- Slow disk storage
- High latency
- Used for: slow, reliable batch processing
Spark
- Fast memory storage
- Low latency
- Stream processing
- 100x faster in-memory
- 10x faster on disk
Apache Kafka
distributed streaming platform, designed for high-throughput and low-latency pub/sub stream of records
o Handles >800 billion messages per day at LinkedIn
Kafka vs. Pub/Sub
Kafka
- Guaranteed message ordering
- Tuneable message retention
- Polling (Pull) subscriptions only
- Unmanaged
Pub/Sub
- No message ordering guarantee
- 7 day maximum message retention
- Pull or Push subscriptions
- Managed
Top 2 Benefits of Pub/Sub
o Global messaging and event ingestion
o Serverless and fully managed, processes up to 500 million messages per second
Top 4 features of Pub/Sub
o Multiple pub/sub patterns, one to many, many to one, and many to many
o At least once delivery is guaranteed
o Can process messages in real-time or batch with exponential backoff
o Integrates with Cloud Dataflow
Pub/Sub use cases
o Distributing workloads
o Asynchronous workflows – order processing (ordering, packaging, shipping)
o Distributing event notifications
o Distributed logging
o Device data streaming
2 types of delivery method for pub/sub subscriptions:
o Pull (default) – ad-hoc requests; messages must be acknowledged or they remain at the top of the queue and block the next message
o Push – sends new messages to an endpoint; must be HTTPS with a valid certificate
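A minimal pull-subscriber sketch with the Python client (google-cloud-pubsub); the project and subscription IDs are placeholders. Note the explicit ack, without which the message is redelivered:

```python
# pip install google-cloud-pubsub
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

project_id = "my-project"    # placeholder
subscription_id = "my-sub"   # placeholder

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(project_id, subscription_id)

def callback(message):
    print("Received:", message.data)
    message.ack()  # unacknowledged messages are redelivered

# Streaming pull: runs until cancelled or the timeout expires
future = subscriber.subscribe(sub_path, callback=callback)
try:
    future.result(timeout=30)
except TimeoutError:
    future.cancel()
```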
Pub/Sub integration facts
o Fully supported by Cloud Dataflow
o Client libraries for popular languages (python)
o Cloud Functions can be triggered by events
o Cloud Run to be the receiver of a push sub
o IoT Core
Pub/Sub delivery model
o You may receive a message more than once in a single subscription
o Message Retention Duration (default 7 days) – undelivered messages are deleted
Pub/Sub lifecycle – when does a sub expire?
If there are no pulls and no pushes, subscriptions expire after 31 days
Standard pub/sub model limitations
o Acknowledged messages are no longer available to subscribers
o Every message must be processed by a subscription
Pub/Sub: Seek
o Seek – you can rewind the clock and retrieve old messages up to the retention window; you can also seek to a point in the future
Useful in case of an outage most commonly
Pub/Sub: Snapshot
o Snapshot – save the current state of the queue, this enables replay
Useful when deploying new code you're unsure of: take a snapshot before moving forward, then seek back to the snapshot to replay if needed
Pub/Sub: Ordering messages
o Ordering messages – Use timestamps when final order matters, order still isn’t guaranteed but you do have a record of that time.
If you absolutely must guarantee order, consider an alternative system
Pub/Sub: Access Control
o Use service accounts for authorization, granting per-topic or per-subscription permissions
Grant limited access to publish or consume messages
Define: Cloud Dataflow
• Cloud Dataflow – fully managed, serverless ETL tool, using Apache Beam.
o Supports:
SQL, Java, and Python
Real-time and batch processing
Define: Pipeline Lifecycle
• You can run pipelines on your local machine; this is the preferred way to fix bugs
• Pipeline design considerations:
o Location of data
o Input data structure and format
o Transformation objectives
o Output data structure and location
Dataflow: ParDo
• ParDo – defines the distributed operation/transformation to be performed on the PCollection of data. These can be user defined functions or pre-defined.
Dataflow: PCollections (Characteristics)
o Data types – may be of any data type but must all be of the same type. The SDK includes built-in encoding
o Access – individual access to elements is not supported, transforms are performed on all
o Immutable – cannot be changed once created
o Boundedness – a PCollection may be bounded or unbounded; there is no limit to the number of elements it can contain
o Timestamp – associated with every element of the collection, assigned by the source on creation
Dataflow: Core Beam transforms (6)
o ParDo – generic parallel processing transforms
o GroupByKey – processing collections of KVP’s
o CoGroupByKey – used when combining multiple key collections, performs relational join
o Combine – requires a function that provides the combining logic; multiple pre-built functions are available (sum, min, max…)
o Flatten – Multiple collections become one
o Partition – how the elements of the PCollection are split up
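A small local Apache Beam pipeline (Python SDK) illustrating ParDo with a user-defined DoFn plus CombinePerKey; the input strings are made up and it runs on the default DirectRunner:

```python
# pip install apache-beam
import apache_beam as beam

class ExtractWords(beam.DoFn):
    """User-defined function invoked by ParDo; emits one (word, 1) pair per word."""
    def process(self, element):
        for word in element.split():
            yield (word, 1)

with beam.Pipeline() as p:  # DirectRunner by default
    (
        p
        | "Create" >> beam.Create(["the quick brown fox", "the lazy dog"])
        | "ParDo" >> beam.ParDo(ExtractWords())
        | "Combine" >> beam.CombinePerKey(sum)  # pre-built combine logic
        | "Print" >> beam.Map(print)
    )
```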
Dataflow Security Mechanisms
o Only users with permission can submit pipelines
o Any temp data during execution is encrypted
o Any communication between workers happens on a private network
o Access to telemetry or metrics is controlled by project permissions
Describe GCP Service Account usage
Cloud Dataflow service uses the Dataflow Service Account
Account is automatically created on flow creation
Manipulates job resources on your behalf
Assumes the “Cloud Dataflow service agent role”
Read/write access to project resources (recommended not to change this)
Worker instances will use the Controller service account
Used for metadata operations (ex: determine size of file on storage)
You can also use a user-managed controller service account, enabling fine-grained access control
Dataflow: Regional Endpoints
• Regional Endpoints – specifying a regional endpoint means all the worker instances will persist in that region, this is best for:
o Security and compliance
o Data locality
o Resiliency
What use case does Dataflow address
o Used for migrating MapReduce jobs to Cloud Dataflow
Define: Cloud Dataflow SQL
o Develop and run Cloud Dataflow jobs from the BigQuery web UI
What service does Cloud Dataflow integrate with and what are the benefits?
o Integrates with Apache Beam SQL
Apache Beam SQL:
• Can query bounded and unbounded PCollections
• Query is converted to a SQL transformation
Cloud Dataflow SQL benefits:
• Join streams with BigQuery tables
• Query streams or static datasets
• Write output to BigQuery for analysis and visualization
What type of client is Dataflow for? (on prem to cloud migration)
• Go with the flow – ideal solution for customers using Apache Beam
o Batch == Dataproc & spark
o Streaming == Beam and Dataflow
Pipelines and PCollections
o The pipeline represents the complete set of stages required to read, transform, and write data using the Apache Beam SDK
o PCollection – represents a multi-element dataset that is processed by the pipeline
ParDo and DoFn
o ParDo – Core parallel processing function of Apache beam which can transform elements of an input PCollection into an output PCollection, can invoke UDF’s
o DoFn – template you use to create UDF that are referenced by a ParDo
Dataflow windowing
allows streaming data to be grouped into finite collections according to time or session-based windows
Useful when you need to impose ordering or constraints on pub/sub data
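A sketch of fixed windowing on a bounded PCollection (Python SDK); the sensor readings, event times, and 60-second window size are arbitrary examples:

```python
import apache_beam as beam
from apache_beam.transforms import window

# (sensor_id, reading, event_time_seconds) – fabricated sample data
events = [("sensor-1", 21.0, 5), ("sensor-1", 22.5, 65), ("sensor-2", 19.8, 30)]

with beam.Pipeline() as p:
    (
        p
        | beam.Create(events)
        # attach event-time timestamps so windowing has something to group on
        | beam.Map(lambda e: window.TimestampedValue((e[0], e[1]), e[2]))
        | beam.WindowInto(window.FixedWindows(60))  # 60-second fixed windows
        | beam.CombinePerKey(max)                   # max reading per sensor per window
        | beam.Map(print)
    )
```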
Dataflow Watermarking
o Watermark – indicates when a Dataflow expects all data in a window to have arrived
Data that arrives with a timestamp that’s inside the window but past the watermark is considered late, policies can decide what happens to this.
Dataflow vs. Cloud Composer
o Dataflow is normally the preferred option for data pipelines
o Composer may sometimes be used for ad-hoc orchestration or to provide manual control of Dataflow pipelines themselves
Dataflow Triggers
• Triggers – determine when to emit aggregated results as data arrives.
o For bounded data, results are emitted after all the input has been processed.
o For unbounded data, results are emitted when the watermark passes the end of the window, indicating that the system believes all input data for that window has been processed.
Define: BigQuery
Petabyte-scale, serverless, highly scalable cloud enterprise data warehouse
BigQuery Key Features
o Highly available – automatic data replication
o Supports standard SQL – ANSI-compliant
o Supports Federated Data – connects to several external sources
o Automatic Backups – automatically replicates data and keeps a 7-day history of changes
o Support for Governance and Security – fine-grained IAM
o Separation of storage and compute – ACID compliant, stateless compute
BigQuery Data Management Architecture
Project
Dataset – container for tables/views, think of this like a database
• Native table – standard table, where data is held in BQ storage
• External tables – backed by storage outside of BQ
• Views or virtual tables – created by a SQL Query
BigQuery data ingestion (2 types of sources)
Real-time events
Generally streamed in via Pub/Sub, then processed with Cloud Dataflow and pushed to BigQuery
Batch sources
Push files to Cloud Storage, then have Cloud Dataflow pick up the data and load it into BigQuery
BigQuery: Job
action that is run in BigQuery on your behalf asynchronously
o 4 types: Load / Export / Query / Copy
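A Query job sketch with the Python client (google-cloud-bigquery) against a real public dataset; queries are billed per byte scanned:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

# client.query() submits an asynchronous Query job; result() waits for it
for row in client.query(sql).result():
    print(row.name, row.total)
```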
BigQuery supported import formats
• Importing data – supported formats: CSV, JSON, Avro, Parquet, ORC, Datastore/Firestore exports
BigQuery: Views
o Control access to data
o Reduce query complexity
o Can be used to construct logical tables
o Enables authorized views – users can have access to different subsets of rows
BigQuery: Limitations of views
Cannot export data from a view
Cannot use JSON API to retrieve data from a view
No UDF’s
Limited to 1k authorized views per dataset
BigQuery: Supported external data sources
Supports BigTable, Cloud Storage, and Google Drive
o Use cases:
Load and clean data in one pass
Small, frequently changing data joined with other tables
BigQuery External data sources; limitations
Limitations:
No guarantee of consistency
Lower query performance
Cannot run export jobs on external data
Cannot query Parquet or ORC formats
Results are not cached
Limited to 4 concurrent queries
BigQuery: 2 methods of partitioning
Ingestion time partition tables
Partitioned tables
BigQuery: Ingestion time partition tables
- Partitioned by load or arrival date
- Data automatically loaded into date-based partitions (daily)
- Tables include the pseudo-column _PARTITIONTIME
- Use _PARTITIONTIME in queries (in the WHERE clause) to limit the partitions scanned
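Example of limiting scanned partitions on an ingestion-time partitioned table; the project, dataset, table, and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Filtering on _PARTITIONTIME in the WHERE clause prunes partitions,
# so only one week of data is read (and billed).
sql = """
    SELECT event_id, payload
    FROM `my-project.my_dataset.events`
    WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2024-01-01') AND TIMESTAMP('2024-01-07')
"""
rows = client.query(sql).result()
```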
BigQuery: Partitioned tables
• Partitioning is based on specific TIMESTAMP or DATE column
• Data partitioned based on value supplied in partitioning column
• 2 additional partitions:
o __NULL__
o __UNPARTITIONED__
• Use partitioning column in queries
BigQuery: Why do we care about partitioned tables?
Improve query performance, less data is read/processed
Cost control, you pay for all the data processed by a query
BigQuery: Clustering
o Like creating an index on the table
o Supported for both types of partitioning, unsupported for non-partitioned tables
o Create a cluster key on frequently accessed columns (order is important, it’s the exact order you’ll be accessing the data)
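A DDL sketch (run through the Python client) creating a date-partitioned table clustered on frequently filtered columns; all project, dataset, table, and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
    CREATE TABLE `my-project.my_dataset.orders`
    (
      order_id STRING,
      customer_id STRING,
      order_ts TIMESTAMP,
      total NUMERIC
    )
    PARTITION BY DATE(order_ts)
    CLUSTER BY customer_id, order_id   -- order matters: filter in this order
"""
client.query(ddl).result()
```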
BigQuery: Clustering Limitations
Specify clustering columns only on table create, cannot be changed
Clustering columns must be top-level, non-repeated columns
Only supported for partitioned tables
Can specify one to four clustering columns
BigQuery: Querying clustered tables
Filter clustered columns in the order they were specified:
Avoid using clustered columns in complex filter expressions
Avoid comparing cluster columns to other columns
BigQuery: Slots
Slots – unit of computational capacity required to execute SQL queries
o Play a role in pricing and resource allocation
o Determined by:
Query size
Query complexity (amount of info shuffled)
o Automatically managed
BigQuery: 3 main topics of best practices
Controlling Costs
Query Performance
Optimizing Storage
BigQuery: How to control costs
Avoid SELECT * (columns are stored separately)
Use preview options to sample data
Price queries before executing them (price is per byte processed); use the pricing calculator or a dry run (see the sketch after this list)
Using LIMIT does not affect costs, the full columns are read and processed
View costs using a dashboard and query audit logs
Partition by date in your queries
Use streaming inserts with caution – they are costly; prefer bulk loads
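A dry-run sketch that prices a query before executing it (reports bytes processed, no data is read, no cost); the query hits a real public dataset:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013`",
    job_config=job_config,
)
# No data is scanned; BigQuery only reports what the query *would* process
print(f"This query would process {job.total_bytes_processed} bytes")
```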
BigQuery: Query Performance: Input Data and Data Sources
• Prune partitioned queries (don’t query partitions you don’t need)
• De-normalize data whenever possible
• Use external data sources appropriately
• Avoid wildcard tables
o A wildcard table represents a union of all the tables that match the wildcard expression.
BigQuery: Query Performance: Query computation
- Avoid repeatedly transforming data via SQL queries
- Avoid JavaScript UDFs
- Order query operations to maximize performance
- Optimize JOIN patterns
BigQuery: Query Performance: SQL anti-patterns
- Avoid self-joins
- Avoid data skew
- Avoid unbalanced joins
- Avoid joins that generate more outputs than inputs
- Avoid DML statements that update or insert single rows
BigQuery: Query Performance: Optimizing Storage
• Use expiration settings (tables auto deleted after expiration)
• Take advantage of long-term storage
o Lower monthly charges apply for data stored in tables or in partitions that have not been modified in 90 days
BigQuery: 3 Types of Roles relating to BQ
o Primitive – defined at the project level
3 types: Owner, Editor, Viewer
o Predefined – Granular access defined at the service level, GCP managed.
Recommended to use this over the primitive roles
o Custom – user managed
Cloud Data Loss Prevention (Cloud DLP) API
Fully managed service
Identify and protect sensitive data at scale
De-identifies data using masking, tokenization, date shifting, and more
Stackdriver
• Stackdriver – GCP's monitoring and logging suite (comparable to AWS CloudWatch); you can build dashboards
o Supports metrics from various services including BigQuery
4 BigQuery ML model types
o Linear regression
o Binary logistic regression
o Multi-class logistic regression
o K-means clustering (most recent addition)
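These map to BigQuery ML model types trained with plain SQL. A hedged CREATE MODEL sketch; the dataset, table, and column names are made up:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Binary logistic regression trained entirely inside BigQuery
sql = """
    CREATE OR REPLACE MODEL `my_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT churned, tenure_months, monthly_spend
    FROM `my_dataset.customer_features`
"""
client.query(sql).result()
```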
Dataproc Benefits
• Dataproc – managed cluster service for Spark and Hadoop
o Benefits:
Cluster actions complete in 90 seconds
Pay-per-second, minimum 1 minute
Scale up/down or turn off at will
Using Dataproc
Submit Hadoop/Spark Jobs
(optionally) enable auto scaling to cope with the load
Output to GCP Services (GCS, BigQuery, BigTable)
Monitor with stackdriver – fully integrated logs
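A sketch of submitting a PySpark job to an existing cluster with the Python client (google-cloud-dataproc); the project, region, cluster name, and GCS script path are placeholders:

```python
# pip install google-cloud-dataproc
from google.cloud import dataproc_v1

region = "us-central1"  # placeholder
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": "my-cluster"},                        # placeholder
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/wordcount.py"},  # placeholder
}

# Submit the job and block until it finishes
operation = client.submit_job_as_operation(
    request={"project_id": "my-project", "region": region, "job": job}
)
result = operation.result()
print(result.status.state)
```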
Dataproc Cluster Types
o Single node cluster – limited to the capacity of a single VM and cannot auto scale
o Standard cluster: Type and size can be customized
o High Availability Cluster
When should you not use autoscaling with Dataproc?
High availability clusters
Not permitted on single node clusters
HDFS – make sure you have enough primary workers to store all data so nothing is lost when the cluster scales down
Spark structured streaming
Idle clusters – just delete the idle cluster and make a new one for a new job
Define: Dataproc Cloud Storage Connector
• Cloud Storage Connector – run dataproc jobs on GCS instead of HDFS
o Cheaper than persistent disk and you get all the GCS benefits
o Decouple storage from cluster, decommission cluster when finished, no data loss
What on-prem technologies can be replaced by Dataproc?
great choice for migrating Hadoop and Spark into GCP
What are the benefits of Dataproc?
Ease of scaling, use GCS instead of HDFS, and the connectors to other GCP services (BigQuery / BigTable)
Define: Hadoop
It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
Define: Spark
Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
Define: Zookeeper
ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
Define: Hive
Apache Hive is a data warehouse software project built on top of Apache Hadoop for providing data query and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop.
Define: Tez
Tez is an extensible framework for building high performance batch and interactive data processing applications, coordinated by YARN in Apache Hadoop. Tez improves the MapReduce paradigm by dramatically improving its speed, while maintaining MapReduce’s ability to scale to petabytes of data.
Define: MapReduce
A MapReduce program is composed of a map procedure, which performs filtering and sorting, and a reduce method, which performs a summary operation.
Define: BigTable
• BigTable – managed wide-column NoSQL database (kvp), designed for high throughput with low latency (10,000 reads/sec, 6 ms response)
o Scalable and HA
Originally developed internally for web indexing
HBase is an open-source implementation of the BigTable design; its creator was acquired by Microsoft and the project was adopted by Apache
o BigTable supports the HBase library for java
Where are BigTable “tables” stored?
• Cloud BigTable Tables: (only index you get is on the row key)
o Blocks of contiguous rows are sharded into tablets
Tablets are stored in Google Colossus – all splitting, merging, and rebalancing happens automatically
What are the typical use cases of BigTable?
(large amount of small data)
o Marketing & Financial (stock prices, currency exchange rates)
o Time Series & IoT
What are the alternatives to BigTable?
o SQL support, OLTP – Cloud SQL
o OLAP – BigQuery
o NoSQL documents – Firestore
o In-memory KVP – Memorystore
o Realtime DB – Firebase
How many clusters can be ran per BigTable instance and where do they exist?
o Instances can run up to 4 clusters
o Clusters exist in a single zone
Production allows up to:
• 30 nodes per project
• 1000 tables per instance
BigTable individual cell/row data limit
Individual cells should be no larger than 10 MB (including history) and no row should be larger than 100 MB
BigTable garbage collection policies
o Expiry policies define garbage collection:
Expire based on age
Expire based on number of versions
BigTable Query Planning
think about what sort of questions you may ask about the database. Scans are the most expensive operation (take the longest).
What is field promotion in BigTable?
move data that may normally be in a column and combine it with the row key for querying
Never put a timestamp at the start of a row key – this makes it impossible to balance the cluster
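A sketch of field promotion with the Python client (google-cloud-bigtable): the sensor ID is promoted into the row key ahead of the timestamp (never timestamp-first). The instance, table, column family, and values are placeholders:

```python
# pip install google-cloud-bigtable
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("sensor-events")

# Field promotion: sensor id first, timestamp second, so writes spread
# across sensors instead of hotspotting on the latest timestamp.
row_key = "sensor-42#2024-06-01T12:00:00Z".encode()

row = table.direct_row(row_key)
row.set_cell("metrics", b"temp_c", b"21.5")  # column family "metrics"
row.commit()
```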
What BigTable row keys should be avoided?
Domain names
Sequential numbers (Really bad idea, writes will always be on the end)
Frequently updated identifiers (repeatedly updating the same row is not as performant as new rows)
Hashed values – remove the natural meaning of the key without reliably giving an even distribution
How to design BigTable for performance
Store related entities in adjacent rows, and balance reads/writes
Balanced access patterns enable linear scaling of performance
BigTable: How to store time series data
Use tall and narrow tables
• Use new rows instead of versioned cells
• Logically separate event data into different tables where possible
• Don’t reinvent the wheel – multiple proven time-series schemas are already available
BigTable: How to avoid hotspots
Consider field promotion to the key
Salting
Use the key visualizer to find hotspots
BigTable data model replication
o Eventually consistent data model
o Used for:
Availability and failover
Application isolation
Global presence
BigTable Autoscaling, how-to and what to expect
o Stackdriver metrics can be used for programmatic scaling – not a built-in feature
Rebalancing tablets takes time; performance may not improve for 20 minutes
Adding nodes does not solve a bad schema/hot node
Performance in BigTable good/bad
Good
- optimized schema and row key design
- large datasets
- correct row and column sizing
Bad
- Datasets smaller than 300GB
- short-lived data
When do you choose BigTable?
If migrating from an on-prem environment, look for HBase workloads; also consider BigTable over BigQuery when the nature of the data demands it – time-series or latency-sensitive information
What are common causes of poor performance in BigTable?
Under-resourced clusters, bad schema design, poorly chosen row keys
BigTable design: Tall vs. Wide
wide tables store multiple columns for a given row-key where the query pattern is likely to require all the information about a single entity. Tall tables suit time-series or graph data and often only have a single column
Define: Datalab
Jupyter notebooks that can interact with GCP services
Define: Data Studio
Free visualization tool for creating Dashboards and reports
Define: Cloud Composer
Fully managed workflow service built on Apache Airflow; a task orchestration system intended to create workflows of varying complexity.
o Written in python, highly extensible
o Central management and scheduling tool
o Extensive CLI and web UI tool for managing workflows
Dataflow vs Composer
- Dataflow is specifically for Batch and Stream data using beam
- Composer is a task orchestrator with python
Organizing dataflow with composer is a common pattern
Cloud Composer architecture
each environment is an isolated installation of airflow and its component parts
o Can have multiple environments per project but each environment is independent
You write DAGs in Python for the scheduler to pick up; this is where you define the order and configuration settings for the workflows.
ML: 3 major categories/learning options
Pre-trained models
No model training / knowledge of ML required
Re-useable models
Model training required, minimal knowledge of ML
Build your own
Deep knowledge of ML is required, lots of model training
ML: Cloud Vision API
identifies objects within images
Able to perform face detection
Can read printed and handwritten text
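A label-detection sketch with the Python client (google-cloud-vision); the image URI is a placeholder:

```python
# pip install google-cloud-vision
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(source=vision.ImageSource(image_uri="gs://my-bucket/photo.jpg"))

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))
```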
ML: Cloud Video Intelligence API
Identifies objects/places/actions in videos, streamed or stored
ML: Cloud Translation API
Translate between more than 100 different languages
ML: Cloud Text-To-Speech API
Converts text to human speech, with 180 voices across 30 languages
ML: Natural Language API
Perform sentiment analysis, entity analysis, content classification…
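A sentiment-analysis sketch with the Python client (google-cloud-language); the input text is arbitrary:

```python
# pip install google-cloud-language
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="These exam notes are fantastic!",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
print(sentiment.score, sentiment.magnitude)  # score ranges from -1.0 (negative) to 1.0 (positive)
```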
Define: Cloud AutoML
o Cloud AutoML – train your own custom models to solve specific problems
Suite of ML products to facilitate training of custom ML models.
Vision
Video Intelligence
Natural Language
Translation
Tables
ML Supervised Learning
Train the model using labeled data; the model learns to infer the label from the feature values
ML Unsupervised Learning
the model is used to uncover structure within the dataset itself
Example: Uncover personas within customer data, it will take information that’s similar and group it together
ML: Top 3 model types
o Regression – predict a real number, ex: value of a house
o Classification – predict the class from a specified set, with a probability score
o Clustering – group elements into clusters or groups based on how similar they are
ML: Overfitting
• Overfitting – common challenge that has to be overcome training ML models
o An overfit model is not generalized – it does not fit unknown data well because it is fitted too closely to the training dataset
ML: How to deal with overfitting
Increase training data size
Feature selection – include more or reduce the number of features
Early stopping – not too many iterations on the training data
Cross-validation – take the training data and split it into much smaller sets, these are then used to tune a model, known as folds
• K-fold cross validation
Name some examples of hyperparameters
o Batch Size
o Training epochs – the number of times the full set of training data is run through
o Number of hidden layers in a neural network
o Regularization type
o Regularization rate
o Learning rate
Name the 2 types of hyperparameters
Model hyperparameters relate directly to the model that is selected
Algorithm hyperparameters relate to the training process (how the model learns)
Define: Keras
Open-source neural network library; high-level API for fast experimentation, supported in TensorFlow’s core library
Define: TensorFlow
Google’s open source, end-to-end, ML framework
Define: Tensor
Tensors are multi-dimensional arrays of data; their flow through the operations of a neural network gives TensorFlow its name
What is the AI Hub?
Facilitates sharing of AI resources
Hosted repo of plug and play AI components
End-to-end pipelines
Standard algorithms to solve common problems
ML: Vision AI, 2 modes
o Synchronous mode – responses are returned immediately (online processing)
o Asynchronous mode – Only returns results once processing is completed (offline)
ML: Vision AI, Detection Modes
- Face detection – detects faces and can suggest emotional state
- Image property detection – identify image properties (ex: Dominant colors)
- Label Detection – Identify and detect objects, locations, activities, animal species, products….
Define: Dialogflow
• Dialogflow – Natural language interaction platform
o Used in mobile and web applications, devices, and bots
o Analyses text or audio inputs
o Responds using text or speech
ML: Cloud speech-to-text usage: Synchronous Recognition
- REST and gRPC
- Returns a result after all input audio has been processed
- Limited to audio of one minute or less
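A synchronous recognition sketch with the Python client (google-cloud-speech); the GCS URI is a placeholder and the audio must be under one minute:

```python
# pip install google-cloud-speech
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://my-bucket/greeting.wav")  # placeholder
config = speech.RecognitionConfig(language_code="en-US")

# Synchronous: returns only after all input audio has been processed
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```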
ML: Cloud speech-to-text usage: Asynchronous Recognition
- Rest and gRPC
- initiates a long-running operation
- Use the operation to poll for results
ML: Cloud speech-to-text usage: Streaming Recognition
- gRPC
- Audio data is provided within a gRPC bi-directional stream
- results produced while audio is being captured
Define: gRPC
gRPC (gRPC Remote Procedure Calls) is an open-source remote procedure call (RPC) system initially developed at Google in 2015 as the next generation of the RPC infrastructure Stubby.
It provides features such as authentication, bidirectional streaming and flow control, blocking or nonblocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages.
Most common usage scenarios include connecting services in a microservices style architecture, or connecting mobile device clients to backend services.