Intense skill level Flashcards
Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits well for the training data. However, when tested against new data, it performs poorly. What method can you employ to address this?
• A. Threading
• B. Serialization
• C. Dropout Methods
• D. Dimensionality Reduction
• C. Dropout Methods
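For reference, a minimal sketch of how dropout might be applied in a TensorFlow Keras model; the layer sizes and the 0.5 rate are illustrative assumptions, not values from the question.

    import tensorflow as tf

    # Dropout randomly zeroes a fraction of activations during training,
    # which discourages co-adaptation of neurons and reduces overfitting.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(100,)),
        tf.keras.layers.Dropout(0.5),  # drop 50% of units each training step
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")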
You are building a model to make clothing recommendations. You know a user's fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model?
• A. Continuously retrain the model on just the new data.
• B. Continuously retrain the model on a combination of existing data and the new data.
• C. Train on the existing data while using the new data as your test set.
• D. Train on the new data while using the existing data as your test set.
• B. Continuously retrain the model on a combination of existing data and the new data.
You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?
• A. Add capacity (memory and disk space) to the database server by the order of 200.
• B. Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.
• C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.
• D. Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.
• C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.
You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do?
• A. Disable caching by editing the report settings.
• B. Disable caching in BigQuery by editing table details.
• C. Refresh your browser tab showing the visualizations.
• D. Clear your browser history for the past hour then reload the tab showing the visualizations.
• A. Disable caching by editing the report settings.
An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?
• A. Use federated data sources, and check data in the SQL query.
• B. Enable BigQuery monitoring in Google Stackdriver and create an alert.
• C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
• D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.
• D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.
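A hedged sketch of the dead-letter pattern in an Apache Beam (Python) batch pipeline: rows that fail to parse are routed to a tagged side output instead of failing the job. The bucket path and the two-column CSV layout are illustrative assumptions.

    import apache_beam as beam

    class ParseCsvLine(beam.DoFn):
        # Malformed rows go to a tagged 'dead_letter' output for later analysis.
        def process(self, line):
            try:
                name, value = line.split(",")
                yield {"name": name, "value": int(value)}
            except (ValueError, TypeError):
                yield beam.pvalue.TaggedOutput("dead_letter", {"raw_line": line})

    with beam.Pipeline() as p:
        results = (p
            | "ReadCsv" >> beam.io.ReadFromText("gs://my-bucket/daily-dump.csv")
            | "Parse" >> beam.ParDo(ParseCsvLine()).with_outputs("dead_letter", main="parsed"))
        # results.parsed would be written to the main BigQuery table, and
        # results.dead_letter to a separate dead-letter table.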
Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?
• A. Issue a command to restart the database servers.
• B. Retry the query with exponential backoff, up to a cap of 15 minutes.
• C. Retry the query every second until it comes back online to minimize staleness of data.
• D. Reduce the query frequency to once every hour until the database comes back online.
• B. Retry the query with exponential backoff, up to a cap of 15 minutes.
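A minimal sketch of capped exponential backoff, assuming run_query is any callable that raises on failure; the jitter and the 1-second initial delay are illustrative choices.

    import random
    import time

    def query_with_backoff(run_query, max_backoff_s=15 * 60):
        # Double the wait after each failure, capping at 15 minutes.
        delay = 1
        while True:
            try:
                return run_query()
            except Exception:
                # Jitter keeps millions of clients from retrying in lockstep.
                time.sleep(delay + random.random())
                delay = min(delay * 2, max_backoff_s)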
You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use?
• A. Linear regression
• B. Logistic classification
• C. Recurrent neural network
• D. Feedforward neural network
• A. Linear regression
You are building a new real-time data warehouse for your company and will use Google BigQuery streaming inserts. There is no guarantee that data will only be sent in once, but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data. Which query type should you use?
• A. Include ORDER BY DESC on timestamp column and LIMIT to 1.
• B. Use GROUP BY on the unique ID column and timestamp column and SUM on the values.
• C. Use the LAG window function with PARTITION BY unique ID along with WHERE LAG IS NOT NULL.
• D. Use the ROW_NUMBER window function with PARTITION BY unique ID along with WHERE row equals 1.
• D. Use the ROW_NUMBER window function with PARTITION BY unique ID along with WHERE row equals 1.
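A hedged sketch of the ROW_NUMBER deduplication pattern run through the BigQuery Python client; the table and column names (mydataset.events, unique_id, event_ts) are illustrative assumptions.

    from google.cloud import bigquery

    client = bigquery.Client()
    # Keep exactly one row per unique ID; ties are broken by the newest event.
    sql = """
    SELECT * EXCEPT(row_num)
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (PARTITION BY unique_id
                                ORDER BY event_ts DESC) AS row_num
      FROM `mydataset.events`
    )
    WHERE row_num = 1
    """
    for row in client.query(sql).result():
        print(row)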
Your company is using WILDCARD tables to query data across multiple tables with similar names. The SQL statement is currently failing with the following error:

    Syntax error: Expected end of statement but got "-" at [4:11]

    SELECT age
    FROM
    bigquery-public-data.noaa_gsod.gsod
    WHERE
      age != 99
      AND _TABLE_SUFFIX = '1929'
    ORDER BY
      age DESC

Which table name will make the SQL statement work correctly?
• A. `bigquery-public-data.noaa_gsod.gsod`
• B. bigquery-public-data.noaa_gsod.gsod*
• C. `bigquery-public-data.noaa_gsod.gsod`*
• D. `bigquery-public-data.noaa_gsod.gsod*`
• D. `bigquery-public-data.noaa_gsod.gsod*`
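For reference, the question's statement with the corrected, backtick-quoted wildcard table name, run through the BigQuery Python client.

    from google.cloud import bigquery

    client = bigquery.Client()
    # The wildcard table name must be enclosed in backticks; _TABLE_SUFFIX
    # then selects the per-year shard (here, gsod1929).
    sql = """
    SELECT age
    FROM `bigquery-public-data.noaa_gsod.gsod*`
    WHERE age != 99
      AND _TABLE_SUFFIX = '1929'
    ORDER BY age DESC
    """
    for row in client.query(sql).result():
        print(row.age)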
Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of information required to do their jobs. You want to enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose three.)
• A. Disable writes to certain tables.
• B. Restrict access to tables by role.
• C. Ensure that the data is encrypted at all times.
• D. Restrict BigQuery API access to approved users.
• E. Segregate data across multiple tables or databases.
• F. Use Google Stackdriver Audit Logging to determine policy violations.
• B. Restrict access to tables by role.
• D. Restrict BigQuery API access to approved users.
• F. Use Google Stackdriver Audit Logging to determine policy violations.
You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:
✑ No interaction by the user on the site for 1 hour
✑ Has added more than $30 worth of products to the basket
✑ Has not completed a transaction
You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?
• A. Use a fixed-time window with a duration of 60 minutes.
• B. Use a sliding time window with a duration of 60 minutes.
• C. Use a session window with a gap time duration of 60 minutes.
• D. Use a global window with a time based trigger with a delay of 60 minutes.
• C. Use a session window with a gap time duration of 60 minutes.
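A hedged sketch of a session-windowed Apache Beam (Python) pipeline: events are keyed by user and grouped into sessions that close after 60 minutes of inactivity. The topic name and the comma-separated message format are illustrative assumptions.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms import window

    opts = PipelineOptions(streaming=True)
    with beam.Pipeline(options=opts) as p:
        (p
         | "ReadEvents" >> beam.io.ReadFromPubSub(
               topic="projects/my-project/topics/site-events")
         | "KeyByUser" >> beam.Map(lambda raw: (raw.decode().split(",")[0], raw))
         | "SessionWindow" >> beam.WindowInto(window.Sessions(gap_size=60 * 60))
         | "GroupSessions" >> beam.GroupByKey())
        # Downstream logic would apply the basket-value and no-transaction
        # rules to each closed session before sending a message.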
Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other's data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)
• A. Load data into different partitions.
• B. Load data into a different dataset for each client.
• C. Put each client's BigQuery dataset into a different table.
• D. Restrict a client's dataset to approved users.
• E. Only allow a service account to access the datasets.
• F. Use the appropriate identity and access management (IAM) roles for each client's users.
• B. Load data into a different dataset for each client.
• D. Restrict a client's dataset to approved users.
• F. Use the appropriate identity and access management (IAM) roles for each client's users.
You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling. Which Google database service should you use?
• A. Cloud SQL
• B. BigQuery
• C. Cloud Bigtable
• D. Cloud Datastore
• A. Cloud SQL
You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.)
• A. There are very few occurrences of mutations relative to normal samples.
• B. There are roughly equal occurrences of both normal and mutated samples in the database.
• C. You expect future mutations to have different features from the mutated samples in the database.
• D. You expect future mutations to have similar features to the mutated samples in the database.
• E. You already have labels for which samples are mutated and which are normal in the database.
• A. There are very few occurrences of mutations relative to normal samples.
• D. You expect future mutations to have similar features to the mutated samples in the database.
You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. Initially, you designed the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design?
• A. Re-write the application to load accumulated data every 2 minutes.
• B. Convert the streaming insert code to batch load for individual messages.
• C. Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.
• D. Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.
• D. Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.
Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?
• A. Use Google Stackdriver Audit Logs to review data access.
• B. Get the identity and access management (IAM) policy of each table.
• C. Use Stackdriver Monitoring to see the usage of BigQuery query slots.
• D. Use the Google Cloud Billing API to see what account the warehouse is being billed to.
• A. Use Google Stackdriver Audit Logs to review data access.
Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?
• A. Create a Google Cloud Dataflow job to process the data.
• B. Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.
• C. Create a Hadoop cluster on Google Compute Engine that uses persistent disks.
• D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.
• E. Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.
• D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.
Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data. Which three machine learning applications can you use? (Choose three.)
• A. Supervised learning to determine which transactions are most likely to be fraudulent.
• B. Unsupervised learning to determine which transactions are most likely to be fraudulent.
• C. Clustering to divide the transactions into N categories based on feature similarity.
• D. Supervised learning to predict the location of a transaction.
• E. Reinforcement learning to predict the location of a transaction.
• F. Unsupervised learning to predict the location of a transaction.
• B. Unsupervised learning to determine which transactions are most likely to be fraudulent.
• C. Clustering to divide the transactions into N categories based on feature similarity.
• D. Supervised learning to predict the location of a transaction.
Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?
• A. Put the data into Google Cloud Storage.
• B. Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
• C. Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
• D. Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
• A. Put the data into Google Cloud Storage.
You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages?
• A. The message body for the sensor event is too large.
• B. Your custom endpoint has an out-of-date SSL certificate.
• C. The Cloud Pub/Sub topic has too many messages published to it.
• D. Your custom endpoint is not acknowledging messages within the acknowledgement deadline.
• D. Your custom endpoint is not acknowledging messages within the acknowledgement deadline.
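A hedged sketch of a push endpoint that acknowledges promptly: Pub/Sub treats a 2xx response as an acknowledgement, so deferring slow work and returning before the deadline prevents redelivery. The route and the enqueue_for_processing helper are hypothetical.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/pubsub/push", methods=["POST"])
    def handle_push():
        envelope = request.get_json()
        message = envelope["message"]       # Pub/Sub push envelope
        enqueue_for_processing(message)     # hypothetical: defer slow work
        return ("", 204)                    # 2xx acks before the ack deadline

    def enqueue_for_processing(message):
        pass  # placeholder for the real (asynchronous) handler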
Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?
• A. Assign global unique identifiers (GUID) to each data entry.
• B. Compute the hash value of each data entry, and compare it with all historical data.
• C. Store each data entry as the primary key in a separate database and apply an index.
• D. Maintain a database table to store the hash value and other metadata for each data entry.
• A. Assign global unique identifiers (GUID) to each data entry.
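A minimal sketch of tagging entries at the point of origin, assuming a dict-shaped payload; because a re-transmission carries the same GUID, downstream consumers can drop duplicates cheaply.

    import uuid

    def tag_entry(entry: dict) -> dict:
        # setdefault keeps the original GUID if the entry was already tagged,
        # so a re-transmitted entry stays identifiable as a duplicate.
        entry.setdefault("guid", str(uuid.uuid4()))
        return entry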
Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?
• A. Run a local version of Jupyter on the laptop.
• B. Grant the user access to Google Cloud Shell.
• C. Host a visualization tool on a VM on Google Compute Engine.
• D. Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.
• D. Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.
You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and analyze these very large datasets in real time. What should you do?
• A. Send the data to Google Cloud Datastore and then export to BigQuery.
• B. Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.
• C. Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.
• D. Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.
• B. Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.
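A hedged sketch of the recommended path in Apache Beam (Python): Pub/Sub in, Dataflow in the middle, BigQuery out. The project, topic, table, schema, and message format are illustrative assumptions.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse(raw: bytes) -> dict:
        device_id, temp = raw.decode().split(",")
        return {"device_id": device_id, "temp_c": float(temp)}

    opts = PipelineOptions(streaming=True)
    with beam.Pipeline(options=opts) as p:
        (p
         | "ReadTemps" >> beam.io.ReadFromPubSub(
               topic="projects/my-project/topics/device-temps")
         | "Parse" >> beam.Map(parse)
         | "WriteBQ" >> beam.io.WriteToBigQuery(
               "my-project:iot.temperatures",
               schema="device_id:STRING,temp_c:FLOAT",
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))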
You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change its data type to the TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive. What should you do?
• A. Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data.
• B. Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column DT for each row. Reference the column TS instead of the column DT from now on.
• C. Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.
• D. Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true.
• E. Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
• E. Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
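A hedged sketch of the one-time backfill into the destination table, assuming DT holds whole epoch seconds as a STRING; the project and dataset names are illustrative.

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        destination=bigquery.TableReference.from_string(
            "my-project.mydataset.NEW_CLICK_STREAM"))
    # Parse the STRING epoch seconds into INT64, then into a TIMESTAMP column TS.
    sql = """
    SELECT * EXCEPT(DT),
           TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS TS
    FROM `mydataset.CLICK_STREAM`
    """
    client.query(sql, job_config=job_config).result()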
You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?
• A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
• B. In the Stackdriver logging admin interface, enable a log sink export to BigQuery.
• C. In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.
• D. Using the Stackdriver API, create a project sink with an advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.
• D. Using the Stackdriver API, create a project sink with an advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.
You are working on a sensitive project involving private user data. You have set up a project on Google Cloud Platform to house your work internally. An external consultant is going to assist with coding a complex transformation in a Google Cloud Dataflow pipeline for your project. How should you maintain users' privacy?
• A. Grant the consultant the Viewer role on the project.
• B. Grant the consultant the Cloud Dataflow Developer role on the project.
• C. Create a service account and allow the consultant to log on with it.
• D. Create an anonymized sample of the data for the consultant to work with in a different project.
• B. Grant the consultant the Cloud Dataflow Developer role on the project.
You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy. What can you do?
• A. Eliminate features that are highly correlated to the output labels.
• B. Combine highly co-dependent features into one representative feature.
• C. Instead of feeding in each feature individually, average their values in batches of 3.
• D. Remove the features that have null values for more than 50% of the training records.
• B. Combine highly co-dependent features into one representative feature.
Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour. The data scientists have written the following code to read the data for new key features in the logs:

    BigQueryIO.Read
        .named("ReadLogData")
        .from("clouddataflow-readonly:samples.log_data")

You want to improve the performance of this data read. What should you do?
• A. Specify the TableReference object in the code.
• B. Use .fromQuery operation to read specific fields from the table.
• C. Use both the Google BigQuery TableSchema and TableFieldSchema classes.
• D. Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.
• B. Use .fromQuery operation to read specific fields from the table.
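The snippet in the question uses the Dataflow Java SDK; below is a hedged Python-SDK equivalent of the same idea. Reading with a query lets BigQuery prune columns and rows before the pipeline ever sees them; the selected field names are assumptions.

    import apache_beam as beam

    with beam.Pipeline() as p:
        rows = (p | "ReadLogData" >> beam.io.ReadFromBigQuery(
            query="SELECT user_id, feature_value "
                  "FROM `clouddataflow-readonly.samples.log_data`",
            use_standard_sql=True))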
Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?
• A. Use a row key of the form timestamp.
• B. Use a row key of the form sensorid.
• C. Use a row key of the form timestamp#sensorid.
• D. Use a row key of the form #sensorid#timestamp.
• D. Use a row key of the form #sensorid#timestamp.
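A hedged sketch of writing with a sensor-prefixed row key via the Bigtable Python client: leading with the sensor ID spreads writes across tablets instead of hot-spotting on a monotonically increasing timestamp. The instance, table, and column family names are illustrative assumptions.

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("sensors-instance").table("readings")

    def write_reading(sensor_id: str, ts_epoch: int, value: bytes):
        row_key = f"{sensor_id}#{ts_epoch}".encode()  # sensorid#timestamp
        row = table.direct_row(row_key)
        row.set_cell("data", "temp", value)
        row.commit()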
Your company's customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?
• A. Add a node to the MySQL cluster and build an OLAP cube there.
• B. Use an ETL tool to load the data from MySQL into Google BigQuery.
• C. Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.
• D. Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.
• B. Use an ETL tool to load the data from MySQL into Google BigQuery.
You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?
• A. Update the current pipeline and use the drain flag.
• B. Update the current pipeline and provide the transform mapping JSON object.
• C. Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.
• D. Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.
• A. Update the current pipeline and use the drain flag.
Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while minimizing cost. What should they do?
• A. Redefine the schema by evenly distributing reads and writes across the row space of the table.
• B. The performance issue should be resolved over time as the size of the Bigtable cluster is increased.
• C. Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.
• D. Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.
• A. Redefine the schema by evenly distributing reads and writes across the row space of the table.
Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google Cloud Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the logs, and all messages are being published to Cloud Pub/Sub successfully. What should you do next?
• A. Check the dashboard application to see if it is not displaying correctly.
• B. Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.
• C. Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.
• D. Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.
• B. Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.
Flowlogistic Case Study -

Company Overview -
Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.

Company Background -
The company started as a regional trucking company, and then expanded into other logistics markets. Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources.

Solution Concept -
Flowlogistic wants to implement two concepts using the cloud:
✑ Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads
✑ Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy resources and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.

Existing Technical Environment -
Flowlogistic architecture resides in a single data center:
✑ Databases
• 8 physical servers in 2 clusters: SQL Server - user data, inventory, static data
• 3 physical servers: Cassandra - metadata, tracking messages
• 10 Kafka servers - tracking message aggregation and batch insert
✑ Application servers - customer front end, middleware for order/customs
• 60 virtual machines across 20 physical servers: Tomcat - Java services; Nginx - static content; batch servers
✑ Storage appliances
• iSCSI for virtual machine (VM) hosts
• Fibre Channel storage area network (FC SAN) - SQL Server storage
• Network-attached storage (NAS) - image storage, logs, backups
✑ 10 Apache Hadoop / Spark servers
• Core Data Lake
• Data analysis workloads
✑ 20 miscellaneous servers
• Jenkins, monitoring, bastion hosts

Business Requirements -
✑ Build a reliable and reproducible environment with scaled parity of production
✑ Aggregate data in a centralized Data Lake for analysis
✑ Use historical data to perform predictive analytics on future shipments
✑ Accurately track every shipment worldwide using proprietary technology
✑ Improve business agility and speed of innovation through rapid provisioning of new resources
✑ Analyze and optimize architecture for performance in the cloud
✑ Migrate fully to the cloud if all other requirements are met

Technical Requirements -
✑ Handle both streaming and batch data
✑ Migrate existing Hadoop workloads
✑ Ensure architecture is scalable and elastic to meet the changing demands of the company
✑ Use managed services whenever possible
✑ Encrypt data in flight and at rest
✑ Connect a VPN between the production data center and cloud environment

CEO Statement -
We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around. We need to organize our information so we can more easily understand where our customers are and what they are shipping.

CTO Statement -
IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO's tracking technology.

CFO Statement -
Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment.

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?
• A. Store the common data in BigQuery as partitioned tables.
• B. Store the common data in BigQuery and expose authorized views.
• C. Store the common data encoded as Avro in Google Cloud Storage.
• D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.
• C. Store the common data encoded as Avro in Google Cloud Storage.
Flowlogistic Case Study (see the full case study above).

Flowlogistic's management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?
• A. Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage
• B. Cloud Pub/Sub, Cloud Dataflow, and Local SSD
• C. Cloud Pub/Sub, Cloud SQL, and Cloud Storage
• D. Cloud Load Balancing, Cloud Dataflow, and Cloud Storage
• A. Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage
Flowlogistic Case Study (see the full case study above).

Flowlogistic's CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they've purchased a visualization tool to simplify the creation of BigQuery reports. However, they've been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?
• A. Export the data into a Google Sheet for visualization.
• B. Create an additional table with only the necessary columns.
• C. Create a view on the table to present to the visualization tool.
• D. Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.
• C. Create a view on the table to present to the visualization tool.
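A hedged sketch of creating such a view with BigQuery DDL; the project, dataset, and column names are illustrative assumptions. The visualization tool would then query only the narrow view.

    from google.cloud import bigquery

    client = bigquery.Client()
    # Expose only the columns the sales team needs; queries against the view
    # scan less data than queries against the full table.
    client.query("""
    CREATE VIEW `my-project.sales.customer_summary_v` AS
    SELECT customer_name, region, lifetime_value
    FROM `my-project.sales.customers`
    """).result()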
Flowlogistic Case Study (see the full case study above).

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time. Which approach should you take?
• A. Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.
• B. Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.
• C. Use the NOW() function in BigQuery to record the event's time.
• D. Use the automatically generated timestamp from Cloud Pub/Sub to order the data.
• D. Use the automatically generated timestamp from Cloud Pub/Sub to order the data.
MJTelco Case Study -

Company Overview -
MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive hardware.

Company Background -
Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space. Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network, allowing them to account for the impact of dynamic regional politics on location availability and cost. Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and providers in their system. After careful consideration, they decided public cloud is the perfect environment to support their needs.

Solution Concept -
MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs:
✑ Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations.
✑ Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments - development/test, staging, and production - to meet the needs of running experiments, deploying new features, and serving production customers.

Business Requirements -
✑ Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed telecom user community.
✑ Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.
✑ Provide reliable and timely access to data for analysis from distributed research workers.
✑ Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.

Technical Requirements -
✑ Ensure secure and efficient transport and storage of telemetry data.
✑ Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.
✑ Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100M records/day.
✑ Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production learning cycles.

CEO Statement -
Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable, which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity commitments.

CTO Statement -
Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our development and test environments to work as we iterate.

CFO Statement -
The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud's machine learning will allow our quantitative researchers to work on our high-value problems instead of problems with our data pipelines.

MJTelco's Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?
• A. The zone
• B. The number of workers
• C. The disk size per worker
• D. The maximum number of workers
• D. The maximum number of workers
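A hedged sketch of the relevant pipeline option in the Beam Python SDK: with autoscaling, Dataflow adds workers up to the configured maximum. The project, region, bucket, and the value 64 are illustrative assumptions.

    from apache_beam.options.pipeline_options import PipelineOptions

    opts = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
        max_num_workers=64,  # raise this ceiling so the pipeline can scale up
    )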
MJTelco Case Study (see the full case study above).

You need to compose visualizations for operations teams with the following requirements:
✑ The report must include telemetry data from all 50,000 installations for the most recent 6 weeks (sampling once every minute).
✑ The report must not be more than 3 hours delayed from live data.
✑ The actionable report should only show suboptimal links.
✑ Most suboptimal links should be sorted to the top.
✑ Suboptimal links can be grouped and filtered by regional geography.
✑ User response time to load the report must be <5 seconds.
Which approach meets the requirements?
• A. Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.
• B. Load the data into Google BigQuery tables, write a Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.
• C. Load the data into Google Cloud Datastore tables, write a Google App Engine application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google Charts and visualization API.
• D. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.
• D. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.
MJTelco Case Study (see the full case study above).

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data. Which two actions should you take? (Choose two.)
• A. Ensure all the tables are included in a global dataset.
• B. Ensure each table is included in a dataset for a region.
• C. Adjust the settings for each table to allow a related region-based security group view access.
• D. Adjust the settings for each view to allow a related region-based security group view access.
• E. Adjust the settings for each dataset to allow a related region-based security group view access.
• B. Ensure each table is included in a dataset for a region.• C. Adjust the settings for each table to allow a related region-based security group view access.40
MJTelco Case Study (see above).
MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?• A. Rowkey: date#device_id Column data: data_point• B. Rowkey: date Column data: device_id, data_point• C. Rowkey: device_id Column data: date, data_point• D. Rowkey: data_point Column data: device_id, date• E. Rowkey: date#data_point Column data: device_id
• A. Rowkey: date#device_id Column data: data_point41
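To make the date#device_id choice concrete, here is a minimal sketch of writing and reading such rows with the google-cloud-bigtable Python client; the project, instance, table, and column-family names are hypothetical:

import datetime
from google.cloud import bigtable

# Hypothetical names, for illustration only.
table = bigtable.Client(project="my-project").instance("telemetry-inst").table("telemetry")

# date#device_id keeps all of one device's readings for a day contiguous.
row = table.direct_row(b"20240101#device-42")
row.set_cell("data", "data_point", b"<reading>", timestamp=datetime.datetime.utcnow())
row.commit()

# The common query (one device, one day) becomes a narrow range scan.
for r in table.read_rows(start_key=b"20240101#device-42", end_key=b"20240101#device-42\xff"):
    print(r.row_key)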
Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?• A. Rewrite the job in Pig.• B. Rewrite the job in Apache Spark.• C. Increase the size of the Hadoop cluster.• D. Decrease the size of the Hadoop cluster but also rewrite the job in Hive.
• B. Rewrite the job in Apache Spark.42
You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a users table consisting of a firstname field and a lastname field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a fullname field consisting of the value of the firstname field concatenated with a space, followed by the value of the lastname field for each employee. How can you make that data available while minimizing cost?• A. Create a view in BigQuery that concatenates the firstname and lastname field values to produce the fullname.• B. Add a new column called fullname to the users table. Run an UPDATE statement that updates the fullname column for each user with the concatenation of the firstname and lastname values.• C. Create a Google Cloud Dataflow job that queries BigQuery for the entire users table, concatenates the firstname value and lastname value for each user, and loads the proper values for firstname, lastname, and fullname into a new table in BigQuery.• D. Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for firstname, lastname and fullname. Run a BigQuery load job to load the new CSV file into BigQuery.
• B. Add a new column called fullname to the users table. Run an UPDATE statement that updates the fullname column for each user with the concatenation of the firstname and lastname values.43
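A minimal sketch of answer B with the BigQuery Python client; the project, dataset, and table names are hypothetical, and the DDL assumes the column does not already exist:

from google.cloud import bigquery

client = bigquery.Client()
# Add the column, then backfill it in place (hypothetical table path).
client.query("ALTER TABLE `my-project.mydataset.users` "
             "ADD COLUMN IF NOT EXISTS fullname STRING").result()
client.query("UPDATE `my-project.mydataset.users` "
             "SET fullname = CONCAT(firstname, ' ', lastname) "
             "WHERE fullname IS NULL").result()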
You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity 'movie' the property 'actors' and the property 'tags' have multiple values but the property 'date_released' does not. A typical query would ask for all movies with actor= ordered by date_released, or all movies with tag=comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?• A. Manually configure the index in your index config as follows: indexes: - kind: movie properties: - name: actors - name: date_released - kind: movie properties: - name: tags - name: date_released• B. Manually configure the index in your index config as follows: indexes: - kind: movie properties: - name: actors - name: tags - name: date_released• C. Set the following in your entity options: exclude_from_indexes = 'actors, tags'• D. Set the following in your entity options: exclude_from_indexes = 'date_published'
• A. Manually configure the index in your index config as follows: indexes: - kind: movie properties: - name: actors - name: date_released - kind: movie properties: - name: tags - name: date_released44
You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 am. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you do?• A. Change the processing job to use Google Cloud Dataproc instead.• B. Manually start the Cloud Dataflow job each morning when you get into the office.• C. Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.• D. Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.
• C. Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.45
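One hedged way to wire this up: App Engine Cron (configured in cron.yaml to hit a handler each morning) calls the Dataflow templates API to launch a pre-staged job. The project, region, and template path below are hypothetical:

from googleapiclient.discovery import build

def launch_daily_dataflow_job():
    # Invoked by the App Engine cron handler once per day.
    dataflow = build("dataflow", "v1b3")
    dataflow.projects().locations().templates().launch(
        projectId="my-project",
        location="us-central1",
        gcsPath="gs://my-bucket/templates/log-processor",  # hypothetical staged template
        body={"jobName": "daily-log-run"},
    ).execute()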
You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?• A. Load the data every 30 minutes into a new partitioned table in BigQuery.• B. Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery.• C. Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore.• D. Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.
• B. Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery.46
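A sketch of answer B: define a federated (external) table over the bucket so each query reads the freshest CSVs in place. The bucket and table names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()
ext = bigquery.ExternalConfig("CSV")
ext.source_uris = ["gs://price-feed-bucket/latest/*.csv"]  # hypothetical bucket
ext.autodetect = True

table = bigquery.Table("my-project.market.prices_federated")
table.external_data_configuration = ext
client.create_table(table)
# Overwriting the CSVs every 30 minutes refreshes query results with no load jobs.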
You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store:✑ The user profile: what the user likes and doesn't like to eat✑ The user account information: name, address, preferred meal times✑ The order information: when orders are made, from where, to whomThe database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use?• A. BigQuery• B. Cloud SQL• C. Cloud Bigtable• D. Cloud Datastore
• A. BigQuery47
Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file. What is the most likely cause of this problem?• A. The CSV data loaded in BigQuery is not flagged as CSV.• B. The CSV data has invalid rows that were skipped on import.• C. The CSV data loaded in BigQuery is not using BigQuery's default encoding.• D. The CSV data has not gone through an ETL phase before loading into BigQuery.
• C. The CSV data loaded in BigQuery is not using BigQuery's default encoding.48
Your company produces 20,000 files every hour. Each data file is formatted as a comma-separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low. You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (Choose two.)• A. Introduce data compression for each file to increase the rate of file transfer.• B. Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.• C. Redesign the data ingestion process to use the gsutil tool to send the CSV files to a storage bucket in parallel.• D. Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.• E. Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.
• C. Redesign the data ingestion process to use the gsutil tool to send the CSV files to a storage bucket in parallel.• D. Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.49
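Answer C's parallelism is usually just gsutil -m cp *.csv gs://bucket/; the same idea can be sketched with the Python storage client and a thread pool (the bucket name and local path are hypothetical):

import glob
import os
from concurrent.futures import ThreadPoolExecutor
from google.cloud import storage

bucket = storage.Client().bucket("ingest-bucket")  # hypothetical bucket

def upload(path):
    # Many small-file uploads in flight at once hide the 200 ms per-request latency.
    bucket.blob("incoming/" + os.path.basename(path)).upload_from_filename(path)

csv_paths = glob.glob("/data/outgoing/*.csv")  # hypothetical local staging directory
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(upload, csv_paths))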
You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required. You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.)• A. Redis• B. HBase• C. MySQL• D. MongoDB• E. Cassandra• F. HDFS with Hive
• B. HBase• D. MongoDB• E. Cassandra50
You are training a spam classifier. You notice that you are overfitting the training data. Which three actions can you take to resolve this problem? (Choose three.)• A. Get more training examples• B. Reduce the number of training examples• C. Use a smaller set of features• D. Use a larger set of features• E. Increase the regularization parameters• F. Decrease the regularization parameters
• A. Get more training examples• C. Use a smaller set of features• E. Increase the regularization parameters51
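For the regularization option, a minimal TensorFlow/Keras sketch; the layer sizes and penalty weight are illustrative:

import tensorflow as tf

num_features = 1000  # illustrative input width
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(num_features,),
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),  # raise 0.01 to regularize harder
    tf.keras.layers.Dense(1, activation="sigmoid"),  # spam / not spam
])
model.compile(optimizer="adam", loss="binary_crossentropy")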
You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the project owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery. How should you securely run this workload?• A. Restrict the Google Cloud Storage bucket so only you can see the files• B. Grant the Project Owner role to a service account, and run the job with it• C. Use a service account with the ability to read the batch files and to write to BigQuery• D. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery
• C. Use a service account with the ability to read the batch files and to write to BigQuery.52
You are using Google BigQuery as your data warehouse. Your users report that the following simple query is running very slowly, no matter when they run the query:SELECT country, state, city FROM [myproject:mydataset.mytable] GROUP BY countryYou check the query plan for the query and see the following output in the Read section of Stage 1:[Query plan graphic: the stage's bar is mostly purple, with some blue.] What is the most likely cause of the delay for this query?• A. Users are running too many concurrent queries in the system• B. The [myproject:mydataset.mytable] table has too many partitions• C. Either the state or the city columns in the [myproject:mydataset.mytable] table have too many null values• D. Most rows in the [myproject:mydataset.mytable] table have the same value in the country column, causing data skew
• D. Most rows in the [myproject:mydataset.mytable] table have the same value in the country column, causing data skew53
Your globally distributed auction application allows users to bid on items. Occasionally, users place identical bids at nearly identical times, and different application servers process those bids. Each bid event contains the item, amount, user, and timestamp. You want to collate those bid events into a single location in real time to determine which user bid first. What should you do?• A. Create a file on a shared file server and have the application servers write all bid events to that file. Process the file with Apache Hadoop to identify which user bid first.• B. Have each application server write the bid events to Cloud Pub/Sub as they occur. Push the events from Cloud Pub/Sub to a custom endpoint that writes the bid event information into Cloud SQL.• C. Set up a MySQL database for each application server to write bid events into. Periodically query each of those distributed MySQL databases and update a master MySQL database with bid event information.• D. Have each application server write the bid events to Google Cloud Pub/Sub as they occur. Use a pull subscription to pull the bid events using Google Cloud Dataflow. Give the bid for each item to the user in the bid event that is processed first.
• B. Have each application server write the bid events to Cloud Pub/Sub as they occur. Push the events from Cloud Pub/Sub to a custom endpoint that writes the bid event information into Cloud SQL.54
Your organization has been collecting and analyzing data in Google BigQuery for 6 months. The majority of the data analyzed is placed in a time-partitioned table named events_partitioned. To reduce the cost of queries, your organization created a view called events, which queries only the last 14 days of data. The view is defined in legacy SQL. Next month, existing applications will be connecting to BigQuery to read the events data via an ODBC connection. You need to ensure the applications can connect. Which two actions should you take? (Choose two.)• A. Create a new view over events using standard SQL• B. Create a new partitioned table using a standard SQL query• C. Create a new view over events_partitioned using standard SQL• D. Create a service account for the ODBC connection to use for authentication• E. Create a Google Cloud Identity and Access Management (Cloud IAM) role for the ODBC connection and share 'events'
• C. Create a new view over events_partitioned using standard SQL• D. Create a service account for the ODBC connection to use for authentication55
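A sketch of the standard-SQL replacement view (answer C) using the BigQuery Python client; the project, dataset, and view names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()
view = bigquery.Table("my-project.mydataset.events_std")  # hypothetical new view
view.view_query = (
    "SELECT * FROM `my-project.mydataset.events_partitioned` "
    "WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)"
)
view.view_use_legacy_sql = False  # standard SQL, as the ODBC driver expects
client.create_table(view)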
You have enabled the free integration between Firebase Analytics and Google BigQuery. Firebase now automatically creates a new table daily in BigQuery in the format app_events_YYYYMMDD. You want to query all of the tables for the past 30 days in legacy SQL. What should you do?• A. Use the TABLE_DATE_RANGE function• B. Use the WHERE _PARTITIONTIME pseudo column• C. Use WHERE date BETWEEN YYYY-MM-DD AND YYYY-MM-DD• D. Use SELECT IF (date >= YYYY-MM-DD AND date <= YYYY-MM-DD)
• A. Use the TABLE_DATE_RANGE function56
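The legacy-SQL shape of answer A, run through the Python client; the project and dataset names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT COUNT(*) AS events
FROM TABLE_DATE_RANGE([my-project:firebase.app_events_],
                      DATE_ADD(CURRENT_TIMESTAMP(), -30, 'DAY'),
                      CURRENT_TIMESTAMP())
"""
job = client.query(sql, job_config=bigquery.QueryJobConfig(use_legacy_sql=True))
print(list(job.result()))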
Your company is currently setting up data pipelines for their campaign. For all the Google Cloud Pub/Sub streaming data, one of the important business requirements is to be able to periodically identify the inputs and their timings during their campaign. Engineers have decided to use windowing and transformation in Google Cloud Dataflow for this purpose. However, when testing this feature, they find that the Cloud Dataflow job fails for all streaming inserts. What is the most likely cause of this problem?• A. They have not assigned the timestamp, which causes the job to fail• B. They have not set the triggers to accommodate the data coming in late, which causes the job to fail• C. They have not applied a global windowing function, which causes the job to fail when the pipeline is created• D. They have not applied a non-global windowing function, which causes the job to fail when the pipeline is created
• D. They have not applied a non-global windowing function, which causes the job to fail when the pipeline is created57
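Streaming aggregations over an unbounded Pub/Sub source need a non-global window (or an explicit trigger) before any GroupByKey/Combine. A minimal Apache Beam Python sketch, with a hypothetical project and topic:

import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions

opts = PipelineOptions(streaming=True)
with beam.Pipeline(options=opts) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/campaign-events")
     | "Window" >> beam.WindowInto(window.FixedWindows(60))  # 60-second windows instead of the global window
     | "CountPerWindow" >> beam.combiners.Count.Globally().without_defaults())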
You architect a system to analyze seismic data. Your extract, transform, and load (ETL) process runs as a series of MapReduce jobs on an Apache Hadoop cluster. The ETL process takes days to process a data set because some steps are computationally expensive. Then you discover that a sensor calibration step has been omitted. How should you change your ETL process to carry out sensor calibration systematically in the future?• A. Modify the transform MapReduce jobs to apply sensor calibration before they do anything else.• B. Introduce a new MapReduce job to apply sensor calibration to raw data, and ensure all other MapReduce jobs are chained after this.• C. Add sensor calibration data to the output of the ETL process, and document that all users need to apply sensor calibration themselves.• D. Develop an algorithm through simulation to predict variance of data output from the last MapReduce job based on calibration factors, and apply the correction to all data.
• B. Introduce a new MapReduce job to apply sensor calibration to raw data, and ensure all other MapReduce jobs are chained after this.58
An online retailer has built their current application on Google App Engine. A new initiative at the company mandates that they extend their application to allow their customers to transact directly via the application. They need to manage their shopping transactions and analyze combined data from multiple datasets using a business intelligence (BI) tool. They want to use only a single database for this purpose. Which Google Cloud database should they choose?• A. BigQuery• B. Cloud SQL• C. Cloud Bigtable• D. Cloud Datastore
• B. Cloud SQL59
You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery table with the table name format logs_YYYYMMDD. You have been using table wildcard functions to generate daily and monthly reports for all time ranges. Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue?• A. Convert all daily log tables into date-partitioned tables• B. Convert the sharded tables into a single partitioned table• C. Enable query caching so you can cache data from previous months• D. Create separate views to cover each month, and query from these views
• B. Convert the sharded tables into a single partitioned table60
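One way to do the conversion in a single statement is a CREATE TABLE ... AS SELECT over the wildcard tables, deriving the partition column from _TABLE_SUFFIX; the project and dataset names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()
client.query("""
CREATE TABLE `my-project.game.logs_partitioned`
PARTITION BY event_date AS
SELECT PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS event_date, *
FROM `my-project.game.logs_*`
""").result()
# Long date ranges now prune partitions in one table instead of opening 1,000+ shards.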
Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again, based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have recommended using Google Cloud Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results into Google BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost?A. Migrate the workload to Google Cloud DataflowB. Use pre-emptible virtual machines (VMs) for the clusterC. Use a higher-memory node so that the job runs fasterD. Use SSDs on the worker nodes so that the job can run faster
B. Use pre-emptible virtual machines (VMs) for the cluster61
Your company receives both batch- and stream-based event data. You want to process the data using Google Cloud Dataflow over a predictable time period. However, you realize that in some instances data can arrive late or out of order. How should you design your Cloud Dataflow pipeline to handle data that is late or out of order?A. Set a single global window to capture all the data.B. Set sliding windows to capture all the lagged data.C. Use watermarks and timestamps to capture the lagged data.D. Ensure every datasource type (stream or batch) has a timestamp, and use the timestamps to define the logic for lagged data.
62C. Use watermarks and timestamps to capture the lagged data.
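In Beam terms, late and out-of-order data is handled by event timestamps plus a watermark policy: allowed lateness and a late trigger. A hedged sketch with illustrative durations:

import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import AfterWatermark, AfterCount, AccumulationMode

late_tolerant_window = beam.WindowInto(
    window.FixedWindows(300),                       # 5-minute event-time windows
    trigger=AfterWatermark(late=AfterCount(1)),     # re-fire for each late element
    accumulation_mode=AccumulationMode.ACCUMULATING,
    allowed_lateness=3600)                          # accept data up to 1 hour late

# Upstream, stamp each element with its event time so the watermark is meaningful
# (assuming each element carries an "event_ts" field):
stamp = beam.Map(lambda e: window.TimestampedValue(e, e["event_ts"]))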
You have some data, which is shown in the graphic below. The two dimensions are X and Y, and the shade of each dot represents what class it is. You want to classify this data accurately using a linear algorithm. To do this you need to add a synthetic feature. What should the value of that feature be?A. X^2+Y^2B. X^2C. Y^2D. cos(X)[Graphic: scatter plot with black points in the middle and upper part of the plot, and white points toward the bottom.]
63A. X^2+Y^2
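The feature is just the squared distance from the origin, which turns a ring-versus-center pattern into a linearly separable one. A small NumPy sketch:

import numpy as np

# X is an (n, 2) array of [x, y] points.
X = np.random.randn(100, 2)
radius_sq = X[:, 0] ** 2 + X[:, 1] ** 2        # the synthetic feature X^2 + Y^2
X_augmented = np.column_stack([X, radius_sq])  # a linear model can now split on radius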
You are integrating one of your internal IT applications with Google BigQuery, so users can query BigQuery from the application's interface. You do not want individual users to authenticate to BigQuery and you do not want to give them access to the dataset. You need to securely access BigQuery from your IT application. What should you do?A. Create groups for your users and give those groups access to the datasetB. Integrate with a single sign-on (SSO) platform, and pass each user's credentials along with the query requestC. Create a service account and grant dataset access to that account. Use the service account's private key to access the datasetD. Create a dummy user and grant dataset access to that user. Store the username and password for that user in a file on the file system, and use those credentials to access the BigQuery dataset
64C. Create a service account and grant dataset access to that account. Use the service account’s private key to access the dataset
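A sketch of answer C with the BigQuery Python client; the key file path, table, and query are hypothetical:

from google.cloud import bigquery

# The application authenticates as the service account, not as end users.
client = bigquery.Client.from_service_account_json("/secrets/bq-app-sa.json")  # hypothetical key path
for row in client.query("SELECT name FROM `my-project.mydataset.mytable` LIMIT 10").result():
    print(row.name)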
You are building a data pipeline on Google Cloud. You need to prepare data using a casual method for a machine-learning process. You want to support a logistic regression model. You also need to monitor and adjust for null values, which must remain real-valued and cannot be removed. What should you do?A. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataproc job.B. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 0 using a Cloud Dataprep job.C. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataprep job.D. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 0 using a custom script.
65B. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 0 using a Cloud Dataprep job.
You set up a streaming data insert into a Redis cluster via a Kafka cluster. Both clusters are running on Compute Engine instances. You need to encrypt data at rest with encryption keys that you can create, rotate, and destroy as needed. What should you do?A. Create a dedicated service account, and use encryption at rest to reference your data stored in your Compute Engine cluster instances as part of your API service calls.B. Create encryption keys in Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.C. Create encryption keys locally. Upload your encryption keys to Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.D. Create encryption keys in Cloud Key Management Service. Reference those keys in your API service calls when accessing the data in your Compute Engine cluster instances.
66B. Create encryption keys in Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.
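A sketch of calling Cloud KMS to wrap data before it lands on disk; the project, key ring, and key names are hypothetical:

from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path("my-project", "global", "streaming-ring", "redis-at-rest")

encrypted = client.encrypt(request={"name": key_name, "plaintext": b"record bytes"})
# Store encrypted.ciphertext; rotate or destroy key versions in KMS as policy requires.
decrypted = client.decrypt(request={"name": key_name, "ciphertext": encrypted.ciphertext})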
You are developing an application that uses a recommendation engine on Google Cloud. Your solution should display new videos to customers based on past views. Your solution needs to generate labels for the entities in videos that the customer has viewed. Your design must be able to provide very fast filtering suggestions based on data from other customer preferences on several TB of data. What should you do?A. Build and train a complex classification model with Spark MLlib to generate labels and filter the results. Deploy the models using Cloud Dataproc. Call the model from your application.B. Build and train a classification model with Spark MLlib to generate labels. Build and train a second classification model with Spark MLlib to filter results to match customer preferences. Deploy the models using Cloud Dataproc. Call the models from your application.C. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted labels to match the user’s viewing history to generate preferences.D. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud SQL, and join and filter the predicted labels to match the user’s viewing history to generate preferences.
67C. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted labels to match the user’s viewing history to generate preferences.
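A sketch of the label-generation half of answer C with the Video Intelligence API client; the GCS URI is hypothetical:

from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(request={
    "features": [videointelligence.Feature.LABEL_DETECTION],
    "input_uri": "gs://media-archive/views/clip.mp4",  # hypothetical video
})
result = operation.result(timeout=600)
labels = [a.entity.description
          for a in result.annotation_results[0].segment_label_annotations]
# Persist (video_id, labels) to Bigtable, keyed for fast per-user preference filtering.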
You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?A. Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.B. Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.D. Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.
68C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.
Your infrastructure includes a set of YouTube channels. You have been tasked with creating a process for sending the YouTube channel data to Google Cloud for analysis. You want to design a solution that allows your world-wide marketing teams to perform ANSI SQL and other types of analysis on up-to-date YouTube channels log data. How should you set up the log data transfer into Google Cloud?A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.B. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Regional bucket as a final destination.C. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.D. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Regional storage bucket as a final destination.
69A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
You are designing storage for very large text files for a data pipeline on Google Cloud. You want to support ANSI SQL queries. You also want to support compression and parallel load from the input locations using Google recommended practices. What should you do?A. Transform text files to compressed Avro using Cloud Dataflow. Use BigQuery for storage and query.B. Transform text files to compressed Avro using Cloud Dataflow. Use Cloud Storage and BigQuery permanent linked tables for query.C. Compress text files to gzip using the Grid Computing Tools. Use BigQuery for storage and query.D. Compress text files to gzip using the Grid Computing Tools. Use Cloud Storage, and then import into Cloud Bigtable for query.
70A. Transform text files to compressed Avro using Cloud Dataflow. Use BigQuery for storage and query.
You are developing an application on Google Cloud that will automatically generate subject labels for users’ blog posts. You are under competitive pressure to add this feature quickly, and you have no additional developer resources. No one on your team has experience with machine learning. What should you do?A. Call the Cloud Natural Language API from your application. Process the generated Entity Analysis as labels.B. Call the Cloud Natural Language API from your application. Process the generated Sentiment Analysis as labels.C. Build and train a text classification model using TensorFlow. Deploy the model using Cloud Machine Learning Engine. Call the model from your application and process the results as labels.D. Build and train a text classification model using TensorFlow. Deploy the model using a Kubernetes Engine cluster. Call the model from your application and process the results as labels.
71A. Call the Cloud Natural Language API from your application. Process the generated Entity Analysis as labels.
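A sketch of answer A: send the post text to the Natural Language API and keep salient entities as labels. The salience cutoff is illustrative:

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
post_text = "..."  # the blog post body
doc = language_v1.Document(content=post_text, type_=language_v1.Document.Type.PLAIN_TEXT)
entities = client.analyze_entities(document=doc).entities
labels = [e.name for e in entities if e.salience > 0.01]  # keep the most salient entities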
You are designing storage for 20 TB of text files as part of deploying a data pipeline on Google Cloud. Your input data is in CSV format. You want to minimize the cost of querying aggregate values for multiple users who will query the data in Cloud Storage with multiple engines. Which storage service and schema design should you use?A. Use Cloud Bigtable for storage. Install the HBase shell on a Compute Engine instance to query the Cloud Bigtable data.B. Use Cloud Bigtable for storage. Link as permanent tables in BigQuery for query.C. Use Cloud Storage for storage. Link as permanent tables in BigQuery for query.D. Use Cloud Storage for storage. Link as temporary tables in BigQuery for query.
72C. Use Cloud Storage for storage. Link as permanent tables in BigQuery for query.
You are designing storage for two relational tables that are part of a 10-TB database on Google Cloud. You want to support transactions that scale horizontally. You also want to optimize data for range queries on non-key columns. What should you do?A. Use Cloud SQL for storage. Add secondary indexes to support query patterns.B. Use Cloud SQL for storage. Use Cloud Dataflow to transform data to support query patterns.C. Use Cloud Spanner for storage. Add secondary indexes to support query patterns.D. Use Cloud Spanner for storage. Use Cloud Dataflow to transform data to support query patterns.
73C. Use Cloud Spanner for storage. Add secondary indexes to support query patterns.
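Answer C in practice is a one-line DDL statement; a sketch with the Spanner Python client and hypothetical instance, database, table, and column names:

from google.cloud import spanner

db = spanner.Client().instance("prod-instance").database("orders-db")
# A secondary index makes range scans on the non-key column cheap.
op = db.update_ddl(["CREATE INDEX OrdersByShipDate ON Orders(ShipDate)"])
op.result(timeout=300)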
Your financial services company is moving to cloud technology and wants to store 50 TB of financial time-series data in the cloud. This data is updated frequently and new data will be streaming in all the time. Your company also wants to move their existing Apache Hadoop jobs to the cloud to get insights into this data. Which product should they use to store the data?A. Cloud BigtableB. Google BigQueryC. Google Cloud StorageD. Google Cloud Datastore
74A. Cloud Bigtable
An organization maintains a Google BigQuery dataset that contains tables with user-level data. They want to expose aggregates of this data to other Google Cloud projects, while still controlling access to the user-level data. Additionally, they need to minimize their overall storage cost and ensure the analysis cost for other projects is assigned to those projects. What should they do?A. Create and share an authorized view that provides the aggregate results.B. Create and share a new dataset and view that provides the aggregate results.C. Create and share a new dataset and table that contains the aggregate results.D. Create dataViewer Identity and Access Management (IAM) roles on the dataset to enable sharing.
75A. Create and share an authorized view that provides the aggregate results.
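A sketch of wiring up an authorized view with the BigQuery Python client (this mirrors the documented access-entry pattern); the dataset and view names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()
source = client.get_dataset("my-project.private_userdata")      # holds user-level tables
view = client.get_table("my-project.shared_aggregates.rollup")  # the aggregate view

entries = list(source.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source.access_entries = entries
client.update_dataset(source, ["access_entries"])
# Consumers query the view from their own projects, paying their own analysis costs.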
Government regulations in your industry mandate that you have to maintain an auditable record of access to certain types of data. Assuming that all expiring logs will be archived correctly, where should you store data that is subject to that mandate?A. Encrypted on Cloud Storage with user-supplied encryption keys. A separate decryption key will be given to each authorized user.B. In a BigQuery dataset that is viewable only by authorized personnel, with the Data Access log used to provide the auditability.C. In Cloud SQL, with separate database user names to each user. The Cloud SQL Admin activity logs will be used to provide the auditability.D. In a bucket on Cloud Storage that is accessible only by an AppEngine service that collects user information and logs the access before providing a link to the bucket.
76B. In a BigQuery dataset that is viewable only by authorized personnel, with the Data Access log used to provide the auditability.
Your neural network model is taking days to train. You want to increase the training speed. What can you do?A. Subsample your test dataset.B. Subsample your training dataset.C. Increase the number of input features to your model.D. Increase the number of layers in your neural network.
77B. Subsample your training dataset.
You are responsible for writing your company’s ETL pipelines to run on an Apache Hadoop cluster. The pipeline will require some checkpointing and splitting pipelines. Which method should you use to write the pipelines?A. PigLatin using PigB. HiveQL using HiveC. Java using MapReduceD. Python using MapReduce
78A. PigLatin using Pig
Your company maintains a hybrid deployment with GCP, where analytics are performed on your anonymized customer data. The data are imported to Cloud Storage from your data center through parallel uploads to a data transfer server running on GCP. Management informs you that the daily transfers take too long and have asked you to fix the problem. You want to maximize transfer speeds. Which action should you take?A. Increase the CPU size on your server.B. Increase the size of the Google Persistent Disk on your server.C. Increase your network bandwidth from your datacenter to GCP.D. Increase your network bandwidth from Compute Engine to Cloud Storage.
79C. Increase your network bandwidth from your datacenter to GCP.
MJTelco Case Study (see above).
MJTelco is building a custom interface to share data. They have these requirements:1. They need to do aggregations over their petabyte-scale datasets.2. They need to scan specific time range rows with a very fast response time (milliseconds).Which combination of Google Cloud Platform products should you recommend?A. Cloud Datastore and Cloud BigtableB. Cloud Bigtable and Cloud SQLC. BigQuery and Cloud BigtableD. BigQuery and Cloud Storage
80C. BigQuery and Cloud Bigtable
MJTelco Case Study (see above).
You need to compose visualizations for operations teams with the following requirements:✑ Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute).✑ The report must not be more than 3 hours delayed from live data.✑ The actionable report should only show suboptimal links.✑ Most suboptimal links should be sorted to the top.✑ Suboptimal links can be grouped and filtered by regional geography.✑ User response time to load the report must be <5 seconds.You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month. What should you do?A. Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.B. Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.C. Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs.D. Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.
81B. Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.
MJTelco Case Study (see above).
Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day's events. They also want to use streaming ingestion. What should you do?A. Create a table called tracking_table and include a DATE column.B. Create a partitioned table called tracking_table and include a TIMESTAMP column.C. Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.D. Create a table called tracking_table with a TIMESTAMP column to represent the day.
82B. Create a partitioned table called tracking_table and include a TIMESTAMP column.
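A sketch of answer B: one partitioned tracking_table, partitioned on a TIMESTAMP column so daily queries scan only one partition. The schema below is illustrative:

from google.cloud import bigquery

client = bigquery.Client()
table = bigquery.Table("my-project.telemetry.tracking_table", schema=[
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("device_id", "STRING"),
    bigquery.SchemaField("payload", "STRING"),
])
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts")
client.create_table(table)  # streaming inserts route rows to the right day partition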
Flowlogistic Case Study -
Company Overview -
Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.
Company Background -
The company started as a regional trucking company, and then expanded into other logistics markets. Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources.
Solution Concept -
Flowlogistic wants to implement two concepts using the cloud:✑ Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads.✑ Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy resources and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.
Existing Technical Environment -
Flowlogistic architecture resides in a single data center:
✑ Databases: 8 physical servers in 2 clusters - SQL Server (user data, inventory, static data); 3 physical servers - Cassandra (metadata, tracking messages); 10 Kafka servers (tracking message aggregation and batch insert)
✑ Application servers (customer front end, middleware for order/customs): 60 virtual machines across 20 physical servers - Tomcat (Java services), Nginx (static content), batch servers
✑ Storage appliances: iSCSI for virtual machine (VM) hosts; Fibre Channel storage area network (FC SAN) for SQL Server storage; network-attached storage (NAS) for image storage, logs, backups
✑ 10 Apache Hadoop/Spark servers: core data lake, data analysis workloads
✑ 20 miscellaneous servers: Jenkins, monitoring, bastion hosts
Business Requirements -
✑ Build a reliable and reproducible environment with scaled parity of production.✑ Aggregate data in a centralized data lake for analysis.✑ Use historical data to perform predictive analytics on future shipments.✑ Accurately track every shipment worldwide using proprietary technology.✑ Improve business agility and speed of innovation through rapid provisioning of new resources.✑ Analyze and optimize architecture for performance in the cloud.✑ Migrate fully to the cloud if all other requirements are met.
Technical Requirements -
✑ Handle both streaming and batch data.✑ Migrate existing Hadoop workloads.✑ Ensure architecture is scalable and elastic to meet the changing demands of the company.✑ Use managed services whenever possible.✑ Encrypt data in flight and at rest.✑ Connect a VPN between the production data center and cloud environment.
CEO Statement -
We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around. We need to organize our information so we can more easily understand where our customers are and what they are shipping.
CTO Statement -
IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO's tracking technology.
CFO Statement -
Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment.
Flowlogistic's management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?A. Cloud Pub/Sub, Cloud Dataflow, and Cloud StorageB. Cloud Pub/Sub, Cloud Dataflow, and Local SSDC. Cloud Pub/Sub, Cloud SQL, and Cloud StorageD. Cloud Load Balancing, Cloud Dataflow, and Cloud StorageE. Cloud Dataflow, Cloud SQL, and Cloud Storage
A. Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage
After migrating ETL jobs to run on BigQuery, you need to verify that the output of the migrated jobs is the same as the output of the original. You've loaded a table containing the output of the original job and want to compare the contents with output from the migrated job to show that they are identical. The tables do not contain a primary key column that would enable you to join them together for comparison. What should you do?
A. Select random samples from the tables using the RAND() function and compare the samples.
B. Select random samples from the tables using the HASH() function and compare the samples.
C. Use a Dataproc cluster and the BigQuery Hadoop connector to read the data from each table and calculate a hash from non-timestamp columns of the table after sorting. Compare the hashes of each table.
D. Create stratified random samples using the OVER() function and compare equivalent samples from each table.
C. Use a Dataproc cluster and the BigQuery Hadoop connector to read the data from each table and calculate a hash from non-timestamp columns of the table after sorting. Compare the hashes of each table.
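For intuition, the underlying idea (an order-independent fingerprint over the non-timestamp columns) can also be sketched without a Dataproc cluster, using BigQuery SQL from the Python client. This is a minimal illustration of the technique, not the Dataproc-based approach named in the answer; the project, dataset, table, and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# BIT_XOR of per-row fingerprints is insensitive to row order, so two
# tables with identical contents yield identical values. Caveat: XOR
# cancels rows duplicated an even number of times, so it is a sanity
# check rather than a proof. All names below are hypothetical.
sql = """
SELECT BIT_XOR(FARM_FINGERPRINT(TO_JSON_STRING(t))) AS table_hash
FROM (
  SELECT * EXCEPT(load_time)  -- exclude the volatile timestamp column
  FROM `my_project.etl_output.original_job`
) AS t
"""
original_hash = list(client.query(sql).result())[0].table_hash
migrated_hash = list(
    client.query(sql.replace("original_job", "migrated_job")).result()
)[0].table_hash

print("Outputs match:", original_hash == migrated_hash)
```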
You are a head of BI at a large enterprise company with multiple business units that each have different priorities and budgets. You use on-demand pricing for BigQuery with a quota of 2K concurrent on-demand slots per project. Users at your organization sometimes don't get slots to execute their queries, and you need to correct this. You'd like to avoid introducing new projects to your account. What should you do?
A. Convert your batch BQ queries into interactive BQ queries.
B. Create an additional project to overcome the 2K on-demand per-project quota.
C. Switch to flat-rate pricing and establish a hierarchical priority model for your projects.
D. Increase the amount of concurrent slots per project at the Quotas page at the Cloud Console.
C. Switch to flat-rate pricing and establish a hierarchical priority model for your projects.
You have an Apache Kafka cluster on-prem with topics containing web application logs. You need to replicate the data to Google Cloud for analysis in BigQuery and Cloud Storage. The preferred replication method is mirroring, to avoid deployment of Kafka Connect plugins. What should you do?
A. Deploy a Kafka cluster on GCE VM instances. Configure your on-prem cluster to mirror your topics to the cluster running in GCE. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS.
B. Deploy a Kafka cluster on GCE VM instances with the Pub/Sub Kafka connector configured as a Sink connector. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS.
C. Deploy the Pub/Sub Kafka connector to your on-prem Kafka cluster and configure Pub/Sub as a Source connector. Use a Dataflow job to read from Pub/Sub and write to GCS.
D. Deploy the Pub/Sub Kafka connector to your on-prem Kafka cluster and configure Pub/Sub as a Sink connector. Use a Dataflow job to read from Pub/Sub and write to GCS.
A. Deploy a Kafka cluster on GCE VM instances. Configure your on-prem cluster to mirror your topics to the cluster running in GCE. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS.
You've migrated a Hadoop job from an on-prem cluster to Dataproc and GCS. Your Spark job is a complicated analytical workload that consists of many shuffling operations, and the initial data are Parquet files (on average 200-400 MB each). You see some degradation in performance after the migration to Dataproc, so you'd like to optimize for it. Keep in mind that your organization is very cost-sensitive, so you'd like to continue using Dataproc on preemptibles (with only 2 non-preemptible workers) for this workload. What should you do?
A. Increase the size of your Parquet files to ensure they are at least 1 GB.
B. Switch to TFRecords format (approx. 200 MB per file) instead of Parquet files.
C. Switch from HDDs to SSDs, copy the initial data from GCS to HDFS, run the Spark job, and copy the results back to GCS.
D. Switch from HDDs to SSDs, and override the preemptible VM configuration to increase the boot disk size.
A. Increase the size of your Parquet files to ensure they are at least 1 GB.
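Compacting the small Parquet inputs into larger files can be a one-off Spark job. A minimal PySpark sketch, assuming hypothetical GCS paths and guessing that 64 output partitions lands each file near the 1 GB target (you would tune this from the actual input size):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-parquet").getOrCreate()

# Read the many small (~200-400 MB) files and rewrite them as fewer,
# larger files to cut per-file open/seek overhead in the shuffles.
# Paths and the partition count are hypothetical.
df = spark.read.parquet("gs://example-bucket/input/")
df.repartition(64).write.mode("overwrite").parquet(
    "gs://example-bucket/input_compacted/")
```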
Your team is responsible for developing and maintaining ETLs in your company. One of your Dataflow jobs is failing because of some errors in the input data, and you need to improve the reliability of the pipeline (including being able to reprocess all failing data). What should you do?
A. Add a filtering step to skip these types of errors in the future, and extract erroneous rows from logs.
B. Add a try… catch block to your DoFn that transforms the data, and extract erroneous rows from logs.
C. Add a try… catch block to your DoFn that transforms the data, and write erroneous rows to Pub/Sub directly from the DoFn.
D. Add a try… catch block to your DoFn that transforms the data, and use a sideOutput to create a PCollection that can be stored to Pub/Sub later.
D. Add a try… catch block to your DoFn that transforms the data, and use a sideOutput to create a PCollection that can be stored to Pub/Sub later.
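The Java SDK's sideOutput corresponds to tagged outputs in the Beam Python SDK. A minimal sketch of the dead-letter pattern, where transform_record and the tag name are hypothetical stand-ins:

```python
import apache_beam as beam
from apache_beam import pvalue

DEAD_LETTER_TAG = "dead_letter"

def transform_record(element):
    # Hypothetical transform; raises on malformed input.
    return element.strip().upper()

class TransformFn(beam.DoFn):
    def process(self, element):
        try:
            yield transform_record(element)
        except Exception:
            # Route the failing element to a tagged output instead of
            # failing the pipeline; this PCollection can later be
            # written to Pub/Sub and reprocessed.
            yield pvalue.TaggedOutput(DEAD_LETTER_TAG, element)

with beam.Pipeline() as p:
    records = p | beam.Create(["ok-1", None, "ok-2"])  # None exercises the except path
    results = records | beam.ParDo(TransformFn()).with_outputs(
        DEAD_LETTER_TAG, main="ok")
    good, failed = results.ok, results[DEAD_LETTER_TAG]
```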
You're training a model to predict housing prices based on an available dataset with real estate properties. Your plan is to train a fully connected neural net, and you've discovered that the dataset contains latitude and longitude of the property. Real estate professionals have told you that the location of the property is highly influential on price, so you'd like to engineer a feature that incorporates this physical dependency. What should you do?
A. Provide latitude and longitude as input vectors to your neural net.
B. Create a numeric column from a feature cross of latitude and longitude.
C. Create a feature cross of latitude and longitude, bucketize it at the minute level, and use L1 regularization during optimization.
D. Create a feature cross of latitude and longitude, bucketize it at the minute level, and use L2 regularization during optimization.
C. Create a feature cross of latitude and longitude, bucketize it at the minute level, and use L1 regularization during optimization.
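As a sketch of what that feature engineering looks like with TensorFlow's (now legacy) feature-column API; the coordinate ranges and hash size below are illustrative guesses, not values from the question:

```python
import tensorflow as tf

# Bucketize each coordinate at roughly minute-level (1/60 degree)
# resolution over a hypothetical region of interest.
latitude = tf.feature_column.numeric_column("latitude")
longitude = tf.feature_column.numeric_column("longitude")
lat_buckets = tf.feature_column.bucketized_column(
    latitude, boundaries=[b / 60.0 for b in range(33 * 60, 42 * 60)])
lon_buckets = tf.feature_column.bucketized_column(
    longitude, boundaries=[b / 60.0 for b in range(-125 * 60, -114 * 60)])

# Crossing the bucketized columns gives the model a weight per small
# lat/lon cell; L1 regularization during optimization then zeroes out
# cells with no signal, keeping the model sparse.
location = tf.feature_column.crossed_column(
    [lat_buckets, lon_buckets], hash_bucket_size=20000)
```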
You are deploying MariaDB SQL databases on GCE VM instances and need to configure monitoring and alerting. You want to collect metrics, including network connections, disk IO, and replication status, from MariaDB with minimal development effort, and use Stackdriver for dashboards and alerts. What should you do?
A. Install the OpenCensus Agent and create a custom metric collection application with a Stackdriver exporter.
B. Place the MariaDB instances in an Instance Group with a Health Check.
C. Install the Stackdriver Logging Agent and configure the fluentd in_tail plugin to read MariaDB logs.
D. Install the Stackdriver Agent and configure the MySQL plugin.
D. Install the Stackdriver Agent and configure the MySQL plugin.
You work for a bank. You have a labelled dataset that contains information on already granted loan applications and whether these applications have defaulted. You have been asked to train a model to predict default rates for credit applicants. What should you do?
A. Increase the size of the dataset by collecting additional data.
B. Train a linear regression to predict a credit default risk score.
C. Remove the bias from the data and collect applications that have been declined loans.
D. Match loan applicants with their social profiles to enable feature engineering.
C. Remove the bias from the data and collect applications that have been declined loans.
You need to migrate a 2 TB relational database to Google Cloud Platform. You do not have the resources to significantly refactor the application that uses this database, and cost to operate is of primary concern. Which service do you select for storing and serving your data?
A. Cloud Spanner
B. Cloud Bigtable
C. Cloud Firestore
D. Cloud SQL
D. Cloud SQL
You're using Bigtable for a real-time application, and you have a heavy load that is a mix of reads and writes. You've recently identified an additional use case and need to perform an hourly analytical job to calculate certain statistics across the whole database. You need to ensure both the reliability of your production application and of the analytical workload. What should you do?
A. Export a Bigtable dump to GCS and run your analytical job on top of the exported files.
B. Add a second cluster to the existing instance with multi-cluster routing; use a live-traffic app profile for your regular workload and a batch-analytics profile for the analytics workload.
C. Add a second cluster to the existing instance with single-cluster routing; use a live-traffic app profile for your regular workload and a batch-analytics profile for the analytics workload.
D. Double the size of your existing cluster and execute your analytics workload on the resized cluster.
C. Add a second cluster to the existing instance with single-cluster routing; use a live-traffic app profile for your regular workload and a batch-analytics profile for the analytics workload.
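A sketch of adding the second cluster and a single-cluster-routing app profile with the Python admin client; the project, instance, cluster, and zone names are hypothetical, and this is a minimal illustration rather than a production setup:

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("prod-instance")

# Add a second cluster to serve the hourly analytics job (names and
# zone are hypothetical).
analytics_cluster = instance.cluster(
    "analytics-cluster", location_id="us-central1-b", serve_nodes=3)
analytics_cluster.create()

# Single-cluster routing pins each app profile to one cluster, so the
# batch job cannot steal serving capacity from live traffic.
batch_profile = instance.app_profile(
    "batch-analytics",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="analytics-cluster",
)
batch_profile.create()
```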
You are designing an Apache Beam pipeline to enrich data from Cloud Pub/Sub with static reference data from BigQuery. The reference data is small enough to fit in memory on a single worker. The pipeline should write enriched results to BigQuery for analysis. Which job type and transforms should this pipeline use?
A. Batch job, PubSubIO, side-inputs
B. Streaming job, PubSubIO, JdbcIO, side-outputs
C. Streaming job, PubSubIO, BigQueryIO, side-inputs
D. Streaming job, PubSubIO, BigQueryIO, side-outputs
C. Streaming job, PubSubIO, BigQueryIO, side-inputs
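A minimal streaming sketch of that shape in the Beam Python SDK, assuming the Dataflow runner and hypothetical topic, query, table, and field names:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    # Small, bounded reference data read once from BigQuery and
    # broadcast to every worker as an in-memory side input.
    reference = (
        p
        | "ReadRef" >> beam.io.ReadFromBigQuery(
            query="SELECT key, label FROM `my_project.ref.lookup`",
            use_standard_sql=True)
        | "ToKV" >> beam.Map(lambda row: (row["key"], row["label"])))

    enriched = (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/events")
        | "Enrich" >> beam.Map(
            lambda msg, ref: {"raw": msg.decode("utf-8"),
                              "label": ref.get(msg.decode("utf-8"))},
            ref=beam.pvalue.AsDict(reference)))

    enriched | "WriteBQ" >> beam.io.WriteToBigQuery(
        "my_project:analysis.enriched",
        schema="raw:STRING,label:STRING")
```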
You have a data pipeline that writes data to Cloud Bigtable using well-designed row keys. You want to monitor your pipeline to determine when to increase the size of your Cloud Bigtable cluster. Which two actions can you take to accomplish this? (Choose two.)
A. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Read pressure index is above 100.
B. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Write pressure index is above 100.
C. Monitor the latency of write operations. Increase the size of the Cloud Bigtable cluster when there is a sustained increase in write latency.
D. Monitor storage utilization. Increase the size of the Cloud Bigtable cluster when utilization increases above 70% of max capacity.
E. Monitor the latency of read operations. Increase the size of the Cloud Bigtable cluster if read operations take longer than 100 ms.
C. Monitor the latency of write operations. Increase the size of the Cloud Bigtable cluster when there is a sustained increase in write latency.
D. Monitor storage utilization. Increase the size of the Cloud Bigtable cluster when utilization increases above 70% of max capacity.
You want to analyze hundreds of thousands of social media posts daily at the lowest cost and with the fewest steps. You have the following requirements:
✑ You will batch-load the posts once per day and run them through the Cloud Natural Language API.
✑ You will extract topics and sentiment from the posts.
✑ You must store the raw posts for archiving and reprocessing.
✑ You will create dashboards to be shared with people both inside and outside your organization.
You need to store both the data extracted from the API to perform analysis as well as the raw social media posts for historical archiving. What should you do?
A. Store the social media posts and the data extracted from the API in BigQuery.
B. Store the social media posts and the data extracted from the API in Cloud SQL.
C. Store the raw social media posts in Cloud Storage, and write the data extracted from the API into BigQuery.
D. Feed the social media posts into the API directly from the source, and write the extracted data from the API into BigQuery.
C. Store the raw social media posts in Cloud Storage, and write the data extracted from the API into BigQuery.
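A sketch of the daily batch step, assuming hypothetical bucket, prefix, dataset, and table names:

```python
from google.cloud import bigquery, language_v1, storage

nl_client = language_v1.LanguageServiceClient()
bq_client = bigquery.Client()
gcs_client = storage.Client()

rows = []
# Raw posts stay in Cloud Storage for archiving and reprocessing; only
# the extracted signals land in BigQuery. Names are hypothetical.
for blob in gcs_client.list_blobs("example-posts", prefix="2024-01-15/"):
    text = blob.download_as_text()
    doc = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = nl_client.analyze_sentiment(
        request={"document": doc}).document_sentiment
    rows.append({"uri": f"gs://example-posts/{blob.name}",
                 "score": sentiment.score,
                 "magnitude": sentiment.magnitude})

errors = bq_client.insert_rows_json("my_project.social.post_sentiment", rows)
assert not errors, errors
```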
You store historic data in Cloud Storage. You need to perform analytics on the historic data. You want to use a solution that detects invalid data entries and performs data transformations without requiring programming or knowledge of SQL. What should you do?
A. Use Cloud Dataflow with Beam to detect errors and perform transformations.
B. Use Cloud Dataprep with recipes to detect errors and perform transformations.
C. Use Cloud Dataproc with a Hadoop job to detect errors and perform transformations.
D. Use federated tables in BigQuery with queries to detect errors and perform transformations.
B. Use Cloud Dataprep with recipes to detect errors and perform transformations.
Your company needs to upload their historic data to Cloud Storage. The security rules don't allow access from external IPs to their on-premises resources. After an initial upload, they will add new data from existing on-premises applications every day. What should they do?
A. Execute gsutil rsync from the on-premises servers.
B. Use Cloud Dataflow and write the data to Cloud Storage.
C. Write a job template in Cloud Dataproc to perform the data transfer.
D. Install an FTP server on a Compute Engine VM to receive the files and move them to Cloud Storage.
A. Execute gsutil rsync from the on-premises servers.
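Because every connection is initiated from on-premises, this satisfies the no-inbound-access rule. The daily job can be as simple as a scheduled command; a sketch wrapped in Python for consistency with the other examples, with hypothetical paths:

```python
import subprocess

# rsync copies only new or changed objects, keeping the daily
# incremental upload cheap; -m parallelizes, -r recurses. The local
# directory and bucket names are hypothetical.
subprocess.run(
    ["gsutil", "-m", "rsync", "-r", "/data/exports",
     "gs://example-historic-data/exports"],
    check=True,
)
```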