Question 137

You have a data pipeline with a Dataflow job that aggregates and writes time series metrics to Bigtable. You notice that data is slow to update in Bigtable. This data feeds a dashboard used by thousands of users across the organization. You need to support additional concurrent users and reduce the amount of time required to write the data. Which two actions should you take? (Choose two.)

  • A. Configure your Dataflow pipeline to use local execution
  • B. Increase the maximum number of Dataflow workers by setting maxNumWorkers in PipelineOptions
  • C. Increase the number of nodes in the Bigtable cluster
  • D. Modify your Dataflow pipeline to use the Flatten transform before writing to Bigtable
  • E. Modify your Dataflow pipeline to use the CoGroupByKey transform before writing to Bigtable
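Option B refers to the worker ceiling for Dataflow autoscaling. A minimal sketch of how that setting might be applied, assuming the Apache Beam Java SDK and the Dataflow runner (the project, job, and worker count shown are placeholders, not part of the question):

```java
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class MetricsToBigtable {
  public static void main(String[] args) {
    // Parse standard pipeline arguments, then view them as Dataflow options.
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);

    // Raise the autoscaling ceiling so more workers can be added when the
    // aggregation and Bigtable writes fall behind (hypothetical value).
    options.setMaxNumWorkers(50);

    Pipeline pipeline = Pipeline.create(options);
    // ... read, window, and aggregate the time series metrics here,
    // then write the results to Bigtable (e.g. with BigtableIO.write()) ...
    pipeline.run();
  }
}
```

The same value can also be supplied on the command line as `--maxNumWorkers=50`. Scaling the write side (option C, adding Bigtable nodes) is done on the cluster itself rather than in the pipeline code.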