Question 127

You are working on a niche product in the image recognition domain. Your team has developed a model whose training time is dominated by custom TensorFlow ops that your team has implemented in C++. These ops are used inside your main training loop and perform bulky matrix multiplications. Training a model currently takes up to several days. You want to decrease this time significantly while keeping costs low by using an accelerator on Google Cloud. What should you do?

  • A. Use Cloud TPUs without any additional adjustment to your code.
  • B. Use Cloud TPUs after implementing GPU kernel support for your custom ops.
  • C. Use Cloud GPUs after implementing GPU kernel support for your custom ops.
  • D. Stay on CPUs, and increase the size of the cluster you're training your model on.

https://cloud.google.com/tpu/docs/intro-to-tpu
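For context on what "implementing GPU kernel support for your custom ops" involves: a custom TensorFlow op written in C++ only runs on a GPU if a kernel is explicitly registered for `DEVICE_GPU` (Cloud TPUs, per the linked documentation, are not suited to workloads built around custom C++ ops). The following is a minimal, non-compilable sketch of that registration pattern; it assumes the TensorFlow C++ headers are available, and the op name `MyMatMul` and class `MyMatMulGpuOp` are hypothetical placeholders, not names from the question.

```cpp
// Sketch only: requires the TensorFlow C++ build environment to compile.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

// Declare the op's interface (hypothetical op name).
REGISTER_OP("MyMatMul")
    .Input("a: float")
    .Input("b: float")
    .Output("c: float");

// A kernel whose Compute() would launch device work, e.g. a CUDA
// kernel or a cuBLAS call, on the GPU stream.
class MyMatMulGpuOp : public OpKernel {
 public:
  explicit MyMatMulGpuOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
  void Compute(OpKernelContext* ctx) override {
    // Device-side matrix multiplication would be launched here.
  }
};

// Without this DEVICE_GPU registration, TensorFlow can only place the
// op on the CPU, and a GPU accelerator brings no speedup for it.
REGISTER_KERNEL_BUILDER(Name("MyMatMul").Device(DEVICE_GPU), MyMatMulGpuOp);
```

This illustrates why the answer hinges on the custom ops: an accelerator only helps once the ops that dominate the training loop can actually execute on it.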