Question 15

You have been asked to develop an input pipeline for an ML training model that processes images from disparate sources at low latency. You discover that your input data does not fit in memory. How should you create a dataset following Google-recommended best practices?

  • A. Create a tf.data.Dataset.prefetch transformation.
  • B. Convert the images to tf.Tensor objects, and then run Dataset.from_tensor_slices().
  • C. Convert the images to tf.Tensor objects, and then run tf.data.Dataset.from_tensors().
  • D. Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training.
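Option D describes the pattern Google documents for training data that does not fit in memory: serialize the images as TFRecord files, store them in Cloud Storage, and stream them with the tf.data API. Below is a minimal sketch of such a pipeline. The bucket path, file pattern, feature names (image_raw, label), and image size are hypothetical placeholders, and TensorFlow 2.x is assumed.

```python
import tensorflow as tf

# Hypothetical GCS location and schema -- replace with your own bucket and feature layout.
TFRECORD_PATTERN = "gs://my-bucket/images/train-*.tfrecord"

feature_spec = {
    "image_raw": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    """Decode one serialized tf.train.Example into an (image, label) pair."""
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image_raw"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, parsed["label"]

def make_dataset(batch_size=32):
    # List the TFRecord shards in Cloud Storage and read them in parallel.
    # Records are streamed from GCS, so the full dataset never has to fit in memory.
    files = tf.data.Dataset.list_files(TFRECORD_PATTERN)
    ds = files.interleave(
        tf.data.TFRecordDataset,
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    ds = ds.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.shuffle(buffer_size=1_000)
    ds = ds.batch(batch_size)
    # prefetch overlaps preprocessing with training, addressing the latency requirement.
    return ds.prefetch(tf.data.AUTOTUNE)
```

Note that prefetch (option A) is a useful optimization on top of such a pipeline, but by itself it does not solve the problem of data that exceeds memory, and from_tensor_slices / from_tensors (options B and C) require the tensors to be materialized in memory first.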