Welcome to IoTForums.com Community

Run ML workloads cheaper and faster with the latest GPUs

WebMaster

Administrator
Staff member
Dec 12, 2019
Running ML workloads more cost-effectively
Google Cloud wants to help you run your ML workloads as efficiently as possible. To do this, we offer many options for accelerating ML training and prediction, including many types of NVIDIA GPUs. This flexibility is designed to let you get the right tradeoff between cost and throughput during training or cost and latency for prediction.
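The cost/throughput tradeoff mentioned above can be made concrete with a little arithmetic: a GPU with a higher hourly rate can still be the cheaper choice per training run if its speedup exceeds its price premium. A minimal sketch, using made-up illustrative numbers (not real Google Cloud prices):

```python
# Hypothetical rates and runtimes for illustration only -- a GPU that
# costs 2x as much per hour but trains 3x faster is cheaper per job.
baseline_rate = 1.00    # $/hour for the slower GPU (assumed)
faster_rate = 2.00      # $/hour for the faster GPU (assumed)
baseline_hours = 30.0   # hours for one training run on the slower GPU (assumed)
speedup = 3.0           # throughput gain on the faster GPU (assumed)

baseline_cost = baseline_rate * baseline_hours          # 30.0 dollars
faster_cost = faster_rate * (baseline_hours / speedup)  # 20.0 dollars
```

Under these assumed numbers, the pricier GPU finishes the same job for a third less money, which is the sense in which a higher list price can still lower total cost.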

We recently reduced the price of NVIDIA T4 GPUs, making AI acceleration even more affordable. In this post, we’ll revisit some of the features of recent-generation GPUs, like the NVIDIA T4, V100, and P100. We’ll also touch on native 16-bit (half-precision) arithmetic and Tensor Cores, both of which provide significant performance boosts and cost savings. We’ll show you how to use these features, and how the performance benefit of using 16-bit and automatic mixed precision for training often outweighs the higher list price of NVIDIA’s newer GPUs.
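One reason mixed precision needs care is float16’s narrow dynamic range: very small gradient values underflow to zero when cast naively, which is why mixed-precision training pairs 16-bit arithmetic with loss scaling. A minimal NumPy sketch of the idea (the gradient values and scale factor are illustrative assumptions, not from any real model):

```python
import numpy as np

# Tiny gradient magnitudes like those seen late in training (assumed values).
grads = np.array([1e-5, 3e-6, 2e-8], dtype=np.float32)

# Naive cast to float16: the smallest value underflows to zero, because
# float16 subnormals bottom out near 6e-8.
naive = grads.astype(np.float16)

# Loss scaling: multiply before the cast, accumulate in float32, then
# divide the scale back out, so small gradients survive the 16-bit round trip.
scale = np.float32(1024.0)
scaled = (grads * scale).astype(np.float16)
recovered = scaled.astype(np.float32) / scale
```

Here the naively cast third gradient is exactly zero, while the scaled version recovers it to within a fraction of a percent; frameworks’ automatic mixed precision modes manage this scaling (and its dynamic adjustment) for you.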

More at link below