Optimization Algorithms for Distributed Machine Learning

Prices from € 40,99

Description

This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies to reduce communication or synchronization delays encounters a fundamental trade-off between error and runtime.
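
To make the synchronous SGD setup concrete, the following minimal single-process sketch simulates worker nodes that each compute a mini-batch gradient, after which a central step averages the gradients into one update. This sketch is not taken from the book; the toy least-squares problem and all names and hyperparameters (num_workers, batch_size, lr) are illustrative assumptions, and the workers are simulated by a plain loop rather than real distributed processes.

```python
import numpy as np

# Toy least-squares problem: minimize (1/2m) * ||X w - y||^2 over w.
rng = np.random.default_rng(0)
n, d, num_workers = 1024, 10, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def minibatch_gradient(w, batch_idx):
    """Stochastic gradient of the least-squares loss on one worker's mini-batch."""
    Xb, yb = X[batch_idx], y[batch_idx]
    return Xb.T @ (Xb @ w - yb) / len(batch_idx)

w = np.zeros(d)
lr, batch_size, num_iters = 0.05, 32, 200
for t in range(num_iters):
    # Each (simulated) worker draws its own mini-batch and computes a local gradient.
    local_grads = [
        minibatch_gradient(w, rng.choice(n, size=batch_size, replace=False))
        for _ in range(num_workers)
    ]
    # Synchronous step: wait for all workers, average their gradients, and apply
    # one update (one SGD step with an effective batch of num_workers * batch_size).
    w -= lr * np.mean(local_grads, axis=0)

print("distance to true weights:", np.linalg.norm(w - w_true))
```

The communication-efficient variants covered in the book (asynchronous, local-update, quantized and sparsified, and decentralized SGD) change how often or how exactly this averaging step is carried out, which is where the trade-off between error and runtime arises.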

Compare sellers (2)

Shop | Price | Shipping costs | Total price
Bol | € 40,99 (reduced from € 42,99) | Free | € 40,99
Amazon | € 44,99 | Free | € 44,99

Product specifications

Brand: Springer
EAN
  • 9783031190698
  • 9783031190667
Pages: 140
Edition: 1st ed. 2023
Binding: Paperback
