GPU Computing for AI
FP16 (Half-Precision)
A 16-bit floating-point format (IEEE 754 binary16: 1 sign bit, 5 exponent bits, 10 mantissa bits) used to accelerate computation and halve the memory footprint of neural networks relative to FP32, at the cost of reduced precision and a narrower dynamic range, a trade-off that is often acceptable in practice.
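To make the trade-off concrete, here is a minimal sketch (not part of the original entry) using NumPy, whose np.float16 type implements IEEE 754 binary16; the values and variable names are illustrative:

    import numpy as np

    # Precision: 0.1 is rounded to the nearest representable FP16 value.
    x = np.float16(0.1)
    print(float(x))             # 0.0999755859375 -- exact value stored

    # Dynamic range: FP16 tops out at 65504; larger magnitudes overflow.
    print(np.float16(65504.0))  # 65504.0 -- largest finite FP16 value
    print(np.float16(70000.0))  # inf     -- exceeds FP16's range

    # Memory footprint: FP16 takes half the bytes of the same FP32 array.
    a32 = np.ones(1_000_000, dtype=np.float32)
    a16 = a32.astype(np.float16)
    print(a32.nbytes, a16.nbytes)  # 4000000 2000000

The narrow dynamic range is why FP16 training commonly pairs with loss scaling or mixed precision, keeping sensitive accumulations in FP32.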