TensorRT-LLM evaluation of the new H200 GPU achieves 11,819 tokens/s on Llama2-13B
https://github.com/NVIDIA/TensorRT-LLM/blob/release/0.5.0/docs/source/blogs/H200launch.md
H200 is up to 1.9x faster than H100. This performance is enabled by H200's larger, faster HBM3e memory.