Using Private Data Center for AI Inferencing

Published On: March 18, 2024

Nvidia is expected to announce some hero numbers around AI model training during its GTC event. Most headlines go to cloud-scale companies with hundreds of thousands of GPUs. However, what do organizations need to process their own data for AI on-premises? Can AI inferencing be done on-premises? Dell sponsored @FuturumGroup and the CTO Advisor to look at a demo of retrieval-augmented generation (RAG) on a Dell Precision laptop with an Nvidia RTX 500 GPU, to get an idea of what can be accomplished outside the four walls of a cloud data center. CTO Advisor Alastair Cooke provides an overview of the RAG process and runs through the demo.
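The core idea behind RAG is simple: before the language model answers, the system retrieves the most relevant documents from a local corpus and prepends them to the prompt, so inference can be grounded in an organization's own data without retraining. The sketch below is a minimal, illustrative toy, not the software shown in the demo; it stands in a bag-of-words similarity for a real embedding model and omits the vector store and LLM call entirely.

```python
# Toy sketch of the retrieval step in a RAG pipeline (illustrative only;
# a real deployment would use a neural embedding model and a vector store).
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' standing in for an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    """Augment the user's question with retrieved context before inference."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The RTX 500 is a laptop GPU suitable for local AI inferencing.",
    "Cloud data centers train frontier models on thousands of GPUs.",
]
print(build_prompt("Can a laptop GPU handle inferencing?", corpus))
```

The augmented prompt would then be sent to a locally hosted model; because retrieval and inference both run on the laptop's GPU, the organization's data never leaves the premises.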
