Nvidia released Chat with RTX on Tuesday, February 13th. The company describes it as a personal AI chatbot. Users looking to download the software will need a Windows PC or workstation equipped with an RTX 30- or 40-series GPU with a minimum of 8GB of VRAM. Once downloaded, the app installs in a few clicks and can be used immediately.
Because it is a local chatbot, Chat with RTX has no knowledge of the outside world. However, users can feed it their personal data, such as documents and files, and point it at them to run queries. One use case could be feeding it a large volume of work-related documents and asking it to summarize, analyze, or answer a specific question that might otherwise take hours to find manually.
It supports the text, PDF, doc/docx, and XML file formats. The AI bot also accepts YouTube video and playlist URLs and can answer questions about or summarize videos using their transcripts. This functionality requires internet access.
According to the demo video, Chat with RTX runs as a local web server with a Python backend, and a fresh download does not include the large language model (LLM) weights. Users can choose between the Mistral and Llama 2 models and then run queries against their own data.
The company says the chatbot's functionality builds on Retrieval-Augmented Generation (RAG), the open-source TensorRT-LLM library, and RTX acceleration.
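To make the RAG idea concrete, here is a minimal sketch of the retrieval step: rank local documents by keyword overlap with the user's question, then hand the best match to the model as context. This is a toy illustration using only the Python standard library, not Nvidia's implementation; the function names (`retrieve`, `score`) and the sample documents are invented for this example, and a real system like Chat with RTX would use learned embeddings rather than token overlap.

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split into alphanumeric tokens.
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query_tokens, doc_tokens):
    # Toy relevance score: how many query tokens appear in the document,
    # counted with multiplicity.
    doc_counts = Counter(doc_tokens)
    return sum(doc_counts[t] for t in query_tokens)

def retrieve(query, documents, k=1):
    # Return the k documents with the highest overlap score.
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: score(q, tokenize(d)), reverse=True)
    return ranked[:k]

docs = [
    "TensorRT-LLM accelerates large language model inference on Nvidia GPUs.",
    "The quarterly report covers revenue and expenses for Q3.",
]
# The retrieved passage would be prepended to the prompt sent to the LLM.
context = retrieve("How is LLM inference accelerated?", docs)
print(context[0])  # the TensorRT-LLM sentence scores highest
```

The retrieved text is then injected into the model's prompt, which is what lets a fully local chatbot answer questions about files it was never trained on.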