The NVIDIA Augmented Reality SDK (AR SDK) is a comprehensive collection of AI-powered features for real-time modeling and tracking of human faces and bodies from video.
The AR SDK enables developers to build state-of-the-art video processing applications with AI-powered features, such as body pose estimation, face detection, landmark tracking, gaze redirection, and facial expressions. The SDK is powered by NVIDIA graphics processing units (GPUs) with Tensor Cores, supporting high throughput and low latency processing.
This repository contains lightweight sample applications showcasing AR SDK features. The applications process real-time webcam or video streams via the SDK, with processing varying by application. The source code demonstrates SDK usage.
Refer to the Get Started on Windows and Get Started on Linux sections of the AR SDK Documentation for the list of supported GPUs, operating systems, and NVIDIA graphics driver versions.
To access and compile the sample applications, the following prerequisites must be installed:
- Git: https://git-scm.com/install/
- Git LFS: https://github.com/git-lfs/git-lfs#installing
- CMake v3.21 or later: https://cmake.org/download/
- Microsoft Visual Studio 2022 (MSVC17.0) or later: https://visualstudio.microsoft.com/downloads/
- Ensure the Desktop development with C++ workload is selected and installed
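As a quick sanity check, the prerequisites above can be verified from a shell (for example, Git Bash on Windows or any Linux shell). This snippet only reports which tools are on PATH; it does not check versions:

```shell
# Report whether each required build tool is available on PATH.
for tool in git git-lfs cmake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```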
To build and run the sample applications, the NVIDIA AR SDK must be installed.
The AR SDK consists of the SDK Core and a set of optional features that can be downloaded and installed individually. The SDK Core and features are distributed through the NVIDIA GPU Cloud (NGC) platform.
The SDK Core includes the API headers, library files, and runtime dependencies. It does not include the libraries and models that are required to run any of the features. Once installed, the SDK Core provides a script to fetch and install features from NGC.
To install the SDK Core and features, navigate to the AR SDK Documentation and follow the installation instructions for Windows or Linux.
Each sample application requires a particular set of AR SDK features to be installed. See the README.md file in each sample application directory for details of the features required for that application.
Clone using git:
git clone git@github.com:NVIDIA-Maxine/AR-SDK-Samples.git
cd AR-SDK-Samples
Initialize git-lfs:
git lfs install
git lfs pull
By default, CMake will build all sample applications whose required features are installed. Any sample application that does not have all of its required features installed will be skipped.
Note: Some sample applications have additional requirements beyond those mentioned here. See the README.md in each individual application directory for specific requirements.
In a Visual Studio 2022 Developer Command Prompt:
cd AR-SDK-Samples
cmake.exe -S . -B build -G "Visual Studio 17 2022" -DARSDK_ROOT=</path/to/AR_SDK>
cmake.exe --build build --config Release
Replace </path/to/AR_SDK> with the root path of your AR SDK installation.
To run the sample applications, use the provided wrapper scripts run_<app name>_<mode>.bat, which set the required environment variables and run the application.
For example, to run FaceTrackApp with webcam input, built in Release configuration:
cd build\apps\FaceTrackApp\Release
run_facetrackapp_webcam.bat
- From Windows Start menu, open CMake GUI
- In "Where is the source code:" select the path to AR-SDK-Samples
- In "Where to build the binaries:" select the path to AR-SDK-Samples/build
- Click "Configure"
- Set the variable ARSDK_ROOT to the location where the AR SDK is installed
- Click "Configure" again
- Click "Generate"
- Click "Open Project" to open the generated solution file in Visual Studio
- In Visual Studio, select Build -> Build Solution (or press Ctrl+Shift+B)
For example, to run FaceTrackApp:
- Right-click FaceTrackApp in the Solution Explorer of Visual Studio
- Select Set as Startup Project
- Click Local Windows Debugger to run the app
For convenience, use the script build_samples.sh to install required dependencies and build all sample applications.
The script should not be run as root; it will prompt for elevated privileges to install dependencies if necessary.
The script will also prompt for the desired location to build the sample applications, which is set to ~/mysamples by default.
cd AR-SDK-Samples
./build_samples.sh
To run the sample applications, use the provided wrapper scripts run_<app name>_<mode>.sh, which set the required environment variables and run the application.
For example, to run FaceTrackApp with webcam input, built in the default location of ~/mysamples:
cd ~/mysamples/build/apps/FaceTrackApp
run_facetrackapp_webcam.sh
Triton client applications communicate with an NVIDIA Triton Inference Server, allowing inference to be processed off the client. The Triton backend application ships with the SDK and must run in a separate process from the client sample applications in this repository.
To set up the Triton server, follow the instructions in the Triton Installation section of the documentation.
To build Triton client applications, pass the flag -DENABLE_TRITON=ON to the CMake command during configuration. The flag will be enabled by default when running the build_samples.sh script on Linux.
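On Linux, a manual configure-and-build with Triton clients enabled might look like the following sketch (the SDK path is a placeholder for your own installation root):

```shell
# Configure with Triton client applications enabled, then build.
# Replace /path/to/AR_SDK with the root of your AR SDK installation.
cmake -S . -B build -DARSDK_ROOT=/path/to/AR_SDK -DENABLE_TRITON=ON
cmake --build build --config Release
```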
Before running the sample applications, you must start the Triton server by running the run_triton_server.sh script in the server package. Refer to the Triton Installation section of the Triton documentation for more details.
The sample applications need to be run as a separate process from the server. When running manually, the server and the sample applications can be run on separate terminals or using utilities such as tmux.
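One way to keep the server and a client side by side in a single terminal is a split tmux session. This is an illustrative sketch only; the session name, script locations, and the client script name are placeholders for your setup (see each Triton client app's README for the actual run scripts):

```shell
# Illustrative: Triton server in one pane, a client app in another.
tmux new-session -d -s triton_demo
tmux send-keys -t triton_demo './run_triton_server.sh' C-m
tmux split-window -t triton_demo
tmux send-keys -t triton_demo './run_facetracktritonclientapp_webcam.sh' C-m
tmux attach -t triton_demo
```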
Most features have a corresponding Triton client app. For example, FaceTrackTritonClientApp can be used to run Face Detection and Landmark Detection. Similarly, EyeContactTritonClient can be used to run the Eye Contact features.
See README.md in each Triton client app directory for details on how to run the corresponding app.
You can use a lossless codec, such as the Ut Video Codec, to save output video from the sample applications without compression artifacts. For example, to save an output video with the Ut codec, specify the option --codec=ULY0 in the command-line arguments of the application.
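For instance, assuming an offline-mode wrapper script exists for FaceTrackApp (the script and mode names here are illustrative, not confirmed), saving lossless output on Windows could look like:

```shell
# Hypothetical example: run offline processing, saving output with the
# Ut Video codec (--codec=ULY0) to avoid compression artifacts.
run_facetrackapp_offline.bat --codec=ULY0
```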
If the apps cannot find the required libraries, they may fail to run. Ensure you launch the apps through the provided wrapper scripts, named run_<app name>_<mode>.bat (Windows) or run_<app name>_<mode>.sh (Linux). These scripts set the environment variables required to run the app and ensure that the libraries required by the SDK and its dependencies can be loaded.
If CMake prints a warning message:
Required feature <feature name> is not available.
or
REQUIRED FEATURE: <feature name> VERSION: <version> is not available.
followed by
Skipping <app name>.
Make sure that the latest version of the feature is installed (see the section on Features under Setup).
Note that you can build a subset of the sample apps by installing only a subset of the features, for example, only those required for the sample apps you want to build.
An error like:
moov atom not found
followed by
Error: Could not open video
is likely due to OpenCV trying to interpret git-lfs pointer files (small text placeholders) as video. The larger files in this repository are managed with git-lfs, which must be installed and initialized in the repository before the applications can load the provided sample videos. See the section on accessing the sample code for how to initialize git-lfs.
The video codec error messages from OpenCV when running the applications in offline mode can be ignored; OpenCV should fall back to the default codec on both Windows and Linux. Example error messages:
"Could not open codec 'libopenh264': Unspecified error"
"OpenCV: FFMPEG: tag 0x34363248/'H264' is not supported with codec id 27 and format 'mp4 / MP4 (MPEG-4 Part 14)'"
Please refer to the online documentation guides.
- Software license - Refer to LICENSE
- Third party licenses - Refer to external/ThirdPartyLicenses.txt
- Sample data - Refer to resources/NVIDIA Sample Data License (2025.10.22).pdf