Hydra Hair Procedural Plugin


After playing around with the HoudiniProceduralAPI schema for Husk, I was very curious to see how the procedural system in Hydra worked. There’s been talk around Hydra 2.0 for a while, and Steve Lavietes’ presentation from 2022 demonstrates how it works and how it relates to procedurals.

Steve and Dirk Van Gelder held another great presentation about it at SIGGRAPH, which further explained how a procedural system can be implemented. This is where my interest really sparked, since Dirk shared some example code, and what they had done was not far from what I wanted to do, namely deform geometry. In my case, I want to deform curves to some target primitives.

The Husk procedural comes with a number of constraints, and the biggest drawback in my opinion is that it is limited to single-threaded execution. You’re also bound to invoke the procedural through Husk, not USD or Hydra directly. However, there are some great benefits to using the HoudiniProceduralAPI as well. For example, being able to create your deformer inside of Houdini is much easier than writing the deformer from scratch.

Introduction

This post is ultimately about Hydra 2.0 and how I used it to create a render-time procedural. I will cover setting up the development project with CMake, generating the schema, writing the scene index, adding Python bindings, and writing the deformer in OpenCL. I will skip a couple of things in order to keep this post shorter, and just touch briefly on others, but feel free to check the GitHub project for the entire code base.

The procedural in this case will deform some basis curves to a target geometry using OpenCL. In production, it’s preferred to generate or deform hair at render time because caching the groom to disk at each frame can result in massive storage overhead, especially for hair that could simply deform rigidly, such as peach fuzz or short hair. Previously, one would need to write these render-time procedurals as plugins for each render engine, which is a lot of work and requires using a different API for every renderer you’d like to support. Doing it in Hydra instead works as a “one-stop shop”: the render delegates read from the Hydra stage, so anything done in the Hydra stage is represented in the renderer as well.

Prerequisites

These are the tools and libraries I’ve used for this project.

  • USD 23.08 or later
  • OpenCL 1.2 or later
  • CMake 3.14 or later
  • Python 3.7
  • numpy (for testing)
  • C++ 17

I’ve built this on an M1 Mac running macOS Sonoma using clang, so some parts will probably differ if you’re building for another platform.

Getting Started

To begin, create a folder structure for the project:

projectname/
    |––build/
    |––usd/
    |––ocl/
    |––testenv/
    |––CMakeLists.txt

  • build/ – where our library will be built
  • usd/ – all USD-related source files
  • ocl/ – all OpenCL-related source files
  • testenv/ – for testing the procedural
  • CMakeLists.txt – the root CMake file, which we’ll fill in later

Then make sure you have the prerequisites installed. Only USD and CMake are required to start out, but further on we’ll make use of the other libraries as well. USD can be downloaded from the Pixar GitHub page, and CMake from the CMake website or through Homebrew or similar package managers. Next, we’ll create the base of our USD plugin.

The Schema

A schema in USD is a type of “tag” that can be applied to a USD prim, defining what that prim is or what it can do. These are some examples of USD schemas: UsdGeomMesh, UsdGeomSphere, UsdShadeMaterialBindingAPI …

Notice the last one has the API suffix, which means it’s an API schema. You can read more about what schemas are and how they differ here, but simply put, an isA schema defines that a prim is of a certain type, whilst an API schema can be applied on top of an isA schema and simply defines some functions and/or properties on that prim. In our case, we’ll want an API schema since we’re not defining a prim type, but rather an interface that can be used to set up the procedural.
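To make this concrete, here is roughly what applying such an API schema to a prim looks like in a .usda layer once the schema exists (the prim name here is just illustrative):

```usda
def BasisCurves "hair" (
    prepend apiSchemas = ["HairProceduralAPI"]
)
{
}
```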

Our schema will define a number of properties:

  • target (UsdRelationship – relationship to the target geometry)
  • prim (int array – the prim/face number of the target geometry each hair attaches to)
  • paramuv (float2 array – the barycentric coordinates of the prim/face each hair attaches to)
  • rest (float3 array – The rest positions of the target geometry)

These properties will be required by our deformer later on. The prim and paramuv properties will be arrays with one entry per hair curve, similar to a primitive attribute in Houdini. The rest property is a copy of the target points from which the curves were generated.
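To illustrate what prim and paramuv encode, here is a small self-contained sketch (plain Python, all names are mine) of how one could compute the barycentric parameters of a hair root against a target triangle, the kind of data you’d bake into these properties:

```python
# Hypothetical helper: compute the (v, w) barycentric parameters of point p
# inside triangle (a, b, c). Vectors are plain 3-tuples.
def sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1] + p[2] * q[2]

def barycentric(p, a, b, c):
    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (v, w)  # the third weight is u = 1 - v - w
```

With one such (v, w) pair per curve root plus the index of the face it was computed against, the deformer can later find the same surface location on the animated target.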

Making Your Own Schema

In the same link as before, there’s a basic introduction to generating new schema classes, namely using the usdGenSchema script that comes with USD.

I will try to explain to the best of my ability how I’ve set up my build system, but if it doesn’t make sense or you want a shorter version, I strongly recommend this GitHub repo, which I used as a guide when starting out.

To begin, create a schema.usda file inside the projectname/usd folder. In this file we’ll specify everything the schema should define, and the usdGenSchema script will create (some of) our source and header files for us. Here’s what mine looks like:

usd/schema.usda
usda
#usda 1.0
(
    """ This file contains an example schemata for code generation using
        usdGenSchema.
    """
    subLayers = [
        @usd/schema.usda@
    ]
)

over "GLOBAL" (
    customData = {
        string libraryName       = "hairProc"
        string libraryPath       = "."
        bool useLiteralIdentifier = 0
    }
) {}

class "HairProceduralAPI"
(
    inherits = </APISchemaBase>
    doc = """API for procedurally deforming a groom to an animated geometry. Apply this API to the hair geometry.
    Required attributes:
        - hairProc:target: The animated geometry
        - hairProc:prim: The prim each strand should attach to on the target
        - hairProc:paramuv: The barycentric coordinates of the prim each strand should attach to on the target
        - hairProc:rest: The capture positions of the target geometry
    """
    customData = {
        string className = "HairProceduralAPI"
    }
)
{
    rel hairProc:target (
        customData = {
            string apiName = "target"
        }
        doc = """The target on to which the hairs should attach to"""
    )

    int[] hairProc:prim = [] (
        customData = {
            string apiName = "prim"
        }
        doc = """The prim on the target that each strand should attach to"""
    )

    float2[] hairProc:paramuv = [] (
        customData = {
            string apiName = "paramuv"
        }
        doc = """The barycentric coordinates that the strand should attach to on the prim"""
    )

    float3[] hairProc:rest = [] (
        customData = {
            string apiName = "rest"
        }
        doc = """The rest positions of the captured target. If this isn't set, will try to use the rest attribute from the target"""
    )
}

The schema file says that we’ll create an applied API schema, the properties we want, the name of the schema (HairProceduralAPI) and the name of the library we’ll create (hairProc). Following the USD naming convention, the full name of the schema is then HairProcHairProceduralAPI (compare UsdGeomMesh, where UsdGeom is the library and Mesh is the schema).

Once the schema file is set up, go ahead and run:

Bash
usdGenSchema schema.usda

from a terminal within the usd folder of your project. This will create a number of files, whose names and content are generated from the schema.usda file:

  • hairProceduralAPI.cpp and .h (declares the API schema class and functions)
  • tokens.cpp and .h (registers the tokens for USD to know about our schema and properties)
  • wrapHairProceduralAPI.cpp (python wrapper for the API schema)
  • wrapTokens.cpp (python wrapper for tokens)
  • api.h (defines some C++ macros)
  • generatedschema.usda (final schema file)
  • plugInfo.json (plugin descriptor for USD to find our plugin)

These files compose the base of our schema plugin and could be built and used by USD straight away. But since we want to add more to the plugin, there’s more to do. First off, we can change some lines in the plugInfo.json file that will make our life a bit easier later on. Just set these key-value pairs:

JSON
{
...
"LibraryPath": "@PLUG_INFO_LIBRARY_PATH@", 
"Name": "hairProc",
"ResourcePath": "@PLUG_INFO_RESOURCE_PATH@",
"Root": "@PLUG_INFO_ROOT@", 
"Type": "library"
}

We’ll be able to substitute the text inside the @ symbols with CMake later, which lets us specify the install directory in the CMake script however we want. The values in this file tell USD about our plugin and where to find it, relative to this file.
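If configure_file is new to you, its effect on the file above can be pictured with this little Python sketch (an illustration of the substitution, not what CMake actually runs):

```python
import re

# Rough picture of CMake's configure_file(... @ONLY): every @VAR@ placeholder
# is swapped for that variable's value; unknown placeholders are left alone.
def configure(text, variables):
    return re.sub(r"@(\w+)@",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  text)
```

So at install time `"Root": "@PLUG_INFO_ROOT@"` becomes e.g. `"Root": ".."`, pointing USD at the plugin location relative to the plugInfo.json file.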

Setting up CMake

CMake will be our build system. It’s often described as the “standard” build system for C++. It is, however, quite tricky to set up in my opinion, especially compared to a language like Python where building is practically non-existent and package management is easy. But disregarding the mess of writing CMake files, it’s actually quite straightforward once you gain some understanding of it. Here’s an introduction which explains CMake better than I’ll ever be able to.

The general idea is to create one or more CMakeLists.txt files that instruct CMake how a program should be built and compiled: defining a target (.exe, .dll, .dylib, .so …), compiler flags, linking, header search paths, and so on. You can create multiple CMakeLists.txt files in multiple directories, each defining a separate target, and chain them together. CMake can generate a number of different outputs from the instructions in the CMakeLists.txt files, for example an Xcode or Visual Studio project. We’ll use CMake’s default generator to produce Makefiles though, which can be used to compile and install our targets. In my case, I create three targets: hairProc, _hairProc and ocl.

  • hairProc – the core plugin containing the API schema and our scene index plugin
  • _hairProc – the python library wrapping HairProc to python
  • ocl – the wrapper around OpenCL we’ll use for our deformer

To begin, we’ll fill out the CMakeLists.txt file in the root of the project that we made before. In this root file, we’ll specify some global settings such as our target names and our install path (where the final library will be installed), and finally we’ll add our subdirectories (CMake will look for CMakeLists.txt files within the added subdirectories; we’ll create those later).

CMakeLists.txt
CMake
cmake_minimum_required(VERSION 3.14 FATAL_ERROR)
project(hairProc VERSION 0.1.0 LANGUAGES CXX)
# Set the C++ requirements
set(CMAKE_CXX_STANDARD 17) # c++ 17 used for some convenience functions
set(CMAKE_CXX_STANDARD_REQUIRED ON)
# Target names
set(USDPLUGIN_NAME hairProc) # Need to match schema library name
set(PYPACKAGE_NAME vik) # The package name used for python. "from vik import HairProc"
set(OCLMODULE_NAME ocl) # The name of the opencl library
# Install directory
set(USD_INSTALL_ROOT /opt/USD) # The path to our USD install directory
set(CMAKE_INSTALL_PREFIX "/opt/USD_resources" CACHE PATH "..." FORCE) # Where this project will be installed
# Some RPATH stuff. I'm not sure this is how you should do it....
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
set(CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)
# Set some compile definitions. 
add_definitions(-D_LIBCPP_ENABLE_CXX17_REMOVED_UNARY_BINARY_FUNCTION) # Enables linking USD libraries with C++17
# The subdirectories defines our actual library targets
add_subdirectory(usd)
# uncomment once we've set up the opencl stuff
# add_subdirectory(ocl)

Before moving on, let’s be proactive and also create those CMakeLists.txt files inside the subdirectories.

projectname/
    |__ocl/
        |--CMakeLists.txt
    |__usd/
        |--CMakeLists.txt
    ...

We can leave the one inside the ocl directory empty for now and focus on the one inside the usd folder.

usd/CMakeLists.txt

In this file, we’ll need to define two libraries: One for the actual plugin, and one that wraps the plugin to python. We’ll also need to find all the dependency libraries in order to link them and to use their headers.

First, we can set some default values

CMake
set(USDPLUGIN_TARGETS_NAME "${USDPLUGIN_NAME}Targets")
set(INSTALL_CONFIGURATION_DIRECTORY "lib/cmake/${USDPLUGIN_NAME}Schemas")
set(PLUG_INFO_LIBRARY_PATH "../../${USDPLUGIN_NAME}.dylib")
set(PLUG_INFO_RESOURCE_PATH "resources")
set(PLUG_INFO_ROOT "..")

Let’s find our .cpp and .h files in the current directory and add them to some local variables:

CMake
file(GLOB sources "*.cpp")
file(GLOB headers "*.h")

Then we create our first library target, which will be a shared library (.dylib, .dll, .so). USDPLUGIN_NAME is defined in the parent CMakeLists.txt file. The install() function tells CMake what to do at install time; in this case, copy the header files over to ${CMAKE_INSTALL_PREFIX}/include/usd.

CMake
add_library(${USDPLUGIN_NAME}
    SHARED
        ${headers}
        ${sources}
)
install(
    FILES
        ${headers}
    DESTINATION
        include/usd
)

Then we’ll find our dependencies and add their headers to our header search paths

CMake
find_package(Python3 REQUIRED)
set(CMAKE_FIND_PACKAGE_REDIRECTS_DIR ${USD_INSTALL_ROOT})
find_package(pxr REQUIRED)
target_include_directories(${USDPLUGIN_NAME}
    PRIVATE
        ${PYTHON_INCLUDE_PATH}
        ${PXR_INCLUDE_DIRS}
        ".."
)

We’ll also need to add the dependencies to our linker path. I’m skipping the OpenCL parts for now so that our plugin can build before we’ve created that library. If you prefer, the OpenCL implementation doesn’t need to be a separate library; I just like having it separated like this.

CMake
target_link_libraries(${USDPLUGIN_NAME}
    PUBLIC
        ${PXR_LIBRARIES}
#       ${OCLMODULE_NAME}
)

Now we’ll need to set some USD-specific properties and some other install-time directives. We’ll also use the configure_file() function to replace those @ strings inside the plugInfo.json file:

CMake
set_target_properties(${USDPLUGIN_NAME} PROPERTIES PREFIX "")
target_compile_features(${USDPLUGIN_NAME}
    PUBLIC
        cxx_auto_type
)
configure_file(
    plugInfo.json
    ${CMAKE_BINARY_DIR}/plugInfo.json
    @ONLY
)
target_compile_definitions(${USDPLUGIN_NAME}
    PRIVATE
        MFB_PACKAGE_NAME=${USDPLUGIN_NAME}
        MFB_ALT_PACKAGE_NAME=${USDPLUGIN_NAME}
        MFB_PACKAGE_MODULE=${MODULE_NAME}
)
install(
    FILES ${CMAKE_BINARY_DIR}/plugInfo.json
    DESTINATION "lib/usd/${USDPLUGIN_NAME}/resources"
)
install(
    TARGETS ${USDPLUGIN_NAME}
    EXPORT ${USDPLUGIN_TARGETS_NAME}
    LIBRARY DESTINATION lib
    INCLUDES DESTINATION include
)
install(
    FILES generatedSchema.usda
    DESTINATION "lib/usd/${USDPLUGIN_NAME}/resources"
)
install(
    FILES schema.usda
    DESTINATION "lib/usd/${USDPLUGIN_NAME}/resources/${USDPLUGIN_NAME}"
)

Building the project at this point should work fine! However, in order to use our schema in Python, we’ll need some more setup.

Create the Python Bindings

The python wrapper requires four additional files which aren’t generated by usdGenSchema, so we first have to create them inside the usd subdirectory!

The first one being usd/module.cpp:

C++
#include "pxr/base/tf/pySafePython.h"
#include "Python.h"
#include "pxr/pxr.h"
#include "pxr/base/tf/pyModule.h"
PXR_NAMESPACE_USING_DIRECTIVE
TF_WRAP_MODULE
{
    TF_WRAP(HairProcTokens);
    TF_WRAP(HairProcHairProceduralAPI);
}

and the second being usd/moduleDeps.cpp:

C++
#include "pxr/pxr.h"
#include "pxr/base/tf/registryManager.h"
#include "pxr/base/tf/scriptModuleLoader.h"
#include "pxr/base/tf/token.h"
#include <vector>
PXR_NAMESPACE_OPEN_SCOPE
TF_REGISTRY_FUNCTION(TfScriptModuleLoader) {
    // List of direct dependencies for this library.
    const std::vector<TfToken> reqs = {
        TfToken("sdf"),
        TfToken("tf"),
        TfToken("usd"),
        TfToken("vt"),
        TfToken("hd"),
        TfToken("usdImaging")
    };
    TfScriptModuleLoader::GetInstance().RegisterLibrary(TfToken("hairProc"), TfToken("vik.HairProc"), reqs);
}
PXR_NAMESPACE_CLOSE_SCOPE

Simply change:

  • hairProc -> your library name
  • vik -> your python package name
  • HairProceduralAPI -> your schema name

The third file we’ll need is __init__.py:

Python
from pxr import Tf
Tf.PreparePythonModule()
del Tf

which will be installed to the final python directory and used to run the Tf.PreparePythonModule() function when importing our module.

The fourth and final additional file is __packageinit__.py, which will simply be installed as the __init__.py of our python package (vik is the package, hairProc is the module). We can leave it blank unless we want some custom behavior when importing the package.

usd/CMakeLists.txt

We’ll need to update the CMake file as well to build our python library.

First we’ll split up the regular source files from the python-specific source files and create a proper, capitalized module name for the python library. This should be added before the add_library() call that we wrote before.

CMake
list(FILTER sources EXCLUDE REGEX "./module*|./wrap*")
file(GLOB module_sources "module*.cpp")
file(GLOB wrap_sources "wrap*.cpp")
# PY MODULE NAME
string(SUBSTRING ${USDPLUGIN_NAME} 0 1 LIBNAME_FL)
string(TOUPPER ${LIBNAME_FL} LIBNAME_FL)
string(SUBSTRING ${USDPLUGIN_NAME} 1 -1 LIBNAME_SUFFIX)
set(MODULE_NAME
    "${LIBNAME_FL}${LIBNAME_SUFFIX}"
)

Then we’ll append the rest at the end of the file:

CMake
set(USDPLUGIN_PYTHON_NAME _${USDPLUGIN_NAME})
add_library(${USDPLUGIN_PYTHON_NAME}
    SHARED
        ${module_sources}
        ${wrap_sources}
)
set_target_properties(${USDPLUGIN_PYTHON_NAME}
  PROPERTIES
    INSTALL_RPATH "@loader_path/../../.."
)
target_include_directories(
    ${USDPLUGIN_PYTHON_NAME}
    PRIVATE
        ${PYTHON_INCLUDE_PATH}
        ${PXR_INCLUDE_DIRS}
)
set_target_properties(${USDPLUGIN_PYTHON_NAME} PROPERTIES SUFFIX ".so")
set_target_properties(${USDPLUGIN_PYTHON_NAME}
    PROPERTIES
        PREFIX ""
)
target_compile_definitions(${USDPLUGIN_PYTHON_NAME}
    PRIVATE
        MFB_PACKAGE_NAME=${USDPLUGIN_NAME}
        MFB_ALT_PACKAGE_NAME=${USDPLUGIN_NAME}
        MFB_PACKAGE_MODULE=${MODULE_NAME}
)
target_link_libraries(${USDPLUGIN_PYTHON_NAME}
    ${USDPLUGIN_NAME}
)
set(PYTHON_PACKAGE_RELATIVE_PATH lib/python/${PYPACKAGE_NAME})
set(INSTALL_PYTHONPACKAGE_DIR ${PYTHON_PACKAGE_RELATIVE_PATH})
set(INSTALL_WRAPPER_DIR ${INSTALL_PYTHONPACKAGE_DIR}/${MODULE_NAME})
install(
    FILES __init__.py
    DESTINATION ${INSTALL_WRAPPER_DIR}
)
install(
    TARGETS ${USDPLUGIN_PYTHON_NAME}
    DESTINATION ${INSTALL_WRAPPER_DIR}
)
install(
    FILES __packageinit__.py
    DESTINATION ${INSTALL_PYTHONPACKAGE_DIR}
    RENAME __init__.py
)

Similarly to before, we create a new library and add the source files to it, add some header search paths, link against the USD library and define some install time directives.

Build and Install the Project

At this point, we should be able to build the project correctly and have the python bindings set up! Open a terminal in the build directory (or cd path_to_project/build) and then:

Bash
cmake ..          # configure: generate the Makefiles from the parent directory
cmake --build .   # compile the targets
cmake --install . # install to CMAKE_INSTALL_PREFIX

Our project should now be installed in the CMAKE_INSTALL_PREFIX folder which we specified in the root CMakeLists.txt file. The resulting folder structure depends on what we write in the CMake install() calls. Following my example it should look something like this:

opt/
|__USD_resources/
    |__lib/
        |––hairProc(.dylib, .so, .dll)
        |__python/
            |__vik/
                |––__init__.py
                |––_hairProc.so
        |__usd/
            |__hairProc/
                |__resources/
                    |__hairProc/
                        |––schema.usda
                    |––plugInfo.json
                    |––generatedSchema.usda
    |__include/
        |__usd/
            |––headers..

Now we just need to set a couple of environment variables before we’ll have our plugin registered with USD!

  • export PXR_PLUGINPATH_NAME=/opt/USD_resources/lib/usd/hairProc/resources:${PXR_PLUGINPATH_NAME}
  • export PYTHONPATH=/opt/USD_resources/lib/python:${PYTHONPATH}

These will help USD and python find our plugin as well as our python package.

Testing the Schema

If the build was successful, we should now be able to import our module in Python and C++. To try it out, let’s create a genHairProc.py file inside the testenv folder of our project. We’ll use this script throughout development to generate the stage. You could also use Houdini to create the test stage; however, I only have access to Houdini NC, so I can’t save .usda files… Python will do fine!

testenv/genHairProc.py
Python
from vik import HairProc
from pxr import Usd, UsdGeom, Sdf
import os


def open_stage(stage_name):
    if os.path.isfile(stage_name):
        os.remove(stage_name)
    stage = Usd.Stage.CreateNew(stage_name)
    return stage


def build_plane(stage, path):
    pass


def build_hair(stage, path, apply_api=True):
    curve_pts = [(1,0,0), (1,1,0)]
    curve_cnt = [2]
    hair = UsdGeom.BasisCurves.Define(stage, Sdf.Path(path))
    hair.CreatePointsAttr(curve_pts)
    hair.CreateCurveVertexCountsAttr(curve_cnt)
    hair.CreateTypeAttr("linear")
    if apply_api:
        api = HairProc.HairProceduralAPI.Apply(hair.GetPrim())
        assert(hair.GetPrim().HasAPI("HairProceduralAPI"))
        assert(hair.GetPrim().HasAPI(HairProc.HairProceduralAPI))
    return hair


def do_stuffs(stage):
    mesh = build_plane(stage, "/plane")
    hair = build_hair(stage, "/hair")
  

if __name__ == "__main__":
    stage_name = os.path.join(os.path.dirname(__file__), "hairProc.usda")
    stage = open_stage(stage_name)
    do_stuffs(stage)
    stage.Export(stage_name)

Executing the script should generate a testenv/hairProc.usda file, and when viewing the result through usdview we should see a single basis curve with our HairProceduralAPI schema applied. In the properties tab we should also see our schema properties: hairProc:paramuv, hairProc:prim, hairProc:rest and hairProc:target.

Register the Scene Index Plugin

The schema should be working now and ready to be turned into a procedural! This is where Hydra 2.0 and the notion of a scene index come into the picture. I think of a scene index as a node inside Houdini (a very basic comparison), where data flows in, you modify that data, and the data goes on to the next node through the output. Similarly in Hydra, your scene index slots in somewhere in the scene index chain: when your scene index queries some data, a scene index above you provides it, and when a scene index below you queries some data, your scene index provides it. So naturally, the first scene index will be the stage scene index, which provides information about the stage, and the final one will be the render delegate. This data is not handled by the scene index itself; rather, the scene index applies some data source to a prim, and this data source is what will be used down the line.
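The querying pattern can be pictured with a toy sketch in Python (this mirrors the structure only; the real Hydra classes and signatures are C++ and look different):

```python
# Toy model of a scene index chain: each index holds its input and answers
# GetPrim() either by forwarding the upstream answer or by modifying it.
class StageSceneIndex:
    """Stands in for the stage scene index at the top of the chain."""
    def __init__(self, prims):
        self._prims = prims  # path -> "data source" (a plain dict here)

    def GetPrim(self, path):
        return self._prims.get(path, {})

class FilteringSceneIndex:
    """Stands in for our filtering scene index further down the chain."""
    def __init__(self, input_scene_index):
        self._input = input_scene_index

    def GetPrim(self, path):
        prim = dict(self._input.GetPrim(path))  # query the scene index above
        if prim.get("type") == "basisCurves":
            prim["points"] = "deformed"         # swap in our own data source
        return prim
```

A consumer below us (ultimately the render delegate) only ever calls GetPrim() on the bottom of the chain and never needs to know where the deformed points came from.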

In order to register our scene index plugin, we’ll need to create four more files in the usd directory, namely:

  • hairProceduralSceneIndexPlugin.h
  • hairProceduralSceneIndexPlugin.cpp
  • hairProceduralSceneIndex.h
  • hairProceduralSceneIndex.cpp

Where the sceneIndexPlugin will:

  1. register the plugin and its functions
  2. create our scene index upon request

and the sceneIndex will:

  1. initialize our deformer
  2. override some virtual functions to set our custom data source on a prim.
usd/hairProceduralSceneIndexPlugin.h
C++
class HairProcHairProceduralSceneIndexPlugin : public HdSceneIndexPlugin {
public:
    HairProcHairProceduralSceneIndexPlugin();
    ~HairProcHairProceduralSceneIndexPlugin() override;
protected:
    HdSceneIndexBaseRefPtr _AppendSceneIndex(
        const HdSceneIndexBaseRefPtr& inputScene,
        const HdContainerDataSourceHandle& inputArgs) override;
};
usd/hairProceduralSceneIndex.h
C++
TF_DECLARE_REF_PTRS(HairProcHairProceduralSceneIndex);
class HairProcHairProceduralSceneIndex : public HdSingleInputFilteringSceneIndexBase 
{
public:
    HAIRPROC_API
    static HairProcHairProceduralSceneIndexRefPtr New(
        const HdSceneIndexBaseRefPtr& inputSceneIndex);
    HAIRPROC_API
    HdSceneIndexPrim GetPrim(const SdfPath& primPath) const override;
    HAIRPROC_API
    SdfPathVector GetChildPrimPaths(const SdfPath& primPath) const override;
protected:
    HairProcHairProceduralSceneIndex(const HdSceneIndexBaseRefPtr& inputSceneIndex);
    void _PrimsAdded(const HdSceneIndexBase& sender, const HdSceneIndexObserver::AddedPrimEntries& entries) override;
    void _PrimsRemoved(const HdSceneIndexBase& sender, const HdSceneIndexObserver::RemovedPrimEntries& entries) override;
    void _PrimsDirtied(const HdSceneIndexBase& sender, const HdSceneIndexObserver::DirtiedPrimEntries& entries) override;
private:
    void _init_deformer(
        const SdfPath& primPath,
        HairProcHairProceduralSchema& procSchema,
        HdBasisCurvesSchema& basisCurvesSchema,
        HdPrimvarsSchema& primvarSchema);
    typedef std::map<SdfPath, std::unordered_set<SdfPath, SdfPath::Hash>> _TargetsMap;
    mutable _TargetsMap _targets;
    
    typedef std::unordered_map<SdfPath, HairProcHairProceduralDeformerSharedPtr, SdfPath::Hash> _HairProcMap;
    mutable _HairProcMap _deformerMap;
};

We can also be a bit proactive and create a class for our deformer that will handle the actual deformation.

usd/hairProceduralDeformer.h
C++
TF_DECLARE_REF_PTRS(HairProcHairProceduralDeformer);

class HairProcHairProceduralDeformer {
public:
    HairProcHairProceduralDeformer(VtArray<HdContainerDataSourceHandle> targetContainers,
                                   HdContainerDataSourceHandle sourceContainer,
                                   const SdfPath& primPath)
                                    : _targetContainers(targetContainers),
                                      _sourceContainer(sourceContainer),
                                      _primPath(primPath.GetAsString()) {};

    VtVec3fArray Deform(const HdSampledDataSource::Time& shutterOffset) { return VtVec3fArray(); }

private:
    VtArray<HdContainerDataSourceHandle> _targetContainers;
    HdContainerDataSourceHandle _sourceContainer;
    std::string _primPath;
};

using HairProcHairProceduralDeformerSharedPtr = std::shared_ptr<class HairProcHairProceduralDeformer>;

The functions will be implemented in each respective .cpp file!
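As a preview of what Deform() will eventually do, here is a hedged Python sketch of the rigid transport idea: express the rest point in the rest triangle's orthonormal frame, then rebuild it in the animated triangle's frame. All names are mine, and the real implementation will run in OpenCL:

```python
import math

# Vectors are plain 3-tuples; triangles are 3-tuples of points.
def sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def add(p, q):
    return (p[0] + q[0], p[1] + q[1], p[2] + q[2])

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1] + p[2] * q[2]

def cross(p, q):
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def normalize(p):
    l = math.sqrt(dot(p, p))
    return (p[0] / l, p[1] / l, p[2] / l)

def frame(a, b, c):
    # orthonormal frame of a triangle: tangent, normal, bitangent
    t = normalize(sub(b, a))
    n = normalize(cross(t, sub(c, a)))
    return (t, n, cross(n, t))

def deform_point(p_rest, rest_tri, anim_tri):
    # project the rest offset into the rest frame...
    local = tuple(dot(sub(p_rest, rest_tri[0]), axis) for axis in frame(*rest_tri))
    # ...then rebuild the point in the animated frame
    out = anim_tri[0]
    for w, axis in zip(local, frame(*anim_tri)):
        out = add(out, (w * axis[0], w * axis[1], w * axis[2]))
    return out
```

If the target triangle only translates or rotates, every curve point follows it rigidly, which is exactly the cheap behavior we want for short hair and peach fuzz.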

usd/hairProceduralSceneIndexPlugin.cpp
C++
TF_DEFINE_PRIVATE_TOKENS (
    _tokens,
    ((sceneIndexPluginName, "HairProcHairProceduralSceneIndexPlugin")));

TF_REGISTRY_FUNCTION(TfType) {
    HdSceneIndexPluginRegistry::Define<HairProcHairProceduralSceneIndexPlugin>();
}

TF_REGISTRY_FUNCTION(HdSceneIndexPlugin) {
    HdSceneIndexPluginRegistry::GetInstance().RegisterSceneIndexForRenderer(
        TfToken(),
        _tokens->sceneIndexPluginName,
        nullptr,
        0,
        HdSceneIndexPluginRegistry::InsertionOrderAtStart);
}

HdSceneIndexBaseRefPtr HairProcHairProceduralSceneIndexPlugin::_AppendSceneIndex(
        const HdSceneIndexBaseRefPtr& inputSceneIndex,
        const HdContainerDataSourceHandle& inputArgs) {
    TF_UNUSED(inputArgs);
    return HairProcHairProceduralSceneIndex::New(inputSceneIndex);
}
PXR_NAMESPACE_CLOSE_SCOPE

First, we invoke the TF_REGISTRY_FUNCTION macro and register the plugin in the HdSceneIndexPluginRegistry singleton, which allows USD to find our plugin. Then we implement the _AppendSceneIndex() virtual function inherited from the HdSceneIndexPlugin class and return a TfRefPtr (a reference-counted smart pointer, similar to std::shared_ptr) to our scene index.

The scene index .cpp file will be where we decide what data sources will be overwritten and for what prims. To get the data of a prim, you’d do:

C++
HdContainerDataSourceHandle sourceDs = _GetInputSceneIndex()->GetPrim(primPath).dataSource;

This means that when a child scene index queries some data, it will invoke the GetPrim() function of our scene index, which in turn returns an HdSceneIndexPrim. The HdSceneIndexPrim contains the path and data source of a prim. This data source is what we’ll override in our GetPrim() function.

usd/hairProceduralSceneIndex.cpp
C++
HdSceneIndexPrim HairProcHairProceduralSceneIndex::GetPrim(const SdfPath& primPath) const {
    HdSceneIndexPrim prim = _GetInputSceneIndex()->GetPrim(primPath);
    if (prim.primType == HdPrimTypeTokens->basisCurves) {

        HdBasisCurvesSchema curveSchema = HdBasisCurvesSchema::GetFromParent(prim.dataSource);
        HdPrimvarsSchema primvarSchema = HdPrimvarsSchema::GetFromParent(prim.dataSource);
        HairProcHairProceduralSchema hairProcSchema = HairProcHairProceduralSchema::GetFromParent(prim.dataSource);

        if (curveSchema && primvarSchema && hairProcSchema) { 
            if (auto deformer = _deformerMap.find(primPath); deformer != _deformerMap.end()) {
                prim.dataSource = _HairProcDataSource::New(primPath, prim.dataSource, deformer->second);
            }
        }
    }
    return prim;
}
C++
void 
HairProcHairProceduralSceneIndex::_PrimsAdded(
        const HdSceneIndexBase& sender,
        const HdSceneIndexObserver::AddedPrimEntries& entries) {
    if (!_IsObserved()) {
        return;
    }
    for (const HdSceneIndexObserver::AddedPrimEntry& entry: entries) {
        if (entry.primType == HdPrimTypeTokens->basisCurves) {

            auto prim = _GetInputSceneIndex()->GetPrim(entry.primPath);
            HdBasisCurvesSchema curveSchema = HdBasisCurvesSchema::GetFromParent(prim.dataSource);
            HdPrimvarsSchema primvarSchema = HdPrimvarsSchema::GetFromParent(prim.dataSource);
            HairProcHairProceduralSchema hairProcSchema = HairProcHairProceduralSchema::GetFromParent(prim.dataSource);

            if (curveSchema && primvarSchema && hairProcSchema) {
                _init_deformer(entry.primPath, hairProcSchema, curveSchema, primvarSchema);
            }
        }
    }
    _SendPrimsAdded(entries);
}
C++
void
HairProcHairProceduralSceneIndex::_PrimsDirtied(
        const HdSceneIndexBase& sender,
        const HdSceneIndexObserver::DirtiedPrimEntries& entries) {

    // If any prims in entries are part of _targets, we need to also dirty their sources, ie the hairProcedural prims
    if (!_IsObserved()) {
        return;
    }

    HdSceneIndexObserver::DirtiedPrimEntries dirty = entries;

    for (const HdSceneIndexObserver::DirtiedPrimEntry& entry: entries) {
        if (auto it = _targets.find(entry.primPath); it != _targets.end()) {
            for (const SdfPath& path : it->second) {

                auto prim = _GetInputSceneIndex()->GetPrim(path);
                HdPrimvarsSchema primvarSchema = HdPrimvarsSchema::GetFromParent(prim.dataSource);
                dirty.emplace_back(path, primvarSchema.GetPointsLocator());
            }
        }
    }
    _SendPrimsDirtied(dirty);
}
C++
void HairProcHairProceduralSceneIndex::_init_deformer(
        const SdfPath& primPath,
        HairProcHairProceduralSchema& procSchema,
        HdBasisCurvesSchema& basisCurvesSchema,
        HdPrimvarsSchema& primvarSchema){

    HdPathArrayDataSourceHandle target = procSchema.GetTarget();
    VtArray<SdfPath> targets = target->GetTypedValue(0);
    if (!targets.size()) {
        return;
    }

    VtArray<HdContainerDataSourceHandle> targetDs;
    for (auto it: targets) {
        HdSceneIndexPrim prim = _GetInputSceneIndex()->GetPrim(it);
        targetDs.push_back(prim.dataSource);
    }
    if (targetDs.size() == 0){
        return;
    }

    HdContainerDataSourceHandle sourceDs = _GetInputSceneIndex()->GetPrim(primPath).dataSource;
    TraceCollector::GetInstance().SetEnabled(true);

    HairProcHairProceduralDeformerSharedPtr deformer = std::make_shared<HairProcHairProceduralDeformer>(targetDs, sourceDs, primPath);

    _deformerMap[primPath] = deformer;
    for (SdfPath& path : targets) {
        _targets[path].insert(primPath);
    }
}

In _PrimsAdded(), we get access to prims that are added to the Hydra stage. Here I’m checking for prims that are of type BasisCurves and have the relevant HdSchemas applied, and initializing my deformer for any prims that do.

Since our input hair mesh is assumed not to be animated at this point, the hair prim will not be dirtied when changing frames. Therefore, in _PrimsDirtied(), I’m checking whether the dirtied prim exists inside my private _targets map (the prims which the hair should attach to), and if it does, I dirty those hair prims as well.
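The reverse-dependency bookkeeping is simple enough to sketch in plain Python (the names mirror the members above, but this is illustrative code, not the Hydra API):

```python
# Map from target prim path -> set of hair prim paths attached to it,
# mirroring the _targets member of the scene index.
targets = {
    "/tube": {"/curves_a", "/curves_b"},
    "/plane": {"/curves_c"},
}

def collect_dirtied(entries):
    """Given the paths dirtied this round, also dirty dependent hair prims."""
    dirty = list(entries)
    for path in entries:
        for hair_path in sorted(targets.get(path, ())):
            dirty.append(hair_path)
    return dirty

print(collect_dirtied(["/tube"]))   # -> ['/tube', '/curves_a', '/curves_b']
print(collect_dirtied(["/other"]))  # untracked paths pass through unchanged
```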

In the GetPrim() function, I’m checking whether the prim is of type BasisCurves and whether the data source contains the relevant HdSchemas. I’m also checking whether the prim has a deformer applied. If all conditions are true, I set the data source of the prim to our custom data source.

The _init_deformer() function sets up the deformer and stores a pointer to the deformer and the targets for the prim in some private members.

Data Sources

Any data source in Hydra inherits from HdDataSourceBase and is an efficient way to store and send data. I used a couple of data sources for this project, namely:

  • HdVec3fArrayDataSource – The data source for the points primvar
  • HdContainerDataSource – Used for overriding access to primvars. Will contain other data sources

The main idea is that once the HdSceneIndexPrim has our data source applied, some child scene index can query values inside that data source. If our data source doesn’t override a certain primvar or property that is being queried, it should forward the query to the input data source. This continues until some data source can supply the value of the query. In the case where our data source does override the queried primvar/property, we simply return our modified value. Here we’ll override the points primvar, and the value we return will be our deformed points. In order to keep the correct structure of the data source attached to the HdSceneIndexPrim, we’ll also need to create a couple of HdContainerDataSources that wrap everything up nicely.
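The fall-through behavior can be sketched outside of Hydra. Here’s a minimal Python analogy (class and method names are made up for illustration, not part of the Hd API): an overriding container answers queries it knows about and delegates everything else to its input.

```python
class DictDataSource:
    """Stand-in for an input container data source."""
    def __init__(self, values):
        self._values = values

    def get(self, name):
        return self._values.get(name)


class OverrideDataSource:
    """Answers overridden names itself, forwards the rest to the input."""
    def __init__(self, input_ds, overrides):
        self._input = input_ds
        self._overrides = overrides

    def get(self, name):
        if name in self._overrides:
            return self._overrides[name]  # our modified value
        return self._input.get(name)      # fall through to the input


base = DictDataSource({"points": [(0, 0, 0)], "widths": [0.03]})
override = OverrideDataSource(base, {"points": [(0, 1, 0)]})

print(override.get("points"))  # overridden -> the "deformed" points
print(override.get("widths"))  # not overridden -> value from the input
```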

usd/HairProceduralDataSource.h
C++
class _PrimvarDataSource final : public HdContainerDataSource
{
public:
    HD_DECLARE_DATASOURCE(_PrimvarDataSource);

    TfTokenVector GetNames() override;
    HdDataSourceBaseHandle Get(const TfToken &name) override;

private:
    _PrimvarDataSource(
        const HdDataSourceBaseHandle &primvarValueSrc,
        const TfToken &interpolation,
        const TfToken &role)
      : _primvarValueSrc(primvarValueSrc)
      , _interpolation(interpolation)
      , _role(role) {}

    HdDataSourceBaseHandle _primvarValueSrc;
    TfToken _interpolation;
    TfToken _role;
};


class _PointsDataSource : public HdVec3fArrayDataSource {
public:
    HD_DECLARE_DATASOURCE(_PointsDataSource);

    VtValue GetValue(const Time shutterOffset) override;
    VtVec3fArray GetTypedValue(const Time shutterOffset) override;
    bool GetContributingSampleTimesForInterval(
            const Time startTime,
            const Time endTime,
            std::vector<Time> * const outSampleTimes) override;

private:
    _PointsDataSource(
        HdPrimvarsSchema& primvarSchema,
        HairProcHairProceduralDeformerSharedPtr deformer) 
        : _schema(primvarSchema), _deformer(deformer) {}

    HdPrimvarsSchema _schema;
    HairProcHairProceduralDeformerSharedPtr _deformer;
};


class _PrimvarOverrideDataSource : public HdContainerDataSource
{
public:
    HD_DECLARE_DATASOURCE(_PrimvarOverrideDataSource);

    TfTokenVector GetNames() override;
    HdDataSourceBaseHandle Get(const TfToken &name) override;

private:
    _PrimvarOverrideDataSource(
        const HdContainerDataSourceHandle& inputDs,
        HdPrimvarsSchema& primvarSchema,
        HairProcHairProceduralDeformerSharedPtr deformer)
        : _inputDs(inputDs), _schema(primvarSchema), _deformer(deformer) {}

    HdContainerDataSourceHandle _inputDs;
    HdPrimvarsSchema _schema;
    HairProcHairProceduralDeformerSharedPtr _deformer;
};


// class _HairProcDataSource
// used as override to access primvars on a HairProceduralAPI prim
class _HairProcDataSource : public HdContainerDataSource
{
public:
    HD_DECLARE_DATASOURCE(_HairProcDataSource);

    TfTokenVector GetNames() override;
    HdDataSourceBaseHandle Get(const TfToken &name) override;

private:
    _HairProcDataSource(const SdfPath& primPath, const HdContainerDataSourceHandle &primDataSource, HairProcHairProceduralDeformerSharedPtr deformer) 
        : _primPath(primPath), _primDs(primDataSource), _deformer(deformer){}

    HdContainerDataSourceHandle _primDs;
    HairProcHairProceduralDeformerSharedPtr _deformer;
    const SdfPath& _primPath;
};
usd/hairProceduralDataSource.cpp
C++
TfTokenVector _PrimvarDataSource::GetNames() {
    return {HdPrimvarSchemaTokens->primvarValue,
            HdPrimvarSchemaTokens->interpolation,
            HdPrimvarSchemaTokens->role};
}


HdDataSourceBaseHandle _PrimvarDataSource::Get(const TfToken &name) {
    if (name == HdPrimvarSchemaTokens->primvarValue) {
        return _primvarValueSrc;
    }
    if (name == HdPrimvarSchemaTokens->interpolation) {
        return
            HdPrimvarSchema::BuildInterpolationDataSource(_interpolation);
    }
    if (name == HdPrimvarSchemaTokens->role) {
        return
            HdPrimvarSchema::BuildRoleDataSource(_role);
    }
    return nullptr;
}


VtValue _PointsDataSource::GetValue(const Time shutterOffset) {
    return VtValue(GetTypedValue(shutterOffset));
}


VtVec3fArray _PointsDataSource::GetTypedValue(const Time shutterOffset) {
    if (_schema.GetPrimvar(HdTokens->points)) {
        return _deformer->Deform(shutterOffset);
    }
    return VtVec3fArray();
}


bool _PointsDataSource::GetContributingSampleTimesForInterval(
        const Time startTime,
        const Time endTime,
        std::vector<Time> * const outSampleTimes) {
    return false;
}


HdDataSourceBaseHandle _PrimvarOverrideDataSource::Get(const TfToken& name) {
    if (name == HdTokens->points) {
        return _PrimvarDataSource::New(
            _PointsDataSource::New(_schema, _deformer),
            HdPrimvarSchemaTokens->vertex,
            HdPrimvarSchemaTokens->point
        );
    }
    HdDataSourceBaseHandle result = _inputDs->Get(name);
    return result;
}

HdDataSourceBaseHandle _HairProcDataSource::Get(const TfToken& name) {
    auto result = _primDs->Get(name);
    if (name == HdPrimvarsSchemaTokens->primvars) {
        auto primvarSchema = HdPrimvarsSchema::GetFromParent(_primDs);
        if (auto primvarContainer = HdContainerDataSource::Cast(result)) {
            return _PrimvarOverrideDataSource::New(primvarContainer, primvarSchema, _deformer);
        }
    }
    return result;
}

This is it for the data source. The _HairProcDataSource class is what we apply to the HdSceneIndexPrim. Following the function calls from the _HairProcDataSource::Get() method, we end up at the _PointsDataSource when we query the points primvar, while any other query is forwarded to the input data source by the _PrimvarOverrideDataSource::Get() method. Calling _PointsDataSource::GetValue() or ::GetTypedValue() invokes our deformer. Notice that we’re also passing the deformer as a parameter to the data sources, storing it as a private member of the _PointsDataSource class, then simply calling _deformer->Deform(shutterOffset) from inside GetTypedValue().

Before moving on, we can update the plugInfo.json file so USD can find our scene index plugin. Simply add the HairProcHairProceduralSceneIndexPlugin to the types key:

JSON
{
    "Plugins": [
        {
            "Info": {
                "Types": {
                    "HairProcHairProceduralSceneIndexPlugin": {
                        "bases": [
                            "HdSceneIndexPlugin"
                        ],
                        "displayName": "Scene Index to execute HairProc", 
                        "loadWithRenderer": "", 
                        "priority": 1
                    },
                    ...

The HdSchema

An HdSchema is not the same as the USD schema we created before. An HdSchema enables easy access to data sources inside of Hydra. In our case, it enables us to easily access data sources for the properties we defined on the USD schema, i.e. prim, target, rest, paramuv.

usd/hairProceduralSchema.h
C++
#define HAIRPROC_SCHEMA_TOKENS \
    (hairProcedural) \
    (target) \
    (prim) \
    (paramuv) \
    (rest) \

TF_DECLARE_PUBLIC_TOKENS(HairProcHairProceduralSchemaTokens, HAIRPROC_API, HAIRPROC_SCHEMA_TOKENS);

class HairProcHairProceduralSchema : public HdSchema {
public:
    HairProcHairProceduralSchema(HdContainerDataSourceHandle container) : HdSchema(container) {}

    HAIRPROC_API
    HdVec2fArrayDataSourceHandle GetParamuv();

    HAIRPROC_API
    HdPathArrayDataSourceHandle GetTarget();

    HAIRPROC_API
    HdIntArrayDataSourceHandle GetPrim();
    
    HAIRPROC_API
    HdVec3fArrayDataSourceHandle GetRest();

    HAIRPROC_API
    static HairProcHairProceduralSchema GetFromParent(const HdContainerDataSourceHandle& parent);

    HAIRPROC_API
    static const TfToken& GetSchemaToken();

    HAIRPROC_API
    static const HdDataSourceLocator& GetDefaultLocator();

    HAIRPROC_API
    static const HdDataSourceLocator& GetParamuvLocator();

    HAIRPROC_API
    static const HdDataSourceLocator& GetTargetLocator();

    HAIRPROC_API
    static const HdDataSourceLocator& GetPrimLocator();
    
    HAIRPROC_API
    static const HdDataSourceLocator& GetRestLocator();

    HAIRPROC_API
    static HdContainerDataSourceHandle BuildRetained(
        const HdVec2fArrayDataSourceHandle& uv,
        const HdPathArrayDataSourceHandle& target,
        const HdIntArrayDataSourceHandle& prim,
        const HdVec3fArrayDataSourceHandle& rest);

    class Builder {
    public:
        HAIRPROC_API
        Builder& SetParamuv(const HdVec2fArrayDataSourceHandle& uv);

        HAIRPROC_API
        Builder& SetTarget(const HdPathArrayDataSourceHandle& target);

        HAIRPROC_API
        Builder& SetPrim(const HdIntArrayDataSourceHandle& prim);
        
        HAIRPROC_API
        Builder& SetRest(const HdVec3fArrayDataSourceHandle& rest);
    
        HAIRPROC_API
        HdContainerDataSourceHandle Build();

    private:
        HdVec2fArrayDataSourceHandle _paramuv;
        HdPathArrayDataSourceHandle _target;
        HdIntArrayDataSourceHandle _prim;
        HdVec3fArrayDataSourceHandle _rest;
    };
};
usd/hairProceduralSchema.cpp
C++
TF_DEFINE_PUBLIC_TOKENS(HairProcHairProceduralSchemaTokens, HAIRPROC_SCHEMA_TOKENS);

HdVec2fArrayDataSourceHandle HairProcHairProceduralSchema::GetParamuv() {
    return _GetTypedDataSource<HdVec2fArrayDataSource>(HairProcHairProceduralSchemaTokens->paramuv);
}

...

HdContainerDataSourceHandle HairProcHairProceduralSchema::BuildRetained(
        const HdVec2fArrayDataSourceHandle& paramuv,
        const HdPathArrayDataSourceHandle& target,
        const HdIntArrayDataSourceHandle& prim,
        const HdVec3fArrayDataSourceHandle& rest) {

    TfToken names[4];
    HdDataSourceBaseHandle values[4];

    size_t count = 0;
    if (paramuv) {
        names[count] = HairProcHairProceduralSchemaTokens->paramuv;
        values[count++] = paramuv;
    }
    if (target) {
        names[count] = HairProcHairProceduralSchemaTokens->target;
        values[count++] = target;
    }
    if (prim) {
        names[count] = HairProcHairProceduralSchemaTokens->prim;
        values[count++] = prim;
    }
    if (rest) {
        names[count] = HairProcHairProceduralSchemaTokens->rest;
        values[count++] = rest;
    }
    return HdRetainedContainerDataSource::New(count, names, values);
}

const TfToken& HairProcHairProceduralSchema::GetSchemaToken() {
    return HairProcHairProceduralSchemaTokens->hairProcedural;
}

HairProcHairProceduralSchema
HairProcHairProceduralSchema::GetFromParent(const HdContainerDataSourceHandle& parent) {    
    return HairProcHairProceduralSchema(
        parent 
        ? HdContainerDataSource::Cast(parent->Get(HairProcHairProceduralSchemaTokens->hairProcedural))
        : nullptr);
}

const HdDataSourceLocator& HairProcHairProceduralSchema::GetDefaultLocator() {
    static const HdDataSourceLocator locator(
        HairProcHairProceduralSchemaTokens->hairProcedural);
    return locator;
}

const HdDataSourceLocator& HairProcHairProceduralSchema::GetParamuvLocator() {
    static const HdDataSourceLocator locator(
        HairProcHairProceduralSchemaTokens->hairProcedural,
        HairProcHairProceduralSchemaTokens->paramuv);
    return locator;
}
...

HairProcHairProceduralSchema::Builder&
HairProcHairProceduralSchema::Builder::SetParamuv(const HdVec2fArrayDataSourceHandle& uv) {
    _paramuv = uv;
    return *this;
}

...

HdContainerDataSourceHandle HairProcHairProceduralSchema::Builder::Build() {
    return HairProcHairProceduralSchema::BuildRetained(
        _paramuv,
        _target,
        _prim,
        _rest
    );
}


PXR_NAMESPACE_CLOSE_SCOPE

I’ve left out the rest of the function implementations, but they all follow the same pattern as those of paramuv.

In the _PrimsAdded() and GetPrim() functions on the HairProcHairProceduralSceneIndex class, you can see we’re checking whether a prim has our custom schema applied:

C++
HairProcHairProceduralSchema hairProcSchema = HairProcHairProceduralSchema::GetFromParent(prim.dataSource);

The GetFromParent() function checks whether certain tokens exist inside a data source, and if the correct tokens exist, it returns a valid HdSchema. This means that the tokens need to exist on the prim before the scene index is created. That’s why we want to create an APISchemaAdapter.

The APISchemaAdapter

An APISchemaAdapter allows Hydra to “notice” the existence of a specified API schema prim and create some data sources based on it. This means that our hairProceduralAPI prims will be automatically picked up by Hydra. Implementing the APISchemaAdapter is similar to the SceneIndexPlugin in that it just registers a class to a registry. This registry will apply our adapter to any prim on the stage that has our HairProceduralAPI schema applied. The adapter will then create some data sources on the prim for our scene index and schema to find and use.

usd/hairProceduralAPIAdapter.cpp
C++
TF_DEFINE_PRIVATE_TOKENS(
    _tokens,
    (hairProcedural)
    (target)
    (prim)
    (paramuv)
    (rest)
);

TF_REGISTRY_FUNCTION(TfType) {
    typedef HairProcHairProceduralAPIAdapter Adapter;
    TfType t = TfType::Define<Adapter, TfType::Bases<Adapter::BaseAdapter> >();
    t.SetFactory< UsdImagingAPISchemaAdapterFactory<Adapter> >();
}

Where HairProcHairProceduralAPIAdapter is a subclass of UsdImagingAPISchemaAdapter with some virtual functions overridden:

C++
HdContainerDataSourceHandle
HairProcHairProceduralAPIAdapter::GetImagingSubprimData(
        UsdPrim const& prim,
        TfToken const& subprim,
        TfToken const& appliedInstanceName,
        const UsdImagingDataSourceStageGlobals& stageGlobals) {

    return HdRetainedContainerDataSource::New(_tokens->hairProcedural, _HairProcHairProceduralDataSource::New(prim, stageGlobals));
}

HdDataSourceLocatorSet
HairProcHairProceduralAPIAdapter::InvalidateImagingSubprim(
        UsdPrim const& prim,
        TfToken const& subprim,
        TfToken const& appliedInstanceName,
        TfTokenVector const& properties,
        const UsdImagingPropertyInvalidationType invalidationType) {

    return HdDataSourceLocatorSet();
}

The _HairProcHairProceduralDataSource is the data source that will be registered with our HairProceduralAPI schema prim, and it enables us to access its properties within Hydra.

We’ll also need to edit the plugInfo.json file again to tell USD about our APIAdapter:

JSON
{
    "Plugins": [
        {
            "Info": {
                "Types": {
                    "HairProcHairProceduralAPIAdapter": {
                        "apiSchemaName": "HairProceduralAPI", 
                        "bases": [
                            "UsdImagingAPISchemaAdapter"
                        ],
                        "isInternal": true
                    },
                    ...

Before testing this out, we need to set an environment variable so usdview enables some of the functionality we’re using:

Bash
export USDIMAGINGGL_ENGINE_ENABLE_SCENE_INDEX=true

When building the project at this point and running usdview on the hairProc.usda stage again, we should now have a registered scene index! This can be verified by looking at the Hydra Scene Browser window in usdview.

The Deformer

Recall that in the HairProcHairProceduralSceneIndex class, we call _init_deformer() inside the _PrimsAdded() function and pass the “deformer” to our _HairProcDataSource. That deformer is currently doing nothing! Let’s change that! Remove the temporary HairProcHairProceduralDeformer::Deform() function definition in the header file and create a cpp file instead:

usd/hairProceduralDeformer.cpp

I’m currently setting up my OpenCL context inside the constructor of the deformer and invoking the kernel inside the Deform() function, but since this post isn’t mainly about OpenCL, I’ll leave that part out. In any case, the VtVec3fArray (an array of GfVec3f) returned from the Deform() function determines the positions of the points for the curves. Since we’re storing some private members on the deformer, we have access to everything we need!

C++
VtVec3fArray HairProcHairProceduralDeformer::Deform(const HdSampledDataSource::Time& shutterOffset) {
    auto tgtPrimvarsSchema = HdPrimvarsSchema::GetFromParent(_targetContainers[0]);
    auto srcPrimvarsSchema = HdPrimvarsSchema::GetFromParent(_sourceContainer);
    auto srcCurveSchema = HdBasisCurvesSchema::GetFromParent(_sourceContainer);
    auto srcProcSchema = HairProcHairProceduralSchema::GetFromParent(_sourceContainer);
    auto xformSchema = HdXformSchema::GetFromParent(_targetContainers[0]);

    GfMatrix4f xform = static_cast<GfMatrix4f>(xformSchema.GetMatrix()->GetTypedValue(shutterOffset));

    VtVec3fArray srcPos = srcPrimvarsSchema.GetPrimvar(HdTokens->points).GetPrimvarValue()->GetValue(shutterOffset).UncheckedGet<VtArray<GfVec3f>>();
    VtVec3fArray tgtPos = tgtPrimvarsSchema.GetPrimvar(HdTokens->points).GetPrimvarValue()->GetValue(shutterOffset).UncheckedGet<VtArray<GfVec3f>>();
    ...
}

These HdSchemas that we fetch are used to get the source positions, target positions, mesh information, and so on. Our custom HairProcHairProceduralSchema is used to get our API schema properties (prim, target, rest, paramuv).

OpenCL

I feel it’s unnecessary to go into too much detail about my OpenCL implementation, since it’s my first time implementing it in C++ and I’m sure it’s far from optimal. However, you’re free to look at the GitHub project for more info on how I set it up. To touch on it briefly: I’ve wrapped the OpenCL code into a standalone library that implements the functions we need to initialize and execute the OpenCL kernels. The HairProceduralDeformer class uses this library through the DeformerContext class of our ocl library. The DeformerContext is responsible for initializing OpenCL, setting all the arguments to our kernels, and executing the kernels. Since I’m on a Mac, I’m restricted to OpenCL 1.2, which works fine, but if you’re on another OS I’d recommend a later version.

I used a couple of source files for this library:

  • DeformerContext.h & .cpp (Wrapper around OpenCL to easily create program, context, bind arguments, and so on)
  • KernelUtils.h (Some utility functions, such as read .cl / .ocl files)
  • opencl.hpp (C++ wrapper for OpenCL, downloaded from Github)

The CMakeLists.txt file for this library is quite similar to the one we made for the USD plugin, however, this one doesn’t need any python bindings.

CMake
set(OCLMODULE_TARGETS_NAME "${USDPLUGIN_NAME}Targets")

find_package(OpenCL REQUIRED)

add_compile_definitions(OCL_FILE_PATH="${CMAKE_INSTALL_PREFIX}/ocl/kernels")

set(OPENCL_CLHPP_HEADERS_DIR .)

file(GLOB sources "*.cpp")
file(GLOB headers "*.h")

file(GLOB OCL "kernels/*.cl")

install(
    FILES ${OCL}
    DESTINATION "ocl"
)

# OCL LIBRARY
add_library(${OCLMODULE_NAME}
    SHARED
        ${headers}
        ${sources}
)

target_link_libraries(${OCLMODULE_NAME}
    PUBLIC
        ${OpenCL_LIBRARIES}
)

install(
    FILES
        ${headers}
    DESTINATION
        include/ocl
)

install(
    TARGETS ${OCLMODULE_NAME}
    EXPORT ${OCLMODULE_TARGETS_NAME}
    LIBRARY DESTINATION lib
    INCLUDES DESTINATION include
)

The Kernels

I’m currently running two kernels: CalcTargetFrames, which calculates the frames F and R of the target prims, and HairProc, which deforms the curves. In this context, prim means face (Houdini convention), not UsdPrim.

  • We’ll calculate the inverted rest frames and deformed frames R and F for each targeted prim containing points p0, p1, p2. R can be calculated once at t=0 and cached. I’m currently using the rest property from the hairProceduralAPI schema instead of the point positions, since the target geometry might not be in its rest position at t=0. The frame F will be an orthogonal 3×3 matrix with the y axis oriented along the normal of the prim. Since the matrix is orthogonal, the matrix R is simply F transposed, calculated at the rest position.
  • We’ll also update the target positions with the USD transform. When querying the points primvar inside the Hydra scene, the points are returned in local space, so we’ll transform the target points to match the USD transform.
ocl/kernels/hairProc.cl

Here’s the CalcTargetFrames kernel for calculating F and R

OpenCL
static void CalcPrimFrame(
    const float3 p0,
    const float3 p1,
    const float3 p2,
    mat3 m
)
{
    __private float3 z = normalize(p2 - p0);
    __private float3 y = normalize(cross(p1 - p0, z));
    __private float3 x = normalize(cross(y, z));

    Mat3FromCols(x, y, z, m);
}
OpenCL
__kernel void CalcTargetFrames(
    __global float* tgt_p,
    const __global int* tgt_prmpts_indices,
    const __global int* tgt_prmpts_lengths,
    const __global int* tgt_prmpts_offset,

    const __global float* tgt_xform,
    const __global int* unique_prm,

    __global float* result,
    const int invert
)
{
    int idx = get_global_id(0);
    int tgt_prm = unique_prm[idx];

    // Assume at least 3 points on target prim
    int pt0 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm]];
    int pt1 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm] + 1];
    int pt2 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm] + 2];
    int pt3;

    float3 p0 = vload3(pt0, tgt_p);
    float3 p1 = vload3(pt1, tgt_p);
    float3 p2 = vload3(pt2, tgt_p);
    float3 p3;

    int ptlast = pt0 + tgt_prmpts_lengths[tgt_prm];
    int npts = ptlast - pt0;

    if (npts != 3 && npts != 4) {
        return;
    }

    mat4 xform;
    Mat4Load(0, tgt_xform, xform);

    mat4 xformT;
    Mat4Transpose(xform, xformT);

    p0 = Mat4VecMul(p0, xformT);
    p1 = Mat4VecMul(p1, xformT);
    p2 = Mat4VecMul(p2, xformT);
    if (npts == 4) {
        pt3 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm] + 3];
        p3 = vload3(pt3, tgt_p);
        p3 = Mat4VecMul(p3, xformT);
        vstore3(p3, pt3, tgt_p);
    }

    mat3 F;
    CalcPrimFrame(p0, p1, p2, F);

    if (invert == 1) {
        mat3 R;
        Mat3Transpose(F, R); // Invert orthogonal matrix by transposing
        Mat3Store(R, idx, result);
    } else {
        Mat3Store(F, idx, result);
    }

    vstore3(p0, pt0, tgt_p);
    vstore3(p1, pt1, tgt_p);
    vstore3(p2, pt2, tgt_p);
}
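Since F is orthonormal by construction, its inverse is its transpose, which is why R can be computed by simply transposing F. That claim can be checked in Python; this is a sketch of the math only (I store the basis vectors as rows here, whereas the kernel’s Mat3FromCols stores them as columns):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    l = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
    return (a[0]/l, a[1]/l, a[2]/l)

def prim_frame(p0, p1, p2):
    """Build the orthonormal frame used by CalcPrimFrame: z along the
    edge p0->p2, y along the face normal, x completing the basis."""
    z = norm(sub(p2, p0))
    y = norm(cross(sub(p1, p0), z))
    x = norm(cross(y, z))
    return (x, y, z)  # rows are the basis vectors

def transpose(m):
    return tuple(tuple(m[r][c] for r in range(3)) for c in range(3))

def matmul(a, b):
    return tuple(tuple(sum(a[r][k]*b[k][c] for k in range(3))
                       for c in range(3)) for r in range(3))

F = prim_frame((0, 0, 0), (1, 0, 0), (0, 0, 1))
R = transpose(F)

# F is orthonormal, so R*F should be (numerically) the identity.
I = matmul(R, F)
print([[round(v, 6) for v in row] for row in I])
```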

The curves should deform with the attachment point (paramuv) of the target prim, so we’ll find the updated attachment point next.

Following the Houdini implementation of parametric spaces, we need to define two functions to get the updated point positions within the target prim: one for a triangle target prim and one for a quad (anything else is ignored for now). The quad case uses bilinear interpolation (see bilinear patch), while the triangle case uses linear interpolation between the three points of the triangle.

triangle case:

b(u,v) = p_0*(1-u-v) + p_1*u + p_2*v

quad case:

\begin{align*}
b(u,v) &= p_0*(1-u)*(1-v) \\ 
&+ p_1*(1-u)*v \\
&+ p_2*u*v \\ 
&+ p_3*u*(1-v)
\end{align*}

where u and v are our paramuv property values for the hair strand and p0, p1, p2, (p3) are the points of the target prim.
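The two formulas translate directly into Python. A quick sanity check is that u=v=0 returns p0 and that the quad corners are hit at the corner parameter values (a sketch of the math, not the kernel code):

```python
def tri_interp(u, v, p0, p1, p2):
    """b(u, v) for a triangle: linear interpolation with w = 1 - u - v."""
    w = 1.0 - u - v
    return tuple(a*w + b*u + c*v for a, b, c in zip(p0, p1, p2))

def quad_interp(u, v, p0, p1, p2, p3):
    """b(u, v) for a quad: bilinear interpolation over the four points."""
    return tuple(a*(1-u)*(1-v) + b*(1-u)*v + c*u*v + d*u*(1-v)
                 for a, b, c, d in zip(p0, p1, p2, p3))

p0, p1, p2, p3 = (0, 0, 0), (0, 0, 1), (1, 0, 1), (1, 0, 0)

print(tri_interp(0.0, 0.0, p0, p1, p2))       # -> p0
print(quad_interp(1.0, 1.0, p0, p1, p2, p3))  # -> p2
print(quad_interp(0.5, 0.5, p0, p1, p2, p3))  # center of the quad
```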

We want to rotate each point c1..n of the curve about the root point c0 of the curve. If we first move c1..n by -c0, then RF will rotate the curve about the root. Let o be the vector between ci and c0:

\vec{o}  = c_i - c_0

The final position c’ of point i is then:

c_i' = \vec{o}RF + c_0 + b(u, v)-c_0

And implemented in OpenCL:

OpenCL
static float3 quadBarycentric(
    const float2 uv,
    const float3 p0,
    const float3 p1,
    const float3 p2,
    const float3 p3
)
{
    float u = uv.x;
    float v = uv.y;
    return (float3) p0*(1-u)*(1-v) + p1*(1-u)*v + p2*u*v + p3*u*(1-v);
}


static float3 triBarycentric(
    const float2 uv,
    const float3 p0,
    const float3 p1,
    const float3 p2
)
{
    float u = uv.x;
    float v = uv.y;
    float w = 1.0f-u-v;
    return (float3) p0*w + p1*u + p2*v;
}


__kernel void HairProc(
    __global float* result,
    __global float* src_p,
    __global int* src_primpts_lengths,  // How many points each prim consists of.
    __global int* src_primpts_indices,  // Array containing the first point index of each prim

    /* Targets */
    __global float* tgt_p,
    __global int* tgt_prmpts_lengths,
    __global int* tgt_prmpts_indices,
    __global int* tgt_prmpts_offset,

    __global float* tgt_rest_frames,
    __global float* tgt_frames,

    /* Capture attributes */
    __global float* capt_uv,
    __global int* capt_prm,
    __global int* unique_prm
)
{
    // idx will be our loop over each hair strand.
    int idx = get_global_id(0);

    int tgt_prm = unique_prm[capt_prm[idx]];

    // Start by getting the new uv and normal values of the target prim
    int pt0 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm]];
    int ptlast = pt0 + tgt_prmpts_lengths[tgt_prm];
    int npts = ptlast - pt0;

    if (npts != 3 && npts != 4) {
        return;
    }

    int pt1 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm] + 1];
    int pt2 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm] + 2];
    int pt3;

    // float3
    float3 p0 = vload3(pt0, tgt_p);
    float3 p1 = vload3(pt1, tgt_p);
    float3 p2 = vload3(pt2, tgt_p);
    float3 p3;

    float2 uv = vload2(idx, capt_uv);

    float3 new_pos;
    if (npts == 3) {
        new_pos = triBarycentric(uv, p0, p1, p2);
    } else {
        pt3 = tgt_prmpts_indices[tgt_prmpts_offset[tgt_prm] + 3];
        p3 = vload3(pt3, tgt_p);
        new_pos = quadBarycentric(uv, p0, p1, p2, p3);
    }

    int src_start = src_primpts_indices[idx];
    int src_end = src_start + src_primpts_lengths[idx];

    // float3 translate = vload3(0, tgt_translate);
    float3 root = vload3(src_start, src_p);
    float3 offset = new_pos - root;

    mat3 R;
    mat3 F;
    Mat3Load3(capt_prm[idx], tgt_rest_frames, R);
    Mat3Load3(capt_prm[idx], tgt_frames, F);

    for (int pt_idx = src_start + 1; pt_idx < src_end; pt_idx++) {
        float3 p = vload3(pt_idx, src_p);

        p -= root;
        p = Mat3VecMul(p, R);
        p = Mat3VecMul(p, F);
        p += root + offset;

        vstore3(p, pt_idx, result);
    }
    root += offset;
    vstore3(root, src_start, result);
}

As usual in OpenCL, I find the hardest part to be the indexing of arrays. I’ve also complicated things a bit: instead of using the prim property as-is from the API schema, I’m converting it to a non-repeating, dense array before adding it to the buffer. It would be unnecessary to calculate R and F multiple times for the same prim, or for prims which don’t have a curve attached. This conversion is done on the CPU before setting the arguments to the kernel.
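That CPU-side conversion amounts to remapping the per-curve face indices into a compact table of unique faces. A hypothetical Python sketch of the idea (not the actual code from the repo):

```python
def densify(capt_prm):
    """Remap per-curve face indices to indices into a unique, dense list.

    Returns (unique_prm, remapped) so that frames only need to be computed
    once per unique face, and only for faces that have curves attached.
    """
    unique_prm = sorted(set(capt_prm))
    index_of = {prm: i for i, prm in enumerate(unique_prm)}
    remapped = [index_of[prm] for prm in capt_prm]
    return unique_prm, remapped

# Four curves attached to faces 7, 3, 7, 12 of the target mesh.
unique_prm, remapped = densify([7, 3, 7, 12])
print(unique_prm)  # -> [3, 7, 12]
print(remapped)    # -> [1, 0, 1, 2]
```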

Update Test Script

Previously, we set up the test script that built our hairProc.usda file. We set that up solely to create a curve and apply the HairProceduralAPI schema, but we didn’t set any of the schema’s properties. It’s also boring to test with a single curve… So let’s update the testenv/genHairProc.py script so we can test the procedural in multiple scenarios.

testenv/genHairProc.py

First, we’ll create a couple of functions to generate some simple geometries which our curves will attach to

Python
def build_plane(stage, name="plane", w=2, h=2, pos=(0,0,0)):
    """
    Build a simple plane for testing hair procedural
    """
    pts = np.zeros((4, 3))
    pts[0] = (pos[0] + w, pos[1] + h*2, pos[2])
    pts[1] = (pos[0] - w, pos[1] + h*2, pos[2])
    pts[3] = (pos[0] + w, 0, pos[2])
    pts[2] = (pos[0] - w, 0, pos[2])

    mesh = UsdGeom.Mesh.Define(stage, Sdf.Path("/" + name))
    mesh.CreatePointsAttr(pts)
    mesh.CreateFaceVertexIndicesAttr([0,1,2,3])
    mesh.CreateFaceVertexCountsAttr([4])

    # np.random.seed(int(hashlib.sha1(path.encode("utf-8")).hexdigest(), 16) % 10**8)
    cd = np.array([0.05, 0.05, 0.05])
    mesh.CreateDisplayColorAttr(cd)
    return mesh


def build_tube(stage, name="tube", rows=5, columns=5, radius=1, height=2, caps=True, pos=(0,0,0), triangles=False):
    """
    Build a tube for testing hair procedural
    """
    rows = max(rows, 2)
    columns = max(columns, 3)
    rc = rows * columns

    pts = np.zeros((rc, 3))
    for i in range(rows):
        y = (height / (rows-1)) * i
        for j in range(columns):
            v = ((j % columns)) / columns * np.pi * 2
            p = (pos[0] + np.sin(v) * radius, pos[1] + y, pos[2] + np.cos(v) * radius)

            pts[i*columns+j] = p

    indices = []
    counts = []

    # CAPS
    if caps:
        pts.resize((rc + 2, 3))
        pts[-2] = pos
        pts[-1] = (pos[0], pos[1] + height, pos[2])
        for i in range(columns):
            indices.append(i + 1 if i + 1 != columns else 0)
            indices.append(i)
            indices.append(rc)
            counts.append(3)

        for i in range(rc-columns, rc):
            indices.append(rc + 1)
            indices.append(i)
            indices.append(i + 1 if i != rc-1 else rc-columns)
            counts.append(3)

    # SIDES
    for i in range(rows-1):
        for j in range(columns):
            idx = i*columns+j

            if not triangles:
                indices.append(idx + 1 if j != columns-1 else i*columns)
            
            indices.append(idx + columns + 1 if j != columns-1 else i*columns+columns)
            indices.append(idx + columns)
            indices.append(idx)
            
            counts.append(4 if not triangles else 3)
            
            if triangles:
                indices.append(idx + 1 if j != columns-1 else i*columns)
                indices.append(idx + columns + 1 if j != columns-1 else i*columns+columns)
                indices.append(idx)
                
                counts.append(3)


    mesh = UsdGeom.Mesh.Define(stage, Sdf.Path("/"+name))
    mesh.CreatePointsAttr(pts)
    mesh.CreateFaceVertexIndicesAttr(indices)
    mesh.CreateFaceVertexCountsAttr(counts)

    cd = np.array([0.1, 0.1, 0.1])
    mesh.CreateDisplayColorAttr(cd)
    return mesh

Then we can update the build_hair() function:

Python
def build_hair(stage, target, count=15, path="/curves", faces=[], apply_api=True):
    """
    Create hair primitives on a target UsdPrim. Choose what faces to apply the hair to 
    for testing face indexes.
    """
    counts = target.GetFaceVertexCountsAttr().Get()
    indices = target.GetFaceVertexIndicesAttr().Get()
    pts = target.GetPointsAttr().Get()

    if not faces:
        faces = range(len(counts))

    l = count / len(faces)

    curve_cnt = [2] * count
    curve_pts = np.zeros((count * 2, 3))
    curve_w = np.full(count*2, 0.03)
    curve_prm = []
    curve_uvs = []

    total = 0
    carried = 0

    for i in faces:
        c = counts[i]
        li = int(np.rint(l + carried))
        carried += l - li
        if li == 0:
            continue

        offset = 0
        if i > 0:
            offset = sum(counts[0:i])

        p = np.zeros((c, 3))
        for j in range(c):
            p[j] = pts[indices[j + offset]]

        e = p[1:] - p[0]
        n = np.cross(e[0], e[1])
        d = np.dot(n, n)
        if d != 0.0:
            n /= np.sqrt(d)

        r = np.random.random((li, 2))
        roots = np.zeros((li, 3), dtype=np.float64)
        for j in range(li):
            u = r[j][0]
            v = r[j][1]
            if c == 3:
                u /= 2
                v /= 2
                w = 1-u-v
                roots[j] = p[0]*w + p[1]*u + p[2]*v
            elif c == 4:
                roots[j] = p[0]*(1-u)*(1-v) + p[1]*(1-u)*v + p[2]*u*v + p[3]*u*(1-v)

            curve_uvs.append((u, v))
        tips = roots + n

        curve_pts.put(range(total, total + li * 6), np.stack((roots, tips), -2))        
        curve_prm += [i] * li

        total += li * 6

    hair = UsdGeom.BasisCurves.Define(stage, Sdf.Path(path))
    hair.CreatePointsAttr(curve_pts)
    hair.CreateCurveVertexCountsAttr(curve_cnt)
    hair.CreateWidthsAttr(curve_w)
    hair.CreateTypeAttr("linear")

    np.random.seed(int(hashlib.sha1(path.encode("utf-8")).hexdigest(), 16) % 10**8)
    cd = np.random.random((1, 3))
    hair.CreateDisplayColorAttr(cd)

    if apply_api:
        api = HairProc.HairProceduralAPI.Apply(hair.GetPrim())
        api.CreatePrimAttr(curve_prm)
        api.CreateParamuvAttr(curve_uvs)
        api.CreateRestAttr(pts)
        rel = api.CreateTargetRel()
        rel.SetTargets([target.GetPath()])

        assert(hair.GetPrim().HasAPI("HairProceduralAPI"))
        assert(hair.GetPrim().HasAPI(HairProc.HairProceduralAPI))

    return hair

The build_hair() function will now scatter some hairs randomly on the target geometry and also create the API schema properties!

In order to test the deformation, we’ll also need some functions to animate our geometry. I’ve made three different functions to animate some USD geometry: transform() to animate the UsdXform of the prim, transform_pts(), similar to transform() but animating the points instead of the xform, and twist() to test non-uniform deformation:

Python
def transform(stage, prim, nframes=10, speed=1, translate=True, rotate=False, pos=(0,0,0)):
    """
    Set the xform of the UsdPrim
    """
    frames = range(nframes)
    stage.SetStartTimeCode(frames[0])
    stage.SetEndTimeCode(frames[-1])
    stage.SetFramesPerSecond(24)

    xform = UsdGeom.Xformable(prim)

    if rotate:
        op = xform.AddRotateYOp()
        for f in frames:
            # RotateYOp expects degrees; rotate 4 full turns over the animation
            op.Set(360.0 / nframes * 4 * f, f)

    if translate:
        op = xform.AddTranslateOp()
        for f in frames:
            op.Set((pos[0], pos[1], pos[2] + np.sin(f/nframes*4*np.pi)*10), f)


def transform_pts(stage, prim, nframes=10, speed=0.1, pos=(0,0,0)):
    """
    Transforms each point individually.
    """
    frames = range(nframes)
    stage.SetStartTimeCode(frames[0])
    stage.SetEndTimeCode(frames[-1])
    stage.SetFramesPerSecond(24)

    pts_attr = prim.GetPointsAttr()
    pts = np.array(pts_attr.Get())

    pts_attr.Set(pts, frames[0])

    def _func(v, t):
        x = v[0] * np.cos(t) - v[2] * np.sin(t)
        y = v[1]
        z = v[0] * np.sin(t) + v[2] * np.cos(t)
        return np.array((x, y, z))

    vf = np.vectorize(_func, signature="(n)->(n)")
    vf.excluded.add("t")

    for f in frames:
        pts = vf(v=pts, t=(np.pi*2)/nframes * 4)
        pts_attr.Set(pts + pos, f)


def twist(stage, prim, nframes=10, height=10, pos=(0,0,0)):
    """
    Twists the geometry to test skewing of faces
    """
    frames = range(nframes)
    stage.SetStartTimeCode(frames[0])
    stage.SetEndTimeCode(frames[-1])
    stage.SetFramesPerSecond(24)

    pts_attr = prim.GetPointsAttr()
    pts = np.array(pts_attr.Get())

    def _func(v, t, h):
        # Rotate around the y axis, scaling the angle by the height of the point
        x = v[0] * np.cos(t * (v[1]/h)) - v[2] * np.sin(t * (v[1]/h))
        y = v[1]
        z = v[0] * np.sin(t * (v[1]/h)) + v[2] * np.cos(t * (v[1]/h))
        return np.array((x, y, z))

    vf = np.vectorize(_func, signature="(n)->(n)")
    vf.excluded.add("t")
    vf.excluded.add("h")

    for f in frames:
        pts = vf(v=pts, t=(np.pi*2)/nframes * 4, h=height)
        pts_attr.Set(pts + pos, f)

Now, let’s update the do_stuffs() function:

Python
def do_stuffs(stage):
    """
    Build the stage for testing hair procedural
    """
    tube1 = build_tube(stage, "tube1", rows=5, columns=10, height=10, caps=True)
    tube2 = build_tube(stage, "tube2", rows=5, columns=10, height=10, caps=True, triangles=True)
    plane1 = build_plane(stage, "plane1",  w=3, h=3)
    plane2 = build_plane(stage, "plane2",  w=3, h=3)

    build_hair(stage, tube1, count=1000, path="/curves1")
    build_hair(stage, tube2, count=1000, path="/curves2")
    build_hair(stage, plane1, count=1000, path="/curves3")
    build_hair(stage, plane2, count=1000, path="/curves4")

    twist(stage, tube1, nframes=200, height=10)
    twist(stage, tube2, nframes=200, height=10, pos=(10,0,0))
    transform_pts(stage, plane1, nframes=200, pos=(20,0,0))
    transform(stage, plane2, nframes=200, pos=(30,0,0))

    add_camera(stage, (15,5,100), (0,0,1))

Let’s run the Python script again to generate the stage, then finally:

Bash
usdview testenv/hairProc.usda

or render our stage:

Bash
usdrecord testenv/hairProc.usda --frames "0:200, 1" --camera "/cameras/camera1" render/frame.###.jpg

Result

This is the final result of the hairProc.usda file rendered through usdrecord using Metal. The hairs are deformed procedurally to the target geometries at render time, i.e. no value clips or time samples are stored on the hairs. From left to right:

  1. twisting tube with quads
  2. twisting tube with triangles
  3. animated points
  4. animated xform

There’s clearly a problem with the deformation on the first tube though. The reason is that the local frames are calculated from the first three points of each face, and a quad can bend across its surface, so F(p0, p1, p2) != F(p3, p2, p1). There are plenty of ways to work around this: you could, for example, treat each quad as 4 triangles to cover all the axes of rotation. You could also calculate the frames for each internal point of the quad and interpolate the frames at paramuv (using slerp, for example), or interpolate the normals of the vertices of the prims and use that as the y axis of the frames. One problem with interpolating the rotation is that you’d need to calculate the final matrix for each curve, which is computationally heavier than caching the matrices once per target prim.
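
The first workaround can be sketched in NumPy: average the normals of the four triangles fanned around the quad, so a non-planar quad still yields a single stable frame. This is just a sketch of the idea, not the deformer’s actual code, and the function names are made up:

Python

```python
import numpy as np

def frame_from_tri(p0, p1, p2):
    # Orthonormal frame from three points: x along the first edge,
    # y along the triangle normal, z completing the basis.
    x = p1 - p0
    x = x / np.linalg.norm(x)
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    return np.stack((x, n, np.cross(x, n)))

def frame_from_quad(p):
    # Average the (consistently wound) triangle normals of the quad,
    # so the frame no longer depends on which corner we start from.
    n = np.zeros(3)
    for a, b, c in ((0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1)):
        t = np.cross(p[b] - p[a], p[c] - p[a])
        n += t / np.linalg.norm(t)
    n /= np.linalg.norm(n)
    x = p[1] - p[0]
    x = x - n * np.dot(x, n)  # project the first edge onto the averaged plane
    x /= np.linalg.norm(x)
    return np.stack((x, n, np.cross(x, n)))
```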

To properly show the issue, here’s a comparison between using quads (first image) and triangles (second image).

Here I’ve just changed the topology of the tube, but in reality, you often can’t rely on the input geometry being created in any particular way, so ideally this should be handled by the deformer… But I’ll leave that up to you, or future me, to solve 🙂

TODO

I’m sure there’s plenty one could do to increase the speed and robustness of this setup, but for demonstration purposes, I’m OK with the result here. However, for a production build, there’s a lot I’d like to change.

I would, for example, like to replace all the 3×3 rotation matrices in the kernels with quaternions, since that would mean less data to send to the OpenCL device. Quaternions would also be preferred if implementing the slerp method I mentioned before. Alternatively, one could replace the entire method for calculating the frames and instead interpolate the normals for each hair curve inside the HairProc kernel.
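
As a host-side sketch of what the quaternion route could look like (NumPy only, hypothetical helpers, not part of the plugin): a 3×3 matrix is 9 floats where a quaternion is 4, and slerp falls out naturally:

Python

```python
import numpy as np

def mat_to_quat(m):
    # 3x3 rotation matrix -> quaternion (w, x, y, z).
    # Assumes 1 + trace > 0; a robust version would branch on the
    # largest diagonal element instead.
    w = np.sqrt(1.0 + m[0, 0] + m[1, 1] + m[2, 2]) / 2.0
    return np.array([w,
                     (m[2, 1] - m[1, 2]) / (4 * w),
                     (m[0, 2] - m[2, 0]) / (4 * w),
                     (m[1, 0] - m[0, 1]) / (4 * w)])

def slerp(q0, q1, t):
    # Spherical linear interpolation between two unit quaternions.
    d = np.dot(q0, q1)
    if d < 0.0:        # flip to take the shorter arc
        q1, d = -q1, -d
    if d > 0.9995:     # nearly parallel: lerp and renormalize
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(d)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```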

I could also be smarter about how I generate the properties on the API schema prim, for example storing the target prims as an indexed array with indices into a dense, non-repeating array straight away, instead of doing it when setting up the OpenCL context. This would increase the data stored on the hairs, but would result in a faster setup time for the deformer.
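
The indexing itself is a one-liner with NumPy’s np.unique; the array contents here are just made-up example data:

Python

```python
import numpy as np

# Hypothetical per-curve target prim ids, with lots of repetition.
prim_per_curve = np.array([0, 0, 0, 2, 2, 5, 5, 5])

# Dense, non-repeating prim ids plus per-curve indices into them,
# which could be authored on the schema directly instead of being
# rebuilt when the OpenCL context is set up.
dense, indices = np.unique(prim_per_curve, return_inverse=True)
print(dense.tolist())    # [0, 2, 5]
print(indices.tolist())  # [0, 0, 0, 1, 1, 2, 2, 2]
```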

It would also be ideal to respect any subdivisionScheme of the target prim.

I’d also like to add more settings to the API schema to control the behavior of the deformer… Ultimately, I’d like a generalized setup not bound to deforming hairs, where one could just define inputs, outputs and a kernel file in order to deform any type of geometry however one likes, but I believe that’s quite far from this example.

I’d like to see how this can be added to Houdini’s USD as well. Currently, this plugin runs with the native USD build, but since Houdini ships its own internal USD build, I’m not entirely sure how one would install the plugin so that it also works with Houdini.

Conclusion

This became quite a hefty post about something that visually isn’t very impressive. Regardless, I hope it inspires more people to look into Hydra 2.0 and all the cool things one can do with it! I would again like to refer to Nvidia and their plugin-samples for USD, which were my main source of inspiration and information for this project. There’s some great information on that page that goes into more detail about how scene indexing works in Hydra.

I’m sure there’s plenty more one could do for this project, but I’ll leave it here for now. Thank you so much for reading!
