Talks
Hyperdimensional Computing: what is hot and what is not
HDC (Hyperdimensional Computing) is a rapidly evolving field within artificial intelligence that captivates newcomers from their very first exposure. Descriptions of HDC as brain-inspired, capable of computing in superposition, and a potential bridge to Artificial General Intelligence (AGI) spark immediate enthusiasm among audiences. As an experienced HDC researcher, I feel compelled to review common pitfalls encountered when publishing HDC-empowered research solutions. In my presentation, I will survey the most promising trends in HDC, as well as the less compelling ones, aiming to guide future research directions effectively.
Reducing computational complexity of perception and reasoning by neuro-vector-symbolic architectures
We recently proposed neuro-vector-symbolic architectures (NVSA), in which high-dimensional distributed vectors are generated by neural nets and further processed by VSA-informed machinery at different levels of abstraction. Using NVSA, we set state-of-the-art accuracy records on few-shot continual learning [CVPR 2022] as well as visual abstract reasoning tasks [Nature Machine Intelligence 2023]. The advantages do not end there: NVSA also reduces the computational complexity of both perception and reasoning, even on modern CPUs/GPUs. NVSA extended computation-in-superposition to the highly nonlinear transformations in CNNs and Transformers, effectively doubling their throughput at nearly iso-accuracy and iso-computational cost [NeurIPS 2023]. NVSA also made probabilistic abduction tractable by avoiding exhaustive probability computations and brute-force symbolic searches, yielding 244× faster inference than the probabilistic reasoning of state-of-the-art approaches [Nature Machine Intelligence 2023]. Finally, NVSA permitted learning-to-reason: instead of hard-coding the rule formulations associated with a reasoning task, NVSA can transparently learn them with just one pass through the training data [NeurIPSW Math-AI 2023].
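To make the underlying machinery concrete, here is a minimal NumPy sketch of the core VSA operations that NVSA builds on: bipolar hypervectors, binding by elementwise multiplication, and superposition by addition. The vector type, dimensionality, and all names are illustrative assumptions, not the papers' actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality; high D keeps random vectors quasi-orthogonal

def rand_vec():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication (self-inverse for bipolar vectors)."""
    return a * b

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Role and filler vectors, e.g. attributes a neural front end might emit.
color, shape = rand_vec(), rand_vec()
red, square = rand_vec(), rand_vec()

# Superposed record holding both attribute bindings at once.
record = bind(color, red) + bind(shape, square)

# Unbinding the 'color' role recovers a noisy copy of 'red'.
query = bind(record, color)
print(cosine(query, red))     # ~0.7: strong match (signal)
print(cosine(query, square))  # ~0.0: near-orthogonal (noise)
```

At high dimensionality the cross-terms behave like noise, which is what allows several bindings to be carried, and queried, in superposition.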
Hyperdimensional Computing for Efficient Neuromorphic Visual Processing
A. Renner
This talk explores the potential of Hyperdimensional Computing (HDC) as a framework for efficient neuromorphic processing. We introduce Hierarchical Resonator Networks (HRNs), a novel architecture for scene understanding. HRNs leverage HDC to identify objects and their generative factors directly from visual input. The network computes with complex-valued vectors implemented as spike-timing phasor neurons, which enables deployment on low-power neuromorphic hardware such as Intel's Loihi.
Additionally, we demonstrate the HRN’s capability in visual odometry, accurately estimating robot self-motion from event-based data. This approach is a step towards robust, low-power computer vision and robotics.
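For readers unfamiliar with resonator networks, the following is a minimal sketch of the iterative factorization dynamics that HRNs build upon, here with complex phasor vectors and two factors. The codebooks, sizes, and names are illustrative assumptions; the actual HRN adds hierarchy and a spiking phasor implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 2048, 20  # vector dimension, codebook size per factor (illustrative)

def phasor(n):
    """n random unit-magnitude complex phasor vectors (rows)."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n, D)))

X, Y = phasor(K), phasor(K)   # codebooks for two generative factors
s = X[3] * Y[7]               # composite vector: binding = elementwise product

def project(v, C):
    """Cleanup: re-express v as a codebook mixture, renormalize to phasors."""
    a = np.conj(C) @ v            # similarity to each codebook entry
    v_new = a @ C                 # weighted recombination of entries
    return v_new / np.abs(v_new)  # restore the unit-magnitude constraint

# Resonator iteration: each factor estimate is refined by unbinding the other
# (multiplying by its complex conjugate) and cleaning up against its codebook.
x_hat, y_hat = phasor(1)[0], phasor(1)[0]
for _ in range(30):
    x_hat = project(s * np.conj(y_hat), X)
    y_hat = project(s * np.conj(x_hat), Y)

sims_x = np.abs(np.conj(X) @ x_hat) / D
print(sims_x.argmax())  # -> 3, the true index of the first factor
```

Because binding and unbinding are cheap elementwise operations, the exhaustive search over all K × K factor combinations is replaced by a few dozen vector iterations.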
HDC Feature Aggregation for Time Series Data and Beyond
This talk gives an overview of applying HDC for feature encoding prior to aggregation. This approach can be useful in several domains, including image processing for spatial features, time series for temporal sequences, and any other setting with distinct features. Features, typically represented as vectors, are commonly combined through superposition to create compact representations for tasks like classification. HDC, equipped with operators such as superposition and binding, provides a practical way to incorporate additional contextual or temporal knowledge into these vector-based representations. For instance, the presentation shows how HDC temporal encoding can enhance time series classification algorithms across fields such as automotive and biomedical signal processing.
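As an illustration of the kind of temporal encoding discussed, here is a minimal NumPy sketch assuming a level-quantization codebook and cyclic shifts as the time-binding operation; all names, parameters, and the toy data are illustrative assumptions, not the presented algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000
LEVELS = 32  # quantization levels for the signal amplitude

# Random bipolar codebook: one hypervector per quantized amplitude level.
level_hv = rng.choice([-1, 1], size=(LEVELS, D))

def encode(series):
    """Temporal encoding: shift each sample's level vector by its time
    offset (cyclic roll), then superpose everything into one hypervector."""
    lo, hi = series.min(), series.max()
    idx = ((series - lo) / (hi - lo + 1e-9) * (LEVELS - 1)).astype(int)
    hv = np.zeros(D)
    for t, i in enumerate(idx):
        hv += np.roll(level_hv[i], t)  # shift encodes 'when', vector 'what'
    return np.sign(hv)                 # binarize back to a bipolar vector

def prototype(examples):
    """Class prototype: superposition of the encoded training examples."""
    return np.sign(sum(encode(x) for x in examples))

# Toy two-class problem: noisy sine waves vs. noisy ramps.
sine = [np.sin(np.linspace(0, 4 * np.pi, 100) + rng.normal(0, .1, 100)) for _ in range(10)]
ramp = [np.linspace(-1, 1, 100) + rng.normal(0, .1, 100) for _ in range(10)]
protos = np.stack([prototype(sine), prototype(ramp)])

test = np.sin(np.linspace(0, 4 * np.pi, 100))
print((protos @ encode(test)).argmax())  # -> 0, classified as 'sine'
```

Because the cyclic shift preserves quasi-orthogonality, the same amplitude occurring at different times maps to nearly orthogonal vectors, so the superposed representation retains temporal order rather than just a bag of values.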