
VSA4VQA: Scaling A Vector Symbolic Architecture To Visual Question Answering on Natural Images

Anna Penzkofer, Lei Shi, Andreas Bulling

Proc. 46th Annual Meeting of the Cognitive Science Society (CogSci), pp. , 2024.


Abstract

While Vector Symbolic Architectures (VSAs) are promising for modelling spatial cognition, their application is currently limited to artificially generated images and simple spatial queries. We propose VSA4VQA – a novel 4D implementation of VSAs that constructs a mental representation of natural images for the challenging task of Visual Question Answering (VQA). VSA4VQA is the first model to scale a VSA to complex spatial reasoning. Our method is based on the Semantic Pointer Architecture (SPA) to encode objects in a hyperdimensional vector space. To encode natural images, we extend the SPA to include dimensions for objects' width and height in addition to their spatial location. To perform spatial queries, we further introduce learned spatial query masks and integrate a pre-trained vision-language model for answering attribute-related questions. We evaluate our method on the GQA benchmark dataset and show that it can effectively encode natural images, achieving performance competitive with state-of-the-art deep learning methods for zero-shot VQA.
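
For illustration, the sketch below shows one way such a 4D encoding could look; it is not the authors' implementation. An object's bounding box (x, y, width, height) is encoded by fractionally binding four random axis vectors via circular convolution, the binding operation of the SPA, and objects are superposed into a single memory vector. NumPy, the dimensionality, and all helper names are assumptions made for this example.

import numpy as np

def unitary_vector(dim, rng):
    # Random unitary vector: all Fourier coefficients have magnitude 1,
    # so fractional powers and bindings keep unit norm.
    phases = rng.uniform(-np.pi, np.pi, size=dim // 2 - 1)
    coeffs = np.ones(dim // 2 + 1, dtype=complex)
    coeffs[1:-1] = np.exp(1j * phases)
    return np.fft.irfft(coeffs, n=dim)

def fractional_power(v, exponent):
    # Fractional binding: raise the vector to a real-valued power in the
    # Fourier domain (circular self-convolution, generalised to non-integer exponents).
    return np.fft.irfft(np.fft.rfft(v) ** exponent, n=len(v))

def bind(a, b):
    # Circular convolution, the binding operator of the SPA.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(v):
    # Approximate inverse under circular convolution (involution).
    return np.concatenate(([v[0]], v[:0:-1]))

dim = 1024
rng = np.random.default_rng(0)
X, Y, W, H = (unitary_vector(dim, rng) for _ in range(4))  # one axis vector per dimension

def encode_box(x, y, w, h):
    # 4D spatial semantic pointer for one bounding box (location + size).
    p = bind(fractional_power(X, x), fractional_power(Y, y))
    p = bind(p, fractional_power(W, w))
    return bind(p, fractional_power(H, h))

# Superpose objects bound to their encoded bounding boxes into one memory vector.
CAT = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)
DOG = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)
memory = bind(CAT, encode_box(2.0, 3.0, 1.0, 1.5)) + bind(DOG, encode_box(5.0, 1.0, 2.0, 2.0))

# Query "where is the cat?": unbind CAT and compare with a candidate box.
retrieved = bind(memory, inverse(CAT))
candidate = encode_box(2.0, 3.0, 1.0, 1.5)
similarity = retrieved @ candidate / (np.linalg.norm(retrieved) * np.linalg.norm(candidate))
print(f"similarity to true box: {similarity:.2f}")

In the paper itself, spatial questions are answered with learned spatial query masks rather than the single unbind-and-compare step shown in this toy example.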

BibTeX

@inproceedings{penzkofer24_cogsci,
  author    = {Penzkofer, Anna and Shi, Lei and Bulling, Andreas},
  title     = {VSA4VQA: Scaling A Vector Symbolic Architecture To Visual Question Answering on Natural Images},
  booktitle = {Proc. 46th Annual Meeting of the Cognitive Science Society (CogSci)},
  year      = {2024},
  pages     = {}
}