Aura Vision (YC W19) – Google Analytics for Physical Stores

We're Daniel, Jaime, and Jonathon - the founders of Aura Vision. (https://auravision.ai)

Aura Vision is like Google Analytics for physical retail stores. Our mission is to ensure that physical retailers can innovate and improve their stores with data, in the same way their eCommerce counterparts do, while protecting customer privacy.

Retail teams often know very little about what shoppers do in-store leading up to a purchase. To try to increase sales, they change layouts, products, and media in their shops based on anecdotal knowledge and experience, because it's hard to get good-quality data about what consumers actually do in their stores. Many retailers periodically station people in doorways with clipboards to record shopper demographics and behaviours, which is costly and doesn't scale.

We use existing security cameras in stores, together with our proprietary computer vision technology, to detect the demographics (age, gender, staff/customer) and behaviour of all visitors. This creates an anonymised feed of aggregated data for the retailer, giving them new tools to improve their stores. For example:

- To increase footfall, retailers can A/B test window displays, selecting the one with the highest peel-off rate (the ratio of entries to people walking by; see the short sketch after this list).

- To uncover why a product is underselling, retailers can learn about the movement and dwell times of different demographics around products.

- To increase sales, they can select products that are suited to the demographics in that store.

- To increase conversion rates, retailers can identify where customers spend most of their time in-store and locate staff accordingly.
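
To make the peel-off rate idea concrete, here is a minimal sketch, not our production code, of how an A/B comparison of two window displays could be computed from hypothetical 15-minute footfall counts. All names and numbers here are illustrative assumptions.

```python
# Hypothetical sketch: comparing two window displays by "peel-off rate",
# i.e. the ratio of store entries to people walking past the window.
from dataclasses import dataclass


@dataclass
class FootfallBucket:
    passers_by: int  # people detected walking past the window in a 15-minute window
    entries: int     # people detected entering the store in the same window


def peel_off_rate(buckets: list[FootfallBucket]) -> float:
    """Ratio of total entries to total passers-by across all buckets."""
    total_passers = sum(b.passers_by for b in buckets)
    total_entries = sum(b.entries for b in buckets)
    return total_entries / total_passers if total_passers else 0.0


# Hypothetical A/B test: display A ran one week, display B the next.
display_a = [FootfallBucket(passers_by=120, entries=18),
             FootfallBucket(passers_by=95, entries=11)]
display_b = [FootfallBucket(passers_by=130, entries=26),
             FootfallBucket(passers_by=101, entries=19)]

print(f"Display A peel-off rate: {peel_off_rate(display_a):.1%}")
print(f"Display B peel-off rate: {peel_off_rate(display_b):.1%}")
```

The display with the higher rate is the one that converts more passers-by into visitors, which is the signal a retailer would act on.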

We started out in the UK during the birth of GDPR, so we're acutely aware of the need to protect customer privacy. Video is deleted as part of processing and never stored afterwards, and our system never identifies people or stores identities. All data is aggregated into 15-minute chunks, which fully anonymises the counts, leaving only the behaviours the camera observed in that period. Those chunks are supplied back to the retailer through our dashboard and API as heatmaps and counts. We don't rely on facial recognition; instead we take in visual cues from features across the whole body.
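As a rough illustration of that aggregation step, here is a minimal sketch, assuming a hypothetical stream of per-detection events with only a timestamp and coarse attributes, that reduces them to anonymised counts per 15-minute window. This is not our pipeline, just an example of the kind of reduction described above.

```python
# Hypothetical sketch: collapsing per-detection events into anonymised
# 15-minute counts. No video and no identities are retained; only coarse
# attributes (age band, gender, staff/customer) and a timestamp are used.
from collections import Counter
from datetime import datetime


def bucket_start(ts: datetime) -> datetime:
    """Round a timestamp down to the start of its 15-minute window."""
    return ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)


def aggregate(detections: list[dict]) -> dict:
    """Count detections per (15-minute window, age band, gender, role)."""
    counts = Counter()
    for d in detections:
        key = (bucket_start(d["timestamp"]), d["age_band"], d["gender"], d["role"])
        counts[key] += 1
    return dict(counts)


# Hypothetical detections emitted by a vision model for one camera.
detections = [
    {"timestamp": datetime(2019, 3, 4, 10, 2), "age_band": "25-34", "gender": "female", "role": "customer"},
    {"timestamp": datetime(2019, 3, 4, 10, 9), "age_band": "25-34", "gender": "female", "role": "customer"},
    {"timestamp": datetime(2019, 3, 4, 10, 20), "age_band": "45-54", "gender": "male", "role": "staff"},
]

for (window, age, gender, role), n in aggregate(detections).items():
    print(window.isoformat(), age, gender, role, n)
```

Only these windowed counts (and the heatmaps derived from them) would ever reach the dashboard or API; the individual detections are discarded.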

In contrast, many other retail tracking solutions, such as Bluetooth and WiFi tracking, aren't GDPR compliant because they store MAC addresses or other phone IDs without consent, which count as personal data. This means they can re-identify you when you return to the store, or to another store on their network. While regulation will do a good job of getting rid of these tracking solutions, we want to help by showing retailers there's an option that gives them more useful data anyway.

Daniel and Jaime studied under the same supervisor at the University of Southampton during their computer vision PhDs, and saw plenty of opportunities for using deep learning in people tracking. A key part of Daniel's PhD was estimating people's demographics from CCTV footage, which led to the system we are running now. Daniel and I went to primary school together, and my background is in APIs and frontends.

Thanks for reading! We know the HN community has many people interested in and knowledgeable about computer vision and deep learning, so we're looking forward to hearing your thoughts. If you or someone you know has experienced similar challenges in retail, please reach out! [email protected]


