Chennai-based AI company Mad Street Den has raised $1.5 million in funding from Exfinity Technology Fund and GrowX Ventures, the Times of India reports. According to that report, the startup will use the funds for product development and to set up an office in the Bay Area later this year. A TechCrunch report, however, says the company will use the capital to hire “very senior people in the valley” and double its headcount this year, and that there are no plans for a US office yet.
Mad Street Den currently offers a single product, MADstack, a cloud-based platform that exposes AI and computer vision modules through an API. The modules include object recognition, gaze tracking, head and facial gestures, emotion-expression detection and 3D facial reconstruction, among others. For a fashion retailer, for example, the platform can apply object and facial recognition to an image a user uploads, detect the person’s sex and clothing, and then display similar clothing on the retailer’s website. Similarly, mobile games can use the platform’s modules to change gameplay based on gesture and expression detection.
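The article does not show MADstack’s actual API, so as a rough illustration only, here is a minimal sketch of how a client might assemble a request to a per-call, module-based computer vision API like the one described. The endpoint URL, module names and payload shape below are all assumptions, not the real MADstack interface:

```python
import json

# Hypothetical placeholder endpoint -- the real MADstack API is not public
# in this article, so nothing here reflects its actual design.
ANALYZE_URL = "https://api.example.com/v1/analyze"

def build_request(image_url, modules):
    """Assemble a JSON payload asking the platform to run the chosen
    computer vision modules (e.g. object recognition, emotion-expression
    detection) against a single image. Each such call would be billable
    under the per-API-call model the article describes."""
    return {
        "image": image_url,
        "modules": list(modules),
    }

# Example: a fashion retailer requesting the two modules the article
# mentions for its use case (module identifiers are invented).
payload = build_request(
    "https://example.com/shopper.jpg",
    ["object_recognition", "facial_recognition"],
)
print(json.dumps(payload))
```

The point of the sketch is only the shape of such a service: one image in, a list of requested modules, one billable call out.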
The company currently markets the product as a consumer-engagement solution for verticals including online fashion, mobile gaming, robotics, IoT, analytics, hospitality and automotive. It also claims to be in talks with “several” e-commerce companies to introduce its technology into their services in India this year.
MADstack currently charges for each API call made, but the company plans to tailor pricing by vertical in the future: e-commerce partners, for example, could pay via affiliate commissions, while other industries could be billed through licensing fees. Beyond the platform, the startup is also looking to release its own services and applications. It said a range of proofs of concept have been created, including a “sort of staring competition game,” which the company will launch eventually.
Founded in 2013 by Ashwini Asokan and Anand Chandrasekaran, Mad Street Den released its core product MADstack in August last year. At launch, the solution detected only facial landmarks, expressions and emotions, and facial and head gestures.