CORDIS - EU research results

BlinkIn Visual Assistance - Defining a new category of automated visual support towards a visual AI Operating System for Europe (not Siri/Alexa). Our moonshot is to "Give the world a Help Button".

Periodic Reporting for period 1 - BlinkIn Visual Assistant - June 2021 (BlinkIn Visual Assistance - Defining a new category of automated visual support towards a visual AI Operating System for Europe (not Siri/Alexa). Our moonshot is to "Give the world a Help Button".)

Reporting period: 2023-02-01 to 2024-01-31

Blinkin aims to revolutionise the way technical problems are resolved in the real world by developing a platform that facilitates self-service for product users, supported by multimodal AI and backed by human experts. This initiative is driven by the need to address escalating service costs, diminishing professional service availability, and insufficient capabilities across various industries. The primary objective is to empower consumers to independently solve technical issues, such as setting up electronic devices or repairing home appliances, thereby reducing the burden on manufacturers' customer support and field service teams while improving time-to-resolution and the overall availability of support, and reducing the resources spent on each case. By doing so, Blinkin aligns the interests of consumers, professionals, and manufacturers, promoting efficiency, sustainability and self-reliance.
Blinkin has developed a platform for AI-boosted Companion Experiences, which will enable product companies to create and publish step-by-step guides through a vastly enhanced Blinkin No-Code AI Studio. It will allow them to pre-script digital self-service experiences for their customers end-to-end, giving them much more effective control over the personalised user journey and its outcome.
A significant achievement of the project has been the utilisation of companies' existing repositories of multimodal data (images, text, videos, etc.) in the development of Small and Large Vision-Language models. These models can be built using a Hybrid Fusion ML approach, ensuring robust semantic representation across different data modalities.
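The report does not publish the details of the Hybrid Fusion approach, but the general idea of fusing per-modality encoder outputs into one shared semantic space can be sketched as follows. Everything here is illustrative: the encoder dimensions, the projection matrices and the `fuse` function are hypothetical stand-ins, not BlinkIn's actual models.

```python
# Illustrative late-fusion sketch: feature vectors from an image encoder and a
# text encoder are projected into a shared embedding space and combined.
# All names and dimensions are assumptions, not BlinkIn's published method.
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 64  # size of the shared semantic space (assumed)

# Hypothetical pre-computed encoder outputs for one support case.
image_features = rng.normal(size=256)  # e.g. from a vision backbone
text_features = rng.normal(size=128)   # e.g. from a language model

# Per-modality projection matrices (these would be learned in practice).
W_image = rng.normal(size=(EMBED_DIM, 256)) / np.sqrt(256)
W_text = rng.normal(size=(EMBED_DIM, 128)) / np.sqrt(128)

def fuse(img: np.ndarray, txt: np.ndarray) -> np.ndarray:
    """Project each modality into the shared space, average, L2-normalise."""
    z_img = W_image @ img
    z_txt = W_text @ txt
    fused = (z_img + z_txt) / 2.0
    return fused / np.linalg.norm(fused)

joint = fuse(image_features, text_features)
print(joint.shape)  # (64,)
```

A joint vector like this is what would let an image of a broken appliance and a textual error description be compared or retrieved in the same space.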
Once published, the web applications allow end-users to engage in tasks such as image/video/text question-answering and summarisation, enriching the self-service experience.
Where AI-guided self-service attempts by individuals (professional or consumer) require on-demand intervention from a human expert at the product company, the integration with Blinkin's Help Desk enables agents to support the user directly, ensuring their personal service needs are fulfilled.
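The handoff described above hinges on carrying the user's self-service progress over to the agent. A minimal sketch of such a session escalation might look like this; the `Session` structure and `escalate_to_agent` function are assumptions for illustration, not BlinkIn's actual Help Desk API.

```python
# Hypothetical sketch of the AI-to-human handoff: the steps the user already
# completed in self-service travel with the ticket, so the help-desk agent
# resumes where the user stopped instead of starting from zero.
from dataclasses import dataclass, field

@dataclass
class Session:
    user_id: str
    product: str
    completed_steps: list[str] = field(default_factory=list)
    escalated: bool = False

def escalate_to_agent(session: Session) -> dict:
    """Package the self-service progress for the human expert."""
    session.escalated = True
    return {
        "user": session.user_id,
        "product": session.product,
        "steps_already_done": session.completed_steps,
        "resume_from": len(session.completed_steps) + 1,
    }

s = Session("u-42", "washing machine X")
s.completed_steps += ["check power cable", "reset control unit"]
ticket = escalate_to_agent(s)
print(ticket["resume_from"])  # 3
```

Preserving `completed_steps` is the design choice that "secures the users' individual progress" mentioned later in the report.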
Blinkin's innovation ambition lies in its unique approach to cross-modal grounding, enabling the development of Vision-Language models that surpass traditional solutions in terms of efficiency and effectiveness.
The platform's future ability to leverage unstructured, unlabelled multimodal data to support complex problem-solving tasks represents a significant advancement. Our proposition aims to help organisations capture and unlock valuable knowledge hidden in vast amounts of their own data.
Another distinctive innovation is AI Studio, which provides an intuitive WYSIWYG no-code environment for transforming the aforementioned data into multimodal companions (specialised actionable web applications) in just minutes. The Studio also offers the option to automatically generate step-by-step guides from a selected source and deploy Visual AI as-you-go using only text prompts.
Moreover, the built-in seamless transition in the end-user experience from AI-fuelled self-service to live expert assistance - while securing the users’ individual progress in their self-service attempts - will ensure effective support beyond the state of the art in customer service.