On September 25, OpenAI announced that its generative AI tool ChatGPT would now allow for image and voice commands. OpenAI says that with the image command feature, users will be able to get ChatGPT (GPT-4 with vision, or GPT-4V) to analyze image inputs they provide. Discussing ChatGPT's image recognition abilities, OpenAI claims that it has taken measures to prevent the tool from analyzing and making direct statements about people. It also mentions that the new features should not be used for high-risk purposes without proper verification of the AI-generated results by a human.

The features will be rolled out to ChatGPT Plus and Enterprise users over the next two weeks. The voice feature will be available on iOS and Android (users will have to opt in to it on the settings page of their app), and the image feature will be available on all platforms. (Note: you can read about the voice command here.)

Pre-deployment testing of the image command feature:

GPT-4V finished training in 2022, and in early 2023 OpenAI gave a diverse set of alpha users access to the model, including Be My Eyes, a free mobile app for blind and low-vision people, the company explains in a document detailing its approach to ensuring the safety of the feature. In March 2023, Be My Eyes and OpenAI collaborated to create Be My AI, a new tool to describe the visual world for people who are blind or have low vision. This tool incorporated GPT-4V…
