OpenAI redesigns iOS app UI ahead of speech mode release

OpenAI is revamping the iOS app’s UI ahead of the upcoming speech mode. The recent update brought several changes:

Changes to the speech interface:

    • The vision capability and the option to open the camera have been removed from the main screen and the alpha introduction screen. These features are expected to arrive later and will not be part of the initial advanced voice mode rollout in late July.
    • The camera and image upload options are still available, but have been moved within the UI.
    • The mute button is now more visible and clearly looks like a mute button. Previously, users had to tap on the voice animation to toggle it on or off.
    • The buttons have been slightly enlarged for better usability.
    • A new three-dot menu contains the option to upload images, suggesting features like screen sharing may be coming in the future.
    • You can also search your Memories in ChatGPT. This option was recently added to the macOS app. I’m not 100% sure whether it’s publicly available yet, because the memory feature still doesn’t work for me out of the box.

Enhanced GPT capabilities:

    • Users have noticed that ChatGPT has become smarter. This follows an announcement from the ChatGPT account on X (formerly Twitter), although no specific details were provided.
    • In the TestingCatalog custom GPT, ChatGPT can now automatically decide whether to display an image in markdown from search or external sources. If you haven’t seen this feature yet, you can explore it in the TestingCatalog GPT.

These updates highlight OpenAI’s ongoing efforts to improve the user experience and expand the availability of new features globally.