Microsoft wants more than just ChatGPT on Bing. The company has now announced the next step: an even more capable AI language model that is no longer limited to text is set to be released shortly. GPT-4 is also expected to understand audio and video.
Microsoft: GPT-4 launch is imminent
At an event, the head of technology at Microsoft Germany revealed that the next version of the AI language model, GPT-4, is already finished and due to be released next week. The key difference from the current model: GPT-4 is reportedly not limited to text as input, but can apparently also generate new text based on audio and video.
Microsoft has not given any further details on the new functionality. Users might, for example, be able to play pieces of music or videos to the language model and then ask GPT-4 questions about them. That GPT-4 will in turn generate its own music or videos from text prompts is considered unlikely, however.
At the event, Marianne Janik, Managing Director of Microsoft Germany, made clear how important AI models are for the company. The current development is nothing less than an “iPhone moment”. However, many experts will be needed to “make the use of AI add value”. According to her, such a disruption does not necessarily go hand in hand with job losses (source: heise online).
For Microsoft, the introduction of ChatGPT on Bing has already paid off. The company recently announced that Bing now reaches 100 million daily active users.
GPT-4: Far more extensive training than GPT-3
The chatbot ChatGPT currently runs on the predecessors GPT-3 and GPT-3.5. By comparison, GPT-4 is said to have undergone significantly more extensive training: a total of 17 trillion pieces of training data were reportedly used, 100 times as much as before. In addition, GPT-4 is expected to deliver answers even faster.