
Google Gemini API Now Compatible with OpenAI, Helping Developers Switch Easily

🌟 Google introduces a new endpoint for the Gemini API, making it easier for developers to switch to Gemini. 🔑 The new endpoint supports OpenAI's chat completion and embedding APIs, though its coverage of OpenAI features is not yet complete. 🛠️ The vLLM project offers a more general OpenAI-compatible server supporting many models.

Google recently announced the launch of a new endpoint for its Gemini API, designed to make it easier for developers who have adopted OpenAI solutions to switch to Gemini. This new endpoint is currently in beta and only supports a subset of OpenAI functionalities.


According to Google, the new endpoint can stand in for OpenAI's endpoints whether you make direct REST calls or use the official OpenAI SDK. For example, if you have a program written with the OpenAI SDK (say, in Python), you can point it at Google's models by changing the client initialization as follows:

from openai import OpenAI

client = OpenAI(
    api_key="gemini_api_key",  # replace with your actual Gemini API key
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
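If you prefer not to hard-code the key, the OpenAI SDK also falls back to the OPENAI_API_KEY environment variable when no api_key argument is given; a minimal sketch of that approach:

from openai import OpenAI

# Assumes the key was exported in the shell beforehand, e.g.:
#   export OPENAI_API_KEY="your_gemini_api_key"
client = OpenAI(
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)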

Either way, the client must be configured with a valid Gemini API key. To generate text, you can then use the chat completion API as shown below, specifying the name of the Gemini model you wish to use:

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain how AI works"}
    ]
)

print(response.choices[0].message)
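Streaming should work through the same interface. As a hedged sketch, assuming the beta endpoint streams chat completions the way OpenAI's own API does, you can pass stream=True and iterate over the partial chunks:

stream = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Explain how AI works"}],
    stream=True  # request incremental chunks instead of one full response
)

for chunk in stream:
    # Each chunk carries a partial delta; content can be None on some chunks
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")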

Additionally, the new Gemini endpoint supports OpenAI's embedding API, which measures how related text strings are. In short, the embedding API maps text to vectors of floating-point numbers, which developers can use to search for specific values, cluster text, detect anomalies, and make recommendations. The following code snippet demonstrates how to use this feature with Gemini:

response = client.embeddings.create(
    input="Your text string here",
    model="text-embedding-004"
)

print(response.data[0].embedding)
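Because the returned vectors encode semantic similarity, a common next step is to compare two strings by cosine similarity. The sketch below is illustrative: the cosine_similarity helper is not part of the SDK, and it assumes the endpoint accepts a batched list input the way OpenAI's embedding API does:

import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: values near 1.0 mean "very related"
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

response = client.embeddings.create(
    input=["How do I bake bread?", "Bread baking instructions"],
    model="text-embedding-004"
)
vec_a = response.data[0].embedding
vec_b = response.data[1].embedding
print(cosine_similarity(vec_a, vec_b))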

Currently, the chat completion API and the embedding API are the only OpenAI features available for Gemini models through the new endpoint, and support for capabilities such as image uploads and structured outputs is still limited. Google says it plans to add more OpenAI features so developers can more easily adopt Gemini as an alternative to OpenAI, but the timeline is not yet clear.

In discussions on Reddit, commenters praised Google's move, seeing it as a way for OpenAI API users to escape vendor lock-in, although there is still a long way to go before a standard API makes it easy to switch between different model providers.

As a more general approach, the vLLM project aims to support a wide range of generation and embedding models and provides an OpenAI-compatible server. With vLLM, developers can serve Mistral, Llama, LLaVA, and many other major models currently available.
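As a rough sketch of how that looks in practice (the model name here is just an illustrative choice), you start vLLM's OpenAI-compatible server and then point the same OpenAI client at it:

# Start the server first, e.g.:
#   vllm serve mistralai/Mistral-7B-Instruct-v0.3
from openai import OpenAI

client = OpenAI(
    api_key="EMPTY",  # vLLM does not require a real API key by default
    base_url="http://localhost:8000/v1"  # vLLM's default local address
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    messages=[{"role": "user", "content": "Explain how AI works"}]
)
print(response.choices[0].message)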

Official introduction: https://developers.googleblog.com/en/gemini-is-now-accessible-from-the-openai-library/

View Gemini on AiTools.Moe

Publisher: Homura
Date: 2024/11/13
Categories: Newsletter
