Last modified: Jan 29, 2026 by Alexander Williams

OpenAI API Python Example: Get AI Responses

Artificial intelligence is changing how we build software. The OpenAI API provides powerful AI models. You can access them directly from your Python code.

This guide will show you a practical example. You will learn to send a prompt and get a text response. We will cover setup, code, and best practices.

What is the OpenAI API?

The OpenAI API is a cloud service. It gives developers access to models like GPT-3.5 and GPT-4. You send a text prompt. The model returns an intelligent text completion.

You can use it for chatbots, content generation, and summarization. The API handles the complex AI. You focus on building your application.

Prerequisites and Setup

Before you start, you need a few things. First, ensure you have Python installed. Version 3.8 or higher is recommended.

You also need an OpenAI account. Go to the OpenAI platform website. Sign up and navigate to the API keys section.

Create a new secret key. Save this key securely. You will use it to authenticate your Python code.

Next, install the official OpenAI Python library. Open your terminal or command prompt. Run the following pip command.


pip install openai
    

This command installs the necessary package. It includes the openai module and its dependencies.

Your First OpenAI API Call in Python

Let's write a simple script. This script will ask the AI a question. It will then print the answer.

Create a new Python file. Name it something like openai_example.py. Open it in your code editor.

First, you need to import the library and create a client with your API key. It is best practice to keep your key out of your code. Use an environment variable.

For this example, we will pass the key directly when creating the client. Remember to replace 'your-api-key-here' with your actual key.

 
# Import the OpenAI client class
from openai import OpenAI

# Create a client with your API key
client = OpenAI(api_key='your-api-key-here')

# Define the prompt you want to send to the AI
prompt_text = "Explain quantum computing in simple terms."

# Make the API call using the chat completions endpoint
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # Specify the AI model to use
    messages=[
        {"role": "user", "content": prompt_text}
    ]
)

# Extract and print the AI's response from the returned object
ai_message = response.choices[0].message.content
print(ai_message)
    

Understanding the Code

Let's break down the script step by step.

The line from openai import OpenAI imports the client class from the library. This class is your entry point to the API.

client = OpenAI(api_key='your-api-key-here') creates a client and tells the library who you are. It authenticates every request you make with OpenAI's servers.

We store our question in the prompt_text variable. This is the input for the AI model.

The core of the script is the client.chat.completions.create() call. This sends the request to the API.

We pass two main arguments. The model parameter chooses which AI to use. "gpt-3.5-turbo" is a good balance of cost and capability.

The messages parameter is a list of conversation turns. Here, we have one turn from the "user" role with our prompt.
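
The messages list can also carry a whole conversation. Below is a minimal sketch of a multi-turn exchange: a "system" message sets the assistant's behavior, and an earlier "assistant" reply gives the model context for the follow-up question. The example content is only illustrative.

# A multi-turn conversation passed in a single request
messages = [
    {"role": "system", "content": "You are a concise science tutor."},
    {"role": "user", "content": "What is a qubit?"},
    {"role": "assistant", "content": "A qubit is the quantum version of a bit."},
    {"role": "user", "content": "How is it different from a classical bit?"}
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages
)
print(response.choices[0].message.content)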

The API returns a structured response object parsed from JSON. We store it in the response variable.

The AI's answer is nested inside this object. The path response.choices[0].message.content extracts the text.

Finally, we print the answer to the console.
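
If you want to explore everything the API sends back, you can dump the whole object. This assumes a recent version of the official library, where responses are Pydantic models with a model_dump_json() method.

# Print the full response as formatted JSON to inspect its structure
print(response.model_dump_json(indent=2))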

Running the Example and Sample Output

Save your Python file. Run it from the terminal. Use the command below.


python openai_example.py
    

You should see output similar to the following. The exact text will vary.


Quantum computing is a type of computing that uses quantum bits, or qubits, instead of the traditional bits used in classical computers. While a classical bit can be either a 0 or a 1, a qubit can be both 0 and 1 at the same time, thanks to a principle called superposition. This allows quantum computers to process a vast number of possibilities simultaneously, making them potentially much faster for certain types of problems, like factoring large numbers or simulating complex molecules.
    

Congratulations. You have successfully called the OpenAI API from Python.

Key Parameters for Better Responses

The basic call works. But you can control the AI's output more precisely. Use additional parameters in the create() function.

max_tokens: This limits the length of the response. One token is roughly 3/4 of a word. Setting it prevents very long replies.

temperature: This controls creativity. A value of 0 makes outputs deterministic and focused. A value like 0.7 makes them more random and creative.

top_p: An alternative to temperature for controlling randomness. Use one or the other, not both.

Here is an example using these parameters.

 
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Write a short tagline for a coffee shop."}
    ],
    max_tokens=50,   # Limit the response to roughly 35-40 words
    temperature=0.8, # Encourage creative, varied responses
)
print(response.choices[0].message.content)
    

Handling Errors and Best Practices

API calls can fail. Your code should handle errors gracefully. Wrap your API call in a try-except block.

Common errors include invalid API keys, network issues, or hitting rate limits.
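
Here is a minimal sketch of that pattern, assuming a recent version of the official library, which exposes exception classes such as openai.AuthenticationError, openai.RateLimitError, and openai.APIConnectionError.

import openai
from openai import OpenAI

client = OpenAI(api_key='your-api-key-here')

try:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello."}]
    )
    print(response.choices[0].message.content)
except openai.AuthenticationError:
    print("Invalid API key. Check your credentials.")
except openai.RateLimitError:
    print("Rate limit reached. Wait a moment and try again.")
except openai.APIConnectionError:
    print("Network problem. Check your internet connection.")
except openai.APIError as e:
    print(f"The API returned an error: {e}")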

Always keep your API key secret. Never commit it to public code repositories like GitHub. Use environment variables or a config file.
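
A simple way to do that is sketched below. If the OPENAI_API_KEY environment variable is set, the client will also find it automatically when you create it with no arguments.

import os
from openai import OpenAI

# Read the key from the environment instead of hard-coding it
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))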

For building more complex applications, consider structuring your code into a proper API. You can learn how to Install Flask-RESTful for Python API Development to create your own service layer.

Be mindful of costs. The API charges per token used. Test with simple prompts and lower max_tokens during development.
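
One way to keep an eye on costs is the usage field on the response. It reports how many tokens your prompt and the AI's reply consumed.

# 'response' is the object returned by client.chat.completions.create()
usage = response.usage
print(f"Prompt tokens:     {usage.prompt_tokens}")
print(f"Completion tokens: {usage.completion_tokens}")
print(f"Total tokens:      {usage.total_tokens}")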

Conclusion

You now know how to use the OpenAI API with Python. We covered the setup, a basic example, and key parameters.

The process is straightforward. Install the library, authenticate, send a prompt, and process the response.

This opens doors to many applications. You can build smart assistants, content tools, and more.

Start experimenting with different prompts and models. Remember to follow best practices for security and cost management. Happy coding.