
Quickstart

Add wishful to your project like any other library:

```sh
pip install wishful
```

wishful targets Python 3.12+ and builds on litellm, so any provider litellm supports works here too.

Configure your provider with the usual environment variables. For example, with OpenAI:

```sh
export OPENAI_API_KEY=...
export DEFAULT_MODEL=gpt-4.1
```

Or with Azure OpenAI:

```sh
export AZURE_API_KEY=...
export AZURE_API_BASE=https://<your-endpoint>.openai.azure.com/
export AZURE_API_VERSION=2025-04-01-preview
export DEFAULT_MODEL=azure/gpt-4.1
```

You can also set WISHFUL_MODEL instead of DEFAULT_MODEL if you prefer.

Drop this into a scratch file or a REPL:

```python
from wishful.static.text import extract_emails

raw = "Contact us at team@example.com or sales@demo.dev"
print(extract_emails(raw))
```

Run it once.

If you open the new .wishful/text.py file, you’ll see real Python code that wishful generated, validated and cached for you.
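A generated module for this example might look roughly like this (an illustrative sketch; the code wishful actually emits is model-dependent and will vary):

```python
import re

# A simple email pattern: local part, "@", then a domain with a dotted TLD.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(text: str) -> list[str]:
    """Return all email addresses found in text, in order of appearance."""
    return EMAIL_RE.findall(text)
```

Because the cache is plain Python, you can read and review it like any other module in your project.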

A‑ha moment: the function you imported didn’t exist anywhere in your repo — the import itself was the spec. You just wished it into existence. 🪄

For when you want the magic but your Wi-Fi doesn’t.

For CI, demos, or offline experiments, flip on stubbed generation:

```sh
export WISHFUL_FAKE_LLM=1
python your_script.py
```

In fake mode, wishful returns deterministic stub implementations instead of calling a real model. The test suite uses this mode; your project can too.
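One way to opt an entire pytest run into fake mode is from conftest.py (a sketch under the assumption that the variable just needs to be set before wishful generates anything; any mechanism that exports it early works):

```python
# conftest.py
import os

# Force deterministic stubs for the whole test session.
# setdefault lets a developer still override the value from the shell.
os.environ.setdefault("WISHFUL_FAKE_LLM", "1")
```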

  • Curious about the magic under the hood? Read How it works.
  • Want to test multiple variants and keep the best? Check out Explore.
  • Want the type registry and Yoda-speak examples? Head to Types.
  • Hacking on wishful itself? See Contributing for the uv‑powered dev loop.