Quickstart
1) Install wishful
Add wishful to your project like any other library:
```shell
pip install wishful
```

wishful targets Python 3.12+. It uses litellm,
so any provider it supports will work here too.
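To sanity-check the install without triggering any generation, you can test importability from the standard library (a quick check of your environment, not part of wishful's own tooling):

```python
import importlib.util

# True if the wishful package is importable in this environment
installed = importlib.util.find_spec("wishful") is not None
print("wishful installed:", installed)
```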
2) Point it at an LLM
Configure your provider with the usual environment variables. For example, with OpenAI:
```shell
export OPENAI_API_KEY=...
export DEFAULT_MODEL=gpt-4.1
```

Or with Azure OpenAI:
```shell
export AZURE_API_KEY=...
export AZURE_API_BASE=https://<your-endpoint>.openai.azure.com/
export AZURE_API_VERSION=2025-04-01-preview
export DEFAULT_MODEL=azure/gpt-4.1
```

You can also set WISHFUL_MODEL instead of DEFAULT_MODEL if you prefer.
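Conceptually, picking a model amounts to a simple precedence check over environment variables. A minimal sketch, assuming WISHFUL_MODEL takes priority over DEFAULT_MODEL (the helper name and the exact precedence order are assumptions, not wishful's actual code):

```python
import os

def resolve_model(fallback: str = "gpt-4.1") -> str:
    """Hypothetical resolution order: WISHFUL_MODEL, then DEFAULT_MODEL, then a fallback."""
    return (
        os.environ.get("WISHFUL_MODEL")
        or os.environ.get("DEFAULT_MODEL")
        or fallback
    )
```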
3) Your first wish
Drop this into a scratch file or a REPL:
```python
from wishful.static.text import extract_emails

raw = "Contact us at team@example.com or sales@demo.dev"
print(extract_emails(raw))
```

Run it once.
If you open the new .wishful/text.py file, you’ll see real Python code that wishful
generated, validated, and cached for you.
A‑ha moment: the function you imported didn’t exist anywhere in your repo — the import itself was the spec. You just wished it into existence. 🪄
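For intuition, the cached .wishful/text.py might contain something along these lines (a hypothetical sketch only; the code wishful actually generates and validates will differ):

```python
import re

def extract_emails(text: str) -> list[str]:
    """Return all email addresses found in text."""
    # A simple pattern is enough for addresses like those in the example
    return re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)

print(extract_emails("Contact us at team@example.com or sales@demo.dev"))
# → ['team@example.com', 'sales@demo.dev']
```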
4) Optional: fake LLM mode
For when you want the magic but your Wi-Fi doesn’t.
For CI, demos, or offline experiments, flip on stubbed generation:
```shell
export WISHFUL_FAKE_LLM=1
python your_script.py
```

In fake mode, wishful returns deterministic stub implementations instead of calling a real model. The test suite uses this mode; your project can too.
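Since the switch is just an environment variable, your own code can branch on it too. A minimal sketch (the helper name and the "1"-means-on rule are assumptions, not part of wishful's API):

```python
import os

def fake_llm_enabled() -> bool:
    # Mirrors the documented WISHFUL_FAKE_LLM=1 toggle
    return os.environ.get("WISHFUL_FAKE_LLM") == "1"
```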
5) Next steps
- Curious about the magic under the hood? Read How it works.
- Want to test multiple variants and keep the best? Check out Explore.
- Want the type registry and Yoda-speak examples? Head to Types.
- Hacking on wishful itself? See Contributing for the
uv‑powered dev loop.