# Configuration
## From Python

Because even genies need settings.
```python
import wishful

wishful.configure(
    model="openai/gpt-5",           # Switch models - use litellm model IDs (default: "azure/gpt-4.1")
    cache_dir="/tmp/.wishful",      # Cache directory for generated modules (default: ".wishful")
    spinner=False,                  # Show/hide the "generating..." spinner (default: True)
    review=True,                    # Review code before execution (default: False)
    allow_unsafe=False,             # Disable safety checks - dangerous! (default: False)
    temperature=0.7,                # LLM sampling temperature (default: 1.0)
    max_tokens=8000,                # Maximum LLM response tokens (default: 4096)
    debug=True,                     # Enable debug logging (default: False)
    log_level="INFO",               # Logging level: DEBUG, INFO, WARNING, ERROR (default: WARNING)
    log_to_file=True,               # Write logs to cache_dir/_logs/ (default: True)
    system_prompt="Custom prompt",  # Override the system prompt for LLM (advanced)
)

# Context radius is configured separately (it likes to be special)
wishful.set_context_radius(6)  # Lines of context around imports AND call sites (default: 3)
```

## All Configuration Options
Your wish, your rules.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | `"azure/gpt-4.1"` | LLM model identifier (litellm format) |
| `cache_dir` | `str \| Path` | `".wishful"` | Directory for cached generated modules |
| `review` | `bool` | `False` | Prompt for approval before executing generated code |
| `spinner` | `bool` | `True` | Show spinner during LLM generation |
| `allow_unsafe` | `bool` | `False` | Disable safety validation (use with caution!) |
| `temperature` | `float` | `1.0` | LLM sampling temperature (0.0-2.0) |
| `max_tokens` | `int` | `4096` | Maximum tokens for LLM response |
| `debug` | `bool` | `False` | Enable debug mode (sets `log_level` to DEBUG) |
| `log_level` | `str` | `"WARNING"` | Logging level (DEBUG, INFO, WARNING, ERROR) |
| `log_to_file` | `bool` | `True` | Write logs to `{cache_dir}/_logs/` |
| `system_prompt` | `str` | (see source) | Custom system prompt for LLM (advanced) |
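Every keyword is optional. A minimal sketch, assuming `configure()` accepts any subset of these keywords and leaves the rest at their defaults:

```python
import wishful

# Override only what you need; per the table above, everything else
# keeps its default (assumption: unspecified keywords are left alone).
wishful.configure(model="openai/gpt-5", review=True)

# The table notes that debug=True sets log_level to DEBUG, so this is
# effectively the same as configure(log_level="DEBUG").
wishful.configure(debug=True)
```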
## Environment variables (loaded via python-dotenv)

The .env file: where wishes go before they’re spoken aloud.
All settings can also be configured via environment variables:
- `WISHFUL_MODEL` or `DEFAULT_MODEL` - LLM model identifier
- `WISHFUL_CACHE_DIR` - Cache directory path
- `WISHFUL_REVIEW` - Set to `"1"` to enable review mode
- `WISHFUL_DEBUG` - Set to `"1"` to enable debug mode
- `WISHFUL_UNSAFE` - Set to `"1"` to disable safety checks
- `WISHFUL_SPINNER` - Set to `"0"` to disable the spinner
- `WISHFUL_MAX_TOKENS` - Maximum tokens (integer)
- `WISHFUL_TEMPERATURE` - Sampling temperature (float)
- `WISHFUL_CONTEXT_RADIUS` - Context lines around imports and call sites (integer)
- `WISHFUL_LOG_LEVEL` - Logging level (DEBUG, INFO, WARNING, ERROR)
- `WISHFUL_LOG_TO_FILE` - Set to `"0"` to disable file logging
- `WISHFUL_SYSTEM_PROMPT` - Custom system prompt
- `WISHFUL_FAKE_LLM` - Set to `"1"` for deterministic stub generation (testing)
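For illustration, a hedged sketch that sets a few of these in-process with `os.environ`; in practice they'd live in your shell or a `.env` file that python-dotenv picks up, and the assumption here is that wishful reads them when its settings are loaded:

```python
import os

# Assumption: wishful reads these when its settings load, so set them
# before importing wishful. Normally they'd go in a .env file loaded
# via python-dotenv rather than in code.
os.environ["WISHFUL_MODEL"] = "openai/gpt-5"
os.environ["WISHFUL_REVIEW"] = "1"    # enable review mode
os.environ["WISHFUL_SPINNER"] = "0"   # hide the spinner
os.environ["WISHFUL_FAKE_LLM"] = "1"  # deterministic stubs for testing

import wishful  # noqa: E402 - imported after the environment is set
```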
## Safety

- Safety checks block `os`, `subprocess`, and `sys` imports; `eval`/`exec`; and `open` in write-ish modes unless `allow_unsafe=True` (see the sketch after this list).
- Tests flip `allow_unsafe=True` by default, so don’t blindly copy test fixtures into production code.
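A sketch of the safe posture, using only the documented `configure()` keywords: keep the checks on and opt into human review before anything runs:

```python
import wishful

# Keep the safety net: with allow_unsafe=False, generated code that
# imports os/subprocess/sys, calls eval/exec, or opens files in
# write-ish modes is blocked. review=True additionally prompts you to
# approve each generated module before it executes.
wishful.configure(review=True, allow_unsafe=False)
```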
## Logging

- `log_level` & `log_to_file` piggyback on `wishful.settings`.
- Logs live under `<cache_dir>/_logs/YYYY-MM-DD.log` when file logging is on (enabled by default; stays in the sandboxed cache dir). A sketch for peeking at them follows this list.
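If you want to peek at those logs from Python, a minimal sketch, assuming the default `cache_dir` of `.wishful` and the date-stamped filename described above:

```python
from datetime import date
from pathlib import Path

# Assumes the default cache_dir (".wishful") and the
# <cache_dir>/_logs/YYYY-MM-DD.log layout described above.
log_file = Path(".wishful") / "_logs" / f"{date.today():%Y-%m-%d}.log"
if log_file.exists():
    # Show the last 20 lines for a quick look.
    print("\n".join(log_file.read_text().splitlines()[-20:]))
else:
    print("No log yet - is log_to_file enabled?")
```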
Want to see the knobs in action? Pair this with the CLI.