# MiroFish
An AI prediction engine powered by multi-agent simulation. Upload seed materials (news, policies, financial data, or even fiction), describe what you want to predict, and MiroFish deploys thousands of AI agents with distinct personalities and memory to interact in a digital parallel world — generating prediction reports based on simulated outcomes.
## Prerequisites
MiroFish requires two external API keys before it can function:
- LLM API Key — any OpenAI-compatible API (this template defaults to Alibaba DashScope / `qwen-plus`)
  - Recommended: Zeabur AI Hub — get your API key directly on Zeabur; supports Claude, GPT, Gemini, and more. Set `LLM_BASE_URL` to `https://hnd1.aihub.zeabur.ai/` and `LLM_MODEL_NAME` to e.g. `gpt-4.1-mini`
  - Or use OpenAI (`LLM_BASE_URL` = `https://api.openai.com/v1`), Alibaba Bailian (`https://dashscope.aliyuncs.com/compatible-mode/v1`), or another OpenAI-compatible provider
- Zep Cloud API Key — for agent memory and knowledge graphs
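Before deploying, you can sanity-check that your key and endpoint work by calling the standard `/chat/completions` route that OpenAI-compatible APIs expose. This is a sketch using only the Python standard library; the exact path prefix (for example, whether your base URL must already include `/v1`) depends on the provider you chose.

```python
# Sanity-check an OpenAI-compatible endpoint (a sketch; most providers
# expose POST <base_url>/chat/completions, but verify against your provider).
import json
import os
import urllib.request


def build_request(base_url: str, api_key: str, model: str) -> urllib.request.Request:
    """Build a one-shot chat-completions request for an OpenAI-compatible API."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    # Reads the same variables MiroFish expects; fails with KeyError if unset.
    req = build_request(
        os.environ["LLM_BASE_URL"],
        os.environ["LLM_API_KEY"],
        os.environ.get("LLM_MODEL_NAME", "qwen-plus"),
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If this prints a short reply, the same three values will work when set as deployment variables.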
## What You Can Do After Deployment
- Open your domain — access the MiroFish interface
- Upload seed materials — provide news articles, policy documents, financial data, or other text
- Describe your prediction — use natural language to explain what outcome you want to simulate
- Run simulation — MiroFish creates AI agents that interact and evolve, then generates a prediction report
- Chat with agents — interact with the simulated agents to explore different scenarios
## Use Cases
- Public opinion analysis — predict how the public might react to events or announcements
- Policy testing — simulate the impact of policy changes before implementation
- Creative exploration — deduce possible endings for stories or explore fictional scenarios
- Financial forecasting — analyze market trends (experimental)
## Environment Variables

| Variable | Description |
|---|---|
| `LLM_API_KEY` | Your LLM API key (required) |
| `LLM_BASE_URL` | LLM API endpoint (default: Alibaba DashScope) |
| `LLM_MODEL_NAME` | Model name (default: `qwen-plus`) |
| `ZEP_API_KEY` | Zep Cloud API key for agent memory (required) |
| `LLM_BOOST_API_KEY` | Optional secondary LLM key for acceleration |
| `LLM_BOOST_BASE_URL` | Optional secondary LLM endpoint |
| `LLM_BOOST_MODEL_NAME` | Optional secondary LLM model name |
| `OASIS_DEFAULT_MAX_ROUNDS` | Max simulation rounds (default: 10). Higher values give more detail but cost more tokens |
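Put together, a minimal configuration might look like the fragment below. All values are placeholders; the optional `LLM_BOOST_*` variables are omitted, and the unset values fall back to the defaults listed above.

```env
# Minimal MiroFish configuration (placeholder values)
LLM_API_KEY=sk-...
LLM_BASE_URL=https://hnd1.aihub.zeabur.ai/
LLM_MODEL_NAME=gpt-4.1-mini
ZEP_API_KEY=z_...
OASIS_DEFAULT_MAX_ROUNDS=10
```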
## Important Notes
- LLM token consumption can be significant. Start with simulations under 40 rounds
- The free Zep Cloud tier is sufficient for basic usage
- This project is at an early stage (v0.1.2); APIs may change
## License
AGPL-3.0 — GitHub