Progress Report
🛠️ Devlog Update – October 10, 2025
🖼️ Image Injection Live on Front End
The front-end UI now supports image injection, enabling real-time image input for vision-capable models. This lays the groundwork for upcoming features like local image analysis, tagging, and visual reasoning workflows.
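As a rough illustration of what "image injection" means on the wire: many local runtimes (including llama.cpp's `llama-server`) accept the OpenAI-style content-array format, where an image travels alongside the text prompt as a base64 data URI. This is a minimal sketch, not FriedrichAI's actual front-end code; the function name and payload shape here are illustrative assumptions.

```python
import base64
import json

def build_vision_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Package a text prompt plus an inline image as one chat message.

    Uses the OpenAI-style content-array format that several local
    runtimes accept for vision-capable models. The exact schema a given
    backend expects may differ -- check its docs.
    """
    data_uri = f"data:{mime};base64,{base64.b64encode(image_bytes).decode('ascii')}"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_uri}},
        ],
    }

# In practice the bytes would come from the front-end's upload widget.
msg = build_vision_message("What is in this image?", b"\x89PNG...")
print(json.dumps(msg)[:60])
```

The point of injecting at the message level is that the front end stays model-agnostic: any backend that understands the content-array format can receive the same payload.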
🔍 Backend Research – Qwen 2.5 Omni + llama.cpp GPU/vision
We're actively exploring llama.cpp builds that support both:

- GPU acceleration (ideally via cuBLAS or Metal)
- Vision input (multi-modal / image-token streaming)
The goal is to find a runtime that can run Qwen 2.5 Omni or a similar model locally with full image comprehension.
If compatibility doesn't exist yet, fallback options include:

- Testing alternative vision models (e.g., MiniCPM-V or Qwen-VL variants)
- Running an isolated multimodal inference pipeline outside llama.cpp and piping its output back in
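The second fallback above can be sketched as an out-of-process worker that speaks line-delimited JSON over stdin/stdout, so the main app stays decoupled from whichever multimodal runtime ends up behind it. The worker command and message schema below are placeholders, not a committed design; the demo worker just echoes a canned caption.

```python
import json
import subprocess
import sys

def query_vision_worker(worker_cmd: list[str], request: dict) -> dict:
    """Send one JSON request to an out-of-process vision worker and read
    one JSON reply. Any multimodal runtime wrapped to speak line-delimited
    JSON on stdin/stdout would fit behind this interface.
    """
    proc = subprocess.Popen(
        worker_cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    out, _ = proc.communicate(json.dumps(request) + "\n")
    return json.loads(out.splitlines()[0])

# Demo worker: echoes a canned caption; a real worker would load the model
# once at startup and answer many requests in a loop.
echo_worker = [
    sys.executable, "-c",
    "import sys, json; r = json.loads(sys.stdin.readline()); "
    "print(json.dumps({'id': r['id'], 'caption': 'stub'}))",
]
reply = query_vision_worker(echo_worker, {"id": 1, "image_path": "cat.png"})
print(reply["caption"])  # → stub
```

Spawning a fresh process per request (as this sketch does) is wasteful for real inference; a long-lived worker answering requests in a loop over the same pipe is the more realistic shape.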
More soon.
FriedrichAI
🧠 FriedrichAI — Offline AI Dev Toolkit. Build smarter. Anywhere. Anytime. A fully offline AI assistant.