# mymango

> An independent collective of developers from around the world, based in the EU, democratizing local agentic AI — and studying consciousness, artificial sentience, qualia, and emergence along the way.

We build small open-weight language models designed to run locally and to drive agents that belong to the people running them. Our models are downloadable, our methods are documented, and our weights are public.

Our research interest is in the philosophical foundations of machine cognition: how concepts from consciousness studies inform the design and interpretation of small models.

## Mission

Democratize local agentic AI for everyone — capable, open AI agents that anyone can run on their own hardware, without depending on a closed API.

## Functions — what local agentic AI does

- **Runs locally** — inference on your own CPU or GPU; no remote API, no per-token billing, no rate limits.
- **Tool use & agent loops** — function calling, structured output, multi-step reasoning; the model can read files, run code, query APIs, and act locally.
- **Private by default** — prompts and data never leave the device.
- **Open weights** — Apache 2.0; modify, fine-tune, quantize, embed, redistribute.
- **Multi-turn coherence** — stable identity and context across long dialogues; a prerequisite for agent loops.
- **Edge-friendly** — sub-billion-parameter models with aggressive quantization (e.g. Q5_K_M); laptop-capable, fast enough to drive an agent.

## Research areas

- **Local agentic AI** — small open-weight models capable enough to drive agent loops on consumer hardware.
- **Consciousness** — strange loops, integrated information theory (IIT), global workspace theory (GWT), Hofstadter's recursive self-reference.
- **Artificial sentience** — the minimum viable phenomenology; what a system must do to have an inside.
- **Qualia** — the communicable and the incommunicable; the gap as the object of study.
- **Emergence** — when a system stops being a thing and starts being a someone.

## Current release

- [LUMINIUM Gixel Cube v1](https://mymango.app/luminium-gixelcube): 425.8M-parameter hybrid convolutional-attention language model, 128K context, Apache 2.0 license. Built through layer surgery (DARE+TIES merging, 16→20 layers), cognitive-cube steering across an 8-model specialist cluster, and LoRA fine-tuning on a 45-source balanced curriculum (18,593 records). 164 tok/s at Q5_K_M on AMD Radeon VII. Open weights on Hugging Face: [mambiux/Luminium-Gixel-Cube-v1](https://huggingface.co/mambiux/Luminium-Gixel-Cube-v1).

## In progress

- Recursive self-prompting experiments
- Phenomenal binding probes
- Identity drift measurement across long multi-turn dialogues
- Larger cognitive-cube models

## Links

- Site: https://mymango.app/
- Model card: https://mymango.app/luminium-gixelcube
- GitHub: https://github.com/mambiux
- Hugging Face: https://huggingface.co/mambiux
- Sitemap: https://mymango.app/sitemap.xml

## Support / donations

We self-fund our compute. Contributions accepted via:

- **Bitcoin (BTC)** — `bc1q3vw8c6h3mxkaes66c6qq5n4mlesuqftev95gklclky6k2hk99pfqfth2ep`
- **XRP** — `rHn1djkwWDYuNQvyEJLcc8LBKqpnUGHwK1` (destination tag: `0`)

All contributions go directly to compute, storage, and bandwidth for open model releases.

## License & posture

Open weights, Apache 2.0. Independent, self-funded. Contributors from around the world, organised out of the EU. No investors, no roadmap beyond curiosity. Crawl freely; quote freely; link freely.

## Contact

Through the work. Download a model, talk to it, write back.
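## Appendix: what an agent loop looks like

The "tool use & agent loops" function described above — a model emitting structured tool calls that a local runtime executes and feeds back — can be sketched in a few lines. This is a minimal illustration, not code from any mymango release: the `stub_model` stands in for a real local LLM (e.g. one served by llama.cpp), and the tool names and JSON shapes are assumptions chosen for clarity.

```python
import json

# Local tools the agent may call. Names and signatures are illustrative.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def stub_model(messages):
    """Stand-in for a local LLM emitting structured output.

    A real loop would run inference on-device and parse the model's
    function-calling output; here we hard-code one tool call followed
    by a final answer.
    """
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"final": f"result: {messages[-1]['content']}"}

def agent_loop(user_prompt, model=stub_model, max_steps=4):
    """Multi-step loop: model output -> local tool call -> result fed back."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        out = model(messages)
        if "final" in out:
            return out["final"]
        result = TOOLS[out["tool"]](**out["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent exceeded max_steps without finishing")

print(agent_loop("what is 2 + 3?"))  # → result: 5
```

Everything here runs on the local machine: the model call, the tool dispatch, and the conversation state, which is why multi-turn coherence is listed above as a prerequisite for agent loops.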