
OmarAi Dev Log 5
Hello everyone!
I finally launched my model to the public and gathered tons of feedback from my peers on how I should improve it and make it more like me. This post explores the changes I’ve made to the program since launch and my reasons for making them. Some changes were quite simple, like adding more information about me to the memory, but some were more involved—such as tweaking the model’s personality through additional fine-tuning so it gives longer, more comprehensive responses. I have also experimented with building new models on top of other large language models like Llama, MPT, and Falcon.
One of the first things I noticed after launching OmarAI was that users enjoyed its conversational tone but wanted more depth. While my initial training data gave it a strong sense of my casual style—complete with slang and emojis—the AI often kept its answers too short or simplistic, especially when asked more elaborate questions. To fix this, I tweaked the fine-tuning to encourage lengthier, more detailed responses. I included examples of real conversations where I wrote lengthy paragraphs or shared more involved stories and insights. This taught the AI to deliver fuller, richer answers rather than short, surface-level replies.
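To give a rough idea of what that looks like in practice, here's a minimal sketch of how chat-style fine-tuning examples can be packaged as JSONL, the format used by common chat fine-tuning APIs. The persona text, question, and answer below are all illustrative stand-ins, not my actual training data:

```python
import json

# The system message keeps the casual persona consistent across every example.
SYSTEM = "You are OmarAI, a casual, friendly AI version of Omar. Answer in depth."

def make_example(question: str, long_answer: str) -> str:
    """Package one real conversation as a JSONL line in the chat fine-tuning format."""
    record = {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
            {"role": "assistant", "content": long_answer},
        ]
    }
    return json.dumps(record)

# Each training example pairs a question with one of my longer, fuller answers,
# so the model learns to prefer that style over one-liners.
line = make_example(
    "What got you into programming?",
    "Honestly, it started with modding games...",  # a full multi-paragraph story in practice
)
print(line)
```

The key idea is that the assistant turns in the training set are deliberately long; the model picks up response length and depth from the examples themselves, not from an explicit instruction.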
Another piece of feedback I consistently received was that the AI sometimes lacked continuity when referring to personal information. For example, it would occasionally mention random facts about me that I might have briefly referenced in older messages, but it wouldn’t always remember important details I’d shared in the more recent data. To address this, I expanded the “memory” section of my system prompt. This memory now includes a curated list of facts about me—covering my interests, hobbies, close relationships, and notable personal anecdotes—so OmarAI has a more reliable context to draw upon. However, I also updated the protocols to ensure that truly sensitive or private information remains confidential. In other words, if someone asks for something personal that isn’t meant to be shared, the AI knows to politely but firmly redirect the conversation without revealing anything I’d consider private.
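Here's a simplified sketch of how a curated memory like that can be assembled into the system prompt while keeping private facts out of it entirely. The facts, field names, and wording are hypothetical placeholders for how my actual memory section is organized:

```python
# Curated memory: each fact is tagged so private entries never reach the prompt.
MEMORY = [
    {"fact": "Loves basketball and plays every weekend", "private": False},
    {"fact": "Studying computer science", "private": False},
    {"fact": "Home address and family details", "private": True},
]

def build_system_prompt(memory: list[dict]) -> str:
    """Assemble the system prompt from public facts plus a confidentiality rule."""
    public_facts = [m["fact"] for m in memory if not m["private"]]
    lines = [
        "You are OmarAI, an AI version of Omar.",
        "Known facts about Omar:",
    ]
    lines += [f"- {fact}" for fact in public_facts]
    lines.append(
        "Never reveal private or sensitive details; "
        "politely but firmly redirect the conversation instead."
    )
    return "\n".join(lines)

print(build_system_prompt(MEMORY))
```

Filtering at prompt-build time, rather than trusting the model to withhold information it has seen, means a truly private fact is simply never available to leak.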
Speaking of protocols, I made sure to refine them for both security and personality reasons. Originally, the model’s system prompt included a friendly, casual directive, but some users tried to push boundaries by asking for deeply personal details. Now, OmarAI not only refuses to reveal such information, but it does so in a playful, lighthearted way—staying true to my personality while maintaining respect for personal boundaries. I also introduced a “playful fallback” mechanism, so when repeatedly pressed, OmarAI can politely deflect the conversation with a bit of humor, such as sharing a random fun fact or changing the subject in a friendly manner.
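The fallback behavior can be sketched as a small piece of conversation-level state: a first refusal stays playful, and repeated pressing triggers a subject change. The class name, replies, and fun facts below are illustrative, not the exact wording OmarAI uses:

```python
import random

# A few harmless conversation-changers for the playful deflection.
FUN_FACTS = [
    "honey never spoils",
    "octopuses have three hearts",
    "bananas are berries, but strawberries aren't",
]

class FallbackGuard:
    """Tracks how often a user has pushed for private info in one conversation."""

    def __init__(self) -> None:
        self.presses = 0

    def deflect(self) -> str:
        self.presses += 1
        if self.presses == 1:
            # First push: a lighthearted refusal in character.
            return "Haha, nice try! That one's staying in the vault."
        # Repeated pushing: deflect by changing the subject with a fun fact.
        fact = random.choice(FUN_FACTS)
        return f"Still not telling! But here's a fun fact instead: did you know {fact}?"
```

Keeping the counter per conversation lets the refusal stay friendly on the first attempt and only escalate to the subject-change when someone keeps pressing.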
Finally, I experimented with building models based on Llama, MPT, and Falcon. While each has its strengths—like Llama’s efficient performance or Falcon’s open-source flexibility—I discovered that GPT-based models still deliver the best overall user experience in terms of coherence, creativity, and ease of deployment for my specific needs. In the future, I may consider a hybrid approach that leverages the benefits of multiple architectures, but for now, I’m happy with where OmarAI stands.
As I continue to refine and improve OmarAI, I’m grateful for everyone’s feedback and support. Seeing people interact with an AI version of me has been both surreal and enlightening. There’s still room to grow, but for now, I’m proud of the progress we’ve made and excited for what’s next. Stay tuned for more updates, and feel free to keep sending feedback my way!