My Other Articles

published on LinkedIn

I’ve always known that ontology plays a key role in developing IT systems tailored to specific business domains, and I figured LLMs wouldn’t be an exception. So, this weekend, I put that hunch to the test and explored how LLMs can benefit from the added context an ontology provides.


In today’s fast-paced digital landscape, AI tools like ChatGPT and Claude are revolutionizing how we interact with technology, seamlessly integrating into our daily lives. But as amazing as these tools are, they come with some significant security issues. While they do have built-in security features, these aren’t always enough to prevent certain vulnerabilities.


This small project explores how Large Language Models (LLMs) might help improve Java code quality and security. In this article, I will take you through the development of AI Code Sentinel, detailing its integration with SonarQube for code scanning, storage of results in a PostgreSQL database, AI-driven code improvement suggestions, automated code remediation, and direct updates to the code repository.
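
To give a flavour of the plumbing behind the first step, here is a minimal sketch of pulling vulnerability findings from SonarQube's Web API so they can be stored and fed to the LLM. The server URL, project key, and token handling are illustrative assumptions, not the article's actual setup.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class SonarIssuesFetcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical project key and server; SonarQube tokens are sent as the
        // Basic-auth username with an empty password.
        String url = "http://localhost:9000/api/issues/search"
                + "?componentKeys=my-java-project&types=VULNERABILITY";
        String token = System.getenv("SONAR_TOKEN");
        String auth = Base64.getEncoder()
                .encodeToString((token + ":").getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body lists the detected issues; in the article these results
        // are stored in PostgreSQL before the LLM is asked for improvements.
        System.out.println(response.body());
    }
}
```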


Chat Memory is like a mental note-taking system that helps Large Language Models (LLMs) retain important details from previous conversations. It’s essential for building intelligent chatbots and virtual assistants that can understand and respond to user queries accurately, because it lets the model “remember” information such as user preferences, order history, or questions asked earlier.
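
To make that concrete, here is a minimal sketch using LangChain4j's MessageWindowChatMemory (an assumption on my part, since that is the library featured in the neighbouring posts); the stored messages are replayed with each new request so the model keeps the earlier context.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;

public class ChatMemoryDemo {
    public static void main(String[] args) {
        // Keep only the 10 most recent messages; older ones are evicted.
        ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);

        memory.add(UserMessage.from("My favourite colour is green."));
        memory.add(AiMessage.from("Noted, I'll keep that in mind."));
        memory.add(UserMessage.from("What colour should my new logo be?"));

        // Everything still in the window is sent along with the next request,
        // which is how the model "remembers" the earlier preference.
        memory.messages().forEach(System.out::println);
    }
}
```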


Today, we’re diving into the low-level abstraction APIs of LangChain4j. If you caught my last post, you’ll remember we explored the high-level abstractions with a simple example using AiServices and the @Tool annotation. Those are super handy for cutting down on boilerplate and zooming in on the business logic of your AI/LLM apps.
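
For contrast, a minimal sketch of the low-level style might look like this, assembling the ChatMessage list yourself and calling generate directly (the model choice and prompts are illustrative assumptions, following the 0.x-era LangChain4j API used at the time of the post):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.output.Response;

import java.util.List;

public class LowLevelChatDemo {
    public static void main(String[] args) {
        // Build the model directly instead of hiding it behind AiServices.
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // With the low-level API you assemble the message list yourself.
        List<ChatMessage> messages = List.of(
                SystemMessage.from("You are a concise Java mentor."),
                UserMessage.from("Explain chat memory in one sentence."));

        Response<AiMessage> response = model.generate(messages);
        System.out.println(response.content().text());
    }
}
```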


Hey folks! 🚀 So, I’ve been diving deep into AI pair programming, and guess what? I had one of those “aha!” moments with my project, thanks to GitHub’s Dependabot. Let me spill the beans on this little adventure.