Meet My Personal
AI Assistant
Explore my portfolio interactively! This AI assistant can answer your questions about my skills and projects, or even help you get in touch and navigate the website.
Why It Matters
This project showcases practical application of working with Large Language Models (LLMs), implementing robust function calling, developing the necessary secure backend APIs, and crafting an interactive frontend.
“Robots will not only replace existing jobs but also create new fields and opportunities for humans, focusing on unique human strengths like creativity and problem-solving.” - Kevin Kelly
Core Frameworks
- Aceternity
- Function App
- Gemini
- Txtai
- Python
and more...
Features
Trustworthy Answers
- Retrieval-Augmented Generation (RAG) grounds each answer in freshly indexed data, so users always get up-to-date information.
- Focused on delivering accurate answers grounded in my portfolio data, ensuring information is readily accessible.
- Can't fit all the FAQs in a website? Let AI handle them!
My AI can now perform helpful actions directly within the portfolio website!
Navigation
- Scrolling to target section upon request.
- Offers help when needed.
- "show me your projects section!"
Email Sending
- Why do the job yourself when someone can do it for you?
- Say "Send an email to Zi Shen" and the assistant knows what to do.
Reminders Creation
- Allows visitors to leave actionable follow-up requests.
- "Remind Zi Shen to reply to my invitation for meeting this after at 3pm."
Interact with AI to Create Reminders.
And So Much More...
Workflows

Beyond Basic Prompts: a pipeline combining semantic context retrieval with function execution lets the LLM handle complex requests and deliver reliable answers.
Beyond The Veil
where magic happens

Contextualization
- User query and chat history are processed by a dedicated LLM (Summary Agent).
- A refined query enriched with conversational context is returned to support semantic search and function retrieval.
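The contextualization step can be sketched as folding recent chat history and the new user query into a single prompt for the Summary Agent. The prompt wording below is illustrative, not the production prompt.

```typescript
interface ChatTurn {
  role: "user" | "assistant";
  text: string;
}

// Build the Summary Agent's input: transcript plus the latest query,
// asking for a standalone rewrite (illustrative wording).
function buildContextualizationPrompt(history: ChatTurn[], query: string): string {
  const transcript = history
    .map((turn) => `${turn.role}: ${turn.text}`)
    .join("\n");
  return [
    "Rewrite the final user query as a standalone question,",
    "resolving any references to the conversation below.",
    "",
    transcript,
    `user: ${query}`,
  ].join("\n");
}
```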
image source: txtai
Parallel Processing: RAG
- The contextualized query is sent to an Azure Function App for semantic searching.
- Concurrently, the same query is analyzed by another specialized LLM (the Function Call Agent).
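The two branches can run concurrently with `Promise.all`. In this sketch, `semanticSearch` and `detectFunctionCall` are injected stand-ins for the Azure Function App call and the Function Call Agent; neither name is from the actual codebase.

```typescript
interface FunctionCall {
  name: string;
  args: Record<string, string>;
}

// Fire both requests at once and wait for both to settle.
async function processQuery(
  query: string,
  semanticSearch: (q: string) => Promise<string[]>,
  detectFunctionCall: (q: string) => Promise<FunctionCall | null>
): Promise<{ passages: string[]; call: FunctionCall | null }> {
  const [passages, call] = await Promise.all([
    semanticSearch(query),
    detectFunctionCall(query),
  ]);
  return { passages, call };
}
```

Running the retrieval and the function-intent analysis in parallel keeps total latency close to the slower of the two calls rather than their sum.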
Azure Functions

const navigateProjectsDeclaration: FunctionDeclaration = {
  name: "navigateProjects",
  parameters: {
    type: Type.OBJECT,
    description: "navigate user to a specific project.",
    properties: {
      project: {
        type: Type.STRING,
        description: "The target project to navigate to.",
        enum: ["projects", "personal-assistant"],
      },
    },
    required: ["project"],
  },
};
Parallel Processing: Function Intent
- Predefined functions with required parameters and context to handle specific tasks.
- The LLM determines when to call a function and extracts its parameters from the user's prompt.
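Once the model returns a function call as a `{name, args}` pair, the app can route it to a matching handler. A minimal dispatcher sketch follows; the handler bodies are placeholders, and `sendEmail` mirrors the declarations discussed above but is an assumed name.

```typescript
type Handler = (args: Record<string, string>) => string;

// Registry mapping declared function names to handlers (illustrative).
const handlers: Record<string, Handler> = {
  navigateProjects: (args) => `scrolling to ${args.project}`,
  sendEmail: (args) => `queuing email: ${args.subject}`,
};

// Route a model-issued call to its handler, with a safe fallback.
function dispatch(name: string, args: Record<string, string>): string {
  const handler = handlers[name];
  if (!handler) return "no matching function";
  return handler(args);
}
```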
Gemini AI
Final Synthesis
- All the necessary information is gathered and crafted into a final prompt.
- A final API request is sent to obtain a conversational response.
- The function call retrieved earlier is executed while the chat window updates.
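The final prompt assembly can be sketched as folding the retrieved passages and any executed function result into one request for a conversational answer. The template wording here is an illustration, not the production prompt.

```typescript
// Combine retrieved context, an optional action summary, and the
// user's question into the final synthesis prompt (illustrative).
function buildFinalPrompt(
  query: string,
  passages: string[],
  functionResult?: string
): string {
  const context = passages.map((p, i) => `[${i + 1}] ${p}`).join("\n");
  const action = functionResult ? `\nAction taken: ${functionResult}` : "";
  return `Answer using only the context below.\n\nContext:\n${context}${action}\n\nQuestion: ${query}`;
}
```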
Deeper Into the Forest

RAG Semantic Searching
- Create embeddings, index the data, and perform semantic search to extract the needed information.
- Use LLMs to generate responses based on the retrieved data.
image source: txtai
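Conceptually, the search step scores stored passage embeddings against the query embedding by cosine similarity and returns the top-k passages. The real stack delegates this to txtai; the vectors below are toy placeholders.

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every passage against the query and keep the best k.
function topK(
  query: number[],
  corpus: { text: string; embedding: number[] }[],
  k: number
): string[] {
  return corpus
    .map((doc) => ({ text: doc.text, score: cosine(query, doc.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((doc) => doc.text);
}
```

A brute-force scan like this is fine for a small portfolio corpus; at larger scale an approximate index takes over, as described under Data Indexing.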
Data Indexing
- Transforms text data into vector embeddings using a transformer model.
- Stores the embeddings in an efficient index structure (FAISS) for fast approximate nearest-neighbour search.
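One detail worth noting: if embeddings are normalized to unit length before indexing, the inner product of two vectors equals their cosine similarity, so an inner-product index (which FAISS provides) doubles as a cosine-similarity index. A minimal sketch of that identity:

```typescript
// Scale a vector to unit length.
function normalize(v: number[]): number[] {
  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return v.map((x) => x / norm);
}

// Inner (dot) product — equals cosine similarity for unit vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}
```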