Job Description
Mandatory Skills
Strong hands-on experience with and understanding of modern AI/ML technologies; Generative AI frameworks including LangChain, LangGraph, and Retrieval-Augmented Generation (RAG); and extensive experience designing and implementing agentic AI workflows and multi-agent systems.
Key Responsibilities
- Be instrumental in architecting and deploying production-grade AI solutions using Azure OpenAI (GPT-4o), Azure Document Intelligence, and serverless computing paradigms on Microsoft Azure.
- Design and develop solutions using Python, FastAPI, LangChain, LangGraph, Azure OpenAI (GPT-4o), Azure Document Intelligence, Azure Functions, Azure Blob Storage, Snowflake, MongoDB (Vector Search), SQL, Docker, MLflow, GitHub Actions (CI/CD), Socket.IO, Redis, and AWS SageMaker.
1. Backend Development
- Build and maintain robust, production-grade backend APIs using FastAPI or Flask, ensuring secure authentication, input validation, and structured error handling.
- Implement secure, event-driven data pipelines (e.g., Azure Functions) to automate extraction, transformation, and loading of structured and unstructured data across cloud storage and data warehouses (Azure Blob Storage, Snowflake).
- Manage database integrations including SQL databases, Snowflake, and MongoDB (Vector Search) to support both transactional and AI-driven retrieval workflows.
- Optimize backend systems for real-time processing of AI queries and responses, implementing asynchronous Python patterns and Redis caching to minimize latency under concurrent load.
- Integrate real-time communication frameworks such as Socket.IO for seamless, low-latency user interactions with frontend applications (e.g., Angular, React).
2. Generative AI Model Integration
- Utilize Azure OpenAI (GPT-4o) and related services to build LLM-powered applications, including Retrieval-Augmented Generation (RAG) systems with hybrid search (keyword + semantic).
- Architect and orchestrate multi-agent systems using LangChain and LangGraph, designing specialized agents for tasks such as content generation, intelligent data extraction, and automated decision-making.
- Deploy, fine-tune, and integrate AI models into business applications, working closely with product and business stakeholders to align model outputs with business objectives.
- Optimize AI-driven prompt engineering and embedding models for efficient performance, iterating on system prompts, chunking strategies, and retrieval pipelines to maximize accuracy and reduce API costs.
- Leverage Azure Document Intelligence for parsing unstructured documents (PDFs, earnings reports) and extracting structured financial or operational KPIs at scale.
- Build and maintain Model Context Protocol (MCP) servers to expose internal databases and documentation to LLM clients for secure, standardized data retrieval.
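The hybrid search (keyword + semantic) retrieval described above can be sketched with stdlib tools. This is a toy illustration: the "embeddings" are bag-of-words counts rather than model embeddings, the corpus and `alpha` weight are invented, and a real system would query an embedding model and a vector store such as MongoDB Vector Search.

```python
import math
from collections import Counter

# Toy corpus; in production these would be chunked documents with stored embeddings.
DOCS = [
    "quarterly revenue grew ten percent",
    "the server uses redis for caching",
    "operating margin and revenue KPIs improved",
]

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear verbatim in the document."""
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / len(q) if q else 0.0

def embed(text: str) -> Counter:
    """Bag-of-words stand-in for a learned embedding."""
    return Counter(text.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, alpha: float = 0.5) -> str:
    """Blend keyword and semantic scores; alpha weights the keyword side."""
    qv = embed(query)
    scored = [
        (alpha * keyword_score(query, d) + (1 - alpha) * cosine(qv, embed(d)), d)
        for d in DOCS
    ]
    return max(scored)[1]

best = hybrid_search("revenue KPIs")
```

Tuning `alpha`, chunk size, and the embedding model is exactly the retrieval-pipeline iteration the responsibilities above call out.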
3. Containerization & Deployment
- Use Docker to containerize AI applications and their dependencies, ensuring consistent behavior across development, staging, and production environments.
- Manage end-to-end application deployments in Azure environments (Azure Functions, Azure Workspace, Azure Blob Storage), including infrastructure setup and configuration.
- Engineer CI/CD pipelines using GitHub Actions to automate testing, building, and deployment processes for seamless, zero-downtime releases.
- Monitor, troubleshoot, and resolve application performance issues post-deployment using MLflow, custom dashboards, automated alerts, and logging systems.
- Implement model monitoring practices to detect data drift, performance degradation, and data quality issues in production ML/AI systems.
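One simple form of the drift detection mentioned above is a mean-shift check on a production feature against its training baseline. The z-score approach and the 3.0 threshold here are illustrative assumptions, not a prescribed method; production monitoring would track many features and metrics (e.g. via MLflow).

```python
import statistics

def mean_shift_zscore(baseline: list[float], recent: list[float]) -> float:
    """How many baseline standard deviations the recent mean has moved."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma if sigma else float("inf")

def drifted(baseline: list[float], recent: list[float], threshold: float = 3.0) -> bool:
    """Flag drift when the shift exceeds an (assumed) threshold."""
    return mean_shift_zscore(baseline, recent) > threshold

# Invented sample data: a stable window and a clearly shifted one.
baseline = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9]
stable = [10.0, 10.3, 9.9]
shifted = [15.0, 15.2, 14.8]
```

A check like this can run on a schedule (e.g. an Azure Function) and feed the automated alerts described above.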