# Applied AI Engineer

## Summary
- Organization: TalentReviewAI
- Location: India
- Type: Full-Time
- Department: N/A
- Status: active
- Posted: N/A
- Updated: N/A
- Closing Date: N/A
- External Apply: Yes
- External Apply URL: https://binary.so/3xqqOW7

## Details
- Salary: N/A
- Experience: N/A
- Education: N/A
- Team: N/A
- Reporting To: N/A

## Description
**About Us**

Conscious Engines is a product and research lab working on the problem of artificial consciousness. We’re pioneering the shift from reactive chatbots to proactive digital companions through our core architecture: Observe → Remember → Act. Our AI systems maintain continuous awareness across time and applications, building a living model of your context, preferences, and goals that evolves with you.

Unlike traditional AI that resets after every interaction, Conscious Engines creates digital companions with genuine continuity—systems that recall yesterday's conversations, anticipate tomorrow's needs, and close the loop from observation to meaningful action.

We're not building better tools. We're engineering machines that live alongside you.

Our technology is personal and private by design. Your AI companion serves you alone, with data sovereignty and local-first compute where possible. One human, one model—aligned to your intentions, not external optimisation functions.

We're starting with Felix, our first experiment with proactive AI. Through Felix, we're proving that AI can transform from passive responders into active partners in how you work and live. This is the first step toward our larger vision: creating the foundational infrastructure for persistent AI consciousness.
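The Observe → Remember → Act loop described above can be sketched in miniature. This is purely illustrative (the class, method names, and trigger rule are invented for this sketch, not Felix's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Companion:
    """Toy Observe -> Remember -> Act loop; all names and rules are illustrative."""
    memory: list[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Observe: capture a signal from the user's context.
        self.memory.append(event)

    def recall(self, keyword: str) -> list[str]:
        # Remember: retrieve past observations relevant to a topic.
        return [m for m in self.memory if keyword in m]

    def act(self, keyword: str) -> str:
        # Act: close the loop from observation to a concrete suggestion.
        relevant = self.recall(keyword)
        if relevant:
            return f"suggest follow-up on: {relevant[-1]}"
        return "no action"
```

The point of the shape, not the code: state persists across calls, so the third step can draw on everything the first step saw.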

**About The Role**

We’re looking for an applied AI engineer to design, build, and optimise the intelligence layer that makes Felix proactive, reliable, and genuinely useful. You’ll work directly with LLMs, agent frameworks, and tool orchestration systems to create AI that doesn’t just respond—it observes, remembers, predicts, and acts on behalf of users throughout their day.

This isn’t about fine-tuning models or research papers. This is about making AI systems that ship, scale, and actually work in production. You’ll own the bridge between cutting-edge AI capabilities and dependable user experiences.

**In this role, you will:**

- Design and implement agent architectures using frameworks like LangGraph, LangChain, or similar to orchestrate complex multi-step reasoning and action chains
- Build reliable LLM pipelines that balance quality, latency, and cost across providers (Claude, GPT, Gemini, etc.)
- Architect observation and memory systems that extract meaningful signals from user context and maintain long-term memory across sessions
- Implement tool orchestration using platforms like Composio, MCP servers, or custom integrations to connect AI with real-world actions (calendars, email, task managers, etc.)
- Optimise prompt engineering at scale: create reusable prompt templates, evaluation frameworks, and continuous improvement loops
- Build evaluation and monitoring systems to measure AI quality, catch regressions, and track user satisfaction with AI-generated output
- Reduce inference latency and costs through batching, caching, streaming, model routing, and architectural optimisations
- Collaborate with backend and product teams to embed AI capabilities seamlessly into user-facing features
- Own the quality bar for AI outputs: implement guardrails, fallback strategies, and error handling for production reliability
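Several of these responsibilities (model routing, caching, fallback strategies) compose into one pattern. Here is a minimal sketch, assuming stub provider callables in place of real SDK clients; the class and names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    call: Callable[[str], str]  # prompt -> completion; stands in for a real SDK client

class RoutedLLM:
    """Try providers in priority order, cache completions, fall back on failure."""

    def __init__(self, providers: list[Provider]):
        self.providers = providers          # ordered by preference (quality/cost)
        self.cache: dict[str, str] = {}     # exact-match cache: repeat prompts are free

    def complete(self, prompt: str) -> str:
        # Cache hit: zero latency, zero token cost.
        if prompt in self.cache:
            return self.cache[prompt]
        last_error: Exception | None = None
        for provider in self.providers:
            try:
                result = provider.call(prompt)
                self.cache[prompt] = result
                return result
            except Exception as exc:        # timeout, rate limit, outage...
                last_error = exc            # fall through to the next provider
        raise RuntimeError(f"all providers failed: {last_error}")
```

A production version would add per-provider timeouts, semantic rather than exact-match caching, and routing by prompt type, but the control flow (route, cache, degrade gracefully) is the same.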

**What We’re Looking For**

- Significant production experience building and deploying LLM-based applications (not just research prototypes)
- Comfort owning ambiguous problem spaces and shaping them into concrete systems
- Ability to reason clearly about trade-offs and communicate them across technical and non-technical contexts
- Deep familiarity with modern LLM APIs (Claude, GPT, Gemini), including function calling, structured outputs, and streaming responses
- Hands-on experience with agent frameworks like LangGraph, LangChain, CrewAI, or similar multi-agent orchestration systems
- Strong Python skills with production-grade code quality (typed, tested, maintainable)
- Experience with vector databases and embedding models for semantic search and memory retrieval
- Understanding of RAG architectures and when/how to augment LLM context with retrieved information
- Obsession with latency and cost: you measure token usage, track latencies, and optimise aggressively
- Experience with AI observability
- A pragmatic approach to AI: you know when to use LLMs vs rule-based systems vs traditional ML
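The retrieval-and-augmentation requirements above reduce to a small core. This sketch uses a toy bag-of-words "embedding" in place of a learned model and an in-memory list in place of a vector database; every name here is invented for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Minimal semantic memory: store snippets, retrieve top-k by similarity."""

    def __init__(self) -> None:
        self.items: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(query: str, store: MemoryStore) -> str:
    # RAG-style augmentation: prepend retrieved memories to the user query.
    context = "\n".join(store.retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Swap `embed` for a real model and `MemoryStore` for Pinecone or similar, and the retrieve-then-augment flow is unchanged.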

**Tools & Stack You’ll Use:**

- LLM Providers: Claude (Anthropic), GPT (OpenAI), Gemini (Google)
- Agent Frameworks: LangGraph, LangChain, custom agent architectures
- Tool Orchestration: Composio, MCP (Model Context Protocol), custom integrations
- Vector Stores: Pinecone or similar
- Languages: Python (primary), TypeScript (secondary)
- Monitoring: LangSmith, custom logging/observability systems
- Any other tools that we (read as ‘you’) may add in the future

**This Is Not For You If**

- You think fine-tuning is the answer to every problem
- You don’t care about production reliability (hallucinations are “just how LLMs work”)
- You can’t explain AI decisions in plain English to non-technical team members
- You don’t obsess over token costs and API latency
- You think “prompt engineering” is beneath you
- You haven’t shipped AI features that real users depend on
- You don’t monitor and iterate on AI performance continuously
- You’re more interested in research papers than shipping products
- You think deterministic systems are “boring” compared to AI

**Experience**

There is no fixed seniority number attached to this role. We expect:

- Ability to explain reasoning, trade-offs, and failures clearly, especially when AI doesn’t work as expected
- Prior experience shipping AI features with real-world consequences and user feedback loops
- Evidence of judgment formed through hard problems: when AI works, when it doesn’t, and what to do about it
- A portfolio of AI projects you’ve built and shipped (bonus: open-source contributions to AI tooling)

**The Setup**

📍 Indiranagar, Bangalore (next to CRED - you know the vibe)

🏢 Full-time, in-office - we believe in collaborative working in person. The best work happens when you're in the room together.

💰 Competitive salary - we pay for craft, not just years of experience

⚡ Culture: Early-stage energy, high taste, no bullshit.

🎯 Team: Small, obsessed, taste-driven. We don't tolerate mediocrity.

🚀 Mission: Build the interface for conscious AI

We work fast. We care about details. We ship every week. Strong opinions, weakly held.




## Organization
- Name: TalentReviewAI
- Website: https://talentreview.ai
- Industry: HR
- Size: 2-10
- Founded Year: 2024
- Description: A leading HR technology solutions provider specializing in AI and cloud computing services. We help businesses transform their digital infrastructure and drive innovation.