Applied AI Engineer
Softjourn
- Ivano-Frankivsk
- Permanent position
- Full-time
Requirements:
- 5+ years of primary software engineering experience building and shipping production systems;
- 2+ years of hands-on experience building and deploying LLM-based applications or AI agents in production environments;
- Strong software engineering fundamentals and the ability to write clean, maintainable, production-ready code;
- Experience building APIs, services, integrations, and data flows for production systems;
- Experience designing and implementing agentic systems using major model providers and open-weight models (e.g. OpenAI, Anthropic, Gemini, Llama, or comparable);
- Hands-on experience with at least one orchestration framework (LangChain, LangGraph, AutoGen, CrewAI, etc.);
- Experience with LLM evaluation and benchmarking: designing evaluation pipelines, measuring model performance, and iterating systematically;
- Experience architecting RAG systems, including chunking strategies, embedding models, vector databases, and reranking (e.g. Pinecone, Weaviate, Qdrant, Chroma, Milvus);
- Strong prompt design and iteration skills;
- Experience building custom tools, skills, integrations, or protocol-based extensions for AI systems, including MCP-style server/client patterns;
- Cloud platform experience with AWS and/or GCP (Vertex AI) and/or Azure AI Foundry (formerly Azure AI Studio);
- Familiarity with AI safety, responsible AI, and output guardrails for production systems;
- Experience with testing, observability, monitoring, and optimization for latency, throughput, reliability, and cost in production AI systems;
- Experience working with data platforms relevant to AI workloads, such as Redis, Cassandra, BigQuery, Postgres, or similar systems;
- Upper-intermediate level of English.
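To illustrate the retrieval side of the RAG work described above (chunking, embedding, similarity search), here is a minimal self-contained sketch. The bag-of-words "embedding" and the toy document are stand-ins: a production system would use a real embedding model and a vector database such as those named in the requirements.

```python
# Minimal RAG-style retrieval sketch: chunk a document, embed the chunks,
# and return the top-k chunks for a query by cosine similarity.
# The bag-of-words embedding is a toy stand-in for a real embedding model.
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word chunks (one simple chunking strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical document text for illustration only.
doc = ("Invoices are stored in Postgres. Ticketing data is synced nightly. "
       "Embeddings are refreshed when source documents change.")
chunks = chunk(doc, size=6)
print(retrieve("where are invoices stored", chunks, k=1))
```

In a real pipeline the retrieved chunks would be passed to the model as context, optionally after a reranking step.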
Nice to have:
- Experience building or fine-tuning models for domain-specific use cases (e.g. financial data, ticketing);
- Experience building custom agent frameworks from scratch;
- Experience with agent evaluation frameworks (LangSmith, AgentEvals, etc.);
- Portfolio of delivered AI projects: production applications, demos, experiments, blog posts, or open-source contributions;
- Machine learning and deep learning fundamentals: model training, evaluation, regularization, and optimization (e.g. PyTorch, TensorFlow, JAX);
- Hands-on experience with AI domains beyond LLMs, such as computer vision, recommendation and personalization systems, forecasting and time-series modeling, or generative AI;
- Experience applying AI safety and guardrails in regulated or high-sensitivity environments.
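As a sketch of what "building a custom agent framework from scratch" can mean at its smallest, here is a tool registry plus an agent loop. The planner is a keyword stub and all tool names are hypothetical; in practice an LLM would choose the tool and its arguments (e.g. via provider function calling or MCP-style tool discovery).

```python
# Minimal tool-using agent sketch: register tools, pick one for a task,
# and execute it. The keyword "planner" stands in for an LLM tool-choice call.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function as a callable tool (illustrative decorator API)."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("lookup_ticket")
def lookup_ticket(ticket_id: str) -> str:
    # Hypothetical ticketing lookup; a real tool would query a live system.
    return f"ticket {ticket_id}: status=open, priority=high"

@tool("convert_currency")
def convert_currency(amount: str) -> str:
    # Hypothetical conversion with a hard-coded assumed rate.
    return f"{amount} USD is 0.92 EUR per USD at the assumed rate"

def plan(task: str) -> tuple[str, str]:
    """Stub planner: route by keyword instead of asking an LLM."""
    if "ticket" in task:
        return "lookup_ticket", task.split()[-1]
    return "convert_currency", task.split()[0]

def run_agent(task: str) -> str:
    name, arg = plan(task)
    return TOOLS[name](arg)

print(run_agent("check ticket T-1042"))
```

Orchestration frameworks such as LangGraph or CrewAI wrap this same loop with state management, retries, and multi-step planning.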
Responsibilities:
- Design, build, and deploy LLM-based applications and agentic systems to production, from prototype to live system;
- Implement RAG pipelines, tool-using agents, and multi-step workflows over proprietary and third-party data sources;
- Evaluate foundation models (OpenAI, Anthropic, Gemini, Llama, and others) against client use cases, constraints, and budget;
- Build and maintain LLM evaluation pipelines, including offline benchmarks, online monitoring, and human-in-the-loop review processes;
- Measure and improve model performance iteratively across quality, relevance, faithfulness, latency, and cost;
- Deploy and serve AI systems on cloud platforms with attention to scalability, reliability, and cost efficiency;
- Work with Solution Architects and delivery teammates to translate ambiguous business problems into concrete technical tasks and solution components;
- Communicate technical decisions, limitations, and trade-offs clearly to teammates, product stakeholders, and clients when needed;
- Contribute reusable internal tools, skills, integrations, and MCP-style patterns that improve future client delivery;
- Stay current with the fast-moving AI landscape and evaluate emerging frameworks, tools, and approaches with a practical mindset;
- Tailor and customize delivered solutions to each client's requirements.
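The evaluation-pipeline responsibilities above can be sketched as a tiny offline benchmark harness. The "model" is a stub and the test cases are invented for illustration; a real pipeline would call a provider API and score with semantic similarity or an LLM judge rather than exact match.

```python
# Minimal offline LLM evaluation sketch: run a model over a labeled test
# set, score each output, and aggregate metrics. The model is a stub.
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    expected: str

def stub_model(prompt: str) -> str:
    """Stand-in for a provider API call (e.g. OpenAI, Anthropic, Gemini)."""
    canned = {
        "What currency is used in EUR invoices?": "euro",
        "Is a refunded ticket billable?": "no",
    }
    return canned.get(prompt, "unknown")

def exact_match(output: str, expected: str) -> bool:
    """Simplest possible scorer; real pipelines use richer metrics."""
    return output.strip().lower() == expected.strip().lower()

def run_eval(cases: list[EvalCase]) -> dict:
    results = [exact_match(stub_model(c.prompt), c.expected) for c in cases]
    return {"total": len(results),
            "passed": sum(results),
            "accuracy": sum(results) / len(results) if results else 0.0}

# Hypothetical labeled cases for illustration.
cases = [
    EvalCase("What currency is used in EUR invoices?", "euro"),
    EvalCase("Is a refunded ticket billable?", "no"),
    EvalCase("What is the SLA for tier-1 tickets?", "4 hours"),
]
report = run_eval(cases)
print(report)  # two of the three stubbed answers match
```

Running such a harness on every model or prompt change is what makes the "measure and improve iteratively" loop systematic rather than anecdotal.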