MLOps Engineer
About us:
Our mission is to supercharge enterprise IT organizations with custom AI solutions. Headquartered in the heart of Munich with a hub in Lisbon, we’re a team of builders, dreamers, and problem-solvers tackling some of the most exciting and complex challenges in AI-native development adoption.
We operate on speed, adaptability, and extreme ownership. That means we move fast, stay flexible, and take full responsibility for our impact. Our clients trust us because we don’t ship generic tools; we embed AI into real-world enterprise workflows with precision and empathy.
What You’ll Do:
As an MLOps Engineer, you’ll be the backbone of our technical infrastructure, owning the delivery pipeline, model serving stack, and runtime systems that power our AI workflows.
Your mission: ensure that our AI agents, APIs, and web interfaces are reliable, scalable, and fast, from prototype to production.
- Build and maintain backend APIs and services to support AI-driven features and workflows.
- Own our MLOps pipeline: from model versioning and testing to containerized deployment and CI/CD.
- Set up observability and monitoring for LLM-based services and agentic systems.
- Manage infrastructure for fine-tuning, retrieval-augmented generation (RAG), and real-time agent orchestration.
- Collaborate closely with AI engineers to streamline model integration, scaling, and latency optimization.
- Contribute to frontend features and internal tooling as needed; you’re not afraid of building end-to-end.
- Automate everything you can — and keep our infrastructure lean, secure, and maintainable.
What We’re Looking For:
You’re an MLOps Engineer who isn’t afraid to go deep into infra. You understand what it takes to ship AI-powered features to production, and you take pride in building systems that just work.
Must-Haves:
- Experience deploying LLMs or RAG pipelines in production.
- Strong experience with Docker, CI/CD pipelines (GitHub Actions, GitLab CI, etc.), and cloud infrastructure (AWS, GCP, or Azure).
- Solid understanding of MLOps workflows, including model packaging, deployment, and serving.
- Experience with logging, tracing, monitoring, and metrics (Prometheus, Grafana, Sentry, etc.).
- High autonomy — you’re comfortable owning infra and deployment from day one.
Bonus Points:
- 3+ years of experience in full-stack or backend engineering roles.
- Proficiency with Python (FastAPI, Flask, or similar) and modern JS frameworks (React, Next.js, etc.).
- Familiarity with GPU orchestration and efficient serving (Triton, vLLM, or Hugging Face Inference).
- Exposure to enterprise authentication and compliance requirements.
- Contributions to internal tooling for ML teams (feature stores, model registries, sandbox environments, etc.).
- Knowledge of data engineering best practices (ETL pipelines, batch vs. streaming).
Why Join Us:
- Be the bridge between cutting-edge AI and rock-solid production systems.
- Work closely with a team of AI engineers, product thinkers, and enterprise clients.
- Build infrastructure that powers agents, not just dashboards.
- Enjoy a high-trust, fast-moving culture where you can take real ownership of what you build.
Job details
Company: Ki Performance
Location: Lisboa, Lisboa, Portugal
Published: 5 June 2025