Software Engineer, Multimedia
Fireworks AI
City: Redwood City, CA
Salary: $170,000 - $240,000 per year
Contract type: Full time

About Us:
Here at Fireworks, we’re building the future of generative AI infrastructure. Fireworks offers a generative AI platform with the highest-quality models and the fastest, most scalable inference. We’ve been independently benchmarked as having the fastest LLM inference, and our innovative research projects, such as our own function-calling and multi-modal models, are gaining strong traction. Fireworks is funded by top investors, including Benchmark and Sequoia, and we’re an ambitious, fun team composed primarily of veterans from PyTorch and Google Vertex AI.
The Role:
We're looking for a strong Backend Infrastructure Engineer to help accelerate our multimedia AI capabilities. You'll build and optimize the infrastructure powering state-of-the-art multimodal AI, including vision-language models (VLMs) and speech AI models, with a focus on achieving industry-leading latency and throughput across diverse multimedia workloads. You'll develop infrastructure for features like VLM fine-tuning, real-time voice processing pipelines, and model enablement on the latest hardware. You'll be instrumental in helping us capture significant ARR growth in the multimedia AI space while ensuring we deliver the fastest, most reliable multimodal platform in the market.
Key Responsibilities:
- Collaborate with ML engineers and researchers to productionize models and support evolving multimedia capabilities
- Identify, profile, and address performance bottlenecks across the stack, from media preprocessing to vision/audio encoders to the core inference engine
- Ensure high reliability, observability, and security across backend systems
- Own the enablement and optimization of new model releases, ensuring we consistently deliver the fastest implementations in the market
- Build and maintain performant APIs and services
- Collaborate closely with customers and sales teams to implement custom features and optimizations that drive ARR growth
- Propose new roadmap items based on customer needs
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 3+ years of experience as a backend or infrastructure engineer, ideally supporting ML/AI systems or data-intensive workloads
- Experience with PyTorch and deep learning frameworks for inference and training
- Strong programming skills in Python and/or Go, with a track record of building reliable distributed backend systems
- Experience with cloud platforms (e.g., AWS, GCP), infrastructure-as-code tools (e.g., Terraform), and containerization/orchestration tools (e.g., Docker, Kubernetes)
- Experience supporting ML workloads in production (model fine-tuning, distributed training, inference optimization)
- Experience working directly with LLMs, vision-language models, audio models (ASR, TTS), or other multimodal AI systems in production environments
- Experience with performance optimization and profiling for high-throughput systems
- Knowledge of model quantization, speculative decoding, or other ML optimization techniques
Base Pay Range (Plus Equity): $170,000 USD - $240,000 USD
Why Fireworks AI?
- Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
- Build What’s Next: Work with cutting-edge technology that shapes how businesses and developers harness AI globally.
- Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
- Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.