

A fabulous case study, looks like some big wins for you. I am interested in a bit more detail behind the AWS Instance reduction... Is this a statement of container driven efficiency on AWS (denser instances), migration from AWS to owned infrastructure/other providers, combination of both, etc... What are the key sources of cost savings? Thanks!

Stephen Voorhees

Great question. Our DC/OS infrastructure currently runs solely out of AWS. Our reduction of instances is driven purely by densification of our infrastructure.

Our event streaming infrastructure is fairly complex, with multiple technologies involved (Kafka, Zookeeper, RabbitMQ, Redis, etc.); this can quickly bloat your instance count. With some careful planning around the resources we allocated per container, we were able to get far better density with multiple containers per instance.
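To illustrate the densification idea in rough numbers, here is a minimal sketch. All figures are hypothetical (the post doesn't share instance types or allocations); the point is just that declaring modest per-container resource requests, as you would in a DC/OS/Marathon app definition, lets the scheduler pack several containers onto one instance instead of dedicating an instance per service.

```python
def containers_per_instance(instance_cpus, instance_mem_mb,
                            container_cpus, container_mem_mb):
    """Containers that fit on one instance, limited by the scarcer resource."""
    by_cpu = int(instance_cpus // container_cpus)
    by_mem = int(instance_mem_mb // container_mem_mb)
    return min(by_cpu, by_mem)

# Hypothetical example: a 4-vCPU / 16 GB instance running containers
# that each request 0.5 CPUs and 2 GB of memory.
print(containers_per_instance(4, 16000, 0.5, 2000))  # -> 8
```

Eight containers per instance versus one service per instance is the kind of consolidation that drives an instance-count reduction, with the caveat (noted below) that packing too tightly invites resource contention.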

I'd say we have further opportunity for optimization: optimizations that will improve our density (better placement, improved app code and design, etc.) and optimizations that may slightly reduce our density (placement to avoid resource contention, etc.). Placement and resource allocation is definitely an art form :-)

