• Cloud DevOps Engineer – Data Science

    Job Locations US-Washington DC | US-VA-McLean
    Client Services
    Regular Full-Time
  • Overview

    What do you get when you bring together the brightest minds and place them into an exciting, fast-paced environment that fosters intellectual growth and rewards based on impact, not tenure?


    You get one of the fastest-growing consulting companies in the United States. While we may be a new name in consultancy, we were born from a storied one. Guidehouse was founded in 2018 as an evolution of PwC Public Sector, with a mission to help our clients deliver on their mission by providing bold new strategies that catalyze transformative results across the enterprise. We embrace brilliance. We embrace independence. Join us.


    Build the future of Data Science as part of the Artificial Intelligence Center of Excellence (CoE). The CoE is a unique team within Guidehouse, focused on solving our clients’ most important challenges using data and advanced analytics, AI, and automation. As one of the top Public Sector professional services firms, the CoE works on a huge variety of projects: predictive analytics models that support our healthcare and financial services divisions, open-source analysis for federal agencies, and deep learning initiatives for advanced analysis. As part of Guidehouse’s Data Analytics team, you will work on high-impact, high-visibility projects, helping to shape not only Guidehouse’s current business but also its long-term strategy.

    This role involves working in a complex, multi-functional, Agile team environment with other data scientists, engineers, and UI/UX developers to develop and productionize analytics solutions. The Cloud DevOps Engineer is involved in many aspects of a customer engagement, collaborating with team members and customers to help stakeholders discover the information hidden in vast amounts of data and make smarter decisions that deliver better products.

    • Lead team initiatives to continuously refine our AWS deployment practices for improved reliability, repeatability, and security. You’ll create and contribute to plans and collaborate with other team members on these high-visibility initiatives, which help increase service levels, lower costs, and deliver features more quickly.
    • Work closely with the Data Science team to automate the deployment and configuration of infrastructure supporting the rollout of data products and projects on the AWS data stack. This includes building machine learning workflows in AWS that span the full stack, from front end to back end.
    • Design effective monitoring/alerting (for conditions such as application errors or high memory usage) and log-aggregation approaches (to quickly access logs for troubleshooting or generate reports for trend analysis), working closely with business stakeholders to proactively notify them of issues and communicate metrics, using tools including AWS CloudWatch, SageMaker, EMR, and Glue.
    • Write code and scripts to automate provisioning of AWS services and to configure services, using tools and languages including AWS CLI / API, Terraform, Ansible, Chef, Python, Bash, and Git.
    • Configure build pipelines to support automated testing and deployments using tools including Jenkins, CircleCI, and AWS CodeDeploy. You’ll configure these pipelines for specific products and help optimize them for performance and scalability.
    • Help refine, implement, and verify DevSecOps security practices (including regular security patching, minimum-permission accounts and policies, and encrypt-everything defaults) in compliance with Health IT, government, and other regulatory standards, using tools such as SonarQube and Veracode to analyze and verify compliance.
    • Document and diagram deployment-specific aspects of architectures and environments, working closely with Software Engineers, Data Scientists, Software Engineers in Test, and others in DevOps.
    • Troubleshoot issues in production and other environments, applying debugging and problem-solving techniques (e.g., log analysis, non-invasive tests), working closely with development and product teams.
    • Suggest improvements to deployment patterns and practices based on lessons learned from past deployments and production issues, and collaborate with the DevOps team to implement them.
    • Promote a DevOps culture, including building relationships with other technical and business teams.
    • Work closely with InterOps to deploy and configure the platform to on-board clinics.
    • Work to ensure that system and data security are maintained to a high standard, ensuring that the confidentiality, integrity, and availability of Navigating Cancer’s applications are not compromised.
    • Set up the framework for a universal artifact-management tool such as Artifactory.
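
    The monitoring/alerting responsibility above can be sketched briefly. This is a minimal illustration, not part of the posting: it assembles the parameters for a CloudWatch alarm on high memory usage, where the alarm name, namespace, threshold, and SNS topic ARN are hypothetical placeholders.

```python
# Sketch of the monitoring/alerting duty: build the keyword arguments for a
# CloudWatch alarm on high memory usage. All names and thresholds here are
# illustrative assumptions; a deployment script would pass the result to
# boto3.client("cloudwatch").put_metric_alarm(**params).

def memory_alarm_params(instance_id, threshold_pct=85.0,
                        sns_topic_arn="arn:aws:sns:us-east-1:123456789012:ops-alerts"):
    """Build PutMetricAlarm keyword arguments for one EC2 instance."""
    return {
        "AlarmName": f"high-memory-{instance_id}",
        "Namespace": "CWAgent",             # CloudWatch agent memory metrics
        "MetricName": "mem_used_percent",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                      # evaluate 5-minute averages
        "EvaluationPeriods": 3,             # 3 consecutive breaches fire the alarm
        "Threshold": threshold_pct,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],    # notify stakeholders via SNS
    }

params = memory_alarm_params("i-0abc123", threshold_pct=90.0)
```

    Keeping the parameters in a plain function like this makes the alarm definition easy to review and reuse across instances before any AWS call is made.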



    • Ability to automate away manual interactions and a passion for enabling developers to write code that works
    • A strong understanding of Linux administration including Bash scripting
    • An understanding of automation and RPA orchestration tools such as UiPath
    • Networking expertise, including VPCs, software-defined networks (e.g., Amazon/Azure), VLANs, routers, and firewalls
    • Familiarity with at least one IaC/CM tool such as Terraform, Ansible, Chef, or Puppet
    • Familiarity with at least one code build/deploy tool such as Jenkins or CircleCI
    • Familiarity with DB setup, configuration and monitoring.
    • Ability to enable capabilities through a blend of process and technology


    • Bachelor’s degree in Computer Science, Engineering, Applied Mathematics, Statistics, Data Management or related fields.
    • 4+ years of AWS administration experience/training, including provisioning EC2 instances, VPCs, Elastic Beanstalk, Lambda functions, RDS Aurora provisioned/serverless databases, S3 storage, IAM security, ECS containers, and CloudWatch metrics & logs
    • 4+ years of experience developing and / or deploying serverless functions using AWS Lambda, Azure Functions, or Google Cloud Functions
    • Experience developing and / or deploying Docker Containers on ECS/EKS or Kubernetes
    • Experience automating the provisioning of infrastructure to enable a complete application ecosystem on demand
    • 3+ years of experience with SQL; adept with RDS PostgreSQL or another DBMS
    • Experience with monitoring / alerting tools such as New Relic, Grafana, Prometheus, Sysdig
    • Experience with log aggregation tools such as Datadog, ELK, Splunk
    • Experience in Python as well as at least one other programming language such as Ruby, Java, Scala, JavaScript / Node.js, Go, C#, or C/C++.
    • AWS DevOps Solution Architecture Certification
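
    The serverless-functions requirement above can be illustrated with a minimal sketch, again not part of the posting: an AWS Lambda handler in the standard Python handler shape, validating an API Gateway-style JSON payload. The event fields and response shape are illustrative assumptions.

```python
# Minimal AWS Lambda handler sketch: validate a JSON body from an API
# Gateway proxy event and return an HTTP-style response. The field name
# "record_id" is a hypothetical example, not from this posting.
import json

def lambda_handler(event, context):
    """Echo back a validated 'record_id' from a proxy-integration event."""
    try:
        body = json.loads(event.get("body") or "{}")
        record_id = body["record_id"]
    except (json.JSONDecodeError, KeyError):
        return {"statusCode": 400,
                "body": json.dumps({"error": "record_id is required"})}
    return {"statusCode": 200,
            "body": json.dumps({"record_id": record_id, "status": "accepted"})}

# Local smoke test, as a build pipeline might run before packaging:
ok = lambda_handler({"body": json.dumps({"record_id": 42})}, None)
bad = lambda_handler({"body": "not json"}, None)
```

    Because the handler is a plain function of an event dict, it can be exercised in a CI pipeline without deploying, which fits the automated-testing duties listed earlier.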


    Additional Desired Skills: 

    • 2+ years of experience in building cloud Data Lakes to support Data Analytics and Machine Learning tasks.
    • 1+ years of experience in AWS RDS, schema design, system performance & optimization, capacity planning. Preferably AWS Big Data Architect Certification or equivalent.
    • Demonstrable in-depth understanding of data structures and ETL processes (including SSIS)
    • Experience with structured and unstructured data, including relational databases (SQL Server), graph databases (Neo4j), and NoSQL databases (MongoDB)
    • Experience with working with big data (Scala, Spark, Pig)
    • Experience with the operationalization and maintenance of analytics APIs using tools such as Plumber, Flask, and Swagger
    • Experience in data analytics, business intelligence, or data science
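
    The ETL and SQL skills listed above can be sketched with Python’s built-in sqlite3 module. This is a hedged illustration only; the table and column names are hypothetical, and a production pipeline would target an engine such as RDS PostgreSQL instead.

```python
import sqlite3

# Minimal extract-transform-load sketch: raw text amounts are cast to
# numbers (transform) and aggregated per user into a target table (load).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount TEXT)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [(1, "10.5"), (1, "4.5"), (2, "3.0")])

conn.execute("""CREATE TABLE user_totals AS
                SELECT user_id, SUM(CAST(amount AS REAL)) AS total
                FROM raw_events
                GROUP BY user_id""")

totals = dict(conn.execute(
    "SELECT user_id, total FROM user_totals ORDER BY user_id"))
```
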

    Additional Requirements

    • This position requires successful completion of a background check and employment verification.
    • The successful candidate must not be subject to employment restrictions from a former employer (such as a non-compete) that would prevent the candidate from performing the job responsibilities as described.



    Guidehouse is an affirmative action and equal opportunity employer. Employment decisions will be made without regard to race, color, religion, sex, age, national origin, military status, veteran status, handicap, physical or mental disability, sexual orientation, gender identity, genetic information or other characteristics protected by law.


    If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.


    Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.


    Benefits include:

    • Medical, Rx, Dental & Vision Insurance
    • Personal and Family Sick Time & Company Paid Holidays
    • Parental Leave and Adoption Assistance
    • 401(k) Retirement Plan
    • Student Loan Paydown
    • Basic Life & Supplemental Life
    • Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
    • Short-Term & Long-Term Disability
    • Tuition Reimbursement, Personal Development & Learning Opportunities
    • Skills Development & Certifications
    • Employee Referral Program
    • Corporate Sponsored Events & Community Outreach
    • Emergency Back-Up Childcare Program

