- Bachelor’s degree in Computer Science, Engineering, Mathematics or a related field or equivalent professional or military experience
- 8+ years of experience in data platform implementation
- 3+ years of hands-on experience implementing and performance-tuning Kinesis, Kafka, Spark, or similar technologies
- Hands-on experience building data or machine learning pipelines
- Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis)
- Current hands-on implementation experience
In this role, you will work with our partners and customers, focusing on AWS offerings such as Amazon Kinesis, AWS Glue, Amazon Redshift, Amazon EMR, Amazon Athena, Amazon SageMaker, and more. You will help our customers and partners remove the constraints that prevent them from leveraging their data to develop business insights.
AWS Professional Services engages in a wide variety of projects for customers and partners, providing collective experience from across the AWS customer base, and is obsessed with customer success. Our team collaborates across the entire AWS organization to bring access to product and service teams, get the right solution delivered, and drive feature innovation based on customer needs.
You will also have the opportunity to write white papers and blog posts, build demos, and create other reusable collateral for our customers. Most importantly, you will work closely with our Solutions Architects, Data Scientists, and Service Engineering teams.
The ideal candidate will have extensive experience in design, development, and operations, with deep knowledge of services such as Amazon Kinesis, Apache Kafka, Apache Spark, Amazon SageMaker, Amazon EMR, NoSQL technologies, and other third-party tools.
This is a customer-facing role. You will be required to travel to client locations and deliver professional services as needed.
Inclusive Team Culture
Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have thirteen employee-led affinity groups, reaching 85,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon’s culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.
Our team puts a high value on work-life harmony. Striking a healthy balance between your personal and professional life is crucial to your happiness and success here. We are a customer-obsessed organization: leaders start with the customer and work backwards, and they work vigorously to earn and keep customer trust. As such, this is a customer-facing role in a hybrid delivery model. Project engagements include remote delivery methods and onsite engagements that will involve travel to customer locations as needed.
Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
- Master's or PhD in Computer Science, Physics, Engineering, or Math
- Familiarity with machine learning concepts
- Hands-on experience working on large-scale data science/data analytics projects
- Hands-on experience with technologies such as AWS, Hadoop, Spark, Spark SQL, MLlib, or Storm/Samza
- Experience implementing AWS services in a variety of distributed computing and enterprise environments
- Experience with at least one modern distributed machine learning and deep learning framework, such as TensorFlow, PyTorch, MXNet, Caffe, or Keras
- Experience building large-scale machine learning infrastructure that has been successfully delivered to customers
- Experience defining system architectures and exploring technical feasibility trade-offs.
- 3+ years of experience developing cloud software services and an understanding of design for scalability, performance, and reliability
- Ability to prototype and evaluate applications and interaction methodologies
- Experience with the AWS technology stack
- Written and verbal technical communication skills, with the ability to present complex technical information clearly and concisely to a variety of audiences