We are leading the way in disrupting the data warehouse industry. We are accomplishing this vision by leveraging relational database technologies like Redshift along with emerging Big Data technologies like Elastic MapReduce (EMR) to build a data platform capable of scaling with the ever-increasing volume of data produced by AWS services. The successful candidate will have the opportunity to shape and build AWS' data lake and supporting systems for years to come.
You should have deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms. You should have excellent business and communication skills, enabling you to work with business owners to understand their data requirements and to build ETL pipelines that ingest their data into the data lake. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions that move data from production systems into the data lake. Above all, you should be passionate about working with huge datasets and love bringing data together to answer business questions and drive growth.
** For more information on AWS, please visit http://aws.amazon.com **
This position requires a Bachelor's Degree in Computer Science or a related technical field, and 5+ years of relevant employment experience.
5+ years of work experience with ETL, Data Modeling, and Data Architecture.
Expert-level skills in writing and optimizing SQL.
Experience with Big Data technologies such as Hadoop/Hive/Spark.
Solid Linux skills.
Experience operating very large data warehouses or data lakes.
Solid communication skills and the ability to work well in a team.
A passion for technology. We are looking for someone who is keen to leverage their existing skills while also trying new approaches.