We are building out our BigData @Scale Engineering team to design and develop the next generation of large-scale distributed data platforms at Salesforce.
Do the points below get you excited about the role? Come and join us at Salesforce!
Eat, sleep, and breathe large scale distributed systems and data platforms
Working with HBase, Phoenix, HDFS, YARN, Kafka, Spark, or equivalent large-scale distributed systems technologies
Building reliable, self-healing services on unreliable hardware
Designing, developing, and operating resilient distributed systems that run across thousands of compute nodes in datacenters around the globe
Holding strong, heartfelt opinions on the CAP theorem, sketching out different consistency models on a single napkin, and defending each of them
Not just using open source projects, but being motivated to contribute back to them
For senior engineers (5 years minimum):
BS, MS, or PhD in Computer Science or related discipline
Excellent knowledge of Computer Science fundamentals, with strong competencies in data structures, algorithms, software design and coding
Knowledge of or experience with large-scale distributed systems is a nice-to-have
For more experienced candidates (6+ years), all of the above, plus:
Experience with Java and/or C++ in a Linux/UNIX data center environment
Experience designing and building infrastructure or services at large scale
Experience with open source projects like HBase, Phoenix, Kafka, HDFS, Hadoop, Cassandra, etc., or with industry or academic projects in the areas of large-scale distributed systems or data platforms
Experience with Agile development methodology and Continuous Integration/Delivery
Experience using telemetry and metrics to drive operational excellence