Web Crawler & Data Developer
Design and build data-mining web crawlers. Build databases and data pipelines to store and process large, sometimes unstructured datasets for use within an analytics platform. Work with both relational and big data management systems.
BS/MS degree in Computer Science, Engineering or a related subject
Experience with common third-party APIs (Google, Facebook, Twitter, etc.).
Good knowledge of relational databases, version control tools, and web service development.
Passion for design and coding best practices and a desire to develop bold new ideas.
Strong communication skills.
Self-starter with a good work ethic.
Desirable but not mandatory:
Hands-on experience building social media crawlers, specifically for Facebook and Twitter.
Experience building NoSQL databases.
Experience building Hadoop systems (e.g., using any of Kafka, Flume, Sqoop, HDFS, HBase, Hive, Pig, Ambari, Spark, and YARN).
Cloud hosting experience (Amazon Web Services, Google Compute Engine).
Experience with Agile software development.
- Looking to fill full-time positions, but willing to consider hiring on a contract basis as well.
- We offer competitive pay rates.