Challenge
Autonomous Learning Cluster
We are a group of specialists in scalable computing, machine learning, and open-source software. We seek to integrate autonomous data acquisition and valorisation into an end-to-end scalable machine learning pipeline that proactively uses the Internet, telemetry, and sensor networks to train and improve itself.
We believe that state-of-the-art AI algorithms and architectures perform best on large amounts of real data, unconstrained by down-sampling, manual annotation, and cost control. As a result, our solutions combine web crawling, APIs, unstructured data management, and ensemble/proactive learning algorithms, and scale all components adaptively to perform under varying resources with minimal supervision.
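The proactive-learning idea above can be sketched in a few lines of Scala. This is a hypothetical illustration, not tribbloids code: instead of annotating everything up front, the model itself picks the freshly acquired records it is least certain about, and only those are routed to a labelling step. All names (`Model`, `selectForLabelling`) are invented for this sketch.

```scala
// Hypothetical sketch of a proactive-learning loop: the model decides
// which newly acquired records are worth labelling, rather than relying
// on exhaustive up-front manual annotation.
object ProactiveSketch {
  // Toy linear scorer over a single numeric feature.
  final case class Model(weight: Double, bias: Double) {
    def score(x: Double): Double = weight * x + bias
    // Certainty = distance of the score from the 0.5 decision boundary.
    def certainty(x: Double): Double = math.abs(score(x) - 0.5)
  }

  // Return the k records the model is least certain about; in a real
  // pipeline these would be sent for annotation, and the model retrained.
  def selectForLabelling(model: Model, xs: Seq[Double], k: Int): Seq[Double] =
    xs.sortBy(model.certainty).take(k)
}
```

A real pipeline would replace the toy scorer with a trained model and wrap this selection step in a crawl → score → label → retrain loop.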
Team
Peng Cheng (Co-Founder, Lead Committer)
- Specialization: Apache Spark, Algorithms, Computer Vision
- Research Exp.: Human Detection, Kernel Machine Approximation, Stochastic Gradient Descent
- Eng. Exp.: Recommendation Engine, Parallel Data Encryption, SpookyStuff, ISpark
- Alma Mater: University of Wollongong, NSW, Australia. (Master of Computer Science)
- Languages: Mandarin, English
- Blood Type: O Negative
- Interests: Cycling, Power Lifting, Drone Control
Career
Consider yourself a rule breaker? Submit your resume to yo@tribbloids.com
Big Data Engineer
Must have:
- Understanding of algorithms, complexity, scalability, parallel computing, and optimization
- Proficient in the art of persuasion – you’ll use it a lot to get others to adopt your ideas!
- Prefer proof-of-concept, coordination, and convergence of vision over brute force and unproven hypotheses
- Object-oriented programming
- Scala/Java
- Git
- Maven/sbt/Gradle
- Prefer Linux for daily use over Windows: RHEL/Ubuntu preferred; macOS also works if you know it well
Good to have:
- Active committer on an open-source project
- Any machine learning/artificial intelligence R&D experience
- Apache Spark
- Apache Hadoop and its ecosystem
- Functional programming
- Understanding of the HTTP protocol, RESTful APIs, and the JSON format
- Operational experience with Amazon EC2/S3
- Prefer to work on a personal laptop over a dedicated workstation
- Experience in JavaScript (or any of its frameworks) or Python is a plus