Senior Software Engineer – Big Data Infrastructure (OLAP Engine)
Hong Kong · Brisbane, Australia · Melbourne, Australia · Sydney, Australia · Auckland, New Zealand · Wellington, New Zealand · Taipei, Taiwan · Remote
Remote · Full-time · Engineering
Job Description
Binance is a leading global blockchain ecosystem behind the world’s largest cryptocurrency exchange by trading volume and registered users. We are trusted by 300+ million people in 100+ countries for our industry-leading security, user fund transparency, trading engine speed, deep liquidity, and an unmatched portfolio of digital-asset products. Binance offerings range from trading and finance to education, research, payments, institutional services, Web3 features, and more. We leverage the power of digital assets and blockchain to build an inclusive financial ecosystem to advance the freedom of money and improve financial access for people around the world.
We are developing innovative new features, including our next-generation cluster management system, improvements to real-time big data processing, and ways to let customers interact with their data more easily. We're looking for top engineers to build them from the ground up. You will work backwards from customer needs and get to do everything from designing to building large-scale systems and features for the savviest customers in the business.
You will have a chance to work with the open-source community and contribute significant portions of its software to open-source projects including Hadoop, Spark, Hive, Presto, and HBase.
This role is 100% remote, work from home based.
Responsibilities
- Design, develop, and maintain backend services and APIs for global products using Java.
- Build high-performance, highly available, and secure backend systems to provide robust support for big data business operations.
- Engage in and improve the whole service lifecycle, from inception and design through deployment, operation, and refinement.
- Develop and maintain tools, re-designing capacity planning infrastructure for greater scalability.
- Troubleshoot, diagnose, and fix software issues, and ensure data security across the end-to-end system architecture.
Requirements
- Must: current or prior hands-on engagement with, or leadership of, big data projects, such as Apache open-source projects (e.g., Spark, YARN, Hadoop, ZooKeeper, Kafka).
- Minimum of 6 years of experience in software development and architecture.
- Strong expertise in Java/Python and backend frameworks (e.g., Spring Boot, RESTful APIs).
- Experience in high availability, high performance, and resource optimization.
- Experience with databases (SQL/NoSQL), messaging systems (e.g., Kafka, RabbitMQ), and microservices architecture
- In-depth understanding of Linux and computer networks.
- Experience in profiling, benchmarking, and optimizing big data applications.
- Fluency in both English and Mandarin is required, in order to coordinate with overseas partners and stakeholders.
Nice to Have
- An AI-oriented mindset: applying AI to big data work to boost efficiency.