Job Description
Summary
We are one of the fastest-growing companies in the digital asset sector. If you are looking for cutting-edge work where you can develop your career among peers who are experts in their field, and you believe in the future of digital currency, then look no further than Bitget!
What you'll do
- Collect information from social media KOLs (key opinion leaders) and blockchain news aggregation websites.
- Develop and optimize core crawler functionality, including architecture design, system planning, and research into crawling techniques.
- Own crawler data collection, parsing, cleaning, deduplication, distribution, and storage, and continuously improve collection efficiency (a sketch of this kind of pipeline follows this list).
- Bypass anti-crawling mechanisms, optimize crawler routing and scheduling strategies, and design and refine counter-anti-crawling strategies.
- Monitor crawler health and data quality to ensure the crawler's stability, reliability, timeliness, and accuracy.
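
To make the collection and deduplication responsibilities concrete, here is a minimal sketch of a Scrapy spider paired with a content-hash deduplication pipeline. The target URL, CSS selectors, and item fields are hypothetical placeholders, not Bitget's actual sources; a production pipeline would typically back the seen-set with Redis (listed in the requirements below) rather than process memory.

```python
import hashlib

import scrapy
from scrapy.exceptions import DropItem


class KolPostSpider(scrapy.Spider):
    """Collects posts from a (hypothetical) blockchain news aggregation site."""

    name = "kol_posts"
    start_urls = ["https://example-aggregator.invalid/latest"]  # placeholder

    def parse(self, response):
        for post in response.css("article.post"):  # hypothetical selector
            yield {
                "author": post.css(".author::text").get(),
                "text": post.css(".body::text").get(),
                "url": response.urljoin(post.css("a::attr(href)").get()),
            }


class DedupPipeline:
    """Drops items whose content hash has already been seen.

    In-memory for brevity; a production crawler would usually keep the
    seen-set in Redis so it survives restarts and is shared across workers.
    Enable the pipeline via ITEM_PIPELINES in the project settings.
    """

    def __init__(self):
        self.seen = set()

    def process_item(self, item, spider):
        digest = hashlib.sha256((item.get("text") or "").encode("utf-8")).hexdigest()
        if digest in self.seen:
            raise DropItem("duplicate content")
        self.seen.add(digest)
        return item
```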
What you'll need
- Understand the basic concepts of blockchain, be fluent in Python, and be proficient with mainstream open-source crawling tools such as Scrapy, Requests, uiautomator2, and Playwright.
- Proficient in crawling principles and techniques, familiar with crawler design and implementation, with at least 2-3 years of experience in internet data crawling and crawler system development.
- Proficient with MongoDB, Redis, message queues, MySQL, and related technologies, and familiar with web front-end technologies including XHTML/XML/CSS/JavaScript/AJAX.
- Familiar with common anti-crawler mechanisms and proven techniques for bypassing them (see the Playwright sketch after this list).
- Excellent analytical and problem-solving skills, passion for challenging problems, strong teamwork, and a strong sense of responsibility.
- Experience in Web3, experience building crawler platforms, familiarity with Java, and big data processing experience are preferred.
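
As an illustration of the headless-browser skills listed above, here is a minimal Playwright sketch that renders a JavaScript-heavy page with basic anti-bot hygiene (a realistic user agent, a desktop viewport, and an optional proxy). The target URL, proxy address, and selector are hypothetical placeholders.

```python
from playwright.sync_api import sync_playwright

TARGET_URL = "https://example-kol-feed.invalid/posts"  # placeholder

with sync_playwright() as p:
    browser = p.chromium.launch(
        headless=True,
        # proxy={"server": "http://proxy.internal:8080"},  # hypothetical proxy
    )
    # A realistic user agent and viewport reduce trivial bot fingerprinting.
    context = browser.new_context(
        user_agent=(
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36"
        ),
        viewport={"width": 1366, "height": 768},
    )
    page = context.new_page()
    # Wait for network activity to settle so JS-rendered posts are present.
    page.goto(TARGET_URL, wait_until="networkidle")
    for text in page.locator("article.post").all_inner_texts():
        print(text)
    browser.close()
```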
Skills
- Communication Skills
- Development
- Python
- Software Engineering
- Strategic Thinking
- Team Collaboration