Data Engineer: Data Platforms
- Software Engineering
- Entry Level
In this role, you’ll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You’ll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you’ll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Your Role and Responsibilities
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows and implementing solutions that tackle the client's needs.
Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
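As an illustration of the source-to-target ETL work described above, here is a minimal, hypothetical sketch in plain Python. The function names and the toy CSV data are invented for illustration; a production pipeline would typically run on Spark or a similar engine:

```python
import csv
import io

def extract(csv_text):
    """Extract: read raw rows from a CSV source."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize field names and types."""
    return [
        {"customer_id": int(r["id"]), "name": r["name"].strip().title()}
        for r in rows
    ]

def load(rows, target):
    """Load: append transformed rows to the target store."""
    target.extend(rows)
    return target

# Toy source data standing in for a real upstream system.
source = "id,name\n1,  alice \n2,BOB\n"
target = load(transform(extract(source)), [])
# target == [{"customer_id": 1, "name": "Alice"},
#            {"customer_id": 2, "name": "Bob"}]
```

The same extract/transform/load split carries over directly to PySpark, where each stage becomes a DataFrame read, a set of column transformations, and a write to the target table.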
Required Technical and Professional Expertise
- Must have 3-5 years of experience in Big Data: Hadoop, Spark, Scala, and Python
- Experience with HBase and Hive; AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git are good to have
- Experience developing Python and PySpark programs for data analysis, including building a custom framework for generating rules (similar to a rules engine)
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations
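The rules-engine pattern mentioned above can be sketched in plain Python: each rule pairs a matching predicate with an action, and rules are applied to a record in order. The record shape and rule logic here are hypothetical, invented purely for illustration:

```python
def make_rule(predicate, action):
    """Bundle a matching predicate with a transformation action."""
    return {"predicate": predicate, "action": action}

def apply_rules(record, rules):
    """Apply every matching rule to a record, returning a new dict."""
    result = dict(record)
    for rule in rules:
        if rule["predicate"](result):
            result = rule["action"](result)
    return result

# Hypothetical rules: flag large transactions, then route flagged
# records to a review queue.
rules = [
    make_rule(lambda r: r["amount"] > 1000,
              lambda r: {**r, "flagged": True}),
    make_rule(lambda r: r.get("flagged"),
              lambda r: {**r, "review_queue": "high_value"}),
]

record = apply_rules({"amount": 2500}, rules)
# record == {"amount": 2500, "flagged": True, "review_queue": "high_value"}
```

In a PySpark setting, the same predicates and actions would typically be expressed as DataFrame filter and column expressions so the engine can push them down to the cluster.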
Preferred Technical and Professional Expertise
- Understanding of DevOps
- Experience in building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala
Wondering what it's like to be an IBMer?
About IBM
IBM’s greatest invention is the IBMer. We believe that through the application of intelligence, reason and science, we can improve business, society and the human condition, bringing the power of an open hybrid cloud and AI strategy to life for our clients and partners around the world.
Restlessly reinventing since 1911, we are not only one of the largest corporate organizations in the world, we’re also one of the biggest technology and consulting employers, with many of the Fortune 50 companies relying on the IBM Cloud to run their business.
At IBM, we pride ourselves on being an early adopter of artificial intelligence, quantum computing and blockchain. Now it’s time for you to join us on our journey to being a responsible technology innovator and a force for good in the world.