Job Description
Mindteck is seeking a highly skilled and motivated Data Engineer to join our dynamic team in Singapore. In this pivotal role, you will be responsible for designing, building, and maintaining the robust data infrastructure that powers our enterprise analytics and mission-critical applications. You will work at the intersection of data science and software engineering, ensuring that our data pipelines are scalable, reliable, and performant.
As a Data Engineer at Mindteck, you will collaborate with cross-functional teams to translate complex business requirements into high-quality technical solutions. We are looking for a professional who is passionate about data architecture, data quality, and the continuous improvement of our data ecosystem. If you thrive in a fast-paced environment and are eager to leverage modern data technologies to solve real-world problems, we want to hear from you.
Responsibilities
- Design, develop, and maintain scalable data pipelines to ingest, process, and store large volumes of structured and unstructured data.
- Collaborate with data scientists and analysts to optimize data models and schemas for high-performance analytics.
- Ensure the reliability, availability, and security of data infrastructure by implementing monitoring and automated alerting.
- Optimize existing data workflows to reduce latency and improve overall system efficiency.
- Work closely with stakeholders to understand business requirements and deliver actionable data solutions.
- Troubleshoot and resolve complex data-related issues across production environments.
- Maintain comprehensive documentation for data architecture and technical processes.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related quantitative field.
- 3+ years of professional experience in data engineering, software development, or a similar data-centric role.
- Strong proficiency in SQL and in programming languages such as Python or Scala.
- Hands-on experience with cloud-based data warehouses (e.g., AWS Redshift, Google BigQuery, or Azure Synapse).
- Experience with ETL/ELT tools and workflow orchestration platforms (e.g., Apache Airflow, dbt, or Informatica).
- Knowledge of big data technologies such as Spark, Kafka, or Hadoop is highly desirable.
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
- Strong analytical and problem-solving skills, and the ability to work effectively in a contract-based team environment.