5 February 2026
Write clean, maintainable, and efficient code to build and enhance web applications for data products, for both prototype and production-ready purposes, using platforms such as Databricks Apps, Streamlit, and Docker.
Design, implement, and monitor web applications and machine learning model deployments using methods and tools such as CI/CD and MLflow.
Implement scalable ETL/ELT pipelines using Apache Spark, Python, and SQL on Databricks.
Maintain the Medallion Architecture (Bronze, Silver, and Gold layers) for structured and semi-structured data.
Develop and manage Delta Lake tables/views for source ingestion, enterprise view data integration, and support various data access use cases.
Collaborate with data analysts, data scientists, and business stakeholders to deliver clean, reliable datasets.
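To illustrate the Bronze/Silver/Gold layering named above, here is a minimal, hedged sketch in plain Python. It uses in-memory dicts in place of Delta Lake tables, and the field names and cleaning rules (a hypothetical "customer"/"amount" schema) are illustrative assumptions, not a real pipeline.

```python
# Toy Medallion-style flow: Bronze (raw landing) -> Silver (cleaned) -> Gold (aggregated).
# Dicts stand in for Delta tables; the schema below is hypothetical.

def to_bronze(raw_records):
    """Bronze: land raw records as-is, tagging each with its source system."""
    return [{"source": "api", "raw": r} for r in raw_records]

def to_silver(bronze):
    """Silver: parse and clean -- drop rows missing the required 'amount' field."""
    silver = []
    for row in bronze:
        rec = row["raw"]
        if rec.get("amount") is not None:
            silver.append({"customer": rec["customer"], "amount": float(rec["amount"])})
    return silver

def to_gold(silver):
    """Gold: aggregate into a business-ready view (total spend per customer)."""
    totals = {}
    for rec in silver:
        totals[rec["customer"]] = totals.get(rec["customer"], 0.0) + rec["amount"]
    return totals

raw = [
    {"customer": "a", "amount": "10.5"},
    {"customer": "b", "amount": None},   # incomplete record, dropped at Silver
    {"customer": "a", "amount": "4.5"},
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'a': 15.0}
```

On Databricks, each layer would instead be a Delta Lake table written with Spark, but the per-layer responsibilities are the same.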
Bachelor’s degree in Computer Science, Computer Engineering, Information Technology, or a related technical field (candidates with 0-1 years of experience are welcome to apply).
Proficiency in Python, SQL, PySpark, and Streamlit, plus basic machine learning and AI knowledge.
Experience with Databricks, Apache Spark, and Delta Lake.
Understanding of Medallion Architecture and Lakehouse principles.
Familiarity with various RDBMS and cloud platforms.
Experience with CI/CD, Git, workflow orchestration tools on Databricks and a basic understanding of containerization (Docker).
Excellent problem-solving skills and the ability to break down complex technical requirements.
A learner mindset and flexibility.
Strong desire to learn cross-functional skills (Data & DevOps) and to adapt to a fast-paced environment.