Climate change is one of the most pressing challenges of our time. I used the techniques and tools learned in the course to build an end-to-end data pipeline and a CO2 dashboard.
I had already completed an analytics engineering bootcamp on Udemy, so data modelling and dbt were not new to me. In week 4, the most interesting takeaways for me were the use of dbt Cloud and software engineering best practices such as modularity, portability, CI/CD, and documentation. A great course as usual from dezoomcamp!
In week 3 we learned about data warehousing and Google BigQuery. Best practices such as creating external tables and partitioning and clustering tables were introduced. We can even do machine learning directly in BigQuery.
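To make the partitioning and clustering idea concrete, here is a minimal sketch that assembles a BigQuery DDL statement as a string so the relevant clauses are visible. The dataset, table, and column names (`measurement_date`, `country_code`) are hypothetical; actually running the statement would require the BigQuery client and credentials, which are out of scope here.

```python
def partitioned_table_ddl(dataset: str, table: str, source: str) -> str:
    """Build a hypothetical BigQuery DDL statement with partitioning and clustering.

    Partitioning by a date column prunes scanned data for time-range queries;
    clustering by a frequently filtered column (here a made-up country_code)
    co-locates related rows within each partition.
    """
    return (
        f"CREATE TABLE IF NOT EXISTS `{dataset}.{table}`\n"
        "PARTITION BY DATE(measurement_date)\n"
        "CLUSTER BY country_code\n"
        f"AS SELECT * FROM `{dataset}.{source}`;"
    )

# Example: derive a partitioned, clustered table from a raw staging table.
ddl = partitioned_table_ddl("co2_dataset", "emissions_partitioned", "emissions_raw")
print(ddl)
```

Keeping the DDL in a small helper like this makes it easy to templatize across tables, which is also roughly how Terraform or dbt would parameterize the same statement.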
In week 2 we learned Prefect to orchestrate our data flows. I like Prefect very much because it lets you use Python to define tasks and flows. A very interesting topic to learn!
During week 1, we learned how to build a data pipeline and retrieve and ingest data using Docker, Postgres, Docker Compose, Terraform, and Google Cloud.
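The week-1 ingest step (loading a CSV into Postgres) can be sketched roughly as below. To keep the example self-contained, it uses an in-memory SQLite engine in place of the Dockerized Postgres, and the CSV columns are made up for the CO2 theme; in the course the connection string would point at the Postgres container instead.

```python
import io

import pandas as pd
from sqlalchemy import create_engine, text

# Tiny inline stand-in for a downloaded CSV file; a real pipeline would
# stream a large file in chunks.
csv_data = io.StringIO("country,year,co2\nUSA,2020,4713\nIND,2020,2442\n")

# In the course this would be a Postgres URL such as
# postgresql://user:pass@localhost:5432/db, served by docker-compose;
# SQLite keeps this sketch runnable anywhere.
engine = create_engine("sqlite://")

# Read the CSV and write it to a table, replacing any prior contents.
df = pd.read_csv(csv_data)
df.to_sql("emissions", engine, if_exists="replace", index=False)

# Quick sanity check that the rows landed.
with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM emissions")).scalar()
```

Swapping the SQLite URL for the Postgres one is the only change needed to target the containerized database, which is what makes the pandas/SQLAlchemy combination convenient for this kind of pipeline.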