Write less terrible code with Jupyter Notebook
GoDataDriven open source contribution: July 2018 edition
Dynniq presentation video at AI Expo Europe 2018
Working with multiple partition formats within a Hive table with Spark
A single data source whose partitions are stored in different file formats (for example, older partitions in Avro and newer ones in Parquet) is a problem we often encounter. We can create a partitioned table on top of this data, and Hive lets you alter the file format of an individual partition, so all the data can be accessed through one table. We discovered that Spark doesn't support this functionality yet, so we started investigating how we could add it.
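As a minimal sketch of the Hive side, the statements below create a partitioned table with a default storage format and then override the format of one partition. The table name, columns, partition values, and location are hypothetical, for illustration only:

```sql
-- Table whose default storage format is Parquet
-- (names and location are made up for this example).
CREATE EXTERNAL TABLE events (id BIGINT, payload STRING)
PARTITIONED BY (dt STRING)
STORED AS PARQUET
LOCATION '/data/events';

-- An older partition was written as Avro;
-- tell Hive to read just that partition with the Avro SerDe.
ALTER TABLE events PARTITION (dt='2018-01-01') SET FILEFORMAT AVRO;
```

After the `ALTER TABLE ... SET FILEFORMAT`, Hive resolves the reader per partition, which is exactly the behavior the post describes Spark as lacking.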