GoDataDriven Blog!

Welcome to the GoDataDriven Blog.

This is the place where we share our knowledge and opinions. We will try to post new content regularly.
Enjoy! The GoDataDriven team.

How to Find Blockchain Use Cases: Part I

17 Sep

This three-part series explores ways to find blockchain use cases, beginning with the problems the technology can solve. In this first article, we describe how to identify relevant use cases.

Read more...


Opening up some training material

05 Sep

What are our trainings like? People interested in the program often have to "just" believe us. Now they have to take our word for it a little less, as we're opening up some chapters from our curriculum!

Read more...


GoDataDriven open source contribution: August 2018 edition

05 Sep

Welcome to Open Source at GoDataDriven, August 2018 edition, otherwise known as the Fokko edition.

Read more...


Python Masterclass with Restart Network

21 Aug

A Python Masterclass given by Rodrigo in collaboration with Restart Network.

Read more...


EuroPython 2018

15 Aug

I don't have to tell you that Edinburgh is all about history. In contrast, between the 23rd and 29th of July, more than 1300 Python enthusiasts gathered there to talk about the future.

Read more...


Write less terrible code with Jupyter Notebook

05 Aug

How can you quickly go from prototype to production code using Jupyter Notebooks?

Read more...


GoDataDriven open source contribution: July 2018 edition

01 Aug

Welcome to Open Source at GoDataDriven, July 2018 edition.

Read more...


Dynniq presentation video at AI Expo Europe 2018

31 Jul

Dynniq was invited to present its AI use cases at AI Expo Europe 2018 in the RAI Amsterdam. Watch the recording of the presentation here.

Read more...


Working with multiple partition formats within a Hive table with Spark

31 Jul

Having different file formats (Avro and Parquet) for the same data source is a problem we often encounter. We can create a partitioned table on top of this data; Hive lets you alter the file format of a given partition, so the data can be accessed through a single table. We discovered that Spark doesn't support this functionality yet, so we started investigating how to add it. A minimal sketch of the setup is included below.

Read more...
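As context for the teaser above, here is a minimal sketch of the mixed-format setup it describes. The table and partition names (events, dt) are illustrative assumptions, not taken from the post, and the sketch assumes a Spark installation configured with Hive support.

```python
from pyspark.sql import SparkSession

# Spark session backed by the Hive metastore, so table and partition
# metadata are shared with Hive.
spark = (
    SparkSession.builder
    .appName("mixed-format-partitions")
    .enableHiveSupport()
    .getOrCreate()
)

# A partitioned table whose default storage format is Parquet.
spark.sql("""
    CREATE TABLE IF NOT EXISTS events (payload STRING)
    PARTITIONED BY (dt STRING)
    STORED AS PARQUET
""")

# In Hive (e.g. via beeline) a single partition can then be switched to Avro,
# which is what lets one table span both file formats:
#   ALTER TABLE events PARTITION (dt='2018-07-01') SET FILEFORMAT AVRO;
# Reading such a mixed-format table from Spark is the gap the post investigates.

# Going through the table name lets the metastore resolve each partition's
# format, instead of pointing Spark at the raw files directly.
spark.table("events").where("dt >= '2018-07-01'").show()
```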


Handling encoding issues with Unicode normalisation in Python

28 Jul

When reading from and writing to various systems, it is not uncommon to run into encoding issues when those systems use different locales. In this post I show several options for handling such issues; a small illustration follows below.

Read more...
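As a small illustration of the topic above, here is a minimal sketch (not the post's own code) using Python's standard unicodedata module, one common way to approach such normalisation.

```python
import unicodedata

# "é" can arrive precomposed (U+00E9) or as "e" plus a combining acute accent
# (U+0301); the two strings look identical but compare unequal.
precomposed = "caf\u00e9"
decomposed = "cafe\u0301"
print(precomposed == decomposed)   # False

# Normalising both to the same form (NFC composes, NFD decomposes)
# makes the comparison behave as expected.
print(
    unicodedata.normalize("NFC", precomposed)
    == unicodedata.normalize("NFC", decomposed)
)                                  # True

# NFKD plus an ASCII encode/ignore round-trip strips accents entirely,
# which helps when a downstream system only accepts ASCII.
ascii_only = (
    unicodedata.normalize("NFKD", precomposed)
    .encode("ascii", "ignore")
    .decode("ascii")
)
print(ascii_only)                  # cafe
```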