The Exasol Xperience event was the first of its kind, bringing together more than 100 attendees from 16 countries. It provided invaluable networking opportunities and insights into users’ experience of Exasol and the challenges they solve with it daily. You can see some great pictures and tweets of the event here.
The first day of the Exasol Xperience was full of highlights, but there was one in particular I had been waiting months for: the release details of Version 6.
Dr Jens Graupmann first recapped Exasol’s history with us, and here are some of the points of interest I noted before we dive into what Version 6 will contain!
- 2011 – V4 Exasol releases its first TPC-H benchmark results, showing its blistering performance
- 2012 – V4.1 UDF functionality for R, Python and Lua is introduced
- 2013 – V4.2 Enterprise readiness, including resource management and connectivity
- 2014 – V5 Improvements and features, plus new TPC-H results released (which are still unbeaten)
- 2016 (~Q3) – V6 ….
On to the long-awaited Version 6!
Version 6 is packed with new features, so here is just a handful of the new features and improvements, with more to follow:
External data sources can be connected via Virtual Schemas. Using the external schema’s metadata, tables can be referenced “locally” within Exasol, which supports joins between “internal” and “external” tables.
Any query using a virtual schema object will be forwarded to the connected data source.
Using Virtual Schemas should provide agile access to current data, reduce redundancy, reduce the need for complex ETL, and cut wasted disk space.
As this is the first iteration of Virtual Schemas, an improvement to the technology has already been promised: an intelligent cache.
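To make the idea concrete, here is a rough analogy in plain Python using SQLite (not Exasol syntax, and the table names are invented for illustration): SQLite’s `ATTACH` makes another database’s tables referenceable “locally”, much like a Virtual Schema exposes a remote source’s tables for internal-to-external joins.

```python
import sqlite3

# "Local" database with an internal table.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
local.execute("INSERT INTO orders VALUES (1, 10), (2, 20)")

# Pretend the attached database is the external data source; after ATTACH,
# its tables can be referenced as if they were local.
local.execute("ATTACH DATABASE ':memory:' AS remote")
local.execute("CREATE TABLE remote.customers (id INTEGER, name TEXT)")
local.execute("INSERT INTO remote.customers VALUES (10, 'Ada'), (20, 'Grace')")

# A single query joins the internal table against the "external" one.
rows = local.execute(
    "SELECT o.id, c.name FROM orders o "
    "JOIN remote.customers c ON o.customer_id = c.id "
    "ORDER BY o.id"
).fetchall()
print(rows)  # [(1, 'Ada'), (2, 'Grace')]
```

In Exasol the query against the virtual table is pushed down to the connected source rather than copied in, but the user-facing effect is similar: external data participates in queries as if it lived locally.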
The improved import will provide a common framework for data imports. This framework will be available on GitHub for the Exasol user community to use and expand upon.
Version 6 will include as standard:
- A generic JDBC adapter
- Native Hadoop adapter, supporting all HDFS formats (by using native HCatalog)
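The value of a common framework is that every source speaks the same adapter interface. The sketch below is a minimal Python illustration of that design, with entirely hypothetical names (`ImportAdapter`, `CsvAdapter`, `run_import`), not Exasol’s actual API: the framework code only depends on the interface, so a JDBC adapter, a Hadoop adapter, or a community-written one would all plug in the same way.

```python
from abc import ABC, abstractmethod
from typing import Iterable, List, Tuple

class ImportAdapter(ABC):
    """Hypothetical adapter interface: each source implements fetch_rows()."""

    @abstractmethod
    def fetch_rows(self) -> Iterable[Tuple]:
        ...

class CsvAdapter(ImportAdapter):
    """Toy adapter that parses comma-separated text into row tuples."""

    def __init__(self, text: str):
        self.text = text

    def fetch_rows(self) -> Iterable[Tuple]:
        for line in self.text.strip().splitlines():
            yield tuple(line.split(","))

def run_import(adapter: ImportAdapter) -> List[Tuple]:
    # The framework only talks to the adapter interface, never to a
    # concrete source, which is what makes it open to community extension.
    return list(adapter.fetch_rows())

print(run_import(CsvAdapter("1,alice\n2,bob")))  # [('1', 'alice'), ('2', 'bob')]
```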
From V6, you can integrate ANY analytical programming language. UDFs can be written in any language, as long as that language has been encapsulated into an isolated container. This will also enable a developer to use different versions of the same language simultaneously.
Again, this framework will be available for download on GitHub.
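For a feel of what a scalar UDF body looks like, here is a rough Python sketch: Exasol calls a `run(ctx)` function once per row, with input columns exposed as attributes on the context. The `MockContext` class here is our own stand-in so the snippet runs outside the database; it is not part of Exasol.

```python
class MockContext:
    """Stand-in for the context object the engine passes to a UDF."""

    def __init__(self, x):
        self.x = x  # one input column, named x for illustration

def run(ctx):
    # Toy scalar UDF: square the input column.
    return ctx.x * ctx.x

# Simulate the engine invoking the UDF once per row.
print([run(MockContext(v)) for v in (1, 2, 3)])  # [1, 4, 9]
```

With the container approach, the same `run`-style entry point could be written in any packaged language and version, which is what makes the framework worth extending on GitHub.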