Our slogan is “Because your data makes sense”.
To make that true, we designed a proven Reference Architecture that can be implemented at the customer's own pace and supports a multi-speed implementation.

Access Layer

The Access Layer is the central analytical workspace where data is explored and analysed with reporting, scorecard and dashboard tools. Increasingly, applications also query the Enterprise Data Warehouse directly to explore the data.

Presentation Layer

The Presentation Layer is where common business logic (calculations, derived values, time series) is applied. Star Schema modelling is preferred here.
If your database system allows it, this layer can be virtualized instead of being persisted.

By using Pivotal Greenplum, the Presentation Layer can be virtualized, reducing the overall project cost by 15%.
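As an illustration, here is a minimal sketch of what a virtualized Presentation Layer object could look like: a star-schema fact exposed as a view over the Foundation Layer instead of a persisted table. The schema, table and column names are hypothetical, and the snippet assumes a PostgreSQL-compatible target such as Greenplum reachable through psycopg2.

```python
# Minimal sketch: a Presentation Layer fact exposed as a virtual view over the
# Foundation Layer rather than a persisted table. All object names are illustrative.
import psycopg2  # assumes a PostgreSQL-compatible target such as Greenplum

FACT_SALES_VIEW = """
CREATE OR REPLACE VIEW presentation.fact_sales AS
SELECT
    s.sale_date              AS date_key,
    s.customer_hk            AS customer_key,
    s.amount,
    s.amount * 0.21          AS vat_amount,    -- derived value (common business logic)
    SUM(s.amount) OVER (
        PARTITION BY s.customer_hk
        ORDER BY s.sale_date
    )                        AS running_total  -- simple time-series calculation
FROM foundation.sat_sale s;
"""

def deploy_presentation_views(dsn: str) -> None:
    """Create or refresh the virtual Presentation Layer views."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(FACT_SALES_VIEW)

if __name__ == "__main__":
    deploy_presentation_views("dbname=edw user=etl host=greenplum-master")
```

Because the view is computed at query time, refreshing the business logic only means redeploying the view definition; no data has to be reloaded.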

Foundation Layer

The Foundation Layer is where data from the different source systems is integrated; it is designed according to the principles of Data Vault 2.0.

It consists of:

  • a Raw Data Vault area, which is source-driven and generated by our Data Warehouse Automation tool (Foundation Accelerator 4.0). The data is 100% traceable to its origin and fully represents the source data at any given point in time.
  • a Business Data Vault area, which is business-driven and implemented with manually built ELT.

The Foundation Layer reflects the single version of the facts and allows you to implement multiple versions of the truth in the Presentation Layer. It keeps all required history of change at the atomic level.
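To make the Data Vault 2.0 pattern behind the Foundation Layer a bit more concrete, below is a minimal sketch of how a Raw Data Vault hub and satellite row could be derived from a source record: a hash key computed from the business key, a load timestamp and a record source keep every row traceable to its origin, while a hash diff tracks change at the atomic level. Entity and column names are illustrative, not taken from the actual model.

```python
# Sketch of the Data Vault 2.0 loading pattern used in a Raw Data Vault:
# hash key from the business key, load timestamp and record source on every row.
import hashlib
from datetime import datetime, timezone

def hash_key(*business_key_parts: str) -> str:
    """Deterministic hub/link hash key built from one or more business key parts."""
    normalized = "||".join(str(p).strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def hub_customer_row(customer_no: str, record_source: str) -> dict:
    """Illustrative HUB_CUSTOMER row: hash key, business key and load metadata."""
    return {
        "customer_hk": hash_key(customer_no),
        "customer_no": customer_no,
        "load_dts": datetime.now(timezone.utc),
        "record_source": record_source,
    }

def sat_customer_row(customer_no: str, attributes: dict, record_source: str) -> dict:
    """Illustrative SAT_CUSTOMER row: the hash diff detects change at the atomic level."""
    return {
        "customer_hk": hash_key(customer_no),
        "hash_diff": hash_key(*[str(v) for v in attributes.values()]),
        "load_dts": datetime.now(timezone.utc),
        "record_source": record_source,
        **attributes,
    }

if __name__ == "__main__":
    print(hub_customer_row("C-1001", "CRM"))
    print(sat_customer_row("C-1001", {"name": "Acme NV", "city": "Ghent"}, "CRM"))
```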

Source Layer

This is where the enterprise application databases and the structured & unstructured data are stored. In most cases no historical data is kept in this layer, which is why we record all history in the Data Vault Foundation Layer.
This layer can also contain a Data Lake with the following characteristics:

  • High Volume
  • High Velocity
  • Variety
  • Veracity

The data in the Data Lake is stored on a Hadoop file system or in a NoSQL database. Big Data SQL is used to get optimized access to this data and to combine it in real time with structured data.
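As a sketch of how Data Lake content can be combined with structured warehouse data through a single SQL interface, the snippet below uses PySpark as a stand-in for whichever SQL-on-Hadoop engine is in place; the paths, table names and credentials are hypothetical.

```python
# Sketch: query semi-structured Data Lake files together with structured
# warehouse data in one SQL statement. Paths, tables and credentials are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-plus-warehouse").getOrCreate()

# Semi-structured clickstream events stored in the Data Lake (volume, velocity, variety)
clicks = spark.read.json("hdfs:///datalake/clickstream/2024/")
clicks.createOrReplaceTempView("clickstream")

# Structured customer data from the relational warehouse, accessed over JDBC
customers = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://edw-host:5432/edw")
             .option("dbtable", "foundation.sat_customer")
             .option("user", "etl")
             .option("password", "***")
             .load())
customers.createOrReplaceTempView("customer")

# Combine both worlds in a single SQL statement
spark.sql("""
    SELECT c.customer_no, COUNT(*) AS page_views
    FROM clickstream e
    JOIN customer   c ON c.customer_no = e.customer_no
    GROUP BY c.customer_no
""").show()
```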

Data Lab

The goals of the Data Lab / Analytical Area / Sandbox are:

  • To reduce shadow IT: end users building their own Excel or MS Access based reporting.
  • To support Data Discovery exercises on structured or unstructured data.
  • To support Data Mining exercises.

Data inside the Sandbox can combine data from the data warehouse, a Big Data “Data Pool” and operational data.
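A minimal sketch of such a Sandbox exercise, assuming Python with pandas is available in the Data Lab; the connection string, file paths and column names are illustrative only.

```python
# Sketch: a Data Lab exercise combining warehouse data, Data Lake files and an
# operational extract into one sandbox dataset. All names and paths are illustrative.
import pandas as pd
from sqlalchemy import create_engine

# 1. Curated data from the Enterprise Data Warehouse
edw = create_engine("postgresql://analyst:***@edw-host/edw")
sales = pd.read_sql(
    "SELECT customer_no, amount, sale_date FROM presentation.fact_sales", edw)

# 2. Raw events from the Big Data "Data Pool"
events = pd.read_parquet("hdfs:///datalake/web_events/2024/")
event_counts = events.groupby("customer_no").size().reset_index(name="events")

# 3. A fresh operational extract (e.g. today's CRM dump)
crm = pd.read_csv("/sandbox/incoming/crm_customers.csv")

# Combine the three sources for a discovery or data-mining exercise
sandbox = (sales.merge(crm, on="customer_no", how="left")
                .merge(event_counts, on="customer_no", how="left"))
print(sandbox.head())
```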

Data Integration

Our Data Integration consultants are specialised in Talend Data Integration. Talend offers robust enterprise data integration software in an open, scalable architecture to respond faster to business requests. Talend provides unified tools to develop and deploy data integration jobs 10 times faster than hand coding, at 1/5th the cost of competitors.

Master Data Management

Talend Master Data Management (MDM) tools unify all data—from customers to products to suppliers and beyond—into a single, actionable “version of the truth.” Turn your master data into business value with one solution.

Meta Data Management

Accelerate time to compliance and improve data accessibility with detailed information about all of your metadata. Talend Metadata Manager is a metadata management tool that connects data from platforms, databases, and analytics tools to generate a holistic view of the information supply chain in a language that everyone can understand.

Open Source Business Intelligence

In general, Open Source software gets closest to what users want.

Data Vault 2.0 Experts

Our company consists of Data Vault 2.0 certified specialists.

Data Warehouse Automation

Data warehouse automation will reduce the overall project cost by 40%.

Reference Architecture

The DataSense approach is based on a proven Reference Architecture.
