
Data Lake Creation

Qlik Compose For Data Lakes

Your fastest way to analytics-ready data lakes

 

Automate Analytics-Ready Data Pipelines

Qlik Compose for Data Lakes (formerly Attunity Compose) automates your data pipelines to create analytics-ready data sets. By automating data ingestion, schema creation, and continual updates, organizations realize faster time-to-value from their existing data lake investments.

 

Universal Data Ingestion

Supporting one of the broadest ranges of data sources, Qlik Compose for Data Lakes ingests data into your data lake whether it’s on-premises, in the cloud, or in a hybrid environment. Sources include:

  • RDBMS: DB2, MySQL, Oracle, PostgreSQL, SQL Server, Sybase 
  • Data warehouses: Exadata, IBM Netezza, Pivotal, Teradata, Vertica
  • Hadoop: Apache Hadoop, Cloudera, Hortonworks, MapR
  • Cloud: Amazon Web Services, Microsoft Azure, Google Cloud
  • Messaging systems: Apache Kafka
  • Enterprise applications: SAP
  • Legacy systems: DB2 z/OS, IMS/DB, RMS, VSAM

 

Easy Data Structuring And Transformation

An intuitive and guided user interface helps you build, model and execute data lake pipelines.

  • Automatically generate schemas and Hive Catalog structures for operational data stores (ODS) and historical data stores (HDS) without manual coding, as sketched below.
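To make this concrete, here is a minimal sketch of the kind of Hive DDL such automation produces. The database, table, and column names are hypothetical, and the structures Compose actually generates depend on your source systems:

    -- Hypothetical ODS table of the kind generated automatically.
    -- ACID properties (ORC + transactional) allow in-place updates via MERGE.
    CREATE TABLE ods.customers (
      customer_id BIGINT,
      name        STRING,
      city        STRING
    )
    CLUSTERED BY (customer_id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional' = 'true');

    -- Hypothetical companion HDS table: every version of every record is
    -- kept, bracketed by validity timestamps (see Historical Data Store below).
    CREATE TABLE hds.customers_history (
      customer_id   BIGINT,
      name          STRING,
      city          STRING,
      effective_ts  TIMESTAMP,   -- when this version became current
      expiration_ts TIMESTAMP    -- NULL while this version is still current
    )
    STORED AS ORC;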

 

Continuous Updates

Be confident that your ODS and HDS accurately represent your source systems.

  • Use change data capture (CDC) to enable real-time analytics with less administrative and processing overhead.
  • Efficiently process initial loading with parallel threading.
  • Leverage time-based partitioning with transactional consistency to ensure that only transactions completed within a specified time window are processed (see the example after this list).
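As an illustration of time-based partitioning, a consumer can restrict reads to change partitions whose time window has already closed, so in-flight transactions are never visible. The table and partition column here are hypothetical:

    -- Read only change partitions whose time window has fully closed.
    -- Rows landing in the still-open window stay invisible until it closes.
    SELECT order_id, status, amount
    FROM ods.orders
    WHERE change_window <= '2019-06-30-23';   -- last completed window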

 

Leverage The Latest Technology

Take advantage of Hive SQL and Apache Spark advancements including:

  • The ACID MERGE operation, which efficiently processes data insertions, updates, and deletions in a single pass while ensuring data integrity (see the example after this list).
  • Pushdown processing: automatically generated transformation logic is pushed down to the Hadoop or Spark engine for execution as data flows through the pipeline.
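For example, a single Hive ACID MERGE statement of the following shape can apply a batch of captured changes to an ODS table in one pass. The staging table and the 'op' change-marker column are hypothetical illustrations, not the product's actual generated code:

    -- Apply a batch of CDC changes (inserts, updates, deletes) atomically.
    -- 'op' marks each change row: I = insert, U = update, D = delete.
    MERGE INTO ods.customers AS t
    USING staging.customer_changes AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED AND s.op = 'D' THEN DELETE
    WHEN MATCHED THEN UPDATE SET name = s.name, city = s.city
    WHEN NOT MATCHED AND s.op <> 'D' THEN
      INSERT VALUES (s.customer_id, s.name, s.city);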

 

Historical Data Store

Derive analytics-specific data sets from a full historical data store (HDS).

  • New rows are automatically appended to HDS as data updates arrive from source systems.
  • New HDS records are automatically time-stamped, enabling the creation of trend analysis and other time-oriented analytic data marts.
  • Supports data models that include Type 2 slowly changing dimensions (see the sketch below).
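Under that convention, a current-state view falls out of the HDS with a simple filter. This sketch assumes the hypothetical hds.customers_history table above, where an open-ended expiration_ts marks the current version of each record:

    -- Current-state view derived from the full history:
    -- a row is current while its validity interval is still open.
    CREATE VIEW analytics.customers_current AS
    SELECT customer_id, name, city
    FROM hds.customers_history
    WHERE expiration_ts IS NULL;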

 
