Kyligence Certified Master (KCM)

Training Introduction

Trainees will acquire advanced skills such as operating and tuning Kyligence products in real production scenarios. Through interactive lectures and extensive hands-on practice, successful trainees will be able to make full use of product features, plan and deploy Kyligence Enterprise architectures and solutions for a variety of business scenarios, and troubleshoot problems quickly.

Target Audiences and Pre-requisites

  • Data analysts, data miners, and business intelligence (BI), data warehouse (DW), and big data practitioners across industries.
  • Teachers and students from universities or research institutions who are interested in big data analysis or data mining and its practice.
  • Trainees should have 1–2 years of experience with DW or OLAP and with Hadoop technologies; familiarity with the principles of Kyligence Enterprise and a Hadoop-related certification are preferred.

Training Duration: 4 Days

Certificate

Kyligence Certified Master (KCM)

Curriculum

1. Kyligence Products Overview
  • Positioning & Features of Kyligence Enterprise
  • Principle & Architecture
  • New Functions
2. Kyligence Enterprise Deployment
  • Environment & Cluster Deployment
  • Configurations & Override
  • Enable High Availability (HA) & Load Balancing (LB)
  • Use Kyligence Cloud for Cloud Deployment
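The HA & LB topic above is typically realized by fronting several Kyligence Enterprise nodes with a load balancer. A minimal Nginx sketch is shown below; the host names and the port are illustrative assumptions, not values from this course:

```nginx
upstream kyligence {
    # Sticky sessions keep a user's requests on one node.
    ip_hash;
    # Two Kyligence Enterprise nodes -- host names are placeholders.
    server ke-node1.example.com:7070;
    server ke-node2.example.com:7070;
}

server {
    listen 80;
    location / {
        # Forward all traffic to the node pool defined above.
        proxy_pass http://kyligence;
    }
}
```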
3. Dimensional Modeling & OLAP
  • OLAP Fundamentals
  • Data Source Preparation & Import
  • Table Sampling & Cardinality
4. Model & Cube Design
  • Model Design
  • Computed Column
  • Dimensions & Aggregation Groups
  • Auto Dimension Optimization
  • Use of Measures
  • Table Index
5. Cube Building
  • Full Build & Incremental Build
  • Cube Building Process
  • Cube & Segment Management
  • Cube Storage
6. SQL Query
  • Query Engines & Auto Routing
  • SQL Execution Process
  • Cube Hit
  • Query Pushdown
7. Data Visualization
  • ODBC/JDBC Driver
  • Integrate with BI Tools
8. Optimization for Cube Design & Querying
  • Lookup Table Snapshot & Derived Dimension
  • High Cardinality Dimension
  • Rowkey Setting
  • Shard Strategy
  • Cache Mechanism
  • Use Kyligence Robot for Tuning
9. Optimization for Key Configurations
  • Product Related
  • Hadoop Related
10. Use Spark Engine
  • Use Spark for Cube Building
  • Use Spark for Querying
  • Key Configurations for Spark & Dynamic Allocation
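Dynamic allocation for the Spark build engine is usually enabled through configuration overrides. The keys below follow the `kylin.engine.spark-conf.*` pass-through convention of Apache Kylin, on which Kyligence Enterprise is based; the exact key names and suitable values should be checked against the product documentation:

```properties
## Pass Spark settings through to the build engine
## (values here are examples, not recommendations).
kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=20
## The external shuffle service is required for dynamic allocation.
kylin.engine.spark-conf.spark.shuffle.service.enabled=true
```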
11. Product Security Control
  • Users & Authentication
  • Project Access Control
  • Data Access Control
  • Integrate with LDAP
  • Integrate with Kerberos
12. Product Advanced Operation, Upgrade & Migration
  • Job Status & Tracking
  • Log Analysis & Diagnosis
  • Metadata Management & Disaster Recovery
  • Garbage Cleanup
  • Hadoop Components Status
  • Troubleshooting (Knowledge Base)
  • Product Upgrade
  • Cluster Migration
13. Use REST API
  • Access & Authentication
  • REST API for Major Functions
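Access to the REST API is commonly authenticated with HTTP Basic credentials. The sketch below, using only the Python standard library, shows the general shape of an authenticated query call; the host, port, endpoint path, project, and credentials are illustrative assumptions, so consult the product manual for the actual API reference:

```python
import base64
import json
import urllib.request


def build_basic_auth(user: str, password: str) -> str:
    """Return the value of an HTTP Basic 'Authorization' header."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"


def run_query(host: str, sql: str, project: str, user: str, password: str) -> dict:
    """POST a SQL statement to the query endpoint and return the JSON reply."""
    req = urllib.request.Request(
        f"{host}/kylin/api/query",  # endpoint path is an assumption
        data=json.dumps({"sql": sql, "project": project}).encode("utf-8"),
        headers={
            "Authorization": build_basic_auth(user, password),
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (assumes a server is reachable at this address):
# result = run_query("http://localhost:7070",
#                    "SELECT COUNT(*) FROM KYLIN_SALES",
#                    "learn_kylin", "ADMIN", "KYLIN")
```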
14. Advanced Features
  • Auto Modeling
  • R/W Separation Deployment
  • Streaming Cube (Kafka)
15. Typical Scenarios & Best Practices
  • Business Scenarios & Case Studies
  • Multiple Fact Tables
  • Slowly Changing Dimensions (SCD)
  • Segments Merge Strategy
  • Optimization for Count Distinct (Precise)
Lab Practices:
  • Deploy & Use Kyligence Enterprise for Empowering Data Analysis
  • Metadata Backup & Recovery
  • Cube Design Best Practice
  • Real-time Data Analysis & Streaming Cube Building