Artificial Intelligence Training Courses

Client Testimonials

Business Rule Management (BRMS) with Drools Training Course

I appreciate the effort made by NobleProg and the trainer in particular to hold this course. Bernard not only described the features of the product, he also helped me understand how it fits with my project.

Fernando Orus - InSynergy Consulting SA

Managing Business Logic with Drools

A very good overview of Drools with some deep dives in the code and practicals.

Patrick Phelan - Sun Life Financial

Introduction to Drools 6

I liked the logic exercises (writing rules conditions) on the 2nd day.

Jan Janke - CERN

Artificial Intelligence Course Outlines

ID Name Duration Overview
39656 WildFly Server Administration 14 hours
This course is created for administrators, developers, and anyone else interested in managing WildFly Application Server (formerly JBoss Application Server). The course usually runs on the newest version of the application server, but it can be tailored (as a private course) to older versions, starting from version 5.1.
Module 1: Installing Core Components - Installing the Java environment; installing JBoss AS; application server features; creating a custom server configuration
Module 2: Customizing JBoss AS Services - How to monitor JBoss AS services; the JBoss AS thread pool; configuring logging services; configuring the connection to the database; configuring the transaction service
Module 3: Deploying EJB 3 Session Beans - Developing Enterprise JavaBeans; configuring the EJB container
Module 4: Deploying a Web Application - Developing the web layout; configuring JBoss Web Server
Module 5: Deploying Applications with JBoss Messaging Service - The new JBoss Messaging system; developing JMS applications; advanced JBoss Messaging
Module 6: Managing JBoss AS - Introducing Java Management Extensions (JMX); the JBoss AS Administration Console; managing applications; administering resources
116139 Programming with Big Data in R 21 hours
Introduction to Programming Big Data with R (pbdR) - Setting up your environment to use pbdR; scope and tools available in pbdR; packages commonly used with Big Data alongside pbdR
Message Passing Interface (MPI) - Using pbdR MPI 5; parallel processing; point-to-point communication; sending matrices; summing matrices; collective communication; summing matrices with Reduce; Scatter/Gather; other MPI communications
Distributed Matrices - Creating a distributed diagonal matrix; SVD of a distributed matrix; building a distributed matrix in parallel
Statistics Applications - Monte Carlo integration; reading datasets; reading on all processes; broadcasting from one process; reading partitioned data; distributed regression; distributed bootstrap
182842 Drools 6 and DSL for Business Analysts 21 hours
This 3-day course introduces Drools 6 to business analysts responsible for writing tests and rules. The course focuses on creating pure logic; after it, analysts can write tests and logic which developers can then integrate with business applications.
Short introduction to rule engines - A short history of Expert Systems and rule engines; What is Artificial Intelligence?; forward vs backward chaining; declarative vs procedural/OOP; comparison of solutions; when to use rule engines; when not to use rule engines; alternatives to rule engines
KIE - Declarative vs traditional fact model; executing simple rules with simple tests; authoring assets; decision tables; rule templates; guided rule editor; testing, limits and benefits
Developing a simple process with rules - Writing rules in Eclipse; stateless vs stateful sessions; selecting proper facts; basic operators and Drools-specific operators; basic accumulate functions (sum, max, etc.); intermediate calculations; inserting new facts; exercises (lots of them)
Ordering rules with BPMN - Salience; Ruleflow vs BPMN 2.0; executing a ruleset from a process; rules vs gateways; short overview of BPMN 2.0 features (transactions, exception handling)
Comprehensive declarative business logic in Drools - Domain Specific Languages (DSL); creating new languages; preparing a DSL to be used by managers; basic Natural Language Processing (NLP) with DSL; strategies for writing a DSL from rules; strategies for writing rules from a DSL written by analysts
Unit testing - Test strategies (test per case or per rule); executing tests automatically
211265 Using Computational Network Toolkit (CNTK) 28 hours
The Computational Network Toolkit (CNTK) is Microsoft's open-source, multi-machine, multi-GPU, highly efficient machine learning framework for training networks (including RNNs) on speech, text, and image data. Audience: this course is directed at engineers and architects aiming to use CNTK in their projects. (A minimal code sketch follows this outline.)
Getting started - Setting up CNTK on your machine; enabling 1-bit SGD; developing and testing; CNTK production test configurations; how to contribute to CNTK
Tutorials and overview - Tutorial; Tutorial II; CNTK usage overview; examples; presentations; multiple GPUs and machines
Configuring CNTK - Config file overview; Simple Network Builder; BrainScript Network Builder; SGD block; reader block; train, test, eval; top-level configurations
Describing networks - Basic concepts; expressions; defining functions; full function reference
Data readers - Text Format Reader; CNTK Text Format Reader; UCI Fast Reader (deprecated); HTKMLF Reader; LM sequence reader; LU sequence reader; image reader
Evaluating CNTK models - Overview; C++ evaluation interface; C# evaluation interface; evaluating hidden layers; C# image transforms for evaluation
Advanced topics - Command line parsing rules; top-level commands; Plot command; ConvertDBN command
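The outline above centres on CNTK's config-file era (Simple Network Builder, BrainScript). For orientation only, here is a minimal sketch using the Python API that CNTK later gained (v2.x); the layer sizes and toy data are illustrative assumptions, not course material.

    import cntk as C
    import numpy as np

    # A tiny one-layer softmax classifier: 2 inputs, 2 classes.
    x = C.input_variable(2)
    y = C.input_variable(2)
    z = C.layers.Dense(2)(x)                    # linear layer; softmax is applied inside the loss
    loss = C.cross_entropy_with_softmax(z, y)
    error = C.classification_error(z, y)

    lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
    trainer = C.Trainer(z, (loss, error), [C.sgd(z.parameters, lr)])

    # Toy, linearly separable data (logical OR), one-hot labels.
    data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
    labels = np.array([[1, 0], [0, 1], [0, 1], [0, 1]], dtype=np.float32)
    for _ in range(300):
        trainer.train_minibatch({x: data, y: labels})
    print(trainer.previous_minibatch_loss_average)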
240777 Natural Language Processing with TensorFlow 35 hours
TensorFlow™ is an open source software library for numerical computation using data flow graphs. SyntaxNet is a neural-network Natural Language Processing framework for TensorFlow. Word2Vec is used for learning vector representations of words, called "word embeddings"; it is a particularly computationally efficient predictive model for learning word embeddings from raw text. It comes in two flavours, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (chapters 3.1 and 3.2 in Mikolov et al.). Used in tandem, SyntaxNet and Word2Vec allow users to generate learned embedding models from natural-language input.
Audience: this course is targeted at developers and engineers who intend to work with SyntaxNet and Word2Vec models in their TensorFlow graphs. After completing this course, delegates will: understand TensorFlow's structure and deployment mechanisms; be able to carry out installation, production-environment, architecture and configuration tasks; be able to assess code quality and perform debugging and monitoring; and be able to implement advanced production tasks such as training models, embedding terms, building graphs and logging. (A short skip-gram sketch follows this outline.)
Getting Started - Setup and installation
TensorFlow Basics - Creating, initializing, saving, and restoring TensorFlow variables; feeding, reading and preloading TensorFlow data; how to use the TensorFlow infrastructure to train models at scale; visualizing and evaluating models with TensorBoard
TensorFlow Mechanics 101 - Prepare the data: download, inputs and placeholders; build the graph: inference, loss, training; train the model: the graph, the session, the train loop; evaluate the model: build the eval graph, eval output
Advanced Usage - Threading and queues; distributed TensorFlow; writing documentation and sharing your model; customizing data readers; using GPUs; manipulating TensorFlow model files
TensorFlow Serving - Introduction; basic serving tutorial; advanced serving tutorial; Serving Inception Model tutorial
Getting Started with SyntaxNet - Parsing from standard input; annotating a corpus; configuring the Python scripts
Building an NLP Pipeline with SyntaxNet - Obtaining data; part-of-speech tagging; training the SyntaxNet POS tagger; preprocessing with the tagger; dependency parsing: transition-based parsing; training a parser, step 1: local pretraining; training a parser, step 2: global training
Vector Representations of Words - Motivation: why learn word embeddings?; scaling up with noise-contrastive training; the skip-gram model; building the graph; training the model; visualizing the learned embeddings; evaluating embeddings: analogical reasoning; optimizing the implementation
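Not part of the published outline: a compressed sketch of the skip-gram model with noise-contrastive estimation that the "Vector Representations of Words" section walks through, written against the TensorFlow 1.x API of this course's era. The vocabulary size, batch size and placeholder feeds are illustrative assumptions.

    import tensorflow as tf  # TensorFlow 1.x-style API

    vocab_size, embed_dim, num_sampled, batch = 10000, 128, 64, 32

    train_inputs = tf.placeholder(tf.int32, shape=[batch])      # centre-word ids
    train_labels = tf.placeholder(tf.int32, shape=[batch, 1])   # context-word ids

    # Embedding matrix: one trainable vector per vocabulary word.
    embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_inputs)

    # NCE: train against a handful of sampled negative words instead of a full softmax.
    nce_w = tf.Variable(tf.truncated_normal([vocab_size, embed_dim], stddev=0.1))
    nce_b = tf.Variable(tf.zeros([vocab_size]))
    loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_w, biases=nce_b,
                                         labels=train_labels, inputs=embed,
                                         num_sampled=num_sampled, num_classes=vocab_size))
    train_op = tf.train.GradientDescentOptimizer(1.0).minimize(loss)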
73746 Introduction to the use of neural networks 7 hours
The training is aimed at people who want to learn the basics of neural networks and their applications.
The basics - Can computers think?; imperative and declarative approaches to solving problems; the purpose of research on artificial intelligence; the definition of artificial intelligence; the Turing test; other determinants; the development of the concept of intelligent systems; the most important achievements and directions of development
Neural network basics - The concept of neurons and neural networks; a simplified model of the brain; the capabilities of a neuron; the XOR problem and the nature of the distribution of values; the polymorphic nature of the sigmoid function; other activation functions
Construction of neural networks - Connecting neurons; the neural network as nodes; building a network; neurons; layers; weights; input and output data; the 0-to-1 range; normalization
Learning neural networks - Backpropagation; propagation steps; network training algorithms and their range of application; estimation problems and the limits of approximation
Examples - The XOR problem (a tiny worked example follows this outline); Lotto?; equities; OCR and image pattern recognition; other applications; implementing a neural network for a modelling job: predicting the stock prices of listed companies
Problems for today - Combinatorial explosion and gaming issues; the Turing test again; over-confidence in the capabilities of computers
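To make the XOR example above concrete, here is a minimal sketch of a 2-3-1 sigmoid network trained with backpropagation, using plain NumPy; the learning rate, hidden size and iteration count are illustrative choices, not values from the course.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))    # input -> hidden weights
    W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))    # hidden -> output weights
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sig(X @ W1 + b1)                              # forward pass, hidden layer
        out = sig(h @ W2 + b2)                            # forward pass, output layer
        d_out = (out - y) * out * (1 - out)               # output-layer error signal
        d_h = (d_out @ W2.T) * h * (1 - h)                # backpropagated hidden error
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round().ravel())   # typically converges to [0. 1. 1. 0.]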
116476 Hadoop Administration on MapR 28 hours
Audience: this course is intended to demystify big data/Hadoop technology and to show that it is not difficult to understand.
Big Data Overview - What is Big Data; why Big Data is gaining popularity; Big Data case studies; Big Data characteristics; solutions for working with Big Data
Hadoop and its components - What is Hadoop and what are its components; Hadoop architecture and the characteristics of the data it can handle/process; a brief Hadoop history, companies using it and why they started using it; the Hadoop framework and its components, explained in detail; what HDFS is, and reads and writes to the Hadoop Distributed File System; how to set up a Hadoop cluster in different modes: stand-alone, pseudo-distributed and multi-node (this includes setting up a Hadoop cluster in VirtualBox/VMware, the network configurations that need to be looked at carefully, running the Hadoop daemons and testing the cluster); what the MapReduce framework is and how it works; running MapReduce jobs on a Hadoop cluster; understanding replication, mirroring and rack awareness in the context of Hadoop clusters
Hadoop Cluster Planning - How to plan your Hadoop cluster; understanding hardware and software choices; understanding workloads, and planning a cluster to avoid failures and perform optimally
What is MapR and why MapR - Overview of MapR and its architecture; understanding and working with the MapR Control System, MapR volumes, snapshots and mirrors; planning a cluster in the context of MapR; comparison of MapR with other distributions and Apache Hadoop; MapR installation and cluster deployment
Cluster Setup and Administration - Managing services, nodes, snapshots, mirror volumes and remote clusters; understanding and managing nodes; understanding Hadoop components and installing Hadoop components alongside MapR services; accessing data on the cluster, including via NFS; managing services and nodes; managing data using volumes, managing users and groups, managing and assigning roles to nodes, commissioning and decommissioning nodes, cluster administration and performance monitoring, configuring, analysing and monitoring metrics to track performance, configuring and administering MapR security; understanding and working with M7, native storage for MapR tables; cluster configuration and tuning for optimum performance
Cluster upgrade and integration with other setups - Upgrading the MapR software version and types of upgrade; configuring a MapR cluster to access an HDFS cluster; setting up a MapR cluster on Amazon Elastic MapReduce
All the above topics include demonstrations and practice sessions so that learners gain hands-on experience of the technology.
182614 Hadoop for Business Analysts 21 hours
Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capability, and it is making inroads into the traditional BI analytics world. This course introduces analysts to the core components of the Hadoop ecosystem and its analytics. Audience: business analysts. Duration: three days. Format: lectures and hands-on labs.
Section 1: Introduction to Hadoop - Hadoop history and concepts; ecosystem; distributions; high-level architecture; Hadoop myths; Hadoop challenges; hardware/software; lab: first look at Hadoop
Section 2: HDFS Overview - Concepts (horizontal scaling, replication, data locality, rack awareness); architecture (NameNode, Secondary NameNode, DataNode); data integrity; the future of HDFS: NameNode HA, federation; lab: interacting with HDFS
Section 3: MapReduce Overview - MapReduce concepts; daemons: JobTracker/TaskTracker; phases: driver, mapper, shuffle/sort, reducer; thinking in MapReduce; the future of MapReduce (YARN); lab: running a MapReduce program
Section 4: Pig - Pig vs Java MapReduce; the Pig Latin language; user-defined functions; understanding Pig job flow; basic data analysis with Pig; complex data analysis with Pig; multiple datasets with Pig; advanced concepts; lab: writing Pig scripts to analyze and transform data
Section 5: Hive - Hive concepts; architecture; SQL support in Hive; data types; table creation and queries; Hive data management; partitions and joins; text analytics; labs (multiple): creating Hive tables and running queries, joins, using partitions, using text analytics functions
Section 6: BI Tools for Hadoop - BI tools and Hadoop; overview of the current BI tools landscape; choosing the best tool for the job
39649 Natural Language Processing 21 hours
This course has been designed for people interested in extracting meaning from written English text, though the knowledge can be applied to other human languages as well. The course will cover how to make use of text written by humans, such as blog posts, tweets, etc. For example, an analyst can set up an algorithm which will reach a conclusion automatically based on an extensive data source. (A short tokenization sketch follows this outline.)
Short introduction to NLP methods - Word and sentence tokenization; text classification; sentiment analysis; spelling correction; information extraction; parsing; meaning extraction; question answering
Overview of NLP theory - Probability; statistics; machine learning; n-gram language modeling; naive Bayes; maxent classifiers; sequence models (Hidden Markov Models); probabilistic dependency and constituent parsing; vector-space models of meaning
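As a taste of the tokenization topics listed above, a minimal NLTK sketch (assuming the nltk package and its downloadable "punkt" tokenizer model; the sample text is illustrative):

    import nltk
    nltk.download("punkt", quiet=True)          # one-off download of the sentence tokenizer model
    from nltk.tokenize import sent_tokenize, word_tokenize

    text = "The course covers tokenization. It also covers sentiment analysis!"
    for sentence in sent_tokenize(text):
        print(word_tokenize(sentence))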
73734 Introduction to Drools 6 21 hours
This 3-day course introduces Drools 6 to developers as well as business analysts.
Short introduction to rule engines - A short history of Expert Systems and rule engines; What is Artificial Intelligence?; forward vs backward chaining; declarative vs procedural/OOP; comparison of solutions; when to use rule engines; when not to use rule engines; alternatives to rule engines
KIE - Authoring assets; workbench integration; executing rules directly from KIE; deployment; decision tables; rule templates; guided rule editor; testing; work items; versioning and deployment; a bit more about the repository (git)
Developing a simple process with rules - Writing rules in Eclipse; stateless vs stateful sessions; selecting proper facts; basic operators and Drools-specific operators; basic accumulate functions (sum, max, etc.); intermediate calculations; inserting new facts; exercises (lots of them)
Ordering rules with BPMN - Salience; Ruleflow vs BPMN 2.0; executing a ruleset from a process; rules vs gateways; short overview of BPMN 2.0 features (transactions, exception handling)
Comprehensive declarative business logic in Drools - Domain Specific Languages (DSL); creating new languages; preparing a DSL to be used by managers; basic Natural Language Processing (NLP) with DSL
Fusion (CEP) and temporal reasoning (for events happening after, between, etc.) - Fusion operators; example with event schedules
Unit testing
Optional topics - OptaPlanner; jBPM; Drools integration via web services; Drools integration via the command line; how to change rules/processes after deployment without compiling
116498 Modelling Decision and Rules with OMG DMN 14 hours
This course teaches how to design and execute decisions in rules with the OMG DMN (Decision Model and Notation) standard.
Introduction to DMN - Short history; basic concepts; decision requirements; decision logic; scope and uses of DMN (human and automated decision making)
Decision Requirements - DRG; DRD
Decision Tables; Simple Expression Language (S-FEEL); FEEL
Overview of Execution - Tools available on the market; simple scenarios and a workshop for executing decision tables
182615 Data Analytics With R 21 hours
R is a very popular, open source environment for statistical computing, data analytics and graphics. This course introduces the R programming language to students. It covers language fundamentals, libraries and advanced concepts, plus advanced data analytics and graphing with real-world data. Audience: developers / data analysts. Duration: 3 days. Format: lectures and hands-on.
Day 1: Language Basics - Course introduction; about data science: data science definition, the process of doing data science; introducing the R language; variables and types; control structures (loops/conditionals); R scalars, vectors, and matrices; defining R vectors; matrices; string and text manipulation; the character data type; file I/O; lists; functions: introducing functions, closures, the lapply/sapply functions; data frames; labs for all sections
Day 2: Intermediate R Programming - Data frames and file I/O; reading data from files; data preparation; built-in datasets; visualization: the graphics package (plot() / barplot() / hist() / boxplot() / scatter plots, heat maps), the ggplot2 package (qplot(), ggplot()); exploration with dplyr; labs for all sections
Day 3: Advanced Programming With R - Statistical modelling with R; statistical functions; dealing with NA; distributions (binomial, Poisson, normal); regression: introducing linear regression; recommendations; text processing (the tm package / word clouds); clustering: introduction to clustering, k-means; classification: introduction to classification, naive Bayes, decision trees, training using the caret package, evaluating algorithms; R and Big Data: Hadoop, the Big Data ecosystem, RHadoop; labs for all sections
83728 Introduction to Machine Learning 7 hours
This training course is for people who would like to apply basic machine learning techniques in practical applications. Audience: data scientists and statisticians who have some familiarity with machine learning and know how to program in R. The emphasis of this course is on the practical aspects of data/model preparation, execution, post-hoc analysis and visualization. The purpose is to give a practical introduction to machine learning to participants interested in applying the methods at work. Sector-specific examples are used to make the training relevant to the audience.
Topics: naive Bayes; multinomial models; Bayesian categorical data analysis; discriminant analysis; linear regression; logistic regression; GLM; the EM algorithm; mixed models; additive models; classification; KNN; ridge regression; clustering
121325 MATLAB Fundamentals 21 hours
This three-day course provides a comprehensive introduction to the MATLAB technical computing environment. The course is intended for beginning users and those looking for a review. No prior programming experience or knowledge of MATLAB is assumed. Themes of data analysis, visualization, modeling, and programming are explored throughout the course. Topics include: working with the MATLAB user interface; entering commands and creating variables; analyzing vectors and matrices; visualizing vector and matrix data; working with data files; working with data types; automating commands with scripts; writing programs with logic and flow control; writing functions.
Part 1
A Brief Introduction to MATLAB - Objectives: offer an overview of what MATLAB is, what it consists of, and what it can do for you. An example: C vs. MATLAB; MATLAB product overview; MATLAB application fields; what MATLAB can do for you; the course outline
Working with the MATLAB User Interface - Objective: get an introduction to the main features of the MATLAB integrated design environment and its user interfaces, and an overview of course themes. The MATLAB interface; reading data from a file; saving and loading variables; plotting data; customizing plots; calculating statistics and a best-fit line; exporting graphics for use in other applications
Variables and Expressions - Objective: enter MATLAB commands, with an emphasis on creating and accessing data in variables. Entering commands; creating variables; getting help; accessing and modifying values in variables; creating character variables
Analysis and Visualization with Vectors - Objective: perform mathematical and statistical calculations with vectors, and create basic visualizations. See how MATLAB syntax enables calculations on whole data sets with a single command. Calculations with vectors; plotting vectors; basic plot options; annotating plots
Analysis and Visualization with Matrices - Objective: use matrices as mathematical objects or as collections of (vector) data. Understand the appropriate use of MATLAB syntax to distinguish between these applications. Size and dimensionality; calculations with matrices; statistics with matrix data; plotting multiple columns; reshaping and linear indexing; multidimensional arrays
Part 2
Automating Commands with Scripts - Objective: collect MATLAB commands into scripts for ease of reproduction and experimentation. As the complexity of your tasks increases, entering long sequences of commands in the Command Window becomes impractical. A modelling example; the command history; creating script files; running scripts; comments and code cells; publishing scripts
Working with Data Files - Objective: bring data into MATLAB from formatted files. Because imported data can be of a wide variety of types and formats, emphasis is given to working with cell arrays and date formats. Importing data; mixed data types; cell arrays; conversions amongst numerals, strings, and cells; exporting data
Multiple Vector Plots - Objective: make more complex vector plots, such as multiple plots, and use color and string manipulation techniques to produce eye-catching visual representations of data. Graphics structure; multiple figures, axes, and plots; plotting equations; using color; customizing plots
Logic and Flow Control - Objective: use logical operations, variables, and indexing techniques to create flexible code that can make decisions and adapt to different situations. Explore other programming constructs for repeating sections of code, and constructs that allow interaction with the user. Logical operations and variables; logical indexing; programming constructs; flow control; loops
Matrix and Image Visualization - Objective: visualize images and matrix data in two or three dimensions. Explore the difference between displaying images and visualizing matrix data using images. Scattered interpolation using vector and matrix data; 3-D matrix visualization; 2-D matrix visualization; indexed images and colormaps; true-color images
Part 3
Data Analysis - Objective: perform typical data analysis tasks in MATLAB, including developing and fitting theoretical models to real-life data. This leads naturally to one of the most powerful features of MATLAB: solving linear systems of equations with a single command. Dealing with missing data; correlation; smoothing; spectral analysis and FFTs; solving linear systems of equations
Writing Functions - Objective: increase automation by encapsulating modular tasks as user-defined functions. Understand how MATLAB resolves references to files and variables. Why functions?; creating functions; adding comments; calling subfunctions; workspaces; subfunctions; path and precedence
Data Types - Objective: explore data types, focusing on the syntax for creating variables and accessing array elements, and discuss methods for converting among data types. Data types differ in the kind of data they may contain and the way the data is organized. MATLAB data types; integers; structures; converting types
File I/O - Objective: explore the low-level data import and export functions in MATLAB that allow precise control over text and binary file I/O. These functions include textscan, which provides precise control of reading text files. Opening and closing files; reading and writing text files; reading and writing binary files
Conclusion - Objectives: summarise what we have learnt. A summary of the course; other upcoming courses on MATLAB. Note that the course actually delivered might be subject to minor discrepancies from the outline above, without prior notification.
73735 Introduction to Nools 7 hours
Flows - Defining a flow; sessions; facts: assert, retract, modify, retrieving facts; firing; disposing; removing a flow; removing all flows; checking if a flow exists
Agenda Group - Focus; auto focus; conflict resolution
Defining Rules - Structure; salience; scope; constraints: not, or, from, exists; actions; async actions; globals; import
Browser Support
120703 Artificial Neural Networks, Machine Learning, Deep Thinking 21 hours
Day 1 - Artificial Neural Networks
Introduction and ANN structure - Biological neurons and artificial neurons; model of an ANN; activation functions used in ANNs; typical classes of network architectures
Mathematical foundations and learning mechanisms - Revisiting vector and matrix algebra; state-space concepts; concepts of optimization; error-correction learning; memory-based learning; Hebbian learning; competitive learning
Single-layer perceptrons - Structure and learning of perceptrons; pattern classifiers: introduction and Bayes' classifiers; the perceptron as a pattern classifier; perceptron convergence; limitations of perceptrons
Feedforward ANN - Structures of multi-layer feedforward networks; the backpropagation algorithm; backpropagation: training and convergence; functional approximation with backpropagation; practical and design issues of backpropagation learning
Radial basis function networks - Pattern separability and interpolation; regularization theory; regularization and RBF networks; RBF network design and training; approximation properties of RBFs
Competitive learning and self-organizing ANNs - General clustering procedures; Learning Vector Quantization (LVQ); competitive learning algorithms and architectures; self-organizing feature maps; properties of feature maps
Fuzzy neural networks - Neuro-fuzzy systems; background of fuzzy sets and logic; design of fuzzy systems; design of fuzzy ANNs
Applications - A few examples of neural network applications, their advantages and problems, will be discussed
Day 2 - Machine Learning
The PAC learning framework - Guarantees for a finite hypothesis set, consistent case; guarantees for a finite hypothesis set, inconsistent case; generalities: deterministic vs. stochastic scenarios, Bayes error, noise, estimation and approximation errors, model selection
Rademacher complexity and VC dimension; bias-variance tradeoff; regularisation; over-fitting; validation; Support Vector Machines (see the short sketch after this outline); Kriging (Gaussian process regression); PCA and kernel PCA; Self-Organising Maps (SOM); kernel-induced vector spaces; Mercer kernels and kernel-induced similarity metrics; reinforcement learning
Day 3 - Deep Learning (taught in relation to the topics covered on Day 1 and Day 2)
Logistic and softmax regression; sparse autoencoders; vectorization, PCA and whitening; self-taught learning; deep networks; linear decoders; convolution and pooling; sparse coding; independent component analysis; canonical correlation analysis; demos and applications
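For the Day 2 Support Vector Machines topic, a minimal scikit-learn sketch; the bundled iris dataset and the hyperparameters are illustrative assumptions, not course material:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = SVC(kernel="rbf", C=1.0)              # RBF-kernel support vector classifier
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))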
182617 Solr for Developers 21 hours
This course introduces students to the Solr platform. Through a combination of lecture, discussion and labs, students will gain hands-on experience configuring effective search and indexing. The class begins with basic Solr installation and configuration, then teaches the attendees the search features of Solr. Students will gain experience with faceting, indexing and search relevance, among other features central to the Solr platform. The course wraps up with a number of advanced topics including spell checking, suggestions, multicore and SolrCloud. Duration: 3 days. Audience: developers, business users, administrators. Overall goal: provide experienced web developers and technical staff with a comprehensive introduction to the Solr search platform, and teach software developers deep skills in creating search solutions. (A minimal query sketch follows this outline.)
I. Fundamentals - Solr overview; installing and running Solr; adding content to Solr; reading a Solr XML response; changing parameters in the URL; using the browse interface; labs: install Solr, run queries
II. Searching - Sorting results; query parsers; more queries; hardwiring request parameters; adding fields to the default search; faceting; result grouping; labs: advanced queries, experimenting with faceted search
III. Indexing - Adding your own content to Solr; deleting data from Solr; building a bookstore search; adding book data; exploring the book data; the dedupe update processor; labs: indexing various document collections
IV. Schema Updating - Adding fields to the schema; analyzing text; labs: customizing the Solr schema
V. Relevance - Field weighting; phrase queries; function queries; fuzzier search; sounds-like; labs: implementing queries for relevance
VI. Extended Features - More-like-this; geospatial; spell checking; suggestions; highlighting; pseudo-fields; pseudo-joins; multilanguage; labs: implementing spell checking and suggestions
VII. Multicore - Adding more kinds of data; labs: creating and administering cores
VIII. SolrCloud - Introduction; how SolrCloud works; commit strategies; ZooKeeper; managing Solr config files; labs: administering SolrCloud
IX. Developing with the Solr API - Talking to Solr through REST; configuration; indexing and searching; Solr and Spring; labs: code to read and write the Solr index, exercise in Spring with Solr
X. Developing with the Lucene API - Building a Lucene index; searching, viewing, debugging; extracting text with Tika; scaling Lucene indices on clusters; Lucene performance tuning; labs: coding with Lucene
XI. Conclusion - Other approaches to search: ElasticSearch; DataStax Enterprise (Solr + Cassandra); Cloudera Solr integration; Blur; future directions
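To illustrate "Talking to Solr through REST" from Section IX, a minimal Python sketch; it assumes a local Solr instance with a core named "books", which is not part of the course materials:

    import requests

    resp = requests.get(
        "http://localhost:8983/solr/books/select",          # Solr's standard select handler
        params={"q": "title:drools", "rows": 5, "wt": "json"},
    )
    for doc in resp.json()["response"]["docs"]:
        print(doc.get("id"), doc.get("title"))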
131301 OpenStack Overview 7 hours
The course is dedicated to IT engineers and architects who are looking for a solution to host a private or public IaaS (Infrastructure as a Service) cloud. It is also a great opportunity for IT managers to gain an overview of the possibilities that OpenStack can enable. Before you spend a lot of money on an OpenStack implementation, you can weigh up all the pros and cons by attending our course. This topic is also available as individual consultancy. Course goal: gaining basic knowledge of OpenStack. (A short SDK sketch follows this outline.)
Introduction - What is OpenStack?; foundations of cloud computing; OpenStack vs VMware; OpenStack evolution; OpenStack distributions; OpenStack releases; OpenStack deployment solutions; OpenStack competitors
OpenStack Services - Underpinning services; Keystone; Glance; Nova; Neutron; Cinder; Horizon; Swift; Heat; Ceilometer; Trove; Sahara; Ironic; Zaqar; Manila; Designate; Barbican
OpenStack Architecture - Node roles; high availability; scalability; segregation; backup; monitoring; self-service portal; interfaces; quotas; workflows; schedulers; migrations; load balancing; autoscaling
Demonstration - How to download and execute RC files; how to create an external network in Neutron; how to upload an image to Glance; how to create a new flavor in Nova; how to update default Nova and Neutron quotas; how to create a new tenant in Keystone; how to create a new user in Keystone; how to manage roles in Keystone; how to create a tenant network in Neutron; how to create a router in Neutron; how to manage a router's interfaces in Neutron; how to update security groups in Neutron; how to upload an RSA key-pair to the project; how to allocate floating IPs to the project; how to launch an instance from an image in Nova; how to associate floating IPs with instances; how to create a new volume in Cinder; how to attach the volume to the instance; how to take a snapshot of the instance; how to take a snapshot of the volume; how to launch an instance from a snapshot in Nova; how to create a volume from a snapshot in Cinder
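The demonstration steps above are typically driven from Horizon or the CLI; as a complementary sketch, the same kind of calls through the openstacksdk Python library. The cloud name, image and flavor are illustrative assumptions:

    import openstack

    conn = openstack.connect(cloud="mycloud")    # reads credentials from clouds.yaml

    network = conn.network.create_network(name="demo-net")
    conn.network.create_subnet(network_id=network.id, name="demo-subnet",
                               ip_version=4, cidr="192.168.10.0/24")

    image = conn.compute.find_image("cirros")
    flavor = conn.compute.find_flavor("m1.tiny")
    server = conn.compute.create_server(name="demo-vm", image_id=image.id,
                                        flavor_id=flavor.id,
                                        networks=[{"uuid": network.id}])
    server = conn.compute.wait_for_server(server)
    print(server.status)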
121392 Machine Learning for Robotics 21 hours
This course introduces machine learning methods in robotics applications. It is a broad overview of existing methods, motivations and main ideas in the context of pattern recognition. After a short theoretical background, participants will perform simple exercises using open source software (usually R) or other popular software.
Topics: regression; probabilistic graphical models; boosting; kernel methods; Gaussian processes; evaluation and model selection; sampling methods; clustering; CRFs; random forests; IVMs
78401 IoT (Internet of Things) for Entrepreneurs, Managers and Investors 21 hours
Estimates for the Internet of Things (IoT) market value are massive, since by definition the IoT is an integrated and diffused layer of devices, sensors, and computing power that overlays entire consumer, business-to-business, and government industries. The IoT will account for an increasingly huge number of connections: 1.9 billion devices today, and 9 billion by 2018. That year, it will be roughly equal to the number of smartphones, smart TVs, tablets, wearable computers, and PCs combined. In the consumer space, many products and services have already crossed over into the IoT, including kitchen and home appliances, parking, RFID, lighting and heating products, and a number of applications in the Industrial Internet.
However, the underlying technologies of the IoT are nothing new, as M2M communication has existed since the birth of the Internet. What has changed in the last couple of years is the emergence of a number of inexpensive wireless technologies, aided by the overwhelming adoption of smartphones and tablets in every home. The explosive growth of mobile devices led to the present demand for IoT. Due to the unbounded opportunities in IoT business, a large number of small and medium-sized entrepreneurs have jumped on the IoT bandwagon. Thanks to the emergence of open-source electronics and IoT platforms, the cost of developing an IoT system, and of managing its sizable production, is increasingly affordable. Existing electronic product owners are experiencing pressure to integrate their devices with the Internet or a mobile app.
This training is intended as a technology and business review of an emerging industry, so that IoT enthusiasts and entrepreneurs can grasp the basics of IoT technology and business.
Course objectives - The main objective of the course is to introduce emerging technological options, platforms and case studies of IoT implementation in home and city automation (smart homes and cities), the Industrial Internet, healthcare, government, mobile cellular and other areas: a basic introduction to all the elements of IoT - mechanical, electronics/sensor platforms, wireless and wireline protocols, mobile-to-electronics integration, mobile-to-enterprise integration, data analytics and the total control plane; M2M wireless protocols for IoT - WiFi, Zigbee/Z-Wave, Bluetooth, ANT+: when and where to use which one; mobile/desktop/web apps for registration, data acquisition and control; available M2M data-acquisition platforms for IoT - Xively, Omega, NovoTech, etc.; security issues and security solutions for IoT; open-source/commercial electronics platforms for IoT - Raspberry Pi, Arduino, ARM mbed LPC, etc.; open-source/commercial enterprise cloud platforms for IoT - Ayla, iO Bridge, Libellium, Axeda, Cisco Fog cloud; studies of the business and technology of some common IoT devices, such as home automation, smoke alarms, vehicles, military, home health, etc.
Target audience - Investors and IoT entrepreneurs; managers and engineers whose company is venturing into the IoT space; business analysts and investors
Pre-requisites - Basic knowledge of business operation, devices, electronics systems and data systems; basic understanding of software and systems; basic understanding of statistics (at Excel level)
Day 1, Session 1 - Business overview of why IoT is so important: case studies from Nest, Cisco and top industries; the IoT adaptation rate in North America, and how companies are aligning their future business models and operations around IoT; broad-scale application areas: smart houses and smart cities, the Industrial Internet, smart cars, wearables, home healthcare; business rule generation for IoT; the 3-layered architecture of Big Data - physical (sensors), communication, and data intelligence
Day 1, Session 2 - Introduction to IoT, all about sensors: basic function and architecture of a sensor - sensor body, sensor mechanism, sensor calibration, sensor maintenance, cost and pricing structure, legacy and modern sensor networks; development of sensor electronics - IoT vs legacy, and open source vs traditional PCB design styles; development of sensor communication protocols, from history to the modern day - legacy protocols like Modbus, relay and HART, to modern-day Zigbee, Z-Wave, X10, Bluetooth, ANT, etc.; business drivers for sensor deployment - FDA/EPA regulation, fraud/tampering detection, supervision, quality control and process management; different kinds of calibration techniques - manual, automated, in-field, primary and secondary calibration - and their implications for IoT; powering options for sensors - battery, solar, Witricity, mobile and PoE; hands-on training with single-silicon and other sensors, such as temperature, pressure, vibration, magnetic field, power factor, etc.
Day 1, Session 3 - Fundamentals of M2M communication, sensor networks and wireless protocols: What is a sensor network? What is an ad-hoc network?; wireless vs. wireline networks; the WiFi 802.11 families, N to S - application of the standards and common vendors; Zigbee and Z-Wave - the advantages of low-power mesh networking, long-distance Zigbee, introduction to different Zigbee chips; Bluetooth/BLE - low power vs high power, speed of detection, classes of BLE, introduction to Bluetooth vendors and their review; creating networks with wireless protocols, such as a piconet with BLE; protocol stacks and packet structures for BLE and Zigbee; other long-distance RF communication links; LOS vs NLOS links; capacity and throughput calculation; application issues in wireless protocols - power consumption, reliability, PER, QoS, LOS; hands-on training with sensor networks: a BLE-based piconet, Zigbee master/slave communication, data hubs (microcontroller- and single-board-computer-based, e.g. BeagleBone). (A minimal M2M messaging sketch follows this outline.)
Day 1, Session 4 - Review of electronics platforms, production and cost projection: PCB vs FPGA vs ASIC design - how to make the decision; prototyping electronics vs production electronics; QA certificates for IoT - CE/CSA/UL/IEC/RoHS/IP65: what they are and when they are needed; a basic introduction to multi-layer PCB design and its workflow; electronics reliability - basic concepts of FIT and early mortality rate; environmental and reliability testing - basic concepts; basic open-source platforms: Arduino, Raspberry Pi, BeagleBone - when are they needed?; RedBack, Diamond Back
Day 2, Session 1 - Conceiving a new IoT product, the product requirement document for IoT: state of the present art and review of existing technology in the marketplace; suggestions for new features and technologies based on market analysis and patent issues; detailed technical specs for new products - system, software, hardware, mechanical, installation, etc.; packaging and documentation requirements; servicing and customer support requirements; high-level design (HLD) for understanding the product concept; release plan for phase-wise introduction of the new features; skill sets for the development team and a proposed project plan - cost and duration; target manufacturing price
Day 2, Session 2 - Introduction to mobile app platforms for IoT: the protocol stack of a mobile app for IoT; mobile-to-server integration - what are the factors to look out for; what intelligent layers can be introduced at the mobile app level; iBeacon in iOS; Windows Azure; the Linkafy mobile platform for IoT; Axeda; Xively
Day 2, Session 3 - Machine learning for intelligent IoT: introduction to machine learning; learning classification techniques; Bayesian prediction - preparing a training file; Support Vector Machines; image and video analytics for IoT; fraud and alert analytics through IoT; biometric ID integration with IoT; real-time analytics/stream analytics; scalability issues of IoT and machine learning; architectural implementations of machine learning for IoT
Day 2, Session 4 - Analytic engines for IoT: insight analytics; visualization analytics; structured predictive analytics; unstructured predictive analytics; recommendation engines; pattern detection; rule/scenario discovery - failure, fraud, optimization; root cause discovery
Day 3, Session 1 - Security in IoT implementation: why security is absolutely essential for IoT; mechanisms of security breaches in the IoT layers; privacy-enhancing technologies; fundamentals of network security; encryption and cryptography implementation for IoT data; security standards for available platforms; European legislation for security in IoT platforms; secure booting; device authentication; firewalling and IPS; updates and patches
Day 3, Session 2 - Database implementation for IoT, cloud-based IoT platforms: SQL vs NoSQL - which one is good for your IoT application; open-source vs licensed databases; available M2M cloud platforms - Axeda, Xively, Omega, NovoTech, Ayla, Libellium, the Cisco M2M platform, the AT&T M2M platform, the Google M2M platform
Day 3, Session 3 - A few common IoT systems: home automation; energy optimization in the home; automotive OBD; IoT locks; smart smoke alarms; BAC (blood alcohol content) monitoring for drug abusers under probation; pet cams for pet lovers; wearable IoT; mobile parking ticketing systems; indoor location tracking in retail stores; home health care; smart sports watches
Day 3, Session 4 - Big Data for IoT: the 4 Vs - volume, velocity, variety and veracity of Big Data; why Big Data is important in IoT; Big Data vs legacy data in IoT; Hadoop for IoT - when and why?; storage techniques for image, geospatial and video data; distributed databases; parallel computing basics for IoT
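As a minimal taste of the M2M messaging discussed in Day 1, Session 3, a publish/subscribe sketch over MQTT using the paho-mqtt client (1.x API); the broker host and topic names are illustrative assumptions, not course material:

    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())   # e.g. a temperature reading from a sensor

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)

    client.subscribe("home/#")                   # listen to every sensor under "home/"
    client.publish("home/livingroom/temperature", json.dumps({"celsius": 21.5}))
    client.loop_forever()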
120985 Data Shrinkage for Government 14 hours
Why shrink data
Relational databases - Introduction; aggregation and disaggregation; normalisation and denormalisation; null values and zeroes; joining data; complex joins
Cluster analysis - Applications; strengths and weaknesses; measuring distance; hierarchical clustering; k-means and derivatives; applications in government (a short k-means sketch follows this outline)
Factor analysis - Concepts; exploratory factor analysis; confirmatory factor analysis; principal component analysis; correspondence analysis; software; applications in government
Predictive analytics - Timelines and naming conventions; holdout samples; weights of evidence; information value; scorecard-building demonstration using a spreadsheet; regression in predictive analytics; logistic regression in predictive analytics; decision trees in predictive analytics; neural networks; measuring accuracy; applications in government
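A minimal k-means sketch with scikit-learn, matching the cluster analysis topic above; the synthetic data and the choice of k=3 are illustrative assumptions:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Three synthetic blobs standing in for, say, grouped survey records.
    X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in (0.0, 5.0, 10.0)])

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(km.cluster_centers_)    # one centroid per cluster
    print(km.labels_[:10])        # cluster assignment of the first ten records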
78399 Neural Network in R 14 hours
This course is an introduction to applying neural networks to real-world problems using the R-project software.
Introduction to Neural Networks - What neural networks are; the current status of applying neural networks; neural networks vs regression models; supervised and unsupervised learning; overview of available packages: nnet, neuralnet and others; differences between the packages and their limitations; visualizing neural networks
Applying Neural Networks - The concept of neurons and neural networks; a simplified model of the brain; the capabilities of a neuron; the XOR problem and the nature of the distribution of values; the polymorphic nature of the sigmoid function; other activation functions
Construction of neural networks - Connecting neurons; the neural network as nodes; building a network; neurons; layers; weights; input and output data; the 0-to-1 range; normalization
Learning Neural Networks - Backpropagation; propagation steps; network training algorithms and their range of application; estimation problems and the limits of approximation
Examples - OCR and image pattern recognition; other applications; implementing a neural network for a modelling job: predicting the stock prices of listed companies
210897 Deep Learning with TensorFlow 21 hours
TensorFlow is a second-generation API for Google's open source software library for deep learning. The system is designed to facilitate research in machine learning, and to make it quick and easy to move from research prototype to production system. Audience: this course is intended for engineers seeking to use TensorFlow for their deep learning projects. After completing this course, delegates will: understand TensorFlow's structure and deployment mechanisms; be able to carry out installation, production-environment, architecture and configuration tasks; be able to assess code quality and perform debugging and monitoring; and be able to implement advanced production tasks such as training models, building graphs and logging. (A minimal graph-and-session sketch follows this outline.)
Machine Learning and Recurrent Neural Network (RNN) basics - NNs and RNNs; backpropagation; long short-term memory (LSTM)
TensorFlow Basics - Creating, initializing, saving, and restoring TensorFlow variables; feeding, reading and preloading TensorFlow data; how to use the TensorFlow infrastructure to train models at scale; visualizing and evaluating models with TensorBoard
TensorFlow Mechanics 101 - Prepare the data: download, inputs and placeholders; build the graph: inference, loss, training; train the model: the graph, the session, the train loop; evaluate the model: build the eval graph, eval output
Advanced Usage - Threading and queues; distributed TensorFlow; writing documentation and sharing your model; customizing data readers; using GPUs; manipulating TensorFlow model files
TensorFlow Serving - Introduction; basic serving tutorial; advanced serving tutorial; Serving Inception Model tutorial
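To make the "graph / session / train loop" topics concrete, a minimal linear-regression sketch in the TensorFlow 1.x style this course was written for; the toy data is an illustrative assumption:

    import tensorflow as tf  # TensorFlow 1.x-style API

    x = tf.placeholder(tf.float32, shape=[None, 1])
    y = tf.placeholder(tf.float32, shape=[None, 1])
    w = tf.Variable(tf.zeros([1, 1]))
    b = tf.Variable(tf.zeros([1]))

    pred = tf.matmul(x, w) + b                       # the graph: a linear model
    loss = tf.reduce_mean(tf.square(pred - y))
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    with tf.Session() as sess:                       # the session runs the graph
        sess.run(tf.global_variables_initializer())
        for _ in range(200):                         # the train loop
            sess.run(train_op, feed_dict={x: [[1.0], [2.0]], y: [[2.0], [4.0]]})
        print(sess.run([w, b]))                      # w should approach 2, b approach 0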
182021 Big Data Storage Solution - NoSQL 14 hours
When traditional storage technologies cannot handle the amount of data you need to store, there are hundreds of alternatives. This course tries to guide participants through the alternatives for storing and analysing Big Data, and their pros and cons. The course is mostly focused on discussion and presentation of solutions, though hands-on exercises are available on demand. (A minimal document-store sketch follows this outline.)
Limits of Traditional Technologies - SQL databases; redundancy: replicas and clusters; constraints; speed
Overview of database types - Object databases; document stores; cloud databases; wide column stores; multidimensional databases; multivalue databases; streaming and time series databases; multimodel databases; graph databases; key-value stores; XML databases; distributed file systems
Popular NoSQL Databases - MongoDB; Cassandra; Apache Hadoop; Apache Spark; other solutions
NewSQL - Overview of available solutions; performance; inconsistencies
Document Storage / Search-Optimized Solutions - Solr/Lucene/Elasticsearch; other solutions
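A minimal document-store sketch with MongoDB's pymongo driver, matching the "Popular NoSQL Databases" section; the localhost connection and the database/collection names are illustrative assumptions:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    courses = client["catalog"]["courses"]           # database "catalog", collection "courses"

    courses.insert_one({"id": 182021, "name": "Big Data Storage Solution - NoSQL", "hours": 14})
    for doc in courses.find({"hours": {"$gte": 14}}):  # schema-less query by field
        print(doc["name"])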
78400 Big Data Business Intelligence for Govt. Agencies 40 hours Advances in technologies and the increasing amount of information are transforming how business is conducted in many industries, including government. Government data generation and digital archiving rates are on the rise due to the rapid growth of mobile devices and applications, smart sensors and devices, cloud computing solutions, and citizen-facing portals. As digital information expands and becomes more complex, information management, processing, storage, security, and disposition become more complex as well. New capture, search, discovery, and analysis tools are helping organizations gain insights from their unstructured data. The government market is at a tipping point, realizing that information is a strategic asset, and government needs to protect, leverage, and analyze both structured and unstructured information to better serve and meet mission requirements. As government leaders strive to evolve data-driven organizations to successfully accomplish mission, they are laying the groundwork to correlate dependencies across events, people, processes, and information. High-value government solutions will be created from a mashup of the most disruptive technologies: Mobile devices and applications Cloud services Social business technologies and networking Big Data and analytics IDC predicts that by 2020, the IT industry will reach $5 trillion, approximately $1.7 trillion larger than today, and that 80% of the industry's growth will be driven by these 3rd Platform technologies. In the long term, these technologies will be key tools for dealing with the complexity of increased digital information. Big Data is one of the intelligent industry solutions and allows government to make better decisions by taking action based on patterns revealed by analyzing large volumes of data — related and unrelated, structured and unstructured. But accomplishing these feats takes far more than simply accumulating massive quantities of data.“Making sense of thesevolumes of Big Datarequires cutting-edge tools and technologies that can analyze and extract useful knowledge from vast and diverse streams of information,” Tom Kalil and Fen Zhao of the White House Office of Science and Technology Policy wrote in a post on the OSTP Blog. The White House took a step toward helping agencies find these technologies when it established the National Big Data Research and Development Initiative in 2012. The initiative included more than $200 million to make the most of the explosion of Big Data and the tools needed to analyze it. The challenges that Big Data poses are nearly as daunting as its promise is encouraging. Storing data efficiently is one of these challenges. As always, budgets are tight, so agencies must minimize the per-megabyte price of storage and keep the data within easy access so that users can get it when they want it and how they need it. Backing up massive quantities of data heightens the challenge. Analyzing the data effectively is another major challenge. Many agencies employ commercial tools that enable them to sift through the mountains of data, spotting trends that can help them operate more efficiently. (A recent study by MeriTalk found that federal IT executives think Big Data could help agencies save more than $500 billion while also fulfilling mission objectives.). Custom-developed Big Data tools also are allowing agencies to address the need to analyze their data. 
For example, the Oak Ridge National Laboratory’s Computational Data Analytics Group has made its Piranha data analytics system available to other agencies. The system has helped medical researchers find a link that can alert doctors to aortic aneurysms before they strike. It’s also used for more mundane tasks, such as sifting through résumés to connect job candidates with hiring managers. Each session is 2 hours Day-1: Session -1: Business Overview of Why Big Data Business Intelligence in Govt. Case Studies from NIH, DoE Big Data adaptation rate in Govt. Agencies & and how they are aligning their future operation around Big Data Predictive Analytics Broad Scale Application Area in DoD, NSA, IRS, USDA etc. Interfacing Big Data with Legacy data Basic understanding of enabling technologies in predictive analytics Data Integration & Dashboard visualization Fraud management Business Rule/ Fraud detection generation Threat detection and profiling Cost benefit analysis for Big Data implementation Day-1: Session-2 : Introduction of Big Data-1 Main characteristics of Big Data-volume, variety, velocity and veracity. MPP architecture for volume. Data Warehouses – static schema, slowly evolving dataset MPP Databases like Greenplum, Exadata, Teradata, Netezza, Vertica etc. Hadoop Based Solutions – no conditions on structure of dataset. Typical pattern : HDFS, MapReduce (crunch), retrieve from HDFS Batch- suited for analytical/non-interactive Volume : CEP streaming data Typical choices – CEP products (e.g. Infostreams, Apama, MarkLogic etc) Less production ready – Storm/S4 NoSQL Databases – (columnar and key-value): Best suited as analytical adjunct to data warehouse/database Day-1 : Session -3 : Introduction to Big Data-2 NoSQL solutions KV Store - Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB) KV Store - Dynamo, Voldemort, Dynomite, SubRecord, Mo8onDb, DovetailDB KV Store (Hierarchical) - GT.m, Cache KV Store (Ordered) - TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord KV Cache - Memcached, Repcached, Coherence, Infinispan, EXtremeScale, JBossCache, Velocity, Terracoqua Tuple Store - Gigaspaces, Coord, Apache River Object Database - ZopeDB, DB40, Shoal Document Store - CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML-Databases, ThruDB, CloudKit, Prsevere, Riak-Basho, Scalaris Wide Columnar Store - BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI Varieties of Data: Introduction to Data Cleaning issue in Big Data RDBMS – static structure/schema, doesn’t promote agile, exploratory environment. NoSQL – semi structured, enough structure to store data without exact schema before storing data Data cleaning issues Day-1 : Session-4 : Big Data Introduction-3 : Hadoop When to select Hadoop? 
STRUCTURED - Enterprise data warehouses/databases can store massive data (at a cost) but impose structure (not good for active exploration) SEMI STRUCTURED data – tough to do with traditional solutions (DW/DB) Warehousing data = HUGE effort and static even after implementation For variety & volume of data, crunched on commodity hardware – HADOOP Commodity H/W needed to create a Hadoop Cluster Introduction to Map Reduce /HDFS MapReduce – distribute computing over multiple servers HDFS – make data available locally for the computing process (with redundancy) Data – can be unstructured/schema-less (unlike RDBMS) Developer responsibility to make sense of data Programming MapReduce = working with Java (pros/cons), manually loading data into HDFS Day-2: Session-1: Big Data Ecosystem-Building Big Data ETL: universe of Big Data Tools-which one to use and when? Hadoop vs. Other NoSQL solutions For interactive, random access to data Hbase (column oriented database) on top of Hadoop Random access to data but restrictions imposed (max 1 PB) Not good for ad-hoc analytics, good for logging, counting, time-series Sqoop - Import from databases to Hive or HDFS (JDBC/ODBC access) Flume – Stream data (e.g. log data) into HDFS Day-2: Session-2: Big Data Management System Moving parts, compute nodes start/fail :ZooKeeper - For configuration/coordination/naming services Complex pipeline/workflow: Oozie – manage workflow, dependencies, daisy chain Deploy, configure, cluster management, upgrade etc (sys admin) :Ambari In Cloud : Whirr Day-2: Session-3: Predictive analytics in Business Intelligence -1: Fundamental Techniques & Machine learning based BI : Introduction to Machine learning Learning classification techniques Bayesian Prediction-preparing training file Support Vector Machine KNN p-Tree Algebra & vertical mining Neural Network Big Data large variable problem -Random forest (RF) Big Data Automation problem – Multi-model ensemble RF Automation through Soft10-M Text analytic tool-Treeminer Agile learning Agent based learning Distributed learning Introduction to Open source Tools for predictive analytics : R, Rapidminer, Mahut Day-2: Session-4 Predictive analytics eco-system-2: Common predictive analytic problems in Govt. Insight analytic Visualization analytic Structured predictive analytic Unstructured predictive analytic Threat/fraudstar/vendor profiling Recommendation Engine Pattern detection Rule/Scenario discovery –failure, fraud, optimization Root cause discovery Sentiment analysis CRM analytic Network analytic Text Analytics Technology assisted review Fraud analytic Real Time Analytic Day-3 : Sesion-1 : Real Time and Scalable Analytic Over Hadoop Why common analytic algorithms fail in Hadoop/HDFS Apache Hama- for Bulk Synchronous distributed computing Apache SPARK- for cluster computing for real time analytic CMU Graphics Lab2- Graph based asynchronous approach to distributed computing KNN p-Algebra based approach from Treeminer for reduced hardware cost of operation Day-3: Session-2: Tools for eDiscovery and Forensics eDiscovery over Big Data vs. 
Legacy data – a comparison of cost and performance Predictive coding and technology-assisted review (TAR) Live demo of a TAR product (vMiner) to understand how TAR works for faster discovery Faster indexing through HDFS – velocity of data NLP (Natural Language Processing) – various techniques and open source products eDiscovery in foreign languages – technology for foreign language processing Day-3: Session 3: Big Data BI for Cyber Security – understanding the whole 360-degree view, from speedy data collection to threat identification Understanding the basics of security analytics – attack surface, security misconfiguration, host defenses Network infrastructure / large data pipe / response ETL for real-time analytics Prescriptive vs. predictive – fixed rule-based vs. auto-discovery of threat rules from metadata Day-3: Session 4: Big Data in USDA: Application in Agriculture Introduction to IoT (Internet of Things) for agriculture – sensor-based Big Data and control Introduction to satellite imaging and its application in agriculture Integrating sensor and image data for soil fertility, cultivation recommendation and forecasting Agriculture insurance and Big Data Crop loss forecasting Day-4: Session-1: Fraud prevention BI from Big Data in Govt – fraud analytics: Basic classification of fraud analytics – rule-based vs. predictive analytics Supervised vs. unsupervised machine learning for fraud pattern detection Vendor fraud/overcharging for projects Medicare and Medicaid fraud – fraud detection techniques for claim processing Travel reimbursement frauds IRS refund frauds Case studies and live demos will be given wherever data is available. Day-4: Session-2: Social Media Analytics – intelligence gathering and analysis Big Data ETL API for extracting social media data Text, image, metadata and video Sentiment analysis from social media feeds Contextual and non-contextual filtering of social media feeds Social media dashboard to integrate diverse social media Automated profiling of social media profiles Live demos of each analytic will be given through the Treeminer tool. Day-4: Session-3: Big Data Analytics in image processing and video feeds Image storage techniques in Big Data – storage solutions for data exceeding petabytes LTFS and LTO GPFS-LTFS (layered storage solution for big image data) Fundamentals of image analytics Object recognition Image segmentation Motion tracking 3-D image reconstruction Day-4: Session-4: Big Data applications in NIH: Emerging areas of bio-informatics Meta-genomics and Big Data mining issues Big Data predictive analytics for pharmacogenomics, metabolomics and proteomics Big Data in downstream genomics processes Application of Big Data predictive analytics in public health Big Data Dashboard for quick accessibility of diverse data and display: Integration of existing application platforms with a Big Data Dashboard Big Data management Case study of Big Data Dashboards: Tableau and Pentaho Using Big Data apps to push location-based services in Govt. Tracking system and management Day-5: Session-1: How to justify Big Data BI implementation within an organization: Defining ROI for Big Data implementation Case studies on saving analyst time in collection and preparation of data – increase in productivity Case studies of revenue gain from saving licensed database costs Revenue gain from location-based services Savings from fraud prevention An integrated spreadsheet approach to calculate approximate expense vs. revenue gain/savings from Big Data implementation. 
Day-5: Session-2: Step-by-step procedure for migrating a legacy data system to a Big Data system: Understanding a practical Big Data migration roadmap What important information is needed before architecting a Big Data implementation The different ways of calculating the volume, velocity, variety and veracity of data How to estimate data growth Case studies Day-5: Session 4: Review of Big Data vendors and their products; Q/A session: Accenture APTEAN (formerly CDC Software) Cisco Systems Cloudera Dell EMC GoodData Corporation Guavus Hitachi Data Systems Hortonworks HP IBM Informatica Intel Jaspersoft Microsoft MongoDB (formerly 10gen) MU Sigma NetApp Opera Solutions Oracle Pentaho Platfora Qliktech Quantum Rackspace Revolution Analytics Salesforce SAP SAS Institute Sisense Software AG/Terracotta Soft10 Automation Splunk Sqrrl Supermicro Tableau Software Teradata Think Big Analytics Tidemark Systems Treeminer VMware (part of EMC)
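To make the MapReduce/HDFS discussion in the outline above concrete, the sketch below shows the classic word-count pattern as plain Python. It is illustrative only: the function names are ours, and a real Hadoop job would run these phases distributed over HDFS data rather than in one process.

from collections import defaultdict

def map_phase(documents):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    # Group values by key, as Hadoop's shuffle/sort does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word, as a Hadoop reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data needs big clusters", "hadoop crunches big data"]
print(reduce_phase(shuffle_phase(map_phase(docs))))  # {'big': 3, 'data': 2, ...}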
188771 Machine Learning Fundamentals with Scala and Apache Spark 14 hours The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Scala programming language and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications. Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Machine Learning with Scala Choice of libraries Add-on tools Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means
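As a taste of the unsupervised-learning module above, here is a minimal k-means sketch. It is written in Python/NumPy purely for illustration; in the course itself the same idea would be expressed in Scala, typically via Spark MLlib.

import numpy as np

def kmeans(points, k, iterations=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
print(kmeans(pts, k=2))  # two well-separated clusters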
182619 HBase for Developers 21 hours This course introduces HBase – a NoSQL store on top of Hadoop. The course is intended for developers who will be using HBase to develop applications, and administrators who will manage HBase clusters. We will walk a developer through HBase architecture, data modelling and application development on HBase. It will also discuss using MapReduce with HBase, and some administration topics related to performance optimization. The course is very hands-on with lots of lab exercises. Duration : 3 days Audience : Developers & Administrators Section 1: Introduction to Big Data & NoSQL Big Data ecosystem NoSQL overview CAP theorem When is NoSQL appropriate Columnar storage HBase and NoSQL Section 2 : HBase Intro Concepts and Design Architecture (HMaster and Region Server) Data integrity HBase ecosystem Lab : Exploring HBase Section 3 : HBase Data model Namespaces, Tables and Regions Rows, columns, column families, versions HBase Shell and Admin commands Lab : HBase Shell Section 4 : Accessing HBase using Java API Introduction to Java API Read / Write path Time Series data Scans Map Reduce Filters Counters Co-processors Labs (multiple) : Using HBase Java API to implement time series, MapReduce, filters and counters. Section 5 : HBase schema Design : Group session students are presented with real world use cases students work in groups to come up with design solutions discuss / critique and learn from multiple designs Labs : implement a scenario in HBase Section 6 : HBase Internals Understanding HBase under the hood MemStore / HFile / WAL HDFS storage Compactions Splits Bloom Filters Caches Diagnostics Section 7 : HBase installation and configuration hardware selection install methods common configurations Lab : installing HBase Section 8 : HBase eco-system developing applications using HBase interacting with the rest of the Hadoop stack (MapReduce, Pig, Hive) frameworks around HBase advanced concepts (co-processors) Labs : writing HBase applications Section 9 : Monitoring And Best Practices monitoring tools and practices optimizing HBase HBase in the cloud real world use cases of HBase Labs : checking HBase vitals
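The Java API labs above have a close Python analogue. The hedged sketch below uses the happybase Thrift client to show the put / get / scan pattern and a time-series row-key design; the table name, column family and a running HBase Thrift server are all assumptions for illustration.

import happybase

connection = happybase.Connection('localhost')  # assumes an HBase Thrift server
table = connection.table('sensor')              # assumes this table exists with family 'cf'

# Write: time-series row keys often embed entity id + timestamp so that
# related readings sort together and can be scanned as a range.
table.put(b'device42-20240101T0000', {b'cf:temp': b'21.5'})

# Point read by row key.
print(table.row(b'device42-20240101T0000'))

# Range scan by key prefix - the natural access pattern for time series.
for key, data in table.scan(row_prefix=b'device42-'):
    print(key, data)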
211569 Apache SystemML for Machine Learning 14 hours Apache SystemML is a distributed and declarative machine learning platform. SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations on Apache Hadoop and Apache Spark. Audience This course is suitable for Machine Learning researchers, developers and engineers seeking to utilize SystemML as a framework for machine learning. Running SystemML Standalone Spark MLContext Spark Batch Hadoop Batch JMLC Tools Debugger IDE Troubleshooting Languages and ML Algorithms DML PyDML Algorithms
192406 Analytics Domain Expertise 7 hours This course is part of the Data Scientist skill set (Domain: Analytics Domain Expertise). Analytics Domain Expertise Recap on Big Data Analytics overview and applications Big Data strategy and implementation Case studies
78402 Big Data Business Intelligence for Telecom and Communication Service Providers 35 hours Overview Communications service providers (CSP) are facing pressure to reduce costs and maximize average revenue per user (ARPU), while ensuring an excellent customer experience, but data volumes keep growing. Global mobile data traffic will grow at a compound annual growth rate (CAGR) of 78 percent to 2016, reaching 10.8 exabytes per month. Meanwhile, CSPs are generating large volumes of data, including call detail records (CDR), network data and customer data. Companies that fully exploit this data gain a competitive edge. According to a recent survey by The Economist Intelligence Unit, companies that use data-directed decision-making enjoy a 5-6% boost in productivity. Yet 53% of companies leverage only half of their valuable data, and one-fourth of respondents noted that vast quantities of useful data go untapped. The data volumes are so high that manual analysis is impossible, and most legacy software systems can’t keep up, resulting in valuable data being discarded or ignored. With high-speed, scalable Big Data & Analytics software, CSPs can mine all their data for better decision making in less time. Different Big Data products and techniques provide an end-to-end software platform for collecting, preparing, analyzing and presenting insights from big data. Application areas include network performance monitoring, fraud detection, customer churn detection and credit risk analysis. Big Data & Analytics products scale to handle terabytes of data, but implementation of such tools needs new kinds of cloud-based database systems like Hadoop or massive-scale parallel computing processors (KPU etc.). This course on Big Data BI for Telco covers all the emerging new areas in which CSPs are investing for productivity gains and for opening up new business revenue streams. The course will provide a complete 360-degree overview of Big Data BI in Telco so that decision makers and managers can have a very wide and comprehensive overview of the possibilities of Big Data BI in Telco for productivity and revenue gain. Course objectives The main objective of the course is to introduce new Big Data business intelligence techniques in 4 sectors of Telecom business (Marketing/Sales, Network Operation, Financial operation and Customer Relation Management). 
Students will be introduced to the following: Introduction to Big Data – what the 4 Vs (volume, velocity, variety and veracity) are in Big Data – generation, extraction and management from a Telco perspective How Big Data analytics differs from legacy data analytics In-house justification of Big Data – Telco perspective Introduction to the Hadoop ecosystem – familiarity with all Hadoop tools like Hive, Pig and Spark – when and how they are used to solve Big Data problems How Big Data is extracted for analysis with analytics tools – how Business Analysts can reduce their pain points of collection and analysis of data through an integrated Hadoop dashboard approach Basic introduction to insight analytics, visualization analytics and predictive analytics for Telco Customer churn analytics and Big Data – how Big Data analytics can reduce customer churn and customer dissatisfaction in Telco – case studies Network failure and service failure analytics from network metadata and IPDR Financial analysis – fraud, wastage and ROI estimation from sales and operational data Customer acquisition problems – target marketing, customer segmentation and cross-sale from sales data Introduction to and summary of all Big Data analytics products and where they fit into the Telco analytics space Conclusion – how to take a step-by-step approach to introduce Big Data Business Intelligence in your organization Target Audience Network operation, financial managers, CRM managers and top IT managers in the Telco CIO office. Business Analysts in Telco CFO office managers/analysts Operational managers QA managers Breakdown of topics on a daily basis: (Each session is 2 hours) Day-1: Session-1: Business Overview of Why Big Data Business Intelligence in Telco. Case studies from T-Mobile, Verizon etc. Big Data adaptation rate in North American Telcos & how they are aligning their future business model and operation around Big Data BI Broad-scale application areas Network and service management Customer churn management Data integration & dashboard visualization Fraud management Business rule generation Customer profiling Localized ad pushing Day-1: Session-2: Introduction to Big Data-1 Main characteristics of Big Data – volume, variety, velocity and veracity. MPP architecture for volume. Data Warehouses – static schema, slowly evolving dataset MPP databases like Greenplum, Exadata, Teradata, Netezza, Vertica etc. Hadoop-based solutions – no conditions on the structure of the dataset. Typical pattern: HDFS, MapReduce (crunch), retrieve from HDFS Batch – suited for analytical/non-interactive Velocity: CEP streaming data Typical choices – CEP products (e.g. 
Infostreams, Apama, MarkLogic etc.) Less production ready – Storm/S4 NoSQL databases – (columnar and key-value): best suited as an analytical adjunct to a data warehouse/database Day-1: Session-3: Introduction to Big Data-2 NoSQL solutions KV Store - Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB) KV Store - Dynamo, Voldemort, Dynomite, SubRecord, MotionDb, DovetailDB KV Store (Hierarchical) - GT.M, Caché KV Store (Ordered) - TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord KV Cache - Memcached, Repcached, Coherence, Infinispan, eXtremeScale, JBossCache, Velocity, Terracotta Tuple Store - Gigaspaces, Coord, Apache River Object Database - ZopeDB, db4o, Shoal Document Store - CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML databases, ThruDB, CloudKit, Persevere, Riak-Basho, Scalaris Wide Columnar Store - BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI Varieties of Data: Introduction to data cleaning issues in Big Data RDBMS – static structure/schema, doesn’t promote an agile, exploratory environment. NoSQL – semi-structured, enough structure to store data without an exact schema before storing data Data cleaning issues Day-1: Session-4: Big Data Introduction-3: Hadoop When to select Hadoop? STRUCTURED - Enterprise data warehouses/databases can store massive data (at a cost) but impose structure (not good for active exploration) SEMI-STRUCTURED data – tough to do with traditional solutions (DW/DB) Warehousing data = HUGE effort and static even after implementation For variety & volume of data, crunched on commodity hardware – HADOOP Commodity H/W needed to create a Hadoop Cluster Introduction to MapReduce/HDFS MapReduce – distributes computing over multiple servers HDFS – makes data available locally for the computing process (with redundancy) Data – can be unstructured/schema-less (unlike RDBMS) Developer responsibility to make sense of data Programming MapReduce = working with Java (pros/cons), manually loading data into HDFS Day-2: Session-1.1: Spark: In-Memory distributed database What is “in memory” processing? 
Spark SQL Spark SDK Spark API RDD Spark Lib Hanna How to migrate an existing Hadoop system to Spark Day-2: Session-1.2: Storm – real-time processing in Big Data Streams Spouts Bolts Topologies Day-2: Session-2: Big Data Management System Moving parts, compute nodes start/fail: ZooKeeper – for configuration/coordination/naming services Complex pipeline/workflow: Oozie – manage workflow, dependencies, daisy chain Deploy, configure, cluster management, upgrade etc. (sys admin): Ambari In Cloud: Whirr Evolving Big Data platform tools for tracking ETL layer application issues Day-2: Session-3: Predictive analytics in Business Intelligence-1: Fundamental Techniques & Machine-learning-based BI: Introduction to Machine learning Learning classification techniques Bayesian Prediction – preparing a training file Markov random field Supervised and unsupervised learning Feature extraction Support Vector Machine Neural Network Reinforcement learning Big Data large-variable problem – Random Forest (RF) Representation learning Deep learning Big Data Automation problem – multi-model ensemble RF Automation through Soft10-M LDA and topic modeling Agile learning Agent-based learning – example from Telco operation Distributed learning – example from Telco operation Introduction to open source tools for predictive analytics: R, Rapidminer, Mahout More scalable analytics – Apache Hama, Spark and CMU GraphLab Day-2: Session-4: Predictive analytics eco-system-2: Common predictive analytics problems in Telecom Insight analytics Visualization analytics Structured predictive analytics Unstructured predictive analytics Customer profiling Recommendation Engine Pattern detection Rule/scenario discovery – failure, fraud, optimization Root cause discovery Sentiment analysis CRM analytics Network analytics Text analytics Technology-assisted review Fraud analytics Real-time analytics Day-3: Session-1: Network Operation analytics – root cause analysis of network failures and service interruption from metadata, IPDR and CRM: CPU usage Memory usage QoS queue usage Device temperature Interface errors IOS versions Routing events Latency variations Syslog analytics Packet loss Load simulation Topology inference Performance thresholds Device traps IPDR (IP Detail Record) collection and processing Use of IPDR data for subscriber bandwidth consumption, network interface utilization, modem status and diagnostics HFC information Day-3: Session-2: Tools for network service failure analysis: Network Summary Dashboard: monitor overall network deployments and track your organization's key performance indicators Peak Period Analysis Dashboard: understand the application and subscriber trends driving peak utilization, with location-specific granularity Routing Efficiency Dashboard: control network costs and build business cases for capital projects with a complete understanding of interconnect and transit relationships Real-Time Entertainment Dashboard: access metrics that matter, including video views, duration, and video quality of experience (QoE) IPv6 Transition Dashboard: investigate the ongoing adoption of IPv6 on your network and gain insight into the applications and devices driving trends Case-Study-1: The Alcatel-Lucent Big Network Analytics (BNA) Data Miner Multi-dimensional mobile intelligence (m.IQ6) Day-3: Session 3: Big Data BI for Marketing/Sales – understanding sales/marketing from sales data: (all of them will be shown with a live predictive analytics demo) To identify highest-velocity clients To identify clients for a given product To 
identify the right set of products for a client (Recommendation Engine) Market segmentation techniques Cross-sell and upsell techniques Client segmentation techniques Sales revenue forecasting techniques Day-3: Session 4: BI needed for the Telco CFO office: Overview of the Business Analytics work needed in a CFO office Risk analysis on new investments Revenue and profit forecasting New client acquisition forecasting Loss forecasting Fraud analytics on finances (details next session) Day-4: Session-1: Fraud prevention BI from Big Data in Telco – fraud analytics: Bandwidth leakage / bandwidth fraud Vendor fraud/overcharging for projects Customer refund/claims frauds Travel reimbursement frauds Day-4: Session-2: From Churn Prediction to Churn Prevention: 3 types of churn: active/deliberate, rotational/incidental, passive involuntary 3 classifications of churned customers: total, hidden, partial Understanding CRM variables for churn Customer behavior data collection Customer perception data collection Customer demographics data collection Cleaning CRM data Unstructured CRM data (customer calls, tickets, emails) and its conversion to structured data for churn analysis Social Media CRM – a new way to extract the customer satisfaction index Case Study-1: T-Mobile USA: Churn Reduction by 50% Day-4: Session-3: How to use predictive analysis for root cause analysis of customer dissatisfaction: Case Study-1: Linking dissatisfaction to issues – accounting, engineering failures like service interruption, poor bandwidth service Case Study-2: Big Data QA dashboard to track the customer satisfaction index from various parameters such as call escalations, criticality of issues, pending service interruption events etc. Day-4: Session-4: Big Data Dashboard for quick accessibility of diverse data and display: Integration of existing application platforms with a Big Data Dashboard Big Data management Case study of Big Data Dashboards: Tableau and Pentaho Using Big Data apps to push location-based advertisement Tracking system and management Day-5: Session-1: How to justify Big Data BI implementation within an organization: Defining ROI for Big Data implementation Case studies on saving analyst time in collection and preparation of data – increase in productivity Case studies of revenue gain from reducing customer churn Revenue gain from location-based and other targeted ads An integrated spreadsheet approach to calculate approximate expense vs. revenue gain/savings from Big Data implementation. Day-5: Session-2: Step-by-step procedure for migrating a legacy data system to a Big Data system: Understanding a practical Big Data migration roadmap What important information is needed before architecting a Big Data implementation The different ways of calculating the volume, velocity, variety and veracity of data How to estimate data growth Case studies from 2 Telcos Day-5: Session 3 & 4: Review of Big Data vendors and their products; Q/A session: Accenture Alcatel-Lucent Amazon – A9 APTEAN (formerly CDC Software) Cisco Systems Cloudera Dell EMC GoodData Corporation Guavus Hitachi Data Systems Hortonworks Huawei HP IBM Informatica Intel Jaspersoft Microsoft MongoDB (formerly 10gen) MU Sigma NetApp Opera Solutions Oracle Pentaho Platfora Qliktech Quantum Rackspace Revolution Analytics Salesforce SAP SAS Institute Sisense Software AG/Terracotta Soft10 Automation Splunk Sqrrl Supermicro Tableau Software Teradata Think Big Analytics Tidemark Systems VMware (part of EMC)
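To illustrate the churn-prediction sessions above, here is a hedged scikit-learn sketch. The data is synthetic and the feature set hypothetical: it stands in for cleaned CRM variables such as tenure, spend and ticket counts, not for any particular tool used in the course.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-in for [tenure, monthly spend, tickets]
# Hypothetical ground truth: short tenure plus many tickets -> churn.
y = ((X[:, 0] < 0) & (X[:, 2] > 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
# A per-customer churn probability is what feeds a retention campaign.
print("churn probability:", model.predict_proba(X_test[:1])[0, 1])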
116141 Introduction to Recommendation Systems 7 hours Audience Marketing department employees, IT strategists and other people involved in decisions related to the design and implementation of recommender systems. Format Short theoretical background followed by analysis of working examples and short, simple exercises. Challenges related to data collection Information overload Data types (video, text, structured data, etc...) Potential of the data now and in the near future Basics of Data Mining Recommendation and searching Searching and Filtering Sorting Determining weights of the search results Using Synonyms Full-text search Long Tail Chris Anderson's idea Drawbacks of the Long Tail Determining Similarities Products Users Documents and web sites Content-Based Recommendation and measurement of similarities Cosine distance Euclidean distance between vectors TF-IDF and frequency of terms Collaborative filtering Community rating Graphs Applications of graphs Determining similarity of graphs Similarity between users Neural Networks Basic concepts of Neural Networks Training Data and Validation Data Neural Network examples in recommender systems How to encourage users to share their data Making systems more comfortable Navigation Functionality and UX Case Studies Popularity of recommender systems and their problems Examples
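The cosine-distance idea in this outline fits in a few lines. A minimal sketch in plain Python, with toy term-count vectors; TF-IDF weighting would refine the raw counts shown here.

import math

def cosine_similarity(a, b):
    # Cosine of the angle between two term vectors: 1 = same direction, 0 = orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Term counts over the vocabulary ["drools", "rules", "spark", "cluster"]
doc1 = [3, 2, 0, 0]
doc2 = [2, 1, 0, 1]
doc3 = [0, 0, 4, 3]
print(cosine_similarity(doc1, doc2))  # high: overlapping topics
print(cosine_similarity(doc1, doc3))  # near 0: unrelated documents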
192386 R Programming for Data Analysis 14 hours This course is part of the Data Scientist skill set (Domain: Data and Technology) Introduction and preliminaries Making R more friendly, R and available GUIs Rstudio Related software and documentation R and statistics Using R interactively An introductory session Getting help with functions and features R commands, case sensitivity, etc. Recall and correction of previous commands Executing commands from or diverting output to a file Data permanency and removing objects Simple manipulations; numbers and vectors Vectors and assignment Vector arithmetic Generating regular sequences Logical vectors Missing values Character vectors Index vectors; selecting and modifying subsets of a data set Other types of objects Objects, their modes and attributes Intrinsic attributes: mode and length Changing the length of an object Getting and setting attributes The class of an object Arrays and matrices Arrays Array indexing. Subsections of an array Index matrices The array() function The outer product of two arrays Generalized transpose of an array Matrix facilities Matrix multiplication Linear equations and inversion Eigenvalues and eigenvectors Singular value decomposition and determinants Least squares fitting and the QR decomposition Forming partitioned matrices, cbind() and rbind() The concatenation function, c(), with arrays Frequency tables from factors Lists and data frames Lists Constructing and modifying lists Concatenating lists Data frames Making data frames attach() and detach() Working with data frames Attaching arbitrary lists Managing the search path Data manipulation Selecting, subsetting observations and variables Filtering, grouping Recoding, transformations Aggregation, combining data sets Character manipulation, stringr package Reading data Txt files CSV files XLS, XLSX files SPSS, SAS, Stata and other data formats Exporting data to txt, csv and other formats Accessing data from databases using the SQL language Probability distributions R as a set of statistical tables Examining the distribution of a set of data One- and two-sample tests Grouping, loops and conditional execution Grouped expressions Control statements Conditional execution: if statements Repetitive execution: for loops, repeat and while Writing your own functions Simple examples Defining new binary operators Named arguments and defaults The '...' argument Assignments within functions More advanced examples Efficiency factors in block designs Dropping all names in a printed array Recursive numerical integration Scope Customizing the environment Classes, generic functions and object orientation Graphical procedures High-level plotting commands The plot() function Displaying multivariate data Display graphics Arguments to high-level plotting functions Basic visualisation graphs Multivariate relations with lattice and ggplot package Using graphics parameters Graphics parameters list Automated and interactive reporting Combining output from R with text
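For the "least squares fitting and the QR decomposition" topic above, the computation looks like this. The sketch is in Python/NumPy for illustration only; in R the corresponding tools are qr() and solve().

import numpy as np

# Fit y ~ a + b*x: build a design matrix with an intercept column.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])
A = np.column_stack([np.ones_like(x), x])

Q, R = np.linalg.qr(A)              # A = QR, with Q's columns orthonormal
coef = np.linalg.solve(R, Q.T @ y)  # solve the triangular system R beta = Q^T y
print(coef)                         # approximately [intercept, slope]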
210937 Machine Learning with PredictionIO 21 hours PredictionIO is an open source Machine Learning server built on top of a state-of-the-art open source stack. Audience This course is directed at developers and data scientists who want to create predictive engines for any machine learning task. Getting Started Quick Intro Installation Guide Downloading Template Deploying an Engine Customizing an Engine App Integration Overview Developing PredictionIO System Architecture Event Server Overview Collecting Data Learning DASE Implementing DASE Evaluation Overview Intellij IDEA Guide Scala API Machine Learning Education and Usage Examples Comics Recommendation Text Classification Community Contributed Demo Dimensionality Reduction and usage PredictionIO SDKs (Select One) Java PHP Python Ruby Community Contributed
121452 Introduction to Machine Learning with MATLAB 21 hours
81500 Administrator Training for Apache Hadoop 35 hours Audience: The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment Goal: Deep knowledge of Hadoop cluster administration. 1: HDFS (17%) Describe the function of HDFS Daemons Describe the normal operation of an Apache Hadoop cluster, both in data storage and in data processing. Identify current features of computing systems that motivate a system like Apache Hadoop. Classify major goals of HDFS Design Given a scenario, identify appropriate use case for HDFS Federation Identify components and daemons of an HDFS HA-Quorum cluster Analyze the role of HDFS security (Kerberos) Determine the best data serialization choice for a given scenario Describe file read and write paths Identify the commands to manipulate files in the Hadoop File System Shell 2: YARN and MapReduce version 2 (MRv2) (17%) Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 affects cluster settings Understand how to deploy MapReduce v2 (MRv2 / YARN), including all YARN daemons Understand basic design strategy for MapReduce v2 (MRv2) Determine how YARN handles resource allocations Identify the workflow of a MapReduce job running on YARN Determine which files you must change and how in order to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN. 3: Hadoop Cluster Planning (16%) Principal points to consider in choosing the hardware and operating systems to host an Apache Hadoop cluster. Analyze the choices in selecting an OS Understand kernel tuning and disk swapping Given a scenario and workload pattern, identify a hardware configuration appropriate to the scenario Given a scenario, determine the ecosystem components your cluster needs to run in order to fulfill the SLA Cluster sizing: given a scenario and frequency of execution, identify the specifics for the workload, including CPU, memory, storage, disk I/O Disk Sizing and Configuration, including JBOD versus RAID, SANs, virtualization, and disk sizing requirements in a cluster Network Topologies: understand network usage in Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario 4: Hadoop Cluster Installation and Administration (25%) Given a scenario, identify how the cluster will handle disk and machine failures Analyze a logging configuration and logging configuration file format Understand the basics of Hadoop metrics and cluster health monitoring Identify the function and purpose of available tools for cluster monitoring Be able to install all the ecosystem components in CDH 5, including (but not limited to): Impala, Flume, Oozie, Hue, Manager, Sqoop, Hive, and Pig Identify the function and purpose of available tools for managing the Apache Hadoop file system 5: Resource Management (10%) Understand the overall design goals of each of Hadoop's schedulers Given a scenario, determine how the FIFO Scheduler allocates cluster resources Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN Given a scenario, determine how the Capacity Scheduler allocates cluster resources 6: Monitoring and Logging (15%) Understand the functions and features of Hadoop’s metric collection abilities Analyze the NameNode and JobTracker Web UIs Understand how to monitor cluster Daemons Identify and monitor CPU usage on master nodes Describe how to monitor swap and memory allocation on all nodes Identify how to view and manage 
Hadoop’s log files Interpret a log file
73776 Statistics with SPSS Predictive Analytics Software 14 hours Goal: Learning to work with SPSS independently Audience: Analysts, researchers, scientists, students and all those who want to acquire the ability to use the SPSS package and learn popular data mining techniques. Using the program Dialog boxes Input / downloading data The concept of a variable and measurement scales Preparing a database Generating tables and graphs Formatting the report Command language syntax Automated analysis Storage and modification of procedures Creating your own analytical procedures Data Analysis Descriptive statistics Key terms: e.g. variable, hypothesis, statistical significance Measures of central tendency Measures of dispersion Standardization Introduction to researching the relationships between variables Correlational and experimental methods Summary: case study and discussion
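The descriptive-statistics block above (central tendency, dispersion, standardization) reduces to a few lines of arithmetic. A NumPy sketch for illustration, independent of SPSS itself:

import numpy as np

scores = np.array([12.0, 15.0, 19.0, 22.0, 30.0])
mean = scores.mean()        # measure of central tendency
sd = scores.std(ddof=1)     # measure of dispersion (sample standard deviation)
z = (scores - mean) / sd    # standardization: z-scores with mean 0, SD 1
print(mean, sd)
print(z)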
192390 Data Mining & Machine Learning with R 14 hours Introduction to Data Mining and Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Discriminant analysis Logistic regression K-Nearest neighbors Support Vector Machines Neural networks Decision trees Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means Advanced topics Ensemble models Mixed models Boosting Examples Dimensionality reduction Factor Analysis Principal Component Analysis Examples
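As a sketch of the cross-validation module above, here is k-fold cross-validation wrapped around a Naive Bayes classifier. scikit-learn is used purely for illustration, since the course itself works in R.

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
scores = cross_val_score(GaussianNB(), X, y, cv=5)  # 5-fold cross-validation
print(scores.mean(), scores.std())  # estimate of out-of-sample accuracy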
116502 Big Data Architect 35 hours Day 1 - provides a high-level overview of essential Big Data topic areas. The module is divided into a series of sections, each of which is accompanied by a hands-on exercise. Day 2 - explores a range of topics that relate to analysis practices and tools for Big Data environments. It does not get into implementation or programming details, but instead keeps coverage at a conceptual level, focusing on topics that enable participants to develop a comprehensive understanding of the common analysis functions and features offered by Big Data solutions. Day 3 - provides an overview of the fundamental and essential topic areas relating to Big Data solution platform architecture. It covers Big Data mechanisms required for the development of a Big Data solution platform and architectural options for assembling a data processing platform. Common scenarios are also presented to provide a basic understanding of how a Big Data solution platform is generally used. Day 4 - builds upon Day 3 by exploring advanced topics relating to Big Data solution platform architecture. In particular, different architectural layers that make up the Big Data solution platform are introduced and discussed, including data sources, data ingress, data storage, data processing and security. Day 5 - covers a number of exercises and problems designed to test the delegates' ability to apply knowledge of topics covered on Days 3 and 4. Day 1 - Fundamental Big Data Understanding Big Data Fundamental Terminology & Concepts Big Data Business & Technology Drivers Traditional Enterprise Technologies Related to Big Data Characteristics of Data in Big Data Environments Dataset Types in Big Data Environments Fundamental Analysis and Analytics Machine Learning Types Business Intelligence & Big Data Data Visualization & Big Data Big Data Adoption & Planning Considerations Day 2 - Big Data Analysis & Technology Concepts Big Data Analysis Lifecycle (from business case evaluation to data analysis and visualization) A/B Testing, Correlation, Regression, Heat Maps Time Series Analysis Network Analysis Spatial Data Analysis Classification, Clustering Outlier Detection Filtering (including collaborative filtering & content-based filtering) Natural Language Processing Sentiment Analysis, Text Analytics File Systems & Distributed File Systems, NoSQL Distributed & Parallel Data Processing, Processing Workloads, Clusters Cloud Computing & Big Data Foundational Big Data Technology Mechanisms Day 3 - Fundamental Big Data Architecture New Big Data Mechanisms, including ... Security Engine Cluster Manager Data Governance Manager Visualization Engine Productivity Portal Data Processing Architectural Models, including ... Shared-Everything and Shared-Nothing Architectures Enterprise Data Warehouse and Big Data Integration Approaches, including ... Series Parallel Big Data Appliance Data Virtualization Architectural Big Data Environments, including ... ETL Analytics Engine Application Enrichment Cloud Computing & Big Data Architectural Considerations, including ... how Cloud Delivery and Deployment Models can be used to host and process Big Data Solutions Day 4 - Advanced Big Data Architecture Big Data Solution Architectural Layers including ... Data Sources, Data Ingress and Storage, Event Stream Processing and Complex Event Processing, Egress, Visualization and Utilization, Big Data Architecture and Security, Maintenance and Governance Big Data Solution Design Patterns, including ... 
Patterns pertaining to Data Ingress, Data Wrangling, Data Storage, Data Processing, Data Analysis, Data Egress, Data Visualization Big Data Architectural Compound Patterns Day 5 - Big Data Architecture Lab Incorporates a set of detailed exercises that require delegates to solve various inter-related problems, with the goal of fostering a comprehensive understanding of how different data architecture technologies, mechanisms and techniques can be applied to solve problems in Big Data environments.
121393 Introduction to Deep Learning 21 hours This course is a general overview of Deep Learning without going too deep into any specific method. It is suitable for people who want to start using Deep Learning to enhance the accuracy of their predictions. Backprop, modular models Logsum module RBF Net MAP/MLE loss Parameter Space Transforms Convolutional Module Gradient-Based Learning Energy for inference, Objective for learning PCA; NLL: Latent Variable Models Probabilistic LVM Loss Function Handwriting recognition
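A minimal backpropagation sketch for the "Backprop, modular models" topic above: one tanh hidden layer, squared loss, plain NumPy. It is purely illustrative of the chain rule at work, not a practical training setup.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
y = X[:, :1] * X[:, 1:2]                  # toy regression target

W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
lr = 0.05
for step in range(500):
    h = np.tanh(X @ W1)                   # forward pass through the hidden layer
    pred = h @ W2
    grad_pred = 2 * (pred - y) / len(X)   # d(mean squared loss)/d(pred)
    grad_W2 = h.T @ grad_pred             # backprop through the output layer
    grad_h = grad_pred @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h**2)) # backprop through the tanh nonlinearity
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final loss:", float(((np.tanh(X @ W1) @ W2 - y) ** 2).mean()))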
83619 Drools Rules Administration 21 hours This course has been prepared for people who are involved in administering corporate knowledge assets (rules, processes) like system administrators, system integrators, application server administrators, etc... We are using the newest stable community version of Drools to run this course, but older versions are also possible if agreed before booking. Drools Administration Short Introduction to Rule Engines Artificial Intelligence Expert Systems What is a Rule Engine? Why use a Rule Engine? Advantages of a Rule Engine When should you use a Rule Engine? Scripting or Process Engines When you should NOT use a Rule Engine Strong and Loose Coupling What are rules? Where things are Managing rules in a jar file Git repository Executing rules from KIE Managing BPMN and workflow files Moving knowledge files (rules, processes, forms, work items...) Rules Testing Where to store tests How to execute tests Testing with JUnit Deployment Strategies Standalone application Invoking rules from Java code Integration via files (json, xml, etc...) Integration via web services Using KIE for integration Administration of rules authoring Packages Artifact Repository Asset Editor Validation Data Model Categories Versioning Domain Specific Languages Optimizing hardware and software for rules execution Multithreading and Drools KIE Project structures Lifecycles Building Deploying Running Installation and Deployment Cheat Sheets Organizational Units Users, Rules and Permissions Authentication Repositories Backup and Restore Logging
176933 Introductory R for Biologists 28 hours I. Introduction and preliminaries 1. Overview Making R more friendly, R and available GUIs Rstudio Related software and documentation R and statistics Using R interactively An introductory session Getting help with functions and features R commands, case sensitivity, etc. Recall and correction of previous commands Executing commands from or diverting output to a file Data permanency and removing objects Good programming practice: self-contained scripts, good readability e.g. structured scripts, documentation, markdown Installing packages; CRAN and Bioconductor 2. Reading data Txt files (read.delim) CSV files 3. Simple manipulations; numbers and vectors + arrays Vectors and assignment Vector arithmetic Generating regular sequences Logical vectors Missing values Character vectors Index vectors; selecting and modifying subsets of a data set Arrays Array indexing. Subsections of an array Index matrices The array() function + simple operations on arrays e.g. multiplication, transposition Other types of objects 4. Lists and data frames Lists Constructing and modifying lists Concatenating lists Data frames Making data frames Working with data frames Attaching arbitrary lists Managing the search path 5. Data manipulation Selecting, subsetting observations and variables Filtering, grouping Recoding, transformations Aggregation, combining data sets Forming partitioned matrices, cbind() and rbind() The concatenation function, c(), with arrays Character manipulation, stringr package Short intro into grep and regexpr 6. More on reading data XLS, XLSX files readr and readxl packages SPSS, SAS, Stata and other data formats Exporting data to txt, csv and other formats 7. Grouping, loops and conditional execution Grouped expressions Control statements Conditional execution: if statements Repetitive execution: for loops, repeat and while Intro into apply, lapply, sapply, tapply 8. Functions Creating functions Optional arguments and default values Variable number of arguments Scope and its consequences 9. Simple graphics in R Creating a Graph Density Plots Dot Plots Bar Plots Line Charts Pie Charts Boxplots Scatter Plots Combining Plots II. Statistical analysis in R 1. Probability distributions R as a set of statistical tables Examining the distribution of a set of data 2. Testing of Hypotheses Tests about a Population Mean Likelihood Ratio Test One- and two-sample tests Chi-Square Goodness-of-Fit Test Kolmogorov-Smirnov One-Sample Statistic Wilcoxon Signed-Rank Test Two-Sample Test Wilcoxon Rank Sum Test Mann-Whitney Test Kolmogorov-Smirnov Test 3. Multiple Testing of Hypotheses Type I Error and FDR ROC curves and AUC Multiple Testing Procedures (BH, Bonferroni etc.) 4. Linear regression models Generic functions for extracting model information Updating fitted models Generalized linear models Families The glm() function Classification Logistic Regression Linear Discriminant Analysis Unsupervised learning Principal Components Analysis Clustering Methods (k-means, hierarchical clustering, k-medoids) 5. Survival analysis (survival package) Survival objects in R Kaplan-Meier estimate, log-rank test, parametric regression Confidence bands Censored (interval censored) data analysis Cox PH models, constant covariates Cox PH models, time-dependent covariates Simulation: Model comparison (Comparing regression models) 6. Analysis of Variance One-Way ANOVA Two-Way Classification of ANOVA MANOVA III. 
Worked problems in bioinformatics Short introduction to the limma package Microarray data analysis workflow Data download from GEO: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE1397 Data processing (QC, normalisation, differential expression) Volcano plot Clustering examples + heatmaps
192394 Predictive Modelling with R 14 hours Problems facing forecasters Customer demand planning Investor uncertainty Economic planning Seasonal changes in demand/utilization Roles of risk and uncertainty Time series Forecasting Seasonal adjustment Moving average Exponential smoothing Extrapolation Linear prediction Trend estimation Stationarity and ARIMA modelling Econometric methods (causal methods) Regression analysis Multiple linear regression Multiple non-linear regression Regression validation Forecasting from regression Judgemental methods Surveys Delphi method Scenario building Technology forecasting Forecast by analogy Simulation and other methods Simulation Prediction market Probabilistic forecasting and Ensemble forecasting
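Two of the smoothing methods listed above take one line each. A pandas sketch for illustration; the course itself uses R, where the rough analogues are filter() and HoltWinters().

import pandas as pd

demand = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119])

moving_avg = demand.rolling(window=3).mean()  # 3-period moving average
exp_smooth = demand.ewm(alpha=0.3).mean()     # simple exponential smoothing

# A naive one-step-ahead forecast is the last smoothed value.
print(moving_avg.iloc[-1], exp_smooth.iloc[-1])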
211585 TensorFlow for Image Recognition 28 hours This course explores, with specific examples, the application of TensorFlow to the purposes of image recognition. Audience This course is intended for engineers seeking to utilize TensorFlow for the purposes of image recognition. After completing this course, delegates will be able to: understand TensorFlow’s structure and deployment mechanisms carry out installation / production environment / architecture tasks and configuration assess code quality, perform debugging, monitoring implement advanced production tasks like training models, building graphs and logging Machine Learning and Recurrent Neural Networks (RNN) basics NN and RNN Backpropagation Long short-term memory (LSTM) TensorFlow Basics Creation, Initializing, Saving, and Restoring TensorFlow variables Feeding, Reading and Preloading TensorFlow Data How to use TensorFlow infrastructure to train models at scale Visualizing and Evaluating models with TensorBoard TensorFlow Mechanics 101 Tutorial Files Prepare the Data Download Inputs and Placeholders Build the Graph Inference Loss Training Train the Model The Graph The Session Train Loop Evaluate the Model Build the Eval Graph Eval Output Advanced Usage Threading and Queues Distributed TensorFlow Writing Documentation and Sharing your Model Customizing Data Readers Using GPUs Manipulating TensorFlow Model Files TensorFlow Serving Introduction Basic Serving Tutorial Advanced Serving Tutorial Serving Inception Model Tutorial Convolutional Neural Networks Overview Goals Highlights of the Tutorial Model Architecture Code Organization CIFAR-10 Model Model Inputs Model Prediction Model Training Launching and Training the Model Evaluating a Model Training a Model Using Multiple GPU Cards Placing Variables and Operations on Devices Launching and Training the Model on Multiple GPU cards Deep Learning for MNIST Setup Load MNIST Data Start TensorFlow InteractiveSession Build a Softmax Regression Model Placeholders Variables Predicted Class and Cost Function Train the Model Evaluate the Model Build a Multilayer Convolutional Network Weight Initialization Convolution and Pooling First Convolutional Layer Second Convolutional Layer Densely Connected Layer Readout Layer Train and Evaluate the Model Image Recognition Inception-v3 C++ Java
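The "Build a Softmax Regression Model" step above compresses nicely into the tf.keras API. A hedged sketch follows; the exact API used in a given course run depends on the TensorFlow version.

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Softmax regression = flatten the image + one dense layer with softmax outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, verbose=2)
print(model.evaluate(x_test, y_test, verbose=0))  # [test loss, test accuracy]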
177545 Advanced Deep Learning 28 hours Machine Learning Limitations Machine Learning, Non-linear mappings Neural Networks Non-Linear Optimization, Stochastic/MiniBatch Gradient Descent Back Propagation Deep Sparse Coding Sparse Autoencoders (SAE) Convolutional Neural Networks (CNNs) Successes: Descriptor Matching Stereo-based Obstacle Avoidance for Robotics Pooling and invariance Visualization/Deconvolutional Networks Recurrent Neural Networks (RNNs) and their optimization Applications to NLP RNNs continued, Hessian-Free Optimization Language analysis: word/sentence vectors, parsing, sentiment analysis, etc. Probabilistic Graphical Models Hopfield Nets, Boltzmann machines, Restricted Boltzmann Machines Hopfield Networks, (Restricted) Boltzmann Machines Deep Belief Nets, Stacked RBMs Applications to NLP, Pose and Activity Recognition in Videos Recent Advances Large-Scale Learning Neural Turing Machines
83626 From Data to Decision with Big Data and Predictive Analytics 21 hours Audience If you try to make sense of the data you have access to or want to analyse unstructured data available on the net (like Twitter, LinkedIn, etc...), this course is for you. It is mostly aimed at decision makers and people who need to choose what data is worth collecting and what is worth analyzing. It is not aimed at people configuring the solution; those people will benefit from the big picture though. Delivery Mode During the course delegates will be presented with working examples of mostly open source technologies. Short lectures will be followed by presentations and simple exercises by the participants. Content and Software used All software used is updated each time the course is run, so we check the newest versions possible. It covers the process from obtaining, formatting, processing and analysing data, to explaining how to automate the decision making process with machine learning. Quick Overview Data Sources Mining Data Recommender systems Target Marketing Datatypes Structured vs unstructured Static vs streamed Attitudinal, behavioural and demographic data Data-driven vs user-driven analytics Data validity Volume, velocity and variety of data Models Building models Statistical Models Machine learning Data Classification Clustering kGroups, k-means, nearest neighbours Ant colonies, birds flocking Predictive Models Decision trees Support vector machine Naive Bayes classification Neural networks Markov Model Regression Ensemble methods ROI Benefit/Cost ratio Cost of software Cost of development Potential benefits Building Models Data Preparation (MapReduce) Data cleansing Choosing methods Developing model Testing Model Model evaluation Model deployment and integration Overview of Open Source and commercial software Selection of R-project package Python libraries Hadoop and Mahout Selected Apache projects related to Big Data and Analytics Selected commercial solutions Integration with existing software and data sources
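Of the predictive models listed above, a decision tree is the easiest to show end to end. An illustrative scikit-learn sketch on synthetic data, standing in for whichever open source tool a given course run uses:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree is easy to explain to decision makers and overfits less.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print("held-out accuracy:", tree.score(X_test, y_test))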
131343 Oracle SQL Intermediate - Data Extraction 14 hours Limiting results The WHERE clause Comparison operators The LIKE condition The BETWEEN ... AND condition The IS NULL condition The IN condition Boolean operators AND, OR and NOT Multiple conditions in the WHERE clause Operator precedence The DISTINCT clause SQL functions The differences between single-row and multi-row functions Text, numeric and date functions Explicit and implicit conversion Conversion functions Nesting functions Viewing the results of functions - the dual table Getting the current date with the SYSDATE function Handling of NULL values Aggregating data using grouping functions Grouping functions How grouping functions treat NULL values Creating groups of data - the GROUP BY clause Grouping by multiple columns Limiting grouped results - the HAVING clause Subqueries Placing subqueries in the SELECT command Single-row and multi-row subqueries Operators for single-row subqueries Grouping functions in subqueries Operators for multi-row subqueries: IN, ALL, ANY How NULL values are treated in subqueries Set operators The UNION operator The UNION ALL operator The INTERSECT operator The MINUS operator Further Usage Of Joins Revisit Joins Combining Inner and Outer Joins Partitioned Outer Joins Hierarchical Queries Further Usage Of Sub-Queries Revisit sub-queries Use of sub-queries as virtual tables/inline views and columns Use of the WITH construction Combining sub-queries and joins Analytic functions The OVER clause The Partition Clause The Windowing Clause Rank, Lead, Lag, First, Last functions Retrieving data from multiple tables (if time at end) Types of joins The use of NATURAL JOIN Table aliases Joins in the WHERE clause INNER JOIN Outer joins: LEFT, RIGHT, FULL OUTER JOIN Cartesian product Aggregate Functions (if time at end) Revisit Group By function and Having clause Group and Rollup Group and Cube
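The GROUP BY / HAVING pattern from this outline, demonstrated through Python's built-in sqlite3 module so the snippet is self-contained; the course uses Oracle SQL, but the clause semantics shown here carry over.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (dept TEXT, salary NUMERIC)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("IT", 5000), ("IT", 7000), ("HR", 4000), ("HR", 4500)])

# Aggregate per group, then filter the groups themselves with HAVING.
rows = conn.execute("""
    SELECT dept, AVG(salary)
    FROM emp
    GROUP BY dept
    HAVING AVG(salary) > 4500
""").fetchall()
print(rows)  # [('IT', 6000.0)] - HR's average (4250) is filtered out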
210929 Introduction to R with Time Series Analysis 21 hours Introduction and preliminaries Making R more friendly, R and available GUIs Rstudio Related software and documentation R and statistics Using R interactively An introductory session Getting help with functions and features R commands, case sensitivity, etc. Recall and correction of previous commands Executing commands from or diverting output to a file Data permanency and removing objects Simple manipulations; numbers and vectors Vectors and assignment Vector arithmetic Generating regular sequences Logical vectors Missing values Character vectors Index vectors; selecting and modifying subsets of a data set Other types of objects Objects, their modes and attributes Intrinsic attributes: mode and length Changing the length of an object Getting and setting attributes The class of an object Arrays and matrices Arrays Array indexing. Subsections of an array Index matrices The array() function The outer product of two arrays Generalized transpose of an array Matrix facilities Matrix multiplication Linear equations and inversion Eigenvalues and eigenvectors Singular value decomposition and determinants Least squares fitting and the QR decomposition Forming partitioned matrices, cbind() and rbind() The concatenation function, c(), with arrays Frequency tables from factors Lists and data frames Lists Constructing and modifying lists Concatenating lists Data frames Making data frames attach() and detach() Working with data frames Attaching arbitrary lists Managing the search path Data manipulation Selecting, subsetting observations and variables Filtering, grouping Recoding, transformations Aggregation, combining data sets Character manipulation, stringr package Reading data Txt files CSV files XLS, XLSX files SPSS, SAS, Stata and other data formats Exporting data to txt, csv and other formats Accessing data from databases using the SQL language Probability distributions R as a set of statistical tables Examining the distribution of a set of data One- and two-sample tests Grouping, loops and conditional execution Grouped expressions Control statements Conditional execution: if statements Repetitive execution: for loops, repeat and while Writing your own functions Simple examples Defining new binary operators Named arguments and defaults The '...' argument Assignments within functions More advanced examples Efficiency factors in block designs Dropping all names in a printed array Recursive numerical integration Scope Customizing the environment Classes, generic functions and object orientation Graphical procedures High-level plotting commands The plot() function Displaying multivariate data Display graphics Arguments to high-level plotting functions Basic visualisation graphs Multivariate relations with lattice and ggplot package Using graphics parameters Graphics parameters list Time series Forecasting Seasonal adjustment Moving average Exponential smoothing Extrapolation Linear prediction Trend estimation Stationarity and ARIMA modelling Econometric methods (causal methods) Regression analysis Multiple linear regression Multiple non-linear regression Regression validation Forecasting from regression
234409 Deep Learning for Vision with Caffe 21 hours Caffe is a deep learning framework made with expression, speed, and modularity in mind. This course explores the application of Caffe as a deep learning framework for image recognition, using MNIST as an example. Audience This course is suitable for Deep Learning researchers and engineers interested in utilizing Caffe as a framework. After completing this course, delegates will be able to: understand Caffe’s structure and deployment mechanisms carry out installation / production environment / architecture tasks and configuration assess code quality, perform debugging, monitoring implement advanced production tasks like training models, implementing layers and logging Installation Docker Ubuntu RHEL / CentOS / Fedora installation Windows Caffe Overview Nets, Layers, and Blobs: the anatomy of a Caffe model. Forward / Backward: the essential computations of layered compositional models. Loss: the task to be learned is defined by the loss. Solver: the solver coordinates model optimization. Layer Catalogue: the layer is the fundamental unit of modeling and computation – Caffe’s catalogue includes layers for state-of-the-art models. Interfaces: command line, Python, and MATLAB Caffe. Data: how to caffeinate data for model input. Caffeinated Convolution: how Caffe computes convolutions. New models and new code Detection with Fast R-CNN Sequences with LSTMs and Vision + Language with LRCN Pixelwise prediction with FCNs Framework design and future Examples: MNIST
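A hedged sketch of loading a trained model for inference through the pycaffe interface mentioned above; the .prototxt/.caffemodel file names are placeholders and assume a completed LeNet/MNIST training run like the course example.

import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net("lenet_deploy.prototxt",        # architecture (placeholder name)
                "lenet_iter_10000.caffemodel",  # trained weights (placeholder name)
                caffe.TEST)

# One grayscale 28x28 digit, shaped to the net's (batch, channel, h, w) data blob.
image = np.zeros((1, 1, 28, 28), dtype=np.float32)
net.blobs["data"].data[...] = image
out = net.forward()
print(out["prob"].argmax())  # index of the predicted digit class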
2914 Business Rule Management (BRMS) with Drools 7 hours This course is aimed at enterprise architects, business and system analysts and managers who want to apply business rules to their solutions. With Drools you can write your business rules using almost natural language, thereby reducing the gap between business and IT. Short Introduction to Rule Engines Artificial Intelligence Expert Systems What is a Rule Engine? Why use a Rule Engine? Advantages of a Rule Engine When should you use a Rule Engine? Scripting or Process Engines When you should NOT use a Rule Engine Strong and Loose Coupling What are rules? Creating and Implementing Rules Fact Model KIE Spreadsheet Eclipse Domain Specific Language (DSL) Replacing rules with DSL Testing DSL rules jBPM Integration with Drools Fusion What is Complex Event Processing? Short overview of Fusion Rules Testing Testing with KIE Testing with JUnit Integrating Rules with Applications Invoking rules from Java Code
83724 OptaPlanner in Practice 21 hours Planner introduction What is OptaPlanner? What is a planning problem? Use Cases and examples Bin Packing Problem Example Problem statement Problem size Domain model diagram Main method Solver configuration Domain model implementation Score configuration Travelling Salesman Problem (TSP) Problem statement Problem size Domain model Main method Chaining Solver configuration Domain model implementation Score configuration Planner configuration Overview Solver configuration Model your planning problem Use the Solver Score calculation Score terminology Choose a Score definition Calculate the Score Score calculation performance tricks Reusing the Score calculation outside the Solver Optimization algorithms Search space size in the real world Does Planner find the optimal solution? Architecture overview Optimization algorithms overview Which optimization algorithms should I use? SolverPhase Scope overview Termination SolverEventListener Custom SolverPhase Move and neighborhood selection Move and neighborhood introduction Generic Move Selectors Combining multiple MoveSelectors EntitySelector ValueSelector General Selector features Custom moves Construction heuristics First Fit Best Fit Advanced Greedy Fit Cheapest insertion Regret insertion Local search Local Search concepts Hill Climbing (Simple Local Search) Tabu Search Simulated Annealing Late Acceptance Step counting hill climbing Late Simulated Annealing (experimental) Using a custom Termination, MoveSelector, EntitySelector, ValueSelector or Acceptor Evolutionary algorithms Evolutionary Strategies Genetic Algorithms Hyperheuristics Exact methods Brute Force Depth-first Search Benchmarking and tweaking Finding the best Solver configuration Doing a benchmark Benchmark report Summary statistics Statistics per dataset (graph and CSV) Advanced benchmarking Repeated planning Introduction to repeated planning Backup planning Continuous planning (windowed planning) Real-time planning (event based planning) Drools Short introduction to Drools Writing Score Function in Drools Integration Overview Persistent storage SOA and ESB Other environments
139417 Data Protection 35 hours This is an instructor-led course, and is the non-certification version of the "CDP - Certificate in Data Protection" course. Those experienced in data protection issues, as well as those new to the subject, need to be trained so that their organisations are confident that legal compliance is continually addressed. It is necessary to identify issues requiring expert data protection advice in good time in order that organisational reputation and credibility are enhanced through relevant data protection policies and procedures. Objectives: The aim of the syllabus is to promote an understanding of how the data protection principles work rather than simply focusing on the mechanics of regulation. The syllabus places the Act in the context of human rights and promotes good practice within organisations. On completion you will have: an appreciation of the broader context of the Act an understanding of the way in which the Act and the Privacy and Electronic Communications (EC Directive) Regulations 2003 work a broad understanding of the way associated legislation relates to the Act an understanding of what has to be done to achieve compliance Course Synopsis: The syllabus comprises three main parts, each with sub-sections. Context - this will address the origins of and reasons for the Act together with consideration of privacy in general. Law – Data Protection Act - this will address the main concepts and elements of the Act and subordinate legislation. Application - this will consider how compliance is achieved and how the Act works in practice. 1. Context The objective is to ensure a basic appreciation of the context of data protection law and in particular that privacy is wider than data protection. 1.1 What is privacy? 1.1.1 The right to private and family life and the relevance of confidentiality. 1.1.2 European Convention on Human Rights and Fundamental Freedoms, UK Human Rights Act 1.2 History of data protection legislation in the UK 1.2.1 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data 1980 1.2.2 Council of Europe Convention 108, 1981 1.2.3 Data Protection Act 1984 1.2.4 Data Protection Directive 95/46/EC 1.2.5 Telecommunications Directive 97/66/EC, Privacy and Electronic Communications 2. The Law 2.1 Data Protection Act 2.1.1 The definitions The objective is to ensure that candidates know and understand the major definitions in the Act and how to apply them in order to identify what information and processing activities are subject to the Act. 2.1.2 The Role of the Commissioner The objective is to ensure an understanding of the role and main powers of the Information Commissioner. The following are to be covered. 2.1.2.1 Enforcement (including roles of the First-tier Tribunal and the Courts) Information and Enforcement Notices Prosecution Warrants (entry/inspection) (Schedule 9,1(1) & 12 only – that is a basic understanding of grounds for issuing and nature of offences) Assessment Notices (s41A-s41C) including the effect of s55 (3) added by the Coroners and Justice Act 2009, which provides that the Information Commissioner may not issue a monetary penalty notice in respect of anything found in pursuance of an assessment notice or an assessment under s51 (7). Monetary penalties (s55A-55E) including the effect of the s55 (3A) provision. Undertakings (NB candidates are required to have a basic understanding of how the ICO uses ‘undertakings’ and that they do not derive from any provision in the DPA98. They are not expected to know the detail of their status and provenance). 2.1.2.2 Carrying out s42 assessments 2.1.2.3 Codes of Practice (including the s52A-52E Code of Practice on data sharing) and all current ICO-issued Codes, but not any codes issued by other bodies. Candidates will be expected to have a broad understanding of s52A-E, to appreciate the distinction between a statutory code and other ICO-issued codes, and to have a broad understanding (but not a detailed knowledge) of ICO-issued codes. 2.1.3 Notification The exemptions from notification. A basic understanding of the two-tier fee regime. 2.1.4 The Data Protection Principles The objective is to ensure an understanding of how the principles regulate the processing of personal data and how they are enforced, as well as an understanding of the individual principles in the light of guidance on their interpretation found in Part II of Schedule 1. Candidates will be required to show an understanding of the need to interpret and apply the principles in context. Introduction: how the principles regulate and how they are enforced, including Information and Enforcement Notices. 2.1.5 Individual Rights The objective is to ensure an understanding of the rights conferred by the Act and how they can be applied and enforced. 2.1.6 Exemptions The objective is to ensure awareness of the fact that there are exemptions from certain provisions of the Act, and knowledge and understanding of some of these and how to apply them in practice. Candidates are not expected to have a detailed knowledge of all the exemptions. The following are expected to be covered in some detail: 2.1.7 Offences The objective is to ensure an awareness of the fact that there are a range of offences under the Act and of the role of the Courts, as well as an appreciation of how certain specified offences apply in practice. It is not intended that candidates should have a detailed knowledge of all the offences. The candidates will be expected to cover: Unlawful obtaining and disclosure of personal data Unlawful selling of personal data Processing without notification Failure to notify changes in processing Failure to comply with an Enforcement Notice, an Information Notice or Special Information Notice Warrant offences (Schedule 9,12) 2.2 Privacy and Electronic Communications (EC Directive) Regulations 2003 The objective is to ensure an awareness of the relationship between the above Regulations and the Act, an awareness of the broad scope of the Regulations and a detailed understanding of the practical application of the main provisions relating to unsolicited marketing. 2.3 Associated legislation The objective is to ensure a basic awareness of some other legislation which is relevant and an appreciation that data protection legislation must be considered in the context of other law. 3. Application The objective is to ensure an understanding of the practical application of the Act in a range of circumstances. This will include detailed analysis of sometimes complex scenarios, deciding how the Act applies in particular circumstances, and explaining and justifying a decision taken or advice given. 3.1 How to comply with the Act 3.2 Addressing scenarios in specific areas 3.3 Data processing topics Monitoring – internet, email, telephone calls and CCTV Use of the internet (including Electronic Commerce) Data matching Disclosure and Data sharing
192382 Big Data & Database Systems Fundamentals 14 hours The course is part of the Data Scientist skill set (Domain: Data and Technology). Data Warehousing Concepts What is a Data Warehouse? Difference between OLTP and Data Warehousing Data Acquisition Data Extraction Data Transformation Data Loading Data Marts Dependent vs Independent Data Marts Database design ETL Testing Concepts: Introduction. Software development life cycle. Testing methodologies. ETL Testing Work Flow Process. ETL Testing Responsibilities in DataStage. Big Data Fundamentals Big Data and its role in the corporate world The phases of development of a Big Data strategy within a corporation Explain the rationale underlying a holistic approach to Big Data Components needed in a Big Data Platform Big Data storage solutions Limits of Traditional Technologies Overview of database types NoSQL Databases Hadoop MapReduce Apache Spark
182612 Cassandra for Developers 21 hours This course will introduce Cassandra – a popular NoSQL database. It will cover Cassandra principles, architecture and data model. Students will learn data modeling in CQL (Cassandra Query Language) in hands-on, interactive labs. This session also discusses Cassandra internals and some admin topics. Duration : 3 days Audience : Developers Section 1: Introduction to Big Data / NoSQL NoSQL overview CAP theorem When is NoSQL appropriate Columnar storage NoSQL ecosystem Section 2 : Cassandra Basics Design and architecture Cassandra nodes, clusters, datacenters Keyspaces, tables, rows and columns Partitioning, replication, tokens Quorum and consistency levels Labs : interacting with Cassandra using CQLSH Section 3: Data Modeling – part 1 Introduction to CQL CQL Datatypes Creating keyspaces & tables Choosing columns and types Choosing primary keys Data layout for rows and columns Time to live (TTL) Querying with CQL CQL updates Collections (list / map / set) Labs : various data modeling exercises using CQL; experimenting with queries and supported data types Section 4: Data Modeling – part 2 Creating and using secondary indexes Composite keys (partition keys and clustering keys) Time series data Best practices for time series data Counters Lightweight transactions (LWT) Labs : creating and using indexes; modeling time series data Section 5 : Data Modeling Labs : Group design session Multiple use cases from various domains are presented Students work in groups to come up with designs and models Discuss various designs, analyze decisions Lab : implement one of the scenarios Section 6: Cassandra drivers Introduction to the Java driver CRUD (Create / Read / Update / Delete) operations using the Java client Asynchronous queries Labs : using the Java API for Cassandra Section 7 : Cassandra Internals Understand Cassandra design under the hood SSTables, memtables, commit log Read path / write path Caching Vnodes Section 8: Administration Hardware selection Cassandra distributions Cassandra best practices (compaction, garbage collection) Troubleshooting tools and tips Lab : students install Cassandra, run benchmarks Section 9: Bonus Lab (time permitting) Implement a music service like Pandora / Spotify on Cassandra
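To make the CQL topics concrete: a minimal sketch of connecting, defining a table and doing basic reads and writes. The course labs use CQLSH and the Java driver; this illustration uses the Python cassandra-driver package instead, and the keyspace, table and contact point are assumptions.

    from cassandra.cluster import Cluster

    # Connect to a local Cassandra node (the address is an assumption)
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    # An illustrative keyspace and a table keyed for per-user lookups
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS demo.users (user_id text PRIMARY KEY, name text)
    """)

    # Basic writes and reads with parameter binding
    session.execute("INSERT INTO demo.users (user_id, name) VALUES (%s, %s)", ("u1", "Ada"))
    for row in session.execute("SELECT user_id, name FROM demo.users"):
        print(row.user_id, row.name)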
120789 Designing Intelligent User Interfaces with HTML5, JavaScript and Rule Engines 21 hours Coding interfaces which allow users to get what they want easily is hard. This course shows you how to create effective UIs with the newest technologies and libraries. It introduces the idea of coding logic in rule engines (mostly Nools and PHP Rules) to make it easier to modify and test. After that, the course shows how to integrate the logic on the front end of the website using JavaScript. Logic coded this way can be reused on the backend. Writing your rules Available rule engines Stating rules in a declarative manner Extending rules Create unit tests for the rules Available test frameworks Running tests automatically Creating GUI for the rules Available frameworks GUI design principles Integrating logic with the GUI Running rules in the browser Ajax Decision tables Create functional tests for the GUI Available frameworks Testing against multiple browsers
83724 Apache Mahout for Developers 14 hours Audience Developers involved in projects that use machine learning with Apache Mahout. Format Hands-on introduction to machine learning. The course is delivered in a lab format based on real-world practical use cases. Implementing Recommendation Systems with Mahout Introduction to recommender systems Representing recommender data Making recommendations Optimizing recommendations Clustering Basics of clustering Data representation Clustering algorithms Clustering quality improvements Optimizing clustering implementation Application of clustering in the real world Classification Basics of classification Classifier training Classifier quality improvements
139421 Data Mining and Analysis 28 hours Objective: Delegates will be able to analyse big data sets, extract patterns and choose the right variables impacting the results, so that new models with predictive power can be built. Data preprocessing Data Cleaning Data integration and transformation Data reduction Discretization and concept hierarchy generation Statistical inference Probability distributions, Random variables, Central limit theorem Sampling Confidence intervals Statistical Inference Hypothesis testing Multivariate linear regression Specification Subset selection Estimation Validation Prediction Classification methods Logistic regression Linear discriminant analysis K-nearest neighbours Naive Bayes Comparison of Classification methods Neural Networks Fitting neural networks Issues in training neural networks Decision trees Regression trees Classification trees Trees Versus Linear Models Bagging, Random Forests, Boosting Bagging Random Forests Boosting Support Vector Machines and Flexible Discriminants Maximal Margin classifier Support vector classifiers Support vector machines SVMs with two or more classes Relationship to logistic regression Principal Components Analysis Clustering K-means clustering K-medoids clustering Hierarchical clustering Density based clustering Model Assessment and Selection Bias, Variance and Model complexity In-sample prediction error The Bayesian approach Cross-validation Bootstrap methods
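As one small illustration of the comparison-of-classification-methods topic, the sketch below cross-validates two of the listed classifiers on a synthetic data set using scikit-learn (the data and settings are assumptions made for the example).

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic data standing in for a real data set
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Compare two classifiers from the outline via 5-fold cross-validation
    for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                        ("k-nearest neighbours", KNeighborsClassifier(n_neighbors=5))]:
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")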
116432 Machine Learning Fundamentals with Python 14 hours The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Python programming language and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications. Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Machine Learning with Python Choice of libraries Add-on tools Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means
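For instance, the Classification and validation blocks above might be exercised with something like this scikit-learn sketch (the iris data set is a stand-in chosen for illustration):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    # Hold out a test split so the model is validated on unseen data
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Naive Bayes, one of the classifiers listed in the outline
    model = GaussianNB().fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))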
188716 Natural Language Processing with Python 28 hours This course introduces linguists or programmers to NLP in Python. During this course we will mostly use NLTK (nltk.org, the Natural Language Toolkit), but we will also use other libraries relevant and useful for NLP. At the moment we can conduct this course in Python 2.x or Python 3.x. Examples are in English or Mandarin (普通话). Other languages can also be made available if agreed before booking. Overview of Python packages related to NLP Introduction to NLP (examples in Python of course) Simple Text Manipulation Searching Text Counting Words Splitting Texts into Words Lexical dispersion Processing complex structures Representing text in Lists Indexing Lists Collocations Bigrams Frequency Distributions Conditionals with Words Comparing Words (startswith, endswith, islower, isalpha, etc...) Natural Language Understanding Word Sense Disambiguation Pronoun Resolution Machine translations (statistical, rule based, literal, etc...) Exercises NLP in Python in examples Accessing Text Corpora and Lexical Resources Common sources for corpora Conditional Frequency Distributions Counting Words by Genre Creating own corpus Pronouncing Dictionary Shoebox and Toolbox Lexicons Senses and Synonyms Hierarchies Lexical Relations: Meronyms, Holonyms Semantic Similarity Processing Raw Text Printing Truncating Extracting parts of strings Accessing individual characters Searching, replacing, splitting, joining, indexing, etc... Using regular expressions Detecting word patterns Stemming Tokenization Normalization of text Word Segmentation (especially in Chinese) Categorizing and Tagging Words Tagged Corpora Tagged Tokens Part-of-Speech Tagset Python Dictionaries Words to Properties mapping Automatic Tagging Determining the Category of a Word (Morphological, Syntactic, Semantic) Text Classification (Machine Learning) Supervised Classification Sentence Segmentation Cross Validation Decision Trees Extracting Information from Text Chunking Chinking Tags vs Trees Analyzing Sentence Structure Context Free Grammar Parsers Building Feature Based Grammars Grammatical Features Processing Feature Structures Analyzing the Meaning of Sentences Semantics and Logic Propositional Logic First-Order Logic Discourse Semantics Managing Linguistic Data Data Formats (Lexicon vs Text) Metadata
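A small taste of the text-manipulation topics above, using NLTK (the sample sentence is an illustrative assumption):

    import nltk
    from nltk.tokenize import word_tokenize
    from nltk import FreqDist, bigrams

    nltk.download("punkt", quiet=True)  # tokenizer models, downloaded once

    text = "NLP in Python is fun. Python makes NLP accessible."
    tokens = word_tokenize(text.lower())

    # Counting words: a frequency distribution over the tokens
    print(FreqDist(tokens).most_common(3))

    # Bigrams: pairs of adjacent words
    print(list(bigrams(tokens))[:3])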
10540 Artificial Intelligence Overview 7 hours This course has been created for managers, solutions architects, innovation officers, CTOs, software architects and everyone who is interested in an overview of applied artificial intelligence and the near-term forecast for its development. Artificial Intelligence History Intelligent Agents Problem Solving Solving Problems by Searching Beyond Classical Search Adversarial Search Constraint Satisfaction Problems Knowledge and Reasoning Logical Agents First-Order Logic Inference in First-Order Logic Classical Planning Planning and Acting in the Real World Knowledge Representation Uncertain Knowledge and Reasoning Quantifying Uncertainty Probabilistic Reasoning Probabilistic Reasoning over Time Making Simple Decisions Making Complex Decisions Learning Learning from Examples Knowledge in Learning Learning Probabilistic Models Reinforcement Learning Communicating, Perceiving, and Acting; Natural Language Processing Natural Language for Communication Perception Robotics Conclusions Philosophical Foundations AI: The Present and Future
83729 Applied Machine Learning 14 hours This training course is for people who would like to apply Machine Learning in practical applications. Audience This course is for data scientists and statisticians who have some familiarity with statistics and know how to program in R (or Python or another chosen language). The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give participants interested in applying the methods at work practical experience with Machine Learning. Sector-specific examples are used to make the training relevant to the audience. Naive Bayes Multinomial models Bayesian categorical data analysis Discriminant analysis Linear regression Logistic regression GLM EM Algorithm Mixed Models Additive Models Classification KNN Bayesian Graphical Models Factor Analysis (FA) Principal Component Analysis (PCA) Independent Component Analysis (ICA) Support Vector Machines (SVM) for regression and classification Boosting Ensemble models Neural networks Hidden Markov Models (HMM) State Space Models Clustering
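As a hedged sketch of two items from the list above, the pipeline below chains Principal Component Analysis with an SVM classifier. It is written in Python with scikit-learn, although the course may equally be run in R, and the data set and parameters are illustrative assumptions.

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Reduce dimensionality with PCA, then classify with an RBF-kernel SVM
    X, y = load_digits(return_X_y=True)
    pipeline = make_pipeline(PCA(n_components=30), SVC(kernel="rbf"))

    print("5-fold CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())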
182618 Spark for Developers 21 hours OBJECTIVE: This course will introduce Apache Spark. The students will learn how Spark fits into the Big Data ecosystem, and how to use Spark for data analysis. The course covers the Spark shell for interactive data analysis, Spark internals, Spark APIs, Spark SQL, Spark streaming, machine learning and GraphX. AUDIENCE : Developers / Data Analysts Scala primer A quick introduction to Scala Labs : Getting to know Scala Spark Basics Background and history Spark and Hadoop Spark concepts and architecture Spark ecosystem (core, Spark SQL, MLlib, streaming) Labs : Installing and running Spark First Look at Spark Running Spark in local mode Spark web UI Spark shell Analyzing a dataset – part 1 Inspecting RDDs Labs : Spark shell exploration RDDs RDDs concepts Partitions RDD Operations / transformations RDD types Key-Value pair RDDs MapReduce on RDD Caching and persistence Labs : creating & inspecting RDDs; Caching RDDs Spark API programming Introduction to Spark API / RDD API Submitting the first program to Spark Debugging / logging Configuration properties Labs : Programming in Spark API, Submitting jobs Spark SQL SQL support in Spark Dataframes Defining tables and importing datasets Querying data frames using SQL Storage formats : JSON / Parquet Labs : Creating and querying data frames; evaluating data formats MLlib MLlib intro MLlib algorithms Labs : Writing MLlib applications GraphX GraphX library overview GraphX APIs Labs : Processing graph data using Spark Spark Streaming Streaming overview Evaluating Streaming platforms Streaming operations Sliding window operations Labs : Writing Spark streaming applications Spark and Hadoop Hadoop Intro (HDFS / YARN) Hadoop + Spark architecture Running Spark on Hadoop YARN Processing HDFS files using Spark Spark Performance and Tuning Broadcast variables Accumulators Memory management & caching Spark Operations Deploying Spark in production Sample deployment templates Configurations Monitoring Troubleshooting
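To preview the RDD and Spark SQL sections: a minimal local-mode PySpark sketch (the inline data is an assumption made for brevity).

    from pyspark.sql import SparkSession

    # Local-mode session, as in the "Running Spark in local mode" section
    spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

    # RDD transformations: a classic word count
    lines = spark.sparkContext.parallelize(["spark is fast", "spark is general"])
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    print(counts.collect())

    # Spark SQL: register a DataFrame and query it with plain SQL
    df = spark.createDataFrame([("alice", 34), ("bob", 45)], ["name", "age"])
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 40").show()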
217761 Apache Spark MLlib 35 hours MLlib is Spark’s machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. It consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering and dimensionality reduction, as well as lower-level optimization primitives and higher-level pipeline APIs. It is divided into two packages: spark.mllib contains the original API built on top of RDDs. spark.ml provides a higher-level API built on top of DataFrames for constructing ML pipelines. Audience This course is directed at engineers and developers seeking to utilize Apache Spark’s built-in machine learning library. spark.mllib: data types, algorithms, and utilities Data types Basic statistics summary statistics correlations stratified sampling hypothesis testing streaming significance testing random data generation Classification and regression linear models (SVMs, logistic regression, linear regression) naive Bayes decision trees ensembles of trees (Random Forests and Gradient-Boosted Trees) isotonic regression Collaborative filtering alternating least squares (ALS) Clustering k-means Gaussian mixture power iteration clustering (PIC) latent Dirichlet allocation (LDA) bisecting k-means streaming k-means Dimensionality reduction singular value decomposition (SVD) principal component analysis (PCA) Feature extraction and transformation Frequent pattern mining FP-growth association rules PrefixSpan Evaluation metrics PMML model export Optimization (developer) stochastic gradient descent limited-memory BFGS (L-BFGS) spark.ml: high-level APIs for ML pipelines Overview: estimators, transformers and pipelines Extracting, transforming and selecting features Classification and regression Clustering Advanced topics
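As a preview of the spark.ml pipeline material: a minimal sketch that assembles feature columns and fits a logistic regression (the tiny inline data set is an assumption for illustration).

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.master("local[*]").appName("mllib-demo").getOrCreate()

    # Tiny illustrative data set: two features and a binary label
    df = spark.createDataFrame(
        [(0.0, 1.1, 0), (2.0, 1.0, 1), (2.1, 1.3, 1), (0.1, 1.2, 0)],
        ["f1", "f2", "label"])

    # An estimator/transformer pipeline: assemble features, then fit the model
    assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
    pipeline = Pipeline(stages=[assembler, LogisticRegression(maxIter=10)])
    model = pipeline.fit(df)
    model.transform(df).select("label", "prediction").show()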
81501 Hadoop Administration 21 hours The course is dedicated to IT specialists who are looking for a solution to store and process large data sets in a distributed system environment. Course goal: Gaining knowledge of Hadoop cluster administration Introduction to Cloud Computing and Big Data solutions Apache Hadoop evolution: HDFS, MapReduce, YARN Installation and configuration of Hadoop in Pseudo-distributed mode Running MapReduce jobs on a Hadoop cluster Hadoop cluster planning, installation and configuration Hadoop ecosystem: Pig, Hive, Sqoop, HBase Big Data future: Impala, Cassandra
20217 Semantic Web Overview 7 hours The Semantic Web is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Semantic Web Overview Introduction Purpose Standards Ontology Projects Resource Description Framework (RDF) Introduction Motivation and Goals RDF Concepts RDF Vocabulary URI and Namespace (Normative) Datatypes (Normative) Abstract Syntax (Normative) Fragment Identifiers
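To make the RDF concepts concrete: a minimal sketch that builds a one-triple graph and serializes it as Turtle, using Python's rdflib library (the library choice and the example triple are assumptions, not part of the course outline).

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF

    # One subject-predicate-object triple: "alice has the name Alice"
    g = Graph()
    ex = Namespace("http://example.org/")
    g.add((ex.alice, FOAF.name, Literal("Alice")))

    # Turtle is one of the common Semantic Web serialization formats
    print(g.serialize(format="turtle"))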
83733 Managing Business Rules with PHP Business Rules 14 hours This course explains how to write declarative rules using PHP Business Rules (http://sourceforge.net/projects/phprules/). It shows how to write, organize and integrate rules with existing code. Most of the course is based on exercises preceded by short introductions and examples. Short Introduction to Rule Engines Artificial Intelligence Expert Systems What is a Rule Engine? Why use a Rule Engine? Advantages of a Rule Engine When should you use a Rule Engine? Scripting or Process Engines When you should NOT use a Rule Engine Strong and Loose Coupling What are rules? Creating and Implementing Rules Fact Model Rule independence Priority, flags and processes Executing rules Integrating rules with existing applications and Rule Maintenance Rule integration PHP Unit tests and automated testing DDD and TDD with Business rules
182611 Hadoop for Developers (4 days) 28 hours Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce a developer to the various components of the Hadoop ecosystem (HDFS, MapReduce, Pig, Hive and HBase). Section 1: Introduction to Hadoop hadoop history, concepts eco system distributions high level architecture hadoop myths hadoop challenges hardware / software Lab : first look at Hadoop Section 2: HDFS Design and architecture concepts (horizontal scaling, replication, data locality, rack awareness) Daemons : Namenode, Secondary namenode, Data node communications / heart-beats data integrity read / write path Namenode High Availability (HA), Federation labs : Interacting with HDFS Section 3 : Map Reduce concepts and architecture daemons (MRV1) : jobtracker / tasktracker phases : driver, mapper, shuffle/sort, reducer Map Reduce Version 1 and Version 2 (YARN) Internals of Map Reduce Introduction to Java Map Reduce program labs : Running a sample MapReduce program Section 4 : Pig pig vs java map reduce pig job flow pig latin language ETL with Pig Transformations & Joins User defined functions (UDF) labs : writing Pig scripts to analyze data Section 5: Hive architecture and design data types SQL support in Hive Creating Hive tables and querying partitions joins text processing labs : various labs on processing data with Hive Section 6: HBase concepts and architecture hbase vs RDBMS vs cassandra HBase Java API Time series data on HBase schema design labs : Interacting with HBase using shell; programming in HBase Java API; Schema design exercise
116431 Machine Learning Fundamentals with R 14 hours The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the R programming platform and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications. Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means
83595 Data Mining 21 hours The course can be provided with any tools, including free open-source data mining software and applications. Introduction Data mining as the analysis step of the KDD process ("Knowledge Discovery in Databases") Subfield of computer science Discovering patterns in large data sets Sources of methods Artificial intelligence Machine learning Statistics Database systems What is involved? Database and data management aspects Data pre-processing Model and inference considerations Interestingness metrics Complexity considerations Post-processing of discovered structures Visualization Online updating Data mining main tasks Automatic or semi-automatic analysis of large quantities of data Extracting previously unknown interesting patterns groups of data records (cluster analysis) unusual records (anomaly detection) dependencies (association rule mining) Data mining Anomaly detection (Outlier/change/deviation detection) Association rule learning (Dependency modeling) Clustering Classification Regression Summarization Use and applications Able Danger Behavioral analytics Business analytics Cross Industry Standard Process for Data Mining Customer analytics Data mining in agriculture Data mining in meteorology Educational data mining Human genetic clustering Inference attack Java Data Mining Open-source intelligence Path analysis (computing) Police-enforced ANPR in the UK Reactive business intelligence SEMMA Stellar Wind Talx Zapaday Data dredging, data fishing, data snooping
116124 Data Mining with R 14 hours Sources of methods Artificial intelligence Machine learning Statistics Sources of data Preprocessing of data Data Import/Export Data Exploration and Visualization Dimensionality Reduction Dealing with missing values R Packages Data mining main tasks Automatic or semi-automatic analysis of large quantities of data Extracting previously unknown interesting patterns groups of data records (cluster analysis) unusual records (anomaly detection) dependencies (association rule mining) Data mining Anomaly detection (Outlier/change/deviation detection) Association rule learning (Dependency modeling) Clustering Classification Regression Summarization Frequent Pattern Mining Text Mining Decision Trees Regression Neural Networks Sequence Mining Data dredging, data fishing, data snooping
182610 Advanced Hadoop for Developers 21 hours Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase. These advanced programming techniques will be beneficial to experienced Hadoop developers. Audience: developers Duration: three days Format: lectures (50%) and hands-on labs (50%). Section 1: Data Management in HDFS Various Data Formats (JSON / Avro / Parquet) Compression Schemes Data Masking Labs : Analyzing different data formats; enabling compression Section 2: Advanced Pig User-defined Functions Introduction to Pig Libraries (ElephantBird / Data-Fu) Loading Complex Structured Data using Pig Pig Tuning Labs : advanced pig scripting, parsing complex data types Section 3 : Advanced Hive User-defined Functions Compressed Tables Hive Performance Tuning Labs : creating compressed tables, evaluating table formats and configuration Section 4 : Advanced HBase Advanced Schema Modelling Compression Bulk Data Ingest Wide-table / Tall-table comparison HBase and Pig HBase and Hive HBase Performance Tuning Labs : tuning HBase; accessing HBase data from Pig & Hive; Using Phoenix for data modeling
215769 Jenetics 21 hours Jenetics is an advanced Genetic Algorithm (Evolutionary Algorithm) library written in modern-day Java. Audience This course is directed at researchers seeking to utilize Jenetics in their projects. Introduction Architecture Base Classes Domain Classes Operation Classes Engine Classes Nuts and Bolts Concurrency Randomness Serialization Utility Classes Extending Jenetics Genes Chromosomes Selectors Alterers Statistics Engine Advanced Topics Encoding Codec Problem Validation Termination Evolution Performance Internals PRNG Testing Random Seeding Incubation Weasel Program Examples Ones Counting Real Function Rastrigin Function 0/1 knapsack Travelling salesman Evolving Images Build
240765 DeepLearning4J for Image Recognition 21 hours Deeplearning4j is an open-source deep-learning library for Java and Scala on Hadoop and Spark. Audience This course is meant for engineers and developers seeking to utilize DeepLearning4J in their image recognition projects. Getting Started Quickstart: Running Examples and DL4J in Your Projects Comprehensive Setup Guide Convolutional Neural Networks Convolutional Net Introduction Images Are 4-D Tensors? ConvNet Definition How Convolutional Nets Work Maxpooling/Downsampling DL4J Code Sample Other Resources Datasets Datasets and Machine Learning Custom Datasets CSV Data Uploads Scaleout Iterative Reduce Defined Multiprocessor / Clustering Running Worker Nodes Advanced DL4J Build Locally From Master Use the Maven Build Tool Vectorize Data With Canova Build a Data Pipeline Run Benchmarks Configure DL4J in Ivy, Gradle, SBT etc Find a DL4J Class or Method Save and Load Models Interpret Neural Net Output Visualize Data with t-SNE Swap CPUs for GPUs Customize an Image Pipeline Perform Regression With Neural Nets Troubleshoot Training & Select Network Hyperparameters Visualize, Monitor and Debug Network Learning Speed Up Spark With Native Binaries Build a Recommendation Engine With DL4J Use Recurrent Networks in DL4J Build Complex Network Architectures with Computation Graph Train Networks using Early Stopping Download Snapshots With Maven Customize a Loss Function
39553 Model MapReduce and Apache Hadoop 14 hours The course is intended for IT specialists who work with the distributed processing of large data sets across clusters of computers. Data Mining and Business Intelligence Introduction Area of application Capabilities Basics of data exploration Big data What does Big data stand for? Big data and Data mining MapReduce Model basics Example application Stats Cluster model Hadoop What is Hadoop Installation Configuration Cluster settings Architecture and configuration of Hadoop Distributed File System Console tools DistCp tool MapReduce and Hadoop Streaming Administration and configuration of Hadoop On Demand Alternatives
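To illustrate the MapReduce model itself: a self-contained word-count sketch in Python, with the shuffle-and-sort step simulated by sorting and grouping (with Hadoop Streaming, the mapper and reducer would be separate scripts reading stdin; everything here is an illustrative assumption).

    from itertools import groupby

    def mapper(lines):
        # Map: emit (word, 1) for every word in the input
        for line in lines:
            for word in line.split():
                yield word, 1

    def reducer(pairs):
        # Reduce: pairs arrive grouped by key after the shuffle; sum the counts
        for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
            yield word, sum(count for _, count in group)

    if __name__ == "__main__":
        sample = ["big data big clusters", "data everywhere"]
        for word, total in reducer(mapper(sample)):
            print(word, total)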
116117 Predictive Models with PMML 7 hours The course is created for scientists, developers, analysts and anyone else who wants to standardize or exchange their models with the Predictive Model Markup Language (PMML) file format. Predictive Models Intro to predictive models Predictive models supported by PMML PMML Elements Header Data Dictionary Data Transformations Model Mining Schema Targets Output API Overview of API providers for PMML Executing your model in a cloud
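A sketch of exporting a trained model as a PMML file, assuming the third-party sklearn2pmml package (and the Java runtime it requires) is available; the package is not named in the outline and is used here purely for illustration.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn2pmml import sklearn2pmml
    from sklearn2pmml.pipeline import PMMLPipeline

    # Train a simple model inside a PMML-aware pipeline
    X, y = load_iris(return_X_y=True)
    pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier())])
    pipeline.fit(X, y)

    # Write the model out as PMML (Header, Data Dictionary, Model, and so on)
    sklearn2pmml(pipeline, "iris_tree.pmml")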
182616 Hadoop For Administrators 21 hours Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three-day (optionally four-day) course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice cluster bulk data load, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes off with a discussion of securing the cluster with Kerberos. “…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized” — Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising Audience Hadoop administrators Format Lectures and hands-on labs, approximate balance 60% lectures, 40% labs. Prerequisites Introduction Hadoop history, concepts Ecosystem Distributions High level architecture Hadoop myths Hadoop challenges (hardware / software) Labs: discuss your Big Data projects and problems Planning and installation Selecting software, Hadoop distributions Sizing the cluster, planning for growth Selecting hardware and network Rack topology Installation Multi-tenancy Directory structure, logs Benchmarking Labs: cluster install, run performance benchmarks HDFS operations Concepts (horizontal scaling, replication, data locality, rack awareness) Nodes and daemons (NameNode, Secondary NameNode, HA Standby NameNode, DataNode) Health monitoring Command-line and browser-based administration Adding storage, replacing defective drives Labs: getting familiar with HDFS command lines Data ingestion Flume for logs and other data ingestion into HDFS Sqoop for importing from SQL databases to HDFS, as well as exporting back to SQL Hadoop data warehousing with Hive Copying data between clusters (distcp) Using S3 as complementary to HDFS Data ingestion best practices and architectures Labs: setting up and using Flume, the same for Sqoop MapReduce operations and administration Parallel computing before mapreduce: compare HPC vs Hadoop administration MapReduce cluster loads Nodes and Daemons (JobTracker, TaskTracker) MapReduce UI walk through Mapreduce configuration Job config Optimizing MapReduce Fool-proofing MR: what to tell your programmers Labs: running MapReduce examples YARN: new architecture and new capabilities YARN design goals and implementation architecture New actors: ResourceManager, NodeManager, Application Master Installing YARN Job scheduling under YARN Labs: investigate job scheduling Advanced topics Hardware monitoring Cluster monitoring Adding and removing servers, upgrading Hadoop Backup, recovery and business continuity planning Oozie job workflows Hadoop high availability (HA) Hadoop Federation Securing your cluster with Kerberos Labs: set up monitoring Optional tracks Cloudera Manager for cluster administration, monitoring, and routine tasks; installation, use. In this track, all exercises and labs are performed within the Cloudera distribution environment (CDH5) Ambari for cluster administration, monitoring, and routine tasks; installation, use. In this track, all exercises and labs are performed within the Ambari cluster manager and Hortonworks Data Platform (HDP 2.0)
217765 From Zero to AI 35 hours This course is created for people who have no previous experience in probability and statistics. Probability (3.5h) Definition of probability Binomial distribution Everyday usage exercises Statistics (10.5h) Descriptive Statistics Inferential Statistics Regression Logistic Regression Exercises Intro to programming (3.5h) Procedural Programming Functional Programming Object-Oriented Programming Exercises (writing logic for a game of choice, e.g. noughts and crosses) Machine Learning (10.5h) Classification Clustering Neural Networks Exercises (write AI for a computer game of choice) Rule Engines and Expert Systems (7 hours) Intro to Rule Engines Write AI for the same game and combine the solutions into a hybrid approach
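For instance, the binomial-distribution topic boils down to calculations like the following (a minimal sketch; the coin-flip numbers are an arbitrary example):

    from math import comb

    # Probability of exactly 3 heads in 10 fair coin flips:
    # P(X = 3) = C(10, 3) * 0.5**3 * 0.5**7
    p = comb(10, 3) * 0.5**3 * 0.5**7
    print(f"P(exactly 3 heads in 10 flips) = {p:.4f}")  # about 0.1172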
240773 NLP with Deeplearning4j 14 hours Deeplearning4j is an open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs. Word2Vec is a method of computing vector representations of words, introduced by a team of researchers at Google led by Tomas Mikolov. Audience This course is directed at researchers, engineers and developers seeking to utilize Deeplearning4j to construct Word2Vec models. Getting Started DL4J Examples in a Few Easy Steps Using DL4J In Your Own Projects: Configuring the POM.xml File Word2Vec Introduction Neural Word Embeddings Amusing Word2Vec Results The Code Anatomy of Word2Vec Setup, Load and Train A Code Example Troubleshooting & Tuning Word2Vec Word2Vec Use Cases Foreign Languages GloVe (Global Vectors) & Doc2Vec
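Deeplearning4j's Word2Vec implementation is Java, but the idea carries over directly; the conceptual sketch below uses Python's gensim library as a swapped-in stand-in (not part of DL4J), with a toy corpus and deliberately tiny parameters.

    from gensim.models import Word2Vec

    # Toy corpus: each sentence is a list of tokens
    sentences = [["king", "rules", "kingdom"],
                 ["queen", "rules", "kingdom"],
                 ["dog", "chases", "cat"]]

    # Train tiny word vectors (parameters are illustrative, far too small for real use)
    model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, epochs=50)

    # Words with similar contexts end up with similar vectors
    print(model.wv.most_similar("king", topn=2))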
21634 Managing Business Logic with Drools 21 hours This course is aimed at enterprise architects, business and system analysts, technical managers and developers who want to apply business rules to their solutions. This course contains a lot of simple hands-on exercises during which the participants will create working rules. Please refer to our other courses if you just need an overview of Drools. This course is usually delivered on the newest stable version of Drools and jBPM, but in the case of a bespoke course, it can be tailored to a specific version. Short Introduction to Rule Engines Artificial Intelligence Expert Systems What is a Rule Engine? Why use a Rule Engine? Advantages of a Rule Engine When should you use a Rule Engine? Scripting or Process Engines When you should NOT use a Rule Engine Strong and Loose Coupling What are rules? Creating and Implementing Rules Fact Model KIE Rules versioning and repository Exercises Domain Specific Language (DSL) Replacing rules with DSL Testing DSL rules Exercises jBPM Integration with Drools Short overview of basic BPMN Invoking rules from processes Grouping rules Exercises Fusion What is Complex Event Processing? Short overview of Fusion Exercises MVEL - the rule language Filtering (fact type, field) Operators Compound conditions Operators priority Accumulate Functions (average, min, max, sum, collectList, etc.) Rete - under the hood Compilation algorithm Drools RETE extensions Node Types Understanding the Rete Tree Rete Optimization Rules Testing Testing with KIE Testing with JUnit OptaPlanner An overview of OptaPlanner Simple examples Integrating Rules with Applications Invoking rules from Java Code
