Machine Learning Training in Southampton

Machine Learning courses

Southampton

International House, Southampton International Business Park George Curl Way
Southampton SO18 2RZ
United Kingdom

This centre occupies a great position - highly visible from the M27 motorway. The building's focal point is a beautiful atrium flooding the reception area with...

Client Testimonials

Artificial Neural Networks, Machine Learning and Deep Thinking

flexibility

Werner Philipp - Robert Bosch GmbH

A practical introduction to Data Analysis and Big Data

Overall the Content was good.

Sameer Rohadia - Continental AG / Abteilung: CF IT Finance

Neural Networks Fundamentals using TensorFlow as Example

Very good all round overview. Good background into why Tensorflow operates as it does.

Kieran Conboy - INTEL R&D IRELAND LIMITED

A practical introduction to Data Analysis and Big Data

Willingness to share more

Balaram Chandra Paul - MOL Information Technology Asia Limited

Artificial Neural Networks, Machine Learning and Deep Thinking

Very flexible

Frank Ueltzhöffer - Robert Bosch GmbH

Neural Networks Fundamentals using TensorFlow as Example

Topic selection. Style of training. Practice orientation

Commerzbank AG

Advanced Deep Learning

The global overview of deep learning

Bruno Charbonnier - OSONES

Machine Learning and Deep Learning

The training provided the right foundation that allows us to further expand on, by showing how theory and practice go hand in hand. It actually got me more interested in the subject than I was before.

Jean-Paul van Tillo - Travix International

Data Mining & Machine Learning with R

The trainer was so knowledgeable and included areas I was interested in

Mohamed Salama - Edmonton Police Service

Machine Learning and Deep Learning

We have gotten a lot more insight into the subject matter. Some nice discussions were had on some real subjects within our company

Sebastiaan Holman - Travix International

Neural Networks Fundamentals using TensorFlow as Example

Given outlook of the technology: what technology/process might become more important in the future; see, what the technology can be used for

Commerzbank AG

A practical introduction to Data Analysis and Big Data

It covered a broad range of information.

Continental AG / Abteilung: CF IT Finance

A practical introduction to Data Analysis and Big Data

presentation of technologies

Continental AG / Abteilung: CF IT Finance

Advanced Deep Learning

The exercises are sufficiently practical and do not need a high knowledge in Python to be done.

Alexandre GIRARD - OSONES

Neural Networks Fundamentals using TensorFlow as Example

I liked the opportunities to ask questions and get more in depth explanations of the theory.

Sharon Ruane - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

I was amazed at the standard of this class - I would say that it was university standard.

David Relihan - INTEL R&D IRELAND LIMITED

Machine Learning and Deep Learning

Coverage and depth of topics

Anirban Basu - Travix International

Advanced Deep Learning

Doing exercises on real examples using Keras. Mihaly totally understood our expectations about this training.

Paul Kassis - OSONES

Neural Networks Fundamentals using TensorFlow as Example

Knowledgeable trainer

Sridhar Voorakkara - INTEL R&D IRELAND LIMITED

Applied Machine Learning

ref material to use later was very good

PAUL BEALES - Seagate Technology

Artificial Neural Networks, Machine Learning, Deep Thinking

It was very interactive and more relaxed and informal than expected. We covered lots of topics in the time and the trainer was always receptive to talking more in detail or more generally about the topics and how they were related. I feel the training has given me the tools to continue learning as opposed to it being a one off session where learning stops once you've finished which is very important given the scale and complexity of the topic.

Jonathan Blease - Knowledgepool Group Ltd

Machine Learning Course Events - Southampton

Code Name Venue Duration Course Date Course Price [Remote / Classroom]
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example Southampton 28 hours Tue, 2018-01-30 09:30 £5200 / £6200
MLFWR1 Machine Learning Fundamentals with R Southampton 14 hours Tue, 2018-01-30 09:30 £2600 / £3100
dsstne Amazon DSSTNE: Build a recommendation system Southampton 7 hours Fri, 2018-02-02 09:30 £1100 / £1350
datamodeling Pattern Recognition Southampton 35 hours Mon, 2018-02-05 09:30 £6500 / £7750
encogadv Encog: Advanced Machine Learning Southampton 14 hours Tue, 2018-02-06 09:30 £2200 / £2700
opennlp OpenNLP for Text Based Machine Learning Southampton 14 hours Wed, 2018-02-07 09:30 £2200 / £2700
aiauto Artificial Intelligence in Automotive Southampton 14 hours Thu, 2018-02-08 09:30 £2600 / £3100
mlrobot1 Machine Learning for Robotics Southampton 21 hours Mon, 2018-02-12 09:30 £3300 / £4050
mlbankingpython_ Machine Learning for Banking (with Python) Southampton 21 hours Mon, 2018-02-12 09:30 £3300 / £4050
undnn Understanding Deep Neural Networks Southampton 35 hours Mon, 2018-02-12 09:30 £5500 / £6750
mlentre Machine Learning Concepts for Entrepreneurs and Managers Southampton 21 hours Mon, 2018-02-12 09:30 £3300 / £4050
mlios Machine Learning on iOS Southampton 14 hours Tue, 2018-02-13 09:30 £2200 / £2700
Fairseq Fairseq: Setting up a CNN-based machine translation system Southampton 7 hours Wed, 2018-02-14 09:30 £1100 / £1350
snorkel Snorkel: Rapidly process training data Southampton 7 hours Fri, 2018-02-16 09:30 £1100 / £1350
annmldt Artificial Neural Networks, Machine Learning, Deep Thinking Southampton 21 hours Mon, 2018-02-26 09:30 £3900 / £4650
mldt Machine Learning and Deep Learning Southampton 21 hours Mon, 2018-02-26 09:30 £3900 / £4650
systemml Apache SystemML for Machine Learning Southampton 14 hours Mon, 2018-02-26 09:30 £2200 / £2700
pythonadvml Python for Advanced Machine Learning Southampton 21 hours Mon, 2018-02-26 09:30 £3300 / £4050
aiintrozero From Zero to AI Southampton 35 hours Mon, 2018-02-26 09:30 £6500 / £7750
BigData_ A practical introduction to Data Analysis and Big Data Southampton 35 hours Tue, 2018-02-27 09:30 £5500 / £6500
bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking Southampton 21 hours Tue, 2018-02-27 09:30 £3300 / £4050
Fairsec Fairseq: Setting up a CNN-based machine translation system Southampton 7 hours Wed, 2018-02-28 09:30 £1100 / £1350
predio Machine Learning with PredictionIO Southampton 21 hours Wed, 2018-02-28 09:30 £3300 / £4050
matlabml1 Introduction to Machine Learning with MATLAB Southampton 21 hours Wed, 2018-02-28 09:30 £3300 / £4050
octnp Octave not only for programmers Southampton 21 hours Wed, 2018-02-28 09:30 £3300 / £4050
facebooknmt Facebook NMT: Setting up a Neural Machine Translation System Southampton 7 hours Thu, 2018-03-01 09:30 £1100 / £1350
dmmlr Data Mining & Machine Learning with R Southampton 14 hours Thu, 2018-03-01 09:30 £2600 / £3100
dladv Advanced Deep Learning Southampton 28 hours Tue, 2018-03-06 09:30 £5200 / £6200
encogintro Encog: Introduction to Machine Learning Southampton 14 hours Wed, 2018-03-07 09:30 £2200 / £2700
textsum Text Summarization with Python Southampton 14 hours Wed, 2018-03-07 09:30 £2200 / £2700
wolfdata Data Science: Analysis and Presentation Southampton 7 hours Fri, 2018-03-09 09:30 £1100 / £1350
Torch Torch: Getting started with Machine and Deep Learning Southampton 21 hours Mon, 2018-03-12 09:30 £3900 / £4650
opennmt OpenNMT: Setting up a Neural Machine Translation System Southampton 7 hours Mon, 2018-03-12 09:30 £1100 / £1350
mlintro Introduction to Machine Learning Southampton 7 hours Mon, 2018-03-12 09:30 £1300 / £1550
pythontextml Python: Machine Learning with Text Southampton 21 hours Tue, 2018-03-13 09:30 £3300 / £3800
cpde Data Engineering on Google Cloud Platform Southampton 32 hours Tue, 2018-03-13 09:30 £5500 / £6500
mlfunpython Machine Learning Fundamentals with Python Southampton 14 hours Tue, 2018-03-13 09:30 £2200 / £2700
appliedml Applied Machine Learning Southampton 14 hours Wed, 2018-03-14 09:30 £2600 / £3100
mlbankingr Machine Learning for Banking (with R) Southampton 28 hours Tue, 2018-03-20 09:30 £4400 / £5400
patternmatching Pattern Matching Southampton 14 hours Wed, 2018-03-21 09:30 £2600 / £3100
mlfsas Machine Learning Fundamentals with Scala and Apache Spark Southampton 14 hours Wed, 2018-03-21 09:30 £2200 / £2700
OpenNN OpenNN: Implementing neural networks Southampton 14 hours Wed, 2018-03-21 09:30 £2600 / £3100
MLFWR1 Machine Learning Fundamentals with R Southampton 14 hours Wed, 2018-03-21 09:30 £2600 / £3100
radvml Advanced Machine Learning with R Southampton 21 hours Mon, 2018-03-26 09:30 £3900 / £4650
matlabdl Matlab for Deep Learning Southampton 14 hours Mon, 2018-03-26 09:30 £2200 / £2700
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example Southampton 28 hours Tue, 2018-03-27 09:30 £5200 / £6200
datamodeling Pattern Recognition Southampton 35 hours Mon, 2018-04-02 09:30 £6500 / £7750
aiauto Artificial Intelligence in Automotive Southampton 14 hours Tue, 2018-04-03 09:30 £2600 / £3100
mlrobot1 Machine Learning for Robotics Southampton 21 hours Wed, 2018-04-04 09:30 £3300 / £4050
mlios Machine Learning on iOS Southampton 14 hours Wed, 2018-04-04 09:30 £2200 / £2700
Fairseq Fairseq: Setting up a CNN-based machine translation system Southampton 7 hours Thu, 2018-04-05 09:30 £1100 / £1350
mlentre Machine Learning Concepts for Entrepreneurs and Managers Southampton 21 hours Mon, 2018-04-09 09:30 £3300 / £4050
snorkel Snorkel: Rapidly process training data Southampton 7 hours Thu, 2018-04-19 09:30 £1100 / £1350
facebooknmt Facebook NMT: Setting up a Neural Machine Translation System Southampton 7 hours Fri, 2018-04-20 09:30 £1100 / £1350
dmmlr Data Mining & Machine Learning with R Southampton 14 hours Mon, 2018-04-23 09:30 £2600 / £3100
mldt Machine Learning and Deep Learning Southampton 21 hours Mon, 2018-04-23 09:30 £3900 / £4650
undnn Understanding Deep Neural Networks Southampton 35 hours Mon, 2018-04-23 09:30 £5500 / £6750
bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking Southampton 21 hours Mon, 2018-04-23 09:30 £3300 / £4050
annmldt Artificial Neural Networks, Machine Learning, Deep Thinking Southampton 21 hours Tue, 2018-04-24 09:30 £3900 / £4650
Fairsec Fairseq: Setting up a CNN-based machine translation system Southampton 7 hours Tue, 2018-04-24 09:30 £1100 / £1350
predio Machine Learning with PredictionIO Southampton 21 hours Tue, 2018-04-24 09:30 £3300 / £4050
BigData_ A practical introduction to Data Analysis and Big Data Southampton 35 hours Tue, 2018-04-24 09:30 £5500 / £6500
matlabml1 Introduction to Machine Learning with MATLAB Southampton 21 hours Tue, 2018-04-24 09:30 £3300 / £4050
octnp Octave not only for programmers Southampton 21 hours Tue, 2018-04-24 09:30 £3300 / £4050
pythonadvml Python for Advanced Machine Learning Southampton 21 hours Wed, 2018-04-25 09:30 £3300 / £4050
opennlp OpenNLP for Text Based Machine Learning Southampton 14 hours Thu, 2018-04-26 09:30 £2200 / £2700
encogadv Encog: Advanced Machine Learning Southampton 14 hours Mon, 2018-04-30 09:30 £2200 / £2700
mlintro Introduction to Machine Learning Southampton 7 hours Mon, 2018-04-30 09:30 £1300 / £1550
wolfdata Data Science: Analysis and Presentation Southampton 7 hours Mon, 2018-04-30 09:30 £1100 / £1350
aiintrozero From Zero to AI Southampton 35 hours Mon, 2018-04-30 09:30 £6500 / £7750
dladv Advanced Deep Learning Southampton 28 hours Mon, 2018-04-30 09:30 £5200 / £6200
systemml Apache SystemML for Machine Learning Southampton 14 hours Tue, 2018-05-01 09:30 £2200 / £2700
textsum Text Summarization with Python Southampton 14 hours Tue, 2018-05-01 09:30 £2200 / £2700
opennmt OpenNMT: Setting up a Neural Machine Translation System Southampton 7 hours Wed, 2018-05-02 09:30 £1100 / £1350
mlfunpython Machine Learning Fundamentals with Python Southampton 14 hours Wed, 2018-05-02 09:30 £2200 / £2700
appliedml Applied Machine Learning Southampton 14 hours Thu, 2018-05-03 09:30 £2600 / £3100
cpde Data Engineering on Google Cloud Platform Southampton 32 hours Tue, 2018-05-08 09:30 £5500 / £6500
Torch Torch: Getting started with Machine and Deep Learning Southampton 21 hours Wed, 2018-05-09 09:30 £3900 / £4650
patternmatching Pattern Matching Southampton 14 hours Thu, 2018-05-10 09:30 £2600 / £3100
mlfsas Machine Learning Fundamentals with Scala and Apache Spark Southampton 14 hours Thu, 2018-05-10 09:30 £2200 / £2700
OpenNN OpenNN: Implementing neural networks Southampton 14 hours Thu, 2018-05-10 09:30 £2600 / £3100
dsstne Amazon DSSTNE: Build a recommendation system Southampton 7 hours Thu, 2018-05-10 09:30 £1100 / £1350
MLFWR1 Machine Learning Fundamentals with R Southampton 14 hours Thu, 2018-05-10 09:30 £2600 / £3100
mlbankingr Machine Learning for Banking (with R) Southampton 28 hours Mon, 2018-05-14 09:30 £4400 / £5400
pythontextml Python: Machine Learning with Text Southampton 21 hours Tue, 2018-05-15 09:30 £3300 / £3800
encogintro Encog: Introduction to Machine Learning Southampton 14 hours Wed, 2018-05-16 09:30 £2200 / £2700
matlabdl Matlab for Deep Learning Southampton 14 hours Wed, 2018-05-16 09:30 £2200 / £2700
radvml Advanced Machine Learning with R Southampton 21 hours Wed, 2018-05-16 09:30 £3900 / £4650
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example Southampton 28 hours Mon, 2018-05-21 09:30 £5200 / £6200
mlbankingpython_ Machine Learning for Banking (with Python) Southampton 21 hours Tue, 2018-05-22 09:30 £3300 / £4050
aiauto Artificial Intelligence in Automotive Southampton 14 hours Wed, 2018-05-23 09:30 £2600 / £3100
Fairseq Fairseq: Setting up a CNN-based machine translation system Southampton 7 hours Thu, 2018-05-24 09:30 £1100 / £1350
mlrobot1 Machine Learning for Robotics Southampton 21 hours Tue, 2018-05-29 09:30 £3300 / £4050
mlentre Machine Learning Concepts for Entrepreneurs and Managers Southampton 21 hours Wed, 2018-05-30 09:30 £3300 / £4050
mlios Machine Learning on iOS Southampton 14 hours Thu, 2018-05-31 09:30 £2200 / £2700
datamodeling Pattern Recognition Southampton 35 hours Mon, 2018-06-04 09:30 £6500 / £7750
facebooknmt Facebook NMT: Setting up a Neural Machine Translation System Southampton 7 hours Fri, 2018-06-08 09:30 £1100 / £1350
snorkel Snorkel: Rapidly process training data Southampton 7 hours Fri, 2018-06-08 09:30 £1100 / £1350
Fairsec Fairseq: Setting up a CNN-based machine translation system Southampton 7 hours Tue, 2018-06-12 09:30 £1100 / £1350
mldt Machine Learning and Deep Learning Southampton 21 hours Wed, 2018-06-13 09:30 £3900 / £4650

Course Outlines

Code Name Duration Outline
predio Machine Learning with PredictionIO 21 hours

PredictionIO is an open source Machine Learning Server built on top of a state-of-the-art open source stack.

Audience

This course is directed at developers and data scientists who want to create predictive engines for any machine learning task.

Getting Started

  • Quick Intro
  • Installation Guide
  • Downloading Template
  • Deploying an Engine
  • Customizing an Engine
  • App Integration Overview

Developing PredictionIO

  • System Architecture
  • Event Server Overview
  • Collecting Data
  • Learning DASE
  • Implementing DASE
  • Evaluation Overview
  • Intellij IDEA Guide
  • Scala API

Machine Learning Education and Usage Examples

  • Comics Recommendation
  • Text Classification
  • Community Contributed Demo
  • Dimensionality Reduction and usage

PredictionIO SDKs (Select One)

  • Java
  • PHP
  • Python
  • Ruby
  • Community Contributed
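
As a quick orientation for the SDK step above, here is a minimal sketch using the Python SDK (assuming a running Event Server on its default port 7070, a deployed engine on port 8000, and a placeholder access key; these values are illustrative only):

    import predictionio

    # send a "user rates item" event to the Event Server
    event_client = predictionio.EventClient(
        access_key="YOUR_ACCESS_KEY",          # placeholder
        url="http://localhost:7070")
    event_client.create_event(
        event="rate",
        entity_type="user", entity_id="u1",
        target_entity_type="item", target_entity_id="i42",
        properties={"rating": 4.0})

    # query the deployed engine, e.g. for 5 recommendations for user u1
    engine_client = predictionio.EngineClient(url="http://localhost:8000")
    print(engine_client.send_query({"user": "u1", "num": 5}))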

 

Fairsec Fairseq: Setting up a CNN-based machine translation system 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training, participants will learn how to use Fairseq to carry out translation of sample content. By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution. Source and target language content samples can be prepared according to the audience's requirements.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course
    Part lecture, part discussion, heavy hands-on practice

Introduction
    Why Neural Machine Translation?

Overview of the Torch project

Overview of a Convolutional Neural Machine Translation model
    Convolutional Sequence to Sequence Learning
    Convolutional Encoder Model for Neural Machine Translation
    Standard LSTM-based model

Overview of training approaches
    About GPUs and CPUs
    Fast beam search generation

Installation and setup

Evaluating pre-trained models

Preprocessing your data

Training the model

Translating

Converting a trained model to use CPU-only operations

Joining the community

Closing remarks

undnn Understanding Deep Neural Networks 35 hours

This course begins by giving you conceptual knowledge of neural networks and, more generally, of machine learning and deep learning (algorithms and applications).

Part 1 (40%) of this training focuses on fundamentals, but will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc.

Part 2 (20%) of this training introduces Theano, a Python library that makes writing deep learning models easy.

Part 3 (40%) of the training is based extensively on TensorFlow, the 2nd-generation API of Google's open source software library for Deep Learning. The examples and hands-on exercises are all made in TensorFlow.

Audience

This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects.

After completing this course, delegates will:

  • have a good understanding of deep neural networks (DNN), CNN and RNN

  • understand TensorFlow’s structure and deployment mechanisms

  • be able to carry out installation / production environment / architecture tasks and configuration

  • be able to assess code quality, perform debugging and monitoring

  • be able to implement advanced production tasks such as training models, building graphs and logging

Due to the vastness of the subject, not all topics can be covered in a 35-hour public classroom course.

The duration of the complete course is around 70 hours, not 35 hours.

Part 1 – Deep Learning and DNN Concepts


Introduction to AI, Machine Learning & Deep Learning

  • History, basic concepts and typical applications of artificial intelligence, far from the fantasies carried by this domain

  • Collective intelligence: aggregating knowledge shared by many virtual agents

  • Genetic algorithms: evolving a population of virtual agents by selection

  • Machine learning: definition

  • Types of tasks: supervised learning, unsupervised learning, reinforcement learning

  • Types of actions: classification, regression, clustering, density estimation, dimensionality reduction

  • Examples of machine learning algorithms: linear regression, Naive Bayes, Random Tree

  • Machine learning vs. deep learning: problems on which machine learning remains the state of the art today (Random Forests & XGBoost)


 

Basic Concepts of a Neural Network (Application: multi-layer perceptron)

  • Reminder of mathematical basics.

  • Definition of a neural network: classical architecture, activations, weighting of previous activations, depth of a network

  • Definition of neural network learning: cost functions, back-propagation, stochastic gradient descent, maximum likelihood.

  • Modeling a neural network: modeling input and output data according to the type of problem (regression, classification ...). Curse of dimensionality.

  • Distinction between multi-feature data and signal. Choice of a cost function according to the data.

  • Approximation of a function by a neural network: presentation and examples

  • Approximation of a distribution by a neural network: presentation and examples

  • Data augmentation: how to balance a dataset

  • Generalization of the results of a neural network.

  • Initialization and regularization of a neural network: L1 / L2 regularization, batch normalization

  • Optimization and convergence algorithms
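
To make the back-propagation, cost-function and gradient-descent bullets above concrete, here is a minimal NumPy sketch (illustrative only, not part of the official course material) of a one-hidden-layer network fitted to a toy regression problem by full-batch gradient descent:

    import numpy as np

    # toy regression problem: learn y = sin(x) on [-pi, pi] with one hidden layer
    rng = np.random.RandomState(0)
    X = rng.uniform(-np.pi, np.pi, size=(256, 1))
    y = np.sin(X)

    W1 = rng.randn(1, 16) * 0.5   # input -> hidden weights
    b1 = np.zeros(16)
    W2 = rng.randn(16, 1) * 0.5   # hidden -> output weights
    b2 = np.zeros(1)
    lr = 0.05                      # learning rate

    for step in range(2000):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        # gradient of the mean squared error cost
        grad_pred = 2 * (pred - y) / len(X)
        # back-propagation (chain rule)
        grad_W2 = h.T @ grad_pred
        grad_b2 = grad_pred.sum(axis=0)
        grad_h = grad_pred @ W2.T * (1 - h ** 2)   # tanh derivative
        grad_W1 = X.T @ grad_h
        grad_b1 = grad_h.sum(axis=0)
        # gradient descent update
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    print(float(np.mean((pred - y) ** 2)))   # final training error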


 

Standard ML / DL Tools

A brief presentation of each tool is planned, covering advantages, disadvantages, position in the ecosystem and uses.

  • Data management tools: Apache Spark, Apache Hadoop tools

  • Machine learning: NumPy, SciPy, scikit-learn

  • DL high-level frameworks: PyTorch, Keras, Lasagne

  • Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow


 

Convolutional Neural Networks (CNN).

  • Presentation of CNNs: fundamental principles and applications

  • Basic operation of a CNN: convolutional layer, use of a kernel, padding & stride, feature map generation, pooling layers. Extensions to 1D, 2D and 3D.

  • Presentation of the different CNN architectures that brought the state of the art in image classification: LeNet, VGG networks, Network in Network, Inception, ResNet. Presentation of the innovations brought by each architecture and their more global applications (1x1 convolutions, residual connections)

  • Use of an attention model.

  • Application to a common classification case (text or image)

  • CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for upscaling feature maps for image generation.


 

Recurrent Neural Networks (RNN).

  • Presentation of RNNs: fundamental principles and applications.

  • Basic operation of the RNN: hidden activation, back-propagation through time, unfolded version.

  • Evolutions towards Gated Recurrent Units (GRUs) and LSTM (Long Short-Term Memory).

  • Presentation of the different states and the evolutions brought by these architectures

  • Convergence and vanishing gradient problems

  • Classical architectures: prediction of a time series, classification ...

  • RNN encoder-decoder architecture. Use of an attention model.

  • NLP applications: word / character encoding, translation.

  • Video applications: prediction of the next generated image of a video sequence.


Generative models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN).

  • Presentation of generative models, link with CNNs

  • Auto-encoder: dimensionality reduction and limited generation

  • Variational auto-encoder: generative model and approximation of the distribution of the data. Definition and use of the latent space. Reparameterization trick. Applications and limits observed

  • Generative Adversarial Networks: fundamentals.

  • Dual-network architecture (generator and discriminator) with alternate learning, available cost functions.

  • Convergence of a GAN and difficulties encountered.

  • Improved convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.

  • Applications for the generation of images or photographs, text generation, super-resolution.

Deep Reinforcement Learning.

  • Presentation of reinforcement learning: control of an agent in an environment defined by a state and possible actions

  • Use of a neural network to approximate the state function

  • Deep Q-Learning: experience replay and application to the control of a video game.

  • Optimization of the learning policy. On-policy vs. off-policy. Actor-critic architecture. A3C.

  • Applications: control of a single video game or a digital system.

 

Part 2 – Theano for Deep Learning

Theano Basics

  • Introduction

  • Installation and Configuration

Theano Functions

  • inputs, outputs, updates, givens
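
A minimal sketch of the inputs / outputs / updates parameters of theano.function listed above (givens, which substitutes values into the graph at call time, is omitted for brevity); this is an illustrative snippet, not course material:

    import theano
    import theano.tensor as T

    x = T.dscalar("x")                         # symbolic input
    state = theano.shared(0.0, name="state")   # shared variable kept between calls

    # 'outputs' is what the call returns; 'updates' modifies shared variables afterwards
    accumulate = theano.function(inputs=[x], outputs=state, updates=[(state, state + x)])

    print(accumulate(1.0))    # 0.0 -> value of 'state' before the update
    print(accumulate(2.0))    # 1.0
    print(state.get_value())  # 3.0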

Training and Optimization of a neural network using Theano

  • Neural Network Modeling

  • Logistic Regression

  • Hidden Layers

  • Training a network

  • Computing and Classification

  • Optimization

  • Log Loss

Testing the model


Part 3 – DNN using Tensorflow

TensorFlow Basics

  • Creation, Initializing, Saving, and Restoring TensorFlow variables

  • Feeding, Reading and Preloading TensorFlow Data

  • How to use TensorFlow infrastructure to train models at scale

  • Visualizing and Evaluating models with TensorBoard
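
As a pointer for the topics above, a minimal TensorFlow 1.x sketch (the API generation referenced in this outline) showing variable creation and initialization, feeding a placeholder, and saving/restoring with tf.train.Saver; paths and values are placeholders:

    import tensorflow as tf

    W = tf.Variable([0.5], dtype=tf.float32, name="W")      # variable to initialize/save/restore
    x = tf.placeholder(tf.float32, shape=[None], name="x")  # fed with data at run time
    y = W * x

    saver = tf.train.Saver()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())          # initializing variables
        print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))   # feeding data
        save_path = saver.save(sess, "/tmp/model.ckpt")      # saving variables

    with tf.Session() as sess:
        saver.restore(sess, save_path)                        # restoring variables
        print(sess.run(W))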

TensorFlow Mechanics

  • Prepare the Data

  • Download

  • Inputs and Placeholders

  • Build the Graph

    • Inference

    • Loss

    • Training

  • Train the Model

    • The Graph

    • The Session

    • Train Loop

  • Evaluate the Model

    • Build the Eval Graph

    • Eval Output

The Perceptron

  • Activation functions

  • The perceptron learning algorithm

  • Binary classification with the perceptron

  • Document classification with the perceptron

  • Limitations of the perceptron
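
A minimal NumPy sketch of the perceptron learning rule on a toy, linearly separable problem (logical AND), illustrating the step activation and weight-update loop described above; it is an illustration, not the course's lab code:

    import numpy as np

    # toy linearly separable data: y = AND(x1, x2)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)   # weights
    b = 0.0           # bias
    lr = 0.1          # learning rate

    # perceptron learning rule: adjust weights only on misclassified points
    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0   # step activation
            error = target - pred
            w += lr * error * xi
            b += lr * error

    print(w, b)                                               # learned weights and bias
    print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])    # predictions: [0, 0, 0, 1]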

From the Perceptron to Support Vector Machines

  • Kernels and the kernel trick

  • Maximum margin classification and support vectors

Artificial Neural Networks

  • Nonlinear decision boundaries

  • Feedforward and feedback artificial neural networks

  • Multilayer perceptrons

  • Minimizing the cost function

  • Forward propagation

  • Back propagation

  • Improving the way neural networks learn

Convolutional Neural Networks

  • Goals

  • Model Architecture

  • Principles

  • Code Organization

  • Launching and Training the Model

  • Evaluating a Model


 

Basic introductions to be given to the modules below (brief introduction to be provided based on time availability):

Tensorflow - Advanced Usage

  • Threading and Queues

  • Distributed TensorFlow

  • Writing Documentation and Sharing your Model

  • Customizing Data Readers

  • Manipulating TensorFlow Model Files


TensorFlow Serving

  • Introduction

  • Basic Serving Tutorial

  • Advanced Serving Tutorial

  • Serving Inception Model Tutorial

systemml Apache SystemML for Machine Learning 14 hours

Apache SystemML is a distributed and declarative machine learning platform.

SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations on Apache Hadoop and Apache Spark.

Audience

This course is suitable for Machine Learning researchers, developers and engineers seeking to utilize SystemML as a framework for machine learning.

Running SystemML

  • Standalone
  • Spark MLContext
  • Spark Batch
  • Hadoop Batch
  • JMLC

Tools

  • Debugger
  • IDE
  • Troubleshooting

Languages and ML Algorithms

  • DML
  • PyDML
  • Algorithms
cpde Data Engineering on Google Cloud Platform 32 hours

This four-day instructor-led class provides participants with a hands-on introduction to designing and building data processing systems on Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants will learn how to design data processing systems, build end-to-end data pipelines, analyze data and carry out machine learning. The course covers structured, unstructured, and streaming data.

This course teaches participants the following skills:

  • Design and build data processing systems on Google Cloud Platform
  • Process batch and streaming data by implementing autoscaling data pipelines on Cloud Dataflow
  • Derive business insights from extremely large datasets using Google BigQuery
  • Train, evaluate and predict with machine learning models using TensorFlow and Cloud ML
  • Leverage unstructured data using Spark and ML APIs on Cloud Dataproc
  • Enable instant insights from streaming data

This class is intended for experienced developers who are responsible for managing big data transformations including:

  • Extracting, loading, transforming, cleaning, and validating data
  • Designing pipelines and architectures for data processing
  • Creating and maintaining machine learning and statistical models
  • Querying datasets, visualizing query results and creating reports

The course includes presentations, demonstrations, and hands-on labs.

Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform

Module 1: Google Cloud Dataproc Overview

  • Creating and managing clusters.
  • Leveraging custom machine types and preemptible worker nodes.
  • Scaling and deleting Clusters.
  • Lab: Creating Hadoop Clusters with Google Cloud Dataproc.

Module 2: Running Dataproc Jobs

  • Running Pig and Hive jobs.
  • Separation of storage and compute.
  • Lab: Running Hadoop and Spark Jobs with Dataproc.
  • Lab: Submit and monitor jobs.

Module 3: Integrating Dataproc with Google Cloud Platform

  • Customize cluster with initialization actions.
  • BigQuery Support.
  • Lab: Leveraging Google Cloud Platform Services.

Module 4: Making Sense of Unstructured Data with Google’s Machine Learning APIs

  • Google’s Machine Learning APIs.
  • Common ML Use Cases.
  • Invoking ML APIs.
  • Lab: Adding Machine Learning Capabilities to Big Data Analysis.

Serverless Data Analysis with Google BigQuery and Cloud Dataflow

Module 5: Serverless data analysis with BigQuery

  • What is BigQuery.
  • Queries and Functions.
  • Lab: Writing queries in BigQuery.
  • Loading data into BigQuery.
  • Exporting data from BigQuery.
  • Lab: Loading and exporting data.
  • Nested and repeated fields.
  • Querying multiple tables.
  • Lab: Complex queries.
  • Performance and pricing.
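
For orientation, a minimal sketch using the google-cloud-bigquery Python client (assumes application default credentials and a configured GCP project; the dataset shown is a public BigQuery sample, not necessarily the one used in the labs):

    from google.cloud import bigquery

    client = bigquery.Client()   # picks up application default credentials / project

    sql = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(sql).result():   # run the query and wait for it to finish
        print(row.name, row.total)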

Module 6: Serverless, autoscaling data pipelines with Dataflow

  • The Beam programming model.
  • Data pipelines in Beam Python.
  • Data pipelines in Beam Java.
  • Lab: Writing a Dataflow pipeline.
  • Scalable Big Data processing using Beam.
  • Lab: MapReduce in Dataflow.
  • Incorporating additional data.
  • Lab: Side inputs.
  • Handling stream data.
  • GCP Reference architecture.
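
A minimal Apache Beam Python sketch of the programming model above, run locally with the DirectRunner (on GCP the same pipeline would be submitted to Dataflow via pipeline options); it is a toy word-count, not one of the labs:

    import apache_beam as beam

    # a tiny word-count-style batch pipeline
    with beam.Pipeline() as pipeline:
        (pipeline
         | "Create" >> beam.Create(["alpha", "beta", "gamma", "beta"])
         | "PairWithOne" >> beam.Map(lambda word: (word, 1))
         | "CountPerWord" >> beam.CombinePerKey(sum)
         | "Print" >> beam.Map(lambda kv: print(kv)))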

Serverless Machine Learning with TensorFlow on Google Cloud Platform

Module 7: Getting started with Machine Learning

  • What is machine learning (ML).
  • Effective ML: concepts, types.
  • ML datasets: generalization.
  • Lab: Explore and create ML datasets.

Module 8: Building ML models with Tensorflow

  • Getting started with TensorFlow.
  • Lab: Using tf.learn.
  • TensorFlow graphs and loops + lab.
  • Lab: Using low-level TensorFlow + early stopping.
  • Monitoring ML training.
  • Lab: Charts and graphs of TensorFlow training.

Module 9: Scaling ML models with CloudML

  • Why Cloud ML?
  • Packaging up a TensorFlow model.
  • End-to-end training.
  • Lab: Run a ML model locally and on cloud.

Module 10: Feature Engineering

  • Creating good features.
  • Transforming inputs.
  • Synthetic features.
  • Preprocessing with Cloud ML.
  • Lab: Feature engineering.

Building Resilient Streaming Systems on Google Cloud Platform

Module 11: Architecture of streaming analytics pipelines

  • Stream data processing: Challenges.
  • Handling variable data volumes.
  • Dealing with unordered/late data.
  • Lab: Designing streaming pipeline.

Module 12: Ingesting Variable Volumes

  • What is Cloud Pub/Sub?
  • How it works: Topics and Subscriptions.
  • Lab: Simulator.

Module 13: Implementing streaming pipelines

  • Challenges in stream processing.
  • Handle late data: watermarks, triggers, accumulation.
  • Lab: Stream data processing pipeline for live traffic data.

Module 14: Streaming analytics and dashboards

  • Streaming analytics: from data to decisions.
  • Querying streaming data with BigQuery.
  • What is Google Data Studio?
  • Lab: build a real-time dashboard to visualize processed data.

Module 15: High throughput and low-latency with Bigtable

  • What is Cloud Spanner?
  • Designing Bigtable schema.
  • Ingesting into Bigtable.
  • Lab: streaming into Bigtable.

 

textsum Text Summarization with Python 14 hours

In Python Machine Learning, the Text Summarization feature is able to read the input text and produce a text summary. This capability is available from the command-line or as a Python API/Library. One exciting application is the rapid creation of executive summaries; this is particularly useful for organizations that need to review large bodies of text data before generating reports and presentations.

In this instructor-led, live training, participants will learn to use Python to create a simple application that auto-generates a summary of input text.

By the end of this training, participants will be able to:

  • Use a command-line tool that summarizes text.
  • Design and create Text Summarization code using Python libraries.
  • Evaluate three Python summarization libraries: sumy 0.7.0, pysummarization 1.0.4, readless 1.0.17

Audience

  • Developers
  • Data Scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction to Text Summarization with Python

  • Comparing sample text with auto-generated summaries
  • Installing sumy (a Python Command-Line Executable for Text Summarization)
  • Using sumy as a Command-Line Text Summarization Utility (Hands-On Exercise)

Evaluating three Python summarization libraries: sumy 0.7.0, pysummarization 1.0.4, readless 1.0.17 based on documented features

Choosing a library: sumy, pysummarization or readless

Creating a Python application using sumy library on Python 2.7/3.3+

  • Installing the sumy library for Text Summarization
  • Using the Edmundson (Extraction) method in the sumy Python library for Text Summarization
  • Creating simple Python test code that uses the sumy library to generate a text summary
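
A minimal sketch of the sumy API (using the LSA summarizer for brevity; the Edmundson summarizer mentioned above works the same way but additionally requires bonus, stigma and null word lists). It assumes the NLTK "punkt" tokenizer data is installed:

    from sumy.parsers.plaintext import PlaintextParser
    from sumy.nlp.tokenizers import Tokenizer
    from sumy.summarizers.lsa import LsaSummarizer

    TEXT = """Sumy can summarize plain text documents.
    It splits the text into sentences, scores them, and returns the top-ranked ones.
    The same parser and tokenizer also work with the other summarizers in the library."""

    parser = PlaintextParser.from_string(TEXT, Tokenizer("english"))
    summarizer = LsaSummarizer()

    for sentence in summarizer(parser.document, 2):   # keep the 2 highest-ranked sentences
        print(sentence)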

Creating a Python application using pysummarization library on Python 2.7/3.3+

  • Installing pysummarization library for Text Summarization
  • Using the pysummarization library for Text Summarization
  • Creating simple Python test code that uses pysummarization library to generate a text summary

Creating a Python application using readless library on Python 2.7/3.3+

  • Installing readless library for Text Summarization
  • Using the readless library for Text Summarization

Creating simple Python test code that uses readless library to generate a text summary

Troubleshooting and debugging

Closing Remarks

aiintrozero From Zero to AI 35 hours

This course is created for people who have no previous experience in probability and statistics.

Probability (3.5h)

  • Definition of probability
  • Binomial distribution
  • Everyday usage exercises

Statistics (10.5h)

  • Descriptive Statistics
  • Inferential Statistics
  • Regression
  • Logistic Regression
  • Exercises

Intro to programming (3.5h)

  • Procedural Programming
  • Functional Programming
  • OOP Programming
  • Exercises (writing logic for a game of choice, e.g. noughts and crosses)

Machine Learning (10.5h)

  • Classification
  • Clustering
  • Neural Networks
  • Exercises (write AI for a computer game of choice)

Rules Engines and Expert Systems (7 hours)

  • Intro to Rule Engines
  • Write AI for the same game and combine solutions into hybrid approach
Fairseq Fairseq: Setting up a CNN-based machine translation system 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training participants will learn how to use Fairseq to carry out translation of sample content.

By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq based machine translation solution.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course
    Part lecture, part discussion, heavy hands-on practice

Note

  • If you wish to use specific source and target language content, please contact us to arrange.

Introduction
    Why Neural Machine Translation?

Overview of the Torch project

Overview of a Convolutional Neural Machine Translation model
    Convolutional Sequence to Sequence Learning
    Convolutional Encoder Model for Neural Machine Translation
    Standard LSTM-based model

Overview of training approaches
    About GPUs and CPUs
    Fast beam search generation

Installation and setup

Evaluating pre-trained models

Preprocessing your data

Training the model

Translating

Converting a trained model to use CPU-only operations

Joining the community

Closing remarks

mlintro Introduction to Machine Learning 7 hours

This training course is for people who would like to apply basic Machine Learning techniques in practical applications.

Audience

Data scientists and statisticians who have some familiarity with machine learning and know how to program in R. The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to machine learning to participants interested in applying the methods at work.

Sector specific examples are used to make the training relevant to the audience.

  • Naive Bayes
  • Multinomial models
  • Bayesian categorical data analysis
  • Discriminant analysis
  • Linear regression
  • Logistic regression
  • GLM
  • EM Algorithm
  • Mixed Models
  • Additive Models
  • Classification
  • KNN
  • Ridge regression
  • Clustering
aiauto Artificial Intelligence in Automotive 14 hours

This course covers AI (emphasizing Machine Learning and Deep Learning) in the Automotive Industry. It helps to determine which technologies can (potentially) be used in multiple situations in a car: from simple automation and image recognition to autonomous decision making.

Current state of the technology

  • What is used
  • What may be potentially used

Rule-based AI

  • Simplifying decisions

Machine Learning 

  • Classification
  • Clustering
  • Neural Networks
  • Types of Neural Networks
  • Presentation of working examples and discussion

Deep Learning

  • Basic vocabulary 
  • When to use Deep Learning, when not to
  • Estimating computational resources and cost
  • Very short theoretical background to Deep Neural Networks

Deep Learning in practice (mainly using TensorFlow)

  • Preparing Data
  • Choosing loss function
  • Choosing the appropriate type of neural network
  • Accuracy vs speed and resources
  • Training neural network
  • Measuring efficiency and error
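
To connect the practice topics above (preparing data, choosing a loss function, training, measuring error), a minimal sketch on hypothetical toy data using the Keras API bundled with TensorFlow; the course's own exercises and datasets differ:

    import numpy as np
    from tensorflow import keras

    # toy data: 200 samples with 4 sensor-like features, binary labels
    X = np.random.rand(200, 4).astype("float32")
    y = (X.sum(axis=1) > 2.0).astype("float32")

    model = keras.Sequential([
        keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

    # loss chosen to match the task: binary cross-entropy for binary classification
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=16, verbose=0)

    loss, accuracy = model.evaluate(X, y, verbose=0)   # measuring error and accuracy
    print(loss, accuracy)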

Sample usage

  • Anomaly detection
  • Image recognition
  • ADAS

facebooknmt Facebook NMT: Setting up a Neural Machine Translation System 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training participants will learn how to use Fairseq to carry out translation of sample content.

By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq based machine translation solution.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course

  • Part lecture, part discussion, heavy hands-on practice

Note

  • If you wish to use specific source and target language content, please contact us to arrange.

Introduction
    Why Neural Machine Translation?
    Borrowing from image recognition techniques

Overview of the Torch and Caffe2 projects

Overview of a Convolutional Neural Machine Translation model
    Convolutional Sequence to Sequence Learning
    Convolutional Encoder Model for Neural Machine Translation
    Standard LSTM-based model

Overview of training approaches
    About GPUs and CPUs
    Fast beam search generation

Installation and setup

Evaluating pre-trained models

Preprocessing your data

Training the model

Translating

Converting a trained model to use CPU-only operations

Joining the community

Closing remarks

appliedml Applied Machine Learning 14 hours

This training course is for people who would like to apply Machine Learning in practical applications.

Audience

This course is for data scientists and statisticians who have some familiarity with statistics and know how to program in R (or Python or another chosen language). The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization.

The purpose is to give practical applications of Machine Learning to participants interested in applying the methods at work.

Sector specific examples are used to make the training relevant to the audience.

  • Naive Bayes
  • Multinomial models
  • Bayesian categorical data analysis
  • Discriminant analysis
  • Linear regression
  • Logistic regression
  • GLM
  • EM Algorithm
  • Mixed Models
  • Additive Models
  • Classification
  • KNN
  • Bayesian Graphical Models
  • Factor Analysis (FA)
  • Principal Component Analysis (PCA)
  • Independent Component Analysis (ICA)
  • Support Vector Machines (SVM) for regression and classification
  • Boosting
  • Ensemble models
  • Neural networks
  • Hidden Markov Models (HMM)
  • State Space Models
  • Clustering
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example 28 hours

This course will give you knowledge of neural networks and, more generally, of machine learning and deep learning (algorithms and applications).

This training is more focused on fundamentals, but will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc. The examples are made in TensorFlow.

TensorFlow Basics

  • Creation, Initializing, Saving, and Restoring TensorFlow variables
  • Feeding, Reading and Preloading TensorFlow Data
  • How to use TensorFlow infrastructure to train models at scale
  • Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics

  • Inputs and Placeholders
  • Build the Graph
    • Inference
    • Loss
    • Training
  • Train the Model
    • The Graph
    • The Session
    • Train Loop
  • Evaluate the Model
    • Build the Eval Graph
    • Eval Output

The Perceptron

  • Activation functions
  • The perceptron learning algorithm
  • Binary classification with the perceptron
  • Document classification with the perceptron
  • Limitations of the perceptron

From the Perceptron to Support Vector Machines

  • Kernels and the kernel trick
  • Maximum margin classification and support vectors

Artificial Neural Networks

  • Nonlinear decision boundaries
  • Feedforward and feedback artificial neural networks
  • Multilayer perceptrons
  • Minimizing the cost function
  • Forward propagation
  • Back propagation
  • Improving the way neural networks learn

Convolutional Neural Networks

  • Goals
  • Model Architecture
  • Principles
  • Code Organization
  • Launching and Training the Model
  • Evaluating a Model
snorkel Snorkel: Rapidly process training data 7 hours

Snorkel is a system for rapidly creating, modeling, and managing training data. It focuses on accelerating the development of structured or "dark" data extraction applications for domains in which large labeled training sets are not available or easy to obtain.

In this instructor-led, live training, participants will learn techniques for extracting value from unstructured data such as text, tables, figures, and images through modeling of training data with Snorkel.

By the end of this training, participants will be able to:

  • Programmatically create training sets to enable the labeling of massive training sets
  • Train high-quality end models by first modeling noisy training sets
  • Use Snorkel to implement weak supervision techniques and apply data programming to weakly-supervised machine learning systems
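
To illustrate the data programming idea behind these goals, here is a conceptual sketch in plain Python (not the Snorkel API): several noisy labeling functions vote on each unlabeled example, and a label model combines their votes into training labels; a simple majority vote stands in here for the generative label model that Snorkel actually learns:

    import numpy as np

    ABSTAIN, SPAM, HAM = -1, 1, 0

    # hand-written labeling functions: noisy, heuristic labelers over raw text
    def lf_contains_offer(text):
        return SPAM if "offer" in text.lower() else ABSTAIN

    def lf_short_message(text):
        return HAM if len(text.split()) < 4 else ABSTAIN

    def lf_has_link(text):
        return SPAM if "http" in text.lower() else ABSTAIN

    documents = ["Limited offer, click http://x.io", "see you tomorrow", "free offer inside"]

    # apply every labeling function to every document -> label matrix
    labeling_functions = (lf_contains_offer, lf_short_message, lf_has_link)
    L = np.array([[lf(doc) for lf in labeling_functions] for doc in documents])

    # crude label model: majority vote over the non-abstaining labeling functions
    def majority_vote(row):
        votes = row[row != ABSTAIN]
        return ABSTAIN if len(votes) == 0 else int(np.bincount(votes).argmax())

    print([majority_vote(row) for row in L])   # noisy labels for training an end model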

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

 

MLFWR1 Machine Learning Fundamentals with R 14 hours

The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the R programming platform and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, how to interpret the outputs of the algorithms and how to validate the results.

Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications.

Introduction to Applied Machine Learning

  • Statistical learning vs. Machine learning
  • Iteration and evaluation
  • Bias-Variance trade-off

Regression

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercises

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Exercises

Cross-validation and Resampling

  • Cross-validation approaches
  • Bootstrap
  • Exercises

Unsupervised Learning

  • K-means clustering
  • Examples
  • Challenges of unsupervised learning and beyond K-means
mldt Machine Learning and Deep Learning 21 hours

This course covers AI (emphasizing Machine Learning and Deep Learning)

Machine learning

Introduction to Machine Learning

  • Applications of machine learning
  • Supervised Versus Unsupervised Learning
  • Machine Learning Algorithms
    • Regression
    • Classification
    • Clustering
    • Recommender System
    • Anomaly Detection
    • Reinforcement Learning

Regression

  • Simple & Multiple Regression
    • Least Square Method
    • Estimating the Coefficients
    • Assessing the Accuracy of the Coefficient Estimates
    • Assessing the Accuracy of the Model
    • Post Estimation Analysis
    • Other Considerations in the Regression Models
    • Qualitative Predictors
    • Extensions of the Linear Models
    • Potential Problems
    • Bias-variance trade off [under-fitting/over-fitting] for regression models

Resampling Methods

  • Cross-Validation
  • The Validation Set Approach
  • Leave-One-Out Cross-Validation
  • k-Fold Cross-Validation
  • Bias-Variance Trade-Off for k-Fold
  • The Bootstrap

Model Selection and Regularization

  • Subset Selection [Best Subset Selection, Stepwise Selection, Choosing the Optimal Model]
  • Shrinkage Methods/ Regularization [Ridge Regression, Lasso & Elastic Net]
  • Selecting the Tuning Parameter
  • Dimension Reduction Methods
    • Principal Components Regression
    • Partial Least Squares

Classification

  • Logistic Regression

    • The Logistic Model cost function

    • Estimating the Coefficients

    • Making Predictions

    • Odds Ratio

    • Performance Evaluation Metrics [Sensitivity/Specificity/PPV/NPV, Precision, ROC curve etc.]

    • Multiple Logistic Regression

    • Logistic Regression for >2 Response Classes

    • Regularized Logistic Regression

  • Linear Discriminant Analysis

    • Using Bayes’ Theorem for Classification

    • Linear Discriminant Analysis for p=1

    • Linear Discriminant Analysis for p >1

  • Quadratic Discriminant Analysis

  • K-Nearest Neighbors

  • Classification with Non-linear Decision Boundaries

  • Support Vector Machines

    • Optimization Objective

    • The Maximal Margin Classifier

    • Kernels

    • One-Versus-One Classification

    • One-Versus-All Classification

  • Comparison of Classification Methods

Introduction to Deep Learning

ANN Structure

  • Biological neurons and artificial neurons

  • Non-linear Hypothesis

  • Model Representation

  • Examples & Intuitions

  • Transfer Function/ Activation Functions

  • Typical classes of network architectures

Feed forward ANN.

  • Structures of Multi-layer feed forward networks

  • Back propagation algorithm

  • Back propagation - training and convergence

  • Functional approximation with back propagation

  • Practical and design issues of back propagation learning

Deep Learning

  • Artificial Intelligence & Deep Learning

  • Softmax Regression

  • Self-Taught Learning

  • Deep Networks

  • Demos and Applications

Lab:

Getting Started with R

  • Introduction to R

  • Basic Commands & Libraries

  • Data Manipulation

  • Importing & Exporting data

  • Graphical and Numerical Summaries

  • Writing functions

Regression

  • Simple & Multiple Linear Regression

  • Interaction Terms

  • Non-linear Transformations

  • Dummy variable regression

  • Cross-Validation and the Bootstrap

  • Subset selection methods

  • Penalization [Ridge, Lasso, Elastic Net]

Classification

  • Logistic Regression, LDA, QDA, and KNN,

  • Resampling & Regularization

  • Support Vector Machine

  • Resampling & Regularization

Note:

  • For ML algorithms, case studies will be used to discuss their application, advantages & potential issues.

  • Analysis of different data sets will be performed using R

dsstne Amazon DSSTNE: Build a recommendation system 7 hours

Amazon DSSTNE is an open-source library for training and deploying recommendation models. It allows models with weight matrices that are too large for a single GPU to be trained on a single host.

In this instructor-led, live training, participants will learn how to use DSSTNE to build a recommendation application.

By the end of this training, participants will be able to:

  • Train a recommendation model with sparse datasets as input
  • Scale training and prediction models over multiple GPUs
  • Spread out computation and storage in a model-parallel fashion
  • Generate Amazon-like personalized product recommendations
  • Deploy a production-ready application that can scale under heavy workloads

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

 

mlfunpython Machine Learning Fundamentals with Python 14 hours

The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Python programming language and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, how to interpret the outputs of the algorithms and how to validate the results.

Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications.

Introduction to Applied Machine Learning

  • Statistical learning vs. Machine learning
  • Iteration and evaluation
  • Bias-Variance trade-off

Machine Learning with Python

  • Choice of libraries
  • Add-on tools

Regression

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercises

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Exercises

Cross-validation and Resampling

  • Cross-validation approaches
  • Bootstrap
  • Exercises
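
A minimal scikit-learn sketch (scikit-learn being one plausible library choice; the course leaves the choice of libraries open) that ties the classification and cross-validation topics above together:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    # 5-fold cross-validation of a logistic regression classifier
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(scores, scores.mean())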

Unsupervised Learning

  • K-means clustering
  • Examples
  • Challenges of unsupervised learning and beyond K-means
deepmclrg Machine Learning & Deep Learning with Python and R 14 hours

MACHINE LEARNING

1: Introducing Machine Learning

  • The origins of machine learning
  • Uses and abuses of machine learning
  • Ethical considerations
  • How do machines learn?
  • Abstraction and knowledge representation
  • Generalization
  • Assessing the success of learning
  • Steps to apply machine learning to your data
  • Choosing a machine learning algorithm
  • Thinking about the input data
  • Thinking about types of machine learning algorithms
  • Matching your data to an appropriate algorithm
  • Using R for machine learning
  • Installing and loading R packages
  • Installing an R package
  • Installing a package using the point-and-click interface
  • Loading an R package
  • Summary

2: Managing and Understanding Data

  • R data structures
  • Vectors
  • Factors
  • Lists
  • Data frames
  • Matrices and arrays
  • Managing data with R
  • Saving and loading R data structures
  • Importing and saving data from CSV files
  • Importing data from SQL databases
  • Exploring and understanding data
  • Exploring the structure of data
  • Exploring numeric variables
  • Measuring the central tendency – mean and median
  • Measuring spread – quartiles and the five-number summary
  • Visualizing numeric variables – boxplots
  • Visualizing numeric variables – histograms
  • Understanding numeric data – uniform and normal distributions
  • Measuring spread – variance and standard deviation
  • Exploring categorical variables
  • Measuring the central tendency – the mode
  • Exploring relationships between variables
  • Visualizing relationships – scatterplots
  • Examining relationships – two-way cross-tabulations
  • Summary

3: Lazy Learning – Classification Using Nearest Neighbors

  • Understanding classification using nearest neighbors
  • The kNN algorithm
  • Calculating distance
  • Choosing an appropriate k
  • Preparing data for use with kNN
  • Why is the kNN algorithm lazy?
  • Diagnosing breast cancer with the kNN algorithm
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Transformation – normalizing numeric data
  • Data preparation – creating training and test datasets
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Transformation – z-score standardization
  • Testing alternative values of k
  • Summary

4: Probabilistic Learning – Classification Using

  • Naive Bayes
  • Understanding naive Bayes
  • Basic concepts of Bayesian methods
  • Probability
  • Joint probability
  • Conditional probability with Bayes' theorem
  • The naive Bayes algorithm
  • The naive Bayes classification
  • The Laplace estimator
  • Using numeric features with naive Bayes
  • Example – filtering mobile phone spam with the naive Bayes algorithm
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – processing text data for analysis
  • Data preparation – creating training and test datasets
  • Visualizing text data – word clouds
  • Data preparation – creating indicator features for frequent words
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary

5: Divide and Conquer – Classification Using

  • Decision Trees and Rules
  • Understanding decision trees
  • Divide and conquer
  • The C5.0 decision tree algorithm
  • Choosing the best split
  • Pruning the decision tree
  • Example – identifying risky bank loans using C5.0 decision trees
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – creating random training and test datasets
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Boosting the accuracy of decision trees
  • Making some mistakes more costly than others
  • Understanding classification rules
  • Separate and conquer
  • The One Rule algorithm
  • The RIPPER algorithm
  • Rules from decision trees
  • Example – identifying poisonous mushrooms with rule learners
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary

6: Forecasting Numeric Data – Regression Methods

  • Understanding regression
  • Simple linear regression
  • Ordinary least squares estimation
  • Correlations
  • Multiple linear regression
  • Example – predicting medical expenses using linear regression
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Exploring relationships among features – the correlation matrix
  • Visualizing relationships among features – the scatterplot matrix
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Model specification – adding non-linear relationships
  • Transformation – converting a numeric variable to a binary indicator
  • Model specification – adding interaction effects
  • Putting it all together – an improved regression model
  • Understanding regression trees and model trees
  • Adding regression to trees
  • Example – estimating the quality of wines with regression trees and model trees
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
  • Visualizing decision trees
    • Step 4 – evaluating model performance
  • Measuring performance with mean absolute error
    • Step 5 – improving model performance
  • Summary
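
To make the "adding non-linear relationships" step concrete, here is a small hedged sketch in Python/scikit-learn on synthetic data (the chapter's medical-expenses example is in R; the data below is invented purely to show the effect of adding a squared term):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # Synthetic stand-in data: charges grow non-linearly with age.
    rng = np.random.default_rng(0)
    age = rng.uniform(18, 64, size=200).reshape(-1, 1)
    charges = 250 * age[:, 0] + 3.0 * age[:, 0] ** 2 + rng.normal(0, 500, size=200)

    linear_fit = LinearRegression().fit(age, charges)
    print("R^2, age only:", round(linear_fit.score(age, charges), 3))

    poly = PolynomialFeatures(degree=2, include_bias=False)   # adds the age^2 column
    age_poly = poly.fit_transform(age)
    quad_fit = LinearRegression().fit(age_poly, charges)
    print("R^2, age + age^2:", round(quad_fit.score(age_poly, charges), 3))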

7: Black Box Methods – Neural Networks and Support Vector Machines

  • Understanding neural networks
  • From biological to artificial neurons
  • Activation functions
  • Network topology
  • The number of layers
  • The direction of information travel
  • The number of nodes in each layer
  • Training neural networks with backpropagation
  • Modeling the strength of concrete with ANNs
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Understanding Support Vector Machines
  • Classification with hyperplanes
  • Finding the maximum margin
  • The case of linearly separable data
  • The case of non-linearly separable data
  • Using kernels for non-linear spaces
  • Performing OCR with SVMs
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary

8: Finding Patterns – Market Basket Analysis Using Association Rules

  • Understanding association rules
  • The Apriori algorithm for association rule learning
  • Measuring rule interest – support and confidence
  • Building a set of rules with the Apriori principle
  • Example – identifying frequently purchased groceries with association rules
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – creating a sparse matrix for transaction data
  • Visualizing item support – item frequency plots
  • Visualizing transaction data – plotting the sparse matrix
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Sorting the set of association rules
  • Taking subsets of association rules
  • Saving association rules to a file or data frame
  • Summary
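
The support/confidence machinery outlined above can be sketched in Python with the third-party mlxtend package (an assumption on my part; the chapter itself uses R's arules, and the toy transactions below are invented):

    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import apriori, association_rules

    # Toy market-basket data standing in for the grocery transactions.
    transactions = [["milk", "bread", "butter"], ["bread", "butter"],
                    ["milk", "bread"], ["milk", "butter"]]

    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

    itemsets = apriori(onehot, min_support=0.5, use_colnames=True)            # support threshold
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
    print(rules[["antecedents", "consequents", "support", "confidence"]]
          .sort_values("confidence", ascending=False))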

9: Finding Groups of Data – Clustering with k-means

  • Understanding clustering
  • Clustering as a machine learning task
  • The k-means algorithm for clustering
  • Using distance to assign and update clusters
  • Choosing the appropriate number of clusters
  • Finding teen market segments using k-means clustering
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – dummy coding missing values
  • Data preparation – imputing missing values
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary
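
A compact way to see the "choosing the number of clusters" step is to compare the within-cluster sum of squares for several values of k. Below is a minimal Python/scikit-learn sketch on synthetic blobs (the chapter's teen market-segmentation example is in R):

    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans

    # Synthetic data with four latent groups, standing in for the teen/SNS dataset.
    X, _ = make_blobs(n_samples=300, centers=4, random_state=7)

    for k in range(2, 7):                        # candidate numbers of clusters
        km = KMeans(n_clusters=k, n_init=10, random_state=7).fit(X)
        print(k, round(km.inertia_, 1))          # within-cluster sum of squares ("elbow" heuristic)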

10: Evaluating Model Performance

  • Measuring performance for classification
  • Working with classification prediction data in R
  • A closer look at confusion matrices
  • Using confusion matrices to measure performance
  • Beyond accuracy – other measures of performance
  • The kappa statistic
  • Sensitivity and specificity
  • Precision and recall
  • The F-measure
  • Visualizing performance tradeoffs
  • ROC curves
  • Estimating future performance
  • The holdout method
  • Cross-validation
  • Bootstrap sampling
  • Summary
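
For readers who prefer Python, the chapter's main measures map directly onto scikit-learn helpers. The hedged sketch below (dataset and classifier chosen only for illustration) prints a confusion matrix, the kappa statistic, precision/recall/F1, ROC AUC, and a 10-fold cross-validated accuracy:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.metrics import (confusion_matrix, cohen_kappa_score, precision_score,
                                 recall_score, f1_score, roc_auc_score)

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]

    print(confusion_matrix(y_te, pred))
    print("kappa:", cohen_kappa_score(y_te, pred))
    print("precision/recall/F1:", precision_score(y_te, pred),
          recall_score(y_te, pred), f1_score(y_te, pred))
    print("ROC AUC:", roc_auc_score(y_te, proba))
    print("10-fold CV accuracy:", cross_val_score(model, X, y, cv=10).mean())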

11: Improving Model Performance

  • Tuning stock models for better performance
  • Using caret for automated parameter tuning
  • Creating a simple tuned model
  • Customizing the tuning process
  • Improving model performance with meta-learning
  • Understanding ensembles
  • Bagging
  • Boosting
  • Random forests
  • Training random forests
  • Evaluating random forest performance
  • Summary

DEEP LEARNING WITH R

1: Getting Started with Deep Learning

  • What is deep learning?
  • Conceptual overview of neural networks
  • Deep neural networks
  • R packages for deep learning
  • Setting up reproducible results
  • Neural networks
  • The deepnet package
  • The darch package
  • The H2O package
  • Connecting R and H2O
  • Initializing H2O
  • Linking datasets to an H2O cluster
  • Summary

2: Training a Prediction Model

  • Neural networks in R
  • Building a neural network
  • Generating predictions from a neural network
  • The problem of overfitting data – the consequences explained
  • Use case – build and apply a neural network
  • Summary

3: Preventing Overfitting

  • L1 penalty
  • L1 penalty in action
  • L2 penalty
  • L2 penalty in action
  • Weight decay (L2 penalty in neural networks)
  • Ensembles and model averaging
  • Use case – improving out-of-sample model performance using dropout
  • Summary

4: Identifying Anomalous Data

  • Getting started with unsupervised learning
  • How do auto-encoders work?
  • Regularized auto-encoders
  • Penalized auto-encoders
  • Denoising auto-encoders
  • Training an auto-encoder in R
  • Use case – building and applying an auto-encoder model
  • Fine-tuning auto-encoder models
  • Summary

5: Training Deep Prediction Models

  • Getting started with deep feedforward neural networks
  • Common activation functions – rectifiers, hyperbolic tangent, and maxout
  • Picking hyperparameters
  • Training and predicting new data from a deep neural network
  • Use case – training a deep neural network for automatic classification
  • Working with model results
  • Summary

6: Tuning and Optimizing Models

  • Dealing with missing data
  • Solutions for models with low accuracy
  • Grid search
  • Random search
  • Summary

DEEP LEARNING WITH PYTHON

I Introduction

1 Welcome

  • Deep Learning The Wrong Way
  • Deep Learning With Python
  • Summary

II Background

2 Introduction to Theano

  • What is Theano?
  • How to Install Theano
  • Simple Theano Example
  • Extensions and Wrappers for Theano
  • More Theano Resources
  • Summary

3 Introduction to TensorFlow

  • What is TensorFlow?
  • How to Install TensorFlow
  • Your First Examples in TensorFlow
  • Simple TensorFlow Example
  • More Deep Learning Models
  • Summary

4 Introduction to Keras

  • What is Keras?
  • How to Install Keras
  • Theano and TensorFlow Backends for Keras
  • Build Deep Learning Models with Keras
  • Summary

5 Project: Develop Large Models on GPUs Cheaply In the Cloud

  • Project Overview
  • Setup Your AWS Account
  • Launch Your Server Instance
  • Login, Configure and Run
  • Build and Run Models on AWS
  • Close Your EC2 Instance
  • Tips and Tricks for Using Keras on AWS
  • More Resources For Deep Learning on AWS
  • Summary

III Multilayer Perceptrons

6 Crash Course In Multilayer Perceptrons

  • Crash Course Overview
  • Multilayer Perceptrons
  • Neurons
  • Networks of Neurons
  • Training Networks
  • Summary

7 Develop Your First Neural Network With Keras

  • Tutorial Overview
  • Pima Indians Onset of Diabetes Dataset
  • Load Data
  • Define Model
  • Compile Model
  • Fit Model
  • Evaluate Model
  • Tie It All Together
  • Summary
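
The Load/Define/Compile/Fit/Evaluate flow in this chapter boils down to a few lines of Keras. A minimal sketch follows; random data stands in for the Pima Indians CSV, and the tf.keras import path is an assumption (the book targets standalone Keras with a Theano or TensorFlow backend):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Input, Dense

    # Random stand-in for the 8-feature diabetes dataset (Load Data).
    rng = np.random.default_rng(7)
    X = rng.random((768, 8)).astype("float32")
    y = (X[:, 0] + X[:, 5] > 1.0).astype("float32")

    model = Sequential([                               # Define Model
        Input(shape=(8,)),
        Dense(12, activation="relu"),
        Dense(8, activation="relu"),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  metrics=["accuracy"])                # Compile Model
    model.fit(X, y, epochs=10, batch_size=16, verbose=0)   # Fit Model
    print(model.evaluate(X, y, verbose=0))                 # Evaluate Model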

8 Evaluate The Performance of Deep Learning Models

  • Empirically Evaluate Network Configurations
  • Data Splitting
  • Manual k-Fold Cross Validation
  • Summary

9 Use Keras Models With Scikit-Learn For General Machine Learning

  • Overview
  • Evaluate Models with Cross Validation
  • Grid Search Deep Learning Model Parameters
  • Summary

10 Project: Multiclass Classification Of Flower Species

  • Iris Flowers Classification Dataset
  • Import Classes and Functions
  • Initialize Random Number Generator
  • Load The Dataset
  • Encode The Output Variable
  • Define The Neural Network Model
  • Evaluate The Model with k-Fold Cross Validation
  • Summary

11 Project: Binary Classification Of Sonar Returns

  • Sonar Object Classification Dataset
  • Baseline Neural Network Model Performance
  • Improve Performance With Data Preparation
  • Tuning Layers and Neurons in The Model
  • Summary

12 Project: Regression Of Boston House Prices

  • Boston House Price Dataset
  • Develop a Baseline Neural Network Model
  • Lift Performance By Standardizing The Dataset
  • Tune The Neural Network Topology
  • Summary

IV Advanced Multilayer Perceptrons and Keras

13 Save Your Models For Later With Serialization

  • Tutorial Overview
  • Save Your Neural Network Model to JSON
  • Save Your Neural Network Model to YAML
  • Summary

14 Keep The Best Models During Training With Checkpointing

  • Checkpointing Neural Network Models
  • Checkpoint Neural Network Model Improvements
  • Checkpoint Best Neural Network Model Only
  • Loading a Saved Neural Network Model
  • Summary

15 Understand Model Behavior During Training By Plotting History

  • Access Model Training History in Keras
  • Visualize Model Training History in Keras
  • Summary

16 Reduce Overfitting With Dropout Regularization

  • Dropout Regularization For Neural Networks
  • Dropout Regularization in Keras
  • Using Dropout on the Visible Layer
  • Using Dropout on Hidden Layers
  • Tips For Using Dropout
  • Summary
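
A hedged sketch of dropout on hidden layers is shown below; the max-norm weight constraint is a common companion to dropout but is my addition, not something stated in the outline, and the tf.keras import path is again an assumption:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Input, Dense, Dropout
    from tensorflow.keras.constraints import MaxNorm

    model = Sequential([
        Input(shape=(20,)),
        Dense(64, activation="relu", kernel_constraint=MaxNorm(3)),
        Dropout(0.2),                                  # randomly zero 20% of activations
        Dense(32, activation="relu", kernel_constraint=MaxNorm(3)),
        Dropout(0.2),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.summary()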

17 Lift Performance With Learning Rate Schedules

  • Learning Rate Schedule For Training Models
  • Ionosphere Classification Dataset
  • Time-Based Learning Rate Schedule
  • Drop-Based Learning Rate Schedule
  • Tips for Using Learning Rate Schedules
  • Summary

V Convolutional Neural Networks

18 Crash Course In Convolutional Neural Networks

  • The Case for Convolutional Neural Networks
  • Building Blocks of Convolutional Neural Networks
  • Convolutional Layers
  • Pooling Layers
  • Fully Connected Layers
  • Worked Example
  • Convolutional Neural Networks Best Practices
  • Summary

19 Project: Handwritten Digit Recognition

  • Handwritten Digit Recognition Dataset
  • Loading the MNIST dataset in Keras
  • Baseline Model with Multilayer Perceptrons
  • Simple Convolutional Neural Network for MNIST
  • Larger Convolutional Neural Network for MNIST
  • Summary

20 Improve Model Performance With Image Augmentation

  • Keras Image Augmentation API
  • Point of Comparison for Image Augmentation
  • Feature Standardization
  • ZCA Whitening
  • Random Rotations
  • Random Shifts
  • Random Flips
  • Saving Augmented Images to File
  • Tips For Augmenting Image Data with Keras
  • Summary
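
The augmentation options named above correspond to arguments of Keras' ImageDataGenerator. Below is a minimal sketch on random stand-in images (the tf.keras import path and parameter values are assumptions):

    import numpy as np
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Random stand-in images; a real run would use MNIST or similar.
    X = np.random.rand(32, 28, 28, 1).astype("float32")
    y = np.random.randint(0, 10, size=32)

    datagen = ImageDataGenerator(
        featurewise_center=True, featurewise_std_normalization=True,   # feature standardization
        rotation_range=90,                                             # random rotations
        width_shift_range=0.1, height_shift_range=0.1,                 # random shifts
        horizontal_flip=True, vertical_flip=True,                      # random flips
    )
    datagen.fit(X)                                   # needed for the featurewise statistics
    batch_X, batch_y = next(datagen.flow(X, y, batch_size=8))
    print(batch_X.shape)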

21 Project: Object Recognition in Photographs

  • Photograph Object Recognition Dataset
  • Loading The CIFAR-10 Dataset in Keras
  • Simple CNN for CIFAR-10
  • Larger CNN for CIFAR-10
  • Extensions To Improve Model Performance
  • Summary

22 Project: Predict Sentiment From Movie Reviews

  • Movie Review Sentiment Classification Dataset
  • Load the IMDB Dataset With Keras
  • Word Embeddings
  • Simple Multilayer Perceptron Model
  • One-Dimensional Convolutional Neural Network
  • Summary

VI Recurrent Neural Networks

23 Crash Course In Recurrent Neural Networks

  • Support For Sequences in Neural Networks
  • Recurrent Neural Networks
  • Long Short-Term Memory Networks
  • Summary

24 Time Series Prediction with Multilayer Perceptrons

  • Problem Description: Time Series Prediction
  • Multilayer Perceptron Regression
  • Multilayer Perceptron Using the Window Method
  • Summary

25 Time Series Prediction with LSTM Recurrent Neural Networks

  • LSTM Network For Regression
  • LSTM For Regression Using the Window Method
  • LSTM For Regression with Time Steps
  • LSTM With Memory Between Batches
  • Stacked LSTMs With Memory Between Batches
  • Summary
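
A minimal window-method LSTM sketch is shown below on a synthetic sine series (the chapter works with a real time series; the series, window size, and tf.keras imports here are assumptions for illustration):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Input, LSTM, Dense

    # Synthetic series; each sample is a window of 3 past values predicting the next one.
    series = np.sin(np.linspace(0, 20, 200)).astype("float32")
    window = 3
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    X = X.reshape((X.shape[0], window, 1))          # samples, time steps, features

    model = Sequential([Input(shape=(window, 1)), LSTM(8), Dense(1)])
    model.compile(loss="mse", optimizer="adam")
    model.fit(X, y, epochs=20, batch_size=16, verbose=0)
    print(model.predict(X[-1:], verbose=0))         # one-step-ahead forecast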

26 Project: Sequence Classification of Movie Reviews

  • Simple LSTM for Sequence Classification
  • LSTM For Sequence Classification With Dropout
  • LSTM and CNN For Sequence Classification
  • Summary

27 Understanding Stateful LSTM Recurrent Neural Networks

  • Problem Description: Learn the Alphabet
  • LSTM for Learning One-Char to One-Char Mapping
  • LSTM for a Feature Window to One-Char Mapping
  • LSTM for a Time Step Window to One-Char Mapping
  • LSTM State Maintained Between Samples Within A Batch
  • Stateful LSTM for a One-Char to One-Char Mapping
  • LSTM with Variable Length Input to One-Char Output
  • Summary

28 Project: Text Generation With Alice in Wonderland

  • Problem Description: Text Generation
  • Develop a Small LSTM Recurrent Neural Network
  • Generating Text with an LSTM Network
  • Larger LSTM Recurrent Neural Network
  • Extension Ideas to Improve the Model
  • Summary
pythonadvml Python for Advanced Machine Learning 21 hours

In this instructor-led, live training, participants will learn the most relevant and cutting-edge machine learning techniques in Python as they build a series of demo applications involving image, music, text, and financial data.

By the end of this training, participants will be able to:

  • Implement machine learning algorithms and techniques for solving complex problems
  • Apply deep learning and semi-supervised learning to applications involving image, music, text, and financial data
  • Push Python algorithms to their maximum potential
  • Use libraries and packages such as NumPy and Theano

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

annmldt Artificial Neural Networks, Machine Learning, Deep Thinking 21 hours

DAY 1 - ARTIFICIAL NEURAL NETWORKS

Introduction and ANN Structure.

  • Biological neurons and artificial neurons.
  • Model of an ANN.
  • Activation functions used in ANNs.
  • Typical classes of network architectures.

Mathematical Foundations and Learning mechanisms.

  • Re-visiting vector and matrix algebra.
  • State-space concepts.
  • Concepts of optimization.
  • Error-correction learning.
  • Memory-based learning.
  • Hebbian learning.
  • Competitive learning.

Single layer perceptrons.

  • Structure and learning of perceptrons.
  • Pattern classifier - introduction and Bayes' classifiers.
  • Perceptron as a pattern classifier.
  • Perceptron convergence.
  • Limitations of perceptrons.

Feedforward ANN.

  • Structures of Multi-layer feedforward networks.
  • Back propagation algorithm.
  • Back propagation - training and convergence.
  • Functional approximation with back propagation.
  • Practical and design issues of back propagation learning.

Radial Basis Function Networks.

  • Pattern separability and interpolation.
  • Regularization Theory.
  • Regularization and RBF networks.
  • RBF network design and training.
  • Approximation properties of RBF.

Competitive Learning and Self organizing ANN.

  • General clustering procedures.
  • Learning Vector Quantization (LVQ).
  • Competitive learning algorithms and architectures.
  • Self organizing feature maps.
  • Properties of feature maps.

Fuzzy Neural Networks.

  • Neuro-fuzzy systems.
  • Background of fuzzy sets and logic.
  • Design of fuzzy systems.
  • Design of fuzzy ANNs.

Applications

  • A few examples of Neural Network applications, their advantages and problems will be discussed.

DAY 2 - MACHINE LEARNING

  • The PAC Learning Framework
    • Guarantees for finite hypothesis set – consistent case
    • Guarantees for finite hypothesis set – inconsistent case
    • Generalities
      • Deterministic vs. stochastic scenarios
      • Bayes error and noise
      • Estimation and approximation errors
      • Model selection
  • Rademacher Complexity and VC Dimension
  • Bias - Variance tradeoff
  • Regularisation
  • Over-fitting
  • Validation
  • Support Vector Machines
  • Kriging (Gaussian Process regression)
  • PCA and Kernel PCA
  • Self-Organising Maps (SOM)
  • Kernel induced vector space
    • Mercer Kernels and Kernel - induced similarity metrics
  • Reinforcement Learning

DAY 3 - DEEP LEARNING

This will be taught in relation to the topics covered on Day 1 and Day 2

  • Logistic and Softmax Regression
  • Sparse Autoencoders
  • Vectorization, PCA and Whitening
  • Self-Taught Learning
  • Deep Networks
  • Linear Decoders
  • Convolution and Pooling
  • Sparse Coding
  • Independent Component Analysis
  • Canonical Correlation Analysis
  • Demos and Applications
wolfdata Data Science: Analysis and Presentation 7 hours

The Wolfram System's integrated environment makes it an efficient tool for both analyzing and presenting data. This course covers aspects of the Wolfram Language relevant to analytics, including statistical computation, visualization, data import and export, and automatic generation of reports.

  • Using associations
  • Querying with datasets
  • Machine learning for classification and prediction
  • Working with semantically imported data
  • Authoring customizable documents from templates
  • Deploying results to the cloud
radvml Advanced Machine Learning with R 21 hours

In this instructor-led, live training, participants will learn advanced techniques for Machine Learning with R as they step through the creation of a real-world application.

By the end of this training, participants will be able to:

  • Use techniques such as hyper-parameter tuning and deep learning
  • Understand and implement unsupervised learning techniques
  • Put a model into production for use in a larger application

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

bspkaml Machine Learning 21 hours
This course will be a combination of theory and practical work with specific examples used throughout the event.

Introduction

This section provides a general introduction to when to use machine learning, what should be considered, and what it all means, including the pros and cons. Topics include: data types (structured/unstructured/static/streamed), data validity and volume, data-driven vs. user-driven analytics, statistical models vs. machine learning models, challenges of unsupervised learning, the bias-variance trade-off, iteration and evaluation, cross-validation approaches, and supervised/unsupervised/reinforcement learning.

MAJOR TOPICS

1.Understanding naive Bayes

  • Basic concepts of Bayesian methods 
  • Probability 
  • Joint probability
  • Conditional probability with Bayes' theorem 
  • The naive Bayes algorithm 
  • The naive Bayes classification 
  • The Laplace estimator
  • Using numeric features with naive Bayes

2.Understanding decision trees

  • Divide and conquer 
  • The C5.0 decision tree algorithm 
  • Choosing the best split 
  • Pruning the decision tree

3. Understanding neural networks

  • From biological to artificial neurons 
  • Activation functions 
  • Network topology 
  • The number of layers 
  • The direction of information travel 
  • The number of nodes in each layer 
  • Training neural networks with backpropagation
  • Deep Learning

4. Understanding Support Vector Machines

  • Classification with hyperplanes 
  • Finding the maximum margin 
  • The case of linearly separable data 
  • The case of non-linearly separable data 
  • Using kernels for non-linear spaces

5. Understanding clustering

  • Clustering as a machine learning task
  • The k-means algorithm for clustering 
  • Using distance to assign and update clusters 
  • Choosing the appropriate number of clusters

6. Measuring performance for classification

  • Working with classification prediction data 
  • A closer look at confusion matrices 
  • Using confusion matrices to measure performance 
  • Beyond accuracy – other measures of performance 
  • The kappa statistic 
  • Sensitivity and specificity 
  • Precision and recall 
  • The F-measure 
  • Visualizing performance tradeoffs 
  • ROC curves 
  • Estimating future performance 
  • The holdout method 
  • Cross-validation 
  • Bootstrap sampling

7. Tuning stock models for better performance

  • Using caret for automated parameter tuning 
  • Creating a simple tuned model 
  • Customizing the tuning process 
  • Improving model performance with meta-learning 
  • Understanding ensembles 
  • Bagging 
  • Boosting 
  • Random forests 
  • Training random forests 
  • Evaluating random forest performance

MINOR TOPICS

8. Understanding classification using nearest neighbors 

  • The kNN algorithm 
  • Calculating distance 
  • Choosing an appropriate k 
  • Preparing data for use with kNN 
  • Why is the kNN algorithm lazy?

9. Understanding classification rules 

  • Separate and conquer 
  • The One Rule algorithm 
  • The RIPPER algorithm 
  • Rules from decision trees

10.Understanding regression 

  • Simple linear regression 
  • Ordinary least squares estimation 
  • Correlations 
  • Multiple linear regression

11.Understanding regression trees and model trees 

  • Adding regression to trees

12. Understanding association rules 

  • The Apriori algorithm for association rule learning 
  • Measuring rule interest – support and confidence 
  • Building a set of rules with the Apriori principle

Extras

  • Spark/PySpark/MLlib and Multi-armed bandits
datamodeling Pattern Recognition 35 hours

This course provides an introduction into the field of pattern recognition and machine learning. It touches on practical applications in statistics, computer science, signal processing, computer vision, data mining, and bioinformatics.

The course is interactive and includes plenty of hands-on exercises, instructor feedback, and testing of knowledge and skills acquired.

Audience
    Data analysts
    PhD students, researchers and practitioners

 

Introduction

Probability theory, model selection, decision and information theory

Probability distributions

Linear models for regression and classification

Neural networks

Kernel methods

Sparse kernel machines

Graphical models

Mixture models and EM

Approximate inference

Sampling methods

Continuous latent variables

Sequential data

Combining models

 

encogadv Encog: Advanced Machine Learning 14 hours

Encog is an open-source machine learning framework for Java and .Net.

In this instructor-led, live training, participants will learn advanced machine learning techniques for building accurate neural network predictive models.

By the end of this training, participants will be able to:

  • Implement different neural networks optimization techniques to resolve underfitting and overfitting
  • Understand and choose from a number of neural network architectures
  • Implement supervised feed forward and feedback networks

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

mlrobot1 Machine Learning for Robotics 21 hours

This course introduces machine learning methods in robotics applications.

It is a broad overview of existing methods, motivations and main ideas in the context of pattern recognition.

After a short theoretical background, participants will perform simple exercises using open-source software (usually R) or any other popular software.

  • Regression
  • Probabilistic Graphical Models
  • Boosting
  • Kernel Methods
  • Gaussian Processes
  • Evaluation and Model Selection
  • Sampling Methods
  • Clustering
  • CRFs
  • Random Forests
  • IVMs
patternmatching Pattern Matching 14 hours

Pattern Matching is a technique used to locate specified patterns within an image. It can be used to determine the existence of specified characteristics within a captured image, for example the expected label on a defective product in a factory line or the specified dimensions of a component. It is different from "Pattern Recognition" (which recognizes general patterns based on larger collections of related samples) in that it specifically dictates what we are looking for, then tells us whether the expected pattern exists or not.
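
As an illustration of the idea (not part of the course outline), template matching with OpenCV captures the "does the expected pattern exist, and where?" question in a few lines; OpenCV and the synthetic image below are assumptions made for this sketch:

    import numpy as np
    import cv2

    # Synthetic grayscale scene: a noisy background with one bright rectangular "component".
    rng = np.random.default_rng(0)
    image = rng.integers(0, 50, size=(200, 200)).astype(np.uint8)
    cv2.rectangle(image, (60, 80), (100, 120), 255, -1)
    template = image[70:130, 50:110].copy()          # the pattern we expect to find

    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    print("match score:", round(float(max_val), 3), "at", max_loc)   # close to 1.0 means the pattern is present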

Audience
    Engineers and developers seeking to develop machine vision applications
    Manufacturing engineers, technicians and managers

Format of the course
    This course introduces the approaches, technologies and algorithms used in the field of pattern matching as it applies to Machine Vision.

Introduction
    Computer Vision
    Machine Vision
    Pattern Matching vs Pattern Recognition

Alignment
    Features of the target object
    Points of reference on the object
    Determining position
    Determining orientation

Gauging
    Setting tolerance levels
    Measuring lengths, diameters, angles, and other dimensions
    Rejecting a component

Inspection
    Detecting flaws
    Adjusting the system

Closing remarks

 

encogintro Encog: Introduction to Machine Learning 14 hours

Encog is an open-source machine learning framework for Java and .Net.

In this instructor-led, live training, participants will learn how to create various neural network components using ENCOG. Real-world case studies will be discussed and machine learning based solutions to these problems will be explored.

By the end of this training, participants will be able to:

  • Prepare data for neural networks using the normalization process
  • Implement feed forward networks and propagation training methodologies
  • Implement classification and regression tasks
  • Model and train neural networks using Encog's GUI based workbench
  • Integrate neural network support into real-world applications

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

matlabml1 Introduction to Machine Learning with MATLAB 21 hours

MATLAB is a numerical computing environment and programming language developed by MathWorks.

  1. MATLAB Basics
  2. MATLAB More Advanced Features
  3. BP Neural Network
  4. RBF, GRNN and PNN Neural Networks
  5. SOM Neural Networks
  6. Support Vector Machine, SVM
  7. Extreme Learning Machine, ELM
  8. Decision Trees and Random Forests
  9. Genetic Algorithm, GA
  10. Particle Swarm Optimization, PSO
  11. Ant Colony Algorithm, ACA
  12. Simulated Annealing, SA
  13. Dimensionality Reduction and Feature Selection
Torch Torch: Getting started with Machine and Deep Learning 21 hours

Torch is an open source machine learning library and a scientific computing framework based on the Lua programming language. It provides a development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. It is one of the fastest and most flexible frameworks for Machine and Deep Learning and is used by companies such as Facebook, Google, Twitter, NVIDIA, AMD, Intel, and many others.

In this course we cover the principles of Torch, its unique features, and how it can be applied in real-world applications. We step through numerous hands-on exercises all throughout, demonstrating and practicing the concepts learned.

By the end of the course, participants will have a thorough understanding of Torch's underlying features and capabilities as well as its role and contribution within the AI space compared to other frameworks and libraries. Participants will have also received the necessary practice to implement Torch in their own projects.

Audience
    Software developers and programmers wishing to enable Machine and Deep Learning within their applications

Format of the course
    Overview of Machine and Deep Learning
    In-class coding and integration exercises
    Test questions sprinkled along the way to check understanding

Introduction to Torch
    Like NumPy but with CPU and GPU implementation
    Torch's usage in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking

Installing Torch
    Linux, Windows, Mac
    Bitmapi and Docker

Installing Torch packages
    Using the LuaRocks package manager

Choosing an IDE for Torch
    ZeroBrane Studio
    Eclipse plugin for Lua

Working with the Lua scripting language and LuaJIT
    Lua's integration with C/C++
    Lua syntax: datatypes, loops and conditionals, functions, tables, and file I/O.
    Object orientation and serialization in Torch
    Coding exercise

Loading a dataset in Torch
    MNIST
    CIFAR-10, CIFAR-100
    Imagenet

Machine Learning in Torch
    Deep Learning
        Manual feature extraction vs convolutional networks
    Supervised and Unsupervised Learning
        Building a neural network with Torch    
    N-dimensional arrays

Image analysis with Torch
    Image package
    The Tensor library

Working with the REPL interpreter

Working with databases

Networking and Torch

GPU support in Torch

Integrating Torch
    C, Python, and others

Embedding Torch
    iOS and Android

Other frameworks and libraries
    Facebook's optimized deep-learning modules and containers

Creating your own package

Testing and debugging

Releasing your application

The future of AI and Torch

pythontextml Python: Machine Learning with Text 21 hours

In this instructor-led, live training, participants will learn how to use the right machine learning and NLP (Natural Language Processing) techniques to extract value from text-based data.

By the end of this training, participants will be able to:

  • Solve text-based data science problems with high-quality, reusable code
  • Apply different aspects of scikit-learn (classification, clustering, regression, dimensionality reduction) to solve problems
  • Build effective machine learning models using text-based data
  • Create a dataset and extract features from unstructured text
  • Visualize data with Matplotlib
  • Build and evaluate models to gain insight
  • Troubleshoot text encoding errors

Audience

  • Developers
  • Data Scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction

  • The value of text-based data

Workflow for a Text-Based Data Science Problem

Choosing the Right Machine Learning Libraries

Overview of NLP Techniques

Preparing a Dataset

Visualizing the Data

Working with Text Data with scikit-learn
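
A minimal sketch of working with text data in scikit-learn is shown below (the TfidfVectorizer/LogisticRegression choice and the toy reviews are assumptions; the course outline does not prescribe specific estimators):

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy labelled reviews standing in for a real text dataset.
    texts = ["great product, works well", "terrible, broke after a day",
             "excellent value for money", "very disappointing quality"]
    labels = ["pos", "neg", "pos", "neg"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["works great", "completely broke"]))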

Building a Machine Learning Model

Splitting into Train and Test Sets

Applying Linear Regression and Non-Linear Regression

Applying NLP Techniques

Parsing Text Data Using Regular Expressions

Exploring Other Machine Learning Approaches

Troubleshooting Text Encoding Issues

Closing Remarks

bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking 21 hours

1. Understanding classification using nearest neighbors 

  • The kNN algorithm 
  • Calculating distance 
  • Choosing an appropriate k 
  • Preparing data for use with kNN 
  • Why is the kNN algorithm lazy?

2.Understanding naive Bayes 

  • Basic concepts of Bayesian methods 
  • Probability 
  • Joint probability
  • Conditional probability with Bayes' theorem 
  • The naive Bayes algorithm 
  • The naive Bayes classification 
  • The Laplace estimator
  • Using numeric features with naive Bayes

3.Understanding decision trees 

  • Divide and conquer 
  • The C5.0 decision tree algorithm 
  • Choosing the best split 
  • Pruning the decision tree

4. Understanding classification rules 

  • Separate and conquer 
  • The One Rule algorithm 
  • The RIPPER algorithm 
  • Rules from decision trees

5.Understanding regression 

  • Simple linear regression 
  • Ordinary least squares estimation 
  • Correlations 
  • Multiple linear regression

6.Understanding regression trees and model trees 

  • Adding regression to trees

7. Understanding neural networks 

  • From biological to artificial neurons 
  • Activation functions 
  • Network topology 
  • The number of layers 
  • The direction of information travel 
  • The number of nodes in each layer 
  • Training neural networks with backpropagation

8. Understanding Support Vector Machines 

  • Classification with hyperplanes 
  • Finding the maximum margin 
  • The case of linearly separable data 
  • The case of non-linearly separable data 
  • Using kernels for non-linear spaces

9. Understanding association rules 

  • The Apriori algorithm for association rule learning 
  • Measuring rule interest – support and confidence 
  • Building a set of rules with the Apriori principle

10. Understanding clustering

  • Clustering as a machine learning task
  • The k-means algorithm for clustering 
  • Using distance to assign and update clusters 
  • Choosing the appropriate number of clusters

11. Measuring performance for classification 

  • Working with classification prediction data 
  • A closer look at confusion matrices 
  • Using confusion matrices to measure performance 
  • Beyond accuracy – other measures of performance 
  • The kappa statistic 
  • Sensitivity and specificity 
  • Precision and recall 
  • The F-measure 
  • Visualizing performance tradeoffs 
  • ROC curves 
  • Estimating future performance 
  • The holdout method 
  • Cross-validation 
  • Bootstrap sampling

12. Tuning stock models for better performance 

  • Using caret for automated parameter tuning 
  • Creating a simple tuned model 
  • Customizing the tuning process 
  • Improving model performance with meta-learning 
  • Understanding ensembles 
  • Bagging 
  • Boosting 
  • Random forests 
  • Training random forests
  • Evaluating random forest performance

13. Deep Learning

  • Three Classes of Deep Learning
  • Deep Autoencoders
  • Pre-trained Deep Neural Networks
  • Deep Stacking Networks

14. Discussion of Specific Application Areas

OpenNN OpenNN: Implementing neural networks 14 hours

OpenNN is an open-source class library, written in C++, which implements neural networks for use in machine learning.

In this course we go over the principles of neural networks and use OpenNN to implement a sample application.

Audience
    Software developers and programmers wishing to create Deep Learning applications.

Format of the course
    Lecture and discussion coupled with hands-on exercises.

Introduction to OpenNN, Machine Learning and Deep Learning

Downloading OpenNN

Working with Neural Designer
    Using Neural Designer for descriptive, diagnostic, predictive and prescriptive analytics

OpenNN architecture
    CPU parallelization

OpenNN classes
    Data set, neural network, loss index, training strategy, model selection, testing analysis
    Vector and matrix templates

Building a neural network application
    Choosing a suitable neural network
    Formulating the variational problem (loss index)
    Solving the reduced function optimization problem (training strategy)

Working with datasets
     The data matrix (columns as variables and rows as instances)

Learning tasks
    Function regression
    Pattern recognition

Compiling with QT Creator

Integrating, testing and debugging your application

The future of neural networks and OpenNN

mlios Machine Learning on iOS 14 hours

In this instructor-led, live training, participants will learn how to use the iOS Machine Learning (ML) technology stack as they step through the creation and deployment of an iOS mobile app.

By the end of this training, participants will be able to:

  • Create a mobile app capable of image processing, text analysis and speech recognition
  • Access pre-trained ML models for integration into iOS apps
  • Create a custom ML model
  • Add Siri Voice support to iOS apps
  • Understand and use frameworks such as Core ML, Vision, Core Graphics, and GameplayKit
  • Use languages and tools such as Python, Keras, Caffe, TensorFlow, scikit-learn, libsvm, Anaconda, and Spyder

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

dladv Advanced Deep Learning 28 hours
  • Machine Learning Limitations
  • Machine Learning, Non-linear mappings
  • Neural Networks
  • Non-Linear Optimization, Stochastic/MiniBatch Gradient Descent
  • Back Propagation
  • Deep Sparse Coding
  • Sparse Autoencoders (SAE)
  • Convolutional Neural Networks (CNNs)
  • Successes: Descriptor Matching
  • Stereo-based Obstacle Avoidance for Robotics
  • Pooling and invariance
  • Visualization/Deconvolutional Networks
  • Recurrent Neural Networks (RNNs) and their optimization
  • Applications to NLP
  • RNNs continued
  • Hessian-Free Optimization
  • Language analysis: word/sentence vectors, parsing, sentiment analysis, etc.
  • Probabilistic Graphical Models
  • Hopfield Networks, Boltzmann Machines, Restricted Boltzmann Machines
  • Deep Belief Nets, Stacked RBMs
  • Applications to NLP, Pose and Activity Recognition in Videos
  • Recent Advances
  • Large-Scale Learning
  • Neural Turing Machines

 

BigData_ A practical introduction to Data Analysis and Big Data 35 hours

Participants who complete this training will gain a practical, real-world understanding of Big Data and its related technologies, methodologies and tools.

Participants will have the opportunity to put this knowledge into practice through hands-on exercises. Group interaction and instructor feedback make up an important component of the class.

The course starts with an introduction to elemental concepts of Big Data, then progresses into the programming languages and methodologies used to perform Data Analysis. Finally, we discuss the tools and infrastructure that enable Big Data storage, Distributed Processing, and Scalability.

Audience

  • Developers / programmers
  • IT consultants

Format of the course

  • Part lecture, part discussion, hands-on practice and implementation, occasional quizzing to measure progress.

Introduction to Data Analysis and Big Data

  • What makes Big Data "big"?
    • Velocity, Volume, Variety, Veracity (VVVV)
  • Limits to traditional Data Processing
  • Distributed Processing
  • Statistical Analysis
  • Types of Machine Learning Analysis
  • Data Visualization

Languages used for Data Analysis

  • R language
    • Why R for Data Analysis?
    • Data manipulation, calculation and graphical display
  • Python
    • Why Python for Data Analysis?
    • Manipulating, processing, cleaning, and crunching data

Approaches to Data Analysis

  • Statistical Analysis
    • Time Series analysis
    • Forecasting with Correlation and Regression models
    • Inferential Statistics (estimating)
    • Descriptive Statistics in Big Data sets (e.g. calculating mean)
  • Machine Learning
    • Supervised vs unsupervised learning
    • Classification and clustering
    • Estimating cost of specific methods
    • Filtering
  • Natural Language Processing
    • Processing text
    • Understanding the meaning of the text
    • Automatic text generation
    • Sentiment analysis / Topic analysis
  • Computer Vision
    • Acquiring, processing, analyzing, and understanding images
    • Reconstructing, interpreting and understanding 3D scenes
    • Using image data to make decisions

Big Data infrastructure

  • Data Storage
    • Relational databases (SQL)
      • MySQL
      • Postgres
      • Oracle
    • Non-relational databases (NoSQL)
      • Cassandra
      • MongoDB
      • Neo4j
    • Understanding the nuances
      • Hierarchical databases
      • Object-oriented databases
      • Document-oriented databases
      • Graph-oriented databases
      • Other
  • Distributed Processing
    • Hadoop
      • HDFS as a distributed filesystem
      • MapReduce for distributed processing
    • Spark
      • All-in-one in-memory cluster computing framework for large-scale data processing
      • Structured streaming
      • Spark SQL
      • Machine Learning libraries: MLlib
      • Graph processing with GraphX
  • Scalability
    • Public cloud
      • AWS, Google, Aliyun, etc.
    • Private cloud
      • OpenStack, Cloud Foundry, etc.
    • Auto-scalability
  • Choosing the right solution for the problem
  • The future of Big Data
  • Closing remarks
matlabdl Matlab for Deep Learning 14 hours

In this instructor-led, live training, participants will learn how to use Matlab to design, build, and visualize a convolutional neural network for image recognition.

By the end of this training, participants will be able to:

  • Build a deep learning model
  • Automate data labeling
  • Work with models from Caffe and TensorFlow-Keras
  • Train data using multiple GPUs, the cloud, or clusters

Audience

  • Developers
  • Engineers
  • Domain experts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

mlfsas Machine Learning Fundamentals with Scala and Apache Spark 14 hours

The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Scala programming language and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results.

Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications.

Introduction to Applied Machine Learning

  • Statistical learning vs. Machine learning
  • Iteration and evaluation
  • Bias-Variance trade-off

Machine Learning with Python

  • Choice of libraries
  • Add-on tools

Regression

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercises

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Exercises

Cross-validation and Resampling

  • Cross-validation approaches
  • Bootstrap
  • Exercises

Unsupervised Learning

  • K-means clustering
  • Examples
  • Challenges of unsupervised learning and beyond K-means
octnp Octave not only for programmers 21 hours

This course is dedicated to those who would like to learn an alternative to the commercial MATLAB package. The three-day training provides comprehensive information on moving around the environment and using the OCTAVE package for data analysis and engineering calculations. It is aimed at beginners, but also at those who already know the program and would like to systematize their knowledge and improve their skills. Knowledge of other programming languages is not required, but it will greatly facilitate the learners' acquisition of knowledge. The course will show you how to use the program in many practical examples.

Introduction

Simple calculations

  • Starting Octave, Octave as a calculator, built-in functions

The Octave environment

  • Named variables, numbers and formatting, number representation and accuracy, loading and saving data 

Arrays and vectors

  • Extracting elements from a vector, vector maths

Plotting graphs

  • Improving the presentation, multiple graphs and figures, saving and printing figures

Octave programming I: Script files

  • Creating and editing a script, running and debugging scripts

Control statements

  • If else, switch, for, while

Octave programming II: Functions

Matrices and vectors

  • Matrix, the transpose operator, matrix creation functions, building composite matrices, matrices as tables, extracting bits of matrices, basic matrix functions

Linear and Nonlinear Equations

More graphs

  • Putting several graphs in one window, 3D plots, changing the viewpoint, plotting surfaces, images and movies

 Eigenvectors and the Singular Value Decomposition

 Complex numbers

  • Plotting complex numbers

 Statistics and data processing

 GUI Development

mlbankingr Machine Learning for Banking (with R) 28 hours

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. R will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of live projects.

Audience

  • Developers
  • Data scientists
  • Banking professionals with a technical background

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction

  • Difference between statistical learning (statistical analysis) and machine learning
  • Adoption of machine learning technology by finance and banking companies

Different Types of Machine Learning

  • Supervised learning vs unsupervised learning
  • Iteration and evaluation
  • Bias-variance trade-off
  • Combining supervised and unsupervised learning (semi-supervised learning)

Machine Learning Languages and Toolsets

  • Open source vs proprietary systems and software
  • R vs Python vs Matlab
  • Libraries and frameworks

Machine Learning Case Studies

  • Consumer data and big data
  • Assessing risk in consumer and business lending
  • Improving customer service through sentiment analysis
  • Detecting identity fraud, billing fraud and money laundering

Introduction to R

  • Installing the RStudio IDE
  • Loading R packages
  • Data structures
  • Vectors
  • Factors
  • Lists
  • Data Frames
  • Matrices and Arrays

How to Load Machine Learning Data

  • Databases, data warehouses and streaming data
  • Distributed storage and processing with Hadoop and Spark
  • Importing data from a database
  • Importing data from Excel and CSV

Modeling Business Decisions with Supervised Learning

  • Classifying your data (classification)
  • Using regression analysis to predict outcome
  • Choosing from available machine learning algorithms
  • Understanding decision tree algorithms
  • Understanding random forest algorithms
  • Model evaluation
  • Exercise

Regression Analysis

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercise

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Exercise

Hands-on: Building an Estimation Model

  • Assessing lending risk based on customer type and history

Evaluating the performance of Machine Learning Algorithms

  • Cross-validation and resampling
  • Bootstrap aggregation (bagging)
  • Exercise

Modeling Business Decisions with Unsupervised Learning

  • When sample data sets are not available
  • K-means clustering
  • Challenges of unsupervised learning
  • Beyond K-means
  • Bayes networks and Hidden Markov Models
  • Exercise

Hands-on: Building a Recommendation System

  • Analyzing past customer behavior to improve new service offerings

Extending your company's capabilities

  • Developing models in the cloud
  • Accelerating machine learning with additional GPUs
  • Applying Deep Learning neural networks for computer vision, voice recognition and text analysis

Closing Remarks

dmmlr Data Mining & Machine Learning with R 14 hours

Introduction to Data mining and Machine Learning

  • Statistical learning vs. Machine learning
  • Iteration and evaluation
  • Bias-Variance trade-off

Regression

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercises

Classification

  • Bayesian refresher
  • Naive Bayes
  • Discriminant analysis
  • Logistic regression
  • K-Nearest neighbors
  • Support Vector Machines
  • Neural networks
  • Decision trees
  • Exercises

Cross-validation and Resampling

  • Cross-validation approaches
  • Bootstrap
  • Exercises

Unsupervised Learning

  • K-means clustering
  • Examples
  • Challenges of unsupervised learning and beyond K-means

Advanced topics

  • Ensemble models
  • Mixed models
  • Boosting
  • Examples

Multidimensional reduction

  • Factor Analysis
  • Principal Component Analysis
  • Examples
mlentre Machine Learning Concepts for Entrepreneurs and Managers 21 hours

This training course is for people who would like to apply Machine Learning in practical applications for their team. The training will not dive into technicalities; instead, it revolves around basic concepts and their business/operational applications.

Target Audience

  1. Investors and AI entrepreneurs
  2. Managers and Engineers whose company is venturing into AI space
  3. Business Analysts & Investors

Introduction to Neural Networks

Introduction to Applied Machine Learning

  • Statistical learning vs. Machine learning
  • Iteration and evaluation
  • Bias-Variance trade-off

Machine Learning with Python

  • Choice of libraries
  • Add-on tools

Machine learning Concepts and Applications

Regression

  • Linear regression
  • Generalizations and Nonlinearity
  • Use cases

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Use Cases

Cross-validation and Resampling

  • Cross-validation approaches
  • Bootstrap
  • Use Cases

Unsupervised Learning

  • K-means clustering
  • Examples
  • Challenges of unsupervised learning and beyond K-means

Short Introduction to NLP methods

  • word and sentence tokenization
  • text classification
  • sentiment analysis
  • spelling correction
  • information extraction
  • parsing
  • meaning extraction
  • question answering

Artificial Intelligence & Deep Learning

Technical Overview

  • R vs Python
  • Caffe vs TensorFlow
  • Various Machine Learning Libraries
mlbankingpython_ Machine Learning for Banking (with Python) 21 hours

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. Python will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of team projects.

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction

  • Difference between statistical learning (statistical analysis) and machine learning
  • Adoption of machine learning technology and talent by finance and banking companies

Different Types of Machine Learning

  • Supervised learning vs unsupervised learning
  • Iteration and evaluation
  • Bias-variance trade-off
  • Combining supervised and unsupervised learning (semi-supervised learning)

Machine Learning Languages and Toolsets

  • Open source vs proprietary systems and software
  • Python vs R vs Matlab
  • Libraries and frameworks

Machine Learning Case Studies

  • Consumer data and big data
  • Assessing risk in consumer and business lending
  • Improving customer service through sentiment analysis
  • Detecting identity fraud, billing fraud and money laundering

Hands-on: Python for Machine Learning

  • Preparing the Development Environment
  • Obtaining Python machine learning libraries and packages
  • Working with scikit-learn and PyBrain

How to Load Machine Learning Data

  • Databases, data warehouses and streaming data
  • Distributed storage and processing with Hadoop and Spark
  • Exported data and Excel

Modeling Business Decisions with Supervised Learning

  • Classifying your data (classification)
  • Using regression analysis to predict outcome
  • Choosing from available machine learning algorithms
  • Understanding decision tree algorithms
  • Understanding random forest algorithms
  • Model evaluation
  • Exercise

Regression Analysis

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercise

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Exercise

Hands-on: Building an Estimation Model

  • Assessing lending risk based on customer type and history

Evaluating the performance of Machine Learning Algorithms

  • Cross-validation and resampling
  • Bootstrap aggregation (bagging)
  • Exercise

Modeling Business Decisions with Unsupervised Learning

  • When sample data sets are not available
  • K-means clustering
  • Challenges of unsupervised learning
  • Beyond K-means
  • Bayes networks and Hidden Markov Models
  • Exercise

Hands-on: Building a Recommendation System

  • Analyzing past customer behavior to improve new service offerings

Extending your company's capabilities

  • Developing models in the cloud
  • Accelerating machine learning with GPU
  • Applying Deep Learning neural networks for computer vision, voice recognition and text analysis

Closing Remarks

cpb100 Google Cloud Platform Fundamentals: Big Data & Machine Learning 8 hours

This one-day instructor-led course introduces participants to the big data capabilities of Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants get an overview of the Google Cloud platform and a detailed view of the data processing and machine learning capabilities. This course showcases the ease, flexibility, and power of big data solutions on Google Cloud Platform.

This course teaches participants the following skills:

  • Identify the purpose and value of the key Big Data and Machine Learning products in the Google Cloud Platform.
  • Use Cloud SQL and Cloud Dataproc to migrate existing MySQL and Hadoop/Pig/Spark/Hive workloads to Google Cloud Platform.
  • Employ BigQuery and Cloud Datalab to carry out interactive data analysis.
  • Train and use a neural network using TensorFlow.
  • Employ ML APIs.
  • Choose between different data processing products on the Google Cloud Platform.

This class is intended for the following:

  • Data analysts, Data scientists, Business analysts getting started with Google Cloud Platform.
  • Individuals responsible for designing pipelines and architectures for data processing, creating and maintaining machine learning and statistical models, querying datasets, visualizing query results and creating reports.
  • Executives and IT decision makers evaluating Google Cloud Platform for use by data scientists.

The course includes presentations, demonstrations, and hands-on labs.

Module 1: Introducing Google Cloud Platform

  • Google Platform Fundamentals Overview.
  • Google Cloud Platform Data Products and Technology.
  • Usage scenarios.
  • Lab: Sign up for Google Cloud Platform.

Module 2: Compute and Storage Fundamentals

  • CPUs on demand (Compute Engine).
  • A global filesystem (Cloud Storage).
  • CloudShell.
  • Lab: Set up an Ingest-Transform-Publish data processing pipeline.

Module 3: Data Analytics on the Cloud

  • Stepping-stones to the cloud.
  • Cloud SQL: your SQL database on the cloud.
  • Lab: Importing data into CloudSQL and running queries.
  • Spark on Dataproc.
  • Lab: Machine Learning Recommendations with SparkML.

Module 4: Scaling Data Analysis

  • Fast random access.
  • Datalab.
  • BigQuery.
  • Lab: Build machine learning dataset.
  • Machine Learning with TensorFlow.
  • Lab: Train and use neural network.
  • Fully built models for common needs.
  • Lab: Employ ML APIs

Module 5: Data Processing Architectures

  • Message-oriented architectures with Pub/Sub.
  • Creating pipelines with Dataflow.
  • Reference architecture for real-time and batch data processing.

Module 6: Summary

  • Why GCP?
  • Where to go from here
  • Additional Resources
opennmt OpenNMT: Setting up a Neural Machine Translation System 7 hours

OpenNMT is a full-featured, open-source (MIT) neural machine translation system that utilizes the Torch mathematical toolkit.

In this training participants will learn how to set up and use OpenNMT to carry out translation of various sample data sets. The course starts with an overview of neural networks as they apply to machine translation. Participants will carry out live exercises throughout the course to demonstrate their understanding of the concepts learned and get feedback from the instructor. By the end of this training, participants will have the knowledge and practice needed to implement a live OpenNMT solution.

Source and target language samples will be pre-arranged per the audience's requirements.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course

  • Part lecture, part discussion, heavy hands-on practice

Introduction
    Why Neural Machine Translation?

Overview of the Torch project

Installation and setup

Preprocessing your data

Training the model

Translating

Using pre-trained models

Working with Lua scripts

Using extensions

Troubleshooting

Joining the community

Closing remarks

opennlp OpenNLP for Text Based Machine Learning 14 hours

The Apache OpenNLP library is a machine learning based toolkit for processing natural language text. It supports the most common NLP tasks, such as language detection, tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing and coreference resolution.

In this instructor-led, live training, participants will learn how to create models for processing text based data using OpenNLP. Sample training data as well as customized data sets will be used as the basis for the lab exercises.

By the end of this training, participants will be able to:

  • Install and configure OpenNLP
  • Download existing models as well as create their own
  • Train the models on various sets of sample data
  • Integrate OpenNLP with existing Java applications

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction to Machine Learning and Natural Language Processing

Installing and Configuring OpenNLP

Overview of OpenNLP's Library Structure

Downloading Existing Models

Calling the OpenNLP's APIs

Sentence Detection and Tokenization

Part-of-Speech (POS) Tagging

Phrase Chunking

Parsing

Name Finding

English Coreference

Training the Tools

Creating a Model from Scratch

Extending OpenNLP

Closing remarks
