Machine Learning Training Courses

Client Testimonials

Artificial Neural Networks, Machine Learning, Deep Thinking

It was very interactive and more relaxed and informal than expected. We covered lots of topics in the time available, and the trainer was always receptive to talking in more detail or more generally about the topics and how they were related. I feel the training has given me the tools to continue learning, as opposed to it being a one-off session where learning stops once you've finished, which is very important given the scale and complexity of the topic.

Jonathan Blease - Knowledgepool Group Ltd

Applied Machine Learning

The reference material to use later was very good.

Paul Beales - Seagate Technology

Data Mining & Machine Learning with R

The trainer was so knowledgeable and included areas I was interested in

Mohamed Salama - Edmonton Police Service

Advanced Deep Learning

The global overview of deep learning

Bruno Charbonnier - OSONES

Advanced Deep Learning

The exercises are sufficiently practical and do not require advanced knowledge of Python.

Alexandre GIRARD - OSONES

Advanced Deep Learning

Doing exercises on real examples using Keras. Mihaly totally understood our expectations about this training.

Paul Kassis - OSONES

Neural Networks Fundamentals using TensorFlow as Example

Knowledgeable trainer

Sridhar Voorakkara - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

I was amazed at the standard of this class - I would say that it was university standard.

David Relihan - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

Very good all-round overview. Good background on why TensorFlow operates as it does.

Kieran Conboy - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

I liked the opportunities to ask questions and get more in depth explanations of the theory.

Sharon Ruane - INTEL R&D IRELAND LIMITED

Machine Learning and Deep Learning

We have gained a lot more insight into the subject matter. Some nice discussions were had about real subjects within our company.

Sebastiaan Holman - Travix International

Machine Learning and Deep Learning

The training provided the right foundation for us to expand on further, by showing how theory and practice go hand in hand. It actually got me more interested in the subject than I was before.

Jean-Paul van Tillo - Travix International

Machine Learning and Deep Learning

Coverage and depth of topics

Anirban Basu - Travix International

Machine Learning Course Outlines

Code Name Duration Overview
appliedml Applied Machine Learning 14 hours This training course is for people who would like to apply Machine Learning in practical applications. Audience This course is for data scientists and statisticians who have some familiarity with statistics and know how to program R (or Python or another chosen language). The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to Machine Learning to participants interested in applying the methods at work. Sector-specific examples are used to make the training relevant to the audience. Naive Bayes Multinomial models Bayesian categorical data analysis Discriminant analysis Linear regression Logistic regression GLM EM Algorithm Mixed Models Additive Models Classification KNN Bayesian Graphical Models Factor Analysis (FA) Principal Component Analysis (PCA) Independent Component Analysis (ICA) Support Vector Machines (SVM) for regression and classification Boosting Ensemble models Neural networks Hidden Markov Models (HMM) State Space Models Clustering
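For illustration only (not part of the course materials above): a minimal Python sketch of one of the listed methods, Naive Bayes classification, assuming scikit-learn is available and using synthetic data in place of a sector-specific dataset.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    # Synthetic data standing in for a sector-specific dataset
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Fit the model and evaluate it on held-out data
    model = GaussianNB()
    model.fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))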
deepmclrg Machine Learning & Deep Learning with Python and R 14 hours MACHINE LEARNING 1: Introducing Machine Learning The origins of machine learning Uses and abuses of machine learning Ethical considerations How do machines learn? Abstraction and knowledge representation Generalization Assessing the success of learning Steps to apply machine learning to your data Choosing a machine learning algorithm Thinking about the input data Thinking about types of machine learning algorithms Matching your data to an appropriate algorithm Using R for machine learning Installing and loading R packages Installing an R package Installing a package using the point-and-click interface Loading an R package Summary 2: Managing and Understanding Data R data structures Vectors Factors Lists Data frames Matrixes and arrays Managing data with R Saving and loading R data structures Importing and saving data from CSV files Importing data from SQL databases Exploring and understanding data Exploring the structure of data Exploring numeric variables Measuring the central tendency – mean and median Measuring spread – quartiles and the five-number summary Visualizing numeric variables – boxplots Visualizing numeric variables – histograms Understanding numeric data – uniform and normal distributions Measuring spread – variance and standard deviation Exploring categorical variables Measuring the central tendency – the mode Exploring relationships between variables Visualizing relationships – scatterplots Examining relationships – two-way cross-tabulations Summary 3: Lazy Learning – Classification Using Nearest Neighbors Understanding classification using nearest neighbors The kNN algorithm Calculating distance Choosing an appropriate k Preparing data for use with kNN Why is the kNN algorithm lazy? 
Diagnosing breast cancer with the kNN algorithm Step 1 – collecting data Step 2 – exploring and preparing the data Transformation – normalizing numeric data Data preparation – creating training and test datasets Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Transformation – z-score standardization Testing alternative values of k Summary 4: Probabilistic Learning – Classification Using Naive Bayes Understanding naive Bayes Basic concepts of Bayesian methods Probability Joint probability Conditional probability with Bayes' theorem The naive Bayes algorithm The naive Bayes classification The Laplace estimator Using numeric features with naive Bayes Example – filtering mobile phone spam with the naive Bayes algorithm Step 1 – collecting data Step 2 – exploring and preparing the data Data preparation – processing text data for analysis Data preparation – creating training and test datasets Visualizing text data – word clouds Data preparation – creating indicator features for frequent words Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Summary 5: Divide and Conquer – Classification Using Decision Trees and Rules Understanding decision trees Divide and conquer The C5.0 decision tree algorithm Choosing the best split Pruning the decision tree Example – identifying risky bank loans using C5.0 decision trees Step 1 – collecting data Step 2 – exploring and preparing the data Data preparation – creating random training and test datasets Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Boosting the accuracy of decision trees Making some mistakes more costly than others Understanding classification rules Separate and conquer The One Rule algorithm The RIPPER algorithm Rules from decision trees Example – identifying poisonous mushrooms with rule learners Step 1 – collecting data Step 2 – exploring and preparing the data Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Summary 6: Forecasting Numeric Data – Regression Methods Understanding regression Simple linear regression Ordinary least squares estimation Correlations Multiple linear regression Example – predicting medical expenses using linear regression Step 1 – collecting data Step 2 – exploring and preparing the data Exploring relationships among features – the correlation matrix Visualizing relationships among features – the scatterplot matrix Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Model specification – adding non-linear relationships Transformation – converting a numeric variable to a binary indicator Model specification – adding interaction effects Putting it all together – an improved regression model Understanding regression trees and model trees Adding regression to trees Example – estimating the quality of wines with regression trees and model trees Step 1 – collecting data Step 2 – exploring and preparing the data Step 3 – training a model on the data Visualizing decision trees Step 4 – evaluating model performance Measuring performance with mean absolute error Step 5 – improving model performance Summary 7: Black Box Methods – Neural Networks and Support Vector Machines Understanding neural networks From biological to artificial neurons Activation functions Network topology The number of layers The direction of 
information travel The number of nodes in each layer Training neural networks with backpropagation Modeling the strength of concrete with ANNs Step 1 – collecting data Step 2 – exploring and preparing the data Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Understanding Support Vector Machines Classification with hyperplanes Finding the maximum margin The case of linearly separable data The case of non-linearly separable data Using kernels for non-linear spaces Performing OCR with SVMs Step 1 – collecting data Step 2 – exploring and preparing the data Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Summary 8: Finding Patterns – Market Basket Analysis Using Association Rules Understanding association rules The Apriori algorithm for association rule learning Measuring rule interest – support and confidence Building a set of rules with the Apriori principle Example – identifying frequently purchased groceries with association rules Step 1 – collecting data Step 2 – exploring and preparing the data Data preparation – creating a sparse matrix for transaction data Visualizing item support – item frequency plots Visualizing transaction data – plotting the sparse matrix Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Sorting the set of association rules Taking subsets of association rules Saving association rules to a file or data frame Summary 9: Finding Groups of Data – Clustering with k-means Understanding clustering Clustering as a machine learning task The k-means algorithm for clustering Using distance to assign and update clusters Choosing the appropriate number of clusters Finding teen market segments using k-means clustering Step 1 – collecting data Step 2 – exploring and preparing the data Data preparation – dummy coding missing values Data preparation – imputing missing values Step 3 – training a model on the data Step 4 – evaluating model performance Step 5 – improving model performance Summary 10: Evaluating Model Performance Measuring performance for classification Working with classification prediction data in R A closer look at confusion matrices Using confusion matrices to measure performance Beyond accuracy – other measures of performance The kappa statistic Sensitivity and specificity Precision and recall The F-measure Visualizing performance tradeoffs ROC curves Estimating future performance The holdout method Cross-validation Bootstrap sampling Summary 11: Improving Model Performance Tuning stock models for better performance Using caret for automated parameter tuning Creating a simple tuned model Customizing the tuning process Improving model performance with meta-learning Understanding ensembles Bagging Boosting Random forests Training random forests Evaluating random forest performance Summary DEEP LEARNING with R 1: Getting Started with Deep Learning What is deep learning? 
Conceptual overview of neural networks Deep neural networks R packages for deep learning Setting up reproducible results Neural networks The deepnet package The darch package The H2O package Connecting R and H2O Initializing H2O Linking datasets to an H2O cluster Summary 2: Training a Prediction Model Neural networks in R Building a neural network Generating predictions from a neural network The problem of overfitting data – the consequences explained Use case – build and apply a neural network Summary 3: Preventing Overfitting L1 penalty L1 penalty in action L2 penalty L2 penalty in action Weight decay (L2 penalty in neural networks) Ensembles and model averaging Use case – improving out-of-sample model performance using dropout Summary 4: Identifying Anomalous Data Getting started with unsupervised learning How do auto-encoders work? Regularized auto-encoders Penalized auto-encoders Denoising auto-encoders Training an auto-encoder in R Use case – building and applying an auto-encoder model Fine-tuning auto-encoder models Summary 5: Training Deep Prediction Models Getting started with deep feedforward neural networks Common activation functions – rectifiers, hyperbolic tangent, and maxout Picking hyperparameters Training and predicting new data from a deep neural network Use case – training a deep neural network for automatic classification Working with model results Summary 6: Tuning and Optimizing Models Dealing with missing data Solutions for models with low accuracy Grid search Random search Summary DEEP LEARNING WITH PYTHON I Introduction 1 Welcome Deep Learning The Wrong Way Deep Learning With Python Summary II Background 2 Introduction to Theano What is Theano? How to Install Theano Simple Theano Example Extensions and Wrappers for Theano More Theano Resources Summary 3 Introduction to TensorFlow What is TensorFlow? How to Install TensorFlow Your First Examples in TensorFlow Simple TensorFlow Example More Deep Learning Models Summary 4 Introduction to Keras What is Keras? 
How to Install Keras Theano and TensorFlow Backends for Keras Build Deep Learning Models with Keras Summary 5 Project: Develop Large Models on GPUs Cheaply In the Cloud Project Overview Setup Your AWS Account Launch Your Server Instance Login, Configure and Run Build and Run Models on AWS Close Your EC2 Instance Tips and Tricks for Using Keras on AWS More Resources For Deep Learning on AWS Summary III Multilayer Perceptrons 6 Crash Course In Multilayer Perceptrons Crash Course Overview Multilayer Perceptrons Neurons Networks of Neurons Training Networks Summary 7 Develop Your First Neural Network With Keras Tutorial Overview Pima Indians Onset of Diabetes Dataset Load Data Define Model Compile Model Fit Model Evaluate Model Tie It All Together Summary 8 Evaluate The Performance of Deep Learning Models Empirically Evaluate Network Configurations Data Splitting Manual k-Fold Cross Validation Summary 9 Use Keras Models With Scikit-Learn For General Machine Learning Overview Evaluate Models with Cross Validation Grid Search Deep Learning Model Parameters Summary 10 Project: Multiclass Classification Of Flower Species Iris Flowers Classification Dataset Import Classes and Functions Initialize Random Number Generator Load The Dataset Encode The Output Variable Define The Neural Network Model Evaluate The Model with k-Fold Cross Validation Summary 11 Project: Binary Classification Of Sonar Returns Sonar Object Classification Dataset Baseline Neural Network Model Performance Improve Performance With Data Preparation Tuning Layers and Neurons in The Model Summary 12 Project: Regression Of Boston House Prices Boston House Price Dataset Develop a Baseline Neural Network Model Lift Performance By Standardizing The Dataset Tune The Neural Network Topology Summary IV Advanced Multilayer Perceptrons and Keras 13 Save Your Models For Later With Serialization Tutorial Overview . 
Save Your Neural Network Model to JSON Save Your Neural Network Model to YAML Summary 14 Keep The Best Models During Training With Checkpointing Checkpointing Neural Network Models Checkpoint Neural Network Model Improvements Checkpoint Best Neural Network Model Only Loading a Saved Neural Network Model Summary 15 Understand Model Behavior During Training By Plotting History Access Model Training History in Keras Visualize Model Training History in Keras Summary 16 Reduce Overfitting With Dropout Regularization Dropout Regularization For Neural Networks Dropout Regularization in Keras Using Dropout on the Visible Layer Using Dropout on Hidden Layers Tips For Using Dropout Summary 17 Lift Performance With Learning Rate Schedules Learning Rate Schedule For Training Models Ionosphere Classification Dataset Time-Based Learning Rate Schedule Drop-Based Learning Rate Schedule Tips for Using Learning Rate Schedules Summary V Convolutional Neural Networks 18 Crash Course In Convolutional Neural Networks The Case for Convolutional Neural Networks Building Blocks of Convolutional Neural Networks Convolutional Layers Pooling Layers Fully Connected Layers Worked Example Convolutional Neural Networks Best Practices Summary 19 Project: Handwritten Digit Recognition Handwritten Digit Recognition Dataset Loading the MNIST dataset in Keras Baseline Model with Multilayer Perceptrons Simple Convolutional Neural Network for MNIST Larger Convolutional Neural Network for MNIST Summary 20 Improve Model Performance With Image Augmentation Keras Image Augmentation API Point of Comparison for Image Augmentation Feature Standardization ZCA Whitening Random Rotations Random Shifts Random Flips Saving Augmented Images to File Tips For Augmenting Image Data with Keras Summary 21 Project Object Recognition in Photographs Photograph Object Recognition Dataset Loading The CIFAR-10 Dataset in Keras Simple CNN for CIFAR-10 Larger CNN for CIFAR-10 Extensions To Improve Model Performance Summary 22 Project: Predict Sentiment From Movie Reviews Movie Review Sentiment Classification Dataset Load the IMDB Dataset With Keras Word Embeddings Simple Multilayer Perceptron Model One-Dimensional Convolutional Neural Network Summary VI Recurrent Neural Networks 23 Crash Course In Recurrent Neural Networks Support For Sequences in Neural Networks Recurrent Neural Networks Long Short-Term Memory Networks Summary 24 Time Series Prediction with Multilayer Perceptrons Problem Description: Time Series Prediction Multilayer Perceptron Regression Multilayer Perceptron Using the Window Method Summary 25 Time Series Prediction with LSTM Recurrent Neural Networks LSTM Network For Regression LSTM For Regression Using the Window Method LSTM For Regression with Time Steps LSTM With Memory Between Batches Stacked LSTMs With Memory Between Batches Summary 26 Project: Sequence Classification of Movie Reviews Simple LSTM for Sequence Classification LSTM For Sequence Classification With Dropout LSTM and CNN For Sequence Classification Summary 27 Understanding Stateful LSTM Recurrent Neural Networks Problem Description: Learn the Alphabet LSTM for Learning One-Char to One-Char Mapping LSTM for a Feature Window to One-Char Mapping LSTM for a Time Step Window to One-Char Mapping LSTM State Maintained Between Samples Within A Batch Stateful LSTM for a One-Char to One-Char Mapping LSTM with Variable Length Input to One-Char Output Summary 28 Project: Text Generation With Alice in Wonderland Problem Description: Text Generation Develop a Small LSTM Recurrent 
Neural Network Generating Text with an LSTM Network Larger LSTM Recurrent Neural Network Extension Ideas to Improve the Model Summary
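As a hedged illustration of the kind of model built in the "Develop Your First Neural Network With Keras" chapter above, the following Python sketch defines, compiles and fits a small multilayer perceptron. It uses random data with the same shape as the Pima Indians diabetes dataset rather than the real data, so only the workflow (define, compile, fit, evaluate) is meaningful.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # Hypothetical stand-in for the Pima data: 768 rows, 8 features, binary label
    X = np.random.rand(768, 8)
    y = np.random.randint(0, 2, size=768)

    # Define the model: a small multilayer perceptron
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))

    # Compile, fit and evaluate, mirroring the chapter's workflow
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, y, epochs=10, batch_size=32, verbose=0)
    loss, acc = model.evaluate(X, y, verbose=0)
    print("Accuracy on the synthetic data: %.2f" % acc)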
MLFWR1 Machine Learning Fundamentals with R 14 hours The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the R programming platform and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications. Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means
wolfdata Data Science: Analysis and Presentation 7 hours The Wolfram System's integrated environment makes it an efficient tool for both analyzing and presenting data. This course covers aspects of the Wolfram Language relevant to analytics, including statistical computation, visualization, data import and export and automatic generation of reports. Using associations Querying with datasets Machine learning for classification and prediction Working with semantically imported data Authoring customizable documents from templates Deploying results to the cloud
mlfunpython Machine Learning Fundamentals with Python 14 hours The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Python programming language and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications. Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Machine Learning with Python Choice of libraries Add-on tools Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means
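A brief illustrative sketch (not course material) of the cross-validation and resampling idea listed above, assuming scikit-learn is available and using its built-in iris data.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    # 5-fold cross-validation: fit on 4 folds, score on the held-out fold
    scores = cross_val_score(model, X, y, cv=5)
    print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))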
datamodeling Pattern Recognition 35 hours This course provides an introduction into the field of pattern recognition and machine learning. It touches on practical applications in statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. The course is interactive and includes plenty of hands-on exercises, instructor feedback, and testing of knowledge and skills acquired. Audience     Data analysts     PhD students, researchers and practitioners   Introduction Probability theory, model selection, decision and information theory Probability distributions Linear models for regression and classification Neural networks Kernel methods Sparse kernel machines Graphical models Mixture models and EM Approximate inference Sampling methods Continuous latent variables Sequential data Combining models  
annmldt Artificial Neural Networks, Machine Learning, Deep Thinking 21 hours DAY 1 - ARTIFICIAL NEURAL NETWORKS Introduction and ANN Structure. Biological neurons and artificial neurons. Model of an ANN. Activation functions used in ANNs. Typical classes of network architectures. Mathematical Foundations and Learning mechanisms. Re-visiting vector and matrix algebra. State-space concepts. Concepts of optimization. Error-correction learning. Memory-based learning. Hebbian learning. Competitive learning. Single layer perceptrons. Structure and learning of perceptrons. Pattern classifier - introduction and Bayes' classifiers. Perceptron as a pattern classifier. Perceptron convergence. Limitations of perceptrons. Feedforward ANN. Structures of Multi-layer feedforward networks. Back propagation algorithm. Back propagation - training and convergence. Functional approximation with back propagation. Practical and design issues of back propagation learning. Radial Basis Function Networks. Pattern separability and interpolation. Regularization Theory. Regularization and RBF networks. RBF network design and training. Approximation properties of RBF. Competitive Learning and Self organizing ANN. General clustering procedures. Learning Vector Quantization (LVQ). Competitive learning algorithms and architectures. Self organizing feature maps. Properties of feature maps. Fuzzy Neural Networks. Neuro-fuzzy systems. Background of fuzzy sets and logic. Design of fuzzy systems. Design of fuzzy ANNs. Applications A few examples of Neural Network applications, their advantages and problems will be discussed. DAY 2 - MACHINE LEARNING The PAC Learning Framework Guarantees for finite hypothesis set – consistent case Guarantees for finite hypothesis set – inconsistent case Generalities Deterministic vs. Stochastic scenarios Bayes error noise Estimation and approximation errors Model selection Rademacher Complexity and VC-Dimension Bias-Variance tradeoff Regularisation Over-fitting Validation Support Vector Machines Kriging (Gaussian Process regression) PCA and Kernel PCA Self Organising Maps (SOM) Kernel-induced vector space Mercer Kernels and Kernel-induced similarity metrics Reinforcement Learning DAY 3 - DEEP LEARNING This will be taught in relation to the topics covered on Day 1 and Day 2 Logistic and Softmax Regression Sparse Autoencoders Vectorization, PCA and Whitening Self-Taught Learning Deep Networks Linear Decoders Convolution and Pooling Sparse Coding Independent Component Analysis Canonical Correlation Analysis Demos and Applications
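To make the "error-correction learning" and "perceptron convergence" topics above concrete, here is a minimal NumPy sketch of the perceptron learning rule on a linearly separable toy problem (an AND gate). It is an illustration, not an excerpt from the course materials.

    import numpy as np

    # Toy linearly separable problem: the AND gate
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)   # weights
    b = 0.0           # bias
    lr = 0.1          # learning rate

    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if xi.dot(w) + b > 0 else 0
            update = lr * (target - pred)   # error-correction learning rule
            w += update * xi
            b += update

    print("weights:", w, "bias:", b)
    print("predictions:", [1 if xi.dot(w) + b > 0 else 0 for xi in X])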
patternmatching Pattern Matching 14 hours Pattern Matching is a technique used to locate specified patterns within an image. It can be used to determine the existence of specified characteristics within a captured image, for example the expected label on a defective product in a factory line or the specified dimensions of a component. It is different from "Pattern Recognition" (which recognizes general patterns based on larger collections of related samples) in that it specifically dictates what we are looking for, then tells us whether the expected pattern exists or not. Audience     Engineers and developers seeking to develop machine vision applications     Manufacturing engineers, technicians and managers Format of the course     This course introduces the approaches, technologies and algorithms used in the field of pattern matching as it applies to Machine Vision. Introduction     Computer Vision     Machine Vision     Pattern Matching vs Pattern Recognition Alignment     Features of the target object     Points of reference on the object     Determining position     Determining orientation Gauging     Setting tolerance levels     Measuring lengths, diameters, angles, and other dimensions     Rejecting a component Inspection     Detecting flaws     Adjusting the system Closing remarks  
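As an illustrative sketch of the template-matching and gauging ideas described above (assuming OpenCV's Python bindings are available; this is not part of the course materials), the following snippet builds a synthetic scene, searches it for a cropped template, and applies a pass/reject threshold.

    import cv2
    import numpy as np

    # Synthetic 200x200 "captured image" with a bright rectangular label on it
    scene = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(scene, (60, 80), (100, 120), 255, -1)

    # Template cropped around the label (includes some background for contrast)
    template = scene[70:130, 50:110].copy()

    # Normalised cross-correlation search for the template in the scene
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

    threshold = 0.9   # tolerance level for accepting the match
    status = "PASS" if max_val >= threshold else "REJECT"
    print("Best match at %s with score %.2f -> %s" % (max_loc, max_val, status))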
mlrobot1 Machine Learning for Robotics 21 hours This course introduces machine learning methods in robotics applications. It is a broad overview of existing methods, motivations and main ideas in the context of pattern recognition. After a short theoretical background, participants will perform simple exercises using open source software (usually R) or any other popular software. Regression Probabilistic Graphical Models Boosting Kernel Methods Gaussian Processes Evaluation and Model Selection Sampling Methods Clustering CRFs Random Forests IVMs
Torch Torch: Getting started with Machine and Deep Learning 21 hours Torch is an open source machine learning library and a scientific computing framework based on the Lua programming language. It provides a development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. It is one of the fastest and most flexible frameworks for Machine and Deep Learning and is used by companies such as Facebook, Google, Twitter, NVIDIA, AMD, Intel, and many others. In this course we cover the principles of Torch, its unique features, and how it can be applied in real-world applications. We step through numerous hands-on exercises all throughout, demonstrating and practicing the concepts learned. By the end of the course, participants will have a thorough understanding of Torch's underlying features and capabilities as well as its role and contribution within the AI space compared to other frameworks and libraries. Participants will have also received the necessary practice to implement Torch in their own projects. Audience     Software developers and programmers wishing to enable Machine and Deep Learning within their applications Format of the course     Overview of Machine and Deep Learning     In-class coding and integration exercises     Test questions sprinkled along the way to check understanding Introduction to Torch     Like NumPy but with CPU and GPU implementation     Torch's usage in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking Installing Torch     Linux, Windows, Mac     Bitmapi and Docker Installing Torch packages     Using the LuaRocks package manager Choosing an IDE for Torch     ZeroBrane Studio     Eclipse plugin for Lua Working with the Lua scripting language and LuaJIT     Lua's integration with C/C++     Lua syntax: datatypes, loops and conditionals, functions, functions, tables, and file i/o.     Object orientation and serialization in Torch     Coding exercise Loading a dataset in Torch     MNIST     CIFAR-10, CIFAR-100     Imagenet Machine Learning in Torch     Deep Learning         Manual feature extraction vs convolutional networks     Supervised and Unsupervised Learning         Building a neural network with Torch         N-dimensional arrays Image analysis with Torch     Image package     The Tensor library Working with the REPL interpreter Working with databases Networking and Torch GPU support in Torch Integrating Torch     C, Python, and others Embedding Torch     iOS and Android Other frameworks and libraries     Facebook's optimized deep-learning modules and containers Creating your own package Testing and debugging Releasing your application The future of AI and Torch
matlabml1 Introduction to Machine Learning with MATLAB 21 hours MATLAB Basics MATLAB More Advanced Features BP Neural Network RBF, GRNN and PNN Neural Networks SOM Neural Networks Support Vector Machine, SVM Extreme Learning Machine, ELM Decision Trees and Random Forests Genetic Algorithm, GA Particle Swarm Optimization, PSO Ant Colony Algorithm, ACA Simulated Annealing, SA Dimensionality Reduction and Feature Selection
OpenNN OpenNN: Implementing neural networks 14 hours OpenNN is an open-source class library written in C++  which implements neural networks, for use in machine learning. In this course we go over the principles of neural networks and use OpenNN to implement a sample application. Audience     Software developers and programmers wishing to create Deep Learning applications. Format of the course     Lecture and discussion coupled with hands-on exercises. Introduction to OpenNN, Machine Learning and Deep Learning Downloading OpenNN Working with Neural Designer     Using Neural Designer for descriptive, diagnostic, predictive and prescriptive analytics OpenNN architecture     CPU parallelization OpenNN classes     Data set, neural network, loss index, training strategy, model selection, testing analysis     Vector and matrix templates Building a neural network application     Choosing a suitable neural network     Formulating the variational problem (loss index)     Solving the reduced function optimization problem (training strategy) Working with datasets      The data matrix (columns as variables and rows as instances) Learning tasks     Function regression     Pattern recognition Compiling with QT Creator Integrating, testing and debugging your application The future of neural networks and OpenNN
dladv Advanced Deep Learning 28 hours Machine Learning Limitations Machine Learning, Non-linear mappings Neural Networks Non-Linear Optimization, Stochastic/MiniBatch Gradient Descent Back Propagation Deep Sparse Coding Sparse Autoencoders (SAE) Convolutional Neural Networks (CNNs) Successes: Descriptor Matching Stereo-based Obstacle Avoidance for Robotics Pooling and invariance Visualization/Deconvolutional Networks Recurrent Neural Networks (RNNs) and their optimization Applications to NLP RNNs continued, Hessian-Free Optimization Language analysis: word/sentence vectors, parsing, sentiment analysis, etc. Probabilistic Graphical Models Hopfield Nets, Boltzmann machines, Restricted Boltzmann Machines Hopfield Networks, (Restricted) Boltzmann Machines Deep Belief Nets, Stacked RBMs Applications to NLP, Pose and Activity Recognition in Videos Recent Advances Large-Scale Learning Neural Turing Machines
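Since the testimonials above mention Keras exercises in this course, here is a hedged, self-contained Python sketch of a small convolutional network in Keras, illustrating the convolution and pooling building blocks listed in the outline. The architecture and input shape are arbitrary and are not taken from the course.

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    # Arbitrary small CNN for 28x28 single-channel images, 10 classes
    model = Sequential()
    model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dense(10, activation='softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()   # prints the layer-by-layer architecture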
BigData_ A practical introduction to Data Analysis and Big Data 28 hours Participants who complete this training will gain a practical, real-world understanding of Big Data and its related technologies, methodologies and tools. Participants will have the opportunity to put this knowledge into practice through hands-on exercises. Group interaction and instructor feedback make up an important component of the class. The course starts with an introduction to elemental concepts of Big Data, then progresses into the programming languages and methodologies used to perform Data Analysis. Finally, we discuss the tools and infrastructure that enable Big Data storage, Distributed Processing, and Scalability. Audience Developers / programmers IT consultants Format of the course     Part lecture, part discussion, heavy hands-on practice and implementation, occasional quizzing to measure progress. Introduction to Data Analysis and Big Data What makes Big Data "big"? Velocity, Volume, Variety, Veracity (VVVV) Limits to traditional Data Processing Distributed Processing Statistical Analysis Types of Machine Learning Analysis Data Visualization Languages used for Data Analysis R language (crash course) Why R for Data Analysis? Data manipulation, calculation and graphical display Python (crash course) Why Python for Data Analysis? Manipulating, processing, cleaning, and crunching data Approaches to Data Analysis Statistical Analysis Time Series analysis Forecasting with Correlation and Regression models Inferential Statistics (estimating) Descriptive Statistics in Big Data sets (e.g. calculating mean) Machine Learning Supervised vs unsupervised learning Classification and clustering Estimating cost of specific methods Filtering Natural Language Processing Processing text Understanding the meaning of the text Automatic text generation Sentiment/Topic Analysis Computer Vision Acquiring, processing, analyzing, and understanding images Reconstructing, interpreting and understanding 3D scenes Using image data to make decisions Big Data infrastructure Data Storage Relational databases (SQL) MySQL Postgres Oracle Non-relational databases (NoSQL) Cassandra MongoDB Neo4j Understanding the nuances Hierarchical databases Object-oriented databases Document-oriented databases Graph-oriented databases Other Distributed Processing Hadoop HDFS as a distributed filesystem MapReduce for distributed processing Spark All-in-one in-memory cluster computing framework for large-scale data processing Structured streaming Spark SQL Machine Learning libraries: MLlib Graph processing with GraphX Search Engines ElasticSearch Solr Scalability Public cloud AWS, Google, Aliyun, etc. Private cloud OpenStack, Cloud Foundry, etc. Auto-scalability Choosing the right solution for the problem The future of Big Data Closing remarks
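As a minimal illustration of the "MapReduce for distributed processing" and Spark topics above (assuming PySpark is installed; this is not an official course exercise), a classic word count over a tiny in-memory dataset:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # A tiny in-memory dataset standing in for files on HDFS
    lines = spark.sparkContext.parallelize(["big data is big", "data analysis at scale"])

    # Map each word to (word, 1), then reduce by key to count occurrences
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    print(counts.collect())
    spark.stop()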
mlfsas Machine Learning Fundamentals with Scala and Apache Spark 14 hours The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Scala programming language and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications. Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Machine Learning with Scala and Spark Choice of libraries Add-on tools Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means
octnp Octave not only for programmers 21 hours This course is dedicated to those who would like to learn an alternative to the commercial MATLAB package. The three-day training provides comprehensive information on moving around the environment and using the OCTAVE package for data analysis and engineering calculations. The training is aimed at beginners, but also at those who already know the program and would like to systematize their knowledge and improve their skills. Knowledge of other programming languages is not required, but it will greatly facilitate the learners' acquisition of knowledge. The course will show you how to use the program in many practical examples. Introduction Simple calculations Starting Octave, Octave as a calculator, built-in functions The Octave environment Named variables, numbers and formatting, number representation and accuracy, loading and saving data Arrays and vectors Extracting elements from a vector, vector maths Plotting graphs Improving the presentation, multiple graphs and figures, saving and printing figures Octave programming I: Script files Creating and editing a script, running and debugging scripts, Control statements If else, switch, for, while Octave programming II: Functions Matrices and vectors Matrix, the transpose operator, matrix creation functions, building composite matrices, matrices as tables, extracting bits of matrices, basic matrix functions Linear and Nonlinear Equations More graphs Putting several graphs in one window, 3D plots, changing the viewpoint, plotting surfaces, images and movies Eigenvectors and the Singular Value Decomposition Complex numbers Plotting complex numbers Statistics and data processing GUI Development
dmmlr Data Mining & Machine Learning with R 14 hours Introduction to Data mining and Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Discriminant analysis Logistic regression K-Nearest neighbors Support Vector Machines Neural networks Decision trees Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means Advanced topics Ensemble models Mixed models Boosting Examples Multidimensional reduction Factor Analysis Principal Component Analysis Examples
mlentre Machine Learning Concepts for Entrepreneurs and Managers 21 hours This training course is for people who would like to apply Machine Learning in practical applications for their team. The training will not dive into technicalities and will revolve around basic concepts and business/operational applications of the same. Target Audience Investors and AI entrepreneurs Managers and Engineers whose company is venturing into the AI space Business Analysts & Investors Introduction to Neural Networks Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Machine Learning with Python Choice of libraries Add-on tools Machine learning Concepts and Applications Regression Linear regression Generalizations and Nonlinearity Use cases Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Use Cases Cross-validation and Resampling Cross-validation approaches Bootstrap Use Cases Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means Short Introduction to NLP methods word and sentence tokenization text classification sentiment analysis spelling correction information extraction parsing meaning extraction question answering Artificial Intelligence & Deep Learning Technical Overview R vs. Python Caffe vs. TensorFlow Various Machine Learning Libraries
cpb100 Google Cloud Platform Fundamentals: Big Data & Machine Learning 8 hours This one-day instructor-led course introduces participants to the big data capabilities of Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants get an overview of the Google Cloud platform and a detailed view of the data processing and machine learning capabilities. This course showcases the ease, flexibility, and power of big data solutions on Google Cloud Platform. This course teaches participants the following skills: Identify the purpose and value of the key Big Data and Machine Learning products in the Google Cloud Platform. Use Cloud SQL and Cloud Dataproc to migrate existing MySQL and Hadoop/Pig/Spark/Hive workloads to Google Cloud Platform. Employ BigQuery and Cloud Datalab to carry out interactive data analysis. Train and use a neural network using TensorFlow. Employ ML APIs. Choose between different data processing products on the Google Cloud Platform. This class is intended for the following: Data analysts, Data scientists, Business analysts getting started with Google Cloud Platform. Individuals responsible for designing pipelines and architectures for data processing, creating and maintaining machine learning and statistical models, querying datasets, visualizing query results and creating reports. Executives and IT decision makers evaluating Google Cloud Platform for use by data scientists. The course includes presentations, demonstrations, and hands-on labs. Module 1: Introducing Google Cloud Platform Google Platform Fundamentals Overview. Google Cloud Platform Data Products and Technology. Usage scenarios. Lab: Sign up for Google Cloud Platform. Module 2: Compute and Storage Fundamentals CPUs on demand (Compute Engine). A global filesystem (Cloud Storage). CloudShell. Lab: Set up a Ingest-Transform-Publish data processing pipeline. Module 3: Data Analytics on the Cloud Stepping-stones to the cloud. Cloud SQL: your SQL database on the cloud. Lab: Importing data into CloudSQL and running queries. Spark on Dataproc. Lab: Machine Learning Recommendations with SparkML. Module 4: Scaling Data Analysis Fast random access. Datalab. BigQuery. Lab: Build machine learning dataset. Machine Learning with TensorFlow. Lab: Train and use neural network. Fully built models for common needs. Lab: Employ ML APIs Module 5: Data Processing Architectures Message-oriented architectures with Pub/Sub. Creating pipelines with Dataflow. Reference architecture for real-time and batch data processing. Module 6: Summary Why GCP? Where to go from here Additional Resources
opennmt OpenNMT: Setting up a Neural Machine Translation system 7 hours OpenNMT is a full-featured, open-source (MIT) neural machine translation system that utilizes the Torch mathematical toolkit. In this training participants will learn how to set up and use OpenNMT to carry out translation of various sample data sets. The course starts with an overview of neural networks as they apply to machine translation. Participants will carry out live exercises throughout the course to demonstrate their understanding of the concepts learned and get feedback from the instructor. By the end of this training, participants will have the knowledge and practice needed to implement a live OpenNMT solution. Source and target language samples will be pre-arranged per the audience's requirements. Audience     Translation and localization engineers     Machine translation specialists and managers Format of the course     Part lecture, part discussion, heavy hands-on practice Introduction     Why Neural Machine Translation? Overview of the Torch project Installation and setup Preprocessing your data Training the model Translating Using pre-trained models Working with Lua scripts Using extensions Troubleshooting Joining the community Closing remarks
predio Machine Learning with PredictionIO 21 hours PredictionIO is an open source Machine Learning Server built on top of a state-of-the-art open source stack. Audience This course is directed at developers and data scientists who want to create predictive engines for any machine learning task. Getting Started Quick Intro Installation Guide Downloading a Template Deploying an Engine Customizing an Engine App Integration Overview Developing PredictionIO System Architecture Event Server Overview Collecting Data Learning DASE Implementing DASE Evaluation Overview IntelliJ IDEA Guide Scala API Machine Learning Education and Usage Examples Comics Recommendation Text Classification Community Contributed Demo Dimensionality Reduction and usage PredictionIO SDKs (Select One) Java PHP Python Ruby Community Contributed
Fairsec Fairseq: Setting up a CNN-based machine translation system 7 hours Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT). In this training, participants will learn how to use Fairseq to carry out translation of sample content. By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution. Source and target language content samples can be prepared according to the audience's requirements. Audience     Translation and localization engineers Format of the course     Part lecture, part discussion, heavy hands-on practice Introduction     Why Neural Machine Translation? Overview of the Torch project Overview of a Convolutional Neural Machine Translation model     Convolutional Sequence to Sequence Learning     Convolutional Encoder Model for Neural Machine Translation     Standard LSTM-based model Overview of training approaches     About GPUs and CPUs     Fast beam search generation Installation and setup Evaluating pre-trained models Preprocessing your data Training the model Translating Converting a trained model to use CPU-only operations Joining the community Closing remarks
systemml Apache SystemML for Machine Learning 14 hours Apache SystemML is a distributed and declarative machine learning platform. SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations on Apache Hadoop and Apache Spark. Audience This course is suitable for Machine Learning researchers, developers and engineers seeking to utilize SystemML as a framework for machine learning. Running SystemML Standalone Spark MLContext Spark Batch Hadoop Batch JMLC Tools Debugger IDE Troubleshooting Languages and ML Algorithms DML PyDML Algorithms
cpde Data Engineering on Google Cloud Platform 32 hours This four-day instructor-led class provides participants a hands-on introduction to designing and building data processing systems on Google Cloud Platform. Through a combination of presentations, demos, and hand-on labs, participants will learn how to design data processing systems, build end-to-end data pipelines, analyze data and carry out machine learning. The course covers structured, unstructured, and streaming data. This course teaches participants the following skills: Design and build data processing systems on Google Cloud Platform Process batch and streaming data by implementing autoscaling data pipelines on Cloud Dataflow Derive business insights from extremely large datasets using Google BigQuery Train, evaluate and predict using machine learning models using Tensorflow and Cloud ML Leverage unstructured data using Spark and ML APIs on Cloud Dataproc Enable instant insights from streaming data This class is intended for experienced developers who are responsible for managing big data transformations including: Extracting, Loading, Transforming, cleaning, and validating data Designing pipelines and architectures for data processing Creating and maintaining machine learning and statistical models Querying datasets, visualizing query results and creating reports The course includes presentations, demonstrations, and hands-on labs. Leveraging Unstructured Data with Cloud Dataproc on Google Cloud Platform Module 1: Google Cloud Dataproc Overview Creating and managing clusters. Leveraging custom machine types and preemptible worker nodes. Scaling and deleting Clusters. Lab: Creating Hadoop Clusters with Google Cloud Dataproc. Module 2: Running Dataproc Jobs Running Pig and Hive jobs. Separation of storage and compute. Lab: Running Hadoop and Spark Jobs with Dataproc. Lab: Submit and monitor jobs. Module 3: Integrating Dataproc with Google Cloud Platform Customize cluster with initialization actions. BigQuery Support. Lab: Leveraging Google Cloud Platform Services. Module 4: Making Sense of Unstructured Data with Google’s Machine Learning APIs Google’s Machine Learning APIs. Common ML Use Cases. Invoking ML APIs. Lab: Adding Machine Learning Capabilities to Big Data Analysis. Serverless Data Analysis with Google BigQuery and Cloud Dataflow Module 5: Serverless data analysis with BigQuery What is BigQuery. Queries and Functions. Lab: Writing queries in BigQuery. Loading data into BigQuery. Exporting data from BigQuery. Lab: Loading and exporting data. Nested and repeated fields. Querying multiple tables. Lab: Complex queries. Performance and pricing. Module 6: Serverless, autoscaling data pipelines with Dataflow The Beam programming model. Data pipelines in Beam Python. Data pipelines in Beam Java. Lab: Writing a Dataflow pipeline. Scalable Big Data processing using Beam. Lab: MapReduce in Dataflow. Incorporating additional data. Lab: Side inputs. Handling stream data. GCP Reference architecture. Serverless Machine Learning with TensorFlow on Google Cloud Platform Module 7: Getting started with Machine Learning What is machine learning (ML). Effective ML: concepts, types. ML datasets: generalization. Lab: Explore and create ML datasets. Module 8: Building ML models with Tensorflow Getting started with TensorFlow. Lab: Using tf.learn. TensorFlow graphs and loops + lab. Lab: Using low-level TensorFlow + early stopping. Monitoring ML training. Lab: Charts and graphs of TensorFlow training. 
Module 9: Scaling ML models with CloudML Why Cloud ML? Packaging up a TensorFlow model. End-to-end training. Lab: Run a ML model locally and on cloud. Module 10: Feature Engineering Creating good features. Transforming inputs. Synthetic features. Preprocessing with Cloud ML. Lab: Feature engineering. Building Resilient Streaming Systems on Google Cloud Platform Module 11: Architecture of streaming analytics pipelines Stream data processing: Challenges. Handling variable data volumes. Dealing with unordered/late data. Lab: Designing streaming pipeline. Module 12: Ingesting Variable Volumes What is Cloud Pub/Sub? How it works: Topics and Subscriptions. Lab: Simulator. Module 13: Implementing streaming pipelines Challenges in stream processing. Handle late data: watermarks, triggers, accumulation. Lab: Stream data processing pipeline for live traffic data. Module 14: Streaming analytics and dashboards Streaming analytics: from data to decisions. Querying streaming data with BigQuery. What is Google Data Studio? Lab: build a real-time dashboard to visualize processed data. Module 15: High throughput and low-latency with Bigtable What is Cloud Spanner? Designing Bigtable schema. Ingesting into Bigtable. Lab: streaming into Bigtable.  
aiintrozero From Zero to AI 35 hours This course is created for people who have no previous experience in probability and statistics. Probability (3.5h) Definition of probability Binomial distribution Everyday usage exercises Statistics (10.5h) Descriptive Statistics Inferential Statistics Regression Logistic Regression Exercises Intro to programming (3.5h) Procedural Programming Functional Programming OOP Programming Exercises (writing logic for a game of choice, e.g. noughts and crosses) Machine Learning (10.5h) Classification Clustering Neural Networks Exercises (write AI for a computer game of choice) Rules Engines and Expert Systems (7 hours) Intro to Rule Engines Writing AI for the same game and combining solutions into a hybrid approach
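To illustrate the "Binomial distribution" item above with an everyday example (assuming SciPy is available; this is not course material), the probability of seeing a given number of heads in ten fair coin flips:

    from scipy.stats import binom

    # Probability of exactly 7 heads in 10 fair coin flips: pmf(k, n, p)
    print("P(X = 7):", binom.pmf(7, 10, 0.5))

    # Probability of at least 7 heads: 1 - P(X <= 6)
    print("P(X >= 7):", 1 - binom.cdf(6, 10, 0.5))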
aiauto Artificial Intelligence in Automotive 14 hours This course covers AI (emphasizing Machine Learning and Deep Learning) in the Automotive Industry. It helps to determine which technology can be (potentially) used in multiple situations in a car: from simple automation and image recognition to autonomous decision making. Current state of the technology What is used What may be potentially used Rules-based AI Simplifying decisions Machine Learning Classification Clustering Neural Networks Types of Neural Networks Presentation of working examples and discussion Deep Learning Basic vocabulary When to use Deep Learning, when not to Estimating computational resources and cost Very short theoretical background to Deep Neural Networks Deep Learning in practice (mainly using TensorFlow) Preparing Data Choosing a loss function Choosing the appropriate type of neural network Accuracy vs speed and resources Training the neural network Measuring efficiency and error Sample usage Anomaly detection Image recognition ADAS
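A hedged sketch of the anomaly-detection use case listed above. The course reportedly works mainly with TensorFlow; this illustration instead uses scikit-learn's IsolationForest on synthetic sensor readings, purely to show the idea of flagging outliers.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic "sensor readings": mostly normal points plus a few extreme ones
    rng = np.random.RandomState(42)
    normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
    anomalies = rng.uniform(low=6.0, high=8.0, size=(5, 2))
    X = np.vstack([normal, anomalies])

    # Fit an isolation forest and flag outliers (-1 = anomaly, 1 = normal)
    detector = IsolationForest(contamination=0.03, random_state=42)
    labels = detector.fit_predict(X)
    print("Flagged anomalies:", int((labels == -1).sum()))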
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example 28 hours This course will give you knowledge of neural networks and, more generally, of machine learning algorithms and deep learning (algorithms and applications). This training is focused on fundamentals, but it will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc. The examples are made in TensorFlow. TensorFlow Basics Creation, Initializing, Saving, and Restoring TensorFlow variables Feeding, Reading and Preloading TensorFlow Data How to use TensorFlow infrastructure to train models at scale Visualizing and Evaluating models with TensorBoard TensorFlow Mechanics Inputs and Placeholders Build the Graph Inference Loss Training Train the Model The Graph The Session Train Loop Evaluate the Model Build the Eval Graph Eval Output The Perceptron Activation functions The perceptron learning algorithm Binary classification with the perceptron Document classification with the perceptron Limitations of the perceptron From the Perceptron to Support Vector Machines Kernels and the kernel trick Maximum margin classification and support vectors Artificial Neural Networks Nonlinear decision boundaries Feedforward and feedback artificial neural networks Multilayer perceptrons Minimizing the cost function Forward propagation Back propagation Improving the way neural networks learn Convolutional Neural Networks Goals Model Architecture Principles Code Organization Launching and Training the Model Evaluating a Model
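To make the "placeholders, graph, session" and "saving variables" items above concrete, here is a minimal sketch in the TensorFlow 1.x graph-and-session style current when this outline was written. It is illustrative only; the shapes and checkpoint path are arbitrary.

    import tensorflow as tf

    # Build the graph: a placeholder fed at run time and trainable variables
    x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
    W = tf.Variable(tf.zeros([3, 1]), name="W")
    b = tf.Variable(tf.zeros([1]), name="b")
    y = tf.matmul(x, W) + b

    init = tf.global_variables_initializer()
    saver = tf.train.Saver()

    # Run the graph in a session, feeding the placeholder, then save variables
    with tf.Session() as sess:
        sess.run(init)
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
        saver.save(sess, "/tmp/model.ckpt")   # arbitrary checkpoint path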
mlintro Introduction to Machine Learning 7 hours This training course is for people who would like to apply basic Machine Learning techniques in practical applications. Audience Data scientists and statisticians who have some familiarity with machine learning and know how to program R. The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to machine learning to participants interested in applying the methods at work. Sector-specific examples are used to make the training relevant to the audience. Naive Bayes Multinomial models Bayesian categorical data analysis Discriminant analysis Linear regression Logistic regression GLM EM Algorithm Mixed Models Additive Models Classification KNN Ridge regression Clustering
mldt Machine Learning and Deep Learning 21 hours This course covers AI (emphasizing Machine Learning and Deep Learning). Machine learning Introduction to Machine Learning Applications of machine learning Supervised Versus Unsupervised Learning Machine Learning Algorithms Regression Classification Clustering Recommender System Anomaly Detection Reinforcement Learning Regression Simple & Multiple Regression Least Squares Method Estimating the Coefficients Assessing the Accuracy of the Coefficient Estimates Assessing the Accuracy of the Model Post Estimation Analysis Other Considerations in the Regression Models Qualitative Predictors Extensions of the Linear Models Potential Problems Bias-variance trade-off [under-fitting/over-fitting] for regression models Resampling Methods Cross-Validation The Validation Set Approach Leave-One-Out Cross-Validation k-Fold Cross-Validation Bias-Variance Trade-Off for k-Fold The Bootstrap Model Selection and Regularization Subset Selection [Best Subset Selection, Stepwise Selection, Choosing the Optimal Model] Shrinkage Methods/ Regularization [Ridge Regression, Lasso & Elastic Net] Selecting the Tuning Parameter Dimension Reduction Methods Principal Components Regression Partial Least Squares Classification Logistic Regression The Logistic Model cost function Estimating the Coefficients Making Predictions Odds Ratio Performance Evaluation Metrics [Sensitivity/Specificity/PPV/NPV, Precision, ROC curve etc.] Multiple Logistic Regression Logistic Regression for >2 Response Classes Regularized Logistic Regression Linear Discriminant Analysis Using Bayes’ Theorem for Classification Linear Discriminant Analysis for p=1 Linear Discriminant Analysis for p >1 Quadratic Discriminant Analysis K-Nearest Neighbors Classification with Non-linear Decision Boundaries Support Vector Machines Optimization Objective The Maximal Margin Classifier Kernels One-Versus-One Classification One-Versus-All Classification Comparison of Classification Methods Introduction to Deep Learning ANN Structure Biological neurons and artificial neurons Non-linear Hypothesis Model Representation Examples & Intuitions Transfer Function/ Activation Functions Typical classes of network architectures Feed forward ANN. Structures of Multi-layer feed forward networks Back propagation algorithm Back propagation - training and convergence Functional approximation with back propagation Practical and design issues of back propagation learning Deep Learning Artificial Intelligence & Deep Learning Softmax Regression Self-Taught Learning Deep Networks Demos and Applications Lab: Getting Started with R Introduction to R Basic Commands & Libraries Data Manipulation Importing & Exporting data Graphical and Numerical Summaries Writing functions Regression Simple & Multiple Linear Regression Interaction Terms Non-linear Transformations Dummy variable regression Cross-Validation and the Bootstrap Subset selection methods Penalization [Ridge, Lasso, Elastic Net] Classification Logistic Regression, LDA, QDA, and KNN, Resampling & Regularization Support Vector Machine Resampling & Regularization Artificial Neural Network Deep Learning Note: For ML algorithms, case studies will be used to discuss their application, advantages & potential issues. Analysis of different data sets will be performed using R
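The regularization section above ("Ridge Regression, Lasso & Elastic Net", "Selecting the Tuning Parameter") can be illustrated with a short Python/scikit-learn sketch. The course labs use R, so this is only an analogous example on synthetic data, with an arbitrary grid of alpha values.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import GridSearchCV

    # Synthetic regression data: 20 features, only 5 of them informative
    X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                           noise=10.0, random_state=0)

    # Select the tuning parameter (alpha) by cross-validated grid search
    search = GridSearchCV(Lasso(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
    search.fit(X, y)

    print("Best alpha:", search.best_params_["alpha"])
    print("Non-zero coefficients:", int(np.sum(search.best_estimator_.coef_ != 0)))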

Upcoming Courses

Course | Course Date | Course Price [Remote / Classroom]
Machine Learning with PredictionIO - Bristol, Temple Gate | Mon, 2017-08-14 09:30 | £3300 / £4150
Artificial Neural Networks, Machine Learning, Deep Thinking - Edinburgh | Mon, 2017-08-14 09:30 | £3900 / £5550
Pattern Recognition - Coventry - The Quadrant | Mon, 2017-08-14 09:30 | £6500 / £7750
Machine Learning Fundamentals with R - Glasgow | Tue, 2017-08-15 09:30 | £2600 / £3300


