SUPERCOMPUTERS PROGRAMMING (deep learning series #4) 2019-2020

We teach how to think, design algorithms, and program in parallel, building on TensorFlow, CUDA, neural networks, Anaconda, C++, R, Haskell, Python, Matlab, and Octave, and putting it all together with OpenCL and CUDA on the latest nVidia platforms for practical tasks and robotics.

When: To be defined.
Where: To be defined.
Cost: To be defined.

Write to info@strongartificialintelligence.com, subject: INTRO2019

SUPERCOMPUTERS PROGRAMMING (deep learning series #3) 2017

Maurizio Viviani intro
The Deep Learning series is an intensive hands-on course that builds the capacity to create and manage strong AI systems.
During my courses I have seen how hard it is for students beginning their journey into this new universe: with so many elements, techniques, software packages, and different logical approaches, most students get lost or take unnecessarily difficult paths. I created this introductory course to give you the best and clearest approach to Deep Learning and to accelerate your learning to the maximum.
For 7 consecutive days you will have 4 hours of lessons plus 4 hours of hands-on supercomputer programming each day; my assistants and I will guide you, clarify your doubts, and explain the difficult theory.
Who is this course for: curious people, students, researchers, scientists, programmers. This course is for people who love science and want to go deeper.

Prerequisites: curiosity, maths, some programming. The more, the better. Classes will be formed after a selection process so that each class learns at a homogeneous pace.

Cost for the online course: 250 US $. Discounts apply UNTIL JANUARY 9. Discounts apply automatically if you have attended other courses with us. You will be asked to pay only if your enrollment request is accepted; should I judge that the class is not suitable for you, I will suggest other courses to build the right foundation.
Cost for the in-person, in-class San Francisco course: 1500 US $. Discounts apply if you have been invited or if you have attended other courses with us. You will be asked to pay only once your enrollment request is accepted.
Date: The course runs from the 1st to the 7th of every month (in January, from the 2nd to the 8th). If you take the online class you have 30 days to complete the assignments and review them with us.
What happens after the course? You can enter our tutoring programs, or take internships and/or further courses with us, in which case our tutoring is free. We want to change the world by doing our best and helping everyone do their best.

Day 1:
Supercomputers and Mathematics: MATLAB/OCTAVE accelerated parallel GPU programming for Machine Learning. Change your vision: algorithms.
Laboratory: Write and test your Matlab / Octave algorithm
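The "think in whole arrays" style that Matlab/Octave teaches on Day 1 can be sketched in Python, the course's other main language. This is a hypothetical illustration, not the actual lab material: the signal and gain values are invented for the example.

```python
# Hypothetical sketch: the vectorized, whole-array style that
# Matlab/Octave encourages, written here in plain Python.
# Element-wise scaling of a signal, loop style vs. vector style.

signal = [1.0, 2.0, 3.0, 4.0]
gain = 2.5

# Loop style (how beginners often start):
scaled_loop = []
for x in signal:
    scaled_loop.append(gain * x)

# Vector style (one operation over the whole array at once,
# which is the mindset that maps naturally onto parallel hardware):
scaled_vec = [gain * x for x in signal]

print(scaled_vec)  # [2.5, 5.0, 7.5, 10.0]
```

Both produce the same result; the point of the vector style is that it expresses the whole computation as one operation, exactly the shape a GPU can execute in parallel.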

Day 2:
Enlarge your vision with R for big data sets. Manage Matlab/Octave routines through Python. Build a powerful agent.
Laboratory: write and test R and Python agents for your algorithm
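The idea of a Python "agent" that manages an algorithm routine, as the Day 2 lab does for Matlab/Octave routines, can be sketched as below. The `Agent` class and the `mean` routine are hypothetical names invented for this example.

```python
# Hypothetical sketch of a tiny Python "agent" that wraps an
# algorithm routine (here, a simple mean) behind a managed
# interface, the way the lab wraps Matlab/Octave routines.

class Agent:
    def __init__(self, routine):
        self.routine = routine          # the algorithm the agent manages
        self.history = []               # results produced so far

    def run(self, data):
        result = self.routine(data)
        self.history.append(result)     # keep a record of every run
        return result

def mean(values):
    return sum(values) / len(values)

agent = Agent(mean)
print(agent.run([1, 2, 3, 4]))  # 2.5
```

In the real lab the routine would be an external Matlab/Octave or R computation rather than a Python function, but the agent pattern, call, record, return, is the same.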

Day 3:
C++: the glue of your code. Get access to powerful C++ libraries.
Laboratory: Create and use C++ libraries for feeding your agents
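Feeding a Python agent from compiled native code, as the Day 3 lab does with C++ libraries, can be sketched with the standard `ctypes` module. This is a hedged example: it loads the already-present C standard library (assuming a POSIX system) instead of a course-built C++ library, which would be loaded the same way from its own `.so` file.

```python
# Hypothetical sketch: calling compiled native code from Python,
# as the lab does with C++ libraries.  Here we reach into the C
# standard library via ctypes (assumes a POSIX system); a real
# lab would load its own compiled shared library instead.
import ctypes

libc = ctypes.CDLL(None)            # handle to the already-loaded C library
libc.abs.restype = ctypes.c_int     # declare the return type
libc.abs.argtypes = [ctypes.c_int]  # declare the argument types

print(libc.abs(-42))  # 42
```

Declaring `restype` and `argtypes` is the key discipline: it is the Python-side equivalent of the C++ header, and skipping it is a common source of subtle bugs.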

Day 4:
CUDA day: go deep into parallel programming, nVidia's top technology.
Laboratory: Write CUDA code that breaks your algorithm into thousands of threads and keeps them optimized and under control. On a recent nVidia GPGPU, execution time must be no more than 10% of the unoptimized code's.
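The core CUDA decomposition pattern, giving every element of the problem a (block, thread) index, can be sketched in plain Python. This is a CPU-only illustration of the indexing arithmetic (mirroring CUDA's `blockIdx.x * blockDim.x + threadIdx.x`); real speedups of course require running the kernel on a GPU.

```python
# Hypothetical sketch of the CUDA decomposition pattern in plain
# Python: each element gets a (block, thread) index, just like
# blockIdx.x * blockDim.x + threadIdx.x in a real CUDA kernel.

def kernel(block_idx, thread_idx, block_dim, data, out):
    i = block_idx * block_dim + thread_idx   # global thread index
    if i < len(data):                        # bounds guard, as in real CUDA
        out[i] = data[i] * data[i]

data = list(range(10))
out = [0] * len(data)
BLOCK_DIM = 4
n_blocks = (len(data) + BLOCK_DIM - 1) // BLOCK_DIM  # ceiling division

# On a GPU all of these "threads" run at the same time;
# here we simply loop over them to simulate the launch.
for b in range(n_blocks):
    for t in range(BLOCK_DIM):
        kernel(b, t, BLOCK_DIM, data, out)

print(out)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Note the bounds guard: the grid launches `n_blocks * BLOCK_DIM` threads, which may exceed the data size, so every real CUDA kernel checks its global index before touching memory.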

Day 5:
TensorFlow: Google's top technology for Machine Learning, and how to program it. Deep Neural Networks. Convolutional Neural Networks.
Laboratory: Put TensorFlow to work on your data. Implement it. On a recent nVidia GPGPU, execution time must be no more than 10% of yesterday's optimized code and no more than 1% of the unoptimized code.
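The operation at the heart of a TensorFlow dense layer, y = relu(Wx + b), can be written out by hand to see what the library computes for you on the GPU. This is a pure-Python sketch with invented weights, not TensorFlow code.

```python
# Hypothetical sketch of what one dense neural-network layer
# computes: y = relu(W x + b).  Pure Python, no TensorFlow needed;
# the library runs exactly this operation, massively parallelized.

def dense_relu(W, b, x):
    y = []
    for row, bias in zip(W, b):
        s = sum(w * xi for w, xi in zip(row, x)) + bias
        y.append(max(0.0, s))            # ReLU activation
    return y

W = [[1.0, -1.0],                        # example weights (invented)
     [0.5,  0.5]]
b = [0.0, -1.0]                          # example biases (invented)
x = [2.0, 1.0]                           # example input

print(dense_relu(W, b, x))  # [1.0, 0.5]
```

Stacking several such layers, with the weights learned rather than hand-written, is what turns this into a deep neural network.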

Day 6:
Training day. Design and implement Deep Neural Networks to train your algorithm.
Laboratory: Improve your code. On a recent nVidia GPGPU, execution time must be no more than 10% of yesterday's optimized TensorFlow code, no more than 1% of Day 4's optimized code, and no more than 0.1% of the unoptimized code.
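The training loop that Day 6 is built around, repeatedly nudging parameters down the gradient of the error, can be sketched on the smallest possible model. The one-parameter model `y = w * x` and its toy data are invented for this example; a deep network trains by the same loop with many more parameters.

```python
# Hypothetical sketch of gradient-descent training on the smallest
# possible model, y = w * x, fitting w to data generated with w = 3.

data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0]]
w = 0.0                                    # initial guess
lr = 0.05                                  # learning rate

for epoch in range(200):
    grad = 0.0
    for x, y in data:
        grad += 2 * (w * x - y) * x        # d/dw of the squared error (w*x - y)^2
    w -= lr * grad / len(data)             # step against the mean gradient

print(round(w, 3))  # 3.0
```

The learning rate is the first hyperparameter you meet: too small and training crawls, too large and `w` oscillates or diverges instead of settling at 3.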

Day 7:
Your algorithm is now AI. Feed it with big data. Search for and tune hyperparameters. Classify.
Laboratory: A one-billion-item data set. Adapt and correct your code. Train your DNNs. Try to bring your newborn Strong Artificial Intelligence to life.
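The hyperparameter search of Day 7 follows one pattern: try each candidate setting, score it on held-out data, keep the best. The sketch below uses an invented threshold classifier and made-up validation pairs so it stays self-contained; a real DNN plugs into the same loop.

```python
# Hypothetical sketch of hyperparameter search: evaluate every
# candidate on a validation set and keep the best scorer.  The
# "model" is a one-parameter threshold classifier (invented).

def accuracy(threshold, samples):
    # samples are (value, label) pairs; predict 1 when value > threshold
    correct = sum((v > threshold) == bool(y) for v, y in samples)
    return correct / len(samples)

validation = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]   # held-out data
grid = [0.1, 0.3, 0.5, 0.7]                # candidate hyperparameter values

best = max(grid, key=lambda t: accuracy(t, validation))
print(best, accuracy(best, validation))  # 0.5 1.0
```

With a billion-item data set the principle is unchanged; what changes is that each evaluation is expensive, which is exactly why the GPU acceleration from the earlier days matters.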

When: every month from the 1st to the 7th (in January only, from the 2nd to the 8th). The course is 8 hours/day: morning lessons from 9 am to 1 pm and afternoon laboratory from 1.30 pm to 5.30 pm.
Where: company HQ, 101 California Street Suite 2710, San Francisco CA 94111, USA

SUPERCOMPUTERS PROGRAMMING (deep learning series #2) 2017
Maurizio Viviani intro
Welcome! I am Maurizio Viviani, CEO of Strongartificialintelligence.com.
My company was founded to provide services and highly skilled training in parallel programming for supercomputers.
My aim is to help build a new way of thinking about algorithms and of programming them for parallel computing.
This is a crucial step for High Performance Computing, the road to supercomputers.
My top courses cover CUDA programming.
CUDA is the language created by nVidia for programming Graphics Processing Units efficiently.
This kind of programming is fully parallel: thousands of threads run at the same time, leaving the CPU only the heavier computations along the way.

By changing our way of programming, we can send many different routines to the GPU, almost completely unloading the CPU and making the machine hundreds or thousands of times more capable than it would be working on the CPU alone.
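The offloading idea, handing many independent routines to a pool of workers so the main thread stays free, can be sketched with Python's standard library. This is only an illustration of the pattern: the thread pool stands in for GPU threads, and (because of CPython's interpreter lock) it does not deliver the GPU-scale speedups described above.

```python
# Hypothetical sketch of the offloading pattern: dispatch many
# independent routines to a pool of workers (standing in for GPU
# threads) while the main thread stays free to coordinate.
from concurrent.futures import ThreadPoolExecutor

def routine(n):
    return n * n                     # an independent unit of work

with ThreadPoolExecutor(max_workers=8) as pool:
    # map fans the work out across the workers and gathers results in order
    results = list(pool.map(routine, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

On a GPU the same fan-out happens across thousands of hardware threads at once, which is where the hundreds-to-thousands-fold gains come from.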
This is the next revolution in the digital era.
We will drastically reduce energy consumption simply by using a few supercomputers instead of many traditional computers.
What is a supercomputer? It is a machine that can perform far more computations than a normal computer.
Because of this, the exact definition of a supercomputer keeps changing and always will; perhaps in the future they will all be quantum computers, but now and for the near future supercomputers will be parallel computing machines. Reduced energy consumption also means much less weight in autonomous cars and in flying vehicles like my stratospheric balloons, where I went from 12 kilograms of devices using many Arduino controllers to just 2.4 kilograms using an nVidia Jetson TX1.
A long-term effect will be a huge reduction of waste at the end of the useful life of electronic devices.
The language for parallel computing developed by nVidia is CUDA, and it is strongly connected to C, C++ and Python, which are very important languages for scientific programming. C and C++ are the base of our existing technology, and Python is a marvelous language with a huge base of existing scientific libraries, which makes it very easy to build reliable, multi-purpose software.
Python is one of the main languages used at Google.

Our C++ and Python courses are full immersion and quickly lead to a deep knowledge of both languages.
Python is the one growing fastest, thanks to its shorter learning time and its adaptability.
Once you know C++ and/or Python, you can approach the CUDA classes, because they rely on knowledge of those languages.
Alongside C++ and Python we teach Matlab and Octave (the free alternative to Matlab); they are the main tools for creating the algorithm mathematically, and they put the programmer in an out-of-the-box perspective, perfect for writing optimized pre-code.
Once the algorithm is created, it is easy to write the C++/Python/Java program for it.

Alongside Matlab and Octave we teach R, the statistical language; it is a huge boost for managing big data sets, making it much quicker to write internal routines that deliver results to the main external software.
TensorFlow is probably the strongest machine learning library, and it will make AI very fast.
We do not teach just the languages; we teach a way of programming that leads to deep learning, the natural evolution of machine learning, in which machines are programmed to perform tasks independently and to modify them based on what they learn.
Deep learning, passing through unsupervised learning, is the road to the creation of a strong artificial intelligence, which will be the biggest step of all.
Strong artificial intelligence means the capability of a machine to perform better than the human brain; this is already happening in some fields, and it will soon happen in more.

Autonomous cars will practically eliminate the risk of accidents: mechanical failures of wheels, brakes, and engines will be well managed by the supercomputer, and human error will never again be a cause of casualties.

Robotics will be one of the main fields for parallel programming because of the reduced energy consumption and the much lower weight of the computers.

Cryptocurrencies will take a huge step forward thanks to the much higher computing capability of supercomputers, and they will become far more energy-efficient than they are now.
All of this will be possible only if the software running everything is rewritten as parallel code. The revolution is here, and we help to program it.
The topics are vast, and for this reason we offer 2-day full-immersion "pills" in each discipline, so everyone can test whether they are ready for an intensive course.
During my CUDA courses, students really program a supercomputer.

New York 2016/2017: