In this tutorial I have shown how to use Weka to combine multiple classification algorithms. Both ensemble methods (bagging and boosting) and the voting combination technique are discussed. The parameters and procedure for invoking stacking are left to the user, because stacking is closely related to voting.
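The voting technique discussed in the tutorial can be sketched in a few lines of plain Python (Weka itself is Java; the three threshold "classifiers" below are invented toys, not Weka models):

```python
from collections import Counter

def vote(predictions):
    """Majority vote over per-classifier predictions for one instance."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifiers, x):
    """Ask every base classifier, then combine by majority vote."""
    return vote([clf(x) for clf in classifiers])

# Three toy "classifiers": threshold rules on a single feature.
clf_a = lambda x: "pos" if x > 0.3 else "neg"
clf_b = lambda x: "pos" if x > 0.5 else "neg"
clf_c = lambda x: "pos" if x > 0.7 else "neg"

print(ensemble_predict([clf_a, clf_b, clf_c], 0.6))  # two of three say "pos"
```

Stacking replaces the majority vote with a second-level model trained on the base classifiers' outputs, which is why the tutorial calls the two techniques closely related.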
Views: 33898 Rushdi Shams
MIT 6.034 Artificial Intelligence, Fall 2010 View the complete course: http://ocw.mit.edu/6-034F10 Instructor: Patrick Winston Can multiple weak classifiers be used to make a strong one? We examine the boosting algorithm, which adjusts the weight of each classifier, and work through the math. We end with how boosting doesn't seem to overfit, and mention some applications. License: Creative Commons BY-NC-SA More information at http://ocw.mit.edu/terms More courses at http://ocw.mit.edu
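The weight adjustment the lecture works through can be sketched as one AdaBoost reweighting round (the four-example case below is an invented toy, not from the lecture):

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost reweighting step.
    weights: current example weights (sum to 1); correct: per-example booleans
    saying whether the weak classifier got that example right."""
    err = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * math.log((1 - err) / err)   # vote strength of this classifier
    new = [w * math.exp(-alpha if c else alpha)   # shrink correct, grow wrong
           for w, c in zip(weights, correct)]
    z = sum(new)
    return alpha, [w / z for w in new]            # renormalize to sum to 1

# Four examples, uniform weights; the weak learner misclassifies the last one.
alpha, w = adaboost_round([0.25] * 4, [True, True, True, False])
```

Note the standard property this exhibits: after reweighting, the misclassified examples carry exactly half the total weight, which forces the next weak classifier to do something different.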
Views: 173900 MIT OpenCourseWare
In this episode, you can see our chief-data-science-executive-wizard-commander Ingo and Data Scientist #7 ride around in a shopping cart. Of course, they also have to do something else as they eventually get tired of it. So, Ingo uses M&Ms to explain ensemble methods and how you can leverage the combined expertise of a panel of machine learning algorithms to deliver better predictive power. Watch Ingo ‘crowd guess’ the number of M&Ms in a bowl by randomly sampling strangers in a parking lot and averaging the estimates - exemplifying the bagging method. Also, learn how the boosting method differs from bagging (obviously using more M&Ms). Data Scientist #7 loves not only the M&Ms but also the idea of ensembles, so he immediately wants to boost his knowledge by checking out the book “The Wisdom of Crowds” by James Surowiecki, which Ingo recommended.
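Ingo's M&M crowd-guess maps directly onto bagging: resample the collected guesses with replacement and average the averages. A minimal sketch (the guess values below are invented):

```python
import random

def bagged_estimate(data, n_samples=200, seed=0):
    """Bagging in miniature: average the estimates obtained from many
    bootstrap resamples (sampling with replacement) of the data."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_samples):
        resample = [rng.choice(data) for _ in data]   # bootstrap sample
        estimates.append(sum(resample) / len(resample))
    return sum(estimates) / len(estimates)

# Strangers' guesses of how many M&Ms are in the bowl.
guesses = [480, 510, 650, 430, 700, 560, 520, 610]
estimate = bagged_estimate(guesses)
```

With real classifiers, each bootstrap sample trains its own model and the models' predictions are averaged; boosting differs in that later models are trained to focus on the examples earlier models got wrong.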
Views: 923 RapidMiner, Inc.
Title: Dark Knowledge Abstract: A simple way to improve classification performance is to average the predictions of a large ensemble of different classifiers. This is great for winning competitions but requires too much computation at test time for practical applications such as speech recognition. In a widely ignored paper in 2006, Caruana and his collaborators showed that the knowledge in the ensemble could be transferred to a single, efficient model by training the single model to mimic the log probabilities of the ensemble average. This technique works because most of the knowledge in the learned ensemble is in the relative probabilities of extremely improbable wrong answers. For example, the ensemble may give a BMW a probability of one in a billion of being a garbage truck but this is still far greater (in the log domain) than its probability of being a carrot. This "dark knowledge", which is practically invisible in the class probabilities, defines a similarity metric over the classes that makes it much easier to learn a good classifier. I will describe a new variation of this technique called "distillation" and will show some surprising examples in which good classifiers over all of the classes can be learned from data in which some of the classes are entirely absent, provided the targets come from an ensemble that has been trained on all of the classes. I will also show how this technique can be used to improve a state-of-the-art acoustic model and will discuss its application to learning large sets of specialist models without overfitting. This is joint work with Oriol Vinyals and Jeff Dean. Bio: Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie-Mellon University. 
He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He is the director of the program on "Neural Computation and Adaptive Perception" which is funded by the Canadian Institute for Advanced Research. Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He has received honorary doctorates from the University of Edinburgh and the University of Sussex. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998), the ITAC/NSERC award for contributions to information technology (1992), the Killam prize for Engineering (2012) and the NSERC Herzberg Gold Medal (2010), which is Canada's top award in Science and Engineering. Geoffrey Hinton designs machine learning algorithms. His aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional datasets and to show that this is how the brain learns to see. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His current main interest is in unsupervised learning procedures for multi-layer neural networks with rich sensory input.
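The "dark knowledge" in the abstract above is easy to see numerically: raising the softmax temperature exposes the relative probabilities of the wrong answers that are nearly invisible at temperature 1. A sketch (the logits are invented, loosely following the BMW/garbage-truck/carrot example):

```python
import math

def softmax(logits, T=1.0):
    """Softmax at temperature T; higher T softens the distribution and
    exposes the relative odds of improbable classes."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical teacher logits for classes (BMW, garbage truck, carrot).
teacher_logits = [9.0, 1.0, -4.0]

hard = softmax(teacher_logits, T=1.0)  # near one-hot: wrong-class odds invisible
soft = softmax(teacher_logits, T=5.0)  # softened targets used to train a student
```

At T=1 the BMW class takes essentially all the mass; at T=5 the student can still see that "garbage truck" is far more probable than "carrot", which is exactly the similarity structure distillation transfers.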
Views: 26393 TTIC
Artur Fattakhov, Ilya Kibardin and Dmitriy Abulkhanov share their winning solution to Kaggle Camera Model Identification. In this competition, Kagglers were challenged to build an algorithm that identifies which camera model captured an image by using traces intrinsically left in the image. From this video you will learn: - How to get additional photo data - Training scheme with cyclic learning rate and pseudo-labeling - Snapshot ensembles, aka multi-checkpoint TTA - Training on small crops and fine-tuning on big crops to speed up training without loss in quality - Prediction equalization Slides: https://gh.mltrainings.ru/presentations/KibardinFattahovAbulkhanov_KaggleCamera_2018.pdf GitHub: https://github.com/ikibardin/kaggle-camera-model-identification Yandex hosts biweekly training sessions on machine learning. These meetings offer an opportunity for the participants of data analysis contests to meet, talk, and exchange experience. Each of these events is made up of a practical session and a report. The problems are taken from Kaggle and similar platforms. The reports are given by successful participants of recent contests, who share their strategies and talk about the techniques used by their competitors.
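The "cyclic learning rate + snapshot ensembles" recipe mentioned in the talk can be sketched as follows: anneal the learning rate to near zero on a cosine cycle, save a checkpoint at each minimum, then average the checkpoints' predictions. All values below are illustrative, not the team's actual settings:

```python
import math

def snapshot_lr(step, cycle_len, lr_max=0.1):
    """Cosine-annealed cyclic learning rate: decays to ~0 within each cycle,
    then restarts. A model snapshot is saved at the end of every cycle."""
    t = (step % cycle_len) / cycle_len        # position within current cycle
    return 0.5 * lr_max * (1 + math.cos(math.pi * t))

def ensemble_average(snapshot_probs):
    """Average per-class probabilities across saved snapshots
    (the 'multi-checkpoint TTA' combination step)."""
    n = len(snapshot_probs)
    return [sum(p[i] for p in snapshot_probs) / n
            for i in range(len(snapshot_probs[0]))]

lr_start = snapshot_lr(0, 100)     # full lr_max at the start of a cycle
lr_mid = snapshot_lr(50, 100)      # half of lr_max mid-cycle
avg = ensemble_average([[0.8, 0.2], [0.6, 0.4]])  # two snapshots, two classes
```

The restart at each cycle boundary kicks the optimizer out of its current minimum, so the saved snapshots are diverse enough to be worth ensembling at no extra training cost.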
Views: 1784 Тренировки ML
Evaluates multiple WEKA classifiers (with setups stored in a text file) on a variety of datasets located in a directory. Only datasets that match the required capabilities are used for evaluation.
Views: 399 TheAdamsWorkflow
Sergey Mushinskiy talks about the task of segmenting the background in car images (Kaggle Carvana Image Masking Challenge). Sergey and his team took 4th place in the competition. From this video you will learn: - Using neural networks to pseudo-label images - Working as a team with shared, limited computing resources - Whether it is ever useful to refine annotations by hand - Approaches of other participants: from two networks without averaging to complex ensembles of diverse architectures Slides: https://gh.mltrainings.ru/presentations/Mushinskiy_KaggleCarvanaImageMasking%20Challenge_2017.pdf You can find out about current competitions at http://mltrainings.ru/ News about new training sessions and videos is posted in these groups: VKontakte https://vk.com/mltrainings Facebook https://www.facebook.com/groups/1413405125598651/
Views: 2872 Тренировки ML
Machine Learning for Encrypted Malware Traffic Classification: Accounting for Noisy Labels and Non-Stationarity Blake Anderson (Cisco Systems, Inc.) David McGrew (Cisco Systems, Inc.) The application of machine learning for the detection of malicious network traffic has been well researched over the past several decades; it is particularly appealing when the traffic is encrypted because traditional pattern-matching approaches cannot be used. Unfortunately, the promise of machine learning has been slow to materialize in the network security domain. In this paper, we highlight two primary reasons why this is the case: inaccurate ground truth and a highly non-stationary data distribution. To demonstrate and understand the effect that these pitfalls have on popular machine learning algorithms, we design and carry out experiments that show how six common algorithms perform when confronted with real network data. With our experimental results, we identify the situations in which certain classes of algorithms underperform on the task of encrypted malware traffic classification. We offer concrete recommendations for practitioners given the real-world constraints outlined. From an algorithmic perspective, we find that the random forest ensemble method outperformed competing methods. More importantly, feature engineering was decisive; we found that iterating on the initial feature set, and including features suggested by domain experts, had a much greater impact on the performance of the classification system. For example, linear regression using the more expressive feature set easily outperformed the random forest method using a standard network traffic representation on all criteria considered. Our analysis is based on millions of TLS encrypted sessions collected over 12 months from a commercial malware sandbox and two geographically distinct, large enterprise networks. More on http://www.kdd.org/kdd2017/
Views: 1336 KDD2017 video
In this video, Billy Decker of StatSlice Systems shows you how to create and read a Classification Matrix in 5 minutes with the Microsoft Excel data mining add-in*. In this example, we will create a Classification Matrix based on a mining structure with all of its associated models that we have created previously. For the example, we will be using a tutorial spreadsheet that can be found on Codeplex at: https://dataminingaddins.codeplex.com/releases/view/87029 You will also need to attach the AdventureworksDW2012 data file to SQL Server which can be downloaded here: http://msftdbprodsamples.codeplex.com/releases/view/55330 *This tutorial assumes that you have already installed the data mining add-in for Excel and configured the add-in to be pointed at an instance of SQL Server with Analysis Services to which you have access rights.
Views: 4117 StatSlice Systems
With a growing interest in interpretability, there is an increasing need to characterize what exactly we mean by it and how to sensibly compare the interpretability of different approaches. In this talk, I'll start by discussing some research in interpretable machine learning from our group, and then broaden it out to discuss what interpretability is and when it is needed. I'll argue that our current desire for "interpretability" is as vague as asking for "good predictions" -- a desire that, while entirely reasonable, must be formalized into concrete needs such as high average test performance (perhaps held-out likelihood is a good metric) or some kind of robust performance (perhaps sensitivity or specificity are more appropriate metrics). The objective of this talk is to start a conversation to do the same for interpretability: I will suggest a taxonomy for interpretable models and their evaluation, and also highlight important open questions about the science of interpretability in machine learning.
Views: 2864 Talks at Google
This tutorial starts with an introduction to the dataset; all aspects of the dataset are discussed. Then the basic workings of RapidMiner are covered. Once the viewer is acquainted with the dataset and the basics of RapidMiner, the following operations are performed on it: k-NN classification, Naïve Bayes classification, decision trees, and association rules.
Views: 36888 RapidMinerTutorial
Good features are informative, independent, and simple. In this episode, we'll introduce these concepts by using a histogram to visualize a feature from a toy dataset. Updates: many thanks for the supportive feedback! I’d love to release these episodes faster, but I’m writing them as we go. That way, I can see what works and (more importantly) where I can improve. We've covered a lot of ground already, so next episode I'll review and reinforce concepts, introduce clearer syntax, spend more time on testing, and continue building intuition for supervised learning. I also realize some folks had dependency bugs with Graphviz (my fault!). Moving forward, I won't use any libraries not already installed by Anaconda or Tensorflow. Last: my code in this cast is similar to these great examples. You can use them to produce a more polished chart, if you like: http://matplotlib.org/examples/statistics/histogram_demo_multihist.html Follow https://twitter.com/random_forests for updates on new episodes! Subscribe to the Google Developers: http://goo.gl/mQyv5L - Subscribe to the brand new Firebase Channel: https://goo.gl/9giPHG And here's our playlist: https://goo.gl/KewA03
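The histogram idea from the episode can be reproduced without any plotting library; simply counting values per bin already shows how a feature is distributed and whether it separates the classes. A sketch (the dog-height numbers are invented stand-ins for the episode's toy dataset):

```python
def histogram(values, bins, lo, hi):
    """Bin values into equal-width bins over [lo, hi]; a quick textual
    stand-in for the matplotlib histogram used in the episode."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)  # clamp v == hi into last bin
        counts[i] += 1
    return counts

# Toy feature: heights (inches) for two dog breeds. Overlapping bins mean
# this one feature alone cannot fully separate the classes.
greyhounds = [28, 26, 25, 29, 27, 24]
labradors  = [24, 22, 25, 23, 24, 26]
print(histogram(greyhounds, 4, 20, 30))  # → [0, 1, 3, 2]
```

A feature whose two class histograms barely overlap is informative; one whose histograms coincide adds little and can be dropped.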
Views: 399405 Google Developers
Here, I used Shiny, an R package that makes it easy to build interactive web applications (apps) straight from R, to develop semi-automated machine learning models that predict rainfall over a region the user selects. The user can extract the predictand and predictors by drawing a polygon over a region, then select some or all of the machine learning algorithms provided. The provided models include linear regression models (GLM, SGLM), tree-based ensemble models (random forest and boosting), Support Vector Machines, Artificial Neural Networks, and other non-linear models (GAM, SGAM, MARS). Finally, the user can download a presentation of the results.
Views: 1787 Fisseha Berhane
2017 Machine Learning Course Project
Views: 111 Liuqing Li
In this episode we're going to train our own image classifier to detect Darth Vader images. The code for this repository is here: https://github.com/llSourcell/tensorflow_image_classifier I created a Slack channel for us, sign up here: https://wizards.herokuapp.com/ The Challenge: The challenge for this episode is to create your own Image Classifier that would be a useful tool for scientists. Just post a clone of this repo that includes your retrained Inception Model (label it output_graph.pb). If it's too big for GitHub, just upload it to DropBox and post the link in your GitHub README. I'm going to judge all of them and the winner gets a shoutout from me in a future video, as well as a signed copy of my book 'Decentralized Applications'. This CodeLab by Google is super useful in learning this stuff: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/?utm_campaign=chrome_series_machinelearning_063016&utm_source=gdev&utm_medium=yt-desc#0 This Tutorial by Google is also very useful: https://www.tensorflow.org/versions/r0.9/how_tos/image_retraining/index.html This is a good informational video: https://www.youtube.com/watch?v=VpDonQAKtE4 Really deep dive video on CNNs: https://www.youtube.com/watch?v=FmpDIaiMIeA I love you guys! Thanks for watching my videos and if you've found any of them useful I'd love your support on Patreon: https://www.patreon.com/user?u=3191693 Much more to come so please SUBSCRIBE, LIKE, and COMMENT! :) edit: Credit to Clarifai for the first conv net diagram in the video Follow me: Twitter: https://twitter.com/sirajraval Facebook: https://www.facebook.com/sirajology Instagram: https://www.instagram.com/sirajraval/ Signup for my newsletter for exciting updates in the field of AI: https://goo.gl/FZzJ5w Hit the Join button above to sign up to become a member of my channel for access to exclusive content!
Views: 646622 Siraj Raval
The fifth Pacific Northwest Regional Natural Language Processing Workshop will be held on Friday, April 27, 2018, in Redmond, WA. We accepted abstracts and papers on all aspects of natural language text and speech processing, computational linguistics, and human language technologies. As with the past four workshops, the goal of this one-day NW-NLP event is to provide a less formal setting in the Pacific Northwest to present research ideas, make new acquaintances, and learn about the breadth of exciting work currently being pursued in the region. Morning Talks Title: Semantic Matching Against a Corpus: New Applications and Methods Speakers: Lucy Lin, Scott Miles and Noah Smith. Title: Synthetic and Natural Noise Both Break Neural Machine Translation Speakers: Yonatan Belinkov and Yonatan Bisk. Title: Syntactic Scaffolds for Semantic Structures Speakers: Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer and Noah A. Smith. See more at https://www.microsoft.com/en-us/research/video/nw-nlp-2018-semantic-matching-against-a-corpus-new-applications-and-methods-synthetic-and-natural-noise-both-break-neural-machine-translation-and-syntactic-scaffolds-for-semantic-structures/
Views: 1186 Microsoft Research
Pavel Ostyakov and Alexey Kharlamov share their solution to Kaggle Cdiscount’s Image Classification Challenge. In this competition, Kagglers were challenged to build a model that classifies products based on their images. Pavel, Alexey and their team took 5th place. From this video you will learn: - How to decide which architectures to use - How to train networks faster - Problems with training a second layer of classifiers, and how to solve them - Errors made while solving the problem - Ideas from other teams: using several images per product, ensembling and using kNN Yandex hosts biweekly training sessions on machine learning. These meetings offer an opportunity for the participants of data analysis contests to meet, talk, and exchange experience. Each of these events is made up of a practical session and a report. The problems are taken from Kaggle and similar platforms. The reports are given by successful participants of recent contests, who share their strategies and talk about the techniques used by their competitors. On Dec. 9, we looked at Porto Seguro’s Safe Driver Prediction challenge on Kaggle.
Views: 2125 Тренировки ML
Recognizing Malware www.mlprague.com Slides: http://www.slideshare.net/mlprague/libor-mokovsk-recognizing-malware
Views: 177 Jiří Materna
Part of the CHI 2016 Human Centred Machine Learning Workshop. The full paper is here: Can Machine-Learning Apply to Musical Ensembles? Charles Martin and Henry Gardner http://www.doc.gold.ac.uk/~mas02mg/HCML2016/HCML2016_paper_5.pdf
Views: 45 Marco Gillies
This video is part of an online course, Intro to Machine Learning. Check out the course here: https://www.udacity.com/course/ud120. This course was designed as part of a program to help you and others become a Data Analyst. You can check out the full details of the program here: https://www.udacity.com/course/nd002.
Views: 10581 Udacity
Filmed at PyData London 2017 Description Mechanical ventilators are widely used in intensive care; they are sophisticated, but doctors do not have time to analyse the copious traces of data in a neonatal unit. We are providing an easy-to-interpret summary of this time-series data using visualisation and machine learning. This is an open source collaboration with the NHS; all results are open. Abstract Mechanical ventilators are widely used in intensive care. Even two decades ago they were primarily mechanical devices whose "only" task was to inflate the patient’s lung. Recently, however, they have become equipped with powerful computers that provide sophisticated ventilator modes. Data provided by the ventilators are almost never downloaded, stored or analysed. The data is complex, high frequency and requires time-intensive scrutiny to review. Doctors do not have time to analyse these traces in a neonatal unit. We are providing a simple and easy-to-interpret summary of 100Hz dual-channel ventilator data to improve the quality of care of young infants by time-poor staff. This involves signal processing, visualisation, building a gold standard and machine learning to segment breaths and summarise a baby's behaviour. This builds on our talk at PyDataLondon Meetup 30 in January 2017. Our goal is to open source the research so that others can benefit from the processes that we develop. We invite feedback from the audience to help improve our methods. Anyone interested in time series data, automated labeling, scikit-learn, Bokeh and medical applications will find this talk of interest. Both the highs and lows of our current approaches will be discussed. This is a collaboration between Dr Gusztav Belteki (Cambridge University Hospitals NHS Foundation Trust), Ian Ozsvald (ModelInsight) and Giles Weaver (ModelInsight). www.pydata.org PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States.
PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R. We aim to be an accessible, community-driven conference, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
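One simple baseline for the breath-segmentation step described in the ventilator talk is to split the flow signal at upward zero-crossings, where inspiration begins. This is an illustrative sketch, not the project's actual method, and the flow values are synthetic:

```python
def segment_breaths(flow, threshold=0.0):
    """Split a flow signal into breaths at upward threshold-crossings
    (a new breath starts when flow rises through the threshold)."""
    starts = [i for i in range(1, len(flow))
              if flow[i - 1] < threshold <= flow[i]]
    # Pair each breath start with the next start (or the end of the signal).
    return list(zip(starts, starts[1:] + [len(flow)]))

# Two synthetic breaths: positive flow = inspiration, negative = expiration.
flow = [-1, 2, 3, 1, -2, -3, -1, 2, 4, 1, -2, -1]
print(segment_breaths(flow))  # → [(1, 7), (7, 12)]
```

Real 100Hz ventilator data is far noisier, which is why the speakers combine signal processing, a hand-labelled gold standard, and machine learning rather than relying on a fixed threshold.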
Views: 1205 PyData
To be Posted Soon
Views: 68 Night Bloom
NYC Open Data Meetup, R package caret workshop
Views: 817 Vivian Zhang
Innovations in ways to compile, assess and act on the ever-increasing quantities of health data are changing the practice and policy of medicine. Statisticians Laura Hatfield and Sherri Rose will discuss recent methodological advances and the impact of big data on human health. Speakers: Laura Hatfield, PhD Associate Professor, Department of Health Care Policy, Harvard Medical School Sherri Rose, PhD Associate Professor, Department of Health Care Policy, Harvard Medical School Like Harvard Medical School on Facebook: https://goo.gl/4dwXyZ Follow on Twitter: https://goo.gl/GbrmQM Follow on Instagram: https://goo.gl/s1w4up Follow on LinkedIn: https://goo.gl/04vRgY Website: https://hms.harvard.edu/
Views: 4046 Harvard Medical School
An introduction to the basics of RapidMiner, accompanying the Data Mining course. I would also like to correct the explanation of the term "Data Science Platform" given in the clip. _ Further explanation of the Data Science Platform can be found at https://rapidminer.com/ - An explanation of Data Science - http://www.parinya.net/node/2251
Views: 1821 Kanda
Organizations today are collecting more information about what's going on in their environments than ever before, but manually sifting through all this data to find evil on your network is next to impossible. Increasingly, companies are turning to big data analytics and machine learning to detect security incidents. Most of these solutions are black-box products that cannot be easily tailored to the environments in which they run. Therefore, reliable detection of security incidents remains elusive, and there is a distinct lack of open source innovation. It doesn't have to be this way! Many security pros think nothing of whipping up a script to extract downloaded files from a PCAP, yet recoil in horror at the idea of writing their own machine learning tools. The "analytics barrier" is perceived to be very high, but getting started is much easier than you think! In this presentation, we’ll walk through the creation of a simple Python script that can learn to find malicious activity in your HTTP proxy logs. At the end of it all, you'll not only gain a useful tool to help you identify things that your IDS and SIEM might have missed, but you’ll also have the knowledge necessary to adapt that code to other uses as well. David J. Bianco is a Security Technologist at Sqrrl Data, Inc. Before coming to work as a Security Technologist and DFIR subject matter expert at Sqrrl, he led the hunt team at Mandiant, helping to develop and prototype innovative approaches to detect and respond to network attacks. Prior to that, he spent five years helping to build an intel-driven detection & response program for General Electric (GE-CIRT). He set detection strategies for a network of nearly 500 NSM sensors in over 160 countries and led response efforts for some of the company's most critical incidents. He stays active in the community, speaking and writing on the subjects of Incident Detection & Response, Threat Intelligence and Security Analytics.
He is also a member of the MLSec Project (http://www.mlsecproject.org). You can follow him on Twitter as @DavidJBianco or subscribe to his blog, "Enterprise Detection & Response" (http://detect-respond.blogspot.com). Chris McCubbin is the Director of Data Science and a co-founder of Sqrrl Data, Inc. His primary task is prototyping new designs and algorithms to extend the capabilities of the Sqrrl Enterprise cybersecurity solution. Prior to cofounding Sqrrl, he spent 2 years developing big-data analytics for the Department of Defense at TexelTek, Inc. and 10 years as Senior Professional Staff at the Johns Hopkins Applied Physics Laboratory, where he applied machine learning algorithms to swarming unmanned vehicle ensembles. He holds a Master's degree in Computer Science and Bachelor's degrees in Mathematics and Computer Science from the University of Maryland.
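As a taste of the kind of script the talk describes, here is a hypothetical feature extractor for proxy-log URLs. The feature set is invented for illustration (it is not from the talk), and a real tool would feed these features into a classifier trained on labeled logs:

```python
def features(url):
    """Lexical features commonly used to flag suspicious URLs in proxy logs.
    Purely illustrative; a production feature set would be richer."""
    digits = sum(c.isdigit() for c in url)
    host = url.split("/")[0]
    return {
        "length": len(url),                         # very long URLs stand out
        "digit_ratio": digits / max(len(url), 1),   # digit-heavy paths/queries
        "depth": url.count("/"),                    # path depth
        "has_ip": host.replace(".", "").isdigit(),  # raw-IP host, no domain
    }

row = features("203.0.113.9/a/b/update.php?id=91042")
```

Each log line becomes a feature vector; stacking those vectors row by row gives the training matrix for whatever model (Naive Bayes, logistic regression, trees) the script chooses.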
Views: 253 BSides Boston
Vancouver Data Blog http://vancouverdata.blogspot.com/
Views: 17232 el chief
Jonathan Martin will give you a whirlwind tour of the fundamental concepts and algorithms in machine learning, then explore a front-end application: selecting the "best" photos to feature on our photo sharing site. Don't expect mathematically laborious derivations of SVM kernels or the infinite VC dimension of Neural Nets, but we will gain enough intuition to make informed compromises (thanks to the No Free Lunch theorem, everything is a compromise) in our pursuit of aesthetically-intelligent machines. Find Jonathan on Twitter: @nybblr http://www.bignerdranch.com http://www.twitter.com/bignerdranch http://www.facebook.com/bignerdranch
Views: 85 Big Nerd Ranch
The objective of this course is to give you a holistic understanding of machine learning, covering theory, application, and the inner workings of supervised, unsupervised, and deep learning algorithms. In this series, we'll be covering linear regression, K Nearest Neighbors, Support Vector Machines (SVM), flat clustering, hierarchical clustering, and neural networks. For each major algorithm that we cover, we will discuss the high level intuitions of the algorithms and how they are logically meant to work. Next, we'll apply the algorithms in code using real world data sets along with a module, such as Scikit-Learn. Finally, we'll be diving into the inner workings of each of the algorithms by recreating them in code, from scratch, ourselves, including all of the math involved. This should give you a complete understanding of exactly how the algorithms work, how they can be tweaked, what their advantages are, and what their disadvantages are. In order to follow along with the series, I suggest you have at the very least a basic understanding of Python. If you do not, I suggest you at least follow the Python 3 Basics tutorial until the module installation with pip tutorial. If you have a basic understanding of Python, and the willingness to learn/ask questions, you will be able to follow along here with no issues. Most of the machine learning algorithms are actually quite simple, since they need to be in order to scale to large datasets. The math involved is typically linear algebra, but I will do my best to still explain all of it. If you are confused/lost/curious about anything, ask in the comments section on YouTube, the community here, or by emailing me. You will also need Scikit-Learn and Pandas installed, along with others that we'll grab along the way. Machine learning was defined in 1959 by Arthur Samuel as the "field of study that gives computers the ability to learn without being explicitly programmed."
This means imbuing knowledge to machines without hard-coding it. https://pythonprogramming.net/machine-learning-tutorial-python-introduction/ https://twitter.com/sentdex https://www.facebook.com/pythonprogra... https://plus.google.com/+sentdex
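In the spirit of the series' from-scratch approach, the first algorithm on the list, linear regression, fits in a few lines: compute the least-squares slope and intercept directly from the means. The toy data below is invented and chosen so the answer is exact:

```python
def best_fit(xs, ys):
    """Least-squares slope m and intercept b for y = m*x + b,
    computed from scratch (no libraries)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Points lying exactly on y = 2x + 3, so the fit recovers m=2, b=3.
m, b = best_fit([1, 2, 3, 4, 5], [5, 7, 9, 11, 13])
```

Recreating the formula like this, before reaching for Scikit-Learn's `LinearRegression`, is exactly the kind of exercise the course builds on.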
Views: 1521495 sentdex
GOVANIFY'S LAB #2 AI is always viewed as a huge black box, as something even its creators don't understand, and you are always shown the neural network part of the story, but what about showing you the complete story about AI? As always in this series of videos, through a silly project, I'm explaining to you how AI works and how state-of-the-art object detection works. I am also giving you resources so you can research on your own and learn more about the subject, trying to make this useful for both passive and active learning once again! I've tried to improve both the audio quality and video quality as best as I could, even though the editing is still sort of lazy imho. I welcome all sorts of criticism and help requests about this, so feel free to contact me! I sincerely hope you liked this video! -G ~~~~~ Resources: TensorFlow: http://tensorflow.org/ OpenAI: https://openai.com Twitter: https://twitter.com/GovanifY Blog: https://govanify.com Mail: [email protected] (PGP available on the "About" page on my website) Video made using KdenLive and Audacity on Gentoo Linux Microphone: AT2035 XLR Audio Interface: Focusrite Scarlett 2i2 Camera: The Frankenstein phone (aka what happens when police breaks your phone and you repair the screen as you can, OnePlus 3t) Sources: Background Music: Hopeless Romantic, FearofDark 2-Minute Neuroscience - Medulla Oblongata, Neuroscientifically Challenged 3D Animation - Brain with Neurons Firing, TomsTwinFilms 4K Mask RCNN COCO Object detection and segmentation #2, Karol Majek Baxter the Robot playing Tic-tac-toe Game 4, Michael Overstreet But what _is_ a Neural Network _ Chapter 1, deep learning, 3Blue1Brown Cleverbot vs Humans, Humor Vault FUNNIEST CAPTCHA FAILS, Random GO 2015 SPL Finals - Nao-Team HTWK vs.
B-Human 1st half, HTWK Robots Google's DeepMind AI Just Taught Itself To Walk, Tech Insider Hot Robot At SXSW Says She Wants To Destroy Humans _ The Pulse _ CNBC, CNBC How to use Tensorboard Embeddings Visualization with MNIST, Anuj shah MariFlow gets Gold in 50cc Mushroom Cup, SethBling Mind reading with brain scanners _ John-Dylan Haynes _ TEDxBerlin, TEDx Talks RASPUTIN - Vladimir Putin - Love The Way You Move (Funk Overload) @slocband, Pace Audioo Recurrent Neural Network Visualization, Michael Stone Walking around Shibuya - Tokyo - 渋谷を歩く- 4K Ultra HD, TokyoStreetView - Japan The Beautiful Tensorflow 14 Visualization Tensorboard 1 (neural network tutorials), 周莫烦 US Military Robot Dog will make a great companion for RoboCop, ArmedForcesUpdate Visualization of newly formed synapses with unprecedented resolution, Max Planck Florida Institute for Neuroscience I don't think I've used many other sources; if you think you should be credited or would rather not appear in this video, please contact me. Random thanks in no particular order: Ely (sykhro), Frafnir, Batiste Flo., Xaddgx, Vlad
Views: 840 Gravof Corp
Slides and blog posts available at https://elie.net/ai This talk explains why artificial intelligence (AI) is the key to building anti-abuse defenses that keep up with user expectations and combat increasingly sophisticated attacks. It covers the top 10 anti-abuse-specific challenges encountered while applying AI to abuse fighting, and how to overcome them. This video is a re-recording of the talk I gave at RSA 2018 on this subject.
Views: 5180 Elie Bursztein