标签归档:机器学习公开课

Andrew Ng 深度学习公开课系列第五门课程序列模型开课

Deep Learning Specialization on Coursera

Andrew Ng 深度学习课程系列第五门课程序列模型(Sequence Models)在跳票了几次之后,终于在1月的尾巴开课,这门和NLP比较相关的深度学习课程总算上线了。这门课程属于Coursera上的深度学习专项系列,这个系列共有5门课,目前终于完备,感兴趣的同学可以关注:Deep Learning Specialization

This course will teach you how to build models for natural language, audio, and other sequence data. Thanks to deep learning, sequence algorithms are working far better than just two years ago, and this is enabling numerous exciting applications in speech recognition, music synthesis, chatbots, machine translation, natural language understanding, and many others. You will: - Understand how to build and train Recurrent Neural Networks (RNNs), and commonly-used variants such as GRUs and LSTMs. - Be able to apply sequence models to natural language problems, including text synthesis. - Be able to apply sequence models to audio applications, including speech recognition and music synthesis. This is the fifth and final course of the Deep Learning Specialization.

这门课程主要面向自然语言、语音和其他序列数据的深度学习建模,将会学习循环神经网络(RNN)、GRU、LSTM等内容,以及如何将其应用到语音识别、机器翻译、自然语言理解等任务中去。个人认为这是目前互联网上最适合入门深度学习的系列课程了,Andrew Ng 老师善于讲课,另外用Python代码抽丝剥茧扣作业,课程学起来非常舒服,希望最后这门RNN课程也不负众望。参考我之前写的两篇小结:

Andrew Ng 深度学习课程小记

Andrew Ng (吴恩达) 深度学习课程小结

额外推荐: 深度学习课程资源整理

Coursera上机器学习课程(公开课)汇总推荐

Deep Learning Specialization on Coursera

Coursera上有很多机器学习课程,这里做个总结,因为机器学习相关的概念和应用很多,这里推荐的课程仅限于和机器学习直接相关的课程,虽然深度学习属于机器学习范畴,这里暂时也将其排除在外,后续会专门推出深度学习课程的系列推荐。

1. Andrew Ng 老师的 机器学习课程(Machine Learning)

机器学习入门首选课程,没有之一。这门课程从诞生之初就备受瞩目,据说全世界有数百万人通过这门课程入门机器学习。课程是入门级别的,对学习者的背景要求不高,Andrew Ng 老师讲解得又很通俗易懂,所以强烈推荐从这门课程开始走入机器学习。课程简介:

机器学习是一门研究在非特定编程条件下让计算机采取行动的学科。最近二十年,机器学习为我们带来了自动驾驶汽车、实用的语音识别、高效的网络搜索,让我们对人类基因的解读能力大大提高。当今机器学习技术已经非常普遍,您很可能在毫无察觉的情况下每天使用几十次。许多研究者还认为机器学习是人工智能(AI)取得进展的最有效途径。在本课程中,您将学习最高效的机器学习技术,了解如何使用这些技术,并自己动手实践这些技术。更重要的是,您不仅将学习理论知识,还将学习如何实践,如何快速使用强大的技术来解决新问题。最后,您将了解在硅谷企业如何在机器学习和AI领域进行创新。本课程将广泛介绍机器学习、数据挖掘和统计模式识别。相关主题包括:(i) 监督式学习(参数和非参数算法、支持向量机、核函数和神经网络)。(ii) 无监督学习(聚类、降维、推荐系统和深度学习)。(iii) 机器学习最佳实践(偏差/方差理论;机器学习和AI领域的创新)。课程将引用很多案例和应用,您还将学习如何在不同领域应用学习算法,例如智能机器人(感知和控制)、文本理解(网络搜索和垃圾邮件过滤)、计算机视觉、医学信息学、音频、数据库挖掘等领域。

这里有老版课程评论,非常值得参考推荐:Machine Learning

2. 台湾大学林轩田老师的 機器學習基石上 (Machine Learning Foundations)---Mathematical Foundations

如果有一定的基础,或者学完了Andrew Ng老师的机器学习课程,这门机器学习基石上-数学基础可以作为进阶课程。林老师早期推出的两门机器学习课程(机器学习基石、机器学习技法)口碑很好,难度也不低,现在重组为上、下两部分,非常值得期待:

Machine learning is the study that allows computers to adaptively improve their performance with experience accumulated from the data observed. Our two sister courses teach the most fundamental algorithmic, theoretical and practical tools that any user of machine learning needs to know. This first course of the two would focus more on mathematical tools, and the other course would focus more on algorithmic tools. [機器學習旨在讓電腦能由資料中累積的經驗來自我進步。我們的兩項姊妹課程將介紹各領域中的機器學習使用者都應該知道的基礎演算法、理論及實務工具。本課程將較為著重數學類的工具,而另一課程將較為著重方法類的工具。]

3. 台湾大学林轩田老师的 機器學習基石下 (Machine Learning Foundations)---Algorithmic Foundations

作为2的姊妹篇,这个机器学习基石下-算法基础 更注重机器学习算法相关知识:

Machine learning is the study that allows computers to adaptively improve their performance with experience accumulated from the data observed. Our two sister courses teach the most fundamental algorithmic, theoretical and practical tools that any user of machine learning needs to know. This second course of the two would focus more on algorithmic tools, and the other course would focus more on mathematical tools. [機器學習旨在讓電腦能由資料中累積的經驗來自我進步。我們的兩項姊妹課程將介紹各領域中的機器學習使用者都應該知道的基礎演算法、理論及實務工具。本課程將較為著重方法類的工具,而另一課程將較為著重數學類的工具。

可参考早期的老版本课程评论:機器學習基石 (Machine Learning Foundations) 機器學習技法 (Machine Learning Techniques)

4. 华盛顿大学的 "机器学习专项课程(Machine Learning Specialization)"

这个系列课程包含4门子课程,分别是 机器学习基础:案例研究 , 机器学习:回归 , 机器学习:分类, 机器学习:聚类与检索:

This Specialization from leading researchers at the University of Washington introduces you to the exciting, high-demand field of Machine Learning. Through a series of practical case studies, you will gain applied experience in major areas of Machine Learning including Prediction, Classification, Clustering, and Information Retrieval. You will learn to analyze large and complex datasets, create systems that adapt and improve over time, and build intelligent applications that can make predictions from data.

4.1 Machine Learning Foundations: A Case Study Approach(机器学习基础: 案例研究)

你是否好奇数据可以告诉你什么?你是否想对机器学习促进商业的核心方式有更深层次的理解?你是否想能同专家们讨论从回归、分类到深度学习和推荐系统的一切?在这门课上,你将通过一系列实际案例研究来获取机器学习的实践经验。

Do you have data and wonder what it can tell you? Do you need a deeper understanding of the core ways in which machine learning can improve your business? Do you want to be able to converse with specialists about anything from regression and classification to deep learning and recommender systems? In this course, you will get hands-on experience with machine learning from a series of practical case-studies. At the end of the first course you will have studied how to predict house prices based on house-level features, analyze sentiment from user reviews, retrieve documents of interest, recommend products, and search for images. Through hands-on practice with these use cases, you will be able to apply machine learning methods in a wide range of domains. This first course treats the machine learning method as a black box. Using this abstraction, you will focus on understanding tasks of interest, matching these tasks to machine learning tools, and assessing the quality of the output. In subsequent courses, you will delve into the components of this black box by examining models and algorithms. Together, these pieces form the machine learning pipeline, which you will use in developing intelligent applications. Learning Outcomes: By the end of this course, you will be able to: -Identify potential applications of machine learning in practice. -Describe the core differences in analyses enabled by regression, classification, and clustering. -Select the appropriate machine learning task for a potential application. -Apply regression, classification, clustering, retrieval, recommender systems, and deep learning. -Represent your data as features to serve as input to machine learning models. -Assess the model quality in terms of relevant error metrics for each task. -Utilize a dataset to fit a model to analyze new data. -Build an end-to-end application that uses machine learning at its core. -Implement these techniques in Python.

4.2 Machine Learning: Regression(机器学习: 回归问题)

这门课程关注机器学习里面的一个基本问题: 回归(Regression), 也通过案例研究(预测房价)的方式进行回归问题的学习,最终通过Python实现相关的机器学习算法。

Case Study - Predicting Housing Prices In our first case study, predicting house prices, you will create models that predict a continuous value (price) from input features (square footage, number of bedrooms and bathrooms,...). This is just one of the many places where regression can be applied. Other applications range from predicting health outcomes in medicine, stock prices in finance, and power usage in high-performance computing, to analyzing which regulators are important for gene expression. In this course, you will explore regularized linear regression models for the task of prediction and feature selection. You will be able to handle very large sets of features and select between models of various complexity. You will also analyze the impact of aspects of your data -- such as outliers -- on your selected models and predictions. To fit these models, you will implement optimization algorithms that scale to large datasets. Learning Outcomes: By the end of this course, you will be able to: -Describe the input and output of a regression model. -Compare and contrast bias and variance when modeling data. -Estimate model parameters using optimization algorithms. -Tune parameters with cross validation. -Analyze the performance of the model. -Describe the notion of sparsity and how LASSO leads to sparse solutions. -Deploy methods to select between models. -Exploit the model to form predictions. -Build a regression model to predict prices using a housing dataset. -Implement these techniques in Python.

4.3 Machine Learning: Classification(机器学习:分类问题)

这门课程关注机器学习里面的另一个基本问题: 分类(Classification), 通过两个案例研究进行学习:情感分析和贷款违约预测,最终通过Python实现相关的算法(也可以选择其他语言,但是强烈推荐Python)。

Case Studies: Analyzing Sentiment & Loan Default Prediction In our case study on analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information,...). In our second case study for this course, loan default prediction, you will tackle financial data, and predict when a loan is likely to be risky or safe for the bank. These tasks are an examples of classification, one of the most widely used areas of machine learning, with a broad array of applications, including ad targeting, spam detection, medical diagnosis and image classification. In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful techniques, which are most widely used in practice, including logistic regression, decision trees and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these technique on real-world, large-scale machine learning tasks. You will also address significant tasks you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier. This course is hands-on, action-packed, and full of visualizations and illustrations of how these techniques will behave on real data. We've also included optional content in every module, covering advanced topics for those who want to go even deeper! Learning Objectives: By the end of this course, you will be able to: -Describe the input and output of a classification model. -Tackle both binary and multiclass classification problems. -Implement a logistic regression model for large-scale classification. -Create a non-linear model using decision trees. -Improve the performance of any model using boosting. -Scale your methods with stochastic gradient ascent. -Describe the underlying decision boundaries. -Build a classification model to predict sentiment in a product review dataset. -Analyze financial data to predict loan defaults. -Use techniques for handling missing data. -Evaluate your models using precision-recall metrics. -Implement these techniques in Python (or in the language of your choice, though Python is highly recommended).

4.4 Machine Learning: Clustering & Retrieval(机器学习:聚类和检索)

这门课程关注的是机器学习里面的另外两个基本问题:聚类和检索,同样通过案例研究进行学习:相似文档查询,一个非常具有实际应用价值的问题:

Case Studies: Finding Similar Documents A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to a retrieve a new document, do you need to search through all other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover? In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. In this course, you will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models, such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce. Learning Outcomes: By the end of this course, you will be able to: -Create a document retrieval system using k-nearest neighbors. -Identify various similarity metrics for text data. -Reduce computations in k-nearest neighbor search by using KD-trees. -Produce approximate nearest neighbors using locality sensitive hashing. -Compare and contrast supervised and unsupervised learning tasks. -Cluster documents by topic using k-means. -Describe how to parallelize k-means using MapReduce. -Examine probabilistic clustering approaches using mixtures models. -Fit a mixture of Gaussian model using expectation maximization (EM). -Perform mixed membership modeling using latent Dirichlet allocation (LDA). -Describe the steps of a Gibbs sampler and how to use its output to draw inferences. -Compare and contrast initialization techniques for non-convex optimization objectives. -Implement these techniques in Python.

5. 密歇根大学的 Applied Machine Learning in Python(在Python中应用机器学习)

Python机器学习应用课程,这门课程主要聚焦在通过Python应用机器学习,包括机器学习和统计学的区别,机器学习工具包scikit-learn的介绍,有监督学习和无监督学习,数据泛化问题(例如交叉验证和过拟合)等。这门课程同时属于"Python数据科学应用专项课程系列(Applied Data Science with Python Specialization)"。

This course will introduce the learner to applied machine learning, focusing more on the techniques and methods than on the statistics behind these methods. The course will start with a discussion of how machine learning is different than descriptive statistics, and introduce the scikit learn toolkit. The issue of dimensionality of data will be discussed, and the task of clustering data, as well as evaluating those clusters, will be tackled. Supervised approaches for creating predictive models will be described, and learners will be able to apply the scikit learn predictive modelling methods while understanding process issues related to data generalizability (e.g. cross validation, overfitting). The course will end with a look at more advanced techniques, such as building ensembles, and practical limitations of predictive models. By the end of this course, students will be able to identify the difference between a supervised (classification) and unsupervised (clustering) technique, identify which technique they need to apply for a particular dataset and need, engineer features to meet that need, and write python code to carry out an analysis. This course should be taken after Introduction to Data Science in Python and Applied Plotting, Charting & Data Representation in Python and before Applied Text Mining in Python and Applied Social Analysis in Python.

6. 俄罗斯国立高等经济学院和Yandex联合推出的 高级机器学习专项课程系列(Advanced Machine Learning Specialization)

该系列授课语言为英语,包括深度学习,Kaggle数据科学竞赛,机器学习中的贝叶斯方法,强化学习,计算机视觉,自然语言处理等7门子课程,截止目前前3门课程已开,感兴趣的同学可以关注:

This specialization gives an introduction to deep learning, reinforcement learning, natural language understanding, computer vision and Bayesian methods. Top Kaggle machine learning practitioners and CERN scientists will share their experience of solving real-world problems and help you to fill the gaps between theory and practice. Upon completion of 7 courses you will be able to apply modern machine learning methods in enterprise and understand the caveats of real-world data and settings.

以下是和机器学习直接相关的子课程,其他这里略过:

6.3 Bayesian Methods for Machine Learning(面向机器学习的贝叶斯方法)

该课程关注机器学习中的贝叶斯方法,贝叶斯方法在很多领域都很有用,例如游戏开发和药物研发(drug discovery)。它们给很多机器学习算法赋予了“超能力”,例如处理缺失数据,从小数据集中提取大量有用的信息等。当贝叶斯方法被应用在深度学习中时,它可以让你将模型压缩100倍,并且自动帮你调参,节省你的时间和金钱。

Bayesian methods are used in lots of fields: from game development to drug discovery. They give superpowers to many machine learning algorithms: handling missing data, extracting much more information from small datasets. Bayesian methods also allow us to estimate uncertainty in predictions, which is a really desirable feature for fields like medicine. When Bayesian methods are applied to deep learning, it turns out that they allow you to compress your models 100 folds, and automatically tune hyperparametrs, saving your time and money. In six weeks we will discuss the basics of Bayesian methods: from how to define a probabilistic model to how to make predictions from it. We will see how one can fully automate this workflow and how to speed it up using some advanced techniques. We will also see applications of Bayesian methods to deep learning and how to generate new images with it. We will see how new drugs that cure severe diseases be found with Bayesian methods.

7. 约翰霍普金斯大学的 Practical Machine Learning(机器学习实战)

这门课程从数据科学的角度讲解机器学习实战,课程将会介绍机器学习的基础概念,譬如训练集、测试集、过拟合和错误率等,同时也会介绍机器学习的基本模型和算法,例如回归、分类、朴素贝叶斯以及随机森林。这门课程最终会覆盖一个完整的机器学习实战周期,包括数据采集、特征生成、机器学习算法应用以及结果评估等。这门机器学习实践课程同时属于约翰霍普金斯大学的 数据科学专项课程(Data Science Specialization)系列:

One of the most common tasks performed by data scientists and data analysts are prediction and machine learning. This course will cover the basic components of building and applying prediction functions with an emphasis on practical applications. The course will provide basic grounding in concepts such as training and tests sets, overfitting, and error rates. The course will also introduce a range of model based and algorithmic machine learning methods including regression, classification trees, Naive Bayes, and random forests. The course will cover the complete process of building prediction functions including data collection, feature creation, algorithms, and evaluation.

8. 卫斯理大学 Regression Modeling in Practice(回归模型实战)

这门课程关注的是数据分析以及机器学习领域的最重要的一个概念和工具:回归(模型)分析。这门课程使用SAS或者Python,从线性回归开始学习,到了解整个回归模型,以及应用回归模型进行数据分析:

This course focuses on one of the most important tools in your data analysis arsenal: regression analysis. Using either SAS or Python, you will begin with linear regression and then learn how to adapt when two variables do not present a clear linear relationship. You will examine multiple predictors of your outcome and be able to identify confounding variables, which can tell a more compelling story about your results. You will learn the assumptions underlying regression analysis, how to interpret regression coefficients, and how to use regression diagnostic plots and other tools to evaluate the quality of your regression model. Throughout the course, you will share with others the regression models you have developed and the stories they tell you.

这门课程同时属于卫斯理大学的 数据分析与解读专项课程系列(Data Analysis and Interpretation Specialization)

9. 卫斯理大学的 Machine Learning for Data Analysis(面向数据分析的机器学习)

这门课程关注数据分析里的机器学习,机器学习的过程是一个开发、测试和应用预测算法来实现目标的过程,这门课程以 Regression Modeling in Practice(回归模型实战) 为基础,介绍机器学习中的有监督学习概念,同时从基础的分类算法到决策树以及聚类都会覆盖。通过完成这门课程,你将会学习如何应用、测试和解读机器学习算法用来解决实际问题。

Are you interested in predicting future outcomes using your data? This course helps you do just that! Machine learning is the process of developing, testing, and applying predictive algorithms to achieve this goal. Make sure to familiarize yourself with course 3 of this specialization before diving into these machine learning concepts. Building on Course 3, which introduces students to integral supervised machine learning concepts, this course will provide an overview of many additional concepts, techniques, and algorithms in machine learning, from basic classification to decision trees and clustering. By completing this course, you will learn how to apply, test, and interpret machine learning algorithms as alternative methods for addressing your research questions.

这门课程同时属于卫斯理大学的 数据分析与解读专项课程系列(Data Analysis and Interpretation Specialization)

10. 加州大学圣地亚哥分校的 Machine Learning With Big Data(大数据机器学习)

这门课程关注大数据中的机器学习技术,将会介绍相关的机器学习算法和工具。通过这门课程,你可以学到:通过机器学习过程来设计和利用数据;将机器学习技术用于探索和准备数据来建模;识别机器学习问题的类型;通过广泛可用的开源工具来使用数据构建模型;在Spark中使用大规模机器学习算法分析大数据。

Want to make sense of the volumes of data you have collected? Need to incorporate data-driven decisions into your process? This course provides an overview of machine learning techniques to explore, analyze, and leverage data. You will be introduced to tools and algorithms you can use to create machine learning models that learn from data, and to scale those models up to big data problems. At the end of the course, you will be able to: • Design an approach to leverage data using the steps in the machine learning process. • Apply machine learning techniques to explore and prepare data for modeling. • Identify the type of machine learning problem in order to apply the appropriate set of techniques. • Construct models that learn from data using widely available open source tools. • Analyze big data problems using scalable machine learning algorithms on Spark.

这门课程同时属于 加州大学圣地亚哥分校的大数据专项课程系列(Big Data Specialization)

11. 俄罗斯搜索巨头Yandex推出的 Big Data Applications: Machine Learning at Scale(大数据应用:大规模机器学习)

机器学习正在改变世界。通过这门课程,你将会学习到:识别实战中需要用机器学习算法解决的问题;通过Spark MLLib构建、调参和应用线性模型;理解文本处理的方法;用决策树和Boosting(集成学习)方法解决机器学习问题;构建自己的推荐系统。

Machine learning is transforming the world around us. To become successful, you’d better know what kinds of problems can be solved with machine learning, and how they can be solved. Don’t know where to start? The answer is one button away. During this course you will: - Identify practical problems which can be solved with machine learning - Build, tune and apply linear models with Spark MLLib - Understand methods of text processing - Fit decision trees and boost them with ensemble learning - Construct your own recommender system. As a practical assignment, you will - build and apply linear models for classification and regression tasks; - learn how to work with texts; - automatically construct decision trees and improve their performance with ensemble learning; - finally, you will build your own recommender system! With these skills, you will be able to tackle many practical machine learning tasks. We provide the tools, you choose the place of application to make this world of machines more intelligent.

这门课程同时属于Yandex推出的 面向数据工程师的大数据专项课程系列(Big Data for Data Engineers Specialization)

注:本文首发“课程图谱博客”:http://blog.coursegraph.com ,同步发布到这里, 本文链接地址:http://blog.coursegraph.com/coursera上机器学习课程公开课汇总推荐 http://blog.coursegraph.com/?p=696

Andrew Ng 深度学习课程系列第四门课程卷积神经网络开课

Deep Learning Specialization on Coursera

Andrew Ng 深度学习课程系列第四门课程卷积神经网络(Convolutional Neural Networks)将于11月6日开课,不过课程资料已经放出,现在注册课程已经可以听课了,这门课程属于Coursera上的深度学习专项系列,这个系列有5门课,前三门已经开过好几轮,但是第4、第5门课程一直处于待定状态,新的一轮将于11月7号开始,感兴趣的同学可以关注:Deep Learning Specialization

This course will teach you how to build convolutional neural networks and apply it to image data. Thanks to deep learning, computer vision is working far better than just two years ago, and this is enabling numerous exciting applications ranging from safe autonomous driving, to accurate face recognition, to automatic reading of radiology images. You will: - Understand how to build a convolutional neural network, including recent variations such as residual networks. - Know how to apply convolutional networks to visual detection and recognition tasks. - Know to use neural style transfer to generate art. - Be able to apply these algorithms to a variety of image, video, and other 2D or 3D data. This is the fourth course of the Deep Learning Specialization.

个人认为这是目前互联网上最适合入门深度学习的课程系列了,Andrew Ng 老师善于讲课,另外用Python代码抽丝剥茧扣作业,课程学起来非常舒服,参考我之前写的两篇小结:

Andrew Ng 深度学习课程小记

Andrew Ng (吴恩达) 深度学习课程小结

额外推荐: 深度学习课程资源整理

Andrew Ng (吴恩达) 深度学习课程小结

Deep Learning Specialization on Coursera

Andrew Ng (吴恩达) 深度学习课程从宣布到现在大概有一个月了,我也在第一时间加入了这个Coursera上的深度学习系列课程,并且在完成第一门课“Neural Networks and Deep Learning(神经网络与深度学习)”的同时写了关于这门课程的一个小结:Andrew Ng 深度学习课程小记。之后我断断续续地完成了第二门深度学习课程“Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization”和第三门深度学习课程“Structuring Machine Learning Projects”的相关视频学习和作业练习,也拿到了课程证书。平心而论,对于一个有经验的工程师来说,这门课程的难度并不高,如果有时间,完全可以在一周内完成三门课程的相关学习工作。但是对于一个完全没有相关经验但是想入门深度学习的同学来说,可以预先补习一下Python和机器学习的相关知识,如果时间允许,建议先修一下 Coursera 的 Python 系列课程 Python for Everybody Specialization 和 Andrew Ng 本人的机器学习课程。

吴恩达这个深度学习系列课 (Deep Learning Specialization) 有5门子课程,截止目前,第四门"Convolutional Neural Networks" 和第五门"Sequence Models"还没有放出,不过上周四 Coursera 发了一封邮件给学习这门课程的用户:

Dear Learners,

We hope that you are enjoying Structuring Machine Learning Projects and your experience in the Deep Learning Specialization so far!

As we are nearing the one month anniversary of the Deep Learning Specialization, we wanted to thank you for your feedback on the courses thus far, and communicate our timelines for when the next courses of the Specialization will be available.

We plan to begin the first session of Course 4, Convolutional Neural Networks, in early October, with Course 5, Sequence Models, following soon after. We hope these estimated course launch timelines will help you manage your subscription as appropriate.

If you’d like to maintain full access to current course materials on Coursera’s platform for Courses 1-3, you should keep your subscription active. Note that if you only would like to access your Jupyter Notebooks, you can save these locally. If you do not need to access these materials on platform, you can cancel your subscription and restart your subscription later, when the new courses are ready. All of your course progress in the Specialization will be saved, regardless of your decision.

Thank you for your patience as we work on creating a great learning experience for this Specialization. We look forward to sharing this content with you in the coming weeks!

Happy Learning,

Coursera

大意是第四门深度学习课程 CNN(卷积神经网络)将于10月上旬推出,第五门深度学习课程 Sequence Models(序列模型, RNN等)将紧随其后。对于付费订阅的用户,如果你想随时随地获取当前3门深度学习课程的所有资料,最好保持订阅;如果你仅仅想访问 Jupyter Notebooks,也就是获取相关的编程作业,可以先本地保存它们。你也可以现在取消订阅这门课程,直到之后的课程开始后重新订阅,你的所有学习资料将会保存。所以一个比较省钱的办法,就是现在先离线保存相关课程资料,特别是编程作业等,然后取消订阅。当然对于视频,也可以离线下载,不过现在免费访问这门课程的视频有很多办法,譬如Coursera本身的非订阅模式观看视频,或者网易云课堂免费提供了这门课程的视频部分。不过我依然觉得,吴恩达这门深度学习课程,如果仅仅观看视频,最大的功效不过30%,这门课程的精华就在它的练习和编程作业部分,特别是编程作业,非常值得揣摩,花钱很值。

再次回到 Andrew Ng 这门深度学习课程的子课程上,第二门课程是“Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization”,共有三周的内容,包括深度神经网络的调参、正则化方法和优化算法的讲解:

第一周课程是关于深度学习的实践经验 (Practical aspects of Deep Learning),包括训练集/验证集/测试集的划分,Bias 和 Variance 的问题,神经网络中解决过拟合 (Overfitting) 的 Regularization 和 Dropout 方法,以及 Gradient Check 等:
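顺着这一周的内容,这里补一段极简的 numpy 示意代码(并非课程作业的原始代码,函数名和 keep_prob 等取值都是本文为演示而假设的),大致展示 inverted dropout 的前向传播和 Gradient Check 的双侧差分思路:

import numpy as np

def dropout_forward(a, keep_prob=0.8, rng=None):
    # inverted dropout 示意:随机屏蔽一部分激活值,再除以 keep_prob 以保持期望不变(仅训练阶段使用)
    rng = np.random.default_rng(0) if rng is None else rng
    mask = (rng.random(a.shape) < keep_prob).astype(a.dtype)
    return a * mask / keep_prob, mask

def grad_check(f, x, analytic_grad, eps=1e-7):
    # 梯度检验示意:用双侧差分近似导数,再和解析梯度比较相对误差
    numeric_grad = (f(x + eps) - f(x - eps)) / (2 * eps)
    denom = max(abs(numeric_grad) + abs(analytic_grad), 1e-12)
    return numeric_grad, abs(numeric_grad - analytic_grad) / denom

a_drop, _ = dropout_forward(np.ones((4, 3)), keep_prob=0.8)
print(a_drop)                                  # 被保留的元素放大为 1/0.8 = 1.25,其余为 0
print(grad_check(lambda w: w ** 2, 3.0, 6.0))  # 对 f(w)=w^2 在 w=3 处检验,相对误差应非常小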


这周课程的强大之处依然在编程作业上,有三个编程作业需要完成:

完成编程作业的过程也是一个很好的回顾课程视频的过程,可以把一些听课中容易忽略的点补上。

第二周深度学习课程是关于神经网络中用到的优化算法 (Optimization algorithms),包括 Mini-batch gradient descent、RMSprop、Adam 等优化算法:

编程作业也很棒,在老师循循善诱的预设代码下一步一步完成了几个优化算法。
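按照视频里给出的公式,也可以用 numpy 写一个 Adam 单步更新的极简示意(这不是课程作业的代码,变量名、学习率等超参数均为本文随手假设,仅用于说明一阶矩、二阶矩和偏差修正的组合方式):

import numpy as np

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam 单步更新示意:Momentum(一阶矩)+ RMSprop(二阶矩),并做偏差修正
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# 用法示意:最小化 f(w) = w^2
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2 * w
    w, m, v = adam_update(w, grad, m, v, t, lr=0.1)
print(w)  # w 应收敛到 0 附近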

第三周深度学习课程主要关于神经网络中的超参数调优和深度学习框架问题(Hyperparameter tuning, Batch Normalization and Programming Frameworks),顺带讲了一下多分类问题和 Softmax regression,特别是最后一个视频简单介绍了一下 TensorFlow,并且编程作业也和TensorFlow相关,对于还没有学习过TensorFlow的同学,刚好是一个入门学习机会,视频介绍和作业设计都很棒:
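课程里提到的 Softmax 本身用 numpy 几行就能写出来,下面是一个数值稳定版本的示意(与课程的 TensorFlow 作业无关,输入数据是本文假设的):

import numpy as np

def softmax(z):
    # 数值稳定的 softmax:先减去每行的最大值,再做指数归一化
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
print(softmax(logits))        # 三个类别的概率
print(softmax(logits).sum())  # 概率之和为 1.0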


第三门深度学习课程“Structuring Machine Learning Projects”更简单一些,只有两周课程,只有 Quiz,没有编程作业,算是 Andrew Ng 老师关于深度学习或者机器学习项目方法论的一个总结:

第一周课程主要关于机器学习的策略、可量化的项目目标、训练集/开发集/测试集的数据分布,以及和人工评测指标的对比等:


课程虽然没有提供编程作业,但是Quiz练习是一个关于城市鸟类识别的机器学习案例研究,通过这个案例串联15个问题,对应着课程视频中的相关经验,值得玩味。

第二周课程的学习目标是:

“Understand what multi-task learning and transfer learning are
Recognize bias, variance and data-mismatch by looking at the performances of your algorithm on train/dev/test sets”

主要讲解了错误分析(Error Analysis), 不匹配训练数据和开发/测试集数据的处理(Mismatched training and dev/test set),机器学习中的迁移学习(Transfer learning)和多任务学习(Multi-task learning),以及端到端深度学习(End-to-end deep learning):

这周课程的选择题作业仍然是一个案例研究,关于无人驾驶的:Autonomous driving (case study),还是用15个问题串起视频中的知识点,体验依然很棒。

最后,关于 Andrew Ng (吴恩达) 深度学习课程系列,Coursera上又启动了新一轮课程周期,9月12号开课,对于错过了上一轮学习的同学,现在加入新的一轮课程刚刚好。相信 Andrew Ng 深度学习课程会成为他机器学习课程之后 Coursera 上的又一个王牌课程,会不断滚动开课,所以任何时候加入都不会晚。另外,如果已经加入了这门深度学习课程,建议在学习的过程中及时保存资料,我都是一边学习一边保存这门深度学习课程的相关资料的,包括下载课程视频用于离线观看,完成Quiz和编程作业之后也会保存一份到电脑上,方便随时查看。

索引:Andrew Ng 深度学习课程小记

注:原创文章,转载请注明出处及保留链接“我爱自然语言处理”:

本文链接地址:Andrew Ng (吴恩达) 深度学习课程小结 /?p=9761

Andrew Ng 深度学习课程小记

Deep Learning Specialization on Coursera

2011年秋季,Andrew Ng 推出了面向入门者的MOOC雏形课程机器学习: Machine Learning,随后在2012年4月,Andrew Ng 在Coursera上推出了改进版的Machine Learning(机器学习)公开课: Andrew Ng's Machine Learning: Master the Fundamentals,这也同时宣告了Coursera平台的诞生。当时我也是第一时间加入了这门课程,并为这门课程写了一些笔记:Coursera公开课笔记: 斯坦福大学机器学习。同时也是受这股MOOC浪潮的驱使,建立了“课程图谱”,因此结识了不少公开课爱好者和MOOC大神。而在此之前,Andrew Ng 在斯坦福大学的授课视频“机器学习”也流传甚广,但是这门面向斯坦福大学学生的课程难度相对较高。直到2012年Coursera、Udacity等MOOC平台的建立,把课程视频、作业交互、编程练习有机结合在一起,才产生了更有生命力的MOOC课程。Andrew Ng 在为新课程深度学习写的宣传文章“deeplearning.ai: Announcing new Deep Learning courses on Coursera”里提到,这门机器学习课程自从开办以来,大约有180多万学生学习过,这是一个惊人的数字。

回到这个深度学习系列课:Deep Learning Specialization,该课程正式开课是8月15号,但是在此之前几天已经开放了,加入后可以免费学习7天,之后开始按月费49美元收取,直到取消这个系列的订阅为止。正式加入的好处是,除了课程视频,还可以在Coursera平台上做题和提交编程作业,得到实时反馈,如果通过的话,还可以拿到相应的课程证书。我在上周六加入了这门以 deeplearning.ai 的名义推出的Deep Learning(深度学习)系列课,并且利用业余时间完成了第一门课“Neural Networks and Deep Learning(神经网络与深度学习)”的相关课程,包括视频观看和交互练习以及编程作业,体验很不错。自从Coursera迁移到新平台后,已经很久没有上过相关的公开课了,这次要不是Andrew Ng 离开百度后重现MOOC江湖,点燃了内心久违的MOOC情结,我大概也不会这么认真地去上公开课了。

具体到该深度学习课程的组织上,Andrew Ng 已经把这门课程的门槛降到很低,和他的机器学习课程类似,这是一个面向AI初学者的深度学习系列课程:

If you want to break into AI, this Specialization will help you do so. Deep Learning is one of the most highly sought after skills in tech. We will help you become good at Deep Learning.

In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.

You will also hear from many top leaders in Deep Learning, who will share with you their personal stories and give you career advice.

AI is transforming multiple industries. After finishing this specialization, you will likely find creative ways to apply it to your work.

We will help you master Deep Learning, understand how to apply it, and build a career in AI.

虽然面向初学者,但是这门课程也会讲解很多实践中的工程经验,所以这门课程既适合没有经验的同学从基础学起,也适合有一定基础的同学查遗补漏:

从实际听课的效果上来看,如果用一个字来总结效果,那就是“值”,花钱也值。该系列第一门课是“Neural Networks and Deep Learning(神经网络与深度学习)” 分为4个部分:

1. Introduction to deep learning
2. Neural Networks Basics
3. Shallow neural networks
4. Deep Neural Networks

第一周关于“深度学习的介绍”的内容非常简单,也没有编程作业,只有简单的选择题练习,主要是对深度学习的宏观介绍和课程的相关介绍:

第二周关于“神经网络基础”的课程从二分类讲起,到逻辑回归,再到梯度下降,再到用计算图(computation graph)求导,如果之前学过Andrew Ng的“Machine Learning(机器学习)”公开课,除了Computation Graph,其他应该都不会陌生:

第二周课程同时也提供了编程作业所需要的基础部分视频课程:Python and Vectorization。这门课程的编程作业使用Python语言,并且提供线上 Jupyter Notebook 编程环境完成作业,无需线下编程验证提交,非常方便。这也和之前机器学习课程的编程作业有了很大区别,之前那门课程使用Octave语言(类似Matlab的GNU Octave),并且是线下编程测试后提交给服务器验证。这次课程线上完成编程作业的感觉是非常棒的,这个稍后再说。另外就是强调数据处理时的 Vectorization(向量化/矢量化),并且重度使用 Numpy 工具包, 如果没有特别提示,请尽量避免使用 "for loop":
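这里用一个小例子说明“尽量避免 for loop”的意思(numpy 示意代码,数据、维度和变量名都是本文随手假设的,与课程作业无关):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 3))   # 假设 1000 个样本,每个样本 3 维特征
w = rng.random(3)
b = 0.5

# 写法一:for loop,逐个样本计算 z = w·x + b
z_loop = np.zeros(1000)
for i in range(1000):
    z_loop[i] = np.dot(w, X[i]) + b

# 写法二:向量化,一行完成同样的计算,通常快得多
z_vec = X.dot(w) + b

print(np.allclose(z_loop, z_vec))  # True,两种写法结果一致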

当然,这部分最赞的是编程作业的设计了,首先提供了一个热身可选的编程作业:Python Basics with numpy (optional),然后是本部分的相关作业:Logistic Regression with a Neural Network mindset。每部分先有一个引导将这部分的目标讲清楚,然后点击“Open Notebook”开始作业,Notebook中很多相关代码老师已经精心设置好,对于学生来说,只需要在相应提示的部分写上几行关键代码(主要还是Vectorization),运行后有相应的output,如果output和里面提示的期望输出一致的话,就可以点击保存继续下一题了,非常方便,完成作业后就可以提交了,这部分难度不大:

第三周关于“浅层神经网络”的课程,我最关心的其实是反向传播算法的讲解,不过在课程视频中这部分被列为了可选项,并且实话实说,Andrew Ng 关于这部分的讲解并不能让我满意,所以如果看完这一部分后对于反向传播算法还不是很清楚的话,可以脑补一下《反向传播算法入门资源索引》中提到的相关文章。不过瑕不掩瑜,老师关于其他部分的讲解依然很棒,包括激活函数的选择,为什么需要一个非线性的激活函数,以及神经网络中的初始化参数选择等问题:
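“为什么需要非线性激活函数”这一点可以用几行 numpy 直观验证:如果没有非线性,叠加再多线性层也只等价于一个线性变换(示意代码,矩阵维度和数值均为本文假设):

import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 1))                         # 一个 4 维输入
W1, b1 = rng.random((3, 4)), rng.random((3, 1))
W2, b2 = rng.random((2, 3)), rng.random((2, 1))

# 没有激活函数:两层线性变换可以合并成一层
a_linear = W2 @ (W1 @ x + b1) + b2
W_eq, b_eq = W2 @ W1, W2 @ b1 + b2
print(np.allclose(a_linear, W_eq @ x + b_eq))  # True,网络退化为线性模型

# 加上 tanh 之类的非线性激活后,上述合并不再成立,网络才有拟合非线性函数的能力
a_nonlinear = W2 @ np.tanh(W1 @ x + b1) + b2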

虽然视频中留有遗憾,但是编程作业堪称完美,在Python Notebook中老师用代入模式系统的过了一遍神经网络中的基本概念,堪称“手把手教你用Python写一个神经网络”的经典案例:

update: 这个周六(2017.08.20)完成了第四周课程和相关作业,也达到了拿证书的要求,不过需要上传相关证件验证ID,暂时还没有操作。下面是关于第四周课程的一点补充。

第四周课程关于“深度神经网络(Deep Neural Networks)”,主要是多层神经网络的相关概念,有了第三周课程的基础,第四周课程视频相对来说比较轻松:

不过本周课程提供了两个编程作业,一个是一步一步完成深度神经网络,另一个是深度神经网络的应用,依然很棒:

完成最后的编程作业就可以拿到相应的分数并获得课程证书了,不过获得证书前需要上传自己的相关证件完成身份验证,这个步骤我还没有操作,所以是等待状态:

这是我学完Andrew Ng这个深度学习系列课程第一门课程“Neural Networks and Deep Learning(神经网络与深度学习)” 的体验,如果用几个字来总结这个深度学习系列课程,依然是:值、很值、非常值。如果你是完全的人工智能的门外汉或者入门者,那么建议你先修一下Andrew Ng的 Machine Learning(机器学习)公开课 ,用来过渡和理解相关概念,当然这个是可选项;如果你是一个业内的从业者或者深度学习工具的使用者,那么这门课程很适合给你扫清很多迷雾;当然,如果你对机器学习和深度学习了如指掌,完全可以对这门课程一笑了之。

关于是否付费学习这门深度学习课程,个人觉得很值,相对于国内各色收费的人工智能课程,这门课程49美元的月费绝对物超所值,只要你有时间,你完全可以在一个月内学完所有课程。特别是其提供的作业练习平台,在尝试了几周的编程作业后,我已经迫不及待地想进入到其他周的课程和编程作业了。

最后再次附上这门课程的链接,正如这门课程的目标所示:掌握深度学习、拥抱AI,现在就加入吧:Deep Learning Specialization: Master Deep Learning, and Break into AI

如何计算两个文档的相似度(三)

Deep Learning Specialization on Coursera

上一节我们用了一个简单的例子过了一遍gensim的用法,这一节我们将用课程图谱的实际数据来做一些验证和改进,同时会用到NLTK来对课程的英文数据做预处理。

三、课程图谱相关实验

1、数据准备
为了方便大家一起来做验证,这里准备了一份Coursera的课程数据,可以在这里下载:coursera_corpus(百度网盘链接: http://t.cn/RhjgPkv,密码: oppc),总共379个课程,每行包括3部分内容:课程名\t课程简介\t课程详情,已经清除了其中的html tag,下面所示的例子仅仅是其中的课程名:

Writing II: Rhetorical Composing
Genetics and Society: A Course for Educators
General Game Playing
Genes and the Human Condition (From Behavior to Biotechnology)
A Brief History of Humankind
New Models of Business in Society
Analyse Numérique pour Ingénieurs
Evolution: A Course for Educators
Coding the Matrix: Linear Algebra through Computer Science Applications
The Dynamic Earth: A Course for Educators
...

好了,首先让我们打开Python, 加载这份数据:

>>> courses = [line.strip() for line in file('coursera_corpus')]
>>> courses_name = [course.split('\t')[0] for course in courses]
>>> print courses_name[0:10]
['Writing II: Rhetorical Composing', 'Genetics and Society: A Course for Educators', 'General Game Playing', 'Genes and the Human Condition (From Behavior to Biotechnology)', 'A Brief History of Humankind', 'New Models of Business in Society', 'Analyse Num\xc3\xa9rique pour Ing\xc3\xa9nieurs', 'Evolution: A Course for Educators', 'Coding the Matrix: Linear Algebra through Computer Science Applications', 'The Dynamic Earth: A Course for Educators']

2、引入NLTK
NLTK是著名的Python自然语言处理工具包,但是主要针对的是英文处理,不过课程图谱目前处理的课程数据主要是英文,因此也足够了。NLTK配套有文档,有语料库,有书,甚至国内有同学无私地翻译了这本书: 用Python进行自然语言处理,有时候不得不感慨:做英文自然语言处理的同学真幸福。

首先仍然是安装NLTK,在NLTK的主页详细介绍了如何在Mac, Linux和Windows下安装NLTK:http://nltk.org/install.html ,最主要的还是要先装好依赖NumPy和PyYAML,其他没什么问题。安装NLTK完毕,可以import nltk测试一下,如果没有问题,还有一件非常重要的工作要做,下载NLTK官方提供的相关语料:

>>> import nltk
>>> nltk.download()

这个时候会弹出一个图形界面,会显示两份数据供你下载,分别是all-corpora和book,最好都选定下载了,这个过程需要一段时间,语料下载完毕后,NLTK在你的电脑上才真正达到可用的状态,可以测试一下布朗语料库

>>> from nltk.corpus import brown
>>> brown.readme()
'BROWN CORPUS\n\nA Standard Corpus of Present-Day Edited American\nEnglish, for use with Digital Computers.\n\nby W. N. Francis and H. Kucera (1964)\nDepartment of Linguistics, Brown University\nProvidence, Rhode Island, USA\n\nRevised 1971, Revised and Amplified 1979\n\nhttp://www.hit.uib.no/icame/brown/bcm.html\n\nDistributed with the permission of the copyright holder,\nredistribution permitted.\n'
>>> brown.words()[0:10]
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of']
>>> brown.tagged_words()[0:10]
[('The', 'AT'), ('Fulton', 'NP-TL'), ('County', 'NN-TL'), ('Grand', 'JJ-TL'), ('Jury', 'NN-TL'), ('said', 'VBD'), ('Friday', 'NR'), ('an', 'AT'), ('investigation', 'NN'), ('of', 'IN')]
>>> len(brown.words())
1161192

现在我们就来处理刚才的课程数据,如果按此前的方法仅仅对文档的单词小写化的话,我们将得到如下的结果:

>>> texts_lower = [[word for word in document.lower().split()] for document in courses]
>>> print texts_lower[0]
['writing', 'ii:', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'you', 'in', 'a', 'series', 'of', 'interactive', 'reading,', 'research,', 'and', 'composing', 'activities', 'along', 'with', 'assignments', 'designed', 'to', 'help', 'you', 'become', 'more', 'effective', 'consumers', 'and', 'producers', 'of', 'alphabetic,', 'visual', 'and', 'multimodal', 'texts.', 'join', 'us', 'to', 'become', 'more', 'effective', 'writers...', 'and', 'better', 'citizens.', 'rhetorical', 'composing', 'is', 'a', 'course', 'where', 'writers', 'exchange', 'words,', 'ideas,', 'talents,', 'and', 'support.', 'you', 'will', 'be', 'introduced', 'to', 'a', ...

注意其中很多标点符号和单词是没有分离的,所以我们引入nltk的word_tokenize函数,并处理相应的数据:

>>> from nltk.tokenize import word_tokenize
>>> texts_tokenized = [[word.lower() for word in word_tokenize(document.decode('utf-8'))] for document in courses]
>>> print texts_tokenized[0]
['writing', 'ii', ':', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'you', 'in', 'a', 'series', 'of', 'interactive', 'reading', ',', 'research', ',', 'and', 'composing', 'activities', 'along', 'with', 'assignments', 'designed', 'to', 'help', 'you', 'become', 'more', 'effective', 'consumers', 'and', 'producers', 'of', 'alphabetic', ',', 'visual', 'and', 'multimodal', 'texts.', 'join', 'us', 'to', 'become', 'more', 'effective', 'writers', '...', 'and', 'better', 'citizens.', 'rhetorical', 'composing', 'is', 'a', 'course', 'where', 'writers', 'exchange', 'words', ',', 'ideas', ',', 'talents', ',', 'and', 'support.', 'you', 'will', 'be', 'introduced', 'to', 'a', ...

对课程的英文数据进行tokenize之后,我们需要去停用词,幸好NLTK提供了一份英文停用词数据:

>>> from nltk.corpus import stopwords
>>> english_stopwords = stopwords.words('english')
>>> print english_stopwords
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now']
>>> len(english_stopwords)
127

总计127个停用词,我们首先过滤课程语料中的停用词:
>>> texts_filtered_stopwords = [[word for word in document if not word in english_stopwords] for document in texts_tokenized]
>>> print texts_filtered_stopwords[0]
['writing', 'ii', ':', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'series', 'interactive', 'reading', ',', 'research', ',', 'composing', 'activities', 'along', 'assignments', 'designed', 'help', 'become', 'effective', 'consumers', 'producers', 'alphabetic', ',', 'visual', 'multimodal', 'texts.', 'join', 'us', 'become', 'effective', 'writers', '...', 'better', 'citizens.', 'rhetorical', 'composing', 'course', 'writers', 'exchange', 'words', ',', 'ideas', ',', 'talents', ',', 'support.', 'introduced', 'variety', 'rhetorical', 'conceptsxe2x80x94that', ',', 'ideas', 'techniques', 'inform', 'persuade', 'audiencesxe2x80x94that', 'help', 'become', 'effective', 'consumer', 'producer', 'written', ',', 'visual', ',', 'multimodal', 'texts.', 'class', 'includes', 'short', 'videos', ',', 'demonstrations', ',', 'activities.', 'envision', 'rhetorical', 'composing', 'learning', 'community', 'includes', 'enrolled', 'course', 'instructors.', 'bring', 'expertise', 'writing', ',', 'rhetoric', 'course', 'design', ',', 'designed', 'assignments', 'course', 'infrastructure', 'help', 'share', 'experiences', 'writers', ',', 'students', ',', 'professionals', 'us.', 'collaborations', 'facilitated', 'wex', ',', 'writers', 'exchange', ',', 'place', 'exchange', 'work', 'feedback']

停用词被过滤了,不过发现标点符号还在,这个好办,我们首先定义一个标点符号list:
>>> english_punctuations = [',', '.', ':', ';', '?', '(', ')', '[', ']', '&', '!', '*', '@', '#', '$', '%']

然后过滤这些标点符号:
>>> texts_filtered = [[word for word in document if not word in english_punctuations] for document in texts_filtered_stopwords]
>>> print texts_filtered[0]
['writing', 'ii', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'series', 'interactive', 'reading', 'research', 'composing', 'activities', 'along', 'assignments', 'designed', 'help', 'become', 'effective', 'consumers', 'producers', 'alphabetic', 'visual', 'multimodal', 'texts.', 'join', 'us', 'become', 'effective', 'writers', '...', 'better', 'citizens.', 'rhetorical', 'composing', 'course', 'writers', 'exchange', 'words', 'ideas', 'talents', 'support.', 'introduced', 'variety', 'rhetorical', 'conceptsxe2x80x94that', 'ideas', 'techniques', 'inform', 'persuade', 'audiencesxe2x80x94that', 'help', 'become', 'effective', 'consumer', 'producer', 'written', 'visual', 'multimodal', 'texts.', 'class', 'includes', 'short', 'videos', 'demonstrations', 'activities.', 'envision', 'rhetorical', 'composing', 'learning', 'community', 'includes', 'enrolled', 'course', 'instructors.', 'bring', 'expertise', 'writing', 'rhetoric', 'course', 'design', 'designed', 'assignments', 'course', 'infrastructure', 'help', 'share', 'experiences', 'writers', 'students', 'professionals', 'us.', 'collaborations', 'facilitated', 'wex', 'writers', 'exchange', 'place', 'exchange', 'work', 'feedback']

更进一步,我们对这些英文单词词干化(Stemming),NLTK提供了好几个相关工具接口可供选择,具体参考这个页面: http://nltk.org/api/nltk.stem.html , 可选的工具包括Lancaster Stemmer, Porter Stemmer等知名的英文Stemmer。这里我们使用LancasterStemmer:

>>> from nltk.stem.lancaster import LancasterStemmer
>>> st = LancasterStemmer()
>>> st.stem('stemmed')
'stem'
>>> st.stem('stemming')
'stem'
>>> st.stem('stemmer')
'stem'
>>> st.stem('running')
'run'
>>> st.stem('maximum')
'maxim'
>>> st.stem('presumably')
'presum'

让我们调用这个接口来处理上面的课程数据:
>>> texts_stemmed = [[st.stem(word) for word in document] for document in texts_filtered]
>>> print texts_stemmed[0]
['writ', 'ii', 'rhet', 'compos', 'rhet', 'compos', 'eng', 'sery', 'interact', 'read', 'research', 'compos', 'act', 'along', 'assign', 'design', 'help', 'becom', 'effect', 'consum', 'produc', 'alphabet', 'vis', 'multimod', 'texts.', 'join', 'us', 'becom', 'effect', 'writ', '...', 'bet', 'citizens.', 'rhet', 'compos', 'cours', 'writ', 'exchang', 'word', 'idea', 'tal', 'support.', 'introduc', 'vary', 'rhet', 'conceptsxe2x80x94that', 'idea', 'techn', 'inform', 'persuad', 'audiencesxe2x80x94that', 'help', 'becom', 'effect', 'consum', 'produc', 'writ', 'vis', 'multimod', 'texts.', 'class', 'includ', 'short', 'video', 'demonst', 'activities.', 'envid', 'rhet', 'compos', 'learn', 'commun', 'includ', 'enrol', 'cours', 'instructors.', 'bring', 'expert', 'writ', 'rhet', 'cours', 'design', 'design', 'assign', 'cours', 'infrastruct', 'help', 'shar', 'expery', 'writ', 'stud', 'profess', 'us.', 'collab', 'facilit', 'wex', 'writ', 'exchang', 'plac', 'exchang', 'work', 'feedback']

在我们引入gensim之前,还有一件事要做,去掉在整个语料库中出现次数为1的低频词,测试了一下,不去掉的话对效果有些影响:

>>> all_stems = sum(texts_stemmed, [])
>>> stems_once = set(stem for stem in set(all_stems) if all_stems.count(stem) == 1)
>>> texts = [[stem for stem in text if stem not in stems_once] for text in texts_stemmed]

3、引入gensim
有了上述的预处理,我们就可以引入gensim,并快速的做课程相似度的实验了。以下会快速的过一遍流程,具体的可以参考上一节的详细描述。

>>> from gensim import corpora, models, similarities
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

>>> dictionary = corpora.Dictionary(texts)
2013-06-07 21:37:07,120 : INFO : adding document #0 to Dictionary(0 unique tokens)
2013-06-07 21:37:07,263 : INFO : built Dictionary(3341 unique tokens) from 379 documents (total 46417 corpus positions)

>>> corpus = [dictionary.doc2bow(text) for text in texts]

>>> tfidf = models.TfidfModel(corpus)
2013-06-07 21:58:30,490 : INFO : collecting document frequencies
2013-06-07 21:58:30,490 : INFO : PROGRESS: processing document #0
2013-06-07 21:58:30,504 : INFO : calculating IDF weights for 379 documents and 3341 features (29166 matrix non-zeros)

>>> corpus_tfidf = tfidf[corpus]

这里我们拍脑门决定训练topic数量为10的LSI模型:
>>> lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=10)

>>> index = similarities.MatrixSimilarity(lsi[corpus])
2013-06-07 22:04:55,443 : INFO : scanning corpus to determine the number of features
2013-06-07 22:04:55,510 : INFO : creating matrix for 379 documents and 10 features

基于LSI模型的课程索引建立完毕,我们以Andrew Ng教授的机器学习公开课为例,这门课程在我们的coursera_corpus文件的第211行,也就是:

>>> print courses_name[210]
Machine Learning

现在我们就可以通过lsi模型将这门课程映射到10个topic主题模型空间上,然后和其他课程计算相似度:
>>> ml_course = texts[210]
>>> ml_bow = dictionary.doc2bow(ml_course)
>>> ml_lsi = lsi[ml_bow]
>>> print ml_lsi
[(0, 8.3270084238788673), (1, 0.91295652151975082), (2, -0.28296075112669405), (3, 0.0011599008827843801), (4, -4.1820134980024255), (5, -0.37889856481054851), (6, 2.0446999575052125), (7, 2.3297944485200031), (8, -0.32875594265388536), (9, -0.30389668455507612)]
>>> sims = index[ml_lsi]
>>> sort_sims = sorted(enumerate(sims), key=lambda item: -item[1])

取按相似度排序的前10门课程:
>>> print sort_sims[0:10]
[(210, 1.0), (174, 0.97812241), (238, 0.96428639), (203, 0.96283489), (63, 0.9605484), (189, 0.95390636), (141, 0.94975704), (184, 0.94269753), (111, 0.93654782), (236, 0.93601125)]

第一门课程是它自己:
>>> print courses_name[210]
Machine Learning

第二门课是Coursera上另一位大牛Pedro Domingos的机器学习公开课:
>>> print courses_name[174]
Machine Learning

第三门课是Coursera的另一位创始人,同样是大牛的Daphne Koller教授的概率图模型公开课
>>> print courses_name[238]
Probabilistic Graphical Models

第四门课是另一位超级大牛Geoffrey Hinton的神经网络公开课,有同学评价是Deep Learning的必修课。
>>> print courses_name[203]
Neural Networks for Machine Learning

感觉效果还不错,如果觉得有趣的话,也可以动手试试。

好了,这个系列就到此为止了,原计划写一下在英文维基百科全量数据上的实验,因为课程图谱目前暂时不需要,所以就到此为止,感兴趣的同学可以直接阅读gensim上的相关文档,非常详细。之后我可能更关注将NLTK应用到中文信息处理上,欢迎关注。

注:原创文章,转载请注明出处“我爱自然语言处理”:www.52nlp.cn

本文链接地址:/如何计算两个文档的相似度三