
General ML.

You are currently browsing the archive for the General ML category.

In March of 2016, the computer program AlphaGo defeated Lee Sedol in a five game match.

Never before had a Go computer program beaten a professional Go player on the full size board.

In January of 2017, AlphaGo won 60 consecutive online Go games against many of the best Go players in the world using the online pseudonym Master.

During these games, AlphaGo (Master) played many non-traditional moves—moves that most professional Go players would have considered bad before AlphaGo appeared.

These moves are changing the Go community as professional Go players adopt them into their play.

Two randomly selected games from the series of 60 AlphaGo games played in January 2017.

Match 1 – Google DeepMind Challenge Match: Lee Sedol vs AlphaGo.

The algorithms used by AlphaGo (deep learning, Monte Carlo Tree Search, and convolutional neural nets) are similar to the algorithms that I used at Penn State for autonomous vehicle path planning in a dynamic environment.

Deep Learning and Monte Carlo Tree Search can be used in any game.
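To make the tree-search half concrete, here is a minimal Monte Carlo Tree Search (UCT) sketch for the toy game Nim (players alternately remove 1–3 stones; whoever takes the last stone wins). The game, the exploration constant, and all names are my illustrative choices — AlphaGo's actual search additionally uses neural-network policy and value guidance.

```python
import math
import random

MOVES = (1, 2, 3)

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones                 # stones remaining at this node
        self.parent = parent
        self.move = move                     # move that produced this node
        self.children = []
        self.untried = [m for m in MOVES if m <= stones]
        self.wins = 0.0                      # wins for the player who made self.move
        self.visits = 0

    def uct_child(self, c=1.4):
        # pick the child maximizing the UCT upper-confidence score
        return max(self.children, key=lambda n:
                   n.wins / n.visits + c * math.sqrt(math.log(self.visits) / n.visits))

def simulate(stones, player):
    """Random playout; returns the player (0 or 1) who takes the last stone."""
    while True:
        stones -= random.choice([m for m in MOVES if m <= stones])
        if stones == 0:
            return player
        player ^= 1

def best_move(stones, iters=3000):
    root = Node(stones)
    for _ in range(iters):
        node, s, player = root, stones, 0    # player 0 moves at the root
        # 1. selection: descend through fully expanded nodes
        while not node.untried and node.children:
            node = node.uct_child()
            s -= node.move
            player ^= 1
        # 2. expansion: add one child for an untried move
        if node.untried:
            m = node.untried.pop()
            node = Node(s - m, parent=node, move=m)
            node.parent.children.append(node)
            s -= m
            player ^= 1
        # 3. simulation: random playout from the new position
        winner = player ^ 1 if s == 0 else simulate(s, player)
        # 4. backpropagation: credit each move to the player who made it
        p = player ^ 1                       # player who moved into `node`
        while node is not None:
            node.visits += 1
            if node.move is not None and winner == p:
                node.wins += 1
            node, p = node.parent, p ^ 1
    return max(root.children, key=lambda n: n.visits).move
```

From 5 stones the winning move is to take 1 (leaving a multiple of 4), and the search finds it with a few thousand playouts — the same select/expand/simulate/backpropagate loop scales up to Go once random playouts are replaced by learned evaluations.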

Google DeepMind has had a lot of success applying these algorithms to Atari video games, where the computer learns strategy through self-play.

Very similar algorithms created AlphaGo through self-play and analysis of professional and amateur Go games.

I often wonder what we can learn about other board games from computers.

We will learn more about Go from AlphaGo in two weeks.

From May 23rd to 27th, AlphaGo will play the world's top-ranked player, Ke Jie, at the Future of Go Summit in Wuzhen, China.


Researchers at the University of Alberta report that their program Cepheus seems to have solved heads-up limit hold’em poker.

179 Classifiers Competing on 121 Data Sets and the Winner Is ….

December 29, 2014 | Permalink

Fernandez-Delgado, Cernadas, Barro, and Amorim tested 179 classifiers on 121 data sets and reported their results in “Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?” The classifiers were drawn from the following 17 families: “discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods” from the Weka, Matlab, and R machine learning libraries. The 121 data sets were drawn mostly from the UCI classification repository.

The overall result was that the random forest classifiers were best on average followed by support vector machines, neural networks, and boosting ensembles.

For more details, read the paper.

Mnih, Kavukcuoglu, Silver, Graves, Antonoglou, Wierstra, and Riedmiller authored the paper “Playing Atari with Deep Reinforcement Learning”, which describes an Atari game playing program created by the company DeepMind (recently acquired by Google).

The AI did not just learn how to play one game.

(The same learning parameters, neural network topologies, and algorithms were used for every game).

The games ran on hardware with only four kilobytes of cartridge ROM, 128 bytes of RAM, and a 210 x 160 pixel display with 128 colors.

Various machine learning techniques have been applied to the old Atari games using the Arcade Learning Environment, which precisely reproduces the Atari 2600 gaming system.

(See, e.g., Diuk, Cohen, and Littman 2008; Hausknecht, Khandelwal, Miikkulainen, and Stone 2012; Shung Zhang; Hausknecht, Lehman, Miikkulainen, and Stone 2014; and Korjus, Kuzovkin, Tampuu, and Pungas 2014.)

To learn from raw video, they first converted the video to grayscale and then downsampled/cropped it to 84 x 84 images.

The last four frames were used to determine actions.
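A hedged sketch of that preprocessing pipeline: convert each RGB frame to grayscale, shrink it to 84 x 84 (here by naive nearest-neighbor striding rather than the paper's exact crop/resize), and keep the last four processed frames as the network input. The frame sizes and luminance weights are standard; the function names and the striding shortcut are my illustrative choices.

```python
from collections import deque

def to_grayscale(frame):
    """frame: height x width list of (r, g, b) tuples -> 2-D list of floats."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in frame]

def downsample(gray, out_h=84, out_w=84):
    """Nearest-neighbor resize of a 2-D list (a crude stand-in for crop/resize)."""
    h, w = len(gray), len(gray[0])
    return [[gray[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]

history = deque(maxlen=4)   # the last four processed frames

def preprocess(frame):
    history.append(downsample(to_grayscale(frame)))
    return list(history)    # 4 x 84 x 84 once the history is full
```

Note that 84 × 84 × 4 = 28224, matching the input-pixel count quoted below.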

The 28224 input pixels were run through two hidden convolution neural net layers and one fully connected (no convolution) 256 node hidden layer with a single output for each possible action.

Training was done with stochastic gradient descent using random samples drawn from a historical database of previous games played by the AI to improve convergence. (This technique, known as “experience replay”, is described by Long-Ji Lin 1993.)

The objective function for supervised learning is usually a loss function representing the difference between the predicted label and the actual label.
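The experience-replay idea is simple enough to sketch in a few lines: store transitions in a bounded memory and train on random minibatches drawn from it, which breaks the correlation between consecutive frames. The capacity and batch size below are arbitrary illustrative values, not the paper's.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.memory = deque(maxlen=capacity)   # oldest transitions are evicted

    def push(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # uniform random minibatch for one stochastic-gradient-descent step
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
```

Each gradient step then trains on `buffer.sample(32)` instead of the most recent transition.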

For these games the correct action is unknown, so reinforcement learning is used instead of supervised learning.

The authors used a variant of Q-learning to train the weights in their neural network.

They describe their algorithm in detail and compare it to several historical reinforcement algorithms, so this section of the paper can be used as a brief introduction to reinforcement learning.
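As a concrete stand-in for that reinforcement-learning machinery, here is tabular Q-learning on a toy chain MDP (move right to reach a reward). The environment, step cap, and hyperparameters are invented for illustration; the paper applies the same update rule but replaces the table with the convolutional network.

```python
import random

def greedy(Q, s):
    """Greedy action with random tie-breaking."""
    best = max(Q[s])
    return random.choice([a for a in range(len(Q[s])) if Q[s][a] == best])

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Chain MDP: action 1 moves right, action 0 moves left; reaching the
    rightmost state pays reward 1 and ends the episode."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(100):                  # step cap keeps episodes finite
            a = random.randrange(2) if random.random() < eps else greedy(Q, s)
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # the Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return Q
```

After training, the greedy policy moves right in every state, since each Q(s, right) has converged near gamma raised to the remaining distance.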

November 25, 2014 | Permalink

The KDD 2014 article “Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion”, written by Dong, Gabrilovich, Heitz, Horn, Lao, Murphy, Strohmann, Sun, and Zhang, describes the construction of a large probabilistic knowledge base.

Each entry in the database is of the form subject-predicate-object-probability, restricted to about 4500 predicates such as “born in”, “married to”, “held at”, or “authored by”.

The database was built by combining the knowledge base Freebase with Wikipedia and approximately one billion web pages.

Dong et al. compare their knowledge base with YAGO2, Freebase, and the related project Knowledge Graph. (Knowledge Graph consists of high-confidence knowledge.)

The information from Wikipedia and the Web was extracted using standard natural language processing (NLP) tools, including: “named entity recognition, part of speech tagging, dependency parsing, co-reference resolution (within each document), and entity linkage (which maps mentions of proper nouns and their co-references to the corresponding entities in the KB).” The text in these sources is mined using “distant supervision” (see Mintz, Bills, Snow, and Jurafsky, “Distant Supervision for relation extraction without labeled data”, 2009).

Probabilities for each triple are calculated using logistic regression (via MapReduce).
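To make that scoring step concrete: logistic regression maps a feature vector for a candidate triple through a sigmoid to get a calibrated probability. The triple, features, and weights below are invented for illustration — the paper learns weights over its real extraction features at MapReduce scale.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def triple_probability(features, weights, bias):
    """Logistic-regression score for one candidate triple."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return sigmoid(z)

# (subject, predicate, object) plus extraction features -> probability
triple = ("Barack Obama", "born in", "Honolulu")
features = [3.0, 0.9]   # e.g. number of extractors firing, mean extractor confidence
prob = triple_probability(features, weights=[0.8, 2.0], bias=-2.5)
```

Storing `triple + (prob,)` gives exactly the subject-predicate-object-probability entries described above.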

Further information is extracted from internet tables (over 570 million tables) using the techniques in “Recovering semantics of tables on the web” by Venetis, Halevy, Madhavan, Pasca, Shen, Wu, Miao, and Wu 2012.

The facts extracted using the various extraction techniques are fused with logistic regression and boosted decision stumps (see “How boosting the margin can also boost classifier complexity” by Reyzin and Schapire 2006).

Implications of the extracted knowledge are created using two techniques: the path ranking algorithm and a modified tensor decomposition.

The path ranking algorithm (see “Random walk inference and learning in a large scale knowledge base” by Lao, Mitchell, and Cohen 2011) can guess that if two people parent the same child, then it is likely that they are married.

Several other examples of inferences derived from path ranking are provided in table 3 of the paper.
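A toy version of that marriage inference shows the path-ranking idea: if the path parent_of → parent_of⁻¹ connects two people (they share a child), that path is a feature giving evidence for “married to”. The names and the tiny graph are made up for illustration.

```python
# A triple store as a set of (subject, relation, object) facts
edges = {("Ann", "parent_of", "Carl"), ("Bob", "parent_of", "Carl"),
         ("Ann", "parent_of", "Dana"), ("Eve", "parent_of", "Frank")}

def share_a_child(x, y):
    """Does the path x -parent_of-> child <-parent_of- y exist?"""
    kids = lambda p: {o for (s, r, o) in edges if s == p and r == "parent_of"}
    return x != y and bool(kids(x) & kids(y))
```

Here `share_a_child("Ann", "Bob")` holds, so “married to” gets a boost, while `share_a_child("Ann", "Eve")` does not; the path ranking algorithm learns weights for many such path features from random walks over the full graph.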

Tensor decomposition is just a generalization of singular value decomposition, a well-known machine learning technique.
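That generalization can be written out. SVD expresses a matrix as a sum of rank-one terms, and the CP form of tensor decomposition (one common generalization; the paper's modified version differs in details) does the same one order higher, with rank-one terms that are outer products of three vectors:

```latex
% SVD of a matrix: a sum of R rank-one terms
X \approx U \Sigma V^{\top} = \sum_{r=1}^{R} \sigma_r \, u_r v_r^{\top}

% CP tensor decomposition: rank-one terms built from three vectors
\mathcal{T}_{ijk} \approx \sum_{r=1}^{R} \lambda_r \, a_{ir} \, b_{jr} \, c_{kr}
```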

The authors used a “more powerful” modified version of tensor decomposition to derive additional facts.

(See “Reasoning with Neural Tensor Networks for Knowledge Base Completion” by Socher, Chen, Manning, and Ng 2013.) The article is very detailed and provides extensive references to knowledge base construction techniques.

It, along with the references, can serve as a great introduction to modern knowledge engineering.

Rules for CS Research and Seven Principles of Learning.

September 4, 2014 | Permalink

I read two nice articles this week: “Ten Simple Rules for Effective Computational Research” and “Seven Principles of Learning Better From Cognitive Science”.

In “Ten Simple Rules for Effective Computational Research”, Osborne, Bernabeu, Bruna, Calderhead, Cooper, Dalchau, Dunn, Fletcher, Freeman, Groen, Knapp, McInerny, Mirams, Pitt-Francis, Sengupta, Wright, Yates, Gavaghan, Emmott, and Deane wrote up these guidelines for algorithm research:

1. Look Before You Leap.
2. Develop a Prototype First.
3. Make Your Code Understandable to Others (and Yourself).
4. Don’t Underestimate the Complexity of Your Task.
5. Understand the Mathematical, Numerical, and Computational Methods Underpinning Your Work.
6. Use Pictures: They Really Are Worth a Thousand Words.
7. Version Control Everything.
8. Test Everything.
9. Share Everything.
10. Keep Going!

Read the full five-page PLOS article.

Scott Young recently returned from a year abroad and wrote up “Seven Principles of Learning Better From Cognitive Science”, which is a review/summary of the book “Why Don’t Students Like School?” by Daniel Willingham. Here are the seven principles:

1. Factual knowledge precedes skill.
2. Memory is the residue of thought.
3. We understand new things in the context of what we already know.
4. Proficiency requires practice.
5. Cognition is fundamentally different early and late in training.
6. People are more alike than different in how we learn.
7. Intelligence can be changed through sustained hard work.

Read the full five-page article, and get the great $10 book.


Gunnar Carlsson on the Shape of Data.

July 28, 2014 | Permalink

Carl sent me a YouTube video by Dr. Gunnar Carlsson on the application of topology to data mining (topological data analysis).

Dr. Carlsson created a short 5 minute introduction, and a longer video of one of his lectures.

For more information, check out “Topology based data analysis identifies a subgroup of breast cancers with a unique mutational profile and excellent survival” by Nicolau, Levine, and Carlsson.

Also, Bansal and Choudhary put together a nice set of slides on the subject with applications to clustering and visualization.

Assorted Links Feb 2014.

February 19, 2014 | Permalink

Enjoying John Baez’s blog Azimuth, especially the posts on good research practices and an older post on levels of mathematical understanding.

García-Pérez, Serrano, and Boguñá wrote a cool paper on primes, probability, and integers as a bipartite network.

Loved the idea behind the game theoretical book “Survival of the Nicest” (see Yes Magazine for a two page introduction).

Scott Young is learning Chinese quickly.

Cyber warriors to the rescue.

Mao, Fluxx, and Douglas Hofstadter‘s Nomic are fun games.

Healy and Caudell are applying category theory to semantic and neural networks.

Some MOOCs for data science and machine learning.

Here is an old but good free online course on the Computational Complexity of Machine Learning.

Great TeX graphics.

Watch this Ted Video to learn anything in 20 hours (YMMV).

Where are all the Steeler fans? Cowboy fans? ….

Productivity Hints.

Copper + Magnets = Fun.

Stray dogs on the subway.

Deep learning on NPR.

Happy 40th birthday D&D.

Deep learning in your browser.

How to write a great research paper.

Do Deep Nets Really Need to be Deep?

A variation on neural net dropout.

Provable algorithms for Machine Learning.

100 Numpy Exercises.

Learn and Practice Applied Machine Learning | Machine Learning Mastery.
