
Support vector machine, the Glossary

Index: Support vector machine

In machine learning, support vector machines (SVMs, also support vector networks) are supervised max-margin models with associated learning algorithms that analyze data for classification and regression analysis.[1]
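
For reference, the soft-margin optimization problem behind this definition (a standard formulation, stated here as an added illustration) is

\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \; \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i(\mathbf{w}^\top \mathbf{x}_i - b) \ge 1 - \xi_i, \quad \xi_i \ge 0,

where maximizing the margin 2/\|\mathbf{w}\| amounts to minimizing \|\mathbf{w}\|, and the parameter C trades margin width against the slack \xi_i allowed on violating points.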

Table of Contents

  1. 105 relations: Alexey Chervonenkis, Algorithm, Bayesian optimization, Bayesian probability, Bell Labs, Big data, Binary classification, Cluster analysis, Computer vision, Convex function, Coordinate descent, Corinna Cortes, Cross-validation (statistics), Data augmentation, Directed acyclic graph, Distance from a point to a plane, Document classification, Dot product, Duality (optimization), Empirical risk minimization, Error correction code, Feature (machine learning), Fisher kernel, Generalization error, Gradient descent, Graphical model, Handwriting recognition, Hava Siegelmann, Hesse normal form, Hinge loss, Homogeneous polynomial, Hyperbolic functions, Hyperparameter (machine learning), Hyperparameter optimization, Hyperplane, Hyperplane separation theorem, Hypothesis, Image segmentation, In situ adaptive tabulation, Interior-point method, Isabelle Guyon, JavaScript, Journal of Machine Learning Research, Karush–Kuhn–Tucker conditions, Kernel method, Least-squares support vector machine, Lecture Notes in Computer Science, LIBSVM, Linear classifier, Linear regression, Linear separability, Logistic regression, Loss function, Loss functions for classification, Machine learning, Machine Learning (journal), Margin (machine learning), Margin classifier, MATLAB, Multiclass classification, Newton's method, Normal (geometry), Normed vector space, OpenCV, Overfitting, Parallel computing, Perceptron, Permutation test, Platt scaling, Polynomial kernel, Positive-definite kernel, Posterior predictive distribution, Predictive analytics, Probabilistic classification, Quadratic programming, Radial basis function kernel, Rate of convergence, Real number, Regression analysis, Regularization perspectives on support vector machines, Regularized least squares, Relevance vector machine, Ridge regression, Scikit-learn, Semantic role labeling, Sequential minimal optimization, Shogun (toolbox), Sigmoid function, Sign function, Space mapping, Statistical classification, Stochastic gradient descent, Structured prediction, Subderivative, Subgradient method, Supervised learning, Synthetic-aperture radar, Transduction (machine learning), Unit of observation, Unsupervised learning, Vapnik–Chervonenkis theory, Vladimir Vapnik, Weak supervision, Weka (software), Winnow (algorithm).

  2. Support vector machines

Alexey Chervonenkis

Alexey Yakovlevich Chervonenkis (Алексей Яковлевич Червоненкис; 7 September 1938 – 22 September 2014) was a Soviet and Russian mathematician.

See Support vector machine and Alexey Chervonenkis

Algorithm

In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation.

See Support vector machine and Algorithm

Bayesian optimization

Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms.

See Support vector machine and Bayesian optimization

Bayesian probability

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.

See Support vector machine and Bayesian probability

Bell Labs

Bell Labs is an American industrial research and scientific development company credited with the development of radio astronomy, the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), information theory, the Unix operating system, and the programming languages B, C, C++, S, SNOBOL, AWK, AMPL, and others.

See Support vector machine and Bell Labs

Big data

Big data primarily refers to data sets that are too large or complex to be dealt with by traditional data-processing application software.

See Support vector machine and Big data

Binary classification

Binary classification is the task of classifying the elements of a set into one of two groups (each called a class). Support vector machine and Binary classification are statistical classification.

See Support vector machine and Binary classification

Cluster analysis

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the analyst) to each other than to those in other groups (clusters). Support vector machine and cluster analysis are statistical classification.

See Support vector machine and Cluster analysis

Computer vision

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions.

See Support vector machine and Computer vision

Convex function

In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points.

See Support vector machine and Convex function

Coordinate descent

Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function.

See Support vector machine and Coordinate descent

Corinna Cortes

Corinna Cortes (born 31 March 1961) is a Danish computer scientist known for her contributions to machine learning.

See Support vector machine and Corinna Cortes

Cross-validation (statistics)

Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set.

See Support vector machine and Cross-validation (statistics)
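
A minimal sketch of cross-validating an SVM, assuming scikit-learn (which appears later in this index); the synthetic data and constants are illustrative, not from the original text:

    # 5-fold cross-validation of an RBF-kernel SVM on synthetic data.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))            # 100 samples, 2 features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic binary labels

    scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
    print(scores.mean())                     # mean held-out accuracy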

Data augmentation

Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data.

See Support vector machine and Data augmentation

Directed acyclic graph

In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles.

See Support vector machine and Directed acyclic graph

Distance from a point to a plane

In Euclidean space, the distance from a point to a plane is the distance between a given point and its orthogonal projection on the plane, the perpendicular distance to the nearest point on the plane.

See Support vector machine and Distance from a point to a plane
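
For reference, the distance from a point (x_0, y_0, z_0) to the plane ax + by + cz + d = 0 is

D = \frac{|a x_0 + b y_0 + c z_0 + d|}{\sqrt{a^2 + b^2 + c^2}},

which is the quantity an SVM maximizes for the training points nearest its separating hyperplane.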

Document classification

Document classification or document categorization is a problem in library science, information science and computer science.

See Support vector machine and Document classification

Dot product

In mathematics, the dot product or scalar product (a term meaning literally "product with a scalar as a result") is an algebraic operation that takes two equal-length sequences of numbers and returns a single number.

See Support vector machine and Dot product
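
For two vectors of equal length, the dot product is

\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{n} a_i b_i = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta,

where \theta is the angle between the vectors; linear SVM decision functions are built from dot products \mathbf{w} \cdot \mathbf{x}.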

Duality (optimization)

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem.

See Support vector machine and Duality (optimization)

Empirical risk minimization

Empirical risk minimization is a principle in statistical learning theory which defines a family of learning algorithms based on evaluating performance over a known and fixed dataset.

See Support vector machine and Empirical risk minimization

Error correction code

In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.

See Support vector machine and Error correction code

Feature (machine learning)

In machine learning and pattern recognition, a feature is an individual measurable property or characteristic of a phenomenon.

See Support vector machine and Feature (machine learning)

Fisher kernel

In statistical classification, the Fisher kernel, named after Ronald Fisher, is a function that measures the similarity of two objects on the basis of sets of measurements for each object and a statistical model.

See Support vector machine and Fisher kernel

Generalization error

For supervised learning applications in machine learning and statistical learning theory, generalization error (also known as the out-of-sample error or the risk) is a measure of how accurately an algorithm is able to predict outcome values for previously unseen data (Mohri, Rostamizadeh, and Talwalkar, Foundations of Machine Learning, 2nd ed., MIT Press, 2018). Support vector machine and generalization error are classification algorithms.

See Support vector machine and Generalization error

Gradient descent

Gradient descent is a method for unconstrained mathematical optimization.

See Support vector machine and Gradient descent
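
The basic update rule, stated here for reference:

\mathbf{x}_{k+1} = \mathbf{x}_k - \eta \nabla f(\mathbf{x}_k),

where \eta > 0 is the step size (learning rate) and f is the function being minimized.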

Graphical model

A graphical model or probabilistic graphical model (PGM) or structured probabilistic model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables.

See Support vector machine and Graphical model

Handwriting recognition

Handwriting recognition (HWR), also known as handwritten text recognition (HTR), is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices.

See Support vector machine and Handwriting recognition

Hava Siegelmann

Hava Siegelmann is an American computer scientist and Provost Professor at the University of Massachusetts Amherst.

See Support vector machine and Hava Siegelmann

Hesse normal form

The Hesse normal form, named after Otto Hesse, is an equation used in analytic geometry that describes a line in \mathbb{R}^2, a plane in Euclidean space \mathbb{R}^3, or a hyperplane in higher dimensions.

See Support vector machine and Hesse normal form
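
In the plane, the Hesse normal form of a line reads

x \cos\alpha + y \sin\alpha - d = 0,

where \alpha gives the direction of the unit normal and d \ge 0 is the distance from the origin; the general form \mathbf{r} \cdot \mathbf{n}_0 - d = 0 extends this to hyperplanes.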

Hinge loss

In machine learning, the hinge loss is a loss function used for training classifiers. Support vector machine and hinge loss are support vector machines.

See Support vector machine and Hinge loss
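
For a prediction score y and a true label t = \pm 1, the hinge loss is

\ell(y) = \max(0,\, 1 - t\,y),

which is zero once the example is classified with a margin of at least 1, and grows linearly otherwise.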

Homogeneous polynomial

In mathematics, a homogeneous polynomial, sometimes called quantic in older texts, is a polynomial whose nonzero terms all have the same degree.

See Support vector machine and Homogeneous polynomial

Hyperbolic functions

In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle.

See Support vector machine and Hyperbolic functions
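
The connection to SVMs is presumably the so-called sigmoid kernel, built from the hyperbolic tangent:

\tanh x = \frac{e^x - e^{-x}}{e^x + e^{-x}}, \qquad K(\mathbf{x}, \mathbf{y}) = \tanh(\gamma\, \mathbf{x}^\top \mathbf{y} + c).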

Hyperparameter (machine learning)

In machine learning, a hyperparameter is a parameter, such as the learning rate or choice of optimizer, which specifies details of the learning process, hence the name hyperparameter.

See Support vector machine and Hyperparameter (machine learning)

Hyperparameter optimization

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm.

See Support vector machine and Hyperparameter optimization
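
A minimal sketch of tuning an SVM by grid search, again assuming scikit-learn; the grid values and data are illustrative assumptions:

    # Grid search over the two main RBF-SVM hyperparameters, C and gamma.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)  # best grid cell and its CV score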

Hyperplane

In geometry, a hyperplane is a generalization of a two-dimensional plane in three-dimensional space to mathematical spaces of arbitrary dimension.

See Support vector machine and Hyperplane

Hyperplane separation theorem

In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space.

See Support vector machine and Hyperplane separation theorem

Hypothesis

A hypothesis (plural: hypotheses) is a proposed explanation for a phenomenon.

See Support vector machine and Hypothesis

Image segmentation

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects (sets of pixels).

See Support vector machine and Image segmentation

In situ adaptive tabulation

In situ adaptive tabulation (ISAT) is an algorithm for the approximation of nonlinear relationships.

See Support vector machine and In situ adaptive tabulation

Interior-point method

Interior-point methods (also referred to as barrier methods or IPMs) are algorithms for solving linear and non-linear convex optimization problems.

See Support vector machine and Interior-point method

Isabelle Guyon

Isabelle Guyon (born August 15, 1961) is a French-born researcher in machine learning known for her work on support-vector machines, artificial neural networks and bioinformatics.

See Support vector machine and Isabelle Guyon

JavaScript

JavaScript, often abbreviated as JS, is a programming language and core technology of the Web, alongside HTML and CSS.

See Support vector machine and JavaScript

Journal of Machine Learning Research

The Journal of Machine Learning Research is a peer-reviewed open access scientific journal covering machine learning.

See Support vector machine and Journal of Machine Learning Research

Karush–Kuhn–Tucker conditions

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

See Support vector machine and Karush–Kuhn–Tucker conditions
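
For a problem \min f(\mathbf{x}) subject to g_i(\mathbf{x}) \le 0 and h_j(\mathbf{x}) = 0, the KKT conditions at a candidate optimum \mathbf{x}^* with multipliers \mu_i, \lambda_j are:

\nabla f(\mathbf{x}^*) + \sum_i \mu_i \nabla g_i(\mathbf{x}^*) + \sum_j \lambda_j \nabla h_j(\mathbf{x}^*) = 0 (stationarity),
g_i(\mathbf{x}^*) \le 0 and h_j(\mathbf{x}^*) = 0 (primal feasibility),
\mu_i \ge 0 (dual feasibility), and
\mu_i\, g_i(\mathbf{x}^*) = 0 (complementary slackness).

In the SVM dual, complementary slackness is what makes only the support vectors carry nonzero multipliers.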

Kernel method

In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). Support vector machine and kernel method are classification algorithms.

See Support vector machine and Kernel method

Least-squares support vector machine

Least-squares support-vector machines (LS-SVM) for statistics and in statistical modeling, are least-squares versions of support-vector machines (SVM), which are a set of related supervised learning methods that analyze data and recognize patterns, and which are used for classification and regression analysis. Support vector machine and least-squares support vector machine are classification algorithms, statistical classification and support vector machines.

See Support vector machine and Least-squares support vector machine
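
In the standard LS-SVM formulation (Suykens and Vandewalle), the SVM's inequality constraints become equalities:

\min_{\mathbf{w},\, b,\, \mathbf{e}} \; \frac{1}{2}\|\mathbf{w}\|^2 + \frac{\gamma}{2} \sum_{i=1}^{n} e_i^2 \quad \text{subject to} \quad y_i(\mathbf{w}^\top \phi(\mathbf{x}_i) + b) = 1 - e_i,

so training reduces to solving a linear system rather than a quadratic program.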

Lecture Notes in Computer Science

Lecture Notes in Computer Science is a series of computer science books published by Springer Science+Business Media since 1973.

See Support vector machine and Lecture Notes in Computer Science

LIBSVM

LIBSVM and LIBLINEAR are two popular open source machine learning libraries, both developed at the National Taiwan University and both written in C++ though with a C API.

See Support vector machine and LIBSVM

Linear classifier

In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to; a linear classifier does so based on the value of a linear combination of those characteristics. Support vector machine and Linear classifier are classification algorithms and statistical classification.

See Support vector machine and Linear classifier

Linear regression

In statistics, linear regression is a statistical model which estimates the linear relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables).

See Support vector machine and Linear regression

Linear separability

In Euclidean geometry, linear separability is a property of two sets of points.

See Support vector machine and Linear separability

Logistic regression

In statistics, the logistic model (or logit model) is a statistical model that models the log-odds of an event as a linear combination of one or more independent variables. Support vector machine and logistic regression are statistical classification.

See Support vector machine and Logistic regression
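
The model in symbols: for an event probability p and predictors x_1, \dots, x_k,

\log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k, \qquad \text{equivalently} \qquad p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}.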

Loss function

In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event.

See Support vector machine and Loss function

Loss functions for classification

In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to).

See Support vector machine and Loss functions for classification

Machine learning

Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions.

See Support vector machine and Machine learning

Machine Learning (journal)

Machine Learning is a peer-reviewed scientific journal, published since 1986.

See Support vector machine and Machine Learning (journal)

Margin (machine learning)

In machine learning the margin of a single data point is defined to be the distance from the data point to a decision boundary. Support vector machine and margin (machine learning) are support vector machines.

See Support vector machine and Margin (machine learning)

Margin classifier

In machine learning, a margin classifier is a classifier which is able to give an associated distance from the decision boundary for each example. Support vector machine and margin classifier are classification algorithms and statistical classification.

See Support vector machine and Margin classifier

MATLAB

MATLAB (an abbreviation of "MATrix LABoratory") is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks.

See Support vector machine and MATLAB

Multiclass classification

In machine learning and statistical classification, multiclass classification or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification). Support vector machine and multiclass classification are classification algorithms and statistical classification.

See Support vector machine and Multiclass classification

Newton's method

In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function.

See Support vector machine and Newton's method

Normal (geometry)

In geometry, a normal is an object (e.g. a line, ray, or vector) that is perpendicular to a given object.

See Support vector machine and Normal (geometry)

Normed vector space

In mathematics, a normed vector space or normed space is a vector space over the real or complex numbers on which a norm is defined.

See Support vector machine and Normed vector space

OpenCV

OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly for real-time computer vision.

See Support vector machine and OpenCV

Overfitting

In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".

See Support vector machine and Overfitting

Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously.

See Support vector machine and Parallel computing

Perceptron

In machine learning, the perceptron (or McCulloch–Pitts neuron) is an algorithm for supervised learning of binary classifiers. Support vector machine and perceptron are classification algorithms.

See Support vector machine and Perceptron

Permutation test

A permutation test (also called re-randomization test or shuffle test) is an exact statistical hypothesis test making use of the proof by contradiction.

See Support vector machine and Permutation test

Platt scaling

In machine learning, Platt scaling or Platt calibration is a way of transforming the outputs of a classification model into a probability distribution over classes. Support vector machine and Platt scaling are statistical classification.

See Support vector machine and Platt scaling
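
Concretely, Platt scaling fits a logistic function of the classifier's raw score f(\mathbf{x}):

P(y = 1 \mid \mathbf{x}) = \frac{1}{1 + \exp(A f(\mathbf{x}) + B)},

with scalars A and B estimated by maximum likelihood on held-out data.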

Polynomial kernel

In machine learning, the polynomial kernel is a kernel function commonly used with support vector machines (SVMs) and other kernelized models, that represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables, allowing learning of non-linear models.

See Support vector machine and Polynomial kernel
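
For degree d and a free parameter c \ge 0, the polynomial kernel is

K(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^\top \mathbf{y} + c)^d,

which corresponds to a homogeneous polynomial when c = 0.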

Positive-definite kernel

In operator theory, a branch of mathematics, a positive-definite kernel is a generalization of a positive-definite function or a positive-definite matrix.

See Support vector machine and Positive-definite kernel

Posterior predictive distribution

In Bayesian statistics, the posterior predictive distribution is the distribution of possible unobserved values conditional on the observed values.

See Support vector machine and Posterior predictive distribution

Predictive analytics

Predictive analytics is a form of business analytics applying machine learning to generate a predictive model for certain business applications.

See Support vector machine and Predictive analytics

Probabilistic classification

In machine learning, a probabilistic classifier is a classifier that is able to predict, given an observation of an input, a probability distribution over a set of classes, rather than only outputting the most likely class that the observation should belong to. Support vector machine and probabilistic classification are statistical classification.

See Support vector machine and Probabilistic classification

Quadratic programming

Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions.

See Support vector machine and Quadratic programming
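
The standard form, given here for reference:

\min_{\mathbf{x}} \; \frac{1}{2}\mathbf{x}^\top Q \mathbf{x} + \mathbf{c}^\top \mathbf{x} \quad \text{subject to} \quad A\mathbf{x} \le \mathbf{b};

the SVM dual problem is a quadratic program of this form.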

Radial basis function kernel

In machine learning, the radial basis function kernel, or RBF kernel, is a popular kernel function used in various kernelized learning algorithms. Support vector machine and radial basis function kernel are support vector machines.

See Support vector machine and Radial basis function kernel
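
For two samples \mathbf{x} and \mathbf{x}', the RBF kernel is

K(\mathbf{x}, \mathbf{x}') = \exp(-\gamma \|\mathbf{x} - \mathbf{x}'\|^2), \qquad \gamma = \frac{1}{2\sigma^2},

where \sigma is a free bandwidth parameter.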

Rate of convergence

In numerical analysis, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit.

See Support vector machine and Rate of convergence

Real number

In mathematics, a real number is a number that can be used to measure a continuous one-dimensional quantity such as a distance, duration or temperature.

See Support vector machine and Real number

Regression analysis

In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables (often called 'predictors', 'covariates', 'explanatory variables' or 'features').

See Support vector machine and Regression analysis

Regularization perspectives on support vector machines

Within mathematical analysis, regularization perspectives on support-vector machines provide a way of interpreting support-vector machines (SVMs) in the context of other regularization-based machine-learning algorithms. Support vector machine and regularization perspectives on support vector machines are support vector machines.

See Support vector machine and Regularization perspectives on support vector machines

Regularized least squares

Regularized least squares (RLS) is a family of methods for solving the least-squares problem while using regularization to further constrain the resulting solution.

See Support vector machine and Regularized least squares

Relevance vector machine

In mathematics, a Relevance Vector Machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification. Support vector machine and Relevance vector machine are classification algorithms.

See Support vector machine and Relevance vector machine

Ridge regression

Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated.

See Support vector machine and Ridge regression
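
With design matrix X, response \mathbf{y}, and regularization strength \lambda > 0, the ridge estimator has the closed form

\hat{\boldsymbol{\beta}} = (X^\top X + \lambda I)^{-1} X^\top \mathbf{y}.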

Scikit-learn

scikit-learn (formerly scikits.learn and also known as sklearn) is a free and open-source machine learning library for the Python programming language.

See Support vector machine and Scikit-learn

Semantic role labeling

In natural language processing, semantic role labeling (also called shallow semantic parsing or slot-filling) is the process that assigns labels to words or phrases in a sentence that indicate their semantic role in the sentence, such as that of an agent, goal, or result.

See Support vector machine and Semantic role labeling

Sequential minimal optimization

Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support-vector machines (SVM). Support vector machine and Sequential minimal optimization are support vector machines.

See Support vector machine and Sequential minimal optimization

Shogun (toolbox)

Shogun is a free, open-source machine learning software library written in C++.

See Support vector machine and Shogun (toolbox)

Sigmoid function

A sigmoid function is any mathematical function whose graph has a characteristic S-shaped or sigmoid curve.

See Support vector machine and Sigmoid function
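
The most common example is the logistic function

\sigma(x) = \frac{1}{1 + e^{-x}},

which maps the real line onto the interval (0, 1).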

Sign function

In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that has the value 1, -1, or 0 according to whether the sign of a given real number is positive or negative, or the given number is itself zero.

See Support vector machine and Sign function
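
Written out piecewise:

\operatorname{sgn}(x) = \begin{cases} -1 & x < 0 \\ 0 & x = 0 \\ 1 & x > 0 \end{cases};

an SVM predicts with \operatorname{sgn}(\mathbf{w}^\top \mathbf{x} - b).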

Space mapping

The space mapping methodology for modeling and design optimization of engineering systems was first discovered by John Bandler in 1993.

See Support vector machine and Space mapping

Statistical classification

When classification is performed by a computer, statistical methods are normally used to develop the algorithm. Support vector machine and statistical classification are classification algorithms.

See Support vector machine and Statistical classification

Stochastic gradient descent

Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable).

See Support vector machine and Stochastic gradient descent
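
A minimal sketch of SGD applied to the regularized hinge loss of a linear SVM, in the style of the Pegasos algorithm; the data, step-size schedule, and constants are illustrative assumptions:

    # Pegasos-style SGD on the regularized hinge loss (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # labels in {-1, +1}

    lam, w = 0.01, np.zeros(2)
    for t in range(1, 10001):
        i = rng.integers(len(X))         # pick one random example
        eta = 1.0 / (lam * t)            # decaying step size
        margin = y[i] * X[i].dot(w)
        w = (1 - eta * lam) * w          # gradient of the L2 regularizer
        if margin < 1:                   # hinge active: subgradient -y_i x_i
            w += eta * y[i] * X[i]
    print((np.sign(X.dot(w)) == y).mean())  # training accuracy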

Structured prediction

Structured prediction or structured (output) learning is an umbrella term for supervised machine learning techniques that involve predicting structured objects, rather than scalar discrete or real values.

See Support vector machine and Structured prediction

Subderivative

In mathematics, the subderivative (or subgradient) generalizes the derivative to convex functions which are not necessarily differentiable.

See Support vector machine and Subderivative

Subgradient method

Subgradient methods are convex optimization methods which use subderivatives.

See Support vector machine and Subgradient method
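
The generic update step replaces the gradient with any subgradient:

\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha_k \mathbf{g}_k, \qquad \mathbf{g}_k \in \partial f(\mathbf{x}_k);

because the hinge loss admits a valid subgradient even at its kink, SVM objectives can be optimized this way.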

Supervised learning

Supervised learning (SL) is a paradigm in machine learning where input objects (for example, a vector of predictor variables) and a desired output value (also known as human-labeled supervisory signal) train a model.

See Support vector machine and Supervised learning

Synthetic-aperture radar

Synthetic-aperture radar (SAR) is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes.

See Support vector machine and Synthetic-aperture radar

Transduction (machine learning)

In logic, statistical inference, and supervised learning, transduction or transductive inference is reasoning from observed, specific (training) cases to specific (test) cases.

See Support vector machine and Transduction (machine learning)

Unit of observation

In statistics, a unit of observation is the unit described by the data that one analyzes.

See Support vector machine and Unit of observation

Unsupervised learning

Unsupervised learning is a method in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data.

See Support vector machine and Unsupervised learning

Vapnik–Chervonenkis theory

Vapnik–Chervonenkis theory (also known as VC theory) was developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis.

See Support vector machine and Vapnik–Chervonenkis theory

Vladimir Vapnik

Vladimir Naumovich Vapnik (Владимир Наумович Вапник; born 6 December 1936) is a computer scientist, researcher, and academic.

See Support vector machine and Vladimir Vapnik

Weak supervision

Weak supervision is a paradigm in machine learning, the relevance and notability of which increased with the advent of large language models due to the large amount of data required to train them.

See Support vector machine and Weak supervision

Weka (software)

Waikato Environment for Knowledge Analysis (Weka) is a collection of machine learning and data analysis free software licensed under the GNU General Public License.

See Support vector machine and Weka (software)

Winnow (algorithm)

The winnow algorithm, introduced by Nick Littlestone (1988), is a technique from machine learning for learning a linear classifier from labeled examples. Support vector machine and winnow (algorithm) are classification algorithms.

See Support vector machine and Winnow (algorithm)

See also

Support vector machines

References

[1] https://en.wikipedia.org/wiki/Support_vector_machine

Also known as Applications of support vector machines, Support Vector Machines, Support vector classifier, Support vector method, Support vector regression, Support-vector machine, Svm (learning), Svm (machine learning), Transductive Support Vector Machine.
