OpenMP, the Glossary

Index OpenMP

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows.[1]
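
As a concrete illustration of the programming model (a minimal sketch, not drawn from the article itself), a serial C loop becomes parallel by adding a single compiler directive; the reduction clause merges each thread's partial sum:

    /* Minimal OpenMP example; compile with e.g. gcc -fopenmp sum.c */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        double sum = 0.0;
        /* Split the iterations across threads; combine the
           per-thread partial sums with a reduction. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / i;
        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }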

Table of Contents

  1. 99 relations: Absoft, ACCU (organisation), AMD, Amdahl's law, API, Arm DDT, Arm Holdings, Arm MAP, Automatic parallelization, Bulk synchronous parallel, C (programming language), C++, Chapel (programming language), Cilk, Compare-and-swap, Computer cluster, Concurrency (computer science), Concurrent computing, Consortium, Cray, Cross-platform software, Data parallelism, Desktop computer, Differential equation, Directive (programming), Distributed shared memory, Embarrassingly parallel, Environment variable, False sharing, Field-programmable gate array, Fold (higher-order function), Fork (system call), Fortran, FreeBSD, Fujitsu, Function (computer programming), General-purpose computing on graphics processing units, GNU Compiler Collection, Granularity (parallel computing), Hardware acceleration, Heterogeneous System Architecture, Hewlett-Packard, HP-UX, IBM, IBM AIX, Include directive, Instruction set architecture, Intel, Intel Advisor, Intel C++ Compiler, Intel Parallel Studio, Intel Xe, Library (computing), Linearizability, Linux, Load balancing (computing), MacOS, Map (parallel pattern), Memory bandwidth, Message Passing Interface, Microsoft Windows, Multiprocessing, NEC, Nonprofit organization, Numerical analysis, Numerical integration, Nvidia, Operating system, Oracle Corporation, Oracle Developer Studio, Oracle Solaris, Parallel computing, Parallel programming model, Partitioned global address space, Processor affinity, Programmer, Pthreads, Race condition, Red Hat, Rogue Wave Software, ROSE (compiler framework), Runtime system, SequenceL, Shared memory, Single instruction, multiple data, Single program, multiple data, Software portability, Speedup, Standard streams, Supercomputer, Symmetric multiprocessing, Task parallelism, Texas Instruments, The Portland Group, Thread (computing), Unified Parallel C, VTune, X10 (programming language), X86.

  2. Fortran

Absoft

Absoft Corporation was an American software company active from 1980 to 2022.

See OpenMP and Absoft

ACCU (organisation)

ACCU, previously known as the Association of C and C++ Users, is a non-profit user group of people interested in software development, dedicated to raising the standard of computer programming.

See OpenMP and ACCU (organisation)

AMD

Advanced Micro Devices, Inc. (AMD) is an American multinational corporation and fabless semiconductor company based in Santa Clara, California, that designs, develops and sells computer processors and related technologies for business and consumer markets.

See OpenMP and AMD

Amdahl's law

In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved.
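
The formula itself, in its standard form, where p is the fraction of execution time that benefits from the improvement and s is the speedup of that fraction:

    S_{\text{latency}}(s) = \frac{1}{(1 - p) + \frac{p}{s}}

For example, if 95% of a program parallelizes perfectly (p = 0.95), the overall speedup can never exceed 1/(1 - 0.95) = 20, no matter how many threads OpenMP provides.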

See OpenMP and Amdahl's law

API

An application programming interface (API) is a way for two or more computer programs or components to communicate with each other. OpenMP and API are both application programming interfaces.

See OpenMP and API

Arm DDT

Linaro DDT is a commercial C, C++ and Fortran 90 debugger.

See OpenMP and Arm DDT

Arm Holdings

Arm Holdings plc (formerly an acronym for Advanced RISC Machines and originally Acorn RISC Machine) is a British semiconductor and software design company based in Cambridge, England, whose primary business is the design of central processing unit (CPU) cores that implement the ARM architecture family of instruction sets.

See OpenMP and Arm Holdings

Arm MAP

Arm MAP is an application profiler produced by Allinea Software, now part of Arm.

See OpenMP and Arm MAP

Automatic parallelization

Automatic parallelization, also auto parallelization or autoparallelization, refers to converting sequential code into multi-threaded and/or vectorized code in order to use multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. OpenMP and Automatic parallelization are parallel computing topics.

See OpenMP and Automatic parallelization

Bulk synchronous parallel

The bulk synchronous parallel (BSP) abstract computer is a bridging model for designing parallel algorithms. OpenMP and bulk synchronous parallel are parallel computing topics.

See OpenMP and Bulk synchronous parallel

C (programming language)

C (pronounced like the letter c) is a general-purpose programming language. OpenMP and C (programming language) are in the C programming language family.

See OpenMP and C (programming language)

C++

C++ (pronounced "C plus plus" and sometimes abbreviated as CPP) is a high-level, general-purpose programming language created by Danish computer scientist Bjarne Stroustrup.

See OpenMP and C++

Chapel (programming language)

Chapel, the Cascade High Productivity Language, is a parallel programming language that was developed by Cray, and later by Hewlett Packard Enterprise, which acquired Cray. OpenMP and Chapel (programming language) are in the C programming language family.

See OpenMP and Chapel (programming language)

Cilk

Cilk, Cilk++, Cilk Plus and OpenCilk are general-purpose programming languages designed for multithreaded parallel computing. OpenMP and Cilk are in the C programming language family.

See OpenMP and Cilk

Compare-and-swap

In computer science, compare-and-swap (CAS) is an atomic instruction used in multithreading to achieve synchronization.
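
A sketch of how CAS is typically used (C11 atomics; the function name increment is invented for this example): the update is retried until no other thread has modified the value in between.

    #include <stdatomic.h>

    /* Lock-free increment: retry until our compare-and-swap wins. */
    void increment(atomic_int *counter) {
        int old = atomic_load(counter);
        /* If *counter still equals old, store old + 1; otherwise
           old is refreshed with the current value and we retry. */
        while (!atomic_compare_exchange_weak(counter, &old, old + 1))
            ;
    }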

See OpenMP and Compare-and-swap

Computer cluster

A computer cluster is a set of computers that work together so that they can be viewed as a single system. OpenMP and computer cluster are parallel computing topics.

See OpenMP and Computer cluster

Concurrency (computer science)

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the outcome.

See OpenMP and Concurrency (computer science)

Concurrent computing

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.

See OpenMP and Concurrent computing

Consortium

A consortium is an association of two or more individuals, companies, organizations, or governments (or any combination of these entities) with the objective of participating in a common activity or pooling their resources for achieving a common goal.

See OpenMP and Consortium

Cray

Cray Inc., a subsidiary of Hewlett Packard Enterprise, is an American supercomputer manufacturer headquartered in Seattle, Washington.

See OpenMP and Cray

Cross-platform software

In computing, cross-platform software (also called multi-platform software, platform-agnostic software, or platform-independent software) is computer software that is designed to work on several computing platforms.

See OpenMP and Cross-platform software

Data parallelism

Data parallelism is parallelization across multiple processors in parallel computing environments. OpenMP and Data parallelism are parallel computing topics.

See OpenMP and Data parallelism

Desktop computer

A desktop computer (often abbreviated desktop) is a personal computer designed for regular use at a stationary location on or near a desk (as opposed to a portable computer) due to its size and power requirements.

See OpenMP and Desktop computer

Differential equation

In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives.

See OpenMP and Differential equation

Directive (programming)

In computer programming, a directive or pragma (from "pragmatic") is a language construct that specifies how a compiler (or other translator) should process its input.
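
In OpenMP's case the directives are pragmas, which a compiler is free to ignore: built without OpenMP support, the code below (a sketch; vector_add is an invented name) is an ordinary sequential loop, and with it the iterations are shared among threads.

    void vector_add(int n, const double *b, const double *c, double *a) {
        #pragma omp parallel for   /* advice to the compiler, not a statement */
        for (int i = 0; i < n; i++)
            a[i] = b[i] + c[i];
    }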

See OpenMP and Directive (programming)

Distributed shared memory

In computer science, distributed shared memory (DSM) is a form of memory architecture where physically separated memories can be addressed as a single shared address space.

See OpenMP and Distributed shared memory

Embarrassingly parallel

In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel or pleasingly parallel) is one where little or no effort is needed to split the problem into a number of parallel tasks. OpenMP and embarrassingly parallel are parallel computing topics.

See OpenMP and Embarrassingly parallel

Environment variable

An environment variable is a user-definable value that can affect the way running processes will behave on a computer.
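
OpenMP itself is configured through environment variables such as OMP_NUM_THREADS. A sketch of how a program observes the setting (running it as OMP_NUM_THREADS=4 ./a.out caps the thread team at four):

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const char *v = getenv("OMP_NUM_THREADS");  /* NULL if unset */
        printf("OMP_NUM_THREADS=%s, max threads=%d\n",
               v ? v : "(unset)", omp_get_max_threads());
        return 0;
    }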

See OpenMP and Environment variable

False sharing

In computer science, false sharing is a performance-degrading usage pattern that can arise in systems with distributed, coherent caches at the size of the smallest resource block managed by the caching mechanism.
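
A common OpenMP remedy is to pad per-thread data so that no two threads touch the same cache line. A sketch, where the 64-byte line size and all names are assumptions, not from the article:

    #define CACHE_LINE 64  /* assumed line size for the target CPU */

    struct padded {
        long count;
        char pad[CACHE_LINE - sizeof(long)];  /* keep neighbors on separate lines */
    };

    /* One counter per thread: without the padding, counters for
       different threads would share cache lines and ping-pong
       between cores even though no datum is actually shared. */
    void tally(struct padded *per_thread, int nthreads, long iters) {
        #pragma omp parallel for
        for (int t = 0; t < nthreads; t++)
            for (long i = 0; i < iters; i++)
                per_thread[t].count++;
    }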

See OpenMP and False sharing

Field-programmable gate array

A field-programmable gate array (FPGA) is a type of configurable integrated circuit that can be repeatedly programmed after manufacturing.

See OpenMP and Field-programmable gate array

Fold (higher-order function)

In functional programming, fold (also termed reduce, accumulate, aggregate, compress, or inject) refers to a family of higher-order functions that analyze a recursive data structure and through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value.
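
A sequential left fold in C (a sketch; foldl is an invented name). OpenMP's reduction clause performs the same combining pattern in parallel for built-in operators such as +, *, min and max.

    /* Combine elements left to right with a caller-supplied operation. */
    double foldl(double (*op)(double, double),
                 double init, const double *xs, int n) {
        double acc = init;
        for (int i = 0; i < n; i++)
            acc = op(acc, xs[i]);
        return acc;
    }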

See OpenMP and Fold (higher-order function)

Fork (system call)

In computing, particularly in the context of the Unix operating system and its workalikes, fork is an operation whereby a process creates a copy of itself.
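
The system call itself, for contrast (OpenMP's fork-join model forks threads, not processes, but the metaphor comes from here). A minimal sketch, with error handling omitted:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();          /* duplicates the calling process */
        if (pid == 0) {
            printf("child: pid %d\n", (int)getpid());
        } else {
            wait(NULL);              /* parent waits for the child */
            printf("parent: pid %d\n", (int)getpid());
        }
        return 0;
    }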

See OpenMP and Fork (system call)

Fortran

Fortran (formerly FORTRAN) is a third-generation, compiled, imperative programming language that is especially suited to numeric computation and scientific computing.

See OpenMP and Fortran

FreeBSD

FreeBSD is a free and open-source Unix-like operating system descended from the Berkeley Software Distribution (BSD).

See OpenMP and FreeBSD

Fujitsu

Fujitsu is a Japanese multinational information and communications technology equipment and services corporation, established in 1935 and headquartered in Kawasaki, Kanagawa.

See OpenMP and Fujitsu

Function (computer programming)

In computer programming, a function, procedure, method, subroutine, routine, or subprogram is a callable unit of software logic that has a well-defined interface and behavior and can be invoked multiple times.

See OpenMP and Function (computer programming)

General-purpose computing on graphics processing units

General-purpose computing on graphics processing units (GPGPU, or less often GPGP) is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). OpenMP and General-purpose computing on graphics processing units are parallel computing topics.

See OpenMP and General-purpose computing on graphics processing units

GNU Compiler Collection

The GNU Compiler Collection (GCC) is a collection of compilers from the GNU Project that support various programming languages, hardware architectures and operating systems.

See OpenMP and GNU Compiler Collection

Granularity (parallel computing)

In parallel computing, granularity (or grain size) of a task is a measure of the amount of work (or computation) which is performed by that task.

See OpenMP and Granularity (parallel computing)

Hardware acceleration

Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU).

See OpenMP and Hardware acceleration

Heterogeneous System Architecture

Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allow for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks.

See OpenMP and Heterogeneous System Architecture

Hewlett-Packard

The Hewlett-Packard Company, commonly shortened to Hewlett-Packard or HP, was an American multinational information technology company headquartered in Palo Alto, California.

See OpenMP and Hewlett-Packard

HP-UX

HP-UX (from "Hewlett Packard Unix") is Hewlett Packard Enterprise's proprietary implementation of the Unix operating system, based on Unix System V (initially System III) and first released in 1984.

See OpenMP and HP-UX

IBM

International Business Machines Corporation (using the trademark IBM), nicknamed Big Blue, is an American multinational technology company headquartered in Armonk, New York and present in over 175 countries.

See OpenMP and IBM

IBM AIX

AIX (Advanced Interactive eXecutive) is a series of proprietary Unix operating systems developed and sold by IBM for several of its computer platforms.

See OpenMP and IBM AIX

Include directive

Many programming languages and other computer files have a directive, often called include, import, or copy, that causes the contents of the specified file to be inserted into the original file.

See OpenMP and Include directive

Instruction set architecture

In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers.

See OpenMP and Instruction set architecture

Intel

Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware.

See OpenMP and Intel

Intel Advisor

Intel Advisor (also known as "Advisor XE", "Vectorization Advisor" or "Threading Advisor") is a design assistance and analysis tool for SIMD vectorization, threading, memory use, and GPU offload optimization.

See OpenMP and Intel Advisor

Intel C++ Compiler

Intel oneAPI DPC++/C++ Compiler and Intel C++ Compiler Classic (the deprecated icc and icl, shipped in the Intel oneAPI HPC Toolkit) are Intel's C, C++, SYCL, and Data Parallel C++ (DPC++) compilers for Intel processor-based systems, available for the Windows, Linux, and macOS operating systems.

See OpenMP and Intel C++ Compiler

Intel Parallel Studio

Intel Parallel Studio XE was a software development product developed by Intel that facilitated native code development on Windows, macOS and Linux in C++ and Fortran for parallel computing. OpenMP and Intel Parallel Studio are application programming interfaces.

See OpenMP and Intel Parallel Studio

Intel Xe

Intel Xe (stylized as Xe and pronounced as two separate letters, abbreviation for "exascale for everyone"), earlier known unofficially as Gen12, is a GPU architecture developed by Intel.

See OpenMP and Intel Xe

Library (computing)

In computer science, a library is a collection of read-only resources that is leveraged during software development to implement a computer program.

See OpenMP and Library (computing)

Linearizability

In concurrent programming, an operation (or set of operations) is linearizable if its ordered list of invocation and response events can be extended by adding response events such that the extended list can be re-expressed as a correct sequential history.

See OpenMP and Linearizability

Linux

Linux is both an open-source Unix-like kernel and a generic name for a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds.

See OpenMP and Linux

Load balancing (computing)

In computing, load balancing is the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient.

See OpenMP and Load balancing (computing)

MacOS

macOS, originally Mac OS X, previously shortened as OS X, is an operating system developed and marketed by Apple since 2001.

See OpenMP and MacOS

Map (parallel pattern)

Map is an idiom in parallel computing where a simple operation is applied to all elements of a sequence, potentially in parallel. OpenMP and Map (parallel pattern) are parallel computing topics.
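
The pattern in OpenMP terms (a sketch; map_sqrt is an invented name): because each element is processed independently, the loop parallelizes without any synchronization.

    #include <math.h>

    void map_sqrt(int n, const double *in, double *out) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            out[i] = sqrt(in[i]);   /* each element is independent */
    }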

See OpenMP and Map (parallel pattern)

Memory bandwidth

Memory bandwidth is the rate at which data can be read from or stored into a semiconductor memory by a processor.

See OpenMP and Memory bandwidth

Message Passing Interface

The Message Passing Interface (MPI) is a standardized and portable message-passing standard designed to function on parallel computing architectures. OpenMP and the Message Passing Interface are both application programming interfaces and parallel computing topics.

See OpenMP and Message Passing Interface

Microsoft Windows

Microsoft Windows is a product line of proprietary graphical operating systems developed and marketed by Microsoft.

See OpenMP and Microsoft Windows

Multiprocessing

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. OpenMP and Multiprocessing are parallel computing topics.

See OpenMP and Multiprocessing

NEC

NEC is a Japanese multinational information technology and electronics corporation, headquartered at the NEC Supertower in Minato, Tokyo, Japan.

See OpenMP and NEC

Nonprofit organization

A nonprofit organization (NPO), also known as a nonbusiness entity, nonprofit institution, or simply a nonprofit (using the adjective as a noun), is a legal entity organized and operated for a collective, public or social benefit, as opposed to an entity that operates as a business aiming to generate a profit for its owners.

See OpenMP and Nonprofit organization

Numerical analysis

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).

See OpenMP and Numerical analysis

Numerical integration

In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral.
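
Numerical integration is a standard OpenMP demonstration. A sketch using the midpoint rule on 4/(1 + x^2) over [0, 1], whose exact value is pi; the reduction clause sums the per-thread partial results:

    #include <stdio.h>

    int main(void) {
        const int n = 10000000;
        const double h = 1.0 / n;
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) * h;      /* midpoint of slice i */
            sum += 4.0 / (1.0 + x * x);
        }
        printf("pi is approximately %.10f\n", sum * h);
        return 0;
    }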

See OpenMP and Numerical integration

Nvidia

Nvidia Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware.

See OpenMP and Nvidia

Operating system

An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs.

See OpenMP and Operating system

Oracle Corporation

Oracle Corporation is an American multinational computer technology company headquartered in Austin, Texas.

See OpenMP and Oracle Corporation

Oracle Developer Studio

Oracle Developer Studio, formerly named Oracle Solaris Studio, Sun Studio, Sun WorkShop, Forte Developer, and SunPro Compilers, is the Oracle Corporation's flagship software development product for the Solaris and Linux operating systems.

See OpenMP and Oracle Developer Studio

Oracle Solaris

Solaris is a proprietary Unix operating system originally developed by Sun Microsystems.

See OpenMP and Oracle Solaris

Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously.

See OpenMP and Parallel computing

Parallel programming model

In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. OpenMP and parallel programming model are parallel computing topics.

See OpenMP and Parallel programming model

Partitioned global address space

In computer science, partitioned global address space (PGAS) is a parallel programming model paradigm. OpenMP and partitioned global address space are parallel computing topics.

See OpenMP and Partitioned global address space

Processor affinity

Processor affinity, or CPU pinning or "cache affinity", enables the binding and unbinding of a process or a thread to a central processing unit (CPU) or a range of CPUs, so that the process or thread will execute only on the designated CPU or CPUs rather than any CPU.

See OpenMP and Processor affinity

Programmer

A programmer, computer programmer, or coder is an author of computer source code: someone with skill in computer programming.

See OpenMP and Programmer

Pthreads

In computing, POSIX Threads, commonly known as pthreads, is an execution model that exists independently from a programming language, as well as a parallel execution model. OpenMP and pthreads are parallel computing topics.
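
For contrast with OpenMP's directive style, explicit thread management in pthreads looks like this (a minimal sketch; compile with -pthread, error checks omitted):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        (void)arg;                     /* unused in this sketch */
        printf("hello from worker thread\n");
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);       /* wait for the worker to finish */
        return 0;
    }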

See OpenMP and Pthreads

Race condition

A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results.

See OpenMP and Race condition

Red Hat

Red Hat, Inc. (formerly Red Hat Software, Inc.) is an American software company that provides open source software products to enterprises and is a subsidiary of IBM.

See OpenMP and Red Hat

Rogue Wave Software

Rogue Wave Software was an American software development company based in Louisville, Colorado.

See OpenMP and Rogue Wave Software

ROSE (compiler framework)

The ROSE compiler framework, developed at Lawrence Livermore National Laboratory (LLNL), is an open-source software compiler infrastructure to generate source-to-source analyzers and translators for multiple source languages including C (C89, C98, Unified Parallel C (UPC)), C++ (C++98, C++11), Fortran (77, 95, 2003), OpenMP, Java, Python, and PHP.

See OpenMP and ROSE (compiler framework)

Runtime system

In computer programming, a runtime system or runtime environment is a sub-system that exists both in the computer where a program is created, as well as in the computers where the program is intended to be run.

See OpenMP and Runtime system

SequenceL

SequenceL is a general-purpose functional programming language and auto-parallelizing (parallel computing) compiler and tool set, whose primary design objectives are performance on multi-core processor hardware, ease of programming, platform portability/optimization, and code clarity and readability. OpenMP and SequenceL are parallel computing topics.

See OpenMP and SequenceL

Shared memory

In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. OpenMP and shared memory are parallel computing topics.

See OpenMP and Shared memory

Single instruction, multiple data

Single instruction, multiple data (SIMD) is a type of parallel processing in Flynn's taxonomy. OpenMP and Single instruction, multiple data are parallel computing topics.
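
OpenMP exposes SIMD execution directly: since version 4.0, the simd directive asks the compiler to vectorize a loop so that each instruction operates on several elements at once (a sketch; saxpy is a conventional name, not from the article):

    void saxpy(int n, float a, const float *x, float *y) {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }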

See OpenMP and Single instruction, multiple data

Single program, multiple data

In computing, single program, multiple data (SPMD) is a term that has been used to refer to computational models for exploiting parallelism whereby multiple processors cooperate in the execution of a program in order to obtain results faster. OpenMP and single program, multiple data are parallel computing topics.

See OpenMP and Single program, multiple data

Software portability

Software portability is a design objective for source code to be easily made to run on different platforms.

See OpenMP and Software portability

Speedup

In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem.
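
For a parallel program this is usually the ratio of sequential to parallel execution time; in LaTeX notation:

    S_p = \frac{T_1}{T_p}

where T_1 is the run time on one processor and T_p the run time on p processors, so S_p = p would be ideal (linear) speedup.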

See OpenMP and Speedup

Standard streams

In computer programming, standard streams are preconnected input and output communication channels between a computer program and its environment when it begins execution.

See OpenMP and Standard streams

Supercomputer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. OpenMP and supercomputer are parallel computing topics.

See OpenMP and Supercomputer

Symmetric multiprocessing

Symmetric multiprocessing or shared-memory multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes. OpenMP and Symmetric multiprocessing are parallel computing topics.

See OpenMP and Symmetric multiprocessing

Task parallelism

Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. OpenMP and Task parallelism are parallel computing topics.

See OpenMP and Task parallelism

Texas Instruments

Texas Instruments Incorporated (TI) is an American multinational semiconductor company headquartered in Dallas, Texas.

See OpenMP and Texas Instruments

The Portland Group

PGI (formerly The Portland Group, Inc.) was a company that produced a set of commercially available Fortran, C and C++ compilers for high-performance computing systems.

See OpenMP and The Portland Group

Thread (computing)

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.

See OpenMP and Thread (computing)

Unified Parallel C

Unified Parallel C (UPC) is an extension of the C programming language designed for high-performance computing on large-scale parallel machines, including those with a common global address space (SMP and NUMA) and those with distributed memory (e.g., clusters). OpenMP and Unified Parallel C are in the C programming language family and are parallel computing topics.

See OpenMP and Unified Parallel C

VTune

VTune Profiler (formerly VTune Amplifier) is a performance analysis tool for x86-based machines running Linux or Microsoft Windows operating systems.

See OpenMP and VTune

X10 (programming language)

X10 is a programming language being developed by IBM at the Thomas J. Watson Research Center as part of the Productive, Easy-to-use, Reliable Computing System (PERCS) project funded by DARPA's High Productivity Computing Systems (HPCS) program.

See OpenMP and X10 (programming language)

X86

x86 (also known as 80x86 or the 8086 family) is a family of complex instruction set computer (CISC) instruction set architectures initially developed by Intel based on the 8086 microprocessor and its 8-bit-external-bus variant, the 8088.

See OpenMP and X86

See also

Fortran

References

[1] https://en.wikipedia.org/wiki/OpenMP

Also known as Open MP, TCMP.
