Various Flashcards
ODBC
An ODBC (Open Database Connectivity) connection is a standard software interface that allows applications to access data in database management systems (DBMS) using SQL as a standard for accessing the data. ODBC manages this by inserting a middle layer, called a database driver, between an application and the DBMS. The purpose of this layer is to translate the application’s data queries into commands that the DBMS understands.
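A minimal sketch of how an application might use such a connection from Python via the pyodbc module; the driver name, server, table, and credentials below are placeholder assumptions, not values from any particular product:

import pyodbc

# The connection string names the driver (the middle layer) that translates
# the application's SQL into commands the DBMS understands.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-host;DATABASE=exampledb;UID=user;PWD=secret"
)
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM customers WHERE id = ?", 42)  # parameterized query
for row in cursor.fetchall():
    print(row.id, row.name)
conn.close()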
transformer
A transformer is a deep learning architecture developed by Google and based on the multi-head attention mechanism, proposed in the 2017 paper “Attention Is All You Need”.[1] Text is converted to numerical representations called tokens, and each token is converted into a vector by looking it up in a word embedding table.[1] At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished. The architecture builds on the softmax-based attention mechanism proposed by Bahdanau et al. in 2014 for machine translation,[2][3] and on the Fast Weight Controller, proposed in 1992, which is similar to a transformer.[4][5][6]
Transformers have the advantage of having no recurrent units and thus require less training time than earlier recurrent neural architectures such as long short-term memory (LSTM).[7] Later variants have been widely adopted for training large language models (LLMs) on large (language) datasets, such as the Wikipedia corpus and Common Crawl.[8]
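As a rough illustration of the attention mechanism described above, here is a toy scaled dot-product self-attention in Python with NumPy; the token count, embedding size, and random inputs are arbitrary:

import numpy as np

def attention(Q, K, V):
    # Each query is compared against every key; softmax turns the scores into
    # weights, and the output is a weighted mix of the value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

X = np.random.rand(3, 4)   # three tokens, each a 4-dimensional embedding
out = attention(X, X, X)   # self-attention: Q, K, and V all come from the same tokens
print(out.shape)           # (3, 4): one contextualized vector per token

In a real transformer this runs in several heads in parallel (multi-head attention), with learned projection matrices producing Q, K, and V from the token embeddings.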
Transformer
Transformers have emerged as a monumental breakthrough in artificial intelligence, particularly in natural language processing (NLP).
By effectively managing sequential data through their unique self-attention mechanism, these models have outperformed traditional RNNs. Their ability to handle long sequences more efficiently and parallelize data processing significantly accelerates training.
Pioneering models like Google’s BERT and OpenAI’s GPT series exemplify the transformative impact of Transformers in enhancing search engines and generating human-like text.
As a result, they have become indispensable in modern machine learning, driving forward the boundaries of AI and opening new avenues in technological advancements.
Application
An application program (software application, or application, or app for short) is a computer program designed to carry out a specific task other than one relating to the operation of the computer itself,[1] typically to be used by end-users.[2] Word processors, media players, and accounting software are examples. The collective noun “application software” refers to all applications collectively.[3] The other principal classifications of software are system software, relating to the operation of the computer, and utility software (“utilities”).
IBM LinuxONE
IBM LinuxONE Server
Linux Workloads: Optimized for Linux workloads, offering a high-performance, secure, and scalable environment for running open-source applications.
Cloud and Hybrid Cloud Environments: Suitable for organizations looking to integrate their on-premises infrastructure with cloud or hybrid cloud environments, providing flexibility and scalability.
Cost-Effective Scaling: Provides a scalable and cost-effective solution for growing Linux-based applications and databases without the complexity of traditional server farms.
Security and Isolation: Features strong isolation and encryption capabilities to protect workloads, making it suitable for handling sensitive data and multi-tenancy scenarios.
Energy Efficiency: Designed to be energy-efficient, reducing the total cost of ownership for businesses focused on sustainability and operational costs.
z16 vs LinuxONE
Decision Factors
Workload Characteristics: Assess the nature of your workloads. If you have high-volume transactional workloads or need to run mixed workloads including traditional mainframe applications, z16 might be the better choice. For Linux-specific applications, LinuxONE could be more suitable.
Security Requirements: Consider the level of security needed. Both platforms offer robust security features, but the z16 has additional capabilities like quantum-safe cryptography.
Scalability and Performance Needs: Evaluate your scalability requirements. If you need to scale vertically and manage massive workloads efficiently, the z16 has an edge. LinuxONE offers excellent scalability within Linux environments.
Cost Considerations: Consider both upfront and ongoing costs. LinuxONE might offer a more cost-effective solution for Linux workloads, while z16 could provide better value for mixed and high-volume transactional workloads.
Future Growth: Think about future growth and potential changes in your workloads. Flexibility and the ability to adapt to changing needs are crucial.
Kernel
A kernel is the core component of an operating system (OS) that manages the system’s operations and hardware. It acts as a bridge between applications and the actual data processing done at the hardware level. The kernel has complete control over the system and is responsible for managing resources efficiently and ensuring smooth operation.
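As a small illustration of that bridging role, the ordinary file operations below each end up as a system call (open, write, close) that the kernel services on the application's behalf; the filename is just an example:

import os

# Each call crosses from user space into the kernel via a system call;
# the kernel, not the application, drives the disk hardware.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT, 0o644)  # open(2)
os.write(fd, b"hello from user space\n")                      # write(2)
os.close(fd)                                                  # close(2)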
Homomorphic encryption
Homomorphic encryption is the conversion of data into ciphertext that can be analyzed and worked with as if it were still in its original form. Homomorphic encryption enables complex mathematical operations to be performed on encrypted data without compromising the encryption.
In mathematics, homomorphic describes the transformation of one data set into another while preserving relationships between elements in both sets. The term is derived from the Greek words for same structure. Because the data in a homomorphic encryption scheme retains the same structure, identical mathematical operations will provide equivalent results – regardless of whether the action is performed on encrypted or decrypted data.
Homomorphic encryption differs from typical encryption methods because it enables mathematical computations to be performed directly on the encrypted data, which can make the handling of user data by third parties safer. Homomorphic encryption schemes are designed so that an unlimited number of operations, such as additions, can be performed on encrypted data without ever decrypting it.
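A self-contained toy sketch of the additive property, using a miniature Paillier-style cryptosystem in Python; the primes are deliberately tiny and the code is illustrative only, not a secure implementation:

import math, random

p, q = 47, 59                       # toy primes; real keys use large random primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = encrypt(20), encrypt(22)
print(decrypt(a * b % n2))          # 42: the addition happened on encrypted values

Multiplying the two ciphertexts yields a valid encryption of the sum of the plaintexts, so a third party could add the numbers without ever seeing them.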
stored program computer model
The stored program computer model, also known as the stored program concept or von Neumann architecture (named after the mathematician and physicist John von Neumann, who contributed to its definition), is a fundamental principle of modern computers. This concept involves storing computer programs in the same memory that holds data, allowing the program itself to be treated as data: read, written, and modified. This architecture is the foundation for virtually all contemporary computers and provides a flexible and efficient way to execute a wide variety of programs.
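A minimal sketch of the idea in Python: a toy machine whose instructions and data occupy the same memory list, so the program can be inspected and changed like any other data (the instruction set and program are invented for illustration):

# Toy stored-program machine: code and data share one memory.
memory = [
    ("LOAD", 6),    # 0: acc = memory[6]
    ("ADD", 7),     # 1: acc += memory[7]
    ("STORE", 8),   # 2: memory[8] = acc
    ("PRINT", 8),   # 3: print memory[8]
    ("HALT", 0),    # 4: stop
    ("HALT", 0),    # 5: spare cell
    20, 22, 0,      # 6-8: data cells living in the same memory as the code
]

acc, pc = 0, 0
while True:
    op, arg = memory[pc]     # instructions are fetched from memory just like data
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "PRINT":
        print(memory[arg])   # prints 42
    elif op == "HALT":
        break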
DR test
A disaster recovery (DR) test is an exercise that verifies an organization can restore its systems, applications, and data after an outage, confirming that the documented recovery plan actually works.
TFLOPS
TFLOPS stands for tera floating-point operations per second. It’s a measure of a computer’s performance, specifically its ability to perform floating-point calculations. Floating-point calculations can involve real numbers spanning a very wide range of values, and they’re fundamental to scientific, engineering, and graphics computations. A short worked example follows the breakdown below.
Here’s a breakdown of the term:
Tera-: This is a metric prefix that stands for trillion. So, one tera- equals 1,000,000,000,000 (10¹²).
Floating Point Operations: These are arithmetic operations on numbers represented with a significand and an exponent (similar to scientific notation), which lets them cover a very wide range of values.
Per Second: This indicates the number of operations that can be performed in one second.
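As a rough worked example, the snippet below times a large matrix multiplication in Python with NumPy and estimates the achieved rate; the matrix size is arbitrary, and the number printed depends entirely on the machine running it:

import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
a @ b                                   # about 2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"{flops / 1e12:.3f} TFLOPS")     # 1 TFLOPS = 10**12 floating-point operations per second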
Power Systems
The IBM Power Systems are a family of server computers from IBM that are based on the company’s Power processors. These systems are known for their robust performance and are commonly used in enterprise environments for complex, mission-critical applications. Here’s a brief overview of some of the key features of IBM Power Systems:
Performance: IBM Power Systems are designed for high performance. They are built on IBM’s POWER architecture, which is a Reduced Instruction Set Computing (RISC) architecture. This architecture is optimized for handling large volumes of data and complex computing tasks.
IBM Power10
IBM Power10-based systems allow customers to run more container software on fewer servers, delivering significant improvements in performance and economics for cloud native applications – and a compelling set of reasons to move forward with application modernization.
With Red Hat OpenShift running on Power10, customers can take advantage of a powerful and flexible platform for modernizing their applications, as well as developing and deploying new cloud native apps in a hybrid cloud infrastructure. Power10-based systems support end-to-end security with accelerated cryptographic performance, transparent memory encryption, and enhanced defense against return-oriented programming attacks.
ONNX
The Open Neural Network Exchange (ONNX) [ˈɒnɪks][2] is an open-source artificial intelligence ecosystem[3] of technology companies and research organizations that establish open standards for representing machine learning algorithms and software tools to promote innovation and collaboration in the AI sector. ONNX is available on GitHub.
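A minimal usage sketch in Python, assuming a model has already been exported to a file named model.onnx with an input tensor called "input" (both names and the input shape are assumptions for illustration):

import numpy as np
import onnx
import onnxruntime as ort

model = onnx.load("model.onnx")
onnx.checker.check_model(model)         # verify the graph is well formed per the ONNX spec

session = ort.InferenceSession("model.onnx")
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)   # example image-like input
outputs = session.run(None, {"input": dummy})
print(outputs[0].shape)

Because the .onnx file is a standard representation, the same model can be produced by one framework (for example, PyTorch or TensorFlow) and executed by any runtime that supports the format.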
Power10
Power10 delivers faster business insights by running AI “in place” with four new Matrix Math Accelerator (MMA) units in each Power10 core. MMAs provide an alternative to external accelerators, such as GPUs, and related device management, for execution of statistical machine learning and inferencing (scoring) workloads. This reduces costs and leads to a greatly simplified solution stack for AI.