This post provides an A-Z guide of terms associated with Kubernetes and Cloud Native. We have compiled a list of over 100 terms to help you understand the terminology you need to start learning about Kubernetes and Cloud Native.
A | Definition |
---|---|
Abstraction | An abstraction is the process of simplifying complex details by hiding them behind easy-to-use interfaces. It allows developers and users to work with higher-level concepts without needing to understand the underlying complexities. |
Access Control | When we talk about Access Control, there are three stages that need to be considered: authentication, authorization, and admission. During authentication, incoming requests are checked to verify if the provided credentials are valid. If the credentials are invalid, the request is rejected at this stage. Authorization determines whether the authenticated user has permission to perform the requested operation. If authorization is denied, the request is rejected. The final stage is admission, which involves admission controllers and policies applied to the cluster. Requests that comply with the defined policies are allowed, while those that violate the policies are rejected. These three stages determine who can view, manage, and use resources in a computer environment. |
API Gateway | If you have many different parts of a software system working together, an API Gateway can help simplify things. It acts as a central point where requests from users or other services come in. It typically performs request processing based on defined policies, including authentication, authorization, access control, SSL/TLS offloading, routing, and load balancing. The API Gateway then combines the necessary responses from various parts of the system and sends back a single response to the user. This way, the user doesn't need to make multiple requests or even be aware of the different parts working behind the scenes. The API Gateway helps streamline the process and makes it easier to manage and use the system. |
Application Programming Interfaces (API) | An API is designed to make things simpler for developers by exposing data and functionality through service interfaces. This allows other applications to interact with the code directly, making components reusable and open. |
Application Performance Monitoring (APM) | APM tools assist businesses in ensuring that they are getting the most value and performance out of their application suite. When combined with KPIs and service level objectives, APM can be used to alert when a service or part of a service falls below the defined level of performance. This encompasses application service and performance monitoring rather than metrics such as CPU usage, network saturation, and memory pressure. |
Authentication | Authentication is the process of verifying the identity of a user and is the first step in granting an individual access to a network or a system. Kubernetes authentication refers to verifying the identity of users who want to access the Kubernetes API, using usernames and passwords, client certificates, tokens, etc. |
Authorization | Authorization is the process that comes after authentication in Kubernetes. It determines what actions a user or service account is allowed to perform within the cluster. Different authorization modes can be used, such as RBAC (role-based access control), ABAC (attribute-based access control), and webhook. RBAC allows you to define roles and bind them to users, granting permissions on cluster resources. Webhook mode relies on an external service to decide whether a request should be allowed or not. There are also two special modes, AlwaysAllow and AlwaysDeny, mainly used for testing purposes. The authorization mode needs to be specified using the --authorization-mode flag when starting the API server. |
Artificial Intelligence for IT Operations (AIOps) | AIOps combines data analytics and machine learning to improve IT operations. Its goal is to enhance and streamline IT tasks using automation and advanced analytics. Using artificial intelligence techniques, it aids in problem-solving and offers predictive insights. |
Autoscaling | Autoscaling is a feature that adjusts resources like server capacity, virtual machines, and storage based on business needs. It automatically adds or removes resources as demand fluctuates, optimizing costs and ensuring smooth performance for the company. |
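To make the Access Control entry above concrete, here is a minimal Python sketch of the three stages (authentication, authorization, admission). The tokens, permission table, and replica policy are hypothetical illustrations, not a real Kubernetes API:

```python
# A toy three-stage access-control pipeline: authentication -> authorization
# -> admission. All names and policies here are illustrative only.

VALID_TOKENS = {"token-alice": "alice"}                          # authentication data
PERMISSIONS = {"alice": {("get", "pods"), ("create", "pods")}}   # authorization data
MAX_REPLICAS = 5                                                 # an admission policy

def handle_request(token, verb, resource, replicas=1):
    # Stage 1: authentication - are the credentials valid?
    user = VALID_TOKENS.get(token)
    if user is None:
        return "401 Unauthorized"
    # Stage 2: authorization - may this user perform this operation?
    if (verb, resource) not in PERMISSIONS.get(user, set()):
        return "403 Forbidden"
    # Stage 3: admission - does the request comply with cluster policy?
    if replicas > MAX_REPLICAS:
        return "400 Denied by admission policy"
    return "200 OK"
```

A request is rejected at the first stage it fails, exactly as described above: bad credentials never reach authorization, and an unauthorized verb never reaches admission.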
B | Definition |
---|---|
Backend database | A backend database is one that users can access through front-end programs like web applications and mobile applications rather than directly through internal application programming or by manipulating the data on a low level using SQL commands. Simply put, whenever a customer searches for something on your front-end applications, the backend database will store data about the price, keywords, photos, etc. |
Backend-as-a-Service (BaaS) | Thanks to BaaS, front-end developers can concentrate on creating the components of an application that consumers see and interact with. Important tasks like managing events, storing data, and handling the background operations of the application are handled by the BaaS provider. Because of this, front-end developers can focus on building a great user experience without worrying about the technical details of how everything connects behind the scenes. |
Bare Metal Machine | A "bare metal machine" is a physical server that runs its operating system directly on the hardware, without a virtualization layer. It provides total access to the CPU, memory, and storage of the underlying hardware, which greatly benefits resource-intensive applications and high-performance computing. In other words, a bare metal machine is a computer that does not run within a hypervisor or as a virtual machine, because the operating system is installed directly on the hardware. |
Big Data | For organizational projects, machine learning modeling, and other analytical uses, big data is the accumulation of structured, semi-structured, and unstructured data. Big data can be derived from various sources, including commercial transaction systems, logs, social networks, and medical records. The three V’s - volume of the data (in gigabytes and terabytes), variety of data types, and velocity of the processing speed—are frequently used to describe this. This generally aids companies in streamlining operations, enhancing customer service, etc. |
Block Storage | Block Storage is a convenient and flexible way of managing additional storage for your instances. Block Storage is configured in units known as volumes. Volumes function as block devices, meaning they appear to the operating system as locally attached storage drives that can be partitioned and formatted to your individual needs. |
Blue Green Deployment | Running two identical production environments, referred to as Blue and Green, is a strategy called "blue-green deployment" that lowers risk and downtime. This method lowers the risk during the deployment phase of business applications because you can quickly switch back to the other environment if you experience a problem with one version. |
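The blue-green deployment strategy above can be sketched as a simple traffic switch between two identical environments. The router class and environment names here are hypothetical:

```python
# A toy blue-green router: two environments exist at once, and all
# traffic flips between them in one step.

class BlueGreenRouter:
    def __init__(self):
        self.environments = {"blue": "app-v1", "green": "app-v2"}
        self.live = "blue"           # traffic currently goes to blue

    def serve(self):
        return self.environments[self.live]

    def switch(self):
        # Instant cutover; if the new version misbehaves, calling
        # switch() again rolls traffic back just as quickly.
        self.live = "green" if self.live == "blue" else "blue"

router = BlueGreenRouter()
assert router.serve() == "app-v1"
router.switch()                       # cut over to green
assert router.serve() == "app-v2"
router.switch()                       # roll back to blue
assert router.serve() == "app-v1"
```

The low risk comes from the fact that the rollback is the same cheap operation as the cutover.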
C | Definition |
---|---|
Cloud Computing | Cloud computing refers to anything that provides computer services through the internet, including servers, storage, databases, and networking. Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS) are the three services that make up this delivery model. You normally only pay for the services you really utilize with cloud computing. Your operational costs will go down as a result, allowing your infrastructure to run more effectively. |
Cloud Native Technology | Cloud native technology is the approach, principles, and practices with which you can build applications by taking advantage of the cloud computing delivery model. As a result, you can create flexible, scalable, and resilient applications with cloud-native architecture, helping you bring new market ideas and faster responses to customer demands. With the cloud-native architecture, you can build applications suitable for running on both public and private clouds. It incorporates the concepts of DevOps, continuous delivery, microservices, and containers. This can offer on-demand access to computing power and applications for developers. |
Cloud Service Provider (CSP) | Cloud Service Providers (CSPs) are companies that provide components of cloud computing services based on the cloud computing delivery model, such as Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS). They have their own data centers where they can host storage, manage, and compute resources for their customers' cloud computing infrastructure and platform services. Businesses can benefit from scalability and flexibility by not being limited by on-premises servers when using CSPs. They can personalize by configuring the servers to their liking, which aids in responsive load balancing. The majority of CSPs provide a pay-as-you-go subscription model. This means that customers will only have to pay for the resources they have used, such as the time a service is used. |
Cloud Storage | Cloud Storage is a service model in which data transmitted and used during cloud computing is backed up, managed, and accessed on remote storage systems. It has an accessible interface with proper elasticity and scalability, meaning you can increase or decrease storage based on your demands. In addition, it has the ability to serve multiple customers at once. |
Cluster | A cluster can be defined as a set of nodes, servers, or machines that act as a single system to maintain a high availability and fault tolerance. It means that if one node goes down, the other ones will keep on working. This allows the application to continue being up and running. |
Command Line Interface (CLI) | Command Line Interface (CLI) is a text-based user interface that is used to communicate through text commands with a computer’s operating system or a software application. The commands, along with options and parameters, are usually written in a console or a terminal. CLIs are often used to navigate file systems, launch programs, and run commands on external hosts. Users and system administrators often prefer them as CLIs can be faster and more efficient for tasks that involve automation and scripting. |
Containers | A container is a ready-to-run software package containing everything needed to run applications: the code and any runtime it requires, application and system libraries, and default values for any essential settings. |
Continuous Delivery (CD) | Continuous Delivery (CD) covers how your code will be deployed to Kubernetes or a specific platform, keeping every change that passes the pipeline in a deployable state. It is part of the CI/CD pipeline, but the tooling for CD differs from that for CI. |
Continuous Deployment (CD) | Continuous Deployment (CD) is the practice in which new code changes are deployed to production. It aims to minimize the time between the committing of the code and its availability to the user. It is a fully automated process, and hence, it reduces the risk of human error. |
Continuous Integration (CI) | Continuous Integration (CI) is a series of events in a DevOps lifecycle where developers frequently make changes to the code, and the code gets committed to the repository, followed by automated testing. This is usually paired with a CD process to have these changes automatically deployed to production systems. |
Control Plane | In Kubernetes, the “control plane” is part of a cluster that manages the state of any resources, e.g., worker nodes. By checking on the status of workloads and resources like memory on the nodes through the controller manager, the control plane can ensure that the state of the cluster matches what it was told it should be. If a cluster administrator wants to change the deployment of resources on the cluster, the control plane makes the necessary changes once the new state is declared. A Kubernetes control plane receives these declarations through its API server and then uses a scheduler to assign workloads based on resources and any rules you define. |
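The Control Plane entry describes matching the cluster's actual state to its declared state. Here is a minimal, illustrative sketch of that reconciliation idea; real controllers watch the API server, while this toy just diffs two dictionaries of replica counts:

```python
# A toy reconciliation loop: compare desired state with observed state
# and emit only the actions needed to close the gap.

def reconcile(desired, observed):
    """Return the actions needed to move `observed` toward `desired`."""
    actions = []
    for workload, replicas in desired.items():
        current = observed.get(workload, 0)
        if current < replicas:
            actions.append(("scale-up", workload, replicas - current))
        elif current > replicas:
            actions.append(("scale-down", workload, current - replicas))
    return actions

desired = {"web": 3, "worker": 2}     # what the administrator declared
observed = {"web": 1, "worker": 4}    # what is actually running
print(reconcile(desired, observed))
# -> [('scale-up', 'web', 2), ('scale-down', 'worker', 2)]
```

Running the loop again after the actions are applied would return an empty list, which is the steady state the control plane works to maintain.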
D | Definition |
---|---|
Data Plane | In the service mesh ecosystem, the data plane is the set of sidecar proxies that attach to the application pods and provide common features such as observability, automatic TLS, routing, and load balancing. |
Database as a Service (DBaaS) | Database as a Service (DBaaS) is a cloud computing service that allows users to run databases in the cloud without having to purchase hardware, manage the software setup, or other specialized tasks that are required if creating a database server from scratch. As these databases are part of a cloud provider's managed service, they are offered on demand and can be in varying flavors and configurations. |
Debugging | Debugging has a broad meaning: application developers use various approaches to debug issues in their applications, while operators debug issues at the infrastructure level. Because problems occur frequently and are rarely straightforward, troubleshooting a Kubernetes cluster is a non-trivial effort. As a result, there are numerous approaches and methods for detecting faults in a cluster, such as the kubectl debug command, kubectl describe to see events, kubectl logs to see pod logs, and so on. |
Deep Learning | Deep learning is a subset of machine learning that uses artificial intelligence and neural networks accompanied by many layers to extract higher-level features from data. Processing multiple layers of input data from large datasets in iteration helps it recognize patterns and features in data. Deep learning models are built using frameworks and libraries and used for tasks like image and speech recognition, natural language processing, and others. |
DevOps | At its core, DevOps is a dynamic, agile relationship that bridges the gap between development and operations within an organization. It is not merely a set of tools but a cultural shift that promotes a collaborative approach to delivering software quickly, efficiently, and reliably. It cultivates a culture where constant refinement and evolution are the norms, creating superior, innovative products that go beyond satisfying customer requirements. The built-in agility that comes with DevOps offers businesses a competitive edge. It empowers them to swiftly adapt to market alterations, secure and keep their customers, and boost overall productivity. |
DevSecOps | DevSecOps is a philosophy that integrates security practices within the DevOps framework. It aims to embed security in every part of the development process, ensuring fast, safe code delivery, and reducing security vulnerabilities in the application lifecycle. |
Docker | Docker is a containerization platform that helps build, deploy, and manage containers. With Docker, developers can package and run applications alongside the dependencies required to run them into loosely isolated environments called containers. Docker helps in the quicker delivery of your applications by isolating them from the infrastructure and allows for a significant reduction in delays between writing codes and running them in production. |
E | Definition |
---|---|
Edge Computing | Edge computing can be referred to as the computing that takes place near the data source or the physical location of the data. It involves transferring application workloads from the cloud to remote sites. As computing happens very close to the data, latency gets reduced, facilitating faster data transmission between distant places by lessening the delay and, thus, helping with better communication. Due to the low latency, edge computing is highly compatible with IoT devices, robotics, automatic vehicles, etc. |
Egress Controller | Egress is the traffic that leaves a private network for an external network. An egress controller is software that manages traffic leaving the Kubernetes cluster toward external APIs, databases, etc. It intercepts outgoing traffic and routes it to the appropriate destination, and it can also load balance that traffic so it remains performant and reliable. |
Elasticity | Elasticity is the ability of cloud computing to automatically increase or decrease infrastructure resources in response to a sudden spike or drop in requirements. This helps in the efficient management of workloads and minimizes infrastructure costs. Elasticity is not applicable to all kinds of environments, but it is helpful in scenarios where resource requirements fluctuate suddenly and frequently for specific time intervals. |
Event-Driven Architecture | A change in an application state is called an event. The software architecture that promotes the creation, processing, and consumption of events can be termed an event-driven architecture. This type of architecture creates a structure that properly routes events from the source to the receiver and ensures that the services remain decoupled. |
Extensibility | In cloud computing, extensibility is the technology’s capability to add additional features and elements to its structure. With extensibility, a system can have the ability to extend itself with the help of extensions. Extensions can add new functionality to a system and modify existing functionality. |
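The event-driven architecture entry above can be illustrated with a tiny in-memory event bus. The event names and handlers are hypothetical:

```python
# A toy event bus: producers publish events to the bus, which routes them
# to decoupled consumers. Producers never call consumers directly.

class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Route the event to every consumer registered for this type.
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.created", lambda p: received.append(f"email for {p}"))
bus.subscribe("order.created", lambda p: received.append(f"invoice for {p}"))
bus.publish("order.created", "order-42")
assert received == ["email for order-42", "invoice for order-42"]
```

The producer that publishes "order.created" knows nothing about the email or invoice services, which is exactly the decoupling the entry describes.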
F | Definition |
---|---|
Federated Database | A Federated Database is a system in which several self-sustained, functional databases operate together as a single entity. When an application queries a federated database, the system figures out which component database contains the requested data and passes the request to it. It can also be described as database virtualization, since several databases appear as one. |
Firewall | Firewall is a system that helps in filtering network traffic on the basis of specified rules set by the administrator. It can be software, hardware, or a combination of the two. It examines traffic using pre-determined rules and establishes a barrier between secured and controlled internal trusted networks. It blocks suspicious and unworthy traffic and keeps the network safe and secure. |
Full Cycle Development | A full cycle software development is a set of steps in the software development cycle. It begins with the planning phase, then moves on to the development phase, and finally to the maintenance phase. The goal of full cycle development is to ensure that the program fits all of the pre-set requirements. To keep up with changing demands and requirements, this style of software development goes through numerous iterations. |
Function as a Service (FaaS) | Function as a Service (FaaS) is a term often used in the serverless world, where a small piece of your code can be executed on demand. This allows you to have different modules of code deployed and then invoked on the fly. |
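The FaaS idea of deploying small pieces of code and invoking them on demand can be sketched like this; the registry and function names are purely illustrative, not any real FaaS platform's API:

```python
# A toy FaaS registry: functions are registered under names and run only
# when invoked, with no long-running server owned by the developer.

FUNCTIONS = {}

def register(name):
    def wrapper(fn):
        FUNCTIONS[name] = fn
        return fn
    return wrapper

@register("greet")
def greet(event):
    return f"Hello, {event['name']}!"

@register("square")
def square(event):
    return event["x"] ** 2

def invoke(name, event):
    # The platform looks up and executes the function on demand.
    return FUNCTIONS[name](event)

assert invoke("greet", {"name": "Ada"}) == "Hello, Ada!"
assert invoke("square", {"x": 4}) == 16
```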
G | Definition |
---|---|
Gateway API | Intended as an enhancement to Kubernetes service networking, Gateway API aims to bring many of the advanced features of various Kubernetes Ingress Controllers into a generalized specification. Supported by the Kubernetes Networking Special Interest Group (SIG), the specification within Gateway API would allow cluster admins and application builders increased portability, as declarations for networking objects would be standardized and not specific to each Ingress Controller. |
Go Language | Go is an open-source, statically typed, compiled programming language that Google used when they originated Kubernetes. Most Kubernetes subcomponents and the wider ecosystem also use Go. Many developers are attracted to the clean syntax of Go, particularly compared to some other programming languages. The fact that it compiles on nearly any machine makes it invaluable for today's software engineers, many of whom work on scalable, cloud-based environments. And most of all, the code is resource-efficient, simple, and fast, helping developers rapidly get code up and running for the organization. |
gRPC | gRPC is an open-source, high-performance Remote Procedure Call (RPC) framework that can run in any environment, helping to connect services across data centers. With gRPC, you get pluggable support for load balancing, tracing, health checking, and authentication. gRPC supports bi-directional streaming and is applicable in distributed computing to connect devices, mobile applications, and browsers to backend services. |
H | Definition |
---|---|
High Availability | High Availability refers to the quality of computing infrastructure that allows an application, IT system, or other infrastructure to operate continuously, even if one of its components goes down. Mission-critical infrastructures need to continue running without downtime. For example, Kubernetes clusters can be highly available because multiple nodes can run replicas of containerized applications. So, if one node goes down, the other nodes distribute the load among themselves to keep the application running. |
Horizontal Scaling | Horizontal Scaling refers to adding nodes or machines to your infrastructure to meet new demands. If you are hosting an application on a server and it can no longer serve the traffic, adding another server allows the application to handle more traffic. Horizontal scaling can be more difficult to put into effect, and costlier, than vertical scaling. It also requires additional tooling so that engineers can fix software when things go wrong in production. |
Hyper Text Transfer Protocol (HTTP) | HTTP is a protocol that allows for web pages and content to be served across the internet. The protocol allows the transfer of data from the server to the client. HyperText Transfer Protocol Secure (HTTPS) is the secure version of the HTTP protocol using TLS. An additional layer of data security is given by the HTTPS protocol over plain HTTP, and hence, it is more secure. In other words, HTTPS protocol allows data transfer in encrypted form. |
Hybrid Cloud | A hybrid cloud is a combination of public and private clouds. The public cloud is managed by a third-party cloud provider and can be used by everyone, while the private cloud is exclusive to an organization. Hybrid cloud allows data and applications to be shared between the public and private clouds by allowing them to move. This gives an organization more flexibility and provides them with more deployment options. |
Hypervisor | A hypervisor is software that creates and runs Virtual Machines (VMs). It isolates the host operating system and resources from the VMs and enables the creation and management of those VMs. By abstracting the machine's resources from the hardware, it can provision those resources appropriately so that they can be used by the VMs. |
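The horizontal scaling entry above boils down to simple arithmetic: if one server handles a fixed number of requests per second, add whole servers until the fleet covers demand. A minimal sketch, with illustrative capacity numbers:

```python
# A toy horizontal-scaling calculation: how many identical servers does
# a given traffic level require?

import math

def servers_needed(traffic_rps, capacity_per_server_rps):
    # Round up: a fraction of a server still means one more machine.
    return max(1, math.ceil(traffic_rps / capacity_per_server_rps))

# Suppose one server handles 500 requests per second.
assert servers_needed(400, 500) == 1    # fits on a single server
assert servers_needed(1200, 500) == 3   # scale out to absorb the load
```

Vertical scaling, by contrast, would keep the server count at one and raise the per-server capacity instead.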
I | Definition |
---|---|
Idempotence | Idempotence describes an operation that always leads to the same outcome regardless of how many times it is executed. After the initial execution, subsequent calls of an idempotent operation with the same parameters will not change the state of the application. |
Infrastructure as a Service (IaaS) | IaaS is the most flexible cloud delivery model, allowing for complete control over a business’s infrastructure with the flexibility to purchase only the required components, effortless scalability, and advanced customization. With IaaS, you can directly control operating systems, security components, and applications for your business without directly purchasing hardware infrastructure. |
Infrastructure as Code (IaC) | Infrastructure as Code (IaC) is the technique of storing computing infrastructure definitions in one or more files, as code. It replaces the old model of manual infrastructure as a service provisioning. Infrastructure as Code represents data center resources such as servers, load balancers, and subnets as code, allowing infrastructure teams to have a single source for all configurations. It will also enable them to manage their data center through a CI/CD pipeline, including version control and deployment strategies. |
Ingress Controller | An Ingress Controller, such as the NGINX ingress controller, acts as a reverse proxy and load balancer that implements a Kubernetes Ingress. It is an abstraction layer for the routing of traffic: it accepts traffic arriving at the Kubernetes platform and load balances it to the pods running inside the platform. It converts the configurations from Ingress resources into routing rules that the reverse proxy can recognize and implement. |
Instances | An instance is a server that runs applications in the cloud. It is often provided by a third-party cloud provider and is often also called a virtual machine. You can scale instances up and down based on the requirements of your business. An application can be hosted on a single instance or on several instances grouped into a cluster. You can also start instances in the different geographical regions the cloud provider offers to make your application more available across the globe. |
Integrated Development Environment (IDE) | An IDE can be defined as software that is used for building applications by combining several common developer tools into a single Graphical User Interface (GUI). It consists of a source code editor that helps in writing code by highlighting syntax, providing language-specific auto-completion, and checking errors while the code is being written. An IDE also consists of local build automation that helps in automating simple repetitive tasks like compiling and packaging source code files into binary form for execution, running automated tests, etc., as well as a debugger that helps in testing the program and graphically displaying the location of bugs in the code, memory contents and other helpful information. |
IP Packet | An IP packet is a unit of data in a network that contains the source and destination address of the data along with other control information that is responsible for the transport of the packet of data over the network. The local network connects to the internet from where it receives the IP packet and reads the source and destination address of the data. It then finds the next possible destination of the packet from its routing table and helps it reach its destination. IP packet networking is also a significant part of Kubernetes clusters, where pods and nodes communicate with each other and the outside world using IP packets. It also helps with communication between different components within a cluster and brings in benefits that include easy scaling, improved network performance, etc. |
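The Idempotence entry above is easiest to see in code: compare an idempotent "set" operation with a non-idempotent "increment". The state dictionary and operation names are illustrative:

```python
# Idempotent vs. non-idempotent operations on a toy application state.

state = {"quantity": 0}

def set_quantity(value):          # idempotent: repeats change nothing
    state["quantity"] = value

def add_quantity(value):          # NOT idempotent: every call changes state
    state["quantity"] += value

set_quantity(5)
set_quantity(5)
set_quantity(5)
assert state["quantity"] == 5     # three calls, same result as one

add_quantity(5)
add_quantity(5)
assert state["quantity"] == 15    # each repeat changed the state
```

This is why retry logic is safe with idempotent operations: replaying a duplicate request cannot corrupt the state.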
J | Definition |
---|---|
JWT | JWT, or JSON Web Token, is a server-generated token consisting of basic details about the concerned end user, carried as claims such as a user ID, email address, and roles (a JWT should never contain a password). As the name suggests, the claims it carries are stored in JSON format, and it is very easy for clients to use, which makes JWT very useful for authentication and authorization; its integrity is protected by a cryptographic signature. |
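To illustrate the JWT entry, here is a minimal sketch of a token's structure using only the Python standard library. It builds and decodes a toy, unsigned token; a real application must always verify the signature, and the claims shown are hypothetical:

```python
# A JWT is three base64url-encoded segments separated by dots:
# header.payload.signature. This sketch only encodes and decodes the
# payload; it performs NO signature verification.

import base64
import json

def make_segment(obj):
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_payload(token):
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a toy (unsigned) token with hypothetical claims.
header = make_segment({"alg": "none", "typ": "JWT"})
payload = make_segment({"sub": "user-123", "email": "ada@example.com"})
token = f"{header}.{payload}."

claims = decode_payload(token)
assert claims["sub"] == "user-123"
```

Note that anyone can decode the payload this way, which is why secrets never belong in a JWT and why the signature check is mandatory in practice.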
K | Definition |
---|---|
K3s | K3s is a slimmed-down but fully functional distribution of Kubernetes. It is not a fork but a CNCF conformant and an entirely compatible version of Kubernetes. K3s is engineered and packaged into a single binary of about 40 MB, meaning it has less memory consumption than K8s. K3s is highly efficient and performant and widely used for use cases regarding development, edge computing, IoT, and ARM. |
Kubernetes (K8s) | K8s or Kubernetes is an open-source container orchestration system that helps manage, deploy, and scale containerized applications. It supports automation and provides load balancing when needed. Kubernetes also provides monitoring of your applications in a cluster, which helps reduce downtime. It has a rapidly growing ecosystem with widely available support and tools. K8s is an abbreviation of Kubernetes: the 8 stands for the eight letters between the K and the s. |
K9s | K9s is a tool that provides a terminal-based user interface for managing and monitoring Kubernetes clusters. The visual interface helps users manage Kubernetes pods, deployments, services, etc., in a more user-friendly way than the default Kubernetes command-line tool, kubectl. Being lightweight, with features including resource filtering, inline editing, and resource management, it is a useful tool for developers and administrators who use Kubernetes often and need to perform complex cluster administration tasks without writing long terminal commands. |
Kubectl | The command line tool Kubectl is used to execute commands on Kubernetes clusters. It is utilized for all Kubernetes cluster operations, including application deployment, log viewing, and cluster resource management. |
Kube-proxy | Every node in a Kubernetes cluster runs a process called kube-proxy. Its responsibility is to watch for new Services: each time a new Service is created, kube-proxy sets up the necessary rules on each node to direct traffic addressed to that Service to one of its backend pods. |
L | Definition |
---|---|
Large Language Model (LLM) | Large Language Models (LLMs) are AI systems trained on vast text datasets to understand and generate human-like text. They can perform tasks like answering questions, summarizing information, and generating content, supporting various applications in customer service, content creation, and education. |
Layer 7 | Layer 7, also known as the application layer, is the top layer of the 7-layer OSI (Open Systems Interconnection) model. It is the layer of data processing that sits just behind the surface of the software applications users interact with. Layer 7 is responsible for API calls, the responses that load websites, etc. The main protocols used at this layer are HTTP and SMTP. |
Linux | Linux is the best-known and most-used open-source operating system. As an operating system, Linux is software that sits underneath all of the other software on a computer, receiving requests from those programs and relaying these requests to the computer’s hardware. |
Load Balancer | A load balancer is a type of service that acts as a traffic controller. It routes client requests to the nodes that can serve them quickly and effectively. If one host goes down and becomes unresponsive, the load balancer redistributes its workloads among the other nodes, ensuring the availability of the application. If a new node joins a cluster, the load balancer will automatically send requests to the pods attached to it. |
Loosely Coupled Architecture | The architectural pattern where the components of an application are built independently from one another is known as a loosely coupled architecture. Applications with this architecture can allow teams to develop features with the ability to deploy and scale them independently. With this, organizations can iterate quickly on individual components of the application. Application development is faster with a loosely coupled architecture. |
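The Load Balancer entry above can be sketched as a round-robin router that stops sending traffic to a node marked unhealthy. The node names and health-check mechanism are illustrative:

```python
# A toy round-robin load balancer that removes unresponsive nodes from
# the rotation, keeping the application available.

import itertools

class LoadBalancer:
    def __init__(self, nodes):
        self.healthy = list(nodes)
        self._cycle = itertools.cycle(self.healthy)

    def mark_down(self, node):
        self.healthy.remove(node)
        self._cycle = itertools.cycle(self.healthy)  # rebuild the rotation

    def route(self):
        return next(self._cycle)

lb = LoadBalancer(["node-a", "node-b", "node-c"])
assert [lb.route() for _ in range(3)] == ["node-a", "node-b", "node-c"]

lb.mark_down("node-b")              # node-b became unresponsive
assert "node-b" not in [lb.route() for _ in range(6)]
```

A production load balancer would detect failures via health checks rather than an explicit `mark_down` call, but the redistribution idea is the same.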
M | Definition |
---|---|
Machine Learning | Machine learning can be identified as a member of the artificial intelligence "family tree" that involves training computer systems to make decisions without being explicitly programmed. It involves feeding large amounts of data, typically referred to as "training data," into a complex mesh of mathematical and software models, or algorithms. A machine learning model learns from this data iteratively, looking for patterns, trends, and correlations, much as a person does when processing, analyzing, and storing information. The model is continuously tested and refined until it can make accurate predictions on new, unseen data. Once the model has a general understanding of the data it ingests, its weights, biases, parameters, and artifacts can be locked into a deployment package and run at scale in production. |
Managed Service Providers (MSP) | Managed Service Providers are third-party companies that operate and manage software on behalf of their customers; this type of offering is called a managed service. Managed services help organizations lower their operational overhead because they are ready to use, and companies can effectively outsource tasks that fall outside their core competencies. |
Microservices | Microservices architecture, or simply microservices, is a distinctive method of developing software systems that focuses on building single-function modules with well-defined interfaces and operations. |
Middleware | In cloud computing, middleware is the software that different applications use to communicate with each other. It lies between the operating system and the applications running on them and enables communication and data management. Intelligent and efficient connectivity among applications helps in faster development and innovations. It acts as a bridge between diverse technologies, tools, and databases, which helps seamlessly integrate them into a single system. |
Machine Learning as a Service (MLaaS) | MLaaS can be considered a range of services that offer machine learning tools as part of a cloud computing service. The tools offered as part of the service can be used for predictive analysis, deep learning, face recognition, natural language processing, etc. They are often ready-made tools that are easy for an organization to adopt depending on its needs. MLaaS uses algorithms that find patterns in data, so users don’t have to do any computation themselves. With MLaaS, organizations avoid building in-house infrastructure, along with the associated management and storage of data. |
Machine Learning Operations (MLOps) | MLOps is a practice that unifies machine learning system development and machine learning system operations. It aims to streamline the deployment, testing, and maintenance of ML models in production, thus ensuring their reliability and effectiveness. |
Monolith | Monolithic architecture is the traditional architectural design for software applications. It is a unified model, meaning the application is composed as a single piece. Monolithic applications are tightly coupled and self-contained. They are single-tiered, meaning that multiple components are combined into one large application. In a monolithic architecture, all components must be present for the code to be compiled and executed and for the application to run. |
Mutual Transport Layer Security (mTLS) | mTLS is a technique used to authenticate and encrypt messages sent between two services. It builds on the standard Transport Layer Security (TLS) protocol; here, the identity of both sides of the connection is validated. mTLS helps ensure that the traffic between the server and the client is trusted and secure by providing an additional layer of security. Brute-force attacks, spoofing attacks, etc., can be prevented by using mTLS. |
Multi-Cluster | Multi-cluster is a technique for deploying your workload or application on or across many Kubernetes clusters to improve availability, scalability, and other factors. It helps ensure compliance with different geographic and conflicting regulations because an individual cluster can be adapted to comply with geographic and certificate-based regulations. With multi-clusters, the speed and safety of the software delivery can be increased because development teams can deploy applications to isolated clusters. This will selectively expose the services that are available for testing and release. |
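The iterative training process described in the Machine Learning entry above can be sketched in a few lines of Python. This is a deliberately minimal, hypothetical illustration (all names and data are made up): a model with a single weight repeatedly adjusts that weight to reduce its prediction error on training data.

```python
# Minimal machine-learning sketch: iteratively refine one parameter
# (the "weight") so predictions match the training data. The data
# follows the pattern y = 2 * x, so the weight should approach 2.0.

training_data = [(1, 2), (2, 4), (3, 6), (4, 8)]

weight = 0.0            # the model's single learnable parameter
learning_rate = 0.01

for _ in range(1000):   # iterative refinement over the training data
    for x, y in training_data:
        prediction = weight * x
        error = prediction - y
        # Gradient of the squared error with respect to the weight.
        weight -= learning_rate * 2 * error * x

print(round(weight, 3))  # converges toward 2.0
```

Real models have millions of such parameters and far more sophisticated update rules, but the loop of predict, measure error, adjust is the same.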
N | Definition |
---|---|
Networking | Networking, in the context of computers, is the process of connecting two or more computing devices so that they can communicate with each other and share resources. It can be wired, like an Ethernet connection, or wireless, like a Wi-Fi connection. Networking lets you connect devices to the internet, share printers, send data files, and more. |
NodePort | A NodePort Service is used to get external traffic directly into your service. A NodePort is an open port present on all of your nodes or VMs; any traffic sent to this port is forwarded to the Service. NodePort values lie in the range 30000 to 32767 by default. |
Nodes | A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Kubernetes runs your workload by placing containers into Pods to run on Nodes. Each node is managed by the control plane and contains the services necessary to run Pods. Typically, you have several nodes in a cluster; in a learning or resource-limited environment, you might have only one node. The components on a node include the kubelet, a container runtime, and the kube-proxy. |
O | Definition |
---|---|
Open Authorization (OAuth) | OAuth is a protocol known for allowing secure, delegated user authorization. It functions over HTTPS and works with applications, servers, APIs, and devices using access tokens. With the help of OAuth, an application can decide how to give secure and controlled access to a user. It is widely used in browser-based, mobile, and web application development and is a global standard that anyone can use. |
Observability | Observability can be defined as the ability to reason about the current and possible states of a software system, simple or complex, from the insights produced by monitoring. Observability is often called o11y because it is a thirteen-letter word with eleven letters between the o and the y. |
OpenID Connect (OIDC) | OIDC is an authentication protocol that works on top of the OAuth framework. OIDC lets users access multiple sites with one set of credentials. It allows an individual to use Single Sign-On (SSO) to access relying party sites through OpenID Providers, such as email providers or social networks, which authenticate the user. OIDC gives the application or service information about users, the context of their authentication, and access to their profile information. |
Open Source | Open Source software has its source code available publicly, allowing anyone to view, modify, and distribute the code. It is developed in a collaborative way, relying on peer review and community contributions. As a result, open source software is usually cheaper, more flexible, and has more longevity than its closed source counterparts, because it is built by developer communities rather than a single company. |
OpenStack | OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center. These resources are managed and provisioned through APIs with common authentication mechanisms. OpenStack provides Infrastructure as a Service, and additional components provide fault tolerance, orchestration, and service management, ensuring high availability for the end user. OpenStack is broken up into pieces, so you can plug and play components depending on your needs. |
P | Definition |
---|---|
Platform as a Service (PaaS) | PaaS is a cloud service delivery model in which a third-party platform is used to develop and run applications. It lets developers build and manage their applications without maintaining the infrastructure. They can also use built-in software components for development, which reduces the amount of code they have to write. Because a third-party provider supplies and supports the platform, there is no need to install additional hardware and software. As a result, PaaS is scalable and cost-effective. |
Pod | Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster. Pods can contain one or more containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources. |
Portability | Portability is a software characteristic, a form of reusability, that helps avoid vendor lock-in to a particular operating environment. A portable application takes minimal effort to adapt to a new environment and is flexible when moving to new components and operating systems. |
Progressive Delivery | Progressive Delivery is a modern software development method in which you gradually roll out new features to limit the potential negative impact and estimate user engagement with new features of a product. It is built on the basis of continuous delivery, and it requires that you are already using continuous integration and continuous delivery (CI/CD) as part of the software delivery pipeline. It expands the practice to include rollouts of more granular features, observability, canary, and A/B testing. |
Proxy | A cloud proxy is a cloud-based system located between a client and a server, data center, or SaaS application. It acts as an intermediary between the client and the server. It provides safe and secure server access and protects the server from malicious threats and malware. Besides security, a cloud proxy brings significant cost savings and a better user experience. |
R | Definition |
---|---|
Rate Limiting | Rate Limiting is the ability to block users, bots, and applications that are overusing or abusing a web property. As well as limiting network traffic, this strategy can stop certain types of bot attacks and reduce the strain on web servers. |
Reinforcement Learning | Reinforcement learning is a technique of machine learning in which an agent learns to behave in an environment by performing actions and taking feedback from the results of those actions. A good action gives positive feedback, while a bad action gives negative feedback. Positive feedback usually comes with a reward, and the agent learns from the feedback and adjusts itself to maximize the reward over time. There is no labeled data in reinforcement learning; the agent improves from experience alone. Reinforcement learning is well suited to complex decision-making and is often used in game playing, robotics, recommendation systems, etc. |
Reliability | In cloud native computing, “reliability” is a term used to describe how well a system can tolerate failures. If a system keeps working despite having infrastructure changes and failing individual components, then the system is highly reliable. |
Resilience | Resilience is a type of computing in which redundant IT resources are distributed for operational purposes. Resilient computing employs pre-configured IT resources for uninterrupted processing. In cloud computing, resilience refers to the ability of servers, storage systems, and other network-connected devices to remain connected to the network without losing operational capabilities or having their activities interrupted. |
S | Definition |
---|---|
Scalability | Scalability in cloud computing is the ability to increase or decrease the number of IT resources to meet the required demands of the business. Scalability helps in reducing the cost of an organization and also enhances the performance of the workloads by automatically adding or deleting resources based on internal and external demands. |
Secure Shell (SSH) | SSH is a network protocol that provides a secure way of communication between two devices. It connects a client with a server by providing a secure channel and encrypting all the data transmitted between them. The encryption prevents tampering and attacks on the data while in transit. It provides authentication, which allows remote connection with command execution. |
Self Healing | A self-healing system is capable of recovering from certain types of failures. This type of system has a control loop that constantly compares the actual state of the system to the state desired by the operators. It takes corrective action if it finds any difference, such as fewer instances running than desired. |
Self-Service | In cloud computing, “self-service” means flexibility. Self-service helps you manage your cloud applications quickly, independently, and efficiently. You can configure your application according to your needs and scale independently by adding additional storage and servers. |
Service Proxy | A service proxy intercepts traffic from a service and forwards it to another service after applying some logic to it. It acts as a gatekeeper that collects information about network traffic and applies rules to them. |
Serverless | Serverless computing is an approach that allows building and running applications and services without having to think about the underlying infrastructure of servers. With serverless, as opposed to typical PaaS, the team can concentrate on the functionality of the service without having to worry about infrastructure issues like scaling and fault tolerance. |
Software as a Service (SaaS) | SaaS allows companies to access cloud-based applications without installing multiple platforms. SaaS is great for small businesses that can’t manage frequent software installations and updates. It is also helpful for managing applications that are used periodically and do not require much customization. |
Stateful Apps | Applications that save data to persistent disk storage for use by the server, the client, and other applications are known as 'stateful applications'. A StatefulSet, a Kubernetes object, is used for managing stateful applications such as databases. |
Subnets | Subnets are subnetworks, meaning a network divided into multiple smaller networks. They can be used to separate networks logically for different purposes, such as business functions like Accounts Network, Sales Network, etc. Subnets can also be divided for security and access purposes as well as many other reasons. |
Supervised Learning | Supervised learning is a type of machine learning where the machine gets trained with labeled data. Labeled data means that the input data is tagged with correct outputs. Based on those data, the machine gets to predict results. During training, the process involves minimizing the loss function, which measures the difference between the predicted and true outputs for each example in the training set. The algorithm adjusts its parameters to reduce the loss function after successive iterations. |
T | Definition |
---|---|
Talos Linux | Talos Linux is a secure and performant Linux operating system distribution designed for Kubernetes. In Talos, all access to the cluster is done through the API, eliminating the possibility of using Secure Shell (SSH) to log in to a cluster’s nodes, which reduces the attack surface. The system is highly predictable, as it reduces configuration drift while providing secure Kubernetes. Additionally, it minimizes unexpected issues and problems by having an immutable infrastructure layer on top of physical servers, ensuring that all servers are identical and have the same user-defined configuration. |
Tightly Coupled Architecture | The style of architecture in which the application components are interdependent is called ‘Tightly Coupled Architecture’. Interdependence of components means that a change in a single component can affect the others. This type of architecture is faster in nature and easier to implement. However, it can leave systems vulnerable to failure. Tightly Coupled Architecture also requires coordinated component rollouts, which hampers developer productivity even though it speeds up the initial development cycle. |
Traffic Shadowing | Traffic Shadowing is a deployment pattern that involves asynchronously copying production traffic to a non-production service for testing purposes. It has zero production impact while letting you test the actual behavior of a service, including persistent services. It takes less machinery for testing and, as a result, has an advantage over blue-green and canary deployments. |
Transport Layer Security (TLS) | TLS is a protocol used to provide increased security for communications between networks. TLS helps ensure the safe delivery of data without eavesdropping or alteration. It uses a combination of encryption techniques to provide an encrypted connection while data is transmitted over a network. Because the data cannot be read in transit, private information remains safe. TLS is used in applications such as messaging and email, and it is also used to encrypt HTTP traffic, forming HTTPS. |
U | Definition |
---|---|
Unschedulable | You can mark node objects so that they are treated in a certain way during scheduling. Marking a node as unschedulable prevents the scheduler from placing new pods onto that node but does not affect existing Pods on the node. This is useful as a preparatory step before a node reboot or other maintenance. |
Unsupervised Learning | Unsupervised learning is the type of machine learning where the algorithm learns to identify hidden patterns and insights from a given data without labeled outputs. The models are not supervised with a training dataset, meaning that the training dataset only contains input data. The goal of unsupervised learning is to find the underlying structure of the dataset, group the data according to similarities, and represent the dataset in a compressed format. |
V | Definition |
---|---|
Version Control | Version Control is the practice of tracking and managing changes to a document. This system helps store changes in a set of files so that, if required, you can recall a specific version later. Storing records provides a facility to resolve conflicts and simplifies collaboration by storing code in a repository. Git is a popular example of a version control system that is widely used to store code. |
Vertical Scaling | Vertical Scaling is the process of adding additional resources to a system to meet its demands. With the help of Vertical Scaling, you can add more power to your current machine. If your server requires more processing power, you can upgrade your CPU to meet the requirement with the help of Vertical Scaling. It also helps you upgrade other elements, such as your storage and memory. |
Virtual Machine | A virtual machine (VM) can be defined as a virtual environment built with CPU, memory, network interface, and storage that functions as a virtual computer system. The whole system is created on a physical hardware system located on or off premises. You can run multiple operating systems on a single machine with the help of VMs. |
Virtualization | In the context of cloud-native computing, 'virtualization' can be defined as the process of taking a server and allowing it to run multiple isolated operating systems. The isolated operating systems and their dedicated compute resources are referred to as Virtual Machines or instances. Virtualization helps the users of a data center spin up a new VM within minutes without having to worry about adding a new physical computer to a data center. |
Volume | In cloud computing, volume is the amount of data stored, processed, or transferred in the cloud. The volume of data can vary with the size of the business. Cloud service providers offer storage solutions for large volumes of data, and you can scale the storage size depending on your needs. Volume is important for an organization because it ensures the organization is using the appropriate amount of services and only paying for what it requires. |
W | Definition |
---|---|
WebAssembly | WebAssembly is a binary instruction format for a stack-based virtual machine that is executed in modern web browsers. It standardizes the fast execution of code compiled from high-level languages other than JavaScript, the primary language used for web development. As a result, developers can use languages like C, C++, Rust, etc., to build applications that run in web browsers. WebAssembly allows the reuse of existing code and libraries from different environments, reducing the time needed to build applications. Because code is compiled ahead of time, WebAssembly is highly secure and facilitates interoperability, helping developers use existing technologies while developing applications. |
Workloads | In Kubernetes, a workload is an application running on a cluster. Whether your workload is a single component or several that work together, on Kubernetes, you run it inside a set of pods. To make life considerably easier, you don't need to manage each Pod directly. Instead, you can use workload resources that manage a set of pods on your behalf. These resources configure controllers that make sure the appropriate types (as well as the number) of pods are running to match the state you specified. |
Y | Definition |
---|---|
YAML | The data serialization language YAML, sometimes known as "YAML Ain't Markup Language," is frequently used to create configuration files. YAML files can contain various types of key-value data in the form of maps or lists. Lists have values listed in a certain order, whereas maps aid in associating key-value pairings. |
Z | Definition |
---|---|
Zero Trust Architecture | Zero Trust is a strategic initiative that helps prevent successful data breaches by eliminating the concept of trust from an organization's network architecture. Zero Trust is not about making a system trusted but instead about eliminating trust and expecting the worst. In Kubernetes, you can use various techniques to create Zero Trust services in an environment, from container image hardening to pod specification runtime constraints. |