What are Services?
Services, also known as microservices, are a logical collection of containers. Some of the properties of Services include:
a) Services are like virtual hosts for containers. All the containers within a Service share CPU/Memory, Disks and Network Identity (IP Address).
b) When a Service is deployed, the containers within the Service are scheduled together.
c) Services are long-running processes.
d) Services may or may not expose container ports for external access.
e) Services hold a set of policies such as scaling, deployment, volume claim and network access.
f) Scaling events are triggered based on the average CPU/Memory utilization of all the containers within a Service.
g) Services can expose a Shared Folder or a Persistent Volume across containers for inter-container communication. Check Side Car Patterns for more information.
h) Containers within a Service can access each other over localhost:<container port>, as sketched below.
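For instance, here is a minimal sketch (Python) of one container calling a companion container in the same Service over localhost. The port 8081 and the /cache/user/42 path are hypothetical placeholders, not part of gopaddle; substitute the port your companion container actually listens on.

```python
import urllib.request

# Both containers share the Service's network identity, so a companion
# container in the same Service is reachable over localhost.
# Port 8081 and the path below are hypothetical placeholders.
SIDECAR_URL = "http://localhost:8081/cache/user/42"

with urllib.request.urlopen(SIDECAR_URL, timeout=5) as response:
    print(response.status, response.read().decode())
```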
What is a State in a Service?
A Service can maintain state across requests. The state can be maintained both at the Northbound interface (using Sticky Sessions) and at the Southbound interface (using Persistent Volumes).
a) Persistent Volumes:
For example, a Service can store information in a database hosted on a Persistent Volume. When there are multiple instances of the same Service, each instance performs read and write operations on its own copy of the database.
b) Sticky Sessions:
For example, a Service can generate a session ID on the initial call and store it in an in-memory database. It can then validate the session ID in subsequent client requests against the original value stored in its in-memory database. When there are multiple instances of the same Service, the Load Balancer can maintain a Sticky Session to direct the calls to the appropriate Service instance, as sketched below.
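A minimal sketch of that flow, assuming a plain Python dictionary as the in-memory database (the function names are illustrative, not a gopaddle API):

```python
import uuid

sessions = {}  # in-memory "database", local to this Service instance

def start_session(user: str) -> str:
    # Initial call: generate a session ID and remember it in memory.
    session_id = str(uuid.uuid4())
    sessions[session_id] = user
    return session_id

def handle_request(session_id: str) -> str:
    # Subsequent calls: validate the session ID against the stored value.
    # The Load Balancer's Sticky Session ensures these calls reach the
    # same instance that holds this in-memory state.
    if session_id not in sessions:
        raise PermissionError("unknown session ID on this instance")
    return f"welcome back, {sessions[session_id]}"

sid = start_session("alice")
print(handle_request(sid))
```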
Types of Services
a) Stateless
When a Service does not maintain state in the form of a Persistent Volume or a Sticky Session, it is called a Stateless Service. However, it can still use a Shared Folder for inter-container communication. Since a Stateless Service does not use a Persistent Volume, the Volume Claim policy is not applicable to a Stateless Service.
b) Stateful
When a Service maintains state in the form of a Persistent Volume or a Sticky Session, it is called a Stateful Service. It can use one or more Persistent Volumes, each of which can be dedicated to a single container within the Service or shared across containers within the Service. A Volume Claim policy can be applied to each of these Persistent Volumes to describe the access mode, mount mode and the minimum/maximum allocation sizes. Due to the stickiness factor, Stateful Services cannot scale horizontally and thus Scaling Policies are not applicable to Stateful Services. Please check Adding High Availability for stateful services for more information on how Stateful Services can be scaled.
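As a rough illustration, the snippet below shows a Stateful Service instance persisting data on its Persistent Volume. The mount path /data is an assumption for illustration; the actual path depends on how the volume is mounted into the container.

```python
import json
from pathlib import Path

# Hypothetical mount point of the Persistent Volume inside the container.
DATA_FILE = Path("/data/orders.json")

def save_order(order: dict) -> None:
    # State written here survives container restarts because it lives on
    # the Persistent Volume, not in the container's ephemeral filesystem.
    orders = json.loads(DATA_FILE.read_text()) if DATA_FILE.exists() else []
    orders.append(order)
    DATA_FILE.write_text(json.dumps(orders))

save_order({"id": 1, "item": "notebook"})
```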
Accessing a Service
1) Service to Service within an application
Within an Application, containers of a Service can communicate with other Services using the Target Service Name and the port number. The Target Service then routes the requests internally to the appropriate container based on the container port. For example, in the picture above, container A1 in Service_A can communicate with Service_B using the endpoint Service_B:8080. Service_B then routes the request to Container B2, which is listening on port 8080.
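For illustration, a container in Service_A could make that call as follows (Python; the /status path is a hypothetical route exposed by Service_B):

```python
import urllib.request

# Service-to-Service call within the Application: the Target Service Name
# and port form the endpoint. Service_B routes the request to the
# container listening on port 8080 (Container B2 in the example above).
with urllib.request.urlopen("http://Service_B:8080/status", timeout=5) as resp:
    print(resp.status, resp.read().decode())
```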
2) Web Access using Node Port
A Service can expose one or more container ports as external ports to be accessed via a Node IP Address. These external Service Ports can then be mapped to one of the available Node Ports (NP) in the range 30000 - 32767. For example, Service_B can expose the container port 8080 as an external port and map the container port 8080 to Node Port (NP) 30000. The Service can then be accessed using the endpoint Node_IP:30000.
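A client outside the cluster could then reach the Service like this (the Node IP below is a placeholder; use the reachable IP address of any node in the cluster):

```python
import urllib.request

NODE_IP = "203.0.113.10"  # placeholder Node IP Address

# External access through the Node Port mapped to container port 8080.
with urllib.request.urlopen(f"http://{NODE_IP}:30000/status", timeout=5) as resp:
    print(resp.status)
```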
3) Web Access using Ingress and Domain Name
A Service can expose one or more container ports as external ports to be accessed via a LoadBalancer, with or without Ingress routing. gopaddle uses a built-in NGINX ingress controller to route the requests to the respective Services. The Service routing rules are derived from the Route Paths defined in each container. Please check Creating a Container for steps on how to define route paths for containers. In order to access the Service via Ingress, the Service needs to be mapped to a Domain with TLS. The Service can then be accessed via the Domain Name.
Currently gopaddle does not support Load Balancer IP based access or Domain Name access without TLS. If an Ingress is used, the Service needs to be mapped to a Domain Name with TLS. By default, Load Balancer port 443 is used for TLS-based (HTTPS) Domain Name access.
3a) Ingress with Load Balancer
gopaddle uses a Network Load Balancer along with Ingress to route the requests to the Services. For example, Service_B can expose the container port 8080 as an external port, be deployed with the Ingress with LoadBalancer option, and be mapped to a Domain Name. The Service can then be accessed using the endpoint https://domainName:443.
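A client would then reach the Service through the Load Balancer and Ingress over TLS, for example (the domain name below is a placeholder for the Domain Name mapped to the Service):

```python
import urllib.request

# HTTPS access via the mapped Domain Name; port 443 is implicit for https.
with urllib.request.urlopen("https://service-b.example.com/status", timeout=5) as resp:
    print(resp.status)
```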
3b) Ingress Without Load Balancer
In this case, the external ports need to be mapped to one of the available Node Ports (NP) in the range 30000 - 32767.
For example, Service_B can expose the container port 8080 as an external port and map the container port 8080 to Node Port (NP) 30000. When deployed with the Ingress without LoadBalancer option and a Domain Name, the Service can be accessed using the endpoint https://domainName:NodePort.
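For example, with Node Port 30000 from above and a placeholder domain, a client call would look like:

```python
import urllib.request

# Without a Load Balancer, the Domain Name resolves to a node and the
# request is sent to the mapped Node Port over TLS.
with urllib.request.urlopen("https://service-b.example.com:30000/status",
                            timeout=5) as resp:
    print(resp.status)
```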
Service Versions
Adding or Deleting a Container from a Service needs to be version controlled so that the right update strategies are available while updating a running Service. However, policies within a Service are global to all its versions, and thus changing a policy within a Service is reflected in all its versions.