In Pokemon, there was Paras, a little bug with two little mushrooms on its back.
It evolved into Parasect, a shell of a body controlled by the parasite mushroom.
I was trying to make an allegory with today’s subject matter but it doesn’t quite fit.
If a Worker Node dies, the Pods running on it die too.
A Replication Controller ensures that a specific number of pod replicas is always up and running, creating new pods when something like that happens.
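A quick way to watch that self-healing in action (just a sketch; it works with any Deployment, since the controller keeps the replica count topped up):
# list the pods, then delete one on purpose
kubectl get pods
kubectl delete pod <one-of-the-pod-names>
# a replacement shows up almost immediately
kubectl get pods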
Remember that each Pod in a Cluster has a unique IP, even Pods on the same Node. So how do Pods let everyone else know about these changes so that everything keeps working?
Services are abstractions that define a logical set of Pods and a policy for accessing them, and they're defined using YAML or JSON (human-readable languages).
They also allow your applications to receive traffic, because Pod IP addresses aren't exposed outside the cluster without a Service.
Exposure happens in different ways:
* The default, ClusterIP, exposes the Service on an internal IP, only reachable from within the cluster.
* NodePort exposes the Service on the same port of each Node using NAT, reachable at [<NodeIP>:<NodePort>].
* LoadBalancer creates an external load balancer in the current cloud (where supported) and assigns a fixed external IP to the Service.
* ExternalName exposes the Service under an arbitrary name (specified by [externalName]) by returning a CNAME record with that name; no proxying is set up.
Services provide loose coupling between Pods; the set of Pods a Service targets is usually determined by a LabelSelector. If you define a Service without a selector, the corresponding Endpoints object isn't created automatically, which lets users manually map the Service to specific endpoints.
(Or you're strictly using [type: ExternalName].)
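For the record, a Service spec in YAML is pretty small. A minimal sketch (the names and labels here are placeholders, not from the tutorial):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
spec:
  type: NodePort            # one of the exposure modes listed above
  selector:
    app: my-app             # the LabelSelector that picks the target Pods
  ports:
    - port: 8080            # port the Service listens on
      targetPort: 8080      # port the Pods listen on
EOF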
But now there are LABELS, the bits that match a set of Pods and allow logical operations on Kubernetes objects. They can be attached to objects at creation time or later on, and modified whenever (quick example after the list below).
They're key/value pairs attached to objects to:
- Embed version tags
- Classify an object
- Designate said objects for development, test, and production.
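A tiny sketch of both halves of that (the names and values are made up):
# attach labels when creating something
kubectl run hello --image=nginx --labels="app=hello,env=test,version=v1"
# see which labels each pod carries
kubectl get pods --show-labels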
(You can also make a Service at the same time you make a Deployment by passing --expose.)
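If memory serves, that's the --expose flag on kubectl run (which used to create a Deployment; newer kubectl versions create a plain Pod), something like:
# creates the workload and a matching ClusterIP Service in one go (requires --port)
kubectl run hello --image=nginx --port=80 --expose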
—————————————
Application: Running.
Services:
Just kubernetes, the one enabled by default when the cluster starts.
Make a new one and expose it to external traffic (kubectl expose).
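From memory, the two commands here were roughly these (assuming the usual kubernetes-bootcamp deployment name from the tutorial):
# only the built-in service exists at first
kubectl get services
# expose the deployment on a node port
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080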
I type it in by hand (You can click the image and have the code automatically load and run, but what fun is that?) and get a “There is no need to specify a resource type as a separate argument” error.
I probably added a spare space in there somewhere.
But I try again and the service is now exposed.
But what port did we open? (Well, 8080 is what the command said, but that's the Service port; to find the node port that actually got opened, look at the describe services command.)
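Which, if I remember it right, goes like this (service name assumed from the same tutorial):
kubectl describe services/kubernetes-bootcamp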
[list of information]
Let's make an environment variable called NODE_PORT and assign it the node port value.
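Probably something along these lines; the go-template expression digs the nodePort field out of the Service (name assumed again):
export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')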
Not 100% sure what happened. Although it’s the age-old computing adage - sometimes, if nothing shows up, you did a great job!
So we test it with curl:
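Something like this, assuming a minikube cluster:
curl $(minikube ip):$NODE_PORT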
Hi, terminal!
==================
The second part is LABELS.
kubectl describe deployment shows the label that the Deployment automatically created for our Pods.
We're going to query our list of pods carrying that label with kubectl get pods -l (that -l is the label parameter).
(We also ran the same label query against the existing services.)
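A sketch of those queries; the label key/value here is a guess based on the usual deployment name, and the describe output above is where you'd confirm it:
# pods carrying the deployment's label
kubectl get pods -l app=kubernetes-bootcamp
# and the services carrying it too
kubectl get services -l app=kubernetes-bootcamp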
And we store the pod name in an environment variable. Remember the command?
No.
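For the record, it was roughly this (grabbing the pod name out of kubectl get pods):
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME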
Apply a new label with the label command + object type + object name + the new label:
kubectl label pod $POD_NAME app=v1
Check it with kubectl describe (there's a lot of information here), and then list the pods carrying the new label.
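A sketch of that check:
# the new app=v1 label shows up under Labels in the describe output
kubectl describe pods $POD_NAME
# and now we can query by it
kubectl get pods -l app=v1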
Okay, let's delete the Service now!
It's a simple command, and we can use the label with it to pick out what to delete. Let's check what happened.
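Roughly like this (the label value is again assumed from the tutorial's deployment name):
kubectl delete service -l app=kubernetes-bootcamp
# only the default kubernetes service should be left
kubectl get services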
The route isn't exposed anymore either, with curl giving a "(7) Failed to connect to 172.17.0.53 port 30810: Connection refused" error message.
Luckily, the application is still running!
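One way to double-check that, from inside the Pod itself:
kubectl exec -ti $POD_NAME -- curl localhost:8080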