Docker Compose is a tool for defining and sharing multi-container applications. It lets you run a project with multiple containers from a single source: you create a YAML file that defines the services you need, then start them all with a single command.
docker-compose.yml
services:
  simplecore.api:
    container_name: simplecoreapi.from.docker.compose
    # image: bindplus.api.v2 # will use image
    image: ${DOCKER_REGISTRY-}simplecoreapi:1.0.0 # will use Dockerfile
    build:
      context: .
      # ASP-NET-Core-Simple-API: replace with the actual name of your project (the Dockerfile location)
      dockerfile: ASP-NET-Core-Simple-API/Dockerfile
    ports:
      - 49195:80 # [target port]:[dockerfile port]
The docker-compose.override.yml is the configuration file where you can override existing settings from the docker-compose.yml or can even add completely new services.
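As a sketch of how an override works, a hypothetical docker-compose.override.yml for the service above might set an environment variable (the service name must match the one in docker-compose.yml; the value here is purely illustrative):

```yaml
services:
  simplecore.api:
    environment:
      # illustrative value; settings here replace or extend those in docker-compose.yml
      - ASPNETCORE_ENVIRONMENT=Development
```

Running `docker compose up` reads docker-compose.yml and docker-compose.override.yml automatically, with the override values taking precedence.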
Introduction to Kubernetes using the simple .NET Core API image from the Docker container registry
Basic understanding of the Deployment and Service YAML files in Kubernetes.
Using Minikube
Kubernetes Service and Deployment are two fundamental components of the Kubernetes ecosystem.
The Service acts as the gateway for communication and load balancing. On the other hand, Deployment orchestrates the management and scaling of your application’s replicas.
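As a sketch, assuming the simplecoreapi:1.0.0 image built earlier and that the container listens on port 80, a minimal Deployment and Service pair might look like this (the names and replica count are illustrative, not taken from the project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-core-api
spec:
  replicas: 2                # illustrative replica count
  selector:
    matchLabels:
      app: simple-core-api
  template:
    metadata:
      labels:
        app: simple-core-api
    spec:
      containers:
        - name: simple-core-api
          image: simplecoreapi:1.0.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: simple-core-api
spec:
  type: LoadBalancer         # exposed locally via `minikube tunnel`
  selector:
    app: simple-core-api
  ports:
    - port: 80
      targetPort: 80
```

Apply both with `kubectl apply -f` and the Service's selector routes traffic to the pods the Deployment creates.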
Minikube is a lightweight and simplified version of Kubernetes, primarily used for local development and testing purposes.
What is minikube used for?
You can use Minikube to learn how to deploy and manage applications in Kubernetes.
Minikube can be used to develop Kubernetes applications and test your applications before deploying them to production.
What is the difference between Kubernetes and minikube?
On the Deployment Scale: Kubernetes is designed for large-scale deployments across multiple nodes and clusters, making it suitable for managing complex and distributed environments. On the other hand, Minikube is a lightweight and simplified version of Kubernetes, suitable for local development and testing purposes.
$ minikube tunnel
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ You are trying to run the amd64 binary on an M1 system.                                                  │
│ Please consider running the darwin/arm64 binary instead.                                                 │
│ Download at https://github.com/kubernetes/minikube/releases/download/v1.32.0/minikube-darwin-arm64       │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✅  Tunnel successfully started
📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
❗  The service/ingress simple-core-api requires privileged ports to be exposed: [80]
🔑  sudo permission will be asked for it.
🏃  Starting tunnel for service simple-core-api.
Password:
Create a docker image using this command: docker build -t [image name] [dockerfile location]
Example:
docker build -t simplecoreapi:1.0.0 .
Step 4: Docker Container
Create a docker container using this command: docker run -d -p [port]:[exposed port] --name [container name] [image name]
Example (note it will return the container id):
docker run -d -p 49195:80 --name simplecoreapi simplecoreapi:1.0.0
Note: the exposed port should be the same as the port exposed in the Dockerfile.
Step 5: Let’s test it!
Testing our container using Postman:
Accessing the files in the container
The snapshot below is from Docker Desktop. You can see that the files generated by the API are stored in the payloads-default folder, which is configured in appsettings.json:
Docker Container Shell
We can also access the saved files using the docker container shell. Note that I switched to my Windows machine; here's what I did:
We need to get the container Id using the command: docker ps
Run a shell inside the container using the command: docker exec -it 1653 sh (1653 is the first 4 characters of the container id)
Inside the shell, change the directory to payloads-default to list the files created.
In the excerpt of the saved payload below, notice that the values of the Environment and LogJsonPath are from the environment passed.
Note:
· "LogJson__Path=Payloads" is the same as in appsettings.json: "LogJson": { "Path": "payloads" }
· The environment variable will still work even if it is not assigned/declared in appsettings.json
· Overriding the appsettings file
· Environment variables are loaded after appsettings.json, so their values take precedence over the ones declared in the file
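As a sketch of the mapping in the notes above, the double underscore in an environment variable name corresponds to a nested section in appsettings.json. In a compose file (using the hypothetical service name from earlier), that looks like:

```yaml
services:
  simplecore.api:
    environment:
      # LogJson__Path maps to { "LogJson": { "Path": ... } } in appsettings.json
      - LogJson__Path=Payloads
```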
Docker Volumes
Volumes are a mechanism for storing data outside containers. All volumes are managed by Docker and stored in a dedicated directory on your host, /var/lib/docker/volumes on Linux (on macOS and Windows, that directory lives inside the Docker Desktop virtual machine).
Volumes are mounted to filesystem paths in your containers. When containers write to a path beneath a volume mount point, the changes will be applied to the volume instead of the container's writable layer.
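As an illustrative sketch, a named volume could be used to persist the payload files from the earlier example outside the container. The mount path assumes the app writes to /app/payloads-default, which is an assumption you should check against your own appsettings.json:

```yaml
services:
  simplecore.api:
    image: ${DOCKER_REGISTRY-}simplecoreapi:1.0.0
    volumes:
      # named volume managed by Docker; its data survives container removal
      - payloads:/app/payloads-default

volumes:
  payloads: {}
```

Because the volume is named, you can inspect it with `docker volume inspect payloads` and the data remains available to a rebuilt or replaced container.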