Pratik Gautam | LinuxTechi

How to Setup Traefik for Docker Containers on Ubuntu 20.04
Traefik is a modern reverse proxy for Docker containers. When you want to run multiple applications in Docker containers that expose ports 80 and 443, Traefik is a strong choice for the reverse proxy layer. It ships with its own monitoring dashboard and can also act as an HTTP load balancer. In this article, we will set up Traefik v2.4 on Ubuntu 20.04 with a simple example.

Prerequisites

  • Ubuntu 20.04 LTS
  • Docker CE (Community Edition)
  • Internet connection to download packages
  • Sudo Privileged user account
  • A domain for the Traefik dashboard (with a DNS A record pointing to the server)
  • A domain for the WordPress site (with a DNS A record pointing to the server)

In this article, we are using ‘linuxtechi.local’ as the domain and ‘traefik.linuxtechi.local’ as the FQDN for the Traefik dashboard.

Steps to setup Traefik on Ubuntu 20.04

In this article, we will first set up Traefik and then register a WordPress container with it for reverse proxying and load balancing. We will configure Traefik to serve everything over HTTPS using Let’s Encrypt SSL certificates.

Follow the steps below to set up the Traefik reverse proxy.

1 ) Configure Traefik

Create the configuration files and set up an encrypted password to access the Traefik dashboard. You can generate the encrypted password with the htpasswd utility; install it with the following command.

$ sudo apt-get install -y apache2-utils

Once the installation is completed, run the following command to generate an encrypted password. In this example the password is “Traefik@123#”; choose your own. The user is taken as “admin” — replace it with your own username if you like.

$ htpasswd -nb admin Traefik@123#

You will get the encrypted password as:

pkumar@traefik:~$ htpasswd -nb admin Traefik@123#
admin:$apr1$V.9MT9VH$MtLgwiAa4jq1ngDVvTdJu/
pkumar@traefik:~$

Copy this output and save it somewhere, as we will need this encrypted password in the Traefik configuration file to set up basic authentication for the Traefik dashboard.
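If you prefer not to install apache2-utils, openssl can generate the same APR1 (Apache MD5) hash format — a sketch, with the salt value chosen arbitrarily here:

```shell
# Generate an APR1 hash that Traefik's basicAuth accepts.
# The salt is arbitrary (max 8 characters); openssl prints a "$apr1$..." hash.
openssl passwd -apr1 -salt V.9MT9VH 'Traefik@123#'
```

Prefix the output with the username (e.g. `admin:`) to form the users entry.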

Now create a configuration file called traefik.toml using the TOML format. We will configure three pieces of Traefik: the api (dashboard), the docker provider, and an acme certificate resolver, which obtains Let’s Encrypt TLS certificates.

Create a traefik.toml file with the following contents:

$ vi traefik.toml
[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http.redirections.entryPoint]
      to = "websecure"
      scheme = "https"
  [entryPoints.websecure]
    address = ":443"

[api]
  dashboard = true
[certificatesResolvers.lets-encrypt.acme]
  email = "info@linuxtechi.local"
  storage = "acme.json"
  [certificatesResolvers.lets-encrypt.acme.tlsChallenge]

[providers.docker]
  watch = true
  network = "web"

[providers.file]
  filename = "traefik_secure.toml"

Save and close the file.

In the above file, the entrypoint web handles port 80 while the entrypoint websecure handles port 443 for SSL/TLS connections.

All traffic on port 80 is forcibly redirected to the websecure entry point to secure the connections. Don’t forget to change the email and domain in ‘traefik.toml’ to suit your setup.

Let’s create the other file ‘traefik_secure.toml’ with the following contents.

$ vi traefik_secure.toml
[http.middlewares.simpleAuth.basicAuth]
  users = [
    "admin:$apr1$V.9MT9VH$MtLgwiAa4jq1ngDVvTdJu/"
  ]

[http.routers.api]
  rule = "Host(`traefik.linuxtechi.local`)"
  entrypoints = ["websecure"]
  middlewares = ["simpleAuth"]
  service = "api@internal"
  [http.routers.api.tls]
    certResolver = "lets-encrypt"

Save and exit the file.

The contents above enable username and password authentication for the Traefik dashboard and enable Let’s Encrypt TLS certificates for the HTTP router.

Don’t forget to change the password string for the admin user and the host entry in the above file to suit your setup.

2) Running traefik container

Create a new Docker network for the proxy to share among containers. Use the following command to create the network.

$ docker network create web

When you start the Traefik container, add it to this network. You can add additional containers to this network for Traefik to work as a reverse proxy for them.

Create an empty file that will hold the Let’s Encrypt information, and restrict its permissions accordingly.

$ touch acme.json
$ chmod 600 acme.json

Once this JSON file is mounted into the Docker container, its ownership will automatically change to root.
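Traefik is strict about these permissions: if acme.json is group- or world-readable, it refuses to store certificates in it. A quick way to confirm the mode after creating the file:

```shell
# Create the ACME storage file and restrict it to the owner only,
# then print the octal mode to confirm it is 600.
touch acme.json
chmod 600 acme.json
stat -c '%a %n' acme.json   # → 600 acme.json
```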

Create a traefik container using the following command:

$ docker run -d \
   -v /var/run/docker.sock:/var/run/docker.sock \
   -v $PWD/traefik.toml:/traefik.toml \
   -v $PWD/traefik_secure.toml:/traefik_secure.toml \
   -v $PWD/acme.json:/acme.json \
   -p 80:80 \
   -p 443:443 \
   --network web \
   --name traefik \
    traefik:v2.4

As the command is long, it has been split across multiple lines with backslashes.
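If you prefer Compose over a long docker run command, the same container can be described declaratively — a sketch assuming the layout used in this article (traefik.toml, traefik_secure.toml and acme.json in the current directory, and the external ‘web’ network already created):

```yaml
version: "3"
services:
  traefik:
    image: traefik:v2.4
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - ./traefik_secure.toml:/traefik_secure.toml
      - ./acme.json:/acme.json
    networks:
      - web
networks:
  web:
    external: true
```

Start it with docker-compose up -d instead of the docker run command above.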

Now you can access the Traefik dashboard to monitor the health of the containers. Navigate to https://your_domain.com/dashboard/ (replace your_domain with your own domain) and provide the admin credentials — the username is admin and the password is the plain-text password you hashed in the step above (Traefik@123# in this example), not the hash itself. In my case, the URL would be:

https://traefik.linuxtechi.local/dashboard/

Traefik-login-window-Ubuntu

Once logged in to the dashboard, you will have the following interface.

Traefik-Dashboard-Ubuntu

3) Register Containers to Traefik

Traefik is now configured and running on your server. In this step, I am going to add a WordPress container for Traefik to proxy. The WordPress container will be managed with Docker Compose.

Let’s create a docker-compose.yml file with following contents.

$ vi docker-compose.yml

To specify the version and network we will use, add the following lines to the file.

version: "3"
networks:
  web:
    external: true
  internal:
    external: false

Version 3 of the Compose file format is used here. Traefik will recognize our applications only if they share a network with it. In the previous step I created the Docker network ‘web’ manually, so it is declared as external in docker-compose.yml and used to expose services to the Traefik proxy. A second network, ‘internal’, connects our application to its database container and does not need to be exposed through Traefik.

Now define each of the services. First, create a service for the WordPress application by adding the following lines to the docker-compose.yml file.

services:
  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: dbuser
      WORDPRESS_DB_PASSWORD: dbpass@123#
      WORDPRESS_DB_NAME: wordpress_db
    labels:
      - traefik.http.routers.blog.rule=Host(`blog.linuxtechi.local`)
      - traefik.http.routers.blog.tls=true
      - traefik.http.routers.blog.tls.certresolver=lets-encrypt
      - traefik.port=80
    networks:
      - internal
      - web
    depends_on:
      - mysql

Note that traefik.port is a Traefik v1 label; in Traefik v2 the equivalent is traefik.http.services.blog.loadbalancer.server.port=80. In practice the WordPress image exposes port 80, so Traefik v2 detects the port automatically even without this label.

Replace blog.linuxtechi.local in the Host rule with your own WordPress site domain.

Now, configure the MySQL service for the database. Add the following lines under services: in the docker-compose.yml file, indented to match the wordpress service.

  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: sqlpass@123#
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass@123#
    networks:
      - internal
    labels:
      - traefik.enable=false

In this example, I have used the latest MySQL image for the database container. Environment variables for the wordpress and mysql services are defined in the file itself. The MySQL service does not need to be proxied by Traefik, so it is attached to the internal network only.

Your complete docker-compose.yml file will look like:

version: "3"
networks:
  web:
    external: true
  internal:
    external: false
services:
  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: dbuser
      WORDPRESS_DB_PASSWORD: dbpass@123#
      WORDPRESS_DB_NAME: wordpress_db
    labels:
      - traefik.http.routers.blog.rule=Host(`blog.linuxtechi.local`)
      - traefik.http.routers.blog.tls=true
      - traefik.http.routers.blog.tls.certresolver=lets-encrypt
      - traefik.port=80
    networks:
      - internal
      - web
    depends_on:
      - mysql

  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: sqlpass@123#
      MYSQL_DATABASE: wordpress_db
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpass@123#
    networks:
      - internal
    labels:
      - traefik.enable=false

Save and exit the file.

Now run the following command to create the MySQL and WordPress containers.

$ docker-compose up -d

Now navigate to the Traefik dashboard and click on HTTP Routers; you will find the new containers added to the dashboard.

WordPress-Frontend-Traefik-Ubuntu

Now browse to blog.linuxtechi.local (replace it with your own domain). You will be redirected to the WordPress installation wizard over a TLS connection.

WordPress-Installation-Traefik-Ubuntu

Complete the installation wizard. You are now all good to use your WordPress site.

Conclusion:

In this article, you have learned how to set up Traefik on Ubuntu 20.04, how to register containers automatically with Traefik for load balancing and reverse proxying, and how to serve WordPress sites behind the Traefik proxy.

Also Read : How to Setup Private Docker Registry on Ubuntu 20.04

How to Setup Private Docker Registry on Ubuntu 20.04
In this post, we are going to learn how to set up a private Docker registry on Ubuntu 20.04.

Why a Private Docker Registry?

For smooth CI/CD development on the Docker platform, consider using a self-hosted Docker registry server. A Docker registry is the repository where you store your Docker images and from which you pull them to run applications on a server. For faster delivery and a more secure infrastructure, it is recommended to set up your own private Docker registry to store your images and distribute them within your organization.

Prerequisites

  • User account with sudo privileges
  • A server for Docker registry
  • Nginx on the Docker Registry server
  • A client server
  • Docker and Docker-Compose on both servers.

What is Private Docker Registry?

Docker Registry is a server-side application that allows you to store your Docker images locally in one centralized location. By setting up your own Docker registry server, you can pull and push Docker images without having to connect to Docker Hub, saving bandwidth and reducing exposure to security threats.

Also Read : How to Install Docker on Ubuntu 22.04 / 20.04 LTS

Before You start

Before starting, ensure that you have installed Docker and Docker Compose on both the client server and the local registry server. To verify, run the following commands to check the software versions.

$ docker version

docker-version-output-linux

$ docker-compose version

docker-compose-version-output-linux

Also, ensure that the Docker service is started and enabled at boot time:

$ sudo systemctl start docker
$ sudo systemctl enable docker

Install and Configure Private Docker Registry

To configure Private Docker Registry, follow the steps:

Create Registry Directories

Configure your server that is going to host a private registry. Create a new directory that will store all the required configuration files.

Use the following command to create a new project directory ‘my-registry’ with two subdirectories, ‘nginx’ and ‘auth’. You can choose your own project name.

$ mkdir -p my-registry/{nginx,auth}

Now navigate to the project directory and create new directories inside nginx as:

$ cd my-registry/
$ mkdir -p nginx/{conf.d,ssl}
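The two mkdir commands above can be collapsed into one nested brace expansion (note that bash brace expansion breaks if there are spaces inside the braces) — a quick way to create and inspect the whole layout from scratch:

```shell
# Create the full registry layout in one command and list the directories.
mkdir -p my-registry/{nginx/{conf.d,ssl},auth}
find my-registry -type d | sort
```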

Create Docker-Compose script and services

You need to create a new docker-compose.yml script that defines the docker-compose version and services required to set up a private registry.

Create a new file “docker-compose.yml” inside “my-registry” directory with vi editor.

$ vi docker-compose.yml

Define your service in the docker-compose file as:

version: "3"

services:
#Registry
  registry:
    image: registry:2
    restart: always
    ports:
    - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry-Realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - myregistrydata:/data
      - ./auth:/auth
    networks:
      - mynet

#Nginx Service
  nginx:
    image: nginx:alpine
    container_name: nginx
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d/
      - ./nginx/ssl/:/etc/nginx/ssl/
    networks:
      - mynet

#Docker Networks
networks:
  mynet:
    driver: bridge

#Volumes
volumes:
  myregistrydata:
    driver: local

Save and close the file

Setup nginx Port forwarding

We need to create an nginx virtual host configuration for the nginx web service. Go to the nginx/conf.d/ directory created in the step above.

$ cd nginx/conf.d/

Now create an nginx virtual host file with your text editor. In this example I name it myregistry.conf; you can choose your own name.

$ vi myregistry.conf

Add the following contents:

upstream docker-registry {
    server registry:5000;
}
server {
    listen 80;
    server_name registry.linuxtechi.com;
    return 301 https://registry.linuxtechi.com$request_uri;
}
server {
    listen 443 ssl http2;
    server_name registry.linuxtechi.com;
    ssl_certificate /etc/nginx/ssl/certificate.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
    # Log files for Debug
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    location / {
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" )  {
            return 404;
        }
        proxy_pass                          http://docker-registry;
        proxy_set_header  Host              $http_host;
        proxy_set_header  X-Real-IP         $remote_addr;
        proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header  X-Forwarded-Proto $scheme;
        proxy_read_timeout                  900;
    }
}

Replace the server_name value with your own domain name and save the file.

Increase nginx file upload size

By default, nginx has a 1 MB limit on file uploads. As Docker image layers exceed this limit, you need to increase the upload size in the nginx configuration. In this example, I create an extra nginx configuration file with a 2 GB upload limit.

Go to nginx configuration directory

$ cd my-registry/nginx/conf.d
$ vi additional.conf

Add the following line and save the file

client_max_body_size 2G;

Configure SSL certificate and Authentication

After creating the nginx configuration file, we need to set up an SSL certificate. You should have a valid SSL certificate file with a private key. Copy your certificate file and private key into the nginx/ssl directory:

$ cd my-registry/nginx/ssl
$ cp /your-ssl-certificate-path/certificate.crt .
$ cp /your-private-key-path/private.key .

If you do not have a purchased SSL certificate, you can generate a self-signed one. Remember that self-signed certificates are not recommended for production environments. To generate a self-signed certificate with the file names the nginx configuration expects, run the following from the my-registry directory:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
 -keyout nginx/ssl/private.key -out nginx/ssl/certificate.crt

You will be asked for details such as country code, domain name and email address. Fill in the details and continue.
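For scripted setups you can skip the interactive prompts with -subj, and then verify that the certificate and key actually match before nginx uses them — a self-contained sketch (the subject values are illustrative):

```shell
# Generate a throwaway self-signed pair non-interactively in a temp dir,
# then compare the moduli: the two digests must be identical for a valid pair.
tmp=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$tmp/private.key" -out "$tmp/certificate.crt" \
  -subj "/C=US/O=LinuxTechi/CN=registry.linuxtechi.com"
openssl x509 -noout -modulus -in "$tmp/certificate.crt" | openssl md5
openssl rsa  -noout -modulus -in "$tmp/private.key"     | openssl md5
```

Run the same two modulus commands against nginx/ssl/certificate.crt and nginx/ssl/private.key to validate the real pair.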

Now set up basic authentication.

Go to the auth directory (relative to nginx/ssl):

$ cd ../../auth

Create a new password file named registry.password for your user. In this example I use the linuxtechi user.

$ htpasswd -Bc registry.password linuxtechi

If you get ‘htpasswd not found command‘, run the following command in your terminal and try again.

$  sudo apt install apache2-utils -y

Type a strong password and enter it again to confirm. You have now added a basic authentication user for the Docker registry.

Run Docker Registry

The setup is complete. You can now bring up the registry with docker-compose.

Go to the directory where you created the docker-compose.yml file:

$ cd my-registry

Now run the following command:

$ docker-compose up -d

The Docker registry is now up; you can verify the running containers using the following command:

$ docker ps -a

You will get following output:

docker-ps-a-command-output-linux

Pull Image from Docker Hub to a Private registry

To store an image from Docker Hub in the private registry, use the docker pull command to pull it from Docker Hub. In this example, I am going to pull the CentOS image.

$ docker pull centos

After successfully pulling the image from Docker Hub, tag it to label it for the private registry.

In this example, I tag the centos image as registry.linuxtechi.com/linuxtechi-centos:

$ docker image tag [image name] registry.linuxtechi.com/[new-image-name]

Example:

$ docker image tag centos registry.linuxtechi.com/linuxtechi-centos

To check whether the Docker image is available locally, run the following command.

$ docker images

Push docker image to private registry

You have pulled a Docker image from Docker Hub and tagged it for the private registry. Now you need to push the local image to the private registry.

First, log in to your private registry using the following command:

$ docker login https://registry.linuxtechi.com/v2/

Use your own registry url in the place of ‘https://registry.linuxtechi.com’

You will be prompted for a username and password; on success you will get a login message:

docker-login-private-registry-linux

Now you can push your Docker image to the private registry. To push the image, run the following command:

$ docker push registry.linuxtechi.com/linuxtechi-centos

Replace your image name after ‘docker push’

Once push is completed, you can go to browser and enter the url:

https://registry.linuxtechi.com/v2/_catalog

Replace registry.linuxtechi.com with your own URL and provide the basic authentication credentials. You will find the repository list as:

docker-private-registry-gui-linux

Pulling docker image from Private Registry

You have pushed your local Docker image to your private registry. In the same way, you can pull Docker images from your private registry to the local server.

Run the following command to login in your private registry server.

$ docker login https://registry.linuxtechi.com

Replace registry.linuxtechi.com with your own private registry URL and provide the basic authentication credentials. Once the login is successful, run the following command to pull the Docker image from the private registry. In this example, I pull the previously pushed image to the local server; substitute your own image name.

$ docker pull registry.linuxtechi.com/linuxtechi-centos

You will have output similar as:

docker-pull-image-private-registry-linux

Conclusion:

In this article you have learned how to host your own private Docker registry, how to pull images from Docker Hub to a local server, tag an image and push it to the private registry, and how to pull images from the private registry back to the local server.

Also Read : How to Install KVM on Ubuntu 20.04 LTS Server (Focal Fossa)

How to Access Remote Windows Desktop from Ubuntu Linux
You must have heard of the Windows app “Remote Desktop Connection“. This application comes with a default Windows installation and allows you to access another PC or server remotely. It uses the Remote Desktop Protocol (RDP) to establish remote desktop sessions.

Some Linux distributions provide RDP clients to connect to Windows systems out of the box; on others you may need to install an RDP client to establish a remote desktop connection.

As a Linux user, there are several RDP tools available that you can install and use for remote Windows connections. In this article we explain how to install RDP clients on Ubuntu Linux and use them to access (or connect to) a remote Windows desktop.

Remmina

Remmina is a free, open-source and powerful remote desktop client. Thanks to its useful feature set, many Linux and UNIX users adopt Remmina to connect to remote desktops.

You can install Remmina on your Linux system using the following commands.

On Ubuntu,

$ sudo apt update
$ sudo apt install -y remmina remmina-plugin-vnc

Once Remmina is installed on your system, you can launch its GUI for remote desktop connections.

Reminna_RDP_client

Enter your Windows system's IP address and press Enter. You will be prompted for username and password details. Submit the details to control your remote desktop.

Vinagre

Vinagre is an SSH, VNC and RDP client for the GNOME desktop environment. It has advanced features such as connecting to multiple servers simultaneously and switching between them using tabs. Vinagre also supports copy/paste between client and server.

To install Vinagre on Ubuntu, use the following command.

$ sudo apt update
$ sudo apt install -y vinagre

After completing the installation, go to your application list and search for “Remote Desktop”.

Click on the application as shown in the below image.

Vinagre

Click on Connect, select RDP from the drop-down menu, enter your remote desktop credentials and click Connect.

Vinagre_RDP_access

KRDC

KRDC is a remote desktop tool designed for the KDE desktop environment. KRDC supports two protocols, VNC and RDP, giving you hassle-free access to your remote desktop.

To install and configure KRDC in your system follow the commands:

$ sudo apt update
$ sudo apt install -y krdc

Once the installation is completed, you are good to use the KRDC client. You can type krdc on the command line, which opens the GUI tool, or search for krdc in your application list and launch it.

KRDC_RDP_client

Click on the KRDC application and you will get the following gui on your screen.

KRDC_rdp_access

Select the RDP protocol from the drop-down menu, enter your remote desktop's IP address and press Enter.

You can customize the configuration; once you are done, click OK. You will be asked to enter your username and password at the next prompt:

KRDC_rdp_config

Enter your username and password as asked and you are ready to use your remote desktop.

KRDC_rdp_username

FreeRDP

FreeRDP is a free and open-source client for the Remote Desktop Protocol, released under the Apache license. To install FreeRDP on Ubuntu, use the following commands; the freerdp2-x11 package provides the FreeRDP client.

$ sudo apt update
$ sudo apt install -y freerdp2-x11

Once the installation is complete, you can use the following command to access your remote desktop:

$ xfreerdp /u:remote_user /p:remote_password /v:remote_host_ip

When you run this command, you may get a warning asking you to accept the server's certificate. Answer “yes” and you will be able to access your remote desktop:

KRDC_rdp_remote access

Conclusion

In this article, you have learned how to access a remote Windows desktop from your Linux machine, which tools are available for RDP connections, and how to configure and use them for remote connections.

Also Read : Top 8 Music Player for Ubuntu and Linux Mint

How to Install NFS Server on Debian 12 Step-by-Step
In this post, we will show how to install an NFS server on Debian 12 step-by-step. We will also explain how to mount an NFS share on a remote Linux machine using the automount utility.

Network File System (NFS) is a widely used protocol for sharing files and directories between Unix-like operating systems over a network. It allows you to seamlessly access files and folders on remote servers as if they were local. The latest version is NFSv4, which provides central management and can be secured with a firewall and Kerberos authentication.

Prerequisites

  • A Debian 12 System to act as the NFS server.
  • Any Linux machine that will serve as the NFS client.
  • Administrative access to both the server and client.

Lab Environment

  • NFS server : 192.168.1.240 (Debian 12)
  • NFS Client :  192.168.1.250 (Rocky Linux)

Now, let’s dive into the installation and configuration steps of NFS Server.

1) Install NFS Server on Debian 12

First, let’s set up the NFS server on your Debian 12 machine. Open a terminal and follow these steps:

Update your package list to ensure you have the latest information about available packages:

$ sudo apt-get update

Install nfs server package:

$ sudo apt install nfs-kernel-server -y

Install-nfs-server-package-debian-12

After installing NFS server package and its dependencies, start the NFS service and enable it to start on boot:

$ sudo systemctl start nfs-kernel-server
$ sudo systemctl enable nfs-kernel-server

To verify the NFS service, run:

$ sudo systemctl status nfs-kernel-server

NFS-Service-Status-Debian12

2) Configure NFS Exports on the Server

First, make a directory which will be shared over the network using NFS.

$  sudo mkdir -p /mnt/nfsshare

As the NFS share will be used by any user on the client, set the following ownership on the directory using the chown command.

$ sudo chown nobody:nogroup /mnt/nfsshare

Assign the following permissions to the directory so that users on the client machine can read and write files. You can adjust this to your requirements.

$ sudo chmod 755 /mnt/nfsshare

Update the export information in the /etc/exports file:

$ sudo vi /etc/exports

Add the following entry at the end of the file.

/mnt/nfsshare 192.168.1.0/24(rw,sync,no_subtree_check)

Save and exit the file.

Your /etc/exports file should look like:

NFS-Server-Exports-Entry-Debian12

Here,

  • 192.168.1.0/24 : Allowed network to access NFS share.
  • rw: Allows read and write access.
  • sync: Data is written to the server’s disk immediately for consistency.
  • no_subtree_check: Disables subtree checking to improve NFS performance.
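For reference, a single /etc/exports line may carry several client specifications separated by spaces, each with its own options — a hypothetical variant that additionally grants one host read-only access (the 192.168.1.50 address is illustrative):

```
/mnt/nfsshare 192.168.1.0/24(rw,sync,no_subtree_check) 192.168.1.50(ro,sync,no_subtree_check)
```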

Now, export the shared directory.

$ sudo exportfs -a

If the above command does not throw any error, your configuration is correct.

3) Firewall Rules for NFS Server

If a firewall is enabled on your Debian system, you must allow NFS traffic. Open the necessary ports to allow clients to connect to NFS using the following commands:

$ sudo ufw allow from 192.168.1.0/24 to any port nfs
$ sudo ufw reload

4) Install NFS Client on Remote System

Now, access the remote Linux system where you want to mount the NFS share and install the NFS client package:

For Rocky Linux / AlmaLinux / RHEL

$ sudo dnf install nfs-utils -y

For Ubuntu & Debian

$ sudo apt install nfs-common

5) Mount NFS Share on Client Machine

Make a directory on which the NFS share will be mounted:

$ sudo mkdir -p /mnt/shared_nfs

For a permanent mount, add the following entry to the /etc/fstab file. Open the file using your favorite editor.

$ sudo vi /etc/fstab

Add following line at the end of the file,

192.168.1.240:/mnt/nfsshare /mnt/shared_nfs nfs4 defaults,user,exec,_netdev 0 0

save and exit the file.

Your file should look like,

Mount-NFS-Share-Using-Fstab-File-Linux

where,

  • 192.168.1.240:/mnt/nfsshare = shared folder coming from nfs server
  • /mnt/shared_nfs = mount directory in client machine
  • nfs4 = signifies nfs version 4
  • defaults,user,exec = Permit any user to mount the file system and allow them to execute binaries
  • _netdev = Prevents the client from attempting to mount the NFS file system until the network is up.

Mount the NFS file system using the commands below:

$ sudo systemctl daemon-reload
$ sudo mount -a

Mount-NFS-Share-Df-Command-Output

You can test the connection by creating a file in /mnt/shared_nfs on the client machine.

Let’s try to create a file with the touch command on the NFS share:

$ cd /mnt/shared_nfs
$ touch testFile.txt

If this doesn’t show any error, your configuration is fine and you are ready to use the NFS share.

Note: It is not recommended to use fstab for mounting NFS shares. Instead, use the automounter, which mounts and unmounts NFS shares on demand.

Mounting NFS Share using automount utility

Install the autofs package, which provides the automount utility. Run the following command on the client machine:

$ sudo dnf install autofs     // RHEL or Rocky Linux or Alma Linux
$ sudo apt install autofs    // Ubuntu or Debian

Edit the file /etc/auto.master and change the line “/misc /etc/auto.misc” to “/- /etc/auto.misc”:

$ sudo vi /etc/auto.master
---
#/misc   /etc/auto.misc
/-       /etc/auto.misc
---

Save and close the file.

Edit-Auto-Master-File-for-Direct-Mapping-RockyLinux

Now, edit the file “/etc/auto.misc” and add the following entry at the end,

$ sudo vi /etc/auto.misc
.....
/mnt/shared_nfs -fstype=nfs,rw,soft,intr 192.168.1.240:/mnt/nfsshare
.....

NFS-Share-Entry-Auto-Misc-File

Where

  • /mnt/shared_nfs : Mount point on the client machine
  • -fstype=nfs,rw,soft,intr : File system type and options
  • 192.168.1.240:/mnt/nfsshare : NFS Server and its share path
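autofs also unmounts idle shares automatically after a timeout (typically 300 seconds by default). Assuming the direct map above, the timeout can be tuned per map in /etc/auto.master with the standard --timeout map option, for example:

```
/-    /etc/auto.misc  --timeout=60
```

Here the share would be unmounted after 60 seconds of inactivity and re-mounted transparently on the next access.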

Restart the autofs service for the above changes to take effect:

$ sudo systemctl restart autofs

Now, try to access the directory “/mnt/shared_nfs” and autofs should automatically mount the NFS share.

$ cd /mnt/shared_nfs/ ; ls

Test-Automount-Linux-Command-Line

Perfect, the above output confirms that automount is working fine, as the NFS share is mounted automatically when we access the folder "/mnt/shared_nfs".

That’s all from this post. We hope you found it informative and useful. Please post your queries and feedback in the comments section below.

The post How to Install NFS Server on Debian 12 Step-by-Step first appeared on LinuxTechi.]]>
How to Launch AWS EC2 Instance Using Terraform https://www.linuxtechi.com/how-to-launch-aws-ec2-instance-using-terraform/ Wed, 13 Jan 2021 04:31:13 +0000
Terraform is an open-source ‘infrastructure as code’ command line tool used to manage infrastructure in the cloud. With Terraform, you write declarative configuration files in the HashiCorp Configuration Language (HCL) to provision your infrastructure. For instance, if you need a virtual machine, you define resources like memory, storage, and compute in the form of code and push it to the cloud, and you get the virtual machine or virtual instance. Terraform supports all major cloud providers, including Amazon Web Services, Google Cloud, Alibaba Cloud, and Microsoft Azure.

This article covers the installation of Terraform on an Ubuntu 20.04 LTS system and launching an AWS EC2 instance (CentOS 8 Stream) with it.

Installation of Terraform on Ubuntu 20.04 LTS

Download the latest version of Terraform from https://www.terraform.io/downloads.html. At the time of writing, the latest version is 0.14.3.

To download Terraform from the command line, run the following wget command:

$ wget https://releases.hashicorp.com/terraform/0.14.3/terraform_0.14.3_linux_amd64.zip

Now, unzip the downloaded file.

$ sudo apt install unzip -y
$ sudo unzip terraform_0.14.3_linux_amd64.zip

This extracts a single binary named terraform; move it to /usr/local/bin/ so the command is on your PATH:

$ sudo mv terraform /usr/local/bin/

Check the version

$ terraform version

This should give you output similar to the one below:

ubuntu@linuxtechi:~$ terraform version
Terraform v0.14.3
ubuntu@linuxtechi:~$

Perfect, the above output confirms that Terraform has been installed.

Launching AWS EC2 Instance Using Terraform

Let’s make a directory and configure Terraform inside it. Run the following commands:

$ mkdir terraform
$ cd terraform

Now, create a configuration file. I am naming it config.tf here; you can pick any name you like, but remember the extension must be ‘.tf’.

$ vi config.tf

In the first block, specify the AWS provider with your access key, secret key, and the region where you are going to launch the EC2 instance. Here, I am going to use my favorite Singapore region (ap-southeast-1).

In the second block of the code, define a resource of type ‘aws_instance’ with an AMI (I picked the AMI from the CentOS AMI list <https://wiki.centos.org/Cloud/AWS>). Give an instance type and also a tag of your choice.

provider "aws" {
  access_key = "YOUR-ACCESS-KEY"
  secret_key = "YOUR-SECRET-KEY"
  region     = "ap-southeast-1"
}

resource "aws_instance" "instance1" {
  ami           = "ami-05930ce55ebfd2930"
  instance_type = "t2.micro"
  tags = {
    Name = "Centos-8-Stream"
  }
}

Save & close the file.
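One caution: hardcoding keys in config.tf means they can end up in version control. As an alternative sketch (this relies on the AWS provider's standard credential lookup, not anything specific to this example), the provider block can omit the keys, and Terraform will pick up credentials from the shared credentials file or from environment variables:

```hcl
# Credentials are read from ~/.aws/credentials or from the
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables.
provider "aws" {
  region = "ap-southeast-1"
}
```

Everything else in the configuration stays the same.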

Now, initialize your configuration by executing the terraform command below:

$ terraform init

Once Terraform has initialized, preview what is going to happen by executing:

$ terraform plan

If everything goes fine, then you should see the following output.

terraform-plan

Now, apply your Terraform code:

$ terraform apply

Type ‘yes’ and press Enter to confirm.

enter-yes-terraform-apply

On successful execution, you should see output like the one below:

success-terrafrom-apply

Log in to your AWS account and go to the EC2 service; you should find an EC2 instance with the tag you defined above.

ec2-in-aws-console
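Rather than hunting for instance details in the console, Terraform can print them after apply. A hedged addition to config.tf (the attribute name follows the aws_instance resource's exported attributes) is an output block:

```hcl
# Prints the instance's public IP at the end of 'terraform apply'
# (also retrievable later with 'terraform output').
output "instance_public_ip" {
  value = aws_instance.instance1.public_ip
}
```

After the next apply, the value appears in the command output without opening the console.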

It is simple and easy to provision infrastructure in the cloud using Terraform. We hope you liked the article; if you faced any difficulty, please let us know in the comments.

The post How to Launch AWS EC2 Instance Using Terraform first appeared on LinuxTechi.]]>
Monitor API Call and User Activity in AWS Using CloudTrail https://www.linuxtechi.com/monitor-api-call-user-activity-aws-cloudtrail/ Mon, 04 Jan 2021 05:38:55 +0000
CloudTrail is a service that is used to track user activity and API usage in the AWS cloud. It enables auditing and governance of the AWS account. With it, you can continuously monitor what is happening in your AWS account. It provides an event history that tracks resource changes. You can also enable logging of all the events to S3 and analyze them with another service such as Athena or CloudWatch.

In this tutorial, we are going to look at the event history of your AWS account. We are also going to create a ‘trail’, store the events in S3, and analyze them using CloudWatch.

Event history

Event history logs all read/write management events. It lets you view, filter, and download your recent AWS account activity over the past 90 days. You do not need to set up anything for it.

Using AWS console

Go to the service ‘CloudTrail’ and click on the dashboard. You can see the event name, time, and source. You can click on ‘View full Event history’ to get all the events.

event-history-from-dashboard

event-history-detail

On the Event history detail page, you can apply filters of your choice. To see all the events, use the ‘Read-only’ filter with the value ‘false’, as shown above.

Using AWS CLI

You can also use the AWS CLI to look at the events. The following command shows the terminated instances in your account:

# aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances
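lookup-events returns a JSON document with a top-level Events list, which can get verbose. A minimal post-processing sketch (the sample file below is hypothetical, but it mirrors the real EventName and Username keys of the response) that reduces it to name/user pairs:

```shell
# Hypothetical saved output of 'aws cloudtrail lookup-events';
# real responses have the same top-level "Events" list.
cat > events.json <<'EOF'
{"Events": [{"EventName": "TerminateInstances", "Username": "admin"},
            {"EventName": "RunInstances", "Username": "dev"}]}
EOF

# Reduce to "EventName Username" pairs using only the Python stdlib.
python3 - <<'EOF'
import json
with open("events.json") as f:
    data = json.load(f)
for event in data["Events"]:
    print(event["EventName"], event["Username"])
EOF
```

The same filtering works on live output by piping `aws cloudtrail lookup-events` to a file first.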

Trails

Now, let’s create a trail that will log all the events of your account and store them in an S3 bucket.

On the left side, select Trails and click on ‘Create trail’.

create-trail

On the next page, give a trail name, choose to create a new S3 bucket, and give a bucket name. (If you already have a bucket, you can choose the existing S3 bucket as well.)

choose-trail-attribute-1

Scroll down the page and enable CloudWatch Logs. Create a log group and give it a name. Also, assign an IAM role and give it a name. Then click on Next.

click-next

If you want to log all types of events, select them under the ‘Event type’ section. We are only going with management events, so click on Next.

choose-log-events-next

Now, review your configuration and click on ‘Create Trail’.

You can also see the list of created trails with the help of the following AWS command.

# aws cloudtrail list-trails

list-trails-cli

Use the following command to see the details of the trail we created above.

# aws cloudtrail describe-trails --trail-name-list management-events

describe-trails-cli

Analyze log in Cloudwatch

While creating the trail, we configured CloudTrail to send logs to CloudWatch. So, go to the CloudWatch service and click on ‘Log groups’.

log-groups-cloudwatch

By default, logs are kept indefinitely and never expire. Here, you can also apply a filter to get the desired output. For example, let’s see all the instance launches in the AWS account. To do this, use the filter ‘RunInstances’ as shown below. The output is shown in JSON format.

runinstance-filter-cloudwatch

You can also use the CLI to get the log events. Run the following command to fetch all the events of the log group you defined above.

# aws logs filter-log-events --log-group-name aws-cloudtrail-logs-20201229

In this article, we saw how to audit and track activities in an AWS account using CloudTrail. Thank you for reading.

Also Read: How to Create and Add EBS Volume in AWS Instance (EC2)

The post Monitor API Call and User Activity in AWS Using CloudTrail first appeared on LinuxTechi.]]>