Linux: find the largest files in a directory recursively

du -a /dir/ | sort -n -r | head -n 20
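The du pipeline above reports disk usage per entry. A find-based equivalent (GNU find; sizes in bytes, largest regular files first) would be something like:

```shell
# find-based version: print "<size in bytes> <path>" for every regular
# file, sort numerically descending, keep the 20 largest
dir=/dir/
find "$dir" -type f -printf '%s %p\n' 2>/dev/null | sort -nr | head -n 20
```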

Kill Listening port


sudo kill `sudo lsof -t -i:9143`
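The backtick form works; a sketch of the same idea with $(...) command substitution (easier to read and nest), wrapped in a hypothetical kill_port helper that only calls kill when lsof actually found a PID:

```shell
# kill_port: kill whatever is listening on the given TCP port.
# Prefix both commands with sudo if the process belongs to another user.
kill_port() {
  pids=$(lsof -t -i:"$1" 2>/dev/null)
  [ -n "$pids" ] && kill $pids   # unquoted on purpose: may be several PIDs
  return 0
}

kill_port 9143
```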

NFS share

Mount the NFS share by running the following command as root or user with sudo privileges:

sudo mount -t nfs <server>:/<exported_dir> /var/backup

To automatically mount an NFS share when your Linux system starts up, add a line to the /etc/fstab file. The line must include the hostname or the IP address of the NFS server, the exported directory, and the mount point on the local machine.

sudo nano /etc/fstab

# <file system> <dir> <type> <options> <dump> <pass>
<server>:/<exported_dir> /var/backups nfs defaults 0 0

Install VMware Remote Console

Download the bundle from VMware: VMware-Remote-Console-12.0.0-17287072.x86_64.bundle

chmod +x VMware-Remote-Console-12.0.0-17287072.x86_64.bundle

sudo ./VMware-Remote-Console-12.0.0-17287072.x86_64.bundle

Add User

To create a new user account named username using the adduser command, you would run:

sudo adduser username

If you want the newly created user to have administrative rights, add the user to the sudo group:

sudo usermod -aG sudo username
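The new group takes effect at the user's next login. To confirm membership, list an account's groups (shown here for the current account; pass an account name such as username as an argument to check another user):

```shell
# Print all groups the current account belongs to, space-separated
id -nG
```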

Delete User

To delete the user without removing the user's files, run:

sudo deluser username

If you want to delete the user and its home directory and mail spool, use the --remove-home flag:

sudo deluser --remove-home username

Run the sudo Commands Without Entering a Password

echo "username ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/username


Screen

sudo apt-get install screen

screen --version

Named sessions are useful when you run multiple screen sessions. To create a named session, run the screen command with the following arguments:

screen -S testcon

You can detach from the screen session at any time by typing:

Ctrl+a d

To resume your screen session use the following command:

screen -r

To find the session ID, list the currently running screen sessions with:

screen -ls

If you want to restore screen testcon.pts-0, then type the following command:

screen -r testcon
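Sessions can also be created and ended non-interactively, which is handy in scripts. A sketch (the session name scratch is arbitrary):

```shell
# -dmS starts a detached session running the given command,
# -ls lists sessions, -X quit ends one without attaching
screen -dmS scratch sleep 30
screen -ls | grep scratch
screen -S scratch -X quit
```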

End the session

screen -ls

There is a screen on: 162712.zoltan-copy

kill 162712


pkill screen


First you need to re-attach to the screen session:

screen -r 23520

Then press Ctrl+a followed by k, and press y when it asks if you really want to kill the session.

additional commands

When you start a new screen session, it creates a single window with a shell in it.

You can have multiple windows inside a Screen session.

To create a new window with a shell, type Ctrl+a c; the first available number from the range 0...9 will be assigned to it.

Below are some of the most common commands for managing Linux Screen windows:

Ctrl+a c Create a new window (with shell)

Ctrl+a " List all windows

Ctrl+a 0 Switch to window 0 (by number)

Ctrl+a A Rename the current window

Ctrl+a S Split current region horizontally into two regions

Ctrl+a | Split current region vertically into two regions

Ctrl+a tab Switch the input focus to the next region

Ctrl+a Ctrl+a Toggle between the current and previous region

Ctrl+a Q Close all regions but the current one

Ctrl+a X Close the current region


Change Hostname

sudo hostnamectl set-hostname zoltan-dev

sudo nano /etc/hostname

Replace the old name (e.g. localhost) with zoltan-dev.

sudo nano /etc/cloud/cloud.cfg

Search for preserve_hostname and change the value from false to true.

How to keep processes running after ending ssh session

SSH into your remote box, type screen, then start the process you want.

Press Ctrl+a then d. This will detach your screen session but leave your processes running. You can now log out of the remote box.

If you want to come back later, log on again and type screen -r This will resume your screen session, and you can see the output of your process.

Samba Share


sudo apt install samba

sudo nano /etc/samba/smb.conf


sudo chmod -R 0777 /datadisk2

sudo chown -R nobody:nogroup /datadisk2

sudo mount /dev/sdc1 /datadisk2

sudo nano /etc/fstab

/dev/sdc1 /datadisk2 ext4 defaults 0 0

sudo nano /etc/samba/smb.conf


Add a share section (the name in brackets is the share name clients will see):

[datadisk2]
path = /datadisk2
browsable = yes
writable = yes
guest ok = yes
read only = no
force user = nobody
create mask = 0777
directory mask = 0777


[movies]
path = /mnt/md0/plex/movies
browsable = yes
writable = yes
guest ok = yes
read only = no
inherit permissions = yes


With these settings, files created by guests are owned by nobody.


sudo service smbd restart

Change Folder Ownership


dan@arioch:/mnt/md0$ ls -l

total 20

drwx------ 2 root root 16384 Nov 3 14:41 lost+found

drwxr-xr-x 6 root root 4096 Nov 3 14:45 plex

dan@arioch:/mnt/md0$ sudo chgrp users plex

dan@arioch:/mnt/md0$ ls -l

total 20

drwx------ 2 root root 16384 Nov 3 14:41 lost+found

drwxr-xr-x 6 root users 4096 Nov 3 14:45 plex


sudo chgrp -R users plex

dan@arioch:/mnt/md0$ ls -l plex

total 16

drwxr-xr-x 2 root users 4096 Nov 3 14:45 homemovies

drwxr-xr-x 2 root users 4096 Nov 3 14:45 movies

drwxr-xr-x 2 root users 4096 Nov 3 14:45 photos

drwxr-xr-x 2 root users 4096 Nov 3 14:45 tvshow

The group ownership can be inherited by new files and folders created in your folder /path/to/parent by setting the setgid bit using chmod g+s like this:

chmod g+s /path/to/parent
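A quick way to see the effect in a throwaway directory (no root needed): after chmod g+s, an "s" appears in the group permissions, and new subdirectories inherit the parent's group (and the bit) automatically.

```shell
# Demonstrate the setgid bit in a scratch directory
parent=$(mktemp -d)/parent
mkdir "$parent"
chmod g+s "$parent"
ls -ld "$parent"            # e.g. drwxr-sr-x (note the "s")
mkdir "$parent/child"
ls -ld "$parent/child"      # child carries the parent's group
```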

Ubuntu Software RAID

Creating a RAID 1 Array

The RAID 1 array type is implemented by mirroring data across all available disks. Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.

Requirements: minimum of 2 storage devices

Primary benefit: Redundancy

Things to keep in mind: Since two copies of the data are maintained, only half of the disk space will be usable

Identifying the Component Devices

To get started, find the identifiers for the raw disks that you will be using:

lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

sda 100G disk

sdb 100G disk

vda 25G disk

├─vda1 24.9G ext4 part /

├─vda14 4M part

└─vda15 106M vfat part /boot/efi

As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda and /dev/sdb identifiers for this session. These will be the raw components we will use to build the array.

Creating the Array

To create a RAID 1 array with these components, pass them in to the mdadm --create command. You will have to specify the device name you wish to create (/dev/md0 in our case), the RAID level, and the number of devices:

sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

If the component devices you are using are not partitions with the boot flag enabled, you will likely see the following warning. It is safe to type y to continue:


mdadm: Note: this array has metadata at the start and

may not be suitable as a boot device. If you plan to

store '/boot' on this device please ensure that

your boot-loader understands md/v1.x metadata, or use
--metadata=0.90

mdadm: size set to 104792064K

Continue creating array? y

The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:

cat /proc/mdstat


Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

md0 : active raid1 sdb[1] sda[0]

104792064 blocks super 1.2 [2/2] [UU]

[====>................] resync = 20.2% (21233216/104792064) finish=6.9min speed=199507K/sec

unused devices: <none>

As you can see from the md0 line, the /dev/md0 device has been created in the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The resync line shows the progress of the mirroring. You can continue the guide while this process completes.

Creating and Mounting the Filesystem

Next, create a filesystem on the array:

sudo mkfs.ext4 -F /dev/md0

Create a mount point to attach the new filesystem:

sudo mkdir -p /mnt/md0

You can mount the filesystem by typing:

sudo mount /dev/md0 /mnt/md0

Check whether the new space is available by typing:

df -h -x devtmpfs -x tmpfs


Filesystem Size Used Avail Use% Mounted on

/dev/vda1 25G 1.4G 23G 6% /

/dev/vda15 105M 3.4M 102M 4% /boot/efi

/dev/md0 99G 60M 94G 1% /mnt/md0

The new filesystem is mounted and accessible.

Saving the Array Layout

To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file by typing:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:

sudo update-initramfs -u

Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:

echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab

Your RAID 1 array should now automatically be assembled and mounted each boot.

Ref: https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-18-04

Add desktop to Ubuntu Server

sudo apt-get install tasksel

Run sudo tasksel, then select the desktop environment you want.

Dual boot

# check to see if Ubuntu sees the Windows install

sudo os-prober

sudo nano /etc/default/grub

Comment out GRUB_TIMEOUT_STYLE=hidden

Set GRUB_TIMEOUT=10 so the menu is shown for 10 seconds

sudo update-grub

Ref: How To Dual Boot Windows 10 and Linux Mint On Separate Hard Drives (From A Linux User)