Dear Readers, welcome to Linux Interview Questions and Answers. This collection has been designed to acquaint you with the nature of questions you may encounter during a job interview on the subject of Linux. These Linux questions are very important for campus placement tests and job interviews. In my experience, good interviewers rarely plan to ask any particular question during an interview; however, these model questions are commonly asked in the online technical tests and interviews of many IT companies.
GNU GRUB is a Multiboot boot loader. It was derived from GRUB, the GRand Unified Bootloader, which was originally designed and implemented by Erich Stefan Boleyn.
Briefly, a boot loader is the first software program that runs when a computer starts. It is responsible for loading and transferring control to the operating system kernel software (such as the Hurd or Linux). The kernel, in turn, initializes the rest of the operating system (e.g. GNU).
$HOME/.bash_profile, $HOME/.bashrc
The file is /etc/inittab, and the following line needs to change:
id:5:initdefault: to id:3:initdefault:
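As a sketch, the change can be scripted with sed. It is shown here on a scratch copy of the line, since editing the real /etc/inittab requires root:

```shell
# demonstrate the runlevel change on a scratch copy of the file
f=$(mktemp)
echo 'id:5:initdefault:' > "$f"    # the line as shipped (runlevel 5, graphical)
sed -i 's/^id:5:initdefault:/id:3:initdefault:/' "$f"    # switch default to runlevel 3
cat "$f"
```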
The finger command can be used, e.g.: finger username
An inode is a data structure on a traditional Unix-style file system such as UFS or ext3. An inode stores basic information about a regular file, directory, or other file system object.
The inode number, also called the index number, stores the following attributes:
File type (executable, block special, etc.)
Permissions (read, write, etc.)
Owner
Group
File size
File access, change and modification times (remember, UNIX/Linux never stores the file creation time; this is a favorite question in UNIX/Linux sysadmin job interviews)
File deletion time
Number of links (soft/hard)
Extended attributes, such as append-only or immutability (no one, including the root user, can delete the file)
Access Control Lists (ACLs)
The following command shows the inode numbers of files and folders:
ls -i
The following command shows complete information about any file or folder, including the inode number:
stat file/folder
Files and folders can also be deleted by inode number: find the inode number using the 'ls -il' command, then run:
find . -inum inode_number -exec rm -i {} \;
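A self-contained sketch of the procedure, using a scratch directory and a hypothetical file name (the -i flag of rm is dropped so it runs non-interactively):

```shell
# create a scratch file, look up its inode number, then delete it via find -inum
cd "$(mktemp -d)"
touch 'weird name.txt'
inum=$(ls -i 'weird name.txt' | awk '{print $1}')   # first column is the inode number
echo "inode: $inum"
find . -inum "$inum" -exec rm {} \;
```

This is handy for files whose names contain characters that are hard to type or quote.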
blockdev command
This is sample output - yours may be different.
# Before test
$ blockdev --getra /dev/sdb
256
$ time dd if=/tmp/disk.iso of=/dev/null bs=256k
2549+1 records in
2549+1 records out
668360704 bytes (668 MB) copied, 6.84256 seconds, 97.7 MB/s
real 0m6.845s
user 0m0.004s
sys 0m0.865s
# After test
$ blockdev --setra 1024 /dev/sdb
$ time dd if=/tmp/disk.iso of=/dev/null bs=256k
2435+1 records in
2435+1 records out
638390272 bytes (638 MB) copied, 0.364251 seconds, 1.8 GB/s
real 0m0.370s
user 0m0.001s
sys 0m0.370s
usermod -L username
sh, bash, csh, tcsh, ksh, nologin
/etc/sysconfig/network-scripts/ifcfg-ethX
We can do this with either of the following two methods:
ethtool -s eth0 speed 100 duplex full
ethtool -s eth0 speed 10 duplex half
OR
mii-tool -F 100baseTx-HD
mii-tool -F 10baseT-HD
/etc/resolv.conf
/etc/exports and exportfs -av; the daemons are quotad, portmapper, mountd, nfsd and nlockmgr/status
netstat -antp
Soft Links =>
1) A soft link file has a different inode number than the source file
2) If the original file is deleted, the soft link becomes useless (dangling)
3) Soft links are not updated when the source file is moved
4) Can create links between directories
5) Can cross file system boundaries
Hard Links =>
1) A hard link has the same inode number as the source file
2) Hard links cannot link directories
3) Cannot cross file system boundaries
4) A hard link always refers to the file's data, even if the source is moved or removed
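The differences above can be seen directly; a minimal sketch in a scratch directory:

```shell
# contrast hard and soft links on a scratch file
cd "$(mktemp -d)"
echo data > source.txt
ln source.txt hard.txt       # hard link: shares the inode of source.txt
ln -s source.txt soft.txt    # soft link: its own inode, points to the name
ls -li                       # compare the inode numbers in the first column
rm source.txt
cat hard.txt                 # the data is still reachable through the hard link
```

After the rm, soft.txt is left dangling while hard.txt still holds the data.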
Display users who are using a file/folder: fuser -u file/folder
Kill all processes which are using a file/folder: fuser -k file/folder
killall -u username
We can use logrotate's "copytruncate" option, which simply copies the original log file and then truncates it, so the application can keep writing to the same file handle.
Cron:
1) Minimum granularity is one minute (i.e. jobs can be scheduled to run every minute)
2) A cron job can be scheduled by any normal user (if not restricted by the superuser)
3) Cron expects the system to be running 24x7. If a job is scheduled and the system is down at that time, the job is not executed
4) Ideal for servers
5) Use cron when a job has to be executed at a particular hour and minute
Anacron:
1) Minimum granularity is one day
2) Anacron can be used only by the superuser (but there are workarounds to make it usable by a normal user)
3) Anacron doesn't expect the system to be running 24x7. If a job is scheduled and the system is down at that time, it starts the job when the system comes back up
4) Ideal for desktops and laptops
5) Use anacron when a job has to be executed irrespective of hour and minute
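For illustration, a hypothetical entry for each (the script path and job name are placeholders):

```
# crontab entry: run a script at 02:30 every day (minute granularity)
# m  h dom mon dow  command
30 2 * * * /usr/local/bin/backup.sh

# /etc/anacrontab entry: run the same job once per day (day granularity),
# 10 minutes after boot if the scheduled run was missed
# period delay job-identifier  command
1 10 daily.backup /usr/local/bin/backup.sh
```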
SSH 22, FTP 20/21, HTTP 80, HTTPS 443, SMTP/SMTPS 25/465, POP3/POP3S 110/995, IMAP/IMAPS 143/993
1) Create a file FILE1 that is read/write/execute for all users, but read-only for user USER1
2) Copy FILE1 ACLs to FILE2 ACL
3) Delete USER1's rule for FILE1 which was set up in step 1)
1) touch FILE1 ; chmod 777 FILE1 ; setfacl -m u:USER1:r FILE1
2) getfacl FILE1 | setfacl --set-file=- FILE2
3) setfacl -x u:USER1 FILE1
Write efidisk.img from the images/ subdirectory of the RHEL 6 DVD to the USB device:
dd if=efidisk.img of=/dev/usb (replace /dev/usb with your USB device name)
Try the following to check:
Enable/Disable SMART on device/disk : smartctl -s on /dev/sda
Check device SMART health : smartctl -H /dev/sda
Check device SMART capabilities : smartctl -c /dev/sda
Enable/Disable automatic offline testing on device : smartctl -o on/off /dev/sda
Show device SMART vendor-specific Attributes and values : smartctl -A /dev/sda
Show device log [TYPE: error, selftest, selective, directory, background, scttemp[sts,hist]]: smartctl -l TYPE /dev/sda
Run test on device [TEST: offline, short, long, conveyance, select,M-N, pending,N, afterselect,[on|off], scttempint,N[,p]]: smartctl -t TEST /dev/sda
Set the following in /etc/sysctl.conf: net.ipv4.icmp_echo_ignore_all = 1
Then run "sysctl -p"
or
echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all
SYN Flood :
A SYN flood occurs when a host sends a flood of TCP SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to open a half-open connection: it sends back a TCP SYN-ACK packet (acknowledgement) and waits for the final ACK from the sender address. Because the sender address is forged, that response never comes. These half-open connections saturate the number of connections the server is able to make, keeping it from responding to legitimate requests until after the attack ends.
ICMP Flood: There are three types of ICMP flood:
1) Smurf Attack : http://en.wikipedia.org/wiki/Smurf_attack
2) Ping Flood : http://en.wikipedia.org/wiki/Ping_flood
3) Ping of Death : http://en.wikipedia.org/wiki/Ping_of_death
Use "ssh-keygen -t rsa" (or -t dsa) on the local system to create the public and private keys.
Then copy /root/.ssh/id_rsa.pub to remote_server as /root/.ssh/authorized_keys (the ssh-copy-id command automates this step).
Change permissions of the /root/.ssh/authorized_keys file on remote_server: "chmod 0600 ~/.ssh/authorized_keys"
Now try to log in from the local system to remote_server: "ssh root@remote_server"
These are the commands that can help.
top, sar, vmstat, iostat, free
The boot process follows 6 major steps: BIOS > MBR > GRUB > KERNEL > INIT > RUN LEVELS
For automatic rotation, compression, mailing or removal of log files (each day, or when a file grows too big), we can use the logrotate command: logrotate [-dv] [-f] [-s statefile] configfile. The -f option on the command line forces rotation, and the -m option specifies the mail command, which takes two arguments: the subject and the recipient name.
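A hypothetical /etc/logrotate.d/ entry sketching these options (the log file name and mail address are placeholders):

```
/var/log/myapp.log {
    daily                    # rotate each day
    rotate 7                 # keep seven old logs
    compress                 # gzip rotated logs
    copytruncate             # copy then truncate, so the app keeps its file handle
    mail admin@example.com   # mail logs that are about to be removed
}
```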
The documentation of an application is usually installed together with the application, in a directory named after the application. For example, for an application App1 the documentation will be stored under /usr/share/doc/App1 (on older systems, /usr/doc/App1). The documentation includes all the information regarding that application, such as when it was created, its modules and, of course, its name.
To create a new user account we use the useradd command. Without the -D option, useradd creates the new account using the default system values combined with the values given on the command line. The home directory location defaults to the system default; to actually create the home directory, use the -m option.
With POP3, the inbox is checked on the mail server and new mails are downloaded to your computer and stored there. With IMAP, only the mail headers are downloaded, and when you click on a message it is fetched from the server so you can read it. With IMAP the messages are stored on the mail server, although they can be copied to your PC's "Local Folders". So POP3 is useful when you use a single computer for checking email and there is no need to keep the messages on the mail server. When you check your email from more than one place or computer, IMAP is the solution.
A file server is a place used to maintain files so that they can be accessed by systems on the network. The files can be given different access privileges. On Linux there is software called "samba" for file sharing, editing and viewing on remote systems, which supports Windows 9x, Millennium, 2000 and NT as well as Mac computers. A deleted file can often be recovered, provided backups are regularly made of the files on the file server.
These two are a little different: a shell is a user program that provides an interface for command execution, while a shell script is a script written for the shell program to execute. A shell script is not hard to check for errors, compared to other, larger programs. It does have a disadvantage, though: it is a little slow, because a new process is started for each external command the script executes.
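A minimal shell script of the kind described (the function and greeting are arbitrary examples):

```shell
#!/bin/sh
# greet.sh - a tiny shell script: define a function and call it
greet() {
    echo "Hello, $1"
}
greet "world"
```

Saved as greet.sh and made executable with chmod +x greet.sh, it runs as ./greet.sh.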
The driver must be put on a floppy disk; then, when setup boots from the CD or DVD, type dd at the boot prompt. The driver will then be loaded from the floppy drive, and after this we can use the fdisk or Disk Druid utilities to configure the partitions. When driver setup is automatic, things become as simple as with IDE drives, and the disks can be accessed as /dev/sda, /dev/sdb and so on.
Yes. First the boot loader is loaded by the BIOS, then the boot loader loads the kernel, then the file systems are mounted by the kernel so the drivers can be loaded and initialized. In more detail, the boot process happens in 4 major steps:
1) When we turn on the computer, the BIOS (a chip on the motherboard) starts the processor and runs the power-on self tests to verify that the connected devices are available for use. After this, the BIOS jumps to a specific place in memory (RAM) and looks for the boot device. The hard disk's first sector is the boot sector, and from here the MBR is loaded into RAM.
2) The booting process is carried on by the boot loader (GRUB and LILO are commonly used boot loaders), which gives the user different boot options and determines how the kernel is loaded.
3) After it is loaded, the kernel handles the rest of the boot process: hardware initialization begins and partitions are detected by the kernel.
4) INIT loads.
A centralized Linux server that stores no workstation state locally is called a stateless Linux server. This setup is used when a user wants a specific system's state to be available on other machines too. In practice, the server stores each machine's prototype and snapshots, as well as the home directories. LDAP is used to determine which state fits each system.
syslogd is used for tracking information about the system and saving it to log files. It uses Unix domain or Internet sockets for local and remote logging. The logged messages contain fields such as the hostname, time and program name, which is the information the daemon tracks. syslogd reacts to signals sent by the user:
SIGHUP - performs a re-initialization: all open files are closed, the configuration file (/etc/syslog.conf by default) is re-read and syslog is restarted;
SIGTERM - syslogd is killed;
SIGINT, SIGQUIT - if debugging is not enabled syslogd is killed, otherwise these signals are ignored;
SIGUSR1 - switches debugging on or off (used along with the -d debug option);
SIGCHLD - waits for child processes (created for waiting messages).
SELinux stands for Security-Enhanced Linux. It is a prototype of the Linux kernel, together with utilities, that implements improved security functionality in order to demonstrate to the Linux community the value of mandatory access controls. To achieve greater security, the architectural components are based on the Flask design: Role-Based Access Control, Multi-Level Security and Type Enforcement.
Because swap space is used to extend physical memory, the amount of memory (RAM) in the system determines the needed size of the swap space. In Linux, physical memory and swap space together count as the total amount of memory: for instance, if we have 16 MB of memory on the motherboard and assign 8 MB for swapping, the total memory used by Linux will be 24 MB.
There are two ways of managing hash table collisions: one is open addressing and the other is separate chaining. With open addressing, a data item that hashes to an occupied cell is moved to another cell in the array. With separate chaining, every element of the array holds a linked list, and colliding data items are placed in that list.
We can see which partition the lost file was located on with pwd (print working directory), and with the umount command we can unmount the file system. Then the "debugfs" command can manage and repair the majority of severe errors or bugs in Linux file systems. The command is: # debugfs followed by the device name of that partition. The next step is using "lsdel" to list the deleted inodes.
To view the boot messages we can use dmesg, a command that prints the kernel ring buffer messages to the screen; it is best used just after the boot sequence. The syntax is: dmesg [options]. If dmesg is called with no options, the messages from the kernel are written to standard output.
Shadow passwords are enabled using the pwconv command, and their purpose is to increase system security. The file /etc/shadow is created by that command, and the passwords in the /etc/passwd file are replaced with "x".
The working directory is the directory we are currently in and over which we, as users, have control when we log in. The current working directory is not necessarily the home directory.
The graphics are different; Linux has more commands and more user-friendly features than Unix; Linux is versatile and hardware-independent, while Unix traditionally requires specific machines for installation; and the kernel and file systems are different.
For checking disk and file system integrity, the command used is fsck.
Unix originally began as a proprietary operating system from Bell Laboratories, which later spawned different commercial versions. On the other hand, Linux is free, open source and intended as a non-proprietary operating system for the masses.
The key differences between the BASH and DOS console lies in 3 areas:
- BASH commands are case sensitive while DOS commands are not;
- under BASH, the / character is a directory separator and \ acts as an escape character; under DOS, / serves as a command argument delimiter and \ is the directory separator;
- DOS follows a convention in naming files: an 8-character file name followed by a dot and a 3-character extension. BASH follows no such convention.
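Two of these differences can be checked directly in BASH (this sketch assumes no program named LS exists on the system):

```shell
# \ escapes the following character: the space becomes part of one argument
echo a\ b
# command names are case sensitive: ls exists, LS (normally) does not
command -v ls >/dev/null && echo "ls found"
command -v LS >/dev/null || echo "LS not found"
```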
To open the default shell (which is where the command prompt can be found), press Ctrl-Alt-F1 (Ctrl-Alt-F1 through F6 switch between virtual consoles). This provides a command line interface (CLI) from which you can run commands as needed.
One highly probable reason for getting different results from what seems to be the same command is case sensitivity. Since Linux is case sensitive, a command that was previously used might have been entered in a different form. For example, to list all files in the directory, you should type the command ls, not LS. Typing LS would either result in an error message, if no program with that exact name exists, or produce a different output if there is a program named LS that performs another function.
It contains locally installed files. This directory matters particularly in environments where files are otherwise stored on the network: locally installed files go to /usr/local/bin, /usr/local/lib, etc. It is also used for software packages installed from source, or for software not officially shipped with the distribution.
Every process in the system is identified by a unique process id, or pid. Use the kill command followed by the pid to terminate that process. To signal all processes in the current process group at once, use kill 0.
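A short sketch: start a background job, capture its pid, and terminate just that process (sleep stands in for any long-running job):

```shell
# launch a long-running background job and kill it by its pid
sleep 300 &
pid=$!                        # pid of the most recent background job
kill "$pid"                   # send SIGTERM to that one process
wait "$pid" 2>/dev/null || true   # reap it; wait reports the signal status
kill -0 "$pid" 2>/dev/null || echo "process $pid terminated"
```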
Comments are created by typing the # symbol before the actual comment text. This tells the shell to completely ignore what follows. For example: “# This is just a comment that the shell will ignore.”
You can use parentheses to group commands. For example, if you want to send the current date and time along with the contents of a file named OUTPUT to a second file named MYDATES, you can apply command grouping as follows: (date; cat OUTPUT) > MYDATES
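A runnable sketch in a scratch directory, with file names taken from the example (the contents of OUTPUT are arbitrary):

```shell
# group date and cat so their combined output is redirected to one file
cd "$(mktemp -d)"
echo "some contents" > OUTPUT
(date; cat OUTPUT) > MYDATES
cat MYDATES            # the date line followed by the contents of OUTPUT
```

Without the parentheses, the redirection would apply only to the last command.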