Articles From Emmett Dulaney
Article / Updated 09-28-2018
GPG includes the tools you need to use public key encryption and digital signatures on your Linux system. You can figure out how to use GPG gradually as you begin using encryption in Linux. The information you find here shows some of the typical tasks you can perform with GPG to protect your Linux system.

How to generate the key pair with GPG in Linux

The steps for generating the key pairs are as follows:

1. Type gpg --gen-key.
If you’re using GPG for the first time, it creates a .gnupg directory in your home directory and a file named gpg.conf in that directory. Then it asks what kind of keys you want:

Please select what kind of key you want:
(1) DSA and ElGamal (default)
(2) DSA (sign only)
(4) RSA (sign only)
Your selection?

2. Press Enter for the default choice, which is good enough.
GPG prompts you for the key size (the number of bits).

3. Press Enter again to accept the default value of 2,048 bits.
GPG asks you when the keys expire. The default is to never expire.

4. If the default is what you want (and why not?), press Enter. When GPG asks whether you really want the keys to never expire, press Y to confirm.
GPG prompts you for your name, your email address, and a comment to make it easier to associate the key pair with your name.

5. Type each piece of requested information, and press Enter. When GPG gives you a chance to change the information or confirm it, confirm by typing o and pressing Enter.
GPG prompts you for a passphrase that protects your private key.

6. Type a long phrase that includes lowercase and uppercase letters, numbers, and punctuation marks (the longer, the better), and then press Enter. Be careful to choose a passphrase that you can remember easily.
GPG generates the keys. It may ask you to perform some work on the PC so that the random-number generator can generate enough random numbers for the key-generation process.
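If you need to create keys as part of a scripted setup, newer versions of GnuPG can also generate a key pair unattended from a parameter file. The following is a sketch: the name, email address, and passphrase are placeholders, and the exact directives supported vary by GnuPG version.

```
# Save as keyparams, then run: gpg --batch --gen-key keyparams
%echo Generating an example key pair
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: Example User
Name-Email: user@example.com
Expire-Date: 0
Passphrase: correct-horse-Battery-9-staple
%commit
%echo Done
```

With GnuPG 2.1 and later, you can omit the Passphrase line and let the pinentry program prompt for one interactively instead.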
How to exchange keys using GPG in Linux

If you’re an administrator, protecting your Linux system should always be at the top of your mind. To communicate with others, you have to give them your public key. You also have to get public keys from those who may send you a message (or from anyone who signs a file whose signature you want to verify). GPG keeps the public keys in your key ring. (The key ring is simply the public keys stored in a file, but the name sounds nice because everyone has a key ring in the real world, and these keys are keys of a sort.) To list the keys in your key ring, type

gpg --list-keys

To send your public key to someone or to place it on a website, you have to export the key to a file. The best way is to put the key in what GPG documentation calls ASCII-armored format, with a command like this:

gpg --armor --export [email protected] > kdulaneykey.asc

This command saves the public key in ASCII-armored format (which looks like garbled text) in the file named kdulaneykey.asc. Replace the email address with your own (the one you used when you created the key) and replace the output filename with one of your choosing. After you export the public key to a file, you can mail that file to others or place it on a website for use by others.

When you import a key from someone, you typically get it in ASCII-armored format as well. If you have the US-CERT GPG public key (available from the US-CERT website) in a file named uscertkey.asc, you import it into the key ring with the following command:

gpg --import uscertkey.asc

Use the gpg --list-keys command to verify that the key is in your key ring.
Here’s what you might see when typing gpg --list-keys on the system:

/home/kdulaney/.gnupg/pubring.gpg
-----------------------------
pub 1024D/7B38A728 2018-08-28
uid Kristin Dulaney <[email protected]>
sub 2048g/3BD6D418 2018-08-28
pub 2048R/F0E187D0 2019-09-08 [expires: 2019-10-01]
uid US-CERT Operations Key <[email protected]>

The next step is checking the fingerprint of the new key. Type the following command to get the fingerprint of the US-CERT key:

gpg --fingerprint [email protected]

GPG prints the fingerprint, as follows:

pub 2048R/F0E187D0 2018-09-08 [expires: 2019-10-01]
Key fingerprint = 049F E3BA 240B 4CF1 3A76 06DC 1868 49EC F0E1 87D0
uid US-CERT Operations Key <[email protected]>

At this point, you need to verify the key fingerprint with someone at the US-CERT organization. If you think that the key fingerprint is good, you can sign the key and validate it. Here’s the command you use to sign the key:

gpg --sign-key [email protected]

GPG asks for confirmation and then prompts you for your passphrase. After that, GPG signs the key. Because key verification and signing are potential weak links in GPG, be careful about what keys you sign. By signing a key, you say that you trust the key to be from that person or organization.

How to sign a file with GPG in Linux

You may find signing files to be useful if you send a file to someone and want to assure the recipient that no one tampered with the file and that you did in fact send the file. GPG makes signing a file easy. You can compress and sign a file named message with the following command:

gpg -o message.sig -s message

To verify the signature, type

gpg --verify message.sig

To get back the original document, type

gpg -o message --decrypt message.sig

Sometimes, you don’t care about keeping a message secret, but you want to sign it to indicate that the message is from you.
In such a case, you can generate and append a clear-text signature with the following command:

gpg -o message.asc --clearsign message

This command appends a clear-text signature to the text message. Here’s a typical clear-text signature block:

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2 (GNU/Linux)

iD8DBQFDEhAtaHWlHHs4pygRAhiqAJ9Qj0pPMgKVBuokDyUZaEYVsp6RIQCfaoBm
9zCwrSAG9mo2DXJvbKS3ri8=
=2uc/
-----END PGP SIGNATURE-----

When a message has a clear-text signature appended, you can use GPG to verify the signature with the following command:

gpg --verify message.asc

If you indeed signed the message, the last line of the output says that the signature is good.

Encrypting and decrypting documents with GPG in Linux

To encrypt a message meant for a recipient, you can use the --encrypt (or -e) GPG command. Here’s how you might encrypt a message for US-CERT by using its GPG key:

gpg -o message.gpg -e -r [email protected] message

The message is encrypted with the US-CERT public key (without a signature, but you can add the signature with the -s option). When US-CERT receives the message.gpg file, the recipient must decrypt it by using US-CERT’s private key. Here’s the command that someone at US-CERT can use:

gpg -o message --decrypt message.gpg

GPG then prompts for the passphrase to unlock the US-CERT private key, decrypts the message, and saves the output in the file named message.

If you want to encrypt a file that no one else has to decrypt, you can use GPG to perform symmetric encryption. In this case, you provide a passphrase to encrypt the file with the following GPG command:

gpg -o secret.gpg -c somefile

GPG prompts you for the passphrase and asks you to repeat the passphrase (to make sure that you didn’t mistype anything). Then GPG encrypts the file, using a key generated from the passphrase. To decrypt a file encrypted with a symmetric key, type

gpg -o myfile --decrypt secret.gpg

GPG prompts you for the passphrase.
If you enter the correct passphrase, GPG decrypts the file and saves the output (in this example) in the file named myfile.
Article / Updated 09-28-2018
Like any other OS, Linux needs to be protected with a firewall. A firewall is a network device or host with two or more network interfaces, one connected to the protected internal network and the others connected to unprotected networks, such as the Internet. The firewall controls access to and from the protected internal network.

If you connect an internal network directly to the Internet, you have to make sure that every system on the internal network is properly secured, which can be nearly impossible, because a single careless user can render the entire internal network vulnerable. A firewall is a single point of connection to the Internet: You can direct all your efforts toward making that firewall system a daunting barrier to unauthorized external users. Essentially, a firewall is a protective fence that keeps unwanted external data and software out and sensitive internal data and software in.

Firewall software running on your Linux system examines the network packets arriving at its network interfaces and then takes appropriate action based on a set of rules. The idea is to define these rules so that they allow only authorized network traffic to flow between the two interfaces. Configuring the firewall involves setting up the rules properly. A common configuration strategy is to reject all network traffic and then enable only a limited set of network packets to go through the firewall. The authorized network traffic would include the connections necessary to enable internal users to do things such as visit websites and receive electronic mail.

To be useful at protecting your Linux system, a firewall must have the following general characteristics:

It must control the flow of packets between the Internet and the internal network.

It must not provide dynamic routing, because dynamic routing tables are subject to route spoofing (the use of fake routes by intruders). Instead, the firewall uses static routing tables (which you can set up with the route command on Linux systems).

It must not allow any external user to log in as root. That way, even if the firewall system is compromised, the intruder is blocked from using root privileges from a remote login.

It must be kept in a physically secure location.

It must distinguish between packets that come from the Internet and packets that come from the internal protected network. This feature allows the firewall to reject packets that come from the Internet but have the IP address of a trusted system on the internal network.

It acts as the SMTP mail gateway for the internal network. Set up the sendmail software so that all outgoing mail appears to come from the firewall system.

Its user accounts are limited to a few accounts for those internal users who need access to external systems. External users who need access to the internal network should use SSH for remote login.

It keeps a log of all system activities, such as successful and unsuccessful login attempts.

It provides DNS name-lookup service to the outside world to resolve any host names that are known to the outside world.

It provides good performance so that it doesn’t hinder internal users’ access to specific Internet services (such as HTTP and FTP).

A firewall can take many forms. Here are three common forms of a firewall you might find on a Linux system:

Packet filter firewall: This simple firewall uses a router capable of filtering (blocking or allowing) packets according to various characteristics, including the source and destination IP addresses, the network protocol (TCP or UDP), and the source and destination port numbers. Packet filter firewalls are usually placed at the outermost boundary with an untrusted network, and they form the first line of defense. An example of a packet filter firewall is a network router that employs filter rules to screen network traffic.
Packet filter firewalls are fast and flexible, but they can’t prevent attacks that exploit application-specific vulnerabilities or functions. They can log only a minimal amount of information, such as source IP address, destination IP address, and traffic type. Also, they’re vulnerable to attacks and exploits that take advantage of flaws within the TCP/IP protocol, such as IP address spoofing, which involves altering the address information in network packets to make them appear to come from a trusted IP address.

Stateful inspection firewall: This type of firewall keeps track of the network connections that network applications are using. When an application on an internal system uses a network connection to create a session with a remote system, a port is also opened on the internal system. This port receives network traffic from the remote system. For successful connections, packet filter firewalls must permit incoming packets from the remote system. Opening many ports to incoming traffic creates a risk of intrusion by unauthorized users who abuse the expected conventions of network protocols such as TCP. Stateful inspection firewalls solve this problem by creating a table of outbound network connections, along with each session’s corresponding internal port. This state table is then used to validate any inbound packets. Stateful inspection is more secure than packet filtering because it tracks internal ports individually rather than opening all internal ports for external access.

Application-proxy gateway firewall: This firewall acts as an intermediary between internal applications on a Linux system that attempt to communicate with external servers such as a web server. A web proxy receives requests for external web pages from web browser clients running inside the firewall and relays them to the exterior web server as though the firewall were the requesting web client.
The external web server responds to the firewall, and the firewall forwards the response to the inside client as though the firewall were the web server. No direct network connection is ever made from the inside client host to the external web server.

Application-proxy gateway firewalls have some advantages over packet filter firewalls and stateful inspection firewalls. First, application-proxy gateway firewalls examine the entire network packet rather than only the network addresses and ports, which enables these firewalls to provide more extensive logging capabilities than packet filters or stateful inspection firewalls. Another advantage is that application-proxy gateway firewalls can authenticate users directly, whereas packet filter firewalls and stateful inspection firewalls normally authenticate users on the basis of the IP address of the system (that is, source, destination, and protocol type). Given that network addresses can be easily spoofed, the authentication capabilities of application-proxy gateway firewalls are superior to those found in packet filter and stateful inspection firewalls.

The advanced functionality of application-proxy gateway firewalls, however, results in some disadvantages compared with packet filter or stateful inspection firewalls:

Because of the full packet awareness found in application-proxy gateways, the firewall is forced to spend significant time reading and interpreting each packet. Therefore, application-proxy gateway firewalls generally aren’t well suited to high-bandwidth or real-time applications. To reduce the load on the firewall, a dedicated proxy server can be used to secure less time-sensitive services, such as email and most web traffic.

Application-proxy gateway firewalls are often limited in terms of support for new network applications and protocols. An individual application-specific proxy agent is required for each type of network traffic that needs to go through the firewall.
Most vendors of application-proxy gateways provide generic proxy agents to support undefined network protocols or applications. Those generic agents, however, tend to negate many of the strengths of the application-proxy gateway architecture; they simply allow traffic to tunnel through the firewall. Most firewalls implement a combination of these firewall functionalities. Linux systems are no different. Many vendors of packet filter firewalls or stateful inspection firewalls have also implemented basic application-proxy functionality to offset some of the weaknesses associated with their firewalls. In most cases, these vendors implement application proxies to provide better logging of network traffic and stronger user authentication. Nearly all major firewall vendors have introduced multiple firewall functions into their products in some manner. In a large organization, you may also have to isolate smaller internal networks from the corporate network. You can set up such internal firewalls the same way that you set up Internet firewalls.
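The stateful inspection idea described above can be sketched in a few lines of Python. This is a conceptual illustration only, with invented addresses; real firewalls track far more per connection, such as TCP sequence numbers and timeouts.

```python
# Conceptual model of stateful inspection: inbound packets are accepted
# only if they belong to a connection that an inside host opened first.
state_table = set()  # established connections: (inside_ip, inside_port, remote_ip, remote_port)

def outbound(inside_ip, inside_port, remote_ip, remote_port):
    """Record an outbound connection in the state table."""
    state_table.add((inside_ip, inside_port, remote_ip, remote_port))

def inbound_allowed(remote_ip, remote_port, inside_ip, inside_port):
    """An inbound packet is valid only if it matches a tracked connection."""
    return (inside_ip, inside_port, remote_ip, remote_port) in state_table

# Inside host 192.168.0.5 opens a web connection to 203.0.113.7:80 ...
outbound("192.168.0.5", 40321, "203.0.113.7", 80)
# ... so the reply is allowed, but an unsolicited packet from elsewhere is not.
print(inbound_allowed("203.0.113.7", 80, "192.168.0.5", 40321))   # True
print(inbound_allowed("198.51.100.9", 80, "192.168.0.5", 40321))  # False
```

The point of the sketch is the lookup: instead of leaving the inside port open to everyone, the firewall consults the table of connections the inside host actually initiated.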
Article / Updated 09-28-2018
The Linux kernel has built-in packet filtering software in the form of something called netfilter. You use the iptables command to set up the rules for what happens to packets based on the IP addresses in their headers and the network connection type. To find out more about netfilter and iptables, visit the documentation section of the netfilter website.

The built-in packet filtering capability is handy when you don’t have a dedicated firewall between your Linux system and the Internet, such as when you connect your Linux system to the Internet through a DSL or cable modem. Essentially, you can have a packet filtering firewall inside your Linux system, sitting between the kernel and the applications.

The security level configuration tool in Linux

Most Linux distributions, such as Fedora and SUSE, now include GUI tools to turn on a packet filtering firewall and simplify the configuration experience for the user. In some distributions, you need to install ufw (an acronym for Uncomplicated Firewall), which lets you manage a netfilter firewall and simplifies configuration. ufw serves as a front end to iptables and lets you manage the firewall by entering commands in a terminal window. The command sudo ufw enable turns the firewall on, and the command sudo ufw status verbose displays information such as the following:

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

The default settings are exactly what you’re looking for in most cases for a client machine: allowing outgoing traffic and denying incoming traffic. You can then allow incoming packets meant for specific Internet services such as SSH, Telnet, and FTP. If you select a network interface such as eth0 (the first Ethernet card) as trusted, all network traffic over that interface is allowed without any filtering.

In SUSE, to set up a firewall, choose Main Menu→System→YaST.
In the YaST Control Center window that appears, click Security and Users on the left side of the window and then click Firewall on the right side. YaST opens a window that you can use to configure the firewall. You can assign network interfaces (by device name, such as eth0, ppp0, and so on) to one of three zones: internal, external, or demilitarized zone. Then, for that zone, you can specify what services (such as HTTP, FTP, and SSH) are allowed. If you have two or more network interfaces and you use the Linux system as a gateway (a router), you can enable forwarding of packets between network interfaces (a feature called masquerading). You can also turn on different levels of logging, such as logging all dropped packets that attempt connection at specific ports. If you change the firewall settings, choose the Startup category and click Save Settings and Restart Firewall Now.

The iptables command in Linux

The graphical user interface (GUI) firewall configuration tools are just front ends that use the iptables command to implement the firewall. If your Linux system doesn’t have a GUI tool, you can use iptables directly to configure firewalling on your Linux system. Using the iptables command is somewhat complex. The command uses the concept of a chain, which is a sequence of rules. Each rule says what to do with a packet if the header contains certain information, such as the source or destination IP address. If a rule doesn’t apply, iptables consults the next rule in the chain. By default, there are three chains:

INPUT chain: Contains the first set of rules against which packets are tested. The packets continue to the next chain only if the INPUT chain doesn’t specify DROP or REJECT.

FORWARD chain: Contains the rules that apply to packets attempting to pass through this system to another system (when you use your Linux system as a router between your LAN and the Internet, for example).
OUTPUT chain: Includes the rules applied to packets before they’re sent out (either to another network or to an application).

When an incoming packet arrives, the kernel uses iptables to make a routing decision based on the destination IP address of the packet. If the packet is for this server, the kernel passes the packet to the INPUT chain. If the packet satisfies all the rules in the INPUT chain, the packet is processed by local processes such as an Internet server that’s listening for packets of this type.

If the kernel has IP forwarding enabled, and the packet has a destination IP address of a different network, the kernel passes the packet to the FORWARD chain. If the packet satisfies the rules in the FORWARD chain, it’s sent out to the other network. If the kernel doesn’t have IP forwarding enabled, and the packet’s destination address isn’t for this server, the packet is dropped.

If the local processing programs that receive the input packets want to send network packets out, those packets pass through the OUTPUT chain. If the OUTPUT chain accepts those packets, they’re sent out to the specified destination network.

You can view the current chains, add rules to the existing chains, or create new chains of rules by using the iptables command, which normally requires root privileges. When you view the current chains, you can save them to a file. If you’ve configured nothing else, and your system has no firewall configured, typing iptables -L should show the following:

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

In this case, all three chains (INPUT, FORWARD, and OUTPUT) show the same ACCEPT policy, which means that everything is wide open. If you’re setting up a packet filter, the first thing you do is specify the packets that you want to accept.
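The way a chain evaluates a packet (the first matching rule wins; otherwise, the chain’s default policy applies) can be sketched in Python. This is a simplified illustration, not how netfilter is actually implemented, and the example rules and addresses are invented for the demonstration.

```python
# A chain is an ordered list of (match-predicate, target) rules.
# The first rule whose predicate matches decides the packet's fate;
# if no rule matches, the chain's default policy applies.
def run_chain(rules, packet, policy="ACCEPT"):
    for match, target in rules:
        if match(packet):
            return target
    return policy

# Rough model of a web server's INPUT chain with a DROP default policy
# that accepts only TCP traffic to ports 80 and 22 on 192.168.0.10.
input_chain = [
    (lambda p: p["dst"] == "192.168.0.10" and p["proto"] == "tcp" and p["dport"] == 80, "ACCEPT"),
    (lambda p: p["dst"] == "192.168.0.10" and p["proto"] == "tcp" and p["dport"] == 22, "ACCEPT"),
]

web = {"dst": "192.168.0.10", "proto": "tcp", "dport": 80}
smtp = {"dst": "192.168.0.10", "proto": "tcp", "dport": 25}
print(run_chain(input_chain, web, policy="DROP"))   # ACCEPT
print(run_chain(input_chain, smtp, policy="DROP"))  # DROP
```

Notice that rule order matters: a broad REJECT rule placed first would shadow every rule after it, which is why the examples that follow add ACCEPT rules before the catch-all rejections.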
To accept packets from the 192.168.0.0 network address, add the following rule to the INPUT chain:

iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT

Now add a rule to drop everything except local loopback (the lo network interface) traffic and stop all forwarding with the following commands:

iptables -A INPUT -i ! lo -j REJECT
iptables -A FORWARD -j REJECT

The first iptables command, for example, appends to the INPUT chain (-A INPUT) the rule that if the packet doesn’t come from the lo interface (-i ! lo), iptables rejects the packet (-j REJECT). Before rejecting all other packets, you may add more rules to the INPUT chain to allow specific packets in. You can select packets to accept or reject based on many parameters, such as IP addresses, protocol types (TCP, UDP), network interface, and port numbers.

You can do all sorts of specialized packet filtering with iptables. Suppose that you set up a web server and want to accept packets meant only for HTTP (port 80) and SSH services. The SSH service (port 22) is for you to securely log in and administer the server. Also suppose that the server’s IP address is 192.168.0.10. Here’s how you might set up the rules for this server:

iptables -P INPUT DROP
iptables -A INPUT -s 0/0 -d 192.168.0.10 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 0/0 -d 192.168.0.10 -p tcp --dport 22 -j ACCEPT

In this case, the first rule sets the default policy of the INPUT chain to DROP, which means that if none of the specific rules matches, the packet is dropped. The next two rules say that packets addressed to 192.168.0.10 and meant for ports 80 and 22 are accepted.

Don’t type iptables commands from a remote login session. A rule that begins denying packets from all addresses can also stop what you type from reaching the system; in that case, you may have no way of accessing the system over the network.
To avoid unpleasant surprises, always type iptables rules at the console: the keyboard and monitor connected directly to your Linux PC that’s running the packet filter. If you want to delete all filtering rules in a hurry, type iptables -F to flush them. To change the default policy for the INPUT chain to ACCEPT, type iptables -t filter -P INPUT ACCEPT. This command causes iptables to accept all incoming packets by default.

Not every iptables command is discussed here. You can type man iptables to read a summary of the commands.

After you define the rules by using the iptables command, those rules are in memory and are gone when you reboot the system. Use the iptables-save command to store the rules in a file. You can save the rules in a file named iptables.rules by using the following command:

iptables-save > iptables.rules

Here’s a listing of the iptables.rules file generated on a Fedora system:

# Generated by iptables-save v1.3.0 on Sun Dec 28 16:10:12 2019
*filter
:FORWARD ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [6:636]
-A FORWARD -j REJECT --reject-with icmp-port-unreachable
-A INPUT -s 192.168.0.0/255.255.255.0 -j ACCEPT
-A INPUT -i ! lo -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Sun Dec 28 16:10:12 2019

These rules correspond to the following iptables commands used to configure the filter:

iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -i ! lo -j REJECT
iptables -A FORWARD -j REJECT

If you want to load these saved rules into iptables, use the following command:

iptables-restore < iptables.rules
Article / Updated 09-27-2018
Linux comes with the GNU Privacy Guard (GnuPG or GPG) encryption and authentication utility. With GnuPG, you can create your public and private key pair on your Linux system, encrypt files with your key, and digitally sign a message to authenticate that it’s from you. If you send a digitally signed message to someone who has your public key, the recipient can verify that you signed the message.

Understanding public key encryption

The basic idea behind public key encryption is to use a pair of keys, one private and the other public, that are related but can’t be used to guess one from the other. Anything encrypted with the private key can be decrypted only with the corresponding public key, and vice versa. The public key is for distribution to other people; you keep the private key in a safe place.

You can use public key encryption to communicate securely with others. Let’s try an example. Suppose that Alice wants to send secure messages to Bob. Each person generates public key and private key pairs, after which they exchange their public keys. When Alice wants to send a message to Bob, she encrypts the message by using Bob’s public key and sends the encrypted message to him. Now the message is secure from eavesdropping, because only Bob’s private key can decrypt the message, and only Bob has that key. When Bob receives the message, he uses his private key to decrypt the message and read it.

At this point, you might say, “Wait a minute! How does Bob know that the message really came from Alice? What if someone else uses Bob’s public key and sends a message as though it came from Alice?” This situation is where digital signatures come in.

Understanding digital signatures

The purpose of digital (electronic) signatures is the same as that of pen-and-ink signatures, but how you sign digitally is different. Unlike a pen-and-ink signature, your digital signature depends on the message you’re signing.
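The Alice-and-Bob exchange can be illustrated with textbook RSA and toy numbers. This is for intuition only: key sizes like these are hopelessly insecure, and GPG in practice encrypts the message itself with a symmetric session key, using public key encryption only to protect that session key.

```python
# Textbook RSA with classic toy numbers. Bob publishes (n, e); he keeps d secret.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
e = 17                         # public exponent
d = 2753                       # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

message = 65                   # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # Alice encrypts with Bob's PUBLIC key
recovered = pow(ciphertext, d, n)  # only Bob's PRIVATE key reverses it
print(recovered == message)        # True
```

Anyone can compute the ciphertext from the public key, but recovering the message without d would require factoring n, which is what makes the scheme one-way for realistic key sizes.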
The first step in creating a digital signature is applying a mathematical function to the message and reducing it to a fixed-size message digest (also called a hash or a fingerprint). No matter how big your message is, the message digest is usually 128 or 160 bits, depending on the hashing function. The next step is applying public key encryption: Encrypt the message digest with your private key, and you get the digital signature for the message. Typically, the digital signature is added to the end of the message, and voilà, you get an electronically signed message.

What good does the digital signature do? Anyone who wants to verify that the message is indeed signed by you takes your public key and decrypts the digital signature. What that person gets is the message digest (the hash) of the message. Then he or she applies the same hash function to the message and compares the computed hash with the decrypted value. If the two match, no one has tampered with the message. Because your public key was used to verify the signature, the message must have been signed with the private key known only to you, so the message must be from you!

In the scenario in which Alice sends private messages to Bob, Alice can digitally sign her message to make sure that Bob can tell that the message is really from her. Here’s how Alice sends her private message to Bob with the assurance that Bob can tell it’s from her:

1. Alice uses software to compute the message digest of the message and then encrypts the digest by using her private key. The encrypted digest is her digital signature for the message.

2. Alice encrypts the message (again, using some convenient software and Bob’s public key).

3. She sends both the encrypted message and the digital signature to Bob.

4. Bob decrypts the message, using his private key.

5. Bob decrypts the digital signature, using Alice’s public key, which gives him the message digest.
Finally, Bob computes the message digest of the message and compares it with what he got by decrypting the digital signature. If the two message digests match, Bob can be sure that the message really came from Alice.

The typical GPG tasks themselves (generating a key pair, exchanging keys, signing files, and encrypting and decrypting documents) are covered step by step earlier in this collection.
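The sign-then-verify flow that Alice and Bob follow can be sketched with the same textbook toy RSA numbers and a real hash function. This is an illustration only; the digest is reduced modulo n purely so that it fits the tiny toy key.

```python
import hashlib

# Toy RSA key pair (textbook numbers; hopelessly small for real use).
n, e, d = 3233, 17, 2753   # Alice publishes (n, e); she keeps d secret

def digest(message: bytes) -> int:
    # Fixed-size message digest, reduced mod n so it fits the toy key.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

msg = b"Meet at noon. --Alice"
signature = pow(digest(msg), d, n)   # Alice signs the digest with her PRIVATE key

# Bob decrypts the signature with Alice's PUBLIC key and compares it
# with the digest he computes himself from the received message.
print(pow(signature, e, n) == digest(msg))  # True
```

If the message were altered in transit, Bob’s recomputed digest would no longer match the value recovered from the signature, so the comparison would fail, which is exactly the tampering check described above.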
Here’s what you might see when typing gpg --list-keys on the system:

/home/kdulaney/.gnupg/pubring.gpg
---------------------------------
pub   1024D/7B38A728 2018-08-28
uid   Kristin Dulaney <[email protected]>
sub   2048g/3BD6D418 2018-08-28
pub   2048R/F0E187D0 2019-09-08 [expires: 2019-10-01]
uid   US-CERT Operations Key <[email protected]>

The next step is checking the fingerprint of the new key. Type the following command to get the fingerprint of the US-CERT key:

gpg --fingerprint [email protected]

GPG prints the fingerprint, as follows:

pub   2048R/F0E187D0 2018-09-08 [expires: 2019-10-01]
      Key fingerprint = 049F E3BA 240B 4CF1 3A76 06DC 1868 49EC F0E1 87D0
uid   US-CERT Operations Key <[email protected]>

At this point, you need to verify the key fingerprint with someone at the US-CERT organization. If you think that the key fingerprint is good, you can sign the key and validate it. Here’s the command you use to sign the key:

gpg --sign-key [email protected]

GPG asks for confirmation and then prompts you for your passphrase. After that, GPG signs the key.

Because key verification and signing are potential weak links in GPG, be careful about what keys you sign. By signing a key, you say that you trust the key to be from that person or organization.

Signing a file in Linux

You may find signing files to be useful if you send a file to someone and want to assure the recipient that no one tampered with the file and that you did in fact send the file. GPG makes signing a file easy. You can compress and sign a file named message with the following command:

gpg -o message.sig -s message

To verify the signature, type

gpg --verify message.sig

To get back the original document, type

gpg -o message --decrypt message.sig

Sometimes, you don’t care about keeping a message secret, but you want to sign it to indicate that the message is from you.
In such a case, you can generate and append a clear-text signature with the following command:

gpg -o message.asc --clearsign message

This command appends a clear-text signature to the text message. Here’s a typical clear-text signature block:

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2 (GNU/Linux)

iD8DBQFDEhAtaHWlHHs4pygRAhiqAJ9Qj0pPMgKVBuokDyUZaEYVsp6RIQCfaoBm
9zCwrSAG9mo2DXJvbKS3ri8=
=2uc/
-----END PGP SIGNATURE-----

When a message has a clear-text signature appended, you can use GPG to verify the signature with the following command:

gpg --verify message.asc

If you indeed signed the message, the last line of the output says that the signature is good.

Encrypting and decrypting documents in Linux

To encrypt a message meant for a recipient, you can use the --encrypt (or -e) GPG command. Here’s how you might encrypt a message for US-CERT by using its GPG key:

gpg -o message.gpg -e -r [email protected] message

The message is encrypted with the US-CERT public key (without a signature, but you can add the signature with the -s option). When US-CERT receives the message.gpg file, the recipient must decrypt it by using US-CERT’s private key. Here’s the command that someone at US-CERT can use:

gpg -o message --decrypt message.gpg

Then GPG prompts for the passphrase to unlock the US-CERT private key, decrypts the message, and saves the output in the file named message.

If you want to encrypt a file that no one else has to decrypt, you can use GPG to perform symmetric encryption. In this case, you provide a passphrase to encrypt the file with the following GPG command:

gpg -o secret.gpg -c somefile

GPG prompts you for the passphrase and asks you to repeat the passphrase (to make sure that you didn’t mistype anything). Then GPG encrypts the file, using a key generated from the passphrase. To decrypt a file encrypted with a symmetric key, type

gpg -o myfile --decrypt secret.gpg

GPG prompts you for the passphrase.
If you enter the correct passphrase, GPG decrypts the file and saves the output (in this example) in the file named myfile.
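The symmetric round trip just described can be scripted end to end. The filenames match the example; --batch, --passphrase, and --pinentry-mode loopback are GnuPG options for suppressing the interactive prompt (fine for a demo, though a passphrase on the command line is visible to other users on the system).

```shell
echo "for your eyes only" > somefile

# Symmetric encryption (-c): the key is derived from the passphrase.
gpg --batch --pinentry-mode loopback --passphrase "demo pass" \
    -o secret.gpg -c somefile

# Decrypt with the same passphrase.
gpg --batch --pinentry-mode loopback --passphrase "demo pass" \
    -o myfile --decrypt secret.gpg

# The decrypted copy should match the original.
cmp somefile myfile && echo "round trip OK"
```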
Article / Updated 09-27-2018
One important aspect of securing the host is protecting important system files — and the directories on your Linux system that contain these files. In Linux, you can protect the files through file ownership and the permission settings that control who can read, write, or (in the case of executable programs) execute the file.

The default Linux file security is controlled through the following settings for each file or directory:

User ownership
Group ownership
Read, write, execute permissions for the owner
Read, write, execute permissions for the group
Read, write, execute permissions for others (everyone else)

How to view ownerships and permissions in Linux

You can see settings related to ownership and permissions for a file when you look at a detailed listing with the ls -l command. For example, in Ubuntu, type the following command to see the detailed listing of the /etc/inittab file:

ls -l /etc/inittab

The resulting listing looks something like this:

-rw-r--r-- 1 root root 1666 Feb 16 07:57 /etc/inittab

The first set of characters describes the file permissions for user, group, and others. The third and fourth fields show the user and group that own this file. In this case, user and group names are the same: root.

How to change file ownerships in Linux

You can set the user and group ownerships with the chown command. If the file /dev/hda should be owned by the user root and the group disk, you type the following command as root to set up this ownership:

chown root:disk /dev/hda

To change the group ownership alone, use the chgrp command. Here’s how you can change the group ownership of a file from whatever it was earlier to the group named accounting:

chgrp accounting ledger.out

How to change file permissions in Linux

Use the chmod command to set the file permissions. To use chmod effectively, you have to specify the permission settings.
One way is to concatenate one or more letters from each column of the table below, in the order shown in the table (Who/Action/Permission).

File Permission Codes

Who         Action       Permission
u (user)    + (add)      r (read)
g (group)   - (remove)   w (write)
o (others)  = (assign)   x (execute)
a (all)                  s (set user ID)

To give everyone read and write access to all files in a directory, type chmod a+rw *. To permit everyone to execute a specific file, type chmod a+x filename.

Another way to specify a permission setting is to use a three-digit sequence of numbers. In a detailed listing, the read, write, and execute permission settings for the user, group, and others appear as the sequence rwxrwxrwx with dashes in place of letters for disallowed operations. Think of rwxrwxrwx as being three occurrences of the string rwx. Now assign the values r=4, w=2, and x=1. To get the value of the sequence rwx, simply add the values of r, w, and x. Thus, rwx = 7. With this formula, you can assign a three-digit value to any permission setting. If the user can read and write the file but everyone else can only read the file, for example, the permission setting is rw-r--r--, and the value is 644. Thus, if you want all files in a directory to be readable by everyone but writable only by the user, use the following command:

chmod 644 *

How to set default permission in Linux

What permission setting does a file get when you (or a program) create a new file? The answer is in what is known as the user file-creation mask, which you can see and set by using the umask command. Type umask, and the command prints a number showing the current file-creation mask. For the root user, the mask is set to 022, whereas the mask for other users is 002. To see the effect of this file-creation mask and to interpret the meaning of the mask, follow these steps:

1. Log in as root, and type the following command:

touch junkfile

This command creates a file named junkfile with nothing in it.
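You can check the symbolic and numeric forms against each other; stat -c '%a %A' (GNU coreutils) prints a file's mode in both notations. The filename below is just a scratch example.

```shell
touch demo.txt

chmod 644 demo.txt            # numeric: rw-r--r--
stat -c '%a %A' demo.txt      # prints: 644 -rw-r--r--

chmod u+x demo.txt            # symbolic: add execute for the user
stat -c '%a %A' demo.txt      # prints: 744 -rwxr--r--

chmod a+rw demo.txt           # everyone gets read and write
stat -c '%a %A' demo.txt      # prints: 766 -rwxrw-rw-
```

Notice how each symbolic change maps onto the octal arithmetic: adding x for the user adds 1 to the first digit (644 becomes 744).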
2. Type ls -l junkfile to see that file’s permissions.

You see a line similar to the following:

-rw-r--r-- 1 root root 0 Aug 24 10:56 junkfile

3. Interpret the numerical value of the permission setting by converting each three-letter permission in the first field (excluding the first letter) to a number between 0 and 7.

For each letter that’s present, the first letter gets a value of 4, the second letter is 2, and the third is 1. rw- translates to 4+2+0 (because the third letter is missing), or 6. Similarly, r-- is 4+0+0 = 4. Thus, the permission string -rw-r--r-- becomes 644.

4. Subtract the numerical permission setting from 666. What you get is the umask setting.

In this case, 666 – 644 results in a umask of 022. Thus, a umask of 022 results in a default permission setting of 666 – 022 = 644. When you rewrite 644 in terms of a permission string, it becomes rw-r--r--.

To set a new umask, type umask followed by the numerical value of the mask. Here’s how you go about it:

1. Figure out what permission settings you want for new files.

If you want new files that can be read and written only by the owner and no one else, the permission setting looks like this:

rw-------

2. Convert the permissions to a numerical value by using the conversion method that assigns 4 to the first field, 2 to the second, and 1 to the third.

Thus, for files that are readable and writable only by their owner, the permission setting is 600.

3. Subtract the desired permission setting from 666 to get the value of the mask.

For a permission setting of 600, the mask becomes 666 – 600 = 066.

4. Use the umask command to set the file-creation mask by typing umask 066.

A default umask of 022 is good for system security because it translates to files that have read and write permission for the owner and read permissions for everyone else. The bottom line is that you don’t want a default umask that results in files that are writable by the whole world.
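The 666-minus-umask arithmetic is easy to confirm in a scratch directory (the filenames here are arbitrary); umask only affects files created after it is set.

```shell
cd "$(mktemp -d)"              # work in a throwaway directory

umask 022
touch open.txt
stat -c '%a' open.txt          # prints: 644  (666 - 022)

umask 066
touch private.txt
stat -c '%a' private.txt       # prints: 600  (666 - 066)
```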
How to check for set user ID permission in Linux

Another permission setting can be a security hazard. This permission setting, called the set user ID (or setuid and/or suid for short), applies to executable files. When the suid permission is enabled, the file executes under the user ID of the file’s owner. In other words, if an executable program is owned by root and the suid permission is set, the program runs as though root is executing it, no matter who executed the program. The suid permission means that the program can do a lot more (such as read all files, create new files, and delete files) than a normal user program can do. Another risk is that if a suid program file has a security hole, crackers can do a lot more damage through such programs than through other vulnerabilities.

You can find all suid programs with a simple find command:

find / -type f -perm +4000

(Newer versions of GNU find have dropped the +4000 syntax; there, use find / -type f -perm /4000 instead.)

You see a list of files such as the following:

/bin/su
/bin/ping
/bin/eject
/bin/mount
/bin/ping6
/bin/umount
/opt/kde4/bin/fileshareset
/opt/kde4/bin/artswrapper
/opt/kde4/bin/kcheckpass
… lines deleted …

Many of the programs have the suid permission because they need it, but you should check the complete list to make sure that it contains no strange suid programs (such as suid programs in a user’s home directory).

If you type ls -l /bin/su, you see the following permission settings:

-rwsr-xr-x 1 root root 25756 Aug 19 17:06 /bin/su

The s in the owner’s permission setting (-rws) tells you that the suid permission is set for the /bin/su file, which is the executable file for the su command that you can use to become root or another user.
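You can rehearse the suid check safely on a scratch file instead of scanning the whole file system; -perm -4000 (setuid bit set) works on both old and new versions of find, and the file name below is a placeholder.

```shell
d="$(mktemp -d)"
touch "$d/fakesu"
chmod 4755 "$d/fakesu"        # the leading 4 sets the suid bit

# List files under $d that have the suid bit set.
find "$d" -type f -perm -4000

# The 's' in the owner's permissions marks the suid bit.
ls -l "$d/fakesu"
```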
Article / Updated 09-27-2018
The first step in securing your Linux system is setting up a security policy — a set of guidelines that states what you enable users (as well as visitors over the Internet) to do on your Linux system. The level of security you establish depends on how you use the Linux system and on how much is at risk if someone gains unauthorized access to your system.

If you’re a system administrator for one or more Linux systems in an organization, you probably want to involve company management, as well as users, in setting up the security policy. Obviously, you can’t create a draconian policy that blocks all access. (That policy would prevent anyone from effectively working on the system.) On the other hand, if users are creating or using data that’s valuable to the organization, you must set up a policy for your Linux system that protects the data from disclosure to outsiders. In other words, the security policy should strike a balance between users’ needs and your need to protect the system.

For a stand-alone Linux system or a home system that you occasionally connect to the Internet, the security policy can be just a list of the Internet services that you want to run on the system and the user accounts that you plan to set up on the system. For any larger organization, you probably have one or more Linux systems on a LAN connected to the Internet — preferably through a firewall. (To reiterate, a firewall is a device that controls the flow of Internet Protocol [IP] packets between the LAN and the Internet.) In such cases, thinking of computer security systematically (across the entire organization) is best.
Here's what a Linux security framework should focus on:

Determining the business requirements for security
Performing risk assessments
Establishing a security policy
Implementing a cybersecurity solution that includes people, process, and technology to mitigate identified security risks
Continuously monitoring and managing security

Determining business requirements for security and how Linux fits

The business requirements for security identify the computer resources and information you have to protect (including any requirements imposed by applicable laws, such as the requirement to protect the privacy of some types of data). Typical security requirements may include items such as the following:

Enabling access to information by authorized users.
Implementing business rules that specify who has access to what information.
Employing a strong user-authentication system.
Preventing malicious or destructive actions on data.
Protecting data from end to end as it moves across networks.
Implementing all security and privacy requirements that applicable laws impose.

Performing risk analysis on Linux systems

Risk analysis is about identifying and assessing risks — potential events that can harm your Linux system. The analysis involves determining the following and performing some analysis to establish the priority for handling the risks:

Threats: What you’re protecting against.
Vulnerabilities: Weaknesses that may be exploited by threats (the risks).
Probability: The likelihood that a threat will exploit the vulnerability.
Impact: The effect of exploiting a specific vulnerability.
Mitigation: What to do to reduce vulnerabilities.

Typical threats to a Linux system

Some typical threats to your Linux system include the following:

DoS attack: The computer and network are tied up so that legitimate users can’t make use of the systems. For businesses, a DoS attack can mean a loss of revenue.
Because bringing a system to its knees with a single computer attack is a bit of a challenge these days, the more common tactic is to point many computers at a single site and let them do the dirty work. Although the purpose and result are the same as ever, this ganging-up is referred to as a distributed denial-of-service (DDoS) attack because more than one computer is attacking the host.

Unauthorized access: The computer and network are used by someone who isn’t an authorized user. The unauthorized user can steal information or maliciously corrupt or destroy data. Some businesses may be hurt by the negative publicity resulting from the mere fact that an unauthorized user gained access to the system, even if the data shows no sign of explicit damage.

Disclosure of information to the public: Disclosure in this case means the unauthorized release of information. The disclosure of a password file, for example, enables potential attackers to figure out username and password combinations for accessing a system. Exposure of other sensitive information, such as financial and medical data, may be a potential liability for a business.

Typical vulnerabilities on Linux systems

The threats to your system and network come from exploitation of vulnerabilities in your organization’s resources, both computer and people.
Following are some common vulnerabilities:

People’s foibles (divulging passwords, losing security cards, and so on)
Internal network connections (routers, switches)
Interconnection points (gateways [routers and firewalls] between the Internet and the internal network)
Third-party network providers (Internet service providers [ISPs], long-distance carriers) with looser security
Operating-system security holes (potential holes in Internet servers, such as those associated with sendmail, named, and bind)
Application security holes (known weaknesses in specific applications)

The 1-2-3 of risk analysis (probability and effect) on Linux systems

To perform risk analysis, assign a numeric value to the probability and effect of each potential vulnerability. To develop a workable risk analysis, do the following for each vulnerability or risk:

1. Assign subjective ratings of low, medium, and high to the probability. As the ratings suggest, low probability means a lesser chance that the vulnerability will be exploited; high probability means a greater chance.

2. Assign similar ratings to the effect. What you consider to be the effect is up to you. If the exploitation of a vulnerability would affect your business greatly, assign it a high effect rating.

3. Assign a numeric value to the three levels — low = 1, medium = 2, and high = 3 — for both probability and effect.

4. Multiply the probability by the effect. You can think of this product as being the risk level.

5. Decide to develop protections for vulnerabilities that exceed a specific threshold for the product of probability and effect. You might choose to handle all vulnerabilities that have a probability × effect value greater than 6, for example.

If you want to characterize the probability and effect with finer gradations, use a scale of, say, 1 through 5 instead of 1 through 3, and follow the same steps.
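The probability-times-effect arithmetic in the steps above can be sketched with a few lines of awk; the vulnerability names and ratings below are made up for illustration, and 6 is the example threshold.

```shell
# Columns: vulnerability  probability  effect  (1 = low, 2 = medium, 3 = high)
cat > risks.txt <<'EOF'
divulged-passwords 3 3
sendmail-hole      2 3
lost-badge         1 2
EOF

# risk = probability * effect; flag anything over the threshold of 6.
awk '{ risk = $2 * $3
       flag = (risk > 6) ? "  <-- mitigate first" : ""
       printf "%-20s risk=%d%s\n", $1, risk, flag }' risks.txt
```

With these sample numbers, only divulged-passwords (3 × 3 = 9) crosses the threshold; sendmail-hole scores exactly 6 and is not flagged.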
Establishing a security policy for Linux systems

Using risk analysis and any business requirements that you may have to address (regardless of risk level) as a foundation, you can craft a security policy for the organization. Such a security policy typically addresses high-level objectives such as ensuring the confidentiality, integrity, and availability of data and systems.

The security policy typically addresses the following areas:

Authentication: Examples include what method is used to ensure that a user is the real user, who gets access to the system, the minimum length and complexity of passwords, how often users change passwords, and how long a user can be idle before that user is logged out automatically.

Authorization: Examples include what different classes of users can do on the system and who can have the root password.

Data protection: Examples include what data must be protected, who has access to the data, and whether encryption is necessary for some data.

Internet access: Examples include restrictions on LAN users from accessing the Internet, what Internet services (such as web and Internet Relay Chat) users can access, whether incoming emails and attachments are scanned for viruses, whether the network has a firewall, and whether virtual private networks (VPNs) are used to connect private networks across the Internet.

Internet services: Examples include what Internet services are allowed on each Linux system; the existence of any file servers, mail servers, or web servers; what services run on each type of server; and what services, if any, run on Linux systems used as desktop workstations.

Security audits: Examples include who tests whether the security is adequate, how often security is tested, and how problems found during security testing are handled.
Incident handling: Examples include the procedures for handling any computer security incidents, who must be informed, and what information must be gathered to help with the investigation of incidents.

Responsibilities: Examples include who is responsible for maintaining security, who monitors log files and audit trails for signs of unauthorized access, and who maintains the security policy.

Implementing security solutions (mitigation) on a Linux system

After you analyze the risks (vulnerabilities) and develop a security policy, you must select the mitigation approach: how to protect against specific vulnerabilities. You develop an overall security solution based on security policy, business requirements, and available technology. This solution makes use of people, process, and technology, and includes the following:

Services (authentication, access control, encryption)
Mechanisms (username and password, firewalls)
Objects (hardware, software)

Because it’s impossible to protect computer systems from all attacks, solutions identified through the risk management process must support three integral concepts of a holistic security program:

Protection: Provide countermeasures such as policies, procedures, and technical solutions to defend against attacks on the assets being protected.

Detection: Monitor for potential breakdowns in the protective measures that could result in security breaches.

Reaction (response): Respond to detected breaches to thwart attacks before damage occurs; often requires human involvement.

Because absolute protection from attacks is impossible to achieve, a security program that doesn’t incorporate detection and reaction is incomplete.

Managing security on Linux systems

In addition to implementing security solutions, you also need to implement security management measures to continually monitor, detect, and respond to any security incidents.
The combination of the risk analysis, security policy, security solutions, and security management provides the overall security framework. Such a framework helps establish a common level of understanding of security concerns and a common basis for the design and implementation of security solutions.
Article / Updated 09-27-2018
It is easy to share files between Linux computers on a local network. The Linux way of accomplishing this is to use NFS (Network File System). Sharing files through NFS is simple and involves two basic steps:

On the Linux system that runs the NFS server, you export (share) one or more directories by listing them in the /etc/exports file and by running the exportfs command. In addition, you must start the NFS server.

On each client system, you use the mount command to mount the directories that your server exported.

The only problem with using NFS is that each client system must support it. Microsoft Windows doesn’t ship with NFS, so you have to buy the NFS software separately if you want to share files by using NFS. Using NFS if all systems on your LAN run Linux (or other variants of Unix with built-in NFS support) makes good sense, however.

NFS has security vulnerabilities, so you shouldn’t set up NFS on systems that are directly connected to the Internet without using the RPCSEC_GSS security that comes with NFS version 4 (NFSv4). Version 4.2 was released in November 2016; you should use it for most purposes, because it includes all the needed updates.

The following information walks you through NFS setup, using an example of two Linux PCs on a LAN.

Exporting a file system with NFS in Linux

Start with the server system that exports — makes available to the client systems — the contents of a directory. On the server, you must run the NFS service and designate one or more file systems to export.

To export a file system, you have to add an appropriate entry to the /etc/exports file. Suppose that you want to export the /home directory, and you want to enable the host named LNBP75 to mount this file system for read and write operations.
You can do so by adding the following entry to the /etc/exports file:

/home LNBP75(rw,sync)

If you want to give access to all hosts on a LAN such as 192.168.0.0, you could change this line to

/home 192.168.0.0/24(rw,sync)

Every line in the /etc/exports file has this general format:

Directory host1(options) host2(options) …

The first field is the directory being shared via NFS, followed by one or more fields that specify which hosts can mount that directory remotely and several options in parentheses. You can specify the hosts with names or IP addresses, including ranges of addresses.

The options in parentheses denote the kind of access each host is granted and how user and group IDs from the server are mapped to IDs on the client. (If a file is owned by root on the server, for example, what owner is that on the client?) Within the parentheses, commas separate the options. If a host is allowed both read and write access, and all IDs are to be mapped to the anonymous user (by default, the anonymous user is named nobody), the options look like this:

(rw,all_squash)

The table below shows the options you can use in the /etc/exports file. You find two types of options: general options and user ID mapping options.
Options in /etc/exports

General Options

secure            Allows connections only from port 1024 or lower (default)
insecure          Allows connections from port 1024 or higher
ro                Allows read-only access (default)
rw                Allows both read and write access
sync              Performs write operations (writing information to the disk) when requested (by default)
async             Performs write operations when the server is ready
no_wdelay         Performs write operations immediately
wdelay            Waits a bit to see whether related write requests arrive and then performs them together (by default)
hide              Hides an exported directory that’s a subdirectory of another exported directory (by default)
no_hide           Causes a directory to not be hidden (opposite of hide)
subtree_check     Performs subtree checking, which involves checking parent directories of an exported subdirectory whenever a file is accessed (by default)
no_subtree_check  Turns off subtree checking (opposite of subtree_check)
insecure_locks    Allows insecure file locking

User ID Mapping Options

all_squash        Maps all user IDs and group IDs to the anonymous user on the client
no_all_squash     Maps remote user and group IDs to similar IDs on the client (by default)
root_squash       Maps remote root user to the anonymous user on the client (by default)
no_root_squash    Maps remote root user to the local root user
anonuid=UID       Sets the user ID of the anonymous user to be used for the all_squash and root_squash options
anongid=GID       Sets the group ID of the anonymous user to be used for the all_squash and root_squash options

After adding the entry in the /etc/exports file, manually export the file system by typing the following command in a terminal window:

exportfs -a

This command exports all file systems defined in the /etc/exports file. Now you can start the NFS server processes.

In Debian, start the NFS server by logging in as root and typing /etc/init.d/nfs-kernel-server start in a terminal window. In Fedora, type /etc/init.d/nfs start. In SUSE, type /etc/init.d/nfsserver start.
If you want the NFS server to start when the system boots, type update-rc.d nfs-kernel-server defaults in Debian. In Fedora, type chkconfig --level 35 nfs on. In SUSE, type chkconfig --level 35 nfsserver on. When the NFS service is up, the server side of NFS is ready. Now you can try to mount the exported file system from a client system and access the exported file system as needed.

If you ever make any changes in the exported file systems listed in the /etc/exports file, remember to restart the NFS service. To restart a service, invoke the script in the /etc/init.d directory with restart as the argument (instead of the start argument that you use to start the service).

Mounting an NFS file system in Linux

To access an exported NFS file system on a client system, you have to mount that file system on a mount point. The mount point is nothing more than a local directory. Suppose that you want to access the /home directory exported from the server named LNBP200 at the local directory /mnt/lnbp200 on the client system. To do so, follow these steps:

1. Log in as root, and create the directory with this command:

mkdir /mnt/lnbp200

2. Type the following command to mount the directory from the remote system (LNBP200) on the local directory /mnt/lnbp200:

mount lnbp200:/home /mnt/lnbp200

After completing these steps, you can view and access exported files from the local directory /mnt/lnbp200. To confirm that the NFS file system is indeed mounted, log in as root on the client system, and type mount in a terminal window. You see a line similar to the following about the NFS file system:

lnbp200:/home/public on /mnt/lnbp200 type nfs (rw,addr=192.168.0.4)

NFS supports two types of mount operations: hard and soft. By default, a mount is hard, which means that if the NFS server doesn’t respond, the client keeps trying to access the server indefinitely until the server responds. You can soft-mount an NFS volume by adding the -o soft option to the mount command.
For a soft mount, the client returns an error if the NFS server fails to respond and doesn’t retry.
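If you want the mount to persist across reboots, the usual place for it is /etc/fstab. This is only a sketch using the example host and mount point from above; soft and timeo (timeout in tenths of a second) are standard NFS mount options, but tune the values for your own setup.

```
# /etc/fstab entry (hypothetical): soft-mount lnbp200:/home so that an
# unresponsive server returns an error instead of hanging the client
lnbp200:/home  /mnt/lnbp200  nfs  soft,timeo=50  0  0
```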
Article / Updated 09-27-2018
In Linux systems, you can use the tar command to archive files to a device, such as a hard drive or tape. The tar program in Linux creates an archive file that can contain other directories and files and (optionally) compress the archive for efficient storage. Then the archive is written to a specified device or another file. Many software packages are distributed in the form of a compressed tar file.

The command syntax of the tar program in Linux is as follows:

tar options destination source

Here, options usually is specified by a sequence of single letters, with each letter specifying what tar does; destination is the device name of the backup device; and source is a list of file or directory names denoting the files to back up.

Backing up and restoring a single-volume archive in Linux

Suppose that you want to back up the contents of the /etc/X11 directory on a hard drive. Log in as root, and type the following command, where xxx represents your drive:

tar zcvf /dev/xxx /etc/X11

The tar program displays a list of filenames as each file is copied to the compressed tar archive. In this case, the options are zcvf, the destination is /dev/xxx (the drive), and the source is the /etc/X11 directory (which implies all its subdirectories and their contents). You can use a similar tar command to back up files to a tape by replacing the hard drive location with that of the tape device, such as /dev/st0 for a SCSI tape drive.

This table defines a few common tar options in Linux.

Common tar Options

c   Creates a new archive.
f   Specifies the name of the archive file or device on the next field in the command line.
M   Specifies a multivolume archive.
t   Lists the contents of the archive.
v   Displays verbose messages.
x   Extracts files from the archive.
z   Compresses the tar archive by using gzip.
To view the contents of the tar archive that you create on the drive, type the following command (replacing xxx with the drive device):

tar ztf /dev/xxx

You see a list of filenames (each beginning with /etc/X11) indicating what’s in the backup. In this tar command, the t option lists the contents of the tar archive.

To extract the files from a tar backup, follow these steps while logged in as root:

1. Change the directory to /tmp by typing this command:

cd /tmp

This step is where you can practice extracting the files from the tar backup. For a real backup, change the directory to an appropriate location. (Typically, you type cd /.)

2. Type the following command:

tar zxvf /dev/xxx

This tar command uses the x option to extract the files from the archive stored on the device (replace xxx with the drive).

Now if you check the contents of the /tmp directory, you notice that the tar command creates an etc/X11 directory tree in /tmp and restores all the files from the tar archive to that directory. The tar command strips the leading / from the filenames in the archive and restores the files in the current directory. If you want to restore the /etc/X11 directory from the archive, use this command (substituting the device name for xxx):

tar zxvf /dev/xxx -C /

The -C option changes directories to the directory specified (in this case, the root directory of /) before doing the tar; the / at the end of the command denotes the directory where you want to restore the backup files.

In Linux systems, you can use the tar command to create, view, and restore an archive. You can store the archive in a file or in any device you specify with a device name.

Backing up and restoring a multivolume archive in Linux

Sometimes, the capacity of a single storage medium is less than the total storage space needed to store the archive. In this case, you can use the M option for a multivolume archive, meaning that the archive can span multiple tapes.
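You can rehearse the whole create/list/extract cycle with an ordinary archive file instead of a raw device; everything below runs in scratch directories, and the paths are placeholders.

```shell
# Build a small directory tree to back up.
src="$(mktemp -d)"
mkdir "$src/sub"
echo "hello" > "$src/a.txt"
echo "world" > "$src/sub/b.txt"

# c = create, z = gzip, v = verbose, f = archive name.
tar zcvf /tmp/practice.tar.gz -C "$src" .

# t lists the archive without extracting it.
tar ztf /tmp/practice.tar.gz

# x extracts; -C picks the restore directory.
dest="$(mktemp -d)"
tar zxvf /tmp/practice.tar.gz -C "$dest"

# The restored files should match the originals.
cmp "$src/sub/b.txt" "$dest/sub/b.txt" && echo "restore OK"
```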
Note, however, that you can’t create a compressed, multivolume archive, so you have to drop the z option. The M tells tar to create a multivolume archive; to back up /etc/X11 that way, for example, you use a command such as this (replacing xxx with your device):

tar cvfM /dev/xxx /etc/X11

The tar command prompts you for a second medium when the first one is filled. Take out the first medium and insert another when you see the following prompt:

Prepare volume #2 and hit return:

When you press Enter, the tar program continues with the second medium. For larger archives, the tar program continues to prompt for new media as needed.

To restore from this multivolume archive, type cd /tmp to change the directory to /tmp. (The /tmp directory is used for illustrative purposes, but you have to use a real directory when you restore files from an archive.) Then type (replacing xxx with the device you’re using)

tar xvfM /dev/xxx

The tar program prompts you to feed the media as necessary.

Use the du -s command to determine the amount of storage you need for archiving a directory. Type du -s /etc to see the total size of the /etc directory in kilobytes, for example. Here’s typical output from that command:

35724 /etc

The resulting output shows that the /etc directory requires at least 35,724 kilobytes of storage space to back up.

Backing up on tapes for Linux systems

Although backing up on tapes is as simple as using the right device name in the tar command, you do have to know some nuances of the tape device to use it well. When you use tar to back up to the device named /dev/st0 (the first SCSI tape drive), the tape device automatically rewinds the tape when the tar program finishes copying the archive to the tape. The /dev/st0 device is called a rewinding tape device because it rewinds tapes by default.

If your tape can hold several gigabytes of data, you may want to write several tar archives — one after another — to the same tape. (Otherwise, much of the tape may be left empty.) If you plan to do so, your tape device can’t rewind the tape after the tar program finishes.
To help you with scenarios like this one, several Linux tape devices are nonrewinding. The nonrewinding SCSI tape device is called /dev/nst0. Use this device name if you want to write one archive after another on a tape. After each archive, the nonrewinding tape device writes an end-of-file (EOF) marker to separate one archive from the next. Use the mt command to control the tape; you can move from one marker to the next or rewind the tape.

When you finish writing several archives to a tape using the /dev/nst0 device name, for example, you can force the tape to rewind with the following command:

mt -f /dev/nst0 rewind

After rewinding the tape, you can use the following command to extract files from the first archive to the current disk directory:

tar xvf /dev/nst0

After that, you must move past the EOF marker to the next archive. To do so, use the following mt command:

mt -f /dev/nst0 fsf 1

This command positions the tape at the beginning of the next archive. Now use the tar xvf command again to read this archive.

If you save multiple archives on a tape, you have to keep track of the archives yourself. The order of the archives can be hard to remember, so you may be better off simply saving one archive per tape.

Performing incremental backups in Linux

Suppose that you use tar to back up your system’s hard drive on a tape. Because creating a full backup can take quite some time, you don’t want to repeat this task every night. (Besides, only a small number of files may have changed during the day.) To locate the files that need backing up, you can use the find command to list all files that have changed in the past 24 hours:

find / -mtime -1 -type f -print

This command prints a list of files that have changed within the past day. The -mtime -1 option means that you want the files that were last modified less than one day ago.
Now you can combine this find command with the tar command to back up only those files that have changed within the past day:

tar cvf /dev/st0 `find / -mtime -1 -type f -print`

When you place a command between single back quotes, the shell executes that command and places the output at that point in the command line. The result is that the tar program saves only the changed files in the archive. This process gives you an incremental backup of only the files that have changed since the previous day.

Performing automated backups in Linux

In Linux systems, you can use crontab to set up recurring jobs (called cron jobs). The Linux system performs these tasks at regular intervals. Backing up your system is a good use of the crontab facility. Suppose that your backup strategy is as follows:

Every Sunday at 1:15 a.m., your system backs up the entire hard drive on the tape.

Monday through Saturday, your system performs an incremental backup at 3:10 a.m. by saving only those files that have changed during the past 24 hours.

To set up this automated backup schedule, log in as root, and type the following lines in a file named backups (assuming that you’re using a SCSI tape drive):

15 1 * * 0 tar zcvf /dev/st0 /
10 3 * * 1-6 tar zcvf /dev/st0 `find / -mtime -1 -type f -print`

Next, submit this job schedule by using the following crontab command:

crontab backups

Now you’re set for an automated backup. All you need to do is to place a new tape in the tape drive every day. Remember also to give each tape an appropriate label.
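Here's the find-plus-tar idea sketched on a scratch directory and an archive file, so you can watch it pick up only recently modified files; all the names here are made up, and the backdating relies on GNU touch's -d option:

```shell
# Scratch directory with one stale file and one fresh file.
mkdir -p /tmp/inc_demo
echo "old data" > /tmp/inc_demo/old.txt
touch -d "2 days ago" /tmp/inc_demo/old.txt   # backdate (GNU touch)
echo "new data" > /tmp/inc_demo/new.txt

# Archive only the files modified within the past day.
tar cvf /tmp/incremental.tar `find /tmp/inc_demo -mtime -1 -type f -print`

# The archive lists new.txt but not old.txt (tar strips the leading /).
tar tf /tmp/incremental.tar
```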
Article / Updated 09-27-2018
As a Linux system administrator, you may have to run some programs automatically at regular intervals or execute one or more commands at a specified time in the future. Your Linux system includes the facilities to schedule jobs to run at any future date or time you want. You can also set up the system to perform a task periodically or just once. Here are some typical tasks you can perform by scheduling jobs on your Linux system:

Back up the files in the middle of the night.

Download large files in the early morning when the system isn’t busy.

Send yourself messages as reminders of meetings.

Analyze system logs periodically and look for any abnormal activities.

You can set up these jobs by using the at command or the crontab facility of Linux.

How to schedule one-time jobs in Linux

If you want to run one or more commands at a later time, you can use the at command. The atd daemon — a program designed to process jobs submitted with at — runs your commands at the specified time and mails the output to you.

Before you try the at command in Linux, you need to know that the following configuration files control which users can schedule tasks by using the at command:

/etc/at.allow contains the names of the users who may use the at command to submit jobs.

/etc/at.deny contains the names of users who are not allowed to use the at command to submit jobs.

If these files aren’t present, or if you find an empty /etc/at.deny file, any user can submit jobs by using the at command. The default in Linux is an empty /etc/at.deny file; when this default is in place, anyone can use the at command. If you don’t want some users to use at, simply list their usernames in the /etc/at.deny file.

To use at to schedule a one-time job in Linux for execution at a later time, follow these steps:

Run the at command with the date or time when you want your commands to be executed.
When you press Enter, the at> prompt appears, as follows:

at 21:30
at>

This method is the simplest way to indicate the time when you want to execute one or more commands; simply specify the time in a 24-hour format. In this case, you want to execute the commands at 9:30 tonight (or tomorrow, if it’s already past 9:30 p.m.). You can, however, specify the execution time in many ways.

At the at> prompt, type the commands you want to execute as though you were typing at the shell prompt. After each command, press Enter and continue with the next command.

When you finish entering the commands you want to execute, press Ctrl+D to indicate the end. Here’s an example that shows how to execute the ps command at a future time:

at> ps
at> <EOT>
job 1 at 2018-12-28 21:30

After you press Ctrl+D, the at command responds with the <EOT> message, a job number, and the date and time when the job will execute.

Formats for the at Command for the Time of Execution
Command                 When the Job Will Run
at now                  Immediately
at now + 15 minutes     15 minutes from the current time
at now + 4 hours        4 hours from the current time
at now + 7 days         7 days from the current time
at noon                 At noon today (or tomorrow, if it’s already past noon)
at now next hour        Exactly 60 minutes from now
at now next day         At the same time tomorrow
at 17:00 tomorrow       At 5:00 p.m. tomorrow
at 4:45pm               At 4:45 p.m. today (or tomorrow, if it’s already past 4:45 p.m.)
at 3:00 Dec 28, 2018    At 3:00 a.m. on December 28, 2018

After you enter one or more jobs, you can view the current list of scheduled jobs with the atq command. The output of this command looks similar to the following:

4 2018-12-28 03:00 a root
5 2018-10-26 21:57 a root
6 2018-10-26 16:45 a root

The first field in each line shows the job number — the same number that the at command displays when you submit the job. The next field shows the year, month, day, and time of execution. The third field shows the queue in which the job is pending (a), and the last field shows the username of the user who submitted the job.
If you want to cancel a job, use the atrm command to remove that job from the queue. When you’re removing a job with the atrm command, refer to the job by its number, as follows:

atrm 4

This command deletes job 4 scheduled for 3:00 a.m. on December 28, 2018.

When a job executes, the output is mailed to you. Type mail at a terminal window to read your mail and to view the output from your jobs.

How to schedule recurring jobs in Linux

Although at is good for running commands at a specific time, it’s not useful for running a program automatically at repeated intervals. You have to use crontab to schedule recurring jobs, such as backing up your files to tape at midnight every evening.

You schedule recurring jobs by placing job information in a file with a specific format and submitting this file with the crontab command. The cron daemon — crond — checks the job information every minute and executes the recurring jobs at the specified times. Because the cron daemon processes recurring jobs, such jobs are also referred to as cron jobs. Any output from a cron job is mailed to the user who submits the job. (In the submitted job-information file, you can specify a different recipient for the mailed output.)

Two configuration files control who can schedule cron jobs in Linux by using crontab:

/etc/cron.allow contains the names of the users who are allowed to use the crontab command to submit jobs.

/etc/cron.deny contains the names of users who are not allowed to use the crontab command to submit jobs.

If the /etc/cron.allow file exists, only users listed in this file can schedule cron jobs. If only the /etc/cron.deny file exists, users listed in this file can’t schedule cron jobs. If neither file exists, the default Linux setup enables any user to submit cron jobs.

To submit a cron job in Linux, follow these steps:

Prepare a shell script (or an executable program in any programming language) that can perform the recurring task you want to perform.
You can skip this step if you want to execute an existing program periodically.

Prepare a text file with information about the times when you want the shell script or program (from Step 1) to execute; then submit this file by using crontab.

You can submit several recurring jobs with a single file. Each line with timing information about a job has a standard format, with six fields. The first five fields specify when the job runs, and the sixth and subsequent fields constitute the command that runs. Here’s a line that executes the myjob shell script in a user’s home directory at 5 minutes past midnight each day:

5 0 * * * $HOME/myjob

The table below shows the meaning of the first five fields. Note: An asterisk (*) means all possible values for that field. Also, an entry in any of the first five fields can be a single number, a comma-separated list of numbers, a pair of numbers separated by a hyphen (indicating a range of numbers), or an asterisk.

Format for the Time of Execution in crontab Files
Field Number   Meaning of Field    Acceptable Range of Values*
1              Minute              0–59
2              Hour of the day     0–23
3              Day of the month    1–31
4              Month               1–12 (1 means January, 2 means February, and so on) or the names of months using the first three letters — Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec
5              Day of the week     0–6 (0 means Sunday, 1 means Monday, and so on) or the three-letter abbreviations of weekdays — Sun, Mon, Tue, Wed, Thu, Fri, Sat

* An asterisk in a field means all possible values for that field. If an asterisk is in the third field, for example, the job is executed every day.

If the text file jobinfo (in the current directory) contains the job information, submit this information to crontab with the following command:

crontab jobinfo

That’s it! You’re set with the cron job. From now on, the cron job runs at regular intervals (as specified in the job-information file), and you receive mail messages with the output from the job.
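To make the five timing fields concrete, here's a sketch of what a job-information file such as jobinfo might contain; the script names are hypothetical, and each entry uses only the forms described above (single numbers, a range, a comma-separated list, and asterisks):

```shell
# minute hour day-of-month month day-of-week command
# 12:05 a.m. every day:
5  0  *  *  *    $HOME/myjob
# 9:00 a.m., Monday through Friday:
0  9  *  *  1-5  $HOME/weekday.sh
# 2:30 a.m. on the first day of each month:
30 2  1  *  *    $HOME/monthly.sh
# Noon on Saturday and Sunday:
0  12 *  *  0,6  $HOME/weekend.sh
```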
To verify that the job is indeed scheduled in Linux, type the following command:

crontab -l

The output of the crontab -l command shows the cron jobs currently installed in your name. To remove your cron jobs, type crontab -r.

If you log in as root, you can also set up, examine, and remove cron jobs for any user. To set up cron jobs for a user, use this command:

crontab -u username filename

Here, username is the user for whom you install the cron jobs, and filename is the file that contains information about the jobs.

Use the following form of the crontab command to view the cron jobs for a user:

crontab -u username -l

To remove a user’s cron jobs, use the following command:

crontab -u username -r

Note: The cron daemon also executes the cron jobs listed in the systemwide cron job file /etc/crontab. Here’s a typical /etc/crontab file from a Linux system (type cat /etc/crontab to view the file):

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

The first four lines set up several environment variables for the jobs listed in this file. The MAILTO environment variable specifies the user who receives the mail message with the output from the cron jobs in this file. The line that begins with # is a comment line. The four lines following the run-parts comment execute the run-parts shell script (located in the /usr/bin directory) at various times with the name of a specific directory as argument. Each argument to run-parts — /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly — is a directory. Essentially, run-parts executes all scripts located in the directory that you provide as an argument. The table below lists the directories where you can find these scripts and when they execute.
You have to look at the scripts in these directories to know what executes at these intervals.

Script Directories for cron Jobs
Directory Name      Script Executes
/etc/cron.hourly    Every hour
/etc/cron.daily     Each day
/etc/cron.weekly    Weekly
/etc/cron.monthly   Once each month
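Conceptually, run-parts just runs every executable script in the directory you hand it. This sketch is a simplified stand-in (using a made-up directory), not the real run-parts implementation:

```shell
# Create a stand-in for a directory such as /etc/cron.hourly.
mkdir -p /tmp/cron.test
printf '#!/bin/sh\necho "hourly job ran"\n' > /tmp/cron.test/job1
chmod +x /tmp/cron.test/job1

# Run every executable script in the directory, as run-parts would.
for script in /tmp/cron.test/*; do
    [ -x "$script" ] && "$script"
done
```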
Article / Updated 09-27-2018
When you’re the system administrator, you must keep an eye on how well your Linux system is performing. You can monitor the overall performance of your Linux system by looking at information such as

Central processing unit (CPU) usage

Physical memory usage

Virtual memory (swap-space) usage

Hard drive usage

Linux comes with utilities that you can use to monitor these performance parameters. The following sections describe a few of these utilities and show you how to understand the information they present.

Using the top utility in Linux

To view the top CPU processes — the ones that use most of the CPU time — you can use the text mode top utility. To start that utility, type top in a terminal window (or text console). The top utility displays a text screen listing the current processes, arranged in the order of CPU usage, along with various other information, such as memory and swap-space usage. The top utility updates the display every 5 seconds. If you keep top running in a window, you can continually monitor the status of your Linux system. To quit top, press Q, press Ctrl+C, or close the terminal window.

The first five lines of the output screen provide summary information about the system, as follows:

The first line shows the current time, how long the system has been up, how many users are logged in, and three load averages — the average number of processes ready to run during the past 1, 5, and 15 minutes.

The second line lists the total number of processes/tasks and the status of these processes.

The third line shows CPU usage — what percentage of CPU time is used by user processes, what percentage by system (kernel) processes, and the percentage of time during which the CPU is idle.

The fourth line shows how the physical memory is being used — the total amount, how much is used, how much is free, and how much is allocated to buffers (for reading from the hard drive, for example).
The fifth line shows how the virtual memory (or swap space) is being used — the total amount of swap space, how much is used, how much is free, and how much is being cached.

The table that appears below the summary information lists information about the current processes, arranged in decreasing order by amount of CPU time used. The table below summarizes the meanings of the column headings in the table that top displays.

Column Headings in top Utility’s Output
Heading   Meaning
PID       Process ID of the process.
USER      Username under which the process is running.
PR        Priority of the process.
NI        Nice value of the process. The value ranges from –20 (highest priority) to 19 (lowest priority), and the default is 0. (The nice value represents the relative priority of the process: The higher the value, the lower the priority and the nicer the process, because it yields to other processes.)
VIRT      Total amount of virtual memory used by the process, in kilobytes.
RES       Total physical memory used by a task (typically shown in kilobytes, but an m suffix indicates megabytes).
SHR       Amount of shared memory used by the process.
S         State of the process (S for sleeping, D for uninterruptible sleep, R for running, Z for zombies — processes that should be dead but are still running — and T for stopped).
%CPU      Percentage of CPU time used since the last screen update.
%MEM      Percentage of physical memory used by the process.
TIME+     Total CPU time the process has used since it started.
COMMAND   Shortened form of the command that started the process.

Using the uptime command in Linux

You can use the uptime command to get a summary of the system’s state.
Just type the command like this:

uptime

It displays output similar to the following:

15:03:21 up 32 days, 57 min, 3 users, load average: 0.13, 0.23, 0.27

This output shows the current time, how long the system has been up, the number of users, and (finally) the three load averages — the average number of processes that were ready to run in the past 1, 5, and 15 minutes. Load averages greater than 1 imply that many processes are competing for CPU time simultaneously. The load averages give you an indication of how busy the system is.

Using the vmstat utility in Linux

You can get summary information about the overall system usage with the vmstat utility. To view system usage information averaged over 5-second intervals, type the following command (the second argument indicates the total number of reports that vmstat displays):

vmstat 5 8

You see output similar to the following listing:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r b swpd  free buff  cache  si so bi   bo  in   cs  us sy id wa
 0 0 31324 4016 18568 136004 1  1  17   16  8    110 33 4  61 1
 0 1 31324 2520 15348 139692 0  0  7798 199 1157 377 8  8  6  78
 1 0 31324 1584 12936 141480 0  19 5784 105 1099 437 12 5  0  82
 2 0 31324 1928 13004 137136 7  0  1586 138 1104 561 43 6  0  51
 3 1 31324 1484 13148 132064 0  0  1260 51  1080 427 50 5  0  46
 0 0 31324 1804 13240 127976 0  0  1126 46  1082 782 19 5  47 30
 0 0 31324 1900 13240 127976 0  0  0    0   1010 211 3  1  96 0
 0 0 31324 1916 13248 127976 0  0  0    10  1015 224 3  2  95 0

The first line of output shows the averages since the last reboot. After that line, vmstat displays the 5-second average data seven more times, covering the next 35 seconds. The tabular output is grouped as six categories of information, indicated by the fields in the first line of output. The second line shows further details for each of the six major fields. You can interpret these fields by using the table below.
Meaning of Fields in the vmstat Utility’s Output
Field Name   Description
procs        Number of processes and their types: r = processes waiting to run, b = processes in uninterruptible sleep, and w = processes swapped out but ready to run.
memory       Information about physical memory and swap-space usage (all numbers in kilobytes): swpd = virtual memory used, free = free physical memory, buff = memory used as buffers, and cache = virtual memory that’s cached.
swap         Amount of swapping (the numbers are in kilobytes per second): si = amount of memory swapped in from disk, and so = amount of memory swapped out to disk.
io           Information about input and output. (The numbers are in blocks per second, where the block size depends on the disk device.) bi = rate of blocks read in from the disk, and bo = rate of blocks written out to the disk.
system       Information about the system: in = number of interrupts per second (including clock interrupts), and cs = number of context switches per second — how many times the kernel changed which process was running.
cpu          Percentages of CPU time used: us = percentage of CPU time used by user processes, sy = percentage of CPU time used by system processes, id = percentage of time CPU is idle, and wa = time spent waiting for input or output (I/O).

In the vmstat utility’s output, high values in the si and so fields indicate too much swapping. (Swapping refers to the copying of information between physical memory and the virtual memory on the hard drive.) High numbers in the bi and bo fields indicate too much disk activity.

Checking disk performance and disk usage in Linux systems

Linux comes with the /sbin/hdparm program to control IDE or ATAPI hard drives, which are common on PCs. One feature of the hdparm program allows you to use the -t option to determine the rate at which data is read from the disk into a buffer in memory.
Here’s the result of typing /sbin/hdparm -t /dev/hda on one system:

/dev/hda:
Timing buffered disk reads: 178 MB in 3.03 seconds = 58.81 MB/sec

The command requires the IDE drive’s device name (/dev/hda for the first hard drive and /dev/hdb for the second hard drive) as an argument. If you have an IDE hard drive, you can try this command to see how fast data is read from your system’s disk drive.

To display the space available in the currently mounted file systems, use the df command. If you want a more readable output from df, type the following command:

df -h

Here’s typical output from this command:

Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/hda5   7.1G  3.9G  2.9G   59%   /
/dev/hda3   99M   18M   77M    19%   /boot
none        125M  0     125M   0%    /dev/shm
/dev/scd0   2.6G  2.6G  0      100%  /media/cdrecorder

As this example shows, the -h option causes the df command to display the sizes in gigabytes (G) and megabytes (M).

To check the disk space being used by a specific directory, use the du command. You can specify the -h option to view the output in kilobytes (K) and megabytes (M), as shown in the following example:

du -h /var/log

Here’s typical output from that command:

152K /var/log/cups
4.0K /var/log/vbox
4.0K /var/log/httpd
508K /var/log/gdm
4.0K /var/log/samba
8.0K /var/log/mail
4.0K /var/log/news/OLD
8.0K /var/log/news
4.0K /var/log/squid
2.2M /var/log

The du command displays the disk space used by each directory, and the last line shows the total disk space used by that directory. If you want to see only the total space used by a directory, use the -s option. Type du -sh /home to see the space used by the /home directory, for example. The command produces output that looks like this:

89M /home
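You can watch du and df respond to real data with a quick sketch; the directory name is made up:

```shell
# Create a directory holding roughly 100 kilobytes of data.
mkdir -p /tmp/du_demo
dd if=/dev/zero of=/tmp/du_demo/filler bs=1024 count=100 2>/dev/null

# -s summarizes; -k reports kilobytes (use -h for human-readable units).
du -sk /tmp/du_demo

# df reports on the file system that holds the directory.
df -h /tmp | tail -1
```

The du total comes to at least 100 kilobytes here, because that's how much data the dd command wrote.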