Thursday, October 17, 2024

Essential Linux commands for various system administration tasks

In this blog post, I am consolidating common commands available to manage the Linux operating system. If you are looking for more detailed information, you can either refer to my previous Linux blog series (Parts 1 to 10) or use the manual page available for each command via man <command>


In Linux, there are several commands available to manage files, users, processes, networking, packages, and system settings. Below is a categorized list of essential Linux commands for various system management tasks:



1. File and Directory Management:

  • ls: Lists files and directories.
  • cd: Changes the current directory.
  • pwd: Prints the current working directory.
  • cp: Copies files or directories.
  • mv: Moves or renames files or directories.
  • rm: Removes files or directories.
  • mkdir: Creates directories.
  • rmdir: Removes empty directories.
  • find: Searches for files in a directory hierarchy.
  • touch: Creates an empty file or updates the timestamp of an existing file.
  • ln: Creates hard and symbolic links.
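As a quick tour of the commands above, here is a minimal sketch run entirely inside a throwaway directory (the `project` and `notes.txt` names are just examples):

```shell
workdir=$(mktemp -d)                 # scratch directory; nothing real is touched
cd "$workdir"
pwd                                  # confirm the current working directory
mkdir -p project/docs                # create nested directories in one step
touch project/docs/notes.txt         # create an empty file
cp project/docs/notes.txt project/notes.bak   # copy it up one level
mv project/notes.bak project/notes.old        # rename the copy
ln -s docs/notes.txt project/latest           # symbolic link to the original
find project -name '*.txt'           # locate files by pattern
ls -l project                        # long listing of the directory
rm -r project                        # remove the whole tree
```

Using `mktemp -d` like this is a handy habit when experimenting, since cleanup is just deleting one directory.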

2. File Permissions and Ownership:

  • chmod: Changes file permissions.
  • chown: Changes file owner and group.
  • chgrp: Changes the group ownership of a file.
  • umask: Sets default file permissions.
  • stat: Displays detailed file or file system status, including permissions.
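A short sketch of how these fit together (GNU `stat -c` assumed; `report.txt` and the user/group names in the comments are hypothetical):

```shell
workdir=$(mktemp -d); cd "$workdir"
umask 022                        # new files default to 644, new dirs to 755
touch report.txt
chmod 640 report.txt             # numeric: rw for owner, r for group, none for others
stat -c '%a %n' report.txt       # prints "640 report.txt"
chmod u+x,g-r report.txt         # symbolic: add owner execute, drop group read
stat -c '%a %n' report.txt       # prints "700 report.txt"
# Changing ownership needs root, e.g.:
#   sudo chown alice report.txt
#   sudo chgrp developers report.txt
```

Numeric mode is compact for setting an absolute permission set; symbolic mode is safer when you only want to adjust one bit.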

3. Process Management:

  • ps: Displays information about active processes.
  • top or htop: Provides a dynamic view of running processes.
  • kill: Sends signals (e.g., to terminate) to processes.
  • killall: Kills all processes by name.
  • nice: Starts a process with a defined priority.
  • renice: Changes the priority of a running process.
  • jobs: Lists background jobs in the current shell.
  • bg: Resumes a job in the background.
  • fg: Brings a background job to the foreground.
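A minimal, self-contained example of the process lifecycle using a harmless `sleep` as the workload:

```shell
sleep 300 &                       # start a long-running job in the background
pid=$!                            # shell variable holding its process ID
ps -p "$pid" -o pid,ni,comm       # show its PID, nice value, and command name
renice -n 10 -p "$pid"            # lower its priority (raising one needs root)
kill "$pid"                       # politely ask it to terminate (SIGTERM)
wait "$pid" 2>/dev/null || true   # reap it; non-zero status reflects the signal
# To start a job at low priority in the first place:
#   nice -n 10 some_long_task
```

In an interactive shell you could instead suspend the job with Ctrl+Z, then use `jobs`, `bg`, and `fg` to manage it.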

4. Disk and Storage Management:

  • df: Shows disk space usage.
  • du: Displays the amount of disk space used by files and directories.
  • mount: Mounts a file system.
  • umount: Unmounts a file system.
  • fdisk: Disk partitioning tool.
  • parted: Another disk partitioning tool with advanced features.
  • mkfs: Creates a file system on a disk partition.
  • fsck: File system consistency check and repair.
  • lsblk: Lists information about block devices.
  • blkid: Displays block device attributes (UUID, file system type).
  • sync: Synchronizes cached writes to persistent storage.
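The read-only subset of these commands is safe to try anywhere; a small sketch (output varies by system, and `lsblk` may print nothing inside a container):

```shell
df -h /                                   # free space on the root file system
du -sh /usr/share 2>/dev/null || true     # total size of a directory tree
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null || true   # block devices
sync                                      # flush cached writes to disk
```

Partitioning (`fdisk`, `parted`), `mkfs`, and `fsck` are intentionally left out here: they modify disks and should only be run deliberately, as root, against the right device.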

5. Networking:

  • ifconfig: Configures network interfaces (deprecated, replaced by ip).
  • ip: Manages network interfaces, routing, and more.
  • ping: Tests network connectivity to a host.
  • netstat: Displays network connections, routing tables, and more (replaced by ss).
  • ss: Provides detailed socket statistics.
  • traceroute: Shows the path packets take to reach a host.
  • nslookup: Queries DNS to obtain domain name or IP address mapping.
  • dig: Performs DNS lookups.
  • scp: Securely copies files between hosts over the network.
  • rsync: Synchronizes files and directories between two locations over a network.
  • iptables: Configures firewall rules (replaced by nftables on many systems).
  • nft: Manages packet filtering and network address translation (NAT).
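A few safe, read-mostly commands to inspect the network stack, assuming iproute2 is installed (it is on most modern distros; `dig` and `traceroute` often ship in separate packages such as dnsutils):

```shell
ip -brief address                 # every interface, its state, and addresses
ip route show                     # kernel routing table
ss -tuln                          # listening TCP/UDP sockets, numeric ports
ping -c 3 127.0.0.1 || true       # reachability test (may be blocked in containers)
# DNS and path tracing, if the tools are installed:
#   dig example.com +short
#   traceroute example.com
```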

6. User and Group Management:

  • useradd: Adds a new user.
  • usermod: Modifies user account properties.
  • userdel: Deletes a user account.
  • groupadd: Adds a new group.
  • groupmod: Modifies a group.
  • groupdel: Deletes a group.
  • passwd: Changes the password for a user.
  • who: Shows who is logged in.
  • w: Shows detailed information about who is logged in and what they are doing.
  • last: Displays the last logins of users.
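The query commands below are safe to run as any user; the account-management commands are shown as comments because they require root (the `alice` and `developers` names are hypothetical):

```shell
who                                # current login sessions
id                                 # UID, GID, and groups of the current user
getent passwd root                 # look up one account in the user database
last -n 5 2>/dev/null || true      # recent logins (needs a wtmp file)
# Account changes require root, for example:
#   sudo useradd -m -s /bin/bash alice     # create user with a home directory
#   sudo usermod -aG developers alice      # add to a supplementary group
#   sudo passwd alice                      # set the password
#   sudo userdel -r alice                  # delete the account and its home dir
```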

7. System Monitoring and Performance:

  • uptime: Shows system uptime and load averages.
  • free: Displays memory usage.
  • vmstat: Reports virtual memory statistics.
  • iostat: Reports CPU and I/O statistics.
  • sar: Collects and reports system activity information.
  • dmesg: Prints kernel ring buffer messages (used for debugging hardware and kernel issues).
  • lsof: Lists open files and the processes using them.
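A quick health-check sketch using the lighter tools from this list (assumes the procps package, which provides `uptime`, `free`, and `vmstat`; `dmesg` may need root):

```shell
uptime                              # time up, users, and 1/5/15-minute load averages
free -h                             # memory and swap usage, human-readable
vmstat 1 2 2>/dev/null || true      # two one-second samples of CPU/memory stats
dmesg 2>/dev/null | tail -n 5 || true   # recent kernel ring buffer messages
```

`sar` and `iostat` (from the sysstat package) are the next step when you need historical data rather than a point-in-time snapshot.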

8. Package Management:

  • Debian-based systems (e.g., Ubuntu):
    • apt: Manages packages (install, remove, update).
    • apt-get: Another package manager (more script-friendly).
    • dpkg: Manages individual Debian packages.
  • Red Hat-based systems (e.g., RHEL, CentOS):
    • yum: Manages packages (install, update, remove) (replaced by dnf).
    • dnf: Next-generation package manager for Red Hat-based systems.
    • rpm: Manages individual RPM packages.
  • Arch-based systems (e.g., Arch, Manjaro):
    • pacman: Manages packages for Arch Linux and its derivatives.

9. Service and System Management:

  • systemctl: Controls system services (start, stop, restart, enable, disable).
  • service: Manages services (older systems, replaced by systemctl).
  • journalctl: Views and manages systemd logs.
  • crontab: Schedules periodic tasks (cron jobs).
  • at: Schedules one-time tasks.
  • shutdown: Shuts down or restarts the system.
  • reboot: Reboots the system.
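As an illustration of the crontab schedule format, each line has five time fields (minute, hour, day of month, month, day of week) followed by the command to run. The script paths below are hypothetical examples:

```
# min   hour  dom  mon  dow   command
  0     2     *    *    *     /usr/local/bin/backup.sh         # daily at 02:00
  */15  *     *    *    *     /usr/local/bin/healthcheck.sh    # every 15 minutes
  30    4     *    *    0     /usr/local/bin/weekly-report.sh  # Sundays at 04:30
```

Edit your table with `crontab -e` and review it with `crontab -l`.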

10. Security and Permissions:

  • sudo: Executes commands as another user (usually root).
  • su: Switches to another user (e.g., root).
  • gpasswd: Administers /etc/group (group management).
  • iptables/nft: Configures firewall rules.
  • semanage: Manages SELinux settings (on systems with SELinux).

11. Archiving and Compression:

  • tar: Archives multiple files into a single file and optionally compresses them.
  • gzip: Compresses files using the gzip algorithm.
  • gunzip: Decompresses files compressed with gzip.
  • zip: Compresses files into a ZIP archive.
  • unzip: Extracts files from a ZIP archive.
  • bzip2/bunzip2: Compresses and decompresses files using the bzip2 algorithm.
  • xz: Compresses files using the xz algorithm.
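A round-trip sketch with `tar` and `gzip`, run in a scratch directory (the `logs` directory and its contents are made up for the example):

```shell
workdir=$(mktemp -d); cd "$workdir"
mkdir logs && echo "entry one" > logs/app.log
tar -czf logs.tar.gz logs         # archive the directory and gzip it in one step
tar -tzf logs.tar.gz              # list the contents without extracting
rm -r logs
tar -xzf logs.tar.gz              # extract it again
gzip logs/app.log                 # compress a single file -> app.log.gz
gunzip logs/app.log.gz            # and decompress it back
cat logs/app.log                  # prints "entry one"
```

The same `tar` flags work with `-j` (bzip2) or `-J` (xz) in place of `-z` when you want a different compression algorithm.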

12. Backup and Recovery:

  • rsync: Efficiently synchronizes files and directories between locations.
  • dd: Creates low-level copies of data, useful for backup and disk imaging.
  • tar: Archives files for backup purposes.
  • cpio: Copies files into and out of archives.
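A small, safe demonstration of `dd` as a byte-for-byte copier, using a file instead of a real disk (the device and backup paths in the comments are illustrative only):

```shell
workdir=$(mktemp -d); cd "$workdir"
dd if=/dev/zero of=disk.img bs=1M count=4 status=none   # create a 4 MiB "disk"
dd if=disk.img of=disk.img.bak bs=1M status=none        # byte-for-byte copy
cmp disk.img disk.img.bak && echo "copies match"
# On a real system (root required), the same idea images a whole device:
#   dd if=/dev/sda of=/backup/sda.img bs=4M status=progress
# while rsync keeps a directory tree in sync incrementally:
#   rsync -avh --delete /data/ /backup/data/
```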


These are just some of the commonly used Linux commands for managing different aspects of the system. Depending on your Linux distribution and the tasks you're performing, you might encounter additional tools specific to that system.

 #> echo "Thank You :)"

Tuesday, July 23, 2024

The Installation failed in the FIRST_BOOT phase with an error during SYSPREP operation

 Hi there,

Recently, I was working on a Windows Server 2012 R2 upgrade to Windows Server 2019. Some of the servers failed to upgrade at 51% with the error message "The Installation failed in the FIRST_BOOT phase with an error during SYSPREP operation".





After digging further into the setup logs ($WINDOWS.~BT\Sources\Panther\setupact.log), I found these lines (below screenshot) at the end of the log file, which clearly indicate that the setup was failing to upgrade the RD Web component.

The log file (setupact.log) is accessible from the C:\ drive. You will be able to find it under a hidden folder - $WINDOWS.~BT




To make it work, I had to remove the RDS-Web-Access feature using the below PowerShell command:

Remove-WindowsFeature -Name RDS-Web-Access

After removing the RDS-Web-Access feature, the upgrade completed successfully.


Note: You might have to reconfigure the RD Web page or restore it from an old backup. (Please comment if you found an alternative option to restore the web form.)


 #> echo "Thank you :)"





Friday, May 31, 2024

Cloud secure data lifecycle - "security follows the data"

Data is the most vital component of any system, including cloud environments. Understanding cloud data concepts is critical if you want to secure cloud-based systems.

The figure shows the cloud secure data lifecycle, and its steps are described in the following list.

  • Create: The Create phase covers any circumstance where data is “new.” This new data can be freshly generated content, imported data that is new to the cloud environment, or data that has been modified/updated and has a new shape or state. The Create phase presents the greatest opportunity to classify data according to its sensitivity, ensuring that the right security controls are implemented from the beginning. Decisions made during this phase typically impact the data throughout the entire lifecycle.
Aside from data classification, it’s also important at this stage to consider tagging data with any important attributes, as well as assigning proper access restrictions to the data. Again, what you do during the Create phase usually travels with the data through each of the other phases. So, extra thought should be given to how the created data needs to be managed throughout its lifecycle.

  • Store: The Store phase often happens in tandem with (or immediately after) the Create phase. During this phase, the created or modified data is saved to some digital repository within the application or system. Storage can be in the form of saved files on a filesystem, rows and columns saved to a database, or objects saved in a cloud storage system.
During the Store phase, the classification level assigned during creation is used to assign and implement appropriate security controls. Controls like encryption (at rest), Access Control Lists (ACLs), logging, and monitoring are important during this phase. In addition, this phase is when you should consider how to back up your data to maintain redundancy and availability appropriately. 

  • Use: The Use phase includes any viewing, processing, or consumption of data that was previously in the Store phase. For this model, the Use phase is considered read-only and does not include any modification. (Modifications are covered in the Create phase.)

One important consideration during this phase is that data must be unencrypted while in use. For this reason, the Use phase presents some of the greatest threats to data, if not properly secured. File access monitors, logging and monitoring, and technologies like Information Rights Management (IRM) are important to detect and prevent unauthorised access during the Use phase.

  • Share: During the Share phase, data is made available for use by others, such as employees, customers, and partners. As it’s shared, data often traverses a variety of public and private networks and locations and is subjected to various unique threats along the way. Proper encryption (in transit) is important during this phase, as well as IRM and Data Loss Prevention (DLP) technologies that help ensure sensitive data stays out of the wrong hands.
  • Archive: The Archive phase involves data transitioning from active use to long-term “cold” storage. Archiving can entail moving data from a primary storage tier to a slower, less redundant tier that is less expensive or can include moving data off the cloud to a separate medium altogether (backup tape, for example).
Most data is eventually archived after it’s no longer needed regularly. Once archived, the data must be secured and also remain available for retrieval, when necessary. Legal and regulatory requirements must be carefully considered during the Archive phase, as these requirements may influence how long specific data is required to be stored.

  • Destroy: The final phase of the data lifecycle is the Destroy phase. Destroying data involves completely removing it from the cloud using logical erasure or physical destruction (like disk pulverising or degaussing). In cloud environments, customers generally have to rely on logical destruction methods like crypto-shredding or data overwriting. Still, many CSPs have processes for physical destruction, per contractual agreements and regulatory requirements.

 #> echo "Thank you :)"

Thursday, March 28, 2024

Identifying Information Security fundamentals

  • Pillars of Information Security
Information security is the practice of protecting information by maintaining its confidentiality, integrity, and availability. These three principles form the pillars of information security, and they’re often referred to as the CIA triad.

    • Confidentiality

Confidentiality entails limiting access to data to authorized users and systems. In other words, confidentiality prevents the exposure of information to anyone who is not an intended party. 

The concept of confidentiality is closely related to the security best practice of least privilege, which asserts that access to systems or information should only be granted on a need-to-know basis.  To enforce the principle of least privilege and maintain confidentiality, you must classify (or categorize) data by its sensitivity level. 


    • Integrity

Integrity involves maintaining the accuracy, validity, and completeness of information and systems. It ensures that data is not tampered with by anyone other than an authorized party for an authorized purpose. 

A checksum is a value derived from a piece of data that uniquely identifies that data and is used to detect changes that may have been introduced during storage or transmission. Checksums are generated based on cryptographic hashing algorithms and help you validate the integrity of data. 
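The checksum idea is easy to demonstrate with the `sha256sum` tool found on most Linux systems; this sketch stores a baseline hash, then shows the verification failing after the data is tampered with (file names are made up for the example):

```shell
workdir=$(mktemp -d); cd "$workdir"
echo "important record" > data.txt
sha256sum data.txt > data.sha256      # store the baseline checksum
sha256sum -c data.sha256              # verifies cleanly: "data.txt: OK"
echo "tampered" >> data.txt           # simulate an unauthorized change
sha256sum -c data.sha256 || echo "integrity check failed"
```

Even a one-character change produces a completely different hash, which is what makes checksums effective for detecting tampering or corruption.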

    • Availability

Availability is all about ensuring that authorized users can access required systems and data when and where they need it.

One of the most common attacks on availability is Distributed Denial of Service, or DDoS, which is a coordinated attack by multiple compromised machines disrupting a system’s availability. Another common and rapidly growing attack on availability is ransomware, which involves an attacker blocking a system or data owners from accessing their systems and data until a sum of money is paid. 

  • Security controls

You know all about confidentiality, integrity, and availability — that’s great! Now, how do you enforce those concepts in your systems? A security control is a specific mechanism or measure implemented to safeguard systems or assets against potential threats and vulnerabilities — said another way, security controls protect the confidentiality, integrity, and availability of your systems and data.

Security controls can be categorized in a couple of ways: by their type and by their function. 

Types of security control include: 

      • Technical controls use technology (shocking, I know!) to protect information systems and data. Things like firewalls, data loss prevention (DLP) systems, and encryption fall under this category.
      • Physical controls involve the use of physical measures to protect an organization’s assets. This can include things like doors, gates, surveillance cameras, and physical disposal of sensitive information (including things like shredding and degaussing).
      • Administrative controls include the set of policies, procedures, guidelines, and practices that govern the protection of systems and data. This can be anything from incident response plans (see later in this chapter) to security awareness training.
Functions of security controls include:

      • Preventative controls keep negative security events from happening. This includes things like security awareness training, locked doors, and encryption.
      • Detective controls identify negative security events when they do happen. Examples of detective controls include log monitoring and video surveillance.
      • Corrective controls fix or reduce damages associated with a negative security event and may include measures to prevent the same negative event from happening again. Backups and system recovery features are the most common examples of corrective controls.
  • Threats, Vulnerabilities, and Risks
Threats, vulnerabilities, and risks are interrelated terms describing things that may compromise the pillars of information security for a given system or asset (the thing you’re protecting).
 
    • Threats

A threat is anything capable of intentionally or accidentally compromising an asset’s security. Some examples of common threats include: 

      • Natural disasters: Earthquakes, hurricanes, floods, and fires can cause physical damage to critical infrastructure, leading to loss of connectivity, power outages, or even destruction of systems.
      • Malware: Malicious software such as viruses, worms, and ransomware can infect systems, steal data, or disrupt normal business operations.
      • Phishing attacks: Deceptive emails, messages, or websites designed to trick individuals into revealing sensitive information like passwords, credit card numbers, or personal data.
      • Denial of service (DOS) attacks: Deliberate attempts to overload a network, server, or website with excessive traffic, making it unavailable to legitimate users.
    • Vulnerabilities
A vulnerability is a weakness or gap existing within a system; it’s something that, if not taken care of, may be exploited to compromise an asset’s confidentiality, integrity, or availability. Examples of vulnerabilities include: 
      • Unpatched software: Failure to install updates or patches for operating systems, applications, or firmware, leaving security vulnerabilities open to exploitation.
      • Lack of environmental protection: Missing or faulty fire suppression systems or other physical protections, leaving infrastructure vulnerable to natural disasters and other environmental threats.
      • Insecure passwords: The use of commonly used or otherwise weak passwords makes it easier for attackers to gain unauthorized access to accounts or systems.
      • Untrained employees: Lack of security awareness training for employees and system users, leaving them susceptible to phishing attacks.
Threats are pretty harmless without an associated vulnerability, and vice versa. A good fire detection and suppression system gives your data centre a fighting chance, just like (you hope) thorough security awareness training for your organization’s employees will neutralize the threat of an employee clicking on a link in a phishing email. 
 
    • Risks

Risk describes the potential for damage to, or loss of, an asset. Risk = Threat x Vulnerability. This simple equation is the cornerstone of risk management.

Some examples of risks include: 

» A fire wipes out your data centre, making service unavailable for five days.

» A hacker steals half of your customers’ credit card numbers, causing significant reputational damage to your company.

» An attacker gains root privilege through a phishing email and steals your agency’s Top Secret defence intelligence.

  • Identity and Access Management (IAM)

IAM consists of four key elements: identification, authentication, authorization, and accountability. 

    • Identification is the act of establishing who (or what) someone (or something) is. In computing, identification is the process by which you associate an entity (i.e., a system or user) with a unique identity or name, such as a username or email address.
    • Authentication takes identification a step further and validates a user’s identity. During authentication, you answer the question “Are you who you say you are?” before authorizing access to a system.
Authenticators generally fit into one of three factors (or methods): 
      • Something you know: Passwords and PINs (Personal Identification Numbers) fall into this category.
      • Something you have: Security tokens and smart cards are examples of this factor.
      • Something you are: Examples of this factor include fingerprints, iris scans, voice analysis, and other biometric methods.
    • Authorization is the process of granting access to a user based on their authenticated identity and the policies you’ve set for them.
    • Accountability involves assigning and holding an entity responsible for its actions within an information system. Accountability requires establishing unique user identities, enforcing strong authentication, and maintaining thorough logs to track user actions.

  • Encryption and decryption

Encryption is the process of using an algorithm (or cipher) to convert plaintext (or the original information) into ciphertext. The ciphertext is unreadable unless it goes through the reverse process, known as decryption, which then allows an authorized party to convert the ciphertext back to its original form using the appropriate encryption key(s).  

Types of encryption

Encryption can either be symmetric-key or asymmetric-key. The two encryption types function very differently and are generally used for different applications. 

    • Symmetric-key encryption (sometimes referred to as secret-key encryption) uses the same key (called a secret key) for both encryption and decryption (see Figure). Using a single key means the party encrypting the information must give that key to the recipient before they can decrypt the information. The secret key is typically sent to the intended recipient as a message separate from the ciphertext. Symmetric-key encryption is simple, fast, and relatively cheap.
A notable drawback of symmetric-key encryption is that it requires a secure channel for the initial key exchange between the encrypting party and the recipient. If your secret key is compromised, the encrypted information is as good as posted on a billboard.
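Symmetric encryption is easy to see in action with the OpenSSL command-line tool (requires OpenSSL 1.1.1+ for the -pbkdf2 option; the passphrase and message below are made-up examples). Note how the same passphrase is used for both operations:

```shell
workdir=$(mktemp -d); cd "$workdir"
echo "meet at dawn" > plain.txt
# Encrypt with AES-256-CBC, deriving the key from a shared passphrase
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:MySecretKey \
    -in plain.txt -out cipher.bin
# Decrypt with the SAME passphrase; prints "meet at dawn"
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:MySecretKey \
    -in cipher.bin
```

Anyone who learns `MySecretKey` can decrypt `cipher.bin`, which is exactly the key-distribution problem described above.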

                                

    • Asymmetric-key encryption (more commonly known as public-key encryption) operates by using two keys — one public and one private. The public key, as you might guess, is made publicly available for anyone to encrypt messages. The private key remains a secret of the owner and is required to decrypt messages that come from anyone else (see Figure). Although public-key encryption is typically slower than its counterpart, it removes the need to secretly distribute keys and also has some very important uses.
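The two-key arrangement can likewise be sketched with OpenSSL: generate a private key, derive the public key from it, then encrypt with the public half and decrypt with the private half (file names and the message are illustrative):

```shell
workdir=$(mktemp -d); cd "$workdir"
# Generate a 2048-bit RSA private key, then derive the public key from it
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem
echo "for your eyes only" > msg.txt
# Anyone holding pub.pem can encrypt...
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in msg.txt -out msg.enc
# ...but only the holder of priv.pem can decrypt
openssl pkeyutl -decrypt -inkey priv.pem -in msg.enc
```

In practice public-key encryption usually protects only a small symmetric session key, and the bulk data is encrypted symmetrically — the hybrid approach used by TLS.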

Common uses of encryption: data protection (at rest and in transit), authentication and authorization, network security (TLS), digital signatures, VPNs, crypto-shredding, etc.

  • Importance of Business Continuity and Disaster Recovery

Business continuity (BC) refers to the policies, procedures, and tools you put in place to ensure critical business functions continue during and after a disaster or crisis. The goal of business continuity is to allow essential personnel the ability to access important systems and data until the crisis is resolved. 

DR is the part of BC focused on restoring full operation of and access to hardware, software, and data as quickly as possible after a disaster.  

Business continuity broadly focuses on the procedures and systems you have in place to keep a business up and running during and after a disaster. Disaster recovery more narrowly focuses on getting your systems and data back after a crisis hits. 

You need to be aware of a couple important related metrics: 

» Recovery Time Objective (RTO) is the amount of time within which business processes must be restored in order to avoid significant consequences associated with the disaster. In other words, RTO answers the question “How much time can pass before an outage or disruption has unacceptably affected my business?”

» Recovery Point Objective (RPO) is the maximum amount of data loss that’s tolerable to your organization. It answers the question “How much data can I lose before my business is unacceptably impacted by a disaster?” RPO plays an important role in determining frequency of backups.


 #> echo "Thank you :)"

Thursday, February 1, 2024

How to Enable Microsoft 365 Unified Audit Log

  The Unified Audit Log, as the name implies, is a log file in which different activities performed in and through Microsoft 365 are recorded. 

Including the entire list would take up too much space. Still, the information within the log includes, amongst other things: user and admin activity in Exchange Online, SharePoint Online, OneDrive for Business, Power BI, Microsoft Teams, Stream, Power Apps, etc.

The log contains a lot of useful information that you can use for various activities related to your security operations. It can be used to:

  1. Monitor user behaviour and detect suspicious activities
  2. Perform forensic investigations into actions related to an incident
  3. Monitor specific use cases in your environment through various platforms like Microsoft 365 Defender, Microsoft Defender for Cloud Apps, Azure Monitor, and Microsoft Sentinel.

To enable it through the Microsoft Security Center, navigate to Audit. If log search is turned off, this option will be presented, and turning it on is as easy as clicking Turn on auditing.

Alternatively, connect to Exchange Online PowerShell and run the following script. It checks whether the log is already enabled. If it isn’t, it will do so.


if ((Get-AdminAuditLogConfig).UnifiedAuditLogIngestionEnabled -ne $true) {
    Write-Host "Enabling the Unified Audit Log."
    Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
} else {
    Write-Host "The Unified Audit Log was already enabled."
}