This chapter covers the following A+ 220-1002 exam objectives:
• 4.1 – Compare and contrast best practices associated with types of documentation.
• 4.2 – Given a scenario, implement basic change management best practices.
• 4.3 – Given a scenario, implement basic disaster prevention and recovery methods.
Welcome to the first chapter of Domain 4.0: Operational Procedures. While this domain comprises the smallest percentage of the exam, it’s not by much. So as with all of the domains, it is important that you understand the content.
Now we’ll be shifting gears to the organizational, operational, and facilities side of things, so prepare for a bit of a different mindset. Because this is an A+ exam, we won’t be going very deep into operational procedures, but you should know the basics.
In this chapter we will cover the fundamentals of documentation, change management, and disaster recovery. As you progress to other certifications, and if you progress into management, these concepts become more crucial.
ExamAlert
Objective 4.1 concentrates on: network topology diagrams, knowledge base/articles, incident documentation, regulatory and compliance policy, acceptable use policy, password policy, and inventory management.
Proper documentation is a key element of any organization. Without it, we have chaos. With it, we can at least bring some semblance of order to our networks, policies, and decisions. For us techs, the most important reason to have solid documentation is that it helps us to troubleshoot problems. If one person on the team documents properly, it becomes that much easier for everyone else who later needs to access that information. If everyone documents well, it means increased productivity for the entire team. And one other thing: leave it better than you found it. That means if something is not accurate, make it so. Others will thank you, and you never know, you might thank yourself one day. (We all know that we technicians talk to ourselves sometimes!)
To develop quality network documentation, an administrator should use network diagramming software, perhaps in conjunction with network mapping software. A good network diagram should show how computers and network devices are connected together—their topology so to speak. Figure 39.1 shows a basic example of a network diagram.
Figure 39.1 Network Diagram
In the figure you can see network switches, a couple of SOHO routers, a workstation, a cable modem, and the cloud/Internet. A topology is just one way of documenting the network; it doesn’t show where the systems are, but it shows how they are connected. For example, there is a master switch that connects out to the Internet and has two other connections to separate firewalled LANs. My main workstation, AV-Editor, has access to both networks because it is a multi-homed computer, meaning it has two NICs. In general, we’re not overly concerned with the client computers, but particular workstations might be important to list in the network diagram.
Figure 39.1 displays more of a high-level logical topology diagram: IP addresses used by the LAN and certain systems, and what devices groups of computers are connected to. However, you might get a little more detailed about the individual ports on switches and the actual physical connections; at that point, it might be referred to as a physical network diagram. It all depends on what you are focusing on: the physical or the logical, or both.
You can build your own network diagram with tools such as Microsoft Visio or ConceptDraw, or use network mapping software that will automatically search the network for hosts, including servers, routers, and switches; for example, SolarWinds’ Network Topology Mapper. Combine the two and you can come up with some pretty powerful network documentation.
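Though the exam won’t ask you to script it, it can help to think of a topology as simple data. The following is a minimal Python sketch that records a layout similar to Figure 39.1 as an adjacency list; the device names are hypothetical stand-ins for whatever your diagram shows.

```python
# Hypothetical sketch: a small network topology recorded as an adjacency list.
# Device names are made up to roughly mirror Figure 39.1; a real diagram would
# normally live in Visio/ConceptDraw or come from network mapping software.

topology = {
    "MasterSwitch": ["CableModem", "Switch-LAN1", "Switch-LAN2"],
    "Switch-LAN1": ["Firewall-1", "AV-Editor"],
    "Switch-LAN2": ["Firewall-2", "AV-Editor"],  # AV-Editor is multi-homed (two NICs)
    "CableModem": ["Internet"],
}

def print_links(topo):
    """List each documented link in the topology."""
    for device, neighbors in sorted(topo.items()):
        for neighbor in neighbors:
            print(f"{device} <-> {neighbor}")

print_links(topology)
```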
ExamAlert
Network topology diagrams identify network components and how they are physically and/or logically connected together.
You might also opt to use a spreadsheet to sort computers by name or IP address. Some companies use virtual notebooks or custom-made Wikis for their documentation to supplement a network diagram. And of course, there are plenty of vendors that offer network documentation software solutions.
The whole point is to have solid documentation that you can refer to in case there is a problem, or if you need to reconfigure the network or add or remove components from it.
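If you do keep a simple spreadsheet-style inventory, sorting by IP address is one of those tasks that is easy to get wrong with a plain text sort. Here is a small Python sketch (with hypothetical host names and addresses) that sorts hosts numerically using the standard ipaddress module.

```python
# Hypothetical sketch: sorting a host inventory by IP address. A plain string sort
# would place 10.0.0.10 before 10.0.0.9; the ipaddress module sorts numerically.
import ipaddress

hosts = [
    {"name": "AV-Editor", "ip": "10.0.0.25"},
    {"name": "FileServer", "ip": "10.0.0.9"},
    {"name": "Switch-LAN1", "ip": "10.0.0.10"},
]

for host in sorted(hosts, key=lambda h: ipaddress.ip_address(h["ip"])):
    print(f'{host["ip"]:<12} {host["name"]}')
```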
Well, I’ve been referencing knowledge base articles throughout the book, especially Microsoft-related articles. But a knowledge base is more than articles written by a company; the information is also spread amongst community support, forums and blogs. Regardless, it’s important to know where and how to find the information you seek.
The “where” I can answer with this: Go to the source! I say it often—use the websites created by the manufacturers of the hardware and the developers of the software. For example, if you are supporting Windows 10 clients, use the Microsoft support sites. If you are using Western Digital hard drives, use the Western Digital support site. Remember this when using an Internet search engine. Often, you will get third-party results which may or may not contain accurate data. So always start by going to the source.
The “how” might differ depending on which vendor’s site you visit, but for the most part, they are internally searchable by phrase or by knowledge base (KB) number. Once you get the knack for searching, you can learn how to do almost everything with a product using the support website, from installation and configuration to security and troubleshooting. Let’s give a couple of examples, starting with Microsoft since it is so prevalent on the A+ exams.
The Microsoft Knowledge Base is spread among multiple websites and has hundreds of thousands of articles and posts from Microsoft employees and from the Microsoft “community”. To search the knowledge base, simply go to one of the sites listed below and type in the search term or knowledge base article number.
• Microsoft Support: https://support.microsoft.com. This is the main support site that Microsoft offers for end-users and for IT professionals. Over the years, a lot of content from other Microsoft sites has been redirected here. Also, Microsoft has moved away from the term “Knowledge Base” to a certain degree, and often uses terms such as “help” or “support” instead. For example, this link: https://support.microsoft.com/en-us/help/322756 demonstrates how to back up and restore the registry in Windows. In the past, this article would have been called KB #322756, and it is still searchable that way, but Microsoft has moved toward more easily searchable URLs as you will see if you follow the previous link. (Once you access the link, the post name is appended to the URL, which is what makes it more search engine friendly.) You’ll find in your journeys that you sometimes end up at docs.microsoft.com as well.
• Microsoft TechNet: https://technet.microsoft.com. Historically, this was the support site designed with the IT professional in mind. The Microsoft Knowledge Base can be found here: https://technet.microsoft.com/en-gb/ms772425.aspx. From this location, you can search for solutions within a mini-search engine or by the Knowledge Base (KB) article number, plus there is community support, labs, a Wiki, and blogs. However, a lot of content has been redirected over to support.microsoft.com over the years, so be ready to search both.
Here are a couple of other examples:
• Apple: https://support.apple.com/. Here you can find support articles and community support for Apple products including macOS-based systems and iOS-based mobile devices.
• Android: https://support.google.com/android/. Here you can learn about the Android OS, and also redirect to the major manufacturers that use it on their mobile devices.
• Intel: https://www.intel.com/content/www/us/en/programmable/support/support-resources/knowledge-base. This KB contains articles, posts, and discussions about all of Intel’s products. They have a separate developer KB as well.
• Western Digital: https://support.wdc.com/knowledgebase/. This site supplies written articles for the various hard drives and other products that WD manufactures, along with community support.
Try accessing some of these links and spend some time searching around the knowledge bases so that you can get a feel for how they work. Think about some of the products and software that you use at home or at work and locate their support sites and knowledge bases. You will find that some companies have better support and KBs than others—some have superior technical documentation specialists, and a more efficiently structured community platform. Over time, this kind of product documentation often leads to a higher level of customer satisfaction as well as trusted name recognition. This is the model to follow if your organization currently maintains its own knowledge base, or decides to create one.
ExamAlert
Know how to research knowledge bases! If you can research within these, you can research pretty much anywhere.
Incident documentation is something that you maintain during the incident response process. It should be initiated at the onset of an event and continued through to its conclusion. If you know, or even suspect, that there is an incident, start recording all facts and information that you encounter. Use some type of logbook (hardcopy or digital, though I prefer hardcopy for this type of procedure), plus a mobile device with a camera, a digital camera, an audio recorder, or a combination of those to record all of the data that you can.
Incident documentation is just a piece of the incident response procedure. We’ll be discussing that in more depth in Chapter 41, “Incident Response, Communication, and Professionalism.”
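If you prefer a digital logbook, even a few lines of scripting can enforce the habit of timestamping every observation. The following Python sketch is illustrative only; the filename, the fields recorded, and the example entries are assumptions, and your incident response procedure dictates what actually must be captured.

```python
# Hypothetical sketch: a minimal digital incident logbook that appends timestamped
# entries to a text file. The filename, fields, and sample entries are made up.
from datetime import datetime, timezone

LOGFILE = "incident_log.txt"

def log_entry(technician, observation):
    """Append a single timestamped observation to the incident log."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with open(LOGFILE, "a", encoding="utf-8") as f:
        f.write(f"{stamp} | {technician} | {observation}\n")

log_entry("J. Tech", "Discovered unexpected outbound traffic from workstation WS-042.")
log_entry("J. Tech", "Photographed the screen; disconnected the network cable; system left powered on.")
```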
Compliance is the process of making sure an organization and its employees follow the policies, procedures, regulations, standards, laws, and ethical practices that have been written by, or apply to, the organization. In a nutshell, the resulting documentation is called a compliance policy. Most corporations have one, and they are usually quite similar. This documentation is available to all employees, and will often include principles of business conduct: for example, no discrimination, integrity in business dealings, fair competition, proper record keeping, environmental sustainability, cooperation with authorities, and so on. Additional documentation will detail how this is to be accomplished by way of policies and procedures. Generally, this type of documentation, or at least the overview, is publicly available via the Internet (as a PDF), and in print form.
There are organizations that create standardized policies and procedures, for example, the International Organization for Standardization (ISO). Companies that wish to follow these standards can do so and be certified as ISO-compliant for that particular standard. Examples include ISO 9001:2015 for quality management systems, and ISO/IEC 27002:2013 (titled Information technology – Security techniques) for information security controls. An organization has to be examined and accredited by an accrediting certification body to state that it is ISO certified. This is a rigorous process that a company should not take lightly. Also, keeping up with the standard can create a great deal of documentation and could possibly bog the company down in details and minutiae if it doesn’t have the appropriate compliance personnel. These personnel must be well trained in the day-to-day operations of a company and its procedures, have a strong understanding of information technology, and be well-versed in how to read, update, and publish technical documentation.
For a company that doesn’t have the necessary personnel or wherewithal to certify to, or use, the ISO standards, there are still individual guidelines that can be acquired, such as NIST SP 800-88 (Guidelines for Media Sanitization), which we spoke of in the security chapters. NIST has plenty of guidelines like this that an organization can use to model its IT infrastructure and overall security plan. All of this can be integrated with an organization’s documentation.
Regulatory policies of an organization attempt to achieve compliance with a government’s objectives through laws and regulations. Now we’re going beyond standards and moving into the realm of law. For example, in many organizations compliance people will confirm that certain laws are being followed, especially as they pertain to personally identifiable information (PII). For instance, the Privacy Act of 1974 (2015 edition) establishes a code of fair information practice, and the Sarbanes-Oxley Act (SOX) governs the disclosure of financial and accounting information. Most industries are regulated to some extent, so it falls to the compliance people to know a little bit about the law as well.
Note
As a technician, you should take a look at these regulatory laws to get a better idea of what is expected of a company, and what a company might expect from you and any other employees and contractors. Also, consider looking at some of the compliance management software suites available on the Internet.
Acceptable use policies (AUPs) define the rules that restrict how a computer, network, or other system may be used. They state what users are allowed to do when it comes to the technology infrastructure of an organization. Often, the AUP must be signed by the employees before they begin working on any systems.
This protects the organization, but it also defines to employees exactly what they should, and should not, be working on. If a director asks an employee to repair a system that is outside the AUP parameters, the employee would know to refuse. If employees are found working on a system that is outside the scope of their work, and they signed an AUP, it could be grounds for termination. As part of an AUP, employees enter into an agreement acknowledging they understand that the unauthorized sharing of data is prohibited. Also, employees should understand that they are not to take any information or equipment home without express permission from the various parties listed in the policy. The idea behind this is to protect the employee, the sensitive data (especially PII), the company systems (from viruses and network attacks), and the company itself (from legal action).
In Chapter 33, “Windows Security Settings and Best Practices,” we discussed some basic password policies that can be configured on a Windows client or server. However, there should be a written policy that states how passwords—and configured password policies—should function, and how they are implemented and used. This should be a part of the policies and procedures of an organization’s overall documentation. In fact, it should be planned and developed before any configuration of a system’s password policies. This document will state all of your rules for password configuration, usage, storage, and cryptographic hashing.
For example, as part of your new password policy plan, you might decide to set a high character limit and state that users can choose passwords of up to 64 characters. This might sound like a lot, but with several organizations (including NIST) recommending length over complexity, and NIST recommending longer passphrases over passwords, it can actually result in fewer forgotten passwords and more security due to length, which ultimately translates to bit strength. The mandatory minimum is even more important: at the very least 8 characters, and more if you are using passphrases. NIST also no longer necessarily recommends requiring special characters. The concept here is to increase security while fostering usability. Once again, we are looking for the balance between confidentiality and availability.
You might also state that users have to change passwords every three months, and might recommend checking for blacklisted (or pwned) passwords. As part of the document, you should describe what absolute secrecy is and state that employees need to abide by it for their own protection and for the organization’s sake. The written document should be well structured and be easy to read, with an overview, a scope of purpose, and procedures for the creation of passwords, the creation of policies, and the enforcement of the written policies. Keep in mind that the password is only one factor of authentication. It should be incorporated into a multi-factor authentication (MFA) scheme. Using MFA enhances security much more than just having a strong password policy, but both are important.
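To make the “length over complexity” idea concrete, here is a minimal Python sketch of a policy check. The minimum, maximum, and blacklist values are illustrative assumptions based on the plan described above, not values quoted from NIST.

```python
# Hypothetical sketch of a password-policy check favoring length over complexity.
# The limits and the blacklist below are illustrative assumptions, not values
# quoted from NIST SP 800-63.

MIN_LENGTH = 8    # absolute minimum; passphrases should be longer
MAX_LENGTH = 64   # allow long passphrases
BLACKLIST = {"password", "123456", "letmein", "qwerty"}  # e.g., known-breached values

def check_password(candidate):
    """Return a list of policy violations (an empty list means it passes)."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"Too short: must be at least {MIN_LENGTH} characters.")
    if len(candidate) > MAX_LENGTH:
        problems.append(f"Too long: must be at most {MAX_LENGTH} characters.")
    if candidate.lower() in BLACKLIST:
        problems.append("Appears on the blacklist of known-compromised passwords.")
    return problems

print(check_password("letmein"))                       # fails: too short and blacklisted
print(check_password("correct horse battery staple"))  # passes: long passphrase
```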
Note
Take a look at the NIST SP 800-63 document which delves into digital identity guidelines, including credentials such as passwords.
https://www.nist.gov/itl/tig/projects/special-publication-800-63
Inventory management, or should we say IT asset inventory management, is the supervision, tracking, and auditing of IT equipment within the organization’s infrastructure. All companies are at risk of technology sprawl—meaning the disorganization of IT equipment and software that can occur over time. To reduce this risk, an organization will use written and software-based documentation to track all assets. This includes tracking the lifecycle of client computers, servers, switches, routers, mobile devices, IoT devices, and other hardware, as well as tracking software that is installed, uninstalled, and updated. It also includes any items that are stored for later use. You might use asset tags for physically stored items. These could be written or printed tags, barcode stickers, or RFID tags. There are a variety of software packages available that can track all of this information. Most inventory tracking systems can read all of those types of tags, and can communicate with handheld wireless and USB-based devices used to scan the tags. This software is part of your overall technical documentation.
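Whether you buy inventory software or roll your own, the underlying data is just a set of asset records keyed by an asset tag. Here is a minimal Python sketch; the field names, statuses, and sample assets are hypothetical.

```python
# Hypothetical sketch: a minimal asset record of the kind inventory management
# software keeps. Field names, statuses, and the sample assets are made up.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Asset:
    asset_tag: str      # printed tag, barcode, or RFID identifier
    category: str       # e.g., laptop, switch, mobile device
    model: str
    purchased: date
    location: str
    status: str = "in service"  # e.g., in service, in storage, retired
    installed_software: list = field(default_factory=list)

inventory = [
    Asset("A-001234", "laptop", "Latitude 5520", date(2021, 3, 1), "Room 214",
          installed_software=["Windows 10 Pro", "Office 365"]),
    Asset("A-001235", "switch", "Catalyst 2960", date(2019, 8, 15), "MDF rack 2"),
]

# A quick audit pass: list everything still marked as in service, by location.
for asset in inventory:
    if asset.status == "in service":
        print(asset.asset_tag, asset.category, asset.location)
```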
ExamAlert
Asset tags and barcodes are used by inventory management systems and software to identify and keep record of company assets.
Documentation might also include things that you collect, such as licenses for software. For example, Microsoft has used the certificate of authenticity (COA) and the client-access license (CAL) for ages. These commercial licenses come with software that is purchased and they prove that the organization paid for the software or the additional client licenses to connect to that software (as is the case with Windows Server products). Many types of software use a standard end-user license agreement (EULA), which might be on paper or stored on the computer (or online) and might cover a single personal license or multiple commercial licenses.
Let’s not forget about the virtual side of things. VMs should be documented and tracked the same way that physical computers are. This VM management helps us to avoid virtualization sprawl.
Answer these questions. The answers follow the last question. If you cannot answer these questions correctly, consider reading this section again until you can.
1. You have been tasked with fixing a problem on a Windows Server. You need to find out which switch it connects to and how it connects. Which of the following types of documentation should you consult?
A. Microsoft Knowledge Base
B. Network topology diagram
C. Incident documentation
D. Compliance policy
E. Inventory management
2. You work for an enterprise-level organization that is certified as ISO 27002:2013. You have been tasked with adding a group of Windows client computers with a new image configuration to the IT asset inventory DB, which has a standard procedure. You must furnish a document to be signed off by two people. Who should you approach for signatures? (Select the two best answers.)
A. Your manager
B. Compliance officer
C. IT director
D. Owner of the company
E. CISO
3. What do inventory management systems and software use to keep track of assets? (Select the two best answers.)
A. Regulatory policies
B. AUPs
C. Asset tags
D. Barcodes
1. B. Use a network topology diagram (if one is available). This documentation should graphically map out what switch the server connects to and how. An automated network map would work as well. While the Microsoft Knowledge Base is great for answering questions about Windows Server, Microsoft has no way of knowing exactly how your organization has set up the network; nor do you want them to know—unless perhaps you initiate a tech support call to them for another issue. Incident documentation is used during the incident response process. Compliance policy deals with adhering to guidelines, standards, and possibly law. Inventory management will help you to find out things such as when the server was installed, and possibly where it is physically located, but the best documentation to find out how network devices and servers are connected is the network topology diagram.
2. A and C. Before you perform any work where ISO compliance requires signatures, always obtain the signature of your manager, and any other parties that should be aware of what you are about to do. In this case, the IT director (or other similar title) should be aware of anything substantial being added to the network as assets. You might also have a project manager, or someone in asset management or other departments sign off as well. If hardcopy, make copies and store the documents in the appropriate location. If digital, make sure that the signatures are properly validated and store the e-docs in the proper secure locations. The compliance officer need not be involved unless there is a change concerning processes and procedures—yes that would be a procedure to change a procedure. The owner of the company shouldn’t be bothered with these types of day-to-day operations, other than it should be part of your weekly report. Also, an enterprise-level company will more likely have a group of executives, instead of an owner. One of those might be the Chief Information Security Officer (CISO); however, this person will usually not be included, because the IT director will either report to that person directly or will be working closely with them.
3. C and D. Asset tags and barcodes are used by inventory management systems and software to identify and keep record of company assets. Regulatory policies of an organization attempt to achieve compliance with a government’s objectives through laws and regulations. Acceptable use policies (AUPs) state what users are allowed to do when it comes to the technology infrastructure of an organization.
ExamAlert
Objective 4.2 focuses on these concepts: documented business processes, purpose of the change, scope of the change, risk analysis, plan for change, end-user acceptance, change board, backup plan, and document changes.
Change management is a structured way of changing the state of a computer system, network, policy, procedure, or process. The idea behind this is that change is necessary, but an organization should adapt with that change, and be knowledgeable of it throughout its lifecycle. Any change that a person wants to make should be introduced to each of the leaders of the various departments that it might affect. Those personnel must approve the change before it goes into effect. Before this happens, department managers will most likely make recommendations and/or give stipulations. There might even be a committee involved. When the necessary people have signed off on the change, it should be tested and then implemented. During implementation, it should be monitored and documented carefully.
In a larger organization that complies with various certifications such as ISO 9001:2015, this whole process can be a complex task. IT people should have charts of personnel, project managers, and department heads. There should also be current procedures in place that show who needs to be contacted in the case of a proposed change.
The typical A+ technician doesn’t need to know all that much about change management, but should know how to work within a system and implement basic change management best practices. To that end, Table 39.1 gives a couple of definitions for change management terms that you should know for the exam. Let’s say there is a scenario where you as an IT technician see a need to update the firewall software for a group of client computers.
Table 39.1 Change Management Terms
ExamAlert
Know the change management terms and definitions for the exam.
Remember that some changes require more attention to change management than others. A basic change to a system might not even require a signature, or it might simply require a form template with a manager’s signature. But a more complex change that affects multiple systems and users will need a more developed change management approach. It might consist of stages, including planning, awareness, analysis and learning, and finally adoption. Keep an open mind. The point where advanced change management planning should occur, and the particular procedures and naming conventions used, will vary from one organization to the next.
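To tie the terms together, here is a bare-bones Python sketch of a change request record that carries the purpose, scope, risk analysis, and backout plan through an approval. The field names, stage values, and the firewall-update example are assumptions for illustration; real change management systems are far more involved.

```python
# Hypothetical sketch: a bare-bones change request record. Field names, the stage
# values, and the firewall-update example are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    purpose: str        # why the change is needed
    scope: str          # which systems and users are affected
    risk_analysis: str  # what could go wrong, and how it is mitigated
    backout_plan: str   # how to reverse the change if it fails
    approvals: list = field(default_factory=list)
    stage: str = "submitted"  # e.g., submitted, approved, implemented, documented

    def approve(self, approver):
        """Record an approval and advance the request."""
        self.approvals.append(approver)
        self.stage = "approved"

cr = ChangeRequest(
    purpose="Update firewall software on marketing client computers",
    scope="12 Windows 10 workstations in the Marketing OU",
    risk_analysis="Possible loss of connectivity during install; schedule off-hours",
    backout_plan="Reinstall the previous firewall version from the saved installer",
)
cr.approve("IT Director")
print(cr.stage, cr.approvals)
```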
Note
Here’s a Microsoft-related example strategy for change management:
https://docs.microsoft.com/en-us/microsoftteams/change-management-strategy
Note
The Cram Quiz at the end of the chapter covers the material for both objectives 4.2 and 4.3.
ExamAlert
Objective 4.3 concentrates on: backup and recovery, backup testing, UPS, surge protector, cloud storage vs. local storage backups, and account recovery options.
Don’t go looking for disasters; they will come looking for you—that is, if you don’t plan well and don’t incorporate fault tolerance and redundancy whenever possible. The more we secure our systems and provide redundancy, the more we reduce the risk of disaster. However, a disaster can still happen, and in the unlikely event that it does, we need to be ready. Be prepared with a disaster recovery plan (DRP).
The objective of a DRP is to ensure that an organization can respond quickly to an emergency and minimize the effects of the disaster on the organization, its employees, and its technology. It could be a simple one-page document (for small offices) or an entire set of documentation including profiles, processes, and procedures; more likely the latter.
Note
The following link leads to the NIST SP 800-34 Contingency Planning Guide. Study it, and also do a search for DRPs from large companies such as IBM.
https://csrc.nist.gov/publications/detail/sp/800-34/rev-1/final
For the A+ we are concerned with a few concepts within the realm of disaster prevention and disaster recovery: backup and recovery, cloud versus local backups, and account recovery. Let’s discuss those now.
Backing up data is critical for a company. It is not enough to rely on a fault-tolerant array of hard drives or other redundancy methods. Individual files or the entire system can be backed up to another set of hard drives, or to optical discs, or to tape. Windows 10/8 and Windows 7 use separate programs for backing up data. They are each accessed differently, but they work in similar ways. Let’s discuss File History and Windows Backup.
File History is a file backup program that can be accessed from the Control Panel. After turning it on, it automatically searches for accessible drives on the local computer or network that are potential candidates for backups. By default, it copies files from the Libraries location, Desktop, Contacts, and Favorites. You can select the copy destination that the File History program will use. You can also restore personal files from here as well. To initiate a file copy within the File History program:
1. Start File History by accessing Control Panel > File History. (If in Category mode of the Control Panel, go to System and Security > File History.)
2. Enable File History by clicking the Turn on button. That will automatically initiate a backup. Or click the Select a drive link to select or add a network location to back up to. Click OK when finished. This returns you to the main File History window and initiates the backup.
3. Subsequent backups can be made by clicking the Run now link or by selecting the Advanced Settings link and configuring when the files are to be saved.
If File History is no longer needed or desired, click the Turn off button.
In some cases, you might want to back up more than just personal files from specific locations, and you might want to back up the entire system. One way to do this is to use the System Image Backup option (linked to the bottom-left corner of the File History window). This is actually a recreation of the older Backup and Restore program from Windows 7 (located directly in the Control Panel in Windows 7). This program can create an image of your system drive and user data files, from which you can restore later on. You can also manually select additional information, such as the entire C: drive as shown in Figure 39.2. There are third-party imaging products as well (for example, Symantec Ghost). Many organizations prefer to use these.
ExamAlert
Know the difference between a file level backup and an image level backup.
Figure 39.2 Windows Backup screen with the C: drive selected
Larger companies will use more elaborate backup systems, which often back up to tape drives with large capacities, such as Linear Tape-Open (LTO). A typical LTO-8 tape can hold 12 TB of raw data. These drives come with their own programs that will allow you to select various types of backups and verify those backups in several ways. Two common backup methods are the full backup and the incremental backup.
Full backup: This method backs up the entire contents of a folder or drive, whichever is selected. The full backup can be stored on one or more tapes. If more than one is used, the restore process would require starting with the oldest tape and moving through the tapes chronologically one by one. Full backups can use a lot of space, causing a backup operator to use a lot of backup tapes, which can be expensive. Full backups can also be time-consuming if there is a lot of data. So, often, incremental (or differential) backups are used with full backups as part of a backup plan.
Incremental backup: This method backs up only the contents of a folder that has changed since the last full backup or the last incremental backup. An incremental backup must be preceded by a full backup. Restoring the contents of a folder or volume would require a person to start with the full backup tape and then move on to each of the incremental backup tapes chronologically, ending with the latest incremental backup tape.
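To make the difference concrete, here is a minimal Python sketch of the selection logic: a full backup copies everything, while an incremental backup copies only what changed since the previous backup. The paths are hypothetical, and real backup software typically tracks changes with the archive attribute or a backup catalog rather than simple timestamps.

```python
# Hypothetical sketch contrasting full and incremental backups. Paths are made up,
# and real backup software typically uses the archive attribute or a catalog
# rather than file modification times.
import os
import shutil
import time

SOURCE = r"C:\Data"
DEST = r"D:\Backups"

def backup(kind, last_backup_time=0.0):
    """Copy files from SOURCE to DEST. A 'full' backup copies every file; an
    'incremental' backup copies only files changed since last_backup_time.
    Returns the start time of this backup for the next incremental run."""
    started = time.time()
    for root, _dirs, files in os.walk(SOURCE):
        for name in files:
            src = os.path.join(root, name)
            if kind == "full" or os.path.getmtime(src) > last_backup_time:
                rel = os.path.relpath(src, SOURCE)
                dst = os.path.join(DEST, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
    return started

last = backup("full")               # e.g., a weekly full backup
last = backup("incremental", last)  # daily incrementals copy only what changed
```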
Windows Server has a built-in program called Windows Server Backup (wbadmin.msc). After adding it as a feature in Windows Server, you can then back up data how you wish, optimize the backup performance, and select either full or incremental backups for individual volumes, as shown in Figure 39.3.
Figure 39.3 Windows Server Backup screen set to incremental backup of the C: drive.
ExamAlert
Know how to use File History and Windows Server backup.
After a backup is complete it should be verified or validated in some way. Manufacturers of backup software and hardware solutions will usually include some kind of verification mechanism that you can select during the backup process. This will verify that the backup was written properly to the backup media.
However, this isn’t enough to satisfy a DRP. A backup operator needs to periodically test backups by actually restoring hand-picked backup jobs to test systems. This might seem like a shot in the dark, but you can logically select what to test by being included in the change management loop. Any substantial change proposal might require notifying the backup group so that the change can be tested by way of a new backup and restoration.
When initially backing up a system, such as a Windows server, that backup should be thoroughly tested via a restoration and an in-depth comparison of the original data to the restored data. But it goes further than that; restores should be tested on simulated systems with simulated failures. So, for example, if we are concerned that the server’s system drive (or array) could fail, then we could test that by setting up a test server with the same configuration and hard drive array and restoring the system data or image to that test system. Or, if the IT budget doesn’t allow for this, we could at least test it virtually. Quality virtualization software is a must in this case because it needs to emulate hardware appropriately.
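One simple way to perform that in-depth comparison is to hash every file in the original data set and in the restored copy and compare the results. The following Python sketch assumes hypothetical paths; most backup products offer their own verification option, but an actual test restoration proves more.

```python
# Hypothetical sketch: verify a test restoration by comparing SHA-256 hashes of
# the original data and the restored copy. The paths are made up for illustration.
import hashlib
import os

def hash_tree(base):
    """Return a {relative path: SHA-256 digest} map for every file under base."""
    digests = {}
    for root, _dirs, files in os.walk(base):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, base)] = hashlib.sha256(f.read()).hexdigest()
    return digests

original = hash_tree(r"D:\Data")
restored = hash_tree(r"E:\RestoreTest\Data")

missing = set(original) - set(restored)
mismatched = {p for p in original.keys() & restored.keys() if original[p] != restored[p]}
print("Missing from restore:", missing or "none")
print("Hash mismatches:", mismatched or "none")
```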
ExamAlert
Perform backup testing to ensure that backups are actually being performed and that data can actually be restored!
Note
See the following NIST links for guidelines on the backup (CP-9) and recovery (CP-10) of data. These are part of SP 800-53.
Backup: https://nvd.nist.gov/800-53/Rev4/control/CP-9
Recovery: https://nvd.nist.gov/800-53/Rev4/control/CP-10
Most of what we have discussed so far has been based on the backup of data to local storage. The beauty of local storage is that you own it. (Or your organization does!) That means that you can access it when you wish, it is physically available to you, and it can most likely be secured more easily. In addition, if there is a failure, the time to repair will usually be less than if you back up data to the cloud. Plus, simulations and testing can be run faster as well (in most cases). So being local has its advantages. However, it can be costly: servers, racks, tape drives, electricity, and so on can make an IT person wonder if backing up to the cloud is a better solution—and sometimes it is.
The big platforms such as Amazon Web Services (AWS), Azure, Google Cloud, and so on offer various cloud services plus storage, syncing, and backup solutions. These tend to be more secure than services such as Dropbox, OneDrive, and Google Drive because they are designed for business use, especially enterprise-level business, where security is paramount. The key is speed. We need to have a fast backup solution (and more importantly, a rapid restoration process) regardless of the location of the backup.
Regardless of the solution you use, the backups should be well documented, and the backup accounts should have strong passwords/passphrases. This is all part of a data backup strategy where we are concerned with having onsite backups (for easy restoration), offsite backups (for disastrous situations), backup testing, and an organized storage system that is properly documented.
The first thing to remember is this: Don’t delete accounts! Accounts may need to be accessed several years later for a variety of reasons. Instead of deletion, accounts should be disabled. If you refer back to Chapter 33, “Windows Security Settings and Best Practices,” Figure 33.1 shows the option to disable an account. Beyond this, archive old accounts to another location, and back up any account folders.
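Disabling can even be scripted. The following Python sketch simply calls the built-in net user command to deactivate a local account; the username is hypothetical, the command must be run from an elevated prompt, and domain accounts are normally handled through Active Directory tools instead.

```python
# Hypothetical sketch: disable (do not delete) a local Windows account using the
# built-in "net user" command. The username is made up; run from an elevated
# prompt. Domain accounts are normally disabled in Active Directory instead.
import subprocess

username = "jsmith"  # hypothetical departed employee

# /active:no disables the account so it cannot log on, but keeps it (and its SID).
subprocess.run(["net", "user", username, "/active:no"], check=True)
```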
Going a bit further, on the domain side of things you can protect objects from accidental deletion. For example, Figure 39.4 shows a user account within a Windows domain that has been protected from accidental deletion within the Object tab. On a Windows Server this tab is only accessible if you enable the viewing of Advanced Features. This technique is best used on accounts that exist within an OU—in the figure we are working within the Marketing OU.
Figure 39.4 Protecting a user account from accidental deletion
At some point accounts may need to be recovered. This might be as simple as a folder restoration, or it might be more involved if the account profile was corrupted.
Folder restoration implies that there is a backup of the user accounts. On a Windows client, user accounts (and their profiles) are stored in C:\Users. This entire folder can (and should) be backed up. On a Windows domain controller, accounts (such as admin accounts) are stored in C:\Users by default, but generally, you will be using roaming profiles for domain users, so in that case, the accounts are stored wherever you create the profiles folder—which should usually be on another partition, drive, or system altogether. An example of this is shown in Figure 31.1 in Chapter 31, “Physical and Logical Security.” Either way, those folders should be backed up.
When it comes time to restore the folders (if they have been accidentally deleted or were corrupted), restore from backup, copy the accounts to the appropriate folders, and then if necessary, re-create, or repair the user accounts within the appropriate user group or OU: for Windows client computers this is done in Local Users and Groups; for Windows domain controllers this is done in Active Directory Users and Computers. If necessary, set a profile path or copy profiles to new accounts. On a Windows domain controller, you can also use the Ntdsutil.exe command-line utility to incorporate the users.
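As a simple illustration of the folder-restoration step on a Windows client, here is a Python sketch that copies a restored profile back into C:\Users. The paths are hypothetical, the script must be run with administrative rights, and in practice you would still confirm the account in Local Users and Groups and fix permissions as described above.

```python
# Hypothetical sketch: copy a restored user profile folder back into C:\Users on a
# Windows client. Paths are made up; run with administrative rights, and still
# verify the account in Local Users and Groups and fix permissions afterward.
import shutil

restored_profile = r"E:\RestoredBackup\Users\jsmith"  # pulled from the backup set
target_profile = r"C:\Users\jsmith"

# copytree with dirs_exist_ok=True (Python 3.8+) merges into an existing folder.
shutil.copytree(restored_profile, target_profile, dirs_exist_ok=True)
print("Profile restored to", target_profile)
```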
ExamAlert
Know how to restore accounts in Windows.
In the case of corruption, you can attempt to copy the profile to a new account as we mentioned in Chapter 36, “Troubleshooting Microsoft Windows.” Sometimes, you might need to go a bit further and modify the registry and security identifiers (SIDs), and perform additional configuration, but this goes a bit beyond the A+ certification.
Note
You don’t want to get caught without a backup—it can seriously affect your job security. User accounts are at the top of the list when it comes to backups.
Note
We cover UPS and surge protector in Chapter 40, “Safety Procedures and Environmental Controls.”
Note
The following cram quiz combines this section and the previous section related to objectives 4.2 and 4.3.
Answer these questions. The answers follow the last question. If you cannot answer these questions correctly, consider reading this section again until you can.
1. In a change management board meeting you are discussing any vulnerabilities that can be mitigated as part of a recommended change and any that could potentially occur due to that change. What best describes what you are discussing?
A. Purpose of change
B. Scope of change
C. Backout plan
D. Document changes
E. Risk analysis
2. You have been tasked with backing up new user profiles in an enterprise environment. You propose to back up these user accounts to a new tape backup device. Which of the following procedures should you follow? (Select the two best answers.)
A. Change management
B. End-user acceptance
C. File History
D. Incremental backup
E. Backup testing
3. You have been contracted to perform some work at a small office. There is a problem with a Windows 10 computer and the user accounts folder has been corrupted. “Luckily” the company has a backup. Where should you restore the accounts to? (Select the two best answers.)
A. C:\Users
B. Ntdsutil
C. Active Directory Users and Computers
D. Local Users and Groups
E. ISO-compliant array
1. E. You are discussing risk analysis, which is the attempt to determine threats that could occur with computers and networks. Purpose of change is where you give a basic description of the change and why the change should come about. Scope of change is where you go into detail about what systems will be updated. The backout plan is a set of procedures that will quickly and efficiently reverse any failed changes. Documenting changes happens once approval is made; the technician should carefully document any changes that are made and when they are made.
2. A and E. Because this is a change (backing up to a new tape device), a change management document will probably be needed, listing procedures for usage of the new backup device, and the backup of the new accounts. Backup testing should be done often, or at least periodically, but definitely when it comes to new data, as is the case in this scenario with new user profiles. We are not concerned with the end-user acceptance aspect of change management because the users should not be affected by this—it should be transparent to them—but if we were, that would be part of change management. File History is a Windows 10/8 tool; in enterprise environments we would use Windows Server Backup or a comparable third-party tool. Because these are new user profiles, we would want to do a full backup, not an incremental backup.
3. A and D. First, you will have to restore from backup and copy the user accounts to the C:\Users folder. Then, you’ll need to make sure the accounts exist within Local Users and Groups. You might have to add them and then specify the profile path for the user, or perhaps copy profiles to new users. It depends on the scenario and the scope of the damage. Ntdsutil.exe and Active Directory Users and Computers are tools that work on Windows Servers, not Windows clients. A small office will probably not be ISO-compliant, nor will it use an array of hard drives to store user accounts; the accounts will most likely simply be stored on the Windows 10 client as they were before.