Sara Perrott

Sara Perrott is an information security professional with a systems and network engineering background. She teaches classes related to Windows Server, Amazon Web Services, networking, and virtualization. Sara addressed the AWS Imagine conference in 2018 and presented at the RSA conference in 2019.

Articles From Sara Perrott

9 results
Windows Server 2019 & PowerShell All-in-One For Dummies Cheat Sheet

Cheat Sheet / Updated 03-15-2022

PowerShell 5.1 is the current released version of Windows PowerShell and is the version that ships with Windows Server 2016 and Windows Server 2019. It is installed by default on these newer operating systems, but it’s also available for installation on Windows Server 2008 R2 with Service Pack 1, Windows Server 2012, and Windows Server 2012 R2. The last three operating systems must have Windows Management Framework 5.1 installed to support PowerShell 5.1.
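If you’re not sure which PowerShell version a server is running, you can check from a PowerShell prompt before deciding whether Windows Management Framework 5.1 is needed (a minimal sketch; nothing here is specific to the Cheat Sheet):

# Shows the engine version; 5.1.x means Windows PowerShell 5.1 is already installed.
$PSVersionTable.PSVersion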

Windows Server 2022 and PowerShell All-in-One For Dummies Cheat Sheet

Cheat Sheet / Updated 02-03-2022

PowerShell 5.1 is the version of Windows PowerShell that ships with Windows Server 2022, Windows Server 2019, and Windows Server 2016. It’s available for installation on Windows Server 2008 R2 with Service Pack 1, Windows Server 2012, and Windows Server 2012 R2. The last three operating systems must have Windows Management Framework 5.1 installed to support PowerShell 5.1. You can upgrade fairly easily to PowerShell 7.2 (the more recent version from Microsoft), though the examples on this Cheat Sheet were only tested in PowerShell 5.1.
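Because PowerShell 7.x installs side by side with Windows PowerShell 5.1 rather than replacing it, it helps to confirm which engine a session is using before trying the examples. A minimal sketch:

# Windows PowerShell 5.1 reports the 'Desktop' edition; PowerShell 7.x reports 'Core'.
$PSVersionTable.PSEdition
$PSVersionTable.PSVersion
# powershell.exe always starts Windows PowerShell 5.1; PowerShell 7.x starts with pwsh.exe.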

How to Install and Configure Hyper-V

Article / Updated 09-24-2019

Windows Server 2019 offers Hyper-V, a Type 1 hypervisor. Hyper-V is a role that gets installed on a Windows Server 2019 operating system. If you want to save time, you can also download and install Windows Hyper-V Server 2019, a standalone product that contains the Windows Hyper-V hypervisor, Windows Server drivers, and various virtualization components — the same tech that you get from installing the role. Here, you learn how to install Hyper-V from the role. The lab systems used for this installation are joined to the domain sometestorg.com.

Windows 10 also has a version of Hyper-V available that you can install. It’s a feature that can be enabled, and it allows you to support virtual machines, virtual networking, and virtual storage. This is very helpful if you need to run multiple operating systems in your normal day-to-day activities. The feature is only available on Windows 10 Pro, Enterprise, and Education editions; it is not available on Windows 10 Home. The Windows 10 version of Hyper-V does not support advanced functionality like live migration, Hyper-V Replica, or SR-IOV.

How to install Hyper-V

You need to make some basic configuration decisions during the installation of Hyper-V, but they can be changed after the installation, so if you change your mind or make a mistake, don’t panic! Follow these steps to install Hyper-V:

1. From Server Manager, choose Manage → Add Roles and Features.
2. On the Before You Begin screen, click Next.
3. On the Select Installation Type screen, click Next.
4. On the Select Destination Server screen, click Next.
5. On the Select Server Roles screen, select Hyper-V. Click Add Features in the dialog box that pops up, and then click Next.
6. On the Select Features screen, click Next.
7. On the first Hyper-V screen, click Next.
8. On the Create Virtual Switches screen, select the network adapter you want to use for the virtual switch. As you can see in the figure, you have only one adapter to choose from right now, so select it. Click Next.
9. On the Virtual Machine Migration screen, select the Allow This Server to Send and Receive Live Migrations of Virtual Machines on This Server check box and select the Use Credential Security Support Provider check box. Live migrations enable you to move a virtual machine from one Hyper-V host server to another with no downtime. CredSSP is the simplest way to set up live migration, but it requires you to log into the server being migrated, so it isn’t the best choice for automatically moving virtual machines. Click Next.
10. On the Default Stores screen, keep the default locations and click Next.
11. On the Confirm Installation Selections screen, select the Restart the Destination Server Automatically If Required check box. Click Yes in the dialog box that asks you to confirm the reboot. Click Install.

The Hyper-V role installs, and then the server restarts. When it comes back up from the restart, you can start configuring the Hyper-V host.

How to configure Hyper-V

After Hyper-V is installed, there are many things that you can configure or change from what you set during installation. Getting to the Hyper-V console is similar to the other roles that you install on Windows Server 2019. From Server Manager, choose Tools → Hyper-V Manager. When Hyper-V Manager opens, you see the name of the server on which you just installed the role. Click that server, and you see the menus change to reflect some of the things that you can do with the host.
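Before walking through the individual settings, note that both the role installation and the host settings covered below can also be handled from PowerShell with the Hyper-V module. A minimal sketch (the paths shown are placeholders, not recommendations):

# Install the role and management tools without Server Manager, then reboot.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# Review the current host configuration.
Get-VMHost | Format-List VirtualHardDiskPath, VirtualMachinePath, MaximumVirtualMachineMigrations, EnableEnhancedSessionMode

# Example changes: relocate the default stores and adjust live migration and Enhanced Session Mode.
Set-VMHost -VirtualHardDiskPath 'D:\Hyper-V\Virtual Hard Disks' -VirtualMachinePath 'D:\Hyper-V'
Set-VMHost -MaximumVirtualMachineMigrations 2 -EnableEnhancedSessionMode $true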
If you right-click the host, you see a menu similar to the one shown. This menu allows you to configure your Hyper-V host the way that you want to. To start configuring the host, click Hyper-V Settings in the menu that you got from right-clicking the server’s name.

Virtual Hard Disks and Virtual Machines

The first two configuration options — Virtual Hard Disks and Virtual Machines — allow you to change the storage location of the virtual hard disks that are used for the VMs and the location of the VMs’ configuration files.

NUMA Spanning

The third option, Non-Uniform Memory Access (NUMA) Spanning, shown in the following figure, allows you to set the host to act as a NUMA node. This allows VMs to use resources from the server they’re on as well as from other servers that are configured to be NUMA nodes, which means a virtual machine can have more CPU or RAM than is available on the one physical host, as long as another host that is also a NUMA node is sharing that resource. Because this has an impact on performance, I wouldn’t recommend it unless you’re using it in a lab or development environment. Avoid using it in production environments.

Live Migrations

Assuming you followed along in the installation of Hyper-V, your Live Migrations section should have a check mark in the Enable Incoming and Outgoing Live Migrations check box. On this screen, you can specify how many live migrations can happen at any given time. The default is two, as shown here. You can also specify a particular IP address if you want live migration to happen over a different interface than the rest of the traffic.

There is a plus sign next to Live Migrations. If you click it, you get the option for Advanced Features. Advanced Features is where you can change what kind of authentication you want to use for migrations. It’s set to CredSSP right now (if you followed the installation instructions), and this is where you can choose Kerberos if you would like. You can also choose performance options from here; your choices are TCP/IP, Compression, or SMB. I recommend leaving this on Compression.

Storage Migrations

Storage Migrations allows you to move VM storage with no downtime to the virtual machine. It’s very helpful when moving to a new storage array, or when getting ready to perform maintenance on a storage array, because you can move the storage while the virtual machine is still powered on. In this section, you can decide how many storage migrations you want to allow to happen at the same time. The default setting for this screen is two.

Enhanced Session Mode Policy

Enhanced Session Mode Policy allows your Hyper-V host to connect to your VMs over Remote Desktop Protocol (RDP). You may be wondering why you would want to allow that. When you use RDP to connect, you can pass local devices to your VMs, like disk drives, flash drives, and other peripherals. You also gain a shared clipboard that allows you to copy and paste, and it improves support for viewing the VMs on a higher-resolution monitor. This setting is disabled by default on Windows Server 2019, so you need to enable it if you want to use this feature.

Replication Configuration

You can set up your Hyper-V host to act as a Hyper-V Replica. When a Hyper-V host is configured as a replica, VMs are copied to it from the primary Hyper-V servers. If the primary Hyper-V server ever experiences a major malfunction, the replica server can bring up the VMs that are kept in a powered-off state. You can specify whether you want replication traffic to be sent plaintext or encrypted.
I always recommend using encryption when it’s available. And you can also select whether you want to allow replication from any server that can authenticate, or if you want to limit replication to specific servers. This screen is shown here.

Keyboard

The Keyboard screen is one of the user settings. You can specify whether key combinations like Alt+Tab, for example, will apply to the physical computer the keyboard is attached to, the VM, or on the VM but only if the VM is full screen.

Mouse Release Key

If you haven’t installed the VM drivers, you can set which key combination you want to use to release the mouse so that you can use it outside of the VM. Unless there is a good reason not to, install the VM drivers.

Enhanced Session Mode

Enhanced Session Mode is enabled for the user by default. It allows you to use a remote desktop connection to pass through drives, printers, and so on, and to use the shared clipboard.

Reset Check Boxes

All this setting does is reset check boxes that are used to hide pages or messages when they’re checked. It doesn’t reset anything else.

Virtual Switch Manager

When you right-click your Hyper-V host, you may notice an option for Virtual Switch Manager. This selection allows you to create virtual switches that your VMs can use to communicate on the network. There are three types of switches that you can use within Hyper-V:

External: Allows you to connect to a physical network.
Internal: Allows the virtual machines to communicate with other virtual machines on the same switch and with the host.
Private: Only allows virtual machines to communicate with other virtual machines on the same switch.

Having the right type of switch to support your use case is critical if you want your Hyper-V deployments to succeed. The screen is shown in the following figure.

Virtual SAN Manager

Also in the menu for your Hyper-V host is the Virtual SAN Manager. This allows you to connect your Hyper-V host to a Fibre Channel SAN. This is especially helpful for large organizations that have invested in Fibre Channel technology. You can see in the following figure that you can define the World Wide Node Name (WWNN) for the Fibre Channel port that is on the Hyper-V host. Fibre Channel SANs utilize special switching equipment to support high-speed, low-latency storage networks. Systems that use Fibre Channel need special storage network adapters installed, which are referred to as host bus adapters (HBAs).
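The same virtual switches can be created from PowerShell instead of Virtual Switch Manager. A minimal sketch (the switch and adapter names are placeholders):

# External switch bound to a physical adapter; the host keeps using that adapter as well.
New-VMSwitch -Name 'External vSwitch' -NetAdapterName 'Ethernet' -AllowManagementOS $true
# Internal switch: VMs can talk to each other and to the host.
New-VMSwitch -Name 'Internal vSwitch' -SwitchType Internal
# Private switch: VMs can talk only to other VMs on the same switch.
New-VMSwitch -Name 'Private vSwitch' -SwitchType Private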

What is Hyper-V in Windows Server 2019?

Article / Updated 09-24-2019

In the technology field, there is always the next hot thing that everybody starts talking about. Virtualization was one of those topics when it first made its appearance. Microsoft’s virtualization product is called Hyper-V. Virtualization has enabled IT professionals to make better use of the resources they’ve purchased, and it has led to the creation of cloud computing services.

Introduction to virtualization

Every organization used to have physical servers. In most cases, they followed best practices, and one server was dedicated to one application. This often led to wasted resources because the application didn’t actually need all the central processing unit (CPU) and random access memory (RAM) it was given, so those resources would sit idle. At the same time, the organization was paying for power and cooling for a server that wasn’t necessarily doing anything at the moment. The amount of time it took to stand up a new server could also be an issue for projects that were time-sensitive. With each physical server, you had to rack it, cable it, configure it, and install software on it. Provisioning new servers for large projects could take weeks to months, especially if multiple teams were involved.

Virtualization was a game changer. Instead of buying individual smaller servers to run single applications, an organization could purchase bigger, more powerful servers to run a hypervisor of some kind that would, in turn, run multiple virtual servers, referred to as virtual machines (VMs). By purchasing larger servers to run the smaller workloads, organizations were able to save on power and cooling costs. They were also able to reduce time to market, because the virtualization administrator was typically the one who would spin up the server operating system in a VM, set up the networking, and perform basic configuration tasks like assigning IP addresses. Virtualization really streamlined the process for system administrators and organizations to build servers quickly in response to the needs of other teams, whether for new projects or for expanding existing capacity to support applications. It also simplified recovery efforts when configured properly, because VMs on a failed host could be transferred to another host.

You sometimes hear hypervisors referred to as hosts and virtual machines referred to as guests. If you run into this terminology, don’t let it confuse you. These terms are used across all types of virtualization technologies.

Type 1 and Type 2 hypervisors

Before diving into the difference between Type 1 and Type 2 hypervisors, make sure you understand what a hypervisor is. The hypervisor is essentially a process that allows you to create, run, and manage VMs. The hypervisor is ultimately responsible for presenting resources to the VMs that are running on it, including CPU, RAM, networking, and storage. Most hypervisors let you overprovision VMs, meaning that you can assign more resources than are actually available. This may work for you if your workloads are very small, but if there are spikes in the workloads, or if VMs take too many resources, the hypervisor can become starved for resources, which can impact all the VMs running on it. Do not overprovision your VMs.

Type 1 hypervisors

Type 1 hypervisors are also referred to as bare-metal hypervisors, because the hypervisor software runs directly on the host system’s hardware.
Type 1 hypervisors provide the best performance and security of the hypervisors, but some of them are more complex than others to set up. Here are some examples of the more common Type 1 hypervisors:

Microsoft Hyper-V
VMware ESXi
Oracle VM Server
KVM
Citrix XenServer

Type 2 hypervisors

Type 2 hypervisors are referred to as hosted hypervisors. They require an operating system to install and run on. Type 2 hypervisors are usually easier to install and configure, but they’re less secure and not as performant as Type 1 hypervisors because they don’t have direct access to the host system’s hardware. Here are some examples of the more common Type 2 hypervisors:

Oracle VirtualBox
VMware Workstation
VMware Fusion
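To make the idea of a hypervisor presenting CPU, RAM, networking, and storage to a VM concrete, here is a minimal Hyper-V sketch; the VM name, sizes, paths, and switch name are placeholders:

# Create a Generation 2 VM with 2GB of startup memory, a new 40GB disk, and a network connection.
New-VM -Name 'TestVM' -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath 'D:\Hyper-V\TestVM.vhdx' -NewVHDSizeBytes 40GB -SwitchName 'External vSwitch'
# Give it two virtual processors, then power it on.
Set-VMProcessor -VMName 'TestVM' -Count 2
Start-VM -Name 'TestVM'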

How to Install Containers on Windows Server 2019

Article / Updated 09-24-2019

Containers are a game-changing technology — especially for teams that have developers who need dynamic environments to work from. A developer can launch a container that supports the needs of her application within minutes, and many of the container images are purpose-built, with the various programming frameworks called out in the title of the container image.

Windows Server 2019 supports two variations on containers:

Windows container: The Windows container is the traditional container model. It’s fast, lightweight, and easy to use. The downside is that it shares the kernel with the host operating system (OS).

Hyper-V container: If you have a workload that requires different versions of the kernel, or highly secure workloads that can’t share a kernel, the Hyper-V container is the better choice. The Hyper-V container has a higher performance hit on the host server, but because it runs each container in its own lightweight virtual machine (VM), you can have containers that use different versions of the kernel, and you have true isolation because the container is not sharing the kernel of the host OS with the host and other containers.

The best thing is that you don’t need to decide on one type or the other: containers can go from being Windows containers to Hyper-V containers. Here, I show you how to install Windows containers and Hyper-V containers, as well as how to install the Docker pieces that are needed to make everything work.

How to install Windows containers

Installing Windows containers is simple. You just enable the feature, and then install Docker. This section covers installing the feature.

1. From Server Manager, choose Manage → Add Roles and Features.
2. On the Before You Begin screen, click Next.
3. On the Select Installation Type screen, click Next.
4. On the Select Destination Server screen, click Next.
5. On the Select Server Roles screen, click Next.
6. On the Select Features screen, select Containers (shown in the following figure), and then click Next.
7. On the Confirm Installation Selections screen, click Install.
8. Click Close and restart the server.

How to install Hyper-V containers

To install Hyper-V containers, you also have to install the Hyper-V role. You can install them both at the same time. Follow these steps:

1. From Server Manager, choose Manage → Add Roles and Features.
2. On the Before You Begin screen, click Next.
3. On the Select Installation Type screen, click Next.
4. On the Select Destination Server screen, click Next.
5. On the Select Server Roles screen, select Hyper-V, click Add Features, and then click Next.
6. On the Select Features screen, select Containers and then click Next.
7. On the Hyper-V screen, click Next.
8. On the Create Virtual Switches screen, select your network adapter, and click Next (see the following figure).
9. On the Virtual Machine Migration screen, click Next.
10. On the Default Stores screen, click Next.
11. On the Confirm Installation Selections screen, click Install.
12. After the installation is complete, click Close and then restart the server.

How to install Docker

At this point, you’ve at least got the Containers feature installed. You may have even installed the Hyper-V role and the Containers feature at the same time. Now you need to install the Docker Engine. This is the piece that really ties all the other pieces together. You’ll need to open PowerShell to run these commands, as well as the commands that follow under “Test Your Container Installation.” To open PowerShell, right-click on Start and select Windows PowerShell (Admin).
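With PowerShell open, it’s worth noting that the GUI installs above can also be scripted. A minimal sketch (install only the pieces you need):

# Containers feature only (Windows containers).
Install-WindowsFeature -Name Containers
# Containers plus the Hyper-V role (needed for Hyper-V containers); restarts the server when done.
Install-WindowsFeature -Name Containers, Hyper-V -IncludeManagementTools -Restart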
After you’ve opened PowerShell, your first step is to install the Microsoft Package Provider for Docker. This is done with the following command:

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

Now you can install the latest version of Docker with the following command:

Install-Package -Name docker -ProviderName DockerMsftProvider

After Docker is installed, you need one more restart. You can do this through the graphical user interface (GUI), or you can just type the following into PowerShell:

Restart-Computer -Force

These commands are shown in the following figure. If everything went well, you get no output; the PowerShell prompt simply returns, and you can run the next command.

Test your container installation

After your server is configured and Docker is installed, you’ll want to test that your container installation is working properly.

Test a Windows container

There is a simple way to test that your Windows container installation is working properly: Download and run a container image. One of my favorites is a sample image that prints out a “Hello world”–style message and then exits. To run this test, you use the docker run command. Because the container image is not downloaded yet, it downloads the container image first and then runs it. If you want to stage the image so you can play with it later, you can use the docker pull command instead of docker run, and it will only download the container image. Here is the command to download the sample container:

docker pull microsoft/dotnet-samples:dotnetapp

Note that the download may take a few minutes because it’s pulling down a copy of Nano Server. You can watch the progress on the screen. See the following figure for the output from running the command.

Windows container images must use the same kernel version as the container host. If you try to run a container with a kernel version that doesn’t match the container host’s kernel version, you’ll get an error similar to the screenshot in the following figure. Notice the first line of the error, which ends in “The container operating system does not match the host operating system.”

Test a Hyper-V container

Testing a Hyper-V container is similar to testing a Windows container, but because the kernel isn’t shared, you have far more freedom as to which container images you can run. The command itself is similar — you just need to include --isolation=hyperv to tell Docker that you want to launch the container as a Hyper-V container rather than a Windows container:

docker run --isolation=hyperv microsoft/dotnet-samples:dotnetapp

As you can see in the following figure, the container image, which was downloaded previously in the Windows container section, ran and gave us the Hello message with the super adorable .NET Foundation robot mascot.
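If the docker commands aren’t recognized after the restart, a quick sanity check can help. A small sketch, assuming Docker was installed through DockerMsftProvider as shown above (the Docker Engine then runs as a Windows service named docker):

# Make sure the Docker Engine service is running.
Get-Service docker
Start-Service docker
# Confirm the client can reach the engine and list the locally stored images.
docker version
docker images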

What are the Windows Server Docker and Docker Hub?

Article / Updated 09-24-2019

Docker is an open-source platform that assists you in packaging and deploying applications in Windows Server 2019. You can run multiple containers on a container host, and because they share the container host’s kernel, they use fewer resources than virtual machines (VMs); you don’t need the overhead of a hypervisor to manage them.

Docker architecture

Docker uses a client–server model. The Docker client talks to the Docker server component, which is called a daemon. Your Docker client can be on the same server as the Docker daemon, or you can run the Docker client from your workstation.

The Docker server

The Docker server is the brains of the operation. It manages much of what goes on in Docker, including the various objects that are created, and communications with the Docker application programming interface (API). The server component is referred to as a daemon.

The Docker client

The Docker client is where you perform most of your work with containers. Whenever you run a Docker command, you’re running it from the Docker client.

The Docker registry

Docker images are stored in the Docker registry. You may also hear this referred to as a repository. Registry is the official word in Docker documentation, but many developers are used to calling this type of construct a repository. Both words work — just be aware that you may see them used interchangeably.

Docker objects

Docker objects is a term used to refer to a multitude of different components, like images, containers, and services.

Basic Docker commands

Docker commands always start with docker and include keywords that determine the action that you want to take. Here are some of the more common commands that you should remember:

docker pull: Pulls a container image from whichever registry you have configured to store your container images.
docker push: Pushes your container image to whichever registry you have configured to store your container images.
docker run: Pulls the container image if it isn’t available locally and then creates a new container from the container image.
docker images: Lists all the container images that are stored locally on the container host.
docker login: Logs in to a registry; not required for public registries, but required to access private registries.
docker stop <name>: Stops the named running container.
docker ps: Lists all the containers that are currently running.

Introduction to Docker Hub

Docker Hub is a public registry owned by Docker that is available for storing container images in individual repositories. Businesses can use Docker Hub to create their own private repositories to store proprietary container images as well. Many of the available images are from large open-source projects, but there are also plenty of container images from organizations that are not open source. For example, Microsoft has a public repository with about 68 container images at the time of this writing. You may be asking, “How do I get to Docker Hub? It sounds pretty cool.” You can access Docker Hub online at hub.docker.com.

Finding public images

Public images are the easiest ones to find. You don’t need an account to search for public images, nor do you need an account to do a docker pull on one. To find an image that you’re interested in, simply type your query into the search box at the top. For example, if you want to search for Server Core, just type Server Core and press Enter, as shown.
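Searching doesn’t have to happen in the browser; the Docker client can query Docker Hub as well. A small sketch, using the same sample image referenced elsewhere in these articles:

# Search Docker Hub from the command line.
docker search microsoft
# Pull a public image without logging in.
docker pull microsoft/dotnet-samples:dotnetapp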
If only one container image matches your query, you’re taken to a page dedicated to that container image. If you type the name of an organization, or your search returns multiple results, you’re presented with search results. If you had searched for Microsoft, for example, you could have gotten any container image that has to do with Microsoft. Official Microsoft container images can be filtered by selecting Verified Publisher from the filters on the left side of the screen, as shown.

One of the really great things about Docker Hub is that you can click a container image to learn more about it. The page that you click into is the same one you get if you search for a product and there is only one result. You’re presented with a description of the container image, which includes available tags and the commands needed to use the container image. These commands are often used to accept licensing agreements. The Microsoft SQL Server container image, for example, tells you to run this command to start an MS SQL server instance running SQL Express:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Express' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2017-latest-ubuntu

The information on the container image also covers software requirements and available environment variables, along with a full listing of tags. Tags allow you to choose different versions of a container image. If you don’t specify a tag, you get the container image with the “latest” tag by default. You’re also given the command to pull an image if you’re interested in it. For example, to pull this MS SQL container image into Docker, you would run the following:

docker pull mcr.microsoft.com/mssql/server

One last thing that is really helpful is that you can see how many times a container image has been pulled. This information is useful if you aren’t familiar with the organization that supplied the container image. Underneath the name, next to a down-arrow logo, is a number that tells you how many times it has been pulled. Microsoft SQL Server, at the time of this writing, had been pulled over 10 million times, as shown here.

Creating a private repository

Public repositories make acquiring container images convenient, but if you’re working on container images and you don’t want them to be publicly available, you’ll want to create a private repository. When pulling or pushing container images to your repository, you have to use the docker login command to authenticate before you’re allowed to work with the repository. By default, you get one free private repository in Docker Hub. If you need more private repositories than that, you can upgrade to a paid plan. At the time of this writing, you could pay $7 a month for five private repositories.

Creating an account

Creating an account on Docker Hub is simple and free. From the home page, click the Sign Up link in the upper-right corner. Choose a Docker ID, enter your email address and password, accept Docker’s terms, check the box on the CAPTCHA, and then click Sign Up, as shown here. You’ll get an email to verify your email address. Click the link in the email to activate your account.

Creating your private repository

When you log in to Docker Hub after creating your account, you’re asked whether you want to create a repository or create an organization. Click Create a Repository. Enter a name for your repository and a description. Change visibility to Private. Click Create.
You can choose to link your repository to your GitHub or Bitbucket account to do automated container image builds. This option is located in the repository creation menu, though you can go back later and set it if you need to. After your repository is created, it will be blank, but it gives you a sample of the command you would need to run to push things to your repository, as shown.

Using a private repository

To use your private repository, you first have to log in to Docker; then you can push and pull container images as much as you want. To log in, enter the following command:

docker login

After you’ve pulled an image (the standard Nano Server image from Microsoft’s repository, for example) and made your changes to it, the command that lets you push the container image to your repository looks like this:

docker push <mydockerid>/myrepo:nano

The command uses my Docker ID, followed by the name of my repository, and then the tag used for my container image; in this case, a tag with a value of nano. You can see the command-line part in the following figure. After the container image has been pushed, it shows up in your repository in Docker Hub. All the tags that you push to Docker Hub show up in your portal. You can’t alter the container images from inside Docker Hub; in fact, the only thing you can do is delete them. To modify your container images, you need to pull them, make your changes, and then push them again. The following figure shows you what Docker Hub looks like after the tagged container image has been pushed.

To pull the container image down to modify it, issue a very similar command to the one you used to push the tagged image:

docker pull <mydockerid>/myrepo:nano

After you make the changes that you need to make (like updating the container image), you can push it back up to your private repository, where it’s accessible from any system that can log in to your Docker repository.
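Putting the pieces together, the round trip of pulling a Microsoft base image, retagging it against your own repository, and pushing it might look roughly like the following. This is a sketch under assumptions: the Docker ID, repository name, and Nano Server tag are placeholders rather than values from the article.

docker login
# Pull a base image to customize; pick a tag that matches your container host.
docker pull mcr.microsoft.com/windows/nanoserver:1809
# Retag it so it points at your own repository, then push it.
docker tag mcr.microsoft.com/windows/nanoserver:1809 <mydockerid>/myrepo:nano
docker push <mydockerid>/myrepo:nano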

What Are Windows Server Containers?

Article / Updated 09-24-2019

Virtualization drastically changed the way that IT operated in organizations of all sizes, but containers have had a large impact as well. You may be wondering why someone would want to use containers in Windows Server 2019. They’re just virtual machines (VMs), right? Well, not exactly. The technologies may seem similar, but containers and VMs are not the same. VMs are presenting virtual hardware to the user. Containers don’t expose the hardware or the operating system; they’re meant to run applications in isolation.

VMs can be thought of as Infrastructure as a Service (IaaS). Although VMs do present virtual hardware to system administrators, the administrators of virtual servers don’t have to be concerned about the underlying hardware. They can focus on the operating system and applications that they’re responsible for. Containers take this idea and refine it to where each container is responsible for running an application. The application is baked into the image so the containers can be stood up and torn down constantly. This is great for Platform as a Service (PaaS) scenarios where developers just want to test their code and not worry about getting servers provisioned to test against. Developers don’t generally care about hardware or operating systems; they just want to know that their code works in the way they expect it to.

The main idea behind containers is that the application inside of each container has all the resources that it requires to function within the same container. This means that you can drop the container on any container host, and all the application’s requirements will still be met because those requirements (.NET, for example) move with the application inside the container.

What a container looks like in Windows Server 2019

You may be wondering what containers look like. Let’s use the example of containers in Windows specifically. At a high level, the architecture looks something like this figure. In a Windows Server operating system, after you enable the containers feature, you install the Docker Engine. The Docker Engine is responsible for packaging and deploying the containers. Microsoft partnered with Docker for the first time with Windows Server 2016 to support running containers on a Windows operating system.

Important container terms

As with most newer technologies, there are new terms that you need to understand to be on the same page as other system administrators who work with containers. Here are the most important terms:

Container host: The container host is the system that is configured with the Windows Container feature. It can be a physical host or, through the joys of nested virtualization, a virtual host. All the containers on the container host share the host’s resources.

Container image: When you create a container image, you create a deployable image that contains the changes you made to the original image, which were stored in the sandbox. The container image does not contain the operating system (OS); instead, when you deploy custom container images, they’re a layer of customization that is added on top of the container OS image.

Sandbox: The sandbox saves changes as they’re made to the container image. This can include modifications to the file system and Registry, and any new applications you might install. Changes saved in the sandbox can be saved as container images so they can be reused.

Container OS image: Not to be confused with the container image, the container OS image can’t be modified.
It is the first layer in the container sandwich and provides the operating system that the container will use.

Container repository: Container images, along with any dependencies they may have, are stored in a container repository so that they can be reused. They can be stored in a local repository, or, if you plan on using the image across multiple container hosts, you can create private or public repositories on Docker Hub. Repositories may also be referred to as registries; Docker Hub, for instance, is often referred to as a container registry.

How containers run on Windows Server 2019

Containers use the Docker Engine to run on Windows Server. Containers were first introduced in Windows Server 2016, but the technology and, of course, Docker itself have been around a lot longer than that. Docker is the engine that is responsible for packaging and delivering container images. Those container images can be based on Windows or Linux operating systems and can run in your datacenter on Windows Server 2019.
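To see these layers in practice, you can pull a container OS image and start a container on top of it with the Docker Engine. A minimal sketch; the image tag and repository name shown are examples, and for a Windows (non-Hyper-V) container the tag must match the host’s kernel version:

# Pull a container OS image (the unmodifiable base layer).
docker pull mcr.microsoft.com/windows/servercore:ltsc2019
# Start an interactive container; any changes you make live in the sandbox layer.
docker run -it mcr.microsoft.com/windows/servercore:ltsc2019 cmd
# Committing a stopped container's sandbox produces a new, reusable container image.
docker commit <container-id> <mydockerid>/customized-servercore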

Windows Server 2019 User Experiences and Server Manager

Article / Updated 09-18-2019

Windows Server 2019 has two user experiences to choose from. What you use will depend on the workload you’re wanting to support, as well as organizational requirements. Here, the Desktop Experience and the Server Core experience are discussed, as well as some pros and cons of each.

Desktop Experience

Desktop Experience is what you would consider to be the standard graphical user interface (GUI) that you may have used in previous versions of the Windows Server operating systems. It allows you to interact with the system with buttons and menus rather than through the command line. A server with Desktop Experience can be managed through Group Policy if attached to an Active Directory domain, and workgroup (non-domain) servers can be managed via local Group Policy. Desktop Experience tends to be the easier form of server installation and administration for beginning system administrators, but I highly recommend that you don’t rely on the GUI (shown in the figure). Become a PowerShell ninja instead! PowerShell is a very versatile language and can be used on a variety of systems, including some of the newer versions of Linux.

Server Core

Server Core (shown in the following figure) provides a much simpler interface if you connect to the console. You’re greeted by a somewhat familiar-looking command window that prompts you for your username and password. After you’ve logged in, you get the traditional C:\ prompt. You can run the traditional command-line commands from this console, or, by typing powershell.exe, you can launch a PowerShell window. Initial configuration is done with the sconfig utility, though it can also be done through a PowerShell script or PowerShell Desired State Configuration (DSC). Server Core can be managed through Group Policy if attached to an Active Directory domain, or through local Group Policy on workgroup servers.

Nano

Nano provides an even simpler interface and a much more limited console, which is referred to as the Recovery Console. It isn’t available through the regular installer on the disc; instead, you have to “build” the image from files available on the disc. Nano has a much smaller footprint, in both disk and compute needs, than Desktop Experience or Server Core. Because it has a smaller overall footprint, the attack surface is also reduced. In Windows Server 2019, Nano is available only as a container base operating system image and can only be run as a container on a container host.

Note: You won’t really see Nano discussed in depth anywhere in this book because you’re far more likely to encounter the Desktop Experience or Server Core installations of Windows Server 2019.

Nano can’t be managed through Group Policy. You need to use PowerShell DSC instead if you want to manage Nano at scale. You may be asking why you would even use Nano when it’s such a limited version of the operating system. If you need to run container workloads that use .NET, Nano is an excellent candidate because it has been optimized to run .NET Core applications.

What Server Manager has to offer

When you first install Windows Server 2019 and log in, the first screen that greets you is Server Manager (see the following figure). This screen gives you a central area to do all the configuration tasks you need to do on your server. It presents a handy menu to manage all the roles and features installed on your server as well. Server Manager also allows you to manage remote servers, not just the local server.
The remote servers need to be added to Server Manager before they can be managed, and some firewall ports may need to be opened to allow full functionality. After remote servers are added, you can run PowerShell against them and perform basic management tasks like shutting down, connecting via Remote Desktop Protocol (RDP), and so on.

You can manage up to 100 remote servers with Server Manager. This number may be lower depending on what you’re running on the managed servers; if you’re running large workloads, you may not be able to manage as many.

Server Manager can be used to manage the same operating system it’s installed on, as well as operating systems that are older than the one it’s installed on. It can’t manage a server that is running a newer version of the operating system. For example, Server Manager on Windows Server 2012 R2 can’t manage a server running Windows Server 2016.

The figure shows some of the options available through the Server Manager menu. You may notice that Remote Desktop Connection is grayed out. This is because I was logged on to the server shown in the window. Here’s a list of some of the more commonly used features of Server Manager:

Managing local and remote servers
Managing roles and features on servers (to install or remove roles and features, the target system must be running at least Server 2012)
Starting management tools like Windows PowerShell and MMC snap-ins
Reviewing events, performance data, and results from the Best Practices Analyzer
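Much of the same remote management can be done directly from PowerShell once a server is reachable. A minimal sketch, assuming PowerShell remoting is enabled and using a placeholder server name:

# Query roles and features on a remote server (the target must be Server 2012 or later).
Get-WindowsFeature -ComputerName SRV01 | Where-Object Installed
# Open an interactive remote session.
Enter-PSSession -ComputerName SRV01
# Or run a one-off command remotely.
Invoke-Command -ComputerName SRV01 -ScriptBlock { Get-EventLog -LogName System -Newest 10 }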

What’s New in Windows Server 2019

Article / Updated 09-18-2019

With each new version of Windows Server, Microsoft introduces new and innovative technologies to improve administration or add needed functionality. Here are some of the new features in Windows Server 2019:

App Compatibility Feature on Demand (FoD) for Server Core: The App Compatibility FoD package includes a set of binaries that improve compatibility for applications that require some of the graphical tools that haven’t historically been available with Server Core. To use these capabilities, you need to install the FoD package from Microsoft; it’s available as an optional package download from the Microsoft Evaluation Downloads page in the form of an ISO image file. Just search for Windows Server Core Features on Demand, and ensure that you download the same version of FoD as the version of Server Core that you’re going to install or have already installed. All you need to do is copy the ISO image file to local storage on the server or to a shared storage location. Then you can use PowerShell to mount the ISO with the Mount-DiskImage cmdlet. The package gives you the ability to use Internet Explorer 11, Event Viewer, Performance Monitor, Resource Monitor, Device Manager, Microsoft Management Console (MMC), File Explorer, Windows PowerShell ISE, and Failover Cluster Manager, and it adds support for SQL Server Management Studio.

Improvements to clustering: Several improvements have been made to clustering in Windows Server 2019:

Cluster Sets is a new technology that allows you to group multiple clusters. These clusters may be compute-only or storage-only, or they may be hyperconverged (both storage and compute) clusters. Cluster Sets allow the movement of virtual machines (VMs) across different clusters, which, in turn, allows you to do maintenance tasks with little to no impact on the uptime of the VMs. To use the Cluster Sets feature, you create a VM and point it to a unified namespace (a name that is shared and provides access across multiple storage systems) for the cluster set. From there, the VM is assigned to a cluster, and the cluster assigns it to a specific node.

File Share Witness is a file share that can be used to reach quorum in a clustering scenario. It received two enhancements in Windows Server 2019. The first enhancement enables Failover Cluster Manager to block the creation of a file share witness if Distributed File System (DFS) is being used. An error message is also displayed letting you know that this is not supported, because putting your file share witness on a DFS share can cause stability issues in your cluster. The second enhancement enables you to use a file share witness in scenarios that were not previously supported — for example, when you have poor Internet connections to remote locations, when you don’t have shared drives, when you don’t have a domain controller connection (for instance, in a demilitarized zone [DMZ]), or in a workgroup or cross-domain cluster where there is no Active Directory–based cluster name. The DMZ is the area where you’ll typically locate public-facing systems like web servers. It’s essentially a lower-trust network being exposed to an untrusted network, like the Internet.

Moving clusters between domains no longer results in the cluster being destroyed. Two new PowerShell cmdlets allow you to move a cluster from one domain to another.

Failover Clustering will no longer use NT LAN Manager (NTLM) for authentication.
Instead, you’ll use Kerberos and certificates to manage authentication on your failover clusters.

Improvements to containers: You may be aware that containers were added in Windows Server 2016. The underlying technology used for containers on Windows Server is Docker. New container capabilities have been added in Windows Server 2019:

You can use group managed service accounts (gMSA) to access network resources. The container’s host name no longer needs to be the same as the gMSA, and you can use the gMSA on both Windows and Hyper-V isolated containers.

Applications that have specific communications needs, such as support for Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), general-purpose input/output (GPIO), and universal asynchronous receiver-transmitter/communication (UART/COM) ports, can now be containerized. Host Device Access allows you to assign a simple bus to Windows Server containers. This is especially useful for Internet of Things (IoT) devices like sensors and other peripheral devices.

A third container image has been created that resolves application programming interface (API) dependencies that were not available in Server Core.

You can now deploy Kubernetes on Windows Server 2019. The master node still needs to be on Linux, but you can configure worker nodes to run on Windows Server. If you’re in a Windows-centric shop and you’re trying to automate processes, or you’re just looking for a container orchestration solution, Kubernetes is a great one to go with, and you can find lots of great resources on it if you’re interested.

Congestion control: Windows Server 2019 includes Low Extra Delay Background Transport (LEDBAT), a network congestion control provider. As the name suggests, LEDBAT can find available network bandwidth for running updates and other network-intensive jobs. When the network is not in use, it can consume all the bandwidth; when the network is in use, it gives up bandwidth to your users and applications so that they don’t experience network delays.

Security enhancements: Three enhancements were made to security in Windows Server 2019, expanding on work done in Windows Server 2016, when Windows Defender was officially introduced to the server operating system. These enhancements are as follows:

Windows Defender Advanced Threat Protection (ATP): Provides visibility into attack activities that target memory and kernel-level areas, as well as the ability to respond to compromised systems. It also aids in forensic investigations and can be used to collect data about the system remotely.

Windows Defender ATP Exploit Guard: ATP Exploit Guard has capabilities similar to Host Intrusion Prevention Systems (HIPS). It’s designed to protect systems from multiple methods of attack, as well as to block suspicious behavior that is often seen in compromises involving malware. The exploit protection capability replaces the older Enhanced Mitigation Experience Toolkit (EMET) that was previously offered by Microsoft.

Windows Defender Application Control: This feature was actually released in Windows Server 2016, but customer feedback conveyed to Microsoft that it was difficult to deploy. The version that ships with Windows Server 2019 comes with default policies built in to address some of the hardships that organizations faced. Microsoft applications are allowed to run by default, and executables that are known to be able to bypass code integrity checks are blocked.
Software-defined networking (SDN) enhancements: There were several improvements in the area of SDN:

One of the great improvements in security was the introduction of the Encrypted Networks feature, which provides end-to-end encryption and is configured on a per-subnet basis.

High-performance gateways allow network throughput to be increased up to six times. This is really great for hybrid scenarios where some systems are on-premises and others are in Azure.

Access control lists were introduced for the SDN fabric and can be applied automatically, which can improve the security of your SDN.

Your Hyper-V hosts can now generate firewall logs in the appropriate format for Azure Network Watcher.

IPv6 support was added, including all the security features available with the traditional IPv4 SDN.

Virtual network peering was introduced to give you a method to allow separate virtual networks to communicate.

Shielded VMs: The concept of the shielded VM was introduced in Windows Server 2016. Some cool new features available with Windows Server 2019 include the following:

The ability to run shielded VMs on systems that have intermittent connectivity to the Host Guardian Service (HGS)

The ability to enable VMConnect enhanced session mode and PowerShell Direct to aid in troubleshooting efforts

Support for shielded VMs running Linux operating systems

Improvements in storage: Storage Spaces Direct (S2D) was introduced in Windows Server 2016 Datacenter edition. This was a great step in the direction of hyperconverged architectures; it allows locally attached storage to be leveraged to create highly available and easily scalable software-defined storage. Some of the new features added in Windows Server 2019 include the following:

New PowerShell cmdlets: These cmdlets simplify volume management and the retrieval of performance history when using Storage Spaces Direct.

Storage Migration Service: Storage Migration Service allows you to inventory existing servers for their data, security, and network settings, and then migrates those settings to a new, modern server using Server Message Block (SMB). This is a huge win if you still have some old file servers hanging around, because it simplifies the migration to a newer and better-supported operating system. The new system takes over the identity of the old server — your users won’t even know anything happened!

Improvements to Storage Replica: Storage Replica was initially released in Windows Server 2016 Datacenter edition and allows for synchronous and asynchronous block replication between servers and/or clusters. With Windows Server 2019, Storage Replica is available in the Standard edition as well as the Datacenter edition. The Standard edition version of Storage Replica does have a few limitations that don’t exist in the Datacenter version. You’ll need to see whether these limitations affect your use case; if they do, be sure to install the Datacenter edition.

System Insights: System Insights is a new feature in Windows Server 2019. It utilizes machine learning to analyze performance data and other metrics on each server. This feature can be especially beneficial if you need to do capacity forecasting for compute, storage, and networking needs. System Insights can be managed through PowerShell or through the newer version of Windows Admin Center (a short PowerShell sketch appears at the end of this article).
Windows Admin Center: Windows Admin Center can be used to centrally manage your servers: viewing performance statistics, reviewing logs, and performing configuration tasks, as well as setting up recovery for your local server to Azure by utilizing Azure Site Recovery. Windows Admin Center can now connect to Windows Server 2008 R2, though with limited functionality. Windows Server 2012, 2012 R2, and 2016, Windows 10, and of course Windows Server 2019 are fully supported. The tool is browser-based and is designed to complement existing tools, not necessarily replace them.
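As referenced in the System Insights item above, the feature can be managed entirely from PowerShell. A minimal sketch (the capability name shown is one of the built-in forecasting capabilities):

# Install the feature, then list the built-in predictive capabilities.
Install-WindowsFeature -Name System-Insights -IncludeManagementTools
Get-InsightsCapability
# Enable a capability and review its most recent prediction.
Enable-InsightsCapability -Name "CPU capacity forecasting"
Get-InsightsCapabilityResult -Name "CPU capacity forecasting"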
