
vSphere 7 with Kubernetes – Declarative GitOps Continuous Delivery for Tanzu Kubernetes clusters


(By Michael West, Technical Product Manager, VMware)

Development teams are moving at ever increasing velocity.  Driven by dynamically changing requirements from the lines of business, they are embracing unprecedented levels of automation.  Infrastructure teams are designing new automation frameworks as well.   The traditional approach has been an imperative framework.   This requires the operations team to define a set of instructions to be executed in order to achieve a state transition from what is in place now to the final goal.   Example: Create a VM, Add Network Interface, Power on VM, etc.  Planning and execution are the responsibility of the implementing team.  Even with automation, imperative DevOps can be challenging to maintain at scale.

 

Declarative DevOps

Declarative DevOps takes a different approach.  Operations teams define the desired state for their system in a declarative way.  They might define how many virtual machines they need, their resource requirements, network and storage configuration or what applications would be deployed to them.   They would not define the steps to transition to this desired state, but would rely on an orchestration system to reconcile the active state of their environment with the desired state they have defined.  Overwhelmingly, organizations are adopting Kubernetes as this desired state orchestration system.
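
To make the contrast concrete, here is a minimal sketch of a declarative desired-state definition in Kubernetes terms: the manifest states only the end state (three running copies of a hypothetical web application), and the orchestrator works out the steps to get there. The names and image below are illustrative, not taken from any particular environment.

```yaml
# Minimal desired-state sketch (illustrative names and image).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # "keep three replicas running" -- not "create VM, add NIC, power on"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19    # illustrative image
```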

 

Kubernetes is more than Container Orchestration

Kubernetes is generally thought of as an orchestrator of container-based modern applications.  It is definitely that; however, it is much more.  Kubernetes is an extensible platform that allows for the definition of custom resources that can be managed through the Kubernetes API.  vSphere 7 with Kubernetes contains a Kubernetes control plane embedded in vSphere and makes extensive use of custom resources to allow developers to automate lifecycle management of their own Kubernetes clusters on demand.  This embedded Kubernetes API and stack of custom resources is called the Tanzu Kubernetes Grid Service for vSphere and provides a model for declarative DevOps.
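
As a rough sketch of what that looks like, a Tanzu Kubernetes cluster can be described as a custom resource and submitted to the Supervisor Cluster, which then drives the environment toward that definition. The cluster name, namespace, VM classes, storage class, and version below are illustrative and will differ per environment.

```yaml
# Hedged sketch of a TanzuKubernetesCluster definition; all values are illustrative.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01                           # illustrative cluster name
  namespace: dev-team                            # a Supervisor Namespace (illustrative)
spec:
  topology:
    controlPlane:
      count: 3
      class: best-effort-small                   # VM class; environment-specific
      storageClass: vsan-default-storage-policy  # maps to a VM Storage Policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
  distribution:
    version: v1.16                               # resolved by the service to a full release
```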

 

What is GitOps?

Taking the notion of declarative DevOps further is the concept of GitOps.  GitOps defines Kubernetes cluster management and application delivery with Git as the single source of truth for your declarative infrastructure.   The desired state of all infrastructure is stored in a Git-based source code management system.  Teams describe the desired state of infrastructure and application environments like dev, test, staging and production in Git repos.

Continuous Integration pipelines push infrastructure and application changes to Git Repos.  Continuous Delivery tools notice the changes and compare the actual state of the system to the desired state defined in the Git repo manifests, then attempt to reconcile the two.

 

vSphere 7 with Kubernetes Enables GitOps

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.  It is deployed on running Kubernetes clusters – like the vSphere 7 with Kubernetes Supervisor Cluster – as a set of deployments, services, configmaps and secrets.  DevOps teams set up the source code Git repos and target Kubernetes clusters for deployment of infrastructure and applications.  As developer pipelines push new code to the repos, Argo CD orchestrates the deployment of that code into the appropriate target environments.
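
For context, an Argo CD Application is itself a declarative resource: it points at a Git repo and path plus a destination cluster and namespace, and Argo CD keeps the two in sync. A minimal sketch follows, with a hypothetical repo URL and illustrative names; it is not taken from the demonstration below.

```yaml
# Hedged sketch of an Argo CD Application; repo URL and names are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tkg-clusters
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/infra/tkg-clusters.git   # hypothetical repo
    targetRevision: main
    path: clusters
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD itself runs on
    namespace: dev-team                      # illustrative target namespace
  syncPolicy:
    automated:
      prune: true        # remove resources that disappear from Git
      selfHeal: true     # revert drift back to the Git-defined state
```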

 

Let’s See it in Action

The following demonstration shows how the Argo CD continuous delivery platform, deployed using the vSphere Pod Service, automates Tanzu Kubernetes cluster lifecycle management through a GitOps operating model.

 

The post vSphere 7 with Kubernetes – Declarative GitOps Continuous Delivery for Tanzu Kubernetes clusters appeared first on VMware vSphere Blog.


vSphere 7 with Kubernetes: Announcing General Availability of VMware Cloud Foundation 4.0 to Take Organizations from Zero to Kubernetes


vSphere 7 with Kubernetes, the new generation of vSphere for containerized applications, is now generally available through the VMware Cloud Foundation (VCF) 4.0 release. Organizations can literally go from zero to Kubernetes by using VCF 4 to transform their current hybrid cloud infrastructure to support containers and Kubernetes.

IT teams can use vSphere’s vCenter Server tools and their existing skill sets to deliver modern services to the development teams supporting lines of business. Additionally, development teams can now get self-service access to the newly introduced VMware Cloud Foundation Services through Kubernetes and REST APIs. Learn more about the VMware Cloud Foundation Services on this blog by our CTO Kit Colbert.

 

VMware Cloud Foundation Services

vSphere 7 with Kubernetes removes barriers between the developer and IT teams with a unified platform for managing both virtual machines and containers in a single hybrid cloud infrastructure stack that is available in the data center, in the cloud, or at the edge.

Access to modern developer services and tools

Development teams need access to modern tools and constructs using Kubernetes APIs, so they can rapidly build and enhance applications for  your organization’s customers, partners and employees. VCF 4 with Tanzu delivers fully compliant and conformant Kubernetes for your development teams with the Tanzu Kubernetes Grid service.

Day 0 and Day 2 operations

Organizations that deploy modern applications are faced with Day 0 and Day 2 operations challenges related to networking, lifecycle, and security of the infrastructure that supports containerized applications. Unfortunately, development and DevOps teams too frequently end up performing the Day 0 and Day 2 operations, reducing their productivity. Modern applications are quite complex in nature, consisting of distributed virtual machines, containers, and GPU resources, to name a few components. VCF 4.0 with Tanzu simplifies operations for containerized applications significantly, by enabling vSphere admins to use the familiar vCenter Server interface to manage containers, including specifying quota allocation, storage and networking policies, backup and high availability policies, and more.

Application Focused Management

Additionally, vSphere with Kubernetes through VCF provides IT admins the ability to create “namespaces.” Namespaces are groups of logically related application components that represent a single application.  Developers and admins can then operate on the namespace instead of on all the individual components –saving time and reducing errors. For example, within vCenter, an admin could apply an access policy to a namespace and the policy would apply to everything within that namespace.

Multiple infrastructure stacks and siloed teams

Currently, organizations that use containers and Kubernetes frequently deploy multiple infrastructure stacks:  hybrid cloud infrastructure like VCF to run existing applications, and public cloud infrastructure to run containers. These siloed infrastructure stacks are often managed by different teams due to the skillsets needed for the different stacks. VCF 4, powered by vSphere with Kubernetes, dramatically eases the process of deploying and operating containers and Kubernetes, and enables your teams to leverage their existing skillsets to support Kubernetes services for your applications. With VCF with Tanzu, organizations can operate existing and modern applications easily and consistently alongside all their existing virtual machine applications.

To learn more about how VMware’s vSphere 7 with Kubernetes and VCF deliver streamlined development and agile operations for your organization, please visit the VMware Cloud Foundation blog site.

Next Steps

Now is the time to transform your organization from zero to Kubernetes by deploying VMware Cloud Foundation 4 with Tanzu so your teams can build, run and manage containerized applications and virtual machines on a single unified platform.


 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursdays into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and stay safe!

The post vSphere 7 with Kubernetes: Announcing General Availability of VMware Cloud Foundation 4.0 to Take Organizations from Zero to Kubernetes appeared first on VMware vSphere Blog.

vSphere 7 – vSGX & Secure Enclaves


Virtualization is a pretty revolutionary idea, adding a layer of software to help us solve other problems in IT. From a risk perspective, though, it adds more things to track. This isn’t to say that ESXi is insecure – far from it. ESXi is the most secure hypervisor around (our Common Criteria certifications help demonstrate this, for example, as well as our outstanding record for fixing & disclosing issues). At VMware we’re honest about risk because we want our customers to be secure. When an application has a secret, like an encryption key or personally identifying information (PII), we must point out that the secret is visible to a lot of layers. First, it’s stored in system memory and in the CPUs. Second, the hypervisor can see it. Third, the guest OS can see it. And last, of course, the application.

Diagram from Intel of how SGX keeps secrets from the OS and Hypervisor

What if there were a way to eliminate the guest OS and the hypervisor from the risk equation, though? As it turns out that’s what Intel’s Software Guard Extensions (SGX) strive to do. SGX allows an application to “conspire” with the CPU to keep secrets from the guest OS and the hypervisor, thereby reducing risk. Some applications are starting to explore this functionality, and in vSphere 7 we expose it to VMs running virtual hardware version 17. We call it “vSGX.”

When you’re running vSphere 7 on hardware that has SGX capabilities you can enable vSGX as part of the VM hardware configuration:

New VM with vSGX Enabled

Sounds great, right? It’s a very interesting concept, using hardware to help keep secrets. However, a wise person once said that there is no such thing as a free lunch. There’s always a catch. In this case, there are a few.

Restrictions on vSGX

First, you need a CPU that supports SGX. These will be Intel CPUs — AMD has a different approach called Secure Encrypted Virtualization (SEV). As of this writing there aren’t many Intel server-class CPUs that support SGX, and they’re all single socket.

Second, there are some operational considerations. When you keep secrets from the hypervisor, the hypervisor can’t help you with vMotion, for instance. Or Fault Tolerance. You can’t suspend and resume or take snapshots because the hypervisor can’t copy memory to disk. Some of the Carbon Black Workload guest integrity checks can’t work, either.

You might think you can live without these things, but how will you patch your environment if you can’t use vMotion? What about DRS? These are fundamental things you’re denying yourself. SGX helps us add security, but security is a lot more than just one feature. Information security professionals think about security in terms of the CIA triad: confidentiality, integrity, and availability. While SGX might increase confidentiality it may do so at the expense of integrity and availability if an application isn’t designed properly.

Hardware-assisted secure enclaves are very interesting, and just the tip of the iceberg when it comes to what CPU hardware can help us with. Like many situations in IT, though, it’s best when their use is a collaboration between BOTH the infrastructure and the applications.

 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursdays into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and please stay safe.

The post vSphere 7 – vSGX & Secure Enclaves appeared first on VMware vSphere Blog.

Use Your Enterprise DHCP Services with VMware Cloud on Dell EMC


VMware Cloud on Dell EMC is a hyperconverged hardware and software-defined data center stack that is jointly engineered by VMware and Dell EMC and includes complete lifecycle management. With VMware Cloud on Dell EMC, you can satisfy the latency or locality requirements of your business applications through a streamlined provisioning process, while experiencing the benefit of reduced operational overhead – because it is all delivered as a service to your on-premises data center or edge location.

When you make VMware Cloud on Dell EMC part of your infrastructure, there are several services to consider integrating with the rest of your enterprise. In the last post, we took a look at how the NSX-T DNS forwarders can be pointed to your internal name servers, both to improve manageability and to prepare the way for seamless workload migration when connecting with existing VMware vSphere environments.

For workloads that use dynamic IP addresses, you also have the option to relay DHCP requests from the VMware Cloud on Dell EMC SDDC rack to your existing enterprise IP address management services. In this post, we will take a closer look at how it works.

Each VMware Cloud on Dell EMC SDDC deployment is fully assembled, installed, and configured before it arrives on site. This includes vSphere compute, vSAN storage, and NSX-T network virtualization. Although NSX-T does include built-in DHCP services, many customers prefer to manage the address scopes from a central location to better facilitate troubleshooting, auditing, or reporting. This is accomplished by relaying requests from the SDDC to existing DHCP servers.

It is straightforward to enable the DHCP relay and disable the built-in DHCP service, but keep in mind that the change is an all-or-none global configuration and cannot be applied selectively to different network segments on the SDDC. For that reason, it’s a good idea to decide up front if your infrastructure architects require control of the address management service. The configuration can be changed afterwards, but it will disrupt running workloads since the network segments must be reconfigured.

In the sections below, you will see the steps required to configure the relay service and the compute gateway firewall.

Create a New DHCP Scope

The first step is to add a new scope for the dynamic IP addresses and related attributes for the new network segment that will be created on the VMware Cloud on Dell EMC SDDC. In the example below, you can see a standard Windows DHCP server configuration. The key items to double-check are the default gateway and DNS servers that you intend the clients on the remote side to use.

Enable the DHCP Relay Service and Create New Network Segment

As stated above, it’s best to enable DHCP relay before any workload network segments are created on the VMware Cloud on Dell EMC SDDC. Otherwise, any existing network segments will need to be temporarily placed into the disconnected state, which will be disruptive to running applications. Configuring the relay to point to the central DHCP server is straightforward, as seen in the following microdemo.

One admittedly unintuitive thing to note: when you create a network segment, the user interface currently requires that a DHCP address range be entered, although the values are effectively ignored when using the relay.

Configure Firewall to Permit DHCP Service Access

With the default configuration using the built-in DHCP service, no DHCP traffic leaves the SDDC, so no additional firewall rules are needed. However, when changing over to a DHCP relay, the lease request needs to travel out to the enterprise network, so the firewall must be configured to allow this communication.

Some customers may opt to create an any/any/any rule that will permit all traffic in and out of the new VMware Cloud on Dell EMC SDDC – just as similar traffic would typically be allowed on existing infrastructure in a data center. If that is the case, it would not be necessary to create a rule specifically for DHCP relay. On the other hand, sites with a higher security posture will employ firewalls and access-control lists throughout the network. The instructions below will follow the principle of least privilege to facilitate more granular control.

On a technical note, it helps to remember that a DHCP relay communicates on behalf of the client to obtain a lease from a server – the relay, not the client, is the source IP address of the request. In this scenario the relay IP is also the address of the compute gateway interface that was specified during creation, as seen in the previous step.

Using the VMware Cloud Service portal, create two new inventory groups, each containing a single IP address –  one to match the network segment compute gateway interface and the other to identify the destination DHCP server on your network. In the future, if additional network segments require relay access, those gateway addresses can be added to the group.

Once the groups are created, the last step is to create a new firewall rule on the compute gateway to specify that the relay traffic should be allowed. See the full firewall configuration workflow in the microdemo below.

Takeaways

VMware Cloud on Dell EMC is part of the broader VMware Hybrid Cloud, which provides consistent infrastructure so that your applications can run anywhere using the same processes, procedures, automation, and staff skill everywhere. You can integrate your existing enterprise services with a new VMware Cloud on Dell EMC SDDC rack for seamless management, migration, and other workflows.

For more information on VMware Cloud on Dell EMC, read the Technical Overview paper, follow us on Twitter, or get in touch with your VMware account team.

The post Use Your Enterprise DHCP Services with VMware Cloud on Dell EMC appeared first on VMware vSphere Blog.

vSphere 7 – Introduction to Kubernetes Namespaces


The next component I’m going to cover in vSphere with Kubernetes for the vSphere Administrator is Supervisor Namespaces. Namespaces are set up by the vSphere Admin and allow them to control resource limits and other policies. We’ll go into those details below!

What is a Kubernetes Namespace?

In a large Kubernetes cluster with many projects, teams, or customers, there may be a need to carve out a piece to ensure fair allocation of resources and permissions. A Namespace provides that method to better share the resources of a Kubernetes cluster. It also allows you to attach policies and authorization, and it provides the scope for Pods, Services, and Deployments in a cluster. See the Kubernetes website to learn more about Kubernetes Namespaces.
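
In upstream Kubernetes terms, that carving-out is typically a Namespace paired with a ResourceQuota. A minimal sketch, with illustrative names and limits:

```yaml
# Hedged sketch: a Namespace plus a ResourceQuota; names and limits are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"              # total CPU the team can request
    requests.memory: 32Gi           # total memory the team can request
    persistentvolumeclaims: "20"    # cap on the number of PVCs
```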

What is a Supervisor Namespace?

As I mentioned previously the Supervisor Cluster IS a Kubernetes Cluster and Kubernetes is great at running “platforms.” And sometimes, like in the case of a TKG cluster, that platform is Kubernetes! In this image you can see where the two types of Namespaces are used.

Figure 1 – An SDDC running vSphere with Kubernetes: a Supervisor Cluster with two Namespaces running Pods and a TKG cluster

Now, for a LONG time in vSphere we’ve had the concepts of Resource Pools, RBAC, encryption and other vSphere features and capabilities. But what has been missing is a wrapper of sorts that a vSphere administrator could leverage to make Day 2 operations easier. Each feature or component is normally manipulated individually or as part of a tool like a PowerCLI script.

Namespaces have the potential to change all of that. With Namespaces I can give a project, team or customer a “sandbox” they can play in. They can’t see the other sandboxes and they can’t expand past their sandbox limits. The Namespace construct allows the vSphere admin to set several policies in one place. The user, team, customer, or project can create new workloads to their heart’s content within their Namespace.

Remember, VM isolation already ensures the VMs can’t see each other. NSX-T isolation ensures the workloads can’t communicate with each other. I’ll keep repeating, “Same rules apply. We’re leveraging the proven infrastructure of vSphere.” You can run Development and Production on the same infrastructure and control the impact of each with Namespaces.

As a vSphere Administrator looking to simplify operations this could make for some interesting capabilities going forward. It could also save you a significant amount of time and effort in standing up new applications. There’s nothing in vSphere with Kubernetes that says the DevOps team gets to have all the fun!

Let’s get into some of the things the vSphere Administrator can set on a Namespace.

Resource Limits

It sounds a lot like a Resource Pool because it’s backed by a Resource Pool! You can see this in the image below.

Figure 2 – Namespace configuration in the vSphere Client

I have created a Namespace called “demo-app-01” and it appears as a Resource Pool with a new Namespace icon. If I click on “Capacity and Usage” I’m presented with a dialog box that allows me to set the various CPU, memory, and storage limits.

Figure 3 – Namespace Resource Limits configuration

Permissions

In this example, I’m granting the user CPBU.LAB\mike permissions to view content in this Namespace.

Figure 4 – Adding permissions to a Namespace
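
In upstream Kubernetes terms, granting a user view access to a namespace corresponds to a RoleBinding against the built-in “view” ClusterRole. A minimal sketch follows; the binding name and user name format are illustrative, since the exact subject name depends on the configured identity source.

```yaml
# Hedged sketch: bind the built-in "view" ClusterRole to a user within one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mike-view               # illustrative name
  namespace: demo-app-01
subjects:
  - kind: User
    name: mike@cpbu.lab         # illustrative; format depends on the identity source
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                    # read-only access within the namespace
  apiGroup: rbac.authorization.k8s.io
```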

Storage

In Kubernetes there are Storage Classes. In vSphere there is Storage Policy Based Management. These map one to one. On the left are the VM Storage Policies and on the right are the Kubernetes Storage Classes.

Figure 5 – VM Storage Policies

Figure 6 – Namespace Storage Policies

Note: If I want all data in the Namespace to be encrypted, I would select a storage policy with VM Encryption enabled. All new VMDKs at that point would be created as encrypted, including persistent volume VMDKs! (Same rules apply as before: a third-party KMS is necessary for VM Encryption.)
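
For context, a developer consumes one of these policies from inside the Namespace by naming the matching storage class in a PersistentVolumeClaim. A minimal sketch; the storage class name below is illustrative and must match a policy actually assigned to the Namespace.

```yaml
# Hedged sketch of a PersistentVolumeClaim against an assigned VM Storage Policy.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data                  # illustrative name
  namespace: demo-app-01
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsan-default-storage-policy   # illustrative; matches an assigned policy
  resources:
    requests:
      storage: 10Gi
```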

Registry Service

The Registry Service allows for a private registry of container images. It is based on the VMware Harbor project. Why would you use a private registry? To ensure that only “approved” container images are running in your infrastructure. Downloading untested, outdated container images could lead to some very interesting talks with your compliance and security teams.

When you configure the Registry Service, a private Namespace is created to run the registry. You enable the Registry Service on a per-cluster basis. The following image shows a configured Registry Service.

Figure 7 – Image Registry settings
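
Once images are pushed to the registry, workloads in the Namespace reference them by the registry’s address and project path. A minimal sketch, using a hypothetical registry address, project, and image tag:

```yaml
# Hedged sketch: a Pod pulling an "approved" image from the private registry.
apiVersion: v1
kind: Pod
metadata:
  name: demo-web                   # illustrative name
  namespace: demo-app-01
spec:
  containers:
    - name: web
      # registry address and project are environment-specific placeholders
      image: registry.example.local/demo-app-01/nginx:approved
```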

Network

As part of the installation and configuration of vSphere with Kubernetes, you need to configure NSX-T to provide the networking used by vSphere to support Kubernetes. Within vCenter you can display the settings for the Management Network and Workload Network.

Figure 8 – Namespace Network settings

You can also make minor edits to some of the settings for both networks right here.

Figure 9 – Management Network Settings

Figure 10 – Workload Network Settings

Wrap Up

I hope this introductory series of blogs is helpful for those of you that are vSphere Administrators. It’s meant to help you better understand the components and features that make up vSphere with Kubernetes. If you have any ideas on vSphere with Kubernetes topics that you’d like to learn more about from a vSphere Administrator standpoint then please reach out to me on Twitter. I’m @mikefoley and my DM’s are open.

Thanks,

mike


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursdays into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and please stay safe.

The post vSphere 7 – Introduction to Kubernetes Namespaces appeared first on VMware vSphere Blog.

#vSphere7 Launch TweetChat with #vSAN7 & #CloudFoundation4


Our last #vSphereChat on Twitter was truly something special: not only did we host a tweet chat across THREE different VMware Twitter accounts, but we also discussed our latest launches – #vSphere7, #vSAN7 and #CloudFoundation4. The transcript is below.

There will be another tweet chat, on vSAN, coming up on April 30, 2020 at 11:00 AM Pacific. Have questions or thoughts on hyperconverged storage? Bring them! Just follow the #vSANchat hashtag on Twitter.

For the March 26th tweetchat vSphere expert Bob Plankers was joined by members of the vSAN team, John Nicholson and Pete Flecha, as well as Josh Townsend and Rick Walsworth from the Cloud Foundation team. In case you missed it, we’re recapping the vSphere event below.

Q1: How does vSphere 7 deliver essential services for modern hybrid cloud?

Bob Plankers

A1: That’s a loaded question. It might be easier to say what it doesn’t do at this point. vSphere 7 is the core of the virtual data center and brings all sorts of security and availability and performance features with it to make it easy to run workloads.

A1 Part 2: I was talking to someone this morning about how vSphere 7 helps us reduce risk in our environments. Features like vMotion and DRS make the infrastructure self-healing and flexible. VM Encryption helps us be secure with whatever equipment we have.

A1 Part 3: The new Lifecycle Manager features in vSphere 7 are going to be pretty nice, too. Firmware management added to vSphere patching… integrated HCL checking… integrated compatibility matrix checking… lots of manual work by admins now eliminated.

Q2: How does vSphere 7 simplify lifecycle management?

Bob Plankers

A2 Part 1: In vSphere 6.7 and earlier we had Update Manager. Now in vSphere 7, we get Lifecycle Manager, which also cares about hardware firmware and other products. So, for instance, we can make sure that our Dell PowerEdge R640s get the right disk controller firmware.

A2 Part 2: Firmware is really important to vSAN 7 and the vSAN Chat folks as well. Disks, controllers, all of that need to be running versions that are safe and perform well.

A2 Part 3: We care about other solutions, too. Have @vRealizeOps installed? vSphere 7 Lifecycle Manager will check to make sure it’s the right level for a vSphere upgrade.

A2 Part 4: We have lots of info on vSphere 7 Lifecycle going up on our blog. Subscribe, we have blogs published every Monday – Thursday until May. It’s nuts!

Q3: What is “intrinsic security” and what does it mean for vSphere 7?

 Bob Plankers

A3: We use “intrinsic security” a lot. It means that we consider security as part of our product development, and not just an afterthought once it’s built. When you do that you get better, more comprehensive security that’s easier to operate and use.

A3 Part 2: It also means that our security tools, such as @vmw_carbonblack, integrate well with vSphere 7 and form a holistic solution. The goal is to make security easy to enable and operate.

A3 Part 3: At the vSphere 7 level we do a LOT of hardening of the hypervisor and tools, so when you install them you get the best practices right out of the proverbial box.

A3 Part 4: The folks in our @vmwarevcf & VMware Cloud Foundation chat also have really great guides to hardening and compliance in the VMware Validated Designs, which cover PCI DSS 3.2.1 and NIST 800-53 v4 right now.

A3 Part 5: There are also all the features like VM Encryption, vTPM, and so on. Easy to enable, easy to use. In fact, vSphere 7 (and 6.7) are the only hypervisors to natively support Microsoft Device Guard and Credential Guard. It makes it easy to be secure and in compliance!

A3 Part 6: Also keep an eye on Identity Federation!

Q4: How can vSphere7 help with gaps between IT operations and developers?

Bob Plankers

A4: vSphere 7 with Kubernetes is going to go a long way towards this. A couple of good resources to start with can be found here and here.

A4 Part 2: Namespaces is basically a really killer version of the vApp concept, with developer self-service in it.

A4 Part 3: vSphere 7 Namespaces lets a vSphere Admin set resource limits and “guard rails” but then developers can do their own thing without having to open tickets all. the. time. GLORIOUS for everyone involved.

Q5: vSphere 7 with Kubernetes changes things for deploying & operating applications. What are the options now?

Bob Plankers

A5: vSphere with Kubernetes is what Pacific turned into. Options for Kubernetes clusters are vSphere Pod Service clusters or Tanzu Kubernetes Grid clusters. @mikefoley posted this just this morning!

A5 Part 2: Everything vSphere is and has been is all still there, but we also add containers both as a native object in ESXi and the Tanzu Kubernetes Grid (TKG) clusters.

AND…

A5 Part 3: You can use the vSphere Client to do admin tasks, like always, or PowerCLI, or whatever tool you want. Still works. But you can also ask Kubernetes to do it, too. Mix & match infrastructure all you want in your YAML files. Flexible, nice APIs, all in Namespaces.

A5 Part 4: And because it’s still just vSphere 7 (just!) under the hood, it already works with all your corporate governance and processes and such. Problem solved; lets people focus on productive things, not red tape.

Q6: Which use cases would vSphere 7 be ideal for?

Bob Plankers

A6: ALL THE CASES.

No, really. It’s very flexible, scales from very small (Mac Mini) to very big (dozens of CPU sockets and cores). Networking is very flexible and fast. And with vSphere 7’s focus on improving vMotion and DRS for big workloads, there’s…

A6 Part 2: …very little reason now to NOT run something in vSphere 7. We just need to keep working on the apps people that doubt it and tell us their apps cannot be virtualized. That’s absolutely not true, and a disservice to their organizations.

A6 Part 3: Same for vSAN Chat and vSAN 7, too. A lot of the issues with adopting new releases like #vsphere7 come down to people, and their fears. “If we automate something what will I do for my job?” — turns out there’s always something to do!

Q7: How does vSphere 7 with Kubernetes enhance @VMware Cloud Foundation 4?

 Bob Plankers

A7: We should get the @vmwarevcf folks in this, too, @RickWalsworth. In short, it brings K8s functions into an enterprise in very short order. Want to get developers going with modern apps quickly? VCF is a GREAT way to do that.

Rick Walsworth

A7: Yes, there are add-on SKUs so that existing vSphere customers can add on the remaining NSX, vSAN, vRealize, and SDDC components to complete the full stack. Go here for more info.

Q8: Explain how vSphere 7 integrates with vSAN 7 and VMware Cloud Foundation 4.

Bob Plankers

A8: VMware Cloud Foundation is very flexible about storage and infrastructure. You can use the storage you have for workloads, but for the VCF management cluster, it relies on vSAN 7. It’s pretty amazing and eliminates a TON of admin overhead.

Rick Walsworth

A8: The ability to resize workload domains on the fly provides the ability to allocate capacity on demand so that you can allocate storage and network resources easily without having to deploy new infrastructure.

Q9: What is one thing about the vSphere 7 release that you want users to know?

Bob Plankers

A9: VMware and vSphere are really focusing on simplifying things, making things easy to use, making things fast. DRS improvements, vMotion improvements, Lifecycle, Identity Federation, Precision Clocks, vSGX, all sorts of things.

A9 Part 2: Check out our blogs on all the new things. And join us on April 2 for a vSphere Admin launch event!

Q10: If vSphere 7 were a superhero, which superhero would it be and why?

Bob Plankers

A10: This is going to sound bad, but Thing from The Fantastic Four. SOLID, STABLE, GREAT LOOKING!

Narrator’s Voice: At this point, having said “Thing,” Bob was asked to stop tweeting. 🙂

Follow the additional VMware experts who participated in this chat, as well as the vSAN and Cloud Foundation chats, including John Nicholson (@Lost_Signal), Pete Flecha (@vPedroArrow), Josh Townsend (@joshuatownsend) and Rick Walsworth (@RickWalsworth).

 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursdays into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and please stay safe.

The post #vSphere7 Launch TweetChat with #vSAN7 & #CloudFoundation4 appeared first on VMware vSphere Blog.

vSphere Web Client Support beyond Adobe Flash EOL


vSphere Web Client in vSphere 7

In vSphere 7, the vSphere Client (HTML5) is the only client for managing all vSphere workflows. Starting with vSphere 6.7 Update 1, the vSphere Client (HTML5) became feature complete, supporting the same vSphere management capabilities as the vSphere Web Client (Flash). VMware discussed the topic in this blog post in October of 2018: https://blogs.vmware.com/vsphere/2018/10/fully-featured-vsphere-client-in-vsphere-6-7-update-1.html

VMware also bid farewell to the vSphere Web Client in August of 2017, in this blog post: https://blogs.vmware.com/vsphere/2017/08/goodbye-vsphere-web-client.html, reminding customers that there would be no vSphere Web Client in the coming versions of vSphere.

vSphere Web Client beyond Adobe Flash End-of-Life

The vSphere Web Client uses Adobe Flash, which is being sunset soon. By December 2020, Adobe will stop supporting Flash, and all the browser vendors are aligned with this timeline and will disable running Flash. Because of this, there is a window of about 11 months where the Flash client (vSphere Web Client) may not work in customer environments once browsers are upgraded to the latest version.

VMware’s recommendation is to upgrade vCenter Servers to vSphere 6.7 Update 3 and use the HTML5-based vSphere Client to manage vSphere environments. Customers do not need to upgrade ESXi hosts to 6.7 as a method of addressing Flash client supportability. Note that a 6.7 vCenter Server can manage vSphere 6.5 ESXi hosts.

VMware will continue to support the vSphere Web Client for vSphere 6.5 and 6.7 until the end of support for these versions, on browsers where Adobe Flash can run. Additional information about VMware’s support for the vSphere Web Client on vSphere 6.5 and 6.7 is covered in this KB article, vSphere Web (Flash) Client Supportability and End of Life (78589): https://kb.vmware.com/s/article/78589


 

Feedback?

Please leave any feedback in the comment section below.

 

 


We are excited about vSphere 7 and what it means for our customers and the future. Join us Monday through Thursdays through June 2020 for posts highlighting the new features, capabilities, and improvements in vSphere 7 and vSphere with Kubernetes. It’s easy to stay informed, please follow us directly using the RSS feed, on Facebook, and on Twitter.

The post vSphere Web Client Support beyond Adobe Flash EOL appeared first on VMware vSphere Blog.

Announcing Flatcar Linux Support on vSphere


We are excited to share the news that Flatcar Container Linux by Kinvolk is now officially supported on vSphere. Flatcar Container Linux is a popular container OS and the perfect drop-in replacement for CoreOS Container Linux.

CoreOS Container Linux will be end-of-life on May 26, 2020, so users who want to receive ongoing maintenance and security updates should migrate to a supported guest OS by that date. With VMware and Kinvolk’s collaboration to bring Flatcar Container Linux to vSphere, Flatcar is a supported path forward for CoreOS users.

To find out how to run Flatcar Container Linux as your guest OS in vSphere, please see the documentation on running Flatcar in VMware and on how to update a VM from CoreOS to Flatcar Container Linux. The VMware Compatibility Guide (VCG) echoes the support for Flatcar Linux. Running Flatcar Container Linux is supported in vSphere 6.7 and higher.

VMware and Kinvolk are collaborating to further enhance the user experience for Flatcar Linux as a guest OS, and as a foundation for Kubernetes in vSphere environments. VMware provides full support to its customers deploying Flatcar. Kinvolk offers subscriptions for enterprise support; please email hello@kinvolk.io to discuss a Flatcar Container Linux Subscription.

 

The post Announcing Flatcar Linux Support on vSphere appeared first on VMware vSphere Blog.


vSphere 7 – vSphere Trust Authority


At VMware we talk a lot about intrinsic security, which is the idea that security in a vSphere environment is baked into the product at a deep level, not sprinkled on as an afterthought. Security is a huge focus of vSphere 7. When we talk about the new security features it’s usually about certificate management improvements, vSGX, Identity Federation, and vSphere Trust Authority, but all of the massive improvements to patching and upgrades, via vSphere Lifecycle Manager and Update Planner, are security features, too. If you cannot patch, you can’t secure your environment from vulnerabilities that are discovered, or from operational issues like bugs and driver issues (theoretically speaking, of course). 🙂 Operational issues affect availability, which is directly related to security, and it is all tied back to risk.

One of the ways you see vSphere 7 improving intrinsic security is by allowing vSphere Admins to manage deeper into the infrastructure itself. You see Lifecycle Manager now able to work together with tools like Dell EMC OpenManage Integration for VMware vCenter and HPE iLO Amplifier to manage server firmware as part of the remediation cycles. This is a huge deal because there are vulnerabilities in server & management controllers, too, and those are not fixed by simply updating ESXi. We need to make sure the hardware gets updated, too, so that it is a solid and safe foundation for us to build our infrastructures on.

Foundations of Trust

vSphere Trust Authority (vTA) is a tool to help ensure that our infrastructure is safe & secure, and to ensure that if its security is ever in question we act to repair it. To understand vTA we need to look back at vSphere 6.7, which introduced support for Trusted Platform Module (TPM) 2.0 and the host attestation process. A TPM is an inexpensive component ($20 to $40 US) that can be installed inside servers manufactured in the last 10 years or so. They’re often overlooked when ordering servers, so you may not have them, but they are modular and can be added after a server is deployed (though it’s much easier if you order them preinstalled – a tip for your next hardware order).

TPMs do three things:

First, a TPM serves as a cryptographic processor, and can generate cryptographic keys as well as random numbers. A TPM is connected over a serial bus inside the server, and when I say “serial” I mean less Universal Serial Bus (USB) and more “1990s modem speeds.” Truth is, most cryptographic operations are done by the main CPUs now, because those CPUs have very fast cryptographic functions (which is why technologies like VM Encryption and vSAN Encryption have very low overhead). Nevertheless, modem speeds are just fine for storing and retrieving keys and certificates and the like, which are all very small. The TPM random number generator can also be very helpful in jumpstarting, or “seeding,” the random number generators in CPUs and operating systems that run on bare metal, like hypervisors.

Second, a TPM can store cryptographic materials, like keys, certificates, and signatures. It has techniques called “binding” and “sealing” that help control how the secrets it stores can be retrieved. Furthermore, to prevent people from physically stealing a TPM and accessing the secrets, TPMs are cryptographically bound to the server you first enable them on. Do not buy any off eBay or try to recycle them from older systems (for fun search eBay for “TPM server” and you will see all sorts of them!).

Last, a TPM can help us determine if a system’s integrity is intact by doing something called “attestation.” The TPM can measure and store security information, and then summarize it in a way that is very cryptographically strong. If a server has UEFI Secure Boot and its TPM enabled, vCenter Server can collect these security measurements and determine if the system booted with authentic software, and in a configuration we trust. That is attestation, and you can see it in the vSphere Client’s “Security” section of the Monitor tab:

Image of the vSphere Client's Security Tab Showing Attestation Status

vSphere 6.7’s attestation is view-only, though. Beyond an alarm when there is a problem there are not really any repercussions for failing attestation. That means that a secure workload, such as one using VM Encryption to protect its data on disk, could potentially be moved by DRS back on to a questionable host. It also means that if vCenter Server, which handles all the encryption keys for a cluster, is running inside a cluster the encryption keys might be vulnerable. Last, we cannot encrypt vCenter Server, either, because there is a dependency loop.

vSphere Trust Authority

This is where vSphere Trust Authority comes in. vTA establishes its own management cluster, which serves as a hardware root of trust. This cluster is a very secure, heavily scrutinized, small cluster outside of the normal vSphere clusters in an environment. Ideally that cluster is separate from all other clusters and has a very small number of admins. The hosts that comprise this cluster run ESXi and can be very small machines since they likely will not be running any workloads. In lieu of a separate cluster, an existing management cluster can also serve as the vTA root of trust, making it easy to get started.

vSphere Trust Authority Diagram

Once the vTA management cluster is established it handles two big tasks. First, it takes over the distribution of the encryption keys from the Key Management Servers (KMS). This means that vCenter Server no longer is in the critical path for those keys, which also means that we can encrypt vCenter Server to protect it. Second, vTA handles the attestation of other hosts. Because vTA handles the encryption keys, if a host fails attestation vTA will withhold the keys from it. This prevents secure workloads from moving to that host until the problem can be resolved. That is exactly what we want, since we do not want our secrets being given to potentially untrustworthy servers.

In Conclusion

vSphere Trust Authority is a new and very foundational technology right now, helping us build trust in our hardware and software configurations at deeper levels. At first glance it might look like vSphere Trust Authority is adding complexity, with a separate cluster and additional configuration work. For some folks that is true. However, security-oriented customers with larger deployments will find that this has the very real potential to simplify operations in their environments. The vTA management cluster can be used to attest thousands of hosts and clusters, so the cost and complexity of that cluster is offset by the lower risk and higher trust that vTA brings to their entire enterprise.

 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursdays into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and please stay safe.

The post vSphere 7 – vSphere Trust Authority appeared first on VMware vSphere Blog.

vSphere 7 – Why Upgrade? Here’s What Beta Participants Think!


Before I joined VMware in late 2018 I was a VMware customer for over 15 years. One of the many things I looked forward to was participating in the vSphere Beta Program. It was a chance to see new features and offer feedback to the product managers and engineering teams more directly. It also helped me prepare for upgrading my environments. Every new release of vSphere has features aimed at helping a vSphere Admin do their job better, faster, and with less overhead & hassle, so it always made sense to thoughtfully upgrade after some testing with the released product.

One of the reasons I have always liked VMware, as a company, is that constructive feedback is always appreciated. The truth about software companies, whether it is VMware or others, is that few of the people involved in the development of the products see the products like a true vSphere Admin does. vSphere Admins are in the trenches, dealing with applications and users and compliance auditors and ancient storage arrays and the whole mess. It is very important that we ask them what they think, and we encourage them to always tell us the truth. Now that vSphere 7 has been released we did just that, asking people in the beta program to answer some questions.

Are you likely to upgrade to the latest version?

Graph showing responses to the question "Would you upgrade to vSphere 7?"

Whoa. From my perspective the vSphere 7 beta was one of the most stable ones I have run in my many years, but it is stunning to see 285 of 307 vSphere Admins answer “yes” to the question of whether they’d upgrade to it. Even more so, it is not the typical bell curve you’d see, as 252 of those responses were an enthusiastic yes. Of course, a survey is one thing and real life will be different, especially with COVID-19. Still, the beta community tends to be somewhat vocal, and the respondents volunteered to complete the survey, so if people were angry or disappointed I would expect them to speak up like they usually do.

Let’s look at what they said when asked WHY they’d upgrade.

What is the top feature worth upgrading for?

Graph showing responses to the question "What is the feature that you would upgrade for?"

The results here are a real “who’s who” of vSphere 7 features. No question here that people are looking forward to the lifecycle management, upgrade planning, and Kubernetes features in vSphere 7. I really appreciate the enthusiasm from the “ALL OF THEM” respondents, too. Not everybody answered this question, which is typical for a freeform text field in a survey, though. To gauge whether the positive response is real I read through the other comment fields, and while some folks had a few issues it appears most were resolved with beta refreshes. That’s a very positive thing by itself!

It’s worth mentioning that the vSphere Technical Marketing team is blogging about all of these features through May, with deeper dives through the summer, if you want to learn more. And if you want to be part of ongoing beta programs here at VMware please reach out to your account teams who can help you set that up.

Conclusion

I think the data speaks for itself, but if there is a conclusion to be made it is that vSphere 7 is a solid release. Should you upgrade to it? Yes, but thoughtfully. It is very important to make sure that all the components in your environment, like backup systems, hardware, storage, and so on are compatible. The VMware Compatibility Guide is a great tool for determining hardware readiness, and if your servers aren’t on there yet ask your vendors, as they are the ones that do the testing.

In the meantime you can do what the VMware Hands-on Labs do and run vSphere 7 nested inside another vSphere environment. Did you know you can set the guest OS type to ESXi, and that ESXi has VMware Tools? While it isn’t officially supported, it’s great for testing at any time, especially since you can take a snapshot of the environment and reset it if you do something bad to it. It’s a very good way to practice the upgrade process to develop confidence in it.

Don’t forget the VMware Test Drive, either. Test Drive is a great way to experience quite a few VMware products in an environment that’s already configured.

As always, thank you for being our customers. This whole post is about feedback, and we mean it when we say that if you have some please let us know.

Customer quote about vSphere Lifecycle Manager

(Thank you to Liz Wageck & Kristine Andersson for the data and work with the beta community)

 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursdays into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and please stay safe.

The post vSphere 7 – Why Upgrade? Here’s What Beta Participants Think! appeared first on VMware vSphere Blog.

vSphere 7 – vLCM Compatibility Checks


The previous blog post on vSphere Lifecycle Manager (vLCM) covers all its capabilities and explains how vLCM ensures consistency across all ESXi hosts in a cluster using a declarative model. The desired state includes the ESXi base image, optional vendor addons, and firmware & drivers. When hosts drift from the desired state (the configured single image), customers can remediate the hosts back to compliance.

This blog post will go into more details on the integrated hardware compatibility checks in vLCM.

Hardware Compatibility Checks

When a vendor integration in vLCM is set up using a supported Hardware Support Manager (HSM) – Dell or HPE in the first vSphere 7 release – customers can verify hardware compatibility. Checking the image compliance for a desired state that includes the optional Firmware and Drivers Addon provides insight into the devices on the ESXi hosts and the firmware versions they run versus the firmware versions defined in the image.

In the screenshot example above, you’ll notice that the hosts in this cluster are not compliant with the desired state. It immediately shows us that various devices, including the storage controller, are running older firmware versions. To rectify the desired state drift, we can remediate the hosts. The HSM will then work to update the firmware on the devices in the host, all done from within the vSphere Client!

The ability to easily upgrade device firmware using vLCM is a powerful capability that provides reliability and increased performance, as hardware devices are only as good as their firmware and drivers.

HCL Automatic Checks

Running the best firmware is key for all hardware devices, especially when running hyper-converged infrastructure (HCI) with vSAN. Think about the storage controller and its impact on the storage performance and storage media used for vSAN. Customers will greatly benefit if all storage components in an ESXi host perform as optimally as possible, using the best possible firmware.

Checking for firmware and driver updates using the desired state in vLCM provides this capability. We expanded the checks for storage controllers so that the firmware is also checked against the VMware Compatibility Guide (VCG, also referred to as the HCL). vCenter Server with vLCM syncs with the HCL.

When you examine the hardware compatibility option under Cluster > Updates it provides the option to discover and resolve potential hardware compatibility issues. In the first release, this is limited to vSAN storage controllers, although selected NVMe devices are also checked as they use embedded storage controllers.

Taking a closer look at the storage controller in this screenshot, we notice it is shown as a compatible device. So not only is the firmware of the storage controller checked against the desired state (including the additional firmware and drivers addon), it is also checked against the HCL.

The screenshot below shows the firmware of the Dell HBA330 adapter in all hosts is up to date. It is running the same version as recommended in the HCL. A direct link to the HCL is also provided if you want to review the listing.

 


 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursdays into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and stay safe!

The post vSphere 7 – vLCM Compatibility Checks appeared first on VMware vSphere Blog.

Integrate VMware Cloud on Dell EMC with Your Enterprise Infrastructure via AD Authentication and Hybrid Linked Mode


VMware Cloud on Dell EMC is a hyperconverged hardware and software-defined data center stack, jointly engineered by VMware and Dell EMC, that includes complete lifecycle management. Simply select the number and size of Dell EMC VxRail hosts needed for your on-premises edge or data center and leave the provisioning, as well as the ongoing lifecycle management, to VMware.

In recent posts, we took a look at how to configure the DNS forwarding and how to set up DHCP relays to integrate more seamlessly with your enterprise services. Now we will walk through the next stage of typical configurations for a vSphere hybrid cloud – AD authentication and Hybrid Linked Mode.

Configure Active Directory Authentication

By default, each new VMware Cloud on Dell EMC SDDC deployment includes an admin account called cloudadmin@vmc.local which is initialized with a random password during bring-up. This password can be obtained from the secure VMware Cloud Services portal, along with the unique URL for the vCenter Server that is managing the cluster.

Screenshot of dialog box used to obtain the cloudadmin password

It’s generally not a secure practice for multiple administrators to share a single admin account, as it really complicates things like auditing, password rotation, and handling employee terminations. The solution is to have each administrator log in with their own user account, and a popular way to configure this in vSphere is to incorporate Active Directory (AD) as an authentication source.

Long-time vSphere admins, please note that VMware Cloud on Dell EMC can only be configured for AD authentication over LDAP; it cannot be joined to your AD and cannot use Integrated Windows Authentication, which is deprecated.

Enabling Active Directory over LDAP is straightforward and has two main steps. First, navigate to the single sign on configuration panel and click ADD under the identity sources tab. Fill in the fields with details that are appropriate for your AD domain, as seen in the image below. For more details on the configuration, see the product documentation  and also be aware of the pertinent Windows Server LDAP channel binding concern.

Setting up AD over LDAP

Once the new identity source is available, the second step is to grant permissions. The best approach is to designate a group in AD that can be assigned the CloudAdmin role in vCenter. This way, any administrator that is a member of that AD group will be able to log into vCenter without resorting to use of that shared cloudadmin@vmc.local account.

Granting permissions to AD group

Configure Hybrid Linked Mode

Each vCenter Server, whether on-premises or part of VMware Cloud, is a separate entity. By default, an administrator would be required to log into each vCenter individually to manage the underlying SDDC resources. This is a challenge for larger environments that typically have many vCenter Servers. The solution that has been available for quite a long time is to link the different vCenter Servers together, which can be done for on-prem systems through Enhanced Linked Mode (ELM). But to link an on-prem vCenter and a managed VMware Cloud on Dell EMC vCenter, administrators must use Hybrid Linked Mode (HLM). Please take a look at the product documentation for more specific details. The gist of the procedure is that you first download the vCenter Cloud Gateway Appliance and deploy it in the on-prem SDDC, then you link the VMware Cloud SDDC. The download link can be conveniently found under the hybrid cloud administration settings:

Download link for the GW appliance

Deploy the gateway appliance to on-prem vSphere infrastructure and log in with admin credentials to complete the configuration:

Link the domain

After the two domains are linked, log in to the gateway with your individual admin account and observe that both vCenter Server environments now show in a single UI.

Two vCenter Server environments in one view

Takeaways

When you add a VMware Cloud on Dell EMC SDDC to your data center, you can manage the resources more effectively by authenticating against your enterprise Active Directory and by linking to your existing vCenter Server environment. This enables workflows such as seamless virtual machine migration and deployment across the two SDDCs.

For more information, see the VMware Cloud on Dell EMC product website, follow us on Twitter, or download the Technical Overview paper.

The post Integrate VMware Cloud on Dell EMC with Your Enterprise Infrastructure via AD Authentication and Hybrid Linked Mode appeared first on VMware vSphere Blog.

vSphere Mobile Client App


(By Abhijith Prabhudev, vSphere Product Manager)

vSphere Mobile Client App

Mobile and other handheld devices have become a part of our lives, helping us keep up with everything from our health to our social lives to our work. The vSphere Client product team wanted to enable admins to keep an eye on their datacenters using these handheld devices.

We are happy to announce that the vSphere Mobile Client fling has been released to help you monitor your datacenters and take quick actions to keep your workloads running without disruption. This lightweight app is available for download on both Android and iOS devices. It is built using the same vSphere Client (HTML5) APIs and is compliant with the Clarity theme, maintaining consistent data and experience across the different vSphere management platforms. Available as a fling since the start of 2019, the app has modern mobile features such as biometric authentication, a touch-optimized interface, and offline notification capabilities (more about this later).

vSphere Mobile Client is built with monitoring in mind. You can monitor resource utilization by VMs, hosts, clusters, and so on, with the same granular metrics as the vSphere Client (HTML5). You can also monitor the status of tasks, events, and alarms in your environment. Only a limited set of operations is supported in the app, so you can mitigate issues you notice while monitoring. For example, if a host is running out of resources in a DRS-enabled cluster, you can put the host into maintenance mode so DRS vMotions your applications with zero downtime; you can also take a snapshot of a VM, power-reset a VM for a quick fix, or have a quick peek at the VM’s remote console. You cannot perform advanced operations such as configuring a cluster or performing manual vMotions, which reduces the risk of “fat fingering” mistakes.

Another cool feature supported by vSphere Mobile Client is offline monitoring of long-running tasks. Let’s assume you are putting a host into maintenance mode, which triggers a series of vMotion operations that could take a long time to complete. Wouldn’t it be nice to receive a notification on your phone when the task completes, so you only have to open your laptop if there are failures? You can achieve this by configuring a notification appliance that monitors these long-running tasks and sends push notifications to the phone app on task completion.
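
The notification appliance handles this end to end, but the underlying idea is easy to sketch. The example below is only an illustration, not the appliance’s actual code: it uses pyVmomi to poll vCenter’s recent tasks and calls a placeholder notify() function when a watched task reaches a terminal state; the host name and credentials are hypothetical.

```python
# Illustrative only: the real notification appliance does this for you and
# delivers push notifications to the mobile app. Host, credentials, and
# notify() are placeholders.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def notify(message):
    print(message)  # stand-in for a real push notification

context = ssl._create_unverified_context()  # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=context)
seen = {}  # task key -> last observed state
try:
    while True:
        for task in si.content.taskManager.recentTask:
            state = task.info.state
            previous = seen.get(task.info.key)
            if previous is not None and previous != state and \
               state in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
                notify(f"Task {task.info.descriptionId} finished with state: {state}")
            seen[task.info.key] = state
        time.sleep(30)
finally:
    Disconnect(si)
```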

To use the vSphere Mobile Client app, all you need is VPN connectivity from your phone to a vCenter Server in your datacenter. If your enterprise uses any form of Mobile Device Management (MDM) software, you can connect using the VPN built into the MDM. Since its release in 2019 we have seen a significant increase in adoption and usage of the app across geographies. Some of the future capabilities we are considering are: the ability to manage multiple linked vCenter Servers, including VMware Cloud on AWS; integration with the vSAN monitoring app; customizable notifications; vSphere Skyline health integration; and browser-to-mobile handoff capabilities.

Before we make the vSphere Mobile Client app an officially supported product, we want to get feedback from you. Please use the app and share feedback through the in-app feedback capability. Let us know what other features would make you use this app to monitor your production environment. You can also leave general comments in the Fling’s comments section or in the device app stores.

Enjoy the app and let us know what you think about it!

What happened to vSphere Mobile Watchlist?!

There used to be a mobile client for managing vSphere environments called vSphere Mobile Watchlist, which was built a long time ago. Because that app was heavyweight and was not built to reuse the vSphere Client APIs, it was very expensive for us to enhance it to provide a consistent experience. Instead, we rebuilt it as the brand new, lightweight vSphere Mobile Client app. vSphere Mobile Watchlist is no longer supported or maintained.

 

The post vSphere Mobile Client App appeared first on VMware vSphere Blog.

vSphere 7 with Kubernetes Network Service Part 1: The Supervisor Cluster


(By Michael West, Technical Product Manager, VMware)

VMware Cloud Native Apps Icon

vSphere 7 with Kubernetes enables operations teams to deliver both infrastructure and application services as part of the core platform.  The Network service provides automation of software defined networking to both the Kubernetes clusters embedded in vSphere and Tanzu Kubernetes clusters deployed through the Tanzu Kubernetes Grid Service for vSphere.  The term “service” has become somewhat overloaded.  In this context, I am not referring to a Kubernetes service specifically, but to the more generic term that describes a particular capability that is technically composed of several technologies across products.  This blog and demonstration video are the first of two parts and will explore automated networking of the Supervisor cluster through the vSphere Network service.  A follow-on blog will dive into the Tanzu Kubernetes cluster networking.

 

vSphere 7 with Kubernetes Services

 

As a starting point, let’s briefly explore the services exposed through vSphere 7 with Kubernetes.  Operations teams enable the Supervisor Kubernetes cluster on vSphere clusters through a simple wizard in the vSphere Client.  That Supervisor cluster provides the Kubernetes backbone onto which we have built services that can be consumed by both Operations and DevOps teams.  The first service exposed in the Tanzu Runtime Services is the Tanzu Kubernetes Grid Service for vSphere.  The TKG service allows DevOps teams to lifecycle Kubernetes clusters on demand.  The TKG service leverages the Hybrid Infrastructure Services to create VMs, configure networking and storage, provide container registries and even to deploy pods directly to ESXi hosts.  Our focus is the Network Service.

 

 

The network service provides an abstraction on the underlying software defined networking used in the Supervisor cluster.  The current version of vSphere 7 with Kubernetes includes support for NSX-T as the provider of networking services.  Operations teams deploy NSX-T 3.0 and the vSphere Network Service works through the NSX Container Plugin (NCP) to automate the lifecycle of networking resources consumed by the Kubernetes clusters.   For a technical overview of vSphere 7 with Kubernetes services, try this link:

 

vSphere Network Service and NSX

 

NSX can be configured to use port groups created directly on a version 7.0 vSphere Distributed Switch that also connects non-NSX workloads.  There is no need for a VDS dedicated to NSX.  In my lab, the NSX overlay network is configured on the vDS-Transport port group.  The vCenter management network is used for communication between vCenter and NSX Manager, as well as the Supervisor control plane nodes.  The vDS-Uplink port group carries traffic connecting to non-NSX networks.

 

vSphere 7.0 Distributed Switch Configuration

 

When the vSphere 7 with Kubernetes Supervisor cluster is enabled, the network service creates segment port groups on the VDS.  Corresponding network segments are created in NSX, along with a Tier-1 gateway to provide connectivity between the network segments.  Notice that NSX segment port groups are not created on a separate switch but are now sharing an existing VDS.  NSX segments provide network connections to which you can attach virtual machines or pods. The VMs or Pods can then communicate with each other over tunnels between hypervisors if they are connected to the same segment.  Traffic is routed through the Tier-1 gateway to connect other segments.   Segments were called logical switches in earlier versions of NSX.

 

vSphere 7.0 Distributed Switch with NSX segment port groups

 

 

NSX Container Plugin (NCP)

 

NCP is a controller, running as a Kubernetes pod in the Supervisor cluster control plane.  It watches for network resources added to etcd through the Kubernetes API and orchestrates the creation of corresponding objects in NSX.   Each VDS segment port group gets a corresponding NSX segment and each of those segments is assigned a network subnet from the pod CIDR defined in the Supervisor Cluster deployment.  The segments are connected to a Tier-1 gateway.   The segment containing the Supervisor Control Plane is attached to an external Load Balancer.
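
NCP itself is a closed component, but the watch-and-reconcile pattern it follows is easy to picture. The sketch below uses the open-source Kubernetes Python client against a kubeconfig you already have (an assumption, not something shipped with the product) and simply prints Service events as they arrive; NCP reacts to the same stream by creating or updating the corresponding NSX objects.

```python
# Illustration of the controller watch pattern only; NCP's real logic creates
# NSX segments, load balancer virtual servers, and firewall rules in response.
from kubernetes import client, config, watch

config.load_kube_config()        # assumes a local kubeconfig for the Supervisor cluster
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_service_for_all_namespaces, timeout_seconds=60):
    svc = event["object"]
    print(f'{event["type"]}: service {svc.metadata.namespace}/{svc.metadata.name} '
          f'type={svc.spec.type}')
```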

 

NSX Manager Network Topology

 

 

Namespace Creation

 

VI Admins create namespaces in vCenter.  This creates both a vCenter namespace object and a Kubernetes namespace in the Supervisor Kubernetes Cluster.   We refer to this as a vSphere Namespace.  A VDS port group and NSX segment are also created for each namespace.  The NSX segment is isolated through rules created in the NSX Distributed Firewall to deny ingress and egress traffic by default.   DevOps users can leverage Kubernetes Network Policy integration to create granular access for the applications they deploy in the namespace.
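
For example, a DevOps user could open only the traffic an application needs with a standard Kubernetes NetworkPolicy. The sketch below uses the Kubernetes Python client; the "banking" namespace echoes the screenshot below, while the labels and port are illustrative assumptions, and the same object could equally be applied as YAML with kubectl.

```python
# Allow ingress to pods labeled app=web in the "banking" namespace on TCP/8080;
# everything else stays blocked by the namespace's default-deny rules.
from kubernetes import client, config

config.load_kube_config()
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-web-ingress", namespace="banking"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)]
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("banking", policy)
```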

 

Banking Namespace NSX Segment

 

 

 

Pod and Load Balancer Type Service Creation

 

Users have the choice of creating Tanzu Kubernetes clusters using the TKG Service for vSphere and deploying pods into those clusters, or using the vSphere Pod Service to directly deploy them onto the ESXi hosts through the Supervisor cluster.  In the case of Supervisor cluster deployment of pods, they are connected to the NSX segment for their namespace and acquire an IP from the subnet range assigned to that namespace.  Kubernetes services provide grouping and service discovery for pods.  Load Balancer type Kubernetes services enable Ingress to pods from outside the cluster.  The creation of a Load Balancer type service causes NCP to orchestrate the creation of NSX Virtual Servers associated with the Load Balancer created in the initial Supervisor Cluster deployment.  The virtual server is assigned an IP and port that is used to access the service.
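
As a concrete illustration, the snippet below requests a LoadBalancer-type Service with the Kubernetes Python client; the namespace, name, selector, and ports are example values. When a manifest like this is applied in a Supervisor cluster namespace, NCP responds by programming an NSX virtual server and the Service is assigned an external IP.

```python
# Example only: request a LoadBalancer Service; NCP maps it to an NSX virtual server.
from kubernetes import client, config

config.load_kube_config()
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb", namespace="banking"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
created = client.CoreV1Api().create_namespaced_service("banking", service)
print("Requested external load balancer for", created.metadata.name)
```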

 

Pods Run on ESXi – connected to Namespace NSX Segment

 

Let’s see it in action

 

The following demonstration shows a high level architecture of network components in the Supervisor cluster and how the vSphere network service automates the creation of those components in vCenter and NSX.  For more information on vSphere 7 with Kubernetes, check out our product page:  https://www.vmware.com/products/vsphere.html

 

 

The post vSphere 7 with Kubernetes Network Service Part 1: The Supervisor Cluster appeared first on VMware vSphere Blog.

Use Your Enterprise DHCP Services with VMware Cloud on Dell EMC


VMware Cloud on Dell EMC is a hyperconverged hardware and software-defined data center stack that is jointly engineered by VMware and Dell EMC and includes complete lifecycle management. With VMware Cloud on Dell EMC, you can satisfy the latency or locality requirements of your business applications through a streamlined provisioning process, while experiencing the benefit of reduced operational overhead – because it is all delivered as a service to your on-premises data center or edge location.

When you make VMware Cloud on Dell EMC part of your infrastructure, there are several services to consider integrating with the rest of your enterprise. In the last post, we took a look at how the NSX-T DNS forwarders can be pointed to your internal name servers, both to improve manageability and to prepare the way for seamless workload migration when connecting with existing VMware vSphere environments.

For workloads that use dynamic IP addresses, you also have the option to relay DHCP requests from the VMware Cloud on Dell EMC SDDC rack to your existing enterprise IP address management services. In this post, we will take a closer look at how it works.

Each VMware Cloud on Dell EMC SDDC deployment is fully assembled, installed, and configured before it arrives on site. This includes vSphere compute, vSAN storage, and NSX-T network virtualization. Although NSX-T does include built-in DHCP services, many customers prefer to manage the address scopes from a central location to better facilitate troubleshooting, auditing, or reporting. This is accomplished by relaying requests from the SDDC to existing DHCP servers.

It is straightforward to enable the DHCP relay and disable the built-in DHCP service, but keep in mind that the change is an all-or-none global configuration and cannot be applied selectively to different network segments on the SDDC. For that reason, it’s a good idea to decide up front if your infrastructure architects require control of the address management service. The configuration can be changed afterwards, but it will disrupt running workloads since the network segments must be reconfigured.

In the sections below, you will see the steps required to configure the relay service and the compute gateway firewall.

Create a New DHCP Scope

The first step is to add a new scope for the dynamic IP addresses and related attributes for the new network segment that will be created on the VMware Cloud on Dell EMC SDDC. In the example below, you can see a standard Windows DHCP server configuration. The key items to double-check are the default gateway and DNS servers that you intend the clients on the remote side to use.

Enable the DHCP Relay Service and Create New Network Segment

As stated above, it’s best to enable DHCP relay before any workload network segments are created on the VMware Cloud on Dell EMC SDDC. Otherwise, any existing network segments will need to be temporarily placed into the disconnected state, which will be disruptive to running applications. Configuring the relay to point to the central DHCP server is straightforward, as seen in the following microdemo.

One thing to note that is admittedly unintuitive: when you create a network segment, the user interface currently does require that a DHCP address range be entered, although the values are effectively ignored when using the relay.

Configure Firewall to Permit DHCP Service Access

With the default configuration using the built-in DHCP service, no DHCP traffic leaves the SDDC, so no additional firewall rules are needed. However, when changing over to a DHCP relay, the lease request needs to travel out to the enterprise network, so the firewall must be configured to allow this communication.

Some customers may opt to create an any/any/any rule that will permit all traffic in and out of the new VMware Cloud on Dell EMC SDDC – just as similar traffic would typically be allowed on existing infrastructure in a data center. If that is the case, it would not be necessary to create a rule specifically for DHCP relay. On the other hand, sites with a higher security posture will employ firewalls and access-control lists throughout the network. The instructions below will follow the principle of least privilege to facilitate more granular control.

On a technical note, it helps to remember that a DHCP relay communicates on behalf of the client to obtain a lease from a server – the relay, not the client, is the source IP address of the request. In this scenario the relay IP is also the address of the compute gateway interface that was specified during creation, as seen in the previous step.

Using the VMware Cloud Service portal, create two new inventory groups, each containing a single IP address –  one to match the network segment compute gateway interface and the other to identify the destination DHCP server on your network. In the future, if additional network segments require relay access, those gateway addresses can be added to the group.

Once the groups are created, the last step is to create a new firewall rule on the compute gateway to specify that the relay traffic should be allowed. See the full firewall configuration workflow in the microdemo below.

Takeaways

VMware Cloud on Dell EMC is part of the broader VMware Hybrid Cloud, which provides consistent infrastructure so that your applications can run anywhere using the same processes, procedures, automation, and staff skill everywhere. You can integrate your existing enterprise services with a new VMware Cloud on Dell EMC SDDC rack for seamless management, migration, and other workflows.

For more information on VMware Cloud on Dell EMC, read the Technical Overview paper, follow us on Twitter, or get in touch with your VMware account team.

The post Use Your Enterprise DHCP Services with VMware Cloud on Dell EMC appeared first on VMware vSphere Blog.


vSphere 7 – Virtual Watchdog


The virtual watchdog timer (vWDT) is a new virtual device introduced in vSphere 7. It gives developers and administrators a standard way to know whether the guest operating system (OS) and applications running inside a virtual machine have crashed. It is an important function for clustered applications to achieve high availability. In this blog post we will introduce the virtual watchdog timer and discuss how to configure it in vSphere 7.

Overview

A watchdog timer helps the operating system or application recover from crashes by powering off or resetting the server if the watchdog timer has not been reset by the OS within the programmed time. When workloads run on vSphere, the virtual equivalent of the watchdog timer helps the guest OS achieve the same goal. It does so by resetting the virtual machine if the guest OS stops responding and cannot recover on its own due to unrecoverable operating system or application faults.

This means that if the guest operating system stops responding and cannot recover on its own, the virtual watchdog timer is not reset within the allocated time. When this happens, a virtual machine reset is issued. When the system in the virtual machine boots again, the watchdog timer helps the guest OS understand whether the restart was caused by a crash.
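
From the guest’s point of view the contract is simple: some process must keep “feeding” the watchdog before the countdown expires, or the VM is reset. On a Linux guest the WDAT device appears as /dev/watchdog, and in practice systemd or a watchdog daemon does the feeding; the sketch below only illustrates the principle and is not something to run casually, since opening the device arms the timer.

```python
# Illustration of the guest-side contract only. Normally systemd or watchdogd
# feeds the device; opening /dev/watchdog arms the timer, so treat this as a sketch.
import time

with open("/dev/watchdog", "wb", buffering=0) as wd:
    for _ in range(10):
        wd.write(b"\0")   # any write resets the countdown
        time.sleep(10)    # must be shorter than the configured timeout
    wd.write(b"V")        # "magic close" so the timer is disarmed when the file closes
```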

The virtual watchdog device is provided by vSphere, but is configured by the guest OS. It is exposed to the guest OS through BIOS/EFI ACPI tables.

Virtual Watchdog Timer Specifications

The Watchdog Resource Table (WDRT) feature provides the addresses of the device’s registers, along with information such as the maximum timer value, timer resolution, and other vendor/device information that the guest OS can use to configure and operate the device. Typically, modern guest operating systems use the Watchdog Action Table (WDAT) integration instead. WDAT describes an abstract device and provides information such as the instructions it offers (see below), minimum and maximum count values, timer resolution, some flags, and other vendor/device information. The guest OS uses this information to configure and operate the watchdog device.

WDAT instructions:

  • WATCHDOG_ACTION_RESET
  • WATCHDOG_ACTION_QUERY_CURRENT_COUNTDOWN_PERIOD
  • WATCHDOG_ACTION_QUERY_COUNTDOWN_PERIOD
  • WATCHDOG_ACTION_SET_COUNTDOWN_PERIOD
  • WATCHDOG_ACTION_QUERY_RUNNING_STATE
  • WATCHDOG_ACTION_SET_RUNNING_STATE
  • WATCHDOG_ACTION_QUERY_STOPPED_STATE
  • WATCHDOG_ACTION_SET_STOPPED_STATE
  • WATCHDOG_ACTION_QUERY_REBOOT
  • WATCHDOG_ACTION_SET_REBOOT
  • WATCHDOG_ACTION_QUERY_SHUTDOWN
  • WATCHDOG_ACTION_SET_SHUTDOWN
  • WATCHDOG_ACTION_QUERY_WATCHDOG_STATUS
  • WATCHDOG_ACTION_SET_WATCHDOG_STATUS

Guest OS Support

Modern server operating systems include support for watchdog timers. No additional VMware drivers are necessary on either Windows or Linux operating systems, although additional configuration may be required depending on the guest OS used. Other operating systems, like FreeBSD or Mac OS X, do not support a watchdog timer.

  • Windows 2003 supports a Watchdog Resource Table (WDRT)
  • Windows 2008 and later supports Watchdog Action Table (WDAT).
    • The guest OS does not require additional configuration.
  • Linux distributions, like Ubuntu 18.04 and Red Hat Enterprise Linux 7.6, based on 4.9 or later kernel support Watchdog Action Table (WDAT).
    • Verify if the wdat_wdt.ko driver is available.

How to Configure

The goal is to provide a watchdog timer that allows the guest OS to use it without the need for additional drivers. To configure a virtual machine to use a virtual watchdog timer, VM hardware version 17 (introduced with vSphere 7) and a guest operating system that supports watchdog timer devices are required.

Start with BIOS/EFI boot

You can enable the virtual watchdog timer to be started either by the guest OS or by the BIOS or EFI firmware. If you choose to have the virtual watchdog device started by the BIOS or EFI firmware, it starts before the guest operating system boots. Be sure you meet the requirements: if the guest OS does not support watchdog devices, the virtual machine will be constantly rebooted by the watchdog device.

Verification

The vSphere Client provides information if the virtual watchdog timer is running on the virtual machine.

To Conclude

The virtual Watchdog device capability in vSphere 7 is a great addition for VI admins and developers to understand the status of their clustered applications running on vSphere. Be sure to check all new vSphere 7 capabilities here!

 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursday into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and stay safe!

The post vSphere 7 – Virtual Watchdog appeared first on VMware vSphere Blog.

Join us for the VMware Cloud on Dell EMC’s Second Generation Launch Event!


VMware Cloud on Dell EMC, the fully managed infrastructure-as-a-service offering from VMware, has officially launched its second-generation service offering, aimed at providing enterprises with a scalable infrastructure service option that offers the best attributes of on-premises infrastructure and the cloud.

We are very excited about this milestone. Please attend our special CUBE event to see why.

We’ve brought together Richard Villars, Vice President, Data Center and Cloud at IDC, VMware Cloud on Dell EMC’s Marketing Leadership, and VMware’s Cloud Platform Business CTO Kit Colbert to highlight the Enterprise Grade innovations that the second generation of VMware Cloud on Dell EMC delivers.

Join us on Thursday, May 21, 2020 at 8 AM PDT!

Visit the event page to learn more and sign up.

The post Join us for the VMware Cloud on Dell EMC’s Second Generation Launch Event! appeared first on VMware vSphere Blog.

vSphere 7 – Integrated Windows Authentication (IWA) Deprecation


VMware vSphere Icon

Readers of the vSphere 7.0 release notes have noticed that, in the “Product Support Notices” section, Integrated Windows Authentication is listed as deprecated. Naturally, there are quite a few questions about this, especially in the wake of all the changes Microsoft has been suggesting to Active Directory. Let’s try to answer some of these!

What is Integrated Windows Authentication?

Integrated Windows Authentication (IWA) is an authentication method in vSphere that relies on the OS that vCenter Server runs on to be joined to a Microsoft Windows Active Directory (AD) domain. IWA uses that connection to the domain to authenticate users into vCenter Server.

What is meant by “deprecation?”

Deprecation means that a feature is still present in a product, and still fully supported, but will be removed in a future release.

In this case Integrated Windows Authentication is still present in vSphere 7.0. When you upgrade to vSphere 7 your previous IWA settings will be moved to the upgraded vCenter Server instance. You can still configure it from scratch on a new installation. You can also call support and be fully supported, until vSphere 7.0 is not supported any longer. That’s April 2, 2025 for the end of general support, and April 2, 2027 for extended support.

Don’t I need Integrated Windows Authentication to connect to Active Directory?

Not necessarily. There are a few different ways to connect vCenter Server to Microsoft Active Directory:

  • Using IWA, which uses the proprietary Windows interfaces to authenticate.
  • Using LDAP. All Active Directory domain controllers offer LDAP, and if configured, LDAPS, as an interface for accessing Active Directory.
  • Using Identity Federation, introduced in vSphere 7.0. This feature allows vCenter Server to connect to Active Directory Federation Services (ADFS) using the standard OAUTH2 & OIDC protocols.

Why is Integrated Windows Authentication deprecated?

vCenter Server 6.7 and earlier have Windows versions and can be installed directly on a Windows Server. As such, IWA made a lot of sense because many of those Windows Servers were already joined to a domain. However, vCenter Server 7.0 is only available as an appliance, and there is no longer an installable Windows version. While we encourage people to treat vCenter Server as an appliance and not as something with a separate operating system, the truth is that the appliances run the Photon OS, which is a distribution of Linux. Not surprisingly, Linux distributions do not natively connect to Windows domains. They require additional software installed to do so, which adds complexity.

Joining infrastructure to a Windows domain introduces other complexities, too. There is the potential for dependency loops, where the infrastructure relies on systems that are running on that same infrastructure. Things might be fine when everything is up & running, but a major incident like a power outage exposes the dependency loop and is then much harder to recover from. There are also political & people issues, too. For good security reasons many organizations have tight controls over who can join devices to Active Directory. Joining infrastructure systems to corporate Active Directory instances requires appropriate access, and that is not always a smooth relationship between teams. In many organizations a domain join is an infrequent occurrence for the vSphere Admins, so when the AD support team audits accounts for inactivity they end up disabling the vSphere Admin’s domain-joining account, which then surprises the vSphere Admin at some – likely extremely inopportune – time in the future. This is not to blame either team. Each team has differing goals and needs, especially for security, and it is hard to reconcile the two in the face of compliance demands.

The adage that good fences make good neighbors is just as true in IT as it is in a neighborhood. LDAP and OAUTH2 are industry-standard authentication protocols, and their use provides a nice clean interface not just between vCenter Server and Active Directory, but also between the teams that support those systems. The change to LDAP/LDAPS also will likely have positive effects on other systems, such as firewalls, by reducing complexity in rules and troubleshooting. It’s easier to control dependencies & dependency loop situations with LDAP. Last, while we only officially support direct connections from vCenter Server to domain controllers, use of protocols like LDAP & LDAPS may offer opportunities for introducing redundancy & failover using application load balancers and other techniques, which is a flexibility that the Linux-based Windows domain connections used for IWA could never have.

Is this related to the Microsoft AD LDAP signing changes and the Event ID 2889 log entries?

Not directly, though it does speak to the complexities of having multiple duplicate authentication methods in vSphere. By the way, if you are using IWA and seeing 2889 events we’ve published guidance on why that is and why you’re still secure.

How does this affect ESXi?

ESXi can be joined to an Active Directory domain as well, and that functionality is also deprecated (but continues to be supported on the same terms as IWA). The alternatives currently are to use local user accounts or use a lockdown mode that prevents local logins, and direct all configuration & usage through the Role-Based Access Controls (RBAC) present in vCenter Server.

So, what should my vCenter Server be using for authentication moving forward?

The two main authentication mechanisms moving forward will be AD over LDAPS and Identity Federation. vSphere 7 supports both equally well, and older versions of vSphere support AD over LDAPS, too.

While AD over LDAP is also supported, we always recommend securing all network communications with TLS. Please configure your Active Directory domain controllers with certificates to enable TLS and configure vCenter Server to use LDAPS. Identity Federation is deeply dependent on cryptography, and communications between vCenter Server and ADFS are secured.
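
One practical preparatory step is simply to confirm that the domain controller presents a certificate on the LDAPS port and to capture it for the trust configuration. A quick check using only Python’s standard library is shown below; the host name is an example.

```python
# Fetch the certificate a domain controller presents on TCP/636 (host name is an example).
# The CA that issued this certificate must be trusted when configuring LDAPS in vCenter.
import ssl

pem = ssl.get_server_certificate(("dc01.corp.example.com", 636))
with open("dc01-ldaps.pem", "w") as f:
    f.write(pem)
print(pem)  # inspect issuer and validity with your usual tooling
```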

How do I switch my authentication methods?

You can remove the old authentication method and then recreate it with a different protocol using the same domain information. A great example of this is shown in TAM Lab 048, done by Bill Hill, one of our Technical Account Managers. His video shows switching vSphere from LDAP to LDAPS.

We always encourage vSphere Admins to test changes before they make them in their production environments. A great way to do that is with nested ESXi. Deploy a small vCenter Server for testing and install ESXi in a VM for that vCenter Server to manage it (when you’re configuring the new VM choose “ESXi 6.5 and newer” from the list of operating systems). Once it is set up you can shut it all down and take a snapshot, so that if the environment gets messy you can restore it to a working & clean state. While we do not support nested ESXi directly, it is how the Hands-on Labs work, and how many of us do our testing. It’s a great way to test things like authentication.

Conclusion

When it comes to vSphere & security we’re making it easy to do the right things, and making vSphere secure by default. Change can be unwelcome, but when it’s to reduce complexity, improve support, and better draw the boundaries between authentication systems and their clients we feel that’s a big win. The transition is made easier with the continued full support of Integrated Windows Authentication through the life of vSphere 7.0, and the standard options available as replacements. We hope you agree!

As always, thank you for being our customer, and please let us know how we can help make your lives and infrastructure more secure.

 


We are excited about vSphere 7 and what it means for our customers and the future. Watch the vSphere 7 Launch Event replay, an event designed for vSphere Admins, hosted by theCUBE. We will continue posting new technical and product information about vSphere 7 and vSphere with Kubernetes Monday through Thursday into May 2020. Join us by following the blog directly using the RSS feed, on Facebook, and on Twitter. Thank you, and stay safe!

The post vSphere 7 – Integrated Windows Authentication (IWA) Deprecation appeared first on VMware vSphere Blog.

Tweet Chat Recap: vForum Online Featuring Team #vSAN


In the spirit of integration and cross-collaboration, we hosted another vSphere Chat with our friends over at vSAN to discuss the ins and outs of vForum Online. Joining us on this adventure through vForum Online were resident VMware experts, Bob Plankers and Mike Foley for Team vSphere, Sonali Desai for Team VMware Cloud on AWS, Keith Lee from Team Tanzu, Pete Flecha and John Nicholson from Team vSAN, Elena Salazar and Christian Lowery from the VMware Events team, and finally from Team Cloud Foundation, Josh Townsend and Rick Walsworth. With a packed house, let’s dive into the recap of all the questions and insights from all these great experts & more!

Q1. For those who have never attended a virtual event like vForum Online before, what are some tips and tricks for making the most of the experience?

Elena Salazar

A1: To get the most out of vForum Online…

– Build your personal agenda ahead of time

– Prepare questions to ask experts during the Q&A segments

– Place blocks in your calendar to avoid distractions

Bob Plankers

A1: Blocking the calendar is the best idea here! Plus, I believe that the sessions will be available afterward so pick the ones you want to ask questions in.

Josh Townsend

A1: Ask the experts! There are lots of smart people waiting to answer questions on vForum Online. Ask your toughest questions on any @VMware question and get that social interaction you’ve been craving!

Bob Plankers

A1 Part 2: +1 to this, lots of people online there. Just look at all the sessions and know that the experts are sitting in the Q&A for you: https://bit.ly/2yuW8lM

VMware Events

A1: If you meet someone in one of the virtual sessions, make sure to connect on social media before signing off!


Q2: Besides the vSphere 7 and VMware on AWS sessions, what are some other can’t-miss topics & sessions for vSphere admins?

Bob Plankers

A2: vSphere with Kubernetes ones, HCI ones, and the intrinsic security ones that fit your organization’s needs are good ideas, if you ask me! https://bit.ly/2yuW8lM

Keith Lee

A2: If you/your company are starting on the Kubernetes journey then check out the “App Modernization” track at vForum Online with several talks and @VMwareHOL’s on how using VMware Tanzu and vSphere with Kubernetes make it easy.

VMware Events

A2:  There are 14 VMware Hands-on Labs you don’t want to miss!


Q3: Who are some of the VMware experts that will be presenting during vForum Online?

Rick Walsworth

A3: Many of the folks on this tweet chat have vForum Online breakout sessions, but to see the entire list of technical sessions go here: https://bit.ly/2yuW8lM

Pete Flecha

A3 Part 2: Wow, looking at this catalog, this is like a mini VMworld. Looking forward to the SD-WAN, VMC, HCX and Kubernetes sessions.

VMware Events

A3 Part 3: That’s exactly how one of our vExpert bloggers described it! https://bit.ly/3cUJ9IS

Pete Flecha

A3 Part 5: I’m presenting the What’s new with vSAN 7 but I brought my friends @Lost_Signal and @vmpete to help me with the chat so bring all the questions, which are my favorite part!


Q4: Can you give us a sneak peek into what will be covered in the “Maximizing the Benefits of vSphere 7” vForum Online session?

VMware Events

A4: This session focuses on how VMware vSphere capabilities evolve across multi-cloud environments, intrinsic security and container orchestration!


Q5: What kind of vSphere 7 with Kubernetes or VMware on AWS information can we expect at vForum Online in the ‘Building Modern Apps’ sessions?

Rick Walsworth

A5: Look through the vForum Online guide and highlight the App Modernization and multi-cloud tracks on this agenda and you’ll see all of the solid technical breakouts that cover this in-depth.


Q6: What if someone wants even more vSphere 7 with Kubernetes or VMware on AWS info after vForum Online? Where should they go?

Bob Plankers

A6: Check out the VMware vSphere Blog: https://bit.ly/3ba5ssU

Keith Lee

A6: Shameless plug but if you want to know more in what’s happening in the Kubernetes space and ecosystem around it…check out my recent blog post on Kubernetes podcasts: https://bit.ly/2zQAYPp

A6 Part 2: Previous VMworld sessions and blogs such as the VMware Tanzu Blog and the VMware vSphere Blog are great resources! Don’t forget the awesome @virtspeaking podcast by @vPedroArrow & @Lost_Signal!

Bob Plankers

A6 Part 3: It’d be shameless if there wasn’t great content there. This is just plain helpful! 🙂

Mike Foley

A6: More links to more vSphere with Kubernetes blogs: vSphere Pod Service & vSphere Namespaces!

A6 Part 2: I have a number of blogs on the VMware vSphere Blog on vSphere with Kubernetes, like the 101 Whitepaper and TKG Clusters.


Q7: What are some examples of real-life business cases we’ll hear about at vForum Online 2020?

A7: We live in a new world where so many people are even more dependent on technology. The good news is that VMware has been able to service the needs of our customers well during the pandemic. You may hear how our Workspace ONE and Horizon solutions help remote work while also hearing how our Hybrid Cloud solutions help to ensure infrastructure reliability and flexibility to run workloads where it makes the most sense. From securing your infrastructure and applications to building new apps with Kubernetes to solving cutting edge problems with AI and ML – there will be something for everyone!


Q8: What should I tell my boss to convince them that I should attend vForum Online? Any tips for how to convince my coworkers to attend as well?

Rick Walsworth

A8: Hmmm, let’s see –
Boss, I want to attend the VMware vForum Online because:

a). It’s Free
b). It’s got awesome technical tracks and VMware experts
c). It’s completely virtual.

Did I mention that it is FREE?

John Nicholson

A8: “I’m signing up for free training, can I expense Uber Eats for lunch?”

A8 Part 2: If my boss will not let me do free training, I’m finding a new boss 🙂

VMware Events

A8: Tell them about our carbon savings calculator!

It tracks the carbon emissions your company saves by attending the event virtually instead of flying to the Palo Alto campus. The more employees who attend, the more carbon emissions your company will save!


Q9: What’s your #1 pro-tip for getting the most out of vForum Online?

Rick Walsworth

A9: Get social and network with as many people as you can. Plan your vForum journey ahead of time, use this guide to map out the tracks and sessions that you want to attend: https://bit.ly/2yuW8lM

John Nicholson

A9 Part 2: Put it on a big screen. I use AirPlay to send my laptop to my 70” TV.

John Nicholson

A9 Part 4: Warning @vPedroArrow may appear taller on the TV than in real life.

Keith Lee

A9: Ask questions! You have access to SO many friendly vExperts who want to help and share their knowledge!


And that’s a wrap for another vSphere Chat! We hope you enjoyed this vSphere Chat on vForum Online as much as we did –– now is the perfect time to register!

A special thank you to our vSphere experts –– Bob Plankers (@plankers) and Mike Foley (@mikefoley), as well as our other experts and vSphere fans from around the globe who tuned in and contributed. This wouldn’t be possible without our vExperts, loyal fans, and customers!

Follow the additional VMware experts who participated in this and the vSAN chat, including John Nicholson (@Lost_Signal), Pete Flecha (@vPedroArrow), Josh Townsend (@joshuatownsend), Rick Walsworth (@RickWalsworth), Sonali Desai (@sonaliddesai), and Keith Lee (@KeithRichardLee).

Special thank you to participants Elena Salazar (@elenacsalazar) and Christian Lowery (@clowerycontent) from the VMware Events team, your #1 resource for all things vForum Online.

Remember to follow @VMwarevSphere and stay tuned for our monthly expert chats. Join the conversation by using the #vSphereChat hashtag and asking your questions. Have a specific topic you’d like to cover? Contact us and we’ll bring the topic to our experts for consideration.

In the meantime, stay tuned to our channels because we’re busy planning the next vSphere Chat. More to come!

The post Tweet Chat Recap: vForum Online Featuring Team #vSAN appeared first on VMware vSphere Blog.
