CIOs have a lot on their plates. The balancing act of investing in the future and maximizing operational efficiency today is a constant battle that bleeds into every big decision. So where do you invest your resources? The popular answer might be: transforming IT today so that tomorrow you can focus on business strategy. That’s just great. Really. Unfortunately, the rest of the organization thinks all of that should have been done yesterday. So while you know you’re adding value to the company and making IT more agile, a lot of other departments see your department as more of a drag than an engine. That’s not going to help your job security, is it?
The 2015 State of the CIO survey shows that CIOs spend the majority of their time in transformational activities now, but in 3–5 years, 72% of CIOs believe they will (or want to) focus more on business strategy. This transition suggests that many organizations are transforming their infrastructure today while acknowledging how important it is to keep an eye on future technology.
But which is more pertinent to a CIO, whose average tenure at an organization is only 5 years, according to the same study? And how does he or she balance the responsibilities of current transformation with the need to prove strategic value?
There’s no doubt CIOs are spending more time overseeing the details of digital transformation. After all, those clouds aren’t going to wrangle themselves. But all that time down in the trenches slaving over a hot data center architecture means less time for the cool strategic activities that raise buzz and bring in revenue. This year only 27% of CIOs classify themselves as business strategists, down 7 percentage points from last year.
Another thing taking time away from strategic thinking: holding onto the things IT already does. More than a third of CIOs report fighting turf battles against others in the C-suite. It’s not just the COO, CMO, CISO, or CFO doing the encroaching, either. There’s a whole new batch of C-level execs elbowing their way to that top-floor table. The emergence of chief digital officers, chief data officers, chief transformation officers, and more makes it easier for others to argue the CIO is just becoming the sys-admin in chief.
Sadly a lot of CIOs don’t see the writing on the wall—at least not as big and bold as some others in the organization. Yes, 36% of them say they're involved in a turf battle. But a much higher percentage—close to half—of non-IT executives surveyed by IDC recognize the struggle for control. And more alarmingly, 37% of business leaders say the CIO is being sidelined while only 20% of CIOs feel the same way. In all fairness that’s a difficult thing to find out, never mind to admit. It’s frequently a matter of perception, but perception is the number one weapon of choice in turf battles.
For what it’s worth, both groups understand that the CIO and IT get blamed more than they should. About half of business and IT executives surveyed said that IT gets scapegoated whenever anything goes wrong anywhere in the company.
One way to prove your value as a CIO used to be going for the quick win. Unfortunately, knocking down the low-hanging fruit isn’t as impressive to your peers as it used to be. What’s worse is that most CIOs don’t seem to know that. The State of the CIO survey found 51% of CIOs named the quick win a key tactic for improving relations with other departments. However, just 31% of non-IT business decision-makers agree.
If all this sounds a little bleak, remember the biggest problem here is the CIOs who don’t know all this is happening. You are not among them. Knowing is half the battle, as the saying goes. There are many things to do to change other people’s perceptions of IT. One is to embed your people in other departments. It’s easy to shift blame for a failure to someone you’ve never met; it’s harder when they are working right beside you, understand your problems, and are trying to fix them. As your transformation progresses and your ability to deliver timely, helpful solutions increases, others will see IT for the valuable and indispensable ally it is.
Gartner recommends that enterprises dump Windows 8.1 deployment plans, even if that means delays. According to Gartner analyst Steve Kleynhans, “It is likely that Windows 8.1 will suffer a similar fate to Windows Vista, whereby industry support died off relatively quickly.”
He recommends that companies that have already started deploying Windows 8.1 reconsider and instead shift to a plan to migrate to Windows 10, even if that delays the rollout beyond the earlier timetable envisioned for Windows 8.1. “Windows 8.1 is no longer the right option for new enterprise deployment, and indeed, resources should be refocused on early adoption of Windows 10,” Kleynhans wrote in a Gartner report.
Windows 10 is not an incremental step from Windows 8.1; it is Microsoft’s attempt to create a single ecosystem that unites tablets, phones, PCs, embedded systems, and even the Xbox One. It will allow these products to share a universal application architecture and Windows Store ecosystem. Expanding upon the Windows Runtime platform introduced by Windows 8, this architecture allows applications to be adapted for use between these platforms while sharing common code.
Gartner recommends that clients running Windows 7 skip Windows 8.1 because Windows 10 offers better security and management, as well as an improved user experience and a more business-focused app store. And those that have already begun deploying Windows 8.1 should reconsider. This isn’t the first time Gartner has suggested that enterprise clients skip a Windows edition; it also recommended that companies bypass Windows Vista and wait for Windows 7.
Nobody in IT wants to be on the leading edge of an enterprise software deployment, but the advantages of upgrading to Windows 10 are compelling. Gartner clearly has a strong record of making the right calls in this area, and if you can plan a single migration to the pending release of Windows 10 you can accelerate the business value of a standardized enterprise operating system and avoid the costs of potentially upgrading to Windows again in the near future.
Interesting fact about Windows 10: it’s not going to work as well with a device that’s not Microsoft-created. So for all of us who have a fondness for products outside of the Microsoft realm, this is something to consider. On a broader scale—for business and enterprise—it may be a big deal.
Most of us are used to having the freedom to utilize our own devices—BYOD is virtually standard in many companies today, and employees may well expect it to be the norm. But working from home or the coffee shop down the street may soon mean bringing a company-owned device with you as Windows 10 enters the picture.
Windows 10 has been purposefully created to work best with Microsoft products. Some of its features are only available through a Microsoft product—and those new Windows 10 features are supposed to be pretty fantastic, so it may be worth investing some IT budget money into new equipment.
Tech industry veterans like Matt Schulz theorize that with the implementation of Windows 10, the Surface will soon replace a lot of laptops (and certainly PCs) in most enterprises. And if we are to rely on those devices to go home with us and work effectively, he may have a good point—they’ll be lighter, simpler to travel with, less prone to injury due to ease of transport, and will be able to utilize all that Windows 10 has to offer up.
Making the switch to Microsoft devices may not be an issue if IT is geared up for a shift anyway—as many probably are in the wake of Windows 7’s support going away entirely in 2020 (it sounds far away, but that’s only five years from now). And if IT’s been keeping up with the news of Windows 10’s arrival, they may have already planned ahead for its adoption and use on Microsoft devices.
It may also fall to IT to ensure that executive and director phones are Windows 10 friendly—a must in our on-the-go work world.
There are some significant advantages if you choose to upgrade to Windows 10 sooner rather than later. Chances are good that if you’re utilizing Windows now, you’re going to need to upgrade to the new version eventually, but waiting is not in your best interest for a number of reasons.

Cost Advantage

Are you already on Windows 7 and upgrading 15 computers or fewer? If you upgrade now, you’ll be able to do it for free. Plus, any further upgrades on the system will be free as well. If you wait, you’ll pay around $200 for each machine. If you have more than 15 computers to upgrade, you’ll still pay less now than later. Procrastination, in this instance, is decidedly not a good thing.

Upgrade Schedule

According to Microsoft, Windows 10 (and subsequent patches and service packs) is going to be it for a while. In other words, if you wait a few years for the next upgrade version, you’ll still be buying Windows 10, but you’ll spend more on the product and the training. So why not get it for less—or even for free—now?

Training Opportunities

Microsoft says Windows 10 doesn’t require a lot of extra training to implement or utilize effectively. Upgrading now means your teams will already be immersed in Windows 10 as more improvements are made. Future add-ons will have a shorter, simpler learning curve.
Microsoft is currently offering training on Windows 10 at a lower cost than what it will be in a few years. In other words, if you upgrade now, you can expect to spend a lot less money on training than if you wait.

Windows 7 Support: Going, Going, Gone

Mainstream support for Windows 7 ended January 13, 2015, and extended support ends in 2020. Upgrading to Windows 10 means you’re fully supported. This could make moving projects through the pipeline a lot more seamless. Our Microsoft Practice Team is here to answer any questions you may have – give us a ring.
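The cost comparison above is easy to sketch out. The $200-per-machine figure and the free upgrade window come from the article; the fleet size and the simplification that upgrading now costs nothing in licensing are illustrative assumptions:

```python
# Back-of-the-envelope math for the upgrade-now-vs.-later decision.
# The $200 figure comes from the article; the fleet size and the
# assumption that upgrading now is free are illustrative only.
LICENSE_COST_IF_YOU_WAIT = 200  # approximate per-machine cost later

def upgrade_cost(machines: int, upgrade_now: bool) -> int:
    """Estimate Windows 10 licensing cost for a fleet of Windows 7 machines."""
    if upgrade_now:
        return 0  # free while the upgrade offer lasts
    return machines * LICENSE_COST_IF_YOU_WAIT

fleet = 15
savings = upgrade_cost(fleet, upgrade_now=False) - upgrade_cost(fleet, upgrade_now=True)
print(savings)  # a 15-machine shop avoids roughly $3,000 by not waiting
```

Training costs follow the same logic, just with less precise numbers attached.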
It’s a jungle out there. And by “out there” we mean your enterprise IT. And by “jungle” we mean the huge number of technologies like ABAP, Java, and .NET that handle your business processes on multiple system components. Of course these service-enabled, distributed, heterogeneous systems can be accessed on all those BYOD devices via many different channels. One of the great challenges all this causes is finding out why performance is degrading. In short, have you ever tried to find one root (cause) in a jungle?
In order to identify the root cause of incidents, IT needs to use a systematic, top-down approach to isolate the erroneous component and subsequently resolve the issue. Unfortunately, even though the most common causes are network related, admins frequently look for data from things like servers, security, application design/health, and end client systems or users – which have nothing to do with networking.
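That top-down approach amounts to an ordered walk through the stack, checking the most likely layer first and eliminating it before moving on. The layer order and the stubbed health checks below are illustrative assumptions for demonstration, not a real diagnostic tool:

```python
# Illustrative sketch of a systematic, top-down root-cause search.
# The layer order and stubbed health checks are assumptions; a real
# tool would query monitoring data at each layer instead.
def network_healthy(): return False   # pretend the fault is here
def servers_healthy(): return True
def application_healthy(): return True
def clients_healthy(): return True

# Check the most common culprits first: network issues top the list.
LAYERS = [
    ("network", network_healthy),
    ("servers", servers_healthy),
    ("application", application_healthy),
    ("end clients", clients_healthy),
]

def find_root_cause():
    """Return the first layer whose health check fails, or None if all pass."""
    for name, healthy in LAYERS:
        if not healthy():
            return name
    return None

print(find_root_cause())  # network
```

The point isn’t the code; it’s the discipline of ruling layers out in a fixed order instead of guessing.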
Remember the words of Sherlock Holmes: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” In the search for root causes you need tools that go beyond network element views so you can quickly rule out everything that is not the problem.
That doesn’t mean using every tool out there. Do that and you’ll have more data than you can use, coming from too many sources to trust. A report by Enterprise Management Associates found that the larger the business, the more tools it is likely to use, even though that doesn’t improve reporting. Where businesses with fewer than 1,000 employees use three to five tools, most of those with more than 5,000 employees use six or more, with 25% using 16 or more tools!
As the report says, “More tools mean process inefficiencies since most function independently and don’t share data directly. Also, each tool must be individually installed, configured, and maintained.” IT managers need to consolidate and integrate tools as much as possible so your staff can spend less time maintaining the tools and more time managing the network.
One thing we all know is that technology doesn’t stand still. Just consider the growing interest in software-defined networking. Whether it’s SDN or something else, your network is going to change and your monitoring and management tools need to be flexible enough to handle that. Although those tools may need add-ons you need to make sure you won’t have to replace them or wind up with SDN as a separate thing needing its own management tools.
Thanks to new developments like cloud, SDN, and big data, the IT jungle is getting denser every day. Whether your efforts turn into a well-planned safari or a battle for survival depends on how hard it is for you to find the root cause of the problems you face. The best gear you can pack is a consolidated set of management solutions which works across different data sets, supports new technologies, and delivers information for service-related outcomes. To find out more about what you can do to improve your network management systems click here.
Modern data centers contain the tools and assets that can power an enterprise in the digital age—but only if these tools (applications) and assets (data) can be accessed. However, when apps can’t be deployed or if data are siloed, an enterprise’s technology can become a competitive disadvantage.
The mobile and BYOD revolution, for example, theoretically allows employees to access data and apps from anywhere, thus enabling workers to be more productive and effective. But when mobile workers can’t access applications or enterprise data, it can have a big impact on productivity, flexibility, and revenue.
Employees suddenly may:
- Not be able to do their jobs efficiently (or at all)
- Be unable to collaborate remotely with colleagues and partners
- Need to go into headquarters or a branch office to work
- Lose potential sales in the field
All of these outcomes can have a negative impact on an enterprise’s bottom line, as can unscheduled downtime, problems securing virtualized workloads, inability to manage big data (including data from the Internet of Things), and poor server management.
This puts tremendous pressure on IT to solve data center problems as quickly as possible. Without proper tools, however, IT pros can waste valuable time trying to locate the source of an issue, whether it’s a user’s device, a wireless network, a branch server, an on-premises data center, or a cloud deployment. And when it comes to protracted problems in the data center, time costs money.
Rather than relying on guesswork and a disparate set of tools that each offer only limited transparency into the network, IT pros are better served by using infrastructure management tools that can be integrated with and extend existing tools for monitoring, provisioning, and configuring server and application software.
The ideal data center infrastructure relies on a single platform that unifies computing resources, networking infrastructure, data center management, and cloud deployments. Such a platform should enable IT professionals to automate and simplify management of the data center across servers, the network, and clouds.
Unified infrastructure management allows IT to monitor the health status of domains, automate and standardize network and data center access, and manage operating systems, applications and servers. A unified computing infrastructure also saves money by reducing the number of servers in the data center through virtualization and consolidation of heavily used Microsoft enterprise applications like SQL, SharePoint, and Exchange.
In the case of a mobile worker unsuccessfully trying to access an application, an automated solution within a unified infrastructure could detect the failed attempts, locate the source of failure, and resolve the problem, saving time and requiring little or no human intervention.
As newer technologies such as cloud computing and virtualization become integrated with legacy systems, data center management has become more complex than ever. Using tools such as Cisco’s Unified Computing System (UCS), along with Microsoft System Center and PowerShell, enterprises can simplify management, gain more transparency, identify and solve problems faster, scale to meet the needs of the business, and lower operating costs through greater efficiency.
Click here for more information on how Cisco can provide the optimal infrastructure for data centers and Microsoft environments.
Today, I want to shine the spotlight on VMware Workspace. With Workspace, all of your apps, file content, and virtual desktops from all different places can now exist in one location—and you can easily access them by logging in through a Web browser. Workspace is simply a great tool that gives end users easy access to corporate files and applications in one central place.
Workspace is customizable, which means administrators can make it reflect everything about your brand. Your colors, even your logo, can be customized. And this is just the beginning. Read on to learn about other key VMware Workspace features.

The Password Conundrum
A lot of people have apps in the cloud, and it’s very easy to work from the cloud. But this can easily cause password problems. For example, for every Software as a Service (SaaS) application—whether it’s SalesForce or pcconnection.com or Concur or Kronos or SAP or Server Care—users all have individual user names and passwords. This is a management struggle when new users come on board. IT administrators must visit all kinds of different portals and sign up the user with the new username and password. It’s very labor intensive! And it’s confusing for the user too, since they’ve got to remember all these usernames and passwords.
Then there are the dreaded IT calls: “I can’t log in to my app anymore,” common when folks are using the password and username for a different SaaS application. Believe me when I tell you that this is not how IT wants to spend their day.

Workspace uses Security Assertion Markup Language (SAML) to exchange authentication data and create a trust between the customer and a SaaS provider. Once a trust is established, you no longer need a username and password for each app. All of the communication and logging into the SaaS application is done in the background, passing tokens between Workspace and the SaaS application. Logging on one time will bring a smile to your users’ faces and solve many password problems at the same time.
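To make the trust idea concrete, here is a minimal sketch. Real SAML exchanges digitally signed XML assertions between an identity provider and the service provider; in this toy version an HMAC-signed token stands in for the assertion, and the names and secret are hypothetical:

```python
import hashlib
import hmac

# Toy illustration of SSO trust: real SAML passes signed XML assertions,
# but an HMAC-signed token plays that role here. The shared secret
# stands in for the trust established between the two parties at setup.
SHARED_SECRET = b"pre-established-trust"

def issue_token(username: str) -> str:
    """Identity provider side: sign an assertion for an authenticated user."""
    sig = hmac.new(SHARED_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def accept_token(token: str) -> bool:
    """SaaS provider side: verify the signature instead of asking for a password."""
    username, sig = token.split(":", 1)
    expected = hmac.new(SHARED_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("jsmith")
print(accept_token(token))  # True: no per-app username or password needed
```

Because the SaaS provider trusts the signature, the user never types a second password, which is exactly the conundrum Workspace is solving.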
“Elegant code” is the ultimate compliment for a programmer. It means the work is as clean and simple as it can be. Only that which is absolutely necessary is used, and as a result it diminishes the chance of a bug slipping in.
Much the same, reduced complexity is key to any enterprise mobile device strategy. To get your employees to use their personal or company-issued mobile devices in a secure way, make it easy for them to do so.
Step One: Have a use policy that is as brief and easy to understand as you can possibly make it. Anyone should be able to read it and immediately understand what they are personally responsible for and what the consequence will be if they don’t hold up their end.
For example, end users should be responsible for backing up data and retain responsibility for any lost data that isn’t backed up. Also, clarify who is responsible for device maintenance and what actions can result in an instant loss of BYOD privileges.
Step Two: No one should be able to use any personal device for work unless it has a passcode. It’s estimated that just about half of all Americans do not have a passcode set on their phone or tablet today. This straightforward requirement will radically improve security for both the company and the employee.
Step Three: All BYOD devices should be subscribed to a tracking/remote-wipe service. This service, often part of a mobile device management software package, gives peace of mind to both the employee and the IT department.
Location services will let you find the device on a map, sound an alarm that makes it easier to find when you are near it, and even lock or wipe it if it can’t be recovered. The service is very popular with employees who can now easily find a phone that has been swallowed by couch cushions.
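The requirements from Steps Two and Three reduce to a simple compliance gate. This is a hypothetical sketch; the field names and policy logic are illustrative, not a real MDM API:

```python
from dataclasses import dataclass

# Hypothetical compliance gate for the BYOD rules above; not a real MDM API.
@dataclass
class Device:
    owner: str
    has_passcode: bool            # Step Two: a passcode is mandatory
    remote_wipe_enrolled: bool    # Step Three: tracking/remote-wipe service

def byod_allowed(device: Device) -> bool:
    """A personal device may touch corporate data only if it meets both rules."""
    return device.has_passcode and device.remote_wipe_enrolled

phone = Device("jsmith", has_passcode=True, remote_wipe_enrolled=False)
print(byod_allowed(phone))  # False until the device is enrolled
```

Keeping the policy this simple is the point: two yes/no checks anyone can understand.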
A cloud-based system can help implement a lot of protocols in a way that is simple to manage and requires little effort from employees. Having everyone connected to the same corporate cloud means everyone is protected by the same network restrictions no matter if the employee is at their desk, stuck in O’Hare Airport, or working from home in their pajamas.

Whether your requirements are small or large, there’s an elegant solution to be found. We can find the perfect products and services for your unique mobile environment with custom configuration, imaging, delivery, and enhanced security solutions. Click here and let one of our experts help guide you along the way.
Just because something is generally a good idea doesn’t mean you should rush right out and do it. Your best bet is usually to wait until all the facts are in and weigh the evidence carefully before proceeding. We would all agree that buying flood insurance for your home is a prudent move—unless you live in the desert. Virtualization is also a great example of something that seems like a no-brainer, but may not be in every circumstance. Read on for tips on making sure your infrastructure is ready for virtualization.
Simply put, virtualization lets you do more with less. Collecting disparate computing resources into shareable pools lets you see how much they’re being used and manage them all from a single console. The immediate benefit of this is discovering your excess capacity. According to a study by Green Grid, the average business only uses 5 percent to 25 percent of its server capacity. Most of the power those servers use goes toward heating up the room, which then costs you more money to cool down.
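Those utilization figures translate directly into a consolidation estimate. The 15% average and the 60% pooled-host target below are illustrative numbers, not recommendations:

```python
# Rough consolidation math implied by the utilization figures above.
# The 15% average and 60% target are illustrative assumptions.
def consolidation_ratio(avg_util_pct: int, target_util_pct: int) -> float:
    """How many lightly loaded servers one pooled host can absorb."""
    return target_util_pct / avg_util_pct

print(consolidation_ratio(15, 60))  # 4.0, i.e. roughly a 4:1 consolidation
```

Fewer physical boxes means less power spent heating the room, and less money spent cooling it back down.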
Virtualization is known to make organizations more agile. Greater control of your IT infrastructure means you can respond quickly when changes in business demand different resources. It also can completely change the way IT managers think. Instead of always focusing on taking care of the technology, they can focus on the services their technology can provide.
But all this can only happen if your system is in good enough shape.
Just focusing on servers when you implement virtualization will usually bring a lower-than-expected ROI. It’s essential to also consider the network and storage infrastructure. Virtualization brings with it increased resource demand and changes in traffic patterns. If you don’t also have increased monitoring and accountability for control of the system, virtualization will just put additional stress on the data center and degrade its performance.
However, if you take a proactive approach to virtualization you can maintain the high level of application performance, security, availability, and manageability your business requires. The best way to do this is with a comprehensive IT infrastructure assessment. This assessment should examine the cost and power advantages of migrating to a virtualized infrastructure and identify relevant network configuration and security issues.
Cisco’s Data Center Virtualization Assessment Service will prepare you for virtualization by identifying gaps in your server, storage, and network infrastructure that could limit ROI. It also improves the security profile of your virtualized environment. Working closely with subject matter experts, you will be able to manage risk associated with highly mobile and increasingly complex virtual machine environments.
The assessment also provides recommendations to help you evolve your data center to a next-generation design using technologies and solutions from Cisco and our partners. Enterprise and solution architects, virtualization experts, and project managers work collaboratively to provide needed insight into your technical, business, and financial requirements.

If you think this might be a good next step for your business, click here to find out more.
Imagine for a moment that your first car was the only one you were ever allowed to buy. For most of us that was a rusty bucket of bolts that met our basic needs: getting us to point B with a minimal number of breakdowns. But if you kept it indefinitely, and you wanted to keep up with the speed of traffic, you would need to upgrade it one piece at a time—swapping out each antiquated part for a new one along the way. You would start to get stuck with parts that weren’t made specifically for your model and eventually have to figure out how to graft on a whole new system that hadn’t even been imagined when your car was built.
Sounds crazy, right? Have you looked at your data center recently?
Think about all the upgrades, workarounds, homemade patches, and entirely new systems you’ve grafted on to it. If it were a car, it would look like some steampunk cruiser out of a Mad Max movie.
Businesses can’t afford to stay glued to an inflexible IT infrastructure. They just won’t be able to keep up with the new application and service demands of a workforce operating around the world. An inflexible infrastructure means slow rollout of critical applications and services, inadequate resources, poor operational visibility and control, and unpredictable system integration. And that’s where Cisco comes in.
Cisco’s Unified Data Center (UDC) platform is designed to overcome current data center constraints and provide agile, simplified, efficient IT service delivery and cloud computing. Its innovative platform facilitates virtualization, simplification, automation, and accelerated delivery of cloud applications and services, providing a sustainable business advantage. Just as importantly, UDC is built so it can be customized to your needs. It has an open, standards-based data center network architecture and ecosystem that maintains customer choice and increases business value while substantially decreasing the total cost of ownership.
UDC combines three integrated, leading data center technologies: Cisco Unified Fabric, Unified Computing, and Unified Management. The result: A radically simplified architecture. This simplicity lays the foundation for significant cost savings and agility.
The Unified Data Center saves you money by:
- Increasing computing power per rack unit
- Avoiding unnecessary CapEx investment for servers required solely to support memory-intensive applications
- Avoiding unnecessary investment to increase network and I/O bandwidth
- Reducing management complexity
- Improving network efficiency to ensure QoS despite centralization of servers
Application management—provisioning, delivering, and updating end users’ apps—can be a challenge for many organizations. Whether you’re still managing physical desktops or you’ve made the switch to virtual desktop infrastructure (VDI), the difficulties associated with application management are real. Application management is resource intensive for IT staff, it can significantly impact the end-user experience, and—quite honestly—the work never stops.

You’re tasked with creating different master images for HR, Finance, and every user group that needs a specific set of applications. And then you have to maintain those images forever. If you’ve got a dozen images, every time you patch or update, you’ve got a lot of work to do. There’s got to be a better way. Fortunately, the area of application management continues to evolve, and VMware’s App Volumes has made some significant leaps forward in terms of streamlining the end-user experience and increasing IT manageability.

We’ve come a long way since the old days of rolling out applications PC by PC. The arrival of virtualization simplified things greatly—but even with VDI, you’ve traditionally been limited to two different types of virtual desktops: persistent and non-persistent (also called floating desktops). The former offers great personalization, giving users the same desktop every time they log in and enabling them to add their own applications and customize as they please. The downside is that IT has to provision lots of desktops, and that takes time and resources. Floating desktops, on the other hand, provide a new desktop every time a user logs in and then wipe all data after the session is terminated. That’s great from an IT perspective, because it reduces the management burden and improves security, but the user experience suffers due to lack of personalization.
What if you could combine the best of both worlds and get an application management solution that delivers an exceptional user experience without sacrificing manageability? That’s where VMware App Volumes comes in. App Volumes dynamically delivers applications in real time. It’s called push-button application delivery, but I actually like to call it “application injection.”

Here’s how it works: upon login, applications are delivered based on the user and user group. So users get a desktop with all the personalization they want, and they get an injection of all the applications that are appropriate for them. When a user from a different user group logs in, they might see a completely different application set. App Volumes enables IT to give an application stack to the user based on the applications they need. You can manage not only IT-defined sets of applications, but also the users who have rights to install applications. App Volumes solves the user-installed apps problem by giving the user (if authorized) a writable app stack to install their own applications in.

This solves a lot of problems with application management, allowing organizations to create a stateless desktop that combines the best of persistent and non-persistent solutions. When users log in, they get their files, settings, IT apps, and their own apps in a new desktop. Now, when a user calls into the help desk and says, “Hey, I need this application for a project I’m working on,” IT can inject that app in real time. In literally 3 to 4 seconds, the application pops up while the desktop is still running. It’s a pretty incredible technology, and it highlights just how rapidly the area of application management is maturing.
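The group-based "injection" model boils down to composing a desktop from layered app stacks at login. The group names, apps, and function below are made-up illustrations of that idea, not the App Volumes API:

```python
# Made-up illustration of composing a desktop from layered app stacks.
# Group names, app names, and the function are hypothetical examples.
APP_STACKS = {
    "HR": ["Workday", "DocuSign"],
    "Finance": ["Excel", "SAP GUI"],
}
BASE_APPS = ["Outlook", "Chrome"]

def desktop_for(user: str, group: str, writable_stack=None):
    """Assemble what a user sees at login: base apps, group stack, own apps."""
    apps = BASE_APPS + APP_STACKS.get(group, [])
    if writable_stack:  # authorized users keep their self-installed applications
        apps = apps + writable_stack
    return {"user": user, "apps": apps}

d = desktop_for("pat", "Finance", writable_stack=["Visio"])
print(d["apps"])  # ['Outlook', 'Chrome', 'Excel', 'SAP GUI', 'Visio']
```

Because the desktop itself stays stateless, adding an app for one user is just attaching another stack, which is why the help-desk scenario above resolves in seconds rather than a reimage.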
If you’ve been wondering about virtualization and what it can do for your organization, now’s a great time to get started. The experts at PC Connection can help you discover how the latest end-user computing technologies work in real-world environments. Whether you want to schedule an assessment, install a Proof of Concept Appliance (PoCA) in your facility, or see a demo in our Customer Briefing Center, we can give you all the information and guidance you need to make an informed end-user computing investment.
Windows Server 2003 was a breakthrough platform. With its reliable and stable environment it’s no wonder why it’s still popular today. Unfortunately, with end of service imminent, it will no longer be viable for enterprise computing. Security, compliance, and maintenance costs will expand rapidly while new security risks pop up all over the horizon.
Here’s a rundown of some of the incredible differences in performance and capabilities you’ll see when you step into the present with Windows Server 2012.
Implement Powerful Identity and Access Management Control
While Windows Server 2003 supports Active Directory Services, Windows Server 2012 R2 adds important identity and access management (I&AM) capabilities.
- Dynamic Access Control allows you to apply data governance across your file servers to control who can access information. It also allows you to audit who has accessed information.
- Windows Server 2012 R2 provides greater I&AM flexibility by supporting virtualization of Active Directory Services. Windows Server 2003 offers no real support for virtualization.
Windows Server 2012 R2 supports virtual desktop infrastructure so you can reduce desktop computing capital and operational costs.
- Hyper-V clustering optimizes the use of virtualized storage resources.
- Hyper-V Replica supports replicating Hyper-V virtual machines to a secondary site to streamline virtualization redundancy and disaster recovery.
- Shared-nothing live migration moves virtual machines, including their storage, memory, and device state, between Hyper-V hosts without any downtime.
Modernize Your Storage
Windows Server 2003 was not architected for modern storage demands. With Windows Server 2012 R2 you can implement live storage migration and enforce storage QoS parameters.
- Storage spaces with tiering allow you to optimize storage cost and performance by utilizing solid state and hard drive storage within the same storage pool.
- The shared virtual hard disk file capabilities allow you to share a VHDX file as a failover cluster so you can protect the application services running inside your virtual machines.
Web and Application Server Capabilities
Needless to say, in 2003 Windows Server was not designed as a Web and application platform. But when you upgrade, you open up a whole new field of possibilities.
- Windows Server 2012 R2 supports multi-tenant, high-density websites and dynamic IP restrictions.
- You’ll also gain greater networking controls, including superior IP address management capabilities and Hyper-V Network Virtualization, which enables end-to-end network virtualization.
With the end of support date for Windows Server 2003 fast approaching, there's never been a better time to plan your data center transformation. Our experts have designed this helpful tool to get you started on the right upgrade path for your unique environment, applications, and workloads.
As cloud adoption continues to accelerate, it appears that every company is moving at least some of its business-critical workloads from on-premises servers up into the ether. Use of public (or hybrid public-private) cloud infrastructure offers myriad benefits, but not every business is ready to take what for many seems like a leap of faith. For some, regulatory or governance considerations are keeping servers and storage on site; for others there’s a lack of clarity about the advantages that cloud migration really offers.
Then there’s Windows Server 2003, the decade-old OS that still powers millions of servers worldwide. After July 14, 2015, any vulnerability in Windows Server 2003 that hackers uncover will not be patched, support for new applications and utilities will not be addressed, and applications and data running on those servers will be at ongoing risk of failure or data loss. In a nutshell, businesses that don’t consider migration are gambling with their data.
So somewhere between migrating to the cloud and remaining on Windows Server 2003 there is a middle ground: migration to newer OS platforms such as Windows Server 2012. Here’s why upgrading to the latest OS makes good business sense.
Let’s start with the obvious. Moving to supported OS platforms reduces the risk of malware or data loss, since vulnerabilities (when discovered) are rapidly addressed.
Second, there are bottom-line benefits. Most older servers run a single application, which is quite inefficient: most Windows Server 2003 CPUs are woefully underutilized, sitting idle while waiting for something to do. This was the driving factor behind server virtualization, which became the underpinning of cloud computing. Since virtual servers completely isolate each application in its own environment, many applications can be hosted on a single physical server without fear of the dreaded blue screen of death in one taking down the others. The result? Fewer servers are needed, saving money on hardware, OS licenses, power, cooling, and real estate.
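To make the consolidation math concrete, here is a minimal sketch. Every number in it is an illustrative assumption, not a benchmark: per-server utilization, the 10x capacity ratio (a stand-in for a decade of CPU improvement), and the 70% target load are all hypothetical.

```python
# Hypothetical consolidation estimate -- numbers are illustrative, not benchmarks.
import math

def hosts_needed(utilizations, capacity_ratio=10, target_utilization=0.7):
    """Estimate how many modern hosts can absorb a set of legacy servers.

    utilizations: average CPU utilization of each legacy server (0..1).
    capacity_ratio: assumed CPU capacity of one new host, measured in
        'legacy server' units (10x is a stand-in for a decade of Moore's Law).
    target_utilization: how hard we are willing to drive the new hosts.
    """
    total_demand = sum(utilizations)                  # in legacy-server units
    usable_capacity = capacity_ratio * target_utilization
    return max(1, math.ceil(total_demand / usable_capacity))

# Twelve legacy servers idling at roughly 15% average CPU:
print(hosts_needed([0.15] * 12))   # the whole fleet fits on a single host
```

Under these assumptions, twelve mostly idle 2003-era servers collapse onto one virtualization host, which is exactly the hardware, licensing, power, cooling, and real estate saving described above.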
Upgrading to a newer OS like Windows Server 2012 is a logical choice for any organization looking to improve efficiency, especially when considering that Microsoft includes its server virtualization platform, Hyper-V, as part of the server OS license. Why not just virtualize Windows Server 2003 servers? First, there are performance issues. The past decade has seen an order of magnitude improvement in CPU power, thanks to Moore’s Law and multiple CPU cores. In many cases the first new server you deploy may have more raw power than the several servers it is designed to replace. Second, there are software and integration issues. Windows Server 2003 won’t run Hyper-V, so older versions of other hypervisors like VMware or Citrix Xen would be required, at an additional cost. Experts agree that virtualizing without upgrading is a classic case of throwing good money after bad.
Are there still Windows Server 2003 servers in your shop? Maybe it’s finally time to get that migration strategy under way, whether in preparation of a cloud migration or just to drive efficiency up and long-term support costs down.
If you’re still running Windows Server 2003 in your data center, you should take steps now to plan and execute a migration strategy to protect your infrastructure. After July 14, 2015, Microsoft will no longer issue security updates for any version of Windows Server 2003.
But what’s that first step? For many, it’s turning to Microsoft to guide you on this journey. Microsoft offers informative online tools and resources that can help you discover, assess, target, and migrate your server resources. For any organization having trouble getting the ball rolling on this massive undertaking, this approach—along with Microsoft’s expert resources—provides a valuable strategic guideline to get your migration underway and through to completion.
First, you’ll want to discover which applications and workloads are running on Windows Server 2003 today. If you’re running legacy Windows Server 2003 and SQL Server 2005 in your environment, the time has come for a server evolution, but moving to modern infrastructure and databases is no small task. You can download the Windows Server 2003 Roles Migration Process infographic, which uses Gartner’s 5R (Re-host, Refactor, Revise, Rebuild, Replace) methods as a helpful reference model.
Next, you’ll need to assess your infrastructure and categorize applications and workloads by type, importance, and degree of complexity. Have your team take the Upgrading Skills to Windows Server 2012 Jump Start course to accelerate certifications in the skills required to maintain Windows Server 2012, and watch the video Re-Architecting Your Infrastructure with Windows Server 2012 and Microsoft System Center 2012 for an architectural discussion of the critical system components and how you can redesign what you have now to be ready for migration.
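The assessment step above can be sketched in a few lines. This is an illustrative model, not a Microsoft tool: the workload names, the 1–3 scores, and the ordering heuristic are all invented for the example.

```python
# Illustrative 'assess' sketch -- workload names, fields, and scores are invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    importance: int   # 1 (low) .. 3 (business-critical)
    complexity: int   # 1 (simple re-host) .. 3 (rebuild required)

def migration_order(workloads):
    # Heuristic: simplest migrations first; among equals, migrate the less
    # critical workloads first to keep early-phase risk low.
    return sorted(workloads, key=lambda w: (w.complexity, w.importance))

inventory = [
    Workload("file-server", importance=2, complexity=1),
    Workload("erp-db", importance=3, complexity=3),
    Workload("print-server", importance=1, complexity=1),
]
print([w.name for w in migration_order(inventory)])
# quick wins come first; the complex, business-critical database waits
# for detailed planning
```

However you score your own environment, the point is the same: categorizing by type, importance, and complexity turns a daunting fleet-wide migration into an ordered list of manageable projects.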
At this point, it might be a good idea to download the Windows Server 2012 R2 trial to familiarize yourself with the new server operating system. Once that’s done, you should feel comfortable enough to proceed to the next step and target a migration destination for each application and workload.
The final step is to officially migrate from Windows Server 2003, and you can build your migration plan internally or collaborate with a partner.
You can download the Microsoft Deployment Toolkit, which provides a unified collection of tools, processes, and guidance for automating server deployments. You should also consider getting the latest Windows Server 2012 and other server infrastructure training at Microsoft Virtual Academy so your team can be prepared to execute the migration and manage the new server infrastructure.
Microsoft also offers other useful resources to help with your Windows Server 2003 migration, including analyst recommendations. The Microsoft website offers free downloads of the IDC whitepaper Why You Should Get Current and of Windows Server 2003 Migration Advice from Gartner, expert insights that can facilitate your Windows Server 2003 migration.
Good luck and welcome to the next phase for your business!
Windows Server 2003 has been a stable and reliable operating system that enterprises have depended on for years, but it’s coming to its end-of-life shortly. Sound familiar? Companies went through a similar experience with a desktop operating system not that long ago. The experiences that enterprises had during that transition provide some valuable lessons for the current one—for which the stakes are even higher.
As with Windows XP last year, many organizations are postponing the shift until they can no longer avoid it. Web analytics firm StatCounter found that Windows XP’s global market share was still nearly 19% last February, just a couple of months before its end-of-life.
For those companies that were behind the eight ball during the migration from XP, one lesson shines through: the support overhead of maintaining legacy platforms after Microsoft ceased providing updates was massively detrimental to their business.
Many companies that postponed Windows XP migrations saw increasing software crashes as new applications requested additional operating system resources. Windows Server 2003 is a largely 32-bit platform, while the Microsoft-recommended migration path, Windows Server 2012 R2, runs only on 64-bit hardware. Similar compatibility problems can be expected for enterprises stubborn enough to remain on the legacy system.
After support ceased, Windows XP users increasingly called internal help desks as they experienced more problems with their computers. This led many companies to allocate additional IT resources toward maintaining desktops and notebooks—often at the expense of enterprise IT initiatives. Internal support costs for maintaining the legacy operating system increased dramatically. Again, enterprises should expect a similar scenario with Windows Server 2003: a server that is no longer receiving regular software updates and security patches will require increased maintenance and troubleshooting.
IT can also learn deployment lessons from Windows XP upgrade strategies. User workstations could be migrated gradually, based on need, without serious or imminent threat to the health of the company, but with server operating systems the stakes are higher. Security and compliance exposure poses a major risk to organizations that continue to use Windows Server 2003 after its support deadline. Those costs won’t just add up—they’ll crush you.
According to Andrew Hertenstein, manager of Microsoft datacenter solutions for En Pointe Technologies, a Microsoft systems integrator, “For most of the compliance specifications out there, we have to be on a patched server. Well, that goes right out the window the day you stay on 2003 because you won't be patched. You won't be up-to-date. Your security vulnerabilities are still there.”
With a server OS, the imperative to migrate quickly and minimize exposure is greater than with a desktop OS. If IT drags its feet on this migration, securing the legacy server infrastructure will require additional investment in firewall and intrusion detection system platforms, driving up capital costs as well as the operational costs of managing additional network devices.
Weaning off successful enterprise software is a real challenge for organizations that have come to depend on a reliable, stable operating system when so many other factors in the IT world are less dependable. If your company is still having trouble justifying the cost or initiating the process for any other reason, a long, hard look at the internal struggles and rewards of your upgrade from XP may provide the proper catalyst for migration. Just remember: this time around, the risks are greater than ever.
As organizations increasingly require access for a growing number of mobile devices, new workloads are required on back-office servers to integrate tablets and smartphones with existing applications, files, and functions. Organizations still running Windows Server 2003 may find they are unable to support the mobility functions that end users now demand to maintain a competitive edge.
Think about it—in 2003, a Harvard student began developing Facebook, and smartphone screens were about two square inches. Windows Server 2003 was introduced to address the needs of enterprise computing, but those needs have changed a great deal since then. Mobile applications have gotten dramatically more functional and ubiquitous, and mobile devices have gotten more powerful and become an essential part of the business world.
Windows Server 2003 will reach end-of-life on July 14, 2015, when Microsoft ceases issuing software updates and security patches. Legacy deployments will no longer receive new releases and will remain exposed to security threats, and it will be nearly impossible to ensure compliance without investing in additional hardware to protect the legacy servers. While evaluating migration options, it’s important to make sure the servers you select are up to the mobility challenges facing your organization.
Many organizations wrestle with whether or not to support bring your own device (BYOD) initiatives, in which employees gain the flexibility to select their own mobile devices for accessing enterprise resources. Making sure the enterprise can secure the mobile devices is crucial.
If you choose to upgrade to Windows Server 2012 R2, you’ll find Active Directory Domain Services and Active Directory Federation Services that support Mobile Device Management (MDM). They now work together to provide access to enterprise resources based on user and device combinations as well as access policies that are defined by IT. BYOD is enabled by the Active Directory Workplace Join. Once a mobile device has been validated as trustworthy, IT can grant conditional access to the user/device combination. This enables single sign-on so that authorized smartphones and tablets can securely access enterprise resources.
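The conditional-access idea described above, granting access only to approved user-and-device combinations, can be sketched as follows. This is a toy model of the concept, not the actual Workplace Join or AD FS implementation; the policy fields and group names are hypothetical.

```python
# Toy model of conditional access -- not the actual Workplace Join / AD FS
# implementation; the policy fields below are hypothetical.
def grant_access(user_groups, device_registered, policy):
    """Grant access only when the user is in the required group AND the
    device satisfies the policy's registration requirement."""
    in_group = policy["group"] in user_groups
    device_ok = device_registered or not policy["require_registered_device"]
    return in_group and device_ok

policy = {"group": "finance", "require_registered_device": True}
print(grant_access({"finance"}, True, policy))    # right group, trusted device
print(grant_access({"finance"}, False, policy))   # blocked: unregistered device
print(grant_access({"sales"}, True, policy))      # blocked: wrong group
```

The key property, as in the real feature, is that neither credential alone is enough: a valid user on an untrusted device, or a trusted device in the wrong hands, is still denied.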
Windows Server 2012 R2 also offers Dynamic Access Control (DAC), which simplifies mobile access and makes it easier to enhance authorization and authentication by applying better security, risk management, and auditing policies in Active Directory.
Mobility places greater demands on server infrastructure, and scalable, feature-rich virtualization via Hyper-V in Windows Server 2012 R2 provides the enterprise with greater flexibility in supporting rapidly increasing mobile demands to support QoS and performance requirements.
In 2003, terms like cloud computing, big data, and BYOD were not yet in the IT lexicon, and dial-up remote access was still a factor in many enterprise networks. Windows Server 2003 adapted as best it could to the new developments in business technology along the way, but it wasn’t designed for today’s applications. It had a remarkable run and served enterprises worldwide for over a decade, but the security, compliance, maintenance costs, and risk are too great for most organizations to withstand once service ends in July.
Advances in mobility management in Windows Server 2012 R2 provide the enterprise with tremendous flexibility in supporting mobile users while protecting enterprise resources.
By the time you read this, the July 14, 2015 deadline for the end of support for Windows Server 2003 may have already passed, leaving millions of servers around the world in limbo. End of support means open season for hackers and malware writers, safe in the knowledge that there are no more patches coming to repair new security holes.
Still, when it comes to migration, it’s definitely a case of “better late than never.” There are many reasons for upgrading—security and governance issues, support for the latest applications, taking advantage of virtualization built into newer server OS versions to name a few—and very few reasons (other than inertia, that most powerful of IT forces) to keep you in the status quo.
If you’re finally ready to take the plunge, here are a few shortcuts that can help cut down the time it takes to get from server oblivion to nirvana.
Measure Twice, Deploy Once
Take a solid inventory of which applications are running on each server. Most 2003 servers are not virtualized, so many run a single application (email, CRM, finance) and are probably underutilized. Chances are: a) there are applications on your servers you’ve completely forgotten about, and b) migration will give you an opportunity to consolidate some workloads using Hyper-V (free with Windows Server 2012), VMware, or Citrix Xen to reduce the total number of servers you need.
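The inventory step lends itself to simple tooling. Here is an illustrative sketch, with made-up server and application names standing in for a real asset-scan export, that flags single-application servers as likely consolidation candidates:

```python
# Hypothetical inventory sketch -- server and application names are made up.
from collections import defaultdict

def consolidation_candidates(scan_results):
    """scan_results: (server, application) pairs, e.g. from an asset-scan
    export. Returns servers running exactly one application -- the easy
    consolidation targets described above."""
    apps_by_server = defaultdict(set)
    for server, app in scan_results:
        apps_by_server[server].add(app)
    return sorted(s for s, apps in apps_by_server.items() if len(apps) == 1)

scan = [
    ("srv-03", "Exchange"), ("srv-07", "CRM"),
    ("srv-09", "Finance"), ("srv-09", "Reporting"),
]
print(consolidation_candidates(scan))   # ['srv-03', 'srv-07']
```

However you gather the raw data, the output is the same: a shortlist of one-app servers that are the natural first wave of virtualization.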
Business users are taking a lesson from consumer applications and want the same intuitive, responsive application support on all the devices they use to access business information. In our “there’s an app for that” business mentality, choosing a platform that simplifies mobile integration—whether Web- or app-based—not only makes users happy, it also makes them more productive.
Want to get up to speed on Windows Server 2012 and the latest virtualization, backup, and security tools at your disposal? While you’re deciding what to buy, take advantage of public cloud provider infrastructure- or platform-as-a-service offerings to familiarize yourself, your operations staff, and your developers with the new server environment. You can begin testing legacy applications and determine beneficial changes long before the new servers arrive.
Don’t Go It Alone
For most IT organizations the mantra of “do more with less” has morphed into “do everything with nothing.” IT resources and staffs are limited at best, forcing many into firefighting mode—hence the inability to get migration off the ground till now. Here’s where coming late to the party may actually play in your favor. Many server vendors, VARs, and integrators have finely honed practices that focus on 2003 migration, often with years of experience under their belts from the run-up to the cutoff date. Working with a partner like PC Connection, one who has been there and done that, can enable you to think about the big picture while your virtual team handles the actual migration and integration of the new systems into your existing network and storage.
There is still time to complete an upgrade with little to no impact on user productivity or data security if you have the right players on your team. Get started now!
Less than a year ago, Gartner characterized the mainstream market action surrounding software-defined networking (SDN) as “mostly just tire-kicking.” That assessment of the market, however, is fast becoming old news. IDC has forecast that the “worldwide SDN market for the enterprise and cloud service provider segments will grow from $960 million in 2014 to over $8 billion by 2018.”
That’s an annual growth rate of nearly 90% and an indication, as IDC writes, that SDN “continues to gain ground within the broader enterprise and cloud service provider markets for datacenter networking.”
SDN is designed to deliver a host of critical networking services—like automated provisioning, network virtualization, and network programmability—to data center and enterprise networks.
The advantage of SDN is that it allows central management of network policies and resources. This all operates through a software-based controller that works with hardware from different vendors.
As you might imagine, SDN eliminates a lot of the routine maintenance and support that ties up IT resources. This gives an enterprise the opportunity to better think strategically, be more flexible and agile, and better leverage cloud applications and a converged infrastructure.
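The controller model described above can be sketched in a few lines. The classes and method names below are invented for illustration; real SDN controllers communicate with switches over protocols such as OpenFlow, but the essential idea, one policy definition pushed network-wide regardless of vendor, is the same.

```python
# Toy sketch of a central SDN controller -- class and method names are
# invented for illustration; real controllers use protocols such as OpenFlow.
class Device:
    def __init__(self, name, vendor):
        self.name, self.vendor = name, vendor
        self.policies = []           # configuration pushed by the controller

    def apply(self, policy):
        self.policies.append(policy)

class Controller:
    """One software controller holds the policy and pushes it to every
    registered device, regardless of vendor."""
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def push_policy(self, policy):
        for device in self.devices:  # network-wide, vendor-neutral rollout
            device.apply(policy)

ctrl = Controller()
for name, vendor in [("tor-1", "VendorA"), ("tor-2", "VendorB")]:
    ctrl.register(Device(name, vendor))
ctrl.push_policy({"rule": "isolate-tenant-42"})
print(all(d.policies == [{"rule": "isolate-tenant-42"}] for d in ctrl.devices))
```

Contrast this with the legacy approach, where the same rule would be entered by hand, in each vendor’s syntax, on every box; that per-device toil is exactly the routine maintenance SDN eliminates.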
With so many obvious benefits, what’s been keeping the adoption rate for SDNs so low until now?
One major factor has been the lack of a compelling business case, or as LightReading’s Mitch Wagner writes, “SDN exists down deep at the bottom of the network, while financial benefits become obvious high up in the application layers.”
But that’s changing as the use cases start adding up and business executives start noticing changes to the bottom line. According to two recent surveys sponsored by Cisco Systems, reported use cases for SDN include unified wired and wireless networks, virtual machine migration, cloud hosting, load balancing, and software-defined clouds. This all adds up to operational efficiency, which in turn translates into stronger financial performance.
Indeed, business goals should guide an enterprise’s SDN strategy. IT professionals who want to get the most out of their SDN deployments should consult with an experienced SDN provider.
One of the most important tech events this year was VMworld 2014, a gathering of VMware employees, IT pros, thought leaders, and industry experts to discuss the latest advances in virtualization and cloud technology. If you were unable to attend, you missed a number of big announcements from VMware. PC Connection’s VMware team was out in full force at VMworld, and we’re here to share with you some of the biggest announcements. For the fullest experience, watch our webinar recap of the whole show.
Their first big announcement was in the area of software-defined data centers:
vCloud Suite 5.8 brings new features in site recovery manager and user data protection, plus improved interoperability with NSX, allowing you to customize the provisioning of NSX firewall routing services using vCloud Suite technologies. And it features vSphere Support Assistant, which is a free vCenter plugin you can use to identify issues before they become actual problems.
The next big announcement was of the EVO offerings. EVO is VMware’s brand for their hyper-converged infrastructure technologies, and VMware announced two versions of EVO converged infrastructure solutions: EVO Rail and EVO Rack. EVO Rail is a 2U 4-node rack appliance that you can purchase directly from your server hardware vendor with VMware technologies fully enabled on it. It’s the easiest way to roll out a software-defined data center. EVO Rack delivers a full hyper-converged infrastructure used to build and operate a high performance software-defined data center.
The third big announcement was the convergence of VMware’s management products into the vRealize line of offerings. The vRealize Suite includes the vRealize Cloud Management Platform along with vRealize Operations Air, vRealize Automation Air, and vRealize Business Air. The vRealize Suite will greatly help clients manage the delivery of IT services, whether they are hosted in their own data center or by an external cloud provider, all under one unified management experience.
Those are just a taste of the announcements that came out of VMworld 2014. For a deeper dive on these and other items from VMworld, check out our webinar with Sam Tessier, technical partner manager at VMware, where he goes through all the details of this year’s show.
Windows Server 2003 has been viewed as a secure and bulletproof platform for well over a decade, but the cyber security landscape has changed dramatically over that time—and new threats appear continuously. Sadly, legacy servers may be ill-equipped to handle them, and organizations should pursue server upgrades that protect both hardware and software against attack. The plain fact is that when Microsoft ceases distributing security updates and patches for Windows Server 2003 on July 14, it will become more expensive for most companies to secure legacy servers than to upgrade them.
The costs of defending legacy, unsupported servers against unacceptable exposure to cyber criminals will siphon resources away from IT budgets, especially if the organization has to implement new firewalls and intrusion detection systems. Plus, applications running on Windows Server 2003 will likely fail to meet compliance standards and regulations, creating an additional cost burden to shore up security.
The silver lining is that when you upgrade your Windows Server 2003 platforms, you increase security by capitalizing on new software and hardware security capabilities that were heretofore unavailable on your legacy system. Upgrading security using Windows Server 2012 R2 and the Intel Xeon processor E5 v3 product family allows the enterprise to ensure ongoing protection while improving efficiency and productivity. It provides the enterprise with a secure and supported server environment that enables continued compliance with regulatory requirements that demand ongoing software updates.
Windows Server 2012 R2 offers businesses an enterprise-class, multi-tenant data center infrastructure that simplifies the secure deployment of IT services and enables the secure, streamlined integration of premises-based and cloud-based applications.
Access to corporate resources such as workloads, storage, and networks helps increase the agility of your business while protecting corporate information. Windows Server 2012 R2 also provides frameworks, services, and tools to increase security, scalability, and elasticity. Evolved features such as centralized SSL certificate support and application initialization help improve enterprise security and server performance. Your IT staff can provide consistent access to corporate resources by more efficiently managing and federating user identities and credentials across the organization while providing secure, always-available access to your corporate network.
Implementing Windows Server 2012 R2 on more secure hardware platforms also helps you further secure enterprise infrastructure. Deploying Windows Server 2012 R2 software on servers based on the Intel Xeon processor E5 v3 family protects the infrastructure by providing a hardware-assisted security foundation that strengthens malware protections and guards the operating system against escalation of attacks. It also provides added protection from threats against hypervisors, firmware, and other prelaunch software components.
By upgrading to Windows Server 2012 R2 on servers running E5 v3 processors, you gain access to powerful security and performance benefits, such as accelerated data encryption, strengthened malware protection, and the ability to create a trusted boot environment that protects your server landscape against malware and other tampering. You can also accelerate encryption and decryption via Intel Advanced Encryption Standard New Instructions (Intel AES-NI).
For more information on the security advantages of upgrading to Windows Server 2012 R2 on servers running E5 v3 processors, watch this brief video.