How to get started with Chef

The content below is taken from the original (How to get started with Chef), to continue reading please visit the site. Remember to respect the Author & Copyright.

When you have dozens or even hundreds of machines to manage, manual management just isn’t an option. Software updates, security patches, and configuration changes at this scale require automated tools that handle these important tasks in a timely and consistent way. Enter automated configuration management platforms like Chef. If you’ve ever wanted to automate software installation or server configuration across all of your servers with one command, you’re in the right place. In this post, we’ll give you a peek at the Cloud Academy course on how to get started with Chef.

Whether you’re a DevOps Engineer, Site Reliability Engineer, Systems Administrator, or Developer, this course can help you learn how to use Chef for managing your infrastructure at scale.

Get started with Chef 

Before getting started, it’s worth noting that Chef is both the name of the configuration management software you’ll be learning about in this course and the company that created it. For the purposes of this post, we’re referring to Chef the software.

Chef is a platform that allows you to automate the creation, configuration, and management of your infrastructure. This includes installing software, making sure services are running, setting firewall rules, and other similar tasks. For example, you can tell Chef that you want all of your web servers to have the Apache web server installed and running, and Chef will make sure that happens. You can also use Chef to create entire cloud infrastructures. For example, if you need a cloud environment that runs virtual machines in an auto scaling group behind a load balancer, you could use Chef to create all of that.
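
As a sketch of what the Apache example looks like in practice, a minimal Chef recipe might read as follows (the file layout and package name are illustrative; the package is called apache2 on Debian/Ubuntu and httpd on RHEL-family systems):

```ruby
# recipes/default.rb -- an illustrative sketch, not from the course
# Install the Apache web server package.
package 'apache2'

# Ensure the service is enabled at boot and currently running.
service 'apache2' do
  action [:enable, :start]
end
```

Chef code is declarative: you describe the desired state, and the chef-client converges each node to it, doing nothing if the node already matches.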

How is Chef used?

To give you a better idea of what Chef is capable of, let’s look at how some Chef customers use the platform.

Media and marketing company Gannett’s deployments were taking days, and in some cases even weeks. They didn’t have visibility into what was actually being done, and by whom. Chef came into the picture when an engineer needed to replicate a production AWS environment and lacked a way to do it consistently. When other engineers saw how he had solved the problem with Chef, it sparked conversations about pushing for more automation. The end result was a move toward DevOps methodologies, with Chef serving as the core automation tool.

Deployments that previously took days are now done in minutes because all of the infrastructure changes that they need to make are specified in code. When you have infrastructure defined as code, it allows you to apply years of software engineering practices to your infrastructure.

This scenario isn’t uncommon. A lot of companies are still doing things manually, and deploying software on an infrequent schedule. Sometimes, all it takes is seeing a potentially better way.

Facebook uses Chef to manage tens of thousands of servers. This means that with one command, Facebook can ensure that thousands of servers are configured as needed, and in exactly the same way. Another great example is the company Riot Games, creators of the game League of Legends. They have servers running 24/7, all around the world, and they use Chef to automate the management of those servers.

As you know, manual efforts do not scale well, but automation does. With Chef, you have a clean and consistent way to automate the management of your infrastructure. Once you have the desired configuration written in code, it doesn’t matter how many servers you need to manage because the same code will be executed on all of them.

Get started with Chef: Chef architecture

Chef has three core components: Workstations, the Chef Server, and nodes. Here is how they work together: the Workstation is where you write the code that specifies how different nodes should be configured. Once that code is written and tested, it is deployed to the Chef Server. From there, you can instruct the nodes to download the latest code from the Chef Server and execute it.

Let’s drill down a little further into each component:

Workstation

In Chef terminology, a Workstation is a computer where the Chef Development Kit is installed. It’s where you actually write and test the code that Chef uses to manage servers, and it’s also used to interact with the Chef Server. The development kit contains everything required to develop with Chef, including a set of tools that are used to interact with the different components of Chef.

Some of the most important tools are:

The “chef” executable is used to generate different code templates, execute system commands under the context of the Chef Development Kit, and install gems into the development kit environment. (RubyGems is a package manager for Ruby, and an individual package is referred to as a gem.) So, the “chef” executable is used to help with development-related tasks.
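
A few typical invocations, as a sketch (the cookbook, recipe, and gem names are placeholders):

```shell
# Generate a skeleton cookbook, then a recipe inside it
chef generate cookbook my_cookbook
chef generate recipe my_cookbook my_recipe

# Install a gem into the Chef Development Kit's embedded Ruby
chef gem install knife-ec2
```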

The chef-client is a key component of Chef because it’s the driving force behind nodes. It is an executable that can also be run as a service, and it’s responsible for things like registering a node with the Chef Server, synchronizing code from the Chef Server to the node, and making sure the node that is running the chef-client is configured correctly, based on your code. The chef-client is a part of the development kit, in addition to being run on nodes. It can be used to configure any node it runs on. So, if you’re going to test your code, you’ll need the chef-client.

Ohai is required by the chef-client executable. Ohai gathers attributes about a node, such as the operating system and version, network, memory and CPU usage, host names, configuration info, and more. It is automatically run by the chef-client, and the information it gathers is accessible to you in code. Having that information allows you to do things like conditionally execute code, or set configuration file values. The attributes that Ohai collects are saved on the Chef server, which allows you to use them to search for nodes based on the values. For example, if you wanted to search for all nodes running inside of AWS, you could search based on the EC2 attributes that Ohai automatically collects, and saves to the Chef Server.
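
As an illustration of using Ohai data in code, a recipe can branch on the automatic attributes exposed through the `node` object (a sketch; the attribute names follow standard Ohai output):

```ruby
# Ohai makes node facts available to recipes via `node`.
if node['platform_family'] == 'debian'
  package 'apache2'
else
  package 'httpd'   # RHEL-family systems name the package differently
end
```

Attributes such as node['ec2'] are only populated when Ohai detects the node is running in EC2, which is what makes the “search for all AWS nodes” example possible.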

The “knife” is a versatile tool that serves multiple roles. It is used to interact with the Chef Server and includes tasks such as uploading code from a workstation, setting global variables, and a lot more. The knife is used for installing the chef-client on nodes. It can even provision cloud infrastructure.
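
A sketch of typical knife invocations (the host address and names are placeholders):

```shell
# Upload a cookbook from the workstation to the Chef Server
knife cookbook upload my_cookbook

# Bootstrap a node: install the chef-client over SSH and register it
knife bootstrap 203.0.113.10 --ssh-user ubuntu --sudo --node-name web01

# List the nodes the Chef Server knows about
knife node list
```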

Kitchen is an important tool that is used for testing the Chef code you develop. Testing is an important requirement for any configuration management software. Chef gives you the power to run commands across your entire infrastructure at the same time. This means that you could update thousands of servers at once. If you’re executing untested code, you run the risk of breaking some of the components of your infrastructure with the push of a button. Kitchen allows you to test your code against one or more virtual machines at the same time.
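
The usual Kitchen workflow, sketched below, is driven by a .kitchen.yml file in the cookbook that declares which virtual machines and test suites to use:

```shell
kitchen create     # spin up the test virtual machine(s)
kitchen converge   # run the chef-client against them
kitchen verify     # run your automated tests
kitchen destroy    # tear the machines down

# Or run all four steps in one shot:
kitchen test
```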

Berkshelf is a dependency management tool that allows you to download code shared by the Chef community. As you begin to develop with Chef, you’ll find yourself running into problems that have already been solved by other developers. Developers frequently share their work on the Chef Supermarket so you don’t have to develop the functionality for yourself.
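
Dependencies are declared in a file named Berksfile; a minimal sketch (the cookbook name and version constraint are illustrative):

```ruby
# Berksfile -- illustrative
source 'https://supermarket.chef.io'

# Pull a community cookbook (and its dependencies) from the Supermarket
cookbook 'apache2', '~> 8.0'
```

Running `berks install` then downloads the declared cookbooks into a local cache.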

Chef Server

Chef uses a client-server architecture, which means there are two components: a server and zero or more clients. The server component of the client-server architecture in this case is the Chef Server. It is a central hub where all of your automation code lives. It’s also responsible for knowing about all of the nodes that it manages. At its core, the Chef Server is a RESTful API that allows authenticated entities to interact with the different endpoints.

For the most part, you won’t need to interact with the API directly. At least not in the early stages while you’re still learning Chef. Instead, you will use one of the tools built around the API. There is a web-based management UI that allows you to interact with the API via a browser, or the chef-client, which is the way nodes interact with the API. They grab whatever they need from the server and then perform any configuration locally on the node itself to reduce the amount of work the server needs to perform. Finally, there is the knife command, which you may recall is used to interact with the Chef Server.

Nodes

Chef refers to clients generically as nodes. Because Chef is capable of managing all kinds of different devices, nodes are any type of machine capable of running the “chef-client” software. It could be a physical server, an on-premise or cloud-based virtual machine, a network device such as a switch or router, and even containers.

In order for Chef to configure a node, the chef-client needs to be installed on the node. The chef-client is responsible for making sure the node is authenticated and registered with the Chef server. The chef-client uses RSA public key-pairs to authenticate between a node and the Chef server. Once a node is registered with the Chef server, it can access the server’s data and configuration information.
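
Once a node is registered, convergence is just a matter of running the client on it (a sketch; the run-list name is illustrative):

```shell
# Run the node's configured run-list
sudo chef-client

# Or run an ad-hoc override run-list for testing
sudo chef-client --override-runlist 'recipe[my_cookbook]'
```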

Next Steps

So far, we’ve covered Chef at a high level. In our course, Getting Started with Chef, we’ll dig deeper into the Chef Workstation to talk about the chef-repo and Cookbooks, and we’ll show you how to create your first recipe, configure the Chef Server for basic development, and a lot more.

By the end of the course you should be able to:

  • Understand the use cases for Chef
  • Explain the Chef architecture
  • Describe the components of Chef
  • Create a simple cookbook

Getting Started with Chef is free for Cloud Academy subscribers. If you’re not already a Cloud Academy member, we invite you to try Cloud Academy with our free 7-day trial where you’ll have full access to Cloud Academy video courses, hands-on labs, and quizzes throughout the trial period.

10 Reddit Tips and Tricks to help you become a master Redditor

Reddit is one of the most prevalent social news platforms used for sharing links and information. It lets users upvote or downvote any content submitted by other users. Reddit uses ‘Karma’ points to discourage spammers from posting links indiscriminately. There is a limit on posting links to the site, so if you are a spammer looking for a platform to paste your links at random, Reddit isn’t for you. The platform is quite strict with spammers and takes no time to delete spam content. A user earns ‘Karma’ points when someone comments on or upvotes their link.

While it is a very popular social news platform, many people are still not getting the best out of it. In this post, we will cover some interesting and helpful Reddit tips and tricks to help you do just that.

Reddit tips and tricks

1. Subreddits are very important

If you are new to Reddit, remember that Subreddits are very important. Subreddits are the categories designed to organize the content shared by millions of users. Each Subreddit is dedicated to a particular topic, such as education, computers, Windows, or health. Every time you submit a link, you have to select the Subreddit most relevant to your link’s niche. There are thousands of Subreddits, and it is important to select the best-fit category so that more targeted viewers and readers engage with your posts.

2. Make a slideshow of images in Reddit

If you want to see the images on any particular Subreddit, you can create a quick slideshow. All you have to do is go to your desired Subreddit page and tweak the URL a bit: add the letter ‘p’ after ‘reddit’ in the URL, and you can watch a quick slideshow of the images in that particular Subreddit. For example, if you want to view pictures of flowers, take the Subreddit URL http://bit.ly/2knmIoG, add the letter ‘p’ to make it http://bit.ly/2kHaQds/, and you can watch a slideshow of all the flower pictures posted on Reddit by various users.

3. Create & use MultiReddits

You can club similar Subreddits together by creating a MultiReddit and adding the Subreddits to it, so as to avoid repetitive posts. To create a MultiReddit, log in to your Reddit account, go to the left panel, click on Create, and follow the instructions. When you create a MultiReddit, make sure you give it a name that suits its genre.



4. Reddit commenting tips

Comments play a major role in earning Karma points on Reddit. While it is important that your comments make sense, adding some formatting such as line breaks, bold/italic, or strikethrough can make your comment more noticeable.

  • To display a line break in your comment or to start a new paragraph, put two spaces at the end of the line.
  • To make a word bold, type it between two asterisks (**word**); to italicize a word, type it between single asterisks (*word*); and type ~~word~~ for strikethrough.
  • Use Unicode characters to add emoticons to your Reddit comments. Emoticons make your comments more expressive; for example, the ಠ_ಠ emoticon shows your disapproval.

5. Random Button

Have you ever used the Random button on the top menu ribbon? Clicking it takes you to a random Subreddit. So, if you are bored of the usual Subreddits shown on the home page, keep clicking the Random button: one click might take you to the Subreddit CampfireCooking, and the next might open the Subreddit JavaScript.

6. Reddit Mobile Mode

Did you know that Reddit has a mobile version specially designed for phones? To browse Reddit in its mobile version, just add an ‘i’ to the main URL, i.e., i.reddit.com.

7. Unblock the Subreddits

You may come across blocked Subreddits in your workplace, but don’t worry, you can unblock them anytime. You can do so either by using HTTPS or by adding a ‘+’ symbol at the end of the blocked Subreddit’s name, e.g., http://bit.ly/2kH5rDa

8. Increase your Karma Points

This works slowly, but it works. You can increase your Karma points by posting relevant comments in popular Subreddits. The comments don’t need to be long; short but relevant ones will help you increase your Karma points for sure.

Answering commonly asked questions will also help you earn a good amount of Karma points. Make sure your answer is a bit funny but sound. More upvotes mean more Karma points. Posting funny, elaborate stories as answers also helps attract attention, followed by upvotes and, finally, Karma points.

9. Use shortkeys with Reddit Enhancement Suite

Keyboard shortcuts (shortkeys) save time and increase productivity. Reddit keyboard shortcuts will certainly give you a better experience. To use these shortcuts you first need to install the Reddit Enhancement Suite.

Below is the list of shortkeys you get with the Suite:

  • Z = Downvote
  • A = Upvote
  • J = Next
  • H = Hide
  • U = User
  • S = Save
  • F = Refresh
  • I = Inbox
  • R = Go to targeted subreddit
  • C = Open post
  • X = Open preview
  • L = Open in new tab

The Reddit Enhancement Suite also lets you tweak the user interface, watch videos right from the front page, and enlarge pictures.

10. Create Multiple Accounts in Reddit

If you are using Reddit for digital marketing, creating more than one account will be helpful. Make one personal account and two marketing accounts, and keep one or two accounts for backup. Creating Reddit accounts is quick and easy, but remember to use accounts that are a few months old for marketing purposes.

Hope these tips help you get the best out of Reddit. Windows 10 users may want to check out these Reddit apps.



Amazon releases Chime, a new cloud-based UCaaS

Amazon today announced Chime, a unified communications as a service (UCaaS) offering hosted in Amazon Web Service’s cloud.

Amazon is entering a crowded market of UC solutions, some of which are already cloud-based and others that run on customer premises. Nevertheless, analysts who track Amazon say the company has an opportunity here.

Chime uses a mobile or desktop application that is available across iOS, Android and Windows environments. It uses noise-cancelling wideband audio, which Amazon says allows it to deliver high quality audio and video experiences. When a meeting starts, Chime calls all the participants, who can join by clicking a button; there is no PIN required. Chime shows a visual roster of all attendees, which Amazon says eliminates the “who just joined” questions that can occur on conference calls. Any user has the ability to mute a noisy participant. Advanced editions of Chime allow IT to centrally manage users and settings, including integrating it with existing corporate directories.

Chime’s pay-as-you-go licensing model is based on how much it is used in an organization, whereas many other UCaaS offerings require licensing contracts and seats. A Basic edition of Chime is free and allows users to attend meetings and make video and voice calls. A Plus edition is $2.50 per user per month and adds some user management features, such as linking Chime to an Active Directory and retaining up to 1GB of message history per user. A Pro edition allows screen sharing for up to 100 users and includes unlimited Voice over IP (VoIP) for $15 per user per month. There is also a per-minute rate for conference-call dial-ins, which in the U.S. is $0.003, or $0.012 for toll-free.
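
To make the pay-as-you-go model concrete, here is a small back-of-the-envelope calculator using the figures above (a sketch; it ignores free Basic-tier users and assumes the quoted U.S. dial-in rates apply to every minute):

```ruby
# Published per-user monthly prices and U.S. per-minute dial-in rates.
PLUS_PER_USER     = 2.50
PRO_PER_USER      = 15.00
DIAL_IN_PER_MIN   = 0.003
TOLL_FREE_PER_MIN = 0.012

# Estimated monthly bill for a mix of Plus and Pro users plus dial-in minutes.
def chime_monthly_cost(plus_users:, pro_users:, dial_in_minutes: 0, toll_free: false)
  rate = toll_free ? TOLL_FREE_PER_MIN : DIAL_IN_PER_MIN
  plus_users * PLUS_PER_USER + pro_users * PRO_PER_USER + dial_in_minutes * rate
end

# 40 Plus users, 10 Pro users, 1,000 regular dial-in minutes
puts chime_monthly_cost(plus_users: 40, pro_users: 10, dial_in_minutes: 1_000)
```

Under these assumptions, 40 Plus users and 10 Pro users cost a fixed $250 per month, with dial-in minutes billed on top.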

Anyone can download and use Chime, but Amazon partners Level 3 and Vonage will offer supported versions of it when it is generally released in the second quarter. In a press release announcing Chime, Amazon cited the retailer Brooks Brothers which deployed the offering in a pilot before its release. Internal adoption reached 90% of the company’s corporate staff without any formal rollout or training, IT director Phillip Miller said.

The UC market is fragmented into two major buckets. The first is on-premises platforms, which have been the traditional deployment method and remain the most common; Cisco Unified Communications Manager (Unified CM), the Avaya Aura platform, and Mitel MiCollab are some of the big offerings here.

In recent years with the advent of cloud computing, the UCaaS market has evolved. Vendors like Cisco with WebEx, GoTo Meeting, Intercall and PGI offer cloud and application-based platforms in which organizations don’t have to install hardware and software packages to run UC systems.

“It’s a market that’s continually in transition,” says Bern Elliot, distinguished analyst at Gartner who tracks the UC industry. Gartner estimates the cloud-based UC market was $12 billion in 2016, growing at 15% to $22 billion in 2020.

But Elliot says Amazon’s entrance into the UCaaS market should also be viewed in a larger context. In the broader cloud market, two of Amazon’s biggest competitors are Microsoft and Google, both of whom have strong offerings across not only infrastructure as a service, but enterprise applications as well. Microsoft has Office 365 and Google has its G Suite of work apps. In the UC space, Microsoft has Skype for Business and Google has Hangouts. Amazon before Chime did not have a competing UC offering.

Chime is the latest in a series of enterprise applications offerings rolled out by Amazon in recent years. Amazon WorkDocs is a file storing and sharing service (competitive with Microsoft OneDrive and Google Drive). WorkSpaces is a virtual desktop offering, and WorkMail is an email and calendaring application.

“Amazon is slowly amassing a digital workplace portfolio,” Elliot says, adding that Amazon’s enterprise application offerings aren’t nearly as well known or established as Microsoft or Google’s, yet.

“If you take the long view, which Amazon certainly does, they want to position themselves as a broader enterprise partner and be able to talk to customers not only about their infrastructure, but applications too,” Elliot explains. “Microsoft and Google can have those conversations, and Amazon would like to be able to too.”

Around-the-world cyclist has bike and all his possessions stolen in London

Rob Lutter had his bike and ‘entire life’s work’ stolen from outside Co-op in Kingston on Friday

Rob Lutter’s stolen Kona Rove Al bike.

Around-the-world cyclist, adventurer, photographer and author Rob Lutter has had his bike and all of his possessions stolen in London.

AWS Direct Connect Update – Link Aggregation Groups, Bundles, and re:Invent Recap

AWS Direct Connect helps our large-scale customers to create private, dedicated network connections to their office, data center, or colocation facility. Our customers create 1 Gbps and 10 Gbps connections in order to reduce their network costs, increase data transfer throughput, and to get a more consistent network experience than is possible with an Internet-based connection.

Today I would like to tell you about a new Link Aggregation feature for Direct Connect. I’d also like to introduce our new Direct Connect Bundles and describe how we used Direct Connect to provide a first-class customer experience at AWS re:Invent 2016.

Link Aggregation Groups
Some of our customers would like to set up multiple connections (generally known as ports) between their location and one of the 46 Direct Connect locations. Some of them would like to create a highly available link that is resilient in the face of network issues outside of AWS; others simply need more data transfer throughput.

In order to support this important customer use case, you can now purchase up to 4 ports and treat them as a single managed connection, which we call a Link Aggregation Group or LAG. After you have set this up, traffic is load-balanced across the ports at the level of individual packet flows. All of the ports are active simultaneously, and are represented by a single BGP session. Traffic across the group is managed via Dynamic LACP (Link Aggregation Control Protocol – or ISO/IEC/IEEE 8802-1AX:2016). When you create your group, you also specify the minimum number of ports that must be active in order for the connection to be activated.
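
With the AWS CLI, creating and growing a LAG looks roughly like this (a sketch; the location code, names, and IDs are placeholders):

```shell
# Create a LAG of two 1 Gbps connections at a Direct Connect location
aws directconnect create-lag \
    --location EqDC2 \
    --number-of-connections 2 \
    --connections-bandwidth 1Gbps \
    --lag-name my-lag

# Later, fold an existing standalone connection into the LAG
aws directconnect associate-connection-with-lag \
    --connection-id dxcon-EXAMPLE \
    --lag-id dxlag-EXAMPLE
```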

You can order a new group with multiple ports and you can aggregate existing ports into a new group. Either way, all of the ports must have the same speed (1 Gbps or 10 Gbps).

All of the ports in a group will connect to the same device on the AWS side. You can add additional ports to an existing group as long as there’s room on the device (this information is now available in the Direct Connect Console). If you need to expand an existing group and the device has no open ports, you can simply order a new group and migrate your connections.

In the Direct Connect Console, you can make use of link aggregation in two ways: by creating a new LAG from scratch, or by creating a LAG from your existing connections.


Link Aggregation Groups are now available in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions and you can create them today. We expect to make them available in the remaining regions by the end of this month.

Direct Connect Bundles
We announced some powerful new Direct Connect Bundles at re:Invent 2016. Each bundle is an advanced, hybrid reference architecture designed to reduce complexity and to increase performance. Here are the new bundles:

Level 3 Communications Powers Amazon WorkSpaces – Connects enterprise applications, data, user workspaces, and end-point devices to offer reliable performance and a better end-user experience.

SaaS Architecture enhanced by AT&T NetBond – Enhances quality and user experience for applications migrated to the AWS Cloud.

Aviatrix User Access Integrated with Megaport DX – Supports encrypted connectivity between AWS Cloud Regions, between enterprise data centers and AWS, and over VPN access to AWS.

Riverbed Hybrid SDN/NFV Architecture over Verizon Secure Cloud Interconnect – Allows enterprise customers to provide secure, optimized access to AWS services in a hybrid network environment.

Direct Connect at re:Invent 2016
In order to provide a top-notch experience for attendees and partners at re:Invent, we worked with Level 3 to set up a highly available and fully redundant set of connections. This network was used to support breakout sessions, certification exams, the hands-on labs, the keynotes (including the live stream to over 25,000 viewers in 122 countries), the hackathon, bootcamps, and workshops. The re:Invent network used four 10 Gbps connections, two each to US West (Oregon) and US East (Northern Virginia).

It supported all of the re:Invent venues.

Jeff;

Finesse Lets You Schedule Your Dropbox Files For Deletion

Web: Your Dropbox storage is limited, so you don’t want to waste it on files that you won’t need in the future. Finesse lets you schedule certain files in Dropbox to be deleted when you’re done with them.

Read more…

Microsoft’s New Tools Make Surface Devices Better Suited For The Enterprise

Microsoft’s line of premium laptops and tablets has been well received by enterprise customers and is being deployed all around the globe. The latest Surface devices, which all come with Windows Hello and are made of high-end materials, ship with Windows 10, but to give these clients more control over the devices, Microsoft is providing new tools for better management.

In some environments, where preventing data leaks is of the highest priority, corporations will lock down hardware so that USB drives and webcams cannot be operated by the user. To help its corporate customers gain more control over their own devices, Microsoft has announced Surface Enterprise Management Mode (SEMM), which can be deployed to the Surface Pro 4, Surface Book, and Surface Studio.

This new tool allows an organization to take ownership of a device and lock down the hardware configurations within the device firmware. Hardware rules can be applied to Wi-Fi networks, Bluetooth, Ethernet, time of day, application access and certificates that can be included in initial deployments or dynamically pushed via the cloud.

In the event that a Surface device is lost or stolen with SEMM deployed, it requires both physical possession and a unique certificate to make any changes to the configuration of the device.

The goal of this new tool is obvious: to help move more of Microsoft’s hardware into the enterprise. Giving IT admins more control over the hardware helps match what is considered best-in-class capabilities offered by other manufacturers, and now that the Surface line offers these tools too, it opens the door to additional customers who handle sensitive data.

Even though Microsoft hit a bumpy road when the company launched its Surface brand several years ago, it has found success with the Pro 3, Pro 4, and Surface Book. The product line frequently tops a billion dollars in a single quarter, and the company is expected to refresh its laptop and tablet hardware in the first half of 2017.

The post Microsoft’s New Tools Make Surface Devices Better Suited For The Enterprise appeared first on Petri.

Microsoft Increases Office 365 Security With Three New Tools

Microsoft’s Office 365 platform is nothing short of a huge success for the company. With tens of millions of users accessing the tools every day, the company has successfully pivoted an on-premises piece of software to a cloud-based platform of productivity tools.

At Ignite last year, Microsoft began introducing new security features into the platform that started the process of turning this service into a new security layer for the company. Today, Microsoft is announcing three new features for the platform that will further enhance Office 365.

Office 365 Threat Intelligence is now available in private preview with general availability planned for later this quarter. This service provides near real-time insight into the threats that impact a network (malware, adware, ransomware, etc.) and can help isolate or deflect these malicious threats.

A new public preview of Office 365 Advanced Data Governance has been announced; this feature uses the company’s foundation of machine learning to identify and protect data that may be exposing an organization to unnecessary risk. This appears to be an interesting feature but I’ll need a bit more time to fully dig in to see how it works and its effectiveness.

Finally, there is the new Office 365 Secure Score; Tony has a great write-up about this tool that’s worth checking out for a deeper dive. This feature allows IT pros to evaluate the strength of their Office 365 configuration and model how simple changes can reduce their security vulnerabilities.

Microsoft says that they scan over 200 billion emails each month for malware and phishing which gives them a huge graph of data that allows them to more effectively prevent attacks from reaching the end user. On average, for the last six months, the company says they have blocked 200,000 exploit attempts per day.

All of these new features will enhance Office 365 and, more importantly for Microsoft, will further cement its productivity tools as the premier applications in this sector. While building a word processor, a presentation tool, or even a spreadsheet is relatively easy to do with modern development languages, creating the security layer that Office 365 provides is another story.

The post Microsoft Increases Office 365 Security With Three New Tools appeared first on Petri.

Office 365 Drops Site Mailboxes. What Should You Do Next?

Site Mailboxes message

Office 365 Starts the Countdown for Site Mailbox Termination

According to a note (MC92090) published through the Office 365 Admin Center on January 31, 2017, beginning in March 2017, SharePoint site owners will no longer be able to create new site mailboxes. Existing site mailboxes will function until they are replaced by something else. The news was expected, but it poses some challenges for tenants who have deployed site mailboxes to serve purposes like contract management that involve a mixture of email communication and document management.

Site Mailboxes 101

A site mailbox allows users to share email information along with pointers (“stubs”) to files held in a document library. Administrators create a site mailbox by adding the mailbox app to a site, which causes SharePoint to create a new Exchange mailbox. Users can then create and send messages from the site mailbox or move items from their mailboxes to the site mailbox to share with other members of the site. Figure 1 shows how Outlook presents the stubs for documents stored in a SharePoint document library associated with a site mailbox.

Figure 1: Site Mailbox in use with Outlook (image credit: Tony Redmond)

Customers never embraced the concept of site mailboxes and their usage was low, even within Office 365 where Microsoft took care of the work required to integrate SharePoint and Exchange. The advent of Office 365 Groups and the continued popularity of shared mailboxes provided customers with sufficient means to share information. The long-term direction is to replace site mailboxes with Groups, and Microsoft expects to provide a migration tool in late 2017.

The Difference with Office 365 Groups

Two major features distinguish site mailboxes from Office 365 Groups. First, the integration between site mailboxes and Outlook clients functions more like a shared mailbox. For instance, users can drag and drop items from their personal mailbox or a shared mailbox into the site mailbox. Unlike the conversations stored in group mailboxes, Exchange treats items stored in a site mailbox as messages that retain all the characteristics of the original messages.

Second, site mailboxes use pointers to represent files. The pointers hold some attributes, such as the author, file size, and document name, and a synchronization process keeps the pointers updated. Office 365 Groups always defer to the browser interface when users want to open files in the group document library.

What Now?

The question for those using site mailboxes is how to proceed. They can wait until Microsoft delivers the migration tool to move existing site mailboxes to Office 365 Groups, but what should they do to satisfy new user needs after March 2017? The options are to use:

  • Shared mailboxes.
  • Office 365 Groups.

The answer will be different depending on the business needs. Shared mailboxes are best if the functionality required is email-centric, like the need to provide a single repository for a team to handle collective tasks. Dedicated folders, such as one for each customer or project, can hold any documents related to the tasks.

Office 365 Groups are best when the need is more document-centric. For example, a team managing contracts on behalf of a company is likely to need some of the document management capabilities found in document libraries, like check-in/check-out. In these circumstances, an Office 365 group is the best option.

OWA’s Access to Group Mailboxes

Interestingly, although Outlook desktop does not allow you to drag and drop (or copy) items from personal mailboxes to a group mailbox, OWA is perfectly happy to accommodate this functionality. To do this, you add the group mailbox to OWA as a shared folder, which is OWA-speak for a shared mailbox. Here’s how:

  • Expand the Folders section of OWA to expose your mailbox name.
  • Select the mailbox name and select “Add shared folder” from the right-click menu.
  • Input the name (or SMTP address) of the group mailbox you want to access (Figure 2). In this case, I’m selecting the Exchange Grumpy Old Men group (a fine collection of humans). See this article for the steps used to create this group by converting an email distribution group.

Figure 2: Adding a group mailbox as an OWA shared folder (image credit: Tony Redmond)

  • OWA adds the group mailbox to its resource list.
  • Click the group mailbox to open it and expose its folders. Group conversations are in the Inbox, but all the other folders are available (Figure 3).
  • You can now drag and drop or copy items from your mailbox to the group mailbox.

This access probably works because OWA treats group mailboxes in the same way as shared mailboxes. The same technique does not work with Outlook desktop because you cannot add a group mailbox to an Outlook profile in the same way as you can add a shared mailbox.

Figure 3: Accessing the items in a group mailbox Inbox with OWA (image credit: Tony Redmond)

Collaboration Options

Microsoft offers many collaboration methods within Office 365 – Groups, Teams, Yammer, shared mailboxes, and plain-old email. It is reasonable to expect that they should whittle down the less successful methods, which is exactly what has happened with site mailboxes. Transitions are painful for users and tenant administrators alike. At least there are some places to go!

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Office 365 Drops Site Mailboxes. What Should You Do Next? appeared first on Petri.

.NET: Manage Azure Managed Disks

The content below is taken from the original (.NET: Manage Azure Managed Disks), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are announcing beta 5 of the Azure Management Libraries for .NET. Beta 5 adds support for Azure Managed Disks.

Today, Microsoft announced the general availability of Azure Managed Disks, which simplifies the management and scaling of Virtual Machines. You simply specify the size and type of disk you want for your Virtual Machines; you do not have to worry about creating and managing Storage Accounts.

You can use the Azure Management Libraries for .NET to manage Managed Disks.

You can download beta 5 from: http://bit.ly/2kYKF51

Create a Virtual Machine with Managed Disks

You can create a Virtual Machine with an implicit Managed Disk for the operating system and explicit Managed Disks for data using a define() … create() method chain. Creation is simplified because managed disks are created implicitly, without your having to specify all the disk details, and you do not have to worry about creating and managing Storage Accounts.

var linuxVM1 = azure.VirtualMachines
  .Define(linuxVM1Name)
  .WithRegion(Region.USEast)
  .WithNewResourceGroup(rgName)
  .WithNewPrimaryNetwork("10.0.0.0/28")
  .WithPrimaryPrivateIpAddressDynamic()
  .WithNewPrimaryPublicIpAddress(linuxVM1Pip)
  .WithPopularLinuxImage(KnownLinuxVirtualMachineImage.UbuntuServer16_04_Lts)
  .WithRootUsername("tirekicker")
  .WithSsh(sshkey)
  .WithNewDataDisk(100)
  .WithSize(VirtualMachineSizeTypes.StandardD3V2)
  .Create();

You can download the full, ready-to-run sample code.

Create a Virtual Machine Scale Set with Managed Disks

You can create a Virtual Machine Scale Set with implicit Managed Disks for operating systems and explicit Managed Disks for data using a define() … create() method chain.

var vmScaleSet = azure.VirtualMachineScaleSets
  .Define(vmScaleSetName)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithSku(VirtualMachineScaleSetSkuTypes.StandardD5v2)
  .WithExistingPrimaryNetworkSubnet(network, "subnet1")
  .WithExistingPrimaryInternetFacingLoadBalancer(publicLoadBalancer)
  .WithoutPrimaryInternalLoadBalancer()
  .WithPopularLinuxImage(KnownLinuxVirtualMachineImage.UbuntuServer16_04_Lts)
  .WithRootUsername("tirekicker")
  .WithSsh(sshkey)
  .WithNewDataDisk(100)
  .WithNewDataDisk(100, 1, CachingTypes.ReadWrite)
  .WithNewDataDisk(100, 2, CachingTypes.ReadOnly)
  .WithCapacity(3)
  .Create();

You can download the full, ready-to-run sample code.

Create an Empty Managed Disk and Attach It to a Virtual Machine

You can create an empty Managed Disk using a define() … create() method chain.

var dataDisk = azure.Disks.Define(diskName)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithData()
  .WithSizeInGB(50)
  .Create();

You can attach the empty Managed Disk to a Virtual Machine using another define() … create() method chain.

var linuxVM2 = azure.VirtualMachines.Define(linuxVM2Name)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithNewPrimaryNetwork("10.0.0.0/28")
  .WithPrimaryPrivateIpAddressDynamic()
  .WithNewPrimaryPublicIpAddress(linuxVM2Pip)
  .WithPopularLinuxImage(KnownLinuxVirtualMachineImage.UbuntuServer16_04_Lts)
  .WithRootUsername("tirekicker")
  .WithSsh(sshkey)
  .WithNewDataDisk(100)
  .WithNewDataDisk(100, 1, CachingTypes.ReadWrite)
  .WithExistingDataDisk(dataDisk)
  .WithSize(VirtualMachineSizeTypes.StandardD3V2)
  .Create();

You can download the full, ready-to-run sample code.

Update a Virtual Machine

You can detach Managed Disks and attach new Managed Disks using an update() … apply() method chain.

linuxVM2.Update()
  .WithoutDataDisk(2)
  .WithNewDataDisk(200)
  .Apply();

You can download the full, ready-to-run sample code.

Create a Virtual Machine From a Specialized VHD

You can create a Virtual Machine from a Specialized VHD using a define() … create() method chain.

var linuxVM4 = azure.VirtualMachines.Define(linuxVmName3)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithNewPrimaryNetwork("10.0.0.0/28")
  .WithPrimaryPrivateIpAddressDynamic()
  .WithoutPrimaryPublicIpAddress()
  .WithSpecializedOsUnmanagedDisk(specializedVhd, OperatingSystemTypes.Linux)
  .WithSize(VirtualMachineSizeTypes.StandardD3V2)
  .Create();

You can download the full, ready-to-run sample code.

Create a Virtual Machine Using a Custom Image

You can create a custom image from a de-allocated and generalized Virtual Machine using a define() … create() method chain.

var virtualMachineCustomImage = azure.VirtualMachineCustomImages
  .Define(customImageName)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .FromVirtualMachine(linuxVM) // from a de-allocated and generalized Virtual Machine
  .Create();

You can create a Virtual Machine from the custom image using another define() … create() method chain.

var linuxVM4 = azure.VirtualMachines.Define(linuxVM4Name)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithNewPrimaryNetwork("10.0.0.0/28")
  .WithPrimaryPrivateIpAddressDynamic()
  .WithoutPrimaryPublicIpAddress()
  .WithLinuxCustomImage(virtualMachineCustomImage.Id)
  .WithRootUsername(userName)
  .WithSsh(sshkey)
  .WithSize(VirtualMachineSizeTypes.StandardD3V2)
  .Create();

You can download the full, ready-to-run sample code.

Create a Virtual Machine Using Specialized Disks From Snapshots

You can create a Managed Disk Snapshot for an operating system disk.

// Create a Snapshot for an operating system disk
var osDisk = azure.Disks.GetById(linuxVM.OsDiskId);
var osSnapshot = azure.Snapshots.Define(managedOSSnapshotName)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithLinuxFromDisk(osDisk)
  .Create();
// Create a Managed Disk from the Snapshot for the operating system disk
var newOSDisk = azure.Disks.Define(managedNewOSDiskName)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithLinuxFromSnapshot(osSnapshot)
  .WithSizeInGB(100)
  .Create();

You can create a Managed Disk Snapshot for a data disk.

// Create a Snapshot for a data disk
var dataSnapshot = azure.Snapshots.Define(managedDataDiskSnapshotName)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithDataFromDisk(dataDisk)
  .WithSku(DiskSkuTypes.StandardLRS)
  .Create();
// Create a Managed Disk from the Snapshot for the data disk
var newDataDisk = azure.Disks.Define(managedNewDataDiskName)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithData()
  .FromSnapshot(dataSnapshot)
  .Create();

You can create a Virtual Machine from these specialized disks using a define() … create() method chain.

var linuxVM5 = azure.VirtualMachines.Define(linuxVm5Name)
  .WithRegion(Region.USEast)
  .WithExistingResourceGroup(rgName)
  .WithNewPrimaryNetwork("10.0.0.0/28")
  .WithPrimaryPrivateIpAddressDynamic()
  .WithoutPrimaryPublicIpAddress()
  .WithSpecializedOsDisk(newOSDisk, OperatingSystemTypes.Linux)
  .WithExistingDataDisk(newDataDisk)
  .WithSize(VirtualMachineSizeTypes.StandardD3V2)
  .Create();

You can download the full, ready-to-run sample code.

Convert a Virtual Machine to Use Managed Disks With a Single Reboot

You can convert a Virtual Machine with unmanaged disks (Storage Account based) to Managed Disks with a single reboot.

var linuxVM6 = azure.VirtualMachines.Define(linuxVM6Name)
  .WithRegion(Region.USEast)
  .WithNewResourceGroup(rgName)
  .WithNewPrimaryNetwork("10.0.0.0/28")
  .WithPrimaryPrivateIpAddressDynamic()
  .WithNewPrimaryPublicIpAddress(linuxVM6Pip)
  .WithPopularLinuxImage(KnownLinuxVirtualMachineImage.UbuntuServer16_04_Lts)
  .WithRootUsername("tirekicker")
  .WithSsh(sshkey)
  .WithUnmanagedDisks() // uses Storage Account
  .WithNewUnmanagedDataDisk(100) // uses Storage Account
  .WithSize(VirtualMachineSizeTypes.StandardD3V2)
  .Create();

linuxVM6.Deallocate();
linuxVM6.ConvertToManaged();

You can download the full, ready-to-run sample code.

Try It

You can run the samples above or go straight to our GitHub repo. Give it a try and let us know what you think (via e-mail or comments below). Over the next few weeks, we will be adding support for more Azure services and applying finishing touches to the API.

Office 365 Secure Score Analyzes Tenant Security

The content below is taken from the original (Office 365 Secure Score Analyzes Tenant Security), to continue reading please visit the site. Remember to respect the Author & Copyright.

Scoring Office 365 Tenant Security

In August 2016, I wrote about the Office 365 Secure Score service, which was then in preview and noted that my tenant had scored 50 out of 243. Now, the service is in production and my score has advanced to 55 (Figure 1). Naturally, I am thrilled.

Figure 1: Viewing the Secure Score for an Office 365 tenant (image credit: Tony Redmond)

The idea behind Secure Score is simple. Microsoft acknowledges that it can be difficult for an administrator to understand how best to secure an Office 365 tenant. There are many places in administrative consoles where settings can be tweaked and much to monitor on an ongoing basis. It therefore makes sense to measure a tenant against a set of predetermined standards and score the tenant based on the actions taken to increase security. At the same time, outstanding actions can be flagged to the administrator, who then decides whether to implement the action and so increase the tenant score.

For example, if Rights Management is configured to allow tenant users to protect confidential content, it’s worth five points. Even better, if users store documents in OneDrive for Business, it’s worth ten points. Although you can argue that OneDrive for Business is a more secure location for documents than a local hard drive or a network file share, assigning ten points to this measurement seems like more of an encouragement to do better.

The points awarded for different aspects are combined into a tenant score. The maximum rating is 450 points. I have some work to do to increase my score from 55. On the upside, the dashboard says that the average score for an Office 365 tenant is 18, so most tenants have even more to do.

To assess a tenant, you log onto http://bit.ly/2lwihFf using a global administrator account (the plan is to include the service in the Security and Compliance Center). Global administrator access is required to measure all the areas that contribute to the security of a tenant. The first time you assess a tenant, you’ll be asked to grant access.

Assessment is not a one-time operation as a check is performed daily to determine an updated score, which is then published to the tenant dashboard.

Suggested Actions

The dashboard includes a useful list of suggested actions (Figure 2) that can be taken to improve the score. I noted some errors in the list, such as the edict to enable mailbox auditing for all users, something that has been in place in my tenant for some time now. The report informed me that auditing was enabled for 343 mailboxes out of 385, an interesting observation considering that the tenant includes just 49 user, room, and discovery mailboxes. Another suggestion is to force password resets every 60 days, a technique that is not best practice when multi-factor authentication and strong passwords are used.

Figure 2: Actions to improve a tenant’s Secure Score (image credit: Tony Redmond)

Some of the actions are noted as “Not Scored”. This indicates that addressing the action won’t influence the tenant score now – but it might in the future when Microsoft incorporates the action into the Secure Score assessment.

The Secure Score dashboard includes a Score Analyzer tab to allow administrators to:

  • Track progress of their score over time.
  • Understand the actions that contribute to the current tenant score.
  • Understand how they can improve their score by completing various actions. For example, a tenant score increases by 30 points if multi-factor authentication is enabled for all users, whereas 15 points are added if the outbound spam policy notifies an administrator when a tenant user is blocked for suspicious activity.
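
The accumulation model behind this is straightforward: each action carries a point value, and the tenant score is the sum over completed actions. The C# sketch below illustrates the idea; the action set and most point values are assumptions for illustration (only the 30-point MFA and 15-point spam-policy figures come from the text above), not Microsoft's actual scoring algorithm.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only: models how a Secure Score-style total is
// accumulated from completed actions. Action values here are assumed.
class SecureScoreSketch
{
    // Sum the points of every action the tenant has completed.
    public static int ComputeScore(IEnumerable<(int Points, bool Done)> actions) =>
        actions.Where(a => a.Done).Sum(a => a.Points);

    static void Main()
    {
        var actions = new[]
        {
            (Points: 30, Done: false), // enable MFA for all users (from the article)
            (Points: 15, Done: true),  // outbound spam notification (from the article)
            (Points: 5,  Done: true),  // assumed: Rights Management configured
            (Points: 10, Done: true),  // assumed: documents in OneDrive for Business
        };

        // Completed actions total 30 of a possible 60 points.
        Console.WriteLine($"Tenant score: {ComputeScore(actions)}/{actions.Sum(a => a.Points)}");
    }
}
```

The real service works the same way at a larger scale: the maximum of 450 points is simply the sum of every measured action's value.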

Analysis tools like Secure Score are constantly reviewed to ensure accuracy and relevance. Some of the errors that I noted in August have been addressed by Microsoft and some new tests have been added. But that’s not the point. The reason why Secure Score exists is to drive awareness of the actions that administrators can take to increase the security of their tenant. You might not agree with Microsoft’s assessment of the importance of the various measurements but that’s just detail. The more important thing is to maintain awareness of security on an ongoing basis.

Pay Attention

You can learn more about Secure Score by watching the Ignite 2016 session on the topic. You can also help Microsoft develop Secure Score by noting any issues that occur in the Microsoft Technology Community. Overall, despite some minor glitches, Secure Score is a very worthwhile service that deserves your support – and your attention, especially if your tenant is one of those that scores below mine.

The post Office 365 Secure Score Analyzes Tenant Security appeared first on Petri.

Give Your Virtual Windows Desktops a Name With This AutoHotkey Script

The content below is taken from the original (Give Your Virtual Windows Desktops a Name With This AutoHotkey Script), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows: The new multiple desktops feature in Windows 10 is as excellent as it is overdue. However, it would be nice if you could do things like give your desktop a name or see which desktop you’re currently on. This AutoHotkey script does just that.

Read more…

Printing in 2.5 Dimensions: Casio Printer Adds Realistic Textures to Paper

The content below is taken from the original (Printing in 2.5 Dimensions: Casio Printer Adds Realistic Textures to Paper), to continue reading please visit the site. Remember to respect the Author & Copyright.


Once 3D printing hits the consumer market on a widespread scale, it has the potential to totally change how we view manufacturing. Many technophiles have visions of creating their own jewelry, housewares, and even mechanical parts with a home unit and plastic or metal materials. But have you ever thought about printing forms that are somewhere between two and three dimensions? Read more…

Microsoft celebrates 20 years of Visual Studio

The content below is taken from the original (Microsoft celebrates 20 years of Visual Studio), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft announced today it is celebrating 20 years of Visual Studio with the introduction of Visual Studio 2017, the latest iteration of its developer tool suite, on March 7.

A lot has changed in those 20 years, as illustrated by a picture Microsoft posted of the contents of Visual Studio 97 (below), the first iteration of the IDE. Back then it was pretty much just a bunch of languages in one box with no real integration. 

visual studio 97 Microsoft

And most of the languages supported back then are gone or diminished—such as Visual J++, a Java compiler that caused all kinds of legal problems with Sun Microsystems, and Visual C++, which has been deemphasized in favor of C#. Also, Visual FoxPro is pretty much dead, and the support apps, including SourceSafe and InterDev, have been replaced with newer apps or functions.

Things changed in 2002 with the launch of Visual Studio .NET. C#—designed to be an easier-to-use object-oriented language than C++—was first introduced, and a radically changed Visual Basic .NET also made its debut. Apps were compiled to run on the .NET Framework instead of to native machine code.

Microsoft would dump the “.NET” suffix from the name with the release of Visual Studio 2005, but the intention remained. Microsoft settled on C# as the main programming language, with Visual Basic designated the beginner language. 

Over time, Microsoft would add more internet-oriented language support through add-ons, such as Python, Ruby, Node.js, and M. It also added a new language, F#, which doesn’t have a large following, but the users it does have are very dedicated.

Visual Studio 2017 launch event

Microsoft will hold a two-day event beginning on March 7 to launch Visual Studio 2017, which will be livestreamed. March 8 will include “a full day of live training with multiple topics to choose from.”

The launch event will feature top members of the Visual Studio team showing off the latest developments from Visual Studio, .NET, Xamarin, Azure and more. Developers can engage in demo sessions focusing on key improvements within the product.

Microsoft is also asking developers to share stories of their use of Visual Studio by recording a video on their smartphones, including details such as how long they have been using Visual Studio, the coolest software they’ve built, what they like about Visual Studio and, showing Microsoft hasn’t lost its sense of corny behavior, birthday wishes for Visual Studio. At least they don’t call it “bae.”

Join the Network World communities on Facebook and LinkedIn to comment on topics that are top of mind.

Azure Backup and Azure Site Recovery now available in UK

The content below is taken from the original (Azure Backup and Azure Site Recovery now available in UK), to continue reading please visit the site. Remember to respect the Author & Copyright.

We’re pleased to announce that Azure Backup and Azure Site Recovery are now available in the UK.

Azure Backup – The Azure-based service you can use to back up (or protect) and restore your data in the Microsoft cloud. Azure Backup enables Azure IaaS VM backup and can replace your existing on-premises or off-site backup solution with a cloud-based solution that is reliable, secure, and cost-competitive. Learn more about Azure Backup.

Azure Site Recovery – Contributes to your BCDR strategy by orchestrating replication of on-premises virtual machines and physical servers. You replicate servers and VMs from your primary on-premises datacenter to the cloud (Azure), or to a secondary datacenter. Learn more about Azure Site Recovery.

We are excited about these new Azure services, and invite customers using the UK regions to try them today!

Several police forces look at introducing ‘close pass’ scheme after success in West Midlands

The content below is taken from the original (Several police forces look at introducing ‘close pass’ scheme after success in West Midlands), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hampshire Police are to follow West Midlands Police’s ‘close pass’ scheme which was described as a success in improving the safety of cyclists.

Instant File Recovery from Cloud using Azure Backup

The content below is taken from the original (Instant File Recovery from Cloud using Azure Backup), to continue reading please visit the site. Remember to respect the Author & Copyright.

Since its inception, Azure Backup has empowered enterprises to embark on the digital transformation to cloud by providing a cloud-first approach to backup enterprise data both on-premises and in the cloud. Today, we are excited to go beyond providing Backup-as-a-Service (BaaS) and introduce Restore-as-a-Service (RaaS) in the form of Azure Backup instant restore!

With Instant Restore, you can restore files and folders instantly from cloud based recovery points without provisioning any additional infrastructure, and at no additional cost. Instant Restore provides a writeable snapshot of a recovery point that you can quickly mount as one or more iSCSI based recovery volumes. Once the snapshot is mounted, you can browse through it and recover items by simply copying them from the recovery volumes to a destination of your choice.

Value proposition

  • One restore mechanism for all backup sources – The Restore-as-a-Service model of Azure Backup unifies the approach for recovering individual files and folders backed up from sources in the cloud or on-premises. You can use instant restore, whether you are backing up on-premises data to cloud using Azure Backup agent or protecting Azure VMs using Azure VM backup.
  • Instant recovery of files – Instantly recover files from the cloud backups of Azure VMs or on-premises file-servers. Whether it’s a case of accidental file deletion or simply validating the backup, instant restore drastically reduces the time taken to recover your first file.
  • Open and review files in the recovery volumes before restoring them – Our Restore-as-a-Service approach allows you to open application files, such as SQL or Oracle data files, directly from cloud recovery point snapshots as if they were present locally, without having to restore them, and attach them to live application instances.
  • Recover any combination of files to any target – Since Azure Backup provides the entire snapshot of the recovery point and relies on copying items for recovery, you can restore multiple files from multiple folders to a local server or even to a network share of your choice.

Availability

Azure Backup Instant Recovery of files is available in preview for customers of Azure Backup agent and Azure VM backup (Windows VMs).

Learn how to instantly recover files using Azure Backup Agent

Watch the video below to start using Instant Restore for recovering files backed up with Azure Backup Agent for files and folders.

The supported regions for this preview are available here and will be updated as new regions are included in the preview.

Learn how to instantly recover files from Azure Virtual Machine Backups

Watch the video below to instantly recover files from an Azure VM (Windows) backup.

Visit this document to learn more about how to instantly recover files from Azure VM (Windows) backups.

The instant file restore capability will be available soon for users who are protecting their Linux VMs using Azure VM backup. If you are interested in being an early adopter and providing valuable feedback, please let us know at [email protected]. Watch the video below to learn more.

Related links and additional content

HVAC-focused I/O device server runs OpenWrt

The content below is taken from the original (HVAC-focused I/O device server runs OpenWrt), to continue reading please visit the site. Remember to respect the Author & Copyright.

Barix’s OpenWrt-driven “Barionet 1000” is its first Barionet programmable I/O controller to run Linux, and the first to offer WiFi and USB connectivity. Barix is rebooting its line of HVAC-oriented Barionet universal programmable I/O devices with the Barionet 1000. This is the first model to run on Linux, the first to offer WiFi, and the […]

Europeans will get ‘portable’ streaming libraries next year

The content below is taken from the original (Europeans will get ‘portable’ streaming libraries next year), to continue reading please visit the site. Remember to respect the Author & Copyright.

The European Union is supposed to be a big, borderless family of member states, but this concept is far from true in the online world. For several years, EU regulators have been working towards a "Digital Single Market" with the aim of breaking down some of the regional barriers. One success story is free mobile roaming across the EU, which comes into force this summer, and now various European bodies have agreed upon new rules that’ll put an end to the geo-blocking of various online services like Netflix.

You see, Netflix offers different catalogs of films and TV shows in different EU countries. If you’re a British Netflix subscriber in Paris for the weekend, for example, you’ll still be able to log in to your account, but you’ll see the French content library. Similar geo-blocking tactics are employed on various music and video streaming services, as well as on video game and e-book stores. The EU would ideally like everything to be available to everyone across the region — just ask Paramount — but unravelling the complex web of country-specific licensing and copyright agreements is going to take some time.

Thus, the EU has come up with the quick fix of "portability," allowing Europeans to access services in other countries as if they were at home. How this will work in practice has now been agreed upon, and it’s all pretty simple. Paid services like Netflix already require that you create an account and, of course, it knows where you call home. When the new rules go into effect, your country’s film and TV catalog will effectively become linked to your account, meaning you can binge anything you normally could even though you’re accessing it from a German IP address. This will be mandatory for paid services, whereas free ones like BBC iPlayer will be able to decide for themselves whether to offer portability or not.

The portability regulations still need a final stamp of approval from the European Parliament and Council of the EU, but given how simple and common sense they are, it’s likely this will be mere formality. After the go-ahead is given, providers will have nine months to make their services portable, meaning everything should be working as the EU intends by the beginning of next year.

Via: Digital TV Europe

Source: European Commission

ZeroStack uses machine learning to create self-driving clouds

The content below is taken from the original (ZeroStack uses machine learning to create self-driving clouds), to continue reading please visit the site. Remember to respect the Author & Copyright.

Cloud mania continues to grow as businesses move more and more workloads to platforms such as Microsoft Azure and Amazon Web Services (AWS). But while public cloud hype is stealing all the headlines, private data centers are quietly plodding along and growing, as well. There is so much data growth today that businesses have to invest in both public clouds and private data centers, hence the high adoption rate of “hybrid” environments. 

The landscape for public cloud services is set—Azure and Amazon have won that battle—but private data centers are in a state of change. The legacy model of buying best-of-breed components and cobbling the technology together to build a private cloud is a long, complex process that just can’t keep up with the needs of a digital organization. Turnkey private clouds are becoming increasingly popular because they give businesses an Amazon-like experience but in a private cloud model, so the data and infrastructure stay in the company data center.

While the platforms used to build self-service private clouds continue to evolve, there is still a significant amount of overhead for the IT department to deploy, manage and optimize the infrastructure. I’ve talked to a number of businesses that have deployed a private cloud “stack” that have told me it can take up to six months of tweaking and tuning to get the deployment right. But then changes have to be made to accommodate new workloads or increased traffic, putting IT behind the eight ball once again. 

ZeroStack’s private cloud managed by AI 

This week, ZeroStack announced the first-ever private cloud stack managed by an artificial intelligence (AI) “learning engine,” delivering a true self-driving environment. ZeroStack’s solution involves on-premises hardware and software that is managed by a cloud-based, self-service portal. This “cloud-managed” platform has enabled the company to collect, monitor, analyze and model over 1 million objects over the past 18 months. ZeroStack has taken this experience and all that data and used it to build its own AI known as Z-Brain. The smart folks at ZeroStack have leveraged that data to build algorithms that turn predictive models into product features, helping to improve both short- and long-term decision making. As someone who has his own Z-Brain, I can appreciate how powerful this can be!

The solution will continue to collect telemetry data and leverage machine learning to provide new insights that can help customers have a better-running private cloud. Changes can then be fully automated or recommended to organizations that aren’t comfortable with an AI making decisions about their data centers. 

With all the hype out there about AI, it is refreshing to see ZeroStack talk about how it develops its algorithms and is essentially building best practices into its software. 

What ZeroStack’s Z-Brain does 

The Z-Brain provides self-driving capabilities in the following areas: 

  • Capacity planning. The AI does three types of capacity planning: infrastructure utilization, project-based capacity planning and an infrastructure advisor to help with future needs.
  • Zero touch upgrades. No intervention from the IT organization is needed because the Z-Brain handles the upgrades of all the software modules in the ZeroStack solution. If you have tried to manage a complex software stack, you know how much time this can save.
  • Efficiency optimization. Virtual machines can be “auto-sized” to specific workloads, preventing organizations from using more resources than necessary. 
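To make the auto-sizing idea concrete, here is a toy heuristic in that spirit (purely illustrative; ZeroStack’s actual algorithms are not public, and every name and number here is an assumption): size a VM to its observed peak usage plus a safety headroom.

```python
import math

def autosize_vm(observed_peak_cpu, observed_peak_mem_gb,
                headroom=0.25, min_vcpus=1, min_mem_gb=1):
    """Suggest a VM size from observed peak usage plus a safety headroom.

    A toy heuristic in the spirit of "auto-sizing", not ZeroStack's actual
    algorithm. Peaks are measured in vCPUs and GB of RAM.
    """
    vcpus = max(min_vcpus, math.ceil(observed_peak_cpu * (1 + headroom)))
    mem_gb = max(min_mem_gb, math.ceil(observed_peak_mem_gb * (1 + headroom)))
    return {"vcpus": vcpus, "mem_gb": mem_gb}

# A workload peaking at 2.4 vCPUs and 5.1 GB is right-sized to 3 vCPUs / 7 GB,
# rather than whatever oversized flavor it was originally given.
print(autosize_vm(2.4, 5.1))
```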

The above features are available to customers in the latest release, but as they say in infomercials, “But wait, there’s more.” ZeroStack 3.0 will use the AI capabilities to build cloud optimization capabilities that can determine the best cloud to run specific workloads based on cost and performance. Version 4.0 will include automated performance troubleshooting where the root cause of application performance issues can be identified quickly and remediated, maximizing cloud uptime. 

ZeroStack’s mission has been to lower the burden on the infrastructure and optimization teams with respect to running a private cloud. The AI can offload many of the day-to-day operational tasks that weigh down IT today. As the complexity of data centers continues to increase and the business demands become more challenging to meet, ZeroStack’s AI-driven approach will become a key requirement of running a private cloud.


FreeDOS 1.2: Why DOS is amazing in 2017

The content below is taken from the original (FreeDOS 1.2: Why DOS is amazing in 2017), to continue reading please visit the site. Remember to respect the Author & Copyright.

Right now, as I sit here typing these words, it is February of the year 2017.

The words of which I speak? They are entirely about DOS. Yes—that DOS. The one that powered so many computers throughout the 1980s and a chunk of the 1990s. The one with a big, low-resolution “C:\>” prompt. The one you installed from floppy disks, often onto a hard drive measured in megabytes.

DOS. 

You see, back on December 25, something miraculous happened. Something that changed the world forever: FreeDOS 1.2 was released.

FreeDOS, for those who have (perhaps rightly so) put all memories of MS-DOS (and other similar DOS-ish systems of days gone by) behind them, is completely free software: a GPL-licensed, MS-DOS-compatible operating system.

And it is still being actively developed. By a team. With new releases. In 2017. 

For comparison, the last stand-alone release of MS-DOS (version 6.22) was released in 1994—nearly a quarter of a century ago. (Hot-damn, that makes me feel old.) 

Since then, we have seen the rise of Windows 95, 98, ME, 2000, XP, Vista, 7, 8 and 10 and the rise of Linux as a server, mobile and (to a somewhat lesser extent) desktop powerhouse. We’ve seen the creation (and death) of BeOS, as well as the near-death and resurrection of Apple. The computing world has changed a lot in that time. And yet here we are. The year two-thousand seventeen. With a brand-new version of DOS—freaking DOS. 

Interestingly, the founder of the FreeDOS project, Jim Hall, has been working on his open-source DOS system for longer than Microsoft was actively developing MS-DOS. If you think about it, that means Jim is the king of DOS. To my knowledge, there is no man, woman, child or orangutan who has dedicated more of their life to developing and building DOS than Jim. 

If that doesn’t earn the man a spot in the Guinness Book of World Records, nothing will. 

What does DOS in 2017 look like?

But what does a new release of an MS-DOS-compatible system look like in 2017? Does it add new features, new technologies? It turns out it looks a lot like the DOS you remember—with a few niceties added here and there, such as improved installers and simple package manager-style tools. 

One of the core features of FreeDOS is that it can run any software made for MS-DOS. Adding new features that might break existing legacy applications—often from the 1980s—would defeat the purpose of the system entirely. 

All of this begs the question: If the latest and greatest version of FreeDOS runs software designed for computers of the 1980s and 1990s, what’s the point? Why would I care? 

It’s a fair question. Most computer users nowadays, quite simply, won’t care—not in the slightest. 

That’s especially true for those who grew up in the post-DOS era with graphical user interfaces (Windows, GNOME, KDE, MacOS, etc.) already loaded on their first computers. Most of you who fit that description likely didn’t spend much time running software designed specifically for DOS. You don’t have the old files that need opening in them. And, perhaps most important, you don’t have the nostalgia for the old games and software from that era. 

But for people like me—kids who grew up watching Matthew Broderick change his school grades on an old TRS-80 in WarGames, kids who played the original Civilization and the first graphical adventure games Space Quest and King’s Quest, kids who wrote their first lines of code in QuickBASIC and GW-BASIC—FreeDOS is an absolute dream.

How I use FreeDOS 

Let me lay out for you how I use FreeDOS. And, make no mistake, I use it almost every single day. 

I tend to run FreeDOS in an emulator (VirtualBox, etc.). Luckily there are x86/PC emulators available for just about every platform, so this isn’t much of a problem. I can even run it on Android tablets when I’m on the go. 

The system is incredibly lightweight (because it’s DOS). That means if I sync my DOS files (using something like Nextcloud, Dropbox, etc.) between my computers and devices, my FreeDOS system is with me and up to date everywhere I go—on every computing device I own.

But what do I actually, you know, do within FreeDOS? 

Mostly I play games—classic games, great games, some of the greatest video games ever created. The likes of Master of Orion 2, Star Control 2, Civilization, SimCity, Ultima and so many others.

It’s not just about classic gaming, though. FreeDOS also allows me to get some real work done in a no-distractions, no-nonsense sort of way. Much of my writing (not all, but a significant portion) has been done within FreeDOS, using some of the classic DOS word processors (WordPerfect, Microsoft Word, WordStar and others). These older writing tools lack just about every bell and whistle we come to expect from a modern office application, to be sure. And that’s what I love about them. I can sit down to write and just immerse myself in the words on the screen. 

(Side note: While FreeDOS is GPL licensed, much of the software I run within it is neither free software nor open source. These are, in large part, relics of computing’s past—closed-source applications that have long since been abandoned by their original authors. Normally I keep closed-source software away from my computers. But in the case of 25-plus-year-old bits of long-lost code, I make an exception.) 

Heck, I even run a telnet-accessible DOS-based BBS for the sole purpose of enjoying old-school, text-based, multiplayer games. I wouldn’t be able to do that quite so easily without FreeDOS. 

Highly portable, distraction-free working. A completely future-proof environment. (As long as the ability to emulate an x86 processor exists, I’ll be able to keep and run the same software with the same data files.) And when I don’t feel like working, I can enjoy some of the best games mankind has ever created. 

For that, I’d like to take this moment to thank Jim and the rest of the FreeDOS team. Because of your work, over so many years, the world is that much more fun for people like me.

And last I checked, FreeDOS 1.2 was downloaded more than 100,000 times in the first month alone. So, apparently there are a lot of people like me.


What’s keeping enterprises from using G Suite?

The content below is taken from the original (What’s keeping enterprises from using G Suite?), to continue reading please visit the site. Remember to respect the Author & Copyright.

While Google has spent the past year trying to woo enterprises to its G Suite productivity apps, it’s still the underdog compared to Microsoft Office, at least among large businesses. So what’s keeping it from broader appeal?

One of the biggest hurdles for Google achieving broader enterprise adoption is just the fact that the company’s products aren’t identical to Word, Excel, PowerPoint, Outlook, and other Microsoft Office apps, Gartner Senior Research Analyst Joe Mariano said.

“Enterprises have been ingrained in the Microsoft stack for essentially the beginning of time, it feels like,” Mariano said. “[Enterprises] have problems shifting away from that, because they have a lot of investments, either in customizations or how they’re using the tools.”

Office has been the dominant productivity suite in the enterprise for decades, with Word, Excel, PowerPoint, and Outlook making up key parts of businesses’ everyday workflows. Google has a tough road ahead of it supplanting those applications.

It’s not for lack of trying: Under the leadership of Diane Greene, the senior vice president of Google Cloud, the company’s products have undergone a number of changes and fresh launches aimed at appealing to large businesses.

Those changes included a new Google Sites that has been built to compete more closely with Microsoft’s SharePoint document management and storage system, and a Springboard service that’s supposed to help employees more easily find files they need. Last year, Greene revealed that the company created a consulting group to help understand the needs of enterprises using its products.

The company recently announced that 3 million organizations are paying for G Suite, up from 2 million at the end of 2015. It’s solid progress for Google, but much of that expansion has come from small and medium-size businesses, not massive customers.

Peter Yared, the CTO of Sapho, said that G Suite adoption has been largely nonexistent among the companies that his company serves.

“Look, we never run into Slack, we never run into G Suite,” he said. “We never run into these things. Those are for the small part of the [small and medium business] segment.”

Sapho sells a service that helps companies connect their disparate — and often outdated — systems of record with one another to help speed up their employees’ work. Its clients are exactly the sort of large enterprises that Google is trying to gain favor with.

There are some enterprises that have taken the plunge, however. Telus International, a subsidiary of the Canadian telecommunications company that provides phone support, currently has 25,000 of its employees using G Suite. The company migrated over to Google’s productivity suite four years ago and isn’t looking back, according to Michael Ringman, its chief information officer.

At the time of the migration, Telus International had several different setups for productivity and collaboration. Working across the company’s offices (some of which exist as a result of acquisitions) while using on-premises versions of Microsoft Office was cumbersome. While it took work to manage the change from Microsoft’s suite to Google’s, Ringman said that the outcome was positive.

“The reality we’ve seen, we’ve seen better collaboration, better communication and frankly had better [employee] engagement,” Ringman said. “We actually measure our engagement scores … and have seen an increase in our engagement scores somewhat directly as well as indirectly due to our rollout of the Google G Suite.”

Businesses build entire workflows around Office products, and will often use macros to automate some of their work, said Patrick Moorhead, the founder and principal analyst at Moor Insights and Strategy. That entrenched use of specific features can also hinder adoption.

“So, for instance, a company will go in and do macros and run their business on a spreadsheet. And that is a factor. I can’t just dial up G Suite and have those macros work,” Moorhead said. “G Suite was born in the cloud, Office 365 was born on the desktop. So if you have things that need to run on the desktop really well like macros, that instantly takes you out of the G Suite camp.”

That’s a problem Google’s engineers are working hard to tackle, according to Prabhakar Raghavan, a vice president of engineering at the company. However, he said it’s a challenge because the macros have been built for client software that runs on a user’s computer, rather than a web app.

“Our intent is not to move off 100 percent cloud into some sort of hybrid environment, our direction is to remain in the cloud,” he said. “And so the challenge my engineers are hard at work solving is how to provision, entirely in the cloud, the things that people can get from a hybrid environment.”

People at Telus International who still need to rely on macros or other dedicated functionality in Microsoft Office can use the on-premises software that the company holds licenses for. When employees using Chromebooks need a traditional desktop computer environment, the company uses virtual desktops to compensate.

Still, switching from Office to G Suite involves much more than just getting users to adopt a new user interface. That could prove problematic to businesses, according to David Lavenda, the co-founder of Harmon.ie. His company makes software that helps connect Microsoft Outlook and SharePoint, and ease companies’ migrations from Lotus Notes to Office 365. A number of its clients looked at G Suite and ultimately opted to go with Office.

There are many components in the institutional use of a particular office suite, Lavenda said. “There’s the support structure behind it, there’s expertise in these organizations knowing how to support these products and knowing where to turn to, and how to fix problems. It’s much more than just getting people to edit documents with a different user interface.”

To increase user familiarity with Google’s tools, the company acquired Synergyse last year and has been using its e-learning courses to help people get acquainted with Docs, Sheets, and Slides.

Furthermore, the company will actually send its product managers and engineers to help with deployments of G Suite at large customers. Raghavan said a team will arrive at a customer’s work site on the first day of a major deployment and just walk around to help answer questions.

“These are rank-and-file engineers who usually write code,” Raghavan said. “And it’s great both ways because the customer feels well taken care of, through the transition. And for an engineer or a product manager, it’s a great learning [opportunity], because you’re like ‘Oh my God, I never realized that pixel there was confusing.'”

But while Google is continuing to gain traction, Microsoft remains the dominant player in the productivity app market. The Redmond-based titan reported last month it has 85 million monthly active commercial users of Office 365. At least in the near term, G Suite and Office 365 are fighting largely for the chance to pick up customers who are migrating off on-premises versions of Office. Gartner’s Mariano pointed out that some enterprises are actually running in hybrid environments where some people are using G Suite and others are using Office.

“It’s getting to the point now where enterprises are almost letting them duke it out in real time in the real world,” Mariano said. “Which is an interesting thing. We see that a lot in higher education, where the administrative side might be using Office 365, and the student body might be using G Suite.”

While it still faces challenges, Google has improved its enterprise compatibility, through its continued enhancements for security and compliance capabilities, as well as deploying features in ways that don’t disrupt existing workflows, Moorhead said.

“Every year, they’re getting more friendly to the enterprise with their products,” he said.

Ringman said Google still has work to do in order to make it possible for Telus International to run its whole business on G Suite. In particular, he called out data sovereignty as a key issue for moving some remaining information into the cloud. Regulations require some Canadian data be stored in-country, and G Suite doesn’t yet allow users to store data there. 

Google isn’t slowing down its introduction of enterprise features. The company kicked 2017 off with an announcement of a set of feature upgrades aimed squarely at solving enterprise security concerns. Google expanded its data-loss prevention features to Drive, added S/MIME to Gmail and allowed administrators to restrict logins to only people with hardware security keys.

Google is planning additional enterprise-focused features for G Suite and the other services covered under its Cloud division. The team is hosting a three-day conference in March, which is expected to feature a suite of announcements aimed at driving its enterprise business forward.


Why You Need Less Noise for Work and Your Health

The content below is taken from the original (Why You Need Less Noise for Work and Your Health), to continue reading please visit the site. Remember to respect the Author & Copyright.

Shhh. Hear that? No? That’s surprising. Odds are, you can hear something right now: A siren, the hum of a fan, the blur of background conversations, the ticking of a watch. It’s seldom our worlds are fully silent–so seldom that complete silence feels shocking.

Read more…

Don’t be a lemming: Cloud-first doesn’t mean cloud-only

The content below is taken from the original (Don’t be a lemming: Cloud-first doesn’t mean cloud-only), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Cloud-first” began as a government concept, one that is still promulgated today. Government CIOs are mandated to look at cloud computing first and foremost when they plan an expansion of their IT footprint. Some have been successful, but for most, it’s been slow going.

The cloud-first strategy has now found its way into Global 5000 companies. Although cost savings is the battle cry, most enterprises try to align their existing and future workloads with mostly public cloud platforms. They are, in essence, taking an approach where the cloud should always be considered. Most of the time, they push away traditional approaches.

But when they hear executives talking cloud-first, most IT leaders will consider it to be a mandate, not a consideration. Indeed, when I hear business execs say “cloud-first,” it typically means “If you don’t go cloud, you have some explaining to do.” That dogmatic interpretation of “cloud-first” as “cloud-only” essentially forces IT to opt for cloud-based platforms, whether or not they are a fit.

Being dogmatically cloud-first is as dangerous as not considering the cloud.

Fit needs to be your priority. I do a great deal of analysis to determine whether my clients’ workloads are a fit for the cloud. Although 65 percent of them, on average, may be a fit, the other 35 percent are not. If you take a dogmatic cloud-first approach, that means you’re going to move about 35 percent of your applications to the wrong platform.

Fit issues come down to a few categories, including security, compliance, governance, performance, and the ability to use platforms or services in the public cloud. Many in IT assume that these services or platforms already exist, but that’s not always true. They end up finding out the hard way that, for example, core networking and database services are missing for the applications they are migrating.

I believe most IT shops follow the correct approach to “cloud-first”: To start, see if the cloud is the best fit and use it; if not, then use another approach. But I fear that if the concept of cloud-first becomes dogma, organizations will be blind to cloud fit, which is the real measure of how to proceed.

If your company follows the cloud-first dogma blindly, count on wasting a lot of time and money.

Top 5 Trends in Azure Hybrid Cloud Management

The content below is taken from the original (Top 5 Trends in Azure Hybrid Cloud Management), to continue reading please visit the site. Remember to respect the Author & Copyright.

As you begin to implement your organization’s 2017 tech strategy, here are the top five most interesting things that are going on in the world of Hybrid Cloud with Microsoft Azure that you should consider.

Backup

Almost every business struggles with backup, and the funny (or unfunny) thing about these struggles is that they are the same no matter what size the organization is.

  • Complexity: Backup should be simple. Every minute we spend looking at backup is time wasted. We should take backup for granted and only spend time with it when we need to change services or restore things. Unfortunately, backup products, in a race to be the best, have bolted on unnecessary bells and whistles, and legacy solutions (the ones that still support tape) often never worked well in the first place.
  • Cost: Every business has a lot of data that must be protected, and that means lots of storage to keep the protected copies on. We can be clever with that storage, but it’s another capital expenditure that detracts from business flexibility. Additionally, we have to pay upfront for backup software, which is often very expensive.

There are some newer players in the backup market that don’t fall foul of the above, but, like with email, many businesses view backup as a service that should be taken for granted as a utility. Many of us have moved email to the cloud, so why don’t we move backup, too? Backup is one of the easiest services to move to the cloud, and Microsoft offers a rapidly evolving and very cost-effective solution in Azure Backup. Many small and medium enterprises are adopting Azure Backup to protect single file servers or to back up Hyper-V and vSphere. Depending on the solution, you can:

  • Seed the first backup offline by shipping disks.
  • Perform backups directly to the cloud, using very cheap storage, negating the need for local storage.
  • Keep backups for a short time on-premises, and keep long-term backups in the cloud.
  • Encrypt all backup data and keep it for up to 99 years in the cloud.
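The split between short-term on-premises copies and long-term cloud retention can be pictured as a simple two-tier policy. A toy sketch (illustrative only; the thresholds and function here are assumptions, not Azure Backup's actual retention engine):

```python
from datetime import date, timedelta

def backup_tier(backup_date, today, local_days=14, cloud_years=99):
    """Decide where a backup of a given age lives under a two-tier policy.

    An illustrative sketch of the on-premises/cloud split described above,
    not Azure Backup's actual retention model.
    """
    age = today - backup_date
    if age <= timedelta(days=local_days):
        return "on-premises"   # recent restores stay fast and local
    if age <= timedelta(days=cloud_years * 365):
        return "cloud"         # long-term copies go to cheap cloud storage
    return "expired"           # past the retention window entirely

today = date(2017, 2, 1)
print(backup_tier(date(2017, 1, 25), today))  # on-premises
print(backup_tier(date(2015, 6, 1), today))   # cloud
```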

Disaster Recovery

Disaster recovery (DR) has a different objective from backup; with DR, we need to return a business to an operational state with minimal loss of data, in a short time, with as much automation as possible, and in another location. These goals have made DR very expensive and very complex, which is why few organizations have DR solutions; of those that do, many find their solutions unreliable and costlier than they should be.

Azure Site Recovery (ASR), a DR-site-in-the-cloud, offers IT business continuity services for customers using physical, VMware, or Hyper-V servers at a very cost effective rate. Automation is central to all-things-Azure, and ASR enables you to completely automate the failover of services to Azure once you decide to invoke your business continuity plan. Instead of being a “snowflake solution,” ASR is a cloud service that is used by many businesses and tested on a daily basis. This means that you can have faith in the service, and you can build on that faith by performing test failovers of your services to an isolated network within Azure without impacting production systems.
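The automated-failover idea can be sketched as an ordered recovery plan, with test failovers kept off the production network. A toy model (the step names and function are hypothetical; real ASR recovery plans are configured in Azure, not written like this):

```python
def run_recovery_plan(steps, test_failover=False):
    """Bring services up in order, returning what was (or would be) done.

    A toy model of the ASR concept: an ordered plan, with test failovers
    isolated from production. VM names here are hypothetical.
    """
    # Test failovers run on an isolated network, so production is untouched.
    network = "isolated-test-network" if test_failover else "production-network"
    return [f"start {vm} on {network}" for vm in steps]

# Boot order matters: database first, then the tiers that depend on it.
plan = ["sql-primary", "app-tier", "web-frontend"]
print(run_recovery_plan(plan, test_failover=True))
```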

Operations Management Suite

It’s true that Azure Backup and Azure Site Recovery are a part of the Operations Management Suite (OMS), but OMS is much more than backup and DR; most of what is included is automation and management.

Enterprise management solutions have been around for decades; in that time, there has been one constant — we seem to spend more time installing, fixing, and upgrading the management system than we do on the systems we are meant to be managing.

OMS offers us a management service from the cloud; we simply choose which management features we want and we turn them on. The focus is on data. A searchable database can be queried, alerted on, and visualized using dashboards and Power BI. OMS is evolving at a fast rate, adding features to manage services no matter where they are (Windows or Linux, in Azure, on-premises, or in AWS). Recent examples of new features include on-premises network and vSphere monitoring!


Azure Stack

Hybrid Cloud is more than just a network connection, and Microsoft is in a unique position to act on that. Microsoft started down this path when Azure was powered by Windows Server 2012 Hyper-V and improvements in Azure started to trickle down. Windows Server 2016 includes not only security and features from Azure but also pieces of the Azure fabric such as storage and networking.

Later in 2017, three OEMs (HPE, Lenovo, and Dell) will start to sell pre-tested hardware configurations with a solution called Azure Stack — this is a version of Azure that you can run in your own data center, using the same Azure Resource Manager API as Azure; offering some (and this will grow) of the features of Azure such as storage, compute, and networking; and giving developers and operators a common platform. No matter where you choose to deploy services, the underlying architecture will be the same; you can use the same PowerShell cmdlets, the same JSON templates, and even use the same VM OS images from the Azure Marketplace.
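Because both clouds speak the same Azure Resource Manager API, a single JSON template can target either environment. For reference, the minimal skeleton of such a template looks like this (just the required top-level fields; a real template would populate `resources` and `parameters`):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```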

Remote Desktop

The desktop is where we run applications, such as Microsoft Office or File Explorer. Usually, we run these applications on our PC or laptop, but many organizations move some or all applications to a remote location for the following reasons:

  • Centralized deployment
  • Better information security
  • Speed up migrations, such as a move to Office 365

Microsoft and Citrix have announced a very close relationship in which Citrix Cloud will:

  • Be sold via the Azure Marketplace
  • Deploy Citrix VDI and XenApp services as virtual machines in a customer’s Azure subscription

This partnership will benefit both corporations, and hopefully customers, too, if the licensing works smoothly and is cost-effective. It is sure to generate many headlines once the services go live.

The post Top 5 Trends in Azure Hybrid Cloud Management appeared first on Petri.