Disaster Recovery using Amazon Web Services (AWS)


“You can’t predict a disaster, but you can be prepared for one!” Disaster recovery is one of the biggest challenges for infrastructure. Amazon Web Services allows us to easily tackle this challenge and ensure business continuity. In this post, we’ll take a look at what disaster recovery means, compare traditional disaster recovery versus that in the cloud, and explore essential AWS services for your disaster recovery plan.

What is Disaster Recovery?

There are several disaster scenarios that can impact your infrastructure. These include natural disasters such as an earthquake or fire, as well as those caused by human error such as unauthorized access to data, or malicious attacks.

“Any event that has a negative impact on a company’s business continuity or finances could be termed a disaster.”

In any case, it is crucial to have a tested disaster recovery plan ready. A disaster recovery plan will ensure that our application stays online no matter the circumstances. Ideally, it ensures that users will experience zero, or at worst, minimal issues while using your application.

For on-premise data centers, a disaster recovery plan is expensive to implement and maintain. Often, such plans are insufficiently tested or poorly documented and, as a result, inadequate for protecting resources. More often than not, companies with a good disaster recovery plan on paper aren't capable of executing it because it was never tested in a real environment. As a result, users cannot access the application and the company suffers significant losses.

Let’s take a closer look at some of the important terminology associated with disaster recovery:

Business Continuity. All of our applications require Business Continuity. Business Continuity ensures that an organization’s critical business functions continue to operate or recover quickly despite serious incidents.

Disaster Recovery. Disaster Recovery (DR) enables recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster.

RPO and RTO. Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are the two most important parameters of a good DR plan for our workflow. Recovery Point Objective (RPO) is the maximum targeted period in which data might be lost from an IT service due to a major incident. Recovery Time Objective (RTO) is the targeted period of time within which a business process must be restored after a disaster or disruption to service.

Recovery Point Objective (RPO) and Recovery Time Objective (RTO)

Traditional Disaster Recovery plan (on-premise)

A traditional on-premise Disaster Recovery plan often includes a fully duplicated infrastructure that is physically separate from the production infrastructure. This requires an additional financial investment to cover hardware expenses as well as maintenance and testing. When it comes to on-premise data centers, securing physical access to the infrastructure is also often overlooked.

These are the typical requirements for an on-premise disaster recovery infrastructure:

  • Facilities to house the infrastructure, including power and cooling.
  • Security to ensure the physical protection of assets.
  • Suitable capacity to scale the environment.
  • Support for repairing, replacing, and refreshing the infrastructure.
  • Contractual agreements with an internet service provider (ISP) to provide internet connectivity that can sustain bandwidth utilization for the environment under a full load.
  • Network infrastructure such as firewalls, routers, switches, and load balancers.
  • Enough server capacity to run all mission-critical services. This includes storage appliances for the supporting data, and servers to run applications and backend services such as user authentication, Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), monitoring, and alerting.

Obviously, this kind of disaster recovery plan requires large investments in building disaster recovery sites or data centers (CAPEX). In addition, storage, backup, archival and retrieval tools, and processes (OPEX) are also expensive. And, all of these processes, especially installing new equipment, take time.

An on-premise disaster recovery plan can be challenging to document, test, and verify, especially if you have multiple clients on a single infrastructure. In this scenario, all clients on this infrastructure will experience problems with performance even if only one client’s data is corrupted.

Disaster Recovery plan on AWS

There are many advantages of implementing a disaster recovery plan on AWS.

Financially, we will only need to invest a small amount in advance (CAPEX), and we won't have to worry about the physical expenses for resources (for example, hardware delivery) that we would have in an on-premise data center.

AWS enables high flexibility, as we don’t need to perform a failover of the entire site in case only one part of our application isn’t working properly. Scaling is fast and easy. Most importantly, AWS allows a “pay as you use” (OPEX) model, so we don’t have to spend a lot in advance.

Also, AWS services allow us to fully automate our disaster recovery plan. This results in much easier testing, maintenance, and documentation of the DR plan itself.

This table shows the AWS service equivalents to an infrastructure inside an on-premise data center.

On-premise data center infrastructure | AWS infrastructure
DNS | Route 53
Load balancers | ELB/appliance
Web/app servers | EC2/Auto Scaling
Database servers | RDS
AD/authentication | AD failover nodes
Data centers | Availability Zones
Disaster recovery | Multi-region

 

Essential AWS Services for Disaster Recovery

While planning and preparing a DR plan, we'll need to think about which AWS services we can use. We also need to understand how our selected services support data migration and durable storage. These are some of the key features and services that you should consider when creating your Disaster Recovery plan:

AWS Regions and Availability Zones  –  The AWS Cloud infrastructure is built around Regions and Availability Zones (“AZs”). A Region is a physical location in the world that has multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity housed in separate facilities. These AZs allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.

Amazon S3 – Provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities within a region and are designed to provide a durability of 99.999999999% (11 9s).

Amazon Glacier – Provides extremely low-cost storage for data archiving and backup. Objects are optimized for infrequent access, for which retrieval times of several hours are adequate.

Amazon EBS –  Provides the ability to create point-in-time snapshots of data volumes. You can use the snapshots as the starting point for new Amazon EBS volumes. And, you can protect your data for long-term durability because snapshots are stored within Amazon S3.
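As a rough illustration of how such snapshots might be automated with the AWS Tools for PowerShell (a sketch only: the volume ID is a placeholder, and it assumes credentials and a default region are already configured):

    # Sketch: take a point-in-time snapshot of an EBS volume and wait for it to finish.
    Import-Module AWSPowerShell

    $volumeId = "vol-0123456789abcdef0"    # hypothetical volume ID
    $snapshot = New-EC2Snapshot -VolumeId $volumeId -Description "Nightly DR snapshot"

    # Poll until the snapshot reaches the 'completed' state.
    do {
        Start-Sleep -Seconds 30
        $snapshot = Get-EC2Snapshot -SnapshotId $snapshot.SnapshotId
    } while ($snapshot.State.Value -ne "completed")

    Write-Output "Snapshot $($snapshot.SnapshotId) completed and stored in S3."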

AWS Import/Export – Accelerates moving large amounts of data into and out of AWS by using portable storage devices for transport. The AWS Import/Export service bypasses the internet and transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network.

AWS Storage Gateway is a service that connects an on-premise software appliance with cloud-based storage. This provides seamless, highly secure integration between your on-premise IT environment and the AWS storage infrastructure.

Amazon EC2 – Provides resizable compute capacity in the cloud. In the context of DR, the ability to rapidly create virtual machines that you can control is critical.

Amazon EC2 VM Import Connector enables you to import virtual machine images from your existing environment to Amazon EC2 instances.

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances.

Amazon VPC allows you to provision a private, isolated section of the AWS cloud. Here,  you can launch AWS resources in a virtual network that you define.

Amazon Direct Connect makes it easy to set up a dedicated network connection from your premises to AWS.

Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud.

AWS CloudFormation gives developers and systems administrators an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion. You can create templates for your environments and deploy associated collections of resources (called a stack) as needed.
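As a minimal sketch of that workflow with the AWS Tools for PowerShell (the stack name, template file, and parameter are hypothetical):

    # Sketch: create a stack from a local CloudFormation template, then check its status.
    $templateBody = Get-Content -Path ".\dr-environment.template" -Raw

    New-CFNStack -StackName "dr-environment" `
                 -TemplateBody $templateBody `
                 -Parameter @{ ParameterKey = "Environment"; ParameterValue = "dr" }

    # Repeat this check until the stack reports CREATE_COMPLETE.
    (Get-CFNStack -StackName "dr-environment").StackStatus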

Disaster Recovery Scenarios with AWS

There are several strategies that we can use for disaster recovery of our on-premise data center using AWS infrastructure:

  • Backup and Restore
  • Pilot Light
  • Warm Standby
  • Multi-Site

Backup and Restore

The Backup and Restore scenario is an entry-level form of disaster recovery on AWS. This approach is the most suitable one if you don't currently have a DR plan in place.

In on-premise data centers, backups are typically stored on tape, and it obviously takes time to recover data from tapes in the event of a disaster. In a Backup and Restore scenario using AWS services, we can store our backups on Amazon S3 instead, making them immediately available if a disaster occurs. If we have a large amount of data that needs to be moved to Amazon S3, ideally we would use AWS Import/Export or even AWS Snowball to get our data onto S3 as quickly as possible.

AWS Storage Gateway enables snapshots of your on-premise data volumes to be transparently copied into Amazon S3 for backup. You can subsequently create local volumes or Amazon EBS volumes from these snapshots.
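A minimal sketch of the backup half of this scenario with the AWS Tools for PowerShell (the bucket name and file paths are placeholders):

    # Sketch: copy a backup archive to S3, then pull it back down during a restore.
    $bucket = "example-dr-backups"    # hypothetical bucket name

    # Upload the archive produced by the regular backup job.
    Write-S3Object -BucketName $bucket `
                   -File "D:\Backups\app-db-2016-12-12.bak" `
                   -Key "backups/app-db-2016-12-12.bak"

    # During recovery, download the object onto the restore host.
    Read-S3Object -BucketName $bucket `
                  -Key "backups/app-db-2016-12-12.bak" `
                  -File "D:\Restore\app-db-2016-12-12.bak"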

Backup and Restore scenario

The Backup and Restore plan is suitable for less business-critical applications. It is also an extremely cost-effective scenario and the one most often used when we simply need backup storage. If we use a compression and de-duplication tool, we can decrease our expenses here even further. In this scenario, the RTO will be as long as it takes to bring up the infrastructure and restore the system from backups, and the RPO will be the time since the last backup.

Pilot Light

The term “Pilot Light” is often used to describe a DR scenario where a minimal version of an environment is always running in the cloud. This scenario is similar to a Backup and Restore scenario. For example, with AWS you can maintain a Pilot Light by configuring and running the most critical core elements of your system in AWS. When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core.

Pilot Light scenario

A Pilot Light scenario is suitable for solutions that require a lower RTO and RPO. This scenario is a mid-range cost DR solution.

Warm Standby

A Warm Standby scenario is an extension of the Pilot Light scenario in which some services are always up and running. While building the DR plan, we need to identify the crucial parts of our on-premise infrastructure and then duplicate them inside AWS. In most cases, these are web and app servers running on a minimum-sized fleet. Once a disaster occurs, the infrastructure located on AWS takes over the traffic and scales up into a fully functional production environment, giving minimal RPO and RTO.
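As an illustration, the scale-up step can be as simple as resizing the Auto Scaling group behind the standby fleet. This is only a sketch using the AWS Tools for PowerShell; the group name and sizes are placeholders:

    # Sketch: grow the minimal warm-standby fleet to production capacity on failover.
    Update-ASAutoScalingGroup -AutoScalingGroupName "warm-standby-web" `
                              -MinSize 4 `
                              -MaxSize 12 `
                              -DesiredCapacity 8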

Warm Standby scenario

The Warm Standby scenario is more expensive than Backup and Restore and Pilot Light because in this case, our infrastructure is up and running on AWS. This is a suitable solution for core business-critical functions and in cases where RTO and RPO need to be measured in minutes.

Multi-Site

The Multi-Site scenario is a solution for an infrastructure that runs in full both on AWS and in an on-premise data center. By using a weighted routing policy on Amazon Route 53 DNS, part of the traffic is directed to the AWS infrastructure, while the rest is directed to the on-premise infrastructure.

Data is replicated or mirrored to the AWS infrastructure.
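When the disaster strikes, the weighted records can be updated so that AWS receives all of the traffic. The sketch below uses the AWS Tools for PowerShell; the hosted zone ID, record name, and ELB DNS name are placeholders, and the flattened -ChangeBatch_Change parameter should be verified against your module version:

    # Sketch: send 100% of traffic to the AWS site by upserting its weighted record.
    Import-Module AWSPowerShell

    $target = New-Object Amazon.Route53.Model.ResourceRecord
    $target.Value = "dr-elb-123456.us-east-1.elb.amazonaws.com"   # hypothetical ELB

    $recordSet = New-Object Amazon.Route53.Model.ResourceRecordSet
    $recordSet.Name          = "app.example.com."
    $recordSet.Type          = "CNAME"
    $recordSet.TTL           = 60
    $recordSet.SetIdentifier = "aws-site"
    $recordSet.Weight        = 100          # the on-premise record can be dropped to 0
    $recordSet.ResourceRecords.Add($target)

    $change = New-Object Amazon.Route53.Model.Change
    $change.Action            = "UPSERT"
    $change.ResourceRecordSet = $recordSet

    Edit-R53ResourceRecordSet -HostedZoneId "Z1EXAMPLE1234" -ChangeBatch_Change $change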

Multi-Site scenario

In a disaster event, all traffic will be redirected to the AWS infrastructure. This scenario is also the most expensive option, and it presents the last step toward full migration to an AWS infrastructure. Here, RTO and RPO are very low, and this scenario is intended for critical applications that demand minimal or no downtime.

Wrap up

There are many options and scenarios for Disaster Recovery planning on AWS.

The scope of possibilities has expanded further with AWS' announcement of its strategic partnership with VMware. Thanks to this partnership, users can extend their on-premise infrastructure (virtualized using VMware tools) into AWS and create a DR plan on AWS resources using the VMware tools they are already accustomed to.

Don’t allow any kind of disaster to take you by surprise. Be proactive and create the DR plan that best suits your needs.

Currently working as an AWS Consultant at CLOUDWEBOPS OÜ as well as Data Architect/DBA and DevOps @WizardHealth. Proud owner of AWS Solutions and DevOps Associate certificates. Besides databases and cloud computing, interested in automation and security. Founder and co-organizer of the first AWS User Group in Bosnia and Herzegovina.

More Posts

Deploy Domain Controllers as Azure Virtual Machines



This guide will show you how to deploy an Azure virtual machine as a domain controller (DC).

Extending Your Domain

There are many reasons why you would extend your existing domain into the cloud, including:

  • Single sign-on: Enable hybrid applications to work within a single Active Directory (AD) forest.
  • Disaster recovery: Have your domain already running when failing over application/data workloads to the cloud.
  • ADFS: Federate your forest to Azure AD using machines in the cloud, eliminating Internet connectivity at the office as a single point of failure for SaaS apps.

 

 

To extend an existing domain you will need:

  • Site-to-site networking: You must have either VPN or ExpressRoute connectivity (with routing configured) from your on-premises network (including the VLANs with active DCs) to your Azure virtual network (VNet) that will host your new in-Azure DCs.
  • Temporarily edit the DNS settings of the VNet: The new in-Azure DCs will need to find your existing domain, which requires DNS. Edit the settings of the VNet and temporarily use the IP addresses of some on-premises DCs as the DNS servers of your VNet – you will need to undo this after the successful promotion of your in-Azure DCs.

Afterwards, you will use AD Sites and Services to create:

  • An AD site for your AD deployment in that VNet or Azure region.
  • Subnet definitions for the VNets that will contain domain members.
  • Associations between those subnets and your AD site.
  • One or more site links that mimic your replication paths (site-to-site network connections) from the on-premises domain controllers.

Specify Your Domain Controller Virtual Machines

For all but the Fortune 1000s, DCs are usually lightweight machines that do nothing other than DNS, authentication, and authorization. For this reason, I go with the cheapest option for virtual machines in Azure, the Basic A-series. The 300 IOPS for a data disk limit doesn’t impact AD performance, and the lack of load balancer support (NAT rules for RDP and load balancing rules) doesn’t hurt either because domain controllers shouldn’t be visible on the Internet.

The sizing of the machines depends on the memory requirements. In my small deployments, I’ve opted for a Basic A2 (to run Azure AD Connect for small labs or businesses) and a Basic A1 as the alternative machine; memory requirements for Azure AD Connect depend on the number of users being replicated to Azure AD.

Larger businesses should use empirical data on resource utilization of their DCs to size Azure virtual machines. Maybe an F-Series, or a Dv2-Series would suit, or in extreme scenarios, maybe they’ll need “S” machines that support Premium Storage (SSD) accounts for data disks.

Building the Domain Controller

There are a couple of things to consider when deploying a new Azure virtual machine that will be a DC. You should deploy at least two DCs. To keep your domain highly available during localized faults or planned maintenance, create the machines in an availability set. Note that Azure Resource Manager (ARM) currently won't allow you to add a virtual machine to an availability set after creating the machine.

Creating new domain controllers in an Azure availability set [Image Credit: Aidan Finn]

You must store all AD Directory Services (DS) files on a non-caching data disk to be supported and to avoid USN rollbacks. Once the machine is created, open the settings of the machine in the Azure Portal, browse to Disks, and click Attach New.


Give the disk a name that is informative, size the disk, and make sure that host caching is disabled (to avoid problems and to be supported).

Add a data disk to the Azure domain controller [Image Credit: Aidan Finn]

Once the machine is deployed, log into it and launch Disk Management. Bring the new data disk online and format it with an NTFS volume.
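If you prefer to script this step, a minimal sketch (assuming the newly attached data disk is already online and is the only RAW disk on the VM) looks like this:

    # Sketch: initialize the RAW data disk, create an F: partition, and format it NTFS.
    Get-Disk |
        Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter F -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "ADDS" -Confirm:$false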

The disks of a virtual DC in Azure [Image Credit: Aidan Finn]

Note that Standard Storage (HDD) accounts only charge you for data stored within the virtual hard disk, not for the size of the disk. Azure Backup does charge an instance fee based on the total size of disks, but going from 137GB (the OS disk) to 237GB (with a 100GB data disk) won’t increase this fee (the price band is 50-500GB).

Static IP Address

A DC must have a static IP address. Do not edit the IP configuration of the virtual machine in the guest OS. Bad things will happen and careers will be shortened. Ignore any errors you'll see later about DHCP configurations; your virtual machine will have a static IP address, but one assigned using the method that is supported in Azure.

Using the Azure Portal, identify the NIC of your DC and edit its settings. Browse to IP Configurations and click the IP configuration; here you'll see the Azure-assigned IP configuration of this NIC. Change the assignment from Dynamic to Static. You can reuse the already-assigned IP address or enter a new (unused) one that is valid for the subnet. Note the IP address for later.
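The same change can be scripted with the AzureRM PowerShell module. This is only a sketch; the resource names and the address are placeholders for this lab:

    # Sketch: switch the DC's NIC to a static private IP in the Azure fabric.
    $nic = Get-AzureRmNetworkInterface -Name "dc01-nic" -ResourceGroupName "rg-identity"

    $nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
    $nic.IpConfigurations[0].PrivateIpAddress          = "10.0.1.4"

    Set-AzureRmNetworkInterface -NetworkInterface $nic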

Configuring a static IP address for the Azure domain controller [Image Credit: Aidan Finn]

Virtual Network DNS

AD admins know that things can vary here. If your first DC in Azure is joining an on-premises domain, then you will:

  1. Temporarily configure the VNet to use the IP addresses of 1 or more on-premises DCs as DNS servers.
  2. Perform the first DC promotion.
  3. Reset the VNet DNS settings to use the in-Azure DCs as DNS servers.

In this lab, I’m building a new/isolated domain, so I will simply edit the DNS settings of the VNet to use the new static IP address of my DC virtual machine.

Open the settings of the virtual network and browse to DNS Servers. Change the option from Default (Azure-Provided) to Custom, and enter the IP address(es) of the machine(s) that will be your DCs.
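For reference, a scripted version with the AzureRM module might look like the sketch below; the names are placeholders, and it assumes the VNet object already exposes a DhcpOptions collection that can be overwritten:

    # Sketch: point the virtual network's DNS at the DC's static IP address.
    $vnet = Get-AzureRmVirtualNetwork -Name "vnet-identity" -ResourceGroupName "rg-identity"

    # Replace any existing custom DNS servers with the DC's address.
    $vnet.DhcpOptions.DnsServers = [System.Collections.Generic.List[string]]@("10.0.1.4")

    Set-AzureRmVirtualNetwork -VirtualNetwork $vnet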

Configuring the DNS servers of the virtual network [Image Credit: Aidan Finn]

This option, along with the default gateway of the subnet and the static IP address of the machine’s NIC, will form the static IP configuration of your DC(s).

Promote the Domain Controller

Log into your DC, add the Active Directory Domain Services (AD DS) role, and start the DC promotion. Continue as normal until you get to the Paths screen; this is where you will instruct the AD DS configuration wizard to store the AD files on the data disk (F: in this case) instead of the %systemroot% (normally C:).
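The PowerShell equivalent of the role install and promotion looks roughly like this; the domain name is a placeholder, and the paths match the F: data disk prepared earlier (for a brand new forest, Install-ADDSForest accepts the same path parameters):

    # Sketch: add the AD DS role, then promote the VM as an additional DC for an
    # existing domain, keeping the database, logs, and SYSVOL on the F: data disk.
    # The cmdlet prompts for the Directory Services Restore Mode password.
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

    Install-ADDSDomainController -DomainName "corp.example.com" `
        -Credential (Get-Credential) `
        -InstallDns `
        -DatabasePath "F:\NTDS" `
        -LogPath "F:\NTDS" `
        -SysvolPath "F:\SYSVOL"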

Change the AD DS paths to use an Azure data disk [Image Credit: Aidan Finn]

Complete the wizard. You will see a warning in the Prerequisites Check screen, and probably later in the event logs, about the DC having a DHCP configuration. Remember that the guest OS must be left with a DHCP configuration; you have configured the static IP in the Azure fabric instead.

Ignore the warning about a DHCP configuration in the Azure machine's guest OS [Image Credit: Aidan Finn]

You can complete the wizard to get your DC functional.

If you are extending your on-premises domain, remember to change the DNS settings of your VNet after verifying in Event Viewer that the DC is fully active and has completed a first full sync of the AD database and SYSVOL.

Active Directory Sites and Services

The final step in the build process is to ensure that the Active Directory topology is modified or up to date. New subnets (the network address of your virtual network) should be added to AD in Active Directory Sites and Services. Sites should be created and the subnets should be added to those sites. And new site links should be added under the IP inter-site transport to take control of replication paths and intervals between any on-premises sites and your in-Azure site(s).

My simple Azure-based domain's topology [Image Credit: Aidan Finn]


As usual, make sure that you test AD and SYSVOL replication between and inside sites, verify that DNS is running, and check that the AD replication logs are clear.

The post Deploy Domain Controllers as Azure Virtual Machines appeared first on Petri.

Remotely Monitor a Raspberry Pi To See What’s Running and Get Notifications If Something Goes Wrong


If you’re running a Raspberry Pi that’s doing something in the background, like working as a security camera system or a weather station, then it’s good to know exactly what it’s up to no matter where you are. Initial State shows off how to build a dashboard that keeps you up to date and notifies you if anything goes wrong.


Office 365 Mailbox Quotas Swelling to 100 GB



Microsoft Stays Quiet but Office 365 Roadmap Reveals All

Microsoft hasn’t said anything about increasing the default quota for Exchange Online mailboxes from the previous 50 GB limit, so it came as a surprise when the Office 365 Roadmap announced that an increase was on the way (Figure 1).

Figure 1: The Office 365 Roadmap announces the change (image credit: Tony Redmond)

The last increase occurred in August 2013 when Microsoft upped mailbox quotas from 25 GB to 50 GB.

You might wonder why Microsoft is increasing mailbox quotas within Exchange Online. After all, relatively few individuals need more than 50 GB. Well, storage is cheap, especially when bought in the quantities that Microsoft purchases to equip hundreds of thousands of Office 365 servers. And because storage is cheap, Microsoft is able to offer users enough of it to keep all their data online.

It's also a competitive advantage when Office 365 provides 100 GB mailboxes and Google's G Suite is limited to 30 GB (shared between Gmail, Google Drive, and Google Photos).

Apart from anything else, storing data online makes sure that it is indexed, discoverable, and comes under the control of the data governance policies that you can apply within Office 365.

In particular, keeping data online is goodness because it means that users don’t have to stuff information into PST files. PSTs are insecure, prone to failure, invisible for compliance purposes, and a throwback to a time when storage was expensive and mailbox quotas small. Given the size of online quotas available today, there’s really no excuse for Office 365 tenants to tolerate PST usage any more. It’s time to find, remove, and ingest user PSTs via the Office 365 Import Service or a commercial product like QUADROtech’s PST FlightDeck.

Rolling Out to Exchange Online

According to the roadmap, the new 100 GB limit is "rolling out". However, I have not yet seen an increase in my tenant. When the upgrade happens, any mailbox that has not been assigned a specific quota will receive the increase. In other words, the increase applies wherever an administrator has not changed the quotas for the mailbox, usually to reduce the limit.

To change a mailbox quota, you might expect to use the Office 365 Admin Center or Exchange Online Administration Center (EAC). The normal path to editing settings is to select a user, find what you need to change, and do it. In this case, you access Exchange properties under Mail Settings in the Office 365 Admin Center or select the mailbox in EAC. Either way, you’ll end up with the screen shown in Figure 2.

Figure 2: Editing Exchange Online mailbox quotas (image credit: Tony Redmond)

The problem is that there is no way to amend mailbox quotas here. Clicking the Learn More link brings us to a page that tells us that a More options link should be available to allow the mailbox quotas to be updated. The page does say that it relates to Exchange 2013 so it’s annoying to find it displayed when working with Exchange Online.

A further search locates a knowledge base article that recommends using PowerShell to set Exchange Online mailbox limits. The logic is that most administrators are likely to leave mailbox quotas alone (that’s one of the reasons to make the quotas so large), so why clutter up the GUI with unnecessary options.

I’m pretty Exchange-literate so being brought from page to page to discover how to perform a pretty simple task doesn’t disturb me too much, but it’s not a good experience for a new administrator.

PowerShell Does the Trick

PowerShell is often the right way to perform a task inside Office 365. In this case, the Set-Mailbox cmdlet can be used to update the three quota settings that determine how a mailbox behaves. This example shows how to set mailbox quotas:

[PS] C:\> Set-Mailbox -Identity TRedmond -ProhibitSendQuota 75GB -ProhibitSendReceiveQuota 80GB -IssueWarningQuota 73GB

  • The IssueWarningQuota property tells Exchange the point at which nagging messages should be sent to the mailbox owner to tell them that they are approaching the quota limit.
  • The ProhibitSendQuota property marks the limit at which Exchange will no longer accept new outbound messages from the mailbox.
  • The ProhibitSendReceiveQuota property tells Exchange when to cut off both outbound and inbound service to the mailbox.

Logically, warnings should sound before limits cut in to stop users doing work. A gap of a gigabyte or two between warning and limit should be sufficient for a user to take the hint and either clean out their mailbox or request an increased limit. A well-designed retention policy also helps as it can remove old items without user intervention to keep mailboxes under quota.
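To confirm the change, a quick check from the same Exchange Online PowerShell session might look like this (the mailbox identity is just the example used above):

    # Sketch: verify the quota values now applied to the mailbox.
    Get-Mailbox -Identity TRedmond |
        Format-List DisplayName, IssueWarningQuota, ProhibitSendQuota, ProhibitSendReceiveQuota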

New Limits for Some Plans

The new mailbox quotas will only apply to the Office 365 E3 and E5 plans. Other plans will remain with the 50 GB quota as described in the Exchange Online limits page (which hasn’t yet been updated to reflect this change).

Consider Before You Fill

Having a large mailbox can be an advantage. It can also create some challenges. Search is much better today than ever before, but looking for a particular item can still sometimes be like looking for the proverbial needle in the haystack. That’s why I delete items I know I don’t need. Or think I don’t need (Recoverable Items save the day).

More importantly, if you use the Outlook desktop client, consider how much data you want to cache locally and how well the hard disk on your PC will cope with the size of that cache (the OST file). PCs equipped with fast SSDs usually perform well up to the 10 GB mark and slow down thereafter. PCs with slow-spinning 5,400 rpm hard drives will pause for thought well before that.


The solution? Use the Outlook “slider” to restrict the amount of data synchronized to the cache. Outlook 2016 allows you to store just three days of mail (suitable for desktop virtualization projects) up to “All”. Setting the slider to a year or so is reasonable for most people. That is, unless you absolutely insist on caching all of your mailbox. If so, invest in fast disks.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros,” the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Office 365 Mailbox Quotas Swelling to 100 GB appeared first on Petri.

List of websites to download old version software for Windows


While it is always recommended to run the latest, upgraded version of software, we sometimes need to use an older version. Perhaps the upgraded version is not compatible with your Windows PC, or you don't really like the upgraded features and UI, or maybe the software has gone paid! Usually, developers delete the older versions or replace them with the upgraded versions of their software, but thankfully there are some websites which help you download old versions. In this post, we will discuss the five best websites to download old version software for Windows.


1. Oldversion.com

Running since 2001, this website has an extensive collection of old software for Windows, Linux, Android, and Mac. More than 2,800 versions of 190 programs are listed here in proper categories. Furthermore, there is also a search box where you can find the desired program in no time. The site also has its own forum where you can post your queries about the software and the versions you require.

You can browse the software by category or alphabetically. Both the current and older versions of programs are available for download. This is one of the best websites to download older versions of software. Check it here.

2. Oldware.org

This is again a well-organized website offering old versions of popular Windows software. The extensive list includes around 2,400 programs. All programs are displayed alphabetically, and there is also a quick-jump option where you can select the desired program from a drop-down menu. Almost every program is verified by the website author.

A simple user interface and an alphabetically organized list of software make this website worth adding to the list of the best websites to download old version software for Windows. The homepage also shows the ten latest files added and the most popularly downloaded programs. Just click on any program and download the version you need. Check it here.

3. OldApps.com

A detailed website with proper categorization of software and the various versions available for Windows, Mac, and Linux. The home page shows it all: just go to the desired category and select the program you want to download. The wide range of categories includes browsers, messengers, file-sharing programs, and a lot more. Click, and you can see the various versions available for free download. The website shows the release date of each program, the size of the setup file, and the supported operating systems.

You will probably find the oldest versions of most of the programs listed here. Tabs like 'Recently Added Apps', 'Apps for Windows', and 'Most Downloaded Apps' give you quick access to the programs. There is a Community page on the website too, but it seems to be down currently. You can also use the search tab if you want to go directly to the program you want to download. Check it here.

4. Last Freeware Version

This website lists old versions of almost every popular program, but the interface is a bit clumsy compared to the other download websites mentioned above. You need some time to get accustomed to the interface before you can search for the program you need.

The software programs here are listed neither alphabetically nor category-wise. But the plus point is that it lists the last free versions of some really good programs that are now available only as paid versions. Visit 321download.com.

5. PortableApps.com

This website primarily provides the latest versions of software, but it lists the older versions too. Its huge collection of popular software includes more than 300 real portable apps, with no bundleware or shovelware.

The website has its own support forum where you can post your query and get help. As the name suggests, the website offers portable apps which you can carry on your cloud drive or a portable device. Whether it is your favorite games, photo-editing software, Office apps, media player apps, utilities, or more, the website offers it all. In short, it is a platform offering all portable apps tied together.

Always use safe software download sites to download your software, and never click Next, Next blindly. Opt out of third-party offers and avoid getting Potentially Unwanted Programs installed on your computer.



Rugged module runs Yocto Linux on up to 12-core Xeon-D


Eurotech's "CPU-161-18" is a headless COM Express Type 6 Compact module with a 12-core Xeon-D, up to 24GB DDR4, PCIe x16, and wide temperature operation. Like Advantech's SOM-5991, Eurotech's CPU-161-18 is a "server-class" COM Express Type 6 Compact module aimed at high-end embedded applications and equipped with Intel's 14nm "Broadwell" based Xeon D-1500 SoCs. The module […]

Google launches first developer preview of Android Things, its new IoT platform


Google today announced Android Things, its new comprehensive IoT platform for building smart devices on top of Android APIs and Google's own services. Android Things is now available as a developer preview.

Essentially, this is Android for IoT. It combines Google’s earlier efforts around Brillo (which was also Android-based but never saw any major uptake from developers) with its Android developer tools like Android Studio, the Android SDK, Google Play Services and Google’s cloud computing services. Support for Weave, Google’s IoT communications platform that (together with Brillo) makes up Google’s answer to Apple’s HomeKit, is on the roadmap and will come in a later developer preview.

As a Google spokesperson told me, the company sees Android Things as an evolution of Brillo that builds on what Google learned from this earlier project. Google will work with all early access Brillo users to migrate their projects to Android Things.

Google has partnered with a number of hardware manufacturers to offer solutions based on Intel Edison, NXP Pico and the Raspberry Pi 3. One interesting twist here is that Google will also soon enable all the necessary infrastructure to push Google’s operating system updates and security fixes to these devices.

In addition, Google also today announced that a number of new smart device makers are putting their weight behind Weave. Belkin WeMo, LiFX, Honeywell, Wink, TP-Link and First Alert will adopt the protocol to allow their devices to connect to the Google Assistant and other devices, for example. The Weave platform is also getting an update and a new Device SDK with built-in support for light bulbs, smart plugs, switches and thermostats, with support for more device types coming soon. Weave is also getting a management console and easier access to the Google Assistant.

Google’s IoT platforms have long been a jumble of different ideas and protocols that didn’t always catch on (remember Android@Home from 2011?). It looks like the company is now ready to settle on a single, consolidated approach. Nest Weave, a format that was developed by Nest for Nest, is now being folded into the overall Weave platform, too. So instead of lots of competing and overlapping products, there is now one consolidated approach to IoT from Google — at least for the time being.

Featured Image: JOSH EDELSON/Getty Images

OpenStack Developer Mailing List Digest December 3 – 9


Updates:

  • Nova placement/resource providers update with some discussions on aggregates and API [4]
  • New Nova core reviewer: Stephen Finucane [8]
  • Project mascots are all around the mailing list, search for “logo” in the subject to find them
  • Status update on unsupported Ironic drivers [10]
  • The DefCore Committee is now called Interop Working Group [11]

Creating a New IRC Meeting Room [9]

  • Create a new channel: #openstack-meeting-5
  • Generally recommend project teams to use the meeting channels on Freenode
  • Let projects use their channels for the meetings, but only if the channel is logged
  • As a next step limit the official meeting rooms for official projects and have non-official projects using their own IRC channels

Neutron Trunk port feature

  • Clarifying some usability aspects [1]
  • Performance measurements [2]

Ocata Bugsmash Day [3]

  • Thanks to Huawei and Intel and all the attendees to make it happen
  • Let’s keep the tradition and grow the event further if we can

PTG Travel Support Program [5][6]

  • Deadline of the first phase is this week
  • Phase two deadline is January 15th
  • Also a reminder to register for the event if you can come but haven't done so yet [7]

Finish test job transition to Ubuntu Xenial [12]

  • Merged at last! [13]
  • A lot of experimental and non-voting jobs had to be updated
  • Changes to Master no longer run on trusty
  • Might have missed things still, so keep a look out

 

[1] http://bit.ly/2gEmgix

[2] http://bit.ly/2hm3Cdu

[3] http://bit.ly/2gEpItx

[4] http://bit.ly/2hm3Y3S

[5] http://bit.ly/2gEqjva

[6] http://bit.ly/2hm3QBC

[7] http://bit.ly/2gEviMx

[8] http://bit.ly/2hm5X8c

[9] http://bit.ly/2gEn3ji

[10] http://bit.ly/2hm57Zi

[11] http://bit.ly/2gEsPBE

[12] http://bit.ly/2gAMU9a

[13] http://bit.ly/2gEAOP1

A Strava add-on will now let you write stories about your rides


Strava Storyteller is a new add-on that will let you write stories about your ride, including maps and videos. Your club data will now be accessible through your mobile too.

10 essential PowerShell security scripts for Windows administrators


PowerShell is an enormous addition to the Windows toolbox that gives Windows admins the ability to automate all sorts of tasks, such as rotating logs, deploying patches, and managing users. Whether it’s specific Windows administration jobs or security-related tasks such as managing certificates and looking for attack activity, there is a way to do it in PowerShell.

Speaking of security, there’s a good chance someone has already created a PowerShell script or a module to handle the job. Microsoft hosts a gallery of community-contributed scripts that handle a variety of security chores, such as penetration testing, certificate management, and network forensics, to name a few.
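As a hedged example of how you might hunt for those community contributions from a PowerShell prompt (the module name in the install line is a placeholder, not a recommendation from the article):

    # Sketch: search the PowerShell Gallery for security-tagged modules, then
    # install a chosen one for the current user only.
    Find-Module -Tag Security |
        Select-Object Name, Description |
        Format-Table -AutoSize

    Install-Module -Name SomeSecurityModule -Scope CurrentUser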

An AWS user’s take on AWS vs. Microsoft Azure and Google Cloud Platform


If you’re new to the field, you will want to choose the platform that will help you get started with cloud computing. As a longtime AWS user, I believe that this is an excellent platform for a future cloud user. But there are also valid reasons for being familiar with all of the leading cloud providers. This post is about AWS vs Microsoft Azure and Google Cloud with a focus on the following categories: Compute, analytics, storage, network, and pricing.


AWS vs Microsoft Azure and Google Cloud Platform

First, let’s say a few words about each of the platforms:

  • Amazon Web Services. Launched in 2006, AWS has a bit of a head start on the other platforms. With constant innovations and improvements over the years, the platform now has more than 70 services with a wide range of coverage. AWS servers are available in 14 geographical regions. Market share of the company is steadily growing, reporting 31% market share in the second quarter of 2016.
  • Microsoft Azure. Running since 2010, Microsoft Azure is a complex system that provides support for many different services, programming languages, and frameworks. It has 67 services and data centers in 30 different geographical regions. It currently holds 11% of the market as of Q2 2016.
  • Google Cloud Platform. Introduced in 2011, Google Cloud Platform is the youngest platform. Designed to meet the needs of Google Search and YouTube, it became available to everyone as part of the Google for Work package. It has more than 50 services and 6 global data centers, with another 8 announced for 2017. With only 5% of the market share and quite aggressive expansion, Google's moment is yet to come.


Now that we know who we are dealing with, let's start with our comparison:

Compute

Computing is a fundamental process for your entire business. The advantage of cloud computing is that you have a powerful and expandable computing force at your disposal that is ready when you need it.

The central AWS computing service is Elastic Compute Cloud (EC2). EC2 has become a synonym for scalable computing on demand. Depending on the industry, additions such as AWS Elastic Beanstalk or EC2 Container Services can significantly reduce your costs. At the moment, AWS supports 7 different instance families and 38 instance types. It also offers regional support and zone support at the same time.

The heart of Microsoft Azure computing is Virtual Machines and Virtual Machine Scale Sets, which can be used for processing. Windows client apps can be deployed with the RemoteApp service. With Azure, you can choose from 4 instance families and 33 instance types, and you can place them in different regions. Zone support is not provided.

Google Cloud Platform uses Compute Engine for running computing processes. One disadvantage is that its pricing is less flexible compared to AWS and Azure. It supports most of the main services that you would need, such as container deployment, scalability, and web and mobile app processing. Google Cloud supports 4 instance families and 18 different instance types, and provides regional and zone support.

AWS is the clear front runner when it comes to compute power. Not just because it offers you the most learning resources, but also because it provides the best learning platform.

Analytics

Cloud computing platforms provide quite a lot of useful data about your business. All you need to do is make the proper analysis.

In the field of data analytics, AWS has made an entry into big data and machine learning. However, if you don't need extensive data analysis, you can use its QuickSight service. This service will help you discover patterns and draw correct conclusions from the data you're receiving.

Similarly, Azure has taken steps toward big data and machine learning, but they don’t have a specific offering in these areas.

Google Cloud Platform, however, has the most advanced offering for big data analysis, machine learning, and artificial intelligence.

If you’re looking for a high level of data analytics, Google Cloud Platform is probably the best choice. However, if you just want to keep track of your daily business, AWS will serve you just fine.


Storage

Storage is an important pillar of cloud computing because it enables us to allocate all sorts of information (needed for our business) in an online location.

The AWS Simple Storage Service, known as S3, is pretty much the industry standard. As a result, you will find a wealth of documentation, case studies, webinars, sample code, libraries, and tutorials to consult, as well as forum discussions in which AWS engineers participate. It's also good to know that S3 is object storage, and you can use Glacier as the archiving service.

Azure and Google Cloud Platform both have quite reliable and robust storage, but you won’t find anywhere near as much documentation and information about them as you will with AWS. They also have working and archive storage and different additional services, but they can’t out-perform AWS.

Here, AWS's deep resources for new users make it the clear champion in this category.

Network

It may come in handy to have your network in the cloud. You can have your VPN in an isolated place for your team only. And, it’s a great feature that adds value to your cloud system.

The AWS offering here is quite good. You can use Amazon Virtual Private Cloud (VPC) to create your VPN and set up your network topology: create subnets, route tables, private IP address ranges, and network gateways. On top of that, you can use Route 53 as your DNS web service.

Microsoft Azure also has a solid private networking offer. Its Virtual Network (VNET) allows you to set your VPN, have public IP if you want, and use a hybrid cloud, firewall, or DNS.

Google Cloud Platform’s offering is not as extensive. It has the Cloud Virtual Network, and supports subnet, Public IP, firewall protection, and DNS.

The networking category winner is AWS because it has the most reliable DNS provider.

Pricing

At the end of the day, everyone wants to know: "So, how much is that going to cost me?" Because prices for each provider depend on your needs and requirements, we can't quote exact costs here. However, we can describe the pricing models that each provider uses.

AWS uses three payment models:

  • On demand: You pay only for the resources and services you use
  • Reserve: Choose the quantity of resources that you want to book upfront for 1 to 3 years and pay based on utilization
  • Spot: Take advantage of unused capacity and bid with others for additional space

Please note that AWS charges are rounded by the hour used.

Azure pricing is a bit more flexible, charging per minute and rounding up, with discounts for commitments; even so, its pricing models aren't as flexible as those of some other platforms. Sustained use pricing is designed to enable discounts for on-demand use when a particular instance is used for a larger percentage of the month.

GCP pricing is similar to Azure's. It also charges per minute, with a 10-minute minimum per instance. In addition to on-demand charging, GCP offers sustained use discounting, which means that you will get a discount for regular usage.

Pricing models can be a bit tricky. Each platform offers a pricing calculator that can help you estimate costs. If you are considering AWS, I would suggest that you get in touch with a local APN (AWS Partner Network) company; they can help you estimate your monthly costs.

 

RISC OS Interview – Rob Sprowson


We continue with our series of interviews with people in the RISC OS world. In this interview, we catch-up with Elesar’s Rob Sprowson.

If you have any other questions, feel free to join in the discussion.

If you would have any suggestions for people you would like us to interview, or would like to be interviewed, just let us know….

Would you like to introduce yourself?
Could-have-been basketball player, still not tall enough.

How long have you been using RISC OS?
That’s patchy: starting with an Acorn Electron which my sister and I eventually broke through overuse, then a big gap until picking up a 2nd hand BBC Micro from the local newspaper in the mid 1990’s, then in parallel RISC OS from 1997ish would make either 33 or 19 years depending on which you count.
Oh dear, now I can’t claim I’m 21 any more either.

What other systems do you use?
Mostly Windows because of the specialist CAD software and other electronics design tools I need to use daily. I have some VMs saved with Linux and FreeBSD but they’re mostly for testing things or recompiling NetSurf, I don’t really know what I’m doing but as they’re VMs it doesn’t matter too much if I destroy something through careless typing.

What is your current RISC OS setup?
Singular? Nothing’s that simple. For email I use a Risc PC (well, more specifically my monitors are stacked vertically and I’m too lazy to remove the Risc PC holding the whole pile up – those cases are built like brick bunkers).
For development, a Titanium of course, it’s nice to do an experimental OS rebuild in 1 minute or less as I don’t like tea and have trouble finding other things to do that take the ‘time it takes to boil a kettle’.
Then there are piles and cupboards and boxes of other things of other vintages which get dragged out for compatibility testing, erm, more than 15 if you include Raspberry Pi’s though some of them are on loan rather than machines I myself own.

Do you attend any of the shows and what do you think of them?
This year I got wheeled out on behalf of ROOL for Wakefield and the South West show. Shows are great to hear what normal users think and what they can’t do but would like to, being too deeply buried in the inner workings of something makes it very difficult to see that.
Some of the shows could be freshened up a bit rather than repeating the ‘tables & chairs’ format every year, to attract a larger audience – the show organisers should visit similar trade shows or enthusiast conventions to steal ideas to improve the presentation of RISC OS ones.

What do you use RISC OS for in 2016 and what do you like most about it?
I like that the OS doesn’t get in my way. If I want to save something in the root directory of my harddisc there’s no patronising error box popping up asking me to confirm that. I used to work with someone who had a book on usability called "Don’t make me think", and that seems a good mantra to work by.

What is your favourite feature/killer program in RISC OS?
Obligatory plug for Pluto here: Pluto Pluto Pluto. Oh, did I mention Pluto?

What would you most like to see in RISC OS in the future?
The bounty scheme that ROOL runs seems to have a good selection of sensible "big ticket" items in, so I’d go with that since Ben/Steve/Andrew know their onions.
Reasonably frequently someone will ask on their forum "is feature X available" when there’s a bounty for X already open, but you never see the total going up so I guess they’re a source of hot air rather than stumping up just a tenner to help make something happen. The world runs on these shiny money tokens in our pockets, so people shouldn’t get too upset if you ask someone to do something for nothing and nothing happens.

Can you tell us about what you are working on in the RISC OS market at the moment?
There are a couple of CloudFS enhancements in the immediate pipeline, but it tends to get busy at Elesar which is distracting, because some of the protocols to talk to the servers are eye wateringly complicated and you really need to be 'in the zone' to work on them.

Any surprises you can’t or dates to tease us with?
There are 3 hardware projects and 3 software projects on the RISC OS side of the Elesar hob. I tend to come up with ideas faster than they can be implemented, so sometimes things get culled because they're superseded or because during the derisking stage it becomes apparent that by the time they're finished they'd no longer be commercially viable.

Apart from iconbar (obviously) what are your favourite websites?
Iconbar who?

Santa Claus is a regular iconbar reader. Any not-so-subtle hints you would like to drop him for presents this year (assuming you have been very good)?
A time machine, and a whole cod, to go back in time and slap some people with. You know who you are…I’m coming for you.

Do you have any New Year’s Resolutions for 2017?
No, I don’t believe in that mumbo jumbo. Only humans attach significance to January 1st; we’re just orbiting the sun same as the previous day.

Any questions we forgot to ask you?
How many mouse buttons I’ve worn out? 2 I think, but fortunately the micro switches are easy to replace and good for another 1 million clicks!

Elesar website


Top 10 Ways to Speed Up Old Technology


Even if you are building a brand new computer, odds are you have some old gear around the house you’d like to get as much life out of as possible. From phones to old laptops to old TVs, here are some tips to speed up and clean up your older tech.


How to add & use Pickit Free Images add-in to Microsoft Office


Presentations should be illustrative, not exhaustive, and it is an image that makes a presentation illustrative. This helps us in many ways. For instance, it helps us emphasize a point without any ambiguity. A new add-in for Microsoft Office, Pickit, is designed just for this purpose.

Pickit makes it convenient for Microsoft Office customers to tell their stories by leveraging specially curated photos. The add-in is designed to work in Office apps such as OneNote 2016 or later, PowerPoint 2016, and Word 2016. The Pickit plugin is also compatible with the Mac and online versions of the Office applications.

Pickit Free Images add-in for Office

If you have Office 365 installed on your system, launch PowerPoint application and hit the ‘Insert’ tab.


Next, navigate to ‘Store’ and look for ‘Pickit’ add-in, and select it.

Now, you have authentic visuals from the world’s leading image makers, right at your fingertips and in the task pane.

Once downloaded, the Pickit icon will appear as a button in the PowerPoint and Word ribbons.

Just carry out a keyword search or select a category to find the images you are looking for.

All images are legal and free to use; there is no license fee or additional cost involved.

Pickit appears to be a perfect option for presentations, as it offers a quick and easy way to bring your work to life without leaving your presentation.

When you are not sure what to search for, just browse Pickit’s professionally curated collections. There’s a new image collection, “Talk Like a Rosling,” which features inspired content from statistician and presenter Hans Rosling and the latest project from his team at Gapminder—Dollar Street.

You can download the Pickit add-in from the Office Store in the Office apps or the web. For more information or to add the Pickit add-in, visit office.com.



“Dear Boss, I want to attend the OpenStack Summit”

The content below is taken from the original (“Dear Boss, I want to attend the OpenStack Summit”), to continue reading please visit the site. Remember to respect the Author & Copyright.

Want to attend the OpenStack Summit Boston but need help with the right words for getting your trip approved? While we won’t write the whole thing for you, here’s a template to get you going. It’s up to you to decide how the Summit will help your team, but with free workshops and trainings, technical sessions, strategy talks and the opportunity to meet thousands of like-minded Stackers, we don’t think you’ll have a hard time finding an answer.

Dear [Boss],

All I want for the holidays is to attend the OpenStack Summit in Boston, May 8-11, 2017. The OpenStack Summit is the largest open source conference in North America, and the only one where I can get free OpenStack training, learn how to contribute code upstream to the project, and meet with other users to learn how they’ve been using OpenStack in production. The Summit is an opportunity for me to bring back knowledge about [Why you want to attend! What are you hoping to learn? What would benefit your team?] and share it with our team, while helping us get to know similar OpenStack-minded teams around the world (think 60+ countries and nearly 1,200 companies represented).

If I register before mid-March, I get early bird pricing: $600 USD for four days (plus an optional day of training). Early registration also allows me to RSVP for trainings and workshops as soon as they open (they always sell out!), or sign up to take the Certified OpenStack Administrator exam onsite.

At the OpenStack Summit Austin last year, over 7,800 attendees heard case studies from Superusers like AT&T and China Mobile, learned how teams are using containers and container orchestration like Kubernetes with OpenStack, and gave feedback to Project Teams about user needs for the upcoming software release. You can browse past Summit content at openstack.org/videos to see a sample of the conference talks.

The OpenStack Summit is the opportunity for me to expand my OpenStack knowledge, network and skills. Thanks for considering my request.

[Your Name]

For God’s sake, stop trying to make Microsoft Bob a thing. It’s over

The content below is taken from the original (For God’s sake, stop trying to make Microsoft Bob a thing. It’s over), to continue reading please visit the site. Remember to respect the Author & Copyright.

Vid Microsoft has entered the virtual reality race, announcing a new headset called Evo in collaboration with Intel.

The headset will have the same advanced features as current high-end products, including the Oculus Rift, HTC Vive and soon-to-be-launched Sulon Q, but will work with mid-range laptops, the company said.

Up to now, Microsoft has focused its VR efforts on augmented reality (AR) and its Hololens glasses that add digital elements on a screen that you look through to the real world. The Evo will go full VR, covering your eyes and then reflecting an augmented reality back to you.

Critically, the Evo will allow for “inside-out” spatial awareness, meaning that sensors built into the headset will allow you to walk around a physical space while wearing it, rather than requiring that external sensors be set up within your room to define a space.

That inside-out technology is what Sulon Q hopes will give it a first-mover advantage when it launches early next year, while both the Rift and Vive teams are furiously working on their own versions.

Promotional videos for Microsoft’s new Evo also show the headset as being wireless – again, something that Sulon Q is pushing as a unique advantage to its system (it has a full Windows 10 computer built into the headset), and something that both Oculus and Vive are working on.

At the moment, high-end headsets have to be physically connected by a wire to a high-spec computer. The Evo, on the other hand, will be wirelessly paired with a computer to achieve, well, this:

Youtube Video

Hm, what does this remind us of? Oh yes, that’s right. Microsoft Bob. Which it tried to make a success in 1995. And failed. It even has the little dog in the corner, too. Maybe 2017 will be kinder.

Youtube Video

Back to 2016: if Microsoft announced a new VR headset that wasn’t wireless and didn’t have inside-out tracking, it would have been laughed out of the room. The big question is when will it launch?

And on that Microsoft is being wildly vague. It says its Hololens should be available “in the first half of 2017,” and it says it has already shared the specs for PCs that will power its new headset, with those PCs available “next year.” It says developer kits will be made available to developers at the Game Developers Conference in San Francisco in February.

And it announced that the hardware developer 3Glasses will “bring the Windows 10 experience to their S1 device in the first half of 2017” – but that’s not the same as saying Microsoft Evo headsets will be available by then.

Incidentally the minimum specs for the new headsets are:

  • Intel Mobile Core i5 dual-core
  • Intel HD Graphics 620 (GT2) or equivalent
  • 8GB RAM
  • HDMI 1.4 or 2
  • 100GB drive (preferably solid state)
  • Bluetooth 4.0

Taking all the announcements together, it looks as though Microsoft is aiming at a Q3 or Q4 2017 launch of its VR headset – a timeline that is likely to give Oculus, Vive and Sulon a few months’ head start, but probably not enough of one to steal the market.

Where Microsoft and Intel really could win, however, is if they do manage to create a good VR system that requires a less powerful machine to run. That would pull down the price tag for the whole system and place it above the current best offering on the market – the PlayStation VR – in terms of quality. ®

Free ebook: Containerized Docker Applications Lifecycle with Microsoft Tools and Platform

The content below is taken from the original (Free ebook: Containerized Docker Applications Lifecycle with Microsoft Tools and Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is offering a new free ebook titled, Containerized Docker Applications Lifecycle with Microsoft Tools and Platform, by Cesar de la Torre… Read more at VMblog.com.

Bluetooth 5 is out: Now will home IoT take off?

The content below is taken from the original (Bluetooth 5 is out: Now will home IoT take off?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bluetooth is aiming straight for the internet of things as the fifth version of the wireless protocol arrives with twice as much speed for low-power applications.

Bluetooth Low Energy (BLE), which gains the most from the new Bluetooth 5 specification, can now go as fast as 2Mbps (megabits per second) and typically can cover a whole house or a floor of a building, the Bluetooth Special Interest Group (SIG) said Wednesday. Those features could help make it the go-to network for smart homes and some enterprise sites.

The home IoT field is pretty open right now because most people haven’t started buying things like connected thermostats and door locks, ABI Research analyst Avi Greengart said. Bluetooth starts out with an advantage over its competition because it’s built into most smartphones and tablets, he said. Alternatives like ZigBee and Z-Wave often aren’t.

“It’s easy to predict that within two to three years, pretty much every phone will have Bluetooth 5,” Greengart said. “Sometimes ubiquity is the most important part of a standard.”

As the new protocol rolls out to phones, users should be able to control Bluetooth 5-equipped devices without going through a hub.

Bluetooth is in a gradual transition between two flavors of the protocol. The “classic” type is what’s been linking cellphones to cars and mice to PCs for years. BLE, a variant that uses less power, can work in small, battery-powered devices that are designed to operate for a long time without human interaction.

BLE devices now outnumber classic Bluetooth products and most chips include both modes, said Steve Hegenderfer, director of developer programs at the Bluetooth SIG.

With Bluetooth 5, BLE matches the speed of the older system, and in time, manufacturers are likely to shift to the low-power version, he said.

Range has quadrupled in Bluetooth 5, so users shouldn’t have to worry about getting closer to their smart devices in order to control them. Also, things like home security systems – one of the most common starting points for smart-home systems — will be able to talk to other Bluetooth 5 devices around the house, Parks Associates analyst Tom Kerber said.

Another enhancement in the new version will help enterprises use Bluetooth beacons for location. BLE has a mechanism for devices to broadcast information about what they are and what they can do so other gear can coordinate with them. Until now, those messages could only contain 31 bytes of information.

Now they can be eight times that size, making it easier to share information like the location and condition of enterprise assets, such as medical devices in hospitals. Google’s Physical Web concept, intended to let users easily interact with objects, is based on BLE beacons.

Bluetooth still needs to fill in a few pieces of the puzzle, ABI’s Greengart said.

The new, longer range is an improvement, but a mesh would be better, he said. In a mesh configuration, which is available in competing networks like ZigBee and Thread, each device only needs to connect with the one closest to it. That takes less power, and it’s better than relying on each device’s range to cover a home, because walls and other obstacles can keep signals from reaching their full range, he said. The Bluetooth SIG is at work on a mesh capability now.

Consumers are also waiting for a high-fidelity audio connection to wireless headphones, a need that’s getting more urgent as phone makers phase out physical jacks, Greengart said. As with mesh, it’s coming from Bluetooth but not here yet.

Although Bluetooth 5 makes strides that could help drive IoT adoption, the field is still open, he said. “There’s room for almost any solution to succeed, including Wi-Fi.”

Best Practices for Domain Controller VMs in Azure

The content below is taken from the original (Best Practices for Domain Controller VMs in Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post will explain the best practices and support policies for deploying domain controllers (DCs) as virtual machines in Microsoft Azure.

What About Azure AD Domain Services?

In the not too distant past, if you wanted to run an application in the cloud with domain membership and consistent usernames and passwords, then you had no choice – you had to deploy one or more (preferably 2 or more) domain controllers as virtual machines in the cloud. Azure Active Directory (AD) didn’t offer domain membership, and couldn’t offer the same type of username/password authentication and authorization that you get with Active Directory Domain Services.

However, things have changed … slightly. Azure AD has recently added Domain Services as a generally available and supported feature. But be careful; Azure AD Domain Services might not be what you think it is!

Azure AD Domain Services allows you to deploy a domain-dependent application in the cloud without the additional cost of virtual machines that are functioning as domain controllers. However, Azure AD Domain Services is not another domain controller in your existing domain – in fact, it is not even your existing domain. Using Azure AD Connect you can clone your domain into Azure AD Domain Services. This means that your Organizational Units (OUs), group policies, groups, and so on can live on in the cloud, but in a different domain that is a clone of your on-premises domain.

Stretching an Active Directory domain to Azure virtual machines [Image Credit: Aidan Finn]

If you want your on-premises AD forest to be truly extended into the cloud, then today, the best option is to continue to use virtual machines running the Active Directory Domain Services role. I do suspect that this will eventually change (I hope that AD goes the way of Exchange). My rule of thumb is this: if I want a hybrid cloud with cross-site authentication and authorization, then I will run domain controllers in the cloud.

Backup

Running DCs as virtual machines in Azure is safe, as long as you follow some rules. If your domain controllers run an OS older than Windows Server 2012 (WS2012), you should never copy a domain controller’s virtual hard disks or restore it from backup. Azure supports the VM-GenerationID feature of WS2012 and later, so those domain controllers can safely be restored from backup.

There is a bit of a “gotcha” with this VM-GenerationID feature. The normal practice for shutting down virtual machines in Azure is to do so from the portal or PowerShell. Doing so deallocates the virtual machine and resets the VM-GenerationID, which is undesirable. We should always shut down domain controllers using the shutdown command in the guest OS (a minimal sketch of the two approaches follows the list below); otherwise:

  • The invocation ID of the AD DS database is reset
  • The RID pool is discarded
  • SYSVOL is marked as non-authoritative
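
Below is a minimal sketch of the two approaches, assuming the Azure SDK for Python (azure-identity and azure-mgmt-compute) and hypothetical resource names; it illustrates the distinction rather than being a recommended tool.

    # Safe: run this inside the guest OS of the domain controller, so Windows
    # shuts down cleanly and the VM-GenerationID is left alone.
    import subprocess
    subprocess.run(["shutdown", "/s", "/t", "0"], check=True)

    # Undesirable for virtual DCs: deallocating from the management plane
    # (portal, PowerShell or SDK) releases the VM and resets the VM-GenerationID.
    # from azure.identity import DefaultAzureCredential
    # from azure.mgmt.compute import ComputeManagementClient
    # compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
    # compute.virtual_machines.begin_deallocate("rg-identity", "dc01").result()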

IP Configuration

You should never set an Azure virtual machine’s IP configuration from within the guest OS. A new domain controller will complain about having a DHCP configuration – let it complain, because there will be no harm if you follow the correct procedures.

Edit the settings of the NIC of each virtual domain controller in the Azure Portal. Set the NIC to use a static IP address and record this IP address. Your new DC(s) will be the DNS servers of your network; open the settings of the virtual network (VNet) and configure the DNS server settings to use the IP addresses of your new domain controllers.

Note that if you are adding a new domain controller to an existing on-premises domain, then you will need a site-to-site network connection and you should temporarily configure the VNet to use the IP address of one of your on-premises DCs as a DNS server; this will allow your new cloud-based DC to find the domain so that it can join it.
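
As a rough sketch of those two steps with the Azure SDK for Python (assuming a recent azure-identity and azure-mgmt-network; the subscription ID, resource group, NIC and VNet names, and the DNS addresses are all hypothetical):

    # Sketch: pin the DC's NIC to a static private IP in the Azure fabric (not in
    # the guest OS), then point the virtual network's DNS servers at the new DCs.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import DhcpOptions

    network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # 1. Switch the NIC's primary IP configuration from Dynamic to Static; the
    #    address it already holds is kept, so record it for the DNS settings below.
    nic = network.network_interfaces.get("rg-identity", "dc01-nic")
    nic.ip_configurations[0].private_ip_allocation_method = "Static"
    print("DC static IP:", nic.ip_configurations[0].private_ip_address)
    network.network_interfaces.begin_create_or_update("rg-identity", "dc01-nic", nic).result()

    # 2. Make the new domain controllers the DNS servers for the whole VNet.
    vnet = network.virtual_networks.get("rg-identity", "vnet-hub")
    vnet.dhcp_options = DhcpOptions(dns_servers=["10.0.0.4", "10.0.0.5"])
    network.virtual_networks.begin_create_or_update("rg-identity", "vnet-hub", vnet).result()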

Domain Controller Files

I rarely pay attention to anything in the wizard when promoting a new domain controller; it’s all next-next-next, and I doubt I’m unique. However, there is one very important screen that you must not overlook.

Azure implements write caching on the OS disk of virtual machines. This causes an issue for databases such as the AD DS database and can lead to problems such as USN rollback. You must add a data disk, with caching disabled, to the virtual machine and use this new volume to store:

  • AD DS database
  • Logs
  • SYSVOL

There is no additional cost for this if you use standard storage disks; standard storage is billed based on the data stored, not the overall size of the deployed disks. Note that Azure Backup instance charges are based on the size of the disks, but you shouldn’t need so much data that you exceed the 50GB-500GB price band and incur additional instance charges.
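
A hedged sketch of attaching such a disk with the Azure SDK for Python follows (assuming a recent azure-mgmt-compute; the names, LUN and size are hypothetical, and the disk still has to be initialised and formatted inside the guest before the AD DS paths are moved onto it):

    # Sketch: add an empty managed data disk with host caching disabled, to hold
    # the AD DS database, logs and SYSVOL.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient
    from azure.mgmt.compute.models import DataDisk

    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    vm = compute.virtual_machines.get("rg-identity", "dc01")
    vm.storage_profile.data_disks.append(DataDisk(
        lun=0,
        name="dc01-ntds",
        create_option="Empty",   # provision a new, empty managed disk
        disk_size_gb=32,
        caching="None",          # the important part: no host write caching
    ))
    compute.virtual_machines.begin_create_or_update("rg-identity", "dc01", vm).result()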

Active Directory Topology

If you work in a large enterprise, then you’ve probably already realized that it would be a good idea to define an AD topology for your new site (Azure). However, many of you work in the small-to-midsized enterprise world, so you’ve never had to do much in AD Sites and Services.

You should deploy the following for each region that you deploy AD DCs into:

  • An AD site
  • A network definition for each address space in Azure that will have domain members. Associating this definition with a site will control authentication/authorization and AD replication traffic
  • A site link to control the flow and timing of AD replication traffic – try to leverage lower cost links if you have a choice.

You can perform some advanced engineering of AD replication to reduce outbound data transfer costs. Be careful because some advanced AD engineering can have unintended consequences!

  • You can disable the Bridge All Site Links option to prevent transitive inter-site replication.
  • If you have multiple site links to your Azure sites, then you can add costs to the links to mimic the costs of your networks. For example, a site-to-site VPN might be more cost effective than an ExpressRoute connection.
  • If you have a lot of data churn in your Azure site(s) that doesn’t affect your on-premises site(s), then you can reduce the frequency of replication to avoid redundant replication.
  • You can disable change notification to further reduce replication — be careful of this feature!
  • Changing the replication compression algorithm can reduce network costs. The DWORD value HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\Replicator compression algorithm controls the algorithm that is used. The default is 3 (Xpress Compress); changing this value to 2 (MSZip) increases compression but will also increase CPU utilization on domain controllers (a registry sketch follows this list).
  • Read-Only Domain Controllers (RODCs) perform no outbound replication, but they rely on a network connection to writable domain controllers to retrieve the data needed for authentication and authorization.
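
As a small illustration of the registry change mentioned in the list above (standard-library winreg only, run elevated on the domain controller itself; a sketch, not a recommendation):

    # Sketch: switch inter-site replication compression from the default
    # Xpress Compress (3) to MSZip (2) on this domain controller.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"
    VALUE_NAME = "Replicator compression algorithm"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 2 = MSZip (more compression, more CPU); 3 = Xpress Compress (default)
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 2)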

Read-Only Domain Controllers

RODCs are supported in Azure. You can choose to deploy RODCs in Azure if you need to restrict what secrets are stored in the cloud; you can filter which attributes are available in the cloud if you wish. Most Windows roles work well with RODCs, but make sure that your applications will work well and not become overly dependent on site-to-site network links.

Global Catalog Servers

Every DC in a single-domain forest should be a global catalog server; this does not incur any additional replication traffic (outbound data transfer) costs.

However, multi-domain forests use universal groups and these require careful placement and usage of global catalog (GC) servers. You should place at least one GC server in Azure if you require the multi-domain forest to continue authenticating users if the site-to-site link fails – a GC is required to expand universal group membership, and a DC must verify that the user is not in a universal group with a DENY permission.

Note that the placement or lack of placement of GCs will impact traffic if you have stretched a multi-domain AD forest to the cloud:

  • A lack of a cloud-based GC will cause cross-VPN/ExpressRoute traffic for every authentication.
  • Having one or more GCs in the cloud will increase replication traffic.

ADFS and Azure AD Connect

One of the risks of using ADFS to integrate your AD forest with Azure AD is that all of your cloud services will be unavailable if Azure AD cannot talk to your ADFS cluster. The simplest solution to this is to move ADFS (and some domain controllers) from on-premises to Azure, effectively putting your critical component next door to the service that requires reliable connectivity.

I have also opted to deploy Azure AD Connect in an Azure virtual machine. The benefit is that, in a disaster recovery scenario, my connection to Azure AD is already running in the cloud. On the downside, it can take up to 15 minutes (with the most frequent option on an AD site link) for a change made in an on-premises AD site to replicate to the site in Azure, and then up to 30 minutes (the default and most frequent replication option in Azure AD Connect) for the change to appear in Azure AD. You can manually trigger inter-site replication in AD and a sync in Azure AD Connect.
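
If you would rather trigger both by hand than wait, something along these lines works as a sketch (it assumes repadmin is available where the script runs and that the second command runs on the Azure AD Connect server, which ships the ADSync PowerShell module):

    # Sketch: push AD replication across site links, then request an immediate
    # Azure AD Connect delta sync instead of waiting for the schedules above.
    import subprocess

    # /A = all partitions, /d = identify servers by DN, /e = cross site links, /P = push
    subprocess.run(["repadmin", "/syncall", "/AdeP"], check=True)

    subprocess.run(["powershell", "-NoProfile", "-Command",
                    "Start-ADSyncSyncCycle -PolicyType Delta"], check=True)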

The post Best Practices for Domain Controller VMs in Azure appeared first on Petri.

OpenFog Consortium: It’s Been a Very Good Year

The content below is taken from the original (OpenFog Consortium: It’s Been a Very Good Year), to continue reading please visit the site. Remember to respect the Author & Copyright.

Fog computing is gaining traction across industries and academia, and across the world.  In just one year, the OpenFog Consortium has grown from six founding members to 51 members in 14 countries—and still counting! But it’s not just this flood of interest that is impressive—it’s the work our members are doing together to accelerate fog […]

Google and Slack deepen partnership in the face of Microsoft Teams

The content below is taken from the original (Google and Slack deepen partnership in the face of Microsoft Teams), to continue reading please visit the site. Remember to respect the Author & Copyright.

Slack and Google have vastly deepened their partnership roughly a month after Microsoft announced its competitor to the popular enterprise chat service.

Wednesday saw the announcement of several new features aimed at making G Suite, Google’s set of productivity software and services, more useful to people who use Slack. The functionality resulting from the partnership will make it easier to share and work on files stored in Google Drive using Slack.

Slack and Google have been partners since early in the lifecycle of the chat service, which gives business users a set of rooms where they can discuss work, share files and more. Microsoft recently announced Teams, a similar service integrated into Office 365 that’s currently in beta.

A representative for Slack said via email that Microsoft’s introduction of a competitive product that is expected to become generally available next year didn’t have an impact on the company’s decision to deepen its integration with Google.

But one way or another, the partnership has clear benefits for both companies. Slack becomes more useful for G Suite users, and Google gets to make its productivity suite a more attractive offering to those organizations that want to use Slack.

In a thoroughly modern turn, Google is building a Drive Bot, which will inform users about changes to a file, and let them approve, reject and settle comments in Slack, rather than opening Google Docs. It goes along with Slack’s continuing embrace of bots as a key part of the chat service’s vision of productivity.

When users share a file from Drive in Slack, the chat service will check the sharing permissions on the file and make sure that it’s set up so that everyone in a channel can access what’s being shared. If the settings don’t match, users will be asked to modify them.

To go along with that, users will be able to associate Team Drive folders with particular Slack channels. That means the Marketing folder for members of the marketing team inside a company can also be linked to the team’s Slack channel.

When users upload files to a Slack channel with a linked Team Drive, the files will be backed up inside Google’s cloud storage service. When Team Drive files get changed, users will be notified using Slack.

In addition, Google and Slack are working together to give users access to previews of Google Docs that they share in Slack, so it’s possible for people to see inside a file at a glance without having to open it.

It’s not yet clear when all of this functionality will be making its way into Slack, however. The chat service startup will let users sign up for notifications about the forthcoming updates.

Codemade Is a Big Collection of Open-Source Electronics Projects

The content below is taken from the original (Codemade Is a Big Collection of Open-Source Electronics Projects), to continue reading please visit the site. Remember to respect the Author & Copyright.

When it comes to tracking down DIY electronics project ideas, you’ve got a lot of solid web sites out there. Codemade is a web app that gathers a bunch of those sources together.

Read more…

UK vinyl sales made more money than music downloads last week

The content below is taken from the original (UK vinyl sales made more money than music downloads last week), to continue reading please visit the site. Remember to respect the Author & Copyright.

Digital music might be the future, but legacy formats like vinyl aren’t going away any time soon. New figures from the Entertainment Retailers Association (ERA) have shown that more money was spent on vinyl records than digital music downloads in the UK last week, highlighting a significant shift in how consumers are choosing to buy their music.

Figures show that during week 48 of 2016, consumers spent £2.4 million on vinyl, while downloads took £2.1 million. Compare that to the same period last year when £1.2 million was spent on records, with digital downloads bringing in £4.4 million. The ERA puts the surge in sales down to recent shopping events like Black Friday and the popularity of the format as a Christmas gift. It’s also helped by the fact that Sainsbury’s and Tesco now stock records in many of their branches.

It’s welcome news for vinyl lovers and the music industry in general, but digital music is also going from strength to strength. Instead of buying music to keep, Brits are increasingly turning to streaming services like Spotify to get their music fix. Last weekend, The Weeknd broke streaming records on Spotify after his new album was streamed 40 million times on day one and 223 million times in its first week.

It’s also worth considering that vinyl albums are often a lot more expensive than downloads. BBC News reports that last week’s biggest-selling vinyl was Kate Bush’s triple-disc live album Before The Dawn, which costs around £52. The same album is £13 on Amazon. Downloaded albums are still more popular, though: last week saw 295,000 digital downloads versus 120,000 vinyl album sales.

Recent research suggests that some people don’t even buy vinyl to listen to, with 7 percent of collectors admitting they don’t own a record player. It’s believed that some buy records to help support artists they like, while others may use the sleeves to decorate their home.

Via: BBC News

Say goodbye to MS-DOS command prompt

The content below is taken from the original (Say goodbye to MS-DOS command prompt), to continue reading please visit the site. Remember to respect the Author & Copyright.

My very first technology article, back in 1987, was about MS-DOS 3.30. Almost 30 years later, I’m still writing, but the last bit of MS-DOS, cmd.exe — the command prompt — is on its way out the door.

It’s quite possible that you have been using Microsoft Windows for years — decades, even — without realizing that there’s a direct line to Microsoft’s earliest operating system or that an MS-DOS underpinning has carried over from one Windows version to another — less extensive with every revision, but still there nonetheless. Now we’re about to say goodbye to all of that.

Interestingly, though, there was not always an MS-DOS from Microsoft, and it wasn’t even dubbed that at birth. The history is worth reviewing now that the end is nigh.

Back in 1980, the ruling PC operating system was Digital Research’s CP/M for the Z80 processor. At the same time, Tim Paterson created the Quick and Dirty Operating System (QDOS). This was a CP/M clone with a better file system for the hot new processor of the day, the 8086. At the time, no one much cared.

Until, that is, IBM decided to build an 8086-based PC. For this new gadget, IBM needed to settle on programming languages and an operating system. It could get the languages from a small independent software vendor called Microsoft, but where could it get an operating system?

The obvious answer, which a 25-year-old Bill Gates seconded, was to go straight to the source: CP/M’s creator and Digital Research founder, Gary Kildall. What happened next depends on whom you believe. But whether Kildall was really out flying for fun when IBM came by to strike a deal for CP/M for the x86 or not, he didn’t meet with IBM, and they didn’t strike a deal.

So IBM went back to Microsoft and asked it for help in finding an operating system. It just so happened that Paul Allen, Microsoft’s other co-founder, knew about QDOS. Microsoft subsequently bought QDOS for approximately $50,000 in 1981. Then, in short order, IBM made it one of the PC’s operating systems, Microsoft renamed QDOS to MS-DOS, and, crucially, it got IBM to agree that Microsoft could sell MS-DOS to other PC makers. That concession was the foundation on which Microsoft would build its empire.

Late last month, in Windows 10 Preview Build 14971, the command prompt was put out to pasture. Dona Sarkar, head of the Windows Insider Program, wrote, “PowerShell is now the defacto command shell from File Explorer. It replaces Command Prompt (aka, cmd.exe).”

That “defacto” suggests that it’s not all over for the command prompt. And it’s true that you can still opt out of the default by opening Settings > Personalization > Taskbar, and turning “Replace Command Prompt with Windows PowerShell in the menu when I right-click the Start button or press Windows key+X” to “Off.”

But you might as well wave bye-bye to the old command prompt. Build 14971 isn’t just any beta. It’s the foundation for the Redstone 2 upgrade, a.k.a. Windows 10 SP2. This is the future of Windows 10, and it won’t include this oldest of Microsoft software relics.

PowerShell, which just turned 10, was always going to be DOS’s replacement. It consists of a command-line shell and a .Net Framework-based scripting language. PowerShell was added to give server administrators fine control over Windows Server. Over time, it has become a powerful system management tool for both individual Windows workstations and servers. Command.com and its NT-twin brother, cmd.exe, were on their way out.

They had a good run. A good way to understand how they held out for so long is to look at DOS as a house under constant renovation.

First, all there was was the basic structure, the log cabin, if you will, of Microsoft operating systems. That log cabin was given a coat of paint, which is what Windows 1.0 amounted to — MS-DOS all the way, with a thin veneer of a GUI. Over time, Microsoft completely changed the façade in ways that made the old log cabin completely unrecognizable.

With Windows NT in 1993, Windows started replacing the studs and joists as well. Over the years, Microsoft replaced more and more of MS-DOS’s braces and joints with more modern and reliable materials using improved construction methods.

Today, after decades, the last pieces of the antique structure are finally being removed. All good things must come to an end. It’s way past time. Many security problems in Windows trace back to its reliance on long-antiquated software supports.

Still, it’s been fun knowing you, MS-DOS. While you certainly annoyed the heck out of me at times, you were also very useful back in your day. I know many programmers and system administrators who got their start with you on IBM PCs and clones. So, goodbye and farewell.

While few users even bothered to look at you these days, you helped launch the PC revolution. You won’t be forgotten.

This story, “Say goodbye to MS-DOS command prompt,” was originally published by Computerworld.