Microsoft will buy out existing cloud storage contracts for customers switching to OneDrive for Business

The content below is taken from the original ( Microsoft will buy out existing cloud storage contracts for customers switching to OneDrive for Business), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is targeting its cloud storage rivals, including Dropbox, Box, and Google, today by offering to essentially buy out customers’ existing contracts if they make the switch to OneDrive for Business. The company says that customers currently paying for one of these competing solutions can instead opt to use OneDrive for free for the remainder of their contract’s term. The… Read More

SiFive Introduces RISC-V Linux-Capable Multicore Processor

The content below is taken from the original ( SiFive Introduces RISC-V Linux-Capable Multicore Processor), to continue reading please visit the site. Remember to respect the Author & Copyright.

Slowly but surely, RISC-V, the Open Source architecture for everything from microcontrollers to server CPUs, is making inroads in the community. Now SiFive, the major company behind putting RISC-V chips into actual silicon, is releasing a chip that’s even more powerful. At FOSDEM this weekend, SiFive announced the release of a Linux-capable Single Board Computer built around the RISC-V ISA. It’s called the HiFive Unleashed, and it’s the first piece of silicon capable of running Linux on a RISC-V core.

SiFive’s HiFive Unleashed

The HiFive Unleashed is built around the Freedom U540 SOC, a quad-core processor built on a 28nm process. The chip itself boasts four U54 RV64GC cores with an additional E51 RV64IMAC management core. This chip has support for 64-bit DDR4 with ECC and a single Gigabit Ethernet port. Those specs are just the chip though, and you’ll really need a complete system for a single board computer. This is the HiFive Unleashed, a board sporting the Freedom U540, 8GB of DDR4 with ECC, 32MB of Quad SPI Flash, Gigabit Ethernet, and a microSD card slot for storage. If you don’t mind being slightly inaccurate while describing this to a technological youngling, you could say this is comparable to a Raspberry Pi but with a completely Open Source architecture.

News of this caliber can’t come without some disappointment though, and in this case it’s that the HiFive Unleashed will ship this summer and cost $999. Yes, compared to a Raspberry Pi or BeagleBone that is an extremely high price, but it has to be borne in mind that this is a custom chip and low-volume silicon on a 28nm process. Until a router or phone manufacturer picks up a RISC-V chip for some commodity equipment, this architecture will be expensive.

This announcement of a full Single Board Computer comes just months after the announcement of the SOC itself. Already, GCC support works, Linux stuff is going upstream, and the entire Open Source community seems reasonably enthusiastic about RISC-V. It’ll be great to see where this goes in the coming years, and when we can get Linux-capable RISC-V chips for less than a kilobuck.

Spotify teams with Discord to soundtrack your gaming chats

The content below is taken from the original ( Spotify teams with Discord to soundtrack your gaming chats), to continue reading please visit the site. Remember to respect the Author & Copyright.

Spotify and gaming chat app Discord are joining forces so your entire channel can bump to the same music during a raid. Starting today, you can link your Spotify Premium account to your Discord account and keep the beats rocking for your entire commu…

New Azure Data Factory self-paced hands-on lab for UI

The content below is taken from the original ( New Azure Data Factory self-paced hands-on lab for UI), to continue reading please visit the site. Remember to respect the Author & Copyright.

A few weeks back, we announced the public preview release of the new browser-based V2 UI experience for Azure Data Factory. We’ve since partnered with Pragmatic Works, long-time experts in the Microsoft data integration and ETL space, to create a new set of hands-on labs that you can now use to learn how to build data integration (DI) patterns using ADF V2.

In the lab repo, you will find data files and scripts in the Deployment folder. There are also lab manual folders for each lab module, as well as an overview presentation to walk you through the labs. Below you will find more details on each module.

The repo also includes a series of PowerShell and database scripts as well as Azure ARM templates that will generate resource groups that the labs need in order for you to successfully build out an end-to-end scenario, including some sample data that you can use for Power BI reports in the final Lab Module 9.
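
If you prefer to kick off the resource deployment from the command line rather than the portal, a minimal sketch using the Azure CLI looks like the following. The resource group name, region, and template file name here are placeholders; use the ARM templates and parameter values provided in the repo’s Deployment folder.

az group create \
--name adf-lab-rg \
--location eastus

az group deployment create \
--resource-group adf-lab-rg \
--template-file ./Deployment/azuredeploy.json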

Here is how the individual labs are divided:

  • Lab 1 – Setting up ADF and Resources. Start here to get all of the ARM resource groups and database backup files loaded properly.
  • Lab 2 – Lift and Shift of SSIS to Azure. Go to this lab if you have existing SSIS packages on-prem that you’d like to migrate directly to the cloud using the ADF SSIS-IR capability.
  • Lab 3 – Rebuilding an existing SSIS job as an ADF pipeline.
  • Lab 4 – Take the new ADF pipeline and enhance it with data from Cloud Sources.
  • Lab 5 – Modernize the DW pipeline by transforming Big Data with HDInsight.
  • Lab 6 – Go to this lab to learn how to create copy workflows in ADF that load data into Azure SQL Data Warehouse.
  • Lab 7 – Build a trigger-based schedule for your new ADF pipeline.
  • Lab 8 – You’ve operationalized your pipeline based on a schedule. Now learn how to monitor and manage that DI process.
  • Lab 9 – Bringing it all Together

Thank you, and we hope that you enjoy using these labs to learn how to build scale-out data integration projects using Azure Data Factory!

Looking Back at Microsoft Bob

The content below is taken from the original ( Looking Back at Microsoft Bob), to continue reading please visit the site. Remember to respect the Author & Copyright.

Every industry has at least one. Automobiles had the Edsel. PC hardware had the IBM PCjr and the Micro Channel bus. In the software world, there’s Bob. If you don’t remember him, Bob was Microsoft’s 1995 answer to why computers were so darn hard to use. [LGR] gives us a nostalgic look back at Bob and concludes that we hardly knew him.

Bob altered your desktop to be a house instead of a desk. He also had helpers including the infamous talking paper clip that suffered slings and arrows inside Microsoft Office long after Bob had been put to rest.

Microsoft had big plans for Bob. There was a magazine and add-on software (apparently there was only one title released). Of course, if you want to install Bob yourself, you’ll need to boot Windows 3.1 — this is 1995, remember.

To log in you had to knock on the big red door and then tell the helpful dog all your personal information. Each user had a private room and all users would share other rooms.

We like to feature retrocomputing projects celebrating the great old computers of our youth. This is kind of the anti-example. Bob was a major fail. PC World awarded it 7th place in the 25 worst tech products of all time and CNet called it the number one worst product of the decade.

Once you’ve had enough of 1995’s failed software, you can always read up on some more successful Z80 clones. Or you can go further back in the wayback machine and see what user interfaces were like in the 1960s and 1970s.

802.11: Wi-Fi standards and speeds explained

The content below is taken from the original ( 802.11: Wi-Fi standards and speeds explained), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the world of wireless, the term Wi-Fi is synonymous with wireless access in general, despite the fact that it is a specific trademark owned by the Wi-Fi Alliance, a group dedicated to certifying that Wi-Fi products meet the IEEE’s set of 802.11 wireless standards.

These standards, with names such as 802.11b (pronounced “Eight-O-Two-Eleven-Bee”, ignore the “dot”) and 802.11ac, comprise a family of specifications that started in the 1990s and continues to grow today. The 802.11 standards codify improvements that boost wireless throughput and range as well as the use of new frequencies as they become available. They also address new technologies that reduce power consumption.


Biocycler wants to recycle construction waste into new building materials

The content below is taken from the original ( Biocycler wants to recycle construction waste into new building materials), to continue reading please visit the site. Remember to respect the Author & Copyright.

Waste from construction and demolition sites accounts for approximately 15-30% of all landfill content in the United States. According to NASA’s estimates, more than 500 million tons of often non-biodegradable building materials containing carcinogens and other toxins are sent off to the junkyard yearly. 

Seeking to alleviate some of these environmental consequences of the built environment, Chris Maurer of redhouse studio has created the Biocycler, a mobile machine to be placed at demolition sites in order to recycle waste. Maurer, who previously served as director of the non-profit firm MASS Design Group in Rwanda, has teamed up with both NASA and MIT for the project, which is currently running a Kickstarter campaign to build a working prototype.

The machine, which will collect waste on site, uses living organisms, primarily mushrooms, as binders to form ground-up trash materials into bricks. Fungi—Earth’s great decomposers—contain mycelium, the vegetative part of mushrooms that e…

Password Rotation for Windows on Amazon EC2 Made Easy with EC2Rescue

The content below is taken from the original ( Password Rotation for Windows on Amazon EC2 Made Easy with EC2Rescue), to continue reading please visit the site. Remember to respect the Author & Copyright.

EC2Rescue for Windows is an easy-to-use tool that you run on an Amazon EC2 Windows Server instance to diagnose and troubleshoot possible problems. A common use of the tool is to reset the local administrator password.

Password rotation is an important security task in any organization. In addition, setting strong passwords is necessary to ensure that a password isn’t compromised by brute-force or dictionary attacks. However, these tasks become very challenging to perform manually, particularly when you are dealing with more than a few servers.

AWS Systems Manager allows you to manage your fleet remotely, and to run commands at scale using Run Command. The Systems Manager Parameter Store feature is integrated with AWS Key Management Service (AWS KMS). Parameter Store allows for string values to be stored encrypted, with granular access controlled by AWS Identity and Access Management (IAM) policies.

In this post, I show you how to rotate the local administrator password for your Windows instances using EC2Rescue, and store the rotated password in Parameter Store. By using Systems Manager Maintenance Window, you can then schedule this activity to occur automatically at a frequency of your choosing.

Overview

EC2Rescue is available as a Run Command document called AWSSupport-RunEC2RescueForWindowsTool. The option to reset the local administrator password allows you to specify which KMS key to use to encrypt the randomly generated password.
If your EC2 Windows instances are already enabled with Systems Manager, then the password reset via EC2Rescue Run Command happens online, with no downtime. You can then configure a Systems Manager maintenance window to run AWSSupport-RunEC2RescueForWindowsTool on a schedule (make sure your EC2 instances are running during the maintenance window!).
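
Outside of a maintenance window, you can also invoke the same document on demand with Run Command. Here is a minimal sketch; the instance ID below is a placeholder, and the KMS key ID should be the one you create in the next section:

aws ssm send-command \
--document-name "AWSSupport-RunEC2RescueForWindowsTool" \
--instance-ids "i-0123456789abcdef0" \
--parameters '{"Command":["ResetAccess"],"Parameters":["<your-kms-key-id>"]}'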

Workflow

In this post, I provide step-by-step instructions to configure this solution manually. For those of you who want to see the solution in action with minimal effort, I have created a CloudFormation template that configures everything for you, in the us-west-2 (Oregon) region.

 

Keep reading to learn what is being configured, or jump to the Deploy the solution section for a description of the template parameters.

Define a KMS key

First, you create a KMS key specifically to encrypt Windows passwords. This gives you control over which users and roles can encrypt these passwords, and who can then decrypt them. I recommend that you create a new KMS key dedicated to this task to better manage access.

Create a JSON file for the Key policy

In a text editor of your choosing, copy and paste the following policy. Replace ACCOUNTID with your AWS Account ID. Administrators is the IAM role name that you want to allow to decrypt the rotated passwords, and EC2SSMRole is the IAM role name attached to your EC2 instances. Save the file as RegionalPasswordEncryptionKey-Policy.json.

{
  "Version": "2012-10-17",
  "Id": "key-policy",
  "Statement": [
    {
      "Sid": "Allow access for Key Administrators",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTID:role/Administrators"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Allow decryption",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTID:role/Administrators"
      },
      "Action": "kms:Decrypt",
      "Resource": "*"
    },
    {
      "Sid": "Allow encryption",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTID:role/EC2SSMRole"
      },
      "Action": "kms:Encrypt",
      "Resource": "*"
    }
  ]
}

Create the key and its alias (user-friendly name)
Use the following CLI command to create a KMS key that the IAM role Administrators can manage, and that the IAM role EC2SSMRole can use for encryption only:

aws kms create-key \
--policy file://RegionalPasswordEncryptionKey-Policy.json \
--description "Key used to encrypt local Administrator passwords stored in SSM Parameter Store."

Output:
{
	"KeyMetadata":
	{
		"Origin": "AWS_KMS",
		"KeyId": "88eea0b7-0508-4318-a0bc-feee4a5250a3",
		"Description": "Key used to encrypt local Administrator passwords stored in SSM Parameter Store.",
		(...)
	}
}

Use the following CLI command to create an alias for the KMS key:

aws kms create-alias \
--alias-name alias/WindowsPasswordRotation-EncryptionKey \
--target-key-id 88eea0b7-0508-4318-a0bc-feee4a5250a3

Define a maintenance window

Use a maintenance window to schedule the password reset. Here is the CLI command to schedule such activity every Sunday at 5AM UTC:

aws ssm create-maintenance-window \
--name "windows-password-rotation" \
--schedule "cron(0 5 ? * SUN *)" \
--duration 2 \
--cutoff 1 \
--no-allow-unassociated-targets

Output:
{ "WindowId": "mw-0f2c58266a8c49246" }

Define a target

Use tags to identify the instances whose local administrator password should be reset. For example, you can target all instances tagged with the key Environment and the value Production.

aws ssm register-target-with-maintenance-window \
--window-id "mw-0f2c58266a8c49246" \
--targets "Key=tag:Environment,Values=Production" \
--owner-information "Production Servers" \
--resource-type "INSTANCE"

Output:
{
	"WindowTargetId": "a5fc445b-a7f1-4591-b528-98440832da41"
}

Define a maintenance window IAM role

These steps are only necessary if you haven’t configured a maintenance window before. Skip this section if you already have your IAM role with the AmazonSSMMaintenanceWindowRole AWS Managed Policy attached.

Create a JSON file for the role trust policy

In a text editor of your choosing, copy and paste the following trust policy. Save the file as AutomationMWRole-Trust-Policy.json.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ssm.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create the IAM role and attach the Amazon managed policy for SSM maintenance window

Use the following two CLI commands to create the IAM role AutomationMWRole and attach the AmazonSSMMaintenanceWindowRole AWS Managed Policy.

aws iam create-role \
--role-name AutomationMWRole \
--assume-role-policy-document file://AutomationMWRole-Trust-Policy.json

aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole \
--role-name AutomationMWRole

Define a task

EC2Rescue is available as a Run Command document called AWSSupport-RunEC2RescueForWindowsTool. This document allows you to run EC2Rescue remotely on your instances. One of the available options is the ability to reset the local administrator password. As the final configuration step, create a Run Command task that runs AWSSupport-RunEC2RescueForWindowsTool with the following parameters:

  • Command = ResetAccess
  • Parameters = KMS Key ID

Rotated passwords are saved in Parameter Store, encrypted with the KMS key that you specify. Here is the CLI command using the KMS key, target, maintenance window, and role that you previously created:

aws ssm register-task-with-maintenance-window \
--targets "Key=WindowTargetIds,Values=a5fc445b-a7f1-4591-b528-98440832da41" \
--task-arn "AWSSupport-RunEC2RescueForWindowsTool" \
--service-role-arn "arn:aws:iam::ACCOUNTID:role/AutomationMWRole" \
--window-id "mw-0f2c58266a8c49246" \
--task-type "RUN_COMMAND" \
--task-parameters  "{\"Command\":{ \"Values\": [\"ResetAccess\"] }, \"Parameters\":{ \"Values\": [\"88eea0b7-0508-4318-a0bc-feee4a5250a3\"] } }" \
--max-concurrency 5 \
--max-errors 1 \
--priority 1

Output:
{
	"WindowTaskId": "a3571731-c64c-4e43-be8d-7b543942a179"
}

Your Windows instances need to be enabled for Systems Manager, and to have additional IAM permissions to be able to write to Parameter Store. You can accomplish this by adding the following policy to the existing IAM roles associated with your Windows instances:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutParameter"
            ],
            "Resource": [
                "arn:aws:ssm:*:ACCOUNTID:parameter/EC2Rescue/Passwords/*"
            ]
        }
    ]
}

Save the policy as EC2Rescue-ResetAccess-Policy.json. Here are the CLI commands to create a new IAM customer managed policy and attach it to an existing Systems Manager IAM role for EC2 (in this example, EC2SSMRole).

aws iam create-policy \
--policy-name EC2Rescue-ResetAccess-Policy \
--policy-document file://EC2Rescue-ResetAccess-Policy.json

aws iam attach-role-policy \
--policy-arn arn:aws:iam::ACCOUNTID:policy/EC2Rescue-ResetAccess-Policy \
--role-name EC2SSMRole

Deploy the solution

To simplify the deployment of this solution, I have created an AWS CloudFormation template that configures all the parts described earlier in this post, and deploys this solution in us-west-2 (Oregon).

 

These are the parameters that the template requires:

  • Target
    • Tag key to filter your instances
    • Tag value to filter your instances
    • Cron expression for the maintenance window
  • Permissions
    • Existing IAM role name you are using for your Systems Manager-enabled EC2 instances, which will be authorized to encrypt the passwords.
  • Security
    • Current IAM role name that you are using to deploy this CloudFormation template. The role is authorized to decrypt the passwords and manage the KMS key.

The following figure shows the CloudFormation console, with the default template parameters and the existing EC2 IAM role named EC2SSMRole, as well as the administrative IAM role Administrators, which you use to create the CloudFormation stack.

The deployment takes a few minutes. Here is a sample maintenance window, which was last executed on October 29th, 2017.

Parameter Store has an encrypted parameter for each production Windows instance. You can decrypt each value from the console if you have kms:decrypt permissions on the key used to encrypt the password.
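
For example, an administrator with kms:Decrypt permissions on the key can also retrieve a rotated password from the CLI. This is a sketch that assumes the /EC2Rescue/Passwords/ parameter path from the IAM policy above and a hypothetical instance ID; use the exact parameter name that you see in Parameter Store:

aws ssm get-parameter \
--name "/EC2Rescue/Passwords/i-0123456789abcdef0" \
--with-decryption \
--query "Parameter.Value" \
--output text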

Conclusion

In this post, I showed you how to enhance the security of an EC2 environment by automating secure password rotation. Passwords are rotated on a schedule, and these actions are logged in AWS CloudTrail. Passwords are stored in Parameter Store with a KMS key, so that you have granular control over who has access to the encrypted passwords with IAM policies.
As long as your EC2 Windows instances are running during the maintenance window and are configured to work with Systems Manager, the local administrator password is rotated automatically. Additional maintenance windows can be created for other environments, or new targets added to the existing maintenance window (such as development, staging, or QA environments).

About the Author
Alessandro Martini is a Senior Cloud Support Engineer in the AWS Support organization. He likes working with customers, understanding and solving problems, and writing blog posts that outline solutions on multiple AWS products. He also loves pizza, especially when there is no pineapple on it.

Zombie … in SPAAACE: amateur gets chatty with ‘dead’ satellite

The content below is taken from the original ( Zombie … in SPAAACE: amateur gets chatty with ‘dead’ satellite), to continue reading please visit the site. Remember to respect the Author & Copyright.

NASA reckons it might even be able to operate ‘IMAGE’, thought dead since 2005

An amateur astronomer hunting the Zuma satellite that SpaceX may or may not have lost has instead turned up signals from a NASA bird thought dead since 2005.…

Azure ExpressRoute updates – New partnerships, monitoring and simplification

The content below is taken from the original ( Azure ExpressRoute updates – New partnerships, monitoring and simplification), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure ExpressRoute allows enterprise customers to privately and directly connect to Microsoft’s cloud services, providing a more predictable networking experience than traditional internet connections. ExpressRoute is available in 42 peering locations globally and is supported by a large ecosystem of more than 100 connectivity providers. Leading customers use ExpressRoute to connect their on-premises networks to Azure, as a vital part of managing and running their mission critical applications and services.

Cisco to build Azure ExpressRoute practice

As we continue to grow the ExpressRoute experience in Azure, we’ve found our enterprise customers benefit from understanding networking issues that occur in their internal networks with hybrid architectures. These issues can impact their mission-critical workloads running in the cloud.

To help address on-premises issues, which often require deep technical networking expertise, we continue to partner closely with Cisco to provide a better customer networking experience. Working together, we can solve the most challenging networking issues encountered by enterprise customers using Azure ExpressRoute.

Today, Cisco announced an extended partnership with Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute. We are fully committed to working with Cisco and other partners with deep networking experience to build and expand on their networking practices and help accelerate our customers’ journey to Azure.

Cisco Solution Support provides customers with additional centralized options for support and guidance for Azure ExpressRoute, targeting the customer’s on-premises end of the network.

New monitoring options for ExpressRoute

To provide more visibility into ExpressRoute network traffic, Network Performance Monitor (NPM) for ExpressRoute will be generally available in six regions in mid-February, following a successful preview announced at Microsoft Ignite 2017. NPM enables customers to continuously monitor their ExpressRoute circuits and alert on several key networking metrics including availability, latency, and throughput, in addition to providing a graphical view of the network topology.

NPM for ExpressRoute can easily be configured through the Azure portal to quickly start monitoring your connections.

We will continue to enhance the footprint, features, and functionality of NPM for ExpressRoute to provide richer monitoring capabilities for ExpressRoute.

 

Figure 1: Network Performance Monitor and Endpoint monitoring simplifies ExpressRoute monitoring

Endpoint monitoring for ExpressRoute enables customers to monitor connectivity not only to PaaS services such as Azure Storage, but also to SaaS services such as Office 365, over ExpressRoute. Customers can continuously measure and alert on the latency, jitter, packet loss, and topology of their circuits from any site to PaaS and SaaS services. A new preview of Endpoint Monitoring for ExpressRoute will be available in mid-February.

Simplifying ExpressRoute peering

To further simplify management and configuration of ExpressRoute, we have merged public and Microsoft peerings. Now available on Microsoft peering are Azure PaaS services such as Azure Storage and Azure SQL, along with Microsoft SaaS services (Dynamics 365 and Office 365). Access to your Azure Virtual Networks remains on private peering.

Figure 2: ExpressRoute with Microsoft peering and private peering

ExpressRoute, using BGP, provides Microsoft prefixes to your internal network. Route filters allow you to select the specific Office 365 or Dynamics 365 services (prefixes) accessed via ExpressRoute. You can also select Azure services by region (e.g. Azure US West, Azure Europe North, Azure East Asia). Previously this capability was only available on ExpressRoute Premium. We will be enabling Microsoft peering configuration for standard ExpressRoute circuits in mid-February.

New ExpressRoute locations

ExpressRoute is always configured as a redundant pair of virtual connections across two physical routers. This highly available connection enables us to offer an enterprise-grade SLA. We recommend that customers connect to Microsoft in multiple ExpressRoute locations to meet their Business Continuity and Disaster Recovery (BCDR) requirements. Previously this required customers to have ExpressRoute circuits in two different cities. In select locations we will provide a second ExpressRoute site in a city that already has an ExpressRoute site. A second peering location is now available in Singapore. We will add more ExpressRoute locations within existing cities based on customer demand. We’ll announce more sites in the coming months.

Apple whispers farewell to macOS Server

The content below is taken from the original ( Apple whispers farewell to macOS Server), to continue reading please visit the site. Remember to respect the Author & Copyright.

All the bits that make it a server are being deprecated

Apple appears to have all but killed macOS Server by deprecating most of what distinguishes it from a desktop OS.…

UK hits its 95 percent ‘superfast’ broadband coverage target

The content below is taken from the original ( UK hits its 95 percent ‘superfast’ broadband coverage target), to continue reading please visit the site. Remember to respect the Author & Copyright.

'Superfast' broadband with speeds of at least 24 Mbps is now available across 95 percent of the UK, according to new stats thinkbroadband.com published today. The milestone was actually achieved last month, meaning the government's Broadband Delivery…

Voicelabs launches Alpine to bring retailers to the voice shopping ecosystem

The content below is taken from the original ( Voicelabs launches Alpine to bring retailers to the voice shopping ecosystem), to continue reading please visit the site. Remember to respect the Author & Copyright.

 Voicelabs, a company that has been experimenting in the voice computing market for some time with initiatives in advertising and analytics, is now pivoting its business again – this time, to voice-enabled commerce. The company is today launching its latest product out of stealth: Alpine.AI, a solution that builds voice shopping apps for retailers by importing their catalog, then layering… Read More

How to move files between Office 365, SharePoint and OneDrive

The content below is taken from the original ( How to move files between Office 365, SharePoint and OneDrive), to continue reading please visit the site. Remember to respect the Author & Copyright.

Last year, Microsoft announced that they would allow copying files using Office 365. But from now on, Microsoft allows users to move files in Office 365 with full fidelity protections for metadata and version management. Thus it helps in easing up […]

This post How to move files between Office 365, SharePoint and OneDrive is from TheWindowsClub.com.

Trueface.ai integrates with IFTTT as the latest test-case of its facial recognition tech

The content below is taken from the original ( Trueface.ai integrates with IFTTT as the latest test-case of its facial recognition tech), to continue reading please visit the site. Remember to respect the Author & Copyright.

Trueface.ai, the stealthy facial recognition startup that’s backed by 500 Startups and a slew of angel investors, is integrating with IFTTT to allow developers to start playing around with its technology. Chief executive Shaun Moore tells me that the integration with IFTTT represents the first time that facial recognition technology will be made available to the masses without the need… Read More

New Whitepaper: Separating Multi-Cloud Strategy from Hype

The content below is taken from the original ( New Whitepaper: Separating Multi-Cloud Strategy from Hype), to continue reading please visit the site. Remember to respect the Author & Copyright.

A 2017 RightScale survey* reported that 85% of enterprises have embraced a multi-cloud strategy. However, depending on whom you ask, multi-cloud is either an essential enterprise strategy or a nonsense buzzword.

Part of the reason for such opposing views is that we lack a complete definition of multi-cloud.

What is multi-cloud? There is little controversy in stating that multi-cloud is “the simultaneous use of multiple cloud vendors,” but to what end, exactly? Many articles superficially claim that multi-cloud is a strategy for avoiding vendor lock-in, for implementing high availability, for allowing teams to deploy the best platform for their app, and the list goes on.

But where can teams really derive the most benefit from a multi-cloud strategy? Without any substance to these claims, it can be difficult to determine if multi-cloud can live past its 15 minutes of fame.

Is multi-cloud a strategy for avoiding vendor lock-in?

Of the many benefits associated with multi-cloud, avoiding vendor lock-in is probably the most cited reason for a multi-cloud strategy. In a recent Stratoscale survey, more than 80% of enterprises reported moderate to high levels of concern about being locked into a single public cloud platform.

How you see vendor lock-in depends on your organization’s goals. For some companies, avoiding vendor lock-in is a core business requirement or a way to achieve greater portability for their applications. With such portability, teams can more easily move applications to another framework or platform. For others, being able to take advantage of vendor-specific features that save time on initial development is an acceptable trade-off for portability. Regardless of your point of view, a strategy that avoids vendor lock-in at all costs does mean that you will have to give up some unique vendor functionality.

In most cases, teams can still avoid vendor lock-in even without using multiple cloud providers. But how?

The key to staying flexible even within a single platform is about the choices you make. Building in degrees of tolerance and applying disciplined design decisions as a matter of strategy can ensure flexibility and portability down the road.

With this in mind, teams can work to abstract away vendor-specific functionality. Here are two simple examples:

  • Code level: Accessing functionality such as blob storage through an interface that could be implemented using any storage back-end (local storage, S3, Azure Storage, Google Cloud Storage, among other options). In addition to the flexibility this provides during testing, this tactic makes it easier for developers to port to a new platform if needed.
  • Containers: Containers and their orchestration tools are additional abstraction layers that can make workloads more flexible and portable.

Any technology decision represents some degree of lock-in, so organizations must weigh the pros and cons of depending too heavily on any single platform or tools.

So, is multi-cloud really an effective strategy for avoiding vendor lock-in?

The bottom line is this: A multi-cloud strategy can help you avoid vendor lock-in, but it isn’t a requirement.

Implementing high availability and pursuing a best-fit technology approach are also frequently cited as benefits of a multi-cloud strategy. But how do these hold up when it comes to real deployments and actual business cases?

This is just one of the questions that we’ll answer in our new whitepaper, Separating Multi-Cloud Strategy from Hype: An Objective Analysis of Arguments in Favor of Multi-Cloud.

You will learn:

  • The reality vs. hype of multi-cloud deployments
  • How to achieve high availability while avoiding vendor lock-in
  • The advantages of a best-fit technology approach
  • The arguments that should be driving your multi-cloud strategy

Discover the best approach for your multi-cloud strategy in our new whitepaper; download it now.


References: RightScale 2017 State of the Cloud Report | 2017 Stratoscale Hybrid Cloud Survey

Using Docker Machine with Azure

The content below is taken from the original ( Using Docker Machine with Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’ve written about using Docker Machine with a number of different providers, such as with AWS, with OpenStack, and even with a local KVM/Libvirt daemon. In this post, I’ll expand that series to show using Docker Machine with Azure. (This is a follow-up to my earlier post on experimenting with Azure.)

As with most of the other Docker Machine providers, using Docker Machine with Azure is reasonably straightforward. Run docker-machine create -d azure --help to get an idea of some of the parameters you can use when creating VMs on Azure using Docker Machine. A full list of the various parameters and options for the Azure driver is also available.

The only required parameter is --azure-subscription-id, which specifies your Azure subscription ID. If you don’t know this, or want to obtain it programmatically, you can use this Azure CLI command:

az account show --query "id" -o tsv

If you have more than one subscription, you’ll probably need to modify this command to filter it down to the specific subscription you want to use.
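
For example, if you know the display name of the subscription you want to use, one way to filter is with a JMESPath query (the subscription name below is a placeholder):

az account list --query "[?name=='My Production Subscription'].id" -o tsv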

Additional parameters that you can supply include (but aren’t limited to):

  • Use the --azure-image parameter to specify the VM image you’d like to use. By default, the Azure driver uses Ubuntu 16.04.
  • By default, the Azure driver launches a Standard_A2 VM. If you’d like to use a different size, just supply the --azure-size parameter.
  • The --azure-location parameter lets you specify an Azure region other than the default, which is “westus”.
  • You can specify a non-default resource group (the default value is “docker-machine”) by using the --azure-resource-group parameter.
  • The Azure driver defaults to a username of “docker-user”; use the --azure-ssh-user to specify a different name.
  • You can customize networking configurations using the --azure-subnet-prefix, --azure-subnet, and --azure-vnet options. Default values for these options are 192.168.0.0/16, “docker-machine”, and “docker-machine”, respectively.

So what would a complete command look like? Using Bash command substitution to supply the Azure subscription ID, a sample command might look like this:

docker-machine create -d azure \
--azure-subscription-id $(az account show --query "id" -o tsv) \
--azure-location westus2 \
--azure-ssh-user ubuntu \
--azure-size "Standard_B1ms" \
dm-azure-test

This would create an Azure VM named “dm-azure-test”, based on the (default) Ubuntu 16.04 LTS image, in the “westus2” Azure region and using a username of “ubuntu”. Once the VM is running and responding across the network, Docker Machine will provision and configure Docker Engine on the VM.

Once the VM is up, all the same docker-machine commands are available:

  • docker-machine ls will list all configured machines (systems managed via Docker Machine); this is across all supported Docker Machine providers
  • docker-machine ssh <name> to establish an SSH connection to the VM
  • eval $(docker-machine env <name>) to establish a Docker configuration pointing to the remote VM (this would allow you to use a local Docker client to communicate with the remote Docker Engine instance; see the quick sketch after this list)
  • docker-machine stop <name> stops the VM (which can be restarted using docker-machine start <name>, naturally)
  • docker-machine rm <name> deletes the VM
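
As a quick usage sketch, here is how you might point your local Docker client at the new VM and run a test container on it (the VM name matches the example above; nginx is just an arbitrary test image):

eval $(docker-machine env dm-azure-test)
docker run -d -p 80:80 --name web-test nginx
docker-machine ip dm-azure-test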

Clearly, there’s more available, but this should be enough to get most folks rolling.

If I’ve missed something (or gotten it incorrect), please hit me up on Twitter. I’ll happily make corrections where applicable.

Glucose-tracking smart contact lens is comfortable enough to wear

The content below is taken from the original ( Glucose-tracking smart contact lens is comfortable enough to wear), to continue reading please visit the site. Remember to respect the Author & Copyright.

The concept of a smart contact lens has been around for a while. To date, though, they haven't been all that comfortable: they tend to have electronics built into hard substrates that make for a lens which can distort your vision, break down and othe…

ITSM Connector for Azure is now generally available

The content below is taken from the original ( ITSM Connector for Azure is now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post is also authored by Kiran Madnani, Principal PM Manager, Azure Infrastructure Management and Snehith Muvva, Program Manager II, Azure Infrastructure Management.

We are happy to announce that the IT Service Management Connector (ITSMC) for Azure is now generally available. ITSMC provides bi-directional integration between Azure monitoring tools and your ITSM tools – ServiceNow, Provance, Cherwell, and System Center Service Manager.

Customers use Azure monitoring tools to identify, analyze, and troubleshoot issues. However, the work items related to an issue are typically stored in an ITSM tool. Instead of having to go back and forth between your ITSM tool and Azure monitoring tools, customers can now get all the information they need in one place. ITSMC will improve the troubleshooting experience and reduce the time it takes to resolve issues. Specifically, you can use ITSMC to:

  1. Create or update work-items (Event, Alert, Incident) in the ITSM tools based on Azure alerts (Activity Log Alerts, Near Real-Time metric alerts and Log Analytics alerts)
  2. Pull the Incident and Change Request data from ITSM tools into Azure Log Analytics.

You can set up ITSMC by following the steps in our documentation. Once set up, you can send Azure alerts to your ITSM tool using the ITSM action in Action groups.

You can also view your incident and change request data in Log Analytics to perform trend analysis or correlate it against operational data.

To learn about pricing, visit our pricing page. We are excited to launch the ITSM Connector and look forward to your feedback.

How to use the new Files Restore feature in OneDrive for Business

The content below is taken from the original ( How to use the new Files Restore feature in OneDrive for Business), to continue reading please visit the site. Remember to respect the Author & Copyright.

The OneDrive team at Microsoft just announced a useful new feature for OneDrive for Business users. This feature is called Files Restore. Sometimes, when we are handling large amounts of cloud storage, there are chances that we may mess […]

This post How to use the new Files Restore feature in OneDrive for Business is from TheWindowsClub.com.

Acronis Releases a Free, AI-based Ransomware Protection Tool

The content below is taken from the original ( Acronis Releases a Free, AI-based Ransomware Protection Tool), to continue reading please visit the site. Remember to respect the Author & Copyright.

Acronis, a global leader in hybrid cloud data protection and storage, today released Acronis Ransomware Protection, a free, stand-alone version of… Read more at VMblog.com.

Windows 10 can now show you all the data it’s sending back to Microsoft

The content below is taken from the original ( Windows 10 can now show you all the data it’s sending back to Microsoft), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s and its partners’ engineers use the telemetry data from Windows 10 to diagnose crashes, learn about its users’ hardware configurations, and more. It’s on by default, and while Microsoft tells you that it collects this data and gives you a choice between basic (the default setting) and “full” diagnostics, it never allowed you to actually see exactly what… Read More

Someone’s Made The Laptop Clive Sinclair Never Built

The content below is taken from the original ( Someone’s Made The Laptop Clive Sinclair Never Built), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Sinclair ZX Spectrum was one of the big players in the 8-bit home computing scene of the 1980s, and decades later is sports one of the most active of all the retrocomputing communities. There is a thriving demo scene on the platform, there are new games being released, and there is even new Spectrum hardware coming to market.

One of the most interesting pieces of hardware is the ZX Spectrum Next, a Spectrum motherboard with the original hardware and many enhancements implemented on an FPGA. It has an array of modern interfaces, a megabyte of RAM compared to the 48k of the most common original, and a port allowing the connection of a Raspberry Pi Zero for off-board processing. Coupled with a rather attractive case from the designer of the original Sinclair model, it has become something of an object of desire. But it’s still an all-in-one desktop unit like the original; they haven’t made a portable. [Dan Birch] has changed all that, with his extremely well designed Spectrum Next laptop.

He started with a beautiful CAD design for a case redolent of the 1990s HP Omnibook style of laptop, but with some Spectrum Next styling cues. This was sent to Shapeways for printing, and came back looking particularly well-built. Into the case went an LCD panel and controller for the Next’s HDMI port, a Raspberry Pi, a USB hub, a USB to PS/2 converter, and a slimline USB keyboard. Unfortunately there does not seem to be a battery included, though we’re sure that with a bit of ingenuity some space could be found for one.

The result is about as good a Spectrum laptop as it might be possible to create, and certainly as good as anything Sinclair or Amstrad might have made had the 8-bit micro somehow survived into an alternative fantasy version of the 1990s, with market conditions pushing it into the form factor of a high-end compact laptop. The case design would do any home-made laptop proud as a basis, and we can only urge him to consider releasing some files.

There is a video of the machine in action, which we’ve placed below the break.

We’ve never brought you a laptop with a Spectrum main board before, but we have brought you a recreated Sinclair in the form of this modern-day ZX80.

Quantum Computing Hardware Teardown

The content below is taken from the original ( Quantum Computing Hardware Teardown), to continue reading please visit the site. Remember to respect the Author & Copyright.

Although quantum computing is still in its infancy, enough progress is being made for it to look a little more promising than other “revolutionary” technologies, like fusion power or flying cars. IBM, Intel, and Google all either operate or are producing double-digit qubit computers right now, and there are plans for even larger quantum computers in the future. With this amount of momentum, our quantum computing revolution seems almost certain.

There’s still a lot of work to be done, though, before all of our encryption is rendered moot by these new devices. Since nothing is easy (or intuitive) at the quantum level, progress has been considerably slower than it was during the transistor revolution of the previous century. These computers work because of two phenomena: superposition and entanglement. A quantum bit, or qubit, works because unlike a transistor it can exist in multiple states at once, rather than just “zero” or “one”. These states are difficult to determine because in general a qubit is built using a single atom. Adding to the complexity, quantum computers must utilize quantum entanglement too, whereby a pair of particles are linked. This is the only way for any hardware to “observe” the state of the computer without affecting any qubits themselves. In fact, the observations often don’t yet have the highest accuracy themselves.
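
To make the superposition idea concrete (this notation isn’t from the article, it’s just the standard way to write it): a single qubit’s state can be written as |ψ⟩ = α|0⟩ + β|1⟩, with |α|² + |β|² = 1, where α and β are complex amplitudes. Measuring the qubit yields “zero” with probability |α|² and “one” with probability |β|², which is why reading out a quantum computer only ever produces classical bits sampled from the underlying quantum state.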

There are some other challenges with the hardware as well. All quantum computers that exist today must be cooled to a temperature very close to absolute zero in order to take advantage of superconductivity. Whether this is because of a reduction in thermal noise, as is the case with universal quantum computers based on ion traps or other technology, or because it is possible to take advantage of other interesting characteristics of superconductivity like the D-Wave computers do, all of them must be cooled to a critical temperature. A further challenge is that even at these low temperatures, the qubits still interact with each other and their read/write devices in unpredictable ways that get more unpredictable as the number of qubits scales up.

So, once the physics and the refrigeration are sorted out, let’s take a look at how a few of the quantum computing technologies actually manipulate these quantum curiosities to come up with working, programmable computers.

Wire Loops and Josephson Junctions

Arguably the most successful commercial application of a quantum computer so far has been from D-Wave. While these computers don’t have “fully-programmable” qubits they are still more effective at solving certain kinds of optimization problems than traditional computers. Since they don’t have the same functionality as a “universal” quantum computer, it has been easier for the company to get more qubits on a working computer.

The underlying principle behind the D-Wave computer is a process known as quantum annealing. Basically, the qubits are set to a certain energy state and are then let loose to return to their lowest possible energy state. This can be imagined as a sort of quantum Traveling Salesman problem, and indeed that is exactly how the quantum computer can solve optimization problems. D-Wave hardware works by using superconducting wire loops, each with a weakly-insulating Josephson junction, to store data via small magnetic fields. With this configuration, the qubit achieves superposition because the electrons in the wire loop can flow both directions simultaneously, where the current flow creates the magnetic field. Since the current flow is a superposition of both directions, the magnetic field it produces is also a superposition of “up” and “down”. There is a tunable coupling element at each qubit’s location on the chip which is what the magnetic fields interact with and is used to physically program the processor and control how the qubits interact with each other.
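
For context, the optimization problem a quantum annealer tackles is usually written as an Ising energy minimization (this formulation isn’t spelled out above, but it is the standard one): E(s) = Σᵢ hᵢsᵢ + Σᵢ<ⱼ Jᵢⱼsᵢsⱼ, with each spin sᵢ ∈ {−1, +1}. Each spin corresponds to a qubit’s final state, the hᵢ terms are per-qubit biases, and the Jᵢⱼ terms are the programmable couplings described above. Programming the machine amounts to choosing h and J so that the lowest-energy spin assignment encodes the answer to your problem.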

Because the D-Wave computer isn’t considered a universal quantum computer, the processing power per qubit is not equivalent to that which would be found in a universal quantum computer. Current D-Wave computers have 2048 qubits, which if it were truly universal would have mind-numbing implications. Additionally, it’s still not fully understood if the D-Wave computer exhibits true quantum speedup but presumably companies such as Lockheed Martin wouldn’t have purchased them (repeatedly) if there wasn’t utility.

There are ways to build universal quantum computers, though. Essentially all that is needed is something that exhibits quantum effects and that can be manipulated by an external force. For example, one idea that has been floated involves using impurities found in diamonds. For now, though, there are two major approaches that we will focus on, with which scientists have built successful quantum computers: ion traps and semiconductors.

Ion Traps

In an ion trap, a qubit is created by ionizing an atom of some sort. This can be done in many ways, but this method using calcium ions implemented by the University of Oxford involves heating up a sample, shooting electrons at it, and trapping some of the charged ions for use in the computer. From there, the ion can be cooled to the required temperature using a laser. The laser’s wavelength is specifically chosen to resonate with the ion in such a way that the ion slows down to the point that its thermal fluctuations no longer impact its magnetic properties. The laser is also used to impart a specific magnetic field to the ion which is how the qubit is “programmed”. Once the operation is complete, the laser is again used to probe the ion and determine its state.

The problem of scalability immediately rears its head in this example, though. In order to have a large number of qubits, a large number of ions need to be trapped and simultaneously manipulated by a series of lasers. The fact that the qubits can influence each other adds to the problem, although this property can also be exploited to help read information out of the system. For reasons of complexity, it seems that the future of the universal quantum computer may be found in something we are all familiar with: silicon.

Semiconductors

Silicon, in its natural state, is actually an effective insulator. Silicon has four valence electrons which are all perfectly content to stay confined to a single nucleus which means there is no flow of charge, and therefore no current flow. To make something useful out of silicon like a diode or transistor which can conduct electricity in specific ways, silicon manufacturers infuse impurities in the silicon, usually boron or phosphorous atoms. This process of introducing impurities is called “doping” and imbues the silicon with an excess or deficit of electrons in the outer shells, which means that now there are charges present in the silicon lattice. These charges can be manipulated for all of the wonderful effects that we use to create our modern world.

But we can take this process of doping one step further. Rather than introducing a lot of impurities in the silicon, scientists have found a way to put a single impurity, a solitary phosphorus atom including its outermost electron, in a device that resembles a field-effect transistor. Using the familiar and well-understood behavior of these transistors, the single impurity becomes the qubit.

In this system, a large external magnetic field is applied in order to ensure that the electron is in a particular spin state. This is how the qubit is set. From there, the transistor can be used to read the state of this single electron. If the electron is in the “up” position, it will have enough energy to move out of the transistor and the device can register the remaining positive charge of the atom. If it is in the “down” position it will still be inside the transistor and the device will see a negative charge from the electron.

These (and some other) methods have allowed researchers to achieve long coherence times within the qubit — essentially the amount of time that the qubit is in a relevant state before it decays and is no longer useful. In ion traps, this time is on the order of nano- or microseconds. In this semiconductor type, the time is on the order of seconds, which is an eternity in the world of quantum computing. If this progress keeps up, quantum computers may actually be commonplace within the next decade. And we’ll just have to figure out how to use them.

Top 9 Frequently Asked Questions About Ripple and XRP

The content below is taken from the original ( Top 9 Frequently Asked Questions About Ripple and XRP), to continue reading please visit the site. Remember to respect the Author & Copyright.

The market interest about Ripple and XRP has reached a fever pitch, and naturally, people have questions about the company, the digital asset, how it’s used and where to buy it.

In order to clear up any misconceptions about Ripple and XRP, we’ve published answers to nine of the most frequently asked questions that the Ripple team has received. This list will be updated regularly as news and new developments unfold.

1. How do I buy XRP?
XRP is available for purchase on more than 60 digital asset exchanges worldwide, many of which are listed on this page. Please note that Ripple does not endorse, recommend, or make any representations with respect to the gateways and exchanges that appear on that page. Every exchange has a different process for purchasing XRP.

If you’ve already purchased XRP and have a question about your purchase, then please reach out to the exchange directly. In order to maintain healthy XRP markets, it’s a top priority for Ripple to have XRP listed on top digital asset exchanges, making it broadly accessible worldwide. Ripple has dedicated resources to the initiative so you can expect ongoing progress toward creating global liquidity.

2. What is the difference between XRP, XRP Ledger, and Ripple?
XRP is the digital asset native to XRP Ledger. The XRP Ledger is an open-source, distributed ledger. Ripple is a privately held company.

3. How many financial institutions have adopted XRP?
As of January 2018, MoneyGram and Cuallix — two major payment providers — have publicly announced their pilot use of XRP in payment flows through xRapid to provide liquidity solutions for their cross-border payments. Ripple has a growing pipeline of financial institutions that are also interested in using XRP in their payment flows.

4. How secure is XRP? Do I have to use exchanges?
The XRP Ledger is where XRP transactions occur and are recorded. The software that maintains the Ledger is open source and executes continually on a distributed network of servers operated by a variety of organizations. The open-source code base is actively developed and maintained. Since the XRP Ledger’s inception, we’ve worked to make the Ledger more resilient and resistant to a single point of failure by decentralizing it, a process that continues today.

To purchase XRP you must use an exchange or gateway and/or have a digital wallet. Ripple does not endorse, recommend, or make any representations with respect to gateways, exchanges, or wallets, but please see the list of exchanges that offer XRP here.

5. Is the XRP Ledger centralized?
This is a top misconception with the XRP Ledger. Centralization implies that a single entity controls the Ledger. While Ripple contributes to the open-source code of the XRP Ledger, we don’t own, control, or administer the XRP Ledger. The XRP Ledger is decentralized. If Ripple ceased to exist, the XRP Ledger would continue to exist.

Ripple has an interest in supporting the XRP Ledger for several reasons, including contributing to the longer-term strategy to encourage the use of XRP as a liquidity tool for financial institutions. Decentralization of the XRP Ledger is an ongoing process that started right at its inception. In May 2017, we publicly shared our decentralization strategy.

First, we announced plans to continue to diversify validators on the XRP Ledger, which we expanded to 55 validator nodes in July 2017. We also shared plans to add attested validators to Unique Node Lists (UNLs), and announced that over the course of 2017 and 2018, for every two attested third-party validating nodes that meet the objective criteria, we will remove one validating node operated by Ripple, until no entity operates a majority of trusted nodes on the XRP Ledger.

We believe these efforts will increase the XRP Ledger’s enterprise-grade resiliency and robustness, leading to XRP’s continued adoption as the best digital asset for payments.

6. Which wallet should I use?
Ripple does not endorse, recommend, or make any representations with respect to digital wallets. It’s advisable to always conduct your own due diligence before trusting money to any third party or third-party technology.

7. Does the price volatility of XRP impact whether financial institutions adopt xRapid?
No. Ripple has a stable cache of financial institutions that are interested in piloting xRapid. Financial institutions who use xRapid don’t need to hold XRP for an extended period of time. What’s more, XRP settles in three to five seconds, which means financial institutions are exposed to limited volatility during the course of the transaction.

8. Can Ripple freeze XRP transactions? Are they able to view or monitor transactions?
No one can freeze XRP, including Ripple. All transactions on XRP Ledger are publicly viewable.

9. Can Ripple create more XRP?
No. Ripple the company didn’t create XRP; 100 billion XRP was created before the company was formed, and after Ripple was founded, the creators of XRP gifted a substantial amount of XRP to the company.

The post Top 9 Frequently Asked Questions About Ripple and XRP appeared first on Ripple.