Top 4 reasons for using Azure Security Center for partner security solutions

The content below is taken from the original (Top 4 reasons for using Azure Security Center for partner security solutions), to continue reading please visit the site. Remember to respect the Author & Copyright.

As customers expand the boundaries of their environments to hybrid cloud, they often prefer to bring their trusted partners with them. Azure Marketplace includes a variety of security solutions from leading vendors. Azure Security Center takes this a step further, by partnering with these vendors to provide an integrated experience in Azure, while relying on Marketplace for partner certification and billing.

Security Center integrates with Endpoint Protection (Trend Micro), Web Application Firewall (Barracuda, F5, Imperva, and soon Microsoft WAF and Fortinet), and Next Generation Firewall (Check Point, Barracuda, and soon Fortinet and Cisco) solutions. And just last week at Microsoft Ignite, we released integration with Vulnerability Assessment (Qualys – preview) solutions. If you missed the Azure Security Center session where these integrations were highlighted, you can catch it on demand. During FY17, Security Center will both expand the number of partners within these existing categories and introduce new categories.

So, why use Security Center to deploy and monitor security solutions from partners?

  • Ease of deployment: Deploying a partner solution by following the Security Center recommendation is much simpler than a manual deployment. The deployment process can be fully automated using a default configuration and network topology, or customers can choose a semi-automated option that allows more flexibility and customization of the configuration.
  • Integrated Detections: Security events from partner solutions are automatically collected, aggregated and displayed as part of Security Center alerts and incidents. These events are also fused with detections from other sources to provide advanced threat detection capabilities.
  • Unified Health Monitoring and Management: Integrated health events allow customers to monitor all partner solutions at a glance. Basic management is available with easy access to advanced configuration using the partner solution.
  • Export to SIEM: Customers can now export all Security Center and partner alerts in CEF format to on-premises SIEM systems using Microsoft Azure Log Integration (preview).

Currently, to leverage this advanced level of integration, partner solutions must be deployed from Security Center by following a recommendation. Partner packages deployed directly from the Azure Marketplace or through automation are not yet supported. Security Center plans to add this support over the next year, so that partner solutions will be auto-discovered and connected to Security Center regardless of how they were deployed.

Interested in learning more on Azure Security Center and its partner ecosystem integration?

Dell to reveal ‘micro data centres’ for outdoor use

The content below is taken from the original (Dell to reveal ‘micro data centres’ for outdoor use), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sometimes you need to compute on the edge, as well as live there

Dell’s micro Modular Data Center

Dell’s teased something interesting ahead of next week’s DellWorld gabfest: a “micro Modular Data Center (MDC)” that can be deployed outdoors if required.

As depicted above, Dell’s offering has three racks. One will hold the company’s DSS 9000 rack scale infrastructure that our sibling site The Next Platform has detailed as capable of holding up to 96 compute nodes. The other two racks hold power and cooling equipment.

Dell says the MDC can be “… placed indoors or outdoors, leveraging outside air or mechanical cooling technologies.”

Why would anyone want 96 weatherproof compute nodes standing on the street? As we’ve previously explored, plenty of thinking about the Internet of Things suggests putting processing at the edge of the network to save comms costs and save the data centre core for jobs known to be worth doing. This MDC concept will help with such rigs. It will also help telcos who want to preserve core network capacity by putting compute and content out where their users are. And that can mean putting compute capacity next to mobile base stations, hence the possibility of outdoor use.

Or perhaps this is one for serious domestic bit barn builders who have run out of roof space or have been told their computers are no longer welcome in the house. ®


DuckDuckGo Search Tips and Tricks to get the best out of it

The content below is taken from the original (DuckDuckGo Search Tips and Tricks to get the best out of it), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google and Bing are undoubtedly among the most popular search engines on the internet. Like Google, every other search engine keeps evolving and implementing new features to serve its users better. One upstart in this race, DuckDuckGo, has carved out a unique position. Most search engines track us and tailor results based on that tracking; DuckDuckGo does not track you and protects your privacy. Beyond that, you can do a lot with DuckDuckGo that is not possible with other search engines. In this article, we take a look at some of the best DuckDuckGo search tips and tricks.

DuckDuckGo Search Tips and Tricks


As mentioned earlier, I will show you how to make the best use of DuckDuckGo and what it can do that Google cannot.

1. Search directly in Sites from Address Bar using !Bangs

Bangs in DuckDuckGo allow you to search directly on a particular website. For example, if you want to search for ‘mobiles’ on Amazon, just type ‘!a mobiles’ in the DuckDuckGo search box and it will take you to the mobiles results on the Amazon website.


There are Bangs available for various websites, such as ‘!yt’ for YouTube and ‘!w’ for Wikipedia, and many more. This is really one of the best DuckDuckGo search tips and tricks. You can browse the entire list of available Bangs, and you can also create a bang for your own website.
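If you prefer scripting, a bang query is just an ordinary DuckDuckGo search string, so you can also fire one from code. Here is a minimal Python sketch using only the standard library; it assumes the public duckduckgo.com query URL format (https://duckduckgo.com/?q=...):

```python
# Open a DuckDuckGo !bang search from a script (standard library only).
# Assumption: DuckDuckGo accepts queries via https://duckduckgo.com/?q=<query>.
import webbrowser
from urllib.parse import quote_plus

def bang_search(bang: str, terms: str) -> None:
    """Build a '!bang terms' query and open it in the default browser."""
    query = f"{bang} {terms}"                        # e.g. "!a mobiles"
    url = f"https://duckduckgo.com/?q={quote_plus(query)}"
    webbrowser.open(url)                             # DuckDuckGo redirects to the target site

bang_search("!a", "mobiles")     # Amazon search for "mobiles"
bang_search("!w", "Windows 10")  # Wikipedia search
```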

2. See the Social Media Information directly in the Results

DuckDuckGo shows social media information directly in the search results. For example, I searched for TWC’s Twitter handle, ‘@thewindowsclub’, and hit Enter. It shows the profile information right in the search results, so you do not need to click on any link.


This is done really smartly by DuckDuckGo. To find Google Plus and Gravatar profiles, use ‘G+’ or ‘Gravatar’ followed by the username, respectively.

3. Easy to find Cheat sheets

DuckDuckGo makes it easy to find cheat sheets for popular apps, services, operating systems and more. Just type the name of the app or service followed by the keyword ‘cheat sheet’. Searching for ‘Windows 10 cheat sheet’, for example, brings it up instantly.


It is always better to be specific when looking for a cheat sheet, as the Windows cheat sheet differs from the Windows 10 cheat sheet.

4. Shortening URLs is easier with DuckDuckGo

DuckDuckGo lets you shorten and expand URLs, so you do not need any other service to do so. Just type ‘shorten’ followed by the URL and DuckDuckGo will give you the shortened URL as the result.


5. Use DuckDuckGo as a Password Generator

If you are looking for a strong password, DuckDuckGo makes it easy to generate one. Just type ‘password’ followed by the number of characters you want. For example, I typed ‘password 10’ and DuckDuckGo generated a strong 10-character password. Why don’t you try it?


6. Check whether a Website is Down or Not

Want to know whether a website is down? Ask DuckDuckGo and it will let you know.


This is DuckDuckGo’s instant answers feature at work. Besides giving you an answer immediately, it saves you from clicking through different links. If you do not see instant answers in DuckDuckGo, make sure the ‘instant answers’ feature is enabled under Settings > General.

7. Generate QR Codes

DuckDuckGo allows you to generate QR codes for anything. Say you want to generate a QR code for a web page: head over to DuckDuckGo, type ‘qr’ followed by the link, hit Enter and it will generate the QR code.


You can then scan it with your mobile to open that web page without typing the address again. Interesting, right?

8. Change Case of the Text

If you want to change the case of some text, you can do it easily with DuckDuckGo. To convert text to lowercase, type ‘lowercase [text]’; to convert it to uppercase, type ‘uppercase [text]’.


Use the ‘title’ keyword followed by the text to get title case. By appending or prepending the ‘chars’ keyword, you can even count the number of characters in the text.

9. Search for Any Calendar

DuckDuckGo shows a calendar as an instant answer. Search for ‘calendar’ in DuckDuckGo and it will show you the current month with today’s date highlighted.


You can even see the calendar for any date by searching for something like ‘calendar 21st March 1989’. It will show the calendar for the date specified in the search.


10. Generate Random Text

Most of us know how to generate Lorem Ipsum placeholder text in Microsoft Word. Getting random text from DuckDuckGo is just as easy.


Just type ‘10 paragraphs of lorem ipsum’ and you will see 10 paragraphs of Lorem Ipsum placeholder text generated for you.

11. Encode URL

If you need a URL encoded for some purpose, DuckDuckGo comes in handy. Just type ‘url encode’ followed by the URL and it will generate the encoded URL.


Now, you can copy and share the encoded URL.
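If you would rather encode a URL offline, most languages’ standard libraries can do the same job. Here is a small Python sketch that mirrors what the ‘url encode’ answer returns:

```python
# Percent-encode a URL, similar to DuckDuckGo's "url encode" instant answer.
from urllib.parse import quote

url = "https://www.thewindowsclub.com/some page?q=tips & tricks"
encoded = quote(url, safe="")   # safe="" encodes reserved characters like : / ? = & as well
print(encoded)
# https%3A%2F%2Fwww.thewindowsclub.com%2Fsome%20page%3Fq%3Dtips%20%26%20tricks
```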

12. Easy to Find Color Codes

With so many RGB colors available, it is not easy to remember the exact color code you want. DuckDuckGo makes finding it simple: just type ‘color codes’, hit Enter, and it will show you charts containing all the color codes.


No more confusion from now on!

13. Find HTML Codes easily

If you had to look up the HTML code for a special character by hand, it would be a tedious task. With DuckDuckGo, just type ‘html chars’ and it will show you the list of HTML codes for all special characters. Just copy the code you want and use it as it is.


You can also get the HTML code for one specific special character by searching for it directly.


This is said to be one of the best DuckDuckGo search tips and tricks for developers.

14. Generate Random Passphrase

If you want to use a random passphrase as a password, DuckDuckGo can generate one for you. Just type ‘random passphrase’ and you will see a random phrase generated.


These are some of the DuckDuckGo search tips and tricks that are not possible even in Google Search. If you have anything to add, please share it with us in the comments.

Google Search lover? Take a look at these Google Search Tricks. Users of Wolfram Alpha may want to learn how to use Wolfram Alpha knowledge engine effectively.



Aligned Claims Breakthrough in Data Center Waste Heat Reuse

The content below is taken from the original (Aligned Claims Breakthrough in Data Center Waste Heat Reuse), to continue reading please visit the site. Remember to respect the Author & Copyright.

Aligned Energy claims it has achieved a breakthrough in reusing the waste heat exhausted by servers – a concept that is not new, but one that has proven difficult to implement effectively in data centers.

The Danbury, Connecticut-based company says the combination of its data center cooling system and a system by the Swedish company Climeon, which converts low-grade waste heat into electricity, can serve as an effective energy source for a data center.

The solution addresses two fundamental problems in data center waste-heat reuse: low-temperature heat produced by servers and the difficulty of transporting heat efficiently. Climeon’s technology is able to put low-grade heat to use efficiently, while using energy produced by a data center to power the same data center means heat doesn’t have to be moved over long distances.

Learn about the challenges in reusing data center waste heat in depth: How to Reuse Waste Heat From Data Centers Intelligently

Climeon’s heat power units use incoming heat to turn the company’s “unique working media” from liquid to gas. As that happens, pressure rises – since gas takes up more space than liquid – and rotates a turbine that generates electricity. The company claims it has found a way to do this “without any losses during the condensation process.”

In the joint solution, the heat source for Climeon’s units will be Aligned’s data center cooling system, which removes heat directly at the rack with a heat sink and transports it over a thermal bus – according to Aligned, a much more efficient way to move heat than forced air.

The cooling system is sold by Aligned subsidiary Inertech. One of the company’s other subsidiaries, data center service provider Aligned Data Centers, has deployed the system in its Dallas data center and also plans to use it in another data center it is building in Phoenix.

In addition to efficiency, the cooling system is what enables Aligned Data Centers to offer customers capacity that’s expandable on-demand, rather than charging them for capacity reserved for future expansion.

ManagedMethods brings shadow IT and shadow data into the light

The content below is taken from the original (ManagedMethods brings shadow IT and shadow data into the light), to continue reading please visit the site. Remember to respect the Author & Copyright.


At the recent Gartner Security & Risk Management Summit, Gartner VP Neil MacDonald spoke about the technology trends for 2016 that provide the most effective business support and risk management. Cloud Access Security Brokers (CASBs) are number one on the list. According to Gartner, companies’ use of Software as a Service (SaaS) applications creates new challenges for security teams due to limited visibility and control options. CASBs enable businesses to apply much-needed security policies across multiple cloud services.

SaaS is more than a trend; it’s a global movement. In its IDC 50th Anniversary Transformation Everywhere presentation (opens a PDF), IDC claims that by 2020, penetration of SaaS versus traditional software deployment will be more than 25%, and packaged software will shrink to 10% of new enterprise installations. That means that 90% of new software installations will run from the cloud.

As Gartner’s MacDonald alluded, there are two problems with SaaS applications that traditional security solutions do a poor job addressing. One is the limited visibility into the cloud apps, and the other is limited control over what happens inside a cloud app. This is the gap that CASB products intend to fill.

The visibility problem, also called shadow IT, refers to the IT department not knowing about or not sanctioning the use of various cloud apps. In this case, individual users, small workgroups and even entire departments engage with a SaaS provider and sign up for service—without IT involvement. Given that IT is ultimately responsible for data security, having people use unknown apps creates a number of problems for the organization. For instance:

  • A SaaS app might not meet the business’s security and compliance requirements.
  • Some SaaS apps are too risky for business use.
  • Data can get out of sync if it is replicated in too many places.
  • Data might elude disaster recovery plans if IT doesn’t know where it is.
  • The company could overpay for software licenses if multiple services in a category – say, file sharing – are used.

Even if a SaaS application is officially sanctioned for the company’s use, there can be a problem with controlling what goes on within that application. This is called shadow data, and it introduces problems like:

  • Employees or other users putting sensitive or regulated data in the cloud without proper controls such as encryption.
  • Workers sharing files with inappropriate people, inside or outside the organization.
  •  Disgruntled workers deleting an excessive number of files.

SaaS app providers generally do a poor job of helping customers get the visibility and control they need. Traditional tools like firewalls – even next generation firewalls – might be able to report who is going to what cloud app, but they can’t provide the depth of visibility and control that businesses need. And thus the CASB market was born.

Today the CASB solution market is relatively mature, with many vendors’ solutions already three to five years old. While many CASB vendors focus on large enterprises, ManagedMethods is targeting small-to-medium-sized businesses (SMBs) and organizations like school districts and state and local governments.

ManagedMethods’ solution is called Cloud Access Monitor. It is designed to uncover shadow IT through cloud discovery from the network, and help customers understand what cloud apps people are trying to use and the potential risk they present. Cloud Access Monitor provides additional layers of reporting for compliance, visibility and understanding, and then puts additional layers of data security and threat protection around the cloud apps. In particular, Cloud Access Monitor is integrated with four of the leading SaaS applications: Office 365 and OneDrive; G Suite (previously called Google Apps for Work); Box; and Dropbox.

Cloud Access Monitor can address many use cases, but the company says the two primary ones are cloud discovery and cloud audit and control.

In cloud discovery, the tool examines HTTP traffic to determine what cloud apps a person or group is using, which apps are used the most, which ones present risk to the organization, what apps people are uploading data to, from which locations people are accessing apps, and so on. The goal is to gain visibility and insight into what’s happening with the cloud as it pertains to network traffic.

Using this insight, a company can determine which cloud apps it wants to sanction and which ones it wants to block. It can gain bargaining power to negotiate a site license agreement with its preferred applications. The company can account for data in the cloud for leak prevention and disaster recovery plans. Knowing what apps are in use, and by whom, is the first step to being able to take control.

The second major use case for Cloud Access Monitor is to take control of shadow data. Cloud Access Monitor is able to uncover what is going on inside a sanctioned cloud app, looking at who has access, what data they have sharing permissions to, who they are sharing the data with outside the corporate domain, whether there is sensitive or regulated data in the app, whether there is malware present, and more. Customers can get proactive alerts or can direct specific actions to happen when a situation is non-compliant with a policy; for example, drop a user’s connection to the app if he is trying to download excessive data.

For the four core SaaS apps it supports, Cloud Access Monitor is granted OAuth permission into those accounts to gain visibility, discover all the data at rest, analyze that data for compliance reporting, scan for malware, and apply data loss prevention policies.

For example, a retail organization using Google Drive (G Suite) may have a policy to scan files for malware as they are uploaded to Google Drive and to scan data at rest, which provides threat protection. It may also prevent any associate or employee of the firm from deleting specific files, or files over a certain size.

In another common use case, school districts using cloud applications in classrooms need to scan the data at rest in those apps for profanity, offensive remarks, threats and generally inappropriate behavior inside the data.

Cloud Access Monitor can be deployed on premises as a virtual appliance or in the cloud. If on premises, ManagedMethods can use a SPAN or tap and collect data passively. Otherwise the product can ingest data from firewall log files, so there is no impact to the network from deploying this solution. ManagedMethods says it can work with pretty much any firewall, but it has special partnerships with Check Point and WatchGuard that allow integration into their cloud management consoles and their threat protection layers.

As the use of SaaS applications continues to increase, organizations need a way to regain visibility and control over what their users are doing in the cloud.

OpenStack Developer Mailing List Digest October 8-14

The content below is taken from the original (OpenStack Developer Mailing List Digest October 8-14), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • loquacities: Newton docs are live on docs.openstack.org! Way to go docs team \o/
  • dhellmann: OpenStack Newton is officially released!
  • tristanC: 6 TC members elected for Ocata [1].
  • dulek: Cinder gate is now voting on basic rolling upgrades support. One step closer to get assert:supports-rolling-upgrade tag. :)
  • More

Thoughts on the TC Election Process

  • When deciding to run, candidates write a long thoughtful essay on their reasons for wanting to serve on the TC.
    • It is rare for anyone to ask follow-up questions, or to challenge the candidates to explain their positions more definitively.
    • Some people pick by names they are most familiar with and don’t read those candidacy posts.
    • It is believed that it’s rare for someone who hasn’t been a PTL of a large project to be elected.
    • An example of implicit bias: blind auditions for musical orchestras radically changed selection results [2].
  • Proposal: have candidates self-nominate, but instead of a long candidacy letter, just state their interests in serving.
    • After nominations close, the election officials will assign each candidate a non-identifying label (e.g. a random number).
    • Candidates will post their thoughts and positions and respond to questions from people.
    • Candidacy essays would be posted during the campaign period instead of the nomination period, and would exclude biographical information.
    • Perhaps candidates could forward their responses to the election officials, who would post them on the candidates’ behalf, identified only by candidate number.
    • The voting form will only list the candidates’ numbers.
  • Thoughts on the proposal:
    • Not allowing people to judge candidates’ character introduces a fraud incentive. You could tell friends your number secretly; their implicit bias will make them think this is morally OK, and make them more likely to vote for you.
    • It can be important to identify candidates. For some people, there’s a difference between what they say and what they end up doing once they’re calling the shots.
    • Familiarity doesn’t necessarily equal bias. Trust is not bias.
    • A good example [2] of needing to know the speaker and words came out of the thread. Also a reason why anonymous elections for leaders are a bad idea and favor native English speakers.
  • We need several things:
    • Allow time between the nomination and the voting. Some candidates don’t announce until the last day or two. This doesn’t allow much time to get to know them.
    • How to deal with timezone differences. One candidate may post an answer early and get more reaction.
    • Reduce the effect of incumbency.
  • The comparison of orchestra auditions was brought up a couple of cycles ago as well, but could be a bad comparison. The job being asked of people was performing their instrument, and it turns out a lot of things not having to do with performing their instrument were biasing the results.
    • The job of the TC is:
      • Putting the best interests of OpenStack at heart.
      • Be effective in working with a diverse set of folks in our community to get things done.
      • To find areas of friction and remove them.
      • Help set the overall direction for the project that community accepts.
    • Writing a good candidacy email isn’t really a good representation of those abilities. It’s a measure of writing a good candidacy email, in English.
  • Sean Dague hopes that when voters vote in the election that they are taking the reputation of individuals into account.
    • Look at the work they did across all of OpenStack.
    • How they got consensus on items.
    • What efforts they are able to get folks to rally around and move forward.
    • When they get stuck and get unstuck.
    • When they ask for help and/or admit they’re out of their element.
    • How they help new folks.
    • How they work with long timers.
    • It’s easy to dismiss it as a popularity contest, however, this is about evaluating the plausible promise that the individuals put forward. Not just ideas they have, but how likely they are to be able to bring them to fruition.
  • Full thread

API Workgroup News

  • API usability tests being conducted at the Barcelona summit [3].
  • Two lively discussions [4]:
    • Collecting and improving error messages across OpenStack.
    • Request semantics with regards to GET and body processing.
  • New guidelines:
    • Add a warning about JSON expectations [5].
  • Guidelines currently under review:
    • Specify time intervals based filtering queries [6].
  • Full thread

Project Teams Gathering from the Ops Perspective

  • The first PTG will be held February 20-24 in Atlanta, GA at the downtown Sheraton hotel.
  • Tickets are $100.
  • Group rate is $185/night.
  • Registration will go live in the next couple of weeks.
  • Horizontal/cross project teams will meet Monday and Tuesday.
  • Vertical projects will meet Wednesday through Friday.
  • There’s a lot of great planning happening around the PTG, however, it’s going to take some time for operators to figure it out.
  • Tom Fifield gives some notes for the operators:
    • Check out the diagram on the PTG site [7].
      • We’re finally acknowledging a release cycle starts with planning. Now we’ll be finalizing a release, while planning another.
      • This puts the summit at the right place to get feedback and decent ideas from users.
    • The OpenStack summit is the place the entire community gets together.
      • The PTG doesn’t mean the summit becomes a marketing thing. The summit can also include:
        • Pre-spec brainstorming
        • Feedback with users
        • Be involved in strategic direction.
    • Don’t expect Ops at the PTG
      • The PTG has been designed for space to get stuff done. Unless a user is deep in code, they won’t be there. If you want feedback from users, use the summit.
  • For ops-focused teams like Kolla, participating in OpenStack Summits and Ops mid-cycles is essential. Not everyone has to go to every event though. These teams should organize who is going to which events.
  • If you’re going to the summit in Barcelona, Thierry and Erin from the OpenStack Foundation will be hosting an informational presentation on the PTG [8].
  • Full thread

Next PTL/TC Elections Timeframes

  • At the last TC meeting, TC members discussed future election period, with consideration of the OpenStack Summit and Project Teams Gathering.
  • The TC charter, which uses “Design Summit” and “Summit” interchangeably, is no longer valid and requires change.
    • There was a focus on limiting the impact of the change to avoid the need to modify the Foundation bylaws [9].
    • PTL elections would continue to be organized around development cycle boundaries.
    • TC elections would continue to be organized relative to OpenStack Summit dates.
  • Full thread

Running Non-Devstack Jobs in Python Projects

  • Devstack is the common tool to deploy OpenStack in CI environments.
    • However, it doesn’t deploy OpenStack the way production tools like Kolla, Fuel, and TripleO do.
  • Things might (and did) break when deploying OpenStack outside of Devstack:
    • SSL was not tested. Some projects still don’t test with SSL enabled.
    • IPv6 is not tested everywhere.
    • Production scenarios with HA (HAproxy and/or Pacemaker) are not tested.
  • Proposal:
    • This is not about removing Devstack. The idea is to add more coverage in an iterative way.
    • Projects like TripleO and Heat have been added as CI jobs in the experimental pipeline.
    • A draft document about increasing coverage in different projects [10].
  • Finding a balance between enough testing and overusing infra resources is tricky.
    • Also anything that’s more complicated than unit tests has > 0% chance of failure.
  • Another proposal:
    • Running periodic testing and moving forward reference hashes every day if tests pass.
      • Allows deployment tools to move forward automatically.
      • Quite close to master, but not tightly coupled into every change.
      • This is pretty much what the OpenStack-Ansible project does for its “integrated build”.
  • Full thread

 

[1] – http://bit.ly/2dQjmpP

[2] – http://bit.ly/2egn22c

[3] – http://bit.ly/2egkdyk

[4] – http://bit.ly/2dQjOo5

[5] – http://bit.ly/2cEDZ4C

[6] – http://bit.ly/2dQkfPq

[7] – http://bit.ly/2dQifqp

[8] – http://bit.ly/2egkpxF

[9] – http://bit.ly/2eglCEX

[10] – http://bit.ly/2egmQjw

Drones are delivering blood to hospitals in Rwanda

The content below is taken from the original (Drones are delivering blood to hospitals in Rwanda), to continue reading please visit the site. Remember to respect the Author & Copyright.

In Rwanda, transporting critical medicine and blood can be difficult if the patient is in a remote location. Heavy downpours can wash out the roads, and local hospitals are often too small to stock everything their doctors might need. Now, the Rwandan government is side-stepping the problem with a drone delivery program. In the western half of the country, 21 transfusion clinics can request batches of blood via text. The order will be picked up by Zipline, a California-based robotics firm, at its "nest" base in Muhanga. A small drone will then be deployed and, upon arrival, swoop down low to drop the package off at a designated "mailbox" area.

The new "Zip" drones can carry up to 1.5KG of blood — enough to save a person’s life — over 150KM. Zipline is starting with 15 drones which will make 50 to 150 emergency flights each day. Each delivery should take 30 minutes, bypassing any problems or poor infrastructure down below. In the future, the government hopes to expand the scheme and offer different types of medicine and lifesaving vaccines. The first step, in early 2017, will be to offer similar blood deliveries in the eastern half of the country too.

Over the course of the next year, Zipline — along with its partners, UPS and Gavi, the Vaccine Alliance — will look to see whether the deliveries can be replicated in other countries across Africa and the Americas. Eventually, the group wants to serve Indian reservations in Maryland, Nevada and Washington state. "The hours saved delivering blood products or a vaccine for someone who has been exposed to rabies with this technology could make the difference between life and death," Dr. Seth Berkley, CEO of Gavi said. "Every child deserves basic, lifesaving vaccines. This technology could be an important step towards ensuring they get them."

AWS gets richer with VMware partnership

The content below is taken from the original (AWS gets richer with VMware partnership), to continue reading please visit the site. Remember to respect the Author & Copyright.

VMware signed deals with Microsoft, Google and IBM earlier this year as it has shifted firmly to a hybrid cloud strategy, but it was the deal it signed with AWS this week that has had everybody talking.

The cloud infrastructure market breaks down to AWS with around a third of the market — and everybody else. Microsoft is the closest competitor with around 10 percent. While VMware has had deals in place with other major players, the one with AWS matters more because it gives AWS even greater advantage in the cloud market.

The traditional vendors have taken a hybrid approach with Microsoft and IBM arguing that most large organizations, bogged down in legacy hardware and software, can’t afford to go whole hog into the cloud. It’s an argument that makes sense, especially for their customer bases.

AWS on the other hand has argued that the future is the cloud, and while it welcomed any customers, it made its bet with the companies moving to the cloud or who were born there. That approach has clearly worked, with the company on an $11.5 billion run rate this year.

Meanwhile, in spite of those strategic deals with other larger IT vendors, VMware has struggled with the cloud market. It boasts almost 100 percent penetration inside the data center. It was and remains the go-to company for server virtualization, and while that worked fine in a data center-centric world, that world is changing rapidly.

What VMware did was provide a way to make use of all the resources in a machine in a much more efficient way, letting you break down that single server into multiple virtual machines. That was great for its time in the early 2000s when servers were expensive and finding ways to use them as efficiently as possible was a prime objective for IT.

The cloud changed all of that, moving the virtual machine to the cloud where you could spin up whatever resources you needed whenever you wanted and only pay for the resources you were actually using. If you needed more, you simply spun up more. If you needed less, you could take them down. That put the data center model — and VMware — at a distinct disadvantage.

You couldn’t just go out and buy more servers every time your workloads demanded it. There was a procurement process that took weeks or months, while the cloud let you satisfy your needs almost instantly.

VMware has actually been dabbling in the cloud since around 2010, starting with an early Platform as a Service play called VMforce, which was supposed to work with Salesforce. It also began flirting with partnerships at around the same time, including an early one with Google to take on the fledgling Microsoft Azure.

It made another hybrid cloud attempt in 2013 with the launch of vCloud Hybrid Service. It even originally launched CloudFoundry, the open source private cloud platform, which eventually became part of Pivotal, the company that EMC, VMware and GE spun out in 2012.

None of these gained much traction for VMware, however, as the company was competing with all of these other vendors, including AWS, Google, Microsoft and IBM, and there was little to separate itself from the pack. That brings us to the present day, where the company is taking a new stab at the hybrid model and partnering like crazy with its former competitors.

Teaming with AWS is a different matter than the previous announcements because with AWS it gets the top player in the market, which could help salvage its cloud business (and indeed its entire business) after so many false starts.

As for AWS, it gets to play in the hybrid playground where it has had limited access until now. That gives the cloud infrastructure giant a way to go after Microsoft and IBM right in their prime markets and possibly gain even more marketshare.

People were talking because it was the biggest deal for VMware by far, and as for AWS, well it was a case of the rich getting richer. That has to have the competitors feeling pretty nervous today as AWS puts a stake in the ground right in their territory — and VMware gets to come along for the ride.

Introducing Zaius, Google and Rackspace’s open server running IBM POWER9

The content below is taken from the original (Introducing Zaius, Google and Rackspace’s open server running IBM POWER9), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by John Zipfel, Technical Program Manager

Here at Google Cloud, our goal is to enable our users and customers to be successful with options, high performance and value. We’re committed to open innovation, and look forward to working with industry partners on platform and infrastructure designs.

In fact, earlier this year, we announced that we would collaborate with Rackspace on the development of a new Open Compute Project (OCP) server based on the IBM POWER9 CPU. And we recently announced that we joined the OpenCAPI Consortium in support of the new open standard for a high-speed pathway to improve server performance. Today, we’re excited to share the first spec draft of our new server, Zaius P9 Server, which combines the benefits of IBM POWER9 and OpenCAPI for the OCP community.

Over the past few months, we’ve worked closely with Rackspace, IBM and Ingrasys to learn about the needs of the OCP community and help ensure that Zaius is useful for a broad set of users. With Zaius, Google is building upon the success of the Open Server specification and Barreleye platforms, while contributing the 12 years of experience we’ve gained from designing and deploying servers in our own data centers.

Zaius incorporates many design aspects that are new to Google and unique to OCP: POWER9 was designed to be an advanced accelerated computing platform for scale-out solutions, and will be available for components that use OpenCAPI and PCIe Gen4 interfaces. The Zaius design brings out all possible PCIe Gen4 and OpenCAPI lanes from the processors to slots and connectors for an unprecedented amount of raw bandwidth compared to prior-generation systems. Additionally, the updated package design reduces system complexity and the new microarchitecture provides increased efficiency and performance gains.

Block diagram of Zaius

The specifications

Zaius is a dual-socket platform based on the IBM POWER9 Scale Out CPU. It supports a host of new technologies including DDR4 memory, PCIe Gen4 and the OpenCAPI interface. It’s designed with a highly efficient 48V-POL power system and will be compatible with the 48V Open Rack V2.0 standard. The Zaius BMC software is being developed using OpenBMC, the framework for which we’ve released on GitHub. Additionally, Zaius will support a PCIe Gen4 x16 OCP 2.0 mezzanine slot NIC.

We’ve shared these designs with the OCP community for feedback, and will submit them to the OCP Foundation later this year for review. Following this specification, we plan to release elements of the board’s design collateral, including the schematics and layout. If accepted, these standards will continue the goal of promoting 48V architectures. This is a draft specification of a preliminary, untested design, but we’re hoping that an early release will drive collaboration and discussion within the community.

We look forward to a future of heterogeneous architectures within our cloud. And, as we continue our commitment to open innovation, we’ll continue to collaborate with the industry to improve these designs and the product offerings available to our users.

Lifesize launches new video gear for huddle rooms

The content below is taken from the original (Lifesize launches new video gear for huddle rooms), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the world of videoconferencing, there’s a gap between the large conference room systems and lecture hall gear, and the individual’s webcam on their computer, tablet or smartphone. For smaller conference rooms, many of which have been renamed “huddle rooms”, neither option seems appropriate, whether because of cost (using a larger system) or inconvenience (2-4 people shouldn’t have to crowd around a laptop screen).

Videoconferencing vendors continue to address this need, with Lifesize being the latest – the company announced today its Icon 450 system, a videoconferencing camera and audio system aimed specifically at the huddle room. The system connects to the Lifesize Cloud, the company’s cloud-based videoconferencing platform.

On the hardware side, the Icon 450 includes a wide-angle lens (82-degree horizontal field of view, 59-degree vertical) with 1080p resolution, 5x optical zoom and auto-focus. New to this gear is what Lifesize calls its “Smart Framing Sensor”, which automatically adjusts the camera to center on everyone on a particular call, reducing the need for an employee to manually pan or zoom in on a specific speaker. The camera connects via HDMI to a display (not included), which supports both 1080p and 720p resolutions. For audio, the Icon 450 includes the Lifesize Phone HD device – not only does it have an audio speaker for the call, but a touch-screen display lets employees create and launch meetings at the touch of a button.

The cloud-based service lets you launch meetings from Microsoft Office 365 and Google Apps for Work, and supports Skype for Business sessions as well. Integration with these and other services lets users send invites via email, start a meeting from their calendar or provide a meeting URL. The goal is to make creating and using a videoconferencing system a lot easier than current larger room systems, Lifesize says.

The system starts at $4,999 (including the camera, Phone HD device, remote control and cables); subscriptions for the Lifesize Cloud service start at $29 per month per user, with an annual contract. 

Tintri Announces Simpler Disaster Recovery for Multiple Data Centers and Clouds

The content below is taken from the original (Tintri Announces Simpler Disaster Recovery for Multiple Data Centers and Clouds), to continue reading please visit the site. Remember to respect the Author & Copyright.

Tintri, Inc., a leading provider of all-flash VM-aware storage (VAS), today announced the upcoming availability of Synchronous Replication for… Read more at VMblog.com.

Startup dusts off rent-a-box on-premises corpse, adds ARM muscle, cloud brains

The content below is taken from the original (Startup dusts off rent-a-box on-premises corpse, adds ARM muscle, cloud brains), to continue reading please visit the site. Remember to respect the Author & Copyright.

Startup Igneous Systems has re-discovered and re-imagined the idea of customers renting an externally managed system on their premises, giving it an Internet of Things (IoT) and public cloud make-over.

The new angle is that IoT devices can generate vast amounts of data that is difficult to send to a set of on-premises servers or to the public cloud, because transmission is too slow and/or the data is sensitive and needs to be retained on site.

Igneous’s answer is to store it locally in a public cloud-like, scale-out, hyper-converged system that can run some compute processes on the data, and be managed through the cloud. Example compute workloads are auto-tagging, metadata extraction, image processing, and text analysis.

Igneous owns and operates the on-premises equipment and the customer pays capacity-based subscription pricing. The systems are remotely managed and monitored through a cloud service, and there are automated cloud management processes for upgrades, troubleshooting, and failure management. Igneous claims that these functions can be performed on a mirror of incoming and outgoing data streams without slowing down low-level system functions.

The Igneous architecture utilises a microservices approach and incorporates stream processing, an event-driven framework, and container services.

Igneous nanoserver

The equipment is based on ARM-powered nano-servers, with each being a disk drive, ARM processor and Ethernet link. Igneous calls this a JBOND – Just a Bunch of Networked Drives. There is a SoC (System On Chip) with a Linux-running Marvell 32-bit Armada 370 processor, with two Cortex-A9 cores running at up to 1 GHz. Networking is via two 1 GbitE ports, and the nanoservers operate in a kind of mesh.

It reminds El Reg of Seagate’s Ethernet-addressed Kinetic disk drives, except that Igneous has had its own nanoservers built and the JBOND is accessed as an S3 data store. Any processing capabilities to operate on the stored data need adding. Data is stored using erasure coding and failed nanoserver disks have data rebuilt using other nanoservers.

We also think 32-bit ARM CPUs look a bit light in compute power terms and 64-bit ones will need to be used if any serious compute is going to be done on the data.

Access is via Amazon’s S3 API across a customer’s 10Gbit Ethernet infrastructure. Other public cloud service access protocols will probably be added in the future.
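The article doesn’t give API details beyond S3 compatibility, but in practice an S3-compatible store is reached by pointing a standard client at a local endpoint. A hypothetical sketch with boto3 follows; the endpoint address, credentials and bucket name are illustrative placeholders, not Igneous specifics:

```python
# Hypothetical access to an on-premises S3-compatible store such as Igneous's JBOND.
# The endpoint URL, credentials and bucket name are illustrative placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://igneous.example.local",   # local dataRouter endpoint (assumed)
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
)

# Write an object, then list what is stored in the bucket.
s3.put_object(Bucket="sensor-archive", Key="cam01/frame-0001.jpg", Body=b"frame bytes")
for obj in s3.list_objects_v2(Bucket="sensor-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])
```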

The dataBox and dataRouter systems

A dataBox has 60 drives (nano-servers with 6TB disks) in a 4U enclosure and is a capacity store. A dataRouter is a stateless 1U server used for protocol endpoints and keeping track of data in the nanoserver mesh.

Customers start with a 2×1 system of two dataRouters and one dataBox at a usable capacity of 212TB. The dataRouters and dataBoxes can scale independently.

Igneous cloud management dashboard

There is some existing third-party software support. For example, Commvault Virtual Server Protection (VSP) enables users to back up to an Igneous System and Infinite IO works with Igneous.

This Igneous nanoserver kit is an interesting start in what we might ultimately call micro or nano hyper-converged systems. Getting involved with it means leading/bleeding edge work for Linux and S3 buffs.

Get a datasheet here. The Igneous Data Service is available immediately to customers in North America. Customers can purchase annual subscriptions in increments of 212TB of usable capacity, starting at under $40,000 (equivalent to 1.5 cents/GB/month). This is claimed to be lower than the cost of S3 (3 cents/GB/month). Volume discounts are available, based on installed capacity or contract duration. ®


Microsoft’s true Hybrid Cloud: consistent, not just connected

The content below is taken from the original (Microsoft’s true Hybrid Cloud: consistent, not just connected), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today marked the announcement of Windows Server 2016 general availability, another milestone in Microsoft’s commitment to hybrid cloud. You can start using Windows Server 2016 today with the availability of Windows Server 2016 images in the Azure Marketplace.

Hybrid cloud is the reality for all enterprise customers we work with, even those with the most ambitious cloud plans. Some applications should and will move quickly to public cloud, while others face technological and regulatory obstacles. As such, we’ve built hybrid capabilities into the Microsoft portfolio, covering data, identity, management, applications, and the infrastructure platform overall.

Figure 1: Hybrid cloud capabilities built-in across Microsoft products and services

True hybrid cloud enablement goes beyond connectivity and provides consistency. Great network connectivity and the ability to “lift and shift” virtual machines are basic requirements. Consistency goes a step further, providing IT professional, developer, and end user experiences that don’t change based on the location of the resource. Consistency across a hybrid cloud environment enables uniform development, unified dev-ops and management, common identity and security, and seamless extension of existing applications to the cloud. Consistent hybrid cloud helps customers execute on their cloud strategy faster, in a way that makes the most sense for their business.

Read more about Microsoft’s Hybrid Cloud and Windows Server 2016

Alibaba backs PlaceIQ, a provider of location tools and data

The content below is taken from the original (Alibaba backs PlaceIQ, a provider of location tools and data), to continue reading please visit the site. Remember to respect the Author & Copyright.

PlaceIQ has a new, big-name investor — Chinese e-commerce giant Alibaba.

The companies aren’t disclosing the size of the deal; they’ll only say that it’s a strategic, minority investment, and that it’s an addition to the $25 million round that PlaceIQ announced at the beginning of this year. Co-founder and CEO Duncan McCall told me this is part of a larger partnership, where Alibaba will be using PlaceIQ’s technology.

“I don’t want to speak for [Alibaba], but in my mind, they didn’t want to invest in a company — they wanted to find a strategic partner,” McCall said.

PlaceIQ combines location data with “first-party” data from marketers to provide a fuller understanding of consumer behavior — not just whether someone’s visited a car dealership, for example, but whether they’re actually shopping for a car, and what kinds of TV stations they’re likely to watch.

McCall said Alibaba could use PlaceIQ’s technology in a variety of ways, including marketing, product recommendations and providing data for broader decision-making.

In his view, this illustrates how location data has become useful for more than mobile advertising, or even advertising in general: “Location now becomes less about sales and marketing and more real technology making decisions and business insight where that becomes much more important.”

McCall also suggested that as PlaceIQ looks to expand internationally, it will continue to follow this partnership model.

“For us, our core value is in the data and technology,” he said. “When we want to get into [a new geography], let’s find the market leader and deploy with them.”


Amazon ElastiCache for Redis Update – Sharded Clusters, Engine Improvements, and More

The content below is taken from the original (Amazon ElastiCache for Redis Update – Sharded Clusters, Engine Improvements, and More), to continue reading please visit the site. Remember to respect the Author & Copyright.

Many AWS customers use Amazon ElastiCache to implement a fast, in-memory data store for their applications.

We launched Amazon ElastiCache for Redis in 2013 and have added snapshot exports to S3, a refreshed engine, scale-up capabilities, tagging, and support for Multi-AZ operation with automatic failover over the past year or so.

Today we are adding a healthy collection of new features and capabilities to ElastiCache for Redis. Here’s an overview:

Sharded Cluster Support – You can now create sharded clusters that can hold more than 3.5 TiB of in-memory data.

Improved Console – Creation and maintenance of clusters is now more straightforward and requires far fewer clicks.

Engine Update – You now have access to the features of the Redis 3.2 engine.

Geospatial Data – You can now store and process geospatial data.

Let’s dive in!

Sharded Cluster Support / New Console
Until now, ElastiCache for Redis allowed you to create a cluster containing a single primary node and up to 5 read replicas. This model limited the size of the in-memory data store to 237 GiB per cluster.

You can now create clusters with up to 15 shards, expanding the overall in-memory data store to more than 3.5 TiB. Each shard can have up to 5 read replicas, giving you the ability to handle 20 million reads and 4.5 million writes per second.

The sharded model, in conjunction with the read replicas, improves overall performance and availability. Data is spread across multiple nodes and the read replicas support rapid, automatic failover in the event that a primary node has an issue.

In order to take advantage of the sharded model, you must use a Redis client that is cluster-aware. The client will treat the cluster as a hash table with 16,384 slots spread equally across the shards, and will then map the incoming keys to the proper shard.
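To make “cluster-aware” concrete, here is a minimal sketch of connecting to a cluster-mode-enabled endpoint with the redis-py client (assuming redis-py 4.1 or later; the endpoint name below is a placeholder, not a real configuration endpoint):

```python
# Connect to a cluster-mode-enabled (sharded) Redis endpoint with a cluster-aware client.
# Requires redis-py >= 4.1; the endpoint name below is a placeholder.
from redis.cluster import RedisCluster

rc = RedisCluster(host="my-cluster-endpoint.example.com", port=6379)

# The client hashes each key into one of 16,384 slots and routes the command
# to whichever shard owns that slot.
rc.set("user:1001:name", "Alice")
print(rc.get("user:1001:name"))
```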

ElastiCache for Redis treats the entire cluster as a unit for backup and restore purposes; you don’t have to think about or manage backups for the individual shards.

The Console has been improved, and I can create my first Scale Out cluster with ease (note that I checked Cluster Mode enabled (Scale Out) after choosing Redis as my Cluster engine).

The Console also helps me choose a suitable node type with a handy new menu.

You can also create sharded clusters using the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, the ElastiCache API, or via an AWS CloudFormation template.

Engine Update
Amazon ElastiCache for Redis is compatible with version 3.2 of the Redis engine. The engine includes three new features that may be of interest to you:

Enforced Write Consistency – the new WAIT command blocks the caller until all previous write commands have been acknowledged by the primary node and a specified number of read replicas. This change does not make Redis into a strongly consistent data store, but it does improve the odds that a freshly promoted read replica will include the most recent writes to the previous primary.

SPOP with COUNT – The SPOP command removes and then returns a random element from a set. You can now request more than one element at a time.

Bitfields – Bitfields are a memory-efficient way to store a collection of many small integers as a bitmap, stored as a Redis string. Using the BITFIELD command, you can address (GET) and manipulate (SET, increment, or decrement) fields of varying widths without having to think about alignment to byte or word boundaries.
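Here is a brief, illustrative sketch of these three features from Python. The host name is a placeholder, and raw commands are issued through execute_command so the snippet does not depend on any particular client-library helper signatures:

```python
# Small demo of WAIT, SPOP with COUNT, and BITFIELD against a Redis 3.2 endpoint.
# The host is a placeholder; raw commands are sent via execute_command.
import redis

r = redis.Redis(host="my-redis.example.com", port=6379)

# WAIT: block until at least 1 replica has acknowledged prior writes (up to 1000 ms).
r.set("order:42:status", "paid")
acked = r.execute_command("WAIT", 1, 1000)
print("replicas acknowledged:", acked)

# SPOP with COUNT: remove and return 3 random members in one call.
r.sadd("raffle", "a", "b", "c", "d", "e")
print(r.execute_command("SPOP", "raffle", 3))

# BITFIELD: pack small unsigned counters into one string and bump one of them.
print(r.execute_command("BITFIELD", "counters",
                        "SET", "u8", 0, 200,     # counter at bit offset 0
                        "INCRBY", "u8", 8, 5,    # counter at bit offset 8
                        "GET", "u8", 8))
```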

Our implementation of Redis includes a snapshot mechanism that does not need to fork the server process into parent and child processes. Under heavy load, the standard, fork-based snapshot mechanism can lead to degraded performance due to swapping. Our alternative implementation comes into play when memory utilization is above 50% and neatly sidesteps the issue. It is a bit slower, so we use it only when necessary.

We have improved the performance of the syncing mechanism that brings a fresh read replica into sync with its primary node. We made a similar improvement to the mechanism that brings the remaining read replicas back into sync with a newly promoted primary node.

As I noted earlier, our engine is compatible with the comparable open source version and your applications do not require any changes.

Geospatial Data
You can now store and query geospatial data (a latitude and a longitude). Here are the commands:

  • GEOADD – Insert a geospatial item.
  • GEODIST – Get the distance between two geospatial items.
  • GEOHASH – Get a Geohash (geocoding) string for an item.
  • GEOPOS – Return the positions of items identified by a key.
  • GEORADIUS – Return items that are within a specified radius of a location.
  • GEORADIUSBYMEMBER – Return items that are within a specified radius of another item.
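
A short, illustrative sketch of the geospatial commands from Python follows; as before, the host name is a placeholder and raw commands are used to avoid depending on client helper signatures:

```python
# Store a few places and query them with the Redis 3.2 geospatial commands.
import redis

r = redis.Redis(host="my-redis.example.com", port=6379)

# GEOADD: key, longitude, latitude, member
r.execute_command("GEOADD", "offices", -122.4194, 37.7749, "san-francisco")
r.execute_command("GEOADD", "offices", -122.3321, 47.6062, "seattle")

# GEODIST: distance between two members, in kilometers
print(r.execute_command("GEODIST", "offices", "san-francisco", "seattle", "km"))

# GEORADIUS: everything within 100 km of a given point
print(r.execute_command("GEORADIUS", "offices", -122.33, 47.60, 100, "km"))

# GEOPOS / GEOHASH: coordinates and geohash string for a member
print(r.execute_command("GEOPOS", "offices", "seattle"))
print(r.execute_command("GEOHASH", "offices", "seattle"))
```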

Available Now
Sharded cluster creation and all of the features that I mentioned are available now and you can start using them today in all AWS regions.

Jeff;

Terahertz radiation could speed up computer memory by 1000 times

The content below is taken from the original (Terahertz radiation could speed up computer memory by 1000 times), to continue reading please visit the site. Remember to respect the Author & Copyright.

One area limiting speed in personal computing is memory — specifically, how quickly individual memory cells can be switched, which is currently done using an external magnetic field. European and Russian scientists have proposed a new method using much more rapid terahertz radiation, aka "T-rays," the same things used in airport body scanners. According to their research, published in the journal Nature, swapping out magnetic fields for T-rays could crank up the rate of the cell-resetting process by a factor of 1000, which could be used to create ultrafast memory.

The radiation is actually a series of short electromagnetic pulses pinging the cells at terahertz frequencies (which have wavelengths of about 0.1 millimeter, lying between microwaves and infrared light, according to the scientists’ press release). Most of the recent T-ray experiments have dealt with quick, precise inspections of organic and mechanical material. Aside from quickly scanning you for contraband and awkward bulges at airports, other proposals have involved using terahertz radiation to look into broken microchip innards, peer into fragile texts and even comb airport luggage for bombs.
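As a back-of-the-envelope check (not from the article), the quoted 0.1 millimeter wavelength follows from the usual relation between wavelength and frequency:

```latex
\lambda = \frac{c}{f} = \frac{3\times10^{8}\ \mathrm{m/s}}{3\times10^{12}\ \mathrm{Hz}} = 10^{-4}\ \mathrm{m} = 0.1\ \mathrm{mm}
```

At 1 THz the same formula gives 0.3 mm, which is why the terahertz band sits between microwaves (millimetre wavelengths) and infrared light (micrometre wavelengths).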

But similar to those hypothetical applications, you won’t see T-rays in your PCs any time soon. The scientists have successfully demonstrated the concept on a weak ferromagnet, thulium orthoferrite (TmFeO₃), and even found that the terahertz radiation’s effect was ten times greater than a traditional external magnetic field, meaning the new method is both far faster and more efficient. But the scientists have yet to publish tests on actual computer memory cells, so it’s unknown when, or if, T-rays will buzz around inside your machine.

Source: Nature

Claranet Chooses OnApp Portal to Enhance its VMware Cloud Services

The content below is taken from the original (Claranet Chooses OnApp Portal to Enhance its VMware Cloud Services), to continue reading please visit the site. Remember to respect the Author & Copyright.

OnApp has announced that Claranet, a leading European managed service provider, has chosen OnApp as its customer portal for cloud services based on… Read more at VMblog.com.

Reverse Engineering The Internet Of Coffee

The content below is taken from the original (Reverse Engineering The Internet Of Coffee), to continue reading please visit the site. Remember to respect the Author & Copyright.

The public promise of the Internet Of Things from years ago when the first journalists discovered the idea and strove to make it comprehensible to the masses was that your kitchen appliances would be internet-connected and somehow this would make our lives better. Fridges would have screens, we were told, and would magically order more bacon when supplies ran low.

A decade or so later some fridges have screens, but the real boom in IoT applications has not been in such consumer-visible applications. Most of your appliances are still just as unencumbered by connectivity as they were twenty years ago, and that Red Dwarf talking toaster that Lives Only To Toast is still fortunately in the realm of fiction.

The market hasn’t been devoid of IoT kitchen appliances though. One is the Smarter Coffee coffee machine, a network-connected coffeemaker that is controlled from an app. [Simone Margaritelli] bought one, and while he loved the coffee he wasn’t keen on the lack of a console application. He thus set about creating one, starting with reverse engineering the machine’s protocol by disassembling the Android version of its app.

What he found was sadly not an implementation of RFC 2324; instead, the machine uses a very simple byte string to issue commands with parameters such as coffee strength. There is no security, and he could even trigger a firmware upgrade. The app requires a registration and login, though this appears to only be used for gathering statistics. His coffee application can thus command all the machine’s capabilities from his terminal, and he can enjoy a drink without reaching for an app.
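
For readers curious what driving such a protocol looks like, here is a purely hypothetical Python sketch of sending one raw command frame over TCP. The address, port, and byte values are placeholders and not the real Smarter Coffee protocol; the actual byte strings are documented in the linked write-up.

import socket

MACHINE_ADDR = ("192.168.1.50", 2081)   # hypothetical IP address and port

def send_command(payload: bytes) -> bytes:
    # Open a connection, send one command frame, and return the raw reply.
    with socket.create_connection(MACHINE_ADDR, timeout=5) as sock:
        sock.sendall(payload)
        return sock.recv(64)

# Hypothetical frame layout: opcode byte, strength parameter, terminator.
reply = send_command(bytes([0x33, 0x02, 0x7E]))
print(reply.hex())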

On the face of it you might think that the machine’s lack of security doesn’t matter, as it sits on a private network behind a firewall. But it represents yet another example of a worrying trend in IoT devices of completely ignoring security. If someone can reach it, the machine is an open book, and the possibility for mischief far exceeds merely pranking its owner with a hundred doppio espressos. We have recently seen the first widely publicised DDoS attack using IoT devices; it’s time manufacturers started taking this threat seriously.

If the prospect of coffee hacks interests you, take a look at our previous coverage.

[via /r/homeautomation]

Filed under: cooking hacks

Azure Backup hosts Ask Me Anything session

The content below is taken from the original (Azure Backup hosts Ask Me Anything session), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Azure Backup team will host a special Ask Me Anything session on /r/Azure on Thursday, October 20, 2016, from 9:00 AM to 1:00 PM PDT.

What’s an AMA session?

We’ll have folks from across the Azure Backup Engineering team available to answer any questions you have. You can ask us anything about our products, services or even our team!

Why are you doing an AMA?

We like reaching out and learning from our customers and the community. We want to know how you use Azure and Azure Backup and how your experience has been. Your questions provide insights into how we can make the service better. We did this last year, were excited about the questions and feedback we received, and we are doing it again.

Who will be there?

You, of course! We’ll also have PMs and Developers from the Azure Backup team participating throughout the day.
Have any questions about the following topics? Bring them to the AMA.

•    Backup of Azure IaaS VMs (both Classic and Resource Manager VMs)
•    Azure Backup Server
•    System Center Data Protection Manager (We announced VMware VM backup support a couple of months ago)
•    Microsoft Azure Recovery Services Agent

Why should I ask questions here instead of StackOverflow, MSDN or Twitter? Can I really ask anything?

An AMA is a great place to ask us anything. StackOverflow and MSDN have restrictions on which questions can be asked while Twitter only allows 140 characters. With an AMA, you’ll get answers directly from the team and have a conversation with the people who build these products and services.

Here are some question ideas:
•    What is Azure Backup? What is the cloud-first approach to backup?
•    How should I choose between Azure Backup Server and System Center Data Protection Manager for my application backup?
•    What are the pros/cons of using VM backup versus File/Folder backup?
•    How does the “Protected Instances” billing model work?
•    Why should I pick cloud over tape for long term retention?
•    What can be protected in Recovery Services vault?

Go ahead, ask us anything about our public products or the team. Please note, we cannot comment on unreleased features and future plans.

Join us! We’re looking forward to having a conversation with you!

Office 365: How Does the New Office Tap Feature Work in Word and Outlook 2016?

The content below is taken from the original (Office 365: How Does the New Office Tap Feature Work in Word and Outlook 2016?), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s Ask the Admin, I’ll look at the new Tap feature in Word and Outlook 2016.

Find information using Tap in Word and Outlook 2016 (Image Credit: Russell Smith)

Microsoft recognizes that the real power of cloud services, such as Office 365, lies not necessarily in the client apps that we use to access and work with data, but in the ability to tap into data and use analytics and business intelligence (BI) to get insights for making business decisions.

Often knowledge workers don’t need complex BI or analytics, but just access to company data that has already been presented in a document so that it can be reused somewhere else. Sounds like it ought to be a simple task, right?

If you synchronize documents to your local device from SharePoint or OneDrive for Business, you can leave Office and use Cortana or File Explorer to search for information, open the desired document, and then copy and paste the text, object, or graphic once you’ve located it in the file. Alternatively, you can search files using the Office 365 web portal. It’s not that hard, but there are a lot of steps in this process.

Office Tap is available for Office 365 users with Business Premium, Enterprise E3, or Enterprise E5 subscriptions, and aims to provide an easier way to re-purpose frequently used information from Word, Excel, and PowerPoint documents. A list of recommended objects is presented when you open the Tap pane, but if Tap doesn’t recommend something useful, you can still run a more comprehensive search without leaving the Tap pane.

Tap is focused on searching for objects in documents, such as images and tables, that might be relevant. That’s not to say that text isn’t searched, but currently Tap doesn’t identify chunks of text that might be what you’re looking for. Documents with relevant text are included in the search results; you’ll just have to open the document manually and locate what you’d like to use. If an image or table is what you need, you can add the object from the Tap pane without opening the document in which the object is contained.

  • Tap can be accessed in Word and Outlook from the Insert tab on the ribbon. Just click Document Item in the Tap section of the tab to open the Tap pane.
  • Look through the list of recommended results and select an object to insert.
  • If Tap doesn’t surface anything useful, in the search box on the Tap pane, type what you want to search for and press ENTER.
  • The results will appear in the Tap pane.

Each document that contains relevant results is represented in the pane as a tile. If specific objects relevant to your search are surfaced, they will be represented as icons at the bottom of each tile, including a count of each object type. These objects can be expanded and viewed by clicking the arrow in the bottom left corner of the tile. Or you can open the document by selecting more options in the bottom right corner and clicking Open in Word.

Tap currently appears to be limited to searching OneDrive for Business, so information stored in SharePoint libraries won’t be surfaced. If you need to open a document from the Tap pane, it will be opened in a web browser and not using an Office app on your device. But going forward, Microsoft will no doubt improve Tap to make it easier to use.

The post Office 365: How Does the New Office Tap Feature Work in Word and Outlook 2016? appeared first on Petri.

Azure PowerShell 3.0.0–Highlights and breaking changes

The content below is taken from the original (Azure PowerShell 3.0.0–Highlights and breaking changes), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure PowerShell is a set of PowerShell cmdlets which assist you in managing your assets in Azure using Azure Resource Manager (ARM) and Azure Service Management (RDFE).  Azure PowerShell 3.0.0 adds various improvements and fixes across multiple Azure resources; however, in accordance with semantic versioning, the introduction of a new major revision indicates breaking changes in a small subset of our cmdlets.  You can install the cmdlets via your favorite installation path indicated in the Azure PowerShell 3.0.0 release notes.

Resource improvements

ApiManagement

  • Enabled support for importing and exporting SOAP-based APIs (WSDL format)

    • Import-AzureRmApiManagementApi
    • Export-AzureRmApiManagementApi
  • Deprecated the cmdlet Set-AzureRmApiManagementVirtualNetworks. In its place, use the cmdlet Update-AzureRmApiManagementDeployment
  • Enabled support for ARM-based VNETs when configuring VPN via the cmdlet Update-AzureRmApiManagementDeployment
  • Introduced support for VpnType (None, External, Internal) to differentiate ApiManagement workloads for Internet and Intranet
  • Fixed PowerShell issues

Batch

  • Added new cmdlet for reactivating tasks

    • Enable-AzureBatchTask
  • Added new parameter for application packages on job manager tasks and cloud tasks
    • New-AzureBatchTask -ApplicationPackageReferences
  • Added new parameters for job auto termination
    • New-AzureBatchJob -OnAllTasksComplete -OnTaskFailure
    • New-AzureBatchJob -ExitConditions

ExpressRoute

  • Added a new service key parameter to the return object when the provider lists all cross connections

    • Get-AzureCrossConnectionCommand

MachineLearning

  • Get-AzureRmMlWebService supports paginated response
  • Reminds users that the Get-AzureRmMlWebService "Name" parameter must be used together with the "ResourceGroupName" parameter

Network

  • Added new cmdlet to get application gateway backend health

    • Get-AzureRmApplicationGatewayBackendHealth
  • Added support for creating the UltraPerformance SKU
    • New-AzureRmVirtualNetworkGateway -GatewaySku
    • New-AzureVirtualNetworkGateway -GatewaySku

RemoteApp

  • Added cmdlets to enable User Disk and Gold Image Migration feature

    • Export-AzureRemoteAppUserDisk
    • Export-AzureRemoteAppTemplateImage

SiteRecovery

  • New cmdlets have been added to support one-to-one mapping with service objects.

    • Get-AzureRmSiteRecoveryFabric
    • Get-AzureRmSiteRecoveryProtectableItem
    • Get-AzureRmSiteRecoveryProtectionContainerMapping
    • Get-AzureRmSiteRecoveryRecoveryPoint
    • Get-AzureRmSiteRecoveryReplicationProtectedItem
    • Get-AzureRmSiteRecoveryServicesProvider
    • New-AzureRmSiteRecoveryFabric
    • New-AzureRmSiteRecoveryProtectionContainerMapping
    • New-AzureRmSiteRecoveryReplicationProtectedItem
    • Remove-AzureRmSiteRecoveryFabric
    • Remove-AzureRmSiteRecoveryProtectionContainerMapping
    • Remove-AzureRmSiteRecoveryReplicationProtectedItem
    • Remove-AzureRmSiteRecoveryServicesProvider
    • Set-AzureRmSiteRecoveryReplicationProtectedItem
    • Start-AzureRmSiteRecoveryApplyRecoveryPoint
    • Update-AzureRmSiteRecoveryServicesProvider
  • The following cmdlets have been modified to support one-to-one mapping with service objects.
    • Edit-AzureRmSiteRecoveryRecoveryPlan
    • Get-AzureRmSiteRecoveryNetwork
    • Get-AzureRmSiteRecoveryNetworkMapping
    • Get-AzureRmSiteRecoveryProtectionContainer
    • Get-AzureRmSiteRecoveryStorageClassification
    • Get-AzureRmSiteRecoveryStorageClassificationMapping
    • Start-AzureRmSiteRecoveryCommitFailoverJob
    • Start-AzureRmSiteRecoveryPlannedFailoverJob
    • Start-AzureRmSiteRecoveryTestFailoverJob
    • Start-AzureRmSiteRecoveryUnplannedFailoverJob
    • Update-AzureRmSiteRecoveryProtectionDirection
    • Update-AzureRmSiteRecoveryRecoveryPlan
  • HUB support added to Set-AzureRmSiteRecoveryReplicationProtectedItem.
  • Deprecation warning introduced for cmdlets/parameter sets that do not comply with the SiteRecovery service object model.

Breaking changes

Data Lake Store

The following cmdlets were affected this release (PR 2965):

Get-AzureRmDataLakeStoreItemAcl (Get-AdlStoreItemAcl)

  • This cmdlet was removed and replaced with Get-AzureRmDataLakeStoreItemAclEntry (Get-AdlStoreItemAclEntry).
  • The old cmdlet returned a complex object representing the access control list (ACL). The new cmdlet returns a simple list of entries in the chosen path’s ACL.
# Old
Get-AdlStoreItemAcl -Account myadlsaccount -Path /foo

# New
Get-AdlStoreItemAclEntry -Account myadlsaccount -Path /foo

Get-AzureRmDataLakeStoreItemAclEntry (Get-AdlStoreItemAclEntry)

  • This cmdlet replaces the old cmdlet Get-AzureRmDataLakeStoreItemAcl (Get-AdlStoreItemAcl).
  • This new cmdlet returns a simple list of entries in the chosen path’s ACL, with type DataLakeStoreItemAce[].
  • The output of this cmdlet can be passed in to the -Acl parameter of the following cmdlets:
    • Remove-AzureRmDataLakeStoreItemAcl
    • Set-AzureRmDataLakeStoreItemAcl
    • Set-AzureRmDataLakeStoreItemAclEntry
# Old
Get-AdlStoreItemAcl -Account myadlsaccount -Path /foo

# New
Get-AdlStoreItemAclEntry -Account myadlsaccount -Path /foo

Remove-AzureRmDataLakeStoreItemAcl (Remove-AdlStoreItemAcl), Set-AzureRmDataLakeStoreItemAcl (Set-AdlStoreItemAcl), Set-AzureRmDataLakeStoreItemAclEntry (Set-AdlStoreItemAclEntry)

  • These cmdlets now accept DataLakeStoreItemAce[] for the -Acl parameter.
  • DataLakeStoreItemAce[] is returned by Get-AzureRmDataLakeStoreItemAclEntry (Get-AdlStoreItemAclEntry).
# Old
$acl = Get-AdlStoreItemAcl -Account myadlsaccount -Path /foo
Set-AdlStoreItemAcl -Account myadlsaccount -Path /foo -Acl $acl

# New
$aclEntries = Get-AdlStoreItemAclEntry -Account myadlsaccount -Path /foo
Set-AdlStoreItemAcl -Account myadlsaccount -Path /foo -Acl $aclEntries

ApiManagement

The following cmdlets were affected this release (PR 2971):

New-AzureRmApiManagementVirtualNetwork

  • The required parameters to reference a virtual network have changed from SubnetName and VnetId to SubnetResourceId, in the format /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ClassicNetwork/virtualNetworks/{virtualNetworkName}/subnets/{subnetName}
# Old
$virtualNetwork = New-AzureRmApiManagementVirtualNetwork -Location <String> -SubnetName <String> -VnetId <Guid>

# New
$virtualNetwork = New-AzureRmApiManagementVirtualNetwork -Location <String> -SubnetResourceId <String>

Deprecating Cmdlet Set-AzureRmApiManagementVirtualNetworks

  • The cmdlet is being deprecated because there was more than one way to set the virtual network associated with an ApiManagement deployment.
# Old
$networksList = @()
$networksList += New-AzureRmApiManagementVirtualNetwork -Location $vnetLocation -VnetId $vnetId -SubnetName $subnetName
Set-AzureRmApiManagementVirtualNetworks -ResourceGroupName "ContosoGroup" -Name "ContosoApi" -VirtualNetworks $networksList

# New
$masterRegionVirtualNetwork = New-AzureRmApiManagementVirtualNetwork -Location <String> -SubnetResourceId <String>
Update-AzureRmApiManagementDeployment -ResourceGroupName "ContosoGroup" -Name "ContosoApi" -VirtualNetwork $masterRegionVirtualNetwork

Network

The following cmdlets were affected this release (PR 2982):

New-AzureRmVirtualNetworkGateway

  • Description of what has changed: the Boolean parameter -ActiveActive has been removed, and the switch parameter -EnableActiveActiveFeature has been added for enabling the Active-Active feature on a newly created virtual network gateway.
# Old 
# Sample of how the cmdlet was previously called
New-AzureRmVirtualNetworkGateway -ResourceGroupName $rgname -name $rname -Location $location -IpConfigurations $vnetIpConfig1,$vnetIpConfig2 -GatewayType Vpn -VpnType RouteBased -EnableBgp $false -GatewaySku HighPerformance -ActiveActive $true

# New
# Sample of how the cmdlet should now be called
New-AzureRmVirtualNetworkGateway -ResourceGroupName $rgname -name $rname -Location $location -IpConfigurations $vnetIpConfig1,$vnetIpConfig2 -GatewayType Vpn -VpnType RouteBased -EnableBgp $false -GatewaySku HighPerformance -EnableActiveActiveFeature

Set-AzureRmVirtualNetworkGateway

  • Description of what has changed: the Boolean parameter -ActiveActive has been removed, and two switch parameters, -EnableActiveActiveFeature and -DisableActiveActiveFeature, have been added for enabling and disabling the Active-Active feature on a virtual network gateway.
# Old
# Sample of how the cmdlet was previously called
Set-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gw -ActiveActive $true
Set-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gw -ActiveActive $false  

# New
# Sample of how the cmdlet should now be called
Set-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gw -EnableActiveActiveFeature
Set-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gw -DisableActiveActiveFeature

Intel is shipping an ARM-based FPGA. Repeat, Intel is shipping an ARM-based FPGA

The content below is taken from the original (Intel is shipping an ARM-based FPGA. Repeat, Intel is shipping an ARM-based FPGA), to continue reading please visit the site. Remember to respect the Author & Copyright.

Intel’s followed up on its acquisition of Altera by baking a microprocessor into a field-programmable gate array (FPGA).

The Stratix 10 family brings to fruition Chipzilla’s long-rumoured desire to put a processor core into an FPGA, although it’s an ARM core rather than an x86 that ends up in the 14nm-process part.

The chip is part of the company’s push beyond its stagnating PC-and-servers homeland into emerging markets like high-performance computing and software-defined networking.

Intel says the quad-core 64-bit ARM Cortex-A53 processor helps position the device for “high-end compute and data-intensive applications ranging from data centres, network infrastructure, cloud computing, and radar and imaging systems.”

Compared to the Stratix V, Altera’s current generation before the Chipzilla slurp, Intel says the Stratix 10 has five times the density and twice the performance; 70 per cent lower power consumption at equivalent performance; 10 Tflops (single precision); and 1 TBps memory bandwidth.

The devices will be pitched at acceleration and high-performance networking kit.

The Stratix 10 “Hyperflex architecture” uses bypassable registers – yes, they’re called “Hyper-Registers” – which are associated with individual routing segments in the chip, and are available at the inputs of “all functional blocks” like adaptive logic modules (ALMs), embedded memory blocks, and digital signal processing (DSP) blocks.

Designs can bypass individual Hyper-Registers, so design tools can automatically choose the best register location. Intel says this means “performance tuning does not require additional ALM resources … and does not require additional changes or added complexity to the design’s place-and-route.”

The company reckons the design also cuts down on on-chip routing congestion.

There’s more on the architecture in this white paper.

Oh, and it’s got an on-chip ARM core. Did we mention that? ®

The People, Talks, and Swag of Open Hardware Summit

The content below is taken from the original (The People, Talks, and Swag of Open Hardware Summit), to continue reading please visit the site. Remember to respect the Author & Copyright.

Friday was the 2016 Open Hardware Summit, a yearly gathering of people who believe in the power of open design. The use of the term “summit” rather than “conference” is telling. This gathering brings together a critical mass of people running hardware companies that adhere to the ideal of “open”, but this isn’t at the exclusion of anyone — all are welcome to attend. Hackaday has built the world’s largest repository of Open Hardware projects. We didn’t just want to be there — we sponsored, sent a team of people, and thoroughly enjoyed ourselves in the process.

Join me after the break for a look at the talks, a walk through the swag bags, and a feel for what this wonderful day held.

Talks and Certification

The big news at the conference is the unveiling of the Open Hardware Certification. [Brian Benchoff] wrote a very detailed explanation of the certification which you need to check out. But there are two big takeaways: you can certify your hardware which includes a unique identifier for people to look for to ensure they know what they are purchasing. If you have a product ready you should certify it before the end of October. That way you will be able to secure a very low number as far as unique identifiers go.

Since this is an overview post I’m not going to go too in-depth about the talks. But here are a couple of highlights.

I really enjoyed hearing [Eric Pan], CEO of Seeed Studio, talk about x.factory which is operated by Chaihuo makerspace. This sounds like the industrial equivalent of a hackerspace. It’s a communal factory, which seeks to bring together talent from different factories all over Shenzhen.

[Eric’s] example is that your startup is great at making technology-enhanced furniture, except for lacking the expertise of building great furniture. Conversely, Shenzhen has many furniture manufacturers who make great furniture but lack the expertise to incorporate electronics and other technology. Bring these two together and you have the new-economy knowledge of the technologists, with the hard-earned manufacturing knowledge and distribution network of the furniture company. Collaborations across industries will be very interesting to watch.

[Steve Hodges] is on the Microsoft team who worked with the BBC to develop the micro:bit. I think it’s pretty awesome that the UK is striving to make sure a bare circuit board passes through the hands of every student as they pass through the school system. [Steve] demoed the board live — showing the software emulator and flashing a build to the board. It’s cool to see the block editor used to build the program, then switch over to look at (and alter) the JavaScript behind the visual abstraction.

[Steve] mentioned that they are working to make micro:bit available in Europe this year, and in North America as soon as next year.

Goodie Bags

It’s common to get a goodie bag at a conference. This one is notable as it contains some freebies I will definitely be holding onto. There were a ton of stickers which I use to wallpaper the back of my workbench area so these are always appreciated. Hackaday’s contribution was sweet (sorry, couldn’t resist): a sucker, Tindie and Hackaday stickers, and a few propaganda cards.

But check out the PCBs that came in the collector’s bag. I love having breakout boards on hand. CircuitMaker included an empty Arduino shield which has bus board, SMD adapter, and small-pitch through-hole areas. I’m unlikely to use this with an Arduino but I will chop it up willy-nilly for needed projects. SparkFun sent along four SOIC-to-DIP adapters, always handy to have. Digikey included their PCB ruler. This thing is pretty awesome, and although I’ve seen it at other conferences I haven’t scored one for myself. I laughed at the “For Reference Only” disclaimer in the upper right. Seeed Studio also included a PCB reference ruler.

The two that I really loved are dev boards. First is the Open Source Business Card Holder from Screaming Circuits and Sunstone. I’ll never use this as a business card holder but it has a PIC18F46K22 with an NXP MMA8552Q 3-axis accelerometer. The I2C/SPI bus is broken out, as are the ICSP pins. The GPIO pins aren’t broken out, but there are two user buttons and 19 SMD LEDs. Unpopulate the resistors and there are your broken-out pins.

OSH Park made the second dev board possible: it’s [Piotr Esden-Tempski’s] 1Bitsy STM32F415 breakout board. It’s pretty awesome to see an ARM dev board included in everyone’s bag. I am certain to play around with this… I would imagine that a lot of the attendees will. If you got one of these in your bag and aren’t going to use it, make sure you take it (and the business card holder) along to the next hackerspace meeting as someone will be very happy to have these in hand.

This makes me think we need to run a contest challenging you to build around something you got as a freebie at a conference. We’re about to give away our own very awesome PCB-based item at the SuperConference.

More to Come

OSH Park hosted an epic Bring-a-Hack at their Portland headquarters the night before the Open Hardware Summit. There was also a really great happy hour when the talks wrapped at 5pm on Friday, an afterbar, and an after-afterbar. I got a look at really cool hardware during all of this and will be publishing more articles this week. Keep your eye on Hackaday for that.

A 3D upgrade that could make you care about Microsoft Paint again

The content below is taken from the original (A 3D upgrade that could make you care about Microsoft Paint again), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s hard to remember a time when Microsoft Paint wasn’t shorthand for something ham-handedly drawn on a free computer art program. And while the software giant has upgraded the app with some regularity in the 30-plus years since it was first introduced along with Windows 1.0, it will take a lot of doing to remove its association with poorly crafted memes.

It seems likely that the taint may never fully go away, but new leaks point to an upgrade for Windows 10 that may well cause dismissive parties to take another good long look at the program. A new video leaked online ahead of October 26’s big Windows 10 event points to a Paint packing some impressive new features.

“Hi, and welcome to Paint,” the video’s host begins atop a bed of fittingly chipper music. “We have completely redesigned the app and it is packed with cool new features.”

What follows is an impressive demo that incorporates new simple 3D design functionality into the perennial program, which could well be Microsoft’s way of laying the groundwork for user-created HoloLens content – something akin to an entry-level version of the Oculus Quill or Vive Tilt Brush.

The Verge took an early version of the application for a spin, and while it was lacking the most compelling 3D features (due, likely, to the fact that it dates back to May), the site seemed suitably impressed with the new program’s speed, bucking that old adage about watching Paint dry. Whatever the case, we’re expecting to see something much more fully functional at that event in a couple of weeks.