ultimate-settings-panel (5.2)

The content below is taken from the original (ultimate-settings-panel (5.2)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ultimate Settings Panel is an all-in-one settings solution for a multitude of configuration options in Windows 7, 8.1 and 10, Microsoft Office, Windows Server, and System Center Configuration Manager.

The GeoCities Cage at Exodus Communications ~1999 (x-post /r/serverporn)

The content below is taken from the original (The GeoCities Cage at Exodus Communications ~1999 (x-post /r/serverporn)), to continue reading please visit the site. Remember to respect the Author & Copyright.

http://bit.ly/2fNoI3a

A closer look at HPE’s ‘The Machine’

The content below is taken from the original (A closer look at HPE’s ‘The Machine’), to continue reading please visit the site. Remember to respect the Author & Copyright.

Analysis HPE is undertaking the most determined and ambitious redesign of server architecture in recent history in the shape of The Machine.

We’ll try to provide what Army types call a sitrep about The Machine, HPE’s “completely different” system: its aims, its technology and its situation. Think of this as a catch-up article about this different kind of server.

The Machine is being touted as a memory-driven computer in which a universal pool of non-volatile memory is accessed by large numbers of specialised cores, and in which data is not moved from processor (server) to processor (server), but instead stays put while different processors are brought to bear on either all of it or subsets of it.

Aims include not moving masses of data to servers across relatively slow interconnects, and gaining the high processing speed of in-memory computing without using expensive DRAM. The hoped-for main benefit is a quantum leap in compute performance and energy efficiency, providing the ability to extend computation into new workloads as well as to speed up analytics, HPC and other existing workloads.

It involves developments at virtually every level of server construction, from chip design through systems-on-chip, silicon photonics chips and message protocols, server boards, CPU-memory access and architectures, chassis, network fabrics and operating system code, to application stacks in which IO may be completely redesigned.

There is a real chance HPE may have over-reached itself and that, even if it does deliver The Machine to the market, hidebound users and suspicious developers may not adopt it.

The Machine is an extraordinary high-stakes bet by HPE, so let’s try and assess the system and its state here.

Core concepts

The Machine is basically a bunch of processors accessing a universal memory pool via a photonics interconnect.


Basic concept for The Machine

We can envisage five technology attributes of The Machine:

  1. Heterogeneous specialised core processors or nodes
  2. Photonics-based CPU to universal memory pool interconnect
  3. Universal non-volatile memory pool
  4. New operating system
  5. Software programming scheme

The cores

The cores are processor cores focussed on specific workloads, meaning they are not necessarily x86 cores. They could be Atom, ARM or something else, or a mix, but probably single-socket and multi-core.

A processor in The Machine is instantiated on a SOC (System On a Chip) along with some memory and photonic interconnect. The SOC, which is a computational unit, could link to eight DRAM devices (probably DIMMs, in a mocked-up system we discuss later). The interconnect goes to a fabric switch and then on to shared memory.


Note: a SOC on one node talks via the fabric (through bridges or switches) to persistent memory on the other nodes without involving the destination node’s SOC.

Photonics is an integration of silicon and lasers to use light signals in a networking fabric that is faster than pumping electrons down a wire.

The Machine’s SOCs run a stripped-down Linux and interconnect to a massive pool of universal memory through the photonics fabric.

This memory is intended to use HPE’s Memristor technology, which is hoped to provide persistent storage, memory and high-speed cache functions in a single, cost-effective device.

When HPE talks of a massive pool it has hundreds of petabytes in mind, and this memory is both fast and persistent.

Accessing memory

We should not think of The Machine’s processors accessing memory in the same way a current server’s x86 processor accesses its directly connected DRAM via intermediate on-chip caches, with the DRAM being volatile and having its contents fetched from mass storage – disk or SSD. This is a memory hierarchy scheme.

The Machine intends to collapse this hierarchy. Its memory was originally going to be made from Memristor technology, which would replace a current server’s on-chip caches and DRAM, but the delayed timescale for that meant a stop-gap scheme involving DRAM and Phase-Change Memory appeared in April 2015. HP CTO Martin Fink, the father of The Machine, said the system would be delivered in a three-phase approach.

It would still be a memory-driven system and use Linux:

  1. Phase 1 (version 0.9): a DRAM-based starting system, with, say, a 320TB pool of DRAM in the rack-level starter system that emulates persistent memory.
  2. Phase 2: a Machine using Phase Change Memory, a non-volatile storage-class memory.
  3. Phase 3: the Memristor-based system.

Phase 1 is a working prototype, with phase 2 an actual but intermediate system, and phase 3 the full Memristor-based system.

SanDisk (now WDC) ReRAM became part of HPE’s memory-driven computing ideas in October 2015. We imagine that its ReRAM replaced the Phase Change Memory notion in the phase 2 Machine.

The phase 2 machine has server nodes (CPU + DRAM + optical Interconnect) sharing a central pool of slower-than-DRAM non-volatile memory, with potentially thousands of these server nodes (SOCs). HPE’s existing Apollo chassis could be used for it.

Phase 3 would then have a Memristor shared central pool. If the server nodes retain their local DRAM then HPE will have failed in its attempt to collapse the memory hierarchy to just a Memristor pool. We would either still have a 2-level memory hierarchy or a NUMA scheme.

Productisation timescale

The Machine has been beset by delays. For example, HP Labs director Martin Fink told the HP Discover 2015 audience at Las Vegas that “a working prototype of The Machine would be ready in time for next year’s event.” Bits of one have been seen. At the time of the 2015 Discover event Fink posted a blog, “Accelerating The Machine”, which was actually about The Machine’s schedule delay.

He wrote at the time:

We’re building hardware to handle data structures and run applications that don’t exist today. Simultaneously, we’re writing code to run on hardware that doesn’t exist today. To do this, we emulate the nascent hardware inside the best systems available, such as HPE Superdome X. What we learn in hardware goes into the emulators so the software can improve. What we learn in software development informs the hardware teams. It’s a virtuous cycle.

He showed a mock-up of a system board in this video:

Youtube Video

We see here a node board with elements labelled by our sister publication The Next Platform. It has produced a schematic diagram based on its understanding of the mocked-up node board.

From it we can see there is a processor SOC, local DRAM, non-volatile memory accessed by media controllers and off-board communications via an optical adapter controlled by a fabric switch.

Is the Machine’s shared memory pool created by combining the persistent memory on these node boards? It appears so.

We can say, then, that if this is the case, access to local, on-board persistent memory will be faster than access through the fabric switch and optical link to off-board persistent memory, and we have a NUMA-like situation to deal with.

It also appears that the SOCs contain a processor and its cores and also cache memory; so, if this is the case, we have a CPU using a memory hierarchy of cache, DRAM, local NVM and remote NVM – four tiers.

Again, if this is the case, then The Machine’s use of the Memristor-collapses-all-the-memory-storage-tiers idea is exposed as complete nonsense. This is a lot of supposition, but it is backed up by HPE Distinguished Technologist Keith Packard.

Machine hardware

More information on The Machine’s hardware can be found in an August 2015 discussion of The Machine’s then-hardware by Keith Packard here. It talks of 32TB of memory in 5U, 64-bit ARM processor-based nodes. All of the nodes will be connected at the memory level so that every single processor can do a load or store instruction to access memory on any system, we’re told. The nodes have “a 64-bit ARM SoC with 256GB of purely local RAM along with a field-programmable gate array (FPGA) to implement the NGMI (next-generation memory interconnect) protocol.”


Leveraging OneNote: Build a Project Dashboard

The content below is taken from the original (Leveraging OneNote: Build a Project Dashboard), to continue reading please visit the site. Remember to respect the Author & Copyright.


Projects can fall into a state of limbo where you are waiting for something outside your control to occur. When a project falls into limbo, it stops being tracked and can fall behind. You would think that Microsoft, the productivity company, would have a great solution to this, but it does not. I use OneNote to build a project dashboard and keep projects on my radar and out of limbo.

 

 

The static nature of OneNote is a double-edged sword. Building a dashboard to show project status can easily be done, but keeping that dashboard up-to-date takes work. Ideally, you will have your project notebook shared with your team so that they can keep their sections up-to-date. Creating a section in your project notebook for “status” lets different departments describe their status in whatever way fits them best.

Build a Project Dashboard

OneNote Project Dashboard

Since OneNote is essentially digital paper, your project dashboard can look however you decide. This can be a blessing and a curse, because too many options can be overwhelming, while not enough options can restrict the look and functionality. OneNote 2016 follows the Office paradigm when it comes to styles. I use the heading styles to build a single dashboard for all my teams.


My project dashboard helps me know the status of all my different projects. Each project has a heading, and under the heading are activities. Each activity should have several tasks and a clear outcome (e.g., Design Website V1 or Outline Research Paper on Otters). Each activity should have a status, and its title should link to a OneNote page. Under the activity I list the current ongoing work (e.g., sketching concepts, or Bob is looking up where otters live).

Link to Work Elsewhere in Your Notebooks

OneNote Link Pages

Linking pages together makes the dashboard extremely functional and immediately useful. You can make the status of an activity whatever you want. My project activities each have one of the following statuses: active, waiting, on hold, or done. After an activity has been complete for long enough, I delete it from my project status page.

Work on projects should be done in their own sections. The project summary page is a place where all your projects are summarized. Linking to the working pages makes them accessible and organized. Ideally, Microsoft would build more dynamic summary pages, but that is currently not possible.

You can use the Find Tags tool in OneNote 2016 to collect all your to-do items, then group them by section and copy/paste them to your projects summary page. The Find Tags tool will automatically link the to-do items, making it easy to jump to an item and check it off. However, these links do not sync the checkbox status, meaning you must check the box both in the summary and on the working page.

Make Your Own Dashboard Design

Your project dashboard can take many different forms. I made mine to fit how I work. If you want to organize your activities by person to help you manage their time, that would work, too. A dashboard is intended to give you an overview of all your responsibilities and quick access to them. If your responsibilities need to be broken out onto different dashboards, such as a daily dashboard and an open-items dashboard, then so be it. I have found my project dashboard helps me build a complete daily checklist.


How do you keep your projects organized? Do you have a tool to keep things from falling into limbo? Try out this dashboard tool and let me know how it works. If you have any improvements, I’d be interested in hearing them. Where do you think Microsoft needs to improve its productivity focus with projects?

The post Leveraging OneNote: Build a Project Dashboard appeared first on Petri.

Britain’s wartime codebreaking base could host a national cyber security college

The content below is taken from the original (Britain’s wartime codebreaking base could host a national cyber security college), to continue reading please visit the site. Remember to respect the Author & Copyright.

Plans are afoot to build the U.K.’s first National College of Cyber Security at Bletchley Park, the birthplace of the country’s wartime codebreaking efforts.

It was at Bletchley Park that Colossus, the world’s first programmable electronic computer, was built during World War II to crack the Lorenz code used by the German high command. Bletchley is also where Alan Turing developed some of his mathematical theories of computing while working on breaking the Enigma code.

After the war the site fell into disrepair, but parts of it have been restored and now house the U.K.’s National Museum of Computing.

Other buildings at Bletchley Park, though, are still vacant and it is in one of those that Qufaro, a company founded only last year, hopes to set up a cyber security school.

The company’s directors include Margaret Sale, widow of one of the museum’s founders, and Alastair MacWillson, an IT security consultant.

Qufaro’s ambitious plan for the site includes offering virtual courses on cyber security, and a specialist school offering young people a path to become cyber security professionals.

It hopes to obtain a mix of government funding and corporate sponsorship for the school, and plans to run conferences at the site during school holidays to raise additional funds.

The project has the support of Bletchley Park Science and Innovation Center, which leases the buildings from owner BT, according to a Qufaro spokeswoman.

You can sign up for email updates on the project at Qufaro’s website — although curiously the company insists on obtaining your full name, gender and date of birth in exchange.

If Turing, famously persecuted — and prosecuted — for his homosexuality, were still alive, he might have taken some comfort from the fact that societal attitudes to sexuality and gender have evolved to the point where Qufaro provides “non-binary/third gender” as a possible response to its questionnaire.

“As a country we don’t know a large amount about our cyber security talent,” the spokeswoman explained. “This information allows us to learn more about the type of people that might be right for the sector.”

ZoneSavvy taps big data to help SMBs find best sites for businesses

The content below is taken from the original (ZoneSavvy taps big data to help SMBs find best sites for businesses), to continue reading please visit the site. Remember to respect the Author & Copyright.

Location, location, location: As the old joke goes, those are the three keys to business success. Now, with big data analysis, corporations can be smarter than ever before about where to open up new offices or businesses.

But what if you run a mom-and-pop shop, or you’re dreaming of quitting your corporate job and opening a boutique? Even medium-size businesses do not have the money to spend on the sort of systems and analysis teams that corporate behemoths use to locate new businesses.

This is where ZoneSavvy, a new website created by software engineer Mike Wertheim, could help. The site is straightforward: You enter a business type, the ZIP code of the general area where you want to locate the business, and the distance from that ZIP code you are willing to consider. ZoneSavvy then gives you suggestions for which nearby neighborhoods would be the best locations for your business.

ZoneSavvy does this by sifting through and cross-referencing demographic, real estate, and economic information. It looks at the age and income of people living in your target area, the price of commercial real estate, and what types of businesses are located there. By comparing that information with data from other areas, it determines which types of businesses are popular in similar neighborhoods and under-represented in the area you’re interested in.

For example, if you’re thinking of opening up a dance club in New York City within a 10-mile radius of midtown Manhattan, ZoneSavvy will look at neighborhoods with the same profile as your target area. It will then tell you which neighborhoods in the vicinity of your target ZIP code have no dance clubs, but are similar to areas where dance clubs are clustered. In this way, you can not only identify the types of neighborhoods where dance clubs prosper, but also which neighborhoods of that type currently offer no competition.

ZoneSavvy also lets commercial property owners and real estate agents do the reverse: enter an address of a property for which they are trying to find a tenant. The site will then suggest which types of businesses would most likely succeed in that neighborhood.

This would be especially useful to real estate agents who are having trouble finding tenants for a property, by giving them ideas for the type of tenants they should be marketing to and additional information they can use in pitching the property, said Wertheim.

The main thrust of the site, though, is helping people figure out where to locate new businesses.

“Big retailers, companies like Burger King and McDonald’s, spend a lot of time and money figuring out where to locate new businesses and franchises,” noted Ray Wang, founder of Constellation Research. “They have corporate real estate offices, facility management staff, planners and huge databases. Small businesses don’t have anything like that.”

Several real estate agents agreed that up to now, they haven’t seen anything on the market like ZoneSavvy. “The site sounds like it would really help narrow down the neighborhoods where you should be looking at for your business,” said Carlo Caparruva, managing director of the commercial practice at Keller Williams Mid-Town Direct Realty in Maplewood, New Jersey.

But business owners shouldn’t rely completely on ZoneSavvy, Caparruva said. “You’ll still need to do due diligence,” he stressed.

Just as real estate site Zillow may not indicate why a home may be priced well below the average sale price of houses in a particular neighborhood, ZoneSavvy may not give you a complete understanding of why a certain type of business is underrepresented in a given neighborhood. There may be negative factors that the system does not take into account.

ZoneSavvy includes government-produced data as well as information available online, Wertheim said.

Wertheim, who is also a senior software engineer at LinkedIn, wrote the app that the system uses in Java and is hosting the site on AWS. He plans to use customer-support contractors as the site attracts users. The service costs a flat rate of US$39.95 per month, or $29.95 per month when paying for multiple months of use.

Microsoft Overhauls Office 365 Roadmap – For the Better

The content below is taken from the original (Microsoft Overhauls Office 365 Roadmap – For the Better), to continue reading please visit the site. Remember to respect the Author & Copyright.

Office 365 Roadmap

In a contribution posted to the “Change Alerts” group in the Microsoft Technical Community, Brian Levenson, an Office 365 product marketing manager, said that plans were in place to make improvements to the Office 365 roadmap.

A Roadmap to Happiness

The Office 365 roadmap is Microsoft’s public listing of development items in progress for all applications running in the service. It is divided into:

  • Launched: Recent updates that are fully available across Office 365
  • Rolling Out: Updates that are currently being made available to Office 365 tenants. Not all customers yet have access to this functionality.
  • In development: Updates that are currently in development or being tested. First Release tenants are likely to have access to some of the functionality in this category.
  • Cancelled: Updates that Microsoft previously announced that are no longer in development or are “indefinitely delayed”.
  • Previously Released: Updates that are available as part of the base functionality of Office 365.

As you’d imagine, given that Office 365 is over five years old now, the last category is the largest. However, at the time of writing, there are some 154 updates in development and 55 rolling out, quantities that underline the work required for tenant administrators to keep abreast of what’s changing inside Office 365.

The Happiness of Roadmap Items

The Roadmap has been around for a couple of years now and it’s an extremely useful tool for tracking change within Office 365. That is, if you understand the text provided by Microsoft to explain each roadmap item. Take the example in Figure 1, which describes the new Files view just launched for Office 365 Groups.


Figure 1: A typical description for an Office 365 roadmap item (image credit: Tony Redmond)

As you’d expect, the text describing the update is upbeat and positive. Change is good. Improvement is coming. All is well in the world. What something like this does not explain is the downside of change and how it might affect the way users work.

But after a while you become used to the happy tone and learn to interpret what’s coming down the line. Configuring some user accounts for First Release is a good idea because you can then match the text against reality and come up with your own description of what’s happening.


What’s New

Microsoft’s new idea is to assign unique feature identifiers to roadmap items. This doesn’t sound earth-shattering, and indeed it is not. However, it is a step forward because no easy way exists today to focus on a specific update and link it across the roadmap, the announcement (on blogs.office.com), and the Message Center (Figure 2) in the Office 365 Admin Center (note to self: I should be better at reading these announcements). Using feature identifiers to correlate announcements in the Message Center with roadmap items is goodness.


Figure 2: Updates listed in the Office 365 Admin Center (image credit: Tony Redmond)

Despite frequent requests, Microsoft will still not provide an RSS feed for the Office 365 Roadmap (an unofficial RSS feed is available).

As part of the transition, Microsoft is moving to use a new database to track features and plans to refresh the web site interface so that it works well on both PCs and mobile devices. Perhaps the database change will be the catalyst to generate an RSS feed. As it often takes some time before new features show up in certain Office 365 plans (including those available for government and education), it would also be useful if Microsoft indicated what features are available in what plans.

The transition is scheduled for early December. At that point, the web site will switch to a new home in products.office.com instead of its current location in the Office 365 FastTrack site. No action is necessary for users as the magic of redirection will point you to the new location.


No More Snippets

As anyone who works with Office 365 knows, the devil is very much in the detail when it comes to understanding how the service actually works. Over the last few months, I have published weekly “snippets”, or collections of short notes about different aspects of Office 365. However, according to web site traffic, it seems that snippets get less traffic than regular articles. This is possibly because it is harder for search engines to locate content when it’s included in a list of items.

In any case, snippets are no more. I shall no longer bring you news about the darker corners of Office 365, such as why OWA decided that floating icons (Figure 3) were a good idea for an entire day this week.


Figure 3: OWA’s floating icons (image credit: Tony Redmond)

Nor, for instance, will I be able to ponder why Delve throws up strange results from time to time, including its most recent quirk where “app@sharepoint” is listed as the author of all my documents (Figure 4). Nor shall I complain about changes such as the decision to remove the “Quick Links” web part from the modern team site home page, which seems to be a weird and backwards step. It’s been a strange week so far.


Figure 4: App@sharepoint makes an appearance in Delve (image credit: Tony Redmond)


Instead of all this frippery, you’ll see regular, straightforward articles from me about Teams, and how many active users Office 365 has, and the battle between G Suite and Office 365. Or indeed, the majority of this article.

It’s all for the best.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Microsoft Overhauls Office 365 Roadmap – For the Better appeared first on Petri.

AWS or Private Cloud or both, what’s your strategy?

The content below is taken from the original (AWS or Private Cloud or both, what’s your strategy?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Well we just opened up a hornet’s nest didn’t we?! Our recent announcement of the new Cisco UCS S-Series Storage Server definitely caused a stir in the industry. We launched into the emerging scale out storage market with a broad ecosystem of solution partners backed by amazing customers like Cirrity and Green Cloud who use […]

Missile tech helps boffins land drone on car moving at 50 km/h

The content below is taken from the original (Missile tech helps boffins land drone on car moving at 50 km/h), to continue reading please visit the site. Remember to respect the Author & Copyright.

Robotics boffins have landed an autonomous quadcopter on a car moving at 50 km/h and think doing so might just change the drone business.

As explained at arXiv by a group of researchers from the Mobile Robotics and Autonomous Systems Laboratory at Polytechnique Montreal, unmanned aerial vehicles (UAVs, aka drones) look handy for lots of delivery tasks, but “their more widespread usage still faces many hurdles, due in particular to their limited range and the difficulty of fully automating the deployment and retrieval.”

Instead of flying home, the authors envisage drones hitching a ride on buses, delivery trucks or boats, which could offer recharging facilities or take care of long-haul journeys that would otherwise be beyond a drone’s capabilities.

Another scenario the paper contemplates is search and rescue operations, in which “the synergy between ground and air vehicles could help save precious mission time and would pave the way for the efficient deployment of large fleets” of drones.

Nice idea, but what about the maths to help a drone touch down on a moving car? That’s what the paper considers in its discussion of a “Kalman filter”, an algorithm commonly used in robotics to estimate a moving target’s position and velocity from noisy measurements. The authors also put a technique called Proportional Navigation to work, noting that it “is most commonly known as a guidance law for ballistic missiles, but can also be used for UAV guidance.”
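For reference (our gloss, not a quote from the paper), the classic form of the Proportional Navigation law commands a lateral acceleration proportional to how quickly the line of sight to the target is rotating:

a_cmd = N × Vc × dλ/dt

where λ is the line-of-sight angle to the target, Vc is the closing velocity between drone and car, and N is a navigation gain, typically between 3 and 5. Driving the line-of-sight rotation towards zero puts the pursuer on a collision course with the target, which is exactly what you want when the “collision” is a landing.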

Landing on the car required it to be fitted with a flat surface, a target and a mobile phone that transmitted GPS data to the drone. But after plenty of experimentation the authors got the job done at speeds up to 50 km/h.

Regulators the world over have been cautious about UAVs, because they occupy contested airspace and, in delivery roles, offer the novel hazard of tat ordered from web bazaars becoming a gravity-assisted threat. It’s therefore probably safe to say we’re years away from drones dropping in on trucks as you drive to work. But it’s an interesting idea! ®


Botnet Tracker lets you track activity of live Botnets worldwide

The content below is taken from the original (Botnet Tracker lets you track activity of live Botnets worldwide), to continue reading please visit the site. Remember to respect the Author & Copyright.

The security landscape has undergone a rapid change in recent times. Cyber threats and malware infections have increasingly become a principal problem for security experts. Different research studies indicate that the use of Botnets is increasing at an alarming rate. In this post, we will take a look at some Botnet Trackers that can help you keep a tab on Botnet activity. But before we see them, let’s learn about a few things.

What is a Botnet?

A Botnet is a networked collection of compromised machines called bots (robots). It is largely used for conducting espionage operations and stealing sensitive information, with the compromised nodes controlled by a Botmaster. These machines are then used to carry out a concerted attack.

How do Botnets infect systems?

The techniques botnets use to infect other machines and recruit new bots are simple. Individual bots are dispersed geographically across the world and across the entire IP address space.

In most cases, the mode of infection is a host of social engineering tactics. In addition, thumb drives and other types of common media can be used to distribute bot code. The bot code is generally installed via the Autorun and Autoplay features on machines running Windows, so systems running the Windows OS are most vulnerable to Botnet attacks.

Drive-by downloads are another way botnets infect systems: a user visits a website and malware is downloaded by exploiting web browser vulnerabilities.

The use of plug-ins and add-ons in browsers has seen an upward trend in recent years. As a result, browser-based attacks have surfaced regularly and contributed significantly to the rise in infections via drive-by downloads.

Read: Botnet Removal Tools.

Botnet Tracker

Botnets are often designed with the specific intent of carrying out large-scale click fraud or Bitcoin mining. A Botnet Tracker is a tool that can be used to analyze a botnet’s malicious architecture and activity in real time.

Tracking botnets is not easy, since the power of a botnet is a function of its size, or the number of machines infected. Therefore, tracking botnets involves a multi-step strategy.

Different botnet detection tools and techniques are deployed in the process. For instance, websites dedicated to tracking some of the infamous botnets, such as Zeus Tracker, track the Zeus botnet’s Command & Control servers (hosts) around the world to provide users with a domain- and IP-blocklist. The statistics help reveal some useful information about the crimeware.

The main focus lies on providing system administrators an option to block well-known hosts and to detect and avoid infections in their networks. For this purpose, the Botnet tracker from TrendMicro offers several blocklists, in various formats and for different purposes.

Additionally, the tool from TrendMicro can help CERTs, ISPs and LEAs (law enforcement) track malicious hosts in their network or country that are online and running botnet code. Although the real power of a botnet is difficult to determine, implementing these strategies in combination can help identify the threat in the first instance and avert losses.


Lookingglasscyber.com displays a real-time map showing actual data from their threat intelligence feeds. It shows infections per second and live attack statistics, and tracks botnets like Sality, Mobile, Conficker, ZeroAccess, APT, Trojan, TinyBanker, Clicker, Ramdo, Shiz, Flashback, Sensor, and Dyre.


Visit malwaretech.com and click on the Connect button to see live botnets in action worldwide. This Botnet tracker allows you to track the activities of the Sality4, Kelihos, Necurs, Gozi and Mirai botnets.



Microsoft Invests in Quantum Computing Division with Key Appointments

The content below is taken from the original (Microsoft Invests in Quantum Computing Division with Key Appointments), to continue reading please visit the site. Remember to respect the Author & Copyright.

By WindowsITPro

Microsoft is making a significant investment in its quantum computing division with the addition of several new executives, including the appointment of long-time Microsoft executive Todd Holmdahl, who will lead the scientific and engineering effort.

Microsoft has also hired two leaders in the field of quantum computing, Leo Kouwenhoven and Charles Marcus. The company will soon bring on Matthias Troyer and David Reilly, two other leaders in the field.

The team will work to build a quantum computer while also creating the software that could run on it, Microsoft says in a blog post.

The investment comes months after Microsoft founder Bill Gates said that quantum cloud computing could arrive in as little as six years.

“Microsoft’s approach to building a quantum computer is based on a type of qubit – or unit of quantum information – called a topological qubit,” the company says. “Using qubits, researchers believe that quantum computers could very quickly process multiple solutions to a problem at the same time, rather than sequentially.”

Ultimately, Microsoft wants to “create dependable tools that scientists without a quantum background can use to solve some of the world’s most difficult problems… that could revolutionize industries such as medicine and materials science.”

Holmdahl previously worked on the development of the Xbox, Kinect and HoloLens, while Marcus and Kouwenhoven are both academic researchers who have been working with Microsoft’s quantum team for years. Microsoft has provided funding for an “increasing share” of the topological qubit research in their labs.

Marcus says that without cooperation between scientists and engineers, a quantum economy will never be realized.

“I knew that to get over the hump and get to the point where you started to be able to create machines that have never existed before, it was necessary to change the way we did business,” Marcus said in a statement. “We need scientists, engineers of all sorts, technicians, programmers, all working on the same team.”

This article was originally posted here at WindowsITPro.

Alibaba Cloud to Launch its First European, Middle East Data Centers as Part of Global Push

The content below is taken from the original (Alibaba Cloud to Launch its First European, Middle East Data Centers as Part of Global Push), to continue reading please visit the site. Remember to respect the Author & Copyright.

Brought to You by The WHIR

Alibaba’s cloud computing arm will close out the year with four new data centers, in a move that Alibaba Cloud is calling a “major milestone” in the Chinese company’s global expansion.

According to an announcement on Monday, the four data centers will open by the end of 2016 in the Middle East (Dubai), Europe, Australia and Japan, bringing its network to 14 locations around the world.

Alibaba Cloud started the rollout on Monday with the launch of its data center in Dubai. Alibaba said that with the launch, it will be the first major global public cloud services provider to offer cloud services from a local data center in the Middle East. According to Gartner, the public cloud services market in Middle East and North Africa region is projected to grow 18.3 percent in 2016 to $879.3 million, up from $743.1 million in 2015.

“Alibaba Cloud has contributed significantly to China’s technology advancement, establishing critical commerce infrastructure to enable cross-border businesses, online marketplaces, payments, logistics, cloud computing and big data to work together seamlessly,” Simon Hu, President of Alibaba Cloud said in a statement. “We want to establish cloud computing as the digital foundation for the new global economy using the opportunities of cloud computing to empower businesses of all sizes across all markets.”

The new locations offer access to various services such as data storage and analytics, and cloud security, the company says.

In its most recent earnings, Alibaba’s cloud unit revenue jumped 130 percent to 1.5 billion yuan in the quarter. The division currently has 651,000 paying customers.

Alibaba Cloud’s first European data center will be launched in partnership with Vodafone Germany in a facility based in Frankfurt.

Expanding its footprint in Asia-Pacific, Alibaba Cloud is opening a data center in Sydney by the end of 2016. It will have a dedicated team based in Australia to help build partnerships with local technology companies.

Finally, its Japanese data center is hosted by SB Cloud Corporation, a joint venture between Softbank and Alibaba Group.

“The four new data centers will further expand Alibaba Cloud’s global ecosystem and footprint, allowing us to meet the increasing demand for secure and scalable cloud computing services from businesses and industries worldwide. The true potential of data-driven digital transformation will be seen through globalization and the opportunities brought by the new global economy will become a reality,” Sicheng Yu, Vice President of Alibaba Group and General Manager of Alibaba Cloud Global said.

Fancy a wee quasi-DRAM? Supermicro bulks up server memory

The content below is taken from the original (Fancy a wee quasi-DRAM? Supermicro bulks up server memory), to continue reading please visit the site. Remember to respect the Author & Copyright.

Data is getting closer to compute. Supermicro’s X10DRU-i+ dual-socket server is available with 1TB or 2TB of application memory for analytics, database, and caching apps, thanks to being equipped with Diablo’s Memory1 flash DIMM modules, giving up to a 4x memory increase over DRAM-only servers.

The flash is treated as quasi-DRAM with the DMX software preemptively fetching data from the flash and shunting it to DRAM caching style. In effect the flash expands the memory capacity and makes the servers good for Big Data-style applications.

Diablo says the servers can provide up to 40TB of system memory in a single rack, with no changes to hardware or applications.

Memory1 is a DDR4 DIMM module with up to 128GB of flash capacity and Diablo Memory Expansion (DMX) software. DRAM DIMMs are installed in the system along with Memory1 modules, with a typical 8:1 flash-to-DRAM ratio.

These X10DRU-i+ dual-socket servers, called Fat Twin servers, have 24 DIMM slots and dual Xeon E5-2600 v4 processors. The flash is an onboard DRAM backing store, so to speak.


Supermicro X10DRU-i+ dual-socket server

The DMX software provides memory virtualisation, dynamic data tiering and data prediction plus flash endurance enhancement and performance tuning. We understand we might see doubled-up Memory1 capacity of 256GB, meaning 4TB servers, in 2017.

Supermicro is also developing its portfolio of hot-swap NVMe flash drive server products. Stifel MD Aaron Rakers writes that Supermicro thinks hot-swap, dual-port NVMe flash drives can be six times faster than SAS SSDs, and had more than 70 NVMe products available at the end of September. He says it is working on next-gen M.2 NVMe products that could support up to 12TB of flash capacity.

The X10DRU-i+ server mentioned above could transition to having 10 front-end NVMe drive slots instead of the 10 SAS ones currently available. That 10 x 2.5-inch drive bay space could be altered to support 20 new format NVMe flash drives.

Rakers notes:

Brocade has previously stated that it believes that NVMe-over-Fabrics may be as disruptive to the storage industry over the next few years as all-flash storage was over the prior few years. Also, Mellanox recently stated that it believes that NVMe-over-Fabrics, which will enable customers to build high-performance block-based storage and is the first new major block-based storage protocol in 15 years, has the potential to replace Fibre Channel.

The net-net here is that memory is getting bulked out with slower/cheaper-than-DRAM storage-class memory, SAS 2.5-inch SSDs are giving way to NVMe-connected flash drives which can use an M.2 card form factor, and Fibre Channel/iSCSI-connected SANs will give way to much faster, RDMA-based, NVMe over Fabrics-connected shared storage. Data is getting much closer to compute both in physical terms and in access-latency terms. ®


What is Google Cloud Deployment Manager and how to use it

The content below is taken from the original (What is Google Cloud Deployment Manager and how to use it), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Grace Mollison, Solutions Architect

Using Google Cloud Deployment Manager is a great way to manage and automate your cloud environment. By creating a set of declarative templates, Deployment Manager lets you consistently deploy, update and delete resources like Google Compute Engine, Google Container Engine, Google BigQuery, Google Cloud Storage and Google Cloud SQL. As one of the less well known features of Google Cloud Platform (GCP), let’s talk about how to use Deployment Manager.

Deployment Manager uses three types of files:

  • Configuration files, written in YAML, which describe the resources you want to deploy
  • Template files, written in Jinja2 or Python, which hold reusable pieces of configuration
  • Schema files, which describe the rules a configuration must meet if it uses a particular template

Using templates is the recommended method of using Deployment Manager, and requires a configuration file as a minimum. The configuration file defines the resources you wish to deploy and their configuration properties such as zone and machine type.

Deployment Manager supports a wide array of GCP resources. Here’s a complete list of supported resources and associated properties, which you can also retrieve with this gcloud command:

$ gcloud deployment-manager types list

Deployment Manager is often used alongside a version control system into which you can check in the definition of your infrastructure. This approach is commonly referred to as "infrastructure as code". It’s also possible to pass properties to Deployment Manager directly using the gcloud command, but that’s not a very scalable approach.

Anatomy of a Deployment Manager configuration

To understand how things fit together, let’s look at the set of files that are used to create a simple network with two subnets and a single deployed instance.

The configuration consists of three files:

  • net-config.yaml – configuration file
  • network.jinja – template file
  • instance.jinja – template file

You can use template files as logical units that break down the configuration into smaller and reusable parts. Templates can then be composed into a larger deployment. In this example, network configuration and instance deployment have been broken out into their own templates.

Understanding templates

Templates provide the following benefits and functionality:

  • Composability, making it easier to manage, maintain and reuse the definitions of the cloud resources declared in the templates. In some cases you may not want to recreate the end-to-end configuration as defined in the configuration file. In that case, you can just reuse one or more templates to help ensure consistency in the way in which you create resources.
  • Templates written in your choice of Python or Jinja2. Jinja2 is a simpler but less powerful templating language than Python. It uses the same syntax as YAML but also allows the use of conditionals and loops. Python templates are more powerful and allow you to programmatically generate the contents of your templates.
  • Template variables – an easy way to reuse templates by allowing you to declare the value to be passed to the template in the configuration file. This means that you can change a specific value for each configuration without having to update the template. For example, you may wish to deploy your test instances in a different zone to your production instances. In that case, simply declare within the template a variable that inherits the zone value from the master configuration file.
  • Environment variables, which also help you reuse templates across different projects and deployments. Examples of an environment variable include things like the Project ID or deployment name, rather than resources you want to deploy.

Here’s how to understand the distinction between the template and environment variables. Imagine you have two projects where you wish to deploy identical instances, but to different zones. In this case, name your instances based on the Project ID and Deployment name found from the environment variables, and set the zone through a template variable.
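As a rough illustration of that split (the resource name and property names here are our own, not taken from the article), a template might name an instance from the environment variables while taking its zone from a template variable:

resources:
- name: {{ env["project"] }}-{{ env["deployment"] }}-vm   # filled in automatically by Deployment Manager
  type: compute.v1.instance
  properties:
    zone: {{ properties["zone"] }}                        # supplied by the calling configuration file

The Project ID and deployment name come for free with every deployment, while the zone changes per configuration without touching the template.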

A Sample Deployment Manager configuration

For this example, we’ve decided to keep things simple and use templates written in Jinja2.

The network file

This file creates a network and its subnets whose name and range are passed through from the variable declaration in net-config.yaml, the calling configuration file.

The “for” subnet loop repeats until it has read all the values in the subnets property. The config file below declares two subnets with the following values:

Subnet name    IP range
web            10.177.0.0/17
data           10.178.128.0/17

The deployment will be deployed into the us-central1 region. You can easily change this by changing the value of the “region” property in the configuration file without having to modify the network template itself.
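The original listing of network.jinja is not reproduced here, but based on the description above it would look roughly like the following sketch (the resource layout and property names such as "subnets" and "range" are assumptions on our part):

# network.jinja - creates one custom-mode network plus one subnetwork per entry
# in the "subnets" property passed in from the calling configuration file.
resources:
- name: {{ properties["name"] }}
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: false   # custom mode; we declare the subnets ourselves
{% for subnet in properties["subnets"] %}
- name: {{ subnet["name"] }}
  type: compute.v1.subnetwork
  properties:
    network: $(ref.{{ properties["name"] }}.selfLink)
    region: {{ properties["region"] }}
    ipCidrRange: {{ subnet["range"] }}
{% endfor %}

Each pass through the loop emits one subnetwork resource, attached to the network by a $(ref...) reference and placed in the region passed down from the configuration file.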

The instance file

The instance file, in this case "instance.jinja," defines the template for an instance whose machine type, zone and subnet are defined in the top level configuration file’s property values.
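Again, the original listing is not shown here; a minimal sketch consistent with that description might look like this (the boot image and the exact property names "machineType", "zone" and "subnet" are our assumptions):

# instance.jinja - a single VM named after the deployment, using the zone,
# machine type and subnet passed in from the configuration file.
resources:
- name: {{ env["deployment"] }}-instance
  type: compute.v1.instance
  properties:
    zone: {{ properties["zone"] }}
    machineType: zones/{{ properties["zone"] }}/machineTypes/{{ properties["machineType"] }}
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-8
    networkInterfaces:
    - subnetwork: $(ref.{{ properties["subnet"] }}.selfLink)

Naming the instance from env["deployment"] is what produces the behaviour described later, where the deployed VM picks up the deployment name given on the gcloud command line.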

The configuration file

This file, called net-config.yaml, is the main configuration file that marshals the templates that we defined above to create a network with a single VM.

To include templates as part of your configuration, use the imports property in the configuration file that calls the template (going forward, the master configuration file). In our example the master configuration file is called net-config.yaml, and it imports two templates at lines 15 – 17.

The resource network is defined by the imported template network.jinja.
The resource web-instance is defined by the imported template instance.jinja.

Template variables are declared that are passed to each template. In our example, lines 19 – 27 define the network values that are passed through to the network.jinja template.

Lines 28 to 33 define the instance values.
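Since the original listing of net-config.yaml is not reproduced here either, the following sketch shows roughly what it would contain; the line numbering will not match the references above exactly, and the network name, machine type and zone are placeholder assumptions:

# net-config.yaml - master configuration: imports the two templates and
# supplies the property values described above.
imports:
- path: network.jinja
- path: instance.jinja

resources:
- name: network
  type: network.jinja
  properties:
    name: the-network
    region: us-central1
    subnets:
    - name: web
      range: 10.177.0.0/17
    - name: data
      range: 10.178.128.0/17
- name: web-instance
  type: instance.jinja
  properties:
    machineType: n1-standard-1
    zone: us-central1-f
    subnet: web

Each resource simply names a template file as its type, and everything under properties is what that template reads back via properties[...].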

To deploy a configuration, pass the configuration file to Deployment Manager via the gcloud command or the API. Using gcloud command, type the following command:

$ gcloud deployment-manager deployments create net --configuration net-config.yaml

You’ll see a message indicating that the deployment has been successful.

You can see the deployment from Cloud Console.

Note that the instance is named after the deployment specified in instance.jinja.

The value for the variable “deployment” was passed in via the gcloud command “create net”, where “net” is the name of the deployment.

You can explore the configuration by looking at the network and Compute Engine menus:

You can delete a deployment from Cloud Console by clicking the delete button or with the following gcloud command:

$ gcloud deployment-manager deployments delete net

You’ll be prompted for verification that you want to proceed.

Next steps

Once you understand the basics of Deployment Manager, there’s a lot more you can do. You can take the example code snippets that we walked through here and build more complicated scenarios, for example implementing a VPN that connects back to your on-premises environment. There are also many Deployment Manager example configurations on Github.

Then, go ahead and start thinking about advanced Deployment Manager features such as template modules and schemas. And be sure to let us know how it goes.

OpenStack Developer Mailing List Digest November 5-18

The content below is taken from the original (OpenStack Developer Mailing List Digest November 5-18), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • mriedem: We’re now running neutron by default in Ocata CI jobs [1].
  • stevemar: fernet token format is now the default format in keystone! thanks lbragstad samueldmq and dolphm for making this happen!
  • Ajaegar: developer.openstack.org is now hosted by OpenStack infra.
  • Tonyb: OpenStack requirements on pypi [2] is now a thing!
  • All

Registration Open For the Project Teams Gathering

  • The first OpenStack Project Teams Gathering is an event geared toward existing upstream team members, providing a venue for project teams to meet, discuss and organize the development work for the Pike release.
  • Where: Atlanta, GA
  • When: The week of February 20, 2017
  • Register and get more info [3]
  • Read the FAQ for any questions. If you still have questions, contact Thierry (ttx) over IRC on Freenode, or email foundation staff at [email protected].
  • Full thread

Follow up on Barcelona Review Cadence Discussions

  • A summary of the concerns: Nova is a complex beast, and very few people know even most of it well.
  • There are areas in Nova where mistakes are costly and hard to rectify later.
  • Large amount of code does not merge quickly.
  • Barrier of entry for Nova core is very high.
  • Subsystem maintainer model has been pitched [4].
  • Some believe this is still worth trying again in an attempt to merge good code quickly.
  • Nova today uses a list of experts [5] to sign off on various changes.
  • Nova PTL Matt Riedemann’s take:
    • Dislikes the constant comparison of Nova and the Linux kernel. Let’s instead say all of OpenStack is the Linux kernel, and the subsystems are Nova, Cinder, Glance, etc.
    • The bar for Nova core isn’t as high as some people make it out to be:
      • Involvement
      • Maintenance
      • Willingness to own and fix problems.
      • Helpful code reviews.
    • Good code is subjective. A worthwhile and useful change might actually break some other part of the system.
  • Nova core Jay Pipes is supportive of the proposal of subsystems, but with a commitment to gathering data about total review load, merge velocity, and some kind of metric to assess code quality impact.
  • Full thread

Embracing New Languages in OpenStack

  • Technical Committee member Flavio Percoco proposes a list of what the community should know/do before accepting a new language:
    • Define a way to share code/libraries for projects using the language
      • A very important piece is feature parity on the operator.
      • Oslo.config for example, our config files shouldn’t change because of a different implementation language.
      • Keystone auth to drive more service-service interactions through the catalog to reduce the number of things an operator needs to configure directly.
      • oslo.log so the logging is routed to the same places and same format as other things.
      • oslo.messaging and oslo.db as well
    • Work on a basic set of libraries for OpenStack base services
    • Define how the deliverables are distributed
    • Define how stable maintenance will work
    • Setup the CI pipelines for the new language
      • Requirements management and caching/mirroring for the gate.
    • Longer version of this [6].
  • Previous notes when the Golang discussion was started to work out questions [7].
  • TC member Thierry Carrez says the most important thing in introducing Go should not be another way for some of our community to be different, but another way for our community to be one.
  • TC member Flavio Percoco sees that part of the community-wide concerns raised originated from the lack of an actual process for doing this evaluation and the lack of up-front work, which is something this thread tries to address.
  • TC member Doug Hellmann’s request has been to demonstrate not just that Swift needs Go, but that Swift is willing to help the rest of the community in the adoption.
    • Signs of that are happening, for example the discussion about how oslo.config can be used in the current version of Swift.
  • Flavio has started a patch that documents his post and the feedback from the thread [8].
  • Full thread

API Working Group News

  • Guidelines that have been recently merged:
    • Clarify why CRUD is not a great descriptor [9]
    • Add guidelines for complex queries [10]
    • Specify time intervals based filtering queries [11]
  • Guidelines currently under review:
    • Define pagination guidelines [12]
    • WIP add API capabilities discovery guideline [13]
    • Add the operator for “not in” to the filter guideline [14]
  • Full thread

OakTree – A Friendly End-user Oriented API Layer

  • The results of the Interop Challenge shown on stage at the OpenStack summit were awesome. 17 different people from 17 different clouds ran the same workload!
  • One of the reasons it worked is that they all used the Ansible modules we wrote based on the Shade library.
    • Shade contains business logic needed to hide vendor difference in clouds.
    • This means that there is a fantastic OpenStack interoperability story – but only if you program in Python.
  • OakTree is a gRPC-based API service for OpenStack that is based on the Shade library.
  • Basing OakTree on Shade gets not only the business logic; Shade also understands:
    • Multi-cloud world
    • Caching
    • Batching
    • Thundering herd protection sorted to handle very high loads efficiently.
  • The barrier to deployers adding it to their clouds needs to be as low as humanly possible.
  • Exists in two repositories:
    • openstack/oaktree [15]
    • openstack/oaktreemodel [16]
  • OakTree model contains the Protobuf definitions and build scripts to produce Python, C++ and Go code from them.
  • OakTree itself depends on python OakTree model and Shade.
    • It can currently list and search for flavors, images, and floating ips.
    • A few major things that need good community design listed in the todo.rst [17]
  • Full thread

 

Outlook Anywhere Gets the Bullet

The content below is taken from the original (Outlook Anywhere Gets the Bullet), to continue reading please visit the site. Remember to respect the Author & Copyright.


MAPI Over HTTP Is the default Outlook Desktop Connection Protocol for Office 365

Microsoft released MAPI over HTTP (the “Alchemy” project) as part of Exchange 2013 SP1 in May 2014. Well before that time, MAPI over HTTP had been running inside Office 365 to shake down the new protocol before it was released to on-premises customers. The replacement for the long-established RPC over HTTP (aka “Outlook Anywhere”) protocol, MAPI over HTTP is designed to accommodate the demands of modern networking environments where devices hop from network to network and seamless mobility is everything.

The natural conclusion for the process has now come to pass. Microsoft is giving Office 365 tenants almost a year’s warning that RPC over HTTP connections will not be supported for Exchange Online after October 31, 2017. Outlook Anywhere is heading for the rubbish heap, but only for Office 365 as on-premises Exchange will continue to support this venerable protocol.

Changing Networks, Changing Protocols

When you consider what’s happening, the process seems fair. Outlook Anywhere was designed to transport the Remote Procedure Calls (RPCs) used by Outlook desktop clients across an internet connection. Only HTTPS needed to be enabled for firewalls and no VPNs were necessary.

When it was released, Outlook Anywhere was an immediate hit, especially when used with the then-new Outlook 2003 client and its “drizzle mode synchronization.” The protocol, client, and growing availability of Wi-Fi networks made mobility a reality for many and released users from the need for tiresome dial-up connections and extended synchronization sessions.

That was some thirteen years ago. When you consider what’s happened since, especially in the areas of mobile devices and networks, a case can be advanced to change the default protocol used by Outlook to something more modern and discard a protocol designed for LAN connections.

Microsoft’s HTTP Based Clients

Microsoft has a lot of experience with clients that are based on HTTP. Both Exchange ActiveSync (EAS) and Outlook Web App (OWA) clients use long-lived HTTP connections to communicate with Exchange. These connections don’t require the complicated handshaking used when RPC connections are made, which means that they work much better over low-quality links than Outlook does.

According to Microsoft, MAPI over HTTP:

  • Provides faster connection times to Exchange, especially after a network connection is interrupted, such as when a client switches networks. RPC connections require more work to switch networks, most of which is hidden from users because of the way that Outlook works in cached Exchange mode. Resuming from hibernation is another situation in which the faster connections help, a capability that matters more in today’s environment.
  • Improves the connection resiliency. In other words, the protocol doesn’t experience as many problems over flaky networks when packets might be dropped.
  • Enables more secure connections by supporting Multi-Factor Authentication for Outlook 2013 and Outlook 2016.
  • Removes the dependency on the very old RPC technology and focuses on HTTP as the communications protocol for the future. MAPI over HTTP is easier for support personnel to debug when working on connectivity problems.

All sounds good. And MAPI over HTTP works and works well. I’ve been using it exclusively for the last two years and haven’t experienced any problems. That being said, Outlook desktop still remains the fat pig of Exchange clients and the new protocol does little to restrain the demands the client makes on network resources. If you’re in a network-constrained environment, such as using Wi-Fi on an airplane, OWA usually delivers a better user experience.

Client Updates Might Be Required

It’s easier for Microsoft to enforce change inside Office 365 than it is in the on-premises world. It controls all the necessary server upgrades (all of which are done) and can force tenants to upgrade clients to remain supported. Some work might be necessary to upgrade Outlook desktop software to one of the supported versions:

  • Outlook 2016
  • Outlook 2013 SP1
  • Outlook 2010 SP2

In all cases, clients should be updated with the latest patches. In particular, the December 2015 updates contain several important fixes to improve the stability and performance of MAPI over HTTP. These updates should be considered the baseline version for deployment. Outlook 2007 was never updated with MAPI over HTTP capability, so you won’t be able to use this client to connect to Exchange Online after the deadline. Microsoft has a useful Outlook Updates article that outlines the minimum version requirements for Office 365 and on-premises connectivity.

It is possible that some Outlook 2016 or Outlook 2013 clients might be blocking MAPI over HTTP connections. This is done by setting the MapiHttpDisabled registry key to 1 (one). Clients might have been configured this way for on-premises users, to force the use of RPC over HTTP connections before their mailboxes were moved to Exchange Online, and then forgotten about. The fix is easy (set the value to zero or delete it from the registry), but it’s wise to check before users start to report that they can’t connect on October 31, 2017.
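
If you want to script that check on a client, a minimal sketch using Python’s winreg module is shown below. It assumes the value lives under HKCU\Software\Microsoft\Exchange, the location commonly cited for this setting; verify the exact path against Microsoft’s documentation before relying on it.

    import winreg

    KEY_PATH = r"Software\Microsoft\Exchange"   # assumed location of the value
    VALUE_NAME = "MapiHttpDisabled"

    def mapi_http_blocked():
        """Return True if the registry value forces RPC over HTTP."""
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
                value, _ = winreg.QueryValueEx(key, VALUE_NAME)
                return value == 1
        except FileNotFoundError:
            return False  # key or value absent: MAPI over HTTP is not blocked

    def allow_mapi_http():
        """Delete the blocking value so Outlook can negotiate MAPI over HTTP."""
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
                winreg.DeleteValue(key, VALUE_NAME)
        except FileNotFoundError:
            pass  # nothing to remove

    if __name__ == "__main__":
        if mapi_http_blocked():
            allow_mapi_http()
            print("MapiHttpDisabled cleared; restart Outlook.")
        else:
            print("MAPI over HTTP is not blocked on this client.")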

Discovering Clients

No out-of-the-box method exists inside Office 365 to discover whether any Outlook clients need to be patched or updated. The Get-ConnectionByClientTypeDetailReport cmdlet tells you what users have used MAPI clients to connect to Exchange Online, but those MAPI clients can connect using either MAPI over HTTP or RPC over HTTP. Figuring out version numbers for clients that connect to Exchange is straightforward in an on-premises environment, but is more complex for Office 365 as Microsoft controls the required information. Curiously, although the Get-LogonStatistics cmdlet is available for Exchange Online, any attempt to pass it a mailbox identity fails.

Perhaps the new “Email clients used report” promised in the “What’s new in Office 365 Admin October 2016” post will provide all the necessary information. We shall see.

Extended Lifetimes

With respect to on-premises Exchange, you obviously have more control over when you upgrade both server and desktop software, so you can persist in using RPC over HTTP for as long as it’s supported. In some cases, software that can’t use MAPI over HTTP is supported for some time to come. Outlook 2007 leaves extended support on October 10, 2017 (one of the reasons why Microsoft never backported MAPI over HTTP to this client). Outlook 2010 lingers until October 13, 2020 and Outlook 2013 until April 11, 2023. This site provides a useful overview of support dates.

In some respects, this transition is simply an example of Microsoft changing the internal plumbing for the cloud. You shouldn’t have any concerns if modern clients are deployed. Some pain might be in prospect if ancient clients persist. Such is what happens with old (but still useful) software.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros,” the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

 

The post Outlook Anywhere Gets the Bullet appeared first on Petri.

HPE Shows the ProLiant m710x – Intel Xeon E3-1500 V5 for VDI

The content below is taken from the original (HPE Shows the ProLiant m710x – Intel Xeon E3-1500 V5 for VDI), to continue reading please visit the site. Remember to respect the Author & Copyright.

At SC16 HPE was showing off the HPE ProLiant m710x Moonshot cartridge. The m710x includes an Intel Xeon E3-1500 V5 processor aimed at VDI and transcoding.

The post HPE Shows the ProLiant m710x – Intel Xeon E3-1500 V5 for VDI appeared first on ServeTheHome.

The PiDrive Foundation Edition Is an External Hard Drive for the Raspberry Pi That Simplifies Installing Multiple Operating Systems

The content below is taken from the original (The PiDrive Foundation Edition Is an External Hard Drive for the Raspberry Pi That Simplifies Installing Multiple Operating Systems), to continue reading please visit the site. Remember to respect the Author & Copyright.

Using an external hard drive with a Raspberry Pi takes just a few steps, but WDLabs’ new PiDrive Foundation Edition makes installing not just one but multiple operating systems on a Raspberry Pi simple.

The PiDrive Foundation Edition is essentially a hard drive that also comes with an SD card installer that lets you install and boot from multiple operating systems. The SD card has a special version of the NOOBS OS installer loaded onto it. With it, you can install either Raspbian or Raspbian Lite into different partitions of the drive, which WDLabs calls “Project Spaces.” Basically, this means you can have several versions of Raspbian installed on one hard drive, making it much easier to bounce between different projects. It also makes the process of installing an operating system on an external hard drive easier, so if you need to do that in lieu of an SD card, it’s worth looking at the PiDrive for that too.

The Project Spaces are the real selling point here, as loading up multiple OSes on a Raspberry Pi has always been a pain. This way, you can easily set up multiple users or projects on a single Pi, with a shared partition that each space can access. Sadly, only Raspbian and Raspbian Lite are included as installation options here, leaving a hole where the OSMC media center software or the RetroPie game emulator would have made a heck of a lot of sense as options. Of course, you can install those manually once Project Spaces sets up each partition, but it’s not quite the one-click installer I was hoping for.

Still, as pretty much the easiest way to divide up a Pi’s hard drive into different bootable operating systems, it does what it needs to. Each time you boot up the Pi, you’re presented with a screen to pick which operating system to boot into. Here’s what the installation screen looks like to give you a better idea of how it works:

The PiDrive is also priced at right around what you’d pay for any other hard drive and SD card, at $28.99 for 250GB and $37.49 for 375GB. There’s also a 64GB flash drive version that seems perfect for dividing up a Pi’s partitions into a couple different project spaces. This isn’t for everyone, and if you’re fine swapping out SD cards for each project, then there’s no reason to pick up a hard drive. That said, if you do want to run a hard drive with multiple partitions and operating systems, the PiDrive is a great way to do so.

WD PiDrive Foundation Edition

Four simple steps to Backup VMware VMs using Azure Backup Server

The content below is taken from the original (Four simple steps to Backup VMware VMs using Azure Backup Server), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Backup Server is recognized in the industry for protection of Microsoft workloads and environments. It protects key Microsoft workloads such as SQL, SharePoint and Exchange as well as virtual machines running on Hyper-V. Today, we are announcing support for protection of VMware virtual machines with Azure Backup Server. This will allow enterprise customers to have a single backup solution across their heterogeneous IT environment.

If you are new to Azure Backup Server, you can download Microsoft Azure Backup Server and start protecting your infrastructure today. If you already have Azure Backup Server installed, please download and install Update 1 to get started with VMware backup.

Here are the four simple steps to configure VMware server and Azure Backup Server to protect VMware VMs.

1. Enable a secure SSL communication between Azure Backup Server and VMware server

2. Add a new user with certain minimum privileges

3. Add VMware Server to Azure Backup Server

4. Protect VMware VMs with Azure Backup Server

Additional resources:

Azure Stack Portable – The Enterprise Cloud (in a Briefcase)

The content below is taken from the original (Azure Stack Portable – The Enterprise Cloud (in a Briefcase)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Stack is Microsoft’s enterprise cloud technology that allows organizations to run Microsoft Azure on their own premises. During the early adopter testing phase, I needed an Azure Stack host that I could take between my office (by day) and home (on weekends), or to client sites to demonstrate. So I created a portable version of Azure Stack by building a 14-core / 10-terabyte cloud in a briefcase!

For those who need a background on Azure Stack, please see a couple of articles I wrote describing the technology in more detail, as well as the business use cases of who is lining up to buy Azure Stack. My initial article back in February 2016 covers some early experiences, and an October 2016 article updated some more recent work on Azure Stack.

Continuous integration and deployment to Azure Container Service

The content below is taken from the original (Continuous integration and deployment to Azure Container Service), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today’s businesses need to innovate at a rapid pace to be competitive in the marketplace. A ride-sharing company may have to update its app several times a day to respond to daily demand fluctuations and adjust its pricing. A location-based social gaming app has to constantly engage users with new features to increase daily active users and stay at the top of app store rankings. Delivering high-quality, modern applications requires DevOps tools and processes that are critical to enable this constant cycle of innovation. With the right DevOps tools, developers can streamline continuous deployment and get innovative applications into users’ hands faster. Again and again.

While continuous integration and deployment practices are well established, the introduction of containers brings new considerations, particularly when working with multi-container applications. On Nov. 7, we announced a series of updates to Azure Container Service (ACS) that continue to demonstrate that ACS is the most streamlined, open, and flexible way to run your container applications in the cloud, providing even more customer choice in their cloud orchestrator.

Today, a preview of continuous integration and deployment of multi-container Linux applications is available using Visual Studio, Visual Studio Team Services, and the open source Visual Studio Code. To continue enabling deployment agility, these tools provide excellent dev-to-test-to-prod deployment experiences for container workloads using a choice of development and CI/CD solutions.

Key uses

Create a continuous pipeline to Azure Container Service with Visual Studio Team Services

You can write your app using the language of your choice (Java, C#, PHP, etc.) and your favorite IDE (Eclipse, Visual Studio, IntelliJ, etc.), with standard Docker assets. Then, using the Azure Command Line Interface (CLI), you can run a simple command to connect your source repository to a target Azure Container Service (ACS) cluster and set up a containerized build and deployment pipeline for a multi-container Docker application. So now, any time source code is pushed to a repository in GitHub, it can automatically trigger VSTS to build and tag container images, run unit tests, push to the Azure Container Registry, and deploy to ACS with zero downtime. For the preview, we support creating pipelines that deploy to DC/OS only.

Sample App Azure Container Release

In addition to the Azure CLI, similar experiences for setting up CI/CD are also available directly in the Azure Portal (in the ACS blade) and the Visual Studio IDE.

Use the Azure Container Registry to store images

Azure Container Registry is a private registry for hosting container images. Using the Azure Container Registry, customers can store Docker-formatted images for all types of container deployments. Azure Container Registry integrates well with orchestrators hosted in Azure Container Service, including Docker Swarm, DC/OS and Kubernetes. The continuous integration and deployment tools push container images to the Azure Container Registry after a build. Later, they pull images from the container registry and deploy them into the ACS cluster.

Easily promote images across environments

The continuous integration and deployment tools support the immutable services principle, which means you can easily promote images from development to downstream release environments such as test and production. Importantly, you don’t have to rebuild the container image each time you promote it.
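
As a rough sketch of that promotion flow, the snippet below uses the Docker SDK for Python to pull the image that passed testing, re-tag it for the next environment and push it back to the registry unchanged. The registry name, repository and tags are placeholders, not real Azure Container Registry values.

    import docker

    REGISTRY = "myregistry.azurecr.io"    # placeholder registry name
    REPO = REGISTRY + "/sample-app"       # placeholder repository

    def promote(build_tag, release_tag):
        client = docker.from_env()
        # Pull the exact image that passed testing; do not rebuild it.
        image = client.images.pull(REPO, tag=build_tag)
        # Re-tag the same image for the downstream environment...
        image.tag(REPO, tag=release_tag)
        # ...and push it, so test and production run identical bits.
        client.images.push(REPO, tag=release_tag)

    if __name__ == "__main__":
        promote(build_tag="ci-1234", release_tag="production")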

Azure Container Promoting Images

These innovations demonstrate our continued investment in the container ecosystem and highlight our unique strategy of offering the only public cloud container orchestration service that offers a choice of open source orchestration technologies — DC/OS, Docker Swarm and Kubernetes. The support for continuous integration and deployment tools amplifies our strategy to make it easier for organizations to adopt containers in the cloud.

Customers will be able to access the preview of continuous integration and deployment tools starting Nov. 16 — watch for more details at Microsoft Connect();!

Learn more

Check out this tutorial for setting up continuous integration and deployment of a multi-container app to Azure Container Service.

Latest Banana Pi offers SATA and 2GB RAM

The content below is taken from the original (Latest Banana Pi offers SATA and 2GB RAM), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sinovoip’s $48, open-spec “Banana Pi M2 Ultra” SBC updates the M2 with native SATA support and 2GB RAM, plus a new quad core Cortex-A7 Allwinner R40 SoC. Sinovoip’s Banana Pi project has launched the Banana Pi M2 Ultra on Aliexpress for $48, only $4 more than the Banana Pi M2 SBC, which similarly has a […]

Computer Aid Connect: taking the internet to remote areas

The content below is taken from the original (Computer Aid Connect: taking the internet to remote areas), to continue reading please visit the site. Remember to respect the Author & Copyright.

Computer Aid is aiming to bring offline access to educational websites to areas with limited internet access. Right now, it’s turning recycled Raspberry Pi boards into portable internet hotspots.

“It’s for offline students and teachers across the world,” said Nicola Gampell, E-Learning and Marketing Officer for Computer Aid International.

As a result, Connect will “bring them a local internet full of educational resources, ranging from scientific simulations to Wikipedia articles,” Nicola told us.

An internet for all, anywhere. Computer Aid Connect

Computer Aid’s ‘Connect’ device provides offline classrooms with a wealth of educational resources.

Computer Aid: recycling Raspberry Pis into remote routers

Inside the Connect is software based on RACHEL-Pi by World Possible.

“All too often we’re reminded of this reality,” wrote Jeremy Schwartz, Executive Director of World Possible. “There are places where young people aren’t given the resources they need to learn. For many, the internet has become a small equalising force, but for more, that equaliser does not exist.”

“In 2017, we’re going to test RACHEL against as many different use cases as we can,” said Jeremy. “We’ll be formalising our own testing through our social entrepreneurs, and intimately supporting a narrower group of other organisations”.

As a result, Computer Aid “currently has twenty units about to arrive at a project in Ethiopia and one in Mauritania,” said Nicola. “So hopefully we’ll be getting to see it in action soon.”

Computer Aid Connect

The Computer Aid Connect turns a Raspberry Pi into a router pre-packed with many websites

“The Raspberry Pi is a key component of the device, especially due to its low power usage and low cost,” said Nicola.

Also inside is a “UPS PIco Uninterruptible Power Supply,” said Nicola. As a result, Connect is “sustainable and stable during power outages.”

The Raspberry Pi is placed alongside a 64GB SD card and a Wireless N150 High-Power USB Adapter.

“The version of the Raspberry Pi changes between the Model 2 and the old A,” she explains. After all, “we receive donations of old Raspberry Pi devices.”

Visit the Computer Aid website if you’d like to donate a Raspberry Pi board to the project.

The post Computer Aid Connect: taking the internet to remote areas appeared first on Raspberry Pi.

Energenie MiHome Adds ‘Works with Alexa’ Amazon Echo Support to Nest & IFTTT

The content below is taken from the original (Energenie MiHome Adds ‘Works with Alexia’ Amazon Echo Support to Nest & IFTTT), to continue reading please visit the site. Remember to respect the Author & Copyright.

Mi|Home Wall Sockets

We sat down with Energenie’s Business Development Manager, Adam Smith, a few weeks ago for a good chat about where the UK smart home company is headed.

Two Tribes

The Energenie system uses two carriers with their implementation of the OpenThings protocol. On-off keying (OOK) is used in the basic modules, which keeps prices incredibly keen. For example, a simple plug-in module costs less than £10.

Although there’s no status reporting with these units, Energenie say the commands are blasted for several seconds in an effort to make sure they get through.

Their fully featured modules use frequency-shift keying (FSK), and these transceiver units have full status reporting, so you can be sure your commands are carried out.

The Mi|Home gateway has both protocols on board, as Energenie don’t want to force people who have bought the basic modules to scrap them. Good plan, as they’ve sold over half a million of the things to date!

Now with Added Amazon

Today brings the announcement that the Mi|Home system now ‘Works with Alexa’. So if you fancy some voice control of your lights and appliances with Amazon Echo, suddenly those £8 plug-in appliance modules are looking a lot more interesting, aren’t they?

Nest

The Energenie range also includes Smart Radiator Valves and these TRVs are compatible with the Nest learning thermostat. They can be set to follow the temperature target of your Nest system, or you can set them to an independent temperature too if you prefer.

While we were on the subject of heating, we asked about boiler relay control, and Adam confirmed a Boiler Control Switch is on the roadmap too.

IFTTT & API

Add that to the existing MiHome IFTTT compatibility and it’s easy to see why this system is really going places. Interestingly, Adam told us that they have also completed the integration with SmartThings (through IFTTT), but they are still waiting for Samsung to ‘flip the switch’ and turn it on at their end. Hmmmm.

In addition, you can apply for access to the MiHome Cloud API if you’re ready to roll your own code and integrate with your DIY systems.
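
If you do get API access, a call might look something like the sketch below, which uses the Python requests library. The base URL, endpoint name, authentication scheme and payload shape are all assumptions for illustration; the real details come with the access Energenie grants you.

    import requests

    BASE_URL = "https://mihome4u.co.uk/api/v1"   # assumed base URL
    AUTH = ("user@example.com", "your-api-key")  # assumed basic-auth credentials

    def power_on(device_id):
        # Hypothetical call: ask the cloud service to switch a plug-in adapter on.
        response = requests.post(
            BASE_URL + "/subdevices/power_on",   # assumed endpoint name
            json={"id": device_id},
            auth=AUTH,
            timeout=10,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        print(power_on(device_id=12345))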

The Future

It’s only 18 months since the system was launched, but it’s clear from our chat that Mi|Home is forging ahead. In just the last six months they’ve seen the average number of devices per system increase from 3 to almost 6.

Their road map for 2017 will focus on two areas. The first is ‘Monitoring’, with CCTV integration and a new multi-sensor coming around Q2 that will measure light and temperature and have an onboard accelerometer. Their second target is the ‘Care Sector’, and we’ll be watching them with interest over the coming months.

energenie4u.co.uk  :  Available from Amazon

Power up your Google Compute Engine VMs with Intel’s next generation, Custom Cloud Xeon Processor

The content below is taken from the original (Power up your Google Compute Engine VMs with Intel’s next generation, Custom Cloud Xeon Processor), to continue reading please visit the site. Remember to respect the Author & Copyright.

Posted by Hanan Youssef, Product Manager, Google Cloud Platform

Google Cloud Platform’s focus on infrastructure excellence allows us to provide great price-performance and access to the latest hardware innovations. Working closely with hardware vendors, we help guide new advancements in data center technology and the speed at which Google Cloud Platform (GCP) customers can use them.

Yesterday, Google Cloud announced a strategic alliance with Intel that builds on our long-standing relationship developing data center technology. Today, we’re excited to announce that Google Compute Engine will support Intel’s latest Custom Cloud solution based on the next-generation Xeon Processor (codenamed Skylake) in early 2017.

The upcoming Xeon processor is an excellent choice for graphics rendering, simulations and any CPU-intensive workload. At launch, Compute Engine customers will be able to utilize the processor’s AVX-512 extensions to optimize their enterprise-class and HPC workloads. We’ll also add support for additional Skylake extensions over time.
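
When the Skylake-based VMs arrive, a quick way to confirm that a guest actually exposes the AVX-512 foundation instructions before tuning for them is to check the CPU feature flags. The minimal sketch below reads /proc/cpuinfo, so it applies to Linux guests only.

    def has_avx512f():
        # Look for the AVX-512 Foundation flag among the CPU feature flags.
        with open("/proc/cpuinfo") as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    return "avx512f" in line.split()
        return False

    if __name__ == "__main__":
        if has_avx512f():
            print("AVX-512F available: worth building HPC kernels with -mavx512f.")
        else:
            print("AVX-512F not reported by this vCPU.")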

You’ll be able to use the new processor with Compute Engine’s standard, highmem, highcpu and custom machine types. We also plan to continue to introduce bigger and better VM instance types that offer more vCPUs and RAM for compute- and memory-intensive workloads.

If you’d like to be notified of upcoming Skylake beta testing programs, please fill out this form.