Now you can get a bachelor’s degree in data center engineering


In an era where all the hot tech jobs seem to focus on application development and cloud computing, it can be hard to find fresh data center engineering talent. The Institute of Technology in Sligo, Ireland, is trying to rewrite that story with a new Bachelor’s Degree in Data Center Facilities Engineering, starting this fall.

According to the school, “The purpose of this new engineering degree programme is to provide the Data Centre industry with staff who are qualified to provide the proficient and in-depth skills necessary for the technical management and operation of data centre facilities. Expert operation and maintenance of these facilities is crucial in order to maintain 24/7 services with optimum energy efficiency.”

Online classes, but you still have to go to Belgium

While the school may be in Ireland, the degree is offered online, in English, with lab sessions at Haute École Louvain en Hainaut (HELHa) in Mons, Belgium. Students will have to show up there on specified days, which obviously tilts the program toward European students. Still, it provides a template that could be followed by schools around the world, creating cadres of new data center staff members without the need for intensive on-the-job training.

The course was reportedly developed after 18 months of consultations with Google, Facebook, and Microsoft, so it’s not entirely clear to what extent it will focus on traditional enterprise data center practices or concentrate on the needs of the web giants.

Creating a pool of entry-level workers?

But in an industry where technology graduates typically receive a more general education, perhaps bolstered by a short specialized course in data centers or another discipline, the mere fact that a dedicated degree exists is a vote of confidence for careers in data center engineering. To be fair, there is a master’s program in data center engineering at SMU’s Lyle School of Engineering in Dallas, but it requires students to already hold a B.S. in engineering or a related field. The IT Sligo course can be expected to turn out younger, presumably less expensive, data center workers who are better suited to entry-level positions.

IT Sligo President Dr. Brendan McCormack told Irish Tech News, “We are proud to be helping the data centres to address a specific skill need which several of the world’s leading tech companies recognise and value.”

Denis Browne, Google’s EU regional data center lead, added, “Google’s data centers are some of the best in the world, and we look for the best talent to work with us. Thanks to IT Sligo and HELHa, this online course will increase the skills of people already working in the sector and for those who wish to work in the industry going forward.”

In four years or so, we’ll have a better idea of how it’s all working out.


Data-collecting benches are making their way into cities




    Anastasia Tokmakova

    Jul 17, ’17 4:47 PM EST


    Image courtesy of Soofa

    A pair of USB ports on a console on the front of the bench provides juice from the solar panel mounted at lap level between the seats. Who wouldn’t want to hang out at a bench like this? It certainly catches the eye of passersby. What these kids might not realize, however, is that this bench is watching them back.
    Landscape Architecture Magazine

    “Smart” benches are spreading. A series of them, manufactured by Soofa, was recently installed in a tiny neighborhood park next to I-77 on the north end of Charlotte, North Carolina, to support analysis and redevelopment of the neighborhood.

    Soofa, founded in 2014 by three graduates of the MIT Media Lab, is one of a handful of companies designing data-collecting street furniture. Its solar-powered benches register Wi-Fi enabled devices within 150 feet, sending data back to an office building in East Cambridge, Massachusetts. While the sensors can’t access personal information from your phone, they pick up and remember your device’s MAC address. The technology allows cities and urban planners to count users of various public spaces, identify when and for how long they’re visited, and potentially optimize their design.
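
    To make the counting mechanism concrete, here is a minimal sketch in Python of how a sensor of this kind might tally nearby devices from the MAC addresses in Wi-Fi probe requests. It illustrates the general technique only, not Soofa’s actual firmware, and it assumes the scapy library plus a wireless interface (called wlan0mon here) already in monitor mode.

    ```python
    # Minimal sketch: count nearby Wi-Fi devices by remembering the source MAC
    # addresses of probe requests. Not Soofa's implementation; interface name,
    # sniffing window, and reporting are illustrative assumptions.
    from collections import defaultdict
    from datetime import datetime

    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11, Dot11ProbeReq

    sightings = defaultdict(list)  # MAC address -> timestamps it was heard

    def handle_frame(pkt):
        # Phones periodically broadcast probe requests while scanning for known
        # networks, so the source MAC identifies a device within radio range.
        if pkt.haslayer(Dot11ProbeReq):
            mac = pkt[Dot11].addr2
            if mac:
                sightings[mac].append(datetime.utcnow())

    # Listen for five minutes, then report distinct devices and rough dwell time.
    sniff(iface="wlan0mon", prn=handle_frame, timeout=300)
    for mac, times in sightings.items():
        dwell = (times[-1] - times[0]).total_seconds()
        print(f"{mac}: seen {len(times)} times, ~{dwell:.0f}s between first and last")
    ```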

    “The line between collecting data for a valid public purpose and the unreasonable surveillance of private citizens can be tough to tease out. Beyond clear dangers like hacking and data breaches, and underlying concerns about private corporations somehow benefiting from data collected on the taxpayer’s dime, are existential questions about privacy as a basic human right.”


     

  • The Acorn Archimedes At 30


    The trouble with being an incidental witness to the start of something that later becomes world-changing is that at the time you are rarely aware of what you are seeing. Take the Acorn Archimedes, the home computer for which the first ARM processor was developed, and which has just turned 30. If you were a British school pupil in 1987 who found a pair of the new machines alongside the row of BBC Micros in the school computer lab, it was certainly an exciting event; after all, these were the machines everyone was talking about. But the possibility that their unique and innovative processor would go on to spawn a line of successors that would eventually power so much of the world three decades later was something that probably never occurred to spotty ’80s teens.

    [Computerphile] takes a look at some of the first Archimedes machines in the video below the break. We get a little of the history and a description of the OS, plus a look at an early model still in its box and one of the last of the Archimedes line. Familiar to owners of this era of hardware is the moment when a pile of floppies is leafed through to find one that still works. Then we’re shown the defining game of the platform, [David Braben]’s Lander, which became the commercial Zarch and provided the template for his Virus and Virus 2000 games.

    The Trojan Room Coffee Cam Archimedes, on display at the Cambridge University Computing Department.

    We see the RISC OS operating system booting lightning-fast from ROM and still giving a good account of itself 20 years later, even on a vintage Philips composite monitor. If you were that kid in 1987, you were in for a shock when you reached university and sat down in front of the early Windows versions; it would be quite a few years before mainstream computers matched your first GUI.

    The Archimedes line and its successors continued to be available into the mid-1990s, but faded away along with Acorn through the decade. Even one being used to power the famous Trojan Room Coffee Cam couldn’t save it from extinction. We’re told they can still be found in the broadcast industry, and until fairly recently they powered much of the electronic signage on British railways, but other than that the original source of machines has gone. All is not lost though, because of course we all know about their ARM joint venture, which continues to this day. If you would like to experience something close to an Archimedes, you can do so with another computer from Cambridge, because RISC OS is available for the Raspberry Pi.

    Sit back and enjoy the video, and if you were one of those kids in 1987, be proud that you sampled a little piece of the future before everyone else did.

    VIDEO

    Thanks [AJCC] for the tip.

    Archimedes header image: mikkohoo (CC BY-SA 4.0).

    Flat microscope for the brain could help restore lost eyesight


    You’d probably prefer that doctors restore lost sight or hearing by directly repairing your eyes and ears, but Rice University is one step closer to the next best thing: transmitting info directly to your brain. It’s developing a flat microscope (the creatively titled FlatScope) that sits on your brain to both monitor and trigger neurons modified to be fluorescent when active. It should not only capture much more detail than existing brain probes (the team is hoping to see "a million" neurons), but reach levels deep enough that it should shed light on how the mind processes sensory input. And that, in turn, opens the door to controlling sensory input.

    FlatScope is part of a broader DARPA initiative that aims to create a high-resolution neural interface. If technologies like the microscope lead to a way to quickly interpret neuron activity, it should be possible to craft sensors that send audiovisual data to the brain and effectively take over for any missing senses. Any breakthrough on that level is a long way off (at best) when even FlatScope exists as just a prototype, but there is some hope that blindness and deafness will eventually become things of the past.

    Source: Rice University

    Here’s what Atari’s upcoming Ataribox console will look like


    Retro consoles are the new next-gen consoles, and nothing is more retro than Atari. That’s why the teases from the gaming company about its upcoming ‘Ataribox’ have been so intriguing to gaming fans – it could be amazing. Now we know what it looks like and, thanks to an email update (via The Verge), broadly what it will be able to do.

    The design is clearly an homage to the Atari of yore, but it’s also not a straight-up miniaturization like the NES Classic. Instead, it inherits some of the materials (there’s a woodgrain option and a black glass front, depending on your preference). There are also ports for an SD card, HDMI, and four USB ports, and the company will be offering classic games on the console, similar to the NES Classic’s library.




    But the Ataribox will also be able to run “current” games, so it could be more like a modern set-top gaming device, too. We don’t yet know much about what it will offer on that front, but it would be interesting if this were essentially a Shield-like Android TV device with a host of retro Atari titles pre-loaded and some media streaming capabilities.

    Nothing yet on final availability or pricing, but it’s still an intriguing project to keep an eye on – and one which could indicate the true depth of the retro gaming fad’s appeal.

    Enable Storage Sense in Windows 10 Creators Update


    Ensuring that there is enough free disk space to install updates and keep Windows running smoothly can be a problem on devices with solid state storage. New in Windows 10 Creators Update, Storage Sense can automatically delete unnecessary files to maintain a healthy level of free disk space. In this Ask the Admin, I will show you how to turn it on.

     

     

    As solid-state disks (SSDs) become more common in all types of Windows 10 devices, there is often no reason to have massive amounts of storage on your device while working with cloud solutions like Office 365 and Google G Suite. A 256GB SSD should be plenty for most Windows 10 installations with a modest set of installed applications. There are always exceptions. If you edit video or need to keep tons of media available offline, 256GB might be stretching it. Most business users can make do with the minimum of local storage.

    In theory, 256GB should be enough storage most of the time, but it might still require the user or an IT admin to perform periodic maintenance to ensure Windows has adequate free space to work properly and install updates. This is where Storage Sense can help.


    Windows Defender and Storage

    What do security and storage have to do with each other? In the Creators Update, Microsoft added a new user interface for Windows Defender that alerts you about device health and performance, including issues with storage capacity. If the device is low on disk space, critical updates might not install.

    New Windows Defender UI in Windows 10 Creators Update (Image Credit: Russell Smith)

    Storage Sense

    If you get a warning from Windows Defender about storage, the Open Settings button will take you to Storage in the Settings app. From there, you can enable Storage Sense, which can automatically delete temporary files that apps are not using and any files that have been in the Recycle Bin for more than 30 days. Additionally, you can click Clean now to force Storage Sense to run right away. You can also open the Storage page in the Settings app directly by typing storage into the search box in the bottom left of the taskbar and selecting Storage from the list of results.
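
    If you prefer to check the state of the setting from a script, the read-only sketch below looks it up in the current user’s registry hive. The key path and the "01" value name are assumptions based on commonly reported locations for the Creators Update rather than anything documented here; the Settings app remains the supported way to toggle the feature.

    ```python
    # Read-only sketch: report whether Storage Sense appears to be enabled for
    # the current user. The registry path and the "01" value name are assumptions.
    import winreg

    STORAGE_POLICY_KEY = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
                          r"\StorageSense\Parameters\StoragePolicy")

    def storage_sense_enabled():
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, STORAGE_POLICY_KEY) as key:
                value, _ = winreg.QueryValueEx(key, "01")  # assumed: 1 = enabled
                return value == 1
        except FileNotFoundError:
            # Key or value missing: Storage Sense has never been configured.
            return False

    if __name__ == "__main__":
        print("Storage Sense enabled:", storage_sense_enabled())
    ```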

    Storage Sense in Windows 10 Creators Update (Image Credit: Russell Smith)


    There is no setting to enable Storage Sense using Group Policy at this time. And while this new storage setting is a welcome addition to Windows, we will have to wait until the Fall Creators Update to get On-Demand Sync in OneDrive and OneDrive for Business. This is a new and improved version of the OneDrive placeholder files that were available in Windows 8.1, and it allows users to sync files as needed, as opposed to having to sync entire folders from the cloud. In my experience, temporary files and the Recycle Bin certainly add to the problem, but files synced from OneDrive are just as likely to cause free space issues.

    The post Enable Storage Sense in Windows 10 Creators Update appeared first on Petri.

    Here’s How Azure Stack Will Integrate into Your Data Center


    Azure Stack, the turnkey hybrid cloud system that you can now order from server vendors like Dell EMC and Hewlett Packard Enterprise or get as a managed service from providers like Avanade on hardware in your own data center, is intended to be concrete proof of Microsoft’s view that cloud is an operating model and not a place. It’s obviously designed to let you integrate private and public cloud services – but how well will it fit into your existing infrastructure?

    What it gives you is a system that’s not exactly the same as Azure running in an Azure data center but that’s consistent with it, using the same management API and portal, with many of the same services, giving you a unified development model. Think of it as a region in Azure. Not all Azure regions have exactly the same services available, but they all get the core services, ranging from storage, IaaS, and Azure Resource Manager to Key Vault, with Azure Container Service and Service Fabric coming to Azure Stack next year. Some public Azure services may never make it to Azure Stack, because some things only make sense at hyper-scale.
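
    The practical upshot of “same management API” is that code written against Azure Resource Manager can be pointed at an Azure Stack region by changing the management endpoint and the credentials used to get a token. The sketch below illustrates this with a plain REST call; the endpoint URLs, subscription ID, and token are placeholders, and the api-version is an assumption that could differ on a given Azure Stack build.

    ```python
    # Sketch: the same Azure Resource Manager call against public Azure or an
    # Azure Stack region; only the management endpoint (and its token) changes.
    import requests

    def list_resource_groups(management_endpoint, subscription_id, bearer_token):
        url = (f"{management_endpoint}/subscriptions/{subscription_id}"
               f"/resourcegroups?api-version=2016-09-01")  # assumed api-version
        resp = requests.get(url, headers={"Authorization": f"Bearer {bearer_token}"})
        resp.raise_for_status()
        return [rg["name"] for rg in resp.json()["value"]]

    # Public Azure:
    #   list_resource_groups("https://management.azure.com", SUB_ID, TOKEN)
    # Azure Stack (hypothetical endpoint published by the operator):
    #   list_resource_groups("https://management.local.azurestack.external", SUB_ID, TOKEN)
    ```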

    Compliance, Performance, Data

    You can use Azure Stack to run cloud workloads that you don’t want in the public cloud for compliance reasons – the most common consideration when businesses weigh cloud services. That includes both the Azure services and third-party PaaS and IaaS workloads, such as Cloud Foundry, Kubernetes, Docker Swarm, Mesosphere DC/OS, and open source stacks like WordPress and LAMP, which come as services from the Azure Marketplace rather than bits you download, install, and configure manually. Just as interesting is the ability to use cloud tools and development patterns without the latency of an internet connection – whether you have poor connectivity (on oil rigs and cruise ships, in mines, and other challenging locations) or need to process sensor data in near-real-time.

    The hybrid option is going to be the most powerful. You can use Azure services like IoT Event Hubs and Cognitive Services APIs with serverless Functions and Azure Stack to build an AI-powered system that can recognize authorized workers and unauthorized visitors on your construction site and warn you when someone who’s not certified is trying to use dangerous machinery. Microsoft and Swedish industrial manufacturer Sandvik showed a prototype of that at the Build conference this year.

    That’s the kind of system you’d usually choose to build on a cloud platform, because setting up the IoT data ingestion, data-lake, and machine learning systems you’d need before you could even start writing code would be a complex and challenging project. With Azure Stack, developers can write hybrid applications that integrate with services in the public Azure cloud, either as a first step in an eventual migration (if the issue is data residency and a cloud data center opens in the right geography) or to augment a system you never plan to put in the public cloud, and have the same DevOps process covering both environments.

    Image: Microsoft

    You can also use Azure Stack to run existing applications, especially if you want to start containerizing and modernizing them to move from monolithic apps to microservices. “You can connect to existing resources in your data center, such as SQL or other databases via the network gateway that is included in Azure Stack,” Natalia Mackevicius, director of Azure infrastructure management solutions, explained in an interview with Data Center Knowledge.

    But even if you’re using Azure Stack to virtualize existing applications, you’re going to be managing it in a very different way from your existing data center infrastructure – even if that includes Microsoft’s current Azure Pack way of offering cloud-style services on premises.

    Step Away from the Servers

    Azure Stack does integrate with your existing tools. When you set it up, you can choose whether to manage access using Azure Active Directory in a hybrid cloud situation, or Active Directory Federation Services if it’s not going to be connected to the public cloud.

    But you never do most of the setup you would with most servers. Network configuration happens automatically when you connect the switches in Azure Stack to your network, for example. “Customers complete a spreadsheet with relevant information for integration into their environment, such as the IP space to be used and DNS. When Azure Stack is deployed, the deployment automation utilizes this information to configure Azure Stack to connect into the customer’s network,” Mackevicius said.

    You won’t monitor Azure Stack like a normal server cluster because much of what an admin would normally do is automated and taken care of by the infrastructure system. But there are REST APIs for monitoring and diagnostics – as well as a System Center Operations Manager management pack for Azure Stack and a Nagios extension – so you can use your usual monitoring tools. Server vendors like HPE are using those APIs to integrate Azure Stack into their own tools, so if you already use HPE OneView, for example, you can manage Azure Stack compute, storage, and networking through that.

    “The switches in Azure Stack can be configured to send alerts and so on via SNMP, for example, to any central network monitoring tools,” Mackevicius said. “Each Azure Stack integrated system also has a Hardware lifecycle host (HLH), where the hardware partner runs software for hardware management, which may include tools for power management.”

    The portal on Azure Stack lets you manage the VMs that you’re running on it (and with the Windows Azure Pack Connector for Azure Stack, you can also manage VMs running on your existing infrastructure on Azure Pack), but not the IaaS service that runs them. “You can use monitoring tools such as System Center Operations Manager or Operations Management Suite to monitor IaaS VMs in Azure or Azure Stack in the same way you monitor VMs in your data centers.”

    Backup and DR

    For backup and DR, you need to think both about tenant workloads and the infrastructure for Azure Stack itself. Microsoft suggests Azure Backup and Azure Site Recovery for replication and failover, but that’s not the only option. “Tenant assets can use existing backup and DR tools such as Veeam, Commvault, Veritas Backup products,” or whatever other systems you already have in place.

    “For [its own] infrastructure, Azure Stack includes a capability which takes a periodic snap of the relevant data and places it on an externally configurable file share,” Mackevicius explained. That stores metadata like subscription and tenant-to-host mapping, so you can recover after a major failure, and you can use regions within your Stack deployment for scale and geo-redundancy.

    Updates on Your Own Schedule

    Updating is also very different. Updates to the Azure services and capabilities will come whenever they’re ready; updates for the Azure Stack infrastructure will come regularly, but that’s updates to infrastructure management. Even though Azure Stack runs on Windows Server, you’re not going to sit there testing and applying server patches. What Microsoft calls ‘pre-validated’ updates are delivered automatically, and what you control is when they’re applied, so they happen during your chosen maintenance window.

    Getting updates to be seamless and stress-free is why Microsoft turned to specific hardware partners rather than letting customers build DIY Azure Stack configurations. “Sure, you can get it up and running … but then you need everything to update, and by the way, that needs to happen while all the tenants continue to run,” explained Vijay Tewari of the Azure Stack team. “The thing people fixate on is getting the initial deployment right, but this is about the full operational lifecycle, which is a much bigger proposition.”

    That’s one of the reasons to bring cloud to your data center in the first place. “We have a highly simplified model of operation. We don’t want our customers spending inordinate amount of their resources, time, or money just trying to keep the infrastructure running. That’s not where the value of Azure comes from; it comes from innovative services, whether it’s Service Fabric, whether it is SQL DB, or Azure Machine Learning.”

    Azure Stack gives you the option of taking advantage of that cloud value without having to give up the value you get from your own data centers, but you will be doing things differently.

    Azure Adds D_v3 and E_v3 Virtual Machine Series


    Microsoft launched two new series of virtual machines in July, the D_v3 and the E_v3, which are successors to the D_v2-Series. There are some interesting new firsts with these series. In this article, I will discuss the features of these new series and how this impacts the promotional pricing of the D_v2-Series virtual machines.

    Successor Series

    This is not the first time that Microsoft has launched a successor series of virtual machines. In the past, the Standard A_v2-Series replaced the Standard A-Series with a more recognizable set of sizes and lower costs. We have seen the D_v2-Series come in with newer hardware and (eventually) lower costs than the original D-Series.

     

     

    Recently, Microsoft has started to split the categorization of the D_v2-Series machines into two groupings:

    • General purpose: A normal balance of CPU to RAM, including the D1_v2 to D5_v2 machines
    • Memory optimized: Higher than normal RAM, including the D11_v2 to D15_v2 machines

    This split was a little confusing. Instead of continuing it, Microsoft has decided to replace the D_v2 machines in the memory-optimized category with the new E_v3-Series, while the general-purpose D_v2 virtual machines are replaced by the new D_v3 virtual machines.

    Host Changes

    The D_v2-Series machines were based on a 2.4GHz Intel Xeon E5-2673 v3 (Haswell) processor, which is capable of bursting up to 3.1GHz with Intel Turbo Boost Technology 2.0. The D_v3 and E_v3-Series machines are based on the newer 2.3GHz Intel Xeon E5-2673 v4 (Broadwell) processor, which can achieve up to 3.5GHz, also with Intel Turbo Boost Technology 2.0 – an extra 0.4GHz of burst headroom.


    Continuing a shift that began last year, the two new series depart from the old multiples of 1.75GB of RAM and use recognizable quantities of CPU and RAM. The name of the virtual machine size indicates the number of virtual processors, and the RAM amount is a multiple of the vCPU count. For example (a small parsing sketch follows this list):

    • The D2_v3 has 2 virtual processors and 8GB (x4) RAM.
    • The D4_v3 has 4 virtual processors and 16GB (x4) RAM.
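
    The sketch below illustrates the convention, assuming the D_v3 pattern of RAM = 4 x vCPU holds for every size in the series (the article only lists D2_v3 and D4_v3 explicitly).

    ```python
    # Sketch of the D_v3 naming convention: the number in the size name is the
    # vCPU count, and RAM is assumed to be four times that count in GB.
    import re

    def dv3_specs(size_name):
        """Return (vCPUs, RAM in GB) for a D_v3 size name such as 'D8_v3'."""
        match = re.fullmatch(r"D(\d+)_v3", size_name)
        if not match:
            raise ValueError(f"Not a D_v3 size name: {size_name}")
        vcpus = int(match.group(1))
        return vcpus, vcpus * 4

    print(dv3_specs("D2_v3"))  # (2, 8)   -> 2 vCPUs, 8GB RAM
    print(dv3_specs("D4_v3"))  # (4, 16)  -> 4 vCPUs, 16GB RAM
    ```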

    The D_v3 and E_v3 also introduce the use of Intel Hyper-Threading; past machines did not use it. As a result, you should notice a significant (up to 28 percent, according to Microsoft) price reduction between the regular price of the D_v2-Series and the D_v3-Series (West US 2 region):

    • D2_v2 with 2 cores and 7GB RAM costs $108.63 per month.
    • D2_v3 with 2 virtual CPUs and 8GB RAM costs $74.40 per month.

    Nested Virtualization

    Another significant change is that the v3 virtual machines and the M-Series run on hosts that are powered by Windows Server 2016 (WS2016) Hyper-V. That does not mean all that much by itself, but it does mean that some WS2016 features might start to appear over time. The first of these new features is support for nested virtualization: you can run Hyper-V virtual machines inside of Azure (Hyper-V) virtual machines.

    I would not start by saying, “Hey let’s run production Hyper-V clusters on Azure,” but there will be some interesting scenarios:

    • Those of us that need accessible pay-as-you-go demo and training labs without expensive hardware have a new option.
    • Azure virtual machines can host Hyper-V containers. This is probably the core scenario that Microsoft was focusing on.

    Availability

    My guess is that new hardware is being deployed to support these two new series of virtual machines. This means that availability is limited to a few regions but this will grow over time:

    • West US 2
    • East US 2
    • West Europe
    • Southeast Asia

    D_v2 Promotional Pricing

    Microsoft offered promotional pricing for the D_v2-Series ahead of the launch of the D_v3- and E_v3-Series machines. This was probably done to seed adoption; the promotional pricing of the older series roughly matched what the two new series would cost.

    Microsoft will be winding down this promotional offer in regions where the D_v3 and E_v3 are now available. Customers will be able to deploy the D_v2 promotional machines until August 15th in the above regions. In the remaining regions without D_v3 and E_v3 availability, the promotional offer will continue until the new series are launched.


    All currently deployed D_v2_promo virtual machines (a specific set of SKUs) will continue to be billed using the promotional pricing until June 30, 2018. At that time, the machines will revert to D_v2 pricing, so you should either stick with D_v2 machines or upgrade them to D_v3 or E_v3 machines.

    The post Azure Adds D_v3 and E_v3 Virtual Machine Series appeared first on Petri.

    Google Cloud Platform now open in London


    By Dave Stiver, Product Manager, Google Cloud Platform

    Starting today, Google Cloud Platform (GCP) customers can use the new region in London (europe-west2) to run applications and store data in London. London is our tenth region and joins our existing European region in Belgium. Future European regions include Frankfurt, the Netherlands and Finland.

    Incredible user experiences hinge on performant infrastructure. GCP customers throughout the British Isles and Western Europe will see significant reductions in latency when they run their workloads in the London region. In cities like London, Dublin, Edinburgh and Amsterdam, our performance testing shows 40%-82% reductions in round-trip time latency when serving customers from London compared with the Belgium region.

    We’ve launched London with three zones and the following services:

    The London region puts control over how to deploy resources directly in the hands of GCP customers, giving them a choice, for some GCP services, of where to run their applications and store their data. When customers sign up for GCP services, they have three different options, depending on the service (a short sketch follows this list):

    1. Regional: Run applications and store data in a specific region, e.g., London, Tokyo, Iowa, etc.
    2. Multi-regional: Distribute applications and storage across two or more cloud regions on a given continent, e.g., Americas, Asia or Europe.
    3. Global: Distribute applications and store data globally across our entire global network for optimal performance and redundancy.
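
    As a minimal sketch of the first two options, here is how a Cloud Storage bucket might be placed in the London region versus the EU multi-region using the google-cloud-storage Python client. Bucket names are placeholders, and the snippet assumes default application credentials are already configured.

    ```python
    # Sketch: regional (London) vs. multi-regional (EU) Cloud Storage buckets.
    # Bucket names are placeholders; credentials come from the environment.
    from google.cloud import storage

    client = storage.Client()

    # Regional: objects stay in the London region (europe-west2).
    london_bucket = client.bucket("example-london-data")
    london_bucket.location = "europe-west2"
    london_bucket.storage_class = "REGIONAL"
    london_bucket.create()

    # Multi-regional: objects are distributed across EU data centers.
    eu_bucket = client.bucket("example-eu-data")
    eu_bucket.location = "EU"
    eu_bucket.storage_class = "MULTI_REGIONAL"
    eu_bucket.create()
    ```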

    In addition, we’ve worked diligently over the last decade to help customers directly address EU data protection requirements. Most recently, Google announced a commitment to GDPR compliance across GCP. The General Data Protection Regulation (GDPR), which takes effect on May 25, 2018, is the most significant piece of European privacy legislation in the last 20 years.

    "Google’s decision to choose London for its latest Google Cloud Region is another vote of confidence in our world-leading digital economy and proof Britain is open for business. It’s great, but not surprising, to hear they’ve picked the UK because of the huge demand for this type of service from the nation’s firms. Earlier this week the Digital Evolution Index named us among the most innovative digital countries in the world and there has been a record £5.6bn investment in tech in London in the past six months.

    Karen Bradley, Secretary of State for Digital, Culture, Media and Sport

    "At WP Engine, we look forward to extending our digital experience platform to an even broader set of our 10,000 European customers who want to be hosted on Google Cloud Platform based in the London region. We are excited about bringing reduced latency benefits from the ability to store and process data in London to our UK customers."  

    Jason Cohen, Founder and CTO, WP Engine

    "The Telegraph benefits greatly from Google Cloud’s global scale and is pleased to see continued investment from Google Cloud in the UK. We look forward to working with them closely as they expand their business in the UK and Europe."  

     Toby Wright, CTO, The Telegraph

    "Google Cloud enables Revolut to try new ideas and stay agile while providing secure, reliable services for our customers at scale."  

    Vladyslav Yatsenko, Co-founder & CTO, Revolut

    For the latest on the terms of availability for services from this new region as well as additional regions and services, visit our London region page or locations page. For guidance on how to build and create highly available applications, take a look at our zones and regions page. Give us a shout to request early access to new regions and help us prioritize what we build next.

    We’re excited to see what you’ll build on top of the new London region!

    HPE ProLiant Gen10 Featuring Intel Xeon Scalable Processors Launched


    HPE DL380 Gen10 Stack

    HPE is synonymous with data center computing. The HPE ProLiant Gen10 was shown off at Discover, but with the launch of the Intel Xeon Scalable Processor Family, the company can now start marketing the servers to the general public. HPE is one of the top volume vendors of servers, so whenever it unveils a new generation of servers, it is time to take notice.

    HPE ProLiant DL380 Gen10 Example

    As we have seen with Dell EMC and other vendors, HPE seems to be introducing updates to its Gen9 line in waves. One of the exciting platforms in the first wave of launches is the HPE ProLiant DL380 Gen10, replete with a gorgeous new faceplate.

    HPE DL380 Gen10 Front

    The HPE ProLiant DL380 is designed to be highly configurable for a user’s workload. There are different front storage options, multiple networking, and storage controller flavors available.

    HPE DL380 Gen10 Interior With Skylake SP

    As you can see, the HPE ProLiant DL380 has many customization points and can be used with the range of Intel Xeon Scalable Processor Family CPUs. The HPE ProLiant DL380 Gen10 can handle up to 192GB of persistent memory. HPE is making a major push toward NVDIMMs with this generation.

    Other key HPE ProLiant Gen10 launch-week servers are the HPE ProLiant DL360 Gen10, a 1U dual-socket server, and the HPE ProLiant DL560 Gen10. The DL560 Gen10 is a 2U quad-socket server, which means one has access to performance like we saw in our quad Intel Xeon Platinum 8176 initial benchmarks in only 2U. For comparison, our test system from the Intel OEM business was 4U and the Dell PowerEdge R940 is 3U, giving HPE a 50-100% density advantage.

    More HPE Features for Gen10

    We wanted to highlight a few new features for Gen10. One example is iLO 5, the next generation of HPE’s management interface. We recently reviewed an HPE DL60 Gen9 with iLO 4.

    VIDEO

    Beyond the iLO management interfaces, HPE’s other tools are upgraded to manage the new Gen10 servers.

    One area that HPE, and other vendors, have been pushing in this product cycle is security. For example, the company is offering a Silicon Root of Trust firmware security protection option. If you have the iLO Advanced Premium Security Edition, you can have server firmware checked every 24 hours to ensure that it has not been tampered with. If it has been, you can roll back to the last known good state or to factory settings after detection of compromised code.

    Final Words

    While the HPE ProLiant product pages have three new Gen10 models and the rest are listed as Gen9, we fully expect this to change over the next few months. Like other major vendors, HPE is releasing its products in phases with this generation. They are certainly great looking machines.

    Microsoft Defines Its Path Forward for On-Premises


    Every time I present or speak at an event, one question that always comes up is, “What is the future for on-premises data centers?” It’s a valid question, as Microsoft is pushing everything cloud these days. While it may look like the company is moving beyond supporting local data centers, that’s not accurate, but it is going to force the modernization of those environments.

    Let’s be clear: the on-premises data center is not going away anytime soon. While Microsoft, Amazon, and Google would like to think that cloud is the be-all, end-all solution to IT infrastructure, HPE, Dell, and Lenovo can prove, based on their hardware sales, that this isn’t the case.

    That being said, Azure, AWS, and similar services will reduce the number of new data centers built, because as new companies are born, it typically makes more sense to build in the cloud than to invest in server hardware. Granted, that doesn’t work for every company, and there are genuine reasons why the cloud is not for everyone, but for most, cloud solutions are viable options going forward.

    There are two things you need to understand about Microsoft’s support for on-premises data centers: the new servicing model for Windows Server, and Azure Stack. These two items represent a genuine look at how Microsoft views the evolution of your data center.

    There are two primary servicing channels for Windows Server going forward: the Long-Term Servicing Channel (LTSC) and the Semi-Annual Channel (SAC). As both names suggest, the LTSC has a new release every two to three years, with five years of mainstream support, five years of extended support, and an option for six further years of Premium Assurance assistance. The Semi-Annual Channel ships updates twice a year that deliver new features to the platform; the slide below provides a good look at the delivery model.

    The twice-a-year update cadence represents the nimble data center, one that can update frequently to receive the newest features first, while the LTSC is for older installations or companies that don’t have the capacity to update their infrastructure regularly.

    For those on the SAC, you will be able to skip one feature update per year, which will ease the burden on system admins who don’t want to devote time twice a year to deploying upgrades. Each feature update will be supported for 18 months, the same as Windows 10.

    To gain access to the Semi-Annual Channel, you will need to subscribe to Software Assurance or be using Azure. The LTSC will be available through all channels and is what many of us know as the traditional Windows Server model (for example, Windows Server 2016 falls into this branch).

    With this change, Microsoft will be updating its naming convention to one that is similar to how Windows operates. SAC versions will be known as Windows Server 1709, 1803, and so on, while LTSC releases will stick to names like Windows Server 2012 and 2016. The goal here is simple: faster updates for its server software, in line with the practices used inside the modern data center.

    On the other side of the coin is the hybrid world of the data center; Azure Stack is Microsoft’s big long-term investment for medium to large environments. The goal here is to extend the benefits of Azure to on-premises operations, for those who may not typically have used the cloud platform but need the benefits of its technology.

    But more importantly, deploying Azure Stack in your data center sets up a future transition to the cloud-based solution. With hardware now being offered that is Azure Stack certified, when an on-premises deployment looks to move to the cloud, you can easily port the local solutions with minimal impact, as Azure Stack is an extension of Azure down to your data center.

    The goal for Microsoft is to bring Azure to everyone and while not all operations can or are ready to move to the cloud, Microsoft is going to keep adding features to Stack and Server that tie in natively to Azure so that when these operations do move to the cloud, jumping to Azure is the natural choice.

    For other services like SharePoint, Microsoft has already said that there will be another release designed for on-premises operations. But, they have said that cloud-based users will get new features first and that’s the overall theme for Microsoft.

    If you want to be on the bleeding edge of Microsoft software and services, the cloud is the way to go, or at a minimum, Azure Stack. Those who need to work on-premises will find themselves behind their cloud-based peers, but the company is not abandoning these users anytime soon. That being said, the company is planting seeds to entice users to move to Azure and ditch local hardware, and those efforts will only intensify with each new iteration of on-premises software it releases.

    The post Microsoft Defines Its Path Forward for On-Premises appeared first on Petri.

    Any Alexa device can control your Fire TV


    You’d think that Amazon would have made it possible to control a Fire TV from external Alexa devices as soon as it was an option, but no — you’ve had to use the Fire TV itself if you wanted to play a video using your voice. At last, though, sense has prevailed. Amazon has updated all versions of the Fire TV and Fire TV Stick to add support for voice control from another Alexa-enabled device. If you want to skip to the next episode of a show, you can talk to your Echo or smartphone instead of scrounging for the Fire TV’s remote.

    Does the feature sound familiar? You’re not alone. One of the centerpieces of Google Home is its ability to queue up video on a Cast-enabled TV, and Amazon is effectively matching that feature note for note. Not that we’re complaining. This is arguably one of the biggest omissions in the Fire TV’s feature set, and it only makes sense if you live in a household with more than one Amazon device at your beck and call.

    Via: Android Police

    Source: Amazon

    AI lawyer can help you with a thousand different legal issues


    Over two years ago, Joshua Browder, now a junior at Stanford University, created a chatbot that could contest parking tickets in New York City and London. By June of 2016, DoNotPay had successfully contested 160,000 parking tickets — a 64 percent success rate — and earlier this year, Browder added capabilities to assist asylum seekers in the US, UK and Canada. Now, the bot is able to assist with over 1,000 different legal issues in all 50 states and across the UK.

    To use DoNotPay’s AI-assisted help, you just type your problem into its search bar and links to relevant aid pop up that are specific to your location. After you navigate through different options, a chatbot then asks you questions and puts together a letter or other legal documentation. The bots can help you write letters or fill out forms for issues like maternity leave requests, landlord disputes, insurance claims and harassment.

    Browder hasn’t accepted any outside funding as of yet, but monetization of DoNotPay is in its future. While he hasn’t decided how that will work, Browder is considering bot sponsorships – a car dealership sponsoring a parking ticket bot specific to its city, for example.

    The "world’s first robot lawyer," as Browder refers to his service, has beaten an estimated 375,000 parking tickets and saved around $9.3 million in fines. If that success can translate to the 1,000 new legal areas the bot is taking on, DoNotPay can become a seriously useful free legal aid.

    Via: The Verge

    Source: DoNotPay (1), (2)

    oBike arrives in London with its dockless take on Boris bikes


    Already this year we’ve seen two Chinese companies that run novel bike rental schemes expand into the UK, and now Singaporean firm oBike is throwing its chips into the pot, too. The startup has this week put 400 of its two-wheelers to work in the London Borough of Tower Hamlets, despite the capital being home to over 11,000 for-hire ‘Boris bikes.’ Unlike these, though, oBikes don’t require docking. Through the company’s mobile app, you locate the nearest available pushbike on a map, unlock it by scanning its unique QR code, then leave it wherever you want when you’re done.

    The app also handles the payment side of things (50p per half hour) and includes a credit system that gives users free ride time for reporting damaged and illegally parked bikes. The company tells Wired that Tower Hamlets was a good place to start, as it believes it to be an underserved area, and that it will add hundreds of bikes every day throughout July to grow the scheme. According to oBike, it’s in ongoing discussions with local councils, which probably means it’s trying to persuade them there’s space for another player, considering the large number of ‘Boris bikes’ already in circulation.

    Even in the much less saturated city of Cambridge, Chinese company Ofo had to scale back its trial, which began in April, to only a handful of bikes after the council became concerned they would clutter pavements. Mobike, which runs another near-identical app-based rental scheme, has had problems of its own after putting 1,000 bikes on the streets of Manchester last month. It’s far from a widespread issue, but some bikes have been the victims of run-of-the-mill vandalism, while others have simply been stolen after having their locks hacked off.

    Via: Wired

    Source: oBike

    Bitdefender Home Scanner scans your Home Network for vulnerabilities


    The internet is now not limited to just mobile phones and computers. With all these smart devices showing up, your home network has grown drastically. We have mobiles, computers, smart TVs, surveillance devices, automation devices, and a lot of other devices connected to the same network. But have you ever given a thought to the vulnerabilities of each of them? Securing work networks has always been a requirement, but security on home networks is also becoming a priority these days. To help you out with this, we have Bitdefender Home Scanner.

    Bitdefender Home Scanner is a free and fast Wi-Fi scanner for your home network. It looks for vulnerable devices and passwords, and offers detailed security recommendations for your home network.

    Bitdefender Home Scanner

    Much as its name suggests, Bitdefender Home Scanner scans your home for all kinds of network vulnerabilities. Designed by network and security experts, this tool can uncover potentially harmful security flaws and weaknesses in your network. Moreover, it provides a detailed description of each flaw and steps to fix it, making your network secure and safe.

    So, what does it do?

    The first step is knowing your network, so the program will ask you whether the Wi-Fi you are connected to is your home network. Using this tool on public networks is not recommended at all. Once you’ve set up your home network, the program starts scanning. It first discovers all the connected devices and then proceeds to scan them individually for vulnerabilities.

    In the process, the tool will scan all the open ports and also look for poorly encrypted connections. All in all, Home Scanner scans for insecure connections, weak credentials and any hidden backdoors in your network.
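
    As a generic illustration of the kind of check such a scanner performs (this is not Bitdefender’s implementation), the sketch below probes a device on the LAN for a few commonly exposed TCP ports. The IP address and port list are placeholder assumptions.

    ```python
    # Generic sketch: probe a LAN device for commonly exposed TCP ports.
    # Illustrative only; IP address and port list are assumptions.
    import socket

    COMMON_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 80: "HTTP",
                    443: "HTTPS", 554: "RTSP (often cameras)", 8080: "HTTP-alt"}

    def scan_device(ip, ports=COMMON_PORTS, timeout=0.5):
        open_ports = []
        for port, service in ports.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((ip, port)) == 0:  # 0 means the connection succeeded
                    open_ports.append((port, service))
        return open_ports

    print(scan_device("192.168.1.10"))  # e.g. [(23, 'Telnet'), (80, 'HTTP')]
    ```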

    Using this tool is pretty simple and straightforward. Just download, install, and run it. You will need to create a Bitdefender account before using this tool, or you can sign in if you already have one. Once registered, you can select your home Wi-Fi network and proceed to scan. Scanning might take a little time depending on the number of devices connected to your network.

    Bitdefender Home Scanner

    Once the scan has been completed, a list of connected devices with their issues will be displayed. If all the devices are clean and safe, they will be marked with a green flag. Vulnerable devices will be marked with a red flag, and you can view their issues in detail by clicking open a single device.

    Other simple details like MAC Address, IP Address, device manufacturer and device type can also be viewed from the same window.

    Another great feature of this tool is that once you’ve configured your home network, you will be notified whenever a new device connects, so you can scan that device right away. Also, it is recommended to scan devices frequently, as new vulnerabilities are discovered every day and you need to be sure about them.

    You can also change your home network to something else by going into the ‘My Account’ page. A list will be displayed from where you can choose your home network and make this tool scan that network.

    Bitdefender Home Scanner is a great tool to have in addition to your antivirus/antimalware program. The tool makes sure you are always protected and none of your connected devices are vulnerable. It is a perfect tool if you use a lot of smart devices like TVs, surveillance systems, and automation devices. Click here to download Bitdefender Home Scanner.

    Read next: Bitdefender BOX will protect IoT Devices from Malware and Hacking.

    OpenStack Developer Mailing List Digest July 1-8


    Important Dates

    • July 14, 2017 23:59 OpenStack Summit Sydney Call for Presentations closes 1.
    • Around R-3 and R-4 (July 31 – August 11, 2017) PTL elections 2
    • All 3

    Summaries

    • TC status update by Thierry 4
    • API Working Group new 5
    • Nova placement/resource providers update 6

    SuccessBot Says

    • pabelanger on openstack-infra 7: opensuse-422-infracloud-chocolate-8977043 launched by nodepool
    • clark on openstack-infra 8: infra added citycloud to the pool of test nodes.
    • fungi on openstack-infra 9: OpenStack general mailing list archives from Launchpad (July 2010 to July 2013) have been imported into the current general archive on lists.openstack.org.
    • adreaf on openstack-qa: 10 Tempest ssh validation running by default in the gate on master.
    • All 11

    Most Supported Goals And Improving Goal Completion

    • Community wide goals discussions started at the OpenStack Forum, then the mailing list and IRC for those that couldn’t be at the Forum.
      • These discussions help the TC make decisions on which goals will be set for a release.
    • Potential goals:
      • Split Tempest plugins into separate repos/projects 12
      • Move policy and docs into code 13
    • Goals in Pike haven’t really been reached.
    • An idea from the meeting to address this is creating a role called “Champions”: drum beaters who get a goal done by helping projects with tracking status and sometimes doing code patches.
    • Volunteers should be interested and have a good understanding of their selected goal and its implementation, so they can act as a trusted person.
    • From the discussion in the thread, it seems we’re mostly in agreement with the Champion idea.
      • We have a volunteer for splitting out tempest plugins into repos/projects.
    • Full thread 14

     

    1. http://bit.ly/2ugoHjo/
    2. http://bit.ly/2uPd2Vxl
    3. http://bit.ly/1xKkuAH/
    4. http://bit.ly/2uPd3c38
    5. http://bit.ly/2ugpErZl
    6. http://bit.ly/2uP7Tgq8
    7. http://bit.ly/2scetfCl
    8. http://bit.ly/2scetfCl
    9. http://bit.ly/2rVtVwZl
    10. http://bit.ly/2rVtVwZl
    11. http://bit.ly/2rCKikRs
    12. http://bit.ly/2uPd3c38
    13. http://bit.ly/2uP7xq2l
    14. http://bit.ly/2ugkn3H8

    #openstack #openstack-dev-digest

    Gobs of free Microsoft eBooks.


    Here ya go!

    http://bit.ly/2ufNRyy

    New infographic: Cloud computing in 2017


    Cloud Computing Market - Infographic

    With 83% of businesses ranking cloud skills as critical for digital transformation in 2017, it’s great news for anyone with cloud architecting experience, and for those considering a career in cloud computing. In our new infographic, we compiled some of the latest industry research to look at the world of cloud computing in 2017.
    The cloud will continue to disrupt traditional IT models as the growing amount of data generated by people, machines, and things will increasingly be handled in the cloud. This is highlighted in both the shift to IT spending away from traditional on-premise hardware, and the increased adoption of public, private, and hybrid cloud models.
    As the cloud adoption increases, companies are using it to achieve greater scalability, higher performance, and faster time to market. As a result, skills for architecting, deploying, and securing the cloud will continue to be essential. And because companies are embracing multiple cloud models and multiple providers, those working in the cloud will need versatile skills that cover different platforms and services to help companies leverage the cloud for continued benefits and competitive advantage.
    Learn more about the world of cloud computing in 2017 in our infographic.

    References: Microsoft Cloud Skills Report | Cloud Computing Top Markets Report | U.S. Talent Shortage Survey | Intelligentcio.com | Forbes.com | Rightscale.com | Tech.web | Computerweekly.com

    Azure Site Recovery now supports large disk sizes in Azure


    Following the recent general availability of large disk sizes in Azure, we are excited to announce that Azure Site Recovery (ASR) now supports the disaster recovery and migration of on-premises virtual machines and physical servers with disk sizes of up to 4095 GB to Azure.

    Many on-premises virtual machines that are part of the Database tier and file servers use disks with sizes greater than 1 TB. Support for protecting these virtual machines with large disk sizes has consistently featured as a top ask from both our customers and partners. With this enhancement, ASR now provides you the ability to recover or migrate these workloads to Azure.

    These large disk sizes are available on both standard and premium storage. In standard storage, two new disk sizes, S40 (2TB) and S50 (4TB), are available for managed and unmanaged disks. For workloads that consistently require high IOPS and throughput, two new disk sizes, P40 (2TB) and P50 (4TB), are available in premium storage, again for both managed and unmanaged disks. Depending upon your application requirements, you can choose to replicate your virtual machines to standard or premium storage with ASR. More details on the configuration, region availability, and pricing of large disks are available in this storage documentation.

    To show you how Azure Site Recovery supports large disk sizes, I protected the Database tier VM of a SharePoint farm. You can see that this VM has data disks which are greater than 1 TB.

    Large disks

    Pre-requisite step for existing ASR users:

    Before you start protecting virtual machines/physical servers with greater than 1 TB disks, you need to install the latest update on your existing on-premises ASR infrastructure. This is a mandatory step for existing ASR users.

    For VMware environments/physical servers, install the latest update on the Configuration server, additional process servers, additional master target servers and agents.

    For Hyper-V environments managed by System Center VMM, install the latest Microsoft Azure Site Recovery Provider update on the on-premises VMM server.

    For Hyper-V environments not managed by System Center VMM, install the latest Microsoft Azure Site Recovery Provider on each node of the Hyper-V servers that are registered with Azure Site Recovery.

    I would like to call out that support for disaster recovery of IaaS machines in Azure with large disk sizes is not currently available. This support will be made available soon.

    Start using Azure Site Recovery today. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers. You can also use the ASR User Voice to let us know what features you want us to enable next.

    $99 buys you a useful, but plain, Android Wear watch

    The content below is taken from the original ($99 buys you a useful, but plain, Android Wear watch), to continue reading please visit the site. Remember to respect the Author & Copyright.

    When you think about all the Android Wear watches on the market, you probably recall LG, Huawei, Michael Kors or Tag Heuer. Google typically partners with heavyweights in tech and fashion. So it’s intriguing to see a small, obscure startup like Mobvoi offer its own Android Wear watch. What’s most interesting, though, is the Ticwatch E’s price tag: just $99.

    Of course that’s only if you buy the Ticwatch E on the company’s Kickstarter project before it becomes more widely available. When that happens, it will cost $159 and you’ll be able to get it on Amazon or Mobvoi’s website. A higher-end version called Ticwatch S will cost $119 on Kickstarter, and $199 at retail. Even after the early bird period, though, that’s still the cheapest Android Wear 2.0 watch around right now.

    What you get for those prices is surprising; Mobvoi didn’t cut corners. Both models sport a bright round 1.4-inch screen with a 400×400 resolution, a heart rate monitor and a GPS sensor — features that some more-expensive devices lack. Our demo unit of the Ticwatch E was responsive, and Google Assistant was actually faster than on competing devices I’ve tested. That’s particularly impressive given Mobvoi uses a Mediatek processor here instead of a higher-end Qualcomm option. During our preview, Google Maps was also quick to locate us, despite being indoors.

    With their bright colors (white, black and lime are available) and silicone rubber straps, the Ticwatch S and E both look and feel cheaper than the competition. While you can swap out the standard 22mm band on the S version to make it prettier, you’re stuck with the default non-removable strap on the E flavor. That’s because the latter’s GPS antenna is built into the band. Mobvoi figures the E model is more appropriate for a sportier crowd, so it made the entire device lighter. It also designed the E’s strap to be "breathable" by carving out a hollow underneath to avoid sweat buildup.

    Mobvoi said it included only "essential hardware" that it believes its users would need, which explains why neither version has a cellular radio. That omission not only keeps the watches slim and lowers costs, but should also allow for longer-lasting batteries than the competition. Depending on how you use it, the company says each device should last between 1.5 and 2 days.

    Mobvoi also says it will include five of its own apps on the watches, such as Tic Fitness, Health and Music Player, which lets you store and play music on your watch. These are carried over from the core app suite on the Ticwatch 2, and can’t be uninstalled. The inclusion is meant to please fans of the company’s existing smartwatches, which run Mobvoi’s own OS. Up to 15 other apps from that system will be available for download from the store, too. Since the apps weren’t on our demo unit, we couldn’t tell if they would actually be useful or feel more like bloatware.

    Ultimately, based on our brief preview, we can’t determine whether the Ticwatch S and E will hold up to long-term use. People who are style-conscious probably won’t appreciate the watches’ distinctly plastic, toy-like appearance. But those who couldn’t care less about looks and are more interested in trying out Android Wear 2.0 on a budget should hit up the team’s Kickstarter page before they’re sold out.

    Monitor Changes and Auto-Enable Logging in AWS CloudTrail

    The content below is taken from the original (Monitor Changes and Auto-Enable Logging in AWS CloudTrail), to continue reading please visit the site. Remember to respect the Author & Copyright.

    AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. Hence, it’s crucial to monitor any changes to CloudTrail and make sure that logging is always enabled.

    With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of API calls for your account, including API calls made through the console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.

    In this post, I describe a solution to notify on changes to CloudTrail and re-enable logging whenever logging is disabled.

    Change monitoring and notification

    For this walkthrough, you use an Amazon CloudWatch Events rule to monitor changes to a CloudTrail trail. An AWS Lambda function set as a target for this rule contains the logic to detect changes to the trail and publish a message to an Amazon SNS topic. The diagram below depicts the workflow.

     

     

    1. An IAM user makes changes to a CloudTrail trail.
    2. That change event gets detected by a CloudWatch Events rule.
    3. The rule triggers a Lambda function.
    4. The function publishes the change event to an SNS topic.
    5. The SNS topic sends the email to its subscribers.
    6. If the change event was to disable logging, the function re-enables logging on that trail.

    The CloudWatch Events rule detects the following CloudTrail operational events:

    • “StopLogging”
    • “StartLogging”
    • “UpdateTrail”
    • “DeleteTrail”
    • “CreateTrail”
    • “RemoveTags”
    • “AddTags”
    • “PutEventSelectors”

    After a “StopLogging” event is detected, the Lambda function re-enables logging for that trail. This generates a “StartLogging” event that again sends an SNS notification.
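
    The Lambda function used in the walkthrough below is uploaded as a packaged ZIP file (Cloudtraillambdamonitor.zip), so its source isn’t reproduced here. As a rough illustration of the logic only — a minimal sketch, not the packaged code — a handler for the Python 2.7 runtime could look something like this, assuming the SNS topic ARN is supplied through the SNSARN environment variable as described later:

    import json
    import os

    import boto3

    SNS_ARN = os.environ['SNSARN']          # set as a Lambda environment variable

    sns = boto3.client('sns')
    cloudtrail = boto3.client('cloudtrail')


    def lambda_handler(event, context):
        # The CloudWatch Events rule delivers the CloudTrail API call under 'detail'.
        detail = event.get('detail', {})
        event_name = detail.get('eventName', 'Unknown')
        trail_name = (detail.get('requestParameters') or {}).get('name')

        # Notify subscribers about the change.
        sns.publish(
            TopicArn=SNS_ARN,
            Subject='CloudTrail change detected: {0}'.format(event_name),
            Message=json.dumps(detail, indent=2)
        )

        # If logging was stopped, turn it back on; this raises a StartLogging
        # event that triggers a second notification.
        if event_name == 'StopLogging' and trail_name:
            cloudtrail.start_logging(Name=trail_name)

        return event_name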

     

    Walkthrough

    Now, I walk you through creating an SNS topic and subscription, Lambda function, and CloudWatch Events rule. To deploy this solution, download the CloudTrailMonitor.json AWS CloudFormation template. The README document provides instructions to deploy the stack.

    Create the SNS topic and subscription

    In the SNS console, choose Create topic and enter appropriate values for Topic name (such as CloudTrailAlert) and Display name (CT-Alert). Choose Create topic. Select the topic and view the details.

    Next, choose Create subscription.

    For Protocol, choose Email-JSON. Enter the email address where notifications should be sent and choose Create subscription.

    An email is sent to confirm the SNS topic subscription. In the email, open the SubscribeURL link to complete the subscription. Note the SNS topic ARN, as it is used later by the Lambda function.

    For more information, see Create a Topic in the Amazon SNS Developer Guide.
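
    If you prefer to script this step rather than use the console, a boto3 sketch along the following lines creates an equivalent topic and subscription; the topic name and display name are the walkthrough’s sample values, and the email address is a placeholder:

    import boto3

    sns = boto3.client('sns')

    # Create the topic and note its ARN; it becomes the Lambda SNSARN variable.
    topic_arn = sns.create_topic(Name='CloudTrailAlert')['TopicArn']
    sns.set_topic_attributes(TopicArn=topic_arn,
                             AttributeName='DisplayName',
                             AttributeValue='CT-Alert')

    # Subscribe an email endpoint; the owner still has to confirm via the
    # SubscribeURL link in the confirmation email.
    sns.subscribe(TopicArn=topic_arn,
                  Protocol='email-json',
                  Endpoint='you@example.com')

    print(topic_arn)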

     

    Create the Lambda function

    In the Lambda console, choose Functions, Create a Lambda function. Choose Blank Function and on the Configure trigger page, choose Next.

    On the next page, enter the following values:

    • Name: An appropriate name for the Lambda function
    • Runtime: Python 2.7
    • Code entry type: Upload a ZIP file
    • Function package: Upload the Cloudtraillambdamonitor.zip file
    • Environment variables:
      • Key: SNSARN
      • Value: The SNS topic ARN noted earlier
    • Role: Create a custom role (takes you to another page). Call the role CloudTrailLambda.

    For the policy document, enter the following policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LambdaCloudtraiMonitor",
                "Effect": "Allow",
                "Action": [
                    "cloudtrail:DescribeTrails",
                    "cloudtrail:GetTrailStatus",
                    "cloudtrail:StartLogging"
                ],
                "Resource": [
                    "arn:aws:cloudtrail:*:<AWS-ACCOUNT-ID>:trail/*"
                ]
            },
            {
                "Sid": "CloudWacthLogspermissions",
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": [
                    "arn:aws:logs:*:*:*"
                ]
            },
            {
                "Sid": "SNSPublishpermissions",
                "Effect": "Allow",
                "Action": [
                    "sns:Publish"
                ],
                "Resource": [
                    "arn:aws:sns:*:*:*"
                ]
            }
        ]
    }
    

    On the Configure function page, choose Next. Review the configuration settings before choosing Create function.

    For more information, see Step 2.1: Create a Hello World Lambda Function in the AWS Lambda Developer Guide.

     

    Create the CloudWatch Events rule

    In the CloudWatch Events console, choose Create rule. Enter the following values:

    • Service Name:  CloudTrail
    • Event Type:  AWS API call via CloudTrail
    • Specific Operations:  StopLogging, StartLogging, UpdateTrail, DeleteTrail, CreateTrail, RemoveTags, AddTags, PutEventSelectors

    For Targets, select the name of the Lambda function created earlier and choose Configure details. On the next page, enter an appropriate name and description for this rule. For State, select Enabled. Choose Create rule.

    For more information, see Tutorial: Schedule Lambda Functions Using CloudWatch Events in the Amazon CloudWatch Events User Guide.
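
    If you would rather create the rule programmatically, the sketch below shows roughly equivalent boto3 calls; the rule name and Lambda function ARN are placeholders, and the event pattern mirrors the operations listed above. Note that a rule created this way also needs a resource-based permission on the Lambda function that allows events.amazonaws.com to invoke it, which the console adds automatically when you pick the function as a target.

    import json

    import boto3

    events = boto3.client('events')

    # Event pattern matching the CloudTrail operations listed earlier.
    event_pattern = {
        "source": ["aws.cloudtrail"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["cloudtrail.amazonaws.com"],
            "eventName": ["StopLogging", "StartLogging", "UpdateTrail",
                          "DeleteTrail", "CreateTrail", "RemoveTags",
                          "AddTags", "PutEventSelectors"]
        }
    }

    # Rule name and Lambda function ARN are placeholders.
    events.put_rule(Name='CloudTrailChangeRule',
                    EventPattern=json.dumps(event_pattern),
                    State='ENABLED')

    events.put_targets(Rule='CloudTrailChangeRule',
                       Targets=[{
                           'Id': 'CloudTrailMonitorLambda',
                           'Arn': 'arn:aws:lambda:<REGION>:<AWS-ACCOUNT-ID>:function:<FUNCTION-NAME>'
                       }])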

     

    Validate monitoring

    To validate that the solution is working properly, make a change to CloudTrail and confirm that you receive a notification about it. The following are sample emails sent when a change in CloudTrail was detected; in this case, logging was disabled and then re-enabled automatically.
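
    One simple way to exercise the solution — assuming you have a test trail, named my-trail here as a placeholder — is to stop logging programmatically and then check that the trail is logging again a short while later:

    import time

    import boto3

    cloudtrail = boto3.client('cloudtrail')

    # Disable logging on a test trail; the solution should send a notification
    # and then re-enable logging automatically.
    cloudtrail.stop_logging(Name='my-trail')

    # Give the CloudWatch Events rule and the Lambda function a minute to run.
    time.sleep(60)

    # The trail should be logging again.
    status = cloudtrail.get_trail_status(Name='my-trail')
    print(status['IsLogging'])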

     

    Summary

    In this post, I explained how to create a solution with CloudWatch Events, Lambda, and SNS to notify you about changes to CloudTrail trails, and to re-enable logging automatically whenever logging is disabled. If you can’t guarantee that your compliance logging is fully managed and automatic, your organizational governance or auditing may be at risk.

    For more information, I recommend the following whitepapers:

    About the Author

    Sudhanshu Malhotra is a Solutions Architect at AWS Professional Services. Sudhanshu enjoys working with our customers and helping them deliver complex solutions in AWS in the areas of DevOps, infrastructure as code, and configuration management. In his spare time, Sudhanshu enjoys spending time with his family, hiking, and tinkering with cars.

    Mappy days! Ordnance Survey offers up free map of UK greenery

    The content below is taken from the original (Mappy days! Ordnance Survey offers up free map of UK greenery), to continue reading please visit the site. Remember to respect the Author & Copyright.

    The Ordnance Survey has launched a free online map of Britain’s green spaces with an open dataset for developers to get their hands on.

    The mapping agency’s latest offering pulls together geospatial data to create a map of concrete-free areas across the country – everything from your local park to an allotment.

    The work builds on the Scottish greenspace map, which the Ordnance Survey says was the first of its kind in the world when it was released in 2011. The latest map covers all of England and Wales, too.

    Using data from the Ordnance Survey as well as NGOs and other government agencies, the map pinpoints the location and extent of recreational areas and leisure facilities, colour coding them based on their use (bright green for public parks, brown for allotments).

    For the bigger sites, it lists the location of access points to the green spaces as a helpful green dot.

    The map is the latest in a line of releases from the Survey that aim to make its detailed maps more accessible for a digital age.

    Last year, it launched a smartphone app, giving would-be walkers more texture and information than your common-or-garden map app might offer.

    The aim is to get people off the couch and outdoors, and science minister Jo Johnson said that the latest map would “make it easier for people across the country to access greenspaces and lead healthier lives”.

    The Survey is also releasing a freely available dataset, OS Open Greenspace, which will become part of the Ordnance Survey’s Open Data portfolio.

    The dataset will be updated every six months, and it can be downloaded in 100km-square tiles so that people can easily locate and use just the area of interest. The technical specifications for the dataset can be found here (PDF).

    CEO Nigel Clifford said he was “excited to see how people experiment and work with the data” and looks forward “to seeing new products and services to help encourage an active Great Britain”.

    There’s also a public sector version of the greenspace map, called the MasterMap, which contains the location of all publicly accessible and non-accessible green spaces.

    The Survey said that giving the public sector accurate and up-to-date geospatial data would improve planning, analysis and decision-making.

    “It is hoped the dataset will prove instrumental in helping the public sector create and manage health and wellbeing strategies, active travel plans and various environmental initiatives that include air quality, biodiversity, housing regeneration and flood resilience,” the agency said. ®

    Astadia Releases Reference Architectures for Migrating Unisys and IBM Mainframe Workloads to Microsoft Azure

    The content below is taken from the original (Astadia Releases Reference Architectures for Migrating Unisys and IBM Mainframe Workloads to Microsoft Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

    Astadia, the premier technology consultancy focused on modernizing mainframe workloads, today announced the release of two separate reference architectures for moving Unisys and IBM mainframe workloads to Microsoft Azure’s public cloud computing infrastructure. Astadia has moved mainframe workloads to distributed environments for more than 25 years and has applied that expertise to architecting the solution for customers adopting Azure.

    Mainframes still run significant workloads on behalf of commercial and public sector organizations; yet the cost of maintaining these platforms increases annually while the availability of skilled workers rapidly declines. Azure is now ready to support these workloads with security, performance and reliability, fueling new digital transformation and innovation.

    “For decades, mainframes have traditionally housed the most mission critical applications for an organization,” said Scott Silk, Astadia Chairman and CEO. “Microsoft Azure is ready to take on these workloads and Astadia is ready to help organizations make the move with a low-cost, low-risk approach, and then provide ongoing services to manage the resulting environment.”

    “Astadia has been a trusted Microsoft platform modernization partner for years,” said Bob Ellsworth, Microsoft’s Director of Enterprise Modernization and Azure HiPo Partners. “Astadia is a proven mainframe applications and database consultancy and their focus on Azure will benefit numerous enterprise companies.”

    Astadia’s Mainframe to Azure Reference Architectures Built on Decades of Experience

    Astadia has completed over 200 successful platform modernization projects and has a proven methodology, best practices and proprietary tools for discovery, planning, implementation and on-going management and support. The Mainframe to Cloud Reference Architectures cover the following topics:

    • Drivers and challenges associated with modernizing mainframe workloads
    • A primer on the specific mainframe architectures
    • A primer on the Microsoft Azure architecture
    • Detailed Reference Architecture diagrams and accompanying narrative

    Availability

    The Unisys to Microsoft Azure Reference Architecture is available for free download at http://bit.ly/2u5zz2K

    The IBM Mainframe to Microsoft Azure Reference Architecture is available for free download at http://bit.ly/2v487zt

    skyfonts (5.9.2.0)

    The content below is taken from the original (skyfonts (5.9.2.0)), to continue reading please visit the site. Remember to respect the Author & Copyright.

    SkyFonts lets you install fonts from participating sites such as Google Fonts and keeps them up to date.

    6502 Retrocomputing Goes to the Cloud

    The content below is taken from the original (6502 Retrocomputing Goes to the Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

    In what may be the strangest retrocomputing project we’ve seen lately, you can now access a virtual 6502 via Amazon’s Lambda computing service. We don’t mean there’s a web page with a simulated CPU on it. That’s old hat. This is a web service that takes a block of memory, executes the 6502 code it finds in it, and then returns a block of memory after a BRK opcode or a timeout.

    You format your request as a JSON-formatted POST request, so anything that can do an HTTP post can probably access it. If you aren’t feeling like writing your own client, the main page has a form you can fill out with some sample values. Just be aware that the memory going in and out is base 64 encoded, so you aren’t going to see instantly gratifying results.
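
    As a rough sketch of what a client could look like — the endpoint URL and the JSON field names below are placeholders, since the actual request format is documented on the service’s main page — a Python client might post the example program like this:

    import base64

    import requests  # third-party HTTP library

    # The 15 bytes of 6502 machine code from the example discussed below.
    program = bytes(bytearray([
        0xA9, 0x01, 0x8D, 0x00, 0x02,   # LDA #$01 : STA $0200
        0xA9, 0x05, 0x8D, 0x01, 0x02,   # LDA #$05 : STA $0201
        0xA9, 0x08, 0x8D, 0x02, 0x02,   # LDA #$08 : STA $0202
    ]))

    # The field name "memory" and the URL are guesses, for illustration only.
    payload = {"memory": base64.b64encode(program).decode("ascii")}
    response = requests.post("https://example.com/6502", json=payload)

    # Decode the returned memory image back into bytes for inspection.
    result = response.json()
    memory_out = bytearray(base64.b64decode(result["memory"]))
    print(" ".join("{:02x}".format(b) for b in memory_out))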

    You may not be familiar with Amazon Lambda. It is the logical extension of the Amazon cloud services. Time was that you paid to have a server in a data center. The original Amazon cloud services let you spin up virtual servers that came into existence when needed; you could duplicate them, shut them down, and so on. However, Lambda goes one step further. You don’t have a server, you just have a service. When someone makes a request, Amazon’s servers handle it. They also handle plenty of other services for other people.

    There’s a certain amount of free service, but eventually, Amazon starts charging you for every 100 ms of execution you use. We don’t know how long the average 6502 program runs.

    Is it practical? We can’t think of why, but we’ve never let that stop us from liking something before. Just to test it, we put the example code into an online base64 decoder and wound up with this:

    a9 01 8d 00 02 a9 05 8d 01 02 a9 08 8d 02 02

    Then we went over to an online 6502 disassembler and got:

    * = C000
    C000 A9 01 LDA #$01
    C002 8D 00 02 STA $0200
    C005 A9 05 LDA #$05
    C007 8D 01 02 STA $0201
    C00A A9 08 LDA #$08
    C00C 8D 02 02 STA $0202
    C00F .END

    We then ran the 6502cloud CPU and decoded the resulting memory output to (with a bunch of trailing zeros omitted):

    01 05 08 00 00 00 00 00

    So for the example, at least, it seems to work.

    We’ve covered giant 6502s and small 6502 systems. We have even seen that 6502 code lives inside Linux. But this is the first time we can remember seeing a disembodied CPU accessible by remote access in the cloud.
