Prep for Cisco, CompTIA, and More IT Certifications With This $39 Bundle

The content below is taken from the original ( Prep for Cisco, CompTIA, and More IT Certifications With This $39 Bundle), to continue reading please visit the site. Remember to respect the Author & Copyright.

Large companies need to maintain a robust IT infrastructure if they want to thrive in the digital age, and they can’t accomplish this without certified IT professionals. Luckily, traditional schooling isn’t necessary to land an IT job; IT professionals simply need to pass their certification exams, and they can do so thanks to the wealth of training courses available. One such resource is this Ultimate IT Certification Training Bundle, which is currently on sale for $39.

To read this article in full, please click here

Deep dive into Azure Test Plans

The content below is taken from the original ( Deep dive into Azure Test Plans), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Test Plans, a service launched with Azure DevOps earlier this month, provides a browser-based test management solution for exploratory, planned manual, and user acceptance testing. Azure Test Plans also provides a browser extension for exploratory testing and gathering feedback from stakeholders.

Manual and exploratory testing continue to be important techniques for evaluating the quality of a product or service, alongside the DevOps emphasis on automated testing. In modern software development processes, everybody in the team contributes to or owns quality – including developers, testers, managers, product owners, user experience advocates, and more. Azure Test Plans addresses all these needs. Let’s take a closer look.

Note: For automated testing as part of your CI/CD workflow, consider leveraging Azure Pipelines. It provides mechanisms for continuous build, test, and deployment to any platform and cloud.

Testing is integral to DevOps and Agile teams

A common practice is to base tests on user stories, features, or scenarios that are managed on a Kanban board as in Azure Boards. With Azure Test Plans, a team can leverage manual testing right from within their Kanban board. This provides end-to-end traceability because tests and defects are automatically linked to the requirements and builds being tested, which also helps you track the quality of the requirements.

Add, view, and interact with test cases directly from the cards on the Kanban board, and progressively monitor status directly from the card. Developers and testers can use this capability to maximize quality within their teams.

Testing in Kanban board

Quality is a team sport through exploratory testing

Exploratory testing is an approach to software testing that is described as simultaneous learning, test design and test execution. It complements planned testing by being completely unscripted yet being driven by themes/tours. Quality becomes a shared responsibility as exploratory testing can be leveraged by all team members including developers, testers, managers, product owners, user experience advocates, and more. Watch a short video of how this works.

The Test & Feedback extension enables exploratory testing techniques in Azure Test Plans. It allows you to spend more time finding issues, and less time filing them. Using the extension is simple:

  • Capture your findings along with rich diagnostic data. This includes comments, screenshots with annotations, and audio/video recordings that describe your findings and highlight issues. In the background, the extension captures additional information such as user actions via image action log, page load data, and system information about the browser, operating system, and more that later help in debugging or reproducing the issue.
  • Create work items such as bugs, tasks, and test cases from within the extension. The captured information automatically becomes part of the filed work item and helps with end-to-end traceability.
  • Collaborate with your team by sharing your findings. Export your session report or connect to Azure Test Plans for a fully integrated experience.

Exploratory testing session in progress

The extension also helps in soliciting feedback from stakeholders who may reside outside the development team, such as marketing, sales teams, and others. Feedback can be requested from these stakeholders on user stories and features. Stakeholders can then respond to feedback requests – not just to rate and send comments, but also file bugs and tasks directly. Read more in our documentation.

Feedback requests on a Stakeholder

Planned manual testing for larger teams

Testing from within the Kanban board suffices when your testing needs are simple. However, for larger teams with more complex needs such as creating and tracking all testing efforts within a test plan scope, testing across multiple configurations, distributing the tests across multiple testers, tracking the progress against the test plan, etc., you need a full-scale test management solution and Azure Test Plans fulfils this need. 

Planned manual testing

Planned manual testing in Azure Test Plans lets you organize tests into test plans and test suites. Test suites can be dynamic (requirements-based-suites and query-based-suites) to help you understand the quality of associated requirements under development, or static to help you cover regression tests. Tests can be authored using an Excel-like grid view or other means available. Testers execute tests assigned to them using a runner to test your app(s). The runner can execute in a browser or as a client on your desktop, enabling you to test on any platform or test any app. During execution, rich diagnostic data is collected to help with debugging or reproducing the issue later. Bugs filed during the process automatically include the captured diagnostic data.

Test execution with rich data capture
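
If you want to pull test plan data programmatically (for example, to feed your own reporting), the Azure DevOps REST API exposes test plans alongside the browser experience. A minimal PowerShell sketch, in which the organization URL, project name, and personal access token are placeholders and the exact API path and version are assumptions based on the public REST documentation:

$org     = "https://dev.azure.com/yourorg"        # placeholder organization URL
$project = "YourProject"                          # placeholder project name
$pat     = "your-personal-access-token"           # placeholder PAT with Test Management (read) scope

# Build a Basic authentication header from the PAT
$token  = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$header = @{ Authorization = "Basic $token" }

# List the test plans defined in the project
$plans = Invoke-RestMethod -Uri "$org/$project/_apis/test/plans?api-version=5.0" -Headers $header
$plans.value | Format-Table id, name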

To track overall progress and outcomes, leverage lightweight charts, which can be pinned to your dashboard for easy monitoring. Watch a video showing planned manual testing in Azure Test Plans.

Charts to help track progress and outcomes

We hope this post gives you a quick peek into what Azure Test Plans can do for you – we recommend trying it out for free to learn more and to maximize quality for your software. Happy exploring and testing!

Further information

Want to learn more? See our documented best practices, videos, and other learning materials for Azure Test Plans.

Cloudflare’s new ‘one-click’ DNSSEC setup will make it far more difficult to spoof websites

The content below is taken from the original ( Cloudflare’s new ‘one-click’ DNSSEC setup will make it far more difficult to spoof websites), to continue reading please visit the site. Remember to respect the Author & Copyright.

Bad news first: the internet is broken, and has been for a while. The good news is that Cloudflare thinks it can make it slightly less broken.

With “the click of one button,” the networking giant said Tuesday, its users can now switch on DNSSEC in their dashboard. In doing so, Cloudflare hopes it removes a major pain-point in adopting the web security standard, which many haven’t set up — either because it’s so complicated and arduous, or too expensive.

It’s part of a push by the San Francisco-based networking giant to try to make the pipes of the internet more secure — even from the things you can’t see.

For years, you could open up a website and take its instant availability for granted. DNS, which translates web addresses into computer-readable IP addresses, has been plagued with vulnerabilities, making it easy to hijack any step of the process to surreptitiously send users to fake or malicious sites.

Take two incidents in the past year — where traffic to and from Amazon, and separately Google, Facebook, Apple, and Microsoft, was hijacked and rerouted for anywhere from minutes to hours at a time. Terabytes of internet traffic were siphoned through Russia for reasons that are still unknown. Any non-encrypted traffic was readable, at least in theory, by the Russian government. Suspicious? It was.

That’s where a security-focused DNS evolution — DNSSEC — is meant to help. It’s like DNS, but it protects requests end-to-end, from computer or mobile device to the web server of the site you’re trying to visit, by cryptographically signing the data so that it’s far tougher — if not impossible — to spoof.

But DNSSEC adoption is woefully low. Just three percent of websites in the Fortune 1000 sign their primary domains, largely because the domain owners can’t be bothered, but also because their DNS operators either don’t support it or charge exorbitant rates for the privilege.

Cloudflare now wants to do the hard work in setting those crucial DS records, a necessary component in setting up DNSSEC, for customers on a supported registrar. Traditionally, setting a DS record has been notoriously difficult, often because the registrars themselves can be problematic.
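
As a quick sanity check, you can see from the command line whether a zone already publishes DS records and signed answers. A minimal sketch using PowerShell's Resolve-DnsName from the DnsClient module on Windows; the domain and resolver address are only examples:

# Ask a public resolver for the domain's DS record (present only once DNSSEC is set up at the registrar)
Resolve-DnsName -Name "cloudflare.com" -Type DS -Server 1.1.1.1

# Request the signed answer itself; -DnssecOk sets the DO bit so RRSIG records come back with the response
Resolve-DnsName -Name "cloudflare.com" -Type A -Server 1.1.1.1 -DnssecOk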

As of launch, Gandi will be the first registrar to support one-click DNSSEC setup, with more expected to follow.

The more registrars that support the move, the fewer barriers there are to a safer internet, the company argues. Right now, the company says users should consider switching away from providers that don’t support DNSSEC and “let them know that was the reason for the switch.”

Just like HTTPS was slow to adopt over the years — but finally took off in 2015 — there’s hope that DNSSEC can follow the same path. The more companies that adopt the technology, the less vulnerable end users will be to DNS attacks on the internet.

And besides the hackers, who doesn’t want that?

Scale Computing, APC partner to offer micro data center in a box

The content below is taken from the original ( Scale Computing, APC partner to offer micro data center in a box), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hyperconverged infrastructure (HCI) vendor Scale Computing and power management specialist APC (formerly American Power Conversion, now owned by Schneider Electric) have partnered to offer a range of turnkey micro data centers for the North American market.

The platform combines Scale’s hyperconverged software, HC3 HyperCore, running on top of its own hardware and built on APC’s ready-to-deploy racks for a micro data center. Scale will sell the platform as a single SKU.

The pre-packaged platform is entirely turnkey, with automated virtualization, power management resources, and built-in redundancy. This makes it well-suited for remote edge locations, such as cell phone towers, where staff is not immediately available to maintain the equipment.

To read this article in full, please click here

Tandem CEO will tell you why building a bank is hard at Disrupt Berlin

The content below is taken from the original ( Tandem CEO will tell you why building a bank is hard at Disrupt Berlin), to continue reading please visit the site. Remember to respect the Author & Copyright.

Challenger banks, neobanks or digital-only banks… Whatever we choose to call them, Europe — and the U.K. in particular — has more than its fair share of bank upstarts battling it out for a slice of the growing fintech pie. One of those is Tandem, co-founded by financial technology veteran Ricky Knox, who we’re excited to announce will join us at TechCrunch Disrupt Berlin.

Tandem — or the so-called “Good Bank” — has been on quite a journey this year. Most recently the bank launched a competitive fixed savings product, pitting it against a whole host of incumbent and challenger banks. It followed the launch of the Tandem credit card in February, which competes well on cash-back and FX rates when spending abroad.

Both products are part of a wider strategy in which, like many other consumer-facing fintechs, Tandem wants to become your financial control centre, both connecting you to and offering various financial services. These are either products of its own or offered through partnerships with other fintech startups and more established providers.

At the heart of this is the Tandem mobile app, which acts as a Personal Finance Manager (PFM), letting you aggregate data from other bank accounts or credit cards you might have, in addition to managing any Tandem products you’ve taken out. The company recently acquired fintech startup Pariti to beef up its account aggregation features.

However, what makes Tandem’s recent progress all the more interesting is that it comes after a definite bump in the road last year. This saw the company temporarily lose its banking license and be forced to make lay-offs following the partial collapse of a £35 million investment round from department store House of Fraser, due to restrictions on capital leaving China. The remedy was further investment from existing backers and the bold move to acquire Harrods Bank, the banking arm of the U.K.’s most famous luxury department store.

As you can see, there is plenty to talk about. And some. So why not grab your ticket to Disrupt Berlin and listen to the Tandem story? The conference will take place on November 29-30.

In addition to fireside chats and panels, like this one, new startups will participate in the Startup Battlefield Europe to win the highly coveted Battlefield cup.


Ricky Knox

CEO & Co-Founder, Tandem

Ricky is a serial investor and entrepreneur. He has built five technology disruptors in fintech and telecoms, each of which also does a bit of good for the world.

Before Tandem he founded Azimo and Small World, two remittance businesses, and is managing partner of Hexagon Partners, a private equity firm. He built Tandem to be a digital bank that helps improve customers’ lives with money.

Ricky has a first class degree from Bristol University and an MBA from INSEAD.

Titanium now available with 128GB storage

The content below is taken from the original ( Titanium now available with 128GB storage), to continue reading please visit the site. Remember to respect the Author & Copyright.

Now that’s a solid state offer, if ever I saw one! From now until 8th October, 2018, Elesar Ltd‘s Titanium motherboard is available to buy with a jolly useful optional extra – a SanDisk 128GB solid state drive (SSD), pre-loaded with a standard RISC OS 5 disc image. The company doesn’t sell the motherboard as […]

When To Use A Multi-Cloud Strategy

The content below is taken from the original ( When To Use A Multi-Cloud Strategy), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are both advantages and disadvantages of multi-cloud environments, and knowing when to use a multi-cloud strategy could depend on how minimizing your dependency on a single provider could affect costs, performance, and security. Before discussing these advantages and disadvantages in greater detail, it is best to first clarify what a multi-cloud environment is.

What is a multi-cloud environment?

For the purposes of this article, a multi-cloud environment is one in which you use two or more public cloud services from two or more cloud service providers. For example, you might use Azure in the US and Alibaba in Asia to avoid latency issues. Alternatively, you may find Google better for development and testing, and AWS preferable for running your production environment.

According to the 2016 IDC CloudView Survey, more than half of businesses using AWS also use another public cloud service provider (for information about mixing private and public clouds, see our blog “When to Use a Hybrid Cloud Strategy”). The market research company Research and Markets predicts businesses using multi-cloud environments will increase by approximately 30% per year.

The advantages of a multi-cloud environment

As well as choosing a multi-cloud strategy to avoid latency, and for development and testing in an isolated environment, businesses distribute their resources between cloud service providers for a number of reasons:

Cherry-pick services

No single cloud service provider has the best tools for everything and, by using multiple cloud service providers, you can cherry-pick the best services from each. For example, if you build apps using the Watson AI platform that need to integrate with Microsoft products, you would use both IBM and Azure.

Improved disaster recovery

Similarly, no single cloud service provider has avoided a major outage. By using two or more providers, your infrastructure becomes more resilient and you could, if you wish, keep replicas of your applications in two separate clouds so that, if one cloud service provider goes down, you don’t.

Potential negotiating power

Competition between major cloud service providers means that, if you are a high-volume customer (a million dollars or more per year), you may be in a position to negotiate lower prices. Distributing your business between providers can give you some leverage in your negotiations.

Less single-vendor dependency

Depending on one provider for any product or service can be risky. Not only might they suffer an outage, but their service levels could decline or—unlikely as it may seem—their prices could go up. By not putting all your eggs in one basket, you are minimizing the risk of your own business suffering.

The disadvantages of a multi-cloud environment

Although many businesses have decided that now is the time to be using a multi-cloud strategy, some are looking at the disadvantages and have concerns about how they might overcome them.

Managing costs and loss of discounts

If you are currently using a single cloud service provider and having difficulty managing costs, imagine how much trouble you may have with two or three providers. Certainly, by diluting your cloud deployments, you will also be diluting the discounts you are entitled to.

Performance challenges

Working with multiple cloud service providers also creates challenges in finding developers with the right skill sets to make the most of each platform. Unless you have the right people in the right place, the resources you have deployed in the cloud may not perform as well as they could.

Increased security risk

Moving to a public cloud gives you less control over your data. Moving to two public clouds gives you even less, plus gives your applications a larger attack surface. There are tools to help secure multi-cloud environments, but you generally have to exercise a greater level of diligence.

Multi-cloud management

Managing costs is not the only thing you have to worry about in a multi-cloud environment. Managing all your assets can be very complicated. Fortunately, there are some very good cloud management platforms with multi-cloud support to help you overcome this challenge. Management of multiple cloud providers might require a change in your existing processes and skill sets.

When to use a multi-cloud strategy

Weighing up the advantages and disadvantages of multi-cloud environments, there are compelling cases for using a multi-cloud strategy, but when? At CloudHealth Technologies, we would suggest:

  • When there are operational advantages due to a wider choice of services.
  • When unscheduled downtime would severely disrupt your business.
  • When you have the right people in place to take advantage of the opportunities.
  • When you have a solution in place to help manage costs, performance and security.
  • When you have a global group of developers and you want to push resource efficiencies to them.

Want to learn more? Download the ebook 10 Frequently Asked Questions About Multi-cloud.

The post When To Use A Multi-Cloud Strategy appeared first on Cloud Academy.

Best GitHub Alternatives for hosting your open source project

The content below is taken from the original ( Best GitHub Alternatives for hosting your open source project), to continue reading please visit the site. Remember to respect the Author & Copyright.

GitHub is the most popular web-based hosting service for version control, used by developers to host their code. The website provides a platform to easily collaborate with other programmers on a project. GitHub is one of the best available […]

This post Best GitHub Alternatives for hosting your open source project is from TheWindowsClub.com.

Vtrus launches drones to inspect and protect your warehouses and factories

The content below is taken from the original ( Vtrus launches drones to inspect and protect your warehouses and factories), to continue reading please visit the site. Remember to respect the Author & Copyright.

Knowing what’s going on in your warehouses and facilities is of course critical to many industries, but regular inspections take time, money, and personnel. Why not use drones? Vtrus uses computer vision to let a compact drone not just safely navigate indoor environments but create detailed 3D maps of them for inspectors and workers to consult, autonomously and in real time.

Vtrus showed off its hardware platform — currently a prototype — and its proprietary SLAM (simultaneous location and mapping) software at TechCrunch Disrupt SF as a Startup Battlefield Wildcard company.

There are already some drone-based services for the likes of security and exterior imaging, but Vtrus CTO Jonathan Lenoff told me that those are only practical because they operate with a large margin for error. If you’re searching for open doors or intruders beyond the fence, it doesn’t matter if you’re at 25 feet up or 26. But inside a warehouse or production line every inch counts and imaging has to be carried out at a much finer scale.

As a result, dangerous and tedious inspections, such as checking the wiring on lighting or looking for rust under an elevated walkway, have to be done by people. Vtrus wouldn’t put those people out of work, but it might take them out of danger.


The drone uses depth-sensing both to build the map and to navigate and avoid obstacles.

The drone, called the ABI Zero for now, is equipped with a suite of sensors, from ordinary RGB cameras to 360-degree cameras and a structured-light depth sensor. As soon as it takes off, it begins mapping its environment in great detail: it takes in 300,000 depth points 30 times per second, combining that with its other cameras to produce a detailed map of its surroundings.

It uses this information to get around, of course, but the data is also streamed over wi-fi in real time to the base station and Vtrus’s own cloud service, through which operators and inspectors can access it.

The SLAM technique they use was developed in-house; CEO Renato Moreno built and sold a company (to Facebook/Oculus) using some of the principles, but improvements to imaging and processing power have made it possible to do it faster and in greater detail than before. Not to mention on a drone that’s flying around an indoor space full of people and valuable inventory.

On a full charge, ABI can fly for about 10 minutes. That doesn’t sound very impressive, but the important thing isn’t staying aloft for a long time — few drones can do that to begin with — but how quickly it can get back up there. That’s where the special docking and charging mechanism comes in.

The Vtrus drone lives on and returns to a little box which, when a tapped-out craft touches down, sets off a patented high-speed charging process. It’s contact-based, not wireless, and happens automatically. The drone can then get back in the air perhaps half an hour or so later, meaning the craft can actually be in the air for as much as six hours a day in total.

Probably anyone who has had to inspect or maintain any kind of building or space bigger than a studio apartment can see the value in getting frequent, high-precision updates on everything in that space, from storage shelving to heavy machinery. You’d put in an ABI for every X square feet depending on what you need it to do; they can access each other’s data and combine it as well.

The result of a quick pass through a facility. Obviously this would make more sense if you could manipulate it in 3D, as the operator would.

This frequency, and the detail with which the drone can inspect and navigate, mean maintenance can become proactive rather than reactive — you see rust on a pipe or a hot spot on a machine during the drone’s hourly pass rather than days later when the part fails. And if you don’t have an expert on site, the full 3D map and even manual drone control can be handed over to your HVAC guy or union rep.

You can see lots more examples of ABI in action at the Vtrus website. Way too many to embed here.

Lenoff, Moreno, and third co-founder Carlos Sanchez, who brings the industrial expertise to the mix, explained that their secret sauce is really the software — the drone itself is pretty much off-the-shelf hardware right now, tweaked to their requirements. (The base is an original creation, of course.)

But the software is all custom built to handle not just high-resolution 3D mapping in real time but the means to stream and record it as well. They’ve hired experts to build those systems as well — the 6-person team already sounds like a powerhouse.

The whole operation is self-funded right now, and the team is seeking investment. But that doesn’t mean they’re idle: they’re working with major companies already and operating a “pilotless” program (get it?). The team has been traveling the country visiting facilities, showing how the system works, and collecting feedback and requests. It’s hard to imagine they won’t have big clients soon.

😤Angry London bike thief in slippers gets trapped by ‘Brakes’ Lorry driver 🤣 **MUST SEE**

[https://youtu.be/xUcRzx-HvlY], Bike thief gets stopped!

Stop Edge from hijacking your PDF/HTML file associations

The content below is taken from the original ( Stop Edge from hijacking your PDF/HTML file associations), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Edge is set as the default PDF reader to open and view PDF files in Windows. So, whenever I attempt to open any PDF file in Windows 10, it automatically gets opened in the Edge browser, although my preferred choice […]

This post Stop Edge from hijacking your PDF/HTML file associations is from TheWindowsClub.com.

Acorn World exhibition in Cambridge – 8th and 9th September

The content below is taken from the original ( Acorn World exhibition in Cambridge – 8th and 9th September), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Centre for Computing History, a computing museum based in Cambridge, will be playing host to an event this coming weekend that should be of interest to any and all fans of Acorn Computers: Acorn World 2018. Organised by the Acorn and BBC User Group (ABUG) in association with the museum, the event will run […]

This Instagram page draws famous buildings, showing off the sketch process via timelapse

The content below is taken from the original ( This Instagram page draws famous buildings, showing off the sketch process via timelapse), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sam Picardal is a New York-based artist and architectural illustrator running the Instagram account @21.am, which shows off his sketching process for drawing buildings by famous architects. Capturing his practice via timelapse, the page posts videos of Picardal hand-drawing landmark works such as the Art Gallery of Alberta by Frank Gehry, Tower Bridge in London, Zaha Hadid’s Heydar Aliyev Cultural Center, and even the Millennium Falcon. Entire cityscapes are given the pen-and-ink treatment as well, as are various spacecraft and robots.

ClearCube Launches New C3Pi+ Raspberry Pi 3 Model B+ Thin Client at VMworld 2018

The content below is taken from the original ( ClearCube Launches New C3Pi+ Raspberry Pi 3 Model B+ Thin Client at VMworld 2018), to continue reading please visit the site. Remember to respect the Author & Copyright.

ClearCube Technology, Inc. announced the launch of the new C3Pi+ Thin Client at VMworld 2018 US in Las Vegas on August 28, 2018. The low-cost,… Read more at VMblog.com.

This funky new font is made up entirely of brands

The content below is taken from the original ( This funky new font is made up entirely of brands), to continue reading please visit the site. Remember to respect the Author & Copyright.

A digital studio called Hello Velocity has created a typeface that embraces well-known corporate logos and is still somehow far less annoying than Comic Sans. The studio says it creates "thought-provoking internet experiences," and its Brand New Roma…

RippleNet Offers SMEs a Competitive Advantage in Global Payments

The content below is taken from the original ( RippleNet Offers SMEs a Competitive Advantage in Global Payments), to continue reading please visit the site. Remember to respect the Author & Copyright.

While global business may move at the speed of the web, international payments instead seem to move like smoke signals. This is because the world’s payments infrastructure hasn’t changed since the heady days of disco, nearly four decades ago. Especially for teams at small and medium enterprises (SMEs), this disconnect between the hustle of business today and the inertia of old infrastructure creates unnecessary hurdles and impediments to smooth business operations.

Today’s system of moving money around the world forces banks and payment providers to plan for days of delay, produce their own liquidity by funding accounts in local currencies on each side of a transaction, and pass along exorbitant costs to their customers. For SMEs, this translates into a cumbersome payments experience with high fees, limited visibility into transaction details or status, and a settlement time that can stretch from days into weeks.

In contrast, RippleNet delivers a new global payments standard that speeds up transactions, introduces certainty, and lowers fees to transform the cross-border payments experience. Using RippleNet, banks and payment providers can reimagine a payment from invoice to confirmed settlement for their clients. Just one small change, like the ability to drag-and-drop invoices as part of a RippleNet powered transaction, can have many benefits: it saves time with pre-populated fields, automatically confirms recipients for accuracy, and obtains real-time quotes.

The end result is a vastly improved user experience for a transaction delivered in seconds and with confidence, at a fraction of the usual price.

Emerging Markets Close the Gap with Ripple  

This sea change in international payments is even more important when you consider the World Bank forecasts global remittance payments to grow by 3.4 percent to roughly $466 billion in 2018. Much of this will happen in emerging markets, which are home to 85 percent of the global population and account for almost 60 percent of global GDP, with India and China having the highest incoming flows in 2017.

Designed to solve the modern challenge of international or cross-border transactions, RippleNet is already having an impact in these emerging markets. With a growing global need not just for access, but also for more efficient, transparent, and cost-effective payments into and out of emerging markets, new financial institutions in India, Brazil and China have joined RippleNet to power instant remittance payments into their countries.

SMEs also play a critical economic role in these emerging markets. RippleNet delivers efficiencies and advantages that can help them be more competitive in an interconnected world.

Rather than be limited by borders and currencies, Ripple connects all parties in a global transaction through a single seamless, frictionless experience. Built for the Internet age, Ripple delivers access, speed, certainty and savings. And by leveraging the most advanced blockchain technology possible, it is scalable, secure and interoperable.

For more information on Ripple’s solutions or to learn how to join RippleNet contact us.

The post RippleNet Offers SMEs a Competitive Advantage in Global Payments appeared first on Ripple.

How to keep track of the Total Editing Time spent on a Microsoft Word document

The content below is taken from the original ( How to keep track of the Total Editing Time spent on a Microsoft Word document), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Word was designed to enable its users to type and save documents. In addition to this utility, it has a feature that keeps a count of the amount of time spent on a document. Normally, you […]

This post How to keep track of the Total Editing Time spent on a Microsoft Word document is from TheWindowsClub.com.

Monitor all Azure Backup protected workloads using Log Analytics

The content below is taken from the original ( Monitor all Azure Backup protected workloads using Log Analytics), to continue reading please visit the site. Remember to respect the Author & Copyright.

We are excited to share that Azure Backup now allows you to monitor all workloads it protects by leveraging the power of Log Analytics (LA). This allows enterprises to monitor key backup parameters across Recovery Services vaults and subscriptions, irrespective of which Azure Backup solution you are using. In addition, you can configure custom alerts and actions for custom monitoring requirements for all Azure Backup workloads with this LA-based solution.

This solution now covers all workloads protected by Azure Backup, including Azure VMs, SQL in Azure VM backups, System Center Data Protection Manager connected to Azure (DPM-A), Microsoft Azure Backup Server (MABS), and file-folder backup from the Azure Backup agent.

Here’s how you get all the benefits.

Configure diagnostic settings

If you have already configured Log Analytics workspace to monitor Azure Backup, skip to the Deploy solution template section.

You can open the diagnostic settings window from the Azure Recovery Services vault or from Azure Monitor. In the Diagnostic settings window, select “Send data to Log Analytics,” choose the relevant LA workspace, select the “AzureBackupReport” log, and click “Save.”

Be sure to choose the same workspace for all the vaults so that you get a centralized view in the workspace. After completing the configuration, allow 24 hours for the initial data push to complete.
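
If you prefer to script the same configuration, the AzureRM cmdlets can do it. A minimal sketch, assuming an existing Recovery Services vault and Log Analytics workspace; the resource names below are placeholders:

# Look up the vault and the workspace (names and resource group are placeholders)
$vault     = Get-AzureRmRecoveryServicesVault -ResourceGroupName "Contoso-RG" -Name "ContosoVault"
$workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "Contoso-RG" -Name "ContosoLA"

# Send the AzureBackupReport log category from the vault to the workspace
Set-AzureRmDiagnosticSetting -ResourceId $vault.ID -WorkspaceId $workspace.ResourceId -Enabled $true -Categories "AzureBackupReport"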

Deploy solution template

Once the data is in the workspace, we need a set of graphs to visualize the monitoring data. Deploy the Azure quick-start template to the workspace configured above to get a default set of graphs, explained below. Make sure you give the same resource group, workspace name and workspace location to properly identify the workspace and then install this template on it.

If you are already using this template as outlined in a previous blog and have edited it, just add the relevant Kusto queries from the deployment JSON in GitHub. If you didn’t edit the template, re-deploy it onto the same workspace to view the updated version.

Once deployed, you will view an overview tile for Azure Backup in the workspace dashboard. Clicking on the overview tile will take you to the solution dashboard and provide you all the information shown below.

AzureMonitorTile

Monitor Azure Backup data

Monitor backups and restores

Monitor regular daily backups for all Azure Backup protected workloads. With this update, you can monitor even log backups for your SQL databases, whether they are running within Azure IaaS VMs or running on-premises and protected by DPM or MABS.

AllBackupJobs

RestoreJobs

Monitor all datasources

Monitor a spike or reduction in number of backed up datasources using the active datasources graph. The active datasources attribute is split across all Azure Backup types. The legend beside the pie graph shows the top three types. The list beneath the pie chart displays the top 10 active datasources. For example, datasources on which the greatest number of jobs were run in the specified time frame.

ActiveDatasources

Monitor Azure Backup alerts

Azure Backup generates alerts automatically when a backup and/or a restore job fails. You are now able to view all such alerts generated in a single place.

ActiveAlerts

However, be sure to select the relevant time range to monitor, such as the proper start and end dates.

SelectTime

Generate custom alerts

Whenever you click on any single row in the above graphs, it will lead to a more detailed view in the Log Search window and you can generate a custom alert for that scenario.

CustomAlert

To learn more, visit our documentation on how to configure alerts.
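
As an illustration, an alert for failed backup jobs could start from a query along these lines, run either in the Log Search window or from PowerShell. The AzureDiagnostics field names are assumptions based on the AzureBackupReport schema and may need adjusting for your workspace:

# Kusto query for backup jobs that failed in the last day (field names are assumptions)
$query = @"
AzureDiagnostics
| where Category == "AzureBackupReport"
| where OperationName == "Job" and JobOperation_s == "Backup" and JobStatus_s == "Failed"
| where TimeGenerated > ago(1d)
"@

# Run it against the workspace; CustomerId is the workspace GUID that the query API expects
$workspace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "Contoso-RG" -Name "ContosoLA"
Invoke-AzureRmOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query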

Summary

You can configure LA workspaces to receive key backup data across multiple Recovery Services vaults and subscriptions and deploy customizable solutions on workspaces to view and configure actions for business-critical events. This solution is key for any enterprise to keep a watchful eye over their backups and ensure that all actions are taken for successful backups and restores.

Related links and additional content

Microsoft removes device install limits for Office 365 subscribers

The content below is taken from the original ( Microsoft removes device install limits for Office 365 subscribers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft is removing limits on the number of devices on which some Office 365 subscribers can install the apps. From October 2nd, Home users will no longer be restricted to 10 devices across five users nor will Personal subscribers have a limit of o…

AzurePowerShell (6.8.1)

The content below is taken from the original ( AzurePowerShell (6.8.1)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure PowerShell provides a set of cmdlets that use the Azure Resource Manager model for managing your Azure resources.
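
A minimal sketch of installing this release from the PowerShell Gallery and confirming that the cmdlets work; the resource group listing at the end is just an example call:

# Install the 6.8.1 release of the AzureRM rollup module for the current user
Install-Module -Name AzureRM -RequiredVersion 6.8.1 -Scope CurrentUser

# Sign in, then list resource groups as a quick smoke test
Connect-AzureRmAccount
Get-AzureRmResourceGroup | Format-Table ResourceGroupName, Location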

Proving that Teams Retention Policies Work

The content below is taken from the original ( Proving that Teams Retention Policies Work), to continue reading please visit the site. Remember to respect the Author & Copyright.

Teams Splash

Teams Retention

In April 2018, Microsoft introduced support for Teams as a workload processed by Office 365 retention policies. The concept is simple. Teams captures compliance records for channel conversations and private chats in Exchange Online mailboxes, including messages from guest and hybrid users.

When you implement a Teams retention policy, the Exchange Managed Folder Assistant (MFA) processes the mailboxes to remove compliance records based on the policy criteria. The creation date of a compliance record in a mailbox is used to assess its age for retention purposes. Given that Office 365 creates compliance records very soon after users post messages in Teams, the creation date for a compliance record closely matches that of the original message in Teams.

A background synchronization process replicates the deletions to the Teams data service on Azure, and eventually the deletions show up in clients.

A Pragmatic Implementation

If you were to design a retention mechanism from scratch, you might not take the same approach. However, the implementation is pragmatic because it takes advantage of existing components, like MFA. The downside is that because so many moving parts exist, it’s hard to know if a retention policy is having the right effect.

Setting a Baseline

Before a retention policy runs against a mailbox, we need to understand how many Teams compliance items it contains. This command tells us how many compliance items exist in a mailbox (group or personal) and reveals details of the oldest and newest items in the “Team Chat” folder.

Get-MailboxFolderStatistics -Identity "HydraProjectTeam" -FolderScope ConversationHistory -IncludeOldestAndNewestItems | ? {$_.FolderType -eq "TeamChat"} | Format-Table Name, ItemsInFolder, NewestItemReceivedDate, OldestItemReceivedDate

Name      ItemsInFolder NewestItemReceivedDate OldestItemReceivedDate
----      ------------- ---------------------- ----------------------
Team Chat           227 2 Aug 2018 16:10:41    11 Mar 2017 15:41:34

MFA Processes a Mailbox

After you create a Teams retention policy, Office 365 publishes details of the policy to Exchange Online and the Managed Folder Assistant begins to process mailboxes against the policy. MFA operates on a workcycle basis, which means that it tries to process every mailbox in a tenant at least once weekly. Mailboxes with less than 10 MB of content are not processed by MFA because it’s unlikely that they need the benefit of a retention policy. This is not specific to Teams; it’s just the way that MFA works.
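
If you don't want to wait for the next workcycle while testing, you can ask MFA to process a mailbox straight away. A quick sketch from an Exchange Online PowerShell session, reusing the group mailbox from the example above:

# Force the Managed Folder Assistant to process this mailbox now instead of waiting for the workcycle
Start-ManagedFolderAssistant -Identity "HydraProjectTeam"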

When MFA processes a mailbox, it updates some mailbox properties with details of its work. We can check these details as follows:

$Log = Export-MailboxDiagnosticLogs -Identity HydraProjectTeam -ExtendedProperties
$xml = [xml]($Log.MailboxLog)
$xml.Properties.MailboxTable.Property | ? {$_.Name -like "ELC*"}

Name                                    Value
----                                    -----
ElcLastRunTotalProcessingTime           1090
ElcLastRunSubAssistantProcessingTime    485
ElcLastRunUpdatedFolderCount            33
ElcLastRunTaggedFolderCount             0
ElcLastRunUpdatedItemCount              0
ElcLastRunTaggedWithArchiveItemCount    0
ElcLastRunTaggedWithExpiryItemCount     0
ElcLastRunDeletedFromRootItemCount      0
ElcLastRunDeletedFromDumpsterItemCount  0
ElcLastRunArchivedFromRootItemCount     0
ElcLastRunArchivedFromDumpsterItemCount 0
ELCLastSuccessTimestamp                 02/08/2018 16:46:23
ElcFaiSaveStatus                        SaveSucceeded
ElcFaiDeleteStatus                      DeleteNotAttempted

Unfortunately, the MFA statistics don’t tell us how many compliance records it removed. If you run the same commands against a user mailbox, you’ll see the number of deleted items recorded in ElcLastRunDeletedFromRootItemCount. This doesn’t happen for Teams compliance records, perhaps because Exchange regards them as system items.

Compliance Items are Removed

Because MFA doesn’t tell us how many items it removes, we have to check the mailbox again. This time we see that the number of compliance records has shrunk from 227 to 3 and that the oldest item in the folder is from 20 July 2018. Given that users can’t access the Team Chat folder with clients like Outlook or OWA, the only way that items are removed is with a system process, so we can therefore conclude that MFA has done its work.

Get-MailboxFolderStatistics -Identity "HydraProjectTeam" -FolderScope ConversationHistory -IncludeOldestAndNewestItems | ? {$_.FolderType -eq "TeamChat"} | Format-Table Name, ItemsInFolder, NewestItemReceivedDate, OldestItemReceivedDate

Name      ItemsInFolder NewestItemReceivedDate OldestItemReceivedDate
----      ------------- ---------------------- ----------------------
Team Chat             3 2 Aug 2018 16:10:41    20 Jul 2018 08:17:09

Synchronization to Teams

Background processes replicate the deletions made by MFA to Teams. It’s hard to predict exactly when items will disappear from user view because two things must happen. First, the items are removed from the Teams data store in Azure. Second, clients synchronize the deletions to their local cache.

In tests that I ran, it took between four and five days for the cycle to complete. For example, in the test reported above, MFA ran on August 2 and clients picked up the deletions on August 7.

You might not think that such a time lag is acceptable. Microsoft agrees, and is working to improve the efficiency of replication from Exchange Online to Teams. However, for now, you should expect several days to lapse before the effect of a retention policy is seen by clients.

The Effect of Retention

Figure 1 shows a channel after a retention policy removed any item older than 30 days. What’s immediately obvious is that some items older than 30 days are still visible. One is an item (Oct 25, 2017) created by an RSS connector to post notifications about new blog posts in the channel. The second (March 3, 2018) is from a guest user. The other visible messages are system messages, which Teams does not capture for compliance purposes.

Teams retention policy

Figure 1: Channel conversations after a retention policy runs (image credit: Tony Redmond)

The reason why the RSS item is still shown in Figure 1 is that items created in Teams by Office 365 connectors were not processed for compliance purposes until recently. They are now, and the most recent run of MFA removed the connector items. It is possible that Office 365 might fail to ingest some older items, in which case they will linger in channels because compliance records don’t exist.

We also see an old message posted by a guest user. Teams only began capturing hybrid user messages in January 2018, with an intention to go back in time for earlier messages as resources allow. Teams uses the same mechanism to capture guest user messages, but obviously Microsoft hadn’t processed back this far when I ran these tests. Other messages posted by guest users are gone because compliance records existed for these messages.

It’s worth noting that compliance records for guest-to-guest 1×1 chats are not processed by MFA. This is because MFA cannot access the phantom mailboxes used by Exchange to capture compliance records for the personal chats of guest and hybrid users. Guest contributions to channel conversations are processed because these items are in group mailboxes.

Some Tracking Tools Would be Useful

The Security and Compliance Center will tell you that a retention policy for Teams is on and operational (Status = Success), but after that an information void exists as to how the policy operates, when teams are processed, how many items are removed from channels and personal chats, and so on. There are no records in the Office 365 audit log and nothing in the usage reports either. All you can do is keep an eye on the number of compliance records in mailboxes.
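
About the only other check available is to query the policy itself from Security and Compliance Center PowerShell. A sketch, assuming a connected session; the distribution detail shows whether the policy has been pushed out, not what it has removed:

# Confirm the Teams retention policy is enabled and has been distributed successfully
Get-RetentionCompliancePolicy -DistributionDetail | Format-Table Name, Enabled, DistributionStatus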

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Proving that Teams Retention Policies Work appeared first on Petri.

How to enable Chrome native notifications on Windows 10

The content below is taken from the original ( How to enable Chrome native notifications on Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Windows 10 notification system contains icons for easy access to system functions. Google Chrome has recently rolled out a new notification experience update in which it supports native Windows 10 notifications. The new notification experience prompts all the […]

This post How to enable Chrome native notifications on Windows 10 is from TheWindowsClub.com.

ESXi on Arm? Yes, ESXi on Arm. VMware teases bare-metal hypervisor for 64-bit Arm servers

The content below is taken from the original ( ESXi on Arm? Yes, ESXi on Arm. VMware teases bare-metal hypervisor for 64-bit Arm servers), to continue reading please visit the site. Remember to respect the Author & Copyright.

No, we’re not pulling your leg

Coming as soon as they can make it

VMworld US VMware today showed off a port of its bare-metal ESXi hypervisor for 64-bit Arm servers at its VMworld US shindig in Las Vegas.…

New – Over-the-Air (OTA) Updates for Amazon FreeRTOS

The content below is taken from the original ( New – Over-the-Air (OTA) Updates for Amazon FreeRTOS), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon FreeRTOS is an operating system for the microcontrollers that power connected devices such as appliances, fitness trackers, industrial sensors, smart utility meters, security systems, and the like. Designed for use in small, low-powered devices, Amazon FreeRTOS extends the FreeRTOS kernel with libraries for communication with cloud services such as AWS IoT Core and with more powerful edge devices that are running AWS Greengrass (to learn more, read Announcing Amazon FreeRTOS – Enabling Billions of Devices to Securely Benefit from the Cloud).

Unlike more powerful, general-purpose computers that include generous amounts of local memory and storage, and the ability to load and run code on demand, microcontrollers are often driven by firmware that is loaded at the factory and then updated with bug fixes and new features from time to time over the life of the device. While some devices are able to accept updates in the field and while they are running, others must be disconnected, removed from service, and updated manually. This can be disruptive, inconvenient, and expensive, not to mention time-consuming.

As usual, we want to provide a better solution for our customers!

Over-the-Air Updates
Today we are making Amazon FreeRTOS even more useful with the addition of an over-the-air update mechanism that can be used to deliver updates to devices in the field. Here are the most important properties of this new feature:

Security – Updates can be signed by an integrated code signer, streamed to the target device across a TLS-protected connection, and then verified on the target device in order to guard against corrupt, unauthorized, fraudulent updates.

Fault Tolerance – In order to guard against failed updates that can result in a useless, “bricked” device, the update process is resilient and keeps partial updates from taking effect, leaving the device in an operable state.

Scalability – Device fleets often contain thousands or millions of devices, and can be divided into groups for updating purposes, powered by AWS IoT Device Management.

Frugality – Microcontrollers have limited amounts of RAM (often 128KB or so) and compute power. Amazon FreeRTOS makes the most of these scarce resources by using a single TLS connection for updates and other AWS IoT Core communication, and by using the lightweight MQTT protocol.

Each device must include the OTA Updates Library. This library contains an agent that listens for update jobs and supervises the update process.

OTA in Action
I don’t happen to have a fleet of devices deployed, so I’ll have to limit this post to the highlights and direct you to the OTA Tutorial for more info.

Each update takes the form of an AWS IoT job. A job specifies a list of target devices (things and/or thing groups) and references a job document that describes the operations to be performed on each target. The job document, in turn, points to the code or data to be deployed for the update, and specifies the desired code signing option. Code signing ensures that the deployed content is genuine; you can sign the content yourself ahead of time or request that it be done as part of the job.

Jobs can be run once (a snapshot job), or whenever a change is detected in a target (a continuous job). Continuous jobs can be used to onboard or upgrade new devices as they are added to a thing group.

After the job has been created, AWS IoT will publish an OTA job message via MQTT. The OTA Updates library will download the signed content in streaming fashion, supervise the update, and report status back to AWS IoT.

You can create and manage jobs from the AWS IoT Console, and can also build your own tools using the CLI and the API. I open the Console and click Create a job to get started:

Then I click Create OTA update job:

I select and sign my firmware image:

From there I would select my things or thing groups, initiate the job, and monitor the status:

Again, to learn more, check out the tutorial.

This new feature is available now and you can start using it today.

Jeff;

Power Over Ethernet Splitter Improves Negotiating Skills

The content below is taken from the original ( Power Over Ethernet Splitter Improves Negotiating Skills), to continue reading please visit the site. Remember to respect the Author & Copyright.

Implementing PoE is made interesting by the fact that not every Ethernet device wants power; if you start dumping power onto any device that’s connected, you’re going to break things. The IEEE 802.3af standard states that the device which can source power should detect the presence of the device receiving power, before negotiating the power level. Only once this process is complete can the power sourcing device give its full supply. Of course, this requires the burden of smarts, meaning that there are many cheap devices available which simply send power regardless of what’s plugged in (passive PoE).

[Jason Gin] has taken an old, cheap passive PoE splitter and upgraded it to be 802.3af compatible (an active device). The splitter was designed to be paired with a passive injector and therefore did not work with Jason’s active 802.3at infrastructure.

The brain of the upgrade is a TI TPS2378 Powered Device controller, which does the power negotiation. It sits on one of two new boards, with a rudimentary heatsink provided by some solar cell tab wire. The second board comprises the power interface, and consists of dual Schottky bridges as well as a 58-volt TVS diode to deal with any voltage spikes due to cable inductance. The Ethernet transformer shown in the diagram above was salvaged from a dead MacBook and, after some enamel scraping and fiddly soldering, it was fit for purpose. For a deeper dive on Ethernet transformers and their hacked capabilities, [Jenny List] wrote a piece specifically focusing on Raspberry Pi hardware.

[Jason]’s modifications were able to fit in the original box, and the device successfully integrated with his 802.3at setup. We love [Jason]’s work and have previously written about his eMMC adventures, repairing Windows tablets, and explaining the intricacies of SD card interfacing.