
Counting Bees With A Raspberry Pi

The content below is taken from the original ( Counting Bees With A Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

Even if keeping bees sounds about as wise to you as keeping velociraptors (we all know how that movie went), we have to acknowledge that they are a worthwhile thing to have around. We don’t personally want them around us of course, but we respect those who are willing to keep a hive on their property for the good of the environment. But as it turns out, there are more challenges to keeping bees than not getting stung: you’ve got to keep track of the things too.

Keeping an accurate record of how many bees are coming and going, and when, is a rather tricky problem. Apparently bees don’t like electromagnetic fields, and will flee if they detect them. So putting electronic measuring devices inside of the hive can be an issue. [Mat Kelcey] decided to try counting his bees with computer vision, and so far the results are very promising.

After some training, a Raspberry Pi with a camera can count how many bees are in a given image to within a few percent of the actual number. Getting an accurate count of his bees allows [Mat] to generate fascinating visualizations about his hive’s activity and health. With real-world threats such as colony collapse disorder, this type of hard data can be crucial.

This is a perfect example of a hack which might not pertain to many of us as-is, but still contains a wealth of information which could be applicable to other projects. [Mat] goes into a fantastic amount of detail about the different approaches he tried, what worked, what didn’t, and where he goes from here. So far the only problem he’s having is with the Raspberry Pi: it’s only able to run at one frame per second due to the computational requirements of identifying the bees. But he’s got some ideas to improve the situation.

As it so happens, we’ve covered a few other methods of counting bees in the past, though this is the first one to be entirely vision based. Interestingly, this method is similar to the project to track squirrels in the garden. Albeit without the automatic gun turret part.


Receiving and handling HTTP requests anywhere with the Azure Relay

The content below is taken from the original ( Receiving and handling HTTP requests anywhere with the Azure Relay), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you followed Microsoft’s coverage from the Build 2018 conference, you may have been as excited as we were about the new Visual Studio Live Share feature that allows instant, remote, peer-to-peer collaboration between Visual Studio users, no matter where they are. One developer could be sitting in a coffee shop and another on a plane with in-flight WiFi, and yet both can collaborate directly on code.

The "networking magic" that enables the Visual Studio team to offer this feature is the Azure Relay, which is a part of the messaging services family along with Azure Service Bus, Azure Event Hubs, and Azure Event Grid. The Relay is, indeed, the oldest of all Azure services, with the earliest public incubation having started exactly 12 years ago today, and it was amongst the handful of original services that launched with the Azure platform in January 2010.

In the meantime, the Relay has learned to speak a fully documented open protocol that can work with any WebSocket client stack, and allows any such client to become a listener for inbound connections from other clients, without needing inbound firewall rules, public IP addresses, or DNS registrations. Since all inbound communication terminates inside the application layer instead of far down at the network link level, the Relay is also an all around more secure solution for reaching into individual apps than using virtual private network (VPN) technology.

In time for the Build 2018 conference, the Relay learned a brand-new trick that you may have missed learning about amidst the torrent of other Azure news, so we’re telling you about it again today: The Relay now also supports relayed HTTP requests.

This feature is very interesting for applications or application components that run inside of containers and where it’s difficult to provide a public endpoint, and is especially well-suited for implementing Webhooks that can be integrated with Azure Event Grid.

The Relay is commonly used in scenarios where applications or devices must be reached behind firewalls. Typical application scenarios include the integration of cloud-based SaaS applications with points of sale or service (shops, coffee bars, restaurants, tanning salons, gyms, repair shops) or with professional services offices (tax advisors, law offices, medical clinics). Furthermore, the Relay is increasingly popular with corporate IT departments who use relay based communication paths instead of complex VPN setups. We will have more news regarding such scenarios in the near future.

The new HTTP support lets you create and host a publicly reachable HTTP(S) listener anywhere, even on your phone or any other device, and leave it to the Azure Relay service to provide a resolvable DNS name, a TLS server certificate, and a publicly accessible IP address. All your application needs is outbound WebSocket connectivity to the Azure Relay over the common HTTPS port 443.

For illustration, let’s take a brief look at a simple Node.js example. First, here’s a minimal local HTTP listener built with Node.js “out of the box”:

var http = require('http');
var port = process.env.PORT || 1337;

http.createServer(function (req, res) {
     res.writeHead(200, { 'Content-Type': 'text/plain' });
     res.end('Hello World\n');
}).listen(port);

This is the equivalent Node.js application using the Azure Relay:

var http = require('hyco-https');

var uri = http.createRelayListenUri("cvbuild.servicebus.windows.net", "app");
var server = http.createRelayedServer({
        server: uri,
        token: () => http.createRelayToken(uri, "listen", "{…key…}")
    },
    function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello World\n');
    }).listen();

The key changes are that the Relay application uses the ‘hyco-https’ module instead of Node.js’ built-in ‘http’ module, and that the server is created using the ‘createRelayedServer’ method supplying the endpoint and security token information for connecting to the Relay. Most important: The Node.js HTTP handler code is completely identical.
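
Although the post stops at the listener, it may help to see the other side of the exchange. Below is a minimal client-side sketch (not from the original article) that calls the relayed listener with nothing but Node.js’ built-in ‘https’ module; it assumes the same "cvbuild" namespace and "app" hybrid connection shown above, and that the hybrid connection allows anonymous sends (otherwise the request would also need to carry a "Send" SAS token).

var https = require('https');

// Hypothetical caller: the Relay supplies the DNS name, TLS certificate, and
// public IP address, so any standard HTTPS client can reach the listener.
https.get('https://cvbuild.servicebus.windows.net/app', function (res) {
    console.log('status:', res.statusCode);
    res.setEncoding('utf8');
    res.on('data', function (chunk) {
        process.stdout.write(chunk); // expect "Hello World"
    });
});

If that assumption holds, callers need no Relay-specific library at all; only the listener does.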

For .NET Standard, we have extended the existing Relay API in the latest preview of the Microsoft.Azure.Relay NuGet package to also support handling HTTP requests. You create a HybridConnectionListener as you do for Websocket connections, and then just add a RequestHandler callback.

var listener = new HybridConnectionListener(uri, tokenProvider);
listener.RequestHandler = (context) =>
{
     context.Response.StatusCode = HttpStatusCode.OK;
     context.Response.StatusDescription = "OK";
     using (var sw = new StreamWriter(context.Response.OutputStream))
     {
         sw.WriteLine("hello!");
     }
     context.Response.Close();
};

If you want existing ASP.NET Core services to listen for requests on the Relay, use the new Microsoft.Azure.Relay.AspNetCore NuGet package, which was just released. It allows hosting existing ASP.NET Core apps behind the Relay by adding the "UseAzureRelay()" extension to the web host builder and configuring the connection string of a Hybrid Connection shared access rule (see readme, more samples).

    public static IWebHost BuildWebHost(string[] args) =>
             WebHost.CreateDefaultBuilder(args)
                 .UseStartup<Startup>()
                 .UseAzureRelay(options =>
                 {
                     options.UrlPrefixes.Add(connectionString);
                 })
                 .Build();

The HTTP feature is now in preview in production, meaning you can use it alongside the existing Relay features, complete with support and SLA. Because it’s in preview and not yet entirely in its final shape, we might still make substantial changes to the HTTP-related wire protocol.

Since the Relay isn’t a regular reverse proxy, there are some lower-level HTTP details that the Relay overrides and that we will eventually try to align. One known issue of that sort is that the Relay always transforms HTTP responses to use chunked transfer encoding; while that might be fairly annoying for protocol purists, it doesn’t have a material impact on most application-level use cases.

To get started, check out the tutorials for C# or Node.js and then let us know via the feedback option below the tutorials whether you like the new HTTP feature.


UK puts legal limits on drone flight heights and airport no-fly zones

The content below is taken from the original ( UK puts legal limits on drone flight heights and airport no-fly zones), to continue reading please visit the site. Remember to respect the Author & Copyright.

The UK has announced new stop-gap laws for drone operators restricting how high they can fly their craft — 400ft — and prohibiting the devices from being flown within 1km of an airport boundary. The measures will come into effect on July 30.

The government says the new rules are intended to enhance safety, including the safety of passengers of aircraft — given a year-on-year increase in reports of drone incidents involving aircraft. It says there were 93 such incidents reported in the country last year, up from 71 the year before.

And while the UK’s existing Drone Code (which was issued in 2016) already warns operators to restrict drone flights to 400ft — and to stay “well away” from airports and aircraft — those measures are now being baked into law, via an amendment to the 2016 Air Navigation Order (ahead of a full drone bill which was promised for Spring but has yet to materialize).

UK drone users who flout the new height and airport boundary restrictions face being charged with recklessly or negligently acting in a manner likely to endanger an aircraft or any person in an aircraft — which carries a penalty of up to five years in prison or an unlimited fine, or both.

Additional measures are also being legislated for, as announced last summer — with a requirement for owners of drones weighing 250 grams or more to register with the Civil Aviation Authority and for drone pilots to take an online safety test.

Users who fail to register or sit the competency tests could face fines of up to £1,000. Though those requirements will come into force later, on November 30 2019.

Commenting in a statement, aviation minister Baroness Sugg said: “We are seeing fast growth in the numbers of drones being used, both commercially and for fun. Whilst we want this industry to innovate and grow, we need to protect planes, helicopters and their passengers from the increasing numbers of drones in our skies. These new laws will help ensure drones are used safely and responsibly.”

In a supporting statement, Chris Woodroofe, Gatwick Airport’s COO, added: “We welcome the clarity that today’s announcement provides as it leaves no doubt that anyone flying a drone must stay well away from aircraft, airports and airfields. Drones open up some exciting possibilities but must be used responsibly. These clear regulations, combined with new surveillance technology, will help the police apprehend and prosecute anyone endangering the traveling public.”

Drone maker DJI also welcomed what it couched as a measured approach to regulation. “The Department for Transport’s updates to the regulatory framework strike a sensible balance between protecting public safety and bringing the benefits of drone technology to British businesses and the public at large,” said Christian Struwe, head of public policy Europe at DJI.

“The vast majority of drone pilots fly safely and responsibly, and governments, aviation authorities and drone manufacturers agree we need to work together to ensure all drone pilots know basic safety rules. We are therefore particularly pleased about the Department for Transport’s commitment to accessible online testing as a way of helping drone users to comply with the law.”

Last fall the UK government also announced it plans to legislate to give police more powers to ground drones to prevent unsafe or criminal usage — measures it also said it would include in the forthcoming drone bill.


Use a To-Do App to Take Notes

The content below is taken from the original ( Use a To-Do App to Take Notes), to continue reading please visit the site. Remember to respect the Author & Copyright.

When out and about, I used to put all my ideas into a pocket notebook. Then I switched to emailing myself from my phone. Then I tried the Notes app. Now I put them in Wunderlist, a to-do app. It’s not my favorite to-do app—Microsoft even released another app to replace it—because I use my favorite to-do app for my…

Read more…


ASUS’ latest crypto-mining motherboard can handle 20 GPUs

The content below is taken from the original ( ASUS’ latest crypto-mining motherboard can handle 20 GPUs), to continue reading please visit the site. Remember to respect the Author & Copyright.

ASUS is moving further into the cryptocurrency hardware market with a motherboard that can support up to 20 graphics cards, which are typically used for mining. The H370 Mining Master uses PCIe-over-USB ports for what ASUS says is sturdier, simpler c…


Skydio’s self-flying drone can now track down cars

The content below is taken from the original ( Skydio’s self-flying drone can now track down cars), to continue reading please visit the site. Remember to respect the Author & Copyright.

Skydio‘s first major update to their crazy cool self-flying drone fixes its 13 eyes on a new object to follow at high speeds: cars.

The Bay Area startup has expanded the following capabilities of its R1 drone beyond just humans, with cars now firmly within its sights. You’ll still be limited by the device’s 25 mph top speed, so this won’t be shooting any NASCAR races, but the self-flying drone will be able to track and follow vehicles as they move through challenging terrain that previously would have been impossible to film without a skilled drone pilot.

Just don’t send this thing following after a self-driving car — unless you want the two to probably run away together and come back with a vengeance at a later date.

In our review of the R1 drone, we were struck by the strength of its core tech and excited by the promise offered by future software updates. Well, less than two months later, new functionality is already coming to the device with this big new update.

“With Skydio R1, cinematography becomes a software defined experience,” Skydio CEO Adam Bry said in a statement. “That means we can regularly introduce fundamentally new capabilities over time for all existing and future users.”

In addition to the new car mode, Skydio has also updated its Lead mode which aims to plot a user’s path before they take it and shoot footage accordingly. The company says that the new update will bring “more intelligent behavior” when it comes to navigating obstacles. New “quarter lead” and “quarter follow” modes also shift the perspective from only allowing straight-on or profile shots.

The Skydio R1 Frontier Edition goes for a decently pricey $2,499 and the new update goes live today.


New capabilities to enable robust GDPR compliance

The content below is taken from the original ( New capabilities to enable robust GDPR compliance), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today marks the beginning of enforcement of the EU General Data Protection Regulation (GDPR), and I’m pleased to announce that we have released an unmatched array of new features and resources to help support compliance with the GDPR and the policy needs of Azure customers.

New offerings include the general availability of the Azure GDPR Data Subject Request (DSR) portal, Azure Policy, Compliance Manager for GDPR, Data Log Export, and the Azure Security and Compliance Blueprint for GDPR.

In our webcast today, President Brad Smith outlined our commitment to making sure that our products and services comply with the GDPR, including having more than 1,600 engineers across the company working on GDPR projects. As Brad noted, we believe privacy is a fundamental human right, and that individuals must be in control of their data. So I am pleased that Azure is part of keeping that commitment by being the only hyperscale cloud provider to offer the level of streamlined mechanisms and tools for GDPR compliance enforcement we are announcing today.

Azure Data Subject Request (DSR) portal enables you to fulfill GDPR requests. The DSR capability is generally available today through the Azure portal user interface, as well as through pre-existing application programming interfaces (APIs) and user interfaces (UIs) across the breadth of our online services. These capabilities allow customers to respond to requests to access, rectify, delete, and export personal data in the cloud. In addition, Azure enables customers to access system-generated logs as a part of Azure services.

Azure Policy enables you to set policies to conform to the GDPR. Azure Policy is generally available today at no additional cost to Azure customers. You can use Azure Policy to define and enforce policies that help your cloud environment become compliant with internal policies as well as external regulations.

Azure Policy is deeply integrated into Azure Resource Manager and applies across all resources in Azure. Individual policies can be grouped into initiatives to quickly implement multiple rules. You can also use Azure Policy in a wide range of compliance scenarios, such as ensuring that your data is encrypted or remains in a specific region as part of GDPR compliance. Microsoft is the only hyperscale cloud provider to offer this level of policy integration built in to the platform for no additional charge.

Extend Azure Policies for the GDPR into Azure Security Center. Azure Security Center provides unified security management and advanced threat protection to help meet GDPR security requirements. With Azure Policy integrated into Security Center, you can apply security policies across your workloads, enable encryption, limit your exposure to threats, and respond to attacks.

The Azure Security and Compliance GDPR Blueprint accelerates your GDPR deployment. This new Azure Security and Compliance Blueprint will help you build and launch cloud-powered applications that meet GDPR requirements. It includes common reference architectures, deployment guidance, GDPR article implementation mappings, customer responsibility matrices, and threat models that enable you to quickly and securely implement cloud solutions.

Compliance Manager for Azure helps you assess and manage GDPR compliance. Compliance Manager is a free, Microsoft cloud services solution designed to help organizations meet complex compliance obligations, including the GDPR, ISO 27001, ISO 27018, and NIST 800-53. Generally available today for Azure customers, the Compliance Manager GDPR dashboard enables you to assign, track, and record your GDPR compliance activities so you can collaborate across teams and manage your documents for creating audit reports more easily. Azure is the only hyperscale cloud provider with this functionality.

Azure GDPR support and guidance help you stay compliant. Our GDPR sites on the Service Trust Portal and the Trust Center provide you with current information about Microsoft services that support the requirements of the GDPR. These include detailed guidance on conducting Data Protection Impact Assessments in Azure, fulfilling DSRs in Azure, and managing Data Breach Notification in Azure for you to incorporate into your own GDPR accountability program.

Global Regions help you meet your data residency requirements. Azure has more global regions than any other cloud provider, offering the scale you need to bring applications closer to people around the world, preserve data residency, and give customers the confidence that their data is under their control.

Microsoft has a long-standing commitment to privacy and was the first cloud provider to achieve certification for the EU Model Clauses and ISO/IEC 27018, and was the first to contractually commit to the requirements of the GDPR. Azure offers 11 privacy-focused compliance offerings, more than any other cloud provider. We are proud to be the first to offer customers this level of GDPR functionality.

Through the GDPR, Azure has strengthened its commitment to be first among cloud providers in providing a trusted, private, secure, and compliant private cloud. We are continuing to build and release new features, tools, and supporting materials for our customers to comply with the GDPR and other important standards and regulations. We are proud to release these new capabilities and invite you to learn more in the Azure portal today.


Handling GDPR Right to Erasure Requests for Office 365

The content below is taken from the original ( Handling GDPR Right to Erasure Requests for Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

Happy GDPR Day

GDPR Becomes Reality

The European Union General Data Protection Regulation (GDPR) comes into force today and we move from preparation to reality. Maybe now the flood of email asking for consent to remain on mailing lists will abate and we won’t see quite so many people trying to make hay from GDPR FUD. It’s not quite as bad as when The Irish Times reported that “an army of advisors, some of them chancers, have fanned out in recent months to make GDPR the most profitable cash cow/scare story since the millennium bug,” but it has come close.

In any case, organizations must now cope with the requirements set down in GDPR, which means that practical interpretations of what needs to be done with IT systems are the order of the day. Lots of preparatory work has no doubt been done; now it’s game time.

Two practical issues that Office 365 tenants might be asked to deal with soon are Data Subject Requests and Data Erasure Requests, defined under Articles 15 and 17 respectively. Office 365 has an off-the-shelf (partial) answer for one; how to handle the other is not as obvious.

Data Subject Requests

The release of support for GDPR Data Subject Request (DSR) cases in the Security and Compliance Center is a welcome step to help Office 365 tenants cope with the new regulations. However, discovering what personal information exists in Exchange, SharePoint, OneDrive, and Teams in response to a request to know what a data controller (an Office 365 tenant) holds about a data subject (a person) is only the start of the journey.

A DSR is a modified form of a standard Office 365 eDiscovery case. The search results returned by the DSR criteria are deliberately broad to uncover everything a tenant holds about someone. For example, searching by someone’s name will find information, but that doesn’t mean that the search results are relevant to the data subject, especially if their name is common. The information found in a scan of Office 365 probably includes many messages and files that don’t match the request, which means that some careful checking is necessary before anything is handed over.

Right to Erasure

The natural progression from searching to respond to an article 15 right of access request is when a data subject exercises their article 17 right to erasure. In other words, someone asks an organization to remove any personal information held about them without undue delay.

GDPR sets out several grounds to justify removal, including that personal data are no longer necessary for the purpose they were collected, the data subject withdrawing consent, or the data subject objects to how the controller processes their data.

For example, an ex-employee might ask their employer to remove all personal information held about them. This includes information like their personal email address and phone number, their national identification number, passport number, and other items of personal data that are not ordinarily in the public domain.

However, the data controller can argue that some information must be retained to comply with a legal obligation (article 17-3b) or to assist with legal claims (article 17-3e). For instance, an employer might need to keep tax records for an ex-employee for several years to comply with national tax regulations.

Deciding what personal data should be removed in response to right to erasure requests is an imprecise science at present. We probably need some guidance from the courts to establish exact boundaries for the data that must be removed and that which can be kept.

Office 365 and Personal Data

Office 365 is only one of the repositories where personal data lives within an organization, but given the pervasive nature of email for communications, and Word and Excel for documenting and organizing HR data, it’s likely that a lot of personal data exists within mailboxes and sites. Any erasure request that arrives at an organization using Office 365 therefore means that searches are needed across the various workloads where that data might be stored.

Any personal data of interest in Teams conversations should be picked up in the compliance records captured for Teams in user and group mailboxes.

Office 365 DSRs are Starting Points for Erasure

Office 365 DSRs give a good start to solving the erasure dilemma because the output from searches shows where personal data for the data subject might exist. Yammer is the outlier here because Yammer content is not scanned by Office 365 content searches, so searches and exports of Yammer data must be processed separately. On the upside, given how Yammer is generally used, it’s unlikely that much personal data exists in Yammer groups.

When you export the results of content searches, Office 365 generates manifests to show where the exported data originates. As noted above, it’s a mistake to assume that everything uncovered by a DSR case is relevant to a data subject, and manual checking is absolutely needed before any deletions occur. The export manifests are invaluable here because they tell those responsible for processing the request for erasure where to look.

The Need for Checking

Unfortunately, checking search results is a manual process. Before you delete messages or documents, you need to be sure that those items are relevant to the data subject and do not need to be kept for justifiable business reasons. For example, a check of a document might look for instances of the data subject’s name together with other indications that the document should be removed, such as it includes the data subject’s Social Security Number or passport number.

For this reason, the content searches used to find matches should use precise identifiers whenever possible. A DSR case can include several content searches, so you can have one based on the data subject’s name and email address, and another for matches against their passport number, employee number, home address, or a similarly unique identifier. You can export the combined results of all searches in a single operation.

Redaction

In many cases, the requirement for erasure can be satisfied through redaction, or editing to erase the data subject’s details from documents, spreadsheets, and other files. You cannot edit the body of an email, so these probably need to be removed. One complication that exists here is that some content might be protected by Azure Information Protection rights management. In this instance, protected files must be decrypted by an IRM super-user before they can be redacted.

Document processing is complicated by the fact that SharePoint stores multiple versions of a file, meaning that although you might redact the text relating to a data subject in the current version of a document, other versions still exist that might include the information. To get around the problem, you can save a copy of the document, remove the original document, and make the change to the copy before uploading it (as version 1) to SharePoint.

Inactive Mailboxes

Information in inactive mailboxes is indexed and discoverable, so content searches will pick up any references that exist in these mailboxes. To remove items, you’ll have to restore or recover the inactive mailboxes before you can access the content with clients like Outlook or OWA.

Preservation Locks

Some items cannot be deleted from Office 365 because they are subject to a preservation lock, a special form of retention policy designed to keep information for a predetermined period that cannot be interfered with. Office 365 will keep these items until the lock expires.

No Automatic Erasure

The bottom line is that responding to a request for erasure of Office 365 data under GDPR article 17 is unlikely to be an automatic or inexpensive process. Some simple cases might be processed by doing a search and then using something like the Search-Mailbox cmdlet to permanently remove items from mailboxes. However, the increasingly integrated nature of Office 365 means that those responsible for handling these cases can expect to do a lot of manual work to be sure that the organization responds as GDPR expects.

More Help in the Future?

We don’t know yet whether Microsoft will develop DSRs further to include processing to handle requests for erasure, or the article 18 right of restriction of processing, where a data subject contests the accuracy of their personal data held in a system like Office 365. In all cases, as noted above, depending on automatic processing without checking is not a good idea because the chance that you’ll erase something important is high. Maybe this is a case when artificial intelligence can help. Time will tell.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Handling GDPR Right to Erasure Requests for Office 365 appeared first on Petri.


Revolut adds Ripple and Bitcoin Cash support

The content below is taken from the original ( Revolut adds Ripple and Bitcoin Cash support), to continue reading please visit the site. Remember to respect the Author & Copyright.

Fintech startup Revolut is adding Bitcoin Cash and Ripple to its cryptocurrency feature. While cryptocurrency isn’t really Revolut’s focus point, it’s a good way to get started with cryptocurrencies.

If you have a Revolut account, you can now buy and hold Bitcoin, Litecoin, Ethereum, Ripple and Bitcoin Cash. Behind the scene, the startup has partnered with Bitstamp to process the transactions. Revolut currently charges a 1.5 percent fee for cryptocurrency transactions. There are currently 100,000 cryptocurrency transactions per day.

Compared to a traditional cryptocurrency exchange, you can’t send or receive cryptocurrencies from your Revolut account. You don’t get a bitcoin address for instance. All you can do is buy tokens in the app. If you want to transfer those tokens somewhere else, you’ll have to sell them for USD, GBP, etc. and then buy cryptocurrencies on a traditional exchange using your fiat money.

Recently, the startup also announced a new feature called Vaults. Revolut users can set up a vault to save money over time.

You can round up your spare change every time you make a transaction. For instance, if you pay $3.47 for that delicious ice cream, you’ll save 53 cents in your vault. You can also multiply that amount so that you save several times your spare change with each transaction. Many fintech startups also provide this feature.
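
As a rough illustration (not from the original article), the round-up arithmetic works out like this, with the multiplier being the optional setting described above:

// Hypothetical sketch of the Vaults round-up rule: spare change is the gap to
// the next whole unit, optionally multiplied before it goes into the vault.
function roundUpSavings(amount, multiplier) {
    var spareChange = Math.round((Math.ceil(amount) - amount) * 100) / 100;
    return spareChange * multiplier;
}

console.log(roundUpSavings(3.47, 1)); // 0.53 saved on the $3.47 ice cream
console.log(roundUpSavings(3.47, 2)); // 1.06 with a 2x multiplier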

You can also set up recurring payments to set aside a bit of money each day, each week or each month. Interestingly, you get to choose the currency of your vault. So it means that you can decide to buy ethers with spare change and weekly payments for instance. It’s a great way to hedge against the volatility of cryptocurrencies.

Users don’t earn interest on vaults. It’s just a way to set some money aside that doesn’t appear in your main Revolut account. You can decide to close your vault whenever you want.


See What Everyone Was Tweeting Ten Years Ago

The content below is taken from the original ( See What Everyone Was Tweeting Ten Years Ago), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you’re tired of the hell dimension that is present-day Twitter, internet renaissance man Andy Baio has the link for you: here’s what your Twitter feed would look like ten years ago today (if you followed all the people you follow now). Of course, you can only see tweets from people who were already on Twitter in…

Read more…


Sony shrinks its Digital Paper tablet down to a more manageable 10 inches

The content below is taken from the original ( Sony shrinks its Digital Paper tablet down to a more manageable 10 inches), to continue reading please visit the site. Remember to respect the Author & Copyright.

I had a great time last year with Sony’s catchily-named DPT-RP1, an e-paper tablet that’s perfect for reading PDFs and other big documents, but one of my main issues was simply how big the thing is. Light and thin but 13 inches across, the tablet was just unwieldy. Heeding (I assume) my advice, Sony is putting out a smaller version and I can’t wait to try it out.

At the time, I was comparing the RP1 with the reMarkable, a crowdfunded rival that offers fantastic writing ability but isn’t without its flaws.

The 10-inch DPT-CP1 has a couple small differences from its larger sibling. The screen has a slightly lower resolution but should be the same PPI — it’s more of a cutout of the original screen than a miniaturization. And it’s considerably lighter: 240 grams to the 13-inch version’s 350. Considering the latter already felt almost alarmingly light, this one probably feels like it’ll float out of your hands and enter orbit.

More important are the software changes. There’s a new mobile app for iOS and Android that should make loading and sharing documents easier. A new screen-sharing mode sounds handy but a little cumbrous — you have to plug it into a PC and then plug the PC into a display. And PDF handling has been improved so that you can jump to pages, zoom and pan, and scan through thumbnails more easily. Limited interaction (think checkboxes) is also possible.

There’s nothing that addresses my main issue with both the RP1 and the reMarkable: that it’s a pain to do anything substantial on the devices, such as edit or highlight in a document, and if you do, it’s a pain to bring that work into other environments.

So for now it looks like the Digital Paper series will remain mostly focused on consuming content rather than creating or modifying it. That’s fine — I loved reading stuff on the device, and mainly just wished it were a bit smaller. Now that Sony has granted that wish, it can get to work on the rest.


IBM built a handheld counterfeit goods detector

The content below is taken from the original ( IBM built a handheld counterfeit goods detector), to continue reading please visit the site. Remember to respect the Author & Copyright.

Just a month after IBM announced it's leveraging the blockchain to guarantee the provenance of diamonds, the company has revealed new AI-based technology that aims to tackle the issue of counterfeiting — a problem that costs $1.2 trillion globally….


This Computer Is As Quiet As The Mouse

The content below is taken from the original ( This Computer Is As Quiet As The Mouse), to continue reading please visit the site. Remember to respect the Author & Copyright.

[Tim aka tp69] built a completely silent desktop computer. It can’t be heard – at all. The average desktop will have several fans whirring inside – cooling the CPU, GPU, SMPS, and probably one more for enclosure circulation – all of which end up making quite a racket, decibel wise. Liquid cooling might help make it quieter, but the pump would still be a source of noise. To completely eliminate noise, you have to get rid of all the rotating / moving parts and use passive cooling.

[Tim]’s computer is built from standard, off-the-shelf parts but what’s interesting for us is the detailed build log. Knowing what goes inside such a build, the decisions required while choosing the parts and the various gotchas that you need to be aware of, all make it an engaging read.

It all starts with a cubic aluminum chassis designed to hold a mini-ITX motherboard. The top and side walls are essentially huge extruded heat sinks designed to efficiently carry heat away from inside the case. The heat is extracted and channeled away to the side panels via heat sinks embedded with sealed copper tubing filled with coolant fluid. Every part, from the motherboard onwards, needs to be selected to fit within the mechanical and thermal constraints of the enclosure. Using an upgrade kit available as an enclosure accessory allows [Tim] to use CPUs rated for a power dissipation of almost 100 W. This not only lets him narrow down his choice of motherboards, but also provides enough overhead for future upgrades. The GPU gets a similar heat extractor kit in exchange for the fan cooling assembly. A fanless power supply, selected for its power capacity as well as high-efficiency even under low loads, keeps the computer humming quietly, figuratively.

Once the computer was up and running, he spent some time analysing the thermal profile of his system to check if it was really worth all the effort. The numbers and charts look very promising. At 100% load, the AMD Ryzen 5 1600 CPU levelled off at 60 ºC (40 ºC above ambient) without any performance effect. And the outer enclosure temperature was 42 ºC — warm, but not dangerous. Of course, performance hinges around “ambient temperature”, so you have to start getting careful when that goes up.

Getting such silence comes at a price – some may consider it quite steep. [Tim] spent about A$3000 building this whole system, thanks in part to high GPU prices driven by demand from bitcoin mining. But cost is a relative measure. He’s spent less on this system compared to several of his earlier projects and it lets him enjoy the sounds of nature instead of whiny cooling fans. Some would suggest a pair of ear buds would have been a super cheap solution, but he wanted a quiet computer, not something to cancel out every other sound in his surroundings.


22 essential security commands for Linux

The content below is taken from the original ( 22 essential security commands for Linux), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are many aspects to security on Linux systems – from setting up accounts to ensuring that legitimate users have no more privilege than they need to do their jobs. This is a look at some of the most essential security commands for day-to-day work on Linux systems.

To read this article in full, please click here



The Kata Containers project launches version 1.0 of its lightweight VMs for containers

The content below is taken from the original ( The Kata Containers project launches version 1.0 of its lightweight VMs for containers), to continue reading please visit the site. Remember to respect the Author & Copyright.

The Kata Containers project, the first non-OpenStack project hosted by the OpenStack Foundation, today launched version 1.0 of its system for running isolated container workloads. The idea behind Kata Containers, which is the result of the merger of two similar projects previously run by Intel and Hyper, is to offer developers a container-like experience with the same security and isolation features of a more traditional virtual machine.

To do this, Kata Containers implements a very lightweight virtual machine (VM) for every container. That means every container gets the same kind of hardware isolation that you would expect from a VM, but without the large overhead. But even though Kata Containers don’t fit the standard definition of a software container, they are still compatible with the Open Container Initiative specs and the container runtime interface of Kubernetes. While it’s hosted by the OpenStack Foundation, Kata Containers is meant to be platform- and architecture-agnostic.

Intel, Canonical and Red Hat have announced they are putting some financial support behind the project, and a large number of cloud vendors have announced additional support, too, including 99cloud, Google, Huawei, Mirantis, NetApp and SUSE.

With this version 1.0 release, the Kata community is signaling that the merger of the Intel and Hyper technology is complete and that the software is ready for production use.


Review: 55 BBC Micro Books on CD ROM

The content below is taken from the original ( Review: 55 BBC Micro Books on CD ROM), to continue reading please visit the site. Remember to respect the Author & Copyright.

Introduction Christopher Dewhurst, the Drag ‘n Drop Publications editor, has released an excellent compilation of 55 BBC Micro Books, all together on one CD ROM. For any RISC OS user who wants to have a go at BASIC programming these are an essential buy. Although biased towards the BBC Micro, quite a few of the […]


DNS in the cloud: Why and why not

The content below is taken from the original ( DNS in the cloud: Why and why not), to continue reading please visit the site. Remember to respect the Author & Copyright.

As enterprises consider outsourcing their IT infrastructure, they should consider moving their public authoritative DNS services to a cloud provider’s managed DNS service, but first they should understand the advantages and disadvantages.

To read this article in full, please click here



List of new options in Windows 10 Settings

The content below is taken from the original ( List of new options in Windows 10 Settings), to continue reading please visit the site. Remember to respect the Author & Copyright.


The most anticipated Windows 10 v1803 April 2018 Update was released recently and brought in a lot of new features. If you’ve been following the update, you might have already tried out a few of them. All Windows updates bring […]

This post List of new options in Windows 10 Settings is from TheWindowsClub.com.


Finnish university’s online AI course is open to everyone

The content below is taken from the original ( Finnish university’s online AI course is open to everyone), to continue reading please visit the site. Remember to respect the Author & Copyright.

Helsinki University in Finland has launched a course on artificial intelligence — one that's completely free and open to everyone around the world. Unlike Carnegie Mellon's new undergrad degree in AI, which the institution created to train future ex…


People with Dementia can DRESS Smarter

The content below is taken from the original ( People with Dementia can DRESS Smarter), to continue reading please visit the site. Remember to respect the Author & Copyright.

People with dementia have trouble with some of the things we take for granted, including dressing themselves. It can be a remarkably difficult task involving skills like balance, pattern recognition inside of other patterns, ordering, gross motor skill, and dexterity to name a few. Just because something is common, doesn’t mean it is easy. The good folks at NYU Rory Meyers College of Nursing, Arizona State University, and MGH Institute of Health Professions talked with a caregiver focus group to find a way for patients to regain their privacy and replace frustration with independence.

Although this is in the context of medical assistance, this represents one of the ways we can offload cognition or judgment to computers. The system works by detecting movement when someone approaches the dresser with five drawers. Vocal directions and green lights on the top drawer light up when it is time to open the drawer and don the clothing inside. Once the system detects the article is being worn appropriately, the next drawer’s light comes on. A camera seeks a matrix code on each piece of clothing, and if it times out, a caregiver is notified. There is no need for an internet connection, nor should one be given.

Currently, the system has a good track record with identifying the clothing, but it is not proficient at detecting when it is worn correctly, which could lead to frustrating false alarms. Matrix codes seemed like a logical choice since they could adhere to any article of clothing and get washed repeatedly but there has to be a more reliable way. Perhaps IR reflective threads could be sewn into clothing with varying stitch lengths, so the inside and outside patterns are inverted to detect when clothing is inside-out. Perhaps a combination of IR reflective and absorbing material could make large codes without being visible to the human eye. How would you make a machine-washable, machine-readable visual code?

Helping people with dementia is not easy but we are not afraid to start, like this music player. If matrix codes and barcodes get you moving, check out this hacked scrap-store barcode scanner.

Thank you, [Qes] for the tip.


Detect malicious activity using Azure Security Center and Azure Log Analytics

The content below is taken from the original ( Detect malicious activity using Azure Security Center and Azure Log Analytics), to continue reading please visit the site. Remember to respect the Author & Copyright.

This blog post was authored by the Azure Security Center team.

We have heard from our customers that investigating malicious activity on their systems can be tedious and knowing where to start is challenging. Azure Security Center makes it simple for you to respond to detected threats. It uses built-in behavioral analytics and machine learning to detect threats and generates alerts for the attempted or successful attacks. As discussed in a previous post, you can explore the alerts of detected threats through the Investigation Path, which uses Azure Log Analytics to show the relationship between all the entities involved in the attack. Today, we are going to explain to you how Security Center’s ability to detect threats using machine learning and Azure Log Analytics can help you keep pace with rapidly evolving cyberattacks.

Investigate anomalies on your systems using Azure Log Analytics

One method is to look at the trends of processes, accounts, and computers to understand when anomalous or rare processes and accounts are run on computers, which may indicate potentially malicious or unwanted activity. Run the query below against your data and note that anything it returns has been anomalous or rare over the last 30 days. This query shows the processes run by computers and account groups over a week to see what is new, and compares that to the behavior over the last 30 days. This technique can be applied to any of the logs provided in the Advanced Azure Log Analytics pane. In this example, I am using the SecurityEvent table.

Please note that the exclusion filters in the query below (the "!contains" lines) are an example of filtering your own results for noise and are not specifically required. The reason I have included them is to make it clear there will be certain items that are not run often and will show up as anomalous when using this or similar queries; these are specific to your environment and may need manual exclusion to help focus the investigation. Please build your own list of “known good” items to filter out based on your environment.

let T = SecurityEvent
| where TimeGenerated >= ago(30d)
| extend Date = startofday(TimeGenerated)
| extend Process = ProcessName
| where Process != ""
| where Process != "-"
| where Process !contains "\\Windows\\System"
| where Process !contains "\\Program Files\\Microsoft\\"
| where Process !contains "\\Program Files\\Microsoft Monitoring Agent\\"
| where Process !contains "\\ProgramData\\"
| where Process !contains "\\Windows\\WinSxS\\"
| where Process !contains "\\Windows\\SoftwareDistribution\\"
| where Process !contains "\\mpsigstub.exe"
| where Process !contains "\\WindowsAzure\\GuestAgent"
| where Process !contains "\\Windows\\Servicing\\TrustedInstaller.exe"
| where Process !contains "\\Windows\\Microsoft.Net\\"
| where Process !contains "\\Packages\\Plugins\\"
| project Date, Process, Computer, Account
| summarize count() by Date, Process, Computer, Account
| sort by count_ desc nulls last;
T
| evaluate activity_counts_metrics(Process, Date, startofday(ago(30d)), startofday(now()), 1d, Process, Computer, Account)
| extend WeekDate = startofweek(Date)
| project WeekDate, Date, Process, NewForWeek = new_dcount, Account, Computer
| join kind= inner
(
    T
    | evaluate activity_engagement(Process, Date, startofday(ago(30d)), startofday(now()),1d, 7d)
    | extend WeekDate = startofweek(Date)
    | project WeekDate, Date, Distribution1day = dcount_activities_inner, Distribution7days = dcount_activities_outer, Ratio = activity_ratio*100
)
on WeekDate, Date
| where NewForWeek == 1 and Ratio < 100
| project WeekDate, Date, Process, Account, Computer, NewForWeek, Distribution1day, Distribution7days, Ratio
| render barchart kind=stacked

When the above query is run, you will receive a TABLE similar to the item below, although the dates and referenced processes will be different. In this example, we can see when a specific process, computer, and account combination has not been seen before, based on week over week data for the last 30 days. Specifically, we can see portping.exe showed up in the week of 4/15, on the specific date of 4/16, for the first time in 30 days.

Table 1

You can also view the results in CHART mode and change the pivot of the bar CHART as seen below. For example, use the drop down and pivot on Computer instead of process and see the computers that launched this process.


Hover to see the specific computer and how many processes showed up for the first time.


In the query above, we look at the items that run across more than one day, which is a ratio of less than 100. This is a way to parse the data and more easily understand the scope of when a process runs on a given computer. By looking at rare items that have run across multiple days, you can potentially detect manual activity by an attacker who is probing your environment for information that will further increase his attack surface.

We can alternatively look at the processes that ran only on 1 day of the last 30 days, which can be done by choosing only ratio == 100 in the above query, simply change the related line to this:

| where NewForWeek == 1 and Ratio == 100

The above change to the query results in a different set of hits for rare processes and may indicate usage of a scripted attack to rapidly gather data from one or several systems, or may just indicate attacker activity on a single day.

Lastly, we see several interactive processes run, which indicate an interactive logon, for example the SQL Management Studio process Ssms.exe. Potentially, this is an unexpected logon to this system, and this query can help expose this type of anomaly in addition to unexpected processes.


Table 2

Once you have identified a computer or account you want to investigate, you can then dig in further on the full data for that computer. This can be done by opening a secondary query window and filtering only on the computer or account that you are interested in; examples of this follow. At that point, you can see what occurred around the anomalous or rare process execution time. We will select the portping.exe process and narrow the scope of the dates to allow for a closer look. From the table above, we can see the Date [UTC] value shown below (Table 3). This date is rounded to the nearest day for the query to work properly, but this, along with the computer and account used, should allow us to focus in on the timeframe of when this was run on the computer.


Table 3

To focus in on the timeframe, we will use that date to provide our single day range. We can pass the range into the query by using standard date formats indicated below. Click on the + highlighted in yellow and paste the below query into your window.

In the results, the distinct time is marked in red. We will use that in a subsequent query.

SecurityEvent
| where TimeGenerated >= datetime(2018-04-16 00:00:00.000) and TimeGenerated <= datetime(2018-04-16 23:59:59.999)
| where Computer contains "Contoso-2016" and Account contains "ContosoAdmin"
| where Process contains "portping.exe"
| project TimeGenerated, Computer, Account, Process, CommandLine


Now that we have the exact time, we can look at activity occurring with smaller time frames around that date. We usually use +5 minute and -5 minute blocks. For example:

SecurityEvent
| where TimeGenerated >= datetime(2018-04-16 19:10:00.000) and TimeGenerated <= datetime(2018-04-16 19:21:00.000)
| where Computer contains "Contoso-2016" and Account contains "ContosoAdmin"
//| where Process contains "portping.exe"
| project TimeGenerated, Computer, Account, Process, CommandLine

In the results below, we can easily see that someone was logged into the system via RDP. We know this because RDPClip.exe is being launched, which indicates they were copying and pasting between their host and the remote system.

Additionally, we see after the portping.exe activity that they are attempting to modify accounts or password functionality with the command netplwiz.exe or control userpasswords2.

They are then running Procmon.exe to see what other processes are running on the system. Generally this is done to understand what is available to the attacker to further exploit.


At this point, this machine should be taken offline and investigated more deeply to understand the extent of the compromise.

Find hidden techniques commonly deployed by attackers using Azure Log Analytics

Most security experts have seen the techniques attackers use to hide the usage of commands on a system to avoid detection. While there are certainly methods to avoid even showing up on the command line, the obfuscation technique used below is regularly used by various levels of attackers.

Below we will decode a base64 encoded string in the command line data and look for common PowerShell methods that are used in attacks.

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe" and CommandLine contains " -enc"
| extend b64 = extract("[A-Za-z0-9|+|=|/]{30,}", 0, CommandLine)
| extend utf8_decode = base64_decodestring(b64)
| extend decode = replace("\x00", "", utf8_decode)
| where decode contains 'Gzip' or decode contains 'IEX' or decode contains 'Invoke' or decode contains '.MemoryStream'
| summarize by Computer, Account, decode, CommandLine


Table 4

As you can see, the results provide you with details about what was in the encoded command line and potentially what an attacker was attempting to do.

You can now use the details in the above query to see what was running during the same time by adding the time and computer to the same table. This allows you to easily connect it with other activity on the system, the process by which is described just above in detail. One thing to note is that you can add these automatically by expanding the event with the arrow in the first column of the row. Then hover over TimeGenerated and click the + button.



This will add in an entry like so into your query window:

| where TimeGenerated == todatetime('2018-04-24T02:00:00Z')

Modify the range of time like this:

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Computer == "XXXXXXX"
| where TimeGenerated >= todatetime('2018-04-24T02:00:00Z')-5m and TimeGenerated <= todatetime('2018-04-24T02:00:00Z')+5m
| project TimeGenerated, Account, Computer, Process, CommandLine, ParentProcessName
| sort by TimeGenerated asc nulls last

Table 5

Lastly, connect this to your alerts by joining to the alerts from the last 30 days to see which alerts are associated:

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe" and CommandLine contains " -enc"
| extend b64 = extract("[A-Za-z0-9|+|=|/]{30,}", 0, CommandLine)
| extend utf8_decode = base64_decodestring(b64)
| extend decode = replace("\x00", "", utf8_decode)
| where decode contains 'Gzip' or decode contains 'IEX' or decode contains 'Invoke' or decode contains '.MemoryStream'
| summarize by TimeGenerated, Computer=toupper(Computer), Account, decode, CommandLine
| join kind=inner (
      SecurityAlert
      | where TimeGenerated >= ago(30d)
      | extend ExtProps = parsejson(ExtendedProperties)
      | extend Computer = toupper(tostring(ExtProps["Machine Name"]))
      | project Computer, AlertName, Description
) on Computer

Table 6

Security Center uses Azure Log Analytics to help you detect anomalies in your data and expose common hiding techniques used by attackers. By exploring more of your data through directed queries like those presented above, you may find anomalies both malicious and benign; in doing so, you will make your environment more secure and gain a better understanding of the activity occurring on the systems and resources in your subscription.
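
As one example of such a directed query, you could count how often encoded PowerShell commands run per computer per day and look for outliers. This is a hedged sketch that simply reuses the filter from the queries above; the 30-day window, the daily binning, and the EncodedRuns column name are assumptions for illustration only.

SecurityEvent
| where TimeGenerated >= ago(30d)
| where Process contains "powershell.exe" and CommandLine contains " -enc"
// count encoded PowerShell launches per computer per day
| summarize EncodedRuns = count() by Computer, bin(TimeGenerated, 1d)
| sort by EncodedRuns desc nulls last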

Learn more about Azure Security Center

To learn more about Azure Security Center’s detection capabilities, visit our threat detection documentation.

To learn more about Azure Advanced Threat Protection, visit our threat protection documentation.

To learn more about integration with Windows Defender Advanced Threat Protection, visit our threat protection integration documentation.

To stay up-to-date with the latest announcements on Azure Security Center, read and subscribe to our blog.

Posted on in category News

4 Best Practices to Get Your Cloud Deployments GDPR Ready

The content below is taken from the original ( 4 Best Practices to Get Your Cloud Deployments GDPR Ready), to continue reading please visit the site. Remember to respect the Author & Copyright.

Get Your Cloud Deployments GDPR Ready

With GDPR coming into force later this month, security and compliance will be the top priority for any cloud deployment that contains personal data of EU citizens.

While leading providers have moved to make their platforms and services compliant, ensuring compliance requires more than just technology. Companies will also need to invest time and resources to prepare internal cloud teams to correctly and effectively design secure, auditable, and traceable cloud solutions that also meet the demands of the business. Here are 4 steps to get your cloud deployments ready for GDPR compliance.

#1 Make sure your cloud partners are GDPR compliant

In the cloud, the entire security framework operates under a shared responsibility model between the provider and the customer.

From an infrastructure perspective, the cloud service provider is responsible for providing a secure cloud environment, from their physical presence to the underlying resources that provide compute, storage, database, and network services.

Customers who import data and utilize the provider’s services are responsible for using them to design and implement their own security mechanisms such as access control, firewalls (both at the instance and network levels), encryption, logging, and monitoring.

Under GDPR, both customers (as controllers who define how and why personal data is collected) and cloud providers (as processors who manage, process, or store personal data on behalf of the controller) must be compliant.

To date, AWS, Google Cloud, and Microsoft Azure have announced either their compliance (in the case of AWS) or their commitment to be compliant with GDPR (Google and Microsoft) by the May 25 deadline.

Enterprises should make sure that their cloud partners and any third party that processes, manages, or stores personal data of EU citizens on their behalf have the proper compliance and controls in place.

#2 Audit your systems for personal data

Personally identifiable information (PII) as defined by GDPR includes a range of data types, from names, email addresses, and phone numbers, to photos, genetic data, and IP addresses. But how much of the personal data that you store is actually required for your business?

GDPR is an opportunity to take a critical look at the types of data you collect and why. Use cloud services like AWS’s Amazon Macie to audit and assess the type of data currently in your data stores and determine which ones will be impacted by GDPR. Do they contain data that is outdated or personal data that is unnecessary for your business? Take this opportunity to redefine your processes for the type of data that you will collect going forward.

#3 Put proactive security services in place

A cloud security breach is more than just the loss of data. Exposed S3 buckets and other high-profile breaches that left millions of pieces of PII in the open in 2017 could prove fatal for a business under the new regulations. Under GDPR, a breach that results in the exposure of personal data could result in fines of up to 4% of annual turnover or €20 million.

GDPR is an opportunity for companies to implement broader, more comprehensive cloud security and data protection in your deployments at every level. Amazon Web Services, Microsoft Azure, and Google Cloud Platform each have a range of services in place to support your security and compliance requirements. These include:

#4 Empower teams for compliance

A regulation as far-reaching as GDPR will impact your organization at the technology, process, and people levels. A shared understanding by your teams of the regulation and how it impacts your organization from the point of view of technology and the business will be an essential component of your compliance efforts.

A best practices approach will be key to get your cloud deployments GDPR ready and to prepare for any security and compliance challenges that your business will face.

Posted on in category News

Google Maps Platform now integrated with the GCP Console

The content below is taken from the original ( Google Maps Platform now integrated with the GCP Console), to continue reading please visit the site. Remember to respect the Author & Copyright.

Thirteen years ago, the first Google Maps mashup overlaid Craigslist housing data on our map tiles—before there was even an API to access them. Today, Google Maps APIs are some of the most popular on the internet, powering millions of websites and apps generating billions of requests per day.

Earlier this month, we introduced the next generation of our Google Maps business—Google Maps Platform—that included a series of updates to help you take advantage of new location-based features and products. We simplified our APIs into three product categories—Maps, Routes and Places—to make it easier for you to find, explore and add new features to your apps and sites. In addition, we merged our pricing plans into one pay-as-you go plan for our core products. With this new plan, you get the first $200 of monthly usage for free, so you can try the APIs risk-free.

In addition, Google Maps Platform includes simplified products, tighter integration with Google Cloud Platform (GCP) services and tools, as well as a single pay-as-you-go offering. By integrating with GCP, you can scale your business and utilize location services as you grow—we no longer enforce usage caps, just like any other GCP service.

You can also manage Google Maps Platform from Google Cloud Console—the same interface you already use to manage and monitor other GCP services. This integration provides a more tailored view to manage your Google Maps Platform implementation, so you can monitor individual API usage, establish usage quotas, configure alerts for more visibility and control, and access billing reports. All Google Maps Platform customers now receive free Google Maps Platform customer support, which you can also access through the GCP Console.

Check out the Google Maps Platform website where you can learn more about our products and also explore the guided onboarding wizard from the website to the console. We can’t wait to see how you will use Google Maps Platform with GCP to bring new innovative services to your customers.

Posted on in category News

How to Deploy An Azure Virtual Machine (May 2018)

The content below is taken from the original ( How to Deploy An Azure Virtual Machine (May 2018)), to continue reading please visit the site. Remember to respect the Author & Copyright.

This post will show you how you can quickly deploy an Azure virtual machine for evaluation purposes.

Before You Continue

It is actually very easy to next-next-next your way through the process of building a virtual machine in Azure. The “wizard” has been designed for newbies to get something up and running quickly. However, the results are not what anyone would recommend for production. Every next-next-next deployment will produce a virtual machine that has its own network security rules, public IP address with direct RDP/SSH access from the Internet, and so on.

In the training that I deliver, I strongly urge people to pre-create things such as their network, a diagnostics storage account, and remote/on-premises connectivity; then, when they create virtual machines in the Azure Portal, they tweak the wizard to use the already-created resources.

In this post, I will walk you through the default process at a high level. Note that Microsoft is constantly renaming and moving things around in the Azure Portal, so things might have changed since this post was written.

Starting Off

Log into the Azure Portal and click the button (highlighted below) in the top-right corner to make sure you are working in the correct customer tenant and Azure subscription.

Switch Customer Tenant and Azure Subscription in the Azure Portal [Image Credit: Aidan Finn]

Now you will start the process of creating a virtual machine. Click Create A Resource in the top-left corner to open the New blade. If you click Compute, the results are filtered for things that use processors in Azure, such as virtual machines and Service Fabric. You can search for an operating system image or an operating system/application image from the Azure Marketplace. You can also click See All to browse the Azure Marketplace. In my example, I am selecting Windows Server 2016 Datacenter.

Create Virtual Machine Blade

A Create Virtual Machine blade opens. Here you will go through a number of steps (blades) to deploy your new virtual machine; the actual blades will depend on what you selected to deploy. For example, a network virtualization appliance, such as a Check Point firewall, will have some configurations that are specific to it. A virtual machine running SQL Server might allow advanced configurations for the SQL Server workload. Typically, you will find the following blades:

The Standard Create Virtual Machine Blade [Image Credit: Aidan Finn]

Basics Blade

In this blade, you will configure some naming and location settings, as well as set up the default local administrator account. The following settings should be configured:

I want to highlight one setting with the title Save Money. You can save up to 40 percent on the cost of a Windows virtual machine if you have the Hybrid Use Benefit (HUB) that comes with Software Assurance. If you’re not sure about this, then please confirm with your administrators, resellers, distributor, or LSP/LAR that you have the rights to use this benefit before clicking the button. You don’t want to be hit by an auditor for misusing this button!

Click OK when you are ready to move onto the Size blade.

Creating a New Azure Virtual Machine — Basics Blade [Image Credit: Aidan Finn]

Size Blade

A blade called Choose A Size appears next; this blade recently went through an upgrade:

Searching for and Selecting an Azure Virtual Machine Size [Image Credit: Aidan Finn]

Note that at the time of writing, the Temporary Storage (temp drive) column was misleadingly labeled as Local SSD. The size of the OS drive is either 30 GB or 128 GB, depending on which OS image you selected, and has nothing to do with what you select here.

Search for and pick a virtual machine size. Click Select to continue to the Settings blade.

Settings Blade

The Settings blade is the most detailed one in the standard set of blades for creating a virtual machine. It is so detailed that it has a scroll bar to get you from top to bottom. This is also where a lot of things are dumbed down for you by supplying defaults; these are the defaults that I teach people to undo in my classes. We’ll start with some availability, storage, and networking stuff.

My first note on this blade: Availability Zone and Availability Set are mutually exclusive. You can use one or the other, or neither.

My second note: I normally:

Settings of a New Azure Virtual Machine — Part 1 [Image Credit: Aidan Finn]

If you scroll down, you’ll find more settings to configure:

When you are ready, click OK to save your Settings configuration.

Settings of a New Azure Virtual Machine — Part 2 [Image Credit: Aidan Finn]

Summary/Create Blade

The final blade does two things:

  1. Validates that everything you have selected is possible – as much as it can be before a deployment.
  2. Provides you with a summary that you can check before continuing.

There are two things to note.

Click OK and your request to create a new virtual machine will be submitted to Azure. If you click the Notifications icon (a bell) in the top right, you can track the progress of the deployment job. This can take anywhere from 2 to 15 minutes for simple Windows or Linux machines, depending on the requested configuration.

The post How to Deploy An Azure Virtual Machine (May 2018) appeared first on Petri.

Posted on in category News

Devs ask Microsoft for real .NET universal apps: Windows, Mac, iOS, Android

The content below is taken from the original (Devs ask Microsoft for real .NET universal apps: Windows, Mac, iOS, Android), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows 10 only is not a universal solution

Microsoft introduced the Universal Windows Platform (UWP) this year: applications that run across many device types, provided that they all run Windows 10.…