The content below is taken from the original (Fully functional Windows 3.1 in WebVR (happy 25th bday!)), to continue reading please visit the site. Remember to respect the Author & Copyright.
http://bit.ly/2pVpSOW
The content below is taken from the original (Don’t get bit by zombie cloud data), to continue reading please visit the site. Remember to respect the Author & Copyright.
The internet never forgets, which means data that should have been deleted doesn’t always stay deleted. Call it “zombie data,” and unless your organization has a complete understanding of how your cloud providers handle file deletion requests, it can come back to haunt you.
Ever since the PC revolution, the concept of data deletion has been a bit misunderstood. After all, dragging a file to the Recycle Bin simply removed the pointer to the file, freeing up disk space to write new data. Until then, the original data remained on the disk, rediscoverable using readily accessible data recovery tools. Even when new data was written to that disk space, parts of the file often lingered, and the original file could be reconstructed from the fragments.
Desktop—and mobile—users still believe that deleting a file means the file is permanently erased, but that’s not always the case. That perception gap is even more problematic when it comes to data in the cloud.
Cloud service providers have to juggle retention rules, backup policies, and user preferences to make sure that when a user deletes a file in the cloud, it actually gets removed from all servers. If your organization is storing or considering storing data in the cloud, you must research your service provider’s data deletion policy to determine whether it’s sufficient for your needs. Otherwise, you’ll be on the hook if a data breach exposes your files to a third party or stuck in a regulatory nightmare because data wasn’t disposed of properly.
With the European Union General Data Protection Regulation expected to go into effect May 2018, any company doing business in Europe or with European citizens will have to make sure they comply with rules for removing personal data from their systems—including the cloud—or face hefty fines.
Deleting data in the cloud differs vastly from deleting data on a PC or smartphone. The cloud’s redundancy and availability model ensures there are multiple copies of any given file at any given time, and each must be removed for the file to be truly deleted from the cloud. When a user deletes a file from a cloud account, the expectation is that all these copies are gone, but that really isn’t the case.
Consider the following scenario: A user with a cloud storage account accesses files from her laptop, smartphone, and tablet. The files are stored locally on her laptop, and every change is automatically synced to the cloud copy so that all her other devices can access the most up-to-date version of the file. Depending on the cloud service, previous file versions may also be stored. Since the provider wants to make sure the files are always available for all devices at all times, copies of the file live across different servers in multiple datacenters. Each of those servers is backed up regularly in case of a disaster. That single file now has many copies.
“When a user ‘deletes’ a file [in the cloud], there could be copies of the actual data in many places,” says Richard Stiennon, chief strategy officer of Blancco Technology Group.
Deleting locally and in the user account simply takes care of the most visible version of the file. In most cases, the service marks the file as deleted and removes it from view but leaves it on the servers. If the user changes his or her mind, the service removes the deletion mark on the file, and it’s visible in the account again.
In some cases, providers adopt a 30-day retention policy (Gmail has a 60-day policy), where the file may no longer appear in the user’s account but stay on servers until the period is up. Then the file and all its copies are automatically purged. Others offer users a permanent-delete option, similar to emptying the Recycle Bin on Windows.
Service providers make mistakes. In February, forensics firm Elcomsoft found copies of Safari browser history still on iCloud, even after users had deleted the records. The company’s analysts found that when the user deleted their browsing history, iCloud moved the data to a format invisible to the user instead of actually removing the data from the servers. Earlier, in January, Dropbox users were surprised to find files that had been deleted years ago reappearing in their accounts. A bug had prevented files from being permanently deleted from Dropbox servers, and when engineers tried to fix the bug, they inadvertently restored the files.
The impact of these incidents was limited—in Dropbox’s case, the users saw only their own files, not other people’s deleted files—but they still highlight how data deletion mistakes can make organizations nervous.
There are also cases in which the user’s concept of deletion doesn’t match the cloud provider’s in practice. It took Facebook more than three years to remove from public view photographs that a user had deleted back in 2009; even then, there was no assurance that the photographs weren’t still lurking in secondary backups or cloud snapshots. There are stories of users who have removed their social media accounts entirely, only to find that the photos they’d shared remain accessible to others.
Bottom line: between backups, data redundancy, and data retention policies, it’s unwise to assume that data is ever completely removed from the cloud.
Stiennon declined to speculate on how specific cloud companies handle deleting files from archives but said that providers typically store data backups and disaster recovery files in the cloud and not as offsite tape backups. In those situations, when a file is deleted from the user’s account, the pointers to the file in the backup get removed, but the actual files remain in that blob. While that may be sufficient in most cases, if that archive ever gets stolen, the thief would be able to forensically retrieve the supposedly deleted contents.
“We know that basic deletion only removes pointers to the data, not the data itself, and leaves data recoverable and vulnerable to a data breach,” says Stiennon.
Some service providers wipe disks, Stiennon says. Typically in those situations, when the user sends a deletion command, the marked files are moved to a separate disk. The provider relies on normal day-to-day operations to overwrite the original disk space. Considering there are thousands of transactions per day, that’s a reasonable assumption. Once the junk disk is full or the retention time period has elapsed, the provider can reformat and degauss the disk to ensure the files are truly erased.
Most modern cloud providers encrypt data stored on their servers. While some ahead-of-the-game providers encrypt data with the user’s private keys, most go with their own keys, frequently the same one to encrypt data for all users. In those cases, the provider might remove the encryption key and not even bother with actually erasing the files, but that approach doesn’t work so well when the user is trying to delete a single file.
Here’s another reason to be paranoid in the likely event that not every copy of a file gets scrubbed from the cloud: There are forensics tools capable of looking into cloud services and recovering deleted information. Elcomsoft used such a tool on iCloud to find the deleted browser history, for example. Knowing that copies of deleted files exist somewhere in the cloud, the question becomes: How safe are these orphaned copies from government investigators and other snoops?
Research has shown that companies struggle to properly dispose of disks and the data stored on them. In one Blancco Technology Group study, engineers purchased more than 200 drives from third-party sellers and found personal and corporate data could still be recovered, despite previous attempts to delete it. A separate Blancco Technology Group survey found that one-third of IT teams reformat SSDs before disposing of them but don’t verify that all the information has been removed.
“If you do not overwrite the data on the media, then test to see if it has been destroyed, you cannot be certain the data is truly gone,” Stiennon says.
While there have always been concerns about removing specific files from the cloud, enterprise IT teams are only now beginning to think about broader data erasure requirements for cloud storage. Many compliance regimes specify data retention policies in years, ranging from seven years to as long as 25 years, which means early cloud adopters are starting to think about how to remove the data that, per policy, now has to be destroyed.
GDPR is also on the way, with its rules that companies must wipe personal data belonging to EU residents from all their systems once the reasons for having the data expire. Thus, enterprises have to make sure they can regularly and thoroughly remove user data. Failure to do so can result in fines of up to 4 percent of a company’s global annual revenue.
That’s incentive, right there, for enterprises to make sure they are in agreement with their service providers on how to delete data.
Given these issues, it’s imperative that you ask to see your service provider’s data policy to determine how unneeded data is removed and how your provider verifies that data removal is permanent. Your service-level agreement needs to specify when files are moved and how all copies of your data are removed. A cloud compliance audit can review your storage provider’s deletion policies and procedures, as well as the technology used to protect and securely dispose of the data.
Considering all the other details to worry about in the cloud, it’s easy to push concerns about data deletion aside, but if you can’t guarantee that data you store in the cloud is effectively destroyed when needed, your organization will be out of compliance. And if supposedly deleted data is stolen from the cloud—or your storage provider mistakenly exposes data that should have been already destroyed—your company will ultimately pay the price.
“It’s more of a false sense of security than anything else when the wrong data removal method is used,” Stiennon says. “It makes you think the data can never be accessed, but that’s just not true.”
This story, “Don’t get bit by zombie cloud data,” was originally published by InfoWorld.
The content below is taken from the original (UK driving tests to include sat nav skills from December), to continue reading please visit the site. Remember to respect the Author & Copyright.
In the biggest shake-up of the standardised driving exam since the introduction of the theory test, UK drivers will be required to demonstrate that they can navigate using a sat nav. The Driver & Vehicle Standards Agency has confirmed that from December 4th, learners will be required to drive independently for 20 minutes — double the current length — with four out of every five candidates being asked to follow directions displayed on a navigational device.
The agency says that drivers won’t be required to bring their own sat nav, nor will they be tasked with setting it up. They’ll also be able to ask for clarification of the route if they’re not sure, and it won’t matter if the wrong route is taken, as long as it doesn’t put other road users at risk.
The DVSA noted back in June 2016 that the introduction of a technology element would likely improve safety, boost driver confidence and widen potential areas for practical tests. "Using a satnav goes some way to addressing concerns that inexperienced drivers are easily distracted, which is one of the main causes of crashes. We’re moving with technology and the technology that new drivers will be using," an agency spokesperson said in a statement.
Other changes to the test include the removal of the "reverse around a corner" and "turn-in-the-road" manoeuvres, which will be replaced with parallel parking, parking in a bay and a stop-and-go test on the side of the road. Examiners will also ask drivers two vehicle safety "show me, tell me" questions. One will be asked before setting off, while the other will need to be answered while on the road.
Source: Gov.uk
The content below is taken from the original (Amazon Lex – Now Generally Available), to continue reading please visit the site. Remember to respect the Author & Copyright.
During AWS re:Invent I showed you how you could use Amazon Lex to build conversational voice & text interfaces. At that time we launched Amazon Lex in preview form and invited developers to sign up for access. Powered by the same deep learning technologies that drive Amazon Alexa, Amazon Lex allows you to build web & mobile applications that support engaging, lifelike interactions.
Today I am happy to announce that we are making Amazon Lex generally available, and that you can start using it today! Here are some of the features that we added during the preview:
Slack Integration – You can now create Amazon Lex bots that respond to messages and events sent to a Slack channel. Click on the Channels tab of your bot, select Slack, and fill in the form to get a callback URL for use with Slack:
Follow the tutorial (Integrating an Amazon Lex Bot with Slack) to see how to do this yourself.
Twilio Integration – You can now create Amazon Lex bots that respond to SMS messages sent to a Twilio SMS number. Again, you simply click on Channels, select Twilio, and fill in the form:
To learn more, read Integrating an Amazon Lex Bot with Twilio SMS.
SDK Support – You can now use the AWS SDKs to build iOS, Android, Java, JavaScript, Python, .Net, Ruby, PHP, Go, and C++ bots that span mobile, web, desktop, and IoT platforms and interact using either text or speech. The SDKs also support the build process for bots; you can programmatically add sample utterances, create slots, add slot values, and so forth. You can also manage the entire build, test, and deployment process programmatically.
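For example, a minimal sketch of that programmatic build flow using the AWS CLI’s lex-models commands might look like the following (the slot type and intent names here are purely illustrative, not part of the announcement):

# Create a custom slot type with a couple of sample values (names are hypothetical).
aws lex-models put-slot-type \
    --name FlowerTypes \
    --enumeration-values '[{"value":"roses"},{"value":"lilies"}]'

# Create an intent with sample utterances that the bot should recognize.
aws lex-models put-intent \
    --name OrderFlowers \
    --sample-utterances "I would like to order flowers" "I want to buy some flowers" \
    --fulfillment-activity '{"type":"ReturnIntent"}'

The equivalent calls exist in each language SDK (for example, boto3 exposes them on its "lex-models" client), so the same build step can be folded into a deployment pipeline.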
Voice Input on Test Console – The Amazon Lex test console now supports voice input when used on the Chrome browser. Simply click on the microphone:
Utterance Monitoring – Amazon Lex now records utterances that were not recognized by your bot, otherwise known as missed utterances. You can review the list and add the relevant ones to your bot:
You can also watch the following CloudWatch metrics to get a better sense of how your users are interacting with your bot. Over time, as you add additional utterances and improve your bot in other ways, the metrics should be on the decline.
Easy Association of Slots with Utterances – You can now highlight text in the sample utterances in order to identify slots and add values to slot types:
Improved IAM Support – Amazon Lex permissions are now configured automatically from the console; you can now create bots without having to create your own policies.
Preview Response Cards – You can now view a preview of the response cards in the console:
To learn more, read about Using a Response Card.
Go For It
Pricing is based on the number of text and voice responses processed by your application; see the Amazon Lex Pricing page for more info.
I am really looking forward to seeing some awesome bots in action! Build something cool and let me know what you come up with.
— Jeff;
The content below is taken from the original (readycloud (1.15.1292.517)), to continue reading please visit the site. Remember to respect the Author & Copyright.
ReadyCLOUD gives you remote access over the Internet to a USB storage device that is connected to your router’s USB port.
The content below is taken from the original (Humans are (still) the weakest cybersecurity link), to continue reading please visit the site. Remember to respect the Author & Copyright.
Humans remain the weak link in corporate data protection, but you might be surprised that it isn’t only rank-and-file employees duped by phishing scams who pose risks. Some companies are lulled into a false sense of cybersecurity by vendors. You read that right: some enterprises believe the shiny new technologies they’ve acquired will protect them from anything.
Just ask Theodore Kobus, leader of BakerHostetler’s Privacy and Data Protection team.
While Kobus was conducting an educational workshop on endpoint monitoring, an employee of a large company mentioned a tool the company had deployed to watch over computing devices connected to the corporate network. Kobus told him the move was great because it would help speed up the time it takes to detect an incident. The employee pushed back and said, “No, it’s much more than that; it’s going to stop these attacks.”
Taken aback by the employee’s confidence in a single tool, Kobus explained the inherent dangers in believing cybersecurity technologies, no matter their speedy detection capabilities, are foolproof.
“We talked things through and they realized — because they weren’t really thinking at the time — that zero-day attacks are not going to be blocked by what they have in place and they need to understand what the tools are used for,” says Kobus, whose team has helped enterprises address 2,000 breaches in the past five years. “That’s a big problem that we’re seeing. Companies really need to focus on the key issues to help stop these attacks from happening in the first place.”
The anecdote underscores just how vulnerable companies are to attacks despite instituting proper protections, says Kobus, who explored the points in BakerHostetler’s 2017 Data Security Incident Response Report, which incorporated data from the 450 breaches his team worked on in 2016. Companies surveyed ranged from $100 million to $1 billion in revenues across health care, retail, hospitality, financial services, insurance and other sectors.
At 43 percent, phishing, hacking and malware incidents accounted for most incidents for the second year in a row, a 12 percentage-point jump from the firm’s incident response report in 2015. Thirty-two percent of incidents were initiated by human error, while 25 percent of attacks involved phishing and 23 percent were initiated via ransomware. Another 18 percent of compromises occurred due to lost or stolen devices, and three percent involved internal theft.
Phishing is particularly difficult to stop, Kobus says, because digital natives, those who grew up accustomed to the rapid-fire response cadence of social media, are programmed to answer emails from their coworkers quickly. Accordingly, many fall prey to business email compromises that appear to come from their CEO, CFO or another peer but in reality include a malicious payload.
“Phishing scams are never going to go away,” Kobus says. “No matter what technology we put in place, no matter how much money we spend on protections for the organization, we still have people and people are fallible.” With the rise of such social engineering attacks, Kobus says it’s important for IT leaders to caution employees to slow down, stop and consider such emails and either walk down the hall or phone to ask a colleague if they sent the email.
Ransomware attacks – in which perpetrators introduce malware that prevents or limits you from accessing your system until a ransom is paid – have increased by 500 percent year-over-year, with BakerHostetler responding to 45 such incidents in 2016. Ransomware scenarios vary: some involve sophisticated parties that break into the network and then broadly deploy ransomware to hundreds of devices, while others are carried out by rookies who bought a ransomware kit. BakerHostetler saw several demands in excess of $25,000, almost all of which called for payment via Bitcoin.
But most companies took several days to create and fund their Bitcoin wallet to pay the perpetrator(s), says Kobus, who added that ransomware incidents will probably increase over the short term because companies have proven unable to manage, let alone prevent, them.
The report findings suggest enterprises have more work to do with regard to shoring up their cybersecurity practices. Kobus, whose team of 40 conducts 75 “table-top exercises” involving incident response with corporations each year, says that companies are better-served by going back to the basics, starting with proper training and planning of cyber defenses rather than rushing out to buy the shiniest new technology on the market.
Companies should, for example, teach their workforce what phishing scams look like and pepper employees with fake phishing emails to test readiness. Other basic security measures include implementing multifactor authentication to remotely access any part of the company’s network or data; creating a forensics plan to quickly initiate a cybersecurity investigation; building business continuity into the incident response plan to ensure systems remain stable; vetting the technical ability, reputation and financial solvency of vendors; deploying off-site or air-gapped back-up systems in the event of ransomware; and acquiring the appropriate cyber insurance policy.
There is no one-size-fits-all approach to cybersecurity readiness. It invariably requires an enterprise-wide approach tailored to the culture and industry of the company, accounting for regulatory requirements. And in the event of a breach, communication and transparency to consumers is paramount, Kobus says.
“It’s really about getting in there and helping them manage the breach,” says Kobus, adding that includes working with security forensics and corporate communications teams to craft the right messaging. “The goal is to communicate in a transparent, thoughtful and meaningful way. You want to be able to answer the basic questions the consumers want answered: What happened? How did it happen? What are you doing to protect me? What are you doing to stop this from happening in the future?”
This story, “Humans are (still) the weakest cybersecurity link,” was originally published by CIO.
The content below is taken from the original (You just sent an on-prem app to the cloud and your data centre has empty racks. What now?), to continue reading please visit the site. Remember to respect the Author & Copyright.
On-premises data centres are expensive to build and operate, which is one reason public cloud is so attractive … albeit not so attractive that organisations will immediately evacuate on-premises data centres.
Indeed, it’s accepted that workloads will migrate over years and that hybrid clouds will become very common. But it’s clear that data centres built for pre-cloud requirements are unlikely to be full in future.
Which raises the question of what to do with empty space in a perfectly good data centre full of expensive electrical and cooling kit.
The Register‘s asked around and deduced you have a few options.
One is literally doing nothing other than unplugging kit that’s no longer in use. This isn’t a mad idea, because your on-premises data centre was designed to cool certain kit in certain places. Leaving decommissioned devices in place won’t disrupt those cooling concoctions. It can also save you the cost of securely destroying devices. Those devices will also be housed in a known secure environment.
Another idea advanced by Tom Anderson, Vertiv’s* ANZ datacentre infrastructure manager, is to optimise the room for its new configuration. This can be done by placing baffles on newly-empty racks so that cold air isn’t wasted, or by erecting temporary walls around remaining racks. In either case, you’ll create a smaller volume that needs less cooling. He also recommends throttling back cooling and UPSes because they’ll be more efficient under lighter workloads. Those running dual cooling units, he suggests, could run one at a time and switch between units.
An old data centre can also be useful as a disaster recovery site. Plenty of organisations outsource a DR site. Newly-freed space in one facility lets you run your own.
The world has an insatiable appetite for data centres, so renting out your spare space is another option. But a Gartner study titled “Renting Your Excess Data Center Capacity Delivers More Risk Than Reward” may deter you from exploring it. Analyst Bob Gill warns that most organisations just aren’t ready to become a hosting provider, because it’s a specialist business. It’s also a business in which clients have very high expectations that would-be-hosts have to learn in a hurry. He also worries about reputational risk flowing from news of a security breach.
Gill also notes that becoming a service provider means signing away data centre space for years, depriving you of an asset you could conceivably covet before contractual obligations expire.
Gill doesn’t rule out the idea, but says “the complexities and risks of offering commercial colocation will confine the majority of such offerings to educational and governmental agencies and vertical industry ecosystems”.
At this point of the story many of you may also be wondering if a partly-empty bit barn is a useful place to store booze. Sadly it’s not a good idea: wine is best stored at between 7C and 18C, but modern data centres run in the low twenties. Beer won’t be much fun at that temperature. Data centres also tend to be rather brighter than a good wine cellar.
Bit barns are also a poor place to store documents, because they tend to run a little wetter than the United States Library of Congress’ recommended 35 per cent relative humidity. Paper also prefers to be away from air currents, and data centres are full of them.
But a partly-empty data centre is a good place to store … computers. Which may seem stupidly obvious until you consider that some workloads just aren’t a good fit for the cloud. Vertiv’s Anderson suggested that video surveillance footage is so voluminous that moving it to the cloud is costly and slow, but that some spare racks could happily let you consolidate video storage.
Emerging workloads like AI and big data can also demand very hot and power-hungry servers. A dense, full, data centre may have struggled to house that kind of application. A data centre with some empty racks may be able to accommodate those workloads. That such applications tend to use large data sets and have high compute-affinity – they work best when storage and servers are physically close – makes them strong candidates for on-premises operations.
A final option is just to knock over the walls of a data centre, lay down some carpet and use the space for whatever purpose you fancy.
If you’ve found a good use for unused data centre space, feel free to let me know or hit the comments. ®
Bootnote: A final treat, brought to El Reg‘s attention by a chap named John White: an IBM faux cop show from the golden age of server consolidation, of a standard to rank with vendors’ dire rock video pastiches. Enjoy. If you can.
* Vertiv used to be Emerson Network Power
The content below is taken from the original (Microsoft adds new tools for Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.
Microsoft announced several new Azure tools and features focused on migrating data to the cloud and helping businesses work securely with other companies and customers:
Also announced were new tools for Azure Active Directory, an identity and access management system, focused on business-to-business collaboration and ways for companies to store customer information:
Learn more about how each of these tools can benefit you in the original GeekWire blog post. Other questions or feedback? Please post them here!
The content below is taken from the original (Amazon Lightsail vs DigitalOcean: Why Amazon needs to offer something better), to continue reading please visit the site. Remember to respect the Author & Copyright.
In the first part of this series, Amazon Lightsail: how to set up your first instance, we introduced Amazon Lightsail, a low-cost virtual private server (VPS) platform from Amazon Web Services. As we saw in that post, it’s very simple to create a small infrastructure with Lightsail. With its launch, Amazon is trying to get a slice of the lucrative VPS market. The market has matured over the last few years and a number of players have a head start. Vendors like DigitalOcean or Linode have a large customer base. These companies are continuously improving their VPS features and Amazon will need to catch up fairly soon if it wants to capture a good market share. Also, these VPS providers have some big clients on their books, such as Atlassian, Creative Commons, or RedHat to name a few. This high level of trust is the result of the continuous expansion of data centers, the reliability of service, and the value customers get for their investment. In this post we will take a look at Amazon Lightsail vs DigitalOcean and why Amazon needs to offer something better.
Competing against these niche players means that Amazon will not only have to quickly include some of the features users now take for granted, but it will also have to differentiate itself with extra features that others are still lacking. In fact, in our opinion, the competition is just getting started.
To understand how Amazon Lightsail stands against its established competitors in the market, we decided to compare it with DigitalOcean, a widely popular VPS provider on a number of areas:
Amazon is always expanding its regions, which means Lightsail should also become available in new regions over time. This will give users in different parts of the world greater network proximity to their servers. At the time of this post, AWS has the following regions, in addition to another region in China:
However, Lightsail is available only in the us-east-1 region.
DigitalOcean has data centers in the following countries:
Amazon EC2 has a large number of instance types, ranging from micro instances with less than a GB of memory to large, disk or memory optimized servers. Each of these instance types can be further enhanced with extra storage volumes with provisioned IOPs.
Lightsail has a much simpler instance model with only five types of servers.
This is expected, because its target audience is developers and start-ups who don’t want to spend a lot of time comparing price-performance ratios.
DigitalOcean has nine instance types for its "standard" instances.
And a few more in its "high memory" category:
As we can see, the high-end instances are suitable for large scale data processing and storage. This again proves that the company is targeting not only individuals or start-ups, but also corporate clients who are willing to pay extra money for their workloads.
Referencing the images above, we can see that the pricing for Amazon Lightsail instances is very similar to that of DigitalOcean Droplets for the same server specs.
As of March 2017, Amazon Lightsail comes bundled with only two operating systems: Ubuntu 16.04 LTS (Long Term Support) or Amazon Linux 2016.09.01:
DigitalOcean offers a number of open source *nix-based operating system images, each with different versions.
VPS providers also offer something called "application images." These are generic installations of applications bundled with a base operating system. With application images, users don’t need to install applications after creating a server, and this significantly saves time. Some popular application packs are LAMP stack, Gitlab, or Node.js, which are baked in with operating systems like CentOS or Ubuntu.
Amazon Lightsail currently has a limited, but good collection of instance images.
DigitalOcean has a larger collection of "One-click apps" too:
User data scripts are special pre-written code blocks that run when a VPS instance is created. A common use case for user data is automating installation of applications, users or configuration files. For example, a server can be made to install a particular version of Java as it comes up. The developer would write a script to do this and put it in the user data section when creating the server. This saves time in two ways: when rolling out a number of instances, administrators don’t have to manually install applications or change configuration in each instance, and secondly, each instance will have a uniform installation, eliminating any chance of manual error. User data has been available for Amazon EC2 instances for a long time and is widely adopted for system automation.
Amazon Lightsail calls it "Launch Script" and DigitalOcean calls it "User data", but they are essentially the same.
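As a rough sketch, this is what passing a launch script to a new Lightsail instance looks like with the AWS CLI (the instance name, blueprint ID and bundle ID below are examples; valid IDs can be listed with the get-blueprints and get-bundles commands):

# The launch script runs once when the instance first boots.
aws lightsail create-instances \
    --instance-names web-1 \
    --availability-zone us-east-1a \
    --blueprint-id ubuntu_16_04 \
    --bundle-id nano_1_0 \
    --user-data 'apt-get update && apt-get install -y nginx'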
Both Amazon Lightsail and DigitalOcean allow users SSH access from a web console. Most practical use cases, though, require SSH access from an OS shell prompt or a tool like PuTTY. Authentication can be done either with a username and password or, preferably, with more secure SSH keys.
Amazon Lightsail allows SSH key access only, which is good for security. Users can create a new SSH key, upload their own public key or use an existing key when creating an instance.
DigitalOcean offers both key-based and password-based authentication. The choice of SSH key is optional. If no SSH key is chosen or created, the user is sent an e-mail with a temporary password for the root account. Upon first login, the user needs to change that password. The image below shows how new SSH keys can be created in DigitalOcean.
Note that unlike Lightsail, DigitalOcean does not offer a key generation facility.
Sometimes the data in a server will outgrow its original capacity. When disk space runs out and data cannot be deleted or archived, extra disk space needs to be added. Typically this involves creating one or more additional disk volumes and attaching to the instance.
For Amazon EC2, this is possible with Elastic Block Storage (EBS). Amazon Lightsail is yet to add this feature. DigitalOcean, on the other hand, has only recently added it for users.
DigitalOcean volumes can be attached during instance creation as well, but that facility is available in only selected regions.
Adding extra storage is one way to expand a server. Sometimes the instance may need extra computing power too. This can be done by adding more CPU and RAM to the server. Although this is fairly simple in EC2, we could not find a way to resize a Lightsail instance once it was created.
Again DigitalOcean wins in this area. It allows users to up-size the instance either with CPU and RAM only or with CPU, RAM, and disk. The first option allows the instance to be downsized again.
VPS snapshots are like "point in time" copies of the server instance. This is necessary for protection against data loss, data corruption, or simply creating a separate image from an existing instance. Creating a snapshot for an existing instance is a simple process in Lightsail:
If the instance is deleted for some reason, it can be recovered from a snapshot, if one exists.
However, there is no simple way to automate the snapshots process. Of course, this can be automated with a bit of scripting and scheduling a job from another server, but we could not find the feature as a native option.
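One workable approach, sketched below, is a cron job on an administrative host that calls the Lightsail API on a schedule (the instance and snapshot names are illustrative):

# crontab entry: snapshot the instance "web-1" every night at 02:00
0 2 * * * aws lightsail create-instance-snapshot --instance-name web-1 --instance-snapshot-name web-1-$(date +\%Y\%m\%d)

Note that this only creates snapshots; pruning old ones (for example with delete-instance-snapshot) would need to be scripted separately.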
DigitalOcean also offers snapshots:
However, there is also a scheduled backup option which can snapshot an instance once every week.
Performance monitor dashboards are present in both Amazon Lightsail and DigitalOcean.
With Lightsail, the performance counters are similar to what’s available for EC2 in CloudWatch: CPUUtilization, NetworkIn, NetworkOut, StatusCheckFailed, StatusCheckFailed_Instance and StatusCheckFailed_System. The metrics can be viewed over a period of two weeks. However, unlike CloudWatch for EC2, it’s not possible to create an alert on a metric.
DigitalOcean has a graph option for its Droplets: this would show the Droplet’s public network usage, CPU usage, and disk IO rate. In recent times it also added a feature where users can opt to capture more metrics. For existing Droplets, users can install a script, and for new Droplets, they can enable a monitoring option. With the monitoring agent installed, three more metrics are added: memory, disk usage and top processes sorted by CPU or memory.
Furthermore, it’s also possible to create alerts based on any of these metrics. The alerts can be sent to an e-mail address or a Slack channel.
Amazon Lightsail and DigitalOcean both allow users to attach "static IPs" to their server instances. A static IP is just like a public IP because it’s accessible from the Internet. However as the name suggests, static IPs don’t change with instance reboots. Without a static IP, an instance will get a new public IP every time it’s rebooted. When a static IP is attached to an instance, that IP remains assigned to the instance through system reboots. This is useful for internet facing applications like web or proxy servers.
In Amazon Lightsail, a static IP address can be assigned to an instance or kept as a standalone resource. Also, the IP can be re-assigned to another instance when necessary.
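A minimal sketch of that lifecycle with the AWS CLI (resource names are illustrative):

# Reserve a static IP, attach it, then later move it to a replacement instance.
aws lightsail allocate-static-ip --static-ip-name web-ip
aws lightsail attach-static-ip --static-ip-name web-ip --instance-name web-1
aws lightsail detach-static-ip --static-ip-name web-ip
aws lightsail attach-static-ip --static-ip-name web-ip --instance-name web-2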
DigitalOcean has a slightly different approach. Here, the public IP assigned to the instance doesn’t change even after the system goes through a power cycle (hard reboot) or power off / power on. It also offers something called a "Floating IP," which is essentially the same as a static IP. A floating IP can be assigned to an instance and, if necessary, detached and reattached to another instance. This allows Internet traffic to be redirected to different machines when necessary. The image below shows how floating IPs are managed.
An Amazon Lightsail instance comes with a private IP address by default.
For DigitalOcean, this has to be enabled when the Droplet is created.
We could not find any option for enabling IPv6 for Lightsail instances. As shown above, this is possible with DigitalOcean instances.
Amazon Lightsail enables users to create multiple DNS zones (up to three DNS zones are free). This is a great feature and very simple to set up. Users who have already registered domain names can create DNS zones for multiple sub-domains and map them to static IP addresses. Those static IPs can, in turn, be assigned to Lightsail instances. The image below shows how we are creating a DNS zone for our test website.
Once the zone is created, Lightsail provides its own DNS name servers for users to configure their domain records. Users can also register their domain names with Amazon Route 53 without having to use a third-party domain name registrar.
A similar facility exists in DigitalOcean, except it allows users to create reverse domain lookup with PTR records.
This is an area where Amazon Lightsail fares better than DigitalOcean. With EC2 instances, AWS offers a firewall feature called "security groups". Security groups can control the flow of traffic for certain ports from one or more IP addresses or ranges of addresses. In Lightsail, the security group feature is present in a rudimentary form.
There is no finer-grained control, though: there is no way to restrict traffic to specific source IP addresses or ranges.
DigitalOcean Droplets do not have this feature. Any firewall rules have to be configured from within the instance itself.
Both the Amazon Web Services and DigitalOcean consoles offer two-factor authentication. With Amazon, it’s possible to enable CloudTrail logs, which can track every API action run against resources like EC2. Lightsail has a rudimentary form of this audit trail ("Instance history"), and so does DigitalOcean ("Security history").
This is an area where Amazon Lightsail clearly wins. It’s possible for Lightsail instances to access existing AWS resources and services. This is possible when VPC peering for Lightsail is enabled. Lightsail instances run within a VPC which is not available from the regular VPC screen of AWS console. Unless VPCs are "peered", they are separate networks and resources in one VPC cannot see resources in another. Peering makes it possible. It is possible to configure VPC peering for the "shadow VPC" Lightsail uses. This is configured from the advanced features screen.
With VPC peering enabled, Lightsail’s capabilities can be extended beyond a simple computing platform, something DigitalOcean cannot provide.
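The same switch also appears to be exposed programmatically; a sketch with the AWS CLI (verify against the current Lightsail CLI reference before relying on it):

aws lightsail peer-vpc        # peer the hidden Lightsail VPC with your account's default VPC
aws lightsail is-vpc-peered   # check the current peering state
aws lightsail unpeer-vpc      # undo the peering if it is no longer needed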
Load balancers are a great way to distribute incoming network traffic to more than one computing node. This can help the infrastructure become more resilient against failures or distribute read and write traffic evenly across the servers. When application traffic reaches a load balancer, the load balancer can send it to a node in the group either in round-robin fashion or based on a specific algorithm. Any node not responding to traffic from the load balancer will be marked as "Out Of Service" after a number of attempts.
Although it would help developers test their applications for real-life use cases, Amazon Lightsail is yet to provide this feature.
DigitalOcean has recently added it to their offering, but it’s not cheap: it costs $20 per month.
AWS Billing Alerts are a great way for customers to keep track of their cloud infrastructure spending. With billing alerts, AWS will send an automatic notification to a customer when monthly AWS spending goes over a set limit. Typically the alert is set up to send an e-mail. Billing alerts are a feature of CloudWatch metrics and can be used for Lightsail usage as well.
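As a sketch, a CloudWatch billing alarm can be created from the CLI once billing metrics are enabled for the account (the threshold and the SNS topic ARN below are placeholders):

# Billing metrics are published only in the us-east-1 region.
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name monthly-spend-over-50-usd \
    --namespace AWS/Billing \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 50 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts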
DigitalOcean has a similar feature for billing alerts.
Unlike AWS though, DigitalOcean would send the notification to an e-mail address only. With AWS, the alert can be sent to an SNS topic which can have a number of subscribing endpoints like e-mail, SMS, application or HTTP.
Both Lightsail and DigitalOcean have extensive API support for programmatic access and administration of their infrastructure. Both vendors make the documentation well accessible from their public websites.
Lightsail APIs are easily accessible from the AWS command line interface (CLI). There are also software development kits (SDK) available for a number of programming languages like Java, Python, Ruby, PHP, C#, Go, JavaScript, Node.js and C++.
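For instance, a quick inventory query might look like the sketch below (the field names reflect the Lightsail API’s instance objects; double-check them against the current documentation):

aws lightsail get-instances \
    --query 'instances[].{name:name,state:state.name,ip:publicIpAddress}' \
    --output table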
DigitalOcean APIs are fairly extensive as well, and their documentation shows how they can be invoked with HTTP payloads. Language support includes Ruby and Go. Unlike AWS, DigitalOcean does not come with a CLI that can be automated with bash or PowerShell.
Third-party tools like Terraform from HashiCorp also have a limited number of resources available for both the Lightsail and DigitalOcean providers.
Online documentation for both Amazon Lightsail and DigitalOcean is easy to follow and can help a user get up and running in no time. Technical support requests for Lightsail can be accessed from the AWS console. A similar link exists for DigitalOcean users on its website.
DigitalOcean also offers a vast array of very useful tutorials. These tutorials can help users set up and run many different workloads on the DigitalOcean platform.
From our test comparison, we found DigitalOcean leading Amazon Lightsail in quite a few important areas. So does this mean developers and start-ups should shun Lightsail for now? We would say no. It depends on individual use cases and whether your organization is already an Amazon customer. Lightsail’s integration with other AWS services gives it an obvious advantage. Also, since the price tag for similar instances is very similar, you may want to work with Lightsail unless your application requires some of the features it’s lacking… Typical use cases can include:
Also, with AWS making a move into the VPS market, it’s only a matter of time before other players like Microsoft or Google start to include it in their arsenal. As the competition gains momentum, more advanced features are sure to follow. Needless to say, established VPS providers won’t be sitting idle either; they will be adding new features to keep their competitive advantage. With this in mind, we think Amazon needs to add some extra niche capabilities to its VPS platform to make it a more viable competitor.
The content below is taken from the original (Cloud Speech API is now generally available), to continue reading please visit the site. Remember to respect the Author & Copyright.
By Dan Aharon, Product Manager
Last summer, we launched an open beta for Cloud Speech API, our Automatic Speech Recognition (ASR) service. Since then, we’ve had thousands of customers help us improve the quality of service, and we’re proud to announce that as of today Cloud Speech API is now generally available.
Cloud Speech API is built on the core technology that powers speech recognition for other Google products (e.g., Google Search, Google Now, Google Assistant), but has been adapted to better fit the needs of Google Cloud customers. Cloud Speech API is one of several pre-trained machine-learning models available for common tasks like video analysis, image analysis, text analysis and dynamic translation.
With great feedback from customers and partners, we’re happy to share that we have new features and performance improvements to announce:
Among early adopters of Cloud Speech API, we have seen two main use cases emerge: speech as a control method for applications and devices like voice search, voice commands and Interactive Voice Response (IVR); and also in speech analytics. Speech analytics opens up a hugely interesting set of capabilities around difficult problems e.g., real-time insights from call centers.
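As a reference point, a minimal transcription request against the v1 REST endpoint looks roughly like the sketch below (the API key and the Cloud Storage path are placeholders you would replace with your own):

# Transcribe a 16 kHz LINEAR16 audio file stored in Cloud Storage.
curl -s -X POST -H "Content-Type: application/json" \
    "https://speech.googleapis.com/v1/speech:recognize?key=YOUR_API_KEY" \
    -d '{
          "config": {"encoding": "LINEAR16", "sampleRateHertz": 16000, "languageCode": "en-US"},
          "audio": {"uri": "gs://your-bucket/your-audio.raw"}
        }'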
Houston, Texas-based InteractiveTel is using Cloud Speech API in solutions that track, monitor and report on dealer-customer interactions by telephone.
“Google Cloud Speech API performs highly accurate speech-to-text transcription in near-real-time. The higher accuracy rates mean we can help dealers get the most out of phone interactions with their customers and increase sales.” — Gary Graves, CTO and Co-Founder, InterActiveTel
Saitama, Japan-based Clarion uses Cloud Speech API to power its in-car navigation and entertainment systems.
“Clarion is a world-leader in safe and smart technology. That’s why we work with Google. With high-quality speech recognition across more than 80 languages, the Cloud Speech API combined with the Google Places API helps our drivers get to their destinations safely.” — Hirohisa Miyazawa, Senior Manager/Chief Manager, Smart Cockpit Strategy Office, Clarion Co., Ltd.
Cloud Speech API is available today. Please click here to learn more.
The content below is taken from the original (Azure File Storage on-premises access for Ubuntu 17.04), to continue reading please visit the site. Remember to respect the Author & Copyright.
Azure File Storage is a service that offers shared file storage for any OS that implements the supported SMB protocol. Since GA, we have supported both Windows and Linux; however, on-premises access was only available to Windows. While Windows customers widely use this capability, we have received feedback that Linux customers wanted to do the same. With this capability, Linux access extends beyond the storage account region to cross-region as well as on-premises scenarios. Today we are happy to announce Azure File Storage on-premises access from across all regions for our first Linux distribution, Ubuntu 17.04. This support works right out of the box and no extra setup is needed.
Steps to access Azure File Share from an on-premises Ubuntu 17.04 or Azure Linux VM are the same.
Step 1: Check to see if TCP 445 is accessible through your firewall. You can test to see if the port is open using the following command:
nmap <azure storage account>.file.core.windows.net
Step 2: Copy the command from Azure Portal or replace <storage account name>, <file share name>, <mountpoint> and <storage account key> on the mount command below. Learn more about mounting at how to use Azure File on Linux.
sudo mount -t cifs //<storage account name>.file.core.windows.net/<file share name> <mountpoint> -o vers=3.0,username=<storage account name>,password=<storage account key>,dir_mode=0777,file_mode=0777,sec=ntlmssp
Step 3: Once mounted, you can perform file operations on the share.
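For example (the file name is just an illustration; <mountpoint> is the directory used in the mount command above):

df -h <mountpoint>             # confirm the share is mounted and check free space
cp ./report.csv <mountpoint>/  # copy a local file onto the share
ls -l <mountpoint>             # list the share's contents; other clients of the share see the same files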
Backporting of this enhancement to Ubuntu 16.04 and 16.10 is in progress and can be tracked here: CIFS: Enable encryption for SMB3. RHEL support is also in progress; full support will be released with the next release of RHEL.
We are excited to see tremendous adoption of Azure File Storage. You can try Azure File storage by getting started in under 5 minutes. Further information and detailed documentation links are provided below.
We will continue to enhance the Azure File Storage based on your feedback. If you have any comments, requests, or issues, you can use the following channels to reach out to us:
The content below is taken from the original (Indian bar legally evades closure by adding 250-meter long maze entrance), to continue reading please visit the site. Remember to respect the Author & Copyright.
Since April 1 a large number of the bars, pubs and liquor shops across India has gone out of business, thanks to a Supreme Court order that the outlets should be at least 500m away from state and national highways…The Aishwarya Bar in North Paravoor, a Kochi suburb has built a 250m-long maze-like walkway to the entrance, theoretically making it more than 500m away from the highway.
— India Times
In a move that has even delighted the bureaucrats who initially drafted the rule that no bar could be within 500 meters of a highway, an Indian bar has managed to stay in business by virtue of building a 250 meter long maze that, like the snaking lines at an amusement park, greatly expands the literal number of feet one must traverse to get to the bar’s actual entrance. While this clever invention may necessitate a designated walker to help steer patrons back out, it ranks among the best architectural interpretations of law we’ve seen in some time. Cheers!
h/t bldgblog
The content below is taken from the original (Google’s AutoDraw turns your clumsy scribbles into art), to continue reading please visit the site. Remember to respect the Author & Copyright.
Google wants to help you get in touch with your inner Picasso. Today, it’s launching AutoDraw, a web-based tool that uses machine learning to turn your hamfisted doodling into art. It’s similar to, but clearly far more advanced than, Android Wear’s ability to recognize a crudely drawn smiley face and replace it with an emoji.
The app is free and it works on any phone, computer or tablet. It’s pretty straightforward: draw your best version of a cake, for example, and the auto suggestion tool will try to guess what that amorphous blob actually is. Then, you can choose from a number of better looking cakes made by talented artists. Or, if amorphous blob is actually what you were striving for, you can turn off the auto suggestions and doodle away.
AutoDraw uses the same technology as another Google experiment called QuickDraw. It’s a mini-game that tells you which objects to draw, like an eye or a helicopter, then gives the AI 20 seconds to recognize it. AutoDraw is more of a creative tool, allowing users to make things like posters or coloring books. But, both likely serve the same purpose of teaching a neural network to recognize doodles.
Right now, Google claims AutoDraw can guess hundreds of drawings, and the company plans to add more in the future. If you have suggestions on what objects Google should add, or you’re an artist who’d like to contribute to the project, you can do that here.
Via: TechCrunch
Source: Google
The content below is taken from the original (Hybrid Cloud just got easier: New Azure Migration resources and tools available), to continue reading please visit the site. Remember to respect the Author & Copyright.
Most customers we talk with are using a Hybrid Cloud approach to take advantage of the cloud and their existing applications and infrastructure. Whether you’re considering migrating some or all your applications to the cloud, the transition from on-premises requires careful planning. You need to understand how much it will cost, how to size your environment, what virtual machine options to choose, and more – and you want to do all this in the smartest and most cost-effective way possible.
With this in mind, today we are offering new tools and resources to help you tap into the power of the hybrid cloud to optimize your business:
With Azure, you get truly consistent hybrid capabilities across cloud and on-premises environments, offering you the flexibility to choose the optimal location for each application, based on your business requirements and reducing the complexity of moving to the cloud. Migrating virtual machines to the cloud is often one of the first steps organizations take in their cloud journey and is a natural part of any hybrid cloud strategy.
Learn more about the tools and resources available today by visiting the Azure Migration page. We’d love to hear from you on how we can continue making your path to the cloud easy and effective.
The content below is taken from the original (No more IP addresses for countries that shut down internet access), to continue reading please visit the site. Remember to respect the Author & Copyright.
Governments that cut off internet access to their citizens could find themselves refused new IP addresses under a proposal put before one of the five global IP allocation organizations.
The suggested clampdown will be considered at the next meeting of internet registry Afrinic in Kenya in June: Afrinic is in charge of managing and allocating IP address blocks across Africa.
Under the proposal, a new section would be added to Afrinic’s official rules that would allow the organization to refuse to hand over any new IP address to a country for 12 months if it is found to have ordered an internet shutdown.
The ban would cover all government-owned entities and others that have a "direct provable relationship with said government." It would also cover any transfer of address space to those entities from others.
That withdrawal of services would escalate if the country continued to pull the plug on internet access. Under the proposal: "In the event of a government performing three or more such shutdowns in a period of 10 years – all resources to the aforementioned entities shall be revoked and no allocations to said entities shall occur for a period of 5 years."
The proposal was sparked by a recent increase in the number of complete nationwide shutdowns of internet service – something that has been a cause of increasing concern and ire within the internet infrastructure community.
The trend started during the Egyptian revolution back in 2011, when authorities killed the entire country’s web access prior to a big protest march. Employees of ISPs and mobile phone companies reported troops turning up at their homes and pointing guns at their families in order to enforce the shutdown.
Until then, many governments had assumed it was largely impossible to turn off internet access to their entire nation. Soon after, government departments educated themselves about AS numbers and internet routing and started using their power to set up systems that would allow them to order the shutdown of all networks from a central point.
While some countries only used this ability in the more dire circumstances – riots or terrorist attacks – shutdowns quickly started being used preemptively and for political reasons.
Bangladesh switched off its entire country’s net connectivity prior to the sentencing of former government leaders for war crimes. Then Iraq started shutting down the entire country for several hours at a time in order to prevent exam cheating.
While these were enormously frustrating, the shutdown typically lasted only a few hours. But then Cameroon decided to cut off the internet for weeks – and targeted specific communities. The country’s southwest and northwest provinces were taken offline following violent protests: a decision that had a hugely damaging impact on its "Silicon Mountain" startup zone, and also took down its banks and ATMs.
In India, the number and frequency of internet shutdowns has sparked a new protest movement and website that tracks them.
The situation has grown so dire that the United Nations got involved and officially condemned the practice at a meeting of the Human Rights Council back in July. Despite opposition from a number of countries – including China, Russia, India and Kenya – a resolution passed forbidding mass web blockades.
The reality, however, is that there is nothing to prevent governments from shutting down the internet and very little anyone can do in the face of a determined push from the authorities.
But now the techies are fighting back. The Afrinic proposal has been put forward by the CTO and the Head of IP strategy for Liquid Telecommunications – a large pan-African ISP – as well as the CEO of Kenya’s main ISP Association. As such it is a proposal that many are taking seriously.
"While the authors of this policy acknowledge that what is proposed is draconian in nature, we feel that the time has come for action to be taken, rather than just bland statements that have shown to have little or no effect," they wrote, noting that "over the last few years we have seen more and more governments shutting down the free and open access to the internet in order to push political and other agendas."
Whether governments like it or not, they are reliant on the provision of IP addresses to expand their networks and digital economies, and Afrinic is the only organization that can realistically provide them. If the policy does get passed, it would almost certainly act as a strong deterrent against government ministers shutting down internet access.
But there is a wealth of problems with the idea, not least of which is determining what counts as an internet shutdown. The authors put forward a suggested definition:
An internet shutdown is deemed to have occurred when it can be proved that there was an attempt, failed or successful, to restrict access to the internet to a segment of the population irrespective of the provider or access medium that they utilize.
That wording is likely to be very heavily scrutinized. It would also require some person or group to determine that a shutdown has happened – which would likely become a politically charged decision. And none of that considers the fact that national leaders are unlikely to accept punitive terms being placed on them by a third party.
In short, it is a huge political headache. But it may also be one that only the internet community is capable of taking on and winning. The next few months will see whether the ‘net community in Africa is willing to take on the challenge for the greater good. ®
The content below is taken from the original (Choosing an Azure Virtual Machine — April 2017), to continue reading please visit the site. Remember to respect the Author & Copyright.
This post will explain how to select an Azure virtual machine (VM) series and size. This article includes updates to past versions of this post such as N-, H-, and L-Series VMs, as well as using the Azure Compute Unit (ACU) measurement to compare processor performance.
Azure is McDonald’s. It is not a Michelin Star restaurant. You cannot say, “I’d like a machine with 4 cores, 64GB RAM, and a 200GB C: drive.” That simply is not possible in Azure. Instead, there is a pre-set list of series of machines. Within those series, there are pre-set sizes.
The C: drive is always 127GB unless you upload your own template. No matter what disk size the pricing pages claim, that figure is actually the size of the temp drive. Any data you have goes into a data drive, whose size you specify. Therefore, you control the cost. Remember that the storage (OS and data disks) costs extra!
There are two things to consider here. The first is common sense. The machine will need as much RAM, CPU (see the ACU section later in this article), and disk space as your operating system and service(s) will consume. That is no different than how you sized on-premises physical or VMs in the past.
The other factor of cloud-scale computing is that you should deploy an army of ants, not a platoon of giants. Big VMs are extremely expensive. A more affordable way to scale is to deploy smaller machines. Smaller machines can share a workload. They also can be powered on/off where billing starts/stops based on the demand or you can use the Scale Sets feature.
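Because billing pauses when a VM is deallocated, even a simple script can switch your ants on and off with the workload. A minimal sketch, assuming the AzureRM PowerShell module that was current at the time of writing and using a made-up resource group and VM name:

[PS] C:\> Stop-AzureRmVM -ResourceGroupName "AntsRG" -Name "worker01" -Force
[PS] C:\> Start-AzureRmVM -ResourceGroupName "AntsRG" -Name "worker01"

Stop-AzureRmVM deallocates the machine, so compute billing stops, unless you add the -StayProvisioned switch; Start-AzureRmVM brings it back and billing resumes.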
Microsoft created the concept of an ACU to help us distinguish between the various processor and VM series options that are available to us in Azure. The low-spec Standard A1 VM has a baseline rating of 100 and all other machines are scored in comparison to that machine. A VM size with a low number offers low-compute potential and a machine with a higher number offers more horsepower.
Note that some scores are marked with an asterisk. This represents a VM that is enhanced using Intel Turbo technology to boost performance. The results from the machine can vary depending on the machine size, the task being done, and other workloads also running on the machine.
Browse to the HPE or Dell sites and have a look at the standard range of rack servers. You will find DL360s, R420s, DL380s, R730s, and so on. Each of these is a series of machines. Within that series, you will find a range of pre-set sizes. Once you select a series, you find the size that suits your workload. The per-hour price is charged per minute of running. You will see this price listed. Let’s take a look at the different series of Azure VMs. Please remember that not all of the series are in all regions.
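Before you settle on a series, it can help to see which sizes a region actually offers. A quick sketch using the AzureRM PowerShell module of the day; the region name is only an example:

[PS] C:\> Get-AzureRmVMSize -Location "North Europe" | Sort-Object NumberOfCores | Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount

The output is a simple table of every size available in that region, which makes it easy to confirm that the series you have in mind exists there before you start deploying.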
In the server world, Dell replaced the R720 with the R730. We stopped buying R720s and started buying R730s. HPE replaced the DL380 G6 with the DL380 G7, which in turn was replaced by the Gen 8. We stopped buying the older machine and started buying the newer machine.
The same thing happens in Azure. As the underlying Azure fabric improves, Microsoft occasionally releases a new version of a series. For example, the Dv2-Series replaced the D-Series. The Standard Av2-Series replaced the Standard A-Series.
The older series is still available but it usually makes sense to adopt the newer series. Late in 2016, Microsoft changed pricing so that the newer series was normally more affordable than the older one.
If you are reading this post, then you are deploying new services/machines. You should be using the latest version of a selected series. I will not detail older/superseded series of machines in this article.
A code is normally used in the name of a VM to denote a special feature in that VM size. Examples of such codes are:
A is the start of the alphabet and this is the entry level VM.
This is the lowest and cheapest series of machines in Azure. The A-Series (Basic and Standard) uses simulated AMD Opteron 4171 HE 2.1GHz virtual processors. This AMD processor was designed for scaling out cores with efficient electricity usage, rather than for horsepower. It is fine for lighter loads like small web servers, domain controllers, and file servers.
The Basic A-Series machines have some limitations:
I like this series for domain controllers because my deployments are not affected by the above. It keeps the costs down.
This is the most common machine that I have encountered in Azure. Using the same hardware as the Basic A-Series, the Standard Av2-Series has some differences:
These are the machines I use the most. They are priced well and offer good entry-level worker performance. If I find that I need more performance, then I consider Dv2-Series or F-Series.
D is for Disk in the D-Series.
The key feature of the Dv2-Series machine is disk performance. It can offer more throughput (Mbps) and speed (IOPS) than an F-Series VM. It is considered an excellent storage performance series for workloads such as databases.
Additional performance is possible because this is the first of the Azure machines to offer an Intel Xeon processor, the Intel Xeon E5-2673 v3 2.4GHz CPU. It can reach 3.1GHz using Intel Turbo Boost Technology 2.0.
The Dv2-Series also offers an S-option which supports Premium Storage. Microsoft recommends the DSv2-Series for SQL Server workloads. That has led to some of my customers asking questions when they get their first bill. Such a blanket spec generalization is unwise. Some SQL workloads are fine with HDD storage and some will require SSD. If you need lots of IOPS, then Premium Storage is the way to go. Do not forget that you can aggregate Standard Storage data disks to get more IOPS.
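As a rough illustration of aggregating Standard Storage data disks, here is a minimal Storage Spaces sketch you might run inside the guest OS of a Windows VM after attaching several empty data disks. The pool and volume names, the drive letter, and the decision to stripe across every attached disk are all just examples; tune the column count and interleave to your workload:

# Pool every attached data disk that is not yet in use
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create one striped (Simple) virtual disk across the pooled disks, then format it
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize
Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter F -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false

Because the IOPS limits of the individual disks add up across the stripe, this can be enough for workloads that do not justify Premium Storage.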
G is for Goliath.
The G-Series VMs offer much more RAM per core than any of the other VMs in Azure. They go all the way up to 448GB RAM based on hosts with a 2.0GHz Intel Xeon E5-2698B v3 CPU. If you need a lot of memory, then these are the machines to choose.
The G-Series also offers more data disk capacity and performance than the much more affordable Dv2-Series.
The F-Series name reminds me of a pickup truck and I think of an all-rounder with great horsepower.
The F-Series uses the same Intel Xeon E5-2673 v3 2.4GHz CPU as the Dv2-Series, with the same 3.1GHz turbo boost. It does, however, have slightly lower disk performance.
Microsoft talks about this being a good series for web servers that require performance. I see the F-Series as being the choice when you need something with more CPU performance than an A-Series machine. It does not have as much focus on disk performance as the Dv2-Series.
The N name stands for NVIDIA and that is because the hosts will have NVIDIA chipsets. They are presented directly to the VMs using a new Hyper-V feature called Discrete Device Assignment (DDA).
There are two versions of the N-Series:
The H is for SAP Hana or high-performance computing (HPC).
The H-Series VMs are sized with a large number of cores and run on hardware with Intel Haswell E5-2667 V3 3.2GHz processors and DDR4 RAM.
There are two core scenarios for the H-Series:
If you want low-latency storage, then the L-Series is for you.
We do not have too much detail, but it appears that the Ls-Series runs on the same host hardware as the G-Series. Keep in mind that this series is often referred to simply as the L-Series. The focus is not on scale, as it is with the G-Series; the focus is on low-latency storage.
The VMs of the Ls-Series use the local SSD storage of the hosts for data disks. This offers much faster read times than can be achieved with Premium Storage (SSD), which is shared by many hosts across a network, much like a flash-based SAN.
This is a very niche machine series. It is expected that this type will be used for workloads where storage latency must be as low as possible, such as NoSQL.
Microsoft has announced that a new series and a successor to an existing series are both on the way. Microsoft refers to these machines as being the “next generation of Hyper-Threaded virtual machines.”
Both of these series will be based on hardware with Intel Broadwell E5-2673 v4 2.3GHz processors, which enables machine sizes of up to 64 vCPUs.
The post Choosing an Azure Virtual Machine — April 2017 appeared first on Petri.
The content below is taken from the original (Bringing Compliance to Office 365 Groups), to continue reading please visit the site. Remember to respect the Author & Copyright.
On April 6, Microsoft announced that “You are now able to manage group content produced by setting up retention policies to keep what you want and get rid of what you don’t want. Admins can now create Office 365 Groups retention policies that apply to the group’s shared inbox and files in one step using the Office 365 Security and Compliance Center.” Sounds good. But what does this mean in practical terms?
In my last article, I explained how Office 365 classification labels and retention policies function inside the new Office 365 data governance framework. In this article, I explore how to apply retention policies to Office 365 Groups.
Is compliance important for Groups? The answer is obviously yes, especially when organizations use Groups as the basis for teams to collaborate using conversations, OneNote, and documents. Valuable information that needs to be controlled by an organization can swiftly accumulate inside Groups and that information is no less important than what exists inside mailboxes and public folders.
My previous article covers the basics of Office 365 classification labels and retention policies. You can use both features with Office 365 Groups. Group owners (but not members) can use classification labels to mark specific conversations for special processing. Today, only OWA shows classification labels for conversations in the group mailbox. Outlook and the Outlook Groups mobile app both need refreshes before labels show up in their interfaces.
However, all group members can assign classification labels to files in the group document library and any group member can overwrite a label previously assigned by another group member, except if that label is a formal record (in which case only the site collection administrator can update the item). The reasons why labels behave differently in a group’s mailbox and document library are because of the way that Exchange retention tags and SharePoint permissions work. Although this might be confusing at times, I doubt it will cause much concern to end users.
Yammer-based groups do not support the application of classification labels to their conversations, but you can apply the labels to files in the document library.
As an example of how retention policies work with Groups, we assume that you have configured an Office 365 connector to bring some information from a network data source into a group. In many cases, this information is of transient interest, so we should clean out these items after they are no longer useful. To do this, we create a retention policy to remove items after 7 days and apply the policy to the relevant groups.
Go to the Data governance section of the Security and Compliance Center, select Retention, and then Create. Only members of the Compliance Administrator role group for the Security and Compliance Center can create or manage retention policies. The first step is to assign a name and description (which only administrators see) to the new policy (Figure 1).
The next step is to decide whether the policy is going to keep content or remove it from Office 365 (Figure 2). A delete action in a retention policy is the equivalent of “delete and allow recovery” in an Exchange retention tag. Items are not permanently removed and can be recovered later if necessary.
In either case, we must know how long the retention period lasts and how to decide when the period expires. You can keep content forever, but it is more usual to set a period like five years. To remove content, you must tell Office 365 how to calculate the age of the content. For mail messages, this is when they are created, but for documents and other items, you can choose creation or modification date.
At the bottom of Figure 2, you see the choice to use advanced retention settings. This means that you want to create a policy to find content based on a KQL query or DLP sensitive data types. For instance, you might want to find all content that has the keyword “Project Moonshot” and keep that for ten years. Or you could find all content that has sensitive data covered by the U.S. Patriot Act (credit cards, bank account numbers, taxpayer identification numbers, and social security numbers) and make sure that these items are removed from Office 365 after five years.
One of the interesting aspects of creating Office 365 retention policies and classification labels is that many of the settings that control how the policy functions cannot be changed after creation. You can alter the retention period for a policy, add new locations to its scope, and alter the KQL query to find content for the policy to apply. However, you cannot change its basic operation. For instance, you cannot change a policy that keeps content into one that simply removes content.
The logic is that users expect consistency in how their data is processed. If you can change the fundamental operation of how retention works inside a tenant, users will not know whether their data will be kept or removed, or when this will happen. Cue fear, doubt, and uncertainty.
For this reason, it is wise to take time to chart out how retention will work across the tenant before you create any policies. Fools rush to implement retention without thought.
Next, we set the scope for the policy (Figure 3). You can decide to apply to all supported locations across Office 365 (an org-wide policy) or you can select specific locations (a non-org wide policy). You can have up to 10 org-wide and 1,000 non-org wide retention policies in a tenant. In general, you should simplify retention whenever possible by limiting the number of active policies.
In this case, we know that only a subset of groups is configured with the kind of connectors that import data we might want to clean up, so we opt for specific locations.
When you choose specific locations (Figure 4), you can select which Office 365 workloads you want to process plus decide to include or exclude sub-locations. Unless you really want to remove every piece of content in every location a week after it is created, it would not make sense to apply a general “Remove everything after 7 days” policy across Office 365. We know that we only want to apply the policy to a small set of selected Office 365 Groups, so we can focus on that workload. However, you can apply the same policy to selected parts of multiple locations, like the single SharePoint site and the 2 groups selected here.
Click Choose groups to select the groups to process and expose the dialog shown in Figure 5. As you can see, you can search to find the groups to include in the policy. You can add groups that use Yammer to store their conversations. In this case, the conversations are ignored because Yammer does not yet support the data governance framework, but the documents in the group document library are covered.
When complete, click Choose to return to the Choose Locations screen and then Next to move to complete the policy.
Before reviewing the policy settings (Figure 6), you have the choice to turn on preservation lock. Do not turn on preservation lock without very good reason. If you do, you will only be able to make limited changes to the policy in the future (add new locations to its scope) and the policy cannot be disabled. In addition, any content that comes within the scope of the policy cannot be updated or removed while the policy is in force.
Preservation locks exist for a good reason – to ensure that information needed to satisfy certain regulatory requirements cannot be interfered with by anyone, including administrators, for the duration of the policy. However, most tenants do not need preservation lock and indeed, locking data down complicates situations like tenant merges and splits or, should that happen, if you ever want to leave Office 365 and move your data to another service.
Resist the temptation to enable the lock and move quickly to the last step in creating a retention policy (Figure 7). If you are happy with the settings, click Create this policy.
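If you prefer scripting, the same kind of policy can be sketched with the Security and Compliance Center PowerShell cmdlets, assuming you have already connected a remote PowerShell session to the Security and Compliance endpoint. The policy name, group addresses, and comment below are placeholders of my own, not values from the walkthrough above:

[PS] C:\> New-RetentionCompliancePolicy -Name "Clean out connector items" -Comment "Remove imported connector data after 7 days" -ModernGroupLocation "ProjectHydra@contoso.com", "ProjectZeus@contoso.com"
[PS] C:\> New-RetentionComplianceRule -Name "Remove after 7 days" -Policy "Clean out connector items" -RetentionDuration 7 -ExpirationDateOption CreationAgeInDays -RetentionComplianceAction Delete

The -ModernGroupLocation parameter is what scopes the policy to Office 365 Groups, and the rule attached to the policy carries the 7-day delete action.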
After saving the new policy, Office 365 makes the individual workloads aware that the policy exists. Each workload then uses its own mechanism to distribute and enforce the policy settings on the locations included in the policy. Because a retention policy assigned to a group applies to both the mailbox and the group’s team site, both Exchange and SharePoint must acknowledge and implement the policy.
A retention policy applies to all folders in the group mailbox, so it will clean out the Sent Items folder and Calendar folder too. And of course, because SharePoint implements the same policy for the group document library, the policy affects documents stored there. Unfortunately, tenants do not have any way of knowing when the background jobs run to process SharePoint content belonging to a group.
For group mailboxes, the Managed Folder Assistant (MFA) applies the policy and then processes the mailboxes. Because the MFA runs on a seven-day workcycle, it can take up to a week before a new retention policy is effective. The only way you know that the policy works is by checking the conversations in the mailbox. If any conversations are still present after 7 days, you know that something is not right.
Another way of checking is to use the Export-MailboxDiagnosticLogs cmdlet to examine the properties updated by the MFA when it processes a mailbox. The cmdlet outputs the properties in XML format, so some formatting is needed to make the information more legible. In this case, the ElcLastRunDeletedFromRootItemCount property tells us that MFA removed 12 items in the last run. In these examples, “Office365TenantServiceHealth” is the alias for the Office 365 Group we want to process.
[PS] C:\> $Log = Export-MailboxDiagnosticLogs -Identity Office365TenantServiceHealth -ExtendedProperties
[PS] C:\> $xml = [xml]($Log.MailboxLog)
[PS] C:\> $xml.Properties.MailboxTable.Property | ? {$_.Name -like "ELC*"}

Name                                      Value
----                                      -----
ElcLastRunTotalProcessingTime             637
ElcLastRunSubAssistantProcessingTime      140
ElcLastRunUpdatedFolderCount              0
ElcLastRunTaggedFolderCount               0
ElcLastRunUpdatedItemCount                0
ElcLastRunTaggedWithArchiveItemCount      0
ElcLastRunTaggedWithExpiryItemCount       0
ElcLastRunDeletedFromRootItemCount        12
ElcLastRunDeletedFromDumpsterItemCount    0
ElcLastRunArchivedFromRootItemCount       0
ElcLastRunArchivedFromDumpsterItemCount   0
ELCLastSuccessTimestamp                   10/04/2017 22:26:55
If necessary, you can run the Start-ManagedFolderAssistant to force MFA to process the retention settings for a group. For example:
[PS] C:\> Start-ManagedFolderAssistant -Identity Office365TenantServiceHealth
If all goes well, MFA refreshes the retention settings for the mailbox and new tags and labels become available to users. Sometimes kicking the MFA convinces it to do the right thing and process a mailbox; sometimes it does not. The problem is that Exchange Online throttles background processing and does not allow mailbox assistants like MFA to run on demand. Microsoft wants to smooth out server load to reduce the risk that background processing interferes with responsiveness to clients, which is why a seven-day workcycle exists for mailbox assistants. Telling MFA to process a mailbox will have an effect only if MFA considers it reasonable to go ahead because it has not processed the mailbox recently. Running Start-ManagedFolderAssistant several times to convince MFA to start processing is a fruitless exercise.
You can recover items removed from the SharePoint document library by accessing the site recycle bin. However, because OWA does not support access to the Recoverable Items structure for a group mailbox, if a retention policy removes some items from a group mailbox that you need, you must use another method to recover the items before they expire from Recoverable Items 14 days after their deletion. If you know enough about the items to find them (for instance, they all include a common phrase), you can use a content search to find the items and then export them to a PST. At least the items are then available, even if you cannot restore them direct to the group mailbox.
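That kind of content search can also be driven from PowerShell in the Security and Compliance Center session. A minimal sketch with a made-up search name, group address, and phrase; the actual PST download still happens through the export tool in the Security and Compliance Center:

[PS] C:\> New-ComplianceSearch -Name "Recover group items" -ExchangeLocation "Office365TenantServiceHealth@contoso.com" -ContentMatchQuery '"Service incident report"'
[PS] C:\> Start-ComplianceSearch -Identity "Recover group items"
[PS] C:\> New-ComplianceSearchAction -SearchName "Recover group items" -Export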
Bringing Office 365 Groups into compliance is a major step forward for many tenants. Microsoft is removing obstacles to deployment with recent progress like the ability to recover deleted groups and now support for retention policies. It is all good news.
I have criticized Microsoft for their failure to implement compliance for Groups in the past and it is good to see them fix the problem now. Maybe so much so that I might even forgive the occasional tactical error in the evolving story of Office 365 Groups.
Follow Tony on Twitter @12Knocksinna.
Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle
The post Bringing Compliance to Office 365 Groups appeared first on Petri.
The content below is taken from the original (OpenShot (2.3.1)), to continue reading please visit the site. Remember to respect the Author & Copyright.
Open Source Video Editor
The content below is taken from the original (Internet and computing pioneer Robert Taylor dies), to continue reading please visit the site. Remember to respect the Author & Copyright.
The technology world just lost one of its most prominent innovators. Robert Taylor, best known as the mastermind of ARPAnet (the internet’s precursor), has died at 85. As the director of the Information Processing Techniques Office at the US military’s Advanced Research Projects Agency from 1965 until 1970, he helped pioneer the concept behind shared networks — he was frustrated with constantly switching terminals and wanted to access multiple systems from one terminal. While a lot of the credit goes to his team for implementing ARPAnet, he both pushed hard for the project and wrote a legendary 1968 essay that foretold the internet’s future: a vast, decentralized grid of connected devices that would reshape communication at just about every level.
This wasn’t Taylor’s only influential role. He went on to Xerox’s fabled Palo Alto Research Center, which he oversaw (first as associate manager and then as full manager) during one of its most influential periods. PARC broke ground on Ethernet technology, early internet development (by linking computers to ARPANET through Ethernet) and, of course, the windowed visual computing interfaces that people take for granted today. Even his later work at DEC was influential, as his research team helped usher in multi-processor workstations and sophisticated multitasking in Unix-based computers.
Moreover, Taylor was influential in laying the moral groundwork for the internet. Even in 1968, he was arguing that his connected future had to be open to everyone and not just the elite. He was also aware that the internet could be both harmful and helpful, and predicted the rise of botnets years in advance. Much like Tim Berners-Lee and other later pioneers, Taylor wasn’t just concerned about chips or software. He was thinking about the long-term social ramifications of his work, and you can arguably credit the mostly democratized nature of the internet to him.
Source: NPR
The content below is taken from the original (Check out new and improved Azure Training Today!), to continue reading please visit the site. Remember to respect the Author & Copyright.
Grow your Azure skills with the newly updated Massively Open Online Courses (MOOCs), which include localized courses in multiple languages such as Spanish, traditional Chinese, Japanese, German, French, Portuguese, and Russian, as well as customizable exam packs and Linux certifications priced as low as $99!
Updated courses include:
New courses on the way:
Note: If you are in the process of completing a previous version of the refreshed course, you can still finish the course and earn the digital certificate.
Read this blog post to learn more.
Questions? Feedback? Post it here!
The content below is taken from the original (In India, people can now use their thumbs to pay at stores), to continue reading please visit the site. Remember to respect the Author & Copyright.
In its bid to boost digital payments, India introduced Bhim, a smartphone app that lets users make cashless payments and transfer funds.
But the Android and iOS versions of Bhim, which stands for Bharat Interface for Money, ran into two roadblocks – many Indians don’t have smartphones and those who have phones cannot afford data connections.
On Friday, India’s Prime Minister Narendra Modi launched Bhim-Aadhaar, a merchant interface for the Bhim app that allows Indians to pay digitally at stores using their thumb imprint on a merchant’s biometric-enabled device, which could be a smartphone with a fingerprint reader running the Bhim app.
“Any citizen without access to smartphones, internet, debit or credit cards will be able to transact digitally through the Bhim-Aadhaar platform,” the government said in a statement.
Key to the functioning of the payment system is Aadhaar, a biometrics-based authentication system for residents that the government has been pushing, to the dismay of civil rights activists. They are concerned that the data could be misused by hackers if they gain access to the centralized database, or even by the government, which now has access to a lot of information on its citizens.
After enrolling, a person is given an Aadhaar number, which they enter when making a transaction. Once the Aadhaar number is entered into a terminal, the person’s fingerprint and iris are scanned and compared with information stored about the person in a central database. To pay at a store, users would also need to have their Aadhaar number linked to their bank account.
The new feature in Bhim will address about 400 million bank account customers spread across the country whose accounts are linked with Aadhaar, said A.P. Hota, managing director and CEO of the National Payments Corporation of India.
Many users have, however, complained that the system is inefficient in a country where connectivity in both urban and rural areas is still flaky. Privacy experts are also concerned that the government is extending the use of Aadhaar from its original purpose of identifying beneficiaries of government schemes for the poor to requiring the Aadhaar number for a variety of other purposes, including payment of income tax and the distribution of free mid-day meals to school children.
India’s Supreme Court has ruled that the biometric system should be used only for its original purpose, which was the distribution of state subsidies to the poor, and that the use of Aadhaar should be purely voluntary. “The production of an Aadhaar card will not be a condition for obtaining any benefits otherwise due to a citizen,” the top court ruled.
But the Indian government holds that the Aadhaar Act 2016, which was passed by parliament after the order of the Supreme Court, gives it the authority to use the Aadhaar biometrics for other purposes as well.
The Modi government is aiming at 25 billion digital transactions during the current financial year running through March 31, 2018. India has issued over 1.1 billion Aadhaar numbers so far.
The content below is taken from the original (RancherOS Hits General Availability), to continue reading please visit the site. Remember to respect the Author & Copyright.
Rancher Labs announced the general availability of RancherOS, a simplified Linux distribution built from containers, for containers. RancherOS eliminates any unnecessary libraries and services, resulting in a footprint three times smaller than that of other container operating systems. The simplified container environment reduces container boot time, increases efficiency and improves security by reducing the number of components that can be exploited.
“At BRCloud Services, we strive to deliver the best solutions to address our customers’ needs,” said Helvio Lima, CEO at BRCloud Services. “RancherOS epitomizes what modern infrastructure should look like. We’re thrilled to integrate the container operating system into our portfolio.”
RancherOS makes it simple to run containers at scale in development, test and production. By containerizing system services and leveraging Docker for management, the operating system provides an incredibly reliable and simple-to-manage container-ready environment. System services are defined by Docker Compose and automatically configured using cloud-init, reducing administrative burden. Unneeded libraries and services are eliminated, significantly reducing the OS footprint and minimizing the hassle of updating, patching and maintaining a container host operating system. Containers running on RancherOS boot in seconds, making the operating system ideal for running microservices or auto-scaling. Teams can use the Rancher container management platform to easily manage RancherOS at large scale in production.
Key features of RancherOS include:
“RancherOS is a minimalist Linux distribution that is perfect for running Docker containers,” said Sheng Liang, co-founder and CEO of Rancher Labs. “By running Docker directly on top of the kernel and delivering Linux services as containers, RancherOS delivers just what you need to build a containerized application environment.”
For additional information on Rancher software and to learn more about Rancher Labs and the company’s open source products, please visit www.rancher.com.
The content below is taken from the original (Dual SIM Hack For Single SIM Slot Phones.), to continue reading please visit the site. Remember to respect the Author & Copyright.
[RoyTecTips] shows us an ingenious hack which turns a single-SIM-slot phone into a fully functioning dual-SIM phone. All that’s needed for this hack is a heat gun, solvent, a micro SD card, a nano SIM, and some glue. The trick is that the phone has a SIM reader on the backside of the SD-card slot. Through some detailed dissection and reconstruction work, you can piggy-back the SIM on the SD card and have them both work at the same time.
Making the SD/SIM Franken-card is no picnic. First, file away the raised bottom edge of the micro SD card and file down the side until the writing is no longer visible. Next, get a heat gun and blast your nano SIM card until the plastic melts away. Then mark where the SIM card’s brains go and glue it on. Turn the phone on and, hey presto, you now have a dual SIM phone while keeping your SD storage.
This hack is reported to work on many Samsung phones that end in “7” and some that end in “5”, along with some 8-series phones from Huawei and Oppo clones of the Samsungs. Since you’re only modifying the SIM card, it’s a fairly low-risk hack for a phone. Combining two cards into one is certainly a neat trick, almost as neat as shoe-horning a microcontroller into an SD card. We wonder how long it will be before we see commercial dual SIM/SD cards on the market.
The content below is taken from the original (KFC Winged Aircraft Actually Flies), to continue reading please visit the site. Remember to respect the Author & Copyright.
[PeterSripol] has made an RC model airplane, but instead of using normal wings he decided to try getting it to fly using KFC chicken buckets. Two KFC buckets in the place of wings were attached to a motor which spins the buckets up to speed. With a little help from the Magnus effect, this creates lift.
Many different configurations were tried to get this contraption off the ground. They eventually settled on a dual prop setup, each spinning counter to the other for forward momentum. This helped to negate the gyroscopic effect of the spinning buckets producing the lift. After many failed build-then-fly attempts, they finally got it in the air. It works, albeit not too well, but it did fly and was controllable. Perhaps with a few more adjustments and a bit of trial and error, someone could build a really unique RC plane using this concept.
The content below is taken from the original (Apparently Time IS Money), to continue reading please visit the site. Remember to respect the Author & Copyright.
Some people like to tweak cars. Some like to overclock PCs. Then there are the guys like [Jack Zimmermann] who are obsessed with accurate time. He’s working on a project that will deploy NTP (Network Time Protocol) servers in different African countries and needed small, cheap, energy-efficient, and accurate servers. What he wound up with is a very accurate setup for around $200. Along the way, he built some custom hardware, and hacked a computer to sync to the GPS clock reference.
His original attempt was with a Raspberry Pi 3. However, the network adapter isn’t the fastest possible, both because it is 100 Mbps and, primarily, because it is connected via the USB bus. Network latency due to these limitations makes it difficult to serve accurate time.
His solution includes an Odroid C2. For $50 it is a very capable computer with four cores, gigabit Ethernet, and can even use eMMC storage which is faster than the usual SD card. You can still use a conventional SD card, though, if you prefer.
For a time reference, [Jack] used a Trimble GPSDO (GPS-disciplined oscillator) that outputs a PPS (pulse per second) and two 10 MHz signals. These are locked to the GPS satellite clocks which are very accurate and [Jack] says the timing is accurate to within less than 50 ns. Unfortunately, the pulse from the Trimble board is too short to read, so he designed a pulse stretching circuit.
Instead of trying to discipline the existing clock on the Odroid to the GPS reference, [Jack] removed the crystal and associated components completely. He then used a frequency generator chip to convert the 10 MHz GPS signal to the 24 MHz clock the Odroid expects. He has plans to use the extra outputs from the chip to drive the ethernet and USB clocks, too, although their absolute accuracy is probably not that critical.
We’ve seen NTP clocks before that can consume this kind of time reference. If you want to know more about the Odroid C2, we talked about them at length last year.