AMD returns to high-end gaming CPUs with Ryzen 7

The content below is taken from the original (AMD returns to high-end gaming CPUs with Ryzen 7), to continue reading please visit the site. Remember to respect the Author & Copyright.

AMD has largely ceded the performance processor space to Intel in recent years. You typically get one of its chips inside a budget PC, not an all-out gaming rig. At last, though, you might have reason to get excited: AMD is launching Ryzen 7, a desktop CPU line based on its much-ballyhooed Zen architecture. The key is a dramatic improvement in the number of instructions the chip can handle at once. A Ryzen 7 CPU can do 52 percent more work every cycle than a similarly-clocked predecessor thanks to a newer 14-nanometer manufacturing process, five times the bandwidth and some overdue architectural upgrades. This is AMD’s first processor with simultaneous multithreading (Hyper-Threading in Intel speak), so each core can execute two code paths at the same time.

Depending on the model, you might even end up with a relatively quiet, efficient system. AMD claims the 3GHz Ryzen 7 1700 is the lowest-power 8-core desktop chip you can buy, with a 65W thermal design target. And if you snag the new Wraith Spire cooler (included with the 1700), you’ll have a near-silent system with a 32dB noise level.

The initial range arrives both by itself (including compatible motherboards) and in pre-assembled systems on March 2nd, and it unsurprisingly focuses on higher-end systems. AMD is still promising a lot of value for your money, though. Your selection starts off with the Ryzen 7 1700, which at $329 is supposed to beat Intel’s slightly pricier Core i7 7700K in multithreaded chip tests. The 3.4GHz 1700X reportedly outperforms the Core i7 6800K at a lower $399 price tag, and the 3.6GHz 1800X can just edge out a not-quite-top-tier Core i7 6900K while costing less than half as much, at $499.

These are lofty claims, and there’s good reason to be skeptical. AMD’s performance claims largely revolve around one benchmark (Cinebench R15), and it’s so far saying only that you can get a "comparable" 4K gaming experience. You’ll likely have to wait until Ryzen 7 ships to see how it fares in real-world tests, which could easily be less flattering. Still, the fact that AMD is even in the same ballpark as Intel is a huge deal — this promises real competition that gives you better choices, and could force Intel to lower prices.

Source: AMD

From Zero to Nano

The content below is taken from the original (From Zero to Nano), to continue reading please visit the site. Remember to respect the Author & Copyright.

Have you ever wanted to build your own Arduino from scratch? [Pratik Makwana] shares the entire process of designing, building and flashing an Arduino Nano clone. This is not an entry-level project and requires some soldering skill to succeed with such small components, but it is highly rewarding to make. Even though it’s a cheap build, it’s probably still cheaper to just buy a Nano, but that’s not the point.

The goal here and the interesting part of the project is that you can follow the entire process of making the board. You can use the knowledge to design your own board, your own variant or even a completely different project.

[Pratik Makwana] starts by showing how to design the circuit schematic in an EDA tool (Eagle) and the corresponding PCB layout. He then uses the toner transfer method and a laminator to imprint the circuit onto the copper board for later etching and drilling. The challenging soldering process is not detailed; if you need some help soldering SMD-sized components, we’ve covered several approaches before, from a toaster oven to drag soldering with Kapton tape.

Last but not least, the bootloader firmware. This was done using an Arduino UNO working as the master and the newly created Arduino Nano clone as the target. After that you’re set to go. To run an actual sketch, just use your standard USB-to-UART converter to upload it and proceed as usual.

Voilà, from zero to Nano:

If you still want to build an Arduino but the difficulty level is a bit high for you, it might be a good idea to start with the Shrimp.

EFF: Half of web traffic is now encrypted

The content below is taken from the original (EFF: Half of web traffic is now encrypted), to continue reading please visit the site. Remember to respect the Author & Copyright.

Half the web is now encrypted, according to a new report from the EFF released this week. The rights organization noted the milestone was attributable to a number of efforts, including recent moves from major tech companies to implement HTTPS on their own properties. Over the years, these efforts have included pushes from Facebook and Twitter, back in 2013 and 2012 respectively, as well as those from other sizable sites like Google, Wikipedia, Bing, Reddit and more.

Google played a significant role, having put pressure on websites to adopt HTTPS by beginning to use HTTPS as a signal in its search ranking algorithms. This year, it also ramped up the push towards HTTPS by marking websites that use HTTP connections for transmitting passwords and credit card data as insecure.

HTTPS, which encrypts data in transit and helps prevent a site from being modified by a malicious user on the network, has gained increased attention in recent years as users have woken up to how much of their web usage is tracked, and even spied on by their own government. Large-scale hacks have also generally made people more security-minded as well.

A number of larger players on the web also switched on HTTPS in 2016, like WordPress.com, which added support for HTTPS for all its custom domains, meaning the security and performance of the encryption technology became available to every blog and website it hosted. Elsewhere in the blogosphere, Google made HTTPS connections the default in May 2016 for all the sites on its Blogspot domain, after having made HTTPS optional in fall 2015.

More recently, the U.S. government has made strides toward ditching HTTP, with the news that all new executive branch domains would use HTTPS starting in the spring of this year. Another report on the federal government’s adoption of HTTPS from January found that of roughly 1,000 .gov domains, 61 percent enforce HTTPS.

And on the mobile web, Apple told iOS developers that HTTPS connections were required for apps by the end of last year, though it later extended that deadline. 


Many major news organizations have also moved forward (including us!), while efforts like the Let’s Encrypt project have helped push others, including WordPress, to take advantage of the technology. The EFF also has its own tool, Certbot, that is being used to help webmasters – even those running smaller sites – make the switch.

The EFF noted that the average volume of encrypted web traffic varies depending on which browser maker is reporting their metrics. However, Mozilla recently said that more of its traffic is encrypted than unencrypted. Google’s Chrome, which is widely adopted, is more in line with the “50%” figure, the EFF found, as it said that over half of web pages are protected by HTTPS across different operating systems.


Not all sites support HTTPS or have it as the default, of course. The Chrome extension HTTPS Everywhere can help with the latter, but more work is still needed to get the other half of the web encrypted as well.

Featured Image: Bryce Durbin

Battle of the clouds: Amazon Web Services vs. Microsoft Azure vs. Google Cloud Platform

The content below is taken from the original (Battle of the clouds: Amazon Web Services vs. Microsoft Azure vs. Google Cloud Platform), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon Web Services is the consensus leader of the IaaS public cloud computing market according to industry watchers, but they credit Microsoft for closing the gap with Azure and say Google with its Cloud Platform has made considerable strides as well.

Gartner says as much in its annual in-depth comparison of these three cloud players, based on a list of 234 evaluation criteria. These criteria consist of features that are either required, preferred or optional for cloud providers hosting enterprise workloads.

Three years ago, AWS was a clear leader, meeting 92% of what Gartner considers required criteria for enterprise-grade IaaS public cloud providers, whereas Microsoft was back at 75%. AWS held steady at 92% last year, but Microsoft jumped up to 88% and Google came in at a respectable 70%.


New ThinPrint Client Delivers Secure, Compressed and Fast Printing with HP FutureSmart Ready Devices

The content below is taken from the original (New ThinPrint Client Delivers Secure, Compressed and Fast Printing with HP FutureSmart Ready Devices), to continue reading please visit the site. Remember to respect the Author & Copyright.

ThinPrint, provider of the world’s leading print management software, has developed a new ThinPrint Client. Within the framework of the HP… Read more at VMblog.com.

Kite-surfing could put rural Brits online

The content below is taken from the original (Kite-surfing could put rural Brits online), to continue reading please visit the site. Remember to respect the Author & Copyright.

U.K. network operator EE plans to deliver 4G internet access from a kite-like balloon first developed as an observation and communications platform for the military.

The Helikite was invented in the 1990s and has also found favor with Antarctic explorers, disaster relief workers and emergency services. EE revealed its plans to deliver wide-area network coverage from the helium-filled balloons on Tuesday, although it won’t have the first one in service until later in the year.

According to Allsopp Helikites, the maker of the kite-balloons, hoisting a 4G base station aloft can increase its range from 3 kilometers to between 30 km and 80 km.

In a city, that would cause trouble: So many smartphones would be within range that the bandwidth available to any one of them would shrink to a trickle.

In sparsely populated areas, though, operators face a different problem, struggling to find enough subscribers to make investing in new base stations economical. By increasing the range of their cells, they can serve a greater area with less equipment. That’s important for European mobile networks, which often have territorial coverage obligations written into their operating licenses. EE, for example, must offer 4G service across 92 percent of the U.K.

The system is reminiscent of Google’s Project Loon. That effort will put wireless internet access points on high-flying helium balloons that can navigate changing air currents at different altitudes to “loiter” over a particular region. In contrast, EE’s Helikite-mounted base stations will be tethered to a particular spot and fly at an altitude of about 100 meters.

EE is also planning to mount smaller base stations on drones to provide targeted network coverage during search-and-rescue operations.

Whether the base station is on a balloon or a drone, carrying data between it and mobile devices is just one of the challenges; building the “backhaul” to link it to the network core is another. For that, EE will use two systems. One, developed by Parallel Wireless, creates a mesh network from adjacent 4G base stations to carry traffic back to a point on the network backbone. Another, developed by Avanti, sends the traffic to the core via satellite.


How to develop an internet of things strategy

The content below is taken from the original (How to develop an internet of things strategy), to continue reading please visit the site. Remember to respect the Author & Copyright.

The internet of things (IoT) may present the biggest opportunity to enterprises since the dawn of the internet age, and perhaps it will be bigger. Research firm Gartner predicts there will be nearly 20 billion devices on the IoT by 2020, and that IoT product and service suppliers will generate more than $300 billion in revenue.

Successfully leveraging that opportunity — bringing together sensors, connectivity, cloud storage, processing, analytics and machine learning to transform business models and processes — requires a plan.

“In the course of my career, I’ve estimated and planned hundreds of projects,” John Rossman, who spent four years launching and then running Amazon’s Marketplace business (which represents more than 50 percent of all Amazon units sold today), writes in his new book, The Amazon Way on IoT: 10 Principles for Every Leader from the World’s Leading Internet of Things Strategies. “I’ve learned that, even before you start seeking answers, it’s imperative to understand the questions. Guiding a team to a successful outcome on a complex project requires understanding of the steps and deliverables, necessary resources, and roles and every inherent risk and dependency.”

Before you start the hardware and software design, and before you figure out how to engage developers, he says, you need to start with a better set of questions.

Rossman says there are three key phases to building a successful IoT strategy. While he presents the steps sequentially, he notes that many steps are actually taken concurrently in practice and can be approached in many different ways.

Part 1. Develop and articulate your strategy

First and foremost, Rossman says, you must narrow and prioritize your options. IoT presents a broad swathe of opportunities. Success depends upon understanding your market, evaluating the opportunities with deliberation and attacking in the right place.

Landscape analysis

It all begins with a landscape analysis. You need to thoroughly understand your industry and competitors — strengths, weaknesses, opportunities and threats (SWOT). This will help you see the megatrends and forces at play in your market.

“Creating a landscape analysis and value chain of your industry is a very important thing to do,” Rossman tells CIO.com. “Studying the market: What are they saying about IoT in your industry? Truly understanding what is your worst customer moment: Where do customers get frustrated? What data or what event improves that customer experience? What’s the sensor or IoT opportunity that provides that data?”

Value-chain analysis and profit-pool analysis

The next step, Rossman says, is to create a value-chain analysis and profit-pool analysis of your industry. It should be a broad view of the industry; don’t give in to tunnel vision with a narrow view of your current business. In some cases, this may involve launching a business in one part of the value chain as a way to gain perspective on the rest of the value chain and to identify other business opportunities.

Partner, competitor and vendor analysis

Create a map of other solutions providers in your space to develop a clear understanding of what exactly each one does, who their key clients are and what their IoT use cases are. Rossman says you should even pick a few to interview. Use this process to understand the needs of customers, the smart way those needs are already being met and where the gaps are.

Customer needs

The next step, Rossman says, is to document specific unmet customer needs and identify the key friction points your future customers are currently experiencing.

“Following the path from start to your desired outcome can help you identify details and priorities that might otherwise be dealt with at too high a level or skipped over entirely,” he writes.

Rossman warns that crafting strong customer personas and journeys is hard work, and you may need to start over several times to get it right.

“The biggest mistake you can make on these is to build them for show rather than for work,” he writes. “Don’t worry about the beauty of these deliverables until things are getting baked (if at all). Do worry about getting at insights, talking to customers and validating your findings with others who can bring insights and challenges to your work.”

Evaluation framework and scoring

Design ways to assess the success of your work.

“This includes understanding a project’s feasibility and transition points and how it will tie into other corporate strategies at your company,” Rossman writes. “Sometimes, especially if your organization is new to the field of connected devices, the success of your project should be measured in terms of what you can learn from the project rather than whether or not it can be classically considered a success.”

You might undertake some early IoT initiatives purely to gain experience, with no expected ROI, he says.

Strategy articulation

Once you have all these analyses under your belt, you need to share what you’ve learned with the rest of your team. Rossman says he’s had the most success articulating these learnings by building a flywheel model of business systems and by developing a business model.

Part 2. Build your IoT roadmap

Once you’ve explained your big idea and why your organization should pursue it, you need an IoT roadmap that helps you plan and communicate to others what the journey will be like, what is being built and how it will work.

“In creating your roadmap, embrace one of Amazon’s favorite strategies — think big, but bet small,” Rossman writes.

GPUs are now available for Google Compute Engine and Cloud Machine Learning

The content below is taken from the original (GPUs are now available for Google Compute Engine and Cloud Machine Learning), to continue reading please visit the site. Remember to respect the Author & Copyright.

By John Barrus, Product Manager

Google Cloud Platform gets a performance boost today with the much-anticipated public beta of NVIDIA Tesla K80 GPUs. You can now spin up NVIDIA GPU-based VMs in three GCP regions: us-east1, asia-east1 and europe-west1, using the gcloud command-line tool. Support for creating GPU VMs through the Cloud Console will arrive later this week.

If you need extra computational power for deep learning, you can attach up to eight GPUs (four K80 boards) to any custom Google Compute Engine virtual machine. GPUs can accelerate many types of computing and analysis, including video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis, computational chemistry, fluid dynamics and visualization.

NVIDIA K80 GPU Accelerator Board

Rather than constructing a GPU cluster in your own datacenter, just add GPUs to virtual machines running in our cloud. GPUs on Google Compute Engine are attached directly to the VM, providing bare-metal performance. Each NVIDIA GPU in a K80 has 2,496 stream processors with 12 GB of GDDR5 memory. You can shape your instances for optimal performance by flexibly attaching 1, 2, 4 or 8 NVIDIA GPUs to custom machine shapes.

Google Cloud supports as many as 8 GPUs attached to custom VMs, allowing you to optimize the performance of your applications.

These instances support popular machine learning and deep learning frameworks such as TensorFlow, Theano, Torch, MXNet and Caffe, as well as NVIDIA’s popular CUDA software for building GPU-accelerated applications.

Pricing

Like the rest of our infrastructure, the GPUs are priced competitively and are billed per minute (10 minute minimum). In the US, each K80 GPU attached to a VM is priced at $0.700 per hour per GPU and in Asia and Europe, $0.770 per hour per GPU. As always, you only pay for what you use. This frees you up to spin up a large cluster of GPU machines for rapid deep learning and machine learning training with zero capital investment.
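
To make the per-minute billing concrete: at these rates, a single K80 GPU attached to a US-based VM for 90 minutes would cost roughly 1.5 hours × $0.70 ≈ $1.05 for the GPU, on top of the price of the underlying VM.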

Supercharge machine learning

The new Google Cloud GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML), helping you slash the time it takes to train machine learning models at scale using the TensorFlow framework. Now, instead of taking several days to train an image classifier on a large image dataset on a single machine, you can run distributed training with multiple GPU workers on Cloud ML, dramatically shortening your development cycle and letting you iterate quickly on the model.

Cloud ML is a fully-managed service that provides end-to-end training and prediction workflow with cloud computing tools such as Google Cloud Dataflow, Google BigQuery, Google Cloud Storage and Google Cloud Datalab.

Start small and train a TensorFlow model locally on a small dataset. Then, kick off a larger Cloud ML training job against a full dataset in the cloud to take advantage of the scale and performance of Google Cloud GPUs. For more on Cloud ML, please see the Quickstart guide to get started, or this document to dive into using GPUs.

Next steps

Register for Cloud NEXT, sign up for the CloudML Bootcamp and learn how to Supercharge performance using GPUs in the cloud. You can use the gcloud command-line tool to create a VM today and start experimenting with TensorFlow-accelerated machine learning. Detailed documentation is available on our website.

UPS wants UAVs to cover its ‘last mile’ deliveries

The content below is taken from the original (UPS wants UAVs to cover its ‘last mile’ deliveries), to continue reading please visit the site. Remember to respect the Author & Copyright.

Drone-based deliveries are quickly moving out of the realm of science fiction. Amazon, 7-11 and a host of startups are already toying with the idea. Now, UPS, one of the biggest parcel delivery services on the planet, is testing a system that will drop packages at your door while the driver moves on to the next house.

The "last mile" isn’t just an issue for broadband service providers. That last bit of distance between a UPS driver’s van and the recipient’s door is the least efficient portion of the entire shipping process. In fact, UPS figures that if it can cut just one mile from the 66,000 routes its drivers cover every day, the company could save upwards of $50 million annually.

To do this, UPS has designed a specially equipped diesel-electric delivery van that houses a swarm of UAVs. Rather than grab a package and walk to the front door, the driver instead pulls over, hops in the back of the van, drops the package in a delivery drone’s carrying cage and launches it through the van’s retractable roof. The drone then flies up to the drop-off location, releases the package and autonomously returns to the van. What’s really cool is that the driver doesn’t have to wait for the drone to return; they can just head off to the next delivery location.

"Our drivers are still key, and our drones aren’t going to be replacing our service providers, but they can assist and improve efficiency," says Mark Wallace, senior VP of global engineering and sustainability, told Wired.

This system was developed back in 2014 by the University of Cincinnati in conjunction with the Workhorse Group, an Ohio-based company that builds electric delivery trucks. The drones themselves weigh just under 10 pounds and can fly for up to 30 minutes at a time before needing to return to the van and recharge. Unfortunately, UPS still needs approval from the FAA before we’ll see drones dropping packages at our doorsteps, and that could take a while.

Source: Wired

A 6502 Retrocomputer In A Very Tidy Package

The content below is taken from the original (A 6502 Retrocomputer In A Very Tidy Package), to continue reading please visit the site. Remember to respect the Author & Copyright.

One of the designers whose work we see constantly in the world of retrocomputing is [Grant Searle], whose work on minimal chip count microcomputers has spawned a host of implementations across several processor families.

Often a retrocomputer is by necessity quite large, as an inevitable consequence of having integrated circuits in the period-correct dual-in-line packages with 0.1″ spaced pins. Back in the day there were few micros whose PCBs were smaller than a Eurocard (100 mm x 160 mm, 4″ x 6.3″), and many boasted PCBs much larger.

[Mark Feldman] though has taken a [Grant Searle] 6502 design and fitted it into a much smaller footprint through ingenious use of two stacked Perf+ prototyping boards. This is a stripboard product that features horizontal traces on one side and vertical on the other, which lends itself to compactness.

On top of [Mark]’s computer are the processor and EPROM, while on the lower board are the RAM, UART, clock, and address decoding logic. It runs at 1.8 MHz, has 16 kB of ROM and 32 kB of RAM, which seems inconsequential in 2017 but would have been a rather impressive spec in the early 1980s.

There are three rows of pins connecting the boards, with the address bus carried up the middle and everything else at the edges. He’s toying with the idea of a third layer containing a keyboard and video display driver, something to look forward to.

The computer isn’t the whole story, though: rather than wait for an EPROM programmer to arrive, [Mark] built his own on a breadboard. He doesn’t have an eraser, so he has resorted to the Australian sunshine to (slowly) provide the UV light he needs.

We’ve featured other [Grant Searle] designs before, including a Sony Watchman clock with a Z80 board behind it, and another tiny board, this time with a Z80 rather than a 6502.


Manage On-Premises Hyper-V from Azure

The content below is taken from the original (Manage On-Premises Hyper-V from Azure), to continue reading please visit the site. Remember to respect the Author & Copyright.

In this article, I explain how you can remotely and securely manage your on-premises Hyper-V hosts, including Nano Server, from Azure’s Server Management Tools, on Windows 7 PCs, Macs, and even non-Windows tablets.

 

 

The Problem with Server Management Tools

How do you manage servers today? Unfortunately, I expect that most of you will say “I log into the server and …”. Although I remain an advocate for a GUI on Windows Server (mainly for troubleshooting reasons), I still prefer working remotely. The best way to manage a server is to use the Remote Server Administration Tools (RSAT), a set of tools that you would normally get on a server but can install on your PC.

That sounds perfect until you enter the real world. Most enterprises seem to adopt, not necessarily for widespread usage, newer versions of Windows Server faster than they deploy client OSs. This is often because there are some services that require a newer OS. The business might demand the latest CRM application, which requires Windows Server 2016 (WS2016). Some new ERP solutions might take advantage of a performance feature in WS2016. Or maybe you’ve opted to deploy WS2016 Hyper-V because of Nano Server, security, administration, operational, scalability, or management features? While that doesn’t force you to upgrade the guest OSs of your virtual machines, you might have been forced to look at your PCs. To manage Nano Server at all, or any other Windows Server installation type from your PC, you need to install RSAT on your PC. But RSAT is only ever designed to be installed on the matching desktop OS. For example:

  • RSAT for Windows Server 2012 required Windows 8
  • RSAT for Windows Server 2012 R2 required Windows 8.1
  • RSAT for Windows Server 2016 requires Windows 10

How many organizations do you think have deployed anything newer than Windows 7, even with the since-ended free upgrade to Windows 10? Not that many. And while some IT departments might be free to do limited upgrades for themselves, I’d wager that many more are limited by how they are licensed or by internal company policies (for example, “you use what you support”).

You cannot expect to work around the problem by using an older version of the administration tools with a newer version of the Windows Server OS.

Two workarounds were commonly used:

  • Local login: Administrators logged into each Windows Server that they were configuring, littering the server infrastructure with profiles, and probably browsing the Internet from a now-insecure system.
  • Remote Desktop Services (RDS): A better way was to install the server administration features on an RDS farm, using session hosts that were on the latest version of Windows Server. The administration tools could be published to each administrator. These published applications would run on the RDS farm, but be published as shortcuts on administrator PCs and appear to run locally. Combined with an RDS gateway and a good passphrase (passwords are for losers) policy, this also solved the problem of securely managing systems from a remote location. A nice solution, but we’re deploying more systems and consuming more capacity just to make management easier; doesn’t that sound wrong?

What if there was a solution that offered some of the benefits of the RDS option, such as centralized installation of administration tools, always up-to-date versions, security, remote accessibility, and the ability to use many kinds of devices … or browsers?


Azure Server Management Tools

Microsoft first started to talk about the Server Management Tools (in preview at the time of writing) in Azure back at Ignite 2015. The company initially pitched the tools as a way to get a GUI experience for Nano Server, but that wasn’t quite correct, because the solution, like RSAT, manages all kinds of on-premises Windows Server installations.

The solution offers you a set of GUI tools for server administration that run in your browser, via the Azure Portal. This means that to use the solution to manage Windows Server 2016, including Nano Server:

  • You don’t need Windows 10 PCs (although Enterprise E3 or higher is best for a secure platform)
  • You can even use a Mac, iPad or Android tablet!
  • You can use Internet Explorer 11, Edge (latest), Safari (latest, Mac only), Chrome (latest), or Firefox (latest)

The Azure Portal is a web service, so administrators can sign in from anywhere to manage your on-premises servers. If you have configured Azure AD with either Azure AD Connect or ADFS, then you will also have single sign-on. You can further secure this remote access using conditional multi-factor authentication (MFA), a feature of Azure AD Premium.

How Server Management Tools Works

The system, from the customer perspective, is actually pretty simple. We deploy a Server Management Tools gateway onto a Windows Server 2016 server that is running on-premises. This will act as a proxy for discovering machines on our network and for funneling traffic to/from the management tools running in Azure.

A connection is created in Azure for each on-premises server that we want to manage; the server must be accessible to the gateway via IPv4/IPv6 address or DNS name. Credentials to manage the server must be either:

  • Saved after you create the connection.
  • Supplied on demand when managing a server via the connection.

Overview of Azure Server Management Tools architecture [Image Credit: Microsoft]


I’ll explain how you can deploy this solution in a later post. For you Hyper-V administrators, there’s some good news. As with all good cloud services, Microsoft is continually adding features to the Server Management Tools. One of these additions was a Hyper-V console – Yay! We finally get a new Hyper-V management tool … sort of. But at least we get a great new way to manage those brand new WS2016 hosts from anywhere, and a lack of Windows 10 deployments is no longer a blocker for adopting the most secure hypervisor and private cloud platform that is commercially available.

The post Manage On-Premises Hyper-V from Azure appeared first on Petri.

blender (2.78.1)

The content below is taken from the original (blender (2.78.1)), to continue reading please visit the site. Remember to respect the Author & Copyright.

Blender is 3D creation for everyone, free to use for any purpose.

Keep Your Budget Simple By Sticking to Three Categories

The content below is taken from the original (Keep Your Budget Simple By Sticking to Three Categories), to continue reading please visit the site. Remember to respect the Author & Copyright.

Virtually all of your expenses fall into three overall categories: Fixed expenses, variable expenses, and non-necessities.


Tokyo Institute of Technology taps Nvidia for Japan’s fastest AI supercomputer

The content below is taken from the original (Tokyo Institute of Technology taps Nvidia for Japan’s fastest AI supercomputer), to continue reading please visit the site. Remember to respect the Author & Copyright.

Nvidia’s business is increasingly the business of artificial intelligence, and its latest partnership fits with that new role. The graphics processing maker is supplying the Tokyo Institute of Technology with the GPUs that will power its new AI supercomputer, which will be the fastest of its kind in Japan once completed.

Nvidia Tesla P100 GPUs, which use the Pascal architecture, will be used in the creation of the cluster, which will be known as TSUBAME3.0 and which will replace TSUBAME2.5 with twice the performance. Don’t feel too bad for TSUBAME2.5, however – it’s still going to be in active use, adding its power to TSUBAME3.0’s projected 47 petaflops for a combined total of 64.3 petaflops – you’d need a heck of a lot of iPhones to match that (like very, very insanely many).

The goal is for TSUBAME3.0 to be up and processing by this summer, when its prowess will be put to use for education and high-tech research at the Tokyo academic institution. It’ll also be available for private-sector contracting, and the school says it can’t wait to start teaching the new virtual brain.

Understanding Windows 10 Enterprise Licensing

The content below is taken from the original (Understanding Windows 10 Enterprise Licensing), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s Ask the Admin, I’ll explain the different options for licensing Windows 10 Enterprise edition.

Windows comes in several editions aimed at different markets and audiences. For instance, Windows 10 Home and Pro are intended for consumers and SMEs, respectively. Home cannot be joined to an Active Directory (AD) domain, while Pro supports AD, but doesn’t have all the bells and whistles of Enterprise Edition, which includes application control (AppLocker) and Credential Guard, amongst other enterprise-class features.

 

 

What Features Are Unique to Windows 10 Enterprise?

Quite a few features are unique to Windows 10 Enterprise, and some features that were previously available in Pro are now available only in Enterprise. Here’s a complete list of features that require Windows 10 Enterprise:

  • Long Term Servicing Branch
  • Windows To Go
  • AppLocker
  • Group Policy consumer experience settings
  • Application Virtualization (App-V) and User Environment Virtualization (UE-V)
  • Device Guard
  • Credential Guard
  • DirectAccess
  • BranchCache

Microsoft is gradually pushing small businesses, which in the past opted to use Pro editions of Windows, to Windows 10 Enterprise by removing features that enable organizations to restrict users’ access to the Windows Store, advertising, and customization features. For an overview of these changes, see Microsoft Cuts More Features From Windows 10 Pro To Push Businesses To Enterprise Edition on the Petri IT Knowledgebase.

How to Get Windows 10 Enterprise

Enterprise editions of Windows used to be only available through Volume Licensing. Software Assurance (SA), a subscription that included upgrade rights and the Microsoft Desktop Optimization Pack (MDOP), was also part of Volume Licensing.

Software Assurance is now known as E3. E5 is the same as E3, but includes an additional security service called Windows Defender Advanced Threat Protection (ATP), which uses behavioral analysis and machine learning to protect Windows if antivirus fails to stop a threat. For more information on Windows Defender ATP, see Advanced Threat Protection Service for Businesses is Coming to Windows 10.

Windows 10 Enterprise E3 and E5 can be purchased as Software-as-a-Service on a monthly subscription plan from Microsoft Partners that participate in the Cloud Solution Provider (CSP) Program, or via Volume Licensing. Additionally, there are two bundles, Secure Productive Enterprise E3 and E5, that include Office 365, Enterprise Mobility + Security, and Windows 10 Enterprise. For more information on Enterprise Mobility + Security, see What is Microsoft Enterprise Mobility Suite? on Petri.

Until recently, one drawback of purchasing E3 via a CSP Partner was that you needed to already be on Windows 10 Pro to make the upgrade to Enterprise, because E3/E5 licenses are assigned to users via Azure Active Directory (AAD), and Windows 10 Pro would automatically be upgraded to Enterprise edition.


But as Brad Sams recently reported here, ‘Microsoft is allowing anyone with a Windows Enterprise E3 or E5 subscription as well as Secure Productive Enterprise E3 and E5 to upgrade Windows 7 and 8.1 machines at no additional cost’. Microsoft is providing customers with a perpetual Windows 10 Pro license, along with Volume Licensing media, so they can install Windows 10 Pro and then upgrade to E3 or E5 via CSP.

The post Understanding Windows 10 Enterprise Licensing appeared first on Petri.

Azure’s rise instills doubts in AWS shops

The content below is taken from the original (Azure’s rise instills doubts in AWS shops), to continue reading please visit the site. Remember to respect the Author & Copyright.

RightScale’s “State of the Cloud” report is out now and has been well covered by InfoWorld, so I won’t give you the rundown here. But there were key findings I want to point out in terms of impact to the enterprise.

According to the report: 

  • In polling the respondents, RightScale found that Azure grew overall from 20 to 34 percent, while Amazon Web Services stayed flat at 57 percent.
  • Google grew from 10 to 15 percent to maintain third position.
  • Within enterprises, Azure increased adoption significantly: from 26 percent to 43 percent. By contrast, AWS adoption increased slightly: from 56 percent to 59 percent.

Enterprises are asking about that last bullet point because they have made a tremendous investment in AWS but are now second-guessing that commitment.

RightScale 2017 cloud market share [Image Credit: RightScale]

Microsoft Azure saw a great jump in cloud adoption last year, thanks to its installed base and good technology. AWS’s growth is holding steady with the market.

Let’s be clear: Azure’s success is well-earned. Microsoft was late to the party, but not so late that it missed out on doing the right things around adoption of its cloud.

And Microsoft has smartly leveraged its existing enterprises relationships, unlike AWS and Google. Enterprises that have their IT solutions based on Windows servers, which is a lot of enterprises, are low-hanging fruit for Microsoft Azure. The ease of migration is also tempting, even though they can also get Windows servers on AWS.

Still, AWS has kept up with the market growth. Indeed, AWS’s growth aligns almost perfectly to that of the cloud computing market in general. AWS will continue to set the standard for IaaS platforms, and it won’t give up market share unless it does something really dumb. I suspect that won’t happen.

Enterprises need to understand that the cloud computing market is dynamic and in its early days, so the volatility will be with us a while. Only about 5 to 7 percent of workloads have migrated to an IaaS platform, such as AWS or Azure. About 20 percent should be migrated by the end of the year. We won’t hit 70 percent—when you’ll really know if your chosen platform won—for several years. Today, it could be anybody’s game.

That’s a fact of life. Enterprises shouldn’t panic if their chosen platform isn’t the top scorer or grower in this quarter or that. While the market sorts itself out, it’s clear that AWS, Azure, and Google Cloud are the right IaaS platforms for most enterprises to focus on. If you’re using one of those, you’ll be in good shape for the foreseeable future. That’s as good as it gets in the platform betting game.

User Group Newsletter February 2017

The content below is taken from the original (User Group Newsletter February 2017), to continue reading please visit the site. Remember to respect the Author & Copyright.


Welcome to 2017! We hope you all had a lovely festive season. Here is our first edition of the User Group newsletter for this year.

AMBASSADOR PROGRAM NEWS

2017 sees some new arrivals to and departures from our Ambassador program. Read about them here.

 

WELCOME TO OUR NEW USER GROUPS

We have some new user groups which have joined the OpenStack community.

Bangladesh

Ireland – Cork

Russia – St Petersburg

United States – Phoenix

Romania – Bucharest

We wish them all the best with their OpenStack journey and can’t wait to see what they will achieve!

Looking for your local group? Are you thinking of starting a user group? Head to the groups portal for more information.


MAY 2017 OPENSTACK SUMMIT

We’re going to Boston for our first summit of 2017!!

You can register and stay updated here.

Consider it your pocket guide for all things Boston summit. Find out about the featured speakers, make your hotel bookings, find your FAQ and read about our travel support program.

 

NEW BOARD OF DIRECTORS
The community has spoken! A new board of directors has been elected for 2017.
Read all about it here. 


MAKE YOUR VOICE HEARD!
Submit your response to the latest #OpenStack User Survey!
All data is completely confidential. Submissions close on the 20th of February 2017.
You can complete it here. 

CONTRIBUTING TO UG NEWSLETTER

If you’d like to contribute a news item for next edition, please submit to this etherpad.
Items submitted may be edited down for length, style and suitability.
This newsletter is published on a monthly basis. 

You Can Now Easily Connect to Your Raspberry Pi From Anywhere in the World With VNC Connect

The content below is taken from the original (You Can Now Easily Connect to Your Raspberry Pi From Anywhere In World With VNC Connect), to continue reading please visit the site. Remember to respect the Author & Copyright.

Real VNC is an excellent, easy way to remotely connect to your Raspberry Pi from your home network, but it’s a little confusing for beginners. VNC Connect is a new version that simplifies the process and makes it easy to connect to your Raspberry Pi from outside your network.

Read more…

Microsoft drone simulator helps you prevent real-world crashes

The content below is taken from the original (Microsoft drone simulator helps you prevent real-world crashes), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s relatively easy to develop a drone that can fly on its own, but it’s another matter developing one that can navigate the many obstacles of real life. That’s where Microsoft thinks it can help. It just published an open source simulator, the Aerial Informatics and Robotics Platform, that helps designers test and train autonomous machines in realistic conditions without wrecking expensive prototypes. The tool has vehicles move through randomized environments filled with the minutiae you see on a typical street, such as power lines and trees — if your drone can’t dodge a tree branch, you’ll find out quickly. You can see what the vehicle would see (including simulated sensor data), and the software ties into both existing robotic hardware platforms and machine learning systems to speed up development.

As team lead Ashish Kapoor explains to The Verge, this isn’t meant to replace real-world testing. It’s more of a complement that can either account for hard-to-reproduce circumstances or perform extremely repetitive tests. Instead of having to launch a drone with just a few months of flying under its belt, you could have data equivalent to years of flight time.

Moreover, the simulator isn’t necessarily confined to testing hardware. Microsoft sees its tech helping with all kinds of computer vision and machine learning code. Really, this is more of an AI playground than a narrowly-focused tool. Whatever the initial goals may be, there are many more possibilities.

Via: The Verge

Source: Microsoft, GitHub

Cloud industry body sets up new data protection code

The content below is taken from the original (Cloud industry body sets up new data protection code), to continue reading please visit the site. Remember to respect the Author & Copyright.

A number of cloud infrastructure providers operating in Europe have signed up to a new data protection code of conduct.

The code, established by Cloud Infrastructure Services Providers in Europe (CISPE), places restrictions on the processing of personal data that cloud customers store with providers, defines responsibilities for data security, and requires providers to offer customers the option to process and store personal data entirely within the European Economic Area (EEA). It also, among other things, details protocols for handling requests for data from government authorities and law enforcement agencies, as well as for the notification of data breaches.

The code (41-page/364KB PDF) has been drafted to accord with the requirements of the EU’s new General Data Protection Regulation (GDPR), which does not take effect until 25 May 2018.

A number of cloud infrastructure providers, including Amazon Web Services (AWS), have already signed up to the voluntary code. Those providers can display a certification mark to notify cloud customers of their compliance with the code.

CISPE said the new code “can be used as a tool by customers in Europe to assess if a particular cloud infrastructure service provides appropriate safeguards for the processing they wish to perform”.

Data protection law expert Kuan Hon of Pinsent Masons, the law firm behind Out-Law.com, said: “This is a very positive step. I hope that this code will be approved for GDPR purposes, whether by the European Commission or a national data protection supervisory authority, to enable transfers to adhering cloud providers, even if they are outside the EU, as well as to help evidence their compliance generally.”

Hon has called for the EU’s E-Commerce Directive to be updated to address an anomaly which exposes infrastructure cloud providers to potential liabilities for unlawful handling of personal data by their customers, even if they are not aware of their customers’ activities. She said the anomaly will be more striking when the GDPR takes effect.

Alban Schmutz, chairman of CISPE, said: “Any customer will know that if their cloud infrastructure provider is complying with the CISPE code of conduct, their data will be protected to clear standards. CISPE code of conduct provides Europeans with the confidence that their information will not be used for anything other than what they stipulate. The CISPE compliance mark clearly addresses this, providing consistency across Europe, what European customers call for.”

Copyright © 2016, Out-Law.com

Out-Law.com is part of international law firm Pinsent Masons.

Connect Excel to an Azure Analysis Services server

The content below is taken from the original (Connect Excel to an Azure Analysis Services server), to continue reading please visit the site. Remember to respect the Author & Copyright.

You can connect to Azure Analysis Services from Power BI, Excel and many other third party client tools. This blog will focus on how to connect to your Azure Analysis Services server from Microsoft Excel.

Before getting started, you’ll need an Azure Analysis Services server with a model deployed that you have access to (a sketch of one way to create a server with PowerShell appears at the end of this section). Once that’s in place:

  1. In Excel 2016, on the Data ribbon, click Get External Data > From Other Sources > From Analysis Services.
  2. In the Data Connection Wizard, in Server name, enter the name of your Azure Analysis Services server. Then, in Logon credentials, select Use the following User Name and Password, and then type the organizational user name, for example [email protected], and password.

    Connect in Excel logon

  3. In Select Database and Table, select the database and model or perspective, and then click Finish.

    Connect in Excel select model

  4. Select OK to create a PivotTable report.


A pivot table will be created and you will now see your field list on the side. You can drag and drop different fields to build out your pivot table.


Learn more about Azure Analysis Services and how to have Faster PivotTables in Excel 2016.
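
If you do not yet have a server to connect to, here is a minimal sketch of creating one with the AzureRM PowerShell module. The resource group, server name, region and SKU below are hypothetical placeholders, and the cmdlets assume the AzureRM.AnalysisServices module is installed; adjust them to your environment.

# Sign in and create a resource group plus a developer-tier Analysis Services server
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "AnalysisRG" -Location "South Central US"
New-AzureRmAnalysisServicesServer -ResourceGroupName "AnalysisRG" -Name "myaasserver" -Location "South Central US" -Sku "D1"

# The server name you then enter in Excel's Data Connection Wizard looks like:
# asazure://southcentralus.asazure.windows.net/myaasserver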

Key updates to Azure Backup Server

The content below is taken from the original (Key updates to Azure Backup Server), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Azure Backup Server (MABS) is a cloud-first backup solution that protects data and workloads across the heterogeneous IT environments of enterprises. It is available as a free download with Azure Backup, without requiring a System Center license or a SQL license for the server DB. The latest update released for Azure Backup Server ensures that customers are able to centrally monitor all their backup entities, perform agentless backups, secure data against cyber threats like ransomware and machine compromise, and recover from them. Azure Backup Server now goes a step further and provides security-based mechanisms to safeguard all the operations that impact the availability of cloud data.

If you are new to Azure Backup Server:

  • You can download Microsoft Azure Backup Server and start protecting your infrastructure today. It is available as a free download with Azure Backup without the requirement of System Center or SQL license for the server DB.
  • Learn more about Azure Backup Server using these short videos to get started.

Key features

Azure Backup Server recently added the following enterprise grade features to strengthen security, provide a centralized view of backup entities, and support key workloads:

  1. Central monitoring – Customers can now monitor their on-premises assets backed up with Azure Backup Server from the Azure portal. Recovery Services vault now provides a centralized view of backup management servers, protected servers, backup items, and their associations. This gives a simple experience to search for backup items, identify the Azure Backup Server they are associated with, view disk utilization, and see other details related to these entities.
  2. Security features – Azure Backup recently announced Security Features, available as part of the latest update. These features are built on three principles – Prevention, Alerting, and Recovery – to enable organizations to increase preparedness against attacks and equip them with a robust backup solution.
  3. VMware support – Azure Backup Server also contains support for VMware backup. This capability provides agentless backups, seamless discovery, and auto-protection features.
  4. Availability in new regions – Azure Backup is already available in multiple regions. Customers can now back up data to new regions as well, including Canada, the UK, and West US 2.

Getting started

To start leveraging these features, navigate to the links and videos below.

Central monitoring

To start using central monitoring for Azure Backup Server, you must create a Recovery Services vault, download the latest Azure Recovery Services Agent, and register Azure Backup Server to this vault. If your Azure Backup Server is already registered to a Recovery Services vault, you can start leveraging these features by upgrading Azure Backup Server to Update 1 and installing the latest Azure Recovery Services Agent (minimum version 2.0.9062).
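
As a rough sketch of that first step with the AzureRM PowerShell module (the resource group, vault name, region and path below are hypothetical placeholders, and the cmdlets assume the AzureRM.RecoveryServices module is installed), you can create the vault and download the vault credentials file that the registration step asks for:

# Create a Recovery Services vault for Azure Backup Server to register against
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "BackupRG" -Location "West Europe"
New-AzureRmRecoveryServicesVault -Name "MABSVault" -ResourceGroupName "BackupRG" -Location "West Europe"

# Download the vault credentials file used when registering the server with the vault
$vault = Get-AzureRmRecoveryServicesVault -Name "MABSVault" -ResourceGroupName "BackupRG"
Get-AzureRmRecoveryServicesVaultSettingsFile -Backup -Vault $vault -Path "C:\downloads"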

Azure Backup Server - central monitoring

Security features

The video below explains how to get started by enabling Security features and how to leverage them in Azure Backup Server.

VMware backups

Please go through 4 simple steps to protect VMware VMs using Azure Backup Server. The first video in the series is linked below.

Related links and additional content

Jaguar and Shell partner for in-car fuel payments

The content below is taken from the original (Jaguar and Shell partner for in-car fuel payments), to continue reading please visit the site. Remember to respect the Author & Copyright.

Luxury automaker Jaguar Land Rover has partnered with Shell to make fueling up a touch more convenient. That’s because everyone who owns a vehicle equipped with the company’s InControl Apps will be able to pay for gas without ever leaving the driver’s seat. All you need is the Shell mobile app, either a PayPal or Apple Pay account and at least $40,000 for one of the supported cars and you too can take advantage of the new feature.

As the video below shows, it looks like all you need to do is connect your iPhone (Android support arrives sometime later this year) to your Jag’s infotainment system via USB. From there, everything is handled via the car’s touchscreen. How this differs from other mobile payment tech, Jaguar says, is that this one uses geolocation in concert with PayPal or Apple Pay for transactions.

The functionality launches February 15th in the UK and additional availability will roll out over the course of this year. Jaguar says that additional applications of the tech could include drive-through restaurants and parking services. Which, to be honest, sound far more convenient than paying for gas. I mean, you still have to get out of your car for the former. The latter? It should eliminate the awkwardness of digging your wallet from a back pocket while you’re seated.

Source: Jaguar

Managing Usernames and Passwords with PowerShell for SharePoint Online

The content below is taken from the original (Managing Usernames and Passwords with PowerShell for SharePoint Online), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you are like me, you find yourself logging into your Office 365 tenant via PowerShell almost every day. And you are probably doing it one of two ways: either you are using a cmdlet to prompt you for the information and then manually typing it in (or cutting and pasting from a text file) OR you have hard coded the plain text credentials into a login script, and you just run it. I will be honest, up until recently I have done both and just hoped no one ever looked in the file labeled passwords.

 

 

Thanks to some questions on one of my YouTube Channel videos, I have learned and implemented a better way. In this article, we are going to start at the beginning to talk about the bad, the good, and the best ways to manage your accounts when it comes to PowerShell, including prompting, plain text variables, hashed files, and my new favorite, Windows Credential Manager. Once we have discussed all of the options, then the responsibility is on you to implement the correct solution. Ready? Let’s do this.

One quick note: This article assumes you have already installed the Office 365 PowerShell and the Patterns and Practices PowerShell. If you have not, then check out my previous article on Getting Started with PowerShell for SharePoint Online.

Prompting for Passwords

This is the method that you will see most often, especially in “official” documentation. The idea is you use $Credentials = Get-Credential, which will then cause a pop-up box to appear where you can type in your username and password, which is then stored in a variable. This is a very secure way to go about things, other than the fact that 9 out of 10 people usually have a OneNote document or other documentation that they cut and paste the account info from. You don’t have to admit it, but we all do it. Ignoring that flaw, here is what your PowerShell usually looks like:

$askCred = Get-Credential
Connect-msolService -Credential $askCred

That simple bit of PowerShell will have you logged into your tenant and ready to go. You could use something like Get-MsolUser to confirm everything is working.

ProTip: Do you use the same username every time? I bet you do. If so, then you can update the line to $askCred = Get-Credential -Credential [email protected], and it will fill in the username automatically for you so that you only have to type (or paste) the password.
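
Since this series is about SharePoint Online, note that the same credential object can also be passed to the Patterns and Practices cmdlets mentioned earlier. A minimal sketch, assuming the PnP module is installed and using a hypothetical site URL:

# Prompt once, then reuse the credential object for a SharePoint Online site
$askCred = Get-Credential
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/TeamSite" -Credentials $askCred

# Quick sanity check that the connection is working
Get-PnPWeb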

Passwords Hard Coded in a Script

I know, I am a bad person for showing this to you, but truth be told it is so common I feel like we at least need to discuss it. So, for better or worse, here it is.

$un = "[email protected]"
$pw = "HereToBeReadByAnyone"
# PowerShell will not accept a plain text password, so convert it to a SecureString
$sp = $pw | ConvertTo-SecureString -AsPlainText -Force
# Build the credential object from the username and the secure string
$plainCred = New-Object System.Management.Automation.PSCredential -ArgumentList $un, $sp
Connect-MsolService -Credential $plainCred

It starts with the sin of setting the variables $un and $pw with your information right there in plain text. So not only do they show up on your screen and in your stored script, but they will also be in plain text in your transcripts and/or logs. This is not good.

With your secrets not so safely hidden in the variables, you then have to convert the $pw variable to a secure string. This is because PowerShell will not accept a plain text password when building a credential object. If you are going to do these bad things, then it is going to force you to convert it and then use the -Force parameter to really drive home what a bad idea this whole thing is.

Once you have an encrypted password, you can create the credential object by combining the $un and $sp variables into the variable $plainCred. With $plainCred in hand, you can use that with Connect-MsolService and you are in business. Connect-MsolService doesn’t judge you; it doesn’t know about the plain text earlier.

This is a terrible way to do things, but because it is so very common, it is included here for completeness. Everyone I know is guilty of some form of this bad behavior. The good news is that you now understand it and could use it if someone held a gun to your head, and knowledge is power.

Using an Encrypted File

This method gives keeping your password secure a fighting chance. What you do is create a file on your local file system that holds an encrypted version of your password. The nice thing is that if someone finds the file, they will just see the scrambled string; they cannot directly use it because the encryption is tied to your account. So even if they copy the file to their system and try to access it, they will get an error. Now with all of that said, this isn’t foolproof. I bet if someone tried hard enough (or searched the internet hard enough), they could find a way to get at the password. But at that point, they are maliciously trying to break in, and that makes them a meanie. This method stops the casual passerby from accidentally tripping over your information but not the state-sponsored hacker. So, plan accordingly. If you are storing my social security number on your SharePoint site, don’t use this method, either.

The first step, creating the file, just needs to be done once. To do so, you would use the following:

$credentials = Get-Credential
$filename = 'C:\safe\secretfile.txt'
$credentials | Export-CliXml -Path $filename

This is prompting you for the username and password and then using the Export-CliXml cmdlet to securely write them to a file. You’ll find more details about it here if you are curious about the cmdlet. The cool thing is that only your account can decrypt the file. I would encourage you to take a moment to open the file with Notepad so that you can see what is going on.
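
If you would rather peek at it without leaving the console, something like this works (reusing the $filename variable from the snippet above):

# Dump the raw CLIXML so you can see the encrypted SecureString blob
Get-Content -Path $filename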

Now that you have the file on your system, you can reference the credentials in all of your scripts using the following:

$credPath = 'C:\safe\secretfile.txt'
$fileCred = Import-CliXml -Path $credPath
Connect-MsolService -Credential $fileCred

There is one thing to consider: If you are going to use this as part of an automated process, like a scheduled task, then the account that the job runs as has to be the same account you created the secure file with. That should make sense; you just need to consider which account is used for what.
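
For example, if the scheduled task runs as a service account, you would create the file from a PowerShell session running as that account. A minimal sketch, where CONTOSO\svc-sp is a made-up service account name:

# Open a new PowerShell window running as the task's account (you will be asked for its password)
runas /user:CONTOSO\svc-sp powershell.exe

# Then, inside that new window, create the file so it is encrypted for svc-sp
Get-Credential | Export-CliXml -Path 'C:\safe\secretfile.txt'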

Pretty cool and pretty easy. This is a solid method for making your life easier. Thanks to Todd Klindt for writing the original blog post on using this method with SharePoint on-premises. Everything here was adapted with his permission, so you can’t tell on me, but to rub salt in the wound, I should make him proofread this section.

Windows Credential Manager

Get ready for the safest, securest method that will change your life. You will brag to your friends about your skills after you put this method into place. Try not to gloat too much.

Windows has had Credential Manager built in for a long time, and honestly, most of us have just ignored it. The tool to me was always just there to store internal passwords and to give IE/Edge a dumping ground when you stored passwords for websites. Well, today we are going to unlock that monster and show you how to use it with PowerShell.

To open Credential Manager, press the Windows Key and type in Credential Manager. If you don’t know how to use the Windows key, then you can also just manually navigate to Control Panel and find the tool that way. Once there, you will want to click on Windows Credentials, so you have something like the following screenshot.

And don’t worry, you aren’t going blind in just your left eye, I blurred my accounts on purpose. I don’t need any of you looky-loos to go playing with my accounts. Anyway.

Now you want to add your credentials by clicking on Add a Windows Credential.

When prompted for Internet or network address, type in the name you want to use. For me the name is o365, but do what you want. I wouldn’t use a space in the name, though; that’s just a PowerShell nuisance you don’t want.

Then enter your username and password. If it all looks good, then click OK.

Here is my screenshot with fake information. I wish my password was that long.

Now that we have the information securely saved in Credential Manager, we need to download some new cmdlets to use it with PowerShell. To do that, switch back over to your PowerShell console. Make sure you are running PowerShell as an administrator and then type the following:

Install-Module -Name CredentialManager

This will download the CredentialManager module from the PowerShell Gallery. This is not a Microsoft-built tool; some kind person built it and released it to the world for free. I use it without issue, but if you are the paranoid type, be sure to vet it yourself fully. Not everything on the Internet is true or safe, but I feel pretty good about this one.
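
If you want to see what you are getting before you trust it, you can list what the module exposes, or pull its source down to read without installing it. A quick sketch; the C:\temp\ModuleReview path is just an example folder (it needs to exist):

# List the cmdlets the CredentialManager module makes available
Get-Command -Module CredentialManager

# Or download the module's source for review instead of installing it
Save-Module -Name CredentialManager -Path 'C:\temp\ModuleReview'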

Now you can use one of the four cmdlets in this package to make your life so much easier.

$managedCred = Get-StoredCredential -Target o365
Connect-MsolService -Credential $managedCred

You are just using the new cmdlet Get-StoredCredential to pull the entry you just created from Credential Manager. You store that in a variable, and then you can reference it with your Connect cmdlet. How simple is that? No more typing account info, no more plain text, no more evil villains stealing my social security number. What a happy day.
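
And if you would rather skip the Control Panel clicking altogether, the same module also includes a New-StoredCredential cmdlet for creating the entry from the console. A rough sketch you would run once interactively, with placeholder values; check Get-Help New-StoredCredential for the exact parameters and credential type in the version you install:

# One-time, typed at the console (not saved in a script); o365 matches the target name used above
New-StoredCredential -Target o365 -UserName 'admin@contoso.onmicrosoft.com' -Password 'YourPasswordHere' -Persist LocalMachine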

ProTip: Usernames and passwords stored in Credential Manager can be used with the Patterns and Practices (PnP) PowerShell natively. An example would be:

Connect-PnPOnline -Url http://bit.ly/2kTHT0y -Credentials o365

PowerShell doesn’t get any easier than that.

Now It Is Up to You

We have walked through all of the fun ways you can manage credentials for PowerShell. What method will you use?

As always, because I am the nicest guy you know, there is also a companion video to go with this article. So, if you want to see my beautiful face and hear me talk about all of this, check out the video below.

The post Managing Usernames and Passwords with PowerShell for SharePoint Online appeared first on Petri.

IBM to launch cheap ‘n’ cheerful Power server for i and AIX userbase

The content below is taken from the original (IBM to launch cheap ‘n’ cheerful Power server for i and AIX userbase), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM’s i and AIX customer bases can buy a cheaper box; its latest Power S812 server comes with just one socket and a single or quad-core processor.

It’s for relatively light use, in the retail area for example, and as an entry point for non-compute-intense workloads.

The S812 box, separate from the 2014 S812L Linux system, is a 2U, rack-mounted system for small and medium businesses – classic x86 server territory – and its POWER8 CPU ships with either one or four activated 3.026 GHz cores. It has a single core for IBM i use and four for AIX system use.

IBM i is the development of Big Blue’s old AS/400 server system, and System/38 before that, running a proprietary operating system.

The IBM i S812 version has up to 64GB of DRAM and a maximum of 25 user entitlements; the software requirement is IBM i 7.3 TR2 or 7.2 TR6, or later. The AIX version comes with up to 128GB of memory and needs AIX 6.1, 7.1, or 7.2, or later.

IBM S812L box – similar to S812.

Both systems have six low-profile, hot-swap PCIe Gen3 slots, an integrated service processor, hot-swap and redundant cooling, USB 2.0 and 3.0 ports, and an RJ45-connected system port.

They can have one of:

  • Eight SFF-3 (Small Form Factor) disk/SSD bays, DVD bay, and dual SAS controller with write cache
  • Twelve SFF-3 bays, DVD bay, and single SAS controller
  • Twelve SFF-3 bays, DVD bay, and split backplane with two SAS controllers

IBM emphasises the system’s stability and reliability, saying that it’s for core workloads in small footprints at a very competitive price point. It adds that it has usable energy controls to make it energy efficient. We might imagine that here is a cheap IBM i or AIX box for Big Blue channel partners to punt into their customer bases and help slow the incessant x86 tide.

The S812 will be available from March 17. Get the formal European announcement doc here and the US one here [PDFs]. ®