21 Google Cloud tools, each explained in under 2 minutes

The content below is taken from the original ( 21 Google Cloud tools, each explained in under 2 minutes), to continue reading please visit the site. Remember to respect the Author & Copyright.

Need a quick overview of Google Cloud core technologies? Quickly learn these 21 Google Cloud products—each explained in under two minutes.

1. BigQuery in a minute

Storing and querying massive datasets can be time-consuming and expensive without the right infrastructure. This video gives you an overview of BigQuery, Google’s fully-managed data warehouse. Watch to learn how to ingest, store, analyze, and visualize big data with ease.

BigQuery in a minute
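For a taste of what this looks like in practice, here is a minimal, hedged sketch using the bq command-line tool; the dataset, table, and file names are made-up placeholders.

# Load a local CSV file into a table, letting BigQuery auto-detect the schema
bq load --autodetect --source_format=CSV mydataset.sales ./sales.csv

# Run a standard SQL query against the loaded table
bq query --use_legacy_sql=false 'SELECT region, SUM(amount) AS total FROM mydataset.sales GROUP BY region'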

2. Filestore in a minute

Filestore is a managed file storage service that provides a consistent view of your file system data and steady performance over time. In this video, we give you an overview of Filestore, showing you what it does and how you can use it for your developer projects.

Filestore in a minute

3. Local SSD in a minute

Need a tool that gives you extra storage for your VM instances? This video explains what a Local SSD is and the different use cases for it. Watch to learn if this ephemeral storage option fits best with your developer projects.

Local SSD in a minute

4. Persistent Disk in a minute

What are persistent disks? How can they help when working with virtual machines? This video gives you a snackable synopsis of what Persistent Disk is and how you can use it as an affordable, reliable way to store and manage the data for your virtual machines.

Persistent Disk in a minute

5. Cloud Storage in a minute

Managing file storage for applications can be complex, but it doesn’t have to be. In this video, learn how Cloud Storage allows enterprises and developers alike to store and access their data seamlessly without compromising security or hindering scalability. 

Cloud Storage in a minute

6. Anthos in a minute

Modernizing your applications while keeping complexity to a minimum is no easy feat. In this video, learn why Anthos is a great platform for providing greater observability, managing configurations, and securing multi and hybrid cloud applications.

Anthos in a minute

7. Google Kubernetes Engine in a minute

In this video, watch and learn how Google Kubernetes Engine (GKE), our managed environment for deploying, managing, and scaling containerized applications using Google infrastructure, can increase developer productivity, simplify platform operations, and provide greater observability.

Google Kubernetes Engine in a minute

8. Compute Engine in a minute

How do you migrate existing VM workloads to the cloud? In this video, get a quick overview of Compute Engine and how it can help you seamlessly migrate your workloads.

Compute Engine in a minute

9. Cloud Run in a minute

What is Cloud Run? How does it help you build apps? In this video, get an overview of Cloud Run, a fully managed serverless platform that lets you run containerized applications without managing infrastructure. Watch to learn how you can use Cloud Run for your developer projects.

Cloud Run in a minute
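As a rough sketch of the workflow described above (the service name, project, and image below are placeholders), deploying a container to Cloud Run from the command line looks something like this:

# Deploy a container image to the fully managed Cloud Run platform
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image:latest \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated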

10. App Engine in a minute

Learn how this serverless application platform allows you to write your code in any supported language, run custom containers with the framework of your choice, and easily deploy and run your code in the cloud. 

App Engine in a minute

11. Cloud Functions in a minute

Get a quick overview of Cloud Functions, our scalable pay-as-you-go functions as a service (FaaS) to run your code with zero server management.

Cloud Functions in a minute

12. Firestore in a minute

Cloud Firestore is a NoSQL document database that lets you easily store, sync, and query data for your mobile and web apps, at global scale. In this video, we show you how to use Firestore and showcase its features that simplify app development without compromising security.

Firestore in a minute

13. Cloud Spanner in a minute

Cloud Spanner is a fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability. In this video, you’ll learn how Cloud Spanner can help you create time-sensitive, mission critical applications at scale.

Cloud Spanner in a minute

14. Cloud SQL in a minute

Cloud SQL is a fully-managed database service that helps you set up, maintain, manage, and administer your relational databases on Google Cloud. In this video you’ll learn how Cloud SQL can help you with time-consuming tasks such as patches, updates, replicas, and backups so you can focus on designing your application.

Cloud SQL in a minute

15. Memorystore in a minute

Memorystore is a fully managed and highly available in-memory service for Google Cloud applications. This tool can automate complex tasks, while providing top-notch security by integrating IAM protocols without increasing latency. Watch to learn what Memorystore is and what it can do to help in your developer projects.

Memorystore in a minute

16. Bigtable in a minute

Cloud Bigtable is a fully managed, scalable NoSQL database service for large analytical and operational workloads. In this video, you’ll learn what Bigtable is and how this key-value store supports high read and write throughput, while maintaining low latency.

Bigtable in a minute

17. BigQuery ML in a minute 

BigQuery ML lets you create and execute machine learning models in BigQuery by using standard SQL queries. In this video, learn how you can use BigQuery ML for your machine learning projects.

BigQuery ML in a minute

18. Dataflow in a minute

Dataflow is a fully managed streaming analytics service that minimizes latency, processing time, and cost through autoscaling and batch processing. In this video, learn how it can be used to deploy batch and streaming data processing pipelines.

Dataflow in a minute

19. Cloud Pub/Sub in a minute

Cloud Pub/Sub is an asynchronous messaging service that decouples services that produce events from services that process events. In this video, you’ll learn how you can use it for message storage, real-time message delivery, and much more, while still providing consistent performance at scale and high availability.

Cloud Pub/Sub in a minute

20. Dataproc in a minute

Dataproc is a managed service that lets you take advantage of open source data tools like Apache Spark, Flink and Presto for batch processing, SQL, streaming, and machine learning. In this video, you’ll learn what Dataproc is and how you can use it to simplify data and analytics processing.

Dataproc in a minute

21. Data Fusion in a minute

Cloud Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building and managing data pipelines. In this video, you’ll learn how Cloud Data Fusion can help you build smarter data marts, data lakes, and data warehouses.

Data Fusion in a minute

How to always show Fewer or More Details in File Transfer Dialog Box in Windows 10

The content below is taken from the original ( How to always show Fewer or More Details in File Transfer Dialog Box in Windows 10), to continue reading please visit the site. Remember to respect the Author & Copyright.

By default, when you initiate a file operation, which is basically Copy/Cut/Move/Paste or Delete, […]

This article How to always show Fewer or More Details in File Transfer Dialog Box in Windows 10 first appeared on TheWindowsClub.com.

GB Renewable generation forecast (including Octopus energy agile tariff) display using Inky wHAT and Pi Zero

The content below is taken from the original ( GB Renewable generation forecast (including Octopus energy agile tariff) display using Inky wHAT and Pi Zero), to continue reading please visit the site. Remember to respect the Author & Copyright.

Submitted by /u/Andybrace to r/raspberry_pi.

Azure Firewall Premium is in public preview

The content below is taken from the original ( Azure Firewall Premium is in public preview), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Firewall Premium provides next generation firewall capabilities that are required for highly sensitive and regulated environments.

SkynetWiki is live! 🤖 Huge thanks to Danger, MrFlux, and our community of contributors! 🦾

The content below is taken from the original ( SkynetWiki is live! 🤖 Huge thanks to Danger, MrFlux, and our community of contributors! 🦾), to continue reading please visit the site. Remember to respect the Author & Copyright.

Submitted by /u/nicolehtay to r/siacoin.

How to stop display name based phishing easily

The content below is taken from the original ( How to stop display name based phishing easily), to continue reading please visit the site. Remember to respect the Author & Copyright.

Submitted by /u/Deku-shrub to r/Office365.

How Covid has shaken up IT channel job market

The content below is taken from the original ( How Covid has shaken up IT channel job market), to continue reading please visit the site. Remember to respect the Author & Copyright.

Marc Sumner, CEO of channel recruitment agency Robertson Sumner, reflects on how Covid has impacted the channel jobs market over the past 12 months.

Throughout 2020 we have all had to adapt and change at speed. Businesses were expected to enable their teams to work from home with very limited notice from the government. Some businesses were well prepared and were able to ‘flick the switch’ allowing their staff to start working from home seamlessly right from the start of the imposed lockdown. But the vast majority of UK plc were not.  After nearly a year of operating through a series of lockdowns, what was the impact on the job market within the IT channel?

Despite an initial dip in hiring between March and May 2020, demand has bounced back strongly to pre-Covid levels. Technology is one of the markets that helped UK plc respond to the challenges of enabling people to work in a more collaborative way despite the restrictions of travelling and running traditional events. The IT channel has needed to double down on both technical and sales staff to present the most suitable solutions to their customers.

Companies became adept at hiring and onboarding remotely. This has freed up many organisations from the restrictions usually associated with hiring people who live within a commutable distance of an office base. In some cases, skills shortages have become a thing of the past with hiring managers realising that a person’s location is no barrier to their suitability for the organisation. Some 76 per cent of clients surveyed at the end of 2020 are now willing to hire somebody remotely for a traditionally ‘office based’ role, expanding the ‘talent-pool’ available to choose from.

Hiring managers and recruiters in the IT channel were forced to review their interview and assessment techniques to adapt to the lockdown restrictions. To minimise the risk of making bad decisions, many organisations added skills or psychometric testing to the recruitment process. Some even started developing a ‘score-card’ based approach to their hiring, taking some of the ‘gut-feel’ out of the process. It will be interesting to see if this trend continues as the restrictions are eased.

There was a big rise in the way we consume our jobs-based training, with 90 per cent taking place online during 2020. On-boarding remotely also became the norm. Most channel leaders and many employees that I have spoken to have expressed a desire to return to a mix of online and traditional face to face on-boarding and training when practical.

Despite the lockdown being designed to physically keep people and teams apart, it has demonstrated how important working more collaboratively is to achieve a positive outcome. In the IT channel, businesses who are ‘competitors’ in the traditional sense have in some instances formed unlikely alliances to help deliver solutions to their customers. This level of creativity could go a long way to helping foster long-term growth and help deliver job opportunities that fall out of these joint ventures.

To conclude, it is still very early days to predict the medium/long term impact of Covid on the job market within the IT channel. However, the early signs are that the adoption of technology is still key in helping the country return to some sort of normality and therefore the opportunity for job growth is very realistic in 2021.

Marc Sumner is managing director of channel recruitment outfit Robertson Sumner. This article featured in the recent CRN Staff and Salaries Report 2021.

Blue Check Homes installs Twitter-style blue checks on houses for “authentic public figures”

The content below is taken from the original ( Blue Check Homes installs Twitter-style blue checks on houses for “authentic public figures”), to continue reading please visit the site. Remember to respect the Author & Copyright.

According to its satirical official website, BLUE CHECK HOMES offers clients a verified blue badge on their homes. “The blue verified badge on your house lets people outside know that you’re an authentic public figure. To receive the blue check crest, there must be someone authentic and notable actively living in the house,” the site explains.

Simply provide your first name and all of your social media accounts to BLUE CHECK HOMES, then wait for a review and interview with the company’s board. If you are approved, pay the fee and get a crest installed.

Pricing? For verified homeowners, “the BLUE CHECK HOMES installation team will secure your home’s very own plaster baronial crest for a fee of $2999.99.”

The jokester behind this is artist Danielle Baskin. In a postscript on the website, she writes, “If you thought this was a full-fledged service, please investigate the things you read on the internet! And if you’re an artist making jokes on the internet, we should consider adding disclaim…

The Bus That’s Not A Bus: The Joys Of Hacking PCI Express

The content below is taken from the original ( The Bus That’s Not A Bus: The Joys Of Hacking PCI Express), to continue reading please visit the site. Remember to respect the Author & Copyright.

PCI Express (PCIe) has been around since 2003, and in that time it has managed to become the primary data interconnect for not only expansion cards, but also high-speed external devices. What also makes PCIe interesting is that it replaces the widespread use of parallel buses with serial links. Instead of having a bus with a common medium (traces) to which multiple devices connect, PCIe uses a root complex that directly connects to PCIe end points.

This is similar to how Ethernet originally used a bus configuration, with a common backbone (coax cable), but modern Ethernet (starting in the 90s) moved to a point-to-point configuration, assisted by switches to allow for dynamic switching between which points (devices) are connected. PCIe also offers the ability to add switches which allows more than one PCIe end point (a device or part of a device) to share a PCIe link (called a ‘lane’).

This change from a parallel bus to serial links simplifies the topology a lot compared to ISA or PCI, where communication time had to be shared with other devices on the bus and only half-duplex operation was possible. The ability to bundle multiple lanes to provide more or less bandwidth to specific ports or devices has meant that there is no need for a specialized graphics card slot: an x16 PCIe slot with 16 lanes does the job. It does however mean we’re using serial links that run at many GHz and must be implemented as differential pairs to protect signal integrity.

This all may seem a bit beyond the means of the average hobbyist, but there are still ways to have fun with PCIe hacking even if they do not involve breadboarding 7400-logic chips and debugging with a 100 MHz budget oscilloscope, like with ISA buses.

High Clocks Demand Differential Pairs

PCIe version 1.0 increases the maximum transfer rate from the 133 MB/s of 32-bit PCI to 250 MB/s per lane, so four bundled lanes (~1,000 MB/s) deliver roughly the same bandwidth as a 64-bit PCI-X connection at 133 MHz (~1,064 MB/s). Here the PCIe lanes are clocked at 2.5 GHz, with differential signaling send/receive pairs within each lane for full-duplex operation.

Today, PCIe 4 is slowly becoming adopted as more and more systems are upgraded. This version of the standard  runs at 16 GHz, and the already released PCIe version 5 is clocked at 32 GHz. Although this means a lot of bandwidth (>31 GB/s for an x16 PCIe 4 link), it comes with the cost of generating these rapid transitions, keeping these data links full, and keeping the data intact for more than a few millimeters. That requires a few interesting technologies, primarily differential signaling and SerDes.

Basic visualization of how differential signaling works.

Differential signaling is commonly used in many communication protocols, including RS-422, IEA-485, Ethernet (via twisted-pair wiring), DisplayPort, HDMI and USB, as well as on PCBs, where the connection between the Ethernet PHY and magnetics is implemented as differential pairs. Each side of the pair conducts the same signal, just with one side having the inverted signal. Both sides have the same impedance, and are affected similarly by (electromagnetic) noise in the environment. As a result, when the receiver flips the inverted signal back and merges the two signals, noise in the signal will become inverted on one side (negative amplitude) and thus cancel out the noise on the non-inverted side.

The move towards lower signal voltages (in the form of LVDS) in these protocols and the increasing clock speeds make the use of differential pairs essential. Fortunately they are not extremely hard to implement on, say, a custom PCB design. The hard work of ensuring that the traces in a differential pair have the same length is made easier by common EDA tools (including KiCad, Autodesk Eagle, and Altium) that provide functionality for making the routing of differential pairs a semi-automated affair.

Having It Both Ways: SerDes

Schematic diagram of a SerDes link.

A Serializer/Deserializer (SerDes) is a functional block that is used to convert between serial data and parallel interfaces. Inside an FPGA or communications ASIC the data is usually transferred on a parallel interface, with the parallel data being passed into the SerDes block, where it is serialized for transmission or vice-versa. The PCIe PMA (physical media attachment) layer is the part of the protocol’s physical layer where SerDes in PCIe is located. The exact SerDes implementation differs per ASIC vendor, but their basic functionality is generally the same.

When it comes to producing your own PCIe hardware, an easy way to get started is to use an FPGA with SerDes blocks. One still needs to load the FPGA with a design that includes the actual PCIe data link and transaction layers, but these are often available for free, such as with Xilinx FPGAs.

PCIe HDL Cores

Recent Xilinx FPGAs not only integrate SerDes and PCIe end-point features, but Xilinx also provides free-as-in-beer PCIe IP blocks (limited to x8 at PCIe v2.1) for use with these hardware features that (based on the license) can be used commercially. If one wishes for a slightly less proprietary solution, there are Open Source PCIe cores available as well, such as this PCIe Mini project that was tested on a Spartan 6 FPGA on real hardware and provides a PCIe-to-Wishbone bridge, along with its successor project, which targets Kintex Ultrascale+ FPGAs.

On the other side of the fence, the Intel (formerly Altera) IP page seems to strongly hint at giving their salesperson a call for a personalized quote. Similarly, Lattice has their sales people standing by to take your call for their amazing PCIe IP blocks. Here one can definitely see the issue with a protocol like PCIe: unlike ISA or PCI devices which could be cobbled together with a handful of 74xx logic chips and the occasional microcontroller or CPLD, PCIe requires fairly specialized hardware.

Even if one buys the physical hardware (e.g. FPGA), use of the SerDes hardware blocks with PCIe functionality may still require a purchase or continuous license (e.g. for the toolchain) depending on the chosen solution. At the moment it seems that Xilinx FPGAs are the ‘go-to’ solution here, but this may change in the future.

Also of note here is that the PCIe protocol itself is officially available to members of PCI-SIG. This complicates an already massive undertaking if one wanted to implement the gargantuan PCIe specification from scratch, and makes it even more admirable that there are Open Source HDL cores at all for PCIe.

Putting it Together

PCI Express x1 edge connector drawing with pin numbers.

The basic board design for a PCIe PCB is highly reminiscent of that of PCI cards. Both use an edge connector with a similar layout. PCIe edge connectors are 1.6 mm thick, use a 1.0 mm pitch (compared to 1.27 mm for PCI), a 1.4 mm spacing between the contact fingers and the same 20° chamfer angle as PCI edge connectors. A connector has at least 36 pins, but can have 164 pins in an x16 slot configuration.

PCIe card edge connector cross section.

An important distinction with PCIe is that there is no fixed length of the edge connector, as with ISA, PCI and similar interfaces. Those have a length that’s defined by the width of the bus. In the case of PCIe, there is no bus, so instead we get the ‘core’ connector pin-out with a single lane (x1 connector). To this single lane additional ‘blocks’ can be added, each adding another lane that gets bonded so that the bandwidth of all connected lanes can be used by a single device.

In addition to regular PCIe cards, one can also pick from a range of different PCIe devices, such as Mini-PCIe. Whatever form factor one chooses, the basic circuitry does not change.

This raises the interesting question of what kind of speeds your PCIe device will require. On one hand more bandwidth is nice, on the other hand it also requires more SerDes channels, and not all PCIe slots allow for every card to be installed. While any card of any configuration (x1, x4, x8 or x16) will fit and work in an x16 slot (mechanical), smaller slots may not physically allow a larger card to fit. Some connectors have an ‘open-ended’ configuration, where you can fit for example an x16 card into an x1 slot if so inclined. Other connectors can be ‘modded’ to allow such larger cards to fit unless warranty is a concern.

The flexibility of PCIe means that the bandwidth scales along with the number of bonded lanes as well as the PCIe protocol version. This allows for graceful degradation, where if, say, a PCIe 3.0 card is inserted into a slot that is capable of only PCIe 1.0, the card will still be recognized and work. The available bandwidth will be severely reduced, which may be an issue for the card in question. The same is true with available PCIe lanes, bringing to mind the story of cryptocoin miners who split up x16 PCIe slots into 16 x1 slots, so that they could run an equal number of GPUs or specialized cryptocoin mining cards.

It’s Full of PCIe

This flexibility of PCIe has also led to PCIe lanes being routed out to strange and wonderful new places. Specifications like Intel’s Thunderbolt (now USB 4) include room for multiple lanes of PCIe 3.0, which enables fast external storage solutions as well as external video cards that work as well as internal ones.

Solid-state storage has moved over from the SATA protocol to NVMe, which essentially defines a storage device that is directly attached to the PCIe controller. This change has allowed NVMe storage devices to be installed or even directly integrated on the main logic board.

Clearly PCIe is the thing to look out for these days. We have even seen that System-on-Chips (SoCs), such as the one found on Raspberry Pi 4 boards, now come with a single PCIe lane that has already been hacked to expand those boards in ways previously thought impossible. As PCIe becomes more pervasive, this seems like a good time to become more acquainted with it.

AWS PrivateLink for Amazon S3 is Now Generally Available

The content below is taken from the original ( AWS PrivateLink for Amazon S3 is Now Generally Available), to continue reading please visit the site. Remember to respect the Author & Copyright.

At AWS re:Invent, we pre-announced that AWS PrivateLink for Amazon S3 was coming soon, and soon has arrived — this new feature is now generally available. AWS PrivateLink provides private connectivity between Amazon Simple Storage Service (S3) and on-premises resources using private IPs from your virtual network.

Way back in 2015, S3 was the first service to add a VPC endpoint; these endpoints provide a secure connection to S3 that does not require a gateway or NAT instances. Our customers welcomed this new flexibility but also told us they needed to access S3 from on-premises applications privately over secure connections provided by AWS Direct Connect or AWS VPN.

Our customers are very resourceful and by setting up proxy servers with private IP addresses in their Amazon Virtual Private Clouds and using gateway endpoints for S3, they found a way to solve this problem. While this solution works, proxy servers typically constrain performance, add additional points of failure, and increase operational complexity.

We looked at how we could solve this problem for our customers without these drawbacks and PrivateLink for S3 is the result.

With this feature you can now access S3 directly as a private endpoint within your secure, virtual network using a new interface VPC endpoint in your Virtual Private Cloud. This extends the functionality of existing gateway endpoints by enabling you to access S3 using private IP addresses. API requests and HTTPS requests to S3 from your on-premises applications are automatically directed through interface endpoints, which connect to S3 securely and privately through PrivateLink.

Interface endpoints simplify your network architecture when connecting to S3 from on-premises applications by eliminating the need to configure firewall rules or an internet gateway. You can also gain additional visibility into network traffic with the ability to capture and monitor flow logs in your VPC. Additionally, you can set security groups and access control policies on your interface endpoints.
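As a hedged sketch (the Region, VPC, subnet, and security group IDs below are placeholders), creating an S3 interface endpoint with the AWS CLI looks like this:

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.s3 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0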

Available Now
PrivateLink for S3 is available in all AWS Regions. AWS PrivateLink is available at a low per-GB charge for data processed and a low hourly charge for interface VPC endpoints. We hope you enjoy using this new feature and look forward to receiving your feedback. To learn more, check out the PrivateLink for S3 documentation.

Try out AWS PrivateLink for Amazon S3 today, and happy storing.

— Martin

Azure achieves its first PCI 3DS certification

The content below is taken from the original ( Azure achieves its first PCI 3DS certification), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure’s PCI 3DS Attestation of Compliance, PCI 3DS Shared Responsibility Matrix, and PCI 3DS whitepaper are now available.

Plex Media Server – Desktop Monitoring

The content below is taken from the original ( Plex Media Server – Desktop Monitoring), to continue reading please visit the site. Remember to respect the Author & Copyright.

Plex Media Server – Desktop Monitoring – Rainmeter Forums

This is my release for the Plex Media Server Desktop Monitor (as of this writing there is not a Rainmeter skin to do this). It will allow you to monitor the current stream count from the Plex API directly. This skin has a long way to go, but it’s in a decent enough state for public use.

  1. Download Rainmeter: https://github.com/rainmeter/rainmeter/releases/download/v4.3.0.3321/Rainmeter-4.3.1.exe
  2. Download the skin: https://www.deviantart.com/bdrumm/art/Plex-Desktop-Monitoring-1-0-865212103

Instructions

Known Issues

When a user starts a stream with media from Tidal, the Stream Decision and Media Decision are missing, which messes with the details of every stream from that point.

Submitted by /u/xX_limitless_Xx to r/PleX.

Retired Microsoft engineer Dave Plummer talks about the history of task manager

The content below is taken from the original ( Retired Microsoft engineer Dave Plummer talks about the history of task manager), to continue reading please visit the site. Remember to respect the Author & Copyright.

Dave Plummer is the original author of the Windows Task Manager, a tool known to many around the world. In a series on YouTube he talks about its history and how he wrote it. Another credit to Dave Plummer’s name is that he also wrote Space Cadet Pinball for Windows.

It gives a unique insight into Task Manager and how it came to be:

Part 1

Part 2

Source code review of Windows Taskmanager

Submitted by /u/a_false_vacuum to r/sysadmin.

Five more free services now available in the Azure free account

The content below is taken from the original ( Five more free services now available in the Azure free account), to continue reading please visit the site. Remember to respect the Author & Copyright.

Free amounts of Archive Storage, Container Registry, Load Balancer, Service Bus, and VPN Gateway are now available for eligible Azure free account users, as well as increased free amounts of Cosmos DB.

Getting Started with Amazon Managed Service for Prometheus

The content below is taken from the original ( Getting Started with Amazon Managed Service for Prometheus), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon Managed Service for Prometheus (AMP) is a Prometheus-compatible monitoring service for container infrastructure and application metrics that makes it easy for customers to securely monitor container environments at scale. Prometheus supports a variety of metric collection mechanisms, including a number of libraries and servers that help export application-specific metrics from third-party systems as Prometheus metrics. You can also use AWS Distro for OpenTelemetry to ingest application metrics from your environment. With AMP, you can use the same open-source Prometheus data model and query language as you do today to monitor the performance of your containerized workloads. There are no up-front investments required to use the service, and you only pay for the number of metrics ingested.

Customers using Prometheus in their container environments face challenges in managing a highly-available, scalable and secure Prometheus server environment, infrastructure for long-term storage, and access control. AMP solves these problems by providing a fully-managed environment which is tightly integrated with AWS Identity and Access Management (IAM) to control authentication and authorization. You can start using AMP by following these two simple steps:

  • Create an AMP workspace
  • Configure your Prometheus server to remote-write into the AMP workspace

Once configured, you will be able to use the fully-managed AMP environment for ingesting, storing and querying your metrics. In this blog post, we will walk through the steps required to set up AMP to ingest custom Prometheus metrics collected from a containerized workload deployed to an Amazon EKS cluster and then query and visualize them using Grafana.

Architecture

The figure below illustrates the overall architecture of Amazon Managed Service for Prometheus and its interaction with other components.

AMP architecture

Setting up a workspace to collect Prometheus metrics

To get started, you will first create a workspace. A workspace is the conceptual location where you ingest, store, and query your Prometheus metrics that were collected from application workloads, isolated from other AMP workspaces. One or more workspaces may be created in each Region within the same AWS account and each workspace can be used to ingest metrics from multiple workloads that export metrics in Prometheus-compatible format.

A customer-managed IAM policy with the following permissions should be associated with the IAM user that manages a workspace.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aps:CreateWorkspace",
                "aps:DeleteWorkspace",
                "aps:DescribeWorkspace",
                "aps:ListWorkspaces",
                "aps:UpdateWorkspaceAlias"
            ],
            "Resource": "*"
        }
    ]
}

A workspace is created from the AWS Management Console as shown below:

Creating a workspace in Amazon Managed Service for Prometheus

List of workspaces

Alternatively, you can also create a workspace using AWS CLI as documented here.
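For reference, the CLI version is a short command along these lines; the alias is just an example name:

# Create a workspace and note the workspaceId returned in the response
aws amp create-workspace --alias my-prometheus-workspace

# Confirm the workspace exists
aws amp list-workspaces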

Next, as an optional step, you create an interface VPC endpoint in order to securely access the managed service from resources deployed within your VPC. This will ensure that data ingested by the managed service do not leave the VPC in your AWS account. You can do this by using AWS CLI as follows. Ensure the placeholder string such as VPC_ID, AWS_REGION and others are replaced with appropriate values.

aws ec2 create-vpc-endpoint \
--vpc-id <VPC_ID> \
--service-name com.amazonaws.<AWS_REGION>.aps-workspaces \
--security-group-ids <SECURITY_GROUP_IDS> \
--vpc-endpoint-type Interface \
--subnet-ids <SUBNET_IDS>

In the above command, SECURITY_GROUP_IDS represents a list of security groups associated with the VPC interface endpoint to allow communication between the endpoint network interface and the resources in your VPC, such as worker nodes of the Amazon EKS cluster. SUBNET_IDS represents the list of subnets where these resources reside.

Configuring permissions

Metrics collectors (such as a Prometheus server deployed to an Amazon EKS cluster) scrape operational metrics from containerized workloads running in the cluster and send them to AMP for long-term storage as well as for subsequent querying by monitoring tools. The data is sent using HTTP requests which must be signed with valid AWS credentials using the AWS Signature Version 4 algorithm to authenticate and authorize each client request for the managed service. In order to facilitate this, the requests are sent to an instance of AWS signing proxy which will forward the requests to the managed service.

The AWS signing proxy can be deployed to an Amazon EKS cluster to run under the identity of a Kubernetes service account. With IAM roles for service accounts (IRSA), you can associate an IAM role with a Kubernetes service account and thus provide AWS permissions to any pod that uses that service account. This follows the principle of least privilege by using IRSA to securely configure the AWS signing proxy to help ingest Prometheus metrics into AMP.

The shell script shown below can be used to execute the following actions after substituting the placeholder variable YOUR_EKS_CLUSTER_NAME with the name of your Amazon EKS cluster.

  1. Creates an IAM role with an IAM policy that has permissions to remote-write into an AMP workspace
  2. Creates a Kubernetes service account that is annotated with the IAM role
  3. Creates a trust relationship between the IAM role and the OIDC provider hosted in your Amazon EKS cluster

The script requires that you have installed the CLI tools kubectl and eksctl and have configured them with access to your Amazon EKS cluster.

##!/bin/bash
CLUSTER_NAME=YOUR_EKS_CLUSTER_NAME
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

PROM_SERVICE_ACCOUNT_NAMESPACE=prometheus
GRAFANA_SERVICE_ACCOUNT_NAMESPACE=grafana
SERVICE_ACCOUNT_NAME=iamproxy-service-account
SERVICE_ACCOUNT_IAM_ROLE=EKS-AMP-ServiceAccount-Role
SERVICE_ACCOUNT_IAM_ROLE_DESCRIPTION="IAM role to be used by a K8s service account with write access to AMP"
SERVICE_ACCOUNT_IAM_POLICY=AWSManagedPrometheusWriteAccessPolicy
SERVICE_ACCOUNT_IAM_POLICY_ARN=arn:aws:iam::$AWS_ACCOUNT_ID:policy/$SERVICE_ACCOUNT_IAM_POLICY
#
# Setup a trust policy designed for a specific combination of K8s service account and namespace to sign in from a Kubernetes cluster which hosts the OIDC Idp.
# If the IAM role already exists, then add this new trust policy to the existing trust policy
#
echo "Creating a new trust policy"
read -r -d '' NEW_TRUST_RELATIONSHIP <<EOF
 [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${GRAFANA_SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${PROM_SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
EOF
#
# Get the old trust policy, if one exists, and append it to the new trust policy
#
OLD_TRUST_RELATIONSHIP=$(aws iam get-role --role-name $SERVICE_ACCOUNT_IAM_ROLE --query 'Role.AssumeRolePolicyDocument.Statement[]' --output json)
COMBINED_TRUST_RELATIONSHIP=$(echo $OLD_TRUST_RELATIONSHIP $NEW_TRUST_RELATIONSHIP | jq -s add)
echo "Appending to the existing trust policy"
read -r -d '' TRUST_POLICY <<EOF
{
  "Version": "2012-10-17",
  "Statement": ${COMBINED_TRUST_RELATIONSHIP}
}
EOF
echo "${TRUST_POLICY}" > TrustPolicy.json
#
# Set up the permission policy that grants remote-write and query permissions for all AMP workspaces
#
read -r -d '' PERMISSION_POLICY <<EOF
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "aps:RemoteWrite",
            "aps:QueryMetrics",
            "aps:GetSeries",
            "aps:GetLabels",
            "aps:GetMetricMetadata"
         ],
         "Resource":"*"
      }
   ]
}
EOF
echo "${PERMISSION_POLICY}" > PermissionPolicy.json

#
# Create an IAM permission policy to be associated with the role, if the policy does not already exist
#
SERVICE_ACCOUNT_IAM_POLICY_ID=$(aws iam get-policy --policy-arn $SERVICE_ACCOUNT_IAM_POLICY_ARN --query 'Policy.PolicyId' --output text)
if [ "$SERVICE_ACCOUNT_IAM_POLICY_ID" = "" ]; 
then
  echo "Creating a new permission policy $SERVICE_ACCOUNT_IAM_POLICY"
  aws iam create-policy --policy-name $SERVICE_ACCOUNT_IAM_POLICY --policy-document file://PermissionPolicy.json 
else
  echo "Permission policy $SERVICE_ACCOUNT_IAM_POLICY already exists"
fi

#
# If the IAM role already exists, then just update the trust policy.
# Otherwise create one using the trust policy and permission policy
#
SERVICE_ACCOUNT_IAM_ROLE_ARN=$(aws iam get-role --role-name $SERVICE_ACCOUNT_IAM_ROLE --query 'Role.Arn' --output text)
if [ "$SERVICE_ACCOUNT_IAM_ROLE_ARN" = "" ]; 
then
  echo "$SERVICE_ACCOUNT_IAM_ROLE role does not exist. Creating a new role with a trust and permission policy"
  #
  # Create an IAM role for Kubernetes service account 
  #
  SERVICE_ACCOUNT_IAM_ROLE_ARN=$(aws iam create-role \
  --role-name $SERVICE_ACCOUNT_IAM_ROLE \
  --assume-role-policy-document file://TrustPolicy.json \
  --description "$SERVICE_ACCOUNT_IAM_ROLE_DESCRIPTION" \
  --query "Role.Arn" --output text)
  #
  # Attach the trust and permission policies to the role
  #
  aws iam attach-role-policy --role-name $SERVICE_ACCOUNT_IAM_ROLE --policy-arn $SERVICE_ACCOUNT_IAM_POLICY_ARN  
else
  echo "$SERVICE_ACCOUNT_IAM_ROLE_ARN role already exists. Updating the trust policy"
  #
  # Update the IAM role for Kubernetes service account with a with the new trust policy
  #
  aws iam update-assume-role-policy --role-name $SERVICE_ACCOUNT_IAM_ROLE --policy-document file://TrustPolicy.json
fi
echo $SERVICE_ACCOUNT_IAM_ROLE_ARN

# EKS cluster hosts an OIDC provider with a public discovery endpoint.
# Associate this Idp with AWS IAM so that the latter can validate and accept the OIDC tokens issued by Kubernetes to service accounts.
# Doing this with eksctl is the easier and best approach.
#
eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve

The script shown above creates an IAM role named EKS-AMP-ServiceAccount-Role and sets up a trust policy that allows a Kubernetes service account named iamproxy-service-account in the prometheus and grafana namespaces to assume it. The role is attached to a customer-managed IAM policy which comprises the set of permissions shown below in order to send data over to Amazon Managed Service for Prometheus.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aps:RemoteWrite",
                "aps:GetSeries",
                "aps:GetLabels",
                "aps:GetMetricMetadata"
            ],
            "Resource": "*"
        }
    ]
}

Deploying Prometheus server

Amazon Managed Service for Prometheus does not directly scrape operational metrics from containerized workloads in a Kubernetes cluster. It requires users to deploy and manage a standard Prometheus server, or an OpenTelemetry agent such as the AWS Distro for OpenTelemetry Collector in their cluster to perform this task. The implementation in this blog uses a Prometheus server which is deployed to an Amazon EKS cluster using Helm charts as follows:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
kubectl create ns prometheus
helm install prometheus-for-amp prometheus-community/prometheus -n prometheus

The AWS signing proxy can now be deployed to the Amazon EKS cluster with the following YAML manifest. Substitute the placeholder ${AWS_REGION} with the appropriate AWS Region name, replace ${IAM_PROXY_PROMETHEUS_ROLE_ARN} with the ARN of the EKS-AMP-ServiceAccount-Role you created, and replace ${WORKSPACE_ID} with the AMP workspace ID you created earlier. The signing proxy references a Docker image from a public repository in ECR. Alternatively, you can follow the steps outlined here under the section titled Deploy the AWS signing proxy and build a Docker image from the source code for the AWS SigV4 Proxy.

Create a file called amp_ingest_override_values.yaml with the following content in it.


serviceAccounts:
    server:
        name: "iamproxy-service-account"
        annotations:
            eks.amazonaws.com/role-arn: "${IAM_PROXY_PROMETHEUS_ROLE_ARN}"
server:
  sidecarContainers:
    aws-sigv4-proxy-sidecar:
        image: public.ecr.aws/aws-observability/aws-sigv4-proxy:1.0
        args:
        - --name
        - aps
        - --region
        - ${AWS_REGION}
        - --host
        - aps-workspaces.${AWS_REGION}.amazonaws.com
        - --port
        - :8005
        ports:
        - name: aws-sigv4-proxy
          containerPort: 8005
  statefulSet:
      enabled: "true"
  remoteWrite:
      - url: http://localhost:8005/workspaces/${WORKSPACE_ID}/api/v1/remote_write

Execute the following command to modify the Prometheus server configuration, deploy the signing proxy, and configure the remoteWrite endpoint:

helm upgrade --install prometheus-for-amp prometheus-community/prometheus -n prometheus -f ./amp_ingest_override_values.yaml

With the above configurations, the Prometheus server is now ready to scrape metrics from services deployed in the cluster and send them to the specified workspace within Amazon Managed Service for Prometheus via the AWS signing proxy.
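To sanity-check that remote writes are flowing, you can inspect the logs of the SigV4 proxy sidecar; the pod name below is a placeholder that you would replace with the actual name returned by kubectl get pods.

# Find the Prometheus server pod
kubectl get pods -n prometheus

# Tail the signing proxy sidecar logs and look for successful (HTTP 200) responses from AMP
kubectl logs -n prometheus <prometheus-server-pod-name> -c aws-sigv4-proxy-sidecar --tail=20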

An application instrumented with a Prometheus client library is now deployed as a ReplicaSet to the Amazon EKS cluster. It tracks the number of incoming HTTP requests using a Prometheus Counter named http_requests_total and exposes this data over HTTP at the endpoint /metrics. Invoking this endpoint gives the following output, which is scraped periodically by the Prometheus server.

# HELP http_requests_total Total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{job="recommender",path="/user/product",} 86.0
http_requests_total{job="recommender",path="/popular/product",} 128.0
http_requests_total{job="recommender",path="/popular/category",} 345.0

Visualizing metrics using Grafana

The metrics collected in a workspace within Amazon Managed Service for Prometheus can be visualized using Grafana. Grafana v7.3.x has added a new feature to support AWS Signature Version 4 (SigV4) authentication and we will be using that version here. A self-managed Grafana installation is deployed to the Amazon EKS cluster using Helm charts as follows:

helm repo add grafana https://grafana.github.io/helm-charts
kubectl create ns grafana
helm install grafana-for-amp grafana/grafana -n grafana

Update your Grafana server to use the AWS signing proxy

Create a new file and name it amp_query_override_values.yaml. This file will be used to update your Grafana deployment to enable the Sigv4 protocol which the AWS signing proxy uses to authenticate.


serviceAccount:
    name: "iamproxy-service-account"
    annotations:
        eks.amazonaws.com/role-arn: "${IAM_PROXY_PROMETHEUS_ROLE_ARN}"
grafana.ini:
  auth:
    sigv4_auth_enabled: true

Now execute the following command to update your Grafana environment.

helm upgrade --install grafana-for-amp grafana/grafana -n grafana -f ./amp_query_override_values.yaml

You can now access Grafana by forwarding the port to http://localhost:5001 using the following command. Replace the string GRAFANA_POD_NAME with the actual Grafana pod name you just created.

kubectl port-forward -n grafana pods/GRAFANA_POD_NAME 5001:3000

Next, open Grafana from an internet browser using the above URL and login with the admin username. The password is obtained from the Kubernetes secret as follows:

kubectl get secrets grafana-for-amp -n grafana -o jsonpath='{.data.admin-password}'|base64 --decode

Before we can visualize the metrics in Grafana, it has to be configured with one or more data sources. Here, we will specify the workspace within Amazon Managed Service for Prometheus as a data source, as shown below. In the URL field, specify the Endpoint – query URL displayed in the AMP workspace details page without the /api/v1/query string at the end of the URL.

Configure AMP data source

You’re now ready to query metrics data for the Prometheus Counter http_requests_total stored in the managed service workspace and visualize the rate of HTTP requests over a trailing 5-minute period using a Prometheus query as follows:
sum(rate(http_requests_total{exported_job="recommender"}[5m])) by (path)

The figure below illustrates how to visualize this metric in Grafana across the different path labels captured in the Prometheus Counter.

Visualizing rate of HTTP requests using metrics retrieved from Amazon Managed Service for Prometheus

PromQL and metric visualization
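You can also run the same PromQL directly against the AMP query endpoint, outside of Grafana. One way to do this, assuming you have installed the open-source awscurl tool (which signs requests with SigV4), is a GET request along these lines; replace the Region and workspace ID with your own values.

awscurl --service aps --region us-east-1 \
  "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/<WORKSPACE_ID>/api/v1/query?query=sum(rate(http_requests_total[5m]))%20by%20(path)"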

In addition to service metrics collected from application workloads, we can also query system metrics captured by Prometheus for all the containers and nodes in a Kubernetes cluster. For example, the Prometheus Counter node_network_transmit_bytes_total captures the amount of data transmitted from each node of a cluster. The figure below visualizes the rate of data transfer from each node using a Prometheus query as follows: sum(rate(node_network_transmit_bytes_total[5m])) by (instance)

Visualizing rate of data transfer using metrics retrieved from Amazon Managed Service for Prometheus

PromQL and metric data visualization

Users may also visualize these metrics using Amazon Managed Service for Grafana (AMG) as outlined in this blog post.

Concluding remarks

Prometheus is an extremely popular open source monitoring tool that provides powerful querying features and has wide support for a variety of workloads. This blog post outlined the steps involved in using Amazon Managed Service for Prometheus to securely ingest, store, and query Prometheus metrics that were collected from application workloads deployed to an Amazon EKS cluster. Amazon Managed Service for Prometheus can also be used in conjunction with any Prometheus-compatible monitoring and alerting service to collect metrics from other container environments such as Amazon ECS, self-managed Kubernetes on AWS, or on-premises infrastructure. Workspaces within Amazon Managed Service for Prometheus serve as a valid data source for Grafana. Therefore, users can visualize these metrics using a self-managed installation of Grafana or use Amazon Managed Service for Grafana.

Authors

Viji Sarathy is a Senior Solutions Architect at Amazon Web Services. He has 20+ years of experience in building large-scale, distributed software systems in a broad range of verticals, both traditional and cloud native software stacks. His current interests are in the area of Container Services and Machine Learning. He has an educational background in Aerospace Engineering, earning his Ph.D from the University of Texas at Austin, specializing in Computational Mechanics. He is an avid runner and cyclist.

 

 

Imaya Kumar Jagannathan is a Senior Solution Architect focused on Amazon CloudWatch and AWS X-Ray. He is passionate about monitoring and observability and has a strong application development and architecture background. He likes working on distributed systems and is excited to talk about microservice architecture design. He loves programming in C# and working with containers and serverless technologies.

Amazon Managed Grafana – Getting Started

The content below is taken from the original ( Amazon Managed Grafana – Getting Started), to continue reading please visit the site. Remember to respect the Author & Copyright.

Amazon Managed Service for Grafana (AMG) is a fully managed and secure data visualization service that enables customers to instantly query, correlate, and visualize operational metrics, logs, and traces for their applications from multiple data sources. AMG is based on the open source Grafana project, a widely deployed data visualization tool popular for its extensible data source support. Developed together with Grafana Labs, AMG manages the provisioning, setup, scaling, and maintenance of Grafana, eliminating the need for customers to do this themselves. Customers also benefit from built-in security features that enable compliance with governance requirements, including single sign-on, fine-grained data access control, and audit reporting. AMG is integrated with AWS data sources that collect operational data, such as Amazon CloudWatch, Amazon Elasticsearch Service, Amazon Timestream, AWS IoT SiteWise, AWS X-Ray, and Amazon Managed Service for Prometheus (AMP), and provides plug-ins to popular open-source databases, third-party ISV monitoring tools, as well as other cloud services. With AMG you can easily visualize information from multiple AWS services, AWS accounts, and Regions in a single Grafana dashboard.

You can also perform an in-place upgrade to Grafana Enterprise to get access to additional features and plugins. With Grafana Enterprise, you can consolidate your data from AppDynamics, DataDog, Dynatrace, New Relic, MongoDB, Oracle Database, ServiceNow, Snowflake, Splunk, and Wavefront. Additionally, you can access support and training content directly from Grafana Labs to help you explore and adopt Grafana’s advanced features easily. Click here to learn more.

Before you begin

To use AMG in a flexible and convenient manner, we chose to leverage AWS Single Sign-On (SSO) for user management. AWS SSO is available once you’ve enabled AWS Organizations. To check whether an AWS account is part of an AWS Organization, head over to https://console.aws.amazon.com/organizations/. You should see a view akin to the following; if AWS Organizations is not yet activated, go ahead and create one:

Existing AWS Organization

AMG integrates with AWS SSO so that you can easily assign users and groups from your existing user directory, such as Active Directory, LDAP, or Okta, to the AMG workspace and use single sign-on with your existing user ID and password. This allows you to enforce your company’s existing login security requirements, such as two-factor authentication and password complexity.

If you don’t have an existing user directory or do not want to integrate with external identity providers through the Security Assertion Markup Language (SAML) 2.0 standard, you can create local users and passwords within AWS SSO and use them to sign in to the Grafana workspace:

SSO console: add user

With the preparations around AWS SSO and AWS Organizations out of the way, we’re ready to get into the core AMG setup.

Setting up AMG on your AWS account

You can easily spin up on-demand, autoscaled Grafana workspaces (virtual Grafana servers) that enable you to create unified dashboards across multiple data sources. Before we can use AMG for the following example, we need to set it up. In the following, we use the AWS console to walk you through the required steps and comment on things to consider at each step.

After you hit the Create workspace button in the upper-right corner of the AMG console landing page, give your new workspace a name and, optionally, a description:

Create new AMG workspace

Next, you need to define user and data access permissions. Unless you have a use case where you have to or want to manage the underlying IAM role and policy yourself (to have fine-grained control for other AWS services), we suggest that you let AMG manage the creation of IAM roles and policies for accessing your AWS Services data.

In this step you also have to enable AWS Single Sign-On (SSO) for AMG since this is how we manage user authentication to Grafana workspaces:

Configure AMG workspace settings

If you haven’t set up users via AWS SSO as mentioned above you can use the inline experience offered by AMG and click Create user at this step:

Enable SSO, create user inline

Then fill in the details in the form, that is, provide the user’s email address, first name, and last name:

Create SSO user

You should quickly get a confirmation mail with the invitation:

Invitation mail for SSO user

The user (in our case grafana-user) can now log in to their environment using either the URL provided by SSO (something like d-xxxxx.awsapps.com/start) or the specific Grafana workspace URL from the workspace details console page. There, they will be prompted to change their password using the one-time password that you, as an admin, get from SSO and share with them (see the screenshot above).

Back to the AMG workspace setup: we’re almost done. We now need to tell AMG which data sources we want to consume and visualize data from. To be able to try out everything, we selected all of the following, but you may want to restrict the selection to the subset your use case needs:

AMG workspace permissions

As usual, you will have a final opportunity to review your settings and then confirm the creation of the AMG workspace:

AMG workspace creation review

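If you would rather script this setup than click through the console, the AWS SDKs expose the same operations. The sketch below uses boto3’s Managed Grafana client to create a workspace roughly equivalent to the console choices above; the workspace name, region, and data source list are illustrative assumptions, not values from this walkthrough.

    import boto3

    grafana = boto3.client("grafana", region_name="us-west-2")  # assumed region

    response = grafana.create_workspace(
        workspaceName="my-amg-workspace",               # assumed name
        workspaceDescription="Getting started with AMG",
        accountAccessType="CURRENT_ACCOUNT",            # this account only
        authenticationProviders=["AWS_SSO"],            # sign in with AWS SSO users
        permissionType="SERVICE_MANAGED",               # let AMG create the IAM role and policies
        workspaceDataSources=[                          # data sources to grant access to
            "PROMETHEUS", "CLOUDWATCH", "XRAY", "TIMESTREAM", "SITEWISE",
        ],
    )

    workspace = response["workspace"]
    print(workspace["id"], workspace["status"], workspace.get("endpoint"))

You would still assign AWS SSO users or groups to the new workspace, as described next.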

Once the workspace is created, you can assign users access to the Grafana workspace. You can either assign an individual user from AWS SSO, or you can choose to assign a user group:

AMG workspace, assign user

Now we’re done from an administrative point of view, so let’s switch gears and use the SSO user from above to access the AMG workspace. Click the Grafana workspace URL on the workspace details page, as shown in the image above, to log into your Grafana workspace with the SSO credentials (user grafana-user), and you should see AMG enabled and ready for you to access:

SSO landing page with AMG listed

Now all you have to do is click the AMG icon, and you’re logged into your Grafana workspace:

Grafana landing page

That completes the setup, and we’re ready to use AMG. Let’s start by consuming data from Prometheus.

Integration with Amazon Managed Service for Prometheus (AMP) and other data sources

Amazon Managed Service for Grafana supports a variety of data sources such as Amazon Managed Service for Prometheus, Amazon CloudWatch, AWS X-Ray, Amazon Elasticsearch Service, Amazon Timestream, AWS IoT SiteWise, and several others. The full list of data sources can be found here.

AMG can auto-discover the accounts and resources you have for the AWS data sources and auto-configure them, based on the permissions set up using CloudFormation.

Auto-detected AWS data sources

Provisioning a datasource

This option discovers the accounts and resources you have for the six AWS services that AMG natively integrates with. Based on the permissions you granted during workspace creation, you can now just check the box for the account or resource you want to add, and AMG will automatically configure the data source with the right IAM role permissions without you having to manually copy and paste.

Manually configuring an AWS data source

You can also manually configure the data sources by following the steps below:

  • Select the Datasource plugin
  • Choose from one of the following authentication mechanisms:
    • Access & Secret key
    • Credentials file
    • ARN of the AssumeRole to authenticate into your AWS account
  • Select a default AWS Region (optional)

In this example, we are connecting to an Amazon Managed Service for Prometheus (AMP) data source:

Connecting to AMP datasource manually

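The same connection can also be made through Grafana’s HTTP API rather than the UI. The sketch below is one possible approach: it assumes you have already created a Grafana API key for the workspace (for example with the CreateWorkspaceApiKey operation), the workspace URL, key, region, and AMP workspace ID shown are placeholders, and the sigV4 settings follow Grafana’s documented options for the Prometheus data source.

    import requests

    GRAFANA_URL = "https://g-abcd1234.grafana-workspace.us-west-2.amazonaws.com"  # placeholder
    API_KEY = "<grafana-api-key>"   # placeholder; keep real keys out of source code
    REGION = "us-west-2"
    AMP_WORKSPACE_ID = "ws-EXAMPLE"

    datasource = {
        "name": "Amazon Managed Service for Prometheus",
        "type": "prometheus",
        "access": "proxy",
        "url": f"https://aps-workspaces.{REGION}.amazonaws.com/workspaces/{AMP_WORKSPACE_ID}",
        "jsonData": {
            "sigV4Auth": True,           # sign queries to AMP with SigV4
            "sigV4AuthType": "default",  # use the default credential provider chain
            "sigV4Region": REGION,
        },
    }

    resp = requests.post(
        f"{GRAFANA_URL}/api/datasources",
        json=datasource,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())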

Once connected, you will be able to create dashboard panels by selecting the newly connected data source and query metrics using PromQL. Here is a screenshot showing metrics from an EKS cluster queried on an AMP data source.

EKS Pod Metrics dashboard

Out-of-the-box Dashboards

Several AWS data source plugins come with many out-of-the-box dashboards to help you get started quickly. You can find them under the Dashboards tab, as shown below.

List of out-of-the-box dashboards for Amazon CloudWatch

You can easily import a dashboard by clicking the Import button. Below you can see a screenshot of the Amazon EC2 dashboard.

EC2 Dashboard

Creating custom dashboards

Grafana supports a variety of panel visualizations for building dashboards from many different data sources. Below you can see a dashboard showing AWS X-Ray trace data from the One Observability Demo application. You can use AWS X-Ray filter expressions to create dashboard panels in Grafana to visualize trace data as shown below.

Custom dashboard showing AWS X-Ray trace data

You can also investigate a single trace to see the segment timeline by simply clicking on a Trace ID. Grafana also provides deep links to the AWS X-Ray console where you can use tools such as X-Ray Analytics.

You can also take advantage of a large collection of pre-built Grafana dashboards created by the community; these can be easily imported into your Grafana workspace and provide a domain-specific quick start for visualizing and investigating observability data from a variety of popular data sources.

Conclusion

With AMG (and AMP), you can now use Grafana and Prometheus without having to worry about the operational management of maintaining infrastructure resources, and let AWS take care of the undifferentiated heavy lifting.

More importantly, you can rely on open standards such as CNCF OpenTelemetry along with our own AWS Distro for OpenTelemetry that, together with our open source-based, fully managed services for Grafana and Prometheus, enable you to build powerful observability on AWS.

Authors

Imaya Kumar Jagannathan

Imaya is a Senior Solution Architect focused on Amazon CloudWatch and AWS X-Ray. He is passionate about Monitoring and Observability and has a strong application development and architecture background. He likes working on distributed systems and is excited to talk about microservice architecture design. He loves programming in C#, working with Containers and Serverless technologies.

 

Michael Hausenblas

Michael is an Open Source Product Developer Advocate in the AWS container service team covering open source observability and service meshes. Before AWS, Michael worked at Red Hat, Mesosphere, MapR and as a PostDoc in applied research. Reach him on Twitter via @mhausenblas.

Securing the post-quantum world

The content below is taken from the original ( Securing the post-quantum world), to continue reading please visit the site. Remember to respect the Author & Copyright.

Securing the post-quantum world

Quantum computing is inevitable; cryptography prepares for the future

Quantum computing began in the early 1980s. It operates on principles of quantum physics rather than the limitations of circuits and electricity, which is why it is capable of processing highly complex mathematical problems so efficiently. Quantum computing could one day achieve things that classical computing simply cannot.

The evolution of quantum computers has been slow. Still, work is accelerating, thanks to the efforts of academic institutions such as Oxford, MIT, and the University of Waterloo, as well as companies like IBM, Microsoft, Google, and Honeywell. IBM has held a leadership role in this innovation push and has named optimization the most likely application for consumers and organizations alike. Honeywell expects to release what it calls the “world’s most powerful quantum computer” for applications like fraud detection, optimization for trading strategies, security, machine learning, and chemistry and materials science.

In 2019, the Google Quantum Artificial Intelligence (AI) team announced that their 53-qubit (analogous to bits in classical computing) machine had achieved “quantum supremacy.” This was the first time a quantum computer was able to solve a problem faster than any classical computer in existence. This was considered a significant milestone.

Quantum computing will change the face of Internet security forever — particularly in the realm of cryptography, which is the way communications and information are secured across channels like the Internet. Cryptography is critical to almost every aspect of modern life, from banking to cellular communications to connected refrigerators and systems that keep subways running on time. This ultra-powerful, highly sophisticated new generation of computing has the potential to unravel decades of work that have been put into developing the cryptographic algorithms and standards we use today.

Quantum computers will crack modern cryptographic algorithms

Quantum computers can take a very large integer and find its prime factors extremely rapidly by using Shor’s algorithm. Why is this so important in the context of cryptographic security?

Most cryptography today is based on algorithms that incorporate difficult problems from number theory, like factoring. The forerunner of nearly all modern cryptographic schemes is RSA (Rivest-Shamir-Adleman), which was devised back in 1977. Basically, every participant of a public key cryptography system like RSA has both a public key and a private key. To send a secure message, data is encoded as a large number and scrambled using the public key of the person you want to send it to. The person on the receiving end can decrypt it with their private key. In RSA, the public key is a large number, and the private key is its prime factors. With Shor’s algorithm, a quantum computer with enough qubits could factor large numbers. For RSA, someone with a quantum computer can take a public key and factor it to get the private key, which allows them to read any message encrypted with that public key. This ability to factor numbers breaks nearly all modern cryptography. Since cryptography is what provides pervasive security for how we communicate and share information online, this has significant implications.
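A toy example makes the relationship concrete. The snippet below uses deliberately tiny textbook primes (never do this for real keys): the public key is the modulus n and exponent e, the private key comes from the prime factors of n, and anyone who can factor n can rebuild the private exponent and decrypt. Shor’s algorithm is what would make that factoring step fast for realistically sized moduli.

    # Textbook RSA with tiny primes, purely to illustrate why factoring breaks it.
    p, q = 61, 53                      # the private prime factors
    n = p * q                          # public modulus (3233)
    e = 17                             # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (requires Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)

    # An attacker who factors n (trivial here, infeasible classically at real key sizes,
    # but efficient on a large quantum computer via Shor's algorithm) recovers the key:
    factor = next(f for f in range(2, n) if n % f == 0)
    other = n // factor
    recovered_d = pow(e, -1, (factor - 1) * (other - 1))
    print(pow(ciphertext, recovered_d, n))  # prints 65, the original message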

Theoretically, if an adversary were to gain control of a quantum computer, they could create total chaos. They could create cryptographic certificates and impersonate banks to steal funds, disrupt Bitcoin, break into digital wallets, and access and decrypt confidential communications. Some liken this to Y2K. But, unlike Y2K, there’s no fixed date as to when existing cryptography will be rendered insecure. Researchers have been preparing and working hard to get ahead of the curve by building quantum-resistant cryptography solutions.

When will a quantum computer be built that is powerful enough to break all modern cryptography? By some estimates, it may take 10 to 15 years. Companies and universities have made a commitment to innovation in the field of quantum computing, and progress is certainly being made. Unlike classical computers, quantum computers rely on quantum effects, which only happen at the atomic scale. To instantiate a qubit, you need a particle that exhibits quantum effects like an electron or a photon. These particles are extremely small and hard to manage, so one of the biggest hurdles to the realization of quantum computers is how to keep the qubits stable long enough to do the expensive calculations involved in cryptographic algorithms.

Both quantum computing and quantum-resistant cryptography are works in progress

It takes a long time for hardware technology to develop and mature. Similarly, new cryptographic techniques take a long time to discover and refine. To protect today’s data from tomorrow’s quantum adversaries, we need new cryptographic techniques that are not vulnerable to Shor’s algorithm.

The National Institute of Standards and Technology (NIST) is leading the charge in defining post-quantum cryptography algorithms to replace RSA and ECC. There is a project currently underway to test and select a set of post-quantum computing-resistant algorithms that go beyond existing public-key cryptography. NIST plans to make a recommendation sometime between 2022 and 2024 for two to three algorithms for both encryption and digital signatures. As Dustin Moody, a NIST mathematician, points out, the organization wants to cover as many bases as possible: “If some new attack is found that breaks all lattices, we’ll still have something to fall back on.”

We’re following this work closely. Participants in the NIST process have developed high-speed implementations of post-quantum algorithms on different computer architectures. We’ve taken some of these algorithms and tested them in Cloudflare’s systems in various capacities. Last year, Cloudflare and Google performed the TLS Post-Quantum Experiment, which involved implementing and supporting new key exchange mechanisms based on post-quantum cryptography for all Cloudflare customers for a period of a few months. As an edge provider, Cloudflare was well positioned to turn on post-quantum algorithms for millions of websites to measure performance and use these algorithms to provide confidentiality in TLS connections. This experiment led us to some useful insights around which algorithms we should focus on for TLS and which we should not (sorry, SIDH!).

More recently, we have been working with researchers from the University of Waterloo and Radboud University on a new protocol called KEMTLS, which will be presented at Real World Crypto 2021. In our last TLS experiment, we replaced the key negotiation part of TLS with quantum-safe alternatives but continued to rely on digital signatures. KEMTLS is designed to be fully post-quantum and relies only on public-key encryption.

On the implementation side, Cloudflare team members including Armando Faz Hernandez and visiting researcher Bas Westerbaan have developed high-speed assembly versions of several of the NIST finalists (Kyber, Dilithium), as well as other relevant post-quantum algorithms (CSIDH, SIDH) in our CIRCL cryptography library written in Go.

A visualization of AVX2-optimized NTT for Kyber by Bas Westerbaan

Post-quantum security, coming soon?

Everything that is encrypted with today’s public key cryptography can be decrypted with tomorrow’s quantum computers. Imagine waking up one day, and everyone’s diary from 2020 is suddenly public. Although it’s impossible to find enough storage to keep a record of all the ciphertext sent over the Internet, there are current and active efforts to collect a lot of it. This makes deploying post-quantum cryptography as soon as possible a pressing privacy concern.

Cloudflare is taking steps to accelerate this transition. First, we endeavor to use post-quantum cryptography for most internal services by the end of 2021. Second, we plan to be among the first services to offer post-quantum cipher suites to customers as standards emerge. We’re optimistic that collaborative efforts among NIST, Microsoft, Cloudflare, and other computing companies will yield a robust, standards-based solution. Although powerful quantum computers are likely in our future, Cloudflare is helping to make sure the Internet is ready for when they arrive.

For more on Quantum Computing, check out my interview with Scott Aaronson and this segment by Sofía Celi and Armando Faz (in Spanish!) on Cloudflare TV.

Microsoft brings new process mining features to Power Automate

The content below is taken from the original ( Microsoft brings new process mining features to Power Automate), to continue reading please visit the site. Remember to respect the Author & Copyright.

Power Automate is Microsoft’s platform for streamlining repetitive workflows — you may remember it under its original name: Microsoft Flow. The market for these robotic process automation (RPA) tools is hot right now, so it’s no surprise that Microsoft, too, is doubling down on its platform. Only a few months ago, the team launched Power Automate Desktop, based on its acquisition of Softomotive, which helps users automate workflows in legacy desktop-based applications, for example. After a short time in preview, Power Automate Desktop is now generally available.

The real news today, though, is that the team is also launching a new tool, the Process Advisor, which is now in preview as part of the Power Automate platform. This new process mining tool provides users with a new collaborative environment where developers and business users can work together to create new automations.

The idea here is that business users are the ones who know exactly how a certain process works. With Process Advisor, they can now submit recordings of how they process a refund, for example, and then submit that to the developers, who are typically not experts in how these processes usually work.

What’s maybe just as important is that a system like this can identify bottlenecks in existing processes where automation can help speed up existing workflows.

Image Credits: Microsoft

“This goes back to one of the things that we always talk about for Power Platform, which, it’s a corny thing, but it’s that development is a team sport,” Charles Lamanna, Microsoft’s corporate VP for its Low Code Application Platform, told me. “That’s one of our big focuses: how to do bring people to collaborate and work together who normally don’t. This is great because it actually brings together the business users who live the process each and every day with a specialist who can build the robot and do the automation.”

The way this works in the backend is that Power Automate’s tools capture exactly what the users do and click on. All this information is then uploaded to the cloud and — with just five or six recordings — Power Automate’s systems can map how the process works. For more complex workflows, or those that have a lot of branches for different edge cases, you likely want more recordings to build out these processes, though.

Image Credits: Microsoft

As Lamanna noted, building out these workflows and process maps can also help businesses better understand the ROI of these automations. “This kind of map is great to go build an automation on top of it, but it’s also great because it helps you capture the ROI of each automation you do because you’ll know for each step, how long it took you,” Lamanna said. “We think that this concept of Process Advisor is probably going to be one of the most important engines of adoption for all these low-code/no-code technologies that are coming out. Basically, it can help guide you to where it’s worth spending the energy, where it’s worth training people, where it’s worth building an app, or using AI, or building a robot with our RPA like Power Automate.”

Lamanna likened this to the advent of digital advertising, which for the first time helped marketers quantify the ROI of advertising.

The new process mining capabilities in Power Automate are now available in preview.

Azure Digital Twins now generally available: Create IoT solutions that model the real world

The content below is taken from the original ( Azure Digital Twins now generally available: Create IoT solutions that model the real world), to continue reading please visit the site. Remember to respect the Author & Copyright.

Azure Digital Twins, virtually model the physical world.

Today, organizations are showing a growing appetite for solutions that provide a deeper understanding of not just assets, but also the complex interactions across environments. This shift is driven by the need to be more agile and respond to real-time changes in the world, and it involves moving beyond tracking individual assets or devices to focusing on everything within a rich environment—from people, places, processes, and things to their very relationships.

To really understand these intricate environments, companies are creating digital replicas of their physical world also known as digital twins. With Microsoft Azure Digital Twins now generally available, this Internet of Things (IoT) platform provides the capabilities to fuse together both physical and digital worlds, allowing you to transform your business and create breakthrough customer experiences.

According to the IoT Signals report, the vast majority of companies with a digital twin strategy see it as an integral part of their IoT solution. Yet the reality is that modeling entire environments can be complicated. It involves bringing together the many elements that make up a digital twin to capture actionable insights. However, siloed data across these experiences makes it challenging to build digital twin solutions that bring those models to life, and doing so in a scalable, secure way is often time-consuming.

Azure Digital Twins now ready for enterprise-grade deployments

Azure Digital Twins is an industry-first solution. It breaks down silos within intelligent environments by fusing data from previously disparate devices and business systems. It means you can track both past and present events, simulate possibilities, and help predict future events for those environments.

Now generally available, Azure Digital Twins offers ready-to-use building blocks that can simplify the creation of detailed, comprehensive digital models that bring solutions to life. This trusted enterprise-grade platform brings the scale, reliability, security, and broad market availability that enables customers to build production-ready solutions.

Customer insights and partner solutions with Azure Digital Twins

Intelligent environments come in all shapes and sizes, and their attributes and outcomes are as varied as the industries in which they are used. As such, the possibilities for digital twins are endless: they can be used to model industries and environments that include factories, buildings, stadiums, and even entire cities. Today, we are working with customers and partners who are creating digital twins to model everything from facilities and spaces, such as smart buildings, to manufactured products and the very processes within their supply chains.

One company pushing the boundaries of renewable energy production and efficiency is Korea-based Doosan Heavy Industries and Construction. Doosan worked with Microsoft and Bentley Systems to develop a digital twin of its wind farms, which allows operators to remotely monitor equipment performance and predict energy generation based on weather conditions.

“To maintain our competitive edge and increase energy efficiency, we need to optimize turbine operations through data analysis and gather feedback to develop and test advances in wind turbine development. We created solutions with Azure Digital Twins, Azure IoT Hub, and Bentley iTwin to make that possible by combining data from multiple sources into actionable insights.”—Seiyoung Jang, General Manager, Strategy and Innovation, Doosan

The solution uses Azure Digital Twins to combine real-time and historical IoT, weather, and other operational data with physics and machine learning-based models to accurately predict production output for each turbine in the farm. Based on the simulated models, Doosan can proactively adjust the pitch and performance of individual turbines, maximize energy production, and generate insights that will improve the future design of its next-generation wind turbines.

The innovation and productivity that Azure Digital Twins enables doesn’t stop there:

  • Johnson Controls is collaborating with Microsoft to digitally transform how buildings and spaces are conceived, built, and managed. At the center of this collaboration is a holistic integration between their platform, OpenBlue Digital Twin, and Azure Digital Twins. This partnership helps enable integrated building management, and the platform serves as the foundation for energy and space optimization, predictive maintenance, and remote operations.
  • Ansys, a market leader in simulation software, now offers native integration of simulation-based digital twins via Ansys Twin Builder for customers using Azure Digital Twins. This lets engineers quickly deliver real-time, physics-based simulation models for operational use. Doing so helps make it possible to efficiently integrate the simulation-based twins into a broader IoT solution.
  • Brookfield Properties, one of the world’s largest commercial office landlords, partnered with Willow to create a digital replica of their One Manhattan West (OMW) property using Willow’s Azure Digital Twins-based product. Dedicated to creating a “live, work, play” ecosystem, this lays the groundwork for a digital-first future while unlocking cost savings, energy optimizations, and value-added services today. This solution leverages premade, open source models by RealEstateCore that can be used to help accelerate development across industries.

Azure Digital Twins platform capabilities

The Azure Digital Twins platform was built to simplify and accelerate the creation of IoT connected solutions. With a comprehensive set of capabilities, companies can develop customized, connected solutions with ease. And with the ability to layer your vertical domain specialization—such as 3D or 4D visualizations, physics-based simulation, and AI—on top of Azure Digital Twins, it’s easier than ever to focus on driving results for your customers.

This also includes new developer experiences with broad language support for SDKs, Digital Twins Definition Language (DTDL) modeling and validation tools, and the Azure Digital Twins explorer sample which helps visualize the graph representing your environment. Other capabilities at the core of the Azure Digital Twins platform allow you to:

  • Use an open modeling language, DTDL, to easily create custom models of intelligent environments. In addition, premade models and converter tools for vertical standards help accelerate development when getting started with use cases across industries.
  • Bring digital twins to life with a live execution environment that is scalable and secure and uses data from IoT and other sources. Using a robust event system, you can build dynamic business logic that helps keep business apps fresh and always up to date. You can also extract insights in the context of the modeled world by querying data on a wide range of conditions and relationships (see the sketch after this list).
  • Break down silos using input from IoT and business systems by easily connecting assets such as IoT and Azure IoT Edge devices via Azure IoT Hub, as well as existing business systems such as ERP and CRM to Azure Digital Twins to extract relevant insights across the entire environment.
  • Output to storage and analytics by integrating Azure Digital Twins with other Azure services. This includes the ability to send data to Azure Data Lake for long-term storage or to data analytics services such as Azure Synapse Analytics to apply machine learning. Another important use case is time series data integration and historian analytics with Azure Time Series Insights.
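For developers, the pieces above come together in the SDKs. The sketch below uses the Azure Digital Twins SDK for Python (azure-digitaltwins-core) together with azure-identity; the instance URL, model ID, and twin properties are illustrative assumptions and presume the corresponding DTDL model has already been uploaded to the instance.

    from azure.identity import DefaultAzureCredential
    from azure.digitaltwins.core import DigitalTwinsClient

    # Placeholder instance URL; use your own Azure Digital Twins endpoint.
    client = DigitalTwinsClient(
        "https://my-adt-instance.api.wus2.digitaltwins.azure.net",
        DefaultAzureCredential(),
    )

    # Create (or update) a twin based on a hypothetical wind turbine DTDL model.
    turbine = {
        "$metadata": {"$model": "dtmi:example:Turbine;1"},
        "bladePitchDegrees": 12.5,
        "outputKilowatts": 2300,
    }
    client.upsert_digital_twin("turbine-01", turbine)

    # Query twins in the context of the modeled environment.
    query = "SELECT * FROM digitaltwins T WHERE T.outputKilowatts > 2000"
    for twin in client.query_twins(query):
        print(twin["$dtId"], twin["outputKilowatts"])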

Digital Twins Consortium: Advancing technology with open partnerships and ecosystems

Built upon open partnerships and open ecosystems, Microsoft co-founded the Digital Twins Consortium in May 2020 in collaboration with other companies. Together, we are contributing to best practices and shared digital twin models. We are committed to helping create an industry standard and bringing digital twins—including DTDL—as well as all the different vertical specializations of digital twins across industries into an open ecosystem.

There are now more than 170 members that span companies, government agencies, and academia to drive consistency in the vocabulary, architecture, security, and compatibility of digital twin technology as it’s used across industries. This means that everyone across the ecosystem can benefit from a collective pool of industry standardized models so you can accelerate your digital twin journey.

Get started with Azure Digital Twins

As Microsoft Azure partners and businesses who are already using Azure Digital Twins have shown, it offers a robust platform for building enterprise-grade, IoT-connected solutions with the scale, compliance, security, and privacy benefits that customers can bet their business on. Learn more about Azure Digital Twins and how to get started building digital replicas of your own intelligent business environment.

Additional resources

  • Learn more about Azure Digital Twins.
  • Get started with Azure Digital Twins technical resources.
  • Watch Azure Digital Twins demo video.
  • Read Azure Digital Twins customer stories.
  • Watch the Azure Digital Twins technical deep dive video featuring the WillowTwin solution.
  • Learn how IoT and Azure Digital Twins can help connect urban environments.
  • Learn more about Microsoft and Johnson Controls digital twin collaboration.

How to Enable or Disable Scrollable Tabstrip in Google Chrome

The content below is taken from the original ( How to Enable or Disable Scrollable Tabstrip in Google Chrome), to continue reading please visit the site. Remember to respect the Author & Copyright.

Whenever you open multiple tabs on your Chrome browser, the tabs start narrowing and […]

This article How to Enable or Disable Scrollable Tabstrip in Google Chrome first appeared on TheWindowsClub.com.

OSHW Turns 10: Lessons Learned Over a Decade of Open Hardware

The content below is taken from the original ( OSHW Turns 10: Lessons Learned Over a Decade of Open Hardware), to continue reading please visit the site. Remember to respect the Author & Copyright.

It is appropriate that, 10 years after the first Open Hardware Summit, open source hardware was a key part of the initial Covid-19 response. Engineers, designers, and medical professionals collaborated from around the world to design and deploy medical equipment to meet the world’s unprecedented need. In many ways, this is […]

Read more on MAKE

The post  OSHW Turns 10: Lessons Learned Over a Decade of Open Hardware appeared first on Make: DIY Projects and Ideas for Makers.

Enabling Microsoft-based workloads with file storage options on Google Cloud

The content below is taken from the original ( Enabling Microsoft-based workloads with file storage options on Google Cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Enterprises are rapidly moving Microsoft and Windows-based workloads to the cloud to reduce license spend and embark on modernization strategies to fully leverage the power of cloud-native architecture. Today’s business climate requires agility, elasticity, scale, and cost optimization, all of which are far more difficult to attain by operating out of data centers. Google Cloud offers a top-level enterprise-grade experience for Microsoft-based services and tools. 

Many Windows-based workloads require a Server Message Block (SMB) file service component. For example, highly available SAP application servers running in Windows Server clusters need SMB file servers to store configuration files and logs centrally. The COVID-19 pandemic has resulted in increased demand for virtual desktop solutions to enable workers to adapt to the sudden necessity of working remotely. Those virtual desktop users often require access to SMB file servers to store documents and to collaborate with coworkers. 

Fortunately, there are numerous options for SMB file services in Google Cloud that meet the varying needs of Microsoft shops. They fall into three categories: fully managed, semi-managed, and self-managed services. In this post, we’ll examine several options across those three buckets. (Note: this is by no means an exhaustive list of SMB file service providers for Google Cloud. Rather, this is a brief review of some of the common ones.)

Fully managed SMB file services

For many enterprises, reducing operational overhead is a key objective of their cloud transformation. Fully managed services provide the capabilities and outcomes, without requiring IT staff to worry about mundane tasks like software installation and configuration, application patching, and backup. These managed SMB file service options let customers get their Windows applications and users to work expeditiously, reducing toil and risk. (Note that these are managed partner-provided services, so make sure to check the region you’ll be using to ensure availability.)

NetApp Cloud Volumes Service

If you work in IT and have ever managed, used, or thought about storage, chances are you’re familiar with NetApp. NetApp has been providing enterprise-grade solutions since 1992. With NetApp Cloud Volumes Service (CVS), you get highly available, cloud-native, managed SMB services that are well-integrated with Google Cloud. Storage volumes can be sized from 1 to 100 TB to meet the demands of large-scale application environments, and the service includes tried-and-true NetApp features like automated snapshots and rapid volume provisioning. It can be deployed right from the Google Cloud Marketplace, managed in the Google Cloud console, supported by Google, and paid for in your Google Cloud bill.

Dell Technologies PowerScale

Dell Technologies is another leader in the enterprise storage market, and we have partnered with them to offer PowerScale on Google Cloud. PowerScale leverages an all-flash architecture for blazing-fast storage operations. However, it will be backward-compatible, allowing you to choose between PowerScale all-flash nodes and Isilon nodes in all-flash, hybrid, or archive configurations. The OneFS file system boasts a maximum of 50 PB per namespace; this thing scales! And as with NetApp, PowerScale in Google Cloud includes enterprise-grade features like snapshots, replication, and hybrid integration with on-premises storage. It’s tightly integrated with Google Cloud: it can be found in the Google Cloud Marketplace, is integrated with the Google Cloud console, and is billed and supported directly by Google.

Both of these managed file storage products support up to SMBv3, making them outstanding options to support Windows workloads, without a lot of management overhead.  

Semi-managed SMB file services

Not everyone wants fully managed SMB services. While managed services take a lot of work off your plate, as a general rule they also reduce the ways in which you can customize the solution to meet your particular requirements. Therefore, some customers prefer to use self-managed (or semi-managed) services, like the storage services below, to tailor the configurations to the exact specifications needed for their Windows workloads.

NetApp Cloud Volumes ONTAP

Like the fully managed NetApp Cloud Volumes Service, NetApp Cloud Volumes ONTAP (CVO) gives you the familiar features and benefits you’re likely used to with NetApp in your data center, including SnapMirror. However, as a semi-managed service, it’s well-suited for customers who need enhanced control and security of their data on Google Cloud. CVO deploys into your Google Cloud virtual private cloud (VPC) on Google Compute Engine instances, all within your own Google Cloud project(s), so you can enforce policies, firewall rules, and user access as you see fit to meet internal or external compliance requirements. You will need to deploy CVO yourself by following NetApp’s step-by-step instructions. In the Marketplace, you get your choice of a number of CVO price plans, each with varying SMB storage capacity (2 TB to 368 TB) and availability. NetApp Cloud Volumes ONTAP is available in all Google Cloud regions.

Panzura Freedom Hybrid Cloud Storage

Panzura Freedom is a born-in-the-cloud, hybrid file service that allows global enterprises to store, collaborate, and back up files. It presents a single, geo-distributed file system called Panzura CloudFS that’s simultaneously accessible from your Google Cloud VPCs, corporate offices, on-premises data centers, and other clouds. The authoritative data is stored in Google Cloud Storage buckets and cached in Panzura Freedom Filers deployed locally, giving your Windows applications and users high-performing access to the file system. Google Cloud’s global fiber network and 100+ points of presence (PoPs) reduce global latency to ensure fast access from anywhere. Panzura can be found in the Google Cloud Marketplace as well.  

Self-managed SMB file services

In some cases, managed services will not meet all the requirements. This is not limited to technical requirements. For example, in your industry you might be subject to a compliance regulation for which none of the managed services are certified. If you consider all of the fully managed and semi-managed SMB file service options, but none of them are just right for your budget and requirements, don’t worry. You still have the option of rolling your own Windows SMB file service on Google Cloud. This approach gives you the most flexibility of all, along with the responsibility of deploying, configuring, securing, and managing it all. Don’t let that scare you, though: These options are likely very familiar to your Microsoft-focused staff.

Windows SMB file servers on a Google Compute Engine instance

This option is quite simple: you deploy a Compute Engine instance running your preferred version of Windows Server, install the File Server role, and you’re off to the races. You’ll have all the native features of Windows at your disposal. If you’ve extended or federated your on-premises Active Directory into Google Cloud or are using the Managed Service for Active Directory, you’ll be able to apply permissions just as you do on-prem.  Persistent Disks add a great deal of flexibility to Windows file servers. You can add or expand Persistent Disks to increase the storage capacity and disk performance of your SMB file servers with no downtime. Although a single SMB file server is a single point of failure, the native protections and redundancies of Compute Engine make it unlikely that a failure will result in extended downtime. If you choose to utilize Regional Persistent Disks, your disks will be continuously replicated to a different Google Cloud zone, adding an additional measure of protection and rapid recoverability in the event of a VM or zone failure.  
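For teams that script their infrastructure in Python, here is one way you might create such a regional persistent disk with the google-cloud-compute client library; the project, region, zones, size, and disk type below are placeholders to adapt to your environment, and the resulting disk can then be attached to the Windows file server instance.

    from google.cloud import compute_v1

    PROJECT = "my-project"     # placeholder project ID
    REGION = "us-central1"     # placeholder region

    disk = compute_v1.Disk()
    disk.name = "winfs-data"
    disk.size_gb = 500
    disk.type_ = f"regions/{REGION}/diskTypes/pd-ssd"
    # Replicate the disk across two zones in the region for higher resilience.
    disk.replica_zones = [
        f"projects/{PROJECT}/zones/{REGION}-a",
        f"projects/{PROJECT}/zones/{REGION}-b",
    ]

    client = compute_v1.RegionDisksClient()
    operation = client.insert(project=PROJECT, region=REGION, disk_resource=disk)
    operation.result()  # block until the disk is ready
    print(f"Created regional persistent disk {disk.name} in {REGION}")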

Windows clustering

If your requirements dictate that your Windows file services cannot go down, a single Windows file server will not do. Fortunately, there’s a solution: Windows Failover Clustering. With two or more Windows Compute Engine instances and Persistent Disks, you can build a highly available SMB file cluster that can survive the failure of Persistent Disks, VMs, the OS, or even a whole Google Cloud zone with little or no downtime. There are two different flavors of Windows file clusters: File Server Cluster and Scale-Out File Server (SOFS).

Windows file server clusters have been around for about 20 years. The basic architecture is two Windows servers in a Windows Failover Cluster, connected to shared storage such as a storage area network (SAN). These clusters are active-passive in nature. At any given time, only one of the servers in the cluster can access the shared storage and provide file services to SMB clients. Clients access the services via a floating IP address, front-ended by an internal load balancer. In the event of a failure of the active node, the passive node will establish read/write access to the shared storage, bind the floating IP address, and launch file services. In a cloud environment, physical shared storage devices cannot be used for cluster storage. Instead, Storage Spaces Direct (S2D) may be used. S2D is a clustered storage system that combines the persistent disks of multiple VMs into a single, highly available, virtual storage pool. You can think of it as a distributed virtual SAN.

Scale-Out File Server (SOFS) is a newer and more capable clustered file service role that also runs in a Windows Failover Cluster. Like Windows File Server Clusters, SOFS makes use of S2D for cluster storage. Unlike a Windows File Server Cluster, SOFS is an active-active file server. Rather than presenting a floating IP address to clients, SOFS creates separate A records in DNS for each node in the SOFS role. Each node has a complete replica of the shared dataset and can serve files to Windows clients, making SOFS both vertically and horizontally scalable. Additionally, SOFS has some newer features that make it more resilient for application servers.  

As mentioned before, both Windows File Server Clusters and SOFS depend on S2D for shared storage. The process of installing S2D on Google Cloud virtual machines is described here, and the chosen SMB file service role may be installed afterwards. Check out the process of deploying a file server cluster role here, and the process for an SOFS role.

Scale-Out File Server or File Server Cluster?

File Server Clusters and SOFS are alike in that they provide highly available SMB file shares on S2D. SOFS is a newer technology that provides higher throughput and more scalability than File Server Cluster. However, SOFS is not optimized for the metadata-heavy operations common with end-user file utilization (opening, renaming, editing, copying, etc.). Therefore, in general, choose File Server Clusters for end-user file services and choose SOFS when your application(s) need SMB file services. See this page for a detailed comparison of features between File Server Cluster (referred to there as “General Use File Server Cluster”) and SOFS.

Which option should I choose?

We’ve described several good options for Microsoft shops to provide their Windows workloads and users access to secure, high-performing, and scalable SMB file services. How do you choose which one is best suited for your particular needs? Here are some decision criteria you should consider:

  • Are you looking to simplify your IT operations and offload operational toil? If so, look at the fully managed and semi-managed options.

  • Do you have specialized technical configuration requirements that aren’t met by a managed service? Then consider rolling your own SMB file service solution as a single Windows instance or one of the Windows cluster options.

  • Do you require a multi-zone deployment for fully automated high availability? If so, NetApp Cloud Volumes ONTAP and the single-instance Windows file server are off the table. They run in a single Google Cloud zone.

  • Do you have a requirement for a particular Google Cloud region? If so, you’ll need to verify whether NetApp Cloud Volumes Service and NetApp Cloud Volumes ONTAP are available in the region you require. As partner services that require specialized hardware, these two services are available in many, but not all, Google Cloud regions today.

  • Do you require hybrid storage capabilities, spanning on-premises and cloud? If so, all of the managed options have hybrid options.

  • Is your budget tight? If so, and if you’re OK with some manual planning and work to minimize the downtime that’s possible with any single point of failure, then a single Windows Compute Engine instance file server will do fine. 

  • Do you require geo-diverse disaster recovery? You’re in luck—every option described here offers a path to DR.

What next?  

This post serves as a brief overview of several options for Windows file services in Google Cloud. Take a closer look at the ones that interest you. Once you’ve narrowed it down to the top candidates, you can go through the Marketplace pages (for the managed services) to get more info or start the process of launching the service. The self-managed options above include links to Google Cloud-specific instructions to get you started, then general Microsoft documentation to deploy your chosen cluster option.

Related Article

Filestore Backups eases migration of file-based apps to cloud

The new Filestore Backups lets you migrate your copy data services and backup strategy for your file systems in Google Cloud.

Read Article

New BOINC Network Website!!

The content below is taken from the original ( New BOINC Network Website!!), to continue reading please visit the site. Remember to respect the Author & Copyright.

I’ve converted the podcast site into a full site for BOINC.

I am not a designer or developer… suggestions (and help if you have the time) are very much invited!

I particularly want to get rid of the stock images if possible.

Anyway, enjoy!

https://boinc.network

submitted by /u/jring_o to r/BOINC

Retired engineer confesses to role in sliding Microsoft Bob onto millions of XP install CDs

The content below is taken from the original ( Retired engineer confesses to role in sliding Microsoft Bob onto millions of XP install CDs), to continue reading please visit the site. Remember to respect the Author & Copyright.

‘While Bob never got to play on the big stage, he always followed the band around and got to ride on the bus’

One anniversary unlikely to be marked within the bowels of Redmond is that of Microsoft Bob. However, a retired engineer has taken to YouTube to confirm that the unloved interface did indeed enjoy a second life as digital ballast for Windows XP.…