Another day, another cloud price cut – from partly free to all free

The content below is taken from the original (Another day, another cloud price cut – from partly free to all free), to continue reading please visit the site. Remember to respect the Author & Copyright.

IBM has revealed that the standard tier BlueMix Lift service is now free. Which sounds great save for one small problem: the service was just about free already.

BlueMix Lift is a database migration service that Big Blue says “makes it easy to quickly, securely and reliably migrate your database from on-premises data centers to an IBM Bluemix cloud data property … with zero downtime.” That’s possible because IBM’s Aspera file transfer tech can monitor changes to the source database and make sure they’re captured on the cloudy side of the house.

Which sounds pretty handy because downtime is bad and cloud has many benefits.

But as we mentioned at the top of this story, BlueMix Lift standard tier was already cheap as: the product’s pricing page says shifting the first 100GB of data is free and thereafter costs US$0.02 per gigabyte. Let’s do the math: if you uploaded another 100GB after the free allowance that would be 100 times two cents … or two dollars!

Which may not make a huge difference to those of you considering a move to the cloud.
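
For anyone who wants to run the numbers on their own data volumes, here is a minimal PowerShell sketch based on the old standard-tier figures quoted above (first 100GB free, US$0.02 per gigabyte thereafter, and a flat $430 for the enterprise tier). The function name and example volume are purely illustrative.

# Rough cost comparison for Bluemix Lift, using the figures quoted in this story.
function Get-LiftStandardCost {
    param([double]$GigabytesToMigrate)
    $freeAllowanceGB = 100
    $ratePerGB       = 0.02
    $billableGB = [Math]::Max(0, $GigabytesToMigrate - $freeAllowanceGB)
    return $billableGB * $ratePerGB
}

# Example: migrating 5,120GB (5TB) of data
Get-LiftStandardCost -GigabytesToMigrate 5120   # (5120 - 100) * $0.02 = $100.40
# Enterprise tier, by comparison: flat $430 per target database, unlimited inbound data.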

It is possible to export data from BlueMix, but The Register is always pleased to hear recitations of The Eagles’ Hotel California – “You can check out any time you want, but you can never leave” – when discussing cloud computing. Which is our way of pointing out that the savings on offer could be expensive in the long run.

BlueMix Lift’s enterprise tier offers unlimited inbound data per target database at $430. ®

7 musts for any successful BYOD program

The content below is taken from the original (7 musts for any successful BYOD program), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today, employee mobility and office BYOD programs are critical for enterprise productivity. Mobile devices add new security challenges, bypassing many of the security controls you have in place. Mobile devices, mobile apps and the networks they use are now essential to satisfy customers, collaborate more effectively with suppliers, and keep employees productive anytime and anywhere.

Unfortunately, increased connectivity often translates to increased security threats. Gartner predicts that by 2018, 25 percent of corporate data traffic will flow directly from mobile devices to the cloud, bypassing traditional enterprise security controls. Hackers are constantly innovating to target your organization and mobile devices have become their path of least resistance. John Michelsen, chief product officer at Zimperium, shares seven musts for any BYOD program to successfully thwart mobile cyber attacks.

Docker Native Now Supported by Nanobox for Linux, Windows, and Mac Operating Systems

The content below is taken from the original (Docker Native Now Supported by Nanobox for Linux, Windows, and Mac Operating Systems), to continue reading please visit the site. Remember to respect the Author & Copyright.

Nanobox, a "micro platform" that simplifies the dev to production life cycle, announced today that it has released support for native Docker on all… Read more at VMblog.com.

How to automatically delete Microsoft Edge browsing history on exit

The content below is taken from the original (How to automatically delete Microsoft Edge browsing history on exit), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Edge may not come with loads of features, but it is still fast-loading, secure and a good choice for everyday web browsing. Edge takes reasonable care of users’ privacy. Having said that, you may still want to do a little more to improve privacy while using the Microsoft Edge browser on a Windows 10 computer.

Whenever you browse the web, Windows 10 stores a copy of each web page on your computer in its cache and saves the URL of every page you visit in the form of browsing history. The advantage of this is that you can look back at what you have browsed; the disadvantage is that anybody with access to your PC can see which sites you visited. There are two ways to deal with this: you can use InPrivate (private browsing) mode, or you can make Edge automatically clear your browsing history on exit.

Delete Edge browsing history on exit

To do this, open the Microsoft Edge browser, click the three-dotted More button and go to Settings. Here, under the Clear browsing data option, you will see Choose what to clear. Click on it.

On the next page, you can select the data you want to delete every time Edge is closed. For instance, you can choose Browsing history, Cookies and saved website data, Cached data and files, Download history, Form data, and Passwords. Clicking the Show more link will also display Media licenses, Pop-up exceptions, Location permissions, Full-screen permissions, and Notifications permissions.



Make your selection. Moving on, toggle the button that says Always clear this when I close the browser.


That’s it! Now, every time you close Edge, the selected data will be removed automatically. To test it, close the browser and re-open it to check whether everything has been deleted.

With this setting enabled, you can keep using normal browsing mode instead of relying on private browsing, and you no longer have to worry about deleting browsing data manually for privacy reasons.



The Multi-Cloud Convergence Tipping Point

The content below is taken from the original (The Multi-Cloud Convergence Tipping Point), to continue reading please visit the site. Remember to respect the Author & Copyright.


Tony Bishop

Equinix

Tony Bishop works in Global Vertical Strategy & Marketing at Equinix.

Cloud adoption has matured to an advanced stage where enterprises increasingly rely on cloud infrastructure, and the industry at large is extremely bullish when it comes to cloud futures. Cisco predicts that global cloud IP traffic will almost quadruple between 2015 and 2020, reaching 14.1 zettabytes. By then, global cloud IP traffic will account for more than 92 percent of total data center traffic. This surge in cloud adoption also represents a huge shift in cloud spending by IT organizations, directly or indirectly affecting more than $1 trillion in IT purchases dedicated to the cloud by 2020, according to Gartner.

Forrester predicts 2017 will be the tipping point for cloud adoption and sees a convergence of multiple clouds across the enterprise as “CIOs step up to orchestrate cloud ecosystems that connect employees, customers, partners, vendors and devices to serve rising customer expectations.”

In 2017, more than 85 percent of enterprises will commit to multi-cloud architectures that IDC describes as “encompassing a mix of public cloud services, private clouds, community clouds and hosted clouds.” We see much of the multi-cloud migration within our customers stemming from diverse organic cloud adoption by different groups within the same organization. And, the majority of enterprise hybrid-cloud adoption is coming from businesses leveraging the flexibility and cost-effectiveness of public clouds, while securing sensitive assets in on-premises IT or a private cloud for protection and compliance.

Are You Ready for Cloud Convergence?

The cloud is now a major catalyst for changing how enterprises will do business in the emerging global digital economy. Some of its greatest benefits to organizations are:

  • Faster access to infrastructure and IT resources and services
  • Greater speed-to-market and global expansion
  • Business continuity and disaster recovery
  • Higher performance and scalability

The economies of scale of pay-per-use cloud business models are also enticing enterprises to move to the cloud; however, they have also been a major source of confusion and frustration for companies. Today, there are multiple ways to buy cloud services (on-demand, pre-paid, reserved capacity, monthly enterprise agreements), and this trend will accelerate in 2017.

Also, migration of those applications that are not “cloud-ready” is not a slam-dunk. This has brought about the rise of cloud migration and orchestration tools, such as open source containers (Docker, Mesosphere) and container-based migration services from leading cloud providers such as Amazon, Google and Microsoft. These solutions are making the “lift-and-shift” application migration model more viable, and it is expected that in 2017, these tools and advancements in automated cloud orchestration and management will accelerate the rate of cloud migration given their low cost for bulk application migrations.

The Next Steps

Ultimately, a well-planned hybrid and multi-cloud migration strategy is necessary to facilitate comprehensive assessment, migration and optimization plans that reduce cloud migration risks and costs. Your strategy should also include cloud exchanges for fast, cost-effective, direct and secure provisioning of virtualized connections to multiple cloud vendors and services, so you can best leverage the flexibility and agility that converged cloud infrastructures bring to a competitive digital business.

In the end, a ubiquitous cloud infrastructure will provide the backbone for digital business. To ensure cloud convergence success, cloud strategies cannot be siloed and fixed; rather, organizations need to take a more holistic, integrated and dynamic approach to cloud interconnection in order to best position business and IT infrastructures for digital transformation.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Considering SD-WAN? Key questions to address before making the leap

The content below is taken from the original (Considering SD-WAN? Key questions to address before making the leap), to continue reading please visit the site. Remember to respect the Author & Copyright.

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Cost reduction and enhanced network performance are just two of the many benefits promised by SD-WAN technology. IDC believes the SD-WAN market will be a $6 billion industry by 2020, so it’s no surprise that solutions are popping up everywhere you turn.

At a high level, SD-WANs promise a more cost-effective and simpler way to operate secure, virtualized WAN connections between enterprise branches, data centers and the internet. Traditional MPLS links from the branch to the data center are reliable and secure, but typically offer lower performance for users accessing cloud-based services, and are considerably more expensive than widely available broadband access links.  The Internet provides global access to cloud applications, but is limited by poor reliability, unpredictable performance and weak security.

The benefit of SD-WAN is that it provides a software controlled and programmable environment that allows you to augment or replace your existing WAN, lower costs by leveraging cheaper broadband access links and dynamically scale bandwidth capacity to the cloud when needed.

But how do you know if SD-WAN is right for your business? Consider the following questions before you make the leap: 

* Do you actually need an SD-WAN? This is a rather obvious question, but as with most new technologies, the hype can distract from the actual need. SD-WAN is no different. To get started, ask yourself the following:

  • Am I reliant on MPLS or Carrier Ethernet services?
  • Am I seeing more internet connectivity requests? For example, are my sales guys using salesforce.com or social media for sales and enhanced customer support? Or, do customers in my retail outlet want to browse the internet while they wait for service?
  • Am I migrating in-house IT systems to the cloud?

Enterprise network traffic has exploded with organizations incrementally adding bandwidth to reduce service latency and avoid network failures. And, because many of today’s applications are moving out of the enterprise and into third-party cloud and SaaS environments, traffic flows within the network have drastically changed and become inefficient. Adding direct internet connections and broadband circuits can provide the needed bandwidth, but it also requires purchasing, deploying and managing daisy-chains of on-premises devices for different circuits and network functions, including routing, WAN optimization and firewalls at multiple locations.

If you answered yes to any of the questions above, SD-WAN can provide new choices. With SD-WAN, you can prioritize application and traffic flows, reduce the number of on-premise devices, as well as more dynamically manage the services deployed at a given branch location.  Together, this translates into lower capital expenditure and operating expenses overall. These solutions can also provide visibility into application performance so you can optimize your end-user experience.

* What are the pitfalls? One of the major selling points of SD-WAN is that you can avoid service provider lock-in by buying and deploying the components internally or working with multiple service providers. However, whether you buy or lease your WAN, it requires a deep understanding of the network. You need to understand what type of traffic traverses your network; you need to know what applications are performing well and what needs to be changed or optimized.

There are several vendors and offerings on the market so you should consider the time it will take to research and select products, and if you have the engineering expertise required to build and monitor the SD-WAN. Also, you’ll need to determine what traffic you want to keep on your existing network and which you want to send over the internet. How should you configure traffic management policies? What security measures need to be implemented? Answering those questions requires a deep understanding of application performance, network security, and network engineering.

Another pitfall is to think of SD-WAN as a complete solution, rather than another tool in the toolbox. So, while SD-WAN may enable choice in access, it doesn’t give you full connectivity to the cloud. In other words, to connect from remote sites to cloud services, it is the combination of orchestration of cloud, WAN and SD-WAN access that completes the solution. Orchestration allows you to coordinate and automate across different pieces of the network. SD-WAN is an important part of how the enterprise accesses the WAN. The combination of SD-WAN and orchestration provides a comprehensive solution for integrating the enterprise with the WAN and into the cloud.

* Do I build it or do I buy it? If you decide SD-WAN is the way to go, you’ll have to decide whether you want to build it yourself or consume SD-WAN as a managed service. Each option has its pros and cons. The key question is just how critical the network is to your business. If you’re in financial services, you’ll answer that differently than your IT peers in the retail industry. When the network is absolutely critical to your business, you probably want more customization. If your needs are more flexible, you can work with different “off the shelf” options.

When you take the “buy” route and get your SD-WAN as a managed service, someone else owns and manages the solution, saving your operations staff valuable training and support time. A buy option also may allow you to take advantage of other resources that your service provider offers, such as NFV-based firewalls or cloud connectivity, giving you a more robust catalog of managed services that might be hard to develop internally.

Building it out yourself, on the other hand, offers the ultimate in customization. You can develop the services that work for your business and can be infinitely flexible.

 


How to extend Email Retention Policy for Deleted Items in Office 365

The content below is taken from the original (How to extend Email Retention Policy for Deleted Items in Office 365), to continue reading please visit the site. Remember to respect the Author & Copyright.

It’s a terrible feeling when you know the email or calendar invite you are trying to find was flushed out of the mailbox by a retention policy. Most email services apply a retention policy that automatically deletes items from the Deleted Items folder after a certain period of time (30 days by default).

To change this, Microsoft offers an email retention period feature for Deleted Items in Office 365. Still, many users are not aware of this feature, so it is today’s topic of discussion. With recent updates to Office 365, admins can set the length of time for items to remain in the Deleted Items folder. This simplifies the task of finding that email or calendar invite you might have deleted accidentally.

Create custom Email Retention Policy


You can either edit the name of the Default MRM Policy or create a new policy to opt out of this change. To change the policy name in Office 365, navigate to Office 365 Admin, choose Exchange admin center and select the compliance management option. Next, look for the ‘retention policies’ option.

Next, select Default MRM Policy, click the edit icon and then change the name of the policy. Once done, Office 365 will maintain the settings you’ve specified and your policy will not be overwritten.
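
If you prefer PowerShell to the Exchange admin center, the following is a minimal sketch of the same idea. It assumes you are already connected to an Exchange Online PowerShell session, and the tag, policy and mailbox names are illustrative placeholders only.

# Create a retention tag that keeps items in Deleted Items for 60 days instead of 30,
# add it to a new retention policy, and assign that policy to a mailbox.
New-RetentionPolicyTag -Name "Deleted Items 60 Days" -Type DeletedItems `
    -RetentionEnabled $true -AgeLimitForRetention 60 -RetentionAction DeleteAndAllowRecovery

New-RetentionPolicy -Name "Extended Deleted Items Policy" `
    -RetentionPolicyTagLinks "Deleted Items 60 Days"

Set-Mailbox -Identity "user@contoso.com" -RetentionPolicy "Extended Deleted Items Policy"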



If you have customized your Default MRM Policy and kept the original name, the change will still apply.

Note that the changes you make will not apply to the Recoverable Items folder; they apply only to the visible Deleted Items folder in both the primary and archive mailboxes. They will also not affect any “Move to Archive” actions on the Deleted Items folder.

Email missing from the Deleted Items folder

If you notice that messages older than 30 days do not appear in the Deleted Items folder of an Exchange Online user’s mailbox, you can try the following workarounds.

  1. Increase the number of days in the custom retention policy.
  2. Assign the Default MRM Policy to the mailbox.
  3. Rename the retention policy that’s assigned to the mailbox to “Default MRM Policy”.

Final words: Every organization has its own set of business requirements, compliance and legal rules, and general culture of how email is consumed. Administrators must confirm that this change remains in line with existing compliance rules and, if not, adjust the settings appropriately. They also need to consider the potential impact on the amount of new data that will be downloaded by Office 365 clients.



How Background Office 365 Processes Cause Confusion

The content below is taken from the original (How Background Office 365 Processes Cause Confusion), to continue reading please visit the site. Remember to respect the Author & Copyright.

Some Fraying Edges in the Office 365 service

Typically, system designers attempt to keep background processing hidden from end users. No need exists for a user to understand how server maintenance happens or what needs to be done to ensure databases stay in good health. Office 365 usually delivers service with no fuss to its users, but recently I have noticed some instances when background processes have made themselves felt. Although these are not serious issues, they are a worrying sign of a lack of attention to detail.

SharePoint’s App

Since mid-November, I have been complaining to Microsoft about the way that “app@sharepoint” shows up in Delve as the author of many documents (Figure 1). I was told that the problem was fixed in early December, but it came back and has remained constant since.


Figure 1: Odd app@sharepoint document owners – at least, according to Delve (image credit: Tony Redmond)

What Microsoft Support Says

My tenant is not the only one affected by the problem. I have heard from many others who experience the same issue, including several who have reported the issue to Microsoft Support. One response received from Microsoft Support and reported on the Microsoft Tech Community is:

The Account app@sharepoint is related to the SharePoint Applications (Auditing logs/Virus), it is a system account, belongs to the SharePoint Farms infrastructure, it was created to run on all Site Collections/Personal sites to collect auditing information. When the user provides some changes for his own Site Collection, or enables features or activates the site Auditing logs, his own account does not have the write permissions on the SharePoint Farm and this account it will be used to run and collect all the necessary information to be provided to the customer. Microsoft has set up a few security accounts to run in all Farms, in order for our customer to do some tasks, and get the required information. Also it is necessary for user security, for these accounts to always track the Site Collection searching for viruses that can be uploaded into SharePoint. Microsoft will not change that, all Farms have the same configuration and this same system account, and all Site Collections will use this account if they need to do it.”

The answer reveals that app@sharepoint is a system account. System accounts should stay invisible to users because users do not care what system accounts do. We learn that the account is used when specific administrative operations need elevated permissions. OK, but that is not a reason to replace document authors with a system account. Finally, app@sharepoint appears to be used by processes that check for viruses that might be uploaded to SharePoint site collections. It is still no reason to hijack documents.

Keep Maintenance Hidden

The point is that Microsoft should be able to hide the seamy side of datacenter operations from the sight of normal users. It is one thing when tenant administrators get sight of stuff that they should not see; it is quite another when Delve presents users with a view appearing to tell them that their documents are now under the control of a random app. What is worrying is that Microsoft has not been able to fix the problem in over two months. That is simply unacceptable.

Perhaps the coming move to the http://bit.ly/2kmaA4h root will make Delve work properly. That move is supposed to free Delve and give the application a distinct identity of its own. We shall see whether the new identity discards some of the rubbish of the past.


Strange Accounts Show Up in Audit Events

It would be nice if strange Delve names were the only thing to worry about, but they are not. I have configured activity alerts to report any time that a user account is added to my tenant. An alert fired yesterday and reported that an account called exo_evo_migration@support.onmicrosoft.com added a user. Further investigation via the audit log proved that the added account was valid, but I had added it (for a shared mailbox), not that strange account.

Diving deeper into the audit log, I established that this was not the only time that exo_evo_migration had added an account. Indeed, the activities of this account were joined by another called fim_password_service from the same support.onmicrosoft.com domain (Figure 2), this time to update my own account a few days ago.


Figure 2: Odd account names show up in the Office 365 Audit log (image credit: Tony Redmond)

I checked with Microsoft and discovered that the mysterious accounts are system accounts that should not turn up in audit events, especially when they replace the name of the real person who executes the action. Although I do not know for certain, it seems likely that these system accounts are used to synchronize updates between Azure Active Directory and the set of workload directories running inside Office 365. Pure speculation on my part – or a reasonable guess.

A Need for Clean-Up

Office 365 is a huge and complicated infrastructure that depends on many automated background processes to keep all the parts moving together. Anyone who has built computer systems realizes the need for system or service accounts to get things done and it is unsurprising to discover that Microsoft uses such accounts inside Office 365. In fact, I would be shocked if no system accounts were used.

But that is no reason to cause tenant administrators to worry when strange and unexplainable events show up. Who is this app@sharepoint person and why do they own user documents? Why are accounts in another Office 365 tenant adding and updating users in my domain? The fear of data loss or compromise is intense today and seeing odd things show up where they should not occur is always likely to cause concern.


Generally, Microsoft does a good job of drawing a veil across the messy bits of Office 365 to present an image of calm serenity to the world. Losing control over details like this does not help customers have confidence in cloud operation. After all, if this kind of obvious issue gets through the cracks, what else might happen?

Connect with Tony on Twitter @12Knocksinna

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros”, the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post How Background Office 365 Processes Cause Confusion appeared first on Petri.

Bat Bot is an autonomous drone that mimics a bat’s flight

The content below is taken from the original (Bat Bot is an autonomous drone that mimics a bat’s flight), to continue reading please visit the site. Remember to respect the Author & Copyright.

For roboticists working in the field of biomimetics, copying a bat’s complex flight patterns has been a difficult problem to solve. Or, as Caltech professor and Jet Propulsion Laboratory researcher Soon-Jo Chung put it during a press conference, "bat flight is the holy grail of aerial robotics." And according to a new research paper published by Chung and his JPL colleagues in the journal Science Robotics this week, that holy grail has officially been discovered.

Robotic birds and winged insects are relatively easy to create, but with over 40 joints in their wings, bats offer a new level of intricacy. By simplifying that wing structure into just nine key joints covered by a flexible membrane, however, the team successfully created the first Bat Bot. Built from carbon fiber bones and 3D-printed socket joints, Bat Bot weighs just 93 grams, has a roughly one-foot wingspan, and its silicone-based wing membrane is only 56 microns thick. "Our work demonstrates one of the most advanced designs to date of a self-contained flapping-winged aerial robot with bat morphology that is able to perform autonomous flight," Alireza Ramezani, one of the paper’s co-authors said.

Like a real bat, Bat Bot can move each wing independently and constantly change each wing’s shape to perform complex maneuvers that would be impossible otherwise. The flapping motion also conserves battery power, making it both quieter and more efficient than its fixed-wing or quadcopter counterparts.

Although the battery technology is still too clunky to allow for long flights, the research team believes Bat Bot’s agility would make it ideal for search and rescue operations or other applications in tight, urban environments. For now, however, the team is working on their next major milestone: teaching Bat Bot how to perch like its SCAMP cousin.

Via: CNET

Source: Science Robotics, Caltech

Rugged SODIMM module runs Linux on i.MX6UL

The content below is taken from the original (Rugged SODIMM module runs Linux on i.MX6UL), to continue reading please visit the site. Remember to respect the Author & Copyright.

EMAC’s $150, SODIMM-style “SoM-iMX6U” COM runs Linux on an i.MX6UL, and offers 4GB eMMC, 4x serial ports, -40 to 85°C support, and an optional carrier. NXP’s low-power, IoT oriented i.MX6 UltraLite (i.MX6UL) SoC has conquered yet another computer-on-module. EMAC’s SoM-iMX6U joins other Linux-driven modules based on the single-core, 528MHz Cortex-A7 SoC such as OpenEmbed’s SOM6210. […]

Sega adds ‘OutRun’ and other classic soundtracks to Spotify

The content below is taken from the original (Sega adds ‘OutRun’ and other classic soundtracks to Spotify), to continue reading please visit the site. Remember to respect the Author & Copyright.

If you need a distraction from the stress of our new Orwellian order, why not take a trip back to a simpler time with Sega? It just released nearly 20 classic soundtracks from the ’80s and ’90s onto streaming site Spotify, including OutRun, Virtua Fighter, Fantasy Zone and NiGHTS. OutRun is probably the standout, as many of us wasted a good chunk of our youth (and quarters) racing in the multiplayer arcade version.

The new releases join other Sega classics, including Sonic the Hedgehog, Golden Axe, Skies of Arcadia and Jet Set Radio. If you’re looking for something else, Spotify is one of the more gaming-oriented streaming services out there. It recently created a dedicated portal where you can find game-inspired playlists curated by PlayStation Music, EA Sports and even Engadget. However, it’s still difficult to stream soundtracks directly from developers and gaming companies — perhaps Nintendo and others could take a page from Sega.

Via: Destructoid

Source: Sega

Five Easy Photo Improving Tricks Anyone Can Do

The content below is taken from the original (Five Easy Photo Improving Tricks Anyone Can Do), to continue reading please visit the site. Remember to respect the Author & Copyright.

A lot of this can be boiled down to thinking before you shoot, but not many people realise how little is required to turn a boring shot into something eye-catching.


Shell to start installing EV chargers in UK petrol stations

The content below is taken from the original (Shell to start installing EV chargers in UK petrol stations), to continue reading please visit the site. Remember to respect the Author & Copyright.

The UK’s EV charging infrastructure continues to grow thanks to a few dedicated players, but the likes of Tesla and nemesis Ecotricity will be joined by an unlikely newcomer later this year. Petrol pusher Shell has confirmed plans to add EV charging points to some UK filling stations — a move the company has been thinking about since last year. Speaking to the Financial Times, Shell director John Abbott implied denser, urban areas were highest on the to-do list, and a spokesperson told us specific info on sites will be shared ahead of the first installations, which are expected before summer this year.

With these first steps towards creating an EV charging network of its own, Shell is preparing for a future where emissions-free vehicles outnumber today’s gas-guzzlers — quite the statement for the oil giant and one of the world’s biggest companies. Shell will still make money from filling cars up with electricity instead of petrol, of course, but savvy Abbott has spotted a bonus revenue stream: Selling coffee and sandwiches to drivers waiting around for enough juice to get home.

Via: Gizmodo

Source: The Financial Times

Krita Is a Fast, Flexible, and Free Photoshop Alternative Built by Artists

The content below is taken from the original (Krita Is a Fast, Flexible, and Free Photoshop Alternative Built by Artists), to continue reading please visit the site. Remember to respect the Author & Copyright.

Windows/macOS/Linux: If you’re on the lookout for a digital painting tool and Photoshop is too expensive, Krita is a fast, free, and open source art tool that was developed by artists looking for something that met their needs without a ton of bloat or overhead. Plus, it’s completely cross-platform.


Electronic glasses auto-focus on what you’re looking at

The content below is taken from the original (Electronic glasses auto-focus on what you’re looking at), to continue reading please visit the site. Remember to respect the Author & Copyright.

They’re not very pretty, but prototype eyeglasses from University of Utah scientists could make progressive lenses obsolete for older people. Using electronically activated lenses and infrared distance meters, they can focus automatically on whatever you’re looking at, whether it’s far or close up. Once perfected, the device could eliminate the need for multiple pairs of reading or driving glasses for folks with presbyopia or farsightedness.

Age-related far- or nearsightedness happens when the lenses in your eyes can no longer change focus between objects. As a result, many people between their 40s and 50s have to wear progressive-lens eyeglasses divided into small focal zones depending on object distance. One company called Deep Optics has pursued an auto-focusing solution using see-through liquid-crystal lenses, but is still working on a practical prototype. Google is also working on an auto-focus contact lens with Novartis, but recently said that it wouldn’t be testing them anytime soon.

To make the lenses adjustable, the University of Utah team placed glycerine — a thick, clear liquid — within membranes on the front and back. The chunky frame, meanwhile, holds electronics, a battery and an infrared distance meter. When you look at something, the meter gauges the distance and sends a signal to a mechanical actuator on the rear membrane. Within 14 milliseconds, it switches focus from one object to another, giving you clear vision without the need to look up or down.

Users can upload their prescriptions to the glasses by pairing them with a smartphone over Bluetooth. That means that, in theory, you could keep the same pair of glasses forever, even if your eyesight changes. You’d need to recharge them like a smartphone, but that could be less of a hassle than packing multiple pairs around. "Most people who get reading glasses have to put them on and take them off all the time," says research lead Carlos Mastrangelo. "You put these on, and it’s always clear."

The current prototype, which first debuted at CES 2017, is obviously something nobody would want to wear in public. The goal now, then, is to make the whole package smaller and lighter via some serious miniaturization. The team has created a startup company to commercialize the smart glasses and, hopefully, get them on the market in as little as three years.

Via: Presse Citron (translated)

Source: University of Utah

Using the Office 365 Connector Incoming WebHook to Post Service Health Information

The content below is taken from the original (Using the Office 365 Connector Incoming WebHook to Post Service Health Information), to continue reading please visit the site. Remember to respect the Author & Copyright.


Exploiting Office 365 Connectors for Groups and Teams

Office 365 Groups and Microsoft Teams both support Connectors, which allow cards representing information drawn from a wide range of network data resources to be created in group conversations or team chats.

A card holds a snippet of information extracted from a network data source, like Twitter or an RSS feed for a blog. They’re not intended to be full extracts, such as the complete text of a blog post. Instead, cards are there to notify users about events. Some cards include methods, like a hyperlink, to bring users to the original source.

Connector sources

The set of network sources that Connectors support now numbers over 90, including those focused on project activities (Trello, Asana, and Wunderlist), customer relationships (Salesforce, Dynamics 365, and Zendesk), news (Bing News, Twitter, and RSS feeds), and developer tools (GitHub and Visual Studio). In addition, an “Incoming Webhook” connector is available as a generic link to allow developers to push data from other services to an Outlook group. Programmers can use the webhook to create a link to a group for a company-specific system or some other network data source for which a connector does not currently exist.

Connecting one of these sources to a group or team is simple. Use OWA or a Teams client to select the source to which you want to connect, give the necessary credentials to authenticate to the data source, find the data you want to extract, and let the data flow.

It’s also worth noting that Microsoft is working on “actionable messages” that leverage a subset of the connectors available for groups and teams. The same idea applies in that items from a network source flow from the source to a destination (in this case, a user’s Inbox) and are created there as cards. The cards are “actionable” because the user can interact with the content. For example, an actionable card created from a tweet allows you to retweet or like the tweet while cards created through the Twitter connector for a group are read-only. The feature is now in preview.

Creating Your Own Source for Connectors

Nice as it is to have so many out-of-the-box connectors, it’s always interesting to be able to link up your own data source. Microsoft makes this straightforward by providing the Incoming Webhook connector.

The function of the Incoming Webhook connector is to receive incoming HTTP requests containing simple JSON-format payloads. Once received, the connector routes the information to a destination Office 365 Group or Microsoft Team (the webhook address) where the information is created as a new card. If the connector links to a team, a new chat is created using the inbound information.

Therefore, to connect a data source to a group or team, all you need to do is:

  1. Identify the data source to use. Any data source is acceptable if you can extract relevant information from it.
  2. Decide on the destination – an Office 365 group or a team.
  3. Create an Incoming Webhook connector for the destination.
  4. Extract data from the source and format the data as a JSON payload.
  5. Post the data to the webhook address.

Sounds easy enough. Let’s figure out what needs to be done to get something going.

Identify a Data Source

I decided to use the Office 365 Service Communication API as a data source, mostly because it can be used with PowerShell. Microsoft is currently previewing V2 of the API, so the PowerShell code to interrogate service health is likely to change when V2 is generally available. However, it works for now and that’s enough for test purposes.

Create an Incoming Webhook Connector

Before we can create a new connector, we need an Office 365 Group to host the cards that we’re going to create. I created a new group from OWA and then selected the Connectors option in the toolbar. I then selected Incoming Webhook from the list of available connectors and then Add to arrive at the screen shown in Figure 1. Here we decide what to call the webhook and whether to replace the default icon with a different image. Although I uploaded an image, OWA had the good taste to ignore it thereafter when displaying cards. The image appears in Outlook, so that is a little bug.


Figure 1: Creating a new Incoming Webhook Connector (image credit: Tony Redmond)

When you click Create, Office 365 responds with a URL. This is the webhook address that you need to specify to route information to the connector. The webhook address is of the form:

http://bit.ly/2kOcWvR

The webhook address is a critical piece of information because without it you can’t route items to the destination group or team. Copy it to the clipboard and store it for later use. If you forget to do this, you can always access the webhook address by managing the connector.


Extract Data

Now the fun part begins because we have to write some code to extract the data from the source that we want to use. For this example, we’re going to import the Office 365 Service Communications module into a PowerShell session that is already connected to Exchange Online. We then fetch details about current service incidents using the API to create a set of objects. Before you can use the Office 365 Service Communications module, you’ll have to fetch and install it using an administrator account:

[PS] C:\> Find-Module O365ServiceCommunications | Install-Module -Scope CurrentUser

After the module is installed, we can connect and use it to fetch service information.

[PS] C:\> Import-Module O365ServiceCommunications
$O365ServiceSession = New-SCSession -Credential $O365Cred
$Incidents = Get-SCEvent -EventTypes Incident -PastDays 2 -SCSession $O365ServiceSession | Sort StartTime -Descending | Select-Object Id, Status, StartTime,
@{n='ServiceName'; e={$_.AffectedServiceHealthStatus.servicename}},
@{n='Message';e={$_.messages[0].messagetext}}

Formatting Payloads

Payloads enable us to transmit information through the connector to end up in the group. The important things to remember are that:

  • Some practice is needed to master the creation of the JSON content using PowerShell. It is best to start with a basic card and then gradually build up the full content for the cards you want to create.
  • Full information about how to format the JSON payload is available in the Office 365 Connectors API Reference.
  • Posting to the connector depends on proper formatting of the payload. You might be able to convert your payload to JSON successfully using the ConvertTo-JSON cmdlet only to find that the payload is rejected when it is submitted to the connector using the Invoke-RestMethod cmdlet. All of which leads to hours of fun debugging payloads.

So here goes. First, we put the webhook address into a variable to make it easier to handle. We also create some other variables to use in the card.

$InfoFeed = "http://bit.ly/2kOcWvR"

Now we build the JSON payload in a PowerShell variable. For this example, we extract details of the latest service incident from the set retrieved from the Service Communications API. Note how different elements of a service incident are formatted in name-value pairs.

$InfoBit = ConvertTo-Json -Depth 4 @{
Text = "Office 365 Service Update"
Title = "New Office 365 Service Information at " + (Get-Date -Format U)
Summary = $Incidents.ServiceName[0] + " incident"
sections = @(
  @{
  facts = @(
     @{
     name = "Incident Number"
     value = $Incidents.ID[0]
     },
     @{
     name = "Affected Service:"
     value = $Incidents.ServiceName[0]
     },
     @{
     name = "Current Status:"
     value = $Incidents.Status[0]
     },
     @{
     name = "Start date:"
     value = Get-Date $Incidents.StartTime[0] -Format u
     },
     @{
     name = "Information:"
     value = $Incidents.Message[0]
     }
   )
  }
 )
}

Submitting the Payload

The Invoke-RestMethod cmdlet submits the payload to the connector, which is identified by the webhook address.

Invoke-RestMethod -ContentType "Application/Json" -Method Post -Body $InfoBit -Uri $InfoFeed

If everything goes well, the cmdlet will respond with 1 (one) to indicate that the post was successful. And of course, a new card should appear in the target group (Figure 2).


Figure 2: A service incident posted via the Incoming Webhook connector (image credit: Tony Redmond)

No Checking

One thing you’ll have noticed about this code is that it features no error checking. We grab the latest service incident and use it. No validation is present to see if any service incidents are present (a simple check against $Incidents.Count would be enough) nor do we verify whether the service incident has already been posted to the group. But the joy of PowerShell is the speed in which you can prove that something works. Adding the code normally found in operational-quality scripts can be left to later.
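
As a minimal sketch of the guard described above, the payload construction and post could be wrapped in a simple count check (assuming the variables from the earlier snippets are in scope):

# Only build and post the card if the API returned at least one incident.
if ($Incidents -and $Incidents.Count -gt 0) {
    # ... build $InfoBit as shown above, then:
    Invoke-RestMethod -ContentType "Application/Json" -Method Post -Body $InfoBit -Uri $InfoFeed
}
else {
    Write-Host "No recent service incidents - nothing to post"
}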


Connectors are Fun

It’s surprising just how much value can be extracted through Office 365 Connectors for both Office 365 Groups and Microsoft Teams. The out-of-the-box connectors are very functional, but it’s great to be able to use the Incoming Webhook connector to grab information from other systems and post that data too.

Follow Tony on Twitter @12Knocksinna.

Want to know more about how to manage Office 365? Find what you need to know in “Office 365 for IT Pros,” the most comprehensive eBook covering all aspects of Office 365. Available in PDF and EPUB formats (suitable for iBooks) or for Amazon Kindle.

The post Using the Office 365 Connector Incoming WebHook to Post Service Health Information appeared first on Petri.

5 AWS limitations every CEO needs to be aware of

The content below is taken from the original (5 AWS limitations every CEO needs to be aware of), to continue reading please visit the site. Remember to respect the Author & Copyright.

“Today, it’s not what we can do with technology, but what we expect from our technology.” – DJ Patil, Chief Data Scientist, U.S. Office of Science and Technology Policy, at AWS re:Invent 2016.

I couldn’t agree more. The general opinion is that today’s technology is so advanced that we can build practically anything. For the most part, it’s true. Hover cars (done), self-driving vehicles (done), AI robots (done), space travel (almost there), and so on. The problem lies in our expectations. We all expect hover cars to work like the ones on The Jetsons, or that space travel can take us to another far away galaxy. Expectations are the thing that gets us in trouble, in the end. The same rule applies in the business world. Our expectations often exceed what’s really possible.

Today, I will try to help you avoid this situation when it comes to choosing a Cloud platform for your business. If you’re still on the fence about which provider is right for you, take a look at our post comparing  Amazon Web Services, Google Cloud, and Microsoft Azure. In this post, I will help you clarify what AWS technology can do for you by looking at some of the key AWS limitations that you should be aware of.


AWS Limitations

AWS is the fastest growing Cloud provider, and it offers more than 70 different services. For just about any workload you could think of, there is probably already a specialized service on AWS where you can deploy your setup. And, the entire AWS infrastructure is at your disposal.

However, this doesn’t mean that you can literally do whatever you want. Some of the AWS limitations are obvious, but others are hidden and should be carefully considered before you get started.

Let’s take a look at some of these limitations and how you can overcome them and keep your business safe in the AWS world.

1. AWS service limits

AWS service limits are set by the platform. The restrictions are there to:

  • Prevent you from spending too much money on your first encounter with the platform
  • Protect the system itself from uncontrolled resource usage

One of the main features of all Cloud systems, including AWS, is scalability and the ability to scale resources up when necessary. So, what’s the problem?

The answer is quite simple. You don’t really need that many resources. Most companies don’t need more than five Elastic IPs per region or more than 20 EC2 instances per region. The default limits are set based on the needs of an average user. Increase these and you’ll pay more.

The good news is that you can submit a request for more resources if you really need more than five Elastic IP addresses per region.

AWS places default limits on several critical resources. These include:

  • EC2 Instance: Default Limit: 20 per region
  • EBS Volume: Default Limit: 5,000 volumes or an aggregate size of 20 TiB
  • Elastic IP: Default Limit: 5 per region
  • Elastic Load Balancer: Default Limit: 10
  • High I/O Instance: Default Limit: 2
  • Virtual Private Cloud: Default Limit: 5

These limits are often called soft limits because you can ask AWS to raise (or lower) them if your needs change.
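
As a rough illustration, you can compare what you are already using against the default limits listed above with the AWS Tools for PowerShell. This sketch assumes the module is installed and that credentials and a default region are configured; the limit values are simply the defaults quoted in this post.

# Check current usage against the default per-region soft limits (20 EC2 instances, 5 Elastic IPs).
Import-Module AWSPowerShell

$instanceCount = (Get-EC2Instance).Instances.Count
$eipCount      = (Get-EC2Address).Count

"EC2 instances in use: $instanceCount of 20 (default limit)"
"Elastic IPs in use:   $eipCount of 5 (default limit)"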


There are also hard AWS limits that cannot be changed at all. The following areas have hard limits:

  • EC2 Security Groups (EC2 Classic): Maximum of 500 per instance and each Security Group can have a maximum of 100 rules/permissions
  • EC2 Security Groups (EC2-VPC): Up to 100 security groups per VPC

As you can see, these limits are related to security. Since security is one of the key issues in Cloud computing, you should trust your Cloud provider in this area.

You can see the full list of AWS limitations on the official AWS service limits page.

2. Technology limitations

What sets this limiting factor apart is that it applies to all Cloud services, not just AWS. It stems from the general state of technology development, which cannot be quickly resolved.

Let me explain with an example. You have decided to create a teleport service that people can use online. To teleport myself, all I have to do is log in to the app, choose the location, and click “Go!” There’s just one catch. This service cannot actually work because, you guessed it, teleporting technology hasn’t been developed yet (unfortunately).

Yes, my example is a bit extreme, but you get the idea. If it is not possible for Amazon SES (Simple Email Service) to send more than 1 email per second (in the sandbox environment), there is no point in asking for an increase at this time. Technology is developing each day, and it will surely be possible to send 100 emails per second using Amazon SES at some point in the future. It’s just not feasible right now.

The best way to protect your business against such limitations is to be aware of them. So, make sure you have all the relevant information before you make an unreasonable request. It’s much easier to work around technology limitations than trying to fix them.

3. Lack of relevant knowledge by your team

If you choose to work with AWS as your Cloud provider, be prepared to learn and invest in your team’s education. As we mentioned before, AWS is an excellent and extensive platform, and you need to know what you’re doing if you want to use it. To be able to take advantage of all the useful features and services offered by AWS, you’ll want to know the platform as deeply as possible.

To successfully manage your AWS platform, you will need to invest in your team. It is always good to hire an experienced engineer who has already worked with AWS, but you should also help your team learn as much as they can about the platform. There are a lot of resources that they can use, from the rich AWS Docs section, community websites, and forums, to online learning platforms.


It is always good to complete the education process with certification, so make sure to encourage your co-workers to take a step forward and get AWS certified.

4. Technical support fee

When I say that technical support is an AWS limitation, I’m not referring to untrained personnel. I’m referring to the additional costs that dedicated tech support requires.

Just to be clear, your monthly fee includes a limited amount of support. If you need immediate assistance you can opt for one of three support packages: Developer, Business, or Enterprise. While this will increase your monthly costs, it’s also an investment that ensures that you will have the top team at your disposal in case of crisis.

Here is a snapshot of AWS’s pricing for support:

  • Developer: $29/month
  • Business: Greater of $100 – or –
    10% of monthly AWS usage for the first $0–$10K
    7% of monthly AWS usage from $10K–$80K
    5% of monthly AWS usage from $80K–$250K
    3% of monthly AWS usage over $250K
  • Enterprise: Greater of $15,000 – or –
    10% of monthly AWS usage for the first $0–$150K
    7% of monthly AWS usage from $150K–$500K
    5% of monthly AWS usage from $500K–$1M
    3% of monthly AWS usage over $1M

You can see a more detailed report with a few examples of the pricing models on the official AWS Support page.
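
To make the tiered Business pricing above a little more concrete, here is a minimal sketch that applies those published percentages to a monthly AWS bill. The function is purely illustrative and not an official AWS calculator.

# Estimate the monthly Business support fee: the greater of $100, or 10% of the first
# $10K of usage, plus 7% from $10K-$80K, 5% from $80K-$250K and 3% above $250K.
function Get-BusinessSupportFee {
    param([double]$MonthlyUsage)
    $fee = [Math]::Min($MonthlyUsage, 10000) * 0.10
    if ($MonthlyUsage -gt 10000)  { $fee += ([Math]::Min($MonthlyUsage, 80000)  - 10000)  * 0.07 }
    if ($MonthlyUsage -gt 80000)  { $fee += ([Math]::Min($MonthlyUsage, 250000) - 80000)  * 0.05 }
    if ($MonthlyUsage -gt 250000) { $fee += ($MonthlyUsage - 250000) * 0.03 }
    return [Math]::Max($fee, 100)
}

# Example: a $50,000/month AWS bill
Get-BusinessSupportFee -MonthlyUsage 50000   # 1,000 + 2,800 = $3,800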


There are two things that you can do to overcome this limitation:

  • Be prepared for additional costs and add them to your general business expenses
  • Find your own AWS Consulting Partner

Since the first solution is quite straightforward, I will talk about the second one, because it might be very useful for you. AWS has a broad network of partner companies in its APN – AWS Partner Network. APN includes two types of partner companies:

  • AWS Consulting partners: According to Amazon, “APN Consulting Partners are professional services firms that help customers design, architect, build, migrate, and manage their workloads and applications on AWS. Consulting Partners include System Integrators, Strategic Consultancies, Agencies, Managed Service Providers, and Value-Added Resellers.”
  • AWS Technology partners: Are defined as “providers of software solutions that are either hosted on or integrated with the AWS platform. APN Technology Partners include Independent Software Vendors (ISVs), SaaS, PaaS, Developer Tools, Management, and Security Vendors.”

In this scenario, the ideal solution could be to find a local AWS Consulting Partner and use their assistance in adopting a new system. The main advantage of this is geographical location. You can find a Consulting partner in your area who will help you understand all of the aspects of AWS and provide the guidance and technical assistance whenever you need it.

5. General Cloud Computing issues

Finally, I’d like to mention some of the concerns that often come up when considering a move to the cloud, such as downtime, security, privacy, limited control, and backup protection. It is natural to worry about such issues (they are crucial to your business, after all), but the entire Cloud computing process and system already takes care of most of them. Large and respectable companies, such as Amazon, Google, and Microsoft, stand behind these systems and I believe that we can trust them with our business since they use the same resources to run theirs.

Final few words

As I said at the beginning, it is critical to know the difference between our expectations and reality. When it comes to AWS, you should not expect a perfect system with a simple setup where everything and everyone is waiting just for you. AWS is a complex infrastructure with its own rules and laws that you should respect and know. Once you are aware of them, your Cloud adventure will be much more comfortable than you ever imagined.

Supporting our global community

The content below is taken from the original (Supporting our global community), to continue reading please visit the site. Remember to respect the Author & Copyright.

OpenStack is a global open source community. The OpenStack Foundation serves members in 180 countries focused on advancing the capabilities and accessibility of open infrastructure everywhere. We fundamentally believe diversity and collaboration are a powerful force for innovation, and it has been amazing to see the product of tens of thousands of people around the world over the last 6+ years.

Lauren, Mark and I disagree with the executive order issued by President Trump that targets individuals from 7 countries. The order restricts the travel and movement of people in a discriminatory way that  results in a restriction on access to talent and ideas. It is still unclear how the policies will play out and be enforced, but we will be watching, advocating for and supporting our community members to the best of our ability.

This executive order will not impact the governance of the Foundation or the way the community operates globally. We will continue to support user groups and community members that are active in the seven countries named by the executive order, alongside our 120+ user groups around the world. However, we have two scheduled events in the United States within the next six months that will attract a global audience: the PTG (Project Teams Gathering) in Atlanta, Feb 20-24, a smaller event that will bring together hundreds of upstream contributors, and the OpenStack Summit in Boston, May 8-11, our larger event that happens every six months.

This executive order could impact some community members’ ability to travel to Atlanta and Boston, but unfortunately it is too late at this point to change the location of these events. The following three OpenStack Summits, however, are now scheduled to occur outside of the United States. The next Summit will be in November 2017 in Sydney, Australia, and we are working to finalize the details so we can announce the following two Summit locations soon.

We’ve already heard from one community member, Mohammed Naser, who is concerned that his plans to travel from Canada to Atlanta to attend the PTG may be restricted, simply because he is a dual citizen of Canada and Iraq. Mohammed has been contributing code to OpenStack since 2011 and is the CEO and Founder of Vexxhost. Blocking his travel would serve no purpose and rob the community of a valuable contributor during an important event. If you are concerned about the impact or have any questions, please don’t hesitate to reach out to me at [email protected].

Political actions like this highlight the importance of our collective values. The Four Opens, the founding principles of our community, exist to ensure the free flow of talent and ideas, across geographic, national, organizational or other lines that might divide us. We believe in humanity. We believe in opportunity. We believe in the power of collaboration across borders, and we will continue to carry forward our mission.

Jonathan Bryce
Mark Collier
Lauren Sell

Ireland votes to stop investing public money in fossil fuels

The content below is taken from the original (Ireland votes to stop investing public money in fossil fuels), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ireland just took a big step toward cutting coal and oil out of the picture. Its Parliament has passed a bill that stops the country from investing in fossil fuels as part of an €8 billion ($8.6 billion) government fund. The measure still has to clear a review before it becomes law, but it would make Ireland the first nation to completely eliminate public funding for fossil fuel sources. Even countries that have committed to ditching non-renewable energy, like Iceland, can’t quite make that claim. The closest is Norway, which ditched some of its investments back in 2015.

The bill was put forward by Deputy Thomas Pringle, who sees this as a matter of "ethical financing." It’s a message to energy companies that both deny human-made climate change and lobby politicians to look the other way, he says.

Ireland’s decision won’t have the greatest environmental impact given its relative size, but this is still an aggressive move when many other countries aren’t ready or willing to drop their support for conventional energy. It’s a particularly sharp contrast to the US, whose new leadership is already going to great lengths to suppress climate change science and protect the fossil fuel industry.

Source: Independent

List of free/cheap licenses/hardware for homelabbing?

The content below is taken from the original (List of free/cheap licenses/hardware for homelabbing?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Searching for individual products may produce results in the sub, but it would be nice if we could collect our resources in one place and add them all into the wiki. For example, I had no idea about VMUG EVALExperience until a couple days ago. For a homelabber trying to learn new things, having a list of where to get (legal) licenses without spending thousands of dollars would be awesome.

I'll start out with what I know:

VMUG EVALExperience – $200 USD a year gets you a bunch of VMware licenses for use at home

Microsoft Imagine – formerly DreamSpark. Free licenses for students. I'm personally not a student, so I don't know what's all given here, but my understanding is "lots"

Free Meraki hardware – For IT professionals. Attend a one hour webinar for three different products and get a free AP, security appliance and switch with a three year license.

And that's where I cap out for what I know. Anything else that can be accessed and used for us homelabbers?

After running XenApp for the free trial, I would love to get my hands on a cheap version to keep messing around with if I could, but if not I may buy it when I have an extra $500 kicking about… Outside of the learning aspects, this has the killer plus of letting me run Windows programs on Linux, which is unreal and valuable to me, once I can afford it.

Edit

There's lots of good stuff in here, thanks everyone. /u/MonsterMufffin – obviously this info can be added to the wiki under software, but would it make sense to have a separate page just for discounted software and gear for homelab use?

OpenStack Developer Mailing List Digest January 21-27

The content below is taken from the original (OpenStack Developer Mailing List Digest January 21-27), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • dims [1] : Nova now has a python35 based CI job in check queue running Tempest tests (everything running on py35)
  • markvoelker [2]: Newly published Foundation annual report starts off with interoperability right in the chairman’s note [3]
  • Tell us yours via OpenStack IRC channels with message “#success <message>”
  • All: [4]

Get Active in Upstream Training

  • There is a continuous effort in helping newcomers join our community by organizing upstream contribution trainings [5][6] before every summit.
    • 1.5–2 days of hands-on steps toward becoming an active OpenStack contributor.
  • Like everything else, this is a community effort.
    • In preparation for the Boston summit and the upcoming PTG in Atlanta, we are looking for coaches and mentors to help us make the training better.
    • If you’re interested in helping contact:
      • Ildiko Vancsa IRC freenode at ildikov or email [7]
      • Kendall Nelson IRC freenode at diablo_rojo or email [8]
  • Full thread: [9]

Project Team Gathering Coordination Tool

  • There is no central scheduling beyond assigning rooms to teams and days.
    • Each team arranges their time in their room.
    • List of etherpads [10]
  • We still need centralized communication beyond each room:
    • An event IRC channel: ##openstack-ptg on freenode IRC
      • Do public service announcements
      • Pings from room to room.
    • An EtherCalc-powered dynamic schedule for the extra rooms available:
      • One fishbowl
      • A few dark rooms with projectors and screens (not all will have a/v equipment due to budget).
      • Infra is working on setting up EtherCalc
  • Full thread: [11]

POST /api-wg/news

  • API Guidelines proposed for freeze:
    • Add guidelines on usage of state vs. status [12]
    • Clarify the status values in versions [13]
    • Add guideline for invalid query parameters [14]
  • Guidelines currently under review:
    • Add guidelines for boolean names [15]
    • Define pagination guidelines [16]
    • Add API capabilities discovery guideline [17]
  • Full thread: [18]

Lots of Teams Without PTL Candidates

  • We are nearing the end of the PTL nomination period (Jan 29, 2017 23:45 UTC), but the following projects are still leaderless:
  • Community App Catalog
  • Ec2 API
  • Fuel
  • Karbor
  • Magnum
  • Monasca
  • OpenStackClient
  • OpenStackUX
  • Packaging Rpm
  • Rally
  • RefStack
  • Requirements
  • Senlin
  • Stable Branch Maintenance
  • Vitrage
  • Zun
  • Full thread [19]

How to Reset Default Security ACLs in Windows

The content below is taken from the original (How to Reset Default Security ACLs in Windows), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s Ask the Admin, I’ll show you how to reset security ACLs in Windows to their defaults using the secedit tool.

If you’ve ever been in a situation where Windows Server exhibits strange behavior, or worse, something has stopped working completely, you might have traced the issue to changes in security permissions on files, folders, or registry keys. Access control lists (ACLs) determine access to the filesystem and registry, and they can be changed manually, through Group Policy, or with other tools. Untested modifications to default security settings can prove catastrophic.

Prevention is better than cure: adhering to security best practices, such as not granting IT staff permanent administrative access to servers and implementing a solid change control process, is the best way to ensure that unwanted changes don’t cause nasty surprises in your production environment. But in cases where those measures have either failed or were not in place to protect your systems, it might be necessary to reset permissions to their out-of-the-box defaults.

The method I’m going to show you in this article resets filesystem and registry ACLs to their defaults. Production systems are rarely configured without significant changes to the OS defaults, so applying a mass rollback of ACLs is likely to cause some issues. But in a lab environment, you might decide it’s worth the risk.

Back up and test a restore operation of your server before following the instructions below. You might also consider using secedit’s /generaterollback switch to create a template that would allow you to restore the security ACLs to their current state. For more information about backing up Windows Server, see Back Up a Windows Server 2012 R2 Domain Controller on the Petri IT Knowledgebase.
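
If you do generate a rollback template, a command along the following lines would capture the current settings before any changes are applied. The file names here are only examples, and the exact switch syntax can vary between Windows versions, so confirm it with secedit /? first:

secedit /generaterollback /db defltbase.sdb /cfg %windir%\inf\defltsv.inf /rbk rollback.inf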

Reset Default Security ACLs

Before using the secedit tool to reset permissions, you might consider using the Security Configuration and Analysis Tool instead, as it allows you to compare current settings against those in a template. Also, bear in mind that custom security settings you’ve defined in areas not covered by the security template won’t be rolled back. For more information about using secedit and the GUI Security Configuration and Analysis Tool, see Using the Windows Server 2012 Security Configuration and Analysis Tool on Petri.

To perform the steps below, you’ll need to log in to Windows Server with an account that has local administrative permissions. The default permissions that I’m going to apply using the command below are for servers that are not domain controllers (DCs). If you want to reapply default security settings to a DC, use the defltdc.inf template instead.

  • Log in to Windows Server.
  • Press WIN+R to open the Run dialog box.
  • Type cmd into the Run dialog box and then press ENTER.
  • In the command prompt window, type the following command and then press ENTER.
secedit /configure /cfg %windir%\inf\defltsv.inf /db defltbase.sdb /verbose

Note that the defltsv.inf template is part of a standard Windows Server install and is located in the Windows directory. In the command above, /cfg specifies the security template to apply, /db names the database that the template settings are imported into before being applied, and /verbose produces detailed progress output.


In this article, I showed you how to reset Windows security settings to their defaults.

The post How to Reset Default Security ACLs in Windows appeared first on Petri.

The Most Common Terminal Commands Worth Knowing for the Raspberry Pi

The content below is taken from the original (The Most Common Terminal Commands Worth Knowing for the Raspberry Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

While the Raspberry Pi’s operating system is Linux, that doesn’t necessarily mean you should go out and memorize every Linux command. The Pi is a different type of computer that’s used for different kinds of projects, so the most useful commands differ from what you’d use on an everyday Linux machine. Over at Circuit…

Read more…

Amazon Cloud Directory – A Cloud-Native Directory for Hierarchical Data

The content below is taken from the original (Amazon Cloud Directory – A Cloud-Native Directory for Hierarchical Data), to continue reading please visit the site. Remember to respect the Author & Copyright.

Our customers have traditionally used directories (typically Active Directory Lightweight Directory Service or LDAP-based) to manage hierarchically organized data. Device registries, course catalogs, network configurations, and user directories are often represented as hierarchies, sometimes with multiple types of relationships between objects in the same collection. For example, a user directory could have one hierarchy based on physical location (country, state, city, building, floor, and office), a second one based on projects and billing codes, and a third based on the management chain. However, traditional directory technologies do not support the use of multiple relationships in a single directory; you’d have to create and maintain additional directories if you needed to do this.

Scale is another important challenge. The fundamental operations on a hierarchy involve locating the parent or the child object of a given object. Given that hierarchies can be used to represent large, nested collections of information, these fundamental operations must be as efficient as possible, regardless of how many objects there are or how deeply they are nested. Traditional directories can be difficult to scale, and the pain only grows if you are using two or more in order to represent multiple hierarchies.

New Amazon Cloud Directory
Today we are launching Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data as described above. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

Cloud Directory is a building block that already powers other AWS services including Amazon Cognito and AWS Organizations. Because it plays such a crucial role within AWS, it was designed with scalability, high availability, and security in mind (data is encrypted at rest and while in transit).

Amazon Cloud Directory is a managed service; you don’t need to think about installing or patching software, managing servers, or scaling any storage or compute infrastructure. You simply define the schemas, create a directory, and then populate your directory by making calls to the Cloud Directory API. This API is designed for speed and for scale, with efficient, batch-based read and write functions.

The long-lasting nature of a directory, combined with the scale and the diversity of use cases that it must support over its lifetime, brings another challenge to light. Experience has shown that static schemas lack the flexibility to adapt to the changes that arise with scale and with new use cases. In order to address this challenge and to make the directory future-proof, Cloud Directory is built around a model that explicitly makes room for change. You simply extend your existing schemas by adding new facets. This is a safe operation that leaves existing data intact so that existing applications will continue to work as expected. Combining schemas and facets allows you to represent multiple hierarchies within the same directory. For example, your first hierarchy could mirror your org chart. Later, you could add a facet to track additional properties for each employee, perhaps a second phone number or a social network handle. After that, you could create a geographically oriented hierarchy within the same data: countries, states, buildings, floors, offices, and employees.

As I mentioned, other parts of AWS already use Amazon Cloud Directory. Cognito User Pools use Cloud Directory to offer application-specific user directories with support for user sign-up, sign-in and multi-factor authentication. With Cognito Your User Pools, you can easily and securely add sign-up and sign-in functionality to your mobile and web apps with a fully-managed service that scales to support hundreds of millions of users. Similarly, AWS Organizations uses Cloud Directory to support creation of groups of related AWS accounts and makes good use of multiple hierarchies to enforce a wide range of policies.

Before we dive in, let’s take a quick look at some important Amazon Cloud Directory concepts:

Directories are named, and must have at least one schema. Directories store objects, relationships between objects, schemas, and policies.

Facets model the data by defining required and allowable attributes. Each facet provides an independent scope for attribute names; this allows multiple applications that share a directory to safely and independently extend a given schema without fear of collision or confusion.

Schemas define the “shape” of data stored in a directory by making reference to one or more facets. Each directory can have one or more schemas. Schemas exist in one of three phases (Development, Published, or Applied). Development schemas can be modified; Published schemas are immutable. Amazon Cloud Directory includes a collection of predefined schemas for people, organizations, and devices. The combination of schemas and facets leaves the door open to significant additions to the initial data model and subject area over time, while ensuring that existing applications will still work as expected.

Attributes are the actual stored data. Each attribute is named and typed; data types include Boolean, binary (blob), date/time, number, and string. Attributes can be mandatory or optional, and immutable or editable. The definition of an attribute can specify a rule that is used to validate the length and/or content of an attribute before it is stored or updated. Binary and string objects can be length-checked against minimum and maximum lengths. A rule can indicate that a string must have a value chosen from a list, or that a number is within a given range.

Objects are stored in directories, have attributes, and are defined by a schema. Each object can have multiple children and multiple parents, as specified by the schema. You can use the multiple-parent feature to create multiple, independent hierarchies within a single directory (sometimes known as a forest of trees).

Policies can be specified at any level of the hierarchy, and are inherited by child objects. Cloud Directory does not interpret or assign any meaning to policies, leaving this up to the application. Policies can be used to specify and control access permissions, user rights, device characteristics, and so forth.

Creating a Directory
Let’s create a directory! I start by opening up the AWS Directory Service Console and clicking on the first Create directory button:

I enter a name for my directory (users), choose the person schema (which happens to have two facets; more about this in a moment), and click on Next:

The predefined (AWS) schema will be copied to my directory. I give it a name and a version, and click on Publish:

Then I review the configuration and click on Launch:

The directory is created, and I can then write code to add objects to it.
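
To give a sense of what that code might look like, here is a minimal sketch using the AWS SDK for Python (boto3). The ARNs are placeholders you would copy from the console after publishing the schema, and the "Person" facet with a "name" attribute is an assumption about the schema's contents, so adjust both to match yours:

import boto3

clouddirectory = boto3.client("clouddirectory", region_name="us-east-1")

# Placeholder ARNs -- copy the real values from the Directory Service console.
DIRECTORY_ARN = "arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE"
SCHEMA_ARN = DIRECTORY_ARN + "/schema/users/1"

# Create an object under the directory root ("/") using a facet from the applied schema.
response = clouddirectory.create_object(
    DirectoryArn=DIRECTORY_ARN,
    SchemaFacets=[{"SchemaArn": SCHEMA_ARN, "FacetName": "Person"}],
    ObjectAttributeList=[
        {
            "Key": {"SchemaArn": SCHEMA_ARN, "FacetName": "Person", "Name": "name"},
            "Value": {"StringValue": "Jane Example"},
        }
    ],
    ParentReference={"Selector": "/"},
    LinkName="jane-example",
)

print("Created object:", response["ObjectIdentifier"])

From there, calls such as list_object_children and batch_read can be used to walk and query the hierarchy.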

Pricing and Availability
Cloud Directory is available now in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions and you can start using it today.

Pricing is based on three factors: the amount of data that you store, the number of reads, and the number of writes (these prices are for US East (Northern Virginia)):

  • Storage – $0.25 / GB / month
  • Reads – $0.0040 for every 10,000 reads
  • Writes – $0.0043 for every 1,000 writes
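
To put those numbers in perspective with a quick back-of-the-envelope calculation: a directory holding 10 GB of data that serves one million reads and 100,000 writes in a month would cost roughly $2.50 for storage, $0.40 for reads, and $0.43 for writes, or about $3.33 in total.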

In the Works
We have some big plans for Cloud Directory!

While the priorities can change due to customer feedback, we are working on cross-region replication, AWS Lambda integration, and the ability to create new directories via AWS CloudFormation.

Jeff;

 Microsoft’s Adding New Features To OneDrive For Business

The content below is taken from the original ( Microsoft’s Adding New Features To OneDrive For Business), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft’s OneDrive storage platform has become a critical component for the company’s Office 365 service and today the company is announcing several new features coming to OneDrive that will enhance collaboration and management of content. Specifically, these new features will make it easier to sync, share, and collaborate with the content that you are storing in OneDrive.

If you use SharePoint Online team sites and OneDrive, Microsoft is enabling the ability to sync content between the two services. This sync covers files inside Teams, OneDrive, and shared folders, and it works on both PC and Mac. This feature is a huge improvement for companies that use both services, as it finally bridges the gap between these two online repositories of content.

When it comes to syncing files, Microsoft is adding a new ‘Activity Center’ that will provide visibility into the process. This feature is coming to both PC and Mac; in addition, there is a new stand-alone Mac OneDrive client that works outside of the App Store.

To help IT admins manage content, sync, and sharing capabilities, Microsoft is releasing an updated OneDrive admin center. This updated control center has several improvements, including a new dashboard that offers more granular control over sharing, syncing, and storage.

As Office 365 continues to expand, so will the usage of OneDrive for Business and these new features will enhance compatibility with older services like SharePoint. You can read more about the updates to the platform, here.

The post  Microsoft’s Adding New Features To OneDrive For Business appeared first on Petri.