OpenStack Weekly Community Newsletter (Sept. 19 – 25)


Register for OpenStack Summit Tokyo 2015

Full access registration prices increase on 9/29 at 11:59pm PT

This trove of user stories highlights what people want in OpenStack

The Product Working Group recently launched a Git repository to collect requirements ranging from encrypted storage to rolling upgrades.

How storage works in containers

Nick Gerasimatos, senior director of cloud services engineering at FICO, dives into the lack of persistent storage with containers and how Docker volumes and data containers provide a fix.

The Road to Tokyo 

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

Handling Projects with no PTL candidates

  • The technical committee will appoint a PTL [1] if there is no identified eligible candidate.
  • Appointed PTLs:
    • Robert Clark nominated security PTL
    • Serg Melikyan nominated Murano PTL
    • Douglas Mendizabal nominated Barbican PTL
    • Election for Magnum PTL between Adrian Otto and Hongbin Lu
  • MagnetoDB is being abandoned; no PTL was chosen. Instead, it will be fast-tracked for removal [2] from the official list of OpenStack projects.

Release help needed – we are incompatible with ourselves

  • Robert Collins raises that while the constraints system we use to recognize incompatible components in our release is working, the release team needs help from the community to fix the existing incompatibilities so we can cut the full Liberty release.
  • Issues that exist:
    • OpenStack client not able to create an image.
      • Fix is merged [3].

Semver and dependency changes

  • Robert Collins says currently we don’t provide guidance on what happens when the only changes in a project are dependency changes and a release is made.
    • Today the release team treats dependency changes as a “feature” rather than a bug fix (e.g. if the previous release was 1.2.3 and a requirements sync happens, the next version is 1.3.0).
    • Reasons behind this are complex, some guidance is needed to answer the questions:
      • Is this requirements change an API break?
      • Is this requirements change feature work?
      • Is this requirements change a bug fix?
    • All of these questions can be true. Some examples:
      • If library X exposes library Y as part of its API, and X’s requirement on Y changes from Y>=1 to Y>=2 because X needs a feature from Y==2.
      • Library Y is not exposed in library X’s API; however, a change in X’s requirement on Y will impact users who independently use Y. (Ignoring the intricacies of pip here.)
    • Proposal:
      • nothing -> a requirement -> major version change
      • 1.x.y -> 2.0.0 -> major version change
      • 1.2.y -> 1.3.0 -> minor version change
      • 1.2.3 -> 1.2.4 -> patch version change
    • Thierry Carrez is OK with the last two proposals; defaulting to a major version bump sounds a bit like overkill.
    • Doug Hellmann reminds that we can’t assume the dependency is using semver itself. We would need something other than the version number to determine from the outside whether the API is in fact breaking.
    • Because this problem is so complicated, Doug would rather over-simplify the analysis of requirements updates until we’re better at identifying our own API-breaking changes and differentiating between features and bug fixes. This will let us be consistent, if not 100% correct.
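The proposed mapping above can be sketched in a few lines. This is a hypothetical illustration of the proposal, not OpenStack release-tooling code; the function names and the tuple-based version representation are my own.

```python
# Hypothetical sketch of the proposed rule: given how a dependency's
# minimum version changed, decide which component of the depending
# library's own version to bump. Versions are (major, minor, patch)
# tuples; None means the dependency did not previously exist.

def proposed_bump(old_dep, new_dep):
    """Return 'major', 'minor', or 'patch' per the proposal."""
    if old_dep is None:
        # nothing -> a requirement: major version change
        return "major"
    if new_dep[0] != old_dep[0]:
        # 1.x.y -> 2.0.0: major version change
        return "major"
    if new_dep[1] != old_dep[1]:
        # 1.2.y -> 1.3.0: minor version change
        return "minor"
    # 1.2.3 -> 1.2.4: patch version change
    return "patch"


def apply_bump(version, component):
    """Bump the given component of a (major, minor, patch) tuple."""
    major, minor, patch = version
    if component == "major":
        return (major + 1, 0, 0)
    if component == "minor":
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)
```

For example, adding a brand-new requirement (`proposed_bump(None, (1, 0, 0))`) yields a major bump, while raising a dependency from 1.2.3 to 1.2.4 yields only a patch bump. Doug Hellmann's caveat applies throughout: this only works if the dependency's own version numbers are meaningful, which semver alone cannot guarantee.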

Criteria for applying vulnerability:managed tag

  • The vulnerability management processes were brought to the big tent a couple of months ago [4].
  • Initially we listed what repos the Vulnerability Management Team (VMT) tracks for vulnerabilities.
    • TC decided to change this from repos to deliverables as per-repo tags were decided against.
  • Jeremy Stanley provides transparency for how deliverables can qualify for this tag:
    • All repos in a given deliverable must qualify; if any one repo doesn’t, the whole deliverable doesn’t.
    • Points of contact:
      • Deliverable must have a dedicated point of contact.
        • The VMT will engage with this contact to triage reports.
      • A group of core reviewers should be part of the <project>-coresec team and will:
        • Confirm whether a bug is accurate/applicable.
        • Provide pre-approval of patches attached to reports.
    • The PTL for the deliverable should agree to act as (or delegate) a vulnerability management liaison whom the VMT can escalate to.
    • The repos within a deliverable should have a bug tracker configured to initially provide access to privately reported vulnerabilities only to the VMT.
      • The VMT will determine if the vulnerability is reported against the correct deliverable and redirect when possible.
    • The deliverable repos should undergo a third-party review/audit looking for obvious signs of insecure design or risky implementation.
      • This aims to keep the VMT’s workload down.
      • It has not been identified who will perform this review. Maybe the OpenStack Security project team?
  • Review of this proposal is posted [5].

Consistent support for SSL termination proxies across all APIs

  • While a bug [6] was being debugged, an issue was identified where an API sitting behind a proxy performing SSL termination would not generate the right redirection (http instead of https).
    • A review [7] proposes a ‘secure_proxy_ssl_header’ config option, which allows the API service to detect SSL termination based on the X-Forwarded-Proto header.
  • Another bug with the same issue was opened back in 2014 [8].
    • Several projects applied patches to fix this issue, but the fixes are inconsistent:
      • Glance added public_endpoint config
      • Cinder added public_endpoint config
      • Heat added secure_proxy_ssl_header config (through heat.api.openstack:sslmiddleware_filter)
      • Nova added secure_proxy_ssl_header config
      • Manila added secure_proxy_ssl_header config (through oslo_middleware.ssl:SSLMiddleware.factory)
      • Ironic added public_endpoint config
      • Keystone added secure_proxy_ssl_header config
  • Ben Nemec comments that the service level is the wrong place to solve this, since it requires changes in many different API services; instead it should be fixed in the proxy that’s converting the traffic to http.
    • Sean Dague notes that this should be done in the service catalog. Service discovery is a base thing that all services should use when talking to each other. There’s an OpenStack spec [9] attempting to get a handle on this.
    • Mathieu Gagné notes that this won’t work. There is a “split view” in the service catalog where internal management nodes have a specific catalog and public nodes (for users) have a different one.
      • Suggestion to use oslo middleware SSL for supporting the ‘secure_proxy_ssl_header’ config to fix the problem with little code.
      • Sean agrees the split view needs to be considered; however, another layer of work shouldn’t decide whether the service catalog is a good way to keep track of what our service URLs are. We shouldn’t push a model where Keystone is optional.
      • Sean notes that while the ‘secure_proxy_ssl_header’ config solution supports the case of one HA proxy with SSL termination in front of one API service, it may not work where one API service sits behind N HA proxies, for:
        • Clients needing to understand the “Location:” headers correctly.
        • Libraries like requests/phantomjs can follow the links provided in REST documents, and they’re correct.
        • The minority of services that “operate without keystone” as an option are able to function.
      • ZZelle mentions this solution does not work in the cases when the service itself acts as a proxy (e.g. nova image-list).
      • Would this solution work in the HA Proxy case where there is one terminating address for multiple backend servers?
        • Yes, by honoring the X-Forwarded-Host and X-Forwarded-Port headers set by HTTP proxies, so that WSGI applications are unaware of the proxy in front of them.
  • Jamie Lennox says this same topic came up as a block in a Devstack patch to get TLS testing in the gate with HA Proxy.
    • Longer term solution, transition services to use relative links.
      • This is a pretty serious change. We’ve been returning absolute URLs forever, so assuming that all client code out there would work with relative URLs is a big assumption. That’s a major version for sure.
  • Sean agrees that we have enough pieces to get something better with proxy headers for Mitaka. We can address the remaining edge cases if we clean up the service catalog use.
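The secure_proxy_ssl_header approach discussed above can be sketched as a tiny WSGI middleware. This is a minimal illustration of the technique, not the actual oslo.middleware or Heat implementation; the class name and defaults are assumptions.

```python
# Minimal WSGI middleware sketch: if the proxy-set header (by default
# X-Forwarded-Proto, which appears in the WSGI environ as
# HTTP_X_FORWARDED_PROTO) says the original request was HTTPS, rewrite
# wsgi.url_scheme so redirects and links the API generates use https.
# It also honors X-Forwarded-Host, so generated URLs point at the
# proxy's terminating address rather than the backend server.

class SSLTerminationMiddleware:
    def __init__(self, app, header="HTTP_X_FORWARDED_PROTO"):
        self.app = app
        self.header = header  # WSGI environ key for the trusted header

    def __call__(self, environ, start_response):
        if environ.get(self.header, "").lower() == "https":
            environ["wsgi.url_scheme"] = "https"
        forwarded_host = environ.get("HTTP_X_FORWARDED_HOST")
        if forwarded_host:
            environ["HTTP_HOST"] = forwarded_host
        return self.app(environ, start_response)
```

Wrapping an API’s WSGI app with this middleware makes it generate https redirects behind an SSL-terminating proxy without per-service changes, which is why the thread suggests oslo middleware as a low-code fix. Note the operational caveat: the header must only be trusted when it is set by a proxy the operator controls, since clients could otherwise spoof it.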

[1] –

[2] –

[3] –

[4] –

[5] –

[6] –

[7] –

[8] –

[9] –