For the sake of simplicity, I shall build this post around the https://www.modulesgarden.com/products/whmcs/proxmox-vps example.
Normally, this would be posted on https://www.forum.modulesgarden.com/, but the discussion is more general, concerning the security of any third-party application that integrates with the Proxmox API. That’s why I thought it better to bring it to the Proxmox community.
The trigger was the fact that the Proxmox root password is stored in the WHMCS database, since the Modules Garden Proxmox module uses the Servers feature of WHMCS.
I don’t dispute that Proxmox VPS For WHMCS by Modules Garden is a mature, well maintained, full-featured piece of software (including automatic IP management and cloud-init integration). But, from a security point of view, it wouldn’t be acceptable to use it IN PRODUCTION with the full Proxmox root password stored!
No matter how much I would love the automation provided by the module, I cannot take the risk of compromising the whole Proxmox cluster.
At first glance, since the server password is stored in WHMCS, and the Modules Garden Proxmox module uses the Servers feature of WHMCS, I assumed this was fixable at the WHMCS level.
But there is nothing to do at the WHMCS level, for two reasons:
1) There is nothing to improve on the WHMCS side regarding storage of passwords, since it is proprietary software.
WHMCS needs to decrypt the password before sending it to the Proxmox API (see http://www.webhostingtalk.com/showthread.php?t=921279).
Even in the optimistic scenario where the attacker couldn’t obtain the full plain root password (thanks to the proprietary WHMCS protection), the setup would still be exploitable by forking the open-source Proxmox VPS For WHMCS module into a malicious version; it could still do damage via the API (for example, run deletion commands).
2) It isn’t possible to set up a generic WHMCS module, because the API proxy gateway must be adapted to the backend that WHMCS is trying to manage (in our case, Proxmox).
How to mitigate?
My first idea was to set up another PHP application, acting as API middleware.
The main advantages would be the absence of authenticated users (removing the risk of privilege escalation in WHMCS). Web access could even be restricted via firewall, because clients don’t need to log in there.
The attack surface would also be smaller: a less complex PHP app compared to WHMCS and its vulnerabilities (remember the WHMCS SQL injection). WHMCS is more heavily targeted by attackers due to its popularity.
Proxy API gateway
Then I took the idea a bit further. Why reinvent the wheel with another PHP framework, when I could use a reverse proxy server?
How would this work?
See https://pve.proxmox.com/wiki/Proxmox_VE_API#Save_an_authorization_cookie_on_the_hard_drive
Current approach:
Code:
curl --silent --insecure --data "username=root@pam&password=yourpassword" \
https://$APINODE:8006/api2/json/access/ticket\
| jq --raw-output '.data.ticket' | sed 's/^/PVEAuthCookie=/' > cookie
Proxy approach:
At the WHMCS level, let’s store only FirstHalfPass, declared as the server’s root password. A small hack in the Modules Garden module would be required, because the connectivity check against the Proxmox server will fail with only half the password.
Code:
curl --silent --insecure --data "username=root@pam&password=FirstHalfPass" \
https://$APINODE:8006/api2/json/access/ticket\
| jq --raw-output '.data.ticket' | sed 's/^/PVEAuthCookie=/' > cookie
At the proxy level, let’s store SecondHalfPass. With transformation rules at the proxy level, depending on the request, the proxy concatenates FirstHalfPass + SecondHalfPass = yourpassword and sends the full password to the upstream Proxmox server. The authentication cookie is returned intact to the WHMCS client, keeping the workflow of the Modules Garden module.
Code:
curl --silent --insecure --data "username=root@pam&password=yourpassword" \
https://$APINODE:8006/api2/json/access/ticket\
| jq --raw-output '.data.ticket' | sed 's/^/PVEAuthCookie=/' > cookie
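To make the idea concrete, here is a minimal sketch of what such a transformation rule could look like on an OpenResty-based gateway (Kong is built on the same stack). The listen port, the upstream name proxmox-backend and the SecondHalfPass literal are placeholders of my own, not anything taken from the Modules Garden module:
Code:
# Hypothetical OpenResty server block for the API gateway
server {
    listen 8006 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location = /api2/json/access/ticket {
        access_by_lua_block {
            -- the second half of the root password lives only on the proxy
            local second_half = "SecondHalfPass"

            ngx.req.read_body()
            local args = ngx.req.get_post_args()
            if args.password then
                -- FirstHalfPass (sent by WHMCS) + SecondHalfPass = yourpassword
                args.password = args.password .. second_half
                ngx.req.set_body_data(ngx.encode_args(args))
            end
        }
        proxy_pass https://proxmox-backend:8006;
        proxy_ssl_verify off;   # Proxmox often runs with a self-signed certificate
    }

    # all other API calls are forwarded unchanged, so the ticket cookie keeps working
    location /api2/ {
        proxy_pass https://proxmox-backend:8006;
        proxy_ssl_verify off;
    }
}
WHMCS keeps talking to the gateway exactly as if it were the Proxmox node, so nothing changes in the module workflow beyond pointing it at the gateway.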
Mitigation advantages
Due to the Proxmox architecture, there are no workarounds at the Proxmox API level (see https://forum.proxmox.com/threads/proxmox-api.30007/#post-150636).
No matter the level of security in WHMCS, with the proxy API gateway solution I am fully confident that an attacker would never get the plain Proxmox root password.
It’s possible to set up whitelist filters at the proxy level, authorizing only the allowed Proxmox API calls (for instance, do not allow automated deletion of Proxmox services).
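A hypothetical whitelist fragment for an nginx-based gateway could look like this (the paths and methods are only examples; the real list depends on which calls the WHMCS module actually uses):
Code:
# Hypothetical whitelist fragment inside the gateway server { } block
# allow authentication and read-only status queries
location = /api2/json/access/ticket                    { proxy_pass https://proxmox-backend:8006; }
location ~ ^/api2/json/nodes/[^/]+/qemu/[^/]+/status/  { proxy_pass https://proxmox-backend:8006; }

# never allow guest deletion through the gateway, even with a valid ticket
location ~ ^/api2/json/nodes/[^/]+/qemu/[^/]+$ {
    if ($request_method = DELETE) {
        return 403;
    }
    proxy_pass https://proxmox-backend:8006;
}

# everything not explicitly whitelisted is rejected
location / {
    return 403;
}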
Custom rate limits can also be set up, depending on the API call being sent (see the sketch after this list):
- For instance, restrict creation of Proxmox VPSs to a maximum of X per minute.
- On the other hand, allow very frequent API queries for Proxmox stats and graphs.
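With plain nginx rate limiting this could be sketched roughly as follows (the zone names and the concrete rates are arbitrary examples of my own):
Code:
# Hypothetical rate limits; the limit_req_zone lines belong in the http { } context
limit_req_zone $binary_remote_addr zone=vm_create:10m rate=2r/m;
limit_req_zone $binary_remote_addr zone=stats:10m     rate=30r/s;

server {
    listen 8006 ssl;

    # VM creation: only a couple of requests per minute
    location ~ ^/api2/json/nodes/[^/]+/qemu$ {
        limit_req zone=vm_create burst=5;
        proxy_pass https://proxmox-backend:8006;
    }

    # status and RRD graph queries: allowed much more often
    location ~ ^/api2/json/nodes/[^/]+/qemu/[^/]+/(status|rrddata) {
        limit_req zone=stats burst=50 nodelay;
        proxy_pass https://proxmox-backend:8006;
    }
}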
Choosing the proxy API gateway
Very relevant articles are
- https://blog.getambassador.io/build...-gateway-on-kubernetes-and-envoy-ed01ed520844
- https://blog.getambassador.io/envoy...bassador-api-gateway-chose-envoy-23826aed79ef
ENVOY
Even if they chose Envoy, they highlight other alternatives too. Envoy is the new kid on the block, with an impressive list of users (https://www.envoyproxy.io/) and no commercial pressure for a proprietary Envoy Plus or Envoy Enterprise Edition. But, from my point of view, I prefer a more established solution (the first releases at https://github.com/envoyproxy/envoy/releases date from 2016), better documented and with more related articles online.
HAPROXY
“The velocity of the HAProxy community didn’t seem to be very high. We ourselves had experienced the challenges of hitless reloads (being able to reload your configuration without restarting your proxy) which were not fully addressed until the end of 2017 despite epic hacks from folks like Joey at Yelp. With v1.8, the HAProxy team has started to catch up to the minimum set of features needed for microservices, but 1.8 didn’t ship until November 2017.”
MY CHOICE: KONG API GATEWAY, BASED ON NGINX
First, I quote their reluctance about pure NGINX:
“NGINX open source has a number of limitations, including limited observability and health checks. To circumvent the limitations of NGINX open source, our friends at Yelp actually deployed HAProxy and NGINX together. More generally, while NGINX had more forward velocity than HAProxy, we were concerned that many of the desirable features would be locked away in NGINX Plus. The NGINX business model creates an inherent tension between the open source and Plus product, and we weren’t sure how this dynamic would play out if we contributed upstream.” via https://blog.getambassador.io/envoy...bassador-api-gateway-chose-envoy-23826aed79ef
On the other hand,
“These features are particularly interesting to customers who want an easy‑to‑use solution without compiling in third‑party modules or building supporting tools. There’s no attempt to limit the open source version, and many of the features we add to NGINX Plus already have third‑party implementations that our open source users can use.” (https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/)
That’s true; for example, we have https://github.com/yaoweibin/nginx_upstream_check_module for upstream health checks.
https://openresty.org/en/ is another example: “OpenResty® is a full-fledged web platform that integrates the standard Nginx core, LuaJIT, many carefully written Lua libraries, lots of high quality 3rd-party Nginx modules, and most of their external dependencies. “
“Kong is built on top of OpenResty, and provides a system for easily applying extra functionality to backend services using a RESTful API interface. You can see the Plugins that it offers from the Plugin Gallery. Some of the extra functionality that Kong provides out of the box are rate-limiting, security and transformations." https://github.com/Kong/kong/issues/484
“We took a look at Tyk, Kong, and a few other open source API Gateways, and found that they had a common architecture — a persistent data store (e.g., Cassandra, Redis, MongoDB), a proxy server to do the actual traffic management, and REST APIs for configuration. “ https://blog.getambassador.io/build...-gateway-on-kubernetes-and-envoy-ed01ed520844
Kong is a dedicated API proxy gateway solution (https://konghq.com/), with prominent users.
What is Kong? “Kong is an API gateway. That means it is a form of middleware between computing clients and your API-based applications. Kong easily and consistently extends the features of your APIs. Some of the popular features deployed through Kong include authentication, security, traffic control, serverless, analytics & monitoring, request/response transformations and logging.” https://konghq.com/faqs/
CHOOSING KONG BECAUSE OF NGINX, AS A PERSONAL PREFERENCE
Finally, I have to admit my preference for nginx, since I already use it as a reverse proxy and caching solution. So, for ease of maintenance, I would prefer an API proxy gateway that is also based on nginx.
Even if HAProxy is state of the art as a load balancer, and Varnish is state of the art as a caching proxy, it’s harder to maintain a mixed solution (HAProxy in front of Varnish; see https://serverfault.com/questions/7...-varnish-is-between-haproxy-and-apache/715864).
The direction towards integrated solutions is obvious (Varnish handles SSL only in the paid version; HAProxy has introduced caching, but it is very limited: https://www.haproxy.com/blog/whats-new-haproxy-1-8/).
Nginx offers the best all-in-one product!
Thanks!