Wednesday, March 31, 2021

Recent Questions - Server Fault


Tracking TCP Connection in background

Posted: 31 Mar 2021 10:14 PM PDT

I am looking for a daemon utility to track all non-local TCP connections: which binaries establish TCP connections (actively and passively), and with which IPs and ports.

auditd seems like a great tool.

Following this post, I noticed that the following rule captures all connect calls:

auditctl -a exit,always -F arch=b64 -S connect -k MYCONNECT

I see many entries like these:

type=SOCKADDR msg=audit(04/01/2021 10:54:23.327:397) : saddr={ fam=local path=/dev/log }
type=SYSCALL msg=audit(04/01/2021 10:54:23.327:397) : arch=x86_64 syscall=connect success=yes exit=0 a0=0x4 a1=0x7fc64b29a6c0 a2=0x6e a3=0x20656c62616e6520 items=1 ppid=3116 pid=3156 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=2 comm=sudo exe=/usr/bin/sudo key=MYCONNECT
type=SOCKADDR msg=audit(04/01/2021 10:54:23.328:403) : saddr={ fam=local path=/var/run/dbus/system_bus_socket }
type=SYSCALL msg=audit(04/01/2021 10:54:23.328:403) : arch=x86_64 syscall=connect success=yes exit=0 a0=0x4 a1=0x55e28814cac8 a2=0x21 a3=0x7fff6e3462d0 items=1 ppid=3116 pid=3156 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=2 comm=sudo exe=/usr/bin/sudo key=MYCONNECT

I wonder whether there is a way to filter by address family, limiting results to IPv4 and IPv6.

For the socket system call I can add a filter on the AF family (IPv4 or IPv6), but for the connect system call I am not sure how to do so.
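The closest I can sketch, assuming the stock audit toolchain: socket() takes the address family as its first argument (a0), so it can be filtered at rule level, while connect() only receives a pointer to a sockaddr, so the family has to be filtered after the fact, e.g. with ausearch:

# socket(): a0 is the domain (2 = AF_INET, 10 = AF_INET6 on Linux)
auditctl -a always,exit -F arch=b64 -S socket -F a0=2 -k MYSOCKET
auditctl -a always,exit -F arch=b64 -S socket -F a0=10 -k MYSOCKET

# connect(): the family lives inside the sockaddr struct, not in a register,
# so filter the interpreted records instead:
ausearch -i -k MYCONNECT | grep -E 'fam=inet6?'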

Thanks.

systemd raspbian wpa_supplicant access point not working

Posted: 31 Mar 2021 09:44 PM PDT

Moving on from my prior question about identifying the source of a wireless SSID, I am now ready for the next challenge. Most of the documentation I found for wpa_supplicant and the creation of a new access point revolved around a change from dhcpcd networking to systemd networking. Though I feel it was a mistake to do so, I have done this.

Problems: No IP address on wlan0 at boot; my SSID does not show up on wifi devices. I'm sure these are related.

I have the following configuration:

(DNS and DHCP provided by dnsmasq)

/etc/systemd/network/04-wired.network

[Match]
Name=eth0

[Network]
Address=10.158.54.3/24
Gateway=10.158.54.1
DNS=127.0.0.1 75.75.75.75
MulticastDNS=no

/etc/systemd/network/08-wifi.network

[Match]
Name=wlan0

[Network]
DHCP=yes

/etc/wpa_supplicant/wpa_supplicant-wlan0.conf (some values masked)

ctrl_interface=DIR=/run/wpa_supplicant GROUP=netdev
ctrl_interface_group=wheel
update_config=1
p2p_disabled=1
country=US
ap_scan=1
#arris24 is the main router's SSID
#cfg2021 is the SSID I want to create
network={
        ssid="arris24.xx.com"
        #psk="xx"
        psk=xx
}
network={
        ssid="cfg2021.xx.com"
        psk="xx"
        mode=2
        key_mgmt=WPA-PSK
}

I have done the appropriate systemctl enable magic, and on reboot, eth0 has its static address, but wlan0 does not get a DHCP address.
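(For reference, the enable steps were roughly the following; a sketch assuming the stock Raspbian/Debian unit names, so adjust if yours differ:)

sudo systemctl disable dhcpcd
sudo systemctl enable --now systemd-networkd systemd-resolved
sudo systemctl enable --now wpa_supplicant@wlan0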

> ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.158.54.3  netmask 255.255.255.0  broadcast 10.158.54.255
        inet6 fe80::dea6:32ff:fe47:16c2  prefixlen 64  scopeid 0x20<link>
        inet6 2601:c7:8400:f870:dea6:32ff:fe47:16c2  prefixlen 64  scopeid 0x0<global>
        ether dc:a6:32:47:16:c2  txqueuelen 1000  (Ethernet)
        RX packets 1837912  bytes 2596534251 (2.4 GiB)
        RX errors 0  dropped 41  overruns 0  frame 0
        TX packets 871080  bytes 285375665 (272.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3506  bytes 305045 (297.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3506  bytes 305045 (297.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::dea6:32ff:fe47:16c3  prefixlen 64  scopeid 0x20<link>
        ether dc:a6:32:47:16:c3  txqueuelen 1000  (Ethernet)
        RX packets 2234  bytes 257504 (251.4 KiB)
        RX errors 0  dropped 9  overruns 0  frame 0
        TX packets 566  bytes 78118 (76.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

This is the dnsmasq.conf minus all the dhcp-host and comment entries

> egrep -v '^ *#|dhcp-host|^ *$' /etc/dnsmasq.conf
interface=eth0
listen-address=10.158.54.3
listen-address=127.0.0.1
log-queries=extra
bind-interfaces
domain-needed
bogus-priv
cname=songs,pi-in-the-sky
cname=hpmicro,hpmicro1
dhcp-range=10.158.54.101,10.158.54.200,25h
server=10.158.54.3
server=8.8.8.8
server=75.75.75.75
dhcp-option=option:router,10.158.54.1    # Default gateway
dhcp-option=6,10.158.54.3                # DNS (Hey, that's me!)
domain=xx.com
expand-hosts # Add domain also to any simple names in /etc/hosts

I can see that wpa_supplicant began running at startup:

> ps -ef|grep wpa|grep pli
root       373     1  0 00:09 ?        00:00:00 /sbin/wpa_supplicant -c/etc/wpa_supplicant/wpa_supplicant-wlan0.conf -Dnl80211,wext -iwlan0

When other devices connect via ethernet, they receive an IP address, so I believe dnsmasq is OK. Supporting that, if I do killall wpa_supplicant and then run it manually:

/sbin/wpa_supplicant -c/etc/wpa_supplicant/wpa_supplicant-wlan0.conf -Dnl80211,wext -iwlan0 -d -f /tmp/wpa-debug.txt  

then wlan0 gets an IP address (10.158.54.162) from dnsmasq. But I never see the cfg2021.xx.com network in the list of SSIDs on my wireless devices. (/tmp/wpa-debug.txt is a 36KB file by the time I kill the process.)

I'm way out of my league when it comes to systemd networking. (Again, I think it was a mistake to go that route since originally the networking was more traditional - and it worked. For a long time. But here we are.)

I'd appreciate your guidance. There are two problems I perceive:

-- wlan0 not getting a network IP address at startup
-- cfg2021 network SSID never shows up on wifi devices

Can Google Cloud Organization be used for internet uptime?

Posted: 31 Mar 2021 07:31 PM PDT

Maybe I'm barking up the wrong tree! I just want to be able to monitor our internet connection (i.e. ping every 15 sec) and get a notification (email or preferably text) when it goes down for more than a minute. I have NetUptime Monitor installed on one PC and it works great and keeps a log, but it does not notify me. I've just spent hours trying different solutions (Spiceworks, Zabbix, PRTG) but they either had issues or seemed like overkill. Does Google have a solution?

DNS VIEW not working for any

Posted: 31 Mar 2021 04:42 PM PDT

I have configured the following DNS view setup:

view "local-lan" {
    match-clients { 192.168.0.0/24; };
    zone "localtesting.com" {
        type master;
        file "internal/internal.localtesting.com";
    };
};

view "any" {
    match-clients { any; };
    zone "betatesting.com" {
        type master;
        file "external/betatesting.com";
    };
};

When I try to access the betatesting.com domain from a server on the 192.168.0.0/24 network, it matches only the local-lan view and returns NXDOMAIN. I expected it to fall through to the any view when the zone is not present in local-lan. Can anybody explain this behavior?
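From what I have read since, BIND matches a client to exactly one view (the first whose match-clients matches) and resolves entirely within it, so queries never fall through to a later view. A possible fix, sketched here but untested against this exact config, is to define the zone in both views (BIND 9.10+ also has the in-view option to avoid duplicating the zone data):

view "local-lan" {
    match-clients { 192.168.0.0/24; };
    zone "localtesting.com" {
        type master;
        file "internal/internal.localtesting.com";
    };
    zone "betatesting.com" {
        type master;
        file "external/betatesting.com";
    };
};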

Change time in Xampp Apache

Posted: 31 Mar 2021 04:38 PM PDT

I want to know if there is a way to change Apache's clock to another time. For example, today is March 31, 2021, but I want Apache to believe it is March 1, 2021 so I can test my applications.
Let me know if it is possible?
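Apache itself has no setting for this; it takes the time from the operating system. Short of changing the system clock, one option on Linux is the libfaketime preload library; a hedged sketch, untested with XAMPP specifically (the library path varies by distro):

# start XAMPP with a faked absolute date via libfaketime
FAKETIME="@2021-03-01 00:00:00" \
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1 \
/opt/lampp/lampp start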

apache reverse proxy to main domain on one port and subdomain on another port

Posted: 31 Mar 2021 04:04 PM PDT

I am trying to add a subdomain to an existing configuration using a different port than the main domain. The existing config looks like this:

<VirtualHost *:80>
    ServerName example.com
    ServerAdmin webmaster@localhost
    UseCanonicalName On
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://example.com%{REQUEST_URI} [L,R=301]
</VirtualHost>

<VirtualHost *:443>
    ServerName example.com
    UseCanonicalName On
    ProxyPass / http://127.0.0.1:5001/
    ProxyPassReverse / http://127.0.0.1:5001/
    LogLevel warn
    ErrorLog ${APACHE_LOG_DIR}/example_error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key
    SSLCACertificateFile /etc/ssl/certs/ca.crt
    SSLVerifyDepth 2
</VirtualHost>

I have tried adding another config file with essentially the same information, replacing example.com with sub.example.com and changing the port, but that did not work. What is the best way to add a subdomain to this configuration?

My certificate is for *.example.com, if that is important.
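For illustration, a sketch of what the extra vhost might look like, assuming a hypothetical external port 8443 and backend port 5002; the details that are easy to miss are the extra Listen directive and repeating the SSL configuration in the new vhost:

Listen 8443

<VirtualHost *:8443>
    ServerName sub.example.com
    UseCanonicalName On
    ProxyPass / http://127.0.0.1:5002/
    ProxyPassReverse / http://127.0.0.1:5002/
    SSLEngine on
    # the *.example.com wildcard certificate also covers sub.example.com
    SSLCertificateFile /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key
    SSLCACertificateFile /etc/ssl/certs/ca.crt
</VirtualHost>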

Need to deploy LAPS via GPO, can't use computer configuration or user configuration

Posted: 31 Mar 2021 11:17 PM PDT

I am trying to deploy LAPS to all my users via GPO. The issue I am having is that nobody has local admin rights on their machines, so the install obviously won't work under user configuration, and I can't really use computer configuration either: most of my users are now working from home and are not logged into the VPN at startup/shutdown, so the policy never triggers. Is there another way to achieve this?

Back up arbitrary config files on linux servers in RANCID

Posted: 31 Mar 2021 08:37 PM PDT

I use RANCID to back up router and switch configurations.

I'd also like to be able to have it take automatic backups of configuration files on my servers so I can easily see when changes occur and if something breaks, revert to the last known config.

There are a number of approaches to this, but RANCID has everything I'm looking for in terms of features and I already use it, so it would be ideal if I could have it built in to that.

I see this question from 9 years ago asking the same thing, and the top answer pretty much just says "build your own module". I've had a look at the RANCID modules and I can't wrap my head around how to do that, so I'm looking to see whether, in the past 9 years, a module for this has appeared.

Edit: Not yet a complete solution, but I found this repository which seems to have the basics for what I'd need to be able to grab files by SCP and load them into RANCID: https://github.com/drewbeer/rancid-scp

SSSD integration with LDAP Error 'Could not start TLS encryption. TLS: hostname does not match CN in peer certificate'

Posted: 31 Mar 2021 11:09 PM PDT

We are currently using a wildcard certificate with a SAN. I can successfully run ldapsearch from my client machine once I add TLS_REQSAN allow to the OpenLDAP configuration.

Now I'm trying to integrate SSSD with secure LDAP, but I'm getting the error below:

'Could not start TLS encryption. TLS: hostname does not match CN in peer certificate'

How can I force SSSD to check the Subject Alternative Name (SAN) instead of the CN?

Is there a property I could set in the SSSD configuration?
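The only related knob I know of in sssd.conf is ldap_tls_reqcert, which controls certificate-verification strictness rather than SAN matching specifically; relaxing it is a security-reducing workaround sketch, not a proper fix:

[domain/default]
# values: never | allow | try | demand | hard
# "allow" proceeds even when server certificate verification fails
ldap_tls_reqcert = allow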

ldapsearch fails with TLS: hostname does not match CN in peer certificate

Posted: 31 Mar 2021 11:09 PM PDT

I'm trying to configure a secure LDAP client using the provided certificates (RootCA, IntermediateCA, IssuingCA and server certificate) and have created the truststore. openssl s_client works successfully, but when I run ldapsearch I get the error below:

ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)
        additional info: TLS: hostname does not match CN in peer certificate

ldap.conf:

SASL_NOCANON    on
#Configuration for LDAP
URI ldaps://ldapserver.abc.example.com/
BASE dc=ldapserver,dc=abc,dc=example,dc=com
TLS_CACERTDIR /etc/openldap/cacerts
TLS_CACERT /etc/pki/tls/certs/ca-bundle.crt

LDAP server FQDN: ldapserver.abc.example.com
Client FQDN: centos7.xyz.example.com

Do I need to create a new certificate for the client using the provided certificates, and if yes, how?
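One hedged check before anything else: confirm the certificate the server presents actually lists the FQDN in its SANs (modern verifiers ignore the CN when SANs are present):

# inspect the SANs the LDAP server actually presents
openssl s_client -connect ldapserver.abc.example.com:636 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'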

Apache 2.4 LDAP lookup slow

Posted: 31 Mar 2021 04:53 PM PDT

Server is running RHEL 7 and Apache 2.4.6; this is a pretty new (about a week old) problem. My department Intranet uses authentication against the university's Active Directory environment, and authentication for end-users takes over 30 seconds. Subsequent page loads are nearly-instant, and after some time (timeout, I assume), the problem is back.

<Directory /var/www/html/intranet>
  AuthType Basic
  AuthName "Restricted files"
  AuthBasicProvider ldap
  AuthLDAPBindDN CN=dept-binder,OU=Generic-Logon,OU=Generic,DC=example,DC=edu
  AuthLDAPBindPassword lamepassword
  AuthLDAPURL ldaps://ldap-ad.example.edu:636/dc=example,dc=edu?sAMAccountName?sub

  <RequireAny>
    require ldap-group CN=ug-dept-intranet,OU=Deoartment,OU=Dept-Groups,DC=example,DC=edu
  </RequireAny>
</Directory>

Here are some relevant lines from error_log:

AH02034: Initial (No.1) HTTPS request received for child 36 (server dept.example.edu:443)
AH01626: authorization result of Require ldap-group CN=ug-psy-employees,OU=Dynamic,OU=Psychology,OU=FSU-Dept-Groups,DC=fsu,DC=edu: denied (no authenticated user yet)
AH01626: authorization result of Require ldap-group CN=ug-dept-intranet,OU=Dept,OU=Dept-Groups,DC=example,DC=edu: denied (no authenticated user yet)
AH01691: auth_ldap authenticate: using URL ldaps://ldap-ad.example.edu:636/dc=example,dc=edu?sAMAccountName?sub
AH02001: Connection closed to child 11 with standard shutdown (server dept.example.edu:443)

# 37 seconds pass

AH01697: auth_ldap authenticate: accepting jsmith
AH01713: auth_ldap authorize: require group: testing for group membership in "CN=ug-dept-intranet,OU=Department,OU=Dept-Groups,DC=example,DC=edu"
AH01714: auth_ldap authorize: require group: testing for member: CN=jsmith,OU=PEOPLE,DC=example,DC=edu (CN=ug-dept-intranet,OU=Department,OU=Dept-Groups,DC=example,DC=edu)
AH01715: auth_ldap authorize: require group: authorization successful (attribute member) [Comparison true (adding to cache)][6 - Compare True]
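A hedged guess worth testing: a long stall on the first request followed by fast cached ones often points at a stale pooled LDAP connection or a slow-to-fail directory server, and mod_ldap's timeouts default quite high. Something like the following shortens the failure path (directive availability varies by build; RHEL backports some of these):

# mod_ldap tuning, values illustrative
LDAPConnectionTimeout 5      # seconds to wait for the TCP connect
LDAPTimeout 10               # seconds for LDAP bind/search operations
LDAPConnectionPoolTTL 30     # discard pooled connections idle longer than 30s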

Cheapest way to setup a domain controller with AD-DS for a small business with multiple locations

Posted: 31 Mar 2021 10:04 PM PDT

I work for a small business with little IT infrastructure. We want to be able to join all computers throughout the company to a single domain to push group policies and conduct other management functions. However, we have 15 offices with 1-2 employees at each office and 10 at corporate, 36 employees in total. To me, it doesn't make sense to invest in the infrastructure to set up a domain controller with a firewall at each location.

Based on my research it seems like moving everything to the cloud (Azure) or doing a hybrid approach with our on-premise server would make more sense. Is my thinking correct here? Would there be a cheaper way?

Deploy MSI via GPO to specific users "Admin right issue"

Posted: 31 Mar 2021 07:01 PM PDT

I'm trying to deploy an MSI via GPO to specific users (120 users) from different departments and sites. The problem is that they don't have admin rights, so the application cannot be installed due to insufficient privileges.

Does anyone have an idea for getting around this problem? Thanks

How to find source of inherited permission on Exchange online mailbox?

Posted: 31 Mar 2021 09:02 PM PDT

Example:

Get-MailboxPermission -Identity "<user>"

This shows permissions with IsInherited=True. Where would this permission be inherited from in Exchange Online?

In on-premises Exchange I would use Get-MailboxDatabase and/or Get-ADPermission, but these are unavailable in Exchange Online.

There is a permission we want to remove, but can't because it's inherited:

WARNING: An inherited access control entry has been specified: [Rights: ReadControl, ControlType: Allow]
and was ignored on object "CN=<user>,OU=<organization>,OU=Microsoft Exchange Hosted Organizations,DC=<server>,DC=PROD,DC=OUTLOOK,DC=COM".

Port accessing error for a docker app on google compute engine VM instance

Posted: 31 Mar 2021 05:04 PM PDT

I'm trying to deploy a web app in a VM instance on Google Compute Engine (GCP). I connected to the instance via ssh and deployed a docker-compose orchestrated app, which runs two docker containers as below.

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS
6824c19fad9b        wordpress:latest    "docker-entrypoint.s…"   3 hours ago         Up 22 seconds       0.0.0.0:8065->80/tcp
3079c9872e3d        mysql:5.7           "docker-entrypoint.s…"   3 hours ago         Up 3 hours          3306/tcp

As in my previous deployments, I mapped the host instance's port 8065 to the wordpress container's port 80, which works fine on my local machine and some other machines. So, as you can see above, I assume Docker has done the mapping properly.

To test the setup from the instance, I run curl http://localhost:8065
and the terminal responds with curl: (52) Empty reply from server.

Since I can't make the internal mapping work, it's useless to map from outside as well. I have nevertheless created new ingress and egress firewall rules to allow tcp:8065 for this instance, still with no luck.

I'm aware that GCP recommends their Kubernetes Engine for deploying containerized apps, but switching to that is not the solution I'm looking for. I just want to find out what went wrong and how to make the current setup work on the same platform.
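An empty reply on the published port usually means the process inside the container closed the connection, rather than the port mapping being broken. Some hedged first checks, using the container ID from the listing above:

docker logs 6824c19fad9b            # Apache/WordPress startup errors?
docker exec 6824c19fad9b ps aux     # is apache2 actually running?
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 6824c19fad9b
curl -v http://<container-ip>:80/   # hit the container directly, bypassing the mapping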

AWS Windows EC2 instance does not recognize assigned IAM role

Posted: 31 Mar 2021 07:09 PM PDT

Initially I launched a brand-new Windows Server 2016 EC2 instance and assigned an S3 full admin IAM role to it when launching. I installed the CLI, started a CMD window, and typed in "aws s3 ls". It listed all my buckets. All working fine.

I then created an AMI from this instance and launched a new instance from it with that same S3 full admin IAM role. "aws s3 ls" still works.

Then, after a number of days, when I repeat the above process (launching an instance from the same AMI), "aws s3 ls" will stop working, with the following error:

Unable to locate credentials. You can configure credentials by running "aws configure".  

It has happened many times. Every time I rebuild a new Windows Server, install the CLI, and assign the S3 full admin role to the instance, it works; after a number of days, when I launch a new instance from the exact same AMI, "aws s3 ls" stops working.

It is so mysterious! Can someone shed some light on this please?
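A hedged debugging step: ask the instance metadata service directly whether the role's temporary credentials are being delivered. If the role name shows up here but the CLI still finds no credentials, something baked into the AMI (a stale profile or proxy setting, for instance) is the likelier culprit:

rem run on the instance; recent Windows ships curl, or use PowerShell's Invoke-RestMethod
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>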

mod_evasive doesn't do anything on Ubuntu server 16.04

Posted: 31 Mar 2021 05:04 PM PDT

I set up mod_evasive on Apache/2.4.18 using this guide: https://komunity.komand.com/learn/article/server-administration/how-to-configure-modevasive-with-apache-on-ubuntu-linux/

I only changed email@yourdomain.com to root@localhost.

The first time I ran test.pl it worked, but every time since it only shows HTTP/1.1 400 Bad Request. I'm not sure if I accidentally changed anything, but here's my test.pl.

#!/usr/bin/perl

# test.pl: small script to test mod_dosevasive's effectiveness

use IO::Socket;
use strict;

for(0..100) {
    my($response);
    my($SOCKET) = new IO::Socket::INET( Proto   => "tcp",
                                        PeerAddr=> "127.0.0.1:80");
    if (! defined $SOCKET) { die $!; }
    print $SOCKET "GET /?$_ HTTP/1.0\n\n";
    $response = <$SOCKET>;
    print $response;
    close($SOCKET);
}

Because it worked the first time, shouldn't there be a log of it? I checked /var/log/mod_evasive/ and it's empty. In syslog there is also no mention of mod_evasive. There is only root in /var/mail/, which hasn't received any mail from mod_evasive either.

Could it be because I'm redirecting http to https? I set up a Redirect permanent / https://mydomain.example in 000-default.conf.

HTTPS on IIS not working with domain name or IP address

Posted: 31 Mar 2021 06:03 PM PDT

Using a Windows 2012 R2 Standard server with IIS. Windows firewall has the preset rules World Wide Web Services (HTTP Traffic-In) and World Wide Web Services (HTTPS Traffic-In) enabled. The server has one web site with the following bindings:

http - empty value / any domain - 80
http - example.com - 80
https - example.com - 443
https - empty value / any domain - 443

Urls tried from external machine:
http://example.com - works
http://my.ip.address - works
https://example.com - not working
https://my.ip.address - not working

Urls tried from local server
http://example.com - works
http://localhost - works
http://my.ip.address - works
https://example.com - not working
https://localhost - works
https://my.ip.address - not working

So http works for all addresses from all locations, while https works only on the local machine via localhost. What am I missing? Do I need to open firewall rules/ports other than 443?
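The pattern (https works only as localhost on the box itself) smells less like a firewall issue and more like the certificate binding in HTTP.SYS. Two hedged checks from an elevated prompt:

rem is anything listening on 443 at all?
netstat -ano | findstr :443

rem which certificate is bound to 0.0.0.0:443 in HTTP.SYS?
netsh http show sslcert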

Chef Private Key Could Not Be Loaded from /user.pem

Posted: 31 Mar 2021 11:07 PM PDT

I just finished the install chef-server tutorial at Chef's website, using an ec2 instance for my chef-server (t2.medium Ubuntu 16.04 AMI), and my laptop for my workstation, which also runs Ubuntu 16.04.

It appears that I succeeded in setting up a chef-workstation and chef-server. However, my 'user.pem' key is not being located. This is bizarre, because my pem keys were successfully pulled from my chef-server to my chef-workstation using 'scp'; I can see them in my chef-repo directory on my workstation.

Might anyone be kind enough to help figure out why my pem key is not being located?

From my chef-workstation at:

~/chef-repo/  

I run:

knife ssl fetch  

I get:

WARNING: Certificates from ec2-XX-XX-XXX-XXX.us-west-1.compute.amazonaws.com will be fetched and placed in your trusted_cert
directory (/home/user/chef-repo/.chef/trusted_certs).

Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.

Adding certificate for ec2-XX-XX-XXX-XXX_us-west-1_compute_amazonaws_com in /home/user/chef-repo/.chef/trusted_certs/ec2-XX-XX-XXX-XXX_us-west-1_compute_amazonaws_com.crt

So now I have a:

 '/chef-repo/.chef/trusted_certs/ec2-52-53-255-252_us-west-1_compute_amazonaws_com.crt'   

file as expected.

Next I run:

knife ssl check  

I get:

Connecting to host ec2-XX-XX-XXX-XXX.us-west-1.compute.amazonaws.com:443
Successfully verified certificates from `ec2-XX-XX-XXX-XXX.us-west-1.compute.amazonaws.com'

But when I run:

knife client list  

I get:

WARN: Failed to read the private key /user.pem: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /user.pem>

Your private key could not be loaded from /user.pem
Check your configuration file and ensure that your private key is readable

My 'knife.rb' settings are:

log_level                :info
log_location             STDOUT
node_name                "user"
client_key               "#{current_dir}/user.pem"
validation_client_name   "myorg_shortname-validator"
validation_key           "#{current_dir}/myorg_shortname-validator.pem"
chef_server_url          "https://ec2-XX-XX-XXX-XXX.us-west-1.compute.amazonaws.com/organizations/myorg_shortname"
syntax_check_cache_path  "#{ENV['HOME']}/.chef/syntaxcache"
cookbook_path            ["#{current_dir}/../cookbooks"]
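One hedged observation: knife.rb is plain Ruby, and the stock generated file defines current_dir at the top. If that line is missing, Chef's config DSL quietly resolves current_dir to nil, and "#{current_dir}/user.pem" becomes exactly the /user.pem in the error. It is worth checking that this line is present above the settings:

# usually the first line of a generated knife.rb
current_dir = File.dirname(__FILE__)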

On my chef-server, my /etc/hosts, and /etc/hostname settings are both:

ip-XXX-XX-XX-XX.us-west-1.compute.internal  

Strangely enough, I had to set 'chef_server_url' in 'knife.rb' to:

ec2-XX-XX-XXX-XXX_us-west-1_compute_amazonaws_com  

as opposed to:

ip-XXX-XX-XX-XX.us-west-1.compute.internal  

or else it wouldn't fetch my keys.

What am I missing?

How to change sites-available configurations in Nginx

Posted: 31 Mar 2021 11:00 PM PDT

I am new to Linux. I want to deploy my asp.net core application on an Ubuntu 16.04 LTS virtual machine. I installed asp.net core on Ubuntu and managed to run a simple asp.net core web application. In addition, I want to set up the Nginx web server as a reverse proxy for my application, and followed this article to install Nginx. Even though Nginx installed successfully, I cannot change the following configuration in the default file of Nginx's sites-available directory, as the article explains, since the whole directory is read-only.

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}


What have I done wrong, and how do I edit this file?
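For what it's worth, everything under /etc/nginx is owned by root, so the files look read-only to a normal user; the usual routine is to edit with sudo, then validate and reload:

sudo nano /etc/nginx/sites-available/default
sudo nginx -t                  # validate the configuration
sudo systemctl reload nginx    # apply it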

Apache returns invalid Content-Length for gzip compressed 204 response

Posted: 31 Mar 2021 08:06 PM PDT

When Apache returns a gzip-compressed response with a 204 response code and an empty body, the server sends the invalid header Content-Length: 20 instead of Content-Length: 0.

Without gzip compression (without Accept-Encoding header) server returns valid header Content-Length: 0.

Request and response with compression:

0 % curl -v http://mta.dev/api/wtf/\?id\=09102 --compressed
* Hostname was NOT found in DNS cache
*   Trying 172.17.0.2...
* Connected to mta.dev (172.17.0.2) port 80 (#0)
> GET /api/wtf/?id=09102 HTTP/1.1
> User-Agent: curl/7.38.0
> Host: mta.dev
> Accept: */*
> Accept-Encoding: deflate, gzip
>
< HTTP/1.1 204 No Content
< Date: Thu, 09 Jun 2016 15:44:53 GMT
* Server Apache/2.4.7 (Ubuntu) is not blacklisted
< Server: Apache/2.4.7 (Ubuntu)
< X-Powered-By: PHP/5.5.9-1ubuntu4.17
< P3P: policyref="/bitrix/p3p.xml", CP="NON DSP COR CUR ADM DEV PSA PSD OUR UNR BUS UNI COM NAV INT DEM STA"
< X-Powered-CMS: Bitrix Site Manager (d04cd2b3dbab106e7537af3767043172)
< Set-Cookie: PHPSESSID=8arlnd14t1k97bri56clb2qhh1; path=/; HttpOnly
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Set-Cookie: BITRIX_SM_GUEST_ID=2328047; expires=Sun, 04-Jun-2017 15:44:53 GMT; Max-Age=31104000; path=/
< Set-Cookie: BITRIX_SM_LAST_VISIT=09.06.2016+18%3A44%3A53; expires=Sun, 04-Jun-2017 15:44:53 GMT; Max-Age=31104000; path=/
< Content-Encoding: gzip
< Content-Length: 20
< Content-Type: application/json
<
* Excess found in a non pipelined read: excess = 20 url = /api/wtf/?id=09102 (zero-length body)
* Connection #0 to host mta.dev left intact

Request and response without compression:

0 % curl -v http://mta.dev/api/wtf/\?id\=09102
* Hostname was NOT found in DNS cache
*   Trying 172.17.0.2...
* Connected to mta.dev (172.17.0.2) port 80 (#0)
> GET /api/wtf/?id=09102 HTTP/1.1
> User-Agent: curl/7.38.0
> Host: mta.dev
> Accept: */*
>
< HTTP/1.1 204 No Content
< Date: Thu, 09 Jun 2016 15:38:43 GMT
* Server Apache/2.4.7 (Ubuntu) is not blacklisted
< Server: Apache/2.4.7 (Ubuntu)
< X-Powered-By: PHP/5.5.9-1ubuntu4.17
< P3P: policyref="/bitrix/p3p.xml", CP="NON DSP COR CUR ADM DEV PSA PSD OUR UNR BUS UNI COM NAV INT DEM STA"
< X-Powered-CMS: Bitrix Site Manager (d04cd2b3dbab106e7537af3767043172)
< Set-Cookie: PHPSESSID=ceqsuv4ie3fkq497uvk6e2gki1; path=/; HttpOnly
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Set-Cookie: BITRIX_SM_GUEST_ID=2328047; expires=Sun, 04-Jun-2017 15:38:43 GMT; Max-Age=31104000; path=/
< Set-Cookie: BITRIX_SM_LAST_VISIT=09.06.2016+18%3A38%3A43; expires=Sun, 04-Jun-2017 15:38:43 GMT; Max-Age=31104000; path=/
< Content-Length: 0
< Content-Type: application/json
<
* Connection #0 to host mta.dev left intact

Setting the Content-Length: 0 header manually in the PHP application has no effect, because Apache recalculates the length after gzipping.

I found this bug in the Apache bugtracker, https://bz.apache.org/bugzilla/show_bug.cgi?id=51350, where a developer says the bug was fixed in version 2.4.1. I have version 2.4.7 installed and the bug still occurs.

How can I disable gzip compression for 204 responses, or for responses with an empty body? Or is there a way to stop Apache overwriting the Content-Length header?
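mod_deflate honors a no-gzip environment variable, and under mod_php the application can set it per request; a hedged sketch for the 204 code path (assuming mod_php, where apache_setenv is available):

<?php
// before emitting the 204, tell mod_deflate to leave this response alone
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1');
}
http_response_code(204);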

Dovecot dict: Can't open configuration file, Permission denied

Posted: 31 Mar 2021 04:00 PM PDT

I'm trying to set up a dovecot mysql dict for quota in a FreeBSD jail.

This is the log I'm getting:

Jan 13 10:03:23 mail dovecot: dict(71120): Error: Failed to initialize dictionary 'sqlquota': dict mysql: Can't open configuration file /usr/local/etc/dovecot/dovecot-dict-sql.conf: Permission denied

These are my file permissions:

5 -r--------   1 root  mail    353 12 Jan 16:41 dovecot-dict-sql.conf
5 -r--------   1 root  mail    526 12 Jan 17:04 dovecot-sql.conf
5 -r--r-----   1 root  mail   5531 13 Jan 09:58 dovecot.conf

This is /var/run/dovecot:

9 drwxr-xr-x   5 root     wheel     37 13 Jan 10:02 ./
9 drwxr-xr-x  11 root     wheel     20 13 Jan 09:42 ../
1 srw-------   1 root     wheel      0 13 Jan 09:42 anvil
1 srw-------   1 root     wheel      0 13 Jan 09:42 anvil-auth-penalty
1 srw-------   1 dovecot  wheel      0 13 Jan 10:02 auth-client
1 srw-------   1 dovecot  wheel      0 13 Jan 10:02 auth-login
1 srw-rw----   1 vmail    mail       0 13 Jan 10:02 auth-master
1 -rw-------   1 root     wheel     32 13 Jan 09:42 auth-token-secret.dat
1 srw-rw-rw-   1 dovecot  wheel      0 13 Jan 10:02 auth-userdb
1 srw-------   1 dovecot  wheel      0 13 Jan 10:02 auth-worker
1 srw-------   1 root     wheel      0 13 Jan 10:02 config
1 srw-rw-rw-   1 root     wheel      0 13 Jan 10:02 decode2text
1 srw-rw----   1 root     mail       0 13 Jan 10:02 dict
1 srw-------   1 root     wheel      0 13 Jan 10:02 dict-async
1 srw-------   1 root     wheel      0 13 Jan 10:02 director-admin
1 srw-rw-rw-   1 root     wheel      0 13 Jan 10:02 dns-client
1 srw-------   1 root     wheel      0 13 Jan 10:02 doveadm-server
1 lrwx------   1 root     wheel     35 13 Jan 09:42 dovecot.conf -> /usr/local/etc/dovecot/dovecot.conf
1 drwxr-xr-x   2 root     wheel      2 13 Jan 09:42 empty/
1 srw-------   1 root     wheel      0 13 Jan 10:02 imap-hibernate
1 srw-------   1 root     wheel      0 13 Jan 10:02 imap-master
1 srw-rw-rw-   1 root     wheel      0 13 Jan 10:02 imap-urlauth
1 srw-------   1 dovecot  wheel      0 13 Jan 10:02 imap-urlauth-worker
1 srw-rw-rw-   1 root     wheel      0 13 Jan 10:02 indexer
1 srw-------   1 dovecot  wheel      0 13 Jan 10:02 indexer-worker
1 srw-------   1 root     wheel      0 13 Jan 10:02 ipc
1 srw-rw-rw-   1 root     wheel      0 13 Jan 10:02 lmtp
1 srw-------   1 root     wheel      0 13 Jan 10:02 log-errors
9 drwxr-x---   2 root     dovenull   7 13 Jan 10:02 login/
1 -rw-------   1 root     wheel      6 13 Jan 09:42 master.pid
1 srw-------   1 root     wheel      0 13 Jan 10:02 replication-notify
1 prw-------   1 root     wheel      0 13 Jan 10:02 replication-notify-fifo
1 srw-------   1 dovecot  wheel      0 13 Jan 10:02 replicator
1 srw-rw-rw-   1 root     wheel      0 13 Jan 10:02 ssl-params
1 srw-------   1 root     wheel      0 13 Jan 10:02 stats
1 prw-------   1 root     wheel      0 13 Jan 10:02 stats-mail
1 drwxr-x---   2 root     dovenull   4 13 Jan 10:02 token-login/

And this is my dovecot.conf:

[…]
dict {
  sqlquota = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf
}

service dict {
  unix_listener dict {
    mode = 0660
    group = mail
  }
}
[…]

What am I missing?
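A hedged guess: the dict process does not necessarily run as root, and dovecot-dict-sql.conf is mode 400 owned by root, so any non-root reader is locked out. As an experiment, giving the mail group read access (matching dovecot.conf) should at least change the error:

chown root:mail /usr/local/etc/dovecot/dovecot-dict-sql.conf
chmod 440 /usr/local/etc/dovecot/dovecot-dict-sql.conf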

htaccess - redirect based on the request origin

Posted: 31 Mar 2021 07:01 PM PDT

Please help me out here. I would like to use .htaccess to redirect based on the request origin. For example:

User requests:

http://www.domain.com/subfolder/  

And should be redirected to:

http://www.domain.com/  

This is very simple to accomplish BUT on the homepage there is a link to:

http://www.domain.com/subfolder/   

And it should work only WHEN the request comes from that link.

In other words, my goal is to force all users through the homepage BEFORE any other URL, even if they know the page URL (bookmarked or otherwise) and it is a valid one.

Is it possible?
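The closest .htaccess approximation I can sketch uses the Referer header; note this is unreliable by design, since browsers can omit or spoof Referer, so it gates casual deep links rather than enforcing anything:

RewriteEngine On
# allow /subfolder/ only when the click came from a page on this site
RewriteCond %{REQUEST_URI} ^/subfolder/ [NC]
RewriteCond %{HTTP_REFERER} !^https?://www\.domain\.com/ [NC]
RewriteRule ^ http://www.domain.com/ [R=302,L]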

How to debug a 403 error on CentOS?

Posted: 31 Mar 2021 06:03 PM PDT

I'm trying to install phpMyAdmin and I'm getting a 403.

/etc/httpd/conf/httpd.conf

<Directory /usr/share/phpMyAdmin/>
        Order Deny,Allow
        Allow from all
</Directory>

/etc/httpd/conf.d/phpMyAdmin.conf

<Directory /usr/share/phpMyAdmin/>
   AddDefaultCharset UTF-8

   <IfModule mod_authz_core.c>
     # Apache 2.4
     <RequireAny>
       Require ip 99.232.55.0/24
     </RequireAny>
   </IfModule>

   <IfModule !mod_authz_core.c>
     # Apache 2.2
     AllowOverride All
     Order Deny,Allow
     Deny from All
     Allow from 99.232.55.96
   </IfModule>
</Directory>

This is running on CentOS 6.6 on Apache 2.2

I've tried a ton of combinations and none of these files seem to make a difference. I have a feeling another file is taking effect, but the logs say nothing that would help me find it.

The Apache ErrorLog and AccessLog give nothing of use.

I am running a Django site inside of Virtual Env at the domain root.
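Two hedged diagnostics that usually narrow this down on CentOS: dump the parsed configuration to see which files Apache actually loads, and rule out SELinux, which produces 403s that look inexplicable from httpd's own logs:

httpd -S                           # how vhosts/config were parsed
grep -ril phpmyadmin /etc/httpd/   # every config file mentioning phpMyAdmin
getenforce                         # is SELinux enforcing?
ls -Z /usr/share/phpMyAdmin        # contexts should be httpd_sys_content_t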

Active Directory control client hyper-v permissions

Posted: 31 Mar 2021 09:02 PM PDT

I've done lots of googling and the only thing I find relates to, I believe, Hyper-V server and not client Hyper-V.

The scenario is that we have a domain here at the college, and we are trying to use client Hyper-V on the Windows 8 Pro machines. The students are part of the Hyper-V Administrators group, and we tried using Authorization Manager, but nothing has worked to let students run Hyper-V Manager without being an Administrator or having an administrator lend their credentials to run it elevated. An administrator walking around running Hyper-V Manager with elevated privileges is not really convenient, so what we are looking for is a way for students to run Hyper-V Manager, load their Win2012r2 VMs, and create new VMs without administrator credentials and without making the students administrators. The students very specifically need administrative privileges only for client Hyper-V, or for client Hyper-V not to require administrative privileges.

Make an error page folder serve a 403 error to external requests

Posted: 31 Mar 2021 08:06 PM PDT

I'm fiddling about with a server, and I've made one of the subdomains a proxy for a service that isn't always up. The server block looks like:

server {
    server_name servlet.example.org;
    error_page 502 /error/down.html;

    location / {
        proxy_pass http://127.0.0.1:12510;
        proxy_redirect default;
        proxy_intercept_errors on;
    }

    location /error/ {
        root /path/to/servlet;
        autoindex off;
    }
}

This serves /path/to/servlet/error/down.html to any request when the service is down and that's great.

My issue is that I would like to make any external request to /error/ return a 403 status code, with a custom error page of its own—say forbidden.html, also to be found in the /error/ folder. The internal directive sounds like it's what I want, but that returns 404s. I can't just override 404 errors on the whole server to a 403 with error_page, because the service may return 404s of its own and I'd like to preserve that.

Is this possible? How would I go about it? I have tried seemingly meaningful combinations of internal and error_page but can't get anywhere.

Barring that, can I at least serve a 403 to anything that would otherwise 404 in /error/? I.e. down.html and forbidden.html show up normally, but anything else gets a 403 and displays forbidden.html.
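An untested sketch of that fallback: marking /error/ internal makes nginx answer external hits with 404, and an error_page inside the same location can remap that 404 to a 403 served by forbidden.html (error_page redirects are internal, so the error documents themselves should still be reachable):

location /error/ {
    internal;                                    # external requests get 404...
    root /path/to/servlet;
    autoindex off;
    error_page 404 =403 /error/forbidden.html;   # ...remapped to 403 with a body
}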

vmware thin disk usage powercli

Posted: 31 Mar 2021 10:04 PM PDT

I want to ask a question about thin provisioning. The Get-VM cmdlet can easily give us the total real space used by a VM. Assume you have a virtual machine with more than one thin disk: if we want more detail, so as to calculate the real used space of each disk, which PowerCLI command does this? I'd prefer not to get it from the datastore browser, for performance reasons.
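A hedged sketch via the vSphere API's per-file layout, which PowerCLI exposes through ExtensionData; diskExtent entries are the actual flat/delta files, with sizes in bytes (the VM name is a placeholder):

# per-vmdk on-disk usage, without touching the datastore browser
(Get-VM -Name "myvm").ExtensionData.LayoutEx.File |
    Where-Object { $_.Type -eq "diskExtent" } |
    Select-Object Name, @{N="SizeGB"; E={[math]::Round($_.Size / 1GB, 2)}}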

Apache forces Cache-Control: private automatically for HTTPS requests

Posted: 31 Mar 2021 11:00 PM PDT

I'm trying to get browsers to cache assets over HTTPS. I am using MD5 fingerprinting method to allow long-term caching and I have this part working OK.

What doesn't work is setting the Cache-Control headers in Apache.

My config for both regular and SSL vhost contains:

ExpiresActive On
ExpiresByType text/css "now plus 1 year"

HTTP request to /test.css produces headers:

Cache-Control: max-age=31536000
Content-Type: text/css
Date: Wed, 15 May 2013 10:33:01 GMT
Etag: "7e572-19-4dcbdc8c04529"
Expires: Thu, 15 May 2014 10:33:01 GMT
Last-Modified: Wed, 15 May 2013 08:46:21 GMT
Server: Apache/2.2.15 (Oracle)
Vary: Accept-Encoding,User-Agent

But HTTPS request to same file produces headers:

Cache-Control: private, must-revalidate, no-cache, no-store
Content-Type: text/css
Date: Wed, 15 May 2013 10:33:58 GMT
Etag: "7e572-19-4dcbdc8c04529"
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Last-Modified: Wed, 15 May 2013 08:46:21 GMT
Server: Apache/2.2.15 (Oracle)
Vary: Accept-Encoding,User-Agent

BTW, adding this right after the ExpiresByType:

Header unset Expires
Header unset Cache-Control

removes these headers from the HTTP response, but not from the HTTPS one.

Also, I have verified that any other header I set gets passed through, but not cache-related headers like Cache-Control or Expires; these get overwritten somewhere.

Is this normal Apache behavior, or some Oracle or Red Hat patch that aims at security?

Can this be turned off somehow?
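A hedged way to find the culprit: it is worth confirming whether some loaded config fragment (often an ssl.conf or a vendor snippet) is setting these headers for the SSL vhost, since grepping the whole config tree usually locates the responsible directive quickly:

# find every loaded directive that touches caching headers
grep -rin 'Cache-Control\|ExpiresActive\|Header .*Expires' /etc/httpd/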

System info:

OS: Oracle Linux 6.4 (RHEL 6.4 based)
Apache: 2.2.15 (from rpm)

Preferred format of file names which include a timestamp

Posted: 31 Mar 2021 06:28 PM PDT

As we all know, "unix" can have anything in a file name except '/' and '\0'; sysadmins, however, tend to prefer a much smaller set, mainly because nothing likes spaces as input ... and a bunch of things give special meaning to ':' and '@', among others.

Recently I'd seen yet another case where a timestamp was used in a filename, and after playing with different formats a bit to make it "better", I figured I'd try to find a "best practice". Not seeing one, I figured I'd just ask here and see what people thought.

Possible "common" solutions (p=prefix and s=suffix):

  1. syslog/logrotate/DNS like format:

    p-%Y%m%d-suffix = prefix-20110719-s
    p-%Y%m%d%H%M-suffix = prefix-201107191732-s
    p-%Y%m%d%H%M%S-suffix = prefix-20110719173216-s

    pros:

    • It's "common", so "good enough" might be better than "best".
    • No weird characters.
    • Easy to distinguish the "date/time blob" from everything else.

    cons:

    • The date only version isn't easy to read, and including the time makes my eyes bleed and seconds as well is just "lol".
    • Assumes TZ.
  2. ISO-8601 format

    p-%Y-%m-%d-s = p-2011-07-19-s
    p-%Y-%m-%dT%H:%M%z-s = p-2011-07-19T17:32-0400-s
    p-%Y-%m-%dT%H:%M:%S%z-s = p-2011-07-19T17:32:16-0400-s
    p-%Y-%m-%dT%H:%M:%S%z-s = p-2011-07-19T23:32:16+0200-s

    pros:

    • No spaces.
    • Takes TZ into account.
    • Is "not bad" to read by humans (date only is v. good).
    • Can be generated by $(date --iso={hours,minutes,seconds})

    cons:

    • scp/tar/etc. won't like those ':' characters.
    • Takes a bit for "normal" people to see WTF that 'T' is for, and what the thing at the end is :).
    • Lots of '-' characters.
  3. rfc-3339 format

    p-%Y-%m-%d-s = p-2011-07-19-s
    p-%Y-%m-%d %H:%M%:z-s = p-2011-07-19 17:32-04:00-s
    p-%Y-%m-%d %H:%M:%S%:z-s = p-2011-07-19 17:32:16-04:00-s
    p-%Y-%m-%d %H:%M:%S%:z-s = p-2011-07-19 23:32:16+02:00-s

    pros:

    • Takes TZ into account.
    • Can easily be read by "all humans".
    • Can distinguish date/time from prefix/suffix.
    • Some of the above can be generated with $(date --rfc-3339={date,seconds})

    cons:

    • Has spaces in the time versions (which means all code will hate it).
    • scp/tar/etc. won't like those ':' characters.
  4. I love hyphens:

    p-%Y-%m-%d-s = p-2011-07-19-s
    p-%Y-%m-%d-%H-%M-s = p-2011-07-19-17-32-s
    p-%Y-%m-%d-%H-%M-%S-s = p-2011-07-19-23-32-16-s

    pros:

    • basically a slightly nicer syslog/etc. variant.

    cons:

    • Lots of '-' characters.
    • Assumes TZ.
  5. I love hyphens, with extensions:

    p.%Y-%m-%d.s = p.2011-07-19.s
    p.%Y-%m-%d.%H-%M.s = p.2011-07-19.17-32.s
    p.%Y-%m-%d.%H-%M-%S.s = p.2011-07-19.23-32-16.s

    pros:

    • basically a slightly nicer "I love hyphens" variant.
    • No weird characters.
    • Can distinguish date/time from prefix/suffix.

    cons:

    • Using '.' here is somewhat non-traditional.
    • Assumes TZ.

...so, anyone want to give a preference and a reason, or more than one (e.g. don't care about TZ if it's 95+% sure to stay machine local, but care a lot if it isn't)?

Or, obviously, something not in the above list.
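For concreteness, a shell sketch of how a couple of the candidates are generated (GNU date; the prefix/suffix values here are placeholders):

# format 1 (syslog-like) and format 5 (hyphens with '.' separators)
f1="prefix-$(date +%Y%m%d%H%M%S)-s"
f5="p.$(date +%Y-%m-%d.%H-%M-%S).s"

# ISO-8601 with numeric TZ offset, ':' dropped for scp/tar friendliness
fiso="p-$(date +%Y-%m-%dT%H%M%S%z)-s"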
