Wednesday, February 23, 2022

Recent Questions - Server Fault

My Ubuntu server keeps losing its internet connection while the LAN connection works fine

Posted: 23 Feb 2022 01:47 PM PST

I have an Ubuntu server that has two ethernet connections. One goes through a wall port to the campus network, and the other goes to a LAN switch. While working from home, I will occasionally get disconnected from the server, and attempts to reconnect time out. I can connect to another server on the LAN, and from there ssh into the server. Once logged in, the internet connection can be restored with this command (left by a previous member of our team):

sudo ifmetric eno1 50  

This works great, until it happens again, sometimes within an hour, sometimes not until next week. Does anyone know what is happening, and how to permanently fix this?
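
If a permanent fix means anything like what I suspect, it would be pinning the route metrics in the interface configuration itself rather than re-running ifmetric. A minimal sketch, assuming Ubuntu with netplan and that eno1 is the campus-facing NIC (file name, second NIC name, and metric values are placeholders, not our actual config):

# /etc/netplan/01-metrics.yaml -- sketch only, adjust to the real setup
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 50     # prefer this NIC for the default route
    eno2:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 200    # the LAN NIC should lose the tie

Applied with sudo netplan apply.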

How to activate async option on NFS V4 Debian

Posted: 23 Feb 2022 01:43 PM PST

Performance with NFS writes is absolutely dreadful (52sec for writing 10 small files)

EDIT: ls is sometimes ultra fast, but at other times it takes around 20 seconds to list a dozen files!!! which means that "reads" are also a huge problem here...

I tested with the generation + write of a dozen thumbnails (for the same 500KB source image, of course), originating from two client servers:

  1. An Ubuntu 21.10 VM on my home computer (NOT inside a VPN)
  • ping to nfs server: 30ms
  • local test duration: 2.5s
  • nfs test duration: 8s ==> 6s performance impact
  2. A Debian 11 server (VPN-linked using WireGuard to the test NFS server)
  • ping to nfs server: 110ms
  • local test duration: 2.3s
  • nfs test duration: 52s ==> 50s performance impact

(the server and home computer are in western Europe; the Debian client is in the eastern USA)

The performance impact is not caused by WireGuard, as I ruled out this possibility by also mounting the NFS drive using the public IP of the NFS server (therefore bypassing WireGuard), and the result was exactly the same.

Here is /etc/exports from the server:

/export/nfs 192.168.1.0/24(rw,fsid=0,crossmnt,no_subtree_check,insecure,anonuid=33,anongid=33,all_squash,async)  

(I skipped the line with my home computer's public IP address, as it uses the same options)

Maybe I should add that I followed a tutorial that advised using a bind mount from another directory beforehand to create the export: mount --bind /var/nfs /export/nfs

And here is how I mounted the NFS drive on the client:

mount -t nfs -o async,noatime,nodiratime,noacl,nocto,vers=4 192.168.1.2:/ /mnt/nfs  

Please notice the presence of the async option.

And the result of /proc/mounts:

192.168.1.2:/ /mnt/nfs nfs4 rw,noatime,nodiratime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,nocto,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.11,local_lock=none,addr=192.168.1.2 0 0  

Here, as you can see, the async option does not seem to have been kept, which I guess is the cause of the poor performance.

Why is the option not saved? I really need the performance to be better than 52 seconds for transferring 1 lousy MB of data... (of course, using rsync to upload/download files is super fast, which means it's not caused by either the internet connection between any of these 3 machines or by the NFS server drive write speed)

Another weird thing is that my files are around 100KB each (some are less, some are more), and wsize seems to cover them completely, which should make the latency/ping irrelevant (even without the async option, I mean). Well, it should add 100ms per file, which means 1s total... not 50 seconds...
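
For reference, the measurement itself boils down to something like this crude loop (placeholder paths; my real test generates thumbnails, but the write pattern is the same):

# write a dozen ~100KB files onto the NFS mount and time it
time (
    for i in $(seq 1 12); do
        dd if=/dev/urandom of=/mnt/nfs/thumb_$i.jpg bs=100K count=1 status=none
    done
    sync
)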

I spent hours searching the internet, but to no avail. Maybe I don't use the right keywords, but "nfs async debian" seems fine...

The performance on Ubuntu is still not good enough; I would maybe accept a 1-second delay for using a network drive (especially with async), but not 6 seconds. The performance on Debian is just unbearable... I wonder why there is so much difference.

Just for the record:

apt-cache policy nfs-common
nfs-common:
  Installed: 1:1.3.4-6

(for both Debian and Ubuntu)

Thank you a lot in advance, I am really at a loss here!

Best,

Use Nginx for audit logs

Posted: 23 Feb 2022 01:30 PM PST

I have a sensitive webapp used only internally. I want to log all the actions of my users for 90 days.

To achieve that, I'm using an nginx reverse proxy that forwards all the requests to the webapp.

I have the following configuration

log_format postdata $request_body;

server {
    access_log  /var/log/nginx/access-post.log  postdata;

    location / {
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass https://sensitive-app/;
    }
}

But I'm only getting logs like this, without the JSON body of the requests

83.199.111.11 -  [23/Feb/2022:20:17:00 +0000] "POST /rts/?EIO=4&transport=polling HTTP/1.1" 200 189 "https://myapp.com/applications/61f/pages/61/edit" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36" ""  
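
For completeness, this is the log_format variant I am experimenting with while debugging; as far as I understand, $request_body is only populated once nginx has actually read the body for the proxied request, and escape=json keeps the body on one line (a sketch, not a confirmed fix):

log_format postdata escape=json '$remote_addr [$time_local] "$request" body=$request_body';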

What is the best way in 2022 to use Nginx for audit logging? Is there any better tool to achieve that?

error: can not detect sig_atomic_t size at the configure step for make in ngx_http_proxy_connect_module on some computers

Posted: 23 Feb 2022 01:35 PM PST

I followed steps from https://github.com/chobits/ngx_http_proxy_connect_module

wget http://nginx.org/download/nginx-1.9.3.tar.gz -p
tar -xzvf nginx-1.9.3.tar.gz
cd nginx-1.9.3/
patch -p1 < /tmp/ngx_http_proxy_connect_module/patch/proxy_connect_rewrite_1018.patch/proxy_connect.patch
./configure --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/usr/local/nginx/nginx.pid --error-log-path=/var/log/nginx/nginx_error.log --http-log-path=/var/log/nginx/nginx_access.log --pid-path=/var/run/nginx.pid --with-http_ssl_module --add-module=/nginx_test/ngx_http_proxy_connect_module

And getting

...
checking for sig_atomic_t ... found
checking for sig_atomic_t size ...
./configure: error: can not detect sig_atomic_t size

https://trac.nginx.org/nginx/ticket/1539#no1 was already applied.
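
For what it's worth, nginx's configure script logs the compiler error behind each failed feature test to objs/autoconf.err, so that is where I have been looking for the underlying cause on the failing machines:

# run from the nginx source directory after the failed ./configure
tail -n 40 objs/autoconf.err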

But on other servers it works. What is the reason for this, and how can I fix it?

Wireguard won't tunnel all traffic to server

Posted: 23 Feb 2022 12:21 PM PST

I'm having a heck of a time getting WG to tunnel all my traffic back to the server. I thought it would be a simple one line process, but it isn't. I've installed the latest version, removed, reinstalled, done just about everything. iptables changes are made in the server, too, but it isn't even getting that far. It's just not routing to wg0. If I try to manually add the route, it says it's already there. What am I missing?

wg0.conf

[Interface]
Address = 172.20.3.9/32
PrivateKey =

[Peer]
PublicKey =
Endpoint = 18.x.x.x:51820
AllowedIPs = 0.0.0.0/0,::/0
PersistentKeepalive = 25
Route tables on the client:

route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eno1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eno1

ip route show table main
default via 192.168.1.1 dev eno1 proto static
192.168.1.0/24 dev eno1 proto kernel scope link src 192.168.1.62
wg show on the client:

interface: wg0
  public key:
  private key: (hidden)
  listening port: 39804
  fwmark: 0xca6c

peer:
  endpoint: 18.x.x.x:51820
  allowed ips: 0.0.0.0/0, ::/0
  latest handshake: 38 seconds ago
  transfer: 20.05 KiB received, 33.70 KiB sent
  persistent keepalive: every 25 seconds
Console output when it starts:

wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 172.20.3.9/32 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] wg set wg0 fwmark 51820
[#] ip -6 route add ::/0 dev wg0 table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n
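
Since wg-quick implements "all traffic" with policy routing rather than the main table (which is why route -n looks unchanged above), these are the checks I have been using to inspect that side (plain iproute2):

# expect a 'not from all fwmark 0xca6c lookup 51820' rule
ip rule show

# the dedicated table should hold the default route via wg0
ip route show table 51820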

Benchmarking AWS outbound Internet bandwidth (egress) "up to 25 Gbps"

Posted: 23 Feb 2022 12:34 PM PST

We conducted our tests on c6gn.2xlarge AWS instances located in the us-east-1 region, which are advertised in the AWS documentation as having a network performance of "Up to 25 Gbps" with a baseline bandwidth of 12.5 Gbps.

We ran UDP tests with iperf3, from a client VM in Europe, outside the AWS network.

On the server side: iperf3 -s -p 45000

On the client side: iperf3 -c <server_public_IPv4> -p 45000 -u -i 1 -b 500M -P 5 -R -t 3600

(sending 5 parallel streams of 500 Mbps each for 1 hour, reporting every second)

After a few minutes (depending on previous usage), the bandwidth will collapse to 250 Mbps, and 90% of packets will get lost.

Yes, that's 1/100th of the advertised bandwidth.

Has anyone experienced similar behaviour?

Are you aware of other limitations at the VPC level, rather than per instance?

[chart: iperf3 UDP test towards an AWS c6gn instance showing the network degradation]
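
In case someone wants to check instance-level throttling specifically: my understanding is that the ENA driver exports counters that increment when an instance exceeds its bandwidth/pps allowances, which would separate per-instance limits from anything at the VPC level (assuming the ENA driver is in use):

# on the instance; non-zero allowance-exceeded counters indicate shaping
ethtool -S eth0 | grep -i allowance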

How to hide restricted nginx subdomains?

Posted: 23 Feb 2022 12:35 PM PST

To hide a restricted location, e.g.

location /secret/ {
  allow 10.0.0.0/24;
  deny all;
}

one could set

error_page 403 =404 /404.html;
error_page 404 /404.html;

to make it impossible to distinguish a non-existing location (404) from a restricted one (403).

Is there a way to perform a similar spoof for subdomains?

I want https://admin.example.org/, which normally returns 403 if not visited via VPN, to show the same as https://nonexistingsubdomain.example.org/, e.g. a .html page with a redirect to https://example.org/.
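
To illustrate what I mean: for a subdomain that isn't configured, the request lands in nginx's default server, so I imagine the answer involves a catch-all block along these lines (a sketch of my current thinking; certificate paths are placeholders):

# catch-all for any Host not matched by another server block
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/ssl/wildcard.crt;
    ssl_certificate_key /etc/nginx/ssl/wildcard.key;
    return 404;
}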

CloudFront gives 403 error when accessing a web app hosted outside AWS through the configured subdomain

Posted: 23 Feb 2022 12:02 PM PST

I've been tasked with setting up our web app on CloudFront. Our web app is hosted on an Ubuntu server that is completely outside AWS.

I have little to no experience with CDNs, but I've made some decent progress on it. Unfortunately, the docs are unhelpful because most of them assume you're using S3, especially for hosting a static site or something to that effect.

So, here is what is unique about our setup:

  • We originally used Cloudflare (not CloudFront) and our DNS is still ultimately hosted with them.
  • I've updated the Cloudflare DNS entries with NS records that point to Route 53. So now Route 53 handles DNS for the subdomain I'm working with, and points us toward the CloudFront distribution domain instead.
  • I've created a distribution for the subdomain (let's say app.example.com), and requested a public SSL/TLS certificate, which I believe I have now installed and configured correctly. (The reason I say this is that I was originally getting privacy errors in Chrome when visiting app.example.com, but this error went away after I figured out the SSL/TLS certificate part.)

Now, what is happening is that when I visit app.example.com I get a 403 error that reads:

The request could not be satisfied.

Bad request. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner. If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.

CloudFront is having issues talking to the origin server.

I'm not sure if the issue is possibly a secondary SSL/TLS certificate issue (i.e. do I need to install another cert on the Ubuntu box? It already uses letsencrypt. Does it need to be the public certificate I requested from AWS or a new one?).

Or, is it possible that the DNS setup is somehow making it impossible for CloudFront to know how to even find the origin server? (After all, the DNS for app.example.com points us to CloudFront, so how is CloudFront supposed to know how to find the origin server?) Having never worked with CDNs before, I'm a bit confused.

So far every troubleshooting guide assumes the 403 error is coming from an incorrect S3 bucket policy or something like that, but again, I'm not using S3 to serve the web app.
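
For anyone checking my DNS reasoning, this is how I have been verifying the chain (dig commands; example.com stands in for our real domain, and origin.example.com is a hypothetical name for the Ubuntu box):

# the subdomain should be delegated from Cloudflare to Route 53
dig NS app.example.com +short

# the subdomain should resolve to the CloudFront distribution
dig app.example.com +short

# whatever hostname the distribution's origin points at must still
# resolve to the Ubuntu box -- and it cannot be app.example.com itself
dig origin.example.com +short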

finding and enabling LAN DNS

Posted: 23 Feb 2022 12:00 PM PST

While looking into mDNS for an IoT device I was making, I discovered that on my home network, if I typed in hostname.lan, the Chrome browser would resolve the address to the appropriate local device with that hostname. When I ran nmap to scan for local devices, it would also provide this information in the output.

Example:

$: nmap -p 80 192.168.0.100

Nmap scan report for hostname.lan (192.168.0.100)
Host is up (0.00031s latency).
PORT   STATE SERVICE
80/tcp open  HTTP

I like this feature, but I am annoyed because it randomly stops for days at a time.

At first, I assumed this was something my router was doing (a TP-Link Archer C9), but I have not been able to find a place to enable or disable this setting, nor any documentation about it on the internet. Then I thought maybe it was a program running on a Raspberry Pi, but I have not been able to turn it on and off with any programs from my RPis. I've spent a lot of time spinning my wheels searching the internet for what program creates the .lan domain, with very little luck.

My question is: how do I figure out who is resolving these addresses? Or, in the sad situation where this never returns, how do I set up a DNS server that automatically resolves hostnames to IP addresses on my LAN?
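
I assume narrowing this down starts with checks like the following (standard resolver tools; 192.168.0.1 stands in for my router):

# which server answered? nslookup prints the responding resolver
nslookup hostname.lan

# what resolver is this machine actually configured to use?
cat /etc/resolv.conf

# ask the router directly, to see if it is the one serving .lan names
dig hostname.lan @192.168.0.1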

Reverse IP lookup with docker?

Posted: 23 Feb 2022 11:25 AM PST

I am trying to build a docker container that advertises an SMB share, then connects to a remote host and tells that host to connect to the SMB share. In my case the remote host is across a VPN tunnel (though it could be accessible via a different interface when I'm on the same LAN as the endpoint), and it can reach my machine's tunnel IP, but I want a programmatic way of passing my machine's IP to the container.

I can get the result I want on Windows running Docker Desktop via pathping per this example, but I want to do this in a way that is OS-independent, as this will also be used on Macs and potentially on Linux machines. Is there any way to get the tunnel IP of the host directly from within the docker container?

EDIT: Another note, if (from the container) I curl a webserver that is running on the same subnet as my target host and look at the logs, I see the request comes from the exact IP that I want to pass as an argument to the command I am running in the container. Not sure if that helps, but I don't have a way of accessing logs on the target device and the web server won't always be up and accessible.
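
For what it's worth, the closest built-in mechanism I know of is Docker's host.docker.internal name: it resolves out of the box on Docker Desktop (Windows/Mac), and on Linux it can be mapped per container (Docker 20.10+), though I don't know whether it would give me the tunnel IP rather than the bridge gateway:

# Linux: map host.docker.internal to the host gateway for this container
docker run --rm --add-host=host.docker.internal:host-gateway alpine \
    cat /etc/hosts    # the added mapping shows up in the container's hosts file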

Why would cloudinit resort to using iid-datasource-none?

Posted: 23 Feb 2022 10:36 AM PST

I had my SSH host key reset by GCE, and found that

/var/lib/cloud/instances/iid-datasource-none  

was created.

https://cloudinit.readthedocs.io/en/latest/topics/datasources/fallback.html?highlight=iid-datasource-none

is not enlightening as to cause or prevention. Does anyone know how this aspect of cloud-init works?
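
For context, this is what I know to check on the instance (standard cloud-init tooling, shown in case someone wants the output):

# which datasource was used, and did boot error out?
cloud-init status --long

# the datasource probing/fallback decision is logged here
grep -i datasource /var/log/cloud-init.log | head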

JDK Mission Control vs Flight Recorder

Posted: 23 Feb 2022 10:11 AM PST

Sorry, newb...

I'm just trying to figure out if it's worth figuring out how to connect JMC remotely to a server to look at a JVM issue... If I use Flight Recorder to record on the server, is a live JMC connection basically the same thing as taking the resulting recording file and loading it into the JMC UI?
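
For reference, the file-based workflow I am considering instead of a live connection (standard jcmd invocations, assuming a reasonably recent JDK; <pid> is the server JVM's process id):

# start a 2-minute recording in the running JVM, dumped to a file
jcmd <pid> JFR.start duration=120s filename=/tmp/recording.jfr

# afterwards, copy /tmp/recording.jfr off the server and open it in JMC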

How does dovecot perform the hash comparison process without using a salt in the password

Posted: 23 Feb 2022 09:48 AM PST

I have a mail server using Dovecot and Postfix with MySQL.

I insert an email user through the following SQL statement:

insert into users(email, password) values('dave@example.com',ENCRYPT('secret'))  

Besides, the file /etc/dovecot/dovecot-sql.conf.ext contains:

driver = mysql
connect = host=127.0.0.1 port=3306 dbname=mail user=mail_admin password=password
default_pass_scheme = CRYPT
password_query = SELECT email as user, password FROM users WHERE email='%u';

Also, using Thunderbird, I was able to add an email account successfully to test.

The question is: how does Dovecot perform the hash comparison in the email authentication process if there's no salt involved in the INSERT statement?
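
For background on where my confusion comes from: as far as I understand, crypt()-style hashes embed the salt in the stored string itself (for classic DES crypt, the first two characters), so a verifier can re-hash the candidate password with the stored salt. A quick illustration using Perl's crypt() with a fixed salt:

# same password + same salt => identical hash; the first two characters
# of the output are the salt ("ab" here)
perl -e 'print crypt("secret", "ab"), "\n"'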

Use nginx location blocks with Shinyproxy

Posted: 23 Feb 2022 10:36 AM PST

I recently successfully deployed a ShinyProxy + app using SSL with nginx and certbot in the following manner:

  1. Dockerize ShinyProxy + app and launch on port 127.0.0.1:5001.
  2. Create Nginx config and proxy_pass to 127.0.0.1:5001.
  3. Secure using certbot.

This is the successful nginx.conf location section:

location / {
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;
    proxy_set_header        Upgrade $http_upgrade;
    proxy_set_header        Connection "upgrade";

    proxy_http_version 1.1;
    proxy_redirect off;

    proxy_read_timeout  90s;
    proxy_pass          http://127.0.0.1:5001;
}

This nicely redirects me to https://app.myweb.com/login, as I have set up a CNAME. Important to note: ShinyProxy redirects to the login page automatically. On successful login the URL redirects to https://app.myweb.com/app/website.

What I really struggle with is the following: adding a location block, or as I understand it, including my upstream block in my downstream (correct my terms if I am wrong). That is, having my URL go from https://app.myweb.com/login to https://app.myweb.com/dashboard/login using the following configuration in nginx:

location /dashboard/ { # THIS IS WHAT I WANT TO ADD
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;
    proxy_set_header        Upgrade $http_upgrade;
    proxy_set_header        Connection "upgrade";

    proxy_http_version 1.1;
    proxy_redirect off;

    proxy_read_timeout  90s;
    proxy_pass          http://127.0.0.1:5001;
}

All that happens is: if I type https://app.myweb.com/dashboard/, it doesn't go to https://app.myweb.com/dashboard/login as I would expect, but redirects back to https://app.myweb.com/login, which 404s.

Any advice on what I am doing wrong?
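
One thing I am experimenting with, in case my mental model of nginx prefix handling is the issue: with a URI (trailing slash) on the proxy_pass target, nginx replaces the matched prefix, so /dashboard/login would reach ShinyProxy as /login. A sketch of that variant, not something I have working yet (ShinyProxy's own redirects presumably still need a matching context path on its side):

location /dashboard/ {
    # trailing slash: /dashboard/login is forwarded upstream as /login
    proxy_pass http://127.0.0.1:5001/;
    # (same proxy_set_header / websocket lines as in the working block above)
}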

.htaccess mod_rewrite not catching all RewriteRules

Posted: 23 Feb 2022 10:59 AM PST

There is a PHP application with a PHP router, placed inside index.php, as the entry point for all requests. I am trying to write a .htaccess file to forward every request to index.php except for API requests and design resources. So I am trying to obtain the following behavior:

  1. example.com/api/v1/* should serve api_v1.php
  2. example.com/any_path/resource.css => should serve resource.css if it exists (there are multiple extensions allowed; .css is just one example)
  3. serve index.php for anything that did not match the conditions above

Given that .htaccess is evaluated from top to bottom, from specific to general conditions, and that the [L] flag stops execution once a rule matches, I have come up with the following .htaccess:

RewriteEngine On

# Prevent 301 redirect with slash when folder exists and does not have slash appended
# This is not a security issue here since a PHP router is used and all the paths are redirected
DirectorySlash Off

#1. Rewrite for API url
RewriteRule ^api/v1/(.*)$ api_v1.php [L,NC]

#2. Rewrite to index.php except for design/document/favicon/robots files that exist
RewriteCond %{REQUEST_URI} !.(css|js|png|jpg|jpeg|bmp|gif|ttf|eot|svg|woff|woff2|ico|webp|pdf)$
RewriteCond %{REQUEST_URI} !^(robots\.txt|favicon\.ico)$ [OR]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [L]

#3. Rewrite anything else
RewriteRule ^(.*)$ index.php [L]

Using the code above, it seems that accessing example.com/api/v1/ does not execute api_v1.php. Instead, it continues and executes index.php.

example.com/api/v1/ only works if I remove all conditions after line 8.

What am I doing wrong here?
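
One suspicion I want to flag, in case it helps: as I understand per-directory rewrites, [L] only ends the current pass, and mod_rewrite then restarts with the rewritten URL, so after #1 maps the request to api_v1.php, rule #3 would happily rewrite api_v1.php to index.php on the second pass. The guard I am testing adds a file-exists condition to the catch-all (a sketch, not yet verified):

#3. Rewrite anything else -- but leave already-rewritten real files alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [L]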

How does F5 BigIP route packets inside/among its route domains?

Posted: 23 Feb 2022 10:59 AM PST

How does F5 BigIP route packets inside/among its route domains?

I have an F5 BigIP device. On that BigIP, I created a test partition called test123, plus the route domains, VLANs, and self IPs of that test123 partition. It looks like this:

Phenomenon Description:

[root@bigip:Active:Standalone] partitions # cat test123/bigip_base.conf
#TMSH-VERSION: 14.1.0

net route-domain /test123/test111 {
    id 111
    strict disabled
    vlans {
        /test123/test111
    }
}
net route-domain /test123/test321 {
    id 321
    strict disabled
    vlans {
        /test123/test321
    }
}
net self /test123/test111 {
    address 172.168.111.111%111/24
    allow-service all
    traffic-group /Common/traffic-group-local-only
    vlan /test123/test111
}
net self /test123/test321 {
    address 172.168.32.32%321/24
    allow-service all
    traffic-group /Common/traffic-group-local-only
    vlan /test123/test321
}
net vlan /test123/test111 {
    interfaces {
        1.1 {
            tagged
        }
    }
    sflow {
        poll-interval-global no
        sampling-rate-global no
    }
    tag 111
}
net vlan /test123/test123 {
    interfaces {
        1.1 {
            tagged
        }
    }
    sflow {
        poll-interval-global no
        sampling-rate-global no
    }
    tag 123
}
net vlan /test123/test321 {
    interfaces {
        1.1 {
            tagged
        }
    }
    sflow {
        poll-interval-global no
        sampling-rate-global no
    }
    tag 321
}
net fdb vlan /test123/test111 { }
net fdb vlan /test123/test123 { }
net fdb vlan /test123/test321 { }

As you can see, that is the configuration of partition test123.

I create a tagged VLAN named vlan111 with tag 111 and a route domain called test111 with domain id 111 that uses vlan111, and finally I bind a self IP 172.168.111.111%111/24 on vlan111.

Similarly, I create a tagged VLAN named vlan321 with tag 321 and a route domain called test321 with domain id 321 that uses vlan321, and bind a self IP 172.168.32.32%321/24 on vlan321.

So now I have the self IPs 172.168.111.111%111 and 172.168.32.32%321.

Then I ssh to my BigIP terminal and ping each IP locally, like below:

# I am in the default route-domain; ping both IPs without domain id
# cannot reach.

[root@bigip:Active:Standalone] partitions # ping -W 5 -c 3 172.168.111.111
PING 172.168.111.111 (172.168.111.111) 56(84) bytes of data.

--- 172.168.111.111 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

[root@bigip:Active:Standalone] partitions # ping -W 5 -c 3 172.168.32.32
PING 172.168.32.32 (172.168.32.32) 56(84) bytes of data.

--- 172.168.32.32 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

# ping with the route domain, they can be reached

[root@bigip:Active:Standalone] partitions # ping -W 5 -c 3 172.168.111.111%111
PING 172.168.111.111%111 (172.168.111.111%111) 56(84) bytes of data.
64 bytes from 172.168.111.111%111: icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from 172.168.111.111%111: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 172.168.111.111%111: icmp_seq=3 ttl=64 time=0.043 ms

--- 172.168.111.111%111 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.039/0.041/0.043/0.005 ms

[root@bigip:Active:Standalone] partitions # ping -W 5 -c 3 172.168.32.32%321
PING 172.168.32.32%321 (172.168.32.32%321) 56(84) bytes of data.
64 bytes from 172.168.32.32%321: icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from 172.168.32.32%321: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 172.168.32.32%321: icmp_seq=3 ttl=64 time=0.033 ms

--- 172.168.32.32%321 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.032/0.034/0.039/0.007 ms

# Switch to route domain 111, then 172.168.111.111 can be reached.

[root@bigip:Active:Standalone] partitions # rdsh 111

[root@bigip:Active:Standalone] partitions # ping -W 5 -c 3 172.168.111.111
PING 172.168.111.111 (172.168.111.111) 56(84) bytes of data.
64 bytes from 172.168.111.111: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 172.168.111.111: icmp_seq=2 ttl=64 time=0.025 ms
64 bytes from 172.168.111.111: icmp_seq=3 ttl=64 time=0.035 ms

--- 172.168.111.111 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.025/0.029/0.035/0.004 ms

[root@bigip:Active:Standalone] partitions # ping -W 5 -c 3 172.168.32.32
connect: Network is unreachable

# Ping the other route domain's IP; it needs %route-domain-id
[root@bigip:Active:Standalone] partitions # ping -W 5 -c 3 172.168.32.32%321
PING 172.168.32.32%321 (172.168.32.32%321) 56(84) bytes of data.
64 bytes from 172.168.32.32%321: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 172.168.32.32%321: icmp_seq=2 ttl=64 time=0.029 ms
64 bytes from 172.168.32.32%321: icmp_seq=3 ttl=64 time=0.021 ms

--- 172.168.32.32%321 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.021/0.033/0.050/0.013 ms

My question: the ICMP packets flow between different subnets and route domains without any static gateway configured.

What is the flow (process/mechanism) of F5 BigIP internal packet routing among different route domains?

I tried to figure this out by tracing the route between the different subnets.

# I switch to 321 route domain
[root@bigip:Active:Standalone] config # rdsh 321

# in 321 route domain net space, show the route table; no route to the 172.168.111.0/24 network.
[root@bigip:Active:Standalone] config # ip r
127.1.1.0/24 dev if3  proto kernel  scope link  src 127.1.1.254
172.168.32.0/24 dev if5  proto kernel  scope link  src 172.168.32.32

# trace the route; bigip.hostname is the hostname mapped to IP 172.168.111.111
[root@bigip:Active:Standalone] etc # tmsh run util traceroute 172.168.111.111%111
traceroute to 172.168.111.111 (172.168.111.111), 30 hops max, 60 byte packets
 1  bigip.hostname (172.168.111.111)  0.047 ms  0.009 ms  0.008 ms

# switch to 111 route domain net space
[root@bigip:Active:Standalone] config # rdsh 111

# the IP for bigip.hostname is changed to 172.168.32.32
[root@bigip:Active:Standalone] config # tmsh run util traceroute 172.168.32.32%321
traceroute to 172.168.32.32 (172.168.32.32), 30 hops max, 60 byte packets
 1  bigip.hostname (172.168.32.32)  0.036 ms  0.084 ms  0.070 ms

It seems the packets just go to the interface directly, because the IP is a local IP on the BigIP machine, and there is no routing table involved. Does that mean I can regard it simply as a local IP, with no routing between subnet IPs in different route domains?

But I guess there must be something like a mapping involved, right? Is there any route domain map that can be shown?

There is little information about the mechanism of F5 BigIP route domain mapping on the Internet; most of the information on route domains covers the management and use cases of BigIP route domains.

I hope someone can shed some light on this part.
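
For completeness, the route-domain objects themselves can be listed from tmsh; this shows configuration (ids, vlans, strict setting) rather than the internal mapping I am asking about, but it is the only related view I have found:

tmsh list net route-domain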

How to Pre-seed Salt Minion's Archives

Posted: 23 Feb 2022 12:18 PM PST

So I am creating a state file to install Mattermost on a minion. So far it looks like this:

mattermost-usergroup:
  user.present:
    - name: mattermost
    - shell: /bin/sh
    - createhome: False
    - usergroup: True
    - system: True
    - require:
      # From postgresql-formula:
      # https://github.com/saltstack-formulas/postgres-formula/blob/master/postgres/server/init.sls#L278
      - service: postgresql-running

mattermost-opt:
  archive.extracted:
    - name: /opt
    - source: https://releases.mattermost.com/{{ pillar['mattermost'].version }}/mattermost-{{ pillar['mattermost'].version }}-linux-amd64.tar.gz
    - source_hash: a194fd3d2bebed8e6b5721261621030e573f4500c54fb251cfd8ff6f32fe714e
    - user: mattermost
    - group: mattermost
    - require:
      - user: mattermost-usergroup

My problem is: prior to creating this SLS, Mattermost had already been installed (the exact same version as the one specified in the pillar) by downloading the tarball to an admin's home directory, then extracting it manually to /opt. If I run state.highstate with this, I fear it will redownload the tarball, and then, because the tarball is 'new' (from the minion's POV), extract it over the existing installation in /opt.

How do I "pre-seed" the Minion's "archive cache" so the Minion can see the file is already downloaded, and will not (re)download+overwrite?
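
One alternative I am aware of, in case it changes the answer: archive.extracted accepts an if_missing argument that skips the state entirely when a path already exists, which might sidestep the cache question (a sketch, assuming the tarball unpacks to /opt/mattermost):

mattermost-opt:
  archive.extracted:
    - name: /opt
    - source: https://releases.mattermost.com/{{ pillar['mattermost'].version }}/mattermost-{{ pillar['mattermost'].version }}-linux-amd64.tar.gz
    - source_hash: a194fd3d2bebed8e6b5721261621030e573f4500c54fb251cfd8ff6f32fe714e
    - if_missing: /opt/mattermost    # skip extraction when this path exists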

esxi upgrade - Upgrade VIB(s) "loadesx" is required for the transaction

Posted: 23 Feb 2022 11:34 AM PST

Applying an ESXi patch (from HPE custom ESXi 7.0 to ESXi 7.0U2c) fails:

esxcli software vib update -d /full/path/VMware-ESXi-7.0U2c-18426014-depot.zip

Error message

[InstallationError] Upgrade VIB(s) "loadesx" is required for the transaction. Please use a depot of a complete set of ESXi VIBs.

Server hardware

HPE gen10

Any clue why I get the error message?
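
For context, the alternative I have been considering but not yet run: several guides suggest that 7.x image updates should go through the profile update path against the same depot, rather than vib update (the profile name below is my guess and would need to be read from the depot first):

# list the image profiles contained in the depot
esxcli software sources profile list -d /full/path/VMware-ESXi-7.0U2c-18426014-depot.zip

# then update to the chosen profile, for example:
esxcli software profile update -d /full/path/VMware-ESXi-7.0U2c-18426014-depot.zip -p ESXi-7.0U2c-18426014-standard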

Some emails show blank on Squirrelmail

Posted: 23 Feb 2022 11:00 AM PST

For some reason, certain messages, only for some users, are showing up either partially or totally blank and are not selectable. The mail server is not self-hosted (migadu.com); only SquirrelMail (v1.4.22) is.

I've confirmed the messages themselves seem just fine: they are perfectly readable when downloaded via another IMAP client. I'm guessing it's something specific about these messages, but I can't figure out what to look for.

I saw some other posts talking about similar things and they seemed to point to a permissions issue, but I can't figure out what permissions I should be setting, and wasn't sure if those suggestions related to the actual mail servers.

See screenshot below.

[screenshot: a message rendering blank in SquirrelMail]

Is there a good way to shrink disk usage of sparse file containing luks-encrypted btrfs file system image?

Posted: 23 Feb 2022 12:15 PM PST

I've created a sparse file filesystem.img, formatted it with cryptsetup luksFormat, and created a btrfs filesystem on it. The image file's disk usage grows fine while adding files to the btrfs filesystem. However, deleting a file on it of course does not reduce the sparse file's disk usage, so I need a way to do that manually.

Unfortunately fstrim does not work, saying the discard operation is not supported.

I can't just write zeros with 'dd' or 'freezero' to a file on the filesystem, since encrypted zeros are not zeros, and this would result in enlarging, not reducing, the image size.

I probably could resize the filesystem to its minimal size and then truncate the image file to the filesystem size plus the LUKS offset, but I found that btrfs is very shrink-unfriendly: currently btrfs filesystem usage reports ~23G free and ~81G used, but I can't reduce it further, so I have ~28% overuse.

'btrfs balance' would probably help, but it looks like it could run even longer than recopying all the data to a new image.

The latter is of course a solution, but not a good one. And it is not always possible to create a new disk image of the required size.

I tried to find out what 'encrypted zeros' look like by creating a zero-filled image encrypted with the same passphrase, but each 512-byte block (the size reported by cryptsetup status) is different. It looks like LUKS does not encrypt each block identically.

Are there any other ideas?

UPD. What I've also tried: filling the btrfs with a zero-filled file and finding its offsets:

# filefrag -b4K -ves zero
Filesystem type is: 9123683e
File size of zero is 34811904 (8499 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..    3477:      17664..     21141:   3478:
   1:     3478..    4732:       3399..      4653:   1255:      21142:
   2:     4733..    5673:      11673..     12613:    941:       4654:
   3:     5674..    6379:      16400..     17105:    706:      12614:
   4:     6380..    6908:       4654..      5182:    529:      17106:
   5:     6909..    7305:      12614..     13010:    397:       5183:
   6:     7306..    7823:      15770..     16287:    518:      13011:
   7:     7824..    8220:      17106..     17502:    397:      16288:
   8:     8221..    8338:       5183..      5300:    118:      17503:
   9:     8339..    8418:      13011..     13090:     80:       5301:
  10:     8419..    8477:      17503..     17561:     59:      13091:
  11:     8478..    8489:      13091..     13102:     12:      17562:
  12:     8490..    8493:      13564..     13567:      4:      13103:
  13:     8494..    8496:       3328..      3330:      3:      13568:
  14:     8497..    8498:      13103..     13104:      2:       3331: last,eof

I saved this into a file zero.frag on another filesystem and tried to fill the image file's 'physical' blocks with zeros:

# offset=4096
# cat zero.frag | tail -n +4 | head -n -1 | while read rec
  do seek=${rec#*:*:}; seek=${seek%%.*}; seek=$((seek+offset))
     count=${rec#*:*:*:  }; count=${count%%:*}; count="${count#"${count%%[![:space:]]*}"}"
     dd if=/dev/zero bs=4096 seek=$seek count=$count of=filesystem.img
  done

but this destroyed the filesystem. It was still mountable, but existing files were corrupted. Also, filesystem.img's disk usage became even less than the btrfs filesystem's used space. So this is still unsolved.
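
One more avenue I have seen mentioned but not yet tried: dm-crypt rejects discards by default, which would explain fstrim's "not supported" error; opening the mapping with discards allowed is supposed to let fstrim punch holes all the way down to the sparse file (a sketch, assuming the image is attached via a loop device; device and mount names are illustrative):

# attach the image and open it with discard/TRIM passthrough enabled
losetup -f --show filesystem.img             # prints e.g. /dev/loop0
cryptsetup open --allow-discards /dev/loop0 imgcrypt
mount /dev/mapper/imgcrypt /mnt/img

# trimming the mounted fs should now free blocks in filesystem.img
fstrim -v /mnt/img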

How to write a sparse Linux (EXT4) disk image without writing gigabytes of zeros?

Posted: 23 Feb 2022 12:20 PM PST

I have a 64 GB Linux disk image with ~50 GB of unused space across the partitions. The file is sparse, so it only takes ~14 GB on disk.

But if I dd the image, it writes the full 64 GB, which takes quite a while.

Is there any way I can do the equivalent of dd if=os.img of=/dev/sdb with this image, without having to write 50 GB of zeros?

Is there any tool that is smart enough to do this, i.e. an imaging tool that has an awareness of the EXT4 filesystem?
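
The closest partial workaround I know of, to frame what I am asking for: GNU dd can skip writing runs of zeros with conv=sparse, but on a raw block device that only helps if the skipped regions of the target are already zero, so it is not a full answer:

# seeks over zero blocks instead of writing them; only safe if /dev/sdb is
# already zeroed (e.g. freshly blkdiscard-ed) where the image is sparse
dd if=os.img of=/dev/sdb bs=4M conv=sparse status=progress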

win10 how to encrypt single file, right click folder encrypt invalid recovery certificate

Posted: 23 Feb 2022 10:04 AM PST

In Windows 10 Enterprise, in a corporate environment (if it matters): I have a single file with info I want to protect, and I want to encrypt just this single file. And I only want to do this in Windows; if it's just Win10, that's ok.

I right-click on that file, go to Properties - General - Advanced, and check Encrypt contents to secure data.

The resulting error is Error applying attributes - Recovery policy configured for this system contains invalid recovery certificate.

  • What does that error mean?
  • What means are there of encrypting just a single file (win7 or win10)? I do not want to use 3rd party software if I do not have to.
  • Is this an Enterprise setup type of error/problem, or should I expect any win10 home/pro/enterprise version to be able to do this much like opening/creating a .zip file?
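
For reference, the command-line equivalent of the Explorer checkbox I am using, which runs into the same EFS recovery-policy check (cipher.exe ships with Windows; the file name is a placeholder):

:: encrypt a single file with EFS from a command prompt
cipher /e /a C:\path\to\secret.txt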

Add-NetNatStaticMapping not port forwarding to local VM

Posted: 23 Feb 2022 12:01 PM PST

I'm running Windows 10 build 1809 and have Hyper-V installed. I have a Linux machine running behind a NAT, with internet connectivity working, on IP 10.0.5.5. I basically followed the instructions at the link below.

https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/setup-nat-network

When creating the port mapping, I ran

Add-NetNatStaticMapping -ExternalIPAddress 0.0.0.0/24 -ExternalPort 8500 -Protocol TCP -InternalIPAddress 10.0.5.5 -InternalPort 8500 -NatName YetAnotherNAT  

If I try to hit http://10.0.5.5:8500 it works (the page loads). If I try to hit http://127.0.0.1:8500 it doesn't work (nothing loads). Even if I try to use any of my external IPs, it doesn't work.

It's basically like the whole port forwarding is not doing anything.

Any ideas?

Get-VMSwitch returns the following

PS C:\> Get-VMSwitch

Name             SwitchType NetAdapterInterfaceDescription
----             ---------- ------------------------------
nat              Internal
Wifi             External   Intel(R) Dual Band Wireless-AC 7265
DockerNAT        Internal
Default Switch   Internal   Teamed-Interface
MyNATSwitch      Internal
YetAnotherSwitch Internal

Get-NetNat returns the following

PS C:\> Get-NetNat

Name                             : YetAnotherNAT
ExternalIPInterfaceAddressPrefix :
InternalIPInterfaceAddressPrefix : 10.0.5.0/24
IcmpQueryTimeout                 : 30
TcpEstablishedConnectionTimeout  : 1800
TcpTransientConnectionTimeout    : 120
TcpFilteringBehavior             : AddressDependentFiltering
UdpFilteringBehavior             : AddressDependentFiltering
UdpIdleSessionTimeout            : 120
UdpInboundRefresh                : False
Store                            : Local
Active                           : True
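
And for completeness, the mapping itself can be listed with the matching cmdlet, which is the other check I know of:

# confirm the static mapping was actually registered with WinNAT
PS C:\> Get-NetNatStaticMapping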

server can't find 2.0.9.10.in-addr.arpa: NXDOMAIN

Posted: 23 Feb 2022 10:04 AM PST

I have set up VPC peering between two different projects' VPCs on GCP, and it works fine when I ping my VM instance; I'm also able to ssh to my instance with private IPs. However, if I query one VM instance from another for reverse DNS with nslookup, it throws the error: server can't find 2.0.9.10.in-addr.arpa: NXDOMAIN

My ARP table doesn't show connected devices either, just one IP, for the router I believe. I get the same status, NXDOMAIN, when I dig 10.9.0.2.

Any help would be much appreciated.
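
For the record, these are the exact lookups I mean (10.9.0.2 is the peer VM's private IP; I note that dig needs -x for a reverse query, which may matter for my second test):

# nslookup reverses an IP automatically; dig needs -x
nslookup 10.9.0.2
dig -x 10.9.0.2 +short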

negative WMI-Filter for security filtering in GPO

Posted: 23 Feb 2022 11:03 AM PST

I need to create a group policy object (GPO) that will disable printer redirection for all computers except certain servers.

I considered making a security group, adding all the computers except the servers on which I wanted to permit printer redirection, and then applying security filtering on the GPO so that printer redirection would be disabled only on the computers that are members of the security group. Due to the number of servers in the environment and the number of technicians making changes in Active Directory (AD), I feel that people will not remember to add new computers to the security group. :) So, I want to create a GPO that applies to all computers but has a rule that excludes the members of a security group from the GPO.

I believe that I want to do this with a WMI filter, but I don't know how to create one, and the examples I found do not seem to give me the information I need to create the required WMI filter.

The example I found is this.

Select * From Win32_Group where Name <> "security group"  

Can someone help me edit this WMI filter to identify all servers that are not members of that security group?

Nginx Redirect all JPEG URL to single JPEG

Posted: 23 Feb 2022 12:01 PM PST

There are two scenarios that I'm trying to achieve.

Scenario A: If a client requests a URL that contains a .jpeg or .jpg file, redirect the user to a single .jpg file that is on the server, in this case myimage.jpg.

Scenario B: If a client requests a URL that contains the /abc/ directory, redirect the user to another domain through a proxy while keeping the URL intact.

Below is the content of my nginx.conf

http {

    server {
        listen 80;
        root /usr/share/nginx/html;

        #Scenario A
        location ~* \.(jpg|jpeg){
            rewrite ^(.*) http://$server_name/myimage.jpg last;
        }

        #Scenario B
        location ^~ /abc/ {
            proxy_pass http://cd.mycontent.com.my;
            proxy_redirect localhost http://cd.mycontent.com.my;
            proxy_set_header Host $host;
        }
    }
......

Most of it I adapted from Nginx redirect to a single file. The config does not produce errors in /var/log/nginx/error.log, but it does not perform as intended.
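
One thing I am unsure about in my own Scenario A, flagging it in case it is the problem: the regex location also matches /myimage.jpg itself, which I suspect can loop on the redirect. The variant I am testing exempts the target with an exact-match location, which nginx prefers over regex locations (a sketch, not a confirmed fix):

# exact match wins over the regex location, so the target is served as-is
location = /myimage.jpg { }

#Scenario A
location ~* \.(jpg|jpeg)$ {
    return 302 /myimage.jpg;
}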

Apache 2.4 with PHP-FPM and ProxyPassMatch for PHPMyAdmin, is it secure?

Posted: 23 Feb 2022 01:01 PM PST

I recently configured a Debian 8 server with Apache 2.4. Since I have a fairly recent version of Apache, I used ProxyPassMatch instead of FastCgiExternalServer.

But when configuring my alias for phpMyAdmin, I wondered whether this was secure. Here's my configuration:

<VirtualHost *:80>
    ServerName www.my-website.com

    DocumentRoot /var/www/html/
    Alias /phpmyadmin/ "/usr/share/phpmyadmin/"
    <Directory "/usr/share/phpmyadmin/">
            Options FollowSymLinks
            DirectoryIndex index.php
    </Directory>

    # Disallow web access to directories that don't need it
    <Directory /usr/share/phpmyadmin/libraries>
            Order Deny,Allow
            Deny from All
            Require all granted
    </Directory>
    <Directory /usr/share/phpmyadmin/setup/lib>
            Order Deny,Allow
            Deny from All
            Require all granted
    </Directory>

    ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/var/run/php5-fpm-pma.sock|fcgi://localhost/usr/share"

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
</VirtualHost>

What is bothering me is that the ProxyPassMatch allows loading any file in the /usr/share/ directory that ends with .php*. I only want to execute files in /usr/share/phpmyadmin/, but since it's an alias, the /phpmyadmin/ part is already appended, so

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/var/run/php5-fpm-pma.sock|fcgi://localhost/usr/share/phpmyadmin/"  

does not work; it fails with the error that /usr/share/phpmyadmin/phpmyadmin/index.php was not found.
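
The variant I keep coming back to, in case someone can confirm it is sane: anchoring the match on the /phpmyadmin/ prefix and using the capture in the target, so only files under the phpMyAdmin directory ever reach FPM (untested on my side):

# only /phpmyadmin/*.php is handed to FPM; other paths never match
ProxyPassMatch "^/phpmyadmin/(.*\.php(/.*)?)$" "unix:/var/run/php5-fpm-pma.sock|fcgi://localhost/usr/share/phpmyadmin/$1"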

So, is my actual configuration secure enough regarding access to /usr/share/?

Thank you for your help!

Troubleshooting kerberos problems with Samba

Posted: 23 Feb 2022 11:03 AM PST

I've run into an odd problem with Samba 3.6.23. Right now I have a Windows 2008 R2 machine that has trouble accessing shares on a domain-joined Samba box.

  • \\example_serv\my_share : Fails with LOGIN FAILURE
  • \\172.16.102.19\my_share : Works just fine.

When I set smbd to debug logging, I get this:

[2015/03/23 17:33:03.306499,  3] smbd/sesssetup.c:662(reply_spnego_negotiate)
  reply_spnego_negotiate: Got secblob of size 1840
[2015/03/23 17:33:03.306939, 10] libads/kerberos_verify.c:386(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:386: found previous password
[2015/03/23 17:33:03.315587, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [18] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.319930, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [17] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.320027,  3] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [23] failed to decrypt with error Decrypt integrity check failed
[2015/03/23 17:33:03.320101, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [1] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.320162, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [3] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.328693, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [18] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.332985, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [17] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.333065,  3] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [23] failed to decrypt with error Decrypt integrity check failed
[2015/03/23 17:33:03.333128, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [1] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.333192, 10] libads/kerberos_verify.c:435(ads_secrets_verify_ticket)
  libads/kerberos_verify.c:435: enc type [3] failed to decrypt with error Bad encryption type
[2015/03/23 17:33:03.333234,  3] libads/kerberos_verify.c:638(ads_verify_ticket)
  libads/kerberos_verify.c:638: krb5_rd_req with auth failed (Bad encryption type)
[2015/03/23 17:33:03.333264, 10] libads/kerberos_verify.c:648(ads_verify_ticket)
  libads/kerberos_verify.c:648: returning error NT_STATUS_LOGON_FAILURE

Which was enough to point me at something Kerberos-y. So I did a bit of tcpdumping and learned that different login methods are negotiated for machine-name and IP-only access. When accessing via machine name, it attempts a Kerberos login and fails. When accessing via IP address, it attempts NTLMv2, which works just fine.

Of interest, the Win 2008 R2 machine is in a child domain of the one the Samba server is in. However, I have lots of examples of machines in the child domain correctly accessing the Samba machine.

Confoundingly, I have an identically configured Samba system (testparm shows identical [global] settings) in another AD site that is working just fine for this machine.

I'm at a loss over where to poke next.

  • Something weird on the AD DC's in those two sites?
  • Obscure Samba settings I'm not seeing?

I'm not sure where to go from here.
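
For completeness, the machine-account checks I plan to run next on the Samba box (standard Samba/MIT tooling):

# is the domain join (and machine account secret) still valid?
net ads testjoin

# which service keys and enc types does the keytab actually hold?
klist -ke /etc/krb5.keytab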

Troubleshooting Redmine (Bitnami Stack) performance

Posted: 23 Feb 2022 01:01 PM PST

I've got a Redmine instance (Bitnami stack) that's unusually slow. I'm still trying to get to the bottom of this, and I have some theories I'd like to discuss here. So, if anybody has any ideas about this, please feel free to help :-)

System:

Bitnami Stack with Redmine 1.4.x upgraded to Bitnami Stack with Redmine 2.1.0 like this:

  • mysqldump'd the old database
  • installed new Bitnami Stack with Redmine 2.1.0
  • imported the dump cleanly with recreating all tables
  • rake db:migrate and all that

The stack is running on a virtual machine with openSUSE 12.1. Resources shouldn't be a problem, as there are always multiple gigabytes of free RAM, and CPU spikes on Redmine requests go only up to 50% of 2 CPU cores. Also, there are only a few users accessing it.

What may be totally important: user login is handled via LDAP (Active Directory).

Problem:

On each request, Redmine responds unusually slowly. Sometimes it takes 3 seconds, sometimes even up to 10 seconds, to deliver the page.

My thoughts:

  • I don't know if "On-the-fly user creation" is checked in Redmine's LDAP settings; I can only check this later today. But could the lack of a check here be a problem? Authentication takes a moment when logging in, that's normal and acknowledged. But when not creating the user on the fly, does Redmine keep only a session, or does it re-authenticate against LDAP on each request, so that this could be the problem?
  • Is Redmine 2.x maybe so much slower than 1.4.x that it's just plain normal?
  • Is Bitnami's Apache2+Passenger config faulty?
  • MySQL indexes wouldn't be a problem given the fact that MySQL is very calm on the CPU, would it?

One more thing that seems very odd to me, but maybe a false measurement result (need to re-check this tomorrow when I see the machine):

I tried to check whether it's a network problem (the network reacting slowly, maybe DNS or something; the server is on the local network). It seemed like requests on localhost (browser directly on the openSUSE VM) were fast, but requests over the network weren't. Usually, I would think of a network problem, but the strange thing is: when actually measuring connect times, the network is fast as hell. Ping is good, static delivery times too. It seems like only Redmine-side calculated pages are sent slowly by the application server, while Apache is still fast - but only when the request is a remote LAN request. Very strange … but as I mentioned above, I have to re-check this one. It just seems illogical to me.
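
When I re-check, this is how I plan to separate network time from application time, so the localhost and LAN numbers are comparable (plain curl timing variables; the URL is a placeholder):

# DNS / connect / time-to-first-byte / total, for a single request
curl -o /dev/null -s -w 'dns %{time_namelookup}  connect %{time_connect}  ttfb %{time_starttransfer}  total %{time_total}\n' \
    http://redmine.example.local/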

Upgrade cURL to latest on CentOS

Posted: 23 Feb 2022 12:58 PM PST

I need to upgrade cURL to the latest version on CentOS

2.6.18-164.15.1.el5.centos.plusxen #1 SMP Wed Mar 17 20:32:20 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

I'm unable to find any suitable packages to do so via yum or rpm. Is there a standard way to do this upgrade without installing from source?
