Sunday, September 5, 2021

Recent Questions - Server Fault

IPsec site-to-site VPN issues after recent Linux kernel update

Posted: 05 Sep 2021 09:42 PM PDT

Last weekend we had an automatic security upgrade on one of our VPN gateways that connect sites to our cloud environment. After troubleshooting (basic network analysis, e.g. with Wireshark) we identified one of the most recent security updates as the cause. We have restored the system to a known good state and have put the packages we believe to be affected on hold.

It is an Ubuntu 20.04 LTS instance on AWS with linux-image-aws installed. We are using IPsec to connect several EdgeRouters to a private cloud environment.

After the upgrade all sites connect and communicate as usual, e.g. ICMP is working but we are unable to access certain services (such as RDP or SMB) in the private cloud environment.

The change logs for the related packages don't show any obviously related change, so I am wondering if I am missing something fundamental. This configuration has worked well for over a year with no issues.

Known good version: linux-image-aws 5.8.0.1041.43~20.04.13

Problematic version: linux-image-aws 5.8.0.1042.44~20.04.14 and onwards (we have also tested latest 5.11 which seems to be affected)
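For reference, a minimal sketch of how the hold can be applied on Ubuntu with apt-mark (the package name is taken from the versions above; exact version handling may differ in your environment):

# Keep the known good kernel meta-package from being upgraded automatically
sudo apt-mark hold linux-image-aws
# List packages currently on hold
apt-mark showhold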

IPsec configuration extract

# MAIN IPSEC VPN CONFIG
config setup

conn %default
        keyexchange=ikev1
        # <REMOVED>

conn peer-rt1.<REMOVED>.net.au-tunnel-1
        left=%any
        right=rt1.<REMOVED>.net.au
        rightid="%any"
        leftsubnet=172.31.0.0/16
        rightsubnet=10.35.0.0/16
        ike=aes128-sha1-modp2048!
        keyexchange=ikev1
        ikelifetime=28800s
        esp=aes128-sha1-modp2048!
        keylife=3600s
        rekeymargin=540s
        type=tunnel
        compress=no
        authby=secret
        auto=route
        keyingtries=%forever
        dpddelay=30s
        dpdtimeout=120s
        dpdaction=restart

Thank you in advance.

Why is mdadm unable to deal with an "almost failed" disk?

Posted: 05 Sep 2021 09:52 PM PDT

Multiple times in my career I've come across mdadm RAID sets (RAID 1+0, 5, 6, etc.) in various environments (e.g. CentOS/Debian boxes, Synology/QNAP NASes) which appear to be simply unable to handle a failing disk: a disk that is not totally dead, but has tens of thousands of bad sectors and can barely service I/O. It's still kind of working, and the kernel log is typically full of UNC errors.

Sometimes SMART will identify the disk as failing; other times there are no symptoms other than slow I/O.

The slow I/O actually causes the entire system to freeze up. Connecting via ssh takes forever, the web GUI (if it is a NAS) usually stops working, and running commands over ssh takes forever as well. That is, until I disconnect or purposely "fail" the disk out of the array, at which point things go back to "normal" - as normal as they can be with a degraded array.
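"Failing" a disk out of the array by hand looks roughly like this (a sketch; /dev/md0 and /dev/sdb1 are example names, not taken from the post):

# Mark the misbehaving member as failed, then remove it from the array
sudo mdadm --manage /dev/md0 --fail /dev/sdb1
sudo mdadm --manage /dev/md0 --remove /dev/sdb1
# Confirm the array is now degraded but responsive
cat /proc/mdstat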

I'm just wondering: if a disk is taking so long to read from or write to, why not just knock it out of the array, drop a message in the log, and keep going? Making the whole system grind to a halt because one disk is flaky nullifies one of the main benefits of RAID (fault tolerance - the ability to keep running when a disk fails). I can understand that in a single-disk scenario (e.g. your system has a single SATA disk and it cannot execute reads and writes properly) this is catastrophic, but in a RAID set (especially the fault-tolerant "personalities") it seems not only annoying but contrary to common sense.

Is there a very good reason the default behavior of mdadm is to basically cripple the box until someone remotes in and fixes it manually?

Error "Cert Hostname DOES NOT VERIFY" - Testing TLS certificates on Exchange 2016 CU21

Posted: 05 Sep 2021 10:43 PM PDT

I am practicing with certificates. A Let's Encrypt certificate was created normally with win-acme; I can send and receive mail normally, and HTTPS works for OWA and the other services.

Testing with CheckTLS, I get an alert message:

Cert Hostname DOES NOT VERIFY:

(mail.contoso.com != mail | DNS:mail | DNS:mail.lan.contoso.com)  

I don't understand the mail.lan.contoso.com part of the error. At first I thought the cause was the split DNS, but from what I've read in the forums the error seems to be about something else.
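For reference, one way to see exactly which names the server presents on SMTP STARTTLS (a sketch; the hostname is the example name used above, and this assumes OpenSSL is available on the testing machine):

openssl s_client -connect mail.contoso.com:25 -starttls smtp -servername mail.contoso.com 2>/dev/null \
  | openssl x509 -noout -subject -text | grep -A1 "Subject Alternative Name"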

I understand that the default connectors should not be changed; in forums, books and tutorials nobody changes them. That is why a new receive connector is created for mail from the internet, on which the FQDN can be changed.

Following recommendations from this forum, these are my DNS settings:

Private AD DNS (lan.contoso.com)

Record Type DNS Name Internal IP
A mail.lan.contoso.com 192.168.1.4
A DC01.lan.contoso.com 192.168.1.3

Private DNS (contoso.com) SPLIT

Record Type DNS Name Internal IP
A mail.contoso.com 192.168.1.4
A autodiscover.contoso.com 192.168.1.4

Public DNS (contoso.com)

Record Type DNS Name Value
A mail.contoso.com xxx.xxx.xxx.xxx
A autodiscover.contoso.com xxx.xxx.xxx.xxx
MX @ mail.contoso.com

Privoxy -> Tor Does Not Go Through Tor on Ubuntu 20.04

Posted: 05 Sep 2021 05:24 PM PDT

It is quite the simple setup, as one could imagine, yet it seems that I am having trouble getting Privoxy to talk to Tor. The server is running Ubuntu 20.04 with the latest packages for tor, privoxy, and squid, and the computer I am browsing from is on the same local network. I am able to access error pages for squid and privoxy, as well as privoxy's configuration page, so there is no problem between those two...

Here is my Privoxy configuration file:

user-manual /usr/share/doc/privoxy/user-manual
confdir /etc/privoxy
logdir /var/log/privoxy
actionsfile match-all.action # Actions that are applied to all sites and maybe overruled later on.
actionsfile default.action   # Main actions file
actionsfile user.action      # User customizations
filterfile default.filter
filterfile user.filter       # User customizations
logfile logfile
debug  4096 # Startup banner and warnings
debug  8192 # Non-fatal errors
listen-address  127.0.0.1:8118
listen-address  [::1]:8118
toggle  1
enable-remote-toggle  0
enable-remote-http-toggle  0
enable-edit-actions 0
enforce-blocks 1
buffer-limit 4096
enable-proxy-authentication-forwarding 0
forward-socks4a / 127.0.0.1:9050
foward-socks4 / 127.0.0.1:9050
forward-sock5 / 127.0.0.1:9050
forward-socks5t / 127.0.0.1:9050
forwarded-connect-retries  0
accept-intercepted-requests 0
allow-cgi-request-crunching 0
split-large-forms 0
keep-alive-timeout 5
tolerate-pipelining 1
socket-timeout 300

I have tried editing the forward lines to include a trailing dot, and that does not work either.

Here, then, is my tor configuration file:

SocksPort 9050 # Default: Bind to localhost:9050 for local connections.
SocksPolicy accept 192.168.1.0/24
SocksPolicy accept 127.0.0.1
SocksPolicy reject *
SocksBindAddress 127.0.0.1
SocksListenAddress 127.0.0.1
RunAsDaemon 1
OutboundBindAddress 192.168.1.3

From the Ubuntu machine, for a short period of time, I was able to use wget to reach api.ipify.org and request my IP, which was different from my usual IP, indicating that Tor was indeed working. However, I cannot reproduce this, and I suspect that the request was not being routed through Privoxy (though I have no proof of this either).
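A quick way to separate the two hops is to test Tor's SOCKS port directly and then test through Privoxy (a sketch, run on the Ubuntu box itself; ports are taken from the configs above):

# Straight to Tor's SOCKS port, bypassing Privoxy
curl --socks5-hostname 127.0.0.1:9050 https://api.ipify.org
# Through Privoxy, which should hand the request to Tor
curl --proxy http://127.0.0.1:8118 https://api.ipify.org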

Here is my ufw rules list. Pardon the mess.

To                         Action      From
--                         ------      ----
192.168.1.3 3128/tcp       ALLOW IN    192.168.1.0/24
3128/tcp                   ALLOW IN    192.168.1.0/24
192.168.1.0/24 67/udp      ALLOW IN    Anywhere
22/tcp                     ALLOW IN    192.168.1.0/24
53/udp                     ALLOW IN    Anywhere
25/tcp                     ALLOW IN    Anywhere
192.168.1.3 80/tcp         ALLOW IN    Anywhere
192.168.1.3 443/tcp        ALLOW IN    Anywhere
23728/udp                  ALLOW IN    Anywhere
40036/udp                  ALLOW IN    Anywhere
23728/tcp                  ALLOW IN    Anywhere
40036/tcp                  ALLOW IN    Anywhere
9050/tcp                   ALLOW IN    127.0.0.1
9050/tcp                   ALLOW IN    Anywhere
53/udp (v6)                ALLOW IN    Anywhere (v6)
25/tcp (v6)                ALLOW IN    Anywhere (v6)
23728/udp (v6)             ALLOW IN    Anywhere (v6)
40036/udp (v6)             ALLOW IN    Anywhere (v6)
23728/tcp (v6)             ALLOW IN    Anywhere (v6)
40036/tcp (v6)             ALLOW IN    Anywhere (v6)
9050/tcp (v6)              ALLOW IN    Anywhere (v6)

192.168.1.3 8118/tcp       ALLOW OUT   192.168.1.3 3128/tcp
53/udp                     ALLOW OUT   Anywhere
80/tcp                     ALLOW OUT   Anywhere
443/tcp                    ALLOW OUT   Anywhere
25/tcp                     ALLOW OUT   Anywhere
40036/udp                  ALLOW OUT   Anywhere
23728/udp                  ALLOW OUT   Anywhere
23728/tcp                  ALLOW OUT   Anywhere
40036/tcp                  ALLOW OUT   Anywhere
123/udp                    ALLOW OUT   Anywhere
192.168.1.3 9050/tcp       ALLOW OUT   192.168.1.3 8118/tcp
9050/tcp                   ALLOW OUT   Anywhere
53/udp (v6)                ALLOW OUT   Anywhere (v6)
80/tcp (v6)                ALLOW OUT   Anywhere (v6)
443/tcp (v6)               ALLOW OUT   Anywhere (v6)
25/tcp (v6)                ALLOW OUT   Anywhere (v6)
40036/udp (v6)             ALLOW OUT   Anywhere (v6)
23728/udp (v6)             ALLOW OUT   Anywhere (v6)
23728/tcp (v6)             ALLOW OUT   Anywhere (v6)
40036/tcp (v6)             ALLOW OUT   Anywhere (v6)
123/udp (v6)               ALLOW OUT   Anywhere (v6)
9050/tcp (v6)              ALLOW OUT   Anywhere (v6)

Can anybody tell me where I went wrong with my setup, aside from redundant firewall rules?

Create lvm volume in the last cylinders of the disk

Posted: 05 Sep 2021 05:37 PM PDT

In all magnetic disks, the speed difference between the first and last sectors is noticeable, up to a factor of two or three. This is still true with the new multi-terabyte disks. So it still makes sense to reserve the last cylinders of the disk for infrequently used files, particularly backups.

With a partitioned disk, it is trivial to do this, with the only caveat of allowing for future changes of size.

But my configuration is unpartitioned LVM. So I need either to put two volume groups on the same physical hardware (one using the start of a set of disks, the other using the ends), or to make sure that a logical volume prefers to use the extents in the last cylinders of the disk. Is that possible? Do we have any control over where an LV is going to be placed?
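There is some control: lvcreate accepts explicit physical extent ranges on a PV, so a volume can be pinned to the tail of a disk. A minimal sketch (the volume group, PV and extent numbers below are examples; check the real PE count with pvdisplay first):

# See the total number of physical extents on the PV
pvdisplay /dev/sdb1
# Create an LV using only the last 2560 extents of that PV (adjust numbers to your PE count)
lvcreate -n backup -l 2560 myvg /dev/sdb1:238000-240559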

Why is AC WiFi with an AC router slow? [migrated]

Posted: 05 Sep 2021 03:08 PM PDT

I have a Lenovo IdeaPad L340 with a built-in Realtek 8821CE Wireless 802.11ac network adapter. The fastest download speed I can reach is around 150 Mbps, even though Windows claims double that:

(screenshot)

My machine is close to the router and other devices with the same router are much quicker. What can be the problem?

Azure: Extend an on-premises network using multiple VPNs and VPN Gateway

Posted: 05 Sep 2021 03:36 PM PDT

I am looking at this https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/

It states 4 options to connect from on-prem to AZURE:

  • Hybrid network with VPN gateway
  • Hybrid network with ExpressRoute
  • Hybrid network with ExpressRoute and VPN failover
  • Hub-spoke topology

I am looking at the first option: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn?tabs=portal

This diagram is there:

(diagram)

Questions

Not being a networking specialist, I find the diagram above hard to follow. I suspect it is condensing things, which makes it less clear for folks such as myself.

  • What if there are 2 VNETs to connect to on-premises using VPN Gateway? How would we draw that here? Would both be shown with a line to the same on-premises gateway? Or is that via the Azure Stack VPN Gateway, i.e. must we first connect from on-prem to the Azure Stack VNET?

  • See the picture below. From this I get the impression I can connect N VNETs directly to on-prem using the same site-to-site VPN with the VPN Gateway approach. Yes or no? I note I have never seen such a picture.

(picture)

  • Moreover, the main document lists 4 types of approaches, one being the hub-spoke approach. But this document, for this option, also talks about a hub-spoke approach. I find that hard to follow. What is hub-spoke here? It seems the VNET peering is what brings up the talk of hub and spoke.

I find the diagram hard to interpret together with the text. I think I am missing something elementary here.

Only have connectivity to nginx pod from the node it's running on

Posted: 05 Sep 2021 04:40 PM PDT

I've installed a Kubernetes master and one node, v1.20. I deployed nginx with

kubectl run nginxpod --image=nginx

$ kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
nginxpod   1/1     Running   0          19s   192.168.2.195   xps15-9560   <none>           <none>

When I curl from the master it times out:

$ curl 192.168.2.195
curl: (7) Failed to connect to 192.168.2.195 port 80: Connection timed out

On the node it works. I've tried from other hosts on my network and they time out too. Why can I only connect from the node the pod is actually running on?

----Edit----

The calico-nodes are running but they are not ready. I don't know what this means:

$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nginxpod                                   1/1     Running   0          64m
kube-system   calico-kube-controllers-5f6cfd688c-wk5jp   1/1     Running   0          69m
kube-system   calico-node-t47kf                          0/1     Running   0          45m
kube-system   calico-node-vqj6m                          0/1     Running   0          68m
kube-system   calico-node-wzwzb                          0/1     Running   0          69m
kube-system   coredns-74ff55c5b-mb2vj                    1/1     Running   0          69m
kube-system   coredns-74ff55c5b-pvsgz                    1/1     Running   0          69m
kube-system   etcd-ubuntu-18-extssd                      1/1     Running   0          69m
kube-system   kube-apiserver-ubuntu-18-extssd            1/1     Running   0          69m
kube-system   kube-controller-manager-ubuntu-18-extssd   1/1     Running   0          69m
kube-system   kube-proxy-5fq9b                           1/1     Running   0          68m
kube-system   kube-proxy-bxhfm                           1/1     Running   0          69m
kube-system   kube-proxy-pp9sb                           1/1     Running   0          45m
kube-system   kube-scheduler-ubuntu-18-extssd            1/1     Running   0          69m
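A first step could be to ask Kubernetes why the calico-node containers fail their readiness probe (the pod name is taken from the listing above; the exact failure message varies):

kubectl -n kube-system describe pod calico-node-t47kf
kubectl -n kube-system logs calico-node-t47kf | tail -n 50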

Same DNS names and private IP addresses used across multiple Azure corporate accounts

Posted: 05 Sep 2021 08:31 PM PDT

Looking at the diagram below:

(diagram)

Here we see a single Azure corporate account, X, with "azsql1.database.windows.net". You can access that from on-prem.

What if, for argument's sake, I had a second Azure environment configured exactly the same - Azure corporate account Y, also with "azsql1.database.windows.net"?

It's theoretical, but I would like to know how on-prem DNS resolves this if one tries to use "azsql1.database.windows.net" for a connection in, say, a Tableau or Spotfire report.

I presume that in some way you need to tell which DNS Forwarder to use in which AZURE Corporate Account.

So, forgive me - I understand basic internet DNS resolution, but I am not a networking expert.

Apache Indexes Option works for HTTP but not for HTTPS

Posted: 05 Sep 2021 04:00 PM PDT

I am testing with a vanilla install of Rocky Linux 8.4 and Apache 2.4. I have a virtual host configured and working and I also configured Lets Encrypt cert via Certbot, this also works great.

I want to allow directory listings on a specific folder, so I have enabled Options Indexes. This works as expected via HTTP, but via HTTPS I get 403 Forbidden. The Certbot script inserted the rewrite rule, but I don't think that is the issue; I tried disabling it so I could test via HTTP and it makes no difference, but I am including it here in case it is in fact relevant.

My virtual host conf looks like this:

<VirtualHost *:80>
    ServerName test.prot0type.com
    ServerAlias test.prot0type.com
    DocumentRoot /var/www/test.prot0type.com

    <Directory /var/www/test.prot0type.com/test>
        Options +Indexes
    </Directory>

    RewriteEngine on
    RewriteCond %{SERVER_NAME} =test.prot0type.com
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

<VirtualHost *:443>
    ServerName test.prot0type.com
    ServerAlias test.prot0type.com
    DocumentRoot /var/www/test.prot0type.com

    <Directory /var/www/test.prot0type.com/test>
        Options +Indexes
    </Directory>
</VirtualHost>

Accessing http://test.prot0type.com/test/ works as expected.

Accessing https://test.prot0type.com/test/ results in 403 and in the error log I get:

Cannot serve directory /var/www/test.prot0type.com/test/: No matching DirectoryIndex (index.html) found, and server-generated directory index forbidden by Options directive

How do I find which Options directive is doing this? I have searched all the conf files but can't find it.
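Two commands that may help narrow this down (a sketch; the paths assume a default Rocky Linux httpd layout, and Certbot may have written a separate *-le-ssl.conf virtual host that actually answers on port 443):

# Show which VirtualHost is actually selected for port 443
apachectl -S
# Find every Options directive in the loaded configuration
grep -Rni "Options" /etc/httpd/ 2>/dev/null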

SELinux Issue - git status fatal: Out of memory? mmap failed: Permission denied

Posted: 05 Sep 2021 04:58 PM PDT

I have a CentOS 7.9 server running Apache and Git. However, if I do a

[root@a]# git status
fatal: Out of memory? mmap failed: Permission denied

But if I disable SELinux or set it to permissive via the command below, it starts working fine.

setenforce Permissive  

Any idea on how to fix this issue permanently with SELinux enabled?

Audit log says

node=a type=PROCTITLE msg=audit(1630636505.296:37076): proctitle=67697400737461747573
node=a type=MMAP msg=audit(1630636505.296:37076): fd=3 flags=0x2
node=a type=SYSCALL msg=audit(1630636505.296:37076): arch=c000003e syscall=9 success=no exit=-13 a0=0 a1=3ebd0 a2=3 a3=2 items=0 ppid=8008 pid=8156 auid=1001 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=570 comm="git" exe="/usr/bin/git" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
node=a type=AVC msg=audit(1630636505.296:37076): avc:  denied  { map } for  pid=8156 comm="git" path="/www/site/.git/index" dev="sda2" ino=540400 scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:httpd_t:s0 tclass=file permissive=0
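The AVC shows the repository file carrying an httpd file context, so one hedged approach is to reset the labels and, only if a denial remains, build a local policy module from the audit log (a sketch; the path is taken from the AVC above, and audit2allow needs the policycoreutils Python tools installed):

# Reset file contexts under the repository to the policy defaults
restorecon -Rv /www/site
# If git is still denied, generate and load a local policy module from the denials
grep git /var/log/audit/audit.log | audit2allow -M local-git
semodule -i local-git.pp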

Azure Linux has no default IPv6 route

Posted: 05 Sep 2021 07:52 PM PDT

Environment

  • VM: Linux Debian 10.10
  • Virtual Network with ipv4 and ipv6 address space and subnets
  • Public ipv4 address and public ipv6 address (standard SKU)
  • VM NIC associate public ipv4 and public ipv6 address
  • VM NIC assigned private ipv4 and ipv6 address (check with ip address)

Network Security Group:

enter image description here

Network NIC Effective routes

enter image description here

Problem

Cannot connect to http://ipv6.google.com

# curl -v http://ipv6.google.com
*   Trying 2404:6800:4005:812::200e...
* TCP_NODELAY set
* Immediate connect fail for 2404:6800:4005:812::200e: Network is unreachable
* Closing connection 0
curl: (7) Couldn't connect to server

No ipv6 default route

# ip -6 r
::1 dev lo proto kernel metric 256 pref medium
fd00::/80 dev docker0 metric 1024 linkdown pref medium
fd00:4244:7016::4 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev docker0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev br-a3568bc4adc5 proto kernel metric 256 pref medium
fe80::/64 dev veth907e563 proto kernel metric 256 pref medium
fe80::/64 dev vethdf50b7b proto kernel metric 256 pref medium
fe80::/64 dev veth1322b71 proto kernel metric 256 pref medium
fe80::/64 dev veth6d1b4d6 proto kernel metric 256 pref medium
fe80::/64 dev vethca17875 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium

No default route in ipv6 router advertisement

Router advertisement only contains prefix length

# tcpdump -i eth0 -vv icmp6
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:51:28.053407 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 56) fe80::1234:5678:9abc > ip6-allnodes: [icmp6 sum ok] ICMP6, router advertisement, length 56
        hop limit 0, Flags [managed, other stateful], pref medium, router lifetime 9000s, reachable time 0ms, retrans timer 0ms
          source link-address option (1), length 8 (1): 12:34:56:78:9a:bc
            0x0000:  1234 5678 9abc
          prefix info option (3), length 32 (4): fd00:4244:7016::/64, Flags [onlink], valid time infinity, pref. time infinity
            0x0000:  4080 ffff ffff ffff ffff 0000 0000 fd00
            0x0010:  4244 7016 0000 0000 0000 0000 0000

IPV6 Address

# ip -6 address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fd00:4244:7016::4/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::20d:3aff:fe82:b7d3/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 state DOWN
    inet6 fe80::1/64 scope link tentative
       valid_lft forever preferred_lft forever
4: br-a3568bc4adc5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:5bff:fe7d:1e0d/64 scope link
       valid_lft forever preferred_lft forever
16: veth907e563@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::44da:baff:fef3:c54d/64 scope link
       valid_lft forever preferred_lft forever
18: vethdf50b7b@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::5c5d:93ff:fead:8190/64 scope link
       valid_lft forever preferred_lft forever
20: veth1322b71@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::f4a1:ceff:fe3e:55f4/64 scope link
       valid_lft forever preferred_lft forever
22: veth6d1b4d6@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::60de:73ff:fe59:74ec/64 scope link
       valid_lft forever preferred_lft forever
24: vethca17875@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::f47a:a6ff:febf:a444/64 scope link
       valid_lft forever preferred_lft forever
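As a temporary test, a default route can be added by hand via the router's link-local address seen in the RA capture above (a sketch; whether this is the supported Azure-side configuration is a separate question):

ip -6 route add default via fe80::1234:5678:9abc dev eth0
ip -6 route show default
ping -6 -c 3 ipv6.google.com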

Is JDK 1.6 supported in IBM POWER 9(AIX 7.1-7.2)?

Posted: 05 Sep 2021 06:41 PM PDT

I would like to check if jdk 1.6 is supported in AIX 7.2.

We are planning to upgrade from AIX 7.1 to 7.2 (POWER7 to POWER9).

But we are not sure if we have to upgrade Java, as our development is done on JDK 1.6.

Can anyone advise whether Java needs to be upgraded to JDK 1.8 if we are upgrading from AIX 7.1 to 7.2 (POWER7 to POWER9)?

High CPU usage by Apache/MySQL

Posted: 05 Sep 2021 10:57 PM PDT

I have a problem with CPU usage on the website that uses WordPress, Apache, and MySQL. During the day, from time to time, CPU usage by MySQL and Apache goes up to 2400% (I have 24 cores in total), the server freezes, the average load goes up to 24.

Recently, there was a little more traffic than usual, but this thing shouldn't be permanent, right? I've updated the kernel, the database, libraries, restarted many times. And still, it freezes. I've looked at the process list of the DB, but there is nothing extraordinary. In the database, there are pretty large amounts of data. Just a couple of weeks ago it worked fine, and now it doesn't. So, it shouldn't be unoptimized queries.

What can be the causes of such behavior?

Update:

the result of A) SHOW GLOBAL STATUS LIKE 'com_%r%_table';

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Com_alter_table       | 5     |
| Com_create_table      | 34    |
| Com_drop_table        | 0     |
| Com_rename_table      | 0     |
| Com_show_create_table | 0     |
+-----------------------+-------+
5 rows in set (3.04 sec)

B) SHOW GLOBAL STATUS LIKE 'uptime%';

+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| Uptime                    | 455524 |
| Uptime_since_flush_status | 455524 |
+---------------------------+--------+
2 rows in set (0.01 sec)

C) SHOW GLOBAL STATUS LIKE '%dirty%';

+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| Innodb_buffer_pool_pages_dirty | 0     |
| Innodb_buffer_pool_bytes_dirty | 0     |
+--------------------------------+-------+
2 rows in set (0.00 sec)

P.S. I still have problems with the server. I needed to change the character set on one of the databases, and it took a little more than a day to finish, with just 400,000 rows. Before, similar changes used to take some time, but not that much. Could it be that, after the DDoS attack, something changed in the database so that it performs worse?
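One way to see what is actually eating the CPU during a spike is to switch on the slow query log at runtime and snapshot the process list while the load is high (a sketch; the threshold is an arbitrary example):

mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 2;"
# While the load is high, capture what is currently running
mysql -e "SHOW FULL PROCESSLIST\G" > processlist-$(date +%s).txt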

Error "This certificate cannot be verified up to a trusted certification authority"

Posted: 05 Sep 2021 08:03 PM PDT

In VirtualBox I have the following network for testing, and all software on the virtual machines is a fresh installation.

(network diagram)

On the virtual machine named www.home.local, where my web server resides, I created a certificate request, submitted it to ws01.home.local, got a certificate issued and downloaded, and then completed the certificate request. After that I added an HTTPS binding to the Default Web Site with that certificate.

Now,

this Default Web Site is accessible from www.home.local at https://www.home.local without any error

this Default Web Site is accessible from ws01.home.local at https://www.home.local without any error

However, I am getting an error from vm02 and from the host computer:

(screenshot)

What can be a solution to this issue? What should I do next?
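A likely next step is to make the clients trust the lab CA. A minimal sketch, assuming the CA's root certificate has been exported from ws01.home.local to a .cer file and the command is run from an elevated prompt on vm02 and on the host (the file name is an example):

certutil -addstore -f Root ca-root.cer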

Ubuntu 20.10 Active directory integration not working

Posted: 05 Sep 2021 09:08 PM PDT

I've just installed Ubuntu 20.10 and I enabled Active Directory integration during setup. It asked me for an AD user and password, I provided those, and the setup showed green ticks and went on.

After completing setup, I tried to login with a domain user (ufficio.lan\lucio), but it failed as if the password was incorrect (which was not, I tried several times and I'm sure about my password). I then logged in with the local user I created during setup and checked the machine was effectively joined to the domain:

# realm join -U Administrator ufficio.lan
realm: already joined to this domain

Please note that after trying to log in with my AD user, gdm added my real name and surname to the list of available users, so it actually managed to contact my AD server and obtain some information about me. However, it didn't create the home directory, nor did it mount the home directory that the server shares (this would be my final goal), and it didn't let me in, as described above.

I tried to install Ubuntu 20.10 from scratch again, just in case I made some mistakes the first time, but I got the same results.

The server is a Zentyal Community Edition 6.2 and other Linux computers in the LAN manage to login with AD credentials, but those are old Fedora or Ubuntu 14.04 setups that were manually joined to the AD domain back then, so I can't just copy /etc/ over and hope for the best: it won't work.

EDIT after Sturban's answer:

Before reinstalling from scratch I had already tried to follow the guide linked in the answer, but it did not solve the problem. It was precisely that guide that, in Step 5, suggested me the command

# realm join -U Administrator ufficio.lan  

to check if the system was already joined to the domain. Despite being already joined, I tried following that guide anyway (even from its Step 1), but at the end of Step 5 the id command did not find my domain user and gdm kept refusing my domain login and not creating my home directory.

Anyway, I suspect the point is quite different, and that's why I did not mention these trials before: Ubuntu 20.10 has AD integration option during setup and it's a new feature that up to 20.04 included did not exist, so I suspect something different is needed on Ubuntu 20.10, while that guide assumes Ubuntu 20.04.

EDIT #2

I've tried starting from fresh Zentyal 6.2 + Ubuntu 20.04 (mind it, not 20.10) virtual machines in a virtual LAN and then following the guide linked in Sturban's answer, which is supposed to be valid for Ubuntu 20.04. It didn't work just the same way as with Ubuntu 20.10.

To be honest, I did NOT follow the guide verbatim (never did that), but I always assumed I had to adapt Step 1 to the actual OS I was using. Step 1 suggests to add Ubuntu 18.04 repositories to /etc/apt/sources.list, but I always assumed it actually means I have to add my distro repositories that contain the packages to be installed in Step 3. Besides, I think adding bionic repos to a focal or buster setup and then installing old packages from there would wreck the OS of its own, right? Or do I really have to go through the hassle of adding outdated repos to a current OS in order to have AD authentication working?

Other than that, I followed the guide verbatim, but at the end of step 5 the id command still could not find AD users.

So now I assume my question is applicable to Ubuntu 20.04 too, and that guide is more outdated than I thought. That means if you know the solution to have AD users authentication working on Ubuntu 20.04 I assume it will work on Ubuntu 20.10 too, but that guide is missing something and it's not enough as solution.
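Two checks that may help regardless of the guide (a sketch; the user and realm names are taken from the post above, and pam-auth-update assumes the SSSD/PAM packages from the guide are installed):

# Can SSSD resolve the domain account at all?
id lucio@ufficio.lan
# Create home directories automatically at first login
sudo pam-auth-update --enable mkhomedir
# Watch SSSD while a domain login is attempted
sudo journalctl -u sssd -f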

Memcached error: SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY

Posted: 05 Sep 2021 04:03 PM PDT

Here is my test code:

$host = 'localhost';
$port = '11211';

if (extension_loaded('memcached')) {
    $mc = new Memcached;
    $mc->addServer($host, $port);
    if ($mc->set('test', 'TEST')) {
        echo 'true';
    } else {
        echo $mc->getResultCode();
        echo $mc->getResultMessage();
    }
} else {
    echo 'no_memcached';
}

The output is:

47 SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY

SELinux is disabled, I've also tried to turn off tcp_nodelay in nginx.conf and tried different types of host (127.0.0.1 and localhost) and ports.

I've read this question - How to debug memcached "SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY" errors?

But nothing helped, and I cannot comment there because I don't have enough reputation.
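It may be worth confirming that memcached itself answers on that port, outside of PHP (a sketch using netcat):

# Basic liveness check
echo stats | nc -w 2 127.0.0.1 11211 | head
# A manual set/get round trip
printf 'set test 0 60 4\r\nTEST\r\nget test\r\nquit\r\n' | nc -w 2 127.0.0.1 11211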

Why are database queries running so much slower on AWS RDS?

Posted: 05 Sep 2021 08:32 PM PDT

I've been working on performance-sensitive features. I've been developing locally, running a MySQL server on my Mac. One key query runs in about 1.2 seconds on my machine, which is in the acceptable range for me. Everything was running speedy enough, so I decided to move it onto an AWS RDS Aurora database so that I could start using the new system in production.

But once I started connecting to RDS instead of my local server, the queries started to take more than twice as long. And this is comparing the time it takes the actual query to run, unaffected by networking speed. This is how I'm measuring.

I've bumped up the instance that RDS is using to db.r3.4xlarge, which has 122 GB of RAM, an Intel Xeon E5-2670 v2 (Ivy Bridge), and 16 vCPUs. My local machine has 32 GB of ram, and a 4 GHz Intel Core i7. I don't know much about this stuff, but it sure seems like the database in the cloud is running on more powerful hardware no matter what metric you're looking at.

Main question: What else can I look into to get the cloud database running as quickly as my local machine?

Things that seemed like plausible causes but don't appear to be:

  • Using explain at the start of my queries results in the exact same index plan on both DBs.
  • The DB running on AWS has fewer rows than the local one, as I've loaded less data into it.
  • The hardware in the cloud is more powerful than my local machine, unless I'm overlooking something.
  • Network performance is not part of what I'm measuring.
  • It isn't isolated to a certain query -- almost every query is running 2 or 3 times slower.
  • I'm comparing just running the queries plain, so my application code doesn't come into it.

Things that could be factors, but I really don't know:

  • The AWS database is running Aurora mimicking MySQL 5.6.10, whereas locally I'm running MySQL 5.6.43.
  • Maybe another chip or component of my computer is affecting this, other than my RAM or CPU.
  • Could the cloud one still be building indexes or something? I think MySQL indexes are built as data is inserted, though.

I'm really at a bit of a loss here. If anyone has any ideas or advice, it would be very much appreciated!
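One comparison that may be worth making is the key InnoDB settings on both servers, since defaults differ between a stock local install and Aurora (a sketch; the RDS endpoint below is a placeholder):

mysql -h 127.0.0.1 -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_buffer_pool_size','innodb_flush_log_at_trx_commit','tmp_table_size','max_heap_table_size');"
mysql -h mycluster.cluster-XXXX.us-east-1.rds.amazonaws.com -u admin -p -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_buffer_pool_size','innodb_flush_log_at_trx_commit','tmp_table_size','max_heap_table_size');"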

How to create a non-nat lxd network bridge (using lxd network)?

Posted: 05 Sep 2021 04:06 PM PDT

How to create a non-nat lxd network bridge?

I have tried the below network configs, then ran sudo service networking reload and lxc stop and lxc start for the container in question. I was unable to get the host and the containers to both be on the 10.1.1.1/24 subnet using a non-NAT bridge. When using the default lxdbr0 with NAT everything works fine.

I have tried the below configurations. First without assigning a subnet:

config:
  ipv4.nat: "false"
  ipv6.address: none
description: ""
name: testbr0
type: bridge
used_by:
- /1.0/containers/test
managed: true

Then with a subnet assigned:

config:
  ipv4.address: 10.1.1.1/24
  ipv4.nat: "false"
  ipv6.address: none
description: ""
name: testbr0
type: bridge
used_by:
- /1.0/containers/test
managed: true

When the above configurations were used, the host lost network connectivity.

How to create a non-nat lxd network bridge (using lxd network)?
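One alternative that avoids LXD-managed addressing entirely is to create the bridge on the host itself (e.g. with netplan) and attach the container's NIC to it (a sketch; br0 and the device name are example names, and the container name is taken from the configs above):

# Attach the container to an existing host bridge instead of an LXD-managed one
lxc config device add test eth0 nic nictype=bridged parent=br0 name=eth0
lxc restart test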

SSH and GIT auth suddenly stopped working

Posted: 05 Sep 2021 06:03 PM PDT

I've been happily pulling from my repository for months, until now.. :'(

For the first time ever, git now asks me to add github.com to the known_hosts file. It never did that before; I didn't even have a .ssh directory until after I said 'yes' to the question below.

# git pull

Host 'github.com' is not in the trusted hosts file.
(ssh-rsa fingerprint md5 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48)
Do you want to continue connecting? (y/n) y

/usr/bin/ssh: Connection to git@github.com:22 exited: No auth methods could be used.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I can't figure out what could have changed... doing git status works, but I can't pull anything.

I can't do an ssh -vvv, as this is on a small embedded Linux system and ssh apparently wasn't compiled with the verbose option, so I can't really see what's going on.

Shouldn't I be able to pull from repos without having to add a GitHub SSH key? This all happens on small embedded systems that I deploy in various places, so I don't want to add any account details; I just want them to pull the latest version from GitHub.

My git config:

[core]
        repositoryformatversion = 0
        filemode = true
        bare = false
        logallrefupdates = true
[remote "origin"]
        url = git@github.com:MyUser/MyRepo.git
        fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
        remote = origin
        merge = refs/heads/master

I've tried changing git to https, but that gives me a certificate error instead. (I've redacted the user and repo name)
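Since the goal is anonymous read-only pulls, one hedged option is to switch the remote to HTTPS and point git at a CA bundle, which small embedded images often lack (a sketch; the repository path reuses the redacted name from the config above, and the CA bundle path is an example):

git remote set-url origin https://github.com/MyUser/MyRepo.git
git config --global http.sslCAInfo /etc/ssl/certs/ca-certificates.crt
git pull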

nginx: FastCGI sent in stderr: "Primary script unknown"

Posted: 05 Sep 2021 04:06 PM PDT

Using the latest version of nginx (1.10.0) and php-fpm (PHP 7.0.6) on 64-bit arch linux.

When attempting to request index.php for a DokuWiki installation, I get the following error:

2016/05/21 22:09:50 [error] 11099#11099: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.70.3, server: doku.test.com, request: "GET /install.php HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/php-fpm.sock:", host: "doku.test.com"  

Here is the relevant server config:

server {
  listen 80;
  server_name doku.test.com;
  root /var/www/doku/public_html/;
  access_log /var/log/nginx/scripts.log scripts;

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass  unix:/run/php-fpm/php-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  }
}

Here is fastcgi_params:

fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;

fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  REQUEST_SCHEME     $scheme;
fastcgi_param  HTTPS              $https if_not_empty;

fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param  REDIRECT_STATUS    200;

As can be seen in my server config, I am attempting to log the output of the SCRIPT_FILENAME parameter, as all of my research seems to be pointing to that as the culprit. Here is the relevant part of nginx.conf:

log_format scripts '$document_root$fastcgi_script_name > $request';  

When requesting the index.php page, the below is generated in scripts.log:

/var/www/doku/public_html/index.php > GET /index.php HTTP/1.1  

Doing an ls on that file:

-rwxr-xr-x 1 nginx nginx 182 May 21 06:45 /var/www/doku/public_html/index.php  

It's worth noting that both the nginx daemon and the php-fpm daemon are configured to run as the nginx user using the nginx group. I'm at a loss as to why I am getting the initial error, as the logging has effectively proven that SCRIPT_FILENAME is indeed pointing to the correct path.

Out of all the ServerFault answers I reviewed, adding that param to the server config seemed to be the #1 solution to my error, but it does not seem to fix it in my case.

Any suggestions?
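Two checks that may help (a sketch; the pool path assumes Arch's default php-fpm layout):

# Show ownership and permissions of every component of the path; a parent directory
# without the execute bit is a common cause of "Primary script unknown"
namei -l /var/www/doku/public_html/index.php
# Look for a chroot, chdir or open_basedir in the pool that would change the path php-fpm resolves
grep -E 'chroot|chdir|open_basedir' /etc/php/php-fpm.d/www.conf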

Why might add_header not work in nginx's reverse-proxy configuration?

Posted: 05 Sep 2021 07:05 PM PDT

Please help me to understand why the following proxy configuration does not set the header X-Discourse-Echo-Proxy:

server {
  listen 80;
  server_name corsproxy.discourseecho.com;

  error_log /data/nginx/proxy-debug warn;
  # access_log /data/nginx/proxy-access;

  location / {

    proxy_redirect off;
    proxy_set_header Host $arg_proxy_target_domain;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_cache discourse_corsproxy;
    proxy_cache_valid any 30m;
    proxy_cache_lock on;
    proxy_cache_lock_age 7s;
    proxy_cache_lock_timeout 5s;
    proxy_cache_methods GET HEAD POST;

    proxy_ignore_headers Cache-Control X-Accel-Expires Expires Set-Cookie Vary;
    proxy_hide_header Set-Cookie;
    proxy_hide_header Cache-Control;
    proxy_hide_header Expires;
    proxy_hide_header X-Accel-Expires;
    proxy_hide_header Vary;
    proxy_hide_header Access-Control-Allow-Origin;
    proxy_hide_header Access-Control-Allow-Credentials;

    add_header X-Cache-Status $upstream_cache_status always;
    add_header X-Discourse-Echo-Proxy "1" always;

    # Nginx doesn't support nested If statements, so we
    # concatenate compound conditions on the $cors variable
    # and process later

    # If request comes from allowed subdomain
    # (discourseecho.com) then we enable CORS
    if ($http_origin ~* (https?://discourseecho\.com$)) {
       set $cors "1";
    }

    # If request comes from my home IP (Adelaide), enable CORS
    if ($remote_addr = "103.192.193.144") {
      set $cors "1";
    }

    # OPTIONS indicates a CORS pre-flight request
    if ($request_method = 'OPTIONS') {
       set $cors "${cors}o";
    }

    # Append CORS headers to any request from
    # allowed CORS domain, except OPTIONS
    if ($cors = "1") {
       add_header Access-Control-Allow-Origin $http_origin always;
       add_header Access-Control-Allow-Credentials "true" always;
       proxy_pass $arg_proxy_target_protocol://$arg_proxy_target_domain;
    }

    # OPTIONS (pre-flight) request from allowed
    # CORS domain. return response directly
    if ($cors = "1o") {
       add_header Access-Control-Allow-Origin $http_origin' always;
       add_header Access-Control-Allow-Methods "GET, POST, OPTIONS, PUT, DELETE" always;
       add_header Access-Control-Allow-Credentials "true" always;
       add_header Access-Control-Allow-Headers "Origin,Content-Type,Accept" always;
       add_header Content-Length 0;
       add_header Content-Type text/plain;
       return 204;
    }

    # Requests from non-allowed CORS domains
    proxy_pass $arg_proxy_target_protocol://$arg_proxy_target_domain;
  }
}

I expect the header to be added because of the following instruction:

add_header X-Discourse-Echo-Proxy "1" always;

But whatever HTTP requests I make, no such header is present in the responses. There are no errors or warnings in the log file. What should I check to identify the problem?
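A quick way to see exactly which headers each code path returns (a sketch; the target-domain query parameters are placeholders):

# Plain request
curl -sI "http://corsproxy.discourseecho.com/?proxy_target_protocol=https&proxy_target_domain=example.com"
# CORS pre-flight from the allowed origin
curl -sI -X OPTIONS -H "Origin: https://discourseecho.com" \
  "http://corsproxy.discourseecho.com/?proxy_target_protocol=https&proxy_target_domain=example.com"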

showmount -e fails from one node

Posted: 05 Sep 2021 08:22 PM PDT

When I run:

showmount -e <IP>
rpc mount export: RPC: Unable to receive; errno = Connection reset by peer

mount <IP>:/path /mnt
mount.nfs: Connection reset by peer

But

mount -t nfs -c vers=3 <IP>:/path /mnt  

works

The client and server (freenas 9.3) are on the same subnet. How to resolve this?
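showmount and NFSv3 mounts go through the portmapper and mountd, so checking those from the failing client may narrow it down (a sketch; substitute the real server IP):

# List the RPC services the server advertises
rpcinfo -p <IP>
# Probe mountd over TCP specifically
rpcinfo -t <IP> mountd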

lftp reverse mirror silently skips files in subfolders

Posted: 05 Sep 2021 05:06 PM PDT

I'm using lftp to push content to an ftp-only web-server. It worked to upload the files recursively at first, and even incrementally.

Any idea why this would skip files changed in a subfolder, but not skip files changed in the home directory?

Details: I'm using the reverse mirror mode, which pushes local data up to the server instead of downloading it from the server. Throughout the web, this is the recommended option for recursive upload.

Here's the full script (from this answer)

#!/bin/bash

HOST="..."
USER="..."
PASS="..."
FTPURL="ftp://$USER:$PASS@$HOST"
LCD="/local/directory"
#RCD=""
#RCDCMD=cd $RCD;
#DELETE="--delete"
lftp -c "set ftp:ssl-allow no;
set ftp:list-options -a;
open '$FTPURL';
lcd $LCD;
$RCDCMD \
mirror --reverse \
   $DELETE \
   --verbose \
   --exclude-glob .*swp \
   --exclude-glob .*swn \
   --exclude-glob .*swo"

A related question was solved by fixing permission issues, which is not the problem in this case. Everything is "rwxr-xr-x" on the server.

Further Testing: The lftp seems to work intermittently. For example, I will run the command twice, and it skips the changes, then the third time it works, correctly copying the changed files up to the server.
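One way to see why mirror decides to skip a file is to run it as a dry run with more verbosity (a sketch; the variables reuse the ones defined in the script above):

lftp -c "set ftp:ssl-allow no; set ftp:list-options -a; open '$FTPURL'; lcd $LCD; \
  mirror --reverse --dry-run --verbose=3"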

Do I need a RHEL subscription to install packages?

Posted: 05 Sep 2021 06:58 PM PDT

I'm new to RHEL. Trying to install software this morning and running into road blocks. Is it required to have a subscription to download packages via yum on RHEL?

I'm coming across different sources on the net: some make it sound like yes, you need a subscription; others make it sound like no, a subscription is only required for support.

In either case I'm stuck unable to install software ATM, because the machines I'm on don't have the subscription registered. Is there a way to install RHEL software without registering a subscription? If so, how?
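For completeness, registering a system with subscription-manager looks roughly like this (a sketch; the no-cost Developer subscription also satisfies it for lab machines):

sudo subscription-manager register --username <rh-account>
sudo subscription-manager attach --auto
sudo yum repolist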

OpenLDAP memberOf attribute is not updated after group update

Posted: 05 Sep 2021 08:03 PM PDT

I have an OpenLDAP setup on Debian 7.1, (OpenLDAP 2.4.31), and I am trying to set up the memberof overlay. My configuration is just like I have read at lots of sites throughout the internet, however, it still does not work for me.

The issue is that the memberOf attributes of the entries are only updated when I create a group, but are not updated when I modify or delete a group. Actually, this same issue was asked before here: How do I configure Reverse Group Membership Maintenance on an openldap server? (memberOf), but even though it is marked as answered, I could not find any usable information in the answers. (Even the original poster couldn't do anything with the answers, according to the comments...)

My configuration is like this: cn=config/cn=module{0}.ldif

dn: cn=module{0}
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/ldap
olcModuleLoad: {0}back_hdb
olcModuleLoad: {1}memberof
structuralObjectClass: olcModuleList

And for the module: cn=config/olcDatabase={1}hdb/olcOverlay={0}memberof.ldif

dn: olcOverlay={0}memberof
objectClass: olcMemberOf
objectClass: olcOverlayConfig
olcOverlay: {0}memberof
structuralObjectClass: olcMemberOf
olcMemberOfGroupOC: groupOfNames
olcMemberOfMemberAD: member
olcMemberOfMemberOfAD: memberOf
olcMemberOfRefInt: TRUE

The group I add:

dn: cn=test,ou=services,dc=x,dc=y
cn: test
objectClass: groupOfNames
objectClass: top
description: test group
member: cn=Almafa Teszt,ou=users,dc=x,dc=y

The query I run:

$ ldapsearch -LLL -h localhost -x -D cn=admin,dc=x,dc=y -b u=users,dc=x,dc=y -W  '(memberOf=cn=test,ou=services,dc=x,dc=y)' memberOf  

So the issue is not with how to query the attribute, but that after modifying or removing the group, the result of the search does not change...

Update: As per Brian's answer, I also set up the refint overlay, with the following config:

$ ldapsearch -LLL -b cn=module{0},cn=config
dn: cn=module{0},cn=config
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/ldap
olcModuleLoad: {0}back_hdb
olcModuleLoad: {1}memberof.la
olcModuleLoad: {2}refint

$ ldapsearch -LLL -b olcOverlay={1}refint,olcDatabase={1}hdb,cn=config
dn: olcOverlay={1}refint,olcDatabase={1}hdb,cn=config
objectClass: olcConfig
objectClass: olcOverlayConfig
objectClass: olcRefintConfig
objectClass: top
olcOverlay: {1}refint
olcRefintAttribute: memberof member manager owner

But it neither fixed the memberof overlay, nor worked in itself: when I modified the name of a member of a group, the member attribute of the group was not updated. Could these two issues be related?

Adding a "dynamic" route manually for troubleshooting

Posted: 05 Sep 2021 05:06 PM PDT

Is there a way to add a route in Linux with the Dynamic flag set? The reason I want to do this is to troubleshoot an issue where identical static and dynamic routes exist, and to see what happens if I try to delete the static route.

We suspect that the dynamic route was removed, and not the static route.

I have tried:

route add -net 192.168.100.0/24 gw 192.168.0.1 dyn  

But route -n only shows the flags UG.

How to view whether partitions are primary or secondary in Linux

Posted: 05 Sep 2021 03:43 PM PDT

How do I see whether my partitions are primary or secondary in Linux (CentOS)? I tried df -T but it does not show whether partitions are primary or secondary.
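On an MBR disk the partition type (primary, extended, logical) is shown by the partitioning tools rather than by df (a sketch; /dev/sda is an example device):

fdisk -l /dev/sda
parted /dev/sda print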

Is it safe to set validateIntegratedModeConfiguration=false in order to continue using identity impersonate=true?

Posted: 05 Sep 2021 09:08 PM PDT

We have upgraded an ASP.NET web application from IIS6 to IIS7 integrated mode. Our application uses:

<identity impersonate="true"/>  

and therefore we have had to set:

<validation validateIntegratedModeConfiguration="false" />  

Is this sensible? My instincts say not, but searching on google for this issue, this "workaround" is suggested on every page visited.

Is impersonation no longer a good practice in IIS7 integrated, and should we abandon it and come up with a different solution?

Execute local (bash|python) script with mysql SQL

Posted: 05 Sep 2021 06:03 PM PDT

I want to create a trigger so that when a field is updated it kicks off a local bash script (or Python... whatever) to start a workflow (emails, work requests, etc). Is it possible to execute local system scripts or executables from MySQL SQL? My Google searches have been unsuccessful.
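Stock MySQL cannot run external programs from SQL (that needs a UDF such as sys_exec). A common workaround is to have the trigger insert into a queue table and let a small external poller act on new rows; a rough sketch under those assumptions (the database, table, column and script names are hypothetical):

# One-time setup: a simple queue table the trigger can insert into
mysql -e "CREATE TABLE IF NOT EXISTS mydb.workflow_queue (id INT AUTO_INCREMENT PRIMARY KEY, payload TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);"
# Poller: every 10 seconds, hand new rows to a workflow script and delete them
while sleep 10; do
  mysql -N -B -e "SELECT id, payload FROM mydb.workflow_queue" | while IFS=$'\t' read -r id payload; do
    /usr/local/bin/start-workflow.sh "$payload"   # hypothetical handler script
    mysql -e "DELETE FROM mydb.workflow_queue WHERE id = $id"
  done
done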
