Wednesday, November 24, 2021

Recent Questions - Server Fault


Setup IP alias for GKE kubectl connection via tunneling through the "bastion" host to avoid insecure-skip-tls-verify option

Posted: 24 Nov 2021 06:22 AM PST

In reference to this question: Run 'kubectl' commands from my localhost to GKE - but via tunnelling through a bastion host

I'm facing the very same situation. I did everything described in the accepted answer and got to the point where I can communicate with my cluster via kubectl with the --insecure-skip-tls-verify flag. However, I want to get rid of this flag, since it is insecure. For that I tried to set up an IP alias on my local machine, as suggested in the answer, using a command like this:

ifconfig lo0 alias 10.0.0.2

Where 10.0.0.2 is my cluster's private IP and lo0 is my loopback interface. However, after that (and putting https://10.0.0.2:8443 back in ~/.kube/config instead of https://127.0.0.1:8443), my kubectl command stopped working with the message

The connection to the server 10.0.0.2:8443 was refused - did you specify the right host or port?

I also tried assigning 10.0.0.2 as a hostname for 127.0.0.1 in /etc/hosts, and adding a static route in my system pointing to 127.0.0.1 - neither worked.

My system is macOS.
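A sketch of the whole setup on macOS, assuming the bastion is reachable as "bastion-host" and the API server listens on 443 (both are assumptions; adjust to your environment). The likely cause of the "connection refused" is that the SSH forward from the referenced answer is still bound to 127.0.0.1, so nothing listens on 10.0.0.2:8443:

```shell
# 1. Alias the cluster's private IP onto the loopback interface
sudo ifconfig lo0 alias 10.0.0.2

# 2. Bind the tunnel to that exact address (not 127.0.0.1), so that
#    connections to 10.0.0.2:8443 get forwarded through the bastion
ssh -N -L 10.0.0.2:8443:10.0.0.2:443 user@bastion-host
```

With the tunnel bound this way, ~/.kube/config can point at https://10.0.0.2:8443, and certificate verification should succeed as long as 10.0.0.2 (or a matching DNS name) is among the API server certificate's SANs.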

Salt: connection refused to proxy

Posted: 24 Nov 2021 06:55 AM PST

I'm having trouble with Salt, installed on an HPC cluster. All maintenance (i.e., salt) commands are run from master0. The minions to be managed are named node0, node1, ..., node4. When I try to install a package on a minion (say, Python's numpy on node0), I get the following error:

[root@master0 ~]# salt 'node0' pkg.install python3-numpy
    *duplicate lines removed*
    https://vault.centos.org/7.8.2003/os/x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to master0:3142; Connection refused"

On node0, file /etc/yum.conf looks like this:

[main]
exclude=ibutils-libs*
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release
proxy=http://master0:3142

As far as I understand, node0 cannot connect to master0 on port 3142. Indeed, I can curl master0 on the default port (80, I guess), but not on 3142:

[root@node0 ~]# curl 'http://master0:3142'
curl: (7) Failed connect to master0:3142; Connection refused
[root@node0 ~]# curl 'http://master0'
<!DOCTYPE html>
<html>
  *Some HTML stuff, not relevant here*
</body>
</html>

But I don't understand where I am supposed to open this port on master0.

Note: complete newbie here. To be honest, I don't get the point of going through this port either, but the whole HPC structure was built by an external service provider, and I'm just trying to handle all this stuff on my own.
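Port 3142 is conventionally used by package-caching proxies such as apt-cacher-ng, so the proxy= line in yum.conf suggests the provider intended a caching proxy to run on master0. A diagnostic sketch (the service names are guesses; check what the provider actually installed):

```shell
# On master0: is anything listening on 3142?
ss -tlnp | grep 3142

# Check likely caching-proxy services (names are assumptions)
systemctl status apt-cacher-ng
systemctl status squid

# If a proxy is running but unreachable from the nodes, check firewalld
firewall-cmd --list-ports
firewall-cmd --add-port=3142/tcp --permanent && firewall-cmd --reload
```

If no proxy service exists on master0 at all, an alternative is to comment out the proxy= line in /etc/yum.conf on the minions so they fetch packages directly, assuming the nodes have direct access to the repositories.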

Can one pre-populate a directory (e.g., on EFS) for an EKS cluster?

Posted: 24 Nov 2021 06:08 AM PST

We want to run an interactive application on AWS that should be able to read and write a common workspace directory on a shared filesystem. Our approach so far was to use EFS for the shared data, and to use a PersistentVolumeClaim to reference it. However, the dynamically provisioned PersistentVolume will always create a new empty directory on EFS.

Is it possible to pre-populate an EFS directory and to set up an EKS cluster with persistent volumes accessing that directory?

It seems to be possible to do that with EBS, but then we could only mount that workspace into a single pod. We would like to have multiple pods access the same, pre-populated directory (but we're not limited to using EFS). Are we overlooking anything?
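One option, if it fits, is to skip dynamic provisioning and statically define a PersistentVolume pointing at a directory you populated beforehand. A minimal sketch using the in-tree NFS volume type against EFS (the filesystem DNS name and the /workspace path are placeholders):

```yaml
# Hypothetical static PV/PVC pair for a pre-populated EFS directory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-workspace
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.eu-west-1.amazonaws.com
    path: /workspace
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-workspace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: shared-workspace
  resources:
    requests:
      storage: 100Gi
```

Since the access mode is ReadWriteMany, multiple pods can mount the same pre-populated directory; static provisioning through the EFS CSI driver (optionally with an access point) is another route to the same effect.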

Kubernetes pod can not resolve domain name if it is running on a specific node

Posted: 24 Nov 2021 05:10 AM PST

We have an on premises Kubernetes cluster, running on nodes with hostnames node1.mycompany.local through node7.mycompany.local. We also have a database server on node16.mycompany.local, outside the Kubernetes cluster.

When a pod runs on node4 or node7, it cannot resolve the database's domain name and fails. If I move the pod to any node other than 4 or 7, it can connect to the database and runs without a problem.

When I ssh into any of the nodes in the cluster, I can ping the database server by hostname without any problems.

When running a docker container directly, without Kubernetes, we specify extra hostnames with their IPs for the container to resolve, but I don't know how Kubernetes handles this, because I couldn't find any config that specifies the IPs of the external nodes.

My kubernetes version is:

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0"...
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5+coreos.0",...

What can cause this problem?
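A first thing to compare is the DNS configuration the kubelet hands to pods on each node (e.g. whether /etc/resolv.conf on node4 and node7 differs from the other nodes). Separately, Kubernetes does have a counterpart to Docker's extra-hosts option: the hostAliases field in the pod spec, available since Kubernetes 1.7 and thus usable on a 1.9 cluster. A sketch, where the IP is a placeholder for node16's real address:

```yaml
# Pin the database hostname in the pod's /etc/hosts via hostAliases
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  hostAliases:
    - ip: "10.0.0.16"
      hostnames:
        - "node16.mycompany.local"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```

The entries are written into the pod's /etc/hosts, bypassing cluster DNS for that name entirely.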

Attempting to create a RAID array on Ubuntu Server 20.10

Posted: 24 Nov 2021 05:06 AM PST

I'm attempting to create a RAID 0 array on Ubuntu Server 21.10, but I keep getting the following error:

mdadm: An option must be given to set the mode before a second device

Here is the syntax I used:

~$ sudo mdadm –create –verbose /dev/md0 –level=0 –raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: An option must be given to set the mode before a second device (–verbose) is listed

I've seen some websites that list the syntax as:

sudo mdadm -–create -–verbose /dev/md0 -–level=0 -–raid-devices=2 /dev/sdb1 /dev/sdc1

To which I get:

mdadm: invalid option -- '?'
Usage: mdadm --help for help

I've tried removing the partitions and attempting the command without partitions, still no joy. I've tried the -c vs the --create option, still no fun.

Do I need to install any additional packages in order to create an array?

Any ideas?
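Both failing commands contain typographic en-dashes (–) rather than ASCII double hyphens (--), a common artifact of copy-pasting from websites; mdadm then sees garbage options, which matches both error messages. Retyping the dashes by hand should fix it, with no extra packages required:

```shell
# Same command, but with plain ASCII "--" option prefixes
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
```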

CORS is not working in an NGINX + Django + React application

Posted: 24 Nov 2021 04:01 AM PST

My existing nginx configuration is given below. I have tried multiple approaches, but nothing works.

server {
    server_name backend.xxxxxx.com www.backend.xxxxxx.com;
    client_max_body_size 100M;
    #add_header Access-Control-Allow-Origin *;
    #add_header 'Access-Control-Allow-Origin' '*' always;

    location / {
        #add_header 'Access-Control-Allow-Origin' '*' always;
        include proxy_params;
        proxy_pass http://unix:/var/log/gunicorn/xxxxxx.sock;

        # Simple requests
        #if ($request_method ~* "(GET|POST)") {
        #        add_header "Access-Control-Allow-Origin"  *;
        #}

        # Preflighted requests
        #if ($request_method = OPTIONS ) {
        #       add_header "Access-Control-Allow-Origin"  *;
        #       add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
        #       add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
        #       return 200;
        #}
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/backend.xxxxxx.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/backend.xxxxxx.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = backend.xxxxxx.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name backend.xxxxxx.com www.backend.xxxxxx.com;
    client_max_body_size 100M;
    return 404; # managed by Certbot
}

In Django I have added the django-cors-headers plugin with all its configuration. I need your expert suggestions.
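A sketch that consolidates the commented-out attempts into one working location block. Two nginx behaviors matter here: the "always" parameter is needed for the header to be attached to error responses too, and add_header directives inherited from an outer block are dropped as soon as the inner block defines any add_header of its own, so scattering them across levels tends to backfire:

```nginx
location / {
    # Preflight requests: answer directly, before proxying
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, POST, OPTIONS, HEAD" always;
        add_header Access-Control-Allow-Headers "Authorization, Origin, X-Requested-With, Content-Type, Accept" always;
        add_header Content-Length 0;
        return 204;
    }

    # Simple requests: attach the header on success AND error responses
    add_header Access-Control-Allow-Origin "*" always;

    include proxy_params;
    proxy_pass http://unix:/var/log/gunicorn/xxxxxx.sock;
}
```

If django-cors-headers is already emitting Access-Control-Allow-Origin, adding it again in nginx produces a duplicate header that browsers reject, so set it in one place only.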

stop mail.domain.com from browser

Posted: 24 Nov 2021 03:35 AM PST

I have a mail server set up for a domain using Postfix. For the SSL cert, I have created a virtual host for that mail server. The virtual host looks like this:

<VirtualHost *:80>
  ServerName mail.domainX.net
  DocumentRoot /var/www/roundcube/

  ErrorLog ${APACHE_LOG_DIR}/mail.domainX.net_error.log
  CustomLog ${APACHE_LOG_DIR}/mail.domainX.net_access.log combined

  <Directory />
    Options FollowSymLinks
    AllowOverride All
  </Directory>

  <Directory /var/www/roundcube/>
    Options FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
  </Directory>

  RewriteEngine on
  RewriteCond %{SERVER_NAME} =mail.domainX.net
  RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

My problem is that when I hit mail.domain.com in the browser, it opens the webpage for another domain set up on my server with an Apache virtual host. How can I stop this?

Thanks in advance.
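When a request arrives on a port for which no virtual host's ServerName matches, Apache serves the first virtual host defined for that port. The vhost above redirects port 80 to HTTPS, but there is no *:443 vhost for mail.domainX.net, so the HTTPS request falls through to whichever SSL vhost comes first. A sketch of the missing vhost (the certificate paths are placeholders):

```apache
<VirtualHost *:443>
  ServerName mail.domainX.net
  DocumentRoot /var/www/roundcube/

  SSLEngine on
  SSLCertificateFile /etc/ssl/certs/mail.domainX.net.crt
  SSLCertificateKeyFile /etc/ssl/private/mail.domainX.net.key
</VirtualHost>
```

After adding it, enable the site and reload Apache (a2ensite, then systemctl reload apache2).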

ssh-copy-id ceph@osd-0 /bin/ssh-copy-id: ERROR: failed to open ID file '/home/ceph/.pub': No such file or directory

Posted: 24 Nov 2021 05:35 AM PST

I can successfully SSH to my Ceph OSD nodes, but when I try to copy my SSH ID I get the error below. Can anyone explain why I am getting it?

[ceph@monitor ~]$ ssh-copy-id ceph@osd-0

/bin/ssh-copy-id: ERROR: failed to open ID file '/home/ceph/.pub': No such file or directory
(to install the contents of '/home/ceph/.pub' anyway, look at the -f option)
[ceph@monitor ~]$
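The message means ssh-copy-id found no public key for this account: with no id_*.pub files in ~/.ssh, its default key name degenerates to '/home/ceph/.pub'. Generating a key pair first should fix it; the key type and path below are choices, not requirements:

```shell
# Generate a key pair for the ceph user, then copy the public key over
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
ssh-copy-id -i ~/.ssh/id_ed25519.pub ceph@osd-0
```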

How to get a static IP address for outbound requests in AWS?

Posted: 24 Nov 2021 05:25 AM PST

We are in an integration process and the partner company is asking for whitelisted static IP addresses. The solution suggested by AWS is to create a new ECS cluster that uses new subnets. Is it possible to change subnets in an existing cluster, or do we have to redeploy everything into a new cluster? Since redeployment is a risky process for us, we need an easier solution.

How to send/broadcast an IPv6-MAC mapping cache update request for an IPv6 address

Posted: 24 Nov 2021 06:47 AM PST

We can update IPv4 neighbors by using the arping command. I have used arping -A -I <interface_name> -c <count> <IP_address_of_interface> with success.

What is the command to update the mapping of an IPv6 address and MAC on the router/gateway/nodes? We have observed that when an IPv6 address is removed from one node N1 and assigned to another node N2, the MAC address on the router doesn't get updated. It does eventually, but how long that takes is not consistent, and for that duration N2 is not reachable from the gateway.
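Two avenues worth trying, sketched below with example interface names. On the machine holding the stale entry (the router/gateway), the neighbor cache can be inspected and flushed with iproute2; and on the node that took over the address, tools from the ndisc6 and libndp packages can send neighbor discovery messages by hand, roughly the IPv6 counterpart of arping (check their man pages for the exact options your versions support):

```shell
# On the router/gateway: inspect and flush stale IPv6 neighbor entries
ip -6 neigh show dev eth0
ip -6 neigh flush all dev eth0
```

Flushing forces neighbor discovery to run again the next time the address is used, so N2's new MAC is learned immediately instead of waiting for the stale entry to time out.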

PHP Maximum execution time exceeded - sign of attack?

Posted: 24 Nov 2021 05:41 AM PST

We were facing a very high CPU load on our web server today. Our application was freezing and not reacting. We could reduce the load by lowering the maximum execution time from 180 to 90 seconds.

However, the log files are now full of the following error:

Maximum execution time of 90 seconds exceeded
{"exception":"[object] (Symfony\\Component\\ErrorHandler\\Error\\FatalError(code: 0):

And about every 10 seconds, there is a new error of this type in the log.

None of our application's forms and scripts should take that much execution time. My question therefore is: could these error logs be the sign of an attack (e.g. DDoS)? And is there a way to find out the IP address of the client that triggered the error?
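The PHP error log does not record the client, but the web server's access log does. One approach is to rank clients by request volume and cross-check the busiest ones against the timestamps of the fatal errors. The pipeline below runs on a toy log sample so it is self-contained; point the same awk pipeline at your real Apache/nginx access log (whose path depends on your setup; the IPs here are made up):

```shell
# Toy combined-format sample standing in for the real access log
cat > /tmp/access.sample <<'EOF'
203.0.113.5 - - [24/Nov/2021:10:00:01 +0000] "GET /slow HTTP/1.1" 500 12
203.0.113.5 - - [24/Nov/2021:10:00:11 +0000] "GET /slow HTTP/1.1" 500 12
198.51.100.7 - - [24/Nov/2021:10:00:12 +0000] "GET / HTTP/1.1" 200 512
EOF

# Requests per client IP, busiest first; compare the top talkers'
# timestamps with the times of the PHP fatal errors
awk '{print $1}' /tmp/access.sample | sort | uniq -c | sort -rn
```

A single IP hammering one slow endpoint suggests an attack or a runaway client; errors spread evenly across many IPs point more toward a slow dependency (database, external API) in the application itself.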

This site is missing a valid, trusted certificate || Apache2 webserver, Windows root CA

Posted: 24 Nov 2021 06:50 AM PST

I'm learning about certificates and HTTPS, and after 4 days I'm out of ideas on how to set this up so the certificate becomes trusted. In my lab environment I have a Windows server with a CA role.

Previously I installed a Dell OpenManage VM for my server. It has a graphical interface for requests and certificate import for HTTPS access. I successfully generated a Certificate Signing Request and got a cert from my Windows CA server (https://x.x.x.x/certsrv/). It was done in under 2 minutes.

I thought I could try the same on an Apache2 web server (Ubuntu 20.04). Well, now I am stuck and still don't know how to get it to work.

1. Currently (after ~50 openssl req attempts) I requested a certificate with this command:

openssl req -new -newkey rsa:2048 -nodes -addext "subjectAltName = DNS:*.mydomain.local" -keyout serverkey.key -out serverreq.csr  

2. I opened my windows CA server from browser https://x.x.x.x/certsrv/ and Request Certificate-->Advanced Certificate Request-->paste the serverreq.csr content-->WebserverTemplate. Download the cert.

3. Back on Linux, my conf file (/etc/apache2/sites-available/mysite.conf) looks like this:

<VirtualHost _default_:443>
        Protocols h2 http/1.1
        ServerName  mysite.local
        ServerAlias www.mysite.local
        DocumentRoot /var/www/html/mysite
        SSLEngine on
        SSLCertificateFile      /etc/ssl/certandkey/myservercert.crt
        SSLCertificateKeyFile   /etc/ssl/certandkey/myserverkey.key
</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
<VirtualHost *:80>
    ServerName mysite.local
    Redirect / https://mysite.local/
</VirtualHost>

Do I need to configure the # Server Certificate Chain: and # Certificate Authority (CA):?

Apache is running

4. After this, if I open the webpage it says

Certificate - missing
This site is missing a valid, trusted certificate (net::ERR_CERT_COMMON_NAME_INVALID).

But if I open the OpenManage it says

Certificate - valid and trusted
The connection to this site is using a valid, trusted server certificate issued by mydomain-DC-CA

Both certs are from the same windows CA server.

5. I tried to configure /etc/ssl/openssl.cnf, but I do not really understand how. Whenever I edit something, nothing works anymore.

What is wrong with my config, and how should I configure it? Is there a good tutorial? 90% of the time Google shows only self-signed certs and browser tricks, but I would like to configure it with a Windows CA.

Thanks for help

Sorry for my english.
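ERR_CERT_COMMON_NAME_INVALID usually means the hostname in the browser's address bar is not covered by the certificate's Subject Alternative Name list. Note that the CSR above requests subjectAltName = DNS:*.mydomain.local, which does not cover mysite.local: that wildcard only matches one label under mydomain.local (e.g. mysite.mydomain.local). A self-contained sketch that creates a throwaway cert with matching SANs and inspects them; run the same inspection command against the cert issued by the Windows CA (requires OpenSSL 1.1.1+):

```shell
# Throwaway self-signed cert whose SANs match the site's hostnames
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=mysite.local" \
  -addext "subjectAltName=DNS:mysite.local,DNS:www.mysite.local"

# Inspect the SAN list; the browser's hostname must appear here exactly
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

If the Windows CA template overrides the SANs from the CSR, the SAN may need to be supplied on the certsrv request (the "Attributes" box accepts san:dns=mysite.local&dns=www.mysite.local on CAs configured to honor request attributes).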

File server fail over cluster

Posted: 24 Nov 2021 05:59 AM PST

In my domain environment I have a guest machine that acts as a file server. I've recently purchased a new server for the purpose of implementing high availability between the 2 servers/sites. How can I go about implementing a failover cluster between the existing file server, which already hosts shared folders, and the new server?

IPv6: distribute a ULA prefix without a router

Posted: 24 Nov 2021 04:14 AM PST

I have a network which has no ipv6 connectivity to the internet. But I'd like to play a little with private networking.

So I want to use ULAs for my devices and automatically distribute the prefix.

I know that IPv6 uses router advertisements to do this. But (hence the name...) they advertise a router, too. In my case, using dnsmasq, that is the address of the server running it, and it ends up on my clients as a default route.

Since I don't have a router and don't want my clients to have a default route: is there a way to dynamically assign ULA prefixes without a router?

And even more "advanced": is there a way to use DHCPv6 without a router?
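The default-route part of a router advertisement is its "router lifetime" field; an RA with a router lifetime of 0 still distributes the prefix for SLAAC but tells clients not to install a default route. A dnsmasq sketch under that assumption (the prefix and interface are examples; check the ra-param syntax against your dnsmasq version's man page):

```
# /etc/dnsmasq.conf (sketch)
enable-ra
# Advertise the ULA prefix for SLAAC only, no DHCPv6 address assignment
dhcp-range=fd00:1234:5678::, ra-only
# interface, RA interval in seconds, router lifetime 0 = "not a default router"
ra-param=eth0,60,0
```

As for DHCPv6 without any RA: in practice most clients only start DHCPv6 when an RA sets the Managed flag, so some RA sender is still needed even if it never advertises a default route.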

No incoming traffic on Hyper-V Adapter inside VM

Posted: 24 Nov 2021 04:51 AM PST

I have a very annoying problem with my Hyper-V setup and it's bugging me quite a bit, because I can't figure out why.

My Hyper-V Host is a Windows Server 2019 and has three network adapters. Adapter 1 & 2 are physical and have static configurations.

Adapter 3 has been configured as a Hyper-V extensible switch (external). This adapter is connected to a trunk port on a Cisco switch, where multiple VLANs are tagged. The host adapter uses VLAN 777 and can connect just fine.

I've created a new VM, where I've installed another instance of Windows Server 2019. This VM uses the same vSwitch but configured to the VLAN 666.

Inside the VM, I looked at the status information of the adapter:

  • X number of packets since device was started were sent out
  • 0 packets were received

At first I suspected some weird firewall to block the connection.

For sanity, I disabled ALL firewall rules/settings/profiles, because this host is inside a local network anyways. I did this for the Host and the VM.

Nothing changed, still no packets incoming.

I checked the switch that it's connected to - it showed the static MAC address of the VM, confirming that the VM somehow registered there. However, apparently the VM can't receive any packets, which makes DHCP or any communication impossible.

  • What else could be blocking this connection?
  • What logs could I activate or look at to figure this out?

There aren't any blocking rules or anything on the switch. The host confirms this, because the virtual Host-Adapter works just fine.

It's just inside the VM.

Some technical notes about the VM:

  • It was setup using ISO 17763.737.190906-2324.rs5_release_svc_refresh_SERVERESSENTIALS_OEM_x64FRE_de-de_1
  • It is set to Generation 2
  • Additional Features and checkbox inside the VM configuration of the Network adapter are all unchecked.

Update - What I have checked so far:

  • Eventlog in VM and on Host (However, I did not know what to look for specifically!)
  • Used an Adapter mirror to look at the connection inside Wireshark (Adapter from VM as Source with new Host-Adapter as Destination) -> Didn't really lead to anything but I did notice that I can't see ICMP pings going out, weirdly enough
  • Firewall settings on Host and VM -> On both systems those have been turned OFF on ALL profiles to troubleshoot this without further possible disturbances
  • Activated logging of firewall for dropped packets and checked those, nothing there.
  • Triple-Checked Hyper-V Settings, there are no special features activated, regarding the network connection.
  • Changed the configuration on the switch to an access port for the VLAN instead of a trunk -> No difference
  • Used a dynamic MAC address for the adapter instead of a static one -> No difference

mount can't read superblock on /dev/sda5

Posted: 24 Nov 2021 05:49 AM PST

My notebook couldn't start anymore.

I started it with a live USB Ubuntu 20.04 and tried mount /dev/sda5, but received this error:

mount: /mnt: can't read superblock on /dev/sda5

So I tried the commands below:

ubuntu@ubuntu:~$ sudo fdisk -l /dev/sda
Disk /dev/sda: 931,53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10SPZX-21Z
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 8798743C-1711-4EC3-9829-F392872338D3

Device          Start        End    Sectors   Size Type
/dev/sda1        2048     206847     204800   100M EFI System
/dev/sda2      206848     239615      32768    16M Microsoft reserved
/dev/sda3      239616  408477642  408238027 194,7G Microsoft basic data
/dev/sda4  1951424512 1953521663    2097152     1G Windows recovery environment
/dev/sda5   408477696 1951424511 1542946816 735,8G Linux filesystem

Partition table entries are not in disk order.

I tried the solution that Xiao Guoa gives on this site, but I didn't have success.

Source: https://www.linuxbabe.com/desktop-linux/fix-cant-read-superblock-error

ubuntu@ubuntu:~$ sudo mke2fs -n /dev/sda5
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 192868352 4k blocks and 48218112 inodes
Filesystem UUID: 9f522638-1d6a-451f-b608-dc03acd3250b
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000

Then I tried each backup superblock multiplied by 4 (the filesystem uses 4k blocks, while the sb= mount option expects 1k units), without success. For example:

block 98304 * 4 = 393216

Source: https://unix.stackexchange.com/questions/315063/mount-wrong-fs-type-bad-option-bad-superblock

ubuntu@ubuntu:~$ sudo mount -o sb=393216 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=655360 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.
ubuntu@ubuntu:~$ sudo mount -o sb=917504 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=1179648 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=3276800 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=3538944 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.
ubuntu@ubuntu:~$ sudo mount -o sb=6422528 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.
ubuntu@ubuntu:~$ sudo mount -o sb=10616832 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.
ubuntu@ubuntu:~$ sudo mount -o sb=16384000 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=31850496 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.
ubuntu@ubuntu:~$ sudo mount -o sb=44957696 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=81920000 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=95551488 /dev/sda5 /mnt
mount: /mnt: can't read superblock on /dev/sda5.
ubuntu@ubuntu:~$ sudo mount -o sb=286654464 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.
ubuntu@ubuntu:~$ sudo mount -o sb=314703872 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.
ubuntu@ubuntu:~$ sudo mount -o sb=409600000 /dev/sda5 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error.

After that I ran e2fsck for 3 days, without success.

ubuntu@ubuntu:~$ sudo e2fsck -f -y -v -c -b 32768 /dev/sda5
e2fsck 1.45.5 (07-Jan-2020)
Error reading block 1149 (Input/output error).
Ignore error? yes

Force rewrite? yes

Superblock has an invalid journal (inode 8).
Clear? yes

*** journal has been deleted ***

Resize inode not valid.
Recreate? yes

Error reading block 1117 (Input/output error) while reading inode and block bitmaps.
Ignore error? yes

Force rewrite? yes

Error reading block 1133 (Input/output error) while reading inode and block bitmaps.
Ignore error? yes

Force rewrite? yes

Error reading block 1118 (Input/output error) while reading inode and block bitmaps.
Ignore error? yes

Force rewrite? yes

Error reading block 1134 (Input/output error) while reading inode and block bitmaps.
Ignore error? yes

Force rewrite? yes

Error reading block 1119 (Input/output error) while reading inode and block bitmaps.
Ignore error? yes

Force rewrite? yes

Error reading block 1120 (Input/output error) while reading inode and block bitmaps.
Ignore error? yes

Force rewrite? yes

Error reading block 1122 (Input/output error) while reading inode and block bitmaps.
Ignore error? yes

dmesg errors:

[217797.911529] ata1: EH complete
[217807.220171] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[217807.220191] ata1.00: irq_stat 0x40000001
[217807.220198] ata1.00: failed command: READ DMA EXT
[217807.220203] ata1.00: cmd 25/00:08:d8:e0:98/00:00:18:00:00/e0 tag 24 dma 4096 in
                         res 51/40:08:d8:e0:98/00:00:18:00:00/e0 Emask 0x9 (media error)
[217807.220220] ata1.00: status: { DRDY ERR }
[217807.220225] ata1.00: error: { UNC }
[217812.260182] ata1.00: qc timeout (cmd 0xec)
[217812.260211] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x5)
[217812.260222] ata1.00: revalidation failed (errno=-5)
[217812.260240] ata1: hard resetting link
[217812.575276] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[217818.410875] ata1.00: configured for UDMA/133
[217818.410924] sd 0:0:0:0: [sda] tag#24 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=20s
[217818.410936] sd 0:0:0:0: [sda] tag#24 Sense Key : Medium Error [current]
[217818.410945] sd 0:0:0:0: [sda] tag#24 Add. Sense: Unrecovered read error - auto reallocate failed
[217818.410955] sd 0:0:0:0: [sda] tag#24 CDB: Read(10) 28 00 18 98 e0 d8 00 00 08 00
[217818.410960] blk_update_request: I/O error, dev sda, sector 412672216 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[217818.411029] ata1: EH complete
[217821.396529] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[217821.396556] ata1.00: irq_stat 0x40000001
[217821.396568] ata1.00: failed command: READ DMA EXT
[217821.396575] ata1.00: cmd 25/00:08:d8:e0:98/00:00:18:00:00/e0 tag 20 dma 4096 in
                         res 51/40:08:d8:e0:98/00:00:18:00:00/e0 Emask 0x9 (media error)
[217821.396604] ata1.00: status: { DRDY ERR }
[217821.396612] ata1.00: error: { UNC }
[217826.592096] ata1.00: qc timeout (cmd 0xec)
[217826.592112] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x5)
[217826.592116] ata1.00: revalidation failed (errno=-5)
[217826.592122] ata1: hard resetting link
[217826.908977] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[217845.710563] ata1.00: configured for UDMA/133
[217845.710611] sd 0:0:0:0: [sda] tag#20 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=27s
[217845.710624] sd 0:0:0:0: [sda] tag#20 Sense Key : Medium Error [current]
[217845.710632] sd 0:0:0:0: [sda] tag#20 Add. Sense: Unrecovered read error - auto reallocate failed
[217845.710641] sd 0:0:0:0: [sda] tag#20 CDB: Read(10) 28 00 18 98 e0 d8 00 00 08 00
[217845.710647] blk_update_request: I/O error, dev sda, sector 412672216 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[217845.710670] Buffer I/O error on dev sda5, logical block 524315, async page read
[217845.710722] ata1: EH complete
[217861.384383] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[217861.384409] ata1.00: irq_stat 0x40000001
[217861.384420] ata1.00: failed command: READ DMA EXT
[217861.384428] ata1.00: cmd 25/00:08:d8:e0:98/00:00:18:00:00/e0 tag 0 dma 4096 in
                         res 51/40:08:d8:e0:98/00:00:18:00:00/e0 Emask 0x9 (media error)
[217861.384457] ata1.00: status: { DRDY ERR }
[217861.384465] ata1.00: error: { UNC }
[217866.528299] ata1.00: qc timeout (cmd 0xec)
[217866.528326] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x5)
[217866.528335] ata1.00: revalidation failed (errno=-5)
[217866.528352] ata1: hard resetting link
[217866.843346] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[217885.732336] ata1.00: configured for UDMA/133
[217885.732383] sd 0:0:0:0: [sda] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=40s
[217885.732395] sd 0:0:0:0: [sda] tag#0 Sense Key : Medium Error [current]
[217885.732404] sd 0:0:0:0: [sda] tag#0 Add. Sense: Unrecovered read error - auto reallocate failed
[217885.732414] sd 0:0:0:0: [sda] tag#0 CDB: Read(10) 28 00 18 98 e0 d8 00 00 08 00
[217885.732419] blk_update_request: I/O error, dev sda, sector 412672216 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[217885.732442] Buffer I/O error on dev sda5, logical block 524315, async page read
[217885.732492] ata1: EH complete
[217895.039725] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[217895.039750] ata1.00: irq_stat 0x40000001
[217895.039761] ata1.00: failed command: READ DMA EXT
[217895.039769] ata1.00: cmd 25/00:08:60:e0:98/00:00:18:00:00/e0 tag 1 dma 4096 in
                         res 51/40:08:60:e0:98/00:00:18:00:00/e0 Emask 0x9 (media error)
[217895.039797] ata1.00: status: { DRDY ERR }
[217895.039806] ata1.00: error: { UNC }
[217900.067695] ata1.00: qc timeout (cmd 0xec)
[217900.067710] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x5)
[217900.067715] ata1.00: revalidation failed (errno=-5)
[217900.067723] ata1: hard resetting link
[217900.383017] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[217919.221200] ata1.00: configured for UDMA/133
[217919.221247] sd 0:0:0:0: [sda] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=33s
[217919.221260] sd 0:0:0:0: [sda] tag#1 Sense Key : Medium Error [current]
[217919.221269] sd 0:0:0:0: [sda] tag#1 Add. Sense: Unrecovered read error - auto reallocate failed
[217919.221279] sd 0:0:0:0: [sda] tag#1 CDB: Read(10) 28 00 18 98 e0 60 00 00 08 00
[217919.221285] blk_update_request: I/O error, dev sda, sector 412672096 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[217919.221353] ata1: EH complete

In this thread rknichols shows how to find a sector's offset within the partition in order to zero the bad block safely, but I don't really understand it and I was afraid to do that.

Source: https://www.linuxquestions.org/questions/linux-hardware-18/buffer-i-o-error-on-dev-sdb1-async-page-read-4175600715/
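Given the SMART output below (FAILING_NOW on Raw_Read_Error_Rate, hundreds of pending sectors, "Drive failure expected in less than 24 hours"), a commonly recommended approach is to stop running repair tools against the failing disk and first copy it onto healthy media with GNU ddrescue, which skips and retries unreadable areas instead of stalling on them; then run e2fsck against the image. A sketch, assuming a second disk is mounted at /mnt/backup with enough free space:

```shell
# Install GNU ddrescue on the live USB session (package name on Ubuntu)
sudo apt install gddrescue

# Image the damaged partition; the map file lets ddrescue resume and
# track which regions are still unread
sudo ddrescue -d /dev/sda5 /mnt/backup/sda5.img /mnt/backup/sda5.map

# Repair the filesystem in the image, not on the dying drive
sudo e2fsck -f /mnt/backup/sda5.img
```

Every additional e2fsck pass on the physical disk risks losing more data as the drive degrades; working on the image makes the repair repeatable and safe.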

ubuntu@ubuntu:~$ sudo smartctl -a /dev/sda5
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.11.0-27-generic] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Blue
Device Model:     WDC WD10SPZX-21Z10T0
Serial Number:    WD-WXS1A877RJED
LU WWN Device Id: 5 0014ee 6081e1249
Firmware Version: 02.01A02
User Capacity:    1.000.204.886.016 bytes [1,00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 1.5 Gb/s)
Local Time is:    Wed Nov 10 02:11:30 2021 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
See vendor-specific Attribute list for failed Attributes.

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                    was never started.
                    Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:        ( 7680) seconds.
Offline data collection
capabilities:            (0x71) SMART execute Offline immediate.
                    No Auto Offline data collection support.
                    Suspend Offline collection upon new
                    command.
                    No Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:    (   2) minutes.
Extended self-test routine
recommended polling time:    ( 223) minutes.
Conveyance self-test routine
recommended polling time:    (   1) minutes.
SCT capabilities:          (0x3035) SCT Status supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   001   001   051    Pre-fail  Always   FAILING_NOW 19950
  3 Spin_Up_Time            0x0027   194   187   021    Pre-fail  Always       -       1258
  4 Start_Stop_Count        0x0032   092   092   000    Old_age   Always       -       8733
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   093   093   000    Old_age   Always       -       5258
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       1222
191 G-Sense_Error_Rate      0x0032   001   001   000    Old_age   Always       -       31491
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       44
193 Load_Cycle_Count        0x0032   181   181   000    Old_age   Always       -       59346
194 Temperature_Celsius     0x0022   104   098   000    Old_age   Always       -       39
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   198   198   000    Old_age   Always       -       441
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
ATA Error Count: 219 (device log contains only the most recent five errors)
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 219 occurred at disk power-on lifetime: 5257 hours (219 days + 1 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 02 02 a8 03 e0  Error: UNC 2 sectors at LBA = 0x0003a802 = 239618

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  c8 00 02 02 a8 03 e0 08   2d+00:46:14.317  READ DMA
  ec 00 00 00 00 00 a0 08   2d+00:46:14.316  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 08   2d+00:46:14.316  SET FEATURES [Set transfer mode]
  ec 00 00 00 00 00 a0 08   2d+00:46:14.212  IDENTIFY DEVICE

Error 218 occurred at disk power-on lifetime: 5200 hours (216 days + 16 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 08 00 28 03 e0  Error: UNC 8 sectors at LBA = 0x00032800 = 206848

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  c8 00 08 00 28 03 e0 08      02:27:11.836  READ DMA
  e5 00 00 00 00 00 00 08      02:27:11.836  CHECK POWER MODE
  ec 00 00 00 00 00 a0 08      02:27:11.835  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 08      02:27:11.825  SET FEATURES [Set transfer mode]
  ec 00 00 00 00 00 a0 08      02:27:11.704  IDENTIFY DEVICE

Error 217 occurred at disk power-on lifetime: 5200 hours (216 days + 16 hours)
  When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:    ER ST SC SN CL CH DH    -- -- -- -- -- -- --    40 51 08 00 28 03 e0  Error: UNC 8 sectors at LBA = 0x00032800 = 206848      Commands leading to the command that caused the error were:    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name    -- -- -- -- -- -- -- --  ----------------  --------------------    c8 00 08 00 28 03 e0 08      02:26:57.401  READ DMA    e5 00 00 00 00 00 00 08      02:26:57.400  CHECK POWER MODE    ec 00 00 00 00 00 a0 08      02:26:57.384  IDENTIFY DEVICE    ef 03 46 00 00 00 a0 08      02:26:57.384  SET FEATURES [Set transfer mode]    Error 216 occurred at disk power-on lifetime: 5200 hours (216 days + 16 hours)    When the command that caused the error occurred, the device was active or idle.      After command completion occurred, registers were:    ER ST SC SN CL CH DH    -- -- -- -- -- -- --    40 51 08 80 27 03 e0  Error: UNC 8 sectors at LBA = 0x00032780 = 206720      Commands leading to the command that caused the error were:    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name    -- -- -- -- -- -- -- --  ----------------  --------------------    c8 00 08 80 27 03 e0 08      02:26:15.363  READ DMA    ec 00 00 00 00 00 a0 08      02:26:15.349  IDENTIFY DEVICE    ef 03 46 00 00 00 a0 08      02:26:15.337  SET FEATURES [Set transfer mode]    ec 00 00 00 00 00 a0 08      02:26:15.223  IDENTIFY DEVICE    Error 215 occurred at disk power-on lifetime: 5200 hours (216 days + 16 hours)    When the command that caused the error occurred, the device was active or idle.      
After command completion occurred, registers were:    ER ST SC SN CL CH DH    -- -- -- -- -- -- --    40 51 08 80 27 03 e0  Error: UNC 8 sectors at LBA = 0x00032780 = 206720      Commands leading to the command that caused the error were:    CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name    -- -- -- -- -- -- -- --  ----------------  --------------------    c8 00 08 80 27 03 e0 08      02:26:03.763  READ DMA    ec 00 00 00 00 00 a0 08      02:26:03.762  IDENTIFY DEVICE    ef 03 46 00 00 00 a0 08      02:26:03.759  SET FEATURES [Set transfer mode]    ec 00 00 00 00 00 a0 08      02:26:03.630  IDENTIFY DEVICE    SMART Self-test log structure revision number 1  No self-tests have been logged.  [To run self-tests, use: smartctl -t]    SMART Selective self-test log data structure revision number 1   SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS      1        0        0  Not_testing      2        0        0  Not_testing      3        0        0  Not_testing      4        0        0  Not_testing      5        0        0  Not_testing  Selective self-test flags (0x0):    After scanning selected spans, do NOT read-scan remainder of disk.  If Selective self-test is pending on power-up, resume after 0 minute delay.  

Can someone help me solve this issue? I have some files with no backup, and I wouldn't like to give up before trying everything to recover this hard disk.

Thank you.
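Before running more self-tests or mount attempts on a disk that is already reporting FAILING_NOW and hundreds of pending sectors, a common precaution is to image it first and recover files from the image. A sketch, assuming the disk is /dev/sdX and /mnt/backup is a healthy destination with enough free space:

```shell
# GNU ddrescue copies what it can on a first pass, then retries bad areas
# (-r3 = three retry passes, -d = direct disk access); the map file lets
# you resume if the failing disk drops off the bus mid-copy.
ddrescue -d -r3 /dev/sdX /mnt/backup/disk.img /mnt/backup/disk.map
```

The image can then be mounted loopback or scanned with file-recovery tools, leaving the failing disk powered off.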

how to fix the npm install fails with 'An unknown git error occurred' in one folder but works in another folder

Posted: 24 Nov 2021 06:06 AM PST

My server runs CentOS 8, and I installed the latest Node.js (v14.17.5) and npm (v7.21.0); neither has a .git folder in it.

I have a very strange issue. If I go to the folder example1.com and run npm install, I get the error below:

[root@www centos]# cd /var/www/example1.com/httpdocs/  [root@www chat]#  npm install  npm WARN old lockfile  npm WARN old lockfile The package-lock.json file was created with an old version of npm,  npm WARN old lockfile so supplemental metadata must be fetched from the registry.  npm WARN old lockfile  npm WARN old lockfile This is a one-time fix-up, please be patient...  npm WARN old lockfile  npm ERR! code 128  npm ERR! An unknown git error occurred  npm ERR! command git --no-replace-objects ls-remote ssh://git@github.com/Defipoolhub/node-time.git  npm ERR! git@github.com: Permission denied (publickey).  npm ERR! fatal: Could not read from remote repository.  npm ERR!  npm ERR! Please make sure you have the correct access rights  npm ERR! and the repository exists.    npm ERR! A complete log of this run can be found in:  npm ERR!     /root/.npm/_logs/2021-08-23T10_03_50_797Z-debug.log  

However, if I go to a different folder on the same server, with the same package.json, I get no issues at all. How can that be?

[root@www centos]# cd /var/www/example2.com/httpdocs/  [root@www chat]#  npm install    up to date, audited 224 packages in 3s    4 packages are looking for funding    run `npm fund` for details    4 vulnerabilities (2 low, 2 high)    To address all issues possible (including breaking changes), run:    npm audit fix --force    Some issues need review, and may require choosing  a different dependency.    Run `npm audit` for details.  

Any idea how to debug this issue? I already tried disabling SELinux, and that didn't solve the problem.
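Since npm is just shelling out to git here, one way to debug is to reproduce the exact lookup npm performs, with a verbose SSH transport, from each of the two folders and compare which identity files ssh offers (a debugging sketch, not a fix):

```shell
# Run the same lookup npm performs, with verbose SSH output; do this once
# in each folder and diff the "Offering public key" / auth lines.
cd /var/www/example1.com/httpdocs
GIT_SSH_COMMAND="ssh -v" git ls-remote ssh://git@github.com/Defipoolhub/node-time.git
```

If the offered keys differ between the two runs, something directory-dependent (environment, agent, or git/npm configuration) is changing the SSH identity.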

Location of global phonebook

Posted: 24 Nov 2021 05:06 AM PST

I have a question: where can I find the global phonebook in Windows 10?

When you create VPN connection with powershell or windows settings it will be saved into:

%AppData%\Roaming\Microsoft\Network\Connections\Pbk\rasphone.pbk

The problem is that when I use the -AllUserConnection parameter when creating the VPN (so other users can use the connection from the login screen too), it gets saved into a global phonebook, according to the MS docs.

It's not the same phonebook as in the location above: I can see the VPN connection in my network adapters, but it's not stored in that phonebook, whereas without that parameter it is stored there.

Any ideas where this global phonebook is stored?

I need it for my automation script.

Re-enable S5WakeOnLan setting for Realtek NIC driver that's disabled and hidden by inf file

Posted: 24 Nov 2021 05:06 AM PST

To be able to wake up a PC from the S5 'shut down' state, the OS has to prepare the NIC during system shutdown. For Realtek NICs, this is done by the "Realtek PCIe GBE Family Controller" driver, but only if the option S5WakeOnLan "Shutdown Wake-On-Lan" is enabled. This setting is located at HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318} in an unpredictable subkey between 0000 and 0012.

I have discovered that it's the driver's .inf file that controls what the default value for this option is, and whether it's even visible in the UI. In an old sample from 2008, the setting simply defaulted to '1' and was made visible for all models. Recent versions have categories for individual NIC models, like

  • [s5wol.reg] - visible and defaults to '1'
  • [s5wol.reg] - hidden and defaults to '0' (faulty? found in 2015-04-06)
  • [s5wolhidedisable.reg] - hidden and set to '0'
  • [s5wolhideenable.reg] - hidden and set to '1'
  • [s5wolhidetype2.reg] - hidden and set to '2' ?!?!

I have done a survey of the PCs I manage, and found that I'm dealing with all of the above. Initially, I just manually set S5WakeOnLan to 1 and it worked just fine, for over a year. A few old Vista PCs sometimes hang during power-on or reboot, which may be the reason why Realtek disabled it (or it might be caused by something unrelated). I was not able to find relevant documentation.

The problem is that every time the OS decides to reinstall the NIC's driver (upgrade to Win10, driver update, every major OS update, which seems to happen twice a year), the setting reverts to the default, and the PC can no longer be turned on remotely. I have already had to fix this several times across all the affected Win10 machines, and it's turning out to be an ongoing burden.

Half a year is enough for me to forget that this is still a thing, so when it does happen, it screws up my maintenance schedule, since I have to wait a day+ for the PC to be turned on physically, then patch the registry remotely, then wait again for the PC to be turned off and on, so that the registry change takes effect. I'd like to get rid of this issue, but so far the only workaround I came up with is a Group Policy Registry rule that shoves S5WakeOnLan = 1 into every one of those numeric subkeys.
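For reference, the Group Policy workaround described above amounts to pushing registry values shaped like this (a sketch; the numeric subkey 0007 is hypothetical, since the real index varies per machine, which is exactly why the GPO rule has to target every numeric subkey):

```
Windows Registry Editor Version 5.00

; Hypothetical subkey index — the actual 00NN subkey differs per machine.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007]
"S5WakeOnLan"="1"
```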

disable ssl for mysql client apps

Posted: 24 Nov 2021 04:06 AM PST

I have set up SSL for MySQL replication. The problem is that it causes problems for the other local apps that use MySQL.

Like postfix:

Jul 25 23:00:22 srv1 postfix/proxymap[3141]: warning: connect to mysql server 127.0.0.1: SSL connection error: unable to verify peer checksum  Jul 25 23:00:22 srv1 postfix/trivial-rewrite[3353]: warning: virtual_mailbox_domains: proxy:mysql:/etc/postfix/mysql-virtual_domains.cf: table lookup problem  Jul 25 23:00:22 srv1 postfix/trivial-rewrite[3353]: warning: virtual_mailbox_domains lookup failure  

or amavis:

Jul 25 23:08:12 srv1 amavis[5625]: (05625-01) (!)connect_to_sql: unable to connect to DSN 'DBI:mysql:database=dbispconfig;host=127.0.0.1;port=3306': SSL connection error: unable to verify peer checksum  

and also pureftp

Jul 25 23:02:42 srv1 pure-ftpd: (?@2a02:810c:XXXXXXXX) [ERROR] The SQL server seems to be down [SSL connection error: unable to verify peer checksum]  

Because I don't need local encryption, I want to disable it, but I don't know how. I have only set a cnf entry for the clients with:

[client]  #ssl-ca=/etc/letsencrypt/live/mydomain/chain.pem  #ssl-mode=DISABLED  ssl=0  

But without luck. For Postfix I found this note in the docs:

Postfix 3.1 and earlier don't read [client] option group settings unless a non-empty option_file or option_group value are specified. To enable this, specify, for example "option_group = client".

So I added the option_group setting to all /etc/postfix/mysql-*.cf files, but after restarting, the problem is the same.

When I disable SSL on the server, the problems are gone. But I want to keep SSL to secure the replication.

Any ideas?
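One possible direction (a sketch, with a hypothetical account name): rather than disabling SSL server-wide, keep it enabled but make it mandatory only for the replication account, so it stays encrypted even if local clients connect in the clear:

```sql
-- Require TLS only for the replication user;
-- local app accounts remain free to connect without SSL.
ALTER USER 'repl_user'@'%' REQUIRE SSL;
```

The local clients would still need their own ssl-disabling option to stop attempting (and failing) TLS negotiation, but replication security would no longer depend on that.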

Iptables rule to block http traffic not working

Posted: 24 Nov 2021 06:06 AM PST

sudo iptables -A INPUT -p tcp --destination-port 80 -j DROP  

It seems that this rule is not blocking the internet traffic coming from the subnetwork (10.0.0.*).

Blocking SSH and FTP works well.

Iptables

Chain INPUT (policy ACCEPT)  num  target     prot opt source               destination           1    DROP       tcp  --  anywhere             anywhere             tcp dpt:ftp  2    DROP       tcp  --  anywhere             anywhere             tcp dpt:ssh  3    DROP       tcp  --  anywhere             anywhere             tcp dpt:http    Chain FORWARD (policy ACCEPT)  num  target     prot opt source               destination           1    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED  2    ACCEPT     all  --  anywhere             anywhere                Chain OUTPUT (policy ACCEPT)  num  target     prot opt source               destination   
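If this machine is acting as a gateway for 10.0.0.*, web traffic from those hosts traverses the FORWARD chain rather than INPUT, and the listing above accepts everything in FORWARD. A sketch under that assumption:

```shell
# Drop forwarded HTTP from the LAN; inserted at position 1 so it is
# evaluated before the blanket ACCEPT rules already in FORWARD.
sudo iptables -I FORWARD 1 -s 10.0.0.0/24 -p tcp --dport 80 -j DROP
```

INPUT only sees traffic addressed to the firewall box itself, which is why the SSH/FTP rules (tested against the box) appear to work while HTTP to the internet does not.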

Automatically mount bucket with s3fs on boot

Posted: 24 Nov 2021 06:38 AM PST

I use an Amazon S3 bucket to deliver some of my server's content. I was able to mount it successfully, and grant Apache rights over it, but can't get it mounted properly at reboot.

I updated my /etc/fstab with this line, but nothing happens when I boot:

s3fs#my-bucket-name /mnt/s3_bucket fuse allow_other,umask=227,uid=33,gid=33,use_cache=/root/cache 0 0  

So, I tried another way: I commented out said line and just put my command line in /etc/init.d/local:

#!/usr/bin/env bash  s3fs -o allow_other,uid=33,gid=33,umask=227,use_cache=/root/cache my-bucket-name /mnt/s3_bucket  

... didn't work either.

I ended up setting up a cron job, and now it works, but it feels terribly hacky to me, and I wonder why mounting it at startup doesn't work.

//Crontab  */10 * * * * ~/mountBucket.sh 1>/dev/null    //Mount script  #!/usr/bin/env bash  if [[ -d /mnt/s3_bucket/someBucketVirtualDirectoryName ]] ; then echo 'Bucket already mounted' ; else s3fs -o allow_other,uid=33,gid=33,umask=227,use_cache=/root/cache my-bucket-name /mnt/s3_bucket ; fi 1>/dev/null  

Is there something I missed?
I'm using Ubuntu 14.04.4 LTS, with FUSE 2.9.2
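One common cause worth checking (a sketch under that assumption): fstab entries for network-backed filesystems are mounted before the network is up unless they are marked _netdev, so the s3fs mount can silently fail at boot:

```
# /etc/fstab — same entry with _netdev, so the mount waits for networking
s3fs#my-bucket-name /mnt/s3_bucket fuse _netdev,allow_other,umask=227,uid=33,gid=33,use_cache=/root/cache 0 0
```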


EDIT : Here is another unrelated, yet important performance issue I had to figure out by myself:

If your system includes locate and/or mlocate (and Ubuntu 14.04 does), you may want to add an exception so that it does NOT scan your bucket. I had to modify both /etc/updatedb.conf and /etc/cron.daily/locate, adding "/mnt/my-bucket-name" to PRUNEPATHS and "fuse.s3fs" to PRUNEFS. I suppose adding fuse.s3fs alone should be enough, but... no time to take risks right now :)

Properly manage iptables rules on Docker host

Posted: 24 Nov 2021 03:01 AM PST

I am using Gentoo and Docker. I have a bunch of my own iptables rules and keep them in /var/lib/iptables/rules-save. Docker adds a bunch of its own rules when it starts. It seems that iptables auto-saves to rules-save each time; however, I thought this file acted like /etc/iptables/rules.v4 from the iptables-persistent Ubuntu package.

My question is: how do I properly manage all the rules? Is it safe to load previously saved Docker rules before Docker starts? If Docker decides to change some rules, will that break in such a setup?

When I add a new rule, I do it manually with iptables -I, then edit /var/lib/iptables/rules-save and add it there too. I think it is not safe to just add the rule and reload all rules from that file, because of Docker. I need to add a rule to the DOCKER-ISOLATION chain and be sure this rule exists BEFORE any DOCKER-ISOLATION rules added by Docker, even if Docker restarts.

Please advise on how to safely manage iptables rules with Docker.
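For what it's worth, newer Docker releases (17.06 and later) create a DOCKER-USER chain that is evaluated before Docker's own forwarding rules and that Docker never flushes, precisely so user rules survive Docker restarts. A sketch with a hypothetical source network:

```shell
# Rules placed in DOCKER-USER persist across Docker restarts and are
# consulted before Docker's own DOCKER / DOCKER-ISOLATION rules.
iptables -I DOCKER-USER -s 203.0.113.0/24 -j DROP
```

On versions that provide it, this chain is the supported place for custom rules, instead of racing Docker for position inside DOCKER-ISOLATION.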

No 'locked for editing' message for an Excel file on a network share

Posted: 24 Nov 2021 04:40 AM PST

I get no 'locked for editing' message when accessing an Excel spreadsheet file on a network share (Windows SBS 2011 Standard) even though the file is open by a different user on a different workstation (Office 2013, Windows 8.1).

Is there any way of getting this fixed or should I just accept the fact that it won't work as intended?

We use Sophos; I've already tried excluding .xls files from real-time scanning, but the problem still persists.

LDAP auth fails for some users

Posted: 24 Nov 2021 04:06 AM PST

Something weird is happening: some users are not able to authenticate via our LDAP to access services (SSH connection, Samba, etc.).

Their entries are in the LDAP and everything seems to be fine, I made a comparison with a working LDAP entry created with the same scripts only 5 minutes before, and the only differences I can see are the IDs/timestamps (sambaPwdMustChange, etc...)

If I do a getent passwd | grep "username", I find their record, but ssh username@localhost fails (while, again, it works with other users created with the same script).

My log (/var/log/ldap/slapd.d) shows error messages:

Apr  9 14:09:48 je nslcd[3293]: [2fc6ce] lookup of user uid=someone,ou=People,dc=something,dc=com failed: Invalid credentials  

But when I check their password in the phpLDAPadmin interface, it DOES match the password I am entering. I am trying with a default password, 123456789, which works for other users created with the same script.

Any ideas?

EDIT 1

Authenticating with

ldapwhoami -vvv -D "uid=someone,ou=People,dc=something,dc=com" -x -W  

and the password in question does return a Success (0)

EDIT 2

The authentication of these users against the same LDAP does work on many apps, like a Dokuwiki and a Rails application using the devise ldap-authenticatable gem. Only SSH and Samba seem to have problems.
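Since the "Invalid credentials" line comes from nslcd rather than slapd, and a direct LDAP bind succeeds (EDIT 1), running nslcd in the foreground with debugging may show what it does differently for the failing accounts. A debugging sketch:

```shell
# Stop the daemon, then run it in the foreground with debug output;
# retry 'ssh username@localhost' from another terminal and watch the trace
# for the exact search and bind nslcd performs.
service nslcd stop
nslcd -d
```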

My public website name and AD domain name are the same. How can I get to my external website from inside my network?

Posted: 24 Nov 2021 05:46 AM PST

I am using my domain example.org in my firm. I can use www.example.org to view my website. If I try http://example.org from outside my firm, there is no problem, but if I try it from inside, my Windows DNS servers deliver the IPs of the domain controllers.

How can I solve this? Can I prevent my DCs from registering as example.org in my DNS, and will this be a problem for my environment?

How to Restrict a user account to single session via VPN to Windows 2008 R2 server?

Posted: 24 Nov 2021 03:01 AM PST

I set up a VPN server and am going to create an account for each user, but I don't want a user signing in to the server from more than one place at a time. I have seen some VPN hosting providers on the web offer VPN accounts while preventing multiple logins by a single user, but I couldn't find any solution for it. How can we do it too?

BackupPC - why does it use rsync --sender --server ...?

Posted: 24 Nov 2021 06:28 AM PST

I'm in the process of experimenting with BackupPC on a CentOS 5.5 server. I have pretty much everything set up with default values.

I tried setting up a basic backup for a host's /www directory. The backup fails with the following errors:

full backup started for directory /www  Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/  Xfer PIDs are now 30395  Read EOF: Connection reset by peer  Tried again: got 0 bytes  Done: 0 files, 0 bytes  Got fatal error during xfer (Unable to read 4 bytes)  Backup aborted (Unable to read 4 bytes)  Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)  

First of all, yes, I have my SSH keys set up to allow me to SSH to the target server without requiring a password.

In the process of troubleshooting, I tried the above SSH command directly from the command line, and it hangs. Looking at the end of the SSH debug messages, I get:

debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/  Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0  

Next I started looking at the rsync flags. I did not recognize --server and --sender. Looking at the rsync man page, sure enough, I don't see anything about --server or --sender in there. What are those for?
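(For comparison: in an ordinary client-side rsync pull, these undocumented flags show up in the command rsync itself constructs for the remote side, which a verbose SSH transport makes visible. A debugging sketch with the question's host/path:)

```shell
# ssh -v reveals the remote command rsync sends, which includes the
# internal --server/--sender flags used for its client/server protocol.
rsync -av -e 'ssh -v' root@target:/www/ /tmp/www-copy/ 2>&1 | grep -i 'Sending command'
```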

Looking at the BackupPC config I have this:

RsyncClientPath = /usr/bin/rsync  RsyncClientCmd = $sshPath -q -x -l root $host $rsyncPath $argList+  

And for the arguments, I have the following listed:

--numeric-ids  --perms  --owner  --group  -D  --links  --hard-links  --times  --block-size=2048  --recursive  

Notice there is no --server, --sender, or --ignore-times. Why are these getting added in? Is this part of the problem?

Prevent port change on redirect in nginx

Posted: 24 Nov 2021 06:35 AM PST

I currently have nginx set up to serve content through Varnish. Nginx listens on port 8000, and Varnish forwards users' requests from port 80 to 8000.

The problem is that on some occasions, particularly when hitting a directory like site.com/2010, nginx redirects the request to site.com:8000/2010/.

How can I prevent this?
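One direction worth trying (a sketch; the directive below is what nginx provides for this case): the trailing-slash redirect for directories includes the listen port unless told otherwise.

```nginx
server {
    listen 8000;
    # Keep the listening port out of nginx-generated redirects,
    # e.g. the automatic trailing-slash redirect for directories.
    port_in_redirect off;
}
```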
