Saturday, July 3, 2021

Recent Questions - Server Fault


Change number that stays the same in IP Address Range

Posted: 03 Jul 2021 11:17 PM PDT

I have a network with a firewall that failed, and I would like to see if I can continue using the network without the firewall until I can get a new one.

The ISP router has an IP address of 192.168.100.1, but my network is already set up for the range 192.168.1.1 to 192.168.1.254. The default gateway on all the nodes was set to the firewall's IP address of 192.168.1.253.

I would like to plug the ISP router directly into my switch and still keep the IP address range of 192.168.1.1 to 254 (as opposed to switching to the range 192.168.100.1 - 254).

How can I achieve this?

Unable to log in to VM using RDP

Posted: 03 Jul 2021 10:37 PM PDT

I am unable to log in to the VM using RDP. I have tried various options with no luck.

I created the Azure VM using the portal, but I am unable to log in to the VM using RDP. I have restarted and redeployed it; nothing is working.

When I reset the password, it failed with "provisioningState": "Failed". I am not sure what to do next.
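
A first diagnostic step that may help (a sketch, assuming the Azure CLI is available; the resource group and VM names below are placeholders): ask Azure for the VM's instance view, which shows the provisioning and power states and which extension failed.

# show provisioning and power state for the VM (names are placeholders)
az vm get-instance-view --resource-group myResourceGroup --name myVM \
    --query instanceView.statuses --output table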

How can I forward smartd alerts to multiple emails?

Posted: 03 Jul 2021 06:03 PM PDT

I want to forward smartd alerts to multiple email addresses. According to the smartd.conf man page,

To send email to more than one user, please use the following "comma separated" form for the address: user1@add1,user2@add2,...,userN@addN (with no spaces).

However, on Ubuntu 18.04 with smartmontools release 6.6, if I drop a line like this into smartd.conf and then run systemctl restart smartd,

 DEVICESCAN -M test -d removable -n standby -m root,user@gmail.com -M exec /usr/share/smartmontools/smartd-runner  

I do receive email in root's system mailbox, but not at the Gmail address. However, if I use just the Gmail address, it works. Is this a bug in handling multiple email addresses, or am I missing something?
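
A possible workaround (a sketch, not from the question): since smartd hands the -m value to the system mailer, you can point -m at a single local alias and let the MTA expand it to several recipients. The alias name smartd-alerts below is hypothetical.

# /etc/aliases -- hypothetical alias fanning out to several recipients
smartd-alerts: root, user@gmail.com

After editing /etc/aliases, rebuild the alias database with newaliases and use -m smartd-alerts in smartd.conf.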

How do I get smartctl offline test results emailed?

Posted: 03 Jul 2021 05:57 PM PDT

I want to schedule smartctl tests like smartctl -t long /dev/sdc and then have the test results emailed to an email address. Is there any way I can do that?
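
A minimal sketch using cron and the mail command (the schedule, device, and address are assumptions; the second job simply runs after the test has had time to finish, since smartctl -t returns immediately):

# /etc/cron.d/smart-longtest (hypothetical)
# start a long self-test early Sunday morning
0 1 * * 0 root /usr/sbin/smartctl -t long /dev/sdc >/dev/null
# mail the full report a few hours later, once the test should be done
0 9 * * 0 root /usr/sbin/smartctl -a /dev/sdc | mail -s "SMART report /dev/sdc" admin@example.com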

What do Offline_Uncorrectable and Current_Pending_Sector mean in the context of SMART tests?

Posted: 03 Jul 2021 05:35 PM PDT

SMART shows there are 6 Offline_Uncorrectable sectors and 1 Current_Pending_Sector on one of my drives. What do these parameters mean? I could not find a detailed explanation by Googling. Also, is there any way to know exactly which sectors are the problematic ones?

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       3
  3 Spin_Up_Time            0x0027   152   152   021    Pre-fail  Always       -       9375
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       49
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   026   026   000    Old_age   Always       -       54193
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       49
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       38
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       10
194 Temperature_Celsius     0x0022   123   100   000    Old_age   Always       -       29
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       6
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       7
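
On the "which sectors" part, a hedged sketch: a long self-test logs the LBA of the first error, and individual LBAs can then be probed directly. hdparm's --read-sector is a real option, but the LBA below is only an example value.

# run a long self-test; its log reports the LBA of the first error
smartctl -t long /dev/sdc
smartctl -l selftest /dev/sdc

# probe one sector directly (example LBA); a read error here confirms it is bad
hdparm --read-sector 3884004178 /dev/sdc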

LTO windows tape drivers vs IBM drivers in Win10/LTFS

Posted: 03 Jul 2021 04:42 PM PDT

I use two IBM 3580 HH6 LTO drives, in both Windows 7 and Windows 10. The performance in Windows 7 (same machine) is about 20% faster.

I have read here and there that it is best to use the IBM drivers rather than the Windows ones. I tried to install them in Windows 10, but I got an "incompatible with this version of Windows" message. I then installed them manually from the Windows system management panel; the driver does install, but it shows a warning sign on the drive, as if it does not work properly. I am not saying which version I tried, because I tried many, but I can provide the details. I just wanted to know if anyone has experience switching to the IBM drivers on a non-server Windows, and whether they are worth the hassle of installing. Thanks

Sendmail authenticating with DKIM but Roundcube is not authenticating

Posted: 03 Jul 2021 08:04 PM PDT

So I have set up the mail server, and Roundcube and Sendmail both work as expected.

But many of my emails were going to spam in Gmail and other providers, so I set up DKIM authentication, and that was successful.

[Side note] For some reason, I set it up so SMTP uses port 25 instead of 587 (which other people recommended), so I don't know if that causes any issues.

First I tested it with Roundcube and sent an email to my Gmail account. When I click on the See Original section of the Gmail email, it doesn't show DKIM: 'PASS' with domain mydomain.com.

But when I send from the terminal with Sendmail, it does show DKIM: 'PASS' with domain mydomain.com.

What am I doing wrong? Does Roundcube have a plugin to enable DKIM?

https://pastebin.pl/view/5d87eb76 <- that is the Original Message for both Sendmail and Roundcube.

SMART offline test terminated with read failure while individual attributes are above the threshold

Posted: 03 Jul 2021 04:22 PM PDT

I ran an extended SMART test on a drive today, which terminated early reporting a read failure. However, if I look at the individual attributes, they are all above their thresholds. Should I be worried?

Currently, the offline test just reports the LBA of the first failure. Is there any way to make the offline test run to completion in spite of the failures?

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       3
  3 Spin_Up_Time            0x0027   152   152   021    Pre-fail  Always       -       9375
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       49
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   026   026   000    Old_age   Always       -       54191
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       49
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       38
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       10
194 Temperature_Celsius     0x0022   123   100   000    Old_age   Always       -       29
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       6
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       7

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%     54190         3884004178
# 2  Short offline       Completed without error       00%     54189         -
# 3  Extended offline    Completed without error       00%     53801         -
# 4  Short captive       Completed without error       00%      1530         -

In addition, my system email is flooded with a bunch of these messages:

>U   1 root               Fri Jun 18 01:40  32/1107  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U   2 root               Mon Jun 21 02:10  31/1081  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U   3 root               Tue Jun 22 01:40  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U   4 root               Tue Jun 22 02:10  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U   5 root               Wed Jun 23 01:40  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U   6 root               Wed Jun 23 02:10  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U   7 root               Thu Jun 24 01:40  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U   8 root               Thu Jun 24 02:40  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U   9 root               Fri Jun 25 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U  10 root               Fri Jun 25 02:40  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U  11 root               Sat Jun 26 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U  12 root               Sat Jun 26 02:40  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U  13 root               Sun Jun 27 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U  14 root               Sun Jun 27 02:40  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U  15 root               Mon Jun 28 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U  16 root               Mon Jun 28 02:40  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U  17 root               Tue Jun 29 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U  18 root               Tue Jun 29 02:40  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U  19 root               Wed Jun 30 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U  20 root               Wed Jun 30 02:40  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U  21 root               Thu Jul  1 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
 U  22 root               Thu Jul  1 03:10  31/1082  SMART error (OfflineUncorrectableSector) detected on host: myhost
 U  23 root               Fri Jul  2 02:10  31/1085  SMART error (CurrentPendingSector) detected on host: myhost
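
On the "run to completion" part, a hedged sketch: smartctl supports selective self-tests over an LBA range, so testing can be resumed manually past the failing LBA from the log (the device name is a placeholder):

# test only the span after the reported failure at LBA 3884004178
smartctl -t select,3884004179-max /dev/sdc
# then inspect the selective self-test log
smartctl -l selective /dev/sdc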

Why does my Linux Kernel have missing directories / files that are crucial for ip_tables to run?

Posted: 03 Jul 2021 04:03 PM PDT

As mentioned in the title, my Linux kernel seems to be missing files/directories that are crucial for iptables to run properly. I'm able to temporarily resolve this by reinstalling my kernel modules, but it's not a permanent fix, as after a reboot I'm back to where I started.

When I run iptables -L, I receive an error saying "Perhaps iptables or your kernel needs to be upgraded." I've found I'm able to resolve this error by running sudo apt-get install --reinstall linux-modules-5.8.0-59-generic. I've noticed that after the reinstall I have additional files and directories in my /lib/modules/5.8.0-59-generic directory, which leads me to believe that my kernel is missing these by default, preventing iptables from functioning properly. After a reboot, the files/directories go missing again and iptables stops working.

Is it possible to reinstall the kernel modules permanently, so I don't have to reinstall them after every reboot to get iptables to work?

I'm running Ubuntu 20.04.2, and as mentioned above my kernel is 5.8.0-59-generic. I appreciate any assistance I can get!

edit:

The output I get from ls /boot/vmlinuz* is: [vmlinuz output]: https://i.stack.imgur.com/dDroe.png

And the output I get from apt-cache policy linux-image-generic is: [apt-cache output]: https://i.stack.imgur.com/OY9Cj.png

The computer that is running is a Dell Optiplex 3020 with specs of:

  • CPU: Quad Core Intel(R) Core(TM) i5-4570 CPU @ 3.20 GHz
  • RAM: 8 GB
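
A hedged diagnostic sketch to narrow down what disappears across reboots (module paths as in stock Ubuntu kernels; dpkg -V verifies a package's files against the dpkg database):

uname -r                                                 # confirm which kernel is running
ls /lib/modules/$(uname -r)/kernel/net/ipv4/netfilter/   # are the netfilter modules on disk?
sudo modprobe ip_tables && lsmod | grep ip_tables        # can the module load right now?
dpkg -V linux-modules-$(uname -r)                        # report missing or altered package files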

Upgrade from 2012 Standard to 2019 Essentials

Posted: 03 Jul 2021 08:38 PM PDT

We have a client with a very old physical server running 2012 Standard. We want to upgrade to 2019 Essentials, since they will not need more than 25 CALs. Is there any way to do an in-place upgrade (even in multiple steps) so we do not have to reinstall all of the software? I have tried several different approaches, but the option to keep current files is always disabled.

Less aggressive garbage collection in Windows server [closed]

Posted: 03 Jul 2021 04:57 PM PDT

I have a web request that uses around 400 MB of memory and takes about four seconds if there is no GC. When the Visual Studio profiler shows that a GC takes place during the request, it takes five to six seconds. When the GC is triggered, the profiler shows only one GB of memory in use. During stress testing, Dynatrace shows that only 14 GB out of 32 GB is ever used, and the average response time goes up to sixteen seconds. Is there any way to configure a less aggressive GC on a Windows server? More likely it would be a CLR setting.

Edit: I was given permission to access Dynatrace on the servers and found that there are no GC issues, so this is no longer an issue. It appears the task is CPU-bound, and the reports from the stress tests did not take that into account when calling out the excessive response duration.

Do network ACLs block inter-subnet traffic as well?

Posted: 03 Jul 2021 05:13 PM PDT

I have VMs placed in different AZs on AWS. In order to do this, you need a subnet in each AZ.

If I'm creating a network ACL for the entire setup (i.e., to be associated with all subnets), do I need to specify allow rules for all the subnet CIDR ranges? If I don't, will the network ACL block inter-subnet traffic based on my port rules?

I'm assuming it will, but I want confirmation.
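
For reference, a sketch of the kind of rule in question (the ACL ID and VPC CIDR are placeholders): a single allow entry covering the VPC's own CIDR is the usual way to keep inter-subnet traffic open under a shared network ACL.

# allow all traffic from the VPC's own CIDR (placeholders throughout)
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0abc123 \
  --rule-number 100 \
  --protocol=-1 \
  --cidr-block 10.0.0.0/16 \
  --rule-action allow \
  --ingress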

rsync: [generator] failed to set permissions : Operation not supported (95)

Posted: 03 Jul 2021 04:05 PM PDT

I want to correctly virtualize Android 10 on top of my Jetson Nano (arm64) using QEMU and KVM on Ubuntu 18.04. This is the tutorial that I'm following:

https://github.com/antmicro/kvm-aosp-jetson-nano

Everything went well until this command:

sudo rsync -avxHAX system-r{o,w}/  

Something is not right, because I get a lot of errors when I transfer the files and permissions from the source to the destination path (both are on the same disk and on the same ext4 partition). You can see the full log with the errors here:

https://pastebin.ubuntu.com/p/W9GjPCt8G4/

The consequence of these errors is that when I try to emulate Android with QEMU like this:

qemu-system-aarch64 \
  -enable-kvm \
  -smp 4 \
  -m 2048 \
  -cpu host \
  -M virt \
  -device virtio-gpu-pci \
  -device usb-ehci \
  -device usb-kbd \
  -device virtio-tablet-pci \
  -usb \
  -serial stdio \
  -display sdl,gl=on \
  -kernel aosp/Image \
  -initrd aosp/ramdisk.img \
  -drive index=0,if=none,id=system,file=aosp/system.img \
  -device virtio-blk-pci,drive=system \
  -drive index=1,if=none,id=vendor,file=aosp/vendor.img \
  -device virtio-blk-pci,drive=vendor \
  -drive index=2,if=none,id=userdata,file=aosp/userdata.img \
  -device virtio-blk-pci,drive=userdata \
  -full-screen \
  -append "console=ttyAMA0,38400 earlycon=pl011,0x09000000 drm.debug=0x0 rootwait rootdelay=5 androidboot.hardware=ranchu androidboot.selinux=permissive security=selinux selinux=1 androidboot.qemu.hw.mainkeys=0 androidboot.lcd.density=160"

This is the error that I get:

[ 2.532754] init: init first stage started!
[ 2.535936] init: [libfs_mgr]ReadFstabFromDt(): failed to read fstab from dt
[ 2.540632] init: [libfs_mgr]ReadDefaultFstab(): failed to find device default fstab
[ 2.546246] init: Failed to fstab for first stage mount
[ 2.549616] init: Using Android DT directory /proc/device-tree/firmware/android/
[ 2.555116] init: [libfs_mgr]ReadDefaultFstab(): failed to find device default fstab
[ 2.560762] init: First stage mount skipped (missing/incompatible/empty fstab in device tree)
[ 2.566906] init: Skipped setting INIT_AVB_VERSION (not in recovery mode)
[ 2.571227] init: execv("/system/bin/init") failed: No such file or directory
[ 2.593768] init: #00 pc 00000000000e90a0 /init
[ 2.599958] reboot: Restarting system with command 'bootloader'

I've edited my /etc/fstab file like this:

UUID=84d024e0-c8c7-42c0-ad3e-c3e0c1cacdb7 / ext4 acl,user_xattr,noatime,errors=remount-ro 0 1  

and also like this:

UUID=84d024e0-c8c7-42c0-ad3e-c3e0c1cacdb7 / ext4 defaults,acl,user_xattr,noatime,errors=remount-ro 0 1  

but the error is still there:

sending incremental file list
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/bin": Operation not supported (95)
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/bugreports": Operation not supported (95)
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/charger": Operation not supported (95)
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/d": Operation not supported (95)
.....
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/system/usr/icu": Operation not supported (95)

sent 109,493 bytes  received 1,223 bytes  221,432.00 bytes/sec
total size is 1,354,488,586  speedup is 12,233.90
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1330) [sender=3.2.3]

This is also interesting:

root@Z390-AORUS-PRO:/home/ziomario/Scrivania/antmicro/aosp_images# sudo mount -o remount,acl /
root@Z390-AORUS-PRO:/home/ziomario/Scrivania/antmicro/aosp_images# sudo rsync -avxHAX system-r{o,w}/

sending incremental file list
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/bin": Operation not supported (95)
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/bugreports": Operation not supported (95)
rsync: [generator] failed to set permissions on "/home/ziomario/Scrivania/antmicro/aosp_images/system-rw/charger": Operation not supported (95)

and so on.

Does anyone know why I get these errors and how I can fix them? Thanks.
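
One hedged way to narrow this down (paths from the question; requires the acl and attr packages): check whether the destination filesystem accepts ACLs and extended attributes at all, since rsync -A and -X need both.

cd /home/ziomario/Scrivania/antmicro/aosp_images
touch xattr-test
setfacl -m u:nobody:r xattr-test        # does the filesystem accept an ACL?
setfattr -n user.test -v 1 xattr-test   # does it accept a user xattr?
getfattr -d xattr-test                  # read the xattr back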

User is not authorized to perform: iam:PassRole on resource

Posted: 03 Jul 2021 05:02 PM PDT

I'm attempting to create an EKS cluster through the AWS CLI with the following command:

aws eks create-cluster --name ekCsluster --role-arn arn:aws:iam::111111111111:role/eksServiceRole --resources-vpc-config subnetIds=subnet-1,subnet-2,subnet-3,subnet-4,subnet-5,subnet-6,securityGroupIds=sg-1  

And I get the following error:

An error occurred (AccessDeniedException) when calling the CreateCluster operation: User: arn:aws:iam::111111111111:user/userName is not authorized to perform: iam:PassRole on resource: arn:aws:iam::111111111111:role/eksServiceRole  

However, I've created a permission policy, AssumeEksServiceRole, and attached it directly to the user arn:aws:iam::111111111111:user/userName:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::111111111111:role/eksServiceRole"
        }
    ]
}

In the eksServiceRole role, I've defined the trust relationship as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/userName"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

What am I missing? How can I go about debugging this error message? Thanks for any and all help.
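
One hedged debugging avenue (ARNs copied from the question): IAM's policy simulator can report whether the effective policies allow the call, which helps distinguish a missing permission from, say, a permissions boundary or an explicit deny elsewhere.

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::111111111111:user/userName \
  --action-names iam:PassRole \
  --resource-arns arn:aws:iam::111111111111:role/eksServiceRole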

Windows Error Reporting stopped one service?

Posted: 03 Jul 2021 06:00 PM PDT

I have a problem with one service on win2k8.

One of my services terminated unexpectedly. While going through the event logs, I found the following:

Event 7036
Source - service control manager
The Windows Error Reporting Service service entered the running state.

After this log, it shows that our application service terminated unexpectedly:

Event 7034
Source - service control manager
LK service terminated unexpectedly. It has done this time(s).

Next, Windows Error Reporting entered the stopped state:

Event 7036
Source - service control manager
The Windows Error Reporting Service service entered the Stopped state.

What made the service terminate unexpectedly? Can WER itself cause a service to terminate?

LDAP - Get CN, DC, OU for logged in account Windows

Posted: 03 Jul 2021 11:01 PM PDT

I am not very familiar with LDAP, but I need to add LDAP authentication to an existing application. I am trying to test LDAP authentication using ldapsearch, but it keeps failing; I suspect it's because I'm using incorrect CN, DC, and OU values.

I am currently logged into my domain account on Windows. Can I obtain the required parameters from this logged-in user and use them as the bindDN?
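
A sketch for the bindDN question: on a domain-joined Windows machine, whoami can print the logged-in user's distinguished name, which has exactly the CN/OU/DC shape a bindDN needs (the output shown in the comment is an invented example).

REM print the current user's distinguished name (domain-joined machines only)
whoami /fqdn
REM prints something like: CN=Jane Doe,OU=Staff,DC=example,DC=com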

Windows 10 network mapping using server name in hosts file

Posted: 03 Jul 2021 09:06 PM PDT

I want to map a Samba shared folder on my Windows 10 Home PC. The server is Linux (CentOS 7) with Samba 4.4.4.
If I use the server's IP address, it works fine; however, if I create an entry in the hosts file to name my server, I get a path-not-found error.

First, a simple net view works:

net view \\192.168.0.10  

I added the following to my hosts file:

192.168.0.10 myserver  

But I got the following result:

net view \\myserver

System error 53 has occurred.
The network path was not found.

Pinging the server using myserver works fine.

UPDATE

Using the IP I can access the server and the Get-SMBConnection result is:

PS C:\WINDOWS\system32> Get-SMBConnection

ServerName   ShareName UserName              Credential              Dialect NumOpens
----------   --------- --------              ----------              ------- --------
192.168.0.20 IPC$      DEVELOPER-PC-01\vilma DEVELOPER-PC-01\unixmen 3.1.1   1

Using the server name, I cannot even browse the server.

SCSI virtual disks not showing up on RHEL 7.X KVM guests

Posted: 03 Jul 2021 06:00 PM PDT

For whatever reason, I can't get my virtual disks to show up on any of my RHEL 7.x guests (libvirt + KVM). The XML is configured just like my other guests, so I know it isn't an issue on that end. It almost seems like my VMs are missing a SCSI driver, but it's difficult to tell.

/proc/scsi/scsi has no entries in it, and none of the disks are in /dev or /dev/disk/by-*. I'm not exactly sure what I should be looking for, so if anyone has any ideas about why this would happen, please let me know.
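
A hedged diagnostic sketch from inside a guest (assumes a virtio-scsi controller in the guest XML; host0 is a placeholder for whatever host appears under /sys/class/scsi_host/):

lsmod | grep -e virtio_scsi -e sd_mod                    # is the guest SCSI driver loaded?
sudo modprobe virtio_scsi                                # try loading it by hand
ls /sys/class/scsi_host/                                 # any SCSI hosts present at all?
echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan  # force a bus rescan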

How to increase the Apache mod_proxy Jetty 5-minute timeout

Posted: 03 Jul 2021 07:03 PM PDT

We use Apache and Jetty to install components behind a firewall. Some actions take a while (10-15 minutes). Apache is the proxy, and Jetty is the proxy target on some machines. Everything works fine for actions taking less than 5 minutes. Actions taking longer than 5 minutes fail with a 502 proxy error.

I have seen some similar topics, and the advice was to define timeout and keepalive; neither helped.

Our setup is:

Windows 2012 R2, Apache 2.4.9, Jetty 7

I initially forgot to mention that there is a firewall between the Apache and the Jetty.

In apache httpd.conf we have:

ProxyPassMatch       ^/([a-z0-9\-]+)/(.+)$ http://$1:3000/$2       timeout=3000 ttl=3000 Keepalive=On  

We hoped that timeout=3000 (3000 seconds) would keep Apache waiting about 50 minutes for the response from Jetty. Keepalive and ttl were trials...

On Jetty we call a simple Groovy script that simply sits and waits for a long time. If the wait time is small, this works as expected. If the wait time is beyond 5 minutes, we get an error.

Apache access log (the request starts at 17:25):

xxx.xxx.xxx.xxx- - [02/Apr/2016:17:25:47 +0200] "GET /server/scripts/waitlong.groovy HTTP/1.1" 502 445 "-" 300509428 "-" "10.119.1.20" 10.119.1.20 3000  

As you can see, the duration is about 5 minutes (300509428 µs) and thus a timeout; it should have lasted 10 minutes.

Apache error log (the request times out at 17:30):

[Sat Apr 02 17:30:47.815050 2016] [proxy_http:error] [pid 11656:tid 12736] (OS 10060)A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.  : [client 10.119.1.20:60466] AH01102: error reading status line from remote server w-bidb1hf:3000
[Sat Apr 02 17:30:47.815050 2016] [proxy:error] [pid 11656:tid 12736] [client 10.119.1.20:60466] AH00898: Error reading from remote server returned by /w-bidb1hf/scripts/waitlong.groovy

Any ideas on how to keep Apache waiting longer?
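
A sketch of the directives usually involved (Timeout and ProxyTimeout are real Apache 2.4 directives; the 3600-second value is only an example, and an intermediate firewall may still drop idle connections regardless of Apache's own settings):

# httpd.conf -- example values, not a verified fix
Timeout 3600
ProxyTimeout 3600
ProxyPassMatch ^/([a-z0-9\-]+)/(.+)$ http://$1:3000/$2 timeout=3600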

Windows Service with dependency cannot start automatically

Posted: 03 Jul 2021 08:02 PM PDT

I have Service B, which depends on Service A. Both services are set to Automatic (Delayed Start) on boot, and both have a recovery action of Restart for the first, second, and subsequent failures.

The scenario is:

  1. Windows is booting.
  2. It tries to start Service A.
  3. Service A crashes upon start because it cannot initialize (e.g. connect to remote database).
  4. The recovery action kicks in, and Windows keeps restarting Service A at intervals.
  5. Service A finally starts fine (e.g. remote db is now accessible).

And that's it; Windows doesn't bother to start Service B, despite it having the Automatic (Delayed Start) startup type. I'm a bit confused by this behavior. Is there anything I can do to make Windows start Service B?
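
One hedged avenue (the sc syntax is real; whether it covers this SCM behavior is exactly the open question): make Service B's recovery actions fire on failed starts as well, not only on crashes, so its own restart logic can retry once Service A is up. Service names are taken from the question.

REM make Service B's recovery actions apply to failed starts, not just crashes
sc config ServiceB depend= ServiceA
sc failureflag ServiceB 1
sc failure ServiceB reset= 86400 actions= restart/60000/restart/60000/restart/60000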

Limit Number of TCP connections in Linux Server, to avoid attack

Posted: 03 Jul 2021 08:02 PM PDT

I want to limit the number of TCP connections on a Linux server, and I have used the following command:

iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 25 --connlimit-mask 32 -j REJECT --reject-with tcp-reset

It seems like something is wrong, and the desired results are not coming. I get the number of active connections using the following command:

netstat -n | grep ':80' | awk -F' ' '{print $5}' | awk -F':' '{print$1}' | sort | uniq -c | sort -n

Now, when I run the above command, I get the following results:

 44 122.179.103.8
 45 107.167.107.123
 46 120.60.76.201
 48 122.162.172.182
 49 183.87.48.105
 51 122.161.241.33
 71 198.72.112.97
 98 122.168.167.114
103 122.177.169.21
134 106.51.130.193
137 122.165.226.196

As you can see, there are more active TCP connections than the allowed limit of 25. Can someone please help me with the correct command, or explain what is going wrong?
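
A hedged observation with a sketch: connlimit only counts connections that exist when a new SYN arrives, while the netstat pipeline above counts sockets in every state (including TIME_WAIT and FIN_WAIT), so the two numbers measure different things. Counting only established connections gives a fairer comparison:

# count only ESTABLISHED connections per source IP on port 80
ss -tn state established '( sport = :80 )' \
  | awk 'NR>1 {split($4,a,":"); print a[1]}' | sort | uniq -c | sort -n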

How can I dump nginx requests for a specific location while nginx is secured?

Posted: 03 Jul 2021 07:03 PM PDT

I want to dump all requests that nginx is getting for a specific location, so I can debug a strange problem that I have.

Usually tcpdump would be the solution, but remember that nginx is accessed over HTTPS, so dumping encrypted packets wouldn't be useful.

Note: in fact, I am mostly interested in dumping all headers, as I need to find out whether any proxy modified the requests made by the client.

Obviously, I already used Wireshark and Charles on the client side, but I came to the conclusion that what reaches the server is different from what was sent by the client.
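
A sketch of one approach (log_format and the $http_* variables are standard nginx; the header list and log path are assumptions, and nginx has no built-in variable that dumps every header):

# in the http {} block
log_format dump_headers '$remote_addr "$request" '
                        'host="$http_host" ua="$http_user_agent" '
                        'xff="$http_x_forwarded_for" via="$http_via"';

# in the relevant server {} block
location /problem-path/ {
    access_log /var/log/nginx/problem-headers.log dump_headers;
    # ... existing proxy/fastcgi configuration ...
}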

Create route through OpenVPN via specific IP address / virtual interface on Linux

Posted: 03 Jul 2021 04:02 PM PDT

I have a Linux server at home, on which I run an OpenVPN client connected to some server on the Net. What I want to achieve is this: I want my home server to expose an interface (e.g. an IP address) which I can set as the default gateway on another machine in my local network, which will then route traffic through the OpenVPN tunnel.

For example, if my home server has the internal IP 192.168.1.1, the OpenVPN IP 10.0.1.1, my external server has the OpenVPN IP 10.0.1.2 and public IP 1.2.3.4, while another computer on my network has the internal IP 192.168.1.2, I would want a traceroute to public IP 9.8.7.6 like this:

(192.168.1.2) => (192.168.1.1 > 10.0.1.1) => (10.0.1.2, 1.2.3.4) => ... => (9.8.7.6)

where each (.*) represents one computer. I have searched the net and haven't found a similar setup yet. The idea behind this is to have one stable (always-up) VPN tunnel instead of having to install it on all the machines. I'm guessing this has to be accomplished with iptables, but I am currently at a loss as to what needs to be done.
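
A minimal sketch of the usual ingredients on the home server, assuming the LAN NIC is eth0 and the VPN interface is tun0 (both names are assumptions); return traffic is handled here by masquerading behind the tunnel address:

# let the home server forward packets
sysctl -w net.ipv4.ip_forward=1

# pass LAN traffic into the tunnel and masquerade it behind the VPN address
iptables -A FORWARD -i eth0 -o tun0 -s 192.168.1.0/24 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o tun0 -s 192.168.1.0/24 -j MASQUERADE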

VPN user restricted login to workstations cannot login to VPN server

Posted: 03 Jul 2021 05:02 PM PDT

We have a vendor that requires Domain Admin access on the servers where their software is deployed. (Obviously we want to restrict them to only being able to log in to the servers where their software is deployed.) In AD, we have used the "Log On To..." setting to restrict that user to those particular servers.

However, our VPN (SonicWall NSA 2400) cannot authenticate the user when restricted servers are set. It returns: "80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 531, v1db1". According to this, the error means the SonicWall is not a permitted workstation. I have added the IP of the SonicWall to the allowed workstations, but it has not removed the error. When I change the logon restriction back to all workstations, the user is allowed to log in to the VPN, and the SonicWall reports the login as successful.

Is there a way I can get the Sonicwall to authenticate the user while still keeping the restricted login? I am open to alternatives to our method.

Dropping incoming requests for a specific file with iptables

Posted: 03 Jul 2021 11:01 PM PDT

The server is a standard LAMP stack configured via cPanel on CentOS 5.9.

We have one file, call it bad.php, on one of our domains that is mistakenly being accessed about 10 times a second by a service provider. The file no longer exists, and we want to block these requests in the most efficient way possible. Currently we're returning bare-bones 410 responses, but that still involves tying up Apache threads, sending headers, etc.

Ideally I want to just drop the requests without sending any response. Blocking by IP is not an option, because we need to allow these IPs to legitimately access other files. (And no, we can't just ask them to stop.) We also don't have an external firewall to work with (leased server; a custom external firewall costs extra).

My thinking is that the best option would be an iptables rule like this:

iptables -I INPUT -p tcp --dport 80 --destination [ip address] -m string \      --algo kmp --string "bad\.php" -j DROP  

Two questions:

First, I tried that rule (with the domain's IP address in place of [ip address]), but it had no effect. It was the very first rule shown by iptables -L, so it should not be overridden by an earlier rule:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
DROP       tcp  --  anywhere             [ip address]       tcp dpt:http STRING match "bad\.php" ALGO name kmp TO 65535

Have I messed up somewhere there? I'm very much an iptables noob.

The second question is: are there any caveats to this? Will there be significant overhead in having iptables string-match every request (compared to the Apache RewriteRule with R=410 that we're using now)? Am I better off just living with it? Or is there a better option (mod_security, perhaps)? The server isn't anywhere close to being strained, so this is not a necessity, just an optimization.

Edit in response to Saurabh Barjatiya:

Here is everything I see from tcpdump when I make a request for the bad.php file:

20:21:09.740217 IP [clientIP].62790 > [serverIP].http: S 3454863895:3454863895(0) win 8192 <mss 1460,nop,wscale 2,nop,nop,sackOK>
20:21:09.740243 IP [serverIP].http > [clientIP].62790: S 4112555138:4112555138(0) ack 3454863896 win 5840 <mss 1460,nop,nop,sackOK,nop,wscale 7>
20:21:09.838595 IP [clientIP].62790 > [serverIP].http: . ack 1 win 16425
20:21:09.838606 IP [clientIP].62790 > [serverIP].http: . 1:1461(1460) ack 1 win 16425
20:21:09.838622 IP [serverIP].http > [clientIP].62790: . ack 1461 win 69
20:21:09.838632 IP [clientIP].62790 > [serverIP].http: P 1461:1476(15) ack 1 win 16425
20:21:09.838638 IP [serverIP].http > [clientIP].62790: . ack 1476 win 69

Obviously the actual URL string is not here. My understanding is that iptables can filter on URL strings, though, so presumably I'm checking the wrong thing.
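
A hedged observation with a corrected sketch: the string match module compares literal bytes, not a regular expression, so --string "bad\.php" searches for a literal backslash and will never match a real request line; separately, tcpdump only prints payloads when asked (e.g. with -A).

# match the literal URL fragment (no regex escaping with -m string)
iptables -I INPUT -p tcp --dport 80 --destination [ip address] -m string \
    --algo kmp --string "bad.php" -j DROP

# to actually see HTTP payloads in tcpdump, print packets in ASCII
tcpdump -A -s0 'tcp port 80'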

Debian package performance on XFS, btrfs, ext3, ext4

Posted: 03 Jul 2021 09:06 PM PDT

I did 4 clean installations of Debian 6 and measured the time to install an average virtual package. FS options are the defaults.

time apt-get install build-essential  

I got very strange results (min:sec, less is faster):

XFS:   3:12
btrfs: 2:45
ext3:  0:30
ext4:  0:50

What is wrong with XFS and btrfs? Six times slower than ext3? Am I doing something wrong?


Update (some details):

All LVM volumes are local to the VM and sit on an idle RAID. The CD-ROM image is local and identical for all runs, the Internet connection is stable, and it accounts for at most 10-15 seconds. All the visible slowdowns come after the downloads: the XFS and btrfs guests pause for over a second on every "Unpacking" step. Low-level caching is disabled. The host node is idle during every installation, with no active guests but the one.
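
A hedged aside: dpkg calls fsync() heavily while unpacking, and filesystems differ greatly in fsync cost, which is a common explanation for exactly this pattern. One way to test that hypothesis (--force-unsafe-io is a real dpkg option, though possibly not in every dpkg version; treat the run as an experiment, not a recommendation):

# re-run the benchmark with dpkg's fsync calls disabled
time apt-get -o Dpkg::Options::="--force-unsafe-io" install build-essential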

ls hangs for a certain directory

Posted: 03 Jul 2021 06:10 PM PDT

There is a particular directory (/var/www) where, when I run ls (with or without options), the command hangs and never completes. There are only about 10-15 files and directories in /var/www, mostly just text files. Here is some investigative info:

[me@server www]$ df .
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root
                       50G   19G   29G  40% /

[me@server www]$ df -i .
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/vg_dev-lv_root
                        3.2M    435K    2.8M   14% /

find works fine. Also, I can type cd /var/www/ and press TAB before pressing Enter, and it will successfully tab-complete the list of all files/directories in there:

[me@server www]$ cd /var/www/
cgi-bin/         create_vhost.sh  html/            manual/          phpMyAdmin/      scripts/         usage/
conf/            error/           icons/           mediawiki/       rackspace        sqlbuddy/        vhosts/
[me@server www]$ cd /var/www/

I have had to kill my terminal sessions several times because of ls hanging:

[me@server ~]$ ps | grep ls
gdm       6215  0.0  0.0 488152  2488 ?        S<sl Jan18   0:00 /usr/bin/pulseaudio --start --log-target=syslog
root     23269  0.0  0.0 117724  1088 ?        D    18:24   0:00 ls -Fh --color=always -l
root     23477  0.0  0.0 117724  1088 ?        D    18:34   0:00 ls -Fh --color=always -l
root     23579  0.0  0.0 115592   820 ?        D    18:36   0:00 ls -Fh --color=always
root     23634  0.0  0.0 115592   816 ?        D    18:38   0:00 ls -Fh --color=always
root     23740  0.0  0.0 117724  1088 ?        D    18:40   0:00 ls -Fh --color=always -l
me       23770  0.0  0.0 103156   816 pts/6    S+   18:41   0:00 grep ls

kill doesn't seem to have any effect on the processes, even with sudo.

What else should I do to investigate this problem? It just randomly started happening today.

UPDATE

dmesg is a big list of things, mostly related to an external USB HDD that I've mounted too many times (the max mount count has been reached), but I think that is an unrelated problem. Near the bottom of dmesg I'm seeing this:

INFO: task ls:23579 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls            D ffff88041fc230c0     0 23579  23505 0x00000080
 ffff8801688a1bb8 0000000000000086 0000000000000000 ffffffff8119d279
 ffff880406d0ea20 ffff88007e2c2268 ffff880071fe80c8 00000003ae82967a
 ffff880407169ad8 ffff8801688a1fd8 0000000000010518 ffff880407169ad8
Call Trace:
 [<ffffffff8119d279>] ? __find_get_block+0xa9/0x200
 [<ffffffff814c97ae>] __mutex_lock_slowpath+0x13e/0x180
 [<ffffffff814c964b>] mutex_lock+0x2b/0x50
 [<ffffffff8117a4d3>] do_lookup+0xd3/0x220
 [<ffffffff8117b145>] __link_path_walk+0x6f5/0x1040
 [<ffffffff8117a47d>] ? do_lookup+0x7d/0x220
 [<ffffffff8117bd1a>] path_walk+0x6a/0xe0
 [<ffffffff8117beeb>] do_path_lookup+0x5b/0xa0
 [<ffffffff8117cb57>] user_path_at+0x57/0xa0
 [<ffffffff81178986>] ? generic_readlink+0x76/0xc0
 [<ffffffff8117cb62>] ? user_path_at+0x62/0xa0
 [<ffffffff81171d3c>] vfs_fstatat+0x3c/0x80
 [<ffffffff81258ae5>] ? _atomic_dec_and_lock+0x55/0x80
 [<ffffffff81171eab>] vfs_stat+0x1b/0x20
 [<ffffffff81171ed4>] sys_newstat+0x24/0x50
 [<ffffffff810d40a2>] ? audit_syscall_entry+0x272/0x2a0
 [<ffffffff81013172>] system_call_fastpath+0x16/0x1b

Also, strace ls /var/www/ spits out a whole bunch of information. I don't know what is useful here... The last handful of lines:

ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=68, ws_col=145, ws_xpixel=0, ws_ypixel=0}) = 0
stat("/var/www/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/var/www/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
fcntl(3, F_GETFD)                       = 0x1 (flags FD_CLOEXEC)
getdents(3, /* 16 entries */, 32768)    = 488
getdents(3, /* 0 entries */, 32768)     = 0
close(3)                                = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 9), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3093b18000
write(1, "cgi-bin  conf  create_vhost.sh\te"..., 125cgi-bin  conf  create_vhost.sh  error  html  icons  manual  mediawiki  phpMyAdmin  rackspace  scripts  sqlbuddy  usage  vhosts
) = 125
close(1)                                = 0
munmap(0x7f3093b18000, 4096)            = 0
close(2)                                = 0
exit_group(0)                           = ?
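
A hedged diagnostic aside: the D state in the ps output above means uninterruptible sleep (stuck I/O or a held kernel lock, which matches the mutex_lock in the trace), and such processes ignore signals, including SIGKILL. Two ways to see where a hung process is blocked (the PID is taken from the ps output; /proc/PID/stack needs a kernel built with stack tracing):

# show the kernel function each hung ls is sleeping in
ps -o pid,stat,wchan:32,cmd -C ls

# dump the kernel stack of one hung process
sudo cat /proc/23579/stack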

Some clients aren't updating using dhcpd's "ddns-update-style interim" with Windows dns servers

Posted: 03 Jul 2021 04:02 PM PDT

I am using dhcpd 3.0.5 and Windows 2008 R2 for DNS. I have "ddns-update-style interim;" set, and the Windows server is set to allow unauthenticated updates. Most of the time this works great, but occasionally I come across computers that aren't resolving the hostname to the correct IP address. When I look at DNS, there is an A record with the wrong IP but no TXT record (so dhcpd must not have set it). Not surprisingly, the dhcpd logs for that hostname show "Has an A record but no DHCID, not mine."

Does anyone have any idea how these A records got in there? I'm thinking the client somehow got it in there before dhcpd was able to set it. Is there some way to prevent this? Is there any way to make dhcpd update a record even if it does not have a TXT record? If the client is creating the A record, then it is also not updating itself, but that's not surprising because that seems to be common and is the reason I want dhcpd to do the updates in the first place.

Also, it would be helpful if anyone knew of a way to script deleting an A record and then force dhcpd to retry updating it (without having to go to the client and send another DHCP request).
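
For the record-deletion half, a sketch using nsupdate, a standard BIND utility that speaks the same dynamic-update protocol (RFC 2136) that Windows DNS accepts; the server address and host name below are placeholders:

# delete a stale A record via a dynamic update (placeholders throughout)
nsupdate <<'EOF'
server 192.0.2.1
update delete badhost.example.com. A
send
EOF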

How to add a timestamp to bash script log?

Posted: 03 Jul 2021 04:08 PM PDT

I have a constantly running script whose output I append to a log file:

script.sh >> /var/log/logfile  

I'd like to add a timestamp before each line that is appended to the log, like:

Sat Sep 10 21:33:06 UTC 2011 The server has booted up.  Hmmph.  

Is there any jujitsu I can use?
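
One minimal sketch in plain bash (no extra tools; date's default output matches the example format above):

# prefix each output line with the current date before appending to the log
script.sh | while IFS= read -r line; do
    printf '%s %s\n' "$(date)" "$line"
done >> /var/log/logfile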
