Sunday, November 14, 2021

Recent Questions - Server Fault

How to use EFI Shell to recover from SELinux Lockout?

Posted: 14 Nov 2021 08:03 PM PST

I enabled SELinux on my CentOS 8 box and now I can't get back into the server. It's hosted with a company, so I don't have physical access to it, but I do have access to advanced boot options, including the EFI shell. I am thinking that if I can get access to the partitions, I can disable SELinux that way. However, I don't know how to get to the files. I followed the instructions here but I get stuck on this part:

For example, to select the storage device fs1, you can run the following command:

Shell> fs1:

or in my case:

Shell> blk0:  

However, when I type that, I still see:

Shell>  

When I am expecting to see:

blk0:\>   

If I type:

blk9:  

Then I get error:

'blk9:' is not a valid mapping  

I don't get that error if I type blk0:. So the shell is aware of the mapping; it's just not switching to it.

Any thoughts on this?
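If switching to the block device in the EFI shell keeps failing, a possibly simpler route (assuming the hosting company's boot options also expose the GRUB menu) is to disable SELinux with a kernel parameter rather than editing files from the EFI shell:

```
# At the GRUB menu, press 'e' on the boot entry and append one of these
# to the line that starts with "linux":
enforcing=0    # boot in permissive mode (SELinux logs but doesn't block)
selinux=0      # disable SELinux entirely for this boot

# Once logged back in, make the change persistent:
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
```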

/var/log/messages getting huge in size

Posted: 14 Nov 2021 07:22 PM PST

I am using CentOS 7 and facing issues with /var/log/messages. For some reason, /var/log/messages grows huge, filling up the whole partition. To work around the issue I have to empty the file every time, but it just grows huge again.

Can anyone please advise a permanent solution for this?
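The usual permanent fix combines logrotate (to cap the file) with finding and silencing whatever is flooding the log. A minimal logrotate sketch; the size and rotation count are illustrative, not recommendations:

```
# /etc/logrotate.d/syslog on CentOS 7 already covers /var/log/messages;
# a hypothetical tightened version could look like:
/var/log/messages {
    daily
    rotate 7
    maxsize 500M
    compress
    missingok
    postrotate
        /bin/kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
    endscript
}
```

That caps disk usage, but inspecting the file to find the chatty service is still worthwhile.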

Downgrade or compile rsync to fix bug

Posted: 14 Nov 2021 07:08 PM PST

rsync: [generator] failed to set permissions : Operation not supported (95)

I'm running Ubuntu 21.10 and facing the same problem as in the link above, but I could not follow the solution steps; e.g., installing libssl-dev gives me this error:

[~/tmp/rsync-be3d6c0fbbd07781bbae6261cda109f8f08c031b]# apt install libssl-dev
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
  libssl-doc
E: Sub-process returned an error code

I also can't comment (ask questions) there due to low reputation. I need my rsync to run; is there a way to downgrade to 3.1.3-8, or some easier solution?

Thank you.
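If the older rsync is still offered by the archives, a downgrade can be sketched like this (whether 3.1.3-8 is actually available for 21.10 is an assumption; the first command verifies it):

```
apt list -a rsync                 # show every version apt can install
sudo apt install rsync=3.1.3-8    # request the specific version, if offered
sudo apt-mark hold rsync          # prevent it from being upgraded again
```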

Docker multiple ports to single container port configuration

Posted: 14 Nov 2021 04:22 PM PST

I want to configure localstack as a shared container between different microservices. The problem in my company's environment is that these microservices configure different ports for the services in localstack, so every microservice's docker-compose.yml file has localstack configured as a service, with custom ports pointing to different services on localstack. For example, one microservice will configure DynamoDB with port 4569 while another uses port 8000.

What I want to do is either configure an nginx proxy so that all the traffic from any of the DynamoDB ports goes to one DynamoDB port on localstack (I have used DynamoDB as an example; it could be any service on localstack), or do some port configuration in docker-compose.yml (configure multiple host ports connecting to one container port on localstack) that lets me use localstack without many changes.

Is this possible? Is there any example that I can refer to or use to configure this?
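On the docker-compose side, nothing prevents mapping several host ports onto the same container port, which may remove the need for an nginx proxy entirely. A sketch; the image tag and the 4566 edge port are assumptions about this localstack setup:

```yaml
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4569:4566"   # microservice A's DynamoDB port
      - "8000:4566"   # microservice B's DynamoDB port -> same container port
```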

Is it possible to use Lambda with ALB to control maintenance page?

Posted: 14 Nov 2021 04:16 PM PST

Lambda can act as an ALB target. By changing listener rule priorities, the ALB can be switched between two backend targets: the real application and a maintenance page.

import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('elbv2')

    response = client.set_rule_priorities(
        RulePriorities=[
            {
                'RuleArn': 'EC2 target group listener rule ARN',
                'Priority': 2
            },
            {
                'RuleArn': 'Lambda target group listener rule ARN',
                'Priority': 1
            },
        ]
    )

    result = client.describe_rules(
        ListenerArn='ALB listener ARN',
    )

    return result

Right now it seems the priorities have to be switched manually by invoking the Lambda each time. Can this be done on a fixed schedule, e.g. show the maintenance page between 8 AM and 8 PM and the application the rest of the time?

In other words, is there any other way to achieve this goal, such as using Route 53 or anything else?
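One possible pattern (an assumption, not the only approach) is two EventBridge schedule rules, e.g. cron(0 8 * * ? *) and cron(0 20 * * ? *), each invoking the Lambda to swap the rule priorities; alternatively, the handler can decide from the clock itself. A minimal sketch of the time check — the window bounds and the use of UTC are illustrative:

```python
from datetime import datetime, timezone

MAINTENANCE_START_HOUR = 8   # 8 AM UTC (illustrative)
MAINTENANCE_END_HOUR = 20    # 8 PM UTC (illustrative)

def in_maintenance_window(now=None):
    """Return True when the maintenance page should be shown."""
    now = now or datetime.now(timezone.utc)
    return MAINTENANCE_START_HOUR <= now.hour < MAINTENANCE_END_HOUR
```

The handler would then call set_rule_priorities with one ordering or the other depending on this flag.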

How to retrieve non-delivered postfix emails?

Posted: 14 Nov 2021 09:41 PM PST

I recently discovered that all emails that were meant to be sent to a particular address of mine via Postfix on my Ubuntu server have been getting rejected by the third-party email provider.

So there are about 6 months of emails I have not received (they were from a submission form on my website).

I have checked, and the Postfix mail queue is empty.

This is a sample log entry from when an email was not delivered (xxx's for privacy):

Nov 14 21:17:51 ip-xxx-xxx-xxx-xxx postfix/smtp[2932654]: D6F393EA37: to=<xxxx@xxxxx.com>, relay=mx2.privateemail.com[198.54.122.215]:25, delay=12, delays=0.02/0.01/7.2/5.1, dsn=5.1.8, status=bounced (host mx2.privateemail.com[198.54.122.215] said: 554 5.1.8 <runcloud@ip-xxx-xxx-xxx-xxx.us-east-2.compute.internal>: Sender address rejected: Domain not found (in reply to RCPT TO command))
Nov 14 21:17:51 ip-xxx-xxx-xxx-xxx postfix/cleanup[2932652]: 44F5E3EA38: message-id=<20211114211751.44F5E3EA38@ip-xxx-xxx-xxx-xxx.us-east-2.compute.internal>
Nov 14 21:17:51 ip-xxx-xxx-xxx-xxx postfix/bounce[2932655]: D6F393EA37: sender non-delivery notification: 44F5E3EA38
Nov 14 21:17:51 ip-xxx-xxx-xxx-xxx postfix/qmgr[4079]: 44F5E3EA38: from=<>, size=3166, nrcpt=1 (queue active)
Nov 14 21:17:51 ip-xxx-xxx-xxx-xxx postfix/qmgr[4079]: D6F393EA37: removed
Nov 14 21:17:51 ip-xxx-xxx-xxx-xxx postfix/local[2932656]: 44F5E3EA38: to=<runcloud@ip-xxx-xxx-xxx-xxx.us-east-2.compute.internal>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
Nov 14 21:17:51 ip-xxx-xxx-xxx-xxx postfix/qmgr[4079]: 44F5E3EA38: removed

Is there any way to retrieve the non-delivered emails over the last 6 months?
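Judging by the log itself, the rejection reason is "Sender address rejected: Domain not found": the envelope sender uses the EC2-internal hostname, which doesn't resolve publicly. The originals are likely unrecoverable from the remote side, but note that each bounce notice was delivered to a local mailbox (the status=sent (delivered to mailbox) line for runcloud), and those notices may still contain the rejected messages. To stop future rejections, one hedged sketch — example.com is a placeholder for a real, resolvable domain:

```
# /etc/postfix/main.cf -- give outgoing mail a resolvable sender domain
myhostname = mail.example.com
myorigin = example.com
```

followed by a postfix reload.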

Updates not working on Linux machine after changing iptables

Posted: 14 Nov 2021 09:55 PM PST

I am currently setting up a web server, with an FTP and an HTTP server running on it. Of course I am configuring iptables to lock the machine down; currently I have the following rules:

iptables -P INPUT DROP
iptables -P FORWARD DROP
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 21 -j ACCEPT

I have added the rules

-A OUTPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
53                udp        53

After doing this I cannot run updates any more; I just get an error (screenshot attached). Can anyone tell me what I should do to be able to run updates again?

Best regards

Thank you in advance
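If the OUTPUT chain was also given a DROP policy, updates will fail because apt/yum can neither resolve the mirrors nor fetch packages. A sketch of the egress rules typically needed (not a verified fix for this exact ruleset):

```
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT   # DNS lookups
iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT   # DNS over TCP
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT   # HTTP mirrors
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT  # HTTPS mirrors
```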

Retrieve parent folder size for each IIS site

Posted: 14 Nov 2021 06:20 PM PST

I'm trying to get a list of sites from IIS (8.5) including each site's folder size, but I can't get this to work.

Below is the current code I have, which works without the size:

Import-Module WebAdministration # Required for PowerShell v2 or lower

$filePath = "C:\sites.csv"
$sites = Get-Website

foreach ($site in $sites) {
    $name = $site.name
    $bindings = $site.bindings.collection.bindinginformation.split(":")
    $ip = $bindings[0]
    $port = $bindings[1]
    $hostHeader = $bindings[2]
    "$name,$hostHeader,$ip,$port" | Out-Host
    "$name,$hostHeader,$ip,$port" | Out-File $filePath -Append
}

I then attempted to add in this line

$size = Get-ChildItem -Directory -Force|ForEach {"{0,-30} {1,-30} {2:N2}MB" -f $_.Name, $_.LastWriteTime, ((Get-ChildItem $_ -Recurse|Measure-Object -Property Length -Sum -ErrorAction Stop).Sum/1MB)}  

but that didn't work either.

I then attempted with

$size = Get-ChildItem $name + "\folderName\" | Measure-Object -Property Length -sum  

which was getting closer, but I think my syntax is wrong with $name + "\folderName\", as I'm getting a series of errors. I say this is close because it has the path to the directory, but the directory doesn't exist. The directory would exist if I could append the folder name to the $name variable.

Where am I going wrong? Or how else could I retrieve the parent folder size for each website?

Subversion repo inside a repo

Posted: 14 Nov 2021 06:35 PM PST

It appears that some bright soul has created a repo inside of an existing repo.

Will this work? Are there any problems that might result from this?

UPDATE 001:

As requested, here is a list of the repository directory where a new repository appears to have been created. I did not create this. I have files in other parts of the same repo.

Will this work? Are there any pitfalls?

PS C:\> svn ls https://fugu.company.com/db/trunk/Scripts/DR01
ProjRAF/
ProjRAF_Staging/
... <several more directories>
ProjDocData/
ProjCleanup
Desktop.ini
MSDB/
Master/
README.txt
Shared/
Tools/
conf/
db/
format
hooks/
locks/
svn.ico
PS C:\>

Port 1723 won't close

Posted: 14 Nov 2021 05:02 PM PST

I'm not very experienced in Linux administration, but I have been running a server that hosts Django for a year or so now. Using ufw I had opened only ports 80, 443 and 69 for nginx and SSH.

Recently, whilst running Tripwire checks, I've seen a lot of modifications to files, which I've assumed is just usual system files doing their thing. I also noticed port 1723 is open when checking with nmap scans from another machine. I can't get it to close, even when denying it with ufw and iptables.

When I check netstat for listening ports, it never lists 1723. Is there something suspicious going on, or am I missing something?

Restart-Computer : Failed to restart the computer with the following error message: A system shutdown is in progress

Posted: 14 Nov 2021 08:58 PM PST

I have installed updates on my Windows 2012 R2 machine and, as usual, did a reboot. However, it seems that the machine hung during the reboot process and did not do a proper shutdown. I can initiate a connection via RDP but not connect to the machine. I can also send commands via PowerShell, so I have tried sending a forced reboot:

Restart-Computer -Force -Credential domain\adminuser -ComputerName COMPUTERNAME  

The reply from the server is the following:

Restart-Computer : Failed to restart the computer 10.250.35.16 with the following error message: A system shutdown is in progress.  

Is there a way to force the reboot and kill the processes?

Install software on azure VM

Posted: 14 Nov 2021 03:01 PM PST

Is it possible to install software on Windows VMs as and when they are created in a particular subscription? The actual need is to install endpoint protection software and vulnerability assessment tools onto the Windows VMs every time a new one is deployed; all of this should happen without an admin triggering the installs.

How can I SSH into a server that is using a VPN?

Posted: 14 Nov 2021 06:03 PM PST

I have a Raspberry Pi server (rpi) with a static internal IP using a VPN service. My router has a static public IP, and I have NAT set up to forward SSH traffic to the rpi, as I have other devices on the network.

(All IP addresses mentioned are fictitious.)

I am able to SSH into the rpi server remotely (out of my network) when no VPN is used. I am able to SSH into the rpi internally (in my network) when the VPN is used. I am not able to remotely SSH into the rpi when the VPN is used.

I have seen other questions that are similar but I'm such a novice I couldn't quite understand fully what was explained or ascertain if the situation was the same as mine.

I don't believe I'm using a firewall on the server; I am relying on the router to block connections and using NAT to forward connections. I don't understand what iproute is for or on which machine it should be configured.
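For reference, the iproute2 piece people usually mean here is source-based routing on the Pi itself, so that replies to connections arriving through the router go back out the LAN instead of into the VPN tunnel. A sketch; the addresses, interface name, and table number are all assumptions:

```
# On the rpi: replies from its LAN address bypass the VPN default route
ip rule add from 192.168.1.50 table 128            # rpi's LAN IP (assumed)
ip route add table 128 default via 192.168.1.1     # LAN gateway (assumed)
ip route add table 128 192.168.1.0/24 dev eth0     # keep LAN traffic local
```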

Where is the default soft limit config file on Debian?

Posted: 14 Nov 2021 09:05 PM PST

I have a process running as root that is capped at 1024 open files (in reality lsof shows me up to 1031 for it), but I can't find the file to modify this limit.

Here is the output of cat /proc/PID/limits to confirm it:

#cat /proc/32531/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             515045               515045               processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       515045               515045               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

However, I can't find that limit in the "classic" config files:

#cat /proc/sys/fs/file-max
13106306

#ulimit -S -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 515045
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 515045
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

#ulimit -H -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 515045
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 515045
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

/etc/security/limits.conf is fully commented out and /etc/security/limits.d/ is empty.

I'm running Debian 8.8 (jessie) on Linux 3.14.32-xxxx-grs-ipv6-64 (kernel@kernel.ovh.net) (gcc version 4.9.2 (Debian 4.9.2-10)).
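One detail that may explain this: on a sysvinit system such as Debian 8, /etc/security/limits.conf is applied by pam_limits only to PAM logins, so a daemon started by init inherits the kernel defaults (1024 soft / 4096 hard), which matches the output above. Two hedged options; the PID is the one from the question and the target value is illustrative:

```
# Option 1: raise the limit in the daemon's init script, before it starts
ulimit -n 65536

# Option 2: change the already-running process (util-linux prlimit)
prlimit --pid 32531 --nofile=65536:65536
```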

Thanks,

BackupPC files on NFS - web interface not working

Posted: 14 Nov 2021 02:50 PM PST

If the BackupPC files are moved from /var/lib/BackupPC to an NFS mount, the web interface stops working. I can see only the home page; when I try to go to a host config or to the summary page, the request times out (504 Gateway Timeout).

There is nothing relevant in the BackupPC logs. The httpd error log contains these lines:

[Wed Jun 07 14:39:04.655260 2017] [cgi:warn] [pid 1078] [client 83.208.46.101:53404] AH01220: Timeout waiting for output from CGI script /usr/share/BackupPC/sbin/BackupPC_Admin, referer: http://10.0.0.15:8081/backuppc?action=summary
[Wed Jun 07 14:39:04.655326 2017] [cgi:error] [pid 1078] [client 83.208.46.101:53404] Script timed out before returning headers: BackupPC_Admin, referer: http://10.0.0.15:8081/backuppc?action=summary

If I switch the configuration back (/etc/BackupPC/config.pl, variable $Conf{TopDir}), everything works fine.

I am using CentOS Linux 7 x64 and BackupPC-3.3.1-5.el7.x86_64.

How to enable HTTP access to Intel RMM3 through ssh console?

Posted: 14 Nov 2021 05:06 PM PST

Would you be so kind as to explain how to enable HTTP access to Intel RMM3 through the SSH (SMASH-CLP) console?

I have already tried to reset it with the procedure recommended on the Intel forum (https://communities.intel.com/thread/17372?tstart=0):

cd /system1/sp1/enetport1/lanendpt1/ipendpt1
set committed=0
set committed=1

but it didn't solve the problem.

Best regards, Grzegorz

iptables doesn't recognize --log-prefix

Posted: 14 Nov 2021 03:01 PM PST

I'm having difficulty getting iptables to log. Here are the relevant commands:

/usr/sbin/iptables -N LOG_DROP
/usr/sbin/iptables -A LOG_DROP -m limit --limit 2/min -j LOG --log-prefix "iptables drop: " --log-level 7
/usr/sbin/iptables -A LOG_DROP -j DROP

Entering these commands results in:

iptables v1.4.21: unknown option "--log-prefix"

I believe the following modules are important, so they're active in my kernel:

nf_log_common
nf_log_ipv4
nf_log_ipv6

Any suggestions for solving this problem?

certutil -TCAInfo error message RegConnectRegistry/RegOpenKeyEx: The network path was not found. 0x80070035 (WIN32: 53 ERROR_BAD_NETPATH)

Posted: 14 Nov 2021 10:06 PM PST

Recently we noticed the following errors were occurring daily in our Event Logs for servers in our DMZ:

CertificateServicesClient-CertEnroll EventID 82
Certificate enrollment for Local system failed in authentication to all urls for enrollment
server associated with policy id: {00B9F3A7-...-50628BC5AE7E} (The RPC server is
unavailable. 0x800706ba (WIN32: 1722 RPC_S_SERVER_UNAVAILABLE)). Failed to enroll for
template: Machine

CertificateServicesClient-CertEnroll EventID 13
Certificate enrollment for Local system failed to enroll for a Machine certificate with
request ID N/A from NY-CA01.company.com\Company Internal Root CA (d0 7a ... f3 e4 70).

CertificateServicesClient-AutoEnrollment EventID 6
Automatic certificate enrollment for local system failed (0x800706ba) The RPC server is
unavailable.

I suspect it is a firewall issue, and tried to use the certutil.exe tool to verify connectivity to the certificate authorities, but when running the -TCAInfo command I received the following error message:

PS C:\windows\system32> certutil -tcainfo
================================================================
CA Name: Company Internal Root CA

Machine Name: NY-CA01.Company.com

DS Location: CN=Company Internal Root CA,CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=Company,DC=com

Cert DN: CN=CompanyInternal Root CA, DC=Company, DC=com
RegConnectRegistry/RegOpenKeyEx: The network path was not found. 0x80070035 (WIN32: 53 ERROR_BAD_NETPATH)

CA Registry Validity Period: ? ???
NotAfter: 10/11/2031 7:05 PM

Connecting to NY-CA01.Company.com\Company Internal Root CA ...
Server "Company Internal Root CA" ICertRequest2 interface is alive (47ms)
...
================================================================
NY-CA01.Company.com\Company Internal Root CA:
  Enterprise Root CA
  A certification chain processed correctly, but one of the CA certificates is not trusted by the policy provider. 0x800b0112 (-2146762478 CERT_E_UNTRUSTEDCA)
  Online

It appears to think the CA is online and alive, but there is a "The network path was not found" error, and the CA Registry Validity Period is unknown (? ???).

I confirmed that https://ny-ca01.company.com/certsrv/ is accessible from the DMZ servers, so what other ports are needed for renewing certificates?

Windows Server 2012 Terminal Server Degrading Performance on User Session

Posted: 14 Nov 2021 08:05 PM PST

We have a terminal server environment with about 40 users which is experiencing a curious performance issue. When a given user logs in initially, everything functions properly; once a particular user starts to eat up more resources (upwards of 2 GB of memory and 2%-5% of overall CPU usage), their applications slow down considerably. If I have the user close everything, log off and log back in, performance of the applications is restored.

It's almost as if there's some kind of throttling on resources going on for each user session.

Has anyone experienced this phenomenon? The server resources are adequate: at peak we're using 50%-70% CPU and about 75% of memory.

Thanks in advance!

Why does my server running nginx/php-fpm keep losing session capability without generating any errors?

Posted: 14 Nov 2021 07:02 PM PST

I am managing a server that has a couple dozen websites on it and they have all been working fine until last week when it was noticed that one site had seemingly lost the ability to maintain session data. Then another. (I am guessing it is affecting all sites on this server but just has not been reported yet.) I changed absolutely nothing in either site's configs recently. I have added no software to the server recently. I have not changed the general nginx or php-fpm configs. There are no errors in the nginx or php-fpm error logs that correspond to this failure. Restarting php-fpm appears to clear up the problem at least temporarily. Inevitably, the problem recurs. How is it possible that php-fpm can fail like this without producing an error message somewhere? I have been googling extensively and have not found anyone else with this problem.

The server is running RHEL 6 with nginx and php-fpm (remi repo). I can't remember if this server is running APC but I don't think it is. All patches are up to date.

I am guessing I just have hit some sort of threshold where the current php-fpm configs are insufficient, though I don't understand why I am getting no errors when that limit is reached. Here are what I suspect are the relevant php-fpm settings...

pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on

Is there an error log somewhere I'm missing where this would be reported? As I mentioned, there is nothing in /var/log/php-fpm/www-error.log, in the general nginx error log, or in the site-specific nginx error logs.

P.S.: I do get other kinds of error messages in all of the logs I mentioned, so the lack of error messages is not a permissions issue.

Here are the df outputs (edited to remove identifying physical paths):

# df -h
Filesystem            Size  Used Avail Use% Mounted on
xxx                   8.4G  3.8G  4.2G  48% /
xxx                   7.8G     0  7.8G   0% /dev/shm
xxx                   477M   79M  373M  18% /boot
xxx                   976M  713M  213M  78% /home
xxx                   976M   30M  896M   4% /tmp
xxx                   9.8G  4.6G  4.7G  50% /var

# df -i
Filesystem            Inodes   IUsed  IFree    IUse% Mounted on
xxx                   547584   87083  460501   16%   /
xxx                   2041821  1      2041820  1%    /dev/shm
xxx                   128016   50     127966   1%    /boot
xxx                   65536    19285  46251    30%   /home
xxx                   65536    173    65363    1%    /tmp
xxx                   655360   19441  635919   3%    /var

And here is the php-fpm status page while the site is not allowing sessions to be saved:

pool:                 www
process manager:      dynamic
start time:           06/Aug/2015:10:53:06 -0400
start since:          332263
accepted conn:        2899
listen queue:         0
max listen queue:     0
listen queue len:     128
idle processes:       9
active processes:     1
total processes:      10
max active processes: 9
max children reached: 0
slow requests:        0

Configuring multiple domain in nginx in one file

Posted: 14 Nov 2021 08:05 PM PST

I am still a newbie at configuring nginx.

Is it possible to configure multiple domains in one file so that they share most of the same config?
For example, I want to configure two domains that are based on one app, where one domain needs basic auth and the other doesn't.
I would like to do something like this, but I think it does not work:

sites-enabled/mysite

server {
    listen 127.0.0.1:80 default_server;
    server_name www.mysite.com;
    include sharedconf.conf;
}

server {
    listen 127.0.0.1:80;
    server_name www.mysite.co.jp;
    auth_basic "restricted";
    auth_basic_user_file /etc/nginx.htpasswd;
    include sharedconf.conf;
}


sharedconf.conf

location / {
    proxy_pass_header Server;
    #... bunch of config lines ...
}

How to automatically increase the partition size of multiple Ubuntu nodes in VMware vSphere?

Posted: 14 Nov 2021 10:06 PM PST

We have dozens of Ubuntu nodes where I have to resize the hard disk drive to different sizes. Currently I'm doing all of the following steps manually:

  1. Increase the size of each node's virtual hard disk in VMware vCenter.
  2. Change the configuration of the DVD drive, mount a GParted ISO, boot from BIOS, and change the boot order.
  3. Boot into GParted, manually increase /dev/sda2 and /dev/sda5.
  4. Stop the VM, disable the DVD drive, and start the VM.
  5. Use lvextend -r /dev/ubuntu/root /dev/sda5 to extend the LVM and resize the partition to its maximum possible size.
  6. Optional: Check with df -h if everything's OK.

I would love to automate this process; ideally I would provide a list of node names and corresponding sizes and let a tool do its job. If there is no automated solution available, I would love to hear about micro-optimizations for each of these steps to make my tedious job easier.

We're already automatically provisioning our nodes with Chef, using a VM template with a hard disk size of 16 GB.

Any smart ideas?
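One possible automation sketch that skips the GParted reboot entirely: grow the virtual disk with govc, rescan the SCSI bus in the guest, then grow the partition and LV online. Tool availability (govc configured on the workstation, cloud-guest-utils/growpart inside the guests) and all names and sizes are assumptions:

```
NODE=ubuntu-node-01          # hypothetical node name
govc vm.disk.change -vm "$NODE" -disk.label "Hard disk 1" -size 64G
ssh "$NODE" "echo 1 | sudo tee /sys/class/block/sda/device/rescan"
ssh "$NODE" "sudo growpart /dev/sda 2 && sudo growpart /dev/sda 5"
ssh "$NODE" "sudo pvresize /dev/sda5 && sudo lvextend -r -l +100%FREE /dev/ubuntu/root"
```

Driven from a file of node/size pairs, this becomes a simple loop.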

EL6/KVM guest dies with "pthread_create failed: Resource temporarily unavailable"

Posted: 14 Nov 2021 07:02 PM PST

I've got a CentOS 6.5 x86-64 KVM server with a bunch of guest VMs of different breeds, mostly EL5 and EL6. However one and only one of them keeps crashing every couple of days with:

pthread_create failed: Resource temporarily unavailable  

Here is the full log from /var/log/libvirt/qemu/vws3-pp.log:

2014-07-24 21:27:27.451+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
    /usr/libexec/qemu-kvm -name vws3-pp,process=qemu:vws3-pp -S -M rhel6.5.0
    -enable-kvm -m 1536 -redhat-disable-KSM -realtime mlock=on
    -smp 1,sockets=1,cores=1,threads=1 -uuid d11de823-8bab-4e8d-8457-61ef7ab877a7
    -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vws3-pp.monitor,server,nowait
    -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
    -drive file=/vm/prod/vws3-pp-disk1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=writethrough
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
    -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=32
    -device virtio-net-pci,netdev=hostnet0,id=net0,bus=pci.0,addr=0x3
    -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
    -vnc 127.0.0.1:9,password -vga cirrus
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/5

pthread_create failed: Resource temporarily unavailable   <====  ### HERE ####
2014-07-29 15:29:52.063+0000: shutting down

There are 8 other VMs on the box and all of them run happily for months, just this one crashes every few days. There is nothing special about this VM - pretty standard LAMP, not overloaded - I can't think of any significant difference between this and the other VMs that exhibit no problems. Some of those are very busy but still rock stable.

Somewhere on the net I found a suggestion to set max_processes = 4096 in /etc/libvirt/qemu.conf and restart the box. I did that, but it didn't help; the VM crashed again this morning for no good reason.

NEW INFO:

As it turns out, the VM always dies while rdiff-backup is running from a remote backup server, and in most cases the last entry in rdiff-backup-data/backup.log (on the remote side, i.e. not affected by the crash) is:

Processing changed file tmp
Incrementing mirror file /extpool/backup/vws3-pp/tmp

even though /tmp/** is excluded from the backup. It could indeed be failing in /usr, which is the next one alphabetically in /, who knows...

Backup runs every night but the VM crashes only about once a week.

What does rdiff-backup do that is so strange that it makes a KVM guest die with pthread_create failed: Resource temporarily unavailable?

Any ideas?

Upgrade SATA drive on Dell Poweredge 1950, RAID 1, Ubuntu server

Posted: 14 Nov 2021 09:05 PM PST

My current setup is a Dell PowerEdge 1950 with two 250 GB SATA drives in RAID 1; the OS is Ubuntu Server. I am using it for running OTRS (an open source help desk).

I'd like to upgrade the drives to the maximum capacity possible, and it's my understanding that 2 TB is the max. I also understand that I can use non-Dell hard drives, with the only risk being that Dell won't support them, although they can work.

So, first of all, are these two statements correct? Secondly, what would be the best way to do this? Can I just replace the first drive, let the array rebuild, and then replace the other drive? I assume that if this is a valid practice, I will then have to expand the capacity from the original size to the new one?

I appreciate any help and advice on this matter in advance!

Domino HTTP Server: Error - Unable to Bind 1.2.3.4, port 80, port in use or Bind To Host configuration specifies a duplicate IP address/host

Posted: 14 Nov 2021 06:02 PM PST

We have a Domino 9.0.1 server hosted on Ubuntu 14.04 Server, which also hosts several other HTTP-based tasks (Nginx, CouchDB, Confluence on Tomcat).

The Ubuntu server has multiple IPs, all bound correctly to the different tasks.

The Domino SMTP task binds correctly and is working well.

All HTTP tasks (other than Domino) are proxied behind Nginx 1.6.x and all are working well. netstat shows no 0.0.0.0 bindings, and nothing is listening on 1.2.3.4:80.

When I try to load HTTP on the (Domino) server console, it fails with

HTTP Server: Error - Unable to Bind 1.2.3.4, port 80, port in use or Bind To Host configuration specifies a duplicate IP address/host  

a couple of times, maybe 4 or 5 times, and then it loads without failure!

And when it does come up, I see HTTP listening on 80 AND 443, but SSL connections are not working, and nothing is written to the error log!

It must be a kind of bad magic :-(

thanks in advance

Pitt

Configuring DNS with router and BIND

Posted: 14 Nov 2021 06:02 PM PST

Goal

I am trying to setup a local DNS server here in our office.

Problem

Apparently Comcast has a loop-back issue: when we configured the domain to point to our IP, it works outside of the office, but inside it fails.

Actions taken

We decided to setup a local DNS server so that anyone requesting our domains inside will still be able to view them.

We have it set up for the most part, but it just won't work when we add the IP of the DNS server to the DNS settings in the router.

However, when I go to my local computer and add the IP in its DNS settings, it resolves correctly.

Request

There must be something that I am missing in the router configuration.

If you have any links with really good examples of how to set one up, that would be great.

We are using Red Hat but anything is helpful.

Thanks in advance.

Forwarding everything from external DMZ ip to NAT ip using iptables

Posted: 14 Nov 2021 04:07 PM PST

I know there are a lot of questions about this, but I am still struggling to get it working.

I have a firewall which has 3 external IPs (the IPs have been changed for security):

eth0      Link encap:Ethernet  HWaddr 50:46:5d:64:ed:e4
          inet addr:51.215.232.147  Bcast:51.215.232.159  Mask:255.255.255.240
          inet6 addr: fe80::5246:5dff:fe64:ede4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:70219084 errors:0 dropped:17443 overruns:0 frame:0
          TX packets:63956103 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:51508818511 (51.5 GB)  TX bytes:27933240304 (27.9 GB)

eth0:1    Link encap:Ethernet  HWaddr 50:46:5d:64:ed:e4
          inet addr:51.215.232.148  Bcast:51.215.232.159  Mask:255.255.255.240
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:2    Link encap:Ethernet  HWaddr 50:46:5d:64:ed:e4
          inet addr:51.215.232.150  Bcast:51.215.232.159  Mask:255.255.255.240
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

And I have these simple rules:

# Generated by iptables-save v1.4.10 on Sat Mar  3 14:48:42 2012
*filter
:INPUT ACCEPT [13766:4986720]
:FORWARD ACCEPT [992:122980]
:OUTPUT ACCEPT [11894:5582822]
-A FORWARD -s 172.16.0.0/16 -o eth0 -j ACCEPT
-A FORWARD -d 172.16.0.0/16 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sat Mar  3 14:48:42 2012
# Generated by iptables-save v1.4.10 on Sat Mar  3 14:48:42 2012
*nat
:PREROUTING ACCEPT [77:8206]
:INPUT ACCEPT [48:6367]
:OUTPUT ACCEPT [55:3300]
:POSTROUTING ACCEPT [55:3300]
-A POSTROUTING -s 172.16.0.0/16 -o eth0 -j MASQUERADE
-A POSTROUTING -s 10.10.0.0/16 -o eth0 -j MASQUERADE
-A POSTROUTING -s 10.1.0.0/16 -o eth0 -j MASQUERADE
COMMIT
# Completed on Sat Mar  3 14:48:42 2012

So I want to forward everything from 51.215.232.150 to internal IP 172.16.5.218.

So I thought this would work:

iptables  -t nat -I PREROUTING -p tcp -d 51.215.232.150 -j DNAT --to 172.16.5.218  

But alas no.

Thanks in advance. Edward
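Two things are commonly missing from a setup like this, sketched below (untested against this exact configuration): the -p tcp restricts the DNAT to TCP only, and the existing FORWARD rules never accept new inbound connections to 172.16.5.218:

```
# DNAT all protocols, not just TCP
iptables -t nat -I PREROUTING -d 51.215.232.150 -j DNAT --to-destination 172.16.5.218
# Accept the new inbound connections being forwarded to the internal host
iptables -I FORWARD -d 172.16.5.218 -i eth0 -j ACCEPT
# Optional: have replies leave from the same public IP instead of MASQUERADE
iptables -t nat -I POSTROUTING -s 172.16.5.218 -o eth0 -j SNAT --to-source 51.215.232.150
```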

What are the main points to avoid RAID5 with SSD?

Posted: 14 Nov 2021 03:05 PM PST

My understanding is that an SSD has a limited number of writes. RAID 5 performs many extra writes due to the parity information spread across the drives. So my reasoning says that RAID 5 would wear out solid state drives faster and lower their performance.

The following statement from this article makes me think I don't fully understand, or that my reasoning above might be incorrect:

Another niche for high-endurance SSDs is in parity RAID arrays. SLC, due to its inherently superior write latency and endurance, is well suited for this type of application.

Apache Alias to access folder in different harddrive? (localhost)

Posted: 14 Nov 2021 04:07 PM PST

I installed AppServ and made a PHP file:

D:/Appserv/www/x/y/file.php

Then I have a folder, like E:/foldie.

I want file.php to work with that folder. I found this somewhere:
<IfModule mod_alias.c>
Alias /foldie/ "E:/foldie"
<Directory "E:/foldie">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
</IfModule>

So I added it to my httpd.conf file. Then I added the following to file.php:

echo(realpath("../../foldie/"));

I was expecting "E:/foldie". Nothing happened.

Help?
