Wednesday, February 2, 2022

Recent Questions - Server Fault


Forward some connections on specific interfaces - ubuntu 20.04

Posted: 02 Feb 2022 02:19 AM PST

I need to find a way to redirect traffic onto a specific interface depending on the destination you need to connect to. To explain: I have two network interfaces, one that must be used as the default (like a normal network interface, as if it were the only one) and another that should only be used when I connect to certain IPs or sites (for example, if I connect to 192.168.254.5 or to https://foo.it I want the second interface to be used). I'm currently on Ubuntu 20.04.
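Destination-based splitting like this is normally done with policy routing rather than per-URL redirects, since routing decisions only see IP addresses. A minimal sketch, assuming the second interface is eth1 with gateway 192.168.254.1 (both are placeholders to adapt):

# create a separate routing table for the second interface
echo "100 secondary" | sudo tee -a /etc/iproute2/rt_tables

# default route for that table via the second interface
sudo ip route add default via 192.168.254.1 dev eth1 table secondary

# send traffic for a specific destination through it
sudo ip rule add to 192.168.254.5/32 table secondary

# a hostname such as foo.it has to be resolved to addresses first,
# e.g. one rule per address returned by: getent ahosts foo.it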

Configure jobs to only run on empty nodes in SGE

Posted: 02 Feb 2022 02:17 AM PST

Firstly, apologies if this doesn't make a whole heap of sense; I'm very new to HPC and SGE etc. On the cluster I use, each node in SGE corresponds to an individual CPU with 6 cores. Most of the jobs I submit run on 12 cores and hence fill 2 nodes / CPUs completely. However, on occasion I run 1-core jobs. This creates an issue where the rest of the 12-core jobs run on 5 cores on one CPU (that is also running the 1-core job), 1 on a different CPU and 6 on another. Needless to say, this cascades across all the jobs that are running and significantly slows the speed of all the jobs.

Is there a way in SGE to specify that my 12-core jobs only go onto empty nodes? Unfortunately, there is not the capacity to set up an individual node to only run the 1-core jobs. I did think to try and set SGE to block the empty cores / fill them with dummy jobs when running 1-core jobs. However, this would lead to the cluster filling with 1-core jobs when they could all be stacked onto 1-2 CPUs without reducing performance of the 1-core jobs.

Any help would be much appreciated 😊
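One avenue worth exploring (a sketch only, not a verified recipe) is a parallel environment whose allocation_rule forces exactly 6 slots per host, so a 12-slot job can only be placed on two completely free nodes:

# hypothetical PE definition (partial), as shown by: qconf -sp six_per_host
pe_name            six_per_host
slots              999
allocation_rule    6
control_slaves     FALSE
job_is_first_task  TRUE

# submit the 12-core job against it
qsub -pe six_per_host 12 myjob.sh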

Looking for intermediate proxy reading a proxy.pac

Posted: 02 Feb 2022 02:00 AM PST

I have a setup where I need a corporate upstream proxy, as well as several different proxies that are set up as SSH SOCKS proxies.

In the past I used px on the windows host machine to parse a self-made proxy.pac (which then depending on the host uses either the corporate proxy or a suitable local proxy via ssh).

Unfortunately they introduced a new security solution that blocks px somehow. I tried to build px as a Docker image, but it appears to me that px is 100% intended to be run on Windows, so it looks like I cannot build it on Linux (VM or Docker image).

Do you know about any other proxy alternative which I can configure to read a proxy.pac and forward a web request according to this proxy.pac?

This way I could just set the http_proxy variables to this service and let it decide on the upstream instead of changing http_proxy over and over manually.
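For reference, the decision logic such a proxy.pac carries typically looks like this (a generic sketch; the hosts and ports are placeholders, not taken from the question):

function FindProxyForURL(url, host) {
    // hosts reached via a local SSH SOCKS tunnel (placeholder domain and port)
    if (dnsDomainIs(host, ".internal.example.com"))
        return "SOCKS5 127.0.0.1:1080";
    // everything else goes through the corporate upstream proxy (placeholder)
    return "PROXY corpproxy.example.com:8080";
}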

Do I need a dedicated IP address to install an SSL certificate on Azure Windows VM with a FQDN?

Posted: 02 Feb 2022 02:34 AM PST

I have an Azure Windows VM. I want to manually install the SSL certificate rather than use Azure's Keyvault.

The server has a public IP address, but it is non-static. It also has a FQDN (not sure if that's relevant). I also registered a domain name, so I'm not trying to tie the certificate directly to the FQDN.

I'm going to purchase a certificate from Namecheap, but their documentation states that "A dedicated IP address is required to install an SSL certificate."

Am I going to run into issues with this approach?

Post Domain Transfer - What are the risks associated with updating NameServers of a live website?

Posted: 02 Feb 2022 02:51 AM PST

I recently transferred a domain name from NameCheap to GoDaddy. When I tried to manage the DNS records on GoDaddy, GoDaddy showed me a message that my NameServers are not managed by them. The website associated with the domain name is live. What steps do I need to take to ensure the transfer of nameservers happens seamlessly, with no downtime and no impact on my SEO? Thanks.

SMTP relay with FQDN, but SpamAssassin marks FSL_HELO_NON_FQDN_1 and HELO_NO_DOMAIN

Posted: 02 Feb 2022 02:09 AM PST

My mail provider (posteo.de) offers an FQDN during the HELO/EHLO exchange (e.g. mout2.posteo.de), which is seen by policyd-spf (see the log below), but SpamAssassin on a receiving server marks the relay as offering no FQDN or domain:

Feb  2 06:17:10 eden policyd-spf[478301]: prepend Received-SPF: Pass (mailfrom) identity=mailfrom; client-ip=185.67.36.66; helo=mout02.posteo.de; envelope-from=xxx@posteo.de; receiver=<UNKNOWN>
Feb  2 06:17:10 eden postfix/smtpd[478286]: 502C97F91B: client=mout02.posteo.de[185.67.36.66]
Feb  2 06:17:10 eden postfix/cleanup[478303]: 502C97F91B: message-id=<xxx@xxx.xxx>
Feb  2 06:17:10 eden spamd[423260]: spamd: connection from 127.0.0.1 [127.0.0.1]:60310 to port 783, fd 6
Feb  2 06:17:10 eden spamd[423260]: spamd: processing message <xxxxx> for xxx:111
Feb  2 06:17:13 eden spamd[423260]: spamd: clean message (0.8/6.0) for xxx:111 in 3.6 seconds, 1910 bytes.
Feb  2 06:17:13 eden spamd[423260]: spamd: result: . 0 - DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FSL_HELO_NON_FQDN_1,HELO_NO_DOMAIN,SPF_PASS,UNPARSEABLE_RELAY,URIBL_BLOCKED scantime=3.6,size=1910,required_score=6.0,rhost=127.0.0.1,raddr=127.0.0.1,autolearn=no autolearn_force=no

Why doesn't SpamAssassin see the FQDN which is obviously present?

Thanks, Jan

Is there a way to edit Android OAUTH SHA1 fingerprints in the Google Cloud Console

Posted: 02 Feb 2022 01:42 AM PST

We've been working on an app for the better part of a year now. We set up an Android OAuth credential for use with Google Sign-In, using a debug SHA1. Now we are getting close to release and would like to change to the release SHA1. Unfortunately, the Google Cloud Console shows an error (screenshot not reproduced here) when we try to change the key.

Is there a way to change the key without having to change our app's package identifier? That seems too drastic a step. There also doesn't seem to be a simple way to contact Google for assistance either.

Does my swapping server with ~80GB available RAM need even more?

Posted: 02 Feb 2022 01:46 AM PST

I'm administering a Debian Linux x86_64 server with 125GB RAM, a 10GB swap partition and a swappiness value of 60.

A free -gw output prints:

              total        used        free      shared     buffers       cache   available
Mem:            125          20           1          25           0         103          78
Swap:             9           1           7

Every night, a cron job runs which is very disk R/W intensive and raises used memory by 2GB during its execution before returning to ~20GB.

In the same time frame, certain long-running containerized Python processes (especially gunicorn) are paged out.

With time, those processes will gradually fill the swap - after a couple of weeks, it will be 99% full.

I can clear the swap by either restarting the processes or disabling and re-enabling with swapoff and swapon. But the swap usage of the Python processes will just creep up slowly again.

I guess part of the explanation may be that, because of the nightly cron job, the kernel prioritizes the filesystem cache over the Python processes. But I also suspect that a software bug in the memory handling of those Python processes may be to blame.

A friend has suggested that I purchase even more RAM so there will be more room for the filesystem cache. That seems excessive to me. I would like to more precisely diagnose the cause of the swapping and find some software solution to the problem.
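One way to see which processes actually hold the swapped-out pages, as a starting point for that diagnosis (generic commands, nothing specific to this server):

# top swap consumers, read from VmSwap in /proc/<pid>/status
grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -t: -k3 -n | tail -n 20

# or, if smem is installed, a per-process breakdown sorted by swap
smem -rs swap | head -n 20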

So now I pass the question to Serverfault - what do you think of this phenomenon, and where should I go from here?

squid4 proxy with only whitelist allow policy for all

Posted: 02 Feb 2022 01:03 AM PST

Ubuntu 20.04; Squid 4.10 (built with SSL); common DNS server for clients and the proxy: 192.168.15.1

I can't configure the squid4 proxy with a whitelist-only (with SSL) allow policy for all clients. I built Squid from source with SSL support and generated a certificate; everything works when I allow all for all, but when I configure Squid to allow only the whitelist, it doesn't work: all users have access to all sites, not only the whitelist. I can't solve this problem. My squid.conf:

❯ cat squid.conf
debug_options 28,9
dns_nameservers 192.168.15.1
logformat squid-host %tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
access_log daemon:/var/log/squid/access.log squid-host

acl localnet src 192.168.15.0/24            # RFC 1918 local private network (LAN)
#========================START STANDART BLOCK======================
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
#========================END STANDART BLOCK======================

acl whitelist url_regex -i "/etc/squid/lists/whitelist"
http_access allow whitelist localnet

http_access deny all

http_port 3128

http_port 3129 intercept

https_port 3130 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=20MB tls-cert=/etc/squid/ssl/proxyCA.pem tls-key=/etc/squid/ssl/proxyCA.pem cipher=HIGH:MEDIUM:!LOW:!RC4:!SEED:!IDEA:!3DES:!MD5:!EXP:!PSK:!DSS options=NO_TLSv1,NO_SSLv3 tls-dh=prime256v1:/etc/squid/bump_dhparam.pem

always_direct allow all
sslproxy_cert_error allow all

acl whitelist_ssl ssl::server_name_regex "/etc/squid/lists/whitelist"
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
# example pattern for deb packages
#refresh_pattern (\.deb|\.udeb)$   129600 100% 129600
refresh_pattern .               0       20%     4320%

whitelist

❯ cat lists/whitelist
ya\.ru
^([A-Za-z0-9.-]*\.)?ok\.ru
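For comparison, a commonly used whitelist-only pattern for intercepted HTTPS (a sketch, not a verified fix for this particular config) avoids bumping everything and instead splices the allowed server names and terminates the rest:

acl whitelist_ssl ssl::server_name_regex "/etc/squid/lists/whitelist"
acl step1 at_step SslBump1

ssl_bump peek step1            # read the SNI first
ssl_bump splice whitelist_ssl  # pass whitelisted sites through untouched
ssl_bump terminate all         # close everything else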

When tagging EC2 instances, is it possible to automatically tag the attached services (such as volumes)?

Posted: 02 Feb 2022 01:52 AM PST

Is it possible in AWS to automatically tag services that are related to an EC2 instance when tagging the EC2 instance itself? To illustrate, when I tag the EC2 instance in the tag manager, services related to the EC2 (such as volumes) remain untagged. Manually tagging all related resources seems like a lot of redundant work - is there any way to automate this?
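If no built-in propagation fits, this is usually scripted. A sketch with the AWS CLI (the instance ID, volume IDs and tag are placeholders):

# find the volumes attached to a given instance
aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
    --query 'Volumes[].VolumeId' --output text

# apply the same tag to those volumes
aws ec2 create-tags \
    --resources vol-0aaa1111bbb2222cc \
    --tags Key=Project,Value=my-project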

SPF/DKIM/DMARC for Gmail "Send mail as" via smtp.gmail.com on external domain

Posted: 02 Feb 2022 02:39 AM PST

Since "Google Apps" / "Google Apps for business" / "G-Suite" / "Google Workspaces" free tier is being discontinued, I need a solution to migrate my ~30 extended family to a sustainable solution.

I'm looking at the option of having them each piggy-back off a personal @gmail.com address they should each create, forwarding the email, and adding the address using "Send mail as" in Gmail, using Google's Gmail SMTP server and an app-specific password.

I'm using CloudFlare for DNS, and I've activated the CloudFlare Email routing (beta) feature, and I've set the MX records to the various .mx.cloudflare.net servers. I also added the CloudFlare SPF TXT record: v=spf1 include:_spf.mx.cloudflare.net ~all.

Now, it all seems to be working, except that sent emails often seem to end up in junk/spam. I guess this possibly has something to do with SPF/DKIM/DMARC, but this is way outside my domain of knowledge.

I've modified the SPF header from v=spf1 include:_spf.mx.cloudflare.net ~all to v=spf1 include:_spf.mx.cloudflare.net include:_spf.google.com ~all as I saw suggested elsewhere, but that doesn't seem to have solved the problem.

Is it possible to add DKIM and/or DMARC records, and if so, how? My (limited) understanding is that Google would need to give me a key (probably unique to my account) to add, which validates that not only is it Google/gmail that's sending the mail, but specifically me and not some other random gmail user.

Moreover, how would this work with the other users? I need all users to be able to reliably send/receive emails and not have them end up in spam/junk.

If this were like SSH, I would generate a key pair, put the public key on the DNS and each user would add the same private key somewhere in their "Send As" on their gmail settings.

I guess this is probably unrelated to emails getting into spam/junk but I added the _dmarc TXT record: v=DMARC1; p=none; rua=mailto:{{me@gmail.com}}; ruf=mailto:{{me@gmail.com}}; sp=none; fo=1; ri=86400.

Pros/Cons of running separate user accounts for an administrative subdomain?

Posted: 02 Feb 2022 01:37 AM PST

What are the pros/cons of creating a separate account for an administrative subdomain rather than adding a subdomain under the same account?

Specifically, but not limited to:

  • Server load/process management (eg: process hanging/conflicts).
  • Server security.
  • MySQL query processing (eg: process heavy query hanging).
  • Cross-site code library sharing (eg: storing only 1 copy of a code library with Read, Write, and/or Execute permissions for each site).

Other Factors/Clarifications:

  • Server is a dedicated server using CloudLinux v7.9 & cPanel v100.0, running Apache v2.4, PHP v7.4 and MySQL v10.2-MariaDB.
  • Likely, I will be using cPanel's scripts/interface to create the account separation.
  • We're talking 1 primary domain and 1 administrative subdomain (for example: www.mydomain.com and admin.mydomain.com).
  • The primary domain and admin subdomain will be managed by the same people.
  • Most of the code for each will be independent from one-another except for some code libraries which they will need to share, whether by duplicating the code or sharing them from a common directory.

NGINX: try_files with proxy_pass in docker container

Posted: 02 Feb 2022 12:16 AM PST

I use nginx as a reverse proxy for my Node.js application(s) which is running on localhost:3000. Both nginx itself and my application are two separate docker containers. (Separate because later I plan to add more apps to it.)

In my app container I have some .webp alternative images. E.g.

$ ls
/app/assets/img/image.jpeg
/app/assets/img/image.jpeg.webp
/app/assets/img/other.jpeg

I've tried using try_files with nginx, but as far as I know this is not possible because try_files doesn't "see" my files inside another docker container unless they share a volume. Which they can't: the app container is built somewhere else with everything inside.

What I have right now is this:

upstream app {
  server localhost:3000;
}

[...]

location ~* .+\.(jpg|jpeg|svg|png)$ {
  proxy_pass http://app$uri$webp_suffix;
}

[...]

location / {
  proxy_pass http://app;
}

This is in fact working. The client requests a .jpg image and gets .jpg.webp (because it can handle it). The problem is that it always gets .jpg.webp. What I need is some check for whether the requested file is available in my app.

So maybe something similar to this? But this is not working :-(

location ~* .+\.(jpg|jpeg|svg|png)$ {
  if (-f http://app$uri$webp_suffix) {
    proxy_pass http://app$uri$webp_suffix;
  }
  proxy_pass http://app$uri;
}
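The if (-f ...) test only works on local paths, so it cannot probe a proxied URL. One commonly used alternative (a sketch, untested against this setup) is to request the .webp variant first and let nginx fall back to the original when the upstream answers 404:

location ~* .+\.(jpg|jpeg|svg|png)$ {
    proxy_intercept_errors on;     # let nginx act on upstream error codes
    error_page 404 = @original;    # fall back to the unmodified URI
    proxy_pass http://app$uri$webp_suffix;
}

location @original {
    proxy_pass http://app$uri;
}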

Using quotes in variables of a crontab job

Posted: 02 Feb 2022 01:28 AM PST

I'm trying to get a past date inside a cron job using something like this. But the DATE variable is always empty.

DATE=date -d "$date -1 days -1 months" +%Y-%m
* * * * * /bin/echo "Date: $($DATE) Test" >> /tmp/crontab.log 2>&1

I know how tricky crontab is thanks to this question with good answers, but the problem here doesn't seem to be the % sign, because the code below works fine:

DATE=date -d @2147483647 +%Y-%m
* * * * * /bin/echo "Date: $($DATE) Test" >> /tmp/crontab.log 2>&1

But if I use quotes, then the job again can't get the date.

DATE=date -d"@2147483647" +%Y-%m
* * * * * /bin/echo "Date: $($DATE) Test" >> /tmp/crontab.log 2>&1

I tried replacing the quotes with single quotes, double quotes, and escaped quotes, but none of these options solved the problem. And I need quotes to specify the "$date -1 days -1 months" part. Is there a way to do this in crontab without creating an external script?

CentOS 7 and crontab (cronie-1.4.11-23.el7.x86_64)
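Crontab variable assignments are plain strings; cron never runs a shell on the right-hand side, so DATE can hold the command text but never its output. A workaround that stays inside the crontab (a sketch; note the escaped % signs, which cron would otherwise turn into newlines):

* * * * * /bin/echo "Date: $(date -d "-1 day -1 month" +\%Y-\%m) Test" >> /tmp/crontab.log 2>&1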

Ping works on second time [closed]

Posted: 02 Feb 2022 12:59 AM PST

I had a problem accessing a domain name in my code. Then I tried to ping that domain: the first attempt timed out, and another try succeeded.

What other tools besides ping would you suggest for debugging DNS?

Thanks
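For separating DNS problems from connectivity problems, the usual tools are dig (or host/nslookup); a few generic examples:

dig example.com              # plain lookup against the default resolver
dig @8.8.8.8 example.com     # the same query against a specific resolver
dig +trace example.com       # follow the delegation down from the root servers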

Roundcube webmail on Debian 11 bullseye - Could not save password

Posted: 02 Feb 2022 01:22 AM PST

first question here, take me slow.

I have installed dovecot, postfix and Roundcube webmail in a VM to test a new mail server for the company I work for. All good until I try changing a password as a logged in user from roundcube settings->password.

What I have done:

  • Enabled password plugin in roundcube
  • Set the driver to "chpasswd" as my users are system users created with "useradd -m user password"
  • I have created a new file in sudoers.d and added www-data ALL=NOPASSWD:/usr/sbin/chpasswd, as I understand apache2 runs under the www-data user and it needs sudo privileges. Still, after doing all these things, I get the same error "Could not save password"

No logs that I can find show me any other information about the problem. If there is a specific log I should look into, please tell me and I will. If I should provide any configuration, ask and I will provide it. Thank you!

EDIT: auth.log shows this when using sudo -u www-data /usr/sbin/chpasswd

sudo: pam_unix(sudo:session): session opened for user www-data(uid=33) by (uid=0)
Feb  2 11:08:54 mail unix_chkpwd[7140]: check pass; user unknown
Feb  2 11:08:54 mail unix_chkpwd[7141]: check pass; user unknown
Feb  2 11:08:54 mail unix_chkpwd[7141]: password check failed for user (test2)
Feb  2 11:08:54 mail chpasswd[7139]: pam_unix(chpasswd:chauthtok): authentication failure; logname= uid=33 euid=33 tty= ruser= rhost=  user=test2
Feb  2 11:09:01 mail CRON[7144]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Feb  2 11:09:01 mail CRON[7144]: pam_unix(cron:session): session closed for user root
Feb  2 11:09:05 mail sudo: pam_unix(sudo:session): session closed for user www-data
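For reference, the chpasswd driver is normally wired up in the password plugin's own config file; a sketch of the relevant options (the exact path and command are worth double-checking against the Debian package):

// plugins/password/config.inc.php
$config['password_driver'] = 'chpasswd';
$config['password_chpasswd_cmd'] = 'sudo /usr/sbin/chpasswd 2> /dev/null';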

Running multiple flask applications with different domain names using mod_wsgi

Posted: 02 Feb 2022 02:08 AM PST

We are trying to run 2 different flask applications with different domain names from same server using mod_wsgi + Apache2.

This is the settings configured in httpd.conf

# For www.yyy.com
WSGIDaemonProcess yyy python-path=/var/www/yyy
WSGIScriptAlias / /var/www/yy/wsgi.py process-group=yyy application-group=%{GLOBAL}

# For www.zzz.com
WSGIDaemonProcess zzz python-path=/var/www/zzz
WSGIScriptAlias / /var/www/zzz/wsgi.py process-group=zzz application-group=%{GLOBAL}

and created /etc/httpd/conf.d/yyy.conf & zzz.conf similar to

<VirtualHost *:80>
        ServerName www.yyy.com
        ServerAdmin joel
        DocumentRoot /var/www/yyy
        ErrorLog /var/log/httpd/error.log
        CustomLog /var/log/httpd/access.log combined
        Alias /static /var/www/yyy/static
        <Directory /var/www/yyy/static>
                Require all granted
        </Directory>
        <Directory /var/www/yyy>
        <Files wsgi.py>
                Require all granted
        </Files>
        </Directory>
</VirtualHost>

but whatever we try, we end up getting only the first application configured in httpd.conf.
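Two global WSGIScriptAlias directives that both map / can only ever resolve to the first one, which would explain why only the first application is served. The usual layout (a sketch reusing the paths from the question) is to move each WSGIDaemonProcess/WSGIScriptAlias pair into its own VirtualHost:

<VirtualHost *:80>
        ServerName www.yyy.com
        WSGIDaemonProcess yyy python-path=/var/www/yyy
        WSGIScriptAlias / /var/www/yyy/wsgi.py process-group=yyy application-group=%{GLOBAL}
        ...
</VirtualHost>

<VirtualHost *:80>
        ServerName www.zzz.com
        WSGIDaemonProcess zzz python-path=/var/www/zzz
        WSGIScriptAlias / /var/www/zzz/wsgi.py process-group=zzz application-group=%{GLOBAL}
        ...
</VirtualHost>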

MX record pointing to a dynamic IP

Posted: 02 Feb 2022 02:19 AM PST

I'd like to host simple-login on my home Synology server. For this, I have to register an MX record on a domain that points to my IP address.

However, my internet provider does not guarantee a fixed IP, though it's not changing a lot.

If I just buy a domain name, when my IP address changes, this can take up to 72h to be propagated to DNS servers, which is too long.

The only solution I see is using a dynamic DNS service somewhere in the chain.

Is it possible to register an MX record on a dynamic DNS? I can't find any clear information about that.

If so, how to do it?

Is a dynamic DNS enough? If so, how do I register an MX record on it? Do I need a domain name? If so, how do I ensure that my IP is always pointed to, even after being refreshed by my ISP?
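As a point of reference: an MX record points at a hostname, never at an IP, so the moving part can simply be an A record that a dynamic-DNS client keeps updated with a short TTL. A generic zone sketch (all names and the address are placeholders):

; zone file for example.com
example.com.       IN  MX  10  mail.example.com.
mail.example.com.  IN  A       203.0.113.42   ; kept current by a DDNS update client, low TTL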

How to use ManagedCertificate in namespaced Ingress

Posted: 02 Feb 2022 02:30 AM PST

I tried to use Google Managed Certificate (not through k8s) in Ingress.

If Ingress is in default namespace, everything works fine using ingress.gcp.kubernetes.io/pre-shared-cert: my-cert-name annotation.

However, if Ingress is in a namespace, it looks for a certificate named my-namespace/my-cert-name. But it's impossible to create a certificate with / in its name.

Using GKE k8s ManagedCertificate everything works fine. How to make it work with a non-k8s ManagedCertificate?

Issues while adding CentOS EC2 to Windows AD

Posted: 02 Feb 2022 12:47 AM PST

I am trying to add my CentOS EC2 machine to Windows AD. My Windows Active Directory is configured on EC2 Instance in another account. There are two AD Instances (Multi-AZ) that are configured and replication etc is configured by the AD Administrator on the Servers. He has created a User for me and shared the credentials with me. I have performed the following steps according to this AWS Documentation to add the CentOS EC2 machine to Windows AD.

Still, I am listing down the steps which I have executed on my Server.

  • sudo yum -y update
  • cat /etc/hostname Output : ip-1-7-2-6.xyz.local
  • sudo yum -y install sssd realmd krb5-workstation samba-common-tools
  • sudo realm join -U user-shivkumar@XYZ.local XYZ.local --verbose
    The above command gave me error:
    * Resolving: _ldap._tcp.xyz.local
    * Resolving: xyz.local
    * No results: xyz.local
    realm: No such realm found

So I made the following entries in /etc/hosts (sudo vi /etc/hosts), as mentioned in this link (screenshot omitted). The two IPs added are those of my AD servers.

I also made changes to /etc/resolv.conf (screenshot omitted).

Then I used the sudo realm discover XYZ.local command to check whether realm is able to discover the domain (output screenshot omitted).

I am able to see the details. After this, when I again tried to join the domain, it gave me the following error:

realm: Couldn't join realm: Necessary packages are not installed: oddjob, oddjob-mkhomedir, sssd, adcli  

So I installed the above packages as well. I tried once again and this time the error changed to:

Error:
! Couldn't get kerberos ticket for: user-shivkumar@XYZ.local: Cannot find KDC for realm "XYZ.local"
adcli: couldn't connect to XYZ.local domain: Couldn't get kerberos ticket for: user-shivkumar@XYZ.local: Cannot find KDC for realm "XYZ.local"
! Failed to join the domain
realm: Couldn't join realm: Failed to join the domain

I found a solution to the above problem at this link and executed the command once again. This time it was successful (output screenshot omitted).

Output for realm list (screenshot omitted).

I tried the id command to verify the user's uid and gid (id user-shivkumar), but this failed with the message "no such user".
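One check worth making at this point (an assumption on my part, based on realmd defaulting to use_fully_qualified_names = True): the lookup may only succeed with the fully qualified name.

id user-shivkumar@xyz.local      # fully qualified lookup
# to allow short names, set this in /etc/sssd/sssd.conf and restart sssd:
#   use_fully_qualified_names = False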

Still, I proceeded ahead with the AWS doc to complete all of the steps and then cross-check:

sudo vi /etc/ssh/sshd_config
    PasswordAuthentication yes
sudo systemctl restart sshd.service
sudo service sshd restart
sudo visudo
## Add the "AWS Delegated Administrators" group from the example.com domain.
%AWS\ Delegated\ Administrators@example.com ALL=(ALL:ALL) ALL

Here are the details of my /etc/sssd/sssd.conf (screenshot omitted).

Still I am not able to access the EC2 Instance using AD credentials. It says Access Denied.

I am not able to understand what other configuration needs to be made.

run ngrok using subprocess. how to use subprocess with both ip address and port?

Posted: 02 Feb 2022 01:01 AM PST

def runNgrok():
    ngrokDir = "/home/ubuntu/ngrokFunctionalities"
    port = 8081
    ngrok_command = "ngrok"
    make_executable = str(Path(ngrokDir, ngrok_command))
    ngrok = subprocess.Popen([make_executable, 'http', 127.0.0.2, '-inspect=false','-bind-tls=true', port])
    atexit.register(ngrok.terminate)
    time.sleep(7)
    return True

  File "ngrokRunKeepAlive.py", line 25
    ngrok = subprocess.Popen([make_executable, 'http', 127.0.0.2, '-inspect=false','-bind-tls=true', port])
                                                       ^
SyntaxError: invalid syntax
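The SyntaxError comes from the bare 127.0.0.2: every element of the argv list passed to subprocess.Popen must be a string, so the address needs quotes and the port needs str(). A sketch of a corrected call (the ngrok flags are copied from the question, and the host:port form is an assumption about how ngrok expects its argument, not verified here):

import atexit
import subprocess
import time
from pathlib import Path

def runNgrok():
    ngrokDir = "/home/ubuntu/ngrokFunctionalities"
    port = 8081
    make_executable = str(Path(ngrokDir, "ngrok"))
    # every argv element must be a string
    ngrok = subprocess.Popen([make_executable, 'http', '127.0.0.2:' + str(port),
                              '-inspect=false', '-bind-tls=true'])
    atexit.register(ngrok.terminate)
    time.sleep(7)
    return True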

POST requests fail after updating Apache load balancer ("empty stream")

Posted: 02 Feb 2022 12:22 AM PST

We have an apache configured as a load balancer (mod_proxy_balancer afaik) distributing the load to several wildfly application nodes. This was working fine until we updated the operating system from debian buster to bullseye.

The OS update increased the apache version from 2.4.38 -> 2.4.51 (plus some debian changes I guess). Since then all HTTP POST requests fail with an "input stream is empty" error message as if the body of that request is somehow swallowed on the way.

I didn't find anything related on the net - so does anybody know what's going on?

Is there some option for this? I browsed the change log but could not find anything related...

We tried several different REST clients; however, all result in the same error.

`GLIBCXX_3.4.20' not found Centos7

Posted: 02 Feb 2022 02:00 AM PST

When starting deepstream, the output shows the errors below; how can I resolve this issue? I installed the latest version, which is not available in the yum repo, using an rpm file, and I can't remove the package either; it shows the same error. Please help.

deepstream: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by deepstream)
deepstream: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by deepstream)
deepstream: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by deepstream)
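A quick way to confirm what the installed libstdc++ actually provides, before deciding how to upgrade it (generic commands, nothing deepstream-specific assumed):

strings /lib64/libstdc++.so.6 | grep '^GLIBCXX'   # list the GLIBCXX versions present
ls -l /lib64/libstdc++.so.6                       # see which real library the symlink points at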

elasticsearch: max file descriptors [1024] for elasticsearch process is too low, increase to at least [65536]

Posted: 02 Feb 2022 03:03 AM PST

When I tried to run the logging aggregation I found out the following error generated by elasticsearch:

[2018-02-04T13:44:04,259][INFO ][o.e.b.BootstrapChecks ] [elasticsearch-logging-0] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max file descriptors [1024] for elasticsearch process is too low, increase to at least [65536]
[2018-02-04T13:44:04,268][INFO ][o.e.n.Node ] [elasticsearch-logging-0] stopping ...
[2018-02-04T13:44:04,486][INFO ][o.e.n.Node ] [elasticsearch-logging-0] stopped
[2018-02-04T13:44:04,486][INFO ][o.e.n.Node ] [elasticsearch-logging-0] closing ...
[2018-02-04T13:44:04,561][INFO ][o.e.n.Node ] [elasticsearch-logging-0] closed
[2018-02-04T13:44:04,564][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started

BTW I am running a kubernetes cluster v1.8.0 on minions and 1.9.0 on masters using cri-containerd on Ubuntu machines 16.04.
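For reference, the limit being checked is the per-process open-files limit of whatever launches Elasticsearch. On a plain host it is usually raised roughly like this (a generic sketch; in a kubernetes/containerd setup the value is inherited from the container runtime, e.g. LimitNOFILE= in its systemd unit, rather than from limits.conf):

# check the limit in the environment that starts Elasticsearch
ulimit -n

# /etc/security/limits.conf on a plain host (example values)
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536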

Any help will be appreciated.

Disable Nginx Logging for "forbidden by rule"

Posted: 02 Feb 2022 02:34 AM PST

In my Nginx config I have some IP blocks in place, to fight off spammers & bots.

This is very effective, but as a result, my error logs get filled up super fast with error messages like these:

2015/12/16 00:56:28 [error] 27748#0: *120462 access forbidden by rule, client: 167.114.xxx.xxx, server: bla bla ....

Now I don't want to fully disable error logging, as I want to find out what is going wrong when something goes wrong. I just want to disable logging of these "forbidden by rule" messages.

Any idea how to do this?
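nginx cannot filter individual messages, but "access forbidden by rule" is logged at the error level, so one common workaround (a sketch with placeholder values) is to raise the log threshold only where the deny rules live:

location /blocked/ {
    deny 167.114.0.0/16;                         # example block
    error_log /var/log/nginx/error.log crit;     # only crit and above logged here
}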

custom ldap conf for apache 2.2

Posted: 02 Feb 2022 01:02 AM PST

System: Apache 2.2.12 on Suse linux 11 patchset 2, with PHP and Perl.

I want to create a custom ldap.conf file for use by Apache for user authentication to a remote system. I can't put my config info in /etc/openldap/ldap.conf or /etc/ldap.conf -- it's a long story.
But, I could create my own /etc/openldap/apache_ldap.conf.

The question is -- Where in Apache (and maybe PHP) can I tell Apache to look for and use the settings in my custom apache_ldap.conf ? My custom ldap conf file would include things like TLS_REQCERT, uri and base.
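If the LDAP calls go through the OpenLDAP client library (as PHP's and Perl's LDAP modules typically do), that library honours the LDAPCONF environment variable as an alternate config file. A sketch of the idea; whether Apache's own mod_ldap respects it is something to verify:

# export before Apache starts, e.g. from the init script or a wrapper
export LDAPCONF=/etc/openldap/apache_ldap.conf
rcapache2 restart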

Supervisord (exit status 1; not expected) centos python

Posted: 02 Feb 2022 01:02 AM PST

Ran into additional issue with Supervisord.

CentOS 6.5, supervisor; Python 2.6 installed with the OS, Python 2.7 installed in /usr/local/bin

supervisord program settings

[program:inf_svr]
process_name=inf_svr%(process_num)s
directory=/opt/inf_api/
environment=USER=root,PYTHONPATH=/usr/local/bin/
command=python2.7 /opt/inf_api/inf_server.py --port=%(process_num)s
startsecs=2
user=root
autostart=true
autorestart=true
numprocs=4
numprocs_start=8080
stderr_logfile = /var/log/supervisord/tornado-stderr.log
stdout_logfile = /var/log/supervisord/tornado-stdout.log

I can run inf_server.py with:

python2.7 inf_server.py --port=8080  

with no problems. I made sure the files were executable (that was my problem before).

Any thoughts?

UPDATE: I can't get it to even launch a basic Python script without failing. I started by commenting out the old program, adding a new one and then putting in:

command=python /opt/inf_api/test.py  

where test.py just writes something to the screen and to a file. Fails with exit status 0. So I started adding back in the location of python (after discovering it with 'which python')

environment=PYTHONPATH=/usr/bin  

Tried putting the path in single quotes, tried adding USER=root to the environment, tried adding

directory=opt/inf_api/  

tried adding

user=root  

All the same thing, exit status 0. Nothing seems to be added to any log files either, except what I'm seeing from the supervisord debug output.

Man I am at a loss.

UPDATE 2: So there is absolutely NOTHING in the stderr and stdout logs for this. It just fails with (exit status 1; not expected): no errors from the program I am trying to run, nothing...

2014-03-05 11:25:01,027 INFO spawned: 'inf_svr' with pid 15542
2014-03-05 11:25:01,315 INFO exited: inf_svr (exit status 1; not expected)
2014-03-05 11:25:01,315 INFO received SIGCLD indicating a child quit

Update 3: I have a simple Python script started and managed by supervisord using /usr/local/bin/python2.7, so at least I know it can invoke the different Python version.

However I am still getting an (error status 1, not expected) on the original script.

I added in some exception logging, and even just some print-to-file lines at the beginning of the script. It's like it never even makes it there; it fails before it even launches.
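Two low-risk changes a config like this is often tightened with while debugging (a sketch, not a diagnosis): call the interpreter by absolute path instead of relying on PATH (PYTHONPATH points at module directories, not at the interpreter, so /usr/local/bin/ there has no effect on which python runs), and merge stderr into the stdout log so startup tracebacks cannot get lost:

[program:inf_svr]
command=/usr/local/bin/python2.7 /opt/inf_api/inf_server.py --port=%(process_num)s
redirect_stderr=true
stdout_logfile=/var/log/supervisord/tornado-stdout.log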

rewrite engine: How to test if URL on different server is reachable?

Posted: 02 Feb 2022 03:03 AM PST

I am trying to use the rewrite engine of nginx to redirect requests.
On the same machine I have a Jetty running on port 8080, which is delivering some content.

I want to check if the content is available on Jetty; otherwise I want to redirect it.
For that I have two locations added: one is used for the redirect to Jetty and the other should check it.

location /nexus/ {
    rewrite ^/nexus/(.*)$ http://A:8080/nexus/$1 permanent;
}

location ~* "^/releases/([a-z]*)/(.*)$" {
    try_files /nexus/content/repositories nexus/content/repositories /nexus/content/;
    # rewrite ^/releases/test/(.*)$ http://A:8080/nexus/content/repositories/Test/$2 break;
}

My idea was to use try_files, but its first parameters should be files or directories.
Is there any other method to test whether a URL is accessible?

At the moment I am using Nginx for checking the reachability of the URL; perhaps I could better use Jetty, which is in front of Nexus.
Nginx was just a choice and if there are better possibilities, I am willing to switch. :)

Here are some details about the environment (screenshot omitted).

PHP pages working slow from time to time

Posted: 02 Feb 2022 02:00 AM PST

I have VPS with limit of 2GB of ram and 8 CPU cores.

I have 5 sites on that VPS (one of them is just for testing, with no visitors except me). All 5 sites are image galleries, like wallpaper sites. Last week I noticed a problem on one site (the main domain, used for the name servers, and also the one with the most traffic and visitors). That site has two image galleries: one is an old static HTML gallery made a few years ago and the other, the main one, is powered by the ZenPhoto CMS. I also have that same gallery CMS on another two sites on that same VPS (one running site and one just for testing). On the other two sites I have a different PHP-driven gallery.

The problem is that after some time (it varies from 10 minutes to a few hours after an Apache restart), loading of pages on the main site becomes very slow, or I get a 503 Service Temporarily Unavailable error. So pages become unavailable, but only the part with the new CMS gallery; the old part of the site with static HTML pages keeps working fast and just fine. Also, the other two sites with the same CMS gallery and the other two with the different PHP-driven gallery keep working fine and fast at the same time. I thought it must be something with the CMS on that main site, because the other sites work nicely. Then I tried to open the contact and guest book pages on that main site, which are outside of that CMS but are also PHP pages, and they do not load either, while the same contact PHP scripts keep working on the other sites at the same time.

So, when the site starts to hang, ONLY the PHP-generated content stops working; like I said, the static pages keep working. And ONLY on that one main site do I have problems. Then I need to restart Apache; after the restart everything works nicely and fast for some time, then again just the PHP pages on the main site become slower. If I do not restart Apache, that slowness lasts for some time (several minutes or hours, depending on traffic), and during that time PHP-driven content loads very slowly or is unavailable on that site. After some time, at moments everything starts to work and is fast again for a while, and then it happens again. In hours with more traffic, PHP content loads slowly or is unavailable; in hours with less traffic it is sometimes fast and sometimes a little slower than usual. And once again: only on that main site, and only the PHP-driven pages; static pages work fast even in the busiest hours, and the other sites, even with the same CMS, keep working fast.

Currently I have about 7000 unique visitors per day on that site, but the site worked fine even with 11500 visitors per day. In total there are about 17000 visitors on the VPS across all sites (about 3 pages per unique visitor).

When the site starts to slow down, I can sometimes see something like this in the Apache status:

mod_fcgid status:

Total FastCGI processes: 37

Process: php5 (/usr/local/cpanel/cgi-sys/php5)
Pid     Active  Idle  Accesses  State
11300       39    28         7  Working
11274       47    28         7  Working
11296       40    29         3  Working
11283       45    30         3  Working
11304       36    31         1  Working
11282       46    32         3  Working
11292       42    33         1  Working
11289       44    34         1  Working
11305       35    35         0  Working
11273       48    36         2  Working
11280       47    39         1  Working
10125      133    40        12  Exiting(communication error)
11294       41    41         1  Exiting(communication error)
11277       47    42         2  Exiting(communication error)
11291       43    43         1  Exiting(communication error)
10187      108    43        10  Exiting(communication error)
10209       95    44         7  Exiting(communication error)
10171      113    44         5  Exiting(communication error)
11275       47    47         1  Exiting(communication error)
10144      125    48         8  Exiting(communication error)
10086      149    48        20  Exiting(communication error)
10212       94    49         5  Exiting(communication error)
10158      118    49         5  Exiting(communication error)
10169      114    50         4  Exiting(communication error)
10105      141    50        16  Exiting(communication error)
10094      146    50        15  Exiting(communication error)
10115      139    51        17  Exiting(communication error)
10213       93    51         9  Exiting(communication error)
10197      103    51         7  Exiting(communication error)

Process: php5 (/usr/local/cpanel/cgi-sys/php5)
Pid     Active  Idle  Accesses  State
7983      1079     2       149  Ready
7979      1079    11       151  Ready

Process: php5 (/usr/local/cpanel/cgi-sys/php5)
Pid     Active  Idle  Accesses  State
7990      1066     0        57  Ready
8001      1031    64        35  Ready
7999      1032    94        29  Ready
8000      1031    91        36  Ready
8002      1029    34        52  Ready

Process: php5 (/usr/local/cpanel/cgi-sys/php5)
Pid     Active  Idle  Accesses  State
7991      1064    29       115  Ready

When it is working nicely, there are no lines with "Exiting(communication error)".

Active and Idle are time active and time since last request, in seconds.

Here is the system info:

Total processors: 8

Processor #1 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5440 @ 2.83GHz Speed 88.320 MHz Cache 6144 KB

All other seven are the same.

System Information

Linux vps.nnnnnnnnnnnnnnnnn.nnn 2.6.18-028stab099.3 #1 SMP Wed Mar 7 15:20:22 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux

Current Memory Usage

             total      used      free   shared  buffers   cached
Mem:       8388608    882164   7506444        0        0        0
-/+ buffers/cache:     882164   7506444
Swap:            0         0         0
Total:     8388608    882164   7506444

Current Disk Usage

Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/vzfs       100G   34G    67G   34%  /
none

System Details:

Running on: Apache/2.2.22 System info: (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6 Powered by: PHP/5.3.10

Current Configuration

Default PHP Version (.php files) 5

PHP 5 Handler fcgi PHP 4 Handler suphp

Apache suEXEC on

Apache Ruid2 off

PHP 4 Handler suphp

Apache suEXEC on

Apache Configuration

The following settings have been saved:

fileetag: All

keepalive: On

keepalivetimeout: 3

maxclients: 150

maxkeepaliverequests: 10

maxrequestsperchild: 10000

maxspareservers: 10

minspareservers: 5

root_options: ExecCGI, FollowSymLinks, Includes, IncludesNOEXEC, Indexes, MultiViews, SymLinksIfOwnerMatch

serverlimit: 256

serversignature: Off

servertokens: Full

sslciphersuite: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH

startservers: 5

timeout: 30

I hope I explained my problem clearly.

Any help would be nice.
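For what it's worth, mod_fcgid behaviour in this area is usually tuned with directives like the following (generic example values, not taken from this server); a known gotcha is PHP_FCGI_MAX_REQUESTS being lower than FcgidMaxRequestsPerProcess, which makes PHP exit while mod_fcgid still expects it to serve requests:

# mod_fcgid tuning (example values only)
FcgidMaxRequestsPerProcess 500
FcgidMaxProcesses          64
FcgidIOTimeout             60
FcgidIdleTimeout           120

# in the fcgid wrapper script for PHP, keep this >= FcgidMaxRequestsPerProcess
export PHP_FCGI_MAX_REQUESTS=500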
