Friday, February 25, 2022

Recent Questions - Server Fault


Run commands that run in a shell as a script

Posted: 25 Feb 2022 02:04 AM PST

Running the following commands in a shell runs without issues:

ssh user@machine systemctl status my-service.service
ssh user@machine sudo systemctl stop my-service.service
scp -r ./my-service/* user@machine:/home/user/my-service
ssh user@machine chmod +x /home/user/my-service/my-service
ssh user@machine sudo systemctl start my-service.service
ssh user@machine sudo systemctl status my-service.service

However, putting this in a deploy.sh file results in none of the above being able to execute.

Errors:

  • Invalid unit name "my-service" was escaped as "my-service\x0d" (maybe you should use systemd-escape?)
  • Unit my-service\x0d.service could not be found.
  • Invalid unit name "my-service.service" was escaped as "my-service.service\x0d" (maybe you should use systemd-escape?)
  • Failed to stop my-service\x0d.service: Unit my-service.service\x0d.service not loaded. : No such file or directory
  • chmod: cannot access '/home/user/my-service/my-service'$'\r': No such file or directory
  • Invalid unit name "my-service.service" was escaped as "my-service.service\x0d" (maybe you should use systemd-escape?)
  • Failed to start my-service.service\x0d.service: Unit my-service.service\x0d.service not found. Invalid unit name "my-service.service" was escaped as "my-service.service\x0d" (maybe you should use systemd-escape?)
  • Unit my-service.service\x0d.service could not be found.

Some lines were broken up. It seems to be something related to escaping. For some reason, adding a space at the end of each line makes it sort of work, but still not without errors.

Googling the errors turns up some hits about using --, and combining that with the trailing space makes some commands work, but still gives an escaping error.
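The \x0d in the unit names is a carriage return: the script evidently has Windows (CRLF) line endings, so each remote command picks up a trailing \r. A minimal, self-contained sketch of diagnosing and fixing this, assuming the script is called deploy.sh:

```shell
# Create a demo script with Windows (CRLF) line endings:
printf 'systemctl status my-service.service\r\n' > deploy.sh

# cat -A makes the problem visible: CRLF lines end in ^M$
cat -A deploy.sh

# Strip the carriage returns in place (dos2unix deploy.sh also works):
sed -i 's/\r$//' deploy.sh
cat -A deploy.sh   # lines now end in a plain $
```

After stripping the carriage returns, the trailing-space and -- workarounds should no longer be needed.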

How to automatically turn off write cache after the kernel "hard resetting link"

Posted: 25 Feb 2022 01:50 AM PST

I turned off write caching on a SATA drive using hdparm -W0 $DEV

Now, sometimes the kernel decides it's a good idea to hard reset a SATA link in order to get communication with a drive back up and running (maybe it is a good idea; I won't judge that here).

However, if that happens, the drive's write cache will be re-enabled. Upon plugging in, I can catch a udev event to automatically run hdparm to turn off the write cache.

But I can't find any information about how to catch these "hard resetting link" events to fire up hdparm again (or make sure the write cache is disabled again by some other means).
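For reference, the hot-plug part described above can be handled with a udev rule along these lines (the match and paths are illustrative assumptions, and should be narrowed to the disk in question; whether the kernel also emits a change event after a link reset is exactly the open question here):

```
# /etc/udev/rules.d/60-no-write-cache.rules (sketch; narrow the match,
# e.g. with ENV{ID_SERIAL}, so it only hits the intended drive)
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", \
    RUN+="/usr/sbin/hdparm -W0 /dev/%k"
```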

Generating Message-ID for mails sent by crond

Posted: 25 Feb 2022 01:31 AM PST

On my NixOS installation I have Vixie cron installed, which runs regular jobs. The output of the scripts is sent by e-mail using ssmtp on the same host, which forwards the mails to my standard mail server (Postfix with rspamd).

My problem now is that most of these mails are stored in my Spam folder by the mail server. I checked the classification that is done by the spam filter and saw that one big 'problem' is that the messages sent by cron don't include a Message-ID header.

I searched for options to make crond generate Message-IDs, or to make ssmtp inject them when they are not already present (as other MSAs can do), but found no way to implement either.

What could I do to handle that better and to make sure these administrative mails are not classified as Spam?
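As an illustration of one possible workaround (not a crond or ssmtp feature): a small wrapper script, invoked in place of the real sendmail, could prepend a Message-ID before forwarding the mail. A runnable sketch, with the message generated inline and cat standing in for /usr/sbin/sendmail:

```shell
#!/bin/sh
# Hypothetical wrapper: generate a unique Message-ID and prepend it to the
# header block of the message, then hand the result to the MTA.
# In this demo the message is generated inline and "cat" replaces sendmail.
mid="<$(date +%s).$$@$(hostname)>"
printf 'Subject: Cron job output\n\njob ran\n' |
    { printf 'Message-ID: %s\n' "$mid"; cat; } | cat
```

A real wrapper should first check whether a Message-ID header already exists; formail -a Message-ID: (from procmail) does exactly that, generating a unique id only when the header is missing.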

How to give Wireguard client access to Internet only

Posted: 25 Feb 2022 12:54 AM PST

I have a couple of WireGuard interfaces set up and can per peer decide to give access to server only or server and LAN/Internet. What I want to do for a specific peer is to give access to the Internet only and not to the server and LAN.

I think that I can't do this on tunnel/interface level but have to do it with iptables in the peer config - right? How would I go about doing this?

I have tried to find information regarding this, but I'm 4 country borders away from the server and terrified to configure something wrong :-$ The best way would probably be to take the whole iptables and routing course, but trying to find something quicker than that. All my Internet searches miss my problem because most people have problems with clients not being able to reach the Internet through the tunnel...
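For reference, this kind of per-peer policy does typically live in iptables on the server rather than in the WireGuard config. A sketch, where the peer's tunnel address (10.0.0.5), the WireGuard interface (wg0), the LAN range (192.168.0.0/24) and the uplink (eth0) are all assumptions to adapt before use:

```shell
# Block the peer from reaching the LAN and the server itself...
iptables -A FORWARD -i wg0 -s 10.0.0.5 -d 192.168.0.0/24 -j DROP
iptables -A INPUT   -i wg0 -s 10.0.0.5 -j DROP
# ...but let everything else (i.e. the Internet) through, NATed out:
iptables -A FORWARD -i wg0 -s 10.0.0.5 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.5 -o eth0 -j MASQUERADE
```

Rules like these can be added and removed automatically via PostUp/PostDown lines in the server-side interface config, so nothing needs to be typed by hand on the remote box.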

I can't view my new SSD on my XenServer. What happened?

Posted: 25 Feb 2022 12:53 AM PST

I have an HP Gen9 server with XenServer (XenCenter 7.0), and I installed a new SSD (Samsung EVO, 1 TB), but the disk does not appear in fdisk -l. What happened? I want to mount the disk and create an SR...

Thanks and greetings

Tomcat application (ERDDAP server) behind a proxy redirection issues

Posted: 25 Feb 2022 12:38 AM PST

I have an ERDDAP instance running on a Tomcat server behind a NGINX reverse proxy. The environment is completely on Kubernetes, the RP is an NGINX ingress-controller that forwards the request on port 443 to the service instance on port 8080 associated to the container where Tomcat (and ERDDAP instance) runs.

I found this tutorial (https://www.n0r1sk.com/post/nginx-reverse-proxy-with-ssl-offloading-and-apache-tomcat-backends/) that shows how to configure server.xml for a Tomcat behind a reverse proxy, so the HTTP Connector for my Tomcat server is:

<Connector server="Apache"
           secure="true"
           port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="20000"
           proxyPort="443"
           relaxedPathChars='[]|'
           relaxedQueryChars='[]:|{}^&#x5c;&#x60;&quot;&lt;&gt;' />

With this configuration, when I request the URL https://erddap.ve.ismar.cnr.it/erddap:

GET /erddap/ HTTP/1.1
Host: erddap.ve.ismar.cnr.it
X-Request-ID: 484e514b4038614090bf34061e9287f3
X-Real-IP: 10.104.235.192
X-Forwarded-For: 10.104.235.192
X-Forwarded-Host: erddap.ve.ismar.cnr.it
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Scheme: https
...

I get the following response:

HTTP/1.1 302
Strict-Transport-Security: max-age=0
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
vary: Origin
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: Access-Control-Allow-Origin,Access-Control-Allow-Credentials
Location: https://erddap.ve.ismar.cnr.it/erddap/index.html
Content-Length: 0
Date: Thu, 24 Feb 2022 17:55:41 GMT
Server: Apache

and everything works fine with the proxy forwarding and the backend Tomcat response.

But if I add in Tomcat Connector configuration the parameter scheme="https" as suggested in the tutorial mentioned above:

<Connector server="Apache"
           secure="true"
           port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="20000"
           proxyPort="443"
           scheme="https"
           relaxedPathChars='[]|'
           relaxedQueryChars='[]:|{}^&#x5c;&#x60;&quot;&lt;&gt;' />

the same request fails and I get the following response from Tomcat:

HTTP/1.1 302
Strict-Transport-Security: max-age=0
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
vary: Origin
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: Access-Control-Allow-Origin,Access-Control-Allow-Credentials
Location: (not specified)/erddap/index.html
Content-Length: 0
Date: Thu, 24 Feb 2022 18:02:44 GMT
Server: Apache

You can see that the Location: header is completely wrong, with a "(not specified)" prefix, which causes the client to make the subsequent request for the URL https://erddap.ve.ismar.cnr.it/erddap/(not%20specified)/erddap/index.html (and of course that request fails).

Can anyone help me spot where the problem in my Tomcat configuration could be? Why is the behaviour so different after just adding the scheme="https" parameter in server.xml?
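For comparison, write-ups of proxied Tomcat setups usually set proxyName alongside proxyPort and scheme, so that Tomcat reconstructs absolute URLs with the external host name; a sketch using this site's host name (whether ERDDAP's own base-URL setting in setup.xml is also involved in the literal "(not specified)" is an assumption worth checking):

```
<Connector server="Apache" secure="true" port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           scheme="https"
           proxyName="erddap.ve.ismar.cnr.it"
           proxyPort="443"
           relaxedPathChars='[]|'
           relaxedQueryChars='[]:|{}^&#x5c;&#x60;&quot;&lt;&gt;' />
```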

Many thanks in advance, Pierpaolo

DNS problems on pool of preemptible-only nodes on GKE: endpoints of kube-dns service keeps failed pods

Posted: 25 Feb 2022 02:07 AM PST

I do have a GKE k8s cluster (k8s 1.21) that consists of preemptible nodes only, which includes critical services like kube-dns. It's a dev machine which can tolerate some broken minutes a day. Every time a node gets shut down which hosts a kube-dns pod, I run into DNS resolution problems that persist until I delete the failed pod (in 1.21, pods stay "Status: Failed" / "Reason: Shutdown" until manually deleted).

While I do expect some problems on preemptible nodes while they are being recycled, I would expect this to self-repair after some minutes. The underlying reason for the persistent problems seems to be that the failed pod does not get removed from the k8s Service / Endpoint. This is what I can see in the system:

Status of the pods via kubectl -n kube-system get po -l k8s-app=kube-dns

NAME                        READY   STATUS       RESTARTS   AGE
kube-dns-697dc8fc8b-47rxd   4/4     Terminated   0          43h
kube-dns-697dc8fc8b-mkfrp   4/4     Running      0          78m
kube-dns-697dc8fc8b-zfvn8   4/4     Running      0          19h

IP of the failed pod is 192.168.144.2 - and it still is listed as one of the endpoints of the service:

kubectl -n kube-system describe ep kube-dns brings this:

Name:         kube-dns
Namespace:    kube-system
Labels:       addonmanager.kubernetes.io/mode=Reconcile
              k8s-app=kube-dns
              kubernetes.io/cluster-service=true
              kubernetes.io/name=KubeDNS
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2022-02-21T10:15:54Z
Subsets:
  Addresses:          192.168.144.2,192.168.144.7,192.168.146.29
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    dns-tcp  53    TCP
    dns      53    UDP
Events:  <none>

I know others worked around these issues by scheduling kube-dns onto other nodes, but I would rather make this self-healing, since node failures can still happen on non-preemptible nodes; they are just less likely.

My questions:

  • Why is the failed pod still listed as one of the endpoints of the service, even hours after the initial node failure?
  • What can I do to mitigate the problem (besides adding some non-ephemeral nodes)?

It seems that kube-dns in the default GKE deployment does not have a readiness probe attached to dnsmasq (port 53), which is what the kube-dns service targets, and that adding one could solve the issue - but I suspect it's not there for a reason that I don't yet understand.
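As a stop-gap (a cleanup command, not a fix for the endpoint behaviour itself), the leftover Shutdown pods can be deleted in one go, which also removes their stale endpoints; something like this could run from a small CronJob:

```shell
# Delete pods that preemption left in phase Failed (Status: Failed / Reason: Shutdown):
kubectl -n kube-system delete pod --field-selector=status.phase=Failed
```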

ERROR FirebaseError: Quota exceeded with recaptcha authentication for both developement and production

Posted: 24 Feb 2022 11:58 PM PST

We are using Firebase's reCAPTCHA for authentication with Ionic 4. One month after the project's registration, Firebase started showing "You have gone over your daily usage limits" on the Firebase database and returning "ERROR FirebaseError: Quota exceeded" for authentication.

And for the production build, Firebase does not allow more than 4 requests for phone authentication with reCAPTCHA.

Please help; we need to get this working ASAP.

What does "SSL alert number 20" mean?

Posted: 24 Feb 2022 11:45 PM PST

I have a NodeJS server running for 2 years. I use this error handler for JSON:

// Error handler
app.use(methodOverride())
app.use(function (err, req, res, next) {
  console.error("\n# JSON error!", req.body, " from ", req.ip);
  console.error(err.stack);
  next(err);
})

Today, for the first time, I saw this error in the logs:

# JSON error! {}  from  ::ffff:68.118.161.91
[2022-02-24T11:23:20.798Z] Error: 140257308841792:error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1544:SSL alert number 20
[2022-02-24T11:23:20.798Z] Error: 140257308841792:error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1544:SSL alert number 20

What does it mean, and what can I do to fix it?

Is it possible to move the meta-data of an XFS filesystem to another disk (such as an SSD)?

Posted: 24 Feb 2022 11:33 PM PST

I have a VPS with a big XFS filesystem.

Now I have the possibility to add an SSD device to that VPS.

Is it possible to change the meta-data destination from the existing partition to the SSD partition, to improve logging?

All the information I can find is about doing this before starting to use the filesystem, not while it is in use, and that scares me because it is a storage VPS that already holds 20 TB of data.

Here is the current information:

xfs_info /dev/mapper/stor-stor_vol
meta-data=/dev/mapper/stor-stor_vol isize=512    agcount=40, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=10737417216, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

I would like to move the meta-data to an SSD device.
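For context on why every guide describes doing this up front: as far as I know, XFS only accepts an external log device at mkfs time, so on an existing 20 TB filesystem this would mean backing up and recreating it. A sketch of what the from-scratch variant looks like (the SSD partition name is a placeholder, and mkfs destroys all data on the target):

```shell
# Recreate the filesystem with its log on the SSD partition (DESTROYS DATA):
mkfs.xfs -l logdev=/dev/sdb1,size=521728b /dev/mapper/stor-stor_vol
# Every subsequent mount must then name the log device explicitly:
mount -o logdev=/dev/sdb1 /dev/mapper/stor-stor_vol /mnt/stor
```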

cannot connect to gcp instance Remote side unexpectedly closed network connection

Posted: 24 Feb 2022 10:53 PM PST

When trying to connect to a GCP VM instance via SSH key, the error that I am getting is:

Connection closing...Socket close. Connection closed by foreign host. Disconnected from remote host(akr@34.125.177.6)

When I tried connecting through SSH inside the GCP console, I got the following error:

Connection via Cloud Identity-Aware Proxy Failed Code: 4010 Reason: destination read failed You may be able to connect without using the Cloud Identity-Aware Proxy.

The commands that I had run before the issue appeared are:

cd /var/lib/
sudo chmod -R 777 *
sudo chmod -R 777 * lib/

Can changing the permissions cause this issue?

When I changed the permissions to 755, it worked!

Is rejecting email based on IP address in chain allowed?

Posted: 25 Feb 2022 02:42 AM PST

A major ISP is rejecting email (bouncing with error 550) on the basis that an IP address in the transmission chain is on their 'blocklist'. Are they allowed to do this and still be IETF compliant?

All I can find is RFC 2821:

An SMTP server MAY verify that the domain name parameter in the EHLO
command actually corresponds to the IP address of the client.
However, the server MUST NOT refuse to accept a message for this
reason if the verification fails: the information about verification
failure is for logging and tracing only

This indicates that systems must not reject email, though in a different scenario.

Can someone enlighten us?

Is it necessary to set a different TTL before changing a Route53 DNS record value?

Posted: 25 Feb 2022 12:48 AM PST

We initially created the Route53 DNS record pointing at a load balancer origin DNS name, with a TTL of 1 day.

After some days, we want to change the value to another load balancer origin DNS name. Do we need to set the record's TTL to a short time, such as 1 hour, first, and then change the DNS value after that hour has passed? Will resolvers then refresh their caches and pick up the new record cleanly?

How to enter "special" characters in the password file?

Posted: 24 Feb 2022 10:34 PM PST

What is the range of characters allowed in the password.client file in Exim4?

My password has the :, ! and . characters. Are these permitted as is? If not, how do I escape them?

PS: The credentials are for Exim as a client to a "smarthost".

ssh and sshfs connection through nginx reverse proxy problems

Posted: 25 Feb 2022 01:00 AM PST

I have a small annoying problem with ssh and sshfs connection to a server that is behind an nginx reverse proxy.

I use sshfs remotely to mount some folders from the server and ssh to connect to it and both get disconnected when idle. I solved the ssh connection by adding the ServerAliveInterval and ServerAliveCountMax in .ssh/config but the problem remains with sshfs.

I have used sshfs in the past (without the reverse proxy), adding those two options to the fstab line, and all was well; but in this case it doesn't work. The folders are mounted and I can use them, but if they remain idle for a minute they disconnect.

I cannot figure this out! Is there a way to solve this? Is there some setting in nginx that could also solve the ssh connection problem without those options? (Some people connecting to the server are not too technical, and they complain about disconnects!)

Here is the ssh portion of the reverse proxy:

upstream server {
    server 10.10.0.xxx:SSH_INTERNAL_PORT;
}

server {
    listen SSH_EXTERNAL_PORT;
    proxy_pass server;
}

EDIT:

I forgot to mention that I have already set this in /etc/nginx/nginx.conf:

# ssh reverse proxy conf
stream {
    include /etc/nginx/ssh_enabled/*;
}
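One nginx-side knob that may explain idle disconnects: the stream module closes proxied connections after proxy_timeout (10 minutes by default, so it is worth checking whether it is lowered somewhere in the included files). A sketch raising it for the ssh streams:

```
# ssh reverse proxy conf (sketch): keep idle ssh/sshfs sessions alive longer
stream {
    include /etc/nginx/ssh_enabled/*;
    proxy_timeout 12h;   # default 10m; idle proxied streams are closed after this
}
```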

.htaccess mod_rewrite not catching all RewriteRules

Posted: 25 Feb 2022 01:33 AM PST

There is a PHP application with a PHP router as entry point for all the requests placed inside index.php. I am trying to write a .htaccess file to forward every request to index.php except for API requests and for design resources. So I am trying to obtain the following behavior:

  1. example.com/api/v1/* should serve api_v1.php
  2. example.com/any_path/resource.css => should serve resource.css if it exists (there are multiple extensions allowed; .css is just one example)
  3. serve index.php for anything that did not fall for the conditions above

Given that .htaccess is evaluated from top to bottom, from particular to general conditions, and that the [L] flag stops execution once a rule matches, I have come up with the following .htaccess:

RewriteEngine On

# Prevent 301 redirect with slash when folder exists and does not have slash appended
# This is not a security issue here since a PHP router is used and all the paths are redirected
DirectorySlash Off

#1. Rewrite for API url
RewriteRule ^api/v1/(.*)$ api_v1.php [L,NC]

#2. Rewrite to index.php except for design/document/favicon/robots files that exist
RewriteCond %{REQUEST_URI} !.(css|js|png|jpg|jpeg|bmp|gif|ttf|eot|svg|woff|woff2|ico|webp|pdf)$
RewriteCond %{REQUEST_URI} !^(robots\.txt|favicon\.ico)$ [OR]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [L]

#3. Rewrite anything else
RewriteRule ^(.*)$ index.php [L]

Using the code above, it seems that accessing example.com/api/v1/ does not execute api_v1.php. Instead, processing continues and index.php is executed.

example.com/api/v1/ would only work if I remove all conditions after line 8.

What am I doing wrong here?
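One general mod_rewrite behaviour worth knowing here (not specific to this file): in per-directory (.htaccess) context, [L] only ends the current pass; the rewritten URL (api_v1.php) is then fed through the rules again, where it can match the index.php catch-all. On Apache 2.4+, the [END] flag stops all passes:

```
#1. Rewrite for API url: [END] (Apache 2.4+) stops rewrite processing for good,
#   where [L] would let api_v1.php be re-run through the later rules.
RewriteRule ^api/v1/(.*)$ api_v1.php [END,NC]
```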

Scaling Elasticsearch down to single-node

Posted: 24 Feb 2022 11:26 PM PST

Is it possible to scale Elasticsearch from multiple nodes down to one node?

I have a 3-node cluster that is way overkill for the amount of data being logged. To scale it down, I set "cluster.routing.allocation.exclude._ip" to the IPs of nodes 2 and 3 to get all the data onto one node.

I stopped Elasticsearch on node 3, and the cluster remained healthy.

In preparation to turn off the second node, I adjusted the cluster settings to require a quorum of 1 and make sure it was persistent instead of transient. Then I stopped Elasticsearch on node 2.

Finally I went on to node 1 and set discovery.type to single-node and restarted Elasticsearch.

Elasticsearch is throwing an error:

cannot start with [discovery.type] set to [single-node] when local node
{node1.customer.local}{r5tnzHEYRN6TNPNur9jpqA}{PjBDleWmTeSvkRUuUNWVWw}{10.132.135.55}{10.132.135.55:9300}{dilm}{ml.machine_memory=33730138112, xpack.installed=true, ml.max_open_jobs=20}
does not have quorum in voting configuration
VotingConfiguration{tlbB7vMgQXOzvO36iboqOQ,r5tnzHEYRN6TNPNur9jpqA,s1fLGX7RStGpFh2xPZkkIw}

How can I scale down to one node?
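The error suggests node 1 still remembers a three-node voting configuration. In Elasticsearch 7.x the supported way to shrink the master-eligible set is to exclude the departing nodes from the voting configuration before shutting them down; a sketch (host and node names are placeholders, and the node_names parameter needs 7.8+):

```shell
# Before stopping the nodes, remove them from the voting configuration:
curl -X POST "http://node1:9200/_cluster/voting_config_exclusions?node_names=node2,node3"
# Once they are gone for good, the exclusion list can be cleared again:
curl -X DELETE "http://node1:9200/_cluster/voting_config_exclusions?wait_for_removal=false"
```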

mount: mounting /dev on /root/dev failed: No such file or directory after converting lxc to virtual machine

Posted: 25 Feb 2022 12:04 AM PST

I've been trying to convert an lxc container to a virtual machine, but I've encountered a problem when trying to boot. I end up with the (initramfs) command line and I've got the following errors:

mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /run on /root/run failed: No such file or directory
run-init: opening console: No such file or directory
Target filesystem doesn't have requested /sbin/init.
run-init: opening console: No such file or directory (repeated a few times)
No init found. Try passing init= bootarg

BusyBox v1.22.1 (Ubuntu 1:1.22.0-15ubuntu1) built-in shell (ash)
Enter 'help' for a list of built-in commands.
(initramfs)

From the live cd ubuntu 16.04:

blkid:

/dev/sda1: UUID="3e671c97-7695-49e7-8c83-4527c94d8f14" TYPE="ext4" PARTUUID="406cef0c-01"
/dev/sda2: UUID="c555438a-fd29-4cad-a8cf-fe92c3b78e0b" TYPE="ext4" PARTUUID="406cef0c-02"
/dev/sr0: UUID="2018-07-31-01-12-13-00" LABEL="Ubuntu 16.04.5 LTS amd64" TYPE="iso9660" PTUUID="6be2cd0d" PTTYPE="dos"

cat /etc/fstab:

UUID="3e671c97-7695-49e7-8c83-4527c94d8f14"  /boot  ext4  defaults,noatime  0 0
UUID="c555438a-fd29-4cad-a8cf-fe92c3b78e0b"  /      ext4  defaults,noatime  0 1
UUID="688b6a9b-0f30-450c-b8d6-1316c0d17798"  none   swap  defaults          0 0

Relevant parts of /boot/grub/grub.cfg:

set root='hd0,msdos2'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos2 --hint-efi=hd0,msdos2 --hint-baremetal=ahci0,msdos2 c555438a-fd29-4cad-a8cf-fe92c3b78e0b
else
  search --no-floppy --fs-uuid --set=root c555438a-fd29-4cad-a8cf-fe92c3b78e0b
fi

and:

menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-c555438a-fd29-4cad-a8cf-fe92c3b78e0b' {
        recordfail
        load_video
        gfxmode $linux_gfx_mode
        insmod gzio
        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 3e671c97-7695-49e7-8c83-4527c94d8f14
        else
          search --no-floppy --fs-uuid --set=root 3e671c97-7695-49e7-8c83-4527c94d8f14
        fi
        linux   /vmlinuz-4.15.0-30-generic root=UUID=3e671c97-7695-49e7-8c83-4527c94d8f14 ro quiet splash $vt_handoff
        initrd  /initrd.img-4.15.0-30-generic
}

I changed the UUID on the linux /vmlinuz-4.15.0-30-generic line so that it matches /dev/sda1 and not /dev/sda2. The GRUB installer had placed the UUID of /dev/sda2, where the root partition is, and I'm not sure why. Any ideas as to how to solve this problem?

Limited Access to Domain Controller for Active Directory Administration

Posted: 24 Feb 2022 10:02 PM PST

I have to provide a group of Jr. Sys Admins with limited access to a domain controller for the purpose of limited Active Directory user and group administration (i.e. user creation, password resets, etc.). I have implemented delegation to limit the scope of tasks the Jr. Sys Admins may execute on the Active Directory. Some of these users use macOS, so using Remote Server Administration Tools, as one might on a Windows machine, is not an option for them.

As such I would like to give them RDP access to a domain controller. I'd like them to be able to open Active Directory Users and Computers (without prompting for administrator credentials) but limit their access to the remainder of the system as much as possible. Note: I may need to give them access to a few other items for other related job responsibilities.

  • What is the best way to accomplish this?
  • If imposing restrictions via Group Policy is the best method, what is the most efficient way to construct a policy that would accomplish my stated objective?

Spring Boot Apache SSL Reverse Proxy

Posted: 25 Feb 2022 12:04 AM PST

I have a Spring Boot application that runs on an Amazon Linux server. I use Apache HTTP Server as a proxy server for this application. Recently I installed a Let's Encrypt SSL certificate and added a virtual host entry in Apache for it. However, I cannot get it to work with Spring Boot properly; the non-SSL version seems to be working fine, though.

What I observed is that the requests do reach the Spring Boot application when a user calls the https version, but the user receives an HTTP 404 error from Apache. For example, this works fine: http://example.com/oauth/token, but this does not work and returns 404: https://example.com/oauth/token

I posted the config files below, what am I missing?

vhosts.conf

<VirtualHost *:443>
    ServerName example.com
    ServerAlias www.example.com
    ServerAdmin support@example.com
    DocumentRoot /var/www/example.com/public_html
    ErrorLog /var/www/example.com/logs/error.log
    CustomLog /var/www/example.com/logs/access.log combined

    RewriteEngine On
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
    RewriteRule ^ - [L]
    RewriteRule ^(/api/v1) - [L]
    RewriteRule ^(/oauth/token) - [L]

    RewriteRule ^ /index.html [L]

    SSLEngine on
    SSLCertificateFile /var/www/example.com/cert/cert.pem
    SSLCertificateKeyFile /var/www/example.com/cert/privkey.pem

    ProxyPreserveHost on
    RequestHeader set X-Forwarded-Proto https
    RequestHeader set X-Forwarded-Port 443
    ProxyPass /api/v1 http://127.0.0.1:8080/api/v1
    ProxyPassReverse /api/v1 http://127.0.0.1:8080/api/v1
    ProxyPass /oauth/token http://127.0.0.1:8080/oauth/token
    ProxyPassReverse /oauth/token http://127.0.0.1:8080/oauth/token
</VirtualHost>

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    ServerAdmin support@example.com
    DocumentRoot /var/www/example.com/public_html
    ErrorLog /var/www/example.com/logs/error.log
    CustomLog /var/www/example.com/logs/access.log combined

    RewriteEngine On
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
    RewriteRule ^ - [L]
    RewriteRule ^(/api/v1) - [L]
    RewriteRule ^(/oauth/token) - [L]

    RewriteRule ^ /index.html [L]

    ProxyPreserveHost on
    ProxyPass /api/v1 http://127.0.0.1:8080/api/v1
    ProxyPassReverse /api/v1 http://127.0.0.1:8080/api/v1
    ProxyPass /oauth/token http://127.0.0.1:8080/oauth/token
    ProxyPassReverse /oauth/token http://127.0.0.1:8080/oauth/token
</VirtualHost>

application.properties

server.context-path=/api/v1
server.address=127.0.0.1
server.port=8080
server.use-forward-headers=true
server.tomcat.remote_ip_header=x-forwarded-for
server.tomcat.protocol_header=x-forwarded-proto

Authentication is required to manage system services or units.

Posted: 24 Feb 2022 11:03 PM PST

I have a strange issue: whenever I try to stop/start a daemon as a regular user, it asks me to authenticate with the credentials of another regular user. For example:

[bob@server ~]$ systemctl stop some-daemon.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: alice
Password: 

Why is it asking for alice to authenticate when bob is logged in, and how do I fix this?
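For background: polkit prompts for alice because she is the account polkit selected as an "admin" identity for this action; it is not about bob's session. If the intent is instead to let members of a group manage units without any prompt, a polkit rules-file sketch (the group name is an assumption):

```
// /etc/polkit-1/rules.d/50-manage-units.rules (sketch)
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
```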

Can't connect to Azure DocumentDB using MongoDB Compass or MongoVUE

Posted: 25 Feb 2022 02:06 AM PST

I've created a DocumentDB instance on Microsoft Azure, but I'm unable to connect to it from MongoDB Compass (or MongoVUE). In MongoDB Compass, I've entered all of the connection parameters and it logs in, but then it opens a window which just sits there with a loading icon forever. I can connect to a MongoDB instance on the local machine, so I know that works. Is MongoDB Compass incompatible with DocumentDB for some reason? Is there another tool that I can use to connect to and browse my DocumentDB instance?

ceph osd down and rgw Initialization timeout, failed to initialize after reboot

Posted: 24 Feb 2022 11:06 PM PST

CentOS 7.2, Ceph with 3 OSDs and 1 MON running on the same node. radosgw and all the daemons run on that node too, and everything was working fine. After rebooting the server, the OSDs apparently can no longer communicate, and radosgw does not work properly; its log says:

2016-03-09 17:03:30.916678 7fc71bbce880  0 ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403), process radosgw, pid 24181
2016-03-09 17:08:30.919245 7fc712da8700 -1 Initialization timeout, failed to initialize

ceph health shows:

HEALTH_WARN 1760 pgs stale; 1760 pgs stuck stale; too many PGs per OSD (1760 > max 300); 2/2 in osds are down  

and ceph osd tree give:

ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2.01999 root default
-2 1.01999     host app112
 0 1.00000         osd.0         down  1.00000          1.00000
 1 0.01999         osd.1         down        0          1.00000
-3 1.00000     host node146
 2 1.00000         osd.2         down  1.00000          1.00000

and service ceph status results:

=== mon.app112 ===
mon.app112: running {"version":"0.94.6"}
=== osd.0 ===
osd.0: running {"version":"0.94.6"}
=== osd.1 ===
osd.1: running {"version":"0.94.6"}
=== osd.2 ===
osd.2: running {"version":"0.94.6"}
=== osd.0 ===
osd.0: running {"version":"0.94.6"}
=== osd.1 ===
osd.1: running {"version":"0.94.6"}
=== osd.2 ===
osd.2: running {"version":"0.94.6"}

and this is service radosgw status:

Redirecting to /bin/systemctl status radosgw.service
● ceph-radosgw.service - LSB: radosgw RESTful rados gateway
   Loaded: loaded (/etc/rc.d/init.d/ceph-radosgw)
   Active: active (exited) since Wed 2016-03-09 17:03:30 CST; 1 day 23h ago
     Docs: man:systemd-sysv-generator(8)
  Process: 24134 ExecStop=/etc/rc.d/init.d/ceph-radosgw stop (code=exited, status=0/SUCCESS)
  Process: 2890 ExecReload=/etc/rc.d/init.d/ceph-radosgw reload (code=exited, status=0/SUCCESS)
  Process: 24153 ExecStart=/etc/rc.d/init.d/ceph-radosgw start (code=exited, status=0/SUCCESS)

Seeing this, I have tried sudo /etc/init.d/ceph -a start osd.1 and stop for a couple of times, but the result is the same as above.

sudo /etc/init.d/ceph -a stop osd.1
=== osd.1 ===
Stopping Ceph osd.1 on open-kvm-app92...kill 12688...kill 12688...done

sudo /etc/init.d/ceph -a start osd.1
=== osd.1 ===
create-or-move updated item name 'osd.1' weight 0.02 at location {host=open-kvm-app92,root=default} to crush map
Starting Ceph osd.1 on open-kvm-app92...
Running as unit ceph-osd.1.1457684205.040980737.service.

Please help. thanks

EDIT: it seems like the MON cannot talk to the OSDs, although both daemons are running OK. The OSD log shows:

2016-03-11 17:35:21.649712 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:22.649982 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:23.650262 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:24.650538 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:25.650807 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:25.779693 7f0024c96700  5 osd.0 234 heartbeat: osd_stat(6741 MB used, 9119 MB avail, 15861 MB total, peers []/[] op hist [])
2016-03-11 17:35:26.651059 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:27.651314 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:28.080165 7f0024c96700  5 osd.0 234 heartbeat: osd_stat(6741 MB used, 9119 MB avail, 15861 MB total, peers []/[] op hist [])

IIS application request routing changes 206 partial content to 200

Posted: 24 Feb 2022 11:06 PM PST

I've set up a reverse proxy server in an Azure cloud service using IIS rewrite rules and the Application Request Routing module (according to the instructions here). Everything is working well except for calls to endpoints I've created to download mp4 files. These endpoints can serve partial content when the request contains the Range header. The problem is that when I hit the server directly, it correctly responds with 206 (Partial Content) and the correct range of bytes, but sometimes when I hit the endpoints through the proxy server, it responds with a 200 and the full file contents, which causes errors in video playback in Chrome.

Example: When hitting the server directly with a request like this:

GET server.domain.com/api/adFile/fileName
With header: Range: bytes=168-3922822

I correctly receive a 206 response. Here are some of the relevant headers in the response:

  • Cache-Control: no-cache
  • Pragma: no-cache
  • Content-Length: 3922655
  • Content-Type: video/mp4
  • Content-MD5: f1+K8OT8TEjvtlPU5iUY8a==
  • Content-Range: bytes 168-3922822/3922823
  • Expires: -1
  • Last-Modified: Tue, 16 Feb 2016 15:46:46 GMT
  • ETag: "0x8D336E86040C217"
  • Server: Microsoft-IIS/8.0
  • X-AspNet-Version: 4.0.30319
  • X-Powered-By: ASP.NET

When hitting the server through the reverse proxy, with a request like this:

GET proxy.domain.com/api/adFile/fileName
With header: Range: bytes=168-3922822

I incorrectly receive a 200 status code and the full file contents. Here are the relevant headers from that response:

  • Cache-Control: no-cache
  • Pragma: no-cache
  • Content-Length: 3922823
  • Content-Type: video/mp4
  • Content-MD5: f1+K8OT8TEjvtlPU5iUY8a==
  • Expires: -1
  • Last-Modified: Tue, 16 Feb 2016 15:46:46 GMT
  • ETag: "0x8D336E86040C217"
  • Server: Microsoft-IIS/8.5
  • X-AspNet-Version: 4.0.30319
  • X-Powered-By: ASP.NET
  • X-Powered-By: ARR/3.0
  • X-Powered-By: ASP.NET

Is there any way that I can modify the proxy behavior to match the behavior of the main server (i.e. return just the partial content requested)? It seems that it might be caching the file contents and serving all of them when the requested byte range is close to the full file size.
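A quick way to compare the two paths is to issue the same small Range request against both hosts and look only at the status code. This is a diagnostic sketch, not a fix; the hostnames are the examples from above:

```shell
# Print just the HTTP status code for a 1 KiB Range request.
# A server honoring the Range header answers 206; one that ignores or
# buffers away the header (as the proxy appears to) answers 200.
check_range() {
    curl -s -o /dev/null -w '%{http_code}' \
        -H 'Range: bytes=0-1023' "$1"
}

check_range 'http://server.domain.com/api/adFile/fileName'   # expect 206
check_range 'http://proxy.domain.com/api/adFile/fileName'    # expect 200 (the bug)
```

Dumping the full headers from both hosts with curl -sD - -o /dev/null can also show whether ARR drops Accept-Ranges or Content-Range on the way through.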

How to define another source IP in snmp traps

Posted: 25 Feb 2022 02:06 AM PST

I'm looking for a way to change source IP in traps sent by snmpd (CentOS 6.6).

The requirement is to put a configurable virtual IP (VIP) in the trap instead of the station's real IP when the system is defined in high-availability mode.

Attempts to define another IP via snmpd.conf like:

trapsess -v 2c -c public -Ci 5.5.5.5:162 0.0.0.0:162  

do not succeed.
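One avenue worth checking (a hedged sketch, not a verified fix for the net-snmp build on CentOS 6): net-snmp has a clientaddr configuration token that sets the source address for outgoing packets, traps included, provided the VIP is already configured on a local interface:

```shell
# Hypothetical snmpd.conf snippet; on some net-snmp versions the token
# belongs in snmp.conf instead, and the address must exist locally:
clientaddr 5.5.5.5
```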


@Lenniey,

The procedure was to create an additional virtual interface and a route for the virtual IP address:

cd /etc/sysconfig/network-scripts/

cp  ifcfg-eth0 ifcfg-eth0:1

vi ifcfg-eth0:1 (define virtual IP, remove gateway)

service network restart

ip route add VIRTUAL_IP/32 dev eth0:1

But traps sent from my application via AgentX to snmpd and forwarded to the target address still carry the same local IP address as before these changes. BR, Alex
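That outcome is expected: the kernel chooses a source address per route, so an alias interface alone changes nothing. A diagnostic sketch (10.0.0.50 stands in for the real trap receiver; the VIP 5.5.5.5 is from the example above):

```shell
# Show which source address the kernel will pick toward the trap receiver:
ip route get 10.0.0.50

# Pin the VIP as the preferred source on that route
# (needs root, and 5.5.5.5 must already be on a local interface):
ip route replace 10.0.0.50/32 dev eth0 src 5.5.5.5
```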

Installing SecAst on AsteriskNOW with CentOS

Posted: 25 Feb 2022 01:07 AM PST

Having some issues installing SecAst for IPS.

Followed the directions up to 2.1.6 and found a way (on this forum) to install qt5-qtbase (thanks), but when I run ldd /usr/local/secast/secast the return is "not a dynamic executable". I unpacked and installed the -x86_64-rh6 tarball. Any suggestions?

Also, the directions in 2.1.9 say to make a directory structure with /etx/xdg. Is this a typo; should it be /etc/xdg, i.e. /etc/xdg/generationd? If not, where under /etc/ does the directory go?

Also, in /usr/local/secast/ there appears to be a secast file, but when secast --help is run the return is "command not found". The files unpacked with no errors (I re-unpacked to be sure), and the file is shown in green in the ls output (i.e. marked executable).
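Two quick checks may narrow both symptoms down; this is a generic diagnostic sketch, not SecAst-specific advice. "not a dynamic executable" from ldd usually means the binary is statically linked, truncated, or built for a different architecture, and "command not found" usually just means /usr/local/secast is not on PATH:

```shell
# What kind of binary did the tarball actually deliver?
file /usr/local/secast/secast
# A healthy 64-bit Linux binary reports something like:
#   ELF 64-bit LSB executable, x86-64, ... statically/dynamically linked ...

# The directory is not on PATH, so invoke the binary explicitly:
/usr/local/secast/secast --help
```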

Thanks

Varnish installer cannot find PCRE when it is already installed

Posted: 25 Feb 2022 12:01 AM PST

I am trying to install Varnish-Cache 4 on my Mac OS X 10.9.3.

But I get this error:

checking for PCRE... no
configure: error: Package requirements (libpcre) were not met:

No package 'libpcre' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables PCRE_CFLAGS
and PCRE_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

The thing is, PCRE is installed. I can find it in /usr/bin/. When I do man pcre I get its documentation.

Any ideas? I am not sure how to solve this.

In config.log I found this:

configure:14734: $PKG_CONFIG --exists --print-errors "libpcre"
Package libpcre was not found in the pkg-config search path.
Perhaps you should add the directory containing `libpcre.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libpcre' found
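The failure is about pkg-config metadata, not the library itself: the system ships the pcre binaries and man pages but often no libpcre.pc file, so pkg-config cannot see it. A hedged workaround sketch, using pcre-config (normally installed alongside PCRE) to populate the variables configure itself suggests; the pkgconfig path below is an example:

```shell
# Feed configure the flags that pkg-config cannot discover:
export PCRE_CFLAGS="$(pcre-config --cflags)"
export PCRE_LIBS="$(pcre-config --libs)"
./configure

# Alternatively, if a libpcre.pc exists somewhere, point pkg-config at it:
export PKG_CONFIG_PATH=/usr/local/opt/pcre/lib/pkgconfig
```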

Migrating WebLogic 10.3.0 to new host. Slow managed server startup times

Posted: 25 Feb 2022 01:07 AM PST

We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed servers, basically 20x slower than our current production.

For instance, the Publish managed server takes ~30-45 seconds in current production, while in the new environment it takes ~10 minutes.

The setup uses the same domain structure and JVM as the current production environment. The same setup files are used. We use jdk1.6.0_33 on 64 bit architecture. We used the generic 64bit weblogic installer and used pack / unpack utilities to migrate the domain.

The JAVA_OPTS to start this server are: "-d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m"

The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we were not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during the phase of startup I also had the DBA check to ensure that Oracle RAC (11.2.0.3) wasn't also hitting some kind of process limit or if there was a tns listener issue.
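One classic culprit for exactly this symptom on a freshly built VM (a guess to check, not a diagnosis) is the JVM's SecureRandom blocking while the kernel entropy pool fills during startup:

```shell
# Values well under ~1000 on the new VM would support this theory:
cat /proc/sys/kernel/random/entropy_avail

# If so, the usual workaround is to point the JVM at the non-blocking pool
# (appended to the JAVA_OPTS quoted above):
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"
```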

The new host is quite a bit stricter with its server lockdowns, so there are a few differences:

  • Redhat 6.3 in new env, RH 5.7 in current
  • SElinux is targeted in new env and disabled in current
  • VM in new env and dedicated hardware in current
  • iptables disabled in current. It was enabled in new prod but I had them disable it just in case

I apologize for not being more specific; I am mostly hoping for some tips. I do not have the typical root access I would normally have in this environment. I am just hoping for a path forward. I did a few 'kill -3' thread dumps to see if there are blocked threads and got nada. The service works for all intents and purposes; it is just painfully slow.

Thank you all in advance for reading, and best regards. Wade

OpenVPN host cannot access client LAN

Posted: 24 Feb 2022 10:02 PM PST

I have an OpenVPN server, call it vpn-server, with a LAN 192.168.3.0/24 behind it. The client, vpn-client, also has a LAN behind it, 10.4.0.0/24. Machines on 192.168.3.0/24 can access 10.4.0.0/24 (with one exception). Machines on 10.4.0.0/24 can access 192.168.3.0/24. (Server and client are both Linux based.)

The one exception is that the VPN host itself cannot access 10.4.0.0/24. Someone in #openvpn on IRC mentioned that when the OpenVPN server connects to the client network, it uses the VPN IP, not the local IP, and that I should check my iptables masquerade rules. My masquerade rules and the interface config for the related interfaces are at http://pastebin.com/Q9RDy0es .

OpenVPN configuration files, for both server and client, can be found at: http://pastebin.com/gtfm82pE .

I feel like it's a firewall issue on the host side, but I can't seem to get it worked out. Do I need new/different masquerade rules? I'm pretty sure the VPN configurations are correct.
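Since the LAN hosts behind the server reach 10.4.0.0/24 fine, forwarding and the client-side routes look correct; only traffic the VPN host itself originates behaves differently, because the kernel sources it from whichever address it prefers for the tun0 route. Two hedged sketches; 10.8.1.1 as the server's tun0 address is an assumption inferred from the 10.8.1.x routes, so substitute the real addresses (and note the routing table shows a /16 for 10.4.0.0 while the prose says /24; match whichever is actually configured):

```shell
# Option 1: pin the tunnel-side source address on the route to the client LAN:
ip route replace 10.4.0.0/24 via 10.8.1.2 dev tun0 src 10.8.1.1

# Option 2: masquerade host-originated traffic into the tunnel so it always
# carries the tunnel address:
iptables -t nat -A POSTROUTING -o tun0 -d 10.4.0.0/24 -j MASQUERADE
```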

vpn-server routing table

Destination      Gateway          Genmask          Flags Metric Ref  Use Iface
10.54.1.9        *                255.255.255.255  UH    0      0      0 tun1
10.8.1.2         *                255.255.255.255  UH    0      0      0 tun0
<pubIP redacted> *                255.255.255.248  U     0      0      0 eth1
10.18.1.0        10.8.1.2         255.255.255.0    UG    0      0      0 tun0
172.16.20.0      10.54.1.9        255.255.255.0    UG    0      0      0 tun1
192.168.3.0      *                255.255.255.0    U     0      0      0 eth0
10.8.1.0         10.8.1.2         255.255.255.0    UG    0      0      0 tun0
10.54.1.0        10.54.1.9        255.255.255.0    UG    0      0      0 tun1
172.16.30.0      10.54.1.9        255.255.255.0    UG    0      0      0 tun1
10.3.0.0         10.54.1.9        255.255.255.0    UG    0      0      0 tun1
172.16.10.0      *                255.255.255.0    U     0      0      0 vlan4000
10.3.1.0         10.54.1.9        255.255.255.0    UG    0      0      0 tun1
10.4.0.0         10.8.1.2         255.255.0.0      UG    0      0      0 tun0
link-local       *                255.255.0.0      U     0      0      0 eth0
loopback         *                255.0.0.0        U     0      0      0 lo
default          <pubIP redacted> 0.0.0.0          UG    0      0      0 eth1

vpn-server output of iptables -L

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
FW-1-INPUT all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  192.168.3.0/24       anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere            icmp any
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain FW-1-INPUT (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere            icmp any
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:7788
ACCEPT     udp  --  anywhere             anywhere            udp dpt:ha-cluster
ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere            udp dpt:bootps
ACCEPT     udp  --  anywhere             anywhere            udp dpt:bootpc
ACCEPT     udp  --  anywhere             anywhere            udp dpt:openvpn
ACCEPT     tcp  --  sysmon.example.com   anywhere            tcp dpt:nrpe
ACCEPT     tcp  --  sysmon1.example.com  anywhere            tcp dpt:nrpe
ACCEPT     udp  --  sysmon1.example.com  anywhere            udp dpt:ntp
ACCEPT     udp  --  sysmon.example.com   anywhere            udp dpt:ntp
ACCEPT     tcp  --  anywhere             anywhere            tcp multiport dports iax,sip
ACCEPT     udp  --  anywhere             anywhere            udp multiport dports iax,sip
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited
