Sunday, September 12, 2021

Recent Questions - Server Fault


Does storage engine ndbcluster support Data-at-Rest Encryption?

Posted: 12 Sep 2021 09:16 PM PDT

I want to enable Data-at-Rest Encryption in ndbcluster. I have tried to find out how to do it, but nothing I've found solves the problem.

Is there another way? (I care about any solution, even if MySQL Enterprise Edition is required.)

My environment: Ubuntu 21.04, MySQL Cluster 8.0.26.

Regards Rapepat

AWS EKS add-on coredns status is degraded and node group creation failed (nodes unable to join cluster)

Posted: 12 Sep 2021 09:06 PM PDT

I'm trying to create a node group on an EKS cluster (region = ap-south-1) but it is failing to join the cluster. Health issue: "NodeCreationFailure: Instances failed to join the kubernetes cluster".

I found that it may be because the AWS EKS add-on (coredns) for the cluster is degraded. I tried to create a new cluster, but it shows the same degraded status for the add-on. Its health issue shows: "InsufficientNumberOfReplicas: The add-on is unhealthy because it doesn't have the desired number of replicas."

Other clusters with node groups in the same region are working fine, and all of their add-ons are in the Active state. I'm creating the cluster from the console.
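
For what it's worth, the coredns add-on is a normal deployment in kube-system, so once kubectl is pointed at the cluster its state can be inspected directly (the commands below assume the standard EKS names and labels). Note that with no nodes joined yet, the coredns pods have nowhere to schedule, so the degraded add-on may be a symptom of the node problem rather than its cause.

kubectl get deployment coredns -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
kubectl describe deployment coredns -n kube-system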

Centos server logs "systemd: Started Telnet Server (127.0.0.1:52050)." every minute

Posted: 12 Sep 2021 07:18 PM PDT

I have a CentOS 7.2 server. It hung overnight, and I keep getting a strange message like 'systemd: Started Telnet Server (127.0.0.1:52050).' every minute. I think that is the reason. How can I find the process that triggers these log entries? Thanks.

Unbound error - unbound.service: Start request repeated too quickly

Posted: 12 Sep 2021 04:41 PM PDT

I am new to using unbound.

I have a network from 192.168.50.1 to 192.168.50.240, and I'd like to use DoH for non-cached data.

my conf file:

# Unbound configuration file for Debian.
#
# See the unbound.conf(5) man page.
#
# See /usr/share/doc/unbound/examples/unbound.conf for a commented
# reference config file.
#
# The following line includes additional configuration files from the
# /etc/unbound/unbound.conf.d directory.
include: "/etc/unbound/unbound.conf.d/*.conf"

server:

    forward-zone:
        name: "."
        forward-addr: https://cloudflare-dns.com/dns-query

    directory: "/etc/unbound"
    username: unbound
    verbosity: 2
    interface: 0.0.0.0
    prefetch: yes

    access-control: 192.168.50.0/24 allow
    access-control: 127.0.0.1/24 allow

    hide-identity: yes
    hide-version: yes

remote-control:
    control-enable: no
    control-interface: 127.0.0.1
    control-port: 8953

What is wrong in my conf file?
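
For comparison, a minimal sketch of what I understand a valid layout to be: forward-zone is a top-level clause rather than part of server:, and forward-addr takes an IP address, not an https:// URL. Unbound speaks DNS-over-TLS upstream rather than DoH, so this sketch assumes DoT is acceptable; it is untested.

server:
    directory: "/etc/unbound"
    username: unbound
    verbosity: 2
    interface: 0.0.0.0
    prefetch: yes
    access-control: 192.168.50.0/24 allow
    access-control: 127.0.0.0/8 allow
    hide-identity: yes
    hide-version: yes
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

# forward-zone is its own top-level clause
forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 1.1.1.1@853#cloudflare-dns.com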

Thanks a lot!

Scuttle Boot Option

Posted: 12 Sep 2021 03:15 PM PDT

I'm trying to devise a simple boot option that would secure erase one or more drives in a computer. Imagine a scenario such as airport security where somebody has the authority to compel you to turn on and unlock a laptop that contains trade secrets. You power on the device and enter a password, but instead of logging into the OS, a script is triggered that executes a secure erase on the boot device.

I think the following features would be required or desirable:

  1. minimal interaction required. Perhaps selecting an alternate item on a bootloader menu
  2. password protected to prevent accidental or unauthorised activation
  3. does not require unlocking device encryption or logging into OS
  4. minimal execution time, like ATA enhanced security erase or 'nvme format'
  5. minimal footprint

I think a UEFI utility might be ideal for fulfilling requirements 1 and 5, but I'm not aware that one exists. I know Lenovo has a bootable utility to erase an NVMe device, but it boots in legacy mode and requires multiple steps, including a menu, a security code, a reboot, and finally re-entering the security code before the erase is executed. That process wouldn't meet the first requirement and would not be quick or subtle enough to be practical in the described scenario.

Of course one could set up a dedicated Linux environment similar to the Parted Magic distribution and have a simple erase script executed automatically at boot time or login, but I'd prefer not to dedicate a whole partition to such a utility, and I'm not sure if a secure erase would even run properly on a boot drive in a Linux environment. Any Windows-based secure erase utility I've tried won't work on the boot drive. I've secure-erased drives using Linux bootable USB sticks, but I've never tried it on the Linux boot device itself.

This points to another possibility if running Linux as a primary OS on the device, to use the installed OS, but configure a dedicated user account that runs the scuttle script on login. But again, I don't know if this would work on the system boot drive, plus this approach requires unlocking the boot drive, in violation of requirement #3 above.
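
As a rough sketch of the direction I mean for the dedicated-environment route (untested; assumes a separate loader entry booting a minimal BusyBox initramfs that contains nvme-cli, and a hypothetical /dev/nvme0n1 target):

#!/bin/sh
# init for a minimal "scuttle" initramfs entry: looks like an unlock prompt,
# but the passphrase only arms the erase
read -s -p "Password: " pass
echo
# compare against a stored hash so the image never contains the plaintext
if [ "$(printf '%s' "$pass" | sha256sum | cut -d' ' -f1)" = "<stored-sha256-hash>" ]; then
    # NVMe Secure Erase; --ses=2 requests a cryptographic erase where supported
    nvme format /dev/nvme0n1 --ses=2
fi
poweroff -f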

Any suggestions?

Getting ERR_NAME_NOT_RESOLVED only from MY PC and only from WIFI

Posted: 12 Sep 2021 04:29 PM PDT

I am setting up the website melius.live and it works fine from all my devices when using mobile data, but not over WiFi (any WiFi, not just a specific one). However, it works for everyone else I ask to test it. I wrote the web app and set up the server myself, yet I'm the only one unable to access it (unless I use mobile data). Could it be related to some DNS settings? Other people on the same WiFi can access it, but the issue appears on my Mac, iPad, and iPhone. Thank you.
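
A few checks that might narrow this down (the macOS cache-flush pair is, as far as I know, the usual incantation for recent versions):

# compare what this machine resolves against a public resolver
dig melius.live
dig @8.8.8.8 melius.live

# flush the local DNS caches on macOS
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder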

Passive ftp not working behind nat

Posted: 12 Sep 2021 02:49 PM PDT

I have a big problem. Let me explain. I have configured two machines: one called "fw", which is the firewall, and another connected to it called "server". Both are Debian 10 (buster) systems. The fw machine uses iptables to masquerade the IP. "Public" IP: 88.20.100.2; local range: 192.168.150.0/24.

This is the configuration of my FTP server (vsftpd) to enable passive mode:

pasv_enable=Yes
pasv_max_port=2000
pasv_min_port=1000
pasv_address=88.20.100.2

Nothing special. It works if I have these iptables rules enabled on the firewall (enp0s9 = internet, enp0s3 = LAN):

iptables -P FORWARD DROP
iptables -A FORWARD -p tcp --dport 21 -i enp0s9 -o enp0s3 -d 192.168.150.98 -j ACCEPT
iptables -A FORWARD -p tcp --sport 21 -i enp0s3 -o enp0s9 -s 192.168.150.98 -j ACCEPT
iptables -A FORWARD -p tcp --dport 1000:2000 -d 192.168.150.98 -i enp0s9 -o enp0s3 -j ACCEPT
iptables -A FORWARD -p tcp --sport 1000:2000 -s 192.168.150.98 -i enp0s3 -o enp0s9 -j ACCEPT

iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o enp0s9 -j MASQUERADE
iptables -t nat -A PREROUTING -p tcp --dport 21 -j DNAT --to-destination 192.168.150.98:21
iptables -t nat -A PREROUTING -p tcp --destination-port 1000:2000 -j DNAT --to-destination 192.168.150.98

My problem is that I want to open the 1000:2000 port range only when the connection is related to the FTP server, not permanently. I have tried with -m state and -m conntrack, but I guess I did something wrong. Any idea? Thanks.
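
For reference, this is the shape of ruleset I was attempting (a sketch, assuming the kernel FTP connection-tracking helpers are available; on newer kernels the helper also has to be assigned explicitly, since automatic helper assignment is disabled by default):

modprobe nf_conntrack_ftp
modprobe nf_nat_ftp

# assign the ftp helper to control-channel traffic (needed on newer kernels)
iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp

# control connection to the server
iptables -A FORWARD -p tcp --dport 21 -d 192.168.150.98 -i enp0s9 -o enp0s3 -j ACCEPT
# data connections: only those conntrack has marked as related to an FTP session
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT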

Windows Server Backup Fails: There is not enough disk space to create the volume shadow copy

Posted: 12 Sep 2021 05:56 PM PDT

We have a brand new Dell PowerEdge running Windows Server 2012 R2. The server is an Active Directory domain controller with two NTFS partitions: C:\ for the OS (400 GB) and E:\ for data … I connect a 1 TB external drive for Windows Server Backup.

I try to back up the server, but it fails. The message is:

There is not enough disk space to create the volume shadow copy  

I can successfully back up when including only the System State and OS (C:) items. If I adjust the backup selection to include the Recovery partition, it fails. If I choose to include Bare Metal Recovery, which implicitly includes the EFI System Partition and the Recovery partition, it fails as well.
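
One avenue not yet exhausted: VSS needs free space on each volume included in the snapshot set, and the Recovery and EFI partitions are tiny. The shadow storage associations can be inspected and, where there is room, resized (standard vssadmin syntax; the volumes and size below are examples):

vssadmin list shadowstorage
vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB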

Ubuntu 18.04.4 Zimbra 8.8.15 warning: connect to transport private/smtp-amavis: Connection refused

Posted: 12 Sep 2021 05:49 PM PDT

Starting LSB: Starts amavisd-new mailfilter...
Sep 12 14:33:51 domainname amavis[56480]: Starting amavisd: changed ownership of '/var/run/amavis' from root:root to amavis:amavis
Sep 12 14:33:57 domainname amavis[56508]: starting. /usr/sbin/amavisd-new at domainname.com amavisd-new-2.11.0 (20160426), Unico
Sep 12 14:33:59 domainname amavis[56515]: (!)Net::Server: 2021/09/12-14:33:59 Can't connect to TCP port 10024 on 127.0.0.1 [Address already in use]\n
Sep 12 14:33:59 domainname amavis[56480]: amavisd-new.
Sep 12 14:33:59 domainname systemd[1]: Started LSB: Starts amavisd-new mailfilter.

After updating Ubuntu, I can't send email from Zimbra.

I found this error in my logs:

23:24:00 domainname postfix/qmgr[12761]: 49FBF15ADD8: from=<test@yahoo.com>, size=3276, nrcpt=1 (queue active)
Sep 9 23:24:00 domainname postfix/qmgr[12761]: warning: connect to transport private/smtp-amavis: Connection refused
Sep 9 23:24:00 domainname postfix/smtpd[38463]: disconnect from smtp-out-19.di.u-psud.fr[129.175.213.19] ehlo=2 starttls=1 mail=1 rcpt=1 data=1 quit=1 commands=7
Sep 9 23:24:00 domainname postfix/error[38470]: 49FBF15ADD8: to=<user@domain.com>, relay=none, delay=0.44, delays=0.35/0.04/0/0.04, dsn=4.3.0, status=deferred (mail transport unavailable)
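
Given the 'Address already in use' line in the amavis startup log above, presumably something is already bound to port 10024. A quick way to see what holds it:

ss -ltnp | grep 10024
# or
lsof -i :10024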

Can something in my Nginx config explain why my backend is not sending the 'Access-Control-Allow-Origin' header in POST requests?

Posted: 12 Sep 2021 02:52 PM PDT

*Edit 1: The error seems to occur only with POST requests.

I have a frontend website on localhost. There is a registration page on localhost/register

The website calls a backend function to register a user at localhost:8080/api/register

I use Axios to POST the username and password. The browser sends two requests: OPTIONS pre-flight request, and then the POST request.

The user is created successfully, however the browser throws an error for the POST request:

Reason: CORS header 'Access-Control-Allow-Origin' missing  

And indeed it's missing in the response to the POST. Assuming my backend CORS file is configured properly, could the issue come from my Docker + Nginx setup blocking the header or proxying it to the wrong place?

This is my nginx config:

server {
    listen 8080;
    index index.php index.html;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri = 404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

server {
    listen       80;

    location / {
        proxy_pass      http://node:3000;
    }
}

and this is my docker-compose.yml:

networks:
    mynetwork:
        driver: bridge

services:
    nginx:
        image: nginx:stable-alpine
        container_name: nginx
        ports:
            - "8080:8080"
            - "80:80"
        volumes:
            - ./php:/var/www/html
            - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
        depends_on:
            - php
            - node
        networks:
            - mynetwork

    php:
        build:
            context: ./php
            dockerfile: Dockerfile
        container_name: php
        user: "1000:1000"
        volumes:
            - ./php:/var/www/html
        ports:
            - "9000:9000"
        networks:
            - mynetwork

    node:
        build:
            context: ./react
            dockerfile: Dockerfile
        container_name: next
        volumes:
            - ./react:/var/www/html
        ports:
            - "3000:3000"
        networks:
            - mynetwork

**Edit 2:

The backend is Laravel, and it has CORS middleware that is supposed to take care of this. In fact it does seem to be working, because GET and OPTIONS requests pass without error; only the POST request throws this error.

This is the CORS config file (cors.php) in Laravel:

'paths' => ['api/*', 'sanctum/csrf-cookie'],

'allowed_methods' => ['*'],

'allowed_origins' => ['http://localhost'],

'allowed_origins_patterns' => ['*'],

'allowed_headers' => ['*'],

'exposed_headers' => [],

'max_age' => 0,

'supports_credentials' => true
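
To compare what the browser sees, both requests can be replayed from the command line and the response headers diffed (the payload fields below are placeholders, not the real registration fields):

# pre-flight (this one reportedly succeeds)
curl -i -X OPTIONS http://localhost:8080/api/register \
  -H 'Origin: http://localhost' \
  -H 'Access-Control-Request-Method: POST'

# actual POST (the one missing Access-Control-Allow-Origin)
curl -i -X POST http://localhost:8080/api/register \
  -H 'Origin: http://localhost' \
  -H 'Content-Type: application/json' \
  -d '{"username":"test","password":"secret"}'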

iSCSI separation from Ethernet via VLAN

Posted: 12 Sep 2021 08:41 PM PDT

I've set up a small cluster of a few servers along with a SAN. The servers are running Ubuntu 20.04 LTS.

Using instructions provided by the vendor (I can't find where I read them), they suggested that the iSCSI connections between the SAN and the servers should be (or maybe it was "must be"?) separated from any other Ethernet traffic. Because of this, I've configured two VLANs on our switch: one for iSCSI traffic and one for Ethernet traffic between the servers (which the SAN is not on).

So far, it seems fine. Suppose the Ethernet is on 172.16.100.XXX/24 and iSCSI is on 172.16.200.XXX/24. More specifically, the addresses look something like this:

machine    ethernet       iSCSI          Outside ethernet also?
server 1   172.16.100.1   172.16.200.1   Yes
server 2   172.16.100.2   172.16.200.2   Yes
server 3   172.16.100.3   172.16.200.3   Yes
SAN        N/A            172.16.200.4   No

Not surprisingly, I can ssh between servers using either VLAN. That is, from server 2 to server 1, I can do any of the following:

  • ssh 172.16.100.1
  • ssh 172.16.200.1
  • ssh via the outside-visible IP address

What I'm worried about is whether I should better separate non-iSCSI traffic from the 172.16.200.x subnet with firewall rules, so that port 22 (ssh) is blocked on all servers.

I'm not concerned about the reverse -- the SAN is only on VLAN 200. It doesn't know VLAN 100 exists so it won't suddenly send iSCSI traffic down that VLAN.

I'm using the Oracle Cluster Filesystem, which seems to use port 7777 -- perhaps I should block all other ports on the VLAN so that only port 7777 is used? Does having Ethernet traffic on an iSCSI network create problems (lag or errors?) that I should be aware of?
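
Something like this is what I have in mind, as a sketch (the VLAN interface name is hypothetical; iSCSI itself is initiated outbound from the servers to the SAN on TCP 3260, so inbound it only needs established/related):

# on each server, for the iSCSI VLAN interface (assumed here to be vlan200)
iptables -A INPUT -i vlan200 -p tcp --dport 7777 -j ACCEPT   # OCFS2
iptables -A INPUT -i vlan200 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i vlan200 -j DROP   # drops ssh and everything else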

Thank you!

How to upgrade rhel 7.3 to 8.1 using iso cd

Posted: 12 Sep 2021 09:20 PM PDT

I wish to upgrade RHEL 7.3 to 8.1 using an ISO. I mounted it at /home/cdrom. The ISO contains the following directories: BaseOS, AppStream, RPM-GPG-KEY-redhat-release, and so on.

I created one repo file called /etc/yum.repos.d/rhel8.repo. It contains:

[rhel8-Server]
mediaid=78347539434.4444
name=RHEL8-Server
baseurl=file:///home/cdrom/AppStream
gpgkey=file:///home/cdrom/RPM-GPG-KEY-redhat-release
enabled=1
gpgcheck=1

Then I executed yum update, but it didn't work. I also tried with baseurl=file:///home/cdrom/BaseOS, with no results. I got messages such as 'You could try using --skip-broken to work around the problem' and 'Error: Invalid version flag: if'. What can I do?
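
For what it's worth, my understanding is that plain yum update cannot cross a major release boundary; Red Hat's supported path from 7 to 8 is the leapp in-place upgrade tool, roughly as follows (a sketch; package and repository names may vary by point release):

yum install leapp-upgrade    # from the RHEL 7 Extras repository
leapp preupgrade             # writes a report to /var/log/leapp/leapp-report.txt
leapp upgrade                # stages the upgrade; reboot to complete it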

Problems with setting up bonding on Netplan (Ubuntu server 18.04)

Posted: 12 Sep 2021 09:02 PM PDT

I have a dual-port network card, and I want to bond both ports and balance traffic between them, with one static IP address. This worked fine for me on Ubuntu 16.04. I'm now trying to set up the same thing in Netplan and am struggling. My config is below...

network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0f0:
      dhcp4: false
      dhcp6: false
    enp1s0f1:
      dhcp4: false
      dhcp6: false
 bonds:
   bond0:
    dhcp4: false
    dhcp6: false
   interfaces:
     - enp1s0f0
     - enp1s0f1
   addresses: [192.168.3.250/24]
   gateway4: 192.168.3.1
   parameters:
     mode: 802.3ad
   nameservers:
     addresses: [8.8.8.8,8.8.4.4]
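
When debugging YAML like this, netplan's own tooling can point at the offending line:

sudo netplan --debug generate   # parses the YAML and reports errors
sudo netplan try                # applies the config with an automatic rollback timer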

Scheduled Task in Windows Server 2016, run by non-admin Users

Posted: 12 Sep 2021 04:04 PM PDT

In earlier Windows Server versions (prior to 2016) it was possible to grant non-admin users permission to run a scheduled task with the following steps:

  1. Scheduled Task: run under system, execute script
  2. Give user read and execute rights on specific task under C:\Windows\System32\Tasks\

In Server 2016 this no longer works. Do you know how to do it?

Thank you

Related post, which was neither answered nor helpful: Allow non-admin user to run scheduled task in Windows Server 2016

Dockerfile cloning from private gitlab with ssh and deploy key

Posted: 12 Sep 2021 07:07 PM PDT

(EDIT) This problem was also happening from my laptop, using both root and my regular user, which could get the greeting when trying to ssh as the git user. I then tried the Ansible playbook and it raised errors for this repo too. I tried another repo, and that one clones flawlessly. The problem, then, doesn't seem to be with git, Docker, or ssh, but with the GitLab configuration.


In a Dockerfile, I am trying to clone private repositories hosted on a company server running GitLab, set up with a non-standard ssh port.

This is what I expected to work (alongside some parameters in the ssh config file):

RUN git clone git@companyname.ddns.net:GroupName/repo_name.git  

Things I've checked already:

  • The repo has a deploy key and it is active.
  • Automating with Ansible instead of docker, it can connect and clone repos.
  • The key is named id_rsa and it is inside ~/.ssh/
  • Tried with ssh-agent and seems ok (though I specify the file in the config and shouldn't need it)

    Identity added: /opt/.ssh/id_rsa (/opt/.ssh/id_rsa)   
  • The ~/.ssh/config has the following:

Host *
  StrictHostKeyChecking no
  PubkeyAcceptedKeyTypes +ssh-rsa
  PasswordAuthentication no
  User git
  ForwardAgent yes
  Identityfile /opt/.ssh/id_rsa # /opt is the home of the user RUNning the command in docker
  port 22022

RUNning this from the container:

ssh -vT -i /opt/.ssh/id_rsa git@companyname.ddns.net:GroupName/repo_name.git  

Gets the result

Welcome to GitLab, Anonymous!

But the git clone command gets:

Cloning into 'repo_name'...
GitLab: The project you were looking for could not be found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
The command '/bin/sh -c git clone git@companyname.ddns.net:GroupName/repo_name.git;' returned a non-zero code: 128
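
One variant worth trying, since it takes the ssh config file out of the equation entirely: an ssh:// URL can carry the non-standard port directly (scp-style URLs cannot), and GIT_SSH_COMMAND pins the key. A sketch:

RUN GIT_SSH_COMMAND="ssh -i /opt/.ssh/id_rsa -p 22022" \
    git clone ssh://git@companyname.ddns.net:22022/GroupName/repo_name.git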

Win 2012 R2 / IIS 8.5 intermittent Connection Refused

Posted: 12 Sep 2021 06:07 PM PDT

We suffer from a connection-refused problem when users of our web site try to open it. It happens in a random manner, about once or twice a month, and continues for a few hours. While it is happening, almost all connections are rejected with a connection-refused error, though some connections still succeed.

  • OS: Win 2012 R2 Standard hosted on ESXI 6
  • IIS 8.5
  • Web server is hosting an ASP.NET application.
  • Windows Firewall is on.
  • Average current connection on server: ~3500 (based on Web Service\Current connection performance monitor counter)
  • Total RAM: 40GB
  • CPU: 30 cores, 2.30 GHz

There is plenty of RAM (more than ~60%) and CPU (more than ~70%) available while this problem happens. We also checked the network firewall: traffic apparently passes through it without problems, so the problem occurs at the server level. We cannot even open the web site locally after connecting to the server via Remote Desktop.

We checked for an exhausted-port problem, and that does not seem to be the cause. The number of SYN packets is high, but it's similar to other days when everything is fine.

This is a one-day summary of the HTTPERR log:

s-reason                          COUNT(ALL *)
Timer_ConnectionIdle              462040
Timer_MinBytesPerSecond           27555
Request_Cancelled                 1757
Timer_EntityBody                  428
Forbidden                         247
URL                               130
Hostname                          117
BadRequest                        102
Connection_Dropped                96
Client_Reset                      88
Connection_Dropped_List_Full      40
Verb                              10
Header                            7
Connection_Abandoned_By_ReqQueue  1

Any help is really appreciated to find the reason why we get connection refused when trying to open web site hosted on this server.
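One thing we plan to capture during the next incident, since the refusals apparently happen below IIS in HTTP.sys, is the kernel request-queue state:

netsh http show servicestate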

AWS API Gateway Custom Domain: the domain you provided is already associated with an existing CloudFront distribution

Posted: 12 Sep 2021 03:12 PM PDT

I'm simply attempting to set up a Custom Domain in API Gateway. I have ACM certificate "*.mysite.com.au" that is currently being used to serve a static S3 website out via CloudFront at "beta.mysite.com.au". I wish to create a custom domain for "api.mysite.com.au" with this certificate.

However, I'm receiving the following error in the AWS API Gateway console:

The domain name you provided is already associated with an existing CloudFront distribution. Remove the domain name from the existing CloudFront distribution or use a different domain name. If you own this domain name and are not using it on an existing CloudFront distribution, please contact support.

I'm not currently using "api.mysite.com.au" in a CloudFront distribution. So I'm lost. Has anyone encountered this issue before? And if so, how may I go about resolving it?

Thanks in advance,

Strainy

Empty nginx logs

Posted: 12 Sep 2021 03:04 PM PDT

I'm trying to get nginx to write access and error logs. My logs currently contain only very old content, a mix of plain and gzipped logs.

$ ls -la access*.log*
-rw-rw-rw- 1 nobody nogroup       0 Jan  8  2016 access.log
-rw-rw-rw- 1 nobody nogroup 2261400 Jan  7  2016 access.log.1
-rw-rw-rw- 1 nobody nogroup  311947 Dec 30  2015 access.log.10.gz
-rw-rw-rw- 1 nobody nogroup  434744 Dec 29  2015 access.log.11.gz

My configuration is:

user www-data www-data;
error_log /var/log/nginx/error.log info;
...
http {
  access_log /var/log/nginx/access.log combined;
...

Strangely, despite the user declaration the worker processes still run as nobody:

# ps -eo "%U %G %a" | grep nginx
root     root     nginx: master process /usr/local/openresty/nginx/sbin/nginx -c /usr/local/openresty/nginx/conf/nginx.conf
nobody   nogroup  nginx: worker process
nobody   nogroup  nginx: worker process

I tried setting the owner of the existing access.log and error.log files to nobody:nogroup, but it still doesn't log anything.

There's nothing (relevant) in syslog.

I have tried a mixture (!) of reloading and restarting nginx after changing the configuration file. Still nothing...

How is my configuration incorrect?
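
One detail from the ps output above that may matter: the master process was started with an explicit -c pointing at the OpenResty config tree, so edits made to a config anywhere else (e.g. /etc/nginx/nginx.conf) would never be read. A quick check:

# confirm which file the running binary loads, and that it parses
/usr/local/openresty/nginx/sbin/nginx -t -c /usr/local/openresty/nginx/conf/nginx.conf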

NGINX subdomain with proxy_pass

Posted: 12 Sep 2021 04:04 PM PDT

I have nginx running as a reverse proxy for a Nextcloud server hosted on Apache on a different virtual machine. I'd like to be able to access it via cloud.example.com. With my current rules I have to enter cloud.example.com/nextcloud. I have googled and searched, and the closest I got was going to cloud.example.com and having it redirect to cloud.example.com/nextcloud, but I'd like to keep /nextcloud out of the address bar if possible. Do I need a /nextcloud location that does the proxy pass in addition to /?

This is my current nginx.conf:

server {
    listen       443 ssl http2 default_server;
    server_name  _;
    ssl_certificate /etc/letsencrypt/live/cloud.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.domain.com/privkey.pem;
    ssl_stapling on;
    ssl_stapling_verify on;

    location /.well-known {
        alias /var/www/.well-known;
    }
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-By $server_addr:$server_port;
        proxy_set_header Host $http_host;
        proxy_pass http://10.37.70.6:8080;
    }
}
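
For reference, my understanding is that giving proxy_pass a URI part makes nginx substitute the matched location prefix, which would map / on the proxy to /nextcloud/ on the backend. A sketch (untested; Nextcloud itself may also need its overwritewebroot / overwrite.cli.url settings adjusted to match):

location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_pass http://10.37.70.6:8080/nextcloud/;
}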

Tomcat startup script on RHEL not starting tomcat on reboot

Posted: 12 Sep 2021 08:04 PM PDT

My Tomcat startup script is not starting Tomcat when the Red Hat Enterprise Linux server reboots.

I have narrowed it down to the start function:

start() {
  echo "instart $(date)" > /tmp/tomcatscript.out
  pid=$(tomcat_pid)
  if [ -n "$pid" ]
  then
    echo -e "\e[00;31mTomcat is already running (pid: $pid)\e[00m"
  else
    echo "inelse $(date)" >> /tmp/tomcatscript.out
    # Start tomcat
    echo -e "\e[00;32mStarting tomcat\e[00m"
    #ulimit -n 100000
    #umask 007
    #/bin/su -p -s /bin/sh $TOMCAT_USER
    if [ `user_exists $TOMCAT_USER` = "1" ]
    then
      echo "in if then PID [$pid] whoami [$(whoami)] $(date)">> /tmp/tomcatscript.out
      sudo su - $TOMCAT_USER -s /bin/sh -c $CATALINA_HOME/bin/startup.sh >> /tmp/tomcatscript.out
    else
      echo "in else $(date)" >> /tmp/tomcatscript.out
      echo -e "\e[00;31mTomcat user $TOMCAT_USER does not exists. Starting with $(id)\e[00m"
      sh $CATALINA_HOME/bin/startup.sh
    fi
    echo "calling status $(date)">> /tmp/tomcatscript.out

    status >> /tmp/tomcatscript.out
  fi
  return 0
}

When I reboot the server with /sbin/reboot, the contents of the file I echo to are:

# cat /tmp/tomcatscript.out
instart Wed Jun 15 20:24:25 PDT 2016
inelse Wed Jun 15 20:24:25 PDT 2016
in if then PID [] whoami [root] Wed Jun 15 20:24:25 PDT 2016
calling status Wed Jun 15 20:24:25 PDT 2016
Tomcat is not running

When I run the tomcat script in /etc/rc.d/init.d as follows:

[root@server init.d]# ./tomcat start  

The contents of the file are:

[root@server init.d]# cat /tmp/tomcatscript.out
instart Wed Jun 15 20:38:30 PDT 2016
inelse Wed Jun 15 20:38:30 PDT 2016
in if then PID [] whoami [root] Wed Jun 15 20:38:30 PDT 2016
Using CATALINA_BASE:   /users/tomcat/apache-tomcat-8.0.30
Using CATALINA_HOME:   /users/tomcat/apache-tomcat-8.0.30
Using CATALINA_TMPDIR: /users/tomcat/apache-tomcat-8.0.30/temp
Using JRE_HOME:        /users/java/jdk1.8.0_71
Using CLASSPATH:       /users/tomcat/apache-tomcat-8.0.30/bin/bootstrap.jar:/users/tomcat/apache-tomcat-8.0.30/bin/tomcat-juli.jar
Tomcat started.
calling status Wed Jun 15 20:38:30 PDT 2016
Tomcat is not running

* I have also tried using the daemon function -- that didn't work for me either *

#!/bin/bash
#
# description: Apache Tomcat init script
# processname: tomcat
# chkconfig: 234 20 80
#
### BEGIN INIT INFO
# Provides:        tomcat8
# Required-Start:  2 3 4 5
# Required-Stop:   0 1 6
# Default-Start:   2 3 4 5
# Default-Stop:    0 1 6
# Short-Description: Start/Stop Tomcat server
### END INIT INFO

#Location of JAVA_HOME (bin files)
export JAVA_HOME=/user/java/jdk1.8.0_71

#Add Java binary files to PATH
export PATH=$JAVA_HOME/bin:$PATH

#CATALINA_HOME is the location of the bin files of Tomcat
export CATALINA_HOME=/users/tomcat/apache-tomcat-8.0.30

#CATALINA_BASE is the location of the configuration files of this instance of Tomcat
export CATALINA_BASE=/users/tomcat/apache-tomcat-8.0.30/conf

#TOMCAT_USER is the default user of tomcat
export TOMCAT_USER=tomcat

#TOMCAT_USAGE is the message if this script is called without any options
TOMCAT_USAGE="Usage: $0 {\e[00;32mstart\e[00m|\e[00;31mstop\e[00m|\e[00;31mkill\e[00m|\e[00;32mstatus\e[00m|\e[00;31mrestart\e[00m}"

#SHUTDOWN_WAIT is wait time in seconds for java proccess to stop
SHUTDOWN_WAIT=20

tomcat_pid() {
        echo `ps -fe | grep $CATALINA_BASE | grep -v grep | tr -s " "|cut -d" " -f2`
}

# Source function library.
. /etc/init.d/functions

start() {
  echo "instart $(date)" > /tmp/tomcatscript.out
  pid=$(tomcat_pid)
  if [ -n "$pid" ]
  then
    echo -e "\e[00;31mTomcat is already running (pid: $pid)\e[00m"
  else
    echo "inelse $(date)" >> /tmp/tomcatscript.out
    # Start tomcat
    echo -e "\e[00;32mStarting tomcat\e[00m"
#    ulimit -n 100000
#    umask 007
#    /bin/su -p -s /bin/sh $TOMCAT_USER
    if [ `user_exists $TOMCAT_USER` = "1" ]
    then
            echo "in if then PID [$pid] whoami [$(whoami)] $(date)">> /tmp/tomcatscript.out
            echo "[$TOMCAT_USER] and [$CATALINA_HOME]" >> /tmp/tomcatscript.out
            daemon --user $TOMCAT_USER  $CATALINA_HOME/bin/startup.sh > /dev/null
#           sudo su - $TOMCAT_USER -s /bin/sh -c $CATALINA_HOME/bin/startup.sh >> /tmp/tomcatscript.out
            echo "called daemon" >>  /tmp/tomcatscript.out
    else
            echo "in else $(date)" >> /tmp/tomcatscript.out
            echo -e "\e[00;31mTomcat user $TOMCAT_USER does not exists. Starting with $(id)\e[00m"
            sh $CATALINA_HOME/bin/startup.sh
    fi
    echo "calling status $(date)">> /tmp/tomcatscript.out

    status >> /tmp/tomcatscript.out
  fi
  return 0
}

status(){
        pid=$(tomcat_pid)
        if [ -n "$pid" ]
          then echo -e "\e[00;32mTomcat is running with pid: $pid\e[00m"
        else
          echo -e "\e[00;31mTomcat is not running\e[00m"
          return 3
        fi
}

How to modify querystring using URL rewriting?

Posted: 12 Sep 2021 05:03 PM PDT

I have very little knowledge of URL rewriting, so I'm not sure whether this can be done with URL rewrite.

I have a URL like www.test.com/categroy.cfm?categoryid=12&weight=any&brandid=23

For the weight parameter: if its value is 'any', I want to remove it from the URL.

For the brandid parameter: if brandid is 'any', remove it; otherwise replace it with 'filter_brand=value'.

Output like: www.test.com/categroy.cfm?categoryid=12&filter_brand=23

Is it possible? If yes, could anyone please show me an example? I am using IIS.
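
To show the shape of what I'm after, here is my rough attempt for the brandid replacement using the IIS URL Rewrite module (untested; a similar rule matching brandid=any, with the parameter omitted from the action URL, would handle the removal cases):

<rewrite>
  <rules>
    <rule name="RewriteBrandId" stopProcessing="true">
      <match url="^categroy\.cfm$" />
      <conditions>
        <add input="{QUERY_STRING}" pattern="^categoryid=(\d+)&amp;weight=any&amp;brandid=(\d+)$" />
      </conditions>
      <action type="Rewrite" url="categroy.cfm?categoryid={C:1}&amp;filter_brand={C:2}" appendQueryString="false" />
    </rule>
  </rules>
</rewrite>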

iptables to allow input and output traffic to and from web server only

Posted: 12 Sep 2021 03:04 PM PDT

I have an Elasticsearch server which seems to have been exploited (it's being used for a DDoS attack, having had NO firewall for about a month).

As a temporary measure while I create a new one I was hoping to block all traffic to and from the server which wasn't coming from or going to our web server. Will these iptables rules achieve this:

iptables -I INPUT \! --src 1.2.3.4 -m tcp -p tcp --dport 9200 -j DROP
iptables -P FORWARD \! --src 1.2.3.4 DROP
iptables -P OUTPUT \! --src 1.2.3.4 DROP

The first rule is tried and tested, but it obviously wasn't preventing traffic going from my server to other IP addresses, so I was hoping I could add the second two rules to fully secure it.
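
From what I can tell, though, -P only sets a chain's default policy and accepts no match expressions, so the second and third lines are not valid iptables syntax. Something along these lines is what I believe would be needed instead (a sketch; 1.2.3.4 again standing in for the web server):

# default-deny both directions, then allow only the web server
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -A INPUT -p tcp -s 1.2.3.4 --dport 9200 -j ACCEPT
iptables -A OUTPUT -p tcp -d 1.2.3.4 --sport 9200 -m conntrack --ctstate ESTABLISHED -j ACCEPT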

PAM LDAP configuration for non-local user authentication

Posted: 12 Sep 2021 09:02 PM PDT

I have a requirement to allow non-local user accounts to log in via LDAP authentication. Meaning: the user trying to log in is allowed access if the account exists in the LDAP server's database; there is no need for a local user.

I'm able to achieve this if I run nslcd (/usr/sbin/nslcd).

I would like to know whether this can be done with any configuration in /etc/pam.d/sshd or /etc/pam_ldap.conf, without running nslcd.

Please let me know your suggestions.

Thanks, Sravani

How to filter TCP packets based on flags using Packet Filter

Posted: 12 Sep 2021 05:03 PM PDT

Well, I didn't know exactly how to ask this question, but I know that you can use the keyword flags to specify which flags you want to filter.

According to the documentation of the Packet filter:

To have PF inspect the TCP flags during evaluation of a rule, the flags keyword is used with the following syntax:

flags check/mask flags any

The mask part tells PF to only inspect the specified flags and the check part specifies which flag(s) must be "on" in the header for a match to occur. Using the any keyword allows any combination of flags to be set in the header.

pass in on fxp0 proto tcp from any to any port ssh flags S/SA
pass in on fxp0 proto tcp from any to any port ssh

As flags S/SA is set by default, the above rules are equivalent. Each of these rules passes TCP traffic with the SYN flag set while only looking at the SYN and ACK flags. A packet with the SYN and ECE flags would match the above rules, while a packet with SYN and ACK, or just ACK, would not.

So, I understood the example and why the packet with the flags S and E can pass (because the E flag is not considered, due to the mask SA) and why the packet with only the ACK flag can't pass the firewall.

What I didn't understand is why a packet with the flags S and A can't pass the rule S/SA, given that the flag S is "on" in the packet header. Maybe the documentation is ambiguous? Sorry if this is a stupid question or a misunderstanding of the English.

I imagine that it can only pass if ONLY the flag S is set. In set arithmetic it would be something like this:

flag(s) must be 'on' in the header -> the flag(s) are a subset of the masked set [pf doc]
only the flag(s) must be 'on' in the header -> the flag(s) equal the masked set [what I understood from the example given]

Thanks in advance!

Concatenating files to a virtual file on Linux

Posted: 12 Sep 2021 06:06 PM PDT

On a Linux system, is there any way to concatenate a series of files into one exposed file for reading and writing, while not actually taking up another N bytes of disk space? I was hoping for something like mounting these files via loopback/device-mapper to accomplish this.

I have a problem where there are split binary files that can get quite large. I don't want to double my space requirements, with massive disk IO, just to temporarily read or write the contents of these files by cat-ing them all together into one enormous file.

I found this project here, but it seems to have a very specific use case and also depends on Perl.
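
For the record, here is the loopback/device-mapper idea sketched out (untested; file names are hypothetical, sizes are in 512-byte sectors as dmsetup expects, and each file must be a multiple of 512 bytes):

# expose each split file as a loop device
losetup /dev/loop0 part1.bin
losetup /dev/loop1 part2.bin

# build a linear device-mapper table: <start> <length> linear <device> <offset>
S0=$(blockdev --getsz /dev/loop0)
S1=$(blockdev --getsz /dev/loop1)
dmsetup create joined <<EOF
0 $S0 linear /dev/loop0 0
$S0 $S1 linear /dev/loop1 0
EOF

# reads and writes to /dev/mapper/joined now hit the original files in place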

Error 503 Service Unavailable Varnish

Posted: 12 Sep 2021 07:07 PM PDT

So I set up a new cloud-based instance with Ubuntu 12.04, with nginx, php5-fpm, and Varnish.

Before I installed and configured Varnish, the website worked fine and virtual hosts worked. After setting up Varnish, I'm getting Error 503 Service Unavailable.

My nginx conf looks like this:

server {
    listen xxx.xxx.xxx.xxx:8080;
    server_name example.com;
    root /var/www/example.com/public_html;

    if ($http_host != "example.com") {
        rewrite ^ http://www.example.com$request_uri permanent;
    }

    index index.php index.html;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Add trailing slash to */wp-admin requests.
    rewrite /wp-admin$ $scheme://$host$uri/ permanent;

    location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
        expires max;
        log_not_found off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    # Roots theme clean URL rewrites
    location ~ ^/assets/(img|js|css)/(.*)$ {
        try_files $uri $uri/ /wp-content/themes/sitename/assets/$1/$2;
    }
    location ~ ^/plugins/(.*)$ {
        try_files $uri $uri/ /wp-content/plugins/$1;
    }
}

/etc/default/varnish looks like the this:

# Configuration file for varnish
#
# /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK
# to be set from this shell script fragment.
#
# Note: If systemd is installed, this file is obsolete and ignored.  You will
# need to copy /lib/systemd/system/varnish.service to /etc/systemd/system/ and
# edit that file.

# Should we start varnishd at boot?  Set to "no" to disable.
START=yes

# Maximum number of open files (for ulimit -n)
NFILES=131072

# Maximum locked memory size (for ulimit -l)
# Used for locking the shared memory log in memory.  If you increase log size,
# you need to increase this number as well
MEMLOCK=82000

# Default varnish instance name is the local nodename.  Can be overridden with
# the -n switch, to have more instances on a single server.
# INSTANCE=$(uname -n)

# This file contains 4 alternatives, please use only one.

## Alternative 1, Minimal configuration, no VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# content server on localhost:8080.  Use a 1GB fixed-size cache file.
#
# DAEMON_OPTS="-a :6081 \
#              -T localhost:6082 \
#              -b localhost:8080 \
#              -u varnish -g varnish \
#              -S /etc/varnish/secret \
#              -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"

## Alternative 2, Configuration with VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request.  Use a 1GB
# fixed-size cache file.
#
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

/etc/varnish/default.vcl looks like the following:

backend default {
    .host = "xxx.xxx.xxx.xxx";
    .port = "8080";
}

Checking varnishlog I see this:

11 SessionOpen  c xx.xxx.xxx.xxx 4712 :80
11 ReqStart     c xx.xxx.xxx.xxx 4712 1475226459
11 RxRequest    c GET
11 RxURL        c /
11 RxProtocol   c HTTP/1.1
11 RxHeader     c Host: example.com
11 RxHeader     c Connection: keep-alive
11 RxHeader     c Cache-Control: max-age=0
11 RxHeader     c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
11 RxHeader     c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.155 Safari/537.22
11 RxHeader     c Accept-Encoding: gzip,deflate,sdch
11 RxHeader     c Accept-Language: en-US,en;q=0.8
11 RxHeader     c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
11 RxHeader     c Cookie: __utma=148547044.766489551.1362139914.1362151355.1362156101.3; __utmz=148547044.1362139914.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __cfduid=d92c50421d4bba63041d6d12ba960d3151362551863; wordpress_test_cookie=WP+Cookie+check; wordpress_lo
11 VCL_call     c recv lookup
11 VCL_call     c hash
11 Hash         c /
11 Hash         c example.com
11 VCL_return   c hash
11 VCL_call     c miss fetch
11 FetchError   c no backend connection
11 VCL_call     c error deliver
11 VCL_call     c deliver deliver
11 TxProtocol   c HTTP/1.1
11 TxStatus     c 503
11 TxResponse   c Service Unavailable
11 TxHeader     c Server: Varnish
11 TxHeader     c Content-Type: text/html; charset=utf-8
11 TxHeader     c Retry-After: 5
11 TxHeader     c Content-Length: 419
11 TxHeader     c Accept-Ranges: bytes
11 TxHeader     c Date: Wed, 06 Mar 2013 20:14:13 GMT
11 TxHeader     c X-Varnish: 1475226459
11 TxHeader     c Age: 0
11 TxHeader     c Via: 1.1 varnish
11 TxHeader     c Connection: close
11 Length       c 419
11 ReqEnd       c 1475226459 1362600853.513662100 1362600853.513980627 0.000180006 0.000249863 0.000068665
11 SessionClose c error
11 StatSess     c 70.194.150.230 4712 0 1 1 0 0 0 257 419
11 SessionOpen  c 70.194.150.230 4713 :80
11 ReqStart     c 70.194.150.230 4713 1475226460
11 RxRequest    c GET
11 RxURL        c /favicon.ico
11 RxProtocol   c HTTP/1.1
11 RxHeader     c Host: example.com
11 RxHeader     c Connection: keep-alive
11 RxHeader     c Accept: */*
11 RxHeader     c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.155 Safari/537.22
11 RxHeader     c Accept-Encoding: gzip,deflate,sdch
11 RxHeader     c Accept-Language: en-US,en;q=0.8
11 RxHeader     c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
11 VCL_call     c recv lookup
11 VCL_call     c hash
11 Hash         c /favicon.ico
11 Hash         c example.com
11 VCL_return   c hash
11 VCL_call     c miss fetch
11 FetchError   c no backend connection
11 VCL_call     c error deliver
11 VCL_call     c deliver deliver
11 TxProtocol   c HTTP/1.1
11 TxStatus     c 503
11 TxResponse   c Service Unavailable
11 TxHeader     c Server: Varnish
11 TxHeader     c Content-Type: text/html; charset=utf-8
11 TxHeader     c Retry-After: 5
11 TxHeader     c Content-Length: 419
11 TxHeader     c Accept-Ranges: bytes
11 TxHeader     c Date: Wed, 06 Mar 2013 20:14:13 GMT
11 TxHeader     c X-Varnish: 1475226460
11 TxHeader     c Age: 0
11 TxHeader     c Via: 1.1 varnish
11 TxHeader     c Connection: close
11 Length       c 419
11 ReqEnd       c 1475226460 1362600853.751207590 1362600853.751491070 0.000154257 0.000221491 0.000061989
11 SessionClose c error
11 StatSess     c 70.194.150.230 4713 0 1 1 0 0 0 257 419
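
The repeated 'FetchError c no backend connection' lines say Varnish cannot open a connection to the backend at all. A quick sanity check from the Varnish host (IP elided as in the config above):

curl -I http://xxx.xxx.xxx.xxx:8080/ -H 'Host: example.com'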

save Performance Monitor settings

Posted: 12 Sep 2021 06:07 PM PDT

I have added several counters to Windows 2008 Performance Monitor to monitor a web application. When I restart the server or close the Server Manager console, I lose all the added counters. I don't see a way to save and later load counters, and adding the same counters every time is tedious and takes time. How do I save counters?

Troubleshooting a 'Could not start' scheduled task error

Posted: 12 Sep 2021 08:04 PM PDT

I'm trying to run Snapshot on my server to back up the drive onto a local NAS. I'm currently using this on Win2k, Win2k3, and Win2k8 servers. Both the Win2k and Win2k8 servers are correctly backing up the data, but the Win2k3 server returns a:

Could not Start

error. I use a batch file to run Snapshot, and it's run using a Domain Admin account. Here's the specific batch code:

pskill snapshot
rem @echo off
echo. 2>"C:\Program Files\Snapshot\logs\monday_snapshot.log"
"C:\Program Files\Snapshot\snapshot.exe" c: \\NAS\Data_Backup\snapshot\server\monday_cdrive.sna -Go -T --novss --LogFile:"C:\Program Files\Snapshot\logs\monday_snapshot.log"
"C:\Program Files\Snapshot\snapshot.exe" F: \\NAS\Data_Backup\snapshot\server\monday_fdrive.sna -Go -T --novss --LogFile:"C:\Program Files\Snapshot\logs\monday_snapshot.log"
blat -bodyF "C:\Program Files\Snapshot\logs\monday_snapshot.log" -server mail.netcommusa.net -portSMTP 2525 -f mailrelay@netcommusa.net -i snapshot@*******.com -subject "Snapshot of Main Server" -u mailrelay@*******.net -pw mailrelay -to ********@gmail.com

Note: blat is a simple program to send email from the command window.

I've tried following this KB article, found via this answer to a similar problem, with no success. I've also tried this solution, but alas, still no success.

My last result was:

0x0

which means that:

0x0: The operation completed successfully.

(from this KB article) but it's not completing successfully, as it's not backing up the drives. Not sure where to go from here. Any suggestions?
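
One more data point worth capturing the next time it fails: the scheduler's own verbose view of the tasks, including Last Result and Last Run Time:

schtasks /query /v /fo LIST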
