Sunday, January 2, 2022

Recent Questions - Server Fault

"nginx_reverseproxy_vhost.conf.master" ISPconfig 3.2

Posted: 02 Jan 2022 11:25 PM PST

I am using ISPConfig for host management and have just installed it. After changing a few settings, the software stopped working. Processes suddenly stopped, and I came across the following when I tried to find the reason; it seems to be a little bug inside the plugin.

root@serve:~# /usr/local/ispconfig/server/server.sh
02.01.2022-21:43 - WARNING - There is already a lockfile set, but no process running with this pid (5985). Continuing.
vlibTemplate Error: Template (nginx_reverseproxy_vhost.conf.master) file not found.
root@serve:~# ^C
root@serve:~#

Crontab:

52 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null
* * * * * /usr/local/ispconfig/server/server.sh 2>&1 | while read line; do echo `/bin/date` "$line" >> /var/log/ispconfig/cron.log; done
* * * * * /usr/local/ispconfig/server/cron.sh 2>&1 | while read line; do echo `/bin/date` "$line" >> /var/log/ispconfig/cron.log; done
0 0 * * * /usr/local/ispconfig/server/scripts/create_daily_nginx_access_logs.sh &> /dev/null

Then I tried updating ISPConfig, hoping the problem might be solved, but it didn't work. What causes this problem and how can I fix it?

Server: (Debian Buster) ISPConfig 3.2.7p1 // PHP 7.3

What could be the problem with this?

Ubuntu configure Postfix mail server encoding to support IDN (internationalized domain names)

Posted: 02 Jan 2022 10:30 PM PST

To my understanding the default mail server on Ubuntu is Postfix.

I have Virtualmin and Webmin installed on my Ubuntu installation. I'm trying to figure out where to configure encoding for sending and receiving emails to handle IDN in the email address.

I read somewhere that you can get this to work by changing the encoding on Postfix to SMTPUTF8 or IDNA encoding. Is that possible? If so, how do I change this configuration?

Example of an IDN domain name: 日本語.idn.icann.org.

NOTE: For special characters used in domain names, browsers have built-in support to deal with this. I want to achieve something similar for email addresses, for both sending and receiving emails.
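
For what it's worth, Postfix gained native SMTPUTF8 support in version 3.0, toggled by a single parameter. A minimal sketch, assuming Postfix 3.0 or newer (check with postconf mail_version):

# enable SMTPUTF8 support (Postfix 3.0+)
sudo postconf -e "smtputf8_enable = yes"
# control which mail sources may request SMTPUTF8 delivery (default shown)
sudo postconf -e "smtputf8_autodetect_origin = sendmail, verify"
sudo systemctl reload postfix

Note that end-to-end delivery still depends on the remote server also announcing SMTPUTF8.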

100% packet loss when trying to set up an IPv6 tunnel using tunnelbroker

Posted: 02 Jan 2022 09:56 PM PST

I have a VPS running Ubuntu 20.04. I wanted to set up an IPv6 tunnel using tunnelbroker; however, after pasting the example configuration they provided, pinging google.com gives me 100% packet loss. The system uses netplan as its network manager. The configuration file is below:

network:
  version: 2
  tunnels:
    he-ipv6:
      mode: sit
      remote: 216.218.221.42
      local: 144.xxx.xxx.xx
      addresses:
        - "2001:470:xx:xxx::2/64"
      gateway6: "2001:470:xx:xxx::1"

What am I doing wrong here?

AWS EC2 | increase/decrease RAM dynamically

Posted: 02 Jan 2022 10:46 PM PST

Hi dear forum members.

Is there a way to choose a VM or another service in AWS that dynamically adds memory in case of extra load?

I am going to deploy a small EC2 instance in AWS, and I don't need much memory because it will be loaded only while downloading reports from the DB. So basically I am looking for the cheapest possible solution, as the machine will be idle 90% of the time.
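
As an aside: EC2 cannot hot-add RAM to a running instance. The usual options are a burstable instance type (t3/t4g) sized for the idle baseline, or stopping the instance and changing its type before heavy runs. A hedged AWS CLI sketch, with a placeholder instance ID:

# resize an instance (requires a stop/start cycle)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t3.medium\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0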

How to generate a Sonar report with SonarQube version 9.2.1.49989? Which plugin do I need to install for that?

Posted: 02 Jan 2022 09:44 PM PST

How do I generate a Sonar report with SonarQube version 9.2.1.49989? Which plugin do I need to install for that?

I installed the CNES Report .jar plugin to generate a Sonar report, but this plugin (CNES) is not supported on this version. Could you please help me with how to generate a SonarQube report in the latest SonarQube version, 9.2.1.49989?

Module ngx_http_realip_module and how to set_real_ip_from

Posted: 02 Jan 2022 10:53 PM PST

I have a very basic question about Module ngx_http_realip_module. I checked the documentation and I saw this example:

set_real_ip_from  192.168.1.0/24;
set_real_ip_from  192.168.2.1;
set_real_ip_from  2001:0db8::/32;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;

I also understand that, for my setup, these should be:

real_ip_header X-Forwarded-For;
real_ip_recursive on;
set_real_ip_from <your VPC IPV4 CIDR here>;
set_real_ip_from <your VPC IPV6 CIDR here>;

My question is this: if the IPv4 address I use to connect to the server is 123.12.12.123, where do I get the number(s) /24 after it to make it 123.12.12.123/24? The same question applies to IPv6.
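
For context, /24 is not looked up anywhere; it is a CIDR prefix length saying how many leading bits of the address form the network part. A short illustration (assumes the ipcalc utility is installed):

# 123.12.12.123/24 denotes the whole 123.12.12.0-123.12.12.255 network;
# a single host is /32 in IPv4 and /128 in IPv6
ipcalc 123.12.12.123/24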

Packets sent in IPv4

Posted: 02 Jan 2022 08:51 PM PST

I'm just trying to understand how packets work. So if I hypothetically had an image with a size of 1 MB, which is 1,000,000 bytes, and the maximum size of an IPv4 packet is 65,535 bytes, does that mean that optimally there would be only 16 packets sent? Sorry if this is a dumb question; I'm just doing a presentation for school and would like to know as much as possible.
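
For reference, the arithmetic can be checked in any shell. The IPv4 maximum is 65,535 bytes (a 16-bit length field), but real links fragment at the MTU long before that:

# ceiling division: 1,000,000 bytes over 65,535-byte maximum datagrams
echo $(( (1000000 + 65535 - 1) / 65535 ))   # prints 16
# with a typical 1500-byte Ethernet MTU (~1460 bytes of TCP payload each)
echo $(( (1000000 + 1459) / 1460 ))         # prints 685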

Force a new process to use a specific network interface (using netns/network namespaces)

Posted: 02 Jan 2022 08:50 PM PST

I have a number of interfaces available on an Ubuntu 20.04 machine, among others enx0c5b8f279a64 and usb0, with the latter being used as the default one. I want to make sure that a particular process started in a terminal will use only one of these interfaces (say enx0c5b8f279a64, even if the other interface is the default). If this process is started and the selected enx0c5b8f279a64 interface is down, it should not even try to use any other interface as a fallback (as if other interfaces did not even exist from the perspective of this process).

I think that ip netns is the way to go, but I have trouble implementing any working solution. I have tried https://superuser.com/a/750412, however I am getting Destination Host Unreachable when I try to ping 8.8.8.8 :(

Relevant part of the ip a s output:

9: enx0c5b8f279a64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 12:12:12:12:12:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.100/24 brd 192.168.7.255 scope global dynamic noprefixroute enx0c5b8f279a64
       valid_lft 84611sec preferred_lft 84611sec
    inet6 2323::2323:2323:2323:2323/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
10: usb0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 34:34:34:34:34:34 brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.47/24 brd 192.168.42.255 scope global dynamic noprefixroute usb0
       valid_lft 1858sec preferred_lft 1858sec
    inet6 4545:454:545:4545:454:5454:5454:5454/64 scope global temporary deprecated dynamic
       valid_lft 2461sec preferred_lft 0sec
    inet6 5656:565:656:5656:5656:5656:5656:5656/64 scope global deprecated dynamic mngtmpaddr noprefixroute
       valid_lft 2461sec preferred_lft 0sec
    inet6 6767::6767:6767:6767:6767/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

and the route table:

route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.7.1     0.0.0.0         UG    100    0        0 enx0c5b8f279a64
0.0.0.0         192.168.42.129  0.0.0.0         UG    101    0        0 usb0
192.168.7.0     0.0.0.0         255.255.255.0   U     100    0        0 enx0c5b8f279a64
192.168.42.0    0.0.0.0         255.255.255.0   U     101    0        0 usb0

So far all the solutions that I have tried started in a similar way:

# create namespace
sudo ip netns add nspace0
# create link
sudo ip link add veth0 type veth peer name veth1
# move link
sudo ip link set veth1 netns nspace0

# DO SOME MAGIC
# (none of the solutions that I have tried to adapt here worked for me so far).
...

# run process in namespace (here I use curl as an example)
sudo ip netns exec nspace0 curl ifconfig.me/ip

But the problem is the MAGIC part. I have seen a number of approaches with bridges and other IP-forwarding solutions. Unfortunately, none of these have worked for me so far, and I am not versed enough in networking to be able to diagnose and fix this.
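
For reference, the MAGIC step usually amounts to addressing the veth pair, routing, and NAT on the chosen uplink. A hedged sketch, where the 10.200.1.0/24 transfer subnet is an arbitrary assumption:

# address both ends of the veth pair
sudo ip addr add 10.200.1.1/24 dev veth0
sudo ip link set veth0 up
sudo ip netns exec nspace0 ip addr add 10.200.1.2/24 dev veth1
sudo ip netns exec nspace0 ip link set veth1 up
sudo ip netns exec nspace0 ip link set lo up
# default route inside the namespace points at the host end of the pair
sudo ip netns exec nspace0 ip route add default via 10.200.1.1
# NAT namespace traffic out of the chosen interface only
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.200.1.0/24 -o enx0c5b8f279a64 -j MASQUERADE

DNS inside the namespace is separate: ip netns exec honours /etc/netns/nspace0/resolv.conf if you create one.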

CouchDB replication tanks performance

Posted: 02 Jan 2022 04:57 PM PST

We have an API implemented in Tornado using CouchDB as its backend.

Usually, queries to CouchDB finish in less than 50ms.

But lately, queries take up to 250ms, sometimes more. After some New Year's Day troubleshooting, we figured out that replication tanks performance so badly that we disabled it, and performance rose again.

Last night around midnight (UTC+7) we re-enabled replication, with no impact. But this morning, when the API started to see high load, CouchDB performance tanked again.

  1. Is there a way to troubleshoot this performance issue?
  2. If replication turns out to be the culprit, how do I optimize replication? (See the sketch below.)
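
On the second point, CouchDB 3.x exposes replicator throttles in the server config. A hedged sketch for local.ini (path and values are assumptions; the stock defaults are max_jobs=500, worker_processes=4, http_connections=20):

cat <<'EOF' | sudo tee -a /opt/couchdb/etc/local.ini
[replicator]
max_jobs = 48
worker_processes = 1
http_connections = 5
EOF
sudo systemctl restart couchdb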

recovering lost access to RAID5

Posted: 02 Jan 2022 04:17 PM PST

I had a perfectly working RAID5 array and added another device. I lost power during the RAID rebuild. Now I have no access to it - it does not exist - lsblk no longer recognizes it as RAID5, just individual devices.

I have built a new RAID5 array and am trying to add one of the devices from the old array. I am getting "device busy" and the add is failing. I found a link solving this issue and started with this:

$ sudo dmsetup table
No devices found

Hence I stopped following the tutorial, because it assumes some kind of "multipath" process and expects entries in the dmsetup table.

My questions are: Is my process of building a new array and then adding devices from the old one reasonable? How can I regain access to the old devices or restore the old array?
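
A cautionary sketch: creating a new array over members of the old one overwrites their superblocks, so before anything else it is worth checking what metadata survives and trying to reassemble the interrupted array (device names below are placeholders):

# inspect the RAID superblocks the kernel can still see
sudo mdadm --examine /dev/sd[bcde]
# try to reassemble the old array from surviving metadata
sudo mdadm --assemble --scan --verbose
# or force a specific member set if --scan finds nothing
sudo mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde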

Hosting Multiple Django Websites on a VPS

Posted: 02 Jan 2022 04:54 PM PST

I'm moving away from WordPress and into bespoke Django websites.

I've settled on Django as my Python framework; my only problems at the moment concern hosting. My current shared hosting environment is great for WordPress (WHM on CloudLinux), but serving Django on Apache/cPanel appears to be hit and miss, although I haven't tried it yet with my new hosting company, who have Python enabled in cPanel.

What is the easiest way for me to set up a VPS to run a hosting environment for, say, twenty websites with unique URLs? I develop everything in a virtualenv, but I have no experience running Django in a production environment as yet. I would assume that venv isn't secure enough or has scalability issues? I've read about people using Docker to set up separate Django instances on a VPS, but I'm not sure whether they wrote their own management system.

It's my understanding that each Python/Django instance needs uWSGI and Nginx residing within its own virtual container? I'm looking for a simple and robust solution to host 20 Django sites on a VPS - is there an out-of-the-box solution? I'm also happy to develop one and set up a VPS if I'm pointed in the right direction.

Any wisdom would be gratefully accepted.

Andy :)
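
One common pattern (not an out-of-the-box product): one virtualenv plus one Gunicorn systemd unit per site, all behind a single nginx. A minimal sketch of such a unit; every path, user, and name below is illustrative:

# /etc/systemd/system/gunicorn-site1.service (hypothetical)
[Unit]
Description=gunicorn daemon for site1
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/site1
ExecStart=/srv/site1/venv/bin/gunicorn --workers 3 --bind unix:/run/gunicorn-site1.sock site1.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target

nginx then gets one small server block per site, proxying to that socket.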

Segmentation fault (core dumped) on Ubuntu when running most commands

Posted: 02 Jan 2022 05:57 PM PST

I have an Ubuntu 20.10 server that responds with Segmentation fault (core dumped) when I run many commands, such as sudo apt-get install. I don't understand what this means. Any help would be appreciated!

DNS Record to "redirect" from old server to new server

Posted: 02 Jan 2022 11:12 PM PST

I have a question regarding DNS:

I have the following setup:

srv-old.example.com | Host(A) | 192.168.1.2 | timestamp
srv-new.example.com | Host(A) | 192.168.1.3 | static

Can I just add another static A record like the following, to achieve that all requests to srv-old.example.com will be directed to srv-new.example.com:

srv-old.example.com | Host(A) | 192.168.1.3 | static  

So in summary I will have:

srv-old.example.com | Host(A) | 192.168.1.2 | timestamp
srv-new.example.com | Host(A) | 192.168.1.3 | static
srv-old.example.com | Host(A) | 192.168.1.3 | static

Or will this lead to a 50/50 chance of landing on the new server when calling srv-old.example.com? I already tried to edit the existing non-static A record of srv-old.example.com, but it got updated after one day (I think by the DHCP server).

The difficulty is that srv-old needs to exist for some more weeks, so I just can't take it offline, and deleting the non-static A record will accomplish nothing in my opinion, because it will be recreated after one day.

I thought about a CNAME-Record like this too:

srv-old.example.com | CNAME | srv-new.example.com  

But I think this will cause the same problem: a 50/50 chance of landing on the old or the new server.

Does any of you have a hint for me? (A simple 301 redirect on the old webserver is not an option at the moment.)
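
For the record: two A records for srv-old.example.com would indeed round-robin roughly 50/50, and a CNAME may not coexist with any other record for the same name (RFC 1034). So the usual shape of the fix is to delete both A records for srv-old first, then add only:

; zone-file sketch: CNAME must be the only record for srv-old
srv-old.example.com.    IN  CNAME    srv-new.example.com.

In an Active Directory zone you would also have to stop the old host's dynamic updates from re-registering its A record, or the conflict returns.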

Cannot build a working docker image for an openldap service

Posted: 02 Jan 2022 11:22 PM PST

I'm new to docker and I'm doing a little bit of experimenting with it.

I was trying to create a docker image for an openldap service. I tried creating the image starting from debian:latest image provided from the official docker repos.

This is the content of my Dockerfile

FROM debian
RUN DEBIAN_FRONTEND="noninteractive" apt-get update
RUN DEBIAN_FRONTEND="noninteractive" apt-get install --yes --no-install-recommends slapd ldap-utils
RUN apt-get clean

I tried to create a container based on this image with

docker container run --interactive --tty --name=prova image  

Here, image is the name of the image built from the Dockerfile above. When I try to run slapd with service slapd start, I get the following error:

[614.896012] Out of memory: Killed process 4005 (slapd) total-vm: 795276KB, anon-rss:334664KB, file-rss:8KB, shmem-rss:0kB, UID:101, pgtables:1108kB, oom_score_adj:0   

So it seems to be a kernel out-of-memory kill, caused by the process ballooning in memory, though I cannot understand what causes it; the same LDAP service works fine on the host system and in KVM virtual machines I have created.
I've also tried installing OpenLDAP inside a live container created from the debian:latest image, and I get the same error.

So here's my question: can anyone explain what is going on here and what is causing the error? Thanks for your help.
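
A hedged pointer, based on a commonly reported slapd-in-Docker issue rather than anything certain: slapd sizes its connection table from the process's open-files limit, which Docker often leaves extremely high, producing exactly this kind of huge allocation. Capping the limit for the container is worth a try:

# cap the open-files limit so slapd allocates a sane connection table
docker container run --interactive --tty --ulimit nofile=1024:1024 --name=prova image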

Kubernetes API server not able to register master node

Posted: 02 Jan 2022 10:37 PM PST

I was trying to create a Kubernetes cluster using kubeadm. I had spun up an Ubuntu 18.04 server, installed Docker (making sure that docker.service was running), and installed kubeadm, kubelet, and kubectl.

The following are the steps that I did:

sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu `lsb_release -cs` test"
sudo apt update
sudo apt install docker-ce
sudo systemctl enable docker
sudo systemctl start docker

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get install kubeadm kubelet kubectl -y
sudo apt-mark hold kubeadm kubelet kubectl
kubeadm version
swapoff -a

Also, in order to configure the Docker cgroup driver, I edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Within the file, I added Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" and commented out Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml".

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf for reference:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
#Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

After this I ran: systemctl daemon-reload and systemctl restart kubelet. kubelet.service was running fine.

Next, I ran sudo kubeadm init --pod-network-cidr=10.244.0.0/16 and got the following error:

root@ip-172-31-1-238:/home/ubuntu# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-1-238 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.1.238]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-1-238 localhost] and IPs [172.31.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-1-238 localhost] and IPs [172.31.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

After running systemctl status kubelet.service, it seems that kubelet is running fine.
However, after running journalctl -xeu kubelet, I got the following logs:

kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://172.31.1.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-1-238?timeout=10s": dial tcp 172.31.1.238:6443: connect: connection refused
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-1-238"
kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.1.238:6443/api/v1/nodes\": dial tcp 172.31.1.238:6443: connect: connection refused" node="ip-172-31-1-238"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"

Versions:
Docker: Docker version 20.10.12, build e91ed57
Kubeadm: {Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:39:51Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

Not sure whether this is a connection issue between the Kube Api Server and Kubelet.
Does anyone know how to fix this?
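
One hedged lead: instead of overriding the kubelet's cgroup flags in the drop-in (kubeadm expects to manage those through /var/lib/kubelet/config.yaml, which the edit above commented out), the documented route is to switch Docker itself to the systemd cgroup driver and re-run init:

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr=10.244.0.0/16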

How to resolve nginx internal server error

Posted: 02 Jan 2022 05:04 PM PST

I'm configuring Nginx as a public-facing proxy server to my internal Gunicorn server to host a "Reddit clone" Flask project I'm developing. At one point, Nginx was working properly (when I'd used mostly the same configuration as an online tutorial), but after making updates appropriate for my application, I'm getting an "Internal Server Error" when navigating to my Amazon Lightsail (Ubuntu 16.04) server's IP address, and reverting the changes back to the tutorial configuration now doesn't work.

I tried:
1. Stopping and starting the Nginx service
2. Running sudo netstat -tulpn, finding the PID (seems to appear twice for the local addresses 0.0.0.0:80 and 0.0.0.0:443), killing the process with sudo fuser -k 80/tcp and sudo fuser -k 443/tcp and then starting Nginx again
3. Completely removing Nginx from my system and reinstalling with sudo apt-get purge --auto-remove nginx, followed by sudo apt-get -y install nginx

flask_reddit (my configuration file in /etc/nginx/sites-enabled/):

server {
    # As Gunicorn documentation states, prevent host spoofing by blocking requests without "Host" request header set
#    access_log /var/log/nginx/flask_reddit/flask-reddit_access.log;
#    error_log /var/log/nginx/flask_reddit/flask-reddit_error.log;

    listen 80;
    listen 443;
    server_name "";
    return 444;
}

server {
#    access_log /var/log/nginx/flask_reddit/flask-reddit_access.log;
#    error_log /var/log/nginx/flask_reddit/flask-reddit_error.log;

    # listen on port 80 (http)
    listen 80 default_server;
    server_name _;
    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}

server {
#    access_log /var/log/nginx/flask_reddit/flask-reddit_access.log;
#    error_log /var/log/nginx/flask_reddit/flask-reddit_error.log;

    # listen on port 443 (https)
    listen 443 ssl default_server;
    server_name _;
    client_max_body_size 5m; # Useful for situations such as file uploads; will return 413 code in violation of this limit
    keepalive_timeout 120s 120s; # Used to expedite request processing

    # location of the self-signed SSL certificate
    ssl_certificate /home/ubuntu/flask-reddit/certs/cert.pem;
    ssl_certificate_key /home/ubuntu/flask-reddit/certs/key.pem;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://localhost:8000;
        proxy_redirect off; # Preserve the fact that Gunicorn handled the request by disabling proxy_pass->location URL prefix change
        proxy_set_header Host $host; # When a domain name is configured, this will equal the name in lowercase with no port (protocol added in X-Forwarded-Proto)
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /static {
        # handle static files directly, without forwarding to the application
        root /home/ubuntu/flask-reddit/app;
        try_files $uri /templates/404.html; # Provide custom-written 404 response page
        expires 30d;
    }
}

/etc/nginx/nginx.conf (my main Nginx configuration file):

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#   # See sample authentication script at:
#   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#   # auth_http localhost/auth.php;
#   # pop3_capabilities "TOP" "USER";
#   # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#   server {
#       listen     localhost:110;
#       protocol   pop3;
#       proxy      on;
#   }
#
#   server {
#       listen     localhost:143;
#       protocol   imap;
#       proxy      on;
#   }
#}

When I run sudo service nginx status, I get the following output:

● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) (Result: exit-code) since Thu 2019-08-29 04:07:42 UTC; 3 days ago
  Process: 21652 ExecReload=/usr/sbin/nginx -g daemon on; master_process on; -s reload (code=exited, status=0/SUCCESS)
 Main PID: 4855 (nginx)
    Tasks: 2
   Memory: 5.5M
      CPU: 1.521s
   CGroup: /system.slice/nginx.service
           ├─ 4855 nginx: master process /usr/sbin/nginx -g daemon on; master_process on
           └─21657 nginx: worker process

Sep 01 02:18:29 ip-172-26-5-151 systemd[1]: Reloading A high performance web server and a reverse proxy server.
Sep 01 02:18:29 ip-172-26-5-151 systemd[1]: Reloaded A high performance web server and a reverse proxy server.
Sep 01 04:58:21 ip-172-26-5-151 systemd[1]: Reloading A high performance web server and a reverse proxy server.
Sep 01 04:58:21 ip-172-26-5-151 systemd[1]: Reloaded A high performance web server and a reverse proxy server.
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

My sudo netstat -tulpn output is:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      4855/nginx -g daemo
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4036/sshd
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      19927/master
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      4855/nginx -g daemo
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      6398/python
tcp        0      0 0.0.0.0:9001            0.0.0.0:*               LISTEN      20037/python
tcp6       0      0 :::22                   :::*                    LISTEN      4036/sshd
tcp6       0      0 :::25                   :::*                    LISTEN      19927/master
udp        0      0 0.0.0.0:68              0.0.0.0:*                           943/dhclient

Using sudo nginx -t says that this main Nginx configuration in nginx.conf is valid, but running sudo nginx -t -c /etc/nginx/sites-enabled/flask-reddit gives:

nginx: [emerg] "server" directive is not allowed here in /etc/nginx/sites-enabled/flask-reddit:1
nginx: configuration file /etc/nginx/sites-enabled/flask-reddit test failed

Why is this occurring?
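
For context: files under sites-enabled are fragments included inside the http {} block of nginx.conf, so server is legal there only when the file is read through that include. Testing one standalone with -c treats it as a complete top-level config, where server is not allowed. Test the whole tree instead:

sudo nginx -t -c /etc/nginx/nginx.conf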

Why does the FreeRADIUS server say the Message-Authenticator generated by radtest is invalid?

Posted: 02 Jan 2022 06:08 PM PST

I am learning how to use FreeRADIUS; the version is v2.1.12. When I run radtest, there is no response from the server, and the server-side debug messages include the following:

Received packet from 127.0.0.1 with invalid Message-Authenticator!  (Shared secret is incorrect.)
Dropping packet without response.

Here is radtest command: radtest -x selftest password 127.0.0.1 0 secret

Here is my edit of /etc/freeradius/clients.conf:

client selftest {
     ipaddr = 127.0.0.1
     secret = secret
}

Here is my edit of /etc/freeradius/users:

selftest Cleartext-Password := "password"  

Here is the full output from radtest:

radtest -x selftest password 127.0.0.1 0 secret
Sending Access-Request of id 238 to 127.0.0.1 port 1812
        User-Name = "selftest"
        User-Password = "password"
        NAS-IP-Address = 127.0.0.1
        NAS-Port = 0
        Message-Authenticator = 0x00000000000000000000000000000000

Do you see what is wrong?

[UPDATE] Thanks arran-cudbard-bell. I changed the secret to "testing123" and it is better: the request now gets a reject instead of being dropped, but that is progress.

Indeed, I made some changes in /etc/hosts that could be the reason. It looks like this:

127.0.0.1 localhost     <== pre-existed
127.0.0.1 selftest      <== my edit

The reason I added the line is that without it I cannot even run radtest; I get this error:

# radtest -x -t pap localhost password 127.0.0.1 0 testing123
radclient:: Failed to find IP address for test-server
radclient: Nothing to send.

Do you know how to solve it?

How to escape double quotes and exclamation mark in password?

Posted: 02 Jan 2022 09:06 PM PST

I have the following code:

curl -s --insecure -H "Content-Type: application/json" -X POST -d "{\"username\":\"$1\",\"password\":\"$2\"}" http://apiurl  

In the above curl command I want to escape the " and ! in the password.

I have modified the curl command as below, but it doesn't seem to work:

curl -s --insecure -H "Content-Type: application/json" -X POST -d "{\"username\":\"$1\",\"password\":\"$2\"}" http://apiurl  
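
One sketch that sidesteps manual escaping entirely is to let jq build (and escape) the JSON. This assumes jq is installed, and keeps the placeholder http://apiurl from above:

# jq handles quoting of ", ! and anything else in the arguments
payload=$(jq -n --arg u "$1" --arg p "$2" '{username: $u, password: $p}')
curl -s --insecure -H "Content-Type: application/json" -X POST -d "$payload" http://apiurl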

The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /abcef/report

Posted: 02 Jan 2022 07:03 PM PST

I am getting the error below while trying to access a website URL.

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /abcef/report.

Reason: Error reading from remote server

Apache/2.2.15 (Red Hat) Server at www.abc.nl Port 80

I am confused because the same URL works when I hit the IP address directly. Can anybody help me sort out this issue? I have googled this issue and learned that the problem might be with the vhost.conf file. We are using AJP transfer via mod_jk to redirect from port 8080 to port 80 and vice versa; the files used are mod_jk.conf and /etc/httpd/conf/workers.properties. The worker name ajp13, as defined below by worker.list=ajp13, is used in the virtual host configuration.

I have made some modifications to the files and tried to verify them, but nothing is working. Below is my vhosts.conf file:

NameVirtualHost *:80
<VirtualHost *:80>
ServerName aa.bb.cc.dd
<ifModule mod_headers.c>
  Header set Connection keep-alive
</ifModule>
RewriteEngine on
....

Here is my httpd.conf file:

ServerRoot "/etc/httpd"
PidFile run/httpd.pid
Timeout 300
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
TraceEnable off

<IfModule prefork.c>
  StartServers       20
  MinSpareServers    5
  MaxSpareServers    100
  ServerLimit      512
  MaxClients       512
  MaxRequestsPerChild  0
</IfModule>

<IfModule worker.c>
  StartServers         4
  MaxClients         300
  MinSpareThreads     25
  MaxSpareThreads     75
  ThreadsPerChild     25
  MaxRequestsPerChild  0
</IfModule>

mod_jk file:

LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkMount /zwr* ajp13

workers.properties:

worker.list=ajp13
worker.ajp13.port=8009
worker.ajp13.host=localhost
worker.ajp13.type=ajp13
worker.ajp13.socket_keepalive=true
worker.ajp13.connection_pool_size=10000
worker.ajp13.connect_timeout=5000000
worker.ajp13.prepost_timeout=5000000

WDS&PXE booting using Grub2 - choosing menuentry

Posted: 02 Jan 2022 04:03 PM PST

I have a problem booting to WDS using PXE with GRUB2. Currently we are using WDS with DHCP (Windows): in DHCP I have the WDS IP and the 'grldr' file (the GRUB loader) as the bootfile name, with a menu.lst file on the target computer. I boot via PXE, GRUB searches the HDD for menu.lst and loads it, and I can choose WDS or HDD.

Today I must enable WDS on the EFI platform (grldr doesn't run on EFI). I installed another WDS server, configured DHCP for one test platform, and added the grub2 file as "Bootfile Name" - and there is the problem: the machine boots to the GRUB2 command line. How can I add menu entries for WDS and HDD? I can boot WDS manually from the command line, but where should I put grub.cfg?
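
A hedged sketch of such a grub.cfg (the GRUB2 netboot image reads grub.cfg from its prefix directory on the TFTP root; the wdsmgfw.efi path below is the usual WDS default, so verify it on your server):

menuentry "WDS network boot" {
    # chainload the WDS EFI boot manager over TFTP
    chainloader /boot/x64/wdsmgfw.efi
}
menuentry "Boot from local disk" {
    # return to the firmware, which continues with the next boot entry
    exit
}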

Sendmail service start error

Posted: 02 Jan 2022 08:02 PM PST

I installed sendmail on CentOS based on a tutorial. When I start sendmail, it shows a failure error.

Here is the result of the following command:

systemctl status sendmail
sendmail.service - Sendmail Mail Transport Agent
   Loaded: loaded (/usr/lib/systemd/system/sendmail.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-08-23 10:57:25 EDT; 12min ago

Aug 23 10:57:25 test systemd[1]: Starting Sendmail Mail Transport Agent...
Aug 23 10:57:25 test systemd[1]: sendmail.service: control process exited, code=exited status=203
Aug 23 10:57:25 test systemd[1]: Failed to start Sendmail Mail Transport Agent.
Aug 23 10:57:25 test systemd[1]: Unit sendmail.service entered failed state.
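
For context, status=203 is systemd's EXEC error: the service binary could not be executed at all. Comparing the unit's ExecStart path with what is actually installed is a reasonable first check:

systemctl cat sendmail.service | grep ExecStart
ls -l /usr/sbin/sendmail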

vsftpd: 500 OOPS: cannot change directory

Posted: 02 Jan 2022 05:04 PM PST

Could you help me with the vsftpd server? I am trying to configure it to work with virtual users. The problem is that I am still getting the following error in the FTP client:

500 OOPS: cannot change directory: [there is nothing more after : ]

Details:

# getenforce
Disabled

# ls -al /home/back
drwxrwxrwx+ 4 ftp  ftp  4096 Jan 13 14:49 .
drwxr-xr-x. 5 root root 4096 Dec 23 16:10 ..
drwxrwxrwx. 2 ftp  ftp  4096 Dec  3 18:00 it

# cat vsftpd.conf
anonymous_enable=YES
local_enable=YES
virtual_use_local_privs=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=NO
xferlog_std_format=YES
ascii_upload_enable=YES
ls_recurse_enable=YES
listen=YES
pam_service_name=vsftpd.virtual
userlist_enable=YES
tcp_wrappers=YES
listen_port=12121
ftp_data_port=12020
pasv_min_port=12022
pasv_max_port=12099
user_sub_token=$USER
local_root=/home/back/$USER
chroot_local_user=YES
hide_ids=YES
guest_enable=YES
allow_writeable_chroot=YES
xferlog_file=/var/log/vsftpd.log
xferlog_enable=YES
dual_log_enable=YES
port_enable=YES
pasv_enable=YES
pasv_promiscuous=YES

# cat /etc/pam.d/vsftpd.virtual

#%PAM-1.0
auth    required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
account required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
session required pam_loginuid.so

Logs:

vsftpd.log (with "log_ftp_protocol=YES"):

Tue Jan 20 17:04:42 2015 [pid 13493] CONNECT: Client "127.0.0.1"
Tue Jan 20 17:04:42 2015 [pid 13492] [test] OK LOGIN: Client "127.0.0.1"
Tue Jan 20 17:06:57 2015 [pid 13584] CONNECT: Client "127.0.0.1"
Tue Jan 20 17:06:57 2015 [pid 13584] FTP response: Client "127.0.0.1", "220 (vsFTPd 3.0.2)"
Tue Jan 20 17:06:57 2015 [pid 13584] FTP command: Client "127.0.0.1", "USER test"
Tue Jan 20 17:06:57 2015 [pid 13584] [test] FTP response: Client "127.0.0.1", "331 Please specify the password."
Tue Jan 20 17:06:57 2015 [pid 13584] [test] FTP command: Client "127.0.0.1", "PASS <password>"
Tue Jan 20 17:06:57 2015 [pid 13583] [test] OK LOGIN: Client "127.0.0.1"

secure:

Jan 13 17:49:35 localhost vsftpd[10198]: pam_userdb(vsftpd.virtual:auth): user 'test' granted access

Info:

Fedora 20 3.17.7-200.fc20.x86_64 #1 SMP Wed Dec 17 03:35:33 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Do you have any idea what I should change to be able to use the vsftpd server?
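
One hedged observation: with local_root=/home/back/$USER and user_sub_token=$USER, the virtual user test is placed in /home/back/test, and vsftpd reports exactly this error when that directory is missing or unreachable. A quick check:

# make sure the per-user root actually exists and is accessible
sudo mkdir -p /home/back/test
sudo chown ftp:ftp /home/back/test
sudo chmod 755 /home/back/test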

Authenticate users with Zimbra LDAP Server from other CentOS clients

Posted: 02 Jan 2022 07:03 PM PST

I am wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication?

My zimbra server version is

[zimbra@zimbra ~]$ zmcontrol -v
Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.

My LDAP Server status is

[zimbra@ldap ~]$ zmcontrol status
Host ldap.domain.com
    ldap                    Running
    snmp                    Running
    stats                   Running
    zmconfigd               Running

I already installed nss-pam-ldapd packages to my servers.

[root@www]# rpm -qa | grep ldap
nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
apr-util-ldap-1.3.9-3.el6_0.1.x86_64
pam_ldap-185-11.el6.x86_64
openldap-2.4.23-32.el6_4.1.x86_64

My /etc/nslcd.conf is

[root@www]# tail -n 7 /etc/nslcd.conf
uid nslcd
gid ldap
# This comment prevents repeated auto-migration of settings.
uri ldap://ldap.domain.com
base dc=domain,dc=com
binddn uid=zimbra,cn=admins,cn=zimbra
bindpw **pass**
ssl no
tls_cacertdir /etc/openldap/cacerts

When I run:

[root@www ~]# id username
id: username: No such user

But I am sure that the user username exists on the LDAP server.

EDIT: When I run the ldapsearch command, I get all results, with credentials and DNs.

[root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'

# extended LDIF
#
# LDAPv3
# base <dc=domain,dc=com> (default) with scope subtree
# filter: objectclass=*
# requesting: ALL
#

# domain.com
dn: dc=domain,dc=com
zimbraDomainType: local
zimbraDomainStatus: active
.
.
.

Changing physical path on IIS through appcmd isn't activated

Posted: 02 Jan 2022 06:08 PM PST

We have come across an issue on IIS 7.5 where we have a simple deploy system which consists of the following:

Create a zip-file of new webroot, consisting of three folders:

Api  Site  Manager  

This is unzipped into a new folder (let's say we call it "SITE_REV1"), and contains a script which invokes the following (one for each webroot):

C:\Windows\system32\inetsrv\appcmd set vdir "www.site.com/" -physicalPath:"SITE_REV1\Site"

This usually works, 9 times out of 10. In some cases the webroot seems to be updated correctly (if I inspect basic settings in IIS Manager, the path looks correct), but the running site in question actually points to the old location. The only way we have managed to "fix it" is by running an IIS reset. It isn't enough to recycle the application pool in question.

Sometimes it even seems to be necessary to reboot, but I'm not 100% sure that is accurate (it hasn't always been me fixing the problem).

I rewrote the script using Powershell and the Web-Administration module, hoping that there was a glitch in appcmd, but the same issue occurs.

Set-ItemProperty "IIS:\Sites\www.site.com" -Name physicalPath -Value "SITE_REV1\Site"

Has anyone experienced something like this? Does anyone have a clue about what's going on, and what I can try to do to prevent this issue? Doing an IIS reset is not really a good option for us, because that would affect all sites on the server every time we try to deploy changes to a single site.

EDIT: We have identified that a start/stop of the site (NOT the application pool) in IIS Manager resolves the erroneous physical path, but if I stop the site using appcmd, change the physical path, and then start it, I still suffer from the same issues. I'm drawing a blank...

Varnish not showing custom headers

Posted: 02 Jan 2022 10:06 PM PST

In my Varnish 3 configuration (default.vcl) I configured the following to pass along information via the response headers:

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    set resp.http.X-Cache-Expires = resp.http.Expires;
    set resp.http.X-Test = "LOL";

    # remove Varnish/proxy header
    remove resp.http.X-Varnish;
    remove resp.http.Via;
    remove resp.http.Age;
    remove resp.http.X-Purge-URL;
    remove resp.http.X-Purge-Host;
    remove resp.http.X-Powered-By;
}

And yet the only thing I can see is

HTTP/1.1 200 OK
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Type: text/html
Content-Length: 8492
Accept-Ranges: bytes
Date: Tue, 05 Feb 2013 10:11:02 GMT
Connection: keep-alive

It doesn't show any headers that we have added inside the vcl_deliver method.

EDIT: This is my vcl_fetch method:

sub vcl_fetch {
    unset beresp.http.Server;
    unset beresp.http.Etag;
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = req.http.rlnclientipaddr;
    set beresp.http.X-Wut = "YAY";

    if (req.url ~ "^/w00tw00t") {
        error 750 "Moved Temporarily";
    }

    # allow static files to be cached for 7 days
    # with a grace period of 1 day
    if (req.url ~ "\.(png|gif|jpeg|jpg|ico|swf|css|js)$") {
        set beresp.ttl = 7d;
        set beresp.grace = 1d;
        return(deliver);
    }

    # cache everything else for 1 hour
    set beresp.ttl = 1h;

    # grace period of 1 day
    set beresp.grace = 1d;

    return(deliver);
}

Does anyone have an idea how to solve this, as NO custom headers are included in the response headers? As you can see above, in my vcl_fetch method I add several custom response headers, but none of them are showing.
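
A quick way to rule out an intermediate proxy stripping the headers is to watch what Varnish itself sends to clients (Varnish 3 syntax; the TxHeader tag was renamed in later versions):

# show only the response headers Varnish transmits
varnishlog -i TxHeader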

Nginx SSL redirect for one specific page only

Posted: 02 Jan 2022 04:03 PM PST

I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but not for me. I was wondering if anyone has any advice on how to improve this configuration:

...

ssl_certificate      example.net.crt;
ssl_certificate_key  example.key;

server {
    listen 80 default;
    listen 443 ssl;

    server_name www.example.net example.net;
    access_log /srv/www/example.net/logs/access.log;
    error_log /srv/www/example.net/logs/error.log;

    root /srv/www/example.net/public_html;

    location / {
        if ( $scheme = https ){
            return 301 http://example.net$request_uri;
        }
        try_files $uri $uri/ /index.php?$uri&$args;
        index index.php index.html;
    }

    location ^~ /admin.php {
        if ( $scheme = http ) {
            return 301 https://example.net$request_uri;
        }
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

...

It seems that the extra information in the location ^~ /admin.php block is unnecessary; does anyone know of an easy way to avoid duplicate code? Without it, nginx skips the PHP block and just returns the PHP files.

Currently it applies HTTPS correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page instead. When returning to the non-HTTPS site in Firefox, it does not correctly return to HTTP but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem.

Is this an issue on my end that I can fix? And does anyone know of any ways I could reduce duplicate configuration options in the configuration? Thanks in advance!

EDIT: Clearing the cache / cookies seemed to work. Is this the right way to do http/https redirection? I sort of made it up as I went along.
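
On the duplication question: one common approach is to factor the shared fastcgi directives into a separate file and include it from both locations; the file name below is illustrative:

# /etc/nginx/php-fastcgi.conf (hypothetical), containing:
#     include fastcgi_params;
#     fastcgi_pass 127.0.0.1:9000;
#     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

location ^~ /admin.php {
    if ( $scheme = http ) {
        return 301 https://example.net$request_uri;
    }
    include php-fastcgi.conf;
    fastcgi_param HTTPS on;
}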

Sonicwall Email Security Audit Logs

Posted: 02 Jan 2022 09:06 PM PST

I have a Sonicwall Email Security appliance from which I would like to extract the audit logs on a daily basis. Currently, I can only see how to get them manually by going to the web interface.

Is there a way that I can automate the process, preferably from a Linux box?

Lighttpd proxy module - use with hostname

Posted: 02 Jan 2022 10:06 PM PST

I have to proxy a site which is hosted on an external webspace through my lighty on example.org. My config so far:

$HTTP["url"] =~ "^/webmail" {      proxy.server =  ("/webmail/" => (          # this entry should link to example2.org          ("host" => "1.2.3.4", "port" => 80)      ))  }  

The webspace provider has configured my domain as a vhost. So if I access http://1.2.3.4/webmail/, lighttpd will only deliver the main site of the webspace provider, which says "Site example.org was not found on our server."

Any suggestions on how I have to configure lighty to proxy sites that are only hosted as vhosts (and do not have an IP of their own)?

How to limit the From header to match MAIL FROM in postfix?

Posted: 02 Jan 2022 08:02 PM PST

SMTP clients are required to pass user authentication before sending emails to other domains (relaying). We can use smtpd_sender_restrictions to make sure the MAIL FROM address matches the authenticated user. But how do we make sure the From address in the mail header matches the MAIL FROM address? We also want to limit the Reply-To header, so spam senders can hardly use our SMTP server even if they break some of the user passwords.
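
For completeness: the MAIL FROM half is covered by a stock Postfix restriction, while checking the header From:/Reply-To: is outside smtpd restrictions and is typically done with header_checks or a milter. A sketch of the MAIL FROM part, where the map file path is illustrative:

# map each SASL login to the MAIL FROM addresses it may use
postconf -e 'smtpd_sender_login_maps = hash:/etc/postfix/sender_login'
# reject authenticated clients whose MAIL FROM does not match their login
postconf -e 'smtpd_sender_restrictions = reject_authenticated_sender_login_mismatch'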
