Friday, May 20, 2022

Recent Questions - Server Fault

Why does StrongSwan charon-cmd client require the --cert command-line option for multiple CA chain certificates?

Posted: 20 May 2022 03:36 PM PDT

I have a StrongSwan charon server on Ubuntu 18.04. I connect to this server with a StrongSwan charon-cmd client from another Ubuntu Linux machine.

The command I use from the client machine to connect to the server is:

charon-cmd --cert ./GoDaddyCA1.crt --cert GoDaddyCA2.crt --host xxx.example.com --identity myusername

It works great, but I don't understand why I need two "--cert" options in the command line to trust both GoDaddy CA certificates in the chain.

My personal certificate is served by the StrongSwan server, and its authority is the GoDaddyCA1.crt. The GoDaddyCA1.crt certificate has an authority of the GoDaddyCA2.crt certificate. The GoDaddyCA2.crt is a self-signed root certificate.

So, the authority chain is:

MyPersonalCert.crt -> GoDaddyCA1.crt -> GoDaddyCA2.crt

The meaning of the charon-cmd command-line option "--cert" is to declare that "this is a certificate that I trust". So, I would expect that by trusting the GoDaddyCA1.crt, then my personal certificate should also be trusted.

But that's not good enough for charon-cmd. The charon-cmd client demands that I specify "--cert" all the way up to a self-signed certificate. But this seems superfluous. If I trust the intermediate CA certificate, then obviously I must also trust its authority's CA cert, right?

Is this a bug, or a feature? If it's a feature, what benefit does it provide?

Active Directory Web based Self Service

Posted: 20 May 2022 03:40 PM PDT

I am a service provider and have installed an Active Directory infrastructure as the backend to a number of applications. The issue is that the end users will never log in to a domain-bound machine, limiting the ability to have passwords expire, be reset, etc.

Looking for recommendations on solutions. I like the look of https://www.logonbox.com/, however I can't swallow the cost.

The ability to give the customer the ability to manage users, limited by which OUs, users, and groups they can see and manage, would be a bonus.

Ansible AWX - ansible-playbook command not found

Posted: 20 May 2022 02:20 PM PDT

For some reason I'm getting the following error: ansible-playbook: command not found. Yet when I log into the server, I can run the ansible-playbook command.

sh-4.2$ ansible-playbook
usage: ansible-playbook [-h] [--version] [-v] [-k]
                        [--private-key PRIVATE_KEY_FILE] [-u REMOTE_USER]
                        [-c CONNECTION] [-T TIMEOUT]
                        [--ssh-common-args SSH_COMMON_ARGS]
                        [--sftp-extra-args SFTP_EXTRA_ARGS]
                        [--scp-extra-args SCP_EXTRA_ARGS]
                        [--ssh-extra-args SSH_EXTRA_ARGS] [--force-handlers]
                        [--flush-cache] [-b] [--become-method BECOME_METHOD]
                        [--become-user BECOME_USER] [-K] [-t TAGS]
                        [--skip-tags SKIP_TAGS] [-C] [--syntax-check] [-D]
                        [-i INVENTORY] [--list-hosts] [-l SUBSET]
                        [-e EXTRA_VARS] [--vault-id VAULT_IDS]
                        [--ask-vault-pass | --vault-password-file VAULT_PASSWORD_FILES]
                        [-f FORKS] [-M MODULE_PATH] [--list-tasks]
                        [--list-tags] [--step] [--start-at-task START_AT_TASK]
                        playbook [playbook ...]
ansible-playbook: error: too few arguments

Any idea what the issue might be?

How to unlock multiple luks-devices using dropbear-initramfs

Posted: 20 May 2022 02:22 PM PDT

My system setup is as following:

  1. One single SSD with LUKS and LVM (and of course an unencrypted boot partition). The Debian system is installed there.
  2. Two HDDs assembled as RAID0 with LUKS and LVM for some custom data

To unlock the LUKS devices at boot time remotely, I tried to use dropbear-initramfs.

That works fine for unlocking the first LUKS device (on the SSD, where the Debian system is installed):

  1. I log in with ssh to dropbear/busybox
  2. I use cryptroot-unlock, insert the key, and unlock it

But to unlock the second LUKS device (on the RAID0), I still need some console.

Is there any way to unlock both LUKS devices together (or one after the other) using dropbear-initramfs / busybox? TIA!
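A sketch, assuming this is Debian's cryptsetup-initramfs (mapping names and UUIDs below are placeholders): listing the second device in /etc/crypttab with the `initramfs` option forces it into the initramfs, and cryptroot-unlock then prompts for every device it knows about, not just the root device.

```
# /etc/crypttab -- names and UUIDs are placeholders
ssd_crypt   UUID=<uuid-of-ssd-luks>    none  luks
raid_crypt  UUID=<uuid-of-raid-luks>   none  luks,initramfs
```

After editing, run update-initramfs -u and reboot; cryptroot-unlock in the dropbear SSH session should then ask for both passphrases (the RAID also has to be assembled inside the initramfs, which mdadm's initramfs hook normally takes care of).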

What's required to let a domain send outbound emails from multiple domains?

Posted: 20 May 2022 02:06 PM PDT

Let's say I have:

  • 40 outbound emails to send
  • main domain - main_domain.com
  • 4 separate domains-workers with Postfix - mail.my_postfix1.com...mail.my_postfix4.com

Goal: send 10 emails via each of the 4 worker Postfix domains, on behalf of main_domain.com. In other words, let the main domain use the 4 Postfix domains to send outbound email, spread equally.

Question: how? What should be set up in terms of DNS records?

Since MX records are required to receive email, that is, for inbound mail, what is required to send mail outbound?
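Outbound-only sending needs no MX records on the workers; what matters is what receiving servers check. A sketch of the records involved (IPs, selector name, and key are placeholders): SPF on the sender domain must authorize every worker that puts main_domain.com in the envelope sender, DKIM keys are published under main_domain.com and used for signing on each worker, and each worker needs a matching A and PTR pair for its HELO name.

```
; zone file sketch -- addresses and selector are placeholders
main_domain.com.                   TXT  "v=spf1 ip4:192.0.2.1 ip4:192.0.2.2 ip4:192.0.2.3 ip4:192.0.2.4 ~all"
sel1._domainkey.main_domain.com.   TXT  "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.main_domain.com.            TXT  "v=DMARC1; p=none"
mail.my_postfix1.com.              A    192.0.2.1   ; plus a 192.0.2.1 -> mail.my_postfix1.com PTR
```

Splitting the 40 messages 10 per worker is then a relay/transport decision inside Postfix, not a DNS one.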

Cannot connect to Linux instance using SSH through PuTTY

Posted: 20 May 2022 01:47 PM PDT

Just signed up for a GCP account, created a Linux VM on GCP (SUSE), and I'm having issues connecting via SSH through PuTTY. I also tried through a CMD prompt on Windows 10 and get the same issue. I followed all the steps to generate SSH keys and uploaded the public key on GCP (also, I can see the key in the .ssh directory). The firewall is set to enable all hosts and ports to connect. I am able to ping the external IP, but when trying to ssh into the VM there is no response and it ends in a timeout:

ssh -i pathtoKeyfile user@External_IP
ssh: connect to host xx.xx.xx.xx port 22: Connection timed out

Please help with any suggestion on similar issues.

Windows Server Slow NFS Performance

Posted: 20 May 2022 12:42 PM PDT

Setup:

  • Server: Windows Server 2019 with the NFS feature installed, virtualized on Proxmox backed by a 6-disk ZFS pool
  • Client: Windows 10
  • Network: 1 Gb backbone
  • NFS: Kerberos V5 authentication with server authentication disabled; ACL by IP; otherwise default configuration

Issue: 5-10 Mbps peak performance. On an SMB share of the same folder I'm seeing closer to 100-200 Mbps. Server/client CPU/memory usage isn't indicating saturation, and changing the mode from TCP/UDP to TCP only didn't impact the rate.

kubeadm 1.24 with containerd: kubeadm init fails (CentOS 7)

Posted: 20 May 2022 02:03 PM PDT

I'm trying to install a single-node cluster on CentOS 7, with kubeadm 1.24 and containerd. I followed the installation steps, and I ran:

containerd config default > /etc/containerd/config.toml

and set SystemdCgroup = true.

but the kubeadm init fails at :

Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

systemctl status kubelet shows Active: active (running)

and the logs : journalctl -xeu kubelet :

mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.715751    8685 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reas
mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.809523    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:05 master-node kubelet[8685]: E0520 17:07:05.910121    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.010996    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.111729    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.185461    8685 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://10.3
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.212834    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.313367    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.413857    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: I0520 17:07:06.433963    8685 kubelet_node_status.go:70] "Attempting to register node" node="master-node"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.434313    8685 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.
mai 20 17:07:06 master-node kubelet[8685]: W0520 17:07:06.451759    8685 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDr
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.451831    8685 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSID
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.514443    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573293    8685 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Un
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573328    8685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573353    8685 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.573412    8685 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574220    8685 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Un
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574254    8685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574279    8685 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.574321    8685 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.615512    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.716168    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"
mai 20 17:07:06 master-node kubelet[8685]: E0520 17:07:06.816764    8685 kubelet.go:2419] "Error getting node" err="node \"master-node\" not found"

I can't figure out whether, because of containerd, I have to run kubeadm init with a --config file, or what else could be causing the error. Could you help me with that?
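One thing worth double-checking (a sketch, not a confirmed diagnosis for this node): kubeadm 1.24 defaults the kubelet to the systemd cgroup driver, so SystemdCgroup = true has to sit at exactly this path inside the generated /etc/containerd/config.toml, and the CRI plugin must not be disabled:

```
# /etc/containerd/config.toml (relevant fragment)
disabled_plugins = []   # "cri" must not appear here

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```

After changing it, run systemctl restart containerd, then kubeadm reset -f and kubeadm init again. A plain kubeadm init works with containerd; a --config file is not required just to use containerd.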

Parallel OpenVPN Connection Over multiple Libvirt Network Interfaces

Posted: 20 May 2022 12:30 PM PDT

Let's say I have 20 VMs (host: Ubuntu, using QEMU-KVM and libvirt) and I want to use different network interfaces for different groups of VMs (1-6 using Network1, 7-15 using Network2, 16-20 using Network3). The network interfaces were created by libvirt, and I want each of them to use its own OpenVPN connection (Network1 uses conn1, Network2 uses conn2, Network3 uses conn3), so that the first group of VMs all use conn1, and so on.
What I don't want is to run OpenVPN inside each VM.
So is there a way?
If so, how advanced is it (because I'm kind of a newbie :) )?
And how would I go about it? (I would even appreciate knowing what area of networking I should study.)

kubeadm init failing to connect through proxy

Posted: 20 May 2022 10:07 AM PDT

I have this version of kubeadm

[root@megatron ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:44:24Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

My docker is setup and working properly, and can easily and properly pull all the needed images through the proxy I am using.

I have the HTTP proxy configured across the board: in profile, bashrc, environment, etc.

When I try to run kubeadm and have it pull images, it times out:

I0520 10:56:32.085975  260833 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
exit status 1
output: time="2022-05-20T10:59:03-04:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.24.0\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.24.0\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.24.0\": dial tcp 172.253.63.82:443: i/o timeout"
, error
k8s.io/kubernetes/cmd/kubeadm/app/util/runtime.(*CRIRuntime).PullImage
        cmd/kubeadm/app/util/runtime/runtime.go:121
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
        cmd/kubeadm/app/cmd/config.go:340
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
        cmd/kubeadm/app/cmd/config.go:312
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1571
failed to pull image "k8s.gcr.io/kube-apiserver:v1.24.0"
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
        cmd/kubeadm/app/cmd/config.go:341
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
        cmd/kubeadm/app/cmd/config.go:312
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1571

I have even manually pulled the necessary images

[root@megatron ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.0
k8s.gcr.io/kube-controller-manager:v1.24.0
k8s.gcr.io/kube-scheduler:v1.24.0
k8s.gcr.io/kube-proxy:v1.24.0
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@megatron ~]# docker pull k8s.gcr.io/coredns/coredns:v1.8.6
v1.8.6: Pulling from coredns/coredns
Digest: sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
Status: Image is up to date for k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/coredns/coredns:v1.8.6
[root@megatron ~]#

I need help understanding why kubeadm is not using the proper HTTP proxy, which seems to be the case when trying to get https://dl.k8s.io/release/stable-1.txt.

There is no problem getting that file with wget, so why isn't kubeadm getting it?

[root@starscream ~]# wget https://dl.k8s.io/release/stable-1.txt
--2022-05-20 11:09:46--  https://dl.k8s.io/release/stable-1.txt
Connecting to [PROXY]:8080... connected.
Proxy request sent, awaiting response... 302 Moved Temporarily
Location: https://storage.googleapis.com/kubernetes-release/release/stable-1.txt [following]
--2022-05-20 11:09:47--  https://storage.googleapis.com/kubernetes-release/release/stable-1.txt
Connecting to [PROXY]:8080... connected.
Proxy request sent, awaiting response... 200 OK
Length: 7 [text/plain]
Saving to: 'stable-1.txt'

stable-1.txt    100%[=========================>]       7  --.-KB/s    in 0s

2022-05-20 11:09:48 (331 KB/s) - 'stable-1.txt' saved [7/7]

Update:

After looking at the forced version option, I tried:

kubeadm config images pull --kubernetes-version 1.24.0 --v=5  

Now it doesn't try to retrieve the stable-1.txt. I suspect I missed the fact that it may have been able to retrieve it regardless.

Now it is STILL trying to pull images that docker already has. Why is kubeadm trying to pull images that already exist?

I0520 12:57:24.402436  228499 kubelet.go:214] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
exit status 1
output: time="2022-05-20T12:59:54-04:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.24.0\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.24.0\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.24.0\": dial tcp 142.250.145.82:443: i/o timeout"
, error

It still doesn't help that the proxy is not being used. I can accept that, but why isn't kubeadm using the existing images?
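A plausible explanation for both symptoms (a sketch from the error messages, not verified on this box): with kubeadm 1.24 the images are pulled by containerd through the CRI socket, not by the Docker daemon, so neither Docker's proxy settings nor images sitting in Docker's store are visible to kubeadm. containerd reads its proxy from its systemd unit environment; the proxy host and port below are placeholders:

```
# /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12"
```

Then systemctl daemon-reload and systemctl restart containerd. This would also explain why the docker-pulled images are ignored: containerd keeps a separate image store, and crictl images shows what the kubelet actually sees.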

Basic auth and data from curl to HAProxy backend not working on TLS Termination - but works on TLS passthrough

Posted: 20 May 2022 11:16 AM PDT

listen pki
    bind *:8884 ssl no-sslv3 crt /HAPROXY.pem.ecdsa verify required ca-file /CA_CHAIN.pem
    mode http
    http-request add-header Content-Type "application/pkcs10"
    http-request add-header Content-Transfer-Encoding "base64"
    http-request add-header Authorization "Basic somebase64encodedstring"
    default_backend pkis_1

backend pkis_1
    mode http
    http-request add-header Content-Type "application/pkcs10"
    http-request add-header Content-Transfer-Encoding "base64"
    http-request add-header Authorization "Basic somebase64encodedstring"
    server pkis my.domain.com:443 ssl verify none

Using the above config we are able to call the backend successfully from curl on a certain endpoint, using the same certificates, but we are blocked on another endpoint of the same server which requires basic auth.

The curl call is:

curl --cacert '$INITIAL_CACERT' --key '$INITIAL_DEVICE_KEY' --cert '$INITIAL_DEVICE_CERT' --user '$USER':'$PWD' --data @'$1'/'$KEY_NAME'-key.b64 -o '$1'/'$KEY_NAME'-cert-p7.b64 -H "Content-Type: application/pkcs10" -H "Content-Transfer-Encoding: base64" https://'$PKI_SERVER':'$PORT'/.well-known/est/'$2'/simpleenroll

Is there some way to forward everything from this curl command to the backend?

The weird thing is, when we remove all SSL auth and switch to TCP mode as a transparent proxy, the basic auth works!
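One difference between the two modes worth checking (an assumption drawn from the config shown, not a verified fix): in HTTP mode the add-header rules append a second Authorization header next to the one curl already sends via --user, and they are applied twice (once in the listen section, once in the backend), while in TCP passthrough mode HAProxy cannot touch headers at all, so only curl's header reaches the server. A sketch that forwards the client's own credentials untouched and only injects a value when none was sent:

```
listen pki
    bind *:8884 ssl no-sslv3 crt /HAPROXY.pem.ecdsa verify required ca-file /CA_CHAIN.pem
    mode http
    default_backend pkis_1

backend pkis_1
    mode http
    # only set Authorization when the client did not provide one
    http-request set-header Authorization "Basic somebase64encodedstring" unless { req.hdr(Authorization) -m found }
    server pkis my.domain.com:443 ssl verify none
```

The Content-Type and Content-Transfer-Encoding headers are already supplied by the curl command, so duplicating them in the proxy should not be necessary either.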

Should HTTP load balancer forward bad requests to backend?

Posted: 20 May 2022 03:08 PM PDT

If an HTTP client sends a GET request with a body that would generate a 400 Bad Request response, should the load balancer forward that request to the backend or deal with it immediately? Is there any advantage in NOT dealing with it at the load-balancing layer?

Recently, an application team complained that a load balancer was returning 400 Bad Request where the application itself would return 405 Method Not Allowed. It seemed the load balancer was right and the application team had a misunderstanding, but that left me wondering when the load balancer should be more forgiving and forward crap to backends anyway.

Exposing an internal IP to the internet on GCP

Posted: 20 May 2022 11:03 AM PDT

Be warned, noob question here.

I want to play around with GCP AlloyDB. I have created a cluster and it has been assigned an internal IP. This is fine for applications running in the same VPC/project network, but I would love to connect to it directly from my workstation in the simplest way.

I am a total noob, I don't even know what I don't know, and I would really appreciate any guidance on how to expose/map an internal IP to an external IP in GCP, especially since I cannot pick the AlloyDB instance or its internal IP when trying to reserve an external IP via the GCP web console.

I'm thinking NAT and some router would do the trick, but it's beyond my current knowledge and I'm not even sure where to start searching.

Alertmanager telegram config chat_id and cannot unmarshal error

Posted: 20 May 2022 01:10 PM PDT

I am trying to configure Alertmanager to send alerts to my Telegram group. Here is the configuration I have:

global:
  resolve_timeout: 5m
route:
  group_by:
  - job
  group_interval: 5m
  group_wait: 30s
  receiver: "telegram"
  repeat_interval: 1d
  routes:
  - match:
      alertname: Watchdog
    receiver: "null"
receivers:
- name: "null"
- name: 'telegram'
  telegram_configs:
  - bot_token: '5_REDACTED'
    chat_id: '-1234567'
templates:
- /etc/alertmanager/config/*.tmpl

The problem is that the container crash-loops with: ts=2022-05-01T22:06:11.142Z caller=coordinator.go:118 level=error component=configuration msg="Loading configuration file failed" file=/etc/alertmanager/config/alertmanager.yaml err="yaml: unmarshal errors:\n line 26: cannot unmarshal !!str {{ -123... into int64"

How can I fix this? I have tried adding single quotes and double quotes, but I still get the same error.
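The error itself points at the fix: it says a string is being supplied where an int64 is expected, and in Alertmanager's telegram_config the chat_id field is an integer. Quoting the value (with either quote style) always makes it a YAML string, so the quotes have to go entirely. A sketch of the corrected fragment (token redacted as in the question):

```
receivers:
- name: 'telegram'
  telegram_configs:
  - bot_token: '5_REDACTED'
    chat_id: -1234567   # bare integer, not a quoted string
```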

Can you damage a PoE camera by plugging it quickly into a PoE switch over and over?

Posted: 20 May 2022 02:14 PM PDT

Here is the context, as I am a newbie at this: I got a free HP ProCurve 24-port PoE switch from a nearby university. I have 12 cameras and, long story short, only 4 of the ports worked and the unit flashes stating there is a PoE fault. I tested the unit by plugging a live single camera connection into each of ports 1-24 rapidly to see which ports would show activity and which would not. Now that camera does not work. It's a Ubiquiti G3 Bullet.

I want to know if rapidly plugging an ethernet connection into each port could have damaged the camera.

I would like to understand why, if possible. Thanks!

Postfix - Recipient Address Rejected on Incoming Mail Only

Posted: 20 May 2022 12:00 PM PDT

I am working on building a secure mail server for the first time using Postfix and Dovecot and I have encountered a problem that I cannot surpass.

To avoid email being delivered to the spam box of remote servers, I set up SPF and DKIM following this tutorial. The problem I now have is that my server rejects the recipient address when mail is delivered from remote services like Gmail.

  • Sending from john@example.com to alex@example.com works.
  • Sending from john@example.com to alex@gmail.com works.
  • Sending from john@gmail.com to alex@example.com fails.
NOQUEUE: reject: RCPT from sonic311-43.consmr.mail.bf2.yahoo.com[74.6.131.217]: 451 4.3.5 <john@example.com>: Recipient address rejected: Server configuration problem;  

This is my /etc/postfix/main.cf

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# TLS parameters
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#smtpd_use_tls=yes
#smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_tls_cert_file=/etc/letsencrypt/live/mail.example.com/fullchain.pem
smtpd_tls_key_file=/etc/letsencrypt/live/mail.example.com/privkey.pem
smtpd_use_tls=yes
smtpd_tls_auth_only = yes

smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions =
  permit_mynetworks
  permit_sasl_authenticated
  reject_unauth_destination
  check_policy_service unix:private/policyd-spf

# Milter configuration
milter_default_action = accept
milter_protocol = 6
smtpd_milters = local:/opendkim/opendkim.sock
non_smtpd_milters = $smtpd_milters

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = example.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
# mydestination = $myhostname, localhost.localdomain, localhost
mydestination = localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all
home_mailbox = Maildir/
virtual_transport = lmtp:unix:private/dovecot-lmtp
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf

policyd-spf_time_limit = 3600

Initially, outbound mail timed out until I added

permit_sasl_authenticated
reject_unauth_destination

under smtpd_recipient_restrictions

How do I get my server to accept mail?

Edit This is what I get when using a testing tool:

CLIENT -> SERVER: MAIL FROM:
SERVER -> CLIENT: 250 2.1.0 Ok
CLIENT -> SERVER: RCPT TO:
SERVER -> CLIENT: 451 4.3.5 : Recipient address rejected: Server configuration problem
SMTP ERROR: RCPT TO command failed: 451 4.3.5 : Recipient address rejected: Server configuration problem
CLIENT -> SERVER: QUIT
SERVER -> CLIENT: 221 2.0.0 Bye
Connection: closed
2019-04-10 19:53:53 SMTP Error: The following recipients failed: john@example.com: : Recipient address rejected: Server configuration problem
Message sending failed.

Edit 2 This is the output in /var/log/mail.log

Apr 11 05:24:17 alice postfix/smtpd[22573]: connect from mail-wr1-f42.google.com[209.85.221.42]
Apr 11 05:24:17 alice postfix/smtpd[22573]: warning: connect to private/policyd-spf: No such file or directory
Apr 11 05:24:18 alice postfix/smtpd[22573]: warning: connect to private/policyd-spf: No such file or directory
Apr 11 05:24:18 alice postfix/smtpd[22573]: warning: problem talking to server private/policyd-spf: No such file or directory
Apr 11 05:24:18 alice postfix/smtpd[22573]: NOQUEUE: reject: RCPT from mail-wr1-f42.google.com[209.85.221.42]: 451 4.3.5 <john@example.com>: Recipient address rejected: Server configuration problem; from=<jonbonsilver@gmail.com> to=<john@example.com> proto=ESMTP helo=<mail-wr1-f42.google.com>
Apr 11 05:24:19 alice postfix/smtpd[22573]: disconnect from mail-wr1-f42.google.com[209.85.221.42] ehlo=2 starttls=1 mail=1 rcpt=0/1 data=0/1 quit=1 commands=5/7
Apr 11 05:27:39 alice postfix/anvil[22498]: statistics: max connection rate 1/60s for (smtp:185.234.217.223) at Apr 11 05:18:44
Apr 11 05:27:39 alice postfix/anvil[22498]: statistics: max connection count 1 for (smtp:185.234.217.223) at Apr 11 05:18:44
Apr 11 05:27:39 alice postfix/anvil[22498]: statistics: max cache size 2 at Apr 11 05:24:17
Apr 11 05:27:44 alice postfix/smtpd[22676]: connect from unknown[185.234.217.223]
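The warnings in this log point at the cause: smtpd cannot connect to the private/policyd-spf socket that check_policy_service in main.cf references. That usually means the policy service entry in master.cf is missing or inactive (Postfix not reloaded since it was added), or the policy daemon binary is not at the path given. Assuming postfix-policyd-spf-python installed at /usr/bin/policyd-spf, the expected master.cf entry is:

```
# /etc/postfix/master.cf -- the service check_policy_service expects
policyd-spf  unix  -       n       n       -       0       spawn
  user=policyd-spf argv=/usr/bin/policyd-spf
```

After adding or changing it, run "postfix reload" and confirm that the socket then appears under /var/spool/postfix/private/.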

Edit 3

This is my /etc/postfix/master.cf

#
# Postfix master process configuration file.  For details on the format
# of the file, see the master(5) manual page (command: "man 5 master" or
# on-line: http://www.postfix.org/master.5.html).
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (no)    (never) (100)
# ==========================================================================
smtp      inet  n       -       y       -       -       smtpd
#  -o content_filter=spamassassin
#smtp      inet  n       -       y       -       1       postscreen
#smtpd     pass  -       -       y       -       -       smtpd
#dnsblog   unix  -       -       y       -       0       dnsblog
#tlsproxy  unix  -       -       y       -       0       tlsproxy
submission inet n       -       -       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_reject_unlisted_recipient=no
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject_unauth_destination
#  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
  -o smtpd_recipient_restrictions=permit_sasl_authenticated, reject_unauth_destination
#  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject_unauth_destination
#  -o milter_macro_daemon_name=ORIGINATING
#smtps     inet  n       -       y       -       -       smtpd
#  -o syslog_name=postfix/smtps
#  -o smtpd_tls_wrappermode=yes
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
#  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
#  -o smtpd_recipient_restrictions=
#  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#628       inet  n       -       y       -       -       qmqpd
pickup    unix  n       -       y       60      1       pickup
cleanup   unix  n       -       y       -       0       cleanup
qmgr      unix  n       -       n       300     1       qmgr
#qmgr     unix  n       -       n       300     1       oqmgr
tlsmgr    unix  -       -       y       1000?   1       tlsmgr
rewrite   unix  -       -       y       -       -       trivial-rewrite
bounce    unix  -       -       y       -       0       bounce
defer     unix  -       -       y       -       0       bounce
trace     unix  -       -       y       -       0       bounce
verify    unix  -       -       y       -       1       verify
flush     unix  n       -       y       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
smtp      unix  -       -       y       -       -       smtp
relay     unix  -       -       y       -       -       smtp
#       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq     unix  n       -       y       -       -       showq
error     unix  -       -       y       -       -       error
retry     unix  -       -       y       -       -       error
discard   unix  -       -       y       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp      unix  -       -       y       -       -       lmtp
anvil     unix  -       -       y       -       1       anvil
scache    unix  -       -       y       -       1       scache

#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent.  See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop  unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
#   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
#  mailbox_transport = lmtp:inet:localhost
#  virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus     unix  -       n       n       -       -       pipe
#  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix  -       n       n       -       -       pipe
#  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp      unix  -       n       n       -       -       pipe
  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail    unix  -       n       n       -       -       pipe
  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp     unix  -       n       n       -       -       pipe
  flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix  -       n       n       -       2       pipe
  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman   unix  -       n       n       -       -       pipe
  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}

spamassassin unix -     n       n       -       -       pipe
  user=spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}

policyd-spf  unix  -       n       n       -       0       spawn
  user=policyd-spf argv=/usr/bin/policyd-spf

Mounting a cifs share dir_mode and file_mode are being ignored

Posted: 20 May 2022 11:06 AM PDT

I have an Ubuntu Server that is trying to mount a Windows Server shared folder.

To be clear, what I'm trying to do is mount the share as the user in the credentials file, so that all actions take place as that user regardless of which user on the Ubuntu server is accessing the files.

/etc/fstab

//server/data /media/data cifs credentials=/root/.credentials,iocharset=utf8,sec=ntlm,noexec,dir_mode=0770,file_mode=0660,rw 0 0  

mount -l

//server/data on /media/data type cifs (rw,noexec,relatime,vers=1.0,sec=ntlm,cache=strict,username=windows_user,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.50.31,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=61440,wsize=65536,echo_interval=60,actimeo=1)  

I can read/write as the root user but I can only read as any other user on the Ubuntu server. How can I give other users (like my web server user) write access to this cifs share?
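One hedged sketch of a fix (the uid/gid values below are assumptions; 33 is www-data on Debian/Ubuntu, adjust for your system). Note that the mount output above shows the default file_mode=0755/dir_mode=0755 rather than the values in fstab, which suggests the share was mounted before the fstab entry was edited, so it needs to be unmounted and remounted for the options to apply:

//server/data /media/data cifs credentials=/root/.credentials,iocharset=utf8,sec=ntlm,noexec,uid=33,gid=33,dir_mode=0770,file_mode=0660,noperm,rw 0 0

Then re-apply with `umount /media/data && mount -a`. With uid=/gid= all files appear owned by that account, and noperm disables client-side permission checks, so any local user allowed by the modes (e.g. members of the www-data group under file_mode=0660) can write.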

Azure DNS with GoDaddy

Posted: 20 May 2022 11:06 AM PDT

I purchased my domain from GoDaddy. I have hosted my website on Azure VMSS behind an Azure Application Gateway. In Azure DNS, I created the zone for my website, and in GoDaddy I added the name servers that I got from Azure DNS. But my site is not accessible, and when I try nslookup "site.com" it gives the error "** server can't find site.com: NXDOMAIN". Please help.
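For reference, a sketch of what a working setup would look like (the name servers and IP below are placeholders; use the four name servers listed on your own Azure DNS zone). NXDOMAIN usually means either the GoDaddy delegation has not propagated yet, or the zone has no record at the apex:

; At GoDaddy: set the domain's name servers to the four Azure DNS
; servers shown on the zone, e.g.
;   ns1-01.azure-dns.com.
;   ns2-01.azure-dns.net.
;   ns3-01.azure-dns.org.
;   ns4-01.azure-dns.info.
;
; In the Azure DNS zone, the apex needs an A record pointing at the
; Application Gateway's public IP (placeholder below):
site.com.       3600  IN  A      203.0.113.10
www.site.com.   3600  IN  CNAME  site.com.

You can test the zone itself before delegation propagates by querying one of the Azure name servers directly, e.g. `nslookup site.com ns1-01.azure-dns.com`.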

Unable to retrieve users from AD on VMWare vcenter

Posted: 20 May 2022 02:06 PM PDT

I have added the AD successfully from the vcenter console and also configured the Single Sign On.

[screenshot: Single Sign On config]

When I try to retrieve the users from the AD, I first set the domain in the Users and Groups tab, then after a while I get the following error:

[screenshot of the error]

So, I attempted to do the same procedure using the Desktop client.

[screenshot]

But, I get the following error.


Call "UserDirectory.RetrieveUserGroups" for object "UserDirectory" on vCenter Server "gspsec-vcenter" failed.

I cannot understand what I am doing wrong. There is no firewall in between; in fact, the AD is a VM on the same ESXi host that the vCenter is managing.

The version is 5.5.0.5101 Build 1398493. I restarted the complete appliance after configuring the AD auth, as that was recommended.

Can't connect to Ubuntu server on LAN from pfSense VPN

Posted: 20 May 2022 12:00 PM PDT

Quick summary

pfSense server is connected to the WAN and LAN. This box also has an OpenVPN server running.

  • The LAN clients use 192.168.20.0/24
  • OpenVPN clients use 192.168.30.0/24

On the LAN I have two servers, one running Ubuntu (15.10) and one running OS X (10.11 with Server).

  • OS X server is at 192.168.20.10 (static DHCP assignment)
  • Ubuntu server is at 192.168.20.12 (static DHCP assignment)

Problem

When connected via the VPN, I can ping, traceroute and generally access the OS X server fine. However, the Ubuntu server just times out (no ping, and the traceroute stops at 192.168.30.1).

I've confirmed this problem using both ping and traceroute tools from pfSense as well. I can hit both servers with a LAN source, but only OS X with an OpenVPN source.

This led me to believe it's an issue with Ubuntu, so I temporarily disabled UFW and enabled IP forwarding. Neither fixed it (not that I expected them to, but I'm grasping at straws at this point).

More details about the VPN setup

Tunnel settings

  • IPv4 Tunnel Network = 192.168.30.0/24
  • Redirect Gateway = TRUE
  • IPv4 Local network(s) = 192.168.20.0/24
  • Type-of-Service = FALSE
  • Duplicate Connection = FALSE

Client settings

  • Dynamic IP = FALSE
  • DNS Default Domain = TRUE, internal
  • DNS Server enable = TRUE, 192.168.20.1 (pfSense address, which is running the DHCP server and DNSMasq)

Conclusion

The part I can't wrap my head around is why it works for one server, but not the other. I suspect something is wrong with the Ubuntu setup, but I can't put my finger on what. Any thoughts on what I'm missing here, or where I should be looking?

Update 1

I've also made sure that unbound on the pfSense box explicitly allows DNS traffic between 192.168.30.0/24 and 192.168.20.0/24. I've also confirmed the firewall and gateway rules allow traffic between these two subnets. Still can't access the ubuntu server from the VPN either directly to the IP or via a domain lookup. However, both methods work for the OS X box.

Update 2

I've found that I can ping the OpenVPN gateway 192.168.30.1 from OS X; however, it times out from Ubuntu. I suspect this means there's something wrong with the routing table on the Ubuntu side, because it doesn't appear to be communicating with the VPN subnet the way OS X is.

Update 3

After more hours than I care to admit, I found the solution. I was missing the bloody route to the VPN subnet (I assume OS X just falls back to the main gateway when in doubt or something, which is why I didn't have to add a route there).

So, this fixed everything from the Ubuntu server side.

sudo ip route add 192.168.30.0/24 via 192.168.20.1

Once that was fixed, everything worked like a charm. Many thanks to this issue as well for pointing me the right direction.
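For anyone wanting the route to survive a reboot: Ubuntu 15.10 uses ifupdown, so the route can go in /etc/network/interfaces. A sketch assuming the interface is eth0 and the addressing from the question:

# /etc/network/interfaces (interface name assumed)
auto eth0
iface eth0 inet static
    address 192.168.20.12
    netmask 255.255.255.0
    gateway 192.168.20.1
    # return route for OpenVPN clients via the pfSense box
    up ip route add 192.168.30.0/24 via 192.168.20.1

If the default gateway already pointed at pfSense, the extra route would normally be redundant; the fact that adding it fixed things suggests this box's default route went elsewhere.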

SERVER2012 R2 Core access denied when deploying domain controller from remote system

Posted: 20 May 2022 01:01 PM PDT

I have installed Windows Server 2012 R2 Core edition and want to promote it to be my first domain controller. I intended to do this with Server Manager installed on a client computer.

I connected to the server with Server Manager and was able to install the AD DS and DNS roles. After installation, however, when I try to deploy the domain controller I get an error:

An unexpected error occurred while trying to configure AD DS. Error encountered while trying to get the domain information. Access is denied

This happens while using the same user that was used to install the roles.

The server IP has been set to static, the DNS server points to itself, client and server are part of the same workgroup. I tried with RSAT tools in both Windows 8.1 and 10.

As a stopgap I installed Server Manager on the server itself, and from there I can deploy the domain controller. However, I would like to understand why it does not work from the remote system.

All is done in virtual machines (not actual hardware) and I can therefore go back to the state before deployment.

How do iptables work with NFQ in terms of traffic shaping in snort?

Posted: 20 May 2022 04:04 PM PDT

I'm trying to understand how iptables and NFQ work together with snort.

The reason I ask is that, from what I understand, Snort can be set to IPS mode via NFQ, but iptables already provides firewall rules of its own. Hence my question, since what I'm trying to do is drop packets that match the rule below (split for readability):

drop tcp any any -> $HOME_NET 80 \
    (flags:S; msg:"Possible TCP Dos Be Careful !!"; flow:stateless; \
    detection_filter: track by_dst, count 70, seconds 10; \
    sid:10001; rev:1;)

The caveat is that iptables also seems able to drop packets based on a rule, so if that's true, how does it all work together with respect to the configuration I use when running Snort (see below)?

vim /usr/local/snort/etc/snort.conf
config daq: nfq
config daq_mode: inline
config daq_var: queue=0

iptables --append FORWARD --jump NFQUEUE --queue-num 0

/usr/local/snort/bin/snort -m 027 -d -l /var/log/snort \
  -u snort -g snort -c /usr/local/snort/etc/snort.conf \
  -Q -S HOME_NET=[192.168.1.0/24]
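To sketch the interaction (the rule set below is illustrative, not from the original setup): iptables is evaluated first, and only packets that reach the NFQUEUE target are handed to userspace, where Snort in inline mode issues the final ACCEPT/DROP verdict per packet. Rules earlier in the chain short-circuit Snort entirely:

# Packets matching earlier rules never reach Snort:
iptables --append FORWARD --protocol tcp --dport 22 --jump ACCEPT   # bypasses Snort
# Everything else is queued to userspace (queue 0, matching "config daq_var:
# queue=0"); a matching Snort "drop" rule makes Snort return NF_DROP:
iptables --append FORWARD --jump NFQUEUE --queue-num 0
# Optional: --queue-bypass lets traffic through if Snort is not running,
# instead of it being dropped because nothing consumes the queue:
#   iptables --append FORWARD --jump NFQUEUE --queue-num 0 --queue-bypass

So the two layers do not conflict: iptables decides *which* traffic Snort sees, and Snort decides the fate of that traffic.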

Configure Centos7 Apache 2.4 php-fpm to run as user

Posted: 20 May 2022 02:06 PM PDT

I would like to configure a CentOS 7 Apache 2.4 Linode to use php-fpm to execute PHP as the file owner. The docs for the earlier CentOS 6 / Apache 2.2 setup don't work, and the configurations I have seen for setting up LAMP servers on CentOS 7 just run PHP as the apache user. Are there any good tutorials for this, or can someone provide the configuration files and virtual host directives needed to do so? Thanks.
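A hedged sketch of the usual pattern, one php-fpm pool per site owner (the pool name, user, paths, and port below are placeholders). CentOS 7's stock httpd 2.4.6 predates `SetHandler "proxy:unix:..."` support (added in 2.4.10, later backported in some updates), so the portable route is a TCP socket with ProxyPassMatch:

; /etc/php-fpm.d/siteowner.conf (names and port are assumptions)
[siteowner]
user = siteowner
group = siteowner
listen = 127.0.0.1:9001
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 4

# Virtual host directives (works on the stock httpd 2.4.6; requires
# mod_proxy_fcgi, which CentOS 7 ships)
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
    ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/var/www/example/$1
</VirtualHost>

Each additional site gets its own pool file with a different user and port, so PHP for each vhost runs as that file owner.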

How do I upgrade an end-of-life Ubuntu distribution?

Posted: 20 May 2022 03:22 PM PDT

My do-release-upgrade fails. I get the following error message:

Checking for a new Ubuntu release
Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife

Err Upgrade tool signature
  404  Not Found [IP: 2001:67c:1360:8c01::19 80]
Err Upgrade tool
  404  Not Found [IP: 2001:67c:1360:8c01::19 80]
Fetched 0 B in 0s (0 B/s)
WARNING:root:file 'raring.tar.gz.gpg' missing
Failed to fetch
Fetching the upgrade failed. There may be a network problem.

So it says my version is not supported anymore. I have Ubuntu Quantal (12.10). What should I do now?
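The usual recovery (hedged; the release name is taken from the question) is to point apt at the old-releases archive, where EOL repositories are moved, bring the system current there, and only then run do-release-upgrade:

# /etc/apt/sources.list for an EOL release (quantal shown)
deb http://old-releases.ubuntu.com/ubuntu/ quantal main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ quantal-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ quantal-security main restricted universe multiverse

After `apt-get update && apt-get dist-upgrade`, `do-release-upgrade` targets the next release (13.04 "raring", itself EOL), so the process may need to be repeated release by release until a supported version is reached.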

Redirecting www.subdomain.domain.com to domain.com with htaccess

Posted: 20 May 2022 04:04 PM PDT

So I'm trying to get my server configured in a specific way so that anyone who visits http://www.subdomain.domain.com or https://www.subdomain.domain.com gets redirected to the https:// version of the site without the www.

What .htaccess rules would I need to achieve this?
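A sketch of one way to do it with mod_rewrite (untested against this setup; assumes mod_rewrite is enabled and the certificate covers the www host, otherwise browsers reject the first redirect before it happens):

RewriteEngine On

# Drop the "www." prefix from any host and force HTTPS in a single 301
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [R=301,L]

# Force HTTPS for hosts that already lack the www prefix
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

The first rule captures everything after "www." in %1, so http(s)://www.subdomain.domain.com/path lands on https://subdomain.domain.com/path in one hop.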

NGINX Reverse Proxy Sharepoint 2010 authentication fails

Posted: 20 May 2022 03:01 PM PDT

When presented with the Windows forms-based authentication, after entering the user's credentials I am prompted again for a username and password. It just keeps prompting, and I see no errors in the logs that would help. I suspect the Microsoft side may be logging errors, but I do not have access to that server.

I am sure this is a common issue. Can anyone give me some pointers?

My config:

server {
        listen       x.x.x.x:443;
        server_name  mle.x.x.co.uk;

        # Enable SSL
        ssl                     on;
        ssl_certificate         /etc/nginx/ssl/certs/x.x.co.uk-cert.pem;
        ssl_certificate_key     /etc/nginx/ssl/private/x.x.co.uk-key.pem;
        ssl_session_timeout     5m;

        # Set global proxy settings
        proxy_read_timeout      360;

        proxy_pass_header       Date;
        proxy_pass_header       Server;

        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        Authorization $http_authorization;
        proxy_pass_request_headers on;
        proxy_pass_header       Authorization;

        location / {
                proxy_pass      https://1.1.1.1/;
                allow all;
        }

        error_log  /var/log/nginx/mle.x.x.co.uk-error.log;
        access_log /var/log/nginx/mle.x.x.co.uk-access.log;

        error_page 500 502 503 504 /500.html;
        location = /500.html {
                root /var/www/errorpages;
        }
}
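One hedged avenue: if the endless prompting turns out to be NTLM rather than true forms-based auth, note that NTLM authenticates the TCP connection, not individual requests, so the proxy must pin each client to one upstream connection. A sketch of the relevant directives (the `ntlm` directive exists only in the commercial NGINX Plus; open-source nginx cannot guarantee this persistence):

upstream sharepoint {
    server 1.1.1.1:443;
    ntlm;           # NGINX Plus only
    keepalive 16;
}

server {
    location / {
        proxy_pass https://sharepoint;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

If it really is forms-based authentication, checking SharePoint's Alternate Access Mappings for the proxied host name is another common avenue.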

Factory Reset Cisco SPA504G without admin password

Posted: 20 May 2022 02:05 PM PDT

I've been trying to factory reset some SPA50x phones, specifically the 504G but the old provider has locked out everything.

I need to reset about 50 phones for use with a new service. I've man-in-the-middled their provisioning server, but the phone profiles are compiled with SPC.

I've found a reference to how the old provider created the compiled profiles, and I have recompiled a new profile overriding the Admin_Passwd in the config file, but the phone simply complained that the config file was corrupted.

The phone is configured for SIP, but I tried connecting it to a UC540 to see what would happen. The phone can be re-provisioned against it, but I still can't reset it without the admin password. This was just for testing anyway, since I actually need the phones connected to Asterisk.

I am very close to opening the phone and looking for a JTAG port or some other way to reset these phones. I have a single phone on my desk right now that I can play with, and I am hoping to find a repeatable solution.

Any advice would be great.

Bad IIS 7.5 performance on webserver

Posted: 20 May 2022 03:01 PM PDT

I have a webpage (ASP.NET 4.0 / MVC 4).

On my development machine (i5-2500 3.3 GHz, 8 GB, Win7, VS2010 SP1, Fujitsu Esprimo P700) the page serves 160 requests/sec on the Visual Studio development web server and 250 requests/sec on my local IIS 7.5 (uncompiled web).

The same page manages only 20 requests/sec on the 16-core, 32 GB RAM production server (Fujitsu RX-300, Windows Server 2008 R2, IIS 7.5) (compiled web).

Why? I think it's the IIS configuration, but I can't figure out what the problem is. The page runs with one worker process on both machines. A web garden is not an option (it helps, but the app isn't compatible with it).

EDIT:

The driver versions of http.sys and tcpip.sys are the same on prod and dev. The tests were always run on the machine itself against localhost. CPU usage on prod is 95% at 20 req/sec; on dev it is 80% at 250 req/sec (32 threads). There is no DB or I/O involved in this test. I opened the server, and yes, there really are 16 Xeon cores in the prod box.

IIS returns 302 when not on local host

Posted: 20 May 2022 01:01 PM PDT

I am working on an asp.net page that handles paypal IPNs (instant payment notification).

For those who don't know how IPNs work, I'll explain. When some kind of transaction occurs on PayPal's servers, PayPal sends a POST message back to a certain page on my server. If customer A uses PayPal, PayPal lets us know so we can keep records of transactions that don't actually occur on our machines.

My box has proper port forwarding. PayPal sends data to http://www.companyurl.com:2343/ProcessPaypal.ashx, where port 2343 redirects to my box running IIS. This setup was working fine yesterday. Today, however, I stopped getting any of my test IPNs.

Running wireshark, it looks like my box is receiving those IPNs, but returning 302s that look like this: http://www.companyurl.com/ProcessPaypal.ashx (notice the lack of a port number).

My question is this: is there a way to tell my computer not to 302 the IPNs and just process them as it should? As far as I know I didn't change any config files. Also, I can access these pages fine on localhost.
