Sunday, July 24, 2022

Recent Questions - Server Fault

Kubernetes Ingress auth-signin redirect to different port

Posted: 23 Jul 2022 10:04 PM PDT

The redirect goes only to the host (i.e. metrics.staging.com), but I want it to redirect to metrics.staging.com:9099. Does anyone know how to redirect to a host:port combination?

My ingress annotations:

nginx.ingress.kubernetes.io/auth-signin: https://oauth2-qa.staging.com:9043/oauth2/start?rd=$scheme://$host$request_uri
nginx.ingress.kubernetes.io/auth-url: https://oauth2-qa.staging.com:9043/oauth2/auth

My whitelist-domain and cookie-domain are both set to .staging.com.

I need something like:

nginx.ingress.kubernetes.io/auth-signin: https://oauth2-qa.staging.com:9043/oauth2/start?rd=$scheme://$host:9099$request_uri

ingress-nginx version 4.2.0.

Any help is much appreciated!

ansible find: get path of a directory from register

Posted: 23 Jul 2022 07:53 PM PDT

Thanks in advance for any assistance. I can't seem to figure out what I am doing wrong, hence why I am seeking some help. I want to search for a folder with Ansible, locate it, and copy its contents to another directory. This is what I have so far; I think I am stuck in the with_items section.

- name: Folder find and file copy
  hosts: "{{ target }}"
  gather_facts: no

  vars:
    search_path: ~/oldfolder/backups
    id: patient_1234
    dest: "~/newfolder/{{ id }}"

  tasks:
    - name: Find directory using patterns
      ansible.builtin.find:
        paths: "{{ search_path }}/"
        file_type: directory
        patterns: "{{ id[:-4] }}*"
        recurse: yes
      register: find_matches

    - name: Print return information from the previous task
      ansible.builtin.debug:
        var: find_matches.files[0].path
      when: find_matches is defined

    - name: Copy from backup to destination
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "{{ dest }}"
        remote_src: yes
      with_items: "{{ find_matches.files }}"
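For the copy step, a minimal sketch, assuming the goal is to copy the contents of each matched directory rather than the directory itself (the trailing slash on src is the detail worth checking against the ansible.builtin.copy docs; directory copies with remote_src need Ansible 2.8+):

- name: Copy contents of each matched backup directory to the destination
  ansible.builtin.copy:
    src: "{{ item.path }}/"   # trailing slash: copy the directory's contents
    dest: "{{ dest }}"
    remote_src: yes           # both paths live on the managed host
  loop: "{{ find_matches.files }}"
  when: find_matches.files | length > 0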

Can Intune be used to decrypt TLS/SSL web traffic on my managed Mac?

Posted: 23 Jul 2022 08:31 PM PDT

I'm curious to know how much of my personal activity I should run through my new workplace-managed Mac.


I notice Intune installed a number of certificates and keys. Is this effectively MitM for all my HTTPS web traffic? How do I know what can be decrypted and what is safe?

Emails to all Gmail Accounts bounce back (Error 550-5.7.1)

Posted: 23 Jul 2022 06:39 PM PDT

Our mail server uses cPanel, and we were able to send emails to Gmail accounts until July 8th, 2022. Since then, our emails to all Gmail accounts, whether personal or business accounts, have been bouncing back with the same error message:

host gmail-smtp-in.l.google.com [173.194.76.27]
SMTP error from remote mail server after end of data:
550-5.7.1 [31.210.79.7      12] Our system has detected that this message is
550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail,
550-5.7.1 this message has been blocked. Please visit
550-5.7.1  https://support.google.com/mail/?p=UnsolicitedMessageError
550 5.7.1  for more information. e13-20020a5d6d0d000000b0021d1a34e643si8109664wrq.1029 - gsmtp

All emails to other email servers work well; only Gmail is affected. We have valid DKIM, SPF and DMARC records, shown below:

SPF:

v=spf1 +a +mx +ip4:31.210.79.7 ~all  

DKIM:

v=DKIM1;  k=rsa;  p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDXnyVc+eL+tgXpawFuKH+zNDBIAFk8qsX/R/YA2uMZDTTjZqhWDqat3k9dmqYRf141iknu2ppni9i2tkrAZv4PqIWp8lpNQNNT0V6C1zNMfssX5e3+ub1hjFUXPdYI0bqMNyGrRFI4pvAIKOZa89Lw2DP0FbVFhaIZuLkRk08JXwIDAQAB  

DMARC:

v=DMARC1; p=none  

PTR is valid. We have not changed any setting recently. We are not listed on any SPAM lists.
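As a sanity check, the published records can be verified from outside with dig (a sketch; replace yourdomain.com, and note that the DKIM selector default is an assumption — use whatever selector cPanel actually configured):

dig TXT yourdomain.com +short                     # SPF record
dig TXT default._domainkey.yourdomain.com +short  # DKIM (selector name assumed)
dig TXT _dmarc.yourdomain.com +short              # DMARC policy
dig -x 31.210.79.7 +short                         # PTR of the sending IP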

What we have done until now:

  • We have sent requests to Gmail using their form to unblock our mail server.
  • We have activated our Gmail Postmaster account, but we cannot see any activity on the account because it is new and there are currently no exchanges between our mail server and Gmail.

Any help to resolve the issue would be appreciated because we have not been able to email any Gmail accounts and we cannot figure out why!

rsync: connection unexpectedly closed

Posted: 23 Jul 2022 06:35 PM PDT

I get an error from rsync after updating my command to use the iconv option. I've searched everywhere but haven't found a solution.

The command is:

rsync -rltPDvhbz -e 'ssh -vvvv' --progress --iconv=utf-8-mac,utf-8 --exclude={'.Spotlight*', '.TemporaryItems', '.Trashes'} --delete --backup-dir=/Volumes/wd/meumundo/backup/  user@192.168.0.7:/mnt/nas/ /Volumes/wd/meumundo/original/  
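A side note worth checking: bash brace expansion does not allow unquoted spaces, so --exclude={'.Spotlight*', '.TemporaryItems', '.Trashes'} with spaces after the commas is passed to rsync literally instead of expanding into three separate --exclude options. A sketch of the spacing that does expand:

# Spaces removed so the shell expands this into three --exclude options
rsync -rltPDvhbz -e 'ssh' --progress --iconv=utf-8-mac,utf-8 \
  --exclude={'.Spotlight*','.TemporaryItems','.Trashes'} \
  --delete --backup-dir=/Volumes/wd/meumundo/backup/ \
  user@192.168.0.7:/mnt/nas/ /Volumes/wd/meumundo/original/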

I run the command on a Mac; the origin is a Raspberry Pi (kernel 5.15.32-v8+) and the destination is an external exFAT disk.

I tried to get the same version on both machines, but the Mac version from brew is newer:

  • Mac:
rsync  version 3.2.4  protocol version 31
Copyright (C) 1996-2022 by Andrew Tridgell, Wayne Davison, and others.
Web site: https://rsync.samba.org/
Capabilities:
    64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
    socketpairs, symlinks, symtimes, hardlinks, hardlink-specials,
    hardlink-symlinks, IPv6, atimes, batchfiles, inplace, append, ACLs,
    xattrs, optional protect-args, iconv, no prealloc, stop-at, crtimes,
    file-flags
Optimizations:
    no SIMD-roll, no asm-roll, openssl-crypto, no asm-MD5
Checksum list:
    xxh128 xxh3 xxh64 (xxhash) md5 md4 none
Compress list:
    zstd lz4 zlibx zlib none

rsync comes with ABSOLUTELY NO WARRANTY.  This is free software, and you
are welcome to redistribute it under certain conditions.  See the GNU
General Public Licence for details.
  • raspberry:
rsync  version 3.2.3  protocol version 31
Copyright (C) 1996-2020 by Andrew Tridgell, Wayne Davison, and others.
Web site: https://rsync.samba.org/
Capabilities:
    64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
    socketpairs, hardlinks, hardlink-specials, symlinks, IPv6, atimes,
    batchfiles, inplace, append, ACLs, xattrs, optional protect-args, iconv,
    symtimes, prealloc, stop-at, no crtimes
Optimizations:
    no SIMD, no asm, openssl-crypto
Checksum list:
    xxh128 xxh3 xxh64 (xxhash) md5 md4 none
Compress list:
    zstd lz4 zlibx zlib none

rsync comes with ABSOLUTELY NO WARRANTY.  This is free software, and you
are welcome to redistribute it under certain conditions.  See the GNU
General Public Licence for details.

Using verbose ssh, I get these logs:

debug2: channel 0: window 1990734 sent adjust 98226
debug2: channel 0: window 1990690 sent adjust 90078
debug2: channel 0: window 1982477 sent adjust 106483
debug2: channel 0: window 1990690 sent adjust 90078
debug2: channel 0: window 1982522 sent adjust 106438
debug2: channel 0: window 1997357 sent adjust 99795
debug2: channel 0: window 1997790 sent adjust 96134
debug2: channel 0: window 1990419 sent adjust 101054
debug2: channel 0: window 1991175 sent adjust 100128
debug2: channel 0: window 1992523 sent adjust 96437
debug2: channel 0: write failed
debug2: chan_shutdown_write: channel 0: (i0 o0 sock -1 wfd 5 efd 6 [write])
debug2: channel 0: send eow
debug3: send packet: type 98
debug2: channel 0: output open -> closed
rsync: connection unexpectedly closed (7994120 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at io.c(228) [generator=3.2.4]

Can anyone help me? Thanks.

Dell Tower 7820 and No Boot Media Found on Every Other Reboot

Posted: 23 Jul 2022 04:53 PM PDT

So this is weird, it really makes no sense, but here it goes.

So in our environment, we've been using USB media to image with an MECM-created WIM. On newer devices, drivers sometimes need updating. We've done a bunch of 5820s and a few 7820s. However, with this new version of the 7820, I couldn't get past installing the client device: it would jump to "No boot media" after the restart.

So I hop into the BIOS and switch from RAID to AHCI, which usually fixes these issues, but no dice.

So then I decided to try installing a vanilla copy of Windows using the media creation tool. At first it couldn't find the NVMe drive, but after loading the driver it found it and installed. The installation went smoothly, and I got to the desktop.

I joined the machine to our domain and allowed GPOs to install the CCM client and provision as necessary. When it rebooted again, I got "No boot device found". I thought that was strange, and rebooted the machine. Then it went into Windows just fine. After allowing updates and more software to install, I decided to do another reboot. Same thing: no boot device found, but works when rebooting again.

Even though the device is usable, this obviously won't do when working remotely. I wanted to see if anyone else is running into the same issue.

On the T7820, there is a 1 TB NVMe drive (KXG70ZNV1T02) in the FlexBay config and a 2 TB drive, running Windows 10 19044.1826. It has all the latest drivers and updates (including BIOS) from Dell, applied via the Dell Command Update tool.

Truly appreciate any assistance. Thanks!

Cannot open Kubernetes dashboard page

Posted: 23 Jul 2022 02:42 PM PDT

I'm trying to install a Kubernetes cluster with the Dashboard on Ubuntu 20.04 LTS using the following commands:

swapoff -a
# Remove the following line from /etc/fstab:
# /swap.img       none    swap    sw      0       0

sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker

sudo apt install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt update
sudo apt install kubeadm kubelet kubectl kubernetes-cni

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

kubectl proxy --address 192.168.1.133 --accept-hosts '.*'

But when I open http://192.168.1.133:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy

I get:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "services"
  },
  "code": 404
}

I tried to list the pods:

root@ubuntukubernetis1:~# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS              RESTARTS       AGE
kube-flannel           kube-flannel-ds-f6bwx                        0/1     Error               11 (29s ago)   76m
kube-system            coredns-6d4b75cb6d-rk4kq                     0/1     ContainerCreating   0              77m
kube-system            coredns-6d4b75cb6d-vkpcm                     0/1     ContainerCreating   0              77m
kube-system            etcd-ubuntukubernetis1                       1/1     Running             1 (52s ago)    77m
kube-system            kube-apiserver-ubuntukubernetis1             1/1     Running             1 (52s ago)    77m
kube-system            kube-controller-manager-ubuntukubernetis1    1/1     Running             1 (52s ago)    77m
kube-system            kube-proxy-n6ldq                             1/1     Running             1 (52s ago)    77m
kube-system            kube-scheduler-ubuntukubernetis1             1/1     Running             1 (52s ago)    77m
kubernetes-dashboard   dashboard-metrics-scraper-7bfdf779ff-sdnc8   0/1     Pending             0              75m
kubernetes-dashboard   dashboard-metrics-scraper-8c47d4b5d-2sxrb    0/1     Pending             0              59m
kubernetes-dashboard   kubernetes-dashboard-5676d8b865-fws4j        0/1     Pending             0              59m
kubernetes-dashboard   kubernetes-dashboard-6cdd697d84-nmpv2        0/1     Pending             0              75m
root@ubuntukubernetis1:~#
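One thing that stands out: the proxy URL above addresses the service in the default namespace, while the recommended.yaml manifest creates everything in the kubernetes-dashboard namespace (as the pod list confirms). A quick check sketch:

# Where does the dashboard service actually live?
kubectl get svc --all-namespaces | grep dashboard

# The usual proxy path for the recommended.yaml deployment references
# the kubernetes-dashboard namespace:
# http://192.168.1.133:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/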

Do you know how I can fix the issue?

How can I make TPROXY option in iptables work when the destination proxy address is a non-local one?

Posted: 23 Jul 2022 02:03 PM PDT

I installed a TPROXY server in my router that forwards the traffic to a SOCKS5 server.

The router has the address 192.168.1.1 and my PC has 192.168.1.33. I also have a local bridge "virbr0" on the PC side that forwards traffic to a virtual machine; its gateway address is 192.168.11.1 and the peer address is 192.168.11.2.

On the PC side:

ip rule add fwmark 1088 table 100
ip route add local default dev virbr0 table 100
iptables -t mangle -A PREROUTING -i virbr0 -p tcp -j TPROXY -s 192.168.11.2 --on-ip 192.168.0.1 --on-port 1088 --tproxy-mark 1088

When I try to curl any IP from the virtual machine side (192.168.11.2), I get timeouts; looking at the Wireshark logs, no packets are forwarded from my PC to the router.

And when I change the address of "--on-ip" to 127.0.0.1 and run the TPROXY server locally listening on 127.0.0.1:1088 everything works ok.

How can I make the TPROXY option in iptables "see" the external address of the router (192.168.1.1) and connect?

PS.: I don't know if TPROXY was designed to work with non-local addresses when sending the packets, but I searched a lot in Google and I could see examples of TPROXY using non-local addresses, but when I try to reproduce the examples, nothing works.
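For context, the kernel's tproxy documentation describes TPROXY as diverting packets to a socket on the local machine, which would explain why --on-ip 127.0.0.1 works: the usual pattern keeps the listener local and lets that local proxy open its own outbound connection to the router / SOCKS5 server. A sketch of that conventional ruleset:

# Canonical local-socket TPROXY setup (Documentation/networking/tproxy.txt):
# marked packets loop to a local listener, which then talks to the router itself
ip rule add fwmark 1088 table 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -A PREROUTING -i virbr0 -p tcp -s 192.168.11.2 \
  -j TPROXY --on-ip 127.0.0.1 --on-port 1088 --tproxy-mark 1088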

openstack magnum error when creating cluster

Posted: 23 Jul 2022 01:47 PM PDT

I create a cluster via this cluster template:

openstack coe cluster template create \
  --coe "Kubernetes" \
  --image "fedora-coreos" \
  --flavor "g1.medium" \
  --master-flavor "g1.medium" \
  --volume-driver cinder \
  --docker-storage-driver overlay2 \
  --external-network "External_Net" \
  --floating-ip-enabled \
  --network-driver flannel \
  --docker-volume-size 10 \
  --dns-nameserver 8.8.8.8 \
  --labels="container_runtime=containerd,cinder_csi_enabled=true,cloud_provider_enabled=true" \
  --http-proxy "" \
  --https-proxy "" \
  --no-proxy "" \
  $TEMPLATE_NAME

When I create a cluster, I get the below error:

Exception during  message handling: magnum.common.exception.GetDiscoveryUrlFailed: Failed to get discovery url from 'https://discovery.etcd.io/new?size=1'.  

I set the HTTP proxy and everything looks good. Any suggestions?
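The error is raised when the magnum conductor itself cannot fetch a token from the public etcd discovery service, so the in-cluster proxy settings don't help; connectivity has to exist from the controller node. A sketch of the check, plus the option of supplying a discovery URL explicitly at create time (hedged — confirm the flag with openstack coe cluster create --help on your release):

# From the magnum conductor/controller node:
curl -s "https://discovery.etcd.io/new?size=1"

# If outbound access only exists elsewhere, generate the URL there and pass it in:
DISCOVERY_URL=$(curl -s "https://discovery.etcd.io/new?size=1")
openstack coe cluster create --cluster-template $TEMPLATE_NAME \
  --discovery-url "$DISCOVERY_URL" my-cluster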

What is the difference between DNS Lookup and DNS Resolution?

Posted: 23 Jul 2022 03:40 PM PDT

I have gone through many websites, tutorials, documentation pages and personal blogs; yet I couldn't find an exact and clear (scientific) definition of, or distinction between, these two:

  • DNS Lookup process
  • DNS Resolution process

My understanding (based on the etymology of the words) is that lookup is the process of taking a domain name and looking up its respective IP address (or perhaps vice versa), whereas resolution is the process of translating one into the other.

Still, even these two definitions confuse me because, at the end of the day, even if my understanding is correct, a lookup would still need a resolution process, which makes the two effectively synonymous and interchangeable.
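One concrete way to see the two notions side by side is dig (example.com is a placeholder): the first command is a plain lookup answered by whatever resolver you're configured to use, while the second shows the recursive resolution work that resolver performs behind the scenes.

# Lookup: ask the configured resolver, get the final answer
dig example.com A +short

# Resolution: walk the delegation chain from the root servers
# down to the authoritative name server
dig example.com A +trace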

Am I confused? or am I correct? or am I missing something important?

Thank you!

Ansible does not load environment variables from .bashrc

Posted: 23 Jul 2022 05:03 PM PDT

I want to preload variables from a .bashrc file in an Ansible playbook.

I tried these ways:

- hosts: my_host
  tasks:
    - name: Display environment variables
      shell: |
        . ./.env_file_name && env

    - name: Do another action
      shell: |
        . ./.env_file_name && do_something_else

Another way:

- hosts: "{{ host }}"
  tasks:
    - name: source bashrc file
      shell: . /home/user/.bashrc && env
      register: env_file_result

    - name: Show
      debug:
        msg: "{{ env_file_result.stdout_lines }}"

Both return this:

TASK [source bashrc file] ************************************************************************************************************************************************************************************
task path: /home/srvadm/playbooks/hello.yml:3
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
Pipelining is enabled.
<XX.XX.XX.XX> ESTABLISH SSH CONNECTION FOR USER: user
<XX.XX.XX.XX> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="user"' -o ConnectTimeout=10 -o ControlPath=/home/srvadm/.ansible/cp/d9553c19b6 XX.XX.XX.XX '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<XX.XX.XX.XX> (0, b'\n{"changed": true, "end": "2021-03-12 11:56:15.596390", "stdout": "MAIL=/var/mail/user\\nSSH_CLIENT=XX.XX.XX.XX 41318 22\\nUSER=user\\nSHLVL=1\\nHOME=/home/user\\nLC_CTYPE=C.UTF-8\\nLOGNAME=user\\n_=/bin/sh\\nXDG_SESSION_ID=35493\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games\\nXDG_RUNTIME_DIR=/run/user/1000\\nLANG=en_US.UTF-8\\nSHELL=/bin/bash\\nPWD=/home/user\\nSSH_CONNECTION=XX.XX.XX.XX 41318 XX.XX.XX.XX 22", "cmd": ". /home/user/.bashrc && env", "rc": 0, "start": "2021-03-12 11:56:15.593574", "stderr": "", "delta": "0:00:00.002816", "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": true, "strip_empty_ends": true, "_raw_params": ". /home/user/.bashrc && env", "removes": null, "argv": null, "warn": true, "chdir": null, "stdin_add_newline": true, "stdin": null}}}\n', b'')
changed: [XX.XX.XX.XX] => {
    "changed": true,
    "cmd": ". /home/user/.bashrc && env",
    "delta": "0:00:00.002816",
    "end": "2021-03-12 11:56:15.596390",
    "invocation": {
        "module_args": {
            "_raw_params": ". /home/user/.bashrc && env",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "rc": 0,
    "start": "2021-03-12 11:56:15.593574",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "MAIL=/var/mail/user\nSSH_CLIENT=XX.XX.XX.XX 41318 22\nUSER=user\nSHLVL=1\nHOME=/home/user\nLC_CTYPE=C.UTF-8\nLOGNAME=user\n_=/bin/sh\nXDG_SESSION_ID=35493\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games\nXDG_RUNTIME_DIR=/run/user/1000\nLANG=en_US.UTF-8\nSHELL=/bin/bash\nPWD=/home/user\nSSH_CONNECTION=XX.XX.XX.XX 41318 XX.XX.XX.XX 22",
    "stdout_lines": [
        "MAIL=/var/mail/user",
        "SSH_CLIENT=XX.XX.XX.XX 41318 22",
        "USER=user",
        "SHLVL=1",
        "HOME=/home/user",
        "LC_CTYPE=C.UTF-8",
        "LOGNAME=user",
        "_=/bin/sh",
        "XDG_SESSION_ID=35493",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games",
        "XDG_RUNTIME_DIR=/run/user/1000",
        "LANG=en_US.UTF-8",
        "SHELL=/bin/bash",
        "PWD=/home/user",
        "SSH_CONNECTION=XX.XX.XX.XX 41318 XX.XX.XX.XX 22"
    ]
}

TASK [Show] **************************************************************************************************************************************************************************************************
task path: /home/srvadm/playbooks/hello.yml:7
ok: [XX.XX.XX.XX] => {
    "msg": [
        "MAIL=/var/mail/user",
        "SSH_CLIENT=XX.XX.XX.XX YYYY 22",
        "USER=user",
        "SHLVL=1",
        "HOME=/home/user",
        "LC_CTYPE=C.UTF-8",
        "LOGNAME=user",
        "_=/bin/sh",
        "XDG_SESSION_ID=35493",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games",
        "XDG_RUNTIME_DIR=/run/user/1000",
        "LANG=en_US.UTF-8",
        "SHELL=/bin/bash",
        "PWD=/home/user",
        "SSH_CONNECTION=XX.XX.XX.XX YYYY XX.XX.XX.XX 22"
    ]
}
META: ran handlers
META: ran handlers

I got this "solution" from https://stackoverflow.com/questions/60209185/ansible-environment-variables-from-env-file, but it does not work as I expected. How can I preload these shell variables from the .bashrc file?
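Worth noting: each shell task runs in a fresh non-interactive shell, so sourcing .bashrc in one task cannot affect the next. Ansible's native mechanism for this is the environment keyword; a minimal sketch, assuming the variable values are known up front (they could also be parsed out of the registered env output first):

- hosts: my_host
  # Applies to every task in the play
  environment:
    MY_VAR: "some value"                          # hypothetical variable
    PATH: "/opt/tool/bin:{{ ansible_env.PATH }}"  # ansible_env needs fact gathering
  tasks:
    - name: This task sees MY_VAR without sourcing .bashrc
      shell: echo "$MY_VAR"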

Route all traffic through TUN interface

Posted: 23 Jul 2022 09:06 PM PDT

I want all my traffic to go through a TUN interface.

Here is the flowchart

So, as you can see, traffic from every program is routed to the TUN interface at the 10.0.0.1 address. The program attached to the TUN does something with the packets, and then they are sent to my router at 192.168.1.1. From there they're routed across the Internet (for example, to my proxy server, but that doesn't actually matter much for my problem).

So my goal is just to route traffic in that manner: $any_program <--> tunX <--> 192.168.1.1 (the router) (<--> thing means that traffic goes both in and out).

What I've done so far:

  1. First, I initialized the tunX device with this function:
int tun_open(char *device)
{
    struct ifreq ifr;
    int fd, err;

    fd = open("/dev/net/tun", O_RDWR);
    if (fd == -1)
    {
        perror("opening /dev/net/tun");
        exit(1);
    }

    memset(&ifr, 0, sizeof (ifr));
    ifr.ifr_flags = IFF_TUN;
    strncpy(ifr.ifr_ifrn.ifrn_name, device, IFNAMSIZ);

    err = ioctl(fd, TUNSETIFF, (void *) &ifr);
    if (err == -1)
    {
        perror("ioctl TUNSETIFF");
        close(fd);
        exit(1);
    }

    return fd;
}

And then:

tunfd = tun_open("tun6");  

Also, I enabled TUNSETPERSIST:

ioctl(tunfd, TUNSETPERSIST, 1);
  2. Then, I set up the device with the following commands:
$ sudo ip addr add 10.0.0.1/24 dev tun6
$ sudo ip link set tun6 up

The program reads from tunfd and prints the content. So far the only thing it reads is the following:

:B\{k
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 1
ST: urn:dial-multiscreen-org:service:dial:1
USER-AGENT: Google Chrome/86.0.4240.198 Linux

%   N%*.%K%M%P%M%M%M%HP%,%M%*K%(aP%>O%M%LqP%@K%`P%P%Ҵ u@=U繤湤}=UoK%0=U

ssdp:discover? Why is this getting through my tun interface?

Output of route -n:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    600    0        0 wlp2s0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 tun6
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlp2s0
192.168.1.0     0.0.0.0         255.255.255.0   U     600    0        0 wlp2s0

I've been playing around with iptables and ip route, but I'm kind of a newbie at all this. As far as I understand, iptables doesn't actually route packets but filters them (I may be very wrong). So is there a way to route the packets with ip route?
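That understanding is essentially right: iptables classifies and mangles packets, while the actual forwarding decision comes from the routing tables that ip route / ip rule manage. A minimal policy-routing sketch in the direction of the flowchart, assuming the TUN program marks its own outbound sockets (e.g. with SO_MARK) so its re-injected traffic can still escape via the real gateway; the mark and table numbers are arbitrary:

# Everything unmarked goes into tun6 via a dedicated table...
sudo ip route add default dev tun6 table 100
sudo ip rule add not fwmark 0x1 table 100

# ...while the TUN program sets mark 0x1 on its own sockets, so its
# processed packets follow the main table's default route to 192.168.1.1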

How can I allow http when on a specific subdomain with nginx? (HSTS)

Posted: 23 Jul 2022 10:05 PM PDT

I am trying to test my site on a staging site before making it live. Obviously it doesn't have the same certificate. When I try going in via the testing.domain.com subdomain, I get this error in Firefox:

SSL_ERROR_BAD_CERT_DOMAIN

testing.website.com has a security policy called HTTP Strict Transport Security (HSTS), which means that Firefox can only connect to it securely. You can't add an exception to visit this site.
upstream website {
    server 127.0.0.1:3000;
}

#prevent www
server {
    server_name www.website.com;
    return 301 $scheme://website.com$request_uri;
}

#redirect http to https
server {
    listen 80;
    listen [::]:80;
    server_name website.com;

    return 301 https://$host$request_uri;
}

#https
server
{
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name website.com;

    include /etc/nginx/config/sites/headers.conf;

    include /etc/nginx/config/ssl/resolver.conf;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem;

    include /etc/nginx/config/ssl/ssl.conf;

    location /
    {
        proxy_pass http://website;

        include /etc/nginx/config/proxy/proxy.conf;
    }

    #include /etc/nginx/config/cache/static.conf;
}

I added in this server block in the hopes that it would handle the HTTP requests coming from the testing subdomain:

#allow http through testing subdomain
server {
    listen 80;
    listen [::]:80;
    server_name testing.website.com;

    location /
    {
        proxy_pass http://website;
        include /etc/nginx/config/proxy/proxy.conf;
    }
}

And I found that under headers.conf there is a line that says

   add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";  

so I removed the includeSubDomains part in the hope that it would disable HSTS.

Even after these changes, it's still immediately redirecting from http://testing.website.com to https://testing.website.com and then giving me the HSTS error.

Every time I make changes, I do either nginx -s reload or reboot the whole server, but neither makes a difference.
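For what it's worth, HSTS is cached on the client: once Firefox has seen the includeSubDomains policy from the main site, it keeps enforcing it until the recorded max-age expires or the browser's HSTS state for the site is cleared, so server-side changes alone won't show up immediately. A hedged sketch of one common approach — serve the testing host over HTTPS (even self-signed, though the browser must accept the certificate for the header to be processed) and expire the policy there:

# max-age=0 tells the client to drop its stored HSTS entry for this host
server {
    listen 443 ssl;
    server_name testing.website.com;

    ssl_certificate     /etc/nginx/ssl/testing-selfsigned.crt;  # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/testing-selfsigned.key;  # hypothetical path

    add_header Strict-Transport-Security "max-age=0" always;

    location / {
        proxy_pass http://website;
        include /etc/nginx/config/proxy/proxy.conf;
    }
}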

Azure Storage Files read only access - what permissions should I give?

Posted: 23 Jul 2022 11:04 PM PDT

I'm trying to give my teammate access to backups kept on Azure Storage Files.

I gave him Reader on the resource group and Storage File Data SMB Share Reader on the file share resource. He's getting Access Denied (for the listKeys action). What did I miss?

(It works when I give him Contributor, but that's too much, of course.)
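One hedged suggestion: portal-driven access to file shares typically goes through the storage account keys, and the Microsoft.Storage/storageAccounts/listKeys/action permission is not part of plain Reader; the built-in "Reader and Data Access" role does include it. A sketch with the Azure CLI (all names and IDs below are placeholders):

# Grant the built-in "Reader and Data Access" role, scoped to the storage account
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader and Data Access" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"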


Run Systemd Service Unit After AWS EBS Volume Mount

Posted: 23 Jul 2022 04:06 PM PDT

I launch an m5.large (Nitro-based) EC2 instance from an Ubuntu AMI and attach an EBS volume. systemd is the default init system. As the AWS documentation "Making an Amazon EBS Volume Available for Use on Linux" describes, I mount the EBS volume from user data:

#!/bin/bash

# Sleep gives the SSD drive a chance to mount before the user data script completes.
sleep 15

mkdir /application

mount /dev/nvme1n1 /application

I need Nginx, and its site configuration lives on the EBS volume. For the default nginx package's systemd unit I declare a dependency on the mount with the RequiresMountsFor directive in a drop-in:

# /lib/systemd/system/nginx.service

[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target
# /etc/systemd/system/nginx.service.d/override.conf

[Unit]
RequiresMountsFor=/application

[Service]
Restart=always

But for some reason this doesn't make Nginx run only after the mount (done in user data) completes. I can see the mount unit for the /application path, but I don't see Requires=application.mount as I'd expect:

$ sudo systemctl show -p After,Requires nginx
Requires=system.slice sysinit.target -.mount
After=sysinit.target -.mount systemd-journald.socket basic.target application.mount system.slice network.target

The Nginx service still tries to start before cloud-init finishes executing the user data, exhausts all attempts to start, and fails:

Apr 08 15:34:32 hostname nginx[1303]: nginx: [emerg] open() "/application/libexec/etc/nginx/nginx.site.conf" failed (2: No such file or directory) in /etc/nginx/sites-e
Apr 08 15:34:32 hostname nginx[1303]: nginx: configuration file /etc/nginx/nginx.conf test failed
Apr 08 15:34:32 hostname systemd[1]: nginx.service: Control process exited, code=exited status=1
Apr 08 15:34:32 hostname systemd[1]: Failed to start A high performance web server and a reverse proxy server.

I assumed systemd would start the service on the mount notification for the specified path /application. What am I missing?

What is the most flexible and correct way to mount EBS volumes at Ubuntu + systemd?
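One observation, offered as a sketch: RequiresMountsFor can only order against mount points systemd knows about (from /etc/fstab or mount units); a raw mount command in user data produces an application.mount unit only after the fact, too late for ordering. Declaring the mount to systemd itself would look roughly like this, assuming the /dev/nvme1n1 device name is stable:

# /etc/systemd/system/application.mount
# The unit file name must match the mount path (/application -> application.mount)

[Unit]
Description=Application data on EBS volume

[Mount]
What=/dev/nvme1n1
Where=/application
Type=ext4          # assumption: adjust to the volume's actual filesystem
Options=defaults

[Install]
WantedBy=multi-user.target

After systemctl enable application.mount (and removing the raw mount from user data), RequiresMountsFor=/application in the nginx drop-in has a real unit to order against; an /etc/fstab entry with the nofail option achieves the same thing, since systemd generates mount units from fstab.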

CentOS 7 - firewalld[8509]: ERROR: COMMAND_FAILED

Posted: 23 Jul 2022 06:00 PM PDT

[root@localhost ~]# systemctl status firewalld -l
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2018-09-23 00:27:10 EDT; 2h 51min ago
     Docs: man:firewalld(1)
 Main PID: 8509 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─8509 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Sep 23 00:27:09 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
Sep 23 00:27:10 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
Sep 23 03:01:26 localhost.localdomain firewalld[8509]: WARNING: '/usr/sbin/iptables-restore --wait=2 -n' failed: iptables-restore: line 2 failed
Sep 23 03:01:26 localhost.localdomain firewalld[8509]: ERROR: COMMAND_FAILED
Sep 23 03:01:32 localhost.localdomain firewalld[8509]: WARNING: '/usr/sbin/iptables-restore --wait=2 -n' failed: iptables-restore: line 2 failed
Sep 23 03:01:32 localhost.localdomain firewalld[8509]: ERROR: COMMAND_FAILED

I am getting the above error message. I tried Googling, but couldn't find a solution. Any idea why this is happening?

Ansible: Calling tags from role but they are not getting executed

Posted: 23 Jul 2022 07:01 PM PDT

I have some tasks as shown below

- name: Add the server's domain to the hosts file
  lineinfile:
    dest: /etc/hosts
    #regexp='.*{{ item }}$'
    line: "{{ hostvars[item].ansible_default_ipv4.address }} {{ LOCAL_FQDN_NAME }} {{ LOCAL_HOSTNAME }}"
    state: present
  when: hostvars[item].ansible_default_ipv4.address is defined
  with_items: "{{ groups['cache'] }}"
  tags: [ 'never', 'hostname' ]

- name: Set the timezone for the server to be UTC
  file:
    path: /usr/share/zoneinfo/UTC
    dest: /etc/localtime
    state: link

- name: Copy the NGINX repository definition
  copy: src=nginx.repo dest=/etc/yum.repos.d/
  tags: [ 'never', 'setuprepo' ]

and I call them from my playbook as

- hosts: cache
  vars:
    LOCAL_HOSTNAME: 'web02'
  roles:
    - { role: basic-setup, tags: [ 'hostname', 'setuprepo', 'firewall' ]}

But despite applying the tags explicitly, tasks like "Add the server's domain to the hosts file" are not executed, whereas "Set the timezone for the server to be UTC" is.

edit: My command line is a simple

ansible-playbook server.yml   

Here's how the command was executed:

As you can see, when I execute the command I don't see any tasks for the tags I set on

  • { role: nginx, tags: [ 'hostname', 'setuprepo', 'firewall' ]}

What am I doing wrong here?
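One documented behavior worth checking here: tags applied to a role in the play are added to its tasks, not used to select them, and Ansible's special never tag skips a task unless that tag is explicitly requested at run time. So with a bare ansible-playbook server.yml the never-tagged tasks stay skipped; requesting them would look like:

# 'never'-tagged tasks only run when their tag is named explicitly
ansible-playbook server.yml --tags "hostname,setuprepo"

# Confirm which tasks would run, without executing anything
ansible-playbook server.yml --tags "hostname,setuprepo" --list-tasks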

Port not responding remotely, but shows as open locally

Posted: 23 Jul 2022 02:01 PM PDT

UPDATE:

I opened a ticket with DO and was informed that they had automatically closed port 8083 due to the VestaCP vulnerability which allowed root access to droplets. While I'm happy I found out what was causing my problem, I'm disappointed that DO did not contact their users to inform them about this. Multiple hours were wasted on this problem, hours I won't get back.

On my DO server, I have bound my API to port 8083, and it was working normally until today. Now whenever I try to connect to my API, the connection times out. I tried to connect to the port using nc -zv host port, but it hangs as well.

Strangely, changing the port in my API, recompiling it and running it works perfectly. Almost all other ports work, except 8083.

I SSHed into the box and ran nc -zv localhost 8083 and got a connection successful message. I don't think any firewall is blocking it, because I ran service iptables status and it says iptables.service is not running.

So, now I have two options, either use a different port for my API (which is troublesome, as the port is hardcoded into the Android app I use the API for), or figure this out.

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2222            0.0.0.0:*               LISTEN      1328/sshd
tcp6       0      0 :::8086                 :::*                    LISTEN      1586/api1
tcp6       0      0 :::3306                 :::*                    LISTEN      1343/mysqld
tcp6       0      0 :::2222                 :::*                    LISTEN      1328/sshd
tcp6       0      0 :::8080                 :::*                    LISTEN      1583/api2
tcp6       0      0 :::8082                 :::*                    LISTEN      1574/api3
tcp6       0      0 :::8083                 :::*                    LISTEN      1801/api4
tcp6       0      0 :::8084                 :::*                    LISTEN      1571/api5
tcp6       0      0 :::8085                 :::*                    LISTEN      1577/api6

What could be the problem?

Weblogic server arguments via Admin Console

Posted: 23 Jul 2022 08:06 PM PDT

I'm working on a Weblogic domain where I have deployed a Web Application on the Admin server node.

Admin Server

I want to pass an argument when the server starts. I'm trying to do this via the Admin Console, more specifically Servers -> Admin -> Server Start -> Arguments as depicted below

Server start - Console

But I don't see the arguments in the server log. What should be done in order for the argument to take effect?

HTTP 405 Submitting Wordpress comments (Nginx/PHP-FPM/Memcached)

Posted: 23 Jul 2022 11:04 PM PDT

I just realized that comments are broken on a WordPress site I'm working on (LEMP + memcached), and I can't figure out why. I'm sure it's not related to my theme or any plugins. Basically, whenever anyone tries to submit a comment, nginx gets stuck on wp-comments-post.php with an HTTP 405 error instead of fulfilling the POST request.

From what I can tell, the issue appears to be how nginx handles a POST request to wp-comments-post.php, where it returns an HTTP 405 instead of redirecting it correctly.

I had a similar issue with a POST request on an email submission plugin, and that was fixed by telling memcached to redirect the 405 error. Memcached should be passing 405s back to nginx, but I'm not sure how nginx and PHP-FPM handle errors from there (especially with FastCGI caching in use).

Here is my nginx.conf:

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

http {

##
# Basic Settings
##

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
keepalive_requests 65536;
client_body_timeout 12;
client_header_timeout 15;
send_timeout 15;
types_hash_max_size 2048;
server_tokens off;

server_names_hash_max_size 1024;
server_names_hash_bucket_size 1024;

include /etc/nginx/mime.types;

index index.php index.html index.htm;

client_body_temp_path /tmp/client_body;
proxy_temp_path /tmp/proxy;
fastcgi_temp_path /tmp/fastcgi;
uwsgi_temp_path /tmp/uwsgi;
scgi_temp_path /tmp/scgi;

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=phpcache:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
default_type application/octet-stream;

client_body_buffer_size 16K;
client_header_buffer_size 1K;
client_max_body_size 8m;
large_client_header_buffers 2 1k;

##
# Logging Settings
##

access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

##
# Gzip Settings
##

gzip on;
gzip_disable "msie6";
gzip_min_length 1000;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 2;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json image/svg+xml image/png image/gif image/jpeg application/x-javascript text/xml application/xml application/xml+rss text/javascript font/ttf font/otf font/eot x-font/woff application/x-font-ttf application/x-font-truetype application/x-font-opentype application/font-woff application/font-woff2 application/vnd.ms-fontobject audio/mpeg3 audio/x-mpeg-3 audio/ogg audio/flac audio/mpeg application/mpeg application/mpeg3 application/ogg;

etag off;

##
# Virtual Host Configs
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

upstream php {
    server unix:/var/run/php/php7.0-fpm.sock;
}

server {
    listen 80; # IPv4
    listen [::]:80; # IPv6
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    server_name example.com www.example.com;
    listen 443 default http2 ssl; # SSL
    listen [::]:443 default http2 ssl; # IPv6
    ssl on;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/priv.key;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 24h;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES256+EECDH:AES256+EDH:!aNULL;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;

    add_header Public-Key-Pins 'pin-sha256="...; max-age=63072000; includeSubDomains;';
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Content-Type-Options "nosniff";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Dns-Prefetch-Control 'content=on';

    root /home/user/selfhost/html;
    include /etc/nginx/includes/*.conf; # Extra config

    client_max_body_size 10M;

    location / {
        set $memcached_key "$uri?$args";
        memcached_pass  127.0.0.1:11211;
        error_page 404 403 405 502 504 = @fallback;
        expires 86400;

        location ~ \.(css|ico|jpg|jpeg|js|otf|png|ttf|woff) {
            set $memcached_key "$uri?$args";
            memcached_pass 127.0.0.1:11211;
            error_page 404 502 504 = @fallback;
            #expires epoch;
        }
    }

    location @fallback {
        try_files $uri $uri/ /index.php$args;
        #root /home/user/selfhost/html;
        if ($http_origin ~* (https?://[^/]*\.example\.com(:[0-9]+)?)) {
            add_header 'Access-Control-Allow-Origin' "$http_origin";
        }
        if (-f $document_root/maintenance.html) {
            return 503;
        }
    }

    location ~ [^/]\.php(/|$) {
        # set cgi.fix_pathinfo = 0; in php.ini
        include proxy_params;
        include fastcgi_params;
        #fastcgi_intercept_errors off;
        #fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_pass php;
        fastcgi_cache phpcache;
        fastcgi_cache_valid 200 60m;
        #error_page 404 405 502 504 = @fallback;
    }

    location ~ /nginx.conf {
        deny all;
    }

    location /nginx_status {
        stub_status on;
        #access_log off;
        allow 159.203.18.101;
        allow 127.0.0.1/32;
        allow 2604:a880:cad:d0::16d2:d001;
        deny all;
    }

    location ^~ /09qsapdglnv4eqxusgvb {
        auth_basic "Authorization Required";
        auth_basic_user_file htpass/adminer;
        #include fastcgi_params;

        location ~ [^/]\.php(/|$) {
            # set cgi.fix_pathinfo = 0; in php.ini
            include fastcgi_params;
            #fastcgi_intercept_errors off;
            #fastcgi_pass unix:/var/run/php7.0-fpm.sock;
            fastcgi_pass php;
            fastcgi_cache phpcache;
            fastcgi_cache_valid 200 60m;
        }
    }

    error_page 503 @maintenance;
    location @maintenance {
        rewrite ^(.*)$ /.maintenance.html break;
    }
}
}

And here is fastcgi_params:

fastcgi_param   SCRIPT_FILENAME     $document_root$fastcgi_script_name;
fastcgi_param   QUERY_STRING        $query_string;
fastcgi_param   REQUEST_METHOD      $request_method;
fastcgi_param   CONTENT_TYPE        $content_type;
fastcgi_param   CONTENT_LENGTH      $content_length;

#fastcgi_param  SCRIPT_FILENAME     $request_filename;
fastcgi_param   SCRIPT_NAME         $fastcgi_script_name;
fastcgi_param   REQUEST_URI         $request_uri;
fastcgi_param   DOCUMENT_URI        $document_uri;
fastcgi_param   DOCUMENT_ROOT       $document_root;
fastcgi_param   SERVER_PROTOCOL     $server_protocol;

fastcgi_param   GATEWAY_INTERFACE   CGI/1.1;
fastcgi_param   SERVER_SOFTWARE     nginx/$nginx_version;

fastcgi_param   REMOTE_ADDR     $remote_addr;
fastcgi_param   REMOTE_PORT     $remote_port;
fastcgi_param   SERVER_ADDR     $server_addr;
fastcgi_param   SERVER_PORT     $server_port;
fastcgi_param   SERVER_NAME     $server_name;

fastcgi_param   HTTPS           $https if_not_empty;

fastcgi_param AUTH_USER $remote_user;
fastcgi_param REMOTE_USER $remote_user;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param   REDIRECT_STATUS     200;

fastcgi_param   PATH_INFO       $fastcgi_path_info;

fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
fastcgi_max_temp_file_size 0;
fastcgi_index index.php;

fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_keep_conn on;

Here are the request logs:

xxx.xxx.xxx.xxx - - [26/Apr/2017:00:11:59 +0000] "GET /2016/12/31/hello-world/ HTTP/2.0" 200 9372 "https://example.com/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36"
xxx.xxx.xxx.xxx - - [26/Apr/2017:00:12:01 +0000] "POST /wp-comments-post.php HTTP/2.0" 405 626 "https://example.com/2016/12/31/hello-world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36"
xxx.xxx.xxx.xxx - - [26/Apr/2017:00:12:01 +0000] "GET /favicon.ico HTTP/2.0" 200 571 "https://example.com/wp-comments-post.php" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36"
xxx.xxx.xxx.xxx - - [26/Apr/2017:00:21:20 +0000] "POST /wp-comments-post.php HTTP/2.0" 405 626 "https://example.com/2016/12/31/hello-world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36"
xxx.xxx.xxx.xxx - - [26/Apr/2017:00:21:21 +0000] "GET /favicon.ico HTTP/2.0" 200 571 "https://example.com/wp-comments-post.php" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36"
xxx.xxx.xxx.xxx - - [26/Apr/2017:00:24:07 +0000] "POST /wp-comments-post.php HTTP/2.0" 405 626 "https://example.com/2016/12/31/hello-world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36"
xxx.xxx.xxx.xxx - - [26/Apr/2017:00:24:07 +0000] "GET /favicon.ico HTTP/2.0" 200 571 "https://example.com/wp-comments-post.php" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/57.0.2987.98 Chrome/57.0.2987.98 Safari/537.36"
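One detail that stands out in the config above, offered only as a hunch: fastcgi_params sets fastcgi_intercept_errors on globally, and the PHP location has its error_page line commented out, so an intercepted 405 there would get nginx's bare error page instead of being routed anywhere. A hedged sketch of re-enabling the fallback in the PHP location (untested against this exact setup):

location ~ [^/]\.php(/|$) {
    include fastcgi_params;
    fastcgi_pass php;
    fastcgi_cache phpcache;
    fastcgi_cache_valid 200 60m;

    # Route intercepted errors (including 405s on POSTs) back to the
    # WordPress front controller instead of nginx's default error page
    error_page 404 405 502 504 = @fallback;
}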

Freshly installed Puppet Environment not working, CSR not matching public key

Posted: 23 Jul 2022 09:06 PM PDT

I want to play around with Puppet, so I set up a small test environment consisting of 4 VMs:

  • pfSense: Router
  • Windows Server 2012 R2: DNS, DHCP
  • Ubuntu Server 16.04: Puppetmaster
  • Ubuntu Server 16.04: Puppet agent

DNS is set up correctly, it answers all forward- and reverse lookups correctly.

Here is the set of commands I executed on both of the Ubuntu VMs (base configuration):

sudo dpkg-reconfigure keyboard-configuration
sudo apt-get install -y vim openssh-server ntp
sudo dpkg-reconfigure tzdata

vi /etc/hostname (set to puppet / puppetclient)
sudo reboot now

wget https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
sudo dpkg -i puppetlabs-release-pc1-xenial.deb
sudo apt-get update

And then on the master:

sudo apt-get -y install puppetserver
sudo /opt/puppetlabs/bin/puppet resource service puppetserver ensure=running enable=true
sudo service puppetserver restart

The puppetserver service is running nicely (after assigning 6 GB of RAM to the VM ;))

On the client:

sudo apt-get install puppet-agent
sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true

On the client, I then run:

puppet agent --server puppet.puppet.intra --waitforcert 60 --test  

This is answered by

Error: Could not request certificate: The CSR retrieved from the master does not match the agent's public key.
CSR fingerprint: 82:F5:08:CC:98:8A:D1:8F:EC:3D:B0:F7:5B:EB:43:FC:FC:0D:95:30:E8:6F:7F:81:9E:1B:02:CB:A4:01:0E:50
CSR public key: Public-Key: (4096 bit)
Modulus:
    ...
Exponent: 65537 (0x10001)

Agent public key: Public-Key: (4096 bit)
Modulus:
    ...
Exponent: 65537 (0x10001)

To fix this, remove the CSR from both the master and the agent and then start a puppet run, which will automatically regenerate a CSR.
On the master:
  puppet cert clean puppetclient.puppet.intra
On the agent:
  1a. On most platforms: find /home/administrator/.puppetlabs/etc/puppet/ssl -name puppetclient.puppet.intra.pem -delete
  1b. On Windows: del "\home\administrator\.puppetlabs\etc\puppet\ssl\certs\puppetclient.puppet.intra.pem" /f
  2. puppet agent -t

Of course, I executed the proposed troubleshooting steps, without result. I further checked:

  • I can open port 8140 on the server
  • the time settings to match
  • both machines have the correct hostname set and are resolved by the dns correctly

What am I doing wrong?

Regards, Christian

  Edit  
I just realized something: it seems the problem only occurs when I run puppet as a different user than I installed it with. I wanted to run puppet agent -t as root with sudo on an OS X client and got the error message described earlier. When I run puppet as the user I installed it with, the error doesn't occur. How can I fix this?
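That observation fits how Puppet separates its state per user: run as root it keeps certificates in the system confdir, while a non-root run uses a per-user directory, so the two runs present different keys against the same CSR on the master. A sketch of how to compare and reset, using the usual Puppet 4/PC1 default paths:

# Root's SSL state (system confdir)
sudo ls /etc/puppetlabs/puppet/ssl

# Per-user SSL state for the installing user
ls ~/.puppetlabs/etc/puppet/ssl

# To start clean: run 'puppet cert clean <agent fqdn>' on the master (as the
# error output suggests), wipe one agent-side ssl dir, and stick to a single
# user (typically root) for agent runs
sudo rm -r /etc/puppetlabs/puppet/ssl
sudo puppet agent -t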

python - How to deploy Flask+Gunicorn+Nginx+supervisor on a cloud server?

Posted: 23 Jul 2022 10:05 PM PDT

I've read a lot of instructions about this since yesterday, and all of them have similar steps. I followed them step by step but still can't get everything working.

Actually, I can make Flask + Gunicorn + supervisor work, but Nginx is not working well.

I connect to my remote cloud server over SSH; I'm not deploying the site on my own computer.

Nginx is installed correctly because when I visit the site via the domain name (aka. example.com) it shows the Nginx welcome page.

I use supervisor to start Gunicorn, and the configuration is:

[program:myapp]
command=/home/fh/test/venv/bin/gunicorn -w4 -b 0.0.0.0:8000 myapp:app
directory=/home/fh/test
startsecs=0
stopwaitsecs=0
autostart=false
autorestart=false
stdout_logfile=/home/fh/test/log/gunicorn.log
stderr_logfile=/home/fh/test/log/gunicorn.err

Here I bind the server to port 8000. I don't actually know what 0.0.0.0 stands for, but I don't think it means localhost, because I can visit the site via http://example.com:8000 and it works well.

Then I tried to use Nginx as a proxy server.

I deleted /etc/nginx/sites-available/default and /etc/nginx/sites-enabled/default, created /etc/nginx/sites-available/test.com, and symlinked it to /etc/nginx/sites-enabled/test.com.

test.com

server {
        server_name www.penguin-penpen.com;
        rewrite ^ http://example/ permanent;
}

# Handle requests to example.com on port 80
server {
        listen 80;
        server_name example.com;

        # Handle all locations
        location / {
                # Pass the request to Gunicorn
                proxy_pass http://127.0.0.1:8000;

                # Set some HTTP headers so that our app knows where the request really came from
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}

To my understanding, what Nginx does when I visit http://example.com is pass my request on to http://example.com:8000.

I'm not quite sure I should use proxy_pass http://127.0.0.1:8000 here, because I don't know whether Nginx should pass the request to localhost. I've tried changing it to 0.0.0.0:8000, but it still doesn't work.
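For what it's worth, 0.0.0.0 as a bind address means "listen on all interfaces", which includes loopback, so proxy_pass http://127.0.0.1:8000 should be the right target (0.0.0.0 is only meaningful as a listen address, not as a destination). A quick check sketch from the server:

# Is Gunicorn reachable on loopback? (this is what nginx's proxy_pass targets)
curl -I http://127.0.0.1:8000/

# Is the new vhost actually loaded, and the config valid?
sudo nginx -t && sudo nginx -s reload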

Can anyone help?

How to prevent any ACL from NFS mount?

Posted: 23 Jul 2022 04:06 PM PDT

I'm trying to export/mount an NFS volume with no ACLs at all (neither POSIX nor NFS ones), but I'm failing at it.

Technical context: latest current Debian on both sides, ext4 volume.

Goal: I enforce strict access using POSIX ACLs on the server, and users can (will) access the volume from another machine over NFS. But any user owning a dir/file can change its ACLs, which is not good here. So I want to prevent users from changing ACLs, and simply removing the get/setfacl commands is not a good way. Removing ACL support on the server-side volume is not good either…

So my question: is it possible to prevent ACL changes on an NFS mount, without removing ACLs on the server-side volume? If yes, how can it be done?

I tested using no_acl / noacl without success: my exports use NFSv3 with the "no_acl" option. In /etc/exports:

/exports ip-of-client-during-tests(rw,sync,no_acl,no_subtree_check,fsid=0)
/exports/data ip-of-client-during-tests(rw,sync,no_acl,no_subtree_check)

All services reloaded/restarted. Then I mount it on client with "noacl" option (whatever):

mount -t nfs -o noacl,vers=3 my-server:/exports/data/ /var/data/  

which gives in /proc/mounts:

server-name:/exports/data/ /var/data nfs rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,noacl,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=server-ip,mountvers=3,mountport=53844,mountproto=udp,local_lock=none,addr=server-ip 0 0  

And on the client I'm still able to get/set ACLs with get/setfacl on dirs/files I own, and the changes are visible on the server filesystem. I also tried NFSv4, with no change. By the way, on the server I can't see any "no_acl" option in /proc/fs/nfs/exports:

/exports/data   client-ip(rw,root_squash,sync,wdelay,no_subtree_check,uuid=0bac8439:e7e2488e:817358d2:f2c94b85,sec=1)  

even if it is visible with exportfs -v:

/exports/data   client-ip(rw,wdelay,root_squash,no_subtree_check,no_acl,sec=sys,rw,root_squash,no_all_squash)  

ssh: ProxyCommand via persistant ControlMaster connection

Posted: 23 Jul 2022 05:03 PM PDT

I have two servers, middle and remote. middle is used as a proxy to access remote. I've set up middle's ssh config so that it preserves connections to remote via ControlMaster, as follows

Host remote
    ControlMaster auto
    ControlPath ~/.ssh/%r@%h:%p
    ControlPersist yes

I've created a persistent connection from middle to remote. This is convenient because the authentication on remote is complex.

I'd like to set up my local ssh config so that I can ssh from localhost to remote via middle, reusing the connection created above. I can do this manually as ssh -t middle ssh remote, but I can't figure out a way to accomplish the same thing using the ProxyCommand option, which is especially annoying when I want to scp a file to remote.

ProxyCommands which do not work include

  • ssh middle -W remote:22 (does not reuse connection)
  • ssh middle -t remote (goes all the way to a shell, confusing my local ssh client, which is expecting to talk to sshd, not a shell)
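A hedged sketch of one layout worth trying on the local machine: run the second hop's ssh client on middle itself, so it attaches to middle's persistent ControlMaster to remote, and use -W only for the final stdio forward (assumptions: a reasonably recent OpenSSH, and remote's sshd listening on its own localhost:22):

# ~/.ssh/config on the local machine (a sketch)
Host remote
    # The inner ssh runs *on* middle and reuses middle's multiplexed
    # connection to remote; -W then forwards our stdio to remote's sshd
    ProxyCommand ssh middle "ssh -W localhost:22 remote"

With this in place, scp somefile remote:/tmp/ goes through the same path; unlike ssh middle -W remote:22, the forwarding hop here is an ssh client invocation on middle, which is what can pick up middle's ControlPath.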

Impossible to create an ext4 fs with a block size of 1024

Posted: 23 Jul 2022 08:06 PM PDT

I'm trying to build a new server for a service that saves data in very small files of at most 1 KB. The problem is that we are currently using a block size of 4 KB and wasting a lot of space, so we are planning to move to a new filesystem with a 1 KB block size.

The problem is that the partition is around 5.7 TB; when I run the mkfs.ext4 command with a block size of 1024 it throws this error:

/dev/sda5: Cannot create filesystem with requested number of inodes

But if I change it to 2048 it works perfectly.

I tried running with the 64bit flag; e2fsprogs is on the latest version, 1.42-something. I also tried setting the inode size from 1024 to 16365 with no luck.

I'm running out of ideas. Switching to another FS could be an option, but I've seen a lot of benchmarks and XFS or ZFS don't perform as well as ext4 on small files :(

Any ideas?

Running centos 2.6.32-431.20.3.el6.x86_64
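One possible explanation, offered as an assumption: with 1 KiB blocks, standard 32-bit ext4 block addressing tops out at 2^32 × 1 KiB = 4 TiB, which a 5.7 TB partition exceeds, so the 64bit feature has to actually take effect (e2fsprogs 1.42 was the first release to support it, and early point releases were incomplete). A sketch of the invocation:

# -b 1024: 1 KiB blocks; -O 64bit: 64-bit block addressing (needed above 4 TiB
# at this block size); -i 16384 keeps the inode count below 2^32
mkfs.ext4 -b 1024 -O 64bit -i 16384 /dev/sda5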

How to Automatically delete files from NAS Server

Posted: 23 Jul 2022 06:00 PM PDT

We have a security camera in our office that saves video files to a NAS server. These video files are occupying a lot of storage. I am looking for a way to automatically delete files that are older than x number of days.
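If the NAS runs Linux or exposes an SSH shell (an assumption — on a vendor appliance the equivalent is usually a retention setting in the camera's or NAS's own UI), a cron job with find is the classic approach:

# Delete recordings older than 30 days, daily at 03:00
# (path and age are placeholders - adjust to the camera's storage folder)
0 3 * * * find /volume1/camera -type f -mtime +30 -delete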

VPN connection reset

Posted: 23 Jul 2022 02:01 PM PDT

I have a device running Arch Linux and OpenVPN, which was connecting to the VPN server without problems until recently. Now it can't connect, with the following output looping indefinitely: http://pastebin.com/BU6aiBVn

Is the WARNING message in the log the reason for this? I checked the link provided in the log (http://openvpn.net/howto.html#mitm), but I am using easy-rsa 2.0 to create the certificate and I am using it when connecting.

How can I investigate further? I guess this is not enough data for anyone to really know what is happening, but I am not sure what else to provide, so please say in the comments what else is needed for debugging this issue, and I will edit my question.

UPDATE
Also, it now seems that I sometimes get this error, but I am not sure what is different in such cases:

Mar 31 09:39:32 alarmpi openvpn[530]: TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
Mar 31 09:39:32 alarmpi openvpn[530]: TLS Error: TLS handshake failed
Mar 31 09:39:32 alarmpi openvpn[530]: Fatal TLS error (check_tls_errors_co), restarting
Mar 31 09:39:32 alarmpi openvpn[530]: SIGUSR1[soft,tls-error] received, process restarting

UPDATE 2
As per MadHatter's suggestion, I tried connecting via Telnet from the client, and it seems to work:

[root@alarmpi ~]# telnet <SERVER_IP> 443
Trying <SERVER_IP>...
Connected to <SERVER_IP>.
Escape character is '^]'.

UPDATE 3
It would seem that after the openvpn restart, clients are now able to connect. I am not sure what caused this or how it got overcome, but I can't seem to reproduce this issue at the moment. I will try some more and if I can't reproduce I will delete the question.

set up apache http server in windows as proxy to access another domain

Posted: 23 Jul 2022 03:06 PM PDT

I know this should be very basic and simple in theory, but I need to complete this task. I'm new to this, and for some reason I can't find a suitable example that works for me.

I am running Apache 2.2 on Windows 8. I need to access a website, let's call it x.com, through my proxy. The reason is that I need to show it in an iframe and also programmatically log in to it, for which I need to use JavaScript. This is prevented by cross-domain ajax security constraints; by proxying the site I could work around that.

I have installed Apache HTTP Server and uncommented the following line

LoadModule proxy_module modules/mod_proxy.so  

in the file "httpd.conf", and overwritten conf\extra\httpd-vhosts.conf with the following:

NameVirtualHost *:80

<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.localhost
    DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/docs/dummy-host.localhost"
    ServerName 127.0.0.1:80
    ProxyRequests off
    ProxyPass /feature http://x.com/
    ProxyPassReverse /feature https://x.com/
    ProxyPassReverseCookieDomain x.com localhost
    ErrorLog "logs/dummy-host.localhost-error.log"
    CustomLog "logs/dummy-host.localhost-access.log" common
    <Directory "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/docs/dummy-host.localhost">
        AllowOverride all
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
    </Directory>
</VirtualHost>

I restarted the Apache service; now I go to:

http://localhost/feature  

and get

Not Found

The requested URL /feature was not found on this server.
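A 404 served out of the DocumentRoot is consistent with ProxyPass never taking effect. One likely gap, offered as an assumption since only mod_proxy is mentioned above: mod_proxy is just the proxy framework, and the HTTP protocol handler lives in a separate module that must also be loaded for ProxyPass to an http:// backend to work:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

With the second line added and Apache restarted, /feature should be forwarded to x.com instead of being looked up on disk.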

What could be wrong with this set up? Is there something else I need to configure?

Thank you

Dependency loop while trying to install gcc on Fedora 17

Posted: 23 Jul 2022 07:01 PM PDT

I'm trying to yum install gcc but getting this message.

Error: Package: glibc-common-2.15-37.fc17.i686 (@anaconda-0)
       Requires: glibc = 2.15-37.fc17
       Removing: glibc-2.15-37.fc17.i686 (@anaconda-0)
           glibc = 2.15-37.fc17
       Updated By: glibc-2.15-57.fc17.i686 (updates)
           glibc = 2.15-57.fc17
       Removing: glibc-2.15-56.fc17.i686 (installed)
           glibc = 2.15-56.fc17
       Updated By: glibc-2.15-57.fc17.i686 (updates)
           glibc = 2.15-57.fc17

And, if it's helpful:

uname -a
Linux laptop 3.3.4-5.fc17.i686.PAE #1 SMP Mon May 7 17:37:39 UTC 2012 i686 i686 i386 GNU/Linux

I'm not really sure how to resolve these issues... any ideas?
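The message shows glibc-common stuck at 2.15-37 while glibc is at 2.15-56, with 2.15-57 available in updates; the two packages must stay at matching versions. A sketch of bringing them back in sync before retrying (hedged — the right remedy depends on how the versions got skewed in the first place):

# Update both glibc packages together so their versions match again
sudo yum update glibc glibc-common

# Or reconcile everything installed against the repos
sudo yum distro-sync

# Then retry
sudo yum install gcc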

Prosody mod auth external not working

Posted: 23 Jul 2022 03:06 PM PDT

I installed mod_auth_external for 0.8.2 on Ubuntu 12.04, but it's not working. I have external_auth_command = "/home/yang/chat/testing", but it's not getting invoked. I enabled debug logging and see no messages from that module. Any help?

I'm using the Candy example client. Here's what's written to the log after I submit a login request (and nothing in err log):

Oct 24 21:02:43 socket        debug   server.lua: accepted new client connection from 127.0.0.1:40527 to 5280
Oct 24 21:02:43 mod_bosh        debug   BOSH body open (sid: %s)
Oct 24 21:02:43 boshb344ba85-fbf5-4a26-b5f5-5bd35d5ed372        debug   BOSH session created for request from 169.254.11.255
Oct 24 21:02:43 mod_bosh        info    New BOSH session, assigned it sid 'b344ba85-fbf5-4a26-b5f5-5bd35d5ed372'
Oct 24 21:02:43 httpserver      debug   Sending response to bf9120
Oct 24 21:02:43 httpserver      debug   Destroying request bf9120
Oct 24 21:02:43 httpserver      debug   Request has destroy callback
Oct 24 21:02:43 socket  debug   server.lua: closed client handler and removed socket from list
Oct 24 21:02:43 mod_bosh        debug   Session b344ba85-fbf5-4a26-b5f5-5bd35d5ed372 has 0 out of 1 requests open
Oct 24 21:02:43 mod_bosh        debug   and there are 0 things in the send_buffer
Oct 24 21:02:43 socket  debug   server.lua: accepted new client connection from 127.0.0.1:40528 to 5280
Oct 24 21:02:43 mod_bosh        debug   BOSH body open (sid: b344ba85-fbf5-4a26-b5f5-5bd35d5ed372)
Oct 24 21:02:43 mod_bosh        debug   Session b344ba85-fbf5-4a26-b5f5-5bd35d5ed372 has 1 out of 1 requests open
Oct 24 21:02:43 mod_bosh        debug   and there are 0 things in the send_buffer
Oct 24 21:02:43 mod_bosh        debug   Have nothing to say, so leaving request unanswered for now
Oct 24 21:02:43 httpserver      debug   Request c295d0 left open, on_destroy is function(mod_bosh.lua:81)

Here's the config I added:

modules_enabled = {
    ...
    "bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
    ...
}

authentication = "external"
external_auth_protocol = "generic"
external_auth_command = "/home/yang/chat/testing"
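Two quick things worth ruling out, offered as assumptions: the script must be executable by the prosody user (chmod +x /home/yang/chat/testing), and mod_auth_external speaks a line-based protocol on the script's stdin/stdout — the exact request format and the "1"/"0" reply below are assumptions to verify against the module's README. A minimal logging stub to confirm whether the module ever invokes the command at all:

#!/bin/sh
# /home/yang/chat/testing - log every invocation, then approve every
# auth attempt (debugging only!). The single-line "1" reply assumes the
# generic protocol's success response; check the mod_auth_external docs.
while read line; do
    echo "$(date): got: $line" >> /tmp/auth_external.log
    echo "1"
done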
