Saturday, April 2, 2022

Recent Questions - Server Fault



MAC address spoofing but not receiving packets

Posted: 02 Apr 2022 09:43 AM PDT

I have several hosts connected to an unmanaged switch in a LAN. Using macchanger, I changed the MAC address of one host, say [HOST Eve], to that of another host, say [HOST Alice]. I assumed that [HOST Eve] would then be able to see all traffic sent to [HOST Alice] with a tool like tcpdump. However, it does not. I don't think this is an issue of putting [HOST Eve]'s network card into promiscuous mode (tcpdump does that automatically); I am almost certain that no packets are forwarded to [HOST Eve]'s port at all. In theory, the switch should learn that the MAC address is also reachable via an additional port and forward any frames with that destination MAC to both ports.

Questions

(a) Any ideas why this fails?

(b) Any alternatives to achieve the same?

Note: I have implemented this topology in GNS3 not a real network.
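On the theory in the question: a typical switch keeps exactly one port per MAC in its forwarding (CAM) table, so a later learning event overwrites the earlier entry rather than adding a second port. A toy model (hypothetical, purely to illustrate the learning behaviour; real switches also age entries out):

```python
# Toy model of switch MAC learning: the CAM table maps each MAC to
# exactly ONE port, so learning the same MAC on a new port overwrites
# the old entry -- the switch never forwards to "both ports".
cam = {}

def learn(src_mac, port):
    cam[src_mac] = port          # overwrite, not append

def forward(dst_mac, ports):
    # known MAC -> single port; unknown MAC -> flood to all ports
    return [cam[dst_mac]] if dst_mac in cam else list(ports)

ports = [1, 2, 3]
learn("aa:aa", 1)                # Alice sends a frame on port 1
print(forward("aa:aa", ports))   # -> [1]
learn("aa:aa", 2)                # Eve (spoofed MAC) sends on port 2
print(forward("aa:aa", ports))   # -> [2]: traffic now bypasses Alice
```

So until Eve transmits with the spoofed MAC, the switch still maps Alice's MAC to Alice's port and Eve sees nothing; once Eve does transmit, she steals the traffic rather than duplicating it.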

Routing does not work

Posted: 02 Apr 2022 09:17 AM PDT

I have two interfaces wan0 and wg0. The routing table looks like this:

::1 dev lo proto kernel metric 256 pref medium
2a0c:xxx:yyy:zz00::/56 dev wg0 proto static metric 20 pref medium
2a0c:xxx::/32 dev wan0 proto kernel metric 256 pref medium
2a0c:xxx::/32 dev wan0 proto ra metric 1024 expires 2591957sec pref medium
fe80::/64 dev wan0 proto kernel metric 256 pref medium
default proto static metric 1024 pref medium
        nexthop via 2a0c:xxx::1 dev wan0 weight 1
        nexthop via fe80::****:****:****:3780 dev wan0 weight 1

When I try to ping dns.google from the wg0 interface, the packets are not forwarded out of the wan0 interface. Why?


tcpdump on the wg0 interface gives something like this:

IP6 2a0c:xxx:yyy:zz60::wwww > dns.google: ICMP6, echo request, id 1, seq 5093, length 40
IP6 2a0c:xxx:yyy::1 > 2a0c:xxx:yyy:zz60::wwww: ICMP6, destination unreachable, unreachable address dns.google, length 88
IP6 2a0c:xxx:yyy:zz60::wwww > dns.google: ICMP6, echo request, id 1, seq 5094, length 40
IP6 2a0c:xxx:yyy:zz60::wwww > dns.google: ICMP6, neighbor solicitation, who has dns.google, length 26
IP6 2a0c:xxx:yyy:zz60::wwww > dns.google: ICMP6, neighbor solicitation, who has dns.google, length 26
IP6 2a0c:xxx:yyy:zz60::wwww > dns.google: ICMP6, neighbor solicitation, who has dns.google, length 26
IP6 2a0c:xxx:yyy::1 > 2a0c:xxx:yyy:zz60::wwww: ICMP6, destination unreachable, unreachable address dns.google, length 88
IP6 2a0c:xxx:yyy:zz60::wwww > dns.google: ICMP6, echo request, id 1, seq 5095, length 40

Checking the route via ip -6 r get shows the correct route (2001:4860:4860::8888 is dns.google):

❯ ip -6 r get to 2001:4860:4860::8888 from 2a0c:xxx:yyy:zz60::wwww iif wg0
2001:4860:4860::8888 from 2a0c:xxx:yyy:zz60::wwww via 2a0c:xxx::1 dev wan0 proto static metric 1024 iif wg0 pref medium

Occasionally a packet does get forwarded out of the wan0 interface, but this happens very rarely.
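For reference, a longest-prefix match over the routing table above (with placeholder prefixes standing in for the anonymised 2a0c:xxx ones) picks the default route for 2001:4860:4860::8888, which agrees with what ip -6 r get reports; a quick sketch:

```python
import ipaddress

# Prefixes modelled on the routing table above (placeholders for the
# anonymised 2a0c:xxx addresses; "::/0" is the default route).
routes = {
    "2a0c:aaa:bbb:cc00::/56": "wg0",
    "2a0c:aaa::/32": "wan0",
    "fe80::/64": "wan0",
    "::/0": "wan0 (default, via gateway)",
}

def lookup(dst):
    dst = ipaddress.ip_address(dst)
    # longest-prefix match: the most specific containing network wins
    best = max((ipaddress.ip_network(p) for p in routes
                if dst in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return routes[str(best)]

print(lookup("2001:4860:4860::8888"))  # -> wan0 (default, via gateway)
```

Since the kernel's own lookup agrees with this (the ip -6 r get output above), the intermittent drops are more likely a forwarding or firewall issue (e.g. net.ipv6.conf.all.forwarding, or an ip6tables FORWARD policy) than a routing-table one.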

Apache 2.4 block all directories in root except for two specified

Posted: 02 Apr 2022 09:10 AM PDT

I want to block all directories in the document root except for two.

I don't want to manually block the ones I don't need. Instead, I just want to specify the two to keep.

I have a list of folders and files structured like this:

- Docs
- Docs_Files
    Img1.jpg
    Img2.jpg
    Img3.png
    Test.jpg
    TestLarge.png
    Zones.bmp
- Images
    A1.jpg
    B5.jpg
    B5.png
    X7.png
- Report 1
- Report 2
- Report 3
…
- Report 25

I'd like to grant access only to the Docs_Files directory and the Images directory (and within them only certain file types, like jpg) and deny everything else, without having to specify every new folder that gets added. My attempt at doing this didn't work:

DocumentRoot "b:"

<Directory "b:">
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all denied
</Directory>

<Directory "b:/Docs_Files">
    AllowOverride None
    Require all granted

    <Files *>
        Require all denied
    </Files>

    <Files *.jpg>
        Require all granted
    </Files>
</Directory>

<Directory "b:/Images">
    AllowOverride None
    Require all granted

    <Files *>
        Require all denied
    </Files>

    <Files *.jpg>
        Require all granted
    </Files>
</Directory>

Any suggestions? Thanks!
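A hedged sketch of one way to express "deny everything, allow only these two directories" in Apache 2.4 syntax. The regex <Directory ~> form avoids repeating the two blocks, and FilesMatch replaces the deny-all <Files *> section; the .png extension is an assumption based on the file listing above, and none of this is tested against the actual tree:

```apache
DocumentRoot "b:/"

# Deny everything under the document root by default.
<Directory "b:/">
    AllowOverride None
    Require all denied
</Directory>

# Re-grant only the two directories, and only image files in them.
<Directory ~ "^b:/(Docs_Files|Images)">
    AllowOverride None
    Require all denied
    <FilesMatch "\.(jpe?g|png)$">
        Require all granted
    </FilesMatch>
</Directory>
```

New folders added later inherit the root-level deny automatically, so only the regex ever needs updating.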

Shared Host - Entry Process peaks - Cpanel

Posted: 02 Apr 2022 08:45 AM PDT

Is there a way to find the origin of the access peaks in Entry Processes on my cPanel?

The image below shows three such moments. I have IP monitoring available if necessary.

[screenshot: Entry Processes usage graph]

How to share an instance to a CLD user under specific INSTANCE user

Posted: 02 Apr 2022 08:40 AM PDT

I have an example.instance_1.2.3.4_22_root instance in the default group; access to the instance from the CLD server as the admin user works fine.

I changed the instance string to user dev (example.instance_1.2.3.4_22_dev) and added it via the web interface to the clouds of user jonhdoe, but when I try to access the instance as user jonhdoe through the interactive gate with the CLI or the web terminal, I get the error: choosen INSTANCE example.instance_1.2.3.4_22_dev have incorrect GROUP

How to fix the error?

An init question related to the open-source infrastructure management system CLD from ClassicDevOps: https://github.com/classicdevops/cld

NGINX Reverse Proxy Server not working with IIS6 website

Posted: 02 Apr 2022 08:01 AM PDT

I'm experimenting with an NGINX Reverse Proxy Server (RPS) configuration on a Raspberry Pi V4B running Buster (Linux 10). In the most generous description, I'm a rank amateur with RPS and NGINX.

I have several real and test websites: NGINX, Windows IIS6, and "commercial" ones. Proxy forwards to my NGINX sites and to my Netgear WiFi configuration page (i.e. the commercial one) all work fine.

I'm having problems with my IIS sites, as they do a

<%@ LANGUAGE="VBSCRIPT" %>
<%
    Response.Redirect("MyFolder/default.asp")
%>

in the Default.asp page at the root level of the site, which performs a redirect to a particular folder. For example, www.aaa.com in the browser gets redirected to "MyFolder" for content. That also updates the browser URL to "www.aaa.com/MyFolder".

When I change the router for www.aaa.com to point to my RPS, it does not work correctly: the browser shows an error about not having a default index file. Assuming the RPS did not like the ASP code, I tried creating a Default.htm

<head>
<meta http-equiv="refresh" content="0; URL=http://www.rdkscorner.com/rdk">
</head>
<body>
</body>

which also works directly from the browser, but not when I route it through the RPS.

Here is the Server Block for this on the RPS

server {
        listen 80;
        server_name aaa.com www.aaa.com;
        location / {
                proxy_pass http://10.0.20.115;
        }
}

Ideally, I want to isolate the base address (www.aaa.com) and pass everything after the ".com" on to the IIS6 server. Since IIS6 looks at the headers for www.aaa.com, I assume the headers are correct, as that server hosts other sites.

However, it is not at all clear to me that this approach is correct, or how to do it if it is.
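Since IIS6 picks the site by Host header, one common gotcha is that proxy_pass sends the upstream address (here 10.0.20.115) as the Host header unless told otherwise. A hedged sketch of the server block with the original header passed through (standard nginx directives, but untested against this setup):

```nginx
server {
    listen 80;
    server_name aaa.com www.aaa.com;

    location / {
        proxy_pass http://10.0.20.115;
        # Forward the original Host header so IIS can match the
        # www.aaa.com host-header site instead of its default site.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

If IIS lands on its default site (which may have no default document), the "no default index file" error would follow, which is why the Host header is the first thing worth checking.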

Thanks for any guidance or help....RDK

NFSv4 and kerberos: access denied 50% of the time

Posted: 02 Apr 2022 07:41 AM PDT

We are trying to mount NFSv4 shares on RHEL 8 clients, with Kerberos. We have a very similar setup in another environment, and it works fine there. But in this setup, we get access denied on roughly 50% of our attempts to mount a share:

# failed attempt

bash-4.4$ sudo mount -t nfs -o sec=krb5 server.com:/homes/francis test -vvvv
mount.nfs: timeout set for Sat Apr  2 16:28:32 2022
mount.nfs: trying text-based options 'sec=krb5,vers=4.2,addr=192.168.1.89,clientaddr=192.168.2.29'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'sec=krb5,vers=4,minorversion=1,addr=192.168.1.89,clientaddr=192.168.2.29'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'sec=krb5,vers=4,addr=192.168.1.89,clientaddr=192.168.2.29'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'sec=krb5,vers=4,addr=192.168.1.88,clientaddr=192.168.2.29'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'sec=krb5,addr=192.168.1.89'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.89 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.89 prog 100005 vers 3 prot UDP port 32767
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'sec=krb5,addr=192.168.1.88'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.88 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.88 prog 100005 vers 3 prot UDP port 32767
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting hypatia.uio.no:/uioit-usit-drift-homes/francis

# working attempt two seconds later

bash-4.4$ sudo mount -t nfs -o sec=krb5 server.com:/homes/francis test -vvvv
mount.nfs: timeout set for Sat Apr  2 16:30:09 2022
mount.nfs: trying text-based options 'sec=krb5,vers=4.2,addr=192.168.1.88,clientaddr=192.168.2.29'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'sec=krb5,vers=4,minorversion=1,addr=192.168.1.88,clientaddr=192.168.2.29'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'sec=krb5,vers=4,addr=192.168.1.88,clientaddr=192.168.2.29'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'sec=krb5,vers=4,addr=192.168.1.89,clientaddr=192.168.2.29'

I have checked the logs on the client side, and there is not much there that points to the cause of the failure to mount. It works one time, and it won't work two seconds later, or vice versa.

I thought at first it could be a cross-mount issue, but I also tried with the parent directory of the share, and had the same problem.

Any hints on what can be the problem?

Windows Server - Windows Backup Error 0x81000036 on Restore - Logon with CTRL-ALT-DEL didn't work after restore - SOLVED

Posted: 02 Apr 2022 07:14 AM PDT

I want to post here to help others solve two problems generated by Windows Backup on Windows Server, in my case the 2019 version. The first problem, in short, is error 0x81000036 when you go to restore the entire hard disk. More than once you encounter error code 0x81000036, and there is no way to resolve it with the solutions that others have posted (see disabling Sandbox and/or Hyper-V and/or Windows Advanced Guard). When this happens, one often also encounters a second problem: it is not possible to log on after a complete restore, i.e. the system accepts neither CTRL-ALT-DEL nor a connection via RDP, yet the server works regularly; the services are all up and run smoothly. Faced with this damn situation (thanks, Microsoft, for the low quality of the software, and thanks for letting us paying customers act as beta testers), I personally did this:

  1. I installed a new Windows Server from scratch on the disk I had problems restoring to;
  2. From this Windows Server, through its Windows Backup, I recovered the entire volume C:, saving it on another external hard disk;
  3. I removed the HDD from inside the server where I had installed the new Windows Server and mounted it in a USB bay;
  4. Using a cloning application (I use HD Clone, very fast), I cloned the volume C: recovered from the backup in point 2 onto volume C: of the internal HDD on which I had installed Windows Server from scratch;
  5. I put the original internal HDD back in the server, and everything worked right away on the first try.

I wanted to write this down somewhere because I hope it can help some unfortunate friend in the same condition I was in: anguished at not being able to solve it, with none of the suggestions found around the internet working, a victim of that cabbage Windows Backup that really sucks. I felt like when I was a skydiver and, relying on the reserve parachute in case of problems with the main one, I needed to open it and it didn't work at all. The same feeling...

Set IIS 10 server to public [closed]

Posted: 02 Apr 2022 08:58 AM PDT

I'm pretty new to fiddling with servers (have a background in programming, but not server management), so please bear with me if I ask basic questions and link through to any tutorials you think I should follow.

I'm trying to set up a public IIS 10 server from an AWS Windows VM. From the VM I can connect to the localhost, but from anywhere else the connection results in ERR_CONNECTION_TIMED_OUT. I think it's because my server is set to localhost, but I can't find how I can make it public. Whenever I Google it I just get "how to make a local IIS server".

Here's what it currently looks like: [screenshot]

I got this result by following the official Windows tutorial for setting up an ASP.NET website with IIS Manager, found here. I'm hosting the site on port 6911 and I've opened that port on the Windows Firewall as well. Please ask for any additional clarification if necessary.

So basically my questions are:

  • Will setting it from localhost to public (or the equivalent) work?
  • If so, how do I do it?

No SYN-ACK Packet from server

Posted: 02 Apr 2022 06:38 AM PDT

I have two servers, and I use my own embedded system running LwIP to connect to them.

My embedded system with LwIP is the client, and there are server1 and server2. I connect to server1 and end that connection before connecting to server2.

Further breakdown on the flow:

  1. Client creates a new socket for server1
  2. Client sends a DNS packet to obtain server1's IP address; receives an ACK from the AP
  3. Client sends a TCP SYN packet
  4. Server1 sends a TCP SYN-ACK and some data transmission takes place
  5. Client ends the connection with server1 by sending a TCP RST packet, and closes the socket
  6. Client creates a new socket for server2
  7. Client sends a DNS packet to obtain server2's IP address; receives an ACK from the AP
  8. Client sends a TCP SYN packet to server2
  9. Server2 sends a TCP SYN-ACK and some data transmission takes place
  10. Client ends the connection with server2 by sending a TCP RST packet, and closes the socket

However, sometimes server2 does not respond to the client's SYN packet, i.e. Step 9 never happens. This only occurs occasionally. I checked several forum threads, such as:

[1] Why would a server not send a SYN/ACK packet in response to a SYN packet

[2] Server not sending a SYN/ACK packet in response to a SYN packet

My code does not enable window scaling. I cannot check the server, as it's a private server, so I am not sure whether the SYN was dropped there. My environment is quite noisy and busy, with many routers and other communication devices. The problem only occurs in the noisy environment, not in a cleaner one.

What can I do as a client to fix this problem?

SFTP client connection fails using hostname

Posted: 02 Apr 2022 07:07 AM PDT

I have configured a Debian server, hosting a website, for FTPS access using vsftpd: port 22, SSL enabled. When testing the connection with FileZilla, I connect successfully if I put the server's IP address in the host field. If I put the website's hostname, it fails. The server is behind a private router with a dynamic IP, so I am using a dynamic DNS service provided by Dynu DNS (my internet provider lets me connect the router to Dynu, so that the router informs Dynu when the IP changes). The DNS records in Dynu are the A record, updated by my router, and the AAAA record (updated by Dynu).

Hostname Type Data
*.myhostname A 12.34.567.89
*.myhostname AAAA [IPv6 address]

And in FileZilla:

Host Connection Status FileZilla logs
12.34.567.89 Success Command: open "username@12.34.567.89" 22
Trace: Looking up host "12.34.567.89" for SSH connection
Trace: Connecting to 12.34.567.89 port 22
myhostname Failed Command: open "username@myhostname" 22
Trace: Looking up host "myhostname" for SSH connection
Trace: Connecting to [IPv6 address] port 22

Why is my Laravel App having 3+ seconds TTFB in Production?

Posted: 02 Apr 2022 09:22 AM PDT

I've looked around but can't find a definitive answer on whether things like images affect TTFB, which is my best guess as to why my site takes so long to load in production. After the page has completely finished loading, I see it has transferred 40.7 MB of resources, which is a lot, but the initial page load accounts for only 20.1 kB of that, followed by images/JS/CSS.

The .har file exported from the Network inspector:

"pages": [    {      "startedDateTime": "2022-04-01T23:10:26.010Z",      "id": "page_1",      "title": "https://example.com/",      "pageTimings": {        "onContentLoad": 5878.881000004185,        "onLoad": 6390.734000000521      }    }  ],  

And after this follows things such as images/js/css.

Things I tried:

  • Replacing the content of index.php with a simple echo statement (<?php echo 'foobar'; ?>), which resolved the issue immediately: the page took less than a second to load.
  • Ensured that it has the same cache configuration as other applications hosted on the same server, which take much less time to load.
  • composer install --optimize-autoloader --no-dev
  • composer dump-autoload -o
  • php artisan route:cache
  • php artisan config:cache

My question is: although resources such as images/CSS/JS have their own TTFB, could they be increasing the time to first byte of the initial page?

Edit: Another thing I wanted to point out is that this occurs on pages that are not resource-intensive, and that the server is Microsoft Windows Server 2016 Standard on VMware, Inc. VMware7.1.

Unable to get kernel log messages written to a specific log file on syslog server

Posted: 02 Apr 2022 08:31 AM PDT

I have a working rsyslog setup with CentOS as my server and I am using Kali as the client.

I am able to use logger on Kali to send test log messages and see them appear in the CentOS messages file and in the facility-specific files I have set up in /var/log — all with the exception of kernel messages.

I see kernel messages appear in the messages file on CentOS, but it does not write to the kernel.log file I have in /var/log.

From this I deduce that the Kali client sends the log message correctly (as it is received and present in the messages file), but I am missing something in the rsyslog.conf file on CentOS.

This is what I use to generate a log message from Kali:

logger -t "new test" -p kern.err "testing kernel log messages"  

I do a tail -f messages on CentOS and the log message appears. However, when I cat the kernel.log file, it is empty.

This is what I have for rsyslog.conf on the CentOS machine. Any advice is welcome.

[screenshot: CentOS rsyslog.conf]

[screenshot: CentOS rsyslog.conf, page 2]
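Without seeing the full screenshots, a hedged sketch of the rule that would be needed in /etc/rsyslog.conf on the CentOS side (the filename is from the question; placement matters, so it should come before any rule that stops processing):

```conf
# Write kernel-facility messages (local and remote) to their own file.
# The facility travels with the message, so kern.err sent by the Kali
# client should match this selector too.
kern.*    /var/log/kernel.log
```

If a rule like this is already present, it is worth checking whether it sits below a rule that discards kernel messages (e.g. one ending in stop, or the legacy & ~), and whether SELinux allows rsyslogd to write to a non-default filename under /var/log.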

How to serve 2 react apps on nginx with same ip and port

Posted: 02 Apr 2022 08:19 AM PDT

I have two applications, one public and one admin. I want to serve both apps on the same port, but it is not working. Below is my configuration file.

The build folders for both apps are saved in the following directories:

/var/www/html/admin/build

/var/www/html/public/build

Configuration file:

server {
    listen 80;
    server_name 192.xx.xx.42;

    location /public {
        root /var/www/html/public/build;
        index login.html;
    }

    location /admin {
        root /var/www/html/admin/build;
        index login.html
    }
}
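Two things stand out in the config above: the index login.html line in the /admin block is missing its semicolon, and with root the location prefix is appended to the path (so /public maps to /var/www/html/public/build/public, which doesn't exist). A hedged sketch using alias instead, with the paths as given above (untested):

```nginx
server {
    listen 80;
    server_name 192.xx.xx.42;

    # "alias" replaces the matched prefix instead of appending it,
    # so /public/login.html serves .../public/build/login.html.
    location /public/ {
        alias /var/www/html/public/build/;
        index login.html;
    }

    location /admin/ {
        alias /var/www/html/admin/build/;
        index login.html;
    }
}
```

With alias, the trailing slashes on both the location and the path matter; keeping them paired avoids path-concatenation surprises.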

How do I set up a Let's Encrypt wildcard certificate for Apache on an Amazon Linux 2 AMI EC2 instance?

Posted: 02 Apr 2022 08:45 AM PDT

I have a domain (let's say example.com), and I currently have a Let's Encrypt certificate set up and properly working for example.com and www.example.com for Apache on an Amazon Linux 2 AMI EC2 instance, and I'm trying to reconfigure the certificate to set it up for a wildcard domain (i.e., *.example.com).

I SSH'ed into the EC2 instance and ran the following command in an attempt to do this (with the real domain, not example.com):

sudo certbot certonly --manual --preferred-challenges=dns --server https://acme-v02.api.letsencrypt.org/directory -d example.com -d *.example.com  

Upon running that command, I get the following message:

Let's Encrypt wildcard certificate attempt

I then add a TXT record to my DNS settings in Google Domains as the prompt suggested as follows:

Google Domains DNS settings

I then verified that the TXT record is there by using the following site and inputting the _acme-challenge URL / host name:

https://dnslookup.online/txt.html

Upon confirming the record is there, I then hit Enter in the SSH console, but I get the following error:

Let's Encrypt wildcard certificate error message

What am I doing wrong that's not allowing me to issue a wildcard certificate? Any help/guidance is greatly appreciated. Thank you.

Edit: I should note that I used the following post as a starting point for this: https://community.letsencrypt.org/t/you-may-need-to-use-a-different-authenticator-plugin/115026/4

How to implement caching of HTTP responses in Kubernetes?

Posted: 02 Apr 2022 08:47 AM PDT

How can I cache HTTP responses from my services in Kubernetes?

I have a simple web service in my cluster and am wondering how I could cache static assets (static html, images, fonts, etc.) beyond relying on client caches.

My setup is very simple:

 ┌─────────────────┐    ┌─────────────┐   ┌─────────────────┐
 │                 │    │             │   │                 │
 │  ingress-nginx  ├────►     svc     ├───►   deployment    │
 │                 │    │             │   │                 │
 └─────────────────┘    └─────────────┘   └─────────────────┘

Options I've considered:

  • external CDN (e.g. Cloudflare)
    • => ruled out due to data protection compliance rules
  • Cloud provider's CDN (e.g. Cloudfront)
    • => our cloud provider doesn't have such a service
  • proxy_cache in the ingress-nginx-controller & ingress
    • => seems… messy?
  • a dedicated caching service (e.g. Varnish) between ingress-nginx and my service
    • => is this a good idea?
    • => are there more "cloud-native" choices than configuring my own Varnish deployment?
  • a caching proxy in a sidecar (e.g. Varnish or nginx)
    • => not ideal because cache pods have to scale in line with application pods
  • caching in the application
    • => I'd prefer keeping this concern out of the application

I'm curious: how are people solving this problem in their clusters?
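For the proxy_cache option in the list above, a hedged sketch of what it could look like with ingress-nginx (the http-snippet ConfigMap key and configuration-snippet annotation are ingress-nginx features; the zone name static-cache, the sizes, and the host/service names are made up, and this is untested):

```yaml
# ConfigMap of the ingress-nginx controller: define a cache zone
# in the http{} context via the http-snippet key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  http-snippet: |
    proxy_cache_path /tmp/nginx-cache levels=1:2
                     keys_zone=static-cache:10m
                     max_size=1g inactive=60m use_temp_path=off;
---
# Ingress: enable the cache for this host via a configuration snippet.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 200 301 302 10m;
      add_header X-Cache-Status $upstream_cache_status;
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc
                port:
                  number: 80
```

The "messy" concern is fair: the cache lives per controller replica and is lost on restart, which is one argument for a dedicated caching layer like Varnish instead.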

Private Docker registry w/ NFS storage and Jenkins

Posted: 02 Apr 2022 07:25 AM PDT

I would essentially like to run a private docker registry on an internal LAN that would provide read-only access to a couple of Jenkins nodes. Jenkins Pipeline seems to only allow images from a local registry or an HTTPS based registry via withRegistry() or 'registryUrl'. I'd like to avoid setting up HTTPS/TLS, if possible.

I'm pondering the idea of running a localhost private registry on each Jenkins node, with the registry storage on an NFS mount to a NAS, so that each localhost registry can pull images from the same storage location, working around the Jenkins Pipeline API limitation. My question is: what type of issues could this cause, and is there a better approach?
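A hedged sketch of what each node's registry config could look like (the distribution registry's config.yml format; the NFS mount point /mnt/nas/registry is made up, and the read-only maintenance flag is there so the per-node registries never write to the shared storage concurrently):

```yaml
# /etc/docker/registry/config.yml (sketch; paths assumed)
version: 0.1
storage:
  filesystem:
    rootdirectory: /mnt/nas/registry   # NFS mount shared by all nodes
  maintenance:
    readonly:
      enabled: true                    # serve pulls only on these nodes
http:
  addr: 127.0.0.1:5000                 # localhost-only, so no TLS needed
```

The main risks with this pattern are NFS locking and consistency if anything ever writes concurrently (pushes, garbage collection), so writes would have to be confined to a single designated node.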

How to fix ContainerCreating errors while deploying metallb?

Posted: 02 Apr 2022 09:18 AM PDT

For testing purposes, I have installed Ubuntu 21 on a VMware ESXi server. On that machine, I spun up Kubernetes in LXC containers following this repository. LXC is up and running.

adminuser@testing:~/Desktop$ lxc list
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
|   NAME   |  STATE  |       IPV4        |                     IPV6                      |   TYPE    | SNAPSHOTS |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kmaster  | RUNNING | 10.8.0.217 (eth0) | fd42:666f:471d:3d53:216:3eff:fe54:dce6 (eth0) | CONTAINER | 0         |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kworker1 | RUNNING | 10.8.0.91 (eth0)  | fd42:666f:471d:3d53:216:3eff:fee4:480e (eth0) | CONTAINER | 0         |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kworker2 | RUNNING | 10.8.0.124 (eth0) | fd42:666f:471d:3d53:216:3eff:fede:3c9d (eth0) | CONTAINER | 0         |
+----------+---------+---------------

Then I started deploying metallb on this cluster using the steps mentioned in this link, and applied this ConfigMap for routing (k8s-metallb-configmap.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.8.0.240-10.8.0.250

But the metallb pods are not running.

kubectl get pods -n metallb-system
NAME                          READY   STATUS                       RESTARTS   AGE
controller-6b78bff7d9-cxf2z   0/1     ContainerCreating            0          38m
speaker-fpvjt                 0/1     CreateContainerConfigError   0          38m
speaker-mbz7b                 0/1     CreateContainerConfigError   0          38m
speaker-zgz4d                 0/1     CreateContainerConfigError   0          38m

I checked the logs.

kubectl describe pod controller-6b78bff7d9-cxf2z -n metallb-system
Name:           controller-6b78bff7d9-cxf2z
Namespace:      metallb-system
Priority:       0
Node:           kworker1/10.8.0.91
Start Time:     Wed, 14 Jul 2021 20:52:10 +0530
Labels:         app=metallb
                component=controller
                pod-template-hash=6b78bff7d9
Annotations:    prometheus.io/port: 7472
                prometheus.io/scrape: true
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/controller-6b78bff7d9
Containers:
  controller:
    Container ID:
    Image:         quay.io/metallb/controller:v0.10.2
    Image ID:
    Port:          7472/TCP
    Host Port:     0/TCP
    Args:
      --port=7472
      --config=config
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      METALLB_ML_SECRET_NAME:  memberlist
      METALLB_DEPLOYMENT:      controller
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j76kg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-j76kg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                 From               Message
  ----     ------                  ----                ----               -------
  Normal   Scheduled               32m                 default-scheduler  Successfully assigned metallb-system/controller-6b78bff7d9-cxf2z to kworker1
  Warning  FailedCreatePodSandBox  32m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a8a6fa54086b9e65c42c8a0478dcac0769e8b278eeafe11eafb9ad5be40d48eb": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "264ee423734139b712395c0570c888cff0b7b526e5154da0b7ccbdafe5bd9ba3": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1a3cb9e20a2a015adc7b4924ed21e0b50604ee9f9fae52170c03298dff0d6a78": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "56dd906cdadc8ef50db3cc725d988090539a0818c2579738d575140cebbec71a": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8ddcfa704da9867c3a68030f0dc59f7c0d04bdc3a0b598c98a71aa8787585ca6": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "50431bbc89188799562c48847be90e243bbf49a2c5401eb2219a0c4745cfcfb6": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "da9ad1d418d3aded668c53f5e3f98ddfac14af638ed7e8142b904e12a99bfd77": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4dc6109c696ee410c58a0894ac70e5165a56bab99468ee42ffe88b2f5e33ef2f": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a8f1cad2ce9f8c278c07c924106a1b6b321a80124504737a574bceea983a0026": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  2m (x131 over 29m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f5e93b893275afe5309eddd9686c0ecfeb01e91141259164082cb99c1e2c1902": open /run/flannel/subnet.env: no such file or directory

And the speaker container.

kubectl describe pod speaker-zgz4d -n metallb-system
Name:         speaker-zgz4d
Namespace:    metallb-system
Priority:     0
Node:         kmaster/10.8.0.217
Start Time:   Wed, 14 Jul 2021 20:52:10 +0530
Labels:       app=metallb
              component=speaker
              controller-revision-hash=7668c5cdf6
              pod-template-generation=1
Annotations:  prometheus.io/port: 7472
              prometheus.io/scrape: true
Status:       Pending
IP:           10.8.0.217
IPs:
  IP:           10.8.0.217
Controlled By:  DaemonSet/speaker
Containers:
  speaker:
    Container ID:
    Image:         quay.io/metallb/speaker:v0.10.2
    Image ID:
    Ports:         7472/TCP, 7946/TCP, 7946/UDP
    Host Ports:    7472/TCP, 7946/TCP, 7946/UDP
    Args:
      --port=7472
      --config=config
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Environment:
      METALLB_NODE_NAME:       (v1:spec.nodeName)
      METALLB_HOST:            (v1:status.hostIP)
      METALLB_ML_BIND_ADDR:    (v1:status.podIP)
      METALLB_ML_LABELS:      app=metallb,component=speaker
      METALLB_ML_SECRET_KEY:  <set to the key 'secretkey' in secret 'memberlist'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l2gzm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-l2gzm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    41m                  default-scheduler  Successfully assigned metallb-system/speaker-zgz4d to kmaster
  Warning  FailedMount  41m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-l2gzm" : failed to sync configmap cache: timed out waiting for the condition
  Warning  Failed       39m (x12 over 41m)   kubelet            Error: secret "memberlist" not found
  Normal   Pulled       78s (x185 over 41m)  kubelet            Container image "quay.io/metallb/speaker:v0.10.2" already present on machine

Container state after setting the value from null to 0:

kube-apiserver-kmaster            1/1     Running             0          27m
kube-controller-manager-kmaster   1/1     Running             0          27m
kube-flannel-ds-7f5b7             0/1     CrashLoopBackOff    1          76s
kube-flannel-ds-bs9h5             0/1     Error               1          72s
kube-flannel-ds-t9rpf             0/1     Error               1          71s
kube-proxy-ht5fk                  0/1     CrashLoopBackOff    3          76s
kube-proxy-ldhhc                  0/1     CrashLoopBackOff    3          75s
kube-proxy-mwrkc                  0/1     CrashLoopBackOff    3          76s
kube-scheduler-kmaster            1/1     Running             0          2

How can I further troubleshoot a "502 Bad Gateway" error?

Posted: 02 Apr 2022 08:08 AM PDT

I am running a set of services in a Docker environment. All services are behind the same nginx reverse proxy container that encrypts with letsencrypt and splits the incoming traffic based on subdomains.

Today all of a sudden (while I was tinkering with another service) my Nextcloud container started returning 502 Bad Gateway when accessed through the reverse proxy.

All other services are doing fine.

When inspecting the error.log that nginx logs these errors to I can see lots of this error:

512 connect() failed (111: Connection refused) while connecting to upstream  

Leading me to believe something is wrong with the Nextcloud container instance.

So I checked the status of the container (I recently restarted the system, therefore Up 13 minutes):

docker ps -a | grep Nextcloud
6a4cd6dde4f6   nextcloud:21.0.1   "/entrypoint.sh apac…"   About an hour ago   Up 13 minutes   80/tcp   Nextcloud

Here all seems fine. So I checked the output of the container by running the docker-compose in the terminal (as opposed to running it as a daemon in the background), which gave me no new interesting output at all. My browser refreshes did not seem to reach the Nextcloud container at all.

After this I wanted to see if the Nextcloud container was responsive at all, so I forwarded the host's port 5555 to the Nextcloud container's port 80 and connected to the host IP directly on port 5555. This worked. I got the "Access through untrusted domain" page, which makes sense since I was accessing it straight through the host's IP.

Ok, so the reverse proxy container is experiencing connection refused, and the Nextcloud container is not receiving any requests at all, but seems to be working fine other than that.

I then created a temporary Ubuntu troubleshooting container and connected it to the same docker network as the reverse proxy container and the Nextcloud container. After installing some tools, I ran these commands:

root@491b7ef0f34f:/# ping Nextcloud
PING Nextcloud (10.10.7.3) 56(84) bytes of data.
64 bytes from Nextcloud.couplernets_nextcloud (10.10.7.3): icmp_seq=1 ttl=64 time=0.127 ms
64 bytes from Nextcloud.couplernets_nextcloud (10.10.7.3): icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from Nextcloud.couplernets_nextcloud (10.10.7.3): icmp_seq=3 ttl=64 time=0.083 ms
64 bytes from Nextcloud.couplernets_nextcloud (10.10.7.3): icmp_seq=4 ttl=64 time=0.085 ms
^C
--- Nextcloud ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3074ms
rtt min/avg/max/mdev = 0.080/0.093/0.127/0.019 ms

root@491b7ef0f34f:/# nmap -sV -p- Nextcloud
Starting Nmap 7.80 ( https://nmap.org ) at 2021-04-25 18:43 UTC
Nmap scan report for Nextcloud (10.10.7.3)
Host is up (0.0000090s latency).
rDNS record for 10.10.7.3: Nextcloud.couplernets_nextcloud
Not shown: 65534 closed ports
PORT   STATE SERVICE VERSION
80/tcp open  http    Apache httpd 2.4.38 ((Debian))
MAC Address: 02:42:0A:0A:07:03 (Unknown)
Service detection performed. Please report any incorrect results at https://nmap.org/submit/
Nmap done: 1 IP address (1 host up) scanned in 7.64 seconds

root@491b7ef0f34f:/# wget Nextcloud:80/index.html
--2021-04-25 18:45:28--  http://nextcloud/index.html
Resolving nextcloud (nextcloud)... 10.10.7.3
Connecting to nextcloud (nextcloud)|10.10.7.3|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 156 [text/html]
Saving to: 'index.html.1'

index.html.1        100%[===================>]     156  --.-KB/s    in 0s

2021-04-25 18:45:28 (7.66 MB/s) - 'index.html.1' saved [156/156]

This tells me that the Nextcloud instance should be fine and dandy since its port is open and I can access the index.html file with no problems.

I then went to check on my nginx reverse proxy configuration.

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    # Add HSTS preload header
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

    # Remove revealing headers
    server_tokens off;
    proxy_hide_header X-Powered-By;

    server_name <cloud.domain.topdomain>;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        proxy_max_temp_file_size 2048m;
        proxy_pass http://Nextcloud:80/;
    }
}

This configuration is just the same as all the other services that pass through the very same reverse proxy container. The only thing that differs is the server_name and the proxy_pass config parameters.
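One thing worth ruling out: nginx resolves the hostname in `proxy_pass` once, when the configuration is loaded. If the Nextcloud container was recreated with a new IP while the proxy kept running, the proxy can still be dialing the old address, which surfaces exactly as "connection refused". Reloading or restarting the proxy container is the quick test. As a sketch (not the author's config; it assumes Docker's embedded DNS server at 127.0.0.11), you can also force per-request resolution by putting the upstream in a variable:

```nginx
    location / {
        include /config/nginx/proxy.conf;
        proxy_max_temp_file_size 2048m;

        # Docker's embedded DNS; using a variable in proxy_pass makes
        # nginx re-resolve the name at request time instead of caching
        # the IP it saw at startup.
        resolver 127.0.0.11 valid=30s;
        set $upstream http://Nextcloud:80;
        proxy_pass $upstream;
    }
```

Note that with a variable, `proxy_pass` no longer rewrites the URI for you, so this variant behaves slightly differently from `proxy_pass http://Nextcloud:80/;` when a location prefix is involved.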

From here I have no idea what I should try next. Please help me. Any help is very much appreciated.

msmtp and OVH mail

Posted: 02 Apr 2022 07:29 AM PDT

I'd like to send a mail when a user authenticates on a Debian 9 server. I use OVH's mail server. I've set up msmtp like so:

account myaccount
tls_starttls off
logfile ~/.msmtp.log

host ssl0.ovh.net
port 465
from user@mydomain.com
auth on
user user@mydomain.com
password XXXXXXXXXXXXXXXX

account default : myaccount

I tried to send a mail with:

echo "Hello this is sending email using msmtp" | msmtp otheruser@mydomain.com  

But it didn't work; nothing happens. Same for the command:

msmtp --serverinfo --tls --tls-certcheck=off --host ssl0.ovh.net --port 465  

EDIT 1

I tried the command proposed by @Anfi in the comments and I get :

-bash: subject:: command not found
ignoring system configuration file /etc/msmtprc: No such file or directory
loaded user configuration file /home/myuser/.msmtprc
falling back to default account
using account default from /home/myuser/.msmtprc
host = ssl0.ovh.net
port = 465
proxy host = (not set)
proxy port = 0
timeout = off
protocol = smtp
domain = localhost
auth = choose
user = user@mydomain.com
password = *
passwordeval = (not set)
ntlmdomain = (not set)
tls = off
tls_starttls = off
tls_trust_file = (not set)
tls_crl_file = (not set)
tls_fingerprint = (not set)
tls_key_file = (not set)
tls_cert_file = (not set)
tls_certcheck = on
tls_min_dh_prime_bits = (not set)
tls_priorities = (not set)
auto_from = off
maildomain = (not set)
from = user@mydomain.com
add_missing_from_header = on
add_missing_date_header = on
remove_bcc_headers = on
dsn_notify = (not set)
dsn_return = (not set)
logfile = /home/myuser/.msmtp.log
syslog = (not set)
aliases = (not set)
reading recipients from the command line
msmtp: the server sent an empty reply
msmtp: could not send mail (account default from /home/myuser/.msmtprc)
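Note that the dump above shows `tls = off`: with only `tls_starttls off` in `~/.msmtprc`, msmtp makes a plaintext connection, but port 465 on ssl0.ovh.net expects implicit TLS (SMTPS), which would explain "the server sent an empty reply". A sketch of the account section with `tls on` added (same host and credentials as above; the trust-file path is the Debian default and may differ on other systems):

```
account myaccount
host ssl0.ovh.net
port 465
tls on
tls_starttls off
tls_trust_file /etc/ssl/certs/ca-certificates.crt
auth on
user user@mydomain.com
from user@mydomain.com
password XXXXXXXXXXXXXXXX
logfile ~/.msmtp.log

account default : myaccount
```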

OpenDKIM: Use MySQL socket in KeyTable/SigningTable

Posted: 02 Apr 2022 07:07 AM PDT

How can I connect to a MySQL socket (not TCP) with e.g. KeyTable/SigningTable in OpenDKIM?

The dataset which needs to be used is "dsn:" and the manual says:

If the string begins with "dsn:" and the OpenDKIM library was compiled to support that database type, then the remainder of the string is a Data Store Name describing the type, location parameters and access credentials for an ODBC or SQL database. The DSN is of the form: backend://[user[:pwd]@][port+]host/dbase[/key=value[?...]]

where backend is the name of a supported backend database mechanism (e.g. "mysql"), user and password are optional login credentials for the database, port and host describe the destination of a TCP connection to connect to that database, dbase is the name of the database to be accessed, and the key=value pairs must specify at least "table", "keycol" and "datacol" values specifying the name of the table, the name of the column to consider as the key, and the name(s) of the column(s) to be considered as the values (separated by commas). For example (all in one line):

mysql://dbuser:dbpass@3306+dbhost/odkim/table=macros?keycol=host?datacol=v1,v2

defines a MySQL database listening at port 3306 on host "dbhost"; the userid "dbuser" and password "dbpass" should be used to access the database; the database name is "odkim", and the data are in columns "host" (the keys) and "v1" and "v2" (the values) inside table "macros". This example would thus return two values when a match is found.

No value within the DSN may contain any of the six punctuation characters (":", "/", "@", "+", "?" and "=") used to separate portions of the DSN from each other.
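As a rough illustration of how that DSN grammar decomposes (this is only a sketch of the documented format, not OpenDKIM's actual parser; the names `DSN_RE` and `parse_dsn` are mine):

```python
import re

# Rough parser for the DSN form quoted above:
#   backend://[user[:pwd]@][port+]host/dbase[/key=value[?...]]
DSN_RE = re.compile(
    r"^(?P<backend>[^:]+)://"
    r"(?:(?P<user>[^:@/]+)(?::(?P<pwd>[^@/]*))?@)?"  # optional credentials
    r"(?:(?P<port>\d+)\+)?"                          # optional "port+" prefix
    r"(?P<host>[^/]+)"
    r"/(?P<dbase>[^/?]+)"
    r"(?:/(?P<params>.*))?$"                         # trailing key=value pairs
)

def parse_dsn(dsn: str) -> dict:
    m = DSN_RE.match(dsn)
    if not m:
        raise ValueError(f"not a valid DSN: {dsn!r}")
    d = m.groupdict()
    if d["params"]:
        # key=value pairs are separated by '?' per the manual
        d["params"] = dict(p.split("=", 1) for p in d["params"].split("?") if p)
    return d
```

Applied to the manual's example, this yields backend `mysql`, host `dbhost`, port `3306`, database `odkim`, and the `table`/`keycol`/`datacol` parameters. Notably, the grammar only has a slot for a TCP port+host pair, with no place to name a Unix socket path.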

It seems it's not possible to connect via a MySQL socket, only via TCP?

BitLocker with TPM: how to replace the numerical password recovery key protector with an alphanumeric password recovery key protector?

Posted: 02 Apr 2022 07:03 AM PDT

C:\Windows\system32>manage-bde -status
BitLocker Drive Encryption: Configuration Tool version 10.0.17763
Copyright (C) 2013 Microsoft Corporation. All rights reserved.

Disk volumes that can be protected with
BitLocker Drive Encryption:
Volume C: [OS]
[OS Volume]

    Size:                 77.62 GB
    BitLocker Version:    2.0
    Conversion Status:    Fully Encrypted
    Percentage Encrypted: 100.0%
    Encryption Method:    XTS-AES 128
    Protection Status:    Protection On
    Lock Status:          Unlocked
    Identification Field: Unknown
    Key Protectors:
        TPM
        Numerical Password

C:\Windows\system32>

Can I replace the numerical password key protector with an alphanumeric password protector since they're more secure (more possible permutations with all characters instead of just numbers 0-9)?

Why do I get different SpamAssassin results via spamd vs command line

Posted: 02 Apr 2022 06:13 AM PDT

My SpamAssassin daemon is not flagging as much spam as I would like (but it is flagging some), so I took a sample message which had not been flagged and ran it through SpamAssassin at the command line. The results are entirely different for the same message (see below).

  • What should I look for in the configuration which might cause this?

  • How can I temporarily enable debugging for spamd? (Ubuntu 16.04, not using amavis)

Results via spamd:

X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on myhost
X-Spam-Level:
X-Spam-Status: No, score=-1.1 required=2.0 tests=BAYES_00,RDNS_NONE,
    SPF_HELO_PASS,URIBL_BLOCKED autolearn=no autolearn_force=no version=3.4.0

Results via command line:

X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on myhost
X-Spam-Flag: YES
X-Spam-Level: ****
X-Spam-Status: Yes, score=4.5 required=2.0 tests=RCVD_IN_BL_SPAMCOP_NET,
    RDNS_NONE,SPF_HELO_PASS,SPF_PASS,URIBL_ABUSE_SURBL,URIBL_BLOCKED autolearn=no
    autolearn_force=no version=3.4.0
X-Spam-Report:
    *  0.0 URIBL_BLOCKED ADMINISTRATOR NOTICE: The query to URIBL was blocked.
    *      See http://wiki.apache.org/spamassassin/DnsBlocklists#dnsbl-block
    *      for more information.
    *      [URIs: sarasotasailingsquadron.org]
    *  1.9 URIBL_ABUSE_SURBL Contains an URL listed in the ABUSE SURBL
    *      blocklist
    *      [URIs: afxled.trade]
    *  1.2 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
    *      [Blocked - see <http://www.spamcop.net/bl.shtml?107.173.40.66>]
    * -0.0 SPF_PASS SPF: sender matches SPF record
    * -0.0 SPF_HELO_PASS SPF: HELO matches SPF record
    *  1.3 RDNS_NONE Delivered to internal network by a host with no rDNS

Using Let's Encrypt certs on LAN with DNS redirection?

Posted: 02 Apr 2022 06:02 AM PDT

I'm trying to use existing LE certs with a server on my LAN. I exposed port 443 to get the certs for mine.example.com and https access works fine from the WAN.

However, I assumed (perhaps foolishly) that I might be able to use the same certs internally by setting up DNS redirection (using dnsmasq on a separate box) on my LAN to point mine.example.com to the local IP.

Redirection works fine and points local machines to the internal IP when I go to mine.example.com but the certs now show 'Certificate Authority Invalid' errors.

Perhaps I misunderstand how the CA process works but I assumed that, since LE certs are DNS based, they should still work with local DNS redirection.

Does anyone know how to make this work?

Or can anyone explain why it doesn't work?
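For what it's worth, Let's Encrypt validates the domain name, not the IP it resolves to, so the same certificate should in principle be fine behind split-horizon DNS. A "Certificate Authority Invalid" error usually means the internal connection is landing on something that serves a different certificate (a default vhost, a router's admin page, a self-signed cert). One way to check what is actually being served internally (the `192.168.1.10` below is a placeholder for the server's LAN IP, not taken from the question):

```
# What certificate does the LAN IP serve for this name? (SNI matters)
openssl s_client -connect 192.168.1.10:443 -servername mine.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
```

Comparing this output with the same command pointed at the public IP should show whether the internal request is reaching the intended vhost at all.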


I know I can get different certs for local machines from LE but that would mean trying to configure the server to use different certs for internal and external access. Assuming I need to do this, is there an easy way to use different certs depending on source traffic?

I'll be serving web content through nginx and also a Webmin admin panel, so it may be relatively easy to do for nginx given the flexibility in the configs (although Google hasn't been too helpful here either), but I'm not sure about the other web services running on the machine.


P.S. sorry if this turns out to be a duplicate but couldn't find anything with a lot of searching here (or on the Googles).

OpenVPN Access Server Creating Groups & Users

Posted: 02 Apr 2022 07:03 AM PDT

I have created an OpenVPN Access Server on AWS via CloudFormation. All is working as expected except the bootstrapping.

In the user data I have entered some commands to make changes, e.g. enabling Google Authenticator etc.

In addition to this I would also like to create a group, then create users and assign them to that group. Once that is done, I want to add a rule to the group so it does split tunnelling, i.e. not all traffic goes through the VPN; I only want to redirect a couple of IPs for this group.

I am stuck at the moment on creating groups. I found the commands at https://evanhoffman.com/2014/07/22/openvpn-cli-cheat-sheet/ and entered them in the bootstrap, but I can't find anything for creating a group or creating an access control that allows access to network services on the internet.

So, does anyone have experience with the OpenVPN Access Server CLI? How do I create groups and assign users to them for split tunnelling?

OwnCloud and Azure Active Directory integration

Posted: 02 Apr 2022 08:08 AM PDT

Is it possible to integrate ownCloud (https://owncloud.org) with Azure Active Directory for auth?

Commenting multiple lines in SaltStack?

Posted: 02 Apr 2022 06:02 AM PDT

I'm using the following state to try to comment out two lines in a file:

/etc/cloud/cloud.cfg:
  file.comment:
    - regex: ^ - set_hostname
    - regex: ^ - update_hostname

Unfortunately, as expected it's only using the latter regex line, and ignoring the first.

How can I comment out more than one line in a file using file.comment?
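For reference, one way around the single-`regex` limitation is to give each pattern its own state ID and point both at the file with `- name:` (a state ID can only appear once, so the IDs below are arbitrary names, not anything from the question):

```yaml
comment_set_hostname:
  file.comment:
    - name: /etc/cloud/cloud.cfg
    - regex: ^ - set_hostname

comment_update_hostname:
  file.comment:
    - name: /etc/cloud/cloud.cfg
    - regex: ^ - update_hostname
```

Each state declaration honours only one `regex` argument, which is why the two-`regex` version silently keeps just the last one.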

What does "-/filepath" ACTION mean in rsyslog configuration

Posted: 02 Apr 2022 07:41 AM PDT

I came across this on one Debian Linux installation (6.0.6), and examining its /etc/rsyslog.conf I see configuration lines like this:

auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog

I can't find anything in rsyslog.conf(5) about prepending a dash to the file action, and I would like to know what it actually does.
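For reference, this is legacy sysklogd syntax that rsyslog still accepts: the leading dash tells the daemon not to sync (fsync) the file after every write, trading a little crash-safety for much better throughput on busy logs. In recent rsyslog versions syncing is disabled by default anyway (controlled by `$ActionFileEnableSync`), so the dash is effectively a no-op there. Annotated:

```
# classic sysklogd behaviour: sync the file to disk after each write
auth,authpriv.*                 /var/log/auth.log
# leading "-" = omit the per-message sync; preferred for high-volume logs
*.*;auth,authpriv.none          -/var/log/syslog
```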

Creating RAID1 on Windows Server causes not enough disk space error

Posted: 02 Apr 2022 07:20 AM PDT

I have three disks. Disk0 (boot), Disk1 and Disk2. Disk 1 and 2 are both unformatted and unallocated drives. I am trying to mirror Disk0 to Disk1. They are both Dynamic and are both the same size (1TB). When I select Disk1 to be the mirror I get the error "There is not enough space available on the disk(s) to complete this operation". I have spent several hours searching for a solution but have not found one. Why do I get this error when they are both the same size?

EDIT: Shrinking the volume size on the boot disk by 100MB allowed me to get past this error. From what I read the mirror drive needs to be the same size or larger than the boot drive, so I am confused why that change worked. However, I now get the error "all disks holding extents for a given volume must have the same sector size and the sector size must be valid". I believe this is because the drives are different: one has 512B sectors and the other is an Advanced Format drive with 4KB sectors. Would the different sector sizes cause both problems? If I got matching disks, would both issues go away?
