Wednesday, June 8, 2022

Recent Questions - Server Fault



Setting up Kubernetes on LXC: Kubeadm init times out, cannot connect to API server

Posted: 08 Jun 2022 09:32 AM PDT

Situation:

I am trying to create a Kubernetes cluster running on Linux containers (LXC), but kubeadm init fails by timing out after four minutes. I have done the same on Ubuntu VMs before with no issue, and that cluster is running happily.

The Kubelet is active (running) according to the system. I can successfully pull all Kubeadm images.

When I check journalctl -xeu kubelet, it says that it cannot connect to the API server, and consequently cannot register or reach the node:

Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta ... 'Post "https://10.15.10.100:6443/api/v1/namespaces/default/events": dial tcp 10.15.10.100:6443: connect: connection refused'(may retry after sleeping)
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://10.10.10.10:6443/api/v1/nodes?fieldSelector=metadata ... connect: connection refused
vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.10.10.10:6443/api/v1/no ... connect: connection refused
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://10.10.10.10:6443/apis/storage.k8s.io/v1/csidriv ... connect: connection refused
vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.10.10.10:6443 ... connect: connection refused

In order to figure out the issue, I installed crictl, but it also could not connect to any API server.
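
(For what it's worth, crictl talks to the container runtime rather than the kube API server, so it has to be pointed at the containerd socket. A minimal sketch, assuming containerd's default socket path; the container ID is a placeholder:)

cat <<'EOF' > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl ps -a                  # list all containers, including crashed control-plane ones
crictl logs <container-id>    # inspect the kube-apiserver container logs if it keeps exiting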

I attempted to re-install everything on a fresh LXC and ran into the same issue.

So I attempted to join the node as a worker to an old Kubernetes cluster running on normal Ubuntu VMs, to see if I could use kubectl to get information on what might be failing. When I checked the Calico CNI pod, it gave me this repeated error:

Warning  FailedCreatePodSandBox  2m19s               kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15003/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15003/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/16/fs]}: invalid argument: unknown  

Environment:

  • LXCs are from TurnKeyLinux, according to the person who set up the server.
  • The LXC OS is Debian Buster
  • LXC config is the following (with placeholder ips):
arch: amd64
cores: 2
hostname: kubecontrol01
memory: 8192
net0: name=eth0,bridge=vmbr1,firewall=1,gw=10.10.10.1,hwaddr=5E:83:5C:16:4B:68,ip=10.10.10.10/24,type=veth
ostype: debian
rootfs: local-zfs:subvol-100-disk-0,size=50G
searchdomain: 1.2.3.4
swap: 0
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.cgroup.devices.allow: a
lxc.mount.auto: proc:rw sys:rw
  • This system uses ZFS; however, the person who set up the server says it should be properly abstracted, so ZFS and OverlayFS should not conflict (see the diagnostic sketch after this list).
  • Kubeadm, Kubelet, and Kubectl are installed via the instructions on Kubernetes.io
  • Containerd.io is installed from download.docker.com using apt-get. However, Docker is NOT installed, as it is no longer natively supported in Kubernetes.
  • Swap is off on all LXCs, as well as their host machine.
  • The firewall was temporarily disabled; it made no difference whether it was on or off.
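
A minimal diagnostic sketch for the ZFS/OverlayFS point above (paths are placeholders): try an overlay mount by hand inside the LXC. If it fails the same way, the overlayfs-on-ZFS combination is the likely source of containerd's "failed to mount rootfs ... invalid argument" error.

mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
mount -t overlay overlay \
  -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
  /tmp/ovl/merged
# If this also returns "invalid argument", containerd's overlayfs snapshotter cannot work
# on the ZFS-backed rootfs; a zfs snapshotter or an ext4-backed volume would be alternatives.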

Summary of Question:

Can I get Kubernetes containers to run on this setup? Is it an incompatibility, or am I missing some program or configuration? What would I need to do to fix it?

If I missed any details, please tell me with a comment.

Is it possible for me to identify what services are using an AWS Lambda function?

Posted: 08 Jun 2022 09:12 AM PDT

I'm trying to clean up some Lambda functions in our account. When I click on the Monitor tab for each function, everything comes up with "No data available". I know at least one of those functions is associated with a CloudFront distribution, and I was able to pull up that function's logs via CloudFront. But I don't know where to look for any information or logs for the other functions.

Basically, I want to verify whether or not the other functions are being used/invoked at all so that I know I'm not disrupting anything if I delete them.
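
A hedged sketch of one way to check this from the CLI: Lambda publishes an Invocations metric per function to CloudWatch, so a zero Sum over a long window is a decent signal (the function name, dates, and region are placeholders; for Lambda@Edge the metrics land in the regions where the function actually ran):

aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda \
  --metric-name Invocations \
  --dimensions Name=FunctionName,Value=my-function \
  --start-time 2022-05-08T00:00:00Z --end-time 2022-06-08T00:00:00Z \
  --period 86400 --statistics Sum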

Why does trying to bind mount a file under /proc/<pid>/fd fail in docker?

Posted: 08 Jun 2022 08:59 AM PDT

An example of the titular failure:

user@host ~> touch a
user@host ~> tail -f ./a &
user@host ~> ps aux | grep tail
user-+ 1457  0.0  0.0   8120   580 pts/1    S    16:04   0:00 tail -f ./a
user-+ 1459  0.0  0.0   9040  2568 pts/1    S+   16:04   0:00 grep --color=auto tail
user@host ~> ls -l /proc/1457/fd
total 0
lrwx------ 1 user user 64 Jun  8 16:04 0 -> /dev/pts/1
lrwx------ 1 user user 64 Jun  8 16:04 1 -> /dev/pts/1
lrwx------ 1 user user 64 Jun  8 16:04 2 -> /dev/pts/1
lr-x------ 1 user user 64 Jun  8 16:04 3 -> /home/user/a
lr-x------ 1 user user 64 Jun  8 16:04 4 -> anon_inode:inotify
user@host ~> docker run -it --rm -v /proc/1457/fd/3:/a ubuntu
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:75: mounting "/proc/1457/fd/3" to rootfs at "/a" caused: mount through procfd: invalid argument: unknown.

This doesn't appear to be a fundamental limitation of bind mounts:

user@host ~> touch b
user@host ~> sudo mount --bind /proc/1457/fd/3 b
user@host ~> cat b
user@host ~> echo "test" > a
test
user@host ~> cat b
test

I've looked at the source for opencontainers/runc and there's one slight complication above and beyond the proof-of-concept above: to avoid an attacker switching out the mount destination path for a symlink, runc opens the destination path and then uses the corresponding path under /proc/self/fd to refer to it.

I wrote a little C program to simulate those conditions to make sure the concept was still sound:

// a.c

#include <sys/mount.h>
#include <stdio.h>
#include <fcntl.h>

int main(int argc, char const *argv[])
{
  if (argc < 4) {
    fprintf(stderr, "Not enough args");
    return 1;
  }

  char src_path[1024];
  {
    int r = snprintf(src_path, sizeof(src_path), "/proc/%s/fd/%s", argv[1], argv[2]);
    if (r < 0 || r >= sizeof(src_path)) {
      fprintf(stderr, "src sprintf failed\n");
      return 1;
    }
  }

  int fd = open(argv[3], O_RDWR);
  if (fd < 0) {  /* open() returns -1 on error, not 0 */
    perror("Open file failed: ");
    return 1;
  }

  char dst_path[1024];
  {
    int r = snprintf(dst_path, sizeof(dst_path), "/proc/self/fd/%d", fd);
    if (r < 0 || r >= sizeof(dst_path)) {
      fprintf(stderr, "dst sprintf failed\n");
      return 1;
    }
  }

  int r = mount(src_path, dst_path, "none", MS_BIND, 0);
  if (r != 0) {
    perror("Mount failed: ");
    return 1;
  }

  return 0;
}

Which runs successfully to completion and has the desired effect:

user@host ~> gcc ./a.c
user@host ~> sudo ./a.out 1457 3 c
user@host ~ [1]> echo "another test" >> c
another test
user@host ~> cat a
test
another test

Grasping at straws, I straced docker-containerd to make sure that runc really is just calling mount with an analogous set of arguments. Here's an excerpt of the log:

1491 newfstatat(AT_FDCWD, "/proc/1457/fd/3", {st_mode=S_IFSOCK|0777, st_size=0, ...}, 0) = 0
1491 newfstatat(AT_FDCWD, "/var/lib/docker/overlay2/31b012c4b8edbb2e8c1e0115e4e7c6a4b88a8045f39975367f01f61ccf9d1a5b/merged/volume", 0xc0000e81d8, AT_SYMLINK_NOFOLLOW) = -1 ENOENT (No such file or directory)
1491 newfstatat(AT_FDCWD, "/var/lib/docker/overlay2/31b012c4b8edbb2e8c1e0115e4e7c6a4b88a8045f39975367f01f61ccf9d1a5b/merged/volume", 0xc0000e82a8, 0) = -1 ENOENT (No such file or directory)
1491 newfstatat(AT_FDCWD, "/var/lib/docker/overlay2/31b012c4b8edbb2e8c1e0115e4e7c6a4b88a8045f39975367f01f61ccf9d1a5b/merged",  <unfinished ...>
1491 <... newfstatat resumed>{st_mode=S_IFDIR|0755, st_size=4096, ...}, 0) = 0
1491 openat(AT_FDCWD, "/var/lib/docker/overlay2/31b012c4b8edbb2e8c1e0115e4e7c6a4b88a8045f39975367f01f61ccf9d1a5b/merged/volume", O_RDONLY|O_CREAT|O_CLOEXEC, 0755) = 7
1491 epoll_ctl(8, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=2955630296, u64=140083313989336}} <unfinished ...>
1491 <... epoll_ctl resumed>)        = -1 EPERM (Operation not permitted)
1491 close(7)                        = 0
1491 newfstatat(AT_FDCWD, "/var/lib/docker/overlay2/31b012c4b8edbb2e8c1e0115e4e7c6a4b88a8045f39975367f01f61ccf9d1a5b/merged/volume", {st_mode=S_IFREG|0755, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
1491 openat(AT_FDCWD, "/var/lib/docker/overlay2/31b012c4b8edbb2e8c1e0115e4e7c6a4b88a8045f39975367f01f61ccf9d1a5b/merged/volume", O_RDONLY|O_CLOEXEC|O_PATH) = 7
1491 epoll_ctl(8, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=2955630296, u64=140083313989336}} <unfinished ...>
1491 <... epoll_ctl resumed>)        = -1 EBADF (Bad file descriptor)
1491 readlinkat(AT_FDCWD, "/proc/self/fd/7", "/var/lib/docker/overlay2/31b012c"..., 128) = 103
1491 mount("/proc/1457/fd/3", "/proc/self/fd/7", 0xc0001bf1d7, MS_BIND, NULL) = -1 EINVAL (Invalid argument)
1491 close(7)                        = 0

You can see /proc/1457/fd/3 is stat'ed and it exists, but when an attempt is made to mount it, it fails with EINVAL. Having read the docs, I can't see an obvious cause for this.

OS details:

user@host ~> uname -a
Linux host 5.13.0-44-generic #49~20.04.1-Ubuntu SMP Wed May 18 18:44:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Docker details:

user@host ~> docker info  Client:   Context:    default   Debug Mode: false   Plugins:    app: Docker App (Docker Inc., v0.9.1-beta3)    buildx: Docker Buildx (Docker Inc., v0.8.1-docker)    scan: Docker Scan (Docker Inc., v0.17.0)    Server:   Containers: 62    Running: 17    Paused: 0    Stopped: 45   Images: 1198   Server Version: 20.10.14   Storage Driver: overlay2    Backing Filesystem: extfs    Supports d_type: true    Native Overlay Diff: true    userxattr: false   Logging Driver: json-file   Cgroup Driver: cgroupfs   Cgroup Version: 1   Plugins:    Volume: local    Network: bridge host ipvlan macvlan null overlay    Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog   Swarm: inactive   Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc   Default Runtime: runc   Init Binary: docker-init   containerd version: 3df54a852345ae127d1fa3092b95168e4a88e2f8   runc version: v1.0.3-0-gf46b6ba   init version: de40ad0   Security Options:    apparmor    seccomp     Profile: default   Kernel Version: 5.13.0-44-generic   Operating System: Ubuntu 20.04.4 LTS   OSType: linux   Architecture: x86_64   CPUs: 12   Total Memory: 15.55GiB   Name: host   ID: BWNT:CW6A:OMFO:TI67:5TMC:CUCT:CPXG:GSXM:NKNZ:VMBK:MBEM:RRZV   Docker Root Dir: /var/lib/docker   Debug Mode: false   Registry: https://index.docker.io/v1/   Labels:   Experimental: false   Insecure Registries:    127.0.0.0/8   Live Restore Enabled: false  

I've run out of ideas about what could be wrong or what to try next. Thanks in advance for any help.

Creating a custom URL for reverse local proxy with NGINX

Posted: 08 Jun 2022 08:42 AM PDT

I'm trying to set a custom URL for a reverse proxy. From what I understand, the configuration should be fairly straightforward. Here's what I've got:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name www.example.com;

        location / {
            proxy_pass http://localhost:3000/;
        }
    }
}

Afterwards, in the terminal, I made sure to run:

sudo nginx -s reload  

When I go to www.example.com I get "This site can't be reached", but when I type localhost:8080 into the URL bar, it successfully loads the content from localhost:3000.
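
A hedged sketch of what usually causes this: the browser resolves www.example.com through DNS (which may not point at this machine at all) and, with no port in the URL, connects to port 80, while this server block only listens on 8080. Assuming you only need the name to resolve locally for testing:

echo "127.0.0.1 www.example.com" | sudo tee -a /etc/hosts    # local test only

server {
    listen 80;                      # or keep 8080 and browse to www.example.com:8080
    server_name www.example.com;
    location / {
        proxy_pass http://localhost:3000/;
    }
}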

What exactly am I doing wrong here?

Kubernetes Cluster on CentOS 7 with kubeadm 1.24 - calico => coredns stuck in ContainerCreating

Posted: 08 Jun 2022 08:07 AM PDT

In order to install a master Kubernetes node on CentOS 7 with containerd and Calico:

I followed these steps: https://computingforgeeks.com/install-kubernetes-cluster-on-centos-with-kubeadm/

After the kubeadm init --pod-network-cidr=192.168.0.0/16 --upload-certs

I installed Calico next (the proxy didn't let me apply it directly from the URL, so I first downloaded the files and then ran a create on them).

The install then completed, but coredns and calico-kube-controllers are stuck in ContainerCreating.

This installation uses a company DNS and proxy. I have been stuck on this for days and can't find out why coredns is stuck in ContainerCreating.
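
Given that the failures are kubelet/CNI calls to the in-cluster service IP (https://10.96.0.1:443) returning "Service Unavailable", one hedged thing to check is whether containerd and the kubelet are sending cluster-internal traffic through the corporate proxy. A sketch, with the proxy URL as a placeholder and the default service/pod CIDRs of this install:

mkdir -p /etc/systemd/system/containerd.service.d
cat <<'EOF' > /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,.svc,.cluster.local"
EOF
systemctl daemon-reload && systemctl restart containerd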

[root@master-node system]# kubectl get pod -A
NAMESPACE         NAME                                       READY   STATUS              RESTARTS        AGE
calico-system     calico-kube-controllers-68884f975d-6qm5l   0/1     Terminating         0               16d
calico-system     calico-kube-controllers-68884f975d-ckr2g   0/1     ContainerCreating   0               154m
calico-system     calico-node-5n4nj                          0/1     Running             7 (165m ago)    16d
calico-system     calico-node-gp6d5                          0/1     Running             1 (15d ago)     16d
calico-system     calico-typha-77b6fb6f86-zc8jn              1/1     Running             7 (165m ago)    16d
kube-system       coredns-6d4b75cb6d-2tqk9                   0/1     ContainerCreating   0               4h46m
kube-system       coredns-6d4b75cb6d-9dn5d                   0/1     ContainerCreating   0               6h58m
kube-system       coredns-6d4b75cb6d-vfchn                   0/1     Terminating         32              15d
kube-system       etcd-master-node                           1/1     Running             14 (165m ago)   16d
kube-system       kube-apiserver-master-node                 1/1     Running             8 (165m ago)    16d
kube-system       kube-controller-manager-master-node        1/1     Running             7 (165m ago)    16d
kube-system       kube-proxy-c6l9s                           1/1     Running             7 (165m ago)    16d
kube-system       kube-proxy-pqrf8                           1/1     Running             1 (15d ago)     16d
kube-system       kube-scheduler-master-node                 1/1     Running             8 (165m ago)    16d
tigera-operator   tigera-operator-5fb55776df-955dj           1/1     Running             13 (164m ago)   16d

kubectl describe pod coredns

[root@master-node system]# kubectl describe pod coredns-6d4b75cb6d-2tqk9  -n kube-system  Name:                 coredns-6d4b75cb6d-2tqk9  Namespace:            kube-system  Priority:             2000000000  Priority Class Name:  system-cluster-critical  Node:                 master-node/10.32.67.20  Start Time:           Wed, 08 Jun 2022 11:59:59 +0200  Labels:               k8s-app=kube-dns                        pod-template-hash=6d4b75cb6d  Annotations:          <none>  Status:               Pending  IP:  IPs:                  <none>  Controlled By:        ReplicaSet/coredns-6d4b75cb6d  Containers:    coredns:      Container ID:      Image:         k8s.gcr.io/coredns/coredns:v1.8.6      Image ID:      Ports:         53/UDP, 53/TCP, 9153/TCP      Host Ports:    0/UDP, 0/TCP, 0/TCP      Args:        -conf        /etc/coredns/Corefile      State:          Waiting        Reason:       ContainerCreating      Ready:          False      Restart Count:  0      Limits:        memory:  170Mi      Requests:        cpu:        100m        memory:     70Mi      Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5      Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3      Environment:  <none>      Mounts:        /etc/coredns from config-volume (ro)        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ch9xq (ro)  Conditions:    Type              Status    Initialized       True    Ready             False    ContainersReady   False    PodScheduled      True  Volumes:    config-volume:      Type:      ConfigMap (a volume populated by a ConfigMap)      Name:      coredns      Optional:  false    kube-api-access-ch9xq:      Type:                    Projected (a volume that contains injected data from multiple sources)      TokenExpirationSeconds:  3607      ConfigMapName:           kube-root-ca.crt      ConfigMapOptional:       <nil>      DownwardAPI:             true  QoS Class:                   Burstable  Node-Selectors:              kubernetes.io/os=linux  Tolerations:                 CriticalAddonsOnly op=Exists                               node-role.kubernetes.io/control-plane:NoSchedule                               node-role.kubernetes.io/master:NoSchedule                               node.kubernetes.io/not-ready:NoExecute op=Exists for 300s                               node.kubernetes.io/unreachable:NoExecute op=Exists for 300s  Events:    Type     Reason                  Age                   From     Message    ----     ------                  ----                  ----     -------    Warning  FailedCreatePodSandBox  114s (x65 over 143m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "de60ae0a286ad648a9691065e68fe03589b18a26adfafff0c089d5774b46c163": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": Service Unavailable  

kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp'

[root@master-node system]# kubectl get events --all-namespaces  --sort-by='.metadata.creationTimestamp'  NAMESPACE       LAST SEEN   TYPE      REASON                   OBJECT                                         MESSAGE  calico-system   5m52s       Warning   Unhealthy                pod/calico-node-gp6d5                          (combined from similar events): Readiness probe failed: 2022-06-08 14:50:45.231 [INFO][30872] confd/health.go 180: Number of node(s) with BGP peering established = 0...  calico-system   4m16s       Warning   FailedKillPod            pod/calico-kube-controllers-68884f975d-6qm5l   error killing pod: failed to "KillPodSandbox" for "c842d857-88f1-4dfa-b3e8-aad68f626c8c" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"5002f084e667a7e70654136b237ae2924c268337c1faf882972982e888784bb9\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": Service Unavailable"  kube-system     87s         Warning   FailedCreatePodSandBox   pod/coredns-6d4b75cb6d-9dn5d                   (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "acd785aa916d2c97aa16ceeaa2f04e7967a1224cb437e50770f32a02b5a9ed3f": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": Service Unavailable  calico-system   13m         Warning   FailedKillPod            pod/calico-kube-controllers-68884f975d-6qm5l   error killing pod: failed to "KillPodSandbox" for "c842d857-88f1-4dfa-b3e8-aad68f626c8c" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"5002f084e667a7e70654136b237ae2924c268337c1faf882972982e888784bb9\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": context deadline exceeded"  kube-system     4m6s        Warning   FailedKillPod            pod/coredns-6d4b75cb6d-vfchn                   error killing pod: failed to "KillPodSandbox" for "23c399a5-daa6-4f01-b7ee-7822b828d966" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"7621e8d64c84554d75030375b0355a67c60b62c8d240741aa78189ffabedc913\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": Service Unavailable"  calico-system   6s          Warning   Unhealthy                pod/calico-node-5n4nj                          (combined from similar events): Readiness probe failed: 2022-06-08 14:56:31.871 [INFO][17966] confd/health.go 180: Number of node(s) with BGP peering established = 0...  
calico-system   45m         Warning   DNSConfigForming         pod/calico-kube-controllers-68884f975d-ckr2g   Search Line limits were exceeded, some search paths have been omitted, the applied search line is: calico-system.svc.cluster.local svc.cluster.local cluster.local XXXXXX.com cs.XXXXX.com fr.XXXXXX.com  kube-system     2m49s       Warning   FailedCreatePodSandBox   pod/coredns-6d4b75cb6d-2tqk9                   (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "529139e14dbb8c5917c72428600c5a8333aa21bf249face90048d1b344da5d9a": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": context deadline exceeded  calico-system   3m42s       Warning   FailedCreatePodSandBox   pod/calico-kube-controllers-68884f975d-ckr2g   (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "45dd6ebfb53fd745b1ca41853bb7744e407b3439111a946b007752eb8f8f7abd": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": Service Unavailable  kube-system     9m6s        Warning   FailedKillPod            pod/coredns-6d4b75cb6d-vfchn                   error killing pod: failed to "KillPodSandbox" for "23c399a5-daa6-4f01-b7ee-7822b828d966" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"7621e8d64c84554d75030375b0355a67c60b62c8d240741aa78189ffabedc913\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": context deadline exceeded"  

Search Active Directory by Canonical Name

Posted: 08 Jun 2022 08:14 AM PDT

The Manager field for contacts stored in Active Directory holds the CanonicalName of the manager's AD object, not the username or DistinguishedName. I need to be able to search AD for the manager of a mail contact using PowerShell, but Get-ADUser doesn't allow filtering by CanonicalName because it's a constructed attribute, not an actual attribute of the object.

How can I search AD by canonical name using PowerShell?

This command works, but takes too long for scripting purposes because the filter is on the wrong side of the pipe:

Get-ADUser -Filter * -ResultSetSize 10000 -Properties CanonicalName | ?{$_.CanonicalName -eq $MailContact.Manager}
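
A hedged alternative sketch: since the canonical name encodes the DN path in reverse, it can be translated into a SearchBase plus leaf name so the domain controller does the filtering. This assumes the manager sits under at least one OU (objects in containers such as CN=Users would need CN= instead of OU=), and the example value of $MailContact.Manager is hypothetical:

$parts      = $MailContact.Manager -split '/'          # e.g. "contoso.com/Sales/EMEA/Jane Doe"
$domainDN   = ($parts[0] -split '\.' | ForEach-Object { "DC=$_" }) -join ','
$ous        = @($parts[1..($parts.Count - 2)])
[array]::Reverse($ous)
$searchBase = (($ous | ForEach-Object { "OU=$_" }) -join ',') + ',' + $domainDN
$leafName   = $parts[-1]
Get-ADUser -SearchBase $searchBase -Filter "Name -eq '$leafName'"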

Packer build with Ubuntu 20.4 and autoinstall with vSphere 7.0

Posted: 08 Jun 2022 07:46 AM PDT

I am trying to build a Packer image with Packer 1.8.1 and Ubuntu 20.04.

My HCL manifest is correct, but I always get the prompt "Continue with Autoinstall? Add autoinstall in kernel command line to avoid this".

I tried many configurations without any effect. Does anyone have a tip?
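
If it helps to narrow it down, that prompt only appears when the autoinstall token never reaches the kernel command line (for example, if the <f6><esc> sequence fires before the boot menu is ready). A quick hedged check from a shell inside the installer, assuming you can reach one on another virtual console:

cat /proc/cmdline     # should contain "autoinstall"; if not, the boot_command text landed in the wrong place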

Here are my HCL manifest and my user-data.

source "vsphere-iso" "example" {    CPUs                 = 4    RAM                  = 4096    RAM_reserve_all      = true    boot_command         = [                            "<enter><wait><wait><enter><wait><f6><esc><wait> ",                            "autoinstall net.ifnames=0 biosdevname=0 ip=dhcp ipv6.disable=1 ds=nocloud;",                            "<wait><enter>"                           ]    disk_controller_type = ["pvscsi"]    guest_os_type        = "ubuntu64Guest"    host                 = "esx6.maquette.lan"    insecure_connection  = true    iso_paths            = ["[DT1_ESX6] ISO/ubuntu-20.04.4-live-server-amd64.iso"]    cd_files = ["./meta-data", "./user-data"]    cd_label = "cidata"    network_adapters {      network_card = "vmxnet3"      network = "PG_WINDOWS-SRV_VL8"    }    ssh_password = "ubuntu"    ssh_username = "ubuntu"    ssh_timeout = "180m"    disable_shutdown = true    shutdown_timeout = "180m"    storage {      disk_size             = 55000      disk_thin_provisioned = true    }    username            = "<username>"    vcenter_server      = "vcenter.lan"    password            = "<password>"  

And the user-data used for the Ubuntu autoinstall:

#cloud-config  autoinstall:    version: 1    early-commands:      - sudo systemctl stop ssh      - echo "yes"    locale: en_US.UTF-8    keyboard:      layout: fr      variant: us    packages: [open-vm-tools, openssh-server, chrony, curl, vim, ifupdown, unzip,  gnupg2, software-properties-common, apt-transport-https, ca-certificates, lsb-release]    identity:      hostname: epachard      password: '$6$wdAcoXrU039hKYPd$508Qvbe7ObUnxoj15DRCkzC3qO7edjH0VV7BPNRDYK4QR8ofJaEEF2heacn0QgD.f8pO8SNp83XNdWG6tocBM1'      username: ubuntu    ssh:      install-server: yes    network:      network:        version: 2        ethernets:          ens192: {dhcp4: true, dhcp-identifier: mac}    apt:      preserve_sources_list: false      primary:          - arches: [amd64]            uri: "http://archive.ubuntu.com/ubuntu"          - arches: [default]            uri: "http://ports.ubuntu.com/ubuntu-ports"      geoip: true    storage:    config:    - grub_device: true      id: disk-sda      name: ''      path: /dev/sda      preserve: false      ptable: gpt      type: disk      wipe: superblock    - device: disk-sda      flag: bios_grub      grub_device: false      id: partition-0      number: 1      preserve: false      size: 1048576      type: partition    - device: disk-sda      flag: ''      grub_device: false      id: partition-1      number: 2      preserve: false      size: 1073741824      type: partition      wipe: superblock    - fstype: xfs      id: format-2      preserve: false      type: format      volume: partition-1    - device: disk-sda      flag: ''      grub_device: false      id: partition-3      number: 3      preserve: false      size: 56908316672      type: partition      wipe: superblock    - devices:      - partition-3      id: lvm_volgroup-1      name: vg0      preserve: false      type: lvm_volgroup    - id: lvm_partition-1      name: lv_root      preserve: false      size: 10737418240B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-3      preserve: false      type: format      volume: lvm_partition-1    - device: format-3      id: mount-3      path: /      type: mount    - id: lvm_partition-2      name: lv_home      preserve: false      size: 1073741824B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-4      preserve: false      type: format      volume: lvm_partition-2    - device: format-4      id: mount-4      path: /home      type: mount    - id: lvm_partition-3      name: lv_tmp      preserve: false      size: 4294967296B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-5      preserve: false      type: format      volume: lvm_partition-3    - device: format-5      id: mount-5      path: /tmp      type: mount    - id: lvm_partition-4      name: lv_var      preserve: false      size: 3221225472B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-6      preserve: false      type: format      volume: lvm_partition-4    - device: format-6      id: mount-6      path: /var      type: mount    - id: lvm_partition-5      name: lv_var_log      preserve: false      size: 4294967296B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-7      preserve: false      type: format      volume: lvm_partition-5    - device: format-7      id: mount-7      path: /var/log      type: mount    - id: 
lvm_partition-6      name: lv_var_log_audit      preserve: false      size: 2147483648B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-8      preserve: false      type: format      volume: lvm_partition-6    - device: format-8      id: mount-8      path: /var/log/audit      type: mount    - id: lvm_partition-7      name: lv_var_tmp      preserve: false      size: 1073741824B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-9      preserve: false      type: format      volume: lvm_partition-7    - device: format-9      id: mount-9      path: /var/tmp      type: mount    - id: lvm_partition-8      name: lv_opt      preserve: false      size: 2147483648B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: xfs      id: format-10      preserve: false      type: format      volume: lvm_partition-8    - device: format-10      id: mount-10      path: /opt      type: mount    - id: lvm_partition-9      name: lv_swap      preserve: false      size: 2147483648B      type: lvm_partition      volgroup: lvm_volgroup-1      wipe: superblock    - fstype: swap      id: format-11      preserve: false      type: format      volume: lvm_partition-9    - device: format-11      id: mount-11      path: ''      type: mount    - device: format-2      id: mount-2      path: /boot      type: mount    late-commands:      - sed -i 's/^#*\(send dhcp-client-identifier\).*$/\1 = hardware;/' /target/etc/dhcp/dhclient.conf  

Thanks in advance for any help.

Identify Dell FX2 blade location via Redfish?

Posted: 08 Jun 2022 07:12 AM PDT

I would like to identify the location of a particular blade within a Dell FX2 chassis using Redfish. I've looked in:

  • /redfish/v1/Systems/System.Embedded.1
  • /redfish/v1/Chassis/System.Embedded.1
  • /redfish/v1/Chassis/Chassis.Embedded.1

But I'm not seeing anything there. Is there a way to retrieve this information from the FX2 chassis?
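
A hedged sketch of where I'd look next: walk the Chassis collection and dump anything location-related, since the slot details sometimes sit under Location or Oem properties rather than at the top level (iDRAC address and credentials are placeholders):

curl -sk -u root:password https://<idrac-ip>/redfish/v1/Chassis | jq '.Members'
curl -sk -u root:password https://<idrac-ip>/redfish/v1/Chassis/System.Embedded.1 | jq '{Location, Oem}'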

How to inherit user permissions to subfolders and files for "mounted folders"

Posted: 08 Jun 2022 08:05 AM PDT

We have created a mounted folder and granted it permissions via FileSystemAccessRule:

FileSystemAccessRule fileSystemAccessRule = new FileSystemAccessRule(@"IIS AppPool" + appPoolName.Trim(), FileSystemRights.FullControl, InheritanceFlags.ObjectInherit | InheritanceFlags.ContainerInherit, PropagationFlags.None, AccessControlType.Allow);

Now, this works if the folder is a normal directory. However, in the case of a mounted folder it fails: it doesn't throw any error, it simply doesn't work, and it does not pass its permissions down to its subfolders and files. E.g., if I execute the above command for mounted folder "A", which contains folder "B", then the permission is set on "A" but not on "B".

We have tried granting the permission via the ICACLS command too:

startInfo.Arguments = "/c ICACLS \"" + mountedDrive + "\" /INHERITANCE:e /GRANT \"IIS AppPool\\TEST_AppPool\":(OI)(CI)(F) /T /C";
process.StartInfo = startInfo;
process.Start();

In this case we do successfully add permissions to the folders already present in mountedDrive; however, new files or folders created afterwards do not get the same privileges. The already-existing folders say the permission is inherited from "None".

Is there something we are missing? Are mounted folders different from normal folders in terms of permissions? How can we make the permissions on a mounted folder be inherited by its subfolders?

Thanks in advance

MariaDB 10.8.x Strict Mode always ON?

Posted: 08 Jun 2022 08:16 AM PDT

I'm installing MariaDB on a new Debian 11 server and transferring my projects over, but I have a problem: strict mode is always enabled on MariaDB.

I checked many sites and tried adding sql_mode=NO_ENGINE_SUBSTITUTION to the [mysqld] section of /etc/my.cnf, and innodb_strict_mode=OFF to the [mariadb] section.

The command mysqld --verbose --help says everything is OK: sql_mode is right and strict mode is off.

But when I log in to mysql and run SELECT @@sql_mode, @@GLOBAL.sql_mode; I get the result STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_... - so the setting has no effect.

If I SET the variables inside MySQL, the change only applies to the session. I tried adding that command to the beginning of my script and got an error (the user needs the SUPER privilege)...
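
A hedged sketch of something worth ruling out: on Debian the packaged server normally reads its configuration from /etc/mysql/ (and the conf.d snippets there) rather than /etc/my.cnf, so the override may simply never be read. Confirm which files are consulted, then drop the setting into a snippet:

mysqld --verbose --help 2>/dev/null | grep -A1 'Default options are read'

cat <<'EOF' > /etc/mysql/mariadb.conf.d/60-sql-mode.cnf
[mysqld]
sql_mode = NO_ENGINE_SUBSTITUTION
EOF
systemctl restart mariadb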

How can I disable strict mode?

CORS error - Website is not reachable (ERR_FAILED), then redirects and works by itself

Posted: 08 Jun 2022 08:25 AM PDT

Error: I get "This site can't be reached ...", then the page automatically redirects and works. Please see the screengrab GIF below.

[screengrab GIF: the error page appears, then the site redirects and loads]

This error happens in random unpredictable intervals. On average it occurs once in ten times.

It happens even in a plain HTML page where there is no content. I tested with only a "Hello World" text. There are no CSS or other file includes, and no access to third-party websites.

In Chrome DevTools, the Status column says "CORS error"; the request then redirects and the page displays.

This occurs only on the latest Chrome browsers (Chromium 102.0.5005), the latest as of this moment.

This issue started on 25-May-2022, which coincides with the launch date of Chrome 102.0.5005. I tested on Chrome Beta (103.0.5060, the next version due) and it is reproducible there as well.

The traffic stats from Google Analytics have also dropped by around 20%. This drop started from the same date, 25-May-2022.

I checked the hosting server error log and there are no errors. All my websites are hosted on a DreamHost VPS and the issue occurs on all of them. I also checked a random website (not mine) hosted at DreamHost and can reproduce it there too.

Steps to reproduce:

  1. Use latest Chrome browsers (Chrome or Brave) Chromium: 102.0.5005.
  2. Go to Website https://www.mattbeno.com/
  3. Refresh like around 10 times with a few seconds interval between each refresh.

Please help to fix this issue.

Saltstack - mapping values are not allowed in this context

Posted: 08 Jun 2022 08:24 AM PDT

The following state file throws the error "Rendering SLS 'base:settings.app.state.sls' failed: mapping values are not allowed in this context":

I rendered the state of the minion into a file and ran it through yamllint. That part of the state seems OK to me...

openliberty-firewalld-web-service:
    firewalld.service
      - name: openlibweb
      - ports:
        - 10080/tcp
        - 10443/tcp
        - 11080/tcp
        - 11443/tcp

Yamllint: (): mapping values are not allowed in this context at line 219 column 11

Might the error still originate from another part of the state file?
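
For comparison, a hedged sketch of the same block with the state function written as a mapping key: the missing colon after firewalld.service would make the following "- name:" line an invalid mapping value, which matches the error text (this is an assumption, not a confirmed diagnosis of line 219):

openliberty-firewalld-web-service:
  firewalld.service:
    - name: openlibweb
    - ports:
      - 10080/tcp
      - 10443/tcp
      - 11080/tcp
      - 11443/tcp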

Thanks!

New-ComplianceSearchAction: delete more than 10 mails

Posted: 08 Jun 2022 08:18 AM PDT

Due to negligence or a mistake by a colleague, we need to delete all mails before a specific date. When we do a content search, it comes to 5 million emails (many items per mailbox, way more than 10).

Is there another way to delete those emails? We used Search-Mailbox but that is too slow.
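
A hedged sketch of the usual workaround in Security & Compliance PowerShell: the purge action removes at most 10 items per mailbox per run, so it has to be repeated until the search no longer returns matches ("OldMailCleanup" is a placeholder for an existing content search):

do {
    Remove-ComplianceSearchAction -Identity "OldMailCleanup_Purge" -Confirm:$false -ErrorAction SilentlyContinue
    New-ComplianceSearchAction -SearchName "OldMailCleanup" -Purge -PurgeType SoftDelete -Confirm:$false
    do { Start-Sleep -Seconds 60 }
    until ((Get-ComplianceSearchAction -Identity "OldMailCleanup_Purge").Status -eq 'Completed')
    Start-ComplianceSearch -Identity "OldMailCleanup"
    do { Start-Sleep -Seconds 30 }
    until ((Get-ComplianceSearch -Identity "OldMailCleanup").Status -eq 'Completed')
} while ((Get-ComplianceSearch -Identity "OldMailCleanup").Items -gt 0)

With 5 million matches this loop would still take a long time, so it's only a sketch of the mechanism, not a promise that it will be fast.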

How to estimate how much resources are needed in a cluster?

Posted: 08 Jun 2022 08:55 AM PDT

I'm new to Kubernetes, so I need some help with things that are basic for most of you. I built my first cluster and added some workers (bare metal).

How do I estimate how many resources and how much power I need for my apps? For example, I'm running 10 different Next.js / Node frontend applications and three NestJS backend applications, and I really don't know which server dimensions to choose. How much CPU and memory do I need? I know there is no exact answer, as you cannot know how complex each app is (although in my case they are all very simple). But I hope to get some pointers on how to estimate or measure which resources are needed in my cluster, and how to tell when I need to add another worker.
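
A hedged starting point, assuming metrics-server is (or can be) installed: run the apps with generous limits first, watch their actual usage, then set requests/limits from the observed peaks; the sum of requests versus node capacity tells you when another worker is needed.

kubectl top nodes                        # overall headroom per node
kubectl top pods -A --sort-by=memory     # real per-pod usage to base requests/limits on
kubectl describe nodes | grep -A5 'Allocated resources'   # how much of each node is already requested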

Authorization Header Missing Upon NGINX Proxy Pass to subdomain

Posted: 08 Jun 2022 08:58 AM PDT

Hi, I'm running Laravel on an NGINX server and I would like to use NGINX's reverse-proxy capability as an API gateway for my Laravel and other Node API applications. Here are my configurations:

Application URL: staging-app.example.com
Application API Endpoint: staging-app.example.com/api
API Gateway URL: api.example.com

What I want to do is redirect all API requests from api.example.com/staging-app to staging-app.example.com/api. I have succeeded in redirecting the API requests, but somehow the Authorization header is not passed along to the proxied upstream, resulting in 401 Unauthorized, while other headers do get passed along.

Here is my current api.example.com nginx config:

server {
        server_name api.example.com;

        location /staging-app {
                rewrite ^/staging-app/(.*)$ /$1 break;
                proxy_pass http://staging-app.example.com/;
        }

        location /test {
                rewrite ^/test/(.*)$ /$1 break;
                proxy_pass http://127.0.0.1:3333/;
        }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = api.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

        listen 80;
        listen [::]:80;

        server_name api.example.com;
    return 404; # managed by Certbot
}

and for my Laravel application, I use the configuration provided by Laravel themselves.

Update 1: I tried adding proxy_set_header Test testingvalue directly in the location block, but that doesn't seem to work either.
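
For what it's worth, a hedged sketch of forwarding the header explicitly in the proxying location block (nginx forwards request headers by default, so if this turns out to be required, the header may actually be getting dropped on the upstream's FastCGI hop rather than here):

location /staging-app {
        rewrite ^/staging-app/(.*)$ /$1 break;
        proxy_pass http://staging-app.example.com/;
        proxy_set_header Host staging-app.example.com;
        proxy_set_header Authorization $http_authorization;
}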

Executing one task from a playbook using tags

Posted: 08 Jun 2022 08:11 AM PDT

I have a playbook with multiple tasks for turning machines on and off. I tried using tags to run only the "start VM" task with the command ansible run.yaml --tags on, but it throws ERROR! tags must be specified as a list. Please tell me where I have made a mistake. Thanks.

---
- hosts: list
  gather_facts: no
  tasks:
  - name: start
    command: >
            virsh start {{ inventory_hostname }}
    tags: on
    delegate_to: inv
- hosts: off
  gather_facts: no
  tasks:
  - name: stop vm
    command: >
            virsh shutdown --domain {{ inventory_hostname }}
    delegate_to: inv
    tags: off
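
A hedged guess at the cause, sketched below: the playbook has to be run with ansible-playbook (not ansible) for --tags to apply, and an unquoted on/off is parsed by YAML as a boolean rather than a tag name, which matches the "tags must be specified as a list" error. Quoting or renaming the tags avoids that:

- hosts: list
  gather_facts: no
  tasks:
  - name: start
    command: virsh start {{ inventory_hostname }}
    delegate_to: inv
    tags: ["start_vm"]     # quoted/renamed tag instead of the bare boolean "on"

# then: ansible-playbook run.yaml --tags start_vm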

k3s without HA: how to switch master node?

Posted: 08 Jun 2022 08:19 AM PDT

The Rancher documentation on k3s is quite nice, and its HA support (with either an external DB or embedded etcd) looks nice, but I don't want/need an HA setup.

If my master node fails, I don't mind having downtime while I re-create it or promote another node to master, but I cannot find documentation on how to switch the master node.

Which files should I back up regularly (or host on an NFS mount, which is what I want to do) so that I can easily re-create the cluster with another server node but with the exact same state?

For the initial setup, we do:

  • master: curl -sfL https://get.k3s.io | sh
  • worker: curl -sfL https://get.k3s.io | K3S_URL=\"https://${MASTER_IP}:6443\" K3S_TOKEN=\"${TOKEN}\" sh -s

If I just re-run the scripts choosing another master, all the state is lost. Is there a single directory I can copy/preserve (and an argument to pass to the script to re-use it)?
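
A hedged sketch for a single, sqlite-backed server (no etcd, no external DB): the cluster state and the node token live under /var/lib/rancher/k3s/server, so backing up that directory and restoring it on the replacement node before installing k3s should bring the cluster back with the same state. The agents would still need to reach the new server at the same URL, or be re-pointed/re-joined:

systemctl stop k3s
tar czf k3s-server-backup.tgz /var/lib/rancher/k3s/server

# on the replacement master:
tar xzf k3s-server-backup.tgz -C /
curl -sfL https://get.k3s.io | sh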

How to get into the single user mode in Linux VM in Google Cloud

Posted: 08 Jun 2022 08:19 AM PDT

Can anyone help me get into single-user mode on a Linux VM in Google Cloud?

I have tried changing the GRUB setting in /etc/default/grub to GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 ro single", but no luck.

[root@test-linux admin]# cat /etc/default/grub
GRUB_TIMEOUT=100
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=38400"
GRUB_CMDLINE_LINUX="crashkernel=auto console=ttyS0,38400n8 elevator=noop"
GRUB_DISABLE_RECOVERY="true"
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 ro single"
[root@test-linux admin]#
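
A hedged sketch of the route I'd try on GCE, where there is no interactive VGA console: enable the interactive serial console, then boot with the systemd rescue target rather than the old "single" keyword. The instance name and zone are placeholders, and the grub2-mkconfig path assumes a RHEL/CentOS-style layout as suggested by /etc/system-release:

gcloud compute instances add-metadata test-linux --zone us-central1-a \
  --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port test-linux --zone us-central1-a

# in /etc/default/grub:
# GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0,38400n8 systemd.unit=rescue.target"
grub2-mkconfig -o /boot/grub2/grub.cfg && reboot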

gcloud app deploy error --> ERROR: (gcloud.app.deploy) Error Response: [13] An internal error occurred

Posted: 08 Jun 2022 08:21 AM PDT

I am new to Google Cloud apps and need help deploying a web app.

I created an App Engine app and a Cloud SQL for MySQL instance; the latter is in a different project. Let's call the App Engine app web1 and the SQL instance mysql1.

When I deploy web1 using "gcloud app deploy app.yaml", the upload succeeds, but the "Updating service" step fails with the error "ERROR: (gcloud.app.deploy) Error Response: [13] An internal error occurred".

Here is the full error:

Updating service [default]...⠶DEBUG: Operation [apps/and-web-test/operations/f7c36da5-de76-436d-9e71-785d549273e0] complete. Result: {

"metadata": {      "target": "apps/and-web-test/services/default/versions/and-web-mm-001",       "method": "google.appengine.v1.Versions.CreateVersion",       "user": "xxxxxxxxx@gmail.com",       "insertTime": "2019-07-25T04:54:53.630Z",       "endTime": "2019-07-25T04:55:11.386Z",       "@type": "type.googleapis.com/google.appengine.v1.OperationMetadataV1"  },   "done": true,   "name": "apps/and-web-test/operations/f7c36da5-de76-436d-9e71-785d549273e0",   "error": {      "message": "An internal error occurred.",       "code": 13  } }  

Is it because my script has a bug, or is there something else I need to set up, such as permissions on the Cloud SQL project?
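
Two hedged checks that often help with this opaque error: redeploy with debug verbosity to surface the underlying failure, and confirm that the App Engine service account in this project is allowed to use the Cloud SQL instance in the other project (the mysql1 project ID and the role binding below are assumptions based on the setup described):

gcloud app deploy app.yaml --verbosity=debug

gcloud projects add-iam-policy-binding mysql1-project \
  --member="serviceAccount:and-web-test@appspot.gserviceaccount.com" \
  --role="roles/cloudsql.client"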

Configure apache for lets-encrypt and django

Posted: 08 Jun 2022 09:08 AM PDT

I am able to set up my domain for https without problems with the following example.com.conf file:

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /www/example.com

    ErrorLog /var/log/example.com/error.log

    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/example.com_access.log combined

    RewriteEngine on

</VirtualHost>

Then I run certbot --apache, choose my domain, install the certificate and then choose option 2: Secure - Make all requests redirect to secure HTTPS access. The file example.com-le-ssl.conf is created like this:

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /www/example.com

        ErrorLog /var/log/example.com/error.log

        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/example.com_access.log combined

        RewriteEngine on

        SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf
    </VirtualHost>
</IfModule>

and visiting example.com redirects to https://example.com and going directly to https://example.com also works fine.

Now I would like to change the Apache config to work with Django.

I have tried using the example.com.conf file at the bottom and then running the certbot --apache command, but it gives the error:

Name duplicates previous WSGI daemon definition.  

I also tried changing the file after having it working as shown above, but with no luck. Any idea how to do this properly?

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    WSGIPassAuthorization On

    WSGIDaemonProcess myapp  python-home=/opt/MyProject-Master python-path=/opt/MyProject-Master/MyProject processes=2 threads=5

    WSGIProcessGroup myapp
    WSGIScriptAlias / /path/MyProject-Master/MyProject/MyProject/wsgi.py process-group=myapp
    <Directory /path/MyProject-Master/MyProject/MyProject>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

    ErrorLog /var/log/example.coms/error.log

    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/example.com_access.log combined

</VirtualHost>
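
A hedged sketch of a layout that avoids the error: certbot copies the whole vhost into example.com-le-ssl.conf, so a WSGIDaemonProcess declared inside <VirtualHost> ends up defined twice with the same name. Defining the daemon once at server level (e.g. in its own conf file) and only referencing it from each vhost should sidestep the duplicate-name check (the file path below is an assumption):

# /etc/apache2/conf-available/myapp-wsgi.conf  (enable with: a2enconf myapp-wsgi)
WSGIDaemonProcess myapp python-home=/opt/MyProject-Master python-path=/opt/MyProject-Master/MyProject processes=2 threads=5

# inside both the :80 and :443 <VirtualHost> blocks:
WSGIProcessGroup myapp
WSGIScriptAlias / /path/MyProject-Master/MyProject/MyProject/wsgi.py process-group=myapp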

centos 7 kvm error trying to guestmount qcow2 image

Posted: 08 Jun 2022 07:01 AM PDT

I have a CentOS 7 KVM host. A partition on one of the VMs seems to be corrupt. The image is qcow2. When I try to mount the image to troubleshoot, I receive the following error:

[root@vmhost02 images]# guestmount -a cpanel-vm.qcow2 -m /dev/sbcd /mnt/temp
libguestfs: error: vfs_type: vfs_type_stub: /dev/sbcd: No such file or directory
libguestfs: error: mount_options: mount_options_stub: /dev/sbcd: No such file or directory
guestmount: '/dev/sbcd' could not be mounted.
guestmount: Did you mean to mount one of these filesystems?
guestmount:     /dev/sda1 (xfs)
guestmount:     /dev/centos/home (xfs)
guestmount:     /dev/centos/root (xfs)
guestmount:     /dev/centos/swap (swap)

[root@vmhost02 images]# guestmount -a cpanel-vm.qcow2 -m /dev/centos/root /mnt/temp
libguestfs: error: mount_options: /dev/centos/root on / (options: ''): mount: mount /dev/mapper/centos-root on /sysroot failed: Structure needs cleaning
guestmount: '/dev/centos/root' could not be mounted.

I am unsure how to repair this, as to my knowledge guestmount is the only way I can access the partition, and that is failing.
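
"Structure needs cleaning" is XFS reporting on-disk corruption, and libguestfs can run the repair tool against the image without mounting it first. A hedged sketch, assuming you take a copy or snapshot of the qcow2 before writing to it:

guestfish --rw -a cpanel-vm.qcow2 run : xfs-repair /dev/centos/root
# then retry:
guestmount -a cpanel-vm.qcow2 -m /dev/centos/root /mnt/temp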

OpenVPN client not working on a GCE instance

Posted: 08 Jun 2022 07:01 AM PDT

I have set up an OpenVPN client config on my GCE instance; it establishes the connection correctly and creates a tunnel interface. But I cannot ping anything through that tunnel (ping -I tun0 8.8.8.8 or curl www.google.com --interface tun0 gets no response). I tried different subnet IP ranges (10.8.x.x or 192.168.x.x), different protocols (TCP or UDP), and different auth methods (TLS or static key), but still no luck.

If I configure an OpenVPN server on the instance instead, it works correctly: the server (the GCE instance) and the clients can ping each other. Is the OpenVPN client not supported on GCE, or is there anything I've missed in the configuration?

Operating systems and configs seem to be irrelevant: I've tried multiple instances with different OSes and different configs (configs that work on other VPSes), but the problem persists.

More detail on Google groups gce-discussion: https://groups.google.com/forum/#!topic/gce-discussion/0KoMnaojG6E

Update: I noticed that when I ping something through the tunnel, the RX and TX counts change (I don't know whether this means the ICMP packets actually transfer correctly), but ping still reports 100% packet loss. ifconfig before the ping:

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.179.21  P-t-P:192.168.179.22  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:1661 (1.6 KiB)  TX bytes:0 (0.0 b)

ping:

[root@mario-vps mario]# ping -I tun0 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 192.168.179.21 tun0: 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 870ms

ifconfig after ping:

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.179.21  P-t-P:192.168.179.22  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:1745 (1.7 KiB)  TX bytes:84 (84.0 b)

If I write "ping" and "ping-restart" arguments in the config file the connection won't go down. Don't know if this means even openvpn daemon is considering connection is well, but ping and curl-like stuff never work for me through that tunnel.

CentOS 7 MariaDB Error "Failed to start mariadb.service: Unit not found." [closed]

Posted: 08 Jun 2022 08:12 AM PDT

I am somewhat new to Linux and am testing various LAMP setups in VirtualBox on Windows. Currently I have a CentOS 7 VM that I am trying to install MariaDB on. I am following the instructions here: http://www.tecmint.com/install-lamp-in-centos-7

I ran

# yum install mariadb-server mariadb  

Installation was successful according to terminal output, but when I run:

 # systemctl start mariadb  

I get

Failed to start mariadb.service: Unit not found.  

I spent the past couple of hours googling this, but nothing seems to solve my issue, including this (No mysqld or mysql.server after mariadb-server install) and many other posts.

Any help is greatly appreciated.

UPDATE 01

I uninstalled mariadb:

[root@centos7 admin]# yum remove mysql  Loaded plugins: fastestmirror  Resolving Dependencies  --> Running transaction check  ---> Package MariaDB-client.x86_64 0:10.0.30-1.el7.centos will be erased  --> Processing Dependency: MariaDB-client for package: MariaDB-server-10.0.30-1.el7.centos.x86_64  --> Running transaction check  ---> Package MariaDB-server.x86_64 0:10.0.30-1.el7.centos will be erased  --> Finished Dependency Resolution    Dependencies Resolved    ===================================================================================   Package              Arch         Version                    Repository      Size  ===================================================================================  Removing:   MariaDB-client       x86_64       10.0.30-1.el7.centos       @mariadb        49 M  Removing for dependencies:   MariaDB-server       x86_64       10.0.30-1.el7.centos       @mariadb       237 M    Transaction Summary  ===================================================================================  Remove  1 Package (+1 Dependent package)    Installed size: 286 M  Is this ok [y/N]: y  Downloading packages:  Running transaction check  Running transaction test  Transaction test succeeded  Running transaction    Erasing    : MariaDB-server-10.0.30-1.el7.centos.x86_64                      1/2    Erasing    : MariaDB-client-10.0.30-1.el7.centos.x86_64                      2/2    Verifying  : MariaDB-client-10.0.30-1.el7.centos.x86_64                      1/2    Verifying  : MariaDB-server-10.0.30-1.el7.centos.x86_64                      2/2    Removed:    MariaDB-client.x86_64 0:10.0.30-1.el7.centos    Dependency Removed:    MariaDB-server.x86_64 0:10.0.30-1.el7.centos    Complete!  

ran yum clean all and yum update

reinstalled mariadb:

# yum install mariadb-server mariadb  Loaded plugins: fastestmirror  Loading mirror speeds from cached hostfile   * Webmin: download.webmin.com   * base: anorien.csc.warwick.ac.uk   * extras: centos.mirrors.nublue.co.uk   * updates: centos.serverspace.co.uk  Package mariadb-server is obsoleted by MariaDB-server, trying to install MariaDB-server-10.0.30-1.el7.centos.x86_64 instead  Package mariadb is obsoleted by MariaDB-client, trying to install MariaDB-client-10.0.30-1.el7.centos.x86_64 instead  Resolving Dependencies  --> Running transaction check  ---> Package MariaDB-client.x86_64 0:10.0.30-1.el7.centos will be installed  ---> Package MariaDB-server.x86_64 0:10.0.30-1.el7.centos will be installed  --> Finished Dependency Resolution    Dependencies Resolved    ===================================================================================   Package              Arch         Version                     Repository     Size  ===================================================================================  Installing:   MariaDB-client       x86_64       10.0.30-1.el7.centos        mariadb        10 M   MariaDB-server       x86_64       10.0.30-1.el7.centos        mariadb        55 M    Transaction Summary  ===================================================================================  Install  2 Packages    Total download size: 65 M  Installed size: 65 M  Is this ok [y/d/N]: y  Downloading packages:  (1/2): MariaDB-10.0.30-centos7-x86_64-client.rpm            |  10 MB  00:00:22  (2/2): MariaDB-10.0.30-centos7-x86_64-server.rpm            |  55 MB  00:01:15  -----------------------------------------------------------------------------------  Total                                                 876 kB/s |  65 MB  01:15  Running transaction check  Running transaction test  Transaction test succeeded  Running transaction    Installing : MariaDB-client-10.0.30-1.el7.centos.x86_64                      1/2    Installing : MariaDB-server-10.0.30-1.el7.centos.x86_64                      2/2  libsemanage.map_file: Unable to open /usr/share/mysql/SELinux/mariadb.pp   (No such file or directory).  libsemanage.semanage_direct_install_file: Unable to read file /usr/share/mysql/SELinux/mariadb.pp   (No such file or directory).  /usr/sbin/semodule:  Failed on /usr/share/mysql/SELinux/mariadb.pp!    Verifying  : MariaDB-client-10.0.30-1.el7.centos.x86_64                      1/2    Verifying  : MariaDB-server-10.0.30-1.el7.centos.x86_64                      2/2    Installed:    MariaDB-client.x86_64 0:10.0.30-1.el7.centos    MariaDB-server.x86_64 0:10.0.30-1.el7.centos    Complete!  

Still no go, what gives?

# systemctl start mariadb.service
Failed to start mariadb.service: Unit not found.

UPDATE 02

It could be something to do with package versions and capitalization: I used the MariaDB repos instead of the CentOS ones for the installation, so it picked up version 10.0.30:

]# yum info mariadb-server  Loaded plugins: fastestmirror  Loading mirror speeds from cached hostfile   * Webmin: download.webmin.com   * base: anorien.csc.warwick.ac.uk   * extras: centos.mirrors.nublue.co.uk   * updates: centos.serverspace.co.uk  Installed Packages  Name        : MariaDB-server  Arch        : x86_64  Version     : 10.0.30  Release     : 1.el7.centos  Size        : 237 M  Repo        : installed  From repo   : mariadb  Summary     : MariaDB: a very fast and robust SQL database server  URL         : http://mariadb.org  License     : GPLv2  Description : MariaDB: a very fast and robust SQL database server              :              : It is GPL v2 licensed, which means you can use the it free of charge              : under the conditions of the GNU General Public License Version 2              : (http://www.gnu.org/licenses/).              :              : MariaDB documentation can be found at https://mariadb.com/kb              : MariaDB bug reports should be submitted through              : https://jira.mariadb.org    Available Packages  Name        : mariadb-server  Arch        : x86_64  Epoch       : 1  Version     : 5.5.52  Release     : 1.el7  Size        : 11 M  Repo        : base/7/x86_64  Summary     : The MariaDB server and related files  URL         : http://mariadb.org  License     : GPLv2 with exceptions and LGPLv2 and BSD  Description : MariaDB is a multi-user, multi-threaded SQL database server. It is a              : client/server implementation consisting of a server daemon (mysqld)              : and many different client programs and libraries. This package              : contains the MariaDB server and some accompanying files and              : directories. MariaDB is a community developed branch of MySQL.  
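
A hedged check that may explain the missing unit: the 10.0-series packages from the mariadb.org repo ship a SysV init script named mysql rather than a native mariadb.service, so the service may simply exist under a different name:

rpm -ql MariaDB-server | grep -E 'systemd|init\.d'
systemctl list-unit-files | grep -iE 'mysql|mariadb'
systemctl start mysql        # assumption: the unit generated from /etc/init.d/mysql by systemd's sysv generator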

Apache 2.4 w/ PHP 7: PHP7.1-FPM and/or libapache2-mod-fastcgi

Posted: 08 Jun 2022 08:01 AM PDT

I am in the process of moving a web server away from the slower, resource-intensive mod_php, and all has been well, until I noticed that PHP 7.1 is running successfully using only the php7.1-fpm package (from the ondrej/php repository), without libapache2-mod-fastcgi installed. This behavior persists after restarting the system (Ubuntu 16.04.1) as well.

It was previously my understanding that Apache required both packages to be installed for the php7.1-fpm to work. However, this is evidently incorrect. Should I install the package libapache2-mod-fastcgi as well? On this same note, should I consider installing apache2-mpm-worker instead of (or in addition to) the standard apache2 package?

I understand that this is more of a best-practices question, but what I really want to know is whether I am setting myself up for disaster.

UPDATE: I haven't yet found any difference in performance, but I still suspect there might be a technical reason that many "tutorials" and such suggest installing the aforementioned packages as a pair, especially under high load or high traffic...
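
For context, on Apache 2.4 PHP-FPM is usually wired up through the bundled mod_proxy_fcgi rather than the older libapache2-mod-fastcgi, which would explain why the extra package is not needed. A minimal sketch of that setup, assuming Apache 2.4.10 or newer and the default Ubuntu socket path for php7.1-fpm; first the handler in the vhost or a conf snippet:

# in the vhost or a conf-available snippet: hand PHP requests to the FPM pool
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php7.1-fpm.sock|fcgi://localhost"
</FilesMatch>

and then on the shell:

a2enmod proxy_fcgi setenvif
apachectl configtest && systemctl reload apache2

With PHP handled by FPM, Apache no longer needs the non-thread-safe prefork MPM, which is the usual reason tutorials also mention apache2-mpm-worker (or, on Apache 2.4, the event MPM).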

Linux process killed even though enough memory available

Posted: 08 Jun 2022 07:23 AM PDT

I am investigating why two of our processes were killed by the Linux OOM killer, even though there seems to have been enough RAM and plenty of swap available at both times.

If I interpret it as described in this answer, the first memory request asked for 2^2 = 4 contiguous pages (16 KB) of memory (the order flag) and wanted it from the "Normal" zone.

Jan 27 04:26:14 kernel: [639964.652706] java invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0  

And if I correctly parse the output, there is more than enough space:

Node 0 Normal free:178144kB min:55068kB low:68832kB high:82600kB   

The second instance shows the same request a few minutes later, and there also seems to be enough space available.

Why was the OOM killer triggered then? Am I parsing the information wrong?

  • The system is a 14.04 Ubuntu with the 4.4.0-59 x64 kernel
  • The vm.overcommit_memory setting is set to "0" (heuristic), which might not be optimal.
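
One thing worth checking: an order=2 request needs a physically contiguous 16 KB block, so the relevant figure is not the total free memory but how much of it exists in blocks of 16 KB and larger. A rough sketch of how to inspect that (the awk field arithmetic assumes the standard /proc/buddyinfo layout):

# free blocks per order (columns are orders 0..10, i.e. 4 kB .. 4 MB blocks)
cat /proc/buddyinfo

# free memory in blocks of 16 kB and larger in the Normal zone, in kB
awk '/Normal/ { s = 0; for (i = 7; i <= NF; i++) s += $i * 4 * 2^(i-5); print s }' /proc/buddyinfo

# ask the kernel to compact (defragment) memory, if CONFIG_COMPACTION is enabled
echo 1 > /proc/sys/vm/compact_memory

In the first dump below, the Normal zone's free memory sits almost entirely in 4 kB and 8 kB blocks, which would be consistent with an order=2 allocation failing despite roughly 178 MB free overall.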

Instance one:

Jan 27 04:26:14 kernel: [639964.652706] java invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0  Jan 27 04:26:14 kernel: [639964.652711] java cpuset=/ mems_allowed=0  Jan 27 04:26:14 kernel: [639964.652716] CPU: 5 PID: 2152 Comm: java Not tainted 4.4.0-59-generic #80~14.04.1-Ubuntu  Jan 27 04:26:14 kernel: [639964.652717] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 06/22/2012  Jan 27 04:26:14 kernel: [639964.652719]  0000000000000000 ffff88041a963b38 ffffffff813dbd6c ffff88041a963cf0  Jan 27 04:26:14 kernel: [639964.652721]  0000000000000000 ffff88041a963bc8 ffffffff811fafc6 0000000000000000  Jan 27 04:26:14 kernel: [639964.652722]  0000000000000000 0000000000000000 ffff88042a6d1b88 0000000000000015  Jan 27 04:26:14 kernel: [639964.652724] Call Trace:  Jan 27 04:26:14 kernel: [639964.652731]  [<ffffffff813dbd6c>] dump_stack+0x63/0x87  Jan 27 04:26:14 kernel: [639964.652736]  [<ffffffff811fafc6>] dump_header+0x5b/0x1d5  Jan 27 04:26:14 kernel: [639964.652741]  [<ffffffff813766f1>] ? apparmor_capable+0xd1/0x180  Jan 27 04:26:14 kernel: [639964.652746]  [<ffffffff81188b35>] oom_kill_process+0x205/0x3d0  Jan 27 04:26:14 kernel: [639964.652747]  [<ffffffff8118916b>] out_of_memory+0x40b/0x460  Jan 27 04:26:14 kernel: [639964.652749]  [<ffffffff811fba7f>] __alloc_pages_slowpath.constprop.87+0x742/0x7ad  Jan 27 04:26:14 kernel: [639964.652752]  [<ffffffff8118e167>] __alloc_pages_nodemask+0x237/0x240  Jan 27 04:26:14 kernel: [639964.652754]  [<ffffffff8118e32d>] alloc_kmem_pages_node+0x4d/0xd0  Jan 27 04:26:14 kernel: [639964.652758]  [<ffffffff8107c125>] copy_process+0x185/0x1ce0  Jan 27 04:26:14 kernel: [639964.652763]  [<ffffffff810fd0b4>] ? do_futex+0xf4/0x520  Jan 27 04:26:14 kernel: [639964.652766]  [<ffffffff810a71c9>] ? resched_curr+0xa9/0xd0  Jan 27 04:26:14 kernel: [639964.652768]  [<ffffffff8107de1a>] _do_fork+0x8a/0x310  Jan 27 04:26:14 kernel: [639964.652769]  [<ffffffff8107e149>] SyS_clone+0x19/0x20  Jan 27 04:26:14 kernel: [639964.652775]  [<ffffffff81802c76>] entry_SYSCALL_64_fastpath+0x16/0x75  Jan 27 04:26:14 kernel: [639964.652776] Mem-Info:  Jan 27 04:26:14 kernel: [639964.652780] active_anon:1596719 inactive_anon:281182 isolated_anon:0  Jan 27 04:26:14 kernel: [639964.652780]  active_file:953586 inactive_file:952370 isolated_file:0  Jan 27 04:26:14 kernel: [639964.652780]  unevictable:0 dirty:7358 writeback:0 unstable:0  Jan 27 04:26:14 kernel: [639964.652780]  slab_reclaimable:217903 slab_unreclaimable:12162  Jan 27 04:26:14 kernel: [639964.652780]  mapped:40068 shmem:34861 pagetables:8261 bounce:0  Jan 27 04:26:14 kernel: [639964.652780]  free:71705 free_pcp:0 free_cma:0  Jan 27 04:26:14 kernel: [639964.652783] Node 0 DMA free:15892kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes  Jan 27 04:26:14 kernel: [639964.652787] lowmem_reserve[]: 0 2951 16005 16005 16005  Jan 27 04:26:14 kernel: [639964.652789] Node 0 DMA32 free:92784kB min:12448kB low:15560kB high:18672kB active_anon:1094416kB inactive_anon:368444kB active_file:579188kB inactive_file:561504kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3129216kB managed:3048784kB mlocked:0kB dirty:1188kB writeback:0kB mapped:32604kB shmem:27372kB slab_reclaimable:336288kB slab_unreclaimable:7196kB kernel_stack:1520kB pagetables:3964kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no  Jan 27 04:26:14 kernel: [639964.652793] lowmem_reserve[]: 0 0 13054 13054 13054  Jan 27 04:26:14 kernel: [639964.652795] Node 0 Normal free:178144kB min:55068kB low:68832kB high:82600kB active_anon:5292460kB inactive_anon:756284kB active_file:3235156kB inactive_file:3247976kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13631488kB managed:13367448kB mlocked:0kB dirty:28244kB writeback:0kB mapped:127668kB shmem:112072kB slab_reclaimable:535324kB slab_unreclaimable:41436kB kernel_stack:3968kB pagetables:29080kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:128 all_unreclaimable? no  Jan 27 04:26:14 kernel: [639964.652798] lowmem_reserve[]: 0 0 0 0 0  Jan 27 04:26:14 kernel: [639964.652800] Node 0 DMA: 1*4kB (U) 0*8kB 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15892kB  Jan 27 04:26:14 kernel: [639964.652807] Node 0 DMA32: 18127*4kB (UME) 2601*8kB (UME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 93316kB  Jan 27 04:26:14 kernel: [639964.652814] Node 0 Normal: 32943*4kB (UMEH) 5702*8kB (UMEH) 19*16kB (H) 13*32kB (H) 9*64kB (H) 2*128kB (H) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 178940kB  Jan 27 04:26:14 kernel: [639964.652820] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB  Jan 27 04:26:14 kernel: [639964.652821] 1949078 total pagecache pages  Jan 27 04:26:14 kernel: [639964.652822] 8225 pages in swap cache  Jan 27 04:26:14 kernel: [639964.652824] Swap cache stats: add 1131771, delete 1123546, find 7366438/7540102  Jan 27 04:26:14 kernel: [639964.652824] Free swap  = 4080988kB  Jan 27 04:26:14 kernel: [639964.652825] Total swap = 4194300kB  Jan 27 04:26:14 kernel: [639964.652826] 4194174 pages RAM  Jan 27 04:26:14 kernel: [639964.652826] 0 pages HighMem/MovableOnly  Jan 27 04:26:14 kernel: [639964.652827] 86139 pages reserved  Jan 27 04:26:14 kernel: [639964.652828] 0 pages cma reserved  Jan 27 04:26:14 kernel: [639964.652828] 0 pages hwpoisoned  Jan 27 04:26:14 kernel: [639964.652829] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name  Jan 27 04:26:14 kernel: [639964.652834] [  424]     0   424     4909      388      14       3       68             0 upstart-udev-br  Jan 27 04:26:14 kernel: [639964.652836] [  439]     0   439    13075      456      29       3      322         -1000 systemd-udevd  Jan 27 04:26:14 kernel: [639964.652839] [  724]     0   724     3816      226      13       3       53             0 upstart-socket-  Jan 27 04:26:14 kernel: [639964.652840] [  813]     0   813     5856      449      16       3       57             0 rpcbind  Jan 27 04:26:14 kernel: [639964.652842] [  865]   108   865     5386      456      16       3      113             0 rpc.statd  Jan 27 04:26:14 kernel: [639964.652844] [ 1034]     0  
1034     3820      281      12       3       35             0 upstart-file-br  Jan 27 04:26:14 kernel: [639964.652846] [ 1041]   102  1041     9817      366      23       3       50             0 dbus-daemon  Jan 27 04:26:14 kernel: [639964.652847] [ 1045]   101  1045    65018     1203      31       3      384             0 rsyslogd  Jan 27 04:26:14 kernel: [639964.652849] [ 1056]     0  1056    10870      525      26       4       49             0 systemd-logind  Jan 27 04:26:14 kernel: [639964.652851] [ 1063]     0  1063     5870        0      16       3       53             0 rpc.idmapd  Jan 27 04:26:14 kernel: [639964.652852] [ 1153]     0  1153     2558      371       9       3      517             0 dhclient  Jan 27 04:26:14 kernel: [639964.652854] [ 1374]     0  1374     3955      401      13       3       40             0 getty  Jan 27 04:26:14 kernel: [639964.652855] [ 1377]     0  1377     3955      406      13       3       38             0 getty  Jan 27 04:26:14 kernel: [639964.652857] [ 1383]     0  1383     3955      406      13       3       39             0 getty  Jan 27 04:26:14 kernel: [639964.652858] [ 1384]     0  1384     3955      418      13       3       37             0 getty  Jan 27 04:26:14 kernel: [639964.652859] [ 1386]     0  1386     3955      418      12       3       38             0 getty  Jan 27 04:26:14 kernel: [639964.652861] [ 1403]     0  1403    15346      735      34       3      142         -1000 sshd  Jan 27 04:26:14 kernel: [639964.652863] [ 1436]     0  1436     4825      408      13       3       28             0 irqbalance  Jan 27 04:26:14 kernel: [639964.652864] [ 1440]     0  1440     1093      379       8       3       35             0 acpid  Jan 27 04:26:14 kernel: [639964.652866] [ 1442]     0  1442     4785      176      14       3       38             0 atd  Jan 27 04:26:14 kernel: [639964.652867] [ 1443]     0  1443     5914      466      17       3       43             0 cron  Jan 27 04:26:14 kernel: [639964.652869] [ 1464]   105  1464    61957     3600      59       3      273          -900 postgres  Jan 27 04:26:14 kernel: [639964.652870] [ 1561]   107  1561     7864      657      21       3      113             0 ntpd  Jan 27 04:26:14 kernel: [639964.652872] [ 1762]   105  1762    62036    35419     117       3      264             0 postgres  Jan 27 04:26:14 kernel: [639964.652873] [ 1763]   105  1763    61957    35051     117       3      266             0 postgres  Jan 27 04:26:14 kernel: [639964.652875] [ 1764]   105  1764    61957     1773      52       3      306             0 postgres  Jan 27 04:26:14 kernel: [639964.652877] [ 1765]   105  1765    62166     4999     116       3      374             0 postgres  Jan 27 04:26:14 kernel: [639964.652878] [ 1766]   105  1766    25910      617      48       3      274             0 postgres  Jan 27 04:26:14 kernel: [639964.652880] [ 1834]  1004  1834  2002886   692615    1549      10    12707             0 java  Jan 27 04:26:14 kernel: [639964.652881] [ 1921]   106  1921     5835      452      16       3      112             0 nrpe  Jan 27 04:26:14 kernel: [639964.652883] [ 1943]     0  1943   175986      420      41       4       50             0 nscd  Jan 27 04:26:14 kernel: [639964.652884] [ 1978]   109  1978   111112      309      48       4      213             0 nslcd  Jan 27 04:26:14 kernel: [639964.652886] [ 2007]     8  2007     3172      326      11       3       52             0 nullmailer-send  Jan 27 04:26:14 kernel: [639964.652887] [ 2092]     0  2092    34005     
1947      70       3     3067             0 /usr/bin/monito  Jan 27 04:26:14 kernel: [639964.652889] [ 2110]     0  2110     1901      367       9       3       25             0 getty  Jan 27 04:26:14 kernel: [639964.652891] [ 2146] 65534  2146    34005     1101      67       3     3810             0 monitorix-httpd  Jan 27 04:26:14 kernel: [639964.652893] [24525]   105 24525  1826264  1151331    3568      10      299             0 postgres  Jan 27 04:26:14 kernel: [639964.652895] [20380]   105 20380    62511    36514     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652897] [21273]   105 21273    62532    36508     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652898] [22133]   105 22133    62610    36827     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652900] [22135]   105 22135    62541    34994     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652901] [22428]     0 22428    15436      739      35       4       11             0 cron  Jan 27 04:26:14 kernel: [639964.652903] [22429]     0 22429    15489      749      35       4       12             0 cron  Jan 27 04:26:14 kernel: [639964.652904] [22442]     0 22442     1112      198       8       3        0             0 sh  Jan 27 04:26:14 kernel: [639964.652906] [22443]  1004 22443     1112      191       8       3        0             0 sh  Jan 27 04:26:14 kernel: [639964.652908] [22444]  1004 22444     3102      748      11       3        0             0 syncDaily.sh  Jan 27 04:26:14 kernel: [639964.652909] [22445]     0 22445     1112      420       8       3        0             0 cron-apt  Jan 27 04:26:14 kernel: [639964.652911] [22465]  1004 22465    55074    10532     113       3        0             0 rsync  Jan 27 04:26:14 kernel: [639964.652912] [22466]     0 22466     1087      171       8       3        0             0 sleep  Jan 27 04:26:14 kernel: [639964.652914] [22467]  1004 22467    29820     9151      62       3        0             0 rsync  Jan 27 04:26:14 kernel: [639964.652915] [22468]  1004 22468    61770     7168     125       3        0             0 rsync  Jan 27 04:26:14 kernel: [639964.652917] [22990]   105 22990    62490    35099     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652919] [23138]   105 23138    62491    35578     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652920] [23139]   105 23139    62690    36657     121       3      236             0 postgres  Jan 27 04:26:14 kernel: [639964.652922] [23140]   105 23140    62455    32973     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652923] [23631]   105 23631    62518    34978     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652925] [23635]   105 23635    62506    35193     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652927] [23636]   105 23636    62455    30085     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652928] [23637]   105 23637    62470    33106     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652930] [23639]   105 23639    62511    34295     120       3      237             0 postgres  Jan 27 04:26:14 kernel: [639964.652940] Out of memory: Kill process 24525 (postgres) score 224 or sacrifice child  Jan 27 04:26:14 kernel: [639964.652975] Killed process 24525 (postgres) total-vm:7305056kB, 
anon-rss:4460244kB, file-rss:145080kB  

Instance two:

Jan 27 04:34:36 kernel: [640466.131656] java invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0  Jan 27 04:34:36 kernel: [640466.131660] java cpuset=/ mems_allowed=0  Jan 27 04:34:36 kernel: [640466.131665] CPU: 7 PID: 2152 Comm: java Not tainted 4.4.0-59-generic #80~14.04.1-Ubuntu  Jan 27 04:34:36 kernel: [640466.131666] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 06/22/2012  Jan 27 04:34:36 kernel: [640466.131668]  0000000000000000 ffff88041a963b38 ffffffff813dbd6c ffff88041a963cf0  Jan 27 04:34:36 kernel: [640466.131670]  0000000000000000 ffff88041a963bc8 ffffffff811fafc6 0000000000000000  Jan 27 04:34:36 kernel: [640466.131671]  0000000000000000 0000000000000000 ffff88042a6d1b88 0000000000000015  Jan 27 04:34:36 kernel: [640466.131673] Call Trace:  Jan 27 04:34:36 kernel: [640466.131698]  [<ffffffff813dbd6c>] dump_stack+0x63/0x87  Jan 27 04:34:36 kernel: [640466.131712]  [<ffffffff811fafc6>] dump_header+0x5b/0x1d5  Jan 27 04:34:36 kernel: [640466.131721]  [<ffffffff813766f1>] ? apparmor_capable+0xd1/0x180  Jan 27 04:34:36 kernel: [640466.131728]  [<ffffffff81188b35>] oom_kill_process+0x205/0x3d0  Jan 27 04:34:36 kernel: [640466.131730]  [<ffffffff8118916b>] out_of_memory+0x40b/0x460  Jan 27 04:34:36 kernel: [640466.131732]  [<ffffffff811fba7f>] __alloc_pages_slowpath.constprop.87+0x742/0x7ad  Jan 27 04:34:36 kernel: [640466.131734]  [<ffffffff8118e167>] __alloc_pages_nodemask+0x237/0x240  Jan 27 04:34:36 kernel: [640466.131736]  [<ffffffff8118e32d>] alloc_kmem_pages_node+0x4d/0xd0  Jan 27 04:34:36 kernel: [640466.131745]  [<ffffffff8107c125>] copy_process+0x185/0x1ce0  Jan 27 04:34:36 kernel: [640466.131755]  [<ffffffff810fd0b4>] ? do_futex+0xf4/0x520  Jan 27 04:34:36 kernel: [640466.131761]  [<ffffffff810a71c9>] ? resched_curr+0xa9/0xd0  Jan 27 04:34:36 kernel: [640466.131763]  [<ffffffff8107de1a>] _do_fork+0x8a/0x310  Jan 27 04:34:36 kernel: [640466.131765]  [<ffffffff8107e149>] SyS_clone+0x19/0x20  Jan 27 04:34:36 kernel: [640466.131779]  [<ffffffff81802c76>] entry_SYSCALL_64_fastpath+0x16/0x75  Jan 27 04:34:36 kernel: [640466.131781] Mem-Info:  Jan 27 04:34:36 kernel: [640466.131784] active_anon:463046 inactive_anon:339934 isolated_anon:0  Jan 27 04:34:36 kernel: [640466.131784]  active_file:1074992 inactive_file:1398029 isolated_file:0  Jan 27 04:34:36 kernel: [640466.131784]  unevictable:0 dirty:1307 writeback:0 unstable:0  Jan 27 04:34:36 kernel: [640466.131784]  slab_reclaimable:626085 slab_unreclaimable:26239  Jan 27 04:34:36 kernel: [640466.131784]  mapped:40618 shmem:35429 pagetables:4038 bounce:0  Jan 27 04:34:36 kernel: [640466.131784]  free:161367 free_pcp:0 free_cma:0  Jan 27 04:34:36 kernel: [640466.131788] Node 0 DMA free:15892kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes  Jan 27 04:34:36 kernel: [640466.131792] lowmem_reserve[]: 0 2951 16005 16005 16005  Jan 27 04:34:36 kernel: [640466.131794] Node 0 DMA32 free:112056kB min:12448kB low:15560kB high:18672kB active_anon:465908kB inactive_anon:478436kB active_file:620808kB inactive_file:963844kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3129216kB managed:3048784kB mlocked:0kB dirty:844kB writeback:0kB mapped:11132kB shmem:5644kB slab_reclaimable:390764kB slab_unreclaimable:8488kB kernel_stack:1408kB pagetables:2304kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no  Jan 27 04:34:36 kernel: [640466.131798] lowmem_reserve[]: 0 0 13054 13054 13054  Jan 27 04:34:36 kernel: [640466.131800] Node 0 Normal free:517520kB min:55068kB low:68832kB high:82600kB active_anon:1386276kB inactive_anon:881300kB active_file:3679160kB inactive_file:4628272kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13631488kB managed:13367448kB mlocked:0kB dirty:4384kB writeback:0kB mapped:151340kB shmem:136072kB slab_reclaimable:2113576kB slab_unreclaimable:96452kB kernel_stack:3904kB pagetables:13848kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no  Jan 27 04:34:36 kernel: [640466.131803] lowmem_reserve[]: 0 0 0 0 0  Jan 27 04:34:36 kernel: [640466.131805] Node 0 DMA: 1*4kB (U) 0*8kB 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15892kB  Jan 27 04:34:36 kernel: [640466.131812] Node 0 DMA32: 20157*4kB (UME) 4165*8kB (UME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 113948kB  Jan 27 04:34:36 kernel: [640466.131817] Node 0 Normal: 119665*4kB (UMEH) 4706*8kB (UMEH) 12*16kB (H) 13*32kB (H) 10*64kB (H) 2*128kB (H) 1*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 518068kB  Jan 27 04:34:36 kernel: [640466.131824] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB  Jan 27 04:34:36 kernel: [640466.131825] 2516698 total pagecache pages  Jan 27 04:34:36 kernel: [640466.131826] 8199 pages in swap cache  Jan 27 04:34:36 kernel: [640466.131828] Swap cache stats: add 1131970, delete 1123771, find 7374629/7548428  Jan 27 04:34:36 kernel: [640466.131828] Free swap  = 4085700kB  Jan 27 04:34:36 kernel: [640466.131829] Total swap = 4194300kB  Jan 27 04:34:36 kernel: [640466.131830] 4194174 pages RAM  Jan 27 04:34:36 kernel: [640466.131830] 0 pages HighMem/MovableOnly  Jan 27 04:34:36 kernel: [640466.131831] 86139 pages reserved  Jan 27 04:34:36 kernel: [640466.131832] 0 pages cma reserved  Jan 27 04:34:36 kernel: [640466.131832] 0 pages hwpoisoned  Jan 27 04:34:36 kernel: [640466.131833] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name  Jan 27 04:34:36 kernel: [640466.131838] [  424]     0   424     4909      388      14       3       68             0 upstart-udev-br  Jan 27 04:34:36 kernel: [640466.131841] [  439]     0   439    13075      456      29       3      322         -1000 systemd-udevd  Jan 27 04:34:36 kernel: [640466.131843] [  724]     0   724     3816      226      13       3       53             0 upstart-socket-  Jan 27 04:34:36 kernel: [640466.131845] [  813]     0   813     5856      449      16       3       57             0 rpcbind  Jan 27 04:34:36 kernel: [640466.131846] [  865]   108   865     5386      456      16       3      113             0 rpc.statd  Jan 27 04:34:36 kernel: [640466.131848] [ 1034]     0  
1034     3820      281      12       3       35             0 upstart-file-br  Jan 27 04:34:36 kernel: [640466.131850] [ 1041]   102  1041     9817      366      23       3       50             0 dbus-daemon  Jan 27 04:34:36 kernel: [640466.131852] [ 1045]   101  1045    65018     1255      31       3      362             0 rsyslogd  Jan 27 04:34:36 kernel: [640466.131854] [ 1056]     0  1056    10870      525      26       4       49             0 systemd-logind  Jan 27 04:34:36 kernel: [640466.131855] [ 1063]     0  1063     5870        0      16       3       53             0 rpc.idmapd  Jan 27 04:34:36 kernel: [640466.131857] [ 1153]     0  1153     2558      371       9       3      517             0 dhclient  Jan 27 04:34:36 kernel: [640466.131858] [ 1374]     0  1374     3955      401      13       3       40             0 getty  Jan 27 04:34:36 kernel: [640466.131860] [ 1377]     0  1377     3955      406      13       3       38             0 getty  Jan 27 04:34:36 kernel: [640466.131861] [ 1383]     0  1383     3955      406      13       3       39             0 getty  Jan 27 04:34:36 kernel: [640466.131863] [ 1384]     0  1384     3955      418      13       3       37             0 getty  Jan 27 04:34:36 kernel: [640466.131864] [ 1386]     0  1386     3955      418      12       3       38             0 getty  Jan 27 04:34:36 kernel: [640466.131866] [ 1403]     0  1403    15346      735      34       3      142         -1000 sshd  Jan 27 04:34:36 kernel: [640466.131868] [ 1436]     0  1436     4825      408      13       3       28             0 irqbalance  Jan 27 04:34:36 kernel: [640466.131869] [ 1440]     0  1440     1093      379       8       3       35             0 acpid  Jan 27 04:34:36 kernel: [640466.131871] [ 1442]     0  1442     4785      176      14       3       38             0 atd  Jan 27 04:34:36 kernel: [640466.131872] [ 1443]     0  1443     5914      466      17       3       43             0 cron  Jan 27 04:34:36 kernel: [640466.131874] [ 1464]   105  1464    61957     4409      59       3      254          -900 postgres  Jan 27 04:34:36 kernel: [640466.131876] [ 1561]   107  1561     7864      657      21       3      113             0 ntpd  Jan 27 04:34:36 kernel: [640466.131877] [ 1834]  1004  1834  2002886   692883    1549      10    12598             0 java  Jan 27 04:34:36 kernel: [640466.131879] [ 1921]   106  1921     5835      452      16       3      112             0 nrpe  Jan 27 04:34:36 kernel: [640466.131880] [ 1943]     0  1943   175986      420      41       4       50             0 nscd  Jan 27 04:34:36 kernel: [640466.131882] [ 1978]   109  1978   111112      309      48       4      213             0 nslcd  Jan 27 04:34:36 kernel: [640466.131883] [ 2007]     8  2007     3172      326      11       3       52             0 nullmailer-send  Jan 27 04:34:36 kernel: [640466.131885] [ 2092]     0  2092    34005     1947      70       3     3067             0 /usr/bin/monito  Jan 27 04:34:36 kernel: [640466.131887] [ 2110]     0  2110     1901      367       9       3       25             0 getty  Jan 27 04:34:36 kernel: [640466.131888] [ 2146] 65534  2146    34005     1101      67       3     3810             0 monitorix-httpd  Jan 27 04:34:36 kernel: [640466.131891] [22428]     0 22428    15436      739      35       4       11             0 cron  Jan 27 04:34:36 kernel: [640466.131892] [22429]     0 22429    15489      749      35       4       12             0 cron  Jan 27 04:34:36 kernel: [640466.131894] [22442]     0 22442     1112      
198       8       3        0             0 sh  Jan 27 04:34:36 kernel: [640466.131895] [22443]  1004 22443     1112      191       8       3        0             0 sh  Jan 27 04:34:36 kernel: [640466.131897] [22444]  1004 22444     3102      748      11       3        0             0 syncDaily.sh  Jan 27 04:34:36 kernel: [640466.131899] [22445]     0 22445     1112      420       8       3        0             0 cron-apt  Jan 27 04:34:36 kernel: [640466.131900] [22465]  1004 22465    54754    27012     113       3        0             0 rsync  Jan 27 04:34:36 kernel: [640466.131902] [22466]     0 22466     1087      171       8       3        0             0 sleep  Jan 27 04:34:36 kernel: [640466.131903] [22467]  1004 22467    34234    26953      72       3        0             0 rsync  Jan 27 04:34:36 kernel: [640466.131905] [22468]  1004 22468    62154    15613     126       3        0             0 rsync  Jan 27 04:34:36 kernel: [640466.131907] [24170]   105 24170    61990    34251     117       3      237             0 postgres  Jan 27 04:34:36 kernel: [640466.131908] [24171]   105 24171    61957    33191     115       3      238             0 postgres  Jan 27 04:34:36 kernel: [640466.131910] [24172]   105 24172    61957     1190      52       3      238             0 postgres  Jan 27 04:34:36 kernel: [640466.131911] [24173]   105 24173    62166     1333      54       3      239             0 postgres  Jan 27 04:34:36 kernel: [640466.131913] [24174]   105 24174    25876      642      48       3      242             0 postgres  Jan 27 04:34:36 kernel: [640466.131915] [24175]   105 24175    62464    35199     120       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131916] [24203]   105 24203    62467    22296     120       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131918] [24266]   105 24266    62475    36452     120       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131920] [24317]   105 24317    62424    17702     119       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131921] [24318]   105 24318    62449    24858     120       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131923] [24320]   105 24320    62485    24779     120       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131925] [24321]   105 24321    62449    27595     120       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131926] [24452]   105 24452    62484    16118     120       3      232             0 postgres  Jan 27 04:34:36 kernel: [640466.131928] Out of memory: Kill process 1834 (java) score 137 or sacrifice child  Jan 27 04:34:36 kernel: [640466.132070] Killed process 1834 (java) total-vm:8011544kB, anon-rss:2763340kB, file-rss:8192kB  

Proxmox VE (routing and port-forwarding issue)

Posted: 08 Jun 2022 08:01 AM PDT

I have installed PVE and received three public IP addresses; two of them are in the same range, the third is in a different range. I wanted to give the PVE host one public IP so it is reachable externally and assign the other two to VMs. I also wanted to create two VMs with private IP addresses and use port forwarding to reach them. Below is my configuration:

auto lo    iface lo inet loopback    auto eth0  iface eth0 inet static    address  x.x.203.141  netmask  255.255.255.128  pointopoint x.x.203.137  gateway  x.x.203.137  broadcast  x.x.203.255  #post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp      iface eth1 inet manual        auto vmbr0      iface vmbr0 inet static        address x.x.203.141      netmask 255.255.255.128      #gateway x.x.203.137      bridge_ports none      bridge_stp on      bridge_fd 0      bridge_maxwait 0           iface vmbr1 inet manual      bridge_ports none      bridge_stp on      bridge_fd 0    up ip route add x.x.203.142/32 dev vmbr0  ##IP of the first VM  up ip route add x.x.220.37/32 dev vmbr1   ## IP of the second VMS    auto vmbr2    iface vmbr2 inet static    address 192.168.0.254  netmask 255.255.255.0  bridge_ports none  bridge_stp on  bridge_fd 0    post-up echo 1 > /proc/sys/net/ipv4/ip_forward    post-up iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o eth0 -j MASQUERADE    post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o eth0 -j MASQUERADE  

Yet I am losing external pings to the host machine, and the second VM, which has a public IP in a different range, has very slow internet. Another problem is that I am not able to SSH to the VM with the private IP address from outside.
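
As a side note, for the VM on the private 192.168.0.0/24 bridge, external SSH normally needs a DNAT rule on the host in addition to the MASQUERADE rule. A minimal sketch (the VM address 192.168.0.10 and the external port 2222 are placeholders, not taken from the configuration above):

# forward TCP port 2222 on the host's public interface to SSH on the private VM
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.0.10:22

# permit the forwarded traffic if the FORWARD chain filters it
iptables -A FORWARD -i eth0 -o vmbr2 -p tcp -d 192.168.0.10 --dport 22 -j ACCEPT

After that, ssh -p 2222 to the host's public IP should reach the VM.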

Thanks for your help in advance!

How can I restart an IIS site by using appcmd commands?

Posted: 08 Jun 2022 08:18 AM PDT

I'm wondering whether it is possible to restart a site in IIS 7.5 via appcmd commands. To list all the available sites, I had success using:

appcmd.exe list site
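
If it helps, appcmd has matching stop and start verbs for sites, so a restart is typically the two commands back to back (the site and pool names below are placeholders):

%windir%\system32\inetsrv\appcmd.exe stop site /site.name:"Default Web Site"
%windir%\system32\inetsrv\appcmd.exe start site /site.name:"Default Web Site"

:: often recycling the application pool is what is actually wanted
%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"DefaultAppPool"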

How do I set locale when building an Ubuntu Docker image with Packer?

Posted: 08 Jun 2022 09:08 AM PDT

I'm using Packer to build a Docker image based on Ubuntu 14.04, i.e., in my Packer template I have:

"builders": [{      "type": "docker",      "image": "ubuntu",      "commit": true  }],  

and I build it using:

$ packer build my.json  

What do I need to put in the template to get a specific locale (say en_GB) to be set when I subsequently run the following?

$ sudo docker run %IMAGE_ID% locale  

Additional info

As it stands, I get:

LANG=  LANGUAGE=  LC_CTYPE="POSIX"  LC_NUMERIC="POSIX"  LC_TIME="POSIX"  ...  LC_IDENTIFICATION="POSIX"  LC_ALL=  

which causes a few problems for things I want to do next, like installing certain Python packages.

I've tried adding:

{      "type": "shell",      "inline": [          "locale-gen en_GB.UTF-8",          "update-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB.UTF-8 LC_ALL=en_GB.UTF-8"      ]  }  

but while that does set up the locale config, it doesn't affect the environment used by docker run. Even if I add extra export lines like:

{      "type": "shell",      "inline": [      ...          "export LANG=en_GB.UTF-8"      ]  }  

they have no effect, presumably because a container started with docker run is not a child process of the shell that packer build used to run these commands initially.

As a workaround I can pass env vars to docker run, but I don't want to have to do that each time, e.g.:

sudo docker run -e LANG=en_GB.UTF-8 -e LANGUAGE=en_GB.UTF-8 -e LC_ALL=en_GB.UTF-8 %IMAGE_ID% locale  
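
One way to avoid passing the variables on every docker run, assuming a Packer version whose Docker builder supports the changes option (which applies Dockerfile-style metadata such as ENV at commit time), is to bake them into the image itself. A sketch:

"builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": true,
    "changes": [
        "ENV LANG=en_GB.UTF-8",
        "ENV LANGUAGE=en_GB.UTF-8",
        "ENV LC_ALL=en_GB.UTF-8"
    ]
}],
"provisioners": [{
    "type": "shell",
    "inline": ["locale-gen en_GB.UTF-8", "update-locale LANG=en_GB.UTF-8"]
}]

With that in place, sudo docker run %IMAGE_ID% locale should report en_GB.UTF-8 without any -e flags.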

SSH client option to suppress server banners?

Posted: 08 Jun 2022 08:41 AM PDT

I've read Stop ssh login from printing motd from the client?; however, my situation is a bit different:

  • I want to keep Banner /path/to/sometxt serverside
  • I would like to pass an option under specific conditions so that the Banner is not printed (e.g. ssh -o "PrintBanner=No" someserver).

Any idea?
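
As far as I know there is no PrintBanner client option in OpenSSH, but lowering the client log level generally suppresses the banner while leaving the server-side Banner directive in place. A sketch (behaviour may vary slightly between OpenSSH versions):

# one-off: messages below ERROR severity, including the banner, are not printed
ssh -o LogLevel=error someserver

# or per host in ~/.ssh/config
Host someserver
    LogLevel ERROR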

Using ssh-agent with KDE?

Posted: 08 Jun 2022 08:19 AM PDT

I had this working once before, but for some reason it's not working on my new system.

In .kde4/Autostart/ I have a symlink to ssh-agent called 01-sshagent and then a simple script called 02-sshkeys that looks like this:

/usr/bin/ssh-add $(find $HOME/.ssh/keys -type f | egrep -v '\.pub$')  

The problem seems to be that at startup, ssh-agent runs all right, but KDE doesn't capture its output and store it in the environment, so for every Konsole session I have to run ps to find the PID and then manually type:

SSH_AUTH_SOCK=/tmp/ssh-YtvdiEtW3065/agent.3065; export SSH_AUTH_SOCK;  SSH_AGENT_PID=<pidnumber>; export SSH_AGENT_PID;  

...just to get it to work, and it does... just in that Konsole window.

I've tried removing the aforementioned symlink and just having the ssh script look like this:

/usr/bin/ssh-agent | sh  /usr/bin/ssh-add $(find $HOME/.ssh/keys -type f | egrep -v '\.pub$')  

But still, the agent variables aren't in the session and I'm never prompted for the password to my keys.

I'm obviously missing something, but what is it?
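
For comparison, the usual pattern is to have the shell eval the agent's output so SSH_AUTH_SOCK and SSH_AGENT_PID are actually exported, and on KDE4 this generally has to happen in a script under ~/.kde4/env/ (which startkde sources before the session starts) rather than in Autostart, where it runs as a separate process whose exports are lost. A sketch, with the keys path taken from the script above:

# ~/.kde4/env/ssh-agent.sh -- sourced before the session, so the exports persist
eval "$(/usr/bin/ssh-agent -s)"

# ~/.kde4/Autostart/02-sshkeys -- runs after the session environment is set up
/usr/bin/ssh-add $(find "$HOME/.ssh/keys" -type f ! -name '*.pub')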
