Wednesday, February 16, 2022

Recent Questions - Server Fault


Readiness probe failed: HTTP probe failed with statuscode: 503 after installing Cilium on Kubernetes cluster

Posted: 16 Feb 2022 02:34 AM PST

I'm new to the Kubernetes world. Following the Install Kubernetes guide, I installed a Kubernetes cluster with 1 master node and 2 worker nodes, using kubeadm on the master node on my local machine. After installing Cilium (Quick Installation), my coredns pod is Running but not ready.

[root@master-node ~]# kubectl describe pod coredns-64897985d-cfpvn -n kube-system
Name:                 coredns-64897985d-cfpvn
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 node-2/10.193.160.170
Start Time:           Tue, 15 Feb 2022 08:12:33 -0500
Labels:               k8s-app=kube-dns
                      pod-template-hash=64897985d
Annotations:          <none>
Status:               Running
IP:                   10.0.1.95
IPs:
  IP:           10.0.1.95
Controlled By:  ReplicaSet/coredns-64897985d
Containers:
  coredns:
    Container ID:  docker://2283b42b559c95513f3bfd8c90491b3c8acc55b5542a88d22d2fceaa8c347f9a
    Image:         k8s.gcr.io/coredns/coredns:v1.8.6
    Image ID:      docker-pullable://k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Tue, 15 Feb 2022 08:12:35 -0500
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b64rl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-b64rl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  2m1s                default-scheduler  Successfully assigned kube-system/coredns-64897985d-cfpvn to node-2
  Normal   Pulled     119s                kubelet            Container image "k8s.gcr.io/coredns/coredns:v1.8.6" already present on machine
  Normal   Created    119s                kubelet            Created container coredns
  Normal   Started    119s                kubelet            Started container coredns
  Warning  Unhealthy  1s (x15 over 119s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503

Logs of the pod:

[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 62173616170875592.2350229940780337967. HINFO: read udp 10.0.1.95:45853->10.192.0.250:53: read: no route to host
[ERROR] plugin/errors: 2 62173616170875592.2350229940780337967. HINFO: read udp 10.0.1.95:44728->10.192.0.250:53: i/o timeout
[ERROR] plugin/errors: 2 62173616170875592.2350229940780337967. HINFO: read udp 10.0.1.95:45554->10.193.0.250:53: i/o timeout
[ERROR] plugin/errors: 2 62173616170875592.2350229940780337967. HINFO: read udp 10.0.1.95:57310->10.193.0.250:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 62173616170875592.2350229940780337967. HINFO: read udp 10.0.1.95:47540->10.193.0.250:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
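The "no route to host" / "i/o timeout" lines show CoreDNS cannot reach the node's upstream resolver, which usually points at the CNI datapath or host firewall rather than at CoreDNS itself. A hedged diagnostic sketch, assuming the cilium CLI is installed:

# confirm Cilium itself reports healthy on every node
cilium status --wait

# recreate the CoreDNS pods so they are wired up through the new CNI
kubectl -n kube-system rollout restart deployment/coredns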

How to link the IP address from one private network through a public network to another private network

Posted: 16 Feb 2022 02:31 AM PST

I would like the user to be able to find all PLCs on a range of consecutive IP addresses in their network. The user network and PLC networks are private and separated.

[image: network interface diagram]

The network is set up with a managed switch and a router-on-a-stick. Every PLC has its own network in a VLAN. The user network is also in its own VLAN. At a later stage this will be further isolated with a VLAN ACL.

A NAT is used for the PLC networks to make all PLCs available on their own IP on the public network.

At the moment I'm testing whether I can reach IP 192.168.10.5 (NAT of PLC1 on the public network) from the user network. I've tried to NAT 172.16.11.11 -> 192.168.10.5 (private to public) and ping 172.16.11.11 from the user PC. I've also tried to NAT 192.168.10.5 -> 172.16.11.11 (public to private).

With no NAT configured on VLAN200 I can reach 192.168.10.5 from the user PC.

How to choose a valid VIP for kube-vip HA Cluster setup?

Posted: 16 Feb 2022 02:15 AM PST

I am following this reference https://kube-vip.io/control-plane/ for an HA cluster setup. I did this setup using my DigitalOcean droplets. I created 3 droplets (2 for master, 1 for worker, for testing purposes); say those machines are master-1, master-2 & worker-1.

The issue was that I used the IPv4 address of master-1 as the Virtual IP (VIP) in kubeadm init, so as soon as I deleted master-1, the cluster was gone. This led me to the conclusion that I cannot use the IP of one of my remote machines as the VIP.

So, how can I get a correct value for the VIP?

The documentation uses 192.168.0.75, but doesn't mention how they got this IP.

I tried using an IP from the VPC range, say 10.x.y.z (the VPC is, of course, the same for all the droplets). But I am not able to do kubeadm join with this IP; it always times out.

How do I get a valid VIP value?
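For context, the VIP is just an otherwise-unused address on the same L2 subnet as the control-plane nodes; kube-vip answers ARP for it on whichever node is the current leader. A hedged sketch for vetting a candidate (10.114.0.250 and eth0 are made up); note that some clouds filter ARP for addresses they did not assign, which would explain a join timeout:

# no ICMP reply and no ARP reply suggest the candidate address is free
ping -c 1 -W 1 10.114.0.250 || echo "no ICMP reply"
sudo arping -c 3 -I eth0 10.114.0.250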

Additional Bind caching nameserver next to domain controllers

Posted: 16 Feb 2022 02:04 AM PST

In an environment where I already have two domain controllers acting as name servers, I would like to run a BIND caching server installed on Ubuntu 20.04.3 LTS to act as the nameserver for specific hosts. The domain controllers use the ISP's DNS as forwarders.

I was using this tutorial: https://kifarunix.com/setup-caching-only-dns-server-using-bind9-on-ubuntu-20-04/

Ubuntu Server with Bind IP: 192.168.1.240, DC1: 192.168.1.180, DC2: 192.168.1.250

My /etc/bind/named.conf.options:

//DNS Server ACL
acl "trusted" {
        192.168.8.0/24;
};

options {
        directory "/var/cache/bind";

        recursion yes;
        allow-recursion { localhost; trusted; };
        listen-on port 53 { localhost; 192.168.1.240; };
        allow-query { localhost; trusted; };
        allow-transfer { none; };

        // If there is a firewall between you and nameservers you want
        // to talk to, you may need to fix the firewall to allow multiple
        // ports to talk.  See http://www.kb.cert.org/vuls/id/800113

        // If your ISP provided one or more IP addresses for stable
        // nameservers, you probably want to use them as forwarders.
        // Uncomment the following block, and insert the addresses replacing
        // the all-0's placeholder.

        // forwarders {
        //        0.0.0.0;
        // };

        //========================================================================
        // If BIND logs error messages about the root key being expired,
        // you will need to update your keys.  See https://www.isc.org/bind-keys
        //========================================================================
        dnssec-validation auto;

        listen-on-v6 { none; };
};

My /etc/resolv.conf

nameserver 127.0.0.53
options edns0 trust-ad
search wielton.corp

Output of "systemd-resolve --status" on Bind server:

Global
         LLMNR setting: no
  MulticastDNS setting: no
    DNSOverTLS setting: no
        DNSSEC setting: no
      DNSSEC supported: no
            DNSSEC NTA: 10.in-addr.arpa
                        16.172.in-addr.arpa
                        168.192.in-addr.arpa
                        17.172.in-addr.arpa
                        18.172.in-addr.arpa
                        19.172.in-addr.arpa
                        20.172.in-addr.arpa
                        21.172.in-addr.arpa
                        22.172.in-addr.arpa
                        23.172.in-addr.arpa
                        24.172.in-addr.arpa
                        25.172.in-addr.arpa
                        26.172.in-addr.arpa
                        27.172.in-addr.arpa
                        28.172.in-addr.arpa
                        29.172.in-addr.arpa
                        30.172.in-addr.arpa
                        31.172.in-addr.arpa
                        corp
                        d.f.ip6.arpa
                        home
                        internal
                        intranet
                        lan
                        local
                        private
                        test

Link 2 (ens160)
      Current Scopes: DNS
DefaultRoute setting: yes
       LLMNR setting: yes
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
  Current DNS Server: 192.168.1.180
         DNS Servers: 192.168.1.180
                      192.168.1.250
          DNS Domain: mydomain.name

I test BIND from a client in the 192.168.8.0 subnet (added as trusted in named.conf.options). It is possible to resolve IPs and names of external domains (I suppose BIND uses root hints to do that), but a query for the local domain ends with Non-existent Domain.

I left the forwarders commented out in named.conf.options, but I see no difference when I uncomment them and add the DC1 and DC2 IPs there. When recursion is set to no, external domains aren't resolved. Maybe there is something to do on the domain controllers, such as "Enable BIND secondaries"?

Please advise.
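Since the Active Directory zone exists only on the domain controllers, root-hint recursion can never find it; a conditional forward zone should help. A hedged sketch for named.conf.local (the zone name is taken from the systemd-resolve output above):

zone "mydomain.name" {
        type forward;
        forward only;
        forwarders { 192.168.1.180; 192.168.1.250; };
};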

Use cgroups to limit *per process*

Posted: 16 Feb 2022 02:03 AM PST

I need to limit the CPU usage of each individual process in a group (not just all processes in the same group).

My current config is like this, and I think it limits the total CPU usage of the processes in the group. I have tried searching for this, but haven't really found anything.

group browsers {
    cpu {
        cpu.cfs_quota_us=1000000000;
    }
}

cgrules.conf:

# user:process                                          subsystems      group
myuser:<...>/chrome                                     cpu             browsers
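Worth noting: cgroups meter the group as a whole, so a per-process cap needs one cgroup per process. A hedged sketch using systemd, which creates a transient scope (its own cgroup) per invocation:

# each invocation gets its own scope unit, so each chrome instance
# is capped at half a CPU independently of the others
systemd-run --user --scope -p CPUQuota=50% google-chrome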

Even though I am updating the DNS in domain.com it doesn't change

Posted: 16 Feb 2022 01:10 AM PST

I am trying to redirect my emails to Titan mail and have updated the DNS records accordingly by adding an MX and a TXT record, but when I try to verify the domain it says they are unverified. I have also waited more than 24 hours to see if anything changes, but nothing took place.

[image: dns and nameservers]

[image: dns verification status]

How to configure service log format?

Posted: 16 Feb 2022 01:02 AM PST

I have a service, for example:

[Unit]
Description=My app
PartOf=myorg.target
After=network-online.target

[Service]
Type=notify
User=myuser
NotifyAccess=main
ExecStart=/usr/bin/my_app
Restart=always
RestartSec=1s

[Install]
WantedBy=myorg.target
Alias=myapp.service

Its logs are visible in journalctl. One log example:

Feb 16 08:32:11 mycomputer-0013952a677a hapic[410613]: TheLogMessage  

The format looks like:

<date> <hostname> <logger name>[<pid>]: <message>

How can I configure this format? My goal is to add the service alias to the log format.
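For what it's worth, the name in that position is the syslog identifier, which journald defaults to the process's executable name; it can be overridden per unit. A hedged sketch (the value myapp is an assumption matching the unit's alias):

[Service]
SyslogIdentifier=myapp

After a daemon-reload and a restart, journal lines should read myapp[...] instead of the executable name.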

Jenkins pipeline using docker agent can't push on artifactory due to jvm cacert

Posted: 16 Feb 2022 01:01 AM PST

I need to push some jar files produced during a Jenkins pipeline to JFrog; below is the code:

stage ('Artifactory configuration') {
    when { expression { params.runDelivery } }
    steps {
        rtServer (
            id: "artifactory",
            url: "https://jfroglocal/artifactory",
            credentialsId: "jfrog"
        )

        rtMavenDeployer (
            id: "MAVEN_DEPLOYER",
            serverId: "artifactory",
            releaseRepo: "example-repo-local",
            snapshotRepo: "example-repo-local"
        )
    }
}

Here is the error:

org.apache.maven.cli.MavenCli - Skipping deployment of remaining artifacts (if any) and build info. sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target etc

If I run the pipeline directly on the "jenkins slave server", the error disappears after linking /usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts to /etc/ssl/certs/java/cacerts.

If I run the same pipeline from a docker agent, the error persists; below is the declared agent:

agent {
    docker {
        label 'Ubuntu-20.04-Slave'
        image 'node:10'
        args '-u root'
    }
}

How can I link the cacerts file (of the Jenkins slave) into the container?

Is there a way to configure Jenkins to share the JVM cacerts with the container?
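One hedged approach (paths taken from the question; the in-container JVM path is an assumption that must match the image actually running Maven): bind-mount the slave's truststore over the container's cacerts via the docker agent's args:

agent {
    docker {
        label 'Ubuntu-20.04-Slave'
        image 'node:10'
        // read-only bind-mount of the host truststore over the JVM cacerts inside the container
        args  '-u root -v /etc/ssl/certs/java/cacerts:/usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts:ro'
    }
}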

route to non-connected gateway

Posted: 16 Feb 2022 12:53 AM PST

I have:

  • a linux box: 10.20.0.2
  • a gateway which I have no access to: 10.20.0.1
  • a Fortigate: 10.10.0.1

I would like to provide internet to 10.20.0.2 through 10.10.0.1

I can't add a route, since 10.10.0.1 is not directly connected. How do I achieve this? I am thinking of a tunnel between 10.20.0.2 and 10.10.0.1, like GRE, but I'm not sure this is a good idea...

   ******  ****
 **            **              ┌───────────┐                 ┌───────────┐       ┌───────────┐
*    0.0.0.0    *──────────────┤ 10.10.0.1 ├────IPSEC─VPN────┤ 10.20.0.1 ├───────┤ 10.20.0.2 │
 **            **              └───────────┘                 └───────────┘       └───────────┘
   ******  ****
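For what it's worth, GRE can work here because the tunnel's outer packets can still travel via the existing gateway 10.20.0.1 and its VPN. A hedged sketch of the Linux side only (the 192.168.255.0/30 transfer network is made up; the Fortigate needs a matching GRE interface and policy):

# keep the tunnel endpoint reachable through the existing gateway
ip route add 10.10.0.1/32 via 10.20.0.1

# GRE tunnel from the Linux box to the Fortigate
ip tunnel add gre1 mode gre local 10.20.0.2 remote 10.10.0.1 ttl 64
ip addr add 192.168.255.2/30 dev gre1
ip link set gre1 up

# send internet traffic through the tunnel
# (replace any existing default route first, or this add will fail)
ip route add default via 192.168.255.1 dev gre1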

Missing Globalization Cultures on IIS8.5 / Server 2012R2

Posted: 16 Feb 2022 12:45 AM PST

I'm trying to set the culture of a website to en-ZM in the web.config of the site but I get an error:

The <globalization> tag contains an invalid value for the 'culture' attribute.

When I try to set it via IIS' interface, it seems Zambia isn't listed in the available options, although other African countries like Zimbabwe are (en-ZW).

My local machine is running Win10 and has this culture available for configuration so I hadn't expected this to be an issue. Is there any update I can manually install to add the missing cultures?

What is the IP address value in the listen field?

Posted: 16 Feb 2022 01:08 AM PST

I'm looking at an example from here: http://nginx.org/en/docs/http/request_processing.html

The listen value is the IP and port. Does this refer to the IP address of the client or the IP address of the target server? If it's the latter, does this mean that one machine can have more than one IP?

server {
    listen      192.168.1.1:80;
    server_name example.org www.example.org;
    ...
}

server {
    listen      192.168.1.1:80;
    server_name example.net www.example.net;
    ...
}

server {
    listen      192.168.1.2:80;
    server_name example.com www.example.com;
    ...
}
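It is the local (server-side) address that nginx binds to, and a single machine can indeed hold several addresses. A hedged illustration (interface name eth0 assumed):

# two addresses on one interface - the listen directives above each bind one of them
ip addr add 192.168.1.1/24 dev eth0
ip addr add 192.168.1.2/24 dev eth0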

How to allow certbot to be able to access http://myapi.com/.well-known/acme-challenge/2d8dvxv8x9dvxd9v via nginx?

Posted: 16 Feb 2022 02:04 AM PST

My nginx.conf file is as follows:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
#the above include brings in the following default files:
#50-mod-http-image-filter.conf
#50-mod-http-xslt-filter.conf
#50-mod-mail.conf
#50-mod-stream.conf

events {
        worker_connections 500;
}

http {
    include        /etc/nginx/proxy.conf;
    limit_req_zone $binary_remote_addr zone=one:10m rate=100r/m;
    server_tokens  off;

    sendfile on;
    keepalive_timeout   30;
    client_body_timeout 10; client_header_timeout 10; send_timeout 10;

    upstream myapp{
        server 127.0.0.1:5000;
    }

    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name myapi.com;
        ssl_certificate /etc/letsencrypt/live/myapi.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/myapi.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;

        #Redirects all traffic
        location / {
            proxy_pass http://myapi;
            limit_req  zone=one burst=10;
        }
    }
}

I installed certbot and certbot-nginx (Ubuntu).

SSL is working fine. Firewall only allows port 443.

I am trying to renew the certbot certificate with command: sudo certbot renew --dry-run

This tries to verify that I own the domain by making a request to http://myapi.com/.well-known/acme-challenge/2d8dvxv8x9dvxd9v (note: I have obfuscated the key value 2d8dvxv8x9dvxd9v as this is something private)

But this times out. So I have enabled port 80 and added the following additional server block:

server {
    listen 80;
    server_name myapi.com;
    return 301 https://$host$request_uri;
}

Now the certbot renew command (sudo certbot renew --dry-run) is able to renew the certificate.

  1. Where is the .well-known/acme-challenge path? Is it generated/deleted on the fly?

  2. For my application, I only want to serve https requests and block all http requests, other than the certbot renewal. What is the change I need to make to the nginx config? (A sketch follows below.)
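On question 2, a commonly used pattern is to keep port 80 open only for the challenge path and redirect everything else; a hedged sketch (the webroot path /var/www/certbot is an assumption and must match the path certbot is told to use, e.g. certbot renew --webroot -w /var/www/certbot):

server {
    listen 80;
    server_name myapi.com;

    # serve ACME challenge files over plain http
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # everything else is forced to https
    location / {
        return 301 https://$host$request_uri;
    }
}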

nginx is not respecting the server_name value

Posted: 16 Feb 2022 01:16 AM PST

server {
    ..
    server_name some_other_domain_name.com;
    ..
}

I have mapped my domain name to the public IP of my VM via GoDaddy.

When I enter the domain name in the browser, it is able to access the website hosted on the VM (via nginx). However, I was expecting that the request would not be allowed by nginx, because the server_name property is set to some_other_domain_name.com.

Does nginx not check the server_name property?
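nginx does check server_name, but only to choose between server blocks; when nothing matches, the request still lands in the default (here: the only) server for that port. A hedged sketch of an explicit catch-all that refuses unknown hosts:

server {
    listen 80 default_server;
    server_name _;
    return 444;   # nginx-specific: close the connection without a response
}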

Ansible playbook to post message into kafka topic

Posted: 16 Feb 2022 12:37 AM PST

Playbook 1:

---
- name: Message into topic
  hosts: web1
  become: yes
  tasks:
    - name: post message
      expect:
        shell: "/usr/local/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic"
        responses:
          Question:
            - "Hi there"

Playbook 2:

- name: Message into topic
  hosts: web1
  gather_facts: false
  tasks:
    - name: post message
      shell:
        command: /usr/local/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic
        responses:
          (?i)Message: "Hello From Playbook"

I tried the above two playbooks and ended up with some errors; I am unable to find a proper solution for this.

root@ip-172-31-83-195:/usr/local/kafka# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic  

hello there Hi there

This is the command I am trying to run in the playbook; it opens an interactive prompt (Ctrl+C is how you come out of it). If you have any examples that can handle prompts like the above command, please let me know how to use them in a playbook. Thanks in advance!

Errors:

Error: For Playbook 1

root@ip-172-31-87-7:~# ansible-playbook Playbook_to_post_message_into_the_topic.yaml

PLAY [Message into topic] ***********************************************************************************************************************************

TASK [post message] *****************************************************************************************************************************************
fatal: [172.31.83.195]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"}, "changed": true, "cmd": "/bin/bash -c \"/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 54.87.252.89:9092 --topic testTopic\"", "delta": "0:00:30.407263", "end": "2022-02-16 08:32:24.990783", "msg": "non-zero return code", "rc": 129, "start": "2022-02-16 08:31:54.583520", "stdout": ">[2022-02-16 08:32:07,214] WARN [Producer clientId=console-producer] Bootstrap broker 54.87.252.89:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)\r\n[2022-02-16 08:32:24,532] WARN [Producer clientId=console-producer] Bootstrap broker 54.87.252.89:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)", "stdout_lines": [">[2022-02-16 08:32:07,214] WARN [Producer clientId=console-producer] Bootstrap broker 54.87.252.89:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)", "[2022-02-16 08:32:24,532] WARN [Producer clientId=console-producer] Bootstrap broker 54.87.252.89:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)"]}

PLAY RECAP **************************************************************************************************************************************************
172.31.83.195              : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Error: for Playbook 2

PLAY [Message into topic] ***********************************************************************************************************************************

TASK [post message] *****************************************************************************************************************************************
fatal: [172.31.83.195]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"}, "changed": false, "msg": "Unsupported parameters for (command) module: command, responses Supported parameters include: _raw_params, _uses_shell, argv, chdir, creates, executable, removes, stdin, stdin_add_newline, strip_empty_ends, warn"}

PLAY RECAP **************************************************************************************************************************************************
172.31.83.195              : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
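For what it's worth, kafka-console-producer.sh reads messages from stdin, so the interactive prompt (and the expect module) can be avoided entirely; a hedged sketch:

- name: Message into topic
  hosts: web1
  gather_facts: false
  tasks:
    - name: post message
      shell: |
        echo "Hello From Playbook" | /usr/local/kafka/bin/kafka-console-producer.sh \
          --broker-list localhost:9092 --topic testTopic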

While creating App Engine firewall rules, how to get the max existing firewall priority from the rule list

Posted: 16 Feb 2022 12:58 AM PST

In creating an App Engine firewall rule, we need the priority number.

While adding a new rule to the firewall, our code checks a database for the latest priority number and calculates the next number by incrementing it. In case this fails, or the value is edited in the console, firewall rule creation might fail. Hence, an API call should be made as a backup measure that gets the max firewall priority from the console.

Is there a specific API given in the documentation for this? Or do we have to list all the rules and then find the latest rule priority?
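As far as I know there is no dedicated max-priority endpoint, so listing and sorting is the way; a hedged sketch with gcloud (the filter excludes the built-in default rule, which sits at priority 2147483647):

gcloud app firewall-rules list \
  --filter="priority < 2147483647" \
  --sort-by=~priority --limit=1 \
  --format="value(priority)"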

LDAP: Why does slapcat truncate my slapd.log file?

Posted: 16 Feb 2022 12:06 AM PST

I have two identical OpenLDAP 2.4 servers running on Ubuntu 18.04 LTS. On one of them, every time I run

# slapcat -l test.ldif  

my slapd.log file gets truncated (i.e. previous log messages are deleted and new ones are written at the beginning of the file).

Actually, the first line of slapd.log shows slapcat's output:

# head /var/log/slapd.log
620ca0f1 The first database does not allow slapcat; using the first available one (2)
Feb 16 08:00:15 srv21449 slapd[2096]: conn=274238 op=552602 SRCH base="" scope=0 deref=0 filter="(objectClass=*)"
Feb 16 08:00:15 srv21449 slapd[2096]: conn=274238 op=552602 SRCH attr=1.1

That leads me to think the truncation of the file happens even before slapcat runs (!).

The second server keeps log contents after slapcat, as you would normally expect.

As far as I understand, slapcat doesn't have anything to do with logging (which is done by the slapd daemon), so I believe I'm missing something... Any ideas out there?

Edit: I run the slapcat command with slapd running. At the moment I am not able to stop the service to check whether this also happens when the daemon is not running.

Failed instance in google compute engine

Posted: 16 Feb 2022 12:56 AM PST

I have a GCE instance which has been running for several years. During the night, the instance was restarted, with the following logs:

2022-02-13 04:46:36.370 CET compute.instances.hostError Instance terminated by Compute Engine.
2022-02-13 04:47:08.279 CET compute.instances.automaticRestart Instance automatically restarted by Compute Engine.

However, the instance did not come back up.

I can connect to the serial console where I see this:

serialport: Connected to ***.europe-west1-b.*** port 1 (
[ TIME ] Timed out waiting for device ***
[DEPEND] Dependency failed for File… ***.
[DEPEND] Dependency failed for /data.
[DEPEND] Dependency failed for Local File Systems.
[  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
[  OK  ] Stopped Forward Password R…uests to Wall Directory Watch.
[  OK  ] Reached target Timers.
         Starting Raise network interfaces...
[  OK  ] Closed Syslog Socket.
[  OK  ] Reached target Login Prompts.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Sockets.
[  OK  ] Started Emergency Shell.
[  OK  ] Reached target Emergency Mode.
         Starting Create Volatile Files and Directories...
[  OK  ] Finished Create Volatile Files and Directories.
         Starting Network Time Synchronization...
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Finished Update UTMP about System Boot/Shutdown.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Finished Update UTMP about System Runlevel Changes.
[  OK  ] Started Network Time Synchronization.
[  OK  ] Reached target System Time Set.
[  OK  ] Reached target System Time Synchronized.
         Stopping Network Time Synchronization...
[  OK  ] Stopped Network Time Synchronization.
         Starting Network Time Synchronization...
[  OK  ] Started Network Time Synchronization.
[  OK  ] Finished Raise network interfaces.
[  OK  ] Reached target Network.
[  OK  ] Reached target Network is Online.
You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to r
Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details.
Press Enter to continue.

It seems that one of the disks cannot be connected, but what can I do about it now? The disk appears to be normally available within Compute Engine.
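One hedged way out, since the root account is locked on the serial console: attach the disk to a rescue instance and mark the /data entry in /etc/fstab as nofail, so a missing device no longer drops boot into emergency mode (the UUID below is a placeholder):

# /etc/fstab - nofail lets boot continue even if this device is absent
UUID=replace-with-real-uuid  /data  ext4  defaults,nofail  0  2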

GCP: How to delete project that is assigned to "No Organisation"

Posted: 16 Feb 2022 02:53 AM PST

I have been using Google cloud services for a few simple things over the years. Recently I've started to explore using GCP for learning about AI and I'm trying to get my projects in order.

For some historical reasons I appear to have a couple of projects that are assigned to "No Organization", and even though I have full admin rights on the G Suite I don't seem to be able to get the permissions required to migrate them to the proper organisation, nor to delete them.

I don't even appear to be able to see the permissions or enable APIs required to change them.

Can anyone tell me how to get rid of them?

Additional Information:

I'm the only person who has created projects associated with this G Suite domain. I did briefly create a second identity (since deleted) to work with projects; it is possible (though unlikely) that one of these two projects was created by that identity. The other project must have been created by my normal identity.

I can't tell whether I am currently the IAM Owner of the projects, I should be theoretically but I can't see the permissions. Everything I attempt to do with the projects complains about my not having the correct permissions.

"No organization" appears to be a state defined/created by Google.

Update 2:

If I go to the Google Cloud Platform Console and open the console left side menu, and click IAM & Admin for either of these projects it lists no owner/users and all the information is blank because I don't have permission.

If I try to go to project settings I'm told I don't have permission to view the settings and I would need to contact support (which I don't have permission to do).

I don't really understand what No Organization means in this case as the projects seem to be attached to my domain.

GCP Manage Resources

Bind/Unbind PCI device on the Ubuntu host

Posted: 16 Feb 2022 02:01 AM PST

I have two NIC devices on the host:

# list Ethernet PCI devices to find out names
lspci -nn | grep Ethernet
# 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 06)
# 05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)

# to get <domain>.<bus>.<slot>.<function>
lspci -n -s 0000:04:00.0
# 04:00.0 0200: 10ec:8168 (rev 06)

And I want to pass through the device 0000:04:00.0 to a KVM Ubuntu 20.04 virtual machine. The host must not see this device while the VM is running. To bind the PCI NIC to the guest I successfully followed the VFIO ("Virtual Function I/O") instructions. But unbinding it from the guest back to the host proved more difficult.

I bind the NIC device from the host to the guest manually in the following way:

# find Ethernet controller
ls -l /sys/class/net/ | grep pci
# enp3s0 -> ../../devices/pci0000:00/0000:00:1c.5/0000:03:00.0/net/enp3s0
# enp4s0 -> ../../devices/pci0000:00/0000:00:1c.7/0000:04:00.0/net/enp4s0
lspci -nn | grep Ethernet
# 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 06)
# 05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)

# check the IOMMU grouping
for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort | grep '04:00\|05:00'
# /sys/kernel/iommu_groups/12/devices/0000:04:00.0
# /sys/kernel/iommu_groups/13/devices/0000:05:00.0

# find MAC addresses (print via command substitution so the pipeline isn't lost)
for eth in enp4s0 enp5s0; do echo "MAC $eth: $(ifconfig $eth | awk '/ether/ {print $2}')"; done
# MAC enp4s0: 50:...
# MAC enp5s0: 18:...

# load the vfio-pci device driver
modprobe vfio_pci

# unbind from host
echo 0000:04:00.0 | sudo tee /sys/bus/pci/devices/0000:04:00.0/driver/unbind
# make available for guests
echo 10ec 8168 | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id

# IOMMU group for device 0000:04:00.0 is 12
# add libvirt-qemu to sudoers
sudo usermod -aG sudo libvirt-qemu
sudo chown libvirt-qemu:libvirt-qemu /dev/vfio/12

# look at what other devices are in the group to free it for use by VFIO
ls -l /sys/bus/pci/devices/0000:04:00.0/iommu_group/devices | awk '{print $9 $10 $11}'
# 0000:04:00.0->../../../../devices/pci0000:00/0000:00:1c.5/0000:04:00.0
# also add other devices if they belong to the same group (not needed in this case)

# make sure the device is not visible from host
ls -l /sys/class/net/ | grep enp4s0

Then I created new Ubuntu 20.04 VM using Virtual Machine Manager (virt-manager) to run on KVM.

I added the new device to the VM by editing its XML configuration in virt-manager during creation. In particular, the <devices> section contains the following tag:

<hostdev mode="subsystem" type="pci" managed="yes">
  <driver name="vfio"/>
  <source>
    <address domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
  </source>
  <mac address='50:...'/>
  <address type="pci">
    <zpci uid="0x0001" fid="0x00000000"/>
  </address>
</hostdev>

Then I installed Ubuntu 20.04 in the regular way. The system reboots properly, without deadlocks (black screen).

When I turn off the VM, I want to return the PCI NIC to the host. I did some research on forums, but there are no clear instructions on how to do that.

If I reboot the host, all devices return to the host, so the vfio binding is released. But how can I do that without rebooting the host?
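A hedged sketch of reversing the binding by hand, mirroring the steps above (r8169 is assumed to be the stock driver for this Realtek NIC):

# release the device from vfio-pci and forget the id
echo 0000:04:00.0 | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind
echo 10ec 8168    | sudo tee /sys/bus/pci/drivers/vfio-pci/remove_id

# hand it back to the regular network driver
echo 0000:04:00.0 | sudo tee /sys/bus/pci/drivers/r8169/bind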

What causes - Error: pam...Multiple password values not supported?

Posted: 16 Feb 2022 12:26 AM PST

On a Linux server a user is unable to collect email using Microsoft Office. In /var/log/maillog I see this:

Mar 1 20:49:48 nitrogen dovecot: auth-worker(15749): Error: pam(usern@example.com, 1.2.3.4,<WkxqYjY6G152yDAG>): Multiple password values not supported

followed immediately by Mar 1 20:49:50 nitrogen dovecot: imap-login: Aborted login (auth failed...

I can't find any information about this error, apart from the fact that it seems to be associated with authentication and 2FA.

Can anyone shed some light on what might be the cause? I don't have access to the client computer.

Problems with SSH ProxyCommand execution

Posted: 16 Feb 2022 02:26 AM PST

I'm following this resource to set up something like a proxy server for SSH.

My setup is exactly like the one mentioned in the post:

Remote Server A
Machine-1
Machine-2

Machine-1 is the bastion server, from which I can ssh to the remote server (using an identity file).

Machine-2 can connect to Machine-1 using a password.

Requirement:

Connect to Remote Server(A) from Machine-2 via Machine-1

Machine-2 <--> Machine-1 <--> Remote Server(A)  

Here is how my command looks (running this command from Machine-2):

ssh -o "ProxyCommand=ssh -W user2@%h:%p user1@machine-1" user2@remote_server   

But I have yet to see success.

The error I see is:

channel 0: open failed: connect failed: nodename nor servname provided, or not known
stdio forwarding failed
kex_exchange_identification: Connection closed by remote host

I don't do this often, so I'm not sure what is wrong. But according to the resource, the steps I've done look correct to me.
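One detail that stands out (a hedged observation): -W expects host:port only; the user for the final hop belongs on the outer command:

# ProxyCommand form: -W forwards stdio to %h:%p, without a user@ prefix
ssh -o "ProxyCommand=ssh -W %h:%p user1@machine-1" user2@remote_server

# equivalent on newer OpenSSH
ssh -J user1@machine-1 user2@remote_server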

aws ssm start-session .. AWS-StartPortForwardingSession .. hangs

Posted: 16 Feb 2022 12:06 AM PST

I am trying to set up port forwarding between my local PC and an AWS EC2 instance, following the AWS SSM port forwarding article, like this:

aws ssm start-session --target i-0822c9a6c52ca7394 \
  --document-name AWS-StartPortForwardingSession --parameters \
  '{"portNumber":["55555"],"localPortNumber":["6666"]}'

I already use SSM to access the instance (using ssm-session) and have used it to start python -m SimpleHTTPServer 55555 listening on the port.

The output I get from SSM is just this:

Starting session with SessionId: jakub.holy@481473109573-0dd8f51cc06ef4469

(And, eventually, after a long while: SessionId: jakub.holy@481473109573-0dd8f51cc06ef4469 : Your session timed out due to inactivity and has been terminated. - and I still need to kill it.)

at which point it hangs. I expected, but do not see, the following right after "Starting session...":

Port 6666 opened for sessionId ....

Connection accepted for session .....

Any idea why it is hanging and not establishing the port forwarding?

EC2 not able to ping google.com

Posted: 16 Feb 2022 02:00 AM PST

We created a new VPC for our new architecture, and the VPC has 4 subnets: 2 private and 2 public.

The private and public subnets are spread across the Mumbai A and Mumbai B availability zones.

If I try to ping google.com from public Mumbai A, it is not working, but if I ping google.com from public Mumbai B, it works. I tried with 2 servers in each.

Note: all the servers have the same security configuration.

Does anyone have an idea how to resolve this?
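A hedged first check: a subnet is only "public" if its route table has a default route to the internet gateway, and that is set per subnet, so the two public subnets can differ (the subnet IDs below are placeholders):

aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-aaaa,subnet-bbbb \
  --query 'RouteTables[].Routes'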

how to do basic IMAP setup / troubleshooting in Exchange 2013

Posted: 16 Feb 2022 01:03 AM PST

So I've had an Exchange server (2013 with CU21) running on a Windows Server 2012 R2 box for several years now, without many problems.

We almost exclusively use Outlook-Exchange integration and active sync.

Recently, I have a need to use IMAP on a particular client, but it doesn't seem to work. I thought I'd had IMAP enabled since the original install, and everything I check seems to support that, yet I simply can't connect using IMAP.

Things I've tried:

  1. The IMAP4 service and IMAP4 Backend service are both running, set to automatic. I tried restarting both services.
  2. Connecting via the local LAN, so the external firewall shouldn't be an issue.
  3. Verifying ports 143 and 993 are open. They are. Tried disabling the Windows firewall just for good measure. Nothing.
  4. I'm using a wildcard SSL certificate that works fine for other purposes. I used Get-IMAPSettings to check that the certificate was properly defined, and I used Set-IMAPSettings -X509CertificateName to set it again just to be sure.
  5. I've checked Get-ServerHealth to make sure all IMAP-related processes are Healthy.
  6. I downloaded a script called HealthChecker.ps1 and ran that just for good measure.
  7. Test-IMAPConnectivity fails with the very helpful status of Failure.
  8. I can telnet to 587 (smtp) from another machine on the LAN, but when I telnet to 143 or 993, I get a black terminal (like the entire terminal window resets) and then after about 5 seconds it sends me back to the command prompt, with no messages.
  9. I tried Set-IMAPSettings -ProtocolLogEnabled $true, but my login attempts don't even appear in the log. The only user I see accessing IMAP in the logs is the Health mailbox.
  10. Tried Set-IMAPSettings -ExternalConnectionSetting just for fun.
  11. Verified that IMAP is enabled in ECP for the user that I'm using to test.

What could I be missing here?
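One more check worth adding to the list (hedged; the host name is a placeholder): unlike plain telnet, openssl s_client completes the TLS handshake on 993, so a healthy listener should answer with an IMAP banner:

openssl s_client -connect your-exchange-host:993 -quiet
# expect something like: * OK The Microsoft Exchange IMAP4 service is ready.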

IIS ARR ReverseProxy with Client Certificate Authentication for backend IIS

Posted: 16 Feb 2022 12:06 AM PST

We have legacy SOAP web services (https://dev-ms01/Services/default.asmx), written in ASP.NET 1.1 and hosted on IIS7 (Windows Server 2008 Standard); the web services are consumed by clients that provide a client certificate. For the SSL Certificates settings we have Accept on this IIS.

Client (request with SSL client certificate) --> IIS7 (on host dev-ms01) --> ASP.NET SOAP web services

Now I'm trying to set up a proxy IIS (IIS10 on a Windows Server 2016 64-bit host, secure-dev-ms01) as a reverse proxy for the IIS7. I've followed the MSDN article https://blogs.msdn.microsoft.com/friis/2016/08/25/setup-iis-with-url-rewrite-as-a-reverse-proxy-for-real-world-apps/ to configure URL Rewrite with a reverse proxy as below:

Client (request with SSL client certificate) --> proxy IIS10 server with reverse proxy (on host secure-dev-ms01) --> IIS7 (on host dev-ms01) --> ASP.NET SOAP web services

On the IIS10 (host secure-dev-ms01), for the SSL Certificates settings I've chosen Accept, and I've tried the reverse proxy configuration below (screenshot omitted). When I browse the proxy web services URL https://secure-dev-ms01/Services/default.asmx it prompts for the client certificate, but after providing the client certificate I see the error below:

403 - Forbidden: Access is denied.
You do not have permission to view this directory or page using the credentials that you supplied.

I've tried the second reverse proxy configuration below as well (screenshot omitted), browsed the proxy web services URL https://secure-dev-ms01/Services/default.asmx and provided the client certificate, but I still see the same error. I've also tried unchecking the option Enable SSL Offloading for both of the above reverse proxy configurations, but that didn't work either.

403 - Forbidden: Access is denied.
You do not have permission to view this directory or page using the credentials that you supplied.

I found this MSDN article https://blogs.msdn.microsoft.com/asiatech/2014/01/27/configuring-arr-with-client-certificate/ which suggests changing the SSL Certificates setting to Ignore on the backend server (which we cannot adopt for our organization) and using the certificate from the X-ARR-ClientCert header, but we are trying to avoid making any code changes to the legacy ASP.NET 1.1 services.

I couldn't find any relevant articles on making IIS ARR reverse proxy with client certificate authentication work for a backend IIS using only configuration tweaks on the IIS10 reverse proxy, rather than code/config changes on the backend IIS7. Can someone please help me make this work?

unable to find ecdh parameters

Posted: 16 Feb 2022 02:00 AM PST

I'm working on an SLES 11 SP4 box and trying to connect to the host api.onedrive.com. For the past few days this connection has been broken and returns:

# curl https://api.onedrive.com
curl: (35) error:1408D13A:SSL routines:SSL3_GET_KEY_EXCHANGE:unable to find ecdh parameters

I suspect that the OpenSSL version shipped with SLES 11 cannot handle this connection.
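A quick hedged check that it really is the TLS stack (stock SLES 11 ships OpenSSL 0.9.8, which predates the ECDHE support modern endpoints require):

openssl version            # 0.9.8x on stock SLES 11
curl --version | head -1   # shows which SSL library curl is built against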

Is there some backports repository or another way to make those connections work again?

Clone git repo via https without entering the password manually

Posted: 16 Feb 2022 01:03 AM PST

I've got a stack in Amazon OpsWorks, and within this stack I have a Rails App layer. The repository is only accessible via https and is protected with a username and password. On my machine I can clone the repo via:

git clone https://username:password@XXX.com/repo.git

OpsWorks complains that the URL is invalid because of the colon (:).

Is there a way to set the password explicitly before Chef tries to clone the repo? All the articles I read just describe how to cache credentials, but those have to be typed in manually the first time. I'm looking for a way to fully automate this.

Any ideas?
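One hedged idea: keep the credentials out of the URL with a ~/.netrc on the instance, which git's https transport (via libcurl) reads automatically; the file could be written by a Chef recipe before the deploy:

# written as the deploy user; 'username'/'password' are the repo credentials
cat > ~/.netrc <<'EOF'
machine XXX.com
  login username
  password password
EOF
chmod 600 ~/.netrc

git clone https://XXX.com/repo.git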

What happens when you plug two sides of a cable to a single networking device?

Posted: 16 Feb 2022 02:46 AM PST

What is likely to happen when you plug two ends of a network cable into a single switch/router? Will this create problems on the network, or just be ignored?

Private staff network within public network

Posted: 16 Feb 2022 02:55 AM PST

I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way.

Security is a little tricky; the staff and patron networks need to be kept separate. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs so they didn't share any servers, they need to use the same two printers at the very least.

Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we heavily depend on some cloud-hosted software). Both networks have wireless networks (staff secured with WPA, of course).

tl;dr: We have a public network, and a NATed staff network within it.

My question is; is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget.

(My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)
