Thursday, January 20, 2022

Recent Questions - Server Fault

DFS Replication slow to a single server

Posted: 20 Jan 2022 04:58 AM PST

This is a proper head-scratcher...

A customer has a domain with eight DCs. Two (including the PDC) are Hyper-V VMs in our datacentre, four are vSphere VMs at national site offices (not RODCs) and the other two are also vSphere VMs in a 3rd-party datacentre.

SYSVOL replication is all but instantaneous between the PDC (DC1001) and the site DCs, yet between DC1001 and the other Hyper-V DC (DC1002), replication is taking a couple of hours.

I've checked AD Sites and Services to ensure that all the links are in place and I can see direct connections between the two Hyper-V boxes.

We've put the two Hyper-V boxes on the same host to see if there was a Hyper-V networking problem, but slow replication is persisting.

I've run a DFS Replication health check report and the results are a bit confusing to say the least... Of all eight DCs, none of them have any backlogged receiving transactions, yet all of them except DC1002 have over 300 backlogged sending transactions. DC1002 has no backlog at all, and this is the "slow" node in the web. How can this be?

DFSR Diag commands (with the backlog exception) report everything to be hunky-dory yet there must be an explanation as to why replication is taking hours to show on DC1002.

I'm no Hyper-V expert, so there may well be something I've missed there.

There appears to be no file replication taking place, just SYSVOL.

There are also associated entries in Event Viewer > Applications and Services Logs > DFS Replication

Log Name: DFS Replication
Source: DFSR
Date: 20/01/2022 11:42:06
Event ID: 5014
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: DC1002.domain.local
Description:
The DFS Replication service is stopping communication with partner Site1-001 for replication group Domain System Volume due to an error. The service will retry the connection periodically.

Additional Information:
Error: 1726 (The remote procedure call failed.)

I've looked up this error in relation to DFS replication, but there's a myriad of solutions for all kinds of problems so this seems like an unassailable minefield.

The RPC service is running on all hosts (Auto startup), there are no firewall rules blocking this, and general network connectivity is fine throughout.
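For reference, two quick checks that are often useful with event 5014 / error 1726 (commands are a sketch; the member names are taken from the question) are confirming the RPC endpoint mapper is reachable from the "slow" partner and asking DFSR itself for the per-member backlog:

    # From DC1002, confirm the RPC endpoint mapper on DC1001 is reachable
    Test-NetConnection -ComputerName DC1001 -Port 135

    # Backlog from DC1001 (sending member) to DC1002 (receiving member) for SYSVOL
    dfsrdiag backlog /rgname:"Domain System Volume" /rfname:"SYSVOL Share" /smem:DC1001 /rmem:DC1002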

Any advice or experience with this problem would be greatly appreciated!!!

DHCP issue in Windows Server 2019

Posted: 20 Jan 2022 04:41 AM PST

In a virtual machine on VirtualBox running Windows Server 2019 (named 'Server2019') as the PDC of a domain called 'infordidactica.local', while starting the DHCP configuration a pupil of mine ran into this situation: a DHCP server is listed that doesn't exist, while a server that does exist (server2019.infordidactica.local) doesn't show up in the DHCP servers, even when expressly added. (Screenshot: DHCP problem.)

Any help would be very appreciated; I'm stuck here.

Thanks in advance, Paulo

How to implement dynamic DNS with Google Cloud?

Posted: 20 Jan 2022 04:38 AM PST

I have the following infrastructure: pages.frontend.com/client-id is the business page for client-id and his services. I want to allow him to map his own domain, clientid.com, so that it serves the page at pages.frontend.com/client-id. My domain, frontend.com, is hosted with Cloudflare and mapped to a Google Cloud Run instance via CNAME.

How can I implement this using Google Cloud?

I managed to create client-id.frontend.com using Cloud Run domain mapping, but how can I allow the client to have clientid.com pointing to the Cloud Run instance without adding his domain manually in managed custom domains?

Thank you.

Kyocera ECOSYS M2640idw prints 4 times instead of 1 using CUPS + Raspberry Pi

Posted: 20 Jan 2022 04:01 AM PST

I installed an ECOSYS M2640idw printer on a Raspberry Pi (Raspbian 10) using the PPD provided by the manufacturer, but when I send one copy it prints four copies. I also tried a generic PPD (Generic PCL 6/PCL XL) and it does the same thing.

I attach the PPD and the cups error_log.

PPD: https://drive.google.com/file/d/1uK7ynzNxPicHXTc8D-OEZC_BEA4TxH_Z/view?usp=sharing

cups error_log: https://drive.google.com/file/d/1j47ioan9N1oWrw73kr0YaGuLohJVdISt/view?usp=sharing

Thanks for your help.

How to trigger an event after saving a chosen csv file in Google Cloud Storage bucket

Posted: 20 Jan 2022 03:57 AM PST

To build a synchronous pipeline, I need to copy a csv file within Google Cloud Storage after it has been saved there. The copy job does not have to be triggered right after the save; it can also happen within some time frame, it just may not happen before the file has been saved. Therefore either a trigger event or a cron job is possible, or you may suggest something else.

How can I trigger copying a chosen csv file after it has been saved in Google Cloud Storage? Can I use a Cloud Function to do the copy job or are there other ways?
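A Cloud Function with a Cloud Storage finalize trigger is the usual fit here, since the event only fires once the object has been fully written. A minimal Python sketch (the destination bucket name and the .csv filter are assumptions for illustration):

    from google.cloud import storage

    def copy_uploaded_csv(event, context):
        """Background Cloud Function triggered by google.storage.object.finalize."""
        name = event["name"]
        if not name.endswith(".csv"):
            return  # ignore non-CSV objects
        client = storage.Client()
        src_bucket = client.bucket(event["bucket"])
        dst_bucket = client.bucket("my-destination-bucket")  # hypothetical target bucket
        # copy_blob only needs object metadata; the bytes are copied server-side
        src_bucket.copy_blob(src_bucket.blob(name), dst_bucket, name)

Deployed with something along the lines of gcloud functions deploy copy_uploaded_csv --runtime python39 --trigger-resource <source-bucket> --trigger-event google.storage.object.finalize.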

Apache 2.4.x Reverse Proxy subdirectory and ports issue?

Posted: 20 Jan 2022 03:57 AM PST

I have 3 different applications deployed on the same server, each on a different port:

  1. One java API application running on port 8080. directory (/home/api/)
  2. Second nextjs web app running on port 3000. directory (/home/web)
  3. Third vuejs admin panel application on default port 80 deployed inside subdirectory (/var/www/html/admin)

This is an apache config file test.conf

    ServerName www.test.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    # Setup reverse proxy for all server
    ProxyPreserveHost On

    ProxyPass /admin http://localhost/admin
    ProxyPassReverse /admin http://localhost/admin

    ProxyPass /api http://localhost:8080/api
    ProxyPassReverse /api http://localhost/api

    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost/

API is working fine at http://test.com/api URL.

The website is also working fine at http://test.com URL.

The problem occurs when I access http://test.com/admin URL.

It shows the following error in the browser:

Your browser sent a request that this server could not understand. The size of a request header field exceeds the server limit.  

with status code 400 Bad Request. If I remove the admin panel from the reverse proxy and instead create another vhost file with a simple configuration like the one below:

This is another apache vhost config file test-admin.conf

    ServerName www.test.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/

    #LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    <Directory /var/www/html/admin>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
        Order allow,deny
        Allow from all
    </Directory>

then the other URLs stop working and show a 404 Not Found error.

Note:- www.test.com is just to represent the actual domain name or IP.
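One plausible reading of the 400 error is a proxy loop: ProxyPass /admin http://localhost/admin points back at the same vhost, so each pass adds forwarding headers until the request-header size limit is exceeded. Since the admin panel already lives on this server's filesystem, a hedged sketch that serves it directly and excludes it from proxying (standard Apache directives, paths from the question) might be:

    # Serve /admin from disk and keep it out of the reverse proxy
    ProxyPass /admin !
    Alias /admin /var/www/html/admin
    <Directory /var/www/html/admin>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ProxyPass /api http://localhost:8080/api
    ProxyPassReverse /api http://localhost:8080/api

    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/

The ProxyPass exclusion has to appear before the catch-all ProxyPass / line so that /admin is matched first.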

Unable to deploy and use Google Vision On-Prem

Posted: 20 Jan 2022 03:53 AM PST

After purchasing/subscribing to the Vision On-Prem OCR from Marketplace, I have configured and deployed the application on a GKE cluster. The automatic deployment seems to have encountered an error. Please find the attached screenshots.

Also after successful deployment, we need support in order to access the application. The documentation page is no longer accessible - https://cloud.google.com/vision/on-prem/priv/docs

OCR-Service-Deployment-Failed (screenshot)

Split tunnel on OpenVPN Community for everything except the VPN's internal network

Posted: 20 Jan 2022 03:46 AM PST

As I had very little time to set up remote access, I set up my OpenVPN server using the script available on GitHub.

Install OpenVPN via script

The downside of this script is that all traffic is pushed through the VPN server, so if I want to browse anything on the internet or download "heavy" data, it all goes via the VPN server instead of taking the direct path: that specific website -> my router/ISP -> my PC.

I have found numerous articles, but they are all confusing to follow, as many of them have custom requirements they want to implement.

What I currently have:

My PC at home: 123.123.123.123/30
My VPN server: 223.223.223.223/23
My internal tunnel interface: 10.5.0.1/24
Internal server network: 223.223.223.0/23 (yes, it is unfortunately a public IP range, the same as the server's).

local 223.223.223.223
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
auth SHA512
tls-crypt tc.key
topology subnet
server 10.5.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 223.223.223.224"
push "dhcp-option DNS 223.223.223.225"
push "dhcp-option DNS 223.223.223.226"
keepalive 10 120
cipher AES-256-CBC
user nobody
group nobody
persist-key
persist-tun
status openvpn-status.log
verb 3
crl-verify crl.pem
explicit-exit-notify
management 0.0.0.0 5555

The client ovpn config looks like this:

client
dev tun
proto tcp
remote 223.223.223.223 14433
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
auth SHA512
cipher AES-256-CBC
ignore-unknown-option block-outside-dns
block-outside-dns
verb 3
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
</key>
<tls-crypt>
-----BEGIN OpenVPN Static key V1-----
...
-----END OpenVPN Static key V1-----
</tls-crypt>

My question is:

  • how can I route all ordinary internet traffic (browsing, large downloads) via my own gateway instead of pushing it through the VPN server (split tunnel), and only send traffic destined for the network 223.223.223.0/23 through the VPN?

What should my config look like to achieve that?
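For what it's worth, a split-tunnel setup usually amounts to dropping the redirect-gateway push and pushing only the routes that should traverse the tunnel. A sketch of the server-side change (the /23 mask 255.255.254.0 matches the network given above; clients then keep their own default gateway):

    # server.conf: remove this line so clients keep their normal default route
    # push "redirect-gateway def1 bypass-dhcp"

    # ...and push only the internal network through the tunnel instead
    push "route 223.223.223.0 255.255.254.0"

The pushed DNS servers in the 223.223.223.x range would still be reached through the tunnel via that route.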

Thank you in advance.

Issues starting Kube-scheduler [ Kubernetes the hard way ]

Posted: 20 Jan 2022 03:13 AM PST

I am trying to set up a Kubernetes cluster the hard way by following Kelsey Hightower's kubernetes-the-hard-way guide.

After setting up kube-scheduler, when I start the scheduler I see the following error:

Jan 20 10:20:01 xyz.com kube-scheduler[12566]: F0120 10:20:01.025675   12566 helpers.go:119] error: no kind "KubeSchedulerConfiguration" is registered for version "kubescheduler.config.k8s.io/v1beta1"
Jan 20 10:20:01 xyz.com systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=255/n/a
Jan 20 10:20:01 xyz.com systemd[1]: kube-scheduler.service: Unit entered failed state.
Jan 20 10:20:01 xyz.com systemd[1]: kube-scheduler.service: Failed with result 'exit-code'.
Jan 20 10:20:06 xyz.com systemd[1]: kube-scheduler.service: Service hold-off time over, scheduling restart.

My kube-scheduler.yaml inside /etc/kubernetes/config looks like this. Can somebody please provide some pointers to what is going on or what I am missing? My kube-apiserver and kube-controller-manager are active.

apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
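This error typically means the kube-scheduler binary no longer understands the v1beta1 config API; newer releases dropped it in favour of v1beta2/v1beta3 and, later, v1. A hedged fix is simply to bump the apiVersion to one the installed scheduler version supports, for example:

    apiVersion: kubescheduler.config.k8s.io/v1beta2   # or v1beta3 / v1, depending on the scheduler release
    kind: KubeSchedulerConfiguration
    clientConnection:
      kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
    leaderElection:
      leaderElect: true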

Pass variable value from one build step to other in jenkins job

Posted: 20 Jan 2022 03:05 AM PST

I want to pass a variable value from one build step ('Execute shell') to another ('Send files or execute commands over SSH'). My script in the Execute shell step is:

if [ "$var" == "1"]; then  package="newpackage"  fi  if [ "$var" == "2"]; then  package="oldpackage"  fi  Given_order=${package}  

In the 'Send files or execute commands over SSH' step I run echo "$Given_order", but the value is not passed from the Execute shell build step to the other one. Please suggest. Thanks.
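Each build step runs in its own shell process, so a plain shell variable never survives into the next step. A common workaround (a sketch; the properties filename is arbitrary and the "Inject environment variables" step comes from the EnvInject plugin) is to write the value to a file in the workspace and re-read it:

    # "Execute shell" step
    if [ "$var" = "1" ]; then package="newpackage"; fi
    if [ "$var" = "2" ]; then package="oldpackage"; fi
    echo "GIVEN_ORDER=${package}" > build_vars.properties

An "Inject environment variables" build step pointed at build_vars.properties then makes $GIVEN_ORDER available to the later SSH step.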

Best RAID and file system for all SSD nginx web server - large static files in one box

Posted: 20 Jan 2022 03:03 AM PST

I need to build a server for serving large static files using nginx, which requires a total of 40 TB of disk space. The total download bandwidth is 30 Gbps at peak, for 15k connections.

The exact workload is unknown, so I decided to use an all-SSD setup in an HP DL380 G8 or G9 server with two dual-port 10G NICs (4x 10G links bonded).

Is there any best practice for using SSDs in this setup? For example, hardware RAID or software RAID? ZFS, XFS, or something else? RAID 5, RAID 6? Can I use 12 SSDs in one RAID set?

exim interface setting does not work

Posted: 20 Jan 2022 02:57 AM PST

We have an Exim server with two IPs. We created a script to change the interface used for sending mail and added the second interface setting to the remote_smtp section. When we check the sent email, the interface is always the same: the primary interface.

remote_smtp:
  driver = smtp
  interface = 178.XXX.XXX.XXX

Is there another Exim setting that could be changing the interface? Thanks, best regards.

Reading only the metadata of a file in a Google Cloud Storage bucket into a Cloud Function in Python (without loading the file or its data!)

Posted: 20 Jan 2022 03:03 AM PST

I need something like Cloud Storage for Firebase: download metadata of all files, just not in Angular but in Python, and just for a chosen file instead.

The aim is to return this information when the cloud function finishes with the return statement or just to log it during the run of the cloud function as soon as the file is saved in the Google storage bucket. With that information at hand, another job can be started after the given timestamp. The pipeline is synchronous.

I have found Q&As on loading a file or its data into the Cloud Function in order to extract data stats from the external file inside the running function.

Since I do not want to hold the large file or its data in memory at any point just to get some metadata, I want to download only the metadata of the file stored in the Google Cloud Storage bucket, meaning its timestamp and size.

How can I fetch only the metadata of a csv file in a Google Cloud Storage bucket from within the Google Cloud Function?
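A sketch of the metadata-only approach with the google-cloud-storage client (bucket and object names are placeholders): get_blob() performs a metadata GET on the object resource, so the CSV contents are never downloaded into the function.

    from google.cloud import storage

    def csv_metadata(bucket_name: str, blob_name: str) -> dict:
        client = storage.Client()
        # get_blob() fetches only the object's metadata, not its contents
        blob = client.bucket(bucket_name).get_blob(blob_name)
        if blob is None:
            raise FileNotFoundError(f"{blob_name} not found in {bucket_name}")
        return {
            "size_bytes": blob.size,
            "created": blob.time_created,
            "updated": blob.updated,
        }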

How to make IPV6_AUTOCONF=no persistent

Posted: 20 Jan 2022 01:58 AM PST

I modify the line "IPV6_AUTOCONF=no". When we try to do this manually, the change is lost after a network restart/reboot. How do we configure the system so this change is persistent

How to catch access/request to an (non-existent) directory under a base path?

Posted: 20 Jan 2022 01:57 AM PST

I would like to reproduce the autofs capability of detecting access to a sub-path of a base path and calling the corresponding handler. The motive is that autofs cannot set (nor keep) the shared or rshared property of the mountpoint. While systemd has path, mount and automount unit types, and I could use generators, I have failed to find a way to catch a request to a not-yet-{existent,mounted} directory under a base path. E.g.: if the request is for anything under /cvmfs/unpacked.cern.ch/some_other_dir_in_hierarchy, I want to catch that the request is relative to /cvmfs and call the mount handler for /cvmfs/unpacked.cern.ch. Is there a solution to this? (It should be available under CentOS 7.)

If I setup a 'Passwordless SSH connection' as root user, will it be applied to all other users on the server?

Posted: 20 Jan 2022 02:34 AM PST

Hi I'm new to the concept of SSH & password-less authentication.

I'm trying to setup password-less SSH connection between two servers A & B, using SSH-keygen.

If I generate the keys on "Server A" as "root" user, can all the other users on "SERVER A" use the password-less SSH connection

(or)

Do I need to create separate keys for each and every user?

I'm trying to setup password-less SSH connection for a set of specific users including root user.
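In short, SSH key authentication is per user account: a key pair generated as root lives in /root/.ssh and is only offered when root connects, so each user who needs password-less access needs their own key (or a shared key deliberately copied into their ~/.ssh). A minimal per-user sketch (the "deploy" user and host names are placeholders):

    # Logged in as the user that needs access (e.g. a hypothetical "deploy" user) on Server A:
    ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
    ssh-copy-id deploy@serverB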

Istio - Prometheus - HPA Stack not communicating [ HPA could not calculate the number of replicas ]

Posted: 20 Jan 2022 02:06 AM PST

I have a cluster with 1 control plane and 2 nodes.

Istio is installed as the service mesh.

Request routing is handled via the Istio ingress gateway.

I want it to scale automatically by sharing metrics between the Kubernetes HPA and Istio's Prometheus, but I couldn't get it working.

My pods on kube-system


root@ubuntu-master:~# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS        AGE
coredns-78fcd69978-pk69f                1/1     Running   1 (2d11h ago)   2d12h
coredns-78fcd69978-t5dkx                1/1     Running   1 (2d11h ago)   2d12h
etcd-ubuntu-master                      1/1     Running   1 (2d11h ago)   48d
kube-apiserver-ubuntu-master            1/1     Running   2 (2d11h ago)   48d
kube-controller-manager-ubuntu-master   1/1     Running   3 (2d11h ago)   48d
kube-proxy-72q2r                        1/1     Running   0               2d10h
kube-proxy-8qgr9                        1/1     Running   1 (2d11h ago)   48d
kube-proxy-t4wgr                        1/1     Running   0               2d10h
kube-scheduler-ubuntu-master            1/1     Running   3 (2d11h ago)   48d
metrics-server-84b4bfc7fb-h8gd2         1/1     Running   0               18h

My pods on istio-system


root@ubuntu-master:~# kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
grafana-6ccd56f4b6-bw5md                1/1     Running   0          2d12h
istio-ingressgateway-57c665985b-wj5gr   1/1     Running   0          2d12h
istiod-78cc776c5b-qkr6b                 1/1     Running   0          2d12h
jaeger-5d44bc5c5d-db2pj                 1/1     Running   0          2d12h
kiali-79b86ff5bc-mj8bn                  1/1     Running   0          2d12h
prometheus-64fd8ccd65-22znf             2/2     Running   0          2d12h
prometheus-adapter-6d9c6c8fdf-lxfbp     1/1     Running   0          17h

Prometheus UI result (screenshot not included).

Metrics server response:

root@ubuntu-master:~# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"  {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"namespaces/network_transmit_packets_dropped","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]},{"name":"namespaces/processes","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]},{"name":"namespaces/sockets","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]},{"name":"namespaces/spec_memory_swap_limit_bytes","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]},{"name":"jobs.batch/kiali_single_validation_processing_duration_seconds_sum","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"jobs.batch/kiali_validation_processing_duration_seconds_count","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"namespaces/kubelet_container_log_filesystem_used_bytes","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]},{"name":"namespaces/cpu_system","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"namespaces/fs_reads_bytes","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]},{"name":"services/kiali_validation_processing_duration_seconds_sum","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},......  

Here is my HPA definition (kubectl describe output):

root@ubuntu-master:~# kubectl describe hpa html-pdf-v2-hpa -n html-pdf-v2  Name:                                                            html-pdf-v2-hpa  Namespace:                                                       html-pdf-v2  Labels:                                                          <none>  Annotations:                                                     metric-config.object.istio-requests-total.prometheus/query:                                                                     sum(                                                                       rate(                                                                         istio_requests_total{                                                                           destination_workload="html-pdf-deploy",                                                                           destination_workload_namespace="html-pdf-v2"                                                                         }[1m]                                                                       )                                                                     ) /                                                                     count(                                                                       count(                                                                         container_memory_usage_bytes{                                                                           namespace="html-pdf-v2",                                                                           pod=~"html-pdf-deploy.*"                                                                         }                                                                       ) by (pod)                                                                     )  CreationTimestamp:                                               Wed, 19 Jan 2022 18:11:14 +0300  Reference:                                                       Deployment/html-pdf-deploy  Metrics:                                                         ( current / target )    "istio-requests-total" on Pod/html-pdf-deploy (target value):  <unknown> / 10  Min replicas:                                                    1  Max replicas:                                                    10  Deployment pods:                                                 5 current / 0 desired  Conditions:    Type           Status  Reason                 Message    ----           ------  ------                 -------    AbleToScale    True    SucceededGetScale      the HPA controller was able to get the target's current scale    ScalingActive  False   FailedGetObjectMetric  the HPA was unable to compute the replica count: unable to get metric istio-requests-total: Pod on html-pdf-v2 html-pdf-deploy/unable to fetch metrics from custom metrics API: the server could not find the metric istio-requests-total for pods  Events:    Type     Reason                 Age                   From                       Message    ----     ------                 ----                  ----                       -------    Warning  FailedGetObjectMetric  55s (x4255 over 17h)  horizontal-pod-autoscaler  unable to get metric istio-requests-total: Pod on html-pdf-v2 html-pdf-deploy/unable to fetch metrics from custom metrics API: the server could not find the metric istio-requests-total for pods  

kubectl top pods result

root@ubuntu-master:~# kubectl top pods -n istio-system
NAME                                    CPU(cores)   MEMORY(bytes)
grafana-6ccd56f4b6-bw5md                2m           39Mi
istio-ingressgateway-57c665985b-wj5gr   10m          85Mi
istiod-78cc776c5b-qkr6b                 7m           61Mi
jaeger-5d44bc5c5d-db2pj                 3m           809Mi
kiali-79b86ff5bc-mj8bn                  4m           1125Mi
prometheus-64fd8ccd65-22znf             34m          744Mi
prometheus-adapter-6d9c6c8fdf-lxfbp     40m          76Mi

HPA YAML:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: html-pdf-v2-hpa
  namespace: html-pdf-v2
  annotations:
    metric-config.object.istio-requests-total.prometheus/query: |
      sum(
        rate(
          istio_requests_total{
            destination_workload="html-pdf-deploy",
            destination_workload_namespace="html-pdf-v2"
          }[1m]
        )
      ) /
      count(
        count(
          container_memory_usage_bytes{
            namespace="html-pdf-v2",
            pod=~"html-pdf-deploy.*"
          }
        ) by (pod)
      )
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: html-pdf-deploy
  metrics:
    - type: Object
      object:
        metricName: istio-requests-total
        target:
          apiVersion: v1
          kind: Pod
          name: html-pdf-deploy
        targetValue: 10
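One thing worth double-checking: the metric-config.object.*.prometheus/query annotation syntax is read by Zalando's kube-metrics-adapter, whereas the cluster above appears to run prometheus-adapter, which ignores HPA annotations and only serves metrics defined in its own rules ConfigMap. A rough sketch of what an equivalent prometheus-adapter rule could look like (the label-to-resource mapping is an assumption and would need adjusting to your setup):

    rules:
      - seriesQuery: 'istio_requests_total{destination_workload_namespace!="",destination_workload!=""}'
        resources:
          overrides:
            destination_workload_namespace: {resource: "namespace"}
        name:
          as: "istio_requests_per_second"
        metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)'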

I'm not sure where I went wrong, or whether I'm on the right path at all.

This is my first post and I'm excited for answers. I hope I explained myself correctly.

Thanks

Nginx wildcard subdomain

Posted: 20 Jan 2022 03:09 AM PST

I have setup my nginx config file as:

server {
    server_name example.com www.example.com;
    listen 80;

    location = / {
      .....
    }
}

server {
    server_name *.example.com;
    listen 80;

    location = / {
        proxy_pass ...
    }
}

Browsing example.com and www.example.com works fine, but when I use a subdomain like a.example.com or b.example.com I get a "301 Moved Permanently" response and I am redirected back to example.com.

Here is the actual file;

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok  nginx: configuration file /etc/nginx/nginx.conf test is successful  # configuration file /etc/nginx/nginx.conf:  user www-data;  worker_processes auto;  pid /run/nginx.pid;  include /etc/nginx/modules-enabled/*.conf;    events {      worker_connections 768;      # multi_accept on;  }    http {        ##      # Basic Settings      ##        sendfile on;      tcp_nopush on;      tcp_nodelay on;      keepalive_timeout 65;      types_hash_max_size 2048;      # server_tokens off;        # server_names_hash_bucket_size 64;      # server_name_in_redirect off;       client_max_body_size 50M;      include /etc/nginx/mime.types;      default_type application/octet-stream;        ##      # SSL Settings      ##        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE      ssl_prefer_server_ciphers on;        ##      # Logging Settings      ##        access_log /var/log/nginx/access.log;      error_log /var/log/nginx/error.log;        ##      # Gzip Settings      ##        gzip on;        # gzip_vary on;      # gzip_proxied any;      # gzip_comp_level 6;      # gzip_buffers 16 8k;      # gzip_http_version 1.1;      # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;        ##      # Virtual Host Configs      ##        include /etc/nginx/conf.d/*.conf;      include /etc/nginx/sites-enabled/*;  }      #mail {  #   # See sample authentication script at:  #   # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript  #   #   # auth_http localhost/auth.php;  #   # pop3_capabilities "TOP" "USER";  #   # imap_capabilities "IMAP4rev1" "UIDPLUS";  #   #   server {  #       listen     localhost:110;  #       protocol   pop3;  #       proxy      on;  #   }  #   #   server {  #       listen     localhost:143;  #       protocol   imap;  #       proxy      on;  #   }  #}    # configuration file /etc/nginx/modules-enabled/50-mod-http-image-filter.conf:  load_module modules/ngx_http_image_filter_module.so;    # configuration file /etc/nginx/modules-enabled/50-mod-http-xslt-filter.conf:  load_module modules/ngx_http_xslt_filter_module.so;    # configuration file /etc/nginx/modules-enabled/50-mod-mail.conf:  load_module modules/ngx_mail_module.so;    # configuration file /etc/nginx/modules-enabled/50-mod-stream.conf:  load_module modules/ngx_stream_module.so;    # configuration file /etc/nginx/mime.types:    types {      text/html                             html htm shtml;      text/css                              css;      text/xml                              xml;      image/gif                             gif;      image/jpeg                            jpeg jpg;      application/javascript                js;      application/atom+xml                  atom;      application/rss+xml                   rss;        text/mathml                           mml;      text/plain                            txt;      text/vnd.sun.j2me.app-descriptor      jad;      text/vnd.wap.wml                      wml;      text/x-component                      htc;        image/png                             png;      image/tiff                            tif tiff;      image/vnd.wap.wbmp                    wbmp;      image/x-icon                          ico;      image/x-jng                           jng;      image/x-ms-bmp                        bmp;      image/svg+xml                         svg svgz;      image/webp                            webp;        
application/font-woff                 woff;      application/java-archive              jar war ear;      application/json                      json;      application/mac-binhex40              hqx;      application/msword                    doc;      application/pdf                       pdf;      application/postscript                ps eps ai;      application/rtf                       rtf;      application/vnd.apple.mpegurl         m3u8;      application/vnd.ms-excel              xls;      application/vnd.ms-fontobject         eot;      application/vnd.ms-powerpoint         ppt;      application/vnd.wap.wmlc              wmlc;      application/vnd.google-earth.kml+xml  kml;      application/vnd.google-earth.kmz      kmz;      application/x-7z-compressed           7z;      application/x-cocoa                   cco;      application/x-java-archive-diff       jardiff;      application/x-java-jnlp-file          jnlp;      application/x-makeself                run;      application/x-perl                    pl pm;      application/x-pilot                   prc pdb;      application/x-rar-compressed          rar;      application/x-redhat-package-manager  rpm;      application/x-sea                     sea;      application/x-shockwave-flash         swf;      application/x-stuffit                 sit;      application/x-tcl                     tcl tk;      application/x-x509-ca-cert            der pem crt;      application/x-xpinstall               xpi;      application/xhtml+xml                 xhtml;      application/xspf+xml                  xspf;      application/zip                       zip;        application/octet-stream              bin exe dll;      application/octet-stream              deb;      application/octet-stream              dmg;      application/octet-stream              iso img;      application/octet-stream              msi msp msm;        application/vnd.openxmlformats-officedocument.wordprocessingml.document    docx;      application/vnd.openxmlformats-officedocument.spreadsheetml.sheet          xlsx;      application/vnd.openxmlformats-officedocument.presentationml.presentation  pptx;        audio/midi                            mid midi kar;      audio/mpeg                            mp3;      audio/ogg                             ogg;      audio/x-m4a                           m4a;      audio/x-realaudio                     ra;        video/3gpp                            3gpp 3gp;      video/mp2t                            ts;      video/mp4                             mp4;      video/mpeg                            mpeg mpg;      video/quicktime                       mov;      video/webm                            webm;      video/x-flv                           flv;      video/x-m4v                           m4v;      video/x-mng                           mng;      video/x-ms-asf                        asx asf;      video/x-ms-wmv                        wmv;      video/x-msvideo                       avi;  }    # configuration file /etc/nginx/sites-enabled/default.old:  ssl_certificate /etc/letsencrypt/live/billspree.work/fullchain.pem;  ssl_certificate_key /etc/letsencrypt/live/billspree.work/privkey.pem;      server {          listen 80;          listen [::]:80;          root /var/www/html/billspree.work;          index index.php index.html index.htm index.nginx-debian.html testing.html;          server_name billspree.work www.billspree.work;            location = / {                  try_files $uri $uri/ /index.php?args;          }            location ~ \.php$ 
{                  include snippets/fastcgi-php.conf;                  fastcgi_pass unix:/run/php/php7.4-fpm.sock;                  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;          }              location ^~ /tenant {                  proxy_pass http://localhost:9003;                 # proxy_http_version 1.1;                 # proxy_set_header Upgrade $http_upgrade;                 # proxy_set_header Connection 'upgrade';                 # proxy_set_header Host $host;                 # proxy_cache_bypass $http_upgrade;          }              location ^~ /tenant/portal {                  proxy_pass http://localhost:7172/prod/tenant/portal;                  # proxy_http_version 1.1;                  # proxy_set_header Upgrade $http_upgrade;                  # proxy_set_header Connection 'upgrade';                  #proxy_set_header Host $host;                  #proxy_cache_bypass $http_upgrade;          }              #deny access to .htaccess files, if Apache's document root          #concurs with nginx's one            location ~ /\.ht {                  deny all;          }        listen [::]:443 ssl ipv6only=on; # managed by Certbot      listen 443 ssl; # managed by Certbot     # ssl_certificate /etc/letsencrypt/live/billspree.work/fullchain.pem; # managed by Certbot     # ssl_certificate_key /etc/letsencrypt/live/billspree.work/privkey.pem; # managed by Certbot     # include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot     # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot  #    return 301 https://www.billspree.com$request_uri;        }    server {      listen [::]:80 ssl ipv6only=on;      listen 80 ssl;      server_name *.billspree.work;        location = /portal {          proxy_pass http://localhost:7172/portal;      }  }    # test  server config with port 1027  server {        server_name *.billspree.work;      listen [::]:1027 ssl ipv6only=on; # managed by Certbot      listen 1027 ssl; # managed by Certbot        ssl_certificate /etc/letsencrypt/live/billspree.work/fullchain.pem; # $      ssl_certificate_key /etc/letsencrypt/live/billspree.work/privkey.pem;   #   include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot    #  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot    location ^~ / {          proxy_pass http://localhost:9001;          proxy_http_version 1.1;          proxy_set_header Upgrade $http_upgrade;          proxy_set_header Connection 'upgrade';          proxy_set_header Host $host;          proxy_cache_bypass $http_upgrade;      }  }    #server {  #    if ($host = billspree.work) {  #        return 301 https://www.$host$request_uri;  #    } # managed by Certbot  #    if ($host = www.billspree.work) {  #        return 301 https://$host$request_uri;  #    } # managed by Certbot    #   server_name www.billspree.work billspree.work#  #       listen [::]:80 ipv6only=on;  #       listen 80;  #}    # configuration file /etc/nginx/snippets/fastcgi-php.conf:  # regex to split $uri to $fastcgi_script_name and $fastcgi_path  fastcgi_split_path_info ^(.+?\.php)(/.*)$;    # Check that the PHP script exists before passing it  try_files $fastcgi_script_name =404;    # Bypass the fact that try_files resets $fastcgi_path_info  # see: http://trac.nginx.org/nginx/ticket/321  set $path_info $fastcgi_path_info;  fastcgi_param PATH_INFO $path_info;    fastcgi_index index.php;  include fastcgi.conf;    # configuration file /etc/nginx/fastcgi.conf:    fastcgi_param  SCRIPT_FILENAME    
$document_root$fastcgi_script_name;  fastcgi_param  QUERY_STRING       $query_string;  fastcgi_param  REQUEST_METHOD     $request_method;  fastcgi_param  CONTENT_TYPE       $content_type;  fastcgi_param  CONTENT_LENGTH     $content_length;    fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;  fastcgi_param  REQUEST_URI        $request_uri;  fastcgi_param  DOCUMENT_URI       $document_uri;  fastcgi_param  DOCUMENT_ROOT      $document_root;  fastcgi_param  SERVER_PROTOCOL    $server_protocol;  fastcgi_param  REQUEST_SCHEME     $scheme;  fastcgi_param  HTTPS              $https if_not_empty;    fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;  fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;    fastcgi_param  REMOTE_ADDR        $remote_addr;  fastcgi_param  REMOTE_PORT        $remote_port;  fastcgi_param  SERVER_ADDR        $server_addr;  fastcgi_param  SERVER_PORT        $server_port;  fastcgi_param  SERVER_NAME        $server_name;    # PHP only, required if PHP was built with --enable-force-cgi-redirect  fastcgi_param  REDIRECT_STATUS    200;    

Enable Host-Guest Domain Resolution

Posted: 20 Jan 2022 02:08 AM PST

Situation:
I have an Ubuntu 20.04 server inside VirtualBox 6.1 with an Ubuntu 20.04 Desktop host. Host-guest communication is configured correctly using the vboxnet0 adapter. I can readily ping the static IP of the guest from the host's command line.

Problem:
I recently installed a server control panel on the guest and, oddly enough, I can only access the control panel from my host's web browser using the IP address, not its domain name. For example:

https://192.168.62.87:3080 correctly displays control panel, whereas
https://example.com:3080 has Firefox's "Hmm. We're having trouble finding that site." error message.

Solutions that I have tried:

1.) First, I tried the obvious. I edited my /etc/hosts file to contain
192.168.62.87 example.com <- didn't work

2.) Next, I tried installing avahi-daemon on the guest server as follows:
sudo apt-get install avahi-daemon and rebooted the guest <- didn't work

Does anyone know how I can make my VirtualBox guest's domain names visible to my host? Thanks.

Update: @Gaétan RYCKEBOER's advice below revealed something useful.

When I ran dig example.com, it revealed that my host is trying to resolve example.com using my PROD server's nameserver, which of course means the control panel will not load, because **test**.example.com doesn't exist on my prod server.

It seems that 192.168.62.87 example.com in my /etc/hosts file is being ignored.

This is what I need to correct.
NOTE: my ubuntu test server does have bind9 installed and it is running correctly.
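One detail that may explain the confusion: dig talks straight to the configured nameserver and never consults /etc/hosts, so it is not a good test of the hosts-file entry. A quick hedged check on the host:

    getent hosts example.com   # goes through NSS, so it should print 192.168.62.87 if /etc/hosts is honoured
    dig +short example.com     # bypasses /etc/hosts and only reflects DNS

If getent returns the right address but Firefox still cannot find the site, the browser may be resolving via DNS-over-HTTPS, which also bypasses /etc/hosts.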

Totally isolate two interfaces -Linux

Posted: 20 Jan 2022 04:53 AM PST

I'm a bit embarrassed but I need your help.

I have three interfaces on a virtual machine. I want to completely isolate the interfaces from each other. I created one route table per interface:

    inet 192.168.1.100/24 brd 192.168.1.255 scope global ens192
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.10.100/24 brd 192.168.10.255 scope global ens224
4: ens256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.20.100/24 brd 192.168.20.255 scope global ens256

Network interface example:

    up /sbin/ip route add default via 192.168.10.1 dev ens224 table in
    up /sbin/ip rule add from 192.168.10.100/32 table in
    post-down /sbin/ip rule del from 192.168.10.100/32 table in
    post-down /sbin/ip route del default via 192.168.10.1 dev ens224 table in

But when I try to telnet or ping or anything else from one interface to another, all the traffic goes through the loopback. Is there a way to correct that?
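Part of the difficulty is that the kernel's local routing table (priority 0) matches the machine's own addresses before any custom table, so traffic between two local IPs is short-circuited over loopback regardless of the per-interface tables. A common way to get true isolation is to move an interface into its own network namespace; a rough sketch (the gateway address is an assumption):

    ip netns add isolated
    ip link set ens256 netns isolated
    ip netns exec isolated ip addr add 192.168.20.100/24 dev ens256
    ip netns exec isolated ip link set ens256 up
    ip netns exec isolated ip route add default via 192.168.20.1   # assumed gateway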

CentOS 7 , OpenVPN Server Radius Plugin

Posted: 20 Jan 2022 04:51 AM PST

On my new OpenVPN server installation, the RADIUS plugin cannot read client status. It worked on the previous installation; now everything is configured the same way, but it is not working.

OpenVPN Server Conf:

port 57192
proto udp
dev tun
user nobody
group nobody
persist-key
persist-tun
keepalive 10 120
plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf
duplicate-cn
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
push "redirect-gateway def1 bypass-dhcp"
dh dh.pem
tls-auth tls-auth.key 0
crl-verify crl.pem
ca ca.crt
cert server_Vfvm7OXXaOglE58z.crt
key server_Vfvm7OXXaOglE58z.key
auth SHA256
cipher AES-128-GCM
ncp-ciphers AES-128-GCM
tls-server
tls-version-min 1.2
tls-cipher TLS-ECDHE-RSA-WITH-AES-128-GCM-SHA256
client-config-dir /etc/openvpn/ccd
status /var/log/openvpn/status.log
verb 3

Client:

client
proto udp
explicit-exit-notify
remote ?.?.?.? 57192
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
auth-user-pass
remote-cert-tls server
verify-x509-name server_Vfvm7OXXaOglE58z name
auth SHA256
auth-nocache
cipher AES-128-GCM
tls-client
tls-version-min 1.2
tls-cipher TLS-ECDHE-RSA-WITH-AES-128-GCM-SHA256
ignore-unknown-option block-outside-dns
setenv opt block-outside-dns # Prevent Windows 10 DNS leak
verb 3

RADIUS plugin:

NAS-Identifier=OpenVpn
Service-Type=5
Framed-Protocol=1
NAS-Port-Type=5
NAS-IP-Address=127.0.0.1
OpenVPNConfig=/etc/openvpn/server.conf
subnet=255.255.255.0
overwriteccfiles=true
nonfatalaccounting=false
server
{
acctport=1813
authport=1812
name=?.?.?.?
retry=1
wait=1
sharedsecret=?????
}

The server log shows this:

RADIUS-PLUGIN: BACKGROUND ACCT: No accounting data was found for user01

Access Denied when mounting Kerberised NFS v4 Share

Posted: 20 Jan 2022 03:09 AM PST

I want to mount an NFS4 share, but with Kerberos security enabled. This is my setup:

  • Debian Server (dns fqdn: nfsv4test.subnet.example.org)

  • Debian Client (dns fqdn: nfsv4client.subnet.example.org)

  • Windows ADC, acts also as KDC

  • My realm is REALM.EXAMPLE.ORG

  • The subnet where the both Debian machines are located in is called subnet.example.org

  • There is no NAT going on.

  • Both machines are up-to-date.

As I'm still struggling with Kerberos, this is how I tried to achieve my goal:

Chapter I: Setup

1- Put both machines in the same Realm/Domain (This has already been set up by others and works)

2- Created two users (users, not computers!) per machine: nfs-nfsv4client, host-nfsv4client, nfs-nfsv4test and host-nfsv4test. After the creation I enabled AES 256-bit encryption for all of the accounts.

3- Set a service principal for the users:

setspn -S nfs/nfsv4test.realm.example.org@REALM.EXAMPLE.ORG nfs-nfsv4test  

I did this for all 4 users/principals.

3- Created the keytabs on the Windows KDC:

ktpass -princ host/nfsv4test.realm.example.org@REALM.EXAMPLE.ORG +rndPass -mapuser host-nfsv4test@REALM.EXAMPLE.ORG -pType KRB5_NT_PRINCIPAL -out c:\temp\host-nfsv4test.keytab -crypto AES256-SHA1  

So after that I had 4 keytabs.

4- Merged the keytabs on the server (and client):

ktutil
  read_kt host-nfsv4test.keytab
  read_kt nfs-nfsv4test.keytab
  write_kt /etc/krb5.keytab

The file has 640 permissions.

5- Exported the directories on the server; this has already worked without kerberos. With Kerberos enabled, the export file looks like this:

/srv/kerbnfs4       gss/krb5(rw,sync,fsid=0,crossmnt,no_subtree_check,insecure)
/srv/kerbnfs4/homes gss/krb5(rw,sync,no_subtree_check,insecure)
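(As an aside, the gss/krb5 client form is the legacy spelling; current exports(5) documentation usually expresses Kerberos through the sec= export option instead, roughly like this sketch:)

    /srv/kerbnfs4       *(rw,sync,fsid=0,crossmnt,no_subtree_check,insecure,sec=krb5)
    /srv/kerbnfs4/homes *(rw,sync,no_subtree_check,insecure,sec=krb5)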

Running exportfs -rav works:

root@nfsv4test:~# exportfs -rav
exporting gss/krb5:/srv/kerbnfs4/homes
exporting gss/krb5:/srv/kerbnfs4

...and on the client I can view the mounts on the server:

root@nfsv4client:~# showmount -e nfsv4test.subnet.example.org
Export list for nfsv4test.subnet.example.org:
/srv/kerbnfs4/homes gss/krb5
/srv/kerbnfs4       gss/krb5

6a- The krb5.conf has the default config for the environment it was set up for and I haven't changed anything:

[libdefaults]
    ticket_lifetime = 24000
    default_realm = REALM.EXAMPLE.ORG
    default_tgs_entypes = rc4-hmac des-cbc-md5
    default_tkt__enctypes = rc4-hmac des-cbc-md5
    permitted_enctypes = rc4-hmac des-cbc-md5
    dns_lookup_realm = true
    dns_lookup_kdc = true
    dns_fallback = yes

# The following krb5.conf variables are only for MIT Kerberos.
    kdc_timesync = 1
    ccache_type = 4
    forwardable = true
    proxiable = true

# The following libdefaults parameters are only for Heimdal Kerberos.
    fcc-mit-ticketflags = true

[realms]
    REALM.EXAMPLE.ORG = {
        kdc = kdc.realm.example.org
        default_domain = kds.realm.example.org
    }

[domain_realm]
    .realm.example.org = KDC.REALM.EXAMPLE.ORG
    realm.example.org = KDC.REALM.EXAMPLE.ORG

[appdefaults]
pam = {
    debug = false
    ticket_lifetime = 36000
    renew_lifetime = 36000
    forwardable = true
    krb4_convert = false
}

6- Then I set up my sssd.conf like this, but I haven't really understood what's going on here:

[sssd]
domains = realm.example.org
services = nss, pam
config_file_version = 2

[nss]
filter_groups = root
filter_users = root
default_shell = /bin/bash

[pam]
reconnection_retries = 3

[domain/realm.example.org]
krb5_validate = True
krb5_realm = REALM.EXAMPLE.ORG
subdomain_homedir = %o
default_shell = /bin/bash
cache_credentials = True
id_provider = ad
access_provider = ad
chpass_provider = ad
auth_provide = ad
ldap_schema = ad
ad_server = kdc.realm.example.org
ad_hostname = nfsv4test.realm.example.org
ad_domain = realm.example.org
ad_gpo_access_control = permissive
use_fully_qualified_names = False
ad_enable_gc = False

7- idmap.conf on both machines:

[General]

Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs

Domain = realm.example.org

[Mapping]

Nobody-User = nobody
Nobody-Group = nogroup

8- And /etc/default/nfs-common on both machines:

NEED_STATD=yes
NEED_IDMAPD=yes
NEED_GSSD=yes

9- Last but not least, nfs-kernel-server on the server:

RPCNFSDCOUNT=8
RPCNFSDPRIORITY=0
RPCMOUNTDOPTS="--manage-gids --no-nfs-version 3"
NEED_SVCGSSD="yes"
RPCSVCGSSDOPTS=""

10- Then, after rebooting both server and client, I tried to mount the share (as root user):

mount -t nfs4 -o sec=krb5 nfsv4test.subnet.example.org:/srv/kerbnfs4/homes /media/kerbhomes -vvvv   

But sadly, the mount doesn't work; I don't get access. On the first try it takes quite a long time, and this is the output:

root@nfsv4client:~# mount -t nfs4 -o sec=krb5 nfsv4test.subnet.example.org:/srv/kerbnfs4/homes /media/kerbhomes
mount.nfs4: timeout set for Wed Dec 15 15:38:09 2021
mount.nfs4: trying text-based options 'sec=krb5,vers=4.2,addr=********,clientaddr=*******'
mount.nfs4: mount(2): Permission denied
mount.nfs4: access denied by server while mounting nfsv4test.subnet.example.org:/srv/kerbnfs4/homes

Chapter II: Debugging

For a more detailed log, I ran

rpcdebug -m nfsd -s lockd
rpcdebug -m rpc -s call

on the server, but I don't really get that many logs.

However, when trying to mount, syslog tells me that:

Dec  6 11:20:02 testserver kernel: [ 2088.771800] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.771808] svc: svc_authenticate (0)  Dec  6 11:20:02 testserver kernel: [ 2088.771811] svc: calling dispatcher  Dec  6 11:20:02 testserver kernel: [ 2088.771840] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.773222] svc: server 00000000c1c7fb25, pool 0, transport 00000000fc9bd395, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.774697] svc: server 00000000c1c7fb25, pool 0, transport 00000000fc9bd395, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.774705] svc: svc_authenticate (6)  Dec  6 11:20:02 testserver kernel: [ 2088.774711] RPC:       Want update, refage=120, age=0  Dec  6 11:20:02 testserver kernel: [ 2088.774712] svc: svc_process close  [... 7x same message ]  Dec  6 11:20:02 testserver kernel: [ 2088.791514] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.791519] svc: svc_authenticate (1)  Dec  6 11:20:02 testserver kernel: [ 2088.791521] svc: authentication failed (1)  Dec  6 11:20:02 testserver kernel: [ 2088.791538] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.791913] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.791918] svc: svc_authenticate (1)  Dec  6 11:20:02 testserver kernel: [ 2088.791920] svc: authentication failed (1)  Dec  6 11:20:02 testserver kernel: [ 2088.791940] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.792292] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  Dec  6 11:20:02 testserver kernel: [ 2088.792296] svc: svc_authenticate (1)  Dec  6 11:20:02 testserver kernel: [ 2088.792298] svc: authentication failed (1)  Dec  6 11:20:02 testserver kernel: [ 2088.792316] svc: server 00000000c1c7fb25, pool 0, transport 00000000c5641df0, inuse=2  

As this didn't really help me at all, I recorded the traffic with tcpdump, which gives me this:

11:12:02.856200 IP ip-client.740 > ip-server.nfs: Flags [S], seq 763536441, win 65160, options [mss 1460,sackOK,TS val 2364952579 ecr 2826266858,nop,wscale 7], length 0  11:12:02.856295 IP ip-server.nfs > ip-client.740: Flags [S.], seq 2444950221, ack 763536442, win 65160, options [mss 1460,sackOK,TS val 2826266858 ecr 2364952579,nop,wscale 7], length 0  11:12:02.856304 IP ip-client.740 > ip-server.nfs: Flags [.], ack 1, win 510, options [nop,nop,TS val 2364952579 ecr 2826266858], length 0  11:12:02.856324 IP ip-client.740 > ip-server.nfs: Flags [P.], seq 1:245, ack 1, win 510, options [nop,nop,TS val 2364952579 ecr 2826266858], length 244: NFS request xid 4035461122 240 getattr fh 0,2/42  11:12:02.856408 IP ip-server.nfs > ip-client.740: Flags [.], ack 245, win 508, options [nop,nop,TS val 2826266858 ecr 2364952579], length 0  11:12:02.856421 IP ip-server.nfs > ip-client.740: Flags [P.], seq 1:25, ack 245, win 508, options [nop,nop,TS val 2826266858 ecr 2364952579], length 24: NFS reply xid 4035461122 reply ERR 20: Auth Bogus Credentials (seal broken)  11:12:02.856425 IP ip-client.740 > ip-server.nfs: Flags [.], ack 25, win 510, options [nop,nop,TS val 2364952579 ecr 2826266858], length 0  11:12:02.867582 IP ip-client.740 > ip-server.nfs: Flags [F.], seq 245, ack 25, win 510, options [nop,nop,TS val 2364952590 ecr 2826266858], length 0  11:12:02.867751 IP ip-server.nfs > ip-client.740: Flags [F.], seq 25, ack 246, win 508, options [nop,nop,TS val 2826266869 ecr 2364952590], length 0  11:12:02.867759 IP ip-client.740 > ip-server.nfs: Flags [.], ack 26, win 510, options [nop,nop,TS val 2364952590 ecr 2826266869], length 0  

(I redacted the real ip addresses)

So the interesting part here is the "Auth Bogus Credentials (seal broken)" error. Is there really something specific wrong, or is it just the generic error that appears when something fails? I couldn't find anything helpful about this error on the web.

So to come back to Kerberos itself, the keytab seems to be ok:

root@nfsv4client:~# klist -k -e
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   7 host/nfsv4client.realm.example.org@REALM.EXAMPLE.ORG
   6 nfs/nfsv4client.realm.example.org@REALM.EXAMPLE.ORG

When trying to test the keytab file, it seems to work:

root@nfsv4client:~# kinit -k nfs/nfsv4client.realm.example.org
root@nfsv4client:~#

But on this page it's stated that the keytab should be tested with

kinit -k `hostname -s`$  

which resolves to

kinit -k nfsv4client  

which doesn't work as no key was found for nfsv4client@REALM.EXAMPLE.ORG. So is the keytab wrong or the test method?

Another log I found on the mounting client machine (in messages):

 nfsv4client kernel: [ 4355.170940] svc: initialising pool 0 for NFSv4 callback   nfsv4client kernel: [ 4355.170940] nfs_callback_create_svc: service created   nfsv4client kernel: [ 4355.170941] NFS: create per-net callback data; net=f0000098   nfsv4client kernel: [ 4355.170942] svc: creating transport tcp-bc[0]   nfsv4client kernel: [ 4355.171032] nfs_callback_up: service started   nfsv4client kernel: [ 4355.171033] svc: svc_destroy(NFSv4 callback, 2)   nfsv4client kernel: [ 4355.171034] NFS: nfs4_discover_server_trunking: testing 'nfsv4test.subnet.example.org'   nfsv4client kernel: [ 4355.171040] RPC:       new task initialized, procpid 9204   nfsv4client kernel: [ 4355.171041] RPC:       allocated task 000000006bdb9e01   nfsv4client kernel: [ 4355.171042] RPC:   110 __rpc_execute flags=0x5280   nfsv4client kernel: [ 4355.171044] RPC:   110 call_start nfs4 proc EXCHANGE_ID (sync)   nfsv4client kernel: [ 4355.171045] RPC:   110 call_reserve (status 0)   nfsv4client kernel: [ 4355.171046] RPC:       wake_up_first(000000005af696f3 "xprt_sending")   nfsv4client kernel: [ 4355.171047] RPC:   110 reserved req 00000000d1a7d1a4 xid 04f914c3   nfsv4client kernel: [ 4355.171047] RPC:   110 call_reserveresult (status 0)   nfsv4client kernel: [ 4355.171048] RPC:   110 call_refresh (status 0)   nfsv4client kernel: [ 4355.171049] RPC:       gss_create_cred for uid 0, flavor 390004   nfsv4client kernel: [ 4355.171050] RPC:       gss_create_upcall for uid 0   nfsv4client kernel: [ 4355.171052] RPC:       __gss_find_upcall found nothing   nfsv4client kernel: [ 4355.201976] RPC:       __gss_find_upcall found msg 000000000e5abcbc   nfsv4client kernel: [ 4355.201978] RPC:       gss_fill_context returns error 13   nfsv4client kernel: [ 4355.201982] RPC:       gss_pipe_downcall returning 16   nfsv4client kernel: [ 4355.201986] RPC:       gss_create_upcall for uid 0 result -13   nfsv4client kernel: [ 4355.201987] RPC:   110 call_refreshresult (status -13)   nfsv4client kernel: [ 4355.201988] RPC:   110 call_refreshresult: refresh creds failed with error -13   nfsv4client kernel: [ 4355.201989] RPC:   110 return 0, status -13   nfsv4client kernel: [ 4355.201990] RPC:   110 release task  

It's a lot of stuff, but I can't find the meaning of error -13, except that it's Permission Denied.

Chapter III: The question

The principals are there in the keytab. So when the client asks the server about the NFS share and tries to access it, both should have the keys to interact with each other. But for some reason it doesn't work. May it be because of the assignment of the principals to the user accounts?

How can I get this to work? How do I get better information when debugging? Sorry for the great wall of text.

PS. I mainly followed this tutorial. It seemed like a perfect match for my environment.

ansible: why can't I use {{ ansible_facts['ansible_distribution_release'] }} in a playbook?

Posted: 20 Jan 2022 04:56 AM PST

I have an Ansible task that runs on localhost, like this:

- name: add docker repository
  apt_repository:
    repo: "deb [arch=amd64] https://download.docker.com/linux/debian {{ ansible_facts['ansible_distribution_release'] }} stable"
    state: present
    filename: docker-ce

I want to use the variable ansible_facts['ansible_distribution_release'] to get the OS distribution name; in my case it should be buster. But it runs into an error like this:

"The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ansible_distribution_release'

I tried to use {{ ansible_distribution_release }} directly, and it works

repo: "deb [arch=amd64] https://download.docker.com/linux/debian {{ ansible_distribution_release }} stable"  

Then I thought I should only access the facts directly, not as keys of the ansible_facts variable, but when I read the official documentation I saw use cases like

{{ ansible_facts['devices']['xvda']['model'] }}

This makes me suspect there is something wrong with my understanding of Ansible variables.

I've tried to not quote ansible_distribution_release in the [], ie, ansible_facts[ansible_distribution_release], but without luck

I run command below

$ ansible localhost -m setup -a "filter=ansible_distribution_release"
localhost | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution_release": "buster"
    },
    "changed": false
}

which seemed to prove there is an attribute named ansible_distribution_release under ansible_facts.

Any help will be appreciated


Update: I used the instructions shown below

- name: debug
  block:
    - debug:
        var: distribution_release
    - debug:
        var: ansible_distribution_release
    - debug:
        var: "{{ ansible_facts.keys() }}"
  tags: show

and found out that distribution_release is not defined, ansible_distribution_release can be accessed directly, and there is no key named ansible_distribution_release in ansible_facts, but there is a key named distribution_release. This is different from the output of

ansible localhost -m setup  

The documentation says:

INJECT_FACTS_AS_VARS

Facts are available inside the ansible_facts variable, this setting also pushes them as their own vars in the main namespace. Unlike inside the ansible_facts dictionary, these will have an ansible_ prefix.

So it seems the keys inside ansible_facts do not carry the ansible_ prefix; only the injected top-level variables (like ansible_distribution_release) do.
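Reading it that way, a minimal corrected version of the task (a sketch; the rest of the playbook is assumed unchanged) simply drops the ansible_ prefix inside the ansible_facts lookup:

- name: add docker repository
  apt_repository:
    # keys inside ansible_facts have no ansible_ prefix
    repo: "deb [arch=amd64] https://download.docker.com/linux/debian {{ ansible_facts['distribution_release'] }} stable"
    state: present
    filename: docker-ce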

Is there a CloudWatch metric that corresponds to ALB data transfer usage/cost?

Posted: 20 Jan 2022 02:47 AM PST

I have an Application Load Balancer whose data transfer cost I want to monitor.

In Cost Explorer, I can filter on usage type "DataTransfer-Out-Bytes", and see how many GB of data it is sending, and how much that costs. However, it only shows the total for each day, and the data is delayed by several hours. In order to see how the amount of traffic is affected by changes I make, I'd like to see that same number in CloudWatch, but I can't find any corresponding metric.

The Per-AppELB "ProcessedBytes" metric sounded promising, but that number is slightly more than half the number I see in Cost Explorer. (My best guess is that TLS handshake overhead isn't included.)

Is there any metric or combination of metrics that matches what I end up getting billed for?
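For reference, one way to pull the equivalent daily figure from CloudWatch for comparison is the per-load-balancer ProcessedBytes sum (a sketch; the load-balancer dimension value and dates are placeholders):

# daily sum of ProcessedBytes for one ALB, to line up against Cost Explorer's daily totals
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApplicationELB \
  --metric-name ProcessedBytes \
  --dimensions Name=LoadBalancer,Value=app/my-alb/1234567890abcdef \
  --start-time 2022-01-19T00:00:00Z \
  --end-time 2022-01-20T00:00:00Z \
  --period 86400 \
  --statistics Sum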

How can I configure haproxy with two frontends to publish OWA online?

Posted: 20 Jan 2022 02:07 AM PST

I am facing a problem with HAProxy on an Ubuntu 16.04 server when publishing OWA on the internet. I have a domain, and I installed Exchange Server 2013 on Windows Server 2012 R2. I need to use a second frontend in TCP mode for OWA on ports 443 and 80.

The problem is that OWA appears only sometimes; after refreshing the page I get an error, or another one of my sites with a different CA, because of the old frontend haproxy_in (mode http). I have Let's Encrypt certificates for all my sites on port 443.

I need a solution that lets OWA and the other sites work together.

This is my HAProxy configuration, starting with the first frontend:

frontend haproxy_in
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/mdl.ief.tishreen.edu.sy.pem crt /etc/haproxy/certs/mail.ief.tishreen.edu.sy.pem crt /etc/haproxy/certs/lib.ief.tishreen.edu.sy.pem crt /etc/haproxy/certs/ief.tishreen.edu.sy.pem crt /etc/haproxy/certs/www.ief.tishreen.edu.sy.pem crt /etc/haproxy/certs/educloud.ief.tishreen.edu.sy.pem crt /etc/haproxy/certs/vpn.ief.tishreen.edu.sy.pem
    mode http
    # Define path for LetsEncrypt
    acl is_letsencrypt path_beg -i /.well-known/acme-challenge/
    use_backend letsencrypt if is_letsencrypt
    # Define hosts
    acl is_moodle hdr_dom(host) -i mdl.ief.tishreen.edu.sy
    acl is_lib hdr_dom(host) -i lib.ief.tishreen.edu.sy
    acl is_mail hdr_dom(host) -i mail.ief.tishreen.edu.sy
    acl is_vpn hdr_dom(host) -i vpn.ief.tishreen.edu.sy
    acl is_www hdr_dom(host) -i www.ief.tishreen.edu.sy
    # Direct hosts to backends
    use_backend moodle if is_moodle
    use_backend lib if is_lib
    use_backend vpn if is_vpn
    use_backend www if is_www
    default_backend base
    # Redirect port 80 to 443 except LetsEncrypt
    redirect scheme https code 301 if !{ ssl_fc } !is_letsencrypt

### exchange owa frontend ###
frontend exchange-server
    bind *:80
    bind *:443
    mode tcp
    acl is_mail hdr_dom(host) -i mail.ief.tishreen.edu.sy
    use_backend mail if is_mail
    default_backend base

backend mail
    balance roundrobin
    mode tcp
    server vm3 172.17.16.22:443 check

######################
#      Backends      #
######################
backend letsencrypt
    server letsencrypt 127.0.0.1:8888

backend moodle
    balance roundrobin
    mode http
    server vm1 172.17.16.20:80 check

backend lib
    balance roundrobin
    mode http
    server vm2 172.17.16.18:80/akasia check

backend vpn
    balance roundrobin
    mode http
    server vm4 172.17.16.35:1194 check

backend www
    balance roundrobin
    mode http
    server vm5 172.17.16.25:80 check

backend base
    balance roundrobin
    mode http
    server vmtest 172.17.16.25:80 check
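For what it's worth, having two frontends both bind *:80 and *:443 is likely part of the problem: connections can land on either frontend, which would match OWA appearing only sometimes. One possible direction (a sketch, not a tested configuration) is to drop the second frontend and let haproxy_in route OWA itself, re-encrypting to Exchange at the backend:

# inside frontend haproxy_in, alongside the other use_backend rules
    use_backend mail if is_mail

# replacement mail backend: HTTP mode, re-encrypting to Exchange on 443
backend mail
    mode http
    balance roundrobin
    server vm3 172.17.16.22:443 ssl verify none check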

ADFS: Convert SAML Assertion to OAuth Token?

Posted: 20 Jan 2022 03:01 AM PST

We have Microsoft Active Directory Federation Services (ADFS) as our authentication/federation provider. We use it for performing identity federation via SAML to several external vendors, SaaS providers, etc. In addition, we have several vendors that only support OAuth, so we have configured integrations with those vendors using ADFS 2016's OAuth support. As such, we are able to generate both SAML assertions and OAuth access tokens, as needed.

Now we have run into a situation where Vendor A (configured for SAML auth) needs to make a RESTful service call to Vendor B (configured to require OAuth tokens). Is there a way to convert an ADFS-generated SAML assertion into an ADFS-generated OAuth token? Given that both credentials are generated by ADFS, I would think that ADFS would have a way of performing the conversion. Is there an endpoint where I can POST a SAML assertion and get back the OAuth token in return? Any help would be GREATLY appreciated!!
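If ADFS supports it (something that would need to be verified against an ADFS 2016 farm; I cannot confirm it here), the standards-based shape of such an exchange is the OAuth 2.0 SAML bearer assertion grant from RFC 7522, POSTed to the token endpoint. The endpoint path, client ID, and scope below are placeholders:

# exchange a SAML assertion for an OAuth access token (RFC 7522 grant type)
curl -X POST https://adfs.example.com/adfs/oauth2/token \
  -d grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer \
  -d client_id=00000000-0000-0000-0000-000000000000 \
  -d scope=vendor-b-api \
  --data-urlencode assertion="$(base64 -w0 assertion.xml | tr '+/' '-_' | tr -d '=')"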

How do I secure the access token, on Linux, to remote, automated secrets stores like Hashicorp Vault?

Posted: 20 Jan 2022 04:00 AM PST

There seems to be a bit of a "chicken and egg" problem with the passwords to password managers like Hashicorp Vault on Linux.

While researching this for some Linux servers, someone clever asked, "If we're storing all of our secrets in a secrets storage service, where do we store the access secret to that secrets storage service? In our secrets storage service?"

I was taken aback, since there's no point in using a separate secrets storage service if all the Linux servers I'd store the secrets on anyway hold its access token.

For example, if I move my secrets to Vault, don't I still need to store the secrets to access Hashicorp Vault somewhere on the Linux server?

There is talk about solving this in some creative ways, and at least making things better than they are now. We can do clever things like auth based on CIDR or password mashups. But there is still a security trade-off: for example, if a hacker gains access to my machine, they can get to Vault if the access is based on CIDR.

This question may not have an answer, in which case, the answer is "No, this has no commonly accepted silver bullet solution, go get creative, find your tradeoffs bla bla bla"

I want an answer to the following specific question:

Is there a commonly accepted way that one secures the password to a remote, automated secrets store like Hashicorp Vault on modern Linux servers?

Obviously, plaintext is out of the question.

Is there a canonical answer to this? Am I even asking this in the right place? I considered security.stackexchange.com, too, but this seemed specific to a way of storing secrets for Linux servers. I'm aware that this may seem too general, or opinion based, so I welcome any edit suggestions you might have to avoid that.

We laugh, but the answer I get on here may very well be "in Vault". :/ For instance, a Jenkins server (or something else) holds a six-month revocable password that it uses to generate one-time-use tokens; clients then use those tokens to obtain their own ephemeral (session-limited) secret from Vault, which gets them one segment of information.

Something like this seems to be in the same vein, although it'd only be part of the solution: Managing service passwords with Puppet
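To make the "in Vault" joke slightly less circular, one commonly cited pattern (a sketch, not a silver bullet; the role name, TTLs, and CIDR are assumptions) is Vault's AppRole auth with a CIDR-bound, single-use, wrapped SecretID, so nothing long-lived and directly usable sits on disk:

# enable AppRole and define a role whose SecretIDs are single-use,
# short-lived, and only redeemable from the app subnet
vault auth enable approle
vault write auth/approle/role/jenkins \
    token_ttl=20m token_max_ttl=1h \
    secret_id_ttl=60m secret_id_num_uses=1 \
    secret_id_bound_cidrs=10.0.1.0/24
# the RoleID is not secret on its own; a wrapped SecretID is handed
# to the machine at deploy time and must be unwrapped within 60s
vault read auth/approle/role/jenkins/role-id
vault write -wrap-ttl=60s -f auth/approle/role/jenkins/secret-id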

JBoss EAP 6.2 on RHEL 6: ./bin/init.d/jboss-as-standalone.sh hangs when called via SSH

Posted: 20 Jan 2022 02:07 AM PST

I'm using jboss-as-standalone.sh to manage JBoss EAP standalone as a service. I can start/stop the service with "service jboss-as-standalone.sh start/stop" while I'm on a terminal.

But I would like to start JBoss from outside the server via SSH using our Continuous Deployment infrastructure. Therefore I'm issuing a command like this:

ssh root@myserver "service jboss-as-standalone.sh start"  

The server starts up normally but SSH hangs. It seems it isn't able to close the connection because of the background job forked by this command in the script:

daemon --user $JBOSS_USER LAUNCH_JBOSS_IN_BACKGROUND=1 JBOSS_PIDFILE=$JBOSS_PIDFILE SERVER_HOME=$SERVER_HOME $JBOSS_SCRIPT -c $JBOSS_CONFIG 2>&1 > $JBOSS_CONSOLE_LOG &  

Is there any other way to start JBoss as a service that also works over non-interactive (no-tty) SSH connections?
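One workaround I'm considering (just a sketch, untested here) is to fully detach the remote command from the SSH session, so sshd has no open streams keeping the connection alive:

# redirect all three streams and detach, so ssh can return once the init script exits
ssh root@myserver "nohup service jboss-as-standalone.sh start > /dev/null 2>&1 < /dev/null"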

Best regards

Jan

PHP Files being cached by unknown entity

Posted: 20 Jan 2022 03:01 AM PST

I'm hitting a weird cache issue on my server. The project I am working on doesn't have any caching enabled at this time, but the server itself has APC installed (which was set to cache everything by default; this has now been disabled).

The problem is that my old code is still running, and I don't know how to get the amended code to take effect.

I have tried deleting the file entirely; this makes my project fail with a "missing file" error, as it should, but once I upload the new version of the file, it starts serving the old version again.

I've uploaded a uniquely labeled file with apc_clear_cache(); and apc_clear_cache( 'opcode' ); but this didn't appear to help.

I have also commented out APC from loading with PHP, but it still served old files, so I am wondering if there's something underlying that is causing this aggressive caching.

Apache2, PHP, APC, etc. were all installed using Aptitude on Debian Wheezy.

PHP 5.4.4-14+deb7u3 (running under mod_php) Apache 2.2.22

Between each config change and disabling APC I did a complete apache restart.

I've checked the Apache2 module list: no cache modules are loaded, and there are no services such as Varnish running either.

Update

I did some additional testing and added some HTML output before the <?php tag, which does show up, so it seems content outside the PHP tags isn't being cached.

The file that isn't updating is included with include_once(), and disabling APC didn't appear to have any impact on the stale file being served.
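To rule out opcode and stat caches in one go, I could drop in a throwaway script like this (a sketch; which functions exist depends on the extensions actually loaded):

<?php
// clear APC's caches if the extension is loaded
if (function_exists('apc_clear_cache')) {
    apc_clear_cache();
    apc_clear_cache('opcode');
}
// clear Zend OPcache if present
if (function_exists('opcache_reset')) {
    opcache_reset();
}
// drop PHP's stat/realpath cache, which affects include_once() path resolution
clearstatcache(true);
echo "caches cleared\n";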

The problem shows up when trying to use HTML2PDF to generate a .pdf file after form submission:

PHP Fatal error: Uncaught ERROR File : /lib/html2pdf/html2pdf.class.php Line : 1319, Impossible to load the image 'logo.png' thrown in /lib/html2pdf/html2pdf.class.php on line 1319

The new version of the file uses logo.jpg

Setup squid3 proxy server on linux server with 2 ethernet ports

Posted: 20 Jan 2022 04:00 AM PST

I need to set up a squid3 proxy server on my Linux machine with two ethernet ports (eth0 and eth1). eth0 has the IP address 192.168.1.2, assigned by a router that provides internet to the system. eth1 is connected to a switch. I need squid3 to serve the switch through eth1. How should I configure eth1? I don't need the squid3 configuration itself. What should I do?
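If it helps, a minimal Debian-style static configuration for eth1 could look like this (the 192.168.2.0/24 subnet is an assumption; any private range that doesn't overlap with eth0 works):

# /etc/network/interfaces
auto eth1
iface eth1 inet static
    address 192.168.2.1
    netmask 255.255.255.0

Clients on the switch would then use static addresses in that range (or a DHCP server bound to eth1) and point their proxy setting at 192.168.2.1 on whatever port squid listens on (3128 by default).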
